Vasudevan Mukunth is the science editor at The Wire.
Infinite in All Directions is The Wire’s science newsletter. Click here to subscribe and receive a digest of the most interesting science news and analysis from around the web every Monday, 10 am.
Sci-Wi
Starting today, you can follow The Wire’s science coverage – as well as a thoughtful curation of science news and analysis from other publications – on our dedicated Facebook and Twitter accounts for the section, called Sci-Wi. Please like, follow and share widely!
§
Spotting scientists, lazy scientists
Indian scientists are lazy, says CNR Rao:
Bharat Ratna Prof CNR Rao on Wednesday said Indian scientists are “lazy” compared to those in countries like Japan, South Korea and China. “We are generally a lazy lot. If a person is angered by his superiors or something like that happened in Japan, he tends to work for an additional two hours. But in India, we stop working,” he said at a ceremony organized by the Karnataka State Council for Science and Technology, and the department of Information Technology, Biotechnology and Science & Technology to honour scientists and engineers.
Aside from my general displeasure about this man being accorded the prefix ‘Bharat Ratna’ at every mention, Rao has been coming across as a superficial commentator of late. Recently, while speaking at another event, he said that given a population as large as India’s, and making the safe assumption that a fixed fraction of it would be significantly smarter than the rest, it was a tragedy that we still hadn’t spotted the country’s brightest scientists. This might make logical sense to many people but it absolutely should not to educators like Rao. He heads JNCASR and served as the prime minister’s chief scientific advisor from 2004 to 2014. To make India’s research excellence a matter of spotting is to abdicate the responsibility of nurturing these scientists. Who will you spot if you aren’t thinking about the best ways to create them?
And then there’s this example of Japanese scientists working longer hours because they’re pissed with their bosses. What’s wrong with the Japanese? That was my first thought, at least, before I realised I couldn’t disparage Japan: it may well have a system that rewards hard work without bureaucracy getting in the way. We clearly don’t. I can work 10 times as hard as others in some Indian government offices but I sure as hell won’t receive proportionate appreciation for it. Similarly, I can’t expect people to work harder in any other setting if they think they aren’t going to get their dues, and I’d actively discourage them from doing so if it impacted their personal lives. So, as in the previous instance, Rao sounds like he’s simply not thinking things through: to call scientists as a community ‘lazy’ is to abdicate the responsibility of making it easier for them to enjoy the fruits of their labours.
Also, can we please stop importing cross-border solutions for good governance?
§
None for all, all for one
From the Times of India:
Last year, Harish Chand Tiwari, who works at the residence of Prakash Sharma in the Dal Bazar area of Gwalior, moved the SC through advocate Nivedita Sharma, complaining that a BSNL tower illegally installed on a neighbour’s rooftop in 2002 had exposed him to harmful radiation 24×7 for the last 14 years. Radiation from the BSNL tower, less than 50 metres from the house where he worked, afflicted him with Hodgkin’s lymphoma caused by continuous and prolonged exposure to radiation, Tiwari complained. In a recent order, a bench of Justices Ranjan Gogoi and Navin Sinha said, “We direct that the particular mobile tower shall be deactivated by BSNL within seven days from today.” The tower will be the first to be closed on an individual’s petition alleging harmful radiation.
Unbelievable. If the radiation received and transmitted by base station towers really causes cancer, where’s the explosion of cancer rates in urban centres around the world? In fact, data from the US suggests that cancer incidence is actually on the decline (or at least not exploding if you account for population growth) – except for cancers of the lung/bronchus (due to smoking)…
… whereas the number of cell-sites has been surging.
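To put the rates-versus-counts point in concrete terms: raw case counts can keep rising simply because there are more people. A toy calculation – with entirely made-up numbers, not actual US cancer statistics – shows the difference:

```python
# Made-up numbers showing why raw case counts can rise even as the
# incidence *rate* falls once population growth is accounted for.

population_2000 = 280_000_000   # hypothetical population
population_2015 = 320_000_000

cases_2000 = 1_400_000          # hypothetical raw case counts
cases_2015 = 1_550_000

rate_2000 = cases_2000 / population_2000 * 100_000  # cases per 100,000 people
rate_2015 = cases_2015 / population_2015 * 100_000

print(f"2000: {rate_2000:.0f} per 100,000")  # 500 per 100,000
print(f"2015: {rate_2015:.0f} per 100,000")  # ~484 per 100,000
# Counts rose by ~11%, but the per-capita rate actually fell.
```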
Even if we are to give Harish Chand Tiwari the benefit of the doubt, taking a cell site down because one man in its vicinity had cancer seems quite excessive. I don’t think Tiwari has a way to prove it was the cell site alone, and not anything else, that gave him lymphoma. For that matter, how does any study purport to show cancer being caused by one agent exclusively? We speak only in terms of risk and comorbidity even with smoking, the single largest risk factor in modern times. Moreover, none of this has forced us to distance ourselves from the horde of other risk factors – including the pesticides in our food and excessive air pollution – in our daily lives. But through all these stochasticities and probabilities, the SC seems to be imposing a measure of certainty that we’ll never find. And its judgment has set a precedent that will only make it harder to beat down the pseudoscience that stalks irrational fears.
§
The adjacent possible: Biological dynamism
Curious and intriguing ideas from the mind of Stuart Kauffman, a noted theoretical biologist, dating from 2003 but still relevant:
Kauffman asks a question that goes beyond those asked by other evolutionary theorists: if selection is operating all the time, how do we build a theory that combines self-organization (order for free) and selection? The answer lies in a “new” biology, somewhat similar to that proposed by Brian Goodwin, in which natural selection is married to structuralism.
Lately, Kauffman says that he has been “hamstrung by the fact that I don’t see how you can see ahead of time what the variables will be. You begin science by stating the configuration space. You know the variables, you know the laws, you know the forces, and the whole question is, how does the thing work in that space? If you can’t see ahead of time what the variables are, the microscopic variables for example for the biosphere, how do you get started on the job of an integrated theory? I don’t know how to do that. I understand what the paleontologists do, but they’re dealing with the past. How do we get started on something where we could talk about the future of a biosphere?”
“There is a chance that there are general laws. I’ve thought about four of them. One of them says that autonomous agents have to live the most complex game that they can. The second has to do with the construction of ecosystems. The third has to do with Per Bak’s self-organized criticality in ecosystems. And the fourth concerns the idea of the adjacent possible. It just may be the case that biospheres on average keep expanding into the adjacent possible. By doing so they increase the diversity of what can happen next. It may be that biospheres, as a secular trend, maximize the rate of exploration of the adjacent possible. If they did it too fast, they would destroy their own internal organization, so there may be internal gating mechanisms. This is why I call this an average secular trend, since they explore the adjacent possible as fast as they can get away with it. There’s a lot of neat science to be done to unpack that, and I’m thinking about it.”
The longer conversation – published by Edge – is definitely worth a read.
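To get an intuition for the adjacent possible, here’s a toy combinatorial model of my own devising – nothing from Kauffman’s own papers: realised ‘objects’ are sets of primitives, the adjacent possible is every one-step combination not yet realised, and each expansion typically enlarges the frontier of what can happen next.

```python
import random
from itertools import combinations

random.seed(1)

# Each realised 'object' is a frozenset of primitive elements;
# start with four primitives.
realised = {frozenset([i]) for i in range(4)}

def adjacent_possible(current):
    """Everything one combination step away: pairwise unions of
    realised objects that haven't themselves been realised yet."""
    return {a | b for a, b in combinations(current, 2)} - current

for step in range(8):
    frontier = adjacent_possible(realised)
    if not frontier:
        break
    # Expand into the adjacent possible by realising one new combination.
    realised.add(random.choice(sorted(frontier, key=sorted)))
    print(f"step {step}: realised {len(realised)} objects, "
          f"adjacent possible {len(frontier)}")
```

Run it and the frontier grows as the realised set grows – a crude version of Kauffman’s point that expanding into the adjacent possible increases the diversity of what can happen next.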
§
The adjacent possible II: A lesser dimension
There has been speculation that Moore’s law will cease to apply in this century, mostly because of how small we can make silicon components before we’re up against physical laws that actively prohibit us from manipulating materials to do our bidding. But then how about an artificial way out? Gordon Moore formulated his law in 1965 with silicon-based devices in mind. Now, engineers think they see a path forward in shifting to graphene and molybdenum disulfide. From Spectrum:
A three-atom-thick microchip with more than 100 transistors is the most complex microprocessor made from a 2-dimensional material to date, researchers say.
The new device is made of a thin film of molybdenite, or molybdenum disulfide (MoS2), which consists of a sheet of molybdenum atoms sandwiched between two layers of sulfur atoms. A single-molecule layer of molybdenum disulfide is only six-tenths of a nanometer thick. In comparison, the active layer of a silicon microchip is up to about 100 nanometers thick. (A nanometer is a billionth of a meter; the average human hair is about 100,000 nanometers wide.)
Scientists hope two-dimensional materials such as graphene or molybdenite will allow Moore’s Law to continue once it becomes impossible to make further progress using silicon. Whereas graphene is an excellent conductor, making it ideal for use in wiring and interconnections, molybdenite is a semiconductor, which means it can serve in the transistor switches that lie at the heart of electronic circuits.
Out of curiosity: is Moore’s law materials-specific?
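For what it’s worth, the law as commonly stated – transistor counts doubling roughly every two years – is an empirical observation about manufacturing cadence, with no material anywhere in the formula. A back-of-the-envelope sketch, using the usual textbook figures for the Intel 4004:

```python
# Moore's law as commonly stated: transistor counts double roughly
# every two years. Projecting from the Intel 4004 (1971, ~2,300
# transistors) tracks real chips surprisingly well.

def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1991, 2011, 2017):
    print(f"{year}: ~{transistors(year):,.0f} transistors")
    # 2017 comes out near 19 billion -- about where the largest
    # commercial processors actually were.
```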
§
(Engineered) negative mass
Washington State University scientists claim to have created something called ‘negative mass’. It’s actually a Bose-Einstein condensate (of rubidium atoms) that, under the influence of a laser directed in a certain way, accelerates in the opposite direction when you push against it, i.e. towards the pushing force instead of away from it. But there’s something fishy about the way the press release has been crafted, given away by the first line of the paper’s abstract:
A negative effective mass can be realized in quantum systems by engineering the dispersion relation.
(Emphasis added.) First off, this isn’t a natural substance we’re talking about: it won’t form in nature no matter how long you wait. Second: while matter can hypothetically have negative mass, as the press release claims, that remains a disputed hypothesis. The word ‘engineering’ in the abstract also gives away that this isn’t negative mass itself but something that simulates the behaviour of negative mass – sort of like a metamaterial.
And third: the press release claims that the (engineered) negative mass’s behaviour can be used to better understand astrophysical entities like dark energy. This smells a bit like desperation, because the most recent in-the-news description of dark energy as negative mass came through the controversial theories of Erik Verlinde – and they’re far from being accepted as valid.
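On the physics behind the first two points: in quantum systems, the ‘effective mass’ is defined from the curvature of the dispersion relation, m* = ħ²(d²E/dk²)⁻¹, so a band engineered to curve downward gives m* < 0 – and, since F = m*a, acceleration opposite to an applied force. A minimal numerical sketch using a made-up toy band, not the paper’s actual spin-orbit-coupled dispersion:

```python
import numpy as np

hbar = 1.0545718e-34  # reduced Planck constant, J*s

# Toy tight-binding-style band, standing in for the engineered
# dispersion in the actual experiment (all numbers made up).
a = 1e-9                                   # lattice-scale length, m
k = np.linspace(-np.pi / a, np.pi / a, 4001)
E = -1e-22 * np.cos(k * a)                 # band energy, J

# Effective mass: m* = hbar^2 / (d^2 E / d k^2)
curvature = np.gradient(np.gradient(E, k), k)
m_eff = hbar**2 / curvature

i_bottom = len(k) // 2   # k = 0: upward curvature, m* > 0
i_edge = 50              # near the band edge: downward curvature, m* < 0
print(f"m* at band bottom: {m_eff[i_bottom]:.2e} kg")
print(f"m* near band edge: {m_eff[i_edge]:.2e} kg (negative)")
```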
§
Peer-reviewing data
Many top scientific journals are asking researchers to publish their raw data together with their manuscripts, both to encourage replication studies and to let others use the data in their own research. In parallel, journal editors have familiarised themselves with the methods used to test the robustness and quality of data. At the same time, they’re becoming more aware of the need to peer-review the data just the way they peer-review manuscripts. So how do you subject data to peer review? Todd Carpenter on Scholarly Kitchen:
Peer review of data is similar to peer review of an article, but it includes a lot more issues that make the process a lot more complicated. First, a reviewer has to deal with the overall complexity of a research dataset — these can be large and multifaceted information objects. Oftentimes, the data go through a variety of pre-processing and error-cleansing steps that should be monitored and tracked. Some datasets are constantly changing and being added to over time, so the question must be asked, does every new study based on a given dataset need a new review or could an earlier review still apply? To conduct a proper analysis, the methodology of the data collection should be considered, an examination that can go as deep as describing instrument calibration and maintenance. Even after a dataset is assembled, analysis can vary significantly according to the software used to process, render, or analyze it. Review of a dataset would likely require an examination of the software code used to process the data as well. All of these criteria create more work for already burdened reviewers.
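One concrete slice of the ‘constantly changing datasets’ problem is simply knowing whether the data in front of you is the data an earlier review applied to. A minimal sketch of one way to pin that down – content-hashing the dataset at review time (the directory names below are hypothetical):

```python
import hashlib
from pathlib import Path

def dataset_fingerprint(directory):
    """Hash every file in a dataset directory (sorted for
    reproducibility) into one SHA-256 digest. If the digest changes,
    the earlier review no longer applies to the data as-is."""
    digest = hashlib.sha256()
    for path in sorted(Path(directory).rglob("*")):
        if path.is_file():
            digest.update(path.name.encode())
            digest.update(path.read_bytes())
    return digest.hexdigest()

# Hypothetical usage: record the fingerprint when the review is done,
# then compare before reusing that review for a later study.
# reviewed = dataset_fingerprint("survey_data_v1/")
# current  = dataset_fingerprint("survey_data_v1/")
# assert current == reviewed, "dataset changed; review again"
```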
§
Intergalactic GPS and propriety
Brace yourselves: are pulsars evidence of alien engineering? That’s the question astrophysicist Clement Vidal is asking. And he’s asking because he thinks it’s reasonable for an alien intelligence to have been using the pulsed, clock-like emissions of pulsars to set up an intergalactic GPS – a navigation scheme known as XNAV. He also thinks we should consider the possibility that aliens might have built certain kinds of variable-emission stars to use in such a system. Incredible. And then he gets a bit carried away. From his arXiv preprint:
What are the policy issues of SETI-XNAV? It might be preposterous to think about this, but the core issue is: If ETs made this pulsar positioning system, do we have the right to use it without asking permission? Can NASA, ESA, the China Academy of Launch Vehicle Technology or other XNAV actors simply and safely ignore this possibility? What if XNAV belongs to a galactic federation? Can we use it with a peace of mind? What could be the risks of using a galactically engineered tool we didn’t participate in? Do we want to enter the “galactic club” as free riders? Or do we want to be polite, and take the occasion to Message to Extraterrestrial Intelligence, and ask permission to use XNAV? What are the risks and benefits of asking or not asking permission? These questions are open for (space) politicians and ethicists on Earth.
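Etiquette aside, the navigation idea itself is straightforward to first order: each pulsar is a clock sitting in a known direction n, and the offset between observed and predicted pulse arrival times at your location, relative to a reference point, is (n·r)/c. With three or more pulsars you can solve for the position r. A minimal sketch with made-up pulsar directions (real XNAV proposals use well-characterised millisecond pulsars):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

# Made-up unit vectors pointing at three pulsars.
raw = np.array([
    [1.0,  0.2, 0.1],
    [-0.3, 1.0, 0.2],
    [0.1, -0.2, 1.0],
])
directions = raw / np.linalg.norm(raw, axis=1, keepdims=True)

# True position relative to a reference point (e.g. the solar system
# barycentre), in metres: the unknown we want to recover.
r_true = np.array([4.2e9, -1.7e9, 9.0e8])

# Each pulsar's pulses arrive earlier or later than predicted at the
# reference point by (n . r) / c seconds.
delays = directions @ r_true / C

# Recover the position from the observed delays by least squares.
r_est, *_ = np.linalg.lstsq(directions, delays * C, rcond=None)
print(r_est)  # ~[ 4.2e+09 -1.7e+09  9.0e+08]
```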
§
From The Wire
- Scientists in Bengaluru have unravelled an ingenious mechanism behind the phenomenon called brain-sparing, first noticed in the 1970s. It helps explain how undernourished mothers can still give birth to children with healthy brains.
- Glassfrogs have been found to pee on their eggs to protect them. The same study also found that, apart from parental care being widespread in this group of frogs, the females are primary caregivers in many species, contrary to prevailing wisdom.
- Medical technologies don’t evolve in a vacuum. They are driven not only by trends in scientific research but also by business interests and the regulatory environment. Case in point: the complicated history of how MRI was invented – and ‘brought’ to India.
- The Riemann hypothesis, which provides important information about the distribution of prime numbers, was first put forward in 1859. Its solution carries a $1-million reward. Now, the authors of a new paper say they have come close to a solution thanks to the work of none other than the most infamous cat-caretaker in history.
- A study has found that conditions conducive to the spread of malaria are created when projects that cause land-use change and labour migration are kicked off. This effectively means that India’s developmental trajectory engendered malaria – and the country couldn’t ever have escaped having to deal with the disease.
Bonus: In her latest column, The Wire’s public editor Pamela Philipose wrote about why our science coverage is notable.
§
Other bits of interestingness
- 5 important ways Henrietta Lacks changed medical science
- Riddle of why Hitler didn’t use sarin gas remains unsolved
- A cat co-authored a physics paper in 1975
- The evolutionary dynamics of CRISPR gene drives
- ‘Cash is not a sufficient incentive for pregnant women in India to take up free institutional delivery services’
- Why do researchers commit misconduct?
If you enjoyed the newsletter, please share it with a friend. They can look at previous issues here and subscribe here.