A new particle to break the Standard Model?

The Wire
July 2, 2015

Scientists at the Large Hadron Collider particle-smasher have unearthed data from an experiment conducted in 2012 that shows signs of a new particle. If confirmed, its discovery could herald a new period of particle physics research.

On June 2, members of the ATLAS detector collaboration uploaded a paper to the arXiv pre-print server discussing the possible sighting of a new particle, which hasn’t been named yet. If the data is to be believed, it weighs as much as about 2,000 protons, making it about 12 times heavier than the heaviest known fundamental particle, the top quark. It was spotted in the first place when scientists found an anomalous number of ‘events’ recorded by ATLAS at a particular energy scale – more than predicted by the Standard Model set of theories.
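For the technically curious, these comparisons are easy to reproduce with a back-of-the-envelope conversion. The sketch below, in Python, uses standard approximate particle masses in energy units; the numbers are illustrative and not taken from the ATLAS paper.

# Rough mass comparisons using E = mc^2, with masses quoted in energy units (GeV).
# Approximate values: proton ~ 0.938 GeV, top quark ~ 173 GeV.
PROTON_MASS_GEV = 0.938
TOP_QUARK_MASS_GEV = 173.0

candidate_mass_gev = 2000.0  # the ~2 TeV bump reported by ATLAS, expressed in GeV

print(candidate_mass_gev / PROTON_MASS_GEV)     # ~2,100 proton masses
print(candidate_mass_gev / TOP_QUARK_MASS_GEV)  # ~12 times the top quark's mass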

Actually, the Standard Model is more like a collection of principles and rules that dictate the behaviour of fundamental particles. Since the 1960s, it has dominated particle physics research but of late has revealed some weaknesses by not being able to explain the causes behind some of its own predictions. For example, two physicists – Peter Higgs and Francois Englert – used the Standard Model to predict the existence of a Higgs boson in 1964. The particle was found at the LHC in 2012. However, the model has no explanation for why the particle is so much lighter than its own calculations suggest it should be.

If its existence is confirmed, the new probable-particle sighted by ATLAS could force the Standard Model to pave the way for a more advanced, and comprehensive, theory of physics and ultimately of nature. However, proving that it exists could take at least a year.

The scientists found the probable-particle in data that was recorded by a detector trained to look for the decays of W and Z bosons. These are two fundamental particles that mediate the weak nuclear force that’s responsible for radioactivity. A particle’s mass is equivalent to its energy, which every particle wants to lose if it has too much of it. So heavier particles often break down into smaller clumps of energy, which manifest as smaller particles. At the 2 TeV energy scale, scientists spotted a more-than-predicted clumping of energy in the W/Z channel – often the sign of a new particle.

The chance of the telltale spike in the data being due to a fluke or impostor event, on the other hand, was 0.00135 (with 0 being ‘no chance’ and 1, certainty) – enough to claim evidence but insufficient to claim a discovery. For the latter, that chance will have to drop to 0.000000287 or lower. This is what scientists intent on zeroing in on the particle will be gunning for in the future.
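Those two probabilities correspond to the particle physics conventions of ‘three sigma’ (evidence) and ‘five sigma’ (discovery). A small sketch in Python, using SciPy’s standard normal distribution, shows where the numbers come from; it simply reproduces the convention and is not drawn from the ATLAS paper.

from scipy.stats import norm

# One-sided tail probability of a standard normal distribution: the chance of a
# background fluctuation producing a signal at least this many standard
# deviations (sigma) above the expected level.
print(norm.sf(3))  # ~0.00135     -> the 'evidence' threshold
print(norm.sf(5))  # ~2.87e-07    -> the 'discovery' threshold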

The LHC shut down in early 2013 for upgrades, waking up in May 2015 to smash protons together at almost twice the energy and detect them with twice the sensitivity as before. The ATLAS data about the new particle was gathered in 2012, when the LHC was still smashing protons at a collision energy of 8 TeV (more than 8,000 proton-masses). In its new avatar, it will be smashing them at 13 TeV and with increased intensity as well. As a result, rarer events like this probable-particle’s formation could happen more often, making it easier for scientists to spot and validate them.

If the probable-particle is unfortunately found to have been something else, particle physicists will be disappointed. Since the LHC kicked off in 2009, physicists have been eager to find some data that will “break” the Standard Model and expose cracks in its foundations – cracks that could be exploited to build a theory able to explain the Higgs boson’s mass, or why gravity is so much weaker than the other three fundamental forces.

The ATLAS team acknowledges a paper published last year by members of the CMS collaboration, also at the LHC, that found similar but weaker signs of the same particle.

SpaceX rocket blows up but let’s remember that #SpaceIsHard

The Wire
June 30, 2015

“… it’s not all or nothing. We must get to orbit eventually, and we will. It might take us one, two or three more tries, but we will. We will make it work.” Elon Musk said this in a now-famous interview to Wired in 2008 when questioned about what the future of private spaceflight looked like after SpaceX had failed three times in a row trying to launch its Falcon 1 rocket. At the close, Musk, the company’s founder and CEO, asserted, “As God is my bloody witness, I’m hell-bent on making it work.”

Fast forward to June 28, 2015, at Cape Canaveral, Florida, 1950 IST. There’s a nebula-like cloud of white-grey smoke hanging in the sky, the signature of a Falcon 9 rocket that disintegrated minutes after takeoff. @SpaceX’s Twitter feed is MIA while other handles are bustling with activity. News trickles in that an “overpressurization” event occurred in the rocket’s second stage, a liquid-oxygen fueled motor. A tang of resolve hangs in conversations about the mishap – a steely reminder that #SpaceIsHard.

In October 2014, an Antares rocket exploded moments after lifting off, crashing down to leave the Mid-Atlantic Regional Spaceport on Wallops Island, Virginia, unusable for months. In April 2015, a Progress 59 cargo module launched by the Russian space agency’s Soyuz 2-1A rocket spun wildly out of control and fell back toward Earth – or rather, was incinerated in the atmosphere.

All three missions – Orbital’s, Roscosmos’s and SpaceX’s – were resupply missions to the International Space Station. All three missions together destroyed food and clothing for the ISS crew, propellants, 30 small satellites, spare parts for maintenance and repairs, a water filtration system and a docking port – at least. The result is that NASA’s six-month buffer of surplus resources on the ISS has now been cut back to four. The next resupply mission, on July 3, will be Roscosmos’s first since its April accident, followed by a Japanese mission in August.

But nobody is going to blame any of these agencies overmuch – rather, they shouldn’t. Although hundreds of rockets are successfully launched every year, what’s invisible on TV is the miracle of millions of engineering-hours and tens of thousands of components coming together in each seamless launch. And like Musk said back in 2008, it’s not all-or-nothing each time people try to launch a rocket. Accidents will happen because of the tremendous complexity.

SpaceX’s Falcon 9 launch was the third attempt in six months to reuse the rocket’s first stage. It’s an ingenious idea: to have the first stage robotically manoeuvre itself onto a barge floated out in the Atlantic after performing its duties. Had the attempt succeeded, SpaceX would’ve created history. Being able to reuse such an important part of the rocket reduces launch costs – possibly by a factor of a hundred, Musk has claimed.

Broad outlay of how SpaceX’s attempt to recover Falcon’s first-stage will work. Credit: SpaceX

In September 2013, the first stage changed direction, reentered Earth’s atmosphere and made a controlled descent – but landed too hard in the water. A second attempt in April 2014 played out a similar narrative, with the stage getting broken up in hard seas. Then, in January 2015, an attempt to land the stage on the barge – called the autonomous spaceport drone ship – was partially successful. The stage guided itself toward the barge in an upright position but eventually came down too hard. Finally, on June 28, a yet-unknown glitch blew up the whole rocket 2.5 minutes after launch.

The Falcon 9’s ultimate goal is to ferry astronauts into space. After retiring its Space Shuttle fleet in 2011, NASA has had no vehicle to send American astronauts into space from American soil, and currently coughs up $70 million to Roscosmos for each seat. As a remedy, in September 2014 it awarded contracts to SpaceX and Boeing to build human-rated rockets fulfilling the associated, and stringent, criteria. The vehicles have until 2017 to be ready. So in a way, it’s good that these accidents are happening now, while the missions are uncrewed (and the ISS is under no real threat of running out of supplies).

June 28 was also Musk’s 44th birthday. On behalf of humankind, and in thanks to his ambitions and perseverance, someone buy the man a drink.

Want anonymity on the Internet? ICANN thinks you can’t

The Wire
June 29, 2015

Your face. When you commit murder and someone sees your face, you’re given away. Even if it was your evil identical twin who did it, the police now know where to start looking. A person’s face is one of the fundamental modes of physical identity, and its ability to correspond to a unique individual is matched in precision only by more sophisticated ID-ing techniques like DNA-profiling and fingerprints. However, life on the Internet obviates the need for any of these techniques not just because virtual murder isn’t (yet) a crime but also because identity on the web can be established using non-physical information – like an encrypted password.

Generating this information is remarkably easy, cost-effective, and non-intrusive – to the extent that a person can have multiple identities on the web. This has proven both good and bad, but far more good than bad. The fair use of multiple identities, many of which are typically redundant, is what has made whistleblowing possible and encouraged satire. Being a whistleblower or a satirist only in the physical world leaves you perpetually liable to physical retaliation, but on the web, you can be both persons at once – the content citizen and the disenfranchised contractor. At the heart of this realm of human enterprise is the need for the services that provide anonymity to also be anonymous. If not, they will simply leave that proverbial trail of blood.

It is this need for anonymity that a new policy document from the Internet Corporation for Assigned Names and Numbers tries to eliminate.

ICANN is an organisation based out of Los Angeles and responsible for managing the technical infrastructure of the Internet. Its policy document on the subject, published on the web on May 5, is open for public comments. After that, a Working Group will “review” the comments and prepare a final report due July 21 this year. And if the final report speaks against anonymisation services like proxies that provide a layer of opacity so a user’s information doesn’t show up in a search, a lot of websites are going to be in trouble. In fact, while the proposal is targeted at “commercial” websites, the World Intellectual Property Organisation considers websites that run ads to be commercial. This does little to limit the dragnet in the way it deserves to be limited, because it endangers even harmless blogs that happen to run ads.

One of the simplest ways to look for information about who owns a domain is to perform a WHOIS lookup. For example, looking up theladiesfinger.com throws up the following information about the domain:

Domain Name: THELADIESFINGER.COM
Registrar URL: http://www.godaddy.com
Registrant Name: Registration Private
Registrant Organization: Domains By Proxy, LLC
Name Server: NS17.DOMAINCONTROL.COM
Name Server: NS18.DOMAINCONTROL.COM
DNSSEC: unsigned
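For the curious, a lookup like the one above can be reproduced with any WHOIS client. The sketch below, in Python, queries the .com registry’s WHOIS server directly over the protocol’s standard port 43; it is a minimal illustration, not a production tool.

import socket

def whois_lookup(domain, server="whois.verisign-grs.com", port=43):
    """Perform a WHOIS query (RFC 3912): open a TCP connection to the
    server's port 43, send the domain name, and read the reply."""
    with socket.create_connection((server, port), timeout=10) as sock:
        sock.sendall((domain + "\r\n").encode())
        response = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            response += chunk
    return response.decode(errors="replace")

print(whois_lookup("theladiesfinger.com"))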

At the time of purchasing a domain, some registrars give the purchaser an option to pay a fee and mask these details from showing up on lookups. If someone really wants to access them, they’d have to get a court order. The ICANN proposal wants to abolish this option. Effectively, this removal of discretionary access is tantamount to denying what’s increasingly being called a new fundamental right: the right to encryption, and with it the right to anonymity. Without it, domains like theladiesfinger.com could be susceptible to the increased harassment that they’ve been able to easily dodge until now*.

An entertainment-industry lobby called the Coalition for Online Accountability has been rooting for the proposal – for it will unlock access to private registrations that make up at least 20% of all domains on the web, many of which, according to the COA, deserve to be shut down for trading in pirated content. Its argument against retaining the private registrations system is that the authorities in many foreign countries often aren’t cooperative when investigating intellectual property theft. In fact, the coalition seems very eager to push through the proposed policy, going by a testimony it submitted on May 13, 2015, stating that “if a satisfactory accreditation system cannot be achieved in the near future within the ICANN structure, it would be timely and appropriate for Congress to consider whether a legislative solution is feasible”.

Tens of thousands of comments have been submitted to date, and most of them speak against removing private registrations for commercial websites. A bulk of them also use the same language – hopefully the result of a targeted campaign and not astroturfing.

Dear ICANN –

Regarding the proposed rules governing companies that provide WHOIS privacy services (as set forth in the Privacy and Policy Services Accreditation Issues Policy document):

I urge you to respect internet users’ rights to privacy and due process.
– Everyone deserves the right to privacy.
– No one’s personal information should be revealed without a court order, regardless of whether the request comes from a private individual or law enforcement agency.

Private information should be kept private. Thank you.

The last day to submit comments is July 7. They can be emailed to: comments-ppsai-initial-05may15@icann.org.

*There are many groups on the web that, on the face of it, just don’t like women doing things. While a Pew Research Centre study found that men are “somewhat more likely than women to experience at least one of the elements of online harassment”, the intensity of harassment has been greater toward women.

How small universities can game rankings

The Wire
June 28, 2015

Gandhi Bhawan at Panjab University. Credit: Wikimedia Commons

A recent survey by the influential Nature science journal ranked Panjab University first in India, with the Tata Institute of Fundamental Research a close second, based on the number of citations of papers stemming from them. Panjab University is a public university, and its place of pride in the rankings raised many eyebrows. Its contributions to the wider scientific community, as well as to government projects, are relatively less known than those of, say, the Indian Institute of Science, which boasts of a far broader and more visible body of work while still being ranked a few rungs below.

A good placement on these rankings invites favourable interest toward the universities, attracting researchers and students as well as increasing opportunities for raising funds. At the same time, using a single number to rank universities carries its own dangers – much like using the impact factor to judge a researcher’s work has been derided for its inability to account for qualitative contributions. P. Sriram, the dean of administration at IIT-Madras, suggests that universities could be biasing rankings in their favour thanks to technical loopholes. They would still be doing good work, he says, but the methods of organizations like Nature will remain blind to the inequity of placing some universities farther down the list than they deserve to be.

For example, Panjab University has a strong physics department that’s been associated with the Large Hadron Collider experiments in Europe. In the context of scientific publishing, these experiments are known for including the name of every member of the collaboration as an author on all papers based on their results. In May 2015, for example, the ATLAS and CMS collaborations published a 33-page article in the journal Physical Review Letters with a combined authorship of 5,154 – a world record. If a team from University X was part of this group, then the paper counts toward X’s research output for that year.

“Looking at the example of Panjab University, the contributors to the ATLAS collaboration are unquestionably respected researchers,” Prof. Sriram said, “but the issue is whether their contributions should be counted against the ATLAS collaboration or Panjab University.”

He recommended that these ‘mega-author’ papers be considered more carefully when assembling university rankings, especially when a university produces far fewer papers outside such collaborative work. “If the volume of publications and citations outside of the collaboration is not significant, what is reported as the institutional bibliometric information is actually the bibliometric of the collaborations,” he clarified.

Indeed, the rankings – like Nature’s or the more widely used Times Higher Education list – have become susceptible to being gamed. It isn’t very hard for any university to become part of a sufficient number of international collaborations “as the scientific requirements are fairly modest”. Prof. Sriram explained that such universities’ rankings would then give the impression of them being far ahead of even the Massachusetts Institute of Technology while in reality they would be far behind, thanks to the mistake of conflating citations with research quality.

A telling sign of this emerged in the Times Higher Education report published in June 2015. It showed that Panjab University scored the lowest for research among all the Asian universities surveyed, clocking 10.5. Overall, it was still able to come out on top with a total score of 84.4 thanks to better numbers for teaching, industrial income and international outlook. According to Prof. Sriram’s calculations as well, to quote from a critical letter (paywall) he sent to Nature: “about 20% of the publications attributed to the highly rated Panjab University have long author lists and contribute almost two-thirds of the citations. Excluding these papers reduces Panjab University’s citation impact ratio from 1.4 to 0.7, causing it to drop out of [Nature’s] top ten”.
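Prof. Sriram’s back-of-the-envelope logic is easy to reproduce. The sketch below, in Python, uses the rough shares quoted in his letter rather than the actual SCOPUS counts, so the result differs slightly from his exact figure, but the mechanism is the same.

# Illustrative only: rough shares quoted in Prof. Sriram's letter, not real counts.
impact_with_collab = 1.4       # citation impact ratio including mega-author papers
collab_paper_share = 0.20      # ~20% of papers come from large collaborations...
collab_citation_share = 2 / 3  # ...but carry almost two-thirds of the citations

# The impact ratio scales as citations-per-paper, so excluding the collaboration
# papers rescales both the citation count and the paper count.
impact_without_collab = impact_with_collab * (1 - collab_citation_share) / (1 - collab_paper_share)
print(round(impact_without_collab, 2))  # ~0.58, in the same ballpark as Sriram's 0.7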

Even so, the joust for getting placed well in rankings is only a symptom of a larger problem: How do you evaluate research output in a country? The Nature ranking was based on citations data in the SCOPUS database, filtered by Indian institutions that had produced at least 2,000 papers between 2010 and 2014. The Times Higher Education rankings are based on the Thomson-Reuters Web of Science database. For Prof. Sriram, neither is able to account for the contexts in which the universities function.

“If a common man asks me what I have done in return for his tax money, I can tell him about the work I’ve done for the government,” Prof. Sriram said, adding that such services should place him and his institution above those doing lesser or no such work in the eyes of his funders. In fact, the Ministry of Human Resource Development has set up a small committee headed by Bhaskar Ramamurthi, Director at IIT-Madras, to develop an India-specific ranking system for just this purpose.

Beijing’s buildings make for bad breathing but the news from India is worse

The Wire
June 27, 2015

Breathing in Beijing could be like passive smoking without the short-lived pleasure of a tobacco-high. Writing in The Guardian in December 2014, Oliver Wainwright described an atmosphere akin to one in the aftermath of a nuclear explosion – instead of radioactive particles and clouds of ash, China’s capital city was enveloped in a dense suspension of smoke and dust that refused to blow away. The US Embassy in the city has an air-quality monitor that automatically tweets its readings, and for more than two years now, it has been pinging “Unhealthy”. In January 2013, it briefly went off the charts. During the Beijing marathon last year, many dropped out so they wouldn’t have to pant in the smog.

In fact, it was a sports-related concern back in 2006 that prompted officials to act. Chinese and American scientists were trying to understand how a worsening atmosphere could affect performance in the Beijing Summer Olympics, which was two years away.

They found that fine particulate emissions from the nearby Hebei and Shandong Provinces and the Tianjin Municipality contributed 50-70% of Beijing’s overhang. To improve air quality, this meant officials couldn’t focus simply on emissions within the city but had to address the region as a whole, considering the winds carrying the most pollutants flow into Beijing from the south and the southeast. However, the same administration that has committed to reducing coal-burning in and around the city by 2.6 million tonnes by 2017 will be disappointed to find that a more immovable hurdle now stands in its way.

Data from NASA’s QuikScat satellite show the changing extent of Beijing between 2000 and 2009 through changes to its infrastructure. Credit: NASA/JPL-Caltech

NASA scientists used data from a space-borne radar to study how Beijing has grown, and how that has affected the patterns of winds blowing in the region. While the city has been expanding in all directions, large-scale constructions are still about to begin in some areas while in others, towering buildings are already silhouetted against the skyline. The lead scientist, Mark Jacobson from Stanford University, developed a technique to see how the wind blows around these areas.

He found that Beijing has acquired an outcrop of buildings that traps air within the city, caging it and preventing it from escaping as easily as it would have if the buildings hadn’t been there. Jacobson stated, “Buildings slow down winds just by blocking the air, and also by creating friction.” And because more buildings cover up more of the soil beneath them, there’s less water evaporating than before, heating up the ground. The result is that there are now parts of Beijing where the air is cooking its own dirt, within a dome circumscribed by slowed winds.

The scientists write in their paper: “The astounding urbanisation … created a ring of impact that decreased surface albedo, increased ground and near-surface air temperatures … and decreased the near-surface relative humidity and wind speed.” The paper was published in the Journal of Geophysical Research on June 19. According to them, even if a city didn’t allow any sources of pollution to operate within its limits, “not even a single gas-powered car”, large structures and winds would result in similar impacts on the atmosphere – leave alone what’s happening in New Delhi.

It’s worse in India

A WHO survey in 2014 found that 13 of the 20 cities with the worst air on the planet were in India. The Economist used data from the survey to estimate that every year 1.6 million Indians lose their lives to the plummeting air quality. The problem was, and is, worst in the national capital, whose PM2.5 measure stands at 153. To compare, Beijing’s has been hovering between 100 and 135 in the last few days. PM2.5 refers to solid and liquid particulate matter smaller than 2.5 microns. These particles are able to sink deep into the lungs and cause lung and heart diseases that can be fatal, so their levels are used by the WHO as indicators of air quality.

To the southeast of Delhi are two other heavily polluted cities: Lucknow (PM2.5 100) and Gwalior (PM2.5 144), while further down in that direction is Patna (PM2.5 149). All four cities have been rapidly urbanizing, often at a rate that the region’s electricity generation and distribution system hasn’t been able to keep up with. Many residents have been able to afford diesel generators for auxiliary power during power-cuts, and the fumes from those generators have also been lowering air quality, while their ozone emissions are among the leading reasons why rice- and wheat-crop yields have been falling, too. However, as with Beijing, the cities are also part of a more ‘regional’ assault.

Over 40% of Indonesia’s greenhouse gas emissions are due to peat smoke. Peat is a form of partially decomposed vegetable matter, and the archipelago is home to swaths of peatland that are burnt every year to clear space for the profitable oil palms that fuel a $50 billion industry in the country. As Mike Ives reported in January 2015, the resultant smoke contains large amounts of PM2.5 that are blown toward the mainland, carried by currents that then blow across southeast Asia.

Similarly and closer home, crop-burning is widespread in Punjab, Haryana and Uttar Pradesh despite the fact that the activity is prohibited by law in these states. They border New Delhi to the north, south and east. The crop-burning came to light recently when images taken by the NASA Aqua satellite were released, in November 2013. They showed a large cloud of smoke floating over western Uttar Pradesh, “obscuring the satellite’s view of cities such as … Lucknow and Kanpur”. The petitioner who took the matter to the National Green Tribunal, Vikrant Tongad, later alleged that had the cloud not wafted over to Delhi to afflict the people of the city, the government wouldn’t have bothered to check on crop-burning in the three states. In rural areas, the issue is compounded by the widespread use of fuelwood for cooking and heating.

A photo taken by the NASA Aqua satellite shows a pall of smoke hanging over Uttar Pradesh as a result of crop-burning. Credit: NASA

While Beijing may have a better grip on air pollution control than New Delhi does, its problems are indicative of those facing the world’s growing centres of urbanization. As much as civil engineers and planners try to accommodate gardens and lakes into their ideas of the environmentally perfect city, the winds of change will continue to blow in just as strongly from the farms of Punjab, the power plants of Shandong and the peatlands of Indonesia. The problem speaks to the greater challenge of being environmentally conscious about all the developmental projects we undertake, instead of thinking about emissions only in terms of the cars in our cities.

On the need for the India-based Neutrino Observatory

A prototype of the ICAL detector at TIFR. Credit: TIFR

“I bet @1amnerd disagrees with this” was how Kapil Subramanian’s piece in The Hindu today was pointed out to me on Twitter. Titled ‘India must look beyond neutrinos’, the piece examines if India should be a “global leader in science” and if investing in a neutrino detector is the way to do it. A few days ago, former Indian President Abdul Kalam and his advisor Srijan Pal Singh had penned a piece, also in The Hindu, about how India could do with the neutrino detector planned to be constructed in Theni, Tamil Nadu. While I wrote a piece along the lines of Kalam’s (again, in The Hindu) in March 2014, I must admit I have since become less convinced of an urgent need for the detector, entirely due to administrative reasons. There are some parts of Subramanian’s piece that I disagree with nonetheless, and in fact I admit I have doubts about my commitment to whatever factions are involved in this debate. Here’s the breakdown.

To raise the first question [Why must India gain leadership in science?] is to risk being accused of Luddite blasphemy.

This tag about “leadership in science” must be dropped from the INO debates. It is corrupting how we are seeing this problem.

How can you even question the importance of science we’ll be asked; if pressed, statistics and rankings of the poor state of Indian science will be quoted. We’ll be told that scientific research will lead to economic growth; comparisons with the West and China will be drawn. The odd spin-off story about the National Aeronautics and Space Administration (NASA) or the Indian Space Research Organisation will be quoted to demonstrate how Big Science changes lives and impacts the economy. Dr. Kalam and Mr. Singh promise applications in non-proliferation and counter terrorism, mineral and oil exploration, as well as in earthquake detection. But there has been a long history of the impact of spin-offs being exaggerated; an article in the journal of the Federation of American Scientists (a body whose board of sponsors included over 60 Nobel laureates) calculated that NASA produced only $5 million of spin-offs for $65 billion invested over eight years.

This is wrong. The document in question says $55 billion was invested between 1978 and 1986 and the return via spin-offs was $5 billion, not $5 million. Second, the document itself states that as long as it considered only the R&D spending between 1978 and 1986, the ROI was 4x ($10 billion for $2.5 billion), but when it considered the total expenditure, the ROI dropped to 0.1x ($5 billion for $55 billion). Moreover, government ROI should be calculated differently from ROI on private investments: why would anyone count overall expenditure, which includes capital expenditure, operational expenses, legal fees and HR? Even though it is impossible to have an R&D facility without those expenses, NASA doesn’t have a product to sell either.

Update: The Hindu has since corrected the figure from $5 million to $5 billion.

If such is the low return from projects which involve high levels of engineering design, can spin-offs form a plausible rationale for what is largely a pure science project? The patchy record of Indian Big Science in delivering on core promises (let alone spin-offs) make it difficult to accept that INO will deliver any significant real-world utility despite claims. It was not for nothing that the highly regarded Science magazine termed the project “India’s costly neutrino gamble”.

That second sentence – about the patchy record of Indian Big Science – is probably going to keep us from doing anything at all, leaving us to stick perpetually with only the things we’re good at. In fact, if we’re concerned about deliverables, let’s spend a little more and build a strongly accountable system instead of calling for less spending and more efficiency. And while it wasn’t for nothing that Science magazine called it a costly gamble, it also stated, “As India’s most expensive basic science facility ever, INO will have a profound impact on the nation’s science. Its opening in 2020 would mark a homecoming for India’s particle physicists, who over the last quarter-century dispersed overseas as they waited for India to build a premier laboratory. And the INO team is laying plans to propel the facility beyond neutrinos into other areas, such as the hunt for dark matter, in which a subterranean setting is critical.”

Even if it delivers useful technology, the argument that research spurs economic growth is highly suspect. As David Edgerton has shown, contrary to popular perception, there is actually a negative correlation between national spending on R&D and national GDP growth rates with few exceptions. This correlation does not, of course, suggest that research is a drag on the economy; merely that rich countries (which tend to grow slowly) spend more on science and technology.

Rich countries spend more – but India is spending too little. Second, the book addressed the UK’s research and productive capacity – India’s capacities are different. Third, David Edgerton wrote that in a book titled Warfare State: Britain, 1920-1970, addressing research and manufacturing capacities during the Second World War and the Cold War that followed. These were periods of building and then rebuilding, and were obviously skewed against heavy investments in research (apart from in disciplines relevant to defense and security). Fourth, Edgerton’s contention is centered on R&D spending beyond a point and its impact on economic growth because, at the time, Britain had one of the highest state expenditures on R&D in the European region yet one of the lowest growth rates. His call was to strike a balance between research and manufacturing – theory and prototyping – instead of over-researching. As he writes of Sir Solly Zuckerman, Chairman of the Central Advisory Council for Science and Technology (in 1967), in the same book,

[He] argued, implicitly but clearly enough, that the British government, and British industry, were spending too much on R&D in absolute and relative terms. It noted that ‘a high level of R&D is far from being the main key to successful innovation’, and that ‘Capital investment in new productive capacity has not … been matching our outlays in R&D’.

In India, the problem is at both ends of this pipe: insufficient and inefficient research on the one hand, due to a lack of funds among various complaints, and insufficient productive capacity, as well as incentive, on the other for realizing that research. Finally, if anyone expects one big science experiment to contribute tremendously to India’s economic growth, then they can also expect Chennai to have snowfall in May. What must happen is that initiatives like the INO must be (conditionally) encouraged and funded before we concern ourselves with over-researching.

Thus, national investment in science and technology is more a result of growing richer as an economy than a cause of it. Investment in research is an inefficient means of economic growth in middle income countries such as India where cheaper options for economic development are plentiful. Every country gets most of its technology from R&D done by others. The East Asian Tigers, for example, benefitted from reverse engineering Western technologies before building their own research capabilities. Technologies have always been mobile in their economic impact; this is more so today when Apple’s research in California creates more jobs in China than in the United States. Most jobs in our own booming IT sector arose from technological developments in the U.S. rather than Indian invention.

Subramanian makes a good point: that poor countries can benefit from rich countries. Apple gets almost all – if not all – of its manufacturing done in China; that’s thousands of jobs created in China and, implicitly, lost in the USA. But this argument overlooks what Apple has done for California, where the technology giant pays taxes, where it creates massive investment opportunities, where it bedecks an entire valley renowned for its creative and remunerative potential. In fact, it wouldn’t be remiss to say the digital revolution that the companies of Silicon Valley were at the forefront of was largely responsible for catapulting the United States to the status of a global superpower after the Cold War.

It may have suited Subramanian to instead have quoted the example of France trying to recreate a Silicon Valley of its own in Grenoble, and failing, illustrating how countries need to stick to doing what they’re best at, at least for the moment. (First) Then again, this presupposes India will not be good at managing a Big Science experiment – and I wouldn’t dispute the skepticism much because we’re all aware how much of a bully the DAE can be. (Second) At the same time, we must also remember that we have very few institutions that do world-class work and are at the same time free from bureaucratic interventions. The first, and only, institution that comes to mind is ISRO, and it is today poised to reach for blue-sky research only after having appeased the central government for over five decades. One reason for its enviable status is that it comes under the Department of Space. These two departments – Space and Atomic Energy – are more autonomous because of the histories of their establishment, and I believe that in the near future, no large-scale scientific program that’s not under the purview of these two departments can come up and hope to be well-managed.

(Third) There is also the question of initiative. My knowledge at this point is fuzzy; nonetheless: I believe the government is not going to come up with research laboratories and R&D opportunities of its own (unless the outcomes are tied to defense purposes). I would have sided with Subramanian had it been the government’s plan to come up with a $224 million neutrino detector at the end of a typically non-consultative process. But that’s not what happened – the initiative arose at the TIFR, Mumbai, and MatScience, Chennai. Even though they’re both government-funded, the idea of the INO didn’t stem from some hypothetical need to host a large experiment in India but from physicists looking to complement a strong theoretical research community in the country.

Is the INO the best way forward for Indian science?

One may cite better uses (sanitation, roads, schools and hospitals) for the $224 million that is to be spent on the most expensive research facility in Indian history; but that argument is unfashionable (and some may say unfair). However, even if one concedes the importance of India pursuing global leadership in scientific research, one may question if investing in the INO is the best way to do so.

Allocation of resources

Like many other countries, India has long had a skewed approach to allocating its research budget to disciplines, institutions and individual researchers; given limited resources, this has a larger negative impact in India than in the rich countries. Of the Central government’s total research spend in 2009-10, almost a third went to the Defence Research and Development Organisation, 15 per cent to the Department of Space, 14 per cent to the Department of Atomic Energy (which is now in-charge of the INO project) and 11 per cent to the Indian Council of Agricultural Research. The Department of Science, which covers most other scientific disciplines, accounted for barely 8 per cent of the Central government’s total R&D spending. Barely 4 per cent of India’s total R&D spending took place in the higher education sector which accounts for a large share of science and technology personnel in the country. Much of this meagre spending took place in elite institutes such as the IITs and IISc., leaving little for our universities where vast numbers of S&T professors and research scholars work.

Spending on Big Science has thus been at the cost of a vibrant culture of research at our universities. Given its not so insubstantial investment in research, India punches well below its weight in research output. This raises serious questions as to whether our hierarchical model of allocating resource to research has paid off.

Subramanian’s right, but argues from the angle that government spending on science will remain the same and that what’s spent should be split among all disciplines. I’m saying that spending should increase for all fields, that developments in one field should not be held back by the slow rate of development in others, and that we should ensure ambitious science experiments go forward alongside increased funding for other research. In fact, my overall dispute with Subramanian’s opinions is centered on the concession that there are two broad models of economic development involved in this debate – whether a country should only do what it can be truly competitive in, or whether it should do all it can to be self-sufficient and protect itself. I believe Kapil Subramanian’s rooting for the former idea and I, for the latter.

It may be argued that to gain leadership in science, money is best spent in supporting a wide range of research at many institutions, rather than investing an amount equivalent to nearly 16 per cent of the 2015-16 Science Ministry budget in a very expensive facility like INO designed to benefit a relatively small number of scientists working in a highly specialised and esoteric field.

We need to invest in nurturing research at the still-struggling new IITs (and IISERs) as well as increase support to the old IITs (and IISc). More generally, we need to allocate public resources for research more fairly (though perhaps not entirely equitably) to the specialised bodies and educational institutions, including the universities. Besides raising the overall quality and quantity of our research output, this will allow students to experience being taught by leaders in their discipline who would not only inspire the young to pursue a career in research, but also encourage the small but growing trend of the best and the brightest staying back in India for their doctorate rather than migrating overseas.

Unquestionably true. We need to increase funding for the IITs, IISERs and the wealth of other centrally funded institutions in our midst, as well as pay our researchers and technicians more. However, what Subramanian’s piece overlooks is that particle physics research in the country – admittedly an esoteric discipline, in that its contribution to our daily lives is nowhere as immediate as that of genetics or chemical engineering – has managed to become somewhat more efficient, more organized and more collaborative than many other disciplines sharing its complexity. If managed well, the INO project can lead by example. The Science Ministry may have been screwing with its funding priorities since 1991 but that doesn’t mean all that’s come of it has been misguided.

Finally, like I wrote in the beginning: my support for the INO was once at its peak, then declined, and now rests at a plateau. If you’re interested: I’m meeting some physicists who are working on the INO on Monday (June 29), and will try to get them to open up – on the demands made in Subramanian’s piece, on the legal issues surrounding the project, and on what they themselves have to say about government support.

(Many thanks to Anuj Srivas for helping bounce around ideas.)

Pluto-bound probe takes first colour images of the dwarf planet and its moon

The Wire
June 22, 2015

A GIF of Charon orbiting Pluto compiled using images taken by the New Horizons MVIC Color Imager between May 29 and June 3, 2015. Credit: NASA

It may not look like much, but this heavily pixelated GIF image is cause for celebration. It effectively took a robotic space probe more than nine years of travel, and $600 million in manufacturing and operational charges, to obtain. But then again, that’s not why the image is (and must be) celebrated. That privilege goes to the fact that these are the first colour photographs of our most adored dwarf planet. Say hello to an orangeish Pluto, being orbited by a grayish Charon.

Technically, it’s wrong to say Charon orbits Pluto – the two bodies were recently observed by the probe, New Horizons, to be orbiting a point in space called the barycenter. The barycenter is always closer to the larger body, so Pluto’s orbital radius is much smaller than Charon’s (as the GIF below shows).

A GIF showing Pluto and Charon in a binary system, compiled using images taken by the New Horizons MVIC Color Imager. Credit: NASA
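For a rough sense of why the pair behaves like a binary, here is a small sketch in Python computing where the barycenter sits. The masses and separation are approximate textbook values, not New Horizons measurements.

# Two-body barycenter: its distance from Pluto's centre is
# r_pluto = a * m_charon / (m_pluto + m_charon), where a is the separation.
M_PLUTO = 1.30e22       # kg, approximate
M_CHARON = 1.59e21      # kg, approximate
SEPARATION_KM = 19600   # approximate Pluto-Charon distance
PLUTO_RADIUS_KM = 1190  # approximate

r_pluto = SEPARATION_KM * M_CHARON / (M_PLUTO + M_CHARON)
print(round(r_pluto))             # ~2,100 km from Pluto's centre
print(r_pluto > PLUTO_RADIUS_KM)  # True: the barycenter lies outside Pluto itself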

As New Horizons gets closer to Pluto, its images will become sharper, affording humankind its first glimpse of the dwarf planet as it actually looks – not as imaginative illustrators have depicted it over the years. Alex Parker, a planetary astronomer at the Southwest Research Institute, Texas, had computed a “histogram of hues” in April 2015 showing that most people who didn’t use the correct reddish hue when depicting Pluto went for blueish hues.

On July 14, 2015, we’ll have the sharpest images to date of Pluto and Charon, as New Horizons makes the first of its planned flybys – the manoeuvres it was built for – studying the atmospheres and surface characteristics of the two bodies (here’s why it matters). The image resolution then will be down to a few kilometers. Astronomers can’t wait. On June 16, the National Space Society put out an anthemic video about the New Horizons mission, calling Pluto and its moons “the farthest worlds to be explored by humankind”.

The Indus and Ganges-Brahmaputra Basins are drying up faster than we’d like

The Wire
June 22, 2015

To the north, India is flanked by two giant groundwater aquifers – the Indus Basin to the west and the Ganges-Brahmaputra Basin to the east. Between them, they underlie a surface area of over 2.2 million sq. km and water from them supports the livelihoods of about 800 million people. That’s an area the size of the Democratic Republic of the Congo and the population of the United States, Indonesia, Brazil and Malaysia combined. What would happen if it was announced today that these countries’ water supplies were terribly distressed and under threat of running dry? The effect on the world’s economy would be disastrous, not to speak of what this would do to social peace in the affected countries.

If a recent NASA survey is to be believed, the two basins, which support two of the world’s largest agricultural regions including India’s breadbasket, are, in fact, running dry. The Indus Basin is the second-most overstressed on the planet, its water levels falling by 4-6 mm/year. The level in the Ganges-Brahmaputra Basin, which is relatively less stressed, is still falling by 15-20 mm every year. The findings were published last week in two papers in the journal Water Resources Research.

Annual groundwater level decline rates of the 37 largest aquifers. Credit: NASA/JPL

The data supporting these alarming conclusions was collected by the twin Gravity Recovery and Climate Experiment (GRACE) satellites between 2003 and 2013. According to the Jet Propulsion Laboratory, California, which helps operate the satellites, eight of the world’s 37 largest subterranean aquifers are receiving “little to no replenishment” while five more are extremely stressed, “depending on the level of replenishment in each”. The most stressed is the Arabian Aquifer System that’s straddled by the Arabian desert region, and the second-most stressed is the Indus Basin.

Pumping, sedimentation and salinity

The origins of the Indus Basin’s stresses can be traced to 1948, when, after Partition, India cut off water supplies to Pakistan stemming from the three chief tributaries of the Indus river: Ravi, Beas and Sutlej. As a result, the construction of a continuous irrigation system climbed to the top of Pakistan’s priorities as a young and developing nation. When the Indus Water Treaty was signed in 1960 after extensive negotiations, Pakistan moved to replace its inundation canals with canals and barrages that diverted water from the three tributaries as well as the Indus to its farms.

Today, the country possesses the world’s largest contiguous irrigation system, according to a Food and Agriculture Organization report, using 63% of the water from the river and its tributaries. India uses 36% and the remainder is split between Afghanistan and China. These developments have depleted groundwater in the basin, especially rapidly during periods of deficient rainfall and droughts, and have in some years left behind so little water that none reached the sea.

Apart from reducing quantities of water available, the quality of the water is also under threat because of sedimentation and increasing salinity, which are by-products of falling groundwater levels. An influential study conducted by V.M. Tiwari and others had found back in 2009 that the amount of water in the basin fell by 10 trillion litres a year from April 2002 to June 2008. The Indian Ministry of Water Resources announced in 2011 that Punjab and Haryana had annual water overdrafts of 9.89 trillion and 1 trillion litres respectively. Moreover, the Indus river also carries over 290 million tonnes of sediments from the Himalayan and Karakoram ranges every year.

Regulating supply and demand

Because of these additional stresses, rehabilitative efforts have focused on regulating both water supply and demand. India’s efforts to regulate water supply have been widely regarded as successful, especially after the Green Revolution period of 1960-1990 when community-driven initiatives proliferated. An illustrative example is the work of Rajendra Singh, a Ramon Magsaysay award recipient and winner of the Stockholm Water Prize in 2015, whose NGO – Tarun Bharat Sangh – has led the construction of traditional catchment structures called johads to trap rainwater and use it to recharge groundwater in the semi-arid areas of Rajasthan.

However, a problem with planning plagues water supply. As Ramaswamy Sakthivadivel writes in a paper for the International Water Management Institute, the “practice of pumping-induced recharge water outside the command area has had a very negative effect in managing large irrigation systems due to the siphoning of a considerable quantity of water to areas not originally included in the command”. The effects of this practice are exacerbated by increasing population.

While Pakistan has repeatedly complained that India has been violating the Indus Water Treaty by building an overabundance of dams on the Jhelum river, experts have pointed out that, being upstream on the river, India can build however much it wants as long as it doesn’t use more than the 1.50 MAF it’s allowed under the treaty. Of course, any debates surrounding this issue are mired in political controversies, but making matters doubly worse for Pakistan is a fractal image of the controversy playing out within Pakistan itself. Two provinces that use a lot of water from the Jhelum are Punjab and Sindh. However, Punjab is upstream of Sindh and also more politically influential, muscling more of the water into its territory before Sindh can have its share.

Ecological threats in the Ganges Basin

While the Ganges-Brahmaputra Basin faces similar problems – not surprising considering it’s the most densely populated river basin – it faces some unique ecological threats as well.

The Indian government has had its eye on the Ganges part of the basin for the installation of at least six hydroelectric projects. Controversy erupted in February 2015 when the environment ministry informed the Supreme Court that the projects had acquired almost all the necessary certificates and that their construction should begin. As Business Standard reported, experts “had also warned these dams could have a huge impact on the people, ecology and safety of the region, and should not be permitted at all on the basis of old clearances”. Soon after, the ministry conceded that it would let the experts take the final call – before setting up a new committee in June to review the projects once more.

The two GRACE papers come at an opportune time. While American concerns about hydrological issues are centered around the ongoing drought in California, the papers confirm that similar problems are playing out in far-flung parts of the world and that we must do more to secure these once-renewable resources. Jay Famiglietti, senior water scientist at JPL, stated, “Available physical and chemical measurements are simply insufficient.” “We don’t actually know how much is stored in each of these aquifers,” Alexandra Richey, the lead author on both papers, said, adding that estimates “of remaining storage might vary from decades to millennia.” She recommended using invasive probes like drilling to make better estimates.

The GRACE satellites are the result of a collaboration between NASA and the Deutsche Forschungsanstalt fur Luft und Raumfahrt (DLR) in Germany. They measure changes in groundwater levels by mapping the resultant minuscule changes in Earth’s gravity in the region of space directly over the basins. As a result, the measurements are more reliable for larger basins, where the effects are relatively more tangible. The stress levels are measured as a ratio of use to availability, the latter in turn being dependent on recharge rates in proportion to the basins’ relative sizes.

Is Reliance Jio sending unsecured Indian data into China?

The Wire
June 19, 2015

A group of hackers calling itself AnonOps India on Thursday tweeted what it called evidence of the Reliance Jio Chat app transmitting geolocation data to Chinese servers – a potentially illegal act that compromises user privacy because the data is being transmitted through an unsecured connection. The purported exposé was possible after AnonOps members decompiled the binary code running in the app, an intricate piece of reverse engineering which in turn is also illegal.

The group had launched a video on YouTube on June 13 alleging that the Reliance Jio enterprise was engaging in widespread surveillance and privacy violations, and that the group was launching Operation Reliance (#OpStopReliance) in an effort to corroborate its allegations. The latest volley in this campaign is the geolocation data transmission.

AnonOps India posted screenshots to Facebook, Twitter and its Tumblr showing data – not necessarily geolocation data – being sent to Chinese IP addresses via an HTTP connection. Trak.in had reported that the addresses were 124.193.183.96:8086, poc.gongsunda.com:8083, www.rsocial.net:8087 and acp.jiobuzz.com:8090.

HTTP is a protocol for transmitting data – originally hypertext – between two nodes on a network. The secured version of this protocol is called HTTPS, whose use on a website is popularly indicated by an image of a lock placed in or near the URL bar. That the data was transmitted to Chinese servers at all is of little concern – Facebook, for example, regularly redirects user data through servers in the US. The concern is that the data was not encrypted, being sent through an HTTP and not an HTTPS connection, putting it up for grabs for anyone with the know-how to find it.
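To illustrate the difference, here is a minimal sketch in Python – not the Jio Chat app’s actual code, and the URLs and payload are placeholders. The same request made over plain HTTP travels as readable text, while over HTTPS it is encrypted in transit.

import requests

location = {"lat": 19.0760, "lon": 72.8777}  # example payload, not real app data

# Over plain HTTP, anyone on the network path (Wi-Fi router, ISP, proxy)
# can read this payload verbatim as it travels.
requests.post("http://example.com/track", data=location)

# Over HTTPS, the same payload is encrypted between client and server,
# so an on-path observer sees little more than which host was contacted.
requests.post("https://example.com/track", data=location)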


Their tweets prompted Gautam Chikermane, New Media Director of Reliance Industries, Ltd., to retort that AnonOps didn’t know what it was talking about, that its data had always been encrypted, and that the group was wasting Reliance’s time. However, security experts weren’t convinced. Specifically, Chikermane had said the transmitted data had always been encrypted by “binary encoded protocol”, and that the app had recently been switched to using AES (Advanced Encryption Standard).

Aditya Anand, the founder of a software services firm in Mumbai, clarified that HTTPS is for securing data in transit over the Internet and AES for securing saved data, probably on disk, and that there was no excuse for not using HTTPS because AES didn’t forestall the risks that only HTTPS could guard against. He added that implementing great software security could be a nightmare. If that lets Reliance off the hook just a little bit for sending unencrypted data into China, the company puts itself back on it by sending the data to servers that aren’t using HTTPS either, and by denying anything is amiss.

Apart from Chikermane’s tweets, no official statement has emerged from Reliance Industries, Ltd. He, however, also said that the data was being sent to China from within the app for users there, billing Jio as a global product. China doesn’t allow Google Maps in the country’s network, so apps that seek to provide geolocation facilities must rely on Chinese services, he added. In reply, AnonOps asked why data was being transmitted from India to China and why it was accompanied by error logs in Chinese.

The dust on this debate hasn’t settled yet. It was only a month ago that Reliance had announced it was launching 4G-enabled mobile devices priced around Rs.2,000 – a bargain any which way – and signalled Mukesh Ambani’s intentions to re-breach the telecom market. Reliance had also said that it was in talks with cheap handset manufacturers like Huawei and Xiaomi in China for hardware support.

Could there be life on Europa? NASA okays mission to find out

The Wire
June 19, 2015

Artist concept of NASA’s Europa mission spacecraft approaching its target for one of many flybys. Credit: NASA/JPL-Caltech

On Thursday, NASA okayed the development of a probe to Jupiter’s moon Europa, currently planned for the mid-2020s, to investigate if it has conditions suitable for life. The milestone parallels the European Space Agency’s JUICE (Jupiter Icy Moons Explorer) mission, also planned for the mid-2020s, which will study the icy moons of the Solar System’s largest planet.

The NASA mission has tentatively been called Clipper, and its proposal comes on the back of tantalizing evidence from the Galileo mission that Europa could have the conditions to harbour life. Galileo conducted multiple flybys of the moon in the 1990s and revealed signs that it could be harbouring a massive subsurface ocean – with more than twice as much water as on Earth – under an ice shell a few kilometres thick. It also found that the ocean floor could be rocky, that there were tidal forces acting on the water body, and that the thick ice shell could host plate tectonics like on Earth.

These characteristics make a strong case for the existence of habitable conditions on Europa because they mimic similar conditions on Earth. For example, plate tectonics on Earth shuffles a jigsaw of landmasses around on the surface. Their interactions move surface minerals into the ground and dredge new deposits upward, creating an important replenishment cycle that feeds many lifeforms. A rocky seafloor also conducts heat efficiently toward and away from the water, and tidal forces provide warmth through friction.

With NASA’s okay, the Europa mission moves to the “formulation stage”, when mission scientists and engineers will start technology development. The agency’s fiscal year 2016 budget includes $30 million for just this, according to a May 26 statement, out of a total of $18.29 billion that Congress has awarded it. NASA has also asked for $285 million through 2020 for the Europa mission, with the overall mission expected to cost $2 billion, not counting any delays to a launch planned for 2022.

The same statement also announced the scientific payload the mission will carry. Out of 33 proposals submitted, NASA selected nine – all geared toward exploring the ice- and water-related properties of the moon. The instruments could also be pressed into observing other icy moons in the Jovian neighbourhood and beyond – many of which have curious surface and atmospheric characteristics resembling Europa’s. These include another of Jupiter’s moons, Ganymede, and Saturn’s Dione, Enceladus, Hyperion, Iapetus, Phoebe and Tethys.

ESA’s JUICE mission – part of its broader Cosmic Vision strategy for a class of long-term missions in the 2020s – is planned to launch in 2022 and reach Jupiter by 2030. At one point, it will enter into orbit around Ganymede. If NASA’s Clipper is at Europa by then, what the two probes find could be complementary, and be compared to infer finer details.

Indian coder, lawyer take on Israeli company’s threats

The Wire
June 16, 2015

On June 9, The Wire broke the story of a Bengaluru-based programmer who’d revealed that an Israeli company was injecting malicious JavaScript code into websites visited on Airtel’s 3G network. Thejesh GN had uploaded to GitHub, on June 3, the script and screenshots showing where it was being injected on his website. In reply, on June 8, the company, Flash Networks, threatened him with overzealous punitive action under the IT Act 2000.

On Monday, in a heartening turn of events, Lawrence Liang, a reputed Bengaluru-based legal researcher and cofounder of the Alternative Law Forum, served a counter-notice to Flash Networks’ notice. Liang asserted his and Thejesh’s right to civil and criminal proceedings against Flash for the “unlawful insertion of code by your client into my clients source code”, which “amounts to a violation of the rights of my client, including but not limited to a violation of his privacy, an attempt to unlawfully access and hinder the operation of his website and a violation of the right to integrity of the work of my client.”

A copy of Liang’s reply was uploaded by Thejesh to his website on Monday. The document describes in detail Thejesh’s actions and the underlying intent – which amounted to a review of the JavaScript injected by Flash, of its origin from an Airtel-owned IP address, and of its effects on his website. As the document states, “It is also commonly accepted that whenever one encounters any inserted scripts, viruses or spyware, you publish them as supporting document and evidence so other researchers can review your investigation by looking into it.”
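
For a sense of what such a review involves, here is a hedged sketch in Python of how one might check a page served over a suspect network for script tags the site’s owner never put there; the domain names and the list of expected script hosts are hypothetical, not Thejesh’s actual tooling:

# Hypothetical domains throughout; an illustrative check, not Thejesh's code.
from html.parser import HTMLParser
from urllib.request import urlopen

EXPECTED_SCRIPT_HOSTS = {"www.example.net", "cdn.example.net"}   # scripts the site really uses

class ScriptCollector(HTMLParser):
    """Collects the src attribute of every <script> tag in a page."""
    def __init__(self):
        super().__init__()
        self.sources = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                self.sources.append(src)

# Fetch the page as it arrives over the network being tested.
html = urlopen("http://www.example.net/").read().decode("utf-8", "replace")
collector = ScriptCollector()
collector.feed(html)

# Flag any external script whose host is not one the site's owner expects.
for src in collector.sources:
    host = src.split("/")[2] if "//" in src else "(relative path)"
    if host not in EXPECTED_SCRIPT_HOSTS:
        print("unexpected script source:", host, "->", src)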

Following Thejesh’s upload to GitHub on June 3, Flash put out its notice on June 8. The next day, in an effort to shut down the GitHub repository in which he had uploaded the screenshots, Flash served a notice under the American Digital Millennium Copyright Act (DMCA). The repository was then automatically taken down by GitHub until the matter is resolved.

In the aftermath of these events, Flash has repeatedly asserted that Thejesh violated the “confidentiality” of the script it was injecting, called Anchor.js. Although Airtel issued a statement saying it had teamed up with Flash to track users’ monthly subscription usage, neither Flash nor Airtel has offered a substantive explanation of how Anchor.js accomplished this. The explanation matters because Anchor.js was also found to be inserting ads onto webpages – injections which, given their unsupervised nature, could just as well carry code that compromises security and user privacy.

Apart from asserting their right to legal recourse instead of the blind compliance that Flash’s DMCA notice expects, Liang has demanded that Flash should “offer an unconditional apology for attempting to insert a malicious piece of code into my client’s website which has affected the functionality of the same as well as lowering the reputation of my client” and “for violating the privacy of my client”.

Playing villains, he made a giant of himself

Christopher Lee at the Aubagne International Film Festival in September 1996. Credit: Charmich/Wikimedia Commons, license

The Wire
June 15, 2015

When the first installment of The Lord of the Rings trilogy was released in 2001, it introduced a whole new generation to the ageless charms of Christopher Lee. Far removed from the often campy Dracula that an earlier set of filmgoers loved him for, he played the ‘white wizard’ Saruman with an electrifying dignity, brushing the character with a majestic flavour of evil. It’s hard to imagine many other actors being able to do that without outright vilification.

Sir Christopher Lee passed away on June 7 in a hospital in London due to respiratory problems and heart failure. He is survived by his wife Birgit Krøncke and their daughter, Christina. He was 93 – fully 69 of those years spent as an actor, starting with small roles in action films before going on to play the bloodsucking Count in the cult Hammer Horror films, Lord Summerisle in The Wicker Man, the memorable Francisco Scaramanga in The Man with the Golden Gun, Count Dooku in Star Wars Episodes II and III, and, of course, Saruman in the movies based on JRR Tolkien’s Middle Earth epic.

Lee was also a popular fixture in horror films from the 1950s to the 1970s, often appearing as characters whose places in the literary canon were as revolutionary beings, great influencers of the zeitgeist. In fact, the list of all his roles is peppered with what appear to be minor ones – in keeping with how Hollywood long treated science-fiction and fantasy films – with a few major forays here and there that received mainstream acclaim.

From 1950 to 1977, Lee appeared in a host of monster films, playing Dracula eight times for Hammer (1958-1973) and in the regrettable Fu Manchu productions. Although all of the Hammer films fared well commercially, Lee went on record to state that he was emotionally blackmailed into starring in them – principally because the producers ran out of money and would ask Lee to think of all the people he’d put out of work if he backed out.

His Dracula was smooth – in one film, he only hissed – but he had come to hate the lack of challenge. In this period, it was as if the slight roles Lee was being offered insulated him from the acclaim he was starting to receive from the rest of the world. In fact, a film he did in 1970 – The Private Life of Sherlock Holmes – pushed him to refuse being typecast in the future as an ‘evil heavy’, as Christopher “The Count” Lee, and eventually to leave England altogether for America in 1977.

Thus it was only in the 1970s and the 1980s that he started playing characters that would define his legacy the way he wanted. In 1973, Lee starred as the defiant Lord Summerisle in Robin Hardy’s cult classic The Wicker Man, playing a deranged nobleman who has convinced those on his estate of Summerisle that a willing human needs to be sacrificed for every season the local harvest fails. In 1974, he got to play the memorable villain Francisco Scaramanga in The Man with the Golden Gun, where he very nearly stole the show from Roger Moore’s James Bond.

Ian Fleming – whose step-cousin Lee was – conceived of Scaramanga as a crime-hardened Cuban rowdy. But what Lee ended up playing was a villain with great charm and finesse.

Lee took pride in his versatility. In an interview, he once said, “If you’re going to be a real actor, you must possess great versatility, otherwise you’re not going to last very long.” To illustrate the point, he hosted an episode of Saturday Night Live in 1978 alongside the greats John Belushi, Dan Aykroyd and Bill Murray. Lee later said that before he went onstage that night, he’d been more terrified than before any of his films until then.

A man of many parts, Lee spoke German, French and Italian fluently, could sing (he was a great heavy metal fan and released an award-winning metal album, Charlemagne: By the Sword and the Cross, in 2010) and fence, and boasted an impressive variety of wartime experiences before he took to acting as a career.

In the early 1940s, after brief stints in the Finnish army and the British Home Guard, Lee volunteered for the Royal Air Force. Before he was seconded to the Army after the Allied Invasion of Italy in 1943, he was nearly killed twice, came down with six bouts of malaria in one year and received two promotions. In late-1944, he was promoted to flight-lieutenant and sent to Air Force HQ, where he participated in forward planning and liaison. In the last few months before he was discharged and the war was winding down, Lee was attached to the SAS and was part of a team tasked with hunting down and interrogating Nazi war criminals – a job that took him to various concentration camps around Europe.

However, he never spoke about his service with the Special Forces. Sample this now-famous exchange, as forces.tv details:

When pressed by an eager interviewer on his SAS past, he leaned forward and whispered: “Can you keep a secret?”

“Yes!” the interviewer replied, breathless with excitement.

“So can I,” replied a smiling Lee, sitting back in his chair.

His career started to flag around the 1990s – not in the quality of his acting but in the frequency with which he landed great films. A notable release in this period was Jinnah, with Lee playing the titular founder of Pakistan. He considered the film his “most important”, “in terms of its subject and the great responsibility” he had as an actor.

Lee’s career was revived spectacularly in 2001 with the release of The Lord of the Rings: The Fellowship of the Ring, in which he played Saruman. There’s an oft-overlooked aspect to this character in the movies: the only other ‘important’ villain in them was Sauron, and he did not possess a physical body, did not command a physical presence. Yes, there were the orcs and the ghastly lieutenants (like the Mouth of Sauron), but insofar as the movies needed a visual focal point of intimidation, Lee’s Saruman provided it. Until his death in the first scene of The Return of the King, he was the greatest threat and remained the face of the enemy.

The Return of the King was also a tribute of sorts to Lee’s continued support and endorsement of the fantasy genre through the decades. Even if the Marvel multiverse and the Harry Potter series today tower over other films in terms of earnings, and production houses have become more favourable in terms of sponsoring sci-fi and fantasy films, a part of the support for them can be traced to the success of Peter Jackson’s films: The Return of the King was in fact the first fantasy film to win the Academy Award for Best Picture, in 2004.

During and after starring in the Middle Earth epics, Lee donned the role of the antagonist Count Dooku in two Star Wars films, Episode II: Attack of the Clones (2002) and Episode III: Revenge of the Sith (2005). Also in 2005, he played Willy Wonka’s father in Charlie and the Chocolate Factory. He later said in an interview to Total Film, “Johnny Depp, as far as I’m concerned, is Number One of his generation; there’s no one who can touch him.”

Lee was a product of the ‘old school’, a generation given to resilience and forthrightness, possessing a commitment to once-commonplace ideas like waiting one’s turn. It’s hard to say if that’s what led to more than six decades of Hollywood success or if it was the other way round – but it doesn’t matter. Lee remained an actor until the day he died (a month ago, he’d signed up for a Danish film). He was proud of the wide variety of people he got the opportunity to play, and of the chance to work with giants ranging from Laurence Olivier to George Lucas to Tim Burton. And through all the years he, with quiet dignity, made a giant of himself.