From Orwell to Kafka, Markov to Doctorow: Understanding Big Data through metaphors

On March 20, I attended a short talk by Malavika Jayaram, a fellow at the Berkman Center for Internet & Society, titled ‘What we talk about when we talk about Big Data’ at the T.A.J. Residency in Bengaluru. It was something of an initiation into the social and political contexts of Big Data and its usage, and the important ethical conundrums assailing these contexts.

Though it was a little slow in the first 15 minutes, Jayaram’s talk gathered pace as she piled criticism upon criticism on the concept’s foundation, which was quickly revealed to be immature. Perhaps those familiar with Jayaram’s past research found the talk more (or less) nuanced than her earlier work, but to me it revealed an array of perspectives I had remained woefully ignorant of.

The first in line was about the metaphors used to describe Big Data – and how our use of metaphors at all betrays our inability to comprehend Big Data in its entirety. Jayaram quoted at length but loosely from an essay by Sara M. Watson, her colleague at Berkman, titled Data is the new “____”. It describes how the dominant metaphors are industrial, dealing with the data itself as if it were a natural resource and the process of analyzing it as if it were being mined or refined.

Data as a natural resource suggests that it has great value to be mined and refined but that it must be handled by experts and large-scale industrial processes. Data as a byproduct describes the transactional traces of digital interactions but suggests it is also wasteful, pollutive, and may not be meaningful without processing. Data has also been described as a fungible resource, as an asset class, suggesting that it can be traded, stored, and protected in a data vault. One programmatic advertising professional related to me that he thinks “data is the steel of the digital economy,” an image that avoids the negative connotations of oil while at the same time expressing concern about the monopolizing forces of firms like Google and Facebook.

Not Orwellian but Kafkaesque

There are two casualties of this perspective. The first is that the people behind the data – those whose features, actions and choices have become numbers – are forgotten even as the data they have given “birth” to becomes more important and valuable. The second casualty is the data’s repose: the constant reminder that data is valuable, and large amounts of it more so, condemns it to a life in which it can’t hope to stay still for long.

The dehumanization of Big Data, according to Jayaram, extends beyond analysts forgetting that the data belongs to faces and names, and into the restriction of personal ownership: the people the data represents often don’t have access to it. This implies an existential anxiety quite unlike the one found in George Orwell’s 1984 and more like the one in Franz Kafka’s The Trial. In Jayaram’s words,

You are in prison awaiting your trial. Suddenly you find out the trial has been postponed and you have no idea why or how. There seem to be people who know things that you never will. You don’t know what you can do to encourage their decisions to keep the trial permanently postponed. You don’t know what it was about you and you have no way of changing your behavior accordingly.

In 2013, American attorney John Whitehead popularized this comparison in an article titled Kafka’s America. Whitehead argues that the sentiments of Josef K., the protagonist of The Trial, are increasingly becoming the sentiments of a common American.

Josef K’s plight, one of bureaucratic lunacy and an inability to discover the identity of his accusers, is increasingly an American reality. We now live in a society in which a person can be accused of any number of crimes without knowing what exactly he has done. He might be apprehended in the middle of the night by a roving band of SWAT police. He might find himself on a no-fly list, unable to travel for reasons undisclosed. He might have his phones or internet tapped based upon a secret order handed down by a secret court, with no recourse to discover why he was targeted. Indeed, this is Kafka’s nightmare, and it is slowly becoming America’s reality.

Kafka biographer Reiner Stach summed up these activities, as well as the steadily unraveling realism of Kafka’s book, as proof of “the extent to which power relies on the complicity of its victims” – and the ‘evil’ mechanism used to achieve this state is a concern that Jayaram places among the prime contemporary threats to civil liberties.

If your hard drive’s not in space…

There is an added complication. If the use of Big Data were predominantly suspect, it would have been easier to build a consensus against its abuse. However, that isn’t the case: Big Data is more often than not used in ways that don’t harm our personal liberties, and the misfortune is that this collective beneficence has so far been no match for the collective harm some of its misuses have achieved. Could this be because the potential for misuse is almost everywhere?

Yes. An often overlooked facet of Big Data is that its responsible use is not a black-and-white affair. Facebook is not all evil and academic ethnographers are not all benign. Zuckerberg’s social network may collect and store large amounts of information that it nefariously trades with advertisers – and may even comply with the NSA’s “requests” – but there is a systematicity, an orderliness, with which the data is passed around. The complex’s existence alone presents a problem, no doubt, but that there is a system at all makes the problem easier to attempt to fix than if the orderliness were absent.

And this orderliness is often absent among academics, scholars, journalists and others, who may not treat data as a dollar note but are nonetheless processing prodigious amounts of it without being as careful as necessary about how they log, store and share it. Jayaram rightly believes that even if information is collected for benevolent purposes, the moment it becomes data it loses its memory and stays on the Internet as data; that if we are to be responsible data scientists, being benevolent alone will be inadequate.

To drive the point home, she recalled a comment someone had made to her during a data workshop.

The Utopian way to secure data is to shoot your hard drive into space.

Every other recourse will only fall short.

Consent is not enough

This memoryless, Markovian character of the data economy demands a redefinition of consent as well. The question “What is consent?” depends on what a person is consenting to. However, almost nobody knows how the data will be used, what for, or over what time-frames. Like a variable flowing through different parts of a computer program, data can pass through a variety of contexts, to each of which it provides value of varying quality. So the same question of contextual integrity should apply retrospectively to the process of consent-giving as well: what are we consenting to when we’re consenting to something?
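To make the Markov analogy concrete, here is a minimal sketch – my illustration, not Jayaram’s, and every context name and probability in it is hypothetical – of a data record hopping between contexts, where each hop depends only on the record’s current holder and retains no memory of the context in which consent was originally given:

```python
import random

# Toy Markov chain: a data record hops between contexts. The next hop
# depends only on where the record currently is -- never on where it
# started or what its subject originally consented to.
TRANSITIONS = {
    "collector":  [("analytics", 0.6), ("broker", 0.4)],
    "analytics":  [("broker", 0.5), ("archive", 0.5)],
    "broker":     [("advertiser", 0.7), ("archive", 0.3)],
    "advertiser": [("archive", 1.0)],
    "archive":    [("archive", 1.0)],  # data rarely leaves the Internet
}

def hop(state):
    contexts, weights = zip(*TRANSITIONS[state])
    return random.choices(contexts, weights=weights)[0]

state, path = "collector", ["collector"]
while state != "archive":
    state = hop(state)
    path.append(state)

print(" -> ".join(path))  # e.g. collector -> broker -> advertiser -> archive
```

Nothing in the chain records the original terms of collection, which is the point: consent given at the first node says nothing about what happens three hops later.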

And when neither the party asking for consent nor the party giving it can know all the ways in which the data will be used, the typical way out has been to seek consent that protects one against harm – either by ensuring that one’s civil liberties are safeguarded or by explicitly prohibiting choices that would impinge upon, again, one’s civil liberties. This has also increasingly been done in a one-size-fits-all manner that the average citizen doesn’t have the bargaining power to modify.

However, it’s become obvious by now that just protecting these liberties isn’t enough to ensure that data and consent are both promised a contextual integrity.

Why not? Because the statutes that enshrine many of these liberties are yet to be refashioned for the Internet age. In India, at least, the six fundamental rights are to equality, to freedom, against exploitation, to freedom of religion, cultural and educational rights, and to constitutional remedies. Between them, the promise of protecting against the misuse not of one’s person but of one’s data is tenuous (although a recent document from the Telecom Regulatory Authority of India could soon fix this).

The Little Brothers

Anyway, an immediate consequence of this typical way out has been that one needs to be harmed to obtain a remedy, at a time when it remains difficult to define when one’s privacy has been harmed. And since privacy has been an enabler of human rights, even unobtrusive acts of tagging and monitoring that don’t violate the law can force compliance among people. This is what hacker Andrew Huang talks about in his afterword to Cory Doctorow’s novel Little Brother (2008),

[In] January 2007, … Boston police found suspected explosive devices and shut down the city for a day. These devices turned out to be nothing more than circuit boards with flashing LEDs, promoting a show for the Cartoon Network. The artists who placed this urban graffiti were taken in as suspected terrorists and ultimately charged with felony; the network producers had to shell out a $2 million settlement, and the head of the Cartoon Network resigned over the fallout.

Huang’s example further weakens the Big Brother metaphor by implicating not one malevolent central authority but an epidemic, Kafkaesque paranoia that has “empowered” a multitude of Little Brothers all convinced that God is only in the detail.

While Watson’s essay (Data is the new “____”) is explicit about the power of metaphors to shape public thought, Doctorow’s book and Huang’s afterword take the next logical step in that direction and highlight the clear and present danger for what it is.

It’s not the abuse of power by one head of state but the evolution of statewide machines that (exhibit the potential to) exploit the unpreparedness of the times to coerce and compel, using as their fuel the mountainous entity – sometimes so Gargantuan as to be formless, and sometimes equally absurd – called Big Data (I exaggerate – Jayaram was more measured in her assessments – but not much).

And even if Whitehead and Stach only draw parallels between The Trial and American society, the relevant, singular “flaw” of that society exists elsewhere in the world, too: the more we surveil others, the more we’ll be surveilled ourselves, and the longer we choose to stay ignorant of what’s happening to our data, the greater our complicity in its misuse. It is a bitter pill to swallow.

Featured image credit: DARPA

Curious Bends – macaroni scandal, bilingual brain, beef-eating Hindus and more

1. The great macaroni scandal in the world began in Kerala

“‘Only the upper class people of our larger cities are likely to have tasted macaroni, the popular Italian food. It is made from wheat flour and looks like bits of onion leaves, reedy, hollow, but white in colour.’ This paragraph appears in a piece titled: “Ta-Pi-O-Ca Ma-Ca-Ro-Ni: Eight Syllables That Have Proved Popular In Kerala”. Readers, I am not making this up. For a few years, from around 1958 to 1964, food scientists in India were obsessed with tapioca macaroni. Originally called synthetic rice, it was developed by the Central Food Technological Research Institute (CFTRI) in Mysore as a remedy for the problems of rice shortage, especially in the southern states.” (4 min read, livemint.com)

2. China is using Pakistan as a place to safety-test its nuclear power technology

“Pakistan’s plans to build two nuclear reactors 40 kilometres from the bustling port city of Karachi, a metropolis of about 18 million people has become a bone of contention between scientists and the government. They are to be built by the China National Nuclear Corporation. Each reactor is worth US$4.8 billion and the deal includes a loan of US$6.5 billion from a Chinese bank. These reactors have never been built or tested anywhere, not even in China. If a Fukushima or a Chernobyl-like disaster were to take place, evacuating Karachi would be impossible, says a leading Pakistani physicist. He argues that building these nuclear reactors may have significant environmental, health, and social impacts.” (6 min read, scidev.net)

3. Speaking a second language may change how you see the world

“Cognitive scientists have debated whether your native language shapes how you think since the 1940s. The idea has seen a revival in recent decades, as a growing number of studies suggested that language can prompt speakers to pay attention to certain features of the world. Russian speakers are faster to distinguish shades of blue than English speakers, for example. And Japanese speakers tend to group objects by material rather than shape, whereas Koreans focus on how tightly objects fit together. Still, skeptics argue that such results are laboratory artifacts, or at best reflect cultural differences between speakers that are unrelated to language.” (4 min read, sciencemag.org)

4. Nobel-prize winning biologist Venkatraman Ramakrishnan named president of the Royal Society​

“Ramakrishnan grew up in India and has spent the majority of his research career in the United States, moving to the United Kingdom in 1999. He has a diverse scientific background: he switched to biology after a PhD in physics. “That breadth is something I hope will help me,” he says.” (3 min read, nature.com)

5. History is proof most Hindus never had any beef with beef

“To achieve this goal, the RSS has, among other things, turned beef into a Muslim-Hindu issue. So the ban on beef is a device to create a monolithic Hindu community? Yes. You also have to ask the question: When did the idea of not eating beef and meat become strong? Gandhi was essentially a Jain; he campaigned for cow protection as well as vegetarianism. It was Gandhi’s campaign that took vegetarianism to non-Brahmin social groups that were meat-arian. The only people who were not really influenced by Gandhi’s cow protection campaign and vegetarianism were Muslims, Christians and Dalits. If the Dalits were not affected, it was because Ambedkar immediately started a counter-campaign.” (8 min read, scroll.in)

Chart of the week

“Among the educated elite the traditional family is thriving: fewer than 10% of births to female college graduates are outside marriage—a figure that is barely higher than it was in 1970. In 2007 among women with just a high-school education, by contrast, 65% of births were non-marital. Race makes a difference: only 2% of births to white college graduates are out-of-wedlock, compared with 80% among African-Americans with no more than a high-school education, but neither of these figures has changed much since the 1970s. However, the non-marital birth proportion among high-school-educated whites has quadrupled, to 50%, and the same figure for college-educated blacks has fallen by a third, to 25%. Thus the class divide is growing even as the racial gap is shrinking.” (4 min read, economist.com)


Tuberculosis’s invisible millions – in cases and money

Tuberculosis (TB) has killed more than a billion people in the last 200 years – more than any other infectious disease in that period. What’s worse: according to the World Health Organisation (WHO), less than half of all cases worldwide are ever diagnosed.

India suffers the most. It has the highest burden of TB in the world: more than 2 million people suffer from the disease, despite years of work to control it.

TB was declared a global health emergency by the WHO in 1993. Then, in 2001, the first global “Stop TB Plan” came into effect, with an international network of donors and private and public sector organisations tackling TB-related issues around the world together.

The disease is prevalent in both rich and poor countries, but has more disastrous consequences in the latter because of limited access to healthcare, poor sanitation and undernutrition. The matter is worsened by co-morbidity, where those with immune systems weakened by diabetes or AIDS fall prey to TB and die.


Even among developing economies, there is significant variation in treatment levels because of difficulties in identifying new infections. In 2012, while China and India together accounted for 40% of the world’s TB burden, the prevalence per 100,000 people was at least 167 in India and less than half that in China (about 68).

Technology can help

In an article in the journal PLOS Medicine, Puneet Dewan from the Bill & Melinda Gates Foundation and Madhukar Pai of McGill University have called for global efforts to identify, treat and cure the 3 million “missed” TB infections every year.

“Reaching all these individuals and ensuring accountable, effective TB treatment will require TB control programs to adopt innovative tools and modernize program service delivery,” they write.

In January 2015, the WHO representative to India, Nata Menabde, said the decline of TB incidence in the country was occurring at 2% per year, instead of the desired 19-20%. She added that it could be pulled up to 10% per year by 2025 if the country was ready to better leverage the available technology. The WHO’s goal is to eradicate TB by 2050, but for India that may prove to be too soon.
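A back-of-the-envelope compound-decline calculation (my arithmetic, not the WHO’s) shows why the gap between 2% and 10-20% matters so much for the 2050 goal:

```python
# Fraction of the start year's TB incidence remaining by a given year,
# assuming a constant annual rate of decline.
def incidence_left(year, start_year=2015, annual_decline=0.02):
    return (1 - annual_decline) ** (year - start_year)

for rate in (0.02, 0.10, 0.20):
    left = incidence_left(2050, annual_decline=rate)
    print(f"{rate:.0%}/yr decline -> {left:.2%} of 2015 incidence left in 2050")

# 2%/yr  -> ~49% left by 2050: nowhere near eradication
# 10%/yr -> ~2.5% left
# 20%/yr -> ~0.04% left
```

At 2% per year, incidence barely halves in 35 years; only sustained double-digit declines bring eradication within reach.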

This is also what Dewan and Pai are calling for. The tech interventions could be in the form of e-health services, the use of mobile phones by doctors to notify centers of new cases, and disbursing e-vouchers for subsidized treatment.

And their demands are not unreasonable, given India’s progress so far. First, India has met one of the United Nations’ ambitious Millennium Development Goals by halving TB prevalence in 2015 relative to 1990. Second, according to Menabde, India is also on track to halve TB mortality by the end of this year compared to 1990. These accomplishments testify to the commitment of public- and private-sector initiatives and place the country in a good position from which to springboard toward stiffer targets. Continued support can sustain the momentum.


In 2012, the previous government made TB a notifiable disease—mandating medical practitioners to report every case detected—going some way toward reducing the number of “missing” cases. It also banned blood tests for diagnosing TB, which lack a clinical basis. While the delay in implementing these measures contributed to the rise of multidrug-resistant strains of the disease, they also revitalised efforts to meet the WHO’s targets at an important time. Then bad news struck.

Causing self-harm

India’s health budget for 2015-16 has not even managed to keep up with inflation. It is a mere 2% more than the previous year. For TB, this budgetary belt-tightening has meant taking a few steps back in the pace of developing cures against multi-drug resistant strains and in efforts to improve the quality of treatment at frontline private-sector agencies, which already provide more than 60% of patient care.

Dewan and Pai think TV programs, such as Aamir Khan’s Satyamev Jayate, and Amitabh Bachchan’s admission that he is a TB survivor will promote enough awareness to force changes in healthcare spending—but this seems far too beamish an outlook when the funding cuts and regulatory failures are factored in.

A new draft of the National Health Policy (NHP) was published in December. Besides providing a lopsided insight into the government’s thinking on public healthcare, it made evident that ministerial apathy, and not a paucity of public support, was to blame for poor policies.

Nidhi Khurana, a health systems researcher at the Johns Hopkins Bloomberg School of Public Health, summed up the NHP deftly in The Hindu:

The NHP refutes itself while describing the main reason for the National Rural Health Mission’s failure to achieve stronger health systems: “Strengthening health systems for providing comprehensive care required higher levels of investment and human resources than were made available. The budget received and the expenditure thereunder was only about 40 per cent of what was envisaged for a full revitalisation in the NRHM framework.” If this is not the case against diminished public funding for health, what is?

OA shouldn’t stop at access

In The Scholarly Kitchen, Joseph Esposito argues why it’s okay for OA articles (which come with a CC-BY license) to be repackaged and sold for a price by other merchants once they’ve been published.

The economic incentive to reach new audiences could make that otherwise OA article into something that gets brought to the attention of more and more readers. What incentive does a pure-play OA publisher have to market the materials it publishes? Unfortunately, the real name of this game is not “Open Access” but “Post and Forget.” Well-designed commerce, in other words, leads to enhanced discovery. And when it doesn’t, it enters the archaeological record.

If we can chase the idealists, ideologues, and moralists out of the temple, we may see that the practical act of providing economic incentives may be able to do more for access than any resolution from Budapest, Bayonne, Bethesda, or Berlin. The market works, and when it doesn’t, things quietly go away. So why all the fuss?

It’s not an argument that’s evident on the face of it; it becomes apparent only when you realize OA’s victory march stopped halfway, at allowing people to access research papers, not find them. The people who are good at helping others find stuff are the ones actually taking the trouble to market their wares.

So Esposito has essentially argued for leaving in a “finding fee” where it exists, because there’s a big difference between something just being there in the public domain and something being found. I thought I’d disagree with the consequences of this reasoning for OA but I largely don’t.

Where I stop short is where this permission to sell papers that are available for free infringes on the ideals of OA, through no fault of the principle itself. But then what can OA do about that?

Read: Getting Beyond “Post and Forget” Open Access, The Scholarly Kitchen

A future for driverless cars, from a limbo between trolley problems and autopilots

By Anuj Srivas and Vasudevan Mukunth

What’s the deal with everyone getting worried about artificial intelligence? It’s all the Silicon Valley elite seem willing to be apprehensive about, and Oxford philosopher Nick Bostrom has become its patron saint with his book Superintelligence: Paths, Dangers, Strategies (2014).

Even if Big Data seems like it could catalyze things, they could be overestimating AI’s advent. But thanks to Google’s much-watched breed of driverless cars, conversations on regulation are already afoot. This is the sort of subject that could benefit from its technology being better understood, because its workings are not immediately apparent. To make matters worse, now is also the period when not enough data is available for everyone to scrutinize the issue, even as some opinion-mongers distort the early hints of a debate with their desires.

In an effort to bypass this, let’s say things happen as they always do: Google doesn’t ask anybody, starts deploying its driverless cars, and the law is forced to shape itself around that. True, this isn’t something Google can force on people, because driverless cars are part of no pre-existing ecosystem – it can’t force participation as it did with Hangouts. Yet the law isn’t prohibitive either.

In Silicon Valley, Google has premiered its express Shopping service, delivering purchases made online within three hours of the order being placed, at no extra cost. No extra cost because the goods are delivered using Google’s driverless cars, and the service is a test-bed for them, where they get to ‘learn’ what they will. But when it comes to buying these cars, who will? What about insurance? What about licenses?

A better trolley problem

It’s been understood for a while that the problem here is one of liabilities, summarized in many ways by the trolley problem. There’s something unsettling about loss of life due to machine failure, whereas it’s relatively easier to accept when the loss is the consequence of human hands. Theoretically it should make no difference – planes, for example, are flown more by computers these days than by a living, breathing pilot. Essentially, you’re trusting your life to the computers running the plane. And when driverless cars are rolled out, there’s ample reason to believe they will have a chance of failure as low as that of aircraft run by computer-pilots. But we could be missing something through this simplification.

Even if we’re laughably bad at it at times, having a human behind the wheel makes driving predictable, sure, but more importantly it makes liability easier to assign. The problem with a driverless car is not that we’d doubt its logic – the logic could be perfect – but that we’d doubt what that logic dictates. A failure right now is an accident: a car ramming into a wall, a pole, another car, another person. Are these the only failures, though? A driverless car does seem similar to autopilot, but we must be concerned about what its logic dictates. We consciously say that human decision-making skills are inferior, that we can’t be trusted. Though that is true, we cross an epistemological threshold when we say so.

Perhaps the trolley problem isn’t well thought out. The problem with driverless cars is not about five lives versus one; that’s an utterly human problem. The updated problem for driverless cars would be: should the algorithm look to save the passengers of the car or should it look to save bystanders?

And yet even this updated trolley problem is too simplistic. Computers and programmers already make these kinds of decisions on a daily basis, by choosing, for instance, at what moment an airbag should deploy – especially considering that an unnecessarily deployed airbag can also grievously injure a human being.

Therefore, we shouldn’t fall into a Frankenstein complex where our technological creations are automatically assumed to be doing evil things simply because they have no human soul. It’s not a question of “it’s bad if a machine does it and good if a human does it”.

Who programs the programmers?

And yet, the scale and moral ambiguity are pumped up to a hundred when it comes to driverless cars. Decisions like airbag deployment can often take refuge in physics and statistics – they are usually seen in that context. For driverless cars, though, specific programming decisions will be forced to confront morally ambiguous situations, and it is here that the problem starts. If an airbag deploys unintentionally or wrongly, it can always be explained away as an unfortunate error, accident or freak situation – or, more simply, by noting that we can’t program airbags to deploy on a case-by-case basis. Driverless cars, however, can’t take refuge behind statistics or simple physics when they are confronted with their trolley problem.

There is a more interesting question here. If a driverless car has to choose between a) running over a dog, b) swerving to miss the dog and thereby hitting a tree, and c) freezing and doing nothing, what will it do? It will do whatever the programmer tells it to do. Earlier we had the choice, depending on our own moral compass, as to what we should do: people who like dogs wouldn’t kill the animal; people who cared more about their car would. So, who programs the programmers?
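A deliberately crude sketch makes the point concrete: once the choice is automated, the driver’s moral compass becomes a table of weights someone hard-coded in advance. Every name and number below is a hypothetical moral judgement of mine, not a real system:

```python
# Estimated harm each action does to each party, on a 0-1 scale.
# These are made-up figures standing in for a programmer's judgement.
COSTS = {
    "run_over_dog":     {"dog": 1.0, "passengers": 0.0, "car": 0.0},
    "swerve_into_tree": {"dog": 0.0, "passengers": 0.6, "car": 1.0},
    "freeze":           {"dog": 0.9, "passengers": 0.3, "car": 0.5},
}

# The "moral compass", fixed at design time: how much each party matters.
WEIGHTS = {"dog": 1.0, "passengers": 10.0, "car": 0.1}

def decide(options=COSTS, weights=WEIGHTS):
    # Pick the action with the lowest weighted total harm.
    return min(options, key=lambda a: sum(weights[k] * v
                                          for k, v in options[a].items()))

print(decide())  # with these weights: "run_over_dog"
```

Change the weights and the ‘right’ answer changes with them – which is exactly why the question of who programs the programmers matters.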

And as with the simplification to a trolley problem, comparing autonomous cars to the autopilot on board an aircraft is similarly short-sighted. In his book Normal Accidents, sociologist Charles Perrow talks about nuclear power plant (NPP) technology and its implications for insurance policy. NPPs are packed with redundant safety systems. When accidents don’t happen, these systems make up a bulk of the plant’s dead weight, but when an accident does happen, their failure is often the failure worth talking about.

So, even as the aircraft is flying through the air, control towers are monitoring its progress, the flight data recorders act as a deterrent against complacency, and simply the cost of one flight makes redundant safety systems feasible over a reasonable span of time.

Safety is a human thing

These features together make up the environment in which autopilot functions. An autonomous car, on the other hand, doesn’t inspire the same sense of being in secure hands – it’s like an economy of scale working the other way. What safety systems kick in when the ghost in the machine fails? To continue the metaphor: as Maria Konnikova pointed out in The New Yorker in September 2014, maneuvering an aircraft can be increasingly automated, but the problem arises when something fails and humans have to take over. We won’t be able to take over as effectively as we think we can, because automation encourages our minds to wander, to not pay attention to the differences between normalcy and failure. As a result, a ‘redundancy of airbags’ is encouraged.

In other words, it would be too expensive to build all these foolproof safety measures into driverless cars, yet built in they ought to be. And this is why the first ones likely won’t be owned by individuals. The best way to introduce them would be through taxi services like Uber, effectuating communal car-sharing with autonomous drivers. In a world of driverless cars, we may not own the cars themselves, so a company like Uber could internalize the costs involved in producing that ecosystem, and having the cars around in bulk makes safety redundancies feasible as well.

And if driverless cars are being touted as the future, owning a car could become a thing of the past, too. The thrust of the digital economy has been to share and rent rather than to own; only essentials like smartphones are owned. Look at music, business software, games, rides (Uber), even apartments (Airbnb). Why not autonomous vehicles?

Curious Bends – babies for sale, broken AIIMS, male gynaec and more

1. China has a growing online market for abducted babies

“Girls fetch considerably less than boys, but there is still a market for them. Old social patterns have re-emerged in the market, like the sale of girls into a household where they will be servants until they and the son of the house are of age to marry. Most abducted children are sold to new families as a form of illegal adoption, and are increasingly sold online, though some, mostly boys, are also trafficked for forced labour. I recently worked on an asylum case involving a young man forced into begging with a group of children under traffickers’ control in China. He is still so traumatised by the brutal physical punishments inflicted on the boys when they didn’t collect enough money that he can only talk about it in the third person: “they did this to the children”, never “they did this to me”.” (4 min read, theconversation.com)

2. Will it improve India’s poor healthcare if more research hospitals like the AIIMS are built?

“Barely 1-2% of the funds allocated to AIIMS, it observed, were being spent on research. As for education, even as India suffered from a lack of doctors, 49% of the doctors trained at AIIMS had “found their vocations abroad”. This staffing shortage was hurting AIIMS itself. Waiting time for surgery ranged between 2.5-34 months. With a high doctor-patient ratio, patients were barely getting four to nine minutes with doctors at the outpatient (OPD) department. The report flagged other shortcomings. AIIMS had failed to lead the modernisation of India’s public health infrastructure. CAG also noted delays in setting up medical centres, irregularities in the purchase of equipment, and so on.” (5 min read, economictimes.com)

3. Confessions of an Indian male gynaecologist

“Many of my patients confess that they prefer a male doctor to a female one. I don’t know why. But not every woman who walks into my room is comfortable. There is always a nurse in the room as I am scared that some woman will level baseless allegations over the physical examination. Unlike men, women have many health problems. Seeing all that they go through has made me respect them. My wife says I am more like a woman. That I have too much compassion.” (2 min read, openthemagazine.com)

4. Personalising cancer care, one tumour at a time

“Mitra’s CANScript has gone a step further. A simpler analogy would be the bacteria sensitivity tests that are commonly used today. Just as a pathology lab takes a swab, cultures it and tests it against all available antibiotics to finally help a doctor prescribe the right antibiotic, CANScript runs a test against the biopsy from the patient and gives a score card for the drugs to be used. In clinics it is currently used in six solid tumors (breast cancer, gastrointestinal, glioblastoma, head and neck squamous cell carcinoma and colorectal) and two blood cancers. Three other cancers – lung, cervical and melanoma—are under lab testing. However, the limitation with CANScript is that it requires very fresh tumour.” (5 min read, seemasingh.in)

5. What your bones have in common with the Eiffel Tower

“So how did Eiffel design a structure that’s strong enough to withstand the elements, and yet weighs about as much as the air surrounding it? The secret lies in understanding the shapes of strength. It’s a lesson we can learn by looking inwards… literally. By studying our bones, we can discover some of the same principles that Eiffel used in designing his tower.” (11 min read, wired.com)

Chart of the Week

“Now there are nine powers, and the kind of protocols that the cold-war era America and Soviet Union set up to reassure each other are much less in evidence today. China is cagey about the size, status and capabilities of its nuclear forces and opaque about the doctrinal approach that might govern their use. India and Pakistan have a hotline and inform each other about tests, but do not discuss any other measures to improve nuclear security, for example by moving weapons farther from their border. Israel does not even admit that its nuclear arsenal of around 80 weapons (but could be as many as 200) exists. North Korea has around ten and can add one a year and regularly threatens to use them. The agreements that used to govern the nuclear relationship between America and Russia are also visibly fraying; co-operation on nuclear-materials safety ended in December 2014. America is expected to spend $350 billion on modernising its nuclear arsenal over the next decade and Russia is dedicating a third of its fast-growing defence budget to upgrading its nuclear forces. In January this year the Doomsday Clock was moved to three minutes to midnight, a position it was last at in 1987.” (3 min read, economist.com)


The notion of natural quasicrystals is here to stay

In November 2008, Luca Bindi, a curator at the Università degli Studi di Firenze, Italy, found that an alloy of aluminium and copper called khatyrkite could be a quasicrystal. Bindi couldn’t be sure because he didn’t have the transmission electron microscope necessary to verify his find, so he couriered two grains of it to a lab at Princeton University. There, the physicists Paul Steinhardt – whose name has been associated with the study of quasicrystals since their discovery in 1982 – and Nan Yao made a monumental discovery: the alloy was indeed a quasicrystal, which meant these abnormal crystal structures could form naturally as well.

Before 1982, solid substances were thought to be either crystalline or amorphous. The atoms or molecules of crystalline substances are neatly stacked in a variety of patterns – but patterns nonetheless, ones that repeat whether you move through them in some direction or rotate them by some amount. In amorphous substances, the arrangement is chaotic. Then the physicist Dan Shechtman discovered quasicrystals: crystalline solids whose atoms or molecules are arranged in patterns that are orderly but, somehow, not repetitive. It altered the extant paradigm of physical chemistry, overthrowing knowledge a century old and redefining crystallography. Shechtman won the Nobel Prize in chemistry for his work in 2011.
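A one-dimensional analogy may help picture “orderly but not repetitive” (my illustration, not from the discovery itself): the Fibonacci word is generated by a deterministic substitution rule, yet no block of it ever repeats periodically – loosely the kind of order quasicrystals exhibit in three dimensions:

```python
# Build the Fibonacci word by repeated substitution: A -> AB, B -> A.
# The result is deterministic and perfectly ordered, yet it never
# settles into a repeating period -- a 1D analogue of quasicrystalline order.
def fibonacci_word(iterations):
    word = "A"
    for _ in range(iterations):
        word = "".join("AB" if ch == "A" else "A" for ch in word)
    return word

print(fibonacci_word(7))
# ABAABABAABAAB... -- orderly, but with no repeating unit cell
```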

The electron diffraction pattern from an icosahedral quasicrystal. Credit: nobelprize.org

The discovery that khatyrkite did in fact harbor quasicrystals, made on New Year’s Day 2009, triggered an expedition in 2011 to the foot of the Koryak Mountains in eastern Russia. Steinhardt and Bindi were there, and their team found some strange rocks, with quasicrystal grains embedded in them, along a stream 230 km south-west of Anadyr, the capital of Chukotka. More fascinating was the quasicrystals’ composition itself: identified as icosahedrite, it is thought to be of extraterrestrial origin. Steinhardt & co. think it formed in our solar nebula 4.57 billion years ago – when Earth was being formed, too – and got attached to a meteorite that crashed on Earth 15,000 years ago.

The latest results from this expedition were published in Scientific Reports on March 13. For all its details, the paper remains silent about the ten years of work and dedication consumed in discovering these anomalous crystals in a remote patch of the Russian tundra, about the human experience that fleshed out the discovery’s implications for the birth of the Solar System. Fortunately, Virat Markandeya wrote about it, loudly and well, in the November 2013 issue of Periscope magazine. The piece is a must-read now that the notion of natural quasicrystals is here to stay.

The Large Hadron Collider is back online, ready to shift from the “what” of reality to “why”

The world’s single largest science experiment will restart on March 23 after a two-year break. Scientists and administrators at the European Organization for Nuclear Research – known by its French acronym, CERN – have announced the status of the upgrades to the agency’s Large Hadron Collider (LHC) and its readiness for a new phase of experiments running from now until 2018.

Before the experiment was shut down in late 2013, the LHC became famous for helping discover the elusive Higgs boson, a fundamental (that is, indivisible) particle that gives other fundamental particles their mass through a complicated mechanism. The find earned two of the physicists who thought up the mechanism in 1964, Peter Higgs and Francois Englert, the Nobel Prize in physics in 2013.

Though the LHC had fulfilled one of its more significant goals by finding the Higgs boson, its purpose is far from complete. In its new avatar, the machine boasts the energy and technical agility necessary to answer questions that current theories of physics are struggling to make sense of.

As Alice Bean, a particle physicist who has worked with the LHC, said, “A whole new energy region will be waiting for us to discover something.”

The finding of the Higgs boson laid to rest speculations of whether such a particle existed and what its properties could be, and validated the currently reigning set of theories that describe how various fundamental particles interact. This is called the Standard Model, and it has been successful in predicting the dynamics of those interactions.

From the what to the why

But having assimilated all this knowledge, what physicists don’t know, but desperately want to, is why those particles’ properties have the values they do. They have realized the implications are numerous and profound: ranging from the possible existence of more fundamental particles we are yet to encounter to the nature of the substance known as dark matter, which makes up a great proportion of the matter in the universe even though we know next to nothing about it. These questions were first conceived to plug gaps in the Standard Model, but the gaps have only been widening since.

With an experiment now able to better test theories, physicists have started investigating these gaps. For the LHC, the implication is that in its second edition it will not be looking for something specific as much as helping scientists decide where to look in the first place.

As Tara Shears, a particle physicist at the University of Liverpool, told Nature, “In the first run we had a very strong theoretical steer to look for the Higgs boson. This time we don’t have any signposts that are quite so clear.”

Higher energy, luminosity

The upgrades that would unlock new experimental possibilities for the LHC were already evident in early 2012.

The machine works by using powerful electric currents and magnetic fields to accelerate two trains, or beams, of protons in opposite directions, within a ring 27 km long, to almost the speed of light, and then colliding them head-on. The result is particulate fireworks of such high energy that the rarest, most short-lived particles are brought into existence before they promptly decay into lighter, more common ones. Particle detectors straddling the LHC at four points on the ring record these collisions and their effects for study.

So, to boost its performance, upgrades to the LHC were of two kinds: increasing the collision energy inside the ring and increasing the detectors’ abilities to track more numerous and more powerful collisions.

The collision energy has been nearly doubled in the machine’s second life, from 7-8 TeV to 13-14 TeV, and the frequency of collisions has been doubled as well, from one set every 50 nanoseconds (billionths of a second) to one every 25 nanoseconds. Steve Myers, CERN’s director for accelerators and technology, had said in December 2012, “More intense beams mean more collisions and a better chance of observing rare phenomena.”
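As a quick sanity check of those figures (my arithmetic): halving the bunch spacing doubles the maximum crossing rate, and the quoted collision energy is simply the sum of the two beams’ energies:

```python
# Bunch spacing before and after the upgrade.
spacing_old_s = 50e-9  # 50 nanoseconds
spacing_new_s = 25e-9  # 25 nanoseconds

# The maximum bunch-crossing rate is the reciprocal of the spacing.
print(f"{1 / spacing_old_s / 1e6:.0f} million -> "
      f"{1 / spacing_new_s / 1e6:.0f} million crossings per second")
# 20 million -> 40 million crossings per second

# Two 6.5-TeV beams meeting head-on give 13 TeV in the centre of mass.
print(f"Collision energy: {2 * 6.5} TeV")
```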

The detectors have received new sensors, neutron shields to protect from radiation damage, cooling systems and superconducting cables. An improved fail-safe system has also been installed to forestall accidents like the one in 2008, when failing to cool a magnet led to a shut-down for eight months.

In all, the upgrades cost approximately $149 million, and will increase CERN’s electricity bill by 20% to $65 million. A “massive debugging exercise” was conducted last week to ensure all of it clicked together.

Going ahead, these new specifications will be leveraged to tackle some of the more outstanding issues in fundamental physics.

CERN listed a few – presumably primary – focus areas. They include investigating whether the Higgs boson could betray the existence of undiscovered particles, what particles dark matter could be made of, why the universe today has much more matter than antimatter, and whether gravity is so much weaker than the other forces because it is leaking into other dimensions.

Stride forward in three frontiers

Physicists are also hopeful about the prospects of discovering a class of particles called supersymmetric partners. The theory that predicts their existence is called supersymmetry. It builds on some of the conclusions of the Standard Model, and offers predictions that plug its holes with such mathematical elegance that it has many of the world’s leading physicists enamored. These predictions involve the existence of new particles, called partners.

In a neat infographic in Nature, Elizabeth Gibney explains that the partner easiest to detect will be the ‘stop squark’, as it is the lightest and can show itself in lower-energy collisions.

In all, the LHC’s new avatar marks a big stride forward not just in the energy frontier but also in the intensity and cosmic frontiers. With its ability to produce and track more collisions per second as well as chart the least explored territories of the ancient cosmos, it’d be foolish to think this gigantic machine’s domain is confined to particle physics and couldn’t extend to fuel cells, medical diagnostics or achieving systems-reliability in IT.

Here’s a fitting video released by CERN to mark this momentous occasion in the history of high-energy physics.

Featured image: A view of the LHC. Credit: CERN

Update: After engineers spotted a short-circuit glitch in a cooled part of the LHC on March 21, its restart was postponed from March 23 by a few weeks. However, CERN has assured that it’s a fully understood problem and that it won’t detract from the experiment’s goals for the year.

Why ‘Mein Kampf’ in 2016 will be more ‘readable’ than ever, not less

The first four-fifths of this article are fascinating. It’s titled “The future of Mein Kampf in a meme world”. Though I haven’t consumed historically significant events with consistent interest, World War II has been an exception by far. And belonging to the generation I do – the so-called Millennials – I resent the article’s conclusion that when the book’s copyright lifts next year and the “original” annotated version becomes available, its size alone will deter younger readers from picking up a copy.

Mein Kampf is Adolf Hitler’s account of his years growing up in Germany and Austria. Its greatest accomplishment has been to offer a peek into the mind that led the world into one of history’s worst conflicts and more dreadful tragedies. Hitler wrote it – rather, dictated it to his minion Rudolf Hess – while imprisoned for the failed Beer Hall Putsch of 1923. His actions in the Second World War subsequently led to a new world order, resulting in a geopolitical power structure that continues to shape global politics in the early 21st century.

In 1945, the book was banned by law in West Germany, which identified it as Nazi propaganda. The copyright remained with the Bavarian state government – and that copyright is set to expire in 2016. For the occasion, the Institute of Contemporary History said in 2010 that it would release an annotated version of the text.

Now, it’s been almost 70 years since the end of the War and much has definitely happened. But it’s hard to investigate the causes of many aspects of the present – especially the technology – without finding a part of their foundations rooted in the indignation and exigency of the first half of the previous century. And if the author of the piece – Gavriel Rosenfeld – had argued that Mein Kampf‘s relevance in 2016 among the teens and tweens was contingent on the relevance of these aspects, he might still have constructed a better argument than saying the size of the book would drive this demographic away.

His example of Otto Strasser not having read the book also sports a glaring error. Strasser says few in the Nazi Party had read the book in 1927, when Mein Kampf‘s measure of greatness lay only in what Hitler had accomplished until then: trivial compared to what would come after. Today, the book depicts incidents that shaped the most terrible head of state in recent history, and likely differs even in how it is significant to neo-Nazis and to the civilized.

Rosenfeld may have been misled by a deception akin to the one at play with a $10,000 Apple Watch. With that price tag, Apple is targeting only those people who think spending $10,000 on it is a good idea, not anyone else – including people with $10,000 to spare but not for a smartwatch. Similarly, those who are afraid of hefty tomes from the past have already turned away from them, but it’s facile to think it entirely an acceptance of 50-KB memes and in no part a rejection of 2,000 pages of text with 5,000 annotations.

In fact, the author’s secondary mistake in writing the piece may have been miscalculating what the memetic endeavors flooding the Internet are founded upon: an industry that continuously makes all kinds of information easier to consume and easier to share. Few will contest Rosenfeld when he says the book will not be consumed widely in its original form. However, its original physical form is irrelevant.

It will be made consumable in parts by many groups of people – journalists, teachers, historians and an ensemble of enthusiasts – who will upload the fruit of their efforts to the web, who will make the book searchable and shareable. In due course, and with a measure of interest that’s only to be expected, the book’s contents will be available to everyone, young and old. Who knows, even an annotation of the annotations that Mein Kampf will be released with could revitalize flagging debates on historiography. The book will ultimately be more accessible than it ever was.

And Rosenfeld’s primary misstep? To assume that those who consume information in 30-second bits have no way to access what’s available in 2,000-page chunks, that they may not be interested at all because it wouldn’t appeal to them. If anything, the Millennials’ engagement with social attributes like memetics, network effects and virality has only revealed more efficient methods of knowledge-dissemination, ones its producers weren’t able to leverage even a decade ago.

We live in a time when anything can become appealing to anyone with the right alterations. Why would Mein Kampf be immune to this?

NASA readies to test history’s largest, most powerful booster for its new rocket

NASA’s massive heavy-lift rocket, the Space Launch System, which will one day ferry humans to deep-space destinations and back, has become notorious for the scale of engineering backing it. In September 2014, agency administrator Charles Bolden unveiled the world’s largest welder to mark the start of the SLS’s construction. Next, on March 11, NASA will test the largest and most powerful booster in history, which will power the SLS.

The booster has five segments, adapted from the four-segment version used in the Space Shuttle program. It is 47 meters long, 3.6 meters in diameter, and weighs 801 tons. The test-fire will be conducted by the prime contractor, Orbital ATK, at its T-97 test stand in Promontory, Utah, at 11.30 am EDT (9 pm IST) on Wednesday.

Even though the booster’s components have been verified in the past, the addition of the extra segment makes it a new configuration that engineers must test once more. Wednesday’s test, in this context, will run for the full duration – two minutes – for which the booster will be expected to fire on launch day, although it will be laid horizontally on a test-bed (as in the video below).

The maximum thrust it produces will be about 16 million newtons, burning 5,500 kg of solid propellant per second. On the SLS itself, two such boosters will join four RS-25 engines to generate a combined thrust of 37.3 million newtons. (To compare, an Airbus A380-800 uses four Rolls-Royce Trent engines to generate a combined thrust of 0.96 to 1.68 million newtons to fly.)
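Working backwards from the article’s own figures (my arithmetic; the per-booster number is approximate, so treat the residual as rough):

```python
booster_thrust_n = 16e6    # ~16 million newtons per five-segment booster
total_thrust_n   = 37.3e6  # two boosters plus four RS-25 engines

# Thrust the four RS-25 engines must supply between them.
rs25_total_n = total_thrust_n - 2 * booster_thrust_n
print(f"Four RS-25s: ~{rs25_total_n / 1e6:.1f} million newtons")  # ~5.3 MN

# Compared with an A380-800 at its highest quoted thrust (1.68 MN):
print(f"SLS-to-A380 thrust ratio: ~{total_thrust_n / 1.68e6:.0f}x")  # ~22x
```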

Here’s a video from 2009 showing how one of these tests goes (Don’t miss it when the commentator says, “Amazing display of power” at 1:36).

A second test will happen in March 2016. This month’s test will qualify the booster for operation at 32 degrees Celsius; the one next year will qualify it for performance at 4 degrees Celsius. The disparate temperatures mimic the extremes of the conditions in which the booster will be expected to perform. If both tests are successful, the booster will have to pass a design certification review in the last third of 2016 to finally qualify for use.

The idea for the SLS was born of NASA’s desire to cope with its Constellation Program, which had started to crumble in 2009, when President Barack Obama cancelled all deep-space exploration efforts. In an effort to ensure its $9-billion investment remained fruitful, the agency conceived the SLS, with a new rocket at its heart. A lot of hope also underscores this new commitment: plans for a booster test date back to 2012, while engineers conducted three test-firings of the booster between 2009 and 2011 during development for the Constellation Program’s Ares rocket.

The multipurpose module on board the rocket that will hold humans is called Orion, and the rocket has been designed around it. The first test flight on which Orion and the SLS will fly together has been planned for November 2018, when they will undertake a three-week trip through an orbit just beyond the moon and return to Earth. Orion’s own inaugural (unmanned) mission was successfully completed in December 2014, with a Delta IV Heavy rocket.

NASA plans for the SLS-Orion to be able to get humans to Mars in the 2030s. To get farther – such as to the asteroids between Mars and Jupiter – the agency plans to gradually step up booster capabilities. The 2018 flight will lift 70 tons, with a rocket 64.6 meters tall and 8.4 meters wide.


Current projections place the ultimate goal at a lift capability of 130 tons to low-Earth orbit – the heaviest ever. The corresponding rocket will feature two five-segment boosters, four RS-25 engines and two J-2X engines, more closely resembling the giant Saturn V of the 1960s and 1970s, which was 110 meters tall.

The booster’s prime contractor, Orbital ATK, was formed by the merger of Orbital Sciences and erstwhile prime contractor ATK (Alliant Techsystems) on February 9 this year. Orbital itself hit a rough patch in October 2014 when its flagship Antares rocket exploded seconds after taking off from a launchpad in Virginia. It was carrying supplies for a resupply mission to the International Space Station. Wednesday’s test-fire will be Orbital ATK’s first ‘mission’ as a merged company.

Both Orbital ATK and Boeing – the contractor for the SLS’s core stage, which will carry cryogenic liquid hydrogen and liquid oxygen – have reused many features and parts from the erstwhile Space Shuttle program, helping cut costs. Ironically, however, the SLS boosters, which contain parts from 23 different Space Shuttle missions across 25 years, are not reusable.

NASA will broadcast the test live on its television channel, NASA TV. The event is sure to revitalize public support for an agency that seems long past its golden days, yet whose heritage and legacy are important if the USA is to recapture its former foothold in space exploration.

At the same time, whether even a successful test-fire will bode well in the eyes of Congressmen is hard to say: the first SLS test-flight was originally planned for 2017, and now it’s 2018. The size of the program also makes it especially susceptible to changing political winds, as a result of requiring constant and substantial funding to be kept alive.

Making matters worse, even as it motors along with the development of the SLS itself, NASA hasn’t yet said where the rocket will actually go besides Mars.

Featured image: Artist concept of NASA’s Space Launch System (SLS) 70-metric-ton configuration launching to space. Credit: NASA/MSFC