Sharaab or Kebab: A Tastefully Legal Take on Tunday Kebabs and Champagne.

“Remember, gentlemen, it’s not just France we are fighting for, it’s Champagne!”

Winston Churchill, Prime Minister of Great Britain (1874–1965).

Given that food is an unendingly fascinating subject in my life, it follows that some of the most animated and involved conversations I’ve had revolve around it. So, when Haji Murad Ali’s Tunday Kebab opened in Bangalore, it was not quite incidental that a discussion arose around its taste and authenticity.


Tunday Kebabi’s famous Galouti Kebab

“It doesn’t taste quite like Lucknow.”

“It’s Tunday Kebab, but not exactly.”

“Tunday Kebab sirf Lucknow main khane chahiye, nahi toh swaad nahi aata.” (Tunday Kebab should only be eaten in Lucknow; it doesn’t taste the same anywhere else.)

Which made me think about the tastes that we associate with particular places. I mean, think about it – if you’ve had pachranga achar from Panipat, no one could fool you with garden-variety, store-bought stuff from anywhere else in the country, could they?

This idea of food being specific to a place got more intriguing the more I thought about it!

We all know Champagne’s story. If it hadn’t been for the French being, well, French, about their bubbly, sparkling white wine produced anywhere in the country would be sold at champagne rates. In fact, did you know that when the French authorities tried to redraw the boundaries of the region of Champagne for economic and revenue purposes early in the 20th century, it sparked the Champagne Riots of 1910 and 1911? Potent stuff, that bubbly is!


Image above and on article thumbnail originally published on MCU035’s photostream on Flickr. Image published under a Creative Commons Attribution 2.0 Generic License.

Champagne is not champagne unless it is produced in the Champagne region of north-east France. White wine, with sparkling bubbles, produced anywhere else in the world is just sparkling wine. To be champagne, it must be produced in Champagne.

And so champagne gave the world the first laws on geographical indications.

Geographical indications – that is, information identifying the exact area of origin of a product – emerged soon after the monk Dom Pérignon discovered stars in his wine. As early as the latter half of the 18th century, it became increasingly necessary to protect champagne from “impostors” – white wine with some fizziness, but of far inferior quality, being sold as “champagne from Champagne”. By then, pretty much anyone with gold to spare and a taste for the good life had encountered champagne and was increasingly wary of cheap reproductions. The logical step was to regulate the production of bubbly, and so emerged the concept of geography-specific protection of products.


Image above and on article thumbnail originally published on facelikebambi’s photostream on Flickr. Image published under a Creative Commons Attribution 2.0 Generic License.

Towards the beginning of the 20th century, the appellation d’origine contrôlée (AOC – “controlled designation of origin”), the first of the regulatory frameworks granting recognition to region-specific products, came into existence. Since then, France has closely protected many agricultural products, especially food items such as cheeses, by providing recognition of the standards and quality associated with a product that originates in a particular region.

Needless to say, the world figured out how economically viable this is.

And now, India is protecting its kebabs, along with the laddoos from Tirupati and its very own Nashik wine! Recently, a Geographical Indication (GI) application for Lucknow’s kebabs was sent to the office of the Controller General of Patents, Designs and Trade Marks for registration. If the registration is granted, kebabchis in Lucknow can better maintain the standard of the product and ensure protection of the brand. They will also be able to standardise pricing. But most importantly, they will be able to stop export of the product unless it meets the industry standard as recognised by the registered GI.

So no matter where Miyaan Haji Murad Ali’s family makes its kebabs, to be Tunday Kebab, it’ll have to taste just like Lucknow!

(Suhasini Rao Kashyap is part of the faculty at


The Three Laws

No discussion of science fiction can be complete without Isaac Asimov’s Three Laws of Robotics; Asimov is considered by many to be the father of hard science fiction. The laws are one of the most common themes running through science-fiction writing – especially writing that deals with robots.


Isaac Asimov

The Three Laws of Robotics are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

While the Three Laws are quite evidently rudimentary in nature – Asimov was no lawyer, after all – many ethicists and roboticists accept them as a starting point for discussions on the applications of artificial intelligence and on governing the conduct of robots towards humans.
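As a thought experiment only (this sketch is not from the article, and all the names and fields in it are illustrative assumptions), the Three Laws can be read as a strict precedence ordering: every proposed action is tested against the laws in order, and a higher law always overrides a lower one.

```python
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool           # would the action injure a human?
    inaction_harms_human: bool  # would *not* acting let a human come to harm?
    ordered_by_human: bool      # was the action ordered by a human?
    endangers_robot: bool       # does the action risk the robot itself?

def permitted(action: Action) -> bool:
    # First Law: never harm a human, and never allow harm through inaction.
    if action.harms_human:
        return False
    if action.inaction_harms_human:
        return True  # the robot must act, overriding the laws below
    # Second Law: obey human orders (any order reaching this point is
    # already known not to conflict with the First Law).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two laws.
    return not action.endangers_robot

# An order that would injure a human is refused, because the
# First Law outranks the Second.
print(permitted(Action(True, False, True, False)))   # False
# An order that merely endangers the robot must be obeyed, because the
# Second Law outranks the Third.
print(permitted(Action(False, False, True, True)))   # True
```

The toy model makes the hierarchy visible: the order of the `if` statements *is* the precedence of the laws, which is exactly why the variations Asimov explored (reordering, weakening, or adding laws) produce such different robot behaviour.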

The Three Laws are believed to have their genesis in general expectations about human behaviour. In one of his short stories, “Evidence”, Asimov expounds on this through the protagonist, pointing out that humans are typically expected to refrain from harming one another – the basis for the First Law. As for the Second Law, humans are generally expected to obey authority figures such as the police, judges, and ministers, unless obeying them would conflict with the first principle of not harming another human. Lastly, humans are not expected to harm themselves, except perhaps when sacrificing themselves in pursuit of the first and second principles.

Note that when it comes to humans, none of these is set in stone. Society accepts certain exceptions – soldiers killing other soldiers during wars and conflicts, people refusing to follow orders that are blatantly immoral, or euthanasia, which is now legal in several jurisdictions. When it comes to machines, however, there is a clear moral ambivalence – some would say even fear – about imbuing them with free will beyond a point.

The Three Laws can only serve as a foundation; indeed, it is sometimes said that they are already obsolete. Asimov himself demonstrated twenty-nine variations in his writing and even propounded a Zeroth Law, which would override the other three. Rules always follow the advent of technology – and we are only at the beginning of imagining, let alone understanding, what we are capable of in terms of creating machines that today look like Honda’s Asimo or Sony’s Aibo (now discontinued) but that someday may look like us, talk like us, and act like us.

New challenges will come to light, especially given that some of the most promising developments in robotics are taking place under military control – the United States’ military plans to have a fifth of its combat units fully automated by the year 2020 – where the Three Laws are not likely to find much purchase. There is no doubt, however, that the creators of robots with military applications will build in some level of protection for their users and compatriots; imagine the hullabaloo if an autonomous machine were to cause American casualties. Another very practical challenge is posed by the use of robotic assistants for the elderly, a field of engineering being pioneered in Japan because of its rapidly ageing population. Recently, Google announced that it had launched a fleet of automated cars that had driven 1,40,000 km across California with minimal human intervention, and that they had been involved in only one minor accident: the Google car was rear-ended by a human-driven one. Liability will be a major issue: if an autonomous machine is programmed with the ability to learn from its circumstances, who is responsible for its actions – its owner, its user, or its creator? Or will the machine itself someday be recognised as having rights and responsibilities?

(Abhishek Shinde is a New Delhi-based lawyer. This post was first published here.)