
Issues with Data Developed Using AI & AI Laws in India

July 10, 2023 | Intellectual Property

Who is responsible for erroneous data developed using AI? Although there is no specific AI legislation in India, there have been attempts to interpret and address such concerns with the help of other legal provisions and Fundamental Rights.

Artificial intelligence (AI) is arguably the next major technological leap the world is taking. By simulating human intelligence across different processes, AI enables seamless workflows and greater efficiency once the software is trained to perform various tasks.

AI has allowed companies to improve several aspects of their businesses. They do so by training AI models on large volumes of data, teaching the models to detect correlations and patterns and to use those patterns to make predictions when they encounter similar data in the future.
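As a rough illustration of this train-then-predict pattern, consider the minimal sketch below. It assumes the scikit-learn library, and the customer data and features are made up for illustration; it is not a real business system.

```python
# Minimal sketch (illustrative data): train a model on historical records
# so it can predict outcomes for similar, unseen data.
from sklearn.linear_model import LogisticRegression

# Hypothetical historical records: [monthly_orders, support_tickets]
# and whether each customer renewed (1) or churned (0).
X_train = [[12, 1], [3, 8], [9, 2], [2, 9], [11, 0], [4, 7]]
y_train = [1, 0, 1, 0, 1, 0]

model = LogisticRegression()
model.fit(X_train, y_train)      # learn correlations/patterns from the data

# Predict for a new, similar customer the model has never seen.
print(model.predict([[10, 1]]))  # e.g. [1] -> likely to renew
```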

However, it is vital to note that although AI offers a wide range of benefits, certain issues must be addressed:

  • Who shall be held responsible for any erroneous data developed using AI?
  • How does the law in India address issues related to data developed by AI?

Who shall be responsible for erroneous data developed using AI?


Although we know how capable AI can be, a question quite often asked is ‘who shall be responsible for any incorrect data developed using AI?’ This question usually arises with opaque forms of AI, which rely on numerous parameters to analyze data before the system takes a decision, action or inaction.

During the development and deployment process, an AI system is handled and trained by many different individuals and entities, and the development environment itself can have a substantial effect on the system’s performance.
As the saying goes, ‘too many cooks spoil the broth.’

  • The involvement of several entities during the development and deployment of AI systems complicates the task of assigning responsibilities and determining each party’s liability.
  • In civil suits, the cause of action must first be established; however, an opaque AI system, coupled with the huge number of factors behind each of its decisions, makes it quite difficult to identify any error in the data produced and to determine who is liable for it.
For example, in 1981, an engineer working at a Kawasaki Heavy Industries plant in Japan was killed by a robot used to perform certain tasks at the plant, in what is often cited as the world’s first death caused by a robot. The robot had not been switched off while the engineer was working on repairs; it mistook him for an obstruction on the manufacturing line and swung its hydraulic arm to clear the ‘obstruction’, pushing him into an adjacent machine and killing him instantly. Although decades have passed since this incident, there is still no criminal-law framework for cases where robots are involved in a crime or cause injury to someone.

Another case of negligence occurred in 2018, when Elaine Herzberg was hit by a test vehicle operating in self-driving mode. This came to be known as the first recorded death caused by a self-driving car. Uber’s Advanced Technologies Group (ATG) had modified the vehicle and added a self-driving system. Although a human operator was sitting in the car as backup, they were looking at their phone when the collision occurred. After the incident, the National Transportation Safety Board (NTSB) investigated the matter and made the findings below.

  • ATG’s self-driving system had detected the individual 5.6 seconds before impact. Although the system continued to observe and record her until impact, it never correctly classified what was crossing the road as a pedestrian, nor did it predict her path.
  • If the vehicle operator had been attentive, they would have had ample time to avoid the crash or at least reduce the damage.
  • Although Uber ATG could supervise the behavior of their backup vehicle operators, they seldom did so. Their decision to remove the second operator from test runs of the self-driving system exposed their ineffective oversight.
However, irrespective of the findings in this case, it was quite difficult to determine liability for the damage done: whether it lay with the safety driver, the ATG group or the technology itself.

Another case occurred in 2017, when Hong Kong-based investor Li Kin-Kan tasked an AI-led system with managing USD 250 million of his own cash along with additional leverage taken from Citigroup Inc, bringing the total to USD 2.5 billion.

The AI system had been built by a company based in Austria and was operated by London-based Tyndaris Investments. It was designed to scan online sources such as real-time news and social media platforms and make predictions on US stocks.

However, by 2018 the system was losing money regularly, at times more than USD 20 million in a single day. The investor therefore chose to sue Tyndaris Investments for allegedly overstating the capabilities of the AI. Yet again, determining the liable entity, whether developer, marketer or user, was not so simple.

It is vital to understand that AI can be biased and produce results that work against certain sections of society, leading to inconsistent outcomes and potentially creating conflict. For example, in 2015, Amazon experimented with a machine learning-based solution that assessed applicants by analyzing old resumes previously submitted to the company. The system went on to rate male applicants higher than female applicants: because the applications it was trained on were predominantly from men, the system inferred that male candidates were preferred over their female counterparts.
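The mechanism behind this failure is easy to reproduce. The sketch below is a deliberately simplified, hypothetical Python illustration, not Amazon’s actual system: a ‘model’ that merely extracts hire rates from skewed historical data will reproduce that skew when scoring new applicants.

```python
# Hypothetical, simplified illustration: a model trained on skewed
# historical hiring data learns the skew itself, not merit.

# Historical resumes as (gender, hired) pairs: mostly male, mostly male hires.
history = [("M", 1)] * 80 + [("M", 0)] * 20 + [("F", 1)] * 5 + [("F", 0)] * 15

# The "pattern" a naive model extracts: the historical hire rate per gender.
hire_rate = {
    g: sum(hired for gender, hired in history if gender == g)
       / sum(1 for gender, _ in history if gender == g)
    for g in ("M", "F")
}
print(hire_rate)  # {'M': 0.8, 'F': 0.25} -- the bias baked into the data

# Scoring new applicants with this learned pattern reproduces the bias:
# male applicants are systematically rated higher, regardless of merit.
print(hire_rate["M"] > hire_rate["F"])  # True
```

Any real system is far more complex, but the principle holds: a model can only reflect the data it was trained on.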

It must further be noted that incorrect decisions taken by AI systems can lead to individuals being excluded from certain services or benefits. As AI systems are fundamentally probabilistic, an incorrect decision in consequential situations, such as identifying a criminal suspect or a welfare beneficiary, can cause huge complications. A person wrongly excluded from a beneficiary list may be barred from availing certain services or benefits, while a person wrongly identified as a criminal can end up with their life ruined because of an error in the system.
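To make the point concrete, here is a minimal, purely hypothetical Python sketch; the function name, scores and threshold are illustrative assumptions, not any real system’s logic. A probabilistic score only becomes a decision once a threshold is applied, and errors on either side of that threshold carry real consequences.

```python
# Hypothetical sketch: a probabilistic system outputs a likelihood, and a
# decision threshold turns it into a yes/no decision. Every threshold
# trades false positives against false negatives.

def flag_as_match(similarity: float, threshold: float = 0.90) -> bool:
    """Flag a record as a match when the model's score clears the threshold."""
    return similarity >= threshold

# An innocent person whose record happens to score 0.91 is wrongly flagged:
print(flag_as_match(0.91))  # True -> a false positive with serious consequences

# A genuine beneficiary whose record scores 0.88 is wrongly excluded:
print(flag_as_match(0.88))  # False -> a false negative that denies benefits
```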

Legal Approaches for Managing AI in India


High-risk sectors such as healthcare and finance already have specific rules and guidelines for their products and services. As such, simply introducing AI into these sectors in decision-making roles, without adapting those rules, may not be appropriate.

For example, not only is there no anti-discrimination law that directly governs decision-making by AI, but existing laws are also silent on the means of decision-making they govern.

As such, it would fall squarely within the authority of anti-discrimination legislation to regulate decisions made using AI, especially when the entity using the decision-making AI has a constitutional or legal obligation to remain fair and unbiased.

Some existing laws aim to protect against AI-related issues in certain cases; however, they need to be suitably adapted to tackle the challenges AI creates. In addition, the unique aspects of different sectors create a need for sector-specific AI laws. Considering the quick pace at which the technology is developing, AI ethics principles and guidelines will also need to be reviewed on an ongoing basis.

Machine learning models learn by identifying patterns in data, and their performance is usually evaluated on a held-out portion of the dataset, called the ‘test dataset’. These datasets may not necessarily represent real-world scenarios. As such, when the connection between input features and outputs is not properly understood, it becomes quite difficult to foresee a model’s performance in a new environment with uncontrolled data, making it hard to deploy and scale such AI systems reliably.
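As a rough illustration of this train/test workflow, the sketch below (assuming the scikit-learn library, with synthetic data standing in for a real dataset) fits a model on a training split and scores it on a held-out test split. The final comment carries the caveat made above: a good test score says nothing about data drawn from a different, uncontrolled distribution.

```python
# Illustrative sketch (assumes scikit-learn): fit on a training split,
# evaluate on a held-out "test dataset".
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data stands in for a real, curated dataset.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(model.score(X_test, y_test))  # accuracy on in-distribution test data

# Caveat: the test split comes from the same distribution as the training
# data. Real-world inputs may not, so this score is no guarantee of
# reliable performance once the system is deployed at scale.
```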

For example, consider a system trained to identify animals using various datasets. When deployed in an actual uncontrolled environment, the system failed to deliver accurate results: it was found that the system had classified images in the earlier datasets based on their backgrounds rather than the animals themselves. As such, although effective at analyzing test datasets, the AI system must be deemed incapable of handling issues in the real world.

Besides, when AI systems are trained on large amounts of data that include individuals’ personal data, certain privacy concerns are bound to arise. In the absence of privacy protections, AI systems may record and retain all the personal details of individuals without their explicit consent. This, in turn, harms individuals’ interests by disregarding their preferences about how their data is used.

The Indian judiciary interprets ‘right to life and personal liberty’ under Article 21 of the Constitution of India to include several fundamental and vital aspects of human life.

In R. Rajagopal vs. State of Tamil Nadu, the right to privacy was held to fall under Article 21, a principle relevant to privacy matters that arise when AI processes personal data.

Further, in K.S. Puttaswamy vs. Union of India, the Supreme Court of India stressed the need for a comprehensive legislative framework for data protection capable of addressing the issues that arise out of AI usage. As AI can also be unfair and discriminatory, Articles 14 and 15, which deal with the right to equality and the right against discrimination respectively, may be attracted in such cases too.

India currently has no comprehensive regulatory framework for the use of AI systems, although some sector-specific frameworks address the use and development of AI.

In 2018, the National Institution for Transforming India (NITI Aayog) introduced the National Strategy for Artificial Intelligence (NSAI), which discussed several provisions related to the usage of AI systems. NITI Aayog’s suggestions are listed below:

  • Setting up a panel that includes the Ministry of Corporate Affairs and the Department of Industrial Policy and Promotion to monitor the changes needed in intellectual property laws.
  • Developing suitable intellectual property procedures for AI innovations.
  • Introducing legal frameworks for data protection, security and privacy.
  • Developing sector-specific ethics guidelines.
The Ministry of Electronics and Information Technology (MeitY) constituted four committees to assess various ethical issues. In addition, the Bureau of Indian Standards has set up a new committee for the standardization of AI, and the Government is working on several safety parameters to reduce the risks associated with the use of this technology.

In January 2019, the Securities and Exchange Board of India (SEBI) issued a circular to stockbrokers, depository participants, recognized stock exchanges and depositories, and in May 2019 a similar circular to all mutual funds (MFs), asset management companies (AMCs), trustee companies, boards of trustees of mutual funds and the Association of Mutual Funds in India (AMFI), laying down reporting requirements for AI and machine learning (ML) applications and systems offered and used.

In 2020, NITI Aayog prepared working documents proposing an oversight body and the enforcement of responsible AI principles, covering the following key aspects:

  • Evaluating and applying principles of responsible AI.
  • Forming legal and technical frameworks.
  • Setting specific standards through clear design, structure and processes.
  • Educating individuals and raising awareness about responsible AI.
  • Creating new tools and techniques for responsible AI.
  • Representing India’s position on responsible AI at the global level.

Conclusion


As of now, India does not have any rules or legislation specifically for AI. The closest instrument is the draft Personal Data Protection Bill, 2019, which was designed as comprehensive legislation outlining the privacy protections that AI solutions must adhere to.

For now, India’s aim should be to assess the need for AI-related laws and establish suitable legislation. Given the incredibly rapid growth of AI and its involvement across sectors, this needs to be done as quickly as possible to avoid the issues its use can cause.
