Why we need an AI code of ethics

Artificial intelligence is now a reality, but where is the fine ethical line between us and it, asks Kapil Chaudhary

Big data and artificial intelligence are not just buzzwords anymore. Companies across the world are swiftly adopting these technologies in their everyday businesses.

In a 2019 survey by NewVantage Partners, a big data and business consultancy, 91.6% of Fortune 1000 companies reported that they were increasing their investment in big data and artificial intelligence (AI). Professional services firm Accenture forecasts that AI has the potential to add US$957 billion to India’s gross domestic product by 2035 and lift the country’s gross value added by 15% over the same period.

Kapil Chaudhary

With the increasing adoption of AI and machine learning, many companies are now waking up to the ethical dimensions of these technologies. A survey of 1,400 US executives by Deloitte last year found that 32% ranked ethical issues among the top three risks of AI. However, most organizations do not yet have specific approaches for dealing with AI ethics. It is time for policymakers, thinkers and technology-focused lawyers in India and elsewhere to start examining issues of digital ethics and developing regulatory and governance frameworks for AI systems.

In June 2018, the government of India put out a discussion paper setting out a National Strategy for Artificial Intelligence. The paper discusses the idea of establishing a sectoral regulatory framework to address the privacy issues associated with using AI. The framework involves collaborating with industry to develop sector-specific guidelines on privacy, security and ethics for the manufacturing, financial services, identity, telecommunications and robotics sectors.

Elsewhere in the world, various principles and models for legal frameworks to handle AI are being discussed.

AI governance

Some of the guiding principles currently being considered as core values for an ethical AI framework are the following:

  1. Fairness. Respect for fundamental human rights and compliance with the fairness principle;
  2. Accountability. Continued attention and vigilance over the potential effects and consequences of AI;
  3. Transparency. Improved intelligibility of AI systems for effective implementation;
  4. Ethics by design. Systems should be designed and developed responsibly, applying the principles of privacy by default and privacy by design; and
  5. Bias. Unlawful biases or discrimination that may result from the use of data should be reduced and mitigated (a simple, illustrative check is sketched below).
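
To make the bias principle concrete, the sketch below shows one simple way an organization might audit an AI system's decisions: comparing rates of favourable outcomes across groups, a metric known as demographic parity. Everything here is a hypothetical illustration, not a prescribed standard; the group labels and data are invented, and real-world audits would draw on richer fairness metrics such as equalized odds and disparate impact.

```python
# A minimal, hypothetical bias check: demographic parity gap.
# Illustrative only; real audits use richer metrics and real data.
from collections import defaultdict

def demographic_parity_gap(outcomes):
    """outcomes: list of (group, approved) pairs, approved is 0 or 1.
    Returns the gap between the highest and lowest approval rates,
    plus the per-group rates."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in outcomes:
        totals[group] += 1
        approvals[group] += approved
    rates = {g: approvals[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical loan-approval decisions made by an AI system
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]
gap, rates = demographic_parity_gap(decisions)
print(f"approval rates: {rates}, gap: {gap:.2f}")
# An organization might flag the system for review if the gap
# exceeds a policy-defined threshold.
```

In this toy example the approval rate for group_a (0.75) far exceeds that for group_b (0.25), the kind of disparity an ethical framework would require an organization to detect, explain and mitigate.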

Legal challenges

One legal issue under discussion is whether responsibility for loss or damage caused by AI can be attributed to anyone. Are our legal systems ready and willing to confer a “separate legal personality” on AI?

For example, in English law, an automated system, even a robot, cannot currently be regarded as an agent, because only a person with a mind can be an agent in law. And, in the US, a court has observed that “robots cannot be sued” for similar reasons.

Regulatory bodies in the US, Canada and elsewhere are setting the conditions under which software can enter into a binding contract on behalf of a person. Australia and South Africa already have legislation addressing this issue and the European Parliament has recommended that, in the long run, autonomous AI in conjunction with robotics could be given the status of electronic persons.

In January 2019, Singapore released its model AI governance framework for public consultation, pilot adoption and feedback, as part of efforts to provide detailed guidance to the private sector on addressing ethical and governance issues when deploying AI solutions. The model framework is based on two guiding principles for AI technologies: organizations using AI in decision-making should ensure that the process is explainable, transparent and fair; and AI solutions should be human-centric.

On 11 February 2019, the White House issued an Executive Order on Maintaining American Leadership in Artificial Intelligence. Numerous civil society groups have also released guidelines on the ethical design and implementation of technology. For instance, in January 2017 the MIT Media Lab and the Berkman Klein Center for Internet and Society at Harvard University launched the US$27 million Ethics and Governance of Artificial Intelligence Fund to “bridge the gap between the humanities, social sciences and computing by addressing the global challenges of AI from a multidisciplinary perspective”.

Likewise, the Institute of Electrical and Electronics Engineers (IEEE) has 11 standards working groups dealing with various aspects of AI.

Given the rapid progress of AI in our society, it is imperative that stakeholders urgently explore the challenges, risks and opportunities at the intersection of law, policy, ethics and regulation affecting AI.

Kapil Chaudhary is corporate counsel, India and SAARC region, for Autodesk India. The views expressed in this article are personal and do not necessarily reflect those of the employer.