“Before we wake up and find that the year 2024 looks like the book ‘1984’, let’s figure out what kind of world we want to create.”

– Bradford L. Smith, EVP, Microsoft

On March 23, 2016, Microsoft launched its cutting-edge AI chatbot, Tay, on Twitter.

It took less than a day, however, for Tay, this playful conversational AI chatbot, to turn toxic, all because of a crash course in racism and bias from fellow Twitter users.

 

[Image: screenshots of Tay’s tweets. Source: Twitter]

As a result, Microsoft had to pull the plug on Tay and scrap the project completely.

In October 2018, it emerged that Amazon had quietly scrapped the much-anticipated AI recruitment tool it had been building since 2014 because the tool was biased against women. The training data, drawn from CVs submitted over a ten-year period, carried an inherent bias in favor of men, so the system consistently rated women’s résumés lower than men’s.

And there are several such instances in the AI world. All these examples raise a very pertinent question about AI adoption:

Do we humans completely understand the ramifications of AI adoption?

In all our discussions with clients, the answer that echoes in unison is a resounding no. This means there is a burning need for organizations to embed ethics in their AI strategies, and in this article I share our perspective on how to do that.

Some of the ethical questions that surface with an AI adoption strategy are:

1. AI Singularity:

Will we end up creating something that is more intelligent than us?

Can we ensure that we will be in complete control or will we end up losing control over AI?

2. Job Apocalypse:

Will AI ruthlessly take away people’s livelihoods?

3. Lack of Transparency:

With each new release of a sophisticated deep learning architecture, are we building something so complex that it is inexplicable even to us?

4. Inclusion & Diversity:

Are we building something which has an inherent bias and is not inclusive?

E.g., Uber’s facial-recognition solution locked transgender drivers out of the system because it wasn’t trained on LGBTQ data, in turn costing them their jobs.

These questions rightly point to the need to build Responsible AI.

What do we mean by Responsible AI?

Microsoft has laid out internal guiding principles that summarize Responsible AI under the following six heads:

1) Fairness: The AI solution should be free of algorithmic bias and treat everyone fairly

2) Reliability and Safety: Human-machine trust should stay intact at all times

3) Privacy and Security: The AI system should know where to draw the line so it doesn’t invade consumers’ privacy or compromise their security

4) Inclusiveness: AI should be empowering, not overpowering

5) Transparency & Explainability: We should be able to explain what we have built

6) Accountability: Companies should be accountable for the actions of their AI systems

Designing trustworthy AI solutions that reflect human ethical principles such as the ones above is what Responsible AI stands for. The above is by no means an exhaustive set of principles, but it is one perspective on the steps towards Responsible AI.
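To make the fairness principle a little more concrete, here is a minimal sketch, in plain Python with made-up numbers, of one common bias check: comparing selection rates across groups (demographic parity). The function names and the example data are my own illustration, not part of any vendor’s framework.

```python
def selection_rate(decisions):
    """Fraction of positive (e.g. 'shortlisted') outcomes in a list of 0/1 decisions."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in selection rate between any two groups.

    A gap near 0 suggests the model selects at similar rates across groups;
    a large gap flags the model for deeper review.
    """
    rates = [selection_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical screening outcomes (1 = shortlisted, 0 = rejected)
outcomes = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # selection rate 5/8 = 0.625
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # selection rate 2/8 = 0.25
}
gap = demographic_parity_gap(outcomes)
print(f"Demographic parity gap: {gap:.3f}")  # 0.625 - 0.250 = 0.375
```

A single metric like this is not a full fairness audit, but a gap this large is exactly the kind of skew that sank Amazon’s recruiting tool, and it is cheap to measure before a system ships.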

So, what should be done to ensure that our AI doesn’t meet its doomsday? What are some ways to embed Responsible AI in our corporate ethos?

We answer this question at the following two tiers:

Global:

Carving out a globally accepted set of principles for ethics in AI is paramount. Globally, at least 84 public-private initiatives are underway to describe high-level principles for ethical AI. We need convergence across these initiatives at a global level, along with compliance guidelines akin to GDPR that carry heavy downsides for non-compliant organizations.

Organizational:

To start with, we feel there is a dire need for corporate governance frameworks to position AI as an alliance of a machine’s IQ with a human’s EQ, rather than as an autonomous solution.

Corporates need to formulate initiatives around:

1. Cultural awareness within the organization towards Ethical AI & its framework

2. Guidelines around retraining & reskilling people about to be impacted by the implementation of AI

3. Ensuring legal & regulatory compliance as a core component of AI scenario development

4. Algorithm auditing: ensuring that solution owners have complete authority to audit compliance at any given time, using algorithm audits in lab environments
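As one illustration of what an algorithm audit in a lab environment might look like, the sketch below (plain Python, with a hypothetical toy model and field names of my own invention) runs a simple counterfactual test: change only a protected attribute on each record and flag any case where the decision flips.

```python
def counterfactual_audit(model, records, protected_field, values):
    """Flag records whose decision changes when only the protected field changes.

    `model` is any callable mapping a record dict to a decision.
    Returns a list of (record, alternate_value) pairs that changed the outcome.
    """
    flagged = []
    for record in records:
        baseline = model(record)
        for value in values:
            if value == record[protected_field]:
                continue  # skip the record's own value
            variant = {**record, protected_field: value}
            if model(variant) != baseline:
                flagged.append((record, value))
    return flagged

# Deliberately biased toy model, for demonstration only
def toy_model(record):
    return 1 if record["score"] > 50 and record["gender"] != "F" else 0

applicants = [
    {"id": 1, "score": 80, "gender": "M"},
    {"id": 2, "score": 80, "gender": "F"},
    {"id": 3, "score": 30, "gender": "M"},
]
issues = counterfactual_audit(toy_model, applicants, "gender", ["M", "F"])
print(f"{len(issues)} decision(s) depend on the protected attribute")  # 2
```

An audit like this requires no access to the model’s internals, only the ability to query it, which is why it fits naturally in a lab environment where solution owners can run it on demand.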

With these steps, organizations can make meaningful strides towards building trustworthy AI.

Disclaimer: We don’t expect these guidelines to create a utopian AI world that is completely flawless; they are an attempt to ensure that we don’t end up in a dystopian one, and that we continue working towards a better future for all of us.

How do you see ethics being incorporated into the AI adoption roadmap of organizations? Please comment below.

Acknowledgement: Microsoft, Accenture, INSEAD, University of Oxford
