
Standardizing Ethical Design For Artificial Intelligence And Autonomous Systems

While artificial intelligence (AI) offers a wide range of applications, some remain concerned that the technology can be an equally powerful tool for causing harm. In line with this, the Institute of Electrical and Electronics Engineers (IEEE) Standards Association proposed standards for the development and use of AI. The implications of this standardization effort were discussed in depth in the paper, “Standardizing Ethical Design for Artificial Intelligence and Autonomous Systems,” by Joanna Bryson and Alan Winfield.

Concerns surrounding AI

01. An AI takeover?

A common fear about AI is that the technology will become advanced enough to surpass human abilities and then drive the human race to extinction. In some respects, AI is already superior to humans: machines can perform complex mathematical tasks better, play chess and Go better, and remember more information for longer. But these capacities have not led to machine ambition. Such machines may outperform us, but each excels only within a single domain, while humans are far more complex creatures whose abilities are not confined to any one domain.

02. A destabiliser?

Another concern is that AI will displace workers, since its abilities in certain tasks can far surpass those of humans, potentially rendering some human jobs obsolete. Societies with greater access to AI may also develop faster than others. In effect, AI can widen existing inequalities.

03. A threat?

AI systems also hold personal information. If used maliciously, AI could be turned against a person’s privacy, personal liberty, and autonomy.


Standards and AI ethics

According to Bryson and Winfield, “Standards are consensus-based agreed upon ways of doing things, setting out how things should be done.” Complying with such standards therefore helps ensure the safety, security, and reliability of AI.

To this end, the IEEE created the Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, which aims to ensure that every AI designer is educated and trained to prioritize ethical considerations when developing AI systems.

The first product of the initiative is Ethically Aligned Design (EAD), which lays down 60 draft issues and recommendations encompassing:

  • General Principles
  • Embedding Values into Autonomous Intelligent Systems
  • Methods to Guide Ethical Design
  • Safety and Beneficence of AI
  • Personal Data and Individual Access Control
  • Reframing Autonomous Weapons Systems
  • Economics and Humanitarian Issues
  • Laws

From these issues, four standards working groups have generated their respective candidate standards:

  • P7000—Model Process for Addressing Ethical Concerns during System Design
  • P7001—Transparency of Autonomous Systems
  • P7002—Data Privacy Process
  • P7003—Algorithmic Bias Considerations

The creation of these standards does not guarantee that we have full control of the situation. Nevertheless, they are solid steps toward ensuring that what we reap from AI development benefits humankind.

