
This blog post originally appeared as part of CalypsoAI’s 2020 State of the Union Report. To read the full report, click here.

Throughout 2020, we saw impressive leadership initiatives in the field of AI, from Denmark becoming the first country to introduce mandatory legislation for AI and data ethics, to the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) unveiling a new standard, Information Technology – Artificial Intelligence – Overview of Trustworthiness in Artificial Intelligence, aimed at improving the trustworthiness of AI systems.

In February 2020, the United States Department of Defense (DoD) adopted a set of ethical AI principles calling for Responsible, Equitable, Traceable, Reliable, and Governable AI. In October 2020, former Under Secretary of Defense for Policy Michele Flournoy, incoming Director of National Intelligence Avril Haines, and WestExec Senior Associate Gabrielle Chefitz made a compelling case for the DoD to adopt a Test & Evaluation, Validation & Verification (TEVV) enterprise for machine learning platforms in the white paper Building Trust Through Testing.

In May 2020, Denmark became the first country to introduce mandatory legislation regulating AI and data ethics. The legislation requires companies to disclose their data ethics policies, describe the algorithms used in their platforms, and demonstrate that those algorithms meet Danish transparency requirements.

In October 2020, Australia introduced the AI Action Plan, the latest in a series of Australian initiatives targeting AI regulation and a continuation of the AI Ethics Framework released the previous year. The Action Plan requests feedback from Australian citizens and provides guidance for the integration of trusted, ethical AI in Australian society, with the intention of coordinating government policy and national capability to make Australia a leading digital economy by 2030.

The scientific research community and tech industry similarly initiated ethical leadership campaigns in 2020, self-imposing robust ethics requirements. For example, researchers who submitted papers to NeurIPS, one of the largest AI research conferences in the world, were required to include a broader impact statement addressing the ethical and societal consequences of their work. BMW likewise adopted self-imposed AI ethics requirements in October 2020, demonstrating private-sector leadership in the ethical regulation of AI.

AI offers solutions to aid in cultivating a better, more ethical world. However, AI's potential to positively impact society is limited absent a deep understanding of the data that drives our autonomous decision-making tools. For that reason, CalypsoAI was pleased to see the numerous initiatives across the private and public sectors focused on ethical AI governance. Our secure machine learning lifecycle platform, VESPR, delivers solutions at the forefront of emerging global AI standards. Through VESPR, AI/ML creators and consumers have the power to shape a better world that values trust and transparency in technology.
