
This blog post originally appeared as part of CalypsoAI’s 2020 State of the Union Report. To read the full report, click here.

In November 2020, a Detroit man was wrongfully incarcerated after facial recognition technology misidentified him as a burglary suspect. Nearly a year later, after a judge reviewed the surveillance footage, he was cleared and released. Just months later, another Detroit man was released from jail when prosecutors realized they had made the same grave mistake: trusting the validity of surveillance software to incarcerate an innocent man.

In their seminal 2018 research paper, ethics researchers Joy Buolamwini and Timnit Gebru found that leading facial recognition software performed worse at identifying women and people of color than at classifying white male faces. In 2019, research from NIST similarly demonstrated that facial recognition software misidentifies Black and Asian faces 10 to 100 times more often than white faces, indicating a troubling continuation of the trend of algorithmic bias.

In the summer of 2020, amid a mushrooming protest movement against the highly publicized police killings of Black Americans, IBM, Amazon, and Microsoft announced they would end sales of their facial recognition technology to police in the US. By contrast, Chinese tech company Huawei tested facial recognition software that scans crowds for ethnic features, sending "Uighur alarms" to Chinese authorities, who detained members of the Muslim minority in prison camps.

Many of the ethical issues in AI stem from the data used in training. Applying rigorous standards to training data development is essential but can be challenging. For example, written language data is rife with bias that, in most cases, is unintended and difficult to detect. AI development exploits patterns in that data, often in ways developers neither know about nor intend. Without rigorous test and evaluation, those undetected patterns will continue to permeate ML models, resulting in potentially biased decision-making.
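One way such hidden patterns can be surfaced before training is a simple data audit: comparing how often a positive label co-occurs with each demographic group in the training set. The sketch below is a minimal illustration with hypothetical data (the group names, labels, and threshold are assumptions, not taken from any real dataset or from CalypsoAI's tooling); a real audit would cover many attributes and use proper statistical tests.

```python
from collections import Counter

def positive_rate_by_group(records):
    """Return the fraction of positive labels for each group in (group, label) pairs."""
    totals, positives = Counter(), Counter()
    for group, label in records:
        totals[group] += 1
        positives[group] += label
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical training data: (group, label) pairs.
data = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
        ("B", 0), ("B", 0), ("B", 1), ("B", 0)]

rates = positive_rate_by_group(data)
# A large gap between groups flags a correlation a model could silently learn.
gap = max(rates.values()) - min(rates.values())
print(rates, gap)
```

Here group A carries positive labels three times as often as group B, so a model trained on this data could learn group membership as a proxy for the label even if no one intended it.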

AI has the potential to exploit unintended relationships and patterns in the data used in its development. To create transparent AI, models must undergo robust testing and evaluation to validate and verify the basis of their conclusions.
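Concretely, one basic test of this kind is to break a model's evaluation results down by demographic group rather than reporting a single aggregate score, since aggregate accuracy can hide exactly the disparities described above. The sketch below is a minimal, hypothetical example (the group names and predictions are invented for illustration and do not represent any real model or the VESPR platform's methodology):

```python
def error_rate_by_group(examples):
    """Compute per-group error rates from (group, true_label, predicted_label) triples."""
    totals, errors = {}, {}
    for group, y_true, y_pred in examples:
        totals[group] = totals.get(group, 0) + 1
        errors[group] = errors.get(group, 0) + (y_true != y_pred)
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical evaluation results for a classifier.
preds = [("A", 0, 0), ("A", 1, 1), ("A", 0, 0), ("A", 1, 1),
         ("B", 0, 1), ("B", 1, 1), ("B", 0, 1), ("B", 1, 0)]

group_errors = error_rate_by_group(preds)
print(group_errors)
```

In this toy example the classifier is perfect on group A but wrong three times out of four on group B, a disparity that would vanish into a single overall accuracy number but mirrors the facial recognition failures described above.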

CalypsoAI’s VESPR platform provides AI/ML developers with tools to identify and address bias, tune hyperparameters, and manage model complexity, among other critical factors. Using the Secure Machine Learning Lifecycle (SMLC) platform, AI/ML developers can manage threats across the algorithm life cycle, calibrate data so that only fair metrics influence the model, and conduct tests to ensure trust and transparency. It is imperative that we prioritize rigorous testing and evaluation of AI models; anything less is irresponsible. AI solutions are complex, and to be used justly, ML models must be evaluated against robust standards and demonstrate efficacy and validity.