The growth of face recognition (FR) technology has been accompanied by consistent assertions (as catalogued by Georgetown University) that demographic dependencies could lead to accuracy variations and potential bias.
For over a decade, our industry has anticipated the integration of computer vision into the digital workflow for numerous use cases, including in-camera face retouching, automated metadata extraction, TV commerce, and audience analytics. As it turned out, the early traction for FR came from security and surveillance applications, rightfully prompting dystopian fears.
While engineers have benefited from the availability of AI and machine learning tools, allowing them to train their models to ever higher accuracy, the fairness and ethics of their algorithms have often been an afterthought.
The author initiated an AI program, comprising a team of PhD data scientists and teams of software engineers, to build a comprehensive computer vision platform for live video. Early in the program, it became clear that ethical guidelines set at the board and executive level were essential to ensure morally acceptable outcomes.
The NIST Information Technology Laboratory quantified the accuracy of FR algorithms for demographic groups defined by sex, age, and race or country of birth.
This paper draws from the NIST report, which dissects these demographic dependencies across more than 100 FR algorithms, and then details the strategies and techniques needed not only to reduce bias but to deliberately design for fairness and socially responsible outcomes. It presents the lessons learned from a leading facial recognition algorithm (SAFR) that NIST has consistently scored highly for accuracy, speed, model compactness, and fairness.
For any engineer wishing to abide by the commitment of the Copenhagen Letter, these are the practical steps: careful curation of the ground-truth data, the application of sound science to the classification of human datasets, and the ethical guidelines that should steer any AI program.
The paper concludes that it is in fact possible to reconcile the business and ethical requirements of an AI program, as illustrated by the NIST results.
C. Garvie, A. Bedoya, and J. Frankle. The Perpetual Line-Up: Unregulated Police Face Recognition in America. Technical report, Georgetown University Law School, Washington, DC, October 2018.
P. Grother, M. Ngan, and K. Hanaoka. Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects. Technical report, NIST, December 2019.
Technical Depth of Presentation
Attendees Who Will Benefit Most from this Presentation
C-suite execs directing AI programs that operate on human datasets.
Engineers training AI models using biometric information.
Take-Aways from this Presentation
In-camera computer vision is rapidly becoming a reality.
Irresponsible applications of the technology could be damaging.
Thankfully, it is possible to engineer AI to meet both business and ethical requirements.