In July, we noted that the Federal Trade Commission has been at the forefront of shaping standards for generative AI, having just served OpenAI with a civil investigative demand seeking detailed responses to nearly 200 requests for information.1 This is consistent with FTC Chair Lina Khan’s previously stated commitment to “update our approach to respond to new learning technologies and technological changes.”2
This week, the FTC announced a proposed order resolving an investigation into pharmacy chain Rite Aid’s use of AI-powered facial recognition technology. The proposed order also modifies a 2010 order against the company for failing to adequately protect customers’ sensitive personal health information (PHI), including by improperly disposing of records containing PHI in ordinary trash receptacles.
Many readers will be aware of the significant increase in shoplifting from retail stores, often carried out by semi-organized groups of thieves known as “flash mobs.” The FTC’s new complaint focuses on Rite Aid’s use of AI facial recognition technology to combat shoplifting from 2012 to 2022. The system identified customers Rite Aid deemed likely to be involved in shoplifting or other criminal activity and enrolled them in a watchlist, with the goal of driving those individuals away from, and keeping them out of, its stores. According to the complaint, the system generated an alert to employees whenever an individual entering a Rite Aid store matched someone on the watchlist.
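To make the alert mechanism concrete: watchlist systems of this kind typically compare a face embedding captured in-store against the embeddings of enrolled individuals and raise an alert when similarity clears a threshold. The sketch below is a generic illustration only, not Rite Aid’s actual implementation; the embeddings, identifiers, and threshold are all hypothetical.

```python
import math

# Hypothetical enrolled face embeddings. Real systems use high-dimensional
# vectors produced by a neural network; 4 dimensions here for illustration.
WATCHLIST = {
    "enrollee-001": [0.9, 0.1, 0.3, 0.2],
    "enrollee-002": [0.2, 0.8, 0.5, 0.1],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def check_visitor(embedding, threshold=0.95):
    """Return the enrolled ID best matching the visitor, or None if no
    similarity clears the alert threshold."""
    best_id, best_score = None, 0.0
    for enrollee_id, enrolled in WATCHLIST.items():
        score = cosine_similarity(embedding, enrolled)
        if score > best_score:
            best_id, best_score = enrollee_id, score
    return best_id if best_score >= threshold else None

# A visitor whose embedding closely resembles enrollee-001 triggers an alert;
# a dissimilar visitor does not.
print(check_visitor([0.88, 0.12, 0.31, 0.19]))  # "enrollee-001"
print(check_visitor([0.1, 0.1, 0.9, 0.9]))      # None
```

The choice of threshold is exactly where false positives enter: set it too low, or feed the matcher low-quality input images, and innocent visitors will clear it.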
By design, Rite Aid’s system involved employees confronting matched individuals: sometimes verbally accusing them of criminal activity, detaining them, searching them, and even reporting them to the police. But, apparently, thousands of these matches were false positives. False positives and false negatives, often referred to as Type I and Type II errors, respectively, are inherent in any matching system, including human decision-making. Here, however, Rite Aid failed to take steps the FTC deemed reasonable to keep false positives from harming customers: it used low-quality images from CCTV cameras as input data, failed to train employees properly, did not test or monitor the system’s accuracy, and did not even measure its false-positive rate. Some of the examples would be comedic were it not for the real-life embarrassment suffered by innocent Rite Aid customers. In one instance, the system generated thousands of hits for the same enrolled individual within a very short period, even though the stores involved were thousands of miles apart.
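The error-rate measurement the FTC faulted Rite Aid for omitting is straightforward to compute once match outcomes are audited. The sketch below is illustrative, with invented counts; it simply derives the Type I and Type II error rates from a confusion matrix.

```python
def error_rates(tp, fp, tn, fn):
    """Compute Type I (false-positive) and Type II (false-negative) rates.

    fpr = fp / (fp + tn): share of non-enrolled visitors wrongly flagged.
    fnr = fn / (fn + tp): share of enrolled individuals the system missed.
    """
    fpr = fp / (fp + tn)
    fnr = fn / (fn + tp)
    return fpr, fnr

# Invented audit counts for illustration: 40 true matches, 60 false alerts,
# 9,900 correctly unflagged visitors, 10 missed matches.
fpr, fnr = error_rates(tp=40, fp=60, tn=9900, fn=10)
print(f"False-positive rate: {fpr:.2%}")  # 0.60%
print(f"False-negative rate: {fnr:.2%}")  # 20.00%
```

Even a sub-1% false-positive rate, applied to millions of store visits, translates into thousands of wrongly confronted customers, which is why measuring and monitoring these rates matters.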
Under the proposed order, Rite Aid is prohibited from using or deploying facial recognition systems on consumers for five years. It must also destroy all collected images and derived analyses and identify all third parties that received photos or videos from the system. Once the five-year ban ends, before deploying any automated biometric security or surveillance system, Rite Aid must prepare a written assessment of the potential risks to consumers. This assessment, referred to here as the system assessment, includes (among other things): designating qualified employees to coordinate and take responsibility for the system; testing the system’s accuracy and potential for error; documenting the machine-learning algorithms used and the datasets used for training; accounting for the demographic and geographic contexts in which the system is deployed; and training and monitoring the employees who act on the system’s outputs.
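The documentation duties listed above amount to keeping a structured record for each deployed system. A minimal sketch of what such a record might capture follows; the field names are hypothetical and are not drawn from the order’s text.

```python
from dataclasses import dataclass, field

@dataclass
class SystemAssessmentRecord:
    """Illustrative record of facts a system assessment would document.
    Field names are hypothetical, not taken from the FTC order."""
    responsible_employee: str        # designated coordinator for the system
    algorithm_description: str       # the ML algorithm(s) used
    training_datasets: list[str]     # datasets used to train the model
    deployment_demographics: str     # demographic context of deployment
    deployment_locations: list[str]  # geographic context of deployment
    accuracy_tests: dict[str, float] = field(default_factory=dict)  # measured error rates

record = SystemAssessmentRecord(
    responsible_employee="J. Doe, Compliance Lead",
    algorithm_description="Convolutional face-embedding model",
    training_datasets=["internal-enrollment-set-v2"],
    deployment_demographics="Urban retail customers",
    deployment_locations=["Store 102", "Store 348"],
    accuracy_tests={"false_positive_rate": 0.006},
)
print(record.responsible_employee)
```

Keeping such records per system is what makes the later “evaluate and adjust” obligation auditable.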
The system assessment requirements resemble the FTC’s approach in cybersecurity incident and data privacy breach cases. For example, within a few days in July 2019, the FTC announced a $700 million settlement with Equifax over its data breach and a $5 billion settlement with Facebook over consumer privacy violations. The accompanying orders mandated the implementation of information security and data privacy programs, each including numerous technical requirements that have since become standard in FTC cybersecurity and data privacy settlements.3
Like those information security and data privacy programs, the system assessment requirements are detailed and comprehensive, comprising roughly 40 technical requirements. Companies deploying facial recognition or other biometric technology in connection with their customers should read the order closely and ask whether their systems could withstand FTC scrutiny and satisfy the system assessment requirements. In another important respect, the FTC has been consistent: each of the orders discussed above concludes by requiring the parties to evaluate and adjust the program (or assessment) in light of any circumstances they know, or have reason to know, may have a material impact on its effectiveness.4 Organizations that embrace the “technological changes” Chair Khan referred to, and that seek to benefit from AI systems for biometric security and surveillance, would do well to adopt the system assessment, or a variation of it.
[1] AI cybersecurity, data privacy standards coming soon from the White House (natlawreview.com)
[3] See, e.g., https://www.ftc.gov/system/files/documents/cases/172_3203_equifax_proused_order_7-22-19.pdf at ¶¶ 12–19, and https://www.ftc.gov/system/files/documents/cases/182_3109_facebook_order_filed_7-24-19.pdf at ¶¶ 4–8.
[4] https://www.ftc.gov/system/files/ftc_gov/pdf/2023190_riteaid_stipulated_order_filed.pdf at ¶ 12.