On August 29, 2023, the California Privacy Protection Agency (“CPPA”) released a series of draft regulations addressing cybersecurity audits and risk assessments. In Part 1 of this two-part series, we reviewed the CPPA’s draft cybersecurity audit regulations. In this Part 2, we focus on the draft risk assessment regulations.
The Draft Risk Assessment Rule mirrors the Draft Cybersecurity Audit Rule in several important respects.
Like the Draft Cybersecurity Audit Regulations, the Draft Risk Assessment Regulations apply only to certain companies, namely those engaged in processing personal information that poses a “significant risk” to consumer privacy. To that end, the Draft Risk Assessment Regulations clarify that both (i) the sale or sharing of personal information, and (ii) the processing of sensitive personal information for purposes other than employment purposes fall into the category of high-risk processing. However, the draft regulations also list other processing activities that may pose significant risks to consumer privacy, pending discussion by the CPPA Board, including where a business would:
- Process personal information to monitor employees, independent contractors, job applicants, or students;
- Process the personal information of consumers whom the business has actual knowledge are under 16 years of age;
- Process a consumer’s personal information in a publicly accessible place to monitor the consumer’s behavior, location, or movements;
- Process consumers’ personal information to train artificial intelligence or automated decision-making technology; and
- Use automated decision-making technology to facilitate decisions that result in the provision or denial of financial or lending services, housing, insurance, educational admissions or opportunities, criminal justice, employment or contracting opportunities or compensation, healthcare services, or essential goods and services.
Another similarity between the Draft Cybersecurity Audit Regulations and the Draft Risk Assessment Regulations is the mandate that service providers and contractors cooperate with companies conducting risk assessments by providing all information necessary for the business to complete the assessment (both directly and through mandatory contractual provisions). Both sets of regulations suggest that a service provider that misrepresents facts to a company during an audit or assessment would be treated as committing both a breach of contract and a direct regulatory violation.
Lastly, similar to the Draft Cybersecurity Audit Regulations, companies should expect to be required to certify compliance with the risk assessment regulations and to submit a condensed version of their risk assessments to the CPPA.
Finally – definitions and (some) requirements for artificial intelligence and automated decision-making technologies.
One of the more notable aspects of the draft risk assessment regulations is the proposed definitions of “artificial intelligence” and “automated decision-making technology.”
- “Artificial intelligence” is defined as an engineered or machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments. Generative artificial intelligence is specifically identified as a type of artificial intelligence subject to the draft regulations.
- “Automated decision-making technology” is defined as any system, software, or process — including one derived from machine learning, statistics, other data-processing techniques, or artificial intelligence — that processes personal information and uses computation, in whole or in part, to make or execute decisions or to facilitate human decision-making. This term explicitly includes profiling.
In addition to requiring risk assessments as described below, the current draft risk assessment regulations impose several specific obligations on companies that train artificial intelligence or automated decision-making technology and then make that technology available to others, requiring them to take steps to prevent the technology from being misused. These measures include:
- Preparing and distributing to each downstream user of the technology plain-language instructions for its appropriate uses; and
- Implementing, and documenting in a risk assessment, safeguards designed to ensure that others’ downstream uses of the artificial intelligence or automated decision-making technology are appropriate.
These requirements therefore place the onus on companies not only to take a broad view of the potential uses of artificial intelligence and automated decision-making technologies, but also to anticipate, and seek to mitigate, potentially harmful uses of those technologies.
Businesses should be prepared to explain and defend the benefits of high-risk processing and have clear and well-defined plans to address any negative impacts.
The draft risk assessment regulations require covered companies to provide a detailed description of their high-risk processing activities before analyzing both the “benefits to the business, consumers, other stakeholders, and the public resulting from the processing” and the “negative impacts to consumers’ privacy associated with the processing.”
In this regard, the requirements for assessing the benefits of the processing are relatively minimal and straightforward, whereas the requirements for assessing its negative impacts are considerably more complex. Indeed, the assessment requirements center on negative impacts, requiring companies to:
- Identify each negative impact and its sources;
- Describe the magnitude and likelihood of each negative impact; and
- Explain the criteria and methods by which the company made these determinations.
The assessment must address at least ten separate categories of negative impacts, ranging from constitutional harms and discrimination to more concrete harms, such as physical and psychological harm, resulting from the company’s high-risk processing activities.
The draft regulations make clear that merely considering these negative impacts is not enough. Rather, companies must develop and implement safeguards specifically designed to address them, and must be able to articulate the extent to which each safeguard addresses a given negative impact. These obligations are ongoing: companies must “maintain knowledge of emergent risks and countermeasures” and identify and implement additional safeguards accordingly.
Additionally, the draft risk assessment regulations require companies to justify high-risk processing activities by explaining how the benefits of the processing outweigh the negative impacts as mitigated by the aforementioned safeguards. Given the substantial requirements of the negative-impact analysis, companies will need to work hard to identify and clearly articulate sufficiently compelling benefits of the processing in order to adequately support their processing activities.
Companies will also need to devise meaningful mitigation measures that reduce the risk associated with such processing to a level low enough that the benefits demonstrably outweigh the negative impacts. To do so, companies may need to engage and retain outside consultants to assess processing activities, design mitigation measures, and conduct periodic assessments to ensure that the balance of risk continues to favor the processing.
Looking to the future
Although the Draft Cybersecurity Audit and Risk Assessment Regulations have a long way to go before they become effective, they are a strong indication of the agency’s current thinking on these subjects. Companies that may fall within the scope of these requirements should therefore monitor this area closely.