2023 has been widely described as the year of artificial intelligence, and the term has appeared on many “words of the year” lists. Although AI has positively impacted productivity and efficiency in the workplace, it has also presented a number of emerging risks for businesses.
For example, a recent Harris Poll commissioned by AuditBoard revealed that roughly half of working Americans (51%) currently use AI-powered tools at work, a trend undoubtedly driven by ChatGPT and other generative AI solutions. At the same time, however, nearly half (48%) said they have entered company data into AI tools that their employer did not provide in order to help with their work.
This rapid integration of generative AI tools into business presents ethical, legal, privacy, and process challenges, creating a need for companies to implement new, robust policies surrounding generative AI tools. Most have not yet done so: a recent Gartner survey revealed that more than half of organizations lack an internal policy on generative AI, and the Harris Poll found that only 37% of working Americans have a formal policy regarding the use of AI-powered tools that are not provided by the company.
Although it may seem like a daunting task, developing a set of policies and standards now can save organizations from major problems in the future.
Using and managing artificial intelligence: risks and challenges
The rapid adoption of generative AI has made it difficult for companies to keep up with AI risk management and governance, and there is a clear disconnect between adoption and formal policy. The previously mentioned Harris Poll found that 64% of respondents view using AI tools as safe, suggesting that many workers and organizations may be overlooking the risks.
These risks and challenges can vary, but three of the most common include:
- Blind trust. The Dunning-Kruger effect is a bias that occurs when people overestimate their own knowledge or abilities. We have seen this play out with AI: many overestimate the capabilities of artificial intelligence without understanding its limitations. This can lead to relatively harmless results, such as incomplete or inaccurate output, but it can also lead to more serious situations, such as output that violates legal usage restrictions or creates intellectual property risk.
- Security and privacy. AI needs access to large amounts of data to be fully effective, but this sometimes includes personal data or other sensitive information. There are inherent risks in using unvetted AI tools, so organizations must ensure the tools they use meet their data security standards.