![](https://hitconsultant.net/wp-content/uploads/2023/12/Morgan-Hague-002.jpg)
Advances in healthcare technology continue to benefit humanity: from the advent of X-rays in the 19th century, to dialysis, CT, MRI, and other machines in the 20th, to the new digital tools of our own era. Perhaps the most auspicious of these is artificial intelligence (AI), which has wide-ranging applications such as predictive analytics, drug development, personalized medicine, and robot-assisted surgery.
The integration of AI into healthcare diagnosis and treatment has endless potential to revolutionize the field by improving patient outcomes, reducing costs, and increasing overall efficiency. With that powerful promise comes danger. The more deeply AI is integrated into healthcare, the greater the cybersecurity risks it creates. In fact, AI is already transforming the threat landscape across the medical profession.
AI risk assessment
While artificial intelligence is seen as a disruptive force with unknown consequences, the International Association of Privacy Professionals estimates that more than half of AI governance approaches are built on top of existing privacy programs, and that only 20% of established organizations have begun formal AI practices and guidelines. Granted, the basic controls in the underlying IT systems powering these AI models remain entirely relevant and necessary, but organizations must also be aware of the new risks AI poses to patient privacy and health, to healthcare safety, and to the reputation of the institution. The advent of AI requires us to build new cybersecurity policies, strategies, and tactics on the foundations already established. Maintaining the status quo is important, but it is not enough.
When working with a technology still in its infancy, medical professionals must remain aware of AI's behavioral risks, which can lead to false diagnoses and data hallucinations. An AI system is only as good as the quality and quantity of its training data. To promote transparency and deep testing of AI models, President Biden recently issued an executive order on safe, secure, and trustworthy artificial intelligence. The order tasks the Department of Health and Human Services with addressing unsafe medical practices and actual harms associated with AI, and aims to set a national standard for rigorous red-team testing to ensure AI systems are safe before they are released to the public and used.
Familiar threats from cybercriminals are also taking on new dimensions with AI. For example, hospitals are increasingly targeted by malware and ransomware attacks. In August of this year, Prospect Medical Holdings took its computer network offline during an incident that affected 16 hospitals and more than 100 other medical facilities across the U.S. for nearly six weeks, exposing the personal information of more than 24,000 employees. Security teams must answer with AI-assisted security models, because attackers are using the technology to craft better social engineering attacks, probe IT systems more efficiently for weaknesses, and create malware that evades detection mechanisms.
Many healthcare organizations rely on third-party vendors for AI solutions. These vendors can unwittingly introduce vulnerabilities like those just described into healthcare systems, with far-reaching impacts. This shift toward third parties, which means less control for internal security teams, is not new; in recent years, third parties have been the leading source of breaches in the healthcare ecosystem. But the added complexity of how vendors use AI, where the data goes, and who controls it compounds an already difficult problem.
Implementation of security measures
Healthcare organizations that are adept at preventing and quelling attacks on the human body must also embrace the need to harden their systems by putting cybersecurity at the top of their overall AI integration strategy. These measures are built to leverage the benefits of AI while protecting patient data and safety, and include:
- Multi-point defense: Given the need for redundancy, organizations should consider incorporating defensive AI capabilities and should create and implement a cybersecurity strategy that includes multiple elements such as firewalls, intrusion detection systems, and advanced threat detection. This multifaceted approach discovers and mitigates threats at different levels.
- Data encryption and access control: Protecting sensitive data and restricting access to authorized personnel starts with robust encryption protocols. Strong access control mechanisms must be implemented to prevent inappropriate access to AI systems, underlying training models and infrastructure, and personal patient records.
- Third-party vendor evaluation: Due diligence is required to thoroughly vet third-party vendors and their cybersecurity practices. At this stage in the maturity of the AI risk management field, it is probably sufficient to know whether third parties are implementing AI models in their solutions and how corporate data is used within those models. More granular controls will follow as standards bodies such as HITRUST and NIST build AI-specific control frameworks.
- Incident response plan: AI systems should be an integral part of any organization’s incident response plan. Identify the unknowns that AI technology can introduce into standard DR/IR operations, and plan to minimize downtime and data loss in the event of a cyberattack that uses, or targets, AI capabilities.
- Ongoing security audits and updates: Perform regular security audits across AI systems and healthcare infrastructure to ensure standard security controls are working.
- Employee training and awareness: Implement mandatory AI cybersecurity training for all healthcare workers, covering the privacy and data-loss risks of “off-the-shelf” AI technologies as well as phishing techniques, deepfake capabilities, and other deceptive advances that AI will enhance in the hands of cyber attackers.
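As one illustration of the access-control point above, a deny-by-default, role-based permission check might look like the following minimal Python sketch. The roles, permissions, and function names here are hypothetical examples, not drawn from any specific healthcare system or product:

```python
# Minimal role-based access control (RBAC) sketch for patient records.
# Roles, permissions, and record fields are hypothetical illustrations.

ROLE_PERMISSIONS = {
    "physician": {"read_record", "write_record"},
    "nurse": {"read_record"},
    "billing": {"read_billing"},
}

def is_authorized(role: str, action: str) -> bool:
    """Return True only if the role explicitly grants the action (deny by default)."""
    return action in ROLE_PERMISSIONS.get(role, set())

def fetch_patient_record(role: str, record: dict) -> dict:
    """Refuse access unless the caller's role permits reading patient records."""
    if not is_authorized(role, "read_record"):
        raise PermissionError(f"role {role!r} may not read patient records")
    return record
```

The key design choice is that an unknown role or action falls through to "denied" rather than "allowed", so new AI components or integrations get no access until someone explicitly grants it.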
AI can be friend or foe to the healthcare sector, with the potential to improve life in an already disrupted industry or to create further breach problems. By implementing robust security measures, increasing staff awareness, and working with trusted vendors, the industry can move forward with both confidence and caution.
About Morgan Hague
Morgan Hague is Manager of IT Risk Management at Meditology Services, a leading provider of information risk management, cybersecurity, privacy, and regulatory compliance consulting services exclusively for healthcare organizations.
About Britton Burton
Britton Burton is Senior Director of TPRM Strategy at sister company CORL Technologies, a technology-enabled managed service provider for vendor risk management and compliance.