Express Healthcare

AI for data health and safety

Venky Ananth, EVP and Global Head of Healthcare at Infosys, highlights how AI can serve as both a shield and a sword—protecting sensitive health data while enabling innovation in a rapidly evolving digital healthcare landscape


One estimate (1) says that artificial intelligence (AI) tools could increase global GDP by 7 per cent, or about $7 trillion over a ten-year period. 

In contrast, another source (2) observes that 87 per cent of global organisations suffered an AI-enabled cyberattack in the past year. And a recent global cybersecurity survey (3) notes that healthcare is the most targeted sector in India, with each organisation facing about 8,600 attacks per week on average. 

But even as the use of AI technologies expands the threat landscape, the same class of solutions is protecting enterprises and their data from breaches. The highly regulated healthcare industry, for which safeguarding sensitive patient information is paramount, is augmenting firewalls, intrusion detection systems and cloud security measures with AI and machine learning-based data security and privacy solutions. Here is how these applications are being used:

To enhance privacy protection and access control:

AI enables Privacy-Enhancing Technologies (PETs) such as federated learning, which protects privacy by training models on decentralised data – for example, patient health records residing in hospital systems – without aggregating the raw information on a central server. AI-powered data management solutions also automate the masking or anonymisation of personal data so it can be used for training, development and testing without violating the owners’ privacy, while AI-enhanced encryption techniques make it harder for unauthorised entities to breach sensitive information.

AI strengthens access control more broadly by automatically assessing the risk level of individual users and adjusting permissions so that only authorised personnel can access key information; it also classifies data by the level of protection required, so healthcare organisations can assign the right controls and access restrictions. Last but not least, smart privacy dashboards and consent management tools empower the owners of data, that is, patients, to decide who can access their personal information.

Here is an example of AI at work: Johns Hopkins developed an AI-powered privacy analytics model to protect patient data. By carefully reviewing all access points, the solution detected potential vulnerabilities that could expose electronic health records, thereby improving security, privacy and patient trust. 
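The federated learning idea described above can be sketched in a few lines: each hospital computes a model update on its own records, and only the model weights travel to the aggregating server. The code below is a minimal, hypothetical illustration using synthetic data and a toy linear model, not a production implementation.

```python
# A minimal sketch of federated averaging (FedAvg): each hospital
# trains on its own records and only model weights leave the site;
# raw patient data is never centralised. All datasets below are
# synthetic, and the model is toy linear regression y = w0 + w1*x.

def local_step(weights, records, lr=0.05):
    """One gradient-descent step computed locally at a hospital."""
    w0, w1 = weights
    n = len(records)
    g0 = sum((w0 + w1 * x - y) for x, y in records) / n
    g1 = sum((w0 + w1 * x - y) * x for x, y in records) / n
    return (w0 - lr * g0, w1 - lr * g1)

def fedavg(hospital_data, rounds=500):
    """Server averages locally computed weights, never the data."""
    global_w = (0.0, 0.0)
    for _ in range(rounds):
        updates = [local_step(global_w, recs) for recs in hospital_data]
        global_w = tuple(sum(ws) / len(ws) for ws in zip(*updates))
    return global_w

# Three synthetic 'hospitals', each holding records following y = 2x + 1
hospitals = [[(x, 2 * x + 1) for x in range(i, i + 5)] for i in (0, 1, 2)]
w0, w1 = fedavg(hospitals)
print(f"learned model: y = {w0:.2f} + {w1:.2f}x")
```

In a real deployment, federated learning frameworks add the secure aggregation, communication and differential-privacy layers that this sketch omits.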

To mitigate cybersecurity risk and facilitate regulatory compliance: 

Intelligent models analyse network traffic data and user behaviour to identify nefarious activity and security threats early, so healthcare providers can address them before they cause damage. They also prevent insider threats – such as the risk of healthcare workers handling patient data exposing it accidentally or deliberately – by analysing employee behaviour to uncover anomalies. For example, a sudden increase in the data being accessed by an employee, or access at unusual hours, could signal misuse. AI also protects healthcare systems running on cloud by monitoring them continuously, automatically detecting vulnerabilities, and responding in real-time to suspicious activity and potential threats. Even routine data compliance tasks – data discovery, classification, and access control – can be automated with AI to enhance efficiency and effectiveness. 
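The behavioural-anomaly idea above, such as flagging a sudden increase in the volume of records an employee accesses, can be sketched with something as simple as a z-score over daily access counts; production systems use far richer models, and the data and threshold below are illustrative assumptions.

```python
# Hypothetical sketch: flagging anomalous record access by scoring
# each day's access count against the employee's own baseline.
from statistics import mean, stdev

def flag_anomalies(daily_counts, threshold=3.0):
    """Return indices of days deviating > threshold std devs from the mean."""
    mu, sigma = mean(daily_counts), stdev(daily_counts)
    return [i for i, c in enumerate(daily_counts)
            if sigma and abs(c - mu) / sigma > threshold]

# 29 ordinary days (~40 record lookups each) plus one day with a spike
counts = [40, 42, 38, 41, 39, 43, 40, 37, 41, 40,
          42, 39, 38, 40, 41, 43, 39, 40, 42, 38,
          41, 40, 39, 42, 40, 38, 41, 39, 40, 400]
print(flag_anomalies(counts))  # only the final day's spike is flagged
```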

To enable innovation alongside protection:

Data drives innovation in highly digitalised businesses such as healthcare: healthcare providers frequently need to share data among themselves and with other ecosystem entities to resolve common problems, test new ideas and innovate solutions, while researchers collaborating on a study need to exchange findings, experimental data and other information. By enabling secure data transfer through enhanced encryption and other methods, AI creates an environment where all these participants can collaborate and innovate with confidence. What’s more, it provides a variety of analytical tools for unlocking valuable insights from healthcare data. 
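One common building block for sharing data safely is pseudonymising patient identifiers with a keyed hash before records leave the organisation; the recipient can link records without ever seeing real identifiers. The sketch below is illustrative only: the key, field names and record are invented.

```python
# Minimal sketch: pseudonymising a patient identifier with a keyed
# hash (HMAC-SHA256) before a record is shared externally. The secret
# key stays with the data owner; without it, recipients cannot
# reverse the pseudonyms. All field names and values are invented.
import hmac
import hashlib

SECRET_KEY = b"rotate-me-and-store-in-a-vault"  # placeholder key

def pseudonymise(patient_id: str) -> str:
    """Deterministic pseudonym: the same ID always maps to the same token."""
    return hmac.new(SECRET_KEY, patient_id.encode(),
                    hashlib.sha256).hexdigest()[:16]

record = {"patient_id": "MRN-001234", "age_band": "40-49", "diagnosis": "E11.9"}
shared = {**record, "patient_id": pseudonymise(record["patient_id"])}
print(shared)
```

Because the mapping is deterministic, two researchers holding pseudonymised extracts can still join records for the same patient, which is what makes collaborative studies possible without exchanging raw identifiers.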

But even as healthcare providers embrace artificial intelligence for all these purposes, they should be mindful of its risks. AI algorithms often lack transparency and explainability. They can produce inaccurate or biased outcomes, especially when the training data is not of high quality. And as mentioned at the outset, the technology may be exploited by bad actors for criminal purposes. This is why healthcare organisations must establish a Responsible AI framework to ensure that AI development and deployment conform to regulatory, ethical and human-centric principles. 

 

References: 

  1. https://www.goldmansachs.com/insights/articles/generative-ai-could-raise-global-gdp-by-7-percent
  2. https://explodingtopics.com/blog/ai-cybersecurity
  3. https://ciso.economictimes.indiatimes.com/news/cybercrime-fraud/indian-healthcare-sector-most-targeted-by-cyberattacks-followed-by-education-report/117592938

