Decoding the impact: India’s AI governance guidelines and healthcare services

The authors explain how India’s AI Governance Guidelines outline responsibilities for healthcare businesses

India’s release of the India AI Governance Guidelines (“Guidelines”) marks a pivotal moment for the healthcare sector, presenting a balanced framework that encourages artificial intelligence (“AI”)-led innovation while demanding stringent accountability and ethical deployment. For healthcare service delivery businesses in India, these guidelines translate into both immense opportunity and clear-cut responsibilities, largely revolving around the 7 (seven) core principles of Trust; People First; Innovation over Restraint; Fairness and Equity; Accountability; Understandable by Design; and Safety, Resilience and Sustainability.

The Guidelines, structured in four parts, lay out a path for a “whole of government” approach, recommending the establishment of an AI Governance Group (“AIGG”), supported by a Technology and Policy Expert Committee (“TPEC”), and an AI Safety Institute (“AISI”) to coordinate policy and provide technical guidance on safety, a move that will further shape the regulatory landscape for healthcare in the medium term. The overarching goal is to leverage AI for public good, “revolutionising diagnostics in rural healthcare” and ensuring benefits reach the “last citizen”. The deployment of AI in India should, therefore, strike a balance, promoting innovation and scalability while ensuring strong human oversight, accountability, and clearly delineated responsibilities across all actors in the AI value chain, including developers, deployers, and end-users. This translates into three critical areas for healthcare businesses:

  1.             Ethical development and deployment (the ‘People First’ mandate)

The Guidelines’ fundamental principles of (a) Fairness and Equity, (b) Accountability, and (c) Understandable by Design are amplified by the existing ethical standards of the Indian Council of Medical Research (“ICMR”). Businesses are now required to ensure that AI-based applications, such as diagnostic tools, are:

  •           Free from Bias: AI models are required to be trained on high-quality, representative datasets of the diverse Indian population to prevent algorithmic discrimination, where biased models could exacerbate systemic risks such as misdiagnosis, unequal access to treatment, or exclusion of underrepresented patient groups from clinical decision-making and drug efficacy assessments. ICMR’s Ethical Guidelines for Application of AI in Biomedical Research and Healthcare (“ICMR AI Guidelines”), which set expectations of high-quality, safe and transparent datasets, supplemented by bias audits, independent ethics review, data quality checks and clear delineation of responsibilities between developers and healthcare providers, are crucial here (a minimal illustrative sketch of such a bias audit follows this list).
  •           Transparent and Explainable: Patients and clinicians must be able to understand how an AI system reached a medical conclusion. This directly mandates the adoption of Understandable by Design as a core feature. Clear disclosures regarding the purpose, manner, and extent of data use are essential, ensuring that individuals are not subject to AI-driven medical profiling or decision-making without their explicit authorisation.
  •           Human-centric: The People First principle mandates human oversight over AI systems, ensuring that AI-driven decisions can be reviewed, overridden, or supplemented by human judgment. This is critical in clinical settings where patient safety is paramount.
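
To make the bias-audit expectation concrete, the listing below is a minimal illustrative sketch in Python of a subgroup audit for a binary diagnostic model. Everything here is an assumption for illustration: the function names, the record format, and the 5% (five per cent) sensitivity-gap threshold are hypothetical and are not prescribed by the Guidelines or the ICMR AI Guidelines.

    # Hypothetical subgroup bias audit for a binary diagnostic model.
    # Record format and thresholds are illustrative assumptions, not
    # requirements drawn from the Guidelines or ICMR AI Guidelines.
    from collections import defaultdict

    def subgroup_sensitivity(records):
        """records: iterable of (group, y_true, y_pred) with binary labels."""
        counts = defaultdict(lambda: {"tp": 0, "fn": 0})
        for group, y_true, y_pred in records:
            if y_true == 1:  # sensitivity looks only at true cases
                counts[group]["tp" if y_pred == 1 else "fn"] += 1
        return {g: c["tp"] / (c["tp"] + c["fn"]) for g, c in counts.items()}

    def flag_disparities(sensitivity, max_gap=0.05):
        """Flag groups trailing the best-served group by more than max_gap."""
        best = max(sensitivity.values())
        return {g: s for g, s in sensitivity.items() if best - s > max_gap}

    # Toy example: the model misses more true cases in the rural cohort.
    records = [
        ("urban", 1, 1), ("urban", 1, 1), ("urban", 1, 0), ("urban", 0, 0),
        ("rural", 1, 1), ("rural", 1, 0), ("rural", 1, 0), ("rural", 0, 0),
    ]
    sens = subgroup_sensitivity(records)
    print(flag_disparities(sens))  # {'rural': 0.333...} -> candidate for review

In practice, an audit of this kind would feed into the independent ethics review and documentation trail contemplated by the ICMR AI Guidelines rather than stand alone.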
  2.             Data governance and patient privacy
  •           Given the sensitive nature of health information, compliance with the Digital Personal Data Protection Act, 2023 (“DPDP Act”) must be treated as a primary goal. Healthcare entities should proactively implement necessary safeguards, consent mechanisms, and data governance measures to ensure readiness before the law is brought into force.
  •           Informed Consent: Businesses using personal data to train AI models must adhere to the DPDP Act’s obligations of consent, purpose limitation, and data minimisation. The ICMR AI Guidelines further stress that a transparent consent process must give patients complete autonomy to choose or reject AI technologies (an illustrative consent-record sketch follows this list).
  •           Data Quality and Access: Businesses are encouraged to use locally relevant datasets to create culturally representative models. They should also contribute to platforms like AIKosh to expand the availability of data for innovation, provided robust data governance frameworks are in place for the sharing of anonymised data. In the healthcare and pharmaceutical sectors, responsible data sharing can significantly accelerate drug discovery, clinical development, and precision treatment. Broader data availability enables AI models to capture real-world diversity, reduce bias, and enhance diagnostic and therapeutic outcomes.
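
The consent and purpose-limitation obligations above lend themselves to a simple techno-legal control. The Python sketch below shows a hypothetical consent record with a purpose-limitation check; the field names, purpose labels and structure are assumptions for illustration and are not drawn from the DPDP Act or its rules.

    # Hypothetical consent record illustrating purpose limitation.
    # The DPDP Act does not prescribe this schema; field names and
    # purpose labels are assumptions for illustration only.
    from dataclasses import dataclass
    from datetime import datetime, timezone
    from typing import Optional

    @dataclass(frozen=True)
    class ConsentRecord:
        patient_id: str
        purposes: frozenset            # e.g. {"diagnosis", "model_training"}
        granted_at: datetime
        withdrawn_at: Optional[datetime] = None

    def may_process(record: ConsentRecord, purpose: str) -> bool:
        """Process only for a consented purpose, and only while consent stands."""
        return record.withdrawn_at is None and purpose in record.purposes

    consent = ConsentRecord("PAT-001", frozenset({"diagnosis"}),
                            datetime.now(timezone.utc))
    print(may_process(consent, "diagnosis"))       # True
    print(may_process(consent, "model_training"))  # False: needs its own consent

The design point is that reuse of clinical data for model training fails closed unless that purpose was specifically consented to, mirroring the purpose-limitation and data-minimisation duties described above.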
  3.             Operational accountability and risk mitigation

The framework emphasises a proactive, risk-mitigation approach through a blend of techno-legal solutions and institutional oversight.

  •           Risk Assessment: Healthcare businesses must develop and adhere to an India-specific risk assessment framework that accounts for real-world evidence of harm and specifically protects vulnerable groups.
  •           Accountability Mechanisms: Implementing effective grievance redressal mechanisms is mandated, ensuring patients and users have accessible and reliable channels to report AI-related harms and seek timely resolution. Distinct from this, the proposed national AI Incidents Database, supported by local-level databases, will function as a central repository to record and analyse AI-related harms across sectors. Unlike grievance mechanisms that address individual complaints, the database is designed as a broader oversight tool, enabling policymakers to identify systemic risks, recurring patterns, and real-world harms associated with AI systems (a simple illustrative sketch of such incident records follows this list).
  •           Compliance: Businesses must comply with all Indian laws, including sector-specific legislations, and are encouraged to adopt voluntary frameworks for self-certification and risk mitigation.
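
The Guidelines propose the AI Incidents Database without specifying a schema, so the Python sketch below is purely illustrative: the AIIncident fields, severity labels and pattern-mining helper are hypothetical, intended only to show how pooled incident reports could surface systemic risks.

    # Hypothetical incident record and pattern view for an AI incidents
    # database; the Guidelines do not prescribe this structure.
    from collections import Counter
    from dataclasses import dataclass

    @dataclass
    class AIIncident:
        sector: str
        system: str
        harm: str
        severity: str  # e.g. "low" / "medium" / "high"

    def recurring_patterns(incidents, min_count=2):
        """Surface (sector, harm) pairs reported repeatedly: the systemic view."""
        tally = Counter((i.sector, i.harm) for i in incidents)
        return {pair: n for pair, n in tally.items() if n >= min_count}

    incidents = [
        AIIncident("healthcare", "triage-model-A", "misdiagnosis", "high"),
        AIIncident("healthcare", "triage-model-B", "misdiagnosis", "medium"),
        AIIncident("finance", "scoring-model-C", "wrongful denial", "medium"),
    ]
    print(recurring_patterns(incidents))  # {('healthcare', 'misdiagnosis'): 2}

Aggregation across deployers is what distinguishes the database from per-entity grievance redressal: the same harm reported against two different systems becomes visible as a sectoral pattern rather than two isolated complaints.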

In essence, for healthcare businesses, the Guidelines establish a clear trajectory: harness the power of AI to drive national health goals while doing so responsibly, ensuring every technological advancement is underpinned by a deep commitment to patient safety, ethical practice, and public trust. While AI has extensive applications in the healthcare industry, such as enabling early-stage diagnosis, increasing the accuracy of health data interpretation and predictive analytics, data-driven clinical decision-making, enhanced disease surveillance, decentralised clinical trials and patient management, improved accessibility through AI-powered telemedicine platforms and wearables, and real-world interplay between hospitals, support agencies, data centres, pharmaceutical companies and portable diagnostic solutions, the Guidelines will certainly pave the way for responsible AI deployment in the healthcare sector.

 

(Sameer Sah is a Partner, Supratim Chakraborty is a Partner, Sayani Bhattacharyya is a Senior Associate and Shramana Dwibedi is a Senior Associate with Khaitan & Co. Views expressed are personal.)
