
ANALYSIS OF THE LEGAL IMPACT OF ARTIFICIAL INTELLIGENCE IN HEALTH CARE

 

AUTHORED BY - AMIRTHAVARSHINI P K

& MANASA S

 

 

1. ABSTRACT:

Artificial intelligence (AI) has been growing immensely in many fields and is making rapid strides in healthcare. AI has been successfully implemented in healthcare through the automation of clinical trials, the monitoring of critical care, the digital analysis of blood samples, cancer screening and many other ways of improving diagnostic and treatment processes. Yet the outcomes and predictions of AI are not accepted by physicians entirely. The other main issue is whether AI can be held liable for errors or malfunctions: doctors are answerable for their errors to the Medical Council and are governed by medical ethics and laws, whereas AI is not accountable under any law. Laws regulating this area are therefore very much needed. Though the use of AI in the medical field has progressed tremendously, the legal regulation of AI's performance is snail-paced. This paper discusses the tools and technologies of AI used in healthcare and stresses the need for the development of laws regulating AI in the medical field. While there is no such regulation in India, the European Union and some other countries have drafted laws on the subject. Regulations and laws that are futuristic, with uncomplicated regulatory requirements promoting compliance, are therefore needed. A new legal framework for the regulation of AI should pursue objectives such as ensuring the liability of developers and operators of AI for its performance, creating a unified digital space of trust for data protection, ensuring that the benefits of AI outweigh the risks, and drawing a clear distinction between where AI may be used and where its use should be prevented.

 

1.1 BACKGROUND OF THE STUDY

In earlier days, doctors had to identify diseases and treat patients on their own. With the development of computers and technologies, doctors began treating patients with the aid of medical equipment such as scanning machines. Now, in the digital world, AI is used in business and transport, and its use in healthcare is at a beginning stage. It can be used in patient care and in administrative processes. Artificial intelligence (AI) is a collection of technological solutions that mimic human cognitive functions, such as the capacity for autonomous learning and decision-making without the need for a preset algorithm. When used in specific tasks, AI can achieve outcomes that are on par with or even superior to those attained through human intellectual effort. This group of technology solutions consists of tools and services for data processing and decision-making, software (including machine learning applications), and information and communication infrastructure. Because the development of AI in the medical field is slow-paced, legal regulation of the concept, of the conditions and features of its development and functioning, of its areas of application and integration into other systems, and of control over the use of end-to-end digital AI technology is necessary. This issue is resolved locally, taking into consideration the specifics of each nation's legal structure.

 

1.2 LITERATURE REVIEW

The paper by Bangul Khan, Hajira Fatima, Ayatullah Querishi, and Sanjay Kumar on the Drawbacks of Artificial Intelligence and Their Potential Solutions in the Healthcare Sector was published in a Springer Nature journal in February 2022. The article deals with the drawbacks of artificial intelligence in healthcare, but the solutions offered for those drawbacks are vague and simplistic, for example, educating users, providing training, and not implementing AI devices where risk is involved.

 

The research paper Artificial Intelligence: How is It Changing Medical Sciences and Its Future?, written by Kanadpriya Basu, Ritwik Sinha, Aihui Ong, and Treena Basu, was published in the Indian Journal of Dermatology in October 2020. The article aims to discuss the impact of artificial intelligence on medical science and to distinguish hype from reality. It deals with how artificial intelligence is changing the landscape of medical science and discusses various challenges in implementing AI in the medical field. Data protection is identified as the major challenge, but the article does not provide any solutions to overcome the challenges.

 

The research paper Medical Applications of Artificial Intelligence (Legal Aspects and Future Prospects) by Vasiliy Andreevich Laptev, Inna Vladimirovna Ershova and Daria Rinatovna Feyzrakhmanova was published in an MDPI journal in December 2021. The paper explores the functioning of AI in medical-legal relations, defining its legal personality and its competencies. It also examines the grounds for imposing legal liability on users of AI systems. The study reviews the sources of AI's legal regulation, including state-sanctioned ones such as EU regulation, with a particular focus on medical-legal customs and practices. However, the article only reviews the draft EU regulations and does not provide suggestions or outputs for the international scenario.

 

Based on a literature review of many papers relating to AI in healthcare, the existing papers concentrate mainly on ethical problems such as trust issues in implementation and the challenges and limitations of using artificial intelligence in healthcare. The medico-legal dimension of the use of AI is not treated as a serious issue, and the legal impacts of artificial intelligence in healthcare, together with suggestions for a legislative framework to regulate its implementation, are not discussed, especially from an Indian legal perspective.

 

1.3 RESEARCH PROBLEM

The legal impact of implementing artificial intelligence in the healthcare sector is taken as the main problem, as there is no medico-legal framework in this area.

 

1.4 RESEARCH OBJECTIVE

The main objective of the research paper is to establish the urgent need for effective regulations and laws for the growing use of AI in the medical field and to offer suggestions and solutions for fixing liability for damage caused by AI.

 

1.5 RESEARCH METHODOLOGY

The research methodology is doctrinal, and the research information is based on various articles and research papers published on internet sources and on various regulatory legal acts on AI.

 

1.6 RESEARCH QUESTION

The two basic questions to which the paper tries to give solutions or suggestions are as follows:

  1. Whether AI can be brought under the purview of existing Indian laws or is there a need for separate legislation to regulate it?
  2. Who can be made liable for the medical negligence and errors caused by AI?

2. APPLICATION OF AI-BASED SYSTEM IN HEALTHCARE

The applications of artificial intelligence in healthcare include drug development, supporting doctors in making decisions, monitoring the lifestyle of patients, acting as virtual assistants, and assisting in emergency care, surgery and the monitoring of chronic conditions. The following AI-based systems show the possible forms in which AI can be identified in medical practice:

  1. Cyborg AI doctor
  2. AI robot
  3. AI cloud doctor

 

A cyborg AI doctor is a human individual using a cybernetic-organism technique, in which an AI chip is implanted in the brain. An AI robot is an independent system that navigates autonomously with the help of an autonomous cyber-physical system. An AI cloud doctor is a software system in which all information and data-processing tools are hosted in a cloud storage service. These forms of AI are used in the prevention of epidemics, drug development, disease diagnosis, etc. A well-known example is that during the pandemic period AI technology helped to analyse the SARS-CoV-2 virus.

 

2.1 AI TECHNOLOGY IN HEALTHCARE

The main technologies used in the health sector are machine learning and deep learning. Machine learning is a statistical method in which the data set is divided into training and testing sets. The tool is trained on the training data, and the trained model is then tested on the testing data. In healthcare, a common application of machine learning is predicting which treatment protocol should be followed for a patient, based on the treatments practised on various patients and their success rates.
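To make this train-and-test idea concrete, the following is a minimal sketch in Python using the scikit-learn library; the patient features, labels and figures are invented purely for illustration and are not drawn from any system discussed in this paper.

```python
# Illustrative sketch only: a hypothetical train/test workflow of the kind described above.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Hypothetical records: [age, blood pressure, marker level]; label = treatment succeeded (1) or not (0)
X = np.array([[54, 130, 2.1], [61, 145, 3.8], [47, 120, 1.2], [70, 160, 4.5],
              [58, 138, 2.9], [43, 118, 0.9], [66, 152, 4.1], [50, 125, 1.7]])
y = np.array([1, 0, 1, 0, 1, 1, 0, 1])

# Split the data set into training and testing parts, as the section describes
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)              # the tool is "trained" on the training data
predictions = model.predict(X_test)      # and then tested on unseen testing data
print("Accuracy on held-out data:", accuracy_score(y_test, predictions))
```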

 

AI-based analysis of blood samples is done in two ways. The first is to develop algorithms that can identify people at risk of a particular disease. In this method, the algorithm learns from the training data the pattern of blood counts associated with a particular disease. Once training is complete, the algorithm is tested; if the blood-count pattern of the disease is matched in the test data, an alarm is raised. This type of machine learning, in which the algorithm learns from previously labelled data, is called supervised machine learning. Examples include assessing the possibility of kidney cancer and identifying pregnant women at risk of preeclampsia.
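A hedged illustration of this supervised, alarm-raising approach is sketched below; the blood-count features and the notion of an "at-risk pattern" are purely hypothetical and chosen only to show the mechanics.

```python
# Minimal sketch, not a clinical system: supervised learning on hypothetical blood-count features.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: [haemoglobin, white cell count, platelet count]; label 1 = known disease risk (invented data)
X_train = np.array([[13.5, 6.0, 250], [9.8, 14.2, 120], [14.1, 5.5, 270],
                    [10.2, 13.8, 110], [13.0, 7.1, 240], [9.5, 15.0, 100]])
y_train = np.array([0, 1, 0, 1, 0, 1])

clf = LogisticRegression().fit(X_train, y_train)   # the algorithm learns the at-risk pattern

new_patient = np.array([[9.9, 14.5, 115]])         # hypothetical incoming blood count
if clf.predict(new_patient)[0] == 1:
    print("Alarm: blood count matches the learned at-risk pattern")  # flag for clinician review
```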

 

Another way of using AI in blood sampling is pandemic surveillance. In this method, the blood counts of people living in the same locality are used to build the algorithm. Here there is no labelled data of people with the same disease, so this type of machine learning is known as unsupervised machine learning. The blood counts are compressed by about 30%, and from the compressed data the counts are reconstructed. If the algorithm reconstructs a blood count poorly, that is, the count is unusual and may indicate a person with disease, it is noticed, and any unusual blood count the algorithm sees is stored separately.
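The compress-and-reconstruct idea can be illustrated roughly as follows, using principal component analysis as a simple stand-in for whatever compression model a real surveillance system would use; every figure here is invented.

```python
# Illustrative sketch of reconstruction-based anomaly detection on hypothetical blood counts.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Hypothetical blood counts from people in one locality (rows = people, columns = analytes)
normal_counts = rng.normal(loc=[13.5, 6.5, 250, 4.8], scale=[0.5, 0.8, 20, 0.3], size=(200, 4))

# "Compress" the counts to fewer dimensions, then reconstruct them
pca = PCA(n_components=3).fit(normal_counts)          # roughly a 25-30% compression of 4 features
reconstructed = pca.inverse_transform(pca.transform(normal_counts))
errors = np.linalg.norm(normal_counts - reconstructed, axis=1)
threshold = np.percentile(errors, 99)                 # typical reconstruction error for normal counts

# An unusual count reconstructs poorly, so its error exceeds the threshold and it is set aside
unusual = np.array([[9.0, 18.0, 90, 3.0]])
error = np.linalg.norm(unusual - pca.inverse_transform(pca.transform(unusual)), axis=1)[0]
if error > threshold:
    print("Unusual blood count stored separately for surveillance review")
```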

 

AI is now used in preclinical and clinical trials. Here, the algorithm analyses a database of chemical compounds and finds which ones bind closely to the target. This enables drug developers to explore the chemical space quickly. Using AI in this process saves time, as thousands of molecules can be synthesised and tested rapidly; done manually, the same process would take several years of research and cost a great deal.
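A rough, purely illustrative sketch of this screening idea follows: a model trained on hypothetical assay results ranks untested candidate compounds by predicted binding to the target. The descriptors, scores and compounds are all invented for the example.

```python
# Hedged sketch of virtual screening: rank candidate compounds by predicted binding strength.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical molecular descriptors (e.g. weight, logP, polar surface area) and measured binding scores
descriptors = np.array([[320, 2.1, 75], [410, 3.4, 60], [290, 1.2, 95],
                        [505, 4.8, 40], [350, 2.9, 70], [275, 0.8, 110]])
binding = np.array([6.2, 7.8, 5.1, 8.4, 7.0, 4.6])

model = GradientBoostingRegressor(random_state=0).fit(descriptors, binding)

# Score a small library of untested candidates and surface the most promising ones first
candidates = np.array([[330, 2.5, 72], [480, 4.5, 45], [300, 1.5, 90]])
ranked = sorted(zip(model.predict(candidates), range(len(candidates))), reverse=True)
for score, idx in ranked:
    print(f"candidate {idx}: predicted binding score {score:.2f}")
```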

 

Another use of AI in healthcare is predicting cancer. Machine learning techniques help in predicting and diagnosing cancer by analysing pathology profiles and imaging studies and by converting images into mathematical sequences. Different classification techniques, such as support vector machine classifiers, probabilistic neural networks and k-nearest neighbours, are used to detect various types of cancer. The algorithms used for detecting cancers are capable of analysing unstructured data and estimating the likelihood of patients developing different illnesses. Radiomics is a recently introduced deep learning technique applied to medical images in order to extract features that are invisible to humans and to reveal disease-related patterns and characteristics.
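As a purely illustrative sketch of the classifiers named above, the snippet below applies a support vector machine and a k-nearest-neighbours classifier to hypothetical image-derived features; the features, labels and test case are invented and carry no clinical meaning.

```python
# Illustrative only: SVM and k-nearest-neighbours classification on invented radiomic-style features.
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical features extracted from imaging: [lesion size, texture score, intensity]
X = np.array([[12.0, 0.8, 140], [3.1, 0.2, 80], [15.5, 0.9, 150],
              [2.5, 0.1, 75], [11.2, 0.7, 135], [4.0, 0.3, 85]])
y = np.array([1, 0, 1, 0, 1, 0])     # 1 = malignant, 0 = benign (labels from pathology)

svm = SVC(kernel="rbf").fit(X, y)
knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)

new_case = np.array([[13.0, 0.85, 145]])
print("SVM prediction (1 = malignant):", svm.predict(new_case)[0])
print("KNN prediction (1 = malignant):", knn.predict(new_case)[0])
```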

 

Remote Patient Monitoring (RPM) is another application of AI in healthcare. It has created a revolution in healthcare by enhancing patient care, enabling early intervention and reducing the need for frequent in-person visits. AI algorithms play a vital role in early detection by analysing patient data collected through wearable devices, sensors and patient records. The data collected include heart rate, blood pressure, respiratory rate and more. Using AI, baselines for each patient are generated based on their age, gender, medical history and current health status. The key components of AI-enabled detection include the following:

Near Real-Time Monitoring: Patient data are collected continuously using wearable devices, allowing the AI algorithm to detect even slight deviations from the baselines.

 

Pattern Recognition: The AI algorithm analyses patterns in the collected data. If irregular heart rhythms or sudden changes in activity are found, it detects them and alerts the patient.

 

Anomaly Detection: AI algorithms are trained to identify readings that fall outside a person's normal range and to alert healthcare providers so they can intervene and take appropriate action.

 

Predictive Analysis: AI predicts potential health issues based on historical data trends. For example, if a patient's heart rate has been declining over time, AI can alert medical professionals to the risk of a cardiac event. A brief illustrative sketch of this baseline-and-deviation idea follows below.
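The following is a minimal sketch, assuming only a single vital sign and a simple statistical baseline, of how such baseline-deviation alerts might work; it is not a clinical tool and all readings are hypothetical.

```python
# Minimal sketch: flag deviations from a patient's own baseline in remotely monitored heart rate.
import numpy as np

history = np.array([72, 75, 70, 74, 73, 71, 76, 72, 74, 73])   # past resting heart rates (bpm), invented
baseline, spread = history.mean(), history.std()

def check_reading(bpm, threshold=3.0):
    """Alert if a new reading deviates from the personal baseline by more than `threshold` standard deviations."""
    z = abs(bpm - baseline) / spread
    if z > threshold:
        print(f"Alert: heart rate {bpm} bpm deviates strongly from baseline {baseline:.0f} bpm")
    else:
        print(f"Reading {bpm} bpm is within the expected range")

check_reading(74)    # within the patient's normal range
check_reading(112)   # flagged for clinician review
```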

 

Another use of AI in healthcare is automating administrative tasks, which reduces the workload of medical staff. Specialised computers and software are used to automate tasks such as managing medical records, appointment scheduling, billing and payment tracking.

 

Robot-assisted surgery is also an application of AI in healthcare. Here, surgeons make use of technology that guides them in performing minimally invasive surgery. The clinical robotic system consists of a camera arm and mechanical arms with surgical instruments attached to them. The surgeon controls the arms from a computer console near the operating table, which gives a high-definition, magnified view of the surgical area. This gives surgeons greater flexibility and control during the operation and allows them to see the surgical site more clearly than with traditional techniques.

 

3. LEGAL ANALYSIS OF ARTIFICIAL INTELLIGENCE

3.1 LEGAL PERSONALITY OF AI

While discussing the legal analysis of AI in healthcare, the first question to answer is whether AI is a legal person. Russia has recently recognised AI as a legal person, and the humanoid robot Sophia has been granted citizenship in Saudi Arabia, yet granting legal personality to AI remains a debatable issue. The legal personality of AI is linked to whether it can be made the subject of legal rights and duties. Corporations, as legal entities, can be taken as a precedent when discussing the grant of legal personality to AI. The difference between them is that corporations are accountable through their stakeholders and are only fictitiously independent, whereas AI may be genuinely independent.

 

The recognised characteristics of AI are as follows. First, AI is a man-made product: it is not a natural creation but an extension of man. Second, AI can simulate human intelligence to replace human labour; the main method of simulation is the computer program (or "algorithm"). Third, AI has the capacity for deep learning, self-learning, and independent decision-making and action. Fourth, artificial intelligence can be a humanoid robot, a computer device containing intelligent systems, or simply a software system that simulates human thinking. Because artificial intelligence is an "artificial product", some researchers treat it as an object within the traditional "human-thing" cognitive framework, while others focus on its "humanoid" or "human-like" characteristics, especially its ability to think independently, and believe that it has the potential to become a legal subject. From this analysis it can be seen that artificial intelligence does not have the prerequisites to become the same legal subject as a person. The creation and development of the concept of a subject, from a philosophical subject to a legal subject, is based on man; this is not only a development of metaphysics but also a historical and practical process that reflects the unique position of man in the world. Although artificial intelligence has deeply and profoundly affected social life, this still does not provide sufficient reason for it to become a legal subject equal to people. At the same time, owing to the limits of human self-awareness, and especially our limited understanding of the mechanisms of the human brain, rationality and free will are reflected more in philosophical thinking than in scientific description. Thus it cannot be proven that an artificial intelligence algorithm can be completely similar to human reason, nor that the autonomy AI displays independently of humans in certain situations and spaces is an expression of free will. Therefore, artificial intelligence cannot become an original legal subject in the same way as humans.

 

Considering all these characteristics, the authors' standpoint is that artificial intelligence functions on the basis of human inputs and processes and stores the data provided by humans, and thus AI cannot be recognised as a legal person. Especially when it comes to the application of AI in diagnosis and treatment procedures in healthcare, AI works with the data provided by humans, and so it cannot be treated as a legal person.

 

3.2 CAN AI BE BROUGHT UNDER THE PURVIEW OF MEDICAL NEGLIGENCE?

Medical negligence occurs when a doctor breaches the duty of care and the breach results in injury to or the death of the patient. The doctor can be held liable under the law of torts, contractual liability, the Indian Penal Code, 1860 and the Consumer Protection Act, 2019. The question now is whether AI can be brought under the purview of these laws: when treatment and diagnosis are done with the help of AI, who can be held responsible? Artificial intelligence is ultimately the programming of algorithms, and it works with the data provided by humans. Even if AI works autonomously, there will be experts to supervise it. Although AI may become an integral part of medicine, it will always require physician supervision and thus cannot turn hospitals into a "doctor-free" zone. Moreover, the work done by an AI system is often a black box. It is therefore important to discuss who should be responsible in medical cases: the device manufacturer, the hospital, or the licensed physician.

As discussed earlier, AI cannot be granted legal personality; hence liability for AI can be discussed under vicarious liability, contractual liability, joint liability and so on. Under vicarious liability, the hospital is liable for the breach of duty of its employee; similarly, if artificial intelligence is used or assists in the treatment process, the hospital would have to compensate, as the AI works under the hospital's supervision. Under contractual liability, the hospital buys the devices from the manufacturer or developer of the artificial intelligence by entering into a contract and accepting the terms and conditions of use. If a discrepancy arises in the functioning of the AI and the device misdiagnoses the patient because of a bug, leading to injury, the hospital can be held liable if the manufacturer had already warned of the possible mishap; otherwise, the hospital can claim that liability rests with the manufacturer. In the case of joint liability, the victim can file a case against the hospital, the hospital can implead the manufacturer, and joint liability arises. If the artificial intelligence was required to function under the supervision of doctors or experts, then for any misdiagnosis arising out of its use, the person under whose supervision the treatment was carried out will be held liable.

 

3.3 SPECIAL LEGISLATION FOR AI

EUROPEAN UNION

In discussing the liability of AI, a further question arises as to whether AI can be dealt with under existing legislation or whether special legislation is needed to regulate it. The European Union has framed the first regulatory framework for AI. The Act categorises systems by risk level, including unacceptable risk, high risk and low risk. Medical devices fall under the EU's product safety legislation in the high-risk category, and all high-risk systems are to be assessed thoroughly before being placed on the market and throughout the system's lifecycle.

 

A directive on AI liability has been proposed by the European Commission to address consumer liability claims for harm resulting from AI-enabled goods and services. The AI Liability Directive aims to simplify the process for victims of AI-related injuries and to facilitate claims against AI operators, providers or users. The directive empowers EU Member States to compel the disclosure of evidence related to AI systems in certain situations; this applies where the claimant presents sufficient facts and evidence and has exhausted all attempts to gather relevant evidence from the defendant. The Product Liability Directive, for its part, aims to provide a redress mechanism for individuals injured by defective products. Unlike the proposals in the AI Liability Directive, which focus on fault-based claims, the current Product Liability Directive establishes a no-fault liability regime; the burden remains on the injured person to prove the damage, the defectiveness, and the causal link between the two. Manufacturers of the devices, and importers who brought the machinery into the EU, will be held responsible for the defects.

 

The updated Product Liability Directive proposes changes to bring AI systems within the product liability regime. These changes include confirming that AI systems and AI-enabled goods and services fall within the definition of products, and recognising that not only hardware manufacturers but also software providers and digital service providers can be held liable for AI-related product defects. The directive also clarifies that responsible persons can be held liable for changes to products, including those triggered by software updates or machine learning. Additionally, the directive would broaden the definition of damage suffered due to defective AI products to include the loss of data not used exclusively for professional purposes. The proposed changes are intended to alleviate the burden of proof where it is difficult to prove the causal link between the injury and the defectiveness. The proposed AI Liability Directive is part of EU legal reforms aimed at regulating AI and emerging technologies; it seeks to reduce legal uncertainty, ensure victims can seek redress for AI-related damage, and harmonise rules across Member States. Among the further suggestions made under the draft EU Liability Directive are that organisations should conduct thorough risk assessments to determine whether their AI use cases fall under the proposed Act and whether they might be classified as 'high-risk' AI systems; that adequate governance and policies should be put in place to limit the risk of damage caused by AI systems through incorrect use or inaction; and that businesses should consider compliance with potential disclosure requests, especially for complex AI systems, and ensure they have appropriate contractual protections, including warranties and indemnities, to cover potential risks.

 

UN

The first official discussion of AI by the U.N. Security Council was held in July 2023 in New York. Both military and non-military applications of AI were discussed by the Council because they "may have very serious consequences for global peace and security."

 

CHINA

To monitor the generative AI field, China has put in place a series of interim regulations that came into effect on August 15, 2023. These rules require service providers to submit security evaluations and obtain approval before releasing AI products to the public market.

 

G20

The members of the G20 have also stressed the importance of regulating AI.

 

4. SUGGESTIONS AND CONCLUSION

Artificial intelligence in the medical field is going to be inevitable, and there should be proper regulation of it, as it is connected with human life. The right to life is guaranteed under Article 21 of the Indian Constitution, and the implementation of artificial intelligence, and its regulation, should protect this fundamental right. Under Indian law, AI in healthcare can be brought within the purview of medical negligence, and liability can arise as contractual liability or vicarious liability. However, bringing AI under the purview of medical negligence alone cannot provide an adequate solution; hence there is a need for special legislation and regulatory guidelines. Similar to the European Union's special legislation on AI, India should frame special legislation for the regulation of AI in all fields, especially healthcare. The basic ideology in developing AI should be to ensure that the benefits derived from artificial intelligence outweigh the risks involved.

 

Some of the suggestions that can be taken into consideration while framing the provisions for the regulation of AI in healthcare are as follows:

 

  1. Before AI is introduced into the market, i.e. to hospitals, there should be a strict licensing mechanism to approve the particular AI system, and several tests and experiments should be conducted before allowing its implementation in hospitals.
  2. During the implementation process, there should be a contract between the manufacturer, the software licensee and the hospital setting out the terms and conditions. The manufacturer should explain the working module, the directions for use and the inherent dangers.
  3. Patients undergoing treatment by or with the help of artificial intelligence should be informed of the risks involved, and the consent of the patient should be obtained before the treatment process.
  4. When any injury is caused to a patient in a treatment process carried out by or with the help of artificial intelligence, the person who owns the AI (for example, a hospital) or manages it (the doctor, operator, or another person who sets the parameters of its work) can be held liable based on the contract and the circumstances.
  5. Taking inferences from the EU legislation on the regulation of AI, damages can be interpreted widely to include loss of data, and the manufacturer and software developer should not easily escape liability where the patient faces difficulty in proving the connection between the injury caused and the defect in the AI.
  6. In fixing the amount of compensation and damages for patients who have suffered, the court can make the manufacturers, software developers, importers or hospitals pay the damages, and it can also order joint liability.
  7. Where difficulties are faced in fixing liability on particular persons, the court, in order to provide justice and compensation to the injured, can order the government to collect a separate tax from the manufacturer or importer at the time AI is introduced into the market; that tax can be pooled into a collective fund and used by the courts to provide compensation.
  8. There should be the creation of a unified digital space of trust for the protection of patients' medical data and history, and any violation of it should be treated as a data protection violation.
  9. There should be a clear distinction determining where AI shall be used and where its use should be prevented, as this involves risk to human life.
  10. There should be separate supervising committees in every state to ensure the safe use of artificial intelligence, with the power to inspect hospitals regularly to ensure the same.

 

BIBLIOGRAPHY

  1. https://www.mdpi.com/2075-471X/11/1/3
  2. https://www.who.int/europe/news/item/06-02-2023-artificial-intelligence-in-mental-health-research--new-who-study-on-applications-and-challenges
  3. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7640807/
  4. https://www.shlegal.com/insights/eu-artificial-intelligence-liability-directive
  5. https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
  6. https://www.ijlsi.com/wp-content/uploads/Can-AI-Be-Held-Accountable-for-Medical-Negligence.pdf
  7. https://www.brookings.edu/articles/how-to-systemically-think-about-ai-regulation/
  8. https://commission.europa.eu/business-economy-euro/doing-business-eu/contract-rules/digital-contracts/liability-rules-artificial-intelligence_en
