
THE LEGAL AND ETHICAL IMPLICATIONS OF ARTIFICIAL INTELLIGENCE IN THE LEGAL WORLD

 

AUTHORED BY - LUCKSHA B & SADHANA S

 

ABSTRACT:

Integrating AI into the legal system presents a spectrum of challenges. This abstract delves into the multifaceted issues that arise when AI intersects with legal processes. Firstly, algorithmic bias poses a significant concern: AI systems trained on biased data may perpetuate or exacerbate disparities within the legal system, potentially violating constitutional principles of fairness and equality. Furthermore, ensuring the privacy and security of sensitive legal information presents a formidable obstacle, particularly given the vast amounts of data processed by AI algorithms. Intellectual property rights also come into play, raising questions about ownership and usage rights over AI-generated legal documents and analyses. Additionally, the issue of legal liability becomes increasingly complex where AI systems are involved in decision-making, necessitating clarity on accountability and responsibility. Moreover, the opacity of AI decision-making processes raises concerns regarding transparency and due process. Ethical considerations, such as preserving human judgment and safeguarding against unintended consequences, further complicate the integration of AI into legal proceedings. Addressing these challenges requires a nuanced approach, involving collaboration between legal experts, technologists, and policymakers to develop frameworks that uphold legal principles while leveraging the benefits of AI innovation.

 

INTRODUCTION:

With the growing use of AI in various aspects of business and government, this article explores the legal and ethical issues that arise when the legal system relies on AI for decision-making, such as the potential for algorithmic bias and the legal liability for decisions made by machines.

 

There are several legal issues that the legal system may face when using AI for decision-making. Here are some of the key legal problems:

 

• Discrimination: AI systems may inadvertently discriminate against certain groups, particularly if the algorithms are trained on biased data. This can lead to legal action under anti-discrimination laws.

 

• Privacy: AI systems may collect and process large amounts of personal data, raising concerns about data protection and privacy laws.

 

• Intellectual property: Companies must ensure they have the necessary rights to use any data, software or other intellectual property incorporated into an AI system.

 

• Liability: If an AI system makes a decision that causes harm or injury, it may not always be clear who is responsible. This can lead to legal disputes over liability and compensation.

 

• Transparency: Companies may be required to provide explanations of how their AI systems make decisions, particularly in industries that are heavily regulated.

 

• Ethical considerations: The use of AI for decision-making raises ethical questions, particularly in areas such as healthcare and criminal justice. Companies must ensure that they act in accordance with ethical standards and principles.

 

To mitigate these legal problems, companies should seek legal advice and conduct a thorough risk assessment before implementing AI systems. They should also ensure that their AI systems are designed with fairness, transparency, and accountability in mind.

 

QUESTIONS TO ASK

Implementing AI in the legal system could have several positive implications, but there are several questions one should ask before deciding for or against it.

 

1) What types of decisions could be made using AI in the legal system?

There are several ways artificial intelligence could be used in the legal system. As an aid to advocates, it could be a powerful tool: it can summarise long legal documents, present complex legal texts in a more simplified manner, and make research on case law easier. It may also be used to identify biases, evaluate evidence, and draft legal documents.

 

2) How can the decisions made by an AI system assist clients and stakeholders?

AI could help immensely in enabling people who are not lawyers to communicate with legal professionals. It can keep clients more “in the know,” and it can assist with identifying the client's needs and the impact of new legislation.[1] Rather than requiring multiple conversations with lawyers, AI can furnish a systematic analysis, oversee the proceedings, and provide a concise report to the clients.

 

3) Are there any regulations or laws that specifically govern the use of AI in decision-making processes?

There are no specific laws governing the use of artificial intelligence in India, but there are several guidelines. In 2018, the NITI Aayog released the National Strategy for Artificial Intelligence #AIForAll,[2] which explored the various ethical considerations of implementing AI in India and focused on principles for operationalising responsible AI.

Additionally, the Digital Personal Data Protection Act, 2023 (DPDP Act)[3] could be utilised to address privacy concerns regarding the implementation of AI.

Several committees have been created by the Ministry of Electronics and Information Technology[4] to research the development, safety, and ethical issues regarding the implementation of AI.

The Bureau of Indian Standards has also established a committee to draft standards for AI.[5]

India is also a signatory to the Global Partnership on Artificial Intelligence (GPAI), which aims to ensure that AI is developed according to the standards set by the OECD AI Principles. These state that the advancement of AI must be responsible, sustainable, and inclusive for all.

 

4) What are the potential ethical considerations that arise from using AI in decision-making?

On paper, AI is free from biases, but if the algorithm develops in a manner that incorporates human biases, it would prove highly problematic. If decision-making is left to artificial intelligence, there are several ways it could backfire, as the implications of its decisions and possible technical glitches could have immense impacts on people's lives. AI is defined as a “system that acts like a human,” yet it lacks the inherent humanity that is essential when interpreting the law.

 

In Zahira Habibullah Sheikh and Ors. v. State of Gujarat and Ors.,[6] the Supreme Court held that stakeholders have an inherent right to a free and fair trial, with a fair prosecutor and without bias or prejudice. While AI may deliver speedy judgments, it needs to be evaluated from this perspective, and its “fairness” may be questioned.

 

Article 14,[7] along with Articles 15, 16, and 17, explicitly aims to make Indian society egalitarian and individualistic. These provisions aim to remove bias and discrimination and to promote fairness in Indian society. AI systems developed in the US were found to be extremely discriminatory against African Americans. This was due to the “garbage-in, garbage-out” principle: the data fed into the AI was biased, and thus the AI's output was also biased. Though in India there is no question of racial bias of that kind, several forms of bias prevalent in India would, when embedded into AI systems, be unconstitutional.
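To make the “garbage-in, garbage-out” principle concrete, below is a minimal, purely illustrative Python sketch. The data, groups, and model are invented for this example and drawn from no real system: a classifier trained on synthetic “historical” decisions that penalised one group learns to penalise that group even when merit is identical.

```python
# Minimal sketch of "garbage-in, garbage-out": a model trained on
# biased historical decisions reproduces the bias. All data is synthetic.
import random

from sklearn.linear_model import LogisticRegression

random.seed(0)
features, labels = [], []
for _ in range(2000):
    group_b = int(random.random() < 0.5)  # 1 if the applicant is in group B
    merit = random.random()               # the only legitimate factor
    # Biased history: group B needed far more merit for a favourable outcome.
    favourable = int(merit > (0.7 if group_b else 0.3))
    features.append([group_b, merit])
    labels.append(favourable)

model = LogisticRegression().fit(features, labels)

# Two identical applicants who differ only in group membership:
print(model.predict_proba([[0, 0.5]])[0][1])  # group A: high probability
print(model.predict_proba([[1, 0.5]])[0][1])  # group B: low probability
```

Because group membership itself predicted past outcomes, the model weights it directly, which is precisely the kind of discrimination the constitutional provisions above prohibit.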

 

5) How has the Indian judiciary utilised AI?

In the case of Jaswinder Singh v. State of Punjab,[8] Justice Anoop Chitkara of the High Court of Punjab and Haryana used ChatGPT to gain a wider understanding of the bail application presented before the court. However, the AI chatbot did not provide any comments on the merits of the case; it was used solely for understanding bail jurisprudence.

 

Justice D.Y. Chandrachud, the current Chief Justice of India, is known for emphasising the need to integrate AI into the Indian judicial system. AI and NLP systems have been used in the Supreme Court for transcribing hearings since February 21, 2023. The first AI-provided transcript was produced in the Maharashtra political controversy case between the Chief Minister of Maharashtra, Eknath Shinde, and the former Chief Minister, Uddhav Thackeray.[9]

Additionally, initiatives like SUPACE (Supreme Court Portal for Assistance in Court’s Efficiency) and SUVAS (Supreme Court Vidhik Anuvaad Software) are examples of AI in the judicial system: SUPACE aids judges in their research work, and SUVAS helps translate documents.

6) Have there been any instances where an AI system made a decision that had legal consequences?

 

Instances have indeed occurred where AI systems have made decisions with legal impact. A notable illustration involves the integration of AI within criminal justice systems to assess risk and recommend sentences. These algorithms sift through diverse factors to forecast an individual's likelihood of future criminal activity or to propose suitable sentencing. However, questions have emerged regarding the equity and transparency of such algorithms, which may carry biases entrenched in historical data or lack adequate oversight.
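As a purely hypothetical sketch of the general “many factors in, one score out” structure of such risk-assessment tools (the factors and weights below are invented; COMPAS's actual model is proprietary and unrelated):

```python
# Hypothetical risk score: a weighted sum of invented factors squashed
# onto a 1-10 scale. Illustrates the structure only, not any real tool.
import math

WEIGHTS = {
    "prior_arrests": 0.35,
    "age_at_first_offence": -0.04,
    "employment_gap_years": 0.20,
}

def risk_score(factors: dict) -> int:
    z = sum(WEIGHTS[name] * value for name, value in factors.items())
    return max(1, min(10, round(10 / (1 + math.exp(-z)))))

print(risk_score({"prior_arrests": 3,
                  "age_at_first_offence": 19,
                  "employment_gap_years": 2}))  # -> 7
```

The due-process concern is visible even in this toy: a defendant handed such a score can rarely inspect the weights or contest the inputs behind it.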

 

In 2016, a significant legal precedent was set in Wisconsin when a court sanctioned the use of COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), a risk-assessment algorithm, in sentencing determinations. This decision spotlighted the ethical implications of entrusting legal proceedings to AI, particularly concerning due process and discrimination.

 

Another instance unfolds in the realm of autonomous vehicles, where decisions made by AI systems during operation carry legal consequences in the event of accidents or injuries. This raises questions about liability, placing scrutiny on the manufacturer of the AI system, the vehicle owner, or the developers of the software.

 

Moreover, the deployment of AI-driven content moderation on online platforms has sparked legal disputes touching on matters such as freedom of speech and censorship. The actions taken by AI algorithms to suppress or regulate content can impinge upon individuals' rights, provoking inquiries into accountability and transparency.

 

These scenarios underscore the escalating intersection between AI technology and the legal system, and the need for reliable regulatory frameworks, ethical guidelines, and accountability measures to navigate the legal implications of AI decision-making.

 

7) How can the legal system mitigate the risk of legal liability when using AI in decision-making?

 

The legal system can employ several strategies to mitigate the risk of legal liability associated with using AI in decision-making processes. Firstly, regulatory frameworks can be established to govern the development, deployment, and operation of AI systems. These regulations often outline specific requirements for transparency, fairness, accountability, and data privacy. Compliance with these regulations helps ensure that AI systems are developed and used responsibly, reducing the risk of legal liabilities.

 

Secondly, legal contracts and agreements can be drafted to allocate liability among the parties involved in AI systems. For instance, contracts between AI developers, manufacturers, service providers, and users may include clauses defining responsibilities and liabilities in case of errors, malfunctions, or adverse outcomes arising from AI decision-making. Clear contractual arrangements help clarify the roles and obligations of each party, minimizing ambiguity and potential disputes regarding liability.

 

Additionally, insurance mechanisms can be developed to address the unique risks associated with AI technologies. Insurance policies tailored for AI-related liabilities can provide financial protection against legal claims resulting from AI decision-making processes gone awry. These policies may cover damages, legal expenses, and other costs incurred due to lawsuits or regulatory actions related to AI use.

 

Overall, the legal system can employ a combination of regulatory measures, contractual agreements, insurance mechanisms, and monitoring practices to mitigate the risk of legal liability when using AI in decision-making contexts. These efforts aim to promote responsible AI governance, enhance accountability, and foster trust in AI technologies within legal frameworks.

 

8) Are there any privacy concerns related to the data used by the AI system?

 

Yes, privacy concerns are significant when it comes to the data used by AI systems. AI often relies on vast amounts of data, including personal information, to train and improve its algorithms. This data can range from sensitive medical records and financial transactions to social media posts and browsing history. 

 

One major concern is the potential for unauthorized access, misuse, or exploitation of this data, leading to privacy breaches and violations. Improper handling of personal data by AI systems can result in identity theft, financial fraud, discrimination, and other harmful outcomes for individuals.

 

Moreover, there is a risk of data aggregation and re-identification, where seemingly anonymous data can be combined with other sources to identify individuals, compromising their privacy. Additionally, AI systems may unintentionally perpetuate biases present in their training data, leading to discriminatory outcomes, especially in sensitive areas like hiring, lending, and law enforcement.
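Re-identification can be illustrated with a minimal “linkage attack” sketch (all records below are invented): a dataset stripped of names but retaining quasi-identifiers such as postcode, birth year, and sex can be joined against a public record to recover identities.

```python
# Invented example of a linkage attack: joining an "anonymised" dataset
# with a public one on shared quasi-identifiers re-identifies a person.
anonymised_cases = [
    {"postcode": "600001", "birth_year": 1984, "sex": "F",
     "sensitive_note": "prior conviction record"},
]
public_roll = [
    {"name": "A. Example", "postcode": "600001",
     "birth_year": 1984, "sex": "F"},
]

QUASI_IDENTIFIERS = ("postcode", "birth_year", "sex")

for case in anonymised_cases:
    for person in public_roll:
        if all(case[k] == person[k] for k in QUASI_IDENTIFIERS):
            print(person["name"], "->", case["sensitive_note"])
```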

 

Addressing these privacy concerns requires implementing effective data protection measures, such as encryption, anonymization, access controls, and transparency about data usage. Strong regulatory frameworks, like the GDPR in Europe and similar laws elsewhere, also play a crucial role in safeguarding individuals' privacy rights in the context of AI-driven data processing.
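As one small sketch of such a measure (the field names and key handling are assumptions for illustration), direct identifiers can be pseudonymised with a keyed hash before records enter an AI pipeline, so the mapping cannot be reversed without the secret key:

```python
# Pseudonymise direct identifiers with an HMAC keyed hash before any
# AI processing; only holders of the secret key can link pseudonyms back.
import hashlib
import hmac

SECRET_KEY = b"example-key-kept-in-a-secure-vault"  # placeholder secret

def pseudonymise(value: str) -> str:
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

record = {"name": "A. Example", "phone": "0000000000", "facts": "..."}
safe_record = {**record,
               "name": pseudonymise(record["name"]),
               "phone": pseudonymise(record["phone"])}
print(safe_record)  # identifiers replaced; case facts intact
```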

 

9) How can the legal system ensure the accuracy and reliability of data used by the AI system?

The legal system can ensure the accuracy and reliability of the data used by AI systems primarily through regulatory frameworks and oversight mechanisms:

 

1. Data Protection Laws: Legal frameworks, such as the General Data Protection Regulation (GDPR) in Europe and similar laws in other jurisdictions, mandate companies to adhere to strict standards for data accuracy and reliability. These laws require organizations to collect, process, and store personal data lawfully, fairly, and transparently, ensuring its accuracy and relevance to the intended purposes.

 

2. Data Governance Requirements: Legal regulations can impose data governance requirements on organisations, specifying measures for data quality assurance, validation, and verification. Companies can be obliged to implement policies and procedures for maintaining data accuracy and reliability throughout the data lifecycle, including collection, storage, processing, and sharing.

 

3. Accountability Mechanisms: Legal frameworks can hold organisations accountable for the accuracy and reliability of the data used by AI systems. Companies are subject to regulatory scrutiny and potential legal consequences for inaccuracies, biases, or misuse of data that could result in harm to individuals or violations of privacy rights.

 

4. Regulatory Oversight: Regulatory authorities can be given responsibility for overseeing compliance with data protection and privacy laws, with the authority to investigate complaints, conduct audits, and impose penalties on organisations found to be in breach of data accuracy and reliability requirements.

 

Through these mechanisms, the legal system can ensure that AI systems operate on accurate and reliable data, thereby mitigating risks of errors, biases, and adverse consequences for individuals and society.

 

 

PRIVACY CONCERNS WHEN IT COMES TO AI:

Privacy can be affected in several ways when AI is used in the decision-making process of a company. In the landmark judgment of Justice K.S. Puttaswamy (Retd.) v. Union of India, delivered on 26 September 2018,[10] the right to privacy was held to be a fundamental right under Article 21. The incorporation of artificial intelligence comes with privacy concerns. Some of these concerns would be:

 

• Data Collection: AI systems rely on large amounts of data to learn and make decisions. When companies collect and use this data, they may collect personal information that can identify individuals, such as names, addresses, and phone numbers. This can potentially infringe on individuals' privacy rights.

 

• Data Storage: Companies need to store large amounts of data to feed into AI models, and this data can include sensitive information about individuals. If this data is not stored securely, it can be vulnerable to cyberattacks, data breaches, and unauthorized access, which can further jeopardize individuals' privacy.

 

• Data Sharing: Companies may share their data with third-party service providers, contractors, or other partners. This can increase the risk of personal data being shared or used in ways that are not transparent or compliant with privacy laws.

 

• Decision-Making Algorithms: The algorithms used in AI decision-making can be opaque and difficult to understand, making it challenging for individuals to understand how decisions are being made and whether their data is being used appropriately. This lack of transparency can erode trust in the company and its use of AI.

 

To mitigate these privacy risks, companies should take steps such as ensuring transparency and accountability in their data collection, storage, and sharing practices, implementing strong cybersecurity measures to protect sensitive data, and ensuring that AI decision-making is explainable and transparent. Additionally, companies should ensure that they are compliant with relevant privacy laws and regulations.
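As a minimal sketch of one such cybersecurity measure (the record contents are invented), sensitive client data can be encrypted at rest with the widely used Python `cryptography` library, so that a database breach alone does not expose the data:

```python
# Encrypt sensitive records before storage; a stolen database dump is
# unreadable without the key. Requires: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, load this from a key vault
cipher = Fernet(key)

client_record = b'{"name": "A. Example", "matter": "bail application"}'
stored_token = cipher.encrypt(client_record)   # safe to store at rest
print(cipher.decrypt(stored_token))            # recoverable only with key
```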

 

Discrimination can occur when AI is used in the decision-making process in the legal system. Some of the ways in which it can occur are:

        

1. Biased Data:

In the realm of legal systems, the integration of artificial intelligence (AI) into decision-making processes has raised concerns about the potential perpetuation or exacerbation of discrimination. One fundamental issue lies in the reliance of AI systems on data to inform their decision-making. When AI algorithms are trained on biased or incomplete datasets, they can inadvertently perpetuate and even amplify existing biases, resulting in discriminatory outcomes that disproportionately affect certain groups.

 

At the heart of this problem is the concept of biased data. Historical data used to train AI models may reflect and encode societal biases and prejudices that have permeated past decision-making processes. For instance, if historical arrest records, judicial decisions, or sentencing outcomes exhibit biases against certain demographic groups—such as racial minorities or individuals from marginalized communities—AI algorithms trained on this data will learn and internalize these biases. Consequently, when these algorithms are deployed to assist in legal decision-making, they may replicate the discriminatory patterns present in the training data.

 

One manifestation of biased data in the legal context is the continuation of systemic inequalities. For instance, if past judicial decisions have disproportionately targeted and penalized individuals from specific racial or socioeconomic backgrounds, an AI system trained on such data may exhibit a similar tendency to disproportionately penalize members of those groups in its decision-making. This perpetuates existing disparities within the legal system, further entrenching systemic inequalities rather than mitigating them.

 

Moreover, biased data can lead to the reinforcement of stereotypes and prejudices against certain groups. AI algorithms trained on biased data may learn to associate certain characteristics or attributes with demographic groups, thereby reinforcing harmful stereotypes. Consequently, individuals belonging to these groups may face unjust treatment based on erroneous assumptions made by AI systems, worsening the marginalization and discrimination they experience within the legal system.

 

Addressing the issue of biased data in AI-driven decision-making within the legal system requires proactive measures to identify, mitigate, and rectify biases in training data. This may involve employing techniques such as data preprocessing to detect and mitigate biases, diversifying training datasets to ensure representation from all demographic groups, and implementing fairness-aware algorithms that prioritize equitable outcomes. Additionally, ongoing monitoring and evaluation of AI systems' performance in real-world contexts are essential to detect and address any emerging biases or discriminatory patterns.
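A minimal sketch of what such ongoing monitoring can look like (the audit data and interpretation are invented): one common check computes the demographic parity difference, the gap in favourable-outcome rates between groups, over a system's decisions.

```python
# Demographic parity difference: the gap between groups' rates of
# favourable outcomes. A large gap is a signal to investigate, not proof.
def demographic_parity_difference(outcomes, groups):
    totals = {}
    for outcome, group in zip(outcomes, groups):
        favourable, count = totals.get(group, (0, 0))
        totals[group] = (favourable + outcome, count + 1)
    rates = [favourable / count for favourable, count in totals.values()]
    return max(rates) - min(rates)

# Invented audit sample: 1 = favourable decision.
outcomes = [1, 0, 1, 1, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(outcomes, groups))  # 0.75 - 0.25 = 0.5
```

Fairness toolkits such as Fairlearn package checks of this kind, but even a hand-rolled audit run on every model release makes the monitoring requirement concrete.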


 

2. Algorithmic Bias:

Algorithmic bias is when computers make unfair decisions because they are programmed with biases. This can happen on purpose or by accident. When people make AI systems, their own biases can influence the programming. Or, if the AI learns from biased information, it can also become biased. For example, if someone makes an AI for hiring people and they prefer one gender over another, the AI might also start preferring that gender.

 

Sometimes, bias is put into AI systems on purpose. This means that the person creating the AI wants it to favour certain groups over others. For instance, if someone wants to promote a certain race or religion, they might make the AI show favouritism towards that group.

 

Other times, bias happens accidentally. This means that the person creating the AI did not mean to make it biased, but it still turns out that way. This can happen because the AI is trained using information that already has biases in it. For example, if the AI is trained using old data that favoured one group over another, it might continue favouring that group even if it is not fair.

 

When AI systems have biases, it can lead to unfair outcomes, especially in legal decisions. For example, if AI is used to predict who might commit a crime, it might unfairly target certain groups more than others. This can make existing inequalities worse and lead to unfair treatment in the legal system.

 

To fix this problem, we need to make sure AI systems are fair for everyone. This means checking the data we use to train AI to make sure it is not biased. We also need to have rules in place to make sure people creating AI do not put their own biases into it. And it is important to have a diverse group of people working on AI projects to make sure different perspectives are considered.

 

By making AI fairer, we can help ensure that everyone is treated equally, no matter their race, gender, or background. This is important for creating a just and equitable society where everyone has a fair chance.


 

3. Lack of Diversity:

When AI is used in the legal system to make decisions, it can sometimes lead to unfair treatment. One reason for this is the lack of diversity in the people and the data used to create the AI.

 

Let us break it down. Lack of diversity means that the AI system does not have enough variety in the information it uses or the people who make it. This lack of diversity can cause problems because the AI might not consider the perspectives and experiences of different groups of people.

 

For example, imagine a team of developers is creating an AI system to help judges decide sentences for crimes. If everyone on the team comes from a similar background and has similar experiences, they might not think about how their decisions could affect people from different backgrounds. This could lead to the AI making decisions that unfairly target certain groups of people.

 

Now, about the data used to train the AI. If the data only includes information about certain groups of people, it will not give the full picture. For instance, if the AI is trained using data that mostly represents one race or gender, it might not understand the experiences of people from other races or genders. This could lead to the AI making decisions that are biased against those groups.

 

To address this issue, we need to make sure that the people creating AI systems come from a variety of backgrounds and experiences. This means encouraging more diversity in the tech industry and making sure that everyone has a seat at the table when decisions are being made about AI.

 

We also need to ensure that the data used to train AI systems is diverse and representative of all groups in society. This means collecting data from a wide range of sources and making sure that it includes information about people from different backgrounds and experiences. By doing this, we can help prevent AI systems from learning and perpetuating biases that exist in society.

 

Ultimately, it is important to recognize that lack of diversity in AI can have serious consequences for fairness and equality in the legal system. By addressing this issue and taking steps to promote diversity and inclusion in AI development, we can work towards creating AI systems that are fair and equitable for all.


 

4. Lack of Oversight:

When a company uses AI to make decisions, discrimination can happen in a few different ways. One way is when there's not enough oversight.

 

So, what does "lack of oversight" mean? Well, it means that there is not enough human supervision of the AI's decision-making process. For example, if you have a babysitter watching over kids, they make sure everything goes smoothly and no one gets hurt. In the same way, oversight is like having someone watching over the AI to make sure it is making fair decisions.

 

Now, why does lack of oversight matter? Without enough human oversight, the AI might make decisions that are unfair or discriminatory. This could happen because the AI is trained on data that is biased or because it is programmed in a way that favours certain groups over others.

 

For example, let us say a company uses AI to screen job applications. If there's not enough human oversight, the AI might unfairly reject candidates based on factors like their race, gender, or age. This could lead to discrimination and unfair treatment of certain groups of people.

 

So, how can this problem be fixed? One solution is to make sure that there is always someone keeping an eye on the AI's decisions. This could be done by setting up a system where human supervisors regularly review the AI's outputs and intervene if they spot any unfairness or bias.
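A minimal sketch of such an oversight gate (the thresholds and fields are invented for illustration): decisions the model is unsure about, or that rely on sensitive attributes, are routed to a human reviewer instead of taking effect automatically.

```python
# Route AI outputs to human review unless the model is confident and no
# sensitivity flag is raised. Thresholds here are purely illustrative.
from dataclasses import dataclass

@dataclass
class AIDecision:
    outcome: str
    confidence: float             # model's self-reported confidence, 0..1
    uses_sensitive_attribute: bool

def route(decision: AIDecision) -> str:
    if decision.uses_sensitive_attribute or decision.confidence < 0.9:
        return "human_review_queue"
    return "auto_apply_with_audit_log"

print(route(AIDecision("reject", 0.95, uses_sensitive_attribute=True)))
print(route(AIDecision("approve", 0.97, uses_sensitive_attribute=False)))
```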

 

Additionally, companies can establish clear guidelines and policies for using AI in decision-making, including mechanisms for addressing complaints or concerns about discrimination. By creating a culture of accountability and transparency, companies can help ensure that their AI systems are making fair and unbiased decisions.

 

Ultimately, preventing discrimination in AI decision-making requires a combination of human oversight, technical safeguards, and clear policies and guidelines. By taking these steps, companies can harness the power of AI while minimizing the risk of unfairness and discrimination.

 

 

VIEWS OF JUDGES ON AI:

The Indian judiciary has taken a more progressive view of technology in the system since the COVID-19 pandemic. Justice D.Y. Chandrachud has taken steps to facilitate access to the courts through video conferencing and online submissions. Court proceedings are now live-streamed in the public domain, which has increased trust in the judiciary. The Madras High Court cleared an exceptional number of cases after adopting technology between 2020 and 2021. Incorporating AI as an aid to judges and advocates seems the obvious next step.

The current CJI, D.Y. Chandrachud, emphasised the need to maintain technological progress and hybrid hearings at the National Conference on Digitisation in Odisha on May 6, 2023.

On Constitution Day in 2019, Former CJI Sharad Arvind Bobde emphasized the need to implement AI in the judicial system to remove repetitive tasks.[11]

Former Justice Nageshwara Rao who led the Supreme Court AI committee has stated that AI should be used to accelerate the justice delivery process by utilizing it for administrative functions.[12]

The AI committee's report identifies applications of AI in the judicial system; SUVAS and SUPACE were developed based on its recommendations. A Detailed Project Report has been approved for Phase 3 of the e-Courts project, which includes incorporating AI and blockchain technology.

Two more areas of implementation were also identified. First, the use of AI on the administrative side could help with efficient case tracking and cash-flow management and facilitate policy decisions. Second, the report emphasised the need to explore the potential of AI in other avenues: it could bring faster judicial decision-making by streamlining information regarding geography, topography, and confusion over customary law and local special laws, helping reduce the number of cases pending in these areas.[13]

 

CONCLUSION:

In conclusion, the integration of artificial intelligence (AI) into decision-making processes within the legal system presents both opportunities and challenges. While AI holds the potential to enhance efficiency, accuracy, and accessibility in legal proceedings, it also raises significant legal and ethical considerations that must be addressed to ensure fairness, transparency, and accountability.

 

The legal implications of AI decision-making encompass various concerns, including discrimination, privacy violations, intellectual property rights, liability issues, and ethical considerations. Discrimination can arise from biased data, algorithmic biases, lack of diversity, and insufficient oversight, leading to unfair outcomes and exacerbating existing inequalities within the legal system.

 

 

 


[1] Thomson Reuters, "The Future of Professionals" (accessed March 23, 2024) https://www.thomsonreuters.com/en/campaigns/future-of-professionals.html

[2]  India AI Task Force, "National Strategy for Artificial Intelligence" (accessed March 25, 2024) https://indiaai.gov.in/research-reports/national-strategy-for-artificial-intelligence/

[3]  Digital Personal Data Protection Act, 2023 (DPDP Act)

[4]  Ministry of Electronics and Information Technology, "Artificial Intelligence Committees Reports" (accessed March 22, 2024) https://www.meity.gov.in/artificial-intelligence-committees-reports

[5] LITD 30 New Standards List

[6] Zahira Habibullah Sheikh and ors. v. State of Gujarat and Ors AIR 2006 SC 1367

[7] Constitution of India, Article 14

[8] Jaswinder Singh v. State of Punjab 2023:PHHC:044541

[9] "Supreme Court of India Uses AI to Transcribe Live Proceedings" (accessed March 25, 2024) https://indiaai.gov.in/news/supreme-court-of-india-uses-ai-to-transcribe-live-proceedings

[10] Justice K.S. Puttaswamy (Retd.) v. Union of India, AIR 2018 SC (SUPP) 1841; (2019) 1 SCC 1

[11] https://theprint.in/judiciary/ai-can-improve-judicial-systems-efficiency-full-text-of-cji-bobdes-constitution-day-speech/326893/

[12] https://vidhilegalpolicy.in/wp-content/uploads/2021/04/Responsible-AI-in-the-Indian-Justice-System-A-Strategy-Paper.pdf

[13] Government of India, Ministry of Law and Justice (Department of Justice), Report on Use of Artificial Intelligence Tools in Judicial System
