
Artificial Intelligence – Philosophy, Ethical Issues and Regulations


AI is defined as the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.
Recent developments in Artificial Intelligence (AI) have generated a surge of interest from the media and the general public. As AI systems (e.g., robots, chatbots, avatars and other intelligent agents) move from being perceived as tools to being perceived as autonomous agents and teammates, an important focus of research and development is understanding the ethical impact of these systems.
What does it mean for an AI system to make a decision? What are the moral, societal and legal consequences of their actions and decisions? Can an AI system be held accountable for its actions? How can these systems be controlled once their learning capabilities bring them into states that are possibly only remotely linked to their initial, designed setup?
Should such autonomous innovation in commercial systems even be allowed, and how should their use and development be regulated? These and many other related questions are currently the focus of much attention. How well society and our systems are able to deal with these questions will largely determine our level of trust in AI and, ultimately, its impact on society and even its continued existence.
Contrary to popular perception, in which AI systems dominate the world and are mostly concerned with warfare, AI is already changing our daily lives, mostly in ways that improve human health, safety, and productivity.
To ensure that such dystopian futures do not become reality, these systems must be introduced in ways that build trust and understanding, and respect human and civil rights.
The need for ethical scrutiny in the development of intelligent interactive systems has become one of the most influential areas of research in recent years and has led to several initiatives from both researchers and practitioners.
Means are needed to integrate moral, societal, and legal values with technological developments in AI, both during the design process and as part of the deliberation algorithms employed by these systems.
AI's potential in biomedical research, medical education, and the delivery of health care seems limitless. With its robust ability to integrate and learn from large sets of clinical data, AI can serve roles in diagnosis, clinical decision-making, and personalized medicine. For example, AI-based diagnostic algorithms applied to mammograms are assisting in the detection of breast cancer, serving as a “second opinion” for radiologists. In addition, advanced virtual human avatars are capable of engaging in meaningful conversations, which has implications for the diagnosis and treatment of psychiatric diseases. AI applications also extend into the physical realm with robotic prostheses, physical task support systems, and mobile manipulators assisting in the delivery of telemedicine.
Nonetheless, this powerful technology creates a novel set of ethical challenges that must be identified and mitigated, since it has a tremendous capacity to threaten patient preference, safety, and privacy.
Safety is one of the biggest challenges for AI in healthcare. To use one well-publicized example, IBM Watson for Oncology uses AI algorithms to assess information from patients’ medical records and help physicians explore cancer treatment options for their patients. However, it has recently come under criticism for reportedly giving “unsafe and incorrect” recommendations for cancer treatments. The problem seems to lie in the training of Watson for Oncology: instead of using real patient data, the software was trained on only a small number of “synthetic” cancer cases, that is, cases devised by doctors at Memorial Sloan Kettering (MSK) Cancer Center. MSK has stated that errors occurred only as part of system testing and thus no incorrect treatment recommendation has been given to a real patient.
This real-life example has put the field in a negative light. It also shows that it is of utmost importance that AI systems are safe and effective. But how do we ensure that they keep their promises? To realize the potential of AI, stakeholders, particularly AI developers, need to ensure two key things: (1) the reliability and validity of the datasets and (2) transparency.
AI has the capability to improve healthcare not only in high-income settings but also in remote areas. However, any ML system or human-trained algorithm will only be as trustworthy, effective, and fair as the data it is trained on. AI therefore also carries a risk of bias, and thus of discrimination. It is vital that AI makers are aware of this risk and minimize potential biases at every stage of product development. In particular, they should consider the risk of bias when deciding on the ML techniques they want to use to train their algorithms, and should use high-quality, diverse datasets.
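As an illustration of the kind of check this implies, the sketch below audits a training dataset for demographic imbalance before any model is trained. It is a minimal, hypothetical example: the column names, the toy data, and the pandas-based audit function are assumptions made for illustration, not a prescribed pipeline.

```python
# Minimal sketch: auditing a clinical training dataset for demographic
# imbalance before it is used to train a model. Column names and data
# are hypothetical.
import pandas as pd

def audit_group_balance(df: pd.DataFrame, group_col: str, label_col: str) -> pd.DataFrame:
    """Summarize how many examples, and what share of positive labels,
    each demographic group contributes to the training data."""
    summary = df.groupby(group_col).agg(
        n_samples=(label_col, "size"),
        positive_rate=(label_col, "mean"),
    )
    summary["share_of_dataset"] = summary["n_samples"] / len(df)
    return summary

# Toy example; a real audit would repeat this for every attribute along
# which the model could plausibly discriminate (ethnicity, sex, age, site).
data = pd.DataFrame({
    "ethnicity": ["A", "A", "A", "A", "B", "B", "C"],
    "diagnosis": [1, 0, 1, 0, 1, 1, 0],
})
print(audit_group_balance(data, "ethnicity", "diagnosis"))
```

A group that contributes only a handful of samples, or whose positive rate differs sharply from the rest, is a signal that the dataset may need rebalancing or further data collection before training.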
The implementation of AI technologies in healthcare, especially tools with a direct clinical impact such as those aimed at supporting diagnosis or clinical decisions, is undoubtedly driving a paradigm shift. Such a change of perspective raises several major ethical issues, which are being heatedly discussed both in the scientific community and by regulatory agencies. Most AI technologies, notably the deep learning networks that now play a major role, appear as a black box to an external user. Although methods to visualize the inner structure and behaviour of AI tools have been presented, and more human-readable technologies such as decision trees are also being used, AI predictions appear largely to be determined by an obscure logic that cannot be understood or interpreted by a human observer. This limitation leads directly to the issue of accountability for decisions, which is now being debated at a regulatory level. In other words, if a prediction fails, for example in a case of misdiagnosis, it is critically important to determine whether the responsibility lies with the radiologist who used the AI system, with the device itself or with the manufacturer. This obscure nature also has severe implications for the marketing approval of novel AI tools, which require deeper testing and verification than other technologies, and thus longer time-to-market and higher costs.
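To make the contrast with a “black box” concrete, the sketch below trains a small, deliberately shallow decision tree and prints its learned rules as nested if/else conditions. It is an illustrative example only, using scikit-learn's bundled breast cancer dataset rather than any of the clinical systems discussed above.

```python
# Minimal sketch of a human-readable model: a shallow decision tree whose
# decision rules can be printed and inspected, unlike a deep network.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

dataset = load_breast_cancer()
X, y = dataset.data, dataset.target

# Limiting the depth keeps the rule set short enough for a human reviewer.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# export_text renders the learned rules as nested if/else conditions,
# the kind of traceable logic a deep network does not expose.
print(export_text(tree, feature_names=list(dataset.feature_names)))
```

Whether such interpretable models are accurate enough for a given clinical task is a separate question; the point here is only that their reasoning can be audited by a human observer.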
A second issue concerns possible biases in the predictions, which may be either intentional, that is, fraudulent, or unintended. Examples of intentional bias are a decision support tool (DST) preferentially promoting the use of drugs or devices from a specific manufacturer, or a tool designed to maximize quality metrics relevant to the hospital but not necessarily to optimize patients’ care. Unintended biases may be related to the scarce availability of data on some rare pathologies or phenotypes, which may then be insufficiently covered in the training dataset compared with more common conditions, or on ethnicities for which datasets are non-existent or limited. In addition, insufficient data collection efforts, for example privileging data sources that are easier to access, may also lead to unintended biases. To limit the impact of such issues, efforts toward the governance of AI are starting to be undertaken, with the final aim of building robust public trust. It should be noted that cultural differences between the European Union, the United States, and East Asian countries are likely to result in dramatically different attitudes from a regulatory and governance point of view.
The use of AI in healthcare also raises serious concerns about data privacy and security, due to the massive amount of clinical and imaging data required for training and validation of the tools, thus involving issues about data collection, transmission and storage, as well as informed consent.
In July 2017, the UK Information Commissioner’s Office (ICO) ruled that the Royal Free NHS Foundation Trust was in breach of the UK Data Protection Act 1998 when it provided the personal data of circa 1.6 million patients to Google DeepMind. The data sharing happened for the clinical safety testing of “Streams,” an app that aims to help with the diagnosis and detection of acute kidney injury. However, patients were not properly informed about the processing of their data as part of the test.
Although the Streams app does not use AI, this real-life example has highlighted the potential for harm to privacy rights when developing technological solutions. If patients and clinicians do not trust AIs, their successful integration into clinical practice will ultimately fail. It is fundamentally important to adequately inform patients about the processing of their data and foster an open dialog to promote trust.
Increased attention to the possible impact of future robotics and AI systems has been accompanied by warnings about the risk of a dystopian future as the complexity of these systems progresses further.
Reports such as the Robotics 2020 Multi-Annual Roadmap for Robotics in Europe and those of the International Federation of Robotics (IFR, 2017) have predicted an exponential increase in the number of robots in the future. Robots and autonomous systems are gradually expected to be deployed widely in society, including self-driving vehicles and service robots at work and at home. Ethical perspectives on AI and robotics should be addressed too. “Robo-ethics” concerns ethical problems that occur with robots, such as whether robots pose a threat to humans in the long or short run, whether some uses of robots are problematic (such as in healthcare or as ‘killer robots’ in war), and how robots should be designed so that they act ‘ethically’ (this last concern is also called machine ethics). Developers of such systems need to be aware of possible ethical issues, including avoiding misuse and allowing for human inspection of the functionality of the algorithms and systems, as pointed out by Bostrom, N., and Yudkowsky, E. (2014) in “The ethics of artificial intelligence”. The system itself should also be aware of unwanted circumstances that may result from the ethical decisions it considers.
Ethical considerations should be taken into account by the designers of robotic and AI systems, and the autonomous systems themselves must also be aware of the ethical implications of their actions. Outlining ethical guidelines and implementing them is a pressing current issue for which an elegant solution is still needed.
In all areas of application, AI reasoning must be able to take societal values and moral and ethical considerations into account, weigh the respective priorities of values held by different stakeholders in multiple multicultural contexts, explain its reasoning, and guarantee transparency. As the capabilities for autonomous decision-making grow, perhaps the most important issue to consider is the need to rethink responsibility. There is an urgent need to identify and formalize what autonomy and responsibility exactly mean when applied to machines. Whether one takes a moral-agent approach that places full responsibility with the developer, as advocated by some researchers, or an institutional, regulatory approach, the fact is that the chain of responsibility is getting longer. Definitions of control and responsibility are needed that can deal with a larger distance between human control and system autonomy.

The author is a research scholar of School of Computer Science at UPES, Uttarakhand and can be reached at [email protected]


