Ethics. So often treated as an afterthought in the rush to innovate. Something to be tacked on once the big ideas are already in motion rather than part of the process itself. But what if ethics isn’t meant to trail behind innovation but to quietly guide it from the beginning? Not so early that it stifles development, but early enough to help steer progress in a more thoughtful, responsible direction.
In the case of AI, that means ethics should be at the core of how we develop, deploy, and use every single AI system. Basically, before diving deep into technical specs, regulations, and industry standards, we should ask: What is the right thing to do, and how can that guide the choices we make along the way? Because while laws tell us what we can do and professional standards advise how to do it, only ethics challenges us to consider what we should do.
Why Should Ethics Be at the Heart of AI Development?
To answer this question, let’s borrow a lens from the field of medicine. Doctors rely on an ethical framework that helps them make difficult decisions. This framework is built on the following key principles: beneficence (doing good), non-maleficence (avoiding harm), autonomy (respecting individual choice), and justice (ensuring fairness). These pillars don’t replace laws or professional expertise; they just guide them.
Now, bringing this back to AI, we have to ask: Shouldn’t AI developers work from a similar ethical foundation? After all, AI is currently shaping how we live, work, and interact. Besides already being used across a wide range of vital sectors, including healthcare, law, education, engineering, politics, and social services, AI is only set to become more widespread over time.
Given how deeply AI is becoming embedded in every part of society, isn’t it time we had a clear, well-developed, and specifically tailored moral compass to guide decisions and behaviour at every stage of its development and use?
Beyond Technicalities: Ethics as Substance, Not Surface
If we look at the medical and legal fields, one thing we see is that causing harm through omission is still considered harm. In AI, this kind of omission can occur when too much attention is given to “form” rather than “substance”. In fact, this is quite a common pattern in the AI space, with many developers focusing more on how to do something (e.g. how to optimise a system, how to generate more convincing images, how to make a chatbot sound more human) rather than on why they’re doing it or what the consequences might be.
This overemphasis on form over substance is also evident in how we approach legislative initiatives. While many AI experts and others outside the tech world agree that proper AI governance is crucial, focusing solely on legal compliance could make us overlook the deeper ethical issues. This is a flawed approach, as legality and ethics aren’t the same. As in other fields, having strict regulations in place doesn’t guarantee ethical behaviour, neither from the AI tools themselves nor from those who develop and use them. History is full of examples—from medical experiments that were once permitted to discriminatory laws that remained in place for decades—where legality failed to protect people’s rights or dignity.
Building on the same line of reasoning, it’s entirely possible to develop and release an AI system that fully complies with strict legislation yet still causes harm. That’s because, without strong ethical thinking behind the scenes, even the most advanced AI system can be misused, or worse, used exactly as intended but with damaging consequences.
This is where ethical theories can offer a helpful roadmap. Take consequentialism (the idea that we should judge actions by their outcomes), for example. It provides a useful way to assess whether an AI tool is likely to bring real benefits to society or cause widespread harm. But this isn’t the only perspective we need. Sometimes, we also have to ask whether something is inherently wrong, regardless of the outcome. That’s where deontology (the idea that some actions are right or wrong in themselves, whatever the result) comes in.
While both of these ethical theories, along with others, are valuable, we still need a broader ethical framework to guide our reasoning. Fortunately, there are already several well-established ethical guidelines and frameworks that offer useful direction, such as:
- The OECD Principles on AI, which emphasise fairness, transparency, accountability, and safety.
- The EU’s Ethics Guidelines for Trustworthy AI that highlight the importance of human oversight, privacy, technical robustness, and fairness.
- UNESCO’s Recommendation on the Ethics of AI, which focuses on human rights, sustainability, and global equity.
- The UK AI Code of Conduct that outlines key principles, such as safety, transparency, fairness, accountability, and redress.
- The AI Cyber Security Code of Practice, which sets out baseline cyber security principles to help secure AI systems and the organisations that develop and deploy them.
Although these frameworks provide a foundation for responsible AI development, the current speed and scale of AI advancement suggest that this foundation alone won’t be enough to meet the ethical challenges ahead. Which brings us to the question:
Just Because We Can, Should We?
To explore the ethical dilemmas in AI, let’s look at the history of medical ethics, a field marked by evolving challenges, a lack of early guidelines, public scandals, and the eventual introduction of ethical oversight. That history mirrors many of the issues we’re now seeing in the AI landscape. What follows applies the same narrative approach to AI, using both real and emerging examples:
Medical Ethics Cases & AI Ethics Parallel
The God Committee
Haemodialysis Selection
In the early 1960s, the Seattle Artificial Kidney Center, which was home to the world’s first outpatient dialysis unit, had only a handful of dialysis machines, with as few as three in operation at the time. Because demand far exceeded supply, a lay committee of seven citizens, later known as the “God Committee”, was tasked with deciding who would receive treatment. Their life-or-death decisions were based on subjective moral judgments, taking into account factors like age, occupation, and social worth, all in the absence of any formal ethical guidelines.
AI Resource Allocation and Access
Modern AI systems, particularly in healthcare, finance, and criminal justice, can have a direct impact on people’s lives by influencing access to essential resources. This includes decisions such as who is approved for a loan, offered housing, or receives a medical diagnosis. During the COVID-19 pandemic, for instance, some hospitals used AI-driven triage tools to determine which patients should receive ventilators, as these were in short supply at the time. In a different context, predictive policing algorithms are used to identify high-risk neighbourhoods. This often leads to increased surveillance in those areas, while other communities may receive less protection.
Ethical Issue: When strong ethical frameworks are lacking, algorithmic decisions risk reinforcing existing biases, potentially leading to discrimination or arbitrary harm.
Death v Personhood
Redefining Death for Organ Transplants
The development of organ transplants required a redefinition of death, shifting the standard from cardiopulmonary death (when the heart stops beating) to brain death (irreversible loss of brain function). This change sparked significant medical, legal, and ethical debates worldwide as it affected when organ donation could ethically and legally take place.
Redefining Intelligence, Personhood, and Agency in AI
AI challenges our understanding of intelligence, agency, and personhood. It raises questions about what truly counts as understanding or reasoning, whether AI can be held responsible for its actions, and if advanced systems, such as sentient-like LLMs or future AGI, should be granted rights, responsibilities, or protections.
Ethical Issue: All of this has sparked unprecedented debate about the legal recognition and ethical treatment of AI. Misclassifying AI systems could lead to harm, either to people or to the systems themselves through misuse or neglect.
Human Experimentation Lessons for Data
Tuskegee and Human Experimentation
The Tuskegee study, in which Black men with syphilis were observed for decades without their informed consent and denied effective treatment, is widely recognised as a major ethical failure. It led to significant reforms in research ethics and stronger protections for human participants.
How AI Uses Our Data and the Risks That Come With It
AI systems are often trained and tested using human data without proper informed consent. Examples include facial recognition technologies using data from people who have not agreed, widespread data scraping from the internet (which can include private messages and medical records), and the use of untested AI tools in sensitive areas, such as education, recruitment, welfare programmes, and the criminal justice system.
Ethical Issue: This raises important questions about informed consent, data ownership, dignity, and exploitation. It also highlights the urgent need for solid oversight and regulation to protect individuals and ensure ethical AI development, similar to the role played by Research Ethics Committees in medicine.
Medical Research Lessons for AI Ethics
Key Ethical Guidelines in Medical Research
The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, established in the US during the 1970s, played a crucial role in shaping key ethical guidelines. These included the requirement for informed consent and the introduction of Institutional Review Boards (IRBs). These measures are now central to ethical medical research worldwide, ensuring that participants are protected and studies receive ethical review before they begin.
Calls for AI Ethics Panels and Regulation
There is increasing support for the creation of national or global AI ethics commissions, independent assessments of algorithmic impact, and regulations similar to the EU AI Act.
Ethical Issue: As with medicine, ethical understanding and regulation are struggling to keep pace with technological advances. We need interdisciplinary panels to assess the societal risks posed by AI, such as bias, surveillance, unemployment, and disinformation, and to provide practical ethical standards alongside a strong legal framework.
Fair Access?
Fair Protection and Inclusion of Vulnerable Groups
In medical research, an ongoing ethical concern is how to protect vulnerable populations, such as children, the elderly, and those with limited autonomy, while still allowing them to take part in studies that could bring important health benefits to their communities.
Equitable AI Deployment
AI must not only avoid causing harm but also play an active role in promoting fairness and equity. We need to ask who benefits from AI and who might be left behind or harmed by its deployment. Are disadvantaged communities, such as rural, low-income, or minority groups, being excluded or unfairly targeted? And crucially, can AI be directed to serve the public good rather than simply advancing private profit?
Ethical Issue: There is a growing recognition that fair access, representation, and benefit-sharing must be built into the design and deployment of AI systems. Ensuring that all communities benefit from AI and that none are unfairly excluded or exploited is a moral imperative.
Just like the medical field in the past, AI is now at a crossroads where power is outpacing principles. What we choose to do next could shape the future for many generations, so ethics must not be treated as a mere tick-box exercise or a last-minute patch. It should guide us from the very beginning, shaping not only what we build and who benefits from it but also what we’re willing to risk. In practice, that means giving developers proper ethical training, ensuring the public has a real voice, and setting standards that prioritise morality, justice, and legality over simply achieving the desired outcomes.
Because the question is no longer whether we can make AI more powerful. We already know the answer to that. The real question is: What kind of world will we allow it to create?
What do you think?