Humanity must shape technology before technology reshapes humanity by Brandon Hamber and Sophia Devlin
AI and Justice (AI generated image: Brandon Hamber)
We are living through a moment in which artificial intelligence is rapidly reshaping the world around us: justice systems, labour markets, security practices, global governance structures, and how we make war and, potentially, peace.
Technology is reshaping the everyday ways people learn, connect, and express themselves. It is profoundly changing how we see others, how we connect in new spaces, and how we get to know, or think we know, one another. This is not a minor change; arguably, the fundamental nature of relationships is changing, between humans as well as between humans and machines.
There is also a relationship between technology and conflict. We have seen digital technologies fuel division, manipulate information, and entrench inequalities. At the same time, we have witnessed them facilitate dialogue, improve connection and knowledge about others, support early warning systems, and create new tools for accountability and participatory governance. Drones, for example, can unleash destructive military power, but they can also track the movement of people under threat, map atrocities, and help us monitor and understand the impact of climate change in order to improve crop yields and alleviate poverty.
AI intensifies all these dynamics.
The impact of all this is rapid, diffuse, and far-reaching. Its consequences, however, are also uneven and deeply political. The current financial investment in AI is unthinkably enormous. The resources needed to keep AI systems functioning and expanding are environmentally destructive. A race is also underway between the various tech titans and governments to claim the spoils.
In his book Human Compatible: Artificial Intelligence and the Problem of Control (2019), the prominent AI researcher Stuart Russell warns that such a race will inevitably lead to corner-cutting, safety risks, and poor regulation, creating the potential for autonomous AI to produce catastrophic outcomes for humans that we did not take the time to consider properly.
As with any powerful technology, AI carries within it both immense promise and considerable risk.
We don’t want to spoil your weekend TV binge, but there’s a scene in the recent Apple TV drama Pluribus that might be useful here, at least for those not steeped in some of the AI debates. The show involves a hive-mind in which humanity’s collective knowledge is shared, allowing anyone to perform complex tasks like flying a plane or conducting open-heart surgery. The protagonist, Carol, however, is not part of this hive. Yet the hive seems determined to service her every need.
Despite its ability to efficiently meet and even predict her needs, Carol’s frustration at this new world leads her, in one scene, to jokingly request a hand grenade. Carol’s minder (called a chaperone in the series) arrives with the grenade and, apologising for taking a bit of time to deliver it, notes: “We thought you were probably being sarcastic, but we didn’t want to take the chance. Were you being sarcastic?”. The minder checks again whether Carol truly wants the grenade, to which Carol says yes. The grenade is handed over with a final caution: “Please, be careful with that”.
Spoiler alert: it does not end well.
While the show’s creators insist it is not about AI, it can be read as a metaphor for a super-intelligent yet compliant, context-limited AI that follows commands without considering ethical implications or downstream consequences. At best, it depicts an AI system with limited guardrails.
Most importantly, it is not just the ethical limits of hive-minds that are problematic. The grenade scene also highlights Carol’s realisation that, as a human user with access to an all-knowing, obedient partner, she could exploit the hive’s weaknesses for her own gain. She double-checks the limits later in the show, asking the hive-mind whether it will deliver an atomic bomb if she asks. After a few paltry attempts to dissuade her, the answer is once again ‘Yes’.
But as amusing as this thought experiment is, for those of us who work in peacebuilding, reconciliation, transitional justice, and post-conflict reconstruction, these are not abstract concerns. How AI can and cannot be used, today and into the future, will have real-world consequences.
Furthermore, although Carol’s realisation that the hive could be exploited highlights how these technologies can be misused by humans, the hive-mind she has confronted up to this point in the show appears largely docile, making only a limited number of decisions itself, seemingly with the sole aim of meeting her needs.
However, AI will not be passive: it can learn, generate new ideas, and initiate actions independently, and this functionality is becoming more powerful every day. AI is not simply a tool to be used for good or ill by humans. As Stuart Russell and Peter Norvig observe in their book Artificial Intelligence: A Modern Approach (2020), AI is best understood as an agent acting on what it perceives in different environments.
The risk, therefore, is not only AI assisting Carol to acquire an atomic bomb, but AI independently acting in problematic ways. As the historian Yuval Noah Harari said in a recent interview: “A hammer is a tool. An atom bomb is a tool. You decide to start a war and who to bomb. It doesn’t walk over there and decide to detonate itself. AI can do that”.
Furthermore, it is not only through its capacity to create harm in conflict-ridden contexts that AI matters. In fragile and post-conflict societies, the stakes regarding AI are also extraordinarily high. These are environments where trust in institutions is often low, social cohesion is delicate, democracy is fragile, and the legacies of violence continue to shape daily life. Introducing AI tools, whether in policing, welfare allocation, border management, education, or political communication, without deep ethical consideration and the integration of human rights risks reinforcing structural harms and undermining hard-won peace. In such contexts, a poorly designed or unregulated algorithm can have consequences far beyond its technical function: it can influence who is heard, who is marginalised, and whose rights are upheld or violated.
On the positive side, AI could strengthen peace processes. It could support equitable access to services, enhance the monitoring of human rights violations, enable more inclusive participation in policy and democratic processes, and help rebuild trust in institutions through responsible, rights-respecting governance and the efficient distribution of resources. Arguably, AI could guide us in making the right decisions about peace and in maximising measures to prevent harm and strengthen the non-recurrence of violence.
But, to harness the positive potential of AI, we must first recognise the importance of working together for social good in an interdisciplinary manner rather than unthinkingly racing towards developing AI for self-gain or advantage. This collaboration is essential to foster technological ecosystems that support, rather than undermine, dignity, rights, justice, and peace.
Secondly, we need to find the best way to ensure the safe development and deployment of AI. To achieve this, we must move beyond considering AI’s impact in a siloed or narrow manner, as we often do with other issues that can negatively affect people, such as food safety, aviation, or pharmaceutical regulation. We need to recognise AI’s potential to alter human relationships, change the nature of war and peace, and affect the very existence of our species. Therefore, we must consider the implications of AI within a much broader context.
Arguably, a human rights-based approach to AI serves as an ideal starting point.
The draft “Munich Convention on AI, Data and Human Rights,” a collaborative effort initiated by the Institute for Ethics in Artificial Intelligence (IEAI) and Globethics, serves as an excellent foundation on which to build. Drawing inspiration from existing human rights frameworks, including seminal documents such as the Universal Declaration of Human Rights and international human rights law more broadly, the Convention provides a framework for integrating human rights into a global AI context. After contextualising and defining AI, it advocates a risk-based approach that aims to safeguard personal data, ensure accessibility and transparency, promote fairness and inclusivity, support informed decision-making for users, and minimise bias and algorithmic harm. In short, it seeks to uphold the critical protections and freedoms associated with human rights, while also advocating for accountability and redress concerning any adverse human rights impacts of AI.
Such a human rights-based approach puts humans at the centre of how we think about AI’s impact, whether in our daily lives or in peacebuilding contexts. It also creates obligations for governments and businesses to protect users, while using technology to promote the autonomy and enjoyment that come with guaranteeing fundamental and universal human rights for all. This is an important starting point for ensuring that humans remain at the centre of any AI debate, and that humankind can shape technology before technology reshapes humanity in ways we cannot reverse.
This article was written by Brandon Hamber and Sophia Devlin.
Professor Brandon Hamber is John Hume and Tip O’Neill Chair in Peace at INCORE, Ulster University and Director of Innovation at TechEthics. Sophia Devlin is the CEO of TechEthics.
