Wednesday, February 4, 2026

Irish President Catherine Connolly Speaks

Great to hear Irish President Catherine Connolly speak today at Ulster University. 

A takeaway: "in a world unfortunately that is ever-more consumed with war and militarism...a lesson that we should all tell the world: that peace is normal and that war is not normal and that peace is possible".

Full speech is here if you are interested.

Irish President Catherine Connolly, Ulster University, 4 Feb 2026

Friday, January 30, 2026

Trump’s ‘Board of Peace’ is really a Board of War

DONALD Trump launched his ‘Board of Peace’ at Davos 2026, with much fanfare, promising to “end decades of suffering, stop generations of hatred and bloodshed”. Initially intended to rebuild Gaza, its mission now appears much broader, marking critical global shifts and raising serious questions about what peace means at the global level.

Through his ‘Board of Peace’, Trump aims to redefine peace by promoting a ‘peace through strength’ approach.

The idea of peace as positive social relationships built on equality and justice has been cast aside. Instead, peace is viewed as a form of macro-security enforced by the most powerful through threats and coercion.

This version of peace resembles the international relations of decades past, where stability was thought to be maintained through balancing the power of nations. Security was guaranteed by fear rather than rules or ideas of mutually beneficial cooperation. Arguably, the concept of a balance of power has eroded over the past few decades through multilateral institutions such as the UN, the IMF and the World Bank, which have gained increasing global power.

Additionally, grassroots movements such as those advocating for LGBTQ+ rights and climate-change activism have often emerged from the margins, cutting across traditional forms of political power. These changes Trump and his cronies despise.

In Trump’s view, a return to power politics is necessary, with the US, as the biggest kid on the block, acting as the world’s self-interested policeman.

Inauguration of the Board of Peace at the World Economic Forum 2026 in Davos. Photo: WEF / Flickr / CC BY-NC-SA 4.0 

Peace is transactional or even extractive, with secondary gains for the US, such as oil or rare-earth minerals, flowing from the guarantee of security.

The ‘Board of Peace’ no doubt has been established to further this warped understanding. 

No-one really knows what it will ultimately do. But the trajectory of US foreign policy suggests it will operate more as a ‘Board of War’, legitimising global interventions mainly dictated by Trump, its executive chair.

It would not be surprising if Greenland is the first item on the agenda, with the board endorsing annexation to “promote” global peace. Iran and Cuba might be next.

Only one African country, Morocco, has been invited to join. This reflects Trump’s racist dismissal of African states as ‘shithole’ nations.

But from a more sinister perspective, Africa, rich in resources and riddled with conflict, also seems ripe for the ‘peace through strength’ approach. It could be awkward to be singled out for a Trump ‘peace intervention’ while sitting on the board.

The ‘Board of Peace’ also embodies Trump’s belief that existing global institutions fail to serve US interests and alternative structures are needed. Following his withdrawal from various international organisations, the proposed board aims to replace global bodies such as the UN. It will define its own agenda and legitimacy, centred around Trump, who is the chairman for life with veto power.

In Trump’s words at the launch, the board will “do pretty much whatever we want to do”.

All this reinforces the claim by Mark Carney, the prime minister of Canada, at the World Economic Forum’s annual meeting in Davos, that “the rules-based order is fading”.

The only way out of this, according to Carney, is that “middle powers must act together” in various forms of cooperation, underpinned by values such as respect for human rights, sustainable development and sovereignty.

This speech has been seen as a game-changer, particularly in liberal circles. Carney’s approach, however, misses a fundamental point.

While global institutions have provided a framework for a rules-based world, they have primarily benefited Western nations.

The alarm among Western politicians over Trump’s comments about Greenland reflects a delayed recognition of the power plays that developing nations have faced from the US and Europe for decades.

Western countries have often used the rules-based order to justify toppling governments in the name of democratic peace. They did little to stop the genocide in Gaza.

The International Criminal Court has primarily indicted African leaders, while the likes of Netanyahu remain largely protected. Permanent members of the UN Security Council frequently block resolutions that run counter to their interests.

If Carney and his coalition of ‘middle powers’ are serious about ushering in a new era, the normative aspect of their call for collective action requires serious scrutiny.

A radical rethink of the international order is not only overdue but also offers new opportunities.

For example, if the US is no longer interested in cooperating globally, bodies such as the Security Council could be dismantled and replaced with more equitable structures and power in global governance distributed more evenly. Such actions would create a different set of global levers and international players.

The danger right now, however, is that countries such as Canada and the UK – and, of course, the European Union – may become overly focused on the drive to reforge Western alliances while missing the bigger picture.

Nations around the world, including Pakistan, Indonesia, Turkey, Mexico, South Africa, India, and Brazil, are expanding their regional influence alongside the ambitions of China and Russia.

Trump’s national-interest-first approach could encourage other nations to waver in their commitments to the international order and even resort to force to resolve disputes. It is no surprise that some countries often critiqued for their human rights records eagerly joined Trump’s ‘Board of Peace’, despite the $1 billion joining fee.

They will be happy to be lackeys in the absence of anything else, or out of fear that their country might otherwise be next on Trump’s dinner table.

In this context, Carney’s vision of ‘middle-power’, Western-only alliances will do little to promote global stability.

Unless this new approach to partnerships is accompanied by genuine soul-searching about the biases, imbalances and gaps in international institutions, and is founded on renewed north-south cooperation, the world will merely replicate past mistakes.

If discussions on power are now the new norm, real change will only occur when global power dynamics are truly disrupted.

So, the discussion of new alliances should embrace rapidly developing nations and the inclusion and empowerment of the weakest.

This approach recognises that sustainable peace is built on participation and embedded in equality and shared access to power – it is not based on threats. Such broader devolution of power is exactly what Trump, as global bully boy, fears most.

Then again, maybe the ‘Board of Peace’ is just a classic Trump billion-dollar grift, to create a new ‘Board of Peace’ Peace Prize, with only one potential winner.

.  .  .

Originally published by Brandon Hamber in The Irish News, 28 January 2026

Also published in The Geopolitical Economist, here

Friday, December 19, 2025

AI and Human Rights

Humanity must shape technology before technology reshapes humanity by Brandon Hamber and Sophia Devlin

AI and Justice (AI generated image: Brandon Hamber)

We are living through a moment in which artificial intelligence is rapidly reshaping the world around us, whether in justice systems, labour markets, security practices, or global governance structures, and in how we make war and, potentially, peace.

Technology is reshaping the everyday ways people learn, connect, and express themselves. It is profoundly changing how we see others, how we connect in new spaces, and how we get to know, or even think we know, others. This is not a minor change; arguably, the fundamental nature of relationships is changing between humans as well as between humans and machines.

There is also a relationship between technology and conflict. We have seen digital technologies fuel division, manipulate information, and entrench inequalities. At the same time, we have witnessed them facilitate dialogue, improve connection and knowledge about others, support early warning systems, and create new tools for accountability and participatory governance. Drones, for example, can unleash destructive military power, but can also track the movement of people under threat or map atrocities and help us to better monitor and understand the impact of climate change to improve crop yields and alleviate poverty.

AI intensifies all these dynamics.

The impact of all this is rapid, diffuse, and far-reaching. The consequences, however, are also uneven and deeply political. The current financial investment in AI is unthinkably enormous. The resources needed to keep AI systems functioning and expanding are environmentally destructive. A race is also underway between the various tech titans and governments to claim the spoils.

Prominent AI researcher Stuart Russell, in his book Human Compatible: Artificial Intelligence and the Problem of Control (2019), has warned that such an AI race will inevitably lead to cut corners, safety risks, and poor regulation, raising the potential for autonomous AI to have catastrophic outcomes for humans that we did not take the time to consider properly.

As such, and with any powerful technology, AI carries within it both immense promise and considerable risk.

We don’t want to spoil your weekend TV binge, but there’s a scene in the recent Apple TV drama Pluribus that might be useful here, at least for those not steeped in some AI debates. The show involves a hive-mind through which humanity’s collective knowledge is shared, allowing anyone to perform complex tasks like flying a plane or conducting open-heart surgery. However, the protagonist, Carol, is not part of this hive. Yet the hive seems determined to service her every need.

Despite its ability to efficiently meet and even predict her needs, Carol’s frustration at this new world leads her, in one scene, to jokingly request a hand grenade. Carol’s minder (called a chaperone in the series) arrives with the grenade and, apologising for taking a bit of time to deliver it, notes: “We thought you were probably being sarcastic, but we didn’t want to take the chance. Were you being sarcastic?”. The minder checks again if Carol truly wants the grenade, to which Carol says yes. The grenade is handed over with the final caution: “Please, be careful with that”.

Spoiler alert: it does not end well.

While the show’s creators insist it is not about AI, the show could be seen as a metaphor for a super-intelligent yet compliant, context-limited AI that follows commands without considering ethical implications or downstream consequences. At best, it depicts an AI system with limited guardrails.

Most importantly, it is not just the ethical limits of hive-minds that are problematic. The grenade scene also highlights Carol’s realisation that, as a human user with access to an all-knowing, obedient partner, she could exploit the hive’s weaknesses for her own gain. She double-checks the limits later in the show, asking the hive-mind if it will deliver an atomic bomb if she asks. After a few paltry attempts to dissuade her, the answer is once again, ‘Yes’.

But as amusing as this thought experiment is, for those of us who work in peacebuilding, reconciliation, transitional justice, and post-conflict reconstruction, these are not abstract concerns. How AI can or cannot be used, today and projecting into the future, will have real-world consequences.

Furthermore, although Carol’s realisation that the hive could be exploited highlights how these technologies can be misused by humans, the hive-mind she has confronted to this point in the show appears largely docile, making only a limited number of decisions itself, seemingly with the sole aim of meeting Carol’s needs.

However, AI will not be passive — it can learn, generate new ideas, and initiate actions independently, with such functionality becoming increasingly powerful every day. AI is not simply a tool to be used for good or bad by humans. As Stuart Russell and Peter Norvig observe in their book Artificial Intelligence: A Modern Approach (2020), AI is best understood as an agent acting on what it perceives in different environments.

The risk of AI, therefore, is not only AI assisting Carol to acquire an atomic bomb, but AI independently acting in problematic ways. As historian Yuval Noah Harari said in a recent interview: “A hammer is a tool. An atom bomb is a tool. You decide to start a war and who to bomb. It doesn’t walk over there and decide to detonate itself. AI can do that”.

Furthermore, it is not only through its capacity to create harm in conflict-ridden contexts that AI matters. In fragile and post-conflict societies, the stakes regarding AI are also extraordinarily high. These are environments where trust in institutions is often low, social cohesion is delicate, democracy is fragile, and the legacies of violence continue to shape daily life. Introducing AI tools, whether in policing, welfare allocation, border management, education, or political communication, without deep ethical consideration and the integration of human rights risks reinforcing structural harms and undermining hard-won peace. In such contexts, a poorly designed or unregulated algorithm can have consequences far beyond its technical function: it can influence who is heard, who is marginalised, and whose rights are upheld or violated.

Positively, AI could strengthen peace processes. It could support equitable access to services, enhance effective monitoring of human rights violations, enable more inclusive participation in policy and democratic processes, and help to rebuild trust in institutions through responsible, rights-respecting governance and the efficient distribution of resources. Arguably, AI could guide us in making the right decisions about peace and in maximising measures to prevent harm and to strengthen the non-recurrence of violence.

But, to harness the positive potential of AI, we must first recognise the importance of working together for social good in an interdisciplinary manner rather than unthinkingly racing towards developing AI for self-gain or advantage. This collaboration is essential to foster technological ecosystems that support, rather than undermine, dignity, rights, justice, and peace.

Secondly, we need to find the best way to ensure the safe development and deployment of AI. To achieve this, we must move beyond considering AI’s impact in a siloed or narrow manner, as we often do with other issues that can negatively affect people, such as food safety, aviation, or pharmaceutical regulation. We need to recognise AI’s potential to alter human relationships, change the nature of war and peace, and affect the very existence of our species. Therefore, we must consider the implications of AI within a much broader context.

Arguably, a human rights-based approach to AI serves as an ideal starting point.

The draft “Munich Convention on AI, Data and Human Rights,” a collaborative effort initiated by the Institute for Ethics in Artificial Intelligence (IEAI) and Globethics, serves as an excellent foundation on which to build. Drawing inspiration from existing human rights frameworks, including seminal documents such as the Universal Declaration of Human Rights and international human rights law more broadly, the convention provides a framework for integrating human rights into a global AI context. After contextualising AI and offering useful definitions, the Convention advocates a risk-based approach that aims to safeguard personal data, ensure accessibility and transparency, promote fairness and inclusivity, support informed decision-making for users, and minimise bias and algorithmic harm. In short, it seeks to uphold the critical protections and freedoms associated with human rights, while also advocating for accountability and redress concerning any adverse human rights impacts of AI.

Such a human rights-based approach to AI puts humans at the centre of how we think about the impact of AI, whether in our daily lives or in peacebuilding contexts. It also creates obligations for governments and businesses to protect users while promoting, through technology, the autonomy and enjoyment that come with guaranteeing fundamental and universal human rights for all. This is an important starting point to ensure it is humans who remain at the centre of any AI debate, so that humankind can shape technology before technology reshapes humanity in ways we cannot reverse.

This article was written by Brandon Hamber and Sophia Devlin.

Professor Brandon Hamber is John Hume and Tip O’Neill Chair in Peace at INCORE, Ulster University and Director of Innovation at TechEthics. Sophia Devlin is the CEO of TechEthics.

Friday, December 12, 2025

Human Rights in the Age of AI

On Friday, 12 December 2025, a pivotal event titled "Human Rights in the Age of AI: Towards a New Generation of Human Rights Protections" took place at Ulster University in Belfast as part of the Human Rights Festival. The event was organised by TechEthics, the Institute for Ethics in Artificial Intelligence (Technical University of Munich), Ulster University’s School of Computing, and Ludwig-Maximilian University.

This interdisciplinary panel gathered experts from various sectors to discuss the dual impact of artificial intelligence (AI) on fundamental human rights.

I gave the opening address, which is available here, written with Sophia Devlin, CEO of TechEthics.

The speakers included:
  • Denis Naughten, Inter-Parliamentary Union 
  • Dr. Nell Watson, AI Ethics Maestro at IEEE and Author of Taming the Machine
  • Ben Bland, Chair of IEEE Working Group P7014 
  • Fiona Browne, Head of AI at Danske Bank UK 
  • Dr. Alexander Kriebitz, Co-Founder of iuvenal research and Post-Doctoral Researcher (IEAI TUM, Chair of Business Ethics) 
  • Dr. Caitlin Corrigan, Director of the Institute for Ethics in Artificial Intelligence and UNESCO Women for Ethical AI


With a focus on core principles such as privacy, equality, non-discrimination, and accountability, the discussions explored how AI can both erode and enhance these rights across multiple fields, including education, health, and governance.

Transitional Justice and the Kurdish Conflict

On 12 December 2025, I participated in the book launch, hosted by the Transitional Justice Institute and INCORE (Ulster University), of “Transitional Justice and the Kurdish Conflict: A Grassroots Approach" (Routledge 2025) by Dr Nisan Alıcı (University of Derby), one of my former PhD researchers. This new book examines how transitional justice can contribute to transforming the Kurdish conflict in Turkey by centring the experiences of victims-survivors, activists, and other grassroots actors. The event is especially timely, coinciding with an ongoing peace process in Turkey aimed at ending the Kurdish conflict, which has lasted for over 40 years.

To find out more about the event visit the Transitional Justice and the Kurdish Conflict: A Grassroots Approach event page (opens in new window).

Celebrating International Human Rights Day

From December 8 to 10, 2025, Conflict Textiles, Ulster University, INCORE, Queen's University Belfast, and the Tower Museum collaborated on a three-day programme honouring International Human Rights Day. This initiative highlighted the power of textiles and film in addressing human rights issues.


Participants included film students, researchers, activists, curators, and community members, all exploring how textiles and film can serve as vehicles for memory, truth, and justice. Events included film screenings, workshops, exhibitions, and discussions.

On December 10, the Derry-Londonderry Ulster University Campus hosted a special event featuring short films curated by Cinematic Arts student Jessica Buchanan and PhD researcher Tabassum Islam. Following the screenings, attendees engaged in a thought-provoking discussion with filmmaker Esther Vital, Roberta Bacic from Conflict Textiles, and Professors Élise Féron and Brandon Hamber from INCORE. 

The event underscored the vital role of film in fostering community dialogue around human rights. 

Conflict Textiles has curated a page with all the links and events.