Published Aug 18 2025

Rethinking digital humanitarianism: Trust, privacy and technocolonialism

The humanitarian sector has undergone a significant transformation, increasingly shaped by the integration of digital technologies into its core operations.

This shift, often referred to as digital humanitarianism, is marked by the use of data-driven systems, digital communication platforms, and advanced analytical tools in the delivery of aid.

Across refugee camps, disaster zones, and conflict-affected regions, humanitarian organisations are embracing digital technologies to enhance their services.

In Cox’s Bazar, Bangladesh – the world’s largest refugee settlement – nearly a million Rohingya refugees depend on digital systems introduced since 2017 to access aid and essential services. Tools such as biometric registration and digital ID cards promise efficiency and oversight, but they also raise significant ethical concerns.

For a stateless and highly vulnerable population, such technologies risk deepening surveillance and control.


Emerging academic work on refugees and digital humanitarianism increasingly questions the ethical assumptions behind these innovations: Who designs these technologies? Who owns the data? What choices do people in crisis truly have when they’re compelled to submit personal information?

As my research and fieldwork in Cox’s Bazar show, the deployment of digital humanitarian technologies is deeply entangled with what Mirca Madianou terms technocolonialism. This context offers a critical lens to examine the intersections of trust, privacy, and power in contemporary humanitarian practice.

Data for aid, but at what cost?

Refugees, asylum seekers and stateless people are often required to provide sensitive biometric data such as fingerprints or iris scans to access basic services like food, shelter, or refugee status. This data is collected and managed by a complex network of humanitarian agencies and third-party contractors, frequently with limited transparency or oversight.

For many Rohingya, this data exchange is not about informed consent – it’s a condition of survival. The ethical dilemma is stark – when people are displaced, stateless, and dependent on aid, their ability to negotiate privacy diminishes drastically.

Serious questions remain: Who owns this data? How long is it stored? Could it one day be accessed by governments, including Myanmar's?

The fear that biometric data collected by UNHCR might be shared with the Bangladeshi government and eventually with Myanmar has already eroded trust among Rohingya refugees.

Many now worry that data gathered in the name of protection could be weaponised for surveillance or forced repatriation. In other cases, data shared with humanitarian agencies has reportedly been accessed by governments with poor human rights records, triggering concerns about monitoring, deportation, and persecution.

The problem with technosolutionism

Tech companies and humanitarian organisations often present digital tools as neutral or inherently beneficial, merely more efficient ways to serve displaced communities.

However, technologies are not neutral. They embody the interests, values and biases of those who design and implement them. In many cases, digital infrastructures are developed, funded, and governed by international actors such as NGOs, UN agencies, and private tech companies based in the Global North, while refugees have little or no input in how these systems are created or used.

This top-down approach reflects Madianou's concept of technocolonialism – the deployment of digital tools as instruments of power, surveillance and control, echoing colonial legacies of domination and dispossession.


Third-party agencies, including private tech firms and both local and international NGOs, now occupy central roles in humanitarian operations, handling tasks ranging from data management and biometric registration to food distribution and cash transfers.

Technology giants such as Meta have used humanitarian partnerships to rehabilitate their public image in the wake of scandals such as Cambridge Analytica and broader critiques of exploitative business practices.

IrisGuard, for example, operates in both humanitarian aid and border security, blurring the line between care and control. This outsourcing model allows corporations to benefit from branding opportunities, public relations boosts, new data sources, market expansion, and experimental testing grounds.

Ultimately, these practices reinforce corporate colonialism by consolidating power, infrastructure and data in the hands of a few multinational entities. While their platforms may appear “free”, they extract value from vulnerable users, perpetuating historical inequalities and creating new forms of technological dependency.

Toward ethical digital humanitarianism

Digital technologies are not inherently harmful; they hold genuine potential to enhance humanitarian work by streamlining aid distribution, reuniting displaced families, and enabling refugees to assert their rights.

Yet this potential comes with serious ethical and political challenges. To harness digital tools responsibly, we must address the power asymmetries and structural risks embedded in their use, especially in crisis contexts.

A more just and accountable digital humanitarianism requires firm commitment to key principles:

  • Transparency in how data is collected, used, and shared, with independent oversight wherever possible
  • Informed and meaningful consent that ensures recipients have real agency over their data and can opt out without losing access to vital services
  • Data practices that are minimal, purpose-driven, and rigorously protected

Most importantly, affected communities must not be reduced to data points – they should be meaningfully involved in shaping the digital systems that govern their lives. These are not idealistic ambitions, but necessary ethical commitments to challenge technocolonial tendencies and rebuild trust.

Ultimately, it’s crucial to recognise that the growing embrace of digital technologies in humanitarianism is driven by a complex web of interests – governments seeking control, NGOs pursuing efficiency, and corporations expanding their markets.

This convergence fuels the rapid datafication of aid, transforming not just how humanitarian responses are delivered, but how they’re experienced by those at the receiving end.

As we navigate this evolving digital border, we must confront a fundamental question: Who truly benefits, and who carries the burden of risk?

Rethinking humanitarianism in the digital age requires confronting the colonial legacies embedded in our technological systems and reimagining humanitarian futures grounded not in surveillance and control, but in care, justice, and human dignity.

About the Authors

  • Abdul Aziz

    Lecturer, School of Arts and Social Sciences, Monash University Malaysia

    Abdul is a lecturer in media and communication studies at Monash University Malaysia. His research interests lie at the intersection of digital media, diaspora and (forced) migration, and cultural studies, with a particular focus on marginalised communities in the Global South. Applying a sociological lens informed by digital media scholarship, his primary research examines the sociocultural dimensions of everyday digital media practices in diverse settings, engaging critical questions about the dynamics of (im)mobility, inequality and power in everyday life.
