18 Dec 2025
Mariela Sanchez Salas

Trauma, Education and AI in Conflict Contexts: Care or Control?

This blog post probes education, trauma, and human rights to ask whether AI technologies will heal or control. This think piece is a cross-post from UNESCO’s Ideas LAB as part of NORRAG’s blog series “AI and the Digitalisation of Education: Challenges and Opportunities.”

The expansion of artificial intelligence (AI) in educational settings affected by armed conflicts, forced displacements, and humanitarian crises creates a critical paradox. While it offers opportunities to democratize access to knowledge, it also presents significant ethical risks. According to the United Nations Special Rapporteur on the right to education, “the use of artificial intelligence in education must be guided by strong legal and ethical frameworks that safeguard human rights, avoiding the exacerbation of existing inequalities” (UN, 2024). The absence of specific regulations in vulnerable contexts risks transforming a tool of support into a mechanism of exclusion and dehumanization.

The application of AI without explicit sensitivity to collective trauma and without a human rights-based approach can reproduce structural inequalities, violate the dignity of affected populations, and exacerbate educational exclusion. A United Nations Human Rights Office of the High Commissioner (OHCHR) report warns that “the development and implementation of AI technologies in education must align with the promotion of equity, social justice, and inclusion” (UN, 2021). This directive is especially urgent in contexts where the memory of harm is still fresh and where every educational intervention must contribute not only to learning but also to the symbolic reparation of victims.

“The application of AI without explicit sensitivity to collective trauma and without a human rights-based approach can reproduce structural inequalities, violate the dignity of affected populations, and exacerbate educational exclusion.”

Grounded in both international human rights norms and intercultural worldviews such as Buen Vivir, this analysis understands education in crisis settings as a space where dignity, relational care, and collective healing must guide the use of emerging technologies like AI.

The Sphere Humanitarian Charter states that “all people affected by disasters and conflict have the right to live with dignity and to receive humanitarian assistance that respects their needs, culture, and fundamental human rights” (Sphere, 2018). From this perspective, education cannot be conceived as a field for unregulated technological experimentation but must be treated as a sacred space for the restoration of human dignity. Any deployment of AI in these environments must adhere to the fundamental principle of “do no harm” and ensure the protection of learners’ fundamental rights.

“Hasty and technocratic implementation, disconnected from sociocultural realities and human suffering, risks deepening wounds rather than healing them.”

Education is a fundamental human right, enshrined in international legal instruments such as the Universal Declaration of Human Rights, whose Article 26 affirms that “everyone has the right to education”, and the Convention on the Rights of the Child, which recognizes in Article 28 the right of every child to education as essential for the development of their personality and abilities (United Nations, 1989). However, this right is gravely threatened in contexts of humanitarian crisis and armed conflict. According to the Education Cannot Wait (ECW) report, “armed conflicts, forced displacements, climate change, and other crises have increased the number of crisis-affected children urgently needing quality education to 224 million”.

AI tools are launched with sweeping claims of expanding access to education in emergencies, forced displacements, and humanitarian crises. Yet their hasty and technocratic implementation, disconnected from sociocultural realities and human suffering, risks deepening wounds rather than healing them. As UNESCO (2021) warns, building new educational futures must be “firmly rooted in a commitment to human rights” (p. 12), ensuring that the right to education is interpreted inclusively and extended to all without distinction. Any incorporation of emerging technologies, including AI, must be guided by principles of social justice, solidarity, and care, and not by disembodied logics of efficiency that risk reproducing structural inequalities.

The ethical use of artificial intelligence in education must be grounded in core human rights principles: dignity, non-discrimination, participation, equity, and protection. Failure to uphold these principles transforms AI into a threat not only in its technical design, but also in its profound humanitarian and legal implications.

“The deployment of AI in education must be guided not merely by technological efficiency but by the ethical imperative to restore and protect human dignity.”

International organizations such as UNICEF and UNHCR support the expansion of technological solutions for education in emergencies, where platforms like Kolibri and Rumie have been used to offer personalized learning pathways to vulnerable populations. Although developed independently, these platforms’ use in displacement and humanitarian crisis contexts reflects growing interest in inclusive digital tools to uphold the right to education in extreme situations. Yet, absent human rights protections, these technologies expose vulnerable populations to grave risks:

  • Privacy violations: The mass collection of data from vulnerable children and adolescents without informed consent can lead to new forms of exploitation and discrimination (OHCHR, 2024).

  • Reproduction of structural biases: As West, Whittaker, and Crawford (2019) warn, bias in technology sector hiring and leadership profoundly shapes the types of systems that are designed and deployed, replicating historical exclusions based on race, gender, and class.

  • Dehumanization in automated systems: Eubanks (2018) analyzes how algorithmic technologies, operating in “low-rights environments,” tend to treat vulnerable individuals as data to be managed rather than as rights-holding subjects, exacerbating their invisibility and marginalization.

The deployment of AI in education must be guided not merely by technological efficiency but by the ethical imperative to restore and protect human dignity. An unregulated educational AI could transform the right to education into a privilege conditioned by technological access, deepening existing structural gaps. Trauma pedagogy teaches that education in crisis contexts must create “safe, relational, and culturally sensitive spaces that foster student empowerment” (Carello & Butler, 2015), helping restore a sense of belonging, dignity, and self-worth.

The principle of symbolic reparation, enshrined in the UN’s Basic Principles and Guidelines on the Right to a Remedy and Reparation, asserts that states must provide measures that include “restoration of the dignity and reputation of victims through symbolic and material acts”. Following this principle, AI-based educational solutions would not merely facilitate access to learning but actively contribute to restoring the dignity and agency of those who have survived violence and displacement.

“Artificial intelligence in education cannot be neutral in the face of human suffering. In crisis settings, designing efficient algorithms or promising technological access is not enough; it is imperative to rebuild relationships of care, respect, and reparation.”

Human rights-compliant AI in education would need to fulfill, at minimum, the fundamental principles of dignity and social justice. First, the principle of human dignity demands that students never be reduced to mere “data” or “users” but instead be recognized as rights-bearing individuals (UN, 1948). Second, the principle of non-discrimination requires inclusive algorithms that do not replicate historical patterns of exclusion based on gender, race, migration status, or socioeconomic condition (OHCHR, 2021).

Furthermore, respect for informed consent requires that the collection and processing of personal data be conducted ethically, transparently, and in a way that protects students’ best interests, especially in vulnerable contexts (UNESCO, 2021). The principle of participation implies actively involving local communities in every stage of design, implementation, and evaluation of educational technologies. Lastly, special protection mandates that children, adolescents, and displaced persons be treated as priority subjects of protection, in line with international human rights law (United Nations, 1989).

Artificial intelligence in education cannot be neutral in the face of human suffering. In crisis settings, designing efficient algorithms or promising technological access is not enough; it is imperative to rebuild relationships of care, respect, and reparation. A truly human rights-based educational AI must not merely ask what technology can do for education, but who the technology must serve.

In contexts marked by trauma and dispossession, education must reclaim its deepest meaning: to be a refuge, a possibility, and a restitution of wounded dignity. The urgent question is not what AI can offer to education, but what the living memory of victims demands of educational AI. Only a technology designed with consciousness of its debt to humanity can accompany the reconstruction of more just futures, where learning is not a privilege but a collective act of healing.

From the perspective of Buen Vivir, which guides the worldview of our indigenous peoples in Bolivia, knowledge is not a commodity or a tool of domination, but a path toward harmony between human beings and Mother Earth. Applying this vision to the development of AI in education implies recognizing that all technology must serve dignified life, mutual respect, and the building of solidarity-based communities.

If educational AI were inspired by Buen Vivir, it would not seek soulless efficiency, but meaningful learning, healing with justice, and a shared future. Only by honoring our roots can we imagine and build an education that leaves no one behind and that heals the wounds left by conflict and inequality.

Author

Mariela Sanchez Salas is a Bolivian scholar and PhD whose work explores trauma, technology, and the ethics of education in conflict-affected societies. Her research draws on the principles of Buen Vivir to reimagine more dignified and healing educational futures.
