Indigenous Peoples and AI: Defending Rights, Shaping the Future of Technology

By Lucas Kasosi (Maasai, CS Intern)

Each year on August 9, the world comes together to observe the International Day of the World’s Indigenous Peoples, a day dedicated to honoring the resilience, wisdom, and sovereignty of over 476 million Indigenous people worldwide. The observance, established by the United Nations General Assembly in 1994, commemorates the first meeting, in 1982, of the UN Working Group on Indigenous Populations, a landmark gathering that opened a global dialogue aimed at recognizing and amplifying the rights and cultures of Indigenous Peoples.

The significance of this day cannot be overstated. It serves as a solemn reminder of the ongoing struggles of Indigenous communities to safeguard their lands, cultures, and identities in a world increasingly shaped by colonial legacies and modern challenges. For Indigenous Peoples, August 9 represents not just a celebration but also a call to action, reflecting the continuing efforts to secure rights, recognition, and justice.

In 2025, August 9 falls on a Saturday, so the International Day will be observed online on Friday, August 8, to better engage a global audience in these important conversations. This year’s theme, "Indigenous Peoples and AI: Defending Rights, Shaping Futures", is particularly timely and urgent. As technology continues to evolve, it is critical that Indigenous Peoples are not left behind in the digital age, especially in areas such as AI development, where the potential for both empowerment and exploitation exists.

We are standing at the threshold of a new digital era, an age shaped by Artificial Intelligence (AI), big data, and algorithmic decision-making. From language models to surveillance systems, from conservation mapping to educational tools, AI is transforming the way the world functions. But like every dominant system before it, AI carries with it the biases, exclusions, and exploitative logics of its creators.

For Indigenous Peoples, the rise of AI brings a double-edged promise: either it becomes a tool for empowerment, cultural resurgence, and sovereignty, or it becomes a new frontier for colonization, misrepresentation, and resource extraction. The AI systems of today often fail to account for the complex, nuanced ways in which Indigenous cultures and knowledge systems operate. Without the active participation of Indigenous communities in AI development and governance, these systems are at risk of perpetuating historical patterns of cultural erasure and data exploitation.

Image: A unisex Maasai neck choker.

As a Maasai, I have witnessed firsthand the profound impact that cultural misrepresentation can have on the identity of Indigenous Peoples. In 2024, the Kenyan State Department of Culture used AI-generated images to represent Maasai culture in a public campaign. At first glance, this initiative seemed like a well-intentioned attempt to share Maasai heritage with a broader audience. However, the use of AI to generate images of Maasai attire and cultural symbols exposed a disturbing misrepresentation that goes far beyond a simple aesthetic mistake. One of the most glaring missteps was depicting the Maasai neck bracelet, an ornament traditionally worn by women, on men.

For the Maasai, the symbolism embedded in cultural attire is profound and specific. The neck bracelet, for instance, is a gendered symbol: it is worn by women to signify their role, status, and spiritual connection within the community. Many ornaments and articles of clothing in Maasai culture are not merely accessories; they represent age, social rank, spiritual milestones, and the intricate relationships between individuals and the natural world around them. The Isuri, a traditional garment worn exclusively by older women, signifies their wisdom and leadership within the community, while the Emiragie, worn by younger women, symbolizes fertility and vitality. These distinctions are not arbitrary; they are key components of a cultural system passed down through generations.

This was not merely an aesthetic oversight. It was a cultural transgression, a violation of symbolic systems developed over centuries, reduced by machine logic into arbitrary decoration. Worse still, it reflected a growing trend: the appropriation and distortion of Indigenous identity through AI systems trained on unvetted, uncontextualized data. These distortions do not merely offend; they contribute to cultural erasure. They fracture the chain of knowledge transmission between elders and youth. They reduce living traditions to digital caricatures.

Article 31 of the UN Declaration on the Rights of Indigenous Peoples (UNDRIP) affirms the right of Indigenous Peoples to “maintain, control, protect and develop their cultural heritage, traditional knowledge and traditional cultural expressions.” When states or tech developers use generative AI to reproduce sacred imagery without cultural oversight or consent, they violate this right. And when governments are the ones responsible, it becomes doubly dangerous, exposing a gaping void in both digital governance and cultural literacy.

Artificial Intelligence offers real potential for Indigenous communities. Already, Indigenous groups are using AI for language revitalization, ecological protection, and historical archiving. AI tools can digitize endangered languages, transcribe oral traditions, and model climate patterns affecting ancestral lands. For cultures that have been silenced in mainstream institutions, AI can amplify Indigenous knowledge on Indigenous terms.

But AI is also a tool of amplification, and what it amplifies depends on how it is built. Most AI systems today are trained on datasets scraped from the internet, academic databases, or government archives. These datasets rarely center Indigenous knowledge systems. Instead, they are shaped by colonial legacies, extractive science, and racialized hierarchies. As a result, AI often reproduces, and even intensifies, patterns of exclusion.

The risk is clear: without meaningful Indigenous participation, AI will further marginalize the very communities it claims to serve. Without ethical design, Indigenous data, stories, symbols, genetic material, and ecological knowledge can be extracted, commercialized, and decontextualized. This is not hypothetical. It is happening now.

Indigenous Data Sovereignty 

At the center of these concerns is the principle of Indigenous Data Sovereignty: the right of Indigenous Peoples to govern, control, and protect data that pertains to their cultures, lands, languages, and bodies. Data is not neutral. For Indigenous communities, it is ancestral, relational, and sacred.

UNDRIP Article 32 declares that Indigenous Peoples have the right to determine and develop strategies for the use of their lands and resources, and in the digital age, data is a resource. That includes environmental data collected through sensors on Indigenous territories, genetic data used in health research, and cultural data stored in online platforms. No use of Indigenous data or heritage should ever happen without Free, Prior and Informed Consent (FPIC).
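What enforcing FPIC might look like inside a data pipeline can be made concrete with a small sketch. The following Python is purely illustrative: the types, field names, and gate function are assumptions of mine, not an existing library or standard, but they show the core idea that processing should refuse to run unless a consent record covering the intended use travels with the data.

```python
# Hypothetical sketch of an FPIC consent gate. All names here
# (ConsentRecord, Dataset, require_fpic) are illustrative, not a real API.
from dataclasses import dataclass
from typing import Optional, Set


@dataclass
class ConsentRecord:
    """FPIC metadata attached to a dataset by its community stewards."""
    granted_by: str         # the governing community institution
    scope: Set[str]         # uses the community has consented to
    revocable: bool = True  # consent can be withdrawn at any time


@dataclass
class Dataset:
    name: str
    consent: Optional[ConsentRecord] = None


def require_fpic(dataset: Dataset, intended_use: str) -> None:
    """Refuse to proceed unless the intended use is inside the consented scope."""
    if dataset.consent is None:
        raise PermissionError(f"{dataset.name}: no FPIC record attached")
    if intended_use not in dataset.consent.scope:
        raise PermissionError(
            f"{dataset.name}: '{intended_use}' was never consented to"
        )


# Example: oral-history audio consented for language revitalization only.
archive = Dataset(
    name="maa-oral-histories",
    consent=ConsentRecord(
        granted_by="Community council (illustrative)",
        scope={"language-revitalization"},
    ),
)

require_fpic(archive, "language-revitalization")  # passes silently

try:
    require_fpic(archive, "commercial-model-training")
except PermissionError as err:
    print(err)  # blocked: outside the consented scope
```

The point of the sketch is structural: consent is data that travels with the data, and its absence is a hard failure, not a warning.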

When AI systems extract, process, and monetize Indigenous knowledge without FPIC, they are not innovative; they are colonial. They perpetuate centuries of exploitation under the guise of progress. This is data colonialism, and its effects are no less destructive than the physical resource extraction that preceded it.

The ethical problems with AI don’t end with data. The infrastructure that powers AI (massive data centers, rare-earth mining, energy-intensive computation) has a direct environmental impact. By one widely cited estimate, training a single large AI model can emit as much carbon as five cars over their entire lifetimes. These environmental costs are often invisible, but they are borne disproportionately by the lands and waters Indigenous Peoples depend on.
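The "five cars" comparison comes from 2019 estimates by Strubell and colleagues for one especially expensive training run (a large transformer tuned with neural architecture search). A quick back-of-envelope check, using their published figures, shows where the number comes from:

```python
# Back-of-envelope check of the "five cars" comparison, using the
# published estimates from Strubell et al. (2019). These are their
# figures for one extreme training run, not a claim about every model.

training_emissions_lbs = 626_155  # lbs of CO2-equivalent, per the study
car_lifetime_lbs = 126_000        # avg. US car over its lifetime, incl. fuel

print(f"{training_emissions_lbs / car_lifetime_lbs:.1f} car-lifetimes")  # ~5.0
```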

Environmental Justice

UNDRIP Article 25 recognizes the spiritual and physical relationship between Indigenous Peoples and their territories. Article 29(2) prohibits the storage or disposal of hazardous materials on Indigenous lands without consent. Yet few tech developers or governments assess the environmental externalities of AI on Indigenous territories. Whether it’s lithium extraction for batteries or hydroelectric dams to power data centers, the digital economy is expanding colonial resource frontiers, often without regulation or redress.

If AI is to be truly sustainable, it must do more than predict climate change; it must also avoid contributing to it.

In the face of these risks, Indigenous Peoples are not retreating from technology. They are leading its transformation. Around the world, Indigenous youth, elders, and technologists are co-creating AI that reflects their values, laws, and cosmologies.

In Aotearoa (New Zealand), Māori-led initiatives like Te Hiku Media have built language models governed by Māori law (tikanga), ensuring that data remains in Indigenous hands. In North America, the Indigenous AI Working Group has published protocols grounded in values like relationality, consent, and accountability to land. In Kenya, the Maasai Language AI Project is co-developing tools for Maa language revitalization, with elders guiding every step of the process.

These are not just digital projects. They are acts of sovereignty. They embody what UNDRIP Article 18 mandates: the right of Indigenous Peoples to participate in decision-making “through representatives chosen by themselves in accordance with their own procedures.” These efforts are about more than inclusion. They are about Indigenous jurisdiction in the design, deployment, and governance of emerging technologies.

If AI is to serve humanity equitably, Indigenous Peoples must not be token participants. They must be co-creators of the rules, systems, and technologies that shape their lives. This means involving Indigenous Peoples from the inception of AI projects, not after the fact. It means funding Indigenous-led tech development, supporting Indigenous research institutions, and embedding culturally grounded ethics into AI policy frameworks.

Governments must align their AI strategies with UNDRIP. Tech companies must adhere to the CARE Principles for Indigenous Data Governance: Collective Benefit, Authority to Control, Responsibility, and Ethics. International bodies such as UNESCO, and the UN’s emerging AI governance initiatives, must create dedicated mechanisms for Indigenous representation and oversight.
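One illustration of what adherence could look like in practice is sketched below: a hypothetical release gate that treats the four CARE dimensions as required, auditable metadata on any dataset. The field names and the gate itself are my assumptions for illustration, not part of the published CARE framework.

```python
# Hypothetical sketch: recording the CARE Principles (Collective Benefit,
# Authority to Control, Responsibility, Ethics) as explicit, auditable
# fields on a dataset's governance metadata. Field names are illustrative.
from dataclasses import dataclass


@dataclass
class CareMetadata:
    collective_benefit: str    # how the community benefits from this use
    authority_to_control: str  # who holds decision-making power over the data
    responsibility: str        # who is accountable, and to whom
    ethics: str                # the ethical framework governing use


def care_complete(meta: CareMetadata) -> bool:
    """A release gate: every CARE field must be meaningfully filled in."""
    return all(
        value.strip()
        for value in (
            meta.collective_benefit,
            meta.authority_to_control,
            meta.responsibility,
            meta.ethics,
        )
    )


record = CareMetadata(
    collective_benefit="Language tools returned to community schools",
    authority_to_control="Community data trust (illustrative)",
    responsibility="Named data stewards, accountable to elders",
    ethics="Community-defined protocols, reviewed annually",
)

assert care_complete(record)  # only then may the dataset be released
```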

Without these steps, AI will become yet another domain where the rights of Indigenous Peoples are discussed, but never honored.

As we mark the International Day of the World’s Indigenous Peoples 2025, we are called to more than reflection; we are called to action. AI is not just a technology. It is a system of values, a new site of governance, and a rapidly expanding frontier of power. If we are not intentional, it will reproduce the very hierarchies it promises to dismantle.

But the future is not yet written. If we choose, AI can be a tool of resurgence, one that preserves language, protects lands, and amplifies ancestral wisdom. This will require ethical governance, legal accountability, and Indigenous leadership at every level.

Let us be clear: the ethical future of AI does not lie in Silicon Valley; it lies in the collective knowledge, values, and sovereignty of the world’s First Peoples.

On this day, and every day that follows, may we not only defend Indigenous rights, but also uplift Indigenous visions of the future.

Because the question is no longer whether Indigenous Peoples will be part of the digital world, but whether the digital world can be made just enough to deserve them.