TL;DR
The Relational Futures report by Distinguished Professor Bronwyn Carlson and Tamika Worrell examines how Aboriginal and Torres Strait Islander peoples interact with AI, highlighting perceptions of cultural unsafety and political unaccountability driven by a lack of transparency and by AI’s role in perpetuating existing colonial power structures. Indigenous participants expressed mixed confidence in AI, raising concerns about bias, extraction, cultural appropriation and false relationality. Even so, participants did not reject the technology, acknowledging its potential for Indigenous peoples. But those opportunities cannot be realised without centering Indigenous authority in AI governance, grounded in Indigenous Data Sovereignty. This necessary structural change would produce positive outcomes for all AI users.

As AI continues to be embedded into ever more aspects of daily life, Distinguished Professor Bronwyn Carlson and Tamika Worrell at the Centre for Critical Indigenous Studies at Macquarie University have released Relational Futures: Indigenous Sovereignty and the Governance of Artificial Intelligence (AI), the initial findings from their Indigenous-led Relational Futures project, which looks at how Aboriginal and Torres Strait Islander peoples encounter and respond to AI. The report emphasises that AI is already an embedded infrastructure behind many personal and professional domains, but that the impacts of its integration are being experienced unevenly. It also challenges the notion of AI as neutral, instead highlighting that AI systems are a continuation and intensification of existing structures of power in settler colonialism. AI’s material impact on Indigenous land and the environment is also acknowledged, recognising that data centres powering AI are ‘deeply entangled with Country’ as a ‘material and ecological system’ (Carlson and Worrell 2026:14).


For the Indigenous Australian AI users who participated in the research, use of AI is high but confidence is mixed, skewing towards less confident. A lack of transparency in how AI systems collect, process and use data contributes to perceptions of AI as culturally unsafe and politically unaccountable. This sits alongside concerns about algorithmic bias, especially in high-stakes areas such as health, policing and child protection; about the extraction of First Nations knowledge and cultural appropriation; and about the false relationality of AI, which circumvents ‘reciprocal and culturally grounded relationships’ (Carlson and Worrell 2026:22) and conflicts with ‘Indigenous relational values grounded in kinship, responsibility, and accountability’ (Carlson and Worrell 2026:8).

AI’s status as invisible infrastructure with limited transparency compounds Indigenous peoples’ concerns, including the potential for epistemic harm where AI outputs are disconnected from cultural authority yet delivered with false confidence. Despite these concerns, participants did not reject the technology outright. They articulated a nuanced understanding of the complexities of AI technologies and maintained that an approach grounded in Indigenous sovereignty, one that sets limits on the technology, was a pathway forward. In that vein, there was optimism about what AI can do for Indigenous people – improving access to services by removing Western institutional barriers, supporting language and cultural revitalisation, and enhancing community wellbeing – but seizing those opportunities requires more than technical fixes; it requires structural change in how AI is developed and governed.

To address the concerns and harness the opportunities, Carlson and Worrell advocate shifting from the mere inclusion of Aboriginal and Torres Strait Islander people in AI to a centering of Indigenous authority in AI governance. Underpinning this objective is recognition of and respect for Indigenous Data Sovereignty. If achieved, this would produce positive outcomes for Indigenous people and for everyone.


What's in this briefing

This briefing on Carlson and Worrell’s Relational Futures report includes:

  • a summary of the framing for AI that the authors bring to the research
  • an overview of the report and its methodology
  • the recommendations made in the report
  • a discussion of the report’s findings.

(Re)framing AI from an Indigenous perspective

When reading Carlson and Worrell’s report, it is important to understand the framing through which they approach AI. It is not the tech bro ‘move fast and break things’ mantra that champions AI as the ‘god-mode’ of productivity and disruption, nor the golden goose of economic investment that many governments are chasing.