Ethical governance in an age of AI

Our digital worlds are shifting, often faster than we can keep up with. In witnessing the rapid and widespread acceptance of AI-integrated futures, we ask ourselves: 

What does ethical governance look like in an age of AI? 

When we set out on this journey, we knew that it was critical for our AI Policy, no matter what shape and form it would finally take, to be grounded in our values. It has been a long process, and necessarily so: numerous team meetings brainstorming the different ways our work may interact with AI tools, both with and without our consent; deep rabbit hole dives into definitions, articles, and reports that often become out of date soon after they are released; and discussions surrounding the deep systems-level impacts that these technologies can have (and already do have) on our communities.

As an organization deeply embedded in systems change work, it became abundantly clear to us that there was more to the integration of AI technologies than their day-to-day, surface-level effects: through its influence on global flows of capital, behaviours, beliefs, and actions, AI has quickly become an aggressive and insurgent systems change actor. Having identified individualism and isolation as core outcomes of AI dependence, we made community stewardship and shared accountability a large part of our discussions: how could we support our communities to increase literacy around these challenges, and to build a response to this wicked problem collectively?

With all of this in mind, we share the following: an excerpt of hua foundation’s AI Governance Policy, with the hope of contributing one piece to the broader conversation on how we can best embody our values in the face of these new technological landscapes.

As a team, we continue to work on our implementation guidelines: standardized processes and tools to support staff and collaborators in navigating the complexities of these spaces. We are also committed to community stewardship: facilitating spaces for dialogue and learning around AI. Keep an eye on this space and our social media, and feel free to reach out if you have ideas on how we can continue to build helpful resources and facilitate dialogue.

hua foundation’s AI Governance Policy

The following is an excerpt of hua foundation’s internal AI Governance Policy, reproduced with permission for the purposes of facilitating community dialogue.

1.0 – Preamble & values

This AI Governance Policy was developed by the hua foundation staff team, based on concerns about the rapid and widespread acceptance of AI-integrated futures: not only the inherent biases within, and questionable veracity of, AI-generated ‘knowledge,’ but also the major climate and environmental impacts associated with the ongoing maintenance of data infrastructure. Globally, many concerns have been raised about integrations of AI into work and life that run counter to our organizational values and mission; we name only some of them explicitly below.

Beyond immediate impacts and direct consequences, AI technologies can also be understood as systems change actors: in other words, they apply disruptive forces that affect resource flows, beliefs, behaviours, and policies. Recognizing the potential and material impacts that AI technologies have in both reproducing and exacerbating existing systemic injustices, we also recognize our responsibility as systems change actors to proactively build alternative systems that respond to community needs, and to extend this influence beyond ourselves.

By developing and iterating on the following AI Governance Policy, we commit to actions led by our values, moving towards alternative futures that uplift the agency, dignity, and humanity of our communities. 

1.1 – Climate impacts and environmental justice

AI technologies have demonstrated ongoing impacts on the flows of local, global, and systemic resources, including financial, political, and social forms of capital. For the purposes of this policy, we highlight in particular the ways in which these resource flows affect climate and environmental justice issues, while acknowledging the expansive ways in which AI technologies affect other resource flows.

There has been increasing concern over the vast and wide-ranging impacts that AI processing and infrastructure have on the climate and environment. A 2024 report published by the United Nations Environment Programme (UNEP) highlights many aspects, including (but not limited to): heavy metal extraction to build the microchips that data centres run on, substantial volumes of electronic waste produced over the life cycle of computing equipment and infrastructure, and the fresh water and energy consumed in constructing and operating these data centres.

While data centre resource consumption is not new, statistics in the above UN report indicate that a single ChatGPT query can use 10 times more electricity than a similar Google search. Another study cited therein claims that the process of training a large language model (LLM) produces emissions equivalent to 125 roundtrip flights between New York and Beijing.

Beyond immediate power usage and greenhouse gas emissions, increasing red flags have been raised about the use of clean, potable drinking water to cool data centres in North America. Further, these data centres are often proposed and sited in regions already disproportionately impacted by climate change: those located in drought-affected desert regions, such as Phoenix, AZ, put additional strain on already limited freshwater resources. For many communities, access to clean drinking water is still not guaranteed; the exploitation of the environment for capitalist gain comes at the cost of basic necessities for those already marginalized by other systems of power.

Finally, all of this contributes to the further reproduction of settler colonial practices of exploitation and extraction. Due to the global nature of these systems, there are currently no tangible safeguards or measures to hold corporations accountable for the negative effects that AI infrastructure is having on the environment.

1.2 – Literacies of trust and reproduction of oppressive systems

Systemic change also occurs through the influencing of beliefs: not only in material form (the information itself that is reproduced), but also through the processes that shape the context in which that information is received.

Due to the relative newness of consumer access to AI tools, there is a lack of literacy on how to navigate and verify information outputs from AI modelling, a gap compounded by the aggressive marketing tactics of AI companies positioning their tools as trustworthy. The fear of being ‘left behind,’ framed in terms of modernity and relevance, artificially drives and directs resources and rhetoric within an unchecked narrative shaped by capital.

The terminology of AI is itself misleading; framing data modelling and machine learning as ‘artificial intelligence’ has an anthropomorphizing effect, wrongly ascribing human attributes (such as nuanced discernment) to tools that simply do not have them. In other words, AI processing systems do not actually create meaning in their analysis; rather, they operate on predictive algorithms and pattern recognition, generating the response with the highest probability of being ‘correct’ in relation to a prompt, where ‘correctness’ is measured against a given dataset that has been actively indexed. AI responses can be considered ‘meaning-semblant’: resembling or mirroring (but not actually being) a statement that is meaningful. This ‘knowledge’ is presented in absolutes, as part of intentional attempts to position AI-generated information as trustworthy. However, there are many proven instances in which these outputs are completely factually incorrect (clicking through to a linked reference and quickly skimming the webpage often reveals an entirely contradictory answer), and an increasing, unfounded trust in AI outputs can lead to the rapid spread of misinformation, with potentially detrimental consequences, in particular for those with lower technological literacy.
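To make this ‘meaning-semblant’ quality concrete, here is a minimal toy sketch in Python (our own illustration, not any real AI system): a tiny word-frequency model that always emits the statistically most likely next word. It produces fluent-seeming text through pure pattern matching, with no notion of truth or meaning; real LLMs are vastly larger and more sophisticated, but the underlying principle of probability-based generation is similar.

```python
# A toy illustration (not any real AI system): a bigram model that
# always emits the statistically most likely next word.
from collections import Counter, defaultdict

corpus = (
    "the community gathers to share food . "
    "the community gathers to share stories . "
    "the data centre consumes water and energy . "
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8):
    """Repeatedly pick the most probable next word: pure pattern
    recognition, with no understanding of what the words mean."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))
# Prints fluent-seeming output such as:
# "the community gathers to share food . the community"
```

The output reads smoothly because it mirrors the patterns in its training text, not because the model knows anything about communities or data centres.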

Further, leading AI models typically index their information from biased datasets, which reproduce oppressive systems by privileging the perspectives of those who already hold power. Not only does this mean that nuanced information and perspectives are often excluded from AI-generated responses, leaving them without access to an intersectional analysis; more worryingly, AI modelling by design reinforces the status quo.

Finally, in a 2025 study, leading AI models resorted to unethical behaviours when placed under threat; even when prompted with direct commands not to use blackmail, models turned to those exact tactics while explicitly acknowledging they were not ideal. The conceptualization of AI as an intelligent being capable of independent thought implies that it also has agency and intention, and operates with morals and responsibility. In reality, however, AI modelling does not bear the burden of any consequences that may follow a given decision, which calls into question who is held accountable for actions taken based on its analyses. This poses substantial risks, given the increasing reliance on AI-generated information in decision-making processes without appropriate accountability measures or mechanisms.

1.3 – Individualism and the rejection of reciprocal relations

Despite operating in virtual spaces, AI tools have demonstrated the potential to deeply influence actual behaviours. These behavioural shifts directly affect how we navigate social interactions and how we orient ourselves as individuals in community.

As instruments of neoliberal capitalism, AI tools reinforce individualistic ways of being. Rather than making meaningful attempts to connect with others, whether to learn from each other or to share the weight of challenges together, these tools provide shortcuts that encourage self-reliance and isolation.

Further, a reliance on AI enables siloing: responses depend largely on the way a question is framed and written, with outputs generated to feed an implied perspective. Predictive, probability-based modelling algorithms are trained to respond in ways that reflect the user’s tendencies, relative to the knowledge base within their datasets. This creates the potential for echo chambers that reverberate the sentiments users want to hear, reducing exposure to diverse lived experiences and community-generated knowledge. Of greater concern are the ways this can enable escalation towards indoctrination or, in some cases, ‘AI psychosis’.

Because AI is framed as an assistive technology, it is not built to challenge a user’s preconceived notions or perceptions about a given topic. In many ways, AI’s positioning as an omniscient ‘one stop shop’ discourages the important work of critical thinking and seeking out additional source material, and an increasing (sometimes sole) reliance on chatbots for research can reduce users’ exposure to different ways of thinking. Social AI functions likewise allow individuals to exert greater control over their social supports, and over the diversity of experiences that inform them.

1.4 – Implied consent and lack of transparency on data storage and use

The speed with which AI tools have been widely accepted and integrated into daily life has meant that there are not sufficient regulatory policies or controls on their development or use; many governments are struggling simply to stay on top of the latest developments.

AI tools are increasingly being built into platforms that we all use every day, as a way for companies to inject newness and novelty into their products. However, many of these tools are enabled by default, requiring users to manually opt out, if the option to do so is available at all. Such changes are often quietly bundled into Terms & Conditions updates, alongside other general adjustments and fixes, with users’ consent assumed through their continued use of the platform.

Further, in continuing to use platforms that integrate AI modelling, there is a lack of transparency about what happens to any and all data input into the model, both in terms of data storage (server locations) and data usage (large language models, text and data mining).

More recently, there have been increasing violations of intellectual property, as well as manipulations of multimedia (audio, still images, and video, among others) to impersonate individuals and/or generate fake news. This has been particularly impactful for cultural industries, undermining rights protected under the Copyright Act. Beyond direct unlicensed use of original works and/or distorted reproductions that could harm an artist’s reputation, as CARFAC-RAAV argues: “Generative AI also enables an environment in which artists are unable to protect their works from association with causes, products, services, or institutions to which they are personally opposed.”

2.0 – Our approach

AI technologies are not entirely avoidable. Search engines like Google have been using data modelling for a very long time, and social media algorithms have integrated AI technologies that are not realistically possible to opt out of if we hope to remain in touch with the majority of our audiences on these platforms. Our approach to navigating the landscape of AI focuses on three main commitments: responsive governance & community stewardship, relational abundance & person-centred work, and transparent application & reciprocal accountability.

2.1 – Responsive governance & community stewardship

We have chosen to approach AI through a framework of responsive governance. This means acknowledging that the landscape is constantly changing, and that no single static policy will be able to encompass the breadth of ways that AI tools can potentially touch our work. A governance framework generally does not impose absolutes or directives; rather, it provides principles and guidelines to help our team members and collaborators make conscious, informed decisions about when to activate certain tools. This policy will act as a living document, welcoming spaces of reflection and adapting to the changing landscape of technology and community concerns.

We acknowledge that we are not subject matter experts on AI tools, and that there are limits to what can be learned through word of mouth and passive learning. Part of our AI governance framework is therefore a commitment to ongoing active learning and professional development, to ensure that we can make informed decisions about how we interact with technology in this changing landscape, in ways that do justice to and remain accountable to our communities.

We also commit to public dialogue and sharing around our approaches to AI, to continue to model equitable and ethical action and engagement, and to support community literacy around the complexities of technological advancements. 

2.2 – Relational abundance & person-centred work

Many of the arguments in favour of AI tools point to their purported accessibility and assistive functions; however, these are often activated within a framework of scarcity, where unnecessary urgency and a lack of appropriate resources pressure individuals to use such tools to cut corners and/or stay afloat. This is not a dignified use of accessibility tools, but rather a response to dispassionate environments that necessitate their use for survival.

Rather, a relational abundance approach prioritizes meaningful, supportive, and compassionate environments that foster trust, transparency, and healthy communication of needs. By activating person-centred accommodations, each individual is able to come to the table from a place of agency, and can be an active participant in the building of their work plans.

In conversations about accessibility accommodations, we ask ourselves: what direct accommodations can we provide on a human level before bringing in technological interventions? Examples could include:

  • Proactive communication and planning around learning and working styles to reduce deadline urgency

  • Co-working sessions to bounce and build on ideas 

  • Hiring a note-taker to support research processes 

2.3 – Transparent application & reciprocal accountability

We approach the use of any AI tools with caution and transparency. All work (both internal and external) will prioritize open and ongoing communication, to remain attentive and accountable to community and organizational commitments. 

We come to our relationships in good faith, trusting that by clearly communicating our approaches to AI and our justifications for them, our collaborators will participate in open dialogue around their use in our shared portfolios. By proactively planning and engaging in dialogue around application, we can uphold these commitments in a way that allows all team members to feel agency. However, in the rare instance in which the actions of any collaborator go against the terms of this policy, whether intentionally or not, we will engage in a reparative process to address harms, relative to the scale of the infringement and its outcomes.

A commitment to reparative justice means addressing harms from the perspective of those towards whom harm has been directed, in contrast to a punitive justice model, which places the focus on the actor who committed the harm. For example, in a traditional work environment, infringement of such a policy might end in contract termination as a reactionary response, with repair work conducted afterwards (if at all). This response does not hold the perpetrator accountable for their actions, or for their responsibility to steward relationships with our team members or on our behalf. Rather, by directing our responses towards the harm and its consequences, we prioritize ongoing relationships and repair, and provide the opportunity for those who have committed harms to take ownership of their actions. This approach foregrounds reciprocity and acknowledges that our capacity to do harm can be overshadowed by our willingness to do better.

3.0 – Applications

The following is a brief overview of the different types of AI tools and tangible ways that our work may have touchpoints with them. The definitions provided here are not universal and have been written for organizational use.

3.1 – Generative AI 

Generative AI (or GenAI) is a functional data modelling and machine learning system that analyzes patterns and structures in a given dataset to produce new data based on input prompts. Its applications are intended to be formative and contribute substantially (or in full) to the final output, whether in content or structure. This can include (but is not limited to): text, images, video, and audio data. 

Examples of Generative AI tools can include:

  • A student prompting a chatbot (e.g. ChatGPT) to write an assignment based on a given prompt

  • Using a text-to-image model (e.g. Canva AI) to generate a photo of a group of racialized people for a promo image

  • Note-taking with a live transcription tool (e.g. Otter.AI) and producing a summary of meeting minutes

3.2 – Assistive AI 

Assistive AI is a generalized term that encompasses a wide range of technologies and applications that are meant to aid users with specific tasks. Generally speaking, Assistive AI tools do not produce entirely new data, but rather are supportive to a process and may suggest or make alterations to an existing dataset. However, many Assistive AI tools have begun to integrate or add Generative AI functions to their platforms. 

Examples of Assistive AI tools can include:

  • Activating the Live Captioning tool on Google Chrome while listening to a public hearing, where subtitles are not already provided

  • Using a screen reader (e.g. Adauris.AI plug-in on The Tyee) to hear an article read aloud

3.3 – Platform-based AI tools

In addition to the above proprietary AI tools, many platforms have begun integrating AI tools directly into user interfaces. Often these tools are enabled by default upon updating, and may or may not have the option to disable them, either on an individual user level or administrator/workspace level. These tools are common in enterprise and workspace software, for the purposes of automation and/or efficiency. 

Examples: 

  • Generating and interacting with Deep Dive Audio Overview summaries of large documents via NotebookLM (Gemini for Google Workspace)

  • Summarizing unread notifications through Slack AI and generating a recap

  • Categorizing and sorting data in a Monday.com table, including detecting sentiment

  • Posting a video to Instagram, and the algorithm pushing it to a community member’s Explore tab


As our team continues to work on implementation guidelines, we would love to hear from you! Drop us a line, by email, DM or comment on social media to let us know:

  • What stuck out to you from reading our policy?

  • What resources have been helpful to you?

  • What events, tools, etc. would you like to see?

Stay tuned!
