The Human Side of AI: Building a Positive Human + AI Culture
Most AI conversations rightly focus on governance, compliance, and risk.
But there’s a quieter, more fundamental question we don’t talk about enough:
What kind of culture do we want to build as humans and AI learn to work together?
Technology is moving faster than most teams can adapt. While policies create guardrails, culture ultimately decides whether AI enhances or erodes trust, belonging, and purpose within your organisation.
Culture often determines whether AI delivers value
In my work with charities and mission-led organisations, I see a clear pattern: they rarely fear the technology itself — they fear losing their humanity, empathy, and mission focus as they scale.
That’s why I think AI isn’t just a tech transformation; it’s a cultural one.
Technology may change how we work, but culture decides how that change feels – and whether it lasts.
The organisations doing this best start by asking:
- How will AI help us serve people better?
- How do we keep our values visible as we automate more tasks?
- What new conversations about trust and transparency do we need to have?
These aren’t compliance questions. They’re cultural ones.
Three foundations of a positive Human + AI culture
1. Transparency
Be open about how and why AI is being used. When people know the “why”, they feel part of the change – not subject to it. For example, explaining how a generative tool supports report writing builds confidence; hiding it creates anxiety.
Action: Publish a short “AI use statement” for staff and customers listing the tools in use and what they do.
2. Curiosity
Create space for learning, experimenting, and failing safely, because thoughtful mistakes are where the real learning happens. Encourage staff to “play” with AI tools before they’re expected to use them formally. Curiosity builds confidence, and confidence builds capability.
Starter actions:
- Run a 60-minute “safe-to-fail” sandbox session for a small volunteer group.
- Hold a fortnightly 15-minute “AI check-in” where staff share one learning.
- After each pilot, capture two quick questions: “What helped?” and “What concerned us?”
3. Shared Purpose
Remind everyone what the technology is for. AI should free humans to do more of the work that matters – connecting, caring, creating, leading.
It’s reasonable that leaders will also look at efficiency and cost. The question is how to reconcile efficiency with mission. Culture is where you find the balance.
For example, automating donor thank-you letters might save 10 hours a week – time a fundraiser can redirect to relationship-building with major donors. That’s efficiency serving mission.
Start with the people, not the platform
To make these cultural foundations tangible, you need a framework for reflection. If you’re introducing AI to your team or organisation, here’s a simple starting point using my 5 Lenses of Success framework:
Purpose – Why are we using AI? What human outcome are we seeking?
First action: Write one sentence describing the human outcome AI should improve.
People – Who is most affected, and how do we involve them early?
First action: Invite them to co-design one small pilot.
Power – Who makes decisions about how AI is used – and who checks them?
First action: Define who signs off usage and who reviews impacts quarterly.
Process – How do we integrate AI safely and ethically into everyday workflows?
First action: Map each workflow and pilot new AI steps in a controlled, low-risk setting before scaling up.
Practice – How will we keep learning and adapting once it’s in use?
First action: Add AI reflections to existing team reviews or learning logs.
Culture and compliance go hand in hand
Culture work must sit alongside good governance — they reinforce rather than oppose one another.
If your AI tool affects decision-making or communication with people, involve your data or technical lead early, and agree clear responsibilities and oversight.
In fact, the UK’s Institute of Directors (IoD) recently released AI Governance in the Boardroom, calling on boards to move beyond box-ticking. They argue that governance of AI must be strategic: embedding accountability, ethical oversight, and alignment with an organisation’s purpose from the start.
That emphasis complements the cultural lens: when your leaders are already asking “Who is accountable?”, “How does this tool advance our mission?”, and “What is the impact on people?”, you’ve already begun bridging culture and governance.
The cultural dividend
Teams that treat AI as a cultural shift, not just a software rollout, often rediscover creativity.
Once they trust that AI isn’t replacing them, they start imagining how it might support them – and that’s when innovation returns.
Before your next AI policy review, take a step back and ask:
Are we focusing on compliance – or culture?
Because in the end, the technology will follow the culture you create.
A quick reflection
So ask your board, your team, your partners:
“Are we building a relationship with technology that reflects who we are – and who we aspire to be?”
The future will favour those who answer that question with honesty and intent.
Further reading
- Digital Care Hub – Responsible AI in Social Care (sector guidance and checklists)
- Stanford University – Generative AI Needs Adaptive Governance (2024)
- Charity Excellence Framework – AI Governance & Ethics for Charities
This article was co-created through a human-led process using several AI models – including ChatGPT, Claude, Gemini, and Perplexity – as thinking partners. It reflects our commitment to ethical, transparent, and accountable use of AI, where human judgement, curiosity, and oversight remain central.