The Core Challenge
Public attitudes toward AI are evolving rapidly, and trust is conditional on demonstrated responsibility. Organisations that ignore shifting expectations risk a backlash that can derail AI initiatives regardless of their technical merit.
Key Concepts
| Concept | Definition |
|---|---|
| Public trust | Confidence that organisations will use AI responsibly. Surveys show fewer than 40% of UK adults have this confidence. |
| Domain-specific attitudes | Trust varies by application area. Healthcare AI receives more favourable views than criminal justice or employment AI. |
| Transparency expectations | Growing public demand for clear disclosure of when AI is being used and how it works. |
| Participatory design | Involving affected communities in the design and evaluation of AI systems. |
| Social licence | The implicit permission granted by society for organisations to operate AI systems. |
Warning Signs
Watch for these indicators of misalignment with societal expectations:
- Stakeholder attitudes are assumed rather than systematically understood
- Transparency is minimal or legalistic rather than genuinely informative
- Affected communities have no voice in AI design or evaluation
- Feedback mechanisms exist but concerns aren't actually addressed
- High-profile failures in the sector haven't triggered internal review
- Organisational values around AI are vague or absent
Questions to Ask in AI Project Reviews
- "Who is affected by this system, and what voice have they had in its design?"
- "How transparent are we being about AI use? Would affected parties agree it's adequate?"
- "What feedback mechanisms exist, and how responsive are they actually?"
Questions to Ask in Governance Discussions
- "What do our stakeholders expect from us on AI? How do we know?"
- "When was the last time we changed our AI practices based on stakeholder feedback?"
- "What would we do if public attitudes toward this type of AI shifted significantly?"
Questions to Ask in Strategy Sessions
- "Are we leading or following on AI transparency and engagement?"
- "What's our social licence to operate AI? What could threaten it?"
- "How do high-profile AI failures elsewhere affect our approach?"
Reflection Prompts
- Your assumptions: What are you assuming about how stakeholders feel about AI? How do you know?
- Your organisation's position: Is your organisation leading public expectations, matching them, or falling behind?
- Your personal role: What could you do to ensure affected communities have meaningful voice in AI decisions?
Good Practice Checklist
- Stakeholder expectations are systematically understood and tracked
- Transparency goes beyond legal minimums to genuine communication
- Affected communities are involved in design and evaluation
- Feedback mechanisms are genuinely responsive, not just present
- Organisational values around AI are clear and embodied in practice
- Engagement is ongoing dialogue, not one-off consultation
Quick Reference
| Element | Question to Ask | Red Flag |
|---|---|---|
| Understanding | What do stakeholders expect? | Assumed, not researched |
| Transparency | How clear is AI disclosure? | Minimal or legalistic |
| Voice | How are affected people involved? | Not consulted |
| Responsiveness | What happens with feedback? | Collected but not acted on |
| Values | What do we stand for? | Vague or absent |
The Public Attitudes Landscape
- Overall: 70% of UK adults recognise AI's potential, but fewer than 40% trust organisations to use it responsibly.
- Domain variation: Healthcare AI is viewed relatively favourably; criminal justice and policing applications attract more scepticism; employment and hiring uses raise significant concerns.
- Emerging concerns: Environmental impact, job displacement, and loss of human judgment in important decisions.
- What shifts attitudes: High-profile incidents create lasting damage, while demonstrated responsibility builds trust incrementally.