School leaders are often told they face a choice: embrace AI or ban it – but that framing is out of date.
Whether school leaders have formally planned for it or not, AI is already being used across schools: in lesson planning, parent communications, homework and mainstream software. So the real choice is not whether to adopt AI or ban it – it’s whether to govern its use to protect students, teachers and the institution itself.
The data makes the gap between usage and oversight plain. In England, the Technology in Schools survey found that 44% of teachers report using generative AI for school activities. In the US, RAND found that 53% of core-subject teachers are doing the same – yet only 45% of principals say their school or district has AI policies or guidance in place.
Ofcom's UK data adds further weight: 54% of children aged 8-15 had used a generative AI tool in the previous year, and more than half said they'd used it for schoolwork.
The longer leaders treat AI as a future adoption decision, the more ungoverned AI use will spread – without shared rules, visibility or safeguards.
What unmanaged AI use actually looks like in a school
Unmanaged AI rarely announces itself through a major incident. It tends to appear quietly.
A teacher using their personal AI account to draft parent communications, without knowing how that data is stored or processed. A student using AI for homework with no clear expectations from the school, or staff experimenting with tools that haven’t been reviewed for privacy, safeguarding or accuracy.
None of these scenarios demonstrate bad intent. They simply reflect what happens when usage runs ahead of governance.
UK guidance is explicit that schools should understand whether they are dealing with open or closed tools, avoid entering identifiable personal data into open systems, and be transparent about how generative AI is being used.
When those foundations aren't in place, schools lose the ability to shape practice before it becomes a problem – and if something does go wrong, leaders are left in a reactive position with limited visibility and limited defensibility.
The core risks from unmanaged use are well-documented and fall into predictable categories:
Data privacy
Open tools may store, process or reuse the information entered into them. A teacher who pastes student or parent data into an unapproved tool may not realise the implications – but the school remains responsible for the data.
Safeguarding
Students have access to tools capable of generating synthetic abusive imagery, including so-called nudify applications. Australia's eSafety Commissioner has described AI-generated image abuse affecting school communities as a current and growing concern. This needs to sit inside existing safeguarding procedures, not outside them.
Assessment integrity
When departments take different positions on AI use – or no position at all – confusion is the predictable result. Schools are still developing workable frameworks for honesty, verification and disclosure in assessed work.
Reputation and trust
Parents are more likely to lose confidence when they discover AI has been used without their knowledge than when schools explain clearly how it is being used and what safeguards have been put in place. Transparency is not a weakness in this conversation – it’s protection.
Consistency and equity
When AI use is left to individual teachers, quality and practice become uneven across classrooms. Students in one room may benefit from thoughtful AI-assisted feedback while students in another may be disadvantaged by no clear framework at all.
These risks become hardest to manage when leaders lack visibility on who used what tool, for what purpose, with what data, and under what safeguards.
How governed AI use creates a completely different environment
When schools bring AI into a controlled environment, the picture changes quickly.
Leaders know which tools are being used. They know what data can and can’t be entered. They know where human judgement still sits. Staff have guidance. Parents can be informed. There is a trail of accountability. Governance turns AI from a vague source of anxiety into something schools can monitor, explain and improve over time.
There are already examples of education systems moving in this direction.
In New South Wales, NSWEduChat has been positioned as a department-owned, monitored and filtered environment, giving schools a more controlled way to explore generative AI.
One defined assessment workflow linked to NSWEduChat reportedly reduced a process from around 30 minutes per writing sample to 52 seconds. These examples don’t suggest every school should copy the same model. They show that governance makes practical use possible.
The opportunity for school leaders is this: good governance doesn't just reduce risk. It creates the conditions where AI can genuinely reduce workload, improve consistency and protect institutional trust – all at the same time.
The schools best placed to benefit from AI won't be the most adventurous early adopters. They'll be the ones with clear frameworks that make responsible use possible.
What a practical AI governance framework looks like
Most schools don’t need a perfect AI strategy before they take action; they just need a workable starting point. The aim is to make AI use visible, bounded and reviewable, so leaders can reduce uncertainty and bring consistency to what’s already happening.
Governance in this area works best when ownership is clear across leadership.
- Principals set the educational purpose, define expectations for staff use, and ensure communication with parents is clear and credible.
- IT leaders assess tools for privacy, security, access and data handling, and create a clear distinction between approved and non-approved use.
- Boards and governors provide oversight of risk, ask for assurance, and ensure AI use aligns with the school's responsibilities around safeguarding, compliance and reputation.
The practical framework should also answer a small number of basic questions:
- Which tools are approved for use by staff and students?
- What kinds of tasks are acceptable uses of those tools?
- What data must never be entered into open AI systems?
- Where must human review remain in the loop?
- How will staff and students be guided?
- How will that guidance be kept current?
Schools don’t need all the answers on day one. But they do need enough structure to replace guesswork with shared expectations.
First steps schools can take now
For most schools, the most useful immediate move is not a launch – it’s an audit.
Find out where AI is already being used: by staff, by students, and inside existing software platforms. To encourage honesty and transparency, frame the disclosure as an amnesty, with no finger-pointing and no repercussions. Once that picture is clearer, sensible guardrails can be placed around it.
Some sensible first steps include:
- Map current use
Identify which tools are already in use, by whom, and for what purpose – including tools embedded in platforms the school already pays for.
- Define approved use
Create a clear, short list of school-approved tools and make it equally clear what falls outside that boundary.
- Set clear rules around data
Staff should know precisely what information must never be entered into open AI systems. This does not need to be complicated to be effective.
- Update assessment guidance
Clarify what acceptable AI use looks like in student work, where disclosure is required, and how verification will work.
- Train staff practically
Keep training grounded in real decisions – privacy, accuracy, safeguarding, bias and professional judgement – rather than broad capability demonstrations.
- Build AI into safeguarding and risk processes
Deepfakes, synthetic imagery and AI-enabled misconduct are not separate issues. They belong inside existing response procedures.
- Review regularly
AI use is moving quickly. Schools need a light but consistent review cycle so policy, practice and tools don’t drift apart.
None of these steps requires a school to become radically innovative overnight. They simply create enough visibility and control to reduce exposure and make better decisions.
The governance question every leader should be able to answer
Schools are under enough pressure already. They don’t need another technology debate built around panic or hype. They need a leadership response that matches what’s already happening on the ground.
AI use is spreading through classrooms, homework, planning workflows and mainstream software. The response won’t be shaped well by silence, delay or blanket discomfort. It will be shaped by whether leaders can bring it into view, place boundaries around it and make its use accountable.
The schools that handle this well won’t be the ones making the boldest claims. They'll be the ones creating clarity – clear rules, clear oversight, clear communication, clear responsibility.
That’s how trust is protected. That’s how inconsistency is reduced. And that’s how schools give themselves a far better chance of using AI in ways that support staff without exposing students, teachers or the institution itself.
The question for school leaders is straightforward: Can you currently see where AI is being used in your school, and are you confident the right safeguards are in place?
If your answer is uncertain, this is where to start.
For guidance, book a call with the Teacher's Buddy team.