At European Electroniques, we were thrilled to team up with the digital safeguarding experts at Smoothwall to host an important webinar: “Is AI Putting Your Students at Risk?”

The session, led by Tom Newton, Smoothwall’s VP of External Relations, explored how schools can keep students safe from harmful AI-generated content. With over 20 years of experience supporting UK schools, Tom shared valuable insights into the changing digital landscape and what educators can do to stay ahead.

Here are the key takeaways every teacher and safeguarding lead should know:

The Challenge: AI Has Changed Everything

  • AI has transformed the online world so quickly that traditional web filters can no longer keep up.
  • The rise of “Internet slop” – Generative AI can produce endless amounts of low-quality text, images, and videos. Even if most of it is meaningless, it only takes one viral click to make it profitable.
  • URLs no longer tell the whole story – Many AI-powered sites generate new, personalised content every time someone visits. That means filters that rely on URLs or domain names alone are no longer effective.
  • Prompt injection threats – The OWASP Top 10 for LLM Applications ranks prompt injection as the number-one risk in generative AI. It lets users “trick” chatbots into breaking their safety rules – for example, by asking a chatbot to role-play as a school IT manager so that it reveals ways of bypassing a filter.
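For more technically minded readers, the URL problem above can be sketched in a few lines of Python. This is an illustrative toy, not any vendor’s real filter: the blocklist, domains, and page text are all made up. It shows why a domain check alone never sees what an AI site actually generated for a given visit.

```python
# Toy sketch: why a domain blocklist misses AI-generated content.
# All names and data here are hypothetical.

BLOCKLIST = {"known-bad.example"}  # illustrative domain blocklist

def url_filter(url: str) -> bool:
    """Return True if the request should be blocked, judging by domain alone."""
    domain = url.split("/")[2]
    return domain in BLOCKLIST

# Two visits to the same allowed URL can return completely different
# AI-generated text; the domain check never inspects either of them.
visit_1 = "Harmless homework help generated for student A"
visit_2 = "Harmful instructions generated for student B"

url = "https://ai-site.example/chat"
print(url_filter(url))  # False on both visits, whatever was generated
```

The decision depends only on the URL string, so two students visiting the same address can receive entirely different content with the same filtering outcome.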

The Solution: Real-Time, Content-Aware Filtering

  • To protect students in this new environment, schools need smarter, faster safeguards.
  • The Filter Pyramid – The most effective systems use real-time, content-aware filtering. This means the filter analyses the exact text or content a student sees, as they see it – not outdated data or pages hidden behind logins.
  • Following DfE guidance – The UK Safer Internet Centre now recommends filters that can assess live content, including material generated by AI.
  • Monitoring beyond the filter – Even the best filters can’t catch everything. That’s why ongoing monitoring is essential to spot harmful conversations or exchanges happening outside the standard web filter.
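The contrast with content-aware filtering can be sketched the same way. Again, this is a deliberately simplified illustration (a real system would use trained classifiers, not a keyword list, and none of the names below come from Smoothwall): the key difference is that the filter inspects the exact text the student is about to see, at the moment it is served.

```python
# Toy sketch of real-time, content-aware filtering.
# A keyword set stands in for a real content classifier; all data is hypothetical.

HARMFUL_TERMS = {"self-harm", "bypass filter"}

def content_filter(page_text: str) -> bool:
    """Return True if the live page text should be blocked."""
    text = page_text.lower()
    return any(term in text for term in HARMFUL_TERMS)

# The same URL, two different generated pages:
print(content_filter("Here is help with your maths homework"))  # False
print(content_filter("Here is how to bypass filter settings"))  # True
```

Because the decision is made on the generated text itself, it still works when every visit to a page produces something new.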

Action on Chatbots

Chatbots may seem harmless, but they can pose serious risks if left unchecked.

  • Chatbots can cause real harm – There have been tragic cases of young people being influenced or harmed by unsafe chatbot interactions.
  • Block first, review later – Treat chatbot sites like the “Wild West”. Block them by default, and only allow access after careful review by your IT team and Designated Safeguarding Leads (DSLs). Many of these sites carry 18+ terms of use and are currently under legal scrutiny following young people’s deaths.
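The “block first, review later” policy is essentially a default-deny allowlist, and can be expressed in a few lines. As before, this is an illustrative sketch with hypothetical names, not a real product configuration: the only domains that get through are ones explicitly approved after review.

```python
# Toy sketch of a default-deny allowlist for chatbot domains.
# Domains are added only after review by IT staff and DSLs; all names hypothetical.

APPROVED_CHATBOTS = {"approved-tutor.example"}

def chatbot_allowed(domain: str) -> bool:
    """Allow a chatbot domain only if it has been explicitly approved."""
    return domain in APPROVED_CHATBOTS

print(chatbot_allowed("random-chatbot.example"))  # False: blocked by default
print(chatbot_allowed("approved-tutor.example"))  # True: reviewed and approved
```

The important property is the default: anything unknown is blocked, so a new chatbot site is never reachable simply because no one has heard of it yet.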

What’s Next?

A huge thank you to everyone who joined this insightful webinar. The advice shared gives schools and trusts a clear roadmap for protecting students in an age of AI.