What We Learned from London AI in Education Events This September
- Joshua Younger

- Sep 27
Across September, educators, technologists and policy leaders gathered in London for a series of growing AI and EdTech events focused on the future of learning. With schools returning from summer and conversations around artificial intelligence accelerating, the timing could not have been more relevant. Across panel discussions, workshops and informal conversations, one message stood out clearly: AI in education is no longer a future consideration; it is a present responsibility.
The events brought together classroom teachers, school leaders, EdTech founders and safeguarding specialists to explore how AI is already influencing learning, assessment and creativity. While the technology itself has advanced rapidly, much of the discussion centred not on capability, but on implementation. Attendees repeatedly returned to the same question: How can schools use AI in ways that genuinely benefit children, without increasing risk, workload or inequality?
A recurring theme across the events was the importance of purpose-led AI. Speakers emphasised that AI should solve real educational problems rather than exist for novelty. In literacy, this means addressing challenges such as disengagement, lack of reading confidence, widening attainment gaps and limited time for personalised support. Research from organisations such as the Education Endowment Foundation and UNESCO was frequently referenced, reinforcing that technology is most effective when it complements strong pedagogy rather than replacing it.
Safeguarding featured prominently in discussions, particularly in light of updated UK guidance on AI use in schools. Teachers and school leaders voiced understandable concerns around data protection, inappropriate content generation and lack of transparency in some tools. Several panels highlighted the expectations set out in Keeping Children Safe in Education and recent Department for Education guidance, stressing that AI platforms must demonstrate clear content controls, minimal data collection and human oversight. There was strong agreement that trust will be the deciding factor in whether AI tools are adopted at scale in primary education.
Another key insight from the events was the growing recognition of creativity as a core literacy skill. While early AI tools in education focused heavily on efficiency and assessment, newer approaches are shifting towards creative engagement. Educators spoke about the value of storytelling, imagination and pupil agency in developing confident readers and writers. This aligns closely with findings from the National Literacy Trust, which show that children who enjoy reading and feel ownership over their learning make stronger long-term progress.
Interactive storytelling was frequently cited as a powerful example of AI being used well. Rather than generating content in isolation, AI can support children by adapting difficulty, offering vocabulary support and responding to choices in a structured, age-appropriate way. Importantly, several speakers highlighted that this approach mirrors what effective teachers already do instinctively: adjusting challenge, encouraging curiosity and responding to individual needs in real time.
Teacher workload was another major focus. With recruitment and retention remaining ongoing challenges across the sector, there was broad consensus that AI should reduce administrative burden rather than add to it. Tools that automate low-impact tasks such as marking, data analysis and progress tracking were viewed positively, provided they remain transparent and easy to interpret. This reflects the Department for Education’s position that AI should free up teacher time for planning, feedback and pastoral care.
The events also highlighted the growing importance of AI literacy itself. Children are increasingly encountering AI outside school, often without guidance. Educators argued that schools have a responsibility to help pupils understand what AI is, how it works in simple terms, and how to use it safely and ethically. Introducing AI through creative, structured and supervised tools was widely seen as a sensible starting point, particularly in primary education.
For platforms like Litsee, the conversations in London reinforced the direction many EdTech companies are now taking. Safe-by-design systems, no unnecessary data collection, strong content moderation and a clear educational purpose are no longer optional extras; they are baseline expectations. AI should enhance reading, not replace it, and should encourage creativity rather than passive consumption.
As schools continue to navigate this rapidly evolving landscape, events like these play an important role. They provide space for educators to share concerns, for developers to listen, and for best practice to emerge collaboratively. The message from September was clear: AI in education is not about doing more, faster. It is about doing better, more thoughtfully, and always with children's wellbeing at the centre.