THE SKINNY
on AI for Education
Issue 12, January 2025
Welcome to The Skinny on AI for Education newsletter. Discover the latest insights at the intersection of AI and education from Professor Rose Luckin and the EVR Team. From personalised learning to smart classrooms, we decode AI's impact on education. We analyse the news, track developments in AI technology, watch what is happening with regulation and policy, and discuss what all of it means for education. Stay informed, navigate responsibly, and shape the future of learning with The Skinny.
Headlines
​
- The AI Revolution Continues - What Can the Research Tell Us?
- What the Research Says: Early Lessons from AI Tutoring That Matter Today
- The ‘Skinny Scan’ on what is happening with AI in Education
- The Detail
- Categorised AI News Summary (November 2024 - January 2025)
​
The AI Revolution Continues, What Can the Research Tell Us?
​
Welcome to the first Skinny on AI of 2025.
​
In this issue I’m excited to introduce you to a new section of the newsletter that will be a regular feature in future issues: What the Research Says about AI in Education (WTRS).
​
I’ll also be doing my usual ‘Skinny Scan’ across what is happening in the world of AI and how it impacts education. As the leaves turned golden in the northern hemisphere and 2024 drew to a close, several converging developments were reshaping the technological landscape and accelerating AI's impact on society, infrastructure, and human agency.
​
As the field advances, balancing innovation with sustainability, humanity, and equitable progress emerges as the central challenge for 2025 and beyond. More on this later, but first….
​
"What the Research Says" - bridging historical insights with contemporary AI challenges
​
A significant challenge has emerged in the conversations surrounding AI in education: despite decades of valuable research, there remains a notable scarcity of readily accessible, robust evidence regarding AI's current educational impact. This gap becomes particularly pronounced as we navigate the integration of generative AI into educational settings.
​
There is a disconnect between contemporary discussions and the rich historical understanding that has emerged over many decades of AI in education research. While freely accessible research specifically addressing generative AI in education remains limited, there exists a wealth of relevant insights from previous decades that could inform our present challenges.
​
To address this critical gap, I’m introducing a new regular feature in The Skinny: "What the Research Says". My goal is to explore how past research can help ensure that we make the best use of AI in education by ensuring that our approach is grounded in robust evidence and informed by decades of valuable insights.
​
In the first instalment below I focus on AI tutors, examining how historical research into intelligent tutoring systems can inform our understanding and implementation of contemporary AI tutoring solutions. Each edition will connect established research findings with current challenges, offering practical insights for educators and administrators navigating the AI landscape. Readers can access these insights through both The Skinny newsletter and LinkedIn, where detailed discussions will explore the intersection of established research and emerging AI applications in education.
​
What the Research Says: Early Lessons from AI Tutoring That Matter Today
The rise of Artificial Intelligence (AI) in education marks a transformative era, promising personalised and adaptive learning experiences, and much energy is being put into developing and using AI tutors. To harness the full potential of these powerful tools, it is crucial to draw from decades of rigorous research. By understanding the historical successes and challenges of earlier AI tutors, modern educators and developers can build systems that not only replicate human-like guidance but also empower teachers and students in innovative ways.
I have selected two seminal research papers for this article in the new WTRS series: Cognitive Tutors: Lessons Learned (Anderson et al., 1994) and Meta-Reviews on AI in Education (du Boulay, 2016) (papers available here). Both papers provide invaluable insights into the principles, practices, and impacts of AI-powered learning tools. From the foundational work at Carnegie Mellon University to comprehensive meta-analyses of intelligent tutoring systems (ITS), these studies illuminate the critical design and implementation factors that make AI effective in improving learning outcomes.
There are three areas we can look at: Core Principles, Key Implementation Insights, and Implications for Today.
​
Core Principles That Stand the Test of Time
​
1. The Power of Step-Based Learning
· Early research showed that breaking problems into clear, manageable steps accelerated learning significantly
· Students using step-based AI tutors achieved proficiency in one-third the time of traditional instruction (Anderson et al., 1994)
· Meta-analyses confirmed that step-based tutoring systems consistently outperformed conventional classroom instruction (du Boulay, 2016)
​
2. The Importance of Immediate, Targeted Feedback
· Research found that immediate feedback on errors was important for learning (Anderson et al., 1994)
· Multiple studies also demonstrated that while feedback timing was important, the content of the feedback was also critical (du Boulay, 2016)
· Brief, specific feedback (focused on why something was wrong) proved more effective than lengthy explanations
· Students preferred concise feedback focused on the task rather than attempts at 'natural' conversation
​
3. Multiple Solution Paths
· Early systems demonstrated the value of recognising that different users may have different, but equally valid approaches to problem-solving
· Success for an AI Tutor came from supporting various solution strategies while maintaining clear pedagogical goals
· This principle remains crucial for designing AI-enhanced learning that adapts to diverse thinking styles
​
Key Implementation Insights
​
1. Teacher Integration is Critical
· The most successful implementations positioned AI as a classroom assistant rather than a teacher replacement (du Boulay, 2016)
· Teachers needed time (typically 1-2 years) to fully integrate AI tools effectively (Anderson et al., 1994)
· Studies consistently showed that teacher preparation and proper integration were crucial success factors (du Boulay, 2016)
· Systems worked best when teachers could control and customise the AI's role
2. Blended Learning Works Best
· AI tutors showed greatest impact when used as part of a blended learning approach
· AI excelled at foundational skill-building, freeing teachers for higher-order instruction
​
3. Student Engagement and Progress
· Students showed strong engagement when they experienced consistent progress
· Success breeds success: Early studies found students would skip other classes to work with effective AI tutors
· Particularly strong results were seen with previously struggling students
​
Implications for Today
1. Design Principles for AI Tools
· Rather than using AI to generate complete solutions, structure AI interactions to guide students through solution steps
· Focus on process guidance rather than answer generation
· Provide immediate, specific feedback
· Allow multiple valid solution paths
· Keep explanations concise and task-focused
2. Implementation Strategies
· Plan for significant teacher training and adaptation time
· Use AI to complement, not replace, teacher expertise
· Start with well-defined, foundational skills
· Monitor and adjust based on student progress
​
3. Assessment and Effectiveness
· Measure both learning gains and time to mastery
· Consider motivation and engagement as key success factors
· Look for transfer of learning to non-AI contexts
​
Looking Forward
​
These landmark studies offer a wealth of actionable guidance for those designing and using AI in education today. They emphasise the enduring relevance of step-based learning, immediate and targeted feedback, and the need for systems that accommodate diverse problem-solving approaches. Furthermore, they highlight the importance of integrating AI as a complementary tool within blended learning environments, underscoring the role of teachers in adapting and customising AI solutions to meet classroom needs.
​
For modern AI developers, these findings stress the value of balancing technological advancements with pedagogical sensibilities.
​
For educators, they offer reassurance that AI, when thoughtfully implemented, can enhance engagement, accelerate mastery, and support student-centred learning.
​
By ‘standing on the shoulders’ of these foundational studies, today’s innovators can ensure that AI becomes a powerful ally in shaping the future of education.
The ‘Skinny Scan’ on what is happening with AI in Education….
I hope you enjoyed the new WTRS section – read on for the usual Skinny Scan across what is happening in AI…
In short… converging developments in AI are reshaping the technological landscape and accelerating AI’s impact. Tech giants poured £3.8 billion into Malaysian data centres whilst exploring nuclear power solutions to meet soaring energy demands—a demand set to double by 2026. Meanwhile, corporate dynamics shifted as OpenAI's valuation reached £157 billion, testing its partnership with Microsoft.
​
The EU AI Act established new governance frameworks amidst debates over military applications, whilst traditional educational technology saw investment plummet from £17.3 billion to £3 billion as AI-focused investment surged to £51.4 billion.
​
Environmental concerns intensified with predictions of 5 million metric tonnes of AI-related waste by 2030. In healthcare, whilst AI tools now process over a million physician-patient encounters monthly, the need for human oversight remains crucial—90% of AI-generated medical notes still require manual editing.
​
What does all this mean for Education? Read on to explore.
​
More detail below, with lots of lovely links for those who like to dig a bit deeper and explore the sources.
​
- Professor Rose Luckin, UCL and Educate Ventures Research, January 2025
The Detail
The Physical Foundations
In Malaysia's Johor state, amidst humid tropical air, countless servers hummed in newly constructed data centres. Tech giants ByteDance, Microsoft, and Oracle had invested billions in these facilities, transforming this once-quiet region into a crucial node in AI's global infrastructure. The investment of £3.8 billion wasn't merely about storing data—it represented the physical foundation of AI's expanding capabilities.
​
Meanwhile, in another corner of the tech world, an unprecedented shift was occurring. Amazon, Google, and Microsoft, traditionally focused on digital infrastructure, were venturing into nuclear power. Amazon's £500 million investment in X-energy for small modular reactors highlighted a stark reality: AI's appetite for energy was becoming insatiable. With data centres projected to consume more than 1,000 terawatt-hours of electricity by 2026—more than double their 2022 consumption—the race for sustainable power sources had become critical.
​
Corporate Dynamics and Innovation
The relationship between Microsoft and OpenAI, once celebrated as the perfect marriage of resources and innovation, showed signs of strain. As OpenAI's valuation soared to £157 billion, both companies sought greater independence. Their evolving partnership illustrated the delicate balance between collaboration and competition in the AI ecosystem.
​
In the realm of model development, Mistral AI's launch of Ministral 3B and 8B models marked a significant shift toward edge computing. Priced at £0.04 and £0.10 per million tokens respectively, these models could process an impressive 131,072 tokens of input context. This development democratised AI access, bringing sophisticated capabilities to smartphones and laptops.
​
Regulatory Landscape and Ethical Considerations
The implementation of the EU AI Act in August 2024 set new standards for AI governance. LatticeFlow's COMPL-AI framework emerged as a crucial tool, evaluating models across five categories: technical robustness, privacy, transparency, fairness, and social/environmental impact. When tested, GPT-4 Turbo and Claude 3 Opus achieved the highest scores of 0.89, though most models struggled with fairness and security metrics.
​
A particularly contentious development emerged in the military sector. Meta's decision to allow Llama models for US national security purposes, followed by Anthropic offering Claude models to intelligence and defence agencies, marked a significant shift in the industry's relationship with military applications. This raised complex questions about the balance between national security and ethical AI development.
​
Infrastructure and Labour
The automation of ports worldwide became a flashpoint for AI's impact on traditional employment. Shanghai's Yangshan Port, processing 113 containers per hour compared to Oakland's 25, demonstrated the potential efficiency gains. However, the International Longshoremen's Association's resistance highlighted the human cost of this transition, with potential strikes threatening £7.5 billion in weekly economic impact.
​
Educational Implications
The transformation of educational technology was particularly striking. Traditional EdTech investment plummeted from £17.3 billion to £3 billion, while AI investment soared to £51.4 billion. Companies like Chegg and Coursera faced significant declines, while Duolingo thrived through AI integration. This shift occurred against a backdrop of concerning statistics—only 18% of US teenagers could accurately distinguish between different types of media content, with nearly half believing the press harms rather than protects democracy. While over half of US states have discussed media literacy, with 18 passing related bills, teachers face significant challenges including lack of consistent definitions, limited curriculum time, and uncertainty about effective teaching methods.
​
Looking Forward
Late 2024 was marked by OpenAI's ambitious expansion plans, with the company aiming to quintuple its user base to 1 billion in 2025. Having raised £4.7 billion at a £118 billion valuation in October 2024, it invested heavily in infrastructure, including new data centres across the American midwest and southwest, despite spending over £3.9 billion annually.
​
Meanwhile, YouTube under CEO Neal Mohan's leadership generated £39.3 billion in annualised revenue for Alphabet, streaming 1 billion hours of content daily on connected TVs. With £15.7 billion invested in original content and £55 billion paid to creators over three years, the platform positioned AI as a creator support tool rather than a replacement.
​
As 2024 drew to a close, several critical trends emerged. The development of "automated superapps" threatened to concentrate digital activity into fewer platforms, while AI agents began fundamentally changing human-computer interaction. Anthropic's launch of desktop control capabilities for Claude 3.5 Sonnet, achieving a 15% success rate on OSWorld benchmark tasks, hinted at AI's expanding role in everyday computing.
​
The environmental impact of these developments cast a long shadow, with projections suggesting 5 million metric tonnes of AI-related e-waste by 2030. Studies indicated that extending server lifespan by just one year could reduce waste by 62%, with potential to recover £11-22 billion in valuable materials. Trade restrictions on advanced chips may increase waste by 14%. This highlighted the urgent need for sustainable practices in AI development and deployment.
​
The Human Element
Perhaps most significantly, the year demonstrated the complex interplay between AI enhancement and human agency. Society is grappling with AI's dual role in both empowering and potentially constraining human freedom.
​
The remarkably low development costs—£2.40 for API calls and £28 monthly AWS bills—suggested democratisation of AI development. Amazon's launch of the Nova AI platform further supported this trend, with Nova Pro matching leading models at lower cost (£0.80/£3.20 per million tokens) and specialised offerings for various applications. However, this accessibility came with its own challenges, particularly in ensuring responsible development and deployment.
​
In the healthcare sector, this democratisation was already showing practical impact. Investment in AI-assisted medical documentation doubled to £629 million in 2024, with Microsoft's Nuance DAX Copilot handling 1.3 million physician-patient encounters monthly. While these tools demonstrated potential to reduce clinical documentation time by 50%, a Stanford trial revealed that 90% of AI-generated notes still needed manual editing—a reminder that human oversight remains crucial in critical applications.
​
As we look toward 2025, the AI landscape continues to evolve at a dizzying pace. The challenge lies not just in keeping up with technological advancement, but in ensuring that this progress serves humanity's best interests while preserving individual agency and social equity.
Categorised AI News Summary (November 2024 - January 2025)
AI in Education
Media Literacy Education in US Schools (18 November 2024)
Source Only 18% of US teenagers can accurately distinguish between different types of media content, with nearly half believing the press harms rather than protects democracy. While over half of US states have discussed media literacy, with 18 passing related bills, teachers face significant challenges including lack of consistent definitions, limited curriculum time, and uncertainty about effective teaching methods. Various approaches being tested include "lateral reading" for source credibility verification, educational games, and inductive learning using real examples. The situation highlights the urgent need for integrated media literacy education across subjects and improved teacher training in this critical area.
​
AI Investment in Online Education (24 December 2024)
Source Global edtech investment plummeted to £3bn in 2024 from £17.3bn in 2021, whilst AI investment soared to £51.4bn. Traditional edtech companies like Chegg and Coursera faced significant declines, whilst Duolingo thrived through AI integration. The shift suggests a fundamental transformation in educational technology, with OpenAI and Google developing education-specific tools. This trend indicates educators need to reassess their digital strategies, balancing traditional teaching methods with AI integration whilst ensuring equitable access to these emerging technologies.
​
Australian Social Media Age Restrictions (December 2024)
Source Australia's groundbreaking legislation banning under-16s from major social media platforms presents significant challenges for educators. The law, effective November 2025, requires schools to fundamentally rethink their digital engagement strategies and develop alternative communication channels. While aimed at protecting young people, implementation challenges include the prevalence of age-spoofing software and privacy concerns around verification methods. Educational institutions must now balance digital literacy education with compliance, while developing new approaches to facilitate student collaboration and maintain parent-school communication.
AI Ethics and Societal Impact
Corporate Disinformation Challenges (23 December 2024)
Source Companies face unprecedented challenges from AI-enhanced disinformation, with response times shortened from 24 hours to just 4 hours. The article highlights cases like Arla Foods facing boycott threats over baseless conspiracy theories, demonstrating the growing threat to corporate reputations. This development emphasises the critical need for enhanced digital literacy and rapid response capabilities in an era where AI-generated content can rapidly spread misinformation.
​
Meta's AI-Generated Social Media Users (23 December 2024)
Source Meta plans to populate its platforms with AI-generated characters to engage its 3 billion users, introducing profiles, bios, and content generation capabilities. Whilst offering new opportunities for engagement, experts warn of potential misuse for spreading misinformation. The development raises significant questions about authenticity in social media and the need for clear labelling of AI-generated content.
​
Physical AI Reality Check (30 December 2024)
Despite widespread enthusiasm about humanoid robots, industry experts are tempering expectations about AI's physical manifestations. The failure of high-profile projects like the Pepper robot, which only produced 27,000 units despite substantial investment, highlights persistent challenges in hardware development. Nvidia's CEO suggests the future lies in "physical AI" - enhancing existing machines rather than creating humanoid robots. Key obstacles include motor limitations, sensory feedback complexity, and power supply constraints.
​
Environmental Impact Concerns (January 2025)
Projections show 5 million metric tonnes of AI-related e-waste by 2030. Extending server lifespan by one year could reduce waste by 62%, with potential to recover £11-22 billion in valuable materials. Trade restrictions on advanced chips may increase waste by 14%. There's a growing focus on sustainable AI infrastructure.
AI and Cybersecurity
National Security Concerns (3 December 2024)
Source The UK National Cyber Security Centre reported a significant increase in cyber threats, with 1,957 attacks, including 89 "nationally significant" incidents. AI is enhancing threat capabilities in data extraction and attack sophistication. The situation highlights the urgent need for enhanced cybersecurity education and awareness across all sectors.
​
AI Safety Developments (January 2025)
NYU and Meta AI report improved methods to prevent AI manipulation, with the E-DPO technique maintaining safety boundaries while preserving capabilities. Focus is shifting to training methods rather than data expansion, with industry-wide commitment to reliable AI systems and growing emphasis on practical safety measures.
AI Employment and the Workforce
Meeting Dynamics and AI (2 December 2024)
Research reveals that unproductive meetings cost US businesses £204bn annually, with generational disparities in participation particularly affecting younger workers. AI solutions are emerging to address these challenges, including natural language processing tools that flag dominating voices and provide real-time feedback about speaking time. Tools like Fathom AI and Grain offer meeting summaries and generate follow-up questions to encourage deeper discussion.
​
Humanoid Robotics Outlook (2 January 2025)
Citibank's projection of 648 million humanoid robots by 2040 presents both opportunities and challenges for the workforce. With initial costs of £27,500 per unit potentially recovered within a year, these robots face practical limitations including a 1:2 charging ratio and significant maintenance costs (20% of total cost annually). While Toyota Research Institute has successfully taught robots over 500 skills, questions remain about the necessity of bipedal design and the substantial infrastructure requirements.
AI Development and Industry
OpenAI Targets Major Expansion (30 November 2024)
OpenAI is pursuing ambitious growth plans, aiming to quintuple its user base to 1 billion in 2025. The company, which raised £4.7bn at a £118bn valuation in October 2024, is investing heavily in infrastructure, including new data centres across the American midwest and southwest. Despite spending over £3.9bn annually, OpenAI is focusing on development of AI "agents" for complex tasks and plans to launch its own AI-powered search engine.
​
Model Development Challenges (December 2024)
Major companies reporting limited gains despite increased scale and investment. OpenAI's Orion showing modest improvement over GPT-4, Google's Gemini facing development hurdles, and Anthropic delayed Claude 3.5 Opus release. Industry shifting focus from size to efficiency, with training costs projected to reach £79 billion within years.
​
Google's AI Advancement (22 December 2024)
Source Google launched several significant AI breakthroughs including Gemini 2.0, Project Mariner, and Project Astra, demonstrating its commitment to maintaining leadership in AI development. The company's stock rose 38% in 2024, though it still faces competition from AI-powered alternatives and antitrust challenges.
​
Amazon-Anthropic Partnership Expansion (December 2024)
Additional £4 billion investment, bringing total to £8 billion, focusing on Amazon's custom chips for AI training. Projects 50% cost reduction compared to traditional GPUs and aims to strengthen AWS's market position. Partnership includes development of specialised training infrastructure.
​
DeepSeek's Transparent AI (December 2024)
Released DeepSeek-R1 with visible reasoning processes, outperforming competitors in mathematical and programming tasks. Offers free preview with 50-message daily limit and emphasises transparency in decision-making, marking shift from traditional scaling approaches.
​
Mistral AI's Visual Recognition (January 2025)
Released Pixtral Large for text and image processing, handling up to 30 high-resolution images simultaneously. Pricing at £1.60 per million input tokens and £4.80 per million output tokens. Strengthens European presence in AI development.
​
Amazon's Nova AI Platform Launch (January 2025)
Nova Premier (launching 2025) for high-performance computing, Nova Pro matches leading models at lower cost (£0.80/£3.20 per million tokens), Nova Lite optimised for speed and efficiency, Nova Micro specialising in text processing, Nova Canvas for image generation, and Nova Reel for video creation.
​
AI Dominance at Consumer Electronics Show (10 January 2025)
Major technology companies showcased extensive AI integration across consumer devices at CES, reflecting the growing influence of AI in everyday products. Nvidia, valued at £2.8tn after a 150% stock increase, unveiled a mini AI supercomputer and the new Cosmos platform featuring foundational AI models. CEO Jensen Huang outlined the anticipated evolution from perception AI to "physical AI".
​
Notable innovations included BMW's implementation of Amazon's Alexa with large language models, LG and Samsung's adoption of Microsoft's AI co-pilot for televisions, and Google's integration of Gemini AI assistant into TV operating systems. Novel products included Halliday's AI-assisted lightweight glasses and Apptronik's "Apollo" humanoid robot, priced at £39,500.
Emerging technologies showcased included Xpeng AeroHT's "modular flying car" (under £237,000), Kirin's electronic spoon enhancing flavour through electrical current, and Holoconnects' Holobox Mini 3D projector. However, Bank of America reported "lacklustre" consumer demand for AI products, with analysts noting consumer wariness about safety and reliability. Quantum computing stocks declined following Huang's 15-year timeline prediction for the technology's maturation.
AI Regulation and Legal Issues
Australian Social Media Regulation (30 November 2024)
Australia has implemented groundbreaking legislation banning under-16s from social media platforms, with potential fines of £25.5m for non-compliance. The ban affects major platforms including TikTok, Facebook, Instagram, Snapchat and Reddit, impacting TikTok's 8.5m Australian users and 350,000 Australian businesses.
​
US Homeland Security Warning (20 December 2024)
Source The outgoing US Homeland Security chief criticised the EU's "adversarial" approach to AI regulation, warning that differing regulatory approaches between the US and EU could create security vulnerabilities. The situation highlights the challenge of balancing innovation with security in AI governance.
​
UK Creative Rights and AI (16 December 2024)
Source Nearly 40 British creative organisations formed the Creative Rights in AI Coalition, seeking copyright protection from AI companies. The coalition represents a £100bn+ industry advocating for licensing markets and creator control over work used in AI training.
​
UK Government AI Strategy 2024-2025
​
Starmer Unveils Ambitious AI Plan (January 2025)
Prime Minister Keir Starmer has announced a transformative three-part strategy to establish Britain as an AI "world leader". The plan, marking a significant departure from Rishi Sunak's safety-focused approach, emphasises infrastructure development, economic productivity, and domestic AI innovation. This strategic pivot comes amid concerns that the previous government's emphasis on AI risks may have deterred tech investment.
​
AI Opportunities Action Plan Details (January 2025)
The comprehensive strategy, authored by venture capitalist Matt Clifford and endorsed by Starmer, outlines 50 recommendations including the creation of a "national data library" incorporating anonymised NHS patient data. The government aims to increase its computing power twentyfold, targeting 100,000 GPUs in government capacity by 2030. New "AI growth zones" are planned, with the first to be established in Culham, Oxfordshire.
​
Infrastructure Investment and Challenges
Significant infrastructure developments include Nscale's £2 billion investment in UK sovereign AI facilities, though the £800 million Edinburgh "Exascale" supercomputer project has been cancelled. The sector faces substantial energy challenges, with data centre electricity demand projected to quadruple by 2030. Current UK data centres employ 43,500 people, with the workforce expected to double within a decade.
​
Government Safety and Business Support (November 2024)
Science and Technology Secretary Peter Kyle has introduced dual initiatives combining regulatory oversight with practical support. The government has committed to comprehensive AI regulation legislation in 2025, alongside a new platform helping British businesses, particularly SMEs, implement AI technologies. Additional measures include streamlined visa processes for AI experts and the designation of special "computing zones" for data centres.
​
Market Position and Investment
Current statistics reveal the UK hosts 64 unicorn companies valued over £1 billion, compared to 770 in the US. The largest UK tech investment reached £1 billion (Wayve), with total investment in UK-based AI companies approaching £4 billion by 2024. However, the British Chambers of Commerce reports 40% of companies still have "no plans" to adopt AI technology, highlighting significant growth potential.
​
The UK's strategy aims to chart a middle path between EU regulation and US market-led approaches, positioning Britain as a global leader in AI safety testing and verification while fostering practical implementation and economic growth.
AI Market and Investment
Nvidia's Market Dominance (December 2024)
Source Nvidia invested £1 billion across 50 start-up funding rounds and corporate deals in 2024, surpassing £3 trillion market capitalisation. The company's investments span diverse sectors including medical technology, search engines, and gaming, though it faces increased antitrust scrutiny globally.
​
YouTube's AI Integration (3 January 2025)
Under CEO Neal Mohan's leadership, YouTube is leveraging AI to drive growth, generating £39.3bn in annualised revenue for Alphabet. The platform, which streams 1 billion hours of content daily on connected TVs and receives 70 billion daily views on "Shorts," is launching AI tools including "Dream Screen" and "Dream Track" for content generation. With £15.7bn invested in original content in early 2024 and £55bn paid to creators over three years, YouTube is positioning AI as a creator support tool rather than a replacement.
​
AI Medical Scribes (5 January 2025)
The healthcare sector is experiencing a significant shift towards AI-assisted documentation, with investment in medical note-taking applications doubling to £629m in 2024. Major tech companies are launching AI co-pilots for physicians, with Microsoft's Nuance DAX Copilot handling 1.3 million physician-patient encounters monthly. While these tools can reduce clinical documentation time by 50%, challenges remain - a Stanford trial showed 90% of AI-generated notes needed manual editing.
Further Reading: Find out more from these resources
Resources:
- Watch videos from other talks about AI and Education in our webinar library here
- Watch the AI Readiness webinar series for educators and educational businesses
- Listen to the EdTech Podcast, hosted by Professor Rose Luckin here
- Study our AI readiness Online Course and Primer on Generative AI here
- Take the AI for Educators Adaptive Learning programme here
- Read our byte-sized summary, listen to audiobook chapters, and buy the AI for School Teachers book here
- Read research about AI in education here
About The Skinny
Welcome to "The Skinny on AI for Education" newsletter, your go-to source for the latest insights, trends, and developments at the intersection of artificial intelligence (AI) and education. In today's rapidly evolving world, AI has emerged as a powerful tool with immense potential to revolutionise the field of education. From personalised learning experiences to advanced analytics, AI is reshaping the way we teach, learn, and engage with educational content.
In this newsletter, we aim to bring you a concise and informative overview of the applications, benefits, and challenges of AI in education. Whether you're an educator, administrator, student, or simply curious about the future of education, this newsletter will serve as your trusted companion, decoding the complexities of AI and its impact on learning environments.
Our team of experts will delve into a wide range of topics, including adaptive learning algorithms, virtual tutors, smart classrooms, AI-driven assessment tools, and more. We will explore how AI can empower educators to deliver personalized instruction, identify learning gaps, and provide targeted interventions to support every student's unique needs. Furthermore, we'll discuss the ethical considerations and potential pitfalls associated with integrating AI into educational systems, ensuring that we approach this transformative technology responsibly. We will strive to provide you with actionable insights that can be applied in real-world scenarios, empowering you to navigate the AI landscape with confidence and make informed decisions for the betterment of education.
As AI continues to evolve and reshape our world, it is crucial to stay informed and engaged. By subscribing to "The Skinny on AI for Education," you will become part of a vibrant community of educators, researchers, and enthusiasts dedicated to exploring the potential of AI and driving positive change in education.