THE SKINNY
on AI for Education
Issue 11, November 2024
Welcome to The Skinny on AI for Education newsletter. Discover the latest insights at the intersection of AI and education from Professor Rose Luckin and the EVR Team. From personalized learning to smart classrooms, we decode AI's impact on education. We analyse the news, track developments in AI technology, watch what is happening with regulation and policy and discuss what all of it means for Education. Stay informed, navigate responsibly, and shape the future of learning with The Skinny.
Headlines
How could AI help England's Special Educational Needs crisis?
Welcome to The Skinny where I look at recent developments in the world of AI and in particular what that means for education. As always, I take a broad scan of AI from advances in the development of the underpinning technologies, to regulation and ethics, and the implications for society and the workforce. This broader context is important to better understand the potential implications for education and training. Read on for a brief synopsis of the issues that caught my attention and of course you can see the full detail with all the links and sources below.
But first, I want to focus on a subject close to my heart: how could AI help England's Special Educational Needs crisis?
Throughout the more than 25 years I have worked in AI in education, I have believed that AI offers an amazingly powerful set of tools to help address the needs of all learners, no matter how diverse. When I see the huge amounts of money being invested by tech companies - eye-watering sums have been a particular trend in the news over the last month - and the sophisticated AI technology development happening at pace, I feel even more strongly that those in most need should be prioritised to benefit.
As we approach 2025, England's special educational needs and disabilities (SEND) system stands at a critical juncture. What began as an ambitious reform programme with the 2014 Children and Families Act has evolved into what many are calling a perfect storm of rising demand, diminishing resources and systemic failure.

I admit that the scale of the challenge is staggering. Today, 1.67 million pupils require special educational support - a number that has grown by nearly 22% in just four years. More telling still is the dramatic rise in those requiring the highest level of support through Education, Health and Care Plans (EHCPs), which has doubled since 2015 to 576,000 children and young people. The most pronounced growth is in autism spectrum disorder (ASD) diagnoses, which now cover 1.6% of pupils compared with 0.5% in 2010. We are also seeing significant increases in speech, language and communication needs, alongside rising social, emotional and mental health challenges.

The financial implications are sobering. Despite government funding reaching £10.7 billion (a 60% increase since 2019-20), the system is buckling. And the crisis extends far beyond financial metrics: the human cost is evident in the outcomes, with SEND pupils having significantly higher absence rates and reduced chances of sustained employment compared to their peers.
There is no simple answer and of course any AI driven initiative would require substantial investment and would take time to develop, but surely it is worth considering what could be possible?
It is easy for me to get carried away thinking about the ways in which AI tools and technologies could support learners with SEN - from a person with physical disabilities, for whom AI-enhanced interfaces might provide vital human interaction with and through technology, to adaptive systems and intelligent analytics that enable us as educators to support neurodiverse learners with complex needs. To be clear, I am not suggesting here that every learner has an AI tutor to meet their individual needs, but rather a combination of AI and human support, where the human is vastly empowered by the AI to support each learner. We know that technically such systems can be built, but of course this takes time and money.
However, perhaps a more fruitful and pragmatic approach to using AI to tackle our SEND challenge would be to consider the key ‘pinch points’ in the current system, and where, at each juncture, a purpose-driven approach to AI might offer help.
AI could help at each of these junctures: identifying children and young people with disabilities and SEN; undertaking formal assessments for a child believed to have SEN; and collating the evidence and information needed to create and issue an Education, Health and Care Plan (EHCP) based on that assessment. I believe that AI has a key role to play in tackling this substantial challenge. If anyone reading this agrees and would like to consider with me what might be done, do please get in touch.
There is of course a clear overlap within the SEND challenge I have just discussed between the health and education systems, and I am often struck by what we might learn from the similarities and differences between the two when it comes to technology and AI. For example, just as with education, the NHS faces significant technological hurdles, with only 20% of NHS organisations considered "digitally mature". Doctors in England lose an estimated 13.5 million working hours annually due to inadequate IT systems. There are clear infrastructure, staff training and professional development challenges in both health and education. Yet the amazing AI breakthroughs within medicine and health are not yet appearing in education. To my mind, if some of the substantive breakthroughs that we see in medicine could be paralleled in education - and in particular for the SEND crisis - that would be a huge win for AI for society. In medicine, for example, we see everything from generative AI tools and chatbots that are cutting the time doctors and nurses spend on paperwork, to AI that is "almost twice as accurate as a biopsy at judging the aggressiveness of some cancers", to the promise of accelerated drug discovery, more personalised medicine and better patient engagement. Where are the equivalent educational examples and wins?
I appreciate of course that, from a commercial point of view, there is more profit to be made from these medical applications. But isn't it time we put technology companies under pressure to deliver something we really need: substantial progress with AI technologies to support our SEN challenges? We are, after all, enabling them to progress with their innovation agendas and revenue generation, so it seems a small price to pay in return. It's nice that Google.org announced a substantial £10 million investment in supporting 200,000 educators, but come on, guys - how about something more sizeable to help us with our SEND challenge?
Here is a brief synopsis of the rest of the news for you to enjoy.
I was interested to read that finance professor Aswath Damodaran has outlined some key strategies for remaining competitive in an AI-dominated job market - strategies that resonate with some of what I have previously written about human intelligence:
- Becoming a generalist who can see the big picture across disciplines
- Developing reasoning skills instead of relying on quick online searches
- Cultivating creativity and unique connections
The UK graduate employment landscape underwent something of a downturn, with the digital/IT sector seeing a 35% decrease in graduate hiring, while London experienced a 22% reduction in graduate roles. But across the wider workforce, healthcare is experiencing significant AI-driven transformation across multiple areas. For example, in diagnostics, Imperial College Health Partners reports AI systems achieving consistent accuracy in interpreting MRI, CAT scans and X-rays, though it emphasises the importance of maintaining human oversight.

The legal sector also saw positive progress, with research identifying 20 key initiatives where AI is revolutionising legal processes. AI systems now summarise 300-page contracts in 45 seconds, achieving time savings of up to 5 hours per week per user. And major recruitment firms have made a significant shift in their stance on AI use in job applications, now actively encouraging candidates to use AI tools for CV writing and cover letters; LinkedIn's AI features show 90% user satisfaction.

The impact of AI on coding jobs continues apace as AI coding assistants become more sophisticated, with tools like GitHub Copilot and Amazon CodeWhisperer showing significant productivity gains. But while automation excels at routine tasks and boilerplate code, human programmers remain essential for conceptual tasks, collaboration, and translating business needs into software design. All of this must also be tempered by a recent critical examination of GPT o1, which revealed ongoing challenges in AI development, with the system struggling with basic coding tasks and often failing to recognise its own mistakes. It's not all smooth sAIling!
There has been a huge amount in the news about environmental concerns, as AI's energy consumption threatens climate targets: growing AI infrastructure requires massive amounts of electricity, potentially overwhelming power sources. The rapid growth of AI poses a critical dilemma in balancing technological advancement with environmental responsibility. And we see all the big tech companies investing heavily in nuclear power projects, demonstrating the high stakes of AI competition.
When it comes to ethics, connected vehicles have emerged as a major cybersecurity concern, and digital privacy worries have intensified as AI systems use personal data and "AI slop" floods social media. While some platforms offer limited opt-out options, there's no comprehensive way to prevent AI from using personal information online. This latter point should also be considered in the light of Oxford University research that found concerning correlations between social media use and mental health, revealing that 60% of 16- to 18-year-olds spend between two and four hours daily on social media.
And on the regulatory front, a groundbreaking lawsuit in Florida against Character.ai and Google highlighted AI chatbot risks for vulnerable users, following the death of a 14-year-old who allegedly became obsessed with an AI chatbot. Plus, the US Federal Trade Commission launched "Operation AI Comply", targeting businesses making misleading claims about AI capabilities.

The UK government proposed an "opt-out" model for AI content scraping, following the EU's approach in its AI Act, despite opposition from publishers and creative industries, and we saw the UK's first prosecution for AI-generated abuse imagery end in an 18-year prison sentence, in a case prosecuted under existing child abuse legislation rather than the Online Safety Act. The UK government also launched two significant initiatives aimed at managing AI risks: first, a promise to introduce proper laws next year to keep the most powerful AI systems in check; second, a new platform to help British businesses navigate the challenges of AI implementation - think of it as a sort of safety toolkit.
The AI revolution continues apace, with innovation showing no sign of slowing, and recent developments offer intriguing possibilities for education. Meta's MovieGen can now create realistic videos from text prompts, whilst advances in text processing allow for more sophisticated analysis, which could, for example, include student writing and educational materials. The latest models can handle longer texts - up to 8,192 tokens - whilst maintaining performance even with reduced processing requirements, making them more practical for school IT systems.
Recent improvements to AI assistants like Claude have brought enhanced capabilities, including the ability to navigate computer systems - a development that could streamline administrative tasks for teachers. Meanwhile, new edge computing models, designed to run locally on devices, offer faster response times and better privacy protection at relatively low cost (around 4p per million tokens for basic models). This could be particularly valuable for schools concerned about data security or those in areas with unreliable internet connectivity.
The ability of these systems to interact with external tools - from search engines to educational databases - represents a significant step forward. They could help teachers create personalised lesson plans, assist with assessment design, and even support classroom management. However, their potential use in academic testing raises important questions about assessment integrity and the nature of learning itself. There are also some thorny issues that need to be navigated when it comes to handing over authority to the AI to work our technology for us. Whilst these developments are promising, the technology still has limitations, particularly in understanding complex educational contexts and making nuanced pedagogical decisions. The question isn't just what AI can do, but how we can harness it to improve educational outcomes whilst maintaining the essential human elements of teaching and learning.
And the money keeps flowing as Big Tech spends more and more and more - who will be the winner? This month all the big technology companies showed their AI investment credentials and revenue potential: Microsoft's cloud revenue rose with the AI boom, Meta reported a 19% revenue increase, AWS showed 19% growth, Apple launched further "Apple Intelligence" AI features, Palantir reported strong earnings driven by AI demand, and Google/Alphabet saw strong earnings boosted by cloud computing gains, with profits up 34% to £26.3bn. The AI train will keep rolling as these competitors play a potential "winner takes all" game plan across different elements of the AI landscape. It all feels unstoppable, and somehow as educators we have to navigate the implications of this fast-paced technology evolution, because our students will surely be enticed and encouraged to engage with the increasing consumerisation of AI products and services.
Finally, I want to reflect on the ‘Beyond the Hype’ report (see below), which we recently released and which reported our findings from data collected from schools and colleges about AI adoption. There is an appetite for AI amongst many educators, but a lack of policies and professional development hampers practical applications of AI within education. This finding was also reflected in some of the news items of the past weeks: a comprehensive survey across 33 UK higher education institutions revealed that 24% of teaching staff are incorporating AI tools, though only 13% received institutional support and 18% received proper training.
Universities in the UK also report facing an unprecedented surge in AI-related academic misconduct, with Russell Group institutions reporting varying levels of incidents and struggling with centralised record-keeping. The findings expose significant disparities in how institutions track and handle these incidents, with some universities maintaining no centralised records as cases are managed at departmental levels. This fragmented approach highlights a critical gap in the sector's response to AI-enabled cheating.
If you want more detail about all of this read on...
- Professor Rose Luckin, November 2024
The 'Beyond the Hype' Report
The reality of AI in education across England
This report presents the findings from a comprehensive self-assessment completed by 256 institutions in England between February and May 2024. It follows on from 'The Shape of the Future' report, published in September, and reveals a sector poised for transformation yet struggling with the practicalities of harnessing AI's potential.
AI in Education
Following the UK Department for Education's £4 million investment in AI datasets, research revealed schools as the most trusted entities for AI decision-making in education. Teachers identified marking, data entry/analysis, and lesson planning as key areas for AI assistance while emphasising the importance of human relationships.
Google.org announced a substantial £10 million investment, through its GenerationAI initiative. The programme aims to reach 200,000 educators, addressing a critical gap where 77% of educators feel unprepared to teach AI skills despite 72% of K-12 students wanting AI guidance. This is part of its larger £75 million AI Opportunity Fund.
October 2024
The traditional education landscape is seeing significant change, with home-schooled children in England rising from 80,900 in 2022 to 92,000 in 2023. The global education technology market, valued at $142 billion in 2023, is expected to grow at 13.6% annually until 2030. Online schools like Minerva's Virtual Academy have seen dramatic growth, with revenues increasing from £500,000 in 2022 to £4 million in 2024.
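As a quick sanity check on that projection, a couple of lines of compound-growth arithmetic make the implied market size concrete (assuming seven full years of compounding from the 2023 figure, which the source does not spell out):

```python
# Quick sanity check of the projection above: a $142bn market growing at
# 13.6% a year. Assumes seven full compounding years (2023 -> 2030).
value = 142.0          # $bn in 2023
for _ in range(7):     # 2024 through 2030
    value *= 1.136
print(f"implied 2030 market size: ${value:.0f}bn")   # ~ $347bn
```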
Book Creator revealed its measured approach to AI integration, focusing on the "6Cs": Creativity, Collaboration, Critical Thinking, Communication, Citizenship and Character, while ensuring safe content creation.
A comprehensive survey across 33 UK higher education institutions revealed 24% of teaching staff are incorporating AI tools, though only 13% received institutional support and 18% received proper training.
A historic moment occurred when AI breakthroughs received Nobel recognition. Sir Demis Hassabis and John Jumper of Google DeepMind received the Chemistry Prize for AlphaFold, while Geoffrey Hinton and John Hopfield were awarded the Physics Prize for neural networks research.
Parents sue school in Massachusetts after son punished for using AI on paper
The parents of a Massachusetts teenager are suing his high school after they said he was accused of cheating for using an artificial intelligence tool on an assignment, highlighting the challenges schools face in developing clear AI policies.
November 2024
Universities faced an unprecedented surge in AI-related academic misconduct, with Russell Group institutions reporting varying levels of incidents and struggling with centralised record-keeping. The findings from Freedom of Information requests expose significant disparities in how institutions track and handle these incidents, with some universities maintaining no centralised records as cases are managed at departmental levels. This fragmented approach highlights a critical gap in the sector's response to AI-enabled cheating. The situation has prompted calls for greater involvement from governments, regulators and technology companies, as universities struggle to address this challenge independently. The varying institutional responses underscore the urgent need for standardised approaches to AI detection and prevention in academic settings.
Business schools began employing AI tools like ChatSDG and SDG Mapper to evaluate research impact against UN Sustainable Development Goals, representing a fundamental shift in academic evaluation.
AI Employment and the Workforce
September 2024
Finance professor Aswath Damodaran outlined key strategies for remaining competitive in an AI-dominated job market:
- Becoming a generalist who can see the big picture across disciplines
- Focusing on business storytelling and soft data in valuations
- Developing reasoning skills instead of relying on quick online searches
- Cultivating creativity and unique connections
The surge in AI-related power demand created new job opportunities, particularly in Asia's power grid infrastructure. Tokyo Electric Power Company's planned $3 billion investment in transmission infrastructure and Hitachi's growing power grids business illustrate how AI's energy demands are reshaping the job market.
October 2024
The UK graduate employment landscape underwent dramatic transformation
Applications per vacancy surged by 59% compared to 2023, reaching 140 applications per position. The digital/IT sector saw a 35% decrease in graduate hiring, while London experienced a 22% reduction in graduate roles.
The NHS faces significant technological hurdles, with only 20% of NHS organisations considered "digitally mature". Doctors in England lose an estimated 13.5 million working hours annually due to inadequate IT systems. While 90% of NHS organisations have electronic patient record-keeping, only 72% of social care providers have digitised records, highlighting a critical need for technological modernisation in healthcare education and practice.
November 2024
Healthcare is experiencing significant AI-driven transformation across multiple areas. In diagnostics, Imperial College Health Partners reports AI systems are achieving consistent accuracy in interpreting MRI, CAT scans, and X-rays, though emphasising the importance of maintaining human oversight. Harvard Medical School researchers are developing generalist medical AI models to handle comprehensive image interpretation tasks. In treatment optimisation, Barcelona's Vall d'Hebron Hospital has successfully piloted TRUSTroke, an AI system for personalising stroke treatment. The technology is also improving patient communication through AI-powered transcription and follow-up care management, with innovations like the Lola chatbot helping to reduce unnecessary hospital visits.
The legal sector saw significant transformation, with research identifying 20 key initiatives where AI is revolutionising legal processes. AI systems now summarise 300-page contracts in 45 seconds, achieving time savings of up to 5 hours per week per user.
Recruiters Embrace AI for Job Applications
Major recruitment firms have made a significant shift in their stance on AI use in job applications, now actively encouraging candidates to utilise AI tools for CV writing and cover letters. The Stepstone Group reports their AI tools have been used 2.6 million times in the past year, whilst LinkedIn's AI features show 90% user satisfaction. Research by Randstad reveals varying adoption rates across generations, with 57% of Generation Z workers using AI for applications compared to just 13% of baby boomers. However, recruiters emphasise the importance of personalising AI-generated content and strictly avoiding its use in assessments. This development marks a notable evolution from previous concerns about AI applications flooding recruitment systems.
The UK government began considering streamlined visa processes for AI experts as part of its "AI Opportunities Action Plan," including special "computing zones" for data centres and recognising the critical nature of AI infrastructure.
The impact of AI on coding jobs continued apace:
- AI coding assistants becoming more sophisticated
- Tools like GitHub Copilot and Amazon CodeWhisperer showing significant productivity gains
- New systems like OpenAI's o1 and Anthropic's desktop control API emerging
- Research suggests AI will complement rather than replace human programmers
- Focus remains on routine tasks rather than complex problem-solving
Rather than threatening to replace programmers entirely, AI tools are transforming how developers work. While automation excels at routine tasks and boilerplate code, human programmers remain essential for conceptual tasks, collaboration, and translating business needs into software design. Developers who master both coding fundamentals and AI assistance are likely to thrive in this evolving landscape.
AI Ethics and Societal Impact
September 2024
Environmental concerns emerged as AI's energy consumption threatened US climate change targets
BloombergNEF predicted US emissions reductions might fall short of Paris Agreement targets due to increasing power demand from AI systems.
- Growing AI infrastructure requires massive electricity, potentially overwhelming power sources
- Goldman Sachs predicts 160% increase in data centre electricity needs from 2023-2030
- AI companies lobbying for new energy infrastructure
- Some power plants reverting to fossil fuels to meet demand
- Positive note: AI helping manage energy distribution and carbon capture
The rapid growth of AI poses a critical dilemma for balancing technological advancement with environmental responsibility. While AI's energy demands threaten to overwhelm current infrastructure, the technology itself offers solutions for better energy management and carbon capture. Centralised data centres, despite their high consumption, prove more energy-efficient than distributed computing alternatives.
October 2024
Digital privacy concerns intensified regarding AI systems using personal data and "AI slop" flooding social media. While some platforms offer limited opt-out options, there's no comprehensive way to prevent AI from using personal information online.
Major AI companies took divergent approaches to chatbot personality development. OpenAI aimed for strict objectivity, while Anthropic acknowledged the impossibility of complete AI objectivity, focusing instead on honesty about beliefs.
A groundbreaking lawsuit in Florida against Character.ai and Google highlighted AI chatbot risks for vulnerable users, following the death of a 14-year-old who allegedly became obsessed with an AI chatbot.
Oxford University researchers found concerning correlations between social media use and mental health, revealing that 60% of 16- to 18-year-olds spend between two and four hours daily on social media.
- Major tech companies (Amazon, Google, Microsoft) are investing heavily in nuclear power projects
- Amazon led £500m investment in X-energy for small modular reactors
- Google partnered with Kairos Power for reactor development
- Microsoft signed a 20-year agreement with Constellation Energy
- Driven by massive electricity demands from AI infrastructure
- Highlights ongoing challenges with nuclear waste disposal
The tech giants' direct investment in nuclear power plants demonstrates the high stakes of AI competition. Data centres processing AI workloads are projected to consume more than 1,000 terawatt-hours of electricity by 2026, more than double the 2022 consumption. Nuclear power could provide abundant, carbon-free energy for decades, though significant challenges remain regarding safe disposal of radioactive waste.
Google's Nuclear Power Initiative
In a significant move toward sustainable AI infrastructure, Google has ordered 6-7 small modular nuclear reactors (SMRs) from Kairos Power, totalling 500 megawatts capacity. This marks the first instance of a tech company commissioning new nuclear power plants specifically for its data centres, with the first commercial reactor expected online by 2030. The initiative demonstrates the growing intersection between AI development and sustainable energy solutions.
November 2024
Meta's plans for a nuclear-powered AI data centre were halted due to environmental concerns, while the oil industry embraced AI for sustainability, with Abu Dhabi National Oil Company allocating $23bn to low-carbon AI technology.
AI Energy Demands Reshape Energy Sector
A landmark meeting in Abu Dhabi has highlighted how AI's enormous energy requirements are catalysing significant changes in the energy sector. Chief executives from Shell, BP, and TotalEnergies met with Microsoft and other tech leaders to address AI's growing energy demands. The gathering revealed a notable shift in thinking about renewable energy investment, particularly as major tech companies commit to powering their AI data centres with green energy. Sultan al-Jaber, CEO of Abu Dhabi National Oil Company (Adnoc), noted that ChatGPT's success 18 months ago marked a turning point in understanding this opportunity. This development suggests a potential reversal of recent trends that saw major oil companies pulling back from renewables, driven by the substantial power requirements of AI infrastructure.
AI and Cybersecurity
September-October 2024
Connected vehicles emerged as a major cybersecurity concern, with 97% of electric vehicles now internet-connected and growing concerns about potential cyber attacks and data breaches.
The Internet Archive's "Wayback Machine" experienced a significant security breach affecting 31 million users, exposing email addresses and encrypted passwords. The incident highlighted the increasing vulnerability of major digital repositories in an age of sophisticated, AI-enabled cyber threats.
AI Development and Industry
September-October 2024
Google DeepMind and BioNTech developed sophisticated AI laboratory assistants, including BioNTech's 'Laila' built on Meta's Llama 3.1 platform. The system can perform routine biological tasks, monitor experiments, and detect mechanical failures.
Meta introduced MovieGen, capable of generating realistic videos from text instructions. The system can create videos up to 16 seconds long and includes features for editing and sound matching, with plans to offer these tools to Hollywood filmmakers and content creators in 2025.
A critical examination of GPT o1 revealed ongoing challenges in AI development, with the system struggling with basic coding tasks and often failing to recognise its own mistakes.
Jina AI released advanced text processing technology with their jina-embeddings-v3 system, capable of processing 8,192 input tokens with five specialist adapters.
Jina AI's Text Embeddings
- Released jina-embeddings-v3 system
- Features 559 million parameters and processes 8,192 input tokens
- Includes five LoRA adapters for different tasks
- Outperforms competitors like OpenAI and Cohere on English-language tasks
- Offers efficient performance with reduced embedding sizes
This development extends the use of LoRA adapters to embedding tasks, providing developers with new options for generating high-quality embeddings. The system's ability to maintain good performance with reduced embedding sizes offers practical benefits for computationally constrained applications or data-intensive tasks.
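To make the "reduced embedding sizes" point concrete: models trained in this Matryoshka style produce vectors that can simply be truncated to their leading components and re-normalised, trading a little accuracy for a much smaller search index. Below is a minimal sketch of that trick; the `embed` function is a hypothetical stand-in for a real embedding model such as jina-embeddings-v3, not Jina's actual API:

```python
import numpy as np

def embed(texts: list[str]) -> np.ndarray:
    """Hypothetical stand-in for an embedding model such as
    jina-embeddings-v3 (returns one 1024-float vector per text)."""
    rng = np.random.default_rng(0)   # placeholder values only
    return rng.normal(size=(len(texts), 1024))

def shorten(vectors: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` components and re-normalise, so cosine
    similarity still behaves sensibly on the smaller vectors."""
    cut = vectors[:, :dim]
    return cut / np.linalg.norm(cut, axis=1, keepdims=True)

docs = ["Photosynthesis converts light into chemical energy.",
        "The Treaty of Versailles was signed in 1919."]
full = embed(docs)            # shape (2, 1024)
small = shorten(full, 256)    # shape (2, 256): a 4x smaller index
print(full.shape, small.shape)
```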
October 2024
The Open Source Initiative has criticised Meta for misusing the "open-source" term regarding its Llama AI models, which have been downloaded over 400 million times. The controversy centres on Meta's limited transparency and competitive restrictions, leading to the adoption of the term "open weight" by some companies. The OSI plans to publish an official definition of open-source AI, while companies like Google and Microsoft have already adjusted their terminology.
A comprehensive annual report on the AI industry, written from an investor's perspective, was released. The report:
- Details AI industry developments across models, research, finance, and regulation
- Notes competition between Claude, Gemini, Llama, and GPT-4
- Highlights £9 trillion total industry value
- Discusses shift from acquisitions to licensing arrangements
- Notes change in safety concerns from abstract to practical issues
- Makes predictions for 2025, including open-source models potentially outperforming proprietary ones
The report provides crucial insights into AI's rapid evolution from an investor's perspective, examining both technical advances and broader industry trends. Its comprehensive analysis of research findings, business deals, and political developments offers a valuable snapshot of AI's current state and potential future direction.
Late October 2024
Anthropic announced significant upgrades with enhanced Claude 3.5 Sonnet and new Claude 3.5 Haiku models, including groundbreaking computer navigation capabilities.
Mistral AI launched edge computing models designed specifically for local device processing, with competitive pricing at £0.04 per million tokens for the 3B model.
- Launched Ministral 3B and 8B models for edge devices
- Models outperform similar-sized competitors on various benchmarks
- Ministral 8B-Instruct is free for non-commercial use
- Pricing: Ministral 3B costs £0.04 per million tokens, 8B costs £0.10
- Models can process 131,072 tokens of input context
- Particularly suitable for smartphones and laptops
Edge devices are crucial for applications requiring rapid response, high privacy and security, and/or offline operation. This is especially important for autonomous and smart home devices requiring uninterrupted, swift processing. The smaller models enable developers and hobbyists to run advanced AI on consumer-grade hardware, reducing costs and expanding access to the technology.
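At the per-token prices quoted above, the economics for a school are striking. Here is a back-of-the-envelope costing sketch; the usage figures are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope costing at the prices quoted above. The usage
# figures are illustrative assumptions, not measurements.
price_per_m_tokens = {"Ministral 3B": 0.04, "Ministral 8B": 0.10}  # £ per 1M tokens

students = 30                        # one class (assumption)
tokens_per_student_per_day = 2_000   # prompts plus responses (assumption)
school_days = 190                    # a typical English school year

total_tokens = students * tokens_per_student_per_day * school_days
for model, price in price_per_m_tokens.items():
    print(f"{model}: £{total_tokens / 1_000_000 * price:.2f} per class per year")
# Ministral 3B: £0.46 per class per year
```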
- Researchers developed "Pyramidal Flow Matching" to reduce processing costs
- Open-source text-to-video model available for non-commercial use
- Free for commercial users earning under £1m annually
- Uses downsampled versions of embeddings to save processing power
- Outperforms other open models but slightly behind proprietary solutions
- Significantly reduced training time compared to competitors
Video generation is an emerging field that requires enormous processing power. This innovation in reducing processing requirements could help scale the technology to more users. The film industry has shown interest in using this technology for pre- and post-production, making compute-efficient innovations particularly valuable for practical implementation.
Personal Rapid Transit Systems
A revolutionary transport solution has been proposed that utilises autonomous electric pods running on dedicated narrow pathways. The system offers on-demand, point-to-point travel whilst requiring minimal infrastructure changes and being notably more cost-effective than traditional public transport. With energy consumption at merely 50 watt-hours per passenger mile—markedly lower than high-speed trains—the pods can operate along existing road margins. This innovation comes amid projections of increasing urbanisation, with companies such as Glydways already securing contracts in California to address current transport inefficiencies.
November 2024
Leading AI companies are fundamentally redesigning their testing and evaluation methods as current benchmarks prove insufficient for advancing AI capabilities. Traditional tests like Hellaswag and MMLU are becoming obsolete as modern AI systems regularly achieve over 90% accuracy. New benchmarks like SWE-bench Verified are emerging to evaluate autonomous systems, with Anthropic's Claude 3.5 Sonnet achieving 49% success rate and GPT-o1 preview reaching 41.4%. Companies are increasingly developing internal testing frameworks, raising concerns about the ability to make meaningful comparisons between different AI systems. The industry is particularly focused on evaluating reasoning capabilities, with new initiatives like 'Humanity's Last Exam' and FrontierMath emerging to address these challenges.
Baidu Enters Smart Glasses Market
Baidu has announced its entry into the AI-integrated hardware market with smart glasses powered by their Ernie language model. The glasses, scheduled for release in 2024, will offer features including calorie tracking, environmental queries, and video recording capabilities. This development signals intensifying competition in AI hardware, with ByteDance recently launching AI-enabled earbuds and Meta partnering with Ray-Ban outside China. The move highlights China's potential to leverage its electronics manufacturing capabilities in the AI hardware sector, despite lagging behind in large language model development. Baidu has also announced improvements to their image generation technology using retrieval-augmented generation (RAG) to reduce hallucinations.
More on smart glasses: glasses capable of revealing personal details through facial recognition are raising significant privacy concerns, highlighting the need for enhanced personal data protection strategies.
- Concerns about AI models having prior exposure to benchmark tests
- Evidence of benchmark contamination in training data
- Models showing higher performance on known benchmarks versus new similar problems
- Need for new evaluation methods and protected benchmarks
- Suggestions include canary strings and continually updated test sets
The contamination of benchmark tests with training data poses a serious challenge to measuring AI progress. Like students having advance access to exam questions, this situation makes it difficult to accurately assess whether improvements in model performance represent genuine advances in capability or merely memorisation of test answers.
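For readers wondering what a "canary string" is: benchmark authors embed a unique marker in their test files, so anyone curating training data can detect and drop leaked copies (and a model that can reproduce the string reveals contamination). A minimal sketch of the filtering side, with an invented canary for illustration:

```python
# Data-pipeline side of the "canary string" idea: benchmark files carry a
# unique marker, and training-data curation drops any document containing
# one. The canary below is invented for illustration.
CANARIES = {"BENCHMARK-CANARY-7f3a9b2e"}

def is_contaminated(document: str) -> bool:
    return any(canary in document for canary in CANARIES)

corpus = [
    "Ordinary web text about photosynthesis.",
    "Leaked test set BENCHMARK-CANARY-7f3a9b2e ... answers: B, D, A",
]
clean = [doc for doc in corpus if not is_contaminated(doc)]
print(f"kept {len(clean)} of {len(corpus)} documents")   # kept 1 of 2
```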
- Risks of training models on synthetic data leading to degraded performance
- Research warning of "model collapse" from recursive training
- Benefits of synthetic data include reduced legal risks and privacy concerns
- Current successful models (Llama 3.1, Phi 3, Claude 3) using synthetic data effectively
- Solutions include maintaining balance between real and synthetic data in training
Whilst synthetic data offers advantages in terms of legal compliance and privacy protection, its use must be carefully balanced to avoid performance degradation. Research shows that maintaining even a small percentage (10%) of real-world data can significantly mitigate the risks of model collapse, suggesting a viable path forward for sustainable AI development.
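A sketch of what that balance might look like inside a data pipeline; the 10% floor comes from the research cited above, while the pool sizes and sampling scheme are illustrative assumptions:

```python
import random

def build_training_mix(real_docs, synthetic_docs, target_size,
                       real_fraction=0.10, seed=42):
    """Sample a training set that keeps a floor of real-world data.
    The 10% default mirrors the finding reported above; the right ratio
    for any particular model remains an empirical question."""
    rng = random.Random(seed)
    n_real = min(int(target_size * real_fraction), len(real_docs))
    n_synth = min(target_size - n_real, len(synthetic_docs))
    mix = rng.sample(real_docs, n_real) + rng.sample(synthetic_docs, n_synth)
    rng.shuffle(mix)
    return mix

real = [f"real-{i}" for i in range(1_000)]
synthetic = [f"synth-{i}" for i in range(20_000)]
batch = build_training_mix(real, synthetic, target_size=5_000)
print(sum(doc.startswith("real-") for doc in batch), "real documents")  # 500
```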
Claude's Computer Control Capabilities
- Anthropic launched API commands allowing Claude Sonnet 3.5 to operate desktop applications
- The model can use keyboards, mice, and applications through screenshot analysis
- Achieved 15% success rate on OSWorld benchmark tasks, outperforming competitors
- Currently experimental with recommended use in sandboxed environments
- Restricted from creating online accounts or posting to social media
Large multimodal models' expanding capabilities to use external tools like search engines, browsers, and databases represents a significant advancement. The ability to control computer interfaces could enable automation of a broader range of computational tasks, from creating lesson plans to taking academic tests, though this latter application raises some concerns. This development signals an important expansion in AI's practical applications, though the technology still faces significant challenges in areas like screenshot interpretation and action selection.
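Conceptually, these computer-use systems run an observe-think-act loop: screenshot in, one action out, repeat. The sketch below illustrates that loop with hypothetical helper stubs; it is not Anthropic's actual API (which works through tool definitions in the Messages API) and, like the real thing, should only ever run sandboxed:

```python
from dataclasses import dataclass

# Schematic observe-think-act loop of the kind behind Claude's computer
# use. All helpers below are hypothetical stand-ins, not Anthropic's API.

@dataclass
class Action:
    kind: str            # "click", "type", "done", ...
    detail: str = ""

FORBIDDEN = {"create_account", "post_social_media"}   # policy restrictions

def take_screenshot() -> bytes:
    return b"<png bytes>"                 # stub: grab the current screen

def ask_model(goal: str, screenshot: bytes, history: list[Action]) -> Action:
    # stub: send goal + screenshot + history to the model, parse its reply
    return Action("done") if history else Action("click", "Open spreadsheet")

def perform(action: Action) -> None:
    print(f"performing {action.kind}: {action.detail}")   # stub executor

def run_desktop_task(goal: str, max_steps: int = 20) -> bool:
    history: list[Action] = []
    for _ in range(max_steps):
        action = ask_model(goal, take_screenshot(), history)
        if action.kind == "done":
            return True                   # model believes the goal is met
        if action.kind in FORBIDDEN:
            raise PermissionError(f"blocked: {action.kind}")
        perform(action)
        history.append(action)
    return False                          # step budget exhausted

print(run_desktop_task("Enter this week's attendance into the register"))
```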
- Tencent released Hunyuan-Large, an open-source model outperforming competitors
- Uses 52 billion parameters (of total 389 billion) for any given input
- Achieves better results than Llama 3.1 405B on various benchmarks
- Free for developers outside EU with fewer than 100 million monthly users
Hunyuan-Large demonstrates significant advancement in mixture-of-experts architecture, achieving the performance of a 405 billion parameter model whilst computing only 52 billion parameters. This represents a major efficiency improvement, making high-performance AI more accessible through reduced processing requirements and free availability for many purposes.
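For readers curious how a model can "use" only 52 of 389 billion parameters, here is a toy mixture-of-experts routing sketch. The sizes and weights are illustrative; this shows the general technique, not Tencent's implementation:

```python
import numpy as np

# Toy mixture-of-experts layer: a router scores every expert, but only
# the top-k experts are actually evaluated for a given token. That is why
# an MoE model activates only a fraction of its parameters per input.
rng = np.random.default_rng(0)
d, n_experts, k = 16, 8, 2

experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]  # expert FFNs
router = rng.normal(size=(d, n_experts))                       # routing weights

def moe_layer(x: np.ndarray) -> np.ndarray:
    scores = x @ router                          # one score per expert
    top = np.argsort(scores)[-k:]                # indices of the k best experts
    gates = np.exp(scores[top]) / np.exp(scores[top]).sum()   # softmax over top-k
    # Only k of the n_experts weight matrices are multiplied for this token:
    return sum(g * (x @ experts[i]) for g, i in zip(gates, top))

token = rng.normal(size=d)
print(moe_layer(token).shape)   # (16,) - computed with 2 of 8 experts
```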
- Perplexity launched Election Information Hub for US elections
- Combined AI analysis with real-time data from Associated Press and Democracy Works
- Offered personalised search by location, candidate, or issue
- Some initial accuracy issues were later fixed
- Other providers (Google, Microsoft, OpenAI) took more cautious approaches
The evolution of chatbots to provide reliable information for critical decisions like elections marks an important development. The combination of web search capabilities and retrieval-augmented generation enables the creation of decision support systems that balance personalisation with accuracy. Whilst not perfect, properly designed chatbots with high-quality information sources can enhance users' decision-making capabilities and democratic participation.
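The pattern underneath such hubs is retrieval-augmented generation: answer only from retrieved, trusted sources rather than from the model's parametric memory. A minimal sketch, where `embed` and `llm` are hypothetical stand-ins for real embedding and chat APIs:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % 2**32)  # placeholder only
    v = rng.normal(size=64)
    return v / np.linalg.norm(v)

def llm(prompt: str) -> str:
    return f"[model answer grounded in a prompt of {len(prompt)} chars]"  # placeholder

trusted_sources = [
    "Polling stations open at 7am and close at 8pm.",
    "Voters must register at least 21 days before election day.",
]
index = np.stack([embed(doc) for doc in trusted_sources])

def answer(question: str, top_k: int = 1) -> str:
    scores = index @ embed(question)        # cosine similarity to each source
    context = "\n".join(trusted_sources[i] for i in np.argsort(scores)[-top_k:])
    return llm(f"Answer using ONLY this context:\n{context}\n\nQ: {question}")

print(answer("When do polling stations close?"))
```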
- OpenHands (previously OpenDevin) released as free, open-source package
- Provides various agents for coding and other tasks
- Includes CodeAct agent for code generation and browsing agent for web interaction
- Performance comparable to existing tools, with CodeAct/Claude 3.5 Sonnet solving 26% of GitHub issues
Agentic workflows are rapidly expanding large language models' capabilities. As an open-source system, OpenHands provides developers with an extensible toolkit for designing agentic systems. Whilst primarily focused on coding, its flexibility accommodates various information-handling tasks, and its customisability through prompt rewriting makes it accessible to non-programmers.
The semiconductor industry, meanwhile, faced significant challenges of its own across multiple fronts.
AI Regulation and Legal Issues
September 2024
California Governor Gavin Newsom vetoed a proposed AI safety bill that would have required safety testing and mandatory "kill switches" for advanced AI models, citing concerns about stifling innovation.
October 2024
The US Federal Trade Commission launched "Operation AI Comply", targeting businesses making misleading claims about AI capabilities. The FTC's enforcement actions serve as a warning to businesses exploiting AI hype whilst potentially strengthening consumer trust in legitimate AI services. By targeting companies that deceive consumers rather than AI technology itself, these actions may help create a more trustworthy market for AI products and services.

Meanwhile, the US Department of Justice considered breaking up Google to address its search engine monopoly, while some argued that targeting the company's ability to entrench its power would be more effective than dismantling it.
The UK government proposed an "opt-out" model for AI content scraping, following the EU's approach in its AI Act, despite opposition from publishers and creative industries.
US Government AI Security Framework
President Biden introduced comprehensive guidelines through a national security memorandum to govern AI usage within the Pentagon and intelligence communities. Whilst not legally binding, the framework emphasises alignment with democratic values, non-discrimination, human oversight and accountability. The initiative includes establishing an AI Safety Institute in Washington for pre-release tool inspection, alongside monitoring competitors' AI developments. This forms part of a broader US strategy balancing competition with China against AI risk management, building upon existing measures such as export controls and private sector safety reporting requirements.
UK Landmark Case: AI-Generated Abuse Imagery
The UK's first prosecution for AI-generated abuse imagery has concluded with an 18-year prison sentence, requiring two-thirds completion before parole consideration. The case, prosecuted under existing child abuse legislation rather than the Online Safety Act, involved misuse of Daz 3D commercial software. This landmark case establishes legal precedent for treating computer-generated images as "indecent photographs", whilst highlighting growing challenges for law enforcement in distinguishing AI-generated content. The Crown Prosecution Service expects similar cases to rise, emphasising the need for modernised legal frameworks.
November 2024
UK Government's Dual AI Safety Initiatives
UK Science and Technology Secretary Peter Kyle unveiled a two-pronged approach to AI.
- First, a promise to introduce proper laws next year to keep the most powerful AI systems in check. This is particularly relevant for education, where safeguarding is paramount, and the UK's focus on safety and AI Assurance will be crucial as businesses develop and apply AI in products for teaching and learning.
- Second, the government are rolling out a new platform to help British businesses navigate the challenges of AI implementation. Think of it as a sort of safety toolkit - helping organisations figure out whether their AI systems are up to scratch and free from troublesome biases. It's particularly focused on helping smaller enterprises who might not have the tech expertise at their disposal.
The UK's approach appears to be trying to chart a middle path between the EU's comprehensive regulation and the US' more market-led approach. What I particularly like about Britain's approach is its focus on becoming a world leader in AI safety testing and verification. Rather than trying to compete head-on with Silicon Valley or China in developing the biggest, flashiest AI systems, we're positioning ourselves as the people who can tell you whether these systems are actually safe to use. It's rather like becoming the Kitemark of the AI world.
However, as pointed out by Dominic Hallas, executive director of The Startup Coalition, many AI start-ups still face huge challenges around how to attract talent. This highlights a critical point: while we focus on safe AI, we must also put in place the training and education initiatives to develop a workforce skilled to leverage these powerful technologies effectively.
If the UK really is going to lead the way and make sure that AI drives the growth agenda so central to the government's mission, we need this dual focus on both safety and skills development. Without both elements working in tandem, we risk missing a substantial part of what needs to be done to realise AI's full potential in a responsible way.
EU AI Act Compliance Framework
- LatticeFlow developed COMPL-AI framework to evaluate AI models' compliance with the EU AI Act
- Evaluates five categories: technical robustness, privacy, transparency, fairness, and social/environmental impact
- GPT-4 Turbo and Claude 3 Opus achieved highest scores (0.89)
- Most models performed well on privacy but struggled with fairness and security
- Framework provides automated path for demonstrating compliance
As AI regulation becomes more prevalent, developers must ensure their models comply with legal requirements before public release or deployment. COMPL-AI represents an initial step towards helping model builders verify legal compliance or identify potential legal risks requiring attention before release. This automated compliance assessment approach could prove valuable in navigating regulatory requirements whilst maintaining development efficiency.
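To illustrate the kind of category-level report an automated compliance framework produces, here is a toy example over the five categories listed above; the scores, threshold and aggregation are invented for illustration and are not COMPL-AI's actual method:

```python
# Toy compliance report over the five categories named above. All numbers
# are invented for illustration; the 0.75 pass mark is an assumption, not
# an official EU AI Act figure.
scores = {
    "technical robustness": 0.86,
    "privacy": 0.92,
    "transparency": 0.81,
    "fairness": 0.68,
    "social/environmental impact": 0.79,
}
THRESHOLD = 0.75   # assumed pass mark

aggregate = sum(scores.values()) / len(scores)
failing = [category for category, s in scores.items() if s < THRESHOLD]
print(f"aggregate score: {aggregate:.2f}")
print("needs attention:", ", ".join(failing) or "none")
```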
AI Market and Investment
September 2024
Google.org announced a £10 million investment in AI education as part of its larger £75 million AI Opportunity Fund.
SoftBank committed $500 million to OpenAI, part of a larger $6.5 billion funding round, demonstrating continuing investor confidence despite internal upheaval.
October 2024
The data centre investment surge has become increasingly attractive due to AI and cloud computing growth. The industry is projected to be worth £416 billion this year, rising to £624 billion by 2029. Global public data centres are expected to reach 5,700 by end of 2024 and 8,400 by 2030, with typical yields of 5-12% and potential returns up to 20%. Notable developments include:
- Google's £1 billion investment in UK data centres
- European data-centre capacity expected to increase by 16% this year
- Data centres now account for 2.5% of global CO₂ emissions
- Cloud computing spending reached £78.2 billion in Q2 2024, up 19% year-on-year
OpenAI secured unprecedented funding of $6.6 billion at a $150 billion valuation, with controversial exclusivity demands for investors to avoid backing rivals.
Malaysia's southern state of Johor emerged as a key data centre hub, attracting major investments from ByteDance (£350 million), Microsoft (£2.2 billion), and Oracle (£6.5 billion).
- Southern Malaysian state Johor becoming major data centre hub
- ByteDance, Microsoft, Oracle investing billions in facilities
- Region attractive due to space, energy resources, and proximity to Singapore
- Expected £3.8 billion in data centre investments this year
- Competing with established Asia-Pacific market
The expansion of data centres in Malaysia is crucial for AI's global development and accessibility. The massive processing power these facilities provide is essential for AI advancement across industries. Additionally, Malaysia's emergence as a data centre hub demonstrates how nations outside traditional tech centres can participate in and benefit from the tech economy, particularly as the cost-effectiveness of processing AI workloads outweighs data transmission costs.
Chinese AI companies achieved remarkable cost reductions, with companies like 01.ai cutting inference costs by over 90%.
Nuclear energy stocks surged due to AI-driven demand, with companies like Oklo Inc. seeing gains of up to 99%.
Raspberry Pi executives sold shares worth over £1 million, following the company's successful London Stock Exchange listing, amid significant growth in licensing business and direct unit sales.
November 2024
Major tech companies reported strong AI-driven growth:
- Microsoft: cloud revenue rose with the AI boom
- Meta: reported a 19% revenue increase
- Amazon: AWS showed 19% growth
- Apple: launched "Apple Intelligence" AI features
- Palantir: reported strong earnings driven by AI demand
- Google/Alphabet: Q3 profits increased 34% to £26.3bn; total revenue up 15% to £88.3bn
Microsoft-OpenAI Partnership Tensions
- The previously close partnership is showing signs of strain
- Both companies seeking greater independence
- Microsoft downloaded OpenAI software without following agreed protocols
- OpenAI negotiated deals with other providers like Oracle
- Contract clause allows OpenAI to exit if it achieves AGI
- OpenAI's valuation has reached £157 billion with new funding
This evolving relationship illustrates the challenges of maintaining close collaboration amid rapidly changing technology. The partnership has significantly influenced AI research and product development, with Microsoft providing crucial resources for OpenAI's scaling whilst benefiting from OpenAI's models to transform its product line. Both companies now require flexibility to innovate and adapt in a competitive landscape.
Further Reading: Find out more from these resources
Resources:
- Watch videos from other talks about AI and Education in our webinar library here
- Watch the AI Readiness webinar series for educators and educational businesses
- Listen to the EdTech Podcast, hosted by Professor Rose Luckin here
- Study our AI readiness Online Course and Primer on Generative AI here
- Take the AI for Educators Adaptive Learning programme here
- Read our byte-sized summary, listen to audiobook chapters, and buy the AI for School Teachers book here
- Read research about AI in education here
About The Skinny
Welcome to "The Skinny on AI for Education" newsletter, your go-to source for the latest insights, trends, and developments at the intersection of artificial intelligence (AI) and education. In today's rapidly evolving world, AI has emerged as a powerful tool with immense potential to revolutionize the field of education. From personalized learning experiences to advanced analytics, AI is reshaping the way we teach, learn, and engage with educational content.
In this newsletter, we aim to bring you a concise and informative overview of the applications, benefits, and challenges of AI in education. Whether you're an educator, administrator, student, or simply curious about the future of education, this newsletter will serve as your trusted companion, decoding the complexities of AI and its impact on learning environments.
Our team of experts will delve into a wide range of topics, including adaptive learning algorithms, virtual tutors, smart classrooms, AI-driven assessment tools, and more. We will explore how AI can empower educators to deliver personalized instruction, identify learning gaps, and provide targeted interventions to support every student's unique needs. Furthermore, we'll discuss the ethical considerations and potential pitfalls associated with integrating AI into educational systems, ensuring that we approach this transformative technology responsibly. We will strive to provide you with actionable insights that can be applied in real-world scenarios, empowering you to navigate the AI landscape with confidence and make informed decisions for the betterment of education.
As AI continues to evolve and reshape our world, it is crucial to stay informed and engaged. By subscribing to "The Skinny on AI for Education," you will become part of a vibrant community of educators, researchers, and enthusiasts dedicated to exploring the potential of AI and driving positive change in education.