THE SKINNY
on AI for Education
Issue 13, February 2025
Welcome to The Skinny on AI for Education newsletter. Discover the latest insights at the intersection of AI and education from Professor Rose Luckin and the EVR Team. From personalised learning to smart classrooms, we decode AI's impact on education. We analyse the news, track developments in AI technology, watch what is happening with regulation and policy and discuss what all of it means for Education. Stay informed, navigate responsibly, and shape the future of learning with The Skinny.
Headlines
AI Reshaping the World and our Workplace
Welcome. In our new feature, What the Research Says (WTRS), I bring educators, tech developers and policy makers actionable insights, along with tested, practical guidance drawn from robust, validated educational research.
Fancy a broader view? Our signature Skinny Scan takes you on a whistle-stop tour of AI developments reshaping education. And, what a moment to take stock. As we venture into 2025, the AI landscape is evolving ever faster and is now starting to challenge conventional wisdom. From AI development to deployment and regulation, there are changes afoot that could fundamentally alter our lives.
What might this mean for education? Our Skinny Scan tackles this question head-on. But before we dive into that...
What the Research Says: Bloom's Two Sigma Problem and Its Implications for AI in Education
In this month’s instalment, I focus on Benjamin Bloom’s landmark 1984 paper, "The 2 Sigma Problem: The Search for Methods of Group Instruction as Effective as One-to-One Tutoring." This work is often cited by educational researchers and technology companies alike - from Google to Khan Academy, it is used to promote the effectiveness of one-to-one tutoring.
(You can find the research paper here)
Bloom’s work is often cited for its dramatic claim that tutoring can improve student achievement by two standard deviations (also known as two sigma) - a very large improvement. Moving up by two standard deviations means making such a dramatic leap that you would go from being an average student (around the 50th percentile) all the way up to being among the top 2% of students (the 98th percentile). More recent research suggests a lower but still significant effect size of around 0.79 standard deviations. Although Bloom's two sigma effect is rarely replicated in full, modern studies confirm that tutoring and personalised learning yield consistent, meaningful improvements, and the core principles from Bloom’s work stand the test of time, most notably in three ways:
- The Power of Systematic Assessment and Adaptation. Research consistently shows that continuous assessment and adaptation significantly enhance learning outcomes, and that the principles of mastery learning, with clear progression criteria and multiple opportunities for practice, remain vital.
- The Critical Role of Targeted Feedback. Feedback is another critical factor. Immediate, specific feedback proves more effective than delayed or general responses, with research indicating that clarity and actionability matter more than timing. AI tools designed to facilitate feedback must prioritise structured, precise guidance over purely conversational interactions, ensuring students receive support that genuinely enhances learning rather than creating unnecessary complexity.
- The Importance of Flexible Learning Pathways. Flexible learning pathways also play an essential role in student achievement. Supporting multiple valid approaches to problem-solving enhances learning, while adaptive pacing tailored to individual progress significantly impacts success. This is where AI systems that can track student progress and provide targeted support can help.
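The percentile arithmetic behind these effect sizes is easy to check. A minimal sketch in Python, assuming achievement scores are normally distributed (the function name is ours, chosen for illustration):

```python
import math

def effect_size_to_percentile(d: float) -> float:
    """Percentile an average (50th-percentile) student would reach after an
    improvement of d standard deviations, assuming normally distributed scores."""
    # Standard normal CDF expressed via the error function
    return 100 * 0.5 * (1 + math.erf(d / math.sqrt(2)))

print(effect_size_to_percentile(2.0))   # Bloom's two sigma: roughly the 98th percentile
print(effect_size_to_percentile(0.79))  # the more recent estimate: roughly the 79th percentile
```

So even the more modest modern effect size of 0.79 would move an average student from the 50th to around the 79th percentile.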
When it comes to AI in education: while AI presents exciting possibilities, it cannot yet replicate the nuanced support of skilled human tutors. Current AI tools struggle with deep conceptual misunderstandings, emotional support, and the challenge of helping students transfer learning across different contexts.
Key Practical Takeaways from Bloom’s work
For Educators:
- In general teaching practice:
  - Progressive mastery approaches offer particular promise. This involves breaking learning into clearly defined units with specific mastery criteria and allowing students multiple attempts to demonstrate their understanding.
  - While immediate, personalised feedback is ideal, the reality of managing 30 or more students means teachers need practical approaches. Quick write-and-response activities and structured peer feedback systems can help. Many teachers find that starting with just one subject area or specific type of task helps make implementation more manageable.
  - Adaptive pacing presents perhaps the greatest challenge in traditional classroom settings. Nevertheless, teachers have found success with approaches like small group rotation and choice boards that allow some degree of personalised pacing while maintaining classroom coherence.
  - Regular monitoring and adjustment of strategies - teachers report that keeping simple notes about what works helps them refine their approach over time.
- When using AI and educational technology:
  - Continuous professional development is essential to integrate technology and AI effectively into teaching. Educators find that implementing technology/AI incrementally, starting with targeted skills or subjects, leads to better outcomes.
  - Thoughtful implementation of blended learning can improve engagement and attainment.
  - AI tools excel at foundational skill development, freeing teachers for higher-order instruction.
  - Focus on student engagement: early success strongly predicts continued engagement, with previously struggling students often showing particularly strong gains with well-implemented systems.
For AI Developers:
- AI must prioritise clear, structured feedback over conversational complexity, with adaptive learning pathways aligned with proven pedagogical strategies.
- AI tools should be designed with the realities of classroom constraints and teacher workflows in mind, and should allow for teacher customisation and control to ensure meaningful classroom integration.
- It's also important to note that effective implementation requires extensive teacher training, positioning AI as an assistant in the classroom rather than an autonomous instructor.
As AI continues to evolve, developers and educators must focus on aligning technological advancements with proven pedagogical principles. The challenge ahead is not merely technological but educational—how to bridge the gap between traditional classroom instruction and the benefits of personalised tutoring while maintaining the irreplaceable role of skilled teachers in the learning process.
So, what can we learn from What the Research Says in the case of Bloom? Bloom’s research offers enduring insights into effective learning that can help educators and technology developers alike. However, it is crucial to recognise the methodological limitations of the original study.
The ‘Skinny Scan’ on what is happening with AI in Education….
I hope you enjoyed the WTRS section – read on for the usual Skinny Scan across what is happening in AI…
AI continues to develop fast, although the details of how it will evolve, be regulated and used remain unclear, as do the implications for society. We must therefore prepare for the possibility of profound transformation in education. Of course, transformation usually takes much longer than envisaged by those who are promoting its reality, but in truth AI is developing very fast, and we are starting to see the emergence of some key developments in education. Personalised learning at scale, where AI adapts to help tailor content to individual student needs. Traditional assessment methods starting to be reimagined as AI enables more dynamic, real-time evaluation of student progress. And, perhaps most significantly, a long overdue shift in what students learn - from fact memorisation to critical thinking, ethical reasoning, and AI literacy. The role of educators is also evolving, from knowledge providers to learning facilitators, helping students navigate an AI-augmented world while preserving essential human skills. It's early days, but the pace is more likely to quicken than slow, so it's important to take note.
Education and Training: A Critical Crossroads
Some universities are actively adapting to AI's growing influence and capabilities. For example, OpenAI's partnership with California State University, affecting over 460,000 students across 23 campuses, exemplifies how higher education is incorporating AI literacy into curricula. However, research indicates an inverse relationship between confidence in generative AI and critical thinking skills: those with higher confidence in their own abilities tend to retain stronger critical thinking, whilst those with higher confidence in GenAI, who rely heavily on it for problem-solving, may see a decline in independent analytical skills.
There are also some education-relevant technology developments to note: AI can now process much longer text inputs—up to 750,000 words—allowing for more comprehensive analysis of documents and codebases. This advancement, combined with improvements in natural interaction and technical precision, could transform how education can be delivered and consumed. And Moshi, a conversational AI system developed in France, can respond to input in 200 milliseconds, compared to ChatGPT's 5.4-second response time, making for more natural and engaging interactions.
Students too are changing their preferences and leaning into technology, with a 14% increase in university applications for engineering and technology courses. It is also important to note that industry increasingly values hybrid skills that combine technical knowledge with distinctly human capabilities such as creativity, judgement and ethical reasoning. And in the world of technology development, the focus is shifting away from traditional coding toward the ability to ask the right questions and strategically interact with AI.
Workplace Transformation
The impact of AI on employment is becoming increasingly complex. Whilst a 2013 study by Frey and Osborne suggested that 47% of US jobs were at risk of automation, recent findings from Anthropic indicate that AI is more often augmenting rather than replacing jobs. This is further evidenced by the emergence of what Andrew Ng calls '10x professionals' - individuals who can achieve ten times the impact of their peers through their use of AI.
Current business strategies are also changing, with 51% of UK business leaders planning to redirect investment from hiring people to AI implementation. BT's plan to reduce its workforce by up to 55,000 jobs by decade's end, with 10,000 cuts directly attributed to AI and automation, underscores this shift.
Enhanced productivity is being enabled through:
- AI-powered workflows that fundamentally change how work is approached
- Sophisticated marketing and business intelligence tools for data-driven decision-making
- Advanced recruiting and hiring processes that can efficiently analyse large candidate pools
- Enhanced financial and market analysis capabilities
The New AI Development Landscape: Democratisation and Investment
The assumption that advanced AI development requires massive infrastructure investments is being fundamentally challenged. Whilst American tech giants continue their substantial commitments—with Amazon allocating £100 billion and Microsoft £80 billion for 2025—Chinese company DeepSeek has demonstrated remarkable efficiency in AI development. Using less advanced H800 GPUs instead of cutting-edge chips, DeepSeek has achieved results comparable to those of major Western competitors at significantly lower costs—approximately £2.19 per million tokens, compared to OpenAI's £60 per million.
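To put those per-million-token prices in perspective, here is a quick back-of-envelope calculation in Python; the 50-million-token monthly workload is a hypothetical figure chosen purely for illustration:

```python
# Illustrative monthly usage - not a figure from the newsletter.
TOKENS_PER_MONTH = 50_000_000

def monthly_cost(price_per_million: float, tokens: int = TOKENS_PER_MONTH) -> float:
    """Cost of processing `tokens` tokens at a given price per million tokens."""
    return price_per_million * tokens / 1_000_000

deepseek = monthly_cost(2.19)   # DeepSeek's quoted rate: about £110/month
openai = monthly_cost(60.00)    # OpenAI's quoted rate: £3,000/month
print(f"DeepSeek: £{deepseek:,.2f}  OpenAI: £{openai:,.2f}")
```

At that hypothetical volume, the roughly 27-fold price gap turns a £3,000 monthly bill into one of around £110, which is why the efficiency claims attracted so much attention.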
This efficiency breakthrough has sent ripples through the market, initially causing US tech stocks to lose approximately £1 trillion in value. However, the market has shown resilience, with chip maker Nvidia recovering half of its £630 billion loss as investors recognise that the industry is evolving rather than facing destruction. Nvidia's announcement of Project Digits—a £2,400 personal AI computer with the power of systems that previously cost up to £15,800—further demonstrates this potential democratisation of AI technology.
And AI development is no longer just for the tech companies. A wide range of AI-powered coding tools with high-energy names like Bolt, Tempo and Windsurf make developing your own AI much more accessible.
Global Competition and Environmental Considerations
The international AI landscape reveals a complex tapestry of national strategies and geopolitical considerations. The UK's rebranding of its AI Safety Institute to the AI Security Institute signals closer alignment with US policy, whilst maintaining its distinct approach to regulation. The country's establishment of AI growth zones in former industrial sites and its twentyfold increase in public sector computing power demonstrate its commitment to balancing innovation with security.
The environmental impact of AI development remains a crucial consideration, with Microsoft's commitment to purchase 3.5 million carbon credits over 25 years highlighting the scale of the challenge. This environmental consciousness is increasingly shaping investment decisions and development strategies.
Safety and Responsible AI Development
Recent events highlight the growing sophistication of AI-enabled threats. The £1 million deepfake voice scam in Italy, where criminals impersonated Defence Minister Guido Crosetto, demonstrates the evolving nature of AI-enabled fraud. The Vatican has also weighed in, outlining crucial differences between AI-driven computational logic and human organic intelligence, highlighting concerns about misinformation, deepfakes, privacy risks, and the implications of AI on human identity.
Not surprisingly, there's a growing movement to shift focus from 'AI safety' to 'responsible AI development'. This approach emphasises preventing specific harmful applications while enabling innovation, addressing:
- Non-consensual deepfake content
- AI-driven misinformation
- Potentially unsafe medical diagnoses
- Manipulative AI applications
Looking Ahead: Investment and Future Outlook
The scale of AI investment remains extraordinary, with major tech companies planning combined spending of over £300 billion for 2025. The Stargate initiative, a £500 billion AI infrastructure project backed by SoftBank, OpenAI, Oracle and Abu Dhabi's MGX, exemplifies this commitment.
However, market responses to AI developments are becoming more nuanced. Microsoft's recent £200 billion market value reduction due to slower-than-expected cloud growth suggests investors are becoming more discriminating in their assessment of AI-related opportunities.
As AI development matures, the industry is shifting focus from pure capability to responsible deployment. The success of companies like DeepSeek and Mistral demonstrates that alternative development paths exist beyond massive infrastructure investment. This evolution suggests a future where success depends not just on raw computing power, but on creating comprehensive, ethical solutions that balance innovation with safety.
The implications of all these developments for education are particularly far-reaching. Educational institutions must now prepare students for a world where AI collaboration is the norm, not the exception. This means:
- Developing new curricula that emphasise human-AI collaboration while strengthening uniquely human capabilities
- Training educators to effectively integrate AI tools while maintaining pedagogical excellence
- Creating assessment frameworks that evaluate both technical proficiency and critical thinking
- Building ethical frameworks for AI use in educational settings
- Ensuring equitable access to AI tools while preventing over-reliance
The success of future generations will depend not just on their ability to use AI tools, but on their capacity to think independently, reason ethically, and maintain their creative edge in an AI-enhanced world.
Thanks for reading. For more detail on all the news and links to sources read on. Many thanks to Andrew Ng and George Siemens for their great newsletters that always inform The Skinny.
- Professor Rose Luckin, February 2025
Key Developments in AI
Impact on Society, Education, and Work
The early months of 2025 have witnessed unprecedented developments in artificial intelligence, marking a fundamental shift in how AI is reshaping society, education, and the workplace. Several key themes have emerged that are transforming our world:
Transformation of Education and Work
The integration of AI into education has reached a turning point, exemplified by OpenAI's groundbreaking partnership with California State University. This reflects a crucial recognition: preparing students for an AI-enabled workplace is no longer optional. Law schools, for example, are facing the need to adapt curricula to accommodate AI-driven legal analysis tools that are transforming the legal industry. Significant breakthroughs in natural conversation handling, with AI now able to process up to 20% of overlapping speech, are revolutionising virtual classroom experiences and educational interactions.
A significant shift in employment shows that 51% of UK businesses are prioritising AI investment over hiring new staff. However, this trend is also contributing to significant job reductions, with companies such as BT planning to cut 55,000 jobs, including 10,000 directly due to AI and automation. While this shift is driving increased interest in AI-related careers—evidenced by a 14% rise in applications for engineering and technology degrees—applications for teaching and nursing have seen a notable decline. This raises concerns about balanced workforce development and the long-term impact of AI on essential public services.
Massive Investment and Infrastructure Evolution
The scale of AI investment has reached extraordinary levels, with major technology companies committing more than £320 billion for 2025 alone. This includes the ambitious £500 billion 'Stargate' project backed by SoftBank and OpenAI, as well as France's £94 billion AI programme. Beyond these large-scale projects, individual corporations are making substantial investments, with Amazon, Google, Microsoft, and Meta pledging a combined $300 billion towards AI advancements.
Meanwhile, China's DeepSeek has demonstrated that cutting-edge AI models can be trained at a fraction of previously assumed costs, challenging traditional Western assumptions about the resources required for AI breakthroughs.
Significant technical advances are reshaping the industry, with breakthroughs such as O3-mini's selectable reasoning levels and Moshi's 200-millisecond response time transforming AI capabilities. However, processing requirements have emerged as a critical bottleneck, with some deep research tasks requiring up to 30 minutes of processing time, highlighting the infrastructure challenges facing the industry.
Scientific Innovation and Healthcare Transformation
AI's impact on healthcare and scientific research has been remarkable. Google's Isomorphic Labs is preparing for human trials of AI-designed medicines, marking a breakthrough in pharmaceutical AI research. Similarly, Retro Biosciences, supported by OpenAI CEO Sam Altman, is seeking to raise $1 billion to extend human lifespan by up to ten years through AI-driven protein design. These advances, while promising, raise crucial questions about healthcare accessibility and potential inequalities in life-extension treatments. The development of more resilient AI systems, exemplified by UI-TARS's advanced error recovery capabilities, is particularly significant for healthcare applications where reliability is paramount.
Ethical Challenges and Safety Imperatives
Recent events have highlighted serious ethical challenges in AI development and deployment. South Korea has reported over 800 deepfake-related crimes in just nine months, leading to widespread activism by women demanding stronger digital safety measures and legal protections. Meanwhile, the growing sophistication of AI-generated scams has been demonstrated by criminals impersonating Italy's Defence Minister to defraud business figures of over €1 million.
The artistic community has also voiced concerns about AI ethics, with over 3,000 artists protesting against Christie's AI art auction, arguing that AI models are trained on copyrighted works without permission. In response to rising ethical concerns, companies like Anthropic have introduced new safety techniques such as "constitutional classifiers," which effectively block harmful content while maintaining system functionality.
Global Competition and Regulatory Divergence
A significant shift has occurred in international AI governance, with the UK and US declining to sign a global AI safety declaration supported by 60 other nations. This reflects growing tensions between fostering innovation and ensuring responsible AI development. The EU, meanwhile, is moving forward with its AI Act, which enforces stricter transparency and accountability measures for "high-risk" AI systems, demonstrating a different approach to AI regulation compared to the US and UK.
In the corporate world, AI partnerships and competition are reshaping the industry. Google's reversal of its stance on military AI projects, securing a £1.3 billion contract with Israel, marks a significant shift in tech-military relations. Apple has chosen Alibaba as its AI partner in China, while Elon Musk has led a consortium offering $100 billion to acquire OpenAI, raising questions about corporate control over foundational AI technologies and the future governance of AI development.
Environmental Considerations
The environmental impact of AI development has emerged as a critical concern. Major tech companies are now implementing significant environmental offset programmes, with Microsoft committing £200 million to rainforest restoration to counterbalance the carbon footprint of AI infrastructure expansion. This highlights a growing recognition that technological advancement must be balanced with environmental responsibility.
Looking Ahead
These developments suggest we are entering a period where AI competency becomes increasingly crucial across all sectors. The emergence of more accessible AI tools, exemplified by O3-mini's competitive pricing model, is democratising access to AI capabilities. Educational institutions face the challenge of preparing students for this reality, while industries must balance automation with workforce development. The significant investments in AI infrastructure indicate that this transformation will continue to accelerate, making it essential for society to address both the opportunities and challenges this presents.
The challenge ahead lies in ensuring these advances benefit society equitably while effectively managing associated risks. This requires careful consideration of privacy, security, and ethical implications, alongside continued investment in education and infrastructure to prepare for an AI-enabled future.
Below are some of the news stories from the last month:
AI News Summary
AI in Education
Breakthrough in Handling Overlapping Speech (February 2025)
New AI models can now handle up to 20% of natural conversation overlap, representing a significant advancement in natural human-AI interaction. Significance: This capability brings AI interaction closer to natural human conversation patterns, potentially improving applications in education settings, particularly in virtual classrooms and AI-assisted learning environments where natural dialogue is crucial.
OpenAI Targets Higher Education in the US with ChatGPT Rollout at California State
A significant shift in higher education emerged as OpenAI formally partnered with universities to integrate AI tools into teaching and learning. This initiative aims to prepare students for an AI-enabled workplace while ensuring ethical and responsible use of the technology.
AI in Education: Global Implementation and Impact
Nigeria Demonstrates Transformative Impact of AI in Education (9th January 2025)
A six-week pilot program in Nigeria has demonstrated AI's potential to transform education in developing contexts. Using generative AI as an after-school virtual tutor, the program achieved learning gains equivalent to two years of typical education compressed into just six weeks. Students who participated significantly outperformed their peers across all tested areas, including English, AI knowledge, and digital skills, with benefits extending to end-of-year curricular exams. The intervention showed particular success in bridging gender gaps in education, with girls showing outsized benefits.
Microsoft Launches Comprehensive AI Education Initiative (16th January 2025)
Microsoft has unveiled a major expansion of its educational AI tools, centred on Microsoft 365 Copilot Chat, a new pay-as-you-go agent system for education customers. The system includes free AI chat powered by GPT-4o and features enterprise-grade data protection. While over 400,000 educators across more than 50 countries are currently using these tools, only 20% report feeling equipped to use generative AI effectively. The initiative includes teaching and learning agents, integration with major learning management systems, and a new education-focused app (Project Spark).
Legal AI and Its Impact on Law Education
Legal AI is Reaching Deep into the Workplace (January 21, 2025)
The legal industry is experiencing a transformation as AI-powered tools revolutionise contract and case law analysis. Start-ups such as Genie AI, Luminance, and Robin AI in the UK raised over $100 million in 2024, while US competitor Harvey secured $100 million at a $1.5 billion valuation. These AI tools enhance efficiency, reducing reliance on external law firms. However, resistance remains within traditional firms due to billable-hour models. Law schools may need to adapt curricula to include AI-driven legal tasks to prepare graduates for this evolving landscape.
AI Ethics and Societal Impact
Sam Altman-Backed Retro Biosciences to Raise $1bn for Life Extension (January 23, 2025)
Retro Biosciences, supported by OpenAI CEO Sam Altman, seeks to raise $1 billion to extend human lifespan by up to ten years through AI-driven protein design. This innovation raises ethical concerns about accessibility and potential societal inequalities in life-extension treatments.
Korean Women Are Fighting Back Against Deepfakes (February 3, 2025)
Over 800 deepfake-related sex crimes were investigated in South Korea within just nine months of 2024. The crisis underscores the urgent need for stronger digital safety measures and enforcement to combat AI-enabled harassment.
Ex-Google Boss Fears for AI 'Bin Laden Scenario' (February 13, 2025)
Eric Schmidt warned about AI misuse by hostile states, drawing parallels to 9/11-style scenarios. He also advocated for restricting smartphone use in schools and banning social media for under-16s.
Christie's AI Art Auction Sparks Backlash from Artists (February 10, 2025)
Christie’s plan to hold its first AI art auction has drawn protests from over 3,000 artists, who argue AI models are trained on copyrighted works without permission. The controversy highlights ongoing tensions between technological progress and intellectual property rights.
AI and Cybersecurity
AI Safety Measures: Anthropic Announces New AI Safety Technique (February 3, 2025)
Anthropic introduced "constitutional classifiers," blocking 95% of hacking attempts while maintaining minimal refusal rates for benign queries. This innovation underscores the need for AI safety improvements as large models become more powerful.
Italian Tycoons Targeted by Fake Defence Minister in AI Scam (February 9, 2025)
Criminals used AI-generated voice technology to impersonate Italy’s Defence Minister, Guido Crosetto, scamming prominent business figures out of over €1 million.
UK Orders Apple to Give Access to Encrypted Cloud Data (February 7, 2025)
The UK government’s controversial demand for a "backdoor" in Apple’s iCloud storage has reignited debates over digital privacy versus security.
AI Employment and the Workforce
UK Companies Plan to Invest in AI Instead of Hiring (January 13, 2025)
Amid rising employer costs, 51% of UK business leaders plan to invest in AI rather than hiring new staff. Companies like BT aim to cut 55,000 jobs by 2030, with 10,000 due to AI and automation.
Surge in 18-Year-Olds Applying for UK Engineering Degrees (February 13, 2025)
A 14% increase in engineering course applications signals rising awareness of AI's career impact. However, applications for teaching and nursing have declined.
The Future of AI-Driven Drug Discovery: Ex-DeepMind Scientist Launches AI Drug Discovery Venture (February 13, 2025)
Simon Kohl, formerly of DeepMind, secured $50 million to fund Latent Labs, which aims to make biology "programmable" through AI-driven protein design.
AI Development and Industry
O3-mini Introduces Selectable Reasoning Levels (February 2025)
The introduction of selectable reasoning "effort" levels in O3-mini represents a significant advancement in AI model control, allowing users to optimise between processing speed and accuracy. This development marks a shift toward more flexible and efficient AI systems.
Significance: This innovation addresses one of the key challenges in AI deployment - the trade-off between speed and accuracy. By allowing users to select different reasoning levels, organisations can better optimise their AI usage based on specific needs and resources.
UI-TARS Demonstrates Advanced Error Recovery (February 2025)
UI-TARS has shown remarkable progress in error recovery and mistake correction, demonstrating enhanced resilience and adaptability in AI systems. Significance: The ability to recover from errors and correct mistakes autonomously represents a crucial step toward more reliable AI systems that can operate in real-world scenarios with minimal human intervention.
Moshi Achieves Breakthrough in AI Response Time (February 2025)
Moshi has achieved a 200-millisecond response time, dramatically outperforming ChatGPT Voice Mode's 5.4-second response time, marking a significant advancement in conversational AI. Significance: This breakthrough in response time brings AI interactions closer to natural human conversation speeds, potentially transforming applications in customer service, education, and personal assistance.
Qwen2.5-VL Shows Breakthrough in Model Scalability (February 2025)
The Qwen2.5-VL architecture demonstrates remarkable scalability, ranging from 3B to 72B parameters, while maintaining competitive performance across different use cases. Significance: This scalability breakthrough could democratise AI access by allowing organisations to choose model sizes that match their specific needs and resources, potentially reducing deployment costs while maintaining effectiveness.
AI-Designed Drugs Entering Trials: AI-Developed Drug Will Be in Trials by Year-End, Says Google’s Hassabis (January 21, 2025)
Google subsidiary Isomorphic Labs announced its first AI-designed drug will enter human trials by late 2025, marking a breakthrough in pharmaceutical AI research.
Microsoft Secures Deal to Restore Amazon Rainforest (January 22, 2025)
Microsoft committed $200 million to offset AI-related emissions, highlighting AI's environmental impact.
"Samsung unveils new AI-powered smartphone in fight against Apple" (January 22, 2025)
Samsung launched its S25 smartphone featuring Google's Gemini LLMs and Qualcomm chips, available in 46 languages.
"SoftBank and OpenAI back sweeping AI infrastructure project in US" (January 22, 2025)
The project aims to create 100,000 jobs and secure the US AI industry's future.
"TikTok owner ByteDance plans to spend $12bn on AI chips in 2025" (January 22, 2025)
ByteDance announced plans for significant AI infrastructure investment, including Rmb40bn for domestic AI chips.
"Google invests further $1bn in OpenAI rival Anthropic" (January 22, 2025)
Google increased its investment in Anthropic to around $3bn total.
"SK Hynix profits beat Samsung's for first time on AI boom" (January 23, 2025)
SK Hynix's operating profit jumped 20-fold to Won8.1tn ($5.6bn) in Q4 2024.
How Chinese AI Start-Up DeepSeek Shocked Silicon Valley (January 24, 2025)
DeepSeek trained a 671-billion parameter model using just 2,048 Nvidia H800s and £4.4m, challenging assumptions about AI development costs.
"Meta sticks with big bet on AI even after DeepSeek shook markets" (January 30, 2025)
Meta reaffirmed its massive AI investment plans with $60-65bn capital expenditure planned for 2025.
"Microsoft sheds $200bn in market value after cloud sales disappoint" (January 30, 2025)
Despite overall revenue growth, Microsoft's shares fell over 6% when cloud division revenue missed forecasts.
"SoftBank in talks to invest up to $25bn in OpenAI" (January 30, 2025)
SoftBank considered a $15-25bn direct investment in OpenAI, plus $15bn+ commitment to the Stargate project.
"Amazon to spend $100bn this year in AI drive" (February 2025)
Amazon announced plans to invest approximately $100bn in AI initiatives in 2025.
"Big Tech to Spend $300bn+ on AI in 2025" (February 7, 2025)
Major tech firms have pledged unprecedented AI investments, with combined spending expected to exceed $320bn in 2025.
"Macron unveils plans for €109bn of AI investment in France" (February 10, 2025)
France announced a massive AI infrastructure investment comparable to the US Stargate project.
"Elon Musk-Led Consortium Offers $100bn to Take Control of OpenAI" (February 11, 2025)
Musk and investors, including Valor Equity Partners and Baron Capital, bid $100 billion for OpenAI.
"China's tech stocks enter bull market after DeepSeek breakthrough" (February 12, 2025)
Chinese technology stocks surged following DeepSeek's AI breakthrough.
"Ex-DeepMind scientist launches AI drug discovery venture" (February 13, 2025)
Simon Kohl secured $50 million to fund Latent Labs.
"Alibaba says it will be Apple's AI partner in China" (February 13, 2025)
Apple selected Alibaba to provide AI technology for its iPhones in China.
"Alphabet shares sink after cloud growth stalls and spending surges" (February 13, 2025)
Alphabet's shares dropped 8% following disappointing cloud business growth.
"Match enlists AI to nudge men into better behaviour on dating apps" (February 16, 2025)
Match Group implemented AI to detect potentially abusive messages on platforms like Tinder and Hinge.
AI Regulation and Legal Issues
Google Reverses Stance on Military AI Projects (February 2025)
Google has reversed its 2018 Project Maven position, securing a £1.3 billion contract with Israel. This decision marks a significant shift in tech-military relations and follows similar moves by other major AI companies.
Significance: This reversal represents a fundamental change in how major tech companies approach military AI applications, potentially accelerating the development of AI in defence while raising new ethical considerations about AI's role in military operations.
EU Moves Forward with AI Act Despite US Opposition (February 4, 2025)
The EU’s AI Act enforces transparency requirements for "high-risk" AI systems, signalling growing international regulatory divergence.
US and UK Decline to Sign Global AI Declaration (February 11, 2025)
A major divide in global AI governance emerged as the US and UK refused to sign a safety declaration backed by 60 nations.
"The new AI arms race" (February 12, 2025)
A Financial Times editorial analysed the shift from international cooperation to competition in AI development.
AI Market and Investment
O3-mini Disrupts AI Pricing Model (February 2025)
O3-mini has introduced significantly lower pricing at $1.10/$4.40 per million input/output tokens, intensifying competition in the AI market.
Significance: This aggressive pricing strategy could accelerate AI adoption across industries by making advanced AI capabilities more accessible to smaller organisations and developers.
Processing Requirements Emerge as Critical AI Bottleneck (February 2025)
Deep research tasks requiring up to 30 minutes of processing time highlight growing infrastructure challenges in advanced AI applications. Significance: These infrastructure limitations are becoming a critical constraint on AI advancement, potentially influencing the direction of future AI development and deployment strategies.
SoftBank and OpenAI Back $500bn AI Infrastructure Project "Stargate" (January 22, 2025)
The project aims to create 100,000 jobs and secure the US AI industry’s future.
Big Tech to Spend $300bn+ on AI in 2025 (February 7, 2025)
Major tech firms have pledged unprecedented AI investments, with Amazon, Google, Microsoft, and Meta leading the charge.
Elon Musk-Led Consortium Offers $100bn to Acquire OpenAI (February 11, 2025)
Musk and investors, including Valor Equity Partners and Baron Capital, bid $100 billion for OpenAI, raising questions about AI governance and OpenAI’s future direction.
Further Reading: Find out more from these resources
Resources:
- Watch videos from other talks about AI and Education in our webinar library here
- Watch the AI Readiness webinar series for educators and educational businesses
- Listen to the EdTech Podcast, hosted by Professor Rose Luckin here
- Study our AI readiness Online Course and Primer on Generative AI here
- Read our byte-sized summary, listen to audiobook chapters, and buy the AI for School Teachers book here
- Read research about AI in education here
About The Skinny
Welcome to "The Skinny on AI for Education" newsletter, your go-to source for the latest insights, trends, and developments at the intersection of artificial intelligence (AI) and education. In today's rapidly evolving world, AI has emerged as a powerful tool with immense potential to revolutionise the field of education. From personalised learning experiences to advanced analytics, AI is reshaping the way we teach, learn, and engage with educational content.
In this newsletter, we aim to bring you a concise and informative overview of the applications, benefits, and challenges of AI in education. Whether you're an educator, administrator, student, or simply curious about the future of education, this newsletter will serve as your trusted companion, decoding the complexities of AI and its impact on learning environments.
Our team of experts will delve into a wide range of topics, including adaptive learning algorithms, virtual tutors, smart classrooms, AI-driven assessment tools, and more. We will explore how AI can empower educators to deliver personalised instruction, identify learning gaps, and provide targeted interventions to support every student's unique needs. Furthermore, we'll discuss the ethical considerations and potential pitfalls associated with integrating AI into educational systems, ensuring that we approach this transformative technology responsibly. We will strive to provide you with actionable insights that can be applied in real-world scenarios, empowering you to navigate the AI landscape with confidence and make informed decisions for the betterment of education.
As AI continues to evolve and reshape our world, it is crucial to stay informed and engaged. By subscribing to "The Skinny on AI for Education," you will become part of a vibrant community of educators, researchers, and enthusiasts dedicated to exploring the potential of AI and driving positive change in education.