THE SKINNY
on AI for Education
Issue 7, June 2024
Welcome to The Skinny on AI for Education newsletter. Discover the latest insights at the intersection of AI and education from Professor Rose Luckin and the EVR Team. From personalized learning to smart classrooms, we decode AI's impact on education. We analyse the news, track developments in AI technology, watch what is happening with regulation and policy and discuss what all of it means for Education. Stay informed, navigate responsibly, and shape the future of learning with The Skinny.
But first, calling all educational leaders, please complete our EVR AI Benchmarking Self-Evaluation exercise – it’s important that we hear from as many UK schools and colleges as possible, to ensure our analysis gives a representative perspective about what is happening with AI in UK education at the moment.
How are you using AI? Compare your progress to other Schools and Colleges across the UK
Professor Rose Luckin and the EVR Team would like to invite you to participate in a national benchmarking exercise to evaluate current trends in the use of AI in education. Your perspective counts and all the team will need is for you to complete a simple 10-minute self-evaluation.
Headlines
I hope this message finds you well. Please accept my sincere apologies for the long gap since the last Skinny issue. As many of you will understand, my attention has been wholly dedicated to my family and the joyous arrival of my fourth grandchild – an incredible moment that undoubtedly warranted my complete presence and focus.
Now, I find myself grappling with the amount that has happened since the last issue - it is hard to get to grips with it all and to make sense of the various developments and their implications for education. I do my best below to navigate the latest technology developments, the continuing slew of large AI investments, the regulatory landscape and the increasing pressures on the education and training sector to get to grips with what AI is enabling students to achieve and businesses to leverage.
However, prompted by the recent OpenAI voice debacle, first please indulge me as I reflect that I once had the pleasure of being upgraded on a flight from New York to London, and to add to the thrill of sipping chilled champagne and eating warm snacks in first class, I found myself sitting behind the actress Scarlett Johansson. I knew she was on the flight well before I boarded the plane and took my seat – I knew as soon as I heard the words "ooh now we can have a romantic dinner together" as she entered the lounge at JFK airport with her friend. Her voice is so distinctive, and let's face it, so sexy too! I knew this was no fake, because I had the benefit of seeing as well as hearing. There was no question – this was no imposter, but the real, living, breathing Ms Johansson. No wonder OpenAI want to use her voice for their AI. Not only did she entice Joaquin Phoenix in the film 'Her', but most of the filmgoing population too. How sad, then, that OpenAI have been so dishonest in the way they dealt with their misuse of her identity. It is a warning to all of us that they need to earn our trust, and that we should not give it to them without evidence that they deserve it! The alarm bells should be ringing: we should worry that OpenAI is eliminating its "super alignment team" focused on safety, and be concerned about the rapid development and deployment of AI technologies without sufficient oversight and consideration of potential risks and ethical implications – especially in education.
Oh, and just in case you are wondering, Ms. Johansson was the epitome of grace and charm, her petite frame and natural beauty undiminished by the absence of makeup. As we landed, she slipped on a baseball cap and discreetly disappeared into a waiting car, leaving her fellow passengers none the wiser to her presence as they began to take their luggage from the overhead lockers!
But on to the newsletter now and what has been happening with AI in education...
What's been happening with AI for education
I was disheartened to see that the number of adult learners in England declined by 47% between 2010-11 and 2022-23. This decline is most pronounced in the most deprived areas of the country, exacerbating the existing skills divide. At the same time, business schools recognise the need for AI upskilling and are responding by developing new offerings to meet the evolving needs of executives and organisations; whilst there is a shortage of experts to teach the courses, it seems clear that those who can afford it will be able to get the skills they need. Sadly, the majority risk being left behind unless a concerted effort is made to increase adult training.
There are also questions about job satisfaction: as AI completes more of the mundane work tasks to boost efficiency, it also raises questions about the role of human judgment, creativity, and interpersonal relationships. Organisations that thoughtfully navigate these issues and find ways to leverage AI while empowering their employees will be well placed to thrive in the AI era.
And employees with skills will likely do very well - 97% of IT jobs that pay over £100,000 now require AI skills as a core requirement and in general workers with AI skills are well-positioned to command an average skill premium of 21% compared to their peers without such expertise. 66% of executives at the vice president level or above state that they would not hire an applicant for a senior position who lacked knowledge of using basic generative AI tools. However, the ability to build applications using AI opens up even more opportunities, positioning job seekers as highly sought-after candidates in various industries.
Khan Academy announced that it is collaborating with Microsoft on AI in education, and just to put the pressure on, OpenAI co-founder Greg Brockman and Khan Academy founder Sal Khan suggested that the window of opportunity for starting AI initiatives in education is closing rapidly. I find that hard to believe. I can understand why they would want us to believe that time is tight, but the truth is that we are only at the start of what will inevitably be a long journey for AI in education. It is also the case that building AI tutors that are truly helpful and natural to use is extremely hard, and without advancing our understanding of a learner's context and evidence of consistently helpful results, we should maintain some healthy scepticism.
Meanwhile, Google published an intriguing paper, "Towards Responsible Development of Generative AI for Education: An Evaluation-Driven Approach." The paper reports research that underscores the potential for sector-specific data to refine and optimise AI models for particular applications, such as education. The authors call for collaborative development of shared pedagogical benchmarks in the AI and education sector to drive evidence-based progress. They also acknowledge the technical limitations of their LearnLM-Tutor: for example, it is text-only, lacks full understanding of context, cannot leverage non-verbal cues, and may hallucinate information. Sociotechnical limitations include less personalisation and rapport-building compared to human tutors. They stress that more research is needed on effective human-AI collaboration in education – a finding that will be recognised by all of us who have been working as part of the AI and education community for the past few decades.
The relentless march of technological progress continues, from OpenAI's GPT-4o launch and Google's enhanced Gemini 1.5 Pro and Gemini Live voice chat app, to the development of smaller language models by Microsoft, Google, Meta, and Apple to address concerns over the costs and resource demands of large language models (LLMs). Chinese AI companies offer "AI-in-a-box" products for businesses to maintain data and operations in-house, while Apple's OpenELM family of open-source, smaller LLMs prioritizes user privacy by running on Apple devices. Amidst this progress, big tech's disingenuity persists, from OpenAI's 'Sky' voice controversy to Meta's simultaneous release of the open-source Llama 3 LLM and expressions of their own scepticism regarding the safety and human-like intelligence of large LLMs. Microsoft's "Windows Recall" also raises questions, granting their AI co-pilot a "photographic memory" of user activity without clear justification.
As Abba famously sang, "Money, money, money." The unabated flow of substantial investments in AI continues, exemplified by Saudi Arabia's $100 billion commitment to become a global AI hub and diversify its economy. With investors demanding swift returns, the pressure to innovate may overshadow considerations of safety and ethics. Regrettably, this climate could exacerbate the existing lack of transparency from AI companies at a time when public trust and collaboration between companies and regulators are paramount for the responsible development and deployment of AI technologies. Whilst on the subject of money, Nvidia, the leading AI chip manufacturer, reported a staggering 262% revenue increase in the past quarter, underscoring the immense financial potential in this field.
As we all try to safely navigate this complex and rapidly evolving landscape, let’s remain committed to the thoughtful and ethical advancement of AI in education. Collaboratively, I continue to believe we can work towards a future where the benefits of AI are harnessed for the greater good, while vigilantly safeguarding against potential pitfalls.
Education news
1. The number of adult learners in England has declined by 47% between 2010-11 and 2022-23, leading to a loss of 7 million qualifications. The decline has been most pronounced in the most deprived areas of the country, exacerbating the existing skills divide. This trend is attributed to a 28% real-terms cut in per-capita funding for adult skills by the government and a 20% reduction in investment per employee by companies over the same period, hindering economic growth and social mobility as the demand for skilled workers continues to grow.
2. The growing interest among executives in the commercial applications of generative AI is driving business schools to offer a wide range of new courses focused on digital understanding and skills. A recent survey found that nearly half of the respondents anticipate learning about AI in the next five years, along with other digital subjects such as cybersecurity, digital marketing, e-commerce, and data analytics. Business schools are responding by revamping their course portfolios and developing new offerings to meet the evolving needs of executives and organizations.
3. Training providers face the challenge of finding enough experts to teach courses on the latest AI technologies. MIT Sloan School of Management collaborates with professors from the MIT Computer Science and Artificial Intelligence Laboratory to deliver its AI courses, highlighting the need for interdisciplinary collaboration and innovative approaches to meet the growing demand for AI education.
4. As AI-generated content becomes increasingly prevalent in the workplace, organizations must reflect on the meaning and value of work for their employees. While AI can automate mundane tasks and potentially boost efficiency, it also raises questions about the role of human judgment, creativity, and interpersonal relationships. Organizations that thoughtfully navigate these issues and find ways to leverage AI while empowering their employees are more likely to thrive in the AI era, but few companies seem to be actively grappling with these challenges as AI adoption expands.
5. George Siemens announced the launch of a Global Data Consortium, an initiative to encourage universities to become producers, not just consumers, in the AI arena. The project aims to help universities get involved in AI development by reviewing and commenting on a concept/technical paper. Siemens argues that higher education is not sufficiently engaged in AI and proposes the consortium as an on-ramp for universities to start building with AI, inviting feedback on the concept paper.
6. Khan Academy is collaborating with Microsoft on AI in education. OpenAI co-founder Greg Brockman and Khan Academy founder Sal Khan emphasized that the window of opportunity for starting AI initiatives in education is closing rapidly, and the education system has done little internally to address this pressing need, stressing the importance of swift action and collaboration to harness the power of AI for improving educational outcomes and experiences.
7. Google has published a paper titled "Towards Responsible Development of Generative AI for Education: An Evaluation-Driven Approach," which demonstrates the value of educational data in shaping the effectiveness of large language models (LLMs) in specific sectors. Key takeaways:
- Google DeepMind and Google have developed LearnLM-Tutor, a generative AI model optimised for 1-on-1 conversational tutoring.
- The development process involved participatory research with learners and educators – workshops, interviews, and co-design activities – to identify key pedagogical principles and capabilities to prioritise.
- LearnLM-Tutor was created by supervised fine-tuning of Gemini 1.0 (the base language model) on pedagogically rich datasets.
- A comprehensive evaluation compared LearnLM-Tutor to the base model using 7 benchmarks spanning quantitative metrics, qualitative feedback, human judgments, and automated methods; LearnLM-Tutor outperformed Gemini 1.0 on most pedagogical dimensions.
- Real-world testing took place with students in Arizona State University's online courses; interviews revealed students found the AI tutor helpful, always available, and a safe space to ask questions.
- Responsible development practices included impact assessments, new educational policies, mitigations for risks such as misinformation, and iterative human and automated evaluations.
Job market news
1. An analysis of over 5,300 live tech jobs in the UK revealed that 97% of IT jobs paying over £100,000 now require AI skills as a core requirement, either explicitly in the job title or within the role's primary requirements. The study highlights the rapid integration of AI into high-paying tech jobs and the growing demand for professionals with AI expertise, emphasizing the need for IT professionals to acquire and continuously develop AI skills to remain competitive in the job market.
2. Workers with AI skills are well-positioned to command higher salaries, enjoying an average skill premium of 21% compared to their peers without such expertise. As AI continues to reshape the way workers use information and deliver results, sectors heavily exposed to AI are experiencing nearly five times higher growth in labour productivity compared to industries with lower AI adoption, highlighting the potential for AI to drive significant efficiency gains and create new opportunities for growth and innovation across various sectors.
3. AI is increasingly affecting the job market, with 66% of executives at the vice president level or above stating that they would not hire an applicant who lacked knowledge of using basic generative AI tools. Junior and less-experienced candidates are more likely to secure employment and receive increased responsibility if they possess AI skills, highlighting the growing importance of AI competency across various levels of the workforce as companies recognize the value of employees who can effectively leverage AI tools to enhance productivity and innovation.
4. Proficiency in using AI tools has become a valuable asset in the current job market, providing individuals with a competitive edge in securing employment and advancing their careers. However, the ability to build applications using AI opens up even more opportunities, positioning job seekers as highly sought-after candidates in various industries. As companies increasingly recognize the potential of AI to transform their operations and drive innovation, the demand for professionals who can design, develop, and implement AI-powered solutions continues to grow, making AI application development an increasingly valuable and in-demand skill set.
Technology news
1. OpenAI's launch of GPT-4o, a multimodal AI model, was overshadowed by a dispute with actress Scarlett Johansson. The AI assistant, named "Sky," had a voice that Johansson claimed was too similar to her own, used without her permission. After receiving legal notices from Johansson's team, OpenAI's CEO Sam Altman removed the voice and apologized, stating that it was not intended to resemble hers. The incident highlights growing concerns in Hollywood about AI's potential to disrupt the entertainment industry and infringe upon actors' rights.
2. Major tech companies, including Microsoft, Google, Meta, and Apple, are developing small language models to encourage the adoption of AI by businesses. These models have fewer parameters but still possess powerful capabilities, making them more cost-effective and requiring less computing power. The move aims to address concerns from enterprise customers about the high costs and resources needed to run large language models and make AI more accessible to a broader range of businesses.
3. Chinese AI companies are introducing "AI-in-a-box" products that allow businesses to run AI on their own premises, posing a threat to the AI cloud computing services offered by major Chinese tech firms like Alibaba, Baidu, and Tencent. These boxed solutions cater to companies that prefer to keep their data and operations in-house rather than relying on cloud-based services, potentially disrupting the market for AI cloud computing in China, where on-premises and private cloud setups account for about half of the market.
4. Nvidia reported record sales of AI chips, leading to a 262% increase in revenue for the past quarter, surpassing analysts' expectations. The company's CEO, Jensen Huang, expects this growth to continue throughout the year, driven by the launch of their new Blackwell chips. Huang also announced that Nvidia would maintain an annual pace of introducing newer, more powerful chips, highlighting the increasing demand for advanced AI hardware and Nvidia's dominant position in the market.
5. Meta's chief AI scientist, Yann LeCun, expressed scepticism about the ability of large language models (LLMs) to achieve human-like reasoning and planning capabilities. LeCun argued that LLMs have limited understanding of logic, the physical world, and lack persistent memory, making them "intrinsically unsafe." Instead, he advocates for an alternative approach to create "superintelligence" in machines, contrasting with the current focus on advancing LLMs and suggesting that a different path may be necessary to develop truly intelligent AI systems.
6. Meta AI has released Llama 3, an open-source large language model with 8 billion and 70 billion parameter versions, and a 405 billion parameter model in training. When running on Groq inference chips, the results are highly impressive, highlighting Meta's AI capabilities and commitment to open-source development.
7. Vector databases are becoming increasingly important for LLMs and AI: they store embeddings – numerical vector representations of text, images, and other data – and support fast similarity search over them, helping AI systems retrieve relevant information and improving the performance and efficiency of AI applications.
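For readers curious about what a vector database actually does under the hood, here is a toy sketch of the core idea: items are stored as embedding vectors, and queries retrieve the nearest neighbours by cosine similarity. The class and vectors below are purely illustrative – real systems use high-dimensional embeddings from an embedding model and specialised indexes, not this brute-force search.

```python
from math import sqrt

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors:
    # close to 1.0 means the items are semantically similar.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

class ToyVectorStore:
    """A minimal in-memory vector store: add (id, embedding) pairs,
    then query for the nearest neighbours of a new embedding."""

    def __init__(self):
        self.items = []  # list of (doc_id, embedding) pairs

    def add(self, doc_id, embedding):
        self.items.append((doc_id, embedding))

    def query(self, embedding, top_k=1):
        # Brute-force scan: score every stored item, highest first.
        scored = [(cosine_similarity(embedding, e), doc_id)
                  for doc_id, e in self.items]
        scored.sort(reverse=True)
        return [doc_id for _, doc_id in scored[:top_k]]

# Illustrative 3-dimensional "embeddings" (real ones have hundreds
# or thousands of dimensions, produced by an embedding model).
store = ToyVectorStore()
store.add("lesson-plan", [0.9, 0.1, 0.0])
store.add("quiz",        [0.1, 0.9, 0.2])
print(store.query([0.8, 0.2, 0.1], top_k=1))  # → ['lesson-plan']
```

The key design point is that retrieval works on meaning rather than keywords: a query embedding lands near the stored items it is semantically closest to, which is what makes these stores useful for retrieval-augmented LLM applications.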
8. Microsoft has announced new AI features for Windows, including "Windows Recall," which gives the AI assistant a "photographic memory" of a user's virtual activity while promising to protect privacy by allowing users to filter what is tracked, raising both excitement and concerns about the potential implications of such technology.
9. Apple's OpenELM is a family of open-source, smaller large language models designed to prioritise user privacy by running on Apple devices. The models range from 270 million to 3 billion parameters and can process 2,048 tokens of context. OpenELM's architecture differs from current state-of-the-art transformer models, and in evaluations, it outperformed several other open-source models trained solely on publicly available data, although it fell short on the MMLU benchmark.
10. Google's Gemini 1.5 Pro has received significant updates, most notably the doubling of its input context window to 2 million tokens of text, audio, and/or video, enabling developers to apply generative AI to multimedia files and archives beyond the capacity of other currently available models. Google also introduced Gemini 1.5 Flash, Veo, and expanded the Gemma family of open models, as well as launching Gemini Live, a smartphone app for real-time voice chat, as part of Project Astra, a DeepMind initiative to create real-time, multimodal digital assistants.
11. Amazon is removing its AI-driven Just Walk Out checkout service from most of its Amazon Fresh grocery stores and replacing it with Dash Cart, a smart shopping cart that enables customers to scan purchases as they shop. This decision suggests that the Just Walk Out technology may be less well-suited to larger store environments, facing challenges such as reliance on remote employees, difficulty in improving accuracy, high installation costs, and the need for extensive remodelling in some stores.
Markets news
1. Nvidia's record sales of AI chips led to a 262% increase in revenue for the past quarter, exceeding analysts' expectations. The company's CEO, Jensen Huang, anticipates continued growth, driven by the launch of their new Blackwell chips and a commitment to introducing newer, more powerful chips annually, demonstrating the increasing demand for advanced AI hardware and Nvidia's leading position in the market.
2. While Asia's largest AI and chipmaker stocks have continued to gain, companies further down the semiconductor supply chain, such as those involved in chipmaking equipment, materials, and chemicals, have been lagging behind their global peers in recent months. Shares of Tokyo Electron, Towa, Advantest, and Screen have experienced declines, indicating room for upside in these sectors and suggesting that investors are primarily focused on leading AI and chipmaker stocks, while the broader semiconductor industry is yet to fully benefit from the AI boom.
3. Salesforce shares plummeted by 20% after the company reported weaker than expected revenues and bookings, erasing approximately $50 billion from its market value. The customer relationship software giant provided a rare single-digit growth outlook for the second quarter, indicating challenges in maintaining growth momentum despite the increasing demand for AI and digital transformation solutions.
General news
1. OpenAI is facing increased scrutiny due to its handling of various issues, including employee investment plans, the elimination of its "super alignment team" focused on safety, and the voice used in its Omni model. There are calls for more "adult supervision" of the company, highlighting concerns about the rapid development and deployment of AI technologies without sufficient oversight and consideration of potential risks and ethical implications.
2. Saudi Arabia is investing $100 billion to become a global AI hub, aiming to diversify its economy by channelling its wealth into more sustainable industries. The country has established funds and partnerships to invest in AI start-ups, research, and infrastructure, attracting investments from both AI giants and start-ups. However, some investments have faced delays and criticisms, and U.S. partners have drawn scrutiny for working with Saudi Arabia due to human rights concerns.
Further Reading: Find out more from these free resources
- Watch videos from other talks about AI and Education in our webinar library here
- Watch the AI Readiness webinar series for educators and educational businesses
- Listen to the EdTech Podcast, hosted by Professor Rose Luckin here
- Study our AI readiness Online Course and Primer on Generative AI here
- Read our byte-sized summary, listen to audiobook chapters, and buy the AI for School Teachers book here
- Read research about AI in education here
About The Skinny
Welcome to "The Skinny on AI for Education" newsletter, your go-to source for the latest insights, trends, and developments at the intersection of artificial intelligence (AI) and education. In today's rapidly evolving world, AI has emerged as a powerful tool with immense potential to revolutionize the field of education. From personalized learning experiences to advanced analytics, AI is reshaping the way we teach, learn, and engage with educational content.
In this newsletter, we aim to bring you a concise and informative overview of the applications, benefits, and challenges of AI in education. Whether you're an educator, administrator, student, or simply curious about the future of education, this newsletter will serve as your trusted companion, decoding the complexities of AI and its impact on learning environments.
Our team of experts will delve into a wide range of topics, including adaptive learning algorithms, virtual tutors, smart classrooms, AI-driven assessment tools, and more. We will explore how AI can empower educators to deliver personalized instruction, identify learning gaps, and provide targeted interventions to support every student's unique needs. Furthermore, we'll discuss the ethical considerations and potential pitfalls associated with integrating AI into educational systems, ensuring that we approach this transformative technology responsibly. We will strive to provide you with actionable insights that can be applied in real-world scenarios, empowering you to navigate the AI landscape with confidence and make informed decisions for the betterment of education.
As AI continues to evolve and reshape our world, it is crucial to stay informed and engaged. By subscribing to "The Skinny on AI for Education," you will become part of a vibrant community of educators, researchers, and enthusiasts dedicated to exploring the potential of AI and driving positive change in education.