
THE SKINNY
on AI for Education

Issue 10, October 2024

Welcome to The Skinny on AI for Education newsletter. Discover the latest insights at the intersection of AI and education from Professor Rose Luckin and the EVR Team. From personalised learning to smart classrooms, we decode AI's impact on education. We analyse the news, track developments in AI technology, watch what is happening with regulation and policy and discuss what all of it means for Education. Stay informed, navigate responsibly, and shape the future of learning with The Skinny.

Headlines


After the AI Gold Rush: "Look at [Human] Nature on the run in the [2020s]"

​

I recently posted on LinkedIn under the title ‘After the AI Gold Rush: Charting a Course for Human Thriving’, because the wild ride of the current ‘AI Gold Rush’ is dizzying, and the important question we must ask is: what happens “After the Gold Rush?” In the words of musician Neil Young, "Look at Mother Nature on the run in the 1970s." Now, however, we need to look at human nature on the run in the 2020s, because much of the world’s power base is ’on the run’ trying to keep up with the rapid advancement of AI capabilities, whether because they want to regulate it or because they want to make money from it. There is hardly time to take a breath.

 

And yet that is what we must do: we must breathe, think, learn, and be wise if we want to shape this post-gold rush landscape to ensure human flourishing and thriving. If we want our ‘place in the sun’ we must find ways to navigate the complexity of what is happening in the world of AI, its wider context, implications, costs and benefits. And this is an extremely daunting task!

 

Here's a concise summary of the key points from this month’s Skinny, but do please read on for more details and lots of links to interesting reads. “Learn fast, act more slowly”:

 

  • The rapid advancement of AI technology is causing a "Gold Rush" mentality, with potential consequences for human thriving.

  • In this situation, it is vital that we consider the broader context and complexity of AI's impact, if we are to ensure human thriving.

  • Technology continues to develop at a fast pace - recent AI advancements include improved language models, image generation, and specialised AI for various tasks.

  • There's intensifying competition between nations in AI development and significant investments in AI infrastructure.

  • Fortunately, there is also a growing focus on responsible AI development, regulation, and governance, including calls for mandatory safety testing.

  • There are also increasing concerns about AI's environmental impact and infrastructure requirements, which may slow the pace of technical progress.

  • AI is impacting various sectors, from law enforcement to healthcare, raising ethical concerns and potential societal impacts – both positive and negative.

  • AI is starting to reshape education and the workplace, with an emphasis on personalised learning, AI literacy, and lifelong learning.

  • The implications are ongoing. In the UK, for example, a curriculum review is underway that will need to address interdisciplinary integration and the growing recognition that we must emphasise developing human capabilities that complement AI, such as critical thinking and creativity.

 

Many years ago, I wrote a book about ‘context’ and how our context has a substantial impact on how, when and if we learn. You can think of context as a combination of environment, people and things, like pencils, books, computers and so forth. The importance of looking at the wider context is often overlooked because we like to find simple explanations. Take, for example, the 2008 financial crash, which was originally blamed on the collapse of sub-prime mortgage lending but was of course far more complex, with many more moving parts. It is this recognition of the need to embrace complexity – no matter how uncomfortable – that makes me take The Skinny as an opportunity to look at the broader context of AI, not just what is happening with AI in education, because I know that the wider context will impact on AI in education.

 

In this issue of The Skinny, you can read on for thoughts about education and the workplace, but also, you can read about the ongoing advancements in AI technology that encompass significant progress in language models, image generation, and specialised AI for tasks like mathematics and audio processing. There is further evidence of the way AI is reshaping various sectors, from law enforcement to healthcare. And of intensifying competition between nations and substantial investments in AI infrastructure, including data centres and chip manufacturing: the foundations on which AI relies.

 

However, the recurrent theme is that despite the potential for AI to augment human capabilities and accelerate scientific discovery, there are growing concerns about its ethical implications, societal impacts, and environmental consequences, leading to an increased focus on responsible AI development, regulation, and governance. This is a good thing in my honest opinion.

 

But first…

 

Education and the Workplace

 

The ripples of the AI revolution have the potential to profoundly impact our educational systems and workplaces. We may be about to witness a significant shift towards personalised learning experiences and adaptive training programmes, leveraging AI's capabilities to tailor education to individual needs. This would likely reshape the roles of educators and trainers, who would be able to increasingly focus on facilitation and nurturing critical thinking skills rather than mere information delivery. As AI becomes more ubiquitous, there is a growing emphasis on AI literacy, and developing skills that complement, rather than compete with, AI systems. This shift underscores the pressing need for continuous learning and upskilling across all sectors, as professionals strive to adapt to the rapidly evolving, AI-driven job market. The message is clear: to thrive in this new landscape, we must embrace lifelong learning and cultivate uniquely human capabilities that AI cannot easily replicate.

 

In the UK, a curriculum review is getting underway that will need to tackle today's need for interconnectedness and fluidity, at the heart of which lies the concept of interdisciplinary integration. Not an easy thing to achieve. Take AI specifically: we can no longer teach it as a standalone subject, confined to the computer science department. Instead, its tendrils reach into every corner of academia, from the humanities to the social sciences, creating a rich tapestry of knowledge that reflects our complex reality. And as we equip our students with the tools to create and interact with AI, we must also instil in them a deep sense of moral obligation.

 

In this brave new world, the skills we prioritise are evolving. Critical thinking and creativity, once considered 'soft skills', are now at the forefront of education. These uniquely human abilities, complementary to AI, are the keys to thriving in an increasingly automated world. We also need a strong emphasis on practical, real-world applications. In this rapidly changing field, the idea of education as a finite process has become obsolete. Instead, we must cultivate a mindset of lifelong learning, equipping students with the adaptability to keep pace with AI advancements long after they've left the halls of academia.

 

This new educational paradigm can’t exist in a vacuum. Increasingly, we need closer collaboration between academia and industry. Partnerships with AI companies are providing students with invaluable insights into the cutting edge of technology, while internships and co-op programmes offer real-world experience that complements their theoretical knowledge.

 

That Wider Context


Beyond the specifics of education and training, the rapid advances in AI technology continue, with significant progress in language models, image generation, and specialised AI for tasks like mathematics and audio processing. We are also seeing the development of AI agents capable of generating novel scientific research, and advances in multimodal AI that handle multiple input and output types. Specifically, OpenAI's new ‘o1’ models show improved performance in math, science, and coding tasks: they use ‘chains of thought’ to excel at multi-step reasoning, breaking complex problems down into smaller, manageable steps. This is impressive, but still problematic on occasion: there are issues of transparency and interpretability, increased potential for misuse in academic settings (such as cheating on exams or assignments), and the risk of over-reliance on AI systems for critical decision-making. Mike Sharples has a nice LinkedIn post about this.
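To make the ‘chains of thought’ idea concrete, here is a toy sketch in Python: rather than answering a multi-step question in one leap, the problem is broken into smaller, checkable steps, each of which can be inspected. This is purely illustrative – the function and the pencil problem are invented for this example, and it is emphatically not OpenAI's actual method.

```python
# Toy illustration of "chain of thought" reasoning: decompose a multi-step
# problem into small, explicit steps that can each be checked.
# (Hypothetical example -- not how the o1 models are implemented.)

def solve_with_chain_of_thought(pencils_per_box, boxes, given_away):
    """How many pencils remain after giving some away? Returns the answer
    plus the intermediate reasoning steps."""
    steps = []
    total = pencils_per_box * boxes                       # step 1: total pencils
    steps.append(f"Step 1: {boxes} boxes x {pencils_per_box} pencils = {total}")
    remaining = total - given_away                        # step 2: subtract
    steps.append(f"Step 2: {total} - {given_away} = {remaining}")
    return remaining, steps

answer, reasoning = solve_with_chain_of_thought(12, 3, 5)
for line in reasoning:
    print(line)
print("Answer:", answer)   # Answer: 31
```

The educational point is the visible intermediate steps: because each one is exposed, an error can be located and challenged – exactly the kind of critical inspection of AI output we should be teaching students to perform.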


Google's Imagen 3 raises the bar for text-to-image generation, outperforming competitors in various benchmarks. Alibaba's Qwen2-Math and Qwen2-Audio achieve state-of-the-art performance in math problem-solving and audio processing. AI21 Labs' Jamba 1.5 introduces a hybrid architecture combining transformer, mamba, and mixture-of-experts layers for faster processing of long inputs. And the 4M-21 model handles an unprecedented number of input and output types for various computer vision tasks.

I also see intensifying competition between nations, particularly the US and China, in AI development; significant investments in AI infrastructure, including data centres and chip manufacturing; and possibly shifting investment patterns favouring AI startups and infrastructure projects – although many of these are still wholly or partially owned by large tech companies.

 

The consequences of the AI Gold Rush continue to make waves across various sectors, with its tentacles reaching far beyond the realms of pure technology. From the corridors of law enforcement to the halls of healthcare institutions, AI is increasingly becoming an indispensable tool. In the professional world, we're witnessing AI's potential to augment human capabilities in fields such as law and medicine, potentially reshaping these age-old professions. Meanwhile, in the scientific community, AI is opening new avenues for discovery, promising to accelerate research and innovation. However, this gold rush towards AI adoption isn't all a smooth ride: the significant infrastructure and energy requirements for AI development have sparked concerns, leading to substantial investments in power generation and data centres to support this burgeoning field.

 

And as ever, we find ourselves grappling with a host of ethical considerations and societal impacts. Concerns about data privacy, algorithmic bias, and the potential to exacerbate inequalities are at the forefront of public discourse. The delicate balance between fostering AI innovation and mitigating potential risks, particularly in sensitive areas like law enforcement, has sparked intense debates. In response, there's a growing focus on responsible AI development and deployment. This shift in perspective has led to increased attention on AI regulation and governance at both national and international levels. Interestingly, the trend seems to be moving towards regulating AI at the application level rather than the technology level itself. There are even calls for mandatory safety testing of AI models, akin to drug testing procedures in the pharmaceutical industry. As we navigate this complex landscape, efforts to enhance AI safety and reliability continue apace, with ongoing work to reduce model hallucinations and develop new benchmarks for performance evaluation.

 

If you want more detail about all of this, read on…


- Professor Rose Luckin, October 2024

AI Development and Industry

Summary:

Recent developments highlight rapid advancements and shifts in AI technology and the tech industry. Key points include:

  • Improvements in AI-generated imagery and specialised AI models

  • AI's growing role in scientific research and coding assistance

  • Significant investments in AI infrastructure by tech giants

  • Emerging competition in the AI chip market

  • Integration of advanced AI capabilities into consumer devices

  • Development of more sophisticated AI models with improved reasoning capabilities

  • Shift towards AI "agents" capable of more complex, autonomous actions

  • Corporate restructuring and leadership changes in major AI companies

These trends indicate AI's expanding influence across various sectors, raising questions about future impacts on industries, job roles, and technological capabilities.

 

Google's Imagen 3 Raises the Bar (22 August 2024)

Google has unveiled Imagen 3, a new text-to-image AI model that outperforms competitors in most head-to-head comparisons, particularly in adhering to prompts and generating specified numbers of objects. While Midjourney v6.0 still leads in visual appeal, Imagen 3's release showcases the rapid progress in AI-generated imagery. This advancement raises questions about the future of visual arts and design industries, as well as the potential for both creative empowerment and ethical concerns surrounding AI-generated content.

 

AI Agents Generate Novel Research (22 August 2024)

Researchers have proposed an innovative workflow called AI Scientist, which uses large language models to generate novel scientific research. The system employs various AI models to generate ideas, produce code, and document enquiries across three AI research categories. This development holds promise for accelerating scientific discovery but also raises questions about the role of human researchers and the potential for AI to reshape the academic landscape.

 

Alibaba's Open Models for Math and Audio (22 August 2024)

Alibaba has introduced Qwen2-Math and Qwen2-Audio, specialised open-weights AI models. Qwen2-Math-Instruct-72B has outperformed top models on some math benchmarks, while Qwen2-Audio provides text chat in response to voice input and can discuss various audio inputs. These advancements demonstrate the growing capabilities of AI in specialised domains, potentially revolutionising fields such as mathematics education and audio processing.

 

AI-powered Coding Attracts Significant Funding (24 August 2024)

AI-driven coding assistants have attracted nearly £1 billion in funding since early 2023, with £433 million raised in 2024 alone. This trend indicates that software engineering is becoming the first "killer app" for generative AI. Companies like GitHub report significant adoption rates and productivity gains, with over 77,000 organisations using their Copilot tool. This development is reshaping the software development landscape, potentially altering the skills required for programming and accelerating innovation across various tech sectors.

 

China's Tech Giants Invest Heavily in AI (Late August 2024)

Chinese tech companies including Alibaba, Tencent, Baidu, and ByteDance have doubled their capital spending on AI infrastructure in the first half of 2024, reaching 50 billion yuan (£7 billion) combined. Despite US sanctions limiting access to top-tier Nvidia processors, these firms are purchasing over 1 million lower-performance Nvidia H20 chips. This significant investment highlights the growing competition in global AI development and raises questions about the future balance of AI capabilities between nations.

 

Chip Challengers Aim to Break Nvidia's AI Market Grip (28 August 2024)

Smaller companies like Cerebras, d-Matrix, and Groq are challenging Nvidia's dominance in the AI chip market, focusing on AI "inference" chips crucial for running models like ChatGPT. Cerebras has announced its "Cerebras Inference" platform, claiming to be 20 times faster than Nvidia's current generation at AI inference. This competition could lead to more rapid advancements in AI hardware and potentially lower costs for AI implementation across industries.

 

Nvidia's Strong Growth Continues (29 August 2024)

Nvidia reported that its revenue more than doubled to $30 billion in the past quarter, up 122% from a year ago. The company forecasts $32.5 billion in revenue for the next quarter and has authorized another $50 billion in share buybacks. This growth underscores the booming demand for AI chips and Nvidia's dominant position in the market, with implications for the broader tech industry and global competition in AI development.

 

Intel Faces Crisis in AI Chip Market (6 September 2024)

Intel is struggling to compete in the AI chip market, particularly against rival Nvidia. The company expects only $500 million in sales of its latest Gaudi 3 chips this year, far behind Nvidia's tens of billions in GPU sales. Intel is considering drastic changes, including potentially spinning off units and selling its foundry business. This situation highlights the challenges faced by established tech companies in adapting to the rapidly evolving AI landscape and the potential for significant industry restructuring.

 

Apple Launches iPhone 16 with AI Focus (10 September 2024)

Apple has unveiled the iPhone 16, emphasising its artificial intelligence capabilities. The new model features a novel A18 chip designed to handle AI tasks locally. Apple is introducing "Apple Intelligence" features, including an enhanced Siri, photo editing tools, and writing aids. A partnership with OpenAI will provide free access to ChatGPT on iPhones. These developments suggest a growing integration of AI into everyday devices, potentially transforming how we interact with technology and process information.

 

OpenAI Introduces New AI Models with Advanced Reasoning Capabilities (10 September 2024)

OpenAI has launched new AI models called "o1", claiming they are capable of advanced reasoning. These models, which will be integrated into ChatGPT Plus, reportedly outperformed existing models in complex problem-solving tasks. The o1 models use reinforcement learning to approach problems and take longer to analyse queries. This development could open up new possibilities for AI applications in scientific research and complex problem-solving, potentially revolutionising how we approach challenging intellectual tasks.

 

Microsoft Seeks Clarity on AI Chip Export Controls (17 September 2024)

Microsoft is seeking more clarity from the US government regarding export controls on AI chips to the Middle East. This comes after the company's £1.5bn investment in G42, a UAE-based AI company, to access markets in Africa and Asia. The situation highlights the complex interplay between technological advancement, international business, and geopolitical concerns. It underscores the challenges companies face in navigating global AI development while adhering to national security considerations.

 

AI Agents: The Next Generation of AI-Powered Assistants (21 September 2024)

Major software companies are shifting focus from AI copilots to more advanced "AI agents". These agents are designed to go beyond assistance, taking actions on behalf of users. They benefit from improved memory, enhanced planning capabilities, and integration with other systems through APIs. While initially promoted for simple tasks, some companies are exploring more complex applications, including customer support automation. This shift could potentially lead to significant changes in how we interact with software and could have far-reaching implications for productivity and job roles across various industries.

 

Leadership changes at OpenAI (26 September 2024)

OpenAI is undergoing significant changes in its leadership and corporate structure. The company is planning to restructure as a for-profit entity, potentially becoming a public benefit corporation. This shift from its original non-profit status to a more commercially focused organisation has led to several high-profile departures, including Chief Technology Officer Mira Murati, Chief Research Officer Bob McGrew, and Vice-President of Research Barret Zoph. The company is also in talks to give CEO Sam Altman an equity stake for the first time, as it raises over £6 billion at a £150 billion valuation. These changes reflect a broader shift in the AI industry towards commercialisation and product-oriented development, potentially impacting the direction and pace of AI research and development.

AI Regulation and Legal Issues

Summary:

Recent developments highlight ongoing challenges and progress in AI regulation and tech industry governance. Key points include:

  • Debates over AI safety bills and their potential impact on innovation

  • International agreements on AI standards, emphasising human rights and democratic values

  • Antitrust actions against major tech companies like Google and Apple

  • Investigations into data privacy concerns related to AI model development

  • Calls for mandatory safety testing of large AI language models

  • Continued competition and legal challenges in cloud computing and AI infrastructure

  • Concerns about regulatory restrictions potentially hindering AI progress in Europe

These trends underscore the complex balance between fostering AI innovation, ensuring safety and privacy, and maintaining fair competition in the rapidly evolving tech landscape.


California's AI Safety Bill Faces Opposition (22 August 2024)

OpenAI has criticised a California bill (SB 1047) aimed at ensuring the safe deployment of powerful artificial intelligence. The company argues that the bill threatens "California's unique status as the global leader in AI" and could lead to engineers and entrepreneurs leaving the state. This debate highlights the challenges of balancing innovation with safety concerns in AI regulation, and the potential impact on regional competitiveness in the AI industry.

 

US, Britain, and Brussels Sign AI Standards Agreement (6 September 2024)

The United States, European Union, and United Kingdom have signed the Council of Europe's convention on Artificial Intelligence, marking the first legally binding international treaty on AI use. The convention emphasises human rights and democratic values in regulating both public and private-sector AI systems. This agreement represents a significant step towards global AI governance, though critics point out that it lacks specific sanctions such as fines. The treaty joins other initiatives such as Europe's AI Act and the G7 deal, illustrating the growing push for cohesive international AI regulation.

 

Google Faces Antitrust Trial Over Ad Tech Monopoly (10 September 2024)

The US Department of Justice has accused Google of running a massive ad tech monopoly in a landmark antitrust trial. The government claims Google controls about 90% of the markets for ad servers and advertiser networks worldwide, with the company's cut reaching 37 cents of every advertising dollar when matching buyers and sellers. This case, part of a broader push to rein in Big Tech companies, could have significant implications for the digital advertising landscape and how tech giants operate in the future.

 

Apple Ordered to Pay €13bn in Back Taxes (11 September 2024)

The European Court of Justice has ruled that Apple must pay €13bn in back taxes to Ireland, overturning a 2020 decision by a lower court. The case stems from a 2016 decision by EU competition chief Margrethe Vestager, who said Ireland had given Apple an illegal sweetheart deal with a tax rate of less than 1%. This ruling represents a significant victory for EU regulators and could have far-reaching implications for how multinational tech companies structure their tax affairs in Europe.

 

Google Wins Appeal Against €1.5bn EU Competition Fine (18 September 2024)

Google has won an appeal against a €1.5bn competition fine imposed by the European Commission. The EU's General Court annulled the fine, despite accepting most of the Commission's assessments that Google had used its dominant position to block rival online advertisers. This case is one of three the EU has fought against Google, with total fines amounting to roughly €8.25bn. The decision highlights the complex and often protracted nature of antitrust cases in the tech industry, and the challenges regulators face in effectively policing digital markets.

 

Europe's Privacy Watchdog Probes Google's Use of Data for AI Model (12 September 2024)

Google is under investigation by Ireland's Data Protection Commission over its processing of personal data in the development of its Pathways Language Model 2 (PaLM 2) AI model. The inquiry aims to assess whether Google has breached its obligations under the EU's General Data Protection Regulation (GDPR). This investigation is part of a broader trend of data protection authorities taking action against Big Tech companies developing large language models, highlighting the growing tension between rapid AI advancement and data privacy concerns.

 

AI chat tool to be rolled out across NSW public schools to ease pressure on teachers (16 September 2024)

The New South Wales government has announced the launch of NSWEduChat, an AI-powered chat tool, in public schools starting from 14 October 2024. This tool, developed after the banning of ChatGPT in classrooms, aims to assist teachers by helping with tasks such as producing student resources, correspondence, and newsletters. A trial of the tool reportedly saved some teachers more than an hour a week. The government emphasises that NSWEduChat is designed to support, not replace, teachers and that it encourages critical thinking in students by asking follow-up questions rather than providing full answers.

 

It's time for limited, mandatory testing for AI (25 September 2024)

There's a growing call for mandatory safety testing of large AI language models, similar to how drugs are tested before public release. The UK's AI Safety Summit and AI Safety Institute are highlighted as positive steps towards AI regulation, with countries like the US, Singapore, Canada, and Japan following suit. Some AI companies, such as OpenAI and Anthropic, are voluntarily allowing the US and UK to test their models. However, the article argues for mandatory, independent, and rigorous testing of the largest AI models before public release, focusing on potential physical harms. This development underscores the increasing recognition of the need for robust AI safety measures and the potential for new regulatory frameworks in the AI industry.

 

Google files Brussels complaint against Microsoft cloud business (25 September 2024)

Google has filed an antitrust complaint with the European Commission against Microsoft, alleging unfair cloud computing practices. The complaint accuses Microsoft of leveraging its Windows software to lock customers into its Azure cloud services. Google claims that Microsoft imposes "steep penalties" on customers who want to use rival cloud providers, with moving Windows software to Azure being essentially free, while moving to a competitor incurs a 400% mark-up for new Windows server licences. This case underscores the ongoing competition in the tech industry and the complex legal challenges surrounding cloud services and AI infrastructure.

 

Europe needs regulatory certainty on AI (late September 2024)

Yann LeCun, VP & Chief AI Scientist at Meta, shares an open letter signed by industry leaders and academics, including Mark Zuckerberg, addressing AI regulation in the European Union. The letter emphasises the need for regulatory certainty to ensure Europe can contribute to and benefit from AI progress. It highlights that Meta's Llama has become a dominant platform for AI products, but regulatory restrictions in the EU are hindering the release of its next multimodal version. The signatories urge the EU to harmonise regulations to prevent the region from becoming a "technological backwater."

AI Market and Investment

Summary:

Recent developments highlight the significant impact of AI on the global economy and tech industry. Key points include:

  • Increased startup failures amid the AI boom, reflecting market volatility

  • Major investments and funding rounds for AI companies like OpenAI

  • New ventures into AI-related sectors by unexpected players (e.g., Lidl)

  • Growing focus on AI infrastructure, including data centres and energy projects

  • Concerns about resource demands (e.g., copper, electricity) for AI development

  • Strategic partnerships and potential mergers in the tech and AI sectors

  • Geopolitical implications of AI technology access and development

These trends underscore AI's transformative potential across industries, while also highlighting challenges related to infrastructure, resources, and market dynamics in the rapidly evolving AI landscape.

 

Start-up Failures Rise Amid AI Boom (19 August 2024)

The AI boom has brought both opportunities and challenges to the tech industry. While some companies are thriving, others are struggling to keep up. In the first quarter of 2024, 254 venture-backed clients of Carta went bust, representing a 60% increase in start-up failures over the past year. This bankruptcy rate is more than seven times higher than when Carta began tracking failures in 2019.  The collapses are attributed to several factors, including interest rate rises, plummeting venture capital investment, and diminished venture debt following Silicon Valley Bank's collapse. This trend highlights the volatility of the AI-driven tech market and the challenges faced by startups in a rapidly evolving landscape.

 

Lidl Ventures into Cloud Computing and Cybersecurity (23 August 2024)

In an unexpected move, Lidl, known for selling food staples at low prices, has ventured into cloud computing and cybersecurity services. The new business, called Schwarz Digits, was created under the oversight of Lidl's 80-year-old founder, Dieter Schwarz. Initially built for Lidl's internal use, it became a standalone business and has attracted significant clients, including SAP and Bayern Munich football club.

 

OpenAI in Talks for New Funding Round (29 August 2024)

OpenAI, the creator of ChatGPT, is in discussions to raise billions of dollars at a valuation exceeding $100 billion. Venture capital firm Thrive Capital is set to lead the round with a $1 billion investment. This potential valuation represents a significant increase from OpenAI's current $86 billion valuation and reflects the rapid growth and high expectations surrounding generative AI technology.

 

Hindenburg Research Targets Super Micro Computer (28 August 2024)

Activist short seller Hindenburg Research has released a report targeting Super Micro Computer, a company that has become symbolic of the AI stock market frenzy. The report alleges various issues, including accounting manipulation and potential circumvention of sanctions.

 

Blackstone to Acquire Australian Data Centre Business AirTrunk (3 September 2024)

US private equity group Blackstone is in final talks to acquire Sydney-based data centre business AirTrunk for A$20bn (US$13.5bn). This deal, potentially the largest transaction in Australia this year, is seen as a bet on the growth of cloud computing and artificial intelligence in the Asia-Pacific region.

 

OpenAI Co-founder's New "Safe" AI Start-up Raises £1bn (5 September 2024)

Ilya Sutskever, co-founder of OpenAI, has raised £1 billion for his new AI start-up, Safe Superintelligence (SSI). The funding round values the 3-month-old company at around £5 billion. SSI aims to build "safe" artificial intelligence models, focusing on creating a "straight shot to safe superintelligence".

 

Goldman Sachs Releases Follow-up Report on Generative AI (5 September 2024)

Goldman Sachs has released a report addressing concerns about potential overvaluation in the tech sector, particularly related to generative AI. The report argues that the strong performance of the tech sector since 2010 reflects stronger fundamentals rather than irrational exuberance. However, it warns that disruptive technologies typically go through a boom-bust-boom cycle.

 

BlackRock and Microsoft Plan $30bn Fund for AI Infrastructure (17 September 2024)

BlackRock is preparing to launch an artificial intelligence investment fund of more than $30 billion with Microsoft. The fund, called the Global AI Investment Partnership, aims to build data centres and energy projects to meet the growing power and infrastructure demands of AI development.

 

Titans' AI Alliance Aims to Dominate Next Tech Frontier (19 September 2024)

BlackRock, Microsoft, and MGX (an Abu Dhabi investment vehicle) are planning a $100 billion spending spree to build AI power infrastructure. The consortium aims to "enhance American competitiveness in AI while meeting the growing need for energy infrastructure to power economic growth".

 

BHP Warns AI Growth Will Worsen Copper Shortfall (16 September 2024)

BHP, the world's largest mining company, warns that the growth of artificial intelligence will exacerbate a looming shortage of copper. The rise of data centres and AI is expected to increase global copper demand by 3.4mn tonnes a year by 2050.

 

Electricity Infrastructure Emerges as Next Play for AI Investors (16 September 2024)

As demand for AI technologies grows, electricity providers are emerging as a new investment opportunity in the sector. Investors are looking beyond chip manufacturers to companies poised to benefit from AI's growing power demands, including renewable electricity generators and battery providers.

 

Microsoft Strikes Nuclear Power Deal to Meet AI Demand (26 September 2024)

Microsoft has struck a 20-year power supply deal with Constellation Energy to reopen Unit 1 of the Three Mile Island nuclear plant in Pennsylvania. The plant is set to come online in 2028 and will provide over 800MW of power to Microsoft until at least 2054.

 

Investors Pile into OpenAI's $6bn Funding Round (26 September 2024)

OpenAI is finalising a new fundraising round valuing the company at $150bn, aiming to raise $6bn or more. Major tech players including Apple, Nvidia, and Microsoft are in talks to join the funding round. Investors believe OpenAI could eventually be worth at least $1.5tn, more than Meta or Berkshire Hathaway.

 

Qualcomm Approaches Intel About Potential Takeover (21 September 2024)

Chipmaker Qualcomm has approached rival Intel about a potential takeover, though no formal offer has been made. If completed, the deal would surpass Microsoft's $69bn acquisition of Activision as the largest technology deal in history.

 

UAE President Meets Joe Biden in Push for More US AI Technology (24 September 2024)

Sheikh Mohamed bin Zayed al-Nahyan, the UAE's leader, met with US President Joe Biden in Washington to advance AI cooperation. The UAE is seeking easier access to US-made AI technology as part of its strategy to become an AI leader, with Biden granting the UAE "major defense partner" status to foster greater security ties.

AI in Education and Employment

Summary:

Recent developments highlight AI's growing impact on education and various other sectors. Key points include:

  • Innovative use of AI in schools for personalised learning and teacher support

  • AI's increasing influence on gaming, dating apps, and cancer diagnosis

  • Ongoing debates around mobile phone use in schools and social media age limits

  • AI applications in legal professions and job application processes

  • Pioneering AI-driven educational programmes such as the one at David Game College in London, where AI and learning coaches are taking over GCSE teaching

  • The need for more discussion on leveraging AI to transform education, focusing on equity and metacognition

These trends underscore AI's potential to transform learning experiences, while also raising important considerations about implementation, ethics, and the changing nature of education and work in an AI-driven world.

​

My trip to the frontier of AI education (9 July 2024)

Bill Gates recounts his visit to First Avenue Elementary School in Newark, New Jersey, where they are pioneering the use of AI in education. The school is piloting Khanmigo, an AI-powered tutor and teacher support tool developed by Khan Academy. Gates describes how teachers are creatively using AI to create personalised problem sets, develop lesson plans, and track student progress. He notes that while the technology is still in its early stages with some limitations, it shows incredible potential for enhancing education.

 

"No @magicschool AI, this is not what teachers want!" (1 August 2024)

Adeel Khan, founder of MagicSchool AI, shares an update regarding their AI tool for education. Initially named "AI Teacher Twin," the tool received mixed reactions from teachers. Some were enthusiastic about its potential as a resource when they were unavailable, while others expressed concerns about it potentially replacing teachers. In response to this feedback, MagicSchool AI decided to retire the original name and concept, replacing it with an "AI Resource Bot" that maintains similar functionality but removes the personality features that mimicked a teacher.

 

AI Could Change the Gaming Industry (18 August 2024)

The gaming sector is experiencing significant benefits from the AI boom, with Chinese gaming companies leading the charge in utilising AI tools for game development. AI is being employed for various tasks, including creating animations, generating realistic 3D backgrounds, game-development testing, and creating game missions. This trend is expected to enable faster and more cost-effective game releases, potentially leading to an increase in the number of games produced per year.

 

Pressure to Ban Mobile Phones in America's Schools Intensifies (19 August 2024)

Both Republican and Democratic politicians in the US are finding common ground on the idea of banning mobile phones in school classrooms. Research indicates that mobiles in class distract students, reduce performance, and can lead to cyberbullying.

 

Dating Apps Develop AI 'Wingmen' (31 August 2024)

Major dating apps including Tinder, Hinge, Bumble, and Grindr are developing AI-powered tools to assist users in various aspects of online dating. These AI "wingmen" are designed to help users craft better chat-up lines, build profiles, and provide feedback on flirting techniques.

 

UK's first 'teacherless' AI classroom set to open in London (31 August 2024)

This article reports on David Game College, a private school in London, opening the UK's first "teacherless" GCSE class using artificial intelligence instead of human teachers. The course, set to begin in September, will use AI platforms and virtual reality to provide personalised learning experiences for 20 students. The AI system adapts lesson plans based on each student's strengths and weaknesses, offering what the school claims is more precise and continuous evaluation than human teachers can provide. While the classroom will have "learning coaches" present for support and to teach subjects AI struggles with, the primary instruction will come from AI.

 

AI Breakthrough in Cancer Diagnosis (5 September 2024)

Harvard Medical School researchers have developed a new AI foundation model called CHIEF that can accurately detect multiple cancer types, assess treatments, and predict survival rates. CHIEF outperformed other AI diagnostic methods by up to 36% in various tasks and demonstrated an overall accuracy of almost 94% for cancer detection.

 

UK Spending on Pre-school Education Among Lowest of Advanced Economies (10 September 2024)

A report reveals that the UK spends only £5,700 per pupil aged 0-5 on pre-school education, compared to an OECD average of £9,700. While not directly related to AI, this underinvestment in early education could have implications for the UK's future workforce in an increasingly AI-driven economy.

 

Chats with AI Bots Found to Dampen Conspiracy Theory Beliefs (10 September 2024)

Research published in Science demonstrates that conspiracy theorists who engaged in debates with AI chatbots became more open-minded about their beliefs. Dialogues with the AI reduced participants' self-rated belief in their chosen theory by an average of 20%.

 

Australia Plans Minimum Age Limit for Social Media Use (10 September 2024)

The Australian government plans to introduce a minimum age limit for social media use this year, proposed to be between 13 and 16 years old. While not directly related to AI, this policy could have implications for how young people interact with AI-driven platforms and develop digital literacy skills.

 

In-house Legal Teams Start to See AI Gains (13 September 2024)

Many top in-house legal teams in Europe have moved beyond experimenting with generative AI to using it in everyday work. Companies report significant time savings and increased efficiency from using AI tools for tasks like contract review and legal research.

 

Technology Takes Class-action Lawsuits Out of the Slow Lane (13 September 2024)

Law firms are increasingly using technology, including AI, to manage large-scale litigation more efficiently. This trend is facilitating the rise of mass litigation in the UK and Europe.

 

AI Moves Along 'Hype Cycle' to Make Its Mark on Legal Profession (13 September 2024)

As ChatGPT approaches its second anniversary, the legal industry is reflecting on the impact of generative AI on its business model. Law firms are moving from grand pronouncements to practical evaluations of how AI tools can be most helpful in areas like email summarisation and document drafting.

 

How AI is Generating a 'Sea of Sameness' in Job Applications (26 September 2024)

The widespread use of generative AI and improved digital tools for creating CVs and cover letters has led to a "sea of sameness" in job applications. While these tools make it easier to create sophisticated applications, they're also making it harder for individual applicants to stand out.

 

GCSE AI Adaptive Learning Programme: The Sabrewing Programme (Late September 2024)

David Game College in London is launching the Sabrewing Programme, a pioneering GCSE course that uses AI-driven adaptive learning platforms to teach core subjects. This one-year intensive programme, starting on 23rd September 2024, is designed for students aged 15-17. It offers personalised learning pathways, allowing students to progress at their own pace under the guidance of dedicated learning coaches. The programme covers core academic subjects in the morning and includes a 500-hour life skills programme in the afternoon, focusing on areas such as critical thinking, entrepreneurship, and digital literacy.

 

Wicked Opportunities: Leveraging AI to Transform Education (Late August 2024)

This report, published by the Center on Reinventing Public Education (CRPE), summarises insights from a forum on AI in education held in April 2024. The forum brought together policymakers, edtech innovators, school system leaders, and advocates to discuss how AI can drive positive change in education. The report emphasises the need for urgency and bold leadership in shaping AI's impact on student learning, particularly in promoting equity and access for historically marginalised communities.

 

The Metacognition Revolution: AI's Central Role in Reshaping How We Learn

This article explores how AI is transforming education by enhancing students' metacognitive abilities: their capacity to think about their own thinking processes. It presents examples from various educational institutions where AI is being used to create simulations, build chatbots, and personalise learning experiences. The article emphasises a shift from content mastery to understanding the learning process itself, with AI tools playing a crucial role in this transition.

AI Ethics and Societal Impact

Summary:

Recent developments in AI highlight its growing impact across various sectors and the associated challenges. Key points include:

  • Early emerging tools for enhancing critical thinking in AI interactions

  • Tensions between tech giants and governments over AI regulation

  • Debates about AI's economic impact and job displacement potential

  • Ongoing legal challenges regarding copyright and data use in AI training

  • AI's influence on creative industries, finance, retail, and military applications

  • Increasing calls for ethical AI development and safety testing

These trends underscore the need for balanced discussions, updated regulations, and ethical frameworks to guide AI development and implementation across different domains.

​

“What's up, bot?” Exposing the assumptions of your GenAI prompts (1 July 2024)

Simon Buckingham Shum's article introduces an innovative AI prompt designed to enhance critical thinking in educational settings. The prompt instructs chatbots to identify implicit assumptions in users' questions rather than providing immediate answers, encouraging deeper reflection and more thoughtful inquiry. Presented as an Open Educational Resource (OER), this tool is adaptable across various AI platforms, allowing educators to customise it for different contexts and subjects.

 

Political Leaders Urged to Push Back Against Tech Bullies (Date not specified)

Tech executives are increasingly challenging officials and governments over proposals that don't align with their business models. Notable examples include Elon Musk's inappropriate response to EU commissioner Thierry Breton's letter about content moderation, and his threats to discontinue Starlink services in Ukraine. The article argues that companies should face consequences for aggressive behaviour towards democratic processes. This situation highlights the growing tension between tech giants and governments, raising questions about the balance of power in the digital age and the need for robust governance frameworks to manage these relationships.

 

Using Fear to Sell AI is a Bad Idea (27 August 2024)

Despite dire predictions made 18 months ago, generative AI has not caused the widespread job losses or societal upheaval that many feared. The article critiques Silicon Valley's approach to AI marketing, which often involves fearmongering as a form of promotion. This trend underscores the importance of balanced, fact-based discussions about AI's potential impacts and the need for critical thinking when evaluating claims about emerging technologies. It also highlights the responsibility of tech companies and media to provide accurate, non-sensationalised information about AI developments.

 

Rethinking the AI Boom (2 September 2024)

Daron Acemoğlu, Professor of Economics at MIT, offers a nuanced perspective on AI's potential impact on the economy and society. He suggests that AI's influence could range from minimal to highly transformative, depending on how it's developed and implemented. Acemoğlu advocates for redirecting AI development towards technologies that complement human workers rather than replace them. This viewpoint emphasises the importance of thoughtful AI development strategies that prioritise human-AI collaboration and the enhancement of human capabilities, rather than wholesale automation.

 

AI Hit by Copyright Claims (31 August 2024)

Leading AI companies are facing numerous copyright lawsuits and accusations of aggressive data scraping from the web. This situation raises critical questions about intellectual property rights in the age of AI and the ethical implications of training AI models on copyrighted material. These legal challenges could significantly impact the future development of AI technologies and highlight the need for updated copyright laws and ethical guidelines that address the unique challenges posed by AI's data requirements.

 

James Manyika on AI's Productivity Potential (Early 2024)

James Manyika, Google's senior vice-president for research, technology and society, discusses AI advancements and their potential impact. He emphasises that the economic potential of AI, estimated in the trillions, is not guaranteed and will require significant work, innovation, and investment to realise. Manyika predicts AI will assist professionals like doctors and teachers rather than replace them entirely. This perspective underscores the importance of viewing AI as a tool for augmenting human capabilities and the need for strategic investment and innovation to fully realise AI's potential benefits.

 

AI: Too Much Information? (7 September 2024)

Yuval Noah Harari, in his book "Nexus", challenges the notion that more information automatically leads to more open and prosperous societies. He argues that information proliferation can be exploited to impose societal order or inflame disorder, and that the ways we curate and process information are crucial. Harari describes AI as "alien intelligence," potentially beyond human control. This perspective highlights the need for critical information literacy and a deeper understanding of AI's potential societal impacts, emphasising the importance of developing robust frameworks for managing and interpreting the vast amounts of information generated in the AI age.

 

Not Everyone Needs to Have an Opinion on AI (10 September 2024)

The article discusses the controversy surrounding National Novel Writing Month (NaNoWriMo) and its stance on AI in writing. It argues that organisations should have fewer opinions, especially on matters outside their control. This case highlights the complex ethical considerations surrounding AI use in creative fields and the challenges organisations face in navigating these issues. It underscores the need for thoughtful, nuanced approaches to AI integration in various domains, rather than blanket acceptance or rejection.

 

TV News Overtaken by Digital Rivals for First Time in UK (10 September 2024)

Television has ceased to be the main source of news in the UK for the first time since the 1960s, with 71% of adults now obtaining news online. While not directly related to AI, this shift has significant implications for how AI might be used in news dissemination and consumption in the future. It underscores the need for digital literacy education to help people navigate an increasingly online information landscape, potentially shaped by AI technologies. This trend also raises questions about the role of AI in content curation and the potential for personalised news experiences.

 

Sony to Expand Japanese Anime's Fan Base Using AI (13 September 2024)

Sony is leveraging AI to expand the global fan base of Japanese anime. AI tools are evolving rapidly in anime production, progressing from enhancing visual effects to potentially generating entire animated films based on scripts. This development highlights both the creative potential of AI and the ethical considerations around copyright and the role of human creativity in AI-assisted art production. It raises questions about the future of creative industries and the balance between AI-driven efficiency and human artistic expression.

 

AI Risks Making Some People 'Uninsurable', Warns UK Financial Watchdog (19 September 2024)

Nikhil Rathi, chief executive of the Financial Conduct Authority, warned that AI use in the insurance industry could make some people "uninsurable". Despite the risks, Rathi called for the sector to be more "willing to experiment" with new technology. This case highlights the potential for AI to exacerbate existing inequalities and the need for careful regulation to ensure fair access to essential services in an AI-driven economy. It underscores the importance of developing ethical AI frameworks that prioritise inclusivity and fairness.

 

Tesco Plans to Expand Use of AI to Personalise Shopping (17 September 2024)

Tesco, the UK's largest supermarket, plans to significantly expand its use of artificial intelligence to personalise shopping experiences. While this could offer benefits like reducing waste and suggesting healthier choices, it also raises privacy concerns due to the extensive use of personal data. This development highlights the potential for AI to transform everyday experiences like grocery shopping, while also underscoring the need for robust data protection measures and transparent AI practices in consumer-facing applications.

 

War in the Age of AI Demands New Weaponry (21 September 2024)

Eric Schmidt, former CEO of Google, argues that global military expenditure should focus on affordable, attritable, and abundant weapons systems, creating opportunities for start-ups and defence unicorns. This perspective highlights the growing role of AI in military and defence applications, raising important ethical questions about the use of AI in warfare and the potential for AI to change the nature of global conflicts. It underscores the need for international dialogue and agreements on the ethical use of AI in military contexts.

 

OpenAI Acknowledges New Models Increase Risk of Misuse to Create Bioweapons (13 September 2024)

OpenAI's announcement of its new o1 models, with advanced reasoning capabilities, has brought critical ethical concerns to the forefront. The company's acknowledgement that these models have "meaningfully" increased the risk of AI being misused to create biological weapons highlights the urgent need for robust ethical frameworks and safety measures in AI development. This situation underscores the importance of responsible AI development practices and the need for proactive measures to mitigate potential misuse of advanced AI technologies.

 

It's Time for Limited, Mandatory Testing for AI (25 September 2024)

There's a growing call for mandatory safety testing of large AI language models, similar to how drugs are tested before public release. This push for regulation highlights the need to balance technological progress with public safety and ethical considerations. The proposal for AI companies to monitor and report misuse of their models, similar to pharmaceutical industry practices, indicates a shift towards greater corporate responsibility in the AI sector. This development emphasises the importance of establishing standardised safety protocols and regulatory frameworks for AI development and deployment.

AI and Cybersecurity

Summary:

The tech industry is experiencing significant shifts due to AI advancements, impacting workforce dynamics and cybersecurity. Key developments include:

  • Microsoft considering changes to Windows security after a major IT outage

  • Growing concerns about AI-driven job displacement, especially in financial services

  • Intel facing a crisis as it struggles to compete in the AI chip market

  • The emergence of more advanced AI assistants potentially replacing jobs in various sectors

These trends highlight the need for continuous skill adaptation, updated education curricula, and proactive measures to address AI's impact on the workforce and cybersecurity landscape.

 

Microsoft Plans Windows Security Overhaul (24 August 2024)

Microsoft is planning significant changes to Windows security procedures following a global IT outage caused by a faulty CrowdStrike update on 19 July 2024. The company is considering several options, including potentially blocking access to the Windows kernel for third-party security software. This development highlights the ongoing challenges of maintaining cybersecurity in an increasingly AI-driven world and the potential need for updated IT security strategies. It underscores the delicate balance between system security and software interoperability, and the need for robust testing and failsafe mechanisms in critical software updates.

AI Workforce and Labour Market

Summary:

Key points include:

  • Growing concerns about AI's impact on employment, especially in financial services

  • UK unions pushing for legislation to regulate employers' use of AI in the workplace

  • Calls for retraining programmes and increased AI literacy to help workers adapt

  • Intel's crisis in the AI chip market, leading to layoffs and restructuring

  • Shift in the tech industry landscape due to the AI boom, with Nvidia overtaking Intel

  • Emergence of advanced AI agents capable of complex tasks, potentially replacing jobs in areas like customer support

  • Need for workers to develop skills that complement AI rather than compete with it

  • Potential restructuring of industries towards smaller, specialised human workforces supported by AI systems

  • Increasing importance of adaptability, lifelong learning, and AI-specific skills in the job market

  • Growing demand for education in AI ethics and governance

These developments highlight the significant impact of AI on the workforce and labour market, emphasising the need for continuous learning, skill adaptation, and evolving education systems to keep pace with rapid technological changes.

 

 

Job Displacement Concerns and Union Warnings (27 August 2024)

As artificial intelligence continues to advance, concerns about its impact on employment are growing. UK unions are set to warn financial services groups about the need to fund retraining programmes for millions of employees whose jobs could be displaced by AI. A Citigroup report from June 2024 paints a stark picture, suggesting that up to 54% of banking jobs and 48% of insurance roles could be at risk.

 

Trade union leaders are planning to increase pressure on Labour ministers to introduce legislation regulating employers' use of AI. The Trades Union Congress (TUC) has already published a blueprint bill proposing new legal rights, including transparency when employers use AI and protections against unfair dismissal by the technology.

​

Unite, one of the largest unions, warns that AI is increasingly being used to control workers through observation, with low-paid, outsourced staff from ethnic minority backgrounds being the most vulnerable. While the Labour government is working on an AI bill, it is expected to focus primarily on safety testing and government oversight of advanced AI models rather than addressing workplace concerns.

 

This development highlights the need for a proactive approach to managing AI's impact on the workforce. It suggests that education systems may need to adapt rapidly, offering upskilling and reskilling programmes to help workers transition to new roles. There's also a growing need for AI literacy across all levels of education, preparing students for a future where AI is an integral part of the workplace.

 

Intel's Crisis and Workforce Restructuring (6 September 2024)

Intel, once a dominant force in the chip industry, is facing a significant crisis as it struggles to compete in the AI chip market. The company's difficulties have led to drastic measures, including thousands of lay-offs and a $10 billion cost-cutting effort announced in August 2024. This situation highlights the rapid shifts occurring in the tech industry due to the AI boom and the challenges faced by established companies in adapting to these changes.

 

Intel's market capitalisation has fallen to $83 billion, while rival Nvidia's has grown to $2.6 trillion, underscoring the dramatic shift in the industry landscape. The company's foundry business lost $7 billion in 2023, and ambitious sales targets set in 2022 have not been met.

 

This case illustrates the potential for AI-driven market changes to cause significant workforce disruptions, even in well-established tech companies. It emphasises the need for workers to continually adapt their skills to remain relevant in a rapidly evolving tech landscape. Educational institutions may need to focus more on teaching adaptability and fostering a mindset of lifelong learning to prepare students for such a dynamic job market.

 

The Intel crisis also highlights the growing importance of AI-specific skills in the tech industry. Engineering and computer science programmes may need to update their curricula to focus more on AI chip design and manufacturing processes. Additionally, there may be increased demand for retraining programmes to help workers in the semiconductor industry adapt to the shift towards AI-focused chip production.

 

Move Over Copilots: Meet the Next Generation of AI-Powered Assistants (22 September 2024)

The advent of more advanced AI agents is set to significantly impact the workforce and labour market. While initially promoted for simple, routine tasks, some companies are exploring the use of these AI agents for more complex tasks, potentially replacing jobs in areas like customer support.

 

This shift has profound implications for the labour market. Workers may need to adapt their skills to complement AI capabilities rather than compete with them. Educational institutions will need to prepare students for a job market where AI agents are increasingly common, focusing on developing skills that are less likely to be automated, such as complex problem-solving, creativity, and emotional intelligence.

​

The article mentions that some companies, like Klarna, claim to be using AI to significantly reduce their workforce and replace traditional software solutions. This trend suggests a potential restructuring of many industries, with a shift towards smaller, more specialised human workforces supported by AI systems.

 

For educators, this means a need to continually update curricula to reflect the changing demands of the job market. There may be increased emphasis on teaching students how to work alongside AI systems effectively, as well as developing the skills needed to manage and oversee AI-driven processes. Additionally, there could be growing demand for courses in AI ethics and governance, as the widespread adoption of AI in the workforce raises important ethical and societal questions.

 

These developments underscore the profound impact AI is having on the workforce and labour market, particularly in the tech sector. They suggest a future where continuous learning and adaptability will be crucial for career success, and where education systems will need to evolve rapidly to keep pace with technological change.

Further Reading: Find out more from these free resources

Free resources: 

  • Watch videos from other talks about AI and Education in our webinar library here

  • Watch the AI Readiness webinar series for educators and educational businesses 

  • Listen to the EdTech Podcast, hosted by Professor Rose Luckin here

  • Study our AI readiness Online Course and Primer on Generative AI here

  • Read our byte-sized summary, listen to audiobook chapters, and buy the AI for School Teachers book here

  • Read research about AI in education here

About The Skinny

Welcome to "The Skinny on AI for Education" newsletter, your go-to source for the latest insights, trends, and developments at the intersection of artificial intelligence (AI) and education. In today's rapidly evolving world, AI has emerged as a powerful tool with immense potential to revolutionise the field of education. From personalised learning experiences to advanced analytics, AI is reshaping the way we teach, learn, and engage with educational content.

 

In this newsletter, we aim to bring you a concise and informative overview of the applications, benefits, and challenges of AI in education. Whether you're an educator, administrator, student, or simply curious about the future of education, this newsletter will serve as your trusted companion, decoding the complexities of AI and its impact on learning environments.

 

Our team of experts will delve into a wide range of topics, including adaptive learning algorithms, virtual tutors, smart classrooms, AI-driven assessment tools, and more. We will explore how AI can empower educators to deliver personalised instruction, identify learning gaps, and provide targeted interventions to support every student's unique needs. Furthermore, we'll discuss the ethical considerations and potential pitfalls associated with integrating AI into educational systems, ensuring that we approach this transformative technology responsibly. We will strive to provide you with actionable insights that can be applied in real-world scenarios, empowering you to navigate the AI landscape with confidence and make informed decisions for the betterment of education.

 

As AI continues to evolve and reshape our world, it is crucial to stay informed and engaged. By subscribing to "The Skinny on AI for Education," you will become part of a vibrant community of educators, researchers, and enthusiasts dedicated to exploring the potential of AI and driving positive change in education.

bottom of page