EVR partners with The EdTech Podcast
EVR and the EdTech Podcast have joined forces to present a collection of series exploring education, evidence, AI and EdTech, with host Professor Rose Luckin.
Tune in twice a month to hear from special guests and expert speakers, including teachers, researchers, parents, developers, young people, and those at the forefront of innovation in teaching and learning.
Listen to the Latest Episode:
What’s in this episode?
In today’s episode, we have the first part of a two-part miniseries on risk management, risk mitigation and risk assessment in AI learning tools. With Professor Rose Luckin away speaking in Australia, Rowland Wells takes the reins to chat with Educate Ventures Research team members about their experience managing risk as teachers and developers. What does a risk assessment look like, and whose responsibility is it to take its insights on board? Rose joins the discussion towards the end of the episode, and in the second instalment of the conversation, Rowland sits down with Dr Rajeshwari Iyer of sAInaptic to hear her perspective, as a developer and CEO herself, on risk and on testing the features of a tool.
View our Risk Assessments here.
In the studio:
- Rowland Wells, Creative Producer, EVR
- Dave Turnbull, Deputy Head of Educator AI Training, EVR
- Ibrahim Bashir, Technical Projects Manager, EVR
- Rose Luckin, CEO & Founder, EVR
Talking points and questions include:
- Who are these risk assessments for? What’s the profile of the person we want to engage with them? They’re concise, easy to read and free of technical jargon, but they’re still analyses, aimed at people with a research and evidence mindset. And many people ignore such material: we know that even when learning tool developers publish research about their tools on their own websites, the public rarely reads it. So how do we get these assessments in front of people? Do we lead the conversation with budget concerns? Safeguarding concerns? Value for money?
- What’s the end goal? Are you trying to raise the sophistication of the conversation around evidence and risk? Many developers you critique might just think you’re trying to make a name for yourselves by pulling apart their tools. Surely the market will sort itself out?
- What’s the process involved in making judgements about a risk assessment? If we’re trying to show the buyers of these tools, the digital leads in schools and colleges, what to look for, what’s the first step? Can this be done quickly? Many who might benefit from AI tools don’t have the time to exhaustively hunt out every detail of a learning tool and interpret it themselves.
- Schools aren’t testbeds for intellectual property or tech interventions. Why is it practitioners’ responsibility to make these kinds of evaluations, even with the aid of these kinds of assessments? Why is the tech and AI sector not capable of regulating its own practices?
- You’ve all worked with schools and other learning and training institutions using AI tools. Although this episode is about using the tools wisely, effectively and safely, tell us how you’ve seen teaching and learning enhanced by the safe and impactful use of AI.
Our host:
- Rowland Wells, Creative Producer, EVR
Our guests:
- Professor Rose Luckin, Founder and CEO, EVR
- Dave Turnbull, Deputy Head of Educator AI Training, EVR
- Ibrahim Bashir, Technical Projects Manager, EVR
This month's bonus episode:
What’s in this episode?
In the second episode of our two-part miniseries on risk management, risk mitigation and risk assessment in AI learning tools, with Professor Rose Luckin still away speaking in Australia, Rowland Wells again takes the reins, sitting down with Dr Rajeshwari Iyer of sAInaptic to hear her perspective on risk as a developer and CEO.
View our Risk Assessments here.
In the studio:
- Rowland Wells, Creative Producer, EVR
- Rajeshwari Iyer, CEO and Cofounder, sAInaptic
Sponsorship!
If you’d like to support a single episode or a set of them, and put your product and your CEO in front of listeners across the US, the UK, Australia, Germany, Canada and more, reach us via the button below!
Come on the show!
Want to be involved in the conversation around EdTech development and use? If you're a developer or entrepreneur with a research journey or story of evidence that you'd like to share with us, fill out the form below!