As I look at the session offerings for this year’s ISTE conference, I am concerned to see Alexa-focused sessions about how to incorporate the device into your classroom.
I have an Amazon Dot in my kitchen, which I purchased on a whim a few years ago and use for listening to podcasts, checking the weather, and sometimes playing “Everything Is Awesome” from The Lego Movie for my almost-five-year-old son. I have a love/hate relationship with it. On one hand, it’s easy to say “Alexa, play…” and have nearly anything I can think of come out of that speaker. On the other, when Alexa wakes up when clearly no one has said her name, I wonder, “How long has she been listening?”
In reality, we don’t know how long Alexa has been listening or how much audio gets sent to Amazon. You can see what data Amazon has on you through the app, but it’s harder to know what is collected when you use third-party apps created for the device. The same is true for Amazon’s competitor, Google Home.
Under COPPA, companies may not collect data from children under the age of 13 without parental consent. When we put these devices in our classrooms, we are putting our students’ privacy at risk. When we do so without parental consent, we are potentially breaking the law.
Aside from the privacy concerns, I see the current application of this technology as a poor excuse for innovation. One company touts its smart speaker app as a great way to support social-emotional learning, but then lists “set a timer, brain breaks, transition time, class directions, random picker” as the app’s features. None of these requires risking student privacy; they make a teacher’s job a little easier, but they are gimmicks. More worrisome is the social-emotional aspect of the tool: if a student is feeling angry, just have them tell the smart speaker, and it will help them calm down! So now a child, potentially under the age of 13, could be sending personal data about their emotional state to a company without their consent or the consent of their parents. The company also depends on teachers to monitor and delete saved voice recordings.
A great application of AI in the classroom would be helping students understand and recognize the AI technologies that exist in their everyday lives and how these tools work. AI is far more than Siri, Alexa, or Google Home; machine learning is embedded in much of the software we already use. These tools require enormous amounts of data in order to “learn.” This is how a machine beat the world champion of the game Go: it studied massive numbers of previous Go games in order to teach itself the best strategies to win. This is something to consider when we use AI of any type with our students: how could our students’ data be mined to make the technology “smarter”?
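That idea of “learning” from data can be made concrete for students with a toy sketch: a tiny next-word predictor whose only “smarts” come from the example sentences it is fed. This is a hypothetical classroom illustration, not how any commercial assistant actually works, and every name in it is made up for the example.

```python
from collections import Counter

def train(sentences):
    """Count which word follows each word across the training data.

    The "model" is nothing but these counts -- more data means
    better guesses, which is the point of the demonstration.
    """
    follows = {}
    for sentence in sentences:
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            follows.setdefault(current, Counter())[nxt] += 1
    return follows

def predict(model, word):
    """Guess the word most often seen after `word`, or None if unseen."""
    counts = model.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

model = train([
    "line up for lunch",
    "line up for recess",
    "line up for lunch now",
])
print(predict(model, "for"))  # "lunch" wins 2-to-1 over "recess"
```

Students can feed the trainer their own sentences and watch the predictions shift, which makes the connection between the data collected and the behavior of the tool visible in a way no smart-speaker timer ever will.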
Students can learn more about AI technology by delving into it themselves. This could be done, for instance, through robotics activities involving sensors that help students see how AI works in self-driving cars. They can compare deepfake videos with the original source footage to see how AI is used to manipulate audio and video into something new. Have them think of ways AI could simplify systems or streamline processes, and have them design prototypes. Discuss the ethics around these technologies, including the privacy concerns, the bias in the data fed to these tools, and the potentially disastrous mistakes they make when we trust algorithms just a *little* too much.
There are meaningful ways to bring this technology into the classroom. I’m not sure why we need Alexa to tell our class to line up for lunch, but I am sure that the more kids understand this invisible technology, the better off they are and the greater the potential for this technology to be inclusive and ethical.
I have not read it yet, but Michelle Zimmerman has a guide to AI in the classroom available from ISTE.