Navigating the Risky World of AI in Education

By Matt Sherrill 
Regional Educational Technology Coordinator
Learning Technology Center

Artificial Intelligence (AI) is setting the education world on fire.

An ambiguous statement, to say the least. It can be interpreted with the positive connotations we associate with Tim Kitzrow’s iconic voice calling across time and space with the nostalgic cry of “He’s on fire!” Or, it can be interpreted with all the negative connotations that accompany the catastrophic force of a house fire incinerating a lifetime of memories and possessions.

Much like fire can be both a foundational building block of civilization and a destructive force that burns it to the ground, AI has the capability of being a boon or bane for education.

Fire, recklessly wielded, can quickly become uncontrollable, leaving firefighters scrambling to salvage what they can from the flames. Similarly, AI recklessly implemented or unsupervised in a school district can quickly leave administrators and educators scrambling to contain their own nightmare of data breaches, mental health crises and erosion of community trust.

However, when these tools are implemented and utilized thoughtfully, with proper oversight and precautions in place, the benefits can be revolutionary. So, how do we leverage this tool for all its potential benefits, while mitigating the risks that accompany it? How do we leverage the heat of the fire without getting burned?

First, we need to address the potential risks and understand that these threats don’t simply go away when districts ignore or block AI. For school business officials, these risks can materialize in three familiar domains: central office, information technology systems and the classroom.

Central Office (Operational Risks)
Perhaps the biggest draw of AI for many districts is the operational efficiency the technology promises.

An overwhelmed HR department using an AI-powered tool can screen hundreds of applications for open teaching positions in record time, streamlining the hiring process. A school business official (SBO) using AI can instantly analyze complex vendor agreements, privacy policies or newly implemented state mandates and get quick answers about compliance deadlines.

However, with these efficiencies come significant risks if human oversight is lacking. For example, that resume screening tool that saved your HR department time and sanity? It could be trained on flawed data that over-represents a particular demographic. Because of this “algorithmic bias,” the tool could systematically favor certain applicants, and that time savings could open your district up to a sea of headaches and lawsuits.

Now consider the AI tool the SBO used to analyze and summarize complex documents. The AI-generated outputs, while confident-sounding, could be flawed, inaccurate or completely fabricated. Welcome to the world of AI hallucinations! An SBO basing decisions on those flawed outputs could miss an important compliance deadline or build a budget forecast on the AI’s faulty assumption about the data.

Hope is not lost, though. There are strategies and approaches a district can take to leverage the benefits AI technology offers at an operational level while mitigating (at least in part) the associated risks.

First, be sure to have a solid vetting process for vendors and tools during the procurement phase. While many of your district’s existing RFP or procurement questions for technology are important and apply to AI-specific vendors and tools (for example, how user data is stored and used by the company), you may consider adding specific questions such as the following: 
• How do you audit your system for algorithmic bias? 
• Where was the training data for your system sourced? 
• Can you explain in simple terms how your tool makes recommendations? 

Other organizations (including Common Sense Media and the Future of Privacy Forum) have also published questions and considerations for vetting AI tools that your district may want to consult.

Next, ensure all potential AI users in the district understand what tools are approved for staff use, why those tools are approved and why others are not. (More on this later.)

Finally, and perhaps most importantly, establish a “human-in-the-loop” policy for all AI use. Because of the known risks of bias and hallucinations, no task should be completely automated by AI without human oversight, verification and judgment. AI technology can be extremely beneficial when it augments or amplifies human capabilities, but it should never replace human responsibilities.

Unfortunately, even comprehensive internal policies cannot prevent the external threats that AI can pose to school districts, especially to their systems and networks.

IT Systems and Networks (Systemic Risks)
Cyberattacks on school districts have skyrocketed in recent years. Bad actors looking to access valuable, sensitive data are specifically targeting school districts because of the treasure trove of personal data that they collect. One click of a link by an unsuspecting employee, and the keys to the server room have been handed over. 

Once those bad actors have gained access, they can sell that data, hold it for ransom until the district pays a hefty price, or both. Couple that with the relatively small budgets districts allocate toward safeguarding their servers and digital infrastructure, and it’s a recipe that keeps IT directors up at night.

AI only supercharges these cyberattacks. Just a few short years ago, teachers and staff were trained to look for the “tells” of phishing emails such as bad grammar, poor spelling and obvious “scammy” requests. Now, those same phishing attempts can appear flawless and hyper-personalized.

Cyberattacks don’t just take the form of emails anymore, either. Through free, readily available AI tools, a bad actor can clone anyone’s voice and call employees directly. Instead of a suspicious email, the SBO gets a phone call from someone who sounds just like the superintendent, asking for sensitive information.

So, how do school districts mitigate these external risks? 

First and foremost, ensure your district has a comprehensive incident response plan established for if and when a cyberattack occurs, and ensure all stakeholders know their roles and responsibilities as outlined in the plan. Many school districts don’t have a concrete plan or procedure for a threat that is constantly at their doorstep.

Next, update your staff training on data security. The old phishing warnings don’t cut it in the age of sophisticated, AI-powered scams. The red flags have changed: instead of bad grammar and spelling, scammers now rely on urgency and emotion, and their scams can materialize through previously “safe” modes of communication. Develop a policy for verifying identities through known channels. For example, if your superintendent calls you in a panic, telling you to transfer funds to a new vendor account before the district’s liability insurance lapses, tell her you’ll call her back on her cell phone.

Some companies and organizations have gone so far as to channel a ’90s family-safety favorite: the “pass phrase.” (You can thank Barbara Walters and the 20/20 team for that one.) If you get any out-of-the-ordinary request, such as one for log-in credentials or a funds transfer, the person making it must provide that week’s pass phrase, or you don’t execute the request.

Finally, it’s never a bad idea for the district to consider a data minimization plan. Audit the forms sent to parents from the district, school and classroom. What information is being requested and collected? How much of that information is legally required, and how is it stored? Remember, data that isn’t collected can’t be breached.

Obviously, all these mitigation strategies are best done in collaboration with your IT department. Too often, there is a disconnect between those protecting the district’s systems and data and those whose data they are trying to protect.

The Classroom (Liability and Student Safety Risks)
When discussing the integration of AI at the classroom level, the conversation almost always revolves around the fear of cheating or “cognitive offloading,” the decrease in mental effort and skill development that occurs when students use AI tools to bypass the learning process. That is absolutely a risk at the classroom level, and there are plenty of widely discussed tools and strategies that can help mitigate it.

However, another risk that has been gaining attention is the impact of “AI companion” apps on the mental health and safety of young people. Growing concerns, and lawsuits against AI companies over the problematic “advice” these apps can offer teens, have raised alarm bells in school districts, prompting many to re-examine their approved AI tools.

Here again, a strong vetting and procurement process is a key strategy for mitigating this risk. A comprehensive set of vendor questions that interrogates the safety measures in place for students is vital. Equally important are clear policies and guidelines that educate students, parents and staff on the approved tools and their intended uses.

A less-discussed risk at the classroom level is the concept known as “Shadow AI.” This is the term used to describe an employee’s use of AI tools that have not been vetted or approved by the district. This is one of the most common risky behaviors taking place in school districts around the state.

Teachers, already bogged down with a mountain of responsibilities and to-dos, are quick to adopt any tool promising efficiency and relief. Consider, for example, a “free” AI tool built to help teachers grade student work and deliver comprehensive, growth-oriented feedback in record time. Yes, please!

The problem is that the teacher is uploading student information and work into a tool that may or may not comply with relevant student data privacy laws (e.g., the Student Online Personal Protection Act, or SOPPA). Fortunately, many (but definitely not all) AI tools designed specifically for education offer legally compliant student data protection, even on their free plans. Because this is not universal, however, there is always a risk of Shadow AI cases turning into court cases for the district.

To address this risk, it’s important that all district stakeholders are aligned. Developing a “walled garden” of properly vetted tools is an important first step, along with consistent communication regarding approved tools. Your district leaders should ask the following:
• Are your teachers aware of the district-approved tools? 
• Are they trained on why this is so important and the potential consequences of recklessly sharing student data? 
If not, the result is a familiar tension between teachers, the district and the IT department. Blocking resources without a clear explanation for the restriction leads to assumptions, annoyance and potential animosity.

As with the precautions for central office staff, teachers need to be informed and trained on these concepts if they are going to use AI. Developing policy, guidelines and sound vetting and procurement practices is imperative to leveraging the benefits of AI responsibly in a school district.

AI Literacy: A District’s Fire Safety Plan
Earlier, I compared AI to fire. Like fire, AI is a powerful force that can provide many benefits, but one that can lead to devastation if not handled with care. Given the risks that AI brings to a school district (both internal and external), one might wonder whether it’s worth bringing into the district environment at all. That’s a fair question. However, I’d like to make one more comparison.

The internet.

A tool that is itself full of problematic, inappropriate and potentially dangerous content, and that brings substantial risk to a school district. (Remember all that talk about cyberattacks?) Yet we hand almost every staff member and student a device that gives them direct access to this resource because of all the potential benefits it can provide.

This isn’t done haphazardly. Systems are put in place, policy is developed and we equip students and staff with the skills and knowledge to use the internet ethically, responsibly and safely (e.g., digital citizenship instruction, media/digital literacy, etc.).

Navigating AI in education is a similar venture. There are benefits, and there are risks. The key is putting systems, safeguards and policies in place, along with a plan to equip all stakeholders with the skills and knowledge they need to navigate it safely and responsibly. This is the importance of AI literacy.

Perhaps the single most important step a district can take in mitigating the risks associated with AI is helping staff and students develop this literacy. Whether that happens through free online courses, in-house professional learning or outside experts consulting with leadership to facilitate professional development, the districts that take the time and care to develop collective AI literacy among their staff will be the ones best equipped to harness the heat of the fire without getting burned.