School social media checks in a world of AI

In the UK, it’s increasingly common for schools to carry out social media checks on candidates invited for interview. The intention behind this practice is clear: safeguarding students. These checks are designed to identify concerns that can be discussed openly at interview, rather than to eliminate candidates before they’ve had a chance to explain. But these checks rely on the AI algorithms built into social media platforms, with all their flaws, and the increasing availability of AI tools introduces new risks and challenges.

The Promise and the Pitfalls

On the surface, social media checks look simple and straightforward. You might ask candidates to provide account details for the social media services they use, or you may simply search for them yourself. You can then review what they have posted, replied to and shared, and from this identify questions or areas for discussion at interview.

But online posts often lack context, making it all too easy for them to be misinterpreted. It reminds me of VAR in football: incidents often look worse when slowed down in a video review. The same is true of posts examined long after the fact, stripped of the emotion and pace of the moment in which they were written, often in response to other people’s posts and thoughts.

And do HR professionals truly understand how social media search and display algorithms work? It is AI algorithms that decide what information to present, and that brings a risk of bias. These algorithms are built to keep people on the platform, so they tend to prioritise the posts most likely to spark a reaction; they have no understanding of the aims of the HR staff member carrying out the search. A harmless cultural reference or a joke taken out of context could be surfaced simply because the system has learned that such content drives strong feelings, comments and engagement. This may make things look worse, or better, than they really are. Equally, the algorithms might surface different types of post depending on a candidate’s gender, ethnicity, age or other data points, potentially introducing bias into the information the HR team has to work with.
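To make the ranking point concrete, here is a deliberately simplified Python sketch. The posts, the scoring function and its weights are all invented for illustration; real platform ranking systems are far more complex and entirely opaque to outside users. The point is only that an engagement-weighted view and a chronological view of the same account can lead with very different content.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    replies: int
    age_days: int

# Hypothetical posts from a candidate's timeline.
posts = [
    Post("Congratulations to our exam cohort!", likes=4, replies=1, age_days=30),
    Post("Sarcastic joke about a TV show", likes=90, replies=60, age_days=2000),
    Post("Photos from the charity fun run", likes=12, replies=3, age_days=400),
]

def engagement_score(p: Post) -> float:
    # Invented weighting: replies counted more heavily than likes,
    # mimicking how contentious content keeps users on a platform.
    return p.likes + 3 * p.replies

# What an engagement-driven feed might show first ...
by_engagement = sorted(posts, key=engagement_score, reverse=True)

# ... versus a simple chronological view.
by_recency = sorted(posts, key=lambda p: p.age_days)

print(by_engagement[0].text)  # the years-old, out-of-context joke surfaces first
print(by_recency[0].text)     # the recent, more representative post
```

The HR reviewer never chose that ordering; the platform did, and for its own purposes.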

The Rise of Synthetic Content

Then there’s the growing threat of fake content. Deepfakes and AI-generated images are no longer the stuff of science fiction: they’re here, and they’re convincing. Imagine a candidate being implicated by a fabricated photo circulating online, or even a fake video. Without robust verification processes, schools could make decisions based on lies. How many HR teams are prepared to spot a deepfake? How many even know what to look for? And as growing numbers of people use wearable technologies such as smart glasses, how should HR react to footage taken without an applicant’s knowledge and then posted online? How would they even know it was captured without consent, and therefore unlawfully? Would it be acceptable to raise such a post in an interview process? And what if the applicant pointed out that the footage was taken without consent, and is therefore being processed unlawfully both by the poster and now by the school?
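For HR teams wondering where one would even start, here is a minimal Python sketch (using the Pillow imaging library) of one very weak signal: whether an image file carries any camera EXIF metadata. The file name is a placeholder, and the check proves nothing on its own: AI-generated images often carry no camera metadata, but most platforms strip metadata from genuine photos on upload, and metadata can be forged. If anything, it illustrates how thin the readily available signals are, and why robust verification is genuinely hard.

```python
from PIL import Image
from PIL.ExifTags import TAGS

def exif_summary(path: str) -> dict:
    """Return any EXIF tags present in an image file.

    One weak signal only: absence of metadata does not mean an
    image is fake, and its presence does not mean it is genuine.
    """
    img = Image.open(path)
    exif = img.getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

# "candidate_photo.jpg" is a hypothetical file name for illustration.
summary = exif_summary("candidate_photo.jpg")
if not summary:
    print("No EXIF metadata found; treat provenance as unverified.")
else:
    print("EXIF tags present:", list(summary.keys()))
```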

Safeguarding vs. Fairness

The tension between safeguarding and fairness is real. While protecting students is paramount, recruitment must remain ethical and transparent. Social media checks should never become covert screening tools. Candidates deserve the chance to explain context, and decisions should be based on facts, not assumptions. Yet when AI enters the equation, the line between fact and fiction can blur alarmingly quickly.

There’s also the question of privacy. GDPR sets clear boundaries, but do all schools adhere to them when using AI-driven tools? Consent is critical, as is clarity about what these checks involve. Without transparency, trust in the recruitment process erodes, and that’s a risk no school can afford.

Bridging the Knowledge Gap

The truth is, many HR professionals in education are experts in safeguarding and compliance, but not in data science or AI ethics. This knowledge gap matters. If we don’t understand how these tools work, we can’t challenge their outputs. We can’t ask the right questions about transparency, fairness or verification. And we certainly can’t protect candidates from the unintended consequences of flawed algorithms. For me, this is key: if schools undertake social media checks, HR staff must understand the tools they are using, including the risks that arise from the AI-powered search and ranking algorithms built into social media platforms. They also need to understand the risks posed by fake content, whether audio, images or video; what you hear or see may not be all that it appears to be.

A Call for Reflection

Ultimately, the goal is simple: to keep students safe without compromising the integrity of recruitment. Achieving that balance requires more than technology; it requires understanding, vigilance, and a willingness to challenge the systems we rely on and the content they present. If schools are carrying out social media checks, then the widespread availability of generative AI tools, and our growing awareness of the risks around bias, mean it may be time to revisit the practice and make sure we have considered all of its implications.

Author: Gary Henderson

Gary Henderson is currently the Director of IT in an independent school in the UK. Prior to this he worked as the Head of Learning Technologies with public and private schools across the Middle East, including leading the planning and development of IT within a number of new schools opening in the UAE. A trained teacher with over 15 years in education, his experience spans UK state secondary schools, further education and higher education, as well as a variety of international schools teaching different curricula. He has presented at a number of educational conferences in the UK and the Middle East.
