From trolling and self-harm sites to child sexual abuse material and online radicalisation, the issue of online safety in schools and universities intensifies every year. The rise of these risks in an IT environment where most endpoints are owned by end users is daunting for educators.
Today’s headteachers and vice-chancellors have to contend with daily threats to student well-being that previous generations simply did not.
Free Wi-Fi for students and staff is the norm in most secondary and post-secondary institutions. Knowing when people are accessing unsafe materials from within the network on their own devices is difficult. So is capturing unwanted contact with students from bad actors outside it.
Add to that the challenge of balancing professional imperatives around duty of care with individuals’ right to privacy, while still addressing a growing burden of legal and regulatory compliance. How best to safeguard young people online has become a front-and-centre issue across the education sector.
What do we mean by safeguarding?
Safeguarding, in essence, means protecting individuals in your care, and sometimes in your employment, from exposure to materials that may cause them or others to come to mental or physical harm.
The Department for Education (DfE) defines three categories for online safeguarding:
- Stopping exposure to illegal, inappropriate or harmful material
- Protecting students from harmful online interactions with other users
- Identifying personal online behaviours that suggest a likelihood of harm
DfE Guidance set out in 2016 requires schools to implement “an effective approach to online safety” that establishes “mechanisms to identify, intervene in and escalate any incident where appropriate.”
All of this now underlies the safety inspection criteria applied by Ofsted, the education inspectorate. Schools in particular have to provide proof that they meet online safeguarding requirements when Ofsted inspectors come calling. The challenge for the education sector is defining what that effective safeguarding approach looks like for each individual institution.
IT’s role in safeguarding
The safeguarding approach many education CIOs take is to block worrying websites entirely, limit internet usage on public networks, and closely monitor the activity taking place there. As baseline control mechanisms, these measures can help raise a red flag if a student or staff member is using the network in a way likely to cause harm to themselves or others.
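As a rough illustration of that baseline, the sketch below checks web-proxy requests against a category blocklist. The domains, categories and log fields are hypothetical, and a real deployment would rely on a vendor filtering service rather than a hand-built dictionary.

```python
# Minimal sketch: checking web-proxy requests against a category blocklist.
# Domains, categories and log fields are illustrative, not a vendor schema.
from urllib.parse import urlparse

BLOCKLIST = {
    "selfharm-example.org": "self-harm",
    "extremist-example.net": "radicalisation",
}

def check_request(log_entry: dict) -> dict | None:
    """Return a flag record if the requested host is on the blocklist."""
    host = urlparse(log_entry["url"]).hostname or ""
    category = BLOCKLIST.get(host)
    if category is None:
        return None
    return {
        "timestamp": log_entry["timestamp"],
        "source_ip": log_entry["source_ip"],  # identifies a device, not a person
        "category": category,
    }

# A single matching proxy log line raises a red flag.
flag = check_request({
    "timestamp": "2017-03-01T22:14:09",
    "source_ip": "10.20.4.117",
    "url": "https://selfharm-example.org/forum",
})
print(flag)
```

Note what the flag contains: an IP address, a timestamp and a category, but no person. That gap is where the trouble starts.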
On their own, however, they are blunt instruments: not sufficient to really help in situations where individuals need to be identified and action taken quickly. They may also trample on institutional and individual attitudes toward privacy, and be seen as heavy-handed by tech-savvy students, who have the means and ability to find workarounds past block lists and mainstream monitoring tools.
With the right tools in place and a clear understanding of safeguarding’s aims, educators can implement more sophisticated measures that go beyond blocking, while remaining mindful of privacy. It is possible to safeguard students’ well-being without restricting academic endeavour, informed discussion, or debate around controversial topics.
Getting to a solution
Imagine a first-year university student, perhaps far from home and dealing with stress, sadness, anxiety or isolation, who finds themselves visiting websites or forums that promote self-harm. Interest in the topic grows, and visits to self-harm sites increase over a short period. The risk of acting on self-harm ‘advice’ grows with each passing day.
Traditional blocking and monitoring may capture some of the activity and alert the university that someone is accessing these sites, but identifying the individual can be a challenge: at minimum it will take a dedicated analyst time and resources to narrow down the possibilities. Injury or worse could occur before the IT department works out who is actually at risk. Meanwhile, the web histories of numerous individuals will have been accessed and reviewed while the investigation is underway.
What is needed is context, and it’s here that big data analytics and machine learning can help educators correlate activity with what we might know about the individuals engaging in it.
If we could enrich the above scenario with metrics such as the type of information being sought, the keywords searched for, the number of attempts to access that kind of information, the locations of the network routers used for access, and the devices connected to those routers at key points in time, the likelihood of identifying the individual at risk would increase dramatically. We could also eliminate time-wasting false positives, such as red-flagging a welfare officer who accesses self-harm sites for research purposes.
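To make that correlation concrete, here is a minimal Python sketch of the join described above: flagged requests are matched to Wi-Fi association records to recover a device owner, a staff allow-list suppresses known research access, and repeat visits within a window trigger an alert. The field names, the seven-day window and the threshold of five visits are all assumptions for illustration, not a reference to any particular product.

```python
# Illustrative correlation step: attach an owner to each flagged request via
# access-point association records, then escalate owners whose flagged visits
# exceed a threshold within a rolling window. All names and values are assumed.
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(days=7)                  # look-back period for repeat visits
REPEAT_THRESHOLD = 5                        # visits in the window before alerting
STAFF_ALLOW_LIST = {"welfare-officer-01"}   # legitimate research access

def correlate(flags: list[dict], associations: list[dict]) -> list[dict]:
    """Join proxy flags to association records, filter the allow-list,
    and return alerts for owners with repeated flagged visits."""
    visits = defaultdict(list)
    for flag in flags:
        t = datetime.fromisoformat(flag["timestamp"])
        # Which device owner held this IP at the moment of the request?
        owner = next(
            (a["owner_id"] for a in associations
             if a["ip"] == flag["source_ip"]
             and datetime.fromisoformat(a["start"]) <= t
             < datetime.fromisoformat(a["end"])),
            None,
        )
        if owner is not None and owner not in STAFF_ALLOW_LIST:
            visits[owner].append(t)

    alerts = []
    for owner, times in visits.items():
        recent = [t for t in times if max(times) - t <= WINDOW]
        if len(recent) >= REPEAT_THRESHOLD:
            alerts.append({"owner_id": owner, "visits_in_window": len(recent)})
    return alerts
```

In practice this join would run inside a SIEM or analytics platform over far larger volumes, but the principle is the same: context turns an anonymous red flag into a precise, auditable alert.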
Big data, not Big Brother
Of course, having such power would come with a great deal of responsibility. Information about risky online behaviours should only be visible to relevant individuals, such as welfare or safeguarding officers who sit outside the IT function. The education sector needs tools that allow IT to become facilitators of information without having access to the specific data. Only then can educators really tackle the challenges of safeguarding.
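One way to realise that separation in software is to seal the identity attached to an alert with a key that only the safeguarding team holds, so the IT-run pipeline can store and route records it cannot read. The sketch below uses the Fernet recipe from the third-party cryptography package; the surrounding workflow, such as who generates the key and where decryption happens, is an assumption for illustration.

```python
# Sketch of "facilitator without access": IT's pipeline encrypts the identity
# field with a key held only by the safeguarding officer, so IT can handle
# alerts without being able to read who they concern.
from cryptography.fernet import Fernet

# Generated once by the safeguarding officer and never shared with IT.
officer_key = Fernet.generate_key()

def seal_alert(alert: dict, key: bytes) -> dict:
    """Replace the identity with ciphertext before the alert enters any
    system that IT staff can read."""
    sealed = dict(alert)
    sealed["owner_id"] = Fernet(key).encrypt(alert["owner_id"].encode())
    return sealed

def open_alert(sealed: dict, key: bytes) -> dict:
    """Safeguarding officer recovers the identity on their own machine."""
    opened = dict(sealed)
    opened["owner_id"] = Fernet(key).decrypt(sealed["owner_id"]).decode()
    return opened

# IT sees only the sealed record; the welfare officer decrypts it locally.
sealed = seal_alert({"owner_id": "student-4117", "visits_in_window": 6}, officer_key)
print(open_alert(sealed, officer_key))
```

A production system would add asymmetric keys, audit trails and key rotation, but even this minimal pattern keeps the ‘who’ out of IT’s hands while leaving the plumbing with them.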
Technology is just part of the solution. In the age of the smartphone, young people have unrestricted access to the internet 24/7. Advanced network analysis and the blocking of harmful materials will help, but students still need to be made continually aware of the risks online. There is also a sensible discussion to be had between faculty and the student body about the rationale for active monitoring, and where the reasonable limits of online privacy lie.
Sourced by Andy Deacon, Head of Public Sector Technology UK&I at LogPoint.