Recent international developments, particularly Australia’s move to ban under‑16s from social media, have reignited debate in the UK about how best to protect young people online. As a result, pressure has been mounting on the UK Government to go further than the measures already set out in the Online Safety Act 2023, most notably by banning under‑16s from social media.
While ministers have previously ruled out introducing a similar ban in the UK, public concern has not gone away. Petitions continue to circulate, campaign groups remain vocal, and the question of whether we are doing enough to safeguard young people online keeps resurfacing.
In response to mounting public concern, the Government has recently launched a national consultation on this issue. The consultation invites views on a wide range of questions, 54 in total, addressing multiple aspects of young people’s engagement with the digital world, extending well beyond social media.
As individuals with experience in researching, writing about, and teaching young people to navigate the digital world, we have contributed to this consultation. While we acknowledge that the online environment presents some risks and harms, we strongly believe it is essential to proceed with caution.
Below, we set out three key points raised in our consultation response.
Banning Social Media: Under-16s
A recurring theme across wider society, and one clearly reflected in the consultation, is the continued rhetoric that under‑16s should be banned from social media. We do not believe this is the right approach. Instead, current efforts should focus on enforcing the existing minimum age requirements, presently set at 13, rather than introducing a blanket ban on all under‑16s accessing social media platforms. There is substantial evidence that many children under the age of 13 already use a range of social media services, despite current restrictions. If enforcement at this level is proving difficult, it is reasonable to question why raising the threshold to 16 would result in a different outcome. Furthermore, evidence suggests that for children under eleven, a lack of understanding about coercive behaviours such as online tracking or location sharing may make them more vulnerable than their older peers, highlighting the need for more protective measures for this age group.
Crucially, increasing the minimum age is unlikely to prevent young people from accessing social media altogether. Similar outcomes have been well documented in other protection‑oriented policy areas, such as vaping and drug use, where higher age limits have failed to address underlying issues. Instead, such measures often displace the behaviour into less visible and more covert forms, including the possibility of forcing young people onto less regulated sites. On platforms such as TikTok and Facebook there is some moderation, though we accept it is at times flawed. Plenty of other sites exist with no moderation processes in place at all. Sometimes it is ‘better the devil you know’.
Moreover, excluding young people from online spaces risks creating further harms. In a 21st‑century context, where social connection is often conducted digitally and many individuals do not live in close physical proximity to their peers, restriction from online communities is more likely to exacerbate feelings of isolation and loneliness than to address the underlying risks associated with social media use.
Banning Certain Digital Functions: Under-16s
Rather than imposing a blanket ban on under‑16s using social media, the consultation also explores the possibility of banning access to certain platform functions for this age group. These proposals include, but are not limited to, disappearing content, location sharing, and the sending of nude images and/or videos.
While such targeted restrictions may appear attractive in principle, many would be difficult, if not impossible, to implement in practice. For example, although preventing young people from sending nudes may seem like a positive step, the technological capability required to achieve this goal does not currently exist. The proposal is therefore technologically unworkable: either all sexual content must be blocked, or none. Otherwise, you would need software capable of assessing the age of the person in a nude image, and training such software would require the use of Child Sexual Abuse Material. This would not be an easy task to undertake, as it would mean handing tech companies images of victims being abused. Given ongoing concerns about tech companies’ inability to keep sensitive data private, there is little reason to believe they have the capability to protect such images.
In addition, some of the functions identified in the consultation serve important and legitimate purposes for young people. For example, location‑sharing features can play a valuable role in enabling young people to stay safe and connect with family members. In practice, however, it would be extremely difficult to distinguish between location sharing with close relatives and sharing with other online users.
Similarly, features such as disappearing content can provide young people with a space to express themselves more authentically online, particularly where they may feel unable to do so in offline settings due to social pressures, stigma, or prejudice. For instance, a young person may not yet feel ready or safe to share aspects of their identity, such as their gender identity, with family members. Restricting access to such tools risks removing supportive digital spaces that, for some young people, serve an important role in self‑expression and wellbeing.
Banning VPNs: Under-16s
A significant counter‑argument to banning young people from particular platforms or restricting access to specific digital functions is the reality that many will still be able to circumvent these measures through the use of VPNs. One proposed response is to introduce age‑restrictions on VPN services themselves. However, this approach once again misunderstands both how young people, and others, use digital technologies such as VPNs, and the vital role these tools can play in their lives.
VPNs provide young people with an opportunity to explore information which their parents might prefer they did not access, but which they have a right to, such as information about sexuality, sexual health and gender identity. Preventing them from accessing VPNs could make vulnerable children even more vulnerable if their parents discover their search history before they are ready to talk about it.
More broadly, mandating age‑verification or age‑restrictions for VPN use would have far‑reaching and potentially harmful consequences for individuals across the UK. VPNs serve many legitimate and protective purposes, including for those experiencing domestic abuse or living in households where it is unsafe for them to express their ‘authentic’ selves. Requiring age‑verification to access VPN services could expose individuals to significant risks, undermining personal safety and privacy.
The Solution: Education, Education, Education
It is encouraging that the Government’s consultation does not focus exclusively on restricting young people’s access to particular platforms or digital functions. Notably, albeit briefly, it also acknowledges the role of education. From our experience of researching, teaching, and working with young people in relation to the digital world, we recognise education as a crucial component to support young people with the online world.
Meaningful digital education equips young people with the skills needed to navigate online spaces safely, critically, and confidently, enabling them to identify risks, respond to harmful content, and access support when required. However, current approaches to digital literacy in the UK are fragmented, inconsistent, and inadequate. Significant emphasis is placed on warning young people not to do something, as opposed to supporting them in navigating the digital world.
A more robust and comprehensive approach to digital education is required, one that is age‑appropriate, sustained, and embedded throughout a young person’s development, rather than delivered as a one‑off intervention. This should include not only technical skills, but also education around online relationships, consent, privacy, algorithmic awareness, and resilience. Importantly, any meaningful educational framework must also extend to parents and carers.
Dr Laura Higson-Bliss and Louisa Street, Keele University.