
New age assurance guidelines for user-to-user and search platforms


 

Ofcom’s second consultation offers early insight into new rules

New guidelines protecting children from harmful content bring search engines and user-to-user platforms a step closer to mandatory age assurance. The draft regulations from Ofcom, the UK’s online safety regulator, are open to consultation. But they provide an early glimpse of the tough new rules that will restrict access to content from 2025.

The proposed guidelines are Ofcom’s latest response to the Online Safety Act. Passed last year, the Act will give Britain one of the toughest online regulatory systems in the world. Social media apps, search engines and other online services will need to adopt robust age checks and stop their algorithms recommending harmful content to children.

What is harmful content?

This is the second of Ofcom’s four consultation exercises on finalising the regulations that will flesh out the Act’s skeleton framework. The first, which closed in February, focused on protecting people from illegal content. The current discussions will lead to new rules designed to stop children accessing harmful content. The Act divides harmful content into three broad categories:

Primary priority content (PPC) that is harmful to children:

Pornographic content, and content which encourages, promotes, or provides instructions for suicide, self-harm, and eating disorders.

Priority content (PC) that is harmful to children:

Content which is abusive or incites hatred, bullying content, and content which encourages, promotes, or provides instructions for violence, dangerous stunts and challenges, and self-administering harmful substances.

Non-designated content that presents a material risk of harm to children:

Any type of content that does not fall within the two categories above but which presents “a material risk of significant harm to an appreciable number of UK children.”
 
Based on these definitions, Ofcom has published draft Children’s Safety Codes which aim to ensure that:

  1. Children will not normally be able to access pornography.
  2. Children will be protected from seeing, and being recommended, potentially harmful content.
  3. Children will not be added to group chats without their consent.
  4. It will be easier for children to complain when they see harmful content, and they can be more confident that their complaints will be acted on.

 

Creating a safer online environment

In a four-week period (June-July 2023), Ofcom found that 62% of children aged 13-17 encountered PPC/PC online. Research also found that children consider violent content ‘unavoidable’ online, and that nearly two-thirds of children and young adults (13-19) have seen pornographic content. The number of girls aged 13-21 who have been subject to abusive or hateful comments online has almost tripled in 10 years from 20% in 2013 to 57% in 2023.

To create a safer online environment for children, Ofcom has outlined a series of steps that search services and user-to-user platforms will be expected to take.

Online services must determine whether or not they are likely to be accessed by children. To help with this, Ofcom has published an online tool. Platforms that are likely to be accessed by children must:

  1. Complete a risk assessment to identify risks posed to children, drawing on Ofcom’s ‘children’s risk profiles’.
  2. Prevent children from encountering primary priority content relating to suicide, self-harm, eating disorders, and pornography. Services must also minimise children’s exposure to other serious harms defined as ‘priority content’, including violent, hateful or abusive material, bullying content, and content promoting dangerous challenges.
  3. Implement and review safety measures to mitigate the risks to children. Ofcom’s Safety Codes include more than 40 measures such as robust age checks, safer algorithms, effective moderation, strong governance and accountability, and more information and support for children including easy-to-use reporting and complaints processes.

 

Highly effective age assurance

There is no single fix-all measure that services can take to protect children online. But the package of measures recommended by Ofcom relies prominently on age assurance. Ofcom anticipates that most digital services not using age assurance are likely to be accessed by children. Once the final version of the new rules comes into force, age assurance will be mandatory.
 
In practice, this will mean that all services will have to ban harmful content or introduce what Ofcom describes as “highly effective age-checks” restricting access to either the whole platform or the parts of it that offer adults-only content. Ofcom defines “highly effective” as age assurance that meets criteria for technical accuracy, robustness, reliability, and fairness.
 
Regulated services will no longer be able to get away with an ineffective ‘I am 18’ button. They will need to commit to age assurance technology to ensure their services are safer by design.
 
The quickest way of doing this is to adopt a proven digital ID product, like Luciditi. Ian Moody, Luciditi co-founder and CEO, says, “Easier and more cost-effective than starting from scratch, Luciditi can be easily embedded in websites or apps, either by using a pre-built plugin or by using our Software Development Kit.”
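By way of illustration only, an embedded age check usually amounts to a widget plus a callback that hands the site nothing more than a pass/fail result. The sketch below shows that shape in TypeScript; the function and option names are placeholders, not the actual Luciditi plugin or SDK interface:

```typescript
// Hypothetical example only: shows how an embedded age-assurance widget is
// typically wired into a page. Function and option names are placeholders,
// not the actual Luciditi plugin or SDK interface.
interface AgeCheckOptions {
  containerId: string;                    // element that hosts the age-check widget
  minimumAge: number;                     // age threshold to enforce, e.g. 18
  onResult: (isOverAge: boolean) => void; // only a yes/no ever comes back to the site
}

// Assumed to be provided by the embedded script or SDK.
declare function initAgeCheck(options: AgeCheckOptions): void;

initAgeCheck({
  containerId: "age-gate",
  minimumAge: 18,
  onResult: (isOverAge) => {
    // No personal data reaches the site: just a pass/fail signal.
    if (isOverAge) {
      document.getElementById("restricted-content")?.removeAttribute("hidden");
    } else {
      window.location.assign("/access-denied");
    }
  },
});
```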
 
Ofcom has specifically said its measures will apply to all sites that fall within the scope of the Act, irrespective of the size of the business. ‘We’re too small to be relevant’ won’t wash as an excuse.
 
Services cannot refuse to take steps to protect children simply because the work is too expensive or inconvenient. Ofcom says, “protecting children is a priority and all services, even the smallest, will have to take action as a result of our proposals.”
 

“Don’t wait for enforcement and hefty fines” – Tech Sec

According to Ofcom, children who have encountered harmful content experience feelings of anxiety, shame or guilt, sometimes leading to a wide-ranging and severe impact on their physical and mental wellbeing.
 
The lawlessness exploited by some of the world’s leading social media platforms has contributed to the deaths of children like 14-year-old Molly Russell. The coroner’s report concluded that watching content promoting suicide and self-harm had contributed to Molly’s death by suicide.
 
“We want children to enjoy life online”, said Dame Melanie Dawes, Ofcom Chief Executive, “but for too long, their experiences have been blighted by seriously harmful content which they can’t avoid or control. Many parents share feelings of frustration and worry about how to keep their children safe. That must change.”
 
The consultation exercise closes on July 17, 2024. Ofcom says, “We will take all feedback into account, as well as engaging with children to hear what they think of our plans. We expect to finalise our proposals and publish our final statement and documents in spring 2025.”
 
Welcoming Ofcom’s proposals, Technology Secretary Michelle Donelan said, “To platforms, my message is engage with us and prepare. Do not wait for enforcement and hefty fines – step up to meet your responsibilities and act now.”
 
The Online Safety Act doesn’t pull its punches. Repeat offenders will potentially be fined up to £18 million or 10% of global revenue, whichever is greater, and company managers risk going to jail for up to two years. In the coming months, platforms will need to be proactive in committing to the age assurance products that will help them stay on the right side of the law.
 
In Britain at least, the carefree distribution of harmful content is about to change. Ofcom’s proposals go much further than current industry practice and demand a step-change from tech firms in how UK children are protected online.
 

Want to know more?

Luciditi’s Age Assurance technology can help companies meet these strict new guidelines.  If you would like to know more, Contact us for a chat today.



Using Digital ID to restrict the damaging impact of online stalkers


 

How do you stop someone stepping into your life, pretending to be you, contacting your friends and relatives? Digital identity technology can protect individuals from people like Matthew Hardy who tormented dozens of women and their family and friends, and is currently serving the longest sentence given to a stalker in the UK.

Imagine a friend messages to say they’ve got something to tell you, something secret, something about your partner. Except it’s not from your friend at all, it’s from a stalker, pretending to be someone you know, feeding you poisonous information that’s hurtful, damaging, and false. Hardy did this to 63 women, along with hundreds of people associated with them.

Building a watertight case

Hardy, 32, from Northwich, Cheshire, UK, initially targeted women he knew from childhood. He later selected women at random, often choosing people with a prominent social media profile. Bullied at school, he struggled to develop friendships and at the time of his trial was unemployed.

Over 11 years, more than 100 complaints were made to Cheshire police alone. Many women, however, found that police officers in Cheshire, Lincolnshire, Kent and elsewhere were slow to take them seriously.

Hardy was arrested 10 times but denied the allegations and police struggled to build a watertight case against him. Some of the women collected evidence, sending screenshots to the police – which Hardy was able to monitor, mocking them for their actions.

‘Can I tell you a secret?’

Eventually, Hardy was brought to trial through the diligent work of Cheshire police officer PC Kevin Anderson. PC Anderson later told reporters that, “The impact on those affected by his actions has been immense, causing some of them to change some of their daily habits, and live in constant fear that they were being watched.”

In January 2022, Hardy was convicted of five counts of stalking and sentenced to nine years in prison. “It’s the longest sentence we’ve ever heard of,” said Violet Alvarez of anti-stalking charity the Suzy Lamplugh Trust. For many of his victims, the impact of his actions still overshadows their lives.

In an appeal hearing, the defence team argued that Hardy’s autism prevented him from understanding the true impact of his actions on his victims, and his sentence was reduced to eight years.

Hardy’s case was detailed in a seven-part podcast by Guardian journalist Sirin Kale, later developed into a two-part documentary for Netflix. Both productions used the title, ‘Can I tell you a secret?’, often Hardy’s opening line in his first message to unsuspecting victims. They believed it was from a friend, usually another woman, though in fact it was Hardy using accounts he’d created on social media in the names of other people.

Protecting users from harmful content

The UK’s new Online Safety Act, passed last autumn, aims to force platforms to better protect UK users from harmful content. Intended more as a skeleton framework than a fully fleshed-out package of measures, the 286 pages of the Act make no reference to stalking. The new law does, however, include an offence of ‘false communication’, committed if a message is sent by someone who intended it ‘to cause non-trivial psychological or physical harm to a likely audience.’ Enforcing the law won’t be easy, though, especially in an encrypted service like WhatsApp, as the government has assured tech firms they won’t be forced to scan encrypted texts indiscriminately.

Maintaining a balance between privacy and security is difficult. In the meantime, users remain vulnerable to someone hiding behind a false identity. How can you tell if a message is truly from the person who appears to have sent it? A solution lies in digital identity apps like Luciditi.

High-grade digital security

Digital identity can assure websites and platforms that a user is who they claim to be. The app verifies someone’s identity then holds this proof in a user’s ‘digital wallet’, protecting it behind high-grade digital security, of the kind used by banks.

The data is never shared with anyone unless the owner chooses to do so. Websites or other apps that accept Digital Identity are simply given assurance that someone is genuinely who they claim to be – which has obvious advantages for social media platforms.

Large online social media companies that want to protect their users could easily allow them to provide a ‘proof’ of their identity. Again, no personal data would be released; there would simply be a digital yes/no. This would mean that a user’s friends and family need only trust a message confirmed as ‘yes, this genuinely comes from the person you know.’

Helping platforms to protect users

An early adopter would have first mover advantage. Rivals would be compelled to follow suit – or face losing people who were no longer willing to use a platform that still allowed a Wild West approach to identity.

A user joining a platform protected by digital identity would be asked to verify their identity using their chosen wallet app, or by creating a new one using identity documents. The data is secured such that no one other than the user could access it.

The proof provided by the presented ID wallet would assure the platform that this person was genuine: they had been verified, they were not a bot or a fake account, and their messages were approved. Unapproved messages apparently sent by the same person could then be easily identified and ignored.
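As a rough sketch of how a platform might use that assurance (the types and function below are hypothetical, not a real platform or Luciditi API), the yes/no assertion could simply be attached to each message so that anything unverified stands out:

```typescript
// Illustrative sketch only: labelling messages according to a yes/no
// identity assertion. Types and names are hypothetical, not a real API.
interface Message {
  senderId: string;
  body: string;
}

interface IdentityAssertion {
  verified: boolean;  // the digital yes/no from the ID wallet; no personal data attached
  issuedAt: Date;
}

function labelMessage(msg: Message, assertion?: IdentityAssertion): string {
  // Only messages backed by an assertion are shown as verified; anything
  // else can be flagged for the recipient to treat with caution or ignore.
  const isVerified = assertion?.verified === true;
  return isVerified
    ? `verified sender ${msg.senderId}: ${msg.body}`
    : `UNVERIFIED sender ${msg.senderId}: ${msg.body}`;
}
```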

Cheating the system

Luciditi’s Glassvault feature goes a step further in preventing genuine, verified accounts from being used for harm. Rather than relying solely on the proof from an ID wallet, certain elements of the underlying data used at the time of verification are parked in a digital ‘escrow’.

Police investigating online harm could be granted access to specific Glassvault data which would reveal the true identity of the user involved. Even if an account were deleted, including the ID wallet linked to it, the Glassvault data would stay out of reach and would remain available to investigators.

Unfounded fears

Protecting users by verifying identity doesn’t sit easily with everyone. Campaigners fear that identity data could be collected by the government, leading to a backdoor route to a national database. In this scenario, all UK citizens would be required to give their details to a database that would then become the primary way to access public and private sector services, similar to the system adopted by Denmark.

Worse, the data could be used for ‘big brother’-style control of the population, or sold to the highest bidder for profit, leading to a ‘surveillance state’ where you use your fingerprints for everything from buying milk to paying tax.

“The Government is introducing a giant digital identity system for all of us to access basic services”, claims campaign group Big Brother Watch, adding that “the Government is also cultivating a ‘digital identity market’ of private companies that can perform identity checks online.”

However, these claims are disputed by Open Identity Exchange (OIX), an umbrella organisation representing those involved in the digital ID sector, among them Arissian – developers of Luciditi.

OIX says that “digital ID will not be mandatory. There will always be choices…. There is focus on alternative proofing methods for those who either struggle to prove who they are or simply do not want a digital ID.”

For developers like Arissian, digital ID gives users an optional asset that primarily relies on trust. “Digital ID tech puts people front and centre”, says Ian Moody, Luciditi co-founder and CEO, “our user-centric approach is committed to a framework of trust that safeguards privacy, protects users, and is driven by the needs of individuals.”

For Moody, big brother conspiracies are “uninformed scare-mongering” that imply politicians are capable of spending billions of pounds on managing the identity details of more than 60 million people in a system that’s both hugely extensive and completely successful yet somehow entirely hidden from financial oversight and the media.

Robust standards for digital products

We as a society, campaigners included, face hard choices. We’re living in a digital age, and it’s never been easier to connect with people. We can instantly buy products and services online; teenagers no longer need to take their passport to a gig to prove their age. However, our digital connections are vulnerable to criminal actions.

We as individuals need to protect ourselves, and so do retailers and suppliers, which is why we sometimes need to involve the police – as Hardy’s victims did. But the police can only enforce the law, and so the law must keep up with technical developments.

It’s the government’s duty to protect our online safety, but this is not the same thing as pervasive big brother oversight.

The Department for Science, Innovation and Technology (DSIT) has expressly said, ‘The government is not making digital identities mandatory.’ DSIT is setting robust standards for digital ID products, and overseeing a list of “trust-marked” accreditation organisations. These are the first steps towards legislation that, rather than creating a giant database, will better protect individuals’ digital privacy.

Restricting others like Hardy

As is often the way with innovation, advances in digital identification sometimes outstrip public acceptance. It takes time for people to feel comfortable with developments, from digital ID assurance to live facial recognition in shops. Even so, big brother campaigners are out of step.

The government is working on new legislation that protects the privacy of individuals and incorporates national frameworks of trust as supported by the developers of products like Luciditi. The alternative is no legislation, where stalkers remain unrestricted.

People like Hardy are not fearful of new tech. He was able to investigate his victims’ lives, their social circles, and their actions in reporting him to the police, likely assisted by easily available tools. Others are just as capable. As a society, digital ID is our best weapon in restricting them.

Want to know more?

Luciditi’s Identity Proofing technology can help meet some of the challenges presented by the OSA. If you would like to know more, Contact us for a chat today.



Understanding the Online Safety Act: Implications for Adult Sites


Ofcom calls for biometric age checks to stop children seeing adult content

Tough new guidance from Ofcom aims to protect children from seeing online pornography. The Online Safety Act, passed last autumn, restricts underage access to adult content. New details have been published explaining how this can be done through age assurance, giving digital identity platforms like Luciditi a frontline role in helping content providers stay on the right side of the law.

On average, children first see online pornography at age 13 – although more than a quarter discover it by age 11 (27%), and one in 10 as young as nine (10%), according to research. Before turning 18, nearly eight in 10 youngsters (79%) have encountered violent pornography showing coercive, degrading or pain-inducing sex acts.

The Online Safety Act (OSA) aims to protect children by making the UK the safest place in the world to be online. Under the OSA, sites and apps showing adult content will have to ensure that children can’t access their platform.

Highly effective age checks

The new law has been described as a skeleton act. The bare bones approved by parliament will be fleshed out one topic at a time by the communications watchdog Ofcom.

Ofcom’s first update, last November, focused on protecting people from online harms. Now, its second period of consultation and guidance aims to protect children from online pornography through what it describes as “highly effective age checks.” The new guidance looks in detail at the age assurance tech that providers will need to adopt.

The porn industry has long been an early adopter of innovation – from online credit card transactions to live streaming. Age assurance, tried and trusted in other sectors, is unlikely to pose any technical challenges whether providers develop it in-house or adopt an existing product.

Businesses flouting the OSA can be fined up to £18 million or 10% of global revenue, and their directors jailed for up to two years. Nevertheless, the vast majority of adult content providers will be committed to maintaining a profitable, stable, and compliant operation that avoids tangling with the law. They don’t want kids looking at inappropriate material any more than anyone else does.

The difficulties of staying in-house

To comply with the OSA, providers must introduce age assurance – through age estimation, age verification or a combination of both.

In most cases, adults will be able to access a site through age estimation tech. AI assesses a selfie to estimate whether a user is clearly older than 18 – typically by a margin of five years or more. Users who are 18 or thereabouts will be asked to verify their age through personal identity data confirming their date of birth.
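To make that concrete, here is a minimal sketch of the ‘buffer’ logic described above, assuming an estimation service that returns an approximate age. The names and threshold handling are purely illustrative, not Ofcom’s specification or any provider’s implementation:

```typescript
// Illustrative only: routing users through estimation or verification
// based on a buffer above the legal threshold. Not an official specification.
const LEGAL_AGE = 18;
const CHALLENGE_BUFFER = 5; // users estimated under 23 are asked to verify instead

type AccessDecision = "grant" | "verify-with-documents";

function decideAccess(estimatedAge: number): AccessDecision {
  if (estimatedAge >= LEGAL_AGE + CHALLENGE_BUFFER) {
    // Estimated well clear of 18: facial age estimation alone is enough.
    return "grant";
  }
  // "18 or thereabouts": fall back to verifying date of birth against identity data.
  return "verify-with-documents";
}
```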

The big question for both providers and users is who should oversee the selfies and data, providers or third-party specialists?

If developed in-house, estimation and verification can bring challenges perhaps unique to the porn industry. Criminals target users by surreptitiously activating the camera on their device and threatening to release the footage if money isn’t handed over. Just the threat of this can lead to a payout, even without evidence that the camera was actually activated.

Mindful of a risk of blackmail or other breaches of anonymity, users may be reluctant to send a selfie to a porn site. Asking them to give up their personal data poses an even bigger challenge. Explicit website Pornhub said regulations requiring the collection of “highly sensitive personal information” could jeopardise user safety.

Social media users are already sceptical – memes have started appearing showing someone accessing a porn site and being asked for a selfie before they enter. In the US, similar worries about age checks led users to access porn sites via a virtual private network (VPN). In Utah, demand for VPNs surged by 847% the day after new age checks came into effect.

Staying in-house means having to overcome widespread concerns. Providers who are legitimate, established, and successful but owned by an international parent group may particularly struggle to persuade British users that their selfie and data will be permanently and properly safeguarded.

Expertise from Luciditi

There is an easy, trusted alternative to the in-house route. Digital ID platforms such as Luciditi create an ‘air-gapped’ solution. A specialist in age assurance, Luciditi is British, well-established, and trusted by the UK government as Britain’s first supplier of a digital PASS proof of age card. Its developers, who have a background in digitally managing sensitive NHS records, have brought Luciditi to a range of industries. Users are already sending selfies and data to Luciditi for other age-restricted products or services.

Ofcom suggests that age assurance could involve tech associated with facial age estimation, photo ID matching, and open banking, all of which Luciditi already performs. Luciditi securely processes all selfies and data and instantly destroys them after use. Nothing is given to a third party beyond an automated nod that a user is an adult. This meets Ofcom’s requirement for providers to take care in safeguarding privacy.

Prevention of tracking is also an important factor – not just by the site operator, but also by the data source. So if a user chooses Open Banking to prove their age, their bank can’t see why the check was needed or whom the result was shared with – often called “double blind” verification. Having certified systems handle privacy, anonymity and security is essential if the technology is ever to be trusted by users.
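A simplified sketch of that “double blind” idea is shown below, assuming the age-assurance service sits between the bank and the site so that each party sees only what it needs. All names and types are illustrative:

```typescript
// Illustrative only: what each party sees in a "double blind" age check.
// The bank learns that an over-18 attribute was requested, but not which
// site asked; the site learns only the outcome, not who the user banks with.
interface BankAttributeResponse {
  requestId: string; // random identifier; reveals nothing about the relying site
  isOver18: boolean;
}

interface SiteResult {
  sessionId: string;
  accessGranted: boolean; // the only field the adult site ever receives
}

// The age-assurance service in the middle maps the anonymous bank response
// onto the site's session without passing identity data in either direction.
function relayResult(bank: BankAttributeResponse, sessionId: string): SiteResult {
  return { sessionId, accessGranted: bank.isOver18 };
}
```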

“We’re perfectly placed to support the adult content industry with age assurance”, said Ian Moody, Luciditi CEO, “our in-depth experience in supporting online providers of other age-restricted products means we can quickly bring sites up to the new standards set by Ofcom.”

Embedded in a provider’s site, Luciditi’s tech would operate behind the scenes, independently overseeing access. Providers could welcome new users with a message saying that access is managed by a reputable, independent third-party, safeguarding anonymity. This would assure users that they are not sending anything directly to the owners of a porn site. Additionally, providers can embed Luciditi across all their age-restricted products and services, whether relating to adult content or not.

User-generated content

As an established digital identity platform, Luciditi supports individuals as well as businesses. Users download the Luciditi app, which is free and easy to use. This lets them create their own digital identity wallet, safely storing their selfie and data and letting them breeze through an age check in a couple of taps.

This facility will benefit providers who host adult user-generated content and who need to know that performers are aged 18 or over. This issue isn’t covered by the latest guidance but will be included in Ofcom’s next update, due in spring 2024. Providers who choose to act early can future-proof their business now by addressing this issue as part of their wider approach to age assurance.

No alternatives

During the current process of consultation, which ends on March 5th, Ofcom will not be looking at softer options. For providers looking to retain their audience, age assurance is the only show in town. “Our practical guidance sets out a range of methods for highly effective age checks”, said Dame Melanie Dawes, Ofcom’s Chief Executive, “we’re clear that weaker methods – such as allowing users to self-declare their age – won’t meet this standard.”

The OSA effectively fired a starting gun. The race is now on for adult content providers to accept its provisions, take appropriate action, and adopt the tech they need before the law is enforced in or after 2025.

It’s not just about completing the work before the new measures are actively enforced. It’s about acting quickly to maintain a competitive position. Businesses that build trust early will seize the advantage in developing their market share. It’s not just the new law that providers need to be mindful of, it’s each other.

Want to know more?

Luciditi’s Age Assurance technology can help meet some of the challenges presented by the OSA. If you would like to know more, Contact us for a chat today.



The race is on to become compliant with UK online safety law


 

The Online Safety Act 2023 is now law and enforcement will be phased in by Ofcom. How should you and your business prepare?

Ambitious plans to make Britain the safest place to be online have recently become law. The Online Safety Act 2023 covers all large social media platforms, search engines, and age-restricted online services that are used by people in the UK, regardless of where such companies are based in the world. What does the Act mean for you and your business, and how should you prepare for it? Our complete guide to the new legislation answers five key questions.

1. What is the Online Safety Act?

The Online Safety Act (OSA) is a new set of laws aimed at creating a safe online environment for UK users, especially children. Due to be phased in over two years, the law walks a fine line between making companies remove illegal and harmful content and protecting users’ freedom of expression. It has been described as a ‘skeleton law’, offering the bare bones of protection which will be fleshed out in subsequent laws, regulations, and codes of practice.

The OSA has had a long and difficult journey. An early draft first appeared in 2019 when proposals were published in the Online Harms White Paper. This defined “online harms” as content or activity that harms individual users, particularly children, or “threatens our way of life in the UK, either by undermining national security, or by reducing trust and undermining our shared rights, responsibilities and opportunities to foster integration.”

The Act covers any service that hosts user-generated images, videos, or comments, available to users in the UK. It includes messaging applications and chat forums, and therefore potentially applies to the major social media platforms such as X (Twitter), TikTok, Facebook, Instagram, BeReal, Snapchat, WhatsApp, YouTube, Google, and Bing.

Fears of a censor’s charter

Early drafts of the “Online Safety Bill” were described by critics as a censor’s charter, and parts of it have been rewritten over time. The current version might have had its claws clipped but it still has teeth. Repeat offenders will potentially be fined up to £18 million or 10% of global revenue, whichever is greater, and company managers risk going to jail for up to two years.

The Act provides a ‘triple shield’ of protection. Providers must:

  • remove illegal content
  • remove content that breaches their own terms of service
  • provide adults with tools to regulate the content they see

Children will be automatically prevented from seeing dangerous content without having to change any settings.

2. What are the key provisions of the OSA?

The Act targets what it describes as “regulated services”, specifically large social media platforms, search engines, or platforms hosting other user-to-user services (for example of an adult nature), along with companies providing a combination of these. Precisely which providers the Act will affect won’t be known until the government publishes further details about thresholds.

Providers will have to comply with measures in four key areas:

  • removing illegal content
  • protecting children
  • restricting fraudulent advertising
  • tackling communication offences, such as spreading fake but harmful information

2.1 Illegal content

Providers will be expected to prevent adults and children from accessing illegal content. Previously, this was largely restricted to material associated with an act of terrorism or child sexual exploitation. Under the new law, illegal content also includes anything that glorifies suicide or promotes self-harm. Content that is illegal and will need to be removed includes:

  • child sexual abuse
  • controlling or coercive behaviour
  • extreme sexual violence
  • fraud
  • hate crime
  • inciting violence
  • illegal immigration and people smuggling
  • promoting or facilitating suicide
  • promoting self-harm
  • revenge porn
  • selling illegal drugs or weapons
  • sexual exploitation
  • terrorism

Guidance published by the government explains that “platforms will need to think about how they design their sites to reduce the likelihood of them being used for criminal activity in the first place.”

The largest social media platforms will have to give adults better tools to control what they see online. These will allow users to avoid seeing material that is potentially harmful but which isn’t criminal (providers will have to ensure that children are unable to access such content).

The new tools must be effective and easy to access and could include human moderation, blocking content flagged by other internet users, or sensitivity and warning screens. They will also allow adults to filter out contact from unverified users, which will help stop anonymous trolls from reaching them.

2.2 Protecting children

The OSA affects material assessed as being likely to be seen by children. Providers will have to prevent children from accessing content regarded either as illegal or harmful. The government’s guidance suggests that the OSA will protect children by making providers (in particular social media platforms):

  • prevent illegal or harmful content from appearing online, quickly removing it when it does.
  • prevent children from accessing harmful and age-inappropriate content.
  • enforce age limits and age checking measures.
  • ensure the risks and dangers posed to children on the largest social media platforms are more transparent, for example by publishing risk assessments.
  • provide parents and children with clear and accessible ways to report problems online when they do arise.

“Harmful” is a grey area. The Act gives the government minister responsible for enforcing the new law (the secretary of state) the power to define “harmful”. The OSA suggests the minister will do so where there is “a material risk of significant harm to an appreciable number of children.” According to the government guidance, harmful content includes:

  • pornographic material
  • content that does not meet a criminal level but which promotes or glorifies suicide, self-harm or eating disorders
  • content that depicts or encourages serious violence
  • online abuse, cyberbullying or online harassment

Social media companies set age limits on their platforms, usually excluding children younger than 13. However, many younger children have accounts. The OSA aims to clamp down on this practice.

2.3 Fraudulent advertising

Under the OSA, providers will have to prevent users from seeing fraudulent advertising such as ‘get rich quick’ scams. An advert will be regarded as fraudulent if it falls under a wide range of offences listed in the Act, from criminal fraud to misleading statements in the financial services area. For large social media platforms and search engines, advertising material is fraudulent if it:

    (a) is a paid-for advert
    (b) amounts to an offence (the OSA lists possible fraud offences), and
    (c) (in the case of social media) is not an email, SMS message, or other form of messaging as listed

Social media platforms and search engines must:

  • Prevent individuals encountering fraudulent adverts
  • Minimise the length of time such content is available
  • Remove material (or stop access to it) as soon as they are made aware of it

Providers must also include clear language in their terms of service regarding the technology they are using to comply with the law.

2.4 Communication offences

Under the OSA, an offence of “false communication” is committed if a message is sent by someone who intended it “to cause non-trivial psychological or physical harm to a likely audience.” The law applies to individual users and “corporate officers” (who can be guilty of neglect), but excludes news publishers and other recognised media outlets.

An offence of “threatening communication” would be committed if someone sends a message that threatens death or serious harm (assault, rape, or serious financial loss), with the intent of making the recipient fear that the threat would be carried out.

The Act also makes it illegal to encourage or assist an act of self-harm. A crime occurs if an offending message is sent, or if someone is shown an offending message (whoever originally wrote it).

Sending flashing images can also be regarded as an offence. Possible prison sentences under this part of the OSA vary depending on the offence but can be up to five years. A company need not be a provider of regulated services to be caught by this part of the law.

Amendments will be made to the Sexual Offences Act 2003, making it illegal to share or threaten to share intimate pictures if the offender was seeking to cause distress.

3. What are the requirements for age assurance tech?

In recent years, the UK has been edging ever closer to adopting an online age verification system. After passing the Digital Economy Act (2017), Britain became the first country to allow such a system to be implemented. Websites selling pornography would have had to adopt “robust” measures that stopped children accessing their content. However, enforcing this was easier said than done.

The possibility of a wide variety of porn outlets around the world collecting the personal identity data of UK users led to concerns about breaches of the General Data Protection Regulation (GDPR). The scheme was abandoned in 2019, and at that point the baton was passed to the OSA.

Whether adults accessing pornography will encounter mandatory age assurance under the OSA is still the subject of legislative debate. However, adult content providers will need to ensure that children are not able to see such material. The Act says:

“A provider is only entitled to conclude that it is not possible for children to access a service, or a part of it, if age verification or age estimation is used on the service with the result that children are not normally able to access the service.”

This may then lead providers to commit to age assurance by default to ensure compliance. In its final version, the Act tightens up definitions of ‘assurance’, clarifying how and when this may be provided – whether by estimation tech, verification measures, or both.

Digital ID providers, such as our own platform Luciditi, use age estimation AI to give quick and easy access to the majority of users. Those close to the age threshold will need to arrange access via age verification, which relies on personal data. Luciditi only sends a simple ‘yes’ or ‘no’ reply to online age restricted access requests. The data itself is securely managed and can’t be seen by third parties. Keeping business operations compliant, Luciditi can be embedded in a client’s website by developers (ours or yours), or even simply via a plug-in.
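For illustration, the relying site’s side of that exchange can be as simple as checking a single boolean before serving restricted content. The endpoint and field names in this sketch are hypothetical, not the Luciditi API:

```typescript
// Illustrative sketch only: a site consuming a yes/no age-check result
// before serving restricted content. Endpoint and field names are
// hypothetical, not the Luciditi API.
interface AgeCheckResult {
  sessionId: string;
  overThreshold: boolean; // the only information returned to the site
}

async function canServeRestrictedContent(sessionId: string): Promise<boolean> {
  const response = await fetch(`/age-check/result?session=${encodeURIComponent(sessionId)}`);
  if (!response.ok) {
    return false; // fail closed: no confirmation means no access
  }
  const result = (await response.json()) as AgeCheckResult;
  return result.overThreshold;
}
```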

Under the terms of the Act, providers will have to say what technology they are using, and show they are enforcing their age limits. More detail is expected to be given by Ofcom (see 4, below), later this year. In June 2023, Ofcom said:

“Online pornography services and other interested stakeholders will be able to read and respond to our draft guidance on age assurance from autumn 2023. This will be relevant to all services in scope of Part 5 of the Online Safety Act.”
[Part 5 relates to online platforms showing pornographic content].

Lawyer Nick Elwell-Sutton notes that “whether age verification for children will be a mandatory requirement is still the subject of ongoing consultation, but many service providers may voluntarily seek to implement verification at age 18 to avoid the more stringent child safety requirements.”

Age assurance technology will likely need to conform to existing government standards, including the UK Digital Identity and Attributes Trust Framework (DIATF). Introduced in 2021, DIATF sets the rules, standards, and governance for its digital identity service providers, like Arissian who developed Luciditi. One of the key principles behind DIATF is the need for mutual trust between users and online services, a principle it shares with the OSA.

Iain Corby, executive director for the Age Verification Providers Association, a politically neutral trade body representing all areas of the age assurance ecosystem, commented: “For too long, regulators have neglected enforcement of age restrictions online. We are now seeing their attention shift towards the internet, and those firms which offer goods and services where a minimum age applies, should urgently implement a robust age verification solution to avoid very heavy fines.”

4. How will the OSA be enforced?

Not without difficulty. The OSA will be enforced by Ofcom, the UK’s communications regulator. Ofcom will prepare and monitor a register of providers covered by the law, which may include up to 100,000 companies.

The government funded Ofcom in advance to ensure an immediate start. However, providers will soon have to finance the new measures themselves through regular fees to Ofcom.

Ofcom will not be pursuing individual breaches of the law. It will instead focus on assessing how a provider is meeting the new requirements overall, with failures risking the fines detailed above. Ofcom will have powers of entry and inspection at a provider’s offices.

In the most extreme cases, with the agreement of the courts, Ofcom will be able to require payment providers, advertisers and internet service providers to stop working with a site, preventing it from generating money or being accessed from the UK.

Criminal action will be taken against those who fail to follow information requests from Ofcom. Senior managers can be jailed for up to two years for destroying or altering information requested by Ofcom, or where a senior manager has “consented or connived in ignoring enforceable requirements, risking serious harm to children.”

The new law will come into effect in a phased approach:

Phase 1: illegal harms duties.

  • Codes of practice are expected to be published soon after the Act becomes law.

Phase 2: child safety duties and pornography.

  • Draft guidance on age assurance is due to be published from autumn 2023.

Phase 3: transparency and user empowerment.

  • This is likely to lead to further laws covering additional companies.

5. How should businesses be preparing for the OSA?

While the OSA mainly targets social media platforms and search engines, its measures are of general application. In other words, any business could face enforcement if its actions fall within the scope of the new law.

Businesses concerned about the OSA are advised to carry out a risk assessment covering products and services, complaints procedures, terms of service, and internal processes and policies. Companies should also assess how likely their platforms/products are to be accessed by children.

In particular, businesses will need to identify potential development work, as many obligations imposed by the OSA will require technical solutions and backend changes. Further advice from a legal perspective is also available.

Conclusions

The Wild West nature of the internet is notoriously difficult for any one country to tame. Things will be easier for the UK now that the EU’s Digital Services Act has come into effect, forcing more than 40 online giants including Facebook, X, Google, and TikTok to better regulate content delivered within the EU.

Nevertheless, the UK faces a lonely battle with leading providers, especially those concerned about a part of the Act aimed at identifying terrorism or child sexual exploitation and abuse (CSEA). Until very recently, it had been expected that Ofcom would be able to insist that a provider uses “accredited technology” to identify terrorism or CSEA content. In other words, a service like WhatsApp – which allows users to send encrypted messages – must develop the ability to breach the encryption and scan the messages for illegal content.

No surprise then that WhatsApp isn’t happy at what has been described as a ‘backdoor’ into end-to-end encryption measures. In April, the platform threatened to leave the UK altogether. Signal and five other messaging services expressed similar concerns. In response, the government has assured tech firms they won’t be forced to scan encrypted texts indiscriminately. Ofcom will only be able to intervene if and when scanning content for illegal material becomes “technically feasible.”

Ofcom will also be able to compel providers to reveal the algorithms used in selecting and displaying content so that it can assess how platforms prevent users from seeing harmful material.

These challenges notwithstanding, campaigners from all sides agree that something is needed even if some remain sceptical about the OSA. Modifications were made to the Act in June, in part guided by the inquest into the death of Molly Russell. In 2017, Molly died at the age of 14 from an act of self-harm after seeing online images that, according to the coroner, “shouldn’t have been available for a child to see.” The OSA may not be perfect. But for the sake of children across the country, it’s at least a step in the right direction.

Want to know more?

Luciditi’s Age Assurance technology can help meet some of the challenges presented by the OSA. If you would like to know more, Contact us for a chat today.
