Tech firms must act to stop their algorithms recommending harmful content to children, and put in place robust age-checks to keep them safer, under detailed Ofcom plans announced today.

These are among more than 40 practical measures in Ofcom's draft Children's Safety Codes of Practice, which set out how the regulator expects online services to meet their legal responsibilities to protect children online.

The Online Safety Act imposes strict new duties on services that can be accessed by children, including popular social media sites and apps and search engines. Firms must first assess the risk their service poses to children and then implement safety measures to mitigate those risks.

This includes preventing children from encountering the most harmful content relating to suicide, self-harm, eating disorders, and pornography. Services must also minimise children’s exposure to other serious harms, including violent, hateful or abusive material, online bullying, and content promoting dangerous challenges.

Among the draft codes, Ofcom expects much greater use of highly effective age assurance, so that services know which of their users are children and can keep them safe.

Any service which operates a recommender system and poses a higher risk of harmful content must also use highly effective age assurance to identify which of its users are children.

They must then configure their algorithms to filter out the most harmful content from these children’s feeds, and reduce the visibility and prominence of other harmful content.

Children must also be able to provide negative feedback directly to the recommender feed, so it can better learn what content they don’t want to see.
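
To make these recommender requirements concrete, here is a minimal, illustrative sketch of how a feed might behave for a verified child user. It is not Ofcom's specification: the harm labels ("primary_priority", "priority"), the downranking factor and the class names are all assumptions introduced for illustration.

```python
# Illustrative sketch only - not drawn from the draft codes. Labels and weights are assumed.
from dataclasses import dataclass, field

@dataclass
class Item:
    item_id: str
    score: float                   # base relevance score from the recommender
    harm_label: str | None = None  # hypothetical label: "primary_priority", "priority", or None

@dataclass
class ChildFeedPolicy:
    downrank_factor: float = 0.2             # assumed weight for reducing prominence
    blocked_item_ids: set[str] = field(default_factory=set)

    def record_negative_feedback(self, item_id: str) -> None:
        # "Don't show me this" feedback from the child: exclude the item in future feeds.
        self.blocked_item_ids.add(item_id)

    def rank_for_child(self, candidates: list[Item]) -> list[Item]:
        ranked: list[Item] = []
        for item in candidates:
            if item.harm_label == "primary_priority":
                continue  # filter the most harmful content out of the feed entirely
            if item.item_id in self.blocked_item_ids:
                continue  # respect the child's negative feedback
            score = item.score
            if item.harm_label == "priority":
                score *= self.downrank_factor  # reduce visibility and prominence
            ranked.append(Item(item.item_id, score, item.harm_label))
        return sorted(ranked, key=lambda i: i.score, reverse=True)
```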

Meanwhile, all user-to-user services must have content moderation systems and processes that ensure swift action is taken against content harmful to children.

Search engines are expected to take similar action. Where a user is believed to be a child, large search services must implement a ‘safe search’ setting which cannot be turned off and which filters out the most harmful content.
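
As a rough illustration of what “cannot be turned off” could mean in practice (the field names and the age check here are hypothetical, not taken from the codes), a search service might enforce the safe-search flag server-side for users believed to be children:

```python
# Illustrative sketch only; settings and the child-identification signal are assumptions.
from dataclasses import dataclass

@dataclass
class SearchSettings:
    safe_search: bool = True

def effective_settings(requested: SearchSettings, believed_to_be_child: bool) -> SearchSettings:
    # For users believed to be children, safe search is enforced regardless of
    # what the client requests, so the setting cannot be turned off.
    if believed_to_be_child:
        return SearchSettings(safe_search=True)
    return requested
```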

Other broader measures require services to have clear policies on what kind of content is allowed and how content is prioritised for review, and to ensure that content moderation teams are well resourced and trained.
