Child safety lies at the heart of what eModeration does. So as Safer Internet Day celebrates its 11th anniversary, we're taking a look back at our 11+ years of moderating children's content to see what's changed and what's stayed the same.
Back in 2002, most children's content was set in virtual worlds. With the huge uptake of mobile phones and tablets in recent years, we've seen popular games such as Moshi Monsters and Club Penguin move onto these devices with great success.
There’s also a lot more educational content available. Gaming communities such as Quest Atlantis and Spore actively support learning and a growing number of games, such as National Geographic’s Animal Jam, incorporate educational content.
There is also greater awareness of internet safety among both adults and young people, and a mass of specialist safety projects. This means it is far easier for teens and tweens to connect with like-minded individuals and find support through organisations such as Lady Gaga's Born This Way Foundation, It Gets Better and BeatBullying.
Is online content more ‘dangerous’?
However, the increased volume of content available has an inevitable downside. With children enjoying virtually unlimited access to smaller devices, it is practically impossible for adults to supervise their use of phones and tablets.
‘Sexting’ among teens was unheard of back in the early noughties, and studies vary on how common the practice is now. However, our experience suggests it is increasingly common for young people to be exposed to sexual texts and pictures from their peers, or to be coerced into sending their own ‘sexts’.
And bullying online is more widespread and more subtle.
In response, we've seen scare-mongering by populist media and panic among adults about the dangers of the internet. In fact, most young people are using the internet and technology for good. Rather than panicking, parents and teaching professionals need to speak to youngsters and ask them directly how they are using technology and what they are using it for.
Better technology to tackle inappropriate content
The technology available to parents, educational organisations and brands online is far more sophisticated than it was 11 years ago. Intelligent filters such as those developed by Synapsify and Discuss.it now allow for the monitoring of actual conversations rather than the straightforward black/white lists of yesteryear. This helps adults and brands better understand how young people are interacting with online content and expressing themselves.
There are also image recognition programs, such as Impala, which can help law enforcement agencies search for particular rooms or locations when gathering evidence of grooming.
Is there still a role for human moderation?
So with all these filters, social listening tools and tagging programs in play, is there still a role for human moderation?
We firmly believe so. Online behaviour is nuanced, and moderating it requires more than a black-and-white reading of negative behaviour. Even artificial intelligence (AI) struggles to keep up with what is considered acceptable and unacceptable online: by the time an AI system has learned a nuanced colloquialism, that colloquialism has often fallen out of use. A human moderator will quickly pick up on a subtle reference and can share it with colleagues.
Human interaction, behaviour and conversation are not always cut and dried, and only trained professionals will ever be able to keep abreast of how language ebbs and flows over time. Technology has come close to understanding some forms of conversational activity, but you still need human interpretation to process the results and decide how to respond to this behaviour. In some extreme instances, that will involve contacting emergency or social services.
And don't forget that the current generation of digitally savvy teens and tweens are the ones who have figured out how to circumvent most technological safety barriers!
Context is king
It's also important that human moderators are local to the community they serve and understand the context of the language they are moderating. Yes, it's easy to spot the 'f-bomb', but kids are quick to develop their own language, or use deliberate misspellings, to circumvent a filter.
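As a rough sketch of why this happens (illustrative only, not any particular vendor's filter, and with a made-up blacklist), a plain word list misses even simple obfuscations, and bolting on crude normalisation only catches some of them:

```python
import re

BLOCKED_WORDS = {"idiot", "loser"}  # hypothetical blacklist entries for illustration
# Undo a few common character substitutions ("leetspeak") before checking the list.
LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "$": "s", "@": "a"})

def naive_filter(message: str) -> bool:
    """Flag a message only if a blocked word appears exactly as listed."""
    words = re.findall(r"[a-z0-9$@]+", message.lower())
    return any(word in BLOCKED_WORDS for word in words)

def normalised_filter(message: str) -> bool:
    """Undo a few substitutions, then check the same blacklist."""
    cleaned = message.lower().translate(LEET_MAP)
    words = re.findall(r"[a-z]+", cleaned)
    return any(word in BLOCKED_WORDS for word in words)

print(naive_filter("you are an idiot"))           # True  - exact match is caught
print(naive_filter("you are an 1d10t"))           # False - simple obfuscation slips through
print(normalised_filter("you are an 1d10t"))      # True  - normalisation catches this one
print(normalised_filter("you are an i.d.i.o.t"))  # False - but not every variation
```

The point isn't that filters are useless, but that every new spelling has to be anticipated in advance, whereas a human moderator recognises the intent behind it immediately.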
But when it comes to the psychology behind the bad words, you need someone who deeply understands that culture's colloquialisms and typical youth behaviour to successfully moderate an online children's environment. You also need well-trained individuals who are capable of handling “the worst of the worst” online content and skilled at triaging it. Not every moderator can do that.
We're proud to have helped pave the way in developing child safety initiatives. Our CEO Tamara Littleton helped put together the Home Office Good Practice Guidance for the Moderation of Interactive Services for Children back in 2005, and again in 2010 under UKCCIS.
And last year, we collaborated with specialist training company Moderation Gateway to provide content and expertise on UGC moderation and child safety for the digital industry’s first Moderation Foundation Training course. In the future, we hope to see this training become a requirement for moderators handling children’s content.
It’s been an interesting 11 years helping to make the internet a safer place. We look forward to the challenges that will face the UGC moderation and digital content industry in the next 11 years.
Thanks to my colleague Jennifer M Puckett for her expertise in putting this blog together.