Social Accountability: Who’s Really Responsible for Protecting Our Kids Online?
By Contributing Writer Randi Feigin
Especially in these days of social isolation and having a lot of extra time on our hands, the internet has many benefits. But it can also be a pretty ugly place. For teenagers, it can be downright dangerous.
Along with a worldwide web of wonders comes more than a few digital perils. With their kids’ safety at stake, parents can’t rely solely on social media platforms to implement the appropriate controls to protect their kids from cyberbullying, cyber predators, cyber stalkers, targeted advertising, scams, misinformation, hoaxes, and identity theft.
Just as important as teaching good table manners, parents need to start having conversations as early as possible about the risks of the online world. And, just like every other lesson parents strive to drill into their kids’ heads, it takes practice and consistency at home. But accountability reaches beyond parental control.
While parents are ultimately responsible for shaping their children’s online experience, that doesn’t entirely free social media platforms from accountability.
What’s at Stake
To be fair, social media companies don’t intend for their products to harm kids. And most platforms weren’t designed with kids in mind. But profitability and market share drive digital platform development, and children comprise an enormous share of the market. When you boil it down, keeping kids engaged online is big business with big profitability.
A Few Facts:
- Nearly 95% of teenagers have access to a smartphone and 45% say they’re online “almost constantly.”
- More than half of teenagers think they spend too much time on their phones, and nearly half think they spend too much time on social media.
- More than a third of teens visit social media sites several times a day.
- Online advertising targeted to attract the attention of kids is expected to reach $1.4 billion this year.
- In 2019, average daily screen use for the American teen, excluding school and homework, exceeded 7 hours.
- A recent study found teens who spend more time on their screens are more likely to experience depressive symptoms and suicidal behaviors.
- Data shows that 32% of teenagers aged 13–17 use Twitter, 51% use Facebook, 69% use Snapchat, 72% use Instagram, and a whopping 85% use YouTube.
It doesn’t take a mathematician to conclude that all this time online equates to a lot of unsupervised screen time. And with social anxiety disorders strongly linked to social media use, it’s no surprise that so many kids are suffering emotionally.
In September last year, the Journal of the American Medical Association published a cohort study of 6,595 U.S. teenagers aged 12–15 showing that increased time spent using social media per day was associated with the likelihood of depression, anxiety, aggression, and antisocial behavior. The study also found that kids who spend more than three hours per day on social media may be at heightened risk for mental health problems.
But, let’s keep in mind, it’s not all bad.
There are plenty of positives about our connected culture. Digital tools can be a great source of entertainment, they help our teens stay connected to family and friends, and instant access to information can expand their knowledge and worldview. In fact, 38% of teens experiencing moderate to severe symptoms of depression are using mobile well-being apps, which is encouraging news. Another bright spot: because today’s teenagers are increasingly active online and have become a lucrative source of income for tech giants, social platforms have placed far greater emphasis on keeping them safe.
What’s Being Done to Protect Our Kids
So, who’s really responsible for protecting our kids online? The industry is clearly challenged. But many companies are trying to make their products more palatable to all users, and safer for kids in particular.
Microsoft recently released Project Artemis to third-party online services that have chat functionality. It’s a tool to identify child predators that “scans historical chats for patterns that indicate a predator is grooming children for sexual abuse.” The company has been using it to monitor chats on its Xbox platform and may incorporate it into its other chat services, like Skype.
On another front, Instagram (which is owned by Facebook) recently rolled out a feature enabling business and creator accounts to set age restrictions for followers so that adult-oriented content isn’t served to children. Dr. Sameer Hinduja, director of the Cyberbullying Research Center, noted that Instagram has started “using AI (artificial intelligence) specifically to get users to pause, reflect, and edit their words when they are about to post something potentially offensive or hurtful via their new Comment Warning and Feed Post Warning systems.”
Instagram also recently launched Pressure to be Perfect, a collaboration with The Jed Foundation (JED), an organization focused on protecting emotional health and preventing suicide among teens and young adults. The campaign features two digital toolkits found online – one for parents and another for teens – that aim to kick off conversations about social pressures. The social platform is also offering tools that encourage teens to be their genuine selves on Instagram, minus the added pressure of being “perfect.”
This is all good news. But each company is doing its own thing, with no shared guidelines or cohesion across platforms, and that limits the impact of these restrictions and safeguards. To maximize it, these companies need to invest in better ways to ensure safety and civility online.
But, let’s be real. Social media companies are never going to care about kids as much as parents do.
What Are the Next Steps for Social Media Platforms?
To keep our kids safe online, social media companies and internet portals are going to need to hire expert staff, set up cross-platform guidelines (which currently don’t exist), and implement new technologies. This is not an easy task, and they are all making it up as they go.
Therefore, the onus is on parents and caregivers to teach their children how to navigate the online world and how to spot dangers.
Teaching good digital citizenship should be just like teaching basic safety skills and polite behavior. Discussions need to be part of everyday conversation, and they need to start when kids are young. Open the door to conversation about online safety, set age-appropriate boundaries on access, and explore the myriad tools and safety apps that not only help kids navigate the internet responsibly but also help keep them safe.
While social media platforms come to grips with their accountability in establishing safer and healthier online communities, it remains up to parents to ensure they’re establishing safe and healthy habits for online behavior with their kids.
Randi Feigin is CEO of SafeToNet Americas
SafeToNet Americas is a global technology leader dedicated to making the digital world safer for children to explore and enjoy.
Join the conversation! In your opinion, who’s really responsible for protecting our kids online? Share your thoughts in the comments section below.