Japan Grapples with Risks of Social Media in Pursuing Options for Protecting Children Online


Sogabe Masahiro

Australia’s sweeping ban on the use of social media by children under the age of 16 has attracted intense interest in Japan, where there is growing alarm over the hazards of online engagement. The author looks at countermeasures taken thus far and considers further steps for mitigating the risks.

Australia’s new law prohibiting the use of social media by children under 16 has been greeted with skepticism by many Japanese experts, who question whether such bans can be upheld and enforced. Nonetheless, a working group launched by the Children and Families Agency has begun studying Japan’s policy options, driven by widespread agreement that more must be done to protect our young people from the dangers of unfettered engagement with social media. I myself am a member of that working group, but the views expressed in this article are solely my own and do not necessarily reflect the views of others on the panel or in the Japanese government.

More than Six Hours a Day Online

A 2023 survey conducted by the Children and Families Agency found that 98.7% of Japanese minors between the ages of 10 and 17 used the internet, while 83.2% used smartphones. The survey also found that weekday online engagement in the same age group averaged 4 hours and 57 minutes. Among high school students, 99.6% used the internet, and 98.1% used smartphones. Average weekday use within this group came to 6 hours and 14 minutes, meaning that the average high school student spends more than a quarter of each day online.

There is no doubt that the internet has become an integral part of young people’s lives. On the positive side, its use can broaden their horizons, deepen their interests, and unlock their potential through access to a wide range of knowledge and experience as well as connections with people they would never encounter in real life. But serious dangers also lurk on the internet.

Of particular concern are the potential harms attending the use of social media. These dangers have grown ever more prominent over the past 15 years owing to social media’s ability to spread content virally while amplifying biases through algorithms that target content to user tastes and preferences. Although many countries have moved to address such risks, the need for stronger policies to protect children continues to grow.

Breaking Down the Risks

In 2021, the Organization for Economic Cooperation and Development (OECD) issued a fully revised edition of its 2011 Recommendation on Children in the Digital Environment, testimony to the changes sweeping the digital environment and the diverse and increasingly serious dangers facing children as a result. The report includes an extremely useful typology of risks based on four categories: content risks, conduct risks, contact risks, and consumer risks (see table).

Risks for Children in the Digital Environment

Content risks: hateful content, harmful content, illegal content, disinformation
Conduct risks: hateful behavior, harmful behavior, illegal behavior, user-generated problematic behavior
Contact risks: hateful encounters, harmful encounters, illegal encounters, other problematic encounters
Consumer risks: marketing risks, commercial profiling risks, financial risks, security risks
Cross-cutting risks: privacy risks, advanced technology risks, risks to health and well-being

Source: OECD.

Content risks are the harms stemming from exposure to hateful, harmful, or illegal content and disinformation. They include harm from offensive, abusive, and defamatory content, online fraud, and age-inappropriate violent and sexual content.

Conduct risks refer to the potential for fostering or inciting inappropriate online behavior, including cyberbullying and such “user-generated problematic behavior” as sexting (sending sexually explicit images of oneself).

Contact risks are the hazards resulting from personal encounters in the digital environment. The best-known examples of this category are probably sexual abuse and exploitation by adults with whom children come into contact online. Another manifestation of contact risk that has escalated sharply in Japan is the phenomenon of yami baito, or “shady part-time jobs.” (Recent months have witnessed a wave of crimes, from scams to armed robberies, perpetrated by young people who responded to ads for ostensibly legitimate jobs posted on social media.)

Consumer risks are the dangers posed by online commerce, including marketing practices that may take advantage of children’s inexperience and immaturity as consumers.

The revised OECD typology also includes several broad cross-cutting risk categories. Among these are “advanced technology risks,” such as the hazards of bias and prejudice built into AI’s predictive models.

Another cross-cutting category is “health and well-being risks.” Like most of the risks enumerated, these can affect adults as well as minors, but the potential impact on young people’s health is a matter of particular concern from a parental and public standpoint. Many parents worry about the mental-health effects of excessive engagement with the internet, especially social media. Several disturbing examples have received media coverage, including the case of a woman who traced her eating disorder to a video she saw as a high school student, in which a model explained how she lost weight by fasting.

What measures has the government taken to protect children from these hazards?

Difficulties of Regulating Smartphone Use

Japan’s only omnibus legislation for the protection of children online is the 2008 Act on Establishment of Enhanced Environment for Youth’s Safe and Secure Internet Use, revised in 2018. Simply put, the law aims to promote the use of filters and boost digital-media literacy. It requires mobile carriers to provide filtering of some sort in conjunction with service for users under the age of 18, unless the parents expressly ask for an exemption. However, the functionality of the filtering services or apps provided is left entirely to the individual carrier. In practice, they serve only to block access to specific websites and apps identified as age-inappropriate for reasons of content or contact risk; social media and messaging apps are, in principle, blocked by default.

With regard to digital-media literacy, a wide range of educational initiatives have been implemented by the public sector, as well as by companies, civic groups, and other organizations. Most are aimed at helping children build media literacy while using their smartphones or otherwise engaging with the internet.

Some local governments have also introduced ordinances in an effort to discourage or limit children’s use of smartphones, but the movement has not gained much traction. An ordinance enacted by Ishikawa Prefecture in 2009 called for parental “efforts” to keep cellphones and similar devices out of the hands of elementary and junior high school children, except for emergency purposes. Initially it attracted considerable attention as the first such statute in Japan, but a revised ordinance passed in 2022 shifted the focus to fostering prudent use of smartphones through digital-media literacy.

In 2020, Kagawa Prefecture enacted an “ordinance to combat internet and video game addiction” that limits gaming for children under 18 to no more than one hour per day on weekdays and 90 minutes a day on weekends and holidays. It also advises that elementary school students stop using smartphones by 9:00 pm and that older students do so by 10:00 pm. Although toothless and unenforceable, the ordinance nonetheless elicited a sharp backlash and was challenged in court by residents as a “violation of their right to use smartphones.” (The ordinance was upheld.) These experiences help explain why experts and policymakers in Japan have expressed skepticism about emulating Australia’s legal ban on the use of social media by children under 16.

Focus on Sexual Exploitation

Meanwhile, Japan is experiencing a wave of internet-mediated sexual harms involving children—primarily in the category of contact risks, with conduct risks as a secondary problem.

In addition to the abovementioned filtering requirements, Japan already has a plethora of statutes penalizing sexual offenses against children. These include relatively long-established provisions in the Penal Code, the Child Pornography Act (revised in 2014), the regulations governing dating sites, and the 2014 Revenge Porn Prevention Act. More recently, the 2023 revision of the Penal Code and the Code of Criminal Procedure raised the age of consent from 13 to 16 and established penalties for grooming. At the same time, Japan enacted a new criminal statute to penalize voyeurism, the Act for Punishment of Photographing or Filming Sexually Explicit Images.

Less attention has been paid to other risk categories, such as health and well-being. To be sure, the authorities have worked to alert the public to the negative effects smartphones and video games can have on vision and learning, and healthcare providers have been called on to treat smartphone and internet addiction as medical conditions. But there is no comprehensive policy for mitigating these risks.

Nor have privacy risks been adequately addressed. Although the topic has come up in the context of an upcoming revision of the Act on the Protection of Personal Information, there are no laws specifically designed to protect children’s privacy online.

Responsibility of Social Media Platforms

A more fundamental problem, however, is that social networking companies are under no legal obligation to take steps to protect children from dangerous content and contacts. The legal measures taken thus far have depended primarily on the kind of filtering that blocks access to specific sites and apps. Moreover, if these filters are removed or customized by users or their parents, social apps of any kind can be installed. This leaves many children exposed to the risks discussed above, since the platforms themselves are not legally required to monitor or restrict content according to age.

A number of countries have moved to require better moderation of content on social media, not only to protect children but, more generally, to combat defamation, misinformation, and other widespread abuses. Momentum for tighter restrictions had been growing in the United States, but passage of national legislation was delayed, and with the rise of the Trump administration, the likelihood of tighter regulation has become remote.

Japan’s Information Distribution Platform Act, enacted in 2024, makes some attempt to combat online defamation and other rights violations. But in terms of provider responsibility, it requires only that large-scale platforms take swift action to remove illegal or harmful content while improving the transparency of their content-removal policies and procedures. Because this leaves so much to the discretion of the social platforms, it is unclear how effective it will be.

I believe Japan must step up its efforts to protect children online by rethinking the regulatory environment in which social platforms operate. But this effort faces formidable challenges. Simply removing problematic posts, for example, can unduly restrict adults’ access to information, and restricting content by age requires reliable methods of verifying users’ ages. These are just a sampling of the obstacles to be overcome.

Furthermore, how deeply the government can insert itself into the conversation will depend greatly on the degree of public alarm over the risks social media poses to children. My hope is that this article will contribute in some small way to awareness of the issue.

(Originally published in Japanese. Banner photo: © Pixta.)


Sogabe Masahiro

Professor, Graduate School of Law, Kyoto University. Specializes in constitutional and information law. Born in 1974. Chair of the Broadcasting Ethics Committee of the Broadcasting Ethics and Program Improvement Organization and managing director of the Social Media Association of Japan, among other posts. Since November 2024, has chaired the Children and Families Agency’s working group on the protection of children online.
