
Can India adapt Australia’s age gating to regulate social media usage by children?

Children’s social media usage has always been a contentious issue, with concerns about online safety, mental health challenges, and exposure to age-inappropriate content. Traditionally, this was seen as a problem for parents to address, with parental consent being enough for a child to access social media in many jurisdictions. However, Australia has now pioneered legislation forbidding individuals under sixteen from accessing major social media platforms, regardless of parental permission. For the past week, the world has been debating Australia’s social media ban for children under 16. While some have lauded this bold move, others have criticized the hurried passage of the bill without adequate deliberation. Set to take effect in 2025, the law aims to protect the mental and physical well-being of Australian adolescents through strict age restrictions on social media use.
In India, the Digital Personal Data Protection Act, 2023 (DPDP), requires social media companies to obtain verifiable parental consent to process any personal data of a child (defined as anyone below the age of 18). However, since the DPDP rules are yet to be notified, how such consent will be obtained remains unclear. In August 2024, the National Commission for Protection of Child Rights (NCPCR) proposed a KYC-based system for verifying children’s age to ensure compliance with the DPDP Act. Against this backdrop, this article compares Australia’s outright ban with India’s position and considers whether India should adopt a similar approach, while also suggesting alternative measures.

Australia’s Age Gating on Social Media
The Online Safety Amendment (Social Media Minimum Age) Bill 2024 amended the Online Safety Act 2021, shifting the responsibility for user protection from parents and children to the social media platforms themselves. It introduced a broad definition of “age-restricted social media platforms,” encompassing a wide range of commonly recognized social media services. Australians will not need official ID for age assurance under the law; instead, platforms have 12 months to find suitable alternatives or face fines of up to $49.5 million. The 2024–25 Budget committed $6.5 million to trial age assurance technologies and determine their efficacy, maturity, and suitability for Australia. Platforms must also ringfence age verification data and destroy it immediately, with strict penalties for noncompliance. Additionally, the law mandates a review within two years of implementation, allowing the Government to recalibrate its policies as the behaviour of social media platforms and young users changes.

India’s KYC-Based Age Verification

While India has no specific age-gating law for children’s social media usage, Section 9 of the DPDP Act requires data fiduciaries such as social media platforms to obtain verifiable parental consent before processing the data of children under 18. The Act also prohibits tracking children’s behaviour or targeting advertisements at them. The NCPCR intends to seek proposals from the Ministry of Electronics and Information Technology (MeitY) to introduce KYC-based age verification mechanisms for children under the DPDP Act. MeitY has already engaged with social media companies to explore potential age verification solutions; options such as Digilocker and Aadhaar were considered but deemed ineffective for age verification. The effectiveness of KYC-based verification remains highly uncertain, however, as it involves collecting extensive personal details of children and creating large pools of potentially sensitive data in the hands of various platforms, data that could be misused or exposed in security breaches.

Lessons from Global Practices

Around the world, efforts to protect children online range from strict regulations such as the European Union’s General Data Protection Regulation (GDPR) to self-regulatory measures taken by technology companies. Age verification requirements exist in many jurisdictions, including the United States, the EU, France, Germany, Belgium, Norway, and the UK. Under the Children’s Online Privacy Protection Act (COPPA), online services and apps in the US must obtain parental consent before collecting data on children under 13. Similar obligations appear in the EU Digital Services Act and the UK’s Online Safety Act, which requires age assurance to prevent minors from seeing age-inappropriate content. In Norway, France, and Belgium, parental approval is required for children’s social media use below ages ranging from 13 to 16. These rules encourage platforms to enforce age verification procedures.

Balancing Safety, Accessibility and Parental Autonomy

The negative effects of social media on teenagers are widely reported around the world: cyberbullying, poor eating and sleeping habits, and mental health problems such as anxiety, depression, and low self-esteem. However, for young children, often described as digital natives, social media is a modern digital playground where they can connect with friends and extended family, express themselves, learn new skills, and join campaigns and activities. Moreover, laws such as Australia’s deprive parents of the right to decide what is best for their children, since not all children are equally vulnerable to online threats and many use these platforms productively.
In its March 2021 general comment on children’s rights in relation to the digital environment, the UN Committee on the Rights of the Child recognized children as digital rights holders and stressed enabling their engagement through digital literacy and digital citizenship. An outright ban on social media could therefore conflict with children’s internationally recognized rights of access and participation.

A Path Forward

Today’s children are born and brought up in a digitally connected world, and the internet is an integral part of their social development. An absolute ban may therefore push them to use social media covertly, potentially causing more harm than good. A cooperative strategy is needed that brings together legislators, technology providers, educators, and civil society to create an environment in which children can flourish safely online. Technologies such as AI-driven content filtering and blockchain-based verification systems hold potential to improve safety protocols. International cooperation with agencies such as the International Centre for Missing & Exploited Children (ICMEC) can foster the exchange of best practices and facilitate global standards for protecting children online. KYC-based age gating and verification require careful design, with attention to privacy, ease of use, and inclusiveness. Strong digital literacy programmes should also be built into education to teach children safe online navigation, privacy protection, and risk recognition. As things stand, Indian parents are tasked with overseeing their tech-savvy children’s social media usage even though many lack the necessary digital literacy. Laws like Australia’s, which place the onus on platforms to verify users and keep children off until the prescribed age, are therefore a welcome move.

Dr. Aswathy Prakash G. is an Associate Professor of Law at Saveetha School of Law, SIMATS, Chennai, and Dr. Alisha Verma is an Assistant Professor of Law at Symbiosis Law School, Pune.
