Australia is moving forward with its ambitious plan to restrict social media access for young people, with Twitch now added to the list of platforms subject to the ban. The new regulations, which take effect December 10th, aim to protect children under 16 from potential online harms. The decision comes as the country prepares to enforce its Social Media Minimum Age (SMMA) rules, affecting major tech companies and raising questions about how age can be verified online.
The addition of Twitch, a popular livestreaming service, means Australians under 16 will be unable to create new accounts after December 10th, and existing accounts will be deactivated by January 9th. According to a Twitch spokesperson, the company globally allows users aged 13 and up but requires parental involvement for anyone below the age of majority in their region. Pinterest, however, has been excluded from the initial ban.
Understanding Australia’s Social Media Ban
Australia’s SMMA legislation, passed roughly a year ago, requires platforms to take reasonable steps to prevent users under 16 from holding accounts; notably, the law provides no parental-consent exemption. The eSafety Commissioner, the country’s internet safety regulator, has determined that Twitch qualifies as an “age-restricted social media platform” because of its focus on social interaction and livestreaming. This contrasts with Pinterest, which the regulator regards as primarily a visual discovery tool.
The ban will initially apply to a range of prominent platforms, including Meta’s Facebook and Instagram, Snapchat, TikTok, X (formerly Twitter), Reddit, YouTube, and the Australian streaming service Kick; services such as YouTube Kids and Google Classroom are exempt. These companies will be obligated to implement measures to block underage users, a task that has proven technically and logistically challenging.
Industry Response and Age Verification Concerns
The implementation of the SMMA rules hasn’t been without controversy. Tech giants like Google and Meta previously requested a delay in enforcement, citing the need for further development and testing of age-verification technologies. They argued that reliable and privacy-respecting methods for confirming users’ ages were not yet readily available.
The eSafety Commissioner has provided a self-assessment tool to assist platforms in determining whether they are subject to the new regulations. However, the effectiveness of this tool and the broader age-verification process remain key concerns for both companies and privacy advocates. The debate centers on balancing child safety with the right to privacy and freedom of expression.
The challenge lies in building a system that verifies age accurately without collecting excessive personal data or opening new avenues for fraud. Some proposed solutions rely on government-issued identification, but these raise privacy concerns and may exclude people who lack such documentation. Other methods, such as biometric age estimation, are also under consideration but face ethical and technical hurdles.
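One commonly discussed design that addresses the data-minimisation concern is a token-based attestation scheme: a separate provider checks the user’s documents and issues a short-lived, signed token asserting only that the holder is over 16, so the platform never sees a birthdate or ID. The Python sketch below is purely illustrative; the provider, the shared secret, and the token format are all assumptions for the example, not part of any scheme the eSafety Commissioner has mandated.

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical shared secret with the attestation provider. A production
# scheme would use the provider's public key so platforms cannot mint tokens.
PROVIDER_SECRET = b"demo-secret-do-not-use"

def mint_demo_token(over_16: bool, ttl_seconds: int = 300) -> str:
    """Provider-side (demo only): sign a claim that carries no birthdate."""
    payload = json.dumps({"over_16": over_16, "exp": time.time() + ttl_seconds}).encode()
    sig = hmac.new(PROVIDER_SECRET, payload, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(payload).decode() + "." + base64.urlsafe_b64encode(sig).decode()

def verify_age_token(token: str) -> bool:
    """Platform-side: accept the token only if the signature checks out,
    the claim says over 16, and the token has not expired."""
    try:
        payload_b64, sig_b64 = token.split(".")
        payload = base64.urlsafe_b64decode(payload_b64)
        sig = base64.urlsafe_b64decode(sig_b64)
    except ValueError:  # malformed token (binascii.Error subclasses ValueError)
        return False
    # Verify the provider's signature before trusting any claim.
    expected = hmac.new(PROVIDER_SECRET, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(payload)
    return bool(claims.get("over_16")) and claims.get("exp", 0) > time.time()

print(verify_age_token(mint_demo_token(True)))   # True: valid over-16 claim
print(verify_age_token(mint_demo_token(False)))  # False: claim says under 16
```

The point of this design is what the platform never receives: no name, no document, no date of birth, only a yes/no claim with an expiry. Whether any real-world provider model along these lines would satisfy Australian regulators remains an open question.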
Global Trends in Online Safety Regulation
Australia is not alone in its efforts to regulate online safety for children and teenagers. Several other countries are grappling with similar issues and exploring different approaches. In the United Kingdom, the Online Safety Act’s child-safety duties came into force in July 2025, requiring platforms to proactively protect young users from harmful content, including material related to self-harm and eating disorders, or face substantial fines.
Meanwhile, the United States is seeing a patchwork of state-level legislation. As of August 2025, twenty-four U.S. states have enacted age-verification laws. Utah was the first to require app stores to verify users’ ages and obtain parental consent before allowing minors to download applications. This fragmented approach creates compliance complexities for companies operating nationwide.
These developments reflect a growing global awareness of the potential risks associated with children’s exposure to internet content, including cyberbullying, inappropriate material, and the negative impacts on mental health. Regulators are increasingly focused on holding platforms accountable for the safety of their users, particularly vulnerable young people.
The implementation of these regulations also raises questions about parental responsibility and the role of education in promoting safe online behavior. While platforms are being asked to do more, many argue that parents and schools also have a crucial role to play in guiding children’s online experiences.
Looking ahead, the success of Australia’s SMMA rules will depend on the effective implementation of age-verification measures and the cooperation of social media platforms. The eSafety Commissioner is expected to closely monitor compliance and provide guidance to companies as they navigate these new requirements. The coming months will be critical in assessing the impact of the ban on young Australians’ access to social media and the broader online landscape. Further clarification on enforcement mechanisms and potential penalties is also anticipated, as is ongoing debate about the best methods for age verification that respect both privacy and safety.