Federal officials have drafted plans to ban social media use for children under the age of 14 as part of the government’s forthcoming online harms bill, three sources told the Globe and Mail. The proposal would raise Canada’s current minimum from 13 to 14 and must first receive cabinet approval, with ministers expected to consider it as early as next month.
Officials drew up the measure after Australia implemented a ban for under-16s in December, a move that has prompted other countries, including Britain and Canada, to assess similar restrictions. The new online harms bill, a replacement for a bill introduced in 2024 that died when Parliament was dissolved, is expected within months.
What Ottawa is considering
The draft proposal would bar children under 14 from accessing major social platforms. Officials have also discussed whether a dedicated regulator would be needed to police the ban, and whether a slimmer regulatory structure than the one proposed in the previous bill, C-63, could be used, possibly centred on a single commission with powers to levy fines and handle complaints.
The government is separately weighing new privacy protections to shield youth under 18 from targeted marketing, part of an update to privacy laws that AI Minister Evan Solomon is expected to bring forward, according to the sources.
Why advocates and officials want a higher cutoff
Child-safety groups and some academics argue that current safeguards do not prevent serious harms to young people online. The Canadian Centre for Child Protection reported a rise in online violence targeting girls, including aggressive coercion and threats to distribute intimate images. From June 2022 through December 2025, the centre logged 127 reports of extreme online violence, most occurring in the past year.
The Canadian Centre for Child Protection supports the idea of a social-media delay as an additional layer to prevent serious injury and harms to children and youth. Regulations should clearly define what types of products and services companies may make available to children.
Lianna Macdonald, executive director, Canadian Centre for Child Protection
Experts say a ban could reduce exposure to grooming, scams and harmful content, but they also warn that a prohibition alone will not fix the underlying problems on platforms, such as algorithmic incentives and weak accountability.
For a ban to be effective, it would require a regulator who could not only police it and issue penalties for infractions, but also address wider harms on the internet affecting both children and adults. Without a regulator, when a child hits the age when social media is allowed, they could jump right into a social-media ecosystem that has no protections in it whatsoever.
Taylor Owen, Beaverbrook Chair in Media, Ethics and Communications, McGill University
Enforcement and regulatory options
Officials are debating how to enforce an age-based ban. One model under discussion is a single digital-safety commission, which would have powers to impose fines and act as a recourse for Canadians harmed online. The previous Bill C-63 had proposed a digital-safety commission and an ombudsperson to handle removal of child sexual-abuse material, non-consensual intimate images, and posts encouraging self-harm.
Some sources say the new bill could adopt a slimmer regulatory approach than Bill C-63, while still giving a regulator clear authority to penalize non-compliant platforms. Pediatricians and child-safety advocates have stressed that a comprehensive strategy must include an independent regulator.
The government needs a comprehensive strategy that is multipronged in its approach, and it cannot achieve that without an independent regulator. Australia had a regulator in place when it introduced its ban.
Charlotte Moore Hepburn, pediatrician and medical director, Child Health Policy Accelerator, SickKids
Industry reactions and proposals
Tech companies have responded in different ways. Meta has proposed handling age verification at the app-store level, which would require app stores to confirm users' ages when devices are set up. Google has raised reservations about broad bans and criticized proposals that would shift verification responsibility to app stores and parents.
Their proposal puts the onus of age verification solely on app stores and increases the burden on parents, letting Meta apps like Instagram off the hook while creating serious privacy risks for families and doing little to make kids safer online.
Kareem Ghanem, senior director, government affairs and public policy, Google
Meetings between Meta and federal officials in Ottawa have focused on technical fixes and verification options. Any approach that relies on device-level or app-store verification raises questions about privacy, feasibility and enforcement.
What this would mean for families
A federal ban would force parents, schools and app stores to adjust. Some families already delay giving children smartphones because of social and peer pressures and safety concerns. A higher minimum age could reduce early exposure to harmful content, but experts say it must be paired with stronger platform rules, education for young people and support for families.
- Canada's current minimum social-media age is 13, a rule often evaded with false birthdates.
- Australia’s under-16 ban has led millions of young users to leave major platforms.
- Officials are considering both an age limit and new restrictions on targeted marketing to under-18s.
- Debate continues over creating a regulator with enforcement powers versus a slimmer oversight body.
A spokesperson for Identity Minister Marc Miller said the government intends to act swiftly to better protect children online and that platforms have a role to play in meeting that challenge.
The draft proposals are subject to cabinet approval and further internal discussion. If approved, they would form part of a suite of digital-safety measures the government plans to introduce in the coming months.