Ontario watchdogs publish principles for responsible AI use

The Information and Privacy Commissioner and Human Rights Commissioner issued joint guidance for public and private sector AI use, urging assessments, transparency and the ability to shut systems down.

By Torontoer Staff

Ontario’s Information and Privacy Commissioner and Human Rights Commissioner have released joint guidance intended to steer how government, public institutions and private organisations use artificial intelligence. The document sets out principles and practical steps for assessing, monitoring and, if necessary, decommissioning AI systems.
The move comes as provincial AI regulations remain under development, and after the commissioners’ offices received complaints and launched investigations into AI deployments in schools, public services and employment screening.

Why the commissioners stepped in

Patricia Kosseim, Ontario’s Information and Privacy Commissioner, said the technology is evolving quickly and that institutions need to remember their existing legal obligations when deploying AI. Her office has already handled complaints, including a university student’s concern about AI-enabled online proctoring used during exams.

The deployment and development of AI in the public sector across Ontario is of great interest and priority for many institutions. We felt it was urgent to remind institutions of existing obligations.

Patricia Kosseim, Information and Privacy Commissioner of Ontario
The Human Rights Commission raised related concerns about bias and disproportionate impacts on historically marginalised groups. Commissioners said existing gaps in oversight can lead to unintended consequences unless organisations build safeguards into how they design and operate systems.

We want the people of Ontario to benefit from AI. But as a social justice oversight, we must take the lead in preparing citizens and institutions on the innovation, the monitoring, the implementation of these systems.

Patricia DeGuire, Ontario Human Rights Commissioner

Core principles and practical steps

The guidance groups expectations under several headings: validity and reliability, transparency and accountability, human rights protection, and security. It lays out both high-level principles and specific actions institutions should take before and after deploying AI tools.
  • Conduct validity and reliability assessments before deployment and on a regular basis
  • Ensure transparency about when and how AI is used, including in hiring and decision-making
  • Protect human rights and avoid systems that unduly target protest participants or violate Charter rights
  • Implement security measures to guard personal information
  • Establish processes to review negative impacts on individuals or groups
  • Temporarily or permanently turn off systems that are unsafe or cause harm
The commissioners emphasised that AI systems should be able to be paused or decommissioned if they produce unsafe outcomes, and that institutions must monitor and report on harmful effects. They recommended clear governance, documented risk assessments and accessible complaint procedures for people affected by AI decisions.

How this fits with provincial law

The guidance arrives while the provincial government progresses its own regulatory framework. In 2024 Ontario passed the Enhancing the Digital Security and Trust Act, which gives the province authority to regulate AI use in the public sector, but specific regulations have not yet been finalised.
Kosseim said current government materials set out only high-level principles that apply directly to provincial ministries. The commissioners’ document aims to fill an immediate need for clearer expectations across Crown agencies, hospitals, school boards and other public institutions.
Once the province adopts detailed regulations, the watchdogs expect those rules to set binding parameters for all public institutions and to serve as a model for private-sector actors.

Implications for employers, schools and the public

The report highlights areas where everyday Ontarians may encounter AI: employment screening, student monitoring, and public services such as welfare or housing decisions. Employers are already legally required to disclose to jobseekers when AI is used in hiring, and the Human Rights Commission flagged algorithmic bias as a growing risk for indirect discrimination.
For schools, the commissioners pointed to the proctoring complaint as an example of balancing system accuracy and accountability with privacy and fairness. For public services, the guidance recommends early and ongoing impact assessments before automated decisions affect people’s rights or access to programs.

Next steps and what to expect

The commissioners described their guidance as urgent. They called on institutions to adopt the recommended assessments, governance and accountability measures while awaiting provincial regulations. They also said the rules should inform private-sector practices, particularly where AI can shape employment or access to services.
The commissioners’ joint statement frames the objective plainly: encourage responsible use of a rapidly evolving technology so it benefits people and preserves public trust.
Institutions planning to deploy AI are now expected to document risk assessments, adopt monitoring procedures and maintain the ability to intervene if systems cause harm. Individuals who believe an AI system has affected them can look to the commissioners’ offices for complaints and guidance while broader regulations are finalised.
Tags: AI, privacy, human rights, Ontario, technology