Federal, provincial and territorial privacy authorities, including Newfoundland and Labrador’s Office of the Information and Privacy Commissioner, have launched a set of principles to advance the responsible, trustworthy and privacy-protective development and use of generative artificial intelligence (AI) technologies in Canada.
The authorities introduced the principles during an international symposium on privacy and generative AI that was hosted in Ottawa by the Office of the Privacy Commissioner of Canada.
While AI presents potential benefits across many domains and in everyday life, the regulators note that there are also risks and potential harms to privacy, data protection, and other fundamental human rights if these technologies are not properly developed and regulated.
Public bodies and custodians of personal health information in Newfoundland and Labrador have a responsibility to ensure that their use of AI complies with existing privacy laws; however, those laws will likely prove inadequate to protect the public interest effectively. Governments in Canada and around the world have begun to recognize this, and many are enacting specific laws and legislative amendments meant to address the unique challenges of AI. Commissioner Michael Harvey stated:
Artificial Intelligence is already embedded within some of the products and services we use, and governments and health care bodies will certainly want to harness the power and potential of AI in Newfoundland and Labrador for the benefit of citizens. It is important that there be laws in place to ensure that privacy and ethical considerations are appropriately addressed. We have made recommendations to government to amend both the Access to Information and Protection of Privacy Act, 2015 and the Personal Health Information Act to ensure that AI programs are appropriately assessed during their development, and that these assessments are subject to oversight from both a data ethics and privacy perspective.
The joint document launched by privacy regulators lays out how key privacy principles apply when developing, providing, or using generative AI models, tools, products and services. These include:
- Establishing legal authority for collecting and using personal information and, when relying on consent, ensuring that it is valid and meaningful;
- Being open and transparent about the way information is used and the privacy risks involved;
- Making AI tools explainable to users;
- Developing safeguards for the protection of privacy rights; and
- Limiting the sharing of personal, sensitive or confidential information.
Developers are also urged to take into consideration the unique impact that these tools can have on vulnerable groups, including children.
The document provides examples of best practices, including implementing “privacy by design” in the development of these tools and labelling content created by generative AI.
Principles for Responsible, Trustworthy and Privacy-Protective Generative AI Technologies
-30-
Media contact
Sean Murray
Director of Research & Quality Assurance
709-729-6309