Why Boards and Councils must lead the way in governing artificial intelligence’s role in regulation
If the last two years have shown us anything, it is that artificial intelligence is not only here and rapidly evolving, but that adoption rates across organizations and professions are steadily rising. AI is now part of the everyday reality facing many professions and, by extension, the regulatory bodies that oversee them.
AI is assisting health professionals with diagnostics, engineers with design models, and legal professionals with research and file preparation. AI is supporting teachers with lesson planning, accountants with compliance monitoring, and social workers with identifying at-risk individuals based on case history, behavioural patterns, and other factors. The reach of AI is expanding across virtually every regulated profession.
In parallel, regulatory bodies themselves may find internal uses for AI, from streamlining licensing decisions and improving registration processes, to identifying red flags during compliance reviews, assisting with communications and public engagement, and more.
With these changes, there is a growing need for Boards and Councils to think carefully about their role. Not to react with alarm, but to provide clear, principled guidance in an environment that is still evolving.
At MDR Strategy Group, we have had the privilege of supporting regulators in nearly every Canadian province and territory. Increasingly, we are being asked to engage directly with Boards and Councils to help navigate the strategic questions that AI brings. These conversations are multidimensional: they’re about exploring if, when, and how AI should be used, and in which contexts; how regulated professionals are using AI and whether they’re using it appropriately; and how AI could change the very nature and scope of professional practice. They’re also about ensuring that its use, by the regulator or by registrants, is thoughtful, measured, and, most of all, in the public interest.
AI belongs on your risk register
One of the most valuable steps a Board or Council can take is to include AI in its organizational risk register. Doing so signals awareness and forward thinking. Even in cases where AI is not yet in use, either by the regulatory body or the professions it oversees, placing it on the register ensures that its potential impact on safety, fairness, public confidence, and regulatory effectiveness receives regular attention and oversight, and doesn’t get lost in the shuffle.
Putting AI on the risk register does not mean viewing AI only as a threat. Rather, it means acknowledging its role as a fast-moving, constantly evolving influence in both regulated practice and regulatory operations. A risk register entry allows Boards to monitor developments, ask timely questions, and adapt as needed.
Policies must evolve alongside technology
As AI tools continue to develop, static rules will not be sufficient. Boards should encourage living policies: agile frameworks that evolve as new applications, benefits, limitations, and ethical questions emerge.
This includes setting direction on matters such as:
- If, when, and how the regulator may use AI internally
- Expectations around transparency, disclosure, and accountability
- The relationship between registrants’ use of AI, standards of practice, and codes of ethics
- The need for new competencies or professional declarations related to AI use
While some of these considerations overlap with operational decision-making, they are governance decisions. They speak directly to the public interest, and to the credibility of regulatory bodies as informed, forward-thinking institutions.
Equity, trust, and the public interest
Another key consideration for Boards is how AI systems might impact equity, accessibility, and human rights, particularly for equity-deserving communities. At the Board/Council level, oversight should include a review of bias mitigation strategies and alignment with broader commitments to equity and justice.
Furthermore, transparent communication about how AI is being used and how it is governed is essential to maintaining public trust. Boards should encourage the development and refinement of communications strategies alongside AI policies.
Oversight beyond internal use of AI tools
Many regulators will not develop their own AI systems; instead, they will purchase or license them from vendors. But outsourcing technology does not mean accountability can also be outsourced. Boards should therefore mandate and oversee the development of due diligence processes that examine how third-party tools were trained and validated, and whether vendors uphold the regulator’s standards, values, and regulatory obligations.
AI literacy at the Board level
Effective AI governance requires Boards/Councils to assess their own level of AI literacy. While deep expertise may not be necessary or realistic, they must consider whether they have, or need to develop, a more robust understanding of AI so that the right questions are asked and the answers can be meaningfully understood.
Starting the conversation
To support this work, we have developed a conversation and training framework designed to help Boards and Councils initiate, structure, and sustain informed oversight discussions on AI.
The framework is structured to surface immediate oversight concerns as well as longer-term governance implications. It includes a review of key developments, the identification of sector-specific risks and opportunities, and practical steps to guide ethical and strategic decision-making.
Here are some of the questions we explore with Boards in these sessions:
- What expectations do the public, licence-holders, and other key stakeholders have for AI use in your sector, and how are you preparing to meet them?
- How will the regulator contribute to sector-wide consistency, fairness, and leadership on AI issues across jurisdictions?
- Is AI being used within the professions you license and regulate, and if so, in what ways and with what impact?
- How might the use of AI by licensed professionals change the risks, standards, or scope of regulated practice?
- Is the regulator currently using AI in any internal functions? If so, how? If not, why not?
- What are the reputational and ethical considerations in either case?
- What role should the Board play in setting parameters for acceptable use?
- How will you know when oversight mechanisms need to be reviewed or strengthened?
These are not one-time questions. They are part of an ongoing conversation that each Board and Council will need to return to regularly as AI continues to evolve.
A path forward
Boards and Councils are not expected to become experts in artificial intelligence; however, they do need to be confident in asking the right questions, understanding the implications, and setting direction that aligns with their mandate to serve and protect the public.
AI is both a tool and a significant shift. Regulatory Boards and Councils that engage with it thoughtfully will be the ones best positioned to lead through change.
If your Board is ready to begin that conversation, we would be pleased to support you.
M. Daniel Roukema, CEO | Melissa Peneycad, Director, Public Engagement and AI Strategy | MDR Strategy Group Ltd.