Executive Summary
Artificial Intelligence has rapidly moved from the margins of technological development to the very centre of professional practice in law, finance, and fiduciary services. For Canadian Trust and Estate Practitioners, the possibilities created by this shift are significant. Properly applied, AI can accelerate the drafting of legal documents, simplify administrative procedures, strengthen compliance processes, and improve communication with clients.
Yet with these opportunities come two core risks that are especially dangerous in a fiduciary context. The first is the risk of confidentiality breaches. Consumer-grade AI tools transmit and may store prompts on external servers, which makes them unsuitable for handling wills, trust deeds, financial statements, or any other client materials that must remain confidential. The second is the risk of errors and hallucinations. Large language models generate text that often appears persuasive and authoritative but may contain fabricated case citations, misapplied provincial law, or miscalculated financial figures.
The future for fiduciaries will not involve rejecting AI entirely but learning to adopt it responsibly. That means relying only on ring-fenced, enterprise-grade systems that provide contractual guarantees around confidentiality, insisting on rigorous human review of every output, and embedding the use of AI within strong governance frameworks.
Part I: What AI Is and How It Works
Artificial Intelligence describes technology that performs tasks normally requiring human intelligence. Generative AI is a subset of this field that can create new text, images, and other forms of content by identifying and replicating patterns within massive training datasets.
At the centre of this development are large language models. These models are trained on billions of words and generate their outputs one token at a time, choosing each token probabilistically based on what is most likely to follow. This produces fluent and often highly persuasive responses, but it is important to understand the limitations. Large language models do not retrieve facts from databases, do not check their answers against authoritative sources, do not ensure consistency across outputs, and have no conception of what is true or false. Accuracy is therefore incidental, not guaranteed.
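To make the mechanics concrete, the simplified Python sketch below shows how a model chooses its next token: raw model scores are converted into probabilities, and one token is sampled. The vocabulary and scores are invented for illustration; real models operate over vocabularies of tens of thousands of tokens.

```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Pick the next token by sampling from a softmax distribution.

    `logits` maps candidate tokens to raw model scores. Nothing here
    checks whether a token is factually appropriate -- only how likely
    it is to follow the text so far.
    """
    scaled = {tok: score / temperature for tok, score in logits.items()}
    max_score = max(scaled.values())  # subtract the max for numerical stability
    exp_scores = {tok: math.exp(s - max_score) for tok, s in scaled.items()}
    total = sum(exp_scores.values())
    probs = {tok: e / total for tok, e in exp_scores.items()}
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

# Hypothetical scores for the next word after "The trustee shall ..."
print(sample_next_token({"distribute": 2.1, "invest": 1.7, "resign": 0.2}))
```

Note what is missing from the sketch: at no point is the sampled token checked against any source of truth. Fluency is built in; accuracy is not.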
This limitation explains why hallucinations occur. A practitioner might ask for a summary of recent British Columbia trust cases, and the AI may produce a list of case names and citations that look entirely authentic. However, when those citations are checked against legal databases such as CanLII, they may prove to be entirely fabricated. This risk has already caused embarrassment in Canadian court proceedings and has led to sanctions in other jurisdictions.
Part II: Why AI Matters for Canadian TEPs
For Canadian fiduciaries, AI represents both promise and peril.
On the promise side, AI can dramatically improve efficiency. It can create draft documents in minutes that previously took hours. It can automate repetitive tasks such as onboarding forms or know-your-client (KYC) processes. It can help explain complex estate processes to clients in plain language. Legal technology providers such as Clio and Thomson Reuters are already embedding AI directly into their platforms, which means many practitioners will encounter it whether they actively seek it out or not.
On the peril side, fiduciaries cannot ignore the heightened risks. Confidentiality is absolute and cannot be compromised, yet consumer AI tools cannot provide sufficient guarantees that client data will remain private. Errors are unacceptable in fiduciary work, yet hallucinations are an inherent feature of large language models. Ethical questions about bias, intellectual property, and the loss of training opportunities for junior staff add further complexity.
Part III: Confidentiality and Error Risks
Confidentiality is the most pressing concern. When client documents are entered into consumer AI systems, practitioners have no meaningful control over where that information is stored or processed. Data may be transmitted across borders, creating further legal complications. Because fiduciaries are bound by the highest standards of confidentiality, even a single breach could destroy trust in the practitioner and their firm.
Errors are the second major risk. Large language models are not designed to produce truth. They are designed to produce words that are likely to follow one another. As a result, they may produce fabricated case law, misapply statutory provisions, or misstate numbers such as executor fees or tax liabilities. These outputs can be dangerously convincing and difficult to detect without careful human review.
The safeguards are clear. Practitioners should use only enterprise AI systems that provide ring-fenced environments, contractual commitments not to train on client data, and guarantees of confidentiality. They should ban the use of consumer AI tools in their organizations. Finally, they must commit to human verification of every AI-generated output, with senior-level review required for higher-risk work such as legal analysis or tax opinions.
Part IV: Practical Applications for TEPs
When used responsibly, AI can provide meaningful assistance in trust and estate practice:
Drafting and Summarization. AI can produce first drafts of wills, trust deeds, executor reports, or client letters. These drafts are not final products and cannot be sent directly to clients, but they can significantly reduce the amount of time required to begin the drafting process.
Research. AI can help locate relevant cases, statutes, or precedents more quickly. However, hallucinations are common, and every reference must be checked against authoritative databases such as CanLII, Lexis, or Westlaw before being relied upon.
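One way to enforce that discipline is to treat every AI-supplied citation as unverified until it has been confirmed. The Python sketch below illustrates the workflow; the "database" here is a toy set standing in for a genuine search of CanLII, Lexis, or Westlaw, and the second citation is deliberately invented to play the role of a hallucination.

```python
# Toy stand-in for an authoritative database such as CanLII. In practice
# this check is a manual or service-based search, not a set lookup.
VERIFIED_CASES = {
    "Pecore v. Pecore, 2007 SCC 17",
}

def verify_citations(citations: list[str]) -> list[str]:
    """Return citations that could NOT be confirmed; treat them as
    probable hallucinations until a human verifies otherwise."""
    return [c for c in citations if c not in VERIFIED_CASES]

ai_output = [
    "Pecore v. Pecore, 2007 SCC 17",
    "Smith v. Jones Estate, 2019 BCSC 999",  # invented example citation
]
for unconfirmed in verify_citations(ai_output):
    print(f"UNCONFIRMED - do not rely on: {unconfirmed}")
```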
Administration. The day-to-day work of fiduciary administration is heavily paperwork-driven. AI tools can assist with completing probate forms, conducting conflict checks, verifying identities, creating financial statements, and monitoring deadlines. These efficiencies can free staff to focus on higher-value work, but human oversight remains necessary at every stage.
Client Engagement. Many clients and families expect immediate answers to basic questions. AI-powered chatbots can be programmed to handle frequently asked questions and explain estate processes in clear language. However, such systems must be carefully restricted to avoid providing misleading advice and should be designed to escalate complex queries to human professionals.
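As a sketch of that design, the following Python assistant answers only from a fixed, pre-approved list and escalates everything else to a person. The questions and answers are invented placeholders, not recommended wording; a real deployment would use firm-approved text reviewed by counsel.

```python
# Minimal sketch of a restricted FAQ assistant with human escalation.
# The FAQ entries below are invented placeholders for illustration.
APPROVED_FAQ = {
    "what is probate": "Probate is the court process that confirms a will "
                       "and the executor's authority. General information only.",
    "how long does probate take": "Timelines vary by registry and estate; "
                                  "we can discuss your situation directly.",
}

def answer(question: str) -> str:
    key = question.lower().strip().rstrip("?")
    if key in APPROVED_FAQ:
        return APPROVED_FAQ[key]
    # Anything outside the approved list goes to a person, never the model.
    return "That question needs a professional's attention; connecting you with our team."

print(answer("What is probate?"))
print(answer("Should I contest my mother's will?"))  # escalates
```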
Part V: Broader Ethical and Legal Issues
The adoption of AI also raises broader ethical and legal questions that extend beyond individual practice:
Intellectual property law has not caught up with the reality of AI-generated outputs. In many cases, the content produced by AI cannot be traced to an identifiable source, and ownership is unclear.
Data privacy is another serious concern. Fiduciaries must know how the AI system uses, stores, and retains input data. Clients must be informed about the use of AI and given the ability to withdraw consent if desired.
Ethical concerns also abound. AI systems can reflect and amplify biases in their training data, creating outputs that may be discriminatory or misleading. They can be used for malicious purposes, including fraud and deepfakes.
Contractual frameworks lag behind these developments. Most agreements do not yet contain clauses addressing the use of AI, leaving questions of liability unresolved. If harm occurs, it may not be clear who is accountable: the practitioner, the AI provider, or both.
Finally, the regulatory environment is evolving quickly. More than one thousand AI policy initiatives are underway globally, and Canadian fiduciaries should expect increased scrutiny and higher standards of care in the near future.
Part VI: Governance and Due Diligence
To adopt AI responsibly, fiduciaries need strong governance frameworks. STEP has suggested a governance process that begins with an AI governance council at the leadership level. This council sets strategy and oversees implementation.
A formal organizational AI policy must be developed to define the purpose and scope of AI use, establish ethical principles, and set data security requirements. A risk matrix should classify use cases as low, medium, or high risk. High-risk uses should require additional approvals and senior oversight. Training must be provided to all staff so they understand both the benefits and the limitations of AI. Finally, regular audits should be carried out to ensure that the policy is followed and that confidentiality and error rates are monitored.
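A risk matrix need not be elaborate. The sketch below shows one hypothetical shape it might take, with unclassified uses defaulting to high risk until the governance council reviews them; the tiers and example use cases are illustrative assumptions, not STEP's published categories.

```python
from enum import Enum

class Risk(Enum):
    LOW = "staff discretion, spot-checked"
    MEDIUM = "supervisor sign-off required"
    HIGH = "senior review and governance-council approval required"

# Illustrative classification only; each firm must build its own matrix.
USE_CASE_RISK = {
    "internal meeting notes summary": Risk.LOW,
    "client letter first draft": Risk.MEDIUM,
    "legal analysis or tax opinion": Risk.HIGH,
}

def required_approval(use_case: str) -> str:
    # Unlisted uses default to HIGH until the council classifies them.
    tier = USE_CASE_RISK.get(use_case, Risk.HIGH)
    return f"{use_case}: {tier.name} risk - {tier.value}"

print(required_approval("client letter first draft"))
print(required_approval("drafting a trust deed"))  # unclassified -> HIGH
```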
At a minimum, any AI policy should: define its purpose and scope, embed fairness and transparency, ensure strong data management, establish accountability, maintain compliance, provide training, require regular audits, and create pathways for remediation.
Part VII: Case Studies
Consider two contrasting scenarios.
In the first, a practitioner uploads a client’s trust deed into a free version of ChatGPT to obtain a summary. The AI generates a plausible-looking summary but omits key provisions and misstates trustee powers. Because the document was entered into a consumer AI system, confidentiality is compromised. The result is a breach of duty and serious reputational harm.
In the second, a firm adopts Microsoft Copilot in a closed enterprise environment with Canadian data residency. Staff enter only anonymized documents, and all outputs are carefully reviewed by senior counsel. The firm doubles its efficiency while protecting client confidentiality and ensuring accuracy.
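The anonymization step in that second scenario can be partially automated, though never fully. The following deliberately naive, regex-based redactor illustrates the idea; patterns like these will miss many identifiers, so purpose-built redaction tools and human review remain essential.

```python
import re

# Naive illustrative patterns only; no substitute for purpose-built
# redaction tools plus a human check before anything leaves the firm.
PATTERNS = {
    "sin": re.compile(r"\b\d{3}[- ]?\d{3}[- ]?\d{3}\b"),        # Canadian SIN
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\(?\b\d{3}\)?[- .]?\d{3}[- .]?\d{4}\b"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

print(redact("Reach the settlor at jane@example.com or (604) 555-0123."))
```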
Part VIII: Adoption Roadmap
The adoption of AI should be deliberate and phased.
Firms should begin by exploring AI with anonymized, non-confidential data to build familiarity. They can then move to piloting AI in low-risk tasks under close supervision.
The next stage is to create a policy that bans consumer tools, mandates enterprise AI solutions, and requires human review. Only after these measures are in place should firms scale the use of AI into practice management and administration.
Finally, firms must audit their use of AI to ensure that confidentiality safeguards are working and that error rates are being tracked.
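Auditing AI use presupposes structured records. The sketch below shows one hypothetical shape such a record might take, and how a simple error-rate figure could be derived from it; the field names are illustrative assumptions, not a prescribed standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIUsageRecord:
    """Illustrative audit entry for one AI-assisted task."""
    task: str                # e.g. "probate form first draft"
    tool: str                # approved enterprise system used
    reviewer: str            # human who verified the output
    errors_found: int        # defects caught during review
    confidential_data: bool  # must be False outside ring-fenced systems
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

records = [AIUsageRecord("client letter draft", "enterprise-copilot", "J. Smith", 2, False)]
error_rate = sum(r.errors_found > 0 for r in records) / len(records)
print(f"Share of tasks needing corrections: {error_rate:.0%}")
```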
Part IX: The Human Factor
AI can accelerate drafting, simplify research, and automate administration. But it cannot provide empathy to grieving families, cannot exercise nuanced judgment in complex disputes, cannot guarantee confidentiality, and cannot reliably detect factual or judgment errors.
For these reasons, humans remain indispensable. The role of the professional is not diminished but redefined. AI should be understood as a tool to handle repetitive or mechanical tasks, freeing practitioners to focus on the human aspects of fiduciary practice: judgment, strategy, and trust.
Part X: Future Scenarios
In the short term, over the next one to two years, AI will become increasingly embedded in fiduciary platforms. Breaches and hallucinations will make headlines, and regulators will issue early guidance.
In the medium term, over three to five years, disclosure of AI use will likely become mandatory. Malpractice insurers will require firms to adopt enterprise safeguards. Firms will be expected to track and report on error rates.
In the long term, within five to ten years, ring-fenced fiduciary AI systems will become universal. Firms that fail to adopt them may even be regarded as negligent. Practitioners will focus more heavily on higher-value work such as strategic planning, conflict resolution, and stewardship of family legacies.
Prompting Principles for Practitioners
Even when used responsibly, AI is only as effective as the instructions it receives. Crafting prompts is therefore a professional skill. The following eight principles provide a framework for effective prompting in fiduciary practice.
- Define Roles Clearly. Specify who is speaking, who the audience is, and what role the AI should play. For example, “Act as a trust officer drafting a summary for a family client with no legal background.”
- State the Objective. Every prompt should describe the purpose of the task, whether that is to summarize a document, draft a letter, or produce a checklist.
- Provide Context. Contextual information such as jurisdiction, family structure, or relevant restrictions should be included in the prompt so that the AI can generate an accurate response.
- Break the Task into Steps. Complex assignments should be divided into smaller components, such as identifying parties, summarizing trustee powers, and outlining distribution provisions separately.
- Establish Output Guidelines. Directions about format and tone help ensure consistency. For example, the practitioner might specify that the output should be in plain language or in the format of a professional letter.
- Use Examples. Examples of the desired outcome provide useful reference points. Showing the AI what a good summary looks like helps it to replicate that style.
- State Prohibitions Explicitly. Prompts should also specify what the AI must not do, such as fabricating case citations or offering legal advice.
- Define Success Criteria. Finally, prompts should establish benchmarks for success. For example, “The summary should be under 500 words, highlight the three key trustee powers, and be written at a Grade 10 reading level.”
Prompting should be treated as giving instructions to a junior colleague: clear, contextual, bounded, and subject to review.
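Putting the eight principles together, a prompt can be assembled systematically rather than improvised. The template below is one hypothetical way of doing so in Python; the section labels and ordering are illustrative, not a prescribed standard.

```python
def build_prompt(role: str, objective: str, context: str, steps: list[str],
                 output_format: str, prohibitions: list[str], success: str) -> str:
    """Assemble a prompt covering the principles above; the labels and
    ordering are illustrative assumptions, not a fixed format."""
    step_lines = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
    dont_lines = "\n".join(f"- {p}" for p in prohibitions)
    return (
        f"Role: {role}\n"
        f"Objective: {objective}\n"
        f"Context: {context}\n"
        f"Work through these steps:\n{step_lines}\n"
        f"Output format: {output_format}\n"
        f"Do not:\n{dont_lines}\n"
        f"Success criteria: {success}"
    )

print(build_prompt(
    role="Trust officer writing for a family client with no legal background",
    objective="Summarize the attached trust deed",
    context="British Columbia; discretionary family trust; two adult beneficiaries",
    steps=["Identify the parties", "Summarize trustee powers",
           "Outline distribution provisions"],
    output_format="Plain-language summary in the tone of a professional letter",
    prohibitions=["Fabricate case citations", "Offer legal advice"],
    success="Under 500 words, three key trustee powers highlighted, "
            "Grade 10 reading level",
))
```

However the prompt is assembled, the output it produces remains a draft subject to the same human review as any other AI-generated material.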
Conclusion
AI is reshaping fiduciary practice in Canada. For Trust and Estate Practitioners, it offers the opportunity to improve efficiency, scale services, and enhance client experience. But it also carries profound risks.
Two of the greatest risks are breaches of confidentiality and unchecked errors. The only responsible path forward is to adopt enterprise AI tools that provide contractual safeguards, insist on human review of all outputs, and implement robust governance frameworks.
Confidentiality and accuracy are not optional. They are the foundation of fiduciary legitimacy.
Explore our live and evolving database of AI tools and practitioner feedback tailored to Canadian TEPs: AI Options for Canadian TEPs with feedback
Nicole Garton is president and co-founder of Heritage Trust.
Recognized by Best Lawyers in Canada for trusts and estates and family law, she previously chaired the Canadian Bar Association Wills and Trusts Subsection (Vancouver).
Contact Nicole by email or phone at (778) 742-5005

Heritage Trust is a leading non-deposit taking financial institution, regulated by the BC Financial Services Authority (BCFSA), a government agency of the Province of British Columbia. Heritage Trust offers caring and professional executor, trustee, power of attorney, committee, escrow and family office services to BC resident clients.
We welcome you to contact us.