Escalation & Incident Pathway

When to Report (if ANY apply)

  • Personal, confidential, or sensitive data involved.
  • Possible privacy, security, legal, or AI policy breach.
  • Harmful or incorrect AI output used in decisions.
  • Operational, reputational, or compliance risk.

Not sure? Report it anyway: privacy@tru.ca or cyber911@tru.ca.

Do This Immediately

  • Stop using the AI tool.
  • Preserve prompts, outputs, and logs.
  • Isolate or disconnect the affected system (if safe to do so).
  • Report the incident right away.
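The "preserve" step can be sketched as a small script that copies evidence files into a timestamped folder and records a SHA-256 hash of each copy, so integrity can be verified later. This is an illustration only, not an official tool; all file names and paths are hypothetical.

```python
# Illustrative sketch only: preserve AI-incident evidence with integrity hashes.
# File names and paths are hypothetical examples.
import hashlib
import shutil
from datetime import datetime, timezone
from pathlib import Path

def preserve_evidence(files, dest_root="incident_evidence"):
    """Copy each file into a timestamped folder and log its SHA-256 hash."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    dest = Path(dest_root) / stamp
    dest.mkdir(parents=True, exist_ok=True)
    hashes = {}
    for f in map(Path, files):
        copy = dest / f.name
        shutil.copy2(f, copy)  # copy2 also preserves file timestamps
        hashes[f.name] = hashlib.sha256(copy.read_bytes()).hexdigest()
    # Write a simple hash manifest alongside the copies.
    manifest = dest / "SHA256SUMS.txt"
    manifest.write_text("\n".join(f"{h}  {n}" for n, h in hashes.items()))
    return dest, hashes
```

Run it once over the saved prompts/outputs, then leave the folder untouched until Information Security advises otherwise.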

Who to Contact

  • Privacy Officer: privacy@tru.ca
  • Director of Information Security: cyber911@tru.ca
  • Your supervisor.

Do not contact external parties unless directed.

What Happens Next

  • Acknowledgement (within 1 business day)
  • Assessment & containment (Information Security + Privacy)
  • Support provided by the relevant departments (P&C, Finance, Privacy/Legal, Information Security)
  • Follow-up on the source of the breach to prevent recurrence (Privacy and Information Security)

Serious incidents escalate to senior leadership.

Include When Reporting

  • What happened & when
  • AI tool or system
  • Data or individuals affected
  • Actions already taken
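A report can be drafted consistently by filling in the four fields above. The sketch below (the field names are our own, not an official reporting form) assembles them into an email body and refuses to produce a report while any field is blank.

```python
# Illustrative sketch only: assemble the four reporting fields into an email body.
# Field names are examples, not an official reporting form.
REQUIRED_FIELDS = ("what_happened_and_when", "ai_tool_or_system",
                   "data_or_individuals_affected", "actions_already_taken")

def draft_report(**fields):
    """Return an email body covering every required field, or raise if one is blank."""
    missing = [f for f in REQUIRED_FIELDS if not fields.get(f, "").strip()]
    if missing:
        raise ValueError(f"Report incomplete; still needed: {', '.join(missing)}")
    lines = [f"{f.replace('_', ' ').capitalize()}: {fields[f]}" for f in REQUIRED_FIELDS]
    return "\n".join(lines)
```

Failing loudly on a blank field mirrors the guidance: a report missing any of the four items slows down assessment and containment.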

Key Reminder

Reporting is expected, supported, and non-punitive. Early reporting protects people, data, and the organization.

AI Escalation & Incident Pathway — Detailed Version

What should I consider before using an AI tool?

  • Has the AI tool been vetted or whitelisted for use by the organization? Has it been blacklisted?
  • Will the AI be given personal or sensitive information? If yes, a Privacy Impact Assessment (PIA) is required.
  • Will the vendor use your data to further train its AI model? If yes, is there an opt-out option?
What counts as an AI-related incident?

  • Personal, confidential, or sensitive data was exposed.
  • A legal infraction occurred: the AI resulted in the disclosure, retention, or processing of protected information in violation of privacy laws, contractual agreements, or institutional policy.
  • Harmful or incorrect AI advice was used in decision making.
  • Operational, reputational, or compliance violations occurred.
  • Promptly contact the Privacy Officer (privacy@tru.ca) and/or the Director of Information Security (cyber911@tru.ca) by phone or email, along with your supervisor.
  • You will be advised on what to do next based on the incident, which may include contacting the individuals affected. For legal, regulatory, and security reasons, always seek the advice of the Privacy or Information Security departments before any other (notification) action is taken.
  • Refer to the Privacy Breaches procedure “Notice requirements” available on the website.
  • Acknowledgement within 24 hours; response priority depends on the assessed severity of the incident. Each incident involves unique circumstances, so there is no set resolution time.
  • You will be given guidance on documenting the incident and the actions taken.
  • Incidents involving AI tools have specific “what to keep” requirements. For details, see “WHAT TO SAVE” at the end of this page.
  • Once contacted, the Information Security department will confirm the incident (true positive) and begin triage, containment, and eradication procedures. The Privacy Office will assess the severity of any personal-information exposure and advise on next steps, including whether affected individuals require notification and/or the Office of the Information & Privacy Commissioner for BC (OIPC) needs to be involved.
  • High-severity incidents may trigger activation of the Incident Response Plan (IRP) based on the nature and impact of the event (e.g., ransomware).
  • Support may include legal and risk advice, financial recommendations and guidance, health and welfare counseling, and technical assistance.
  • At the close of an investigation, a root cause post-mortem identifies preventative recommendations, which may include changes to processes, policy, technology, and people development.
  • The seriousness of a reported incident is governed by our “Incident Response Plan – Incident Classification and Escalation Process”. Incidents are classified as tier 1 or tier 2, ranging in priority from P1 (high) to P5 (low). Routine low- to medium-severity incidents are typically handled directly by Information Technology Services. High-severity incidents are escalated to senior leadership, legal counsel, and privacy officers. Critical-severity incidents additionally involve external partners, cyber insurance providers, law enforcement, and regulatory bodies.

What to Save

Name of the AI tool used (e.g., ChatGPT, Copilot, Gemini), including:

  • URL to tool.
  • URL to company privacy statement and/or security measures.
  • Whether a privacy impact assessment (PIA) was done.

How it was accessed

  • Web browser.
  • Integrated into another app (IDE, Office, browser extension).

Account used

  • Work account vs personal account.
  • Email/username tied to the AI tool.

Exact text pasted into the AI tool

  • Prompts.
  • Follow-up questions.

AI responses

  • Copy/paste into a document.
  • Screenshots if copy isn’t possible.

Note if the content included:

  • Personal data (PII).
  • Credentials, keys, tokens.
  • Internal or confidential information.
  • Student/employee/financial data.

Timeline

  • Date & time the AI tool was used (approximate times are fine).
  • When you realized sensitive data may have been exposed.
  • Any actions taken afterward (closing tab, deleting history, changing passwords, etc.).

Device and environment

  • Device used (laptop, desktop, mobile).
  • Operating system.
  • Browser or application used.
  • Network used:
    • On-campus.
    • VPN.
    • Home / public Wi-Fi.

Any unexpected:

  • Login alerts.
  • MFA prompts.
  • Password reset notices.

Whether:

  • Keys or passwords were pasted into the AI.
  • Those credentials are still active.

Files involved

  • Names of files uploaded or copied into the AI tool.
  • File types (PDF, spreadsheets, code, text).
  • Source of data (email, shared drive, database, LMS, HR system).

Do not re-upload or re-open files unless instructed to.

  • Who owns the data?
  • Whose data was included?
  • Was the output:
    • Saved?
    • Shared?
    • Copied into another system?

Note: Much of this information will be available in the “chat history”. Do not delete chat history unless instructed.
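Because the checklist above is long, it can help to capture the answers as a single structured record that travels with the report. The sketch below (the keys are our own naming scheme, not a prescribed schema) writes the answers to a JSON file with a capture timestamp.

```python
# Illustrative sketch only: capture "What to Save" answers as one JSON record.
# The keys are an example naming scheme, not a prescribed schema.
import json
from datetime import datetime, timezone

def save_checklist_record(path, **answers):
    """Write checklist answers plus a capture timestamp to a JSON file."""
    record = {
        "captured_at": datetime.now(timezone.utc).isoformat(),
        # Example keys: tool_name, tool_url, access_method, account_used,
        # prompts, responses, content_flags, timeline, device, network,
        # files, data_owner, output_disposition
        **answers,
    }
    with open(path, "w", encoding="utf-8") as fh:
        json.dump(record, fh, indent=2)
    return record
```

One file per incident keeps the evidence together; attach it to the report rather than pasting fragments into email threads.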