Purpose
This FAQ addresses common questions about artificial intelligence (AI) at Thompson Rivers University (TRU). It aims to reduce fear, misinformation, and unnecessary resistance by clearly explaining what is allowed, encouraged, and restricted.
Audience
- Students
- Faculty
- Staff
- Researchers
Read This First
- AI is permitted at TRU when used responsibly and within clear guardrails.
- AI supports human work; it does not replace human judgment or accountability.
- Questions and early reporting are encouraged, not punished.
- Integrity, transparency, and learning come first.
Common Myths vs. Reality
Myth: AI is banned at TRU.
Reality: AI use is permitted. What matters is how you use it. Like other digital tools, AI comes with clear expectations and guardrails.
AI is treated like email, spreadsheets, and search engines — a tool whose appropriateness depends on the situation.
It is:
- encouraged for brainstorming, outlining, or summarizing
- appropriate for administrative drafting and workflow support
- limited or restricted when independent skill demonstration is required
For example:
- A student using AI to generate ideas and then writing their own analysis → appropriate.
- A student submitting AI-written work as their own where original work is required → inappropriate.
- Staff using AI to draft a communication and reviewing it before sending → appropriate.
The key question is not whether AI exists — it’s whether its use aligns with expectations, learning outcomes, and professional responsibilities.
Myth: Using AI is automatically academic misconduct.
Reality: AI use is not automatically misconduct. Appropriateness depends on the expectations for a course, assignment, project, or task. Some require disclosure; some prohibit AI; others encourage it.
AI becomes misconduct only when it replaces required learning, originality, or accountability.
Think of AI like a calculator:
- allowed in some contexts
- restricted in others
- required in certain professional settings
Examples:
- Using AI to summarize readings before class → generally appropriate.
- Using AI to complete a take-home assignment meant to assess personal understanding → misconduct.
- Using AI to help structure a report and then writing it yourself → appropriate.
Integrity is about honesty, transparency, and meeting expectations — not about avoiding tools entirely.
Myth: One university-wide rule governs all AI use.
Reality: Expectations vary by course, program, role, or unit. Instructors and supervisors set context‑specific guidance depending on learning outcomes, research needs, or job duties.
A single rule would not work because different environments have different goals.
For example:
- A writing class may restrict AI to support skill development.
- A business program may encourage AI for research synthesis and analysis.
- Administrative teams may use AI for workflow efficiency.
- Researchers may use AI for literature scanning or coding support.
AI policy must be flexible enough to protect learning while still enabling innovation and productivity.
Context determines appropriateness.
Myth: AI will replace teachers, researchers, and staff.
Reality: AI supports human work; it does not replace it. Teaching, grading, decision‑making, research design, and accountability remain human responsibilities.
AI can:
- generate drafts
- summarize information
- analyze patterns
But it cannot:
- understand institutional context
- make accountable decisions
- mentor students
- apply ethical judgment
- design research with intent
AI handles tasks. Humans provide purpose, direction, and accountability.
The more AI handles repetitive work, the more human roles shift toward:
- leadership
- critical thinking
- collaboration
- oversight
Myth: AI makes learning and expertise unnecessary.
Reality: Learning and professional judgment remain essential. AI should support understanding, not replace skill development or critical thinking.
AI can produce convincing outputs quickly — but that does not mean they are correct or appropriate.
Without understanding, a person cannot:
- identify mistakes
- refine results
- judge relevance
- defend their work
Using AI without comprehension is like presenting a report you never read.
AI helps people learn faster and work more efficiently — but responsibility for the work remains with the human using it.
Myth: AI outputs are always accurate.
Reality: AI can be wrong, incomplete, or biased. Users must check accuracy, evaluate sources, and apply judgment.
AI models generate responses based on patterns, not truth.
They can:
- hallucinate facts
- miss context
- reflect bias in training data
- provide outdated information
Responsible use means:
- verifying important claims
- cross-checking sources
- reviewing tone and assumptions
AI output is a draft — not a final answer.
Myth: Any information can safely be entered into AI tools.
Reality: Not all tools handle data safely. Confidential, personal, or sensitive information should not be entered into public AI tools; it may be used with approved AI tools, provided this is done in accordance with FIPPA and TRU policy.
There is more than one type of AI tool:
- public tools
- enterprise/approved tools
- locally hosted tools
Each has different data protections.
Before entering information, ask:
- Is this tool approved?
- Is this data confidential?
- Where is the data stored?
- Who can access it?
Convenience should never override privacy obligations.
Myth: Experimenting with AI will get you in trouble.
Reality: Responsible experimentation is encouraged. Early questions and early reporting are supported, not punished.
Innovation requires exploration.
TRU supports:
- trying new workflows
- learning new tools
- asking questions early
- reporting issues openly
What matters is:
- transparency
- responsible data use
- willingness to seek guidance
Avoiding experimentation entirely can create greater risk by slowing learning and adoption.
Myth: AI can be left to run without human review.
Reality: Human review is always required. AI assists with tasks, but users must verify accuracy, ethics, and appropriateness.
AI can scale work quickly — including mistakes.
Human oversight ensures:
- accuracy
- alignment with institutional values
- ethical considerations
- appropriate decisions
The faster the tool, the more important the human checkpoint.
Accountability always remains with the person using the tool.
Myth: Only technical experts can use AI safely.
Reality: Anyone at TRU can learn to use AI responsibly. Training, guidance, and resources support all skill levels.
Safe use does not require deep technical knowledge.
It requires:
- asking clear questions
- reviewing outputs
- understanding limitations
- following guidance
AI literacy is becoming similar to:
- digital literacy
- research literacy
- information evaluation skills
It improves with practice and support.
Myth: AI will eliminate jobs.
Reality: AI shifts work but does not eliminate the need for people. Roles involving judgment, leadership, collaboration, ethics, and strategy have become even more important. AI enables efficiency, but humans remain essential.
AI changes how work happens.
Tasks may shift:
- manual drafting → AI-assisted drafting
- repetitive analysis → AI-supported insights
- administrative triage → automated workflows
But human strengths become more central:
- decision-making
- communication
- strategy
- empathy
- institutional knowledge
AI removes friction — not human value.
Myth: AI is the solution to every automation problem.
Reality: AI is one approach to automation, not the default or the only one. Many processes can be streamlined through simpler methods such as workflow redesign, policy updates, or basic digital tools. Start with the problem, not the technology.
Automation often begins with simpler solutions:
- clearer processes
- standardized templates
- improved forms
- better data organization
- rule-based workflows
AI becomes useful when:
- language interpretation is needed
- pattern recognition is required
- scale exceeds manual capacity
The correct starting point is always:
“What problem are we solving?”
Technology should follow the need — not define it.
Myth: AI is magic.
Reality: AI is powerful, but it is not magic. It doesn't read your mind, understand your goals on its own, or automatically produce perfect results.
Think of AI like a high‑powered bicycle:
It can go fast, it can climb steep hills, and it can help you reach places you never could on foot — but it still needs a rider. You steer. You balance. You decide where to go.
AI only works well when you give it:
- Clear, clean information. If you feed AI messy, contradictory, or incomplete inputs, it will give you messy, contradictory, or incomplete outputs: garbage in, garbage out.
- Instructions that point it in the right direction. AI won't guess what "good" looks like; you have to tell it the task, the tone, the format, the audience, and the goal, just like directing an assistant.
- Human thinking, checking, and decision‑making. AI needs a human partner to correct mistakes, catch hallucinations, refine drafts, judge accuracy, and choose the final answer.
AI is a collaborator, not a replacement.
It accelerates your work, but you provide expertise, direction, and judgment. Without a human guiding it, AI is just a tool waiting for instructions.
Key Consideration
AI is a tool, not a shortcut or a replacement for people. Used thoughtfully, transparently, and responsibly, AI can support learning, teaching, research, and work at TRU.