Who moderates TalkCampus?
Three layers work together: AI, professional moderators, and clinicians. No single system acts alone. Parallel models flag risk in seconds; trained staff interpret context; clinical specialists step in when health and safety require it.
Clinical layer
Master's-level clinicians
I-CARE, private outreach, and documented follow-up when risk is elevated.
Human layer
Trust & Safety
Coded language, behaviour, and policy with full oversight and clear escalation lines.
AI layer
Multi-model screening
Multiple models screen every post at the point of creation, supported by ML classifiers and restricted discovery.
24/7
Moderation cover
24/7
Human-led review
90%
Capacity headroom
5
Clinical languages
Three layers, shared responsibility
Each layer has a distinct job. Together they reduce blind spots: speed from automation, judgment from professionals, and clinical depth when wellbeing is at stake.
AI framework
- Multiple models screen content at the point of creation, running in parallel so no single vendor holds sole authority.
- Every post is checked before it reaches the wider feed.
- Human reviewers assess all content, supported by ML classifiers and policy rules that surface risk scores (see the sketch after this list).
- Restricted search terms and discovery limits reduce pathways to harmful content.
- The Honour System and community norms reinforce positive behaviour alongside automation.
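As a rough illustration of this pattern only, the sketch below shows how parallel classifier scores might route a post to human review before it reaches the wider feed. The model names, threshold, and combining rule are hypothetical assumptions, not our production configuration.

```python
# Illustrative sketch only: model names, threshold, and combine rule are
# hypothetical, not TalkCampus's actual configuration.
from dataclasses import dataclass


@dataclass
class Screening:
    model: str
    risk_score: float  # 0.0 (benign) to 1.0 (highest risk)


def screen_post(text: str, classifiers) -> str:
    """Score a post with every classifier, then route it to human review."""
    results = [Screening(name, clf(text)) for name, clf in classifiers.items()]
    highest = max(r.risk_score for r in results)
    # No single model decides alone: an elevated score from any model sends
    # the post to a prioritised human review queue before it reaches the feed.
    if highest >= 0.7:
        return "priority_human_review"
    return "standard_human_review"


# Example with stand-in classifiers (real models are vendor-specific).
classifiers = {
    "self_harm_model": lambda t: 0.9 if "hurt myself" in t.lower() else 0.1,
    "harassment_model": lambda t: 0.2,
}
print(screen_post("I want to hurt myself", classifiers))  # priority_human_review
```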
Trust & Safety team
- Professional staff dedicated to community safety and policy enforcement.
- Trained in coded language detection and behavioural analysis, not just keyword matching.
- Hierarchical structure: team leads, senior moderators, and a clinical liaison for escalations.
- Every serious action is visible in our case management system for oversight, audit, and institutional reporting (illustrated in the sketch after this list).
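To make the audit trail concrete, here is a minimal sketch of what a recorded case action could look like. The field names, roles, and identifiers are hypothetical examples, not our actual case management schema.

```python
# Illustrative sketch only: field names and roles are hypothetical examples
# of how a serious action might be recorded for oversight and audit.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class CaseAction:
    case_id: str
    action: str        # e.g. "content_removed", "escalated_to_clinical"
    taken_by: str      # moderator who took the action
    reviewed_by: str   # team lead or senior moderator providing oversight
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def audit_line(self) -> str:
        """One line an institution-facing report could include."""
        return (f"{self.timestamp.isoformat()} {self.case_id}: {self.action} "
                f"(by {self.taken_by}, oversight {self.reviewed_by})")


action = CaseAction("case-0421", "escalated_to_clinical", "moderator_17", "team_lead_03")
print(action.audit_line())
```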
Clinical team
- 24/7 clinical coverage so high-risk moments are never left unattended.
- The I-CARE framework (Identify, Classify, Assess, Respond, Escalate) structures every escalation, as sketched after this list.
- One-to-one private conversations when a student needs professional support.
- Incident reporting with clear handoffs to your institution where your protocol requires it.
- Follow-up and closure so cases are not dropped mid-stream.
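The stage names in the sketch below come straight from the I-CARE framework described above; everything else, including the elevated-risk condition, is an illustrative assumption about how a case might move through the stages.

```python
# Illustrative sketch only: the routing logic is hypothetical; the stage
# names follow the I-CARE framework (Identify, Classify, Assess, Respond, Escalate).
from enum import Enum


class Stage(Enum):
    IDENTIFY = "Identify"
    CLASSIFY = "Classify"
    ASSESS = "Assess"
    RESPOND = "Respond"
    ESCALATE = "Escalate"


def icare_path(risk_is_elevated: bool) -> list[Stage]:
    """Every case moves through the same ordered stages; the final Escalate
    step (e.g. a handoff to the institution) applies when risk stays elevated."""
    path = [Stage.IDENTIFY, Stage.CLASSIFY, Stage.ASSESS, Stage.RESPOND]
    if risk_is_elevated:
        path.append(Stage.ESCALATE)
    return path


print([s.value for s in icare_path(risk_is_elevated=True)])
# ['Identify', 'Classify', 'Assess', 'Respond', 'Escalate']
```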
Rules that keep peer support constructive
Clear standards make moderation fair and predictable. Serious breaches can lead to suspension or banning; students can appeal when they believe a mistake was made.
Respect
Treat others with dignity. Disagreement is fine; harassment is not.
Zero tolerance for hate
No discrimination or targeted abuse based on identity or beliefs.
Share safely
Avoid graphic detail that could harm others or retraumatise the community.
Prohibited content
No illegal activity, sexual content involving minors, or instructions for self-harm.
Protect privacy
Do not share personal details about yourself or anyone else.
Appropriate use
Peer support, not a replacement for therapy. Use clinical and crisis routes when needed.
Enforcement and appeals
Repeated or severe violations may result in temporary suspension or a permanent ban, applied in line with policy and fully documented. Students may submit an appeal for review. The goal is both community safety and procedural fairness.
User safety controls
Moderation is not only top-down. Students get tools to shape their own experience and step back when they need to.
Trigger warnings
Students can label sensitive topics so others can choose whether to engage.
Hide, block, and snooze
Control who you see and when you need a break without leaving the community.
Content filters
Tune what appears in your feed to match your comfort level.
Anonymous posting
Usernames only. Share experiences without exposing real-world identity.
Safety Centre
In-app guidance, reporting paths, and links to crisis resources in one place.
Built for real student life
Late nights, exam stress, and homesickness do not follow a nine-to-five schedule. TalkCampus mirrors that reality with continuous moderation and clinical cover, transparent guidelines, and tools students can use to protect their own wellbeing.
- ✓ Multi-model AI screening at the point of creation, with humans and clinicians in the loop for serious cases
- ✓ Trust & Safety trained for subtle risk, not only obvious rule breaks
- ✓ Clinical languages and I-CARE so support matches the moment
Trusted by 310+ universities & colleges worldwide
Knowing that students have round-the-clock support with real-time clinical safeguarding gives us confidence, and it reduces pressure on crisis services.
Sarah Richardson
Head of Wellbeing, University of Derby
Who moderates TalkCampus?
Common questions from students, student services, and safeguarding leads.
Who moderates content on TalkCampus?
Moderation is delivered by a combined team: professional Trust & Safety staff who review all content supported by AI, and Master's-level clinicians for escalations. Humans lead every decision, with AI assisting detection and prioritisation.
What training and qualifications do moderators have?
Trust & Safety staff are trained professionals with ongoing education in coded language, behavioural risk, and community policy. Clinicians hold relevant clinical qualifications and operate under TalkCampus clinical governance. AI is a tool they use, not a substitute for judgment.
How is content monitored and reviewed?
Our Trust & Safety team reviews all content around the clock, supported by AI for detection and prioritisation. Clinical pathways activate when human reviewers identify risk, with 24/7 coverage so urgent cases are picked up without waiting for office hours.
Is moderation covered overnight and at weekends?
Coverage is continuous. Our Trust & Safety team works around the clock supported by AI, and the clinical team maintains the same 24/7 roster. Your students are never outside monitored support just because the clock has changed.
Can students appeal moderation decisions?
Yes. Enforcement is phased where appropriate, and students can appeal through the published process. Appeals are reviewed against community guidelines and case notes so decisions are consistent and auditable.
See how moderation fits your institution
Book a demo to walk through our human-led moderation, Trust & Safety workflows, clinical escalation, and how our case management system supports audit and reporting.