TL;DR
- October Health uses AI to support employee mental health, coaching, and HR workflows.
- All AI systems are classified under the EU AI Act before deployment.
- High-risk or clinical-adjacent signals immediately interrupt the conversation and surface emergency resources directly to the user.
- We never sell conversation data, and we never share it with AI providers for model training.
- This policy is reviewed annually, or upon material change.
1. Purpose and Scope
This policy establishes October Health Limited's framework for the responsible development, deployment, and oversight of artificial intelligence systems. It exists to ensure accountability, regulatory compliance, and trust in our AI products and services.
This policy is aligned with the EU AI Act (Regulation (EU) 2024/1689) Article 9 requirements for risk management, and follows the ISO/IEC 42001:2023 standard for AI management systems.
Scope: All AI systems operated, developed, or integrated by October Health Limited, including systems provided by third-party providers (OpenAI, Google, Anthropic) that are deployed within our products. This includes our conversational agents (Luna, Ash, Ivy), hiring-support tools, assessment engines, and internal AI tooling.
2. AI System Classification and Risk Assessment
All AI systems are classified prior to deployment using the EU AI Act's risk-based framework. October Health recognises four risk tiers:
- Unacceptable Risk: Prohibited. October Health does not develop or deploy AI systems that fall into this category (e.g. social scoring, real-time biometric identification).
- High Risk: Listed in Annex III and subject to the Act's requirements for high-risk systems. At October Health, this includes hiring-support tools that assist in recruitment screening or candidate evaluation.
- Limited Risk: Transparency obligation. This includes our conversational AI agents (Luna, Ash, Ivy), where users must be informed they are interacting with AI.
- Minimal Risk: No specific obligations. This includes content recommendation systems and search ranking features.
New AI systems require a risk classification assessment before deployment, signed off by the Data Protection Officer and CTO. The assessment evaluates purpose, data inputs, decision impact, and affected populations.
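For illustration, the assessment outcome can be captured as a structured record that blocks deployment until both sign-offs are present. The sketch below is a minimal Python example; the class and field names are hypothetical, not an October Health schema.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited; never developed or deployed
    HIGH = "high"                  # e.g. hiring-support tools
    LIMITED = "limited"            # e.g. conversational agents
    MINIMAL = "minimal"            # e.g. content recommendation


@dataclass
class RiskAssessment:
    # Fields mirror the assessment criteria named above: purpose, data
    # inputs, decision impact, and affected populations.
    system_name: str
    purpose: str
    data_inputs: list[str]
    decision_impact: str
    affected_populations: list[str]
    tier: RiskTier
    dpo_signed_off: bool = False
    cto_signed_off: bool = False

    def deployable(self) -> bool:
        """Deployment is blocked for Unacceptable-Risk systems and for
        any system missing DPO or CTO sign-off."""
        if self.tier is RiskTier.UNACCEPTABLE:
            return False
        return self.dpo_signed_off and self.cto_signed_off
```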
3. Roles and Responsibilities
| Role | Responsibility |
|---|---|
| CEO | Accountable for the AI governance programme |
| CTO | Technical oversight of AI system design, security, and architecture |
| Data Protection Officer | Privacy and regulatory compliance; DPIA coordination; supervisory authority liaison |
| AI Governance Lead (DPO) | Day-to-day register maintenance, audit scheduling, and policy updates |
| Product Director | Per-system risk owner; responsible for compliance within their product area |
| Third-party Providers | Contractual obligations per Section 11 of this policy |
4. AI Lifecycle Governance
All AI systems at October Health follow a governed lifecycle from initial design through to retirement:
Design
Requirements analysis, risk classification, bias assessment planning, and data minimisation review. A design review is conducted with the AI Governance Lead before development begins.
Development
Test coverage requirements for AI outputs, fairness metrics definition, data minimisation enforcement, and prompt engineering review. All system prompts are version-controlled and require peer review.
Deployment
Staged rollout with logging enabled for audit purposes. For High-Risk and clinical-adjacent flows, the system immediately interrupts the conversation and presents emergency contact details when risk is detected. Users are notified they are interacting with AI (Limited Risk and above).
Monitoring
Continuous performance metrics, drift detection, user feedback loops, and escalation rate monitoring. Alerts trigger review when metrics deviate from established baselines.
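As an illustration of the alerting logic only (the relative tolerance below is a hypothetical value, not a figure from this policy), a baseline-deviation check might look like:

```python
def deviates_from_baseline(metric: float, baseline: float,
                           tolerance: float = 0.10) -> bool:
    """Flag a metric that drifts more than `tolerance` (relative)
    from its established baseline. Tolerance value is illustrative."""
    if baseline == 0:
        return metric != 0
    return abs(metric - baseline) / abs(baseline) > tolerance


# e.g. escalation-rate baseline 4%, observed 5.5% -> triggers review
assert deviates_from_baseline(0.055, 0.04)
```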
Retirement
Data deletion schedule, user notification, model decommission checklist, and archive of governance records for the required retention period.
5. Data Governance and Privacy
All conversation data processed by AI systems is subject to the October Health Privacy Policy.
- No user conversation data is shared with AI providers for model training. This is enforced contractually with each provider (see AI Model Register).
- Data retention is minimised: conversation logs are retained for a maximum of 90 days by default, configurable per organisation (a sketch of the expiry check follows this list).
- A Data Protection Impact Assessment (DPIA) is conducted for all High-Risk AI systems before deployment.
- Standard Contractual Clauses (SCCs) and the UK International Data Transfer Agreement (IDTA) are in place for all US-based providers.
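The 90-day default can be expressed as a simple expiry check, sketched below; the function name and the per-organisation override mechanism are hypothetical.

```python
from datetime import datetime, timedelta, timezone

DEFAULT_RETENTION_DAYS = 90  # policy default; configurable per organisation


def is_expired(logged_at: datetime,
               retention_days: int = DEFAULT_RETENTION_DAYS) -> bool:
    """True once a conversation log has outlived its retention window
    and is due for deletion."""
    return datetime.now(timezone.utc) - logged_at > timedelta(days=retention_days)
```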
6. Transparency and Explainability
In accordance with the EU AI Act Article 50 transparency obligations for Limited Risk systems:
- Users are always informed when they are interacting with an AI system. All conversational agents are clearly identified as AI.
- AI-generated recommendations are presented with appropriate context about their nature and limitations.
- Users can ask AI agents to explain the reasoning behind their responses.
- This policy, the AI Model Register, and transparency testing results are publicly available.
7. Human Oversight
October Health maintains meaningful human oversight across all AI systems, with the level of oversight proportionate to the risk classification:
- Crisis detection: When the AI detects high-risk or clinical-adjacent signals (e.g. suicidal ideation, self-harm, safeguarding concerns), the conversation is immediately interrupted and the user is presented with emergency contact details and crisis resources. There is no intermediate step: the system acts instantly to surface help. A minimal sketch of this pattern follows this list.
- Hiring support: AI outputs are advisory only. Final hiring decisions are always made by humans. AI recommendations are accompanied by explanations and confidence indicators.
- Escalation paths: An in-product "Talk to a person" option is available at all times. Crisis detection immediately surfaces emergency services details to the user.
- Override capability: HR administrators can disable or limit AI features on a per-organisation basis.
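The break-and-surface pattern described above can be sketched as follows. The signal labels and resource text are placeholders; in practice, detection is performed by a classifier rather than a fixed set of strings.

```python
# Illustrative signal labels only; production detection is a classifier,
# not a fixed set of strings.
CRISIS_SIGNALS = {"suicidal_ideation", "self_harm", "safeguarding_concern"}


def handle_turn(detected_signals: set[str], normal_reply: str) -> str:
    """If any high-risk signal is present, discard the normal reply and
    surface emergency resources immediately; there is no intermediate step."""
    if detected_signals & CRISIS_SIGNALS:
        return (
            "This conversation has been paused. If you are in immediate "
            "danger, please contact your local emergency services now."
        )
    return normal_reply
```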
8. Monitoring, Audit and Continuous Improvement
- Monthly: Automated metric review covering response quality scores, error rates, and bias indicators.
- Quarterly: Internal audit of model register accuracy, data processing logs, and escalation rate analysis.
- Annually: Independent third-party safety audit and comprehensive policy review.
All findings are logged in the AI Governance Register. Remediation actions are tracked to closure with assigned owners and due dates.
9. Incident Response
AI-specific incident types include: harmful or inappropriate output, safeguarding failure, data breach via an AI pipeline, and discriminatory decision-making.
- Internal reporting: All AI incidents must be reported to security@october.health within 24 hours of detection.
- Regulatory notification: Where a personal data breach is involved, notification to the supervisory authority is made within 72 hours in accordance with GDPR Article 33.
- Post-incident: Root cause analysis, model retraining or prompt update as appropriate, and transparency disclosure to affected users where required.
10. Bias and Fairness
October Health is committed to identifying and mitigating bias in all AI systems:
- Pre-deployment bias testing is conducted for all High-Risk and Limited-Risk AI systems.
- Protected characteristics tested include: gender, age, race/ethnicity, disability, and pregnancy/maternity.
- Flip testing methodology: identical prompts with demographic variables changed; outputs compared for differential treatment (a sketch follows this list).
- Results are published on our AI Transparency page as testing is completed.
- Ongoing monitoring: demographic breakdown of escalation rates is reviewed monthly. An alert is triggered if disparity across any protected characteristic exceeds 5%.
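Below is a sketch of both checks, assuming a `generate` callable standing in for the model under test; the 5% threshold is read here as a difference in percentage points, which is an interpretation rather than a statement from this policy.

```python
from collections.abc import Callable


def flip_test(generate: Callable[[str], str], prompt_template: str,
              variants: list[str]) -> dict[str, str]:
    """Run identical prompts with only the demographic variable changed,
    so outputs can be compared for differential treatment."""
    return {v: generate(prompt_template.format(demographic=v)) for v in variants}


def disparity_exceeds(escalation_rates: dict[str, float],
                      threshold: float = 0.05) -> bool:
    """Alert when group escalation rates differ by more than the threshold
    (interpreted here as 5 percentage points)."""
    return max(escalation_rates.values()) - min(escalation_rates.values()) > threshold
```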
11. Third-Party AI Provider Management
All third-party AI providers integrated into October Health products must meet the following requirements:
- Contractual opt-out from using October Health data for model training.
- A signed Data Processing Agreement (DPA) compliant with GDPR requirements.
- Confirmation of ISO 27001 certification or equivalent security standard.
- Provision of model cards or equivalent documentation on request.
Current providers are listed in the AI Model Register. New providers require joint approval from the CTO and DPO before integration may begin.
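For illustration, a register entry can gate integration on all four requirements plus the joint approval; the record layout below is hypothetical, not the actual AI Model Register schema.

```python
from dataclasses import dataclass


@dataclass
class ProviderRecord:
    # Hypothetical register entry mirroring the requirements above.
    name: str
    training_opt_out: bool       # contractual opt-out from model training
    dpa_signed: bool             # GDPR-compliant DPA in place
    iso27001_certified: bool     # or equivalent security standard
    model_cards_available: bool  # documentation provided on request
    cto_approved: bool
    dpo_approved: bool

    def integration_permitted(self) -> bool:
        """Integration may begin only when every requirement is met."""
        return all((
            self.training_opt_out, self.dpa_signed, self.iso27001_certified,
            self.model_cards_available, self.cto_approved, self.dpo_approved,
        ))
```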
12. Review Schedule
This policy is reviewed annually in Q1, or upon any of the following triggers:
- Significant change to an AI model or system architecture
- New regulatory requirement or guidance
- Material AI-related incident
- Acquisition, merger, or significant organisational change
| Version | Date | Summary of Changes |
|---|---|---|
| 1.0 | 1 February 2025 | Initial publication |
Next scheduled review: Q1 2026