AI Usage Policy
October Health integrates Artificial Intelligence (AI) to enhance employee wellbeing, performance support, and organisational insights. This policy outlines our approach to transparency, privacy, safety, and regulatory compliance across all AI-powered features. October Health is committed to ethical AI that prioritises user trust, data protection, and responsible innovation.
AI in October Health Services
Employee‑Facing Solutions
- AI Coaching Courses: Personalised, self-paced wellbeing and performance courses, available 24/7.
- Journaling Tools: Guided, AI-supported self-reflection and habit-building tools.
- Luna | AI Chat Support: Real-time personalised wellbeing support, psycho‑education, and resource navigation.
- Human‑Hosted Group Sessions: Co-hosted by Luna, providing transcription, translation, engagement questions, and user‑friendly summaries.
- Ivy: AI dietitian offering food photo analysis, 1:1 AI chat, and tracking of macros, exercise, and related health metrics.
- Assessments: AI-assisted feedback provided to users after they complete assessments.
Organisation‑Facing Solutions
- October Companion: AI assistant for drafting HR documentation (e.g., job descriptions, performance plans, internal communications).
- Luna in the Forest: Live, interactive AI support offering personalised feedback, maintaining user preferences, and generating session recaps.
- Business Intelligence Tools: Aggregated, anonymised insights into employee wellbeing, organisational culture, and performance trends.
Important:
October Health’s AI systems are informational and non‑clinical. They do not provide medical advice or diagnosis, nor do they create a doctor‑patient relationship. Clear disclosures notify users when they are interacting with AI.
Key Principles
User‑Focused Strategy
- AI systems are designed for inclusivity, accessibility, and fairness.
- Bias testing and regional/cultural suitability reviews are conducted regularly.
- Systems are continuously refined based on user feedback and research.
Privacy and Security
- Fully compliant with GDPR, POPIA, HIPAA (where applicable), and CCPA.
- All personal data is anonymised, pseudonymised, or minimised wherever possible.
- Explicit user consent is obtained before processing sensitive data.
- October Health is SOC2 certified and aligns with WHO AI Ethics Guidelines.
- Cross‑border data transfers follow legally valid safeguards (e.g., SCCs).
Accuracy and Accountability
- AI outputs are validated through testing, internal review, and safety evaluation.
- Users are informed of the limitations of AI and the non‑clinical nature of support.
- A defined human‑oversight process ensures responsible monitoring and escalation.
Security and Ethical Measures
Continuous Monitoring
- AI systems undergo periodic testing, security reviews, and ethical evaluations.
- Post-deployment monitoring detects anomalies, regressions, or unsafe patterns.
- Logging systems track incidents, errors, and performance trends.
Transparency
- Users are informed when interacting with AI systems.
- Clear descriptions explain the purpose, capabilities, and boundaries of each AI feature.
Data Handling
- All sensitive and personal data is encrypted at rest and in transit.
- Data is retained only as long as necessary for service delivery or compliance purposes.
- Data subjects may access, correct, or request deletion of their data at any time.
- No identifiable data is ever shared with employers without explicit consent.
- Cross‑border transfers use adequate safeguards and are disclosed to users.
Third‑Party AI Providers
- October Health may use AI services from providers such as OpenAI or Google.
- These providers do not train their models on October Health user data.
- An automated redaction system removes or masks personal/sensitive data before third‑party processing.
- A public list of sub‑processors is maintained and updated regularly.
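To illustrate the kind of pre-processing redaction described above, here is a minimal, hypothetical sketch using simple regex masking. October Health's actual redaction system is not public; the patterns, labels, and function name below are invented for illustration only:

```python
import re

# Hypothetical illustration of redaction before third-party processing.
# Real systems use far more robust PII detection than these two patterns.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Mask personal identifiers so raw PII never reaches a model provider."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact me at jane@example.com or +27 82 123 4567."))
# → Contact me at [EMAIL] or [PHONE].
```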
Human Oversight and Governance
Human Oversight
- Human experts periodically review system behaviour, safety responses, and flagged conversations.
- Users can request human review at any time.
- High‑risk situations always escalate to human-controlled workflows.
Governance & Accountability
- A dedicated AI Safety & Compliance team oversees risk management, quality assurance, and regulatory alignment.
- A structured change‑management process is used for all AI updates, including validation and re‑approval steps.
Model Limitations & Known Failure Modes
AI systems may occasionally:
- Misinterpret user input, tone, dialect, or cultural nuance.
- Provide incomplete or generic suggestions.
- Fail to fully detect mental‑health risk or crisis signals.
- Generate inaccurate or biased responses despite guardrails.
Users are informed that AI responses are not a substitute for professional support.
Bias Monitoring & Fairness
October Health maintains:
- Regular fairness assessments across regions (e.g., South Africa, Kenya, Nigeria).
- Content suitability reviews for cultural and linguistic norms.
- Corrective action plans when bias or unintended outcomes are detected.
Escalation Protocols for High‑Risk Cases
For users expressing:
- Suicidal intent
- Intentional self-harm
- Harm to others
October Health initiates:
- Immediate crisis guidance, including local emergency numbers or 24/7 hotline referrals.
- Sensitive and trauma-informed messaging to support user safety.
- Follow-up communications via the app (where contact details exist).
- Human escalation, managed by trained wellbeing or crisis support personnel.
User Rights
Users may at any time:
- Request access to their personal data.
- Request correction or deletion.
- Opt out of AI‑based features (where functionally possible).
- Request human review of AI decisions or responses.
- Request an explanation of how AI outputs are generated (high-level).
- Withdraw consent for data processing.
These rights are honoured in compliance with GDPR, POPIA, CCPA, and applicable laws.
Post‑Deployment Monitoring & Incident Response
Monitoring
- Automated systems monitor safety patterns, inappropriate outputs, model drift, and anomalies.
- Performance metrics (accuracy, safety, response quality) are reviewed regularly.
Incident Response
- A formal AI incident protocol governs detection, reporting, investigation, and mitigation.
- Users may report issues via help@october.health.
- Serious incidents trigger internal escalation to the AI Safety & Compliance team.
Data Retention and Deletion
- Personal data is retained only as long as necessary for providing the service or meeting legal obligations.
- Anonymised or aggregated analytics may be retained for organisational wellbeing insights.
- Users may request deletion; requests are completed within mandated regulatory timelines.
- Secure deletion protocols ensure irrecoverable removal of data from backups and archives.
Enterprise Controls (for Organisations)
- Role‑based access controls (RBAC) ensure that organisational administrators can only view anonymised, aggregated data.
- Employers cannot access identifiable user messages or personal wellbeing data.
- Organisations may configure:
- Analytics settings
- Regions or languages
- Data‑retention windows
- Resource links and escalation pathways
- Permissions for AI features
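As an illustrative example of the configuration options listed above, an organisation-level settings record might be represented as follows. The field names and values are hypothetical, not October Health's actual schema:

```python
# Hypothetical organisation configuration record; all field names and
# values are illustrative only, not October Health's real schema.
org_config = {
    "analytics": {"enabled": True, "granularity": "aggregated"},
    "regions": ["ZA", "KE", "NG"],
    "languages": ["en", "af"],
    "retention_days": 365,                      # data-retention window
    "escalation_url": "https://example.org/crisis-resources",
    "ai_features": {"luna_chat": True, "ivy": False},
}

# RBAC means administrators only ever see aggregated views, never
# individual users' messages or wellbeing data.
assert org_config["analytics"]["granularity"] == "aggregated"
```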
Commitment to Innovation and Compliance
October Health continually invests in:
- Ethical AI research
- User‑feedback driven improvements
- Safety upgrades
- Bias reduction
- Security hardening
- Clinical expert review of guardrails
- Transparent communication with users and enterprise clients
Our mission is to provide powerful wellbeing tools while ensuring safety, privacy, and trust at every stage.
Model Providers
Our third-party sub‑processors include all AI model providers we use. For reference, the current list is:
- OpenAI (40% average usage)
- Google (35% average usage)
- Anthropic (15% average usage)
- Perplexity (10% average usage)
Our engineering team evaluates each partner against performance benchmarks and bias audits; all are recognised leaders in both security and generative AI.
Data Provenance and Scope
October Health does not train generative LLMs or provide data for their training. We apply in-house machine learning to our own data sets to establish benchmarks, provide aggregate data reporting, and similar tasks.
No model providers we work with train using our data or client data.
System Architecture
Detailed architecture diagrams are available on request for October Health clients. In simple terms, all user and client AI requests are routed through a central AI gateway.
This system removes PII, moderates requests and responses against accuracy and risk guardrails, and selects a model provider depending on task, cost and availability.
This data is hosted in our SOC2 Type-II certified environment in AWS US East and complies with October Health's standard retention policies, the right to be forgotten (RTBF), and related data-subject rights.
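The gateway flow described above (strip PII, check guardrails, select a provider) can be sketched at a very high level. This is a hypothetical outline with invented function names and placeholder logic, not October Health's actual implementation:

```python
# Hypothetical sketch of a central AI gateway. Function names, routing
# table, and guardrail logic are invented for illustration only.
PROVIDERS = {"chat": "openai", "search": "perplexity", "summarise": "google"}

def strip_pii(text: str) -> str:
    # Placeholder for the gateway's PII-removal step.
    return text.replace("jane@example.com", "[EMAIL]")

def passes_guardrails(text: str) -> bool:
    # Placeholder moderation check against accuracy and risk guardrails.
    return "self-harm" not in text.lower()

def route(task: str, prompt: str) -> str:
    """Clean the request, apply guardrails, then pick a provider for the task."""
    cleaned = strip_pii(prompt)
    if not passes_guardrails(cleaned):
        return "escalate-to-human"   # high-risk content leaves the AI path
    # Select by task type; cost and availability would also factor in.
    return PROVIDERS.get(task, "openai")
```

In practice, provider selection would also weigh cost and availability as the policy notes, and escalation would hand off to the trained crisis personnel described earlier.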