AI Transparency

How Our AI Works

Understanding how VocalCalm's AI makes decisions, protects your privacy, and supports your mental wellness journey with complete transparency.

Privacy First

All AI processing respects your privacy. We use an open-source LLM configured for zero data retention and never train on your conversations.

Safety Logging

High-risk moments generate minimal safety records, kept only when the law requires it. We don't offer 24/7 human monitoring; instead, VocalCalm signposts you to emergency contacts.

Explainable Decisions

You can always request explanations for AI recommendations and understand the reasoning behind them.

How AI Decisions Are Made

1. Conversation Analysis

Our AI analyzes your conversations using natural language processing to understand:

  • Key themes and concerns you express
  • Goals you set during sessions
  • Progress toward your wellness intentions
  • Potential safety risks or crisis indicators
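To make this step concrete, here is a simplified, purely illustrative sketch (in Python) of the kind of structured summary conversation analysis might produce for one session. The class name, field names, and example values are hypothetical, not VocalCalm's actual schema.

    from dataclasses import dataclass, field

    @dataclass
    class SessionAnalysis:
        """Hypothetical output of the conversation-analysis step."""
        themes: list[str] = field(default_factory=list)          # key themes and concerns
        goals: list[str] = field(default_factory=list)           # goals set during the session
        progress_notes: list[str] = field(default_factory=list)  # movement toward stated intentions
        risk_flag: bool = False                                   # potential safety concern detected

    # Example of what the analysis of one session might yield.
    example = SessionAnalysis(
        themes=["work stress", "sleep"],
        goals=["practice one breathing exercise before bed"],
        progress_notes=["reported better sleep two nights this week"],
        risk_flag=False,
    )
    print(example)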

2. Therapeutic Matching

Based on your needs, the AI selects appropriate therapeutic approaches:

  • CBT techniques for thought patterns
  • DBT skills for emotional regulation
  • ACT methods for acceptance and values
  • Evidence-based interventions matched to your situation
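As a deliberately simplified illustration of matching, the sketch below maps an identified need to one of the approaches listed above. The mapping table and the match_approach function are assumptions made for this example; real matching weighs far more context than a single label.

    # Illustrative mapping from an identified need to a therapeutic approach.
    # Actual matching would weigh context, history, and clinical guidance.
    APPROACHES = {
        "unhelpful thought patterns": "CBT techniques",
        "emotional regulation": "DBT skills",
        "acceptance and values work": "ACT methods",
    }

    def match_approach(identified_need: str) -> str:
        """Return a suggested approach, falling back to a general evidence-based option."""
        return APPROACHES.get(identified_need, "general evidence-based intervention")

    print(match_approach("emotional regulation"))  # -> "DBT skills"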

3. Personalization Engine

The AI personalizes your experience by:

  • Learning your communication preferences
  • Adapting to your progress over time
  • Remembering insight-level notes from previous sessions
  • Adjusting coaching exercises to your pace
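The sketch below illustrates one way pace adjustment could work in principle: easing off or stepping up depending on how recent exercises have gone. The pace labels and thresholds are invented for illustration only, not the production logic.

    # Hypothetical pacing adjustment based on how a user has been progressing.
    def adjust_pace(current_pace: str, recent_completion_rate: float) -> str:
        """Nudge exercise pace up or down based on recent completion of exercises."""
        order = ["gentle", "steady", "challenging"]
        i = order.index(current_pace)
        if recent_completion_rate > 0.8 and i < len(order) - 1:
            return order[i + 1]   # keeping up comfortably; offer a bit more
        if recent_completion_rate < 0.4 and i > 0:
            return order[i - 1]   # struggling; ease off
        return current_pace       # otherwise keep the current pace

    print(adjust_pace("steady", 0.9))  # -> "challenging"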

What Data We Use

  • Insight-level notes you choose to keep
  • Session timing and usage cadence
  • Questionnaire preferences and goals
  • Progress reflections you choose to save
  • Safety risk indicators
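Put together, the categories above could be pictured as a minimal record like the hypothetical one below. It is not our actual database schema; it simply shows what is, and is not, included.

    from dataclasses import dataclass

    @dataclass
    class UserDataRecord:
        """Hypothetical view of the only data categories the AI draws on."""
        insight_notes: list[str]       # insight-level notes the user chose to keep
        session_timestamps: list[str]  # session timing and usage cadence
        preferences_and_goals: dict    # questionnaire preferences and goals
        saved_reflections: list[str]   # progress reflections the user chose to save
        risk_indicators: list[str]     # safety risk indicators
        # Note what is absent: no raw audio, no full conversation transcripts.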

What We Never Do

  • Share your data with third parties for marketing
  • Use your sessions to train external AI models
  • Allow humans to listen to or review your sessions
  • Store raw audio or full conversation transcripts
  • Replace professional medical diagnosis

How Our Memory System Works

Your coach remembers past sessions through short insight-level notes so each conversation feels continuous, without storing full conversations. Audio is processed in real time and not stored. Insight notes are encrypted in private databases and can be deleted at any time.

  • Encryption in transit and at rest: TLS protects data in transit, and AES-256 encrypts stored insights (see the sketch after this list).
  • Audit trails: Access logs are recorded for security and compliance.
  • User control: Export or delete insight notes at any time.
  • Transparency in sessions: The coach can summarize what it has saved.
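For readers who want a technical picture, here is a minimal sketch, assuming the open-source cryptography package for Python, of how an insight note could be encrypted with AES-256-GCM before storage and removed on request. Key handling is simplified for illustration; in practice keys live in a managed key store, not alongside the data.

    # Illustrative only: encrypting a short insight note before storage and
    # deleting it on request, using the third-party `cryptography` package.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)   # 256-bit key; kept in a key-management service in practice
    aead = AESGCM(key)

    store: dict[str, tuple[bytes, bytes]] = {}  # note_id -> (nonce, ciphertext)

    def save_note(note_id: str, text: str) -> None:
        nonce = os.urandom(12)                  # unique nonce per note
        store[note_id] = (nonce, aead.encrypt(nonce, text.encode(), None))

    def read_note(note_id: str) -> str:
        nonce, ciphertext = store[note_id]
        return aead.decrypt(nonce, ciphertext, None).decode()

    def delete_note(note_id: str) -> None:
        store.pop(note_id, None)                # user-initiated deletion removes the ciphertext

    save_note("n1", "Set a goal to take a short walk after lunch.")
    print(read_note("n1"))
    delete_note("n1")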

Your Rights Regarding AI Decisions

You Can Always:

  • Ask for explanations of AI decisions
  • Opt out of specific AI features
  • Access all data the AI uses about you
  • Request deletion of insight notes
  • Challenge or correct AI assessments

How to Exercise Your Rights:

  • In-app: Settings → AI Preferences
  • Email: [email protected]
  • Request forms available in your account
  • Responses provided as quickly as possible
  • No fee for exercising your rights

Safety Monitoring & Crisis Detection

Our AI monitors for safety concerns and creates minimal logs for compliance when required. We do not offer live crisis monitoring—VocalCalm always encourages you to contact local emergency services.

Pattern Recognition

The AI identifies patterns that may indicate increased risk.

Audit Trail

Flagged sessions generate minimal metadata so we can meet legal obligations without storing audio.
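To illustrate what "minimal metadata" can mean, the hypothetical record below captures only a session reference, a timestamp, and a coarse risk category, with no audio and no conversation content. The field names are examples, not our production log format.

    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone

    @dataclass(frozen=True)
    class SafetyLogEntry:
        """Hypothetical minimal record kept when a session is flagged."""
        session_id: str      # opaque reference, not linked to audio or transcripts
        flagged_at: str      # UTC timestamp of the flag
        risk_category: str   # coarse category only, e.g. "self-harm-indicator"

    entry = SafetyLogEntry(
        session_id="sess-1042",
        flagged_at=datetime.now(timezone.utc).isoformat(),
        risk_category="self-harm-indicator",
    )
    print(asdict(entry))  # no audio, no transcript, no conversation content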

Resource Connection

We surface hotline numbers and online support portals for your region—reach out directly for immediate help.

Learn More

Questions about AI transparency?
Contact our AI Ethics team at [email protected]