Understanding how VocalCalm's AI makes decisions, protects your privacy, and supports your mental wellness journey with complete transparency.
All AI processing respects your privacy. We use an open-source LLM configured for zero data retention and never train on your conversations.
High-risk moments generate minimal safety signals for compliance when legally required. We don't offer 24/7 human monitoring; instead, VocalCalm signposts you to emergency contacts.
You can always request explanations for AI recommendations and understand the reasoning behind them.
Our AI analyzes your conversations using natural language processing to understand:
Based on your needs, the AI selects appropriate therapeutic approaches:
The AI personalizes your experience by:
Your coach remembers past sessions through short insight-level notes, so each conversation feels continuous without storing full transcripts. Audio is processed in real time and never stored. Insight notes are encrypted in private databases and can be deleted at any time.
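As an illustrative sketch only (not VocalCalm's actual implementation), the insight-note model described above can be pictured as a small summary record per session, with user-initiated deletion, and no transcript or audio fields at all. Every name here is an assumption for illustration:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical shape of an "insight-level" note: a short summary and
# themes, never a transcript or audio. All field names are illustrative.
@dataclass
class InsightNote:
    user_id: str
    created_at: str
    themes: list   # e.g. ["sleep", "work stress"]
    summary: str   # one or two sentences, not a transcript

class NoteStore:
    """In-memory stand-in for an encrypted per-user note database."""
    def __init__(self):
        self._notes = {}  # user_id -> list of InsightNote

    def add(self, note: InsightNote):
        self._notes.setdefault(note.user_id, []).append(note)

    def recall(self, user_id: str) -> list:
        # What the coach "remembers" between sessions.
        return self._notes.get(user_id, [])

    def delete_all(self, user_id: str):
        # User-initiated deletion removes every note for that user.
        self._notes.pop(user_id, None)

store = NoteStore()
store.add(InsightNote("u1", datetime.now(timezone.utc).isoformat(),
                      ["sleep"], "Discussed a winding-down routine."))
assert len(store.recall("u1")) == 1
store.delete_all("u1")
assert store.recall("u1") == []
```

The point of the sketch is the data boundary: only the summary record exists to remember, so deleting it leaves nothing behind.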
Our AI monitors for safety concerns and creates minimal logs for compliance when required. We do not offer live crisis monitoring; VocalCalm always encourages you to contact local emergency services.
AI identifies concerning patterns that may indicate increased risk
Flagged sessions generate minimal metadata so we can meet legal obligations without storing audio.
We surface hotline numbers and online support portals for your region—reach out directly for immediate help.
Questions about AI transparency?
Contact our AI Ethics team at [email protected]