Understanding how VocalCalm's AI makes decisions, protects your privacy, and supports your mental wellness journey with complete transparency.
All AI processing respects your privacy. We never share your personal data for training external models.
High-risk moments are logged automatically so we can audit behavior when legally required. We don't offer 24/7 human monitoring; instead, VocalCalm signposts you to emergency contacts.
You can always request explanations for AI recommendations and understand the reasoning behind them.
Our AI analyzes your conversations using natural language processing to understand:
Based on your needs, the AI selects appropriate therapeutic approaches:
The AI personalizes your experience by:
Your coach remembers past sessions so each conversation feels continuous, but memory is treated like clinical data. We dual-save every transcript: it lands in our private, encrypted memory store first, then synchronizes to a secure backup once the encrypted channel is available. If the backup copy is delayed or unavailable, your coach continues with the primary record and tells you which records were retrieved.
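The dual-save flow above can be sketched in a few lines. This is a simplified illustration, not VocalCalm's actual implementation: the class and method names (`TranscriptStore`, `save`, `load`) are hypothetical, and real storage would add encryption and retry scheduling.

```python
import json
import time
from pathlib import Path


class TranscriptStore:
    """Sketch of a dual-save store: the primary write must succeed
    before the session continues; the backup sync is best-effort."""

    def __init__(self, primary_dir, backup_dir):
        self.primary = Path(primary_dir)
        self.backup = Path(backup_dir)
        self.primary.mkdir(parents=True, exist_ok=True)
        self.backup.mkdir(parents=True, exist_ok=True)
        self.pending = []  # records awaiting a delayed backup sync

    def save(self, session_id, transcript):
        record = {"session": session_id, "text": transcript, "ts": time.time()}
        # 1. Write to the primary (encrypted, in the real system) store first.
        (self.primary / f"{session_id}.json").write_text(json.dumps(record))
        # 2. Sync to the backup; if it is unavailable, queue a retry
        #    and carry on with the primary record.
        try:
            self._sync_backup(record)
        except OSError:
            self.pending.append(record)
        return record

    def _sync_backup(self, record):
        (self.backup / f"{record['session']}.json").write_text(json.dumps(record))

    def load(self, session_id):
        # Reads always serve the primary record, matching the policy above.
        path = self.primary / f"{session_id}.json"
        return json.loads(path.read_text()) if path.exists() else None
```

The key design point is ordering: the session never waits on the backup, so a slow or offline backup degrades redundancy, not continuity.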
Our AI continuously monitors for safety concerns and logs them for compliance review. We do not offer live crisis monitoring—VocalCalm always encourages you to contact local emergency services.
AI identifies concerning patterns that may indicate increased risk
Flagged conversations are stored securely so a qualified reviewer can audit them if a legal obligation arises.
We surface hotline numbers and online support portals for your region—reach out directly for immediate help.
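The safety flow described above (flag concerning patterns, store them for a qualified reviewer, and surface regional support contacts rather than live monitoring) can be sketched as follows. All names here are hypothetical, and the keyword list is a placeholder: a real system would use a trained classifier, not substring matching.

```python
from dataclasses import dataclass, field

# Illustrative placeholder only; production systems use model-based risk detection.
RISK_TERMS = {"self-harm", "hopeless"}


@dataclass
class SafetyAuditLog:
    """Secure store (simplified) for flagged conversations awaiting review."""
    entries: list = field(default_factory=list)

    def record(self, session_id, text):
        self.entries.append({"session": session_id, "text": text})


def review_message(session_id, text, audit_log, hotlines):
    """Flag concerning language for compliance review and return
    regional support contacts instead of attempting live monitoring."""
    flagged = any(term in text.lower() for term in RISK_TERMS)
    if flagged:
        audit_log.record(session_id, text)
    return {"flagged": flagged, "support": hotlines if flagged else None}
```

Note that flagging only logs and signposts: nothing in this path pages a human responder, which mirrors the policy of directing users to local emergency services.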
Questions about AI transparency?
Contact our AI Ethics team at [email protected]