Build a module that scans user-submitted text for crisis language and surfaces appropriate resources, without ever blocking the user's ability to post.

```typescript
scanContent(text: string): SafetyCheckResult

// SafetyCheckResult: {
//   flagged: boolean,
//   severity: 'low' | 'medium' | 'high',
//   matchedPatterns: string[],
//   suggestedResources: Resource[]
// }

// Resource: { name: string, contact: string, url: string, available24h: boolean }
```

Requirements:

1. Match against a curated keyword/phrase list for self-harm and crisis language using word-boundary regex (`\b`). The list should be maintainable (an exported constant), not buried in logic.
2. Severity levels: 'low' = general distress language, 'medium' = explicit self-harm references, 'high' = immediate crisis indicators.
3. Always include these resources for 'medium' and 'high': Crisis Text Line (text HOME to 741741), 988 Suicide & Crisis Lifeline, NAMI Helpline (1-800-950-NAMI).
4. Never modify, censor, or block the original text.
5. Case-insensitive matching.

Write tests for: a high-severity match, a low-severity match, false-positive guards ("shoot, I forgot", "I'm dying laughing"), and clean content returning `flagged: false`.
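A minimal sketch of how the scanner might be structured, assuming the shapes above. The pattern entries below are illustrative placeholders, not a vetted clinical keyword list, and the `CRISIS_PATTERNS` / `CRISIS_RESOURCES` names are assumptions of this sketch rather than anything fixed by the spec:

```typescript
// Sketch only: the pattern list here is a minimal placeholder set,
// not a curated clinical vocabulary.

type Severity = 'low' | 'medium' | 'high';

interface Resource {
  name: string;
  contact: string;
  url: string;
  available24h: boolean;
}

interface SafetyCheckResult {
  flagged: boolean;
  severity: Severity;
  matchedPatterns: string[];
  suggestedResources: Resource[];
}

// Exported so the list can be maintained without touching scan logic.
// Word boundaries (\b) keep phrases like "shoot, I forgot" from matching.
export const CRISIS_PATTERNS: { pattern: RegExp; severity: Severity }[] = [
  { pattern: /\bkill myself\b/i, severity: 'high' },
  { pattern: /\bend my life\b/i, severity: 'high' },
  { pattern: /\bhurt myself\b/i, severity: 'medium' },
  { pattern: /\bself[- ]harm\b/i, severity: 'medium' },
  { pattern: /\bhopeless\b/i, severity: 'low' },
  { pattern: /\bcan'?t go on\b/i, severity: 'low' },
];

export const CRISIS_RESOURCES: Resource[] = [
  { name: 'Crisis Text Line', contact: 'Text HOME to 741741', url: 'https://www.crisistextline.org', available24h: true },
  { name: '988 Suicide & Crisis Lifeline', contact: 'Call or text 988', url: 'https://988lifeline.org', available24h: true },
  { name: 'NAMI Helpline', contact: '1-800-950-NAMI', url: 'https://www.nami.org/help', available24h: false },
];

const RANK: Record<Severity, number> = { low: 0, medium: 1, high: 2 };

// Scans text without ever modifying it; returns the highest severity matched.
export function scanContent(text: string): SafetyCheckResult {
  const matchedPatterns: string[] = [];
  let severity: Severity = 'low';
  for (const { pattern, severity: s } of CRISIS_PATTERNS) {
    if (pattern.test(text)) {
      matchedPatterns.push(pattern.source);
      if (RANK[s] > RANK[severity]) severity = s;
    }
  }
  const flagged = matchedPatterns.length > 0;
  return {
    flagged,
    severity,
    matchedPatterns,
    // Per requirement 3, resources attach only at 'medium' and 'high'.
    suggestedResources: flagged && severity !== 'low' ? CRISIS_RESOURCES : [],
  };
}
```

One design note: tracking the maximum severity across all matches (rather than stopping at the first hit) means a message containing both distress language and a crisis indicator is reported at 'high', and `matchedPatterns` still records everything that fired.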