
The Challenge
I’ve spent the last few months building something I never imagined I’d create: a custom large language model designed specifically to support sexual assault survivors. This isn’t a side project or a portfolio piece. It’s a response to a gap I’ve watched persist for years—the brutal reality that finding help after trauma shouldn’t require endless Google searches, cold calls to strangers, and the emotional labor of explaining your story over and over to determine if someone is even the right fit.
The current system for finding post-assault support is broken, and technology has the potential to fix it.
As a designer and researcher working at the intersection of technology and human experience, I’ve become increasingly interested in how we can use AI for genuine good, not just efficiency or profit, but to address real human needs. The work I’m doing with Beyond Believed represents an attempt to build something that validates survivors’ experiences while connecting them to vetted, transparent resources. This article introduces the V1 of this custom LLM, explains why it was built, and explores the fundamental problems it aims to solve.
When Finding Help Becomes Another Trauma
Sexual assault is one of the most underreported crimes in the United States. According to RAINN (Rape, Abuse & Incest National Network), only 310 out of every 1,000 sexual assaults are reported to police. The reasons for this are complex and deeply rooted in trauma response, fear of not being believed, and the overwhelming nature of navigating systems that weren’t designed with survivors in mind.
The Google Search Spiral
When someone searches “sexual assault lawyer near me” or “trauma therapist for assault survivors,” they’re met with pages of results that offer little meaningful differentiation. SEO-optimized law firm websites promise compassionate representation. Psychology Today profiles list specialties in trauma. But these listings rarely answer the questions that actually matter to someone in crisis:
Does this lawyer believe survivors? Have they successfully handled cases like mine? Will they pressure me to report if I’m not ready? Do they understand the specific dynamics of campus assault, workplace harassment, or intimate partner violence? What will this cost me, and can I afford it?
The same opacity exists in finding therapists. A provider might list “trauma” as a specialty, but that tells you nothing about their actual approach, whether they’re trained in evidence-based treatments like EMDR or CPT, or if they have experience with the specific type of assault you’ve survived. Survivors are left making phone calls, explaining their trauma over and over, hoping the person on the other end will be a good fit—all while managing the psychological weight of their experience.
The Vetting Problem
Beyond the difficulty of finding information, there’s a more fundamental issue: most online directories don’t vet their listings in meaningful ways. A lawyer can claim to specialize in sexual assault cases without demonstrating any track record. A therapist can list trauma as a focus area without specific training in trauma-informed care. Survivors are left to do their own research, reading reviews that may or may not reflect the provider’s competence with assault cases specifically.
This lack of vetting isn’t just inconvenient; it’s dangerous. Survivors who encounter providers who don’t believe them, who minimize their experiences, or who lack the specialized knowledge to support them effectively can experience what researchers call “secondary trauma” or “institutional betrayal.” The very act of seeking help becomes another source of harm.
The Transparency Gap
Even when survivors find providers who seem promising, they often can’t get clear answers about cost until they’re already engaged in the process. Legal fees vary wildly and are rarely posted publicly. Therapy costs depend on insurance networks, sliding scales, and whether the provider is in-network; that information is often buried or requires multiple phone calls to uncover.
For survivors who may already be dealing with financial instability (particularly common in cases of intimate partner violence or workplace harassment), this lack of transparency creates another barrier. The question “Can I afford to get help?” shouldn’t be this difficult to answer.
The Emotional Labor of Explaining
Perhaps the most exhausting aspect of the current system is the repetition it requires. To find the right lawyer, you might need to have initial consultations with three or four different firms, explaining your situation each time. To find the right therapist, you might go through intake calls with multiple practices, describing your trauma repeatedly to determine if someone is a good fit.
This repetition isn’t just tedious; it’s retraumatizing. Each retelling activates the stress response. Each new person you have to convince of your experience’s validity takes an emotional toll. The system treats survivors as if they have infinite capacity for this labor, when in reality, many are operating from a place of profound depletion.
The name Beyond Believed grows out of these very problems: being believed should be the bare minimum level of support for survivors. The real question is where the help takes you from there.
Why AI? Why Now?
The problems I’ve outlined aren’t new. Advocates and survivors have been pointing to these gaps for years. So why is a custom LLM the right solution, and why now?
The answer lies in what large language models can uniquely provide: conversational, personalized guidance that feels validating rather than transactional, combined with the ability to surface vetted, specific information based on individual circumstances.
The Validation Factor
One of the most powerful aspects of conversational AI is its ability to create a space that feels inherently validating. When you’re talking to a chatbot designed specifically to support assault survivors, you’re not worried about being believed. You’re not managing someone else’s emotional reaction to your story. You’re not wondering if you’re taking up too much time or asking too many questions.
This matters more than it might seem. Research on trauma recovery consistently shows that validation (having your experience acknowledged without question) is a crucial first step in healing. A custom LLM trained on trauma-informed language and survivor-centered principles can provide this validation in a way that a Google search never could.
The model doesn’t judge. It doesn’t express shock or discomfort. It doesn’t minimize or question. It simply acknowledges your experience and helps you find the support you need. For many survivors, this might be the first interaction they have about their assault that doesn’t require them to defend or explain their reality.
Personalized Navigation Without Repetition
Unlike a static directory or search engine, a conversational AI can ask clarifying questions and surface relevant resources based on your specific situation, without requiring you to tell your full story. The model can understand context: whether you’re looking for a lawyer who handles campus assault, or a therapist who takes your insurance or offers sliding-scale fees.
This personalization happens through conversation, which feels more natural and less overwhelming than filling out lengthy intake forms or navigating complex filter systems. You can ask questions in plain language: “I was assaulted by my supervisor six months ago. I’m not ready to report to police, but I want to know my options.” The LLM can respond with relevant information about civil suits, state laws, and therapeutic support—all tailored to your timeline and readiness.
Connecting to Vetted Resources
The power of this custom LLM isn’t just in the conversation—it’s in what the conversation connects you to. Behind the interface is an Airtable database of vetted lawyers, therapists, and wellness practitioners who have been specifically screened for their work with assault survivors.
This vetting process begins with direct outreach, from me or a potential partner, to connect. It then includes verification of credentials, review of the provider’s approach and philosophy, confirmation of their experience with assault cases specifically, and transparency about their pricing and payment options. When the LLM recommends a provider, survivors can trust that this person has been evaluated against meaningful criteria, not just that they paid for a directory listing.
The database structure also allows for conversational matching. A survivor can be connected to a lawyer who has experience with their specific type of case, who practices in their jurisdiction, and whose fee structure aligns with their financial situation. They can find a therapist who is trained in their preferred modality, who has availability that works with their schedule, and who has demonstrated competence in trauma-informed care.
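To make the conversational matching described above concrete, here is a minimal sketch of how a vetted-provider record and a multi-criteria filter might look. The field names and criteria are hypothetical illustrations, not the actual Beyond Believed schema.

```python
# Illustrative sketch only: field names and matching criteria are
# hypothetical, not the actual Beyond Believed provider schema.
from dataclasses import dataclass


@dataclass
class Provider:
    name: str
    role: str                 # e.g. "lawyer" or "therapist"
    jurisdictions: list[str]  # states where they practice
    case_types: list[str]     # e.g. "campus", "workplace", "IPV"
    modalities: list[str]     # e.g. "EMDR", "CPT" (therapists)
    sliding_scale: bool
    fee_range: str            # disclosed up front, e.g. "$100-180/session"


def match_providers(providers, role, state, case_type, needs_sliding_scale):
    """Return only providers meeting every stated criterion, so a survivor
    never has to re-explain their situation just to narrow the results."""
    return [
        p for p in providers
        if p.role == role
        and state in p.jurisdictions
        and case_type in p.case_types
        and (not needs_sliding_scale or p.sliding_scale)
    ]
```

The point of the sketch is that every factor a survivor mentions in conversation (jurisdiction, case type, affordability) narrows the results simultaneously, rather than forcing them through separate searches.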
Available 24/7, Judgment-Free
Trauma responses don’t appear at convenient times. Survivors often find themselves searching for information at 2 AM, unable to sleep, trying to understand their options. AI is available whenever someone needs it, providing consistent, reliable information without the constraints of office hours or appointment availability.
This constant availability also removes the pressure of “using your time wisely” that can come with phone calls or consultations. You can ask the same question multiple ways. You can come back days later and pick up where you left off. You can explore options without committing to anything. The model creates space for the kind of processing and decision-making that trauma survivors often need: nonlinear, repetitive, and self-paced.
How the Custom LLM Works

The Beyond Believed Companion is built on a foundation of trauma-informed principles and survivor-centered design. Every aspect of its training and implementation has been considered through the lens of what survivors actually need.
Training on Trauma-Informed Language
The model has been trained specifically on trauma-informed communication principles. This means it understands how to validate without over-empathizing, how to provide information without overwhelming, and how to respect autonomy while offering guidance. It doesn’t use language that implies blame or questions credibility. It doesn’t push survivors toward any particular course of action. It simply provides information and support based on what the survivor is asking for.
The model recognizes that there’s no “right way” to respond to assault, no timeline for healing, and no single path forward that works for everyone.
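As an illustration of how trauma-informed principles like these can be encoded for a language model, here is a hypothetical system-prompt sketch. This is my own illustration of the approach, not the actual training data or prompt behind the Beyond Believed Companion.

```python
# Hypothetical illustration: the Companion's actual training/prompting
# is not shown here; this sketches how such guidelines might be encoded.
TRAUMA_INFORMED_GUIDELINES = """\
You support sexual assault survivors. Always:
- Validate without questioning credibility or implying blame.
- Provide information at the pace and depth the user asks for; never overwhelm.
- Respect autonomy: present options, never push a course of action.
- Acknowledge there is no "right way" to respond to assault and no timeline for healing.
Never express shock, minimize the experience, or pressure the user to report.
"""
```

Whether these principles live in fine-tuning data, a system prompt, or both, the design goal is the same: the model informs and validates without directing.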
Integration with Vetted Provider Database
The LLM connects directly to an Airtable backend that houses the database of vetted providers. This isn’t a simple keyword match; the system understands context and can make sophisticated recommendations based on multiple factors simultaneously.
When a survivor describes their situation, the model can identify relevant providers based on their location, the type of assault they experienced, their financial constraints, their insurance status, their preferred therapeutic approach, and their readiness for different types of intervention. The recommendations come with transparent information about each provider’s background, approach, availability, and cost.
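One way to picture the lookup step is translating the needs the model extracts in conversation into an Airtable `filterByFormula` query against the provider table. The table name and fields (`{State}`, `{CaseTypes}`, `{SlidingScale}`) are hypothetical, not the real base schema.

```python
# Hedged sketch: assumes a hypothetical "Providers" table with fields
# {State}, {CaseTypes}, {SlidingScale}; not the actual base schema.
import urllib.parse


def build_filter(state, case_type, needs_sliding_scale):
    """Compose an Airtable filterByFormula string from needs the model
    extracted in conversation, combining every factor with AND()."""
    clauses = [
        f"{{State}} = '{state}'",
        f"FIND('{case_type}', {{CaseTypes}})",
    ]
    if needs_sliding_scale:
        clauses.append("{SlidingScale} = TRUE()")
    return "AND(" + ", ".join(clauses) + ")"


def records_url(base_id, table, formula):
    """URL for Airtable's REST list-records endpoint with the filter applied."""
    query = urllib.parse.urlencode({"filterByFormula": formula})
    return f"https://api.airtable.com/v0/{base_id}/{table}?{query}"
```

The conversational layer does the hard part (understanding context); the query itself is just the mechanical translation of that understanding into filters.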
Privacy and Security by Design
Given the sensitive nature of the conversations, privacy and security are paramount. The system is designed to collect only the information necessary to make relevant recommendations. Conversations are not stored in ways that could identify individual users. The model doesn’t require account creation or personal information unless a survivor chooses to provide it to connect with a specific provider. This privacy-first approach respects survivors’ privacy and autonomy.
Continuous Learning and Improvement
The Beyond Believed Companion is now live in beta, and it’s important to acknowledge that this work is ongoing (aka this isn’t a “launch and forget” project). The model continues to be updated and refined based on feedback from survivors, input from trauma specialists, and emerging research on best practices in trauma-informed care. The provider database is continuously expanded and updated to ensure recommendations remain current and relevant.
This article represents the introduction to this work, not its conclusion.
The Limitations and Ethical Considerations
As excited as I am about the potential of this AI companion, I’m also acutely aware of its limitations and the ethical questions it raises.
AI Cannot Replace Human Connection
The LLM is a tool for connection, and I firmly believe that, in a mental health context, it should never be used as a replacement for human support. It can help survivors find the right therapist, but it cannot provide therapy. It can explain legal options, but it cannot represent someone in court. It can validate experiences, but it cannot replace the healing that comes from human relationships and community support.
The goal is to make the path to human support less exhausting and more accessible, not to eliminate human connection from the healing process. The model is designed to be a bridge, not a destination. An onboarding flow, a disclaimer, and an AI transparency page remind every user of this.
The Risk of Over-Reliance on Technology
There’s a valid concern that building technological solutions to social problems can distract from addressing root causes. Sexual assault is a cultural issue that requires cultural solutions—*cough* accountability *cough* systemic change. An LLM that helps survivors find lawyers doesn’t prevent assault from happening in the first place.
I don’t see this work as a replacement for prevention efforts or cultural change. I see it as harm reduction, a way to support people who are already struggling to find support or answers while we continue the longer, harder work of creating a world where assault isn’t a common experience.
Bias in AI Systems
All AI systems reflect the biases present in their training data and design decisions. Despite careful attention to trauma-informed principles, this LLM may still contain blind spots or biases that could harm certain survivors. Ongoing monitoring and feedback mechanisms are essential to identify and address these issues as they emerge.
The vetting process for providers also carries the risk of bias: who gets included, what criteria are prioritized, and how “good fit” is defined all involve subjective judgments that could disadvantage certain providers or survivors. Transparency about these decisions and openness to feedback are crucial.
What Success Looks Like
Success for the Beyond Believed Companion isn’t measured in user numbers or engagement metrics. It’s measured in whether survivors feel more supported, less alone, and more able to access the help they need.
Success looks like a survivor who’s been searching for weeks finding a lawyer who understands their case in a single conversation. It looks like someone who’s been too afraid to reach out for therapy feeling validated enough to take that step. It looks like a person who couldn’t afford to make dozens of phone calls getting clear, transparent information about their options without having to explain their trauma repeatedly.
Success also looks like the model becoming unnecessary—a future where finding post-assault support is so straightforward, so survivor-centered, and so accessible that a custom LLM isn’t needed. Until then, this technology represents an attempt to meet survivors where they are, with the tools we have available now.
The Work Continues
This article introduces the work behind building the Beyond Believed Companion, but the work is far from finished. The model continues to be refined, the provider database continues to grow, and the understanding of how best to support survivors continues to evolve.
If you’re a trauma-informed provider interested in being included in the database, a survivor with feedback on how the model could better serve your needs, or a researcher or advocate with insights on trauma-informed AI design, I want to hear from you. This work is most effective when it’s informed by the community it aims to serve.
Connect with me at hello@beyondbelieved.org

