AI · Wednesday, March 4, 2026

How Frontline’s ‘Lizzy’ uses AI to flag imminent domestic violence

Frontline, a spin‑out co‑founded by LMU doctoral researcher Ba Linh Le, has launched Lizzy, an AI‑assisted web app that predicts the short‑term risk of domestic violence with roughly 80% accuracy.


The app is already deployed in counselling centres and protective facilities across 11 German states, and the team is pursuing wider medical and institutional adoption.

Lizzy emerged from public‑health research at Ludwig‑Maximilians‑Universität München. Unlike many conventional tools developed decades ago in North America, Lizzy is trained on a representative German longitudinal survey of 7,400 participants.


That local data foundation is central to its claim: risk factors and legal contexts differ by country, and transplanting U.S. or Canadian checklists to Germany produces unreliable results.

“Risk assessments play a crucial role in determining how much support affected individuals receive—and when,” Ba Linh Le told LMU Newsroom. Her team built Frontline in Berlin to turn those research insights into a practical, low‑friction tool for frontline professionals: police, shelter staff and counsellors who must make time‑sensitive decisions.

The app offers a short screening (six questions, about one minute) and a longer assessment that generates a violence profile and a three‑month risk estimate. Instead of relying solely on direct questions about physical or sexual abuse — which survivors may not answer honestly — Lizzy uses proxy indicators such as object‑throwing, financial coercion or digital surveillance to infer risk. The model aggregates physical, sexual, emotional, digital and financial abuse into a single assessment.
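The article does not disclose Lizzy's actual model, but the aggregation step it describes, combining proxy indicators across physical, sexual, emotional, digital and financial abuse into a single risk estimate, can be sketched roughly as follows. All indicator names, weights and the logistic form here are illustrative assumptions, not Frontline's method:

```python
import math

# Hypothetical proxy indicators per abuse dimension (NOT Frontline's actual
# questionnaire); each answer is 0 (absent) or 1 (present).
INDICATORS = {
    "physical": ["object_throwing"],
    "financial": ["financial_coercion"],
    "digital": ["digital_surveillance"],
    "emotional": ["verbal_threats"],
    "sexual": ["coerced_contact"],
}

# Illustrative weights; a real model would learn these from survey data.
WEIGHTS = {
    "object_throwing": 1.2,
    "financial_coercion": 0.8,
    "digital_surveillance": 1.0,
    "verbal_threats": 0.9,
    "coerced_contact": 1.5,
}
BIAS = -2.0  # illustrative intercept

def three_month_risk(answers: dict[str, int]) -> float:
    """Map binary screening answers to a 0-1 short-term risk estimate
    via a simple logistic model (an assumption for illustration)."""
    score = BIAS + sum(
        WEIGHTS[name] * answers.get(name, 0)
        for names in INDICATORS.values()
        for name in names
    )
    return 1.0 / (1.0 + math.exp(-score))

# Example: two proxy indicators present.
risk = three_month_risk({"object_throwing": 1, "digital_surveillance": 1})
```

The point of the sketch is the design choice the article highlights: risk is inferred from indirect, easier-to-answer signals rather than from direct questions about abuse, and the per-dimension signals collapse into one score.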

“There will never be a tool that predicts danger with one hundred percent certainty. Human behavior is too variable,” Ba Linh Le told LMU Newsroom. “Risk assessment is only the first step in combating domestic violence. But every percentage point of reliability can determine whether the right support arrives in time.”

Frontline has not yet monetised Lizzy, according to LMU. That raises the usual startup questions: what is the sustainable revenue model (public procurement, SaaS for municipalities, paid integrations into hospital IT)? What regulatory hurdles arise when an AI model influences protective actions? And how will the team balance scaling with ethical audits and auditability?

LMU is actively supporting the transition from lab to market: the university is creating a central entrepreneurship and knowledge‑transfer unit to accelerate spin‑outs and help researchers commercialise ideas. “Our goal is to catalyze innovation,” Dr. Philipp Baaske, LMU Vice President for Entrepreneurship, told LMU Newsroom. “We focus on the people behind the innovations. We challenge and support them, advise and connect them, remove obstacles and accelerate the transfer of ideas into the economy and society.”

Frontline’s approach also underscores a wider lesson for AI builders: data fidelity and domain fit often matter more than model complexity. Lizzy’s predictive power comes from locally representative training data and careful question design, rather than headline‑grabbing model scale.

LMU has coupled research and institutional signals — from the orange bench campaign to a new policy against harassment — to push the campus community toward preventative measures.


“As a university, we want to take a stand, send a message, raise awareness, and call for action. We need more people like Ba Linh Le,” Dr. Margit Weber, LMU Vice President for Equal Opportunities, Talent Development, and Diversity, told LMU Newsroom.