AI Project Exposes Victim-Blaming and Gender Bias in Family Court Domestic Abuse Cases

These are findings from an AI project called herEthical AI, which analyzed family court judgments and appeals in England and Wales. The analysis surfaced victim-blaming and gender-biased language used by judges in domestic abuse cases, with examples where judges discredited survivors, downplayed abuse, or made stereotypical assumptions about victims. In one instance, a judge referred to an attempted strangling as a "prank"; in another, a judge cast doubt on a professional woman's testimony regarding inappropriate sexual relations.

The project is part of a campaign called Breaking Bias, founded by barrister Charlotte Proudman, which aims to highlight judicial attitudes that retraumatize survivors and discourage them from coming forward. The research emphasizes the need for better judicial training on domestic abuse, coercive control, and trauma.

The AI model's broader goal is to bring transparency to family court proceedings, which are often criticized for a lack of accountability. So far the project has relied on freely available court judgments, and it is crowdfunding to obtain hearing transcripts that would give a fuller picture of courtroom dynamics.

Credit to Rachel Hall and The Guardian for the original article.

https://www.theguardian.com/law/2024/oct/08/family-court-judges-victim-blaming-language-domestic-abuse-cases-ai-project