
Judger: AI Legal Chatbot for Jordanian Law Firms

How we built an Arabic-first legal chatbot that helps Jordanian lawyers query case law, statutes, and precedents — and get cited, jurisdiction-aware answers in seconds.

Bolder Team

Overview

Jordanian law firms deal with an enormous volume of case law, statutes, and legal commentary — most of it in Arabic. Junior associates spend hours searching through databases for the right precedent. Senior partners spend hours reviewing that work. Judger compresses both steps.

We built an AI legal chatbot for a Jordanian legal services client that lets lawyers ask questions in natural Arabic and get cited, jurisdiction-aware answers drawn from a curated corpus of Jordanian and Arab regional law.

The Problem

  • Legal research in Arabic is time-consuming and fragmented across multiple sources
  • Existing tools are either English-first or lack legal domain depth
  • Associates needed a way to surface relevant rulings without spending hours on manual search
  • Partners needed confidence that AI-generated answers were traceable to real sources

What We Built

Arabic-First RAG Pipeline

We built a retrieval pipeline tuned specifically for Arabic legal text:

  • Document ingestion: structured parsing of Jordanian statutes, court rulings, and regulatory texts
  • Arabic-aware chunking: respects Arabic sentence structure and legal clause boundaries
  • Hybrid retrieval: dense + sparse search with re-ranking tuned on legal query patterns
  • Source citation: every answer includes the exact source document and clause
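The hybrid retrieval step above can be sketched with reciprocal rank fusion, a common way to merge dense and sparse result lists. The document IDs, rankings, and the `k` constant here are illustrative, not the production system's:

```python
# Sketch of hybrid retrieval via reciprocal rank fusion (RRF).
# The corpus and rankings below are made up for illustration.

def rrf_merge(dense_ranked, sparse_ranked, k=60):
    """Merge two ranked lists of document IDs with reciprocal rank fusion.

    Each document scores 1 / (k + rank) in every list it appears in;
    documents ranked highly by both retrievers rise to the top.
    """
    scores = {}
    for ranked in (dense_ranked, sparse_ranked):
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Example: the dense and sparse retrievers disagree on ordering.
dense = ["ruling_412", "statute_7", "ruling_89"]
sparse = ["statute_7", "commentary_3", "ruling_412"]
merged = rrf_merge(dense, sparse)
```

A dedicated re-ranker (as described above) would then re-score the top of `merged` against the query; RRF simply gives it a single candidate list to work from.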

Conversational Interface

  • Multi-turn dialogue — lawyers can ask follow-up questions, narrow scope, or request comparisons
  • Dialect-aware: understands Jordanian legal Arabic as well as formal Modern Standard Arabic (MSA)
  • Guardrails to flag ambiguous queries and ask for clarification before generating

Evaluation Layer

We ran continuous evaluation throughout development using BEval Studio:

  • Automated scoring on answer relevance, citation accuracy, and hallucination rate
  • Human review by practicing Jordanian lawyers for a sample of outputs
  • Final system achieved >95% citation accuracy on held-out test queries
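The citation-accuracy metric above can be sketched as a simple scorer: a cited source counts as correct only if it appears in the gold reference set for that query. The field names and sample data are invented for illustration:

```python
# Toy citation-accuracy scorer. Each prediction pairs the sources the
# model cited with the gold sources a lawyer marked as correct.

def citation_accuracy(predictions):
    """predictions: iterable of (cited_sources, gold_sources) sets per query.

    Returns the fraction of cited sources that are genuinely correct,
    i.e. micro-averaged citation precision across the batch.
    """
    correct = total = 0
    for cited, gold in predictions:
        for source in cited:
            total += 1
            correct += source in gold
    return correct / total if total else 0.0

batch = [
    # Query 1: one citation, and it is in the gold set.
    ({"civil_code_art_256"}, {"civil_code_art_256", "civil_code_art_257"}),
    # Query 2: two citations, one correct and one spurious.
    ({"labour_law_art_59", "ruling_2018_114"}, {"labour_law_art_59"}),
]
```

A hallucinated citation (one matching no real document at all) simply never appears in any gold set, so it drags this score down the same way a wrong-but-real citation does.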

Results

  • Research time cut by ~65% for standard case law queries
  • Adopted by associates as a daily first-pass research tool
  • Zero hallucinated citations in post-launch monitoring (first 60 days)

Notes

Client name withheld under NDA. The system is deployed in production in Jordan.

Work with us

Want results like this?

Start a project →