LexVault is an AI that answers legal questions, analyzes contracts, and runs M&A due diligence — grounded in real law, verified before it reaches you, with a full audit trail. Runs entirely on your hardware. No data sent to OpenAI, ever.
Every component runs on-premise. Disconnect the Ethernet cable and it still works.
No OpenAI, no Azure, no AWS. The LLM, embeddings, reranker, and translation all run locally on your hardware.
Deployed on an NVIDIA DGX Spark that fits on a desk. No server room, no IT department. All models run locally — 1,150 tokens/second. Always on.
Ask in English, search in German or Slovak statute text. The LLM generates search queries in the target language — zero extra latency, perfect legal terminology.
New case law and statute updates delivered monthly via secure USB drive. Plug in, wait 30 seconds, done. Your legal database stays current without ever connecting to the internet.
Five steps, fully on-premise. First results stream within seconds.
If the target jurisdiction uses a different language, the system translates the query. "tenant rights" becomes "Rechte der Mieter" for Austrian law.
Neural Translation · 450+ Languages · ~350 ms

Searches statutes and case law using semantic meaning and exact legal terms simultaneously. Results merged and ranked by relevance.
Hybrid Vector Search · Semantic + Exact Match

A cross-encoder reads each query-document pair and scores relevance. Only results above threshold pass. Noise filtered out.
Cross-Encoder Reranker · GPU-Accelerated

The AI reads the relevant law and writes a grounded answer citing specific statutes and decisions. Temperature zero — facts only, no creativity.
Open-Source LLM · GPU-Accelerated · Streaming

An independent LLM judge reviews the full response against sources at temperature zero. Unsupported claims are automatically rewritten grounded in source text. Claims that fail re-verification are removed.
LLM Judge · Temp 0 · Regenerate-or-Remove

A legal-specialist language model built from the ground up for statute retrieval, cross-jurisdiction research, and grounded legal reasoning. Not a general-purpose chatbot with a legal prompt.
93% vs 75% — Lotus AI outperforms a general-purpose model with twice its parameter count
On a 35-test legal benchmark spanning tool routing, document reranking, response generation, and live retrieval across 10 jurisdictions, Lotus AI scored 93% against a leading 9-billion-parameter model’s 75%. Half the size, better results.
For every legal question, Lotus generates three parallel search queries covering different statutory angles. A question about banking licenses triggers queries for the license type, the granting conditions, and the supervising authority — all in the correct jurisdiction language.
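A minimal sketch of this fan-out, with a hard-coded expansion standing in for the LLM call (the `SearchQuery` structure, queries, and function names are illustrative assumptions, not the product's internals):

```python
from dataclasses import dataclass

@dataclass
class SearchQuery:
    text: str          # query in the jurisdiction's statutory language
    angle: str         # which statutory aspect this query targets
    jurisdiction: str  # jurisdiction code

def expand_question(question: str) -> list[SearchQuery]:
    """Fan one legal question out into three queries, each covering a
    different statutory angle (hard-coded example for illustration;
    the product generates these with the LLM)."""
    return [
        SearchQuery("Konzession Kreditinstitut", "license type", "AT"),
        SearchQuery("Konzessionsvoraussetzungen Bankwesengesetz", "granting conditions", "AT"),
        SearchQuery("Aufsicht Finanzmarktaufsicht", "supervising authority", "AT"),
    ]

queries = expand_question("What do I need for a banking license in Austria?")
assert len(queries) == 3
```

Each query targets the same answer from a different statutory direction, so a single retrieval pass covers the license type, the conditions, and the supervisor at once.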
Ask in English, get search queries in German, Slovak, Romanian, or Czech — using correct statutory terminology, not dictionary translations. 100% language accuracy across all tested jurisdictions, including informal and mixed-language input.
Tested against 3 million+ legal provisions across 10 jurisdictions. 88% of target statutes found in the top results — outperforming models with twice the parameter count. The one shared failure across all tested models was a database gap, not a model limitation.
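The hit-rate metric behind a figure like this can be computed as follows (toy data with hypothetical statute ids, not the benchmark set):

```python
def hit_rate_at_k(ranked: list[list[str]], targets: list[str], k: int = 10) -> float:
    """Share of queries whose target statute id appears in the
    top-k retrieved ids."""
    hits = sum(target in results[:k] for results, target in zip(ranked, targets))
    return hits / len(targets)

# Four toy queries; three of four targets appear in the top results.
ranked = [["BWG-4", "BWG-5"], ["ABGB-1"], ["GmbHG-16", "GmbHG-17"], ["UGB-2"]]
targets = ["BWG-4", "ABGB-2", "GmbHG-16", "UGB-2"]
assert hit_rate_at_k(ranked, targets, k=10) == 0.75
```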
Every answer cites specific statute sections by full name. Cross-source inference connects related provisions into coherent legal analysis. No hallucinated claims — verified against source text before delivery.
| Category | Tests | What It Measures | Lotus AI | General 9B |
|---|---|---|---|---|
| Tool Routing | 15 | Query generation in correct language with statutory terminology across 8 jurisdictions | 100% | 53% |
| Document Reranking | 5 | Selecting relevant statutes from mixed document sets with minimal noise | 95% | 83% |
| Response Generation | 7 | Citation format, cross-source inference, language matching, grounding | 93% | 86% |
| Live RAG Retrieval | 8 | Finding specific statutes in 3M+ provisions across jurisdictions | 88% | 75% |
| Weighted Total | 35 | — | 93% | 75% |
All tests run at temperature zero with identical prompts. “General 9B” is a leading open-source 9-billion-parameter model — the same architecture, just larger. Lotus AI is less than half the size. Live retrieval tested against production database (3,062,429 legal provisions, 10 jurisdictions).
Scatterbrained Lawyer Test
“yo can a geschäftsführer just get fired from a gmbh whenever or does there need to be a grund?”
Lotus found GmbH-Gesetz § 16 in 3M+ provisions.
Cross-Language Retrieval
“How do I set up an LLC in Slovakia?”
Lotus searched in Slovak, found Obchodný zákonník provisions.
Research, analyze, run due diligence, and export — all in one platform, all on your hardware.
Ask questions in any language. The AI searches statutes and case law, cites specific sections, and verifies every claim before responding. Upload PDF, DOCX, or TXT files for context.
Upload an entire data room. The AI classifies documents, extracts clauses, flags risks across 4 tiers, cross-references warranties against disclosures, and verifies every finding against applicable statutes. Multi-party, multi-jurisdiction.
Upload a contract and get a clause-by-clause risk assessment. Each clause rated by importance with specific statute references. Large documents automatically chunked and analyzed.
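One common way to do the automatic chunking is fixed-size windows with overlap, so clauses straddling a boundary appear whole in at least one chunk. A rough sketch (the sizes are illustrative assumptions, not the product's settings):

```python
def chunk_text(text: str, size: int = 2000, overlap: int = 200) -> list[str]:
    """Split a long document into overlapping chunks so clause
    boundaries are not lost at chunk edges."""
    if size <= overlap:
        raise ValueError("chunk size must exceed overlap")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap  # step back by `overlap` each window
    return chunks

parts = chunk_text("x" * 5000, size=2000, overlap=200)
assert len(parts) == 3
```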
Export research conversations, drafted contracts, and document analyses as professionally formatted DOCX or PDF files. Ready for client delivery.
Each lawyer gets their own login and conversation history. Admin dashboard with invite system, role management, usage stats, and seat control. Like ChatGPT Team, but on-premise.
Run parallel multi-source investigations across jurisdictions. Progressive streaming results. Automatic document chunking for large files. Cancel anytime.
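Progressive streaming of parallel searches can be sketched like this: each jurisdiction is queried concurrently, and results are yielded the moment they complete rather than after the slowest finishes. The `search_jurisdiction` stub is hypothetical, standing in for a real statute search:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def search_jurisdiction(jurisdiction: str, query: str) -> tuple[str, list[str]]:
    # Placeholder for a real statute search; returns (jurisdiction, hits).
    return jurisdiction, [f"{jurisdiction}: result for {query!r}"]

def investigate(query: str, jurisdictions: list[str]):
    """Fan one query out across jurisdictions and yield each result
    set as soon as it completes, so the UI can stream progressively."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(search_jurisdiction, j, query) for j in jurisdictions]
        for fut in as_completed(futures):
            yield fut.result()

results = dict(investigate("tenant rights", ["AT", "DE", "SK"]))
assert set(results) == {"AT", "DE", "SK"}
```

Cancellation falls out naturally: stop consuming the generator and abandon the remaining futures.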
When regulators ask "how did the AI reach that conclusion?", you have the answer. Full audit coverage across every AI workflow — no other legal AI product offers this.
Every claim the AI makes is linked to the specific statute section or court decision it came from. Up to 2,000 characters of source text preserved per citation — not just a reference number.
Before any response reaches the user, an independent LLM judge reviews the full answer against its sources at temperature zero. Unsupported claims are rewritten from source text. Claims that fail re-verification are removed entirely. Both the raw LLM output and the verified final response are stored.
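The regenerate-or-remove policy can be sketched as a simple loop: judge each claim, give failing claims one grounded rewrite, and drop anything that still fails. The `judge` and `rewrite` stand-ins below are toy stubs, not the product's temperature-zero LLM calls:

```python
def verify_response(claims, sources, judge, rewrite):
    """Keep verified claims, rewrite-then-recheck unsupported ones,
    and remove claims that fail re-verification."""
    verified = []
    for claim in claims:
        if judge(claim, sources):
            verified.append(claim)
            continue
        redo = rewrite(claim, sources)   # one grounded rewrite attempt
        if judge(redo, sources):
            verified.append(redo)
        # else: claim removed entirely
    return verified

sources = ["GmbHG § 16: removal of managing directors ..."]
judge = lambda c, s: "§ 16" in c                      # toy: verified if cited
rewrite = lambda c, s: c + " per GmbHG § 16" if "removed" in c else c

out = verify_response(
    ["directors can be removed", "the moon is cheese"], sources, judge, rewrite
)
assert out == ["directors can be removed per GmbHG § 16"]
```

The supportable claim survives with a citation grafted in from the source; the unsupportable one is gone from the final response.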
Every interaction recorded: system prompt, raw AI output, verified response, sources searched, confidence score, per-claim verification results. Searchable, exportable, survives account deletion.
Audit logging across all AI workflows: Research Chat, Document Analysis, Deep Investigation (every RAG query and LLM call across multi-pass analysis), and Document Navigator. Every step is traceable.
Sample audit entry: “System prompt logged (1,247 chars) — raw LLM output stored (3,421 chars) — cited GmbH-Gesetz § 6 — Judge verdict: Verified — source text: 1,847 chars — model: open-source LLM (temp 0) — response time: 1.2s — logged 2026-03-10 14:32:07”
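A record carrying these fields might look like the following sketch (field names are assumptions mirroring the sample entry above, not a published schema):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AuditEntry:
    """One immutable audit-trail record per AI interaction."""
    system_prompt: str         # full prompt as sent
    raw_output: str            # LLM output before verification
    verified_response: str     # response after judge pass
    sources_searched: list[str]
    confidence: float
    judge_verdict: str         # e.g. "Verified"
    source_text_chars: int     # up to 2,000 chars preserved per citation
    model: str
    response_time_s: float
    logged_at: str             # ISO-8601 timestamp

entry = AuditEntry(
    system_prompt="...", raw_output="...", verified_response="...",
    sources_searched=["GmbH-Gesetz § 6"], confidence=0.97,
    judge_verdict="Verified", source_text_chars=1847,
    model="open-source LLM (temp 0)", response_time_s=1.2,
    logged_at="2026-03-10T14:32:07Z",
)
assert entry.judge_verdict == "Verified"
```

Because the record stores both the raw and the verified text, a regulator's "how did the AI reach that conclusion?" maps to concrete fields rather than a narrative reconstruction.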
Courts, regulators, and bar associations are drawing a clear line: if your AI provider can access the data, your confidentiality is at risk.
Judge Rakoff ruled that documents generated using a consumer AI platform were not protected by attorney-client privilege. The court found no reasonable expectation of confidentiality where the provider’s terms allow data collection and government disclosure. First ruling of its kind.
US law compels US-incorporated companies to produce data “regardless of whether such data is located within or outside the United States.” FISA 702 (reauthorized 2024) enables intelligence collection from US providers’ infrastructure — including European data centers. Server location is irrelevant; corporate domicile decides.
Transferring personal data to US-controlled infrastructure remains legally precarious after Schrems II (C-311/18). The EU-US Data Privacy Framework is already under challenge. Art. 28 GDPR imposes strict obligations on data processors — every cloud AI provider is one.
Transparency obligations apply from August 2026. GPAI providers face compliance duties under Art. 53 (in force Aug 2025). Fines up to €35M or 7% of global turnover for prohibited practices; €20M or 4% under GDPR. Italy fined OpenAI €15M in December 2024.
Unlawful disclosure of client secrets is a criminal offense (up to 1 year imprisonment). Cloud AI providers require a § 203-compliant confidentiality addendum — most standard API terms do not include one.
Lawyers must maintain secrecy about all matters entrusted to them. External service providers are permitted only with adequate confidentiality obligations. Clients must be informed about categories of service providers used.
Article 11: lawyers must maintain professional confidentiality regarding any aspect of a mandate. Article 45(6): unauthorized disclosure is a criminal offense carrying 1–5 years imprisonment.
The Council of Bars and Law Societies of Europe states lawyers may not input personal, confidential, or client-related information into GenAI tools without adequate safeguards. Warns of hallucination, data retention, and training data risks.
“The right and duty of lawyers to keep clients’ matters confidential and to respect professional secrecy is one of the most important professional duties and is the basis for the relationship of trust between lawyer and client in a state governed by the rule of law.” — CCBE Guide on the Use of Generative AI by Lawyers, October 2025
Applicable regulations: US CLOUD Act 2018 · FISA 702 (reauth. 2024) · US v. Heppner (S.D.N.Y. 2026) · EU AI Act 2024/1689 · GDPR Art. 28, 44–49 · Schrems II (C-311/18) · DE § 203 StGB · AT § 9 RAO · RO Law 51/1995 Art. 11, 45 · CCBE GenAI Guide 2025
Open-source models, runs entirely on your hardware, zero cloud services. End-to-end pipeline completes in under 15 seconds with response streaming from ~3s.
| Layer | Technology | Details |
|---|---|---|
| LLM | Lotus AI | Purpose-built legal model — 93% on 35-test benchmark, streaming, temp zero |
| Embeddings | Multilingual Embeddings | Semantic search across 100+ languages |
| Reranker | Cross-Encoder Reranker | GPU-accelerated relevance scoring (~1s) |
| Translation | LLM-Native Translation | Search queries generated in target language by the LLM — zero extra latency |
| Verification | LLM Judge (temp 0) | Reviews full response against sources, regenerates or removes unsupported claims |
| Search | Hybrid Vector Search | Semantic + exact term matching, GPU-accelerated |
| Hardware | NVIDIA DGX Spark | 128 GB unified memory, 1,150 tok/s concurrent. Clusterable for larger firms. |
| Deployment | Pre-Installed Appliance | Plug in and go, no configuration needed |
Pick the jurisdictions you need. Our multilingual pipeline supports 450+ languages — if a country publishes its law, we can add it.
| Jurisdiction | Coverage | Status |
|---|---|---|
| US Federal | Statutes (USC) + Regulations (CFR) | Available |
| US States | State statutes & regulations (per state) | On request |
| Austria | Full legislation (51K+ sections) | Available |
| Slovakia | Core statutes & codes | Available |
| EU | Regulations, Directives & CJEU decisions | On request |
| Germany | Federal statutes & codes | Available |
| Romania | Full legislation (1.1M+ sections) | Available |
| UK | Primary legislation & case law | On request |
| China | National laws & State Council regulations | On request |
| Others | Any jurisdiction with publicly available law | On request |
Every other legal AI sends your data to the cloud. LexVault doesn't.
| Harvey AI | Westlaw CoCounsel | Lexis+ AI | AI:ssociate | LexVault | |
|---|---|---|---|---|---|
| Fully Offline / On-Premise | — | — | — | — | Yes |
| Compliance Audit Trail | — | — | — | — | Yes |
| LLM Judge Verification | — | — | — | — | Yes |
| AI Legal Research | Yes | Add-on | Add-on | Yes | Yes |
| M&A Due Diligence | Cloud only | — | — | — | Yes (on-premise) |
| Document Analysis | Yes | — | Basic | Yes | Yes |
| Multi-Jurisdiction | — | Per product | Per product | — | Yes |
| Cross-Language Search | — | — | — | — | Yes |
| DACH / CEE Statutes | — | Limited | AT only | AT only | US, DE, AT, SK, RO |
| No Cloud / No API Costs | — | — | — | — | Yes |
| Cost / Lawyer / Month | $1,200+ | $225–500 | €115–450 | €39–69 | from €99 |
AI:ssociate is cheaper per seat but cloud-only, Austria-only, and lacks offline capability, audit trail, and multi-jurisdiction support. LexVault costs more per seat but includes unlimited queries with zero ongoing API costs — the AI runs on your hardware.