Fully Offline · Full Audit Trail · No Cloud APIs

AI Legal Assistant
That Never Leaves
Your Network

LexVault is an AI that answers legal questions, analyzes contracts, and runs M&A due diligence — grounded in real law, verified before it reaches you, with a full audit trail. Runs entirely on your hardware. No data sent to OpenAI, ever.

LLM Judge
Verified Responses
450+
Languages
Any
Jurisdiction on Request
0
Cloud Dependencies
Why Offline

Your Data Stays Yours

Every component runs on-premise. Disconnect the ethernet cable and it still works.

🔒

Zero Cloud APIs

No OpenAI, no Azure, no AWS. The LLM, embeddings, reranker, and translation all run locally on your hardware.

💻

On Your Hardware

Deployed on an NVIDIA DGX Spark that fits on a desk. No server room, no IT department. All models run locally — 1,150 tokens/second. Always on.

🌐

Cross-Language Search

Ask in English, search in German or Slovak statute text. The LLM generates search queries in the target language — zero extra latency, precise legal terminology.

📊

Monthly Updates

New case law and statute updates delivered monthly via secure USB drive. Plug in, wait 30 seconds, done. Your legal database stays current without ever connecting to the internet.

How It Works

From Question to Verified Answer

Five steps, fully on-premise. First results stream within seconds.

1

Translate

If the target jurisdiction uses a different language, the system translates the query. "tenant rights" becomes "Rechte der Mieter" for Austrian law.

Neural Translation · 450+ Languages · ~350ms
2

Hybrid Search

Searches statutes and case law using semantic meaning and exact legal terms simultaneously. Results merged and ranked by relevance.

Hybrid Vector Search · Semantic + Exact Match
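As an illustration of what merging the two rankings can look like: a common technique is reciprocal rank fusion (RRF), where a document scores higher the better it ranks in both lists. The sketch below is a generic example, not LexVault's actual merge logic; all function names and the sample statute IDs are illustrative.

```python
# Reciprocal rank fusion (RRF): one common way to merge a semantic
# (vector) ranking with an exact-term (keyword) ranking.
# Names and sample statute IDs are illustrative, not LexVault internals.

def rrf_merge(semantic: list[str], keyword: list[str], k: int = 60) -> list[str]:
    """Merge two ranked lists of document IDs into one relevance ranking."""
    scores: dict[str, float] = {}
    for ranking in (semantic, keyword):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# A document ranked well in BOTH lists rises to the top:
sem = ["ABGB §1096", "MRG §3", "ABGB §1117"]
kw = ["MRG §3", "MRG §16", "ABGB §1096"]
print(rrf_merge(sem, kw)[:2])  # → ['MRG §3', 'ABGB §1096']
```

A document that appears in only one list still gets a score, so purely semantic or purely exact-match hits are never discarded outright; they simply rank below documents both retrievers agree on.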
3

Rerank

A cross-encoder reads each query-document pair and scores relevance. Only results above threshold pass. Noise filtered out.

Cross-Encoder Reranker · GPU-accelerated
4

Generate Answer

The AI reads the relevant law and writes a grounded answer citing specific statutes and decisions. Temperature zero — facts only, no creativity.

Open-Source LLM · GPU-Accelerated · Streaming
5

Verify

An independent LLM judge reviews the full response against sources at temperature zero. Unsupported claims are automatically rewritten grounded in source text. Claims that fail re-verification are removed.

LLM Judge · temp 0 · regenerate-or-remove
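The regenerate-or-remove logic of step 5 can be sketched as a simple loop. This is a reconstruction from the description above, not LexVault's actual code; `judge` and `rewrite_grounded` are hypothetical stand-ins for local LLM calls (both at temperature zero in the system described).

```python
# Sketch of the step-5 "regenerate-or-remove" verification loop.
# `judge` and `rewrite_grounded` are hypothetical stand-ins for local
# LLM calls; these names are illustrative, not LexVault's API.

def verify(claims, sources, judge, rewrite_grounded):
    """Keep supported claims, rewrite unsupported ones once, drop the rest."""
    kept = []
    for claim in claims:
        if judge(claim, sources):              # supported as written
            kept.append(claim)
        else:
            rewritten = rewrite_grounded(claim, sources)
            if judge(rewritten, sources):      # re-verification passes
                kept.append(rewritten)
            # claims that fail re-verification are removed entirely
    return kept
```

Each claim gets at most one rewrite attempt, so the loop always terminates and nothing unverified can survive into the final answer.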
What You Get

Everything a Legal Team Needs

Research, analyze, run due diligence, and export — all in one platform, all on your hardware.

🔍

AI Legal Research

Ask questions in any language. The AI searches statutes and case law, cites specific sections, and verifies every claim before responding. Upload PDF, DOCX, or TXT files for context.

📋

M&A Due Diligence

Upload an entire data room. The AI classifies documents, extracts clauses, flags risks across 4 tiers, cross-references warranties against disclosures, and verifies every finding against applicable statutes. Multi-party, multi-jurisdiction.

📋

Document Analysis

Upload a contract and get a clause-by-clause risk assessment. Each clause rated by importance with specific statute references. Large documents automatically chunked and analyzed.

📤

Export to DOCX & PDF

Export research conversations, drafted contracts, and document analyses as professionally formatted DOCX or PDF files. Ready for client delivery.

👥

Team Management

Each lawyer gets their own login and conversation history. Admin dashboard with invite system, role management, usage stats, and seat control. Like ChatGPT Team, but on-premise.

🔎

Deep Investigation

Run parallel multi-source investigations across jurisdictions. Progressive streaming results. Automatic document chunking for large files. Cancel anytime.

Compliance

Every AI Answer Has a Receipt

When regulators ask "how did the AI reach that conclusion?", you have the answer. No other legal AI product offers this.

📜

Source Traceability

Every claim the AI makes is linked to the specific statute section or court decision it came from. Full source text attached — not just a citation number.

⚖️

LLM Judge Verification

Before any response reaches the user, an independent LLM judge reviews the full answer against its sources at temperature zero. Unsupported claims are automatically rewritten grounded in source text. Claims that fail re-verification are removed entirely.

📋

Permanent Audit Log

Every interaction recorded: model used, response time, sources searched, confidence score, verification result per claim (Verified, Flagged, Rewritten, Removed). Searchable, exportable, always available.

Sample audit entry: “The AI cited GmbH-Gesetz § 6 — Judge verdict: Verified — full source text: 1,847 chars — model: open-source LLM (temp 0) — response time: 1.2s — logged 2026-02-25 14:32:07”
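The sample entry above suggests a fixed set of fields per interaction. As an illustration only, here is one plausible shape for such a record as a structured object; the class and field names are hypothetical, not LexVault's actual schema.

```python
# Hypothetical shape of one audit-log entry, mirroring the sample above.
# Class and field names are illustrative, not LexVault's actual schema.
from dataclasses import dataclass, asdict
import datetime

@dataclass(frozen=True)
class AuditEntry:
    citation: str            # statute section or decision the AI cited
    verdict: str             # Verified | Flagged | Rewritten | Removed
    source_chars: int        # length of the attached full source text
    model: str
    temperature: float
    response_time_s: float
    logged_at: datetime.datetime

entry = AuditEntry(
    citation="GmbH-Gesetz § 6",
    verdict="Verified",
    source_chars=1847,
    model="open-source LLM",
    temperature=0.0,
    response_time_s=1.2,
    logged_at=datetime.datetime(2026, 2, 25, 14, 32, 7),
)
# asdict(entry) yields a plain dict, ready to index, search, or export
```

A frozen dataclass is a natural fit here: audit records should be immutable once written, and `asdict` gives the searchable/exportable representation the feature list describes.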

Technology

Tech Stack

Open-source models, running entirely on your hardware, zero cloud services. The end-to-end pipeline completes in under 15 seconds, with responses streaming from ~3 s.

Layer | Technology | Details
LLM | Open-Source Language Model | Streaming output, temperature zero for legal research
Embeddings | Multilingual Embeddings | Semantic search across 100+ languages
Reranker | Cross-Encoder Reranker | GPU-accelerated relevance scoring (~1 s)
Translation | LLM-Native Translation | Search queries generated in target language by the LLM — zero extra latency
Verification | LLM Judge (temp 0) | Reviews full response against sources, regenerates or removes unsupported claims
Search | Hybrid Vector Search | Semantic + exact term matching, GPU-accelerated
Hardware | NVIDIA DGX Spark | 128 GB unified memory, 1,150 tok/s concurrent. Clusterable for larger firms.
Deployment | Pre-Installed Appliance | Plug in and go, no configuration needed
Competition

What No One Else Can Offer

Every legal AI runs in the cloud. Every one is a black box. LexVault is neither.

Feature | Westlaw / CoCounsel | LexisNexis | Harvey AI | LexVault
Offline / Data Never Leaves Your Building | — | — | — | Yes
Compliance Audit Trail | — | — | — | Yes
LLM Judge Verified Responses | — | — | — | Yes
AI Legal Assistant | Add-on | Add-on | Yes | Yes
M&A Due Diligence | — | — | — | Yes (on-premise)
Document Analysis | Basic | — | Yes | Yes
Multi-Jurisdiction | Per product | Per product | — | Yes
Cross-Language Search | — | — | — | Yes
Cost / Lawyer | $225–500 | €115–450 | $1,200+ | from €99

5-Year Cost: 60-Lawyer Firm

Same features. Same AI. Fraction of the price. No API costs, no overages, no cloud dependency.

Solution | Monthly | 5-Year Total | You Save
Harvey AI ($1,200/seat) | ~€64,800 | ~€3,888,000 | €3,527,100
Lexis+ AI Professional ($494) | ~€26,700 | ~€1,602,000 | €1,241,100
CoCounsel All Access ($500) | ~€27,000 | ~€1,620,000 | €1,259,100
CoCounsel Core ($225) | ~€12,120 | ~€727,200 | €366,300
LexVault | €5,940 | €360,900 | —

Based on published per-seat rates (USD at ~€0.90). Harvey: $1,200/seat/month (Sacra, 20-seat min). CoCounsel: published by Thomson Reuters. Lexis+ AI: published tiers. All competitors charge additional API/token costs. LexVault: unlimited queries, zero API costs — the LLM runs on your hardware.
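The headline figures follow from straightforward arithmetic. The sketch below reproduces the Harvey AI and LexVault rows for a 60-lawyer firm over 60 monthly bills; it assumes, as the published totals imply, that LexVault's 5-year figure includes the one-time setup fee.

```python
# Reproducing the Harvey AI and LexVault rows above: 60 lawyers,
# 5 years = 60 monthly bills, USD converted at €0.90 (integer math).
# Assumes the LexVault 5-year total includes the one-time setup fee,
# which is what makes the published figures add up.
LAWYERS, MONTHS = 60, 60

harvey_monthly = 1_200 * LAWYERS * 9 // 10                 # €64,800
harvey_5y = harvey_monthly * MONTHS                        # €3,888,000

lexvault_monthly = 99 * LAWYERS                            # €5,940 (1 jurisdiction)
lexvault_setup = 1_500 + 50 * LAWYERS                      # €4,500 one-time
lexvault_5y = lexvault_monthly * MONTHS + lexvault_setup   # €360,900

savings = harvey_5y - lexvault_5y                          # €3,527,100
```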

Pricing

Simple, Transparent

€99 per Lawyer per Jurisdiction. The More You Add, the Less Each Costs.

Pick the jurisdictions you need. Unlimited AI queries, document analysis, and contract drafting — all included.
Setup (one-time)
€1,500 base
+ €50 per lawyer. Includes perpetual software license, installation, and training.
Monthly
€99 /lawyer/jurisdiction
15% volume discount per additional jurisdiction, applied to your entire bill. Capped at 45% discount.

Volume Discounts

Each jurisdiction you add reduces the rate on your entire bill — not just the new one.

Jurisdictions | Discount | Rate per Jurisdiction | Total per Lawyer
1 | — | €99 | €99 /month
2 | 15% | €84 | €168 /month
3 | 30% | €69 | €208 /month
4+ | 45% (cap) | €54 | €218+ /month

Examples

Small Firm
10 lawyers · Austria
€990 /month
10 × €99 × 1 jur
€99/lawyer · Setup: €2,000
Mid-Size
30 lawyers · AT + EU
€5,049 /month
30 × €84 × 2 jur (15% discount)
€168/lawyer · Setup: €3,000
Cross-Border
50 lawyers · AT + SK + EU
€10,395 /month
50 × €69 × 3 jur (30% discount)
€208/lawyer · Setup: €4,000
Enterprise
150 lawyers · 5 jurisdictions
€40,838 /month
150 × €54 × 5 jur (45% cap)
€272/lawyer · Setup: €9,000
Hardware

One Box. Plug In. Done.

LexVault ships pre-installed on NVIDIA hardware. Fits on a desk, no server room required. Recommended: NVIDIA DGX Spark — 128 GB unified memory, 1,150 tok/s concurrent inference. Multiple units can be clustered for larger deployments.

Small – Mid Firm
NVIDIA DGX Spark · 128 GB
from €3,600
Up to 60 lawyers
~30 concurrent users · 1,150 tok/s
Large Firm
2× DGX Spark Cluster · 256 GB
from €7,200
Up to 120 lawyers
~60 concurrent users · 2,300 tok/s
Enterprise
DGX Spark Cluster · 512 GB+
from €14,400
200+ lawyers
Frontier-class models · ConnectX-7 interconnect

Recommended hardware: NVIDIA DGX Spark (Grace Blackwell GB10, 128 GB LPDDR5X). 1,150 tokens/second concurrent via vLLM. Clusterable via ConnectX-7 400G for larger deployments. All models run locally — no cloud, no external API, no driver complexity. Hardware purchased separately at cost.
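The tier sizing reduces to simple throughput arithmetic: one unit serves about 30 concurrent users at 1,150 tok/s (roughly 38 tok/s per active user), and clustering scales linearly. A back-of-the-envelope sketch, illustrative only:

```python
# Capacity arithmetic implied by the tiers above: ~30 concurrent users
# per DGX Spark at 1,150 tok/s, linear scaling via clustering.
# Illustrative only; actual sizing depends on model and workload.

TOKS_PER_UNIT = 1_150
USERS_PER_UNIT = 30

def units_needed(concurrent_users: int) -> int:
    # ceiling division without importing math
    return -(-concurrent_users // USERS_PER_UNIT)

print(TOKS_PER_UNIT // USERS_PER_UNIT)  # → 38 tok/s per concurrent user
print(units_needed(60))                 # → 2 (Large Firm tier, 2,300 tok/s)
```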

Coverage

Jurisdictions

Pick the jurisdictions you need. Our multilingual pipeline supports 450+ languages — if a country publishes its law, we can add it.

Jurisdiction | Coverage | Status
US Federal | Statutes (USC) + Regulations (CFR) | Available
US States | State statutes & regulations (per state) | On request
Austria | Core statutes & codes | Available
Slovakia | Core statutes & codes | Available
EU | Regulations, Directives & CJEU decisions | On request
Germany | Federal statutes & codes | Available
UK | Primary legislation & case law | On request
China | National laws & State Council regulations | On request
Others | Any jurisdiction with publicly available law | On request