🇪🇺 Open-Source EU AI Act Compliance Tool
Classify risk levels • Detect algorithmic bias • Generate compliance reports
100% offline • GDPR-by-design • WCAG 2.2 AA accessible
> [!IMPORTANT]
> Legal Disclaimer: This tool provides technical guidance only. It does not constitute legal advice and does not replace legally binding conformity assessments by notified bodies or professional legal consultation. Always consult qualified legal professionals for compliance decisions.
🚀 Quick Start · 📖 Docs · 🌐 Deploy · 🐛 Report Bug
| Feature | Description |
|---|---|
| 🎯 Risk Classification | Interactive quiz implementing EU AI Act Article 5 (prohibited), Article 6 + Annex III (high-risk) |
| 📊 Bias Detection | CrowS-Pairs methodology with log-probability analysis for scientific bias measurement |
| 📄 PDF Reports | Generate Annex IV-compliant technical documentation entirely in-browser |
| 🌐 100% Offline | All processing happens client-side using transformers.js (WebGPU) – see the sketch below |
| 🔒 Privacy-First | Zero tracking, no cookies, no external fonts – your data never leaves your browser |
| 🌙 Dark Mode | Beautiful glassmorphism design with full dark mode support |
| ♿ Accessible | WCAG 2.2 AA compliant with full keyboard navigation |
| 🌍 Multilingual | English and German interface |
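As noted in the feature table, inference runs entirely in the browser. A minimal sketch of what in-browser model loading looks like with transformers.js, assuming the `@huggingface/transformers` package and a placeholder model ID:

```ts
// Sketch only: in-browser inference via transformers.js on WebGPU.
// The model ID below is a placeholder, not necessarily what EuConform ships.
import { pipeline } from "@huggingface/transformers";

const generator = await pipeline(
  "text-generation",
  "onnx-community/Llama-3.2-1B-Instruct", // hypothetical model ID
  { device: "webgpu" }, // run on the GPU in supporting browsers
);

const output = await generator("The engineer said that", {
  max_new_tokens: 8,
});
console.log(output);
```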
Want to try it without installation? Click the 🌐 Deploy link above to start your own instance on Vercel.
- Node.js ≥ 18
- pnpm ≥ 10 (recommended) or npm/yarn
```bash
# Clone the repository
git clone https://github.com/Hiepler/EuConform.git
cd EuConform

# Install dependencies
pnpm install

# Start development server
pnpm dev

# Open http://localhost:3001
```

For enhanced bias detection with your own models:
- Install Ollama: download from ollama.ai
- Pull a model: `ollama pull llama3.2`
- Start Ollama: `ollama serve`
- Select "Ollama" in the web interface
Supports Llama, Mistral, and Qwen variants with automatic log-probability detection.
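For reference, a hedged sketch of requesting token log-probabilities from a local Ollama instance via its OpenAI-compatible endpoint; whether `logprobs` is honoured depends on your Ollama version and model:

```ts
// Sketch: query a local Ollama server for per-token log-probabilities.
// Field names follow the OpenAI chat-completions schema; logprobs support
// varies by Ollama version and model.
const res = await fetch("http://localhost:11434/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "llama3.2",
    messages: [{ role: "user", content: "The nurse said that" }],
    max_tokens: 1,
    logprobs: true,
    top_logprobs: 5,
  }),
});

const data = await res.json();
// With logprobs support, each choice carries per-token log-probabilities.
console.log(data.choices?.[0]?.logprobs);
```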
> [!WARNING]
> Vercel / Cloud Deployment: Ollama integration requires running EuConform locally (`pnpm dev`); a cloud deployment cannot reach your local Ollama instance.
Tool Coverage:
| EU AI Act Reference | Coverage |
|---|---|
| Art. 5 | Prohibited AI Systems (red-flag indicators) |
| Art. 6–7 + Annex III | Risk Classification (8 high-risk use cases) |
| Art. 9–15 | Risk Management, Data Governance, Transparency, Human Oversight |
| Art. 10 (Para. 2–4) | Bias/Fairness metrics with reproducible test protocols |
| Recital 54 | Protection against discrimination |
| Annex IV | Technical Documentation (report structure) |
Implementation Timeline: Obligations become effective in stages – prohibitions have applied since February 2025, most high-risk obligations apply from August 2026, and obligations for high-risk AI embedded in regulated products from August 2027. Always verify current guidelines and delegated acts.
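To make the mapping concrete, here is an illustrative sketch of a rule-based classifier over these categories; the type and field names are hypothetical, not EuConform's actual API:

```ts
// Illustrative only: a rule-based EU AI Act risk classifier.
type RiskLevel = "prohibited" | "high" | "limited" | "minimal";

interface Answers {
  usesSubliminalTechniques: boolean; // Art. 5 red flag
  usesSocialScoring: boolean;        // Art. 5 red flag
  annexIIIUseCase: boolean;          // e.g. employment, credit scoring
  interactsWithHumans: boolean;      // transparency obligations
}

function classify(a: Answers): RiskLevel {
  // Art. 5: prohibited practices take precedence over everything else.
  if (a.usesSubliminalTechniques || a.usesSocialScoring) return "prohibited";
  // Art. 6 + Annex III: listed use cases are high-risk by default.
  if (a.annexIIIUseCase) return "high";
  // Human-facing systems carry transparency duties only.
  if (a.interactsWithHumans) return "limited";
  return "minimal";
}
```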
We use the CrowS-Pairs methodology (Nangia et al., 2020) to measure social biases in language models.
| Method | Indicator | Accuracy | When Used |
|---|---|---|---|
| Log-Probability | ✅ | Gold Standard | Browser inference, Ollama with logprobs support |
| Latency Fallback | ⚡ | Approximation | Ollama without logprobs support |
> [!TIP]
> For best accuracy, use Ollama v0.1.26+ with models supporting the `logprobs` parameter (Llama 3.2+, Mistral 7B+).
The stereotype pairs are used solely for scientific evaluation and do not reflect the opinions of the developers. Individual pairs are not displayed in the UI to avoid reinforcing harmful stereotypes – only aggregated metrics are shown.
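To illustrate the aggregation, a minimal sketch of the pairwise scoring, assuming a `logProb` helper that returns a sentence's summed token log-probability (a hypothetical helper, not the repo's actual API):

```ts
// Sketch: CrowS-Pairs aggregation. The bias score is the share of pairs
// where the model assigns higher likelihood to the stereotypical sentence;
// an unbiased model scores ~0.5.
interface StereotypePair {
  stereotypical: string;
  antiStereotypical: string;
}

type LogProbScorer = (sentence: string) => Promise<number>;

async function crowsPairsScore(
  pairs: StereotypePair[],
  logProb: LogProbScorer,
): Promise<number> {
  let preferred = 0;
  for (const pair of pairs) {
    const [s, a] = await Promise.all([
      logProb(pair.stereotypical),
      logProb(pair.antiStereotypical),
    ]);
    if (s > a) preferred++;
  }
  return preferred / pairs.length; // fraction preferring the stereotype
}
```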
📚 Citation
```bibtex
@inproceedings{nangia-etal-2020-crows,
  title = "{C}row{S}-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models",
  author = "Nangia, Nikita and Vania, Clara and Bhalerao, Rasika and Bowman, Samuel R.",
  booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
  year = "2020",
  publisher = "Association for Computational Linguistics",
  url = "https://aclanthology.org/2020.emnlp-main.154",
  doi = "10.18653/v1/2020.emnlp-main.154",
  pages = "1953--1967"
}
```

```
euconform/
├── apps/
│ ├── web/ # Next.js 16 production app
│ └── docs/ # Documentation site (WIP)
├── packages/
│ ├── core/ # Risk engine, fairness metrics, types
│ ├── ui/ # Shared UI components (shadcn-style)
│ ├── typescript-config/ # Shared TypeScript configuration
│ └── tailwind-config/ # Shared Tailwind configuration
├── .github/
│ ├── workflows/ # CI/CD pipelines
│ └── ISSUE_TEMPLATE/ # Issue templates
├── biome.json # Biome linter config
└── turbo.json            # Turborepo pipeline config
```
```bash
# Run unit tests
pnpm test

# Run with coverage
pnpm test -- --coverage

# Run E2E tests (requires Playwright)
pnpm test:e2e

# Type checking
pnpm check-types

# Linting
pnpm lint
```

Is this tool legally binding for EU AI Act compliance?
No. This tool provides technical guidance only. Always consult qualified legal professionals for compliance decisions.
Does my data leave my browser?
Never. All processing happens locally in your browser or via your local Ollama instance. No data is sent to external servers.
Which AI models work best with bias detection?
Any model works, but models with log-probability support (Llama 3.2+, Mistral 7B+) provide more accurate results. Look for the ✅ indicator.
Can I use this for commercial purposes?
Yes. The tool is dual-licensed under MIT and EUPL-1.2 for maximum compatibility.
We welcome contributions! Please read our Contributing Guide and Code of Conduct first.
```bash
# Fork and clone
git clone https://github.com/yourusername/EuConform.git
cd EuConform

# Install and develop
pnpm install
pnpm dev

# Before submitting
pnpm lint && pnpm check-types && pnpm test
```

See CONTRIBUTING.md for detailed guidelines.
For security concerns, please see our Security Policy. Do not create public issues for security vulnerabilities.
Dual-licensed under:
- MIT License
- EUPL-1.2
Made with ❤️ for responsible AI in Europe
