Appixen

Indian Constitution LLM

Fine-tune Mistral-7B on the Indian Constitution and related legal documents. Create an interactive, AI-powered Q&A system for legal research and constitutional queries.

Core Features

Everything you need to build and deploy a legal Q&A system

📥

Data Ingestion

Loads Indian Constitution articles from JSON files, Supreme Court judgments from Kaggle, and processes user-uploaded PDF documents.
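
The JSON side of ingestion can be sketched as follows. The prompt template and the `instruction`/`response` field names match the dataset description above, but the template wording and function name are illustrative assumptions, not the project's exact code.

```python
import json

# Alpaca-style prompt template (assumed formatting, not the project's exact one).
PROMPT_TEMPLATE = "### Instruction:\n{instruction}\n\n### Response:\n{response}"

def load_constitution_records(path):
    """Load instruction/response pairs from a JSON file shaped like INDIA.json
    and render each as a single training string."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return [
        PROMPT_TEMPLATE.format(instruction=r["instruction"], response=r["response"])
        for r in records
    ]
```

The Kaggle judgments and extracted PDF text would be rendered through the same template so all three sources share one training format.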

Efficient Fine-tuning

Utilizes unsloth for fast and memory-efficient LoRA fine-tuning of Mistral-7B-Instruct-v0.3-bnb-4bit.
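
A minimal sketch of the unsloth setup. The base model name, r, and lora_alpha come from this page; `max_seq_length`, the target modules (the usual Mistral attention/MLP projections), and dropout are assumed values, not confirmed project settings.

```python
from unsloth import FastLanguageModel

# Load the 4-bit quantized base model (max_seq_length is an assumed value).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; target_modules are the standard Mistral projection
# layers, assumed here rather than taken from the project's code.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```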

🤗

Hugging Face Hub Integration

Seamlessly uploads the fine-tuned model and tokenizer to your specified Hugging Face repository.
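
With a fine-tuned `model` and `tokenizer` in hand (from the training step), the upload is a pair of `push_to_hub` calls. The repository id and token placeholder below are illustrative, not the project's actual values.

```python
# Hypothetical repo id; authenticate via `huggingface-cli login` or a token.
REPO_ID = "your-username/indian-constitution-mistral"

model.push_to_hub(REPO_ID)      # uploads the LoRA adapter weights
tokenizer.push_to_hub(REPO_ID)  # uploads the tokenizer files alongside them
```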

📊

Model Evaluation

Includes a basic evaluation script with predefined test questions to assess model performance.
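
The evaluation idea can be sketched with a stand-in generate function: ask each predefined question and check the answer for expected keywords. The questions and the keyword-overlap metric below are illustrative assumptions, not the project's actual script.

```python
# Hypothetical test set: question plus keywords a good answer should mention.
TEST_CASES = [
    ("Which article guarantees the right to equality?", ["article 14", "equality"]),
    ("What does Article 21 protect?", ["life", "personal liberty"]),
]

def keyword_score(answer, keywords):
    """Fraction of expected keywords present in the model's answer."""
    answer = answer.lower()
    return sum(1 for kw in keywords if kw in answer) / len(keywords)

def evaluate(generate):
    """Average keyword score over the test set; `generate` maps a question
    string to the model's answer string."""
    scores = [keyword_score(generate(q), kws) for q, kws in TEST_CASES]
    return sum(scores) / len(scores)
```

In the real script, `generate` would wrap the fine-tuned model's generation call using the same prompt format as training.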

🌐

Interactive Interface

Deploys a Gradio web interface for real-time question-answering with the fine-tuned model.
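
A minimal Gradio wiring might look like this; the labels and the placeholder `answer` function are assumptions, with the real app calling the fine-tuned model's generation inside `answer`.

```python
import gradio as gr

def answer(question):
    # Placeholder: the real app wraps `question` in the training prompt
    # format and runs the fine-tuned model's generate() here.
    return "Model answer goes here."

demo = gr.Interface(
    fn=answer,
    inputs=gr.Textbox(label="Ask about the Indian Constitution"),
    outputs=gr.Textbox(label="Answer"),
    title="Indian Constitution Q&A",
)
demo.launch()
```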

Dataset Sources

Training data aggregated from multiple legal data sources

📄

INDIA.json

Structured JSON data with instruction and response fields for constitutional articles.
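
One record in that shape might look like the following; the example is paraphrased for illustration, not quoted from the dataset.

```json
[
  {
    "instruction": "What does Article 21 of the Indian Constitution provide?",
    "response": "No person shall be deprived of his life or personal liberty except according to procedure established by law."
  }
]
```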

⚖️

Kaggle Legal Judgments

The top 100 entries from a Kaggle Supreme Court judgments dataset (1950–2024), formatted as summarization tasks.
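
Casting a judgment row as a summarization task can be sketched like this; the field names and instruction wording are assumptions about the formatting step, not the project's exact code.

```python
def to_summarization_example(judgment_text, summary):
    """Format one Supreme Court judgment row as an instruction-style
    summarization example (field names and wording are illustrative)."""
    return {
        "instruction": "Summarize the following Supreme Court judgment:\n"
                       + judgment_text,
        "response": summary,
    }
```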

📑

PDF Processing

Text extracted from uploaded PDFs, formatted as general content descriptions for training.
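
Extraction itself would typically use a PDF library such as pypdf; the helper below is a pure-Python sketch of one plausible next step, splitting the extracted text into paragraph-aligned chunks sized for training examples. The size limit and strategy are assumptions.

```python
def chunk_text(text, max_chars=1000):
    """Split extracted PDF text into chunks of at most ~max_chars characters,
    breaking only at blank-line paragraph boundaries."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        para = para.strip()
        if not para:
            continue
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = para if not current else current + "\n\n" + para
    if current:
        chunks.append(current)
    return chunks
```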

Model Details

Technical details of the fine-tuned LLM

🧠

Base Model

unsloth/mistral-7b-instruct-v0.3-bnb-4bit (Mistral-7B, 4-bit quantized)

🔧

Fine-tuning Method

LoRA (Low-Rank Adaptation) applied via unsloth for efficiency, with r=16 and lora_alpha=16.
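
Why this is cheap: a LoRA adapter on a linear layer of shape (d_in, d_out) adds only r·(d_in + d_out) trainable parameters (an A matrix of d_in×r plus a B matrix of r×d_out). For Mistral-7B's 4096-dimensional hidden size, a 4096×4096 projection gains just 16·(4096 + 4096) = 131,072 parameters versus the layer's ~16.8M frozen weights:

```python
def lora_params(d_in, d_out, r=16):
    """Trainable parameters added by one LoRA adapter:
    A is (d_in x r), B is (r x d_out)."""
    return r * d_in + r * d_out

# e.g. a 4096x4096 attention projection at r=16 adds 131,072 parameters,
# under 1% of the frozen layer's 4096 * 4096 = 16,777,216 weights.
```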

📈

Training Arguments

per_device_train_batch_size=2, gradient_accumulation_steps=4, max_steps=50, learning_rate=2e-4.
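
As a Hugging Face `TrainingArguments` fragment, using the values above (the effective batch size is 2 × 4 = 8); `output_dir`, precision, and logging settings are assumptions:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="outputs",            # assumed
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,   # effective batch size: 2 * 4 = 8
    max_steps=50,
    learning_rate=2e-4,
    fp16=True,                       # assumed precision setting
    logging_steps=10,                # assumed
)
```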

Frequently Asked Questions

Learn more about the Indian Constitution LLM project

Still have questions?

Contact Us

Ready to Build Your Legal AI?

Start fine-tuning your own LLM on the Indian Constitution and legal documents today