# 🚀 Quick Start Guide

## Prerequisites

- Python 3.10 or higher
- HuggingFace account (for model access)
- 4GB+ RAM recommended
- GPU optional (will use CPU if not available)

## Installation

### Option 1: Using Docker (Recommended)

```bash
# Build the Docker image
docker build -t tunisian-license-plate-ocr .

# Run the container
docker run -p 7860:7860 -p 8000:8000 tunisian-license-plate-ocr
```

**Access the application:**

- Gradio UI: http://localhost:7860
- FastAPI: http://localhost:8000/docs

### Option 2: Local Installation

```bash
# Install dependencies
pip install -r requirements.txt

# Run the application (both FastAPI and Gradio)
python run.py
```

**Or run separately:**

```bash
# Run Gradio only
python -m app.gradio_app

# Run FastAPI only
python -m app.main
```

## Using the Gradio Interface

### Simple View

1. Open http://localhost:7860
2. Click on the "Simple View" tab
3. Upload an image of a vehicle with a Tunisian license plate
4. Click "🚀 Process Image"
5. View the extracted license plate number and confidence scores

### Detailed View

1. Click on the "Detailed View" tab
2. Upload an image
3. Click "🚀 Process Image"
4. See all intermediate processing steps:
   - Original image with detected car
   - Plate detection overlay
   - Car crop highlighting the plate
   - Word detection highlighted
   - Masked plate ready for OCR

## Using the API

### Example: Detect Car

```bash
curl -X POST "http://localhost:8000/detect-car" \
  -H "Content-Type: multipart/form-data" \
  -F "file=@path/to/your/image.jpg"
```

### Example: Complete Pipeline

```bash
curl -X POST "http://localhost:8000/process" \
  -H "Content-Type: multipart/form-data" \
  -F "file=@path/to/your/image.jpg"
```

**Response:**

```json
{
  "success": true,
  "text": "12345TU6789",
  "confidence": {
    "plate_detection": 0.95,
    "word_detection": 0.88,
    "ocr": 0.92,
    "overall": 0.92
  }
}
```

### Example: Detect Plate Only

```bash
curl -X POST "http://localhost:8000/detect-plate" \
  -H "Content-Type: multipart/form-data" \
  -F "file=@path/to/your/image.jpg"
```

### Example: Using Python Requests

```python
import requests

# Complete pipeline
with open('vehicle_image.jpg', 'rb') as f:
    response = requests.post(
        'http://localhost:8000/process',
        files={'file': f}
    )

result = response.json()
print(f"License Plate: {result['text']}")
print(f"Confidence: {result['confidence']['overall']:.2%}")
```

## Testing with Sample Images

Sample images are available in the `samples/` directory:

```bash
# Test with a sample image
curl -X POST "http://localhost:8000/process" \
  -F "file=@samples/0.jpg"
```

## Troubleshooting

### Models not loading

- Ensure your HuggingFace token is set in `.env`
- Check your internet connection (models download on first run)
- Verify the token has access to the required models

### Out of memory

- Reduce image size before processing
- Use CPU instead of GPU if CUDA memory is insufficient
- Close other applications

### Import errors

- Reinstall dependencies: `pip install -r requirements.txt --upgrade`
- Check Python version: `python --version` (should be 3.10+)

## Environment Variables

Create a `.env` file in the root directory:

```env
HUGGINGFACE_TOKEN=your_token_here
```

## API Documentation

Full API documentation is available at:

- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc

## Performance Tips

1. **First run is slower**: Models download on first use
2. **GPU acceleration**: Install CUDA-enabled PyTorch for faster inference
3. **Batch processing**: Use the API endpoints for processing multiple images (see the batch sketch after this list)
4. **Image size**: Resize large images (>2000px) for faster processing (see the resizing sketch after this list)
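For batch processing, a minimal sketch of what tip 3 describes: loop over a folder of images and send each one to the `/process` endpoint shown earlier. It assumes the server is running locally on port 8000 and that your images live in a folder such as `samples/`; the response fields follow the sample response above.

```python
import glob

import requests

API_URL = "http://localhost:8000/process"  # complete-pipeline endpoint from the API examples above


def recognize_plates(image_paths):
    """Send each image to /process and collect (text, overall confidence) per file."""
    results = {}
    for path in image_paths:
        with open(path, "rb") as f:
            response = requests.post(API_URL, files={"file": f})
        data = response.json()
        # Field names follow the sample response shown in "Example: Complete Pipeline"
        if data.get("success"):
            results[path] = (data["text"], data["confidence"]["overall"])
        else:
            results[path] = (None, 0.0)
    return results


if __name__ == "__main__":
    # Adjust the glob pattern to wherever your images are stored
    for path, (text, conf) in recognize_plates(sorted(glob.glob("samples/*.jpg"))).items():
        print(f"{path}: {text} ({conf:.2%})")
```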
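For tip 4, a minimal resizing sketch using Pillow (Pillow is not listed in this guide, so treat it as an assumption about your environment; install it with `pip install Pillow` if needed). It downscales an image so its longest side stays under the 2000px guideline before you upload it.

```python
from PIL import Image  # Pillow; assumed available in your environment

MAX_SIDE = 2000  # matches the >2000px guideline above


def shrink_if_large(src_path: str, dst_path: str) -> None:
    """Downscale an image so its longest side is at most MAX_SIDE pixels, preserving aspect ratio."""
    img = Image.open(src_path)
    if max(img.size) > MAX_SIDE:
        img.thumbnail((MAX_SIDE, MAX_SIDE))  # resizes in place, keeps aspect ratio
    img.save(dst_path)


# Example: shrink a large photo before sending it to /process
shrink_if_large("vehicle_image.jpg", "vehicle_image_small.jpg")
```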
## Support

For issues or questions:

1. Check the main [README.md](README.md)
2. Review the [API documentation](http://localhost:8000/docs)
3. Open an issue on GitHub

---

Happy License Plate Recognition! 🚗