PDF Document Layout Analysis

A Docker-powered microservice for intelligent PDF document layout analysis, OCR, and content extraction


πŸš€ Overview

This project provides a powerful and flexible PDF analysis microservice built with Clean Architecture principles. The service enables OCR, segmentation, and classification of different parts of PDF pages, identifying elements such as texts, titles, pictures, tables, formulas, and more. Additionally, it determines the correct reading order of these identified elements and can convert PDFs to various formats including Markdown and HTML.

✨ Key Features

  • πŸ” Advanced PDF Layout Analysis - Segment and classify PDF content with high accuracy
  • πŸ–ΌοΈ Visual & Fast Models - Choose between VGT (Vision Grid Transformer) for accuracy or LightGBM for speed
  • πŸ“ Multi-format Output - Export to JSON, Markdown, HTML, and visualize PDF segmentations
  • 🌐 OCR Support - OCR in 150+ languages via Tesseract
  • πŸ“Š Table & Formula Extraction - Extract tables as HTML and formulas as LaTeX
  • πŸ—οΈ Clean Architecture - Modular, testable, and maintainable codebase
  • 🐳 Docker-Ready - Easy deployment with GPU support
  • ⚑ RESTful API - Comprehensive API with 10+ endpoints

πŸš€ Quick Start

1. Start the Service

With GPU support (recommended for better performance):

make start

Without GPU support:

make start_no_gpu

The service will be available at http://localhost:5060

Check service status:

curl http://localhost:5060/info

2. Basic PDF Analysis

Analyze a PDF document (VGT model - high accuracy):

curl -X POST -F 'file=@/path/to/your/document.pdf' http://localhost:5060

Fast analysis (LightGBM models - faster processing):

curl -X POST -F 'file=@/path/to/your/document.pdf' -F "fast=true" http://localhost:5060

3. Stop the Service

make stop

πŸ’‘ Tip: Replace /path/to/your/document.pdf with the actual path to your PDF file. The service will return a JSON response with segmented content and metadata.
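
The same call can be made from Python. A minimal sketch using the requests library, assuming the service is running locally on the default port:

# Minimal Python equivalent of the curl calls above.
# Assumes the service is reachable at http://localhost:5060.
import requests

with open("/path/to/your/document.pdf", "rb") as pdf:
    response = requests.post(
        "http://localhost:5060",
        files={"file": pdf},
        data={"fast": "true"},  # omit this field to use the default VGT model
    )
response.raise_for_status()
segments = response.json()  # list of segment dictionaries
print(f"Detected {len(segments)} segments")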

πŸ“‹ Requirements

System Requirements

  • RAM: 2 GB minimum
  • GPU Memory: 5 GB (optional; the service falls back to CPU if no GPU is available)
  • Disk Space: 10 GB for models and dependencies
  • CPU: Multi-core recommended for better performance

Docker Requirements

  • Docker Engine 20.10+
  • Docker Compose 2.0+

πŸ“š API Reference

The service provides a comprehensive RESTful API with the following endpoints:

Core Analysis Endpoints

| Endpoint | Method | Description | Parameters |
| --- | --- | --- | --- |
| / | POST | Analyze PDF layout and extract segments | file, fast, ocr_tables |
| /save_xml/{filename} | POST | Analyze PDF and save XML output | file, xml_file_name, fast |
| /get_xml/{filename} | GET | Retrieve saved XML analysis | xml_file_name |

Content Extraction Endpoints

| Endpoint | Method | Description | Parameters |
| --- | --- | --- | --- |
| /text | POST | Extract text by content types | file, fast, types |
| /toc | POST | Extract table of contents | file, fast |
| /toc_legacy_uwazi_compatible | POST | Extract TOC (Uwazi compatible) | file |

Format Conversion Endpoints

| Endpoint | Method | Description | Parameters |
| --- | --- | --- | --- |
| /markdown | POST | Convert PDF to Markdown (includes segmentation data in zip) | file, fast, extract_toc, dpi, output_file |
| /html | POST | Convert PDF to HTML (includes segmentation data in zip) | file, fast, extract_toc, dpi, output_file |
| /visualize | POST | Visualize segmentation results on the PDF | file, fast |

OCR & Utility Endpoints

| Endpoint | Method | Description | Parameters |
| --- | --- | --- | --- |
| /ocr | POST | Apply OCR to PDF | file, language |
| /info | GET | Get service information | - |
| / | GET | Health check and system info | - |
| /error | GET | Test error handling | - |

Common Parameters

  • file: PDF file to process (multipart/form-data)
  • fast: Use LightGBM models instead of VGT (boolean, default: false)
  • ocr_tables: Apply OCR to table regions (boolean, default: false)
  • language: OCR language code (string, default: "en")
  • types: Comma-separated content types to extract (string, default: "all")
  • extract_toc: Include table of contents at the beginning of the output (boolean, default: false)
  • dpi: Image resolution for conversion (integer, default: 120)
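
All of these are sent as multipart form fields alongside the file. A sketch in Python combining several parameters in one /text request (parameter names as listed above; values are illustrative):

# Combining common parameters in a single request.
import requests

with open("document.pdf", "rb") as pdf:
    response = requests.post(
        "http://localhost:5060/text",
        files={"file": pdf},
        data={"fast": "true", "types": "title,text,table"},
    )
print(response.text)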

πŸ’‘ Usage Examples

Basic PDF Analysis

Standard analysis with VGT model:

curl -X POST \
  -F '[email protected]' \
  http://localhost:5060

Fast analysis with LightGBM models:

curl -X POST \
  -F '[email protected]' \
  -F 'fast=true' \
  http://localhost:5060

Analysis with table OCR:

curl -X POST \
  -F '[email protected]' \
  -F 'ocr_tables=true' \
  http://localhost:5060

Text Extraction

Extract all text:

curl -X POST \
  -F '[email protected]' \
  -F 'types=all' \
  http://localhost:5060/text

Extract specific content types:

curl -X POST \
  -F '[email protected]' \
  -F 'types=title,text,table' \
  http://localhost:5060/text

Format Conversion

Convert to Markdown:

curl -X POST http://localhost:5060/markdown \
  -F '[email protected]' \
  -F 'extract_toc=true' \
  -F 'output_file=document.md' \
  --output 'document.zip'

Convert to HTML:

curl -X POST http://localhost:5060/html \
  -F '[email protected]' \
  -F 'extract_toc=true' \
  -F 'output_file=document.html' \
  --output 'document.zip'

πŸ“‹ Segmentation Data: Format conversion endpoints automatically include detailed segmentation data in the zip output. The resulting zip file contains a {filename}_segmentation.json file with information about each detected document segment including:

  • Coordinates: left, top, width, height
  • Page information: page_number, page_width, page_height
  • Content: text content and segment type (e.g., "Title", "Text", "Table", "Picture")
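
A small sketch of reading that bundle in Python. The JSON file name is assumed here to follow the {filename}_segmentation.json pattern using the uploaded file's stem, e.g. document_segmentation.json for document.pdf:

# Request a Markdown conversion and read the bundled segmentation data.
import io
import json
import zipfile

import requests

with open("document.pdf", "rb") as pdf:
    response = requests.post(
        "http://localhost:5060/markdown",
        files={"file": pdf},
        data={"output_file": "document.md"},
    )

with zipfile.ZipFile(io.BytesIO(response.content)) as bundle:
    # Assumed name, following the {filename}_segmentation.json pattern.
    segments = json.load(bundle.open("document_segmentation.json"))

for segment in segments:
    print(segment["page_number"], segment["type"], segment["text"][:40])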

OCR Processing

OCR in English:

curl -X POST \
  -F 'file=@scanned_document.pdf' \
  -F 'language=en' \
  http://localhost:5060/ocr \
  --output ocr_processed.pdf

OCR in other languages:

# French
curl -X POST \
  -F 'file=@document_french.pdf' \
  -F 'language=fr' \
  http://localhost:5060/ocr \
  --output ocr_french.pdf

# Spanish
curl -X POST \
  -F 'file=@document_spanish.pdf' \
  -F 'language=es' \
  http://localhost:5060/ocr \
  --output ocr_spanish.pdf

Visualization

Generate visualization PDF:

curl -X POST \
  -F '[email protected]' \
  http://localhost:5060/visualize \
  --output visualization.pdf

Table of Contents Extraction

Extract structured TOC:

curl -X POST \
  -F '[email protected]' \
  http://localhost:5060/toc

XML Storage and Retrieval

Analyze and save XML:

curl -X POST \
  -F '[email protected]' \
  http://localhost:5060/save_xml/my_analysis

Retrieve saved XML:

curl http://localhost:5060/get_xml/my_analysis.xml

Service Information

Get service info and supported languages:

curl http://localhost:5060/info

Health check:

curl http://localhost:5060/

Response Format

Most endpoints return JSON with segment information:

[
  {
    "left": 72.0,
    "top": 84.0,
    "width": 451.2,
    "height": 23.04,
    "page_number": 1,
    "page_width": 595.32,
    "page_height": 841.92,
    "text": "Document Title",
    "type": "Title"
  },
  {
    "left": 72.0,
    "top": 120.0,
    "width": 451.2,
    "height": 200.0,
    "page_number": 1,
    "page_width": 595.32,
    "page_height": 841.92,
    "text": "This is the main text content...",
    "type": "Text"
  }
]
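
One way to work with this response in Python is to load each segment into a small typed structure. A sketch, assuming the response contains exactly the fields shown above:

# Load analysis results into a typed structure for easier filtering.
from dataclasses import dataclass

import requests

@dataclass
class SegmentBox:
    # Field names mirror the JSON keys shown above.
    left: float
    top: float
    width: float
    height: float
    page_number: int
    page_width: float
    page_height: float
    text: str
    type: str

with open("document.pdf", "rb") as pdf:
    raw_segments = requests.post("http://localhost:5060", files={"file": pdf}).json()

segments = [SegmentBox(**raw) for raw in raw_segments]
titles = [s.text for s in segments if s.type == "Title"]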

Supported Content Types

  • Caption - Image and table captions
  • Footnote - Footnote text
  • Formula - Mathematical formulas
  • List item - List items and bullet points
  • Page footer - Footer content
  • Page header - Header content
  • Picture - Images and figures
  • Section header - Section headings
  • Table - Table content
  • Text - Regular text paragraphs
  • Title - Document and section titles

πŸ—οΈ Architecture

This project follows Clean Architecture principles, ensuring separation of concerns, testability, and maintainability. The codebase is organized into distinct layers:

Directory Structure

src/
β”œβ”€β”€ domain/                 # Enterprise Business Rules
β”‚   β”œβ”€β”€ PdfImages.py       # PDF image handling domain logic
β”‚   β”œβ”€β”€ PdfSegment.py      # PDF segment entity
β”‚   β”œβ”€β”€ Prediction.py      # ML prediction entity
β”‚   └── SegmentBox.py      # Core segment box entity
β”œβ”€β”€ use_cases/             # Application Business Rules
β”‚   β”œβ”€β”€ pdf_analysis/      # PDF analysis use case
β”‚   β”œβ”€β”€ text_extraction/   # Text extraction use case
β”‚   β”œβ”€β”€ toc_extraction/    # Table of contents extraction
β”‚   β”œβ”€β”€ visualization/     # PDF visualization use case
β”‚   β”œβ”€β”€ ocr/              # OCR processing use case
β”‚   β”œβ”€β”€ markdown_conversion/ # Markdown conversion use case
β”‚   └── html_conversion/   # HTML conversion use case
β”œβ”€β”€ adapters/              # Interface Adapters
β”‚   β”œβ”€β”€ infrastructure/    # External service adapters
β”‚   β”œβ”€β”€ ml/               # Machine learning model adapters
β”‚   β”œβ”€β”€ storage/          # File storage adapters
β”‚   └── web/              # Web framework adapters
β”œβ”€β”€ ports/                 # Interface definitions
β”‚   β”œβ”€β”€ services/         # Service interfaces
β”‚   └── repositories/     # Repository interfaces
└── drivers/              # Frameworks & Drivers
    └── web/              # FastAPI application setup

Layer Responsibilities

  • Domain Layer: Contains core business entities and rules independent of external concerns
  • Use Cases Layer: Orchestrates domain entities to fulfill specific application requirements
  • Adapters Layer: Implements interfaces defined by inner layers and adapts external frameworks
  • Drivers Layer: Contains frameworks, databases, and external agency configurations

Key Benefits

  • πŸ”„ Dependency Inversion: High-level modules don't depend on low-level modules
  • πŸ§ͺ Testability: Easy to unit test business logic in isolation
  • πŸ”§ Maintainability: Changes to external frameworks don't affect business rules
  • πŸ“ˆ Scalability: Easy to add new features without modifying existing code

πŸ€– Models

The service offers two complementary model approaches, each optimized for different use cases:

1. Vision Grid Transformer (VGT) - High Accuracy Model

Overview: A state-of-the-art visual model developed by Alibaba Research Group that "sees" the entire page layout.

Key Features:

  • 🎯 High Accuracy: Best-in-class performance on document layout analysis
  • πŸ‘οΈ Visual Understanding: Analyzes the entire page context including spatial relationships
  • πŸ“Š Trained on DocLayNet: Uses the comprehensive DocLayNet dataset
  • πŸ”¬ Research-Backed: Based on Advanced Literate Machinery

Resource Requirements:

  • GPU: 5GB+ VRAM (recommended)
  • CPU: Falls back automatically if GPU unavailable
  • Processing Speed: ~1.75 seconds/page (GPU [GTX 1070]) or ~13.5 seconds/page (CPU [i7-8700])

2. LightGBM Models - Fast & Efficient

Overview: Lightweight ensemble of two specialized models using XML-based features from Poppler.

Key Features:

  • ⚑ High Speed: ~0.42 seconds per page on CPU (i7-8700)
  • πŸ’Ύ Low Resource Usage: CPU-only, minimal memory footprint
  • πŸ”„ Dual Model Approach:
    • Token Type Classifier: Identifies content types (title, text, table, etc.)
    • Segmentation Model: Determines proper content boundaries
  • πŸ“„ XML-Based: Uses Poppler's PDF-to-XML conversion for feature extraction

Trade-offs:

  • Slightly lower accuracy compared to VGT
  • No visual context understanding
  • Excellent for batch processing and resource-constrained environments
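
The dual-model idea above can be sketched conceptually with the lightgbm package. This illustrates the two-stage pipeline only, with invented toy features; it is not the project's actual feature set or training code:

# Conceptual two-stage sketch: a token type classifier plus a
# segmentation (boundary) model. Features and labels are toy data.
import lightgbm as lgb
import numpy as np

# One row per token; columns stand in for features such as font size
# and position, which are invented here for illustration.
tokens = np.random.rand(1000, 4)
token_types = np.random.randint(0, 11, 1000)   # 11 content types
same_segment = np.random.randint(0, 2, 1000)   # token continues current segment?

type_model = lgb.LGBMClassifier().fit(tokens, token_types)
boundary_model = lgb.LGBMClassifier().fit(tokens, same_segment)

predicted_types = type_model.predict(tokens)
predicted_boundaries = boundary_model.predict(tokens)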

OCR Integration

Both models integrate seamlessly with OCR capabilities:

  • Engine: Tesseract OCR
  • Processing: ocrmypdf
  • Languages: 150+ supported languages
  • Output: Searchable PDFs with preserved layout
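
For reference, the same processing step can be driven directly from Python with ocrmypdf's public API. A sketch; the exact options the service passes internally are not shown here:

# Direct use of ocrmypdf, the engine behind the /ocr endpoint.
import ocrmypdf

ocrmypdf.ocr(
    "scanned_document.pdf",   # input PDF
    "ocr_processed.pdf",      # searchable output PDF
    language="eng",           # Tesseract language code
)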

Model Selection Guide

| Use Case | Recommended Model | Reason |
| --- | --- | --- |
| High accuracy requirements | VGT | Superior visual understanding |
| Batch processing | LightGBM | Faster processing, lower resources |
| GPU available | VGT | Leverages GPU acceleration |
| CPU-only environment | LightGBM | Optimized for CPU processing |
| Real-time applications | LightGBM | Consistent fast response times |
| Research/analysis | VGT | Best accuracy for detailed analysis |

πŸ“Š Data

Training Dataset

Both model types are trained on the comprehensive DocLayNet dataset, a large-scale document layout analysis dataset containing over 80,000 document pages.

Document Categories

The models can identify and classify 11 distinct content types:

| ID | Category | Description |
| --- | --- | --- |
| 1 | Caption | Image and table captions |
| 2 | Footnote | Footnote references and text |
| 3 | Formula | Mathematical equations and formulas |
| 4 | List item | Bulleted and numbered list items |
| 5 | Page footer | Footer content and page numbers |
| 6 | Page header | Header content and titles |
| 7 | Picture | Images, figures, and graphics |
| 8 | Section header | Section and subsection headings |
| 9 | Table | Tabular data and structures |
| 10 | Text | Regular paragraph text |
| 11 | Title | Document and chapter titles |

Dataset Characteristics

  • Domain Coverage: Academic papers, technical documents, reports
  • Language: Primarily English with multilingual support
  • Quality: High-quality annotations with bounding boxes and labels
  • Diversity: Various document layouts, fonts, and formatting styles

For detailed information about the dataset, visit the DocLayNet repository.

πŸ”§ Development

Local Development Setup

  1. Clone the repository:

    git clone https://github.com/huridocs/pdf-document-layout-analysis.git
    cd pdf-document-layout-analysis
  2. Create virtual environment:

    make install_venv
  3. Activate environment:

    make activate
    # or manually: source .venv/bin/activate
  4. Install dependencies:

    make install

Code Quality

Format code:

make formatter

Check formatting:

make check_format

Testing

Run tests:

make test

Integration tests:

# Tests are located in src/tests/integration/
python -m pytest src/tests/integration/test_end_to_end.py

Docker Development

Build and start (detached mode):

# With GPU
make start_detached_gpu

# Without GPU  
make start_detached

Clean up Docker resources:

# Remove containers
make remove_docker_containers

# Remove images
make remove_docker_images

Project Structure

pdf-document-layout-analysis/
β”œβ”€β”€ src/                    # Source code
β”‚   β”œβ”€β”€ domain/            # Business entities
β”‚   β”œβ”€β”€ use_cases/         # Application logic
β”‚   β”œβ”€β”€ adapters/          # External integrations
β”‚   β”œβ”€β”€ ports/             # Interface definitions
β”‚   └── drivers/           # Framework configurations
β”œβ”€β”€ test_pdfs/             # Test PDF files
β”œβ”€β”€ models/                # ML model storage
β”œβ”€β”€ docker-compose.yml     # Docker configuration
β”œβ”€β”€ Dockerfile             # Container definition
β”œβ”€β”€ Makefile              # Development commands
β”œβ”€β”€ pyproject.toml        # Python project configuration
└── requirements.txt      # Python dependencies

Environment Variables

Key configuration options:

# OCR configuration
OCR_SOURCE=/tmp/ocr_source

# Model paths (auto-configured)
MODELS_PATH=./models

# Service configuration  
HOST=0.0.0.0
PORT=5060

Adding New Features

  1. Domain Logic: Add entities in src/domain/
  2. Use Cases: Implement business logic in src/use_cases/
  3. Adapters: Create integrations in src/adapters/
  4. Ports: Define interfaces in src/ports/
  5. Controllers: Add endpoints in src/adapters/web/
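
A minimal sketch of how these layers fit together, with hypothetical names (TextExtractorPort, PopplerTextExtractor, and ExtractTextUseCase are illustrative, not actual classes from the codebase):

# Hypothetical port/adapter/use-case split; names are illustrative.
from abc import ABC, abstractmethod

class TextExtractorPort(ABC):                    # would live in src/ports/services/
    @abstractmethod
    def extract_text(self, pdf_path: str) -> str: ...

class PopplerTextExtractor(TextExtractorPort):   # would live in src/adapters/
    def extract_text(self, pdf_path: str) -> str:
        # A real adapter would call the external tool; stubbed for the sketch.
        return f"text extracted from {pdf_path}"

class ExtractTextUseCase:                        # would live in src/use_cases/
    def __init__(self, extractor: TextExtractorPort):
        self.extractor = extractor               # depends on the interface only

    def execute(self, pdf_path: str) -> str:
        return self.extractor.extract_text(pdf_path)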

Debugging

View logs:

docker compose logs -f

Access container:

docker exec -it pdf-document-layout-analysis /bin/bash

Free up disk space:

make free_up_space

Order of Output Elements

The service returns SegmentBox elements in a carefully determined reading order:

Reading Order Algorithm

  1. Poppler Integration: Uses Poppler PDF-to-XML conversion to establish initial token reading order
  2. Segment Averaging: Calculates average reading order for multi-token segments
  3. Type-Based Sorting: Prioritizes content types:
    • Headers placed first
    • Main content in reading order
    • Footers and footnotes placed last

Non-Text Elements

For segments without text (e.g., images):

  • Processed after text-based sorting
  • Positioned based on nearest text segment proximity
  • Uses spatial distance as the primary criterion
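
A simplified sketch of these rules in Python. The reading_order value is assumed to be the precomputed average token order for each segment; this is an illustration, not the service's actual implementation:

# Simplified reading-order sketch: type priority, then average token
# order; non-text segments are slotted next to the nearest text segment.
# Assumes every page contains at least one text segment.
def type_priority(segment_type: str) -> int:
    if segment_type == "Page header":
        return 0                        # headers first
    if segment_type in ("Page footer", "Footnote"):
        return 2                        # footers and footnotes last
    return 1                            # main content in between

def order_segments(segments: list[dict]) -> list[dict]:
    ordered = [s for s in segments if s.get("text")]
    ordered.sort(key=lambda s: (s["page_number"],
                                type_priority(s["type"]),
                                s["reading_order"]))   # assumed precomputed
    for segment in (s for s in segments if not s.get("text")):
        nearest = min(
            (s for s in ordered if s["page_number"] == segment["page_number"]),
            key=lambda s: abs(s["top"] - segment["top"]) + abs(s["left"] - segment["left"]),
        )
        pos = next(i for i, s in enumerate(ordered) if s is nearest)
        ordered.insert(pos + 1, segment)
    return ordered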

Advanced Table and Formula Extraction

Default Behavior

  • Formulas: Automatically extracted as LaTeX format in the text property
  • Tables: Basic text extraction included by default

Enhanced Table Extraction

To OCR tables and extract them as HTML, set ocr_tables=true:

curl -X POST -F '[email protected]' -F 'ocr_tables=true' http://localhost:5060

πŸ“ˆ Benchmarks

Performance

VGT model performance on PubLayNet dataset:

| Metric | Overall | Text | Title | List | Table | Figure |
| --- | --- | --- | --- | --- | --- | --- |
| F1 Score | 0.962 | 0.950 | 0.939 | 0.968 | 0.981 | 0.971 |

πŸ“Š Comparison: View comprehensive model comparisons at Papers With Code

Speed

Performance benchmarks on 15-page academic documents:

| Model | Hardware | Speed (sec/page) | Use Case |
| --- | --- | --- | --- |
| LightGBM | CPU (i7-8700, 3.2 GHz) | 0.42 | Fast processing |
| VGT | GPU (GTX 1070) | 1.75 | High accuracy |
| VGT | CPU (i7-8700, 3.2 GHz) | 13.5 | CPU fallback |
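
For the 15-page benchmark documents, those per-page rates work out to roughly 6 seconds end-to-end with LightGBM, about 26 seconds with VGT on a GPU, and around 3.4 minutes with VGT on CPU, excluding model loading.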

Performance Recommendations

  • GPU Available: Use VGT for best accuracy-speed balance
  • CPU Only: Use LightGBM for optimal performance
  • Batch Processing: LightGBM for consistent throughput
  • High Accuracy: VGT with GPU for best results

🌐 Installation of More Languages for OCR

The service uses Tesseract OCR with support for 150+ languages. The Docker image includes only common languages to minimize image size.

Installing Additional Languages

1. Access the Container

docker exec -it --user root pdf-document-layout-analysis /bin/bash

2. Install Language Packs

# Install specific language
apt-get update
apt-get install tesseract-ocr-[LANGCODE]

3. Common Language Examples

# Korean
apt-get install tesseract-ocr-kor

# German  
apt-get install tesseract-ocr-deu

# French
apt-get install tesseract-ocr-fra

# Spanish
apt-get install tesseract-ocr-spa

# Chinese Simplified
apt-get install tesseract-ocr-chi-sim

# Arabic
apt-get install tesseract-ocr-ara

# Japanese
apt-get install tesseract-ocr-jpn

4. Verify Installation

curl http://localhost:5060/info

Language Code Reference

Find Tesseract language codes in the ISO to Tesseract mapping.

Supported Languages

Common language codes:

  • eng - English
  • fra - French
  • deu - German
  • spa - Spanish
  • ita - Italian
  • por - Portuguese
  • rus - Russian
  • chi_sim - Chinese Simplified
  • chi_tra - Chinese Traditional
  • jpn - Japanese
  • kor - Korean
  • ara - Arabic
  • hin - Hindi

Usage with Multiple Languages

# OCR with specific language
curl -X POST \
  -F '[email protected]' \
  -F 'language=fr' \
  http://localhost:5060/ocr \
  --output french_ocr.pdf

πŸ”— Related Services

Explore our ecosystem of PDF processing services built on this foundation:

πŸ” Purpose: Intelligent extraction of structured table of contents from PDF documents

Key Features:

  • Leverages layout analysis for accurate TOC identification
  • Hierarchical structure recognition
  • Multiple output formats supported
  • Integration-ready API

πŸ“ Purpose: Advanced text extraction with layout awareness

Key Features:

  • Content-type aware extraction
  • Preserves document structure
  • Reading order optimization
  • Clean text output with metadata

Integration Benefits

These services work seamlessly together:

  • Shared Analysis: Reuse layout analysis results across services
  • Consistent Output: Standardized JSON format for easy integration
  • Scalable Architecture: Deploy services independently or together
  • Docker Ready: All services containerized for easy deployment

🀝 Contributing

We welcome contributions to improve the PDF Document Layout Analysis service!

How to Contribute

  1. Fork the Repository

    git clone https://github.com/your-username/pdf-document-layout-analysis.git
  2. Create a Feature Branch

    git checkout -b feature/your-feature-name
  3. Set Up Development Environment

    make install_venv
    make install
  4. Make Your Changes

    • Follow the Clean Architecture principles
    • Add tests for new features
    • Update documentation as needed
  5. Run Tests and Quality Checks

    make test
    make check_format
  6. Submit a Pull Request

    • Provide clear description of changes
    • Include test results
    • Reference any related issues

Contribution Guidelines

Code Standards

  • Python: Follow PEP 8 with 125-character line length
  • Architecture: Maintain Clean Architecture boundaries
  • Testing: Include unit tests for new functionality
  • Documentation: Update README and docstrings

Areas for Contribution

  • πŸ› Bug Fixes: Report and fix issues
  • ✨ New Features: Add new endpoints or functionality
  • πŸ“š Documentation: Improve guides and examples
  • πŸ§ͺ Testing: Expand test coverage
  • πŸš€ Performance: Optimize processing speed
  • 🌐 Internationalization: Add language support

Development Workflow

  1. Issue First: Create or comment on relevant issues
  2. Small PRs: Keep pull requests focused and manageable
  3. Clean Commits: Use descriptive commit messages
  4. Documentation: Update relevant documentation
  5. Testing: Ensure all tests pass

Getting Help

  • πŸ“š Documentation: Check this README and inline docs
  • πŸ’¬ Issues: Search existing issues or create new ones
  • πŸ” Code: Explore the codebase structure
  • πŸ“§ Contact: Reach out to maintainers for guidance

License

This project is licensed under the terms specified in the LICENSE file.
