This is a research project. Please do not use it commercially and use it responsibly.
WebAI-to-API is a modular web server built with FastAPI, designed to manage requests across AI services like Gemini. It features a clean, extendable architecture that simplifies configuration, integration, and maintenance.
Note: Currently, Gemini is the primary supported AI service.
- **Endpoints Management:** `/v1/chat/completions`, `/gemini`, `/gemini-chat`, and `/translate`.
- **Service Switching:** Easily configure and switch between AI providers via `config.conf`.
- **Modular Architecture:** Organized into clearly defined modules for API routes, services, configurations, and utilities, making development and maintenance straightforward.
1. Clone the repository:

   ```bash
   git clone https://github.com/Amm1rr/WebAI-to-API.git
   cd WebAI-to-API
   ```

2. Install dependencies using Poetry:

   ```bash
   poetry install
   ```

3. Create and update the configuration file:

   ```bash
   cp config.conf.example config.conf
   ```

   Then edit `config.conf` to adjust service settings and other options.

4. Run the server:

   ```bash
   poetry run python src/run.py
   ```
Send a POST request to `/v1/chat/completions` (or any other available endpoint) with the required payload.

Example request:

```json
{
  "model": "gemini-2.0-flash",
  "messages": [{ "role": "user", "content": "Hello!" }]
}
```
Example response:

```json
{
  "id": "chatcmpl-12345",
  "object": "chat.completion",
  "created": 1693417200,
  "model": "gemini-2.0-flash",
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "Hi there!"
      },
      "finish_reason": "stop",
      "index": 0
    }
  ],
  "usage": {
    "prompt_tokens": 0,
    "completion_tokens": 0,
    "total_tokens": 0
  }
}
```
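For reference, the assistant's reply in this OpenAI-compatible response lives at `choices[0].message.content`. The snippet below is only a parsing sketch over the sample response above (not project code), using Python's standard library:

```python
import json

# Sample OpenAI-compatible response, as returned by /v1/chat/completions.
raw = """
{
  "id": "chatcmpl-12345",
  "object": "chat.completion",
  "created": 1693417200,
  "model": "gemini-2.0-flash",
  "choices": [
    {
      "message": { "role": "assistant", "content": "Hi there!" },
      "finish_reason": "stop",
      "index": 0
    }
  ],
  "usage": { "prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0 }
}
"""

data = json.loads(raw)
# The reply text is at choices[0].message.content, as in the OpenAI API.
reply = data["choices"][0]["message"]["content"]
print(reply)  # Hi there!
```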
- **Gemini Support:** Implemented
- **Claude and ChatGPT Support:** Development discontinued
| Section | Option | Description | Example Value |
|---|---|---|---|
| `[AI]` | `default_ai` | Default service for `/v1/chat/completions` | `gemini` |
| `[EnabledAI]` | `gemini` | Enable/disable the Gemini service | `true` |
| `[Browser]` | `name` | Browser for cookie-based authentication | `firefox` |

The complete configuration template is available in `WebAI-to-API/config.conf.example`.

If the cookie fields are left empty, the application will automatically retrieve them using the specified default browser.
```ini
[AI]
# Default AI service.
default_ai = gemini

# Default model for Gemini.
default_model_gemini = gemini-2.0-flash

# Gemini cookies (leave empty to use browser_cookie3 for automatic authentication).
gemini_cookie_1psid =
gemini_cookie_1psidts =

[EnabledAI]
# Enable or disable AI services.
gemini = true

[Browser]
# Default browser options: firefox, brave, chrome, edge, safari.
name = firefox
```
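Since `config.conf` uses standard INI syntax, it can be inspected with Python's built-in `configparser`. The sketch below reads the example values shown above; it is illustrative only, not the project's actual loader in `app/config.py`:

```python
import configparser

# Parse the example configuration shown above.
config = configparser.ConfigParser()
config.read_string("""
[AI]
default_ai = gemini
default_model_gemini = gemini-2.0-flash
gemini_cookie_1psid =
gemini_cookie_1psidts =

[EnabledAI]
gemini = true

[Browser]
name = firefox
""")

default_ai = config.get("AI", "default_ai")
gemini_enabled = config.getboolean("EnabledAI", "gemini")  # "true" -> True
browser = config.get("Browser", "name")
print(default_ai, gemini_enabled, browser)  # gemini True firefox
```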
The project now follows a modular layout that separates configuration, business logic, API endpoints, and utilities:
```
src/
├── app/
│   ├── __init__.py
│   ├── main.py                # FastAPI app creation, configuration, and lifespan management.
│   ├── config.py              # Global configuration loader/updater.
│   ├── logger.py              # Centralized logging configuration.
│   ├── endpoints/             # API endpoint routers.
│   │   ├── __init__.py
│   │   ├── gemini.py          # Endpoints for Gemini (e.g., /gemini, /gemini-chat).
│   │   └── chat.py            # Endpoints for translation and OpenAI-compatible requests.
│   ├── services/              # Business logic and service wrappers.
│   │   ├── __init__.py
│   │   ├── gemini_client.py   # Gemini client initialization, content generation, and cleanup.
│   │   └── session_manager.py # Session management for chat and translation.
│   └── utils/                 # Helper functions.
│       ├── __init__.py
│       └── browser.py         # Browser-based cookie retrieval.
├── models/                    # Models and wrappers (e.g., MyGeminiClient).
│   └── gemini.py
├── schemas/                   # Pydantic schemas for request/response validation.
│   └── request.py
├── config.conf                # Application configuration file.
└── run.py                     # Entry point to run the server.
```
The project is built on a modular architecture designed for scalability and ease of maintenance. Its primary components are:
- **app/main.py**: Initializes the FastAPI application, configures middleware, and manages the application lifespan (startup and shutdown routines).
- **app/config.py**: Handles loading and updating configuration settings from `config.conf`.
- **app/logger.py**: Sets up a centralized logging system.
- **app/endpoints/**: Contains separate modules for handling API endpoints. Each module (e.g., `gemini.py` and `chat.py`) manages routes specific to its functionality.
- **app/services/**: Encapsulates business logic, including the Gemini client wrapper (`gemini_client.py`) and session management (`session_manager.py`).
- **app/utils/browser.py**: Provides helper functions, such as retrieving cookies from the browser for authentication.
- **models/**: Holds model definitions, such as `MyGeminiClient`, for interfacing with the Gemini Web API.
- **schemas/**: Defines Pydantic models for validating API requests.
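To illustrate what such request schemas might look like, here is a hypothetical Pydantic model whose field names mirror the OpenAI-compatible payload shown earlier; it is not the actual contents of `schemas/request.py`:

```python
from pydantic import BaseModel


class Message(BaseModel):
    role: str
    content: str


class ChatRequest(BaseModel):
    # Mirrors the OpenAI-style chat payload accepted by /v1/chat/completions.
    model: str
    messages: list[Message]


# FastAPI validates incoming JSON against such a model automatically.
req = ChatRequest(model="gemini-2.0-flash",
                  messages=[{"role": "user", "content": "Hello!"}])
print(req.messages[0].content)  # Hello!
```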
- **Application Initialization:** On startup, the application loads configurations and initializes the Gemini client and session managers. This is managed via the `lifespan` context in `app/main.py`.
- **Routing:** The API endpoints are organized into dedicated routers under `app/endpoints/`, which are then included in the main FastAPI application.
- **Service Layer:** The `app/services/` directory contains the logic for interacting with the Gemini API and managing user sessions, ensuring that the API routes remain clean and focused on request handling.
- **Utilities and Configurations:** Helper functions and configuration logic are kept separate to maintain clarity and ease of updates.
For Docker setup and deployment instructions, please refer to the Docker.md documentation.
This project is open source under the MIT License.
Note: This is a research project. Please use it responsibly, and be aware that additional security measures and error handling are necessary for production deployments.