A LangGraph quickstart application demonstrating AI agent workflows with sound architecture and best practices. It provides a well-structured foundation that developers can clone and extend with their own agentic features.
- Complete LangGraph Workflow Implementation: Demonstrates stateful, multi-actor applications with conditional routing
- Research Agent System: AI agents capable of web search, analysis, and synthesis
- Modular Architecture: Clean separation of concerns with agents, tools, and workflows
- Multiple Search Providers: Support for DuckDuckGo (free), Serper, and Tavily APIs
- Text Processing Tools: Advanced text analysis, summarization, and extraction capabilities
- Production-Ready Configuration: Environment management, logging, and error handling
- Interactive CLI: Multiple ways to interact with the system
- Comprehensive Examples: Detailed usage examples and demonstrations
```
langgraph-lab/
├── src/
│   ├── agents/      # AI agent implementations
│   ├── workflows/   # LangGraph workflow definitions
│   ├── tools/       # Custom tools (search, text processing)
│   ├── config/      # Configuration management
│   └── utils/       # Utilities and logging
├── examples/        # Usage examples
├── tests/           # Test suite
└── main.py          # CLI entry point
```
- Research Workflow: LangGraph-based workflow with conditional routing and state management
- Research Agent: Specialized agent for conducting research tasks
- Web Search Tool: Multi-provider web search with fallback strategies
- Text Processor: Advanced text analysis and summarization
- Configuration System: Centralized settings with environment variable support
Prerequisites:
- Python 3.9+
- An OpenAI API key
- Optional: Serper or Tavily API keys for enhanced search
- Clone the repository:

  ```bash
  git clone https://github.com/harehimself/langgraph-lab.git
  cd langgraph-lab
  ```

- Create and activate a virtual environment:

  ```bash
  python -m venv venv

  # On Windows
  venv\Scripts\activate

  # On macOS/Linux
  source venv/bin/activate
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Configure the environment:

  ```bash
  cp .env.example .env
  # Edit .env with your API keys
  ```

- Run the demo:

  ```bash
  python main.py demo
  ```
Create a `.env` file with the following variables:
```bash
# Required
OPENAI_API_KEY=your_openai_api_key_here
OPENAI_MODEL=gpt-4-turbo-preview

# Optional - Enhanced Search
SERPER_API_KEY=your_serper_api_key_here
TAVILY_API_KEY=your_tavily_api_key_here

# Optional - LangSmith Tracing
LANGCHAIN_TRACING_V2=true
LANGCHAIN_API_KEY=your_langsmith_api_key_here
LANGCHAIN_PROJECT=langgraph-lab
```
Additional settings:

- `APP_NAME`: Application name (default: "LangGraph Lab")
- `LOG_LEVEL`: Logging level (default: "INFO")
- `MAX_ITERATIONS`: Maximum workflow iterations (default: 10)
- `TIMEOUT_SECONDS`: Workflow timeout in seconds (default: 300)
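These values are read by the centralized configuration system. A minimal sketch of how such a settings module can look with `pydantic-settings` (an assumption; the actual `src/config/` implementation may differ):

```python
# Hypothetical sketch of a settings module; names mirror the variables above.
from pydantic_settings import BaseSettings, SettingsConfigDict


class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env", extra="ignore")

    app_name: str = "LangGraph Lab"
    log_level: str = "INFO"
    max_iterations: int = 10
    timeout_seconds: int = 300

    openai_api_key: str = ""
    openai_model: str = "gpt-4-turbo-preview"


settings = Settings()  # environment variables and .env override the defaults
```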
The CLI supports several modes:

```bash
# Run the full demo
python main.py demo

# Research a specific topic
python main.py research "What are the latest developments in AI agents?"

# Start an interactive session
python main.py interactive

# Show system information
python main.py info
```
You can also drive the research workflow directly from Python:

```python
import asyncio

from src.workflows.research_workflow import research_topic


async def main():
    # Conduct research on a topic
    results = await research_topic(
        "How to build AI agents with LangGraph",
        research_depth="standard",
    )
    print(f"Analysis: {results['analysis']}")
    print(f"Sources: {results['sources']}")
    print(f"Key Insights: {results['key_insights']}")


asyncio.run(main())
```
To create a custom agent, subclass `ToolAgent` (or `BaseAgent`) and implement its hooks:

```python
from src.agents.base_agent import ToolAgent


class CustomAgent(ToolAgent):
    def _get_default_system_prompt(self) -> str:
        return "You are a specialized AI agent for..."

    async def process(self, state):
        # Implement custom logic, then return the updated state
        return state
```
The main workflow demonstrates the following patterns, sketched in code after the list:
- Query Analysis: Understanding research requirements
- Conditional Routing: Different paths based on complexity
- Agent Coordination: Multiple agents working together
- State Management: Maintaining context across steps
- Result Validation: Quality checks and follow-ups
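A minimal sketch of the conditional-routing pattern, using hypothetical node functions and a simplified state schema (the actual workflow in `src/workflows/` is richer):

```python
from typing import TypedDict

from langgraph.graph import END, StateGraph


class ResearchState(TypedDict):
    query: str
    complexity: str
    analysis: str


def analyze_query(state: ResearchState) -> dict:
    # Placeholder classifier; the real workflow would call an LLM here
    is_complex = len(state["query"].split()) > 10
    return {"complexity": "high" if is_complex else "low"}


def basic_research(state: ResearchState) -> dict:
    return {"analysis": f"Quick findings for: {state['query']}"}


def deep_research(state: ResearchState) -> dict:
    return {"analysis": f"In-depth findings for: {state['query']}"}


def route_by_complexity(state: ResearchState) -> str:
    # Conditional routing: choose the next node based on the current state
    return "deep_research" if state["complexity"] == "high" else "basic_research"


graph = StateGraph(ResearchState)
graph.add_node("analyze", analyze_query)
graph.add_node("basic_research", basic_research)
graph.add_node("deep_research", deep_research)
graph.set_entry_point("analyze")
graph.add_conditional_edges("analyze", route_by_complexity)
graph.add_edge("basic_research", END)
graph.add_edge("deep_research", END)

app = graph.compile()
print(app.invoke({"query": "Compare AI agent frameworks for enterprise use"}))
```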
Research depth levels:

- `basic`: Quick research with a single search
- `standard`: Comprehensive research with analysis
- `deep`: Multi-iteration research with synthesis
Supported search providers:

- DuckDuckGo: Free web search (no API key required)
- Serper: Google search API with high-quality results
- Tavily: AI-optimized search with content processing
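Only DuckDuckGo works without credentials, so the search tool can fall back across providers. A hedged sketch of that pattern (the `provider.search` interface here is a hypothetical stand-in for the real provider classes):

```python
# Try each configured provider in order; return the first successful result.
async def search_with_fallback(query: str, providers: list) -> list:
    last_error = None
    for provider in providers:
        try:
            return await provider.search(query)
        except Exception as exc:  # e.g. missing API key, rate limit, timeout
            last_error = exc
    raise RuntimeError(f"All search providers failed: {last_error}")
```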
Text processing capabilities:

- Summarization: Intelligent text summarization
- Key Point Extraction: Automatic insight identification
- Sentiment Analysis: Emotion and tone detection
- Entity Extraction: Named entity recognition
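A hedged usage example; the module path and method names below are assumptions about the text processor's interface, so check `src/tools/` for the actual API:

```python
import asyncio

from src.tools.text_processor import TextProcessor  # module path assumed


async def main():
    processor = TextProcessor()
    text = "LangGraph makes it straightforward to build stateful agent workflows..."
    print(await processor.summarize(text))           # method names hypothetical
    print(await processor.extract_key_points(text))


asyncio.run(main())
```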
Example research calls at different depths:

```python
# Simple research query
results = await research_topic(
    "Benefits of using LangGraph for AI applications",
    "basic",
)

# Complex, multi-faceted research
results = await research_topic(
    "Compare different AI agent frameworks for enterprise use",
    "deep",
)
```
Using the web search tool directly:

```python
from src.tools.web_search import WebSearchTool

search_tool = WebSearchTool()
results = await search_tool.search("LangGraph tutorials", max_results=5)
```
Run the test suite:

```bash
# Install dev dependencies
pip install -e ".[dev]"

# Run tests
pytest

# Run with coverage
pytest --cov=src --cov-report=html
```
Code quality checks:

```bash
# Format code
black src/ tests/ examples/

# Sort imports
isort src/ tests/ examples/

# Type checking
mypy src/

# Linting
flake8 src/ tests/ examples/
```
To add a new agent:

- Create the agent class in `src/agents/`
- Inherit from `BaseAgent` or `ToolAgent`
- Implement the required methods
- Add it to a workflow in `src/workflows/` (see the sketch below)
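A hedged sketch of that last step, registering the `CustomAgent` shown earlier as a node in a LangGraph workflow (state schema and node names are illustrative):

```python
from typing import TypedDict

from langgraph.graph import END, StateGraph


class AgentState(TypedDict):
    query: str
    result: str


agent = CustomAgent()  # the subclass sketched earlier

graph = StateGraph(AgentState)
graph.add_node("custom_agent", agent.process)  # async nodes are supported
graph.set_entry_point("custom_agent")
graph.add_edge("custom_agent", END)

app = graph.compile()  # run with `await app.ainvoke(...)` since the node is async
```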
To add a new tool:

- Create the tool class in `src/tools/` (a hedged example follows)
- Implement async methods
- Add it to the agent tool registry
- Update the documentation
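For illustration, a minimal tool with an async method; the class name, endpoint, and interface below are assumptions, not the project's actual registry contract:

```python
import httpx  # any async-capable HTTP client works here


class WeatherTool:
    """Hypothetical tool that fetches current weather as JSON."""

    name = "weather"

    async def run(self, city: str) -> dict:
        async with httpx.AsyncClient(timeout=10) as client:
            resp = await client.get(f"https://wttr.in/{city}", params={"format": "j1"})
            resp.raise_for_status()
            return resp.json()
```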
```dockerfile
FROM python:3.11-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install -r requirements.txt

COPY . .

CMD ["python", "main.py", "demo"]
```
```bash
# Production deployment
export OPENAI_API_KEY="your_key"
export LOG_LEVEL="WARNING"
export DEBUG="false"

python main.py research "your query"
```
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Make your changes
- Add tests for new functionality
- Ensure code quality checks pass
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License. See the LICENSE file for details.
- LangGraph for the amazing workflow framework
- LangChain for the foundation
- OpenAI for the language models
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Documentation: Check the `examples/` directory for detailed usage examples
- Add more specialized agents (analysis, writing, coding)
- Implement persistent state storage
- Add web interface with Streamlit/Gradio
- Enhanced tool integrations (databases, APIs)
- Multi-modal capabilities (images, documents)
- Distributed workflow execution
Built with ❤️ by Mike Hare