Product Researcher Agent searches the web for information about user-supplied products and returns structured analysis, comparisons, and recommendations.
Set API keys for the LLM of choice (Anthropic is set by default in `src/agent/graph.py`) and for the Tavily API. Start by copying the example environment file:

```bash
cp .env.example .env
```
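A filled-in `.env` would look roughly like this (check `.env.example` for the exact variable names; `ANTHROPIC_API_KEY` and `TAVILY_API_KEY` are the conventional names used by the Anthropic and Tavily SDKs):

```
# .env -- example values only, replace with your own keys
ANTHROPIC_API_KEY=sk-ant-your-key-here
TAVILY_API_KEY=tvly-your-key-here
```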
Install uv, clone the repository, and launch the assistant with the LangGraph server:

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
git clone https://github.com/j-cunanan/product-researcher.git
cd product-researcher
uvx --refresh --from "langgraph-cli[inmem]" --with-editable . --python 3.11 langgraph dev
```
Product Researcher Agent follows a multi-step research and analysis workflow:
- Research Phase: The system performs comprehensive product research:
  - Executes concurrent web searches via the Tavily API (see the sketch after this list) for:
    - Product specifications and features
    - User reviews and feedback
    - Expert opinions and professional reviews
  - Retrieves up to `max_search_results` results per search type
- Analysis Phase: After research is complete, the system:
  - Generates a structured comparison table of top products
  - Provides detailed analysis of top options
  - Creates final recommendations and buying advice
- Output Phase: The system delivers three main components:
  - Comparison table with key features, pros, and cons
  - Detailed analysis of top options with market overview
  - Final recommendations including top pick, premium, and budget options
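The concurrent search step of the Research Phase can be pictured roughly as follows. This is an illustrative sketch, not the repository's actual code: it assumes the `tavily-python` package's `AsyncTavilyClient`, and the helper name `search_product` and the query phrasings are hypothetical, with `max_search_results` playing the role of the configuration value described below.

```python
import asyncio
import os

from tavily import AsyncTavilyClient  # pip install tavily-python

# Illustrative sketch of the concurrent research step (not the repo's actual code).
tavily = AsyncTavilyClient(api_key=os.environ["TAVILY_API_KEY"])

async def search_product(query: str, max_search_results: int = 3) -> list[dict]:
    """Run the three search types for one product query concurrently."""
    aspects = [
        f"{query} specifications and features",
        f"{query} user reviews",
        f"{query} expert review",
    ]
    responses = await asyncio.gather(
        *(tavily.search(q, max_results=max_search_results) for q in aspects)
    )
    # Each Tavily response is a dict whose "results" list holds {title, url, content, ...}.
    return [item for resp in responses for item in resp["results"]]

# asyncio.run(search_product("wireless noise-cancelling headphones"))
```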
The configuration for Product Researcher Agent is defined in the `src/agent/configuration.py` file:

```python
max_search_queries: int = 3        # Max search queries per product
max_search_results: int = 3        # Max search results per query
comparison_table: bool = True      # Whether to include comparison table
detailed_analysis: bool = True     # Whether to include detailed analysis
final_recommendation: bool = True  # Whether to include final recommendation
```
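These values can typically be overridden per run through LangGraph's `configurable` mechanism. A minimal sketch, assuming the field names above are read from the run configuration by the graph (how exactly depends on `configuration.py`):

```python
# Illustrative: override configuration fields for a single run.
config = {
    "configurable": {
        "max_search_queries": 5,
        "max_search_results": 5,
        "detailed_analysis": False,
    }
}
# Pass this dict as the `config` argument when invoking the compiled graph directly
# (e.g. graph.invoke(inputs, config=config)) or when creating a run via the LangGraph SDK.
```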
The user inputs are:
* `query: str` - Product search query
* `category: str` - Product category (e.g., electronics, outdoors)
* `price_range: Optional[str]` - Desired price range (defaults to "Any")
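With the dev server from the quickstart running, the agent can be invoked over the LangGraph API. A minimal sketch using the `langgraph_sdk` package; the assistant name `"agent"` is an assumption and should be checked against `langgraph.json` in the repository, and the input values are only examples of the schema above:

```python
import asyncio

from langgraph_sdk import get_client  # pip install langgraph-sdk

async def main():
    client = get_client(url="http://localhost:2024")
    # "agent" is assumed to be the graph name registered in langgraph.json.
    result = await client.runs.wait(
        None,     # stateless run: no thread
        "agent",
        input={
            "query": "lightweight trail running shoes",
            "category": "outdoors",
            "price_range": "$100-$150",
        },
    )
    print(result)

asyncio.run(main())
```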
The system provides structured output in three main sections:
- Comparison Table: A markdown-formatted table comparing 3-5 top products (an illustrative layout follows this list), including:
  - Model name
  - Price
  - Rating
  - Key features
  - Pros and cons
- Detailed Analysis:
  - Market overview and trends
  - Analysis of top options
  - Key decision factors
  - Price-performance analysis
- Final Recommendations:
  - Top recommendation with justification
  - Premium option
  - Budget option
  - Specialized recommendations
  - Usage/buying tips
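For reference, the comparison table portion of the output is rendered roughly like this (illustrative layout only; the actual products, prices, and ratings come from the agent's research):

```markdown
| Model     | Price | Rating | Key Features         | Pros               | Cons             |
|-----------|-------|--------|----------------------|--------------------|------------------|
| Product A | $129  | 4.6/5  | Feature X, Feature Y | Lightweight, cheap | Limited warranty |
| Product B | $199  | 4.4/5  | Feature Z            | Premium build      | Pricey           |
```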
Prior to any optimization, it is important to establish baseline performance. This repository includes:
- A dataset of example product queries paired with the expected structured output for each.
- A script for evaluating the agent on this dataset.
Make sure you have the LangSmith SDK installed:

```bash
pip install langsmith
```
And set your API keys:

```bash
export LANGSMITH_API_KEY=<your_langsmith_api_key>
export ANTHROPIC_API_KEY=<your_anthropic_api_key>
```
A score between 0 and 1 is assigned to each result by an LLM acting as a judge. The judge scores how closely the agent's output matches the expected output.
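The judging step can be sketched as follows. This is not the repository's actual evaluation code: the `judge_score` helper, the prompt wording, and the model choice are assumptions, and it uses the Anthropic SDK only to illustrate the 0-1 scoring idea.

```python
import anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY from the environment

client = anthropic.Anthropic()

def judge_score(expected: str, actual: str) -> float:
    """Ask an LLM judge for a 0-1 score of how closely `actual` matches `expected`."""
    message = client.messages.create(
        model="claude-3-5-sonnet-latest",  # arbitrary choice for illustration
        max_tokens=10,
        messages=[{
            "role": "user",
            "content": (
                "Score from 0 to 1 how closely the actual output matches the "
                "expected output. Reply with the number only.\n\n"
                f"Expected:\n{expected}\n\nActual:\n{actual}"
            ),
        }],
    )
    return float(message.content[0].text.strip())
```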
Create a new dataset in LangSmith using the code in the `eval` folder:

```bash
python eval/create_dataset.py
```
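What a dataset-creation script of this kind does can be pictured like this. A hedged sketch using the LangSmith SDK; the dataset name and example fields here are placeholders, not the values used by `eval/create_dataset.py`:

```python
from langsmith import Client

client = Client()  # reads LANGSMITH_API_KEY from the environment

# Placeholder dataset name and examples; the real ones live in eval/create_dataset.py.
dataset = client.create_dataset("product-researcher-eval")
client.create_examples(
    inputs=[{"query": "budget mechanical keyboard", "category": "electronics"}],
    outputs=[{"expected": "..."}],
    dataset_id=dataset.id,
)
```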
To run the evaluation, use the `run_eval.py` script in the `eval` folder. This will create a new experiment in LangSmith for the dataset you created in the previous step:

```bash
python eval/run_eval.py --experiment-prefix "My custom prefix" --agent-url http://localhost:2024
```