
OpenAI API Key required even when .env file has DeepSeek settings #1067

Open · Videmak opened this issue Jan 7, 2025 · 9 comments

Videmak commented Jan 7, 2025

No matter what model I set in the .env file, I keep getting this error:

File "/usr/local/lib/python3.10/site-packages/langchain_openai/embeddings/base.py", line 342, in validate_environment
    values["client"] = openai.OpenAI(
  File "/usr/local/lib/python3.10/site-packages/openai/_client.py", line 105, in __init__
    raise OpenAIError(
openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
INFO:     connection closed

These are my settings and steps:

  1. I created a .env file in the folder path: .../gpt-researcher/.env
  2. I added settings for DeepSeek as below:
DEEPSEEK_API_KEY=[DeepSeek_API_KEY]
FAST_LLM=deepseek:deepseek-chat
SMART_LLM=deepseek:deepseek-chat
STRATEGIC_LLM=deepseek:deepseek-chat

Now when I try any search query, I get the error message above. I have tried restarting the server, but nothing changes. The same happens even when I try using Google Gemini.

NOTE: I'm using the latest release of GPT-Researcher (3.1.8).

assafelovic (Owner) commented

@winsonluk any chance you can take a look at this?

winsonluk (Contributor) commented Jan 8, 2025

@assafelovic hey, thanks for the report. @Videmak - I was able to reproduce your error with this exact .env file:

DEEPSEEK_API_KEY=[DeepSeek_API_KEY]
FAST_LLM=deepseek:deepseek-chat
SMART_LLM=deepseek:deepseek-chat
STRATEGIC_LLM=deepseek:deepseek-chat
(venv) [01/8/25 2:37:55 PM] ~/Documents/gpt-researcher $ !p
python3 -m uvicorn main:app --reload
INFO:     Will watch for changes in these directories: ['/Users/wluk/Documents/gpt-researcher']
INFO:     Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO:     Started reloader process [5803] using StatReload
INFO:     Started server process [5805]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     127.0.0.1:53283 - "GET / HTTP/1.1" 200 OK
INFO:     127.0.0.1:53283 - "GET / HTTP/1.1" 200 OK
INFO:     127.0.0.1:53283 - "GET / HTTP/1.1" 200 OK
INFO:     127.0.0.1:53285 - "GET /static/gptr-logo.png HTTP/1.1" 200 OK
INFO:     127.0.0.1:53283 - "GET /site/styles.css HTTP/1.1" 200 OK
INFO:     127.0.0.1:53285 - "GET /site/scripts.js HTTP/1.1" 200 OK
INFO:     127.0.0.1:53285 - "GET /static/favicon.ico HTTP/1.1" 200 OK
INFO:     ('127.0.0.1', 53339) - "WebSocket /ws" [accepted]
INFO:     connection open
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/Users/wluk/Documents/gpt-researcher/venv/lib/python3.12/site-packages/uvicorn/protocols/websockets/websockets_impl.py", line 243, in run_asgi
    result = await self.app(self.scope, self.asgi_receive, self.asgi_send)  # type: ignore[func-returns-value]
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/wluk/Documents/gpt-researcher/venv/lib/python3.12/site-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__
    return await self.app(scope, receive, send)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/wluk/Documents/gpt-researcher/venv/lib/python3.12/site-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/Users/wluk/Documents/gpt-researcher/venv/lib/python3.12/site-packages/starlette/applications.py", line 113, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/Users/wluk/Documents/gpt-researcher/venv/lib/python3.12/site-packages/starlette/middleware/errors.py", line 152, in __call__
    await self.app(scope, receive, send)
  File "/Users/wluk/Documents/gpt-researcher/venv/lib/python3.12/site-packages/starlette/middleware/cors.py", line 77, in __call__
    await self.app(scope, receive, send)
  File "/Users/wluk/Documents/gpt-researcher/venv/lib/python3.12/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/Users/wluk/Documents/gpt-researcher/venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    raise exc
  File "/Users/wluk/Documents/gpt-researcher/venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
    await app(scope, receive, sender)
  File "/Users/wluk/Documents/gpt-researcher/venv/lib/python3.12/site-packages/starlette/routing.py", line 715, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/Users/wluk/Documents/gpt-researcher/venv/lib/python3.12/site-packages/starlette/routing.py", line 735, in app
    await route.handle(scope, receive, send)
  File "/Users/wluk/Documents/gpt-researcher/venv/lib/python3.12/site-packages/starlette/routing.py", line 362, in handle
    await self.app(scope, receive, send)
  File "/Users/wluk/Documents/gpt-researcher/venv/lib/python3.12/site-packages/starlette/routing.py", line 95, in app
    await wrap_app_handling_exceptions(app, session)(scope, receive, send)
  File "/Users/wluk/Documents/gpt-researcher/venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    raise exc
  File "/Users/wluk/Documents/gpt-researcher/venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
    await app(scope, receive, sender)
  File "/Users/wluk/Documents/gpt-researcher/venv/lib/python3.12/site-packages/starlette/routing.py", line 93, in app
    await func(session)
  File "/Users/wluk/Documents/gpt-researcher/venv/lib/python3.12/site-packages/fastapi/routing.py", line 383, in app
    await dependant.call(**solved_result.values)
  File "/Users/wluk/Documents/gpt-researcher/backend/server/server.py", line 132, in websocket_endpoint
    await handle_websocket_communication(websocket, manager)
  File "/Users/wluk/Documents/gpt-researcher/backend/server/server_utils.py", line 241, in handle_websocket_communication
    await handle_start_command(websocket, data, manager)
  File "/Users/wluk/Documents/gpt-researcher/backend/server/server_utils.py", line 139, in handle_start_command
    report = await manager.start_streaming(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/wluk/Documents/gpt-researcher/backend/server/websocket_manager.py", line 66, in start_streaming
    report = await run_agent(task, report_type, report_source, source_urls, document_urls, tone, websocket, headers = headers, config_path = config_path)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/wluk/Documents/gpt-researcher/backend/server/websocket_manager.py", line 110, in run_agent
    report = await researcher.run()
             ^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/wluk/Documents/gpt-researcher/backend/report_type/basic_report/basic_report.py", line 32, in run
    researcher = GPTResearcher(
                 ^^^^^^^^^^^^^^
  File "/Users/wluk/Documents/gpt-researcher/gpt_researcher/agent.py", line 81, in __init__
    self.memory = Memory(
                  ^^^^^^^
  File "/Users/wluk/Documents/gpt-researcher/gpt_researcher/memory/embeddings.py", line 46, in __init__
    _embeddings = OpenAIEmbeddings(model=model, **embdding_kwargs)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/wluk/Documents/gpt-researcher/venv/lib/python3.12/site-packages/pydantic/main.py", line 212, in __init__
    validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/wluk/Documents/gpt-researcher/venv/lib/python3.12/site-packages/langchain_openai/embeddings/base.py", line 338, in validate_environment
    self.client = openai.OpenAI(**client_params, **sync_specific).embeddings  # type: ignore[arg-type]
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/wluk/Documents/gpt-researcher/venv/lib/python3.12/site-packages/openai/_client.py", line 110, in __init__
    raise OpenAIError(
openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
INFO:     connection closed

I believe the issue is that OPENAI_API_KEY and TAVILY_API_KEY also need to be set in .env - even if you're not using OpenAI models! (I agree this isn't very clear in the docs). If you change your .env to look like this, it should work:

OPENAI_API_KEY=[OPENAI_API_KEY]
TAVILY_API_KEY=[TAVILY_API_KEY]
DEEPSEEK_API_KEY=[DeepSeek_API_KEY]
FAST_LLM=deepseek:deepseek-chat
SMART_LLM=deepseek:deepseek-chat
STRATEGIC_LLM=deepseek:deepseek-chat

The reason is that DeepSeek doesn't have its own text embedding models (see https://api-docs.deepseek.com/faq#does-your-api-support-embedding), so we have to use OpenAI for the text embedding step.

But as long as you set FAST_LLM, SMART_LLM, and STRATEGIC_LLM to DeepSeek, the actual text generation will come from DeepSeek.

We just need your OPENAI_API_KEY for that first embedding step.
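To make the dependency concrete, here's a minimal sketch (my own illustration, assuming langchain-openai is installed; the key values and the embedding model name are placeholders) of why the embedding step fails without an OpenAI key, no matter which chat LLM is configured:

import os

from langchain_openai import OpenAIEmbeddings

# Simulate the reported setup: DeepSeek key present, OpenAI key absent.
os.environ["DEEPSEEK_API_KEY"] = "sk-deepseek-placeholder"
os.environ.pop("OPENAI_API_KEY", None)

try:
    # This is the same class the embeddings step instantiates; it validates
    # OPENAI_API_KEY regardless of the FAST_LLM/SMART_LLM/STRATEGIC_LLM settings.
    OpenAIEmbeddings(model="text-embedding-3-small")
except Exception as err:
    print(err)  # "The api_key client option must be set either by passing ..."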

Videmak (Author) commented Jan 8, 2025

Ohh, this is helpful, thanks @winsonluk and @assafelovic

BrockBakke commented

I'm getting the exact same error no matter what LLM is chosen, even ones that have their own embedding model. I started getting it after pulling today's repo.

BrockBakke commented

python3 ./tests/test-your-retriever.py
Retrievers: [<class 'gpt_researcher.retrievers.tavily.tavily_search.TavilySearch'>]
Search results:
[ { 'body': 'In this chapter...

Retriever works.

python3 ./tests/test-your-llm.py
Not much, just here to help you out! What can I do for you today?

LLM works.

Outside of my keys (which work, as demonstrated above, using the Tavily and OpenAI keys) and some commented-out code, this is all that's in my .env file:

FAST_LLM="openai:gpt-4o-mini"
SMART_LLM="openai:gpt-4o"
STRATEGIC_LLM="openai:o1-preview"
EMBEDDING="openai:text-embedding-3-small"

Here's the error:

ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/home/b/proj/gpt-researcher/.venv/lib64/python3.13/site-packages/uvicorn/protocols/websockets/websockets_impl.py", line 243, in run_asgi
    result = await self.app(self.scope, self.asgi_receive, self.asgi_send)  # type: ignore[func-returns-value]
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/b/proj/gpt-researcher/.venv/lib64/python3.13/site-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__
    return await self.app(scope, receive, send)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/b/proj/gpt-researcher/.venv/lib64/python3.13/site-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/home/b/proj/gpt-researcher/.venv/lib64/python3.13/site-packages/starlette/applications.py", line 113, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/home/b/proj/gpt-researcher/.venv/lib64/python3.13/site-packages/starlette/middleware/errors.py", line 152, in __call__
    await self.app(scope, receive, send)
  File "/home/b/proj/gpt-researcher/.venv/lib64/python3.13/site-packages/starlette/middleware/cors.py", line 77, in __call__
    await self.app(scope, receive, send)
  File "/home/b/proj/gpt-researcher/.venv/lib64/python3.13/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/home/b/proj/gpt-researcher/.venv/lib64/python3.13/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    raise exc
  File "/home/b/proj/gpt-researcher/.venv/lib64/python3.13/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
    await app(scope, receive, sender)
  File "/home/b/proj/gpt-researcher/.venv/lib64/python3.13/site-packages/starlette/routing.py", line 715, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/home/b/proj/gpt-researcher/.venv/lib64/python3.13/site-packages/starlette/routing.py", line 735, in app
    await route.handle(scope, receive, send)
  File "/home/b/proj/gpt-researcher/.venv/lib64/python3.13/site-packages/starlette/routing.py", line 362, in handle
    await self.app(scope, receive, send)
  File "/home/b/proj/gpt-researcher/.venv/lib64/python3.13/site-packages/starlette/routing.py", line 95, in app
    await wrap_app_handling_exceptions(app, session)(scope, receive, send)
  File "/home/b/proj/gpt-researcher/.venv/lib64/python3.13/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    raise exc
  File "/home/b/proj/gpt-researcher/.venv/lib64/python3.13/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
    await app(scope, receive, sender)
  File "/home/b/proj/gpt-researcher/.venv/lib64/python3.13/site-packages/starlette/routing.py", line 93, in app
    await func(session)
  File "/home/b/proj/gpt-researcher/.venv/lib64/python3.13/site-packages/fastapi/routing.py", line 383, in app
    await dependant.call(**solved_result.values)
  File "/home/b/proj/gpt-researcher/backend/server/server.py", line 132, in websocket_endpoint
    await handle_websocket_communication(websocket, manager)
  File "/home/b/proj/gpt-researcher/backend/server/server_utils.py", line 241, in handle_websocket_communication
    await handle_start_command(websocket, data, manager)
  File "/home/b/proj/gpt-researcher/backend/server/server_utils.py", line 139, in handle_start_command
    report = await manager.start_streaming(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<8 lines>...
    )
    ^
  File "/home/b/proj/gpt-researcher/backend/server/websocket_manager.py", line 66, in start_streaming
    report = await run_agent(task, report_type, report_source, source_urls, document_urls, tone, websocket, headers = headers, config_path = config_path)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/b/proj/gpt-researcher/backend/server/websocket_manager.py", line 110, in run_agent
    report = await researcher.run()
             ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/b/proj/gpt-researcher/backend/report_type/basic_report/basic_report.py", line 32, in run
    researcher = GPTResearcher(
        query=self.query,
        ...<7 lines>...
        headers=self.headers
    )
  File "/home/b/proj/gpt-researcher/gpt_researcher/agent.py", line 81, in __init__
    self.memory = Memory(
                  ~~~~~~^
        self.cfg.embedding_provider, self.cfg.embedding_model, **self.cfg.embedding_kwargs
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/home/b/proj/gpt-researcher/gpt_researcher/memory/embeddings.py", line 46, in __init__
    _embeddings = OpenAIEmbeddings(model=model, **embdding_kwargs)
  File "/home/b/proj/gpt-researcher/.venv/lib64/python3.13/site-packages/pydantic/main.py", line 212, in __init__
    validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
  File "/home/b/proj/gpt-researcher/.venv/lib64/python3.13/site-packages/langchain_openai/embeddings/base.py", line 338, in validate_environment
    self.client = openai.OpenAI(**client_params, **sync_specific).embeddings  # type: ignore[arg-type]
                  ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/b/proj/gpt-researcher/.venv/lib64/python3.13/site-packages/openai/_client.py", line 110, in __init__
    raise OpenAIError(
        "The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable"
    )
openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable

winsonluk (Contributor) commented Jan 8, 2025

@BrockBakke could you paste the exact commands you're trying, from pulling the repo to encountering the error? I'm able to generate a report successfully with the latest repo using these steps:

git clone git@github.com:assafelovic/gpt-researcher.git
cd gpt-researcher
python3 -m venv venv
source venv/bin/activate
pip3 install -r requirements.txt
mv .env.example .env
<add OPENAI_API_KEY and TAVILY_API_KEY to .env>
python3 -m uvicorn main:app --reload

.env should look something like this:

OPENAI_API_KEY="sk-proj-e3sZ..."                                                                
TAVILY_API_KEY="tvly-fseUz3..."                                              
DOC_PATH=./my-docs

Try running all this in a new terminal session / new environment to make sure you don't have any conflicting environment variables or existing config files. If you're still encountering an error, it might be worth creating a new issue if it's not specifically DeepSeek-related.
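As a quick sanity check before launching uvicorn, something like this (a small sketch of mine, assuming python-dotenv is installed) confirms the keys are actually visible to the process:

import os

from dotenv import load_dotenv

load_dotenv()  # looks for a .env starting from the current directory

for key in ("OPENAI_API_KEY", "TAVILY_API_KEY"):
    print(f"{key}: {'set' if os.getenv(key) else 'MISSING'}")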

BrockBakke commented

I think I figured out what the problem was. If you first attempt to run without setting an embeddings model in your .env, then even after you set one, it will keep throwing this error.

kga245 (Contributor) commented Jan 9, 2025

@BrockBakke Good find. And for the record, I think that makes sense: you would want a hard restart, not just --reload, every time you want to initialize new env variables.
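One plausible mechanism (my assumption, not verified against the repo): python-dotenv does not override variables that already exist in the process environment, so a value inherited from an earlier session can mask later edits to .env:

import os

from dotenv import load_dotenv

# Suppose a stale value is still set from a previous run in this shell.
os.environ["EMBEDDING"] = "stale-provider:stale-model"

load_dotenv()                  # default behavior: existing variables win over .env
print(os.getenv("EMBEDDING"))  # still "stale-provider:stale-model"

load_dotenv(override=True)     # force .env to take precedence
print(os.getenv("EMBEDDING"))  # now the value from .env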

danieldekay (Contributor) commented

Would it be possible to report more clearly to the user that certain keys are missing but required? The original .env file has no mention of embedding models, and this should be raised as a human-friendly error, e.g. by pydantic, before the app actually does anything.

@assafelovic
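A rough sketch of what such an up-front check could look like (purely illustrative; RequiredKeys and its fields are hypothetical names, not the project's actual config classes):

import os

from pydantic import BaseModel, ValidationError, field_validator


class RequiredKeys(BaseModel):
    openai_api_key: str
    tavily_api_key: str

    @field_validator("openai_api_key", "tavily_api_key")
    @classmethod
    def must_be_set(cls, value: str, info):
        if not value:
            raise ValueError(
                f"{info.field_name.upper()} is missing: the default embedding "
                "and retriever providers (OpenAI, Tavily) require it."
            )
        return value


try:
    # Validate once at startup, before any report run touches the network.
    RequiredKeys(
        openai_api_key=os.getenv("OPENAI_API_KEY", ""),
        tavily_api_key=os.getenv("TAVILY_API_KEY", ""),
    )
except ValidationError as err:
    print(err)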
