OpenAI API Key required even when .env file has DeepSeek settings #1067
@winsonluk any chance you can take a look at this?
@assafelovic hey, thanks for the report. @Videmak - I was able to reproduce your error with this exact configuration.

I believe the issue is that `OPENAI_API_KEY` is still required for the embedding step even when DeepSeek is configured as the LLM.

The reason is that DeepSeek doesn't have its own text embedding models (see https://api-docs.deepseek.com/faq#does-your-api-support-embedding), so we have to use OpenAI for the text embedding step. But as long as you set your LLMs to DeepSeek, OpenAI is only used for embeddings. We just need your OpenAI API key for that step.
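For illustration, such a setup would point the LLM settings at DeepSeek and the embedding setting at OpenAI. The exact variable names below (`DEEPSEEK_API_KEY`, `EMBEDDING`, and the model strings) are assumptions based on this thread rather than the project's documented schema:

```
# Sketch of a .env pairing DeepSeek LLMs with OpenAI embeddings.
# Variable names and model strings are assumptions, not verified config keys.
DEEPSEEK_API_KEY=...             # assumed key name for the DeepSeek provider
FAST_LLM="deepseek:deepseek-chat"
SMART_LLM="deepseek:deepseek-chat"

# DeepSeek has no embedding API, so the embedding step still needs OpenAI.
OPENAI_API_KEY=...
EMBEDDING="openai:text-embedding-3-small"   # assumed embedding setting

TAVILY_API_KEY=...               # retriever key
```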
Ohh, this is helpful, thanks @winsonluk and @assafelovic
I'm getting the exact same error no matter what LLM is chosen, even ones that have their own embedding model. I started getting it after pulling today's repo.
I ran the test scripts and both pass:

python3 ./tests/test-your-retriever.py → Retriever works.
python3 ./tests/test-your-llm.py → LLM works

Outside of my keys (which work, as demonstrated above, using Tavily and OpenAI keys) and some commented-out code, this is all that's in my .env file:

FAST_LLM="openai:gpt-4o-mini"

Here's the error:

ERROR: Exception in ASGI application
@BrockBakke could you paste the exact commands you're trying, from pulling the repo to encountering the error? I'm able to generate a report successfully with the latest repo using these steps:
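Roughly, a clean run follows the project README; treat the exact entry point below as an assumption rather than a verified command:

```bash
# Fresh clone and install (sketch based on the README)
git clone https://github.com/assafelovic/gpt-researcher.git
cd gpt-researcher
pip install -r requirements.txt

# Add your keys and model settings to a .env file (see below),
# then start the server and open http://localhost:8000
python -m uvicorn main:app --reload
```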
.env should look something like this:
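Something along these lines; only `FAST_LLM="openai:gpt-4o-mini"` appears verbatim earlier in this thread, so treat the other variable names and model strings as assumptions:

```
# Sketch of a minimal OpenAI-only .env (names other than FAST_LLM are assumptions)
OPENAI_API_KEY=...
TAVILY_API_KEY=...

FAST_LLM="openai:gpt-4o-mini"
SMART_LLM="openai:gpt-4o"                   # assumed
EMBEDDING="openai:text-embedding-3-small"   # assumed; this is the setting that must not be missing
```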
Try running all this in a new terminal session / new environment to make sure you don't have any conflicting environment variables or existing config files. If you're still encountering an error, it might be worth creating a new issue if it's not specifically DeepSeek related.
I think I figured out what the problem was. If you attempt to run without setting an embeddings model in your .env, then even after you set one, it will keep throwing this error (at least until the server is restarted).
@BrockBakke Good find. And for the record, I think that makes sense. You would want to do a hard restart / --reload every time you want to initialize new env variables.
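In practice that just means stopping the server (Ctrl+C) and starting it again after every .env change, e.g. (entry point assumed, as above):

```bash
# Relaunch after editing .env so the new environment variables are actually read
python -m uvicorn main:app --reload
```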
Would it be possible to add better reporting to the user when certain keys are missing but required? The original .env file has no mention of embedding models, and that should be surfaced as a human-friendly error (e.g. via pydantic) before the app actually does anything.
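A minimal sketch of the kind of startup validation being suggested here, assuming pydantic v2 and the variable names used elsewhere in this thread (`EMBEDDING`, `OPENAI_API_KEY`); it is illustrative, not the project's actual config code:

```python
import os

from pydantic import BaseModel, model_validator


class ResearcherEnv(BaseModel):
    """Illustrative env validation; field and variable names are assumptions."""

    fast_llm: str | None = None
    smart_llm: str | None = None
    embedding: str | None = None
    openai_api_key: str | None = None

    @model_validator(mode="after")
    def check_embedding_config(self):
        if not self.embedding:
            raise ValueError(
                'EMBEDDING is not set. Add e.g. EMBEDDING="openai:text-embedding-3-small" '
                "to your .env file before starting the server."
            )
        provider = self.embedding.split(":", 1)[0]
        if provider == "openai" and not self.openai_api_key:
            raise ValueError(
                "EMBEDDING uses the openai provider, but OPENAI_API_KEY is missing. "
                "DeepSeek has no embedding models, so an OpenAI key is still required "
                "for the embedding step."
            )
        return self


def load_env() -> ResearcherEnv:
    # Fail fast with a readable message before any report generation starts.
    return ResearcherEnv(
        fast_llm=os.getenv("FAST_LLM"),
        smart_llm=os.getenv("SMART_LLM"),
        embedding=os.getenv("EMBEDDING"),
        openai_api_key=os.getenv("OPENAI_API_KEY"),
    )
```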
No matter what model I set in the .env file, I keep getting this error:
These are my settings and steps:

Now when I try any search query, I get the error message above. I have tried restarting the server, but nothing changes. It happens even when I try using Google Gemini.

NOTE: I'm using the latest release of GPT-Researcher (3.1.8).