Support Llama-api as an LLM provider #10451
Conversation
Hey @seyeong-han, can you please add a test for your integration, either inside test_main or test_meta_llama_chat_transformation.py inside
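A test along the lines requested might look like the following rough sketch. The helper `split_provider_model` and the test body are illustrative assumptions, not litellm's actual internals; the model name is also illustrative. It only checks that a `meta_llama/`-prefixed model string is split into provider and model name, which is the routing behavior the PR's commits (`fix: rename meta-llama to meta_llama`) suggest.

```python
# Hypothetical sketch of a transformation test (names are assumptions,
# not the actual litellm internals): verify that a "meta_llama/" model
# string is split into provider and model name before the request is
# built against Llama's OpenAI-compatible endpoint.


def split_provider_model(model: str) -> tuple[str, str]:
    """Split 'provider/model' into its two parts at the first slash."""
    provider, _, model_name = model.partition("/")
    return provider, model_name


def test_meta_llama_provider_routing():
    provider, model_name = split_provider_model(
        "meta_llama/Llama-4-Scout-17B-16E-Instruct-FP8"
    )
    assert provider == "meta_llama"
    assert model_name == "Llama-4-Scout-17B-16E-Instruct-FP8"


test_meta_llama_provider_routing()
```

In the real test file this would exercise the PR's `LlamaAPIConfig` transformation instead of a local helper.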
LGTM
Merged commit 9dd1030 into BerriAI:litellm_meta_llama_api
Support Llama-api as an LLM provider (#10451)

* init: support llama-api as a llm provider
* docs: fix endpoint url
* fix: rename meta dir to meta-llama
* docs: add meta-llama info
* fix: mv LlamaAPIConfig under chat directory
* feat: add LlamaAPIConfig in ProviderConfigManager
* fix: provider_config from ProviderConfigManager
* feat: add supports_tool_choice param
* fix: remove optional_params using model_info
* fix: rename meta-llama to meta_llama
* init: test for meta_llama
* fix: model names
* fix file naming convention
* fix file naming convention for meta_llama
* docs meta llama api litellm

Co-authored-by: Young Han <[email protected]>
Co-authored-by: Krish Dholakia <[email protected]>
Add Llama API support

This PR adds support for the Llama API through a new LlamaAPIConfig model class.

Key features:

* LlamaAPIConfig builds on the OpenAIGPTConfig class, using Llama's OpenAI-compatible endpoint (https://api.llama.com/compat/v1/)

Current limitations:

* Tool calling is not yet supported; we plan to add it in a future update once the API stabilizes.
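Because the provider reuses Llama's OpenAI-compatible endpoint, a request to it has the standard chat-completions shape. Below is a minimal stdlib-only sketch of how such a request could be assembled; the model name and the LLAMA_API_KEY environment variable are illustrative assumptions, not details taken from this PR (in practice litellm builds and sends this request for you).

```python
# Minimal sketch (stdlib only) of a chat request against Llama's
# OpenAI-compatible endpoint named in this PR. The model name and the
# LLAMA_API_KEY env var are illustrative assumptions.
import json
import os
import urllib.request

BASE_URL = "https://api.llama.com/compat/v1/"  # endpoint from the PR description


def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Assemble (but do not send) an OpenAI-style chat/completions request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        BASE_URL + "chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('LLAMA_API_KEY', '')}",
        },
        method="POST",
    )


req = build_chat_request("Llama-4-Maverick-17B-128E-Instruct-FP8", "Hello!")
```

Sending `req` with `urllib.request.urlopen` (given a valid key) would return the familiar OpenAI-style JSON response, which is what lets LlamaAPIConfig reuse the existing OpenAI-compatible code path.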
Type: 🆕 New Feature