
TaskWeaver is not able to handle the multiple LLM selection #429

Open

BrajkishorePrajapati opened this issue Oct 17, 2024 · 1 comment

@BrajkishorePrajapati
Describe the bug
I am trying to build a Chainlit UI with a drop-down filter for selecting the LLM. However, even after selecting a different model, I still get the response from the default model. Can someone please help me define the correct taskweaver.json so that the selected model is used for the response?

Also, what changes need to be made in app.py?

Note that this issue is different from the earlier one, where TaskWeaver was not able to run different LLMs for the planner and the code interpreter.
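For reference, a minimal sketch of a taskweaver.json that declares more than one LLM. This follows the aliased-section pattern described in TaskWeaver's multi-LLM documentation; the exact key names (`llm.{alias}.*`, `planner.llm_alias`, `code_generator.llm_alias`) and the placeholder values should be treated as assumptions and checked against the version of TaskWeaver in use (JSON has no comments, so all caveats live here):

```json
{
  "llm.api_type": "openai",
  "llm.model": "gpt-4",
  "llm.api_key": "<your-api-key>",

  "llm.gpt35.api_type": "openai",
  "llm.gpt35.model": "gpt-3.5-turbo",
  "llm.gpt35.api_key": "<your-api-key>",

  "planner.llm_alias": "gpt35",
  "code_generator.llm_alias": ""
}
```

Under this sketch, roles without an `llm_alias` (or with an empty one) fall back to the default `llm.*` settings; a UI drop-down would still need app-side logic to rewrite or reload this config per session, which is the part the maintainer indicates is not supported.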

@liqul
Contributor

liqul commented Jan 26, 2025

We don't support configuration from Chainlit, which is pretty much for demonstration purposes.
