
Gemini GenerativeModel.start_chat() is not supported #1116

Open
syberkitten opened this issue Oct 24, 2024 · 0 comments
Is your feature request related to a problem? Please describe.
Using the start_chat mechanism with instructor for Gemini does not work. The documentation shows an example with chat.completions.create, but I'm using start_chat, which allows more flexibility with supplemental files. start_chat() returns a chat session object on which we call:

chat.send_message_async(question, stream, safety_settings)

Adding response_model to this call fails:

response = await self.chat_session.send_message_async(question, stream=False, ...)
TypeError: ChatSession.send_message_async() got an unexpected keyword argument 'response_model'

Describe the solution you'd like
Support the response_model keyword in send_message_async.

Describe alternatives you've considered
Not using Gemini with instructor at this point.
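Another possible stopgap (a sketch, not from this issue): ask the model for JSON in the prompt and validate the raw reply yourself, skipping instructor entirely. The FakeChatSession stub below stands in for a real google.generativeai ChatSession so the sketch is self-contained; the schema, prompt, and class names are illustrative assumptions.

```python
# Hedged workaround sketch: until instructor supports chat sessions,
# request JSON from the model and validate the reply manually.
import asyncio
import json
from dataclasses import dataclass


@dataclass
class Answer:
    """Illustrative schema; a real project might use a Pydantic model."""
    city: str
    country: str


def parse_answer(raw_text: str) -> Answer:
    """Validate the model's raw text reply against the expected schema."""
    data = json.loads(raw_text)
    return Answer(city=data["city"], country=data["country"])


class FakeChatSession:
    """Stand-in for a google.generativeai ChatSession (no API calls)."""

    async def send_message_async(self, question, stream=False):
        class Reply:
            text = '{"city": "Paris", "country": "France"}'
        return Reply()


async def main() -> Answer:
    # With the real SDK this would be:
    #   chat = genai.GenerativeModel("...").start_chat()
    chat = FakeChatSession()
    reply = await chat.send_message_async(
        "Answer as JSON with keys city and country: what is the capital of France?",
        stream=False,
    )
    return parse_answer(reply.text)


answer = asyncio.run(main())
```

This keeps the start_chat workflow intact but loses instructor's retry-on-validation-failure behavior, which is part of why native response_model support would be preferable.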

Additional context
instructor = "^1.6.3"
google-generativeai = "^0.8.2"
