feat: add llm.prompt
#930
base: main
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@            Coverage Diff             @@
##               main      #930      +/-  ##
============================================
  Coverage   100.00%   100.00%
============================================
  Files          515       517        +2
  Lines        21070     21382      +312
============================================
+ Hits         21070     21382      +312
mirascope/llm/_prompt.py (Outdated)

    | (_ResponseModelT | CallResponse)
):
    context = get_current_context()
    if context is None or context.provider is None or context.model is None:
nit: refactor this into a private utility with the above duplicate (just so the messages will always match)
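e.g. something roughly like this (helper name, error message, and import path are made up, not from the PR):

```python
# Hypothetical helper: both duplicated checks would call this so the
# "missing context" error message always matches.
from mirascope.llm._context import Context, get_current_context  # assumed import path


def _get_required_context() -> Context:
    context = get_current_context()
    if context is None or context.provider is None or context.model is None:
        raise ValueError(
            "llm.prompt requires an active llm.context with both provider and model set."
        )
    return context
```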
Done
@@ -498,3 +498,334 @@ def __call__(
        Any,
        Any,
    ]


class PromptDecorator(
I see your point here now about whether we need `llm.prompt`, or if `llm.call` should just make provider/model optional and raise the error if not provided at runtime, especially since the return types aren't different, so the only way to know you need context is to see it has the `llm.prompt` decorator (at which point you could see that it doesn't have model/provider). Could also potentially update to return something like `RequiresContext[...]` as a type wrapper passthrough or something if model or provider aren't provided.
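Roughly something like this (untested sketch; only `get_current_context` comes from the code above, everything else is hand-waved):

```python
# Untested sketch of the alternative: provider/model become optional on
# llm.call and the missing-context error moves to runtime.
def call(provider: str | None = None, model: str | None = None, **call_kwargs):
    def decorator(fn):
        def wrapper(*args, **kwargs):
            ctx = get_current_context()
            resolved_provider = provider or (ctx.provider if ctx else None)
            resolved_model = model or (ctx.model if ctx else None)
            if resolved_provider is None or resolved_model is None:
                raise ValueError(
                    "No provider/model given to llm.call and none found in llm.context."
                )
            # ...proceed with the normal llm.call machinery...
            return fn(*args, **kwargs)

        return wrapper

    return decorator
```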
Yeah, and that puts us back into the issue of not having a way for the type system to check if a context is provided. Also, even `RequiresContext` is underspecified because model and provider are optional on the context. Really we require a context that has both model and provider... which hypothetically could be coming from two separate contexts.
Hmm, I think this will actually be somewhat moot when we merge provider and model.
In the short term, we could provide the following type aliases:
type RequiresContext[T] = T
type RequiresProvider[T] = T
type RequiresModel[T] = T
We would then remove `RequiresProvider` and `RequiresModel` as part of the merge.
I imagine these do little but let someone know that it requires context right where they are in the code.
I think this would be helpful/useful even without explicit lint errors.
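For illustration, a call site might then read something like this (function name and response type are placeholders, not from the PR):

```python
# Placeholder sketch: the alias is erased at runtime but shows up in editor
# hovers and signatures, flagging that an enclosing llm.context is required.
type RequiresContext[T] = T  # Python 3.12+ `type` statement


class CallResponse:  # stand-in for the real response type
    ...


def recommend_book(genre: str) -> RequiresContext[CallResponse]:
    ...
```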
This is a first stab at implementing the `llm.prompt` API, as discussed in #884, particularly #884 (comment). It is basically a sibling to `llm.call` that doesn't require providing the model and provider upfront, but rather assumes that they will be present via the `llm.context` context manager. Rather than re-implement the logic, under the hood it has some minimal wrapping and then calls `llm.call`. Once we implement performance benchmarking (as discussed in #896), we may find it's better not to wrap `llm.call` here, but for a proof of concept for the interface, this seemed fine.

I copied a lot of the (complex) type signature for `llm.call`, including making a `PromptDecorator` class, which may be cargo cult-y.

Initially, rather than taking a simple decorator approach, I wanted to return a callable `Prompt` class. This would have enabled nice APIs like chaining function calls on the prompt in order to change its structural parameters; e.g., `prompt.stream(True)` would return a new `Prompt` with streaming enabled, and `prompt.response_model(Book)` would make a new version of the same prompt with a response model configured. However, I found this to be a nightmare from a typing perspective, so I backed off to just implementing a decorator à la `call`.

I think there's value in having a way to set up decorated LLM calls while not providing the provider or model upfront, hence advocating for this API. Though I wonder if having a separate `llm.prompt` API distinct from `llm.call` is worth it, versus just making the provider and model optional on `llm.call`, with an error thrown at runtime if no default model/provider is present and it was called outside of a context.

I didn't update docs or examples in this PR, but here's a code snippet which both works and puts `llm.prompt` a bit through its paces:
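As an illustrative sketch (not the original snippet), the intended usage would look roughly like this; the decorator signature, `llm.context` arguments, and import path below are assumptions based on the description above:

```python
# Illustrative sketch only, not the snippet from the PR description.
from mirascope import llm


@llm.prompt()
def recommend_book(genre: str) -> str:
    return f"Recommend a {genre} book."


# Calling recommend_book outside llm.context would raise at runtime, since
# no provider/model is configured on the prompt itself.
with llm.context(provider="openai", model="gpt-4o-mini"):
    response = recommend_book("fantasy")
    print(response.content)
```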