Bump Cerebras #1177
Conversation
👍 Looks good to me! Reviewed everything up to 802a34a in 35 seconds
More details
- Looked at `101` lines of code in `3` files
- Skipped `1` file when reviewing
- Skipped posting `3` drafted comments based on config settings
1. `tests/llm/test_cerebras/modes.py:140`
- Draft comment:
  The `response_model` should be `Iterable[User]` to match the expected response type.
  `response_model=Iterable[User],`
- Reason this comment was not posted:
  Decided after close inspection that this draft comment was likely wrong and/or not actionable:
  The comment suggests reverting a change that might have been intentional. The test logic seems to expect individual `User` objects, not an iterable, which supports the change to `User`. The comment might be based on an assumption rather than the actual test logic. The change to `create_iterable` might not necessarily require `Iterable[User]` if the test logic expects `User`.
  I might be missing the broader context of how `create_iterable` is expected to behave. If it inherently returns an iterable, the change to `User` might be incorrect. However, the test logic should be the primary guide here.
  The test logic is a strong indicator of the expected behavior: if it checks for `User` objects, the change to `User` is likely correct, regardless of the function name.
  The comment should be deleted, as the test logic supports the change to `User` and the comment seems to be based on an assumption rather than the actual test logic.
2. `tests/llm/test_cerebras/modes.py:194`
- Draft comment:
  The `response_model` should be `Iterable[User]` to match the expected response type.
  `response_model=Iterable[User],`
- Reason this comment was not posted:
  Marked as duplicate.
3. `tests/llm/test_cerebras/modes.py:185`
- Draft comment:
  Assertions should have descriptive error messages. For example:
  `with pytest.raises(ValueError, match="Expected error message"):`
  This applies to other assertions in this file as well.
- Reason this comment was not posted:
  Confidence changes required: `80%`
  The assertion error messages in the test cases should be more descriptive to help identify the cause of the failure.
Workflow ID: wflow_jErPR16w1fZRV1v8
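The `pytest.raises(..., match=...)` pattern the drafted comment suggests can be sketched as follows. This is a minimal, hypothetical example: `parse_age` is an illustrative helper, not code from this PR.

```python
import pytest


def parse_age(value: str) -> int:
    """Illustrative helper: parse an age string and reject negatives."""
    age = int(value)
    if age < 0:
        raise ValueError(f"age must be non-negative, got {age}")
    return age


def test_parse_age_rejects_negatives():
    # match= pins the test to a specific error message (a regex searched
    # against str(exc)), so an unrelated ValueError still fails the test.
    with pytest.raises(ValueError, match="must be non-negative"):
        parse_age("-3")
```

Compared to a bare `pytest.raises(ValueError)`, the `match` argument documents which failure the test is guarding against and makes the failure output more descriptive.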
You can customize Ellipsis with 👍 / 👎 feedback, review rules, user-specific overrides, quiet
mode, and more.
👍 Looks good to me! Incremental review on e631b95 in 14 seconds
More details
- Looked at `12` lines of code in `1` file
- Skipped `0` files when reviewing
- Skipped posting `1` drafted comment based on config settings
1. `instructor/client_cerebras.py:3`
- Draft comment:
  `Literal` is used in the code but the import statement for it was removed. Re-add the import for `Literal`.
- Reason this comment was not posted:
  Decided after close inspection that this draft comment was likely wrong and/or not actionable:
  The comment suggests re-adding an import for `Literal`, but there is no evidence in the provided code that `Literal` is used. The function overloads are defined without `Literal`, and the code seems to function correctly without it. Therefore, the comment is likely incorrect.
  I might be missing some context about how `Literal` could be used elsewhere or in a different way that isn't immediately obvious. However, based on the provided code, there is no indication of its necessity.
  The absence of `Literal` usage in the provided code is strong evidence that the import is unnecessary; without any usage, the comment is not valid.
  The comment should be deleted because there is no usage of `Literal` in the code, making the import unnecessary.
Workflow ID: wflow_XqZ8RV9EDGmY4Zgb
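For context, the pattern whose removal makes the `Literal` import unnecessary looks roughly like this. It is a hypothetical sketch of the old `use_async`-flag overload style, not the actual `client_cerebras.py` code; the return values are placeholders.

```python
from typing import Literal, overload


# Overloads keyed on a Literal-typed flag let type checkers narrow the
# return type based on use_async=True/False at the call site.
@overload
def from_cerebras(client: object, use_async: Literal[True]) -> str: ...
@overload
def from_cerebras(client: object, use_async: Literal[False] = ...) -> str: ...
def from_cerebras(client: object, use_async: bool = False) -> str:
    # Once the flag is removed, nothing references Literal any more,
    # so its import can be dropped as well.
    return "async client" if use_async else "sync client"
```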
👍 Looks good to me! Incremental review on 6620dfa in 20 seconds
More details
- Looked at `22` lines of code in `1` file
- Skipped `1` file when reviewing
- Skipped posting `2` drafted comments based on config settings
1. `pyproject.toml:41`
- Draft comment:
  The `cerebras_cloud_sdk` version should be updated to `^1.11.0` as per the PR description, not `^1.5.0`.
- Reason this comment was not posted:
  Comment did not seem useful.
2. `pyproject.toml:41`
- Draft comment:
  The version of `cerebras_cloud_sdk` should be updated to `^1.11.0` as mentioned in the PR description.
- Reason this comment was not posted:
  Marked as duplicate.
Workflow ID: wflow_137hcDAp6GmNBWj7
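Assuming the project declares dependencies in a Poetry-style `pyproject.toml` (the table name below is an assumption based on the `pyproject.toml:41` reference), the constraint the drafted comments describe would look something like:

```toml
[tool.poetry.dependencies]
cerebras_cloud_sdk = "^1.11.0"
```

The caret constraint `^1.11.0` allows any compatible `1.x` release at or above `1.11.0`.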
This bumps cerebras to the latest 1.11 version
Important
Update `cerebras_cloud_sdk` to v1.11.0, refactor the `from_cerebras` function, and adjust tests in `modes.py`.
- Bump `cerebras_cloud_sdk` to version `^1.11.0` in `pyproject.toml`.
- Remove the `use_async` parameter from `from_cerebras` in `client_cerebras.py`.
- Use `isinstance` to determine async behavior in `from_cerebras`.
- Update `test_cerebras_json_streaming` and `test_cerebras_tool_error` in `modes.py` to use `create_iterable` instead of `create`.
- Adjust `modes.py` to align with the new functionality.

This description was created by Ellipsis for 6620dfa. It will automatically update as commits are pushed.
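The `isinstance`-based dispatch that replaces the `use_async` parameter can be sketched as follows. This is a hypothetical illustration: the `Cerebras` and `AsyncCerebras` classes are stand-ins for the real SDK client types, and the return values are placeholders for the wrapped instructor clients.

```python
class Cerebras:
    """Stand-in for the cerebras SDK's synchronous client."""


class AsyncCerebras:
    """Stand-in for the cerebras SDK's asynchronous client."""


def from_cerebras(client):
    # The type of the client object, rather than a use_async flag,
    # decides which flavor of instructor client to build.
    if isinstance(client, AsyncCerebras):
        return "async instructor client"
    if isinstance(client, Cerebras):
        return "sync instructor client"
    raise TypeError("client must be a Cerebras or AsyncCerebras instance")
```

Dispatching on the client's type removes a parameter that could contradict the client actually passed in, which is a common motivation for this kind of refactor.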