0.6.0 dev (#1138)
* reapply litellm updates to support only messages llm kwarg

* tests run and make progress on rewrite, most of unit_tests_passing

* migrate more tests partially

* some progress

* more progress

* fix some more tests

* fix some more tests

* more progress

* more tests

* more tests

* removed nltk dependency

* remove nltk import and download from validator base

* remove commented out test referencing nltk

* throwing import error if nltk not available in detect_pii mock

* tests passing

* typing and lint

* lint

* typing

* fix bad merge

* Added rule based sentence tokenization from WordTokenizers.jl with minor modifications

* Added new version of split_sentence_str using new rule based sentence tokenization

* updates to factor in quotes during sentence splitting

* updated poetry.lock

* replaced split sentence default

* testing changes using custom separator in wordtokenizer algo

* fix for counting subs

* reverted split sentence in validators base

* reverted to pre-separator algo, added fix for conditional whitespace after ?!. chars

* fix for optional white space after potential line endings ?!.

* added back modified separator algo, fix for split sentence

* Fix regex patterns for abbreviations in tokenization_utils_seperator.py

* fix tests

* remove nltk references from tests

* removed older scripts

* minor fixes

* notebooks

* last few notebooks

* last notebooks

* update docs for messages

* last of docs

* update more docs and start migration guide

* fix tests and format

* update some tests

* bumped api version

* dep updates

* re-enable history by default

* expose messages to prompt helper and finish docs for it

* indentation

* fix test out of main

* update api client to point to its alpha

* update validator default on fail behavior from no op to exception

* update notebooks

* Installs from private pypi with validator versioning

* chore: Fix typo in pip_process function

* wip: updating tests

* fixed tests

* fix f-string syntax error

* fixed uninstall, list validators cli cmds, and tests

* removed unused methods from cli hub utils

* Removed writes to org namespaced init files during install, removed unused methods

* fix tests associated with removal of namespace init file

* update for litellm: don't pass reask messages to LLMs

* Update pyproject.toml

* Update guardrails/cli/hub/uninstall.py

* remove regex pypi package dependency

* bumped version to 0.6.0-alpha3

* bump version to 0.6.0-alpha4

* temp workaround to raise errors on pip processes

* raise errors on install failures to avoid placing import statement

* update version to 0.6.0

* updated installation tests to reflect change of behavior during exceptions in pip install subprocess

---------

Co-authored-by: David Tam <[email protected]>
Co-authored-by: Alejandro <[email protected]>
Co-authored-by: dtam <[email protected]>
4 people authored Nov 6, 2024
1 parent ef603c3 commit 01c56ec
Showing 151 changed files with 15,701 additions and 7,445 deletions.
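Several of the commits above remove the NLTK dependency and replace it with rule-based sentence tokenization adapted from WordTokenizers.jl, including fixes for optional whitespace after the terminators `?`, `!`, and `.` and for abbreviation regex patterns. A minimal sketch of that idea — not the library's actual implementation; the function name, abbreviation list, and regex here are all illustrative assumptions:

```python
import re

# Illustrative subset of abbreviations that should not end a sentence
# (hypothetical; the real tokenizer uses a much larger rule set).
ABBREVIATIONS = {"dr.", "mr.", "mrs.", "e.g.", "i.e.", "etc."}

def split_sentences(text: str) -> list[str]:
    """Split after '.', '?' or '!' plus *optional* whitespace,
    unless the preceding token is a known abbreviation."""
    # Lookbehind keeps the terminator attached to the preceding chunk;
    # \s* makes trailing whitespace optional, so "He sat!Was he tired?"
    # still splits; the lookahead requires a capital letter or quote.
    parts = re.split(r"(?<=[.?!])\s*(?=[A-Z\"'])", text)
    sentences: list[str] = []
    for part in parts:
        prev_last = sentences[-1].split()[-1].lower() if sentences else ""
        if prev_last in ABBREVIATIONS:
            # Previous chunk ended in an abbreviation: glue this chunk back on.
            sentences[-1] += " " + part
        else:
            sentences.append(part)
    return [s.strip() for s in sentences if s.strip()]
```

This shows only the two behaviors the commit messages call out — optional whitespace after `?!.` and abbreviation handling — without quotation-mark handling or the custom-separator variant.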
6,690 changes: 6,674 additions & 16 deletions docs/concepts/async_streaming.ipynb

Large diffs are not rendered by default.

1 change: 0 additions & 1 deletion docs/concepts/error_remediation.md
@@ -18,7 +18,6 @@ Note that this list is not exhaustive of the possible errors that could occur.
```log
The callable `fn` passed to `Guard(fn, ...)` failed with the following error:
{Root error message here!}.
Make sure that `fn` can be called as a function that takes in a single prompt string and returns a string.
```


20 changes: 4 additions & 16 deletions docs/concepts/logs.md
@@ -33,17 +33,17 @@ docs/html/single-step-history.html

## Calls
### Initial Input
Inital inputs like prompt and instructions from a call are available on each call.
Initial inputs like messages from a call are available on each call.

```py
first_call = my_guard.history.first
print("prompt\n-----")
print(first_call.prompt)
print("message\n-----")
print(first_call.messages[0]["content"])
print("prompt params\n------------- ")
print(first_call.prompt_params)
```
```log
prompt
message
-----
You are a human in an enchanted forest. You come across opponents of different types. You should fight smaller opponents, run away from bigger ones, and freeze if the opponent is a bear.
@@ -67,18 +67,6 @@ prompt params
{'opp_type': 'grizzly'}
```

Note: Input messages and msg_history currently can be accessed through iterations
```py
print(guard.history.last.iterations.last.inputs.msg_history)
```
```log
[
{"role":"system","content":"You are a helpful assistant."},
{"role":"user","content":"Tell me a joke"}
]
```


### Final Output
Final output of call is accessible on a call.
```py
44 changes: 36 additions & 8 deletions docs/concepts/streaming.ipynb
@@ -19,7 +19,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
@@ -39,7 +39,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
@@ -51,12 +51,41 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 3,
"metadata": {},
"outputs": [],
"outputs": [
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"color: #800080; text-decoration-color: #800080; font-weight: bold\">ValidationOutcome</span><span style=\"font-weight: bold\">(</span>\n",
" <span style=\"color: #808000; text-decoration-color: #808000\">call_id</span>=<span style=\"color: #008000; text-decoration-color: #008000\">'14148119808'</span>,\n",
" <span style=\"color: #808000; text-decoration-color: #808000\">raw_llm_output</span>=<span style=\"color: #008000; text-decoration-color: #008000\">'.'</span>,\n",
" <span style=\"color: #808000; text-decoration-color: #808000\">validation_summaries</span>=<span style=\"font-weight: bold\">[]</span>,\n",
" <span style=\"color: #808000; text-decoration-color: #808000\">validated_output</span>=<span style=\"color: #008000; text-decoration-color: #008000\">'.'</span>,\n",
" <span style=\"color: #808000; text-decoration-color: #808000\">reask</span>=<span style=\"color: #800080; text-decoration-color: #800080; font-style: italic\">None</span>,\n",
" <span style=\"color: #808000; text-decoration-color: #808000\">validation_passed</span>=<span style=\"color: #00ff00; text-decoration-color: #00ff00; font-style: italic\">True</span>,\n",
" <span style=\"color: #808000; text-decoration-color: #808000\">error</span>=<span style=\"color: #800080; text-decoration-color: #800080; font-style: italic\">None</span>\n",
"<span style=\"font-weight: bold\">)</span>\n",
"</pre>\n"
],
"text/plain": [
"\u001b[1;35mValidationOutcome\u001b[0m\u001b[1m(\u001b[0m\n",
" \u001b[33mcall_id\u001b[0m=\u001b[32m'14148119808'\u001b[0m,\n",
" \u001b[33mraw_llm_output\u001b[0m=\u001b[32m'.'\u001b[0m,\n",
" \u001b[33mvalidation_summaries\u001b[0m=\u001b[1m[\u001b[0m\u001b[1m]\u001b[0m,\n",
" \u001b[33mvalidated_output\u001b[0m=\u001b[32m'.'\u001b[0m,\n",
" \u001b[33mreask\u001b[0m=\u001b[3;35mNone\u001b[0m,\n",
" \u001b[33mvalidation_passed\u001b[0m=\u001b[3;92mTrue\u001b[0m,\n",
" \u001b[33merror\u001b[0m=\u001b[3;35mNone\u001b[0m\n",
"\u001b[1m)\u001b[0m\n"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"fragment_generator = guard(\n",
" litellm.completion,\n",
" model=\"gpt-4o\",\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n",
@@ -116,7 +145,6 @@
"guard = gd.Guard()\n",
"\n",
"fragment_generator = await guard(\n",
" litellm.completion,\n",
" model=\"gpt-3.5-turbo\",\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n",
@@ -137,7 +165,7 @@
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"display_name": "litellm",
"language": "python",
"name": "python3"
},
@@ -151,7 +179,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.8"
"version": "3.12.3"
}
},
"nbformat": 4,
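The streaming hunks above drop the `litellm.completion` positional argument: per the commit messages, in 0.6.0 the guard call takes the model name and a `messages` list directly instead of an LLM callable. A rough before/after sketch, with the guard calls commented out since they require an LLM backend, and message contents that are purely illustrative:

```python
# Chat-style input shared by both the old and new call styles.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Tell me about the solar system."},
]

# Pre-0.6.0 style: the LLM callable was passed positionally.
# fragment_generator = guard(
#     litellm.completion,
#     model="gpt-3.5-turbo",
#     messages=messages,
#     stream=True,
# )

# 0.6.0 style: no callable argument; the model and messages go
# straight to the guard, which routes through litellm internally.
# fragment_generator = guard(
#     model="gpt-3.5-turbo",
#     messages=messages,
#     stream=True,
# )
```

Only the removal of the callable is taken from the diff; the exact keyword arguments shown are assumptions based on the notebook snippets visible above.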
