Commit

docs: update docs
jxnl committed Oct 14, 2024
1 parent dbab4d5 commit d7328ed
Showing 37 changed files with 725 additions and 324 deletions.
14 changes: 12 additions & 2 deletions docs/blog/index.md
@@ -1,6 +1,16 @@
 ---
+categories:
+- OpenAI
+comments: true
+description: Subscribe to our newsletter for AI updates, tips, and insights into the
+  latest features and advancements in AI technology.
+tags:
+- AI Updates
+- Newsletter Subscription
+- Tips
+- AI Features
+- Instructor
 title: Subscribe to Our Newsletter for AI Updates and Tips
-description: Join our newsletter for the latest features, tips, and insights on using Instructor effectively.
 ---

# Subscribe to our Newsletter for Updates and Tips
@@ -47,4 +57,4 @@ If you want to get updates on new features and tips on how to use Instructor, yo
## Media and Resources

- [Course: Structured Outputs with Instructor](https://www.wandb.courses/courses/steering-language-models?x=1)
- [Keynote: Pydantic is All You Need](posts/aisummit-2023.md)
22 changes: 14 additions & 8 deletions docs/blog/posts/aisummit-2023.md
@@ -1,13 +1,19 @@
 ---
-draft: False
+authors:
+- jxnl
+categories:
+- Pydantic
+comments: true
 date: 2023-11-02
+description: Explore insights on utilizing Pydantic for effective prompt engineering
+  in this AI Engineer Summit keynote.
+draft: false
 tags:
-- python
-- talks
-- prompt engineering
-- video
-authors:
-- jxnl
+- Pydantic
+- Prompt Engineering
+- AI Summit
+- Machine Learning
+- Data Validation
 ---

# AI Engineer Keynote: Pydantic is all you need
@@ -20,4 +26,4 @@ authors:

Last month, I ventured back onto the speaking circuit at the inaugural [AI Engineer Summit](https://www.ai.engineer/summit), sharing insights on leveraging [Pydantic](https://docs.pydantic.dev/latest/) for effective prompt engineering. I dove deep into what is covered in our documentation and standard blog posts.

I'd genuinely appreciate any feedback on the talk – every bit helps in refining the art. So, take a moment to check out the [full talk here](https://youtu.be/yj-wSRJwrrc?si=vGMIqtTapbIN8SLz), and let's continue pushing the boundaries of what's possible.
19 changes: 15 additions & 4 deletions docs/blog/posts/announcing-gemini-tool-calling-support.md
@@ -1,8 +1,19 @@
 ---
-draft: False
-date: 2024-09-03
 authors:
-- ivanleomk
+- ivanleomk
+categories:
+- LLM Techniques
+comments: true
+date: 2024-09-03
+description: Introducing structured outputs for Gemini tool calling support in the
+  instructor library, enhancing interactions with Gemini and VertexAI SDKs.
+draft: false
+tags:
+- Gemini
+- VertexAI
+- Tool Calling
+- Instructor Library
+- AI SDKs
 ---

# Structured Outputs for Gemini now supported
@@ -112,4 +123,4 @@ print(resp)
#> name='Jason' age=25
```

1. Current Gemini models that support tool calling are `gemini-1.5-flash-latest` and `gemini-1.5-pro-latest`.
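For readers of the diff, the `name='Jason' age=25` output above comes from extracting a typed user model. A minimal sketch of the pattern, assuming the `google-generativeai` package and instructor's `from_gemini` entry point (the `extract_user` helper name is illustrative):

```python
from pydantic import BaseModel


class User(BaseModel):
    name: str
    age: int


def extract_user(text: str) -> User:
    # Requires `instructor` and `google-generativeai` plus a configured
    # GOOGLE_API_KEY; left uninvoked so the sketch stays self-contained.
    import google.generativeai as genai
    import instructor

    client = instructor.from_gemini(
        client=genai.GenerativeModel(model_name="models/gemini-1.5-flash-latest"),
        mode=instructor.Mode.GEMINI_TOOLS,
    )
    return client.messages.create(
        messages=[{"role": "user", "content": text}],
        response_model=User,
    )


# The response model validates independently of any API call:
print(User(name="Jason", age=25))
#> name='Jason' age=25
```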
19 changes: 15 additions & 4 deletions docs/blog/posts/anthropic-prompt-caching.md
@@ -1,8 +1,19 @@
 ---
-draft: False
-date: 2024-09-14
 authors:
-- ivanleomk
+- ivanleomk
+categories:
+- Anthropic
+comments: true
+date: 2024-09-14
+description: Discover how prompt caching with Anthropic can improve response times
+  and reduce costs for large context applications.
+draft: false
+tags:
+- prompt caching
+- Anthropic
+- API optimization
+- cost reduction
+- latency improvement
 ---

# Why should I use prompt caching?
@@ -324,4 +335,4 @@ for _ in range(2):
    assert isinstance(resp, Character)
    print(completion.usage)
    print(resp)
```
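The loop above issues the same request twice so the second call can hit the cache. The shape of a cache-enabled request looks roughly like this (the model name and field layout follow the prompt-caching beta and are assumptions, not code from the post):

```python
# A large, stable context block is tagged with cache_control so repeated
# requests can reuse it instead of re-processing the tokens each time.
book = "<many thousands of tokens of reference text>"

request = {
    "model": "claude-3-5-sonnet-20240620",
    "max_tokens": 1024,
    "system": [
        {
            "type": "text",
            "text": book,
            "cache_control": {"type": "ephemeral"},  # mark this block cacheable
        }
    ],
    "messages": [{"role": "user", "content": "Who is the protagonist?"}],
}

print(request["system"][0]["cache_control"]["type"])
#> ephemeral
```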
17 changes: 14 additions & 3 deletions docs/blog/posts/anthropic.md
@@ -1,8 +1,19 @@
 ---
-draft: False
-date: 2024-03-20
 authors:
-- jxnl
+- jxnl
+categories:
+- Anthropic
+comments: true
+date: 2024-03-20
+description: Enhance your projects with the new Anthropic client support, featuring
+  installation guidance and user model creation.
+draft: false
+tags:
+- Anthropic
+- API Development
+- Pydantic
+- Python
+- LLM Techniques
 ---

# Announcing Anthropic Support
20 changes: 14 additions & 6 deletions docs/blog/posts/bad-schemas-could-break-llms.md
@@ -1,11 +1,19 @@
 ---
-draft: False
+authors:
+- ivanleomk
+categories:
+- LLM Techniques
+comments: true
 date: 2024-09-26
+description: Discover how response models impact LLM performance, focusing on structured
+  outputs for optimal results in GPT-4o and Claude models.
+draft: false
 tags:
-- llms
-- structured-outputs
-authors:
-- ivanleomk
+- LLM Performance
+- Response Models
+- Structured Outputs
+- GPT-4o
+- Claude Models
 ---

# Bad Schemas could break your LLM Structured Outputs
@@ -333,4 +341,4 @@ This allows us to combine an LLM's expressiveness with the performance of a det

`instructor` makes it easy to get structured data from LLMs and is built on top of Pydantic. This makes it an indispensable tool to quickly prototype and find the right response models for your specific application.

To get started with instructor today, check out our [Getting Started](../../index.md) and [Examples](../../examples/index.md) sections that cover various LLM providers and specialised implementations.
20 changes: 14 additions & 6 deletions docs/blog/posts/best_framework.md
@@ -1,12 +1,20 @@
 ---
-draft: False
+authors:
+- jxnl
+categories:
+- LLM Techniques
+comments: true
 date: 2024-03-05
+description: Discover how the Instructor library simplifies structured LLM outputs
+  using Python type annotations for seamless data mapping.
+draft: false
 slug: zero-cost-abstractions
 tags:
-- python
-- llms
-authors:
-- jxnl
+- Instructor
+- LLM Outputs
+- Python
+- Pydantic
+- Data Mapping
 ---

# Why Instructor is the Best Library for Structured LLM Outputs
@@ -76,4 +84,4 @@ This incremental, zero-overhead adoption path makes Instructor perfect for sprin

And if you decide Instructor isn't a good fit after all, removing it is as simple as not applying the patch! The familiarity and flexibility of working directly with the OpenAI SDK is a core strength.

Instructor solves the "string hell" of unstructured LLM outputs. It allows teams to easily realize the full potential of tools like GPTs by mapping their text to type-safe, validated data structures. If you're looking to get more structured value out of LLMs, give Instructor a try!
23 changes: 14 additions & 9 deletions docs/blog/posts/caching.md
@@ -1,15 +1,20 @@
 ---
-draft: False
+authors:
+- jxnl
+categories:
+- Performance Optimization
+comments: true
 date: 2023-11-26
+description: Learn caching techniques in Python using Pydantic models with functools,
+  diskcache, and Redis for improved performance and efficiency.
+draft: false
 slug: python-caching
 tags:
-- caching
-- functools
-- redis
-- diskcache
-- python
-authors:
-- jxnl
+- Python
+- Caching
+- Pydantic
+- Performance Optimization
+- Redis
 ---

# Introduction to Caching in Python
@@ -341,4 +346,4 @@ Choosing the right caching strategy depends on your application's specific needs

If you'd like to use this code, try sending it to ChatGPT to understand it better and to add features that matter to you. For example, the cache isn't invalidated when your BaseModel changes, so you might want to encode the `Model.model_json_schema()` as part of the key.

If you like the content, check out our [GitHub](https://github.com/jxnl/instructor), give us a star, and check out the library.
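That closing suggestion, encoding `Model.model_json_schema()` into the cache key, can be sketched as follows (all helper names are illustrative, and the expensive LLM call is replaced by a stand-in):

```python
import functools
import hashlib
import json

from pydantic import BaseModel


class UserDetail(BaseModel):
    name: str
    age: int


def model_cache_key(model_cls: type[BaseModel], *args) -> str:
    # Include the model's JSON schema in the key so the cache is invalidated
    # whenever the model definition changes.
    schema = json.dumps(model_cls.model_json_schema(), sort_keys=True)
    payload = schema + json.dumps(args)
    return hashlib.sha256(payload.encode()).hexdigest()


@functools.lru_cache(maxsize=None)
def cached_extract(key: str, text: str) -> str:
    # Stand-in for an expensive LLM call returning serialized JSON.
    return UserDetail(name=text.title(), age=0).model_dump_json()


key = model_cache_key(UserDetail, "jason")
result = UserDetail.model_validate_json(cached_extract(key, "jason"))
print(result.name)
#> Jason
```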
26 changes: 15 additions & 11 deletions docs/blog/posts/chain-of-density.md
@@ -1,17 +1,21 @@
 ---
-draft: False
+authors:
+- ivanleomk
+- jxnl
+categories:
+- LLM Techniques
+comments: true
 date: 2023-11-05
+description: Learn to implement Chain of Density with GPT-3.5 for improved summarization,
+  achieving 20x latency reduction and 50x cost savings.
+draft: false
 slug: chain-of-density
 tags:
-- pydantic
-- validation
-- chain of density
-- finetuneing
-- gpt-3.5-turbo
-- distillation
-authors:
-- ivanleomk
-- jxnl
+- GPT-3.5
+- Chain of Density
+- Summarization
+- LLM Techniques
+- Fine-tuning
 ---

# Smarter Summaries w/ Finetuning GPT-3.5 and Chain of Density
@@ -543,4 +547,4 @@ Interestingly, the model finetuned with the least examples seems to outperform t

Finetuning this iterative method was 20-40x faster while improving overall performance, resulting in massive efficiency gains by finetuning and distilling capabilities into specialized models.

We've seen how `Instructor` can make your life easier, from data modeling to distillation and finetuning. If you enjoy the content or want to try out `instructor` check out the [github](https://github.com/jxnl/instructor) and don't forget to give us a star!
23 changes: 14 additions & 9 deletions docs/blog/posts/citations.md
@@ -1,15 +1,20 @@
 ---
-draft: False
+authors:
+- jxnl
+categories:
+- Pydantic
+comments: true
 date: 2023-11-18
+description: Explore how Pydantic enhances LLM citation verification, improving data
+  accuracy and reliability in responses.
+draft: false
 slug: validate-citations
 tags:
-- pydantic
-- validation
-- finetuneing
-- citations
-- hallucination
-authors:
-- jxnl
+- Pydantic
+- LLM
+- Data Accuracy
+- Citation Verification
+- Python
 ---

# Verifying LLM Citations with Pydantic
@@ -268,4 +273,4 @@

These examples demonstrate the potential of using Pydantic and OpenAI to enhance data accuracy through citation verification. While the LLM-based approach may not be efficient for runtime operations, it has exciting implications for generating a dataset of accurate responses. By leveraging this method during data generation, we can fine-tune a model that excels in citation accuracy. Similar to our last post on [finetuning a better summarizer](https://jxnl.github.io/instructor/blog/2023/11/05/chain-of-density/).

If you like the content, check out our [GitHub](https://github.com/jxnl/instructor), give us a star, and check out the library.
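The approach the post describes can be sketched with a Pydantic v2 validator that rejects quotes not found verbatim in the source text (the `Fact` model and field names here are illustrative, not necessarily the post's exact code):

```python
from pydantic import BaseModel, ValidationInfo, field_validator


class Fact(BaseModel):
    statement: str
    substring_quote: str

    @field_validator("substring_quote")
    @classmethod
    def quote_must_exist_in_context(cls, v: str, info: ValidationInfo) -> str:
        # Reject citations that do not appear verbatim in the source text,
        # which is passed in via the validation context.
        context = info.context.get("text_chunk", "") if info.context else ""
        if v not in context:
            raise ValueError(f"Citation not found in source text: {v!r}")
        return v


source = "Jason is 25 years old and lives in Toronto."
fact = Fact.model_validate(
    {"statement": "Jason's age", "substring_quote": "25 years old"},
    context={"text_chunk": source},
)
print(fact.substring_quote)
#> 25 years old
```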
19 changes: 14 additions & 5 deletions docs/blog/posts/course.md
@@ -1,11 +1,20 @@
 ---
-draft: False
+authors:
+- jxnl
+categories:
+- OpenAI
+comments: true
 date: 2024-02-14
+description: Discover a free one-hour course on Weights and Biases covering essential
+  techniques for language models.
+draft: false
 slug: weights-and-biases-course
 tags:
-- open source
-authors:
-- jxnl
+- Weights and Biases
+- AI course
+- machine learning
+- language models
+- free resources
 ---

# Free course on Weights and Biases
@@ -14,4 +23,4 @@ I just released a free course on Weights and Biases. It goes over the material from

[![](img/course.png)](https://www.wandb.courses/courses/steering-language-models)

> Click the image to access the course
21 changes: 15 additions & 6 deletions docs/blog/posts/distilation-part1.md
@@ -1,10 +1,19 @@
 ---
-draft: False
-date: 2023-10-17
-categories:
-- Finetuning
 authors:
-- jxnl
+- jxnl
+categories:
+- LLM Techniques
+comments: true
+date: 2023-10-17
+description: Explore Instructor for fine-tuning language models with Python, simplifying
+  function calls, and enhancing performance.
+draft: false
+tags:
+- Instructor
+- Fine-tuning
+- Python
+- Language Models
+- Distillation
 ---

# Enhancing Python Functions with Instructor: A Guide to Fine-Tuning and Distillation
@@ -163,4 +172,4 @@ With this, you can swap the function implementation, making it backward compatib

We've seen how `Instructor` can make your life easier, from fine-tuning to distillation. Now if you're thinking, wow, I'd love a backend service to do this continuously, you're in luck! Please check out the survey at [useinstructor.com](https://useinstructor.com) and let us know who you are.

If you enjoy the content or want to try out `instructor` please check out the [github](https://github.com/jxnl/instructor) and give us a star!
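The backward-compatible swap described above rests on logging each call's validated inputs and outputs as fine-tuning data. Here is a simplified stand-in for that pattern (this decorator is illustrative, not instructor's actual `distil` implementation):

```python
import json
from functools import wraps

from pydantic import BaseModel


class Multiply(BaseModel):
    result: int


def distil(log: list):
    # Simplified stand-in for a distillation decorator: record each call's
    # inputs and validated output as an OpenAI-style fine-tuning example.
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            output: BaseModel = fn(*args, **kwargs)
            log.append(
                {
                    "messages": [
                        {"role": "user", "content": json.dumps({"args": args})},
                        {"role": "assistant", "content": output.model_dump_json()},
                    ]
                }
            )
            return output

        return wrapper

    return decorator


examples: list = []


@distil(examples)
def multiply(a: int, b: int) -> Multiply:
    return Multiply(result=a * b)


multiply(123, 456)
print(len(examples))
#> 1
```

Once enough examples accumulate, the logged `messages` can be written out as JSONL and the Python implementation swapped for the fine-tuned model, keeping the same signature.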
0 comments on commit d7328ed
