feat: fix final errors
sgomez committed Oct 13, 2024
1 parent 021fa04 commit 82b4d58
Showing 7 changed files with 156 additions and 68 deletions.
6 changes: 4 additions & 2 deletions docs/agents/using-tools.md
@@ -72,7 +72,7 @@ import { registry } from '../ai/setup-registry'
import { environment } from '../environment.mjs'
import { conversationRepository } from '../repositories/conversation'
import { buildDisplaySelectionButtons } from '../tools/display-selection-buttons'
import { getFreeAppointments } from '../tools/get-free-appointments'
import { buildGetFreeAppointments } from '../tools/get-free-appointments'

export const onMessage = new Composer()

@@ -109,7 +109,7 @@ onMessage.on('message:text', async (context) => {
system: PROMPT,
tools: {
displaySelectionButtons: buildDisplaySelectionButtons(context),
getFreeAppointments,
getFreeAppointments: buildGetFreeAppointments(),
},
})

@@ -124,3 +124,5 @@ onMessage.on('message:text', async (context) => {
await context.reply(text)
})
```

Now ask for an appointment and you will see how both tools are called in sequence.
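Under the hood, chained tool calls are resolved in a loop: the model's tool call is executed, its result is appended to the conversation, and the model is queried again until it produces plain text. A simplified, self-contained sketch of that loop (NOT the SDK's actual implementation; `fakeModel` and all names here are illustrative):

```typescript
// Simplified sketch of a multi-step tool loop (not the SDK's real code).
// A fake "model" requests a tool on the first pass and answers once a
// tool result is present in the conversation.
type Message = { role: 'user' | 'assistant' | 'tool'; content: string }
type Tool = () => string

function fakeModel(messages: Message[]): { toolCall?: string; text?: string } {
  const hasToolResult = messages.some((m) => m.role === 'tool')
  if (hasToolResult) {
    const last = messages[messages.length - 1]
    return { text: `Answer based on: ${last.content}` }
  }
  return { toolCall: 'getFreeAppointments' }
}

export function runToolLoop(
  prompt: string,
  tools: Record<string, Tool>,
  maxSteps = 3,
): string {
  const messages: Message[] = [{ role: 'user', content: prompt }]
  for (let step = 0; step < maxSteps; step++) {
    const result = fakeModel(messages)
    if (result.text !== undefined) return result.text
    if (result.toolCall !== undefined && result.toolCall in tools) {
      // Feed the tool result back so the next pass can use it.
      messages.push({ role: 'tool', content: tools[result.toolCall]() })
    }
  }
  return 'Max steps reached'
}
```

The real SDK handles structured tool-call messages and step limits for you; this sketch only shows why a single user message can trigger several model passes.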
8 changes: 3 additions & 5 deletions docs/bot/running.md
@@ -24,11 +24,8 @@ export async function start(context: CommandContext<Context>): Promise<void> {
This is where the echo functionality comes into play. The bot listens for incoming text messages from users and, upon receiving a message, responds by sending back the same message. This demonstrates the basic capability of the bot to handle and reply to user input.

```ts title="src/lib/handlers/on-message.ts"
import { generateText } from 'ai'
import { Composer } from 'grammy'

import { environment } from '../environment.mjs'

export const onMessage = new Composer()

onMessage.on('message:text', async (context) => {
@@ -49,14 +46,12 @@ Additionally, we ensure the bot shuts down properly when the process receives te
```ts title="src/main.ts"
import process from 'node:process'

import { environment } from './lib/environment.mjs'
import { Bot } from 'grammy'

import { start } from './lib/commands/start'
import { environment } from './lib/environment.mjs'
import { onMessage } from './lib/handlers/on-message'


async function main(): Promise<void> {
const bot = new Bot(environment.BOT_TOKEN)

@@ -97,6 +92,9 @@ This final section provides a step-by-step guide on how to set up and run the bo
pnpm install
```

!!! info
This step was done automatically if you are using our devcontainer.

4. **Run the bot**:
Start the bot in development mode:
```bash
3 changes: 3 additions & 0 deletions docs/chatbot/basic.md
@@ -41,6 +41,9 @@ At this stage, the bot can respond intelligently using AI but lacks conversation

## Full code

!!! example

Update the following file to add the integration with the Vercel AI SDK.

```ts title="src/lib/handlers/on-message.ts"
import { generateText } from 'ai'
17 changes: 14 additions & 3 deletions docs/chatbot/memory.md
@@ -11,14 +11,23 @@ We will create a table named `messages` to store the conversations.
To create this table using **Drizzle ORM**, we can follow a similar structure to other tables created in the project. Below is a basic template:

```ts title="src/lib/db/schema/messages.ts"
import { bigint, pgTable, serial, text, varchar, timestamp } from 'drizzle-orm/pg-core'
import {
bigint,
pgTable,
serial,
text,
timestamp,
varchar,
} from 'drizzle-orm/pg-core'

export const messages = pgTable('messages', {
chatId: bigint({ mode: 'number' }).notNull(),
content: text('content').notNull(),
messageId: serial('message_id').primaryKey(),
occurredOn: timestamp('occurred_on').defaultNow().notNull(),
role: varchar('role', { length: 50 }).$type<'user' | 'assistant' | 'system' | 'tool'>().notNull(),
role: varchar('role', { length: 50 })
.$type<'user' | 'assistant' | 'system' | 'tool'>()
.notNull(),
})
```

@@ -99,6 +108,8 @@ export class ConversationRepository {
await database.delete(messages).where(eq(messages.chatId, chatId))
}
}

export const conversationRepository = new ConversationRepository()
```
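For unit tests it can be handy to swap the Drizzle-backed repository for an in-memory stand-in with the same shape. A minimal sketch, with method names (`save`, `getConversation`, `clear`) inferred from the snippets on this page and not guaranteed to match the real class exactly:

```typescript
// In-memory stand-in for the Drizzle-backed ConversationRepository.
// Useful in tests where no Postgres instance is available.
type StoredMessage = {
  role: 'user' | 'assistant' | 'system' | 'tool'
  content: string
}

export class InMemoryConversationRepository {
  private conversations = new Map<number, StoredMessage[]>()

  async save(
    chatId: number,
    role: StoredMessage['role'],
    content: string,
  ): Promise<void> {
    const history = this.conversations.get(chatId) ?? []
    history.push({ role, content })
    this.conversations.set(chatId, history)
  }

  async getConversation(chatId: number): Promise<StoredMessage[]> {
    return this.conversations.get(chatId) ?? []
  }

  async clear(chatId: number): Promise<void> {
    this.conversations.delete(chatId)
  }
}
```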


@@ -117,7 +128,7 @@ import { conversationRepository } from '../repositories/conversation'
export async function start(context: CommandContext<Context>): Promise<void> {
const chatId = context.chat.id
// Clear the conversation
await conversationRepository.clearConversation(chatId)
await conversationRepository.clear(chatId)

const content = 'Welcome, how can I help you?'
// Store the assistant's welcome message
78 changes: 43 additions & 35 deletions docs/chatbot/register.md
@@ -2,13 +2,47 @@

In this section, you will learn how to create registries for AI models with the Vercel AI SDK. This setup allows you to register multiple AI providers and language models to be used in your project.


## Full code

!!! info

This step is usually not necessary, if you are only going to use one AI provider you can use it directly instead of creating a record.
    This step is usually not necessary: if you are only going to use one AI provider, you can use it directly instead of creating a registry. For convenience, this code is already included in the template, but we will explain it here.


```ts title="src/lib/ai/setup-registry.ts"
import { openai as originalOpenAI } from '@ai-sdk/openai'
import {
experimental_createProviderRegistry as createProviderRegistry,
experimental_customProvider as customProvider,
} from 'ai'
import { ollama as originalOllama } from 'ollama-ai-provider'

const ollama = customProvider({
fallbackProvider: originalOllama,
languageModels: {
'qwen-2_5': originalOllama('qwen2.5'),
},
})

export const openai = customProvider({
fallbackProvider: originalOpenAI,
languageModels: {
'gpt-4o-mini': originalOpenAI('gpt-4o-mini', {
structuredOutputs: true,
}),
},
})

export const registry = createProviderRegistry({
ollama,
openai,
})
```

## Setting up the Registry

The following steps will guide you on how to register AI providers such as OpenAI and Ollama using the Vercel SDK.
The following steps will guide you on how to register AI providers such as OpenAI and Ollama using the Vercel AI SDK. Open the file `src/lib/ai/setup-registry.ts` to see how it works.

### Step 1: Import Required Modules

@@ -45,6 +79,13 @@ export const openai = customProvider({
})
```

!!! info

You can add more models and providers if you wish. Just update the `MODEL*` environment variables in the `.env` file to activate them. If you need an API token,
you will also need to update the `src/lib/environment.mjs` file.

The model name should follow the `PROVIDER:MODEL_NAME` pattern, just as _Ollama_ and _OpenAI_ do.
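The `PROVIDER:MODEL_NAME` convention can be validated with a small helper like this (an illustrative sketch, not part of the template):

```typescript
// Parse a registry model id of the form PROVIDER:MODEL_NAME,
// e.g. 'openai:gpt-4o-mini' or 'ollama:qwen-2_5'.
export function parseModelId(id: string): { provider: string; model: string } {
  const separator = id.indexOf(':')
  if (separator <= 0 || separator === id.length - 1) {
    throw new Error(`Invalid model id "${id}", expected PROVIDER:MODEL_NAME`)
  }
  return {
    provider: id.slice(0, separator),
    model: id.slice(separator + 1),
  }
}
```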

### Step 3: Create the Registry

Once the providers are defined, create the registry that will include these custom providers:
@@ -69,36 +110,3 @@ Now, to use one or the other, edit the .env file and configure which provider an
!!! warning

Free models may not work as well as the proprietary ones in the examples that use tools, especially small models, since locally we usually cannot run models with more than 12B parameters. New and better open models may appear after this tutorial is published, so try other options to see if they work better. If not, you can always fall back to a commercial model.


## Full code

```ts title="src/lib/ai/setup-registry.ts"
import { openai as originalOpenAI } from '@ai-sdk/openai'
import {
experimental_createProviderRegistry as createProviderRegistry,
experimental_customProvider as customProvider,
} from 'ai'
import { ollama as originalOllama } from 'ollama-ai-provider'

const ollama = customProvider({
fallbackProvider: originalOllama,
languageModels: {
'qwen-2_5': originalOllama('qwen2.5'),
},
})

export const openai = customProvider({
fallbackProvider: originalOpenAI,
languageModels: {
'gpt-4o-mini': originalOpenAI('gpt-4o-mini', {
structuredOutputs: true,
}),
},
})

export const registry = createProviderRegistry({
ollama,
openai,
})
```
59 changes: 40 additions & 19 deletions docs/rag/rag.md
@@ -44,39 +44,43 @@ export async function learn(context: CommandContext<Context>): Promise<void> {
## Creating the `/ask` command

```ts title="src/lib/commands/ask.ts"
bot.command("ask", async (context) => {
const userQuery = context.match;
import { generateText } from 'ai'
import type { CommandContext, Context } from 'grammy'

import { findRelevantContent } from '../ai/embeddings'
import { registry } from '../ai/setup-registry'
import { environment } from '../environment.mjs'

export async function ask(context: CommandContext<Context>): Promise<void> {
const userQuery = context.match

// Find relevant content using embeddings
const relevantContent = await findRelevantContent(userQuery);
const relevantContent = await findRelevantContent(userQuery)

if (relevantContent.length === 0) {
await context.reply("Sorry, I couldn't find any relevant information.");
return;
await context.reply("Sorry, I couldn't find any relevant information.")
return
}

// Generate the response with the RAG-enhanced prompt
const { text } = await generateText({
messages: [{ content: userQuery, role: "user" }],
messages: [{ content: userQuery, role: 'user' }],
model: registry.languageModel(environment.MODEL),
// Combine the relevant content into the system prompt
system: `
You are a chatbot designed to help users book hair salon appointments.
Here is some additional information relevant to your query:
${relevantContent.map((content) => content.name).join("\n")}
Answer the user's question based on this information.
If a user asks for information outside of these details,
please respond with: "I'm sorry, but I cannot assist with that.
For more information, please call us at (555) 456-7890 or email
us at [email protected]."
You are a chatbot designed to help users book hair salon appointments.
Here is some additional information relevant to your query:
${relevantContent.map((content) => content.name).join('\n')}
Answer the user's question based on this information.
If a user asks for information outside of these details, please respond with: "I'm sorry, but I cannot assist with that. For more information, please call us at (555) 456-7890 or email us at [email protected]."
`,
});
})

// Reply with the generated text
await context.reply(text);
});
await context.reply(text)
}
```


@@ -129,3 +133,20 @@ main().catch((error) => console.error(error))
The `generateText` method now includes the additional content in the system prompt. This augments the bot’s ability to respond in a contextually aware manner by incorporating specific information from the retrieved data.

With RAG, our bot can learn new information dynamically and retrieve relevant content to enhance its responses. By leveraging embeddings and prompt injection, the bot becomes more capable of answering user questions accurately. This setup demonstrates how RAG can be applied to improve interactions, making the bot more flexible and intelligent while still being grounded in specific data sources.
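The retrieval step behind `findRelevantContent` boils down to ranking stored embeddings by similarity to the query embedding and keeping the best matches. A minimal sketch using cosine similarity on hand-made vectors (the real implementation queries the database; all names here are assumptions):

```typescript
// Rank stored items by cosine similarity to a query embedding.
// Real embeddings have hundreds of dimensions; tiny vectors used here
// only to make the idea concrete.
type Embedded = { name: string; embedding: number[] }

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0
  let normA = 0
  let normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}

export function rankByRelevance(
  query: number[],
  items: Embedded[],
  limit = 3,
): Embedded[] {
  return [...items]
    .sort(
      (x, y) =>
        cosineSimilarity(query, y.embedding) -
        cosineSimilarity(query, x.embedding),
    )
    .slice(0, limit)
}
```

The top-ranked names are what gets interpolated into the system prompt above.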


!!! exercise

Add the information we had in the prompt:

1. `/learn Our salon offers a haircut service for $25.`
2. `/learn Our salon provides hair color services for $50.`
3. `/learn We also offer a manicure service for $15.`
4. `/learn Our opening hours are Monday to Saturday from 9 AM to 7 PM.`
5. `/learn Our salon is closed on Sundays.`

And then ask some questions:

1. `/ask What are your opening hours?`
2. `/ask How much is a haircut?`
3. `/ask Say my name`
53 changes: 49 additions & 4 deletions docs/rag/tools.md
@@ -12,7 +12,14 @@ This step involves defining the database schema for storing appointments, includ


```ts title="src/lib/db/schema/appointments.ts"
import { bigint, date, pgTable, serial, time, uniqueIndex } from "drizzle-orm/pg-core";
import {
bigint,
date,
pgTable,
serial,
time,
uniqueIndex,
} from 'drizzle-orm/pg-core'

export const appointments = pgTable(
'appointments',
@@ -42,7 +49,7 @@ The repository class contains methods for interacting with the database. It hand

This logic ensures that the chatbot can always return relevant appointment data, even if none have been pre-created for that day. The dynamic nature of this repository is key to making the system respond to real-world conditions.
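One way to picture the free-slot computation, stripped of the database details, is a set difference between the day's default time slots and the booked ones. A sketch under assumed slot times (the real repository works against the appointments table):

```typescript
// Compute the free slots for a day: start from the default slots the
// repository would create and remove those already booked.
export function computeFreeSlots(
  allSlots: string[],
  bookedSlots: string[],
): string[] {
  const booked = new Set(bookedSlots)
  return allSlots.filter((slot) => !booked.has(slot))
}

// Assumed opening-hours slots, for illustration only.
export const defaultSlots = ['09:00', '10:00', '11:00', '12:00', '16:00', '17:00']
```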

```ts
```ts title="src/lib/repositories/appointments.ts"
import { eq } from 'drizzle-orm/expressions'

import { db as database } from '../db/index'
@@ -100,6 +107,44 @@ export const appointmentsRepository = new AppointmentRepository()

Here, we define a tool (`getFreeAppointments`) that fetches free appointments for the next day using the repository. The tool returns a markdown list of available time slots, which can be directly integrated into the chatbot’s responses. This tool encapsulates the repository logic, ensuring that the chatbot can retrieve dynamic appointment data without direct interaction with the database.

```ts title="src/lib/tools/get-free-appointments.ts"
import { type CoreTool, tool } from 'ai'
import { format } from 'date-fns'
import { z } from 'zod'

import { appointmentsRepository } from '../repositories/appointments'
import { tomorrow } from '../utils'

export const buildGetFreeAppointments = (): CoreTool =>
tool({
description:
'Use this tool to search for available appointment times for tomorrow. Returns the response',
execute: async () => {
console.log(`Called getFreeAppointments tool`)

const freeAppointments =
await appointmentsRepository.getFreeAppointmentsForDay(tomorrow())

if (freeAppointments.length === 0) {
return `Sorry, there are no available appointments for tomorrow.`
}

const availableSlots = freeAppointments
.map(
(app) =>
`- ${format(new Date(`1970-01-01T${app.timeSlot}`), 'HH:mm')}`,
)
.join('\n')

return `Available appointments are:\n${availableSlots}.`
},
parameters: z.object({}),
})
```
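The `new Date(` + `` `1970-01-01T${app.timeSlot}` `` + `)` trick in the tool anchors a bare Postgres `time` value to an arbitrary date so it becomes parseable. Without date-fns, the same formatting could be done like this (a sketch):

```typescript
// Format a bare time-of-day string (e.g. '09:30:00' from a Postgres
// `time` column) as HH:mm, mirroring what the tool does with date-fns.
export function formatTimeSlot(timeSlot: string): string {
  // Anchoring to an arbitrary date makes the string parseable as a Date.
  const date = new Date(`1970-01-01T${timeSlot}`)
  const hours = String(date.getHours()).padStart(2, '0')
  const minutes = String(date.getMinutes()).padStart(2, '0')
  return `${hours}:${minutes}`
}
```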

### Step 4: Adding the tool

Finally, we incorporate the tool into our bot's context.

```ts title="src/lib/handlers/on-message.ts"
import { generateText } from 'ai'
@@ -108,7 +153,7 @@ import { Composer } from 'grammy'
import { registry } from '../ai/setup-registry'
import { environment } from '../environment.mjs'
import { conversationRepository } from '../repositories/conversation'
import { getFreeAppointments } from '../tools/get-free-appointments'
import { buildGetFreeAppointments } from '../tools/get-free-appointments'

export const onMessage = new Composer()

@@ -138,7 +183,7 @@ onMessage.on('message:text', async (context) => {
model: registry.languageModel(environment.MODEL),
system: PROMPT,
tools: {
getFreeAppointments,
getFreeAppointments: buildGetFreeAppointments(),
},
})

