<img src="/docs/assets/images/logotype.svg" alt="RubyLLM" height="120" width="250">

**A delightful Ruby way to work with AI.** RubyLLM provides **one** beautiful, Ruby-like interface to interact with modern AI models. Chat, generate images, create embeddings, and use tools – all with clean, expressive code that feels like Ruby, not like patching together multiple services.

<div class="provider-icons">
  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/anthropic-text.svg" alt="Anthropic" class="logo-small">

  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/bedrock-color.svg" alt="Bedrock" class="logo-medium">
  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/bedrock-text.svg" alt="Bedrock" class="logo-small">

  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/deepseek-color.svg" alt="DeepSeek" class="logo-medium">
  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/deepseek-text.svg" alt="DeepSeek" class="logo-small">

  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/gemini-brand-color.svg" alt="Gemini" class="logo-large">

  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/ollama.svg" alt="Ollama" class="logo-medium">
  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/ollama-text.svg" alt="Ollama" class="logo-medium">

  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/openai.svg" alt="OpenAI" class="logo-medium">
  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/openai-text.svg" alt="OpenAI" class="logo-medium">

  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/openrouter.svg" alt="OpenRouter" class="logo-medium">
  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/openrouter-text.svg" alt="OpenRouter" class="logo-small">
</div>

<div class="badge-container">
  <a href="https://badge.fury.io/rb/ruby_llm"><img src="https://badge.fury.io/rb/ruby_llm.svg" alt="Gem Version" /></a>
  <a href="https://github.com/testdouble/standard"><img src="https://img.shields.io/badge/code_style-standard-brightgreen.svg" alt="Ruby Style Guide" /></a>
  <a href="https://rubygems.org/gems/ruby_llm"><img alt="Gem Downloads" src="https://img.shields.io/gem/dt/ruby_llm"></a>
  <a href="https://codecov.io/gh/crmne/ruby_llm"><img src="https://codecov.io/gh/crmne/ruby_llm/branch/main/graph/badge.svg" alt="codecov" /></a>
</div>

🤺 Battle tested at [💬 Chat with Work](https://chatwithwork.com)

## The problem with AI libraries

Every AI provider comes with its own client library, its own response format, its own conventions for streaming, and its own way of handling errors. Want to use multiple providers? Prepare to juggle incompatible APIs and bloated dependencies.

RubyLLM fixes all that. One beautiful API for everything. One consistent format. Minimal dependencies — just Faraday and Zeitwerk. Because working with AI should be a joy, not a chore.

## What makes it great

```ruby
chat.with_tool(Weather).ask "What's the weather in Berlin? (52.5200, 13.4050)"
```
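The `Weather` tool used above is a `RubyLLM::Tool` subclass. As a standalone illustration of the declare-params-then-execute shape such a tool takes, here is a sketch in plain Ruby; the `StubTool` base class below is hypothetical, written only so the example runs without the gem (the real gem supplies the actual base class and DSL):

```ruby
# Hypothetical stand-in for RubyLLM::Tool, so this example runs standalone.
# It records a description and parameter metadata at class-definition time.
class StubTool
  def self.description(text = nil)
    @description = text if text
    @description
  end

  def self.param(name, type: :string, desc: nil, required: true)
    (@params ||= {})[name] = { type: type, desc: desc, required: required }
  end

  def self.params
    @params || {}
  end
end

# A Weather-like tool in the same shape as the README's example.
class Weather < StubTool
  description "Looks up current weather for given coordinates"
  param :latitude, desc: "Latitude, e.g. 52.5200"
  param :longitude, desc: "Longitude, e.g. 13.4050"

  def execute(latitude:, longitude:)
    # Real code would call a weather API; this returns a canned answer.
    "Sunny at (#{latitude}, #{longitude})"
  end
end
```

The declared metadata is what gets sent to the model as a function schema; when the model decides to call the tool, `execute` runs with the model-supplied arguments.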

## Core Capabilities

* 💬 **Unified Chat:** Converse with models from OpenAI, Anthropic, Gemini, Bedrock, OpenRouter, DeepSeek, Ollama, or any OpenAI-compatible API using `RubyLLM.chat`.
* 👁️ **Vision:** Analyze images within chats.
* 🔊 **Audio:** Transcribe and understand audio content.
* 📄 **PDF Analysis:** Extract information from and summarize PDF documents.
* 🖼️ **Image Generation:** Create images with `RubyLLM.paint`.
* 📊 **Embeddings:** Generate text embeddings for vector search with `RubyLLM.embed`.
* 🔧 **Tools (Function Calling):** Let AI models call your Ruby code using `RubyLLM::Tool`.
* 🚂 **Rails Integration:** Easily persist chats, messages, and tool calls using `acts_as_chat` and `acts_as_message`.
* 🌊 **Streaming:** Process responses in real time with idiomatic Ruby blocks.

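On the embeddings point: `RubyLLM.embed` turns text into a vector of floats, and semantic search then ranks documents by similarity between vectors. The sketch below uses toy three-dimensional vectors standing in for real embeddings (which have hundreds or thousands of dimensions) and plain Ruby, so it runs without the gem:

```ruby
# Cosine similarity between two equal-length float vectors.
def cosine_similarity(a, b)
  dot = a.zip(b).sum { |x, y| x * y }
  dot / (Math.sqrt(a.sum { |x| x * x }) * Math.sqrt(b.sum { |x| x * x }))
end

# Toy vectors standing in for embeddings of each document.
documents = {
  "Rails routing guide"  => [0.9, 0.1, 0.0],
  "Quantum field theory" => [0.0, 0.2, 0.9]
}

# In real use this vector would come from embedding the query text.
query_vector = [0.8, 0.2, 0.1]

# Rank documents by similarity to the query and take the best match.
best_match, _vector = documents.max_by { |_, vec| cosine_similarity(query_vector, vec) }
```

Here `best_match` is `"Rails routing guide"`, the document whose toy vector points in nearly the same direction as the query's.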
## Installation

Add to your Gemfile:

```ruby
gem 'ruby_llm'
```

Then `bundle install`.

Configure your API keys (using environment variables is recommended):

```ruby
# config/initializers/ruby_llm.rb or similar
RubyLLM.configure do |config|
  config.openai_api_key = ENV.fetch('OPENAI_API_KEY', nil)
  # Add keys ONLY for providers you intend to use
  # config.anthropic_api_key = ENV.fetch('ANTHROPIC_API_KEY', nil)
  # ... see the Configuration guide for all options ...
end
```

See the [Installation Guide](https://rubyllm.com/installation) for full details.

## Rails Integration

Add persistence to your chat models effortlessly:

```ruby
# app/models/chat.rb
class Chat < ApplicationRecord
  acts_as_chat # Automatically saves messages & tool calls
  # ... your other model logic ...
end

# app/models/message.rb
class Message < ApplicationRecord
  acts_as_message
  # ...
end

# app/models/tool_call.rb (if using tools)
class ToolCall < ApplicationRecord
  acts_as_tool_call
  # ...
end

# Now interacting with a Chat record persists the conversation:
chat_record = Chat.create!(model_id: "gpt-4.1-nano")
chat_record.ask("Explain Active Record callbacks.") # User & assistant messages saved
```
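The `acts_as_*` helpers expect backing database tables. The migration below is only an illustrative sketch: the column names and types are assumptions for this example, not the gem's canonical schema, so check the Rails Integration guide for the exact tables it expects.

```ruby
# Illustrative sketch only — column names are assumptions, not the gem's
# documented schema. Consult the Rails Integration guide before using.
class CreateChatTables < ActiveRecord::Migration[7.1]
  def change
    create_table :chats do |t|
      t.string :model_id
      t.timestamps
    end

    create_table :messages do |t|
      t.references :chat
      t.string :role
      t.text :content
      t.timestamps
    end

    create_table :tool_calls do |t|
      t.references :message
      t.string :name
      t.json :arguments
      t.timestamps
    end
  end
end
```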
Check the [Rails Integration Guide](https://rubyllm.com/guides/rails) for more.

## Learn More

Dive deeper with the official documentation:

- [Installation](https://rubyllm.com/installation)
- [Configuration](https://rubyllm.com/configuration)
- **Guides:**
  - [Getting Started](https://rubyllm.com/guides/getting-started)
  - [Chatting with AI Models](https://rubyllm.com/guides/chat)
  - [Using Tools](https://rubyllm.com/guides/tools)
  - [Streaming Responses](https://rubyllm.com/guides/streaming)
  - [Rails Integration](https://rubyllm.com/guides/rails)
  - [Image Generation](https://rubyllm.com/guides/image-generation)
  - [Embeddings](https://rubyllm.com/guides/embeddings)
  - [Working with Models](https://rubyllm.com/guides/models)
  - [Error Handling](https://rubyllm.com/guides/error-handling)
  - [Available Models](https://rubyllm.com/guides/available-models)

## Contributing

We welcome contributions! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for details on setup, testing, and contribution guidelines.

## License

Released under the MIT License.