
Remote Ollama server in the future? #270

Closed
AlvaroNovillo opened this issue Jan 24, 2025 · 5 comments

Comments


AlvaroNovillo commented Jan 24, 2025

Hi! I'm trying to use your amazing package to do a custom Ollama build on a remote server, but I saw that it's not configured to run on remote servers by default. Are you planning to add this enhancement in the near future?

Thanks in advance!


hadley commented Jan 24, 2025

If you explain what that means, I can certainly consider it.

AlvaroNovillo commented

I mean using the package inside a Shiny app running on one server, with Ollama installed on a different server. I don't really know if that's possible. The chat_ollama() function uses a local Ollama instance by default; could I just pass a base_url pointing to a different server?


hadley commented Jan 27, 2025

Yes.

hadley closed this as completed Jan 27, 2025

AlvaroNovillo commented Jan 27, 2025

Hi again!

Sorry for the confusion! I was trying to pass the server address through the turn_list parameter, which is why it didn't work. Passing it as base_url works fine! Thanks

# Load the ellmer package
library(ellmer)

# Define the address and port of the Ollama server
ollama_server <- "http://your_server:11434"  # Replace with the correct host/port

# Set the model to use in Ollama (for example, llama3.2)
ollama_model <- "llama3.2"

# Define the prompt for the query
prompt_text <- "What is the capital of France?"

# Create the chat object with chat_ollama(), pointing at the remote server
chat <- chat_ollama(
  model = ollama_model,         # Name of the model to use
  base_url = ollama_server,     # Address of the Ollama server
  echo = "all"
)

# Show the model's response
chat$chat(prompt_text)

cboettig commented

@AlvaroNovillo just a side-note here, but if you are hosting models on your own server and want the server to be able to respond to multiple users simultaneously, you might consider using https://docs.vllm.ai/en/latest/ with ellmer's chat_vllm() support.
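
For anyone who lands here later, a rough sketch of that route with ellmer's chat_vllm() might look like the code below. The base_url and model are placeholders for a hypothetical vLLM deployment, and the exact arguments (including whether an api_key is needed and whether the URL takes a /v1 suffix) can vary by ellmer version, so check ?chat_vllm before relying on this.

# Load the ellmer package
library(ellmer)

# Sketch only: connect to a vLLM server instead of a local Ollama instance.
# Replace base_url and model with the address of your vLLM deployment and
# the model it was launched with.
chat <- chat_vllm(
  base_url = "http://your_server:8000",               # vLLM server address (placeholder)
  model = "meta-llama/Llama-3.2-3B-Instruct",         # model served by vLLM (placeholder)
  api_key = Sys.getenv("VLLM_API_KEY", "not-needed")  # some deployments don't check the key
)

# Same example question as above
chat$chat("What is the capital of France?")

The appeal of vLLM in this setup is concurrency: it batches requests across users, which tends to hold up better than a single Ollama instance when several Shiny sessions query the server at the same time.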
