Nebius package for Ollama

Description

Ollama is an open-source tool that runs large language models (LLMs) on your own server. This makes it particularly appealing to AI developers, researchers, and businesses concerned with data control and privacy. By running models on your server, you retain full ownership of your data and avoid the potential security risks of storing it in the cloud. Self-hosted AI tools like Ollama also reduce latency and dependence on external servers, which makes them faster and more reliable.

Short description

Runs LLMs on your server, ensuring data privacy and control. Ideal for AI developers and businesses.

Use cases

  • Financial services: Run LLMs to analyze financial data, detect fraud, and generate personalized financial advice while ensuring data privacy.
  • Healthcare: Use LLMs for patient data analysis, diagnosis assistance, and predictive healthcare management without compromising sensitive information.
  • Insurance: Implement LLMs for claims processing, risk assessment, and customer service automation, maintaining control over customer data.
  • Manufacturing: Optimize production processes, predictive maintenance, and supply chain management using LLMs on-premises.
  • Marketing: Enhance customer segmentation, content personalization, and campaign optimization with LLMs, keeping marketing data secure.
  • Retail: Improve inventory management, pricing strategies, and customer experience by running LLMs locally.
  • Telecom: Utilize LLMs for network optimization, customer support automation, and predictive maintenance, ensuring data control and security.

Links

Legal

By using the application, you agree to the terms and conditions of its components: the helm-chart is distributed under the MIT license.

Tutorial

To install the product:

  1. Click Install.
  2. Enter the name of the model to run; model names are listed in the Ollama library.
  3. Wait for the application to change its status to Deployed.
  4. Install the NVIDIA® GPU Operator on the cluster (see the sketch after this list).
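
A minimal sketch of installing the GPU Operator with Helm, following NVIDIA's public installation instructions; the release name and namespace here are illustrative choices, not values required by this package:

    # Add the NVIDIA Helm repository and refresh the local index
    helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
    helm repo update

    # Install the GPU Operator into its own namespace and wait until it is ready
    helm install gpu-operator nvidia/gpu-operator \
        --namespace gpu-operator --create-namespace --wait

Once the install completes, kubectl get pods -n gpu-operator should show the operator pods in the Running state.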

Usage

  1. Install kubectl and configure it to work with the created cluster.

  2. Check that the Ollama pods are running:

    kubectl get pods -n <namespace>

  3. You can use JupyterHub to interact with Ollama.

  4. Interact with Ollama through its HTTP API; a minimal example follows this list.
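
A minimal sketch of calling Ollama's HTTP API from your machine. The service name ollama and the namespace placeholder are assumptions about this chart's defaults; 11434 is Ollama's standard API port, and llama3 stands in for whichever model you chose during installation:

    # Forward Ollama's API port from the cluster service to localhost
    kubectl port-forward svc/ollama 11434:11434 -n <namespace>

    # In another terminal, send a prompt to the generate endpoint
    curl http://localhost:11434/api/generate -d '{
      "model": "llama3",
      "prompt": "Why is the sky blue?",
      "stream": false
    }'

Alternatively, you can exec into the Ollama pod and use the CLI directly, for example kubectl exec -it <ollama-pod> -n <namespace> -- ollama run llama3.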