star-whale/alpaca-lora

This branch is 7 commits ahead of tloen/alpaca-lora:main.


🦙🌲🤏 Alpaca-LoRA && Starwhale

This repo is forked from tloen/alpaca-lora; see the original README.md. We add Starwhale support so that users can manage the lifecycle of the model and dataset with Starwhale, including:

  • fine-tuning a new version of the model, locally or remotely
  • serving an API for the fine-tuned model, locally or remotely
  • evaluating the model with Starwhale datasets

Build Starwhale Datasets

python build_swds.py
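The records fed into the dataset build follow the Alpaca instruction format from the upstream repo: instruction/input/output triples, where "input" may be empty. A minimal sketch of validating such records before building a Starwhale dataset — the helper name and sample data are illustrative, not part of build_swds.py:

```python
import json

def validate_alpaca_records(records):
    """Check that every record has the three fields the Alpaca format expects."""
    required = {"instruction", "input", "output"}
    for i, rec in enumerate(records):
        missing = required - rec.keys()
        if missing:
            raise ValueError(f"record {i} is missing fields: {sorted(missing)}")
    return records

# A single Alpaca-style record; "input" is empty because the
# instruction needs no extra context.
sample = [
    {
        "instruction": "Give three tips for staying healthy.",
        "input": "",
        "output": "1. Eat a balanced diet. 2. Exercise regularly. 3. Sleep well.",
    }
]
validate_alpaca_records(sample)
print(json.dumps(sample[0], indent=2))
```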

Build Starwhale Model

python build_swmp.py

Fine-tune the model with a dataset and gain a new version

swcli model run -u llama-7b-hf/version/latest -d test/version/latest -h swmp_handlers:fine_tune
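The fine_tune handler wraps the upstream LoRA training. The core idea of LoRA is that the frozen pretrained weight W is adapted by a trainable low-rank update B @ A scaled by alpha/r, so only a small number of parameters are trained. A NumPy sketch of the forward pass — shapes and names are illustrative, not the repo's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)

d, k = 8, 8      # frozen weight shape (real layers are much larger)
r, alpha = 2, 4  # LoRA rank and scaling hyperparameters

W = rng.normal(size=(d, k))          # frozen pretrained weight (not trained)
A = rng.normal(size=(r, k)) * 0.01   # trainable low-rank factor
B = np.zeros((d, r))                 # zero-initialized so training starts at W

def lora_forward(x):
    # Adapted layer: y = x @ (W + (alpha / r) * B @ A).T
    # Only A and B receive gradients during fine-tuning.
    return x @ (W + (alpha / r) * B @ A).T

x = rng.normal(size=(1, k))
# With B = 0, the adapted layer is exactly the frozen layer.
assert np.allclose(lora_forward(x), x @ W.T)
```

Zero-initializing B guarantees the model starts out identical to the pretrained one, which is why LoRA fine-tuning is stable from the first step.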

Serve an API for the finetuned version of model

swcli model serve -u llama-7b-hf/version/latest
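Requests to the served model are wrapped in an Alpaca-style prompt before generation. A simplified sketch of that prompt construction — the exact wording lives in the upstream repo's template files, so treat this as an approximation:

```python
def generate_prompt(instruction, context=""):
    """Build an Alpaca-style prompt; wording approximates the upstream template."""
    if context:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{context}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

print(generate_prompt("Translate the following to French.", "Hello, world."))
```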

About

Instruct-tune LLaMA on consumer hardware
