LLaMA-Customer-Support-Assistant

A LLaMA 3.2 (3B Instruct) model fine-tuned for customer support using Unsloth optimization, trained on Bitext's 27K-example customer service dataset, and optimized to run on M1 Macs.

Features

  • 2x faster training with Unsloth optimization
  • LoRA fine-tuning (see the setup sketch after this list)
  • Memory-efficient for M1 Macs
  • Automated customer support responses
  • Order cancellation and status handling
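
A minimal sketch of the Unsloth setup behind these features: loading the base model and attaching LoRA adapters. The LoRA rank, alpha, and target-module list below are illustrative assumptions, not values read from the training notebook:

from unsloth import FastLanguageModel

# Load the base model; max_seq_length matches the training configuration below
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-3B-Instruct",
    max_seq_length=256,
)

# Attach LoRA adapters (rank/alpha/target modules are assumed typical values)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)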

Dataset

The training data is Bitext's Customer Support LLM Chatbot Training Dataset: roughly 27K instruction/response pairs covering common support intents such as order cancellation and order status, included here as a CSV file under dataset/ (see the Citation section for the source).
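
A sketch of loading the CSV with the datasets library; the file name customer_support.csv and the instruction/response column names are assumptions, so adjust them to the actual schema:

from datasets import load_dataset

# Load the CSV from the dataset/ directory (file name is an assumption)
dataset = load_dataset("csv", data_files="dataset/customer_support.csv", split="train")

# Format each row as an instruction/response prompt string for fine-tuning
def format_example(row):
    return {
        "text": f"### Instruction:\n{row['instruction']}\n\n### Response:\n{row['response']}"
    }

dataset = dataset.map(format_example)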

Project Structure

LLaMA-Customer-Support-Assistant/
├── fine_tuning.ipynb          # Training notebook
├── test_model.ipynb           # Testing notebook
├── customer_support_model/    # Model outputs
├── dataset/                   # CSV dataset
└── README.md

Installation

# Create conda environment
conda create -n llm_env python=3.10
conda activate llm_env

# Install PyTorch for M1
conda install pytorch torchvision torchaudio -c pytorch

# Install dependencies
pip install transformers datasets accelerate peft
pip install unsloth
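
After installing, you can check that PyTorch can use the M1 GPU through the MPS backend:

import torch

# Both should print True on a correctly set-up Apple Silicon machine
print(torch.backends.mps.is_available())  # MPS device is usable
print(torch.backends.mps.is_built())      # this PyTorch build includes MPS support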

Usage

  1. Start Jupyter:
jupyter notebook
  2. Execute the notebooks:
  • Run fine_tuning.ipynb for training
  • Run test_model.ipynb for testing (see the inference sketch below)
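
For reference, a minimal sketch of what test_model.ipynb does: load the fine-tuned weights from customer_support_model/ and generate a reply. The exact prompt format is an assumption:

from unsloth import FastLanguageModel

# Load the fine-tuned model saved by fine_tuning.ipynb
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="customer_support_model",
    max_seq_length=256,
)
FastLanguageModel.for_inference(model)  # switch Unsloth to inference mode

# Generate a response to a sample support query
inputs = tokenizer("How do I cancel my order?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))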

Model Details

  • Base Model: unsloth/Llama-3.2-3B-Instruct
  • Optimization: Unsloth + MPS backend
  • Fine-tuning: LoRA
  • Training Parameters:
    • Batch size: 1
    • Learning rate: 1e-4
    • Epochs: 1
    • Max sequence length: 256
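
These parameters map onto a trl SFTTrainer roughly as sketched below, assuming a trl version that accepts these arguments directly; model, tokenizer, and dataset come from the earlier sketches, and output_dir is an assumption:

from trl import SFTTrainer
from transformers import TrainingArguments

# Hyperparameters mirror the training parameters listed above
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=256,
    args=TrainingArguments(
        output_dir="customer_support_model",  # assumed output path
        per_device_train_batch_size=1,
        learning_rate=1e-4,
        num_train_epochs=1,
    ),
)
trainer.train()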

Requirements

  • M1/M2 Mac
  • 8GB+ RAM
  • Python 3.10+
  • PyTorch with MPS support

Acknowledgments

  • Bitext for the customer support training dataset
  • The Unsloth team for the training optimization library

License

MIT

Citation

@misc{bitext2023customer,
    title={Customer Support LLM Training Dataset},
    author={Bitext},
    year={2023},
    publisher={GitHub},
    url={https://github.com/bitext/customer-support-llm-chatbot-training-dataset}
}