
experts based on Phi 3 model with PEFT library #124

Open
AliEdalat opened this issue Oct 19, 2024 · 17 comments
@AliEdalat

Hello,

Thank you for your interesting research work.

I have 10 experts trained on top of the Phi 3 model (with datasets selected following the clustering described in the paper). I used the TRL and PEFT libraries for training, ensuring the checkpoint structures are compatible with these libraries.

While training the experts, I used LoRA in 4-bit quantized mode (QLoRA). Additionally, I targeted the output (o) and key-query-value (qkv) attention projections in each layer during training.
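For concreteness, a sketch of such a setup (the hyperparameter values and model name here are illustrative assumptions, not the exact ones used above):

```python
# Config sketch: LoRA on the attention projections of a 4-bit quantized
# Phi-3 base model (QLoRA). Hyperparameters are illustrative only.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                        # 4-bit quantized base weights
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",       # assumed base model
    quantization_config=bnb_config,
)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    # Phi-3 fuses q/k/v into a single qkv_proj module
    target_modules=["qkv_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
```

Each expert would be trained separately with a config along these lines on its own data cluster.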

I would like to know how I can use your code to execute Arrow for merging these experts for each token in every model layer.

I am running into some errors when using the code.

Could you please explain the process step by step? I am a beginner in this field.

Thank you, and I would appreciate your response.

@sordonia
Member

@pclucas14 can you help with this issue?

I also have a PR that builds a library compatible with ours out of PEFT adapters; maybe we can merge that and write an example of how to run Arrow using experts trained with TRL and PEFT.

Thank you

@sordonia
Member

@AliEdalat would you be able to share with us the name of your PEFT adapters?

@sordonia
Member

In #86, we provide a script, examples/create_arrow_model.py: you specify a list of PEFT adapters and a destination path, and it uploads a checkpoint of the base model that has been "Arrowed" with the selected experts.

You can reload the model with `model = MultiExpertModel.from_pretrained(destination)` (see the file for instructions).

Once you have the model, you can call `model.forward` and `model.generate` as you would with an HF model.
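A minimal usage sketch of that flow (the import path and checkpoint path below are placeholders to check against the MTTL repo, not verified API):

```python
# Hypothetical sketch: reload the "Arrowed" checkpoint written by
# examples/create_arrow_model.py and use it like an HF model.
# The import path below is an assumption; check the MTTL repo for the real one.
from mttl.models.expert_model import MultiExpertModel

model = MultiExpertModel.from_pretrained("path/to/arrowed-checkpoint")

# tokenize a prompt with the matching base-model tokenizer, then:
# output_ids = model.generate(input_ids, max_new_tokens=64)
```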

@TheTahaaa

TheTahaaa commented Oct 25, 2024

Hey @sordonia,

I'm @AliEdalat's teammate, and I recently implemented the Arrow algorithm for our project using the PEFT library, achieving promising results. To do this, I modified the forward pass in the bnb.py file (since we use QLoRA in our workflow) and adjusted methods in peft_model.py, along with some other PEFT library files.
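For readers following along, the per-token routing that such a modified forward pass computes can be sketched in plain NumPy. This is a simplified illustration of Arrow as described in the paper (each expert's prototype is the top right-singular vector of its LoRA update, and tokens are weighted by the absolute dot product with each prototype, softmaxed over the top-k experts); it is not the MTTL or PEFT implementation:

```python
# Simplified NumPy sketch of Arrow's per-token expert routing.
import numpy as np

def arrow_prototypes(lora_updates):
    """lora_updates: list of (d_out, d_in) LoRA delta-W matrices, one per expert.
    Returns a (num_experts, d_in) array of prototype directions."""
    protos = []
    for dw in lora_updates:
        # top right-singular vector of delta W serves as the expert prototype
        _, _, vt = np.linalg.svd(dw, full_matrices=False)
        protos.append(vt[0])
    return np.stack(protos)

def arrow_route(h, protos, top_k=2, temp=1.0):
    """h: (seq, d_in) token hidden states.
    Returns (seq, num_experts) mixing weights, top-k sparse per token."""
    # |h . v_e|: absolute value makes routing invariant to the SVD sign
    scores = np.abs(h @ protos.T)
    # keep only the top-k experts per token, mask the rest out
    idx = np.argsort(-scores, axis=-1)[:, :top_k]
    mask = np.zeros_like(scores, dtype=bool)
    np.put_along_axis(mask, idx, True, axis=-1)
    masked = np.where(mask, scores / temp, -np.inf)
    # softmax over the surviving experts
    w = np.exp(masked - masked.max(-1, keepdims=True))
    return w / w.sum(-1, keepdims=True)
```

In the real model, these weights would mix the experts' LoRA outputs per token at every adapted layer.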

I thought you might be interested in a dedicated PEFT adapter for the Arrow algorithm, similar to existing ones like Poly and MHR. Let me know if you'd like any help on this!

Thanks!

@sordonia
Member

It'd be great! Do you plan to open a PR into PEFT? I'd be willing to help

@TheTahaaa

TheTahaaa commented Oct 28, 2024

> It'd be great! Do you plan to open a PR into PEFT? I'd be willing to help

Yes, I'd be glad to contribute!

Which branch should I (we) open a PR on?

@sordonia
Member

Just to clarify my understanding: you are planning to PR your work into the huggingface PEFT library, right? Or into MTTL library?

@TheTahaaa

> Just to clarify my understanding: you are planning to PR your work into the huggingface PEFT library, right? Or into MTTL library?

Yes! HuggingFace PEFT library.

I initially asked about the branch because I thought there might already be a feature request for implementing Arrow in PEFT. Should I go ahead and open a feature request for this, or would you prefer to handle it?

(Sorry for any ambiguity!)

@pclucas14
Contributor

pclucas14 commented Oct 28, 2024

Hi,

Super exciting to hear you are getting promising results! Please go ahead and PR into PEFT. I am happy to help if needed, but you probably have a better intuition on how PEFT works than us :)

@sordonia
Member

Yes, please go ahead and open a feature request; we can jump into the PR when needed!

@sordonia
Member

sordonia commented Oct 29, 2024

(We merged our own PEFT support in examples/create_arrow_model.py; curious if it works with your PEFT experts :))

@TheTahaaa

TheTahaaa commented Nov 2, 2024

@sordonia @pclucas14 I'll do it for sure! However, I first need to make sure that my code can closely reproduce the results on the zero-shot datasets, like PIQA, that you've reported in the paper.

Furthermore, I think I need to refactor the code so that Arrow can be added as a Tuner class in the PEFT library, similar to Polytropon and π-Tuning.

@sordonia
Member

@TheTahaaa how is it going? :)

@TheTahaaa

Hey @sordonia!

I’ve tested my implementation of the Arrow algorithm on the benchmarks from your paper (PIQA, BoolQ, etc.) using the Phi-3 model, and the results were quite promising, confirming that the algorithm works well.

I was just about to start working on the PR—so thanks for the reminder! 😁

Apologies for the delay; we only have access to a single RTX 3090 (24GB), which made testing the algorithm on the benchmark datasets a bit challenging.

@sordonia
Member

Awesome!! I'm really looking forward to learning more about the setup. No problem at all; I was just curious, so I pinged. Let me know the progress and how we can help.

@TheTahaaa

I'll definitely keep you updated on the progress and share more details about the setting soon.

To make things more transparent and easier to follow, I was thinking we could set this up on Git. That way, you can easily track changes and progress as we go.

Let me know if that works for you, and I’d be happy to set it up!

@sordonia
Member

sure! thanks
