A framework for large scale recommendation algorithms.
-
Updated
Dec 10, 2024 - Python
A framework for large scale recommendation algorithms.
[NeurIPS 2023] Michelangelo: Conditional 3D Shape Generation based on Shape-Image-Text Aligned Latent Representation
InternEvo is an open-sourced lightweight training framework aims to support model pre-training without the need for extensive dependencies.
Repository for Project Insight: NLP as a Service
A compilation of the best multi-agent papers
CLIP (Contrastive Language–Image Pre-training) for Italian
Federated Learning Utilities and Tools for Experimentation
The official code for "TEMPO: Prompt-based Generative Pre-trained Transformer for Time Series Forecasting (ICLR 2024)". TEMPO is one of the very first open source Time Series Foundation Models for forecasting task v1.0 version.
Retrieval-based Voice Conversion (RVC) implemented with Hugging Face Transformers.
I will implement Fastai in each projects present in this repository.
Pytorch implementation of image captioning using transformer-based model.
[TMI 2023] XBound-Former: Toward Cross-scale Boundary Modeling in Transformers
Official repository for the paper "ALERT: A Comprehensive Benchmark for Assessing Large Language Models’ Safety through Red Teaming"
This project investigates the security of large language models by performing binary classification of a set of input prompts to discover malicious prompts. Several approaches have been analyzed using classical ML algorithms, a trained LLM model, and a fine-tuned LLM model.
Image Captioning Vision Transformers (ViTs) are transformer models that generate descriptive captions for images by combining the power of Transformers and computer vision. It leverages state-of-the-art pre-trained ViT models and employs technique
A radically simple, reliable, and high performance template to enable you to quickly get set up building multi-agent applications
This repository contain my 75Day Hard Generative AI and LLM Learning Challenge.
An ASR (Automatic Speech Recognition) adversarial attack repository.
An open source community implementation of the model from "DIFFERENTIAL TRANSFORMER" paper by Microsoft.
Symbolic music generation taking inspiration from NLP and human composition process
Add a description, image, and links to the transformers-models topic page so that developers can more easily learn about it.
To associate your repository with the transformers-models topic, visit your repo's landing page and select "manage topics."