USTCLLM/LISA

PyTorch implementation of LISA (Linear-Time Self Attention with Codeword Histogram for Efficient Recommendation, WWW 2021).
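
At a high level, LISA makes self-attention linear in sequence length by quantizing keys and values into a small shared codebook and attending over codewords, weighted by a histogram of how often each codeword occurs in the causal prefix. The following is a conceptual PyTorch sketch of single-head codeword-histogram attention, written for illustration only; it is not code from this repository, and all names in it are ours:

import torch
import torch.nn.functional as F

def histogram_attention(q, codes, codebook):
    # q:        (T, d) float queries
    # codes:    (T,)   long, codeword index assigned to each position
    # codebook: (K, d) float, shared codeword embeddings standing in
    #                  for both keys and values
    T, d = q.shape
    K = codebook.shape[0]
    # Causal prefix histogram: counts[t, k] = occurrences of codeword k
    # among positions 0..t. Built in O(T * K) rather than O(T^2).
    counts = torch.cumsum(F.one_hot(codes, K).float(), dim=0)  # (T, K)
    scores = q @ codebook.t() / d ** 0.5                       # (T, K)
    # softmax(score_k + log n_k) = n_k * exp(score_k) / Z, so the
    # histogram folds all positions sharing a codeword into one term.
    logits = scores + counts.clamp(min=1.0).log()
    logits = logits.masked_fill(counts == 0, float("-inf"))
    return torch.softmax(logits, dim=-1) @ codebook            # (T, d)

# Toy usage: 1,000-step sequence, 64-dim model, 16 codewords.
out = histogram_attention(
    torch.randn(1000, 64),
    torch.randint(0, 16, (1000,)),
    torch.randn(16, 64),
)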

Requirements

  • PyTorch 1.6.0
  • numpy
  • tqdm

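The dependencies can be installed with pip; a minimal sketch, assuming a working Python 3 environment (only the PyTorch version is pinned above):

pip install torch==1.6.0 numpy tqdm
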
Usage

The three LISA training scripts correspond to the three model variants:

  • train_efficient_attention_target_seperate_pq.py: LISA-Mini
  • train_efficient_attention_soft_pq.py: LISA-Soft
  • train_efficient_attention.py: LISA-Base

train_transformer.py corresponds to the vanilla Transformer-based model.

To train LISA-Mini on the ML-1M dataset, run:

python train_efficient_attention_target_seperate_pq.py --config configs/EfficientAttn-Mini/ml-1m-256.json --train_dataset datasets/ml-1m/train.pkl --eval_dataset datasets/ml-1m/eval.pkl --gpu 0

To train LISA-Soft, run:

python train_efficient_attention_soft_pq.py --config configs/EfficientAttn-Soft/ml-1m-128.json --train_dataset datasets/ml-1m/train.pkl --eval_dataset datasets/ml-1m/eval.pkl --gpu 0

To train LISA-Base, run:

python train_efficient_attention.py --config configs/EfficientAttn-Base/ml-1m.json --train_dataset datasets/ml-1m/train.pkl --eval_dataset datasets/ml-1m/eval.pkl --gpu 0

To train the vanilla Transformer, run:

python train_transformer.py --config configs/Transformer/ml-1m.json --train_dataset datasets/ml-1m/train.pkl --eval_dataset datasets/ml-1m/eval.pkl --gpu 0
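
The --train_dataset and --eval_dataset arguments point to pickled dataset splits. Their internal schema is not documented here, so as a quick sanity check you can inspect one with plain Python (the path comes from the commands above; everything else is generic):

import pickle

# Load the serialized ML-1M training split and report its top-level
# structure; the exact contents are defined by the repo's preprocessing.
with open("datasets/ml-1m/train.pkl", "rb") as f:
    data = pickle.load(f)
print(type(data))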
