Commit

update
26hzhang committed Mar 27, 2022
1 parent 2ae5cc9 commit e1ae970
Showing 3 changed files with 7 additions and 5 deletions.
8 changes: 5 additions & 3 deletions Bibtex/Evolving Attention with Residual Convolutions.bib
@@ -1,6 +1,8 @@
-@article{wang2021evolving,
+@inproceedings{wang2021evolving,
   title={Evolving attention with residual convolutions},
   author={Wang, Yujing and Yang, Yaming and Bai, Jiangang and Zhang, Mingliang and Bai, Jing and Yu, Jing and Zhang, Ce and Huang, Gao and Tong, Yunhai},
-  journal={arXiv preprint arXiv:2102.12895},
-  year={2021}
+  booktitle={International Conference on Machine Learning},
+  pages={10971--10980},
+  year={2021},
+  organization={PMLR}
 }
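
For readers who want to use the corrected entry, a minimal LaTeX sketch follows. It assumes the .bib file has been copied to evolving_attention.bib (a hypothetical name, since classic BibTeX does not handle the spaces in this repository's filename) and picks the plain bibliography style as an arbitrary choice:

\documentclass{article}
\begin{document}
Evolving attention augments attention maps with residual
convolutions~\cite{wang2021evolving}.
% BibTeX resolves the key against the @inproceedings entry above;
% with the updated fields the reference renders the ICML booktitle,
% pages, and PMLR rather than an arXiv preprint.
\bibliographystyle{plain}
\bibliography{evolving_attention}
\end{document}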
2 changes: 0 additions & 2 deletions readme/cv/vision_pretraining.md
@@ -16,10 +16,8 @@
 - [2021 ICLR] **Representation Learning via Invariant Causal Mechanisms**, [[paper]](https://openreview.net/pdf?id=9p2ekP904Rs), [[bibtex]](/Bibtex/Representation%20Learning%20via%20Invariant%20Causal%20Mechanisms.bib).
 - [2021 ArXiv] **Conditional Positional Encodings for Vision Transformers**, [[paper]](https://arxiv.org/pdf/2102.10882v2.pdf), [[bibtex]](/Bibtex/Conditional%20Positional%20Encodings%20for%20Vision%20Transformers.bib), sources: [[Meituan-AutoML/CPVT]](https://github.com/Meituan-AutoML/CPVT).
 - [2021 ICML] **Training Data-Efficient Image Transformers & Distillation Through Attention**, [[paper]](http://proceedings.mlr.press/v139/touvron21a/touvron21a.pdf), [[bibtex]](/Bibtex/Training%20data-efficient%20image%20transformers%20&%20distillation%20through%20attention.bib), [[supplementary]](http://proceedings.mlr.press/v139/touvron21a/touvron21a-supp.pdf), sources: [[facebookresearch/deit]](https://github.com/facebookresearch/deit).
-- [2021 ArXiv] **An Attention Free Transformer**, [[paper]](https://arxiv.org/pdf/2105.14103.pdf), [[bibtex]](/Bibtex/An%20Attention%20Free%20Transformer.bib), sources: [[rish-16/aft-pytorch]](https://github.com/rish-16/aft-pytorch).
 - [2021 ArXiv] **Proactive Pseudo-Intervention: Contrastive Learning For Interpretable Vision Models**, [[paper]](https://arxiv.org/pdf/2012.03369.pdf), [[bibtex]](/Bibtex/Proactive%20Pseudo-Intervention%20-%20Contrastive%20Learning%20For%20Interpretable%20Vision%20Models.bib).
 - [2021 ArXiv] **Adversarial Visual Robustness by Causal Intervention**, [[paper]](https://arxiv.org/pdf/2106.09534.pdf), [[bibtex]](/Bibtex/Adversarial%20Visual%20Robustness%20by%20Causal%20Intervention.bib), sources: [[KaihuaTang/Adversarial-Robustness-by-Causal-Intervention.pytorch]](https://github.com/KaihuaTang/Adversarial-Robustness-by-Causal-Intervention.pytorch).
-- [2021 CVPR] **Evolving Attention with Residual Convolutions**, [[paper]](https://arxiv.org/pdf/2102.12895.pdf), [[bibtex]](/Bibtex/Evolving%20Attention%20with%20Residual%20Convolutions.bib).
 - [2021 CVPR] **Exploring Simple Siamese Representation Learning**, [[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Chen_Exploring_Simple_Siamese_Representation_Learning_CVPR_2021_paper.pdf), [[bibtex]](/Bibtex/Exploring%20Simple%20Siamese%20Representation%20Learning.bib), sources: [[facebookresearch/simsiam]](https://github.com/facebookresearch/simsiam).
 - [2021 CVPR] **Pre-Trained Image Processing Transformer**, [[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Chen_Pre-Trained_Image_Processing_Transformer_CVPR_2021_paper.pdf), [[bibtex]](/Bibtex/Pre-Trained%20Image%20Processing%20Transformer.bib), [[supplementary]](https://openaccess.thecvf.com/content/CVPR2021/supplemental/Chen_Pre-Trained_Image_Processing_CVPR_2021_supplemental.pdf), sources: [[huawei-noah/Pretrained-IPT]](https://github.com/huawei-noah/Pretrained-IPT).
 - [2021 ICML] **Understanding Self-Supervised Learning Dynamics without Contrastive Pairs**, [[paper]](https://research.fb.com/wp-content/uploads/2021/06/Understanding-self-supervised-Learning-Dynamics-without-Contrastive-Pairs.pdf), [[bibtex]](/Bibtex/Understanding%20Self-Supervised%20Learning%20Dynamics%20without%20Contrastive%20Pairs.bib), [[slides]](https://icml.cc/media/icml-2021/Slides/10403.pdf), sources: [[facebookresearch/luckmatters/ssl]](https://github.com/facebookresearch/luckmatters/tree/master/ssl).
2 changes: 2 additions & 0 deletions readme/general_rep_learning.md
@@ -1,5 +1,7 @@
 # General Representation Learning
 
+- [2021 ArXiv] **An Attention Free Transformer**, [[paper]](https://arxiv.org/pdf/2105.14103.pdf), [[bibtex]](/Bibtex/An%20Attention%20Free%20Transformer.bib), sources: [[rish-16/aft-pytorch]](https://github.com/rish-16/aft-pytorch).
 - [2021 ICML] **Perceiver: General Perception with Iterative Attention**, [[paper]](https://proceedings.mlr.press/v139/jaegle21a/jaegle21a.pdf), [[bibtex]](/Bibtex/Perceiver%20-%20General%20Perception%20with%20Iterative%20Attention.bib), sources: [[lucidrains/perceiver-pytorch]](https://github.com/lucidrains/perceiver-pytorch).
+- [2021 ICML] **Evolving Attention with Residual Convolutions**, [[paper]](https://arxiv.org/pdf/2102.12895.pdf), [[bibtex]](/Bibtex/Evolving%20Attention%20with%20Residual%20Convolutions.bib).
 - [2022 ArXiv] **data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language**, [[paper]](https://arxiv.org/pdf/2202.03555.pdf), [[bibtex]](/Bibtex/data2vec.bib), sources: [[pytorch/fairseq/data2vec]](https://github.com/pytorch/fairseq/tree/main/examples/data2vec).
