
Rethinking Patch Dependence for Masked Autoencoders

Abstract: "In this work, we re-examine inter-patch dependencies in the decoding mechanism of masked autoencoders (MAE). We decompose this decoding mechanism for masked patch reconstruction in MAE into self-attention and cross-attention. Our investigations suggest that self-attention between mask patches is not essential for learning good representations. To this end, we propose a novel pretraining framework: Cross-Attention Masked Autoencoders (CrossMAE). CrossMAE's decoder leverages only cross-attention between masked and visible tokens, with no degradation in downstream performance. This design also enables decoding only a small subset of mask tokens, boosting efficiency. Furthermore, each decoder block can now leverage different encoder features, resulting in improved representation learning. CrossMAE matches MAE in performance with 2.5 to 3.7× less decoding compute. It also surpasses MAE on ImageNet classification and COCO instance segmentation under the same compute."

Details

Author(s)
Adam Yala
Publication date
25 January 2024
Source
arXiv
Link to publication
