List of Papers to Read

Papers to read

Each entry is marked below to indicate whether it has been finished or dropped.

  1. AN IMAGE IS WORTH 16X16 WORDS: TRANSFORMERS FOR IMAGE RECOGNITION AT SCALE
    [Completed] Notes: 2022.11.18

  2. CrossViT: Cross-Attention Multi-Scale Vision Transformer for Image Classification
    [Completed] Notes: 2022.11.19

  3. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows

  4. ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks

  5. Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions

  6. Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet

  7. Multiscale Vision Transformers

  8. Scaling Vision Transformers

  9. Rethinking Spatial Dimensions of Vision Transformers

  10. DeepViT: Towards Deeper Vision Transformer

  11. Conditional Positional Encodings for Vision Transformers
