- Academic Grind
- Paper Reading Notes
Table of Contents
- # 2. Paper Reading - Image Classification
- 2-1. Query2Label A Simple Transformer Way to Multi-Label Classification
- 2-2. Contextual Transformer Networks for Visual Recognition
- 2-3. General Multi-label Image Classification with Transformers
- 2-4. RepVGG
- # 3. Paper Reading - Semantic Segmentation
- 3-1. (DeepLabv1) Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs
- # 4. Paper Reading - Knowledge Distillation
- 4-1. Awesome-Knowledge-distillation
- # 5. Paper Reading - Transformer
- 5-1. Transformer Series Code
- 5-2. An Image is Worth 16x16 Words Transformers for Image Recognition at Scale
- 5-3. Do Vision Transformers See Like Convolutional Neural Networks
- # 6. Paper Reading - Graph Convolutional Networks
- 6-1. Awesome-Graph-Neural-Network
- # 7. Paper Reading - Weakly Supervised Image Segmentation
- 7-1. Awesome weakly supervised semantic segmentation
- 7-2. Self-supervised Equivariant Attention Mechanism for Weakly Supervised Semantic Segmentation
- 7-3. Weakly-Supervised Image Semantic Segmentation Using Graph Convolutional Networks
- 7-4. Discriminative Region Suppression for Weakly-Supervised Semantic Segmentation
- 7-5. Weakly-Supervised Semantic Segmentation via Sub-category Exploration
- 7-6. AffinityNet Learning Pixel level Semantic Affinity with Image level Supervision for Weakly Supervised Semantic Segmentation
- 7-7. Grad-CAM Visual Explanations from Deep Networks via Gradient-based Localization
- 7-8. Grad-CAM++ Improved Visual Explanations for Deep Convolutional Networks
- 7-9. Leveraging Auxiliary Tasks with Affinity Learning for Weakly Supervised Semantic Segmentation
- 7-10. Embedded Discriminative Attention Mechanism for Weakly Supervised Semantic Segmentation
- 7-11. Group-Wise Semantic Mining for Weakly Supervised Semantic Segmentation
- 7-12. Mining Cross-Image Semantics for Weakly Supervised Semantic Segmentation
- 7-13. NoPeopleAllowed The Three-Step Approach to Weakly Supervised Semantic Segmentation
- 7-14. Weakly Supervised Learning of Instance Segmentation with Inter-pixel Relations
- 7-15. Learning Deep Features for Discriminative Localization
- 7-16. Convolutional Random Walk Networks for Semantic Image Segmentation
- 7-17. Learning random-walk label propagation for weakly-supervised semantic segmentation
- 7-18. Puzzle-CAM Improved localization via matching partial and full features
- 7-19. Learning Visual Words for Weakly-Supervised Semantic Segmentation
- 7-20. Region Erasing | Object Region Mining with Adversarial Erasing A Simple Classification to Semantic Segmentation Approach
- 7-21. CAM Diffusion | Tell Me Where to Look Guided Attention Inference Network
- 7-22. Self-Erasing Network for Integral Object Attention
- 7-23. Transformer CAM | Transformer Interpretability Beyond Attention Visualization
- 7-24. GETAM Gradient-weighted Element-wise Transformer Attention Map for Weakly-supervised Semantic segmentation
- 7-25. Class Re-Activation Maps for Weakly-Supervised Semantic Segmentation
- # 8. Paper Reading - Semi-Supervised Image Segmentation
- 8-1. Learning from Pixel-Level Label Noise A New Perspective for Semi-Supervised Semantic Segmentation
- 8-2. Semi-supervised Semantic Segmentation via Strong-weak Dual-branch Network
- 8-3. Semi-Supervised Learning by exploiting unlabeled data correlations in a dual-branch network
- 8-4. DMT Dynamic Mutual Training for Semi-Supervised Learning
- 8-5. Semi-supervised semantic segmentation needs strong, varied perturbations
- 8-6. ClassMix Segmentation-Based Data Augmentation for Semi-Supervised Learning
- 8-7. Social-STGCNN A Social Spatio-Temporal Graph Convolutional Neural Network for Human Trajectory Prediction
- 8-8. Semi-supervised Semantic Segmentation with High- and Low-level Consistency
- 8-9. Self-Tuning for Data-Efficient Deep Learning
- 8-10. FixMatch Simplifying Semi-Supervised Learning with Consistency and Confidence
- 8-11. Re-distributing Biased Pseudo Labels for Semi-supervised Semantic Segmentation A Baseline Investigation
- 8-12. Mean teachers are better role models Weight-averaged consistency targets improve semi-supervised deep learning results
- # 9. Paper Reading - Learning with Noisy Labels
- # 10. Paper Reading - Few-Shot Learning
- 10-1. SPICE Semantic Pseudo-labeling for Image Clustering
- 10-2. Improving Unsupervised Image Clustering With Robust Learning
- 10-3. SCAN Learning to Classify Images without Labels
- 10-4. Sill-Net Feature Augmentation with Separated Illumination Representation
- # 11. Paper Reading - Self-Supervised Learning
- 11-1. (MoCov1) Momentum Contrast for Unsupervised Visual Representation Learning
- 11-2. (SimCLRv1) A Simple Framework for Contrastive Learning of Visual Representations
- 11-3. (SimCLRv2) Big Self-Supervised Models are Strong Semi-Supervised Learners
- 11-4. (InstDis) Unsupervised Feature Learning via Non-Parametric Instance Discrimination
- 11-5. (CPC) Representation Learning with Contrastive Predictive Coding
- 11-6. (CMC) Contrastive Multiview Coding, also contains implementations for MoCo and InstDis
- 11-7. (HPT) Self-Supervised Pretraining Improves Self-Supervised Pretraining
- 11-8. (SimSiam) SimSiam Exploring Simple Siamese Representation Learning
- 11-9. Self-Supervised Learning Series Code