Muyun99's wiki

Muyun99

Striving to be a kind person
    Muyun99
    2021-04-14


# (SimCLRv2) Big Self-Supervised Models are Strong Semi-Supervised Learners

# Authors: Google (Hinton's group)

# Abstract

# Reading Notes

# Purpose and Conclusions

# Experiments

# Method

# Background

# Summary

# Contributions


A big CNN is first pretrained with unsupervised learning, then fine-tuned on the labeled data.

Next, an unsupervised distillation step trains a student model.

Unsupervised distillation: the fine-tuned model computes soft labels on unlabeled data, and the student model is trained against them.
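The distillation step above can be sketched as follows. This is a minimal NumPy sketch of temperature-scaled soft-label distillation, not the paper's implementation; the temperature value and toy logits are illustrative.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; larger T gives softer distributions
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(teacher_logits, student_logits, T=2.0):
    # Cross-entropy between the teacher's soft labels and the student's
    # predictions, averaged over a batch of unlabeled examples
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T))
    return -(p_teacher * log_p_student).sum(axis=-1).mean()

# Toy logits for a batch of 2 unlabeled examples, 3 classes
teacher = np.array([[4.0, 1.0, 0.5], [0.2, 3.0, 0.1]])
student = np.array([[3.5, 1.2, 0.3], [0.1, 2.8, 0.4]])
loss = distill_loss(teacher, student)
```

Since cross-entropy H(p, q) ≥ H(p), the loss is minimized exactly when the student's softened distribution matches the teacher's, which is what drives the student toward the teacher's behavior without any ground-truth labels.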

# Limitations

# How the Paper Frames Its Story

Two paradigms for using unlabeled data:

• Task-agnostic use of unlabeled data
  • Unsupervised pre-training + supervised fine-tuning
• Task-specific use of unlabeled data
  • Self-training / pseudo-labeling
  • Label consistency regularization
  • Other label propagation methods
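The self-training / pseudo-labeling item can be made concrete with a common confidence-thresholding scheme: keep only the model's most confident predictions on unlabeled data as hard labels for further training. This is a generic sketch, not the paper's method; the 0.95 threshold and the toy probabilities are illustrative.

```python
import numpy as np

def pseudo_label(probs, threshold=0.95):
    # Keep only examples whose max class probability clears the threshold;
    # return their indices and the corresponding hard (argmax) labels
    conf = probs.max(axis=1)
    keep = np.where(conf >= threshold)[0]
    labels = probs[keep].argmax(axis=1)
    return keep, labels

# Toy predicted class probabilities for 4 unlabeled examples, 3 classes
probs = np.array([
    [0.97, 0.02, 0.01],   # confident -> kept, pseudo-label 0
    [0.50, 0.30, 0.20],   # uncertain -> dropped
    [0.01, 0.03, 0.96],   # confident -> kept, pseudo-label 2
    [0.40, 0.55, 0.05],   # uncertain -> dropped
])
keep, labels = pseudo_label(probs)
```

Unlike task-agnostic pre-training, this uses unlabeled data only after a task-specific model exists, since the pseudo-labels come from that model's own predictions.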

# References

Last updated: 2021/11/03, 23:35:28

Theme by Vdoing | Copyright © 2021-2023 Muyun99 | MIT License