Model Compression


    Zero-Shot Knowledge Transfer via Adversarial Belief Matching

    1. Data-free knowledge distillation. Knowledge distillation deals with the problem of training a smaller model (Student) from a high-capacity source model (Teacher) so as to retain most of its performance. As the name suggests, we perform data-free knowledge distillation when there is no original dataset on which the Teacher network was trained, because in the real world most datasets are proprietary…
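    Since every post in this list builds on the same teacher-student objective, a minimal sketch of the vanilla distillation loss (Hinton et al., 2015) may be useful context. This is not the method of any specific post here; the temperature T and mixing weight alpha are illustrative assumptions.

    ```python
    import torch
    import torch.nn.functional as F

    def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
        """Weighted sum of a soft (teacher-matching) loss and a hard (label) loss."""
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),   # student's softened predictions
            F.softmax(teacher_logits / T, dim=1),       # teacher's softened targets
            reduction="batchmean",
        ) * (T * T)                                     # rescale gradients after temperature softening
        hard = F.cross_entropy(student_logits, labels)  # standard supervised term
        return alpha * soft + (1.0 - alpha) * hard
    ```

    The hard-label term is exactly what the data-free papers below must drop, since without the original dataset there are no ground-truth labels to supervise with.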

    Data-Free Learning of Student Networks

    1. Data-free knowledge distillation. Knowledge distillation deals with the problem of training a smaller model (Student) from a high-capacity source model (Teacher) so as to retain most of its performance. As the name suggests, we perform data-free knowledge distillation when there is no original dataset on which the Teacher network was trained, because in the real world most datasets are proprietary…

    Zero-Shot Knowledge Distillation in Deep Networks

    1. What is data-free knowledge distillation? Knowledge distillation deals with the problem of training a smaller model (Student) from a high-capacity source model (Teacher) so as to retain most of its performance. As the name suggests, we perform data-free knowledge distillation when there is no original dataset on which the Teacher network was trained, because in the real world most datasets are proprietary…
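    All three posts then remove the real dataset from this setup. As a generic illustration of that data-free condition only (not the specific algorithm of DAFL, ZSKD, or adversarial belief matching, each of which synthesizes its inputs differently), one student update might look like the sketch below; the generator, latent size z_dim, and batch size are hypothetical placeholders.

    ```python
    import torch
    import torch.nn.functional as F

    def data_free_step(teacher, student, generator, opt_s, z_dim=100, batch=64):
        """One student update using only synthesized inputs, never real data."""
        teacher.eval()
        z = torch.randn(batch, z_dim)              # random latent codes
        with torch.no_grad():
            x = generator(z)                       # pseudo-samples stand in for the dataset
            t_logits = teacher(x)                  # teacher beliefs on those samples
        s_logits = student(x)
        loss = F.kl_div(F.log_softmax(s_logits, dim=1),
                        F.softmax(t_logits, dim=1),
                        reduction="batchmean")     # match the teacher's output distribution
        opt_s.zero_grad()
        loss.backward()
        opt_s.step()
        return loss.item()
    ```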
