Data-Free Learning of Student Networks · AI paper review/Model Compression
1. Data-free Knowledge Distillation Knowledge distillation: the problem of training a smaller model (student) from a high-capacity source model (teacher) so that the student retains most of the teacher's performance. As the name suggests, data-free knowledge distillation is performed when the original dataset on which the teacher network was trained is unavailable. This is because, in the real world, most datasets are proprie..
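The distillation objective the excerpt refers to can be sketched as follows. This is a minimal NumPy sketch of the standard softened-softmax distillation loss (Hinton-style, with temperature `T`); the function names and the choice of `T` are illustrative, not from the reviewed paper.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T gives a softer distribution."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence between softened teacher and student outputs,
    scaled by T^2 so gradients stay comparable across temperatures."""
    p = softmax(teacher_logits, T)  # teacher's soft targets
    q = softmax(student_logits, T)  # student's soft predictions
    return (T ** 2) * float(np.sum(p * (np.log(p) - np.log(q))))
```

In data-free settings, the missing ingredient is the input batch itself: the teacher's soft targets are still computed this way, but on synthesized rather than original data.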
Zero-Shot Knowledge Distillation in Deep Networks · AI paper review/Model Compression
1. What is data-free knowledge distillation? Knowledge distillation: the problem of training a smaller model (student) from a high-capacity source model (teacher) so that the student retains most of the teacher's performance. As the name suggests, data-free knowledge distillation is performed when the original dataset on which the teacher network was trained is unavailable. This is because, in the real world, most datasets a..
Interpretable Explanations of Black Boxes by Meaningful Perturbation · AI paper review/Explainable AI
1. How do we explain the decision of a black-box model? Given the left figure, we wonder why the deep network predicts the image as "dog". To answer this, we aim to find the regions of the input image that are important for classifying it as the "dog" class. \[ \downarrow \text{The idea} \] If we find and remove those regions, the predicted probability drops significantly. Note: removing th..
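The "remove a region and watch the probability drop" idea can be sketched with a simple occlusion scan. This is a toy NumPy sketch, not the paper's optimization-based method: `predict` stands in for any black-box scoring function, and the patch size and baseline value are arbitrary assumptions.

```python
import numpy as np

def occlusion_importance(image, predict, patch=8, baseline=0.0):
    """Slide a patch over the image; a region's importance is the drop
    in the class probability when that patch is replaced by `baseline`."""
    h, w = image.shape
    base_prob = predict(image)
    heat = np.zeros_like(image, dtype=float)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            perturbed = image.copy()
            perturbed[i:i + patch, j:j + patch] = baseline  # "remove" the region
            heat[i:i + patch, j:j + patch] = base_prob - predict(perturbed)
    return heat
```

Regions whose removal causes the largest probability drop receive the highest heat, which is exactly the intuition the excerpt describes; the actual paper learns a smooth mask by gradient descent instead of scanning fixed patches.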
GradCAM · AI paper review/Explainable AI
1. What is the goal of GradCAM? The goal of GradCAM is to produce a coarse localization map that highlights the regions of an image that are important for predicting a target concept (class). GradCAM uses the gradients of any target concept (such as "cat") flowing into the final convolutional layer. Note: I (da2so) will only deal with the problem of image classification in the following contents. The property ..
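Given the activations of the final convolutional layer and the gradients of the target class score with respect to them, the GradCAM map itself is a few lines of array math. A minimal NumPy sketch, assuming the activations and gradients have already been extracted from the network (framework-specific hooks are omitted):

```python
import numpy as np

def grad_cam(activations, gradients):
    """activations, gradients: arrays of shape (C, H, W) from the final
    conv layer; gradients are d(class score)/d(activations)."""
    weights = gradients.mean(axis=(1, 2))              # alpha_k: global-average-pool the gradients
    cam = np.tensordot(weights, activations, axes=1)   # weighted sum over channels -> (H, W)
    cam = np.maximum(cam, 0)                           # ReLU: keep only positive influence
    if cam.max() > 0:
        cam /= cam.max()                               # normalize to [0, 1] for visualization
    return cam
```

The resulting (H, W) map is then upsampled to the input resolution and overlaid on the image as the familiar heatmap.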