EfficientDet: Scalable and Efficient Object Detection
AI paper review
1. Introduction 1.1 Motivation Existing object-detection methods mainly have two problems. (i) Most previous works have developed network structures for cross-scale feature fusion; however, the input features usually contribute to the fused output feature unequally. (ii) While previous works mainly rely on bigger backbone networks or larger input image sizes for higher accuracy, scaling up feature net..
Knowledge Distillation via Softmax Regression Representation Learning
AI paper review/Model Compression
1. Introduction Knowledge Distillation: Training a smaller model (Student) from a high-capacity source model (Teacher) so as to retain most of its performance. 1.1 Motivation The authors of this paper try to make the outputs of the student network similar to those of the teacher network. They advocate a method that optimizes not only the output of..
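The output-matching idea in this excerpt is commonly implemented as a KL-divergence loss between temperature-softened teacher and student outputs. A minimal NumPy sketch of that standard Hinton-style distillation loss (not this paper's full method, which also matches intermediate representations):

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature T > 1 softens the distribution, exposing "dark knowledge".
    e = np.exp(z / T - np.max(z / T))
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence between softened teacher and student output distributions."""
    p = softmax(teacher_logits, T)   # teacher soft targets
    q = softmax(student_logits, T)   # student predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

t = np.array([3.0, 1.0, 0.2])
assert distillation_loss(t, t) < 1e-12          # identical outputs: zero loss
assert distillation_loss(np.array([0.2, 1.0, 3.0]), t) > 0.0  # mismatch: positive
```

Minimizing this term pushes the student's output distribution toward the teacher's, which is exactly the "make the outputs similar" goal stated above.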
Adjustable Real-time Style Transfer
AI paper review/Mobile-friendly
1. Introduction Style transfer: Synthesizing an image whose content is similar to one given image and whose style is similar to another. 1.1 Motivation Existing style-transfer methods have two main problems. (i) They generate only one stylization for a given content/style pair. (ii) They are highly sensitive to the hyper-parameters. 1.2 Goal T..
EagleEye: Fast Sub-net Evaluation for Efficient Neural Network Pruning
AI paper review/Model Compression
1. Introduction Pruning: Eliminating the computationally redundant parts of a trained DNN to obtain a smaller and more efficient pruned DNN. 1.1 Motivation The key to pruning a trained DNN is to find the sub-net with the highest accuracy with reasonably small search effort. Existing methods to solve this problem mainly focus on an evaluation process. The evaluation process ai..
Counterfactual Explanation Based on Gradual Construction for Deep Networks
AI paper review/Explainable AI
1. Counterfactual Explanation Counterfactual Explanation: Given input data classified as a certain class by a deep network, perturb a subset of the features in the input data such that the model is forced to predict the perturbed data as a target class. The framework for counterfactual explanation is described in Fig 1. From the perturbed data, we can interpret that the pre-trained model t..
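The perturbation idea above can be sketched with a toy model: starting from an input, take gradient steps on the input features until the prediction flips to the target class. This is a generic counterfactual search on a hypothetical 2-class softmax-linear classifier, not the paper's gradual-construction method:

```python
import numpy as np

# Hypothetical 2-class linear "network": softmax(W @ x + b).
W = np.array([[ 1.0, -1.0],
              [-1.0,  1.0]])
b = np.zeros(2)

def predict_proba(x):
    logits = W @ x + b
    e = np.exp(logits - logits.max())
    return e / e.sum()

def counterfactual(x, target, lr=0.1, steps=200):
    """Gradient ascent on log p(target | x) with respect to the input x."""
    x = x.copy()
    for _ in range(steps):
        p = predict_proba(x)
        # For a softmax-linear model: d log p[target] / dx = W[target] - p @ W
        grad = W[target] - p @ W
        x += lr * grad
        if predict_proba(x).argmax() == target:
            break                     # prediction flipped: counterfactual found
    return x

x0 = np.array([2.0, 0.0])             # classified as class 0
x_cf = counterfactual(x0, target=1)   # perturbed to be classified as class 1
print(predict_proba(x0).argmax(), predict_proba(x_cf).argmax())  # -> 0 1
```

The difference `x_cf - x0` shows which features had to change, and by how much, to flip the decision — the core object a counterfactual explanation interprets.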
A Disentangling Invertible Interpretation Network for Explaining Latent Representations
AI paper review/Explainable AI
1. Interpreting hidden representations 1.1 Invertible transformation of hidden representations Input image: \( x \in \mathbb{R}^{H \times W \times 3} \) Sub-network of \( f \) up to the hidden layer: \( E \) Latent (original) representation: \( z = E(x) \in \mathbb{R}^{N} \) Sub-network after the hidden layer: \( G \) \( f(x) = G(E(x)) \) In A Disentangling Inverti..
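The decomposition \( f(x) = G(E(x)) \) can be illustrated with a toy two-layer network, where \( E \) computes the hidden representation \( z \) and \( G \) maps it to the output. A minimal sketch with made-up layer shapes (the paper's actual networks are deep CNNs):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # first layer: input in R^3 -> hidden in R^4
W2 = rng.normal(size=(2, 4))   # second layer: hidden in R^4 -> output in R^2

def E(x):
    # Sub-network up to the hidden layer: z = E(x), here z in R^4 (N = 4)
    return np.maximum(W1 @ x, 0.0)

def G(z):
    # Sub-network after the hidden layer
    return W2 @ z

def f(x):
    # Full network as the composition of the two sub-networks
    return G(E(x))

x = rng.normal(size=3)
z = E(x)                          # the latent representation to be interpreted
assert np.allclose(f(x), G(z))    # f(x) = G(E(x)) holds by construction
```

Splitting \( f \) this way exposes \( z \) as an explicit object, which is what the paper's invertible interpretation network then transforms into disentangled factors.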