All Posts


    Knowledge Distillation via Softmax Regression Representation Learning

    1. Introduction Knowledge Distillation: training a smaller model (Student) from a high-capacity source model (Teacher) so as to retain most of the Teacher's performance. 1.1 Motivation The authors of this paper try to make the outputs of the student network similar to the outputs of the teacher network. To this end, they advocate a method that optimizes not only the output of..
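    As a reference point (not this paper's representation-learning loss), here is a minimal sketch of the classic softened-softmax distillation objective of Hinton et al.; the temperature `T` and mixing weight `alpha` are illustrative choices:

    ```python
    import tensorflow as tf

    def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
        # Soften both output distributions with temperature T, then match the
        # student to the teacher via cross-entropy on the soft targets.
        soft_teacher = tf.nn.softmax(teacher_logits / T)
        log_soft_student = tf.nn.log_softmax(student_logits / T)
        kd = -tf.reduce_mean(tf.reduce_sum(soft_teacher * log_soft_student, axis=-1))
        kd *= T ** 2  # rescale so gradient magnitudes stay comparable across temperatures
        # Standard supervised loss on the hard labels.
        ce = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=labels, logits=student_logits))
        return alpha * kd + (1.0 - alpha) * ce
    ```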

    Adjustable Real-time Style Transfer

    1. Introduction Style transfer: synthesizing an image whose content is similar to one given image and whose style is similar to another. 1.1 Motivation There are two main problems with existing style-transfer methods. (i) They generate only one stylization for a given content/style pair. (ii) They are highly sensitive to the hyper-parameters. 1.2 Goal T..

    EagleEye: Fast Sub-net Evaluation for Efficient Neural Network Pruning

    1. Introduction Pruning: eliminating the computationally redundant parts of a trained DNN to obtain a smaller, more efficient pruned DNN. 1.1 Motivation The key to pruning a trained DNN is to find the sub-net with the highest accuracy at reasonably small search cost. Existing methods mainly focus on an evaluation process, as sketched below. The evaluation process ai..
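    To make the search/evaluation split concrete, here is a generic candidate-evaluation loop (a sketch, not EagleEye's actual adaptive-BN evaluation; `prune_and_eval` is a hypothetical callback):

    ```python
    import random

    def search_subnets(prune_and_eval, num_candidates=50, num_layers=13):
        """Sample per-layer pruning ratios, score each candidate cheaply, and
        keep the best. `prune_and_eval` is assumed to prune a trained model
        by the given ratios and return a fast proxy accuracy on a small
        validation split."""
        best_acc, best_ratios = -1.0, None
        for _ in range(num_candidates):
            ratios = [random.uniform(0.0, 0.7) for _ in range(num_layers)]
            acc = prune_and_eval(ratios)
            if acc > best_acc:
                best_acc, best_ratios = acc, ratios
        return best_ratios, best_acc
    ```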

    Counterfactual Explanation Based on Gradual Construction for Deep Networks

    1. Counterfactual Explanation Counterfactual Explanation: given input data that a deep network classifies as some class, perturb a subset of the features in the input data such that the model is forced to predict a target class for the perturbed data. The framework for counterfactual explanation is described in Fig. 1. From the perturbed data, we can interpret that the pre-trained model t..
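    As a minimal illustration of the idea (plain gradient-based search, not the paper's gradual-construction algorithm), one can optimize a sparse perturbation until the model flips to the target class; `model` is assumed to output logits for a batch of one:

    ```python
    import tensorflow as tf

    def counterfactual(model, x, target_class, steps=200, lr=0.05, lam=0.1):
        # delta is the perturbation we optimize; the L1 penalty keeps the
        # change confined to a small subset of input features.
        delta = tf.Variable(tf.zeros_like(x))
        opt = tf.keras.optimizers.Adam(lr)
        target = tf.constant([target_class])
        for _ in range(steps):
            with tf.GradientTape() as tape:
                logits = model(x + delta)  # model is assumed to return logits
                cls_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
                    labels=target, logits=logits)
                loss = tf.reduce_mean(cls_loss) + lam * tf.reduce_sum(tf.abs(delta))
            grads = tape.gradient(loss, [delta])
            opt.apply_gradients(zip(grads, [delta]))
        return x + delta  # perturbed input the model should now classify as target_class
    ```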

    A Disentangling Invertible Interpretation Network for Explaining Latent Representations

    1. Interpreting hidden representations 1.1 Invertible transformation of hidden representations Input image: \( x \in \mathbb{R}^{H \times W \times 3} \) Sub-network of \( f \) up to the hidden layer: \( E \) Latent (original) representation: \( z = E(x) \in \mathbb{R}^{N} \) Sub-network after the hidden layer: \( G \), so that \( f(x) = G(E(x)) \) In A Disentangling Inverti..
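    A small sketch of the \( f(x) = G(E(x)) \) decomposition above, assuming a purely sequential Keras model; `layer_name` picks the hidden layer to interpret and is an illustrative choice:

    ```python
    import tensorflow as tf

    def split_model(model, layer_name):
        """Split a tf.keras.Sequential f into E (input -> hidden z) and
        G (z -> output), so that f(x) = G(E(x))."""
        idx = [layer.name for layer in model.layers].index(layer_name)
        E = tf.keras.Sequential(model.layers[:idx + 1])  # z = E(x)
        G = tf.keras.Sequential(model.layers[idx + 1:])  # maps z to f(x)
        return E, G
    ```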

    Cracking TFLite (3) - Quantization

    1. Quantization Note that this post covers only Python, Tensorflow-gpu 2.x, and Keras models. In the previous post we went as far as running inference with a TFLite model. This time, I will show you how to make a TFLite model lighter. The method is the Quantization that TFLite provides, and its benefits are as follows. Smaller storage size: a smaller model occupies less storage on the user's device. Less memory usage: a smaller model occupies less RAM. Reduced latency: inference runs faster. But you cannot have only the upsides..
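    For reference, post-training dynamic-range quantization with the standard TF2 converter API looks like this (`model` is assumed to be a trained tf.keras model):

    ```python
    import tensorflow as tf

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]  # dynamic-range quantization
    tflite_quant_model = converter.convert()

    with open("model_quant.tflite", "wb") as f:
        f.write(tflite_quant_model)
    ```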
