Review of Large Margin Deep Networks for Classification

Current deep learning research is pushing further in several directions. There are new network architectures appearing:

- Automatic liver tumor segmentation in CT with fully convolutional neural networks and object-based postprocessing
- Look Closer to See Better: Recurrent Attention Convolutional Neural Network for Fine-grained Image Recognition
- Dropout: A Simple Way to Prevent Neural Networks from Overfitting

There are new optimization techniques for enabling faster distributed DNN training:

- Sparsified SGD with Memory
- 1-Bit Stochastic Gradient Descent and its Application to Data-Parallel Distributed Training of Speech DNNs

And last, but not least, current research is also looking at new loss functions, which is the topic of this blog post:

- Large Margin Deep Networks for Classification

Under the motto 'You get what you optimize', the authors of the paper argue that the benefits of using a large-margin loss function include better robustness to input perturbations, and in section 4 of the paper, on the MNIST datase…
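To make the large-margin idea concrete, here is a minimal PyTorch sketch of a hinge penalty on a first-order estimate of the input-space margin, in the spirit of the paper's formulation. This is an illustrative approximation, not the paper's exact method: the paper applies the margin at multiple layers and aggregates over all wrong classes, while this sketch uses only the input layer and the single highest-scoring wrong class. The names `large_margin_loss`, `model`, `x`, `targets`, and the value of `gamma` are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def large_margin_loss(model, x, targets, gamma=1.0, eps=1e-6):
    """Hinge on a first-order estimate of the input-space margin (sketch).

    For each sample, take the correct-class logit f_i and the highest
    wrong-class logit f_j; the distance to their decision boundary is
    approximated by (f_i - f_j) / ||grad_x (f_i - f_j)||_2, and samples
    whose estimated margin falls below gamma are penalized.
    """
    x = x.clone().requires_grad_(True)
    logits = model(x)

    # f_i: logit of the correct class; f_j: best wrong-class logit.
    correct = logits.gather(1, targets.unsqueeze(1)).squeeze(1)
    masked = logits.scatter(1, targets.unsqueeze(1), float('-inf'))
    runner_up, _ = masked.max(dim=1)
    diff = correct - runner_up

    # Per-sample gradient of (f_i - f_j) w.r.t. the input; summing is safe
    # because each sample's logits depend only on its own input.
    # create_graph=True keeps the term differentiable w.r.t. model weights.
    grad = torch.autograd.grad(diff.sum(), x, create_graph=True)[0]
    grad_norm = grad.flatten(1).norm(dim=1)

    margin = diff / (grad_norm + eps)
    return F.relu(gamma - margin).mean()
```

In training, a term like this would typically be added to (or substituted for) the cross-entropy loss; note that the double backward implied by `create_graph=True` makes each optimization step noticeably more expensive than plain cross-entropy.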