09/4 | Data visualization
1. Geoffrey Hinton and Sam Roweis, Stochastic Neighbor Embedding.
2. Laurens van der Maaten and Geoffrey Hinton, Visualizing Data using t-SNE.
Gal Yona, Aviv Netanyahu | Visualizing High Dimensional Data
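For orientation (a sketch, not the papers' exact notation): both readings convert pairwise distances into probabilities and minimize a KL divergence; t-SNE's change is the heavy-tailed kernel in the embedding space.

```latex
% High-dimensional affinities: Gaussian kernel with per-point bandwidth \sigma_i
\[ p_{j\mid i} = \frac{\exp\!\left(-\lVert x_i - x_j\rVert^2 / 2\sigma_i^2\right)}
                     {\sum_{k \neq i} \exp\!\left(-\lVert x_i - x_k\rVert^2 / 2\sigma_i^2\right)} \]
% Low-dimensional affinities: t-SNE replaces the Gaussian with a Student-t kernel
\[ q_{ij} = \frac{\left(1 + \lVert y_i - y_j\rVert^2\right)^{-1}}
                 {\sum_{k \neq l} \left(1 + \lVert y_k - y_l\rVert^2\right)^{-1}} \]
% Both minimize the KL divergence between the two distributions
\[ C = \mathrm{KL}(P \,\Vert\, Q) = \sum_{i \neq j} p_{ij} \log \frac{p_{ij}}{q_{ij}} \]
```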
16/4 | Passover vacation
23/4 | Image synthesis from neural activities
1. Gatys, Ecker, Bethge, Image Style Transfer Using Convolutional Neural Networks.
2. Aravindh Mahendran, Andrea Vedaldi, Visualizing Deep Convolutional Neural Networks Using Natural Pre-images.
3. Anh Nguyen, Alexey Dosovitskiy, Jason Yosinski, Thomas Brox, Jeff Clune, Synthesizing the preferred inputs for neurons in neural networks via deep generator networks.
Liad Pollak, Roman Beliy | DNN Visualization
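A compact frame for all three readings (hedged sketch; notation adapted): given network features \(\Phi\) and a target code \(\Phi_0\), synthesize an image by optimizing in pixel space against a natural-image regularizer \(R\).

```latex
\[ x^{*} = \arg\min_{x} \; \lVert \Phi(x) - \Phi_0 \rVert^2 + \lambda\, R(x) \]
% Style transfer and preferred-input synthesis swap the matching term
% (Gram-matrix statistics, or maximizing a single neuron's activation)
% while keeping the same optimize-over-the-input recipe.
```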
30/4 | NN as universal approximators
1. George Cybenko, 1989, Approximation by superpositions of a sigmoidal function.
2. Kurt Hornik, Maxwell Stinchcombe, and Halbert White, 1989, Multilayer feedforward networks are universal approximators.
3. Holden Lee, Rong Ge, Andrej Risteski, Tengyu Ma, Sanjeev Arora, On the ability of neural nets to express distributions.
Yotam Amar, Heli Ben Hamu | NN as Universal Approximators
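The classical statement behind the first two readings, paraphrased: one hidden layer with a sigmoidal activation \(\sigma\) is dense in the continuous functions on the cube.

```latex
\[ \forall f \in C([0,1]^n),\ \varepsilon > 0:\quad
   \exists\, N, \alpha_i, w_i, b_i \ \text{ s.t. }\
   \sup_{x \in [0,1]^n} \Big| f(x) - \sum_{i=1}^{N} \alpha_i\, \sigma\!\big(w_i^{\top} x + b_i\big) \Big| < \varepsilon \]
% Note: N may grow arbitrarily as \varepsilon shrinks; the theorem says
% nothing about how many units are needed or how to find the weights.
```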
07/5 | The VC-dimension of NN
1. Eduardo Sontag, VC dimension of neural networks.
2. Peter Bartlett, Vapnik-Chervonenkis Dimension of Neural Nets.
3. Chiyuan Zhang et al., Understanding deep learning requires rethinking generalization.
Liran Szlak, Shira Kritchman | VC-dimension
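For reference (standard definitions, not specific to these papers): the VC-dimension of a hypothesis class \(H\) is the size of the largest shattered set, and it controls the uniform-convergence sample complexity.

```latex
\[ \mathrm{VCdim}(H) = \max\big\{\, m : \exists\, x_1,\dots,x_m \ \text{ with }\
   \big|\{(h(x_1),\dots,h(x_m)) : h \in H\}\big| = 2^m \,\big\} \]
\[ m(\varepsilon, \delta) = O\!\left( \frac{\mathrm{VCdim}(H) + \log(1/\delta)}{\varepsilon^2} \right) \]
% Zhang et al.'s point: modern nets can fit random labels on their
% entire training set, so bounds of this type alone cannot explain
% why they generalize.
```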
14/5 | Lag BaOmer
21/5 | Faculty trip
28/5 | Optimization and stability
1. Moritz Hardt, Benjamin Recht, Yoram Singer, Train faster, generalize better: Stability of stochastic gradient descent.
2. D. Soudry, Y. Carmon, No bad local minima: Data independent training error guarantees for multilayer neural networks.
3. Shai Shalev-Shwartz, Ohad Shamir and Shaked Shammah, Failures of Deep Learning.
4. Shai Shalev-Shwartz and Amnon Shashua, On the Sample Complexity of End-to-end Training vs. Semantic Abstraction Training.
Dror Kaufman, Nitzan Artzi | Optimization and stability
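A hedged summary of the first reading's mechanism: uniform stability bounds the expected generalization gap, and for convex, Lipschitz, smooth losses SGD is stable in proportion to its accumulated step sizes.

```latex
\[ \Big| \mathbb{E}\big[ R[A(S)] - R_{\mathrm{emp}}[A(S)] \big] \Big| \le \varepsilon_{\mathrm{stab}} \]
\[ \varepsilon_{\mathrm{stab}} \le \frac{2L^2}{n} \sum_{t=1}^{T} \alpha_t
   \qquad (\text{convex, } L\text{-Lipschitz, } \beta\text{-smooth, } \alpha_t \le 2/\beta) \]
% Fewer passes and smaller steps mean more stability and hence better
% generalization: "train faster, generalize better."
```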
04/6 | Local minima and generalization
1. Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio, Sharp Minima Can Generalize For Deep Nets.
2. Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang, On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima.
Amnon Geifman, Guy Teller | Local minima
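One identity captures the tension between these two readings (sketch): the positive homogeneity of ReLU lets weights be rescaled without changing the network's function, while making curvature-based "sharpness" measures arbitrarily large or small.

```latex
\[ \sigma(\alpha z) = \alpha\, \sigma(z) \ \ (\alpha > 0)
   \quad \Longrightarrow \quad
   W_2\, \sigma(W_1 x) = \tfrac{1}{\alpha}\, W_2\, \sigma(\alpha W_1 x) \]
% So the sharp-vs-flat-minima picture (Keskar et al.'s large-batch
% observation) cannot by itself explain generalization without fixing
% a parameterization (Dinh et al.'s critique).
```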
11/6 | Generative adversarial networks
1. I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, 2014, Generative Adversarial Nets.
2. I. J. Goodfellow, NIPS 2016 tutorial.
3. Mehdi Mirza, Simon Osindero, Conditional Generative Adversarial Nets.
4. Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, Alexei A. Efros, Image-to-Image Translation with Conditional Adversarial Networks.
5. Emily Denton, Soumith Chintala, Arthur Szlam, Rob Fergus, Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks.
6. Junbo Zhao, Michael Mathieu, Yann LeCun, Energy-based Generative Adversarial Network.
Alona Faktor, Tal Kaminker | GANs
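The common starting point for all six readings is Goodfellow et al.'s two-player objective:

```latex
\[ \min_G \max_D \; V(D, G)
   = \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big]
   + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big] \]
% At the inner optimum D^*(x) = p_data(x) / (p_data(x) + p_g(x)), the
% outer problem minimizes the Jensen–Shannon divergence between p_data
% and p_g; the conditional and energy-based variants change the inputs
% or the discriminator's loss, not this basic structure.
```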
18/6 | Word2vec
1. Yoshua Bengio, Rejean Ducharme, Pascal Vincent, and Christian Jauvin, 2003, A neural probabilistic language model.
2. Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean, NIPS, 2013, Distributed representations of words and phrases and their compositionality.
3. Omer Levy and Yoav Goldberg, Neural Word Embedding as Implicit Matrix Factorization.
Adam Yaari, Sivan Biham | Word2vec
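For orientation (sketch of skip-gram with negative sampling from reading 2, and the factorization view from reading 3): for a word \(w\) with observed context \(c\) and \(k\) negative samples,

```latex
\[ \ell(w, c) = \log \sigma(\vec{v}_c \cdot \vec{v}_w)
   + \sum_{i=1}^{k} \mathbb{E}_{c_i \sim P_n}\big[\log \sigma(-\vec{v}_{c_i} \cdot \vec{v}_w)\big] \]
% Levy & Goldberg: at its optimum, this objective implicitly factorizes
% a shifted pointwise-mutual-information matrix,
\[ \vec{v}_w \cdot \vec{v}_c = \mathrm{PMI}(w, c) - \log k \]
```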
25/6 | Is deep better than shallow?
1. G. F. Montufar, R. Pascanu, K. Cho, and Y. Bengio, NIPS 2014, On the number of linear regions of deep neural networks.
2. Matus Telgarsky, 2016, Benefits of depth in neural networks.
3. Ronen Eldan and Ohad Shamir, The Power of Depth for Feedforward Neural Networks.
4. Itay Safran and Ohad Shamir, Depth Separation in ReLU Networks for Approximating Smooth Non-Linear Functions.
Gal Benor, Itay Safran |
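A hedged sketch of the separation mechanism in readings 1–2: composing the "sawtooth" map doubles the number of linear pieces at every layer, which a shallow ReLU network must pay for in width.

```latex
\[ m(x) = \begin{cases} 2x & 0 \le x \le \tfrac{1}{2} \\ 2(1 - x) & \tfrac{1}{2} < x \le 1 \end{cases}
   \qquad m^{\circ k} \ \text{has } 2^{k-1} \text{ "teeth" } (2^k \text{ linear pieces}) \]
% A depth-O(k) ReLU net computes m^{\circ k} with O(k) units, while any
% fixed-depth net needs width exponential in k to approximate it
% (Telgarsky); Eldan–Shamir give an analogous 3-vs-2-layer separation.
```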
02/7 | Training without ordered pairs
1. Gintare Karolina Dziugaite, Daniel M. Roy, Zoubin Ghahramani, Training generative neural networks via Maximum Mean Discrepancy optimization.
2. Jun-Yan Zhu, Taesung Park, Phillip Isola, Alexei A. Efros, Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks.
3. Yaniv Taigman, Adam Polyak, Lior Wolf, Unsupervised cross-domain image generation.
Ekaterina Kravchenko, Assaf Shocher |
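Two objectives anchor these readings (sketch): MMD matches generated and real samples through kernel statistics without any pairing, and CycleGAN enforces consistency between unpaired domains via two generators \(G: X \to Y\) and \(F: Y \to X\).

```latex
\[ \mathrm{MMD}^2(P, Q) = \mathbb{E}_{x,x' \sim P}[k(x, x')]
   - 2\,\mathbb{E}_{x \sim P,\, y \sim Q}[k(x, y)]
   + \mathbb{E}_{y,y' \sim Q}[k(y, y')] \]
\[ \mathcal{L}_{\mathrm{cyc}}(G, F)
   = \mathbb{E}_{x}\big[\lVert F(G(x)) - x \rVert_1\big]
   + \mathbb{E}_{y}\big[\lVert G(F(y)) - y \rVert_1\big] \]
```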
09/7 | Variational auto-encoders
1. Yann Ollivier, Auto-encoders: reconstruction versus compression.
2. Diederik P Kingma, Max Welling, Auto-Encoding Variational Bayes.
Adi Watzman, Chen Attias |
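The second reading's central quantity, for reference: the variational lower bound (ELBO) on the data log-likelihood, with encoder \(q_\phi\) and decoder \(p_\theta\).

```latex
\[ \log p_\theta(x) \;\ge\;
   \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big]
   - \mathrm{KL}\big(q_\phi(z \mid x) \,\Vert\, p(z)\big) \]
% The reparameterization z = \mu_\phi(x) + \sigma_\phi(x) \odot \epsilon,
% \epsilon \sim \mathcal{N}(0, I), makes the expectation differentiable
% in \phi, turning the reconstruction-vs-compression trade-off of
% reading 1 into a single trainable objective.
```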