Intro to Deep Learning


Transcript of Intro to Deep Learning

Page 1: Intro to Deep Learning

Introduction to Neural Networks and Theano

Kushal Arora ([email protected])

Page 2: Intro to Deep Learning

Outline

1. Why Care?

2. Why it works?

3. Logistic Regression to Neural Networks

4. Deep Networks and Issues

5. Autoencoders and Stacked Autoencoders

6. Why Deep Learning Works

7. Theano Overview

8. Code Hands On

Page 3: Intro to Deep Learning

Why Care?

● Has improved the state of the art on various tasks

– Speech Recognition

● The Microsoft Audio Video Indexing Service speech system uses deep learning

● Brought down the word error rate by 30% compared to the SOTA GMM-based system

– Natural Language Understanding/Processing

● Neural-net language models are the current state of the art in language modeling, sentiment analysis, paraphrase detection, and many other NLP tasks

● The SENNA system uses neural embeddings for various NLP tasks such as POS tagging, NER, and chunking

● SENNA is not only better than the SOTA but also much faster

– Object Recognition

● The breakthrough started in 2006 with the MNIST dataset. Still the best (0.27% error)

● The SOTA error rate on the ImageNet dataset is down to 15.3%, compared to 26.1%

Page 4: Intro to Deep Learning

Why Care?

● Multi-Task and Transfer Learning

– Ability to exploit the commonalities of different learning tasks by transferring learned knowledge

– Example: Google Image Search ("woman in a red dress in a garden")

– Another example is Google's image captioning (http://googleresearch.blogspot.com/2014/11/a-picture-is-worth-thousand-coherent.html)

Page 5: Intro to Deep Learning

Why it works?

● Learning Representations

– Time to move beyond hand-crafted features

● Distributed Representation

– More robust and expressive features

● Learning Multiple Levels of Representation

– Learning multiple levels of abstraction

Page 6: Intro to Deep Learning

#1 Learning Representation

• Handcrafting features is inefficient and time-consuming

• Must be done again for each task and domain

• The features are often incomplete and highly correlated

Page 7: Intro to Deep Learning

#2 Distributed Representation

● Features can be non-mutually exclusive, e.g. in language.

● Need to move beyond one-hot representations such as those produced by clustering algorithms, k-nearest neighbors, etc.

● One-hot representations need O(N) parameters/examples for O(N) input regions, but we can do better: O(k) parameters/inputs for O(2^k) input regions (see the sketch below).

● Similar to multi-clustering, where multiple clustering algorithms are applied in parallel or the same clustering algorithm is applied to different input regions.
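As a minimal illustration of the counting argument above (my own NumPy sketch, not from the slides), compare a one-hot encoding of N regions with a distributed encoding that uses only k binary features:

```python
import numpy as np

# One-hot: N input regions need N dimensions (one active at a time),
# and roughly O(N) parameters/examples downstream.
N = 8
one_hot = np.eye(N)

# Distributed: k binary features can index 2**k regions.
k = 3                                    # 2**3 == 8 regions from only 3 features
distributed = np.array([[(i >> j) & 1 for j in range(k)] for i in range(N)])

print(one_hot.shape)        # (8, 8)
print(distributed.shape)    # (8, 3) -- same 8 regions, far fewer dimensions
```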

Page 8: Intro to Deep Learning

#3 Learning Multiple Levels of Representation

● Deep architectures promote reuse of features

● Deep architectures can potentially lead to progressively more abstract features at higher layers

● Good intermediate representation can be shared across tasks

● Insufficient depth can be exponentially inefficient

 

Page 9: Intro to Deep Learning

The Basics

● The building block of the network is a neuron.

● The basic terminology:

– Input

– Activation layer

– Output

● The activation function is usually of the form

h_{w,b}(x) = f(w^T · x + b)

mapping an input x to an output activation, where f is the sigmoid

f(z) = 1 / (1 + e^(−z))
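As a concrete illustration of the formulas above (a minimal NumPy sketch, not code from the slides), a single sigmoid neuron looks like this:

```python
import numpy as np

def sigmoid(z):
    """f(z) = 1 / (1 + e^(-z))"""
    return 1.0 / (1.0 + np.exp(-z))

def neuron(x, w, b):
    """h_{w,b}(x) = f(w^T x + b): one neuron's output for input x."""
    return sigmoid(np.dot(w, x) + b)

x = np.array([0.5, -1.0, 2.0])   # input vector
w = np.array([0.1, 0.4, -0.3])   # weights
b = 0.2                          # bias
print(neuron(x, w, b))           # a value in (0, 1)
```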

Page 10: Intro to Deep Learning

Logistic Regression

● Logistic regression is a probabilistic, linear classifier

● Parameterized by W and b

● Each output is the probability of the input belonging to class Y_i

● The probability of x being a member of class Y_i is calculated as

P(Y = Y_i | x) = softmax_i(W·x + b)

where

softmax_i(W·x + b) = exp(W_i·x + b_i) / Σ_j exp(W_j·x + b_j)

and

y_pred = argmax_{Y_i} P(Y = Y_i | x)
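The same model is easy to express symbolically in Theano. A minimal sketch (hypothetical names and sizes, e.g. 784-dimensional inputs and 10 classes, not code from the slides):

```python
import numpy as np
import theano
import theano.tensor as T

n_in, n_out = 784, 10                        # assumed input size / number of classes

x = T.matrix('x')                            # minibatch of inputs, one row per example
W = theano.shared(np.zeros((n_in, n_out), dtype=theano.config.floatX), name='W')
b = theano.shared(np.zeros(n_out, dtype=theano.config.floatX), name='b')

# P(Y = Y_i | x) = softmax_i(W·x + b)
p_y_given_x = T.nnet.softmax(T.dot(x, W) + b)

# y_pred = argmax_{Y_i} P(Y = Y_i | x)
y_pred = T.argmax(p_y_given_x, axis=1)

predict = theano.function([x], y_pred)       # compile the symbolic graph into a callable
```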

Page 11: Intro to Deep Learning

Logistic Regression – Training

● The loss function for logistic regression is the negative log likelihood, defined as

NLL(W, b) = − Σ_i log P(Y = y^(i) | x^(i); W, b)

● The loss function is minimized using stochastic gradient descent or mini-batch stochastic gradient descent.

● The function is convex and will always reach the global minimum, which is not true for the other architectures we will discuss.

● Cannot solve the famous XOR problem.
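Continuing the hypothetical Theano sketch above (still my own names, not the slides' code), the negative log likelihood and a plain SGD update step could look like this:

```python
y = T.ivector('y')                           # true class labels for the minibatch

# NLL: mean over the minibatch of -log P(Y = y^(i) | x^(i))
nll = -T.mean(T.log(p_y_given_x)[T.arange(y.shape[0]), y])

# Theano derives the gradients symbolically; no hand-written derivative formulas.
g_W, g_b = T.grad(nll, [W, b])

learning_rate = 0.1
train_step = theano.function(
    inputs=[x, y],
    outputs=nll,
    updates=[(W, W - learning_rate * g_W),
             (b, b - learning_rate * g_b)])

# One SGD step on a random toy minibatch:
xb = np.random.randn(32, n_in).astype(theano.config.floatX)
yb = np.random.randint(0, n_out, size=32).astype('int32')
print(train_step(xb, yb))
```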

Page 12: Intro to Deep Learning

Multilayer Perceptron

● An MLP can be viewed as a logistic regression classifier where the input is first transformed using a learned non-linear transformation.

● The non-linear transformation can be a sigmoid or a tanh function.

● Due to the addition of the non-linear hidden layer, the loss function is now non-convex.

● Though there is no sure way to avoid local minima, certain empirical initializations of the weight matrices help.
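A minimal Theano sketch of this idea (hypothetical layer sizes and names, not code from the slides): a tanh hidden layer feeding a softmax output layer:

```python
import numpy as np
import theano
import theano.tensor as T

rng = np.random.RandomState(1234)
n_in, n_hidden, n_out = 784, 500, 10          # assumed layer sizes

x = T.matrix('x')

# Hidden layer: h = tanh(x·W1 + b1) -- the learned non-linear transformation
W1 = theano.shared(rng.uniform(-0.1, 0.1, (n_in, n_hidden)).astype(theano.config.floatX))
b1 = theano.shared(np.zeros(n_hidden, dtype=theano.config.floatX))
h = T.tanh(T.dot(x, W1) + b1)

# Output layer: ordinary logistic regression on top of the hidden representation
W2 = theano.shared(np.zeros((n_hidden, n_out), dtype=theano.config.floatX))
b2 = theano.shared(np.zeros(n_out, dtype=theano.config.floatX))
p_y_given_x = T.nnet.softmax(T.dot(h, W2) + b2)
y_pred = T.argmax(p_y_given_x, axis=1)
```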

Page 13: Intro to Deep Learning

Multilayer Perceptron – Training

● Let D be the size of the input vector and L the number of output labels. The output vector (of size L) is given as

f(x) = G(b^(2) + W^(2) · s(b^(1) + W^(1) · x))

where G and s are activation functions.

● The hard part of training is calculating the gradient for stochastic gradient descent and, of course, avoiding the local minima.

● Back-propagation is used to avoid recalculating the gradient at each step (it is like dynamic programming for the chain-rule recursion of derivatives).
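In Theano, this chain-rule bookkeeping is handled by symbolic differentiation. A sketch (reusing the hypothetical MLP symbols from the previous sketch) of obtaining all parameter gradients in one call:

```python
y = T.ivector('y')
nll = -T.mean(T.log(p_y_given_x)[T.arange(y.shape[0]), y])

# T.grad walks the expression graph and applies the chain rule (i.e. backprop) for us.
params = [W1, b1, W2, b2]
grads = T.grad(nll, params)

learning_rate = 0.01
updates = [(p, p - learning_rate * g) for p, g in zip(params, grads)]
train_step = theano.function([x, y], nll, updates=updates)
```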

 


Page 14: Intro to Deep Learning

Deep Neural Nets and Issues

● Generally, networks with more than 2-3 hidden layers are called deep nets.

● Pre-2006, they performed worse than shallow networks.

● Though the exact cause is hard to analyze, experimental results suggested that gradient-based training for deep nets got stuck in poor local minima due to gradient diffusion.

● It was difficult to get good generalization.

● Random initialization, which worked for shallow networks, couldn't be used for deep networks.

● The issue was solved using unsupervised pre-training of the hidden layers followed by fine-tuning, i.e. supervised training.

Page 15: Intro to Deep Learning

Auto-Encoders

● Multilayer neural nets with the target output = input.

● Reconstruction = decoder(encoder(input)), for example with

encoder: a = tanh(W·x + b)
decoder: x' = tanh(W^T·a + c)

● The objective is to minimize the reconstruction error

L = ||x' − x||^2

● PCA can be seen as an auto-encoder with a = W·x and x' = W^T·a.

● So an autoencoder can be seen as a non-linear PCA that tries to learn a latent representation of the input.
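A minimal Theano sketch of these equations with tied weights (hypothetical sizes and names, not code from the slides):

```python
import numpy as np
import theano
import theano.tensor as T

rng = np.random.RandomState(0)
n_visible, n_hidden = 784, 200               # assumed sizes

x = T.matrix('x')
W = theano.shared(rng.uniform(-0.1, 0.1, (n_visible, n_hidden)).astype(theano.config.floatX))
b = theano.shared(np.zeros(n_hidden, dtype=theano.config.floatX))   # encoder bias
c = theano.shared(np.zeros(n_visible, dtype=theano.config.floatX))  # decoder bias

a = T.tanh(T.dot(x, W) + b)           # encoder:  a  = tanh(W·x + b)
x_recon = T.tanh(T.dot(a, W.T) + c)   # decoder:  x' = tanh(W^T·a + c), tied weights

# Reconstruction error L = ||x' - x||^2, averaged over the minibatch
loss = T.mean(T.sum((x_recon - x) ** 2, axis=1))

params = [W, b, c]
grads = T.grad(loss, params)
train_step = theano.function([x], loss,
    updates=[(p, p - 0.01 * g) for p, g in zip(params, grads)])
```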

Page 16: Intro to Deep Learning

Auto-Encoders – Training

Page 17: Intro to Deep Learning

Stacked Auto-Encoders

● One way of creating deep networks.

● Another is the Deep Belief Network, which is made of stacked RBMs trained greedily.

● Training is divided into two steps:

– Pre-training

– Fine-tuning

● In pre-training, the auto-encoders are trained one at a time, layer by layer (see the sketch after this list).

● In the second phase, i.e. fine-tuning, the whole network is trained to minimize the negative log likelihood of its predictions.

● Pre-training is unsupervised but fine-tuning is supervised.
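A schematic sketch of the greedy pre-training phase (toy data, hypothetical sizes, and only a comment where fine-tuning would go; an outline under those assumptions rather than the slides' code):

```python
import numpy as np
import theano
import theano.tensor as T

rng = np.random.RandomState(0)
floatX = theano.config.floatX
layer_sizes = [784, 256, 64]                   # assumed architecture
X_train = rng.rand(500, 784).astype(floatX)    # toy unlabelled data

# ---- Phase 1: greedy unsupervised pre-training, one autoencoder per layer ----
inputs = X_train
pretrained = []
for n_in, n_hid in zip(layer_sizes[:-1], layer_sizes[1:]):
    W = theano.shared(rng.uniform(-0.1, 0.1, (n_in, n_hid)).astype(floatX))
    b = theano.shared(np.zeros(n_hid, dtype=floatX))
    c = theano.shared(np.zeros(n_in, dtype=floatX))

    x = T.matrix('x')
    a = T.tanh(T.dot(x, W) + b)                # encoder
    x_recon = T.tanh(T.dot(a, W.T) + c)        # tied-weight decoder
    loss = T.mean(T.sum((x_recon - x) ** 2, axis=1))
    grads = T.grad(loss, [W, b, c])
    train = theano.function([x], loss,
        updates=[(p, p - 0.01 * g) for p, g in zip([W, b, c], grads)])
    encode = theano.function([x], a)

    for epoch in range(5):                     # a few toy epochs per layer
        train(inputs)
    inputs = encode(inputs)                    # codes become the next layer's input
    pretrained.append((W, b))

# ---- Phase 2: fine-tuning (not shown) would stack the pre-trained layers,
# put a softmax classifier on top, and train the whole network on the
# supervised negative log likelihood, as in the MLP sketch earlier.
```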

Page 18: Intro to Deep Learning

Pre-Training

Page 19: Intro to Deep Learning

Why Pre-Training Works

● Hard to know exactly, as deep nets are hard to analyze

● Regularization Hypothesis

– Pre-training acts as an added regularization term, leading to better generalization.

– It can be seen as: a good representation of P(X) leads to a good representation of P(Y|X)

● Optimization Hypothesis

– Pre-training leads to a weight initialization that restricts the parameter search to a region near better local minima

– These minima are not reachable via random initialization

Page 20: Intro to Deep Learning

Other Type of Neural Networks

● Recurrent Neural Nets

● Recursive Neural Networks

● Long Short Term Memory Architecture

● Convolutional Neural Networks

Page 21: Intro to Deep Learning

Libraries to use

● Theano/Pylearn2 (U Montreal, Python)

● Caffe (UCB, C++, Fast, CUDA based)

● Torch7 (NYU, Lua, supported by Facebook)

Page 22: Intro to Deep Learning

Introduction to Theano

What is Theano?

Theano is a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently.

Why should it be used?

– Tighter integration with numpy

– Transparent use of GPU

– Efficient symbolic representation

– Does lots of heavy lifting like gradient calculation for you 

– Uses simple abstractions for tensors and matrices so you don't have to deal with them directly
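A minimal end-to-end example of that workflow on a toy expression of my own (not from the slides): define a symbolic expression, let Theano derive its gradient, compile, and evaluate:

```python
import theano
import theano.tensor as T

# 1. Declare symbolic variables and build an expression.
a = T.dscalar('a')
b = T.dscalar('b')
expr = a ** 2 + 3 * a * b

# 2. Theano derives the gradient symbolically: d(expr)/da = 2*a + 3*b.
g = T.grad(expr, a)

# 3. Compile the graphs into callable (optimized, possibly GPU-backed) functions.
f = theano.function([a, b], expr)
f_grad = theano.function([a, b], g)

print(f(2.0, 3.0))        # 2**2 + 3*2*3 = 22.0
print(f_grad(2.0, 3.0))   # 2*2 + 3*3 = 13.0
```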

Page 23: Intro to Deep Learning

 Theano Basics (Tutorial)

● How To:

– Declare an expression

– Compute gradients

● How an expression is evaluated (link)

● Debugging

Page 25: Intro to Deep Learning

References

● [Bengio07] Y. Bengio, P. Lamblin, D. Popovici and H. Larochelle, "Greedy Layer-Wise Training of Deep Networks," in Advances in Neural Information Processing Systems 19 (NIPS'06), pages 153-160, MIT Press, 2007.

● [Bengio09] Y. Bengio, "Learning Deep Architectures for AI," Foundations and Trends in Machine Learning 1(2), pages 1-127, 2009.

● [Xavier10] X. Glorot and Y. Bengio, "Understanding the Difficulty of Training Deep Feedforward Neural Networks," AISTATS 2010.

● Y. Bengio, A. Courville, and P. Vincent, "Representation Learning: A Review and New Perspectives," IEEE Transactions on Pattern Analysis and Machine Intelligence 35(8), pages 1798-1828, 2013.

● Deep Learning for NLP (http://nlp.stanford.edu/courses/NAACL2013/)

Page 26: Intro to Deep Learning

Thank You!