CS60010: Deep Learning (sudeshna/courses/DL18/lec5.pdf)
CS60010: Deep Learning
Sudeshna Sarkar Spring 2018
16 Jan 2018
FFN
• Goal: Approximate some unknown ideal function f : X → Y
• Ideal classifier: y = f*(x), with input x and category y
• Feedforward network: define a parametric mapping y = f(x; θ)
• Learn parameters θ to get a good approximation to f* from available samples
• Function f is a composition of many different functions
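As a minimal sketch (the layer sizes and names here are illustrative, not from the slides), a feedforward network is literally a composition of per-layer functions, y = f(x; θ) = f2(f1(x)):

```python
import numpy as np

def relu(z):
    # elementwise max(0, z), used as the hidden activation
    return np.maximum(0.0, z)

def layer(x, W, b, activation):
    # one layer: an affine map followed by an activation
    return activation(W @ x + b)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # f1: 3 -> 4
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)  # f2: 4 -> 2

x = rng.normal(size=3)
h = layer(x, W1, b1, relu)           # f1: hidden layer
y = layer(h, W2, b2, lambda z: z)    # f2: linear output layer
```

The parameters θ are simply the collection (W1, b1, W2, b2); learning chooses their values from data.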
Gradient Descent
[Figure: loss surface over weights (W_1, W_2); a step is taken from the original W in the negative gradient direction]
Slides based on cs231n by Fei-Fei Li & Andrej Karpathy & Justin Johnson
The effects of step size (or "learning rate")
Stochastic Gradient Descent
[Figure: loss contours over (W_1, W_2), starting from the original W; true gradients in blue, minibatch gradients in red]
Gradients are noisy but still make good progress on average
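A small sketch of the idea (the least-squares problem, sizes, and learning rate are illustrative assumptions): each minibatch gradient is a noisy estimate of the true gradient, but its expectation matches it, so the noisy steps still make good progress on average.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
w_true = np.array([2.0, -1.0])
y = X @ w_true + 0.1 * rng.normal(size=1000)   # noisy linear data

def grad(w, Xb, yb):
    # gradient of the mean squared error (1/2n)*||Xb w - yb||^2
    return Xb.T @ (Xb @ w - yb) / len(yb)

w = np.zeros(2)
lr = 0.1
for step in range(200):
    idx = rng.integers(0, len(X), size=32)  # sample a minibatch
    w -= lr * grad(w, X[idx], y[idx])       # SGD update
```

Despite never seeing the full-batch gradient, w ends up close to w_true.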
Cost functions:
• In most cases, our parametric model defines a distribution p(y|x; θ)
• Use the principle of maximum likelihood
• The cost function is often the negative log-likelihood, equivalently described as the cross-entropy between the training data and the model distribution:
J(θ) = −E_{x,y∼p̂_data} log p_model(y|x)
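For a classifier, the cost above reduces to averaging the negative log-probability the model assigns to the correct label. A minimal sketch (the probabilities and labels are made-up example values):

```python
import numpy as np

def nll(probs, labels):
    # probs[i, k] = p_model(y = k | x_i); labels[i] in {0, ..., K-1}
    # J = -(1/n) * sum_i log p_model(y_i | x_i)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))

probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
labels = np.array([0, 1])
loss = nll(probs, labels)   # -(log 0.7 + log 0.8) / 2
```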
Conditional Distributions and Cross-Entropy
J(θ) = −E_{x,y∼p̂_data} log p_model(y|x)
• The specific form of the cost function changes from model to model, depending on the specific form of log p_model
• For example, if p_model(y|x) = N(y; f(x; θ), I), then we recover the mean squared error cost:
J(θ) = (1/2) E_{x,y∼p̂_data} ||y − f(x; θ)||² + const
• For predicting the mean of a Gaussian, this equivalence between maximum likelihood estimation with an output distribution and minimization of mean squared error holds regardless of the form of f
• Specifying a model p(y|x) automatically determines a cost function −log p(y|x)
• The gradient of the cost function must be large and predictable enough to serve as a good guide for the learning algorithm; output units that saturate make the gradient vanish
• The negative log-likelihood helps to avoid this problem for many models
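The Gaussian-to-MSE equivalence can be checked numerically. A sketch (the vectors are arbitrary example values): the negative log-density of N(y; f(x), I) equals (1/2)||y − f(x)||² plus a constant that does not depend on θ.

```python
import numpy as np

def gaussian_nll(y, mean):
    # -log N(y; mean, I) = 0.5*||y - mean||^2 + (d/2)*log(2*pi)
    d = y.size
    return 0.5 * np.sum((y - mean) ** 2) + 0.5 * d * np.log(2 * np.pi)

y = np.array([1.0, 2.0])
f_x = np.array([0.5, 2.5])       # model prediction f(x; theta)
mse_term = 0.5 * np.sum((y - f_x) ** 2)   # 0.25 here
```

Since the additive constant is independent of the parameters, minimizing the NLL and minimizing the squared-error term pick the same f.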
Learning Conditional Statistics
• Sometimes we merely want to predict some statistic of y conditioned on x, using a specialized loss function
• For example, we may have a predictor f(x; θ) that we wish to use to predict the mean of y
• With a sufficiently powerful neural network, we can think of the NN as being able to represent any function f from a wide class of functions
• View the cost function as a functional rather than just a function; a functional is a mapping from functions to real numbers
• We can thus think of learning as choosing a function rather than a set of parameters
• We can design our cost functional to have its minimum occur at some specific function we desire; for example, at the function that maps x to the expected value of y given x
Learning Conditional Statistics
• Results derived using calculus of variations:
1. Solving the optimization problem
   f* = argmin_f E_{x,y∼p_data} ||y − f(x)||²
   yields
   f*(x) = E_{y∼p_data(y|x)} [y],
   i.e. minimizing mean squared error gives a function that predicts the mean of y for each x.
2. Solving
   f* = argmin_f E_{x,y∼p_data} ||y − f(x)||₁
   yields a function that predicts the median value of y for each x; this is the mean absolute error (MAE).
• MSE and MAE often lead to poor results when used with gradient-based optimization: some output units saturate and produce very small gradients when combined with these cost functions.
• Thus cross-entropy is popular even when it is not necessary to estimate an entire distribution p(y|x).
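The two variational results can be illustrated empirically with constant predictors (the skewed distribution and grid are illustrative choices): over constants c, the squared error E[(y − c)²] is minimized near the sample mean, and the absolute error E[|y − c|] near the sample median.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.exponential(size=10001)   # skewed, so mean != median

cs = np.linspace(0.0, 3.0, 3001)  # candidate constant predictions
sq_loss = [np.mean((y - c) ** 2) for c in cs]
abs_loss = [np.mean(np.abs(y - c)) for c in cs]

best_sq = cs[np.argmin(sq_loss)]    # close to np.mean(y)
best_abs = cs[np.argmin(abs_loss)]  # close to np.median(y)
```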
Output Units
1. Linear units for Gaussian output distributions. Linear output layers are often used to produce the mean of a conditional Gaussian distribution p(y|x) = N(y; ŷ, I)
• Maximizing the log-likelihood is then equivalent to minimizing the mean squared error
• Because linear units do not saturate, they pose little difficulty for gradient-based optimization algorithms
2. Sigmoid units for Bernoulli output distributions. 2-class classification problem; needs to predict P(y = 1 | x):
ŷ = σ(wᵀh + b)
3. Softmax units for Multinoulli output distributions. Any time we wish to represent a probability distribution over a discrete variable with n possible values, we may use the softmax function. Softmax functions are most often used as the output of a classifier, to represent the probability distribution over n different classes.
Softmax output
• In the case of a discrete variable with n values, produce a vector ŷ with ŷ_i = P(y = i | x)
• A linear layer predicts unnormalized log probabilities: z = Wᵀh + b, where z_i = log p̃(y = i | x)
softmax(z)_i = exp(z_i) / Σ_j exp(z_j)
• When training the softmax to output a target value y using maximum log-likelihood:
• Maximize log P(y = i; z) = log softmax(z)_i
• log softmax(z)_i = z_i − log Σ_j exp(z_j)
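A sketch of these two formulas (the max-subtraction trick is a standard stabilization, not something the slides discuss): subtracting max(z) before exponentiating leaves softmax unchanged but avoids overflow for large logits.

```python
import numpy as np

def softmax(z):
    # softmax(z)_i = exp(z_i) / sum_j exp(z_j); shifting by max(z)
    # does not change the result but keeps exp() from overflowing
    z = z - np.max(z)
    e = np.exp(z)
    return e / e.sum()

def log_softmax(z):
    # log softmax(z)_i = z_i - log sum_j exp(z_j), same shift trick
    z = z - np.max(z)
    return z - np.log(np.sum(np.exp(z)))

z = np.array([1000.0, 1001.0, 1002.0])  # naive exp(z) would overflow
p = softmax(z)
```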
Output Types

Output Type   Output Distribution    Output Layer      Cost Function
Binary        Bernoulli              Sigmoid           Binary cross-entropy
Discrete      Multinoulli            Softmax           Discrete cross-entropy
Continuous    Gaussian               Linear            Gaussian cross-entropy (MSE)
Continuous    Mixture of Gaussians   Mixture Density   Cross-entropy
Continuous    Arbitrary              GAN, VAE, FVBN    Various
Sigmoid output with target of 1
[Figure: loss as a function of z for z ∈ [−3, 3] on a sigmoid output σ(z), comparing cross-entropy loss and MSE loss]
Bad idea to use MSE loss with sigmoid unit.
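The figure's point can be checked directly (a small sketch; the probe point z = −5 is an illustrative choice): with target y = 1 and a saturated-wrong sigmoid unit (very negative z), the cross-entropy gradient stays near −1 while the MSE gradient is nearly zero, because the sigmoid's derivative multiplies it.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_ce(z, y=1.0):
    # d/dz of -[y log s + (1-y) log(1-s)] simplifies to s - y
    return sigmoid(z) - y

def grad_mse(z, y=1.0):
    # d/dz of 0.5*(s - y)^2 = (s - y) * s * (1 - s): the extra
    # s*(1-s) factor vanishes wherever the sigmoid saturates
    s = sigmoid(z)
    return (s - y) * s * (1 - s)
```

So cross-entropy "undoes" the sigmoid's saturation, which is why the pairing is recommended.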
Hidden Units
• Rectified linear units are an excellent default choice of hidden unit
• Use ReLUs 90% of the time
• Many other hidden units perform comparably to ReLUs; new hidden units that merely perform comparably are rarely interesting
Rectified Linear Activation

g(z) = max{0, z}
[Figure: the ReLU activation, zero for z < 0 and linear for z ≥ 0]

ReLU
• Positives:
  • Gives large and consistent gradients (does not saturate) when active
  • Efficient to optimize; converges much faster than sigmoid or tanh
• Negatives:
  • Output is not zero-centered
  • Units can "die": a unit that is never active receives no gradient and will never update
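Both properties follow from the definition, as this short sketch shows: the gradient of g(z) = max{0, z} is exactly 1 where the unit is active and exactly 0 where it is inactive, so an always-inactive unit gets no updates.

```python
import numpy as np

def relu(z):
    # g(z) = max{0, z}
    return np.maximum(0.0, z)

def relu_grad(z):
    # g'(z) = 1 for z > 0, 0 otherwise (subgradient 0 taken at z = 0)
    return (z > 0).astype(float)

z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
```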
Architecture Basics
• Depth
• Width
Universal Approximation Theorem
• One hidden layer is enough to represent (not learn) an approximation of any function to an arbitrary degree of accuracy
• So why go deeper?
  • A shallow net may need (exponentially) more width
  • A shallow net may overfit more
• http://mcneela.github.io/machine_learning/2017/03/21/Universal-Approximation-Theorem.html
• https://blog.goodaudience.com/neural-networks-part-1-a-simple-proof-of-the-universal-approximation-theorem-b7864964dbd3
• http://neuralnetworksanddeeplearning.com/chap4.html