Deep Learning Frameworks slides


Transcript of Deep Learning Frameworks slides

Page 1: Deep Learning Frameworks slides

Deep Learning - Concepts & Frameworks

Peter Morgan – Data Science Partnership

Page 2: Deep Learning Frameworks slides

Contents

• Deep Learning Concepts ………….3

• Deep Learning Frameworks …..26

• Specific Frameworks ………..……30

• Comparison …………………………..33

• Questions ………………………………34


Page 3: Deep Learning Frameworks slides

Overview - Concepts not Code

No Maths No Code


Page 4: Deep Learning Frameworks slides

Frameworks - The Big Picture


• AI Frameworks – Cognitive Architectures

• ML Frameworks – Supervised, Unsupervised & Reinforcement Learning

• Deep Learning Frameworks – Neural Nets

Page 5: Deep Learning Frameworks slides

What is Deep Learning?

• Deep learning refers to algorithms based on artificial neural networks (ANNs), which in turn are based on biological neural networks (BNN), such as the human brain.

• In practice it consists of multiple layers, nodes, weights and optimisation algorithms.

• Due to more labeled data, more compute power, better optimization algorithms, and better neural net models and architectures, deep learning has started to surpass humans at image recognition and classification.

• Work is being done to obtain similar levels of performance in natural language processing and understanding.

• Deep learning applies to supervised, unsupervised and reinforcement learning.

• According to Jeff Dean in a recent interview, Google have implemented DL in over one hundred of their products and services including search and photos.


Page 6: Deep Learning Frameworks slides

Deep Learning Evolution


Page 7: Deep Learning Frameworks slides

Deep Learning Concepts


Page 8: Deep Learning Frameworks slides

Data Sets

• Raw data input into the neural network can originate from any environmental source.

• It can be recorded and stored in a database (e.g., text, images, audio, video) or live (arriving directly from the environment), so-called streaming data.

• Examples of recorded data sets include the MNIST and Labeled Faces in the Wild (LFW).


[Figure: sample images from the MNIST and LFW data sets]

Page 9: Deep Learning Frameworks slides

Multilayer Perceptron (MLP)

• An MLP is a feedforward artificial neural network model that maps sets of input data onto a set of appropriate outputs (Rosenblatt, 1958).

• An MLP consists of multiple layers of nodes with each layer fully connected to the next one.

• Except for the input nodes, each node is a neuron (or processing element) with a nonlinear activation function.

• MLP utilizes a supervised learning technique called backpropagation for training the network.


Page 10: Deep Learning Frameworks slides

Layers and Nodes

• Layers

• A neural network is made up of layers of nodes.

• There is an input (visible) layer, an output (classification) layer, and several hidden layers.

• A typical NN may have between 10 and 30 layers, but sometimes many more.

• Every layer of a deep learning network requires four elements: the input (vector), the weights (matrix), a bias and the transform (activation function).

• Nodes

• Nodes in a neural network represent the places where calculations are done.

• At each node, the input values x_i are multiplied by weights w_i, summed, added to a bias b, and fed into an activation function (see the sketch after this list).

• The resultant value is then transmitted onward or not, depending on whether it exceeds a certain threshold.

• For example, images from the MNIST data set have 784 pixels, so neural nets processing them must have 784 input nodes, one per pixel.
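To make the node computation concrete, here is a minimal NumPy sketch of one fully connected layer; this is not from the slides, and the layer sizes and names are illustrative only:

```python
import numpy as np

def dense_layer(x, W, b, activation):
    """One layer: multiply inputs by weights, add bias, apply activation."""
    return activation(W @ x + b)

def relu(z):
    return np.maximum(0.0, z)

# Hypothetical sizes: 784 input nodes (one per MNIST pixel), 30 hidden nodes.
rng = np.random.default_rng(0)
x = rng.random(784)                         # one flattened 28x28 image
W = rng.standard_normal((30, 784)) * 0.01   # weight matrix
b = np.zeros(30)                            # bias vector

h = dense_layer(x, W, b, relu)              # hidden-layer activations
print(h.shape)                              # (30,)
```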


Page 11: Deep Learning Frameworks slides

Weights & Softmax

• Weights

• Coefficients that amplify or mute the input signal coming into each node.

• They are assigned initial values which can be random or chosen based upon some insight.

• A NN can be represented by its weight matrix along with its activation functions.

• Softmax

• The softmax function, or normalized exponential, is a generalization of the logistic (sigmoid) function.

• In neural network simulations, the softmax function is often implemented at the final layer of a network used for classification.
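A minimal softmax implementation, assuming NumPy (shifting by the maximum is a standard trick to avoid overflow; it does not change the result):

```python
import numpy as np

def softmax(z):
    """Normalized exponential: maps arbitrary scores to probabilities."""
    e = np.exp(z - np.max(z))   # shift for numerical stability
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])   # hypothetical final-layer scores
probs = softmax(logits)
print(probs, probs.sum())            # probabilities summing to 1.0
```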


Page 12: Deep Learning Frameworks slides

Activation Function


• One of a set of functions that determine the threshold at each node above which a signal is passed through the node, and below which it is blocked.

• Activation functions squash a node's input into a bounded range - e.g., tanh maps to (-1, 1) and the sigmoid to (0, 1) - so that the final layer can output probabilities between 0 and 1.

• Functions used for this include the logistic (sigmoid), tanh and ReLU functions (ReLU, unlike the others, is unbounded above); minimal implementations are sketched below.
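The three activation functions named above, as a NumPy sketch (not from the slides):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))   # squashes to (0, 1)

def tanh(z):
    return np.tanh(z)                  # squashes to (-1, 1)

def relu(z):
    return np.maximum(0.0, z)          # zero below 0, unbounded above

z = np.linspace(-3, 3, 7)
print(sigmoid(z), tanh(z), relu(z), sep="\n")
```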

Page 13: Deep Learning Frameworks slides

Optimization and Overfitting

• Optimization

• Refers to the process by which a neural net minimizes error as it adjusts its coefficients (weights) step by step.

• L-BFGS is one such algorithm.

• Overfitting

• Overfitting occurs when a model has too many parameters relative to the data it is fitted to, which leads to poor predictive power on new data.

• Regularisation, cross-validation and dropout are all methods used to address this problem.


Page 14: Deep Learning Frameworks slides

Regularisation and Cross-validation

• Both techniques are used to prevent overfitting

• Regularisation

• Regularisation refers to a process of introducing additional information in order to prevent overfitting.

• It penalizes models with extreme parameter values by introducing a penalty factor that weighs against more complex models, which tend to show increasing variance in their errors (see the sketch after this list).

• Cross-validation

• Cross-validation combines multiple sampling rounds to correct for overfitting and derive a more accurate estimate of model prediction performance.

• One round of cross-validation involves partitioning a sample of data into complementary subsets, performing the analysis on one subset (called the training set), and validating the analysis on the other subset (called the validation set or testing set).

• To reduce variability, multiple rounds of cross-validation are performed using different partitions, and the validation results are averaged over the rounds.
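Sketches of both ideas, assuming NumPy; the penalty weight `lam`, the fold count and the placeholder score are all hypothetical:

```python
import numpy as np

def l2_penalised_loss(errors, weights, lam=0.1):
    """Data error plus an L2 penalty that discourages extreme weights."""
    return np.mean(errors ** 2) + lam * np.sum(weights ** 2)

def k_fold_indices(n_samples, k=5, seed=0):
    """Yield (train, validation) index pairs for k rounds of cross-validation."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    for fold in np.array_split(idx, k):
        yield np.setdiff1d(idx, fold), fold

# Usage: fit on each training split, score on the held-out fold, average.
scores = []
for train_idx, val_idx in k_fold_indices(100, k=5):
    scores.append(len(val_idx))  # placeholder for a real validation score
print(np.mean(scores))
```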


Page 15: Deep Learning Frameworks slides

Feed Forward Networks

• The feedforward neural network was the first and simplest type of artificial neural network devised.

• In this network, the information moves in only one direction, forward, from the input nodes, through the hidden nodes and to the output nodes.

• There are no cycles or feedback loops in the network.


Page 16: Deep Learning Frameworks slides

Support Vector Machines (SVM)

• Support vector machines are supervised learning algorithms that analyze data and recognize patterns, in order to classify data points (Vapnik, 1979).

• Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other – in its basic form it is a linear classifier.

• Recently SVMs have been superseded by the success of deep learning algorithms.
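The slides name no library, so as an illustration here is a toy linear SVM using scikit-learn; the data points are invented (two classes on either side of the line y = x):

```python
from sklearn import svm

# Toy 2-class problem: points above the line y = x vs. points below it.
X = [[0, 1], [1, 2], [2, 3], [1, 0], [2, 1], [3, 2]]
y = [0, 0, 0, 1, 1, 1]

clf = svm.SVC(kernel="linear")        # a linear max-margin classifier
clf.fit(X, y)
print(clf.predict([[0, 2], [2, 0]]))  # -> [0 1]
```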


Page 17: Deep Learning Frameworks slides

Stochastic Gradient Descent

• Stochastic gradient descent is a popular algorithm for training a wide range of models in machine learning, including support vector machines and logistic regression.

• When combined with the backpropagation algorithm, it is the de facto standard algorithm for training artificial neural networks.
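A toy SGD loop, assuming NumPy and a least-squares linear model; the true weights and hyperparameters are invented for illustration:

```python
import numpy as np

def sgd_step(w, x, y, lr=0.01):
    """One stochastic gradient step for least-squares linear regression."""
    grad = 2 * (w @ x - y) * x   # gradient of (w.x - y)^2 w.r.t. w
    return w - lr * grad

# Toy data: recover w ~ [2, -3] from noisy samples, one example at a time.
rng = np.random.default_rng(1)
true_w = np.array([2.0, -3.0])
w = np.zeros(2)
for _ in range(2000):
    x = rng.standard_normal(2)
    y = true_w @ x + 0.01 * rng.standard_normal()
    w = sgd_step(w, x, y)
print(w)  # close to [2, -3]
```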


Page 18: Deep Learning Frameworks slides

Backpropagation

• Backpropagation is a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent.

• The method calculates the gradient of a loss function with respect to all the weights in the network.

• The gradient is fed to the optimization method which in turn uses it to update the weights, in an attempt to minimize the loss function.
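A worked single pass, assuming NumPy; the 2-3-1 network, squared-error loss and learning rate are all hypothetical:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One backprop pass through a tiny 2-3-1 network.
rng = np.random.default_rng(0)
x, y = rng.random(2), 1.0
W1, b1 = rng.standard_normal((3, 2)), np.zeros(3)
W2, b2 = rng.standard_normal(3), 0.0

# Forward pass
h = sigmoid(W1 @ x + b1)
y_hat = W2 @ h + b2
loss = 0.5 * (y_hat - y) ** 2

# Backward pass: chain rule from the loss back to every weight.
d_yhat = y_hat - y                 # dL/dy_hat
dW2, db2 = d_yhat * h, d_yhat      # output-layer gradients
dh = d_yhat * W2                   # propagate the error into the hidden layer
dz1 = dh * h * (1 - h)             # through the sigmoid derivative
dW1, db1 = np.outer(dz1, x), dz1   # first-layer gradients

# Hand the gradients to gradient descent (learning rate 0.1).
W1, b1 = W1 - 0.1 * dW1, b1 - 0.1 * db1
W2, b2 = W2 - 0.1 * dW2, b2 - 0.1 * db2
print(float(loss))                 # loss before the update
```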


Page 19: Deep Learning Frameworks slides

Markov Models

• Markov Model

• A Markov model is a stochastic (probabilistic) model used to model randomly changing systems where it is assumed that future states depend only on the present state and not on the sequence of events that preceded it.

• Markov Chain

• A Markov chain connects two or more states via transition probabilities.

• Markov chains are sequential: given the current state, they tell you the probability of each possible next state (see the sketch below).

• Use cases include natural language processing (NLP) and stock price trading.
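A minimal sketch in NumPy, assuming a hypothetical two-state weather chain; note that the next state depends only on the current one (the Markov property):

```python
import numpy as np

states = ["sunny", "rainy"]
P = np.array([[0.9, 0.1],    # P(next state | sunny)
              [0.5, 0.5]])   # P(next state | rainy)

rng = np.random.default_rng(0)
state = 0                    # start sunny
chain = [states[state]]
for _ in range(10):
    state = rng.choice(2, p=P[state])   # sample the next state
    chain.append(states[state])
print(" -> ".join(chain))
```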


Page 20: Deep Learning Frameworks slides

Hidden Markov Model

• An HMM is a statistical Markov model in which the system being modeled is assumed to be a Markov process with unobserved (hidden) states.

• An HMM can be considered the simplest dynamic Bayesian network.

• They are used in time series prediction, e.g., in language analysis.
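As a sketch of how inference works with hidden states, here is the standard forward algorithm in NumPy; the two-state HMM and its probabilities are invented:

```python
import numpy as np

def forward(obs, pi, A, B):
    """Forward algorithm: likelihood of an observation sequence under an HMM."""
    alpha = pi * B[:, obs[0]]           # initialize with the first emission
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate states, weight by emission
    return alpha.sum()

# Hypothetical 2-state HMM with 2 observable symbols.
pi = np.array([0.6, 0.4])                # initial state distribution
A = np.array([[0.7, 0.3], [0.4, 0.6]])   # hidden-state transitions
B = np.array([[0.9, 0.1], [0.2, 0.8]])   # emission probabilities
print(forward([0, 1, 0], pi, A, B))      # likelihood of the sequence
```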


Page 21: Deep Learning Frameworks slides

Restricted Boltzmann Machines & Autoencoders

• Restricted Boltzmann Machines (RBM)

• Introduced by Paul Smolensky (1986) and popularized by Geoff Hinton at the University of Toronto, RBMs are shallow, two-layer neural nets consisting of a visible (input) layer and a hidden layer.

• They constitute the building blocks of deep belief networks - a deep belief network is simply many restricted Boltzmann machines stacked on top of one another.

• Nodes are connected across layers, but with the restriction that no two nodes in the same layer are linked.

• Autoencoder

• An autoencoder is an ANN used for learning efficient codings.

• The aim of an autoencoder is to learn a compressed, distributed representation (encoding) for a set of data, typically for the purpose of dimensionality reduction.

• In an autoencoder, the output layer has as many nodes as the input layer, and instead of training it to predict some target value y given inputs x, an autoencoder is trained to reconstruct its own inputs x.
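A forward-pass sketch of a 784-32-784 autoencoder in NumPy; the sizes are hypothetical (784 suggests a flattened MNIST image), and training code is omitted:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.random(784)                        # e.g., a flattened 28x28 image
W_enc = rng.standard_normal((32, 784)) * 0.01
W_dec = rng.standard_normal((784, 32)) * 0.01

code = sigmoid(W_enc @ x)                  # compressed 32-dim encoding
x_hat = sigmoid(W_dec @ code)              # reconstruction of the input
reconstruction_loss = np.mean((x_hat - x) ** 2)  # target is x itself, not a label
print(code.shape, reconstruction_loss)
```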


Page 22: Deep Learning Frameworks slides

Max Pooling, Loss Functions and Dropout

• Max Pooling

• In CNNs, max-pooling partitions the input image into a set of non-overlapping rectangles and, for each such sub-region, outputs the maximum value (sketched after this list, together with dropout).

• Loss Function

• A loss function or cost function is a function that maps an event or values of one or more variables onto a real number intuitively representing some "cost" associated with the event.

• The opposite of a loss function is called a reward function, or utility function.

• Dropout

• The dropout method was introduced to prevent overfitting.

• At each training stage, individual nodes are either "dropped out" of the net with probability 1-p or kept with probability p, so that a reduced network is left.

• By avoiding training all nodes on all the training data, dropout decreases overfitting in neural nets.

• The method also significantly improves the speed of training.
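Minimal NumPy sketches of both operations; the 2x2 pool size and keep probability p are hypothetical, and the dropout variant shown rescales by 1/p so expected activations match at test time:

```python
import numpy as np

rng = np.random.default_rng(0)

def max_pool_2x2(img):
    """Partition img into non-overlapping 2x2 tiles; keep each tile's maximum."""
    h, w = img.shape
    tiles = img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
    return tiles.max(axis=(1, 3))

def dropout(x, p=0.5):
    """Keep each node with probability p; rescale so expected values match."""
    mask = rng.random(x.shape) < p
    return x * mask / p

img = np.arange(16, dtype=float).reshape(4, 4)
print(max_pool_2x2(img))       # [[ 5.  7.] [13. 15.]]
print(dropout(np.ones(8)))     # roughly half the entries zeroed, rest scaled
```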


Page 23: Deep Learning Frameworks slides

Convolutional Neural Networks

• First developed in the 1970s.

• Widely used for image recognition and classification.

• Inspired by biological processes, CNNs are a type of feedforward ANN.

• The individual neurons are tiled in such a way that they respond to overlapping regions in the visual field.
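To illustrate the overlapping-region idea, here is a naive 2-D convolution in NumPy (real frameworks use far faster implementations; the 1x2 "edge detector" kernel is invented):

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Slide kernel over img; each output responds to one overlapping region."""
    kh, kw = kernel.shape
    out_h, out_w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

img = np.random.default_rng(0).random((6, 6))
edge_kernel = np.array([[1.0, -1.0]])        # hypothetical 1x2 edge detector
print(conv2d_valid(img, edge_kernel).shape)  # (6, 5)
```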


Page 24: Deep Learning Frameworks slides

Recurrent Neural Networks

• First developed in the 1970s.

• RNNs are neural networks that are used to predict the next element in a sequence or time series.

• This could be, for example, words in a sentence or letters in a word.

• Applications include predicting or generating music, stories, news, code, financial instrument pricing, text, speech, in fact the next element in any event stream.
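A single vanilla-RNN step in NumPy, showing how the hidden state carries sequence context forward; all sizes are hypothetical:

```python
import numpy as np

def rnn_step(x, h, W_xh, W_hh, b):
    """New hidden state mixes the current input with the previous state."""
    return np.tanh(W_xh @ x + W_hh @ h + b)

# Hypothetical sizes: 10-dim input (e.g., a one-hot letter), 20-dim state.
rng = np.random.default_rng(0)
W_xh = rng.standard_normal((20, 10)) * 0.1
W_hh = rng.standard_normal((20, 20)) * 0.1
b, h = np.zeros(20), np.zeros(20)
for t in range(4):                 # feed a short sequence, one element at a time
    h = rnn_step(rng.random(10), h, W_xh, W_hh, b)
print(h.shape)                     # (20,)
```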


Page 25: Deep Learning Frameworks slides

LSTM and NTM

• Long Short Term Memory (LSTM)

• LSTM is an RNN architecture that contains blocks that can remember a value for an arbitrary length of time (a single-step sketch follows after this list).

• It mitigates the vanishing and exploding gradient problems encountered during backpropagation.

• An LSTM network is universal in the sense that, given enough network units and the proper weight matrix, it can compute anything a conventional computer can compute.

• LSTM outperforms alternative RNNs, hidden Markov models and other sequence learning methods in numerous applications, e.g., handwriting recognition, speech recognition and music composition.

• Neural Turing Machines (NTM)

• NTMs are a method of extending the capabilities of recurrent neural networks by coupling them to external memory resources.
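One LSTM step in NumPy, showing the gating that lets a block remember or forget its cell state; sizes are hypothetical and biases are omitted for brevity:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W):
    """One LSTM step: gates decide what the cell state keeps, adds and emits."""
    z = W @ np.concatenate([x, h])                 # all four gates in one matmul
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)   # input / forget / output gates
    c = f * c + i * np.tanh(g)                     # update the remembered cell state
    h = o * np.tanh(c)                             # emit the new hidden state
    return h, c

# Hypothetical sizes: 8-dim input, 16-dim hidden/cell state.
rng = np.random.default_rng(0)
W = rng.standard_normal((4 * 16, 8 + 16)) * 0.1
h, c = np.zeros(16), np.zeros(16)
for t in range(5):                                 # run over a short sequence
    h, c = lstm_step(rng.random(8), h, c, W)
print(h.shape)                                     # (16,)
```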


Page 26: Deep Learning Frameworks slides

Deep Learning Frameworks


Framework = Toolkit = Library

Page 27: Deep Learning Frameworks slides

Deep Learning Frameworks

• Apache SINGA

• Brainstorm

• Caffe

• Chainer

• CNTK (Microsoft)

• DL4J

• DMLC

• Fbcunn (Facebook)

• Lasagne

• Minerva

• Mocha.jl (Julia)

• MXnet

• Neon (Nervana)

• Purine

• TensorFlow (Google)

• Theano

• Torch

• Warp-CTC (Baidu)


Page 28: Deep Learning Frameworks slides

Deep Learning Frameworks (cont)

• Brain (Javascript)

• Cudamat

• Deep Learning Framework (Intel)

• Deepnet

• Hebel

• Infer.NET

• Keras

• Leaf

• MLPNeuralNet

• Neural Network Toolbox (MatLab)

• Neuraltalk

• Neurolab

• OpenDeep

• PyBrain

• Swift-AI

• VELES (Samsung)


Page 29: Deep Learning Frameworks slides

Some Specific DL Libraries

• ANN

• Brainstorm

• clab/cnn

• DeepHear (music composition)

• DoctorTeeth/diffmem

• miloharper/neural-network-animation

• CNN

• ConvNetJS

• Marvin

• MatConvNet

• RNN

• awesome-rnn

• karpathy/char-rnn

• karpathy/neuraltalk2

• wojciechz/learning_to_execute

• LSTM

• dl4j-0.4 Graves LSTM

• nicodjimenez/lstm

• Russell91/LSTMSummation


Page 30: Deep Learning Frameworks slides

TensorFlow

• TensorFlow is the newly (Nov 2015) open-sourced deep learning library from Google.

• It is their second-generation system for the implementation and deployment of large-scale machine learning models.

• Written in C++ with a Python interface, it was born from research on, and deployment of, machine learning projects throughout a wide range of Google products and services (a minimal example follows below).

• Initially TF ran only on a single node (your laptop, say), but Google have recently released a version that now runs on a cluster.

• https://www.tensorflow.org/
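The slides show no code, so the following is a sketch in the graph-and-session style of the TensorFlow 1.x releases contemporary with this talk; the toy linear model and its data are invented:

```python
import tensorflow as tf  # 1.x-era API

# Build a computation graph for a tiny linear model y = w*x + b ...
x = tf.placeholder(tf.float32)
y = tf.placeholder(tf.float32)
w = tf.Variable(0.0)
b = tf.Variable(0.0)
loss = tf.reduce_mean(tf.square(w * x + b - y))
train = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

# ... then execute it in a session; the data is a toy y = 2x line.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(200):
        sess.run(train, feed_dict={x: [0.0, 1.0, 2.0], y: [0.0, 2.0, 4.0]})
    print(sess.run([w, b]))  # w near 2, b near 0
```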


Page 31: Deep Learning Frameworks slides

Torch

• First released in 2000 and with over 50,000 downloads, its corporate users include Google, Facebook and Twitter.

• The goal of Torch is to have maximum flexibility and speed in building scientific algorithms while making the process extremely simple.

• Torch is a neural network library written in Lua with a C/CUDA interface originally developed by a team from the Swiss institute EPFL.

• At the heart of Torch are popular neural network and optimization libraries which are simple to use, while being flexible in implementing different complex neural network topologies.

• http://torch.ch/


Page 32: Deep Learning Frameworks slides

Theano

• Theano is a deep learning library written in Python and popular for its ease of use (a short example follows below).

• Using Theano, it is possible to attain speeds rivalling hand-crafted C implementations for problems involving large amounts of data.

• Theano has been powering large-scale, computationally intensive scientific investigations since 2007.

• Supports cuDNN v3.

• http://deeplearning.net/software/theano/
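A short sketch of Theano's symbolic style (the classic logistic-function example; not from the slides): you declare symbolic variables, build an expression, and compile it into a callable function.

```python
import numpy as np
import theano
import theano.tensor as T

# Declare a symbolic matrix, build an expression over it ...
x = T.dmatrix("x")
y = 1.0 / (1.0 + T.exp(-x))           # element-wise logistic sigmoid

# ... then compile it; Theano generates optimized C (or GPU) code.
logistic = theano.function([x], y)
print(logistic(np.array([[0.0, 1.0], [-1.0, -2.0]])))
```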


Page 33: Deep Learning Frameworks slides

DL Framework Comparison

• The most common metrics we can use to compare these deep learning frameworks are:

• Speed of execution

• Ease of use

• Languages used (core and front end)

• Resources (CPU and memory capacity) needed in order to run the various algorithms

• GPU support

• Size of active community of users

• Contributors and committers

• Platforms supported (e.g., OS, single devices and/or distributed systems)

• Algorithmic support

• Number of packages in their library

• For example, various benchmarks and comparisons are available here: https://github.com/soumith/convnet-benchmarks


Page 34: Deep Learning Frameworks slides

Questions?


www.datasciencepartnership.com | @insight_ai