Page 1: Machine Learning with Deep Neural Networks Piotr Luszczek

COSC 594 2020

Machine Learning with Deep Neural Networks

Piotr Luszczek

February 19, 2020

Page 2: Machine Learning with Deep Neural Networks Piotr Luszczek

Terms and Acronyms

● AI = Artificial Intelligence
● ML = Machine Learning
● DL = Deep Learning
● DNN = Deep Neural Network
● CNN = Convolutional Neural Network
● LSTM = Long Short-Term Memory
● RNN = Recurrent Neural Network
● ResNet = Residual Network
● GANN = Generative Adversarial Neural Network (also spelled GAN)
● Specific types of learning
  – Adversarial learning
    ● Using negative examples
  – Transfer learning
    ● Moving pre-trained layers between models
  – Active learning
    ● “Human-assisted” learning
  – Federated learning
    ● Training data comes from federated (separate) sources
  – Representation (feature) learning
    ● Automating feature discovery

Page 3: Machine Learning with Deep Neural Networks Piotr Luszczek


● McCulloch and Pitts
  – “A logical calculus of the ideas immanent in nervous activity”
  – 1943
  – Mathematical model of neurons as a basic switching element (a minimal sketch follows below)
● Brain
  – 100 billion neurons (10^11)
  – 1,000–10,000 connections/neuron
  – Total ~10^14 connections
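As an aside (not from the slides), a McCulloch-Pitts unit can be sketched in a few lines of Python as a threshold switch over weighted binary inputs; the weights and threshold below are illustrative values chosen to realize an AND gate.

# A minimal McCulloch-Pitts threshold unit: it "fires" (outputs 1) when the
# weighted sum of its binary inputs reaches the threshold. Example values only.
def mp_neuron(inputs, weights, threshold):
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= threshold else 0

# AND gate: both inputs must be active to reach the threshold of 2.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, mp_neuron((a, b), weights=(1, 1), threshold=2))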

Page 4: Machine Learning with Deep Neural Networks Piotr Luszczek

Straight Line Fitting: Linear Regression

A = \frac{n\sum x_i y_i - (\sum x_i)(\sum y_i)}{n\sum x_i^2 - (\sum x_i)^2}

B = \frac{\sum y_i - A(\sum x_i)}{n}

y = A x + B
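A quick NumPy sketch of the closed-form fit above; the sample data here is made up for illustration.

import numpy as np

# Least-squares fit of y = A*x + B using the closed-form expressions above.
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = np.array([1.1, 2.9, 5.2, 7.1, 8.8])   # roughly 2*x + 1 with noise

n = len(xs)
A = (n * np.sum(xs * ys) - np.sum(xs) * np.sum(ys)) / \
    (n * np.sum(xs**2) - np.sum(xs)**2)
B = (np.sum(ys) - A * np.sum(xs)) / n

print("A =", A, "B =", B)              # slope and intercept
print("prediction at x=5:", A * 5 + B)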

[Figure: scatter plot of observations, response variable vs. explanatory variable, with the fitted line; the prediction and the error for one observation are marked.]

Page 5: Machine Learning with Deep Neural Networks Piotr Luszczek

Logistic Regression

[Figure: the logistic curve p(x) for x roughly in [-15, 15], rising from 0 to 1.]

\log\frac{p(x)}{1-p(x)} = A x + B

p(x) = \frac{1}{1+e^{-(Ax+B)}}

Issues:
● Linear classifier
● Linear separability and convergence
● Negative example: XOR
    O X
    X O

Page 6: Machine Learning with Deep Neural Networks Piotr Luszczek

Logistic Regression as a Binary Classifier

[Diagram: inputs x1, x2, x3, x4 weighted by θ1, θ2, θ3, θ4 and summed (Σ); the sigmoid of the sum is the probability that f(x,θ) is 1.]

Find θ that minimizes the objective function L(θ):
● Predicts positives
● Does not predict negatives

f(\vec{x}, \vec{\theta}) \equiv \sigma(\theta^T x) = \frac{1}{1+\exp(-\theta^T x)}

L(\vec{\theta}) = -\sum_{i}^{m} \{y_i \equiv 1\}\log[f(x_i, \vec{\theta})] + \{y_i \equiv 0\}\log[1 - f(x_i, \vec{\theta})]
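The classifier and objective above can be sketched directly in NumPy; the toy 1-D data and starting θ below are assumptions for illustration only.

import numpy as np

def f(X, theta):
    # sigma(theta^T x): probability that the label is 1
    return 1.0 / (1.0 + np.exp(-X @ theta))

def loss(theta, X, y):
    # negative log-likelihood L(theta) from the slide
    p = f(X, theta)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# toy data: the column of ones plays the role of the bias term B
X = np.array([[1.0, -2.0], [1.0, -1.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([0, 0, 1, 1])
theta = np.array([0.0, 1.0])
print("L(theta) =", loss(theta, X, y))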

Page 7: Machine Learning with Deep Neural Networks Piotr Luszczek

Backpropagation Algorithm: Single Layer

[Diagram: a single layer of weights applied to the inputs feeds σ to give the computed output; comparing it with the correct output yields the error signal used to adjust the weights.]

Page 8: Machine Learning with Deep Neural Networks Piotr Luszczek

Backpropagation Algorithm: Hidden Layer

[Diagram: as on the previous slide, but with the error signal propagated back from the output through the hidden layer's weights.]

w^{(k+1)} = w^{(k)} - \eta\left[\sum_{i,j}^{m} w_{ij}\,(y^{(k+1)} - y^{(k)})\right]\frac{df(z)}{dz}

http://home.agh.edu.pl/~vlsi/AI/backp_t_en/backprop.htm

Page 9: Machine Learning with Deep Neural Networks Piotr Luszczek

Stacking Neural Layers

[Diagram: a feed-forward network with four inputs (Input #1 through Input #4), one hidden layer, and an output layer producing a single output.]

Page 10: Machine Learning with Deep Neural Networks Piotr Luszczek

Example Neural Network: XOR Gate

[Diagram: a 2-2-1 network of sigmoid (σ) units computing XOR; the edge weights and biases shown on the slide are -6.4, 6.7, -5, 4.9, 8.5, -7.9, 3.2, -2.6, and 3.6.]

Input 1 | Input 2 | Output
0       | 0       | 0.03
0       | 1       | 0.96
1       | 0       | 0.97
1       | 1       | 0.03
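A forward pass through such a network takes a few lines of NumPy. The weights below are illustrative textbook values that realize XOR; they are not the exact placement of the slide's weights, which depends on the diagram's wiring.

import numpy as np

def sigma(z):
    return 1.0 / (1.0 + np.exp(-z))

def xor_net(x1, x2):
    # 2-2-1 network of sigmoid units; weights are illustrative values only.
    h1 = sigma(20 * x1 + 20 * x2 - 10)    # roughly OR
    h2 = sigma(-20 * x1 - 20 * x2 + 30)   # roughly NAND
    return sigma(20 * h1 + 20 * h2 - 30)  # AND of the two -> XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, round(xor_net(a, b), 3))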

Page 11: Machine Learning with Deep Neural Networks Piotr Luszczek

Finding Optimal Weights

\nabla_\theta L(\vec{\theta}) = -\sum_{i}^{m} x_i \cdot (y_i - f(x_i, \theta))

\vec{\theta}^{(k+1)} = \vec{\theta}^{(k)} - \eta \nabla_\theta L(\vec{\theta})

Find θ that minimizes objective function L(θ)

● Stochastic Gradient Descent is a generic method
  – Used in Adaline, Perceptron, K-Means, SVM, Lasso
● Considers empirical risk vs. expected risk
● Related to Randomized Kaczmarz
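A minimal sketch of the full-gradient update above, reusing the toy logistic-regression data from the earlier sketch; the learning rate and iteration count are arbitrary choices.

import numpy as np

def f(X, theta):
    return 1.0 / (1.0 + np.exp(-X @ theta))

# toy data: bias column plus one feature
X = np.array([[1.0, -2.0], [1.0, -1.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])

theta = np.zeros(2)
eta = 0.5                              # learning rate
for k in range(200):
    grad = -X.T @ (y - f(X, theta))    # gradient of L(theta) from the slide
    theta = theta - eta * grad         # theta^(k+1) = theta^(k) - eta * grad

print("theta =", theta, "predictions =", f(X, theta).round(2))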

Page 12: Machine Learning with Deep Neural Networks Piotr Luszczek

Stochastic Gradient Descent Overview

Minimize F(x) = \frac{1}{n}\sum_{i=1}^{n} f_i(x)

Initialize x_0
for each j = 1, 2, ...
    randomly draw i \leftarrow i_j
    x_{j+1} \leftarrow x_j - \gamma \nabla f_i(x_j)

Goal: non-asymptotic bounds on E\|x_j - \bar{x}\|
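A runnable sketch of this loop, with an assumed least-squares f_i(x) = ½(aᵢᵀx − bᵢ)² so that the minimizer is known in advance.

import numpy as np

rng = np.random.default_rng(0)

# Minimize F(x) = (1/n) sum_i f_i(x) with f_i(x) = 0.5 * (a_i^T x - b_i)^2
n, d = 100, 3
A = rng.normal(size=(n, d))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true

def grad_fi(x, i):
    # gradient of a single term f_i
    return (A[i] @ x - b[i]) * A[i]

x = np.zeros(d)                      # initialize x_0
gamma = 0.05                         # step size
for j in range(2000):
    i = rng.integers(n)              # randomly draw i <- i_j
    x = x - gamma * grad_fi(x, i)    # x_{j+1} <- x_j - gamma * grad f_i(x_j)

print("x =", x.round(3), "(true:", x_true, ")")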

Page 13: Machine Learning with Deep Neural Networks Piotr Luszczek

COSC594

Practical Aspects and Implementations

Page 14: Machine Learning with Deep Neural Networks Piotr Luszczek

DNN Design Space

[Figure: an 8x8 tile of pixel intensity values, shown twice to contrast full connectivity with local, tiled connectivity.]

connections for input neuron = width·height
connections for input neuron = tile(width·height)

Local connectivity and overlap

Other tricks (a small sketch of the first few follows below):
● Weight-tying
● Pooling, max-pooling
● Contrast normalization
● Rectified Linear Units (ReLU)
● Dropout to avoid overfitting
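A small NumPy sketch of three of these tricks (ReLU, max-pooling, and dropout) applied to a random 8x8 feature map; the shapes and probabilities are illustrative.

import numpy as np

rng = np.random.default_rng(0)
fmap = rng.normal(size=(8, 8))                 # one 8x8 feature map

# Rectified Linear Unit: clamp negative activations to zero.
relu = np.maximum(fmap, 0.0)

# 2x2 max-pooling with stride 2: keep the largest value in each tile.
pooled = relu.reshape(4, 2, 4, 2).max(axis=(1, 3))

# Dropout (training time): randomly zero activations with probability p
# and rescale the rest so the expected activation is unchanged.
p = 0.5
mask = rng.random(pooled.shape) >= p
dropped = pooled * mask / (1.0 - p)

print(pooled.shape, dropped.shape)             # (4, 4) (4, 4)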

Page 15: Machine Learning with Deep Neural Networks Piotr Luszczek

Training with Backpropagation: Points of Interest

● We need multiple layers for complicated problems
  – Convexity, separability are not required
● Problems
  – Diminishing gradients
  – Unbounded weights
    ● Normalize
    ● Dropout of sub-threshold weights
  – Overfitting
    ● Addressed with regularization
  – Sensitivity to training set
    ● Who ordered your training images (on disk)?
  – Training time vs. recall time
    ● Weeks, days, hours vs. milliseconds
    ● Few training groups
    ● Almost everybody performs inference

Page 16: Machine Learning with Deep Neural Networks Piotr Luszczek

COSC594

Popular Deep Learning Models

Page 17: Machine Learning with Deep Neural Networks Piotr Luszczek

AlexNet: Convolutional Neural Network

[Bar charts: ILSVRC error rates. Task 1 (Classification): CNN vs. SIFT+FV, SVM1, SVM2, NCM, error up to ~35%. Task 2 (Detection): CNN vs. DPM SVM1 and DPM SVM2, error up to ~60%.]

http://www.slideshare.net/yuhuang/deep-learning-for-image-denoising-superresolution-27435126

[Diagram: AlexNet architecture; 224x224x3 input, 11x11 convolutions with stride 4 producing 96 feature maps of 55x55, further convolution stages of 256 and 384 feature maps (27x27 and 13x13) interleaved with max pooling, and dense layers of 4096, 4096, and 1000 units.]

● Won ImageNet 2012 LSVRC
  – 60 million parameters
  – 832 million MAC ops
  – Krizhevsky et al. 2012

Page 18: Machine Learning with Deep Neural Networks Piotr Luszczek

GoogLeNet (2014)

Inception Module

Page 19: Machine Learning with Deep Neural Networks Piotr Luszczek

Deep Residual Learning for Image Recognition (1512.03385)

[Diagram: side-by-side comparison of VGG-19, a plain deep network, and a residual network, each built from stacks of 3x3 convolutions (64, 128, 256 filters) with pooling (/2); ResNeXt is also noted. Inset: a residual block in which layer i+1 outputs L_{i+1}(x) + x followed by a ReLU, so the stacked layers learn a residual on top of the identity shortcut.]

● ResNet design

– Helps with gradient stagnation.

– Resilience when deleting layers.

– Allows deeper networks.
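A minimal sketch of such a residual block in Keras (matching the keras package used in the later examples); the filter count and input shape are assumptions, and the identity shortcut requires matching channel counts.

from keras.layers import Input, Conv2D, Add, Activation
from keras.models import Model

def residual_block(x, filters=64):
    # two 3x3 convolutions, then add the block's input back (identity shortcut)
    y = Conv2D(filters, 3, padding='same', activation='relu')(x)
    y = Conv2D(filters, 3, padding='same')(y)
    y = Add()([y, x])                 # L_{i+1}(x) + x
    return Activation('relu')(y)

inputs = Input(shape=(32, 32, 64))    # assumes the input already has 64 channels
outputs = residual_block(inputs)
model = Model(inputs, outputs)
model.summary()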

Page 20: Machine Learning with Deep Neural Networks Piotr Luszczek

Deep Networks: Summary

● AlexNet
  – 5 convolutional layers
● VGG
  – 16 convolutional layers
  – 19 convolutional layers
● GoogLeNet (Inception_v1)
  – 22 convolutional layers
● ResNet v1 and v2
  – 50
  – 101
  – 152

Model             | Size (MB) | Top-1 Accuracy | Top-5 Accuracy | Parameters | Depth
Xception          | 88        | 0.79           | 0.945          | 22910480   | 126
VGG16             | 528       | 0.713          | 0.901          | 138357544  | 23
VGG19             | 549       | 0.713          | 0.9            | 143667240  | 26
ResNet50          | 98        | 0.749          | 0.921          | 25636712   | -
ResNet101         | 171       | 0.764          | 0.928          | 44707176   | -
ResNet152         | 232       | 0.766          | 0.931          | 60419944   | -
ResNet50V2        | 98        | 0.76           | 0.93           | 25613800   | -
ResNet101V2       | 171       | 0.772          | 0.938          | 44675560   | -
ResNet152V2       | 232       | 0.78           | 0.942          | 60380648   | -
InceptionV3       | 92        | 0.779          | 0.937          | 23851784   | 159
InceptionResNetV2 | 215       | 0.803          | 0.953          | 55873736   | 572
MobileNet         | 16        | 0.704          | 0.895          | 4253864    | 88
MobileNetV2       | 14        | 0.713          | 0.901          | 3538984    | 88
DenseNet121       | 33        | 0.75           | 0.923          | 8062504    | 121
DenseNet169       | 57        | 0.762          | 0.932          | 14307880   | 169
DenseNet201       | 80        | 0.773          | 0.936          | 20242984   | 201
NASNetMobile      | 23        | 0.744          | 0.919          | 5326716    | -
NASNetLarge       | 343       | 0.825          | 0.96           | 88949818   | -

Page 21: Machine Learning with Deep Neural Networks Piotr Luszczek

Generative Adversarial Network: Overview

[Diagram: noise drawn from a latent space feeds the generator (the "artist"); the discriminator (the "critic") receives both generated samples and real samples and must decide which are correct.]

Potential problems:
● Mode collapse
● No convergence to Nash equilibrium
Solutions:
● Wasserstein GAN
  – Use Wasserstein (earth-mover) distance

Page 22: Machine Learning with Deep Neural Networks Piotr Luszczek

DNN → Tensors → SGEMM

[Diagram: a batch of N input images (C channels, H x W) is convolved with K filters (C channels, R x S) to produce N output images (K channels, P x Q); after expansion, the work becomes a multiplication of an input-image matrix by a filter matrix, producing an output-image matrix staged through local memory.]

Storage format (for expanded data):
● NCHW (natural, naive)
● CHWN
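A NumPy sketch of the expansion ("lowering") that turns the convolution into a single matrix multiply; the dimension names follow the N, C, H, W and K, R, S, P, Q labels above, with stride 1 and no padding assumed.

import numpy as np

rng = np.random.default_rng(0)
N, C, H, W = 2, 3, 8, 8         # batch, channels, height, width (NCHW)
K, R, S = 4, 3, 3               # filters, filter height, filter width
P, Q = H - R + 1, W - S + 1     # output height/width (stride 1, no padding)

x = rng.normal(size=(N, C, H, W))
filt = rng.normal(size=(K, C, R, S))

# Expand input patches into rows ("im2col"): one row per output pixel.
cols = np.empty((N * P * Q, C * R * S))
row = 0
for n in range(N):
    for p in range(P):
        for q in range(Q):
            cols[row] = x[n, :, p:p + R, q:q + S].ravel()
            row += 1

# The convolution becomes one GEMM: (N*P*Q, C*R*S) x (C*R*S, K).
out = cols @ filt.reshape(K, -1).T
out = out.reshape(N, P, Q, K).transpose(0, 3, 1, 2)   # back to N, K, P, Q
print(out.shape)                                       # (2, 4, 6, 6)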

Page 23: Machine Learning with Deep Neural Networks Piotr Luszczek

From SGEMM to DNN with Autotuning

Page 24: Machine Learning with Deep Neural Networks Piotr Luszczek

COSC594

Deep Learning Data Sets

Page 25: Machine Learning with Deep Neural Networks Piotr Luszczek

Popular Data Sets

● Data Sets
  – MNIST
    ● Handwritten digits
  – CIFAR10 and CIFAR100
    ● 50,000 small images (32x32)
    ● 10 or 100 classes of images
  – ImageNet
  – Cambridge Analytica
    ● Personality types

Page 26: Machine Learning with Deep Neural Networks Piotr Luszczek

ImageNet Collection of Classified Images

● 15 million images
  – Tagged with concepts (synonym sets = synsets)
● Some images have
  – SIFT features
  – Bounding box annotations
● Availability
  – Links to images
  – API
  – Full download for educational purposes
● Competition
  – ImageNet Large Scale Visual Recognition Challenge
    ● There are precision/recall numbers for human testers

Page 27: Machine Learning with Deep Neural Networks Piotr Luszczek

ImageNet Training

● ResNet
  – https://arxiv.org/abs/1512.03385 – Deep Residual Learning
● 1 hour on 256 GPUs
  – https://arxiv.org/abs/1706.02677 – Accurate, Large Minibatch Stochastic Gradient Descent
● 15 minutes on 1024 GPUs
  – T. Akiba, S. Suzuki, and K. Fukuda. Extremely Large Minibatch SGD: Training ResNet-50 on ImageNet in 15 Minutes. NIPS 2017 Workshop: Deep Learning at Supercomputer Scale, 12/2017
  – https://github.com/chainer/chainermn
● Performance highlights (a gradient-averaging sketch follows below)
  – MPI Allreduce()
  – Synchronous vs. asynchronous gradient updates
  – NVLink communication primitives: NVIDIA NCCL (pronounced "nickel")
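A sketch of the synchronous gradient averaging behind MPI Allreduce(), assuming mpi4py is available; the "gradient" here is a dummy vector.

# Run with e.g.: mpirun -np 4 python allreduce_sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD

# each rank computes a gradient on its own shard of the minibatch (dummy here)
local_grad = np.full(5, comm.Get_rank(), dtype=np.float64)

# sum gradients across all ranks, then divide to get the average
global_grad = np.empty_like(local_grad)
comm.Allreduce(local_grad, global_grad, op=MPI.SUM)
global_grad /= comm.Get_size()

if comm.Get_rank() == 0:
    print("averaged gradient:", global_grad)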

Page 28: Machine Learning with Deep Neural Networks Piotr Luszczek

COSC594

Deep Learning Software Stack

Page 29: Machine Learning with Deep Neural Networks Piotr Luszczek

Software for Deep Learning

● Unlike HPC, the Deep Learning community has a low tolerance for complexity
● Some languages
  – Lua
    ● Torch with LuaJIT
  – Julia
    ● Flux: Zygote, CapsNet
  – Python
    ● PyTorch
    ● TensorFlow
      – Python, C++, JXA
      – TensorFlow.js
      – TensorFlow Lite
    ● Keras
    ● …
  – Jupyter
  – Vendors: cuDNN, MIOpen, MKL-DNN
● Python software stack
  – NumPy
  – SciPy
  – Matplotlib, Seaborn
  – Numba
  – Pandas
  – scikit-learn
  – scikit-image
  – scikit-optimize, scikit-opt

Page 30: Machine Learning with Deep Neural Networks Piotr Luszczek

Keras: Classify ImageNet classes with ResNet50

from keras.applications.resnet50 import ResNet50
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input, decode_predictions
import numpy as np

model = ResNet50(weights='imagenet')

img_path = 'elephant.jpg'
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)

preds = model.predict(x)
# decode the results into a list of tuples (class, description, probability)
# (one such list for each sample in the batch)
print('Predicted:', decode_predictions(preds, top=3)[0])
# Predicted: [(u'n02504013', u'Indian_elephant', 0.82658225), (u'n01871265', u'tusker', 0.1122357), (u'n02504458', u'African_elephant', 0.061040461)]

Page 31: Machine Learning with Deep Neural Networks Piotr Luszczek

Keras: Extract features with VGG16

from keras.applications.vgg16 import VGG16
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input
import numpy as np

model = VGG16(weights='imagenet', include_top=False)

img_path = 'elephant.jpg'
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)

features = model.predict(x)

Page 32: Machine Learning with Deep Neural Networks Piotr Luszczek

Extract features from an arbitrary intermediate layer with VGG19

from keras.applications.vgg19 import VGG19
from keras.preprocessing import image
from keras.applications.vgg19 import preprocess_input
from keras.models import Model
import numpy as np

base_model = VGG19(weights='imagenet')
model = Model(inputs=base_model.input, outputs=base_model.get_layer('block4_pool').output)

img_path = 'elephant.jpg'
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)

block4_pool_features = model.predict(x)

Page 33: Machine Learning with Deep Neural Networks Piotr Luszczek

Fine-tune InceptionV3 on a new set of classes

from keras.applications.inception_v3 import InceptionV3
from keras.preprocessing import image
from keras.models import Model
from keras.layers import Dense, GlobalAveragePooling2D
from keras import backend as K

# create the base pre-trained model
base_model = InceptionV3(weights='imagenet', include_top=False)

# add a global spatial average pooling layer
x = base_model.output
x = GlobalAveragePooling2D()(x)
# let's add a fully-connected layer
x = Dense(1024, activation='relu')(x)
# and a logistic layer -- let's say we have 200 classes
predictions = Dense(200, activation='softmax')(x)

# this is the model we will train
model = Model(inputs=base_model.input, outputs=predictions)

# first: train only the top layers (which were randomly initialized)
# i.e. freeze all convolutional InceptionV3 layers
for layer in base_model.layers:
    layer.trainable = False

# compile the model (should be done *after* setting layers to non-trainable)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')

# train the model on the new data for a few epochs
model.fit_generator(...)

# at this point, the top layers are well trained and we can start fine-tuning
# convolutional layers from Inception V3. We will freeze the bottom N layers
# and train the remaining top layers.

# let's visualize layer names and layer indices to see how many layers
# we should freeze:
for i, layer in enumerate(base_model.layers):
    print(i, layer.name)

# we chose to train the top 2 inception blocks, i.e. we will freeze
# the first 249 layers and unfreeze the rest:
for layer in model.layers[:249]:
    layer.trainable = False
for layer in model.layers[249:]:
    layer.trainable = True

# we need to recompile the model for these modifications to take effect
# we use SGD with a low learning rate
from keras.optimizers import SGD
model.compile(optimizer=SGD(lr=0.0001, momentum=0.9), loss='categorical_crossentropy')

# we train our model again (this time fine-tuning the top 2 inception blocks
# alongside the top Dense layers)
model.fit_generator(...)

Page 34: Machine Learning with Deep Neural Networks Piotr Luszczek

Keras: Build InceptionV3 over a custom input tensor

from keras.applications.inception_v3 import InceptionV3
from keras.layers import Input

# this could also be the output of a different Keras model or layer
input_tensor = Input(shape=(224, 224, 3))  # this assumes K.image_data_format() == 'channels_last'

model = InceptionV3(input_tensor=input_tensor, weights='imagenet', include_top=True)

Page 35: Machine Learning with Deep Neural Networks Piotr Luszczek

COSC594

Deep Learning Hardware Stack

Page 36: Machine Learning with Deep Neural Networks Piotr Luszczek

Computational Needs According to OpenAI

https://openai.com/blog/ai-and-compute/

Page 37: Machine Learning with Deep Neural Networks Piotr Luszczek

Modern Hardware: Lower Precision for Deep Learning

● Hardware (company)
  – GPU Tensor Cores (NVIDIA)
  – TPU MXU (Google)
  – Zion (Facebook)
  – DaVinci (Huawei)
  – Dot-product engine (HPE)
  – Eyeriss (Amazon)
  – Wafer Scale Engine (Cerebras)
  – Nervana (Intel)
  – Deep Learning Boost (Intel AI)
  – Graphcore
  – ... (60+ in total)
● Lower-precision benchmarks
  – Baidu
  – Dawn
  – MLPerf
  – Deep500
  – ...
  – HPL-AI

Page 38: Machine Learning with Deep Neural Networks Piotr Luszczek

FP16 Hardware for DNN Training/Inference

● AMD
  – Radeon Instinct MI5, MI8, MI25, MI50, MI60
● ARM
  – NEON VFP FP16 in v8.2-A
● Intel
  – Cascade Lake
● NVIDIA Pascal and Volta
  – P100, Turing, TX1, Jetson Nano
  – Non-Tesla cards (Quadro, GeForce 11xy)
  – V100, DGX-1, DGX-2
    ● Tensor cores with 32-bit intermediates
● Supercomputers
  – TSUBAME 3 (Tokyo Tech): 47 Pflop/s FP16
  – Piz Daint
  – Summit: 3+ Eflop/s FP16
  – Sierra
● Google
  – TPU 1: INT8, ~30 TOPS
  – TPU 2 pod: 11.5 Pflop/s
  – TPU 3 pod: 20-90 Pflop/s
● NVIDIA
  – DRIVE PX 2: 24 DL TOPS
● Intel
  – Xeon Phi Knights Mill
  – Nervana

Page 39: Machine Learning with Deep Neural Networks Piotr Luszczek

Modern Hardware and Floating-Point Formats

[Diagram: arithmetic-unit mixes on current hardware: CPU/GPU pipelines combining INT, FP64, FP32, and half-precision (HLF) units; a 128x128 MXU; Tensor Cores; and the FP32, FP16, and BF16 formats.]

Page 40: Machine Learning with Deep Neural Networks Piotr Luszczek

Major Floating Point Formats from IEEE 754 (2008)

Precision | Width | Exponent bits | Mantissa bits | Epsilon   | Max
Quadruple | 128   | 15            | 112           | O(10^-34) | 1.2x10^4932
Extended  | 80    | 15            | 64            | O(10^-19) | 1.2x10^4932
Double    | 64    | 11            | 52            | O(10^-16) | 1.8x10^308
Single    | 32    | 8             | 23            | O(10^-7)  | 3.4x10^38
Half*     | 16    | 5             | 10            | O(10^-3)  | 65504
BFloat    | 16    | 8             | 7             | O(10^-2)  | 3.4x10^38

*Only the storage format is specified; the IEEE 2018 update covers the compute rules.

● IEEE 754 2018 standard update includes 16-bit for computing
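The epsilon and max columns can be checked from NumPy (BFloat is omitted because stock NumPy has no bfloat16 type); a quick illustrative check:

import numpy as np

for t in (np.float16, np.float32, np.float64):
    info = np.finfo(t)
    print(t.__name__, "eps =", info.eps, "max =", info.max)

# FP16 epsilon is about 1e-3, so smaller increments near 1.0 are lost:
print(np.float16(1.0) + np.float16(0.0004))   # still 1.0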

Page 41: Machine Learning with Deep Neural Networks Piotr Luszczek

COSC594

In conclusion...

Page 42: Machine Learning with Deep Neural Networks Piotr Luszczek

Future Work

● Optimization
  – SGD, momentum, ADAM, …
● Hyper-parameter selection
  – AutoML
● Natural language processing
  – BERT, Transformer, Transformer2, Megatron
● Reinforcement learning
  – “Stochastic branch-and-bound”
  – Deep RL, Q-learning
● Capsule networks
  – Not yet established, if ever