CS230 Deep Learning — cs230.stanford.edu/projects_winter_2019/reports/15808060.pdf

Excerpt (truncated in the source): "… 100, dropout of 0.2, and 50 training epochs. We train the model with the Adam optimizer with a learning rate of …"
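The fragment names only a handful of hyperparameters (a size of 100, dropout 0.2, 50 epochs, Adam) and cuts off before the learning-rate value. A minimal Keras-style sketch of such a setup, assuming "100" refers to hidden units and treating the learning rate, input dimension, and layer types as placeholders not given in the source, might look like:

    # Minimal sketch of the training setup described in the excerpt.
    # The fragment specifies: 100 (assumed here to be hidden units),
    # dropout of 0.2, 50 epochs, and an Adam optimizer. The learning-rate
    # value and the model architecture are NOT given and are assumptions.
    import tensorflow as tf

    def build_and_train(x_train, y_train, input_dim, num_classes, learning_rate):
        model = tf.keras.Sequential([
            tf.keras.Input(shape=(input_dim,)),
            tf.keras.layers.Dense(100, activation="relu"),   # "100" from the fragment
            tf.keras.layers.Dropout(0.2),                    # dropout of 0.2
            tf.keras.layers.Dense(num_classes, activation="softmax"),
        ])
        model.compile(
            optimizer=tf.keras.optimizers.Adam(learning_rate=learning_rate),  # value not in the source
            loss="sparse_categorical_crossentropy",
            metrics=["accuracy"],
        )
        model.fit(x_train, y_train, epochs=50)               # 50 epochs, as in the fragment
        return model

This is only a reconstruction under stated assumptions; the original report should be consulted for the actual architecture and learning-rate value.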