Introduction to Deep Learning

Sessions: 5

Format: On-demand

Presented by: AIC

(1) Purpose and content of the course

This course is an introduction to deep learning, a machine learning technique. It focuses in particular on deep learning with multi-layer neural networks (NNs), covering the basic theory of neural networks and representative architectures such as CNNs and RNNs, as well as how to apply them. Students will also implement a variety of networks with PyTorch, a deep learning library, on Google Colaboratory, to strengthen both their understanding and their implementation skills. The overall goal is to be able to design and implement a network appropriate to the problem to be solved.

(2) Content of each session

Session 1: Introduction to Deep Learning

The first session gives an overview of deep learning: the position of deep learning within machine learning, and the problems it handles, such as classification and regression. We then outline deep learning with multi-layer NNs, explaining concepts such as learning as weight optimization, the loss function, and epochs. After that, participants gain hands-on experience by implementing the training of a single-layer NN in PyTorch, along the lines of the sketch below. Finally, the differences between perceptrons and NNs and the extension from single-layer to multi-layer networks are explained, leading into the introduction to DNNs that is the objective of the first session.
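By way of illustration, the following is a minimal sketch of this kind of exercise: a single-layer network (one nn.Linear followed by a sigmoid) trained in PyTorch on synthetic, linearly separable data. The data, learning rate, and epoch count are illustrative assumptions, not the course's actual notebook.

    # A single-layer network trained on synthetic 2-D data (illustrative sketch).
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Synthetic, linearly separable data: label is 1 when x + y > 0.
    X = torch.randn(200, 2)
    y = (X.sum(dim=1) > 0).float().unsqueeze(1)

    model = nn.Sequential(nn.Linear(2, 1), nn.Sigmoid())  # a single layer
    loss_fn = nn.BCELoss()                                # binary cross-entropy loss
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    for epoch in range(100):            # one epoch = one pass over the data
        optimizer.zero_grad()           # clear accumulated gradients
        loss = loss_fn(model(X), y)     # forward pass and loss
        loss.backward()                 # backpropagation
        optimizer.step()                # weight update (learning as optimization)

    print(f"final loss: {loss.item():.4f}")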

Session 2: Fundamentals of Deep Neural Networks (DNN)

The objective of the second session is to cover the basic theory of how DNNs are structured and trained, so that students can design a basic DNN and implement its training. First, the structure of DNNs is covered: the input, hidden (intermediate), and output layers, activation functions, and the computation of output values by forward propagation. Next, the training of DNNs is discussed: the loss function, gradient descent, batch learning, and error backpropagation. These ideas are then implemented in PyTorch to consolidate understanding, as in the sketch below. The training dataset is the same one used in the Introduction to Machine Learning course, to reinforce that deep learning is one method of machine learning.
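The sketch below shows how these pieces fit together in PyTorch under assumed settings (a synthetic 3-class dataset, batch size 32, SGD with a learning rate of 0.05); it is not the course's dataset or notebook, but it combines forward propagation, a loss function, mini-batch learning, and error backpropagation in one training loop.

    # A multi-layer network with mini-batch training (illustrative sketch).
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    torch.manual_seed(0)

    # Synthetic 3-class data stands in for the course dataset (an assumption).
    X = torch.randn(600, 4)
    y = torch.randint(0, 3, (600,))
    loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

    # Input layer -> hidden (intermediate) layer with an activation -> output layer.
    model = nn.Sequential(
        nn.Linear(4, 16),
        nn.ReLU(),          # activation function
        nn.Linear(16, 3),
    )
    loss_fn = nn.CrossEntropyLoss()  # loss function for classification
    optimizer = torch.optim.SGD(model.parameters(), lr=0.05)

    for epoch in range(20):
        for xb, yb in loader:              # batch learning over mini-batches
            optimizer.zero_grad()
            loss = loss_fn(model(xb), yb)  # forward propagation and loss
            loss.backward()                # error backpropagation
            optimizer.step()               # gradient-descent update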

Session 3: DNN Practice – Learning Techniques

In Session 3, we introduce techniques for improving the learning accuracy of the DNNs described in Session 2. Specifically, we outline the learning rate and optimization methods used to search for the weight parameters, such as Adam and AdaGrad. The concept of overfitting is also explained, along with techniques to prevent it, such as Dropout and weight decay. Finally, Batch Normalization, a technique that stabilizes the learning process as a whole and improves learning efficiency, is explained (see the sketch after this paragraph). In-class exercises on improving learning accuracy while varying hyperparameters are provided, with the objective of being able to use DNNs in a practical way.
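As a rough illustration of how these techniques appear in PyTorch, the sketch below combines Batch Normalization, Dropout, and the Adam optimizer with weight decay on a dummy batch. The layer sizes and hyperparameter values are arbitrary assumptions for demonstration, not recommendations from the course.

    # Regularization and optimization techniques in one model (illustrative sketch).
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    model = nn.Sequential(
        nn.Linear(4, 64),
        nn.BatchNorm1d(64),   # Batch Normalization: stabilizes and speeds up training
        nn.ReLU(),
        nn.Dropout(p=0.5),    # Dropout: randomly zeroes units to reduce overfitting
        nn.Linear(64, 3),
    )

    # Adam optimizer; weight_decay applies L2 regularization (weight decay).
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    X = torch.randn(128, 4)            # dummy batch (assumed shape)
    y = torch.randint(0, 3, (128,))

    model.train()                      # train mode: Dropout and batch statistics active
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

    model.eval()                       # eval mode: Dropout off, BatchNorm uses running stats

Note that calling model.train() and model.eval() matters here: Dropout and Batch Normalization behave differently during training and evaluation.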

Session 4: Convolutional Neural Networks (CNN) and Image Classification

In the fourth session, we deal with convolutional neural networks (CNNs), which are widely used in fields such as image processing and speech recognition. First, we consider the problems of fully connected neural networks and the background that led to the introduction of CNNs, in order to understand the overall picture and characteristics of CNNs. Next, the convolutional and pooling layers, which are unique to CNNs and used for feature extraction, are understood through implementation, as sketched below. Finally, by implementing CNNs on image datasets such as MNIST and CIFAR-10, used in the third session, the objective is to confirm the usefulness of CNNs for image processing and to solidify implementation skills.
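The following is a minimal sketch of such a CNN in PyTorch for 28x28 grayscale input (MNIST-sized images); the SimpleCNN class name, channel counts, and kernel sizes are illustrative assumptions, not the course's reference model.

    # A small CNN for 28x28 grayscale images (illustrative sketch).
    import torch
    import torch.nn as nn

    class SimpleCNN(nn.Module):
        def __init__(self, num_classes: int = 10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1),  # convolutional layer
                nn.ReLU(),
                nn.MaxPool2d(2),                             # pooling layer: 28 -> 14
                nn.Conv2d(16, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),                             # 14 -> 7
            )
            self.classifier = nn.Linear(32 * 7 * 7, num_classes)

        def forward(self, x):
            x = self.features(x)               # feature extraction
            return self.classifier(x.flatten(1))

    model = SimpleCNN()
    dummy = torch.randn(8, 1, 28, 28)   # a batch of 8 fake images
    print(model(dummy).shape)           # torch.Size([8, 10])

Unlike a fully connected network, the convolutional layers share weights across spatial positions, which is what makes CNNs efficient on image data.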

Session 5: Recurrent Neural Networks (RNNs) and Prediction of Time-Series Data

In Session 5, we discuss recurrent neural networks (RNNs), which are well suited to forecasting time-series data; the objective is to implement RNNs and solve time-series forecasting problems. First, we explain the principles of RNNs and how they learn from time-series data, and demonstrate time-series prediction by implementing basic examples, along the lines of the sketch below. Next, we introduce the Long Short-Term Memory (LSTM) model, which addresses various shortcomings of plain RNNs and offers higher performance. Finally, students work on a stock-price prediction problem using RNNs and LSTMs to practice implementation and consolidate their understanding.
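As a rough illustration, the sketch below trains an LSTM in PyTorch for one-step-ahead forecasting of a synthetic sine wave rather than stock prices; the LSTMForecaster class, the window length of 30, and the hyperparameters are assumptions for demonstration only.

    # One-step-ahead forecasting of a sine wave with an LSTM (illustrative sketch).
    import torch
    import torch.nn as nn

    class LSTMForecaster(nn.Module):
        def __init__(self, hidden_size: int = 32):
            super().__init__()
            self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
            self.head = nn.Linear(hidden_size, 1)

        def forward(self, x):                 # x: (batch, seq_len, 1)
            out, _ = self.lstm(x)             # hidden state at every time step
            return self.head(out[:, -1, :])   # predict from the last time step

    torch.manual_seed(0)
    t = torch.linspace(0, 20, 400)
    series = torch.sin(t)                     # synthetic time series

    # Sliding windows: 30 past values predict the next value.
    win = 30
    X = torch.stack([series[i:i + win] for i in range(len(series) - win)]).unsqueeze(-1)
    y = series[win:].unsqueeze(-1)

    model = LSTMForecaster()
    loss_fn = nn.MSELoss()                    # regression loss for forecasting
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

    for epoch in range(50):
        optimizer.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        optimizer.step()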
