Shuhai Education Group
National toll-free enrollment hotline: 4008699035  WeChat: shuhaipeixun
or 15921673576 (same number on WeChat)  QQ: 1299983702
 
Understanding Deep Neural Networks Training

 
   Class size and environment
       Small classes only: to guarantee training quality and allow plenty of interaction, each session is limited to 3 to 5 students.
   Class times and locations
Locations: [Shanghai]: Tongji University (West Shanghai) / Xincheng Jinjun Office Building (Line 11, Baiyin Road Station); [Shenzhen]: Film Building (Line 1, Grand Theater Station) / Shenzhen University College of Continuing Education; [Beijing]: Beijing Zhongshan College / Fuxin Building; [Nanjing]: Jingang Building (Heyan Road); [Wuhan]: Jiayuan Building (Gaoxin 2nd Road); [Chengdu]: Consular District No. 1 (Zhonghe Avenue); [Shenyang]: Shenyang Ligong University / Liuzhai Zhenpin; [Zhengzhou]: Zhengzhou University / Jinhua Building; [Shijiazhuang]: Hebei University of Science and Technology / Ruijing Building; [Guangzhou]: Guangliang Building; [Xi'an]: Xietong Building
Next session start date (weekend / continuous / evening classes): January 26, 2019
   Lab equipment
     ☆ Taught by senior engineers

        ☆ Quality first ☆ Lectures combined with hands-on practice

        ☆ Free job referrals for qualified graduates
   Quality guarantee

        1. If any part of the course is not fully understood or absorbed during training, students may retake it free of charge in a later session.
        2. After the course, the instructor shares their phone number and email with students to ensure the training takes hold, and provides six months of free technical support.
        3. Qualified graduates are eligible for free job referrals.

Course Outline
 

Part 1 – Deep Learning and DNN Concepts

Introduction to AI, Machine Learning & Deep Learning

History, basic concepts, and common applications of artificial intelligence, beyond the myths surrounding the field

Collective Intelligence: aggregating knowledge shared by many virtual agents

Genetic algorithms: evolving a population of virtual agents by selection

Classical machine learning: definition.

Types of tasks: supervised learning, unsupervised learning, reinforcement learning

Types of actions: classification, regression, clustering, density estimation, dimensionality reduction

Examples of Machine Learning algorithms: Linear regression, Naive Bayes, Random Tree

Machine Learning vs. Deep Learning: problems on which classical machine learning remains the state of the art today (Random Forests & XGBoost)

Basic Concepts of a Neural Network (Application: multi-layer perceptron)

Review of the mathematical foundations.

Definition of a neural network: classical architecture, activation functions and weighting of previous activations, depth of a network

Training a neural network: cost functions, backpropagation, stochastic gradient descent, maximum likelihood.

Modeling a neural network: modeling input and output data according to the type of problem (regression, classification, ...). The curse of dimensionality.

Distinction between multi-feature data and signals. Choosing a cost function to match the data.

Approximating a function with a neural network: presentation and examples

Approximating a distribution with a neural network: presentation and examples

Data Augmentation: how to balance a dataset

Generalization of a neural network's results.

Initialization and regularization of a neural network: L1 / L2 regularization, Batch Normalization

Optimization and convergence algorithms
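The concepts above (forward pass, cost function, backpropagation, gradient descent) fit in a short sketch. A minimal, illustrative numpy example; the layer sizes, learning rate, and toy regression data are made up for the demonstration, not taken from the course:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: learn y = x^2 on [-1, 1]
X = np.linspace(-1, 1, 64).reshape(-1, 1)
Y = X ** 2

# One hidden layer with tanh activation (illustrative sizes)
W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

def forward(X):
    H = np.tanh(X @ W1 + b1)            # hidden activations
    return H, H @ W2 + b2               # prediction

def mse(pred, Y):
    return float(np.mean((pred - Y) ** 2))

lr = 0.1
_, pred = forward(X)
loss_before = mse(pred, Y)

for _ in range(500):                    # plain (full-batch) gradient descent
    H, pred = forward(X)
    # Backpropagation: chain rule applied layer by layer, output to input
    d_pred = 2 * (pred - Y) / len(X)
    dW2 = H.T @ d_pred; db2 = d_pred.sum(0)
    dH = d_pred @ W2.T * (1 - H ** 2)   # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ dH; db1 = dH.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

_, pred = forward(X)
loss_after = mse(pred, Y)
```

Stochastic gradient descent differs only in computing the gradient on a random mini-batch of `X` per step instead of the full dataset.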

Standard ML / DL Tools

For each tool: a brief presentation of its advantages, disadvantages, position in the ecosystem, and typical use.

Data management tools: Apache Spark, Apache Hadoop

Machine learning libraries: NumPy, SciPy, scikit-learn

High-level DL frameworks: PyTorch, Keras, Lasagne

Low-level DL frameworks: Theano, Torch, Caffe, TensorFlow

Convolutional Neural Networks (CNN).

Presentation of the CNNs: fundamental principles and applications

Basic operation of a CNN: the convolutional layer, use of a kernel, padding and stride, feature map generation, pooling layers. 1D, 2D and 3D extensions.

Presentation of the CNN architectures that set the state of the art in image classification: LeNet, VGG networks, Network in Network, Inception, ResNet. The innovations introduced by each architecture and their broader applications (1x1 convolutions, residual connections).

Use of an attention model.

Application to a common classification case (text or image)

CNNs for generation: super-resolution, pixel-to-pixel segmentation. Presentation of the main strategies for upsampling feature maps for image generation.
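The convolution mechanics above (kernel, padding, stride, feature maps, pooling) can be shown directly. A minimal single-channel numpy sketch with a made-up image and kernel; note the output-size formula (in - kernel + 2*pad) / stride + 1:

```python
import numpy as np

def conv2d(image, kernel, stride=1, padding=0):
    """Single-channel 2D convolution (cross-correlation, as in most DL libraries)."""
    if padding:
        image = np.pad(image, padding)
    kh, kw = kernel.shape
    ih, iw = image.shape
    oh = (ih - kh) // stride + 1    # output height: (in - kernel + 2*pad)/stride + 1
    ow = (iw - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i*stride:i*stride+kh, j*stride:j*stride+kw]
            out[i, j] = np.sum(patch * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling."""
    h, w = fmap.shape
    return fmap[:h - h % size, :w - w % size] \
        .reshape(h // size, size, w // size, size).max(axis=(1, 3))

img = np.arange(36, dtype=float).reshape(6, 6)
edge = np.array([[1., -1.]])            # a 1x2 horizontal-edge kernel
fmap = conv2d(img, edge)                # feature map of shape (6, 5)
pooled = max_pool(fmap)                 # shape (3, 2) after 2x2 pooling
```

A convolutional layer is this operation repeated over many learned kernels, each producing one feature map.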

Recurrent Neural Networks (RNN).

Presentation of RNNs: fundamental principles and applications.

Basic operation of an RNN: hidden activations, backpropagation through time, the unfolded view.

Evolution towards Gated Recurrent Units (GRU) and Long Short-Term Memory (LSTM) networks.

Presentation of the different states and the improvements brought by these architectures

Convergence and vanishing-gradient problems

Classical architectures: prediction of a time series, classification, ...

Encoder-decoder RNN architectures. Use of an attention model.

NLP applications: word / character encoding, translation.

Video Applications: prediction of the next generated image of a video sequence.
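The hidden-state recurrence at the heart of an RNN is compact. A minimal numpy sketch of the unfolded forward pass, with made-up dimensions; backpropagation through time would apply the chain rule backward through these same unrolled steps:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes: input dim 3, hidden dim 4, sequence length 5
Wx = rng.normal(0, 0.1, (3, 4))   # input-to-hidden weights
Wh = rng.normal(0, 0.1, (4, 4))   # hidden-to-hidden (recurrent) weights
b  = np.zeros(4)

def rnn_forward(xs):
    """Unfolded forward pass of a vanilla RNN: one hidden state per time step."""
    h = np.zeros(4)                          # initial hidden state
    states = []
    for x in xs:
        h = np.tanh(x @ Wx + h @ Wh + b)     # hidden activation: mixes new input
        states.append(h)                     # with the previous hidden state
    return np.array(states)

seq = rng.normal(size=(5, 3))
states = rnn_forward(seq)                    # shape (5, 4)
```

The repeated multiplication by `Wh` across time steps is precisely what causes the vanishing (or exploding) gradient problem that GRUs and LSTMs mitigate with gating.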

Generative models: the Variational Autoencoder (VAE) and Generative Adversarial Networks (GAN).

Presentation of generative models and their link with CNNs

Autoencoder: dimensionality reduction and limited generation

Variational autoencoder: a generative model that approximates the distribution of the data. Definition and use of the latent space. The reparameterization trick. Applications and observed limits.
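The reparameterization trick fits in a few lines. A sketch with made-up encoder outputs: the sample `z` stays differentiable with respect to `mu` and `log_var` because all randomness is isolated in `eps`:

```python
import numpy as np

rng = np.random.default_rng(2)

# Suppose the encoder produced these for one input (illustrative values):
mu = np.array([0.5, -1.0])
log_var = np.array([0.0, -2.0])

# Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I).
# Sampling is pushed into eps, so gradients can flow through mu and log_var.
eps = rng.standard_normal(2)
z = mu + np.exp(0.5 * log_var) * eps   # a differentiable sample from N(mu, sigma^2)

# KL divergence between N(mu, sigma^2) and the N(0, I) prior (closed form),
# the regularization term of the VAE cost function:
kl = -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var))
```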

Generative Adversarial Networks: Fundamentals.

Dual-network architecture (generator and discriminator) with alternating training; the available cost functions.

Convergence of a GAN and difficulties encountered.

Improving convergence: Wasserstein GAN, BEGAN. The Earth Mover's Distance.

Applications for the generation of images or photographs, text generation, super-resolution.

Deep Reinforcement Learning.

Presentation of reinforcement learning: controlling an agent in an environment defined by a state and a set of possible actions

Using a neural network to approximate the value function

Deep Q Learning: experience replay, and application to the control of a video game.

Optimizing the learning policy. On-policy vs. off-policy. Actor-critic architectures. A3C.

Applications: control of a single video game or a digital system.
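Before approximating anything with a network, the underlying Q-learning update is easiest to see in tabular form. A toy, illustrative environment (not from the course); it also shows the off-policy property: the behavior policy here is uniformly random, yet the learned greedy policy is optimal:

```python
import numpy as np

# Tabular Q-learning on a tiny 1-D corridor:
# states 0..4, actions 0 = left, 1 = right, reward 1 on reaching state 4.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.5, 0.9
rng = np.random.default_rng(3)

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    return s2, float(s2 == n_states - 1)

for _ in range(300):                      # episodes
    s = 0
    while s != n_states - 1:
        a = int(rng.integers(n_actions))  # random (off-policy) behavior
        s2, r = step(s, a)
        # The Q-learning update rule, which Deep Q Learning approximates
        # with a neural network over high-dimensional states:
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

policy = Q.argmax(axis=1)                 # greedy target policy
```

Deep Q Learning replaces the table `Q` with a network `Q(s, a; theta)` and stabilizes training with experience replay.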

Part 2 – Theano for Deep Learning

Theano Basics

Introduction

Installation and Configuration

Theano Functions

inputs, outputs, updates, givens

Training and Optimization of a neural network using Theano

Neural Network Modeling

Logistic Regression

Hidden Layers

Training a network

Computing and Classification

Optimization

Log Loss

Testing the model
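The logistic-regression model this part builds can be sketched in plain numpy to show the moving parts (sigmoid output, log loss, gradient descent); in Theano the same expressions would be declared symbolically and compiled into a function. The data and hyperparameters here are made up:

```python
import numpy as np

rng = np.random.default_rng(4)

# Linearly separable toy data: two Gaussian blobs (illustrative)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

w = np.zeros(2); b = 0.0
lr = 0.1

def predict_proba(X):
    return 1 / (1 + np.exp(-(X @ w + b)))      # sigmoid output

def log_loss(p, y):
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

for _ in range(200):                            # gradient descent on log loss
    p = predict_proba(X)
    grad = p - y                                # dL/dz for cross-entropy + sigmoid
    w -= lr * (X.T @ grad) / len(X)
    b -= lr * grad.mean()

accuracy = float(np.mean((predict_proba(X) > 0.5) == y))
```

Adding a hidden layer between `X` and the sigmoid output turns this into the multi-layer network the rest of the part covers.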

Part 3 – DNN using TensorFlow

TensorFlow Basics

Creating, initializing, saving, and restoring TensorFlow variables

Feeding, reading, and preloading TensorFlow data

How to use TensorFlow infrastructure to train models at scale

Visualizing and Evaluating models with TensorBoard

TensorFlow Mechanics

Prepare the Data

Download

Inputs and Placeholders

Build the Graph

Inference

Loss

Training

Train the Model

The Graph

The Session

Train Loop

Evaluate the Model

Build the Eval Graph

Eval Output
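The build-the-graph-then-run-a-session workflow above can be mimicked with a tiny hand-rolled graph. This is not TensorFlow's API, only an illustration of the idea: construction declares operations without computing anything, and a later "run" evaluates them with placeholder values fed in:

```python
class Node:
    """A graph node: evaluated lazily when the 'session' runs it."""
    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs

    def run(self, feed_dict):
        if self in feed_dict:                   # placeholder: value is fed in
            return feed_dict[self]
        args = [n.run(feed_dict) for n in self.inputs]
        return self.op(*args)

def placeholder():
    def _unfed():
        raise ValueError("placeholder was not fed")
    return Node(_unfed)

def add(a, b): return Node(lambda x, y: x + y, a, b)
def mul(a, b): return Node(lambda x, y: x * y, a, b)

# Build the graph first (no computation happens here) ...
x = placeholder()
y = placeholder()
loss = add(mul(x, x), y)                # loss = x*x + y

# ... then "run the session", feeding the placeholders:
result = loss.run({x: 3.0, y: 1.0})
```

TensorFlow's train loop repeats such a run call, feeding a new batch into the placeholders each iteration.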

The Perceptron

Activation functions

The perceptron learning algorithm

Binary classification with the perceptron

Document classification with the perceptron

Limitations of the perceptron
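The perceptron learning rule, and its key limitation, both fit in a short sketch. An illustrative numpy example: the rule converges on the linearly separable AND function but cannot represent XOR, no matter how long it trains:

```python
import numpy as np

def train_perceptron(X, y, epochs=20):
    """Rosenblatt's perceptron learning rule on {0, 1} labels."""
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = float(xi @ w + b > 0)   # step activation
            w += (yi - pred) * xi          # update only on mistakes
            b += (yi - pred)
    return w, b

def predict(X, w, b):
    return (X @ w + b > 0).astype(float)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y_and = np.array([0., 0., 0., 1.])         # linearly separable: learnable
y_xor = np.array([0., 1., 1., 0.])         # not linearly separable: fails

w, b = train_perceptron(X, y_and)
and_ok = np.array_equal(predict(X, w, b), y_and)

w, b = train_perceptron(X, y_xor)
xor_ok = np.array_equal(predict(X, w, b), y_xor)
```

This XOR failure is exactly what motivates the nonlinear decision boundaries of SVMs and multilayer perceptrons in the sections that follow.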

From the Perceptron to Support Vector Machines

Kernels and the kernel trick

Maximum margin classification and support vectors
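The kernel trick replaces an explicit feature mapping with a kernel function that computes inner products in the feature space directly. An illustrative numpy sketch using the RBF (Gaussian) kernel, whose implicit feature space is infinite-dimensional:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """K[i, j] = exp(-gamma * ||a_i - b_j||^2): an inner product in an
    infinite-dimensional feature space, computed without ever mapping there."""
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 3.0]])
K = rbf_kernel(X, X)
# K is symmetric with ones on the diagonal; nearby points score higher.
```

A kernelized SVM only ever needs such a Gram matrix `K`, never the feature vectors themselves.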

Artificial Neural Networks

Nonlinear decision boundaries

Feedforward and feedback artificial neural networks

Multilayer perceptrons

Minimizing the cost function

Forward propagation

Back propagation

Improving the way neural networks learn

Convolutional Neural Networks

Goals

Model Architecture

Principles

Code Organization

Launching and Training the Model

Evaluating a Model

Brief introductions to the modules below (covered as time permits):

TensorFlow – Advanced Usage

Threading and Queues

Distributed TensorFlow

Writing Documentation and Sharing your Model

Customizing Data Readers

Manipulating TensorFlow Model Files

TensorFlow Serving

Introduction

Basic Serving Tutorial

Advanced Serving Tutorial

Serving Inception Model Tutorial

 
  ICP record number: 沪ICP备08026168号 (July 11, 2014)