Program Introduction
Leverage a pre-trained model for computer vision inference. You will convert pre-trained models into the framework-agnostic intermediate representation with the Model Optimizer, and perform efficient inference on deep learning models through the hardware-agnostic Inference Engine. Finally, you will deploy an app on the edge, including sending information through MQTT, and analyze model performance and use cases.
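The MQTT step mentioned above can be illustrated with a short sketch. The snippet below builds the kind of JSON statistics payload an edge app might publish; the `make_stats_payload` helper, the field names, and the topic name are all hypothetical rather than the course's exact code, and the actual publish would go through an MQTT client such as paho-mqtt.

```python
import json

def make_stats_payload(people_count, avg_duration_s):
    """Serialize inference statistics as JSON for publishing over MQTT.
    Field names here are illustrative, not the course's exact schema."""
    return json.dumps({"count": people_count, "duration": avg_duration_s})

payload = make_stats_payload(3, 12.5)
# With paho-mqtt this would be sent to a broker, e.g.:
#   client.publish("person/stats", payload)  # topic name is hypothetical
print(payload)
```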
03. Notebooks and Workspaces (0:18)
Introduction to AI at the Edge
02. What is AI at the Edge?
03. Why is AI at the Edge Important? (1:26)
04. Applications of AI at the Edge (1:09)
04. Applications of AI at the Edge Quiz
05. Historical Context (1:11)
06. Course Structure (1:26)
07. Why Are the Topics Distinct? (1:01)
08. Relevant Tools and Prerequisites (1:31)
09. What You Will Build (0:40)
09.1 What You Will Build (0:15)
10. Recap (0:22)
Leveraging Pre-Trained Models
01. Introduction (0:19)
02. The OpenVINO™ Toolkit (1:41)
03. Pre-Trained Models in OpenVINO™ (1:04)
04. Types of Computer Vision Models (3:24)
05. Case Studies in Computer Vision (2:28)
06. Available Pre-Trained Models in OpenVINO™ (3:47)
07. Exercise: Loading Pre-Trained Models (3:05)
08. Solution: Loading Pre-Trained Models (4:44)
09. Optimizations on the Pre-Trained Models (0:52)
10. Choosing the Right Model for Your App (2:25)
11. Pre-processing Inputs (3:15)
12. Exercise: Pre-processing Inputs
13. Solution: Pre-processing Inputs (5:33)
14. Handling Network Outputs (2:22)
15. Running Your First Edge App (4:23)
16. Exercise: Deploy An App at the Edge
17. Solution: Deploy An App at the Edge (7:38)
17.1 Solution: Deploy An App at the Edge (4:30)
17.2 Solution: Deploy An App at the Edge (1:30)
18. Recap (0:23)
19. Lesson Glossary
The Model Optimizer
01. Introduction (0:22)
02. The Model Optimizer (1:36)
03. Optimization Techniques (3:14)
04. Supported Frameworks (1:32)
05. Intermediate Representations (1:46)
06. Using the Model Optimizer with TensorFlow Models (4:11)
07. Exercise: Convert a TF Model
08. Solution: Convert a TF Model (2:54)
09. Using the Model Optimizer with Caffe Models (1:57)
10. Exercise: Convert a Caffe Model
11. Solution: Convert a Caffe Model
12. Using the Model Optimizer with ONNX Models (1:40)
13. Exercise: Convert an ONNX Model
14. Solution: Convert an ONNX Model (1:25)
15. Cutting Parts of a Model (1:33)
16. Supported Layers (1:43)
17. Custom Layers (1:37)
18. Exercise: Custom Layers
19. Recap (0:30)
20. Lesson Glossary
The Inference Engine
01. Introduction (0:19)
02. The Inference Engine (1:22)
03. Supported Devices (2:53)
04. Using the Inference Engine with an IR (3:46)
05. Exercise: Feed an IR to the Inference Engine
06. Solution: Feed an IR to the Inference Engine (3:37)
07. Sending Inference Requests to the IE (0:55)
08. Asynchronous Requests (1:33)
09. Exercise: Inference Requests
10. Solution: Inference Requests (4:05)
11. Handling Results (1:13)
12. Integrating into Your App (1:06)
13. Exercise: Integrate into an App
14. Solution: Integrate into an App (6:03)
15. Behind the Scenes of Inference Engine (2:54)
16. Recap (0:28)
17. Lesson Glossary
Deploying an Edge App
01. Introduction (0:57)
02. OpenCV Basics (2:22)
03. Handling Input Streams (2:18)
04. Exercise: Handling Input Streams
05. Solution: Handling Input Streams
06. Gathering Useful Information from Model Outputs (2:24)
07. Exercise: Process Model Outputs
08. Solution: Process Model Outputs (5:17)
09. Intro to MQTT (2:42)
10. Communicating with MQTT (2:03)
11. Streaming Images to a Server (3:11)
12. Handling Statistics and Images from a Node Server (1:27)
13. Exercise: Server Communications
14. Solution: Server Communications (5:50)
15. Analyzing Performance Basics (1:56)
16. Model Use Cases (0:50)
17. Concerning End User Needs (0:56)
18. Recap (0:29)
19. Lesson Glossary
20. Course Recap (0:25)
21. Partner with Intel
Project: Deploy a People Counter App at the Edge
01. Project Introduction
02. Project Set-Up
03. Project Instructions: Code
04. Running the App
05. Project Instructions: Write-Up
06. Minimum Viable Project
07. Project Workspace
Project Description – Deploy a People Counter App at the Edge
Project Rubric – Deploy a People Counter App at the Edge
Introduction to Hardware at the Edge
Grow your expertise in choosing the right hardware. Identify key hardware specifications of various hardware types (CPU, VPU, FPGA, and Integrated GPU). Utilize the Intel® DevCloud for the Edge to test model performance and deploy power-efficient deep neural network inference on the various hardware types. Finally, you will distribute workloads across available compute devices in order to improve model performance.
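Two of the basic terms this course uses to compare hardware, latency and throughput, can be computed directly from raw per-frame inference timings. A minimal sketch (the `summarize` helper and the timing values are made up for illustration):

```python
import statistics

def summarize(inference_times_s):
    """Reduce per-frame inference times (in seconds) to the two metrics
    commonly used to compare edge hardware: latency and throughput."""
    latency_ms = statistics.mean(inference_times_s) * 1000.0          # avg time per frame
    throughput_fps = len(inference_times_s) / sum(inference_times_s)  # frames per second
    return latency_ms, throughput_fps

# Four made-up frames at 20 ms each:
latency, fps = summarize([0.02, 0.02, 0.02, 0.02])
print(latency, fps)  # ~20 ms latency, ~50 FPS
```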
01. Instructor Introduction (1:41)
01.2 Instructor Introduction (0:30)
02. Course Overview (1:24)
03. Changes in OpenVINO 2020.1
04. Lesson Overview (2:01)
05. Why is Choosing the Right Hardware Important? (1:04)
06. Design of Edge AI Systems (1:20)
06.2 Design of Edge AI Systems (0:37)
07. Analyze (1:38)
08. Design (1:02)
09. Develop (1:37)
10. Test and Deploy (1:28)
11. Basic Terminology (5:49)
12. Intel DevCloud (2:17)
13. Updating Your Workspace
14. Walkthrough: Using Intel DevCloud (4:21)
15. Exercise: Using Intel DevCloud
16. Lesson Review (1:16)
CPUs and Integrated GPUs
01. Lesson Overview (1:45)
02. CPU Basics
03. Threads and Processes
04. Multithreading and Multiprocessing
05. Introduction to Intel Processors (1:19)
06. Intel CPU Architecture (3:17)
07. CPU Specifications (Part 1)
08. CPU Specifications (Part 2)
09. Exercise: CPU Scenario
10. Updating Your Workspace
11. Walkthrough: CPU and the DevCloud
12. Exercise: CPU and the DevCloud
13. Integrated GPU (IGPU) (3:18)
14. Walkthrough: IGPU and the DevCloud
15. IGPU and Batch Processing
16. Exercise: IGPU Scenario
17. Exercise: IGPU and the DevCloud
18. Lesson Review (0:49)
VPUs
01. Lesson Overview (1:13)
02. Introduction to VPUs (1:35)
03. Architecture of VPUs (1:33)
04. Myriad X Characteristics (1:53)
05. Intel Neural Compute Stick 2 (1:59)
06. Exercise: VPU Scenario
07. Updating Your Workspace
08. Walkthrough: VPU and the DevCloud
09. Exercise: VPU and the DevCloud
10. Multi-Device Plugin (3:29)
11. Walkthrough: Multi-Device Plugin and the DevCloud
12. Exercise: Multi-Device Plugin on DevCloud
13. Lesson Review (0:56)
FPGAs
01. Lesson Overview (2:33)
02. Introduction to FPGAs (3:10)
03. Architecture of FPGAs (2:08)
04. Programming FPGAs (4:17)
04.2 Programming FPGAs (1:53)
05. FPGA Specifications (3:10)
06. Intel Vision Accelerator Design (1:58)
07. Exercise: FPGA Scenario
08. Updating Your Workspace
09. Walkthrough: FPGA and the DevCloud
10. Exercise: FPGA and the DevCloud
11. Heterogeneous Plugin (2:21)
12. Exercise: Heterogeneous Plugin on DevCloud
13. Lesson Review (2:25)
14. Course Review
Project: Smart Queuing System
01. Project Overview (2:11)
02. Part 1: Hardware Proposal
03. Scenario 1: Manufacturing
04. Scenario 2: Retail
05. Scenario 3: Transportation
06. Part 2: Testing Your Hardware
07. Step 1: Create the Python Script
08. Step 2: Create the Job Submission Script
09. Step 3: Manufacturing Scenario
10. Step 4: Retail Scenario
11. Step 5: Transportation Scenario
12. Step 6: Submit Your Project
Project Description – Smart Queuing System
Project Rubric – Smart Queuing System
Introduction to Software Optimization
Learn how to optimize your model and application code to reduce inference time when running your model at the edge. Use different software optimization techniques to improve the inference time of your model. Calculate how computationally expensive your model is. Use the DL Workbench to optimize your model and benchmark its performance. Use Intel VTune Amplifier to find and fix hotspots in your application code. Finally, package your application code and data so that it can be easily deployed to multiple devices.
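The "how computationally expensive is your model" question from the description above is typically answered by counting FLOPs. A rough sketch of the standard counting rules for two common layer types, assuming one multiply plus one add (2 FLOPs) per multiply-accumulate and ignoring biases and activations:

```python
def dense_flops(n_in, n_out):
    """FLOPs for a fully-connected (dense) layer: each of the n_out outputs
    takes n_in multiplies and n_in adds."""
    return 2 * n_in * n_out

def conv2d_flops(h_out, w_out, k_h, k_w, c_in, c_out):
    """FLOPs for a 2D convolution: one MAC (2 FLOPs) per kernel element,
    per input channel, for every output position and output channel."""
    return 2 * k_h * k_w * c_in * h_out * w_out * c_out

print(dense_flops(1024, 10))                # 20480
print(conv2d_flops(112, 112, 3, 3, 3, 64))  # 43352064
```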
01. Instructor Introduction (1:28)
02. Course Overview (4:28)
03. Installing OpenVINO
04. Lesson Overview (2:16)
05. What is Software Optimization and Why Does it Matter? (3:30)
05.2 What is Software Optimization and Why Does it Matter? (2:40)
06. Types of Software Optimization (4:39)
07. Performance Metrics (4:08)
07.2 Performance Metrics
08. Some Other Performance Metrics
09. When Do We Do Software Optimization?
10. Lesson Review
Reducing Model Operations
01. Lesson Overview
02. Calculating Model FLOPs: Dense Layers
03. Calculating Model FLOPs: Convolutional Layers
04. Calculate the FLOPs in a Model
05. Using Efficient Layers: Pooling Layers
06. Exercise: Pooling Performance
07. Using Efficient Layers: Separable Convolutions
08. Exercise: Separable Convolutions Performance
09. Measuring Layerwise Performance
10. Exercise: Measuring Layerwise Performance
11. Model Pruning
12. Lesson Review
Reducing Model Size
01. Lesson Overview
02. Introduction to Quantization
03. Benchmarking Model Performance
04. Exercise: Benchmarking Model Performance
05. Advanced Benchmarking
06. Exercise: Advanced Benchmarking
07. How Quantization is Done
08. Quantizing a Model Using DL Workbench
09. Exercise: Quantizing a Model Using DL Workbench
10. Model Compression
11. Knowledge Distillation
12. Lesson Review
Other Optimization Tools and Techniques
01. Lesson Overview
02. Introduction to Intel VTune
03. Exercise: Profiling Using VTune
04. Advanced Concepts in Intel VTune
05. Exercise: Advanced Profiling Using VTune Amplifier
06. Packaging Your Application
07. Exercise: Packaging Your Application
08. Exercise: Deploying Runtime Package
09. Lesson Review
10. Course Review
Project: Computer Pointer Controller
01. Overview
02. Part 1: Project Setup
03. Part 2: Build the Inference Pipeline
04. Part 3: Complete the README
05. Part 4: Standout Suggestions
06. Part 5: Check Your Work
Project Description – Computer Pointer Controller
Project Rubric – Computer Pointer Controller
02. Prerequisites & Other Requirements
Prerequisites
Before you begin, please check the following prerequisite requirements to make sure you have the skills to succeed in this Nanodegree program.
Students should have the following:
- Intermediate knowledge of programming in Python
- Experience with training and deploying deep learning models
- Familiarity with different DL layers and architectures (CNN based)
- Familiarity with the command line (bash terminal)
- Experience using OpenCV
Hardware & Software Requirements
Please review these requirements to make sure you have what you need to complete this Nanodegree Program:
- A 64-bit operating system with a 6th-generation or newer Intel processor, running Windows 10, Ubuntu 18.04.3 LTS, or macOS 10.13 or higher.
- Installing OpenVINO (version 2020.1) on your local environment. OpenVINO and the software listed below only need to run locally to complete the project and exercises in Course 3. All other projects and exercises can be completed within Udacity’s classroom workspaces.
- Installing Intel’s Deep Learning Workbench (version 2020.1). Please note that DL Workbench does not currently support Windows 10 Home Edition. We recommend students either upgrade to Windows 10 Professional or use a Linux-based system.
- Installing Intel’s VTune Amplifier.