Artificial Intelligence (AI)

Overview:  

Artificial Intelligence (AI) is one of the most prominent areas of technology today: a large number of organizations are implementing AI, and demand for AI professionals is growing at a remarkable pace. The Artificial Intelligence (AI) course with Robusta provides a broad understanding of AI concepts and of how computer programs can be made to solve problems and achieve goals in the real world.

Artificial Intelligence (AI) aims to develop intelligent machines: it enables computers to perform tasks that normally require human intelligence, such as speech recognition, decision-making, and visual perception.

A basic grounding in Robusta’s AI practices is likely to prove valuable in both business and professional settings. This course is intended to cover the concepts of Artificial Intelligence from the basics through to advanced implementation.

In this Artificial Intelligence (AI) course, you will:

  • Understand the basics of AI and how these technologies are redefining the industry
  • Learn the key terminology used in the AI space
  • Learn the major applications of AI through use cases

Duration:

60 hours

Course Objectives:

Artificial Intelligence (AI) is being applied across business functions to improve performance, and it is becoming more capable every day. AI is used widely in gaming, media, finance, robotics, quantum science, autonomous vehicles, and medical diagnosis. AI technology is a crucial prerequisite for much of the digital transformation taking place today, as organizations position themselves to capitalize on the ever-growing amount of data being generated and collected.

This course is intended to give you the complete understanding of Artificial Intelligence concepts needed to build a successful career in the field. It offers practical, hands-on experience to ensure hassle-free execution of real-life projects, and it leverages world-class industry expertise to develop you into a professional data science practitioner.

Robusta familiarises you with the basic terminologies, problem-solving techniques, and learning methods of AI, and also discusses the impact of AI.

Intended Audience:

Robusta’s course on Artificial Intelligence (AI) gives you a basic knowledge of Artificial Intelligence. It requires no programming skills and is best suited for:

  • Management and non-technical participants
  • Students who want to learn Artificial Intelligence
  • Newcomers who are not familiar with AI or its implications

Course Outline:

1.      Computer Vision

a)      Introduction

  • What is Vision?
  • Applications of Image & Video Analytics
  • Challenges in the Space of Image & Video Analytics

b)      Image Filtering

  • Image Representation as a Matrix & a Function
  • Image Transformations & Operations
    • Point Operations
      • Reversing Contrast
      • Contrast Stretching
      • Histogram Equalization
      • Average
    • Local Operations
      • Average for Noise Reduction
      • Moving Average – Uniform & Non-Uniform Weights
      • 2D Moving Average
    • Linear Filtering
      • Cross-Correlation
      • Average Filtering
      • Gaussian Filter
      • Convolution
      • Boundary Effects
      • Sharpening Filters
      • Separable Filters
  • Cross-correlation for Template Matching
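
As a brief illustration of the filtering operations listed above, the sketch below applies a point operation, a uniform moving average, and a Gaussian filter to an image treated as a matrix. It is a minimal example assuming NumPy and SciPy are installed; the kernel size and sigma are illustrative choices, not values prescribed by the course.

    import numpy as np
    from scipy.signal import convolve2d
    from scipy.ndimage import gaussian_filter

    # Treat the image as a matrix; random values stand in for grayscale intensities in [0, 1].
    image = np.random.rand(64, 64)

    # Point operation: reversing contrast (assumes intensities in [0, 1]).
    reversed_contrast = 1.0 - image

    # Local operation: 3x3 moving average with uniform weights (simple noise reduction).
    box_kernel = np.ones((3, 3)) / 9.0
    smoothed_box = convolve2d(image, box_kernel, mode="same", boundary="symm")

    # Gaussian filter: non-uniform weights, separable into 1D filters along x and y.
    smoothed_gauss = gaussian_filter(image, sigma=1.0)

    print(reversed_contrast.shape, smoothed_box.shape, smoothed_gauss.shape)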

c)      Edge Detection

  • Origin of Edges
  • Derivatives & Edges
    • Derivatives with Convolution
    • Partial Derivative of an Image
    • Sobel Edge Detection Filter
    • Finite Difference Filters
    • Image Gradient
    • Effects of Noise
  • Convolution – Differentiation Property
  • Derivative of Gaussian filters
    • 1D & 2D Gaussian
    • Second Derivative
  • Laplacian Filter
    • Smoothing with Gaussian
    • Laplacian of Gaussian (LoG)
    • LoG filter
    • Reducing noise using Gaussian Filter
  • Non-Linear Filters
    • Bilateral Filters
  • Optimal Edge Detection
    • Canny Edge Detector
    • Non-Maximum Suppression
    • Hysteresis Thresholding
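
A minimal sketch of gradient-based and optimal edge detection, assuming OpenCV (cv2) is installed; "input.jpg" and the Canny thresholds are placeholder choices, not values prescribed by the course.

    import cv2
    import numpy as np

    # "input.jpg" is a placeholder path; any grayscale image will do.
    img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)

    # Image gradient via Sobel finite-difference filters (partial derivatives in x and y).
    gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
    magnitude = np.sqrt(gx ** 2 + gy ** 2)

    # Canny combines Gaussian smoothing, non-maximum suppression and hysteresis
    # thresholding; the thresholds (100, 200) are illustrative values.
    edges = cv2.Canny(img, 100, 200)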

d)      Frequency Domain

  • Frequency Spectra
    • Fourier Transform
      • Magnitude vs Phase
      • Rotation & Edge effects
    • Fourier Filtering
      • High Pass Filtering
      • Low-pass Filtering
      • Band-pass Filtering
  • Filtering in Frequency domain
  • Fourier Amplitude & Phase Spectrum
    • fftshift(x)
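
A short sketch of the frequency-domain ideas above using NumPy's FFT routines, including the fftshift step listed here; the 32-pixel low-pass cutoff is an arbitrary illustrative value.

    import numpy as np

    image = np.random.rand(128, 128)            # stand-in for a grayscale image

    # 2D Fourier transform; fftshift moves the zero-frequency term to the centre.
    F = np.fft.fftshift(np.fft.fft2(image))
    amplitude = np.abs(F)                       # Fourier amplitude spectrum
    phase = np.angle(F)                         # Fourier phase spectrum

    # Crude low-pass filter: keep a 32x32 square around the centre, zero the rest.
    mask = np.zeros_like(F)
    c = F.shape[0] // 2
    mask[c - 16:c + 16, c - 16:c + 16] = 1
    low_passed = np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))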

e)      Image sub-sampling

  • Image Aliasing & Wagon Wheel Effect
    • Shannon’s Sampling Theorem
    • Downsampling
  • Gaussian Pre-Filtering
    • Image Pyramid
    • Gaussian Pyramid
    • Image Upsampling
    • Image Interpolation
  • Nearest Neighbour Interpolation
    • Linear & Bilinear Interpolation
    • Reconstruction Filters
    • Cubic & Cubic Spline Interpolation
  • Interpolation Filters
    • Interpolation & Decimation
      • Image Rotation
      • Multiresolution Representations
  • Laplacian Pyramid & Image Blending
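
The sketch below contrasts naive subsampling with Gaussian pre-filtering (one level of a Gaussian pyramid), assuming NumPy and SciPy are installed; the sigma value is illustrative.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    image = np.random.rand(256, 256)            # stand-in for a grayscale image

    # Naive downsampling (keeping every 2nd pixel) violates Shannon's sampling
    # theorem for high-frequency content and can cause aliasing.
    aliased = image[::2, ::2]

    # Gaussian pre-filtering before subsampling: one level of a Gaussian pyramid.
    level1 = gaussian_filter(image, sigma=1.0)[::2, ::2]

    # Repeating the blur-then-subsample step builds further pyramid levels.
    level2 = gaussian_filter(level1, sigma=1.0)[::2, ::2]
    print(image.shape, level1.shape, level2.shape)   # (256, 256) (128, 128) (64, 64)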

f)      Image Feature Detection

  • Why extract Image Features?
    • Local features
      • Detection, Description & Matching
  • Interest Operator Repeatability
  • Descriptor Distinctiveness
    • Invariant local Features
    • Local features Detection – Local measure of Uniqueness
  • A simple matching criterion
    • SSD error
    • SSD weighted
    • Selecting Interest Points & Overview of Eigenvectors & Eigenvalues
  • Harris Corner Detector
    • Image Transformations
    • Scale Invariant Detection
    • Automatic Scale Selection
    • Blob Detection in 2D & Characteristic Scale
  • Scale-Invariant Interest Points & Fast Approximation
    • Signature Function
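
A minimal Harris corner detection sketch, assuming OpenCV is installed; the file path and detector parameters are placeholders, not course-prescribed values.

    import cv2
    import numpy as np

    # "input.jpg" is a placeholder path for any grayscale image.
    img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32)

    # The Harris response is computed from the eigenvalues of the local gradient
    # structure matrix; blockSize, ksize and k are illustrative parameter values.
    response = cv2.cornerHarris(img, blockSize=2, ksize=3, k=0.04)

    # Keep locations whose response exceeds 1% of the maximum response.
    corners = np.argwhere(response > 0.01 * response.max())
    print(len(corners), "corner candidates")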

g)      Image Feature Descriptors

  • The ideal feature descriptor
    • How to achieve Invariance?
    • Raw Pixels as local Descriptors
  • Scale Invariant Feature Transform – SIFT
    • SIFT – Scale-Space Extrema Detection
    • SIFT – Choosing Parameters
    • SIFT – Keypoint Localization
    • SIFT – Orientation Assignment
    • SIFT – Feature Descriptor
    • SIFT – Partial Voting
  • PCA-SIFT
    • Gradient Location-Orientation histogram (GLOH)
    • SIFT (Scale Invariant Feature Transform) vs SURF (Speeded Up Robust Features)
    • HOG (Histogram of Oriented Gradients)
    • LBP (Local Binary Patterns)
    • Filter Banks
    • Indexing Local Features: Inverted file Index
      • Visual words
      • Visual vocabulary
      • Bag of visual words
      • Constructing the tree
      • Parsing the tree
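
A short SIFT descriptor sketch, assuming a recent OpenCV build; the image path is a placeholder.

    import cv2

    # SIFT is included in the main OpenCV package from version 4.4 onwards; older
    # builds may need opencv-contrib-python. "input.jpg" is a placeholder path.
    img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(img, None)

    # Each keypoint receives a 128-dimensional descriptor built from orientation
    # histograms computed around the keypoint.
    print(len(keypoints), descriptors.shape)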

h)      Feature Matching

  • Image mosaicking
    • Wide baseline stereo matching
    • Spatial verification
    • Fitting Problem
      • Least Square Line Fitting
      • Total Least Squares
  • Random Sample Consensus (RANSAC) – Choosing Parameters
    • Hough Voting
      • Hough Transform
      • Hough Space
    • Hough Voting – Illustration, Several Lines
    • Dealing with Noise
    • Hough Transform for Circles
    • Generalized Hough Transform
  • RANSAC: Going from line fitting to image mosaicking
  • Image Transformation
    • Translation
    • Rotation
    • Scaling
  • How many parameters in the model?
  • Geometric Transformations
  • Matching / Alignment as Fitting
  • Affine Transformations
  • Feature-based Alignment
    • Dealing with Outliers
    • Matching Local Features
    • How to measure performance – ROC curve
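
The sketch below ties feature matching and RANSAC together for image mosaicking, assuming OpenCV is installed; ORB is used in place of SIFT purely for convenience, the file paths are placeholders, and the 5-pixel reprojection threshold is an illustrative choice.

    import cv2
    import numpy as np

    # Two overlapping views of the same scene; both paths are placeholders.
    img1 = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

    # Detect keypoints and compute descriptors in both images.
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Brute-force matching with Hamming distance (ORB descriptors are binary).
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    # Fit a homography with RANSAC to reject outlier matches.
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)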

i)      Window-based Models for Category Recognition

  • General Recognition Framework
    • Window-based models
    • Part-based models
  • Window-based model
    • Generating & Scoring Candidates
    • Sliding Windows Methods
    • Global Representation
    • Texture Representation – Material, Orientation, Scale
    • Filter Banks
    • Gabor Transform, Gabor Basics
  • Classifier: Nearest Neighbour for Scene Gist Detection
  • Classifier: SVM for person detection
  • Classifier: Boosting for Face Detection – Viola-Jones Face Detector – Adaboost
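
A minimal Viola-Jones-style face detection sketch using the Haar cascade bundled with OpenCV; the image path and detection parameters are placeholders.

    import cv2

    # Haar cascade shipped with OpenCV, trained with AdaBoost (Viola-Jones).
    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    detector = cv2.CascadeClassifier(cascade_path)

    # "photo.jpg" is a placeholder for any image containing faces.
    img = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)

    # Sliding-window detection over multiple scales; parameter values are illustrative.
    faces = detector.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5)
    print(len(faces), "face windows detected")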

2.      Neural Networks

a)      Artificial Neural Networks (ANN)

  • Artificial Neuron
  • Integration Function
  • Activation Function
    • Step
    • Ramp
    • Sigmoid
    • Tanh
    • ELU
    • ReLU
    • Leaky ReLU
    • Maxout
    • Softmax
  • AND gate, XOR gate using Perceptron
  • Perceptron
    • Change integration & Multi-Layered Perceptron
    • Error Surface
    • Back Propagation Algorithm
      • Loss function
      • Activation function
      • Iteration
      • Epoch
      • Learning rate (alpha)
      • Batch Size
  • Deep Learning Libraries
    • Caffe
    • Torch
    • Theano
    • TensorFlow
  • Deep Neural Network
  • Data Optimization Techniques
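
As a small illustration of the perceptron topics above, the sketch below trains a single perceptron on the AND gate with plain NumPy; the learning rate and epoch count are illustrative choices.

    import numpy as np

    # Truth table for AND; a single perceptron can learn it because the classes
    # are linearly separable, whereas XOR requires a multi-layered perceptron.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([0, 0, 0, 1])

    w = np.zeros(2)
    b = 0.0
    alpha = 0.1                                        # learning rate

    for epoch in range(20):
        for xi, target in zip(X, y):
            pred = 1 if np.dot(w, xi) + b > 0 else 0   # step activation function
            w += alpha * (target - pred) * xi          # perceptron update rule
            b += alpha * (target - pred)

    print([1 if np.dot(w, xi) + b > 0 else 0 for xi in X])   # expected: [0, 0, 0, 1]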

b)      Real-world scenarios of Deep Learning

  • Gradient Descent (GD) Learning
    • Vanishing / Exploding Gradient
    • Slow Convergence
    • Batch GD, Stochastic GD, Mini-Batch Stochastic GD
  • Momentum
    • Nesterov Momentum
    • Loss Functions
      • Cross-Entropy
      • Negative Log-Likelihood
  • Learning Rate (Alpha) – How to choose
  • Adaptive Learning Rate Methods
    • Adagrad
    • RMSProp
    • Adam (Adaptive Moment Estimation)
  • Regularization Methods
    • Empirical Risk Minimization (ERM)
  • Overfitting
    • Early stopping
    • Weight Decay
    • Dropout
    • Dropconnect
  • Noise
    • Data
    • Label
    • Gradient
  • Data Manipulation Methods
    • Data Transformation
    • Batch Normalization
      • Covariate Shift
      • Data Augmentation
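
The sketch below contrasts plain gradient descent with Adam (Adaptive Moment Estimation) on a one-dimensional quadratic, assuming NumPy; all hyperparameter values are illustrative, not recommendations.

    import numpy as np

    # Minimise f(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
    def grad(w):
        return 2.0 * (w - 3.0)

    def gradient_descent(alpha=0.1, steps=100):
        w = 0.0
        for _ in range(steps):
            w -= alpha * grad(w)                       # plain gradient descent step
        return w

    def adam(alpha=0.1, beta1=0.9, beta2=0.999, eps=1e-8, steps=200):
        w, m, v = 0.0, 0.0, 0.0
        for t in range(1, steps + 1):
            g = grad(w)
            m = beta1 * m + (1 - beta1) * g            # first-moment estimate
            v = beta2 * v + (1 - beta2) * g * g        # second-moment estimate
            m_hat = m / (1 - beta1 ** t)               # bias correction
            v_hat = v / (1 - beta2 ** t)
            w -= alpha * m_hat / (np.sqrt(v_hat) + eps)
        return w

    print(gradient_descent(), adam())                  # both approach the minimum at w = 3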

c)      Convolutional Neural Network (CNN)

  • Convolutional Neural Network – CNN
  • ImageNet Classification Challenge
    • Hierarchical Approach
    • Local Connectivity
    • Parameter Sharing
  • Normalization Layer
    • Last Layer Customization
    • Loss Functions
    • Transfer Learning
  • Convolution of an image with a filter
  • Convolution Layer – Basic ConvNet
  • ReLU (Rectified Linear Units) Layer
    • Stride
    • Pad
    • Pooling Layer
    • Fully Connected Layer
  • Weight Initialization – Xavier’s initialization
    • Semantic Segmentation
    • Fully Convolutional Networks
    • Classification + Localization
  • Object Detection using CNNs
    • Region-based CNN (R-CNN)
    • Fast R-CNN
    • Siamese Networks
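
A minimal ConvNet sketch in Keras (one of the libraries listed in this course); the layer sizes and input shape are illustrative, not a prescribed architecture.

    from tensorflow.keras import layers, models

    # A basic ConvNet: convolution + ReLU, pooling, then fully connected layers
    # and a softmax output for 10 classes (dimensions chosen for MNIST-sized inputs).
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(10, activation="softmax"),
    ])

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()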

d)      Recurrent Neural Networks (RNN)

  • Recurrent Neural Networks for NLP
  • Traditional Language Models
  • Original Neural Language Model using MLPs
  • Recurrent Neural Networks
    • Backpropagation through time (BPTT)
    • Recurrent Neural Networks loss computation
  • Image Captioning
  • Bidirectional RNNs
  • Deep Bidirectional RNNs
  • Memory based Models
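
The sketch below unrolls a vanilla RNN forward pass in NumPy to show the recurrence that backpropagation through time differentiates; all dimensions are toy values.

    import numpy as np

    # Vanilla RNN recurrence: h_t = tanh(W_xh x_t + W_hh h_{t-1} + b).
    # BPTT pushes gradients back through every step of this loop.
    T, input_dim, hidden_dim = 5, 3, 4
    rng = np.random.default_rng(0)

    W_xh = rng.normal(scale=0.1, size=(hidden_dim, input_dim))
    W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
    b = np.zeros(hidden_dim)

    xs = rng.normal(size=(T, input_dim))               # a toy input sequence
    h = np.zeros(hidden_dim)                           # initial hidden state

    hidden_states = []
    for t in range(T):
        h = np.tanh(W_xh @ xs[t] + W_hh @ h + b)       # same weights reused at every step
        hidden_states.append(h)

    print(np.stack(hidden_states).shape)               # (T, hidden_dim) = (5, 4)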

e)      Long Short-Term Memory (LSTM) & Auto-encoders

  • Long Short-Term Memory (LSTM)
  • RNN vs LSTM
    • Deep RNNs vs Deep LSTMs
  • LSTM detailed description
  • Auto-encoders
    • Encoder part of auto-encoder
    • Decoder part of auto-encoder
      • Denoising Autoencoders (dA)
      • Stacking auto-encoders
  • MXNet, TensorFlow & Keras libraries to solve the use cases
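
Minimal Keras sketches of an LSTM model and a dense auto-encoder (Keras being one of the libraries named above); the dimensions are illustrative only.

    from tensorflow.keras import layers, models

    # An LSTM classifier over sequences of length 50 with 8 features per time step.
    lstm_model = models.Sequential([
        layers.LSTM(32, input_shape=(50, 8)),
        layers.Dense(1, activation="sigmoid"),
    ])
    lstm_model.compile(optimizer="adam", loss="binary_crossentropy")

    # A small dense auto-encoder: the encoder compresses 784-dimensional inputs
    # to a 32-dimensional code and the decoder reconstructs them; training
    # minimises the reconstruction error.
    autoencoder = models.Sequential([
        layers.Dense(32, activation="relu", input_shape=(784,)),    # encoder
        layers.Dense(784, activation="sigmoid"),                    # decoder
    ])
    autoencoder.compile(optimizer="adam", loss="mse")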