Soft Computing (ITC4256) - Department of Information Technology
Dr. C.V. Suresh Babu
Professor
Department of IT
Hindustan Institute of Science & Technology
Unsupervised learning networks
Action Plan
• Unsupervised Learning Networks
- Introduction to Kohonen Self-Organizing Feature Maps (KSOM)
- Rectangular grid topology
- Hexagonal grid topology
- KSOM architecture
- KSOM training algorithm
• Quiz at the end of session
Kohonen Self-Organizing Feature Maps (KSOM)
• Suppose we have patterns of arbitrary
dimensionality, but we need them in one
dimension or two dimensions.
• The process of feature mapping is then very
useful for converting the wide pattern space
into a typical feature space.
• Various topologies are possible; however, the
following two are used the most:
- Rectangular Grid Topology
- Hexagonal Grid Topology
Rectangular Grid Topology
• This topology has 24 nodes in the distance-2 grid, 16 nodes in the distance-1 grid, and 8 nodes in the
distance-0 grid, so the difference between successive rectangular grids is 8 nodes.
• The winning unit is indicated by #.
Hexagonal Grid Topology
• This topology has 18 nodes in the distance-2 grid, 12 nodes in the distance-1 grid, and 6 nodes in the
distance-0 grid, so the difference between successive hexagonal grids is 6 nodes.
• The winning unit is indicated by #.
KSOM - Architecture
• The architecture of KSOM is similar to that of the competitive
network.
• With the help of the neighborhood schemes discussed earlier,
training can take place over an extended region of the network.
KSOM – Training Algorithm
Step 1 − Initialize the weights, the learning rate α and the neighborhood
topological scheme.
Step 2 − While the stopping condition is not true, continue steps 3-9.
Step 3 − For every input vector x, continue steps 4-6.
Step 4 − For each unit j = 1 to m, calculate the squared Euclidean distance
D(j) = ∑ (xi − wij)², summed over i = 1 … n
Step 5 − Obtain the winning unit J for which D(j) is minimum.
KSOM – Training Algorithm (Cont…)
Step 6 − Calculate the new weight of the winning unit (and of the units in its
neighborhood) by the following relation −
wij(new) = wij(old) + α[xi − wij(old)]
Step 7 − Update the learning rate α by the following relation −
α(t + 1) = 0.5 α(t)
Step 8 − Reduce the radius of the topological scheme.
Step 9 − Check the stopping condition for the network.
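The nine steps above map directly onto a short training loop. The following NumPy sketch is only an illustration of Steps 1-9, not code from the original slides; the function name, the rectangular neighborhood, and the default parameter values are assumptions.

```python
import numpy as np

def ksom_train(X, grid_h, grid_w, alpha=0.5, radius=2, epochs=20):
    """Illustrative KSOM training loop following Steps 1-9 above."""
    n = X.shape[1]
    # Step 1 - initialize weights (one weight vector per grid unit),
    # the learning rate alpha and the neighborhood radius
    W = np.random.rand(grid_h, grid_w, n)
    coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                                  indexing="ij"), axis=-1)
    for epoch in range(epochs):                      # Step 2 - stopping condition
        for x in X:                                  # Step 3 - every input vector x
            # Step 4 - squared Euclidean distance D(j) for every unit j
            D = np.sum((W - x) ** 2, axis=-1)
            # Step 5 - winning unit J where D(j) is minimum
            J = np.unravel_index(np.argmin(D), D.shape)
            # Step 6 - update the winner and its topological neighbors
            # (rectangular neighborhood: grid distance <= radius)
            hood = np.max(np.abs(coords - np.array(J)), axis=-1) <= radius
            W[hood] += alpha * (x - W[hood])
        alpha *= 0.5                                 # Step 7 - alpha(t+1) = 0.5 alpha(t)
        radius = max(0, radius - 1)                  # Step 8 - reduce the radius
    return W                                         # Step 9 - stop after `epochs` passes
```

Here the stopping condition of Step 9 is simply a fixed number of passes over the data; any other convergence test could be substituted.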
Quiz - Questions
1. What are the 2 topologies used in KSOM?
2. The winning unit is indicated by ---------.
a) * b) $ c) # d) !
3. What parameters have to be initialized for the training algorithm?
4. What is done in step 8?
5. The architecture of KSOM is similar to that of the ------------ network.
Quiz - Answers
1. What are the 2 topologies used in KSOM?
i. Rectangular Grid Topology ii. Hexagonal Grid Topology
2. The winning unit is indicated by ---------.
c) #
3. What parameters have to be initialized for the training algorithm?
The weights, the learning rate α and the neighbourhood topological scheme.
4. What is done in step 8?
The radius of the topological scheme is reduced.
5. The architecture of KSOM is similar to that of the ------------ network.
Competitive
Action Plan
• Unsupervised Learning Networks (Cont…)
- Introduction to ART
- Operational principle of ART
- ART1 architecture
- ART1 training algorithm
• Quiz at the end of session
Adaptive Resonance Theory (ART)
• This network was developed by Stephen Grossberg and Gail Carpenter in 1987.
• It is based on competition and uses an unsupervised learning model.
• Basically, an ART network is a vector classifier: it accepts an input vector and classifies it into one of
the categories depending on which of the stored patterns it resembles the most.
ART – Operational Principle
• The main operation of ART classification can be divided into the following
phases:
- Recognition phase
- Comparison phase
- Search phase
ART1 - Architecture
1. Computational Unit:
a. Input unit (F1 layer):
i. F1(a) layer: input portion
ii. F1(b) layer: interface portion
b. Cluster Unit (F2 layer)
c. Reset Mechanism
2. Supplement Unit:
• Two supplemental units, namely G1 and G2, are added along with the reset unit R.
• They are called gain control units.
ART1 – Architecture (Cont…)
Parameters Used:
• n − Number of components in the input vector
• m − Maximum number of clusters that can be formed
• bij − Weight from F1b to F2 layer, i.e. bottom-up weights
• tji − Weight from F2 to F1b layer, i.e. top-down weights
• ρ − Vigilance parameter
• ||x|| − Norm of vector x
ART1 – Training Algorithm
Step 1 − Initialize the learning rate, the vigilance parameter, and the weights as
follows −
α > 1 and 0 < ρ ≤ 1
0 < bij(0) < α / (α − 1 + n) and tji(0) = 1
Step 2 − While the stopping condition is not true, continue steps 3-9.
Step 3 − For every training input, continue steps 4-6.
Step 4 − Set the activations of all F2 units and all F1(a) units as follows:
F2 = 0 and F1(a) = input vector x
Step 5 − Send the input signal from the F1(a) layer to the F1(b) layer:
si = xi
ART1 – Training Algorithm (Cont…)
Step 6 − For every F2 node that is not inhibited (i.e. yj ≠ −1), compute
yj = ∑ bij xi, summed over i
Step 7 − Perform steps 8-10 while reset is true.
Step 8 − Find J such that yJ ≥ yj for all nodes j.
Step 9 − Recalculate the activation on F1(b) as follows:
xi = si tJi
Step 10 − Now, after calculating the norm of vector x and vector s, we need to
check the reset condition as follows −
• If ||x||/ ||s|| < vigilance parameter ρ, then inhibit node J and go to step 7
• Else If ||x||/ ||s|| ≥ vigilance parameter ρ, then proceed further.
ART1 – Training Algorithm (Cont…)
Step 11 − Weight updating for node J can be done as follows −
bij(new) = (αxi) / (α − 1 + ||x||)
tij(new) = xi
Step 12 − The stopping condition for algorithm must be checked.
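For concreteness, the twelve steps can be strung together for binary input vectors as in the NumPy sketch below. This is a minimal illustration rather than the original implementation; the function name, the weight initialization at half of its upper bound, and the epoch-based stopping test are assumptions.

```python
import numpy as np

def art1_train(X, m, alpha=2.0, rho=0.7, epochs=5):
    """Illustrative ART1 training loop (binary inputs) following Steps 1-12."""
    n = X.shape[1]
    # Step 1 - alpha > 1, 0 < rho <= 1, 0 < b_ij(0) < alpha/(alpha-1+n), t_ji(0) = 1
    b = np.full((n, m), 0.5 * alpha / (alpha - 1 + n))   # bottom-up weights
    t = np.ones((m, n))                                  # top-down weights
    for _ in range(epochs):                              # Step 2 - stopping condition
        for s in X:                                      # Steps 3-5: s_i = x_i
            inhibited = np.zeros(m, dtype=bool)
            while True:
                if inhibited.all():                      # no candidate cluster left
                    break
                y = s @ b                                # Step 6 - y_j = sum_i b_ij x_i
                y[inhibited] = -1.0                      # inhibited nodes stay at -1
                J = int(np.argmax(y))                    # Step 8 - winner with y_J >= y_j
                x = s * t[J]                             # Step 9 - x_i = s_i * t_Ji
                # Step 10 - reset test (||.|| is the number of 1s for binary vectors)
                if s.sum() > 0 and x.sum() / s.sum() < rho:
                    inhibited[J] = True                  # inhibit J and search again (Step 7)
                    continue
                # Step 11 - weight update for the winning node J
                b[:, J] = alpha * x / (alpha - 1 + x.sum())
                t[J] = x
                break
    return b, t                                          # Step 12 - checked via `epochs`
```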
Quiz - Questions
1. ART network is a --------- classifier.
a) vector b) scalar c) linear d) non-linear
2. What are the 3 phases of the main operation of ART?
3. Name the 2 units of ART architecture.
4. What are the 3 components of the computational unit?
5. What is the full form of ART?
Quiz - Answers
1. ART network is a --------- classifier.
a) vector
2. What are the 3 phases of the main operation of ART?
i. recognition ii. comparison iii. search
3. Name the 2 units of ART architecture.
i. Computational unit ii. Supplement unit
4. What are the 3 components of the computational unit?
i. Input unit ii. Cluster unit iii. Reset mechanism
5. What is the full form of ART?
Adaptive Resonance Theory
Action Plan
• Unsupervised Learning Networks (Cont…)
- Introduction to Radial Basis Function (RBF) network
- RBF architecture
- Hidden neuron model
- Gaussian RBF
- RBF network parameters
- RBF learning algorithms
• Quiz at the end of session
Radial Basis Function (RBF) Network
• A function is a radial basis function (RBF) if its output depends on (is a non-increasing
function of) the distance of the input from a given stored vector (the center).
• In the accompanying figure, the output at the red vector is “interpolated” using the three green vectors,
where each vector gives a contribution that depends on its weight and on its
distance from the red point, with w1 < w3 < w2.
RBF - Architecture
• One hidden layer with RBF activation functions.
• Output layer with linear activation function.
Hidden Neuron Model
• Hidden units use radial basis functions.
• The output of a hidden unit depends on the distance of the input x from the center t.
• t is called the center.
• σ is called the spread.
• The center and the spread are the parameters of the hidden unit.
Hidden Neurons
• A hidden neuron is more sensitive to data points near its center.
• For a Gaussian RBF this sensitivity may be tuned by adjusting the spread σ,
where a larger spread implies less sensitivity.
Gaussian RBF
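The figure for this slide is not reproduced here. The Gaussian RBF it illustrates is assumed to take the standard form, with center t and spread σ:
φ(x) = exp(−||x − t||² / (2σ²))
The output is 1 when x = t and decreases toward 0 as x moves away from the center.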
Types of Radial Basis Functions (φ)
• Multiquadrics
• Inverse multiquadrics
• Gaussian functions (most used)
The formulas for these functions are given below.
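The standard forms of these functions (an assumption here, with r = ||x − t||, an offset parameter c > 0 and a spread σ > 0) are:
• Multiquadrics: φ(r) = √(r² + c²)
• Inverse multiquadrics: φ(r) = 1 / √(r² + c²)
• Gaussian: φ(r) = exp(−r² / (2σ²))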
RBF Network Parameters
• What do we have to learn for a RBF NN with a given architecture?
- The centers of the RBF activation functions.
- The spreads of the Gaussian RBF activation functions.
- The weights from the hidden to the output layer.
• Different learning algorithms may be used for learning the RBF network parameters.
RBF - Learning Algorithm 1
• Centers are selected at random.
• Spreads are chosen by normalization (formula below).
• The activation function of hidden neuron i then follows from this spread (formula below).
• Weights are computed by means of the pseudo-inverse method.
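The normalization rule and the resulting activation are, in their standard forms (an assumption here, with m1 randomly chosen centers and dmax the maximum distance between any two of them):
σ = dmax / √(2 m1)
φi(x) = exp(−(m1 / dmax²) ||x − ti||²)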
Learning Algorithm 1 - Summary
1. Choose the centers randomly from the training set.
2. Compute the spread for the RBF function using the normalization method.
3. Find the weights using the pseudo-inverse method.
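A compact NumPy sketch of these three steps, for a single linear output, is shown below. The function names, the Gaussian hidden layer, and the default number of centers are illustrative assumptions, not part of the original slides.

```python
import numpy as np

def rbf_train_pinv(X, d, m1=10, seed=0):
    """RBF training per Learning Algorithm 1: random centers,
    normalized spread, pseudo-inverse output weights.

    X : (N, n) training inputs,  d : (N,) training targets
    """
    rng = np.random.default_rng(seed)
    # 1. choose m1 centers randomly from the training set
    centers = X[rng.choice(len(X), size=m1, replace=False)]
    # 2. spread by normalization: sigma = d_max / sqrt(2 * m1)
    d_max = np.max(np.linalg.norm(centers[:, None] - centers[None, :], axis=-1))
    sigma = d_max / np.sqrt(2 * m1)
    # hidden-layer design matrix of Gaussian activations
    Phi = np.exp(-np.linalg.norm(X[:, None] - centers[None, :], axis=-1) ** 2
                 / (2 * sigma ** 2))
    # 3. output weights by the pseudo-inverse method
    w = np.linalg.pinv(Phi) @ d
    return centers, sigma, w

def rbf_predict(X, centers, sigma, w):
    """Forward pass: Gaussian hidden layer followed by a linear output."""
    Phi = np.exp(-np.linalg.norm(X[:, None] - centers[None, :], axis=-1) ** 2
                 / (2 * sigma ** 2))
    return Phi @ w
```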
RBF - Learning Algorithm 2
Clustering algorithm for finding the centers:
• Initialization: choose tk(0) at random, for k = 1, …, m1.
• Sampling: draw an input x from the input space.
• Similarity matching: find the index k(x) of the center closest to x.
• Updating: adjust the matched center toward x.
• Continuation: increment n by 1, go back to the sampling step, and continue until no noticeable changes in the centers occur.
Learning Algorithm 2 - Summary
Hybrid Learning Process:
• Clustering for finding the centers.
• Spreads chosen by normalization.
• LMS algorithm for finding the weights.
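The hybrid process can be sketched as follows: a simple k-means-style clustering pass places the centers, the spread comes from the same normalization rule as in algorithm 1, and the LMS (delta) rule adapts the output weights. All names and default values are illustrative assumptions.

```python
import numpy as np

def rbf_train_hybrid(X, d, m1=10, eta=0.05, epochs=50, seed=0):
    """Hybrid RBF training: clustering for centers, normalization for
    spreads, LMS (delta rule) for the output weights."""
    rng = np.random.default_rng(seed)
    # clustering step (simple k-means) to place the centers
    centers = X[rng.choice(len(X), size=m1, replace=False)].astype(float)
    for _ in range(20):
        labels = np.argmin(np.linalg.norm(X[:, None] - centers[None, :],
                                          axis=-1), axis=1)
        for k in range(m1):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    # spread by normalization
    d_max = np.max(np.linalg.norm(centers[:, None] - centers[None, :], axis=-1))
    sigma = d_max / np.sqrt(2 * m1)
    Phi = np.exp(-np.linalg.norm(X[:, None] - centers[None, :], axis=-1) ** 2
                 / (2 * sigma ** 2))
    # LMS: incremental gradient steps on the squared output error
    w = np.zeros(m1)
    for _ in range(epochs):
        for phi, target in zip(Phi, d):
            w += eta * (target - phi @ w) * phi
    return centers, sigma, w
```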
RBF - Learning Algorithm 3
• Apply the gradient descent method for finding the centers, spreads and weights,
by minimizing the (instantaneous) squared error.
• Update rules are applied to the centers, the spreads and the weights; standard forms are given below.
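For a Gaussian RBF network with output F(x) = ∑j wj φj(x), φj(x) = exp(−||x − tj||² / (2σj²)), and instantaneous error e = d − F(x), the standard gradient-descent updates (assumed here, with separate learning rates ηw, ηt and ησ) are:
weights: wj(new) = wj(old) + ηw e φj(x)
centers: tj(new) = tj(old) + ηt e wj φj(x) (x − tj) / σj²
spread: σj(new) = σj(old) + ησ e wj φj(x) ||x − tj||² / σj³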
Quiz - Questions
1. ---------- units use radial basis functions.
2. A hidden neuron is more sensitive to data points near its ---------.
3. What are the types of radial basis functions?
4. By what means are the weights found in RBF learning algorithm 2?
5. How are the centers chosen in RBF learning algorithm 1?
Quiz - Answers
1. -------- units use radial basis functions.
Hidden
2. A hidden neuron is more sensitive to data points near its ---------.
center
3. What are the types of radial basis functions?
i. Multiquadrics ii. Inverse multiquadrics iii. Gaussian functions
4. By what means are the weights found in RBF learning algorithm 2?
The LMS algorithm
5. How are the centers chosen in RBF learning algorithm 1?
The centers are chosen randomly from the training set.
Action Plan
• Unsupervised Learning Networks (Cont…)
- Introduction to Counter Propagation (CP) network
- CP architecture
- CP outstar and instar
- CP Operation
• Quiz at the end of session
Counter Propagation Network
• The CP network consists of an input, a hidden and
an output layer.
• In this case the hidden layer is called the
Kohonen layer and the output layer is called the
Grossberg layer.
• For each input, a winning neuron is found in the Kohonen layer;
its activation is set to 1 and the activation of all other neurons in this
layer is set to 0.
Counter Propagation Network (Cont…)
Purpose:
• Fast and coarse approximation of vector mapping.
• Input vectors x are divided into clusters/classes.
• Each cluster of x has an output y, which is (hopefully) the average of the
desired outputs for all x in that class.
Architecture: Simple Forward CPN
Network Architecture
Counter Propagation Network (Cont…)
1. Invented by Robert Hecht-Nielsen, founder of HNC Inc.
2. Consists of two opposing networks, one for learning a function, the other
for learning its inverse.
3. Each network has two layers:
- A Kohonen first layer that clusters inputs.
- An ‘outstar’ second layer to provide the output values for each cluster.
CP - Outstar and Instar
• An instar responds to a single input.
• An outstar produces a single (multi-dimensional) output d when stimulated with a binary value x.
• Biologically, outstar weights would be synaptic, while instar weights would be dendritic.
• It is common to refer to weights as ‘synaptic’.
CP - Outstar and Instar (Cont…)
• Variations are possible by adding weights.
CP Operation
• An outstar neuron is associated with each cluster representative.
• Given an input, the winner is found.
• An outstar is then stimulated to give the output.
• Since these networks operate by recognizing input patterns in the first
layer, one would generally use lots of neurons in this layer.
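The forward-only operation described above can be sketched as follows: the Kohonen layer stores one prototype per cluster and picks a winner, and the Grossberg (outstar) layer stores the output emitted for that winner. Function names, learning rates, and the fixed number of epochs are assumptions for illustration.

```python
import numpy as np

def cpn_train(X, Y, m=10, alpha=0.3, beta=0.1, epochs=30, seed=0):
    """Forward-only counter-propagation sketch.

    Kohonen layer  : weight matrix K (m, n) clusters the inputs.
    Grossberg layer: weight matrix G (m, p) stores the output for each cluster.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape[1], Y.shape[1]
    K = rng.random((m, n))
    G = np.zeros((m, p))
    for _ in range(epochs):
        for x, y in zip(X, Y):
            # Kohonen layer: the winner is the closest prototype;
            # its activation is 1, all others are 0
            j = int(np.argmin(np.linalg.norm(K - x, axis=1)))
            K[j] += alpha * (x - K[j])          # instar update toward the input
            G[j] += beta * (y - G[j])           # outstar update toward the target
    return K, G

def cpn_recall(x, K, G):
    """Recall: find the winner for an input and emit its stored outstar output."""
    j = int(np.argmin(np.linalg.norm(K - x, axis=1)))
    return G[j]
```

The recall function mirrors the operation described above: the winner is found in the Kohonen layer and its associated outstar output is returned.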
Quiz - Questions
1. In CP network the hidden layer is called the --------- layer & the output
layer is called the --------- layer.
2. The activation of this winner neuron is set to 1 & the activation of all other
neurons in this layer is set to 0.
a) true b) false
3. ---------- vectors x are divided into clusters/classes.
a) input b) output c) outstar d) instar
4. The CP network consists of two opposing networks, one for learning a ----------,
the other for learning its ----------.
5. Biologically, outstar would be -------- weights, while instar would have
---------- ones.
Quiz - Answers
1. Kohonen & Grossberg
2. a) true
3. a) input
4. function & inverse
5. synaptic & dendritic