Neural Networks and
Fuzzy Systems
Single Layer Perceptron Classifier
Dr. Tamer Ahmed Farrag
Course No.: 803522-3
Course Outline
Part I : Neural Networks (11 weeks)
• Introduction to Machine Learning
• Fundamental Concepts of Artificial Neural Networks
(ANN)
• Single Layer Perceptron Classifier
• Multi-layer Feed forward Networks
• Single layer FeedBack Networks
• Unsupervised learning
Part II : Fuzzy Systems (4 weeks)
• Fuzzy set theory
• Fuzzy Systems
2
Building Neural Networks Strategy
• Formulating neural network solutions for particular
problems is a multi-stage process:
1. Understand and specify the problem in terms of inputs and
required outputs
2. Take the simplest form of network you think might be able to
solve your problem
3. Try to find the appropriate connection weights (including neuron
thresholds) so that the network produces the right outputs for
each input in its training data
4. Make sure that the network works on its training data and test its
generalization by checking its performance on new testing data
5. If the network doesn’t perform well enough, go back to stage 3
and try harder.
6. If the network still doesn’t perform well enough, go back to stage
2 and try harder.
7. If the network still doesn’t perform well enough, go back to stage
1 and try harder.
8. Problem solved – or not.
3
Decision Hyperplanes and Linear Separability
• If we have two inputs, the decision boundary is a
one-dimensional straight line in the two-dimensional
input space of possible input values.
• In general, a set of points in n-dimensional space is
linearly separable if there is a hyperplane of
(n − 1) dimensions that separates the classes.
• This hyperplane is clearly still linear (i.e., straight or flat
or non-curved) and can still only divide the space into two
regions.
• Problems with input patterns that can be classified using a
single hyperplane are said to be linearly separable.
Problems (such as XOR) which cannot be classified in this
way are said to be non-linearly separable.
4
Linearly Separable vs Non-linearly Separable
Binary Classification Problems
[Figure: example datasets of Class 1 vs Class 2 points; one is linearly
separable, the other four are not linearly separable]
Decision Boundaries for Some Logic Gates
6
AND Gate (linearly separable):
x1 x2 | out
0  0  | 0
0  1  | 0
1  0  | 0
1  1  | 1

OR Gate (linearly separable):
x1 x2 | out
0  0  | 0
0  1  | 1
1  0  | 1
1  1  | 1

XOR Gate (not linearly separable):
x1 x2 | out
0  0  | 0
0  1  | 1
1  0  | 1
1  1  | 0

[Figure: decision boundaries for AND, OR, and XOR in the x1-x2 plane;
a single straight line separates the outputs of AND and of OR, but no
single straight line separates the outputs of XOR]
Implementation of Logical NOT, AND, and OR using
McCulloch-Pitts neuron
7
NOT:
x1 | out
0  | 1
1  | 0

AND:
x1 x2 | out
0  0  | 0
0  1  | 0
1  0  | 0
1  1  | 1

OR:
x1 x2 | out
0  0  | 0
0  1  | 1
1  0  | 1
1  1  | 1
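These gates can be written directly as threshold units. A minimal sketch in Python; the weight and threshold values below are one workable choice, not the only one (the next slides derive the constraints they must satisfy):

```python
# A McCulloch-Pitts-style neuron: inputs, fixed weights, and a hard
# threshold T. The unit fires (outputs 1) iff the weighted sum of its
# inputs reaches T.

def mp_neuron(inputs, weights, T):
    z = sum(w * x for w, x in zip(weights, inputs))
    return 1 if z >= T else 0

def NOT(x):    return mp_neuron([x], [-1], 0)        # inhibitory weight
def AND(a, b): return mp_neuron([a, b], [1, 1], 2)   # both inputs needed
def OR(a, b):  return mp_neuron([a, b], [1, 1], 1)   # either input suffices

# Reproduce the truth tables from the slide.
assert [NOT(0), NOT(1)] == [1, 0]
assert [AND(a, b) for a in (0, 1) for b in (0, 1)] == [0, 0, 0, 1]
assert [OR(a, b) for a in (0, 1) for b in (0, 1)] == [0, 1, 1, 1]
```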
How to Find Weights and Thresholds?
• Constructing simple networks by hand (e.g., by
trial and error) is one thing. But what about harder
problems?
• How long should we keep looking for a solution?
We need to be able to calculate appropriate
parameter values rather than searching for
solutions by trial and error.
• Each training pattern produces a linear inequality
for the output in terms of the inputs and the
network parameters. These can be used to
compute the weights and thresholds.
8
Finding Weights Analytically for the AND Network?
9
[Figure: two-input unit with weights w1 = ?, w2 = ?, threshold T = ?,
and output out]

Activation function:
F(z) = 0 for z < T
F(z) = 1 for z ≥ T
where z = x1*w1 + x2*w2

Truth table (AND):
x1 x2 | out
0  0  | 0
0  1  | 0
1  0  | 0
1  1  | 1

One inequality per training pattern:
0*w1 + 0*w2 < T
0*w1 + 1*w2 < T
1*w1 + 0*w2 < T
1*w1 + 1*w2 ≥ T

These simplify to:
T > 0
w2 < T
w1 < T
w1 + w2 ≥ T

Easy to solve
(infinite number of solutions)
For example assume:
T = 1.5, w1 = 1, w2 = 1
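The example solution can be checked mechanically. A small sketch that plugs T = 1.5, w1 = w2 = 1 into the threshold activation and compares against the AND truth table:

```python
# Verify the example solution T = 1.5, w1 = w2 = 1 against the AND
# truth table, using the threshold activation F(z) from the slide.

T, w1, w2 = 1.5, 1.0, 1.0

def F(z):
    return 1 if z >= T else 0

and_table = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
for (x1, x2), out in and_table.items():
    assert F(x1 * w1 + x2 * w2) == out
print("T=1.5, w1=1, w2=1 realizes AND")
```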
Finding Weights Analytically for the XOR Network?
10
[Figure: two-input unit with weights w1 = ?, w2 = ?, threshold T = ?,
and output out]

Activation function:
F(z) = 0 for z < T
F(z) = 1 for z ≥ T
where z = x1*w1 + x2*w2

Truth table (XOR):
x1 x2 | out
0  0  | 0
0  1  | 1
1  0  | 1
1  1  | 0

One inequality per training pattern:
0*w1 + 0*w2 < T
0*w1 + 1*w2 ≥ T
1*w1 + 0*w2 ≥ T
1*w1 + 1*w2 < T

These simplify to:
T > 0 (1)
w2 ≥ T (2)
w1 ≥ T (3)
w1 + w2 < T (4)

No solution
Why?
How to solve this problem?
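That these four inequalities have no solution can be illustrated by brute force. A sketch that scans a grid of candidate (w1, w2, T) values; the grid bounds and step are arbitrary choices for the illustration (this is a demonstration, not a proof):

```python
# Scan a grid of (w1, w2, T) values and count how many compute XOR.
# None will: (2) and (3) force w1 + w2 >= 2T, which together with
# (1) contradicts (4).

xor_table = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def computes_xor(w1, w2, T):
    return all((1 if x1 * w1 + x2 * w2 >= T else 0) == out
               for (x1, x2), out in xor_table.items())

grid = [i / 2 for i in range(-8, 9)]          # -4.0 .. 4.0 in steps of 0.5
solutions = [(w1, w2, T) for w1 in grid for w2 in grid for T in grid
             if computes_xor(w1, w2, T)]
print(len(solutions))  # 0 -- no grid point satisfies all four patterns
```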
Useful Notation
• We often need to deal with ordered sets of numbers, which we write as
vectors, e.g.
x = (x1, x2, x3, …, xn) , y = (y1, y2, y3, …, ym)
• The components xi can be added up to give a scalar (number), e.g.
s = x1 + x2 + x3 + … + xn = Σ(i=1..n) xi
• Two vectors of the same length may be added to give another vector,
e.g.
z = x + y = (x1 + y1, x2 + y2, …, xn + yn)
• Two vectors of the same length may be multiplied to give a scalar, e.g.
p = x.y = x1y1 + x2y2 + … + xnyn = Σ(i=1..n) xi yi
11
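In Python these operations can be sketched with plain lists as vectors:

```python
# The notation above, in code: componentwise sum, vector addition,
# and the dot product.

x = [1.0, 2.0, 3.0]
y = [4.0, 5.0, 6.0]

s = sum(x)                                  # s = x1 + x2 + ... + xn
z = [xi + yi for xi, yi in zip(x, y)]       # z = x + y, componentwise
p = sum(xi * yi for xi, yi in zip(x, y))    # p = x . y, the dot product

assert s == 6.0
assert z == [5.0, 7.0, 9.0]
assert p == 1 * 4 + 2 * 5 + 3 * 6           # 32
```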
What is the problem of implementing XOR?
• As the previous slide shows, the first, second, and
third inequalities are incompatible with the fourth,
so there is in fact no solution.
• We need more complex networks, e.g. networks that
combine many simple networks, or that use different
activation/thresholding/transfer functions.
• It then becomes much more difficult to determine
all the weights and thresholds by hand. Next, we
will see how a neural network can learn these
parameters.
12
Perceptron
• In 1958, Frank Rosenblatt introduced a training algorithm that
provided the first procedure for training a simple ANN: a perceptron.
• Any number of McCulloch-Pitts neurons can be connected together in
any way we like.
• The arrangement that has one layer of input neurons feeding forward
to one output layer of McCulloch-Pitts neurons, with full connectivity, is
known as a Perceptron:
13
[Figure: perceptron with inputs X1, X2, X3, weights w1, w2, w3, bias b,
and a hard-limit activation that outputs 1 or 0]

Output = hardlim( w1x1 + w2x2 + … + wnxn + b ) = hardlim( Σ(i=1..n) wi xi + b )
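The output rule can be sketched as a function. Here hardlim is the hard-limit step (1 when its argument is ≥ 0, else 0); the weights and bias at the end are an illustrative choice that happens to realize AND, not values taken from the slide:

```python
# The perceptron output rule: Output = hardlim(sum_i w_i * x_i + b).

def hardlim(z):
    return 1 if z >= 0 else 0

def perceptron_output(x, w, b):
    return hardlim(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Example weights and bias (a hypothetical choice) realizing AND:
w, b = [1.0, 1.0], -1.5
assert [perceptron_output([x1, x2], w, b)
        for x1 in (0, 1) for x2 in (0, 1)] == [0, 0, 0, 1]
```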
Perceptron
• In 1958, Frank Rosenblatt introduced a training algorithm
that provided the first procedure for training a simple ANN:
a perceptron.
• Any number of McCulloch-Pitts neurons can be connected
together in any way we like.
• The arrangement that has one layer of input neurons
feeding forward to one output layer of McCulloch-Pitts
neurons, with full connectivity, is known as a Perceptron.
• It uses a hard limiter as the activation function.
• There is a learning rule (algorithm) to adjust the weights
for better results.
14
Single layer feedforward (Perceptron)
• The following figure shows a perceptron network
with n inputs (organized in one input layer) and m
outputs (organized in one output layer), with no
hidden layers:
15
[Figure: single-layer feedforward network; input nodes 1…n connect to
output neurons 1…m through weights Wij]
Perceptron
• Simple network:
The following figure shows a perceptron network with
2 inputs (organized in one input layer) and 1 output
(organized in one output layer):
• Important remark: for binary classification problems
we always have a single neuron in the output layer
16
[Figure: two-input perceptron with inputs X1, X2, weights w1, w2,
bias b (input fixed at 1), and a hard-limit output of 1 or 0]
Fixed Increment Perceptron Learning
Algorithm
• The problem: use the perceptron to perform a binary
classification task.
• The goal: adjust the weights and bias to values that
decrease the percentage of classification error.
• How: by training the perceptron on pre-classified examples
(supervised learning).
• The desired output (D) is the correct output that should be
generated.
• The network calculates an output (Y), which may be wrong and
need to be recalculated after adjusting the network parameters.
• Normally, we start from random initial weights and adjust them in
small steps (using the learning algorithm) until the required
outputs are produced.
17
Fixed Increment Perceptron Learning
Algorithm (cont.)
18
Calculating the new weights:
• Calculate the error (ej):
ej = Dj − Yj
• The network changes its weights in proportion to the error:
Δwij = α * ej * xi
• where α is the learning rate or step size:
1. used to control the amount of weight adjustment at each step of training;
2. ranges from 0 to 1;
3. determines the rate of learning at each time step.
• The new (adjusted) weight:
wij(new) = wij(old) + Δwij
or, wij(new) = wij(old) + α ej xi
• This rule can be extended to train the bias by noting that a bias is
simply a weight whose input is always 1:
bj(new) = bj(old) + α ej
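The update rule above, as a single step (a sketch; the names mirror the slide's symbols: D desired output, Y actual output, alpha learning rate):

```python
# One fixed-increment update: w_i += alpha * e * x_i, b += alpha * e,
# with e = D - Y. Returns the adjusted weights and bias.

def update(w, b, x, D, Y, alpha):
    e = D - Y
    w_new = [wi + alpha * e * xi for wi, xi in zip(w, x)]
    b_new = b + alpha * e
    return w_new, b_new

# With alpha = 1, x = (1, 1), D = 1, Y = 0, the error e = 1 pushes
# every weight (and the bias) up by 1:
w_new, b_new = update([0.3, -1.0], 1.0, [1, 1], D=1, Y=0, alpha=1.0)
assert w_new == [1.3, 0.0] and b_new == 2.0
```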
Fixed Increment Perceptron Learning Algorithm flow chart
19
1. Start with random initial weights.
2. Calculate the output (Yj) for each training example.
3. Calculate the error: ej = Dj − Yj
4. If the error is 0 for all examples, end; otherwise update the
weights and biases and go back to step 2.
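The flow chart translates into a short training loop. A sketch of the fixed-increment rule; the AND data and starting values (α = 1, b = 1, w1 = 0.3, w2 = −1) match the worked example on a later slide:

```python
# Repeat whole epochs over the training data until one epoch passes
# with zero error, as in the flow chart.

def hardlim(z):
    return 1 if z >= 0 else 0

def train(data, w, b, alpha=1.0, max_epochs=100):
    for epoch in range(max_epochs):
        errors = 0
        for x, D in data:
            Y = hardlim(sum(wi * xi for wi, xi in zip(w, x)) + b)
            e = D - Y
            if e != 0:
                w = [wi + alpha * e * xi for wi, xi in zip(w, x)]
                b = b + alpha * e
                errors += 1
        if errors == 0:              # the flow chart's "Error = 0" exit
            return w, b, epoch
    return w, b, max_epochs

and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, epochs = train(and_data, w=[0.3, -1.0], b=1.0)
# The learned (w, b) now classify all four AND patterns correctly.
```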
Perceptron
• The perceptron is a linear classifier. (Why?)
• The Perceptron algorithm rule is guaranteed to
converge to a weight vector that correctly
classifies the examples provided the training
examples are linearly separable.
• To get the correct weights, many training epochs
are used with a suitable learning rate α.
• So, it can't be used to solve non-linearly separable
problems (such as the XOR problem).
20
Example: AND problem
• The AND problem is a linearly separable problem with
2 inputs (x1, x2) and one output (out).
21
x1 x2 | out
0  0  | 0
0  1  | 0
1  0  | 0
1  1  | 1

[Figure: perceptron with inputs x1, x2, weights w1, w2, a bias (input
fixed at 1), and output out]
Example of perceptron learning: the logical operation AND
22
Assume α = 1 and initial values: b = 1, w1 = 0.3, w2 = -1

epoch | x1 x2 | D | initial (w1, w2, b) | Y | e  | final (w1, w2, b)
------|-------|---|---------------------|---|----|------------------
  1   | 0  0  | 0 | (0.3, -1, 1)        | 1 | -1 | (0.3, -1, 0)
  1   | 0  1  | 0 | (0.3, -1, 0)        | 0 |  0 | (0.3, -1, 0)
  1   | 1  0  | 0 | (0.3, -1, 0)        | 1 | -1 | (-0.7, -1, -1)
  1   | 1  1  | 1 | (-0.7, -1, -1)      | 0 |  1 | (0.3, 0, 0)
  2   | 0  0  | 0 | (0.3, 0, 0)         | 1 | -1 | (0.3, 0, -1)
  2   | 0  1  | 0 | (0.3, 0, -1)        | 0 |  0 | (0.3, 0, -1)
  2   | 1  0  | 0 | (0.3, 0, -1)        | 0 |  0 | (0.3, 0, -1)
  2   | 1  1  | 1 | (0.3, 0, -1)        | 0 |  1 | (1.3, 1, 0)
  3   | 0  0  | 0 | (1.3, 1, 0)         | 1 | -1 | (1.3, 1, -1)
  3   | 0  1  | 0 | (1.3, 1, -1)        | 1 | -1 | (1.3, 0, -2)
  3   | 1  0  | 0 | (1.3, 0, -2)        | 0 |  0 | (1.3, 0, -2)
  3   | 1  1  | 1 | (1.3, 0, -2)        | 0 |  1 | (2.3, 1, -1)
  …   |       |   | (epochs 4-6 omitted)|   |    |
  7   | 0  0  | 0 | (2.3, 1, -3)        | 0 |  0 | (2.3, 1, -3)
  7   | 0  1  | 0 | (2.3, 1, -3)        | 0 |  0 | (2.3, 1, -3)
  7   | 1  0  | 0 | (2.3, 1, -3)        | 0 |  0 | (2.3, 1, -3)
  7   | 1  1  | 1 | (2.3, 1, -3)        | 1 |  0 | (2.3, 1, -3)
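Reading off the last epoch, training has settled on w1 = 2.3, w2 = 1, b = -3. A quick sketch confirming that these values realize AND:

```python
# Check the final parameters from the table against the AND table.

w1, w2, b = 2.3, 1.0, -3.0

def predict(x1, x2):
    return 1 if x1 * w1 + x2 * w2 + b >= 0 else 0

assert [predict(0, 0), predict(0, 1), predict(1, 0), predict(1, 1)] \
       == [0, 0, 0, 1]
```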
More Examples
• Try other initial values and learning rates.
• Try other linearly separable functions such as OR,
NAND, NOR.
• What do you notice?
23
Example: Trucks Classification problems
24
• Consider the simple example of classifying trucks given their masses and
lengths
• How do we construct a neural network that can classify any lorry or
van?
Mass Length Class
10 6 Lorry
20 5 Lorry
5 4 Van
2 5 Van
2 5 Van
3 6 Lorry
10 7 Lorry
15 8 Lorry
5 9 Lorry
[Figure: scatter plot of the trucks by mass (horizontal axis, 0-25) and
length (vertical axis, 0-10); a straight line separates the two classes]
Solution
• The trucks classification problem has the following
features:
• It is a binary classification problem (2 categories:
lorry or van)
• As shown, it is a linearly separable problem
• It has 2 inputs (mass, length)
• It has one output (class)
• So, it can be solved by a single layer perceptron as
shown:
Check: TrunkExample.ipynb
25
[Figure: perceptron with inputs mass and length, weights w1, w2, a
bias (input fixed at 1), and output class]
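A sketch of what such a notebook might contain: train a perceptron on the (mass, length) data from the slide. The learning rate, zero initialization, and epoch cap are illustrative assumptions, not values taken from TrunkExample.ipynb:

```python
# Train a perceptron on the trucks data (1 = lorry, 0 = van) with the
# fixed-increment rule, stopping after the first error-free epoch.

trucks = [((10, 6), 1), ((20, 5), 1), ((5, 4), 0), ((2, 5), 0),
          ((2, 5), 0), ((3, 6), 1), ((10, 7), 1), ((15, 8), 1),
          ((5, 9), 1)]

def hardlim(z):
    return 1 if z >= 0 else 0

w, b, alpha = [0.0, 0.0], 0.0, 0.1
for epoch in range(10_000):     # generous cap; separable data converges
    errors = 0
    for x, D in trucks:
        e = D - hardlim(sum(wi * xi for wi, xi in zip(w, x)) + b)
        if e:
            w = [wi + alpha * e * xi for wi, xi in zip(w, x)]
            b += alpha * e
            errors += 1
    if errors == 0:
        break

# Every truck in the training set is now classified correctly.
assert all(hardlim(sum(wi * xi for wi, xi in zip(w, x)) + b) == D
           for x, D in trucks)
```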
Overcoming the Perceptron Limitations
• To overcome the limitations of single layer
networks, multi-layer feed-forward networks can
be used, which not only have input and output
units, but also have hidden units that are neither
input nor output units.
• Using non-linear activation functions (e.g.
sigmoid).
• Using advanced learning algorithms (e.g. gradient
descent, backpropagation).
26
Ad

More Related Content

What's hot (20)

Activation function
Activation functionActivation function
Activation function
Astha Jain
 
DBSCAN : A Clustering Algorithm
DBSCAN : A Clustering AlgorithmDBSCAN : A Clustering Algorithm
DBSCAN : A Clustering Algorithm
Pınar Yahşi
 
Simple Introduction to AutoEncoder
Simple Introduction to AutoEncoderSimple Introduction to AutoEncoder
Simple Introduction to AutoEncoder
Jun Lang
 
Gradient descent method
Gradient descent methodGradient descent method
Gradient descent method
Sanghyuk Chun
 
Adaptive Resonance Theory
Adaptive Resonance TheoryAdaptive Resonance Theory
Adaptive Resonance Theory
Naveen Kumar
 
Introduction to Neural Networks
Introduction to Neural NetworksIntroduction to Neural Networks
Introduction to Neural Networks
Databricks
 
Mc culloch pitts neuron
Mc culloch pitts neuronMc culloch pitts neuron
Mc culloch pitts neuron
Siksha 'O' Anusandhan (Deemed to be University )
 
Time complexity.ppt
Time complexity.pptTime complexity.ppt
Time complexity.ppt
YekoyeTigabuYeko
 
ProLog (Artificial Intelligence) Introduction
ProLog (Artificial Intelligence) IntroductionProLog (Artificial Intelligence) Introduction
ProLog (Artificial Intelligence) Introduction
wahab khan
 
Convolutional Neural Networks (CNN)
Convolutional Neural Networks (CNN)Convolutional Neural Networks (CNN)
Convolutional Neural Networks (CNN)
Gaurav Mittal
 
Huffman coding
Huffman coding Huffman coding
Huffman coding
Nazmul Hyder
 
Artificial Neural Network
Artificial Neural NetworkArtificial Neural Network
Artificial Neural Network
Atul Krishna
 
Recurrent neural networks
Recurrent neural networksRecurrent neural networks
Recurrent neural networks
Viacheslav Khomenko
 
Artificial Neural Network Lecture 6- Associative Memories & Discrete Hopfield...
Artificial Neural Network Lecture 6- Associative Memories & Discrete Hopfield...Artificial Neural Network Lecture 6- Associative Memories & Discrete Hopfield...
Artificial Neural Network Lecture 6- Associative Memories & Discrete Hopfield...
Mohammed Bennamoun
 
Backpropagation: Understanding How to Update ANNs Weights Step-by-Step
Backpropagation: Understanding How to Update ANNs Weights Step-by-StepBackpropagation: Understanding How to Update ANNs Weights Step-by-Step
Backpropagation: Understanding How to Update ANNs Weights Step-by-Step
Ahmed Gad
 
Activation function
Activation functionActivation function
Activation function
RakshithGowdakodihal
 
Optimization/Gradient Descent
Optimization/Gradient DescentOptimization/Gradient Descent
Optimization/Gradient Descent
kandelin
 
Bayesian networks in AI
Bayesian networks in AIBayesian networks in AI
Bayesian networks in AI
Byoung-Hee Kim
 
Naive Bayes
Naive BayesNaive Bayes
Naive Bayes
CloudxLab
 
Regularization in deep learning
Regularization in deep learningRegularization in deep learning
Regularization in deep learning
Kien Le
 
Activation function
Activation functionActivation function
Activation function
Astha Jain
 
DBSCAN : A Clustering Algorithm
DBSCAN : A Clustering AlgorithmDBSCAN : A Clustering Algorithm
DBSCAN : A Clustering Algorithm
Pınar Yahşi
 
Simple Introduction to AutoEncoder
Simple Introduction to AutoEncoderSimple Introduction to AutoEncoder
Simple Introduction to AutoEncoder
Jun Lang
 
Gradient descent method
Gradient descent methodGradient descent method
Gradient descent method
Sanghyuk Chun
 
Adaptive Resonance Theory
Adaptive Resonance TheoryAdaptive Resonance Theory
Adaptive Resonance Theory
Naveen Kumar
 
Introduction to Neural Networks
Introduction to Neural NetworksIntroduction to Neural Networks
Introduction to Neural Networks
Databricks
 
ProLog (Artificial Intelligence) Introduction
ProLog (Artificial Intelligence) IntroductionProLog (Artificial Intelligence) Introduction
ProLog (Artificial Intelligence) Introduction
wahab khan
 
Convolutional Neural Networks (CNN)
Convolutional Neural Networks (CNN)Convolutional Neural Networks (CNN)
Convolutional Neural Networks (CNN)
Gaurav Mittal
 
Artificial Neural Network
Artificial Neural NetworkArtificial Neural Network
Artificial Neural Network
Atul Krishna
 
Artificial Neural Network Lecture 6- Associative Memories & Discrete Hopfield...
Artificial Neural Network Lecture 6- Associative Memories & Discrete Hopfield...Artificial Neural Network Lecture 6- Associative Memories & Discrete Hopfield...
Artificial Neural Network Lecture 6- Associative Memories & Discrete Hopfield...
Mohammed Bennamoun
 
Backpropagation: Understanding How to Update ANNs Weights Step-by-Step
Backpropagation: Understanding How to Update ANNs Weights Step-by-StepBackpropagation: Understanding How to Update ANNs Weights Step-by-Step
Backpropagation: Understanding How to Update ANNs Weights Step-by-Step
Ahmed Gad
 
Optimization/Gradient Descent
Optimization/Gradient DescentOptimization/Gradient Descent
Optimization/Gradient Descent
kandelin
 
Bayesian networks in AI
Bayesian networks in AIBayesian networks in AI
Bayesian networks in AI
Byoung-Hee Kim
 
Regularization in deep learning
Regularization in deep learningRegularization in deep learning
Regularization in deep learning
Kien Le
 

Similar to 03 Single layer Perception Classifier (20)

Neural Networks
Neural NetworksNeural Networks
Neural Networks
Makerere Unversity School of Public Health, Victoria University
 
ann-ics320Part4.ppt
ann-ics320Part4.pptann-ics320Part4.ppt
ann-ics320Part4.ppt
GayathriRHICETCSESTA
 
ann-ics320Part4.ppt
ann-ics320Part4.pptann-ics320Part4.ppt
ann-ics320Part4.ppt
GayathriRHICETCSESTA
 
10 Backpropagation Algorithm for Neural Networks (1).pptx
10 Backpropagation Algorithm for Neural Networks (1).pptx10 Backpropagation Algorithm for Neural Networks (1).pptx
10 Backpropagation Algorithm for Neural Networks (1).pptx
SaifKhan703888
 
Introduction to Neural networks (under graduate course) Lecture 4 of 9
Introduction to Neural networks (under graduate course) Lecture 4 of 9Introduction to Neural networks (under graduate course) Lecture 4 of 9
Introduction to Neural networks (under graduate course) Lecture 4 of 9
Randa Elanwar
 
Artificial Neural Networks Lect2: Neurobiology & Architectures of ANNS
Artificial Neural Networks Lect2: Neurobiology & Architectures of ANNSArtificial Neural Networks Lect2: Neurobiology & Architectures of ANNS
Artificial Neural Networks Lect2: Neurobiology & Architectures of ANNS
Mohammed Bennamoun
 
CS767_Lecture_04.pptx
CS767_Lecture_04.pptxCS767_Lecture_04.pptx
CS767_Lecture_04.pptx
ShujatHussainGadi
 
2-Perceptrons.pdf
2-Perceptrons.pdf2-Perceptrons.pdf
2-Perceptrons.pdf
DrSmithaVasP
 
Artificial Neural Network
Artificial Neural NetworkArtificial Neural Network
Artificial Neural Network
ssuserab4f3e
 
Neural Networks
Neural NetworksNeural Networks
Neural Networks
Adri Jovin
 
ANN presentation explaination and architecture.pptx
ANN presentation explaination and architecture.pptxANN presentation explaination and architecture.pptx
ANN presentation explaination and architecture.pptx
Account1850
 
Module1 (2).pptxvgybhunjimko,l.vgbyhnjmk;
Module1 (2).pptxvgybhunjimko,l.vgbyhnjmk;Module1 (2).pptxvgybhunjimko,l.vgbyhnjmk;
Module1 (2).pptxvgybhunjimko,l.vgbyhnjmk;
vallepubalaji66
 
Introduction to Neural Networks and Deep Learning
Introduction to Neural Networks and Deep LearningIntroduction to Neural Networks and Deep Learning
Introduction to Neural Networks and Deep Learning
Vahid Mirjalili
 
Soft Computering Technics - Unit2
Soft Computering Technics - Unit2Soft Computering Technics - Unit2
Soft Computering Technics - Unit2
sravanthi computers
 
Unsupervised-learning.ppt
Unsupervised-learning.pptUnsupervised-learning.ppt
Unsupervised-learning.ppt
Grishma Sharma
 
Neural
NeuralNeural
Neural
Vaibhav Shah
 
ARTIFICIAL-NEURAL-NETWORKMACHINELEARNING
ARTIFICIAL-NEURAL-NETWORKMACHINELEARNINGARTIFICIAL-NEURAL-NETWORKMACHINELEARNING
ARTIFICIAL-NEURAL-NETWORKMACHINELEARNING
mohanapriyastp
 
ACUMENS ON NEURAL NET AKG 20 7 23.pptx
ACUMENS ON NEURAL NET AKG 20 7 23.pptxACUMENS ON NEURAL NET AKG 20 7 23.pptx
ACUMENS ON NEURAL NET AKG 20 7 23.pptx
gnans Kgnanshek
 
Perceptron
PerceptronPerceptron
Perceptron
Nagarajan
 
Nural Network ppt presentation which help about nural
Nural Network ppt presentation which help about nuralNural Network ppt presentation which help about nural
Nural Network ppt presentation which help about nural
sayaleedeshmukh5
 
10 Backpropagation Algorithm for Neural Networks (1).pptx
10 Backpropagation Algorithm for Neural Networks (1).pptx10 Backpropagation Algorithm for Neural Networks (1).pptx
10 Backpropagation Algorithm for Neural Networks (1).pptx
SaifKhan703888
 
Introduction to Neural networks (under graduate course) Lecture 4 of 9
Introduction to Neural networks (under graduate course) Lecture 4 of 9Introduction to Neural networks (under graduate course) Lecture 4 of 9
Introduction to Neural networks (under graduate course) Lecture 4 of 9
Randa Elanwar
 
Artificial Neural Networks Lect2: Neurobiology & Architectures of ANNS
Artificial Neural Networks Lect2: Neurobiology & Architectures of ANNSArtificial Neural Networks Lect2: Neurobiology & Architectures of ANNS
Artificial Neural Networks Lect2: Neurobiology & Architectures of ANNS
Mohammed Bennamoun
 
Artificial Neural Network
Artificial Neural NetworkArtificial Neural Network
Artificial Neural Network
ssuserab4f3e
 
Neural Networks
Neural NetworksNeural Networks
Neural Networks
Adri Jovin
 
ANN presentation explaination and architecture.pptx
ANN presentation explaination and architecture.pptxANN presentation explaination and architecture.pptx
ANN presentation explaination and architecture.pptx
Account1850
 
Module1 (2).pptxvgybhunjimko,l.vgbyhnjmk;
Module1 (2).pptxvgybhunjimko,l.vgbyhnjmk;Module1 (2).pptxvgybhunjimko,l.vgbyhnjmk;
Module1 (2).pptxvgybhunjimko,l.vgbyhnjmk;
vallepubalaji66
 
Introduction to Neural Networks and Deep Learning
Introduction to Neural Networks and Deep LearningIntroduction to Neural Networks and Deep Learning
Introduction to Neural Networks and Deep Learning
Vahid Mirjalili
 
Soft Computering Technics - Unit2
Soft Computering Technics - Unit2Soft Computering Technics - Unit2
Soft Computering Technics - Unit2
sravanthi computers
 
Unsupervised-learning.ppt
Unsupervised-learning.pptUnsupervised-learning.ppt
Unsupervised-learning.ppt
Grishma Sharma
 
ARTIFICIAL-NEURAL-NETWORKMACHINELEARNING
ARTIFICIAL-NEURAL-NETWORKMACHINELEARNINGARTIFICIAL-NEURAL-NETWORKMACHINELEARNING
ARTIFICIAL-NEURAL-NETWORKMACHINELEARNING
mohanapriyastp
 
ACUMENS ON NEURAL NET AKG 20 7 23.pptx
ACUMENS ON NEURAL NET AKG 20 7 23.pptxACUMENS ON NEURAL NET AKG 20 7 23.pptx
ACUMENS ON NEURAL NET AKG 20 7 23.pptx
gnans Kgnanshek
 
Nural Network ppt presentation which help about nural
Nural Network ppt presentation which help about nuralNural Network ppt presentation which help about nural
Nural Network ppt presentation which help about nural
sayaleedeshmukh5
 
Ad

Recently uploaded (20)

Classification of mental disorder in 5th semester bsc. nursing and also used ...
Classification of mental disorder in 5th semester bsc. nursing and also used ...Classification of mental disorder in 5th semester bsc. nursing and also used ...
Classification of mental disorder in 5th semester bsc. nursing and also used ...
parmarjuli1412
 
MCQS (EMERGENCY NURSING) DR. NASIR MUSTAFA
MCQS (EMERGENCY NURSING) DR. NASIR MUSTAFAMCQS (EMERGENCY NURSING) DR. NASIR MUSTAFA
MCQS (EMERGENCY NURSING) DR. NASIR MUSTAFA
Dr. Nasir Mustafa
 
YSPH VMOC Special Report - Measles Outbreak Southwest US 5-14-2025 .pptx
YSPH VMOC Special Report - Measles Outbreak  Southwest US 5-14-2025  .pptxYSPH VMOC Special Report - Measles Outbreak  Southwest US 5-14-2025  .pptx
YSPH VMOC Special Report - Measles Outbreak Southwest US 5-14-2025 .pptx
Yale School of Public Health - The Virtual Medical Operations Center (VMOC)
 
"Heraldry Detective Project"- Coats of Arms and Mottos of "Ivanhoe" in Ivanho...
"Heraldry Detective Project"- Coats of Arms and Mottos of "Ivanhoe" in Ivanho..."Heraldry Detective Project"- Coats of Arms and Mottos of "Ivanhoe" in Ivanho...
"Heraldry Detective Project"- Coats of Arms and Mottos of "Ivanhoe" in Ivanho...
ruslana1975
 
Pope Leo XIV, the first Pope from North America.pptx
Pope Leo XIV, the first Pope from North America.pptxPope Leo XIV, the first Pope from North America.pptx
Pope Leo XIV, the first Pope from North America.pptx
Martin M Flynn
 
CNS infections (encephalitis, meningitis & Brain abscess
CNS infections (encephalitis, meningitis & Brain abscessCNS infections (encephalitis, meningitis & Brain abscess
CNS infections (encephalitis, meningitis & Brain abscess
Mohamed Rizk Khodair
 
Botany Assignment Help Guide - Academic Excellence
Botany Assignment Help Guide - Academic ExcellenceBotany Assignment Help Guide - Academic Excellence
Botany Assignment Help Guide - Academic Excellence
online college homework help
 
How to Create Kanban View in Odoo 18 - Odoo Slides
How to Create Kanban View in Odoo 18 - Odoo SlidesHow to Create Kanban View in Odoo 18 - Odoo Slides
How to Create Kanban View in Odoo 18 - Odoo Slides
Celine George
 
Mental Health Assessment in 5th semester bsc. nursing and also used in 2nd ye...
Mental Health Assessment in 5th semester bsc. nursing and also used in 2nd ye...Mental Health Assessment in 5th semester bsc. nursing and also used in 2nd ye...
Mental Health Assessment in 5th semester bsc. nursing and also used in 2nd ye...
parmarjuli1412
 
Chemotherapy of Malignancy -Anticancer.pptx
Chemotherapy of Malignancy -Anticancer.pptxChemotherapy of Malignancy -Anticancer.pptx
Chemotherapy of Malignancy -Anticancer.pptx
Mayuri Chavan
 
PUBH1000 Slides - Module 11: Governance for Health
PUBH1000 Slides - Module 11: Governance for HealthPUBH1000 Slides - Module 11: Governance for Health
PUBH1000 Slides - Module 11: Governance for Health
JonathanHallett4
 
Peer Assessment_ Unit 2 Skills Development for Live Performance - for Libby.docx
Peer Assessment_ Unit 2 Skills Development for Live Performance - for Libby.docxPeer Assessment_ Unit 2 Skills Development for Live Performance - for Libby.docx
Peer Assessment_ Unit 2 Skills Development for Live Performance - for Libby.docx
19lburrell
 
How to Configure Public Holidays & Mandatory Days in Odoo 18
How to Configure Public Holidays & Mandatory Days in Odoo 18How to Configure Public Holidays & Mandatory Days in Odoo 18
How to Configure Public Holidays & Mandatory Days in Odoo 18
Celine George
 
E-Filing_of_Income_Tax.pptx and concept of form 26AS
E-Filing_of_Income_Tax.pptx and concept of form 26ASE-Filing_of_Income_Tax.pptx and concept of form 26AS
E-Filing_of_Income_Tax.pptx and concept of form 26AS
Abinash Palangdar
 
Unit 5 ACUTE, SUBACUTE,CHRONIC TOXICITY.pptx
Unit 5 ACUTE, SUBACUTE,CHRONIC TOXICITY.pptxUnit 5 ACUTE, SUBACUTE,CHRONIC TOXICITY.pptx
Unit 5 ACUTE, SUBACUTE,CHRONIC TOXICITY.pptx
Mayuri Chavan
 
TERMINOLOGIES,GRIEF PROCESS AND LOSS AMD ITS TYPES .pptx
TERMINOLOGIES,GRIEF PROCESS AND LOSS AMD ITS TYPES .pptxTERMINOLOGIES,GRIEF PROCESS AND LOSS AMD ITS TYPES .pptx
TERMINOLOGIES,GRIEF PROCESS AND LOSS AMD ITS TYPES .pptx
PoojaSen20
 
Final Evaluation.docx...........................
Final Evaluation.docx...........................Final Evaluation.docx...........................
Final Evaluation.docx...........................
l1bbyburrell
 
libbys peer assesment.docx..............
libbys peer assesment.docx..............libbys peer assesment.docx..............
libbys peer assesment.docx..............
19lburrell
 
DEATH & ITS TYPES AND PHYSIOLOGICAL CHANGES IN BODY AFTER DEATH, PATIENT WILL...
DEATH & ITS TYPES AND PHYSIOLOGICAL CHANGES IN BODY AFTER DEATH, PATIENT WILL...DEATH & ITS TYPES AND PHYSIOLOGICAL CHANGES IN BODY AFTER DEATH, PATIENT WILL...
DEATH & ITS TYPES AND PHYSIOLOGICAL CHANGES IN BODY AFTER DEATH, PATIENT WILL...
PoojaSen20
 
puzzle Irregular Verbs- Simple Past Tense
puzzle Irregular Verbs- Simple Past Tensepuzzle Irregular Verbs- Simple Past Tense
puzzle Irregular Verbs- Simple Past Tense
OlgaLeonorTorresSnch
 
Classification of mental disorder in 5th semester bsc. nursing and also used ...
Classification of mental disorder in 5th semester bsc. nursing and also used ...Classification of mental disorder in 5th semester bsc. nursing and also used ...
Classification of mental disorder in 5th semester bsc. nursing and also used ...
parmarjuli1412
 
MCQS (EMERGENCY NURSING) DR. NASIR MUSTAFA
MCQS (EMERGENCY NURSING) DR. NASIR MUSTAFAMCQS (EMERGENCY NURSING) DR. NASIR MUSTAFA
MCQS (EMERGENCY NURSING) DR. NASIR MUSTAFA
Dr. Nasir Mustafa
 
"Heraldry Detective Project"- Coats of Arms and Mottos of "Ivanhoe" in Ivanho...
"Heraldry Detective Project"- Coats of Arms and Mottos of "Ivanhoe" in Ivanho..."Heraldry Detective Project"- Coats of Arms and Mottos of "Ivanhoe" in Ivanho...
"Heraldry Detective Project"- Coats of Arms and Mottos of "Ivanhoe" in Ivanho...
ruslana1975
 
Pope Leo XIV, the first Pope from North America.pptx
Pope Leo XIV, the first Pope from North America.pptxPope Leo XIV, the first Pope from North America.pptx
Pope Leo XIV, the first Pope from North America.pptx
Martin M Flynn
 
CNS infections (encephalitis, meningitis & Brain abscess
CNS infections (encephalitis, meningitis & Brain abscessCNS infections (encephalitis, meningitis & Brain abscess
CNS infections (encephalitis, meningitis & Brain abscess
Mohamed Rizk Khodair
 
Botany Assignment Help Guide - Academic Excellence
Botany Assignment Help Guide - Academic ExcellenceBotany Assignment Help Guide - Academic Excellence
Botany Assignment Help Guide - Academic Excellence
online college homework help
 
How to Create Kanban View in Odoo 18 - Odoo Slides
How to Create Kanban View in Odoo 18 - Odoo SlidesHow to Create Kanban View in Odoo 18 - Odoo Slides
How to Create Kanban View in Odoo 18 - Odoo Slides
Celine George
 
Mental Health Assessment in 5th semester bsc. nursing and also used in 2nd ye...
Mental Health Assessment in 5th semester bsc. nursing and also used in 2nd ye...Mental Health Assessment in 5th semester bsc. nursing and also used in 2nd ye...
Mental Health Assessment in 5th semester bsc. nursing and also used in 2nd ye...
parmarjuli1412
 
Chemotherapy of Malignancy -Anticancer.pptx
Chemotherapy of Malignancy -Anticancer.pptxChemotherapy of Malignancy -Anticancer.pptx
Chemotherapy of Malignancy -Anticancer.pptx
Mayuri Chavan
 
PUBH1000 Slides - Module 11: Governance for Health
PUBH1000 Slides - Module 11: Governance for HealthPUBH1000 Slides - Module 11: Governance for Health
PUBH1000 Slides - Module 11: Governance for Health
JonathanHallett4
 
Peer Assessment_ Unit 2 Skills Development for Live Performance - for Libby.docx
Peer Assessment_ Unit 2 Skills Development for Live Performance - for Libby.docxPeer Assessment_ Unit 2 Skills Development for Live Performance - for Libby.docx
Peer Assessment_ Unit 2 Skills Development for Live Performance - for Libby.docx
19lburrell
 
How to Configure Public Holidays & Mandatory Days in Odoo 18
How to Configure Public Holidays & Mandatory Days in Odoo 18How to Configure Public Holidays & Mandatory Days in Odoo 18
How to Configure Public Holidays & Mandatory Days in Odoo 18
Celine George
 
E-Filing_of_Income_Tax.pptx and concept of form 26AS
E-Filing_of_Income_Tax.pptx and concept of form 26ASE-Filing_of_Income_Tax.pptx and concept of form 26AS
E-Filing_of_Income_Tax.pptx and concept of form 26AS
Abinash Palangdar
 
Unit 5 ACUTE, SUBACUTE,CHRONIC TOXICITY.pptx
Unit 5 ACUTE, SUBACUTE,CHRONIC TOXICITY.pptxUnit 5 ACUTE, SUBACUTE,CHRONIC TOXICITY.pptx
Unit 5 ACUTE, SUBACUTE,CHRONIC TOXICITY.pptx
Mayuri Chavan
 
TERMINOLOGIES,GRIEF PROCESS AND LOSS AMD ITS TYPES .pptx
TERMINOLOGIES,GRIEF PROCESS AND LOSS AMD ITS TYPES .pptxTERMINOLOGIES,GRIEF PROCESS AND LOSS AMD ITS TYPES .pptx
TERMINOLOGIES,GRIEF PROCESS AND LOSS AMD ITS TYPES .pptx
PoojaSen20
 
Final Evaluation.docx...........................
Final Evaluation.docx...........................Final Evaluation.docx...........................
Final Evaluation.docx...........................
l1bbyburrell
 
libbys peer assesment.docx..............
libbys peer assesment.docx..............libbys peer assesment.docx..............
libbys peer assesment.docx..............
19lburrell
 
DEATH & ITS TYPES AND PHYSIOLOGICAL CHANGES IN BODY AFTER DEATH, PATIENT WILL...
DEATH & ITS TYPES AND PHYSIOLOGICAL CHANGES IN BODY AFTER DEATH, PATIENT WILL...DEATH & ITS TYPES AND PHYSIOLOGICAL CHANGES IN BODY AFTER DEATH, PATIENT WILL...
DEATH & ITS TYPES AND PHYSIOLOGICAL CHANGES IN BODY AFTER DEATH, PATIENT WILL...
PoojaSen20
 
puzzle Irregular Verbs- Simple Past Tense
puzzle Irregular Verbs- Simple Past Tensepuzzle Irregular Verbs- Simple Past Tense
puzzle Irregular Verbs- Simple Past Tense
OlgaLeonorTorresSnch
 
Ad

03 Single layer Perception Classifier

  • 1. Neural Networks and Fuzzy Systems Single layer Perception Classifier Dr. Tamer Ahmed Farrag Course No.: 803522-3
  • 2. Course Outline Part I : Neural Networks (11 weeks) • Introduction to Machine Learning • Fundamental Concepts of Artificial Neural Networks (ANN) • Single layer Perception Classifier • Multi-layer Feed forward Networks • Single layer FeedBack Networks • Unsupervised learning Part II : Fuzzy Systems (4 weeks) • Fuzzy set theory • Fuzzy Systems 2
  • 3. Building Neural Networks Strategy • Formulating neural network solutions for particular problems is a multi-stage process: 1. Understand and specify the problem in terms of inputs and required outputs 2. Take the simplest form of network you think might be able to solve your problem 3. Try to find the appropriate connection weights (including neuron thresholds) so that the network produces the right outputs for each input in its training data 4. Make sure that the network works on its training data and test its generalization by checking its performance on new testing data 5. If the network doesn’t perform well enough, go back to stage 3 and try harder. 6. If the network still doesn’t perform well enough, go back to stage 2 and try harder. 7. If the network still doesn’t perform well enough, go back to stage 1 and try harder. 8. Problem solved – or not. 3
  • 4. Decision Hyperplanes and Linear Separability • If we have two inputs, the decision boundary is a one-dimensional straight line in the two-dimensional space of possible input values. • In general, a set of points in n-dimensional space is linearly separable if there is a hyperplane of (n − 1) dimensions that separates the two classes. • This hyperplane is still linear (i.e., straight or flat, not curved) and can still only divide the space into two regions. • Problems whose input patterns can be classified using a single hyperplane are said to be linearly separable. Problems (such as XOR) which cannot be classified in this way are said to be non-linearly separable.
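The hyperplane test described above can be sketched in a few lines of Python. The helper name `separates` and the sample points are illustrative assumptions, not from the slides: it checks whether a single line w·x = t puts one class on each side.

```python
# Check whether a given hyperplane w.x = t separates two point sets.
def separates(w, t, class_a, class_b):
    """True if every point in class_a satisfies w.x >= t and every
    point in class_b satisfies w.x < t."""
    dot = lambda w, x: sum(wi * xi for wi, xi in zip(w, x))
    return (all(dot(w, x) >= t for x in class_a) and
            all(dot(w, x) < t for x in class_b))

# AND is linearly separable: the line x1 + x2 = 1.5 works.
print(separates([1, 1], 1.5, [(1, 1)], [(0, 0), (0, 1), (1, 0)]))  # True

# The same line fails for XOR (and, in fact, no single line works).
print(separates([1, 1], 1.5, [(0, 1), (1, 0)], [(0, 0), (1, 1)]))  # False
```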
  • 5. Linearly Separable vs Non-linearly Separable Binary Classification Problems. Figure: scatter plots of two-class data (Class 1 vs Class 2); one panel is linearly separable (a single straight line divides the classes), and the remaining panels are non-linearly separable.
  • 6. Decision Boundaries for Some Logic Gates. Truth tables (x1, x2 → out): AND: (0,0)→0, (0,1)→0, (1,0)→0, (1,1)→1 – linearly separable. OR: (0,0)→0, (0,1)→1, (1,0)→1, (1,1)→1 – linearly separable. XOR: (0,0)→0, (0,1)→1, (1,0)→1, (1,1)→0 – non-linearly separable. Figure: the decision boundary for each gate plotted in the (x1, x2) plane.
  • 7. Implementation of Logical NOT, AND, and OR using McCulloch-Pitts Neurons. Truth tables (x1, x2 → out): AND: (0,0)→0, (0,1)→0, (1,0)→0, (1,1)→1. OR: (0,0)→0, (0,1)→1, (1,0)→1, (1,1)→1. NOT (x1 → out): 0→1, 1→0.
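The gates above can be reproduced with a small threshold-unit helper. The specific weight and threshold values below are one hand-picked solution among many; they are illustrative assumptions, not taken from the slides.

```python
# A McCulloch-Pitts unit: output 1 if the weighted input sum reaches the
# threshold T, else 0.
def mp_neuron(inputs, weights, T):
    z = sum(w * x for w, x in zip(weights, inputs))
    return 1 if z >= T else 0

NOT = lambda x: mp_neuron([x], [-1], 0)                # w = -1, T = 0
AND = lambda x1, x2: mp_neuron([x1, x2], [1, 1], 1.5)  # fires only on (1,1)
OR  = lambda x1, x2: mp_neuron([x1, x2], [1, 1], 0.5)  # fires unless (0,0)

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, AND(x1, x2), OR(x1, x2))
print(NOT(0), NOT(1))  # 1 0
```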
  • 8. How to Find the Weights and Threshold? • Constructing simple networks by hand (e.g., by trial and error) is one thing. But what about harder problems? • How long should we keep looking for a solution? We need to be able to calculate appropriate parameter values rather than searching for solutions by trial and error. • Each training pattern produces a linear inequality relating the output to the inputs and the network parameters. These inequalities can be used to compute the weights and thresholds.
  • 9. Finding Weights Analytically for the AND Network. Unknowns: w1, w2, T. Activation: F(z) = 0 for z < T and 1 for z ≥ T, where z = x1·w1 + x2·w2. Each training pattern gives one inequality: 0·w1 + 0·w2 < T; 0·w1 + 1·w2 < T; 1·w1 + 0·w2 < T; 1·w1 + 1·w2 ≥ T. These simplify to: T > 0; w2 < T; w1 < T; w1 + w2 ≥ T. Easy to solve (an infinite number of solutions); for example, take T = 1.5, w1 = 1, w2 = 1.
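The example solution can be checked mechanically. This sketch uses the hard-limit activation F(z) defined on the slide, and verifies both the four pattern-by-pattern outputs and the simplified inequalities:

```python
# Verify the slide's example solution T = 1.5, w1 = 1, w2 = 1 for AND.
T, w1, w2 = 1.5, 1, 1
F = lambda z: 1 if z >= T else 0   # hard-limit activation from the slide

patterns = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
for (x1, x2), desired in patterns:
    z = x1 * w1 + x2 * w2
    print(f"x=({x1},{x2})  z={z}  F(z)={F(z)}  desired={desired}")

# The reduced inequalities also hold: T > 0, w1 < T, w2 < T, w1 + w2 >= T.
assert T > 0 and w1 < T and w2 < T and w1 + w2 >= T
```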
  • 10. Finding Weights Analytically for the XOR Network. Unknowns: w1, w2, T. Activation: F(z) = 0 for z < T and 1 for z ≥ T, where z = x1·w1 + x2·w2. Each training pattern gives one inequality: 0·w1 + 0·w2 < T; 0·w1 + 1·w2 ≥ T; 1·w1 + 0·w2 ≥ T; 1·w1 + 1·w2 < T. These simplify to: T > 0 (1); w2 ≥ T (2); w1 ≥ T (3); w1 + w2 < T (4). No solution. Why? How can we solve this problem?
  • 11. Useful Notation • We often need to deal with ordered sets of numbers, which we write as vectors, e.g. x = (x1, x2, x3, …, xn), y = (y1, y2, y3, …, ym). • The components xi can be added up to give a scalar (number), e.g. s = x1 + x2 + x3 + … + xn = Σ_{i=1..n} xi. • Two vectors of the same length may be added to give another vector, e.g. z = x + y = (x1 + y1, x2 + y2, …, xn + yn). • Two vectors of the same length may be multiplied to give a scalar (the dot product), e.g. p = x·y = x1y1 + x2y2 + … + xnyn = Σ_{i=1..n} xiyi.
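The three operations above, written out in plain Python for concreteness (the example vector values are arbitrary):

```python
x = [1, 2, 3]
y = [4, 5, 6]

s = sum(x)                                # scalar sum of components: 1 + 2 + 3 = 6
z = [xi + yi for xi, yi in zip(x, y)]     # vector addition: [5, 7, 9]
p = sum(xi * yi for xi, yi in zip(x, y))  # dot product: 1*4 + 2*5 + 3*6 = 32
print(s, z, p)
```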
  • 12. What is the problem with implementing XOR? • In the previous slide, the first, second and third inequalities are incompatible with the fourth: adding (2) and (3) gives w1 + w2 ≥ 2T, while (4) requires w1 + w2 < T, which forces T < 0 and contradicts (1). So there is in fact no solution. • We need more complex networks, e.g. networks that combine together many simple networks, or that use different activation/thresholding/transfer functions. • It then becomes much more difficult to determine all the weights and thresholds by hand. Next, we will see how a neural network can learn these parameters.
  • 13. Perceptron • In 1958, Frank Rosenblatt introduced a training algorithm that provided the first procedure for training a simple ANN: a perceptron. • Any number of McCulloch-Pitts neurons can be connected together in any way we like. • The arrangement that has one layer of input neurons feeding forward to one output layer of McCulloch-Pitts neurons, with full connectivity, is known as a perceptron. Figure: a neuron with inputs x1, x2, x3, weights w1, w2, w3, bias b, and a hard-limit activation. Output = hardlim(Σ_{i=1..n} wi·xi + b).
  • 14. Perceptron • In 1958, Frank Rosenblatt introduced a training algorithm that provided the first procedure for training a simple ANN: a perceptron. • Any number of McCulloch-Pitts neurons can be connected together in any way we like. • The arrangement that has one layer of input neurons feeding forward to one output layer of McCulloch-Pitts neurons, with full connectivity, is known as a perceptron. • It uses a hard limiter as the activation function. • There is a learning rule (algorithm) to adjust the weights for better results.
  • 15. Single-Layer Feedforward Network (Perceptron) • The following figure shows a perceptron network with n inputs (organized in one input layer) and m outputs (organized in one output layer), with no hidden layers. Figure: input neurons i = 1…n fully connected to output neurons j = 1…m through weights Wij.
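The forward pass of such a layer can be sketched as a weight matrix applied to the input vector: output j is hardlim(Σᵢ W[j][i]·x[i] + b[j]). The weight and bias values below are arbitrary illustrative choices, not from the slides.

```python
# Forward pass of a single-layer perceptron with n inputs and m outputs.
def hardlim(z):
    return 1 if z >= 0 else 0

def forward(W, b, x):
    # One hard-limit neuron per row of W.
    return [hardlim(sum(wji * xi for wji, xi in zip(row, x)) + bj)
            for row, bj in zip(W, b)]

W = [[1.0, 1.0],    # weights into output neuron 1
     [-1.0, 2.0]]   # weights into output neuron 2
b = [-1.5, -0.5]
print(forward(W, b, [1, 1]))  # n = 2 inputs, m = 2 outputs
```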
  • 16. Perceptron • Simple network: the following figure shows a perceptron network with 2 inputs (organized in one input layer) and 1 output (organized in one output layer). Figure: inputs x1, x2 with weights w1, w2 and bias b feeding a single hard-limit output neuron. • Important remark: for binary classification problems we always have a single neuron in the output layer.
  • 17. Fixed Increment Perceptron Learning Algorithm • The problem: using the perceptron to perform a binary classification task. • The goal: adjust the weights and bias to values for which the classification error decreases. • How: by training the perceptron on pre-classified examples (supervised learning). • The desired output (D) is the correct output that should be generated. • The network calculates its output (Y), which may be wrong and needs to be recalculated after adjusting the network parameters. • Normally, we start with random initial weights and adjust them in small steps (using the learning algorithm) until the required outputs are produced.
  • 18. Fixed Increment Perceptron Learning Algorithm (cont.) Calculating the new weights: • Calculate the error: ej = Dj − Yj. • The network changes its weights in proportion to the error: Δwij = α · ej · xi. • Here α is the learning rate or step size: 1. It controls the amount of weight adjustment at each step of training. 2. It ranges from 0 to 1. 3. It determines the rate of learning at each time step. • The new (adjusted) weight: wij(new) = wij(old) + Δwij, or equivalently wij(new) = wij(old) + α·ej·xi. • This rule can be extended to train the bias by noting that a bias is simply a weight whose input is always 1: bj(new) = bj(old) + α·ej.
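A single application of this update rule, using the first training row of the worked AND example later in the deck: x = (0, 0), D = 0, α = 1, initial w = (0.3, −1), b = 1. Because both inputs are 0, only the bias changes.

```python
alpha = 1
w, b = [0.3, -1.0], 1.0
x, D = [0, 0], 0

z = sum(wi * xi for wi, xi in zip(w, x)) + b        # z = 1.0
Y = 1 if z >= 0 else 0                              # hard limiter: Y = 1
e = D - Y                                           # error: 0 - 1 = -1
w = [wi + alpha * e * xi for wi, xi in zip(w, x)]   # inputs are 0: w unchanged
b = b + alpha * e                                   # bias input is always 1: b -> 0
print(w, b, e)
```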
  • 19. Fixed Increment Perceptron Learning Algorithm flow chart: 1. Start with random initial weights. 2. Calculate the output (Yj) using the training examples. 3. Calculate the error ej = Dj − Yj. 4. If the error is 0, stop; otherwise update the weights and biases and return to step 2.
  • 20. Perceptron • The perceptron is a linear classifier. (Why?) • The perceptron learning rule is guaranteed to converge to a weight vector that correctly classifies the examples, provided the training examples are linearly separable. • To reach the correct weights, many training epochs are used with a suitable learning rate α. • Consequently, it cannot solve non-linearly separable problems (such as the XOR problem).
  • 21. Example: the AND problem • The AND problem is a linearly separable problem with 2 inputs (x1, x2) and one output (out). Truth table (x1, x2 → out): (0,0)→0, (0,1)→0, (1,0)→0, (1,1)→1. Figure: a single neuron with inputs x1, x2, weights w1, w2, and a bias.
  • 22. Example of perceptron learning: the logical operation AND. Assume α = 1 and initial values b = 1, w1 = 0.3, w2 = −1. Each row shows one training example; Y = 1 if w1·x1 + w2·x2 + b ≥ 0, else 0, and e = D − Y.

    epoch | x1 x2 | D | initial w1, w2, b | Y | e | final w1, w2, b
    1 | 0 0 | 0 | 0.3, −1, 1 | 1 | −1 | 0.3, −1, 0
    1 | 0 1 | 0 | 0.3, −1, 0 | 0 | 0 | 0.3, −1, 0
    1 | 1 0 | 0 | 0.3, −1, 0 | 1 | −1 | −0.7, −1, −1
    1 | 1 1 | 1 | −0.7, −1, −1 | 0 | 1 | 0.3, 0, 0
    2 | 0 0 | 0 | 0.3, 0, 0 | 1 | −1 | 0.3, 0, −1
    2 | 0 1 | 0 | 0.3, 0, −1 | 0 | 0 | 0.3, 0, −1
    2 | 1 0 | 0 | 0.3, 0, −1 | 0 | 0 | 0.3, 0, −1
    2 | 1 1 | 1 | 0.3, 0, −1 | 0 | 1 | 1.3, 1, 0
    3 | 0 0 | 0 | 1.3, 1, 0 | 1 | −1 | 1.3, 1, −1
    3 | 0 1 | 0 | 1.3, 1, −1 | 1 | −1 | 1.3, 0, −2
    3 | 1 0 | 0 | 1.3, 0, −2 | 0 | 0 | 1.3, 0, −2
    3 | 1 1 | 1 | 1.3, 0, −2 | 0 | 1 | 2.3, 1, −1
    …
    7 | 0 0 | 0 | 2.3, 1, −3 | 0 | 0 | 2.3, 1, −3
    7 | 0 1 | 0 | 2.3, 1, −3 | 0 | 0 | 2.3, 1, −3
    7 | 1 0 | 0 | 2.3, 1, −3 | 0 | 0 | 2.3, 1, −3
    7 | 1 1 | 1 | 2.3, 1, −3 | 1 | 0 | 2.3, 1, −3
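The trace above can be reproduced with a short training loop. This sketch assumes the hard limiter fires at z ≥ 0, as in the table; training stops after the first epoch with no errors (epoch 7), at w1 = 2.3, w2 = 1, b = −3.

```python
# Fixed-increment perceptron learning of AND, matching the slide's trace.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
alpha = 1
w, b = [0.3, -1.0], 1.0

epoch = 0
while True:
    epoch += 1
    total_error = 0
    for x, D in data:
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        Y = 1 if z >= 0 else 0
        e = D - Y
        total_error += abs(e)
        w = [wi + alpha * e * xi for wi, xi in zip(w, x)]
        b += alpha * e
    if total_error == 0:        # a full epoch with no errors: converged
        break

print(epoch, [round(wi, 1) for wi in w], round(b, 1))  # 7 [2.3, 1.0] -3.0
```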
  • 23. More Examples • Try other initial values and learning rates. • Try other linearly separable functions such as OR, NAND, and NOR. • What do you notice?
  • 24. Example: Truck Classification Problem • Consider the simple example of classifying trucks given their masses and lengths. • How do we construct a neural network that can classify any lorry and van?

    Mass | Length | Class
    10 | 6 | Lorry
    20 | 5 | Lorry
    5 | 4 | Van
    2 | 5 | Van
    2 | 5 | Van
    3 | 6 | Lorry
    10 | 7 | Lorry
    15 | 8 | Lorry
    5 | 9 | Lorry

    Figure: scatter plot of the two classes in the mass–length plane.
  • 25. Solution • The truck classification problem has the following features: • It is a binary classification problem (2 categories: lorry or van). • As shown, it is a linearly separable problem. • It has 2 inputs (mass, length). • It has one output (class). • So, it can be solved by a single-layer perceptron: inputs mass and length, weights w1 and w2, and a bias feeding one output neuron. Check: TrunkExample.ipynb
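A minimal sketch of the classifier described above: a single perceptron trained on the truck table with Van = 0 and Lorry = 1. The learning rate and the epoch cap are arbitrary choices, not taken from TrunkExample.ipynb; because the data is linearly separable, the perceptron convergence theorem guarantees training finishes well within the cap.

```python
# (mass, length) -> class, with Van = 0 and Lorry = 1, from the slide's table.
data = [((10, 6), 1), ((20, 5), 1), ((5, 4), 0), ((2, 5), 0), ((2, 5), 0),
        ((3, 6), 1), ((10, 7), 1), ((15, 8), 1), ((5, 9), 1)]

alpha = 0.1
w, b = [0.0, 0.0], 0.0

predict = lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0

for epoch in range(100_000):       # separable data, so this loop terminates early
    errors = 0
    for x, D in data:
        e = D - predict(x)
        errors += abs(e)
        w = [wi + alpha * e * xi for wi, xi in zip(w, x)]
        b += alpha * e
    if errors == 0:                # full pass with no mistakes: converged
        break

print(w, b)
print(all(predict(x) == D for x, D in data))  # True once converged
```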
  • 26. Overcoming the Perceptron's Limitations • To overcome the limitations of single-layer networks, multi-layer feed-forward networks can be used; these have not only input and output units but also hidden units that are neither input nor output units. • Use non-linear activation functions (e.g. sigmoid). • Use more advanced learning algorithms (e.g. gradient descent, backpropagation).
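One concrete illustration of the multi-layer fix: XOR computed with one hidden layer of ordinary hard-limit units, since XOR(x1, x2) = AND(OR(x1, x2), NAND(x1, x2)). The particular hand-picked weights are illustrative assumptions, not from the slides.

```python
step = lambda z: 1 if z >= 0 else 0   # hard limiter

def xor(x1, x2):
    h1 = step(x1 + x2 - 0.5)          # hidden unit 1: OR
    h2 = step(-x1 - x2 + 1.5)         # hidden unit 2: NAND
    return step(h1 + h2 - 1.5)        # output unit: AND of the hidden units

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", xor(x1, x2))
```

No single-layer perceptron can produce this mapping, but two layers of the same kind of unit can, which is exactly why hidden units matter.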