NAÏVE BAYESIAN
TEXT CLASSIFIER
EVENT MODELS
NOTES AND HOW TO USE THEM
Alex Lin
Senior Architect
Intelligent Mining
alin@intelligentmining.com
Outline
  Quick refresher for Naïve Bayesian Text Classifier
  Event Models
    Multinomial Event Model
    Multi-variate Bernoulli Event Model
  Performance characteristics
  Why are they important?
Naïve Bayes Text Classifier
  Supervised learning
  Trains on a labeled data set in order to classify
   unlabeled data
  Easy to implement and highly scalable
  It is often the first thing to try

  Successful cases: email spam filtering,
   news article categorization, product
   classification, social content categorization
N.B. Text Classifier - Training
  Training documents, each a bag of words with a class label:
    d1 = w1 w9 w213 w748 w985 ... wn | y=1
    d2 = w12 w19 w417 w578 ... wn    | y=0
    d3 = w53 w62 w23 w78 w105 ... wn | y=1
    d4 = w46 w58 w293 w348 w965 ... wn | y=1
    d5 = w8 w9 w33 w678 w967 ... wn  | y=0
    d6 = w71 w79 w529 w779 ... wn    | y=1
    ...
    dn = w49 w127 w23 w93 ... wn     | y=0
  Training estimates, for each class (y=0 and y=1), a word
  distribution over the vocabulary: P(w1|y), P(w2|y), P(w3|y), ..., P(wn|y)
  [figure: per-class histograms of P(wi|y=0) and P(wi|y=1) over w1 w2 w3 ... wn]
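The training step above amounts to counting. A minimal sketch in Python, using a hypothetical toy corpus (the documents, words, and the `p_word` helper are illustrative, not from the slides):

```python
from collections import Counter

# Toy labeled corpus (hypothetical data for illustration)
docs = [
    (["cheap", "pills", "buy"], 1),    # y=1 (e.g. spam)
    (["meeting", "notes", "buy"], 0),  # y=0
    (["cheap", "cheap", "deal"], 1),
]

# Training = counting term frequencies per class
tf = {0: Counter(), 1: Counter()}
class_docs = Counter()
for words, y in docs:
    tf[y].update(words)
    class_docs[y] += 1

def p_word(w, y):
    # P(w|y) as relative term frequency within the class
    # (the multinomial estimate from later slides)
    return tf[y][w] / sum(tf[y].values())

print(p_word("cheap", 1))  # 3 of the 6 y=1 tokens are "cheap" -> 0.5
```

This is the entire training pass: one scan over the labeled documents accumulating counts.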
N.B. Text Classifier - Classifying
  d1 = w1 w3 | y=?
  To classify a document, pick the class with
  "maximum posterior probability":
    Argmax_c P(Y=c | w1, w3)
    = Argmax_c P(w1, w3 | Y=c) P(Y=c)
    = Argmax_c P(Y=c) ∏_n P(wn | Y=c)
  Compare:
    P(Y=0) P(w1|Y=0) P(w3|Y=0)  vs.  P(Y=1) P(w1|Y=1) P(w3|Y=1)
  [figure: the two class-conditional word distributions from training,
  with P(w1|y=0), P(w3|y=0), P(w1|y=1), P(w3|y=1) highlighted]
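The argmax rule above can be sketched directly. The per-class probabilities here are hypothetical numbers chosen for illustration; sums of logs are used instead of raw products:

```python
import math

# Hypothetical class priors and per-word likelihoods
prior = {0: 0.5, 1: 0.5}
p = {  # p[c][w] = P(w | Y=c)
    0: {"w1": 0.40, "w3": 0.10},
    1: {"w1": 0.05, "w3": 0.30},
}

def classify(words):
    # Pick the class with maximum posterior; the shared
    # denominator P(d) is omitted since it does not affect argmax
    scores = {
        c: math.log(prior[c]) + sum(math.log(p[c][w]) for w in words)
        for c in prior
    }
    return max(scores, key=scores.get)

print(classify(["w1", "w3"]))  # 0: 0.5*0.4*0.1 > 0.5*0.05*0.3
```

Swapping the document to just `["w3"]` flips the decision to class 1, since P(w3|Y=1) dominates.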
Bayes' Theorem

  P(Y|X) = P(X|Y) P(Y) / P(X)

  Posterior = Likelihood × Class Prior / Evidence

  Generative learning algorithms model the "Likelihood"
  P(X|Y) and the "Class Prior" P(Y), then make
  predictions based on Bayes' theorem.
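A one-line worked example of the theorem, with hypothetical numbers (the 0.8 / 0.4 / 0.5 values are made up for illustration):

```python
# Bayes' theorem: posterior = likelihood * prior / evidence
# e.g. P(X|Y=spam) = 0.8, P(Y=spam) = 0.4, P(X) = 0.5
likelihood, prior, evidence = 0.8, 0.4, 0.5
posterior = likelihood * prior / evidence
print(posterior)  # 0.8 * 0.4 / 0.5 = 0.64
```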
Naïve Bayes Text Classifier
  P(Y|X) = P(X|Y) P(Y) / P(X)
  Y ∈ {0,1}    d = w1, w6, ..., wn

  Apply Bayes' theorem:
    P(Y=0 | w1, w6, ..., wn) = P(w1, w6, ..., wn | Y=0) P(Y=0) / P(w1, w6, ..., wn)
    P(Y=1 | w1, w6, ..., wn) = P(w1, w6, ..., wn | Y=1) P(Y=1) / P(w1, w6, ..., wn)
Naïve Bayes Text Classifier
  P(Y|X) = P(X|Y) P(Y) / P(X)
  Y ∈ {0,1}    d = w1, w6, ..., wn

  To find the most likely class label for d:
    Argmax_c P(Y=c | w1, w6, ..., wn)
    = Argmax_c P(w1, w6, ..., wn | Y=c) P(Y=c) / P(w1, w6, ..., wn)
    = Argmax_c P(w1, w6, ..., wn | Y=c) P(Y=c)
  (the evidence P(w1, w6, ..., wn) is the same for every
  class, so it can be dropped from the argmax)

  How do we estimate the likelihood?
Estimate Likelihood P(X|Y)
  How do we estimate the likelihood P(w1, w6, ..., wn | Y=c)?
  Naïve Bayes assumption: the words wi are conditionally
  independent given Y.
    P(w1, w6, ..., wn | Y=c)
    = P(w1|Y=c) × P(w6|Y=c) × ... × P(wn|Y=c)
    = ∏_i P(wi | Y=c)
  In practice, work in log space to avoid underflow:
    log ∏_i P(wi|Y=c) = Σ_i log P(wi|Y=c)
  How P(wi|Y=c) is estimated is what differs between
  the event models.
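Why the log-space form matters: multiplying many small per-word probabilities underflows floating point, while the sum of logs stays finite. A small sketch with hypothetical likelihoods:

```python
import math

# Hypothetical per-word likelihoods for a long document
probs = [1e-5] * 200

# Direct product underflows to 0.0 in double precision...
direct = 1.0
for q in probs:
    direct *= q
print(direct)  # 0.0 (underflow: 1e-1000 is below the float range)

# ...so classes are compared by the sum of logs instead
log_score = sum(math.log(q) for q in probs)
print(math.isfinite(log_score))  # True: a finite, large negative number
```

Since log is monotonic, the argmax over classes is unchanged by the transformation.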
Differences of the Two Models
  Both plug into ∏_i P(wi | Y=c)

  Multinomial event model:
    P(wi|Y=c) = tf_wi,c / |c|
      tf_wi,c : term frequency of wi in class c
      |c|     : sum of all term frequencies in class c

  Multi-variate Bernoulli event model:
    P(wi|Y=c) = df_wi,c / Nc
      df_wi,c : document frequency of wi in class c
      Nc      : number of docs in class c
    When wi does not occur in d, use the complement:
      P(¬wi|Y=c) = 1 − P(wi|Y=c)
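The two estimators side by side, on hypothetical toy counts (the `tf`/`df` dictionaries and helper names are illustrative):

```python
# Toy counts for a single class c (hypothetical)
tf = {"cheap": 3, "deal": 1, "buy": 2}  # term frequencies in class c
df = {"cheap": 2, "deal": 1, "buy": 2}  # doc frequencies in class c
n_docs_c = 4                            # Nc: number of docs in class c
total_tf = sum(tf.values())             # |c|: sum of term freqs in class c

def p_multinomial(w):
    # tf_wi,c / |c|
    return tf.get(w, 0) / total_tf

def p_bernoulli(w, present=True):
    # df_wi,c / Nc, or its complement when w is absent from the doc
    p = df.get(w, 0) / n_docs_c
    return p if present else 1 - p

print(p_multinomial("cheap"))              # 3/6 = 0.5
print(p_bernoulli("cheap"))                # 2/4 = 0.5
print(p_bernoulli("deal", present=False))  # 1 - 1/4 = 0.75
```

Note the Bernoulli model assigns probability even to the *absence* of a word, which the multinomial never does.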
Comparison of the Two Models

Multinomial:  P(wi|Y=c) = tf_wi,c / |c|

  Label   w1        w2        w3        w4       ...  wn
  Y=0     tf_w1,0   tf_w2,0   tf_w3,0   tf_w4,0  ...  tf_wn,0    |c|
  Y=1     tf_w1,1   tf_w2,1   tf_w3,1   tf_w4,1  ...  tf_wn,1    |c|

Multi-variate Bernoulli:  P(wi|Y=c) = df_wi,c / Nc

  Label   w1        w2        w3        w4       ...  wn
  Y=0     df_w1,0   df_w2,0   df_w3,0   df_w4,0  ...  df_wn,0    N0
  Y=0     !df_w1,0  !df_w2,0  !df_w3,0  !df_w4,0 ...  !df_wn,0   N0
  Y=1     df_w1,1   df_w2,1   df_w3,1   df_w4,1  ...  df_wn,1    N1
  Y=1     !df_w1,1  !df_w2,1  !df_w3,1  !df_w4,1 ...  !df_wn,1   N1
Comparison of the Two Models

Multinomial:  P(wi|Y=c) = tf_wi,c / |c|,  likelihood = ∏_i P(wi|Y=c)
  d = {w1, w2, ..., wn}   n: # of words in d   wi ∈ {1, 2, 3, ..., |V|}

  Argmax_c  Σ_{i∈n} log P(wi|Y=c) + log(|c|/|D|)

Multi-variate Bernoulli:  P(wi|Y=c) = df_wi,c / Nc
  d = {w1, w2, ..., wn}   n: # of words in vocabulary |V|   wi ∈ {0,1}

  Argmax_c  Σ_{i∈n} log( P(wi|Y=c)^wi (1 − P(wi|Y=c))^(1−wi) ) + log(|c|/|D|)
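The two argmax formulas above, written out as scoring functions. All priors and per-word estimates below are hypothetical numbers; the key structural difference is that the multinomial sums over word *occurrences* in the document, while the Bernoulli sums over the entire vocabulary:

```python
import math

vocab = ["w1", "w2", "w3"]
prior = {0: 0.5, 1: 0.5}  # |c| / |D|, hypothetical

# Hypothetical per-class estimates for each model
p_mult = {0: {"w1": 0.5, "w2": 0.3, "w3": 0.2},
          1: {"w1": 0.2, "w2": 0.3, "w3": 0.5}}
p_bern = {0: {"w1": 0.8, "w2": 0.5, "w3": 0.1},
          1: {"w1": 0.2, "w2": 0.5, "w3": 0.9}}

def score_multinomial(words, c):
    # Sum over each occurrence of a word in the document
    return sum(math.log(p_mult[c][w]) for w in words) + math.log(prior[c])

def score_bernoulli(words, c):
    # Sum over the whole vocabulary: presence AND absence both count
    present = set(words)
    s = math.log(prior[c])
    for w in vocab:
        p = p_bern[c][w]
        s += math.log(p) if w in present else math.log(1 - p)
    return s

doc = ["w3", "w3", "w1"]
print(max((0, 1), key=lambda c: score_multinomial(doc, c)))  # 1
print(max((0, 1), key=lambda c: score_bernoulli(doc, c)))    # 1
```

Note that the multinomial score rewards the repeated "w3", while the Bernoulli score only sees that "w3" is present at all.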
Multinomial
  [figure: a document is generated by drawing one word per word
  position from a class-specific distribution over w1 w2 w3 w4 ... wn;
  each class Y=0 / Y=1 has its own distribution, and the class itself
  is drawn with prior probability A% / B%]
Multi-variate Bernoulli
  [figure: for each word W in the vocabulary, a biased coin is
  flipped per class; when W occurs in the doc, the model uses
  Y=1: A%, Y=0: 1−A%, and when W does not occur in the doc it
  uses the complement coin W', Y=1: B%, Y=0: 1−B%]
Performance Characteristics
  Multinomial tends to outperform Multi-variate Bernoulli in
   most text classification tasks, especially when the
   vocabulary size |V| >= 1K
  Multinomial performs better on data sets with large
   variance in document length
  Multi-variate Bernoulli can have an advantage on dense
   data sets
  Non-text features can be added as additional Bernoulli
   variables; however, they should not be added to the
   vocabulary in the Multinomial model
Why are they interesting?
  Certain data points are more suitable for one event model
   than the other.
  Examples:
    Web page text + "Domain" + "Author"
    Social content text + "Who"
    Product name / description + "Brand"
  We can create a Naïve Bayes classifier that combines
   event models
  Most importantly, try both on your data set
Thank you
  Any questions or comments?
Appendix
  Laplace Smoothing
  Generative vs Discriminative Learning
  Multinomial Event Model
  Multi-variate Bernoulli Event Model
  Notation
Laplace Smoothing
  Multinomial:
    P(wi|Y=c) = (tf_wi,c + 1) / (|c| + |V|)

  Multi-variate Bernoulli:
    P(wi|Y=c) = (df_wi,c + 1) / (Nc + 2)
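The smoothed estimates above, sketched with hypothetical counts. Without the +1 terms, an unseen word would get probability 0 and zero out the whole product:

```python
# Laplace (add-one) smoothing, hypothetical counts for one class c
tf = {"cheap": 3, "buy": 2}  # term frequencies in class c
V = 1000                     # |V|: vocabulary size
total_tf = sum(tf.values())  # |c|: sum of term freqs in class c

def p_mult_smoothed(w):
    # (tf + 1) / (|c| + |V|)
    return (tf.get(w, 0) + 1) / (total_tf + V)

df = {"cheap": 2}
n_docs_c = 4                 # Nc: docs in class c

def p_bern_smoothed(w):
    # (df + 1) / (Nc + 2): add one pseudo-doc for each coin outcome
    return (df.get(w, 0) + 1) / (n_docs_c + 2)

# Unseen words no longer get zero probability
print(p_mult_smoothed("unseen") > 0)  # True
print(p_bern_smoothed("unseen"))      # 1/6
```

The Bernoulli denominator adds 2 rather than |V| because each word is a binary variable with only two outcomes.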
Generative vs. Discriminative
  Discriminative learning algorithms:
    Try to learn P(Y|X) directly, or try to map input X
    to labels Y directly

  Generative learning algorithms:
    Try to model P(X|Y) and P(Y) first, then use Bayes'
    theorem to find
      P(Y|X) = P(X|Y) P(Y) / P(X)
Multinomial Event Model
  P(wi|Y=c) = tf_wi,c / |c|
  d = {w1, w2, ..., wn}   n: # of words in d   wi ∈ {1, 2, 3, ..., |V|}

  = Argmax_c P(w1, w6, ..., wn | Y=c) P(Y=c)
  = Argmax_c Σ_{i∈n} log(tf_wi,c / |c|) + log(|c|/|D|)
Multi-variate Bernoulli Event Model
  P(wi|Y=c) = df_wi,c / Nc
  d = {w1, w2, ..., wn}   n: # of words in vocabulary |V|   wi ∈ {0,1}

  = Argmax_c P(w1, w6, ..., wn | Y=c) P(Y=c)
  = Argmax_c Σ_{i∈n} log( (df_wi,c / Nc)^wi (1 − df_wi,c / Nc)^(1−wi) ) + log(|c|/|D|)
Notation
  d: a document
  w: a word in a document
  X: observed data attributes
  Y: class label
  |V|: number of terms in the vocabulary
  |D|: number of docs in the training set
  |c|: number of docs in class c (in the multinomial estimate,
   also used for the sum of all term frequencies in class c)
  tf: term frequency
  df: document frequency
  !df: complement document frequency (docs in class c that do
   not contain the term, Nc − df)
References
  McCallum, A., Nigam, K.: A Comparison of Event Models for Naive Bayes
   Text Classification. In: Learning for Text Categorization: Papers from
   the AAAI Workshop, AAAI Press (1998) 41-48. Technical Report WS-98-05.
  Metsis, V., Androutsopoulos, I., Paliouras, G.: Spam Filtering with
   Naive Bayes - Which Naive Bayes? In: Third Conference on Email and
   Anti-Spam (CEAS) (2006).
  Schneider, K.-M.: On Word Frequency Information and Negative Evidence
   in Naive Bayes Text Classification. In: Advances in Natural Language
   Processing (EsTAL) (2004).
Challenges in Migrating Imperative Deep Learning Programs to Graph Execution:...
Raffi Khatchadourian
 
Cybersecurity Threat Vectors and Mitigation
Cybersecurity Threat Vectors and MitigationCybersecurity Threat Vectors and Mitigation
Cybersecurity Threat Vectors and Mitigation
VICTOR MAESTRE RAMIREZ
 
RTP Over QUIC: An Interesting Opportunity Or Wasted Time?
RTP Over QUIC: An Interesting Opportunity Or Wasted Time?RTP Over QUIC: An Interesting Opportunity Or Wasted Time?
RTP Over QUIC: An Interesting Opportunity Or Wasted Time?
Lorenzo Miniero
 
Dark Dynamism: drones, dark factories and deurbanization
Dark Dynamism: drones, dark factories and deurbanizationDark Dynamism: drones, dark factories and deurbanization
Dark Dynamism: drones, dark factories and deurbanization
Jakub Šimek
 
Q1 2025 Dropbox Earnings and Investor Presentation
Q1 2025 Dropbox Earnings and Investor PresentationQ1 2025 Dropbox Earnings and Investor Presentation
Q1 2025 Dropbox Earnings and Investor Presentation
Dropbox
 
Zilliz Cloud Monthly Technical Review: May 2025
Zilliz Cloud Monthly Technical Review: May 2025Zilliz Cloud Monthly Technical Review: May 2025
Zilliz Cloud Monthly Technical Review: May 2025
Zilliz
 
Slack like a pro: strategies for 10x engineering teams
Slack like a pro: strategies for 10x engineering teamsSlack like a pro: strategies for 10x engineering teams
Slack like a pro: strategies for 10x engineering teams
Nacho Cougil
 
Config 2025 presentation recap covering both days
Config 2025 presentation recap covering both daysConfig 2025 presentation recap covering both days
Config 2025 presentation recap covering both days
TrishAntoni1
 
Kit-Works Team Study_아직도 Dockefile.pdf_김성호
Kit-Works Team Study_아직도 Dockefile.pdf_김성호Kit-Works Team Study_아직도 Dockefile.pdf_김성호
Kit-Works Team Study_아직도 Dockefile.pdf_김성호
Wonjun Hwang
 
Viam product demo_ Deploying and scaling AI with hardware.pdf
Viam product demo_ Deploying and scaling AI with hardware.pdfViam product demo_ Deploying and scaling AI with hardware.pdf
Viam product demo_ Deploying and scaling AI with hardware.pdf
camilalamoratta
 
Everything You Need to Know About Agentforce? (Put AI Agents to Work)
Everything You Need to Know About Agentforce? (Put AI Agents to Work)Everything You Need to Know About Agentforce? (Put AI Agents to Work)
Everything You Need to Know About Agentforce? (Put AI Agents to Work)
Cyntexa
 
Building the Customer Identity Community, Together.pdf
Building the Customer Identity Community, Together.pdfBuilding the Customer Identity Community, Together.pdf
Building the Customer Identity Community, Together.pdf
Cheryl Hung
 
AI 3-in-1: Agents, RAG, and Local Models - Brent Laster
AI 3-in-1: Agents, RAG, and Local Models - Brent LasterAI 3-in-1: Agents, RAG, and Local Models - Brent Laster
AI 3-in-1: Agents, RAG, and Local Models - Brent Laster
All Things Open
 
Could Virtual Threads cast away the usage of Kotlin Coroutines - DevoxxUK2025
Could Virtual Threads cast away the usage of Kotlin Coroutines - DevoxxUK2025Could Virtual Threads cast away the usage of Kotlin Coroutines - DevoxxUK2025
Could Virtual Threads cast away the usage of Kotlin Coroutines - DevoxxUK2025
João Esperancinha
 
Shoehorning dependency injection into a FP language, what does it take?
Shoehorning dependency injection into a FP language, what does it take?Shoehorning dependency injection into a FP language, what does it take?
Shoehorning dependency injection into a FP language, what does it take?
Eric Torreborre
 
Mastering Testing in the Modern F&B Landscape
Mastering Testing in the Modern F&B LandscapeMastering Testing in the Modern F&B Landscape
Mastering Testing in the Modern F&B Landscape
marketing943205
 
IT488 Wireless Sensor Networks_Information Technology
IT488 Wireless Sensor Networks_Information TechnologyIT488 Wireless Sensor Networks_Information Technology
IT488 Wireless Sensor Networks_Information Technology
SHEHABALYAMANI
 
Artificial_Intelligence_in_Everyday_Life.pptx
Artificial_Intelligence_in_Everyday_Life.pptxArtificial_Intelligence_in_Everyday_Life.pptx
Artificial_Intelligence_in_Everyday_Life.pptx
03ANMOLCHAURASIYA
 
Reimagine How You and Your Team Work with Microsoft 365 Copilot.pptx
Reimagine How You and Your Team Work with Microsoft 365 Copilot.pptxReimagine How You and Your Team Work with Microsoft 365 Copilot.pptx
Reimagine How You and Your Team Work with Microsoft 365 Copilot.pptx
John Moore
 
Bepents tech services - a premier cybersecurity consulting firm
Bepents tech services - a premier cybersecurity consulting firmBepents tech services - a premier cybersecurity consulting firm
Bepents tech services - a premier cybersecurity consulting firm
Benard76
 
Challenges in Migrating Imperative Deep Learning Programs to Graph Execution:...
Challenges in Migrating Imperative Deep Learning Programs to Graph Execution:...Challenges in Migrating Imperative Deep Learning Programs to Graph Execution:...
Challenges in Migrating Imperative Deep Learning Programs to Graph Execution:...
Raffi Khatchadourian
 
Cybersecurity Threat Vectors and Mitigation
Cybersecurity Threat Vectors and MitigationCybersecurity Threat Vectors and Mitigation
Cybersecurity Threat Vectors and Mitigation
VICTOR MAESTRE RAMIREZ
 
RTP Over QUIC: An Interesting Opportunity Or Wasted Time?
RTP Over QUIC: An Interesting Opportunity Or Wasted Time?RTP Over QUIC: An Interesting Opportunity Or Wasted Time?
RTP Over QUIC: An Interesting Opportunity Or Wasted Time?
Lorenzo Miniero
 
Dark Dynamism: drones, dark factories and deurbanization
Dark Dynamism: drones, dark factories and deurbanizationDark Dynamism: drones, dark factories and deurbanization
Dark Dynamism: drones, dark factories and deurbanization
Jakub Šimek
 
Q1 2025 Dropbox Earnings and Investor Presentation
Q1 2025 Dropbox Earnings and Investor PresentationQ1 2025 Dropbox Earnings and Investor Presentation
Q1 2025 Dropbox Earnings and Investor Presentation
Dropbox
 
Zilliz Cloud Monthly Technical Review: May 2025
Zilliz Cloud Monthly Technical Review: May 2025Zilliz Cloud Monthly Technical Review: May 2025
Zilliz Cloud Monthly Technical Review: May 2025
Zilliz
 
Slack like a pro: strategies for 10x engineering teams
Slack like a pro: strategies for 10x engineering teamsSlack like a pro: strategies for 10x engineering teams
Slack like a pro: strategies for 10x engineering teams
Nacho Cougil
 
Config 2025 presentation recap covering both days
Config 2025 presentation recap covering both daysConfig 2025 presentation recap covering both days
Config 2025 presentation recap covering both days
TrishAntoni1
 

Naive Bayesian Text Classifier Event Models

  • 1. NAÏVE BAYESIAN TEXT CLASSIFIER EVENT MODELS NOTES AND HOW TO USE THEM Alex Lin Senior Architect Intelligent Mining alin@intelligentmining.com
  • 2. Outline: Quick refresher on the Naïve Bayesian text classifier; event models (the multinomial event model and the multi-variate Bernoulli event model); performance characteristics; why are they important?
  • 3. Naïve Bayes Text Classifier: Supervised learning — a labeled data set is used for training to classify the unlabeled data. Easy to implement and highly scalable; it is often the first thing to try. Successful cases: email spam filtering, news article categorization, product classification, social content categorization.
  • 4. N.B. Text Classifier - Training: A labeled corpus d1 = w1w9w213w748w985...wn|y=1, d2 = w12w19w417w578...wn|y=0, d3 = w53w62w23w78w105...wn|y=1, ..., dn = w49w127w23w93...wn|y=0 is used to train the model. [Figure: for each class y=0 and y=1, training estimates a per-word likelihood P(w1|y=c), P(w2|y=c), P(w3|y=c), ..., P(wn|y=c) over the vocabulary w1, w2, w3, ..., wn.]
  • 5. N.B. Text Classifier - Classifying: To classify a document d1 = w1w3|y=?, pick the class with the "maximum posterior probability": argmax_c P(Y=c | w1, w3) = argmax_c P(w1, w3 | Y=c) P(Y=c) = argmax_c P(Y=c) ∏_n P(wn | Y=c). Concretely, compare P(Y=0) P(w1|Y=0) P(w3|Y=0) against P(Y=1) P(w1|Y=1) P(w3|Y=1).
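The posterior comparison on slide 5 can be sketched with a few made-up numbers (the priors and per-word likelihoods below are assumptions for illustration, not values from the slides):

```python
# Unnormalized posteriors P(Y=c) * P(w1|Y=c) * P(w3|Y=c) for a
# two-word document d1 = w1 w3; all probabilities are toy values.
priors = {0: 0.5, 1: 0.5}
p_w1 = {0: 0.01, 1: 0.05}  # assumed P(w1 | Y=c)
p_w3 = {0: 0.02, 1: 0.04}  # assumed P(w3 | Y=c)

scores = {c: priors[c] * p_w1[c] * p_w3[c] for c in (0, 1)}
predicted = max(scores, key=scores.get)  # class with maximum posterior
print(predicted)  # -> 1, since 0.5*0.05*0.04 > 0.5*0.01*0.02
```

The evidence P(w1, w3) is never computed: it divides both scores equally, so the argmax is unchanged.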
  • 6. Bayes' Theorem: P(Y|X) = P(X|Y) P(Y) / P(X), where P(X|Y) is the likelihood, P(Y) the class prior, P(Y|X) the posterior, and P(X) the evidence. Generative learning algorithms model the likelihood P(X|Y) and the class prior P(Y), then make predictions based on Bayes' theorem.
  • 7. Naïve Bayes Text Classifier: With Y ∈ {0, 1} and d = w1, w6, ..., wn, apply Bayes' theorem: P(Y=0 | w1, w6, ..., wn) = P(w1, w6, ..., wn | Y=0) P(Y=0) / P(w1, w6, ..., wn), and P(Y=1 | w1, w6, ..., wn) = P(w1, w6, ..., wn | Y=1) P(Y=1) / P(w1, w6, ..., wn).
  • 8. Naïve Bayes Text Classifier: To find the most likely class label for d: argmax_c P(Y=c | w1, w6, ..., wn) = argmax_c P(w1, w6, ..., wn | Y=c) P(Y=c) / P(w1, w6, ..., wn) = argmax_c P(w1, w6, ..., wn | Y=c) P(Y=c), since the evidence term is the same for every class. How do we estimate the likelihood?
  • 9. Estimate Likelihood P(X|Y): How do we estimate the likelihood P(w1, w6, ..., wn | Y=c)? Naïve Bayes' assumption: assume that the words wn are conditionally independent given y, so P(w1, w6, ..., wn | Y=c) = P(w1|Y=c) * P(w6|Y=c) * ... * P(wn|Y=c) = ∏_{i∈n} P(wi | Y=c), which in log space becomes ∑_{i∈n} log P(wi | Y=c). The different event models differ in how each P(wi|Y=c) is estimated.
  • 10. Differences of the Two Models: Both compute ∏_{i∈n} P(wi|Y=c), but estimate P(wi|Y=c) differently. Multinomial event model: P(wi|Y=c) = tf_{wi,c} / |c| — the term frequency of wi in class c over the sum of all term frequencies in class c. Multi-variate Bernoulli event model: P(wi|Y=c) = df_{wi,c} / N_c — the document frequency of wi in class c over the number of docs in class c; when wi does not exist in d, the factor is 1 − P(wi|Y=c).
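The two likelihood estimators on slide 10 can be sketched directly from a toy labeled corpus (the three documents below are an assumption for illustration):

```python
# Toy labeled corpus (assumed data): each entry is (token list, class label).
docs = [(["a", "b", "a"], 1), (["b", "c"], 0), (["a", "c", "c"], 1)]

def multinomial_likelihood(word, c):
    # tf_{w,c} / |c|: term freq of word in class c over all tokens in class c
    tf = sum(d.count(word) for d, y in docs if y == c)
    total = sum(len(d) for d, y in docs if y == c)
    return tf / total

def bernoulli_likelihood(word, c):
    # df_{w,c} / N_c: class-c docs containing word over number of class-c docs
    df = sum(1 for d, y in docs if y == c and word in d)
    n_c = sum(1 for d, y in docs if y == c)
    return df / n_c

print(multinomial_likelihood("a", 1))  # 3 occurrences over 6 tokens -> 0.5
print(bernoulli_likelihood("a", 1))    # present in 2 of 2 class-1 docs -> 1.0
```

Note how the same word gets different estimates: the multinomial counts every occurrence, while the Bernoulli only asks whether the word appears in a document at all.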
  • 11. Comparison of the Two Models: Multinomial (P(wi|Y=c) = tf_{wi,c} / |c|) stores one term-frequency row per class: Y=0: tf_{w1,0}, tf_{w2,0}, tf_{w3,0}, tf_{w4,0}, ..., tf_{wn,0} with total |c|; Y=1: tf_{w1,1}, tf_{w2,1}, tf_{w3,1}, tf_{w4,1}, ..., tf_{wn,1} with total |c|. Multi-variate Bernoulli (P(wi|Y=c) = df_{wi,c} / N_c) stores, per class, both the document frequencies and their complements: Y=0: df_{w1,0}, ..., df_{wn,0} and !df_{w1,0}, ..., !df_{wn,0} over N_0 docs; Y=1: df_{w1,1}, ..., df_{wn,1} and !df_{w1,1}, ..., !df_{wn,1} over N_1 docs.
  • 12. Comparison of the Two Models: Multinomial: P(wi|Y=c) = tf_{wi,c}/|c|; d = {w1, w2, ..., wn}, where n is the number of words in d and wn ∈ {1, 2, 3, ..., |V|}; classify with argmax_c ∑_{i∈n} log P(wi|Y=c) + log(|c|/|D|). Multi-variate Bernoulli: P(wi|Y=c) = df_{wi,c}/N_c; d = {w1, w2, ..., wn}, where n is the number of words in the vocabulary |V| and wn ∈ {0, 1}; classify with argmax_c ∑_{i∈n} log(P(wi|Y=c)^{wi} (1 − P(wi|Y=c))^{1−wi}) + log(|c|/|D|).
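The two decision rules on slide 12 can be sketched side by side; the priors and per-word probabilities below are made-up values, not estimates from real data:

```python
import math

vocab = ["w1", "w2", "w3"]
# Assumed class-conditional probabilities: from term frequencies for the
# multinomial model, from document frequencies for the Bernoulli model.
p_multi = {0: {"w1": 0.5, "w2": 0.3, "w3": 0.2},
           1: {"w1": 0.2, "w2": 0.3, "w3": 0.5}}
p_bern = {0: {"w1": 0.9, "w2": 0.5, "w3": 0.1},
          1: {"w1": 0.2, "w2": 0.5, "w3": 0.8}}
prior = {0: 0.5, 1: 0.5}

def multinomial_score(doc_words, c):
    # Sum over the words *in the document* (repeats count) of log P(wi|Y=c).
    return math.log(prior[c]) + sum(math.log(p_multi[c][w]) for w in doc_words)

def bernoulli_score(doc_words, c):
    # Sum over *every vocabulary word*: present -> log p, absent -> log(1 - p).
    present = set(doc_words)
    s = math.log(prior[c])
    for w in vocab:
        p = p_bern[c][w]
        s += math.log(p) if w in present else math.log(1 - p)
    return s

doc = ["w3", "w3", "w1"]
multi_label = max((0, 1), key=lambda c: multinomial_score(doc, c))
bern_label = max((0, 1), key=lambda c: bernoulli_score(doc, c))
print(multi_label, bern_label)
```

The key structural difference is the loop: the multinomial sum runs over document tokens, the Bernoulli sum over the whole vocabulary, penalizing a class for words it expects but which are absent.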
  • 13. Multinomial: [Figure: generative view — for each word in the doc, draw a word from the class-conditional distribution over w1, w2, w3, w4, ..., wn, for class y=0 (A%) or y=1 (B%).]
  • 14. Multi-variate Bernoulli: [Figure: generative view — for each vocabulary word, a per-class Bernoulli draw: W with probability A% (y=1) or 1−A% (y=0) when W exists in the doc, and W' with probability B% (y=1) or 1−B% (y=0) when W does not exist in the doc.]
  • 15. Performance Characteristics: Multinomial tends to perform better than multi-variate Bernoulli in most text classification tasks, especially when the vocabulary size |V| >= 1K. Multinomial also performs better on data sets with large variance in document length. Multi-variate Bernoulli can have an advantage on dense data sets. Non-text features can be added as additional Bernoulli variables; however, they should not be added to the vocabulary in the multinomial model.
  • 16. Why are they interesting? Certain data points are more suitable for one event model than the other. Examples: web page text + “Domain” + “Author”; social content text + “Who”; product name/description + “Brand”. We can create a Naïve Bayes classifier that combines event models. Most importantly, try both on your data set.
  • 17. Thank you. Any questions or comments?
  • 18. Appendix: Laplace smoothing; generative vs. discriminative learning; multinomial event model; multi-variate Bernoulli event model; notation.
  • 19. Laplace Smoothing: Multinomial: P(wi|Y=c) = (tf_{wi,c} + 1) / (|c| + |V|). Multi-variate Bernoulli: P(wi|Y=c) = (df_{wi,c} + 1) / (N_c + 2).
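The smoothed estimates above can be sketched with assumed toy counts for a word never observed in class c, which shows why smoothing is needed at all:

```python
# Add-one (Laplace) smoothing; all counts below are toy assumptions.
V = 1000                 # vocabulary size |V|
tf, c_total = 0, 5000    # term freq in class c, total term count |c|
df, n_c = 0, 200         # doc freq in class c, number of class-c docs N_c

# Unsmoothed estimates would be 0 and zero out the entire likelihood product.
p_multinomial = (tf + 1) / (c_total + V)  # (tf_{wi,c} + 1) / (|c| + |V|)
p_bernoulli = (df + 1) / (n_c + 2)        # (df_{wi,c} + 1) / (N_c + 2)
print(p_multinomial, p_bernoulli)
```

The denominators differ because each word outcome has |V| alternatives in the multinomial model but only 2 (present/absent) in the Bernoulli model.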
  • 20. Generative vs. Discriminative: A discriminative learning algorithm tries to learn P(Y|X) directly, or tries to map input X to labels Y directly. A generative learning algorithm tries to model P(X|Y) and P(Y) first, then uses Bayes' theorem to find P(Y|X) = P(X|Y) P(Y) / P(X).
  • 21. Multinomial Event Model: P(wi|Y=c) = tf_{wi,c}/|c|; d = {w1, w2, ..., wn}, where n is the number of words in d and wn ∈ {1, 2, 3, ..., |V|}. Then argmax_c P(w1, w6, ..., wn | Y=c) P(Y=c) = argmax_c ∑_{i∈n} log(tf_{wi,c}/|c|) + log(|c|/|D|).
  • 22. Multi-variate Bernoulli Event Model: P(wi|Y=c) = df_{wi,c}/N_c; d = {w1, w2, ..., wn}, where n is the number of words in the vocabulary |V| and wn ∈ {0, 1}. Then argmax_c P(w1, w6, ..., wn | Y=c) P(Y=c) = argmax_c ∑_{i∈n} log((df_{wi,c}/N_c)^{wi} (1 − df_{wi,c}/N_c)^{1−wi}) + log(|c|/|D|).
  • 23. Notation: d: a document; w: a word in a document; X: observed data attributes; Y: class label; |V|: number of terms in the vocabulary; |D|: number of docs in the training set; |c|: number of docs in class c; tf: term frequency; df: document frequency; !df: number of class-c docs in which the term does not appear.
  • 24. References: McCallum, A., Nigam, K.: A Comparison of Event Models for Naive Bayes Text Classification. In: Learning for Text Categorization: Papers from the AAAI Workshop, AAAI Press (1998) 41–48. Technical Report WS-98-05. — Metsis, V., Androutsopoulos, I., Paliouras, G.: Spam Filtering with Naive Bayes – Which Naive Bayes? Third Conference on Email and Anti-Spam (CEAS) (2006). — Schneider, K.: On Word Frequency Information and Negative Evidence in Naive Bayes Text Classification. España for Natural Language Processing, EsTAL (2004).