Multinomial Logistic Regression with Apache Spark
DB Tsai
Machine Learning Engineer
May 1st, 2014
Machine Learning with Big Data
● Hadoop MapReduce solutions
● MapReduce scales well for batch processing.
● However, lots of machine learning algorithms are iterative by nature.
● There are lots of tricks people use, like training with subsamples of the data and then averaging the models. Why do big data with approximation?
Lightning-fast cluster computing
● Empowers users to iterate through the data by utilizing the in-memory cache.
● Logistic regression runs up to 100x faster than Hadoop M/R in memory.
● We're able to train the exact model without doing any approximation.
Binary Logistic Regression
For a decision boundary a x_1 + b x_2 + c x_0 = 0, the signed distance of a point to the boundary is

d = \frac{a x_1 + b x_2 + c x_0}{\sqrt{a^2 + b^2}}, \quad \text{where } x_0 = 1

P(y=1 \mid \vec{x}, \vec{w}) = \frac{\exp(d)}{1 + \exp(d)} = \frac{\exp(\vec{x} \cdot \vec{w})}{1 + \exp(\vec{x} \cdot \vec{w})}

P(y=0 \mid \vec{x}, \vec{w}) = \frac{1}{1 + \exp(\vec{x} \cdot \vec{w})}

\log \frac{P(y=1 \mid \vec{x}, \vec{w})}{P(y=0 \mid \vec{x}, \vec{w})} = \vec{x} \cdot \vec{w}

w_0 = \frac{c}{\sqrt{a^2 + b^2}}, \quad w_1 = \frac{a}{\sqrt{a^2 + b^2}}, \quad w_2 = \frac{b}{\sqrt{a^2 + b^2}}, \quad \text{where } w_0 \text{ is called the intercept}
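As a minimal sketch of the model above (an illustration we add, with x(0) = 1 carrying the intercept):

def predict(x: Array[Double], w: Array[Double]): Double = {
  // x · w; by convention x(0) = 1, so w(0) is the intercept.
  val margin = x.zip(w).map { case (xi, wi) => xi * wi }.sum
  1.0 / (1.0 + math.exp(-margin)) // = exp(x·w) / (1 + exp(x·w))
}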
Training Binary Logistic Regression
● Maximum Likelihood estimation: from training data X = (\vec{x}_1, \vec{x}_2, \vec{x}_3, \ldots) and labels Y = (y_1, y_2, y_3, \ldots)
● We want to find the \vec{w} that maximizes the likelihood of the data, defined by
L(\vec{w}, \vec{x}_1, \ldots, \vec{x}_N) = P(y_1 \mid \vec{x}_1, \vec{w}) P(y_2 \mid \vec{x}_2, \vec{w}) \cdots P(y_N \mid \vec{x}_N, \vec{w})
● We can take the log of the equation and minimize its negative instead. The negative log-likelihood becomes the loss function:
l(\vec{w}, \vec{x}) = \log P(y_1 \mid \vec{x}_1, \vec{w}) + \log P(y_2 \mid \vec{x}_2, \vec{w}) + \cdots + \log P(y_N \mid \vec{x}_N, \vec{w})
Optimization
● First Order Minimizers require the loss and the gradient of the loss function.
– Gradient Descent: \vec{w}_{n+1} = \vec{w}_n - \gamma \vec{G}, where \gamma is the step size
– Limited-memory BFGS (L-BFGS)
– Orthant-Wise Limited-memory Quasi-Newton (OWLQN)
– Coordinate Descent (CD)
– Trust Region Newton Method (TRON)
● Second Order Minimizers require the loss, gradient, and Hessian of the loss function.
– Newton-Raphson: \vec{w}_{n+1} = \vec{w}_n - H^{-1} \vec{G}; quadratic convergence. Fast!
● Ref: Journal of Machine Learning Research 11 (2010) 3183-3234, Chih-Jen Lin et al.
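For concreteness, a minimal sketch of the first-order update above — plain gradient descent with a fixed step size γ (the function shape is ours, not Spark's):

def gradientDescent(grad: Array[Double] => Array[Double],
                    w0: Array[Double], gamma: Double, iters: Int): Array[Double] = {
  var w = w0
  for (_ <- 1 to iters)
    w = w.zip(grad(w)).map { case (wi, gi) => wi - gamma * gi } // w ← w − γG
  w
}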
Problem of Second Order Minimizer
● Scales horizontally (in the number of training samples) by leveraging Spark to parallelize this iterative optimization process.
● Doesn't scale vertically (in the number of training features), since the dimension of the Hessian is
\dim(H) = [(k-1)(n+1)]^2, where k is the number of classes and n is the number of features.
● Recent applications in document classification and computational linguistics are of this type.
L-BFGS
● It's a quasi-Newton method.
● The Hessian matrix of second derivatives doesn't need to be evaluated directly.
● The Hessian matrix is approximated using gradient evaluations.
● It converges much faster than the default optimizer in Spark, Gradient Descent.
● We love open source! Alpine Data Labs contributed our L-BFGS to Spark, and it's already merged in SPARK-1157.
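As a hedged example of how an L-BFGS solver is driven by loss and gradient alone, here is a sketch using Breeze, the optimization library Spark MLlib's L-BFGS builds on (the toy objective is our own):

import breeze.linalg.DenseVector
import breeze.optimize.{DiffFunction, LBFGS}

// Toy objective f(w) = ||w − 3||², with gradient 2(w − 3).
val f = new DiffFunction[DenseVector[Double]] {
  def calculate(w: DenseVector[Double]): (Double, DenseVector[Double]) = {
    val diff = w - 3.0
    (diff.dot(diff), diff * 2.0)
  }
}
// maxIter = 100; m = 7 correction pairs approximate the Hessian from recent gradients.
val lbfgs = new LBFGS[DenseVector[Double]](100, 7)
val wOpt = lbfgs.minimize(f, DenseVector.zeros[Double](3)) // ≈ DenseVector(3.0, 3.0, 3.0)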
Training Binary Logistic Regression
l(\vec{w}, \vec{x}) = \sum_{k=1}^{N} \log P(y_k \mid \vec{x}_k, \vec{w})
= \sum_{k=1}^{N} y_k \log P(y_k=1 \mid \vec{x}_k, \vec{w}) + (1 - y_k) \log P(y_k=0 \mid \vec{x}_k, \vec{w})
= \sum_{k=1}^{N} y_k \log \frac{\exp(\vec{x}_k \cdot \vec{w})}{1 + \exp(\vec{x}_k \cdot \vec{w})} + (1 - y_k) \log \frac{1}{1 + \exp(\vec{x}_k \cdot \vec{w})}
= \sum_{k=1}^{N} y_k \vec{x}_k \cdot \vec{w} - \log(1 + \exp(\vec{x}_k \cdot \vec{w}))

Gradient: G_i(\vec{w}, \vec{x}) = \frac{\partial l(\vec{w}, \vec{x})}{\partial w_i} = \sum_{k=1}^{N} y_k x_{ki} - \frac{\exp(\vec{x}_k \cdot \vec{w})}{1 + \exp(\vec{x}_k \cdot \vec{w})} x_{ki}

Hessian: H_{ij}(\vec{w}, \vec{x}) = \frac{\partial^2 l(\vec{w}, \vec{x})}{\partial w_i \partial w_j} = -\sum_{k=1}^{N} \frac{\exp(\vec{x}_k \cdot \vec{w})}{(1 + \exp(\vec{x}_k \cdot \vec{w}))^2} x_{ki} x_{kj}
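A minimal single-machine sketch of these formulas (our illustration, not MLlib's code), returning the negative log-likelihood and its gradient for labels y in {0, 1}:

def lossAndGradient(xs: Array[Array[Double]], ys: Array[Double],
                    w: Array[Double]): (Double, Array[Double]) = {
  val grad = new Array[Double](w.length)
  var loss = 0.0
  for (k <- xs.indices) {
    val margin = xs(k).zip(w).map { case (xi, wi) => xi * wi }.sum // x_k · w
    val p = 1.0 / (1.0 + math.exp(-margin))                        // P(y=1 | x_k, w)
    loss -= ys(k) * margin - math.log1p(math.exp(margin))          // −[y x·w − log(1 + e^{x·w})]
    for (i <- w.indices) grad(i) -= (ys(k) - p) * xs(k)(i)         // −(y_k − p) x_{ki}
  }
  (loss, grad)
}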
Overfitting
Scaling the decision value d (equivalently, scaling the weight vector \vec{w} by z) leaves the decision boundary unchanged, but as z grows the predicted probabilities are pushed toward 0 or 1:

P(y=1 \mid \vec{x}, z\vec{w}) = \frac{\exp(z d)}{1 + \exp(z d)} = \frac{\exp(\vec{x} \cdot (z\vec{w}))}{1 + \exp(\vec{x} \cdot (z\vec{w}))}

Without a penalty on the size of the weights, maximum likelihood can drive them arbitrarily large, overfitting the training data.
Regularization
● The loss function becomes
l_{total}(\vec{w}, \vec{x}) = l_{model}(\vec{w}, \vec{x}) + l_{reg}(\vec{w})
● The loss function of the regularizer doesn't depend on the data. Common regularizers are
– L2 Regularization: l_{reg}(\vec{w}) = \lambda \sum_{i=1}^{N} w_i^2
– L1 Regularization: l_{reg}(\vec{w}) = \lambda \sum_{i=1}^{N} |w_i|
● The L1 norm is not differentiable at zero!
● The gradient and Hessian decompose the same way:
\vec{G}(\vec{w}, \vec{x})_{total} = \vec{G}(\vec{w}, \vec{x})_{model} + \vec{G}(\vec{w})_{reg}
H(\vec{w}, \vec{x})_{total} = H(\vec{w}, \vec{x})_{model} + H(\vec{w})_{reg}
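A minimal sketch (ours) of this decomposition for the L2 case, adding the regularizer's loss λΣw_i² and gradient 2λw to the model's:

def addL2(modelLoss: Double, modelGrad: Array[Double],
          w: Array[Double], lambda: Double): (Double, Array[Double]) = {
  val regLoss = lambda * w.map(wi => wi * wi).sum                                // l_reg = λ Σ w_i²
  val totalGrad = modelGrad.zip(w).map { case (g, wi) => g + 2.0 * lambda * wi } // G_total = G_model + 2λw
  (modelLoss + regLoss, totalGrad)
}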
Mini School of Spark APIs
● map(func) : Return a new distributed dataset formed by passing each element of the source through a function func.
● reduce(func) : Aggregate the elements of the dataset using a function func (which takes two arguments and returns one). The function should be commutative and associative so that it can be computed correctly in parallel. Unlike Hadoop M/R, there is no “key” involved here.
Example – No More “Word Count”!
Let's compute the mean of the numbers 3.0, 2.0, 1.0, 5.0, split across two executors: Executor 1 holds 3.0 and 2.0; Executor 2 holds 1.0 and 5.0.
map: each element becomes a (value, count) tuple — (3.0, 1) and (2.0, 1) on Executor 1; (1.0, 1) and (5.0, 1) on Executor 2.
reduce: the tuples are summed elementwise, e.g. (3.0, 1) and (2.0, 1) combine to (5.0, 2) locally, and after the shuffle the final result is (11.0, 4), i.e. a mean of 11.0 / 4 = 2.75.
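In code, the example above looks like this (a runnable sketch; the local master and app name are our choices):

import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("mean").setMaster("local[2]"))
val nums = sc.parallelize(Seq(3.0, 2.0, 1.0, 5.0), numSlices = 2)
// map: one (value, count) tuple per element.
val (sum, count) = nums.map(x => (x, 1L))
                       .reduce((a, b) => (a._1 + b._1, a._2 + b._2))
println(sum / count) // 2.75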
● The previous example using map(func) creates a new tuple object for every element, which may cause garbage collection issues.
● As with the combiner in Hadoop MapReduce, there is no guarantee that the tuples emitted from map(func) will be combined locally in the same executor by reduce(func). This may increase the traffic in the shuffle phase.
● In Hadoop, we address this with an in-mapper combiner, aggregating the results in global variables whose scope covers the entire partition. The same approach can be used in Spark.
Mini School of Spark APIs
● mapPartitions(func) : Similar to map, but runs separately on each partition (block) of the RDD, so func must be of type Iterator[T] => Iterator[U] when running on an RDD of type T.
● This API allows us to keep variables scoped to an entire partition, aggregating the result locally and efficiently.
Better Mean of Numbers Implementation
mapPartitions: each executor initializes var counts = 0 and var sum = 0.0, loops over its partition, and emits a single tuple.
Executor 1 (3.0, 2.0): after the loop, counts = 2 and sum = 5.0, so it emits (5.0, 2).
Executor 2 (1.0, 5.0): after the loop, counts = 2 and sum = 6.0, so it emits (6.0, 2).
reduce then combines the two tuples across the shuffle into the final (11.0, 4).
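Continuing the sketch above, the mapPartitions version emits a single tuple per partition:

val (sum2, count2) = nums.mapPartitions { iter =>
  var counts = 0L
  var sum = 0.0
  iter.foreach { x => sum += x; counts += 1 } // the per-partition "loop"
  Iterator((sum, counts))                     // one tuple per partition
}.reduce((a, b) => (a._1 + b._1, a._2 + b._2))
println(sum2 / count2) // 2.75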
More Idiomatic Scala Implementation
● aggregate(zeroValue: U)(seqOp: (U, T) => U, combOp: (U, U) => U): U
● zeroValue is a neutral "zero value" of type U used for initialization. It is analogous to var counts = 0; var sum = 0.0 in the previous example.
● seqOp is a function (U, T) => U, where U is the aggregator initialized as zeroValue and T is each value in the RDD. It is analogous to the mapPartitions loop in the previous example.
● combOp is a function (U, U) => U. It combines the results between different executors; the functionality is the same as reduce(func) in the previous example.
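The same mean computed with aggregate (continuing the sketch; no per-element tuples are created):

val (sum3, count3) = nums.aggregate((0.0, 0L))(
  seqOp  = { case ((s, c), x) => (s + x, c + 1) },             // fold each element into the local aggregator
  combOp = { case ((s1, c1), (s2, c2)) => (s1 + s2, c1 + c2) } // merge per-executor results
)
println(sum3 / count3) // 2.75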
Approach of Parallelization in Spark
The training loop alternates between the Spark driver JVM and the executor JVMs over time:
1) Driver: find available resources in the cluster and launch the executor JVMs. Also initialize the weights.
2) Driver: ask the executors to load the data into the executors' JVMs. Executors: try to load the data into memory; if the data is bigger than memory, it will be partially cached. (Data locality from the source is taken care of by Spark.)
3) Driver: ask the executors to compute the loss and gradient of each training sample (each row) given the current weights. Executors (Map Phase): compute the loss and gradient of each row of training data locally, given the weights obtained from the driver; either emit each result or sum them up in local aggregators.
4) Executors (Reduce Phase): sum up the losses and gradients emitted from the Map Phase. The driver gets the aggregated results. (The driver takes a rest meanwhile.)
5) Driver: if regularization is enabled, compute the loss and gradient of the regularizer in the driver, since it doesn't depend on the training data but only on the weights. Add them to the results from the executors. (The executors take a rest.)
6) Driver: plug the loss and gradient from the model and regularizer into the optimizer to get the new weights. If the changes in the weights and losses are larger than the convergence criteria, GO BACK TO 3).
7) Driver: finish the model training! (The executors take a rest.)
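A hedged end-to-end sketch of steps 3)–6) (our illustration, not MLlib's code: perRowLossAndGrad is a hypothetical per-row kernel, and a fixed gradient-descent step stands in for the optimizer):

import org.apache.spark.rdd.RDD

def train(data: RDD[(Double, Array[Double])], numFeatures: Int,
          lambda: Double, step: Double, maxIter: Int): Array[Double] = {
  var w = new Array[Double](numFeatures)
  for (_ <- 1 to maxIter) {
    val bcW = data.sparkContext.broadcast(w) // ship current weights to the executors
    // Steps 3) and 4): map and reduce phases fused into one aggregate call.
    val (loss, grad) = data.aggregate((0.0, new Array[Double](numFeatures)))(
      seqOp = { case ((l, g), (y, x)) =>
        val (li, gi) = perRowLossAndGrad(y, x, bcW.value) // hypothetical per-row kernel
        for (i <- g.indices) g(i) += gi(i)
        (l + li, g)
      },
      combOp = { case ((l1, g1), (l2, g2)) =>
        for (i <- g1.indices) g1(i) += g2(i)
        (l1 + l2, g1)
      })
    // Step 5): regularizer on the driver. Step 6): weight update
    // (loss could be checked here against the convergence criteria).
    w = w.zip(grad).map { case (wi, gi) => wi - step * (gi + 2.0 * lambda * wi) }
  }
  w
}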
Step 3) and 4)
● This is the implementation of steps 3) and 4) in MLlib before Spark 1.0.
● gradient can have implementations for Logistic Regression, Linear Regression, SVM, or any customized cost function.
● Each training sample creates a new “grad” object in gradient.compute.
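A hedged sketch of that pre-1.0 pattern (the slide's actual screenshot isn't reproduced here; data, bcW, and gradient are as in the training-loop sketch above, with gradient.compute assumed to return a (Breeze gradient vector, loss) pair):

// One (gradient, loss) pair object is created per row, then combined by reduce.
val (gradientSum, lossSum) = data.map { case (label, features) =>
  gradient.compute(features, label, bcW.value) // new "grad" object per training sample
}.reduce { case ((g1, l1), (g2, l2)) =>
  (g1 + g2, l1 + l2)
}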
Step 3) and 4) with mapPartitions
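Under the same assumptions, the mapPartitions variant accumulates into one partition-local gradient instead of creating an object per row:

val (gradientSum, lossSum) = data.mapPartitions { iter =>
  val gradAcc = breeze.linalg.DenseVector.zeros[Double](numFeatures) // partition-local aggregator
  var lossAcc = 0.0
  iter.foreach { case (label, features) =>
    val (g, l) = gradient.compute(features, label, bcW.value)
    gradAcc += g
    lossAcc += l
  }
  Iterator((gradAcc, lossAcc)) // one tuple per partition
}.reduce { case ((g1, l1), (g2, l2)) => (g1 + g2, l1 + l2) }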
Step 3) and 4) with aggregate
● This is the implementation of steps 3) and 4) in MLlib in the upcoming Spark 1.0.
● No unnecessary object creation! This helps when we're dealing with training data with many features: GC will not kick in on the executor JVMs.
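A hedged sketch of the aggregate version, again under the same assumptions — the zero value doubles as an in-place accumulator (the in-place compute overload's signature is our assumption), so no per-row objects are created:

val (gradientSum, lossSum) =
  data.aggregate((breeze.linalg.DenseVector.zeros[Double](numFeatures), 0.0))(
    seqOp = { case ((g, l), (label, features)) =>
      // Adds this row's gradient into g in place and returns the loss.
      val li = gradient.compute(features, label, bcW.value, g)
      (g, l + li)
    },
    combOp = { case ((g1, l1), (g2, l2)) => (g1 += g2, l1 + l2) }
  )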
Extension to Multinomial Logistic Regression
● In Binary Logistic Regression,
\log \frac{P(y=1 \mid \vec{x}, \vec{w})}{P(y=0 \mid \vec{x}, \vec{w})} = \vec{x} \cdot \vec{w}
● For a K-class multinomial problem with labels in [0, K-1], we can generalize this via
\log \frac{P(y=1 \mid \vec{x}, W)}{P(y=0 \mid \vec{x}, W)} = \vec{x} \cdot \vec{w}_1
\log \frac{P(y=2 \mid \vec{x}, W)}{P(y=0 \mid \vec{x}, W)} = \vec{x} \cdot \vec{w}_2
\ldots
\log \frac{P(y=K-1 \mid \vec{x}, W)}{P(y=0 \mid \vec{x}, W)} = \vec{x} \cdot \vec{w}_{K-1}
● The model weights W = (\vec{w}_1, \vec{w}_2, \ldots, \vec{w}_{K-1})^T become a (K-1) \times (N+1) matrix, where N is the number of features.
● Solving for the probabilities, with class 0 as the pivot:
P(y=0 \mid \vec{x}, W) = \frac{1}{1 + \sum_{i=1}^{K-1} \exp(\vec{x} \cdot \vec{w}_i)}
P(y=1 \mid \vec{x}, W) = \frac{\exp(\vec{x} \cdot \vec{w}_1)}{1 + \sum_{i=1}^{K-1} \exp(\vec{x} \cdot \vec{w}_i)}
\ldots
P(y=K-1 \mid \vec{x}, W) = \frac{\exp(\vec{x} \cdot \vec{w}_{K-1})}{1 + \sum_{i=1}^{K-1} \exp(\vec{x} \cdot \vec{w}_i)}
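A minimal sketch (ours) of these probabilities with class 0 as the pivot, where w holds the K−1 weight vectors:

def multinomialProbs(x: Array[Double], w: Array[Array[Double]]): Array[Double] = {
  val margins = w.map(wi => x.zip(wi).map { case (a, b) => a * b }.sum) // x·w_i for i = 1..K−1
  val expMargins = margins.map(math.exp)
  val z = 1.0 + expMargins.sum                                          // 1 + Σ exp(x·w_i)
  (1.0 / z) +: expMargins.map(_ / z)                                    // P(y=0), P(y=1), ..., P(y=K−1)
}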
Training Multinomial Logistic Regression
Define \alpha(y_k) = 1 if y_k = 0, and \alpha(y_k) = 0 if y_k \neq 0. Then

l(W, \vec{x}) = \sum_{k=1}^{N} \log P(y_k \mid \vec{x}_k, W)
= \sum_{k=1}^{N} \alpha(y_k) \log P(y=0 \mid \vec{x}_k, W) + (1 - \alpha(y_k)) \log P(y_k \mid \vec{x}_k, W)
= \sum_{k=1}^{N} \alpha(y_k) \log \frac{1}{1 + \sum_{i=1}^{K-1} \exp(\vec{x}_k \cdot \vec{w}_i)} + (1 - \alpha(y_k)) \log \frac{\exp(\vec{x}_k \cdot \vec{w}_{y_k})}{1 + \sum_{i=1}^{K-1} \exp(\vec{x}_k \cdot \vec{w}_i)}
= \sum_{k=1}^{N} (1 - \alpha(y_k)) \vec{x}_k \cdot \vec{w}_{y_k} - \log(1 + \sum_{i=1}^{K-1} \exp(\vec{x}_k \cdot \vec{w}_i))

Gradient: G_{ij}(W, \vec{x}) = \frac{\partial l(W, \vec{x})}{\partial w_{ij}} = \sum_{k=1}^{N} (1 - \alpha(y_k)) x_{kj} \delta_{i, y_k} - \frac{\exp(\vec{x}_k \cdot \vec{w}_i)}{1 + \sum_{l=1}^{K-1} \exp(\vec{x}_k \cdot \vec{w}_l)} x_{kj}

Note that the first index “i” is for classes, and the second index “j” is for features.

Hessian: H_{ij,lm}(W, \vec{x}) = \frac{\partial^2 l(W, \vec{x})}{\partial w_{ij} \partial w_{lm}}
a9a Dataset Benchmark
[Figure: Logistic Regression with the a9a dataset (11M rows, 123 features, 11% non-zero elements), run with 16 executors on a 5-node Hadoop 2.0.5-alpha cluster (Intel Xeon E3-1230v3, 32GB memory per node). X-axis: seconds (0-35); Y-axis: log-likelihood / number of samples (0.3-0.7). Series: L-BFGS Dense Features, L-BFGS Sparse Features, GD Sparse Features, GD Dense Features.]
a9a Dataset Benchmark
[Figure: Same a9a setup as above, plotted per iteration. X-axis: iterations (-1 to 15); Y-axis: log-likelihood / number of samples (0.3-0.7). Series: L-BFGS, GD.]
rcv1 Dataset Benchmark
[Figure: Logistic Regression with the rcv1 dataset (6.8M rows, 677,399 features, 0.15% non-zero elements), same cluster as above. X-axis: seconds (0-30); Y-axis: log-likelihood / number of samples (0-0.8). Series: L-BFGS Sparse Vector, GD Sparse Vector.]
news20 Dataset Benchmark
[Figure: Logistic Regression with the news20 dataset (0.14M rows, 1,355,191 features, 0.034% non-zero elements), same cluster as above. X-axis: seconds (0-80); Y-axis: log-likelihood / number of samples (0-1.2). Series: L-BFGS Sparse Vector, GD Sparse Vector.]
Alpine Demo
Conclusion
● Alpine supports the state-of-the-art Cloudera CDH5.
● Spark runs programs up to 100x faster than Hadoop M/R in memory.
● Spark makes iterative machine learning on big data as easy to express as on a single machine.
● MLlib provides many state-of-the-art machine learning implementations, e.g. K-means, linear regression, logistic regression, ALS for collaborative filtering, and Naive Bayes.
● Spark provides easy-to-use APIs in Java, Scala, and Python, plus interactive Scala and Python shells.
● Spark is an Apache project, and it's the most active big data open source project today.