Network Intelligence and Analysis Lab
Markov Chain Basic
2014.07.11
Sanghyuk Chun
Previous Chapters
• Exact Counting
• #P-Completeness
• Sampling and Counting
Remaining Chapters
• Markov Chain Basic (Today!)
• An ergodic MC has a unique stationary distribution
• Some basic concepts (Coupling, Mixing time)
• Coupling from the past
• Coupling in detail
• Ising Model
• Bounding Mixing time via Coupling
• Random spanning trees
• Path coupling framework
• MC for k-coloring a graph
In this chapter…
• Introduce Markov Chains
• Show a potential algorithmic use of Markov Chains for sampling from complex distributions
• Prove that an ergodic Markov Chain always converges to a unique stationary distribution
• Introduce coupling techniques and mixing time
Markov Chain
• For a finite state space Ω, we say a sequence of random variables (X_t) on Ω is a Markov chain if the sequence is Markovian in the following sense:
• For all t and all x_0, …, x_t, y ∈ Ω, we require
• Pr(X_{t+1} = y | X_0 = x_0, …, X_t = x_t) = Pr(X_{t+1} = y | X_t = x_t)
• The Markov property: "memoryless"
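The memoryless transition rule above can be sketched in a few lines of Python. This is a minimal illustrative example (the function name and the two-state chain are my own, not from the slides): the next state is sampled using only the current state and the transition probabilities.

```python
import random

# Simulate a Markov chain on states {0, ..., n-1} given a transition
# matrix P, where P[x][y] = Pr(X_{t+1} = y | X_t = x).
def simulate_chain(P, x0, steps, seed=0):
    rng = random.Random(seed)
    path = [x0]
    for _ in range(steps):
        x = path[-1]
        # The next state depends only on the current state (Markov property)
        y = rng.choices(range(len(P)), weights=P[x])[0]
        path.append(y)
    return path

# Two-state example: state 0 moves to 1 w.p. 0.3, state 1 moves to 0 w.p. 0.4
P = [[0.7, 0.3],
     [0.4, 0.6]]
path = simulate_chain(P, x0=0, steps=10)
```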
Example of a Markov Chain (Card Shuffling)
• Let Ω denote the set of orderings of a deck (e.g. X_1 = (1, 2, 3, …, 52)); a sequence of shuffles (X_t) on Ω is a Markov chain
• The next ordering depends only on the previous ordering: X_{t+1} depends only on X_t
• Question 1: How can we shuffle the cards uniformly?
• Question 2: Can we get a fast uniform shuffling algorithm?
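As a concrete instance of a shuffling chain, here is a sketch of the classic "top-to-random" shuffle (my choice of example; the slides do not name a specific shuffle). Each step moves the top card to a uniformly random position, so the next ordering depends only on the current one.

```python
import random

# One step of the top-to-random shuffle: a Markov chain on deck orderings.
def top_to_random_step(deck, rng):
    top, rest = deck[0], deck[1:]
    i = rng.randrange(len(deck))   # uniformly random position for the top card
    return rest[:i] + [top] + rest[i:]

rng = random.Random(42)
deck = list(range(52))
for _ in range(1000):
    deck = top_to_random_step(deck, rng)
# The state is always a permutation of the 52 cards
```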
Transition Matrix
• P(x, y) = Pr(X_{t+1} = y | X_t = x)
• Transitions are independent of the time (time-homogeneous)
• The t-step distribution is defined in the natural way:
• P^t(x, y) = P(x, y) if t = 1, and P^t(x, y) = Σ_{z∈Ω} P(x, z) P^{t-1}(z, y) if t > 1
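The t-step recursion is exactly repeated matrix multiplication. A small self-contained sketch (plain nested lists, no external libraries; names are my own):

```python
# Multiply two square matrices given as nested lists.
def matmul(A, B):
    n = len(A)
    return [[sum(A[x][z] * B[z][y] for z in range(n)) for y in range(n)]
            for x in range(n)]

# P^t(x, y) = P(x, y) for t = 1, else sum_z P(x, z) P^{t-1}(z, y)
def t_step(P, t):
    Q = P
    for _ in range(t - 1):
        Q = matmul(P, Q)
    return Q

P = [[0.7, 0.3],
     [0.4, 0.6]]
P3 = t_step(P, 3)
# Each row of P^t is a probability distribution, so every row sums to 1
```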
Stationary Distribution
• A distribution π is a stationary distribution if it is invariant with respect to the transition matrix:
• For all y ∈ Ω, π(y) = Σ_{x∈Ω} π(x) P(x, y)
• Theorem 1: For a finite ergodic Markov chain, there exists a unique stationary distribution π
• Proof?
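The invariance equation can be checked numerically by iterating it from any starting distribution. A sketch (my own helper, assuming the chain is ergodic so the iteration converges):

```python
# Approximate the stationary distribution pi by iterating
# pi_{k+1}(y) = sum_x pi_k(x) P(x, y) until it settles.
def stationary(P, iters=200):
    n = len(P)
    pi = [1.0 / n] * n                      # start from the uniform distribution
    for _ in range(iters):
        pi = [sum(pi[x] * P[x][y] for x in range(n)) for y in range(n)]
    return pi

P = [[0.7, 0.3],
     [0.4, 0.6]]
pi = stationary(P)
# For this chain the exact answer is pi = (4/7, 3/7):
# balance requires pi(0) * 0.3 = pi(1) * 0.4
```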
Ergodic Markov Chain
• A Markov chain is ergodic if there exists t such that for all x, y ∈ Ω, P^t(x, y) > 0
• Intuition: it is possible to go from every state to every state (not necessarily in one move)
• For a finite MC, the following two conditions together are equivalent to ergodicity:
• Irreducible: for all x, y ∈ Ω, there exists t = t(x, y) such that P^t(x, y) > 0
• Aperiodic: for all x ∈ Ω, gcd{t : P^t(x, x) > 0} = 1
• Since ergodic MCs eventually reach a unique stationary distribution regardless of their initial state, they are useful algorithmic tools
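For a small chain, ergodicity can be tested directly from the definition by checking whether some power P^t is entrywise positive. A brute-force sketch (my own code; the cutoff `max_t` is an arbitrary choice, not from the slides):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[x][z] * B[z][y] for z in range(n)) for y in range(n)]
            for x in range(n)]

# Ergodic iff some P^t has all entries strictly positive (checked up to max_t).
def is_ergodic(P, max_t=100):
    Q = P
    for _ in range(max_t):
        if all(q > 0 for row in Q for q in row):
            return True
        Q = matmul(Q, P)
    return False

# A chain that always swaps states is periodic (period 2), hence not ergodic
swap = [[0.0, 1.0], [1.0, 0.0]]
lazy = [[0.5, 0.5], [0.5, 0.5]]
```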
Algorithmic Usage of Ergodic Markov Chains
• Goal: we have a probability distribution from which we would like to generate random samples
• Solution via MC: if we can design an ergodic MC whose unique stationary distribution is the desired distribution, we can run the chain to obtain samples from that distribution
• Example: sampling matchings
Sampling Matchings
• For a graph G = (V, E), let Ω denote the set of matchings of G
• We define a MC on Ω whose transitions are as follows:
• Choose an edge e uniformly at random from E
• Let X' = X_t ∪ {e} if e ∉ X_t, and X' = X_t \ {e} if e ∈ X_t
• If X' ∈ Ω, then set X_{t+1} = X' with probability ½; otherwise set X_{t+1} = X_t
• The MC is aperiodic (P(M, M) ≥ 1/2 for all M ∈ Ω)
• The MC is irreducible (via the empty matching) with symmetric transition probabilities
• A symmetric transition matrix has a uniform stationary distribution
• Thus, the unique stationary distribution is uniform over all matchings of G
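One step of the matching chain is easy to sketch. This is an illustrative implementation of the transition rule above (function names and the 4-cycle example graph are mine); the ½ lazy coin is flipped first, which gives the same transition probabilities as accepting X' with probability ½.

```python
import random

# One step of the matching chain: pick a uniform edge, then with prob. 1/2
# toggle it, keeping the move only if the result is still a matching.
def matching_step(M, edges, rng):
    u, v = rng.choice(edges)
    if rng.random() < 0.5:
        return M                              # lazy move keeps the chain aperiodic
    if (u, v) in M:
        return M - {(u, v)}                   # remove the chosen edge
    if all(u not in e and v not in e for e in M):
        return M | {(u, v)}                   # add it if neither endpoint is covered
    return M                                  # X' is not a matching: stay put

rng = random.Random(0)
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]      # a 4-cycle
M = set()
for _ in range(2000):
    M = matching_step(M, edges, rng)
# M is always a matching: no vertex appears in two of its edges
```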
Proof of Theorem (Introduction)
• We will prove the theorem using the coupling technique and the Coupling Lemma
Coupling Technique
• For distributions μ, ν on a finite set Ω, a distribution ω on Ω × Ω is a coupling if for all x ∈ Ω, Σ_{y∈Ω} ω(x, y) = μ(x), and for all y ∈ Ω, Σ_{x∈Ω} ω(x, y) = ν(y)
• In other words, ω is a joint distribution whose marginal distributions are μ and ν
• The variation distance between μ and ν is defined as d_TV(μ, ν) = (1/2) Σ_{x∈Ω} |μ(x) − ν(x)|
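Both quantities are one-liners on finite distributions. A sketch (my own names, distributions given as probability vectors), which also shows the identity Σ_z min{μ(z), ν(z)} = 1 − d_TV(μ, ν) used by the Coupling Lemma:

```python
# Variation distance between two distributions on the same finite set.
def tv_distance(mu, nu):
    return 0.5 * sum(abs(m - n) for m, n in zip(mu, nu))

mu = [0.5, 0.3, 0.2]
nu = [0.2, 0.3, 0.5]
d = tv_distance(mu, nu)
# Diagonal mass of the optimal coupling: omega(z, z) = min(mu(z), nu(z)),
# so Pr(X = Y) = overlap = 1 - d_TV(mu, nu)
overlap = sum(min(m, n) for m, n in zip(mu, nu))
```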
Coupling Lemma
• (a) For any coupling ω of μ and ν, with (X, Y) distributed according to ω, Pr(X ≠ Y) ≥ d_TV(μ, ν)
• (b) There exists a coupling ω of μ and ν for which Pr(X ≠ Y) = d_TV(μ, ν)
Proof of Lemma (a)
• For any coupling ω, Pr(X = Y) = Σ_{z∈Ω} ω(z, z) ≤ Σ_{z∈Ω} min{μ(z), ν(z)} = 1 − d_TV(μ, ν)
• Therefore Pr(X ≠ Y) ≥ d_TV(μ, ν)
Proof of Lemma (b)
• For all z ∈ Ω, let ω(z, z) = min{μ(z), ν(z)}
• This choice gives Pr(X ≠ Y) = 1 − Σ_{z∈Ω} min{μ(z), ν(z)} = d_TV(μ, ν)
• We need to complete the construction of ω in a valid way
• For y, z ∈ Ω with y ≠ z, distribute the remaining mass of μ and ν over the off-diagonal entries so that the marginals come out right
• It is straightforward to verify that ω is a valid coupling
Couplings for Markov Chains
• Consider a pair of Markov chains (X_t), (Y_t) on Ω with transition matrices P_X, P_Y respectively
• Typically, in applications the MCs are identical (P_X = P_Y)
• The Markov chain (X'_t, Y'_t) on Ω × Ω is a Markovian coupling if each coordinate, viewed on its own, evolves according to its transition matrix
• For such a Markovian coupling, the Coupling Lemma bounds the variation distance: d_TV(P^t(x, ·), P^t(y, ·)) ≤ Pr(X_t ≠ Y_t | X_0 = x, Y_0 = y)
• If we choose Y_0 according to the stationary distribution π, then Y_t has distribution π for all t, and d_TV(P^t(x, ·), π) ≤ Pr(X_t ≠ Y_t)
• This shows how we can use a coupling to bound the distance from stationarity
Proof of Theorem (1/4)
• Create MCs (X_t), (Y_t), where the initial states X_0, Y_0 are arbitrary states in Ω
• Create a coupling for these chains in the following way:
• From (X_t, Y_t), choose X_{t+1} according to the transition matrix P
• If Y_t = X_t, set Y_{t+1} = X_{t+1}; otherwise choose Y_{t+1} according to P, independently of the choice of X_{t+1}
• By ergodicity, there exists t* such that for all x, y ∈ Ω, P^{t*}(x, y) ≥ ε > 0
• Therefore, for all X_0, Y_0 ∈ Ω, Pr(X_{t*} = Y_{t*}) ≥ ε
• We can argue similarly for each subsequent block of steps, t* → 2t* and so on
Proof of Theorem (2/4)
• Recall the coupling: from (X_t, Y_t), choose X_{t+1} according to P; if Y_t = X_t, set Y_{t+1} = X_{t+1}, otherwise choose Y_{t+1} according to P independently
• Once X_s = Y_s, we have X_{s'} = Y_{s'} for all s' ≥ s
• From the earlier observation, Pr(X_{t*} ≠ Y_{t*}) ≤ 1 − ε
Proof of Theorem (3/4)
• For all x, y ∈ Ω and integer k > 0, Pr(X_{kt*} ≠ Y_{kt*}) ≤ (1 − ε)^k
• Therefore, Pr(X_t ≠ Y_t) → 0 as t → ∞
• Here we use that X_t = Y_t implies X_{t'} = Y_{t'} for all t' ≥ t
• Note that the coupling of MCs we defined yields a coupling of the distributions of X_t and Y_t; hence, by the Coupling Lemma, d_TV(P^t(x, ·), P^t(y, ·)) ≤ Pr(X_t ≠ Y_t) → 0
• This proves that from any pair of initial states we reach the same distribution
Proof of Theorem (4/4)
• The previous result proves there is a limiting distribution σ
• Question: is σ a stationary distribution? That is, does it satisfy, for all y ∈ Ω, σ(y) = Σ_{x∈Ω} σ(x) P(x, y)?
Markov Chains for Algorithmic Purposes: Mixing Time
• Convergence itself is guaranteed if the MC is ergodic
• However, ergodicity alone gives no indication of the convergence rate
• We define the mixing time τ_mix(ε) as the time until the chain is within variation distance ε of π from the worst initial state:
• τ_mix(ε) = max_{X_0∈Ω} min{t : d_TV(P^t(X_0, ·), π) ≤ ε}
• To get efficient sampling algorithms (e.g. the matching chain), we hope that the mixing time is polynomial in the input size
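For a chain small enough to write down explicitly, τ_mix(ε) can be computed directly from the definition: take powers of P and find the first t at which the worst-row variation distance to π drops below ε. A sketch with my own helper names:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[x][z] * B[z][y] for z in range(n)) for y in range(n)]
            for x in range(n)]

# tau_mix(eps): first t with max over initial states x of
# d_TV(P^t(x, .), pi) <= eps.
def mixing_time(P, pi, eps, max_t=10_000):
    n = len(P)
    Q = P                                   # Q = P^t as t increases
    for t in range(1, max_t + 1):
        worst = max(0.5 * sum(abs(Q[x][y] - pi[y]) for y in range(n))
                    for x in range(n))
        if worst <= eps:
            return t
        Q = matmul(Q, P)
    return None

P = [[0.7, 0.3],
     [0.4, 0.6]]
pi = [4 / 7, 3 / 7]                         # stationary distribution of this chain
t = mixing_time(P, pi, eps=0.01)
```

For this two-state chain the distance shrinks by the factor 0.3 each step, so the mixing time is tiny; for chains like the matching chain the point is whether it stays polynomial in the input size.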
 
Slack like a pro: strategies for 10x engineering teams
Slack like a pro: strategies for 10x engineering teamsSlack like a pro: strategies for 10x engineering teams
Slack like a pro: strategies for 10x engineering teams
Nacho Cougil
 

Markov Chain Basic

  • 1. Network Intelligence and Analysis Lab — Markov Chain Basic, 2014.07.11, Sanghyuk Chun
  • 2. Previous Chapters: • Exact Counting • #P-Completeness • Sampling and Counting
  • 3. Remaining Chapters: • Markov Chain Basic (today!) • An ergodic MC has a unique stationary distribution • Some basic concepts (coupling, mixing time) • Coupling from the past • Coupling in detail • Ising Model • Bounding mixing time via coupling • Random spanning trees • Path coupling framework • MC for k-coloring a graph
  • 4. In this chapter: • Introduce Markov chains • Show a potential algorithmic use of Markov chains for sampling from complex distributions • Prove that an ergodic Markov chain always converges to a unique stationary distribution • Introduce coupling techniques and mixing time
  • 5. Markov Chain: For a finite state space Ω, we say a sequence of random variables (X_t) on Ω is a Markov chain if the sequence is Markovian in the following sense: • For all t and all x_0, ..., x_t, y ∈ Ω, we require Pr(X_{t+1} = y | X_0 = x_0, ..., X_t = x_t) = Pr(X_{t+1} = y | X_t = x_t) • The Markov property: "memoryless"
  • 6. Example of a Markov Chain (Card Shuffling): For a finite state space Ω, a sequence of random variables (X_t) on Ω is a Markov chain if the sequence is Markovian • Let Ω denote the set of shuffle states (e.g. X_1 = (1, 2, 3, ..., 52)) • The next shuffle state depends only on the previous shuffle state, i.e. X_{t+1} depends only on X_t • Question 1: How can we shuffle the cards uniformly? • Question 2: Can we get a fast uniform shuffling algorithm?
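One concrete shuffling chain (chosen here purely for illustration; the slides do not fix a particular one) is the random-transposition chain: each step swaps the cards at two uniformly chosen positions. A minimal sketch:

```python
import random

def transposition_shuffle_step(deck, rng=random):
    """One step of the random-transposition chain: pick two positions
    uniformly at random (possibly equal) and swap the cards there.
    The next deck depends only on the current deck (the Markov property)."""
    i = rng.randrange(len(deck))
    j = rng.randrange(len(deck))
    deck[i], deck[j] = deck[j], deck[i]
    return deck

def shuffle(n_cards=52, steps=500, seed=7):
    """Run the chain for a fixed number of steps from the sorted deck."""
    rng = random.Random(seed)
    deck = list(range(1, n_cards + 1))
    for _ in range(steps):
        transposition_shuffle_step(deck, rng)
    return deck

deck = shuffle()
```

This chain is ergodic on the set of 52! permutations, so running it long enough answers Question 1; Question 2 is exactly the mixing-time question treated at the end of the chapter.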
  • 7. Transition Matrix: • P(x, y) = Pr(X_{t+1} = y | X_t = x) • Transitions are independent of the time (time-homogeneous)
  • 8. Transition Matrix: • P(x, y) = Pr(X_{t+1} = y | X_t = x) • Transitions are independent of the time (time-homogeneous) • The t-step distribution is defined in the natural way: P^t(x, y) = P(x, y) if t = 1, and P^t(x, y) = Σ_{z∈Ω} P(x, z) P^{t-1}(z, y) if t > 1
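The t-step recursion above is just repeated matrix multiplication. A sketch on a small hypothetical 2-state chain (the entries are made up for illustration):

```python
import numpy as np

# Hypothetical 2-state transition matrix; each row sums to 1.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

def t_step(P, t):
    """P^t(x, y): probability of being at y after t steps from x,
    via the recursion P^t = P @ P^(t-1) from the slide."""
    Pt = P.copy()
    for _ in range(t - 1):
        Pt = Pt @ P
    return Pt

P2 = t_step(P, 2)
# Each row of P^t is still a probability distribution.
```

For example, P^2(0, 0) = 0.9·0.9 + 0.1·0.4 = 0.85, summing over the intermediate state z as in the formula.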
  • 9. Stationary Distribution: • A distribution π is a stationary distribution if it is invariant with respect to the transition matrix: for all y ∈ Ω, π(y) = Σ_{x∈Ω} π(x) P(x, y) • Theorem 1: For a finite ergodic Markov chain, there exists a unique stationary distribution π • Proof?
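For a small chain, the invariance condition π = πP together with Σ π(x) = 1 is a linear system one can solve directly. A sketch, reusing the hypothetical 2-state matrix from above:

```python
import numpy as np

# Same hypothetical 2-state chain as before (illustrative values only).
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# Stack the equations (P^T - I) pi = 0 with the normalization sum(pi) = 1
# and solve in the least-squares sense.
A = np.vstack([P.T - np.eye(2), np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
# pi satisfies pi @ P == pi, i.e. it is invariant under the chain.
```

For this particular matrix the solution is π = (0.8, 0.2), which one can check satisfies π(y) = Σ_x π(x) P(x, y) entry by entry.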
  • 10. Ergodic Markov Chain: • A Markov chain is ergodic if there exists t such that for all x, y ∈ Ω, P^t(x, y) > 0 • It is possible to go from every state to every state (not necessarily in one move) • For a finite MC, the following conditions together are equivalent to ergodicity: • Irreducible: for all x, y ∈ Ω, there exists t = t(x, y) s.t. P^t(x, y) > 0 • Aperiodic: for all x ∈ Ω, gcd{t : P^t(x, x) > 0} = 1 • Since ergodic MCs eventually reach a unique stationary distribution regardless of their initial state, they are useful algorithmic tools
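The definition suggests a direct finite check: a finite chain is ergodic iff some power P^t is entrywise positive (for an n-state chain, checking t up to about n² suffices by a standard bound; the exact cutoff is an assumption of this sketch, not from the slides):

```python
import numpy as np

def is_ergodic(P, max_t=None):
    """Return True iff some power P^t has all entries > 0,
    i.e. every state reaches every state at a common time t."""
    n = P.shape[0]
    if max_t is None:
        max_t = n * n  # assumed sufficient horizon for an n-state chain
    Pt = np.eye(n)
    for _ in range(max_t):
        Pt = Pt @ P
        if (Pt > 0).all():
            return True
    return False
```

The 2-state chain used earlier is ergodic, while the deterministic two-cycle [[0, 1], [1, 0]] is irreducible but periodic (period 2), hence not ergodic.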
  • 11. Algorithmic Usage of Ergodic Markov Chains: • Goal: we have a probability distribution we'd like to generate random samples from • Solution via MC: if we can design an ergodic MC whose unique stationary distribution is the desired distribution, we run the chain and obtain samples from that distribution • Example: sampling matchings
  • 12. Sampling Matchings: • For a graph G = (V, E), let Ω denote the set of matchings of G • We define a MC on Ω whose transitions are as follows: • Choose an edge e uniformly at random from E • Let X' = X_t ∪ {e} if e ∉ X_t, and X' = X_t \ {e} if e ∈ X_t • If X' ∈ Ω, then set X_{t+1} = X' with probability 1/2; otherwise set X_{t+1} = X_t • The MC is aperiodic (P(M, M) ≥ 1/2 for all M ∈ Ω) • The MC is irreducible (via the empty set) with symmetric transition probabilities • A symmetric transition matrix has a uniform stationary distribution • Thus, the unique stationary distribution is uniform over all matchings of G
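The transition rule above translates almost line for line into code. A sketch (the example graph, a path on 3 vertices, is a made-up illustration):

```python
import random

def matching_chain_step(matching, edges, rng=random):
    """One transition of the matching chain: pick e uniformly from E,
    toggle its membership, accept with probability 1/2 if the result
    is still a matching, otherwise stay put."""
    e = rng.choice(edges)
    proposal = matching - {e} if e in matching else matching | {e}
    # Removing an edge always yields a matching; adding e is valid
    # only if neither endpoint is already covered.
    u, v = e
    covered = {x for (a, b) in matching for x in (a, b)}
    valid = e in matching or (u not in covered and v not in covered)
    if valid and rng.random() < 0.5:
        return proposal
    return matching

# Hypothetical example: a path on 3 vertices with edges (0,1) and (1,2).
edges = [(0, 1), (1, 2)]
rng = random.Random(1)
X = set()
for _ in range(100):
    X = matching_chain_step(X, edges, rng)
```

Every state visited is a matching, and by the symmetry argument on the slide the chain converges to the uniform distribution over the three matchings {}, {(0,1)}, {(1,2)}.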
  • 13. Proof of Theorem (Introduction): We will prove the theorem using the coupling technique and the Coupling Lemma
  • 14. Coupling Technique: • For distributions μ, ν on a finite set Ω, a distribution ω on Ω × Ω is a coupling if Σ_{z∈Ω} ω(y, z) = μ(y) for all y and Σ_{y∈Ω} ω(y, z) = ν(z) for all z • In other words, ω is a joint distribution whose marginal distributions are the appropriate distributions • The variation distance between μ and ν is defined as d_TV(μ, ν) = (1/2) Σ_{z∈Ω} |μ(z) − ν(z)|
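The variation-distance formula is easy to compute directly for distributions given as dictionaries; a minimal sketch with made-up example distributions:

```python
def total_variation(mu, nu):
    """d_TV(mu, nu) = (1/2) * sum over z of |mu(z) - nu(z)|,
    for finite distributions given as {state: probability} dicts."""
    states = set(mu) | set(nu)
    return 0.5 * sum(abs(mu.get(z, 0.0) - nu.get(z, 0.0)) for z in states)

mu = {'a': 0.5, 'b': 0.5}
nu = {'a': 0.8, 'b': 0.2}
d = total_variation(mu, nu)  # 0.5 * (0.3 + 0.3) = 0.3
```

This is the quantity the Coupling Lemma bounds by Pr(X ≠ Y), and the one the mixing-time definition at the end of the chapter is stated in.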
  • 15. Coupling Lemma: • (a) For any coupling ω of μ, ν with (X, Y) ~ ω, d_TV(μ, ν) ≤ Pr(X ≠ Y) • (b) There exists a coupling of μ, ν for which equality holds
  • 16. Proof of Lemma (a): For any coupling ω with (X, Y) ~ ω and any A ⊆ Ω, Pr(X ≠ Y) ≥ Pr(X ∈ A, Y ∉ A) ≥ μ(A) − ν(A); maximizing over A gives Pr(X ≠ Y) ≥ d_TV(μ, ν)
  • 17. Proof of Lemma (b): • For all z ∈ Ω, let ω(z, z) = min{μ(z), ν(z)}; then d_TV(μ, ν) = Pr(X ≠ Y) • We need to complete the construction of ω in a valid way • For y, z ∈ Ω, y ≠ z, let ω(y, z) = (μ(y) − ω(y, y))(ν(z) − ω(z, z)) / d_TV(μ, ν) • It is straightforward to verify that ω is a valid coupling
  • 18. Couplings for Markov Chains: • Consider a pair of Markov chains (X_t), (Y_t) on Ω with transition matrices P_X, P_Y respectively • Typically the MCs are identical in applications (P_X = P_Y) • The Markov chain (X'_t, Y'_t) on Ω × Ω is a Markovian coupling if Pr(X'_{t+1} = x' | X'_t = x, Y'_t = y) = P_X(x, x') and Pr(Y'_{t+1} = y' | X'_t = x, Y'_t = y) = P_Y(y, y') • For such a Markovian coupling, the Coupling Lemma gives d_TV(P_X^t(x_0, ·), P_Y^t(y_0, ·)) ≤ Pr(X'_t ≠ Y'_t) • If we choose Y_0 according to the stationary distribution π, then d_TV(P^t(x_0, ·), π) ≤ Pr(X'_t ≠ Y'_t) • This shows how we can use a coupling to bound the distance from stationarity
  • 19. Proof of Theorem (1/4): • Create MCs (X_t), (Y_t), where the initial states X_0, Y_0 are arbitrary states in Ω • Create a coupling for these chains in the following way: • From (X_t, Y_t), choose X_{t+1} according to the transition matrix P • If Y_t = X_t, set Y_{t+1} = X_{t+1}; otherwise choose Y_{t+1} according to P, independently of the choice for X_{t+1} • By ergodicity, there exists t* s.t. for all x, y ∈ Ω, P^{t*}(x, y) ≥ ε > 0 • Therefore, for all X_0, Y_0 ∈ Ω, Pr(X_{t*} ≠ Y_{t*}) ≤ 1 − ε • The same bound applies over the step t* → 2t*
  • 20. Proof of Theorem (2/4): • Create a coupling for these chains in the following way: • From (X_t, Y_t), choose X_{t+1} according to the transition matrix P • If Y_t = X_t, set Y_{t+1} = X_{t+1}; otherwise choose Y_{t+1} according to P, independently of the choice for X_{t+1} • Once X_s = Y_s, we have X_{s'} = Y_{s'} for all s' ≥ s • From the earlier observation, Pr(X_{2t*} ≠ Y_{2t*}) ≤ (1 − ε)^2
  • 21. Proof of Theorem (3/4): • For integer k > 0, Pr(X_{kt*} ≠ Y_{kt*}) ≤ (1 − ε)^k for all x, y ∈ Ω • Therefore, Pr(X_t ≠ Y_t) → 0 as t → ∞ • Since X_t = Y_t implies X_{t'} = Y_{t'} for all t' ≥ t, the probability of disagreement is non-increasing in t • Note that the coupling of MCs we defined gives a coupling of X_t, Y_t; hence by the Coupling Lemma, d_TV(P^t(x, ·), P^t(y, ·)) ≤ Pr(X_t ≠ Y_t) → 0 • This proves that from any initial points we reach the same distribution
  • 22. Proof of Theorem (4/4): • From the previous result, we have proved there is a limiting distribution σ • Question: is σ a stationary distribution? That is, does it satisfy: for all y ∈ Ω, σ(y) = Σ_{x∈Ω} σ(x) P(x, y)?
  • 23. Markov Chains for Algorithmic Purposes: Mixing Time • Convergence itself is guaranteed if the MC is ergodic • However, this gives no indication of the convergence rate • We define the mixing time τ_mix(ε) as the time until the chain is within variation distance ε of π from the worst initial state: τ_mix(ε) = max_{X_0∈Ω} min{t : d_TV(P^t(X_0, ·), π) ≤ ε} • To get efficient sampling algorithms (e.g. the matching chain), we hope that the mixing time is polynomial in the input size
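For a small chain with a known transition matrix, τ_mix(ε) can be computed directly from the definition by powering P. A sketch, reusing the hypothetical 2-state chain from the earlier examples (whose stationary distribution is π = (0.8, 0.2)):

```python
import numpy as np

def mixing_time(P, pi, eps=0.01, max_t=10_000):
    """Smallest t with max over x of d_TV(P^t(x, .), pi) <= eps,
    computed by brute-force matrix powering (feasible only for tiny chains)."""
    n = P.shape[0]
    Pt = np.eye(n)
    for t in range(1, max_t + 1):
        Pt = Pt @ P
        # Row x of Pt is P^t(x, .); take the worst starting state.
        worst = 0.5 * np.abs(Pt - pi).sum(axis=1).max()
        if worst <= eps:
            return t
    return None

P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
pi = np.array([0.8, 0.2])
tau = mixing_time(P, pi, eps=0.01)
```

For this chain the distance from the worst start decays as 0.8 · 0.5^t (the second eigenvalue is 0.5), so the chain mixes in a handful of steps; for the exponentially large state spaces in the sampling applications above, one needs coupling arguments rather than matrix powering.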