A compound Poisson process is a continuous-time stochastic process with jumps. The jumps arrive randomly according to a Poisson process, and the size of each jump is also random, with a specified probability distribution. A compound Poisson process, parameterised by a rate λ > 0 and a jump size distribution G, is a process { Y(t) : t ≥ 0 } given by
{\displaystyle Y(t)=\sum _{i=1}^{N(t)}D_{i}}
where { N(t) : t ≥ 0 } is a Poisson process with rate λ, and { D_i : i ≥ 1 } are independent and identically distributed random variables with distribution function G, which are also independent of { N(t) : t ≥ 0 }.
When the jump sizes D_i are non-negative integer-valued random variables, this compound Poisson process is known as a stuttering Poisson process, which has the feature that two or more events can occur in a very short time, i.e. arrivals come in groups or batches (Huiming Zhang, Lili Chu, Yu Diao, "Some Properties of the Generalized Stuttering Poisson Distribution and its Applications", arXiv:1207.6539v1, 2012).

Properties of the compound Poisson process

Using conditional expectation (the law of total expectation), the expected value of a compound Poisson process can be calculated as

{\displaystyle E(Y(t))=E(E(Y(t)|N(t)))=E(N(t)E(D))=E(N(t))E(D)=\lambda tE(D).}
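The definition translates directly into a sampler, which also lets the identity E(Y(t)) = λtE(D) be checked numerically. The sketch below is illustrative: the Knuth multiplication method for Poisson sampling and the exponential jump distribution with the parameters shown are arbitrary choices, not taken from the text.

```python
import math
import random

def sample_Y(lam, t, jump, rng):
    """One draw of Y(t): draw N(t) ~ Poisson(lam*t) using Knuth's
    multiplication method, then sum N(t) i.i.d. jump sizes."""
    limit = math.exp(-lam * t)
    n, prod = 0, rng.random()
    while prod > limit:
        n += 1
        prod *= rng.random()
    return sum(jump(rng) for _ in range(n))

# Illustrative parameters: lam = 3, t = 2, D ~ Exponential(rate 2),
# so E(D) = 0.5 and the theoretical mean is lam*t*E(D) = 3.0.
rng = random.Random(0)
draws = [sample_Y(3.0, 2.0, lambda r: r.expovariate(2.0), rng)
         for _ in range(50_000)]
mean_est = sum(draws) / len(draws)  # Monte Carlo estimate of E(Y(t))
```

With 50,000 draws the Monte Carlo standard error is well below 0.01, so the estimate should agree with the theoretical value 3.0 to within a few hundredths.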
Making similar use of the law of total variance , the variance can be calculated as:
{\displaystyle {\begin{aligned}\operatorname {var} (Y(t))&=E(\operatorname {var} (Y(t)|N(t)))+\operatorname {var} (E(Y(t)|N(t)))\\&=E(N(t)\operatorname {var} (D))+\operatorname {var} (N(t)E(D))\\&=\operatorname {var} (D)E(N(t))+E(D)^{2}\operatorname {var} (N(t))\\&=\operatorname {var} (D)\lambda t+E(D)^{2}\lambda t\\&=\lambda t(\operatorname {var} (D)+E(D)^{2})\\&=\lambda tE(D^{2}).\end{aligned}}}
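Both moment formulas can be sanity-checked together by simulation. In this sketch (the rate, horizon, and two-point jump distribution are illustrative choices), the sample mean and variance of simulated values of Y(t) are compared against λtE(D) and λtE(D²):

```python
import math
import random

rng = random.Random(42)
lam, t = 2.0, 1.0  # illustrative rate and time horizon

def sample_Y():
    # N(t) ~ Poisson(lam*t), sampled by Knuth's multiplication method
    limit = math.exp(-lam * t)
    n, prod = 0, rng.random()
    while prod > limit:
        n += 1
        prod *= rng.random()
    # jumps uniform on {1, 2}: E(D) = 1.5, E(D^2) = 2.5
    return sum(rng.choice((1, 2)) for _ in range(n))

ys = [sample_Y() for _ in range(100_000)]
m = sum(ys) / len(ys)                      # sample mean
v = sum((y - m) ** 2 for y in ys) / len(ys)  # sample variance
# Theory: E(Y(t)) = lam*t*E(D) = 3.0 and var(Y(t)) = lam*t*E(D^2) = 5.0
```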
Lastly, using the law of total probability to condition on N(t),

{\displaystyle \,\Pr(Y(t)=i)=\sum _{n}\Pr(Y(t)=i|N(t)=n)\Pr(N(t)=n),}

the moment generating function of Y(t) can be computed as follows:
{\displaystyle {\begin{aligned}E(e^{sY})&=\sum _{i}e^{si}\Pr(Y(t)=i)\\&=\sum _{i}e^{si}\sum _{n}\Pr(Y(t)=i|N(t)=n)\Pr(N(t)=n)\\&=\sum _{n}\Pr(N(t)=n)\sum _{i}e^{si}\Pr(Y(t)=i|N(t)=n)\\&=\sum _{n}\Pr(N(t)=n)\sum _{i}e^{si}\Pr(D_{1}+D_{2}+\cdots +D_{n}=i)\\&=\sum _{n}\Pr(N(t)=n)M_{D}(s)^{n}\\&=\sum _{n}\Pr(N(t)=n)e^{n\ln(M_{D}(s))}\\&=M_{N(t)}(\ln(M_{D}(s)))\\&=e^{\lambda t\left(M_{D}(s)-1\right)}.\end{aligned}}}
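The closed form E(e^{sY}) = e^{λt(M_D(s)−1)} can be verified against a direct truncated summation of the mixture Σ_n Pr(N(t)=n) M_D(s)^n. The Bernoulli jump distribution and parameter values below are arbitrary choices for illustration:

```python
import math

def mgf_direct(s, lam, t, mD, terms=200):
    """Sum_n Pr(N(t)=n) * M_D(s)^n, truncating the Poisson series."""
    mu = lam * t
    p = math.exp(-mu)  # Pr(N(t) = 0)
    total = 0.0
    for n in range(terms):
        total += p * mD(s) ** n
        p *= mu / (n + 1)  # advance to Pr(N(t) = n + 1)
    return total

# D ~ Bernoulli(0.3), so M_D(s) = 0.7 + 0.3*e^s (illustrative choice)
mD = lambda s: 0.7 + 0.3 * math.exp(s)
lam, t, s = 2.0, 1.5, 0.4
direct = mgf_direct(s, lam, t, mD)
closed = math.exp(lam * t * (mD(s) - 1.0))  # closed-form MGF
```

The two values agree to floating-point precision, since the truncated Poisson tail beyond 200 terms is negligible at these parameters.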
Exponentiation of measures
Let N , Y , and D be as above. Let μ be the probability measure according to which D is distributed, i.e.
{\displaystyle \mu (A)=\Pr(D\in A).\,}
Let δ₀ be the trivial probability distribution putting all of the mass at zero. Then the probability distribution of Y(t) is the measure
{\displaystyle \exp(\lambda t(\mu -\delta _{0}))\,}
where the exponential exp(ν ) of a finite measure ν on Borel subsets of the real line is defined by
{\displaystyle \exp(\nu )=\sum _{n=0}^{\infty }{\nu ^{*n} \over n!}}
and
{\displaystyle \nu ^{*n}=\underbrace {\nu *\cdots *\nu } _{n{\text{ factors}}}}
is a convolution of measures, and the series converges weakly.
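When μ is supported on the non-negative integers, this exponential of measures can be computed directly: since δ₀ is the identity for convolution, exp(λt(μ − δ₀)) = e^{−λt} Σ_n (λt)^n μ^{*n}/n!, which is exactly the Poisson mixture of n-fold convolutions. A minimal sketch, with an illustrative two-point jump pmf and parameters:

```python
import math

def convolve(p, q):
    """Convolution of two pmfs on {0, 1, 2, ...} given as lists."""
    r = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

def compound_poisson_pmf(lam, t, mu, terms=60):
    """pmf of Y(t), i.e. exp(lam*t*(mu - delta_0)), computed as the
    truncated series e^{-lam*t} * sum_n (lam*t)^n / n! * mu^{*n}."""
    conv = [1.0]                  # mu^{*0} = delta_0
    weight = math.exp(-lam * t)   # e^{-lam*t} (lam*t)^n / n! at n = 0
    total = [0.0]
    for n in range(terms):
        if len(total) < len(conv):
            total.extend([0.0] * (len(conv) - len(total)))
        for k, c in enumerate(conv):
            total[k] += weight * c
        conv = convolve(conv, mu)          # mu^{*(n+1)}
        weight *= lam * t / (n + 1)        # next Poisson weight
    return total

# Illustrative: D uniform on {1, 2}, lam = 2, t = 1, so E(Y(t)) = 3.
pmf = compound_poisson_pmf(2.0, 1.0, [0.0, 0.5, 0.5])
mass = sum(pmf)                               # total mass, ~ 1
mean = sum(k * p for k, p in enumerate(pmf))  # ~ lam*t*E(D) = 3
```

The computed pmf sums to 1 and reproduces the mean λtE(D) from the properties section, which ties the measure-theoretic formula back to the earlier moment calculations.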
See also