Exponentially Distributed Time
Continuous-Time Markov Chains
Sheldon M. Ross , in Introduction to Probability Models (Tenth Edition), 2010
Example 6.16
Consider a set of n components along with a single repairman. Suppose that component i functions for an exponentially distributed time with rate λi and then fails. The time it then takes to repair component i is exponential with rate μi, i = 1,…,n. Suppose that when there is more than one failed component the repairman always works on the most recent failure. For instance, if there are at present two failed components (say, components 1 and 2, of which 1 has failed most recently), then the repairman will be working on component 1. However, if component 3 should fail before 1's repair is completed, then the repairman would stop working on component 1 and switch to component 3 (that is, a newly failed component preempts service).
To analyze the preceding as a continuous-time Markov chain, the state must represent the set of failed components in the order of failure. That is, the state will be $i_1, i_2, \ldots, i_k$ if $i_1, i_2, \ldots, i_k$ are the $k$ failed components (all the other $n-k$ being functional), with $i_1$ having been the most recent failure (and thus the one presently being repaired), $i_2$ the second most recent, and so on. Because there are $k!$ possible orderings for a fixed set of $k$ failed components and $\binom{n}{k}$ choices of that set, it follows that there are
$$\sum_{k=0}^{n} \binom{n}{k}\,k! = \sum_{k=0}^{n} \frac{n!}{(n-k)!}$$
possible states.
The balance equations for the limiting probabilities are as follows:
$$(6.23)\qquad \Big(\mu_{i_1} + \sum_{i \ne i_1,\ldots,i_k} \lambda_i\Big) P(i_1,\ldots,i_k) = \sum_{i \ne i_1,\ldots,i_k} \mu_i\, P(i, i_1, \ldots, i_k) + \lambda_{i_1} P(i_2, \ldots, i_k)$$
$$\sum_{i} \lambda_i\, P(\phi) = \sum_{i} \mu_i\, P(i)$$
where ϕ is the state in which all components are working. The preceding equations follow because the state $i_1, \ldots, i_k$ can be left either by a failure of any of the additional components or by a repair completion of component $i_1$. Also, that state can be entered either by a repair completion of component $i$ when the state is $i, i_1, \ldots, i_k$ or by a failure of component $i_1$ when the state is $i_2, \ldots, i_k$.
However, if we take
$$(6.24)\qquad P(i_1, \ldots, i_k) = \frac{\lambda_{i_1} \cdots \lambda_{i_k}}{\mu_{i_1} \cdots \mu_{i_k}}\, P(\phi)$$
then it is easily seen that Equations (6.23) are satisfied. Hence, by uniqueness these must be the limiting probabilities, with $P(\phi)$ determined to make their sum equal 1. That is,
$$P(\phi) = \left[1 + \sum_{k=1}^{n} \sum_{i_1 \ne \cdots \ne i_k} \prod_{j=1}^{k} \frac{\lambda_{i_j}}{\mu_{i_j}}\right]^{-1}$$
As an illustration, suppose $n = 2$, so that there are five states: ϕ, 1, 2, 12, 21. Then from the preceding we would have
$$P(1) = \frac{\lambda_1}{\mu_1} P(\phi), \quad P(2) = \frac{\lambda_2}{\mu_2} P(\phi), \quad P(12) = P(21) = \frac{\lambda_1 \lambda_2}{\mu_1 \mu_2} P(\phi)$$
$$P(\phi) = \left[1 + \frac{\lambda_1}{\mu_1} + \frac{\lambda_2}{\mu_2} + \frac{2\lambda_1 \lambda_2}{\mu_1 \mu_2}\right]^{-1}$$
It is interesting to note, using Equation (6.24), that given the set of failed components, each of the possible orderings of these components is equally likely.
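The product form (6.24) and the equal-likelihood observation are easy to check numerically. The sketch below (illustrative and not from the text; the function names are my own) builds the product-form probabilities for arbitrary rates and verifies that they satisfy the balance equations (6.23):

```python
from itertools import permutations

def limiting_probs(lam, mu):
    """Product-form limiting probabilities (6.24); states are ordered failure tuples."""
    n = len(lam)
    states = [()] + [s for k in range(1, n + 1) for s in permutations(range(n), k)]
    w = {}
    for s in states:
        prod = 1.0
        for i in s:
            prod *= lam[i] / mu[i]
        w[s] = prod
    total = sum(w.values())
    return {s: v / total for s, v in w.items()}

def check_balance(lam, mu, P, tol=1e-9):
    """Verify the balance equations (6.23): rate out of each state = rate in."""
    n = len(lam)
    for s in P:
        up = [i for i in range(n) if i not in s]        # working components
        rate_out = (sum(lam[i] for i in up) + (mu[s[0]] if s else 0.0)) * P[s]
        rate_in = sum(mu[i] * P[(i,) + s] for i in up)  # repair completion of i
        if s:
            rate_in += lam[s[0]] * P[s[1:]]             # failure of i_1
        assert abs(rate_out - rate_in) < tol
    return True

P = limiting_probs([1.0, 2.0], [3.0, 4.0])
check_balance([1.0, 2.0], [3.0, 4.0], P)
print(abs(P[(0, 1)] - P[(1, 0)]) < 1e-9)   # the two orderings are equally likely
```

Since each ordering of a fixed failed set carries the same product of λ/μ factors, the orderings come out equally likely, as noted above.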
URL: https://www.sciencedirect.com/science/article/pii/B9780123756862000054
SPECIAL RANDOM VARIABLES
Sheldon M. Ross , in Introduction to Probability and Statistics for Engineers and Scientists (Fourth Edition), 2009
EXAMPLE 5.6b
A crew of workers has 3 interchangeable machines, of which 2 must be working for the crew to do its job. When in use, each machine will function for an exponentially distributed time having parameter λ before breaking down. The workers decide initially to use machines A and B and keep machine C in reserve to replace whichever of A or B breaks down first. They will then be able to continue working until one of the remaining machines breaks down. When the crew is forced to stop working because only one of the machines has not yet broken down, what is the probability that the still operable machine is machine C?
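By the memoryless property, when the first breakdown occurs and C is swapped in, the surviving original machine is probabilistically as good as new, so C and it are symmetric; the answer is 1/2. A short Monte Carlo sketch (illustrative only, with rate λ = 1) confirms this:

```python
import random

def last_machine_is_C(lam=1.0, rng=random):
    """One trial: A and B run first; C replaces whichever fails first."""
    a, b = rng.expovariate(lam), rng.expovariate(lam)
    survivor_left = abs(a - b)        # remaining life of the surviving original
    c = rng.expovariate(lam)          # C starts fresh at the first breakdown
    return c > survivor_left          # True if C outlives the remaining original

random.seed(1)
trials = 200_000
est = sum(last_machine_is_C() for _ in range(trials)) / trials
print(abs(est - 0.5) < 0.01)          # memorylessness makes the answer exactly 1/2
```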
URL: https://www.sciencedirect.com/science/article/pii/B9780123704832000102
Renewal Theory and Its Applications
Sheldon Ross , in Introduction to Probability Models (Eleventh Edition), 2014
7.8 Computing the Renewal Function
The difficulty with attempting to use the identity
$$m(t) = \sum_{n=1}^{\infty} F_n(t)$$
to compute the renewal function is that the determination of $F_n(t) = P(X_1 + \cdots + X_n \le t)$ requires the computation of an $n$-dimensional integral. In the following, we present an effective algorithm that requires as inputs only one-dimensional integrals.
Let $Y_1$ be an exponential random variable having rate $\lambda$, and suppose that $Y_1$ is independent of the renewal process $\{N(t), t \ge 0\}$. We start by determining $E[N(Y_1)]$, the expected number of renewals by the random time $Y_1$. To do so, we first condition on $X_1$, the time of the first renewal. This yields
$$(7.30)\qquad E[N(Y_1)] = \int_0^{\infty} E[N(Y_1) \mid X_1 = x]\, f(x)\, dx$$
where $f$ is the interarrival density. To determine $E[N(Y_1) \mid X_1 = x]$, we now condition on whether or not $Y_1$ exceeds $x$. Now, if $Y_1 < x$, then as the first renewal occurs at time $x$, it follows that the number of renewals by time $Y_1$ is equal to 0. On the other hand, if we are given that $Y_1 > x$, then the number of renewals by time $Y_1$ will equal 1 (the one at $x$) plus the number of additional renewals between $x$ and $Y_1$. But by the memoryless property of exponential random variables, it follows that, given that $Y_1 > x$, the amount by which it exceeds $x$ is also exponential with rate $\lambda$, and so given that $Y_1 > x$ the number of renewals between $x$ and $Y_1$ will have the same distribution as $N(Y_1)$. Hence,
$$E[N(Y_1) \mid X_1 = x] = \big(1 + E[N(Y_1)]\big)\, P(Y_1 > x) = \big(1 + E[N(Y_1)]\big)\, e^{-\lambda x}$$
Substituting this into Equation (7.30) gives
$$E[N(Y_1)] = \big(1 + E[N(Y_1)]\big) \int_0^{\infty} e^{-\lambda x} f(x)\, dx = \big(1 + E[N(Y_1)]\big)\, E[e^{-\lambda X}]$$
or
$$(7.31)\qquad E[N(Y_1)] = \frac{E[e^{-\lambda X}]}{1 - E[e^{-\lambda X}]}$$
where $X$ has the renewal interarrival distribution.
If we let $\lambda = 1/t$, then Equation (7.31) yields an expression for the expected number of renewals, not by time $t$, but by a random exponentially distributed time with mean $t$. However, as such a random variable need not be close to its mean (its variance is $t^2$), Equation (7.31) need not be particularly close to $m(t)$. To obtain an accurate approximation, suppose that $Y_1, Y_2, \ldots, Y_n$ are independent exponentials with rate $\lambda$, and suppose they are also independent of the renewal process. Let, for $r = 1, \ldots, n$,
$$m_r = E\big[N(Y_1 + \cdots + Y_r)\big]$$
To compute an expression for $m_r$, we again start by conditioning on $X_1$, the time of the first renewal:
$$(7.32)\qquad m_r = \int_0^{\infty} E\big[N(Y_1 + \cdots + Y_r) \mid X_1 = x\big]\, f(x)\, dx$$
To determine the foregoing conditional expectation, we now condition on the number of partial sums $\sum_{i=1}^{j} Y_i$, $j = 1, \ldots, r$, that are less than $x$. Now, if all $r$ partial sums are less than $x$ (that is, if $\sum_{i=1}^{r} Y_i < x$), then clearly the number of renewals by time $\sum_{i=1}^{r} Y_i$ is 0. On the other hand, given that $j$, $j < r$, of these partial sums are less than $x$, it follows from the lack of memory property of the exponential that the number of renewals by time $\sum_{i=1}^{r} Y_i$ will have the same distribution as 1 plus $N(Y_1 + \cdots + Y_{r-j})$. Hence,
$$(7.33)\qquad E\Big[N\Big(\sum_{i=1}^{r} Y_i\Big) \,\Big|\, X_1 = x,\ j \text{ partial sums are less than } x\Big] = \begin{cases} 1 + m_{r-j}, & j < r \\ 0, & j = r \end{cases}$$
To determine the distribution of the number of the partial sums that are less than $x$, note that the successive values of these partial sums $\sum_{i=1}^{j} Y_i$, $j = 1, \ldots, r$, have the same distribution as the first $r$ event times of a Poisson process with rate $\lambda$ (since each successive partial sum is the previous sum plus an independent exponential with rate $\lambda$). Hence, it follows that, for $j < r$,
$$(7.34)\qquad P\big(j \text{ partial sums are less than } x \mid X_1 = x\big) = e^{-\lambda x}\, \frac{(\lambda x)^j}{j!}$$
Upon substitution of Equations (7.33) and (7.34) into Equation (7.32), we obtain
$$m_r = \int_0^{\infty} \sum_{j=0}^{r-1} (1 + m_{r-j})\, e^{-\lambda x}\, \frac{(\lambda x)^j}{j!}\, f(x)\, dx$$
or, equivalently,
$$(7.35)\qquad m_r = \frac{E[e^{-\lambda X}] + \sum_{j=1}^{r-1} (1 + m_{r-j})\, E\big[e^{-\lambda X} (\lambda X)^j / j!\big]}{1 - E[e^{-\lambda X}]}$$
If we set $\lambda = n/t$, then, starting with $m_1$ given by Equation (7.31), we can use Equation (7.35) to recursively compute $m_2, \ldots, m_n$. The approximation of $m(t)$ is given by $m_n = E[N(Y_1 + \cdots + Y_n)]$. Since $Y_1 + \cdots + Y_n$ is the sum of $n$ independent exponential random variables each with mean $t/n$, it follows that it is (gamma) distributed with mean $t$ and variance $t^2/n$. Hence, by choosing $n$ large, $Y_1 + \cdots + Y_n$ will be a random variable having most of its probability concentrated about $t$, and so $E[N(Y_1 + \cdots + Y_n)]$ should be quite close to $m(t)$. (Indeed, if $m(t)$ is continuous at $t$, it can be shown that these approximations converge to $m(t)$ as $n$ goes to infinity.)
Example 7.32
Table 7.1 compares the approximation $m_n$ with the exact value of $m(t)$ for three interarrival distributions $F_i$, $i = 1, 2, 3$, with densities $f_i$.
Table 7.1. Approximating $m(t)$

| $i$ | $t$ | Exact $m(t)$ | $n=1$ | $n=3$ | $n=10$ | $n=25$ | $n=50$ |
|---|---|---|---|---|---|---|---|
| 1 | 1 | 0.2838 | 0.3333 | 0.3040 | 0.2903 | 0.2865 | 0.2852 |
| 1 | 2 | 0.7546 | 0.8000 | 0.7697 | 0.7586 | 0.7561 | 0.7553 |
| 1 | 5 | 2.250 | 2.273 | 2.253 | 2.250 | 2.250 | 2.250 |
| 1 | 10 | 4.75 | 4.762 | 4.751 | 4.750 | 4.750 | 4.750 |
| 2 | 0.1 | 0.1733 | 0.1681 | 0.1687 | 0.1689 | 0.1690 | — |
| 2 | 0.3 | 0.5111 | 0.4964 | 0.4997 | 0.5010 | 0.5014 | — |
| 2 | 0.5 | 0.8404 | 0.8182 | 0.8245 | 0.8273 | 0.8281 | 0.8283 |
| 2 | 1 | 1.6400 | 1.6087 | 1.6205 | 1.6261 | 1.6277 | 1.6283 |
| 2 | 3 | 4.7389 | 4.7143 | 4.7294 | 4.7350 | 4.7363 | 4.7367 |
| 2 | 10 | 15.5089 | 15.5000 | 15.5081 | 15.5089 | 15.5089 | 15.5089 |
| 3 | 0.1 | 0.2819 | 0.2692 | 0.2772 | 0.2804 | 0.2813 | — |
| 3 | 0.3 | 0.7638 | 0.7105 | 0.7421 | 0.7567 | 0.7609 | — |
| 3 | 1 | 2.0890 | 2.0000 | 2.0556 | 2.0789 | 2.0850 | 2.0870 |
| 3 | 3 | 5.4444 | 5.4000 | 5.4375 | 5.4437 | 5.4442 | 5.4443 |
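The recursion in Equations (7.31) and (7.35) is straightforward to implement. The sketch below is illustrative, not from the text: the function names are my own, the one-dimensional integrals $E[e^{-\lambda X}(\lambda X)^j/j!]$ are evaluated with Simpson's rule, and the interarrival density $f(x) = x e^{-x}$ is an assumption (its values are consistent with the $i = 1$ rows of Table 7.1):

```python
import math

def a_j(lam, j, f, upper=50.0, steps=20000):
    """E[exp(-lam*X) * (lam*X)**j / j!] for X with density f, via Simpson's rule."""
    h = upper / steps
    def g(x):
        return math.exp(-lam * x) * (lam * x) ** j / math.factorial(j) * f(x)
    s = g(0.0) + g(upper)
    for k in range(1, steps):
        s += (4.0 if k % 2 else 2.0) * g(k * h)
    return s * h / 3.0

def renewal_m(t, n, f):
    """Approximate m(t) by m_n, using lam = n/t in Equations (7.31) and (7.35)."""
    lam = n / t
    a = [a_j(lam, j, f) for j in range(n)]    # a[j] = E[exp(-lam X)(lam X)^j / j!]
    m = [0.0, a[0] / (1.0 - a[0])]            # m_1 from Equation (7.31)
    for r in range(2, n + 1):                 # recursion (7.35)
        m.append((a[0] + sum((1.0 + m[r - j]) * a[j] for j in range(1, r)))
                 / (1.0 - a[0]))
    return m[n]

f1 = lambda x: x * math.exp(-x)               # assumed interarrival density
print(round(renewal_m(1.0, 3, f1), 4))        # ≈ 0.3040; cf. the i = 1, t = 1 row
```

For this density $E[e^{-\lambda X}(\lambda X)^j/j!]$ has the closed form $(j+1)\lambda^j/(\lambda+1)^{j+2}$, so the quadrature can be checked by hand.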
URL: https://www.sciencedirect.com/science/article/pii/B9780124079489000074
Special Random Processes
Oliver C. Ibe , in Fundamentals of Applied Probability and Random Processes (Second Edition), 2014
12.8.1 Birth and Death Processes
Birth and death processes are a special type of continuous-time Markov chain. Consider a continuous-time Markov chain with states 0, 1, 2, …. If pij = 0 whenever j ≠ i − 1 and j ≠ i + 1, then the Markov chain is called a birth and death process. Thus, a birth and death process is a continuous-time Markov chain with states 0, 1, 2, …, in which transitions from state i can only go to either state i + 1 or state i − 1. That is, a transition either causes an increase in state by one or a decrease in state by one. A birth is said to occur when the state increases by one, and a death is said to occur when the state decreases by one. For a birth and death process, we define the following transition rates from state i: λi, the rate of transition from state i to state i + 1, and μi, the rate of transition from state i to state i − 1.
Thus, λi is the rate at which a birth occurs when the process is in state i and μi is the rate at which a death occurs when the process is in state i. The sum of these two rates is vi = λi + μi, which is the rate of transition out of state i. The state-transition-rate diagram of a birth and death process is shown in Figure 12.17. It is called a state-transition-rate diagram as opposed to a state-transition diagram because it shows the rate at which the process moves from state to state and not the probability of moving from one state to another. Note that μ0 = 0, since there can be no death when the process is in the empty state.
Figure 12.17. State-Transition-Rate Diagram for Birth and Death Process
The actual state-transition probabilities when the process is in state i are p i(i + 1) and p i(i − 1). By definition, p i(i + 1) = λ i /(λ i + μ i ) is the probability that a birth occurs before a death when the process is in state i. Similarly, p i(i − 1) = μ i /(λ i + μ i ) is the probability that a death occurs before a birth when the process is in state i.
Recall that the rate at which the probability of the process being in state i changes with time is given by
$$\frac{dp_i(t)}{dt} = \lambda_{i-1} p_{i-1}(t) + \mu_{i+1} p_{i+1}(t) - (\lambda_i + \mu_i)\, p_i(t)$$
In the steady state, $dp_i(t)/dt = 0$, which gives
$$(12.32)\qquad (\lambda_i + \mu_i)\, p_i = \lambda_{i-1} p_{i-1} + \mu_{i+1} p_{i+1}$$
The equation states that the rate at which the process leaves state i either through a birth or a death is equal to the rate at which it enters the state through a birth when the process is in state i − 1 or through a death when the process is in state i + 1.
If we assume that the limiting probabilities exist, then from the above equation we obtain the following:
$$(12.33a)\qquad \lambda_0 p_0 = \mu_1 p_1$$
$$(12.33b)\qquad (\lambda_i + \mu_i)\, p_i = \lambda_{i-1} p_{i-1} + \mu_{i+1} p_{i+1}, \quad i \ge 1$$
This is called the balance equation because it balances (or equates) the rate at which the process enters state i with the rate at which it leaves state i.
Example 12.15
A machine is operational for an exponentially distributed time with mean 1/ λ before breaking down. When it breaks down, it takes a time that is exponentially distributed with mean 1/μ to repair it. What is the fraction of time that the machine is operational (or available)?
Solution:
This is a two-state birth and death process. Let U denote the up state and D the down state. Then, the state-transition-rate diagram is shown in Figure 12.18.
Figure 12.18. State-Transition-Rate Diagram for Example 12.15
Let pU denote the steady-state probability that the process is in the operational state, and let pD denote the steady-state probability that the process is in the down state. Then the balance equations become
$$p_U + p_D = 1, \qquad \lambda p_U = \mu p_D$$
Substituting pD = λpU/μ in the first equation gives pU = μ/(λ + μ).
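A quick way to sanity-check pU = μ/(λ + μ) is to simulate the alternating up/down cycles directly. This is a minimal sketch with arbitrary parameter values, not part of the text:

```python
import random

def availability(lam, mu, cycles=100_000, seed=7):
    """Long-run fraction of time up, alternating Exp(lam) up / Exp(mu) down periods."""
    rng = random.Random(seed)
    up = down = 0.0
    for _ in range(cycles):
        up += rng.expovariate(lam)     # operational period, mean 1/lam
        down += rng.expovariate(mu)    # repair period, mean 1/mu
    return up / (up + down)

lam, mu = 0.5, 2.0
print(abs(availability(lam, mu) - mu / (lam + mu)) < 0.005)   # p_U = 0.8 here
```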
Example 12.16
Customers arrive at a bank according to a Poisson process with rate λ. The time to serve each customer is exponentially distributed with mean 1/μ. There is only one teller at the bank, and an arriving customer who finds the teller busy when she arrives will join a single queue that operates on a first-come, first-served basis. Determine the limiting-state probabilities given that μ > λ.
Solution:
This is a continuous-time Markov chain in which arrivals constitute births and service completions constitute deaths. Also, for all i, λi = λ and μi = μ. Thus, if pk denotes the steady-state probability that there are k customers in the system, the balance equations are as follows:
$$\lambda p_0 = \mu p_1$$
$$(\lambda + \mu)\, p_k = \lambda p_{k-1} + \mu p_{k+1}, \quad k \ge 1$$
In general it can be shown that
$$p_k = \left(\frac{\lambda}{\mu}\right)^{k} p_0, \quad k = 0, 1, 2, \ldots$$
Now,
$$1 = \sum_{k=0}^{\infty} p_k = p_0 \sum_{k=0}^{\infty} \left(\frac{\lambda}{\mu}\right)^{k} = \frac{p_0}{1 - \lambda/\mu}$$
where the last equality follows from the fact that μ > λ. Thus,
$$p_k = \left(1 - \frac{\lambda}{\mu}\right)\left(\frac{\lambda}{\mu}\right)^{k}, \quad k = 0, 1, 2, \ldots$$
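The limiting probabilities form a geometric distribution. A minimal check (illustrative values, not from the text) verifies that they satisfy local balance across each cut between neighboring states, λpk = μpk+1:

```python
def mm1_limiting_probs(lam, mu, kmax=30):
    """p_k = (1 - rho) * rho**k for a stable M/M/1 queue, rho = lam/mu < 1."""
    rho = lam / mu
    assert rho < 1.0, "stability requires mu > lam"
    return [(1.0 - rho) * rho ** k for k in range(kmax + 1)]

lam, mu = 2.0, 3.0
p = mm1_limiting_probs(lam, mu)
# local balance between states k and k+1: lam * p_k = mu * p_{k+1}
print(all(abs(lam * p[k] - mu * p[k + 1]) < 1e-9 for k in range(30)))
```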
URL: https://www.sciencedirect.com/science/article/pii/B9780128008522000122
DEVS Markov Modeling and Simulation
Bernard P. Zeigler , ... Ernesto Kofman , in Theory of Modeling and Simulation (Third Edition), 2019
21.8 Relations Between DEVS CTM and DTM
From fundamental Markov theory, CTM is equivalent to DTM in the limit of small time step. The DEVS extension of each type introduces an external transition function which has elapsed time in state as an argument. However, only CTM can provide elapsed time that represents sojourn time in state. This is because the time advance function randomly selects the sojourn time using the correct computation. On the other hand, the external transition function in the DTM case can only supply the time step value being employed. We can get the actual sojourn time by adding a stopwatch which starts when a state is first entered and stops when the state is changed.
Exercise 21.24
Show how to add stopwatch functionality to a DEVS DTM by adding a state variable that accumulates the transition count while holding in a state.
Thus, as in Table 21.4, with both DEVS CTM and DEVS DTM as defined, the DEVS CTM has more capability than the DEVS DTM. Adding the stopwatch to the DEVS DTM gives it the same capability as DEVS CTM. In contrast, if we restrict the external transition function to not depend on elapsed time, then both types are equivalent.
Table 21.4. Behavioral comparison of CTM and DTM with/without auxiliary timing.
| DEVS DTM\DEVS CTM | As defined | No elapsed time dependence |
|---|---|---|
| As defined | DEVS CTM > DEVS DTM | DEVS CTM = DEVS DTM |
| With stopwatch | DEVS CTM = DEVS DTM | DEVS CTM = DEVS DTM |
21.8.1 Example: DEVS Markov Coupled Model
The following example employs DEVS Markov models as components and investigates the different types of models that characterize the resultants and their behaviors. The example is a coupled model of a DEVS CTM with a DEVS Markov (fixed transition time) model, representing the usual timed situation in reality. The fixed-time component is then replaced by a DEVS CTM in which the timer duration is a random element, and the resulting types of models and behaviors are investigated.
Mobile Worker. At any time, a worker is located in a locality with variability in the available jobs. Having just arrived at a locality, the worker begins a search for employment. Finding a job, the worker stays in the job until losing it. At that point she remains unemployed until finding a new one. The worker allows a fixed amount of time to find such a job and leaves the area in search of new employment if a new job does not materialize within that time. In Fig. 21.12 we represent such a worker with states inJob, JobLess, and JobSearch, together with a timer that alerts the worker to start a search after a fixed time. This timer is then replaced by one with an exponentially distributed duration.
Figure 21.12. Mobile Worker.
- a)
-
Coupled model for the timed mobile worker containing worker and timer components where the timer is the CTM alternative
- b)
-
CTM of mobile worker
- c)
-
Fixed duration alternative for timer (set to 3 days)
- d)
-
CTM alternative for timer (exponential duration with mean of 3)
- e)
-
Part of the state diagram of the coupled model – starting with (inJob, Expired), the loss of job with output StartTimer! leads to (JobLess, Timing); then, depending on which happens first, finding job or timer timeout, transition back to (inJob, Expired) or (JobSearch, Expired).
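As a rough illustration outside the DEVS formalism (state names, rates, and the treatment of a move as simply re-entering the search are my own assumptions), the exponential-timer variant of the worker behaves as an ordinary CTM and can be simulated by racing independent exponential clocks:

```python
import random

def simulate_worker(find_rate=0.5, lose_rate=0.25, timer_mean=3.0,
                    horizon=200_000.0, seed=11):
    """CTM variant of the mobile worker: in JobLess, an Exp(find_rate) job clock
    races an Exp(1/timer_mean) timer; a timeout means moving to a new locality,
    where the search continues (modeled as re-entering JobLess)."""
    rng = random.Random(seed)
    t = in_job = 0.0
    state = "inJob"
    moves = 0
    while t < horizon:
        if state == "inJob":
            d = rng.expovariate(lose_rate)       # time until the job is lost
            in_job += d
            t += d
            state = "JobLess"
        else:
            find = rng.expovariate(find_rate)
            timeout = rng.expovariate(1.0 / timer_mean)
            if find < timeout:
                state = "inJob"
            else:
                moves += 1                       # timer expired: leave the locality
            t += min(find, timeout)
    return in_job / t, moves

frac, moves = simulate_worker()
# memorylessness: timeouts restart the search, so total search time is Exp(find_rate)
print(abs(frac - 0.5 / (0.5 + 0.25)) < 0.01)
```

Because timeouts merely restart an exponential search clock, the long-run fraction of time in a job depends only on the find and lose rates, which is the kind of behavioral equivalence the exercises below explore.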
21.8.2 DEVS Hidden Markov Models
We return to characterize the hidden Markov model in Fig. 21.8, noting that it is of the form of a Cascade Composition presented in Section 15.9. The structure of the resultant model can be characterized as in Fig. 21.13. Using the terminology of Section 15.9, the front component is correlated with a partition on the state set of the resultant defined by the equivalence classes of the states mapped to the same front component state. State trajectories remain within an equivalence class and are governed by a back component which is a DEVS Markov model associated with the equivalence class. Output equivalence classes are orthogonal to the blocks of the state mapping so that outputs can't reveal the state of the front block. Switching between such classes is governed by the front component which is the hidden Markov model.
Figure 21.13. Hidden Markov Model.
The mapping can be characterized as a multi-step homomorphism of stochastic timed models (Chapter 9).
Exercises
- 1.
-
Write a CTM DEVS model for the mobile worker and a DEVS model for the fixed duration timer.
- 2.
-
Write a coupled model with the components from the previous exercise.
- 3.
-
Write a CTM DEVS model for the variable duration timer and substitute it for the timer in the coupled model.
- 4.
-
Show that the Probability Transition Structure underlying the coupled model state diagram in E is isomorphic to that of the atomic DEVS CTM in F, where the correspondence merely projects out the state of the worker from the paired state of the worker and timer.
- 5.
-
Argue that the coupled model with fixed duration timer is not equivalent to a CTM.
- 6.
-
Suppose that, instead of immediately jumping to the JobSearch state when the timer expires, the worker waits for the remaining time and then jumps to JobSearch. Show that this is an example of a DEVS Markov model in which the elapsed time is being used, and prove (or disprove) that the resultant is not equivalent to a DEVS CTM.
21.8.3 Example: Dynamic Structure via Variable Transition Probability
The dynamic structure capability of DEVS (Chapter 1) can be applied to DEVS Markov models to vary the underlying PTS and TTS structures. This allows more granular control of probabilities and transition times than is possible within fixed structure models. For example, the DEVS CTM model in Fig. 21.14 uses a "tunnel" sequence to lower the probability of death with longer remission. The DEVS Markov model in Fig. 21.15 employs the sojourn time of the initial state to lower the probability in a more granular manner.
Figure 21.14. Tunnel sequence approach to temporal dependence of transition probability.
Figure 21.15. Sojourn time approach to temporal dependence of transition probability.
Exercise 21.25
Show how to implement the change in PTS in Fig. 21.15 based on the sojourn time of the initial state. Hint: obtain the sojourn time as the time advance of the initial state and define a function to map the values obtained to the probability of transition of GreaterThan2Years to Death in the PTS.
The flow chart in Fig. 21.16 illustrates the control flow for the internal transitions for an atomic DEVS Markov model based on a pair of PTS and TTS.
Figure 21.16. Flow chart to illustrate DEVS Markov atomic model.
To illustrate the implementation of dynamic structure change, Fig. 21.16 considers that the PTS or TTS structures can depend on some index such as elapsed time or input value. Moreover, the current value of the index is incorporated into the state as a state variable. Thus, when the index is changed to a new value, the value is stored to help determine the values of the PTS and/or TTS. This might happen due to internal or external transitions. Dependence of this kind on external input values has been associated with Markov Decision Processes and Markov Agent Models. The DEVS ability for an external event to supply the elapsed time as index appears to be novel but can be considered the natural feature that exploits the coupling capability of coupled models having DEVS Markov model components. That is, if we allow such models to be influenced by elapsed time in external transitions then the resultants of coupled models will also depend on the global time advance (or residence time in the last global state) and hence will be Semi-Markov in nature. Furthermore, any change in the PTS or TTS structures amounts to dynamic structure change since it changes the basic properties of the model's transition definitions.
URL: https://www.sciencedirect.com/science/article/pii/B9780128133705000328
Continuous-Time Markov Chains
Oliver C. Ibe , in Markov Processes for Stochastic Modeling (Second Edition), 2013
5.3 Birth and Death Processes
Birth and death processes are a special type of CTMC. Consider a CTMC with states 0, 1, 2, …. If pij = 0 whenever j ≠ i − 1 and j ≠ i + 1, then the Markov chain is called a birth and death process. Thus, a birth and death process is a CTMC with states 0, 1, 2, …, in which transitions from state i can only go to either state i + 1 or state i − 1. That is, a transition either causes an increase in state by one or a decrease in state by one. A birth is said to occur when the state increases by one, and a death is said to occur when the state decreases by one. For a birth and death process, we define the following transition rates from state i: λi, the rate of transition from state i to state i + 1, and μi, the rate of transition from state i to state i − 1.
Thus, λi is the rate at which a birth occurs when the process is in state i and μi is the rate at which a death occurs when the process is in state i. The sum of these two rates is vi = λi + μi, which is the rate of transition out of state i. The state-transition-rate diagram of a birth and death process is shown in Figure 5.3. It is called a state-transition-rate diagram as opposed to a state-transition diagram because it shows the rate at which the process moves from state to state and not the probability of moving from one state to another. Note that μ0 = 0, because there can be no death when the process is in the empty state.
Figure 5.3. State-transition-rate diagram for birth and death process.
The actual state-transition probabilities when the process is in state i are pi(i+1) and pi(i−1). By definition, pi(i+1) = λi/(λi + μi) is the probability that a birth occurs before a death when the process is in state i. Similarly, pi(i−1) = μi/(λi + μi) is the probability that a death occurs before a birth when the process is in state i.
Recall from Eq. (5.1) the rate at which the probability of the process being in state i changes with time. For the birth and death process we have that
$$\frac{dp_i(t)}{dt} = \lambda_{i-1} p_{i-1}(t) + \mu_{i+1} p_{i+1}(t) - (\lambda_i + \mu_i)\, p_i(t)$$
In the steady state, $dp_i(t)/dt = 0$. If we assume that the limiting probabilities exist, then from the preceding equation we obtain the following:
$$(\lambda_i + \mu_i)\, p_i = \lambda_{i-1} p_{i-1} + \mu_{i+1} p_{i+1}$$
The equation states that the rate at which the process leaves state i either through a birth or a death is equal to the rate at which it enters the state through a birth when the process is in state or through a death when the process is in state . This is called the balance equation because it balances (or equates) the rate at which the process enters state i with the rate at which it leaves state i.
Example 5.2
A machine is operational for an exponentially distributed time with mean 1/λ before breaking down. When it breaks down, it takes a time that is exponentially distributed with mean 1/μ to repair it. What is the fraction of time that the machine is operational (or available)?
Solution
This is a two-state birth and death process. Let U denote the up state and D the down state. Then, the state-transition-rate diagram is shown in Figure 5.4.
Figure 5.4. State-transition-rate diagram for Example 5.2.
Let pU denote the steady-state probability that the process is in the operational state, and let pD denote the steady-state probability that the process is in the down state. Then the balance equations become
$$p_U + p_D = 1, \qquad \lambda p_U = \mu p_D$$
Substituting pD = λpU/μ in the first equation gives pU = μ/(λ + μ).
Example 5.3
Customers arrive at a bank according to a Poisson process with rate λ. The time to serve each customer is exponentially distributed with mean 1/μ. There is only one teller at the bank, and an arriving customer who finds the teller busy will join a single queue that operates on a first-come, first-served basis. Determine the limiting-state probabilities given that μ > λ.
Solution
This is a CTMC in which arrivals constitute births and service completions constitute deaths. Also, for all i, λi = λ and μi = μ. Thus, if pk denotes the steady-state probability that there are k customers in the system, the balance equations are as follows:
$$\lambda p_0 = \mu p_1$$
$$(\lambda + \mu)\, p_k = \lambda p_{k-1} + \mu p_{k+1}, \quad k \ge 1$$
Similarly, it can be shown that
$$p_k = \left(\frac{\lambda}{\mu}\right)^{k} p_0, \quad k = 0, 1, 2, \ldots$$
Now,
$$1 = \sum_{k=0}^{\infty} p_k = \frac{p_0}{1 - \lambda/\mu}$$
where the sum converges because μ > λ. Thus,
$$p_k = \left(1 - \frac{\lambda}{\mu}\right)\left(\frac{\lambda}{\mu}\right)^{k}, \quad k = 0, 1, 2, \ldots$$
5.3.1 Local Balance Equations
Recall that the steady-state solution of the birth and death process is given by
$$\lambda_0 p_0 = \mu_1 p_1$$
$$(\lambda_i + \mu_i)\, p_i = \lambda_{i-1} p_{i-1} + \mu_{i+1} p_{i+1}, \quad i \ge 1$$
For $i = 1$, we obtain $(\lambda_1 + \mu_1)\, p_1 = \lambda_0 p_0 + \mu_2 p_2$. Because we know from the first equation that $\lambda_0 p_0 = \mu_1 p_1$, this equation becomes
$$\lambda_1 p_1 = \mu_2 p_2$$
Similarly, for $i = 2$, we have that $(\lambda_2 + \mu_2)\, p_2 = \lambda_1 p_1 + \mu_3 p_3$. Applying the last result we obtain
$$\lambda_2 p_2 = \mu_3 p_3$$
Repeated application of this method yields the general result
$$\lambda_i p_i = \mu_{i+1} p_{i+1}, \quad i = 0, 1, 2, \ldots$$
This result states that when the process is in the steady state, the rate at which it makes a transition from state i to state i + 1, which we refer to as the rate of flow from state i to state i + 1, is equal to the rate of flow from state i + 1 to state i. This property is referred to as the local balance condition. Recall that it is an application of the reversibility property discussed in Chapter 4. Direct application of the property allows us to solve for the steady-state probabilities of the birth and death process recursively as follows:
$$p_{i+1} = \frac{\lambda_i}{\mu_{i+1}}\, p_i \quad \Rightarrow \quad p_i = p_0 \prod_{k=0}^{i-1} \frac{\lambda_k}{\mu_{k+1}}, \qquad p_0 = \left[1 + \sum_{i=1}^{\infty} \prod_{k=0}^{i-1} \frac{\lambda_k}{\mu_{k+1}}\right]^{-1}$$
When $\lambda_i = \lambda$ for all i and $\mu_i = \mu$ for all i, we obtain the result
$$p_i = p_0 \left(\frac{\lambda}{\mu}\right)^{i} = p_0\, \rho^i, \qquad \rho = \lambda/\mu$$
The sum $\sum_i p_i$ converges if and only if $\rho < 1$, which is equivalent to the condition that $\lambda < \mu$. Under this condition we obtain the solutions
$$p_0 = 1 - \rho, \qquad p_i = (1 - \rho)\, \rho^i, \quad i \ge 0$$
In Chapter 6, we will refer to this special case of the birth and death process as an M/M/1 queueing system.
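The local balance recursion solves any birth-death chain numerically, not just the constant-rate case. This sketch (my own helper, truncating the state space at a finite size) illustrates it and recovers the M/M/1 result as a special case:

```python
def birth_death_probs(birth, death):
    """Steady-state probabilities of a finite birth-death chain via local balance:
    birth[i] * p_i = death[i] * p_{i+1}, where birth[i] is the rate i -> i+1
    and death[i] is the rate (i+1) -> i."""
    p = [1.0]                          # unnormalized, relative to p_0 = 1
    for b, d in zip(birth, death):
        p.append(p[-1] * b / d)
    total = sum(p)
    return [x / total for x in p]

# truncated M/M/1 with lam = 1, mu = 2: p_i -> (1 - rho) * rho**i, rho = 1/2
probs = birth_death_probs([1.0] * 50, [2.0] * 50)
print(abs(probs[0] - 0.5) < 1e-9)      # p_0 = 1 - rho, up to truncation error
```

Truncation at 50 states is harmless here because the geometric tail beyond that point is negligible.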
5.3.2 Transient Analysis of Birth and Death Processes
Recall from Eq. (5.6) that the state probabilities satisfy the forward equations. For the birth and death process, this yields the following system of differential equations:
$$\frac{dp_0(t)}{dt} = \mu_1 p_1(t) - \lambda_0 p_0(t)$$
$$\frac{dp_i(t)}{dt} = \lambda_{i-1} p_{i-1}(t) + \mu_{i+1} p_{i+1}(t) - (\lambda_i + \mu_i)\, p_i(t), \quad i \ge 1$$
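For transient behavior, this system of differential equations can be integrated numerically. The sketch below (an illustrative Euler integrator for a finite chain, not from the text) recovers the two-state machine availability of Example 5.2 as the transient solution settles:

```python
def bd_transient(birth, death, p, t_end, dt=1e-4):
    """Euler integration of the birth-death forward equations.
    birth[i]: rate i -> i+1 (last entry unused); death[i]: rate i -> i-1."""
    n = len(p)
    for _ in range(int(t_end / dt)):
        dp = [0.0] * n
        for i in range(n):
            out = (birth[i] if i < n - 1 else 0.0) + (death[i] if i > 0 else 0.0)
            dp[i] -= out * p[i]
            if i > 0:
                dp[i - 1] += death[i] * p[i]     # flow i -> i-1
            if i < n - 1:
                dp[i + 1] += birth[i] * p[i]     # flow i -> i+1
        p = [p[i] + dt * dp[i] for i in range(n)]
    return p

# two-state machine (cf. Example 5.2): starts up, converges to p_U = mu/(lam + mu)
lam, mu = 1.0, 3.0
p = bd_transient([lam, 0.0], [0.0, mu], [1.0, 0.0], t_end=10.0)
print(abs(p[0] - mu / (lam + mu)) < 1e-3)
```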
URL: https://www.sciencedirect.com/science/article/pii/B9780124077959000050
Disease Modelling and Public Health, Part B
Anuj Mubayi , in Handbook of Statistics, 2017
2.3.1 Quantifying Case-Underreporting Levels
We derive a metric to quantify a local measure of underreporting and actual disease incidence using a mathematical model. In the model, the transmission of vector-borne disease is assumed to be a result of interacting human–vector populations, incorporating the appropriate epidemiological states, transmission mechanisms, and epidemiological and contact parameters. Data for neglected diseases are usually scarce, and there are large variations in infectiousness, length of incubation, and treatment periods (depending on the treatment regime). These variations can be well captured by "staged-progression models" (SPM) or the "linear chain trick" (Feng et al., 2007; Hyman et al., 1999). That is, we consider class-stages for the latent and infectious individuals, latent vectors, and individuals who get treatment at public and private health facilities. We include two groups of treated classes: patients undergoing treatment at public health facilities and patients treated at nongovernmental or private health care clinics (see Fig. 4). We model underreporting through the proportion p that captures the fraction of patients undergoing treatment in public health facilities. The remaining proportion 1 − p of infectious patients get treatment at private health clinics and are unreported. We assume that the average treatment rate at private health clinics is, in general, q times the average treatment rate at public health facilities, where q ∈ [0, 1]; q = 1 means that the lengths of the treatment period in public and private treatment facilities are the same, while q = 0.75 implies that the average treatment length in private facilities is 4/3 (i.e., 1/q) times as long.
Fig. 4. Flow chart of stage progression model with structured treatment regimes.
Once a susceptible individual gets infected through the bite of an infected sandfly, he/she becomes latent and progresses through the n1 latent stages (Ei; i = 1, …, n1) before becoming infectious. In our model, each latent stage has exit rate n1γh. A latent individual spends an exponentially distributed time in each latent stage, independent of the time spent in every other stage. Thus, he/she remains latent for a Γ(n1; n1γh)-distributed time, where Γ refers to the gamma distribution. That is, the expected length of the intrinsic latent period is 1/γh with standard deviation 1/(γh√n1). Similar assumptions are made for the infectious and treatment periods. We also assume stages in the latent and infectious states of vectors.
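The "linear chain trick" is easy to demonstrate numerically: summing n1 independent Exp(n1γh) stage times yields the stated Γ(n1; n1γh) latent period. This is a minimal sketch with arbitrary parameter values, not from the text:

```python
import math
import random

def staged_latent_period(n1, gamma_h, trials=100_000, seed=3):
    """Sample the latent period as a sum of n1 iid Exp(n1 * gamma_h) stage times."""
    rng = random.Random(seed)
    rate = n1 * gamma_h
    samples = [sum(rng.expovariate(rate) for _ in range(n1)) for _ in range(trials)]
    mean = sum(samples) / trials
    var = sum((s - mean) ** 2 for s in samples) / trials
    return mean, math.sqrt(var)

n1, gamma_h = 5, 0.25
mean, sd = staged_latent_period(n1, gamma_h)
# Gamma(n1; n1*gamma_h): mean 1/gamma_h = 4.0, sd 1/(gamma_h*sqrt(n1)) ~ 1.789
print(abs(mean - 1 / gamma_h) < 0.05, abs(sd - 1 / (gamma_h * math.sqrt(n1))) < 0.05)
```

Increasing n1 shrinks the standard deviation relative to the mean, which is how the staged model tunes the variability of the latent period.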
The rate of infection for susceptible hosts is modeled via λh(Z/Nv), where λh = mCβhv, m is the per capita average number of sandflies (assumed constant), C is the mean rate of bites per sandfly, βhv is a transmission probability per bite from an infectious sandfly, and Z/Nv is the proportion of infectious sandflies in the population (i.e., the probability that a bite is made by an infectious sandfly). Here, we are assuming that both Nv and Nh are constant. The proportion of bites of susceptible sandflies on infectious humans is modeled as the proportion of infectious humans in the population, assuming sandflies bite any host with equal probability. The infection rate of susceptible sandflies is λv times this proportion, where λv = Cβvh and βvh is the transmission probability per bite from an infectious human to a susceptible sandfly (Chowell et al., 2007). The model defined here uses the well-known fact that a Γ(n, v) distribution is equivalent to the distribution of the sum of n independent Exp(v) distributions. From Fig. 4, the rate of reporting of cases depends on the endemic state of the infected compartment (Mubayi et al., 2010). Hence, the underreporting proportion is given by
where the composite parameters are defined in terms of the model's stage rates and proportions, and the reproduction number of the system is
The true or adjusted incidence of a region is
Then, using the adjusted incidence, high-risk districts can be identified: a district is high risk if its adjusted proportion of cases (as compared to other districts) is greater than its proportion of the population.
URL: https://www.sciencedirect.com/science/article/pii/S0169716117300299
Queueing Theory
Sheldon Ross , in Introduction to Probability Models (Eleventh Edition), 2014
8.8 A Finite Source Model
Consider a system of $N$ machines, whose working times are independent exponential random variables with rate $\lambda$. Upon failure, a machine instantly goes to a repair facility that consists of a single repairperson. If the repairperson is free, repair begins on the machine; otherwise, the machine joins the queue of failed machines. When a machine is repaired it becomes a working machine, and repair begins on a new machine from the queue of failed machines (provided the queue is nonempty). The successive repair times are independent random variables having density function $g$, with mean
$$\mu_g = \int_0^{\infty} x\, g(x)\, dx$$
To analyze this system, so as to determine such quantities as the average number of machines that are down and the average time that a machine is down, we will exploit the exponentially distributed working times to obtain a Markov chain. Specifically, let $X_n$ denote the number of failed machines immediately after the $n$th repair occurs, $n \ge 1$. Now, if $X_n = i > 0$, then the situation when the $n$th repair has just occurred is that repair is about to begin on a machine, there are $i - 1$ other machines waiting for repair, and there are $N - i$ working machines, each of which will (independently) continue to work for an exponential time with rate $\lambda$. Similarly, if $X_n = 0$, then all $N$ machines are working and will (independently) continue to do so for exponentially distributed times with rate $\lambda$. Consequently, any information about earlier states of the system will not affect the probability distribution of the number of down machines at the moment of the next repair completion; hence, $\{X_n, n \ge 1\}$ is a Markov chain. To determine its transition probabilities $P_{i,j}$, suppose first that $i > 0$. Conditioning on $R$, the length of the next repair time, and making use of the independence of the remaining working times, yields that for $0 \le j \le N - i$
$$P_{i,\, i-1+j} = \int_0^{\infty} \binom{N-i}{j} \big(1 - e^{-\lambda r}\big)^{j} \big(e^{-\lambda r}\big)^{N-i-j}\, g(r)\, dr$$
If $X_n = 0$, then, because the next repair will not begin until one of the machines fails, at which point the situation is exactly as when $X_n = 1$,
$$P_{0,j} = P_{1,j}, \quad 0 \le j \le N - 1$$
Let $\pi_i$, $i = 0, 1, \ldots, N-1$, denote the stationary probabilities of this Markov chain. That is, they are the unique solution of
$$\pi_j = \sum_i \pi_i P_{i,j}, \qquad \sum_j \pi_j = 1$$
Therefore, after explicitly determining the transition probabilities and solving the preceding equations, we would know the value of $\pi_0$, the proportion of repair completions that leaves all machines working. Let us say that the system is "on" when all machines are working and "off" otherwise. (Thus, the system is on when the repairperson is idle and off when he is busy.) As all machines are working when the system goes back on, it follows from the lack of memory property of the exponential that the system probabilistically starts over when it goes on. Hence, this on–off system is an alternating renewal process. Suppose that the system has just become on, thus starting a new cycle, and let $R_i$, $i \ge 1$, be the time of the $i$th repair from that moment. Also, let $M$ denote the number of repairs in the off (busy) time of the cycle. Then, the length of the off period can be expressed as
$$\text{off} = \sum_{i=1}^{M} R_i$$
Although is not independent of the sequence , it is easy to check that it is a stopping time for this sequence, and thus by Wald's equation (see Exercise 13 of Chapter 7) we have
Also, since an on time will last until one of the machines fails, and since the minimum of independent exponential random variables is exponential with a rate equal to the sum of their rates, it follows that , the mean on (idle) time in a cycle, is given by
Hence, , the proportion of time that the repairperson is busy, satisfies
However, since, on average, one out of every repair completions will leave all machines working, it follows that
Consequently,
(8.58)
Now focus attention on one of the machines, call it machine number 1, and let denote the proportion of time that machine 1 is being repaired. Since the proportion of time that the repairperson is busy is , and since all machines fail at the same rate and have the same repair distribution, it follows that
(8.59)
However, machine 1 alternates between time periods when it is working, when it is waiting in queue, and when it is in repair. Let denote, respectively, the th working time, the th queueing time, and the th repair time of machine 1, . Then, the proportion of time that machine 1 is being repaired during its first working–queue–repair cycles is as follows:
Letting and using the strong law of large numbers to conclude that the averages of the and of the converge, respectively, to and , yields
where is the average amount of time that machine 1 spends in queue when it fails. Using Equation (8.59), the preceding gives
or, equivalently, that
Moreover, since all machines are probabilistically equivalent it follows that is equal to , the average amount of time that a failed machine spends in queue. To determine the average number of machines in queue, we will make use of the basic queueing identity
where is the average rate at which machines fail. To determine , again focus attention on machine 1 and suppose that we earn one per unit time whenever machine 1 is being repaired. It then follows from the basic cost identity of Equation (8.1) that
where is the average rate at which machine 1 fails. Thus, from Equation (8.59), we obtain
Because all machines fail at the same rate, the preceding implies that
which gives that the average number of machines in queue is
Since the average number of machines being repaired is , the preceding, along with Equation (8.58), shows that the average number of down machines is
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780124079489000086
Queueing Theory
Sheldon M. Ross , in Introduction to Probability Models (Twelfth Edition), 2019
8.8 A Finite Source Model
Consider a system of m machines, whose working times are independent exponential random variables with rate λ. Upon failure, a machine instantly goes to a repair facility that consists of a single repairperson. If the repairperson is free, repair begins on the machine; otherwise, the machine joins the queue of failed machines. When a machine is repaired it becomes a working machine, and repair begins on a new machine from the queue of failed machines (provided the queue is nonempty). The successive repair times are independent random variables having density function g, with mean
μ_R = ∫0^∞ t g(t) dt
To analyze this system, so as to determine such quantities as the average number of machines that are down and the average time that a machine is down, we will exploit the exponentially distributed working times to obtain a Markov chain. Specifically, let Xn denote the number of failed machines immediately after the nth repair occurs, n ≥ 1. Now, if Xn = i > 0, then the situation when the nth repair has just occurred is that repair is about to begin on a machine, there are i − 1 other machines waiting for repair, and there are m − i working machines, each of which will (independently) continue to work for an exponential time with rate λ. Similarly, if Xn = 0, then all m machines are working and will (independently) continue to do so for exponentially distributed times with rate λ. Consequently, any information about earlier states of the system will not affect the probability distribution of the number of down machines at the moment of the next repair completion; hence, {Xn, n ≥ 1} is a Markov chain. To determine its transition probabilities P_{i,j}, suppose first that i > 0. Conditioning on R, the length of the next repair time, and making use of the independence of the remaining working times, yields that for j ≥ i − 1
P_{i,j} = ∫0^∞ C(m − i, j − i + 1) (1 − e^{−λt})^{j−i+1} (e^{−λt})^{m−j−1} g(t) dt
where C(m − i, j − i + 1) is the binomial coefficient counting which j − i + 1 of the m − i working machines fail during the repair.
If Xn = 0, then, because the next repair will not begin until one of the m machines fails,
P_{0,j} = P_{1,j},  j ≥ 0
Let π_i, i = 0, 1, …, m − 1, denote the stationary probabilities of this Markov chain. That is, they are the unique solution of
π_j = Σ_i π_i P_{i,j},  Σ_j π_j = 1
Therefore, after explicitly determining the transition probabilities and solving the preceding equations, we would know the value of π_0, the proportion of repair completions that leaves all machines working. Let us say that the system is "on" when all machines are working and "off" otherwise. (Thus, the system is on when the repairperson is idle and off when he is busy.) As all machines are working when the system goes back on, it follows from the lack of memory property of the exponential that the system probabilistically starts over when it goes on. Hence, this on–off system is an alternating renewal process. Suppose that the system has just become on, thus starting a new cycle, and let R_i, i ≥ 1, be the time of the ith repair from that moment. Also, let N denote the number of repairs in the off (busy) time of the cycle. Then, it follows that B, the length of the off period, can be expressed as
B = Σ_{i=1}^{N} R_i
Although N is not independent of the sequence R_1, R_2, …, it is easy to check that it is a stopping time for this sequence, and thus by Wald's equation (see Exercise 13 of Chapter 7) we have
E[B] = E[N] E[R] = E[N] μ_R
Also, since an on time will last until one of the m machines fails, and since the minimum of independent exponential random variables is exponential with a rate equal to the sum of their rates, it follows that E[I], the mean on (idle) time in a cycle, is given by
E[I] = 1/(mλ)
Hence, P_B, the proportion of time that the repairperson is busy, satisfies
P_B = E[B] / (E[I] + E[B])
However, since, on average, one out of every 1/π_0 repair completions will leave all machines working, it follows that
E[N] = 1/π_0
Consequently,
P_B = (μ_R/π_0) / (1/(mλ) + μ_R/π_0) = mλμ_R / (π_0 + mλμ_R)    (8.58)
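Equation (8.58) can be spot-checked by simulation. The sketch below is illustrative (the function name and parameter values are our own, and exponential repair times are assumed purely for convenience; the identity holds for any repair distribution with mean μ_R): it estimates π_0 and P_B from one long run and compares P_B with mλμ_R/(π_0 + mλμ_R).

```python
import math
import random

def simulate_finite_source(m, lam, mu_R, n_repairs=200_000, seed=1):
    """Simulate the m-machine, single-repairperson model with
    exponential(rate lam) working times and (assumed) exponential
    repair times of mean mu_R.  Returns (pi0_hat, P_B_hat)."""
    rng = random.Random(seed)
    t = 0.0            # total elapsed time
    busy = 0.0         # time the repairperson is busy
    failed = 0         # number of down machines
    all_working = 0    # repair completions leaving all machines up
    for _ in range(n_repairs):
        if failed == 0:
            # idle period: wait for the first of the m machines to fail
            t += rng.expovariate(m * lam)
            failed = 1
        R = rng.expovariate(1.0 / mu_R)   # next repair time
        # each working machine fails during the repair, independently,
        # with probability 1 - e^{-lam R} (memorylessness)
        p_fail = 1.0 - math.exp(-lam * R)
        failed += sum(rng.random() < p_fail for _ in range(m - failed)) - 1
        t += R
        busy += R
        if failed == 0:
            all_working += 1
    return all_working / n_repairs, busy / t

pi0, P_B = simulate_finite_source(m=3, lam=1.0, mu_R=0.2)
formula = 3 * 1.0 * 0.2 / (pi0 + 3 * 1.0 * 0.2)   # Eq. (8.58)
print(round(P_B, 3), round(formula, 3))           # the two should agree closely
```

Because π_0 and P_B are estimated from the same run, the comparison is a self-consistency check of (8.58) rather than a derivation.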
Now focus attention on one of the machines, call it machine number 1, and let P_1 denote the proportion of time that machine 1 is being repaired. Since the proportion of time that the repairperson is busy is P_B, and since all machines fail at the same rate and have the same repair distribution, it follows that
P_1 = P_B/m    (8.59)
However, machine 1 alternates between time periods when it is working, when it is waiting in queue, and when it is in repair. Let W_i, Q_i, R_i denote, respectively, the ith working time, the ith queueing time, and the ith repair time of machine 1, i ≥ 1. Then, the proportion of time that machine 1 is being repaired during its first n working–queue–repair cycles is as follows:
(R_1 + ⋯ + R_n) / (W_1 + ⋯ + W_n + Q_1 + ⋯ + Q_n + R_1 + ⋯ + R_n)
Letting n → ∞ and using the strong law of large numbers to conclude that the averages of the W_i and of the R_i converge, respectively, to 1/λ and μ_R, yields
P_1 = μ_R / (1/λ + Q̄ + μ_R)
where Q̄ is the average amount of time that machine 1 spends in queue when it fails. Using Eq. (8.59), the preceding gives
μ_R / (1/λ + Q̄ + μ_R) = P_B/m
or, equivalently, that
Q̄ = mμ_R/P_B − 1/λ − μ_R
Moreover, since all machines are probabilistically equivalent it follows that Q̄ is equal to W_Q, the average amount of time that a failed machine spends in queue. To determine the average number of machines in queue, we will make use of the basic queueing identity
L_Q = λ_a W_Q = λ_a Q̄
where λ_a is the average rate at which machines fail. To determine λ_a, again focus attention on machine 1 and suppose that we earn one per unit time whenever machine 1 is being repaired. It then follows from the basic cost identity of Eq. (8.1) that
P_1 = r_1 μ_R
where r_1 is the average rate at which machine 1 fails. Thus, from Eq. (8.59), we obtain
r_1 = P_1/μ_R = P_B/(mμ_R)
Because all m machines fail at the same rate, the preceding implies that
λ_a = m r_1 = P_B/μ_R
which gives that the average number of machines in queue is
L_Q = λ_a Q̄ = (P_B/μ_R)(mμ_R/P_B − 1/λ − μ_R) = m − P_B/(λμ_R) − P_B
Since the average number of machines being repaired is P_B, the preceding, along with Eq. (8.58), shows that the average number of down machines is
L_Q + P_B = m − P_B/(λμ_R) = m − m/(π_0 + mλμ_R)
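The final formula can be checked the same way. The following sketch (again with illustrative parameters, and with exponential repair times assumed only as a convenience) estimates the time-average number of down machines directly and compares it with m − m/(π_0 + mλμ_R):

```python
import random

def avg_machines_down(m, lam, mu_R, n_repairs=100_000, seed=2):
    """Estimate the long-run average number of down machines, and the
    value predicted by m - m/(pi0 + m*lam*mu_R)."""
    rng = random.Random(seed)
    t = area = 0.0          # area = integral of (number down) dt
    failed = 0
    all_working = 0
    for _ in range(n_repairs):
        if failed == 0:
            t += rng.expovariate(m * lam)   # idle period: nothing is down
            failed = 1
        R = rng.expovariate(1.0 / mu_R)     # assumed exponential repair
        # fresh Exp(lam) residual lifetimes (memorylessness); machines
        # whose lifetime lands inside (0, R] fail during this repair
        hits = sorted(x for x in (rng.expovariate(lam)
                                  for _ in range(m - failed)) if x <= R)
        prev, down = 0.0, failed
        for ft in hits:                     # integrate the step function
            area += down * (ft - prev)
            down, prev = down + 1, ft
        area += down * (R - prev)
        failed = down - 1                   # the repaired machine restarts
        t += R
        if failed == 0:
            all_working += 1
    pi0 = all_working / n_repairs
    return area / t, m - m / (pi0 + m * lam * mu_R)

sim, formula = avg_machines_down(m=4, lam=0.5, mu_R=0.3)
print(round(sim, 3), round(formula, 3))     # the two should agree closely
```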
Special Random Variables
Sheldon M. Ross , in Introduction to Probability and Statistics for Engineers and Scientists (Fifth Edition), 2014
5.6 Exponential Random Variables
A continuous random variable whose probability density function is given, for some λ > 0, by
f(x) = λe^{−λx},  x ≥ 0 (and f(x) = 0 for x < 0)
is said to be an exponential random variable (or, more simply, is said to be exponentially distributed) with parameter λ. The cumulative distribution function F(x) of an exponential random variable is given by
F(x) = 1 − e^{−λx},  x ≥ 0
The exponential distribution often arises, in practice, as being the distribution of the amount of time until some specific event occurs. For instance, the amount of time (starting from now) until an earthquake occurs, or until a new war breaks out, or until a telephone call you receive turns out to be a wrong number are all random variables that tend in practice to have exponential distributions (see Section 5.6.1 for an explanation).
The moment generating function of the exponential is given by
φ(t) = E[e^{tX}] = ∫0^∞ e^{tx} λe^{−λx} dx = λ/(λ − t),  t < λ
Differentiation yields
φ′(t) = λ/(λ − t)^2,  φ″(t) = 2λ/(λ − t)^3
and so
E[X] = φ′(0) = 1/λ,  Var(X) = φ″(0) − (E[X])^2 = 2/λ^2 − 1/λ^2 = 1/λ^2
Thus λ is the reciprocal of the mean, and the variance is equal to the square of the mean.
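These moment formulas are easy to confirm numerically; the following sketch (with an arbitrary rate λ = 2) compares sample moments against 1/λ and 1/λ²:

```python
import random

lam = 2.0
rng = random.Random(0)
xs = [rng.expovariate(lam) for _ in range(200_000)]  # Exp(rate=2) samples
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
print(round(mean, 3))  # close to 1/lam = 0.5
print(round(var, 3))   # close to 1/lam**2 = 0.25
```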
The key property of an exponential random variable is that it is memoryless, where we say that a nonnegative random variable X is memoryless if
P{X > s + t | X > t} = P{X > s}  for all s, t ≥ 0    (5.6.1)
To understand why Equation 5.6.1 is called the memoryless property, imagine that X represents the length of time that a certain item functions before failing. Now let us consider the probability that an item that is still functioning at age t will continue to function for at least an additional time s. Since this will be the case if the total functional lifetime of the item exceeds t + s given that the item is still functioning at t, we see that
P{additional life of an item of age t exceeds s} = P{X > s + t | X > t}
Thus, we see that Equation 5.6.1 states that the distribution of additional functional life of an item of age t is the same as that of a new item — in other words, when Equation 5.6.1 is satisfied, there is no need to remember the age of a functional item since as long as it is still functional it is "as good as new."
The condition in Equation 5.6.1 is equivalent to
P{X > s + t, X > t}/P{X > t} = P{X > s}
or
P{X > s + t} = P{X > s}P{X > t}    (5.6.2)
When X is an exponential random variable, then
P{X > x} = e^{−λx},  x > 0
and so Equation 5.6.2 is satisfied (since e^{−λ(s+t)} = e^{−λs}e^{−λt}). Hence, exponentially distributed random variables are memoryless (and in fact it can be shown that they are the only random variables that are memoryless).
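The memoryless property can also be checked numerically. The sketch below verifies the factorization of Equation 5.6.2 in closed form and then estimates P{X > s + t | X > t} by sampling (the particular values of λ, s, and t are arbitrary):

```python
import math
import random

lam, s, t = 0.5, 2.0, 3.0
# closed form: the factorization P{X > s+t} = P{X > s} P{X > t}
assert abs(math.exp(-lam*(s+t)) - math.exp(-lam*s)*math.exp(-lam*t)) < 1e-12

# sampled check of P{X > s+t | X > t} = P{X > s}
rng = random.Random(1)
xs = [rng.expovariate(lam) for _ in range(400_000)]
survivors = [x - t for x in xs if x > t]          # remaining life at age t
cond = sum(r > s for r in survivors) / len(survivors)
print(round(cond, 3), round(math.exp(-lam*s), 3))  # both near 0.368
```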
Example 5.6a
Suppose that the number of miles that a car can run before its battery wears out is exponentially distributed with an average value of 10,000 miles. If a person desires to take a 5,000-mile trip, what is the probability that she will be able to complete her trip without having to replace her car battery? What can be said when the distribution is not exponential?
Solution
It follows, by the memoryless property of the exponential distribution, that the remaining lifetime (in thousands of miles) of the battery is exponential with parameter λ = 1/10. Hence the desired probability is
P{remaining lifetime > 5} = 1 − F(5) = e^{−5λ} = e^{−1/2} ≈ 0.607
However, if the lifetime distribution F is not exponential, then the relevant probability is
P{lifetime > t + 5 | lifetime > t} = (1 − F(t + 5))/(1 − F(t))
where t is the number of miles that the battery had been in use prior to the start of the trip. Therefore, if the distribution is not exponential, additional information is needed (namely, t) before the desired probability can be calculated.■
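A short computation illustrates the contrast (the uniform alternative is our own illustrative choice, not from the text):

```python
import math

lam = 1 / 10               # rate per thousand miles (mean 10)
p_exp = math.exp(-lam * 5) # memoryless: same answer for any prior mileage
print(round(p_exp, 4))     # ≈ 0.6065

# If the lifetime were instead, say, uniform on (0, 20) thousand miles,
# the answer would depend on the prior mileage t:
def p_uniform(t):
    # P{life > t + 5 | life > t} = (20 - t - 5) / (20 - t), for t <= 15
    return (15 - t) / (20 - t)

print(round(p_uniform(0), 3), round(p_uniform(10), 3))  # 0.75 vs 0.5
```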
For another illustration of the memoryless property, consider the following example.
Example 5.6b
A crew of workers has 3 interchangeable machines, of which 2 must be working for the crew to do its job. When in use, each machine will function for an exponentially distributed time having parameter λ before breaking down. The workers decide initially to use machines A and B and keep machine C in reserve to replace whichever of A or B breaks down first. They will then be able to continue working until one of the remaining machines breaks down. When the crew is forced to stop working because only one of the machines has not yet broken down, what is the probability that the still operable machine is machine C?
Solution
This can be easily answered, without any need for computations, by invoking the memoryless property of the exponential distribution. The argument is as follows: Consider the moment at which machine C is first put in use. At that time either A or B would have just broken down and the other one — call it machine 0 — will still be functioning. Now even though 0 would have already been functioning for some time, by the memoryless property of the exponential distribution, it follows that its remaining lifetime has the same distribution as that of a machine that is just being put into use. Thus, the remaining lifetimes of machine 0 and machine C have the same distribution and so, by symmetry, the probability that 0 will fail before C is 1/2.■
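A brute-force simulation that does not invoke memorylessness confirms the answer: machine C enters service at time min(a, b), so it outlives the survivor of A and B exactly when its own lifetime exceeds |a − b|, and the estimated probability is close to 1/2.

```python
import random

rng = random.Random(42)
lam = 1.0
n = 200_000
c_last = 0
for _ in range(n):
    a = rng.expovariate(lam)   # lifetime of machine A
    b = rng.expovariate(lam)   # lifetime of machine B
    c = rng.expovariate(lam)   # lifetime of machine C once put in use
    # C enters at time min(a, b); the survivor of A, B fails at max(a, b),
    # so C is the last machine standing exactly when c > |a - b|
    if c > abs(a - b):
        c_last += 1
print(round(c_last / n, 3))    # close to 1/2
```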
The following proposition presents another property of the exponential distribution.
Proposition 5.6.1
If X1, X2,…, Xn are independent exponential random variables having respective parameters λ1, λ2,…, λn, then min(X1, X2,…, Xn) is exponential with parameter λ1 + λ2 + ⋯ + λn.
Proof
Since the smallest value of a set of numbers is greater than x if and only if all values are greater than x, we have
P{min(X1,…,Xn) > x} = P{X1 > x,…, Xn > x} = e^{−λ1x} ⋯ e^{−λnx} = e^{−(λ1 + ⋯ + λn)x}
■
Example 5.6c
A series system is one that needs all of its components to function in order for the system itself to be functional. For an n-component series system in which the component lifetimes are independent exponential random variables with respective parameters λ1, λ2,…, λ n , what is the probability that the system survives for a time t?
Solution
Since the system life is equal to the minimal component life, it follows from Proposition 5.6.1 that
P{system life > t} = e^{−(λ1 + ⋯ + λn)t}
■
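The proposition and the example can be checked together by simulation (the component rates below are chosen arbitrarily):

```python
import math
import random

rates = [0.5, 1.0, 1.5]   # illustrative component failure rates
t = 0.4
rng = random.Random(7)
n = 200_000
# system survives past t iff the minimum component lifetime exceeds t
alive = sum(min(rng.expovariate(r) for r in rates) > t for _ in range(n))
exact = math.exp(-sum(rates) * t)   # Proposition 5.6.1: combined rate 3.0
print(round(alive / n, 3), round(exact, 3))  # both near 0.301
```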
Another useful property of exponential random variables is that cX is exponential with parameter λ/c when X is exponential with parameter λ, and c > 0. This follows since
P{cX > x} = P{X > x/c} = e^{−λx/c}
The parameter λ is called the rate of the exponential distribution.
5.6.1 *The Poisson Process
Suppose that "events" are occurring at random time points, and let N(t) denote the number of events that occur in the time interval [0, t]. These events are said to constitute a Poisson process having rate λ, λ > 0, if
- (a)
-
N(0) = 0
- (b)
-
The number of events that occur in disjoint time intervals are independent.
- (c)
-
The distribution of the number of events that occur in a given interval depends only on the length of the interval and not on its location.
- (d)
-
P{N(h) = 1} = λh + o(h)
- (e)
-
P{N(h) ≥ 2} = o(h)
where o(h) denotes any quantity g(h) such that g(h)/h → 0 as h → 0.
Thus, Condition (a) states that the process begins at time 0. Condition (b), the independent increment assumption, states for instance that the number of events by time t [that is, N(t)] is independent of the number of events that occur between t and t + s [that is, N(t + s) – N(t)]. Condition (c), the stationary increment assumption, states that the probability distribution of N(t + s) – N(t) is the same for all values of t. Conditions (d) and (e) state that in a small interval of length h, the probability of one event occurring is approximately λh, whereas the probability of 2 or more is approximately 0.
We will now show that these assumptions imply that the number of events occurring in any interval of length t is a Poisson random variable with parameter λt. To be precise, let us call the interval [0, t] and denote by N(t) the number of events occurring in that interval. To obtain an expression for P{N(t) = k}, we start by breaking the interval [0, t] into n nonoverlapping subintervals each of length t/n (Figure 5.10). Now there will be k events in [0, t] if either
Figure 5.10.
- (i)
-
N(t) equals k and there is at most one event in each subinterval;
- (ii)
-
N(t) equals k and at least one of the subintervals contains 2 or more events.
Since these two possibilities are clearly mutually exclusive, and since Condition (i) is equivalent to the statement that k of the n subintervals contain exactly 1 event and the other n – k contain 0 events, we have that
P{N(t) = k} = P{k of the n subintervals contain exactly 1 event and the other n − k contain 0 events} + P{N(t) = k, at least one subinterval contains 2 or more events}    (5.6.3)
Now it can be shown, using Condition (e), that
P{N(t) = k, at least one subinterval contains 2 or more events} → 0 as n → ∞    (5.6.4)
Also, it follows from Conditions (d) and (e) that
P{a given subinterval contains exactly 1 event} ≈ λt/n,  P{a given subinterval contains 0 events} ≈ 1 − λt/n
Hence, since the number of events that occur in different subintervals are independent [from Condition (b)], it follows that
P{k of the n subintervals contain exactly 1 event and the other n − k contain 0 events} ≈ C(n, k)(λt/n)^k (1 − λt/n)^{n−k}    (5.6.5)
with the approximation becoming exact as the number of subintervals, n, goes to ∞. However, the probability in Equation 5.6.5 is just the probability that a binomial random variable with parameters n and p = λt/n equals k. Hence, as n becomes larger and larger, this approaches the probability that a Poisson random variable with mean np = λt equals k. Hence, from Equations 5.6.3, 5.6.4, and 5.6.5, we see upon letting n approach ∞ that
P{N(t) = k} = e^{−λt}(λt)^k/k!
We have shown:
Proposition 5.6.2
For a Poisson process having rate λ,
P{N(t) = k} = e^{−λt}(λt)^k/k!,  k = 0, 1, …
That is, the number of events in any interval of length t has a Poisson distribution with mean λt.
For a Poisson process, let X1 denote the time of the first event. Further, for n > 1, let Xn denote the elapsed time between the (n – 1)st and the nth events. The sequence {Xn, n = 1, 2,…} is called the sequence of interarrival times. For instance, if X1 = 5 and X2 = 10, then the first event of the Poisson process would have occurred at time 5 and the second at time 15.
We now determine the distribution of the Xn. To do so, we first note that the event {X1 > t} takes place if and only if no events of the Poisson process occur in the interval [0, t], and thus,
P{X1 > t} = P{N(t) = 0} = e^{−λt}
Hence, X1 has an exponential distribution with mean 1/λ. To obtain the distribution of X2, note that
P{X2 > t | X1 = s} = P{0 events in (s, s + t] | X1 = s} = P{0 events in (s, s + t]} = e^{−λt}
where the last two equations followed from independent and stationary increments. Therefore, from the foregoing we see that X2 is also an exponential random variable with mean 1/λ, and furthermore, that X 2 is independent of X 1. Repeating the same argument yields:
Proposition 5.6.3
X 1, X 2,… are independent exponential random variables, each with mean 1/λ.
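A quick simulation ties Propositions 5.6.2 and 5.6.3 together: generating a process from independent exponential interarrivals with rate λ should produce counts N(t) whose mean and variance are both close to λt, as a Poisson distribution requires (parameter values are arbitrary):

```python
import random

lam, T = 2.0, 5.0
rng = random.Random(3)
n = 40_000
counts = []
for _ in range(n):
    s, k = 0.0, 0
    while True:
        s += rng.expovariate(lam)   # exponential interarrival times
        if s > T:
            break
        k += 1
    counts.append(k)                # k = N(T) for this replication
mean = sum(counts) / n
var = sum((c - mean) ** 2 for c in counts) / n
print(round(mean, 2), round(var, 2))  # both near lam*T = 10
```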
5.6.2 *The Pareto Distribution
If X is an exponential random variable with rate λ, then
Y = α e^X
is said to be a Pareto random variable with parameters α and λ. The parameter λ > 0 is called the index parameter, and α is called the minimum parameter (because P(Y ≥ α) = 1). The distribution function of Y is derived as follows: For y ≥ α,
P{Y ≤ y} = P{α e^X ≤ y} = P{X ≤ log(y/α)} = 1 − e^{−λ log(y/α)} = 1 − (α/y)^λ
Hence, the distribution function of Y is
FY(y) = 1 − (α/y)^λ,  y ≥ α
Differentiating the distribution function yields the density function of Y:
fY(y) = λα^λ/y^{λ+1},  y ≥ α
It can be shown (see Problem 5-49) that E[Y] = ∞ when λ ≤ 1. When λ > 1, the mean is obtained as follows:
E[Y] = ∫α^∞ y λα^λ/y^{λ+1} dy = λα^λ ∫α^∞ y^{−λ} dy = λα^λ α^{1−λ}/(λ − 1) = λα/(λ − 1)
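Both the mean formula and the tail probability can be checked by sampling, using the representation Y = αe^X (the parameter values are arbitrary; λ = 3 > 2 is chosen so the variance is finite and the sample mean settles quickly):

```python
import math
import random

alpha, lam = 1.0, 3.0                  # minimum and index parameters
rng = random.Random(5)
n = 300_000
# Pareto samples via the representation Y = alpha * e^X, X ~ Exp(lam)
ys = [alpha * math.exp(rng.expovariate(lam)) for _ in range(n)]
mean = sum(ys) / n
print(round(mean, 3))                  # close to lam*alpha/(lam-1) = 1.5
tail = sum(y > 2.0 for y in ys) / n
print(round(tail, 3))                  # close to (alpha/2)**lam = 0.125
```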
An important feature of Pareto distributions is that for y0 > α the conditional distribution of a Pareto random variable Y with parameters α and λ, given that it exceeds y0, is the Pareto distribution with parameters y0 and λ. To verify this, note for y > y0 that
P{Y > y | Y > y0} = P{Y > y}/P{Y > y0} = (α/y)^λ/(α/y0)^λ = (y0/y)^λ
Thus, the conditional distribution is indeed Pareto with parameters y 0 and λ.
One of the earliest uses of the Pareto was as the distribution of the annual income of the members of a population. In fact, it has been widely supposed that incomes in many populations can be modeled as coming from a Pareto distribution with index parameter λ = log(5)/log(4) ≈ 1.161. Under this supposition, it turns out that the total income of the top 20 percent of earners is 80 percent of the population's total income earnings, that the top 20 percent of these high earners earn 80 percent of the total income of all high earners, that the top 20 percent of these very high earners earn 80 percent of the total income of all very high earners, and so on.
To verify the preceding claim, let y.8 be the 80th percentile of the Pareto distribution. Because FY(y) = 1 − (α/y)^λ, we see that .8 = FY(y.8) = 1 − (α/y.8)^λ, showing that
(α/y.8)^λ = .2
and thus
y.8 = α 5^{1/λ}
Now suppose, from this point on, that λ = log(5)/log(4), and note that log(4) = (1/λ) log(5) = log(5^{1/λ}), showing that 4 = 5^{1/λ}, or equivalently that 1/λ = log5(4). Hence,
y.8 = α 5^{1/λ} = 4α
The average income of a randomly chosen individual in the top 20 percent is E[Y|Y > y.8], which is easily obtained by using that the conditional distribution of Y given that it exceeds y.8 is Pareto with parameters y.8 and λ. Using the previously derived formula for E[Y], this yields that
E[Y | Y > y.8] = λy.8/(λ − 1) = 4λα/(λ − 1) = 4E[Y]
To obtain E[Y|Y < y.8], the average income of a randomly chosen individual in the bottom 80 percent, we use the identity
E[Y] = E[Y | Y < y.8](.8) + E[Y | Y > y.8](.2)
Using the previously derived expressions for E[Y] and E[Y|Y > y.8], the preceding equation yields that
E[Y] = .8 E[Y | Y < y.8] + .8 E[Y]
showing that
E[Y | Y < y.8] = (1/4)E[Y]
Thus,
E[Y | Y > y.8]/E[Y | Y < y.8] = 4E[Y]/((1/4)E[Y]) = 16
Hence, the average earnings of someone in the top 20 percent of income earners is 16 times that of someone in the lower 80 percent, thus showing that, although there are 4 times as many people in the lower earning group, the total income of the lower income group is only 20 percent of the total earnings of the population. (On average, for every 5 people in the population, 4 are in the lower 80 percent and 1 is in the upper 20 percent; the 4 in the lower earnings group earn on average a total of 4 × E[Y]/4 = E[Y], whereas the one in the higher income group earns on average 4E[Y]. Thus, 4 out of every 5 dollars of the population's total income is earned by someone in the highest 20 percent.)
Because the conditional distribution of a high earner's income (that is, an income that exceeds y.8) is Pareto with parameters y.8 and λ, it also follows from the preceding that 80 percent of the total earnings of this group are earned by the top 20 percent of these high earners, and so on.
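The 80–20 computation reduces to a one-line check: the share of total income earned above y.8 = 4α is ∫_{y.8}^∞ y fY(y) dy / ∫_α^∞ y fY(y) dy = (α/y.8)^{λ−1} = 4^{−(λ−1)}, which equals .8 exactly when λ = log(5)/log(4).

```python
import math

lam = math.log(5) / math.log(4)   # index parameter ≈ 1.161
# share of total income earned by those above the 80th percentile y.8 = 4*alpha:
#   (alpha / y.8)**(lam - 1) = 4**-(lam - 1)
share_top20 = 4.0 ** -(lam - 1)
print(share_top20)                # 0.8, up to floating-point rounding
```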
The Pareto distribution has been applied in a variety of areas. For instance, it has been used as the distribution of
- (a)
-
the file size of internet traffic (under the TCP protocol);
- (b)
-
the time to complete a job assigned to a supercomputer;
- (c)
-
the size of a meteorite;
- (d)
-
the yearly maximum one-day rainfall in different regions.