Probability and Statistics
Module 1: Probability

1. Introduction

In our daily life we come across many processes whose outcomes cannot be predicted in advance. Such processes are referred to as random processes. The only way to derive information about a random process is to conduct experiments. Each such experiment results in an outcome which cannot be predicted beforehand. In fact, even if the experiment is repeated under identical conditions, outcomes may vary from trial to trial due to the presence of factors which are beyond control. However, we may know in advance that each performance of the experiment will result in one of several given possibilities. For example, in the cast of a die under a fixed environment the outcome (the number of dots on the upper face of the die) cannot be predicted in advance and it varies from trial to trial; however, we know in advance that the outcome has to be one of the numbers 1, 2, ..., 6. Probability theory deals with the modeling and study of random processes. The field of Statistics is closely related to probability theory; it deals with drawing inferences from data pertaining to random processes.

Definition 1.1
(i) A random experiment is an experiment in which:
(a) the set of all possible outcomes of the experiment is known in advance;
(b) the outcome of a particular performance (trial) of the experiment cannot be predicted in advance;
(c) the experiment can be repeated under identical conditions.
(ii) The collection of all possible outcomes of a random experiment is called the sample space. A sample space will usually be denoted by Ω. ▄

Example 1.1
(i) In the random experiment of casting a die one may take the sample space as Ω = {1, 2, 3, 4, 5, 6}, where i ∈ Ω indicates that the experiment results in i (i = 1, ..., 6) dots on the upper face of the die.
(ii) In the random experiment of simultaneously flipping a coin and casting a die one may take the sample space as Ω = {H, T} × {1, 2, ..., 6} = {(s, i) : s ∈ {H, T}, i ∈ {1, 2, ..., 6}}, where (H, i) ((T, i)) indicates that the flip of the coin resulted in a head (tail) on the upper face and the cast of the die resulted in i (i = 1, 2, ..., 6) dots on the upper face.
(iii) Consider an experiment where a coin is tossed repeatedly until a head is observed. In this case the sample space may be taken as Ω = {1, 2, ...} (or Ω = {H, TH, TTH, ...}), where i ∈ Ω (or TT⋯TH ∈ Ω with i − 1 Ts and one H) indicates that the experiment terminates on the i-th trial, with the first i − 1 trials resulting in tails on the upper face and the i-th trial resulting in a head on the upper face.
(iv) In the random experiment of measuring lifetimes (in hours) of a particular brand of batteries manufactured by a company one may take Ω = [0, 70,000], where we have assumed that no battery lasts for more than 70,000 hours. ▄

Definition 1.2
(i) Let Ω be the sample space of a random experiment and let E ⊆ Ω. If the outcome of the random experiment is a member of the set E we say that the event E has occurred.
(ii) Two events E_1 and E_2 are said to be mutually exclusive if they cannot occur simultaneously, i.e., if E_1 ∩ E_2 = ∅, the empty set. ▄

In a random experiment some events may be more likely to occur than others. For example, in the cast of a fair die (a die that is not biased towards any particular outcome), the occurrence of an odd number of dots on the upper face is more likely than the occurrence of 2 or 4 dots on the upper face. Thus it may be desirable to quantify the likelihoods of occurrence of various events. The probability of an event is a numerical measure of the chance with which that event occurs.
To assign probabilities to various events associated with a random experiment one may assign a real number P(E) ∈ [0, 1] to each event E, with the interpretation that there is a (100 × P(E))% chance that the event E will occur and a (100 × (1 − P(E)))% chance that it will not occur. For example, if the probability of an event is 0.25 it means that there is a 25% chance that the event will occur and a 75% chance that it will not occur. Note that, for any such assignment of probabilities to be meaningful, one must have P(Ω) = 1. Now we will discuss two methods of assigning probabilities.

I. Classical Method

This method of assigning probabilities is used for random experiments which result in a finite number of equally likely outcomes. Let Ω = {ω_1, ..., ω_n} be a finite sample space with n ∈ ℕ possible outcomes; here ℕ denotes the set of natural numbers. For E ⊆ Ω, let |E| denote the number of elements in E. An outcome ω ∈ Ω is said to be favorable to an event E if ω ∈ E. In the classical method of assigning probabilities, the probability of an event E is given by

P(E) = (number of outcomes favorable to E) / (total number of outcomes) = |E| / |Ω| = |E| / n.

Note that probabilities assigned through the classical method satisfy the following properties of intuitive appeal:
(i) for any event E, P(E) ≥ 0;
(ii) for mutually exclusive events E_1, E_2, ..., E_n (i.e., E_i ∩ E_j = ∅ whenever i, j ∈ {1, ..., n}, i ≠ j),
P(⋃_{i=1}^n E_i) = |⋃_{i=1}^n E_i| / n = (∑_{i=1}^n |E_i|) / n = ∑_{i=1}^n |E_i|/n = ∑_{i=1}^n P(E_i);
(iii) P(Ω) = |Ω| / |Ω| = 1.

Example 1.2
Suppose that in a classroom we have 25 students (with registration numbers 1, 2, ..., 25) born in the same year having 365 days. Suppose that we want to find the probability of the event E that they are all born on different days of the year. Here an outcome consists of a sequence of 25 birthdays. Suppose that all such sequences are equally likely. Then |Ω| = 365^25, |E| = 365 × 364 × ⋯ × 341 = ³⁶⁵P₂₅ and

P(E) = |E| / |Ω| = ³⁶⁵P₂₅ / 365^25.

The classical method of assigning probabilities has limited applicability, as it can be used only for random experiments which result in a finite number of equally likely outcomes. ▄

II. Relative Frequency Method

Suppose that we have independent repetitions of a random experiment (here independent repetitions means that the outcome of one trial is not affected by the outcome of another trial) under identical conditions. Let f_N(E) denote the number of times an event E occurs (also called the frequency of the event E in N trials) in the first N trials and let r_N(E) = f_N(E)/N denote the corresponding relative frequency. Using advanced probabilistic arguments (e.g., the Weak Law of Large Numbers, to be discussed in Module 7) it can be shown that, under mild conditions, the relative frequencies stabilize (in a certain sense) as N gets large, i.e., for any event E, lim_{N→∞} r_N(E) exists in a certain sense. In the relative frequency method of assigning probabilities, the probability of an event E is given by

P(E) = lim_{N→∞} r_N(E) = lim_{N→∞} f_N(E)/N.

Figure 1.1. Plot of relative frequencies r_N(E) of the number of heads against the number of trials N in the random experiment of tossing a fair coin (with probability of head in each trial equal to 0.5).

In practice, to assign probability to an event E, the experiment is repeated a large (but fixed) number of times, say N, and the approximation P(E) ≈ r_N(E) is used for assigning probability to the event E.
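The following short Python sketch (not part of the original notes) evaluates Example 1.2 in both of the ways just described: the classical count ³⁶⁵P₂₅/365^25 and a Monte Carlo estimate r_N(E), whose stabilisation for large N is what the relative frequency method relies on. The number of simulated trials is an arbitrary choice.

```python
import random

def classical_birthday(k: int = 25, days: int = 365) -> float:
    prob = 1.0
    for i in range(k):
        prob *= (days - i) / days   # multiply the fraction of still-unused days
    return prob

def relative_frequency_birthday(k: int = 25, days: int = 365, n_trials: int = 100_000) -> float:
    hits = 0
    for _ in range(n_trials):
        birthdays = [random.randrange(days) for _ in range(k)]
        if len(set(birthdays)) == k:    # all k birthdays are distinct
            hits += 1
    return hits / n_trials              # r_N(E) = f_N(E) / N

if __name__ == "__main__":
    print("classical P(E) =", classical_birthday())              # about 0.4313
    print("relative frequency estimate =", relative_frequency_birthday())
```

Running it shows the simulated relative frequency settling near the exact classical value of roughly 0.43.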
Note that probabilities assigned through the relative frequency method also satisfy the following properties of intuitive appeal:
(i) for any event E, P(E) ≥ 0;
(ii) for mutually exclusive events E_1, E_2, ..., E_n, P(⋃_{i=1}^n E_i) = ∑_{i=1}^n P(E_i);
(iii) P(Ω) = 1.

Although the relative frequency method seems to have wider applicability than the classical method, it too has limitations. A major problem with the relative frequency method is that it is imprecise, as it is based on the approximation P(E) ≈ r_N(E). Another difficulty is that it assumes that the experiment can be repeated a large number of times; this may not always be possible due to budgetary and other constraints (e.g., in predicting the success of a new space technology it may not be possible to repeat the experiment a large number of times due to the high costs involved).

The following definitions will be useful in future discussions.

Definition 1.3
(i) A set E is said to be finite if either E = ∅ (the empty set) or if there exists a one-one and onto function f: {1, 2, ..., n} → E (or f: E → {1, 2, ..., n}) for some natural number n.
(ii) A set is said to be infinite if it is not finite.
(iii) A set E is said to be countable if either E = ∅ or if there is an onto function f: ℕ → E, where ℕ denotes the set of natural numbers.
(iv) A set is said to be countably infinite if it is countable and infinite.
(v) A set is said to be uncountable if it is not countable.
(vi) A set E is said to be continuum if there is a one-one and onto function f: ℝ → E (or f: E → ℝ), where ℝ denotes the set of real numbers. ▄

The following proposition, whose proof(s) can be found in any standard textbook on set theory, provides some of the properties of finite, countable and uncountable sets.

Proposition 1.1
(i) Any finite set is countable.
(ii) If A is a countable set and B ⊆ A, then B is countable.
(iii) Any uncountable set is an infinite set.
(iv) If A is an infinite set and A ⊆ B, then B is infinite.
(v) If A is an uncountable set and A ⊆ B, then B is uncountable.
(vi) If E is a finite set and F is a set such that there exists a one-one and onto function f: E → F or f: F → E, then F is finite.
(vii) If E is a countably infinite (continuum) set and F is a set such that there exists a one-one and onto function f: E → F or f: F → E, then F is countably infinite (continuum).
(viii) A set E is countable if, and only if, either E = ∅ or there exists a one-one and onto map f: E → ℕ_0 for some ℕ_0 ⊆ ℕ.
(ix) A set E is countable if, and only if, either E = ∅ or there exists a one-one map f: E → ℕ.
(x) A set E is countable if, and only if, either E is finite or there exists a one-one map f: ℕ → E.
(xi) A non-empty countable set E can be written either as E = {ω_1, ω_2, ..., ω_n}, for some n ∈ ℕ, or as E = {ω_1, ω_2, ...}.
(xii) Any continuum set is uncountable.
(xiii) Let Λ be a countable set and let {A_α : α ∈ Λ} be a (countable) collection of countable sets. Then ⋃_{α∈Λ} A_α is countable. In other words, a countable union of countable sets is countable.
(xiv) ℕ × ℕ is countable.
(xv) The unit interval (0, 1] is uncountable. ▄
Example 1.3
(i) Define f: ℕ → ℕ by f(n) = n, n ∈ ℕ. Clearly f: ℕ → ℕ is one-one and onto, so ℕ is countable. Also it can be easily seen (using the method of contradiction) that ℕ is infinite. Thus ℕ is countably infinite.
(ii) Let ℤ denote the set of integers. Define f: ℕ → ℤ by

f(n) = (n − 1)/2, if n is odd;  f(n) = −n/2, if n is even.

Clearly f: ℕ → ℤ is one-one and onto. Therefore, using (i) above and Proposition 1.1 (vii), ℤ is countably infinite. Moreover, on using Proposition 1.1 (ii), it follows that any subset of ℤ is countable.
(iii) Using the fact that ℕ is countably infinite and Proposition 1.1 (xiv), it is straightforward to show that ℚ (the set of rational numbers) is countably infinite.
(iv) Define f: ℝ → ℝ and g: ℝ → (0, 1) by f(x) = x, x ∈ ℝ, and g(x) = 1/(1 + e^(−x)), x ∈ ℝ. Then f: ℝ → ℝ and g: ℝ → (0, 1) are one-one and onto functions. It follows that ℝ and (0, 1) are continuum (using Proposition 1.1 (vii)). Further, for −∞ < a < b < ∞, let h(x) = (b − a)x + a, x ∈ (0, 1). Clearly h: (0, 1) → (a, b) is one-one and onto. Again using Proposition 1.1 (vii), it follows that any interval (a, b), where −∞ < a < b < ∞, is continuum; hence any interval (a, b) is uncountable (using Proposition 1.1 (xii)). In particular the unit interval (0, 1) is uncountable. ▄
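As a quick illustration of Example 1.3 (ii), the snippet below (my own check, not from the notes) lists the first few values of the map f: ℕ → ℤ and confirms that no integer is repeated on that range, which is the mechanism behind "ℤ is countably infinite".

```python
def f(n: int) -> int:
    # the bijection of Example 1.3 (ii): odd n -> (n-1)/2, even n -> -n/2
    return (n - 1) // 2 if n % 2 == 1 else -(n // 2)

values = [f(n) for n in range(1, 12)]
print(values)                            # [0, -1, 1, -2, 2, -3, 3, -4, 4, -5, 5]
assert len(values) == len(set(values))   # no value repeats, i.e. f is one-one here
```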
It is clear that it may not be possible to assign probabilities in a manner that applies to every situation. Assignment of probabilities to various subsets of the sample space Ω that is consistent with the intuitively appealing properties (i)-(iii) of the classical (or relative frequency) method is done through probability modeling. In advanced courses on probability theory it is shown that in many situations (especially when the sample space Ω is continuum) it is not possible to assign probabilities to all subsets of Ω such that properties (i)-(iii) of the classical (or relative frequency) method are satisfied. Therefore probabilities are assigned to only certain types of subsets of Ω. One begins with assigning probabilities to members of an appropriately chosen class C of subsets of Ω (e.g., if Ω is a countable set, then C may be the class of all singletons {x}, x ∈ Ω; if Ω = ℝ, then C may be the class of all open intervals in ℝ). We call the members of C the basic sets. Starting from the basic sets in C, the assignment of probabilities is extended, in an intuitively justified manner, to as many subsets of Ω as possible, keeping in mind that properties (i)-(iii) of the classical (or relative frequency) method are not violated. Let us denote by ℱ the class of sets for which the probability assignments can finally be done. We call the class ℱ the event space and the elements of ℱ are called events. It is reasonable to require that ℱ satisfies the following properties: (i) Ω ∈ ℱ; (ii) A ∈ ℱ ⟹ A^c = Ω − A ∈ ℱ; (iii) A_i ∈ ℱ, i = 1, 2, ... ⟹ ⋃_{i=1}^∞ A_i ∈ ℱ. In the modern approach to probability theory one does not bother about how probabilities are assigned; rather, one defines the concept of probability for certain types of subsets of Ω using a set of axioms that are consistent with properties (i)-(iii) of the classical (or relative frequency) method, and then studies various properties of probability measures. This leads to the introduction of the following definitions.

2. Axiomatic Approach to Probability and Properties of Probability Measure

We begin this section with the following definitions.

Definition 2.1
(i) A set whose elements are themselves sets is called a class of sets. A class of sets will usually be denoted by script letters A, B, C, ....
(ii) Let C be a class of sets. A real-valued function μ: C → ℝ, whose domain is a class of sets, is called a set function. ▄

Definition 2.2
A sigma-field (σ-field) of subsets of Ω is a class ℱ of subsets of Ω satisfying the following properties:
(i) Ω ∈ ℱ;
(ii) A ∈ ℱ ⟹ A^c = Ω − A ∈ ℱ (closed under complements);
(iii) A_i ∈ ℱ, i = 1, 2, ... ⟹ ⋃_{i=1}^∞ A_i ∈ ℱ (closed under countably infinite unions). ▄

Remark 2.1
We expect the event space to be a σ-field. ▄

Example 2.1
(i) ℱ = {∅, Ω} is a sigma-field, called the trivial sigma-field.
(ii) Suppose that A ⊆ Ω. Then ℱ = {∅, A, A^c, Ω} is a σ-field of subsets of Ω; it is the smallest sigma-field containing the set A.
(iii) An arbitrary intersection of σ-fields is a σ-field (see Problem 3 (i)).
(iv) Let C be a class of subsets of Ω and let {ℱ_α : α ∈ Λ} be the collection of all σ-fields that contain C. Then ℱ = ⋂_{α∈Λ} ℱ_α is a σ-field and it is the smallest σ-field that contains the class C; it is called the σ-field generated by C and is denoted by σ(C) (see Problem 3 (iii)).
(v) Let Ω = ℝ and let C be the class of all open intervals in ℝ. Then B_1 = σ(C) is called the Borel σ-field on ℝ. The Borel σ-field in ℝ^k (denoted by B_k) is the σ-field generated by the class of all open rectangles in ℝ^k; here ℝ^k = {(x_1, ..., x_k) : −∞ < x_i < ∞, i = 1, ..., k} denotes the k-dimensional Euclidean space. A set B ∈ B_k is called a Borel set in ℝ^k. Note that B_1 contains all singletons, and hence all countable subsets of ℝ, since {a} = ⋂_{n=1}^∞ (a − 1/n, a + 1/n).
(vi) Suppose that ℱ is a σ-field of subsets of Ω. Then:
(a) ∅ ∈ ℱ (since ∅ = Ω^c);
(b) E_1, E_2, ... ∈ ℱ ⟹ ⋂_{i=1}^∞ E_i ∈ ℱ (since ⋂_{i=1}^∞ E_i = (⋃_{i=1}^∞ E_i^c)^c);
(c) E, F ∈ ℱ ⟹ E − F = E ∩ F^c ∈ ℱ and E Δ F ≝ (E − F) ∪ (F − E) ∈ ℱ;
(d) E_1, ..., E_n ∈ ℱ, for some n ∈ ℕ, ⟹ ⋃_{i=1}^n E_i ∈ ℱ and ⋂_{i=1}^n E_i ∈ ℱ (take E_{n+1} = E_{n+2} = ⋯ = ∅, so that ⋃_{i=1}^n E_i = ⋃_{i=1}^∞ E_i, or E_{n+1} = E_{n+2} = ⋯ = Ω, so that ⋂_{i=1}^n E_i = ⋂_{i=1}^∞ E_i);
(e) although the power set P(Ω) of Ω is a σ-field of subsets of Ω, in general a σ-field may not contain all subsets of Ω. ▄
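For a finite Ω the defining conditions of a σ-field can be checked by brute force, since countable unions reduce to finite unions. The helper below is an illustrative sketch (not from the notes); it confirms that the class {∅, A, A^c, Ω} of Example 2.1 (ii) is a σ-field, for one arbitrary choice of Ω and A.

```python
from itertools import combinations

def is_sigma_field(omega: frozenset, family: set) -> bool:
    if omega not in family:
        return False                                    # (i) Omega must belong to F
    for a in family:
        if omega - a not in family:
            return False                                # (ii) closed under complements
    for r in range(2, len(family) + 1):                 # (iii) closed under (finite) unions
        for sets in combinations(family, r):
            if frozenset().union(*sets) not in family:
                return False
    return True

omega = frozenset({1, 2, 3, 4})
a = frozenset({1, 2})
family = {frozenset(), a, omega - a, omega}
print(is_sigma_field(omega, family))    # True
```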
Note that not all subsets of Ω need be elements of ℱ. Now we provide a mathematical definition of probability based on a set of axioms.

Definition 2.3
(i) Let ℱ be a σ-field of subsets of Ω. A probability function (or a probability measure) is a set function P, defined on ℱ, satisfying the following three axioms:
(a) P(E) ≥ 0, ∀E ∈ ℱ (Axiom 1: Non-negativity);
(b) if E_1, E_2, ... is a countably infinite collection of mutually exclusive events (i.e., E_i ∈ ℱ, i = 1, 2, ..., and E_i ∩ E_j = ∅ whenever i ≠ j), then P(⋃_{i=1}^∞ E_i) = ∑_{i=1}^∞ P(E_i) (Axiom 2: Countably infinite additivity);
(c) P(Ω) = 1 (Axiom 3: Probability of the sample space is 1).
(ii) The triplet (Ω, ℱ, P) is called a probability space. Recall that members of ℱ are called events. ▄

Remark 2.2
(i) Note that if E_1, E_2, ... is a countably infinite collection of sets in a σ-field ℱ, then ⋃_{i=1}^∞ E_i ∈ ℱ and, therefore, P(⋃_{i=1}^∞ E_i) is well defined.
(ii) In any probability space (Ω, ℱ, P) we have P(Ω) = 1 (and P(∅) = 0; see Theorem 2.1 (i) proved later), but if P(A) = 1 (or P(A) = 0) for some A ∈ ℱ, it does not mean that A = Ω (or A = ∅) (see Problem 14 (ii)).
(iii) In general not all subsets of Ω are events, i.e., generally the domain ℱ of a probability measure is taken to be σ(C), the σ-field generated by the class C of basic subsets of Ω.
(iv) When Ω is countable it is possible to assign probabilities to all subsets of Ω using Axiom 2, provided we can assign probabilities to the singleton subsets {x} of Ω. To illustrate this, let Ω = {ω_1, ω_2, ...} (or Ω = {ω_1, ..., ω_n}, for some n ∈ ℕ) and let P({ω_i}) = p_i, i = 1, 2, ..., so that 0 ≤ p_i ≤ 1, i = 1, 2, ..., and (see Theorem 2.1 (iii) below) ∑_{i=1}^∞ p_i = ∑_{i=1}^∞ P({ω_i}) = P(⋃_{i=1}^∞ {ω_i}) = P(Ω) = 1. Then, for any A ⊆ Ω, P(A) = ∑_{i: ω_i ∈ A} p_i. Thus in this case we may take ℱ = P(Ω), the power set of Ω. It is worth mentioning here that if Ω is countable and C = {{ω} : ω ∈ Ω} (the class of all singleton subsets of Ω) is the class of basic sets for which the assignment of probabilities can be done, to begin with, then σ(C) = P(Ω) (see Problem 5 (ii)).
(v) Due to some inconsistency problems, the assignment of probabilities to all subsets of Ω is not possible when Ω is continuum (e.g., if Ω contains an interval). ▄

We have stated before that we will not be concerned with how the assignment of probabilities to the various members of the event space ℱ (a σ-field of subsets of Ω) is done. Rather, we will be interested in the properties of a probability measure defined on the event space ℱ.

Theorem 2.1
Let (Ω, ℱ, P) be a probability space. Then
(i) P(∅) = 0;
(ii) E_1, ..., E_n ∈ ℱ, E_i ∩ E_j = ∅ for i ≠ j ⟹ P(⋃_{i=1}^n E_i) = ∑_{i=1}^n P(E_i) (finite additivity);
(iii) for any E ∈ ℱ, 0 ≤ P(E) ≤ 1 and P(E^c) = 1 − P(E);
(iv) E_1, E_2 ∈ ℱ and E_1 ⊆ E_2 ⟹ P(E_2 − E_1) = P(E_2) − P(E_1) and P(E_1) ≤ P(E_2) (monotonicity of probability measures);
(v) E_1, E_2 ∈ ℱ ⟹ P(E_1 ∪ E_2) = P(E_1) + P(E_2) − P(E_1 ∩ E_2).

Proof.
(i) Let E_1 = Ω and E_i = ∅, i = 2, 3, .... Then E_i ∈ ℱ, i = 1, 2, ..., E_1 = ⋃_{i=1}^∞ E_i and E_i ∩ E_j = ∅ whenever i ≠ j. Therefore,

1 = P(Ω) = P(E_1) = P(⋃_{i=1}^∞ E_i) = ∑_{i=1}^∞ P(E_i) (using Axiom 2) = 1 + ∑_{i=2}^∞ P(∅),

so that ∑_{i=2}^∞ P(∅) = 0 and hence, since P(∅) ≥ 0 (Axiom 1), P(∅) = 0.
(ii) Let E_i = ∅, i = n + 1, n + 2, .... Then E_i ∈ ℱ, i = 1, 2, ..., E_i ∩ E_j = ∅ whenever i ≠ j, and P(E_i) = 0, i = n + 1, n + 2, .... Therefore,

P(⋃_{i=1}^n E_i) = P(⋃_{i=1}^∞ E_i) = ∑_{i=1}^∞ P(E_i) (using Axiom 2) = ∑_{i=1}^n P(E_i).

(iii) Let E ∈ ℱ. Then E ∪ E^c = Ω and E ∩ E^c = ∅. Therefore 1 = P(Ω) = P(E ∪ E^c) = P(E) + P(E^c) (using (ii)), so that P(E^c) = 1 − P(E) and, since P(E), P(E^c) ≥ 0, 0 ≤ P(E) ≤ 1.
(iv) Let E_1, E_2 ∈ ℱ with E_1 ⊆ E_2. Then E_2 − E_1 ∈ ℱ, E_1 ∩ (E_2 − E_1) = ∅ and E_2 = E_1 ∪ (E_2 − E_1) (see Figure 2.1). Therefore P(E_2) = P(E_1) + P(E_2 − E_1) (using (ii)), so that P(E_2 − E_1) = P(E_2) − P(E_1). As P(E_2 − E_1) ≥ 0, it follows that P(E_1) ≤ P(E_2).
(v) Let E_1, E_2 ∈ ℱ. Then E_2 − E_1 ∈ ℱ, E_1 ∩ (E_2 − E_1) = ∅ and E_1 ∪ E_2 = E_1 ∪ (E_2 − E_1) (see Figure 2.2). Therefore,

P(E_1 ∪ E_2) = P(E_1) + P(E_2 − E_1) (using (ii)).   (2.1)

Also (E_1 ∩ E_2) ∩ (E_2 − E_1) = ∅ and E_2 = (E_1 ∩ E_2) ∪ (E_2 − E_1) (see Figure 2.3), so that

P(E_2) = P(E_1 ∩ E_2) + P(E_2 − E_1), i.e., P(E_2 − E_1) = P(E_2) − P(E_1 ∩ E_2).   (2.2)

Using (2.1) and (2.2), we get P(E_1 ∪ E_2) = P(E_1) + P(E_2) − P(E_1 ∩ E_2). ▄
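The construction of Remark 2.2 (iv) is easy to play with in code. The sketch below (an illustration, not part of the notes) builds P from point masses p_i on a finite Ω and spot-checks a few of the properties established in Theorem 2.1; the choice of a fair die for the weights is an arbitrary one.

```python
import math

omega = [1, 2, 3, 4, 5, 6]
p = {w: 1 / 6 for w in omega}          # any non-negative weights summing to 1 would do

def P(event) -> float:
    return sum(p[w] for w in event)    # P(A) = sum of point masses over A

A, B = {1, 3, 5}, {1, 2}
assert math.isclose(P(omega), 1.0)                           # Axiom 3
assert math.isclose(P(set(omega) - A), 1 - P(A))             # Theorem 2.1 (iii)
assert P(A & B) <= min(P(A), P(B))                           # Theorem 2.1 (iv), monotonicity
assert math.isclose(P(A | B), P(A) + P(B) - P(A & B))        # Theorem 2.1 (v)
print("all checks passed")
```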
Theorem 2.2 (Inclusion-Exclusion Formula)
Let (Ω, ℱ, P) be a probability space and let E_1, ..., E_n ∈ ℱ (n ∈ ℕ, n ≥ 2). Then

P(⋃_{i=1}^n E_i) = ∑_{k=1}^n S_{k,n},

where, for k = 1, ..., n,

S_{k,n} = (−1)^(k−1) ∑_{1 ≤ i_1 < i_2 < ⋯ < i_k ≤ n} P(E_{i_1} ∩ E_{i_2} ∩ ⋯ ∩ E_{i_k});

in particular S_{1,n} = ∑_{i=1}^n P(E_i) and S_{2,n} = −∑_{1 ≤ i < j ≤ n} P(E_i ∩ E_j).

Proof. We will use the principle of mathematical induction. Using Theorem 2.1 (v), we have

P(E_1 ∪ E_2) = P(E_1) + P(E_2) − P(E_1 ∩ E_2) = S_{1,2} + S_{2,2},

where S_{1,2} = P(E_1) + P(E_2) and S_{2,2} = −P(E_1 ∩ E_2). Thus the result is true for n = 2. Now suppose that the result is true for n ∈ {2, 3, ..., m}, for some positive integer m ≥ 2. Then

P(⋃_{i=1}^{m+1} E_i) = P((⋃_{i=1}^m E_i) ∪ E_{m+1})
= P(⋃_{i=1}^m E_i) + P(E_{m+1}) − P((⋃_{i=1}^m E_i) ∩ E_{m+1}) (using the result for n = 2)
= ∑_{k=1}^m S_{k,m} + P(E_{m+1}) − P(⋃_{i=1}^m F_i),   (2.3)

where F_i = E_i ∩ E_{m+1}, i = 1, ..., m. Using the result for n = m again,

P(⋃_{i=1}^m F_i) = ∑_{k=1}^m T_{k,m}, where T_{k,m} = (−1)^(k−1) ∑_{1 ≤ i_1 < ⋯ < i_k ≤ m} P(E_{i_1} ∩ ⋯ ∩ E_{i_k} ∩ E_{m+1}).   (2.4)

Note that S_{1,m} + P(E_{m+1}) = S_{1,m+1}, that S_{k,m} collects exactly those terms of S_{k,m+1} that do not involve E_{m+1} while −T_{k−1,m} collects those that do (so S_{k,m} − T_{k−1,m} = S_{k,m+1} for k = 2, ..., m), and that −T_{m,m} = S_{m+1,m+1}. Using (2.4) in (2.3), we get

P(⋃_{i=1}^{m+1} E_i) = (S_{1,m} + P(E_{m+1})) + ∑_{k=2}^m (S_{k,m} − T_{k−1,m}) − T_{m,m} = ∑_{k=1}^{m+1} S_{k,m+1},

and the assertion follows by the principle of mathematical induction. ▄

Remark 2.3
(i) Let E_1, E_2, E_3 ∈ ℱ. Then, taking n = 3 in Theorem 2.2,

P(E_1 ∪ E_2 ∪ E_3) = P(E_1) + P(E_2) + P(E_3) − [P(E_1 ∩ E_2) + P(E_1 ∩ E_3) + P(E_2 ∩ E_3)] + P(E_1 ∩ E_2 ∩ E_3),

where the three groups of terms on the right-hand side are S_{1,3}, S_{2,3} and S_{3,3} respectively.
(ii) Note that S_{2,2} = −P(E_1 ∩ E_2) ≤ 0. Also, since 1 ≥ P(E_1 ∪ E_2) = P(E_1) + P(E_2) − P(E_1 ∩ E_2), we get P(E_1 ∩ E_2) ≥ P(E_1) + P(E_2) − 1. The above inequality is known as Bonferroni's inequality. ▄

Theorem 2.3
Let (Ω, ℱ, P) be a probability space and let E_1, ..., E_n ∈ ℱ (n ∈ ℕ, n ≥ 2). Then, with S_{k,n} as defined in Theorem 2.2:
(i) (Boole's Inequality) S_{1,n} + S_{2,n} ≤ P(⋃_{i=1}^n E_i) ≤ S_{1,n};
(ii) (Bonferroni's Inequality) P(⋂_{i=1}^n E_i) ≥ S_{1,n} − (n − 1).

Proof.
(i) We will use the principle of mathematical induction. For n = 2 both inequalities follow from Theorem 2.1 (v), since P(E_1 ∪ E_2) = S_{1,2} + S_{2,2} and S_{2,2} = −P(E_1 ∩ E_2) ≤ 0. Now suppose that, for some positive integer m ≥ 2 and arbitrary events B_1, ..., B_m ∈ ℱ,

P(⋃_{i=1}^m B_i) ≤ ∑_{i=1}^m P(B_i)   (2.5)

and

P(⋃_{i=1}^m B_i) ≥ ∑_{i=1}^m P(B_i) − ∑_{1 ≤ i < j ≤ m} P(B_i ∩ B_j).   (2.6)

Then, using (2.5) for k = 2 and then for k = m,

P(⋃_{i=1}^{m+1} E_i) = P((⋃_{i=1}^m E_i) ∪ E_{m+1}) ≤ P(⋃_{i=1}^m E_i) + P(E_{m+1}) ≤ ∑_{i=1}^{m+1} P(E_i) = S_{1,m+1}.

Also, using Theorem 2.1 (v),

P(⋃_{i=1}^{m+1} E_i) = P(⋃_{i=1}^m E_i) + P(E_{m+1}) − P(⋃_{i=1}^m (E_i ∩ E_{m+1})),

and, applying (2.6) to the first term and (2.5) to the last term,

P(⋃_{i=1}^{m+1} E_i) ≥ [∑_{i=1}^m P(E_i) − ∑_{1 ≤ i < j ≤ m} P(E_i ∩ E_j)] + P(E_{m+1}) − ∑_{i=1}^m P(E_i ∩ E_{m+1}) = S_{1,m+1} + S_{2,m+1}.

The assertion now follows by the principle of mathematical induction.
(ii) We have

P(⋂_{i=1}^n E_i) = 1 − P((⋂_{i=1}^n E_i)^c) = 1 − P(⋃_{i=1}^n E_i^c) ≥ 1 − ∑_{i=1}^n P(E_i^c) (using Boole's inequality)
= 1 − ∑_{i=1}^n (1 − P(E_i)) = ∑_{i=1}^n P(E_i) − (n − 1). ▄

Remark 2.4
Under the notation of Theorem 2.2 one can in fact prove the following more general inequalities: for k = 1, ..., ⌊n/2⌋,

∑_{j=1}^{2k} S_{j,n} ≤ P(⋃_{i=1}^n E_i) ≤ ∑_{j=1}^{2k−1} S_{j,n},

where ⌊n/2⌋ denotes the largest integer not exceeding n/2. ▄
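Here is a small numerical spot-check of the n = 3 case of the inclusion-exclusion formula written out in Remark 2.3 (i); the events below are my own example on a fair-die sample space, not taken from the text.

```python
def P(event):
    return len(event) / 6.0          # equally likely outcomes on {1,...,6}

E1, E2, E3 = {1, 2, 3}, {2, 3, 4}, {3, 4, 5, 6}

lhs = P(E1 | E2 | E3)
rhs = (P(E1) + P(E2) + P(E3)
       - P(E1 & E2) - P(E1 & E3) - P(E2 & E3)
       + P(E1 & E2 & E3))
assert abs(lhs - rhs) < 1e-12
print(lhs, rhs)      # both equal 1.0 here, since E1 ∪ E2 ∪ E3 = Ω
```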
Corollary 2.1
Let (Ω, ℱ, P) be a probability space and let E_1, ..., E_n ∈ ℱ be events. Then
(i) P(E_i) = 0, i = 1, ..., n ⟺ P(⋃_{i=1}^n E_i) = 0;
(ii) P(E_i) = 1, i = 1, ..., n ⟺ P(⋂_{i=1}^n E_i) = 1.

Proof.
(i) First suppose that P(E_i) = 0, i = 1, ..., n. Using Boole's inequality, 0 ≤ P(⋃_{i=1}^n E_i) ≤ ∑_{i=1}^n P(E_i) = 0, so P(⋃_{i=1}^n E_i) = 0. Conversely, suppose that P(⋃_{j=1}^n E_j) = 0. Then E_i ⊆ ⋃_{j=1}^n E_j, i = 1, ..., n, and therefore, by monotonicity, 0 ≤ P(E_i) ≤ P(⋃_{j=1}^n E_j) = 0, i = 1, ..., n.
(ii) We have

P(E_i) = 1, i = 1, ..., n ⟺ P(E_i^c) = 0, i = 1, ..., n ⟺ P(⋃_{i=1}^n E_i^c) = 0 (using (i)) ⟺ P((⋂_{i=1}^n E_i)^c) = 0 ⟺ P(⋂_{i=1}^n E_i) = 1. ▄

Definition 2.4
A countable collection {E_i : i ∈ Λ} of events is said to be exhaustive if P(⋃_{i∈Λ} E_i) = 1. ▄

Example 2.2 (Equally Likely Probability Models)
Consider a probability space (Ω, ℱ, P) with Ω = ⋃_{i=1}^k C_i for some positive integer k ≥ 2, where C_1, ..., C_k are mutually exclusive, exhaustive and equally likely events, i.e., C_i ∩ C_j = ∅ if i ≠ j, P(⋃_{i=1}^k C_i) = ∑_{i=1}^k P(C_i) = 1 and P(C_1) = ⋯ = P(C_k) = 1/k. Further suppose that an event E ∈ ℱ can be written as E = C_{i_1} ∪ C_{i_2} ∪ ⋯ ∪ C_{i_r}, where {i_1, ..., i_r} ⊆ {1, ..., k} and i_j ≠ i_l whenever j ≠ l. Then

P(E) = ∑_{j=1}^r P(C_{i_j}) = r/k = (number of cases favorable to E) / (total number of cases),

which is the same as the classical method of assigning probabilities. Note that here k is the total number of ways in which the random experiment can terminate (the number of partition sets C_1, ..., C_k) and r is the number of ways that are favorable to E ∈ ℱ. The assumption that C_1, ..., C_k are equally likely is a part of the probability modeling. For a finite sample space Ω, the sets C_1, ..., C_k may be taken to be the singleton subsets of Ω. Thus, when we say that an experiment has been performed at random we mean that the various possible outcomes in Ω are equally likely. For example, when we say that two numbers are chosen at random, without replacement, from the set {1, 2, 3}, then Ω = {{1, 2}, {1, 3}, {2, 3}} and P({1, 2}) = P({1, 3}) = P({2, 3}) = 1/3. ▄

Example 2.3
Suppose that five cards are drawn at random and without replacement from a deck of 52 cards. Here the sample space Ω comprises all C(52, 5) combinations of 5 cards, and all these combinations are equally likely. Let E_1 be the event that each drawn card is a spade. Then the number of cases favorable to E_1 is C(13, 5) and therefore

P(E_1) = C(13, 5) / C(52, 5).

Now let E_2 be the event that at least one of the drawn cards is a spade. Then E_2^c is the event that none of the drawn cards is a spade; the number of cases favorable to E_2^c is C(39, 5), and therefore

P(E_2^c) = C(39, 5) / C(52, 5) and P(E_2) = 1 − P(E_2^c) = 1 − C(39, 5)/C(52, 5).

Let E_3 be the event that among the drawn cards three are kings and two are queens. Then the number of cases favorable to E_3 is C(4, 3) C(4, 2), and therefore

P(E_3) = C(4, 3) C(4, 2) / C(52, 5).

Similarly, if E_4 is the event that among the drawn cards two are kings, two are queens and one is a jack, then

P(E_4) = C(4, 2) C(4, 2) C(4, 1) / C(52, 5). ▄
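The binomial-coefficient expressions of Example 2.3 are straightforward to evaluate; the sketch below (mine, for checking the arithmetic) uses math.comb.

```python
from math import comb

total = comb(52, 5)

p_all_spades       = comb(13, 5) / total                           # P(E1)
p_no_spade         = comb(39, 5) / total                           # P(E2^c)
p_at_least_one     = 1 - p_no_spade                                # P(E2)
p_3_kings_2_queens = comb(4, 3) * comb(4, 2) / total               # P(E3)
p_2k_2q_1j         = comb(4, 2) * comb(4, 2) * comb(4, 1) / total  # P(E4)

print(round(p_all_spades, 6), round(p_at_least_one, 6),
      round(p_3_kings_2_queens, 8), round(p_2k_2q_1j, 8))
```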
Example 2.4
Suppose that we have n (≥ 2) letters and n corresponding addressed envelopes. If the letters are inserted at random into the envelopes, find the probability that no letter is inserted into the correct envelope.

Solution. Let us label the letters as L_1, L_2, ..., L_n and the respective envelopes as A_1, A_2, ..., A_n. Let E_i denote the event that letter L_i is (correctly) inserted into envelope A_i, i = 1, ..., n. We need to find P(⋂_{i=1}^n E_i^c). We have

P(⋂_{i=1}^n E_i^c) = P((⋃_{i=1}^n E_i)^c) = 1 − P(⋃_{i=1}^n E_i) = 1 − ∑_{k=1}^n S_{k,n},

where, for k ∈ {1, ..., n}, S_{k,n} = (−1)^(k−1) ∑_{1 ≤ i_1 < i_2 < ⋯ < i_k ≤ n} P(E_{i_1} ∩ E_{i_2} ∩ ⋯ ∩ E_{i_k}). Note that n letters can be inserted into n envelopes in n! ways. Also, for 1 ≤ i_1 < i_2 < ⋯ < i_k ≤ n, E_{i_1} ∩ E_{i_2} ∩ ⋯ ∩ E_{i_k} is the event that the letters L_{i_1}, L_{i_2}, ..., L_{i_k} are inserted into the correct envelopes; clearly the number of cases favorable to this event is (n − k)!. Therefore, for 1 ≤ i_1 < ⋯ < i_k ≤ n,

P(E_{i_1} ∩ E_{i_2} ∩ ⋯ ∩ E_{i_k}) = (n − k)!/n!,

so that

S_{k,n} = (−1)^(k−1) C(n, k) (n − k)!/n! = (−1)^(k−1)/k!,

and hence

P(⋂_{i=1}^n E_i^c) = 1 − ∑_{k=1}^n (−1)^(k−1)/k! = 1/2! − 1/3! + 1/4! − ⋯ + (−1)^n/n!. ▄
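The matching-problem answer of Example 2.4 can be checked numerically. The sketch below (not from the notes) evaluates the series 1/2! − 1/3! + ⋯ + (−1)^n/n! and compares it with a brute-force count over all n! insertions for small n; both columns approach 1/e ≈ 0.3679 as n grows.

```python
from math import factorial
from itertools import permutations

def p_no_match_formula(n: int) -> float:
    # 1/2! - 1/3! + ... + (-1)^n / n!
    return sum((-1) ** k / factorial(k) for k in range(2, n + 1))

def p_no_match_bruteforce(n: int) -> float:
    perms = list(permutations(range(n)))
    derangements = sum(1 for p in perms if all(p[i] != i for i in range(n)))
    return derangements / len(perms)

for n in (2, 3, 4, 5, 6):
    print(n, p_no_match_formula(n), p_no_match_bruteforce(n))
```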
3. Conditional Probability and Independence of Events

Let (Ω, ℱ, P) be a given probability space. In many situations we may not be interested in the whole sample space Ω. Rather, we may be interested in a subset B ∈ ℱ of the sample space; this may happen, for example, when we know a priori that the outcome of the experiment has to be an element of B ∈ ℱ.

Example 3.1
Consider a random experiment of shuffling a deck of 52 cards in such a way that all 52! arrangements of the cards (when looked at from top to bottom) are equally likely. Here Ω consists of all 52! permutations of the cards and ℱ = P(Ω), the power set of Ω. Define the event T: the top card is a king. Now suppose that it is noticed that the bottom card is the king of hearts, an event we denote by B. In the light of this information the experiment will result in an outcome ω ∈ B, and the sample space effectively reduces to B, which comprises the 51! arrangements of the 52 cards with the king of hearts at the bottom. Note that

P_B(T) = (3 × 50!)/51!,

i.e.,

P_B(T) = (3 × 50!/52!)/(51!/52!) = P(T ∩ B)/P(B).   (3.1)

We call P_B(T) the conditional probability of the event T given that the experiment will result in an outcome in B (i.e., ω ∈ B), and P(T) the unconditional probability of the event T. ▄

Example 3.1 lays the ground for the introduction of the concept of conditional probability. Let (Ω, ℱ, P) be a given probability space and suppose that we know in advance that the outcome of the experiment has to be an element of B ∈ ℱ, where P(B) > 0. In such situations the sample space is B, and the natural contenders for membership of the event space are the sets A ∩ B, A ∈ ℱ. This raises the question of whether ℱ_B = {A ∩ B : A ∈ ℱ} is an event space, i.e., whether ℱ_B is a sigma-field of subsets of B.

Theorem 3.1
Let ℱ be a σ-field of subsets of Ω and let B ∈ ℱ. Define ℱ_B = {A ∩ B : A ∈ ℱ}. Then ℱ_B is a σ-field of subsets of B and ℱ_B ⊆ ℱ.

Proof. Since B ∈ ℱ and ℱ_B = {A ∩ B : A ∈ ℱ}, it is obvious that ℱ_B ⊆ ℱ. We have Ω ∈ ℱ and therefore B = Ω ∩ B ∈ ℱ_B. Also,

C ∈ ℱ_B ⟹ C = A ∩ B for some A ∈ ℱ ⟹ B − C = A^c ∩ B ∈ ℱ_B (since A^c ∈ ℱ; see Figure 3.1),

i.e., ℱ_B is closed under complements with respect to B. Now suppose that C_i ∈ ℱ_B, i = 1, 2, .... Then C_i = A_i ∩ B for some A_i ∈ ℱ, i = 1, 2, ..., and therefore

⋃_{i=1}^∞ C_i = (⋃_{i=1}^∞ A_i) ∩ B ∈ ℱ_B (since ⋃_{i=1}^∞ A_i ∈ ℱ),

i.e., ℱ_B is closed under countable unions. It follows that ℱ_B is a σ-field of subsets of B. ▄

Equation (3.1) suggests considering the set function P_B: ℱ_B → ℝ defined by

P_B(C) = P(C)/P(B), C ∈ ℱ_B.   (3.2)

Note that P_B(C) is well defined, since ℱ_B ⊆ ℱ.

Theorem 3.2
Let (Ω, ℱ, P) be a probability space and let B ∈ ℱ be such that P(B) > 0. Define ℱ_B = {A ∩ B : A ∈ ℱ}, P_B(C) = P(C)/P(B) for C ∈ ℱ_B, and P(A|B) = P_B(A ∩ B) = P(A ∩ B)/P(B) for A ∈ ℱ. Then (B, ℱ_B, P_B) and (Ω, ℱ, P(⋅|B)) are probability spaces.

Proof. Clearly P_B(C) ≥ 0, ∀C ∈ ℱ_B, and P_B(B) = P(B)/P(B) = 1. Let C_i ∈ ℱ_B, i = 1, 2, ..., be mutually exclusive. Then C_i ∈ ℱ, i = 1, 2, ... (since ℱ_B ⊆ ℱ), and

P_B(⋃_{i=1}^∞ C_i) = P(⋃_{i=1}^∞ C_i)/P(B) = (∑_{i=1}^∞ P(C_i))/P(B) = ∑_{i=1}^∞ P_B(C_i),

i.e., P_B is countably additive on ℱ_B. Thus P_B is a probability measure on ℱ_B. Similarly, P(A|B) ≥ 0 ∀A ∈ ℱ and P(Ω|B) = P(Ω ∩ B)/P(B) = P(B)/P(B) = 1. Moreover, if E_i ∈ ℱ, i = 1, 2, ..., are mutually exclusive, then the sets C_i = E_i ∩ B ∈ ℱ_B, i = 1, 2, ..., are mutually exclusive and

P(⋃_{i=1}^∞ E_i | B) = P_B(⋃_{i=1}^∞ C_i) = ∑_{i=1}^∞ P_B(C_i) = ∑_{i=1}^∞ P_B(E_i ∩ B) = ∑_{i=1}^∞ P(E_i|B).

It follows that P(⋅|B) is a probability measure on ℱ. ▄

Note that the domains of P_B(⋅) and P(⋅|B) are ℱ_B and ℱ respectively.

Definition 3.1
Let (Ω, ℱ, P) be a probability space and let B ∈ ℱ be a fixed event such that P(B) > 0. The set function P(⋅|B): ℱ → ℝ defined by

P(A|B) = P_B(A ∩ B) = P(A ∩ B)/P(B), A ∈ ℱ,

is called the conditional probability of the event A given that the outcome of the experiment is in B, or simply the conditional probability of A given B. ▄

Example 3.2
Six cards are dealt at random (without replacement) from a deck of 52 cards. Find the probability of getting all cards of heart in a hand (event A) given that there are at least 5 cards of heart in the hand (event B).

Solution. We have A ∩ B = A, so P(A ∩ B) = P(A) = C(13, 6)/C(52, 6), and P(B) = [C(13, 5) C(39, 1) + C(13, 6)]/C(52, 6). Therefore,

P(A|B) = P(A ∩ B)/P(B) = C(13, 6)/[C(13, 5) C(39, 1) + C(13, 6)]. ▄
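Example 3.2 reduces to a ratio of binomial coefficients; the following sketch (an illustrative check, not the author's code) evaluates P(A|B) = P(A ∩ B)/P(B) directly.

```python
from math import comb

total = comb(52, 6)
p_A         = comb(13, 6) / total                 # A ⊆ B, so P(A ∩ B) = P(A)
p_exactly_5 = comb(13, 5) * comb(39, 1) / total
p_B         = p_exactly_5 + p_A

print(p_A / p_B)     # conditional probability P(A | B)
```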
Remark 3.1
For events E_1, ..., E_n ∈ ℱ (n ≥ 2): if P(E_1) > 0, then P(E_1 ∩ E_2) = P(E_1) P(E_2|E_1). If P(E_1 ∩ E_2) > 0 (which also guarantees that P(E_1) > 0, since E_1 ∩ E_2 ⊆ E_1), then

P(E_1 ∩ E_2 ∩ E_3) = P(E_1 ∩ E_2) P(E_3|E_1 ∩ E_2) = P(E_1) P(E_2|E_1) P(E_3|E_1 ∩ E_2).

Using the principle of mathematical induction it can be shown that

P(⋂_{i=1}^n E_i) = P(E_1) P(E_2|E_1) P(E_3|E_1 ∩ E_2) ⋯ P(E_n|E_1 ∩ E_2 ∩ ⋯ ∩ E_{n−1}),

provided P(E_1 ∩ E_2 ∩ ⋯ ∩ E_{n−1}) > 0 (which also guarantees that P(E_1 ∩ E_2 ∩ ⋯ ∩ E_i) > 0, i = 1, ..., n − 1). ▄

Example 3.3
An urn contains four red and six black balls. Two balls are drawn successively, at random and without replacement, from the urn. Find the probability that the first draw resulted in a red ball and the second draw resulted in a black ball.

Solution. Define the events A: first draw results in a red ball, and B: second draw results in a black ball. Then the required probability is

P(A ∩ B) = P(A) P(B|A) = (4/10) × (6/9) = 12/90 = 2/15. ▄

For a countable collection {E_i : i ∈ Λ} of mutually exclusive and exhaustive events, the following theorem provides a relationship between the marginal probability P(E) of an event E ∈ ℱ and the joint probabilities P(E ∩ E_i) of the events E and E_i, i ∈ Λ.

Theorem 3.3 (Theorem of Total Probability)
Let (Ω, ℱ, P) be a probability space and let {E_i : i ∈ Λ} be a countable collection of mutually exclusive and exhaustive events (i.e., E_i ∩ E_j = ∅ whenever i ≠ j, i, j ∈ Λ, and P(⋃_{i∈Λ} E_i) = 1). Then, for any event E ∈ ℱ,

P(E) = ∑_{i∈Λ} P(E ∩ E_i).

In particular, if P(E_i) > 0, ∀i ∈ Λ, then P(E) = ∑_{i∈Λ} P(E|E_i) P(E_i).

Proof. Let F = ⋃_{i∈Λ} E_i. Then P(F) = 1 and P(F^c) = 1 − P(F) = 0. Therefore,

P(E) = P(E ∩ F) + P(E ∩ F^c) = P(E ∩ F) (since E ∩ F^c ⊆ F^c implies 0 ≤ P(E ∩ F^c) ≤ P(F^c) = 0)
= P(⋃_{i∈Λ} (E ∩ E_i)) = ∑_{i∈Λ} P(E ∩ E_i) (since the E_i are disjoint and hence so are the E ∩ E_i ⊆ E_i).

If, in addition, P(E_i) > 0 for all i ∈ Λ, then P(E ∩ E_i) = P(E|E_i) P(E_i), and the second assertion follows. ▄

Example 3.4
Urn U_1 contains 4 white and 6 black balls and urn U_2 contains 6 white and 4 black balls. A fair die is cast and urn U_1 is selected if the upper face of the die shows 5 or 6 dots; otherwise urn U_2 is selected. If a ball is drawn at random from the selected urn, find the probability that the drawn ball is white.

Solution. Define the events W: drawn ball is white, E_1: urn U_1 is selected, and E_2: urn U_2 is selected. Then {E_1, E_2} is a collection of mutually exclusive and exhaustive events. Therefore,

P(W) = P(E_1) P(W|E_1) + P(E_2) P(W|E_2) = (2/6) × (4/10) + (4/6) × (6/10) = 8/15. ▄

The following theorem provides a method for finding the probability of occurrence of an event in a past trial based on information on occurrences in future trials.

Theorem 3.4 (Bayes' Theorem)
Let (Ω, ℱ, P) be a probability space and let {E_i : i ∈ Λ} be a countable collection of mutually exclusive and exhaustive events with P(E_i) > 0, ∀i ∈ Λ. Then, for any event E ∈ ℱ with P(E) > 0,

P(E_j|E) = P(E|E_j) P(E_j) / ∑_{i∈Λ} P(E|E_i) P(E_i), j ∈ Λ.

Proof. We have, for j ∈ Λ,

P(E_j|E) = P(E_j ∩ E)/P(E) = P(E|E_j) P(E_j)/P(E) = P(E|E_j) P(E_j) / ∑_{i∈Λ} P(E|E_i) P(E_i),

using the Theorem of Total Probability. ▄

Remark 3.2
(i) Suppose that the occurrence of any one of the mutually exclusive and exhaustive events E_j, j ∈ Λ, causes the occurrence of an event E. Given that the event E has occurred, Bayes' theorem provides the conditional probability that the event E was caused by the occurrence of the event E_j, j ∈ Λ.
(ii) In Bayes' theorem the probabilities P(E_j), j ∈ Λ, are referred to as prior probabilities and the probabilities P(E_j|E), j ∈ Λ, are referred to as posterior probabilities. ▄

To see an application of Bayes' theorem let us revisit Example 3.4.

Example 3.5
Urn U_1 contains 4 white and 6 black balls and urn U_2 contains 6 white and 4 black balls. A fair die is cast and urn U_1 is selected if the upper face of the die shows five or six dots; otherwise urn U_2 is selected. A ball is drawn at random from the selected urn.
(i) Given that the drawn ball is white, find the conditional probability that it came from urn U_1.
(ii) Given that the drawn ball is white, find the conditional probability that it came from urn U_2.

Solution. Define the events W: drawn ball is white, E_1: urn U_1 is selected, and E_2: urn U_2 is selected, so that E_1 and E_2 are mutually exclusive and exhaustive.
(i) By Bayes' theorem,

P(E_1|W) = P(W|E_1) P(E_1) / [P(W|E_1) P(E_1) + P(W|E_2) P(E_2)] = [(4/10)(2/6)] / [(4/10)(2/6) + (6/10)(4/6)] = 1/4.

(ii) Since E_1 and E_2 are mutually exclusive and P(E_1 ∪ E_2 | W) = P(Ω|W) = 1, we have

P(E_2|W) = 1 − P(E_1|W) = 3/4. ▄
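The arithmetic of Example 3.5 is a direct application of the theorem of total probability and Bayes' theorem; the sketch below (mine, not from the notes) reproduces it.

```python
priors      = {"U1": 2 / 6, "U2": 4 / 6}        # P(E1), P(E2): chance each urn is selected
likelihoods = {"U1": 4 / 10, "U2": 6 / 10}      # P(W|E1), P(W|E2): chance of a white ball

p_white = sum(priors[u] * likelihoods[u] for u in priors)       # theorem of total probability
posterior_U1 = priors["U1"] * likelihoods["U1"] / p_white       # Bayes' theorem
posterior_U2 = 1 - posterior_U1

print(p_white, posterior_U1, posterior_U2)      # 8/15, 1/4, 3/4
```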
In the above example P(E_1|W) = 1/4 < 1/3 = P(E_1) and P(E_2|W) = 3/4 > 2/3 = P(E_2), i.e.,
(i) the probability of occurrence of the event E_1 decreases in the presence of the information that the outcome will be an element of W;
(ii) the probability of occurrence of the event E_2 increases in the presence of the information that the outcome will be an element of W.
Note that P(E_1|W) < P(E_1) ⟺ P(E_1 ∩ W) < P(E_1) P(W), and P(E_2|W) > P(E_2) ⟺ P(E_2 ∩ W) > P(E_2) P(W). These phenomena are related to the concept of association defined in the sequel.

Definition 3.2
Let (Ω, ℱ, P) be a probability space and let A and B be two events. Events A and B are said to be
(i) negatively associated if P(A ∩ B) < P(A) P(B);
(ii) positively associated if P(A ∩ B) > P(A) P(B);
(iii) independent if P(A ∩ B) = P(A) P(B). ▄

Remark 3.3
(i) If P(B) = 0 then P(A ∩ B) = 0 = P(A) P(B), ∀A ∈ ℱ, i.e., if P(B) = 0 then any event A ∈ ℱ and B are independent.
(ii) If P(B) > 0, then A and B are independent if, and only if, P(A|B) = P(A), i.e., if P(B) > 0, then the events A and B are independent if, and only if, the availability of the information that the event B has occurred does not alter the probability of occurrence of the event A. ▄

Now we define the concept of independence for an arbitrary collection of events.

Definition 3.3
Let (Ω, ℱ, P) be a probability space. Let Λ ⊆ ℝ be an index set and let {E_α : α ∈ Λ} be a collection of events in ℱ.
(i) The events {E_α : α ∈ Λ} are said to be pairwise independent if any pair of events E_α and E_β, α ≠ β, in the collection are independent, i.e., if P(E_α ∩ E_β) = P(E_α) P(E_β) whenever α, β ∈ Λ and α ≠ β.
(ii) Let Λ = {1, 2, ..., n}, for some n ∈ ℕ, so that {E_α : α ∈ Λ} = {E_1, ..., E_n} is a finite collection of events in ℱ. The events E_1, ..., E_n are said to be independent if, for any sub-collection {E_{α_1}, ..., E_{α_k}} of {E_1, ..., E_n} (k = 2, 3, ..., n),

P(⋂_{j=1}^k E_{α_j}) = ∏_{j=1}^k P(E_{α_j}).   (3.6)

(iii) Let Λ ⊆ ℝ be an arbitrary index set. The events {E_α : α ∈ Λ} are said to be independent if any finite sub-collection of events in {E_α : α ∈ Λ} forms a collection of independent events. ▄

Remark 3.4
(i) To verify that n events E_1, ..., E_n ∈ ℱ are independent one must verify 2^n − n − 1 (= ∑_{k=2}^n C(n, k)) conditions in (3.6). For example, to conclude that three events E_1, E_2 and E_3 are independent, the following 4 (= 2^3 − 3 − 1) conditions must be verified:
P(E_1 ∩ E_2) = P(E_1) P(E_2);
P(E_1 ∩ E_3) = P(E_1) P(E_3);
P(E_2 ∩ E_3) = P(E_2) P(E_3);
P(E_1 ∩ E_2 ∩ E_3) = P(E_1) P(E_2) P(E_3).
(ii) If the events E_1, ..., E_n are independent then, for any permutation (α_1, ..., α_n) of (1, ..., n), the events E_{α_1}, ..., E_{α_n} are also independent. Thus the notion of independence is symmetric in the events involved.
(iii) Events in any sub-collection of independent events are independent. In particular, independence of a collection of events implies their pairwise independence. ▄

The following example illustrates that, in general, pairwise independence of a collection of events may not imply their independence.

Example 3.6
Let Ω = {1, 2, 3, 4} and let ℱ = P(Ω), the power set of Ω. Consider the probability space (Ω, ℱ, P), where P({i}) = 1/4, i = 1, 2, 3, 4. Let A = {1, 4}, B = {2, 4} and C = {3, 4}. Then

P(A) = P(B) = P(C) = 1/2,
P(A ∩ B) = P(A ∩ C) = P(B ∩ C) = P({4}) = 1/4,
and P(A ∩ B ∩ C) = P({4}) = 1/4.

Clearly P(A ∩ B) = P(A) P(B), P(A ∩ C) = P(A) P(C) and P(B ∩ C) = P(B) P(C), but P(A ∩ B ∩ C) = 1/4 ≠ 1/8 = P(A) P(B) P(C). Thus A, B and C are pairwise independent, but A, B and C are not independent. ▄
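Example 3.6 can be verified mechanically; the snippet below (an illustration, not from the notes) checks the three pairwise conditions and the failed triple condition.

```python
def P(event):
    return len(event) / 4.0          # equal weights 1/4 on Omega = {1, 2, 3, 4}

A, B, C = {1, 4}, {2, 4}, {3, 4}

print(P(A & B) == P(A) * P(B))              # True
print(P(A & C) == P(A) * P(C))              # True
print(P(B & C) == P(B) * P(C))              # True
print(P(A & B & C) == P(A) * P(B) * P(C))   # False: 1/4 != 1/8
```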
Theorem 3.5
Let (Ω, ℱ, P) be a probability space and let A and B be independent events (A, B ∈ ℱ). Then
(i) A^c and B are independent events;
(ii) A and B^c are independent events;
(iii) A^c and B^c are independent events.

Proof. We have P(A ∩ B) = P(A) P(B).
(i) Since B = (A ∩ B) ∪ (A^c ∩ B) and (A ∩ B) ∩ (A^c ∩ B) = ∅, we have

P(B) = P(A ∩ B) + P(A^c ∩ B)
⟹ P(A^c ∩ B) = P(B) − P(A ∩ B) = P(B) − P(A) P(B) = (1 − P(A)) P(B) = P(A^c) P(B),

i.e., A^c and B are independent events.
(ii) Follows from (i) by interchanging the roles of A and B.
(iii) Follows on using (i) and (ii) sequentially. ▄

The following theorem strengthens the results of Theorem 3.5.

Theorem 3.6
Let (Ω, ℱ, P) be a probability space and let F_1, ..., F_n (n ∈ ℕ, n ≥ 2) be independent events in ℱ. Then, for any k ∈ {1, ..., n} and any permutation (α_1, ..., α_n) of (1, ..., n), the events F_{α_1}, ..., F_{α_k}, F_{α_{k+1}}^c, ..., F_{α_n}^c are independent. In particular, the events F_1^c, ..., F_n^c are independent.

Proof. Since the notion of independence is symmetric in the events involved, using backward induction it is enough to show that the events F_1, ..., F_{n−1}, F_n^c are independent; the general assertion then follows by applying this fact repeatedly, each time replacing one more event by its complement. For this, consider a sub-collection {F_{i_1}, ..., F_{i_m}, D} of {F_1, ..., F_{n−1}, F_n^c}, where {i_1, ..., i_m} ⊆ {1, ..., n − 1} and either D = F_n^c or D = F_j for some j ∈ {1, ..., n − 1} − {i_1, ..., i_m}. Thus the following two cases arise, depending on whether or not F_n^c is part of the sub-collection.
Case I: D = F_j for some j ∈ {1, ..., n − 1} − {i_1, ..., i_m}. In this case {F_{i_1}, ..., F_{i_m}, D} is a sub-collection of the independent events F_1, ..., F_n, and therefore

P(F_{i_1} ∩ ⋯ ∩ F_{i_m} ∩ D) = (∏_{j=1}^m P(F_{i_j})) P(D).

Case II: D = F_n^c. Since F_1, ..., F_n are independent, we have

P((⋂_{j=1}^m F_{i_j}) ∩ F_n) = (∏_{j=1}^m P(F_{i_j})) P(F_n) = P(⋂_{j=1}^m F_{i_j}) P(F_n),

i.e., the events ⋂_{j=1}^m F_{i_j} and F_n are independent. By Theorem 3.5 (ii), the events ⋂_{j=1}^m F_{i_j} and F_n^c are then also independent, i.e.,

P((⋂_{j=1}^m F_{i_j}) ∩ F_n^c) = P(⋂_{j=1}^m F_{i_j}) P(F_n^c) = (∏_{j=1}^m P(F_{i_j})) P(D).

Now the result follows on combining the two cases. ▄

When we say that two or more random experiments are independent (or that two or more random experiments are performed independently), it simply means that the events associated with the respective random experiments are independent.

4. Continuity of Probability Measures

We begin this section with the following definition.

Definition 4.1
Let (Ω, ℱ, P) be a probability space and let {A_n : n = 1, 2, ...} be a sequence of events in ℱ.
(i) We say that the sequence {A_n : n = 1, 2, ...} is increasing (written as A_n ↑) if A_n ⊆ A_{n+1}, n = 1, 2, ....
(ii) We say that the sequence {A_n : n = 1, 2, ...} is decreasing (written as A_n ↓) if A_{n+1} ⊆ A_n, n = 1, 2, ....
(iii) We say that the sequence {A_n : n = 1, 2, ...} is monotone if either A_n ↑ or A_n ↓.
(iv) If A_n ↑ we define the limit of the sequence {A_n : n = 1, 2, ...} as ⋃_{n=1}^∞ A_n and write Lim_{n→∞} A_n = ⋃_{n=1}^∞ A_n.
(v) If A_n ↓ we define the limit of the sequence {A_n : n = 1, 2, ...} as ⋂_{n=1}^∞ A_n and write Lim_{n→∞} A_n = ⋂_{n=1}^∞ A_n. ▄

Throughout, we will denote the limit of a monotone sequence {A_n : n = 1, 2, ...} of events by Lim_{n→∞} A_n and the limit of a sequence {a_n : n = 1, 2, ...} of real numbers (provided it exists) by lim_{n→∞} a_n.

Theorem 4.1 (Continuity of Probability Measures)
Let {A_n : n = 1, 2, ...} be a sequence of monotone events in a probability space (Ω, ℱ, P). Then

P(Lim_{n→∞} A_n) = lim_{n→∞} P(A_n).

Proof.
Case I: A_n ↑. In this case Lim_{n→∞} A_n = ⋃_{n=1}^∞ A_n. Define B_1 = A_1 and B_n = A_n − A_{n−1}, n = 2, 3, ... (see Figure 4.1). Then B_n ∈ ℱ, n = 1, 2, ..., the B_n's are mutually exclusive and ⋃_{n=1}^∞ B_n = ⋃_{n=1}^∞ A_n = Lim_{n→∞} A_n. Therefore,

P(Lim_{n→∞} A_n) = P(⋃_{n=1}^∞ B_n) = ∑_{n=1}^∞ P(B_n) = lim_{n→∞} ∑_{k=1}^n P(B_k)
= lim_{n→∞} [P(A_1) + ∑_{k=2}^n P(A_k − A_{k−1})]
= lim_{n→∞} [P(A_1) + ∑_{k=2}^n (P(A_k) − P(A_{k−1}))] (using Theorem 2.1 (iv), since A_{k−1} ⊆ A_k, k = 2, 3, ...)
= lim_{n→∞} P(A_n).

Case II: A_n ↓. In this case A_n^c ↑ and Lim_{n→∞} A_n = ⋂_{n=1}^∞ A_n. Therefore,

P(Lim_{n→∞} A_n) = P(⋂_{n=1}^∞ A_n) = 1 − P((⋂_{n=1}^∞ A_n)^c) = 1 − P(⋃_{n=1}^∞ A_n^c) = 1 − P(Lim_{n→∞} A_n^c)
= 1 − lim_{n→∞} P(A_n^c) (using Case I, since A_n^c ↑)
= 1 − lim_{n→∞} (1 − P(A_n)) = lim_{n→∞} P(A_n). ▄
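A discrete illustration of Theorem 4.1 (my own example, not from the text): on Ω = {1, 2, ...} with P({k}) = 2^(−k), the increasing events A_n = {1, ..., n} have limit Ω, and P(A_n) = 1 − 2^(−n) indeed increases to P(Lim A_n) = 1.

```python
def P_An(n: int) -> float:
    # P(A_n) = sum of the point masses 2**-k for k = 1, ..., n, i.e. 1 - 2**-n
    return sum(2.0 ** -k for k in range(1, n + 1))

for n in (1, 2, 5, 10, 20, 50):
    print(n, P_An(n))
# the values increase to 1 = P(Lim A_n), as the continuity theorem asserts
```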
Remark 4.1
Let (Ω, ℱ, P) be a probability space and let {E_i : i = 1, 2, ...} be a countably infinite collection of events in ℱ. Define B_n = ⋃_{i=1}^n E_i and C_n = ⋂_{i=1}^n E_i, n = 1, 2, .... Then B_n ↑, C_n ↓, Lim_{n→∞} B_n = ⋃_{n=1}^∞ B_n = ⋃_{i=1}^∞ E_i and Lim_{n→∞} C_n = ⋂_{n=1}^∞ C_n = ⋂_{i=1}^∞ E_i. Therefore, using Theorem 4.1,

P(⋃_{i=1}^∞ E_i) = P(Lim_{n→∞} B_n) = lim_{n→∞} P(B_n) = lim_{n→∞} P(⋃_{i=1}^n E_i) = lim_{n→∞} [S_{1,n} + S_{2,n} + ⋯ + S_{n,n}],

where the S_{k,n}'s are as defined in Theorem 2.2. Similarly,

P(⋂_{i=1}^∞ E_i) = P(Lim_{n→∞} C_n) = lim_{n→∞} P(C_n) = lim_{n→∞} P(⋂_{i=1}^n E_i).

Moreover, if {E_i : i = 1, 2, ...} is a collection of independent events, then

P(⋂_{i=1}^∞ E_i) = lim_{n→∞} P(⋂_{i=1}^n E_i) = lim_{n→∞} ∏_{i=1}^n P(E_i) = ∏_{i=1}^∞ P(E_i). ▄

Problems

1. Let Ω = {1, 2, 3, 4}. Check which of the following is a sigma-field of subsets of Ω:
(i) ℱ_1 = {∅, {1, 2}, {3, 4}, {1, 2, 3, 4}};
(ii) ℱ_2 = {∅, {1}, {2, 3, 4}, {1, 2}, {3, 4}, {1, 2, 3, 4}};
(iii) ℱ_3 = {∅, {1}, {2}, {1, 2}, {3, 4}, {1, 3, 4}, {2, 3, 4}, {1, 2, 3, 4}}.

2. Show that a class ℱ of subsets of Ω is a sigma-field of subsets of Ω if, and only if, the following three conditions are satisfied: (i) Ω ∈ ℱ; (ii) A ∈ ℱ ⟹ A^c = Ω − A ∈ ℱ; (iii) A_n ∈ ℱ, n = 1, 2, ... ⟹ ⋂_{n=1}^∞ A_n ∈ ℱ.

3. Let {ℱ_α : α ∈ Λ} be a collection of sigma-fields of subsets of Ω.
(i) Show that ⋂_{α∈Λ} ℱ_α is a sigma-field.
(ii) Using a counter example, show that ⋃_{α∈Λ} ℱ_α may not be a sigma-field.
(iii) Let C be a class of subsets of Ω and let {ℱ_α : α ∈ Λ} be the collection of all sigma-fields that contain the class C. Show that σ(C) = ⋂_{α∈Λ} ℱ_α, where σ(C) denotes the smallest sigma-field containing the class C (i.e., the sigma-field generated by the class C).
4. Let Ω be an infinite set and let A = {A ⊆ Ω : A is finite or A^c is finite}.
(i) Show that A is closed under complements and finite unions.
(ii) Using a counter example, show that A may not be closed under countably infinite unions (and hence A may not be a sigma-field).

5. (i) Let Ω be an uncountable set and let ℱ = {A ⊆ Ω : A is countable or A^c is countable}.
(a) Show that ℱ is a sigma-field.
(b) What can you say about ℱ when Ω is countable?
(ii) Let Ω be a countable set and let C = {{ω} : ω ∈ Ω}. Show that σ(C) = P(Ω), the power set of Ω.

6. Let ℱ = P(Ω), the power set of Ω = {0, 1, 2, ...}. In each of the following cases, verify whether (Ω, ℱ, P) is a probability space:
(i) P(A) = ∑_{x∈A} e^(−λ) λ^x / x!, A ∈ ℱ, λ > 0;
(ii) P(A) = ∑_{x∈A} p(1 − p)^x, A ∈ ℱ, 0 < p < 1;
(iii) P(A) = 0 if A has a finite number of elements, and P(A) = 1 if A has an infinite number of elements, A ∈ ℱ.

7. Let (Ω, ℱ, P) be a probability space and let A and B be two events (i.e., A, B ∈ ℱ).
(i) Show that the probability that exactly one of the events A or B will occur is given by P(A) + P(B) − 2P(A ∩ B).
(ii) Show that P(A ∩ B) − P(A)P(B) = P(A^c)P(B) − P(A^c ∩ B) = P(A)P(B^c) − P(A ∩ B^c) = P((A ∪ B)^c) − P(A^c)P(B^c).

8. Suppose that P(A) = 0.6, P(B) = 0.5, P(C) = 0.4, P(A ∩ B) = 0.3, P(A ∩ C) = 0.2, P(B ∩ C) = 0.2, P(A ∩ B ∩ C) = 0.1, P(B ∩ D) = P(C ∩ D) = 0, P(A ∩ D) = 0.1 and P(D) = 0.2. Find:
(i) P(A ∪ B ∪ C) and P(A^c ∩ B^c ∩ C^c);
(ii) P((A ∪ B) ∩ C) and P((A ∪ B) ∩ C^c);
(iii) P((A ∪ B)^c ∩ C) and P(A^c ∩ (B ∪ C));
(iv) P(B ∩ C ∩ D) and P(A ∩ C ∩ D);
(v) P(A ∪ B ∪ D) and P(A ∪ B ∪ C ∪ D);
(vi) P((A ∩ B) ∪ (C ∩ D)).

9. Consider an empty box in which four balls are to be placed (one-by-one) according to the following scheme. A fair die is cast each time and the number of dots on the upper face is noted. If the upper face shows 2 or 5 dots then a white ball is placed in the box; otherwise a black ball is placed in the box. Given that the first ball placed in the box was white, find the probability that the box will contain exactly two black balls.

10. Suppose that n (≥ 3) persons P_1, ..., P_n are made to stand in a row at random. Find the probability that there are exactly r persons between P_1 and P_2; here r ∈ {1, 2, ..., n − 2}.

11. A point (X, Y) is randomly chosen on the unit square S = {(x, y) : 0 ≤ x ≤ 1, 0 ≤ y ≤ 1} (i.e., for any region R ⊆ S for which the area is defined, the probability that (X, Y) lies in R is (area of R)/(area of S)). Find the probability that the distance from (X, Y) to the nearest side does not exceed 1/3 units.

12. Three numbers a, b and c are chosen at random and with replacement from the set {1, 2, ..., 6}. Find the probability that the quadratic equation ax² + bx + c = 0 will have real root(s).

13. Three numbers are chosen at random from the set {1, 2, ..., n}. Find the probability that the chosen numbers are in (i) arithmetic progression; (ii) geometric progression.

14. Let (Ω, ℱ, P) be a probability space such that ℱ is the smallest sigma-field containing all subintervals of Ω = [0, 1] and P((a, b]) = b − a, whenever 0 ≤ a < b ≤ 1 (such a probability measure is known to exist).
(i) Show that {b} = ⋂_{n=1}^∞ (b − 1/(n+1), b], ∀b ∈ (0, 1], and hence that P({b}) = 0, ∀b ∈ (0, 1].
(ii) Show that P((a, b)) = P([a, b]) = P([a, b)) = b − a, ∀ 0 ≤ a < b ≤ 1, and that P([0, 1)) = 1 (note that here P({b}) = 0 but {b} ≠ ∅, and P([0, 1)) = 1 but [0, 1) ≠ Ω).
(iii) Show that P(A) = 0 for any countable set A ∈ ℱ.
(iv) For n ∈ ℕ, let A_n = (0, 1/n] and B_n = (1/2 + 1/(n+2), 1]. Verify that A_n ↓, B_n ↑, P(Lim_{n→∞} A_n) = lim_{n→∞} P(A_n) and P(Lim_{n→∞} B_n) = lim_{n→∞} P(B_n).
15. Consider four coding machines M_1, M_2, M_3 and M_4 producing binary codes 0 and 1. The machine M_1 produces codes 0 and 1 with respective probabilities 1/4 and 3/4. The code produced by machine M_k is fed into machine M_{k+1} (k = 1, 2, 3), which may either leave the received code unchanged or may change it. Suppose that each of the machines M_2, M_3 and M_4 changes the received code with probability 3/4. Given that the machine M_4 has produced code 1, find the conditional probability that the machine M_1 produced code 0.

16. A student appears in the examinations of four subjects: Biology, Chemistry, Physics and Mathematics. Suppose that the probabilities of the student clearing the examinations in these subjects are 1/2, 1/3, 1/4 and 1/5 respectively. Assuming that the performances of the student in the four subjects are independent, find the probability that the student will clear the examination(s) of (i) all the subjects; (ii) no subject; (iii) exactly one subject; (iv) exactly two subjects; (v) at least one subject.

17. Let (Ω, ℱ, P) be a probability space and let A_1, A_2, ... be a sequence of events. Define B_n = ⋂_{i=n}^∞ A_i, C_n = ⋃_{i=n}^∞ A_i, D = ⋃_{n=1}^∞ B_n and E = ⋂_{n=1}^∞ C_n. Show that:
(i) D is the event that all but a finite number of the A_n's occur and E is the event that infinitely many of the A_n's occur;
(ii) D ⊆ E;
(iii) P(E) = lim_{n→∞} P(C_n) = lim_{n→∞} lim_{m→∞} P(⋃_{i=n}^m A_i) and P(D) = lim_{n→∞} P(B_n);
(iv) if ∑_{n=1}^∞ P(A_n) < ∞ then, with probability one, only finitely many of the A_n's will occur;
(v) if A_1, A_2, ... are independent and ∑_{n=1}^∞ P(A_n) = ∞ then, with probability one, infinitely many of the A_n's will occur.

18. Let A and B be independent events. Show that max{P((A ∪ B)^c), P(A ∩ B), P(A Δ B)} ≥ 4/9, where A Δ B = (A − B) ∪ (B − A).

19. For independent events A_1, ..., A_n, show that P(⋂_{i=1}^n A_i^c) ≤ e^(−∑_{i=1}^n P(A_i)).

20. Let A, B and C be three events such that P(B ∩ C) > 0. Prove or disprove each of the following:
(i) P(A ∩ B | C) = P(A | B ∩ C) P(B | C);
(ii) P(A ∩ B | C) = P(A | C) P(B | C) if A and B are independent events.

21. Let A, B and C be three events such that A and B are negatively (positively) associated and B and C are negatively (positively) associated. Can we conclude that, in general, A and C are negatively (positively) associated?

22. A locality has n houses numbered 1, ..., n and a terrorist is hiding in one of these houses. Let E_j denote the event that the terrorist is hiding in the house numbered j, and let P(E_j) = p_j ∈ (0, 1), j = 1, ..., n. During a search operation, let F_j denote the event that the search of house number j will fail to nab the terrorist there, and let P(F_j | E_j) = r_j ∈ (0, 1), j = 1, ..., n. For each i, j ∈ {1, ..., n}, i ≠ j, show that E_j and F_j are negatively associated but E_i and F_j are positively associated. Interpret these findings.

23. Let (Ω, ℱ, P) be a probability space and let A and B be two events (i.e., A, B ∈ ℱ). Show that if A and B are positively (negatively) associated then A and B^c are negatively (positively) associated.

24. A k-out-of-n system is a system comprising n components that functions if, and only if, at least k (k ∈ {1, ..., n}) of the components function. A 1-out-of-n system is called a parallel system and an n-out-of-n system is called a series system. Consider n components C_1, ..., C_n that function independently. At any given time t the probability that the component C_i will be functioning is p_i(t) ∈ (0, 1) and the probability that it will not be functioning at time t is 1 − p_i(t), i = 1, ..., n.
(i) Find the probability that a parallel system comprising components C_1, ..., C_n will function at time t.
(ii) Find the probability that a series system comprising components C_1, ..., C_n will function at time t.
(iii) If p_i(t) = p(t), i = 1, ..., n, find the probability that a k-out-of-n system comprising components C_1, ..., C_n will function at time t.