LECTURE 7: Kernel Density Estimation

- Non-parametric density estimation
- Histograms
- Parzen windows
- Smooth kernels
- Product kernel density estimation
- The Naïve Bayes classifier

Introduction to Pattern Analysis
Ricardo Gutierrez-Osuna, Texas A&M University


Non-parametric density estimation

- In the previous two lectures we assumed that either
  - the likelihoods p(x|ωi) were known (Likelihood Ratio Test), or
  - at least the parametric form of the likelihoods was known (Parameter Estimation).
- The methods presented in the next two lectures do not afford such luxuries. Instead, they attempt to estimate the density directly from the data, without making any parametric assumptions about the underlying distribution. Sounds challenging? You bet!

[Figure: scatter plot of a labeled two-dimensional dataset in the (x1, x2) plane and the corresponding non-parametric estimate of P(x1, x2 | ωi).]


The histogram (1)

- The simplest form of non-parametric density estimation is the familiar histogram: divide the sample space into a number of bins and approximate the density at the center of each bin by the fraction of points in the training data that fall into the corresponding bin:

  P_H(x) = [number of x^(k) in the same bin as x] / (N · [width of the bin containing x])

- The histogram requires two "parameters" to be defined: the bin width and the starting position of the first bin.

[Figure: histogram estimate p(x) of a one-dimensional sample.]


The histogram (2)

- The histogram is a very simple form of density estimation, but it has several drawbacks:
  - The density estimate depends on the starting position of the bins; for multivariate data, it is also affected by the orientation of the bins.
  - The discontinuities of the estimate are not due to the underlying density; they are only an artifact of the chosen bin locations. These discontinuities make it very difficult (for the naïve analyst) to grasp the structure of the data.
  - A much more serious problem is the curse of dimensionality, since the number of bins grows exponentially with the number of dimensions. In high dimensions we would require a very large number of examples, or else most of the bins would be empty.
- All these drawbacks make the histogram unsuitable for most practical applications, except for rapid visualization of results in one or two dimensions. Therefore, we will not spend more time looking at the histogram.
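Before moving on, the estimate P_H(x) is simple enough to state in a few lines of code. The sketch below is only an illustration of the formula above; the bin width, the starting position of the first bin, and the sample data are assumptions chosen for the example, not values from the lecture.

```python
import numpy as np

def histogram_density(x, data, bin_width=1.0, start=0.0):
    """Histogram estimate P_H(x): fraction of training points that fall in the
    bin containing x, divided by the bin width."""
    data = np.asarray(data, dtype=float)
    n = len(data)
    # Index of the bin that contains x, given the starting position of the first bin
    bin_idx = np.floor((x - start) / bin_width)
    # Count how many training points fall into that same bin
    in_bin = np.floor((data - start) / bin_width) == bin_idx
    return in_bin.sum() / (n * bin_width)

# Example (assumed data): density at x = 5.2 from a small one-dimensional sample
sample = [4.1, 4.7, 5.3, 5.5, 6.2, 7.8, 8.1]
print(histogram_density(5.2, sample, bin_width=1.0, start=0.0))
```

Shifting `start` by half a bin width and re-running illustrates the first drawback listed above: the estimate changes even though the data do not.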
Non-parametric density estimation, general formulation (1)

- Let us return to the basic definition of probability to get a solid idea of what we are trying to accomplish.
- The probability that a vector x, drawn from a distribution p(x), will fall in a given region ℜ of the sample space is

  P = ∫ℜ p(x') dx'

- Suppose now that N vectors {x^(1), x^(2), ..., x^(N)} are drawn from the distribution. The probability that k of these N vectors fall in ℜ is given by the binomial distribution

  P(k) = (N choose k) · P^k · (1 − P)^(N−k)

- It can be shown (from the properties of the binomial p.m.f.) that the mean and variance of the ratio k/N are

  E[k/N] = P   and   Var[k/N] = E[(k/N − P)²] = P(1 − P) / N

- Therefore, as N→∞ the distribution becomes sharper (the variance gets smaller), so we can expect that a good estimate of the probability P can be obtained from the mean fraction of the points that fall within ℜ:

  P ≅ k / N

[Bishop, 1995]


Non-parametric density estimation, general formulation (2)

- On the other hand, if we assume that ℜ is so small that p(x) does not vary appreciably within it, then

  ∫ℜ p(x') dx' ≅ p(x) · V

  where V is the volume enclosed by region ℜ.
- Merging this with the previous result we obtain

  P = ∫ℜ p(x') dx' ≅ p(x) · V   and   P ≅ k/N   ⇒   p(x) ≅ k / (N · V)

- This estimate becomes more accurate as we increase the number of sample points N and shrink the volume V.
- In practice, however, the value of N (the total number of examples) is fixed. To improve the accuracy of the estimate p(x) we could let V approach zero, but then the region ℜ would become so small that it would enclose no examples. This means that, in practice, we will have to find a compromise value for the volume V:
  - large enough to include enough examples within ℜ,
  - small enough to support the assumption that p(x) is constant within ℜ.

[Bishop, 1995]


Non-parametric density estimation, general formulation (3)

- In conclusion, the general expression for non-parametric density estimation becomes

  p(x) ≅ k / (N · V)

  where V is the volume surrounding x, N is the total number of examples, and k is the number of examples inside V.
- When applying this result to practical density estimation problems, two basic approaches can be adopted:
  - We can choose a fixed value of the volume V and determine k from the data. This leads to the methods commonly referred to as Kernel Density Estimation (KDE), which are the subject of this lecture.
  - We can choose a fixed value of k and determine the corresponding volume V from the data. This gives rise to the k Nearest Neighbor (kNN) approach, which will be covered in the next lecture.
- It can be shown that both KDE and kNN converge to the true probability density as N→∞, provided that V shrinks with N, and k grows with N, appropriately.

[Bishop, 1995]
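To make the two options concrete, the sketch below applies p(x) ≅ k/(NV) in one dimension both ways: with a fixed region of width h centered at x (k is counted from the data, the KDE route) and with a fixed k (the volume is taken from the distance to the k-th nearest sample, the kNN route). The data, h, and k are assumptions made for the illustration; this is not code from the lecture.

```python
import numpy as np

def fixed_volume_estimate(x, data, h):
    """KDE route: fix the region (an interval of width h centered at x),
    count how many samples k fall inside, and return k / (N * V)."""
    data = np.asarray(data, dtype=float)
    k = np.sum(np.abs(data - x) < h / 2)
    return k / (len(data) * h)

def fixed_k_estimate(x, data, k):
    """kNN route: fix k and let the volume be the width of the smallest
    interval centered at x that reaches the k-th nearest sample."""
    data = np.asarray(data, dtype=float)
    dists = np.sort(np.abs(data - x))
    V = 2 * dists[k - 1]              # interval radius = distance to k-th neighbor
    return k / (len(data) * V)

sample = np.array([2.1, 2.4, 3.0, 3.2, 3.9, 5.5, 7.0])   # assumed data
print(fixed_volume_estimate(3.0, sample, h=1.0))          # fixed V, k from data
print(fixed_k_estimate(3.0, sample, k=3))                 # fixed k, V from data
```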
Parzen windows (1)

- Suppose that the region ℜ that encloses the k examples is a hypercube with sides of length h centered at the estimation point x. Its volume is then V = h^D, where D is the number of dimensions.
- To find the number of examples that fall within this region, we define a kernel function K(u):

  K(u) = 1  if |u_j| < 1/2 for all j = 1, ..., D;   K(u) = 0 otherwise

- This kernel, which corresponds to a unit hypercube centered at the origin, is known as a Parzen window or the naïve estimator.
- The quantity K((x − x^(n))/h) is then equal to unity if the point x^(n) is inside a hypercube of side h centered on x, and zero otherwise.

[Bishop, 1995]


Parzen windows (2)

- The total number of points inside the hypercube is then

  k = Σ_{n=1..N} K((x − x^(n)) / h)

- Substituting back into the expression for the density estimate:

  p_KDE(x) = (1 / (N · h^D)) Σ_{n=1..N} K((x − x^(n)) / h)

- Notice that the Parzen window density estimate resembles the histogram, with the exception that the bin locations are determined by the data points.

[Figure: one-dimensional example with windows of width h and height 1/V centered at the estimation point; K(x − x^(1)) = 1, K(x − x^(2)) = 1, K(x − x^(3)) = 1, K(x − x^(4)) = 0.]

[Bishop, 1995]


Parzen windows (3)

- To understand the role of the kernel function, we compute the expectation of the probability estimate p_KDE(x):

  E[p_KDE(x)] = (1/(N·h^D)) Σ_{n=1..N} E[ K((x − x^(n))/h) ] = (1/h^D) E[ K((x − x^(n))/h) ] = (1/h^D) ∫ K((x − x')/h) p(x') dx'

  where we have assumed that the vectors x^(n) are drawn independently from the true density p(x).
- We can see that the expectation of the estimated density p_KDE(x) is a convolution of the true density p(x) with the kernel function. The width h of the kernel therefore plays the role of a smoothing parameter: the wider the kernel function, the smoother the estimate p_KDE(x).
- For h→0, the kernel approaches a Dirac delta function and p_KDE(x) approaches the true density. However, in practice we have a finite number of points, so h cannot be made arbitrarily small, since the density estimate p_KDE(x) would then degenerate to a set of impulses located at the training data points.

[Bishop, 1995]


Numeric exercise

- Given the dataset below, use Parzen windows to estimate the density p(x) at y = 3, 10 and 15. Use a bandwidth of h = 4.

  X = {x^(1), x^(2), ..., x^(N)} = {4, 5, 5, 6, 12, 14, 15, 15, 16, 17}

- Solution: let's first draw the dataset to get an idea of what numerical results we should expect.

[Figure: the ten data points on the x axis, with the estimation points y = 3, y = 10 and y = 15 marked.]

- Let's now estimate p(y = 3):

  p_KDE(y=3) = (1/(10·4)) Σ_{n=1..10} K((3 − x^(n))/4)
             = (1/(10·4)) [K(−1/4) + K(−1/2) + K(−1/2) + K(−3/4) + ... + K(−13/4)]
             = (1/(10·4)) [1 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0] = 1/40 = 0.025

- Similarly,

  p_KDE(y=10) = (1/(10·4)) [0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0] = 0

  p_KDE(y=15) = (1/(10·4)) [0 + 0 + 0 + 0 + 0 + 1 + 1 + 1 + 1 + 0] = 4/40 = 0.1
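The arithmetic above is easy to check in code. The following sketch implements the one-dimensional Parzen-window estimate with the unit-hypercube kernel (K(u) = 1 when |u| < 1/2, 0 otherwise) and evaluates it on the dataset of the exercise with h = 4; it reproduces the values 0.025, 0 and 0.1.

```python
def parzen_window(u):
    """Unit hypercube (naive estimator) kernel: 1 if |u| < 1/2, else 0."""
    return 1.0 if abs(u) < 0.5 else 0.0

def parzen_estimate(y, data, h):
    """1-D Parzen-window estimate p(y) = (1/(N*h)) * sum_n K((y - x_n)/h)."""
    return sum(parzen_window((y - x) / h) for x in data) / (len(data) * h)

X = [4, 5, 5, 6, 12, 14, 15, 15, 16, 17]
for y in (3, 10, 15):
    print(y, parzen_estimate(y, X, h=4))   # prints 0.025, 0.0, 0.1
```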
Smooth kernels (1)

- The Parzen window has several drawbacks:
  - it yields density estimates that have discontinuities, and
  - it weights all points x^(n) equally, regardless of their distance to the estimation point x.
- It is easy to overcome some of these issues by generalizing the Parzen window to a smooth kernel function K(u) such that

  ∫_{R^D} K(x) dx = 1

- Usually, but not always, K(u) will be a radially symmetric and unimodal probability density function, such as the multivariate Gaussian

  K(x) = (1 / (2π)^{D/2}) exp(−x^T x / 2)

- The expression of the density estimate remains the same as with Parzen windows:

  p_KDE(x) = (1/(N·h^D)) Σ_{n=1..N} K((x − x^(n)) / h)

[Figure: the box-shaped Parzen window Parzen(u) and a smooth kernel K(u), both with unit area A = 1.]

[Bishop, 1995]


Smooth kernels (2)

- Just as the Parzen window estimate can be considered a sum of boxes centered at the observations, the smooth kernel estimate is a sum of "bumps" placed at the data points.
- The kernel function determines the shape of the bumps; the parameter h, also called the smoothing parameter or bandwidth, determines their width.

[Figure: a one-dimensional density estimate P_KDE(x) with h = 3, drawn as the sum of kernel functions placed at the data points.]


Choosing the bandwidth: univariate case (1)

- The problem of choosing the bandwidth is crucial in density estimation:
  - a large bandwidth will over-smooth the density and mask the structure in the data;
  - a small bandwidth will yield a density estimate that is spiky and very hard to interpret.

[Figure: kernel density estimates P_KDE(x) of the same dataset for h = 10.0, 5.0, 2.5 and 1.0.]


Choosing the bandwidth: univariate case (2)

- We would like to find a value of the smoothing parameter that minimizes the error between the estimated density and the true density.
- A natural measure is the mean square error at the estimation point x, defined by

  MSE_x(p_KDE) = E[(p_KDE(x) − p(x))²] = {E[p_KDE(x) − p(x)]}² + Var(p_KDE(x))

  where the first term is the squared bias and the second term is the variance.
  - The bias of an estimate is the systematic error incurred in the estimation.
  - The variance of an estimate is the random error incurred in the estimation.
- This expression is an example of the bias-variance tradeoff that we saw earlier in the course: the bias can be reduced at the expense of the variance, and vice versa.
- The bias-variance dilemma applied to bandwidth selection simply means that
  - a large bandwidth will reduce the differences among the estimates of p_KDE(x) for different data sets (the variance), but it will increase the bias of p_KDE(x) with respect to the true density p(x);
  - a small bandwidth will reduce the bias of p_KDE(x), at the expense of a larger variance in the estimates p_KDE(x).

[Figure: the true density and multiple kernel density estimates from different datasets; a large bandwidth (h = 2.0) yields high bias and low variance, a small bandwidth (h = 0.1) yields low bias and high variance.]


Bandwidth selection methods, univariate case (3)

- Subjective choice
  - The natural way of choosing the smoothing parameter is to plot out several curves and choose the estimate that is most in accordance with one's prior (subjective) ideas.
  - However, this method is not practical in pattern recognition, since we typically have high-dimensional data.
- Reference to a standard distribution
  - Assume a standard density function and find the value of the bandwidth that minimizes the mean integrated square error (MISE):

    h_opt = argmin_h { MISE(p_KDE(x)) } = argmin_h { E[ ∫ (p_KDE(x) − p(x))² dx ] }

  - If we assume that the true distribution is Gaussian and we use a Gaussian kernel, it can be shown that the optimal value of the bandwidth becomes

    h_opt = 1.06 · σ · N^(−1/5)

    where σ is the sample standard deviation and N is the number of training examples.

[Silverman, 1986]
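As a concrete illustration of the rule of thumb, the sketch below estimates a one-dimensional density with a Gaussian kernel and h = 1.06 · σ · N^(−1/5). The synthetic sample and the evaluation points are assumptions made for the example.

```python
import numpy as np

def gaussian_kernel(u):
    """Univariate Gaussian kernel."""
    return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

def kde(x, data, h):
    """1-D kernel density estimate p(x) = (1/(N*h)) * sum_n K((x - x_n)/h)."""
    data = np.asarray(data, dtype=float)
    return gaussian_kernel((x - data) / h).sum() / (len(data) * h)

rng = np.random.default_rng(0)
data = rng.normal(loc=10.0, scale=2.0, size=200)    # assumed synthetic sample

sigma = data.std(ddof=1)
h_opt = 1.06 * sigma * len(data) ** (-1 / 5)        # rule-of-thumb bandwidth

for x in (6.0, 10.0, 14.0):
    print(x, kde(x, data, h_opt))
```

Re-running the loop with a much larger or much smaller h than h_opt reproduces the over-smoothing and spikiness discussed above.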
Bandwidth selection methods, univariate case (4)

- Better results can be obtained if we use a robust measure of the spread instead of the sample standard deviation, and if we reduce the coefficient 1.06 to better cope with multimodal densities. The optimal bandwidth then becomes

  h_opt = 0.9 · A · N^(−1/5)   where   A = min(σ, IQR / 1.34)

  - IQR is the interquartile range, a robust estimate of the spread. It is computed as the difference between the 75th percentile (Q3) and the 25th percentile (Q1): IQR = Q3 − Q1.
  - A percentile rank is the proportion of examples in a distribution that a specific example is greater than or equal to.
- Likelihood cross-validation
  - The maximum-likelihood estimate of h is degenerate, since it yields h_ML = 0: a density estimate with Dirac delta functions at each training data point.
  - A practical alternative is to maximize the "pseudo-likelihood" computed using leave-one-out cross-validation:

    h_MLCV = argmax_h { (1/N) Σ_{n=1..N} log p_{−n}(x^(n)) }

    where   p_{−n}(x^(n)) = (1 / ((N−1)·h)) Σ_{m=1..N, m≠n} K((x^(n) − x^(m)) / h)

[Silverman, 1986]


Likelihood cross-validation

[Figure: the leave-one-out estimates p_{−1}(x), p_{−2}(x), p_{−3}(x) and p_{−4}(x), each built without one training point and evaluated at the held-out point x^(1), x^(2), x^(3), x^(4) respectively.]
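One direct way to read the pseudo-likelihood criterion is as a grid search: for each candidate h, build the leave-one-out estimates p_{−n}(x^(n)) and keep the h with the largest average log value. The sketch below does this with a Gaussian kernel; the candidate grid, the kernel choice, and the sample data are assumptions made for the illustration.

```python
import numpy as np

def gaussian_kernel(u):
    return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

def pseudo_log_likelihood(data, h):
    """Average of log p_{-n}(x_n), where p_{-n} is the KDE built from all points but x_n."""
    data = np.asarray(data, dtype=float)
    N = len(data)
    total = 0.0
    for n in range(N):
        others = np.delete(data, n)
        p_minus_n = gaussian_kernel((data[n] - others) / h).sum() / ((N - 1) * h)
        total += np.log(p_minus_n)
    return total / N

rng = np.random.default_rng(1)
data = rng.normal(0.0, 1.0, size=100)            # assumed sample
candidates = np.linspace(0.1, 2.0, 40)           # assumed bandwidth grid
h_mlcv = max(candidates, key=lambda h: pseudo_log_likelihood(data, h))
print(h_mlcv)
```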
Multivariate density estimation

- The expression derived earlier for the estimate p_KDE(x) in multiple dimensions was

  p_KDE(x) = (1/(N·h^D)) Σ_{n=1..N} K((x − x^(n)) / h)

- Notice that the bandwidth h is the same for all the axes, so this density estimate weights all the axes equally.
- However, if the spread of the data is much greater in one of the coordinate directions than in the others, we should use a vector of smoothing parameters, or even a full covariance matrix, which complicates the procedure.
- There are two basic alternatives to solve this scaling problem without having to use a more general kernel density estimate:
  - pre-scale each axis (normalize to unit variance, for instance), or
  - pre-whiten the data (linearly transform it to have a unit covariance matrix), estimate the density, and then transform back [Fukunaga].
    - The whitening transform is simply y = Λ^(−1/2) M^T x, where Λ and M are the eigenvalue and eigenvector matrices of the sample covariance of x.
    - Fukunaga's method is equivalent to using a hyper-ellipsoidal kernel.


Product kernels

- A very popular method for performing multivariate density estimation is the product kernel, defined as

  p_PKDE(x) = (1/N) Σ_{n=1..N} K(x, x^(n), h_1, ..., h_D)

  with

  K(x, x^(n), h_1, ..., h_D) = (1 / (h_1 · ... · h_D)) Π_{d=1..D} K_d( (x(d) − x^(n)(d)) / h_d )

- The product kernel consists of the product of one-dimensional kernels. Typically the same kernel function is used in each dimension (K_d(x) = K(x)), and only the bandwidths are allowed to differ.
- Bandwidth selection can then be performed with any of the methods presented for univariate density estimation.
- It is important to notice that although the expression of K(x, x^(n), h_1, ..., h_D) uses kernel independence, this does not imply that any type of feature independence is being assumed. A density estimation method that assumed feature independence would have the following expression:

  p_FEAT-IND(x) = Π_{d=1..D} [ (1/(N·h_d)) Σ_{n=1..N} K_d( (x(d) − x^(n)(d)) / h_d ) ]

- Notice how the order of the summation and the product are reversed compared to the product kernel.


Product kernel, example 1

- This example shows the product kernel density estimate of a bivariate unimodal Gaussian distribution. 100 data points were drawn from the distribution.

[Figure: the true density (left) and the estimates using h = 1.06σN^(−1/5) (middle) and h = 0.9AN^(−1/5) (right), plotted over (x1, x2).]


Product kernel, example 2

- This example shows the product kernel density estimate of a bivariate bimodal Gaussian distribution. 100 data points were drawn from the distribution.

[Figure: the true density (left) and the estimates using h = 1.06σN^(−1/5) (middle) and h = 0.9AN^(−1/5) (right), plotted over (x1, x2).]


Naïve Bayes classifier

- Recall that the Bayes classifier is given by the following family of discriminant functions:

  choose ωi if g_i(x) > g_j(x) for all j ≠ i,   where   g_i(x) = P(ωi | x)

- Using Bayes rule, these discriminant functions can be expressed as

  g_i(x) = P(ωi | x) ∝ P(x | ωi) · P(ωi)

  where P(ωi) is our prior knowledge and P(x | ωi) is obtained through density estimation.
- Although we have presented density estimation methods that allow us to estimate the multivariate likelihood P(x | ωi), the curse of dimensionality makes this a very tough problem.
- One highly practical simplification of the Bayes classifier is the so-called Naïve Bayes classifier, which makes the assumption that the features are class-conditionally independent:

  P(x | ωi) = Π_{d=1..D} P(x(d) | ωi)

- It is important to notice that this assumption is not as rigid as assuming independent features, P(x) = Π_{d=1..D} P(x(d)).
- Merging this expression into the discriminant function yields the decision rule for the Naïve Bayes classifier:

  g_{i,NB}(x) = P(ωi) Π_{d=1..D} P(x(d) | ωi)

- The main advantage of the Naïve Bayes classifier is that we only need to compute the univariate densities P(x(d) | ωi), which is a much easier problem than estimating the multivariate density P(x | ωi).
- Despite its simplicity, the Naïve Bayes classifier has been shown to have comparable performance to artificial neural networks and decision tree learning in some domains.


Class-conditional independence vs. independence

[Figure: three two-dimensional examples in the (x1, x2) plane. In the first, the features are not class-conditionally independent: P(x | ωi) ≠ Π_d P(x(d) | ωi). In the second, they are class-conditionally independent, P(x | ωi) = Π_d P(x(d) | ωi), but not independent: P(x) ≠ Π_d P(x(d)). In the third, they are class-conditionally independent and approximately independent: P(x) ≅ Π_d P(x(d)).]
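To tie the two halves of the lecture together, the sketch below builds a Naïve Bayes classifier whose class-conditional univariate densities P(x(d) | ωi) are estimated with one-dimensional Gaussian-kernel density estimates, and whose priors are taken as class frequencies. The toy two-class, two-feature dataset and the fixed bandwidth are assumptions made for the illustration, not part of the lecture.

```python
import numpy as np

def kde_1d(x, data, h):
    """Univariate Gaussian-kernel density estimate at the scalar x."""
    u = (x - np.asarray(data, dtype=float)) / h
    return (np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)).sum() / (len(data) * h)

def naive_bayes_kde(x, classes, h=0.5):
    """g_i(x) = P(w_i) * prod_d P(x(d) | w_i), each factor estimated by a 1-D KDE."""
    total = sum(len(X) for X in classes.values())
    scores = {}
    for label, X in classes.items():
        X = np.asarray(X, dtype=float)
        prior = len(X) / total
        likelihood = np.prod([kde_1d(x[d], X[:, d], h) for d in range(X.shape[1])])
        scores[label] = prior * likelihood
    return max(scores, key=scores.get), scores

# Assumed toy data: two classes in two dimensions
classes = {
    0: [[1.0, 1.2], [0.8, 0.9], [1.3, 1.1], [0.9, 1.4]],
    1: [[3.0, 3.2], [2.8, 2.7], [3.3, 3.1], [3.1, 2.9]],
}
print(naive_bayes_kde(np.array([1.1, 1.0]), classes))
print(naive_bayes_kde(np.array([3.0, 3.0]), classes))
```

Because each factor is a univariate estimate, bandwidth selection for every feature and class can reuse any of the univariate methods discussed earlier.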