
Chapter 2

Signal representation and mathematical tools

In this book we consider antenna arrays consisting of many individual antenna elements; a large number of signals, one for each element, therefore has to be processed at the same time. The signals can be assigned to locations on the antenna, forming a discrete sampling of the spatial wave field for transmitting or receiving. Signals transmitted or received with a radar system also have to be represented as a function of time. Signal samples are thus formed in the spatial and the temporal dimension, for digital signal processing with signal processors or computers and for recording for later analysis.

These samples z are generally measured with the complex components x and y, i.e. they are written as complex values. The description of z by the amplitude a and the phase φ is equivalent. An individual signal is thus given by:

    z = x + jy = a e^{jφ}    (2.1)

The components x and y are usually called the I and Q components, for inphase and quadrature phase. The sequences of samples can be produced in such a way as to be equivalent to the original time-continuous signal. More precise information on sampling will follow in chapter 6.

2.1 Vectors, matrices

We will write matrices with bold-face letters. Two-dimensional matrices will be written with bold-face capital letters. One-dimensional matrices (rows or columns) are also named vectors.

A temporal sequence of signals must be numbered for identification. We write z_n, with n = 1, \ldots, N; N is thus the length of the signal sequence z_1, \ldots, z_N. We can write this sequence more briefly as a column matrix or vector z:

    z = [z_1 \cdots z_N]^T    (2.2)

The transpose of z is then the row matrix z^T = [z_1 \cdots z_N]. In our applications we generally need the transposed and complex conjugated form:

    z^* = [\bar{z}_1 \cdots \bar{z}_N]    (2.3)

This one-dimensional sequence could also be derived at one time instant from a set of antenna elements by a purely spatial sampling. In many applications signals are regarded in terms of both temporal and spatial sampling. These are then summarised in a rectangular matrix:

    Z = (z_{nm}), \quad n = 1, \ldots, N, \quad m = 1, \ldots, M    (2.4)

Here, the temporal sequence indexed by n is again written as a single column for one antenna element m. The signals from the M spatially separate antenna elements at one time instant form a single row. For an indication of the matrix size it is helpful to write Z[N, M]; the matrix Z thus has N rows and M columns.

2.2 Computing with matrices

For completeness the basic tools for working with matrices are given in the following. More detailed explanations can be found in mathematical textbooks and, for example, in the user's guides of the programming systems MATLAB and Mathematica.

2.2.1 Addition and subtraction

Two matrices A and B of the same size, and therefore the same dimensions [M, N], are added or subtracted element by element. That means for C = A + B we have to take:

    c_{mn} = a_{mn} + b_{mn}    (2.5)

2.2.2 Multiplication

Multiplication assumes that elements can always be multiplied in pairs. It applies the principle: a result element is the sum of the products of a row with a column. This may be seen most simply for a pair of row and column matrices. If a and b are of the same dimension N we can form:

    a^T b = \sum_{n=1}^{N} a_n b_n    (^T indicates the transpose)

or, for complex elements, according to equation 2.3:

    a^* b = \sum_{n=1}^{N} \bar{a}_n b_n    (2.6)

(the asterisk indicates the complex conjugate transpose), which is the scalar, dot or inner product of a and b. As a special case, for a column matrix a the expression:

    a^* a = \sum_{n=1}^{N} |a_n|^2 = \|a\|^2    (2.7)

gives the so-called 2- or Euclidean norm \|a\|; it may also be interpreted as the square of the length of the vector a.
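As an illustration (the book itself works with MATLAB and Mathematica), here is a minimal NumPy sketch of equations 2.1, 2.6 and 2.7; the vectors a and b are made-up data:

    import numpy as np

    # Two complex signal vectors of length N, built from I and Q components
    # as in equation 2.1 (the data themselves are arbitrary).
    N = 8
    rng = np.random.default_rng(0)
    a = rng.normal(size=N) + 1j * rng.normal(size=N)   # z = x + jy
    b = rng.normal(size=N) + 1j * rng.normal(size=N)

    inner = np.vdot(a, b)           # a* b: conjugates the first factor (eq. 2.6)
    norm_sq = np.vdot(a, a).real    # a* a = ||a||^2 (eq. 2.7)

    print(np.isclose(norm_sq, np.linalg.norm(a) ** 2))   # -> True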
A geometrical interpretation is sometimes given to the scalar product: after normalisation of the vectors with respect to their length it is the cosine of the angle φ between the two N-dimensional vectors a and b:

    \cos φ = \frac{a^* b}{\|a\| \|b\|}    (2.8)

If φ = 90° then \cos φ = 0. As in the two-dimensional case, we then say that the two vectors a and b are orthogonal.

Now two rectangular matrices are to be multiplied; the matrix product is defined in the following way: A[M, N] B[N, L] = C[M, L] with:

    c_{ik} = \sum_{n=1}^{N} a_{in} b_{nk}    (2.9)

Figure 2.1 Multiplication of two rectangular matrices

The result element c_{ik} is thus formed by the scalar product of the i-th row of A with the k-th column of B. This is illustrated in Figure 2.1. The number of columns of the left matrix, A, must be equal to the number of rows of the right matrix, B; the values of M and L may be arbitrary. The multiplication of a matrix A[M, N] with B[N, L] results in a matrix C[M, L].

We can multiply a column by a row according to this definition, for example, as a dyadic product:

    a b^T = C[M, N] \quad \text{with} \quad c_{mn} = a_m b_n    (2.10)

The sequence of the multiplication cannot be exchanged without changing the result. The transpose of a product is given by:

    (AB)^T = B^T A^T    (2.11)

2.2.3 Identity matrix

The identity matrix I[N, N] corresponds to the role of 'one' in scalar algebra:

    I = \mathrm{diag}(1, \ldots, 1)    (2.12)

It is a diagonal matrix, containing non-zero elements, in this case the value 1, only on the main diagonal. In general the following applies:

    IA = AI = A    (2.13)

2.2.4 Inverse matrix

Particularly important is the so-called inverse matrix A^{-1}, defined for a square matrix A[N, N]. Multiplication of A^{-1} with A yields the identity matrix I:

    A^{-1} A = I    (2.14)

or equivalently: A A^{-1} = I. With the inverse matrix A^{-1} one can solve the following general problem, with X as the unknown matrix:

    AX = B    (2.15)

By multiplying both sides from the left with A^{-1} we can solve for X:

    X = A^{-1} B    (2.16)

The computation of the inverse matrix is accomplished using special algorithms, e.g. the Gauss elimination method, which are available in program systems such as MATLAB or Mathematica. To be invertible the matrix A must fulfil certain conditions, so that the inverse matrix exists: it must be square and non-singular. A useful inversion formula has been derived [2]: (2.17)

2.2.5 Eigenvalue decomposition

For a matrix A the following equation applies [1]:

    A x = x λ    (2.18)

The column matrix x is named eigenvector and the scalar number λ the eigenvalue. For a matrix of dimension [N, N] there exist generally N eigenvalues λ, which may be combined in the diagonal matrix D, and N eigenvectors x, combined as columns in the eigenvector matrix X. One can thus also write: AX = XD. The column vectors x in X are mutually orthogonal, i.e. X^* X = I (this holds for the Hermitian matrices, such as covariance matrices, that we will meet in our applications). It follows:

    X^* A X = D

The matrix A is thus transformed into the diagonal matrix D. The computation of D and X for a matrix A may again be performed with special procedures available in program systems, e.g. MATLAB or Mathematica.

2.2.6 QR decomposition

A matrix A can be decomposed into a product [1, 2]:

    A = QR    (2.19)

Q is an orthonormalised matrix, i.e. it contains column vectors with the norm, given by equation 2.7, equal to 1. R is an upper triangular matrix; below the main diagonal it contains only elements equal to zero.
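A minimal NumPy sketch of the operations in sections 2.2.4 to 2.2.6 (made-up data; a Hermitian A is chosen so that its eigenvector matrix satisfies X^* X = I):

    import numpy as np

    rng = np.random.default_rng(1)
    Z = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    A = Z @ Z.conj().T                    # Hermitian and (almost surely) non-singular

    # Solving AX = B (eqs. 2.15/2.16); solve() is preferred over forming inv(A)
    B = rng.normal(size=(4, 2))
    X = np.linalg.solve(A, B)
    print(np.allclose(A @ X, B))          # -> True

    # Eigenvalue decomposition (eq. 2.18): X* A X = D
    d, V = np.linalg.eigh(A)
    print(np.allclose(V.conj().T @ A @ V, np.diag(d)))   # -> True

    # QR decomposition (eq. 2.19): orthonormal Q, upper triangular R
    Q, R = np.linalg.qr(Z)
    print(np.allclose(Q @ R, Z))          # -> True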
2.3 Fourier transform

To a finite time signal series s a corresponding set of frequency values can be assigned by the Fourier transform. The discrete Fourier transform, or DFT, is particularly significant for signal processing. It is defined by the following equation:

    S_k = \sum_{n=0}^{N-1} s_n e^{-j 2π n k / N}, \quad k = 0, \ldots, N-1    (2.20)

If the signal s(t) is sampled with the regular time period T at t = nT to give the signal samples s_n, then S_k corresponds to the spectral part of the signal s at the frequency ω_k:

    ω_k = \frac{2π k}{NT}    (2.21)

The frequency lines thus appear with the mutual distance 2π/(NT). A DC component of the signal corresponds to k = 0. With growing N we get a finer frequency resolution.

The spectrum, the totality of the frequency lines, is unambiguous only up to the frequency ω_{N-1}. For values of k outside the interval 0, \ldots, (N-1) we may write k = k' + pN, with p any integer and k' within the interval 0, \ldots, (N-1). Then we have S_k = S_{k'}: the spectrum repeats itself with period N because of the cyclic exp function.

By the Fourier transform coefficients S_k according to equation 2.20 the signal s_n is completely described. The time signal series can be recovered exactly by an inverse Fourier transform:

    s_n = \frac{1}{N} \sum_{k=0}^{N-1} S_k e^{j 2π n k / N}    (2.22)

As an illustration and short exercise we will now prove equation 2.22. By using equation 2.20 we get:

    s_n = \frac{1}{N} \sum_{k=0}^{N-1} \left( \sum_{m=0}^{N-1} s_m e^{-j 2π m k / N} \right) e^{j 2π n k / N}    (2.22a)

and by exchanging the order of summation:

    s_n = \frac{1}{N} \sum_{m=0}^{N-1} s_m \sum_{k=0}^{N-1} e^{j 2π (n-m) k / N}    (2.22b)

For m ≠ n the sum over the index k is a geometric series with the ratio q = e^{j 2π (n-m)/N}. Now we recall the well-known sum formula for a geometric series:

    \sum_{k=0}^{N-1} q^k = \frac{1 - q^N}{1 - q}

Since:

    q^N = e^{j 2π (n-m)} = 1    (2.22c)

it follows that the sum is 0. One also says that the rows e^{-j 2π n k / N} and e^{-j 2π m k / N} are orthogonal to each other; the vectors formed by both rows are orthogonal, as defined in equation 2.8. When m = n each of the exp expressions is equal to 1 and therefore the sum becomes N. The entire expression then reduces to s_n, which had to be proven.

The Fourier transform of equations 2.20 and 2.21 describes mathematically the spectral lines S_k, which are discrete sinusoidal signals at frequencies ω_k existing for an infinite time. This results from the implicit assumption of a periodic repetition of the signal s. In radar practice the signal is generally time limited, and one can imagine the existence of the signals S_k only during the duration of the temporal signal s_n. A time-limited signal does not produce a line spectrum but, instead of each discrete S_k, a continuous spectrum of the functional form sin(ω)/ω around ω_k. However, since these spectra have mutual zeros, the orthogonality remains and the inverse transform of equation 2.22 is valid.

2.3.1 Fast Fourier transform, FFT

For computation of the DFT from equation 2.20 or 2.22 one uses the so-called fast Fourier transform, FFT [3], as a very efficient numerical algorithm. Instead of N^2 complex multiplications one needs only (N/2) \log_2 N complex multiplications (with \log_2 the binary logarithm). This saving is achieved by skilful combination of intermediate results. The value N must be a power of two, N = 2^p (p integer). In modern programming systems such as MATLAB or Mathematica the FFT is of course available. For example, MATLAB performs the transform according to equation 2.20 by S = fft(s). Since MATLAB vectors are indexed from 1, the spectral value S_k appears at index k + 1, and the DC component results for index 1. The cyclic repetition explained above has to be considered.
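A quick NumPy check of equations 2.20 and 2.22 (np.fft uses the same sign convention; the test signal is made up):

    import numpy as np

    # N samples of a complex exponential sitting exactly on bin k = 2
    N = 8
    n = np.arange(N)
    s = np.exp(2j * np.pi * 2 * n / N)

    S = np.fft.fft(s)               # S_k, k = 0 ... N-1; S[0] is the DC component
    print(np.argmax(np.abs(S)))     # -> 2: all energy in the single line k = 2

    s_back = np.fft.ifft(S)         # includes the 1/N factor of eq. 2.22
    print(np.allclose(s_back, s))   # -> True: the round trip is exact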
2.4 Filter in the frequency and time domain

The transformation of signals into the frequency domain is important for the application and treatment of filters. A filter is usually characterised by its frequency response H(ω). To the signal frequencies ω_k are assigned the filter values H_k. For a signal s at the filter input we get the filter output signal y: the spectral values of the signal and the filter simply have to be multiplied, resulting in the output spectrum:

    Y_k = H_k S_k

So we must first compute the signal spectrum S_k using equation 2.20, multiply by H_k and then apply the inverse Fourier transform of equation 2.22 to come back into the time domain. The result is the temporal output signal y_n:

    y_n = \frac{1}{N} \sum_{k=0}^{N-1} H_k S_k e^{j 2π n k / N}    (2.23)

or, by substituting the definition of the signal spectrum from equation 2.20:

    y_n = \frac{1}{N} \sum_{k=0}^{N-1} H_k \left( \sum_{l=0}^{N-1} s_l e^{-j 2π l k / N} \right) e^{j 2π n k / N}

and by changing the order of summation:

    y_n = \sum_{l=0}^{N-1} s_l \left( \frac{1}{N} \sum_{k=0}^{N-1} H_k e^{j 2π (n-l) k / N} \right)

By comparison with equation 2.22 we recognise the second sum as the time-domain representation h of the filter function H and we may write:

    h_n = \frac{1}{N} \sum_{k=0}^{N-1} H_k e^{j 2π n k / N}    (2.24)

and finally we get:

    y_n = \sum_{l=0}^{N-1} s_l h_{n-l}    (2.25)

This expression represents the filtering of a signal sequence s_n with a filter function h_n which is defined in the time domain. This kind of filtering is also called convolution between s and h.

Let us consider a filter for performing the convolution by means of a shift register, by which the signal s is shifted with the clock period T. At the individual shift stages the signal s_l is weighted with the factor h_{n-l}; afterwards the summation of the weighted signals follows. The number n represents the shift between s and h, growing with the clock pulses. Such a filter is also called a transversal filter. In Figure 2.2 we visualise the shift state n = 2.

Figure 2.2 Convolution of a signal with a filter function by means of a shift register

For n = N - 1 the set of s_n coincides completely with the set of h_n. In this case the signal is weighted by the temporal mirror image of h. Later we will discuss the optimal or matched filter which satisfies the condition:

    h_{N-1-n} = \bar{s}_n    (2.26)

which means that all signals are weighted by their complex conjugate values. Therefore all products of the sum of equation 2.25 are real valued. This is the matched state of signal filtering, resulting in a maximum output signal y. All the other shift states result in inevitable time sidelobes.

With all other shift steps n the elements of h are initially not all defined. Since h results by equation 2.22 from a given H, a periodic repetition of the values of h exists with the period N. These values are applied automatically by using equation 2.23 for the convolution. Therefore, in equation 2.25 the value (n - l) as the index of h is always taken modulo N: if n - l does not lie within the interval 0, \ldots, (N-1), suitable multiples of N are added or subtracted. Equation 2.25 becomes with this statement:

    y_n = \sum_{l=0}^{N-1} s_l h_{(n-l) \bmod N}    (2.27)

Because of the periodicity, the process of filtering after equation 2.27 is known as cyclic convolution.

If the signal s consists of a single impulse with the value s_0 = 1 and all the other s_n = 0, then y_n = h_n. The filter function may be produced in such a way and is therefore also called the impulse response.

The importance of equations 2.23 and 2.24 lies in the fact that, given a temporal filter function h, the filter effect can be computed by equation 2.23 with the Fourier transforms of s and h. With larger values of N this computation is much more economical by application of the FFT, compared with a direct convolution of s and h in the time domain.

Often the cyclic convolution after equation 2.27 is not adequate, since h is not periodic but is a time-limited function. For the computation of the so-called aperiodic or linear convolution it must be ensured that the mutually shifted sequences s and h do not meet periodic repetitions. This is achieved on the basis of s and h in the time domain by filling up the time series with zeros (zero padding). The entire sequences are then, for example, twice as long. In practice it is often the case that a very long signal sequence s has to be filtered with a much shorter filter function h. In this case the signal sequence is appropriately decomposed into short subsequences and then filtered aperiodically. From the overlapping output sequences the actual long output sequence y_n has then to be built up again.
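A minimal NumPy sketch of cyclic versus linear convolution (eqs. 2.25 and 2.27; made-up data):

    import numpy as np

    s = np.array([1.0, 2.0, 3.0, 4.0])
    h = np.array([1.0, -1.0, 0.5, 0.0])
    N = len(s)

    # Cyclic convolution via the spectra (eq. 2.23): indices of h modulo N
    y_cyclic = np.fft.ifft(np.fft.fft(s) * np.fft.fft(h)).real

    # Zero padding to length 2N avoids the wrap-around and yields the
    # linear (aperiodic) convolution instead
    L = 2 * N
    y_linear = np.fft.ifft(np.fft.fft(s, L) * np.fft.fft(h, L)).real

    print(y_cyclic)
    print(np.allclose(y_linear[:2 * N - 1], np.convolve(s, h)))   # -> True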
2.5 Correlation

We assume two signal series x_n and y_n (n = 1, \ldots, N, each with a mean equal to zero) and we want to express their relationship or similarity. The expression:

    q = \sum_{n=1}^{N} x_n y_n    (2.28)

may be used for this purpose and is known as correlation. Imagine that the signals x_n and y_n are changing their amplitude and polarity randomly and independently; the products will then cancel to zero as N goes to infinity. But if both signals are the same then q will show a maximum value.

We may additionally introduce a time lag k (in sample periods) and define the crosscorrelation function:

    q_{xy}(k) = \sum_{n} x_n y_{n+k}    (2.29)

With this correlation function we will get a high value if x_n and y_n are similar but shifted in time by k sample periods. If we want to indicate or detect for a signal x_n similarities or relationships within itself over time we may take the autocorrelation function:

    q_{xx}(k) = \sum_{n} x_n x_{n+k}    (2.30)

For k = 0 we obtain the signal energy, which for our zero-mean signals is N times the variance:

    q_{xx}(0) = \sum_{n} x_n^2    (2.31)

For complex-valued signals the autocorrelation function is defined by:

    q_{xx}(k) = \sum_{n} x_{n+k} \bar{x}_n    (2.32)

If some equal signals have a certain phase, then this phase will be rotated back to zero within the product and result in a maximum contribution to the sum.

2.6 Wiener-Khintchine theorem

The Wiener-Khintchine theorem gives a relation between the power spectrum and the autocorrelation function for random noise signals. We may compute the amplitude spectrum S_k for a noise signal x by equation 2.20. From this follows the power spectrum:

    P_k = S_k \bar{S}_k = |S_k|^2    (2.33)

Because we have in mind a random signal x of infinite duration, we may divide the entire process into time segments with N samples each and then take the mean value or expectation of P_k (in chapter 3 we will discuss the basics of statistics). We now compute the inverse DFT of P_k with equation 2.22, using equation 2.20. First we have:

    P_k = \sum_{m=0}^{N-1} \sum_{n=0}^{N-1} x_m \bar{x}_n e^{-j 2π (m-n) k / N}

With this we compute:

    \frac{1}{N} \sum_{k=0}^{N-1} P_k e^{j 2π p k / N} = \frac{1}{N} \sum_{m} \sum_{n} x_m \bar{x}_n \sum_{k=0}^{N-1} e^{j 2π (-m+n+p) k / N}

The exp expression on the right-hand side sums to zero for (-m + n + p) ≠ 0 (modulo N), according to our little exercise proving equation 2.22. Only for:

    m = n + p

do we get for the sum expression the value N. So finally the result is:

    \frac{1}{N} \sum_{k=0}^{N-1} P_k e^{j 2π p k / N} = \sum_{n} x_{n+p} \bar{x}_n

and with equation 2.32:

    q_{xx}(p) = \frac{1}{N} \sum_{k=0}^{N-1} P_k e^{j 2π p k / N}    (2.34)

This relation between the power spectrum and the autocorrelation function, given by the Fourier transform, is named the Wiener-Khintchine theorem. Sometimes we need the reverse relation of equation 2.34. This follows with equation 2.20:

    P_k = \sum_{p=0}^{N-1} q_{xx}(p) e^{-j 2π p k / N}    (2.35)
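A short NumPy check of the Wiener-Khintchine relation (eq. 2.34) for a made-up zero-mean complex noise signal:

    import numpy as np

    rng = np.random.default_rng(2)
    N = 64
    x = rng.normal(size=N) + 1j * rng.normal(size=N)

    # Power spectrum P_k = |S_k|^2 (eq. 2.33)
    P = np.abs(np.fft.fft(x)) ** 2

    # Inverse DFT of P (eq. 2.34) ...
    q_from_P = np.fft.ifft(P)

    # ... equals the cyclic autocorrelation q_xx(p) = sum_n x_{n+p} conj(x_n)
    q_direct = np.array([np.sum(np.roll(x, -p) * np.conj(x)) for p in range(N)])

    print(np.allclose(q_from_P, q_direct))   # -> True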
2.7 References

1 BELLMAN, R.: 'Introduction to matrix analysis' (McGraw-Hill Book Company, Inc., New York, Toronto, London, 1960)
2 BODEWIG, E.: 'Matrix calculus' (North-Holland Publishing Company, Amsterdam, 1959)
3 HAGER, W. W.: 'Applied numerical linear algebra' (Prentice Hall, London, 1988)
4 COOLEY, J. W., LEWIS, P., and WELCH, P. D.: 'Historical notes on the fast Fourier transform', IEEE Trans. Audio Electroacoust., 1967, 15 (2), pp. 76-79
5 BIALLY, T.: 'The fast Fourier transform and its application to radar signals'. International conference on Radar, Paris, France, 1978, pp. 51-59
6 BLAHUT, R. E.: 'Fast algorithms for digital signal processing' (Addison-Wesley Publishing Company, Inc., New York, 1985)
7 McCLELLAN, J. H., and RADER, C. M.: 'Number theory in digital signal processing' (Prentice Hall Inc., Englewood Cliffs, New Jersey, 1979)