Contemporary Communication Systems Using MATLAB, by John G. Proakis and Masoud Salehi
CONTEMPORARY COMMUNICATION SYSTEMS USING MATLAB®

John G. Proakis
Masoud Salehi
Northeastern University

The PWS BookWare Companion Series™

PWS Publishing Company
An International Thomson Publishing Company
Boston • Albany • Bonn • Cincinnati • London • Madrid • Melbourne • Mexico City • New York • Paris • San Francisco • Singapore • Tokyo • Toronto • Washington

PWS Publishing Company, 20 Park Plaza, Boston, MA 02116-4324

Copyright © 1998 by PWS Publishing Company, a division of International Thomson Publishing Inc. All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transcribed in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise) without the prior written permission of PWS Publishing Company.

MATLAB is a trademark of The MathWorks, Inc. The MathWorks, Inc. is the developer of MATLAB, the high-performance computational software introduced in this book. For further information on MATLAB and other MathWorks products, including SIMULINK® and MATLAB Application Toolboxes for math and analysis, control system design, system identification, and other disciplines, contact The MathWorks at 24 Prime Park Way, Natick, MA 01760 (phone: 508-647-7000; fax: 508-647-7001; email: [email protected]). You can also sign up to receive the MathWorks quarterly newsletter and register for the user group.

Macintosh is a trademark of Apple Computer,
Inc. MS-DOS is a trademark of Microsoft Corporation. BookWare Companion Series is a trademark of PWS Publishing Company. I(T)P® International Thomson Publishing: the trademark ITP is used under license.

For more information, contact:

PWS Publishing Company, 20 Park Plaza, Boston, MA 02116
International Thomson Publishing Europe, Berkshire House, 168-173 High Holborn, London WC1V 7AA, England
Thomas Nelson Australia, 102 Dodds Street, South Melbourne 3205, Victoria, Australia
Nelson Canada, 1120 Birchmount Road, Scarborough, Ontario, Canada M1K 5G4
International Thomson Editores, Campos Eliseos 385, Piso 7, Col. Polanco, 11560 Mexico D.F., Mexico
International Thomson Publishing GmbH, Königswinterer Strasse 418, 53227 Bonn, Germany
International Thomson Publishing Asia, 221 Henderson Road #05-10, Henderson Building, Singapore 0315
International Thomson Publishing Japan, Hirakawacho Kyowa Building, 2-2-1 Hirakawacho, Chiyoda-ku, Tokyo 102, Japan

About the Cover: The BookWare Companion Series cover was created on a Macintosh Quadra 700, using Aldus FreeHand and QuarkXPress. The surface plot on the cover, provided courtesy of The MathWorks, Inc., Natick, MA, was created with MATLAB® and was inserted on the cover mockup with an HP ScanJet IIP Scanner. It represents a surface created by assigning the values of different functions to specific matrix elements.

Series Co-originators: Robert D. Strum and Tom Robbins
Editor: Bill Barter
Assistant Editor: Suzanne Jeans
Editorial Assistant: Tricia Kelly
Market Development Manager: Nathan Wilbur
Production Editor: Pamela Rockwell
Manufacturing Coordinator: Andrew Christensen
Cover Designer: Stuart Paterson, Image House, Inc.
Cover Printer: Phoenix Color Corp.
Text Printer and Binder: Malloy Lithographing

Library of Congress Cataloging-in-Publication Data

Proakis, John G.
Contemporary communication systems using MATLAB / John G. Proakis, Masoud Salehi. 1st ed.
p. cm. (The PWS BookWare Companion Series)
Includes index.
ISBN 0-534-93804-3
1.
Data transmission systems--Computer simulation. 2. Telecommunication systems--Computer simulation. 3. MATLAB. I. Salehi, Masoud. II. Title. III. Series.
TK5105.P744 1997
621.382'16'07855369--dc21    97-31815 CIP

Printed and bound in the United States of America.
98 99 00 -- 10 9 8 7 6 5 4 3 2 1

ABC Note

Students learn in a number of ways and in a variety of settings. They learn through lectures, in informal study groups, or alone at their desks or in front of a computer terminal. Wherever the location, students learn most efficiently by solving problems, with frequent feedback from an instructor, following a worked-out problem as a model. Worked-out problems have a number of positive aspects. They can capture the essence of a key concept, often better than paragraphs of explanation. They provide methods for acquiring new knowledge and for evaluating its use. They provide a taste of real-life issues and demonstrate techniques for solving real problems. Most important, they encourage active participation in learning.

We created the BookWare Companion Series because we saw an unfulfilled need for computer-based learning tools that address the computational aspects of problem solving across the curriculum. The BC series concept was also shaped by other forces: a general agreement among instructors that students learn best when they are actively involved in their own learning, and the realization that textbooks have not kept up with or matched student learning needs. Educators and publishers are just beginning to understand that the amount of material crammed into most textbooks cannot be absorbed, let alone mastered, in four years of undergraduate study. Rather than attempting to teach students all the latest knowledge, colleges and universities are now striving to teach them to reason: to understand the relationships and connections between new information and existing knowledge; and to cultivate problem-solving skills, intuition, and critical thinking.
The BookWare Companion Series was developed in response to this changing mission. Specifically, the BookWare Companion Series was designed for educators who wish to integrate their curriculum with computer-based learning tools, and for students who find their current textbooks overwhelming. The former will find in the BookWare Companion Series the means by which to use powerful software tools to support their course activities, without having to customize the applications themselves. The latter will find relevant problems and examples quickly and easily and have instant electronic access to them. We hope that the BC series will become a clearinghouse for the exchange of reliable teaching ideas and a baseline series for incorporating learning advances from emerging technologies. For example, we intend to reuse the kernel of each BC volume and add electronic scripts from other software programs as desired by customers. We are pursuing the addition of AI/expert system technology to provide an intelligent tutoring capability for future iterations of BC volumes. We also anticipate a paperless environment in which BC content can flow freely over high-speed networks to support remote learning activities. In order for these and other goals to be realized, educators, students, software developers, network administrators, and publishers will need to communicate freely and actively with each other. We encourage you to participate in these exciting developments and become involved in the BC Series today. If you have an idea for improving the effectiveness of the BC concept, an example problem, a demonstration using software or multimedia, or an opportunity to explore, contact us. We thank you one and all for your continuing support.
The PWS Electrical Engineering Team:
[email protected]  Acquisitions Editor
[email protected]  Assistant Editor
[email protected]  Marketing Manager
[email protected]  Production Editor
[email protected]  Editorial Assistant

Contents

1 Signals and Linear Systems
  1.1 Preview
  1.2 Fourier Series
    1.2.1 Periodic Signals and LTI Systems
  1.3 Fourier Transforms
    1.3.1 Sampling Theorem
    1.3.2 Frequency Domain Analysis of LTI Systems
  1.4 Power and Energy
  1.5 Lowpass Equivalent of Bandpass Signals

2 Random Processes
  2.1 Preview
  2.2 Generation of Random Variables
  2.3 Gaussian and Gauss-Markov Processes
  2.4 Power Spectrum of Random Processes and White Processes
  2.5 Linear Filtering of Random Processes
  2.6 Lowpass and Bandpass Processes

3 Analog Modulation
  3.1 Preview
  3.2 Amplitude Modulation (AM)
    3.2.1 DSB-AM
    3.2.2 Conventional AM
    3.2.3 SSB-AM
  3.3 Demodulation of AM Signals
    3.3.1 DSB-AM Demodulation
    3.3.2 SSB-AM Demodulation
    3.3.3 Conventional AM Demodulation
  3.4 Angle Modulation

4 Analog-to-Digital Conversion
  4.1 Preview
  4.2 Measure of Information
    4.2.1 Noiseless Coding
  4.3 Quantization
    4.3.1 Scalar Quantization
    4.3.2 Pulse-Code Modulation

5 Baseband Digital Transmission
  5.1 Preview
  5.2 Binary Signal Transmission
    5.2.1 Optimum Receiver for the AWGN Channel
    5.2.2 Signal Correlator
    5.2.3 Matched Filter
    5.2.4 The Detector
    5.2.5 Monte Carlo Simulation of a Binary Communication System
    5.2.6 Other Binary Signal Transmission Methods
    5.2.7 Antipodal Signals for Binary Signal Transmission
    5.2.8 On-Off Signals for Binary Signal Transmission
    5.2.9 Signal Constellation Diagrams for Binary Signals
  5.3 Multiamplitude Signal Transmission
    5.3.1 Signal Waveforms with Four Amplitude Levels
    5.3.2 Optimum Receiver for the AWGN Channel
    5.3.3 Signal Correlator
    5.3.4 The Detector
    5.3.5 Signal Waveforms with Multiple Amplitude Levels
  5.4 Multidimensional Signals
    5.4.1 Multidimensional Orthogonal Signals
    5.4.2 Biorthogonal Signals

6 Digital Transmission Through Bandlimited Channels
  6.1 Preview
  6.2 The Power Spectrum of a Digital PAM Signal
  6.3 Characterization of Bandlimited Channels and Channel Distortion
  6.4 Characterization of Intersymbol Interference
  6.5 Communication System Design for Bandlimited Channels
    6.5.1 Signal Design for Zero ISI
    6.5.2 Signal Design for Controlled ISI
    6.5.3 Precoding for Detection of Partial Response Signals
  6.6 Linear Equalizers
    6.6.1 Adaptive Linear Equalizers
  6.7 Nonlinear Equalizers

7 Digital Transmission via Carrier Modulation
  7.1 Preview
  7.2 Carrier-Amplitude Modulation
    7.2.1 Demodulation of PAM Signals
  7.3 Carrier-Phase Modulation
    7.3.1 Phase Demodulation and Detection
    7.3.2 Differential Phase Modulation and Demodulation
  7.4 Quadrature Amplitude Modulation
    7.4.1 Demodulation and Detection of QAM
    7.4.2 Probability of Error for QAM in an AWGN Channel
  7.5 Carrier-Frequency Modulation
    7.5.1 Frequency-Shift Keying
    7.5.2 Demodulation and Detection of FSK Signals
    7.5.3 Probability of Error for Noncoherent Detection of FSK
  7.6 Synchronization in Communication Systems
    7.6.1 Carrier Synchronization
    7.6.2 Clock Synchronization

8 Channel Capacity and Coding
  8.1 Preview
  8.2 Channel Model and Channel Capacity
    8.2.1 Channel Capacity
  8.3 Channel Coding
    8.3.1 Linear Block Codes
    8.3.2 Convolutional Codes

9 Spread Spectrum Communication Systems
  9.1 Preview
  9.2 Direct-Sequence Spread Spectrum Systems
    9.2.1 Signal Demodulation
    9.2.2 Probability of Error
    9.2.3 Two Applications of DS Spread Spectrum Signals
  9.3 Generation of PN Sequences
  9.4 Frequency-Hopped Spread Spectrum
    9.4.1 Probability of Error for FH Signals
    9.4.2 Use of Signal Diversity to Overcome Partial-Band Interference

PREFACE

There are many textbooks on the market today that treat the basic topics in analog and digital communication systems, including coding and decoding algorithms and modulation and demodulation techniques. The focus of most of these textbooks is by necessity on the theory that underlies the design and performance analysis of the various building blocks, e.g., coders, decoders, modulators, and demodulators, that constitute the basic elements of a communication system. Relatively few of the textbooks, especially those written for undergraduate students, include a number of applications that serve to motivate the students.

SCOPE OF THE BOOK

The objective of this book is to serve as a companion or supplement to any of the comprehensive textbooks in communication systems. The book provides a variety of exercises that may be solved on the computer (generally, a personal computer is sufficient) using the popular student edition of MATLAB. The book is intended to be used primarily by senior-level undergraduate students and graduate students in electrical engineering, computer engineering, and computer science. We assume that the student (or user) is familiar with the fundamentals of MATLAB. Those topics are not covered,
because several tutorial books and manuals on MATLAB are available.

By design, the treatment of the various topics is brief. We provide the motivation and a short introduction to each topic, establish the necessary notation, and then illustrate the basic notions by means of an example. The primary text and the instructor are expected to provide the required depth of the topics treated. For example, we introduce the matched filter and the correlator and assert that these devices result in the optimum demodulation of signals corrupted by additive white Gaussian noise (AWGN), but we do not provide a proof of this assertion. Such a proof is generally given in most textbooks on communication systems.

ORGANIZATION OF THE BOOK

The book consists of nine chapters. The first two chapters, on signals and linear systems and on random processes, treat the basic background that is generally required in the study of communication systems. There is one chapter on analog communication techniques, and the remaining five chapters are focused on digital communications.

Chapter 1: Signals and Linear Systems. This chapter provides a review of the basic tools and techniques from linear systems analysis, including both time-domain and frequency-domain characterizations. Frequency-domain analysis techniques are emphasized, since these techniques are most frequently used in the treatment of communication systems.

Chapter 2: Random Processes. In this chapter we illustrate methods for generating random variables and samples of random processes. The topics include the generation of random variables with a specified probability distribution function, the generation of samples of Gaussian and Gauss-Markov processes, and the characterization of stationary random processes in the time domain and the frequency domain.

Chapter 3: Analog Modulation. The performances of analog modulation and demodulation techniques in the presence and absence of additive noise are treated in this chapter. Systems studied include amplitude modulation (AM), such as double-sideband AM, single-sideband AM, and conventional AM, and angle-modulation schemes, such as frequency modulation (FM) and phase modulation (PM).

Chapter 4: Analog-to-Digital Conversion. In this chapter we treat various methods that are used to convert analog source signals into digital sequences in an efficient way. This conversion process allows us to transmit or store the signals digitally. We consider both lossy data compression schemes, such as pulse-code modulation (PCM), and lossless data compression, such as Huffman coding.

Chapter 5: Baseband Digital Transmission. The focus of this chapter is on the introduction of baseband digital modulation and demodulation techniques for transmitting digital information through an AWGN channel. Both binary and nonbinary modulation techniques are considered. The optimum demodulation of these signals is described, and the performance of the demodulator is evaluated.

Chapter 6: Digital Transmission Through Bandlimited Channels. In this chapter we consider the characterization of bandlimited channels and the problem of designing signal waveforms for such channels. We show that channel distortion results in intersymbol interference (ISI), which causes errors in signal demodulation. Then, we treat the design of channel equalizers that compensate for channel distortion.

Chapter 7: Digital Transmission via Carrier Modulation. Four types of carrier-modulated signals that are suitable for transmission through bandpass channels are considered in this chapter. These are amplitude-modulated signals, quadrature-amplitude-modulated signals, phase-shift keying, and frequency-shift keying.

Chapter 8: Channel Capacity and Coding. In this chapter we consider appropriate mathematical models for communication channels and introduce a fundamental quantity, called the channel capacity, that gives the limit on the amount of information that can be transmitted through the channel. In particular, we consider two channel models, the binary symmetric channel (BSC) and the additive white Gaussian noise (AWGN) channel. These channel models are used in the treatment of block and convolutional codes for achieving reliable communication through such channels.

Chapter 9: Spread Spectrum Communication Systems. The basic elements of a spread spectrum digital communication system are treated in this chapter. In particular, direct-sequence (DS) spread spectrum and frequency-hopped (FH) spread spectrum systems are considered in conjunction with phase-shift keying (PSK) and frequency-shift keying (FSK) modulation, respectively. The generation of pseudonoise (PN) sequences for use in spread spectrum systems is also treated.

SOFTWARE

The accompanying diskette includes all the MATLAB files used in the text. The files are in separate directories corresponding to each chapter of the book. In some cases a MATLAB file appears in more than one directory because it is used in more than one chapter. All MATLAB files have been tested using version 4 of MATLAB. In most cases numerous comments have been added to the MATLAB files to ease their understanding. It should be noted, however, that in developing the MATLAB files, the main objective of the authors has been the clarity of the MATLAB code rather than its efficiency. In cases where efficient code could have made it difficult to follow, we have chosen to use a less efficient but more readable code. In order to use the software, copy the MATLAB files to your personal computer and add the corresponding paths to your matlabpath environment (on an IBM PC compatible machine this is usually done by editing the matlabrc.m file).

Chapter 1

Signals and Linear Systems

1.1 Preview

In this chapter we review the basic tools and techniques from linear system analysis used in the analysis of communication systems. Linear systems and their characteristics in the time and frequency domains, together with probability and analysis of random signals, are the two fundamental topics whose thorough understanding is indispensable in the study of communication systems. Most communication channels and many subblocks of transmitters and receivers can be well modeled as linear and time-invariant (LTI) systems, and so the well-known tools and techniques from linear system analysis can be employed in their analysis. We emphasize frequency-domain analysis tools, since these are the most frequently used techniques. We start with the Fourier series and transforms, then we cover power and energy concepts, the sampling theorem, and lowpass representation of bandpass signals.

1.2 Fourier Series

The input-output relation of a linear time-invariant system is given by the convolution integral defined by

$y(t) = x(t) \star h(t)$    (1.2.1)

where $h(t)$ denotes the impulse response of the system, $x(t)$ is the input signal, and $y(t)$ is the output signal. If the input $x(t)$ is a complex exponential given by

$x(t) = A e^{j2\pi f_0 t}$    (1.2.2)

then the output is given by

$y(t) = \int_{-\infty}^{\infty} A e^{j2\pi f_0 (t-\tau)} h(\tau)\, d\tau = A \left[ \int_{-\infty}^{\infty} h(\tau) e^{-j2\pi f_0 \tau}\, d\tau \right] e^{j2\pi f_0 t}$    (1.2.3)

In other words, the output is a complex exponential with the same frequency as the input. The (complex) amplitude of the output, however, is the (complex) amplitude of the input amplified by

$\int_{-\infty}^{\infty} h(\tau) e^{-j2\pi f_0 \tau}\, d\tau$

Note that this quantity is a function of the impulse response of the LTI system, $h(t)$, and the frequency of the input signal, $f_0$. Therefore, computing the response of LTI systems to exponential inputs is particularly easy. Consequently, it is natural in linear system analysis to look for methods of expanding signals as the sum of complex exponentials. Fourier series and Fourier transforms are techniques for expanding signals in terms of complex exponentials.

A Fourier series is the orthogonal expansion of periodic signals with period $T_0$ when the signal set $\{e^{j2\pi nt/T_0}\}_{n=-\infty}^{\infty}$ is employed as the basis for the expansion. With this basis, any periodic signal¹ $x(t)$ with period $T_0$ can be expressed as

$x(t) = \sum_{n=-\infty}^{\infty} x_n e^{j2\pi nt/T_0}$    (1.2.4)

where the $x_n$'s are called the Fourier series coefficients of the signal $x(t)$ and are given by

$x_n = \frac{1}{T_0} \int_{\alpha}^{\alpha+T_0} x(t) e^{-j2\pi nt/T_0}\, dt$    (1.2.5)

Here $\alpha$ is an arbitrary constant chosen in such a way that the computation of the integral is simplified; in most cases either $\alpha = 0$ or $\alpha = -T_0/2$ is a good choice. The frequency $f_0 = 1/T_0$ is called the fundamental frequency of the periodic signal, and the frequency $f_n = n f_0$ is called the $n$th harmonic. In general, the Fourier series coefficients $\{x_n\}$ are complex numbers even when $x(t)$ is a real-valued signal.

¹A sufficient condition for the existence of the Fourier series is that $x(t)$ satisfy the Dirichlet conditions. For details see [1].
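Since the coefficient integral (1.2.5) is just an average over one period, it is easy to sanity-check numerically. The following sketch is in Python/NumPy rather than the book's MATLAB and is not part of the accompanying software; the function name `fourier_coeff` is my own. It approximates (1.2.5) by a Riemann sum for a rectangular pulse train and then rebuilds the signal from the partial sum (1.2.4).

```python
import numpy as np

# One period of a rectangular pulse train: x(t) = 1 for |t| < 1, period T0 = 4.
T0 = 4.0
dt = T0 / 20000
t = -T0 / 2 + dt * np.arange(20000)
x = (np.abs(t) < 1.0).astype(float)

def fourier_coeff(n):
    """Riemann-sum approximation of Eq. (1.2.5): (1/T0) * integral of x(t) e^{-j2pi n t/T0}."""
    return np.sum(x * np.exp(-2j * np.pi * n * t / T0)) * dt / T0

# For this pulse the closed form is x_n = (1/2) sinc(n/2), with np.sinc(v) = sin(pi v)/(pi v).
for n in range(6):
    assert abs(fourier_coeff(n) - 0.5 * np.sinc(n / 2)) < 1e-3

# Partial sum of the exponential series (1.2.4) approaches x(t) away from the jumps.
N = 50
xhat = sum(fourier_coeff(n) * np.exp(2j * np.pi * n * t / T0) for n in range(-N, N + 1))
interior = slice(7500, 12501)        # t in [-0.5, 0.5], away from the jumps at t = +/-1
print(np.max(np.abs(xhat.real[interior] - x[interior])))
```

The grid is chosen so the pulse edges fall exactly on sample points; with a coarser grid the tolerance in the assertions would have to be loosened accordingly.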
This type of Fourier series is known as the exponential Fourier series and can be applied to both real-valued and complex-valued signals $x(t)$ as long as they are periodic.

When $x(t)$ is a real-valued periodic signal, we have

$x_{-n} = \frac{1}{T_0} \int_{\alpha}^{\alpha+T_0} x(t) e^{j2\pi nt/T_0}\, dt = x_n^*$    (1.2.6)

From this it is obvious that

$|x_n| = |x_{-n}|, \qquad \angle x_{-n} = -\angle x_n$    (1.2.7)

Thus the Fourier series coefficients of a real-valued signal have Hermitian symmetry; i.e., their real part is even and their imaginary part is odd (or, equivalently, their magnitude is even and their phase is odd).

Another form of the Fourier series, known as the trigonometric Fourier series, can be applied only to real, periodic signals and is obtained by defining

$x_n = \frac{a_n - j b_n}{2}$    (1.2.8)

$x_{-n} = \frac{a_n + j b_n}{2}$    (1.2.9)

which, after using Euler's relation $e^{j2\pi nt/T_0} = \cos(2\pi nt/T_0) + j \sin(2\pi nt/T_0)$, results in

$x(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left[ a_n \cos\!\left(\frac{2\pi n t}{T_0}\right) + b_n \sin\!\left(\frac{2\pi n t}{T_0}\right) \right]$    (1.2.10)

where

$a_n = \frac{2}{T_0} \int_{\alpha}^{\alpha+T_0} x(t) \cos\!\left(\frac{2\pi n t}{T_0}\right) dt$    (1.2.11)

$b_n = \frac{2}{T_0} \int_{\alpha}^{\alpha+T_0} x(t) \sin\!\left(\frac{2\pi n t}{T_0}\right) dt$    (1.2.12)

Note that for $n = 0$ we always have $b_0 = 0$. Using the relation

$a \cos\varphi + b \sin\varphi = \sqrt{a^2 + b^2}\, \cos\!\left(\varphi - \arctan\frac{b}{a}\right)$    (1.2.13)

Equation (1.2.10) can be written in the form

$x(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \sqrt{a_n^2 + b_n^2}\, \cos\!\left(\frac{2\pi n t}{T_0} - \arctan\frac{b_n}{a_n}\right)$    (1.2.14)

By defining $c_n = |x_n|$ and $\theta_n = \angle x_n$, and noting that $\sqrt{a_n^2 + b_n^2} = 2|x_n|$, we obtain

$x(t) = x_0 + 2 \sum_{n=1}^{\infty} c_n \cos\!\left(\frac{2\pi n t}{T_0} + \theta_n\right)$    (1.2.15)

which is the third form of the Fourier series expansion for real and periodic signals. Plots of $|x_n|$ and $\angle x_n$ versus $n$ or $n f_0$ are called the discrete spectrum of $x(t)$. The plot of $|x_n|$ is usually called the magnitude spectrum, and the plot of $\angle x_n$ is referred to as the phase spectrum.

In general, the Fourier series coefficients $\{x_n\}$ for real-valued signals are related to $a_n$, $b_n$, $c_n$, and $\theta_n$ through

$a_n = 2\,\mathrm{Re}[x_n], \qquad b_n = -2\,\mathrm{Im}[x_n], \qquad c_n = |x_n|, \qquad \theta_n = \angle x_n$    (1.2.16)

If $x(t)$ is real and even, i.e., $x(-t) = x(t)$, then taking $\alpha = -T_0/2$, we have

$b_n = \frac{2}{T_0} \int_{-T_0/2}^{T_0/2} x(t) \sin\!\left(\frac{2\pi n t}{T_0}\right) dt$    (1.2.17)

which is zero because the integrand is an odd function of $t$. Therefore, for a real and even signal $x(t)$, all $x_n$'s are real. In this case the trigonometric Fourier series consists of all cosine functions. Similarly, if $x(t)$ is real and odd, i.e., $x(-t) = -x(t)$, then

$a_n = \frac{2}{T_0} \int_{-T_0/2}^{T_0/2} x(t) \cos\!\left(\frac{2\pi n t}{T_0}\right) dt$    (1.2.18)

is zero and all $x_n$'s are imaginary. In this case the trigonometric Fourier series consists of all sine functions.

Illustrative Problem 1.1 [Fourier series of a rectangular signal train] Let the periodic signal $x(t)$, with period $T_0$, be defined by

$x(t) = \begin{cases} A, & |t| < t_0 \\ A/2, & t = \pm t_0 \\ 0, & \text{otherwise} \end{cases}$    (1.2.19)

for $|t| \le T_0/2$, where $t_0 < T_0/2$. A plot of $x(t)$ is shown in Figure 1.1.

Figure 1.1: The signal x(t).

1. Determine the Fourier series coefficients of $x(t)$ in exponential and trigonometric form.
2. Plot the discrete spectrum of $x(t)$.

The rectangular signal $\Pi(t)$ is, as usual, defined by

$\Pi(t) = \begin{cases} 1, & |t| < \tfrac{1}{2} \\ \tfrac{1}{2}, & t = \pm\tfrac{1}{2} \\ 0, & \text{otherwise} \end{cases}$    (1.2.20)

To derive the Fourier series coefficients in the expansion of $x(t)$, we use (1.2.5). Assuming $A = 1$, $T_0 = 4$, and $t_0 = 1$,
we have

$x_n = \frac{1}{4} \int_{-1}^{1} e^{-j2\pi nt/4}\, dt = \frac{1}{2}\, \mathrm{sinc}\!\left(\frac{n}{2}\right)$    (1.2.21)

where $\mathrm{sinc}(x)$ is defined as

$\mathrm{sinc}(x) = \frac{\sin(\pi x)}{\pi x}$    (1.2.22)

A plot of the sinc function is shown in Figure 1.2.

Figure 1.2: The sinc signal.

Note that $x_n$ is always real. Therefore, depending on its sign, the phase is either zero or $\pi$. Note also that for even $n$'s, $x_n = 0$ (with the exception of $n = 0$, where $x_0 = \tfrac{1}{2}$). The magnitude of the $x_n$'s is

$|x_n| = \frac{1}{2}\left|\mathrm{sinc}\!\left(\frac{n}{2}\right)\right|$    (1.2.23)

The discrete spectrum is shown in Figure 1.3.

Figure 1.3: The discrete spectrum of the signal in Illustrative Problem 1.1.

Obviously, all the $x_n$'s are real (since $x(t)$ is real and even), so

$a_n = \mathrm{sinc}\!\left(\frac{n}{2}\right), \qquad b_n = 0, \qquad c_n = \frac{1}{2}\left|\mathrm{sinc}\!\left(\frac{n}{2}\right)\right|, \qquad \theta_n \in \{0, \pi\}$    (1.2.24)

Using these coefficients we have

$x(t) = \frac{1}{2} + \sum_{n=1}^{\infty} \mathrm{sinc}\!\left(\frac{n}{2}\right) \cos\!\left(\frac{2\pi n t}{4}\right)$    (1.2.25)

A plot of the Fourier series approximations to this signal over one period for $n = 0, 1, 3, 5, 7, 9$ is shown in Figure 1.4. Note that as $n$ increases, the approximation becomes closer to the original signal $x(t)$.

Figure 1.4: Various Fourier series approximations for the rectangular pulse in Illustrative Problem 1.1.

The MATLAB script for plotting the discrete spectrum of the signal is given below.

% MATLAB script for Illustrative Problem 1.1, Chapter 1.
n=[-20:1:20];
x=abs(sinc(n/2));
stem(n,x);

Note that the m-file fseries.m determines the Fourier series coefficients for nonnegative values of n. When the signal x(t) is described on one period between a and b in an m-file, the Fourier series coefficients can be obtained using the m-file fseries.m given below.

function xx=fseries(funfcn,a,b,n,tol,p1,p2,p3)
%FSERIES   Returns the Fourier series coefficients.
%          XX=FSERIES(FUNFCN,A,B,N,TOL,P1,P2,P3)
%          funfcn=the given function, in an m-file.
%          It can depend on up to three parameters
%          p1, p2, and p3. The function is given
%          over one period extending from 'a' to 'b'.
%          xx=vector of length n+1 of Fourier series
%          coefficients, xx0, xx1, ..., xxn.
%          p1, p2, p3=parameters of funfcn.
%          tol=the error level.
j=sqrt(-1);
args=[];
for nn=1:nargin-5
  args=[args,',p',int2str(nn)];
end
args=[args,')'];
t=b-a;
% coefficient for n=0: the average of the signal over one period
xx(1)=eval(['1/(',num2str(t),')*quad(funfcn,a,b,tol,[]',args]);
% coefficients for n=1,...,n: multiply by the complex exponential and integrate
for i=1:n
  newfun=['exp(-j*2*pi*x*',int2str(i),'/(',num2str(t),')).*',funfcn];
  xx(i+1)=eval(['1/(',num2str(t),')*quad(newfun,a,b,tol,[]',args]);
end

Illustrative Problem 1.2 [The magnitude and the phase spectra] Determine and plot the discrete magnitude and phase spectra of the periodic signal $x(t)$ with a period equal to 8 and defined as $x(t) = \Lambda(t)$ for $|t| \le 4$. A plot of this signal is shown in Figure 1.5.

Figure 1.5: A periodic signal.

Since the signal is given by an m-file lambda.m, we can choose the interval $[a, b] = [-4, 4]$ and determine the coefficients. The MATLAB script for determining and plotting the magnitude and the phase spectra is given below. In Figure 1.6 the magnitude and the phase spectra of this signal are plotted for a choice of n = 24. Note that since $x(t)$ is real-valued, we have $x_{-n} = x_n^*$, so the coefficients for negative n follow from those for nonnegative n.

% MATLAB script for Illustrative Problem 1.2, Chapter 1.
echo on
function='lambda';
a=-4;
b=4;
n=24;
tol=0.1;
xx=fseries(function,a,b,n,tol);
xx1=xx(n+1:-1:2);
xx1=[conj(xx1),xx];
absxx1=abs(xx1);
pause  % Press any key to see a plot of the magnitude spectrum.
n1=[-n:n];
stem(n1,absxx1)
title('the discrete magnitude spectrum')
phasexx1=angle(xx1);
pause  % Press any key to see a plot of the phase spectrum.
stem(n1,phasexx1)
title('the discrete phase spectrum')

Figure 1.6: The magnitude and the phase spectra in Illustrative Problem 1.2.

Illustrative Problem 1.3 [The magnitude and the phase spectra] Determine and plot the magnitude and the phase spectra of a periodic signal with a period equal to 12 that is given in the interval [-6, 6] by the density function of a zero-mean, unit-variance Gaussian (normal) random variable. A plot of this signal is shown in Figure 1.7.

Figure 1.7: The periodic signal in Illustrative Problem 1.3.

The signal is given in the m-file normal.m. This file requires two parameters, m and s, the mean and the standard deviation of the random variable, which in this problem are 0 and 1, respectively. Therefore, we can use the following MATLAB script to obtain the magnitude and the phase plots shown in Figure 1.8.

% MATLAB script for Illustrative Problem 1.3, Chapter 1.
echo on
function='normal';
a=-6;
b=6;
n=24;
tol=0.1;
xx=fseries(function,a,b,n,tol,0,1);
xx1=xx(n+1:-1:2);
xx1=[conj(xx1),xx];
absxx1=abs(xx1);
pause  % Press any key to see a plot of the magnitude spectrum.
n1=[-n:n];
stem(n1,absxx1)
title('the discrete magnitude spectrum')
phasexx1=angle(xx1);
pause  % Press any key to see a plot of the phase spectrum.
stem(n1,phasexx1)
title('the discrete phase spectrum')

Figure 1.8: Magnitude and phase spectra for Illustrative Problem 1.3.
If x(t) and y(t) are expanded as ÎC x(t) = £ 00 £ x„eJ2.nr/r.""/Ttt (1.29) where H ( / ) denotes the transfer function 3 of the LTI system given as the Fourier transform of its impulse response h{t). . oo / -oo k(t)e~ j2:tf 'dt (1.r(r £ n= -oo z)h{x)dx xne'2*n{'-r)!T»h{x)dx ! " = j r rx> x„( P h(T)e~j2'.>dx)ej2y"'"T<< = E n= -oo y^ 2 ""'^ (1-2.28 CHAPTER I. .3. Magnitude and phase spectra for Illustrative Problem 1.8.Fourier Senes Qoooooooooooooooooooooooooo o è Figure 1. Plots of . .2. h{t) Figure 1. -I < r < 0 (1.10: The input signal and the system impulse response.r(i) and h(t) are given in Figure 1. Illustrative Problem 1.31) 3. (1. 0.9: Periodic signals through LTI systems.10. 0 < t < I 1.4 [Filtering of periodic signals] A triangular pulse train x(t) with period To = 2 is defined in one period as + I. SIGNALS AND LINEAR SYSTEMS X(t ) LTi System Figure 1. Assuming that this signal passes through an LTI system whose impulse response is given by h(t) r. Determine the Fourier series coefficients of x(r).2. otherwise 2. Plot the discrete spectrum of x{t). = 0 < / < 1 otherwise 0.32) plot the discrete spectrum and the output v(f). I t + 1.14 CHAPTER I. 35) (1. 0 n 2 < I 4 t J 6 .. we will adopt a numerical approach. 1] interval and thai the Fourier transform of A(r) is s i n c 2 ( / ) .11. .. 2.2.1!: The discrete spectrum of the signal.Ç = ^ P J-IK.2.i„ — 0 for all even values of n except for n = 0. * l_I . Fourier Senes 15 1. 0» OJK oI 1 0 4 I . First we have to derive H ( f ) .2..1. The resulting magnitude . the transfer function of the system. we have . x(!)e-r-*>u/Tt) d { (1.33) AU)e->* dt (1. 1 0 Figure 1. A plot of the discrete spectrum of x(i) is shown in Figure 1. • a . L . Obviously.2.36) (1.37) AU)e-!7""dt where we have used ihe fact that A(/) vanishes outside the [ —1.2.2.. We have x "= F I = ]. This result can also be obtained by using the expression for A(r) and integrating by parts... 3.34) (1. Although this can be done analytically.4 ! -2 . 
The Fourier series coefficients of x(t) are

x_n = (1/T0) ∫_{T0} x(t) e^{-j2πnt/T0} dt = (1/2) ∫_{-1}^{1} Λ(t) e^{-jπnt} dt = (1/2) sinc²(n/2)    (1.2.33)

where we have used the fact that Λ(t) vanishes outside the [-1, 1] interval and that the Fourier transform of Λ(t) is sinc²(f). This result can also be obtained by using the expression for Λ(t) and integrating by parts. Obviously, x_n = 0 for all even values of n except for n = 0. A plot of the discrete spectrum of x(t) is shown in Figure 1.11.

Figure 1.11: The discrete spectrum of the signal.

First we have to derive H(f), the transfer function of the system. Although this can be done analytically, we will adopt a numerical approach. The resulting magnitude of the transfer function and the magnitude of H(n/T0) = H(n/2) are shown in Figure 1.12.

Figure 1.12: The transfer function of the LTI system and the magnitude of H(n/2).

To derive the discrete spectrum of the output we employ the relation

y_n = x_n H(n/T0) = (1/2) sinc²(n/2) H(n/2)    (1.2.34)

The resulting discrete spectrum of the output is shown in Figure 1.13.

Figure 1.13: The discrete spectrum of the output.

The MATLAB script for this problem follows.

% MATLAB script for Illustrative Problem 1.4, Chapter 1.
echo on
n=[-20:1:20];
% Fourier series coefficients of x(t) vector
x=.5*(sinc(n/2)).^2;
% sampling interval
ts=1/40;
% time vector
t=[-.5:ts:1.5];
fs=1/ts;
% impulse response
h=[zeros(1,20),t(21:61),zeros(1,20)];
% transfer function
H=fft(h)/fs;
% frequency resolution
df=fs/80;
f=[0:df:fs]-fs/2;
% rearrange H
H1=fftshift(H);
y=x.*H1(21:61);
% Plotting commands follow

1.3 Fourier Transforms

The Fourier transform is the extension of the Fourier series to nonperiodic signals. The Fourier transform of a signal x(t) that satisfies certain conditions, known as Dirichlet's conditions [1], is denoted by X(f) or, equivalently, F[x(t)] and is defined by

X(f) = ∫_{-∞}^{∞} x(t) e^{-j2πft} dt    (1.3.1)

The inverse Fourier transform of X(f) is x(t), given by

x(t) = ∫_{-∞}^{∞} X(f) e^{j2πft} df    (1.3.2)

If x(t) is a real signal, then X(f) satisfies the Hermitian symmetry, i.e.,

X(-f) = X*(f)    (1.3.3)

There are certain properties that the Fourier transform satisfies. The most important properties of the Fourier transform are summarized as follows.

1. Linearity: The Fourier transform of a linear combination of two or more signals is the linear combination of the corresponding Fourier transforms:

F[αx1(t) + βx2(t)] = αF[x1(t)] + βF[x2(t)]    (1.3.4)

2. Duality: If X(f) = F[x(t)], then

F[X(t)] = x(-f)    (1.3.5)

3. Time shift: A shift in the time domain results in a phase shift in the frequency domain. If X(f) = F[x(t)], then

F[x(t - t0)] = e^{-j2πft0} X(f)    (1.3.6)

4. Scaling: An expansion in the time domain results in a contraction in the frequency domain, and vice versa. If X(f) = F[x(t)], then for a ≠ 0

F[x(at)] = (1/|a|) X(f/a)    (1.3.7)
5. Modulation: Multiplication by an exponential in the time domain corresponds to a frequency shift in the frequency domain. If X(f) = F[x(t)], then

F[e^{j2πf0t} x(t)] = X(f - f0)    (1.3.8)
F[x(t) cos(2πf0t)] = (1/2)[X(f - f0) + X(f + f0)]    (1.3.9)

6. Convolution: Convolution in the time domain is equivalent to multiplication in the frequency domain, and vice versa. If X(f) = F[x(t)] and Y(f) = F[y(t)], then

F[x(t) * y(t)] = X(f)Y(f)    (1.3.10)
F[x(t)y(t)] = X(f) * Y(f)    (1.3.11)

7. Parseval's relation: If X(f) = F[x(t)] and Y(f) = F[y(t)], then

∫_{-∞}^{∞} x(t)y*(t) dt = ∫_{-∞}^{∞} X(f)Y*(f) df    (1.3.12)
∫_{-∞}^{∞} |x(t)|² dt = ∫_{-∞}^{∞} |X(f)|² df    (1.3.13)

The second relation is also referred to as Rayleigh's relation.

8. Differentiation: Differentiation in the time domain corresponds to multiplication by j2πf in the frequency domain. If X(f) = F[x(t)], then

F[x'(t)] = j2πf X(f),    F[d^n x(t)/dt^n] = (j2πf)^n X(f)    (1.3.14)

Table 1.1 gives the most useful Fourier transform pairs. In this table u_{-1}(t) denotes the unit step function, δ(t) is the impulse signal, δ^{(n)}(t) denotes the nth derivative of the impulse signal, and sgn(t) is the signum function, defined as

sgn(t) = { 1,   t > 0
         { 0,   t = 0
         { -1,  t < 0    (1.3.15)

For a periodic signal x(t) with period T0, whose Fourier series coefficients are given by x_n,

F[x(t)] = Σ_{n=-∞}^{∞} x_n δ(f - n/T0)    (1.3.16)
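Rayleigh's relation (Property 7) has a direct discrete analogue that is easy to verify with the DFT. In the NumPy sketch below (the pulse and grid are assumptions for illustration), the Fourier transform is approximated by ts·fft(x) with frequency spacing df = 1/(N·ts), and the energy computed in the two domains agrees:

```python
import numpy as np

ts = 0.01
t = np.arange(-5, 5, ts)
x = np.exp(-np.pi * t**2)            # Gaussian pulse; its exact energy is 1/sqrt(2)
N = len(t)
X = ts * np.fft.fft(x)               # approximates the Fourier transform X(f)
E_time = np.sum(np.abs(x)**2) * ts
E_freq = np.sum(np.abs(X)**2) / (N * ts)
assert abs(E_time - E_freq) < 1e-9
assert abs(E_time - 1/np.sqrt(2)) < 1e-6
```

The first assertion is the discrete Parseval identity; the second compares against the analytic energy of the Gaussian pulse.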
Table 1.1: Table of Fourier transform pairs.

x(t)                          X(f)
δ(t)                          1
1                             δ(f)
δ(t - t0)                     e^{-j2πft0}
e^{j2πf0t}                    δ(f - f0)
cos(2πf0t)                    (1/2)δ(f - f0) + (1/2)δ(f + f0)
sin(2πf0t)                    (1/2j)δ(f - f0) - (1/2j)δ(f + f0)
Π(t)                          sinc(f)
sinc(t)                       Π(f)
Λ(t)                          sinc²(f)
sinc²(t)                      Λ(f)
e^{-αt}u_{-1}(t), α > 0       1/(α + j2πf)
t e^{-αt}u_{-1}(t), α > 0     1/(α + j2πf)²
e^{-α|t|}, α > 0              2α/(α² + (2πf)²)
e^{-πt²}                      e^{-πf²}
sgn(t)                        1/(jπf)
u_{-1}(t)                     (1/2)δ(f) + 1/(j2πf)
δ^{(n)}(t)                    (j2πf)^n
1/t                           -jπ sgn(f)
Σ_{n=-∞}^{∞} δ(t - nT0)       (1/T0) Σ_{n=-∞}^{∞} δ(f - n/T0)

In other words, the Fourier transform of a periodic signal consists of impulses at multiples of the fundamental frequency (harmonics) of the original signal. It is also possible to express the Fourier series coefficients in terms of the Fourier transform of the truncated signal by

x_n = (1/T0) X_T(n/T0)    (1.3.17)

where by definition X_T(f) is the Fourier transform of x_T(t), the truncated signal, defined by

x_T(t) = { x(t),  |t| ≤ T0/2
         { 0,     otherwise    (1.3.18)

The Fourier transform of a signal is called the spectrum of the signal. The spectrum of a signal is in general a complex function X(f); therefore, to plot the spectrum, usually two plots are provided: the magnitude spectrum |X(f)| and the phase spectrum ∠X(f).

Illustrative Problem 1.5 [Fourier transforms] Plot the magnitude and the phase spectra of signals x1(t) and x2(t) shown in Figure 1.14.

Figure 1.14: Signals x1(t) and x2(t).

Since the signals are similar except for a time shift, we would expect them to have the same magnitude spectra. The common magnitude spectrum and the two phase spectra plotted on the same axis are shown in Figures 1.15 and 1.16.

Figure 1.15: The common magnitude spectrum of the signals x1(t) and x2(t).

The MATLAB script for this problem is given below.
% MATLAB script for Illustrative Problem 1.5, Chapter 1.
df=0.01;
fs=10;
ts=1/fs;
t=[-5:ts:5];
x1=zeros(size(t));
x1(41:51)=t(41:51)+1;
x1(52:61)=ones(size(x1(52:61)));
x2=zeros(size(t));
x2(51:71)=x1(41:61);
[X1,x11,df1]=fftseq(x1,ts,df);
[X2,x21,df2]=fftseq(x2,ts,df);
X11=X1/fs;
X21=X2/fs;
f=[0:df1:df1*(length(x11)-1)]-fs/2;
plot(f(500:525),fftshift(abs(X11(500:525))))
plot(f(500:525),fftshift(angle(X11(500:525))),f(500:525),fftshift(angle(X21(500:525))),'--')

Figure 1.16: The phase spectra of the signals x1(t) and x2(t).

In Section 1.3.1 we show how to obtain the Fourier transform of a signal using MATLAB.

1.3.1 Sampling Theorem

The sampling theorem is one of the most important results in signal and system analysis; it forms the basis for the relation between continuous-time signals and discrete-time signals. The sampling theorem says that a bandlimited signal, i.e., a signal whose Fourier transform vanishes for |f| > W for some W, can be completely described in terms of its sample values taken at intervals Ts as long as Ts ≤ 1/2W. If the sampling is done at intervals Ts = 1/2W, known as the Nyquist interval (or Nyquist rate), the signal x(t) can be reconstructed from the sample values {x[n] = x(nTs)} as

x(t) = Σ_{n=-∞}^{∞} x(nTs) sinc(2W(t - nTs))    (1.3.19)

This result is based on the fact that the sampled waveform xδ(t), defined as

xδ(t) = Σ_{n=-∞}^{∞} x(nTs) δ(t - nTs)    (1.3.20)

has a Fourier transform given by

Xδ(f) = (1/Ts) Σ_{n=-∞}^{∞} X(f - n/Ts)  for all f
      = (1/Ts) X(f)  for |f| < W    (1.3.21)
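Equation (1.3.19) can be checked numerically. In the NumPy sketch below the bandwidth, test signal, and truncation of the sum are assumptions for illustration: a signal bandlimited to W, sampled at the Nyquist interval, is rebuilt from its samples by the sinc interpolation formula.

```python
import numpy as np

W = 2.0                        # assumed bandwidth (Hz)
ts = 1 / (2 * W)               # Nyquist interval
n = np.arange(-200, 201)       # truncated version of the doubly infinite sum
x_of = lambda u: np.sinc(2 * W * (u - 0.1))   # x(t) = sinc(4(t-0.1)), bandlimited to W
samples = x_of(n * ts)
t = np.linspace(-1, 1, 41)
x_rec = np.array([np.sum(samples * np.sinc(2 * W * (tt - n * ts))) for tt in t])
assert np.max(np.abs(x_rec - x_of(t))) < 1e-2
```

The small residual error comes entirely from truncating the interpolation sum to finitely many samples.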
Passing xδ(t) through a lowpass filter with a bandwidth of W and a gain of Ts in the passband will therefore reproduce the original signal. Figure 1.17 is a representation of Equation (1.3.19) for Ts = 1 and the sample sequence {x[n]}_{n=-3}^{2} = {1, 1, -1, 2, -1, 2}; in this case

x(t) = sinc(t + 3) + sinc(t + 2) - sinc(t + 1) + 2 sinc(t) - sinc(t - 1) + 2 sinc(t - 2)

Figure 1.17: Representation of the sampling theorem.

The discrete Fourier transform (DFT) of the discrete-time sequence x[n] is expressed as

Xd(f) = Σ_{n=-∞}^{∞} x[n] e^{-j2πfnTs}    (1.3.22)

Comparing Equations (1.3.21) and (1.3.22) we conclude that

X(f) = Ts Xd(f)  for |f| < W    (1.3.23)

which gives the relation between the Fourier transform of an analog signal and the discrete Fourier transform of its corresponding sampled signal.

Numerical computation of the discrete Fourier transform is done via the well-known fast Fourier transform (FFT) algorithm. In this algorithm a sequence of length N of samples of the signal x(t), taken at intervals of Ts, is used as the representation of the signal. The result is a sequence of length N of samples of Xd(f) in the frequency interval [0, fs), where fs = 1/Ts = 2W is the Nyquist frequency. The samples are Δf = fs/N apart. The FFT algorithm is computationally efficient if the length of the input sequence, N, is a power of 2. In many cases if this length is not a power of 2, it is made to be a power of 2 by techniques such as zero-padding. Note that since the FFT algorithm essentially gives the DFT of the sampled signal, in order to obtain the Fourier transform of the original analog signal we have to multiply it by Ts, as in Equation (1.3.23). In order to improve the frequency resolution, we have to increase N.

The MATLAB function fftseq.m, given below, takes as its input a time sequence m, the sampling interval ts, and the required frequency resolution df and returns a sequence whose length is a power of 2, the FFT of this sequence M, and the resulting frequency resolution.

function [M,m,df]=fftseq(m,ts,df)
%FFTSEQ   [M,m,df]=fftseq(m,ts,df)
%         [M,m,df]=fftseq(m,ts)
%         Generates M, the FFT of the sequence m.
%         The sequence is zero-padded to meet the required frequency resolution df.
%         ts is the sampling interval. The output df is the final frequency
%         resolution. Output m is the zero-padded version of input m.
fs=1/ts;
if nargin == 2
  n1=0;
else
  n1=fs/df;
end
n2=length(m);
n=2^(max(nextpow2(n1),nextpow2(n2)));
M=fft(m,n);
m=[m,zeros(1,n-n2)];
df=fs/n;
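For readers working outside MATLAB, a NumPy analogue of fftseq.m (an assumption, not the book's code) behaves the same way: with fs = 5 Hz and df = 0.01 Hz it pads to 512 points and returns a resolution of 5/512 ≈ 0.0098 Hz.

```python
import numpy as np

def fftseq(m, ts, df=None):
    """Zero-pad m to a power-of-2 length giving resolution at least df,
    then return (FFT, padded sequence, achieved resolution)."""
    fs = 1 / ts
    n1 = 0 if df is None else fs / df
    n2 = len(m)
    n = 2 ** int(max(np.ceil(np.log2(max(n1, 1))), np.ceil(np.log2(n2))))
    M = np.fft.fft(m, n)                         # fft zero-pads m to length n
    return M, np.concatenate([m, np.zeros(n - n2)]), fs / n

M, mp, df_out = fftseq(np.ones(41), ts=0.2, df=0.01)
assert len(mp) == 512 and abs(df_out - 5/512) < 1e-12
```

The power-of-2 length is chosen as the larger of what the resolution requires (fs/df) and what the data length requires, mirroring the nextpow2 logic above.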
The value of df gives the frequency resolution of the resulting Fourier transform.

Illustrative Problem 1.6 [Analytical and numerical derivation of the Fourier transform] The signal x(t) is described by

x(t) = { t + 2,   -2 ≤ t < -1
       { 1,       -1 ≤ t < 1
       { -t + 2,   1 ≤ t < 2
       { 0,        otherwise    (1.3.24)

and is shown in Figure 1.18.

Figure 1.18: The signal x(t).

1. Determine the Fourier transform of x(t) analytically and plot the spectrum of x(t).
2. Using MATLAB, determine the Fourier transform numerically and plot the result.

The signal x(t) can be written as

x(t) = 2Λ(t/2) - Λ(t)    (1.3.25)

and therefore

X(f) = 4 sinc²(2f) - sinc²(f)    (1.3.26)

where we have used linearity, scaling, and the fact that the Fourier transform of Λ(t) is sinc²(f). Obviously the Fourier transform is real. The magnitude spectrum is shown in Figure 1.19.

In order to determine the Fourier transform using MATLAB, we first give a rough estimate of the bandwidth of the signal. Since the signal is relatively smooth, its bandwidth is proportional to the inverse time duration of the signal. The time duration of the signal is 4. To be on the safe side we take the bandwidth as ten times the inverse time duration, or

BW = 10 × (1/4) = 2.5    (1.3.27)

and therefore the Nyquist frequency is twice the bandwidth and is equal to 5. Hence, the sampling interval is Ts = 1/fs = 0.2. We consider the signal on the interval [-4, 4] and sample it at Ts intervals. We have chosen the required frequency resolution to be 0.01 Hz, so the resulting frequency resolution returned by fftseq.m is 0.0098 Hz, which meets the requirements of the problem. The signal vector x, which has length 41, is zero-padded to a length of 512 to meet the frequency-resolution requirement and also to make it a power of 2 for computational efficiency.
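The agreement between the analytic transform (1.3.26) and the FFT-based computation can be verified directly. The NumPy sketch below uses a finer grid and a longer FFT than the book's script (those choices are assumptions made so the comparison is tight); the fft output is scaled by ts and phase-corrected for the record starting at t = -4:

```python
import numpy as np

ts = 0.01
t = np.arange(-4, 4, ts)
lam = lambda u: np.maximum(1 - np.abs(u), 0)      # triangular pulse Lambda(u)
x = 2 * lam(t / 2) - lam(t)
N = 2 ** 16
f = np.fft.fftfreq(N, ts)
# ts*fft approximates X(f); exp(+j 2 pi f * 4) accounts for the t = -4 origin
X = ts * np.fft.fft(x, N) * np.exp(2j * np.pi * f * 4)
X_exact = 4 * np.sinc(2 * f) ** 2 - np.sinc(f) ** 2
assert np.max(np.abs(X - X_exact)) < 1e-3
```

The residual difference is aliasing from sampling, which is tiny here because X(f) decays like 1/f².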
Figure 1.19: The magnitude spectrum of x(t) derived analytically.

The MATLAB script for this problem is given below. A plot of the magnitude spectrum of the Fourier transform is given in Figure 1.20.

% MATLAB script for Illustrative Problem 1.6, Chapter 1.
echo on
ts=0.2;                                 % set parameters
fs=1/ts;
df=0.01;
x=[zeros(1,10),[0:0.2:1],ones(1,9),[1:-0.2:0],zeros(1,10)];
[X,x,df1]=fftseq(x,ts,df);              % derive the FFT
X1=X/fs;                                % scaling
f=[0:df1:df1*(length(x)-1)]-fs/2;       % frequency vector for FFT
f1=[-2.5:0.001:2.5];                    % frequency vector for analytic approach
y=4*(sinc(2*f1)).^2-(sinc(f1)).^2;      % exact Fourier transform
pause % Press a key to see the plot of the Fourier transform derived analytically
clf
subplot(2,1,1)
plot(f1,abs(y));
xlabel('Frequency')
title('Magnitude spectrum of x(t) derived analytically')
pause % Press a key to see the plot of the Fourier transform derived numerically
subplot(2,1,2)
plot(f,fftshift(abs(X1)));
xlabel('Frequency')
title('Magnitude spectrum of x(t) derived numerically')

Figure 1.20: The magnitude spectrum of x(t) derived numerically.

1.3.2 Frequency Domain Analysis of LTI Systems

The output of an LTI system with impulse response h(t) when the input signal is x(t) is given by the convolution integral

y(t) = x(t) * h(t)    (1.3.28)

Applying the convolution theorem we obtain

Y(f) = X(f)H(f)    (1.3.29)

where

H(f) = F[h(t)] = ∫_{-∞}^{∞} h(t) e^{-j2πft} dt    (1.3.30)

is the transfer function of the system. Equation (1.3.29) can be written in the form

|Y(f)| = |X(f)||H(f)|
∠Y(f) = ∠X(f) + ∠H(f)    (1.3.31)

which shows the relation between the magnitude and phase spectra of the input and the output.
Illustrative Problem 1.7 [LTI system analysis in the frequency domain] The signal x(t) whose plot is given in Figure 1.21 consists of some line segments and a sinusoidal segment.

Figure 1.21: The signal x(t).

1. Determine the FFT of this signal and plot it.
2. If the signal is passed through an ideal lowpass filter with a bandwidth of 1.5 Hz, find the output of the filter and plot it.
3. If the signal is passed through a filter whose impulse response is given by

h(t) = { t,  0 ≤ t < 1
       { 1,  1 ≤ t < 2
       { 0,  otherwise    (1.3.32)

derive an expression for the filter output, find the output of the filter, and plot it.

First we derive an expression for the sinusoidal part of the signal. This is a sinusoid whose half-period is 2; therefore, it has a frequency of f0 = 1/4 = 0.25 Hz, so the general expression for it is 2 cos(2π × 0.25t + θ) + 2 = 2 cos(0.5πt + θ) + 2, since the sinusoid has an amplitude of 2 and is raised by 2. The value of the phase θ is derived by employing the boundary condition

[2 + 2 cos(0.5πt + θ)]|_{t=2} = 0    (1.3.33)

or θ = 0. Therefore, the signal can be written as

x(t) = { t + 2,             -2 ≤ t < 0
       { 2,                  0 ≤ t < 1
       { 2 + 2 cos(0.5πt),   1 ≤ t < 3
       { 2,                  3 ≤ t < 4
       { 0,                  otherwise    (1.3.34)

Having a complete description of the signal we can proceed with the solution.
1. The required frequency resolution is 0.01 Hz; the bandwidth of the signal has been chosen to be 5 Hz, so fs = 5 Hz. The plot of the magnitude spectrum of the signal is given in Figure 1.22.

Figure 1.22: Magnitude spectrum of the signal.

2. Since the bandwidth of the lowpass filter is 1.5 Hz, its transfer function is given by

H(f) = { 1,  |f| ≤ 1.5
       { 0,  1.5 < |f|    (1.3.35)

which is multiplied by X(f) to generate Y(f), the Fourier transform of the output. Using this transfer function gives the output shown in Figure 1.23.

Figure 1.23: The output of the lowpass filter.

3. Here we obtain the output of the filter by a simple convolution. The result is shown in Figure 1.24.

Figure 1.24: The output signal in the third part of the problem.

The MATLAB script for this problem is given below.

% MATLAB script for Illustrative Problem 1.7, Chapter 1.
echo on
df=0.01;                                % freq. resolution
fs=5;                                   % sampling frequency
ts=1/fs;                                % sampling interval
t=[-5:ts:5];                            % time vector
x=zeros(1,length(t));                   % input signal initiation
x(16:26)=t(16:26)+2;
x(27:31)=2*ones(1,5);
x(32:41)=2+2*cos(0.5*pi*t(32:41));
x(42:46)=2*ones(1,5);
% part 1
[X,x1,df1]=fftseq(x,ts,df);
f=[0:df1:df1*(length(x1)-1)]-fs/2;      % frequency vector
X1=X/fs;                                % scaling
% part 2: filter transfer function
H=[ones(1,ceil(1.5/df1)),zeros(1,length(X)-2*ceil(1.5/df1)),ones(1,ceil(1.5/df1))];
Y=X.*H;                                 % output spectrum
y1=ifft(Y);                             % output of the filter
% part 3: LTI system impulse response
h=[zeros(1,ceil(5/ts)),t(ceil(5/ts)+1:ceil(6/ts)),ones(1,ceil(7/ts)-ceil(6/ts)),zeros(1,51-ceil(7/ts))];
y2=conv(h,x);                           % output of the LTI system
% Plotting commands follow
pause % Press a key to see the spectrum of the input
plot(f,fftshift(abs(X1)))
pause % Press a key to see the output of the lowpass filter
plot(t,abs(y1(1:length(t))))
pause % Press a key to see the output of the LTI system
plot([-10:ts:10],y2)

1.4 Power and Energy

The energy and the power contents of a real signal x(t), denoted by Ex and Px, respectively, are defined as

Ex = lim_{T→∞} ∫_{-T/2}^{T/2} x²(t) dt,    Px = lim_{T→∞} (1/T) ∫_{-T/2}^{T/2} x²(t) dt    (1.4.1)

A signal with finite energy is called an energy-type signal, and a signal with positive and finite power is a power-type signal.^4 All periodic signals^5 are power-type signals.

^4 There exist signals that are neither energy type nor power type.
^5 The only exception is those signals that are equal to zero almost everywhere.
For instance, x(t) = Π(t) is an example of an energy-type signal, whereas x(t) = cos(t) is an example of a power-type signal. The energy spectral density of an energy-type signal gives the distribution of energy at various frequencies of the signal and is given by

Gx(f) = |X(f)|²    (1.4.2)

Therefore,

Ex = ∫_{-∞}^{∞} Gx(f) df    (1.4.3)

Using the convolution theorem we have

Gx(f) = F[Rx(τ)]    (1.4.4)

where Rx(τ) is the autocorrelation function of x(t), defined for real-valued signals as

Rx(τ) = ∫_{-∞}^{∞} x(t)x(t + τ) dt = x(τ) * x(-τ)    (1.4.5)

When the signal x(t) passes through a filter with transfer function H(f), the output energy spectral density is obtained via

Gy(f) = |H(f)|² Gx(f)    (1.4.6)

For power-type signals we define the time-average autocorrelation function as

Rx(τ) = lim_{T→∞} (1/T) ∫_{-T/2}^{T/2} x(t)x(t + τ) dt    (1.4.7)

and the power spectral density is in general given by

Sx(f) = F[Rx(τ)]    (1.4.8)

The total power is the integral of the power spectral density:

Px = ∫_{-∞}^{∞} Sx(f) df    (1.4.9)

For the special case of a periodic signal x(t) with period T0 and Fourier series coefficients x_n, the power spectral density is given by

Sx(f) = Σ_{n=-∞}^{∞} |x_n|² δ(f - n/T0)    (1.4.10)

which means all the power is concentrated at the harmonics of the fundamental frequency, and the power at the nth harmonic (n/T0) is |x_n|², the magnitude square of the corresponding Fourier series coefficient. When a power-type signal passes through a filter with transfer function H(f), the output power spectral density is

Sy(f) = |H(f)|² Sx(f)    (1.4.11)

If we use the discrete-time (sampled) signal, the energy and power relations equivalent to Equation (1.4.1) in terms of the discrete-time signal become

Ex = Ts Σ_n x²[n]    (1.4.12)
Px = lim_{N→∞} (1/(2N+1)) Σ_{n=-N}^{N} x²[n]    (1.4.13)
and if the FFT is employed, i.e., if the length of the sequence is finite and the sequence is repeated, then

Ex = Ts Σ_{n=0}^{N-1} x²[n]    (1.4.14)

where Ts is the sampling interval. If Xd(f) is the DFT of the sequence x[n], then the energy spectral density of x(t), the equivalent analog signal, is obtained by using Equation (1.3.23) and is given by

Gx(f) = Ts² |Xd(f)|²    (1.4.15)

The following MATLAB function power.m gives the power content of a signal vector. The power spectral density of a sequence is most easily derived by using the MATLAB function spectrum.m.

function p=power(x)
%POWER   p=power(x)
%        Returns the power in signal x.
p=(norm(x)^2)/length(x);

Illustrative Problem 1.8 [Power and power spectrum] The signal x(t) has a duration of 10 and is the sum of two sinusoidal signals of unit amplitude, one with frequency 47 Hz and the other with frequency 219 Hz:

x(t) = { cos(2π × 47t) + cos(2π × 219t),  0 ≤ t ≤ 10
       { 0,                               otherwise

This signal is sampled at a sampling rate of 1000 samples per second. Using MATLAB, find the power content and the power spectral density for this signal.

Using the MATLAB function power.m, the power content of the signal is found to be 1.0003 W. By using spectrum.m and specplot.m, we can plot the power spectral density of the signal, as shown in Figure 1.25. The twin peaks in the power spectrum correspond to the two frequencies present in the signal.

Figure 1.25: The power spectral density of the signal consisting of two sinusoidal signals at frequencies f1 = 47 and f2 = 219 Hz.
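The reported power content can be reproduced with the same sampling parameters in NumPy (an equivalent sketch of what power.m computes, not the book's code); each unit-amplitude sinusoid contributes a power of 1/2, so the estimate should be close to 1 W:

```python
import numpy as np

ts = 0.001
t = np.arange(0, 10 + ts/2, ts)          # 10001 samples at 1000 samples/s
x = np.cos(2*np.pi*47*t) + np.cos(2*np.pi*219*t)
p = np.linalg.norm(x)**2 / len(x)        # same estimate as power.m
assert abs(p - 1.0) < 5e-3
```

The small excess over 1 W comes from the finite record length, which leaves residual cross terms between the two sinusoids.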
The MATLAB script for this problem follows.

% MATLAB script for Illustrative Problem 1.8, Chapter 1.
ts=0.001;
fs=1/ts;
t=[0:ts:10];
x=cos(2*pi*47*t)+cos(2*pi*219*t);
p=power(x);
psd=spectrum(x,1024);
pause % Press a key to see the power in the signal
p
pause % Press a key to see the power spectrum
specplot(psd,fs)

1.5 Lowpass Equivalent of Bandpass Signals

A bandpass signal is a signal for which all frequency components are located in the neighborhood of a central frequency f0 (and, of course, -f0); i.e., for a bandpass signal X(f) = 0 for |f ± f0| > W, where W < f0. A lowpass signal is a signal for which the frequency components are located around the zero frequency, i.e., X(f) = 0 for |f| > W.

Corresponding to a bandpass signal x(t) we can define the analytic signal z(t), whose Fourier transform is given as

Z(f) = 2u_{-1}(f) X(f)    (1.5.1)

where u_{-1}(f) is the unit step function. In the time domain this relation is written as

z(t) = x(t) + j x̂(t)    (1.5.2)

where x̂(t) denotes the Hilbert transform of x(t), defined as x̂(t) = (1/πt) * x(t); in the frequency domain it is given by

X̂(f) = -j sgn(f) X(f)    (1.5.3)

We note that the Hilbert transform function in MATLAB, denoted by hilbert.m, generates the complex sequence z(t). The real part of z(t) is the original sequence, and its imaginary part is the Hilbert transform of the original sequence.

The lowpass equivalent of the signal x(t), denoted by x_l(t), is expressed in terms of z(t) as

x_l(t) = z(t) e^{-j2πf0t}    (1.5.4)

From this relation we have

x(t) = Re[x_l(t) e^{j2πf0t}]
x̂(t) = Im[x_l(t) e^{j2πf0t}]    (1.5.5)

In the frequency domain we have

X_l(f) = Z(f + f0) = 2u_{-1}(f + f0) X(f + f0)    (1.5.6)

and

X(f) = (1/2)[X_l(f - f0) + X_l*(-f - f0)]    (1.5.7)

The lowpass equivalent of a real bandpass signal is, in general, a complex signal.
Its real part, denoted by xc(t), is called the in-phase component of x(t), and its imaginary part, denoted by xs(t), is called the quadrature component of x(t); i.e.,

x_l(t) = xc(t) + j xs(t)    (1.5.8)

In terms of the in-phase and the quadrature components we have

x(t) = xc(t) cos(2πf0t) - xs(t) sin(2πf0t)
x̂(t) = xs(t) cos(2πf0t) + xc(t) sin(2πf0t)    (1.5.9)

If we express x_l(t) in polar coordinates, we have

x_l(t) = V(t) e^{jΘ(t)}    (1.5.10)

where V(t) and Θ(t) are called the envelope and the phase of the signal x(t). In terms of these two we have

x(t) = V(t) cos(2πf0t + Θ(t))    (1.5.11)

The envelope and the phase can be expressed as

V(t) = sqrt(xc²(t) + xs²(t)),    Θ(t) = arctan(xs(t)/xc(t))    (1.5.12)

or, equivalently,

V(t) = sqrt(x²(t) + x̂²(t)),    Θ(t) = arctan(x̂(t)/x(t)) - 2πf0t    (1.5.13)

It is obvious from the above relations that the envelope is independent of the choice of f0, whereas the phase depends on this choice. We have written some simple MATLAB files to generate the analytic signal, the lowpass representation of a signal, the in-phase and quadrature components, and the envelope and phase. These MATLAB functions are analytic.m, loweq.m, quadcomp.m, and env_phas.m, respectively. A listing of these functions is given below.

function z=analytic(x)
%ANALYTIC   z=analytic(x)
%           Returns the analytic signal corresponding to signal x.
z=hilbert(x);

function xl=loweq(x,ts,f0)
%LOWEQ   xl=loweq(x,ts,f0)
%        Returns the lowpass equivalent of the signal x.
%        f0 is the center frequency; ts is the sampling interval.
t=[0:ts:ts*(length(x)-1)];
z=hilbert(x);
xl=z.*exp(-j*2*pi*f0*t);

function [xc,xs]=quadcomp(x,ts,f0)
%QUADCOMP   [xc,xs]=quadcomp(x,ts,f0)
%           Returns the in-phase and quadrature components of the signal x.
%           f0 is the center frequency; ts is the sampling interval.
z=loweq(x,ts,f0);
xc=real(z);
xs=imag(z);
function [v,phi]=env_phas(x,ts,f0)
%ENV_PHAS   [v,phi]=env_phas(x,ts,f0)
%           v=env_phas(x,ts,f0)
%           Returns the envelope and the phase of the bandpass signal x.
%           f0 is the center frequency; ts is the sampling interval.
if nargout == 2
  z=loweq(x,ts,f0);
  phi=angle(z);
end
v=abs(hilbert(x));

Illustrative Problem 1.9 [Bandpass to lowpass transformation] The signal x(t) is given as

x(t) = sinc(100t) cos(2π × 200t)    (1.5.14)

1. Plot this signal and its magnitude spectrum.
2. With f0 = 200 Hz, find the lowpass equivalent and plot its magnitude spectrum. Plot the in-phase and the quadrature components and the envelope of this signal.
3. Repeat part 2 assuming f0 = 100 Hz.

Choosing the sampling interval to be ts = 0.001 s, we have a sampling frequency of fs = 1/ts = 1000 Hz. Choosing a desired frequency resolution of df = 0.5 Hz, we have the following.

1. Plots of the signal and its magnitude spectrum are given in Figure 1.26. Plots are generated by MATLAB.

2. Choosing f0 = 200 Hz, we find the lowpass equivalent to x(t) by using the loweq.m function. Then using fftseq.m we obtain its spectrum; we plot its magnitude spectrum in Figure 1.27. It is seen that the magnitude spectrum is an even function in this case, because we can write

x(t) = Re[sinc(100t) e^{j2π×200t}]    (1.5.15)

Figure 1.26: The signal x(t) and its magnitude spectrum.

Figure 1.27: The magnitude spectrum of the lowpass equivalent to x(t) in Illustrative Problem 1.9 when f0 = 200 Hz.

Comparing this to

x(t) = Re[x_l(t) e^{j2πf0t}]    (1.5.16)

we conclude that

x_l(t) = sinc(100t)    (1.5.17)

which means that the lowpass equivalent signal is a real signal in this case. This, in turn, means that xc(t) = x_l(t) and xs(t) = 0.
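This conclusion can be confirmed numerically by building the analytic signal directly from Equation (1.5.1), Z(f) = 2u_{-1}(f)X(f), using the FFT (which is essentially what MATLAB's hilbert.m does). In the NumPy sketch below the record length and grid are assumptions for illustration; the envelope |z(t)| is checked against |sinc(100t)| away from the edges of the record:

```python
import numpy as np

ts = 0.001
t = np.arange(-1, 1, ts)                          # 2000 samples (assumed record)
x = np.sinc(100*t) * np.cos(2*np.pi*200*t)
N = len(x)
Xf = np.fft.fft(x)
w = np.zeros(N)
w[0] = 1.0; w[1:N//2] = 2.0; w[N//2] = 1.0        # keep and double positive frequencies
z = np.fft.ifft(Xf * w)                           # analytic signal z = x + j x_hat
v = np.abs(z)                                     # envelope, independent of f0
mid = slice(800, 1200)                            # compare away from the record edges
assert np.max(np.abs(v[mid] - np.abs(np.sinc(100*t[mid])))) < 0.05
```

The real part of z recovers x exactly (up to FFT round-off), consistent with z(t) = x(t) + j x̂(t).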
Also, we conclude that

Θ(t) = { 0,  x_l(t) ≥ 0
       { π,  x_l(t) < 0
V(t) = |x_l(t)|    (1.5.18)

Plots of xc(t) and V(t) are given in Figure 1.28.

Figure 1.28: The in-phase component and the envelope of x(t).

Note that choosing this particular value of f0, which is the frequency with respect to which X(f) is symmetric, yields these results.

3. If f0 = 100 Hz, then the above results will not be true in general, and x_l(t) will be a complex signal. The magnitude spectrum of the lowpass equivalent signal is plotted in Figure 1.29. As is seen here, the magnitude spectrum lacks the symmetry present in the Fourier transform of real signals. Plots of the in-phase component of x(t) and its envelope are given in Figure 1.30.

Figure 1.29: The magnitude spectrum of the lowpass equivalent to x(t) in Illustrative Problem 1.9 when f0 = 100 Hz.

Figure 1.30: The in-phase component and the envelope of the signal x(t) when f0 = 100 Hz.

Problems

1.1 Consider the periodic signal of Illustrative Problem 1.1 shown in Figure 1.1. Assuming A = 1, T0 = 10, and t0 = 1, determine and plot the discrete spectrum of the signal. Compare your results with those obtained in Illustrative Problem 1.1 and justify the differences.

1.2 In Illustrative Problem 1.1, assuming A = 1, T0 = 4, and t0 = 2, determine and plot the discrete spectrum of the signal. Compare your results with those obtained in Illustrative Problem 1.1 and justify the differences.

1.3 Using the m-file fseries.m, determine the Fourier series coefficients of the signal shown in Figure 1.1 with A = 1, T0 = 4, and t0 = 1/2 for -24 ≤ n ≤ 24. Plot the magnitude spectrum of the signal. Now using Equation (1.2.5) determine the Fourier series coefficients and plot the magnitude spectrum. Why are the results not exactly the same?
1.4 Repeat Problem 1.3 with T0 = 4.6, and compare the results with those obtained by using Equation (1.2.5). Do you observe the same discrepancy between the two results here? Why?

1.5 Using the MATLAB script dis_spct.m, determine and plot the phase spectrum of the periodic signal x(t) with a period of T0 = 4.6 and described in the interval [-2.3, 2.3] by the relation x(t) = Λ(t). Plot the phase spectrum for -24 ≤ n ≤ 24. Now, analytically determine the Fourier series coefficients of the signal, and show that all coefficients are nonnegative real numbers. Does the phase spectrum you plotted earlier agree with this result? If not, explain why.

1.6 In Problem 1.5, define x(t) = Λ(t) in the interval [-1.3, 3.3]; the period is still T0 = 4.6. Determine and plot the magnitude and phase spectra using the MATLAB script dis_spct.m. Note that the signal is the same as the signal in Problem 1.5. Compare the magnitude and the phase spectra with those obtained in Problem 1.5. Which one shows a more noticeable difference, the magnitude or the phase spectrum? Why?

1.7 Repeat Illustrative Problem 1.2 with [a, b] = [-4, 4] and x(t) = cos(πt/8) for |t| < 4.

1.8 Repeat Illustrative Problem 1.2 with [a, b] = [-4, 4] and x(t) = sin(πt/8) for |t| < 4, and compare your results with those of Problem 1.7.

1.9 Numerically determine and plot the magnitude and the phase spectra of a signal x(t) with a period equal to 10^-6 seconds and defined as

x(t) = { -10^6 t + 0.5,  0 ≤ t < 5 × 10^-7
       { 0,              otherwise

in the interval |t| ≤ 5 × 10^-7.

1.10 A periodic signal x(t) with period T0 = 6 is defined by x(t) = Π(t/3) for |t| < 3. This signal passes through an LTI system with an impulse response given by

h(t) = { e^-t,  0 ≤ t < 4
       { 0,     otherwise

Numerically determine and plot the discrete spectrum of the output signal.

1.11 Repeat Problem 1.10 with x(t) = e^-2t for |t| < 3 and

h(t) = { 1,  0 ≤ t < 4
       { 0,  otherwise

1.12 Verify the convolution theorem of the Fourier transform for the signals x(t) = Π(t) and y(t) = Λ(t) numerically, once by determining the convolution directly and once by using the Fourier transforms of the two signals.

1.13 Plot the magnitude and the phase spectra of a signal given by

x(t) = { 1,    -2 ≤ t < -1
       { |t|,  |t| ≤ 1
       { 1,    1 < t ≤ 2
       { 0,    otherwise

1.14 Determine and plot the magnitude spectrum of an even signal x(t) which for positive values of t is given as

x(t) = { t + 1,   0 ≤ t < 1
       { 1,       1 ≤ t < 2
       { -t + 4,  2 ≤ t < 4
       { 0,       otherwise

Determine your result both analytically and numerically and compare the results.

1.15 The signal described in Problem 1.14 is passed through an LTI system with an impulse response given by

h(t) = { 1,  0 ≤ t < 2
       { 2,  2 ≤ t < 3
       { 0,  otherwise

Determine the magnitude and the phase spectra of the output signal.
1.11 Repeat Problem 1.10 with x ( f ) = e~2' for |f| < 3 and I, 0, 0 < f < 4 otherwise 1.12 Verify the convolution theorem of the Fourier transform for signals x(f) = n ( r ) and y{t) = A(i) numerically, once by determining the convolution directly and once by using the Fourier transforms of the two signals. 1.13 Plot the magnitude and the phase spectra of a signal given by 1, jc(0 = -2 < / < -I I'l < 1 1< t < 2 otherwise Irl. 1, 0, 1.14 Determine and plot the magnitude spectrum of an even signal * ( 0 which for positive values of t is given as t + 1. 1, *(0 = o <t <1 1< r < 2 2 < t <4 otherwise -t+4, 0, Determine your result both analytically and numerically and compare the results. 1.15 The signal described in Problem 1.14 is passed through an LTI system with an impulse response given by I I, 0 < t < 2 2, 2 < t < 3 otherwise 0, Determine the magnitude and the phase spectra of the output signal. c. 1.17 Repeat Problem 1. .18 Consider the signal x(r) = cos(2. Compare your results in theses two cases.m. design a lowpass Butterworth filter of order 4 with a cutoff frequency of 100 Hz and pass . As in Illustrative Problem 1. 0. 0 < i < 10 otherwise a.T X 219f). but this time design highpass Butterworth filters with the same orders and the same cutoff frequencies. Determine and plot the envelope of this signal.r(f) through this filler. assume this signal is sampled at a rate of 1000 samples/second. *(0 = 0. 1. determine the lowpass equivalent and the in-phase and the quadrature components of this signal. b.r x 47r) + COS{2JT X 219r). Once assuming fo — 47 and once assuming fo = 219. d.25. 0 < i < 10 otherwise is considered. Now design a Butterworth filter of order 8 with the same cutoff frequency. SIGNALS AND LINEAR SYSTEMS 1 . Determine the analytic signal corresponding to this signal.16 The signal cos(27T X 47R) + COS(2.8. Determine and sketch the output power spectrum and compare it with Figure 1. Using the MATLAB M-file butter. 
and determine the output of this filter and plot its power spectrum.44 CHAPTER I.16. Determine and plot the Hilbert transform of this signal. Plot your results and make the comparisons. Chapter 2 Random Processes 2. The third topic that we consider is the characterization of a stationary random process by its autocorrelation in the time domain and by its power spectrum in the frequency domain. Then. If A denotes such a random variable. we also consider the autocorrelation function and the power spectrum of a linearly filtered random process. 2. the number of outputs in the interval 0 < A < 1 is sufficiently large. We begin with the description of a method for generating random variables with a specified probability distribution function. Most computer software libraries include a uniform random number generator. we are able to study its effects through simulation of communication systems and to assess the performance of such systems in the presence of noise. We call the output of the random number generator a random variable. we consider Gaussian and Gauss-Markov processes and illustrate a method for generating samples of such processes. Since linear filters play a very important role in communication systems. its range is the interval 0 < A < 1. and as a consequence it is impossible to represent the continuum of numbers in the interval 0 < A < 1. By generating such noise on a computer. The final section of this chapter deals with the characteristics of lowpass and bandpass random processes. Such a random number generator generates a number between 0 and 1 with equal probability. However.1 Preview In this chapter we illustrate methods for generating random variables and samples of random processes. for all practical purposes.2 Generation of Random Variables Random number generators are often used in practice to simulate the effect of noiselike signals and other random phenomena that are encountered in the physical world. so that we 45 . 
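For readers following along outside MATLAB, the behavior of such a uniform generator is easy to reproduce; the short Python sketch below (an illustrative translation only, not one of the book's scripts) draws a large batch of uniform outputs and computes their empirical mean, which should be near the theoretical mean discussed next.

```python
import random

random.seed(3)
# outputs of a uniform random number generator on (0, 1)
A = [random.random() for _ in range(100_000)]
mean_A = sum(A) / len(A)
# the mean value of a uniform (0, 1) random variable is 1/2
```

Any library generator can be checked the same way: every output must fall in [0, 1), and the sample mean must settle near 1/2 as the number of draws grows.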
We know that the numerical output of a digital computer has limited precision, and as a consequence it is impossible to represent the continuum of numbers in the interval 0 <= A <= 1. However, we may assume that our computer represents each output by a large number of bits in either fixed point or floating point. Consequently, for all practical purposes, the number of outputs in the interval 0 <= A <= 1 is sufficiently large, so that we are justified in assuming that any value in the interval is a possible output from the generator.

The uniform probability density function for the random variable A, denoted as f(A), is shown in Figure 2.1(a). We note that the average value or mean value of A, denoted as mA, is mA = 1/2. The integral of the probability density function, which represents the area under f(A), is called the probability distribution function of the random variable A and is defined as

    F(A) = integral from -infinity to A of f(x) dx    (2.2.1)

For any random variable, this area must always be unity, which is the maximum value that can be achieved by a distribution function. Hence, for the uniform random variable A we have

    F(1) = integral from -infinity to 1 of f(x) dx = 1    (2.2.2)

and the range of F(A) is 0 <= F(A) <= 1 for 0 <= A <= 1. The probability distribution function is shown in Figure 2.1(b).

Figure 2.1: Probability density function f(A) and the probability distribution function F(A) of a uniformly distributed random variable A.

If we wish to generate uniformly distributed noise in an interval (b, b+1), it can be accomplished simply by using the output A of the random number generator and shifting it by an amount b. Thus a new random variable B can be defined as

    B = A + b    (2.2.3)

which now has a mean value mB = b + 1/2. For example, if b = -1/2, the random variable B is uniformly distributed in the interval (-1/2, 1/2), as shown in Figure 2.2(a). Its probability distribution function F(B) is shown in Figure 2.2(b).

Figure 2.2: Probability density function and the probability distribution function of a zero-mean uniformly distributed random variable.

A uniformly distributed random variable in the range (0, 1) can be used to generate random variables with other probability distribution functions. For example, suppose that we wish to generate a random variable C with probability distribution function F(C). Since the range of F(C) is the interval (0, 1), we begin by generating a uniformly distributed random variable A in the range (0, 1). If we set

    F(C) = A    (2.2.4)

then

    C = F^(-1)(A)    (2.2.5)

Thus we solve (2.2.4) for C, and the solution in (2.2.5) provides the value of C for which F(C) = A. By this means we obtain a new random variable C with probability distribution F(C). This inverse mapping from A to C is illustrated in Figure 2.3.

Figure 2.3: Inverse mapping from the uniformly distributed random variable A to the new random variable C.

Illustrative Problem 2.1 Generate a random variable C that has the linear probability density function shown in Figure 2.4(a), i.e.,

    f(C) = { C/2,   0 <= C <= 2
           { 0,     otherwise

Solution: This random variable has a probability distribution function

    F(C) = { 0,       C < 0
           { C^2/4,   0 <= C <= 2    (2.2.6)
           { 1,       C > 2

which is illustrated in Figure 2.4(b). We generate a uniformly distributed random variable A and set F(C) = A. Upon solving for C, we obtain

    C = 2 sqrt(A)    (2.2.7)

Figure 2.4: Linear probability density function and the corresponding probability distribution function.
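The inverse mapping of Illustrative Problem 2.1 is easy to try out numerically. The following Python sketch (a translation of the idea only; the book's own scripts are in MATLAB) draws uniform samples A and maps them through C = 2 sqrt(A). The sample mean should approach E[C] = integral from 0 to 2 of c (c/2) dc = 4/3.

```python
import math
import random

def linear_pdf_sample(rng=random):
    """One sample with pdf f(C) = C/2 on [0, 2], via C = F^(-1)(A) = 2*sqrt(A)."""
    a = rng.random()            # uniform random variable in (0, 1)
    return 2.0 * math.sqrt(a)   # inverse of F(C) = C^2 / 4

random.seed(0)
samples = [linear_pdf_sample() for _ in range(100_000)]
mean = sum(samples) / len(samples)
# mean should be near E[C] = 4/3, and every sample must lie in [0, 2]
```

Histogramming the samples reproduces the linear density of Figure 2.4(a), which is the usual sanity check for an inverse-mapping generator.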
Thus we generate a random variable C with probability distribution function F(C).

Noise encountered in physical systems is often characterized by the normal, or Gaussian, probability distribution, which is illustrated in Figure 2.5. The probability density function is given by

    f(C) = (1 / (sqrt(2 pi) sigma)) e^(-C^2 / 2 sigma^2),    -infinity < C < infinity    (2.2.8)

where sigma^2 is the variance of C, which is a measure of the spread of the probability density function f(C). The probability distribution function F(C) is the area under f(C) over the range (-infinity, C). Thus

    F(C) = integral from -infinity to C of f(x) dx    (2.2.9)

Figure 2.5: Gaussian probability density function and the corresponding probability distribution function.

In Illustrative Problem 2.1 the inverse mapping C = F^(-1)(A) was simple. In some cases it is not. Unfortunately, the integral in (2.2.9) cannot be expressed in terms of simple functions. Consequently, the inverse mapping is difficult to achieve. This problem arises in trying to generate random numbers that have a normal distribution function. A way has been found to circumvent this problem. From probability theory it is known that a Rayleigh distributed random variable R, with probability distribution function

    F(R) = { 0,                           R < 0
           { 1 - e^(-R^2 / 2 sigma^2),    R >= 0    (2.2.10)

is related to a pair of Gaussian random variables C and D through the transformation

    C = R cos Theta    (2.2.11)
    D = R sin Theta    (2.2.12)

where Theta is a uniformly distributed variable in the interval (0, 2 pi). The parameter sigma^2 is the variance of C and D. Since (2.2.10) is easily inverted, we have

    F(R) = 1 - e^(-R^2 / 2 sigma^2) = A    (2.2.13)

and hence

    R = sqrt(2 sigma^2 ln[1/(1 - A)])    (2.2.14)

where A is a uniformly distributed random variable in the interval (0, 1). Now, if we generate a second uniformly distributed random variable B and define

    Theta = 2 pi B    (2.2.15)

then from (2.2.11) and (2.2.12) we obtain two statistically independent Gaussian distributed random variables C and D. These random variables have a mean value of zero and a variance sigma^2. If a nonzero-mean Gaussian random variable is desired, then C and D can be translated by the addition of the mean value.

The method described above is often used in practice to generate Gaussian distributed random variables. The MATLAB script that implements the method above for generating Gaussian distributed random variables is given below.

function [gsrv1,gsrv2]=gngauss(m,sgma)
% [gsrv1,gsrv2] = gngauss(m,sgma)
% [gsrv1,gsrv2] = gngauss(sgma)
% [gsrv1,gsrv2] = gngauss
% GNGAUSS generates two independent Gaussian random variables with mean
% m and standard deviation sgma. If one of the input arguments is missing,
% it takes the mean as 0. If neither the mean nor the variance is given,
% it generates two standard Gaussian random variables.
if nargin == 0,
  m=0; sgma=1;
elseif nargin == 1,
  sgma=m; m=0;
end;
u=rand;                              % a uniform random variable in (0,1)
z=sgma*(sqrt(2*log(1/(1-u))));       % a Rayleigh distributed random variable
u=rand;                              % another uniform random variable in (0,1)
gsrv1=m+z*cos(2*pi*u);
gsrv2=m+z*sin(2*pi*u);
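The same Rayleigh-magnitude and uniform-phase construction of equations (2.2.11) through (2.2.15) ports directly to other languages. Below is a Python sketch; the function name and defaults mirror the MATLAB gngauss, but the port itself is our illustration, not taken from the book.

```python
import math
import random

def gngauss(m=0.0, sgma=1.0, rng=random):
    """Return two independent Gaussian samples with mean m and standard
    deviation sgma, built from a Rayleigh magnitude and a uniform phase."""
    u = rng.random()                                        # uniform in (0, 1)
    z = sgma * math.sqrt(2.0 * math.log(1.0 / (1.0 - u)))   # Rayleigh magnitude, eq. (2.2.14)
    u = rng.random()                                        # second uniform variable
    theta = 2.0 * math.pi * u                               # uniform phase, eq. (2.2.15)
    return m + z * math.cos(theta), m + z * math.sin(theta)

random.seed(1)
xs = [gngauss()[0] for _ in range(50_000)]
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
# for the default arguments, mean should be near 0 and var near 1
```

Checking the empirical mean and variance against the requested m and sgma^2 is a quick way to validate any reimplementation of this generator.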
2.3 Gaussian and Gauss-Markov Processes

Gaussian processes play an important role in communication systems. The fundamental reason for their importance is that thermal noise in electronic devices, which is produced by random movement of electrons due to thermal agitation, can be closely modeled by a Gaussian process. The reason for the Gaussian behavior of thermal noise is that the current introduced by movement of electrons in an electric circuit can be regarded as the sum of small currents of a very large number of sources, namely, individual electrons. It can be assumed that at least a majority of these sources behave independently and, therefore, the total current is the sum of a large number of independent and identically distributed (i.i.d.) random variables. By applying the central limit theorem, we conclude that this total current has a Gaussian distribution.

Apart from thermal noise, Gaussian processes provide rather good models for some information sources as well. Some interesting properties of Gaussian processes, which are given below, make these processes mathematically tractable and easy to deal with. We begin with a formal definition of a Gaussian process.

Definition: A random process X(t) is a Gaussian process if for all n and all (t1, t2, ..., tn), the random variables {X(ti), i = 1, ..., n} have a jointly Gaussian density function, which may be expressed as

    f(x) = (1 / ((2 pi)^(n/2) (det C)^(1/2))) exp[-(1/2)(x - m)^t C^(-1) (x - m)]    (2.3.1)

where the vector x = (x1, x2, ..., xn)^t denotes the n random variables xi = X(ti), m is the mean value vector, m = E(X), and C is the n x n covariance matrix of the random variables (x1, x2, ..., xn) with elements

    cij = E[(xi - mi)(xj - mj)]    (2.3.2)

The superscript t denotes the transpose of a vector or a matrix, and C^(-1) is the inverse of the covariance matrix C.

From the above definition it is seen that a complete statistical description of {X(ti), i = 1, ..., n} depends only on the mean vector m and the covariance matrix C. It is seen, in particular, that at any time instant t0 the random variable X(t0) is Gaussian, and at any two points t1, t2 the random variables (X(t1), X(t2)) are distributed according to a two-dimensional Gaussian random variable. Moreover, since a complete statistical description of the process depends only on m and C, we have the following property.

Property 1: For Gaussian processes, knowledge of the mean m and covariance C provides a complete statistical description of the process.

Another very important property of a Gaussian process is concerned with its characteristics when passed through a linear time-invariant system. This property may be stated as follows.

Property 2: If the Gaussian process X(t) is passed through a linear, time-invariant (LTI) system, the output of the system is also a Gaussian process. The effect of the system on X(t) is simply reflected by a change in the mean value and the covariance of X(t).

Illustrative Problem 2.2 [Generation of samples of a multivariate Gaussian process] Generate samples of a multivariate Gaussian random process X(t) having a specified mean value mx and a covariance Cx.

Solution: First, we generate a sequence of n statistically independent, zero-mean and unit-variance Gaussian random variables by using the method described in Section 2.2. Let us denote this sequence of n samples by the vector Y = (y1, y2, ..., yn)^t. Secondly, we factor the desired n x n covariance matrix Cx as

    Cx = Cx^(1/2) (Cx^(1/2))^t    (2.3.3)

Then, we define the linearly transformed (n x 1) vector X as

    X = Cx^(1/2) Y + mx    (2.3.4)

Thus the covariance of X is

    Cx = E[(X - mx)(X - mx)^t]
       = E[Cx^(1/2) Y Y^t (Cx^(1/2))^t]
       = Cx^(1/2) E(Y Y^t) (Cx^(1/2))^t
       = Cx^(1/2) (Cx^(1/2))^t    (2.3.5)

The most difficult step in this process is the factorization of the covariance matrix Cx. Let us demonstrate this procedure by means of an example that employs the bivariate Gaussian distribution. Suppose we begin with a pair of statistically independent Gaussian random variables y1 and y2, which have zero mean and unit variance. We wish to transform these into a pair of Gaussian random variables x1 and x2 with mean m = 0 and covariance matrix

    C = [ sigma1^2            rho sigma1 sigma2 ]
        [ rho sigma1 sigma2   sigma2^2          ]    (2.3.6)

where sigma1^2 and sigma2^2 are the variances of x1 and x2, respectively, and rho is the normalized covariance, defined as

    rho = E[(x1 - m1)(x2 - m2)] / (sigma1 sigma2)    (2.3.7)

For sigma1^2 = sigma2^2 = 1 and rho = 1/2, the covariance matrix C can be factored as C = C^(1/2) (C^(1/2))^t, where

    C^(1/2) = (1 / (2 sqrt(2))) [ sqrt(3)+1   sqrt(3)-1 ]
                                [ sqrt(3)-1   sqrt(3)+1 ]    (2.3.8)

Therefore,

    x1 = (1 / (2 sqrt(2))) [(sqrt(3)+1) y1 + (sqrt(3)-1) y2]
    x2 = (1 / (2 sqrt(2))) [(sqrt(3)-1) y1 + (sqrt(3)+1) y2]    (2.3.9)

The MATLAB scripts for this computation are given below. Figure 2.6 illustrates the joint pdf f(x1, x2) for this covariance matrix.

% MATLAB script for Illustrative Problem 2.2, Chapter 2.
echo on
mx=[0 0]';
Cx=[1 1/2; 1/2 1];
x=multi_gp(mx,Cx);
% Computation of the pdf of (x1,x2) follows
delta=0.3;
x1=-3:delta:3;
x2=-3:delta:3;
for i=1:length(x1),
  for j=1:length(x2),
    f(i,j)=(1/((2*pi)*det(Cx)^(1/2)))*exp((-1/2)*(([x1(i); x2(j)]-mx)'*inv(Cx)*([x1(i); x2(j)]-mx)));
  end;
end;
% plotting command for pdf follows
mesh(x1,x2,f);

function [x] = multi_gp(m,C)
% [x] = multi_gp(m,C)
% MULTI_GP generates a multivariate Gaussian random process with mean
% vector m (column vector) and covariance matrix C.
N=length(m);
for i=1:N,
  y(i)=gngauss;
end;
y=y.';
x=sqrtm(C)*y+m;

Figure 2.6: Joint probability density function of x1 and x2.

As indicated above, the most difficult step in the computation is determining C^(1/2). Given the desired covariance matrix, we may determine the eigenvalues {lambda_k, 1 <= k <= n} and the corresponding eigenvectors {v_k, 1 <= k <= n}. Then the covariance matrix C can be expressed as

    C = sum over k = 1 to n of lambda_k v_k v_k^t    (2.3.10)

and, since C = C^(1/2) (C^(1/2))^t, it follows that

    C^(1/2) = sum over k = 1 to n of sqrt(lambda_k) v_k v_k^t    (2.3.11)

Definition: A Markov process X(t) is a random process whose past has no influence on the future if its present is specified. That is, if tn > tn-1, then

    P[X(tn) <= xn | X(t), t <= tn-1] = P[X(tn) <= xn | X(tn-1)]    (2.3.12)

From this definition, it follows that if t1 < t2 < ... < tn, then

    P[X(tn) <= xn | X(tn-1), X(tn-2), ..., X(t1)] = P[X(tn) <= xn | X(tn-1)]    (2.3.13)

Definition: A Gauss-Markov process X(t) is a Markov process whose probability density function is Gaussian.

The simplest method for generating a Markov process is by means of the simple recursive formula

    Xn = rho Xn-1 + wn    (2.3.14)

where wn is a sequence of zero-mean i.i.d. (white) random variables and rho is a parameter that determines the degree of correlation between Xn and Xn-1; i.e.,

    E(Xn Xn-1) = rho E(Xn-1^2) = rho sigma_x^2    (2.3.15)

If the sequence {wn} is Gaussian, then the resulting process X(t) is a Gauss-Markov process.

Illustrative Problem 2.3 Generate a sequence of 1000 (equally spaced) samples of a Gauss-Markov process from the recursive relation

    Xn = 0.95 Xn-1 + wn,    n = 1, 2, ..., 1000    (2.3.16)

where X0 = 0 and {wn} is a sequence of zero-mean and unit-variance i.i.d. Gaussian random variables. Plot the sequence {Xn, 1 <= n <= 1000} as a function of the time index n and the autocorrelation

    Rx(m) = (1/(N-m)) sum over n = 1 to N-m of Xn Xn+m,    m = 0, 1, ..., 50    (2.3.17)

where N = 1000.

Solution: Figures 2.7 and 2.8 illustrate the sequence {Xn} and the autocorrelation function Rx(m). The MATLAB scripts for this computation are given below.
which is the Fourier transform of the autocorrelation function Rx(r) of the random process. the importance of white processes stems from the fact that thermal noise can be closely modeled as spectrally constant over a wide range of frequencies. i... (2. 38 x I 0 " 2 3 J/K). This power spectrum is shown in Figure 2.4.4. but the rate of convergence to 0 is very low. RANDOM PROCESSES a number of processes thai arc used to describe a variety of information sources are modeled as the output of LTI systems driven by a white process. S n ( f ) drops to 90% of its maximum at about / « 2 x 10 12 Hz. a white process may not be a meaningful physical process. however. T denotes the temperature in kelvins. then CC /"CC / (2. quantum mechanical analysis of the thermal noise shows ihat it has a powcr-spectral density given by *„</) = 2( hf/kT hf e _ j) (2.74 CHAPTER 2.9: Plotof S „ ( / ) in (2. therefore. and the value of this maximum is . no real physical process can have infinite power and. The spectrum goes to 0 as / goes to infinity. at room temperature {T = 300K).6 x lO^ ' 4 J \ s) and k is Boltzmartn's constant (equal to 1. We observe. which is beyond the frequencies employed in conventional . For instance. Figure 2. The above spectrum achieves its maximum at / = 0. that if S K ( f ) = C for all / .4) in which h denotes Planck's constant (equal to 6.3) -ac S A f ) d f = J-X Cdf = so that the total poweT is infinite.9.4. However. Obviously.4). in addition to being white. determine the power spectrum of the sequence {X„} by computing the discrete Fourier transform (DFT) of R x {m). the power-spectral density of thermal noise is usually given as Sfl(f ) = ^ and is sometimes referred to as the twosided power-spectral density. and the computation of the power spectrum S x { f ) is given below.m| n= XnXn+m. lation function tfc(r) x the autocorre- RAT) = r J-x SAf)el2nixdf = ~ 2 r J-vq e ^ d f = ^S(r) 2 (2.2. although not precisely white.6) Also. defined as j RAm) = N-m Y XnXn+m. 
it is necessary to average the sample autocorrelation over several realizations. the computation of the autocorrelation. We should note that the autocorrelation function and the power spectrum exhibit a significant variability. we have Rx(r) = 0. We will avoid this terminology throughout and simply use power spectrum or power-spectral density.5) where (5(r) is the unit impulse. therefore. If.-2 -M (2. can be modeled for all practical purposes as a white process with the power spectrum equaling The value kT is usually denoted by No. \m\ m = —1. if we sample a white process at two points /[ and (3 (/| h ) .10 and 2. the sampled random variables will be statistically independent Gaussian random variables.11 illustrate Rx(m) and S x ( f ) obtained by running this program using average autocorrelation over 10 realizations of the random process. 1 M nH N 1 = -—— V N . m=0. Consequently for all r ^ 0.7) The MATLAB script that implements the generation of the sequence {X„}. . Therefore.4 Generate a discrete-time sequence of N ~ 1000 i. Figures 2.e. which is efficiently computed by use of the fast Fourier transform (FFT) algorithm. uniformly distributed random numbers in the interval ( — 5.. the resulting random variables will be uncorrected. is defined as S A f ) = £ RAm)e~jlnfml(2M+U (2-4. From this we conclude that thermal noise.4. the random process is also Gaussian.4. For a white random process X ( r ) with power spectrum S ( f ) = Nq/2. emphasizing that this spectrum extends to both positive and negative frequencies.4. TheDFT. 7) and compute the autocorrelation of the sequence (X„}. i. ILLUSTRATIVE PROBLEM Illustrative Problem 2.i d. Power Spectrum of Random Processes and White Processes 59 communication systems. Let us determine its autocorrelation function. 
uniformly distributed random variables between -1/2 and 1/2 autocorrelation of the reuhzaiiim power spectrum of the realization sum of the autocorrelations sum of the sped rums autocorrelation spectrum % ensemble average % ensemble average m Figure 2.1) we have . X=rand(1.av=Sx.M).av/10.60 CHAPTER 2. % % % % % % % take the ensemble average over 10 realizations N n. Rx. end.est(X.av+Sx. Sx.M+1). From (2.av=zeros(1 . Sx=ffishift(abs(fft(Rx))).N)-1/2. Rx=Rx.4.d.av=Sx. A bandlimited random process X ( f ) has the power spectrum S ( f\ — X 2 ' l u/ l i — |/| > B (2.4.av=Rx_av/10. % plot ling comments follow Problem 4.8) 0.script for Illustrative echo on N=1000. M=50. Chapter 2.10: The autocorrelation function in Illustrative Problem 2. for j-1:10. Sx. RANDOM PROCESSES % MATLAB . Rx_av=Rx_av+Rx. Rx.4. Sx-av=zeros(1.M+1). 11: The power spectrum in Illustrative Problem 2. Note that we obtain only a coarse representation of the autocorrelation function Rx(r). *.4. Figure 2. because we sampled S x ( f ) only in the frequency range | / | < 8. If we keep A / fixed and increase the number of samples by including samples for \ f \ > B.2. with each sample normalized to unity. we represent S x ( f ) by N samples in the frequency range I / I 5 B. ^ « I M H k i m To perform the computation.4. we obtain intermediate values of Rx(r). . MATLAB may be used to compute Rx(r) from S x ( f ) and vice-versa.5 Compute the autocorrelation /? c ( r ) for the random process whose power spectrum is given by (2.12 illustrates RX(T).(r)= = f J-8 B 2 NQB / sin 2tt B V 2xBr (2. The fast Fourier transform (FFT) algorithm may be used for this computation. The frequency separation in this example is Af — 2B/N.9) Figure 2.13. Power Spectrum of Random Processes and White Processes 61 C O B •OS -0 4 -0 3 -0 2 -0 1 0 0 1 0. The result of computing the inverse FFT with N = 32 is illustrated in Figure 2.2 03 04 0 5 f Figure 2. of which N = 32 are unity.4.8). 
I L L U S T R A T I V E P R O B L E M Illustrative Problem 2.14 illustrates the result of computing the inverse FFT with Ni = 256 samples.4. RANDOM PROCESSES Figure 2.5 with 32 samples.62 CHAPTER 2. .13: Inverse FFT of the power spectrum of the bandlimited random process in Illustrative Problem 2. 5.1) It follows that the output of the linear filter is the random process (2.14: Inverse FFT of the power spectrum of the bandlimited random process in Illustrative Problem 2.5 with 256 samples.f dt (2.25 Linear Filtering of Random Processes 63 r 300 Figure 2.5 Linear Filtering of Random Processes Suppose that a stationary random process X(t) is passed through a linear time-invariant filter that is characterized in the time domain by its impulse response h(r)and in the frequency domain by its frequency response # ( / ) » / h(t)e-il!.5. 2.2) The mean value of Y(t) is . T)h(t + r ..4).4) In the frequency domain. 8 W I M The frequency response of the filter is easily shown to be H F '<°° 1 + j l n f = .5) This is easily shown by taking the Fourier transform of (2.5. The autocorrelation function of Y(t) is (2.. RANDOM PROCESSES m Y = E[K{f)J = f E[X(v)\h(t J —nc I h(t J —X I J—so r)jdr = mx z)dr = m( h(r)dr = m.3) Ry(r) = E[Y{x)Y(t + r)J O O rCO / / 3o E[X{z)X{a)]h(t -n> O O Jroc .r)h(t + r .5.5.6 Suppose that a white random process X(f) with power spectrum S x i f ) = 1 for all / excites a linear filter with impulse response Determine the power spectrum S y ( f ) of the filter output.(r .a ) / i ( i . —timMMWMMMM Illustrative Problem 2. H ( 0 ) where / / ( 0 ) is the frequency response H ( f ) of the filter evaluated at / = 0.5.a ) d x d a (2.a ) d r d a / -no J/ /?.64 CHAPTER 2. the power spectrum of the output process Y{t) is related to the power spectrum of the input process X{i) and the frequency response of the linear filter by the expression S A f ) = S x ( f ) \ H ( f ) \ - (2.Lf ( 257 ) -- . echo on dcita=Q.5. F. Linear Filtering of Random Processes 65 Hence. 
    mY = E[Y(t)] = integral of E[X(tau)] h(t - tau) d tau = mx integral of h(tau) d tau = mx H(0)    (2.5.3)

where H(0) is the frequency response H(f) of the filter evaluated at f = 0. The autocorrelation function of Y(t) is

    Ry(tau) = E[Y(t) Y(t + tau)]
            = double integral of Rx(tau + alpha - beta) h(alpha) h(beta) d alpha d beta    (2.5.4)

In the frequency domain, the power spectrum of the output process Y(t) is related to the power spectrum of the input process X(t) and the frequency response of the linear filter by the expression

    Sy(f) = Sx(f) |H(f)|^2    (2.5.5)

This is easily shown by taking the Fourier transform of (2.5.4).

Illustrative Problem 2.6 Suppose that a white random process X(t) with power spectrum Sx(f) = 1 for all f excites a linear filter with impulse response

    h(t) = { e^(-t),   t >= 0
           { 0,        t < 0    (2.5.6)

Determine the power spectrum Sy(f) of the filter output.

Solution: The frequency response of the filter is easily shown to be

    H(f) = 1 / (1 + j 2 pi f)    (2.5.7)

Hence,

    Sy(f) = |H(f)|^2 = 1 / (1 + (2 pi f)^2)    (2.5.8)

The graph of Sy(f) is illustrated in Figure 2.15. The MATLAB script for computing Sy(f) for a specified Sx(f) and H(f) is given below.

% MATLAB script for Illustrative Problem 2.6, Chapter 2.
echo on
delta=0.01;
F_min=-2;
F_max=2;
f=F_min:delta:F_max;
Sx=ones(1,length(f));            % sampled input spectrum
H=1./(1+(2*pi*f).^2);            % sampled |H(f)|^2
Sy=Sx.*H;                        % power spectrum of Y
% plotting commands follow

Figure 2.15: Plot of Sy(f) given by (2.5.8).

Illustrative Problem 2.7 Compute the autocorrelation function Ry(tau) corresponding to Sy(f) in Illustrative Problem 2.6 for the specified Sx(f) = 1.

Solution: In this case, we may use the inverse FFT algorithm on samples of Sy(f) given by (2.5.8). Figure 2.16 illustrates this computation with N = 256 frequency samples and a frequency separation delta f = 0.1. The MATLAB script for this computation is given below.

% MATLAB script for Illustrative Problem 2.7, Chapter 2.
echo on
N=256;                                          % number of samples
deltaf=0.1;                                     % frequency separation
f=[0:deltaf:(N/2)*deltaf,-(N/2-1)*deltaf:deltaf:-deltaf];   % swap the first half
Sy=1./(1+(2*pi*f).^2);                          % sampled spectrum
Ry=ifft(Sy);                                    % autocorrelation of Y
% plotting command follows
plot(fftshift(real(Ry)));

Figure 2.16: Plot of Ry(tau) of Illustrative Problem 2.7.

Let us now consider the equivalent discrete-time problem. Suppose that a stationary random process X(t) is sampled and the samples are passed through a discrete-time linear filter with impulse response h(n). The output of the linear filter is given by the convolution sum formula

    Y(n) = sum over k = 0 to infinity of h(k) X(n - k)    (2.5.9)

where X(n) = X(tn) are discrete-time values of the input random process and Y(n) is the output of the discrete-time filter. The mean value of the output process is

    mY = E[Y(n)] = sum over k of h(k) E[X(n - k)] = mx sum over k of h(k) = mx H(0)    (2.5.10)

where H(0) is the frequency response H(f) of the filter evaluated at f = 0 and

    H(f) = sum over n = 0 to infinity of h(n) e^(-j 2 pi f n)    (2.5.11)

The autocorrelation function of the output process is

    Ry(m) = E[Y(n) Y(n + m)]
          = sum over k sum over l of h(k) h(l) E[X(n - k) X(n + m - l)]
          = sum over k sum over l of h(k) h(l) Rx(m + k - l)    (2.5.12)

The corresponding expression in the frequency domain is

    Sy(f) = Sx(f) |H(f)|^2    (2.5.13)

where the power spectra are defined as

    Sx(f) = sum over m = -infinity to infinity of Rx(m) e^(-j 2 pi f m)    (2.5.14)

and

    Sy(f) = sum over m = -infinity to infinity of Ry(m) e^(-j 2 pi f m)    (2.5.15)
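Equations (2.5.11) and (2.5.13) are easy to exercise numerically by truncating an infinite impulse response. The Python sketch below (our illustration, not one of the book's scripts) takes h(n) = a^n with a = 0.95, evaluates the truncated sum for H(f), and compares |H(f)|^2 with the closed form 1/(1 + a^2 - 2a cos 2 pi f) obtained by summing the geometric series.

```python
import cmath
import math

a = 0.95

def H(f, terms=2000):
    """Truncated frequency response: sum of a^n e^(-j 2 pi f n), n = 0..terms-1."""
    z = a * cmath.exp(-2j * math.pi * f)
    return sum(z ** n for n in range(terms))

def closed_form_power(f):
    """|H(f)|^2 = 1 / (1 + a^2 - 2 a cos(2 pi f)) for the geometric series."""
    return 1.0 / (1.0 + a * a - 2.0 * a * math.cos(2.0 * math.pi * f))

# truncated sum and closed form agree at several frequencies
for f in (0.0, 0.1, 0.25, 0.5):
    assert abs(abs(H(f)) ** 2 - closed_form_power(f)) < 1e-6

# at f = 0 the closed form gives about 400 = 1/(1.9025 - 1.9), the spectral peak
```

With a white input of unit spectrum, this |H(f)|^2 is the output power spectrum Sy(f) of (2.5.13); the large peak at f = 0 is the lowpass character of the filter.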
Illustrative Problem 2.8 Suppose that a white random process with samples {X(n)} is passed through a linear filter with impulse response

    h(n) = { (0.95)^n,   n >= 0
           { 0,          n < 0

Determine the power spectrum of the output process {Y(n)}.

Solution: It is easily seen that

    H(f) = sum over n = 0 to infinity of (0.95 e^(-j 2 pi f))^n = 1 / (1 - 0.95 e^(-j 2 pi f))    (2.5.16)

and

    |H(f)|^2 = 1 / (1.9025 - 1.9 cos 2 pi f)    (2.5.17)

Therefore, the power spectrum of the output process is

    Sy(f) = |H(f)|^2 Sx(f) = 1 / (1.9025 - 1.9 cos 2 pi f)    (2.5.18)

where we assumed Sx(f) is normalized to unity. Note that Sy(f) is periodic with period 2 pi. Figure 2.17 illustrates Sy(f). The MATLAB script for this computation is given below.

% MATLAB script for Illustrative Problem 2.8, Chapter 2.
delta_w=2*pi/100;
w=-pi:delta_w:pi;                % one period of Sy
Sy=1./(1.9025-1.9*cos(w));
% plotting commands follow
plot(w,Sy);

Figure 2.17: Plot of Sy(f) of Illustrative Problem 2.8.

The autocorrelation of the output process {Y(n)} may be determined by taking the inverse FFT of Sy(f). The student will find it interesting to compare this autocorrelation with that obtained in Illustrative Problem 2.3.

2.6 Lowpass and Bandpass Processes

Just as in the case of deterministic signals, random signals can also be characterized as lowpass and bandpass random processes.

Definition: A random process is called lowpass if its power spectrum is large in the vicinity of f = 0 and small (approaching 0) at high frequencies. In other words, a lowpass random process has most of its power concentrated at low frequencies.

Definition: A lowpass random process X(t) is bandlimited if the power spectrum Sx(f) = 0 for |f| > B. The parameter B is called the bandwidth of the random process.

Illustrative Problem 2.9 Consider the problem of generating samples of a lowpass random process by passing a white noise sequence {Xn} through a lowpass filter. The input sequence is an i.i.d. sequence of uniformly distributed random variables on the interval (-1/2, 1/2). The lowpass filter has the impulse response

    h(n) = { (0.9)^n,   n >= 0
           { 0,         n < 0

and is characterized by the input-output recursive (difference) equation

    yn = 0.9 yn-1 + xn,    y-1 = 0

Compute the output sequence {yn} and determine the autocorrelation functions Rx(m) and Ry(m). Determine the power spectra Sx(f) and Sy(f) by computing the DFT of Rx(m) and Ry(m).

Solution: The MATLAB scripts for these computations are given below. Figures 2.18 and 2.19 illustrate the autocorrelation functions and the power spectra. We note that the plots of the autocorrelation functions and the power spectra are averages over 10 realizations of the random process.

% MATLAB script for Illustrative Problem 2.9, Chapter 2.
N=1000;                          % the maximum value of n
M=50;
Rxav=zeros(1,M+1);
Ryav=zeros(1,M+1);
Sxav=zeros(1,M+1);
Syav=zeros(1,M+1);
for j=1:10,                      % take the ensemble average over 10 realizations
  X=rand(1,N)-(1/2);             % generate a uniform number sequence on (-1/2,1/2)
  Y(1)=0;
  for n=2:N,
    Y(n)=0.9*Y(n-1)+X(n);        % note that Y(n) means y(n-1)
  end;
  Rx=Rx_est(X,M);                % autocorrelation of {Xn}
  Ry=Rx_est(Y,M);                % autocorrelation of {Yn}
  Sx=fftshift(abs(fft(Rx)));     % power spectrum of {Xn}
  Sy=fftshift(abs(fft(Ry)));     % power spectrum of {Yn}
  Rxav=Rxav+Rx;
  Ryav=Ryav+Ry;
  Sxav=Sxav+Sx;
  Syav=Syav+Sy;
end;
Rxav=Rxav/10;                    % ensemble averages
Ryav=Ryav/10;
Sxav=Sxav/10;
Syav=Syav/10;
% plotting commands follow
provides an important relationship among X(t), Xc(t), and Xs(t).

Theorem: If X(t) is a zero-mean stationary random process, the processes Xc(t) and Xs(t) are also zero-mean, jointly stationary processes. In fact, it can be easily proved (see [1]) that the autocorrelation functions of Xc(t) and Xs(t) are identical and may be expressed as

Rc(τ) = Rs(τ) = Rx(τ) cos 2πf0τ + R̂x(τ) sin 2πf0τ      (2.6.2)

where Rx(τ) is the autocorrelation function of the bandpass process X(t) and R̂x(τ) is its Hilbert transform, which is defined as

R̂x(τ) = (1/π) ∫_{−∞}^{∞} Rx(t)/(τ − t) dt      (2.6.3)

Also, the cross-correlation function of Xc(t) and Xs(t) is expressed as

Rcs(τ) = Rx(τ) sin 2πf0τ − R̂x(τ) cos 2πf0τ      (2.6.4)

Finally, the autocorrelation function of the bandpass process X(t) is expressed in terms of the autocorrelation function Rc(τ) and the cross-correlation function Rcs(τ) as

Rx(τ) = Rc(τ) cos 2πf0τ − Rcs(τ) sin 2πf0τ      (2.6.5)

On a digital computer, samples of the lowpass processes Xc(t) and Xs(t) are generated by filtering two independent white noise processes by two identical lowpass filters. Thus, we obtain the samples Xc(n) and Xs(n), corresponding to the sampled values of Xc(t) and Xs(t). Then Xc(n) modulates the sampled carrier cos 2πf0nT and Xs(n) modulates the quadrature carrier sin 2πf0nT, where T is the appropriate sampling interval.

Figure 2.20: Generation of a bandpass random process.

Illustrative Problem 2.10 [Generation of samples of a bandpass random process] Generate samples of a bandpass random process by first generating samples of two statistically independent random processes Xc(t) and Xs(t) and using these to modulate the quadrature carriers cos 2πf0t and sin 2πf0t, as shown in Figure 2.20.

Solution: For illustrative purposes, we have selected the lowpass filter to have the transfer function

H(z) = 1/(1 − 0.9z⁻¹)

Also, we selected T = 1 and fc = 1000/π (the carrier frequency used in the script below). The resulting power spectrum of the bandpass process is shown in Figure 2.21.
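The quadrature construction of Figure 2.20 takes only a few lines in any language; here is a hypothetical numpy sketch (the AR(1) pole 0.9 matches H(z) above, while the carrier f0 = 0.2 cycles/sample and the sequence length are illustrative choices, not the book's values):

```python
import numpy as np

def lowpass_ar1(x, a=0.9):
    """One-pole lowpass filter y[n] = a*y[n-1] + x[n], i.e. H(z) = 1/(1 - a z^-1)."""
    y = np.zeros_like(x)
    for i in range(len(x)):
        y[i] = a * y[i - 1] + x[i] if i else x[i]
    return y

rng = np.random.default_rng(1)
n = np.arange(20_000)
f0 = 0.2                      # carrier frequency in cycles/sample (T = 1 assumed)

# Two independent white Gaussian sequences -> lowpass in-phase/quadrature parts
xc = lowpass_ar1(rng.standard_normal(n.size))
xs = lowpass_ar1(rng.standard_normal(n.size))

# X(n) = Xc(n) cos(2*pi*f0*n) - Xs(n) sin(2*pi*f0*n), as in (2.6.1)
x = xc * np.cos(2 * np.pi * f0 * n) - xs * np.sin(2 * np.pi * f0 * n)
```

Averaging |X(f)|² over segments of x would show the spectrum concentrated around ±f0, as in Figure 2.21; the time-averaged power of x matches that of the lowpass components, consistent with (2.6.5) at τ = 0.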
The MATLAB script for these computations is given below.

% MATLAB script for Illustrative Problem 2.10, Chapter 2.
N=1000;                          % number of samples
for i=1:2:N,
  [X1(i) X1(i+1)]=gngauss;       % standard Gaussian input noise processes
  [X2(i) X2(i+1)]=gngauss;
end;
A=[1 -0.9];                      % lowpass filter parameters
B=1;
Xc=filter(B,A,X1);
Xs=filter(B,A,X2);
fc=1000/pi;                      % carrier frequency
for i=1:N,
  band_pass_process(i)=Xc(i)*cos(2*pi*fc*i)-Xs(i)*sin(2*pi*fc*i);
end;
% T=1 is assumed
% determine the autocorrelation and the spectrum of the bandpass process
M=50;
bpp_autocorr=Rx_est(band_pass_process,M);
bpp_spectrum=fftshift(abs(fft(bpp_autocorr)));
% plotting commands follow

Figure 2.21: The power spectrum of the bandpass process in Illustrative Problem 2.10.

Problems

2.1 Generate a set of 1000 uniform random numbers in the interval [0, 1] using the MATLAB function rand(1,N). Plot the histogram and the probability distribution function for the sequence. The histogram may be determined by quantizing the interval [0, 1] into 10 equal-width subintervals and counting the numbers in each subinterval.

2.2 Generate a set of 1000 uniform random numbers in the interval [−2, 2] by using the MATLAB function rand(1,N). Plot the histogram and the probability distribution function for the sequence. Compare these results with the results obtained in Problem 2.1.

2.3 Generate a set of 1000 uniform random numbers in the interval [−5, 5] using the MATLAB function rand(1,N). Plot the histogram and the probability distribution function for the sequence.

2.4 Generate a set of 1000 random numbers having the linear probability density function

f(x) = { x/2,  0 ≤ x ≤ 2
       { 0,    otherwise

Plot the histogram and the probability distribution function.

2.5 Generate a set of 1000 Gaussian random numbers having zero mean and unit variance using the method described in Section 2.2. Plot the histogram and the probability distribution function for the sequence. In determining the histogram, the range of the random numbers may be subdivided into subintervals of width σ²/5, beginning with the first interval covering the range −σ²/10 < x < σ²/10, where σ² is the variance.

2.6 Generate a set of 1000 Gaussian random numbers having zero mean and unit variance by using the MATLAB function randn(1,N). Plot the histogram and the probability distribution function for the sequence.

2.7 Generate 1000 pairs of Gaussian random numbers (x1, x2) that have mean vector

m = E[x] = [1  2]ᵀ

and covariance matrix

C = [2  1
     1  2]

a. Determine the means of the samples (x_{1,n}, x_{2,n}), n = 1, 2, ..., 1000, defined as

m̂_i = (1/1000) Σ_{n=1}^{1000} x_{i,n},   i = 1, 2

Also, determine their variances,

σ̂_i² = (1/1000) Σ_{n=1}^{1000} (x_{i,n} − m̂_i)²,   i = 1, 2

and their covariance,

ĉ_{12} = (1/1000) Σ_{n=1}^{1000} (x_{1,n} − m̂_1)(x_{2,n} − m̂_2)

b. Compare the values obtained from the samples with the theoretical values.

2.8 Generate a sequence of 1000 samples of a Gauss-Markov process described by the recursive relation

X_n = ρX_{n−1} + W_n,   n = 1, 2, ..., 1000

where X_0 = 0, ρ = 0.9, and {W_n} is a sequence of zero-mean and unit-variance i.i.d. Gaussian random variables.

2.9 Repeat Illustrative Problem 2.6 with a linear filter that has impulse response

h(t) = { e^{−3t},  t ≥ 0
       { 0,        t < 0

2.10 Repeat Illustrative Problem 2.4 with an i.i.d. sequence of zero-mean, unit-variance Gaussian random variables.

2.11 Repeat Illustrative Problem 2.5 when the power spectrum of the bandlimited random process is

Sx(f) = { 10,  |f| ≤ B
        { 0,   otherwise
2.12 Determine numerically the autocorrelation function of the random process at the output of the linear filter in Problem 2.9.

2.13 Repeat Illustrative Problem 2.8 when

h(n) = { (0.5)ⁿ,  n ≥ 0
       { 0,       n < 0

Compare this result for Sy(f) with that obtained in Illustrative Problem 2.8.

2.14 Generate an i.i.d. sequence {w_n} of N = 1000 uniformly distributed random numbers in the interval [−1/2, 1/2]. This sequence is passed through a linear filter with impulse response

h(n) = { (0.95)ⁿ,  n ≥ 0
       { 0,        n < 0

The recursive equation that describes the output of this filter as a function of the input is

y_n = 0.95y_{n−1} + x_n,   n ≥ 1,   y_{−1} = 0

Compute the autocorrelation functions Rx(m) and Ry(m) of the sequences {x_n} and {y_n} and the corresponding power spectra Sx(f) and Sy(f).

2.15 Generate two i.i.d. sequences {w_{cn}} and {w_{sn}} of N = 1000 uniformly distributed random numbers in the interval [−1/2, 1/2]. Each of these sequences is passed through a linear filter with impulse response

h(n) = { (1/2)ⁿ,  n ≥ 0
       { 0,       n < 0

whose input-output characteristic is given by the recursive relation

x_n = (1/2)x_{n−1} + w_n,   n ≥ 1,   x_0 = 0

Thus, we obtain two sequences, {x_{cn}} and {x_{sn}}. The output sequence {x_{cn}} modulates the carrier cos(π/2)n, and the output sequence {x_{sn}} modulates the quadrature carrier sin(π/2)n. The bandpass signal is formed by combining the modulated components as in (2.6.1). Compute and plot the autocorrelation functions Rc(m) and Rs(m) for |m| ≤ 10 for the sequences {x_{cn}} and {x_{sn}}, respectively. Compute the autocorrelation function Rx(m) of the bandpass signal for |m| ≤ 10. Use the DFT (or the FFT algorithm) to compute the power spectra Sc(f), Ss(f), and Sx(f). Plot these power spectra and comment on the results.

Chapter 3

Analog Modulation

3.1 Preview

In this chapter we will study the performance of various analog modulation-demodulation schemes, both in the presence and in the absence of additive noise.
Systems studied in this chapter include amplitude-modulation (AM) schemes, such as DSB-AM, SSB-AM, and conventional AM, and angle-modulation schemes, such as frequency and phase modulation. Each member of the class of analog modulation systems is characterized by five basic properties:

1. Time-domain representation of the modulated signal
2. Frequency-domain representation of the modulated signal
3. Bandwidth of the modulated signal
4. Power content of the modulated signal
5. Signal-to-noise ratio (SNR) after demodulation

These properties are obviously not independent of each other. There exists a close relationship between time- and frequency-domain representations of signals expressed through the Fourier transform relation. Also, the bandwidth of a signal is defined in terms of its frequency characteristics. Due to the fundamental difference between amplitude- and angle-modulation schemes, these schemes are treated separately and in different sections. We begin this chapter with the study of the simplest modulation scheme, amplitude modulation.

3.2 Amplitude Modulation (AM)

Amplitude modulation (AM), which is frequently referred to as linear modulation, is the family of modulation schemes in which the amplitude of a sinusoidal carrier is changed
as a function of the modulating signal. The dependence between the modulating signal and the amplitude of the modulated carrier can be very simple, as, for example, in the DSB-AM case, or much more complex, as in SSB-AM or VSB-AM. This class of modulation schemes consists of DSB-AM (double-sideband amplitude modulation), conventional amplitude modulation, SSB-AM (single-sideband amplitude modulation), and VSB-AM (vestigial-sideband amplitude modulation). These systems are widely used in broadcasting (AM radio and TV video broadcasting), point-to-point communication (SSB), and multiplexing applications (for example, transmission of many telephone channels over microwave links). Amplitude-modulation systems are usually characterized by a relatively low bandwidth requirement and power inefficiency in comparison to the angle-modulation schemes. The bandwidth requirement for AM systems varies between W and 2W, where W denotes the bandwidth of the message signal. For SSB-AM the bandwidth is W, for DSB-AM and conventional AM the bandwidth is 2W, and for VSB-AM the bandwidth is between W and 2W.

3.2.1 DSB-AM

In DSB-AM, the amplitude of the modulated signal is proportional to the message signal. This means that the time-domain representation of the modulated signal is given by

u(t) = Ac m(t) cos(2πfct)      (3.2.1)

where

c(t) = Ac cos(2πfct)      (3.2.2)

is the carrier and m(t) is the message signal. The frequency-domain representation of the DSB-AM signal is obtained by taking the Fourier transform of u(t) and results in

U(f) = (Ac/2) M(f − fc) + (Ac/2) M(f + fc)      (3.2.3)

where M(f) is the Fourier transform of m(t). Obviously, this type of modulation results in a shift of ±fc, and a scaling of Ac/2, in the spectrum of the message signal. The transmission bandwidth, denoted by BT, is twice the bandwidth of the message signal:

BT = 2W      (3.2.4)

A typical message spectrum and the spectrum of the corresponding DSB-AM modulated signal are shown in Figure 3.1.

Figure 3.1: Spectrum of DSB-AM modulated signal.
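The shift-and-scale behavior of (3.2.3) is easy to verify numerically. Below is a small Python/numpy check, offered as a sketch rather than the book's MATLAB code; the single-tone message and all parameter values are illustrative:

```python
import numpy as np

fs = 8000                        # sampling frequency, Hz
t = np.arange(0, 1, 1 / fs)      # 1 s of signal
fc, fm = 250.0, 50.0             # carrier and message-tone frequencies

m = np.cos(2 * np.pi * fm * t)       # message, so W = 50 Hz
u = m * np.cos(2 * np.pi * fc * t)   # DSB-AM: u(t) = m(t) cos(2*pi*fc*t), Ac = 1

# The spectrum shows sidebands at fc - fm and fc + fm, so BT = 2W = 100 Hz,
# and the power is Pm/2 = 0.25, consistent with (3.2.4) and (3.2.5).
U = np.abs(np.fft.rfft(u))
f = np.fft.rfftfreq(u.size, 1 / fs)
sidebands = np.sort(f[np.argsort(U)[-2:]])
```

For the tone message the two spectral lines land at 200 Hz and 300 Hz, i.e. at fc ± fm, which is exactly the ±fc shift of M(f) predicted by (3.2.3).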
The power content of the modulated signal is given by

Pu = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} u²(t) dt
   = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} Ac² m²(t) cos²(2πfct) dt
   = lim_{T→∞} (Ac²/T) ∫_{−T/2}^{T/2} [ m²(t)/2 + (m²(t)/2) cos(4πfct) ] dt
   = (Ac²/2) Pm      (3.2.6)

where Pm is the power content of the message signal. Equation (3.2.6) follows from the fact that m(t) is a lowpass signal with frequency content much lower than 2fc, the frequency content of cos(4πfct); therefore, the integral

lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} (m²(t)/2) cos(4πfct) dt      (3.2.7)

goes to zero as T → ∞. Finally, the SNR for a DSB-AM system is equal to the baseband SNR; i.e., it is given by

(S/N)_o = PR/(N0W)      (3.2.8)

where PR is the received power (the power in the modulated signal at the receiver), N0/2 is the noise power spectral density (assuming white noise), and W is the message bandwidth.

Illustrative Problem 3.1 [DSB-AM modulation] The message signal m(t) is defined as

m(t) = { 1,   0 ≤ t < t0/3
       { −2,  t0/3 ≤ t < 2t0/3      (3.2.9)
       { 0,   otherwise

This message DSB-AM modulates the carrier c(t) = cos 2πfct, and the resulting modulated signal is denoted by u(t). It is assumed that t0 = 0.15 s and fc = 250 Hz.

1. Obtain the expression for u(t).
2. Derive the spectra of m(t) and u(t).
3. Assuming that the message signal is periodic with period T0 = t0, determine the power in the modulated signal.
4. If noise is added to the modulated signal in part 3 such that the resulting SNR is 10 dB, find the noise power.

Solution:

1. m(t) can be written as

m(t) = Π((t − t0/6)/(t0/3)) − 2Π((t − t0/2)/(t0/3))      (3.2.10)

Therefore, with t0 = 0.15 s and fc = 250 Hz,

u(t) = [Π((t − 0.025)/0.05) − 2Π((t − 0.075)/0.05)] cos(500πt)

2. Using the standard Fourier transform relation F[Π(t)] = sinc(f) together with the shifting and the scaling theorems of the Fourier transform, and substituting t0 = 0.15 s, we obtain

M(f) = 0.05 sinc(0.05f) e^{−j0.05πf} (1 − 2e^{−j0.1πf})      (3.2.11)
For the modulated signal u(t) we have

U(f) = 0.025 sinc(0.05(f − 250)) e^{−j0.05π(f−250)} (1 − 2e^{−j0.1π(f−250)})
     + 0.025 sinc(0.05(f + 250)) e^{−j0.05π(f+250)} (1 − 2e^{−j0.1π(f+250)})

Plots of the magnitude of the spectra of the message and the modulated signals are shown in Figure 3.2.

Figure 3.2: Magnitude spectra of the message and the modulated signals in Illustrative Problem 3.1.

3. The power in the message signal is

Pm = (1/t0) ∫_0^{t0} m²(t) dt = (1/t0)[t0/3 + 4t0/3] = 5/3 ≈ 1.666

The power in the modulated signal is given by

Pu = Pm/2 ≈ 0.833

4. Here

10 log10(Pu/Pn) = 10

which results in Pn = Pu/10 ≈ 0.0833.

The MATLAB script for the above problem follows.

% dsb1.m
% Matlab demonstration script for DSB-AM modulation. The message signal
% is +1 for 0 < t < t0/3, -2 for t0/3 < t < 2t0/3, and zero otherwise.
echo on
t0=.15;                                 % signal duration
ts=0.001;                               % sampling interval
fc=250;                                 % carrier frequency
snr=20;                                 % SNR in dB (logarithmic)
fs=1/ts;                                % sampling frequency
df=0.3;                                 % desired freq. resolution
t=[0:ts:t0];                            % time vector
snr_lin=10^(snr/10);                    % linear SNR
m=[ones(1,t0/(3*ts)),-2*ones(1,t0/(3*ts)),zeros(1,t0/(3*ts)+1)];  % message signal
c=cos(2*pi*fc.*t);                      % carrier signal
u=m.*c;                                 % modulated signal
[M,m,df1]=fftseq(m,ts,df);              % Fourier transform
M=M/fs;                                 % scaling
[U,u,df1]=fftseq(u,ts,df);              % Fourier transform
U=U/fs;                                 % scaling
[C,c,df1]=fftseq(c,ts,df);              % Fourier transform
f=[0:df1:df1*(length(m)-1)]-fs/2;       % freq. vector
signal_power=power(u(1:length(t)));     % power in modulated signal
noise_power=signal_power/snr_lin;       % compute noise power
noise_std=sqrt(noise_power);            % compute noise standard deviation
noise=noise_std*randn(1,length(u));     % generate noise
r=u+noise;                              % add noise to the modulated signal
[R,r,df1]=fftseq(r,ts,df);              % spectrum of the signal+noise
R=R/fs;                                 % scaling
pause  % Press a key to show the modulated signal power
signal_power
pause  % Press any key to see a plot of the message
clf
subplot(2,2,1)
plot(t,m(1:length(t)))
xlabel('Time')
title('The message signal')
pause  % Press any key to see a plot of the carrier
subplot(2,2,2)
plot(t,c(1:length(t)))
xlabel('Time')
title('The carrier')
pause  % Press any key to see a plot of the modulated signal
subplot(2,2,3)
plot(t,u(1:length(t)))
xlabel('Time')
title('The modulated signal')
pause  % Press any key to see plots of the magnitude of the message and the
       % modulated signal in the frequency domain
subplot(2,1,1)
plot(f,abs(fftshift(M)))
xlabel('Frequency')
title('Spectrum of the message signal')
subplot(2,1,2)
plot(f,abs(fftshift(U)))
title('Spectrum of the modulated signal')
xlabel('Frequency')
pause  % Press a key to see a noise sample
subplot(2,1,1)
plot(t,noise(1:length(t)))
title('noise sample')
xlabel('Time')
pause  % Press a key to see the modulated signal and noise
subplot(2,1,2)
plot(t,r(1:length(t)))
title('Signal and noise')
xlabel('Time')
pause  % Press a key to see the modulated signal and noise in freq. domain
subplot(2,1,1)
plot(f,abs(fftshift(U)))
title('Signal spectrum')
xlabel('Frequency')
subplot(2,1,2)
plot(f,abs(fftshift(R)))
title('Signal and noise spectrum')
xlabel('Frequency')

Illustrative Problem 3.2 [DSB modulation for an almost bandlimited signal] The message signal m(t) is given by

m(t) = { sinc(100t),  |t| ≤ t0      (3.2.12)
       { 0,           otherwise

where t0 = 0.1. This message modulates the carrier c(t) = cos(2πfct), where fc = 250 Hz.

1. Determine the modulated signal u(t).
2. Determine the spectra of m(t) and u(t).
3. If the message signal is periodic with period T0 = 0.2 s, determine the power in the modulated signal.
4. If a Gaussian noise is added to the modulated signal such that the resulting SNR is 10 dB, find the noise power.

Solution:

1. We have

u(t) = m(t)c(t) = sinc(100t) Π(5t) cos(500πt)      (3.2.13)

2. A plot of the spectra of m(t) and u(t) is shown in Figure 3.3. As the figure shows, the message signal is an almost bandlimited signal with a bandwidth of 50 Hz.

3. The power in the message signal is given by

Pm = (1/T0) ∫_{−t0}^{t0} sinc²(100t) dt

The integral can be computed using MATLAB's quad8.m m-file, which results in Pm = 0.0495 and, hence,

Pu = Pm/2 = 0.0247

4. Here

10 log10(Pu/Pn) = 10  ⟹  Pn = 0.1Pu = 0.00247

The MATLAB script for the above problem follows.

% dsb2.m
% Matlab demonstration script for DSB-AM modulation. The message signal
% is m(t) = sinc(100t).
echo on
t0=.2;                                  % signal duration
ts=0.001;                               % sampling interval
fc=250;                                 % carrier frequency
snr=20;                                 % SNR in dB (logarithmic)
fs=1/ts;                                % sampling frequency
df=0.3;                                 % required freq. resolution
t=[-t0/2:ts:t0/2];                      % time vector
snr_lin=10^(snr/10);                    % linear SNR
m=sinc(100*t);                          % the message signal
c=cos(2*pi*fc.*t);                      % the carrier signal
u=m.*c;                                 % the DSB-AM modulated signal
[M,m,df1]=fftseq(m,ts,df);              % Fourier transform
M=M/fs;                                 % scaling
[U,u,df1]=fftseq(u,ts,df);              % Fourier transform
U=U/fs;                                 % scaling
f=[0:df1:df1*(length(m)-1)]-fs/2;       % frequency vector
signal_power=power(u(1:length(t)));     % compute modulated signal power
noise_power=signal_power/snr_lin;       % compute noise power
noise_std=sqrt(noise_power);            % compute noise standard deviation
noise=noise_std*randn(1,length(u));     % generate noise sequence
r=u+noise;                              % add noise to the modulated signal
[R,r,df1]=fftseq(r,ts,df);              % Fourier transform
R=R/fs;                                 % scaling
pause  % Press a key to show the modulated signal power
signal_power
pause  % Press any key to see a plot of the message
clf
subplot(2,2,1)
plot(t,m(1:length(t)))
xlabel('Time')
title('The message signal')
pause  % Press any key to see a plot of the carrier
subplot(2,2,2)
plot(t,c(1:length(t)))
xlabel('Time')
title('The carrier')
pause  % Press any key to see a plot of the modulated signal
subplot(2,2,3)
plot(t,u(1:length(t)))
xlabel('Time')
title('The modulated signal')
pause  % Press any key to see plots of the magnitude of the message and the
       % modulated signal in the frequency domain
subplot(2,1,1)
plot(f,abs(fftshift(M)))
xlabel('Frequency')
title('Spectrum of the message signal')
subplot(2,1,2)
plot(f,abs(fftshift(U)))
title('Spectrum of the modulated signal')
xlabel('Frequency')
pause  % Press a key to see a noise sample
subplot(2,1,1)
plot(t,noise(1:length(t)))
title('noise sample')
xlabel('Time')
pause  % Press a key to see the modulated signal and noise
subplot(2,1,2)
plot(t,r(1:length(t)))
title('Modulated signal and noise')
xlabel('Time')
pause  % Press a key to see the modulated signal and noise in freq. domain
subplot(2,1,1)
plot(f,abs(fftshift(U)))
title('Modulated signal spectrum')
xlabel('Frequency')
subplot(2,1,2)
plot(f,abs(fftshift(R)))
title('Modulated signal noise spectrum')
xlabel('Frequency')

Figure 3.3: Spectra of the message and the modulated signals in Illustrative Problem 3.2.
which is a positive constant between 0 and 1.16) Typical frequency-domain plots of the message and the corresponding conventional AM signal are shown in Figure 3. The modulated signal it normalized to have half the message power.2 Conventional AM In many respects conventional AM is quite similar to DSB-AM. the message signal starts at 0 t=[0:lengih(m)-l ]*ts.14) and U ( f ) = y ( i ( / .2.15) The net effect of scaling the message signal and adding a constant to it is that the term [1 + a m „ ( t ) ] is always positive.fcl lakes signal m sampled at is and carrier I re if /. where m„(/) is the normalized message signal {i.. Amplitude Modu la lion (AM) 89 What happens if the duration of the message signal fo changes. of course. The bandwidth.e.dsb jnodfm. the only difference is that in conventional AM. u=m. Note the existence of the sinusoidal component at the frequency fL in [ / { / ) .2. This means that a (usually substantial) fraction of the transmitted power is in the signal carrier that does not really serve the transmission of information.2. Thus we have «(0 = A [ 1 +am„(0]cos(2. 3. |m„(i)l < I). This makes the demodulation of these signals much easier by employing envelope detectors.is. and a is the index of modulation.mod.fc) + <$(/ + fc) + aM„(f + A)] (3. m(i) is substituted with (1 + a % ( 0 ] . we always have rj < 0. We see that the SNR is reduced by a factor equal to rj compared to a DSB-AM system. This reduction of performance is a direct consequence of the fact that a significant part of the total power is in the carrier (the deltas in the spectrum). which is the power in the message-bearing part of the modulated signal.^ a P m „.5. . is given by +a2Pm„] Pu = ^ [ l (3. which does not carry any information and is filtered out at the receiver. the value of T ) is around 0. This is the power that is really used to transmit the message.4: Spectrum of conventional AM. In practice.2.f .17) A^ A^ 7 which comprises two parts.2. 
The power content of the modulated signal, assuming that the message signal is a zero-mean signal, is given by

Pu = (Ac²/2)[1 + a²P_{mn}]      (3.2.17)

which comprises two parts: Ac²/2, which denotes the power in the carrier, and (Ac²/2)a²P_{mn}, which is the power in the message-bearing part of the modulated signal. This is the power that is really used to transmit the message. The ratio of the power that is used to transmit the message to the total power in the modulated signal is called the modulation efficiency and is defined by

η = a²P_{mn}/(1 + a²P_{mn})      (3.2.18)

Since |m_n(t)| ≤ 1 and a ≤ 1, we always have η < 0.5. In practice, the value of η is around 0.1. The signal-to-noise ratio is given by

(S/N)_o = η PR/(N0W)      (3.2.19)

where η is the modulation efficiency. We see that the SNR is reduced by a factor equal to η compared to a DSB-AM system. This reduction of performance is a direct consequence of the fact that a significant part of the total power is in the carrier (the deltas in the spectrum), which does not carry any information and is filtered out at the receiver.

Illustrative Problem 3.3 [Conventional AM] The message signal

m(t) = { 1,   0 ≤ t < t0/3
       { −2,  t0/3 ≤ t < 2t0/3
       { 0,   otherwise

modulates the carrier c(t) = cos(2πfct) using a conventional AM scheme. It is assumed that fc = 250 Hz and t0 = 0.15 s; the modulation index is a = 0.85.

1. Derive an expression for the modulated signal.
2. Determine the spectra of the message and the modulated signals.
3. If the message signal is periodic with a period equal to t0, determine the power in the modulated signal and the modulation efficiency.
4. If a noise signal is added to the message signal such that the SNR at the output of the demodulator is 10 dB, determine the power in the noise.

Solution:

1. First note that max|m(t)| = 2; therefore, m_n(t) = m(t)/2. From this we have

u(t) = [1 + 0.425 Π((t − 0.025)/0.05) − 0.85 Π((t − 0.075)/0.05)] cos(500πt)      (3.2.20)

A plot of the message and the modulated signals is shown in Figure 3.5.

Figure 3.5: The message and the modulated signals in Illustrative Problem 3.3.
2. For the message signal we have

M(f) = 0.05 sinc(0.05f) e^{−j0.05πf} (1 − 2e^{−j0.1πf})

and for the modulated signal

U(f) = 0.5[δ(f − 250) + δ(f + 250)]
     + 0.010625 sinc(0.05(f − 250)) e^{−j0.05π(f−250)} (1 − 2e^{−j0.1π(f−250)})
     + 0.010625 sinc(0.05(f + 250)) e^{−j0.05π(f+250)} (1 − 2e^{−j0.1π(f+250)})

Plots of the spectra of the message and the modulated signal are shown in Figure 3.6. Note that the scales on the two spectra plots are different. The presence of the two delta functions in the spectrum of the modulated signal is apparent.

Figure 3.6: Spectra of the message and the modulated signals in Illustrative Problem 3.3.

3. The power in the message signal can be obtained as

Pm = (1/0.15)[ ∫_0^{0.05} dt + 4 ∫_{0.05}^{0.1} dt ] = 1.667

The power in the normalized message signal P_{mn} is given by

P_{mn} = Pm/4 = 1.667/4 = 0.4167

The power in the modulated signal is given by (E denotes the expected value)

Pu = (1/2) E[1 + a m_n(t)]² = (1/2)[1 + 2a E[m_n] + a²P_{mn}] = (1.0177)/2 = 0.5088

and the modulation efficiency is

η = a²P_{mn}/(1 + a²P_{mn}) = (0.85² × 0.4167)/(1 + 0.85² × 0.4167) = 0.3010/1.3010 = 0.2314

4. In this case

10 log10(η PR/Pn) = 10

Substituting PR = Pu = 0.5088 in the above equation yields

Pn = η Pu/10 = 0.0118

In finding the power in the modulated signal, in this problem we could not use the relation Pu = (Ac²/2)[1 + a²P_{mn}] because in this problem m(t) is not a zero-mean signal. The MATLAB script for this problem is given below.

% am.m
% Matlab demonstration script for conventional AM modulation. The message
% signal is +1 for 0 < t < t0/3, -2 for t0/3 < t < 2t0/3, and zero otherwise.
echo on
t0=.15;                                 % signal duration
ts=0.001;                               % sampling interval
fc=250;                                 % carrier frequency
snr=10;                                 % SNR in dB (logarithmic)
a=0.85;                                 % modulation index
fs=1/ts;                                % sampling frequency
t=[0:ts:t0];                            % time vector
df=0.2;                                 % required frequency resolution
snr_lin=10^(snr/10);                    % SNR
m=[ones(1,t0/(3*ts)),-2*ones(1,t0/(3*ts)),zeros(1,t0/(3*ts)+1)];  % message signal
c=cos(2*pi*fc.*t);                      % carrier signal
m_n=m/max(abs(m));                      % normalized message signal
[M,m,df1]=fftseq(m,ts,df);              % Fourier transform
M=M/fs;                                 % scaling
u=(1+a*m_n).*c;                         % modulated signal
[U,u,df1]=fftseq(u,ts,df);              % Fourier transform
U=U/fs;                                 % scaling
f=[0:df1:df1*(length(m)-1)]-fs/2;       % frequency vector
signal_power=power(u(1:length(t)));     % power in modulated signal
pmn=power(m(1:length(t)))/(max(abs(m)))^2;  % power in normalized message
eta=(a^2*pmn)/(1+a^2*pmn);              % modulation efficiency
noise_power=eta*signal_power/snr_lin;   % noise power
noise_std=sqrt(noise_power);            % noise standard deviation
noise=noise_std*randn(1,length(u));     % generate noise
r=u+noise;                              % add noise to the modulated signal
[R,r,df1]=fftseq(r,ts,df);              % Fourier transform
R=R/fs;                                 % scaling
pause  % Press a key to show the modulated signal power
signal_power
pause  % Press a key to show the modulation efficiency
eta
pause  % Press any key to see a plot of the message
subplot(2,2,1)
plot(t,m(1:length(t)))
axis([0 0.15 -2.1 2.1])
xlabel('Time')
title('The message signal')
pause  % Press any key to see a plot of the carrier
subplot(2,2,2)
plot(t,c(1:length(t)))
axis([0 0.15 -2.1 2.1])
xlabel('Time')
title('The carrier')
pause  % Press any key to see a plot of the modulated signal
subplot(2,2,3)
plot(t,u(1:length(t)))
axis([0 0.15 -2.1 2.1])
xlabel('Time')
title('The modulated signal')
pause  % Press any key to see plots of the magnitude of the message and the
       % modulated signal in the frequency domain
subplot(2,1,1)
plot(f,abs(fftshift(M)))
xlabel('Frequency')
title('Spectrum of the message signal')
subplot(2,1,2)
plot(f,abs(fftshift(U)))
title('Spectrum of the modulated signal')
xlabel('Frequency')
pause  % Press a key to see a noise sample
subplot(2,1,1)
plot(t,noise(1:length(t)))
title('noise sample')
xlabel('Time')
pause  % Press a key to see the modulated signal and noise
subplot(2,1,2)
plot(t,r(1:length(t)))
title('Signal and noise')
xlabel('Time')
pause  % Press a key to see the modulated signal and noise in freq. domain
subplot(2,1,1)
plot(f,abs(fftshift(U)))
title('Signal spectrum')
xlabel('Frequency')
subplot(2,1,2)
plot(f,abs(fftshift(R)))
title('Signal and noise spectrum')
xlabel('Frequency')

The MATLAB m-file am_mod.m given below is a general conventional AM modulator.

function u=am_mod(a,m,ts,fc)
%            u = am_mod(a,m,ts,fc)
%AM_MOD      takes signal m sampled at ts and carrier freq. fc
%            as input and returns the AM modulated signal.
%            "a" is the modulation index and ts << 1/2fc.
t=[0:length(m)-1]*ts;
c=cos(2*pi*fc.*t);
m_n=m/max(abs(m));
u=(1+a*m_n).*c;

3.2.3 SSB-AM

SSB-AM is derived from DSB-AM by eliminating one of the sidebands. Therefore, it occupies half the bandwidth of DSB-AM. Depending on the sideband that remains, either the upper or the lower sideband,
there exist two types of SSB-AM: USSB-AM and LSSB-AM. The time representation for these signals is given by

u(t) = (Ac/2) m(t) cos(2πfct) ∓ (Ac/2) m̂(t) sin(2πfct)      (3.2.21)

where the minus sign corresponds to USSB-AM and the plus sign corresponds to LSSB-AM. The signal denoted by m̂(t) is the Hilbert transform of m(t), defined by

m̂(t) = m(t) * (1/πt) = (1/π) ∫_{−∞}^{∞} m(τ)/(t − τ) dτ

or, in the frequency domain, by M̂(f) = −j sgn(f) M(f). In other words, the Hilbert transform of a signal represents a π/2 phase shift in all signal components. In the frequency domain, we have

U_USSB(f) = { (Ac/4)[M(f − fc) + M(f + fc)],  |f| ≥ fc      (3.2.22)
            { 0,                               otherwise

and

U_LSSB(f) = { (Ac/4)[M(f − fc) + M(f + fc)],  |f| ≤ fc      (3.2.23)
            { 0,                               otherwise

Typical plots of the spectra of a message signal and the corresponding USSB-AM modulated signal are shown in Figure 3.7.

Figure 3.7: Spectra of the message and the USSB-AM signal.

The bandwidth of the SSB signal is half the bandwidth of DSB and conventional AM and so is equal to the bandwidth of the message signal; i.e.,

BT = W      (3.2.24)

The power in the SSB signal is given by

Pu = (Ac²/4) Pm      (3.2.25)

Note that the power is half of the power in the corresponding DSB-AM signal, because one of the sidebands has been removed. On the other hand, since the modulated signal has half the bandwidth of the corresponding DSB-AM signal, the noise power at the front end of the receiver is also half of that of a comparable DSB-AM signal, and therefore the SNR in both systems is the same.

Illustrative Problem 3.4 [Single-sideband example] The message signal

m(t) = { 1,   0 ≤ t < t0/3
       { −2,  t0/3 ≤ t < 2t0/3
       { 0,   otherwise

modulates the carrier c(t) = cos(2πfct) using an LSSB-AM scheme. It is assumed that t0 = 0.15 s and fc = 250 Hz.

1. Plot the Hilbert transform of the message signal and the modulated signal u(t).
2. Find the spectrum of the modulated signal.
3. Assuming the message signal is periodic with period t0, determine the power in the modulated signal.
4. If a noise is added to the modulated signal such that the SNR after demodulation is 10 dB, determine the power in the noise.

Solution: The Hilbert transform of the message signal can be computed using the Hilbert transform m-file of MATLAB, i.e., hilbert.m. It should be noted, however, that this function returns a complex sequence whose real part is the original signal and whose imaginary part is the desired Hilbert transform. Therefore, the Hilbert transform of the sequence m is obtained by using the command imag(hilbert(m)).
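The imag(hilbert(m)) trick has a direct analog outside MATLAB: the analytic signal can be built with an FFT, and its imaginary part is the Hilbert transform. The Python sketch below is illustrative (hypothetical tone message and parameters); hilbert_transform mirrors the FFT construction that hilbert.m uses, and the check confirms that for the LSSB choice (plus sign in (3.2.21)) only the lower sideband at fc − fm survives:

```python
import numpy as np

def hilbert_transform(x):
    """Hilbert transform via the analytic signal: zero the negative
    frequencies, double the positive ones, take the imaginary part."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.imag(np.fft.ifft(np.fft.fft(x) * h))

fs = 8000
t = np.arange(0, 1, 1 / fs)
fc, fm = 250.0, 50.0
m = np.cos(2 * np.pi * fm * t)
mh = hilbert_transform(m)                      # for a cosine, mh is the sine

# LSSB: u(t) = m(t) cos(2*pi*fc*t) + mh(t) sin(2*pi*fc*t);
# the products combine into a single line at fc - fm = 200 Hz.
u = m * np.cos(2 * np.pi * fc * t) + mh * np.sin(2 * np.pi * fc * t)

U = np.abs(np.fft.rfft(u))
f = np.fft.rfftfreq(u.size, 1 / fs)
peak = f[np.argmax(U)]
```

Flipping the sign in front of mh would instead cancel the lower sideband and leave the line at fc + fm, the USSB case.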
% lssb.m
% Matlab demonstration script for LSSB-AM modulation. The message signal
% is +1 for 0 < t < t0/3, -2 for t0/3 < t < 2t0/3, and zero otherwise.
echo on
t0=.15;                                 % signal duration
ts=1/1500;                              % sampling interval
fc=250;                                 % carrier frequency
snr=10;                                 % SNR in dB (logarithmic)
fs=1/ts;                                % sampling frequency
df=0.25;                                % desired frequency resolution
t=[0:ts:t0];                            % time vector
snr_lin=10^(snr/10);                    % linear SNR
% the message vector
m=[ones(1,t0/(3*ts)),-2*ones(1,t0/(3*ts)),zeros(1,t0/(3*ts)+1)];
c=cos(2*pi*fc.*t);                      % carrier vector
udsb=m.*c;                              % DSB modulated signal
[UDSB,udsb,df1]=fftseq(udsb,ts,df);     % Fourier transform
UDSB=UDSB/fs;                           % scaling
f=[0:df1:df1*(length(udsb)-1)]-fs/2;    % frequency vector
n2=ceil(fc/df1);                        % location of carrier in freq. vector
% remove the upper sideband from DSB
UDSB(n2:length(UDSB)-n2)=zeros(size(UDSB(n2:length(UDSB)-n2)));
ULSSB=UDSB;                             % generate LSSB-AM spectrum
[M,m,df1]=fftseq(m,ts,df);              % Fourier transform
M=M/fs;                                 % scaling
u=real(ifft(ULSSB))*fs;                 % generate LSSB signal from spectrum
signal_power=power(udsb(1:length(t)))/2;  % compute signal power
noise_power=signal_power/snr_lin;       % compute noise power
noise_std=sqrt(noise_power);            % compute noise standard deviation
noise=noise_std*randn(1,length(u));     % generate noise vector
r=u+noise;                              % add the signal to noise
[R,r,df1]=fftseq(r,ts,df);              % Fourier transform
R=R/fs;                                 % scaling
pause % Press a key to show the modulated signal power
signal_power
pause % Press any key to see a plot of the message signal
clf
subplot(2,1,1)
plot(t,m(1:length(t)))
axis([0 0.15 -2.1 2.1])
xlabel('Time')
title('The message signal')
pause % Press any key to see a plot of the carrier
subplot(2,1,2)
plot(t,c(1:length(t)))
xlabel('Time')
title('The carrier')
pause % Press any key to see a plot of the modulated signal and its spectrum
clf
subplot(2,1,1)
plot([0:ts:ts*(length(u)-1)/8],u(1:length(u)/8))
xlabel('Time')
title('The LSSB-AM modulated signal')
subplot(2,1,2)
plot(f,abs(fftshift(ULSSB)))
xlabel('Frequency')
title('Spectrum of the LSSB-AM modulated signal')
pause % Press any key to see the spectra of the message and the modulated signal
clf
subplot(2,1,1)
plot(f,abs(fftshift(M)))
xlabel('Frequency')
title('Spectrum of the message signal')
subplot(2,1,2)
plot(f,abs(fftshift(ULSSB)))
xlabel('Frequency')
title('Spectrum of the LSSB-AM modulated signal')
pause % Press any key to see a noise sample
subplot(2,1,1)
plot(t,noise(1:length(t)))
title('A noise sample')
xlabel('Time')
pause % Press a key to see the modulated signal and noise
subplot(2,1,2)
plot(t,r(1:length(t)))
title('Modulated signal and noise')
xlabel('Time')
pause % Press a key to see the modulated signal and noise in freq. domain
subplot(2,1,1)
plot(f,abs(fftshift(ULSSB)))
title('Modulated signal spectrum')
xlabel('Frequency')
subplot(2,1,2)
plot(f,abs(fftshift(R)))
title('Modulated signal noise spectrum')
xlabel('Frequency')

The m-files ussb_mod.m and lssb_mod.m given below modulate the message signal given in vector m using USSB and LSSB modulation schemes, respectively.

function u=ussb_mod(m,ts,fc)
%USSB_MOD  takes signal m sampled at ts and carrier
%          freq. fc as input and returns the USSB modulated
%          signal. ts << 1/2fc.
t=[0:length(m)-1]*ts;
u=m.*cos(2*pi*fc*t)-imag(hilbert(m)).*sin(2*pi*fc*t);

function u=lssb_mod(m,ts,fc)
%LSSB_MOD  takes signal m sampled at ts and carrier
%          freq. fc as input and returns the LSSB modulated
%          signal. ts << 1/2fc.
t=[0:length(m)-1]*ts;
u=m.*cos(2*pi*fc*t)+imag(hilbert(m)).*sin(2*pi*fc*t);
3.3 Demodulation of AM Signals

Demodulation is the process of extracting the message signal from the modulated signal. The demodulation process depends on the type of modulation employed. For DSB-AM and SSB-AM the demodulation method is coherent demodulation, which requires the existence of a signal with the same frequency and phase as the carrier at the receiver. For conventional AM, envelope detectors are used for demodulation. In this case precise knowledge of the frequency and the phase of the carrier at the receiver is not crucial, so the demodulation process is much easier. Coherent demodulation for DSB-AM and SSB-AM consists of multiplying (mixing) the modulated signal by a sinusoid with the same frequency and phase as the carrier and then passing the product through a lowpass filter. The oscillator that generates the required sinusoid at the receiver is called the local oscillator.

3.3.1 DSB-AM Demodulation

In the DSB case the modulated signal is given by $A_c m(t)\cos(2\pi f_c t)$, which when multiplied by $\cos(2\pi f_c t)$ (or mixed with $\cos(2\pi f_c t)$) results in

  $y(t) = A_c m(t)\cos(2\pi f_c t)\cos(2\pi f_c t) = \frac{A_c}{2}m(t) + \frac{A_c}{2}m(t)\cos(4\pi f_c t)$   (3.3.1)

where y(t) denotes the mixer output, and its Fourier transform is given by

  $Y(f) = \frac{A_c}{2}M(f) + \frac{A_c}{4}M(f-2f_c) + \frac{A_c}{4}M(f+2f_c)$   (3.3.2)

As can be seen, the mixer output has a lowpass component of $\frac{A_c}{2}M(f)$ and high-frequency components in the neighborhood of $\pm 2f_c$. By passing y(t) through a lowpass filter with bandwidth W, the high-frequency components will be filtered out and the lowpass component, $\frac{A_c}{2}m(t)$, which is proportional to the message signal, will be demodulated. This process is shown in Figure 3.9.

Figure 3.9: Demodulation of DSB-AM signals.
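The mixing-plus-lowpass operation of Figure 3.9 can be sketched in a few lines of Python/NumPy (an illustration with assumed parameters, using an ideal FFT-domain brick-wall filter in place of a designed lowpass filter):

```python
import numpy as np

fs = 8000.0                                  # sampling rate (Hz), assumed
t = np.arange(0, 1.0, 1.0 / fs)
fc, fm = 1000.0, 40.0                        # carrier and message tone, assumed
m = np.cos(2 * np.pi * fm * t)               # message
u = m * np.cos(2 * np.pi * fc * t)           # DSB-AM signal (Ac = 1)

y = u * np.cos(2 * np.pi * fc * t)           # mixer output: m/2 plus terms around 2*fc

# ideal lowpass filter with 150 Hz cutoff, applied in the frequency domain
Y = np.fft.fft(y)
f = np.fft.fftfreq(len(y), d=1.0 / fs)
Y[np.abs(f) > 150.0] = 0.0
dem = 2.0 * np.real(np.fft.ifft(Y))          # gain of 2 undoes the 1/2 from mixing

err = np.max(np.abs(dem - m))                # recovery error
```

Because the mixer places copies of the message at baseband and at $\pm 2f_c$, discarding everything above the cutoff leaves $m(t)/2$; the factor of 2 restores the original amplitude.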
Illustrative Problem 3.5 [DSB-AM demodulation] The message signal m(t) is defined as

  $m(t) = \begin{cases}1, & 0 \le t < t_0/3\\ -2, & t_0/3 \le t < 2t_0/3\\ 0, & \text{otherwise}\end{cases}$

This message DSB-AM modulates the carrier $c(t) = \cos(2\pi f_c t)$, and the resulting modulated signal is denoted by u(t). It is assumed that t0 = 0.15 s and fc = 250 Hz.

1. Obtain the expression for u(t).
2. Derive the spectra of m(t) and u(t).
3. Demodulate the modulated signal u(t) and recover m(t). Plot the results in the time and frequency domains.

The first two parts of this problem are the same as the first two parts of Illustrative Problem 3.2, and we repeat only those results here; namely,

  $u(t) = \left[\Pi\!\left(\frac{t-0.025}{0.05}\right) - 2\,\Pi\!\left(\frac{t-0.075}{0.05}\right)\right]\cos(2\pi f_c t)$

  $\mathcal{F}[m(t)] = 0.05\,e^{-j0.05\pi f}\,\mathrm{sinc}(0.05f)\left(1 - 2e^{-j0.1\pi f}\right)$

and

  $U(f) = 0.025\,e^{-j0.05\pi(f-250)}\,\mathrm{sinc}(0.05(f-250))\left(1 - 2e^{-j0.1\pi(f-250)}\right) + 0.025\,e^{-j0.05\pi(f+250)}\,\mathrm{sinc}(0.05(f+250))\left(1 - 2e^{-j0.1\pi(f+250)}\right)$

To demodulate, we multiply u(t) by $\cos(2\pi f_c t)$ to obtain the mixer output

  $y(t) = u(t)\cos(2\pi f_c t) = \frac{1}{2}\left[\Pi\!\left(\frac{t-0.025}{0.05}\right) - 2\,\Pi\!\left(\frac{t-0.075}{0.05}\right)\right]\left(1 + \cos(4\pi f_c t)\right)$

whose Fourier transform is given by

  $Y(f) = 0.025\,e^{-j0.05\pi f}\,\mathrm{sinc}(0.05f)\left(1 - 2e^{-j0.1\pi f}\right) + 0.0125\,e^{-j0.05\pi(f-500)}\,\mathrm{sinc}(0.05(f-500))\left(1 - 2e^{-j0.1\pi(f-500)}\right) + 0.0125\,e^{-j0.05\pi(f+500)}\,\mathrm{sinc}(0.05(f+500))\left(1 - 2e^{-j0.1\pi(f+500)}\right)$
where the first term corresponds to the message signal and the last two terms correspond to the high-frequency components at twice the carrier frequency. As shown above, the spectrum of the mixer output has a lowpass component that is quite similar to the spectrum of the message signal, except for a factor of 1/2, and a bandpass component located at $\pm 2f_c$ (in this case, $\pm 500$ Hz). Using a lowpass filter we can simply separate the lowpass component from the bandpass component. A plot of the magnitudes of U(f) and Y(f) is shown in Figure 3.10.

In order to recover the message signal m(t), we pass y(t) through a lowpass filter with a bandwidth of 150 Hz. The choice of the bandwidth of the filter is more or less arbitrary here because the message signal is not strictly bandlimited. For a strictly bandlimited message signal, the appropriate choice for the bandwidth of the lowpass filter would be W, the bandwidth of the message signal. Therefore, the ideal lowpass filter employed here has the characteristic

  $H(f) = \begin{cases}1, & |f| \le 150\\ 0, & \text{otherwise}\end{cases}$

A comparison of the spectra of m(t) and the demodulator output is shown in Figure 3.11, and a comparison in the time domain is shown in Figure 3.12. We see that filtering yields the original message signal (up to a proportionality constant).

Figure 3.12: Message and demodulator output in Illustrative Problem 3.5.

The MATLAB script for this problem follows.

% dsb_dem.m
% Matlab demonstration script for DSB-AM demodulation. The message signal
% is +1 for 0 < t < t0/3, -2 for t0/3 < t < 2t0/3, and zero otherwise.
echo on
t0=.15;                                 % signal duration
ts=1/1500;                              % sampling interval
fc=250;                                 % carrier frequency
fs=1/ts;                                % sampling frequency
t=[0:ts:t0];                            % time vector
df=0.25;                                % desired frequency resolution
% message signal
m=[ones(1,t0/(3*ts)),-2*ones(1,t0/(3*ts)),zeros(1,t0/(3*ts)+1)];
c=cos(2*pi*fc.*t);                      % carrier signal
u=m.*c;                                 % modulated signal
y=u.*c;                                 % mixing
[M,m,df1]=fftseq(m,ts,df);              % Fourier transform
M=M/fs;                                 % scaling
[U,u,df1]=fftseq(u,ts,df);              % Fourier transform
U=U/fs;                                 % scaling
[Y,y,df1]=fftseq(y,ts,df);              % Fourier transform
Y=Y/fs;                                 % scaling
f_cutoff=150;                           % cutoff freq. of the filter
n_cutoff=floor(150/df1);                % design the filter
f=[0:df1:df1*(length(y)-1)]-fs/2;
H=zeros(size(f));
H(1:n_cutoff)=2*ones(1,n_cutoff);
H(length(f)-n_cutoff+1:length(f))=2*ones(1,n_cutoff);
DEM=H.*Y;                               % spectrum of the filter output
dem=real(ifft(DEM))*fs;                 % filter output
pause % Press a key to see the effect of mixing
clf
subplot(3,1,1)
plot(f,fftshift(abs(M)))
title('Spectrum of the Message Signal')
xlabel('Frequency')
subplot(3,1,2)
plot(f,fftshift(abs(U)))
title('Spectrum of the Modulated Signal')
xlabel('Frequency')
subplot(3,1,3)
plot(f,fftshift(abs(Y)))
title('Spectrum of the Mixer Output')
xlabel('Frequency')
pause % Press a key to see the effect of filtering on the mixer output
clf
subplot(3,1,1)
plot(f,fftshift(abs(Y)))
title('Spectrum of the Mixer Output')
xlabel('Frequency')
subplot(3,1,2)
plot(f,fftshift(abs(H)))
title('Lowpass Filter Characteristics')
xlabel('Frequency')
subplot(3,1,3)
plot(f,fftshift(abs(DEM)))
title('Spectrum of the Demodulator Output')
xlabel('Frequency')
pause % Press a key to compare the spectra of the message and the received signal
clf
subplot(2,1,1)
plot(f,fftshift(abs(M)))
title('Spectrum of the Message Signal')
xlabel('Frequency')
subplot(2,1,2)
plot(f,fftshift(abs(DEM)))
title('Spectrum of the Demodulator Output')
xlabel('Frequency')
pause % Press a key to see the message and the demodulator output signals
subplot(2,1,1)
plot(t,m(1:length(t)))
title('The Message Signal')
xlabel('Time')
subplot(2,1,2)
plot(t,dem(1:length(t)))
title('The Demodulator Output')
xlabel('Time')

Illustrative Problem 3.6 [Effect of phase error on DSB-AM demodulation] In the demodulation of DSB-AM signals we assumed that the phase of the local oscillator is equal to the phase of the carrier. If that is not the case — i.e., if there exists a phase shift $\phi$ between the local oscillator and the carrier — how would the demodulation process change?

In this case we have $u(t) = A_c m(t)\cos(2\pi f_c t)$, and the local oscillator generates a sinusoid given by $\cos(2\pi f_c t + \phi)$. Mixing these two signals gives

  $y(t) = A_c m(t)\cos(2\pi f_c t)\times\cos(2\pi f_c t + \phi)$   (3.3.3)
  $\phantom{y(t)} = \frac{A_c}{2}m(t)\cos\phi + \frac{A_c}{2}m(t)\cos(4\pi f_c t + \phi)$   (3.3.4)

As before, there are two terms present in the mixer output. The bandpass term can be filtered out by a lowpass filter. The lowpass term, $\frac{A_c}{2}m(t)\cos\phi$, however, depends on $\phi$. The power in the lowpass term is given by

  $P = \frac{A_c^2\cos^2\phi}{4}P_m$   (3.3.5)

where $P_m$ denotes the power in the message signal. We can see that in this case we can still recover the message signal with essentially no distortion, but we will suffer a power loss of $\cos^2\phi$. For $\phi = \pi/4$ this power loss is 3 dB, and for $\phi = \pi/2$ nothing is recovered in the demodulation process.
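The $\cos^2\phi$ power loss derived above is easy to verify numerically. The Python sketch below (illustrative, with assumed parameters) demodulates a DSB-AM tone with a local-oscillator offset of $\phi = \pi/4$ and measures the recovered power:

```python
import numpy as np

fs = 8000.0
t = np.arange(0, 1.0, 1.0 / fs)
fc, fm, phi = 1000.0, 40.0, np.pi / 4        # assumed parameters
m = np.cos(2 * np.pi * fm * t)
u = m * np.cos(2 * np.pi * fc * t)           # DSB-AM (Ac = 1)

y = u * np.cos(2 * np.pi * fc * t + phi)     # mixer with phase offset phi
Y = np.fft.fft(y)
f = np.fft.fftfreq(len(y), d=1.0 / fs)
Y[np.abs(f) > 150.0] = 0.0                   # ideal lowpass, 150 Hz cutoff
dem = 2.0 * np.real(np.fft.ifft(Y))          # recovered signal: m(t)*cos(phi)

p_ref = np.mean(m ** 2)                      # power recovered when phi = 0
p_dem = np.mean(dem ** 2)                    # reduced by cos^2(phi)
loss_db = 10 * np.log10(p_ref / p_dem)       # about 3 dB for phi = pi/4
```

Setting `phi = np.pi / 2` in the same sketch drives `p_dem` to zero, matching the statement that nothing is recovered at a 90° offset.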
3.3.2 SSB-AM Demodulation

The demodulation process of SSB-AM signals is basically the same as the demodulation process for DSB-AM signals, i.e., mixing followed by lowpass filtering. Mixing u(t) with the local oscillator output, we obtain

  $y(t) = u(t)\cos(2\pi f_c t)$   (3.3.6)
  $\phantom{y(t)} = \frac{A_c}{2}m(t)\cos^2(2\pi f_c t) \mp \frac{A_c}{2}\hat{m}(t)\sin(2\pi f_c t)\cos(2\pi f_c t)$
  $\phantom{y(t)} = \frac{A_c}{4}m(t) + \frac{A_c}{4}m(t)\cos(4\pi f_c t) \mp \frac{A_c}{4}\hat{m}(t)\sin(4\pi f_c t)$   (3.3.7)

which contains bandpass components at $\pm 2f_c$ and a lowpass component proportional to the message signal. The lowpass component can be filtered out using a lowpass filter to recover the message signal. This process for the USSB-AM case is depicted in Figure 3.13.

Figure 3.13: Demodulation of USSB-AM signals.

Illustrative Problem 3.7 [LSSB-AM example] In an LSSB-AM modulation system, if the message signal is

  $m(t) = \begin{cases}1, & 0 \le t < t_0/3\\ -2, & t_0/3 \le t < 2t_0/3\\ 0, & \text{otherwise}\end{cases}$

with t0 = 0.15 s, and the carrier has a frequency of 250 Hz, find U(f) and Y(f) and compare the demodulated signal with the message signal.

The modulated signal and its spectrum are given in Illustrative Problem 3.4. The expression for U(f) is given by

  $U(f) = \begin{cases}0.025\,e^{-j0.05\pi(f-250)}\,\mathrm{sinc}(0.05(f-250))\left(1-2e^{-j0.1\pi(f-250)}\right)\\ \quad{}+ 0.025\,e^{-j0.05\pi(f+250)}\,\mathrm{sinc}(0.05(f+250))\left(1-2e^{-j0.1\pi(f+250)}\right), & |f| \le f_c\\ 0, & \text{otherwise}\end{cases}$

and the mixer output has the spectrum

  $Y(f) = \tfrac{1}{2}U(f-f_c) + \tfrac{1}{2}U(f+f_c)$

which consists of a lowpass component proportional to M(f), restricted to $|f| \le f_c$, and bandpass components in the ranges $f_c \le |f| \le 2f_c$. A plot of Y(f) is shown in Figure 3.14. The signal y(t) is filtered by a lowpass filter with a cutoff frequency of 150 Hz; the spectrum of the output is shown in Figure 3.15. In Figure 3.16 the original message signal is compared with the demodulated signal.

Figure 3.14: Magnitude spectrum of the mixer output in Illustrative Problem 3.7.

Figure 3.15: The demodulator output in Illustrative Problem 3.7.

Figure 3.16: The message signal and the demodulator output in Illustrative Problem 3.7.

The MATLAB script for this problem follows.

% lssb_dem.m
% Matlab demonstration script for LSSB-AM demodulation. The message signal
% is +1 for 0 < t < t0/3, -2 for t0/3 < t < 2t0/3, and zero otherwise.
echo on
t0=.15;                                 % signal duration
ts=1/1500;                              % sampling interval
fc=250;                                 % carrier frequency
fs=1/ts;                                % sampling frequency
df=0.25;                                % desired freq. resolution
t=[0:ts:t0];                            % time vector
% the message vector
m=[ones(1,t0/(3*ts)),-2*ones(1,t0/(3*ts)),zeros(1,t0/(3*ts)+1)];
c=cos(2*pi*fc.*t);                      % carrier vector
udsb=m.*c;                              % DSB modulated signal
[UDSB,udsb,df1]=fftseq(udsb,ts,df);     % Fourier transform
UDSB=UDSB/fs;                           % scaling
n2=ceil(fc/df1);                        % location of carrier in freq. vector
% remove the upper sideband from DSB
UDSB(n2:length(UDSB)-n2)=zeros(size(UDSB(n2:length(UDSB)-n2)));
ULSSB=UDSB;                             % generate LSSB-AM spectrum
[M,m,df1]=fftseq(m,ts,df);              % spectrum of the message signal
M=M/fs;                                 % scaling
f=[0:df1:df1*(length(M)-1)]-fs/2;       % frequency vector
u=real(ifft(ULSSB))*fs;                 % generate LSSB signal from spectrum
% mixing
y=u.*cos(2*pi*fc*[0:ts:ts*(length(u)-1)]);
[Y,y,df1]=fftseq(y,ts,df);              % spectrum of the output of the mixer
Y=Y/fs;                                 % scaling
f_cutoff=150;                           % choose the cutoff freq. of the filter
n_cutoff=floor(150/df1);
% design the filter
H=zeros(size(f));
H(1:n_cutoff)=4*ones(1,n_cutoff);
H(length(f)-n_cutoff+1:length(f))=4*ones(1,n_cutoff);
DEM=H.*Y;                               % spectrum of the filter output
dem=real(ifft(DEM))*fs;                 % filter output
pause % Press a key to see the effect of mixing
clf
subplot(3,1,1)
plot(f,fftshift(abs(M)))
title('Spectrum of the Message Signal')
xlabel('Frequency')
subplot(3,1,2)
plot(f,fftshift(abs(ULSSB)))
title('Spectrum of the Modulated Signal')
xlabel('Frequency')
subplot(3,1,3)
plot(f,fftshift(abs(Y)))
title('Spectrum of the Mixer Output')
xlabel('Frequency')
pause % Press a key to see the effect of filtering on the mixer output
clf
subplot(3,1,1)
plot(f,fftshift(abs(Y)))
title('Spectrum of the Mixer Output')
xlabel('Frequency')
subplot(3,1,2)
plot(f,fftshift(abs(H)))
title('Lowpass Filter Characteristics')
xlabel('Frequency')
subplot(3,1,3)
plot(f,fftshift(abs(DEM)))
title('Spectrum of the Demodulator Output')
xlabel('Frequency')
pause % Press a key to see the message and the demodulator output signals
subplot(2,1,1)
plot(t,m(1:length(t)))
title('The Message Signal')
xlabel('Time')
subplot(2,1,2)
plot(t,dem(1:length(t)))
title('The Demodulator Output')
xlabel('Time')

Illustrative Problem 3.8 [Effect of phase error on SSB-AM] What is the effect of a phase error in the demodulation of SSB-AM signals?

Assuming that the local oscillator generates a sinusoid with a phase offset of $\phi$ with respect to the carrier, we have

  $y(t) = u(t)\cos(2\pi f_c t + \phi)$
  $\phantom{y(t)} = \left[\frac{A_c}{2}m(t)\cos(2\pi f_c t) \mp \frac{A_c}{2}\hat{m}(t)\sin(2\pi f_c t)\right]\cos(2\pi f_c t + \phi)$
  $\phantom{y(t)} = \frac{A_c}{4}m(t)\cos\phi \pm \frac{A_c}{4}\hat{m}(t)\sin\phi + \text{high-frequency terms}$   (3.3.8)

As can be seen, unlike the DSB-AM case, the effect of the phase offset here is not simply to attenuate the demodulated signal. Here the demodulated signal is attenuated by a factor of $\cos\phi$ as well as distorted by the addition of the $\pm\frac{A_c}{4}\hat{m}(t)\sin\phi$ term. In the special case of $\phi = \pi/2$, the Hilbert transform of the signal will be demodulated instead of the signal itself.

3.3.3 Conventional AM Demodulation

We have already seen that conventional AM is inferior to DSB-AM and SSB-AM when power and SNR are considered. The reason is that a usually large part of the modulated signal power is in the carrier component, which does not carry information. The role of the carrier component is to make the demodulation of conventional AM easier via envelope detection, as opposed to the coherent demodulation required for DSB-AM and SSB-AM. Therefore,
demodulation of AM signals is significantly less complex than the demodulation of DSB-AM and SSB-AM signals. Hence, this modulation scheme is widely used in broadcasting, where there exists a single transmitter and numerous receivers whose cost should be kept low. In envelope detection the envelope of the modulated signal is detected via a simple circuit consisting of a diode, a resistor, and a capacitor, as shown in Figure 3.17.

Figure 3.17: A simple envelope detector.

Mathematically, the envelope detector generates the envelope of the conventional AM signal. Recall from Chapter 1 that the envelope of a bandpass signal can be expressed as the magnitude of its lowpass equivalent signal. Thus if u(t) is the bandpass signal with central frequency $f_c$ and its lowpass equivalent is denoted by $u_l(t)$, then the envelope of u(t), denoted by V(t), can be expressed as

  $V(t) = \sqrt{u_c^2(t) + u_s^2(t)} = |u_l(t)|$   (3.3.9)

where $u_c(t)$ and $u_s(t)$ represent the in-phase and the quadrature components of the bandpass signal u(t). Therefore, in order to obtain the envelope, it is enough to obtain the lowpass equivalent of the bandpass signal; the envelope is simply its magnitude. In the conventional AM case the envelope is

  $V(t) = |1 + a m_n(t)|$

and, because $1 + a m_n(t) > 0$, we conclude that

  $V(t) = 1 + a m_n(t)$   (3.3.10)

where $m_n(t)$ is proportional to the message signal m(t) and 1 corresponds to the carrier component, which can be separated by a dc-block circuit. As can be seen, in the above procedure there is no need for knowledge of $\phi$, the phase of the carrier signal. That is why such a demodulation scheme is called noncoherent, or asynchronous, demodulation.

Illustrative Problem 3.9 [Envelope detection] The message signal

  $m(t) = \begin{cases}1, & 0 \le t < t_0/3\\ -2, & t_0/3 \le t < 2t_0/3\\ 0, & \text{otherwise}\end{cases}$

modulates the carrier $c(t) = \cos(2\pi f_c t)$ using a conventional AM modulation scheme. It is assumed that fc = 250 Hz and t0 = 0.15 s, and the modulation index is a = 0.85.

1. Using envelope detection, demodulate the message signal.
2. If the message signal is periodic with a period equal to t0, and if an AWGN process is added to the modulated signal such that the power in the noise process is one-hundredth the power in the modulated signal, use an envelope demodulator to demodulate the received signal. Compare this case with the case where there is no noise present.
1. The conventional AM modulated signal is

  $u(t) = \left[1 + 0.425\,\Pi\!\left(\frac{t-0.025}{0.05}\right) - 0.85\,\Pi\!\left(\frac{t-0.075}{0.05}\right)\right]\cos(500\pi t)$

The envelope of the signal $[1 + a m_n(t)]\cos(2\pi f_c t)$ is equal to $1 + a m_n(t)$; note that a crucial point in the recovery of m(t) is that for all values of t the expression $1 + a m_n(t)$ is positive. If an envelope detector is used to demodulate the signal, and the carrier component is then removed by a dc-block and the result rescaled, the original message m(t) is recovered. A plot of the conventional AM modulated signal and its envelope as detected by the envelope detector is shown in Figure 3.18. After the envelope detector separates the envelope of the modulated signal, the dc component of the signal is removed and the signal is scaled to generate the demodulator output. Plots of the original message signal and the demodulator output are shown in Figure 3.19.

Figure 3.18: Conventional AM modulated signal and its envelope.
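The same envelope-plus-dc-block demodulator can be sketched in Python (illustrative parameters; `scipy.signal.hilbert` supplies the analytic signal, whose magnitude is the envelope, playing the role of the book's env_phas.m routine):

```python
import numpy as np
from scipy.signal import hilbert

fs = 8000.0
t = np.arange(0, 1.0, 1.0 / fs)
fc, fm, a = 1000.0, 25.0, 0.85                   # assumed parameters
m = np.cos(2 * np.pi * fm * t)                   # test message
m_n = m / np.max(np.abs(m))                      # normalized message
u = (1 + a * m_n) * np.cos(2 * np.pi * fc * t)   # conventional AM signal

env = np.abs(hilbert(u))                         # envelope = |analytic signal|
dem = (env - 1.0) / a                            # dc-block and rescale

err = np.max(np.abs(dem - m_n))                  # recovery error
```

No carrier phase enters the computation anywhere, which is exactly the noncoherence property claimed above.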
Figure 3.19: The message signal and the demodulated signal when no noise is present.

2. When noise is present, some distortion due to the presence of noise will also be present in the demodulator output. In Figure 3.20 the received signal and its envelope are shown. In Figure 3.21 the message signal and the demodulated signal are compared for this case. The MATLAB script for this problem follows.

Figure 3.20: The received signal and its envelope in the presence of noise.

% am_dem.m
% Matlab demonstration script for envelope detection. The message signal
% is +1 for 0 < t < t0/3, -2 for t0/3 < t < 2t0/3, and zero otherwise.
echo on
t0=.15;                                 % signal duration
ts=0.001;                               % sampling interval
fc=250;                                 % carrier frequency
Figure 3.21: The message and the demodulated signal in the presence of noise.

a=0.85;                                 % modulation index
fs=1/ts;                                % sampling frequency
t=[0:ts:t0];                            % time vector
df=0.25;                                % required frequency resolution
% message signal
m=[ones(1,t0/(3*ts)),-2*ones(1,t0/(3*ts)),zeros(1,t0/(3*ts)+1)];
c=cos(2*pi*fc.*t);                      % carrier signal
m_n=m/max(abs(m));                      % normalized message signal
[M,m,df1]=fftseq(m,ts,df);              % Fourier transform
f=[0:df1:df1*(length(m)-1)]-fs/2;       % frequency vector
u=(1+a*m_n).*c;                         % modulated signal
[U,u,df1]=fftseq(u,ts,df);              % Fourier transform
signal_power=power(u(1:length(t)));     % power in modulated signal
noise_power=signal_power/100;           % noise power
noise_std=sqrt(noise_power);            % noise standard deviation
noise=noise_std*randn(1,length(u));     % generate noise
r=u+noise;                              % add noise to the modulated signal
[R,r,df1]=fftseq(r,ts,df);              % Fourier transform
env=env_phas(u);                        % find the envelope
dem1=2*(env-1)/a;                       % remove dc and rescale
env_r=env_phas(r);                      % envelope, when noise is present
dem2=2*(env_r-1)/a;                     % demodulate in the presence of noise
pause % Press any key to see a plot of the message
subplot(2,1,1)
plot(t,m(1:length(t)))
axis([0 0.15 -2.1 2.1])
xlabel('Time')
title('The message signal')
pause % Press any key to see a plot of the modulated signal
subplot(2,1,2)
plot(t,u(1:length(t)))
axis([0 0.15 -2.1 2.1])
xlabel('Time')
title('The modulated signal')
pause % Press a key to see the envelope of the modulated signal
clf
subplot(2,1,1)
plot(t,env(1:length(t)))
xlabel('Time')
title('Envelope of the modulated signal')
pause % Press a key to compare the message and the demodulated signal
clf
subplot(2,1,1)
plot(t,m(1:length(t)))
axis([0 0.15 -2.1 2.1])
xlabel('Time')
title('The message signal')
subplot(2,1,2)
plot(t,dem1(1:length(t)))
xlabel('Time')
title('The demodulated signal')
pause % Press a key to see the demodulated signal in the presence of noise
clf
subplot(2,1,1)
plot(t,m(1:length(t)))
axis([0 0.15 -2.1 2.1])
xlabel('Time')
title('The message signal')
subplot(2,1,2)
plot(t,dem2(1:length(t)))
xlabel('Time')
title('The demodulated signal in the presence of noise')

In the demodulation process above, we have neglected the effect of the noise-limiting filter, which is a bandpass filter in the first stage of any receiver. In practice the received signal r(t) is passed through the noise-limiting filter and then supplied to the envelope detector. In the above example, since the message bandwidth is not finite, passing r(t) through any bandpass filter will cause distortion of the demodulated message, but it will also decrease the amount of noise in the demodulator output. In Figure 3.22 we have plotted the demodulator outputs when noise-limiting filters of different bandwidths are used. The case of infinite bandwidth is equivalent to the result shown in Figure 3.21.

Figure 3.22: Effect of the bandwidth of the noise-limiting filter on the output of the envelope detector.

3.4 Angle Modulation

Angle-modulation schemes, which include frequency modulation (FM) and phase modulation (PM), belong to the class of nonlinear modulation schemes. This family of modulation schemes is characterized by high-bandwidth requirements and good performance in the presence of noise. These schemes can be visualized as modulation techniques that trade off bandwidth for power and, therefore, are used in situations where bandwidth is not the major concern and a high SNR is required. Frequency modulation is widely used in high-fidelity FM broadcasting, TV audio broadcasting, microwave carrier modulation, and point-to-point communication systems.

In our treatment of angle-modulation schemes, we again concentrate on their five basic properties, namely, time-domain representation, frequency-domain representation, bandwidth, power content, and, finally, SNR. Since there is a close relationship between PM and FM, we will treat them in parallel, with emphasis on FM.

The time-domain representation of angle-modulated signals, when the carrier is $c(t) = A_c\cos(2\pi f_c t)$ and the message signal is m(t), is given by

  $u(t) = \begin{cases}A_c\cos\left(2\pi f_c t + k_p m(t)\right), & \text{PM}\\ A_c\cos\left(2\pi f_c t + 2\pi k_f\int_{-\infty}^{t} m(\tau)\,d\tau\right), & \text{FM}\end{cases}$   (3.4.1)

where $k_p$ and $k_f$ represent the deviation constants of PM and FM, respectively.

The frequency-domain representation of angle-modulated signals is, in general, very complex due to the nonlinearity of these modulation schemes. We treat only the case where the message signal m(t) is a sinusoidal signal.
We assume $m(t) = a\cos(2\pi f_m t)$ for PM and $m(t) = -a\sin(2\pi f_m t)$ for FM. Then the modulated signal is of the form

  $u(t) = \begin{cases}A_c\cos\left(2\pi f_c t + \beta_p\cos(2\pi f_m t)\right), & \text{PM}\\ A_c\cos\left(2\pi f_c t + \beta_f\cos(2\pi f_m t)\right), & \text{FM}\end{cases}$   (3.4.2)

where

  $\beta_p = k_p a, \qquad \beta_f = \frac{k_f a}{f_m}$   (3.4.3)

and $\beta_p$ and $\beta_f$ are the modulation indices of PM and FM, respectively. In general, for a nonsinusoidal m(t), the modulation indices are defined as

  $\beta_p = k_p\max|m(t)|, \qquad \beta_f = \frac{k_f\max|m(t)|}{W}$   (3.4.4)

where W is the bandwidth of the message signal m(t). In the case of a sinusoidal message signal, the modulated signal can be represented by

  $u(t) = \sum_{n=-\infty}^{\infty} A_c J_n(\beta)\cos\left(2\pi(f_c + n f_m)t\right)$   (3.4.5)

where $J_n(\beta)$ is the Bessel function of the first kind and of order n, and $\beta$ is either $\beta_p$ or $\beta_f$, depending on whether we are dealing with PM or FM. In the frequency domain we have

  $U(f) = \sum_{n=-\infty}^{\infty} \frac{A_c J_n(\beta)}{2}\left[\delta\left(f - (f_c + n f_m)\right) + \delta\left(f + (f_c + n f_m)\right)\right]$   (3.4.6)
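Equation (3.4.6) predicts spectral lines at $f_c + n f_m$ with (one-sided) amplitudes $A_c|J_n(\beta)|$. This is easy to confirm numerically; the Python sketch below (assumed parameters, with the modulating phase written as $\beta\sin(2\pi f_m t)$ so the standard Bessel identity applies directly — the line magnitudes are the same as for the cosine phase of (3.4.2)) compares FFT line amplitudes with `scipy.special.jv`:

```python
import numpy as np
from scipy.special import jv                 # Bessel function of the first kind

fs = 8000.0
t = np.arange(0, 1.0, 1.0 / fs)              # 1 s of data => 1 Hz FFT resolution
fc, fm, beta = 1000.0, 50.0, 2.0             # assumed parameters, Ac = 1
u = np.cos(2 * np.pi * fc * t + beta * np.sin(2 * np.pi * fm * t))

U = np.fft.rfft(u) / len(u)                  # one-sided spectrum, scaled
# the line at fc + n*fm should have amplitude |Jn(beta)|
lines = [2 * np.abs(U[int(fc + n * fm)]) for n in range(-3, 4)]
bessel = [abs(jv(n, beta)) for n in range(-3, 4)]
max_dev = max(abs(l - b) for l, b in zip(lines, bessel))
```

Since every component frequency is an integer number of cycles over the one-second window, there is no spectral leakage and the agreement is essentially exact.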
Obviously, the bandwidth of the modulated signal is not finite. However, we can define the effective bandwidth of the signal as the bandwidth containing 98% to 99% of the modulated signal power. This bandwidth is given by Carson's rule as

  $B_T = 2(\beta + 1)W$   (3.4.7)

where $\beta$ is the modulation index, W is the bandwidth of the message, and $B_T$ is the bandwidth of the modulated signal.

The expression for the power content of angle-modulated signals is very simple. Since the modulated signal is sinusoidal, with varying instantaneous frequency and constant amplitude, its power is constant and does not depend on the message signal. The power content for both FM and PM is given by

  $P_u = \frac{A_c^2}{2}$   (3.4.8)

The SNR for angle-modulated signals, when no pre-emphasis and de-emphasis filtering is employed, is given by

  $\left(\frac{S}{N}\right)_o = \begin{cases}\beta_p^2\,\frac{P_m}{(\max|m(t)|)^2}\,\frac{P_R}{N_0 W}, & \text{PM}\\ 3\beta_f^2\,\frac{P_m}{(\max|m(t)|)^2}\,\frac{P_R}{N_0 W}, & \text{FM}\end{cases}$   (3.4.9)

Since $\max|m(t)|$ denotes the maximum magnitude of the message signal, we can interpret $P_m/(\max|m(t)|)^2$ as the power in the normalized message signal and denote it by $P_{m_n}$.

When pre-emphasis and de-emphasis filters with a 3-dB cutoff frequency equal to $f_0$ are employed, the SNR for FM is given by

  $\left(\frac{S}{N}\right)_{o,\mathrm{PD}} = \frac{(W/f_0)^3}{3\left(W/f_0 - \arctan(W/f_0)\right)}\left(\frac{S}{N}\right)_o$   (3.4.10)

where $(S/N)_o$ is the SNR without pre-emphasis and de-emphasis filtering given by Equation (3.4.9).

Illustrative Problem 3.10 [Frequency modulation] The message signal

  $m(t) = \begin{cases}1, & 0 \le t < t_0/3\\ -2, & t_0/3 \le t < 2t_0/3\\ 0, & \text{otherwise}\end{cases}$

modulates the carrier $c(t) = \cos(2\pi f_c t)$ using a frequency-modulation scheme. It is assumed that fc = 200 Hz and t0 = 0.15 s; the deviation constant is kf = 50.

1. Plot the modulated signal.
2. Determine the spectra of the message and the modulated signals.

1. We have

  $u(t) = A_c\cos\left(2\pi f_c t + 2\pi k_f\int_{-\infty}^{t} m(\tau)\,d\tau\right)$

so we have to find $\int_{-\infty}^{t} m(\tau)\,d\tau$. This can be done numerically or analytically, and the result is shown in Figure 3.23. Using the relation for u(t) and the value of the integral of m(t), we obtain the expression for u(t). A plot of m(t) and u(t) is shown in Figure 3.24.

Figure 3.23: The integral of the message signal.

Figure 3.24: The message and the modulated signals.

2. In this particular example the bandwidth of the message signal is not finite; therefore, to define the index of modulation, an approximate bandwidth for the message should be used in the expression

  $\beta = \frac{k_f\max|m(t)|}{W}$   (3.4.11)

We can, for example, define the bandwidth as the width of the main lobe of the spectrum of m(t), which results in W = 20 Hz. Using MATLAB's Fourier transform routines, we obtain the spectrum of u(t) shown in Figure 3.25. It is readily seen that, unlike AM, in the FM case there does not exist a clear similarity between the spectrum of the message and the spectrum of the modulated signal.

Figure 3.25: Magnitude spectrum of the modulated signal.
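With the parameters of this problem (kf = 50, max|m(t)| = 2, and the approximate message bandwidth W = 20 Hz derived above), Equations (3.4.11) and (3.4.7) can be evaluated directly; the Python fragment below is just this arithmetic, not part of the book's scripts:

```python
kf = 50.0          # deviation constant
max_m = 2.0        # max|m(t)| for this message
W = 20.0           # approximate message bandwidth (main-lobe width), Hz

beta = kf * max_m / W          # modulation index, Eq. (3.4.11)
BT = 2 * (beta + 1) * W        # Carson's-rule bandwidth, Eq. (3.4.7)
# beta = 5.0, BT = 240.0 Hz
```

So the modulated signal occupies roughly 240 Hz around the carrier, an order of magnitude more than the 20-Hz message: the bandwidth-for-SNR trade-off of angle modulation in concrete numbers.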
pause  % Press any key to see a plot of the message and the modulated signal
subplot(2,1,1)
plot(t,m(1:length(t)))
axis([0 0.15 -2.1 2.1])
xlabel('Time')
title('The message signal')
subplot(2,1,2)
plot(t,u(1:length(t)))
axis([0 0.15 -2.1 2.1])
xlabel('Time')
title('The modulated signal')
pause  % Press any key to see plots of the magnitude of the message and the
       % modulated signal in the frequency domain
subplot(2,1,1)
plot(f,abs(fftshift(M)))
xlabel('Frequency')
title('Magnitude-spectrum of the message signal')
subplot(2,1,2)
plot(f,abs(fftshift(U)))
xlabel('Frequency')
title('Magnitude-spectrum of the modulated signal')

ILLUSTRATIVE PROBLEM

Illustrative Problem 3.11 [Frequency modulation] Let the message signal be

    m(t) = { sinc(100t),   |t| ≤ t_0
           { 0,            otherwise

where t_0 = 0.1. This message modulates the carrier c(t) = cos(2π f_c t), where f_c = 250 Hz. The deviation constant is k_f = 100.

1. Plot the modulated signal in the time and frequency domain.
2. Compare the demodulator output and the original message signal.

1. We first integrate the message signal and then use the relation

    u(t) = A_c cos( 2π f_c t + 2π k_f ∫_{−∞}^{t} m(τ) dτ )

to find u(t). The integral of the message signal is shown in Figure 3.27, and a plot of u(t) together with the message signal is shown in Figure 3.26. A plot of the modulated signal in the frequency domain is shown in Figure 3.28.

2. To demodulate the FM signal,
we first find the phase of the modulated signal u(t). This phase is 2π k_f ∫_{−∞}^{t} m(τ) dτ, which can be differentiated and divided by 2π k_f to obtain m(t). Note that in order to restore the phase and undo the effect of 2π phase foldings, we employ the unwrap.m function of MATLAB. Plots of the message signal and the demodulated signal are shown in Figure 3.29. As you can see, the demodulated signal is quite similar to the message signal.

Figure 3.26: The message and the modulated signals.

Figure 3.27: Integral of the message signal.

Figure 3.29: The message signal and the demodulated signal.

The MATLAB script for this problem follows.

% fm2.m
% MATLAB demonstration script for frequency modulation
echo on
t0=.2;                               % signal duration
ts=0.001;                            % sampling interval
fc=250;                              % carrier frequency
snr=20;                              % SNR in dB (logarithmic)
fs=1/ts;                             % sampling frequency
df=0.25;                             % required frequency resolution
t=[-t0/2:ts:t0/2];                   % time vector
kf=100;                              % deviation constant
m=sinc(100*t);                       % the message signal
int_m(1)=0;
for i=1:length(t)-1                  % integral of m
  int_m(i+1)=int_m(i)+m(i)*ts;
end
[M,m,df1]=fftseq(m,ts,df);           % Fourier transform
M=M/fs;                              % scaling
f=[0:df1:df1*(length(m)-1)]-fs/2;    % frequency vector
u=cos(2*pi*fc*t+2*pi*kf*int_m);      % modulated signal
[U,u,df1]=fftseq(u,ts,df);           % Fourier transform
U=U/fs;                              % scaling
[v,phase]=env_phas(u,ts,250);        % demodulation, find phase of u
phi=unwrap(phase);                   % restore original phase
dem=(1/(2*pi*kf))*(diff(phi)/ts);    % demodulator output: differentiate and scale
pause  % Press any key to see a plot of the message and the modulated signal
subplot(2,1,1)
plot(t,m(1:length(t)))
xlabel('Time')
title('The message signal')
subplot(2,1,2)
plot(t,u(1:length(t)))
xlabel('Time')
title('The modulated signal')
pause  % Press any key to see plots of the magnitude of the message and the
       % modulated signal in the frequency domain
subplot(2,1,1)
plot(f,abs(fftshift(M)))
xlabel('Frequency')
title('Magnitude-spectrum of the message signal')
subplot(2,1,2)
plot(f,abs(fftshift(U)))
xlabel('Frequency')
title('Magnitude-spectrum of the modulated signal')
pause  % Press any key to see plots of the message and the demodulator
       % output with no noise
subplot(2,1,1)
plot(t,m(1:length(t)))
xlabel('Time')
title('The message signal')
subplot(2,1,2)
plot(t,dem(1:length(t)))
xlabel('Time')
title('The demodulated signal')

QUESTION A frequency-modulated signal has constant amplitude. However, in Figure 3.26 the amplitude of the signal u(t) is apparently not constant. Can you explain why this happens?
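The unwrap-and-differentiate demodulation just described can be sketched outside MATLAB as well. The following Python/NumPy version (with np.unwrap playing the role of unwrap.m) regenerates an FM signal with the parameters of Illustrative Problem 3.11 and recovers the message from its phase.

```python
import numpy as np

ts, fc, kf = 0.001, 250.0, 100.0
t = np.arange(-0.1, 0.1, ts)                 # 200 samples of the time axis
m = np.sinc(100 * t)                         # message; np.sinc(x) = sin(pi x)/(pi x)
int_m = np.cumsum(m) * ts                    # running integral of m(t)
phase = 2*np.pi*fc*t + 2*np.pi*kf*int_m      # instantaneous phase of u(t)
u = np.cos(phase)                            # FM signal, constant envelope

# Demodulation: take the phase as a detector would see it (wrapped into
# (-pi, pi]), undo the 2*pi foldings with np.unwrap, differentiate, remove
# the carrier term, and scale by 1/(2*pi*kf) to recover m(t).
wrapped = np.angle(np.exp(1j * phase))
phi = np.unwrap(wrapped)
dem = (np.diff(phi) / ts - 2*np.pi*fc) / (2*np.pi*kf)
print(float(np.max(np.abs(dem - m[1:]))))    # essentially zero
```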
Problems

3.1 Signal m(t) is given by

    m(t) = { t,        0 ≤ t < 1
           { −t + 2,   1 ≤ t < 2
           { 0,        otherwise

in the interval [0, 2]. This signal is used to DSB modulate a carrier of frequency 25 Hz and amplitude 1 to generate the modulated signal u(t). Write a MATLAB m-file and use it to do the following.

a. Plot the modulated signal.
b. Determine the power content of the modulated signal.
c. Determine the spectrum of the modulated signal.
d. Determine the power spectral density of the modulated signal and compare it with the power spectral density of the message signal.

3.2 Repeat Problem 3.1 with

    m(t) = { t,    0 ≤ t < 1
           { 1,    1 ≤ t < 2
           { 0,    otherwise

in the interval [0, 2] and a carrier with frequency of 100 Hz.

3.3 Repeat Problem 3.1 with m(t) = sinc²(100t). What do you think is the difference between this problem and Problem 3.1?

3.4 In Problem 3.1, suppose that instead of DSB, a conventional AM scheme is used with a modulation index of a = 0.2.

a. Determine and plot the spectrum of the modulated signal.
b. Change the modulation index from 0.1 to 0.9 and explain how it affects the spectrum derived in part a.
c. Plot the ratio of the power content of the sidebands to the power content of the carrier as a function of the modulation index.
3.5 In Problem 3.1, let the modulation scheme be USSB instead of DSB. First use a DSB scheme and then remove the lower sideband.

a. Determine and plot the modulated signal.
b. Determine and plot the spectrum of the modulated signal.
c. Compare the spectrum of the modulated signal to the spectrum of the unmodulated signal.

3.6 In Problem 3.1, assume the modulation is conventional AM with a modulation index of 0.2.

a. Use an envelope detector to demodulate the signal in the absence of noise, and plot the demodulated signal.
b. Demodulate and plot the demodulated signal when the ratio of noise power to the modulated signal power is 0.1.

3.7 Repeat Problems 3.5 and 3.6, but substitute the message signal with sinc²(10t).

3.8 Signal

    m(t) = { 1,          |t| ≤ 1
           { −|t| + 2,   1 < |t| ≤ 1.9
           { 0,          otherwise

is used with a carrier frequency of 100 Hz to generate an LSSB signal.

a. Determine and plot the modulated signal.
b. Determine and plot the spectrum of the modulated signal.

3.9 In Problem 3.8, assume that the modulation scheme is USSB, but instead of using a filtered DSB to generate the USSB signal, use the relation

    u(t) = ½ m(t) cos(2π f_c t) − ½ m̂(t) sin(2π f_c t)

where m̂(t) denotes the Hilbert transform of m(t). Do you observe any differences with the results obtained in Problem 3.8?

3.10 In Problem 3.1, assume that the local oscillator at the demodulator has a phase lag of θ with the carrier, where 0 ≤ θ ≤ π/2.

a. Plot the demodulated signal for θ = 0°, 30°, 45°, 60°, 90° (assume noiseless transmission).
b. Using MATLAB, plot the power in the demodulated signal versus θ.
3.11 A message signal is periodic with a period of 2 s and in the time interval [0, 2] is defined as

    m(t) = { 1,        0 ≤ t < 1
           { −t + 2,   1 ≤ t < 1.9
           { 0,        1.9 ≤ t < 2

This message signal DSB modulates a carrier of 50 Hz.

a. Plot the output of the DSB demodulator and compare it with the message for the cases where a white Gaussian noise is added to the modulated signal with a power equal to 0.001, 0.01, 0.05, and 0.1.
b. Plot the power spectral density of the demodulator output in each case.

3.12 Repeat Problem 3.11 with a conventional AM modulation and envelope demodulation.

3.13 Repeat Problem 3.11 with a LSSB modulation scheme.

3.14 The signal

    m(t) = { 1,        0 ≤ t < 1
           { −t + 2,   1 ≤ t < 2
           { 0,        otherwise

frequency-modulates a carrier with frequency 1000 Hz. The deviation constant is k_f = 25.

a. Determine the range of the instantaneous frequency of the modulated signal.
b. Determine the bandwidth of the modulated signal.
c. Plot spectra of the message and the modulated signal.
d. Determine the modulation index.

3.15 Demodulate the modulated signal of Problem 3.14 using a frequency-demodulation MATLAB file and compare the demodulated signal with the message signal.

3.16 Let the message signal be a periodic signal with period 2, described in the [0, 2] interval by

    m(t) = { 1,        0 ≤ t < 1
           { −t + 2,   1 ≤ t < 1.9
           { 0,        1.9 ≤ t < 2

and let the modulation scheme be the one described in Problem 3.14. Before demodulation, additive white Gaussian noise with power 0.001, 0.01, 0.05, and 0.1 is added to the modulated signal. Demodulate the resulting signal in each case and compare your results with the results obtained in Problem 3.15.

3.17 In Problem 3.16, assume m(t) = sinc(10t).
Chapter 4

Analog-to-Digital Conversion

4.1 Preview

The majority of information sources are analog by nature. Analog sources include speech, image, video, and many telemetry sources. In this chapter we consider various methods and techniques used for converting analog sources to digital sequences in an efficient way. This is desirable because, as we will see in subsequent chapters, digital information is easier to process, to communicate, and to store. The general theme of data compression, of which analog-to-digital conversion is a special case, can be divided into two main branches:

1. Quantization (or lossy data compression), in which the analog source is quantized into a finite number of levels. In this process some distortion will inevitably occur, so some information will be lost. This lost information cannot be recovered. General A/D techniques such as pulse-code modulation (PCM), differential pulse-code modulation (DPCM), delta modulation (ΔM), uniform quantization, nonuniform quantization, and vector quantization belong to this class. The fundamental limit on the performance of this class of data-compression schemes is given by the rate-distortion bound.

2. Noiseless coding (or lossless data compression), in which the digital data (usually the result of quantization, as discussed above) are compressed with the goal of representing them with as few bits as possible, such that the original data sequence can be completely recovered from the compressed sequence. In this class of coding schemes no loss of information occurs. Source-coding techniques such as Huffman coding, Lempel-Ziv coding, and arithmetic coding belong to this class of data-compression schemes. The fundamental limit on the compression achieved by this class is given by the entropy of the source.
4.2 Measure of Information

The output of an information source (data, speech, video, etc.) can be modeled as a random process. For a discrete-memoryless and stationary random process, which can be thought of as independent drawings of one random variable X, the information content, or entropy, is defined as

    H(X) = −Σ_{x∈𝒳} p(x) log p(x)     (4.2.1)

where 𝒳 denotes the source alphabet and p(x) is the probability of the letter x. The base of the logarithm is usually chosen to be 2, which results in the entropy being expressed in bits. For the binary alphabet with probabilities p and 1 − p, the entropy is denoted by H_b(p) and is given by

    H_b(p) = −p log p − (1 − p) log(1 − p)     (4.2.2)

A plot of the binary entropy function is given in Figure 4.1.

Figure 4.1: Plot of the binary entropy function.

4.2.1 Noiseless Coding

Noiseless coding is the general term for all schemes that reduce the number of bits required for the representation of a source output for perfect recovery. The entropy of a source provides an essential bound on the number of bits required to represent a source for full recovery: the average number of bits per source output required to encode a source for error-free recovery can be made as close to H(X) as we desire, but cannot be less than H(X).
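The entropy definition (4.2.1) is easy to sketch outside MATLAB; the Python function below mirrors the book's entropy.m (the function name and the probability-vector checks are the only assumptions made here).

```python
import numpy as np

# A NumPy sketch of Eq. (4.2.1); log base 2 expresses the entropy in bits.
def entropy(p):
    p = np.asarray(p, dtype=float)
    assert np.all(p >= 0) and abs(p.sum() - 1) < 1e-10, "not a probability vector"
    p = p[p > 0]                      # 0*log(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))

print(entropy([0.5, 0.5]))   # binary entropy at p = 0.5 -> 1.0 bit
print(entropy([0.25] * 4))   # uniform over 4 letters -> 2.0 bits
```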
It can be shown that in this way we generate a codc with minimum average length among the class of prefix-free codes. as expected.0.0.v^ .0.V9}.^ p. regardless of the complexity of the encoder and the decoder.m given below calculates the entropy of a probability vector p.13 + 0 . The average codeword length for this code is £ = 2 x 0 . log p. 0.09 + 0. .2.1 bits per source output The entropy of the source is given as 9 H{X) = . 1 2 + 0 . There exist various algorithms for noiseless source coding.2. 1 ) + 4 x (0.' The following example shows how to design a Huffman code.15 + 0.07 + 0. H u f f m a n Coding In Huffman coding we assign longer codewords to the less probable source outputs and shorter codewords to the more probable ones.0. To do this we start by merging the two least probable source outputs to generate a new merged output whose probability is the sum of the corresponding probabilities. The MATLAB function entropy. but we cannot have a code with rate less than H(X). we can have a code with rate less than A") -r e.0371 i=l bits per source output We observe that L > H(X). 22 01 0.26 10 0. .17 101 1010 0.15 100 00 110 010 X} 0.58 Prob.06 Uli Figure 4.13 0.134 CHAPTER 4.07 1110 0.DIGITA L CONVERSION Codewords 00 100 x\ 0.09 1011 *7 0.2 0.1 0. 010 Oil x* 011 1010 0.42 110 0.2: H u f f m a n code tree.12 0. A NA LOG-TO.13 111 1111 *9 0.32 0.08 1011 1110 *8 0. function [h. cor. v e c t o r . if length(find(p<0))"=0. and I the % average codeword length for a source with % probability vector p.er. It should also be noted that the H u f f m a n coding algorithm does not result in a unique code d u e to the arbitrary way of assigning O's and 1 's to different tree branches. by increasing K we can get as close to H(X) as we desire.3) is called the efficiency of the Huffman code. wc always have rj < I. % Ih.4) If we design a H u f f m a n code for blocks of length K instead of for single letters. errorl'Not a p r o b . 
n e g a t i v e component ( s ) ' ) end if abs(sumip)-1 )> 10e-10. which designs a H u f f m a n codc for a discretem e m o r y less source with probability vector p and returns both the codewords and the average c o d e w o r d length. errort'Not a p r o a .2.por.ESTROPYiPi returns the entropy function the probability vector p of if lengih(find(p<0)r=0.2. .4. The M A T L A B function h u f f m a n . %HUFFMAN Huffman code generator. Obviously. Huffman code generator % returns h the Huffman code matrix. we will have H{X) < L < H(X) + (4. is given below.5) ft and. error('Noc a p r o b . v e c t o r .lI = Huffman!p). Measure of Information 135 function h=entropy(p) % % H . U can be shown that the average codeword length for any H u f f m a n code satisfies the inequalities H(X) < I < H(X) + 1 (4.l)=huffman(p). increasing K increases the complexity considerably.2. Needless to say.2. end n e g a t i v e c o m p o n e n t ( s ) ') if abs(sum(p)-1)>10e-10.es do n o t add up t o 1 ' ) end h=sum(-p »log2(p)): The quantity n = (4. m . In general. therefore. That is why we talk about a H u f f m a n code rather than the H u f f m a n code. v e c t o r . :)r=32)). end end for i=1:n h{i.zeros(1. c(n-1. end for i = 1 .136 CHAPTER 4. for i = 2 : n . c(n-i.1 )+1 :n»find(m(n-i+1 .:)=[l(1 :n—i+1 ).n)=' 0 '. q=p.1 ) = c ( n . -.n + 1 : 2 » n .:)=blanks(n*n).0. 2. e(n—i.. . q=(q(1)+qC2).q(3:n). v e c t o r . ILLUSTRATIVE PROBLEM Illustrative Problem 4. x2.n*(find(m(n-i + 1. 1 : n .n)=' 0 ' . for j = 1 : i .1 cfn—i.0.3.:)==j + 1)).2»n)= • 1 ' .1. n»(find(m(n-i+1. .05.(j+1 )»n + 1 :(j+2)»r>)=c(n-i+1.0.0.2+n)='l'. .n*(find(m{1 . Determine the entropy of the source.0. n .DIGITA L CONVERSION I') errorf'Koc a p r o b . Now design a Huffman code for source sequences of length 2 and compare the efficiency of this code with the efficiency of the code derived in part 2.:)==i)»n). c(n —i. Find a Huffman code for the source and determine the efficiency of the Huffman code. m=zeros{n-1.i-1)].. 
components do not add up to 1')
end
h=sum(-p.*log2(p));

The quantity

    η = H(X) / L̄     (4.2.3)

is called the efficiency of the Huffman code. Obviously, we always have η ≤ 1. It can be shown that the average codeword length for any Huffman code satisfies the inequalities

    H(X) ≤ L̄ < H(X) + 1     (4.2.4)

If we design a Huffman code for blocks of length K instead of for single letters, we will have

    H(X) ≤ L̄ < H(X) + 1/K     (4.2.5)

and, therefore, by increasing K we can get as close to H(X) as we desire. Needless to say, increasing K increases the complexity considerably. It should also be noted that the Huffman coding algorithm does not result in a unique code, due to the arbitrary way of assigning 0's and 1's to different tree branches. That is why we talk about a Huffman code rather than the Huffman code.

The MATLAB function huffman.m, which designs a Huffman code for a discrete-memoryless source with probability vector p and returns both the codewords and the average codeword length, is given below.

function [h,l]=huffman(p)
%HUFFMAN        Huffman code generator.
%               [h,l]=huffman(p), Huffman code generator
%               returns h the Huffman code matrix, and l the
%               average codeword length for a source with
%               probability vector p.
if length(find(p<0))~=0,
  error('Not a prob. vector, negative component(s)')
end
if abs(sum(p)-1)>10e-10,
  error('Not a prob. vector, components do not add up to 1')
end
n=length(p);
q=p;
m=zeros(n-1,n);
for i=1:n-1
  [q,l]=sort(q);
  m(i,:)=[l(1:n-i+1),zeros(1,i-1)];
  q=[q(1)+q(2),q(3:n),1];
end
for i=1:n-1
  c(i,:)=blanks(n*n);
end
c(n-1,n)='0';
c(n-1,2*n)='1';
for i=2:n-1
  c(n-i,1:n-1)=c(n-i+1,n*(find(m(n-i+1,:)==1))-(n-2):n*(find(m(n-i+1,:)==1)));
  c(n-i,n)='0';
  c(n-i,n+1:2*n-1)=c(n-i,1:n-1);
  c(n-i,2*n)='1';
  for j=1:i-1
    c(n-i,(j+1)*n+1:(j+2)*n)=c(n-i+1,n*(find(m(n-i+1,:)==j+1)-1)+1:n*find(m(n-i+1,:)==j+1));
  end
end
for i=1:n
  h(i,1:n)=c(1,n*(find(m(1,:)==i)-1)+1:find(m(1,:)==i)*n);
  ll(i)=length(find(abs(h(i,:))~=32));
end
l=sum(p.*ll);

ILLUSTRATIVE PROBLEM

Illustrative Problem 4.2 [Huffman coding] A discrete-memoryless information source with alphabet

    𝒳 = {x₁, x₂, …, x₆}

and the corresponding probabilities

    p = {0.1, 0.3, 0.05, 0.09, 0.21, 0.25}

is to be encoded using Huffman coding.

1. Determine the entropy of the source.
2. Find a Huffman code for the source and determine the efficiency of the Huffman code.
3. Now design a Huffman code for source sequences of length 2 and compare the efficiency of this code with the efficiency of the code derived in part 2.

1. The entropy of the source is derived via the entropy.m function and is found to be 2.3549 bits per source symbol.

2. Using the huffman.m function we can design a Huffman code for this source. The average codeword length is found to be 2.38 binary symbols per source output, so the efficiency of this code is η₁ = 2.3549/2.38 = 0.9895.
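The numbers above can be checked with the entropy definition alone; this Python sketch also builds the pair-source probabilities with np.kron, the NumPy analogue of the kron(p,p) call used in part 3 below.

```python
import numpy as np

# Entropy check for Illustrative Problem 4.2, plus the Kronecker-product
# construction of the 36 pair probabilities for a memoryless source.
def entropy(p):
    p = np.asarray(p, dtype=float)
    return float(-np.sum(p * np.log2(p)))

p = np.array([0.1, 0.3, 0.05, 0.09, 0.21, 0.25])
p2 = np.kron(p, p)                    # 36 pair probabilities

print(round(entropy(p), 4))           # 2.3549 bits, as in part 1
print(round(entropy(p2), 4))          # 4.7097 = 2 H(X): entropy doubles
print(round(entropy(p) / 2.38, 3))    # efficiency ~0.989 for L = 2.38
```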
3. A new source whose outputs are letter pairs of the original source has 36 output letters of the form (xᵢ, xⱼ). Since the source is memoryless, the probability of each pair is the product of the individual letter probabilities. Thus, in order to obtain the probability vector for the extended source, we must generate a vector with 36 components, each component being the product of two probabilities in the original probability vector p. This can be done by employing the MATLAB function kron.m, in the form of kron(p,p). The Huffman code for the extended source is then obtained from huffman.m as before. The entropy of the extended source is found to be 4.7097 bits per source output, and the average codeword length for the extended source is 4.7420, so the efficiency of this Huffman code is

    η₂ = 4.7097/4.7420 = 0.9932

which shows an improvement compared to the efficiency of the Huffman code designed in part 2.

ILLUSTRATIVE PROBLEM

Illustrative Problem 4.3 [A Huffman code with maximum efficiency] Design a Huffman code for a source with probability vector

    p = (1/2, 1/4, 1/8, 1/16, 1/32, 1/64, 1/128, 1/256, 1/256)

We use the huffman.m function to determine a Huffman code and the corresponding average codeword length. The resulting codewords are 1, 01, 001, 0001, 00001, 000001, 0000001, 00000000, and 00000001, and the average codeword length is found to be 1.9922 binary symbols per source output. If we find the entropy of the source using the entropy.m function, we see
In uniform quantization.1 Scalar Quantization In scalar quantization the range of the random variable X is divided into N nonovertapping regions J?. compression of the source output sequence such that full recovery is possible from the compressed data.. where the source alphabet is not discrete. In these methods the compressed data is a deterministic function of the source output.3. quantization schemes can be classified as scalar quantization and vector quantization schemes. nonuniform quantizers outperform uniform quantizers. in general.3. the source has to be quantized to a finite number of levels. In general.3 Quantization In the previous section we studied two methods for noiseless coding.3. are quantized to the ith quantization level. Scalar quantizers can be further classified as uniform quantizers and nonuniform quantizers.. a quantization of this type introduces a mean-square error of (x — xi) .138 CHAPTER 4. the quantization regions are chosen to have equal length. such as digital processing of the analog signals. This process reduces the number of bus to a finite number but at the same time introduces some distortion. In scalar quantization each source output is quantized individually. 4. e. Having determined a and A. i.3) and (4.4).o c .. = £ [X1AT € fj> = fx.1 (4.3. The signat-to-quantizadon-noise ratio.4.2) A. Quantization 139 where / x ( .-'s and the resulting distortion can be determined easily using Equations (4.6) . i. oo) The optimal quantization level in each quantization interval can be shown to be the centroid of that interval. which is denoted by A. Plots of the quantization function Q{x) for a symmetric probability density function of X. Oil. therefore. an). = (i .e.4.N / 2 ) A apt = oo 1 < i < N .3.3.3.. ] I < i < N (4. For the symmetric probability density functions. at a distance A / 2 from the boundaries of the quantization regions.ci + 2A] = (a + (jV . and are of equal length. 
the design of the uniform quantizer is equivalent to determining a and A. the problem becomes even simpler. i. the values of i. and even and odd values of N are shown in Figures 4. a] J?2 = («.. a + A] = (a -r A.3. i.e. where ao — —oo a.\ i = N (4. i. i i i = ( .t ) denotes the probability density function of the sourcc random variable. respectively.4) xfx(x)dx — fx(x)dx Therefore. In such a case. In some cases it is convenient to choose the quantization levels to be simply the midpoints of the quantization regions.5) 1< / < N .3 and 4. (SQNR) is defined as SQNR i j B = I01og10—^ Uniform Quantization In uniform quantization all quantization regions except the first and the last one.3. 3: The uniform quantizer for N = 6.156 CHAPTER 4. (Note that here a + 2 A = 0.4: The quantization function for jV = 7. (Note that here £4 = 0.) . GOO *7 *6 a -I.2 A a a + A a + 4A a + 3A a + 5A Figure 4.DIGITA L CONVERSION GOO •*6 a a + A a + 3A h a + 4A Figure 4. ANA LOG-TO. [ ] ' . B] region. pf p2.p3) TiMSESHST Returns the mean-squared quantization error 7c [Y.DlSTj = MSE-DIST( F UNFCN. ' ) •].int2str(n)]: end args=[args. for i=1 :length(a)— t y ( i ) = e v a l ( [ ' c e n t r o i d ( f u r . the distribution of the sourcc.args]).' ] * |. [ ] ' .B. finally.the vector defining the boundaries of the % quantization regions.pl.P3l. These m-files are given below.P2.P2. y l=evaJ{[ • q u a d ( f u n t c n l . for i = 1 : l e n g t h ( a ) . funfen 1 = [ ' x . f e n . a .3 args=[args.args]).P3) To funfen = the distribution function given % in an m-file It can depent on up to three % parameters. p2. y2=eval([ • q u a d ( f u n f e n . The % 7c function con contain up to three parameters. a(length(a)il % is the support of funfen). * ' . y=yl/y2.funfen]. which can depend on up to three parameters.iol. Quantization 141 We see that in this ease we have only one parameter A. a ( i } .TOLPI. jail). .tol.the relume error PI.A.mt2str(n)J. * ' . % pi. p ' .P 1 .funfen]. p ' .m. PJ args=[]. 
for n=1 n a r g i n . t o l ' . mse_dist. p3 . t o l . function y=ceniroi(J(funfcn. has to be given in an m-file.m. % a .num2str(y(i)). a .p2.CENTROIDCF . and uq. (note. which we have to choose to achieve minimum distortion.4 args=[args.parameters of funfen % tol .( ' . t o l .the relative error args=[].dist. P2. and.a. centroid.' .1 n e w f u n = [ ' ( x . b . finds the centroid of the 7c function F -iefined in an m-file on the I A. function [y. Three m-files. ~2 .b. tol .disi]=mse-d!si(funfen.orgs]). % Y .p2. In order to use each of these m-files. a ( i + 1) .TOL.pi . find the centroid of a region: the mean-square quantization error for a given distribution and given quantization region boundaries. for n = 1 : n a r g ] n . end disl=0. p3. A.3. the distortion when a uniform quantizer is used to quantize a given source (it is assumed that the quantization levels are set to the centroids of the quantization regions).4.a.m.' .p3) 7c CENTROID Finds the centroid of a /unction aver a region.' ) ) . end a r g s = [ a r g s . b . p2.4 [Determining the centroids] Determine the centroids of the quantization regions for a zero-mean.P2.b. This distribution is a function of two parameters. where the boundaries of the quantization regions are given by (—5.pi~p<irameters of the input function. A NA LOG-TO.p2. % /Y. the mean and the variance. % disr-dtstortion. % tol=the relative error. H*» The Gaussian distribution is given in the m-file normal.delta.pi.P3) % fun/cn-.uiurce density function given m an m-ftle % with ut mart three parameters. % y-ijuanUTMiton levels.dist(funfcn.N. . —4.c. % p! . return end if ( s + ( n . unit variance Gaussian distribution. p!. for i=2.fl~The support of the source density function. return end if (s<t>) error{ " T h e l e f t m o s t b o u n d a r y t o o s m a l l . end a(n+1)=c. p ' . % )i=number of levels. ' ) .p2. The support of the Gaussian distribution is (—oo. respectively. 
function [y,dist]=uq_dist(funfcn,b,delta,n,s,tol,p1,p2,p3)
%UQ_DIST  Returns the distortion of a uniform quantizer
%         with quantization points set to the centroids.
%         [Y,DIST]=UQ_DIST(FUNFCN,B,DELTA,N,S,TOL,P1,P2,P3)
%         funfcn=source density function given in an m-file
%         with at most three parameters, p1, p2, p3.
%         [-b,b]=the support of the source density function.
%         delta=level size.
%         n=number of levels.
%         s=the leftmost quantization region boundary.
%         p1,p2,p3=parameters of the input function.
%         y=quantization levels.
%         dist=distortion.
%         tol=the relative error.
if (s<-b)
  error('The leftmost boundary too small.')
  return
end
if (s+(n-2)*delta>b)
  error('The leftmost boundary too large.')
  return
end
args=[];
for j=1:nargin-6
  args=[args,',p',int2str(j)];
end
args=[args,')'];
a(1)=-b;
for i=2:n
  a(i)=s+(i-2)*delta;
end
a(n+1)=b;
[y,dist]=eval(['mse_dist(funfcn,a,tol',args]);

ILLUSTRATIVE PROBLEM

Illustrative Problem 4.4 [Determining the centroids] Determine the centroids of the quantization regions for a zero-mean, unit variance Gaussian distribution, where the boundaries of the quantization regions are given by (−5, −4, −3, −2, −1, 0, 1, 2, 3, 4, 5).

The Gaussian distribution is given in the m-file normal.m. This distribution is a function of two parameters, the mean and the variance, denoted by m and s (or σ²). The support of the Gaussian distribution is (−∞, +∞), but for employing the numerical routines it is enough to use a range that is many times the standard deviation of the distribution — for example, (m − 10√s, m + 10√s).
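For a Gaussian density the centroids can also be obtained in closed form, which gives an independent check on the numerical routines: for X ~ N(0,1), E[X | a < X < b] = (φ(a) − φ(b)) / (Φ(b) − Φ(a)). The Python sketch below applies this to the regions of Illustrative Problem 4.4, with ±10 standing in for ±∞ as in the text.

```python
import math

# Closed-form centroids of Gaussian quantization regions, Eq. (4.3.4),
# using the standard normal pdf phi and cdf Phi.
phi = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
Phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))

bounds = [-10, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 10]  # +-10 approximate +-inf
levels = [(phi(a) - phi(b)) / (Phi(b) - Phi(a))
          for a, b in zip(bounds[:-1], bounds[1:])]
print([round(y, 4) for y in levels])
# the innermost levels come out as -/+0.4599, as the centroid rule predicts
```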
The following MATLAB script determines the centroids (optimal quantization levels).

% MATLAB script for Illustrative Problem 4.4, Chapter 4.
a=[-10,-5,-4,-3,-2,-1,0,1,2,3,4,5,10];
for i=1:length(a)-1
  y(i)=centroid('normal',a(i),a(i+1),0.001,0,1);
end

This results in the following quantization levels: ±0.4599, ±1.3832, ±2.3159, ±3.2605, ±4.2170, and ±5.1865.

ILLUSTRATIVE PROBLEM

Illustrative Problem 4.5 In Illustrative Problem 4.4, determine the mean-square error.

Letting a = (−10, −5, −4, −3, −2, −1, 0, 1, 2, 3, 4, 5, 10) and using mse_dist.m, we obtain a mean-square error of 0.0769.

ILLUSTRATIVE PROBLEM

Illustrative Problem 4.6 [Uniform quantizer distortion] Determine the mean-square error for a uniform quantizer with 12 quantization levels, each of length 1, designed for a zero-mean Gaussian source with variance of 4. It is assumed that the quantization regions are symmetric with respect to the mean of the distribution.

By the symmetry assumption the boundaries of the quantization regions are 0, ±1, ±2, ±3, ±4, and ±5, and the quantization regions are (−∞, −5], (−5, −4], (−4, −3], (−3, −2], (−2, −1], (−1, 0], (0, 1], (1, 2], (2, 3], (3, 4], (4, 5], and (5, +∞). This means that in the uq_dist.m function we can substitute b = 20, Δ = 1, n = 12, s = −5, tol = 0.001, p1 = 0, and p2 = 4, and we obtain a squared-error distortion of 0.0851 and quantization values of ±0.4897, ±1.4691, ±2.4487, ±3.4286, ±4.4089, and ±5.6455.

In some cases it is convenient to choose the quantization levels to be simply the midpoints of the quantization regions, each at distance Δ/2 from the region boundaries. In this case the quantization levels corresponding to the first and the last quantization regions are chosen to be at distance Δ/2 from the two outermost quantization boundaries. This means that if the number of quantization levels is even, then the boundaries are given by 0, ±Δ, ±2Δ, …, ±(N/2 − 1)Δ and the quantization levels are given by ±Δ/2, ±3Δ/2, …, ±(N − 1)Δ/2. If the number of quantization levels is odd, then the quantization boundaries are ±Δ/2, ±3Δ/2, …, ±(N/2 − 1)Δ and the quantization levels are given by 0, ±Δ, ±2Δ, …, ±(N − 1)Δ/2. The m-file uq_mdpnt.m determines the squared-error distortion for a symmetric density function when the quantization levels are chosen to be the midpoints of the quantization intervals. It is assumed that the quantization regions are symmetric with respect to the mean of the distribution. The m-file uq_mdpnt.m is given below.
.7 [Uniform quantizer with levels set to the midpoints] Find the distortions when a uniform quantizer is used to quantize a zero-mean.bj-The support of the source densiry function.m we substitute 'normal' 2 for the density function name.DIGITA L CONVERSION function dist=uq-mdpnt(funfcn. t o l . return ILLUSTRATIVE PROBLEM Illustrative Problem 4.delta. % n=number of levels. pl. dist=dist+eval([' q u a d ( n e w f u n . % dist-distortion.lol.pl. " 2 . end y(n)=a(n)+delta.' ) ) .1 )+delta.p2. and A = 1 for the length of the quantization levels.n.N.DELTA. 1 ) • ]. '). for i=3:n a ( i ) = a ( i . B. % (•b.nargin-5 args=[args.num2str(y(i)). The number of quantization levels is 11. a(n+1)=b: a(2)=-(n/2-1)*delta: y(1)=a(2)-delta/2. % pfp2. . For the parameter b.P2. —«WHHIP In uq_mdpnt. n = 11 for the number of quantization levels. * ' . ANA LOG-TO. % tol=lhe relative error. unit variance Gaussian random variable. end for this r a n g e .( ' . a ( i ) .144 CHAPTER 4. if (2*b<delta*(n~1)) errorl'Too many l e v e l s end args=[i.args]).P3I % funfen—source density function given in an m-jUe % with at most three parameters. t ) ' . % DIST=UQ-MDPNT(F UN PCS. end args=[args. The density % function is assumed to be an even function.funfen). y(i-1)=a(i)-de!ta/2. and the length of each quantization region is 1. p • . which chooses ^The name of the function should be substituted with 'normal' inriudin^ the single quotes. dist]=l!oydmax(funfcn. Quantization 145 the support set of the density function.p3) %LLOYDMAX Returns the the Lloyd-Max quantizer and the mean-squared % quantization error for a symmetric distribution.' ) ' ] .bj approximates support oj the density function. function [a. Since in this case optimization is done under more relaxed conditions. the result is obviously superior to that of uniform quantization.y.0833. but in genera! there is no guarantee that the global minimum can be achieved. TOL P1.int2str(j)). and we choose the tolerance to be 0 001. 
Nonuniform Quantization

In nonuniform quantization the requirement that the quantization regions, except the first and the last, have equal lengths is relaxed, and each quantization region can have any length. The optimality conditions for this case, known as the Lloyd-Max conditions, can be expressed as

    y_i = ( integral from a_{i-1} to a_i of x f_X(x) dx ) / ( integral from a_{i-1} to a_i of f_X(x) dx )
    a_i = (y_i + y_{i+1}) / 2                                            (4.3.7)

From these equations we conclude that the optimal quantization levels are the centroids of the quantization regions, and the optimal boundaries between the quantization regions are the midpoints between the quantization levels. In order to obtain the solution to the Lloyd-Max equations, we start with a set of quantization levels y_i. From this set we can simply find the set of quantization region boundaries a_i. From this set of a_i's a new set of quantization levels can be obtained. This process is continued until the improvement in distortion from one iteration to another is not noticeable. This algorithm is guaranteed to converge to a local minimum, but in general there is no guarantee that the global minimum can be achieved. Since the optimization is done under more relaxed conditions than in the uniform case, the result is obviously superior to that of uniform quantization. The procedure of designing an optimal quantizer is shown in the m-file lloydmax.m, given below.

    function [a,y,dist]=lloydmax(funfcn,b,n,tol,p1,p2,p3)
    %LLOYDMAX  Returns the Lloyd-Max quantizer and the mean-squared
    %          quantization error for a symmetric distribution.
    %          [A,Y,DIST]=LLOYDMAX(FUNFCN,B,N,TOL,P1,P2,P3)
    %          funfcn=The density function given in an m-file.
    %            It can depend on up to three parameters, p1, p2, p3.
    %          a=The vector giving the boundaries of the quantization regions.
    %          [-b,b] approximates the support of the density function.
    %          n=The number of quantization regions.
    %          y=The quantization levels.
    %          p1,p2,p3=Parameters of funfcn.
    %          tol=the relative error.
    %          dist=distortion.
    args=[];
    for j=1:nargin-4
      args=[args,',p',int2str(j)];
    end
    args=[args,')'];
    v=eval(['variance(funfcn,-b,b,tol',args]);
    a(1)=-b;
    d=2*b/n;
    for i=2:n
      a(i)=a(i-1)+d;
    end
    a(n+1)=b;
    dist=v;
    [y,newdist]=eval(['mse_dist(funfcn,a,tol',args]);
    while (newdist<0.99*dist),
      for i=2:n
        a(i)=(y(i-1)+y(i))/2;
      end
      dist=newdist;
      [y,newdist]=eval(['mse_dist(funfcn,a,tol',args]);
    end

ILLUSTRATIVE PROBLEM

Illustrative Problem 4.8 [Lloyd-Max quantizer design] Design a 10-level Lloyd-Max quantizer for a zero-mean, unit variance Gaussian source.

SOLUTION

Using b = 10, n = 10, tol = 0.01, p1 = 0, and p2 = 1 in lloydmax.m, we obtain the quantization boundary and quantization level vectors a and y as

    a = ±10, ±2.16, ±1.51, ±0.98, ±0.48, 0
    y = ±2.52, ±1.78, ±1.22, ±0.72, ±0.24

and the resulting distortion is 0.02. These values are good approximations to the optimal values given in the table by Max [2].

4.3.2 Pulse-Code Modulation

In pulse-code modulation an analog signal is first sampled at a rate higher than the Nyquist rate, and then the samples are quantized. It is assumed that the analog signal is distributed on an interval denoted by [-x_max, x_max], and the number of quantization levels is large. The quantization levels can be equal or unequal. In the first case we are dealing with a uniform PCM, and in the second case, with a nonuniform PCM.

Uniform PCM

In uniform PCM the interval [-x_max, x_max] of length 2x_max is divided into N equal subintervals, each of length Δ = 2x_max/N. If N is large enough, the density function of the input in each subinterval can be assumed to be uniform, resulting in a distortion of D = Δ²/12. If N is a power of 2, or N = 2^v, then v bits are required for representation of each level. This means that if the bandwidth of the analog signal is W and if sampling is done at the Nyquist rate, the required bandwidth for transmission of the PCM signal is at least vW (in practice, 1.5vW is closer to reality).
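The D = Δ²/12 approximation can be verified directly. The sketch below is ours (x_max = 1 and v = 6 bits are arbitrary example choices): it quantizes a uniformly distributed input with a midrise uniform quantizer and compares the measured noise power with Δ²/12:

```python
import numpy as np

rng = np.random.default_rng(1)
x_max, v = 1.0, 6
N = 2 ** v
delta = 2 * x_max / N
x = rng.uniform(-x_max, x_max, 200_000)
xq = (np.floor(x / delta) + 0.5) * delta   # midrise quantization levels
D = np.mean((x - xq) ** 2)                 # measured quantization noise power
sqnr_db = 10 * np.log10(np.mean(x ** 2) / D)
print(D, delta ** 2 / 12)                  # the two agree closely for large N
print(sqnr_db)                             # close to 6v = 36 dB for this input
```

For a full-range uniform input, each extra bit buys about 6 dB of SQNR, which is the "6v" term of the decibel formula derived next.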
The distortion is given by

    D = Δ²/12 = x_max²/(3N²) = x_max²/(3 · 4^v)                          (4.3.8)

If the power of the analog signal is denoted by E[X²], the signal-to-quantization-noise ratio (SQNR) is given by

    SQNR = 3N² E[X²]/x_max² = 3 · 4^v E[X²]/x_max² = 3 · 4^v E[X~²]      (4.3.9)

where X~ = X/x_max denotes the normalized input. The SQNR in decibels is given by

    SQNR|dB ≈ 4.8 + 6v + E[X~²]|dB                                       (4.3.10)

After quantization, the quantized levels are encoded using v bits for each quantized level. The encoding scheme that is usually employed is natural binary coding (NBC), meaning that the lowest level is mapped into a sequence of all 0's and the highest level is mapped into a sequence of all 1's. All the other levels are mapped in increasing order of the quantized value. The m-file u_pcm.m given below takes as its input a sequence of sampled values and the number of desired quantization levels and finds the quantized sequence, the encoded sequence, and the resulting SQNR (in decibels).

    function [sqnr,a_quan,code]=u_pcm(a,n)
    %U_PCM  Uniform PCM encoding of a sequence.
    %       [SQNR,A_QUAN,CODE]=U_PCM(A,N)
    %       a=input sequence.
    %       n=number of quantization levels (even).
    %       sqnr=output SQNR (in dB).
    %       a_quan=quantized output before encoding.
    %       code=the encoded output.
    amax=max(abs(a));
    a_quan=a/amax;
    b_quan=a_quan;
    d=2/n;
    q=d.*[0:n-1];
    q=q-((n-1)/2)*d;
    for i=1:n
      a_quan(find((q(i)-d/2 <= a_quan) & (a_quan <= q(i)+d/2)))=...
        q(i).*ones(1,length(find((q(i)-d/2 <= a_quan) & (a_quan <= q(i)+d/2))));
      b_quan(find(a_quan==q(i)))=(i-1).*ones(1,length(find(a_quan==q(i))));
    end
    a_quan=a_quan*amax;
    nu=ceil(log2(n));
    code=zeros(length(a),nu);
    for i=1:length(a)
      for j=nu-1:-1:0
        if (fix(b_quan(i)/(2^j))==1)
          code(i,(nu-j))=1;
          b_quan(i)=b_quan(i)-2^j;
        end
      end
    end
    sqnr=20*log10(norm(a)/norm(a-a_quan));

ILLUSTRATIVE PROBLEM

Illustrative Problem 4.9 [Uniform PCM] Generate a sinusoidal signal with amplitude 1 and ω = 1. Using a uniform PCM scheme, quantize it once to 8 levels and once to 16 levels. Plot the original signal and the quantized signals on the same axis. Compare the resulting SQNRs in the two cases.

SOLUTION

We arbitrarily choose the duration of the signal to be 10 s. Then, using the u_pcm.m m-file, we generate the quantized signals for the two cases of 8 and 16 quantization levels. The resulting SQNRs are 18.90 dB for the 8-level PCM and 25.13 dB for the 16-level uniform PCM. The plots are shown in Figure 4.5. A MATLAB script for this problem is shown below.

    % MATLAB script for Illustrative Problem 9, Chapter 4.
    echo on
    t=[0:0.01:10];
    a=sin(t);
    [sqnr8,aquan8,code8]=u_pcm(a,8);
    [sqnr16,aquan16,code16]=u_pcm(a,16);
    pause % Press a key to see the SQNR for N = 8.
    sqnr8
    pause % Press a key to see the SQNR for N = 16.
    sqnr16
    pause % Press a key to see the plot of the signal and its quantized versions.
    plot(t,a,'-',t,aquan8,'-.',t,aquan16,'--',t,zeros(1,length(t)))

Figure 4.5: Uniform PCM for a sinusoidal signal using 8 and 16 levels.

ILLUSTRATIVE PROBLEM

Illustrative Problem 4.10 [Uniform PCM] Generate a sequence of length 500 of zero-mean, unit variance Gaussian random variables. Using u_pcm.m, find the resulting SQNR when the number of quantization levels is 64. Find the first five values of the sequence, the corresponding quantized values, and the corresponding codewords.

SOLUTION

The following script gives the solution.

    % MATLAB script for Illustrative Problem 10, Chapter 4.
    echo on
    a=randn(1,500);
    n=64;
    [sqnr,a_quan,code]=u_pcm(a,n);
    pause % Press a key to see the SQNR.
    sqnr
    pause % Press a key to see the first five input values.
    a(1:5)
    pause % Press a key to see the first five quantized values.
    a_quan(1:5)
    pause % Press a key to see the first five codewords.
    code(1:5,:)

In a typical running of this file, the following values were observed:

    SQNR = 31.66 dB
    Input = [0.1775, -0.4540, 1.0683, -2.2541, 0.5376]
    Quantized values = [0.1569, -0.4708, 1.0985, -2.2494, 0.5754]
    Codewords =
      1 0 0 1 1 0
      0 0 1 0 0 0
      0 1 1 0 1 1
      0 1 1 0 1 0
      1 0 1 0 0 1

Note that different runnings of the program result in different values for the input, the quantized values, and the codewords. However, the resulting SQNRs are very close.

ILLUSTRATIVE PROBLEM

Illustrative Problem 4.11 In Illustrative Problem 4.10, plot the quantization error, defined as the difference between the input value and the quantized value. Also, plot the quantized value as a function of the input value.

SOLUTION

The two desired plots are shown in Figure 4.6.

Figure 4.6: Quantization error in uniform PCM for 64 quantization levels.

ILLUSTRATIVE PROBLEM

Illustrative Problem 4.12 Repeat Illustrative Problem 4.11 with the number of quantization levels set once to 16 and once to 128. Compare the results.

SOLUTION

The result for 16 quantization levels is shown in Figure 4.7, and the result for 128 quantization levels is shown in Figure 4.8. Comparing Figures 4.6, 4.7, and 4.8, it is obvious that the larger the number of quantization levels, the smaller the quantization error, as expected. Also note that for a large number of quantization levels, the relation between the input and the quantized values tends to a line with slope 1 passing through the origin; i.e., the input and the quantized values become almost equal. For a small number of quantization levels (16, for instance), this relation is far from equality, as shown in Figure 4.7.
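A uniform PCM encoder along the lines of u_pcm.m can be sketched in Python. Everything below is our translation (the function name and the vectorized indexing are ours, not the book's); for the sinusoid of Illustrative Problem 4.9 the SQNRs land near the 18.90 dB and 25.13 dB reported above:

```python
import numpy as np

def u_pcm(a, n):
    """Uniform PCM along the lines of u_pcm.m: scale to [-1, 1], quantize
    to n midrise levels, natural-binary encode, return (sqnr_dB, a_quan, code)."""
    amax = np.max(np.abs(a))
    d = 2.0 / n
    levels = d * np.arange(n) - (n - 1) / 2 * d        # -1+d/2 ... 1-d/2
    idx = np.clip(np.round((a / amax - levels[0]) / d), 0, n - 1).astype(int)
    a_quan = levels[idx] * amax
    nu = int(np.ceil(np.log2(n)))
    code = (idx[:, None] >> np.arange(nu - 1, -1, -1)) & 1   # NBC bits
    sqnr = 20 * np.log10(np.linalg.norm(a) / np.linalg.norm(a - a_quan))
    return sqnr, a_quan, code

t = np.arange(0, 10.01, 0.01)
a = np.sin(t)
sqnr8, _, _ = u_pcm(a, 8)
sqnr16, _, _ = u_pcm(a, 16)
print(round(sqnr8, 2), round(sqnr16, 2))   # book reports roughly 18.9 and 25.1 dB
```

Note the roughly 6-dB gain for the extra bit, as (4.3.10) predicts.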
Figure 4.7: Quantization error for 16 quantization levels.

Nonuniform PCM

In nonuniform PCM the input signal is first passed through a nonlinear element to reduce its dynamic range, and the output is applied to a uniform PCM system. At the receiving end, the output is passed through the inverse of the nonlinear element used in the transmitter. The overall effect is equivalent to a PCM system with nonuniform spacing between levels. In general, for transmission of speech signals, the nonlinearities that are employed are either μ-law or A-law nonlinearities. A μ-law nonlinearity is defined by the relation

    y = g(x) = [log(1 + μ|x|) / log(1 + μ)] sgn(x)                       (4.3.11)

where x is the normalized input (|x| ≤ 1) and μ is a parameter that in the standard μ-law nonlinearity is equal to 255. A plot of this nonlinearity for different values of μ is shown in Figure 4.9. The inverse of the μ-law nonlinearity is given by

    x = {[(1 + μ)^|y| - 1] / μ} sgn(y)                                   (4.3.12)

The two m-files mulaw.m and invmulaw.m given below implement the μ-law nonlinearity and its inverse.

    function [y,a]=mulaw(x,mu)
    %MULAW    mu-law nonlinearity for nonuniform PCM.
    %         Y=MULAW(X,MU)
    %         X=input vector.
    a=max(abs(x));
    y=(log(1+mu*abs(x/a))./log(1+mu)).*signum(x);

    function x=invmulaw(y,mu)
    %INVMULAW The inverse of mu-law nonlinearity.
    %         X=INVMULAW(Y,MU)
    %         Y=normalized output of the mu-law nonlinearity.
    x=(((1+mu).^(abs(y))-1)./mu).*signum(y);

Figure 4.9: The μ-law compander.

The m-file mula_pcm.m is the equivalent of the m-file u_pcm.m when using a μ-law PCM scheme. This file is given below.

    function [sqnr,a_quan,code]=mula_pcm(a,n,mu)
    %MULA_PCM mu-law PCM encoding of a sequence.
    %         [SQNR,A_QUAN,CODE]=MULA_PCM(A,N,MU)
    %         a=input sequence.
    %         n=number of quantization levels (even).
    %         sqnr=output SQNR (in dB).
    %         a_quan=quantized output before encoding.
    %         code=the encoded output.
    [y,maximum]=mulaw(a,mu);
    [sqnr,y_q,code]=u_pcm(y,n);
    a_quan=invmulaw(y_q,mu);
    a_quan=maximum*a_quan;
    sqnr=20*log10(norm(a)/norm(a-a_quan));

ILLUSTRATIVE PROBLEM

Illustrative Problem 4.13 [Nonuniform PCM] Generate a sequence of random variables of length 500 according to an N(0, 1) distribution. Using 16, 64, and 128 quantization levels and a μ-law nonlinearity with μ = 255, plot the error and the input-output relation for the quantizer in each case. Also determine the SQNR in each case.

SOLUTION

Let the vector a be the vector of length 500 generated according to a = randn(1,500). Then by using

    [sqnr,a_quan,code] = mula_pcm(a,16,255)

we can obtain the quantized sequence and the SQNR for a 16-level quantization. The SQNR will be 13.76 dB. For the case of 64 levels we obtain SQNR = 25.89 dB, and for 128 levels we have SQNR = 31.76 dB. Comparing these results with the uniform PCM, we observe that in all cases the performance is inferior to that of the uniform PCM. The reason is that in this example the dynamic range of the input signal is not very large. Plots of the input-output relation for the quantizer and the quantization error are given in Figures 4.10 and 4.11. Comparing the input-output relations for the uniform and the nonuniform PCM shown in Figures 4.7 and 4.10 clearly shows why the former is called uniform PCM and the latter is called nonuniform PCM.

Figure 4.10: Quantization error and input-output relation for a 16-level μ-law PCM.

From the above example we see that the performance of the nonuniform PCM, in this case, is not as good as that of the uniform PCM. The next example examines a case where the performance of the nonuniform PCM is superior to that of the uniform PCM.

ILLUSTRATIVE PROBLEM

Illustrative Problem 4.14 The nonstationary sequence a of length 500 consists of two parts. The first 20 samples are generated according to a Gaussian random variable with mean 0 and variance 400 (σ = 20), and the next 480 samples are drawn according to a Gaussian random variable with mean 0 and variance 1. This sequence is once quantized using a uniform PCM scheme and once using a nonuniform PCM scheme. Compare the resulting SQNR in the two cases.
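Before turning to the book's MATLAB solution, the outcome can be previewed with a small Python sketch. It is entirely ours: the helper names, the 64-level choice, and the RNG seed are assumptions, and `uniform_q` is a plain midrise quantizer standing in for u_pcm.m. It compands with μ = 255, quantizes uniformly, and expands:

```python
import numpy as np

def mulaw(x, mu=255.0):
    a = np.max(np.abs(x))
    return np.log(1 + mu * np.abs(x / a)) / np.log(1 + mu) * np.sign(x), a

def inv_mulaw(y, mu=255.0):
    return ((1 + mu) ** np.abs(y) - 1) / mu * np.sign(y)

def uniform_q(x, n):
    """Midrise uniform quantizer with n levels on [-max|x|, max|x|]."""
    amax = np.max(np.abs(x))
    d = 2 * amax / n
    return np.clip(np.floor(x / d), -n / 2, n / 2 - 1) * d + d / 2

def sqnr_db(a, a_quan):
    return 20 * np.log10(np.linalg.norm(a) / np.linalg.norm(a - a_quan))

rng = np.random.default_rng(0)
a = np.concatenate([20 * rng.standard_normal(20), rng.standard_normal(480)])

uni = uniform_q(a, 64)                       # uniform PCM
y, amax = mulaw(a)
nonuni = inv_mulaw(uniform_q(y, 64)) * amax  # compand, quantize, expand
print(sqnr_db(a, uni), sqnr_db(a, nonuni))   # mu-law wins on this input
```

On this wide-dynamic-range input, the companded quantizer spends its levels where the small samples live, which is why it comes out ahead.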
SOLUTION

The sequence is generated by the MATLAB command

    a = [20*randn(1,20) randn(1,480)]

Now we can apply the u_pcm.m and the mula_pcm.m files to determine the resulting SQNRs. The resulting SQNRs are 20.49 dB and 24.95 dB, respectively. In this case the performance of the nonuniform PCM is definitely superior to the performance of the uniform PCM.

Problems

4.1 Design a Huffman code for an information source with probabilities

    p = {0.1, 0.05, 0.21, 0.07, 0.02, 0.2, 0.2, 0.15}

Determine the efficiency of the code by computing the average codeword length and the entropy of the source.

4.2 A discrete-memoryless information source is described by the probability vector p = (0.5, 0.25, 0.125, 0.125).
a. Write a MATLAB file to compute the probabilities of the Kth extension of this source for a given K.
b. Design Huffman codes for this source and its Kth extensions for K = 1, 2, 3, 4, 5.
c. Plot the average codeword length (per source output) as a function of K.

4.3 The probabilities of the letters of the alphabet occurring in printed English are given in Table 4.1.
a. Determine the entropy of printed English.
b. Design a Huffman code for printed English.
c. Determine the average codeword length and the efficiency of the Huffman code.

4.4 Repeat Problem 4.2 for a discrete-memoryless source with a probability vector p = {0.2, 0.2, 0.2, 0.2, 0.2}. Explain why the result in this case is different from the result obtained in Problem 4.2.

4.5 A continuous information source has a zero-mean, unit variance Gaussian distribution. This source is quantized using a uniform symmetric quantizer, where the length of the quantization regions is unity; the quantizer uniformly quantizes the interval [-10, 10]. Assuming the quantization levels are located at the midpoints of the quantization regions, determine the entropy of the quantized source output and plot it as a function of N, the number of quantization levels.

4.6 A zero-mean, unit variance Gaussian source is quantized using a uniform quantizer with N quantization levels located at the midpoints of the quantization regions.
a. For N = 2, 3, 4, 5, 6, 7, 8, 9, 10, determine and plot the mean square distortion as a function of N.
b. For the same values of N, determine the entropy of the quantized source output.
c. On the same graph plot log2 N versus N and explain why the two curves differ.

4.7 On the same figure that you plotted in Problem 4.6, plot the mean square distortion when the quantization levels are taken to be the centroids of the quantization regions. For what values of N are the two plots closer and why?
0 < t < 1 1 < t < 2 x{t) = a.9 A Laplacian random variable is defined by the probability density function /<*) = where k > 0 is a given constant. 4. the entropy of the quantized source. unit variance Gaussian source. Repeat parts a. 4. d. Plot the mean-square error as a function of N for this source.3. and c using a 16-level uniform P C M system. 8 for the Laplacian source given in Problem 4. take the interval of interest to be [ . 4. c. 5. R. 8 levels for this source. 1 0 a ] where a is the standard deviation of the source. determine the S Q N R for this system in decibels. C o m p a r e these results with those obtained from quantizing a zero-mean. the average codeword length of a H u f f m a n code designed for that source. Quantization 163 4. A s usual.9 with a k = Jl (note that this choice of A results in a zeromean. unit variance Gaussian source. Plot H(X). b.10 Repeat Problem 4. unit variance Laplacian source). 5. 2] is defined as t.11 Design an optimal 8-level quantizer for a Laplacian source and plot the resulting mean-square distortion as a function of a as k changes in the interval ( 0 . Design an 8-level uniform P C M quantizer for this signal and plot the quantized output of this system. 5 ] . and R . 6. 4. 6. 1 . c. Plot the quantization error for this system. 8. and log 2 iV as a function of N on the same figure. b. 4. 7. a. 1 2 Design optimal nonuniform quantizers with iV = 2. b. 7.8 For a zero-mean. 3. 3. For each case determine H ( X ) .9. By calculating the power in the error signal. 6. Plot the entropy of the quantized source and log 2 jV as functions of N on the same figure. design uniform quantizers with N — 2. . 4 . ~t +2. 4. 18 Repeat Problem 4. For simulation of the effect of noise. 16-level. 4. unit variance Gaussian sequence with a length of 1000 and quantize it using a 6-bit-per-symbol uniform P C M scheme. 8-leveI. 32-level. ANA LOG-TO. 
and 64-)evel uniform P C M schemes for this sequence and plot the resulting S Q N R (in decibels) as a function of the number of bits allocated to each source output.1.14 using a nonuniform /x-law PCM with p — 255. 4. 0.DIGITA L CONVERSION 4.16 Repeal Problem 4. you can generate binary random sequences with these probabilities and add them (modulo 2) to the encoded sequence. The error probability of the channel is denoted by p. 5 x 10 ".17 Repeat Problem 4.-law P C M with p — 255.255. 5 x 1 0 " \ 1 0 " 2 . 4.u . . Design 4-!evel.15 Generate a zero-mean. Plot the overall S Q N R in decibels as a function of p for values of p — 1 0 _ ? . 0. The resulting 6000 bits are transmitted to the receiver via a noisy channel.14 Generate a Gaussian sequence with mean equal to 0 and variance equal to 1 with 1000 elements.2.164 CHAPTER 4.13 using a nonuniform ^. 4.15 using a nonuniform ^t-law P C M with . 1) .e. i.1 Preview In this chapter we consider several baseband digital modulation and demodulation techniques for transmitting digital information through an additive white Gaussian noise channel.Chapter 5 Baseband Digital Transmission 5.2 Binary Signal Transmission In a binary communication system. which is a sample function of a white Gaussian process with power spectrum Nq/2 watts/hertz. The channel through which the signal is transmitted is assumed to corrupt the signal by the addition of noise. so(t) and i . Suppose that the data rate is specified as R bits per second. We assume that the data bits 0 and 1 are equally probable. Such a channel is called an additive white Gaussian noise (AWGN) channel. Then. (r). denoted as n(t). 0 < t < Tb 0<t <Th where Th = 1//? is defined as the bit time interval. binary data consisting of a sequence of O's and 1 's are transmitted by means of two signal waveforms. the received signal waveform is expressed as r(i) = J/(0 + «(0. each bit is mapped into a corresponding signal waveform according to the rule 0 1 jo(0. 5. 
The task of the receiver is to determine whether a 0 or a 1 was transmitted after observing the received signal r(t) in the interval 0 ≤ t ≤ Tb. The receiver is designed to minimize the probability of error. Such a receiver is called the optimum receiver.

5.2.1 Optimum Receiver for the AWGN Channel

In nearly all basic digital communication texts, it is shown that the optimum receiver for the AWGN channel consists of two building blocks. One is either a signal correlator or a matched filter. The other is a detector.

5.2.2 Signal Correlator

The signal correlator cross-correlates the received signal r(t) with the two possible transmitted signals s0(t) and s1(t), samples the two outputs at t = Tb, and feeds the sampled outputs to the detector. That is, the signal correlator computes the two outputs

    r0(t) = integral from 0 to t of r(τ) s0(τ) dτ
    r1(t) = integral from 0 to t of r(τ) s1(τ) dτ                        (5.2.2)

in the interval 0 ≤ t ≤ Tb, as illustrated in Figure 5.1.

Figure 5.1: Cross-correlation of the received signal r(t) with the two transmitted signals.

ILLUSTRATIVE PROBLEM

Illustrative Problem 5.1 Suppose the signal waveforms s0(t) and s1(t) are the ones shown in Figure 5.2, and let s0(t) be the transmitted signal. Determine the correlator outputs at the sampling instants.

Figure 5.2: Signal waveforms s0(t) and s1(t) for a binary communication system.

SOLUTION

Since s0(t) was transmitted, the received signal is

    r(t) = s0(t) + n(t),   0 ≤ t ≤ Tb                                    (5.2.3)

When the signal r(t) is processed by the two signal correlators shown in Figure 5.1, the outputs r0 and r1 at the sampling instant t = Tb are

    r0 = integral of r(t) s0(t) dt = integral of s0²(t) dt + integral of n(t) s0(t) dt = E + n0    (5.2.4)

and

    r1 = integral of r(t) s1(t) dt = integral of s0(t) s1(t) dt + integral of n(t) s1(t) dt = n1   (5.2.5)

(all integrals over [0, Tb]), where n0 and n1 are the noise components at the outputs of the signal correlators, i.e.,

    n0 = integral from 0 to Tb of n(t) s0(t) dt
    n1 = integral from 0 to Tb of n(t) s1(t) dt                          (5.2.6)
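The correlator outputs of Illustrative Problem 5.1 can be simulated in discrete time. In this Python sketch (ours; Tb = 1 s, A = 1, N0 = 0.5, and the sample rate are arbitrary choices), the averages of r0 and r1 over many noise realizations approach E and 0, as (5.2.4) and (5.2.5) predict:

```python
import numpy as np

fs, Tb, A, N0 = 100, 1.0, 1.0, 0.5
t = np.arange(0, Tb, 1 / fs)
s0 = A * np.ones_like(t)                  # rectangular pulse
s1 = np.where(t < Tb / 2, A, -A)          # orthogonal two-level pulse
E = np.sum(s0 ** 2) / fs                  # signal energy, = 1 here

rng = np.random.default_rng(0)
trials = 2000
sigma = np.sqrt(N0 / 2 * fs)              # discrete-time white-noise std
r = s0 + sigma * rng.standard_normal((trials, t.size))
r0 = r @ s0 / fs                          # correlator outputs at t = Tb
r1 = r @ s1 / fs
print(r0.mean(), r1.mean())               # near E = 1 and near 0
```

The per-trial spread of r0 and r1 matches the variance E·N0/2 derived in (5.2.10) below.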
1 2 ) e These two probability density functions.1 (5. A filter that is matched to the signal waveform s(t).13) . when A| (r) is transmitted. Binary Signal Transmission 169 a. 0 < t < Tb.4.4: Probability density function p(ro \ 0) and p(r\ | 0) when i o ( f ) is transmitted.11) Therefore. 5. Figure 5. 0 < t < Tb (5. ro is zero-mean Gaussian with variance o ~ and r[ is Gaussian with mean value <£" and variance cr~. h pi i. has an impulse response h(t) = s(Th . denoted as p(ro | 0) and p{ri | 0).2. 2. we obtain y(Th) = f ''s2(t)dt Jo = £ (5.6(a).16) where 3E is the energy of the signal s{t). the signal waveform—say. at the sampling instant r = Tb. Now suppose the signal waveforms i o ( 0 is transmitted. the outputs of the t w o matched filters with impulse responses Ao(0 and hi(t) are r0 = £ + n0 (5. Therefore. the matched filter output at the sampling instant t = Tb is identical to the output of the signal correlator.18) respectively.2 Consider the use of matched filters for the demodulation of the two signal waveforms shown in Figure 5.13). Also.17) M O = MTh ~ 0 as illustrated in Figure 5.2. The response of the filter with impuise response M O to the signal c o m p o n e n t j o ( 0 is illustrated in Figure 5.t + T)d( (5.15) Jo and if we sample y ( f ) at t = Th.6(b). the response of the filter with impulse response h\ ( f ) to the signal component j<>(0 is illustrated in Figure 5. Note that each impulse response is obtained by folding the signal i ( 0 to obtain j ( .2.2.14) If we substitute in (5.2 and determine the outputs.5.14) for h{( — x) from (5. >•(/)—at the output of the matched filter when the input waveform is j ( f ) is given by the convolution integral y(t) = f s(z)h(t Ja -z)dt (5./ ) and then delaying the folded signal /) by Tb to obtain s(Tb — / ) . the received signal r ( 0 = i o ( 0 + " ( 0 is passed through the t w o matched filters.170 CHAPTER 5.2.2. BASEBAND DIGITAL TRANSMISSION Consequently.2. 
Note that these outputs are identical to the outputs obtained f r o m sampling the signal correlator outputs at t = Tb- . we obtain v(0 — f s[T)s(Th . Then. The impulse responses of the two matched filters are MO = so(Th t) (5. ILLUSTRATIVE PROBLEM Illustrative Problem 5. Hence. - r) 0 -A Figure 5. .(r) = sAT.5. - t) A.5: Impulse responses of matched fillers for signals i o ( 0 and ^ ( r ) . Figure 5.6: Signal outputs of matched filters when jo(/) is transmitted. Binary Signal Transmission 187 *o(0 = sJT.2. 21) = 2 = s a] (5.n o ) 2 ] = £ ( n ? ) + E(n\) .23) . which correspond to the transmission of either a 0 or a 1.2.22) f e - ^ l d x V2TT <7X j£ = — [ e~x'2dx y/2Wv (5.2£(«. the probability of error is P ^ ' / J0 so(t)sl(T)n(t)n(T)dtdz fT. BASEBAND DIGITAL TRANSMISSION 5. because the signal waveforms are orthogonal. respectively.2. I L L U S T R A T I V E P R O B L E M I l l u s t r a t i v e P r o b l e m 5. their difference x = n j — no is also zero-mean Gaussian.2.Determine the probability of error. the probability of error is Pt = P{r\ > r 0 ) = P(n{ > 3E + n0) ~ P{ny ~ nQ > £ ) (5. E{n \ /io) = £ / Jo Nn fTh = T r / 2 Jo = ^ 2 = 0 Therefore.2.2. E(x2) Hence. The optimum detector for these signals compares ro and r\ and decides that a 0 was transmitted when ro > ri and that a 1 was transmitted when r\ > ro.19) Since n \ and no are zero-mean Gaussian random variables. The optimum detector is defined as the detector that minimizes the probability of error. . r Th r Ti.172 CHAPTER 5. That is.] / J0(Oit(r)fi(f-T)£/frfr Jo fT* Jo 5Q{t)s\(t)dt (5.2. which are equally probable and have equal energies.20) — 0.4 The Detector The detector observes the correlator or matched filter outputs r0 and r\ and decides on whether the transmitted signal waveform is either j o ( 0 or S\(t).3 Let us consider the detector for the signals shown in Figure 5. The variance of the random variable x is E(x2) But E{n\no) = £ [ ( n .2. n0) (5. 
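The equivalence between the matched filter sampled at t = Tb and the correlator can be checked numerically. In this sketch (ours, using the rectangular s0 and two-level s1 of Figure 5.2 with Tb = 1 s and A = 1), the filter matched to s0 reproduces the energy E at t = Tb, while the filter matched to s1 gives 0:

```python
import numpy as np

fs, Tb, A = 100, 1.0, 1.0
t = np.arange(0, Tb, 1 / fs)
s0 = A * np.ones_like(t)                  # rectangular pulse of Figure 5.2
s1 = np.where(t < Tb / 2, A, -A)          # orthogonal two-level pulse
h0, h1 = s0[::-1], s1[::-1]               # h(t) = s(Tb - t): time reversal
y00 = np.convolve(s0, h0) / fs            # response of filter matched to s0
y01 = np.convolve(s0, h1) / fs            # response of filter matched to s1
E = np.sum(s0 ** 2) / fs
n = len(t) - 1                            # index of the sampling instant t = Tb
print(y00[n], y01[n], E)                  # y00(Tb) = E, y01(Tb) = 0
```

Sampling the convolution at t = Tb is exactly the correlation of (5.2.2), which is the point of (5.2.16).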
5.2.4 The Detector

The detector observes the correlator or matched filter outputs r0 and r1 and decides whether the transmitted signal waveform is s0(t) or s1(t), which correspond to the transmission of either a 0 or a 1, respectively. The optimum detector is defined as the detector that minimizes the probability of error.

ILLUSTRATIVE PROBLEM

Illustrative Problem 5.3 Let us consider the detector for the signals shown in Figure 5.2, which are equally probable and have equal energies. The optimum detector for these signals compares r0 and r1 and decides that a 0 was transmitted when r0 > r1 and that a 1 was transmitted when r1 > r0. Determine the probability of error.

SOLUTION

When s0(t) is the transmitted signal waveform, the probability of error is

    Pe = P(r1 > r0) = P(n1 > E + n0) = P(n1 - n0 > E)                    (5.2.19)

Since n1 and n0 are zero-mean Gaussian random variables, their difference x = n1 - n0 is also zero-mean Gaussian. The variance of the random variable x is

    E(x²) = E[(n1 - n0)²] = E(n1²) + E(n0²) - 2E(n1 n0)                  (5.2.20)

But E(n1 n0) = 0, because the signal waveforms are orthogonal. That is,

    E(n1 n0) = E[ double integral of s0(t) s1(τ) n(t) n(τ) dt dτ ]
             = (N0/2) double integral of s0(t) s1(τ) δ(t - τ) dt dτ
             = (N0/2) integral from 0 to Tb of s0(t) s1(t) dt
             = 0                                                         (5.2.21)

Therefore,

    E(x²) = 2σ² = E N0                                                   (5.2.22)

Hence, the probability of error is

    Pe = (1/(sqrt(2π) σx)) integral from E to infinity of e^(-x²/(2σx²)) dx
       = (1/sqrt(2π)) integral from sqrt(E/N0) to infinity of e^(-x²/2) dx
       = Q( sqrt(E/N0) )                                                 (5.2.23)

The derivation of the detector performance given above was based on the transmission of the signal waveform s0(t). The reader may verify that the probability of error obtained when s1(t) is transmitted is identical to that obtained when s0(t) is transmitted. Because the 0's and 1's in the data sequence are equally probable, the average probability of error is that given by (5.2.23).

The ratio E/N0 is called the signal-to-noise ratio (SNR). As expected, the probability of error decreases exponentially as the SNR increases. This expression for the probability of error is evaluated by using the MATLAB script given below and is plotted in Figure 5.7 as a function of the SNR, where the SNR is displayed on a logarithmic scale (10 log10 E/N0).

    % MATLAB script that generates the probability of error versus the signal-to-noise ratio.
    initial_snr=0;
    final_snr=15;
    snr_step=0.25;
    snr_in_dB=initial_snr:snr_step:final_snr;
    for i=1:length(snr_in_dB),
      snr=10^(snr_in_dB(i)/10);
      Pe(i)=Qfunct(sqrt(snr));
    end;
    semilogy(snr_in_dB,Pe);

Figure 5.7: Probability of error for orthogonal signals.

5.2.5 Monte Carlo Simulation of a Binary Communication System

Monte Carlo computer simulations are usually performed in practice to estimate the probability of error of a digital communication system, especially in cases where the analysis of the detector performance is difficult to perform. We demonstrate the method for estimating the probability of error for the binary communication system described above.

ILLUSTRATIVE PROBLEM

Illustrative Problem 5.4 Use Monte Carlo simulation to estimate and plot Pe versus SNR for a binary communication system that employs correlators or matched filters. The model of the system is illustrated in Figure 5.8.

Figure 5.8: Simulation model for Illustrative Problem 5.4.

SOLUTION

We simulate the generation of the random variables r0 and r1, which constitute the input to the detector. We begin by generating a binary sequence of 0's and 1's that occur with equal probability and are mutually statistically independent. To accomplish this task, we use a random number generator that generates a uniform random number with a range (0, 1). If the number generated is in the range (0, 0.5), the binary source output is a 0; otherwise, it is a 1. If a 0 is generated, then r0 = E + n0 and r1 = n1. If a 1 is generated, then r0 = n0 and r1 = E + n1. The additive noise components n0 and n1 are generated by means of two Gaussian noise generators. Their means are zero and their variances are σ² = E N0/2. For convenience, we may normalize the signal energy E to unity (E = 1) and vary σ². Note that the SNR, which is defined as E/N0, is then equal to 1/(2σ²). The detector output is compared with the binary transmitted sequence, and an error counter is used to count the number of bit errors.

Figure 5.9 illustrates the results of this simulation for the transmission of N = 10,000 bits at several different values of SNR. Note the agreement between the simulation results and the theoretical value of Pe given by (5.2.23). We should also note that a simulation of N = 10,000 data bits allows us to estimate the error probability reliably down to about Pe = 10^-3. In other words, with N = 10,000 data bits, we should have at least 10 errors for a reliable estimate of Pe. MATLAB scripts for this problem are given below.

    % MATLAB script for Illustrative Problem 5.4, Chapter 5.
    echo on
    SNRindB1=0:1:12;
    SNRindB2=0:0.1:12;
    for i=1:length(SNRindB1),
      % simulated error rate
      smld_err_prb(i)=smldPe54(SNRindB1(i));
    end;
    for i=1:length(SNRindB2),
      SNR=exp(SNRindB2(i)*log(10)/10);
      % theoretical error rate
      theo_err_prb(i)=Qfunct(sqrt(SNR));
    end;
    % plotting commands follow
    semilogy(SNRindB1,smld_err_prb,'*');
    hold
    semilogy(SNRindB2,theo_err_prb);

    function [p]=smldPe54(snr_in_dB)
    % [p]=smldPe54(snr_in_dB)
    % SMLDPE54 finds the probability of error for the given
    %          snr_in_dB, signal-to-noise ratio in dB.
    E=1;
    SNR=exp(snr_in_dB*log(10)/10);   % signal-to-noise ratio
    sgma=E/sqrt(2*SNR);              % sigma, standard deviation of noise
    N=10000;
    % generation of the binary data source
    for i=1:N,
      temp=rand;                     % a uniform random variable over (0,1)
      if (temp<0.5),
        dsource(i)=0;                % with probability 1/2, source output is 0
      else
        dsource(i)=1;                % with probability 1/2, source output is 1
      end
    end;
    % detection, and probability of error calculation
    numoferr=0;
    for i=1:N,
      % matched filter outputs
      if (dsource(i)==0),
        r0=E+gngauss(sgma);          % if the source output is "0"
        r1=gngauss(sgma);
      else
        r0=gngauss(sgma);            % if the source output is "1"
        r1=E+gngauss(sgma);
      end;
      % detector follows
      if (r0>r1),
        decis=0;                     % decision is "0"
      else
        decis=1;                     % decision is "1"
      end;
      if (decis~=dsource(i)),        % if it is an error, increase the error counter
        numoferr=numoferr+1;
      end;
    end;
    p=numoferr/N;                    % probability of error estimate

Figure 5.9: Error probability from Monte Carlo simulation compared with theoretical error probability for orthogonal signaling.

In Figure 5.9, simulation and theoretical results completely agree at low signal-to-noise ratios, whereas at higher SNRs they agree less. Can you explain why? How should we change the simulation process to result in better agreement at higher signal-to-noise ratios?

5.2.6 Other Binary Signal Transmission Methods

The binary signal transmission method described above was based on the use of orthogonal signals. Below, we describe two other methods for transmitting binary information through a communication channel. One method employs antipodal signals. The other method employs an on-off-type signal.

5.2.7 Antipodal Signals for Binary Signal Transmission

Two signal waveforms are said to be antipodal if one signal waveform is the negative of the other. For example, one pair of antipodal signals is illustrated in Figure 5.10(a). A second pair is illustrated in Figure 5.10(b).

Figure 5.10: Examples of antipodal signals. (a) A pair of antipodal signals. (b) Another pair of antipodal signals.

Suppose we use antipodal signal waveforms s0(t) = s(t) and s1(t) = -s(t) to transmit binary information, where s(t) is some arbitrary waveform having energy E. The received signal waveform from an AWGN channel may be expressed as

    r(t) = ±s(t) + n(t),   0 ≤ t ≤ Tb                                    (5.2.24)

The optimum receiver for recovering the binary information employs a single correlator or a single matched filter matched to s(t), followed by a detector, as illustrated in Figure 5.11.

Figure 5.11: Optimum receiver for antipodal signals. (a) Matched filter demodulator. (b) Correlator demodulator.

Let us suppose that s(t) was transmitted, so that the received signal is

    r(t) = s(t) + n(t)                                                   (5.2.25)

The output of the correlator or matched filter at the sampling instant t = Tb is

    r = E + n                                                            (5.2.26)

where E is the signal energy and n is the additive noise component. Since the additive noise process n(t) is zero mean,
% detection, and probability of error calculation
numoferr=0;
for i=1:N,
   % matched filter outputs
   if (dsource(i)==0),
      r0=E+gngauss(sgma);        % if the source output is "0"
      r1=gngauss(sgma);
   else
      r0=gngauss(sgma);          % if the source output is "1"
      r1=E+gngauss(sgma);
   end;
   % detector follows
   if (r0>r1),
      decis=0;                   % decision is "0"
   else
      decis=1;                   % decision is "1"
   end;
   if (decis~=dsource(i)),       % if it is an error, increase the error counter
      numoferr=numoferr+1;
   end;
end;
p=numoferr/N;                    % probability of error estimate

Figure 5.9: Error probability from Monte Carlo simulation compared with theoretical error probability for orthogonal signaling.

Note the agreement between the simulation results and the theoretical value of Pe. We should also note that a simulation of N=10,000 data bits allows us to estimate the error probability reliably down to about Pe = 10^-3. In other words, with N=10,000 data bits, we should have at least 10 errors for a reliable estimate of Pe.

In Figure 5.9, the simulation and theoretical results agree closely at low signal-to-noise ratios, whereas at higher SNRs they agree less. Can you explain why? How should we change the simulation process to obtain better agreement at higher signal-to-noise ratios?
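The same experiment translates directly to other languages. The following Python sketch (illustrative, not the book's code) repeats the Monte Carlo estimate, with Python's random module standing in for rand and gngauss:

```python
import math
import random

def smld_pe_orthogonal(snr_db, n_bits=10000, seed=1):
    # Monte Carlo error-rate estimate for binary orthogonal signaling,
    # mirroring smldPe54: E = 1 and sigma = E / sqrt(2 * SNR).
    rng = random.Random(seed)
    E = 1.0
    snr = 10.0 ** (snr_db / 10.0)
    sgma = E / math.sqrt(2.0 * snr)
    errors = 0
    for _ in range(n_bits):
        bit = 0 if rng.random() < 0.5 else 1   # equiprobable binary source
        if bit == 0:
            r0, r1 = E + rng.gauss(0.0, sgma), rng.gauss(0.0, sgma)
        else:
            r0, r1 = rng.gauss(0.0, sgma), E + rng.gauss(0.0, sgma)
        decis = 0 if r0 > r1 else 1            # larger correlator output wins
        if decis != bit:
            errors += 1
    return errors / n_bits
```

Note that at high SNR very few errors fall in the counter unless n_bits is increased accordingly.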
5.2.6 Other Binary Signal Transmission Methods

The binary signal transmission method described above was based on the use of orthogonal signals. Below, we describe two other methods for transmitting binary information through a communication channel. One method employs antipodal signals. The other method employs an on-off-type signal.

5.2.7 Antipodal Signals for Binary Signal Transmission

Two signal waveforms are said to be antipodal if one signal waveform is the negative of the other. For example, one pair of antipodal signals is illustrated in Figure 5.10(a). A second pair is illustrated in Figure 5.10(b).

Figure 5.10: Examples of antipodal signals. (a) A pair of antipodal signals. (b) Another pair of antipodal signals.

Suppose we use the antipodal signal waveforms s0(t) = s(t) and s1(t) = -s(t) to transmit binary information, where s(t) is some arbitrary waveform having energy E. The received signal waveform from an AWGN channel may be expressed as

   r(t) = ±s(t) + n(t),   0 ≤ t ≤ Tb      (5.2.24)

The optimum receiver for recovering the binary information employs a single correlator or a single matched filter matched to s(t), followed by a detector, as illustrated in Figure 5.11. Let us suppose that s(t) was transmitted, so that the received signal is

   r(t) = s(t) + n(t)      (5.2.25)

The output of the correlator or matched filter at the sampling instant t = Tb is

   r = E + n      (5.2.26)
where E is the signal energy and n is the additive noise component, which is expressed as

   n = ∫0^Tb n(t)s(t) dt      (5.2.27)

Figure 5.11: Optimum receiver for antipodal signals. (a) Matched filter demodulator. (b) Correlator demodulator.

Since the additive noise process n(t) is zero mean, it follows that E(n) = 0. The variance of the noise component n is

   σ² = E(n²) = ∫0^Tb ∫0^Tb E[n(t)n(τ)] s(t)s(τ) dt dτ      (5.2.28)
      = (N0/2) ∫0^Tb ∫0^Tb δ(t - τ) s(t)s(τ) dt dτ = E N0/2      (5.2.29)

Consequently, the probability density function of r, which is the input to the detector, when s(t) is transmitted is

   p(r | s(t) was transmitted) = p(r | 0) = (1/(√(2π) σ)) e^{-(r-E)²/(2σ²)}      (5.2.30)

Similarly, when the signal waveform -s(t) is transmitted, the input to the detector is r = -E + n, and the probability density function of r is

   p(r | -s(t) was transmitted) = p(r | 1) = (1/(√(2π) σ)) e^{-(r+E)²/(2σ²)}      (5.2.31)

These two probability density functions are illustrated in Figure 5.12.

Figure 5.12: Probability density function for the input to the detector.
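The variance result (5.2.29) can be checked numerically. In the sketch below (illustrative, not from the text), discrete-time white noise over steps of width dt is modeled with per-sample variance (N0/2)/dt, so correlating it against a rectangular s(t) of energy E should produce an output variance close to E·N0/2:

```python
import math
import random

def correlator_noise_variance(E=2.0, N0=1.0, Tb=1.0, n_steps=50,
                              n_trials=20000, seed=7):
    # Approximates n = integral over (0,Tb) of n(t)s(t) dt and returns
    # the sample variance of n, which should be close to E*N0/2.
    rng = random.Random(seed)
    dt = Tb / n_steps
    amp = math.sqrt(E / Tb)                  # rectangular s(t) with energy E
    noise_std = math.sqrt((N0 / 2.0) / dt)   # discrete white-noise samples
    outs = []
    for _ in range(n_trials):
        n = sum(rng.gauss(0.0, noise_std) * amp * dt for _ in range(n_steps))
        outs.append(n)
    mean = sum(outs) / n_trials
    return sum((x - mean) ** 2 for x in outs) / n_trials
```

With E = 2 and N0 = 1, the predicted variance is E·N0/2 = 1.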
For equally probable signal waveforms, the optimum detector compares r with the threshold zero. If r > 0, the decision is made that s(t) was transmitted; if r < 0, the decision is made that -s(t) was transmitted. The noise that corrupts the signal causes errors at the detector. The probability of a detector error is easily computed. Suppose that s(t) was transmitted; then the probability of error is equal to the probability that r < 0. A similar result is obtained when -s(t) is transmitted. Consequently, when the two signal waveforms are equally probable, the average probability of error is

   Pe = P(r < 0) = Q( sqrt(2E/N0) )      (5.2.32)

When we compare the probability of error for antipodal signals with that for orthogonal signals given by (5.2.23), we observe that, for the same transmitted signal energy E, antipodal signals result in better performance. Hence, antipodal signals are 3 dB more efficient than orthogonal signals. Alternatively, we may say that antipodal signals yield the same performance (same error probability) as orthogonal signals while using one-half of the transmitted energy of orthogonal signals.

Illustrative Problem 5.5 Use Monte Carlo simulation to estimate and plot the error probability performance of a binary antipodal communication system.

SOLUTION

The model of the system is illustrated in Figure 5.13. A uniform random number generator is used to generate the binary information sequence from the binary data source. The sequence of 0's and 1's is mapped into a sequence of ±E, where E represents the signal energy; E may be normalized to unity. A Gaussian noise generator is used to generate a sequence of zero-mean Gaussian random numbers with variance σ². As in Illustrative Problem 5.4, we simulate the generation of the random variable r, which is the output of the signal correlator and the input to the detector. The detector compares the random variable r with the threshold of 0: if r > 0, the decision is made that the transmitted bit is a 1; if r < 0, the decision is made that the transmitted bit is a 0. The output of the detector is compared with the transmitted sequence of information bits, and the bit errors are counted.

Figure 5.13: Model of the binary communication system employing antipodal signals.

Figure 5.14 illustrates the results of this simulation for the transmission of 10,000 bits at several different values of SNR. The theoretical value for Pe given by (5.2.32) is also plotted in Figure 5.14 for comparison.
The MATLAB scripts for this problem are given below.

% MATLAB script for Illustrative Problem 5, Chapter 5.
echo on
SNRindB1=0:1:10;
SNRindB2=0:0.1:10;
for i=1:length(SNRindB1),
   % simulated error rate
   smld_err_prb(i)=smldPe55(SNRindB1(i));
end;
for i=1:length(SNRindB2),
   SNR=exp(SNRindB2(i)*log(10)/10);
   % theoretical error rate
   theo_err_prb(i)=Qfunct(sqrt(2*SNR));
end;
% plotting commands follow
semilogy(SNRindB1,smld_err_prb,'*');
hold
semilogy(SNRindB2,theo_err_prb);

function [p]=smldPe55(snr_in_dB)
% [p] = smldPe55(snr_in_dB)
% SMLDPE55 simulates the probability of error for the particular
% value of snr_in_dB, signal-to-noise ratio in dB.
E=1;
SNR=exp(snr_in_dB*log(10)/10);   % signal-to-noise ratio
sgma=E/sqrt(2*SNR);              % sigma, standard deviation of noise
N=10000;
% generation of the binary data source follows
for i=1:N,
   temp=rand;                    % a uniform random variable over (0,1)
   if (temp<0.5),
      dsource(i)=0;              % with probability 1/2, source output is 0
   else
      dsource(i)=1;              % with probability 1/2, source output is 1
   end
end;
% detection, and probability of error calculation follows
numoferr=0;
for i=1:N,
   % the matched filter outputs
   if (dsource(i)==0),
      r=-E+gngauss(sgma);        % if the source output is "0"
   else
      r=E+gngauss(sgma);         % if the source output is "1"
   end;
   % detector follows
   if (r<0),
      decis=0;                   % decision is "0"
   else
      decis=1;                   % decision is "1"
   end;
   if (decis~=dsource(i)),       % if it is an error, increase the error counter
      numoferr=numoferr+1;
   end;
end;
p=numoferr/N;                    % probability of error estimate

Figure 5.14: Error probability from Monte Carlo simulation compared with theoretical error probability for antipodal signals.
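A corresponding Python sketch of the antipodal experiment (illustrative, not the book's code; the mapping 0 → -E, 1 → +E follows smldPe55 above) can be checked against the theoretical value (5.2.32):

```python
import math
import random

def pe_antipodal_theory(snr_db):
    # Pe = Q(sqrt(2E/N0)) for antipodal signaling, eq. (5.2.32)
    snr = 10.0 ** (snr_db / 10.0)
    return 0.5 * math.erfc(math.sqrt(2.0 * snr) / math.sqrt(2.0))

def smld_pe_antipodal(snr_db, n_bits=10000, seed=3):
    rng = random.Random(seed)
    E = 1.0
    snr = 10.0 ** (snr_db / 10.0)
    sgma = E / math.sqrt(2.0 * snr)
    errors = 0
    for _ in range(n_bits):
        bit = 0 if rng.random() < 0.5 else 1
        r = (-E if bit == 0 else E) + rng.gauss(0.0, sgma)
        decis = 0 if r < 0 else 1     # threshold at zero
        if decis != bit:
            errors += 1
    return errors / n_bits
```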
5.2.8 On-Off Signals for Binary Signal Transmission

A binary information sequence may also be transmitted by use of on-off signals. To transmit a 0, no signal is transmitted in the time interval of duration Tb. To transmit a 1, a signal waveform s(t) is transmitted. Consequently, the received signal waveform may be represented as

   r(t) = n(t)           if a 0 is transmitted
   r(t) = s(t) + n(t)    if a 1 is transmitted      (5.2.33)

where n(t) represents the additive white Gaussian noise. As in the case of antipodal signals, the optimum receiver consists of a correlator or a matched filter matched to s(t), whose output is sampled at t = Tb and followed by a detector that compares the sampled output to a threshold, denoted as α. The input to the detector may be expressed as

   r = n        if a 0 is transmitted
   r = E + n    if a 1 is transmitted      (5.2.34)

where n is a zero-mean Gaussian random variable with variance σ² = E N0/2. Therefore, the conditional probability density functions of the random variable r are

   p(r | 0) = (1/(√(2π) σ)) e^{-r²/(2σ²)}
   p(r | 1) = (1/(√(2π) σ)) e^{-(r-E)²/(2σ²)}

These probability density functions are illustrated in Figure 5.15.

Figure 5.15: The probability density function for the received signal at the output of the correlator for on-off signals.

If r > α, a 1 is declared to have been transmitted; otherwise, a 0 is declared to have been transmitted. When a 1 is transmitted, the probability of error is

   Pe1(α) = P(r < α) = (1/(√(2π) σ)) ∫_{-∞}^{α} e^{-(r-E)²/(2σ²)} dr      (5.2.35)

On the other hand, when a 0 is transmitted, the probability of error is

   Pe0(α) = P(r > α) = (1/(√(2π) σ)) ∫_{α}^{∞} e^{-r²/(2σ²)} dr      (5.2.36)

Assuming that the binary information bits are equally probable, the average probability of error is
   Pe(α) = (1/2) Pe0(α) + (1/2) Pe1(α)      (5.2.37)

The value of the threshold α that minimizes the average probability of error is found by differentiating Pe(α) and solving for the optimum threshold. It is easily shown that the optimum threshold is

   α_opt = E/2      (5.2.38)

Substitution of this optimum value into (5.2.37) yields the probability of error

   Pe(α_opt) = Q( sqrt( E/(2 N0) ) )      (5.2.39)

We observe that the error-rate performance with on-off signals is not as good as that of antipodal or orthogonal signals: for a given E/N0, it appears to be 6 dB worse than antipodal signals and 3 dB worse than orthogonal signals. However, the average transmitted energy for the on-off signals is 3 dB less than that for antipodal and orthogonal signals, since one of the two signals is 0; this difference should be factored in when making comparisons of the performance with other signal types.

Illustrative Problem 5.6 Use Monte Carlo simulation to estimate and plot the performance of a binary communication system employing on-off signaling.

SOLUTION

The model for the system to be simulated is similar to that shown in Figure 5.13, except that one of the signals is 0. Thus, we generate a sequence of random variables {r_i} as given by (5.2.34). The detector compares the random variables {r_i} to the optimum threshold E/2 and makes the appropriate decisions. Figure 5.16 illustrates the estimated error probability based on 10,000 binary digits; the theoretical error rate given by (5.2.39) is also illustrated in this figure. The MATLAB scripts for this problem are given below.

% MATLAB script for Illustrative Problem 6, Chapter 5.
echo on
SNRindB1=0:1:15;
SNRindB2=0:0.1:15;
for i=1:length(SNRindB1),
   smld_err_prb(i)=smldPe56(SNRindB1(i));   % simulated error rate
end;
for i=1:length(SNRindB2),
   SNR=exp(SNRindB2(i)*log(10)/10);         % signal-to-noise ratio
   theo_err_prb(i)=Qfunct(sqrt(SNR/2));     % theoretical error rate
end;
% plotting commands follow
semilogy(SNRindB1,smld_err_prb,'*');
hold
semilogy(SNRindB2,theo_err_prb);
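As an aside, the optimality of the threshold in (5.2.38) is easy to verify numerically. The Python sketch below (illustrative, not from the text) sweeps the threshold α and locates the minimum of the average error probability (5.2.37):

```python
import math

def q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pe_on_off(alpha, E=1.0, sigma=0.4):
    # Average error probability of on-off signaling at threshold alpha:
    # a '0' (r = n) is misread when r > alpha; a '1' (r = E + n) when r < alpha.
    pe0 = q(alpha / sigma)
    pe1 = q((E - alpha) / sigma)
    return 0.5 * (pe0 + pe1)

alphas = [k / 1000.0 for k in range(1001)]   # candidate thresholds in [0, 1]
best_alpha = min(alphas, key=pe_on_off)      # should land at E/2 = 0.5
```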
function [p]=smldPe56(snr_in_dB)
% [p] = smldPe56(snr_in_dB)
% SMLDPE56 simulates the probability of error for a given
% snr_in_dB, signal-to-noise ratio in dB.
E=1;
SNR=exp(snr_in_dB*log(10)/10);   % signal-to-noise ratio
sgma=E/sqrt(2*SNR);              % sigma, standard deviation of noise
alpha_opt=1/2;
N=10000;
% generation of the binary data source follows
for i=1:N,
   temp=rand;                    % a uniform random variable over (0,1)
   if (temp<0.5),
      dsource(i)=0;              % with probability 1/2, source output is 0
   else
      dsource(i)=1;              % with probability 1/2, source output is 1
   end
end;
% detection, and probability of error calculation
numoferr=0;
for i=1:N,
   % the matched filter outputs
   if (dsource(i)==0),
      r=gngauss(sgma);           % if the source output is "0"
   else
      r=E+gngauss(sgma);         % if the source output is "1"
   end;
   % detector follows
   if (r<alpha_opt),
      decis=0;                   % decision is "0"
   else
      decis=1;                   % decision is "1"
   end;
   if (decis~=dsource(i)),       % if it is an error, increase the error counter
      numoferr=numoferr+1;
   end;
end;
p=numoferr/N;                    % probability of error estimate

Figure 5.16: Error probability from Monte Carlo simulation compared with theoretical error probability for on-off signals.

5.2.9 Signal Constellation Diagrams for Binary Signals

The three types of binary signals, namely, antipodal, on-off, and orthogonal, may be characterized geometrically as points in "signal space." In the case of antipodal signals, the signals are s(t) and -s(t), each having energy E; the two signal points fall on the real line at ±√E, as shown in Figure 5.17(a). The one-dimensional geometric representation of antipodal signals follows from the fact that only one signal waveform, or basis function, suffices to represent the antipodal signals in the signal space. On-off signals are also one-dimensional; hence, the two signal points also fall on the real line, at 0 and √E, as shown in Figure 5.17(b). On the other hand, binary orthogonal signals require a two-dimensional geometric representation, since there are two linearly independent functions s0(t) and s1(t) that constitute the two signal waveforms. The signal points corresponding to these signals are (√E, 0) and (0, √E), as shown in Figure 5.17(c). The geometric representations of the binary signals shown in Figure 5.17 are called signal constellations.
Figure 5.17: Signal point constellations for binary signals. (a) Antipodal signals. (b) On-off signals. (c) Orthogonal signals.

Illustrative Problem 5.7 The effect of noise on the performance of a binary communication system can be observed from the received signal plus noise at the input to the detector. For example, let us consider binary orthogonal signals, for which the input to the detector consists of the pair of random variables (r0, r1), where either

   (r0, r1) = (√E + n0, n1)   or   (r0, r1) = (n0, √E + n1)

The noise random variables n0 and n1 are zero-mean, independent Gaussian random variables with variance σ². Use Monte Carlo simulation to generate 100 samples of (r0, r1) for each value of σ = 0.1, 0.3, and 0.5, and plot these 100 samples for each σ on different two-dimensional plots. The energy E of the signal may be normalized to unity.

SOLUTION

The MATLAB script for this problem for σ = 0.5 is given below.

% MATLAB script for Illustrative Problem 7, Chapter 5.
echo on
n0=random('norm',0,0.5,100,1);
n1=random('norm',0,0.5,100,1);
n2=random('norm',0,0.5,100,1);
n3=random('norm',0,0.5,100,1);
x1=1.+n0;
y1=n1;
x2=n2;
y2=1.+n3;
plot(x1,y1,'o',x2,y2,'*');
axis('square');

The results of the Monte Carlo simulation are shown in Figure 5.18. Note that at a low noise power level (σ small) the effect of the noise on the performance (error rate) of the communication system is small. As the noise power level increases, the noise components increase in size and cause more errors.

Figure 5.18: Received signal points at the input to the detector for orthogonal signals (Monte Carlo simulation).

5.3 Multiamplitude Signal Transmission

In the preceding section, we treated the transmission of digital information by use of binary signal waveforms; each signal waveform conveyed 1 bit of information. In this section, we use signal waveforms that take on multiple amplitude levels. Thus, we can transmit multiple bits per signal waveform.
5.3.1 Signal Waveforms with Four Amplitude Levels

Let us consider a set of signal waveforms of the form

   s_m(t) = A_m g(t),   m = 0, 1, 2, 3,   0 ≤ t < T      (5.3.1)

where A_m is the amplitude of the mth waveform and g(t) is a rectangular pulse defined as

   g(t) = sqrt(1/T) for 0 ≤ t < T, and 0 otherwise      (5.3.2)

so that the energy in the pulse g(t) is normalized to unity. In particular, we consider the case in which the signal amplitude takes on one of four possible equally spaced values, namely,

   {A_m} = {-3d, -d, d, 3d},   or, equivalently,   A_m = (2m - 3)d,   m = 0, 1, 2, 3      (5.3.3)

where 2d is the Euclidean distance between two adjacent amplitude levels. We call this set of waveforms pulse amplitude modulated (PAM) signals. The four signal waveforms are illustrated in Figure 5.19.

The four PAM signal waveforms shown in Figure 5.19 can be used to transmit 2 bits of information per waveform. In particular, we assign the following pairs of information bits to the four signal waveforms:

   00 → s0(t),   01 → s1(t),   11 → s2(t),   10 → s3(t)

Each pair of information bits (00, 01, 11, 10) is called a symbol, and the time duration T is called the symbol interval. Note that if the bit rate is R = 1/Tb, the symbol interval is T = 2Tb. Since all the signal waveforms are scaled versions of the signal basis function g(t), these signal waveforms may be represented geometrically as points on the real line. Thus, the geometric representation of the four PAM signals is the signal constellation diagram shown in Figure 5.20.

As in the case of binary signals, we assume that the PAM signal waveforms are transmitted through an AWGN channel. Consequently, the received signal is represented as

   r(t) = s_i(t) + n(t),   i = 0, 1, 2, 3,   0 ≤ t ≤ T      (5.3.4)

where n(t) is a sample function of a white Gaussian noise process with power spectrum N0/2 watts/hertz. The task of the receiver is to determine which of the four signal waveforms was transmitted after observing the received signal r(t) in the interval 0 ≤ t ≤ T. The optimum receiver is designed to minimize the probability of a symbol error.
Figure 5.19: Multiamplitude (PAM) signal waveforms.

Figure 5.20: Signal constellation for the four-PAM signal waveforms; the points -3d, -d, d, 3d are labeled by the symbols 00, 01, 11, 10.

5.3.2 Optimum Receiver for the AWGN Channel

The receiver that minimizes the probability of error is implemented by passing the signal through a signal correlator or matched filter followed by an amplitude detector. Since the signal correlator and the matched filter yield the same output at the sampling instant, we consider only the signal correlator in our treatment.

5.3.3 Signal Correlator

The signal correlator cross-correlates the received signal r(t) with the signal pulse g(t), and its output is sampled at t = T. Thus, the signal correlator output is

   r = ∫0^T r(t)g(t) dt = A_i ∫0^T g²(t) dt + ∫0^T g(t)n(t) dt = A_i + n      (5.3.5)

where n represents the noise component, defined as

   n = ∫0^T g(t)n(t) dt      (5.3.6)

We note that n is a Gaussian random variable with mean

   E(n) = ∫0^T g(t)E[n(t)] dt = 0      (5.3.7)

and variance

   σ² = E(n²) = ∫0^T ∫0^T g(t)g(τ)E[n(t)n(τ)] dt dτ
      = (N0/2) ∫0^T ∫0^T g(t)g(τ)δ(t - τ) dt dτ = N0/2      (5.3.8)

Therefore, the probability density function for the output r of the signal correlator is:
   p(r | A_i was transmitted) = (1/(√(2π) σ)) e^{-(r-A_i)²/(2σ²)}      (5.3.9)

where A_i is one of the four possible amplitude values.

5.3.4 The Detector

The detector observes the correlator output r and decides which of the four PAM signals was transmitted in the signal interval. In the following development of the performance of the optimum detector, we assume that the four possible amplitude levels are equally probable. Since the received signal amplitude A_i can take the values ±d, ±3d, as illustrated in the signal constellation in Figure 5.20, the optimum amplitude detector computes the distances

   D_i = |r - A_i|,   i = 0, 1, 2, 3      (5.3.10)

and selects the amplitude corresponding to the smallest distance.

We note that a decision error occurs when the noise variable n exceeds in magnitude one-half of the distance between amplitude levels, that is, when |n| > d. However, when the amplitude levels +3d or -3d are transmitted, an error can occur in one direction only. Since the four amplitude levels are equally probable, the average probability of a symbol error is

   P4 = (3/2) P(n > d)      (5.3.11)

We observe that the distance between successive amplitude levels is 2d and that σ² = N0/2, so the average probability of error may be expressed as

   P4 = (3/2) Q( sqrt( 2d²/N0 ) )      (5.3.12)
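The minimum-distance rule (5.3.10) and the resulting symbol error rate can be sketched for general M-level PAM in Python (illustrative, not the book's code; amplitudes are taken as A_m = (2m - M + 1)d, and the noise standard deviation is set from the average bit SNR, as in the simulations that follow):

```python
import math
import random

def pam_theory(M, snr_bit_db):
    # Symbol error rate of M-level PAM:
    # Pe = (2(M-1)/M) * Q( sqrt( 6 log2(M)/(M^2-1) * Eavb/N0 ) )
    snr_bit = 10.0 ** (snr_bit_db / 10.0)
    arg = 6.0 * math.log2(M) / (M * M - 1.0) * snr_bit
    return (2.0 * (M - 1) / M) * 0.5 * math.erfc(math.sqrt(arg) / math.sqrt(2.0))

def pam_monte_carlo(M, snr_bit_db, d=1.0, n_symbols=20000, seed=11):
    rng = random.Random(seed)
    snr_bit = 10.0 ** (snr_bit_db / 10.0)
    # average energy per symbol of the levels is (M^2 - 1) d^2 / 3
    e_avb = (M * M - 1) * d * d / (3.0 * math.log2(M))
    sigma = math.sqrt(e_avb / (2.0 * snr_bit))   # sigma^2 = N0/2
    levels = [(2 * m - M + 1) * d for m in range(M)]
    errors = 0
    for _ in range(n_symbols):
        m = rng.randrange(M)                     # equiprobable symbols
        r = levels[m] + rng.gauss(0.0, sigma)
        # minimum-distance detection, eq. (5.3.10)
        decis = min(range(M), key=lambda i: abs(r - levels[i]))
        if decis != m:
            errors += 1
    return errors / n_symbols
```

For M = 4 the average symbol energy reduces to 5d², matching the four-level case derived here.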
Since all four amplitude levels are equally probable, the average transmitted signal energy per symbol is

   E_av = (1/4) Σ_{m=0}^{3} ∫0^T s_m²(t) dt = (1/4)(9d² + d² + d² + 9d²) = 5d²      (5.3.13)

Hence d² = E_av/5. Since each transmitted symbol consists of two information bits, the transmitted average energy per bit is E_avb = E_av/2. Consequently, the average probability of error may be expressed in terms of the average bit energy as

   P4 = (3/2) Q( sqrt( (4/5) E_avb/N0 ) )      (5.3.14)

The average probability of error P4 is plotted in Figure 5.21 as a function of the SNR, which is defined as 10 log10(E_avb/N0), where

   E_avb/N0 = 5d²/(2N0)      (5.3.15)

Figure 5.21: Probability of symbol error for four-level PAM.

Illustrative Problem 5.8 Perform a Monte Carlo simulation of the four-level (quaternary) PAM communication system that employs a signal correlator, followed by an amplitude detector. The model for the system to be simulated is shown in Figure 5.22.

SOLUTION

We begin by generating a sequence of quaternary symbols that are mapped into the corresponding amplitude levels {A_m}. To accomplish this task, we use a random number generator that generates a uniform random number in the range (0, 1). This range is subdivided into the four equal intervals (0, 0.25), (0.25, 0.5), (0.5, 0.75), and (0.75, 1.0), where the subintervals correspond to the four symbols (pairs of information bits); thus, the output of the uniform random number generator is mapped into the corresponding signal amplitude levels -3d, -d, d, 3d, respectively. As described above, we simulate the generation of the random variable r = A_m + n, which is the output of the signal correlator and the input to the detector. The additive noise component n, having mean 0 and variance σ², is generated by means of a Gaussian random number generator. For convenience, we may normalize the distance parameter to d = 1 and vary σ². The detector observes r = A_m + n, computes the distance between r and the four possible transmitted signal amplitudes, and delivers as its output the amplitude level corresponding to the smallest distance. This output is compared with the actual transmitted signal amplitude, and an error counter is used to count the errors made by the detector.

Figure 5.22: Block diagram of the four-level PAM system for the Monte Carlo simulation.

Figure 5.23 illustrates the results of the simulation for the transmission of N=10,000 symbols at different values of the average bit SNR defined in (5.3.15). Note the agreement between the simulation results and the theoretical values of P4 computed from (5.3.14). The MATLAB scripts for this problem are given below.

Figure 5.23: Error probability from the Monte Carlo simulation compared with the theoretical error probability for M = 4 PAM.

% MATLAB script for Illustrative Problem 8, Chapter 5.
echo on
SNRindB1=0:1:12;
SNRindB2=0:0.1:12;
for i=1:length(SNRindB1),
   % simulated error rate
   smld_err_prb(i)=smldPe57(SNRindB1(i));
end;
for i=1:length(SNRindB2),
   % signal-to-noise ratio
   SNR_per_bit=exp(SNRindB2(i)*log(10)/10);
   % theoretical error rate
   theo_err_prb(i)=(3/2)*Qfunct(sqrt((4/5)*SNR_per_bit));
end;
% plotting commands follow
semilogy(SNRindB1,smld_err_prb,'*');
hold
semilogy(SNRindB2,theo_err_prb);

function [p]=smldPe57(snr_in_dB)
% [p] = smldPe57(snr_in_dB)
% SMLDPE57 simulates the probability of error for the given
% snr_in_dB, signal-to-noise ratio in dB.
d=1;
SNR=exp(snr_in_dB*log(10)/10);   % signal-to-noise ratio per bit
sgma=sqrt((5*d^2)/(4*SNR));      % sigma, standard deviation of noise
N=10000;                         % number of symbols being simulated
% generation of the quaternary data source follows
for i=1:N,
   temp=rand;                    % a uniform random variable over (0,1)
   if (temp<0.25),
      dsource(i)=0;              % with probability 1/4, source output is "00"
   elseif (temp<0.5),
      dsource(i)=1;              % with probability 1/4, source output is "01"
   elseif (temp<0.75),
      dsource(i)=2;              % with probability 1/4, source output is "10"
   else
      dsource(i)=3;              % with probability 1/4, source output is "11"
   end
end;
% detection, and probability of error calculation
numoferr=0;
for i=1:N,
   % the matched filter outputs
   if (dsource(i)==0),
      r=-3*d+gngauss(sgma);      % if the source output is "00"
   elseif (dsource(i)==1),
      r=-d+gngauss(sgma);        % if the source output is "01"
   elseif (dsource(i)==2),
      r=d+gngauss(sgma);         % if the source output is "10"
   else
      r=3*d+gngauss(sgma);       % if the source output is "11"
   end;
   % detector follows
   if (r<-2*d),
      decis=0;                   % decision is "00"
   elseif (r<0),
      decis=1;                   % decision is "01"
   elseif (r<2*d),
      decis=2;                   % decision is "10"
   else
      decis=3;                   % decision is "11"
   end;
   if (decis~=dsource(i)),       % if it is an error, increase the error counter
      numoferr=numoferr+1;
   end;
end;
p=numoferr/N;                    % probability of error estimate

5.3.5 Signal Waveforms with Multiple Amplitude Levels

It is relatively straightforward to construct multiamplitude signals with more than four levels. In general, a set of M = 2^k multiamplitude signal waveforms is represented as

   s_m(t) = A_m g(t),   m = 0, 1, ..., M-1,   0 ≤ t < T      (5.3.16)

where the M amplitude values are equally spaced and given as

   A_m = (2m - M + 1)d,   m = 0, 1, ..., M-1

and g(t) is the rectangular pulse that was defined in (5.3.2). Each signal waveform conveys k = log2 M bits of information. When the bit rate is R = 1/Tb, the corresponding symbol rate is 1/T = 1/(kTb). As in the case of four-level PAM, the optimum receiver consists of a signal correlator (or matched filter) followed by an amplitude detector that computes the Euclidean distances given by (5.3.10) for m = 0, 1, ..., M-1. For equally probable amplitude levels, the decision is made in favor of the amplitude level that corresponds to the smallest distance.

The probability of error for the optimum detector in an M-level PAM system is easily shown to be

   P_M = (2(M-1)/M) Q( sqrt( (6 log2 M / (M² - 1)) · E_avb/N0 ) )      (5.3.17)

where E_avb is the average energy for an information bit. Figure 5.24 illustrates the probability of a symbol error for M = 2, 4, 8, 16 as a function of 10 log10(E_avb/N0).

Figure 5.24: Symbol error probability for M-level PAM with M = 2, 4, 8, 16.

Illustrative Problem 5.9 Perform a Monte Carlo simulation of a 16-level PAM digital communication system and measure its error-rate performance.

SOLUTION

The basic block diagram shown in Figure 5.22 applies in general. A uniform random number generator is used to generate the sequence of information symbols, which are viewed as blocks of four information bits. The 16-ary symbols may be generated directly by subdividing the interval (0, 1) into 16 equal-width subintervals and mapping the 16-ary symbols into the 16-ary signal amplitudes. A white Gaussian noise sequence is added to the 16-ary information symbol sequence, and the resulting signal-plus-noise is fed to the detector. The detector computes the distance metrics given by (5.3.10) and selects the amplitude corresponding to the smallest metric. The output of the detector is compared with the transmitted information symbol sequence, and the errors are counted. Figure 5.25 illustrates the measured symbol error rate for 10,000 transmitted symbols and the theoretical symbol error rate given by (5.3.17) with M = 16. The MATLAB scripts for this problem are given below.

% MATLAB script for Illustrative Problem 9, Chapter 5.
echo on
SNRindB1=5:1:25;
SNRindB2=5:0.1:25;
M=16;
for i=1:length(SNRindB1),
   % simulated error rate
   smld_err_prb(i)=smldPe58(SNRindB1(i));
end;
for i=1:length(SNRindB2),
   SNR_per_bit=exp(SNRindB2(i)*log(10)/10);
   % theoretical error rate
   theo_err_prb(i)=(2*(M-1)/M)*Qfunct(sqrt((6*log2(M)/(M^2-1))*SNR_per_bit));
end;
% plotting commands follow
semilogy(SNRindB1,smld_err_prb,'*');
hold
semilogy(SNRindB2,theo_err_prb);

function [p]=smldPe58(snr_in_dB)
% [p] = smldPe58(snr_in_dB)
% SMLDPE58 simulates the error probability for the given
% snr_in_dB, signal-to-noise ratio in dB.
M=16;                            % 16-ary PAM
d=1;
SNR=exp(snr_in_dB*log(10)/10);   % signal-to-noise ratio per bit
sgma=sqrt((85*d^2)/(8*SNR));     % sigma, standard deviation of noise
N=10000;                         % number of symbols being simulated
% generation of the 16-ary data source
for i=1:N,
   temp=rand;                    % a uniform random variable over (0,1)
   index=floor(M*temp);          % the index is an integer from 0 to M-1, where
                                 % all the possible values are equally likely
   dsource(i)=index;
end;
% detection, and probability of error calculation
numoferr=0;
for i=1:N,
   % matched filter outputs
   % (2*dsource(i)-M+1)*d is the mapping to the 16-ary constellation
   r=(2*dsource(i)-M+1)*d+gngauss(sgma);
   % the detector
   if (r>(M-2)*d),
      decis=15;
   elseif (r>(M-4)*d),
      decis=14;
   elseif (r>(M-6)*d),
      decis=13;
   elseif (r>(M-8)*d),
      decis=12;
   elseif (r>(M-10)*d),
      decis=11;
   elseif (r>(M-12)*d),
      decis=10;
   elseif (r>(M-14)*d),
      decis=9;
   elseif (r>(M-16)*d),
      decis=8;
   elseif (r>(M-18)*d),
      decis=7;
   elseif (r>(M-20)*d),
      decis=6;
   elseif (r>(M-22)*d),
      decis=5;
   elseif (r>(M-24)*d),
      decis=4;
   elseif (r>(M-26)*d),
      decis=3;
   elseif (r>(M-28)*d),
      decis=2;
   elseif (r>(M-30)*d),
      decis=1;
   else
We have already observed that binary orthogonal signals are represented geometrically as points in two-dimensional space. As in our previous discussion, we assume that an information source provides a sequence of information bits, which are to be transmitted through a communication channel. The information bits occur at a uniform rate of R bits per second; the reciprocal of R is the bit interval, T_b. The modulator takes k bits at a time and maps them into one of M = 2^k signal waveforms. Each block of k bits is called a symbol, and we are able to transmit k = log2 M bits of information per signal waveform. The time interval available to transmit each symbol is T = k T_b. Hence, T is the symbol interval.

5.4.1 Multidimensional Orthogonal Signals

There are many ways to construct multidimensional signal waveforms with various properties. In this section, we consider the construction of a set of M = 2^k waveforms s_i(t), for i = 0, 1, ..., M - 1, which have the properties of (a) mutual orthogonality and (b) equal energy. These two properties may be succinctly expressed as

    integral from 0 to T of s_i(t) s_k(t) dt = E delta_ik,   i, k = 0, 1, ..., M - 1   (5.4.1)

where E is the energy of each signal waveform and

    delta_ik = 1 for i = k, and delta_ik = 0 for i != k   (5.4.2)

is called the Kronecker delta. Such a set of orthogonal waveforms can be represented as a set of M-dimensional orthogonal vectors:

    s_0     = (sqrt(E), 0, 0, ..., 0)
    s_1     = (0, sqrt(E), 0, ..., 0)
      ...
    s_(M-1) = (0, 0, ..., 0, sqrt(E))   (5.4.3)

Figure 5.27 illustrates the signal points (signal constellations) corresponding to M = 2 and M = 3 orthogonal signals.

The simplest way to construct a set of M = 2^k equal-energy orthogonal waveforms in the interval (0, T) is to subdivide the interval into M equal subintervals of duration T/M and to assign a rectangular signal pulse of amplitude A to each subinterval. Figure 5.26 illustrates such a construction for M = 4 signals. All signal waveforms constructed in this manner have identical energy, given as

    E = integral from 0 to T of s_i^2(t) dt = A^2 T / M,   i = 0, 1, ..., M - 1   (5.4.4)

Figure 5.26: An example of four orthogonal, equal-energy signal waveforms.

Figure 5.27: Signal constellation for M = 2 and M = 3 orthogonal signals.

Let us assume that these orthogonal signal waveforms are used to transmit information through an AWGN channel. That is, if the transmitted waveform is s_i(t), the received waveform is

    r(t) = s_i(t) + n(t),   0 <= t <= T   (5.4.5)

where n(t) is a sample function of a white Gaussian noise process with power spectrum N_0/2 watts/hertz. The receiver observes the signal r(t) and decides which of the M signal waveforms was transmitted.

Optimum Receiver for the AWGN Channel

The receiver that minimizes the probability of error first passes the signal r(t) through a parallel bank of M matched filters or M correlators, as shown in Figure 5.28. Since the signal correlators and matched filters yield the same output at the sampling instant, let us consider the case in which signal correlators are used.

Signal Correlators

The received signal r(t) is cross-correlated with each of the M signal waveforms and the correlator outputs are sampled at t = T. Thus, the M correlator outputs are

    r_i = integral from 0 to T of r(t) s_i(t) dt,   i = 0, 1, ..., M - 1   (5.4.6)

which may be represented in vector form as r = [r_0, r_1, ..., r_(M-1)]'.

Figure 5.28: Optimum receiver for multidimensional orthogonal signals.

Suppose that signal waveform s_0(t) is transmitted. Then, the output r_0 consists of a signal component and a noise component n_0; that is,

    r_0 = integral from 0 to T of s_0^2(t) dt + integral from 0 to T of n(t) s_0(t) dt = E + n_0   (5.4.7)

The other M - 1 outputs consist of noise only:

    r_i = integral from 0 to T of s_0(t) s_i(t) dt + integral from 0 to T of n(t) s_i(t) dt = n_i,   i = 1, 2, ..., M - 1   (5.4.8)

where

    n_i = integral from 0 to T of n(t) s_i(t) dt,   i = 0, 1, ..., M - 1

Each of the noise components is Gaussian, with mean zero and variance

    sigma^2 = E(n_i^2) = integral over (0,T)x(0,T) of s_i(t) s_i(tau) E[n(t)n(tau)] dt dtau = (N_0/2) integral from 0 to T of s_i^2(t) dt = E N_0 / 2   (5.4.9)

The reader is encouraged to show that E(n_i n_j) = 0 for i != j. Consequently, the probability density functions for the correlator outputs are

    p(r_0 | s_0(t) was transmitted) = (1/(sqrt(2 pi) sigma)) exp(-(r_0 - E)^2 / (2 sigma^2))

    p(r_i | s_0(t) was transmitted) = (1/(sqrt(2 pi) sigma)) exp(-r_i^2 / (2 sigma^2)),   i = 1, 2, ..., M - 1   (5.4.10)

The Detector

The optimum detector observes the M correlator outputs r_i, i = 0, 1, ..., M - 1, and selects the signal corresponding to the largest correlator output. In the case where s_0(t) was transmitted, the probability of a correct decision is simply the probability that r_0 > r_i for i = 1, 2, ..., M - 1; i.e.,

    P_c = P(r_0 > r_1, r_0 > r_2, ..., r_0 > r_(M-1))   (5.4.11)
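In discrete time, the correlator bank and the largest-output detector take only a few lines. The sketch below (hypothetical Python, not the book's code) builds M = 4 rectangular orthogonal waveforms of the type in Figure 5.26 as sampled vectors, forms the correlator outputs r_i as inner products, and picks the index of the largest output.

```python
import random

def make_orthogonal_signals(m=4, samples_per_slot=10, amplitude=1.0):
    """M rectangular waveforms, each occupying one of M time slots (cf. Figure 5.26)."""
    n = m * samples_per_slot
    signals = []
    for i in range(m):
        s = [0.0] * n
        for k in range(samples_per_slot):
            s[i * samples_per_slot + k] = amplitude
        signals.append(s)
    return signals

def correlator_detector(r, signals):
    """Return the index i maximizing the correlator output sum(r[n] * s_i[n])."""
    outputs = [sum(rn * sn for rn, sn in zip(r, s)) for s in signals]
    return max(range(len(signals)), key=lambda i: outputs[i])

# transmit s_2 through a simulated AWGN channel and detect it
rng = random.Random(0)
signals = make_orthogonal_signals()
received = [x + rng.gauss(0, 0.1) for x in signals[2]]
decision = correlator_detector(received, signals)
```

With the noise turned off the detector always recovers the transmitted index; with noise present it fails with exactly the probability analyzed in the text.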
The probability of a symbol error is simply

    P_M = 1 - P_c = 1 - P(r_0 > r_1, r_0 > r_2, ..., r_0 > r_(M-1))   (5.4.12)

The same expression for the probability of error is obtained when any one of the other M - 1 signals is transmitted. Since all the M signals are equally likely, the expression for P_M given by (5.4.12) is the average probability of a symbol error. P_M can be expressed in integral form as

    P_M = (1/sqrt(2 pi)) integral from -inf to inf of { 1 - [1 - Q(y)]^(M-1) } exp(-(y - sqrt(2E/N_0))^2 / 2) dy   (5.4.13)

where E = k E_b is the symbol energy and E_b is the energy per bit. This integral can be evaluated numerically. For the special case M = 2, the expression in (5.4.13) reduces to

    P_2 = Q( sqrt(E_b/N_0) )

which is the result we obtained in Section 5.2 for binary orthogonal signals.

Sometimes, it is desirable to convert the probability of a symbol error into an equivalent probability of a binary digit error. For equiprobable orthogonal signals, all symbol errors are equiprobable and occur with probability

    P_M / (M - 1) = P_M / (2^k - 1)   (5.4.14)

Furthermore, there are (k choose n) ways in which n bits out of k may be in error. Hence, the average number of bit errors per k-bit symbol is

    sum over n = 1 to k of n (k choose n) P_M / (2^k - 1) = k 2^(k-1) / (2^k - 1) * P_M   (5.4.15)

and the average bit-error probability is just the result in (5.4.15) divided by k, the number of bits per symbol. Thus

    P_b = (2^(k-1) / (2^k - 1)) P_M   (5.4.16)

The graphs of the probability of a binary digit error as a function of the SNR per bit, E_b/N_0, are shown in Figure 5.29 for M = 2, 4, 8, 16, 32, 64. This figure illustrates that by increasing the number M of waveforms, one can reduce the SNR per bit required to achieve a given probability of a bit error.

The MATLAB script for the computation of the error probability in (5.4.13) is given below.

% MATLAB script that generates the probability of error versus the signal-to-noise ratio.
initial_snr=0;
final_snr=15;
snr_step=1;
tolerance=1e-7;          % tolerance used for the integration
minus_inf=-20;           % this is practically -infinity
plus_inf=20;             % this is practically infinity
snr_in_dB=initial_snr:snr_step:final_snr;
for i=1:length(snr_in_dB),
  snr=10^(snr_in_dB(i)/10);
  Pe_2(i)=Qfunct(sqrt(snr));
  Pe_4(i)=(2/3)*quad8('bdt_int',minus_inf,plus_inf,tolerance,[],snr,4);
  Pe_8(i)=(4/7)*quad8('bdt_int',minus_inf,plus_inf,tolerance,[],snr,8);
  Pe_16(i)=(8/15)*quad8('bdt_int',minus_inf,plus_inf,tolerance,[],snr,16);
  Pe_32(i)=(16/31)*quad8('bdt_int',minus_inf,plus_inf,tolerance,[],snr,32);
  Pe_64(i)=(32/63)*quad8('bdt_int',minus_inf,plus_inf,tolerance,[],snr,64);
end;
% plotting commands follow

Figure 5.29: Bit-error probability for orthogonal signals.

Illustrative Problem 5.10
Perform a Monte Carlo simulation of a digital communication system that employs M = 4 orthogonal signals. The model of the system to be simulated is illustrated in Figure 5.30.

Figure 5.30: Block diagram of the system with M = 4 orthogonal signals for the Monte Carlo simulation.

We may first generate a binary sequence of 0's and 1's that occur with equal probability and are mutually statistically independent, as in Illustrative Problem 5.8. The binary sequence is grouped into pairs of bits, which are mapped into the corresponding signal components. An alternative to generating the individual bits is to generate the pairs of bits directly. In either case, we simulate the generation of the random variables r_0, r_1, r_2, r_3, which constitute the input to the detector. Thus, we have the mapping of the four symbols into the signal points:
% MATLAB scrip! jar Illustrative Problem echo on SNRindB=0:2:10. % a uniform random variable over (0. 0 ) 01 — si = (0.31 illustrates the results of this simulation for the transmission of 20. % quarternary orthogonal signalling E=1.5). Figure 5. each having a mean zero and variance c — N o £ J2. 0) 1 I —v — ( 0 .000 bits at several different values of the SNR £ h j N o . for i=1 :length(SNRtndB).eir. 0 . dsource2(i)=1.prb(i)=smldPe59{SNRindB(i)). signal-to-noise ratio in dB. .4. 0. elseif ((dsourcel(i)==1) & (dsource2(i)==0)). rO=gngauss(sgma). rl=gngauss(sgma). decis 1 =0. rl=gngauss{sgma). r3=gngauss(sgma). end. decis2=0. for i=1:N. if (decis2"=dsource2(i)>. increase the error numoferr=numofen+1. else decis 1=1. elseif ((dsourcel(i)==0) & {dsource2(i)==1)). end. r3=sqrt(E)+g n gauss (s g ma). if (r0==max_r). and probability of error calculation numofeiT=0. rf}=sqrt(E)+gngauss(sgma). dsource2(i)=1. r2=gngauss(sgma). % if it is an error. end. Multidimensional Sign3!s 209 dsourcel(i)=1. % bit error probability estimate counter counter . decis2=Q. end.5. decis 1 = 1. eiseif (r2==max. % count the number of bit errors made in this decision if ( d e c i s r = d s o u r c e l ( i ) ) . end end: % detection. increase the error numoferr=numofefT+1. decis2=1. r3=gngauss(sgma). else dsourcel<i)=1.r). rQ=gngauss(sgma). r2=gngauss(sgma). decis 1=0. rl=gngauss(sgma). p-numoferr/(2*N). r2=sqrt(E)+gngauss(sgma). % matched filter outputs if ((dsourcel(i)==0) & (dsource2(i)==0)). decis2=1. else rO=gngauss(sgma). 9c if it is an error. r ) . % the detector max_r=max([rO rl r2 r3]). r3=gngauss(sgma). elseif ( r l = = m a x . dsource2(i)=0. end.4. r2=gngauss(sgma). rl=sqrt(E)+gngauss{sgtna). and assigning a rectangular pulse to each subintervai. The geometric representation of a set of M signals constructed in this manner is given by the following (M/2)-dimensional signal points: . each having M/2 dimensions. for i = 0. 1 M/2 — 1. S[(t) SM/2-](t) are orthogonal signal waveforms. 
In such a signal set.2 Biorthogonal Signals As we observed in the preceding section. 5. Figure 5. A similar method can be used to construct another set of M = 2* multidimensional signals that have the property of being biorthogonal.31: Bit error probability for M = 4 orthogonal signals from a Monte Carlo simulation compared with theoretical error probability. sq{î). a set of M — 2* equal-energy orthogonal waveforms can be constructed by subdividing the symbol interval T into M equal subintervals of duration T / M and assigning a rectangular signal pulse to each subintervai.210 CHAPTER 5. Thus we obtain M signals.4. The M/2 orthogonal waveforms can be easily constructed by subdividing the symbol interval T — kTb into M/2 nonoverlapping subintervals. BASEBAND DIGITAL TRANSMISSION 10 l o g l o [Zh/No) Figure 5. each of duration 2T/M. T h e other M/2 waveforms are simply s i + M ß ( t ) = — J r ( 0 .32 illustrates a set of M =4 biorthogonal waveforms constructed in this manner. one-half the waveforms are orthogonal and the other half are the negative of these orthogonal waveforms. That is. O p t i m u m Receiver The optimum receiver may be implemented by cross-correlating the received signal r(t) with each of the M/2 orthogonal signal waveforms. 0 ) si = (0. 0 0) sM/i-\ sm/2 = (0. Thus.. and passing the M/2 correlator outputs to the detector.18) •sn(') Ji(') Ji(0 Si«) 0 -A Figure 5..4. let us assume that the biorthogonal signal waveforms are used to transmit information through an AWGN channel.. sampling the correlator outputs at t — T.5. V S . . 0) = ( 0 ..1.. 0 < r < r (5. .4. the received signal waveform may be expressed as r(t) = J i ( i ) + « ( 0 . 0 . we have -r r(t)sj(t)dt.0. . .4. . Then..4.> / £ ) (5. 0 . 0 . Multidimensional Sign3!s 211 i0 = ( V 2 f ..32: A set of M = 4 biorthogonal signal waveforms. . M 2 1 (5. . . . v ^ ) = 0..20) .. . A s in the case of orthogonal signals. . 0 . 
.0.19) where Sj(t) is the transmitted waveform and n{t) is a sample function of a white Gaussian noise process with power spectrum Nq/2 watts/hertz. i = 0. \ is largest. . To determine the probability of error. The Detector The detector observes the M/2 correlator outputs ( r.4.4. Thus. Suppose [r 11 = max {)/•.23) Then.33. the probability of a symbol error is simply PM = 1 (5. 8.2 f .4.26) (5.4.This is due to the fact that w e have plotted the symbol-error probability P. Then.1.33 illustrates P M as a function of the signal-to-noise ratio £H/No. 0 < i < M/2 — 1 } and selects the correlator output whose magnitude \r.u in Figure 5. The noise components are zero-mean Gaussian and have a variance of A~ ~ £ NqJ2.1 9 (5. |) (5. may be evaluated numerically for different values of M f r o m (5. However..25) and Fv. — .24) and (5. the probability of a correct decision is equal to the probability that r() = £ + tiQ > 0 and irol > |r. and 32.T a Finally. Jo v/27r / ru/JFf^n -*7'2dx p(ro)dro (5. We observe that this graph is similar to that for orthogonal signals. . 16. (i) dt.24) J-rv/JTR^Jl where p(ro ) = v2.212 CHAPTER 5. where £ = K£H for M = 2. suppose that i'o(0 was transmitted.4.| f o r i = 1. i - f s JO £ I = 0.21) 0 = 0 n.25).4. The graph shown in Figure 5. w e would find that the graphs . for biorthogonal signals we note that Pi > /S.4. + n . i = Jo M 0. 1.4. 1 M 2 1 (5. If we plot the equivalent bit-error probability.4. . .22) and £ is the symbol energy for each signal waveform. ( r ) if ry < 0. the detector selects the signal i / ( r ) if r^ > 0 and — j .. where f m = f i + 0 r n(t)Sj ( t ) d t . BASEBAND DIGITAL TRANSMISSION Suppose that the transmitted signai waveform is so{t). Then. . q u a d 8 ( ' b d c _ i n t 2 ' .O. for i=1 :length(snrJn. P e _ 3 2 ( i ) = 1 .32). final. .p]usJnf.2).snr. snr=10 * (snr-in-dB(i)/10).snr.2(i)=1 —quad8(' b d e _ m t 2 '.4).snr. Multidimensional Sign3!s 213 101og 10 (£fc/tfo) Figure 5. Pe.snr. tolerance. Pe. 
% MATLAB script that generates the probability of error versus the signal-to-noise initial_snr=0. end.snr=12. % plotting commands follow ratio.q u a d 8 ( ' b d t _ i n t 2 ' .fi).75.tolerance.plus_inf.! 1. snr_step=0.16).plusJnf.q u a d 8 ( ' b d t _ i n t 2 ' . % this is practically infinity snr-in_dB=initial_snr:snr-Step:finaLsnr.24) and (5.5.[].plus_inf.4.0. for M = 2 and A/ = 4 coincide.snr.8(i)=1 . % tolerance used for the integration pi u s .0.[]. tolerance=eps.O.(].tolerance.tolerance. Pe-l6{i)=1-quad8('bdt_int2'.dB). The M A T L A B script for the computation of the error probability in (5.0.4.33: Symbol-error probability for biorthogonal signals.26) is given below.[ ].tolerance. plus J nf.4. Pe_4{i)=1 . in f = 2 0 . 8 to generate the 2-bit —so.4.34.34: Block diagram of the system with M ~ 4 biorthogonal signals for M o n t e Carlo simulation. with j. 52 = { 0 . The binary sequence is grouped into pairs of bits. The additive noise components «o and n j Gaussian noise generators. we simulate the generation of the random variables ro and r | . the demodulation requires only two correlators or are ro and r j .y / Z ) 11 — I 3 = ( .< / £ . . as in Illustrative Problem 5. which constitute the input to the detector. BASEBAND DIGITAL TRANSMISSION Illustrative P r o b l e m 5 .214 CHAPTER 5. We begin by generating a binary sequence of O's and l ' s that occur with equal probability and are mutually statistically independent. we may use the symbols directly. Compare s./ W ) 1 0 .» . As shown. 0) Alternatively. The model of the system to be simulated is illustrated in Figure 5. whose outputs are generated by means of two method in Illustrative Problem 5. 0) 01 —• si = (0. Since S2 = . Figure 5. Î 1 Perform a Monte Carlo simulation for a digital communication system that employs M = 4 biorthogonal signals. each having a mean zero and .s i and S3 = matched filters. which are mapped into the corresponding signal components as follows: 0 0 -»• s 0 = (V5T. . 
F i g u r e 5 . % The matched filter outputs if ( d s o u r c e ( i ) = 0 ) r0=sqrt(E)+gn gauss(s g ma). for i=1 :length(SNRindB). Multidimensional Sign3!s 215 v a r i a n c e A2 and vary = Nq£/2. temp=rand. 0 0 0 b i t s at several d i f f e r e n t v a l u e s of the S N R £ ) . % detection. % signal to noise ratio per bit sgma=sqrt(E~ 2/(4»SNR)). Chapter 5M=4. dsource{i)=0: elseif (temp <0.1) if (temp<0. N o t e the a g r e e m e n t b e t w e e n the s i m u l a t i o n r e s u l t s a n d t h e t h e o r e t i c a l v a l u e of Pa g i v e n b y ( 5 . elseif (temp<0.5.25). 2 4 ) . dsource(i)=2. % simulated error rate smJd. 3 5 i l l u s t r a t e s t h e r e s u l t s of t h i s s i m u l a t i o n f o r t h e t r a n s m i s s i o n o f 2 0 . % uniform random variable over (0. % MATLAB script fur Illustrative Problem !!.4. a n d a n e r r o r c o u n t e r is u s e d t o c o u n t t h e n u m b e r o f s y m b o l e r r o r s a n d t h e n u m b e r of bit e r r o r s . T h e M A T L A B s c r i p t s f o r this p r o b l e m are given below. end end. Chapter 5. 2 6 ) a n d ( 5 . end. % sigrrui. echo on SNRmdB=0:2. 4 . and error probability computation numoferr=0.10. for the system % described in illustrated problem 10. Since £ For convenience.75). % quaternary orthogonal signalling E=1.prb(i)=smldP5lO(SNRmdB(i)). / N q . signal-to-noise ratio in dB.dB«log(10)/l0). SNR=exp(snrJn. 4 . elseif (dsource(i)==1) .5). for i=1 :N. dsource(i)=1. it f o l l o w s t h a t £ b ~ = 1 T h e d e t e c t o r o u t p u t is c o m p a r e d w i t h t h e t r a n s m i t t e d s e q u e n c e o f bits.err. standard deviation of notse N=10000. r 1 =gngauss(sgma). w e may normalize the s y m b o l energy to £ — 2 £ h . % plotting commands follow function ip!=smldP5IO(snrjn_dB) % Ipl = smldP5I0{snrJnJB) % SMLDP510 simulates the probability of error for the given % snrJn^lB. % number of symbols being simulated % generation of the quaternary data source for i=1:N. else dsource(i)=3. 
    r0=gngauss(sgma);
    r1=sqrt(E)+gngauss(sgma);
  elseif (dsource(i)==2)
    r0=-sqrt(E)+gngauss(sgma);
    r1=gngauss(sgma);
  else
    r0=gngauss(sgma);
    r1=-sqrt(E)+gngauss(sgma);
  end;
  % detector follows
  if (r0>abs(r1)),
    decis=0;
  elseif (r1>abs(r0)),
    decis=1;
  elseif (r0<-abs(r1)),
    decis=2;
  else
    decis=3;
  end;
  % if it is an error, increase the error counter
  if (decis~=dsource(i)),
    numoferr=numoferr+1;
  end;
end;
p=numoferr/N;            % probability of error estimate

Figure 5.35: Symbol-error probability for M = 4 biorthogonal signals from the Monte Carlo simulation compared with the theoretical error probability.

Problems

5.1 Suppose the two orthogonal signals shown in Figure 5.2 are used to transmit binary information through an AWGN channel. The received signal in each bit interval of duration T_b is given by (5.2.1). Suppose that the received signal waveform is sampled at a rate of 10/T_b, i.e., at 10 samples per bit interval. Hence, in discrete time, the signal waveform s_0(t) with amplitude A is represented by the 10 samples (A, A, ..., A) and the signal waveform s_1(t) is represented by the 10 samples (A, A, A, A, A, -A, -A, -A, -A, -A). Consequently, the sampled version of the received sequence when s_0(t) is transmitted is

    r_k = A + n_k,   k = 1, 2, ..., 10

and when s_1(t) is transmitted is

    r_k = A + n_k,   1 <= k <= 5
    r_k = -A + n_k,  6 <= k <= 10

where the sequence {n_k} is i.i.d. zero-mean Gaussian, with each random variable having variance sigma^2. Write a MATLAB routine that generates the sequence {r_k} for each of the two possible received signals and perform a discrete-time correlation of the sequence {r_k} with each of the two possible signals s_0(t) and s_1(t), represented by their sampled versions, for different values of the additive Gaussian noise variance sigma^2 = 0.1, sigma^2 = 1.0, and sigma^2 = 2.0. The signal amplitude may be normalized to A = 1. Plot the correlator outputs at time instants k = 1, 2, ..., 10.

5.2 Repeat Problem 5.1 for the two signal waveforms s_0(t) and s_1(t) illustrated in Figure P5.2. Describe the similarities and differences between this set of two signals and those illustrated in Figure 5.2. Is one set better than the other from the viewpoint of transmitting a sequence of binary information signals?

Figure P5.2
5.3 In this problem, the objective is to substitute two matched filters in place of the two correlators in Problem 5.1. Write a MATLAB routine that generates the sequence {r_k} for each of the two possible received signals and perform the discrete-time matched filtering of the sequence {r_k} with each of the two possible signals s_0(t) and s_1(t), represented by their sampled versions, for different values of the additive Gaussian noise variance sigma^2 = 0.1, sigma^2 = 1.0, and sigma^2 = 2.0. The condition for generating the signals is identical to Problem 5.1. Plot the matched filter outputs at time instants corresponding to k = 1, 2, ..., 10.

5.4 Repeat Problem 5.3 for the signal waveforms shown in Figure P5.2.

5.5 Run the MATLAB program that performs a Monte Carlo simulation of the binary communication system shown in Figure 5.13, based on antipodal signals. Perform the simulation for 10,000 bits and measure the error probability for sigma^2 = 0.1, sigma^2 = 1.0, and sigma^2 = 2.0. Plot the theoretical error probability and the error rate measured from the Monte Carlo simulation and compare these results. Also plot 1000 received signal-plus-noise samples at the input to the detector for each value of sigma^2.

5.6 Repeat Problem 5.5 for the binary communication system based on on-off signals.

5.7 Repeat Problem 5.5 for the binary communication system based on orthogonal signals.

5.8 Run the MATLAB program that performs a Monte Carlo simulation of a quaternary PAM communication system, as described in Illustrative Problem 5.8. Perform the simulation for 10,000 symbols (20,000 bits) and measure the symbol-error probability for sigma^2 = 0.1, sigma^2 = 1.0, and sigma^2 = 2.0. Plot the theoretical error rate and the error rate measured from the Monte Carlo simulation and compare these results.

5.9 Modify the MATLAB program in Problem 5.8 to simulate M = 8 PAM signals and perform the Monte Carlo simulations as specified in Problem 5.8.

5.10 Run the MATLAB program that performs the Monte Carlo simulation of a digital communication system that employs M = 4 orthogonal signals, as described in Illustrative Problem 5.10. Perform the simulation for 10,000 symbols (20,000 bits) and measure the symbol-error probability for sigma^2 = 0.1, sigma^2 = 1.0, and sigma^2 = 2.0. Plot the theoretical error rate and the error rate measured from the Monte Carlo simulation and compare the two results.

5.11 Consider the four signal waveforms shown in Figure P5.11. Show that these four signal waveforms are mutually orthogonal. Will the results of the Monte Carlo simulation of Problem 5.10 apply to these signals? Why?

Figure P5.11

5.12 Run the MATLAB program that performs the Monte Carlo simulation of a digital communication system that employs M = 4 biorthogonal signals, as described in Illustrative Problem 5.11. Perform the simulations for 10,000 symbols (20,000 bits) and measure the symbol-error probability for sigma^2 = 0.1, sigma^2 = 1.0, and sigma^2 = 2.0. Plot the theoretical symbol-error probability and the error rate measured from the Monte Carlo simulation, and compare the results. Also plot 1000 received signal-plus-noise samples at the input to the detector for each value of sigma^2.

5.13 Consider the four signal waveforms shown in Figure P5.13. Show that they are biorthogonal. Will the results of the Monte Carlo simulation in Problem 5.12 apply to this set of four signal waveforms? Why?

Figure P5.13

Chapter 6

Digital Transmission Through Bandlimited Channels

6.1 Preview

In this chapter we treat several aspects of digital transmission through bandwidth-limited channels. We begin by describing the spectral characteristics of PAM signals. Secondly, we consider the characterization of bandlimited channels and the problem of designing signal waveforms for such channels. Then, we treat the problem of designing channel equalizers that compensate for distortion caused by bandlimited channels. We show that channel distortion results in intersymbol interference (ISI), which causes errors in signal demodulation. A channel equalizer is a device that reduces the intersymbol interference and thus reduces the error rate in the demodulated data sequence.

6.2 The Power Spectrum of a Digital PAM Signal

In the preceding chapter we considered the transmission of digital information by pulse amplitude modulation (PAM). In this section, we study the spectral characteristics of such signals.

A digital PAM signal at the input to a communication channel is generally represented as

    v(t) = sum over n from -inf to inf of a_n g(t - nT)   (6.2.1)

where {a_n} is the sequence of amplitudes corresponding to the information symbols from the source, g(t) is a pulse waveform, and T is the reciprocal of the symbol rate. T is also called the symbol interval. Each element of the sequence {a_n} is selected from one of the possible amplitude values, which are

    A_m = (2m - M + 1)d,   m = 0, 1, ..., M - 1   (6.2.2)

where d is a scale factor that determines the Euclidean distance between any pair of signal amplitudes (2d is the Euclidean distance between any adjacent signal amplitude levels).

Since the information sequence is a random sequence, the sequence {a_n} of amplitudes corresponding to the information symbols from the source is also random. Consequently, the PAM signal v(t) is a sample function of a random process V(t). To determine the spectral characteristics of the random process V(t), we must evaluate the power spectrum. First, we note that the mean value of V(t) is

    E[V(t)] = sum over n of E(a_n) g(t - nT)   (6.2.3)

By selecting the signal amplitudes to be symmetric about zero, as given in (6.2.2), and equally probable, E(a_n) = 0, and hence E[V(t)] = 0. The autocorrelation function of V(t) is

    R_V(t + tau; t) = E[V(t) V(t + tau)]   (6.2.4)

It is shown in many standard texts on digital communications that the autocorrelation function is a periodic function in the variable t with period T. Random processes that have a periodic mean value and a periodic autocorrelation function are called periodically stationary, or cyclostationary. The time variable t can be eliminated by averaging R_V(t + tau; t) over a single period, i.e.,

    Rbar_V(tau) = (1/T) integral from -T/2 to T/2 of R_V(t + tau; t) dt   (6.2.5)

This average autocorrelation function for the PAM signal can be expressed as

    Rbar_V(tau) = (1/T) sum over m from -inf to inf of R_a(m) R_g(tau - mT)   (6.2.6)

where R_a(m) = E(a_n a_(n+m)) is the autocorrelation of the sequence {a_n} and R_g(tau) is defined as

    R_g(tau) = integral from -inf to inf of g(t) g(t + tau) dt   (6.2.7)

The power spectrum of V(t) is simply the Fourier transform of the average autocorrelation function Rbar_V(tau), i.e.,

    S_V(f) = integral from -inf to inf of Rbar_V(tau) exp(-j 2 pi f tau) dtau = (1/T) S_a(f) |G(f)|^2   (6.2.8)

where S_a(f) is the power spectrum of the amplitude sequence {a_n} and G(f) is the Fourier transform of the pulse g(t).
S_a(f) is defined as

    S_a(f) = sum over m from -inf to inf of R_a(m) exp(-j 2 pi f m T)   (6.2.9)

From (6.2.8) we observe that the power spectrum of the PAM signal is a function of the power spectrum of the information symbols {a_n} and the spectrum of the pulse g(t). In the special case where the sequence {a_n} is uncorrelated, i.e.,

    R_a(m) = sigma_a^2 for m = 0, and R_a(m) = 0 for m != 0   (6.2.10)

where sigma_a^2 = E(a_n^2), it follows that S_a(f) = sigma_a^2 for all f and

    S_v(f) = (sigma_a^2 / T) |G(f)|^2   (6.2.11)

In this case, the power spectrum of V(t) is dependent entirely on the spectral characteristics of the pulse g(t).

Illustrative Problem 6.1
Determine the power spectrum of V(t) when {a_n} is an uncorrelated sequence and g(t) is the rectangular pulse shown in Figure 6.1.

Figure 6.1: Transmitter pulse.

The Fourier transform of g(t) is

    G(f) = integral from -inf to inf of g(t) exp(-j 2 pi f t) dt = T (sin(pi f T) / (pi f T)) exp(-j pi f T)

and hence

    S_v(f) = sigma_a^2 T (sin(pi f T) / (pi f T))^2

This power spectrum is illustrated in Figure 6.2.

Figure 6.2: Power spectrum of the transmitted signal in Illustrative Problem 6.1 (for sigma_a^2 = 1).

The MATLAB script for this computation is given below.

% MATLAB script for Illustrative Problem 1, Chapter 6.
echo on
T=1;
delta_f=1/(100*T);
f=-5/T:delta_f:5/T;
sgma_a=1;
Sv=sgma_a^2*sinc(f*T).^2;
% plotting command follows
plot(f,Sv);

Illustrative Problem 6.2
Suppose the autocorrelation function of the sequence {a_n} is

    R_a(m) = 1 for m = 0;  1/2 for m = 1, -1;  0 otherwise   (6.2.14)

and g(t) is the rectangular pulse shown in Figure 6.1. Evaluate S_v(f) in this case.

The power spectrum of the PAM signal V(t) is given by (6.2.8). The power spectrum of the sequence {a_n} is, from (6.2.9) and (6.2.14),

    S_a(f) = 1 + cos(2 pi f T) = 2 cos^2(pi f T)   (6.2.15)

Consequently,

    S_v(f) = 2 T cos^2(pi f T) (sin(pi f T) / (pi f T))^2   (6.2.16)

The graph of this power spectrum is shown in Figure 6.3. In this case, the overall power spectrum of the transmitted signal V(t) is significantly narrower than the spectrum in Figure 6.2. The MATLAB script for performing this computation is given below.

% MATLAB script for Illustrative Problem 2, Chapter 6.
echo on
T=1;
delta_f=1/(100*T);
f=-5/T:delta_f:5/T;
Sv=2*(cos(pi*f*T).*sinc(f*T)).^2;
% plotting command follows
plot(f,Sv);

Figure 6.3: The power spectrum of the transmitted signal in Illustrative Problem 6.2 (for sigma_a^2 = 1).

6.3 Characterization of Bandlimited Channels and Channel Distortion

Many communication channels, including telephone channels and some radio channels, may be generally characterized as bandlimited linear filters. Consequently, such channels are described by their frequency response C(f), which may be expressed as

    C(f) = A(f) exp(j theta(f))   (6.3.1)

where A(f) is called the amplitude response and theta(f) is called the phase response. Another characteristic that is sometimes used in place of the phase response is the envelope delay, or group delay, which is defined as

    tau(f) = -(1/(2 pi)) d theta(f) / df   (6.3.2)

A channel is said to be nondistorting, or ideal, if, within the bandwidth W occupied by the transmitted signal, A(f) = constant and theta(f) is a linear function of frequency (i.e., the envelope delay tau(f) = constant). On the other hand, if A(f) and tau(f) are not constant within the bandwidth occupied by the transmitted signal, the channel distorts the signal. If A(f) is not constant, the distortion is called amplitude distortion, and if tau(f) is not constant, the distortion on the transmitted signal is called delay distortion.
As a result of the amplitude and delay distortion caused by the nonideal channel frequency response characteristic C(f), a succession of pulses transmitted through the channel at rates comparable to the bandwidth W are smeared to the point that they are no longer distinguishable as well-defined pulses at the receiving terminal. Instead, they overlap; thus we have intersymbol interference.

As an example of the effect of delay distortion on a transmitted pulse, Figure 6.4(a) illustrates a bandlimited pulse having zeros periodically spaced in time at points labeled ±T, ±2T, etc. When the information is conveyed by the pulse amplitude, as in PAM, one can transmit a sequence of pulses, each of which has a peak at the periodic zeros of the other pulses. However, transmission of the pulse through a channel modeled as having a linear envelope delay characteristic τ(f) [quadratic phase θ(f)] results in the received pulse shown in Figure 6.4(b), whose zero crossings are no longer periodically spaced. Consequently, a sequence of successive pulses would be smeared into one another, and the peaks of the pulses would no longer be distinguishable. Thus the channel delay distortion results in intersymbol interference. As will be discussed in this chapter, it is possible to compensate for the nonideal frequency response characteristic of the channel by use of a filter or equalizer at the demodulator. Figure 6.4(c) illustrates the output of a linear equalizer that compensates for the linear distortion in the channel.

As an example, let us consider the intersymbol interference on a telephone channel. Figure 6.5 illustrates the measured average amplitude and delay as a function of frequency for a telephone channel of the switched telecommunications network. We observe that the usable band of the channel extends from about 300 Hz to about 3200 Hz. The corresponding impulse response of the average channel is shown in Figure 6.6; its duration is about 10 ms. In comparison, the transmitted symbol rates on such a channel may be of the order of 2500 pulses or symbols per second. Hence, intersymbol interference might extend over 20 to 30 symbols.

Besides telephone channels, there are other physical channels that exhibit some form of time dispersion and thus introduce intersymbol interference. Radio channels, such as shortwave ionospheric propagation (HF), tropospheric scatter, and mobile cellular radio, are three examples of time-dispersive wireless channels. In these channels, time dispersion, and hence intersymbol interference, is the result of multiple propagation paths with different path delays. The number of paths and the relative time delays among the paths vary with time; for this reason, these radio channels are usually called time-variant multipath channels. The time-variant multipath conditions give rise to a wide variety of frequency response characteristics. Consequently, the frequency response characterization that is used for telephone channels is inappropriate for time-variant multipath channels. Instead, these radio channels are characterized statistically in terms of the scattering function, which, in brief, is a two-dimensional representation of the average received signal power as a function of relative time delay and Doppler frequency.

For illustrative purposes, a scattering function measured on a medium-range (150-mi) tropospheric scatter channel is shown in Figure 6.7. The total time duration (multipath spread) of the channel response is approximately 0.7 μs on the average, and the spread between half-power points in Doppler frequency is a little less than 1 Hz on the strongest path and somewhat larger on the other paths. Typically, if transmission occurs at a rate of 10^7 symbols/second over such a channel, the multipath spread of 0.7 μs will result in intersymbol interference that spans about seven symbols.

Figure 6.4: Effect of channel distortion: (a) channel input; (b) channel output; (c) equalizer output.
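The rough ISI-span figures quoted above follow from a one-line calculation: the number of symbols affected is approximately the channel dispersion multiplied by the symbol rate. A quick sketch, using the values from the text:

```python
# The number of symbols over which ISI extends is roughly the channel
# dispersion divided by the symbol duration (i.e., dispersion * symbol rate).
def isi_span(dispersion_seconds, symbol_rate):
    """Approximate number of symbols smeared together by the channel."""
    return dispersion_seconds * symbol_rate

telephone = isi_span(10e-3, 2500)  # 10 ms impulse response at 2500 symbols/s
tropo = isi_span(0.7e-6, 1e7)      # 0.7 us multipath spread at 10^7 symbols/s
```

The telephone case gives about 25 symbols, consistent with the "20 to 30" range in the text, and the tropospheric scatter case gives about 7 symbols.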
Figure 6.5: Average amplitude and delay characteristics of a medium-range telephone channel.

Figure 6.6: Impulse response of the average channel with the amplitude and delay characteristics shown in Figure 6.5.

Figure 6.7: Scattering function of a medium-range tropospheric scatter channel.

Illustrative Problem 6.3  As indicated above, a bandlimited communication channel can be modeled as a linear filter whose frequency response characteristics match the frequency response characteristics of the channel. MATLAB may be used to design digital FIR or IIR filters that approximate the frequency response characteristics of analog communication channels. To be specific, suppose that we wish to model an ideal channel having an amplitude response A(f) = 1 for |f| < 2000 Hz and A(f) = 0 for |f| > 2000 Hz and constant delay (linear phase) for all f.

SOLUTION

Since the desired phase response is linear, only an FIR filter can satisfy this condition. However, it is not possible to achieve a zero response in the stopband. Instead, we select the stopband response to be -40 dB and the stopband frequency to be 2500 Hz. In addition, we allow for a small amount, 0.5 dB, of ripple in the passband. The sampling rate for the digital filter is selected as Fs = 10,000 Hz. The impulse response and the frequency response of a length N = 41 FIR filter that meets these specifications are illustrated in Figure 6.8. In this example, the FIR filter was designed in MATLAB using the Chebyshev approximation method (Remez algorithm). Since N is odd, the delay through the filter is (N-1)/2 taps, which corresponds to a time delay of 2 ms at the sampling rate of Fs = 10 kHz.

% MATLAB script for Illustrative Problem 3, Chapter 6.
echo on
f_cutoff=2000;       % the desired cutoff frequency
f_stopband=2500;     % the actual stopband frequency
fs=10000;            % the sampling frequency
f1=2*f_cutoff/fs;    % the normalized passband frequency
f2=2*f_stopband/fs;  % the normalized stopband frequency
N=41;                % this number is found by experiment
F=[0 f1 f2 1];       % describes the lowpass filter
M=[1 1 0 0];
B=remez(N-1,F,M);    % returns the FIR tap coefficients
figure(1);
[H,W]=freqz(B,1);
H_in_dB=20*log10(abs(H));
plot(W/(2*pi),H_in_dB);
figure(2);
plot(W/(2*pi),(180/pi)*unwrap(angle(H)));
% plot of the impulse response follows
figure(3);
stem([0:N-1],B);

Figure 6.8: (a) Impulse response; (b), (c) frequency response of the linear phase FIR filter in Illustrative Problem 6.3.

Illustrative Problem 6.4  An alternative method for designing an FIR filter that approximates the desired channel characteristics is based on the window method. If the desired channel response is C(f) for |f| ≤ W and C(f) = 0 for |f| > W, the impulse response of the channel is

   h(t) = integral from -W to W of C(f) e^{j2πft} df    (6.3.3)

For example, if the channel is ideal, then C(f) = 1 for |f| ≤ W, and hence

   h(t) = sin(2πWt) / (πt)    (6.3.4)

An equivalent digital filter may be implemented by sampling h(t) at t = nTs, where Ts is the sampling interval and n = 0, ±1, ±2, ...; hence, h_n = h(nTs). Since {h_n} has infinite length, we may truncate it at some length N. This truncation is equivalent to multiplying {h_n} by a rectangular window sequence w_n = 1 for |n| ≤ (N-1)/2 and w_n = 0 for |n| > (N-1)/2. Let us design such an FIR filter with W = 2000 Hz and Fs = 1/Ts = 10 kHz.

SOLUTION

The impulse response {ĥ_n = w_n h_n} and the corresponding frequency response of the FIR (truncated) filter are illustrated in Figures 6.9 and 6.10 for N = 51. Note that the truncated filter has large sidelobes in the stopband; hence, this FIR filter is a poor approximation to the desired channel characteristics. The size of the sidelobes can be significantly reduced by employing a smoother window function, such as a Hanning or a Hamming window, to truncate the ideal channel response. Figure 6.11 illustrates the impulse response and frequency response of {ĥ_n = w_n h_n} when the window function is a Hanning window of length N = 51. MATLAB provides routines to implement several different types of window functions.

% MATLAB script for Illustrative Problem 4, Chapter 6.
echo on
Length=101;
Fs=10000;            % the sampling frequency
W=2000;              % the channel bandwidth
Ts=1/Fs;             % the sampling interval
n=-(Length-1)/2:(Length-1)/2;
t=Ts*n;
h=2*W*sinc(2*W*t);   % sampled impulse response of the ideal channel
N=51;
% The rectangular windowed version follows.
rec_windowed_h=h((Length-N)/2+1:(Length+N)/2);
% Frequency response of rec_windowed_h follows.
[rec_windowed_H,W1]=freqz(rec_windowed_h,1);
% to normalize the magnitude
rec_windowed_H_in_dB=20*log10(abs(rec_windowed_H)/abs(rec_windowed_H(1)));
% The Hanning windowed version follows.
hanning_window=hanning(N);
hanning_windowed_h=h((Length-N)/2+1:(Length+N)/2).*hanning_window.';
[hanning_windowed_H,W2]=freqz(hanning_windowed_h,1);
hanning_windowed_H_in_dB=20*log10(abs(hanning_windowed_H)/abs(hanning_windowed_H(1)));
% the plotting commands follow

Figure 6.9: Samples of h(n) in Illustrative Problem 6.4.

Illustrative Problem 6.5  A two-path (multipath) radio channel can be modeled in the time domain, as illustrated in Figure 6.12. Its impulse response may be expressed as

   c(τ; t) = b1(t)δ(τ) + b2(t)δ(τ - τd)    (6.3.5)

where b1(t) and b2(t) are random processes that represent the time-varying propagation behavior of the channel and τd is the delay between the two multipath components. The problem is to simulate such a channel on the computer.
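The rectangular-versus-Hanning comparison above can be sketched in NumPy as follows. This is an illustrative translation of the MATLAB script, not the book's code; np.hanning plays the role of MATLAB's hanning, and the taps are scaled to unit dc gain so the two designs can be compared on the same relative scale.

```python
import numpy as np

# Window-method FIR approximation of an ideal lowpass channel:
# truncate the ideal impulse response with a rectangular window and with
# a Hann window, then compare the resulting stopband sidelobe levels.
Fs, W, N = 10000, 2000, 51
n = np.arange(-(N - 1) // 2, (N - 1) // 2 + 1)
h = 2 * W / Fs * np.sinc(2 * W * n / Fs)  # sampled ideal lowpass response
rect = h                                   # rectangular truncation
hann = h * np.hanning(N)                   # smoother (Hann) truncation

def stopband_peak_db(taps, f_stop=3000, nfft=4096):
    """Peak magnitude in dB, relative to dc, at frequencies beyond f_stop."""
    H = np.abs(np.fft.rfft(taps, nfft))
    f = np.fft.rfftfreq(nfft, 1 / Fs)
    return 20 * np.log10(H[f > f_stop].max() / H[0])
```

Evaluating stopband_peak_db for both designs shows the effect described in the text: the Hann-windowed filter has markedly lower stopband sidelobes than the rectangularly truncated one, at the cost of a wider transition band.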
SOLUTION

We model b1(t) and b2(t) as Gaussian random processes generated by passing white Gaussian noise (WGN) sequences through lowpass filters. Hence, we may use relatively simple digital IIR filters excited by WGN sequences. For example, a simple lowpass digital filter having two identical poles is described by the z-transform

   H(z) = (1-p)² / (1 - pz^{-1})² = (1-p)² / (1 - 2pz^{-1} + p²z^{-2})    (6.3.6)

or the corresponding difference equation

   b_n = 2p b_{n-1} - p² b_{n-2} + (1-p)² w_n    (6.3.7)

where {w_n} is the input WGN sequence, {b_n} is the output sequence, and 0 < p < 1 is the pole position. The position of the pole controls the bandwidth of the filter and, hence, the rate of variation of {b_n}. When p is close to unity (close to the unit circle in the z-plane), the filter bandwidth is narrow, whereas when p is close to zero, the bandwidth is wide. Consequently, when p is close to the unit circle, the filter output sequence changes more slowly compared to the case when the pole is close to the origin.

Figure 6.13 illustrates the output sequences {b1n} and {b2n} generated by passing statistically independent WGN sequences through a filter having p = 0.99. The discrete-time channel impulse response

   c_n = b1,n + b2,n-d    (6.3.8)

with d = 5 samples of delay, is also shown. Figure 6.14 illustrates the sequences {b1n}, {b2n}, and {c_n} when p = 0.9.

Figure 6.10: Frequency response of the filter truncated with a rectangular window in Illustrative Problem 6.4.

Figure 6.11: Frequency response of the filter truncated with a Hanning window in Illustrative Problem 6.4.

Figure 6.12: Two-path radio channel model.

Figure 6.13: Outputs b1n and b2n of a lowpass filter and the resulting c_n for p = 0.99.

6.4 Characterization of Intersymbol Interference

In a digital communication system, channel distortion causes intersymbol interference (ISI). In this section, we present a model that characterizes ISI. For simplicity, we assume that the transmitted signal is a baseband PAM signal; however, this treatment is easily extended to the carrier (linearly) modulated signals discussed in the next chapter.

The transmitted PAM signal is expressed as

   s(t) = sum_{n=0}^{inf} a_n g(t - nT)    (6.4.1)

where g(t) is the basic pulse shape that is selected to control the spectral characteristics of the transmitted signal, {a_n} is the sequence of transmitted information symbols selected from a signal constellation consisting of M points, and T is the signal interval (1/T is the symbol rate). The signal s(t) is transmitted over a baseband channel, which may be characterized by a frequency response C(f). Consequently, the received signal can be represented as

   r(t) = sum_{n=0}^{inf} a_n h(t - nT) + w(t)    (6.4.2)

where h(t) = g(t) * c(t), c(t) is the impulse response of the channel, * denotes convolution, and w(t) represents the additive noise in the channel.

To characterize ISI, suppose that the received signal is passed through a receiving filter and then sampled at the rate 1/T samples/second. The optimum filter at the receiver is matched to the received signal pulse h(t); i.e., the frequency response of this filter is H*(f). We denote its output as

   y(t) = sum_{n=0}^{inf} a_n x(t - nT) + v(t)    (6.4.3)

where x(t) is the signal pulse response of the receiving filter, i.e., X(f) = H(f)H*(f) = |H(f)|², and v(t) is the response of the receiving filter to the noise w(t).
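The fading-tap recursion (6.3.7) is easy to sketch in Python. This is an illustrative reimplementation, not the book's code; it also computes the recursion's impulse response, which for the double pole at z = p has the closed form h_n = (1-p)² (n+1) pⁿ.

```python
import numpy as np

# Double-pole lowpass recursion driven by input w:
#   b_n = 2p b_{n-1} - p^2 b_{n-2} + (1-p)^2 w_n
def lowpass_iir(w, p):
    b = np.zeros(len(w))
    for k in range(len(w)):
        b[k] = (1 - p) ** 2 * w[k]
        if k >= 1:
            b[k] += 2 * p * b[k - 1]
        if k >= 2:
            b[k] -= p**2 * b[k - 2]
    return b

rng = np.random.default_rng(0)
p, d, L = 0.99, 5, 1000
b1 = lowpass_iir(rng.standard_normal(L), p)   # first path gain
b2 = lowpass_iir(rng.standard_normal(L), p)   # second path gain
# discrete-time two-path response c_n = b1_n + b2_{n-d} (delay of d samples)
c = b1 + np.concatenate((np.zeros(d), b2[:-d]))

# impulse response of the recursion, for checking against (1-p)^2 (n+1) p^n
h_imp = lowpass_iir(np.concatenate(([1.0], np.zeros(9))), 0.9)
```

Driving the same recursion with p = 0.99 instead of p = 0.9 produces visibly slower tap variation, which is the bandwidth effect described in the text.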
Now, if y(t) is sampled at times t = kT, k = 0, 1, 2, ..., we have

   y(kT) = sum_{n=0}^{inf} a_n x(kT - nT) + v(kT)

or, equivalently,

   y_k = sum_{n=0}^{inf} a_n x_{k-n} + v_k,   k = 0, 1, ...    (6.4.4)

The sample values {y_k} can be expressed as

   y_k = x_0 [ a_k + (1/x_0) sum_{n=0, n≠k}^{inf} a_n x_{k-n} ] + v_k,   k = 0, 1, ...    (6.4.5)

The term x_0 is an arbitrary scale factor, which we set equal to unity for convenience. Then

   y_k = a_k + sum_{n=0, n≠k}^{inf} a_n x_{k-n} + v_k    (6.4.6)

The term a_k represents the desired information symbol at the kth sampling instant. Hence, the term

   sum_{n=0, n≠k}^{inf} a_n x_{k-n}    (6.4.7)

represents the ISI, and v_k is the additive noise at the kth sampling instant.

The amount of ISI and noise in a digital communication system can be viewed on an oscilloscope. For PAM signals, we can display the received signal y(t) on the vertical input with the horizontal sweep rate set at 1/T. The resulting oscilloscope display is called an eye pattern because of its resemblance to the human eye. Figure 6.15 illustrates the eye patterns for binary and four-level PAM modulation. The effect of ISI is to cause the eye to close, thereby reducing the margin for additive noise to cause errors. Note that intersymbol interference distorts the position of the zero crossings and causes a reduction in the eye opening. Hence, it causes the system to be more sensitive to a synchronization error. Figure 6.16 graphically illustrates the effect of ISI in reducing the opening of a binary eye.

Figure 6.14: Outputs b1n, b2n, and c_n with pole at p = 0.9.

Figure 6.15: Examples of eye patterns for binary and quaternary amplitude shift keying (or PAM).

Figure 6.16: Effect of intersymbol interference on eye opening (optimum sampling time, sensitivity to timing error, distortion of zero crossings, peak distortion, noise margin).

Illustrative Problem 6.6  In this problem we consider the effect of intersymbol interference (ISI) on the received signal sequence {y_k} for two channels whose sampled pulse responses {x_n} are nonzero only for n = 0, ±1, ±2, with the tap values shown in Figure 6.17. Note that in these channels the ISI is limited to two symbols on either side of the desired transmitted signal. Hence, the cascade of the transmitter and receiver filters and the channel at the sampling instants is represented by the equivalent discrete-time FIR channel filters shown in Figure 6.17.
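The eye-closure effect described above can be quantified by the worst-case (peak) distortion: for binary ±1 symbols, the eye at the sampling instant is open if and only if the sum of the ISI tap magnitudes is smaller than the main tap. A small sketch follows; the tap values used here are illustrative assumptions, not the values in Figure 6.17.

```python
# Worst-case binary eye opening for an FIR channel model {x_n}:
# eye opening = x_0 - sum_{n != 0} |x_n|  (peak distortion criterion).
def eye_opening(x, center):
    """Worst-case eye opening for binary (+/-1) symbols."""
    isi = sum(abs(v) for i, v in enumerate(x) if i != center)
    return x[center] - isi

# Hypothetical tap sets (index 2 is the main tap x_0):
open_channel = [-0.05, 0.1, 1.0, 0.1, -0.05]   # mild ISI: eye stays open
closed_channel = [-0.3, 0.6, 1.0, 0.6, -0.3]   # heavy ISI: eye fully closed
```

A positive eye opening means ISI alone cannot cause detection errors with a zero threshold; a nonpositive value means errors occur even without noise, which is exactly the distinction drawn between the two channels in Illustrative Problem 6.6.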
SOLUTION

Suppose that the transmitted signal sequence is binary, i.e., {a_n = ±1}. Two cases are considered: transmission in the absence of noise, and transmission with additive white Gaussian noise having a variance of σ² = 0.1. In the first case, for Channel 1, the received signal sequence {y_n} in the absence of noise is shown in Figure 6.18(a). We note that the ISI alone does not cause errors at a detector that compares the received signal sequence {y_n} with the threshold set to zero. In the second case, with additive noise of variance σ² = 0.1, the received signal sequence is shown in Figure 6.18(b); here we observe that the ISI can cause errors at the detector when the additive noise is sufficiently large.

For Channel 2, the eye is completely closed; hence, even in the absence of noise, the ISI causes errors at the detector. For this channel characteristic, the noise-free and noisy (σ² = 0.1) sequences {y_n} are illustrated in Figure 6.19.

Figure 6.17: FIR channel models with ISI: (a) channel 1; (b) channel 2.

Figure 6.18: Output of channel model 1 without and with AWGN: (a) no noise; (b) additive Gaussian noise with σ² = 0.1.

Figure 6.19: Output of channel model 2 without and with AWGN: (a) no noise; (b) additive Gaussian noise with variance σ² = 0.1.

6.5 Communication System Design for Bandlimited Channels

In this section we consider the design of the transmitter and receiver filters that are suitable for a baseband bandlimited channel. Two cases are considered. In the first case, the design is based on transmitter and receiver filters that result in zero ISI at the sampling instants. In the second case, the design is based on transmitter and receiver filters that have a specified (predetermined) amount of ISI. Thus, the second design approach leads to a controlled amount of ISI; the corresponding transmitted signals are called partial response signals.

6.5.1 Signal Design for Zero ISI

The design of bandlimited signals with zero ISI was a problem considered by Nyquist about 70 years ago. He demonstrated that a necessary and sufficient condition for a signal x(t) to have zero ISI, i.e.,

   x(nT) = 1,  n = 0
         = 0,  n ≠ 0    (6.5.1)

is that its Fourier transform X(f) satisfy

   sum_{m=-inf}^{inf} X(f + m/T) = T    (6.5.2)

where 1/T is the symbol rate. For simplicity, we assume that the channel is ideal, i.e., A(f) = 1 and τ(f) = 0 within the channel bandwidth W.

In general, there are many signals that can be designed to have this property. One of the most commonly used signals in practice has a raised-cosine frequency response characteristic:

   X_rc(f) = T,                                              0 ≤ |f| ≤ (1-α)/2T
           = (T/2){1 + cos[(πT/α)(|f| - (1-α)/2T)]},         (1-α)/2T ≤ |f| ≤ (1+α)/2T
           = 0,                                              |f| > (1+α)/2T    (6.5.3)

where α is called the roll-off factor, which takes values in the range 0 ≤ α ≤ 1, and 1/T is the symbol rate. The frequency 1/2T is called the Nyquist frequency, and the bandwidth occupied by the signal beyond the Nyquist frequency is called the excess bandwidth, usually expressed as a percentage of the Nyquist frequency. For example, when α = 1/2, the excess bandwidth is 50%, and when α = 1, the excess bandwidth is 100%. The signal pulse x_rc(t) having the raised-cosine spectrum is

   x_rc(t) = [sin(πt/T) / (πt/T)] · [cos(παt/T) / (1 - 4α²t²/T²)]    (6.5.4)

Note that x_rc(0) = 1 and x_rc(t) = 0 at t = kT, k = ±1, ±2, ... Consequently, at the sampling instants t = kT, k ≠ 0, there is no ISI from adjacent symbols when there is no channel distortion. The frequency response X_rc(f) is illustrated in Figure 6.20(a) for α = 0, α = 1/2, and α = 1. Note that when α = 0, X_rc(f) reduces to an ideal "brick wall" physically nonrealizable frequency response with bandwidth occupancy 1/2T. Figure 6.20(b) illustrates x_rc(t) for the same values of α.

Figure 6.20: (a) Raised-cosine frequency response; (b) pulse shapes for raised-cosine frequency response.

In an ideal channel, the transmitter and receiver filters are jointly designed for zero ISI at the desired sampling instants t = nT. Thus, if G_T(f) is the frequency response of the transmitter filter and G_R(f) is the frequency response of the receiver filter, then the product (cascade of the two filters) G_T(f)G_R(f) is designed to yield zero ISI. For example, if the product is selected as

   G_T(f)G_R(f) = X_rc(f)    (6.5.5)

where X_rc(f) is the raised-cosine frequency response characteristic, then the ISI at the sampling times t = nT is zero. However, in the presence of channel distortion, the ISI given by (6.4.7) is no longer zero, and a channel equalizer is needed to minimize its effect on system performance. Channel equalizers are considered later in this chapter.

Illustrative Problem 6.7  We wish to design a digital implementation of the transmitter and receiver filters G_T(f) and G_R(f) such that their product satisfies (6.5.5) and G_R(f) is the matched filter to G_T(f).

SOLUTION

The simplest way to design and implement the transmitter and receiver filters in digital form is to employ FIR filters with linear phase (symmetric impulse response). The desired magnitude response is

   |G_T(f)| = |G_R(f)| = sqrt(X_rc(f))    (6.5.6)

where X_rc(f) is given by (6.5.3). Since G_T(f) is bandlimited, we may select the sampling frequency Fs to be at least 2/T. Our choice is

   Fs = 4/T   or, equivalently,   Ts = T/4

Hence, the folding frequency is Fs/2 = 2/T. The frequency response is related to the impulse response of the digital filter by the equation

   G_T(f) = sum_{n=-(N-1)/2}^{(N-1)/2} g_T(n) e^{-j2πfnTs}    (6.5.7)

where Ts is the sampling interval and N is the length of the filter. Note that N is odd. Since G_T(f) = sqrt(X_rc(f)), we may sample X_rc(f) at N equally spaced points in frequency, with frequency separation Δf = Fs/N. Thus we have

   G_T(mFs/N) = sqrt(X_rc(mFs/N)) = sum_{n=-(N-1)/2}^{(N-1)/2} g_T(n) e^{-j2πmn/N}    (6.5.8)

The inverse transform relation is

   g_T(n) = (1/N) sum_{m=-(N-1)/2}^{(N-1)/2} sqrt(X_rc(mFs/N)) e^{j2πmn/N}    (6.5.9)

Since g_T(n) is symmetric, the impulse response of the desired linear phase transmitter filter is obtained by delaying g_T(n) by (N-1)/2 samples. The MATLAB scripts for this computation are given below.

% MATLAB script for Illustrative Problem 7, Chapter 6.
echo on
N=31;
T=1;
alpha=1/4;
n=-(N-1)/2:(N-1)/2;   % the indices for g_T
% The expression for g_T is obtained next.
for i=1:length(n),
  g_T(i)=0;
  for m=-(N-1)/2:(N-1)/2,
    g_T(i)=g_T(i)+sqrt(xrc(4*m/(N*T),alpha,T))*exp(j*2*pi*m*n(i)/N);
  end;
end;
n2=0:N-1;             % to plot g_T(n-(N-1)/2)
% derive g_R
g_R=g_T;
% impulse response of the cascade of the transmitter and receiver filters
imp_resp_of_cascade=conv(g_R,g_T);
% get the frequency response characteristics
[G_T,W]=freqz(g_T,1);
% normalized magnitude response
magG_T_in_dB=20*log10(abs(G_T)/max(abs(G_T)));
% plotting commands follow

function [y]=xrc(f,alpha,T)
% [y]=xrc(f,alpha,T)
%   Evaluates the expression Xrc(f). The parameters alpha and T
%   must also be given as inputs to the function.
if (abs(f) > (1+alpha)/(2*T)),
  y=0;
elseif (abs(f) > (1-alpha)/(2*T)),
  y=(T/2)*(1+cos((pi*T/alpha)*(abs(f)-(1-alpha)/(2*T))));
else
  y=T;
end;

Figure 6.21(a) illustrates g_T(n - (N-1)/2) for α = 1/4 and N = 31. The corresponding frequency response characteristics are shown in Figure 6.21(b). Note that the frequency response is no longer zero for |f| > (1+α)/2T, because the digital filter has finite duration. However, the sidelobes in the spectrum are relatively small; further reduction in the sidelobes may be achieved by increasing N. Finally, in Figure 6.22, we show the impulse response of the cascade of the transmitter and receiver FIR filters. This may be compared with the ideal impulse response obtained by sampling x_rc(t) at a rate Fs = 4/T.

Figure 6.21: Impulse response and frequency response of the truncated discrete-time FIR filter at the transmitter.

Figure 6.22: Impulse response of the cascade of the transmitter filter with the matched filter at the receiver.
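The zero-ISI property of the raised-cosine pulse (6.5.4) can be checked numerically. The sketch below is an illustrative NumPy implementation (not the book's code); it also handles the removable singularity of (6.5.4) at t = ±T/(2α), where the limiting value is (π/4)·sinc(1/(2α)).

```python
import numpy as np

# Raised-cosine pulse x_rc(t) of (6.5.4); np.sinc(u) = sin(pi u)/(pi u).
def xrc_pulse(t, alpha, T=1.0):
    t = np.asarray(t, dtype=float)
    den = 1.0 - (2.0 * alpha * t / T) ** 2
    # removable singularity at t = +-T/(2 alpha)
    safe = np.where(np.isclose(den, 0.0), 1.0, den)
    val = np.sinc(t / T) * np.cos(np.pi * alpha * t / T) / safe
    limit = (np.pi / 4.0) * np.sinc(1.0 / (2.0 * alpha))
    return np.where(np.isclose(den, 0.0), limit, val)
```

Evaluating the pulse at the symbol-spaced instants t = kT reproduces the Nyquist condition (6.5.1): the value is 1 at k = 0 and vanishes for every other k, including points that land exactly on the singular instants.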
6.5.2 Signal Design for Controlled ISI

As we have observed from our discussion of signal design for zero ISI, a transmit filter with excess bandwidth may be employed to realize practical transmitting and receiving filters for bandlimited channels. On the other hand, suppose we choose to relax the condition of zero ISI and thus achieve symbol transmission at a rate of 2W symbols/second in a bandwidth W = 1/2T, i.e., with no excess bandwidth. By allowing for a controlled amount of ISI, we can achieve the rate of 2W symbols/second. The ISI that we introduce is deterministic, or "controlled"; hence, it can be taken into account at the receiver, as discussed below.

In general, a signal x(t) that is bandlimited to W hertz, i.e.,

   X(f) = 0,   |f| > W    (6.5.10)

can be represented as

   x(t) = sum_{n=-inf}^{inf} x(n/2W) · sin[2πW(t - n/2W)] / [2πW(t - n/2W)]    (6.5.11)

This representation follows from the sampling theorem for bandlimited signals. The spectrum of the bandlimited signal is

   X(f) = (1/2W) sum_{n=-inf}^{inf} x(n/2W) e^{-jπnf/W},   |f| ≤ W
        = 0,                                               |f| > W    (6.5.12)

Now, suppose that we design the bandlimited signal to have controlled ISI at one time instant. This means that we allow one additional nonzero value in the samples {x(nT)}, where T = 1/2W. One special case that leads to physically realizable transmitting and receiving filters is specified by the samples

   x(n/2W) = x(nT) = 1,  n = 0, 1
                   = 0,  otherwise    (6.5.13)

The corresponding signal spectrum is

   X(f) = (1/2W)(1 + e^{-jπf/W}) = (1/W) cos(πf/2W) e^{-jπf/2W},   |f| ≤ W
        = 0,                                                        |f| > W    (6.5.14)

and x(t) is given by

   x(t) = sinc(2Wt) + sinc(2Wt - 1)    (6.5.15)

where sinc(t) = sin(πt)/(πt). This pulse is called a duobinary signal pulse. It is illustrated along with its magnitude spectrum in Figure 6.23. We note that the spectrum decays to zero smoothly, which means that physically realizable filters can be designed to approximate this spectrum very closely. Thus, a symbol rate of 2W is achieved.

Figure 6.23: Duobinary signal pulse and its spectrum.

Another special case that leads to physically realizable transmitting and receiving filters is specified by the samples

   x(n/2W) = x(nT) = 1,   n = -1
                   = -1,  n = 1
                   = 0,   otherwise    (6.5.16)

The corresponding pulse x(t) is given by

   x(t) = sinc(2Wt + 1) - sinc(2Wt - 1)    (6.5.17)
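The stated sample values of the two partial response pulses can be confirmed directly from (6.5.15) and (6.5.17). A small NumPy check (illustrative code, with an assumed bandwidth W = 1 since only the samples at t = n/2W matter):

```python
import numpy as np

# Samples of the duobinary pulse (6.5.15) and the modified duobinary
# pulse (6.5.17) at t = n/(2W); np.sinc(u) = sin(pi u)/(pi u).
W = 1.0          # illustrative bandwidth
n = np.arange(-4, 5)
t = n / (2 * W)  # the controlled-ISI sampling instants

duobinary = np.sinc(2 * W * t) + np.sinc(2 * W * t - 1)
modified = np.sinc(2 * W * t + 1) - np.sinc(2 * W * t - 1)
```

The duobinary pulse is 1 at n = 0 and n = 1 and zero elsewhere, while the modified duobinary pulse is 1 at n = -1, -1 at n = 1, and zero elsewhere, exactly the controlled-ISI patterns (6.5.13) and (6.5.16).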
252 CHAPTER 6. DIGITAL TRANSMISSION THROUGH BANDLIMITED CHANNELS

Illustrative Problem 6.8 We wish to design a digital implementation of the transmitter and receiver filters G_T(f) and G_R(f) such that their product is equal to the spectrum of a duobinary pulse and G_R(f) is the matched filter to G_T(f).

Solution: To satisfy the frequency-domain specification, we must have

    |G_T(f)| |G_R(f)| = (1/W) cos(pi f / (2W)),   |f| <= W                   (6.5.19)

Hence,

    |G_T(f)| = { sqrt((1/W) cos(pi f / (2W))),   |f| <= W
               { 0,                              |f| > W                     (6.5.20)

and |G_R(f)| = |G_T(f)|. To obtain the FIR filter coefficients, we follow the same frequency-sampling approach as in the preceding problem, with W = 1/2T; that is,

    g_T(n) = (1/N) SUM_{m=-(N-1)/2}^{(N-1)/2} sqrt((1/W) cos(2 pi m / (N T W))) e^{j 2 pi m n / N},
             n = 0, 1, ..., N-1                                              (6.5.21)

and g_R(n) = g_T(n), for N = 31. The MATLAB script for this computation is given below.

    % MATLAB script for Illustrative Problem 8, Chapter 6.
    echo on
    N=31;
    T=1;
    W=1/(2*T);
    n=-(N-1)/2:(N-1)/2;                 % the indices for g_T
    % The expression for g_T(n) is given by (6.5.21).
    for i=1:length(n),
      g_T(i)=0;
      for m=-(N-1)/2:(N-1)/2,
        g_T(i)=g_T(i)+sqrt((1/W)*cos((2*pi*m)/(N*T*W)))*exp(j*2*pi*m*n(i)/N);
      end;
      g_T(i)=g_T(i)/N;
    end;
    % obtain the frequency response characteristics
    [G_T,W]=freqz(g_T,1);
    % normalized magnitude response in dB
    magG_T_in_dB=20*log10(abs(G_T)/max(abs(G_T)));
    % impulse response of the cascade of the transmitter and the receiver filters
    g_R=g_T;
    imp_resp_of_cascade=conv(g_R,g_T);
    % plotting commands follow

Figure 6.25(a) illustrates g_T(n - (N-1)/2), and the corresponding frequency response characteristic is shown in Figure 6.25(b). Note that the frequency response is no longer zero for |f| > W, because the digital filter has finite duration; however, the sidelobes in the spectrum are relatively small. Finally, Figure 6.26 shows the impulse response of the cascade of the transmitter and receiver FIR filters. This impulse response may be compared with the ideal impulse response obtained by sampling x(t), given by (6.5.17), at a rate Fs = 4/T = 8W.

Figure 6.25: Impulse response and frequency response of the truncated discrete-time duobinary FIR filter at the transmitter.

Figure 6.26: Impulse response of the cascade of the transmitter filter with the matched filter at the receiver.

6.5.3 Precoding for Detection of Partial Response Signals

For the duobinary signal pulse, x(nT) = 1 for n = 0, 1 and is zero otherwise. Hence, the samples at the output of the receiver filter G_R(f) are expressed as

    y_k = b_k + v_k = a_k + a_{k-1} + v_k                                    (6.5.22)

where {a_k} is the transmitted sequence of amplitudes and {v_k} is a sequence of additive Gaussian noise samples. Let us ignore the noise for the moment and consider the binary case, where a_k = +1 or -1 with equal probability. Then b_k takes one of three possible values, namely, b_k = -2, 0, 2, with corresponding probabilities 1/4, 1/2, 1/4. If a_{k-1} is the detected symbol from the (k-1)st signaling interval, its effect on b_k, the received signal in the kth signaling interval, can be eliminated by subtraction, thus allowing a_k to be detected. The process can be repeated sequentially for every received symbol.

The major problem with this procedure is that errors arising from the additive noise tend to propagate. For example, if a_{k-1} is detected in error, its effect on b_k is not eliminated; in fact, it is reinforced by the incorrect subtraction. Consequently, the detection of a_k is also likely to be in error.

Error propagation can be prevented by precoding the data at the transmitter instead of eliminating the controlled ISI by subtraction at the receiver. The precoding is performed on the binary data sequence prior to modulation. From the data sequence {D_k} of 1's and 0's that is to be transmitted, a new sequence {p_k}, called the precoded sequence, is generated. For the duobinary signal, the precoded sequence is defined as

    p_k = D_k - p_{k-1}  (mod 2),   k = 1, 2, ...                            (6.5.23)

Although this operation is identical to modulo-2 addition, it is convenient to view the precoding operation for duobinary in terms of modulo-2 subtraction. The transmitted signal amplitude is then a_k = -1 if p_k = 0 and a_k = 1 if p_k = 1; that is, a_k = 2 p_k - 1. The noise-free samples at the output of the receiving filter are given by

    b_k = a_k + a_{k-1} = (2 p_k - 1) + (2 p_{k-1} - 1) = 2 (p_k + p_{k-1} - 1)   (6.5.24)

Hence,

    p_k + p_{k-1} = (1/2) b_k + 1                                            (6.5.25)

Since D_k = p_k + p_{k-1} (mod 2), it follows that the data sequence {D_k} is obtained from {b_k} using the relation

    D_k = (1/2) b_k + 1  (mod 2)                                             (6.5.26)

Consequently, if b_k = +2 or -2, then D_k = 0, and if b_k = 0, then D_k = 1. An example that illustrates the precoding and decoding operations is given in Table 6.1.

Table 6.1: Binary signaling with duobinary pulses.

    Data sequence D_k:            1  1  1  0  1  0  0  1  0  0  0  1
    Precoded sequence p_k:     0  1  0  1  1  0  0  0  1  1  1  1  0
    Transmitted sequence a_k: -1  1 -1  1  1 -1 -1 -1  1  1  1  1 -1
    Received sequence b_k:        0  0  0  2  0 -2 -2  0  2  2  2  0
    Decoded sequence D_k:         1  1  1  0  1  0  0  1  0  0  0  1

In the presence of additive noise, the sampled outputs from the receiving filter are given by (6.5.22). In this case, y_k = b_k + v_k is compared with the two thresholds set at +1 and -1, and the data sequence {D_k} is obtained according to the detection rule

    D_k = { 1,   |y_k| < 1
          { 0,   |y_k| >= 1                                                  (6.5.27)

Thus precoding the data allows us to perform symbol-by-symbol detection at the receiver without the need for subtraction of previously detected symbols.
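The precoding, transmission, and decoding chain is easy to check numerically. The sketch below uses Python/NumPy as a stand-in for the book's MATLAB; it precodes a binary data sequence (the one used in Table 6.1), forms the noise-free duobinary samples b_k = a_k + a_{k-1}, and recovers {D_k}. The same loop with modulo-M arithmetic implements the multilevel rule discussed next.

```python
import numpy as np

def duobinary_precode(D, M=2):
    """Precode data D (values 0..M-1): p_k = (D_k - p_{k-1}) mod M, with p_0 = 0.
    Returns the transmitted levels a_k = 2 p_k - (M-1)."""
    p, a = 0, []
    for d in D:
        p = (d - p) % M
        a.append(2 * p - (M - 1))
    return np.array(a)

def duobinary_decode(b, M=2):
    """Recover D_k = (b_k/2 + (M-1)) mod M from the noise-free samples b_k."""
    return ((b // 2 + (M - 1)) % M).astype(int)

# Binary example: the data sequence of Table 6.1 (initial symbol a_0 = 2*p_0 - 1 = -1)
D = np.array([1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1])
a = duobinary_precode(D, M=2)
b = a + np.concatenate(([-1], a[:-1]))     # b_k = a_k + a_{k-1}
print(duobinary_decode(b, M=2))            # recovers D

# Four-level example (hypothetical data, a_0 = -3)
D4 = np.array([0, 3, 1, 2, 2, 0, 1, 3])
a4 = duobinary_precode(D4, M=4)
b4 = a4 + np.concatenate(([-3], a4[:-1]))
print(duobinary_decode(b4, M=4))           # recovers D4
```

Note that no previously detected symbol enters the decoder: each b_k is mapped to D_k on its own, which is exactly what prevents error propagation.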
The extension from binary PAM to multilevel PAM using duobinary pulses is straightforward. In this case the M-level amplitude sequence {a_k} results in a (noise-free) received sequence

    b_k = a_k + a_{k-1},   k = 1, 2, ...                                     (6.5.29)

which has 2M - 1 possible, equally spaced amplitude levels. The amplitude levels of the sequence {a_k} are determined from the relation

    a_k = 2 p_k - (M - 1)                                                    (6.5.30)

where {p_k} is the precoded sequence, obtained from the M-level data sequence {D_k} according to the relation

    p_k = D_k - p_{k-1}  (mod M)                                             (6.5.31)

and the possible values of the data sequence {D_k} are 0, 1, 2, ..., M-1.

In the absence of noise, the samples at the output of the receiving filter may be expressed as

    b_k = a_k + a_{k-1} = [2 p_k - (M-1)] + [2 p_{k-1} - (M-1)]
        = 2 [p_k + p_{k-1} - (M-1)]                                          (6.5.32)

Hence,

    p_k + p_{k-1} = (1/2) b_k + (M - 1)                                      (6.5.33)

Since D_k = p_k + p_{k-1} (mod M), it follows that the transmitted data {D_k} are recovered from the received sequence {b_k} by means of the relation

    D_k = (1/2) b_k + (M - 1)  (mod M)                                       (6.5.34)

In the case of the modified duobinary pulse, the received signal samples at the output of the receiving filter G_R(f) are expressed as

    y_k = a_k - a_{k-2} + v_k = b_k + v_k                                    (6.5.35)

The precoder for the modified duobinary pulse produces the sequence {p_k} from the data sequence {D_k} according to the relation

    p_k = D_k + p_{k-2}  (mod M)                                             (6.5.36)

From these relations, it is easy to show that the detection rule for recovering the data sequence {D_k} from {b_k} in the absence of noise is

    D_k = (1/2) b_k  (mod M)                                                 (6.5.37)

Illustrative Problem 6.9 Let us write a MATLAB program that takes a data sequence {D_k}, precodes it for a duobinary pulse transmission system to produce {p_k}, and maps the precoded sequence into the transmitted amplitude levels {a_k}. Then, from the transmitted sequence {a_k}, form the received noise-free sequence {b_k} and, using the relation in (6.5.26), recover the data sequence {D_k}.

Solution: The MATLAB script is given below. By running this program, we can verify the results in Table 6.1 for the case where M = 2. The program can also be run for any pseudorandom data sequence {D_k} with M = 2 or M = 4 transmitted amplitude levels.

    % MATLAB script for Illustrative Problem 9, Chapter 6.
    echo on
    d=[1 1 1 0 1 0 0 1 0 0 0 1];        % the data sequence
    p(1)=0;                              % initial condition for the precoder
    for i=1:length(d),
      p(i+1)=rem(p(i)+d(i),2);          % precoded sequence
    end;
    a=2.*p-1;                            % transmitted amplitude levels
    b(1)=0;
    for i=1:length(d),
      b(i+1)=a(i+1)+a(i);               % received noise-free sequence
    end;
    d_out(1)=0;
    for i=1:length(d),
      d_out(i+1)=rem(b(i+1)/2+1,2);     % recovered data sequence
    end;
    d_out=d_out(2:length(d)+1);

6.6 Linear Equalizers

The most common type of channel equalizer used in practice to reduce ISI is a linear FIR filter with adjustable coefficients {c_i}, as shown in Figure 6.27.

Figure 6.27: Linear transversal filter.

On channels whose frequency response characteristics are unknown but time-invariant, we may measure the channel characteristics and adjust the parameters of the equalizer; once adjusted, the parameters remain fixed during the transmission of data. Such equalizers are called preset equalizers. On the other hand, adaptive equalizers update their parameters on a periodic basis during the transmission of data, so they are capable of tracking a slowly time-varying channel response.

First, let us consider the design characteristics for a linear equalizer from a frequency-domain viewpoint. Figure 6.28 shows a block diagram of a system that employs a linear filter as a channel equalizer.

Figure 6.28: Block diagram of a system with an equalizer.

The demodulator consists of a receiver filter with frequency response G_R(f) in cascade with a channel-equalizing filter that has a frequency response G_E(f). As indicated in the previous section, the receiver filter response G_R(f) is matched to the transmitter response, i.e., G_R(f) = G_T*(f), and the product G_R(f)G_T(f) is usually designed so that there is either zero ISI at the sampling instants, as, for example, when G_T(f)G_R(f) = X_rc(f), where X_rc(f) is a raised-cosine spectral characteristic, or controlled ISI for partial response signals.
For the system shown in Figure 6.28, in which the channel frequency response C(f) is not ideal, the desired condition for zero ISI is

    G_T(f) C(f) G_R(f) G_E(f) = X_rc(f)                                      (6.6.1)

where X_rc(f) is the desired raised-cosine spectral characteristic. Since G_T(f)G_R(f) = X_rc(f) by design, the frequency response of the equalizer that compensates for the channel distortion is

    G_E(f) = 1 / C(f) = (1 / |C(f)|) e^{-j theta_c(f)}                       (6.6.2)

Thus, the amplitude response of the equalizer is |G_E(f)| = 1/|C(f)|, and its phase response is theta_E(f) = -theta_c(f). In this case, the equalizer is said to be the inverse channel filter to the channel response. We note that the inverse channel filter completely eliminates the ISI caused by the channel. Since it forces the ISI to be zero at the sampling instants t = kT, k = 0, 1, ..., the equalizer is called a zero-forcing equalizer. Hence, the input to the detector is simply

    z_k = a_k + eta_k,   k = 0, 1, ...                                       (6.6.3)

where eta_k represents the additive noise and a_k is the desired symbol.

In practice, the ISI caused by channel distortion is usually limited to a finite number of symbols on either side of the desired symbol; hence, the number of terms that constitute the ISI is finite. As a consequence, the channel equalizer is in practice implemented as a finite-duration impulse response (FIR), or transversal, filter with adjustable tap coefficients {c_n}, as illustrated in Figure 6.27. The time delay tau between adjacent taps may be selected as large as T, the symbol interval, in which case the FIR equalizer is called a symbol-spaced equalizer. In this case the input to the equalizer is the sequence of samples taken at the symbol rate at the output of the receiver filter. However, we note that when the symbol rate 1/T < 2W, frequencies in the received signal above the folding frequency 1/T are aliased into frequencies below 1/T; in this case, the equalizer compensates for the aliased channel-distorted signal.

On the other hand, when the time delay tau between adjacent taps is selected such that 1/tau >= 2W > 1/T, no aliasing occurs, and hence the inverse channel equalizer compensates for the true channel distortion. Since tau < T, the channel equalizer is said to have fractionally spaced taps, and it is called a fractionally spaced equalizer. In practice, tau is often selected as tau = T/2.

The impulse response of the FIR equalizer is

    g_E(t) = SUM_{n=-K}^{K} c_n delta(t - n tau)                             (6.6.5)

and the corresponding frequency response is

    G_E(f) = SUM_{n=-K}^{K} c_n e^{-j 2 pi f n tau}                          (6.6.6)

where {c_n} are the 2K+1 equalizer coefficients and K is chosen sufficiently large so that the equalizer spans the length of the ISI, i.e., 2K+1 >= L, where L is the number of signal samples spanned by the ISI. Since X(f) = G_T(f)C(f)G_R(f) and x(t) is the signal pulse corresponding to X(f), the equalized output signal pulse is

    q(t) = SUM_{n=-K}^{K} c_n x(t - n tau)                                   (6.6.7)

The zero-forcing condition can now be applied to the samples of q(t) taken at times t = mT. Since there are only 2K+1 equalizer coefficients, we can control only 2K+1 sampled values of q(t). Specifically, we may force the conditions

    q(mT) = SUM_{n=-K}^{K} c_n x(mT - n tau) = { 1,  m = 0
                                               { 0,  m = +-1, +-2, ..., +-K  (6.6.8)

which may be expressed in matrix form as Xc = q, where X is a (2K+1) x (2K+1) matrix with elements x(mT - n tau), c is the (2K+1) coefficient vector, and q is the (2K+1) column vector with a single nonzero element. Thus we obtain a set of 2K+1 linear equations for the coefficients of the zero-forcing equalizer.

We should emphasize that the FIR zero-forcing equalizer does not completely eliminate ISI, because it has finite length. However, as K is increased, the residual ISI is reduced, and in the limit as K -> infinity the ISI is completely eliminated.

Illustrative Problem 6.10 Consider a channel-distorted pulse x(t), at the input to the equalizer, given by the expression

    x(t) = 1 / (1 + (2t/T)^2)
where 1/T is the symbol rate. The pulse is sampled at the rate 2/T and equalized by a zero-forcing equalizer. Let us determine the coefficients of a five-tap zero-forcing equalizer.

Solution: According to (6.6.8), the zero-forcing equalizer must satisfy the equations

    q(mT) = SUM_{n=-2}^{2} c_n x(mT - nT/2) = { 1,  m = 0
                                              { 0,  m = +-1, +-2

The matrix X with elements x(mT - nT/2) is given as

        [ 1/5   1/10  1/17  1/26  1/37 ]
        [ 1     1/2   1/5   1/10  1/17 ]
    X = [ 1/5   1/2   1     1/2   1/5  ]                                     (6.6.9)
        [ 1/17  1/10  1/5   1/2   1    ]
        [ 1/37  1/26  1/17  1/10  1/5  ]

The coefficient vector c and the vector q are given as

    c = [c_{-2}  c_{-1}  c_0  c_1  c_2]',    q = [0  0  1  0  0]'            (6.6.10)

Then the linear equations Xc = q can be solved by inverting the matrix X. Thus we obtain

    c_opt = X^{-1} q = [-2.2   4.9   -3   4.9   -2.2]'                       (6.6.11)

Figure 6.29 illustrates the original pulse x(t) and the equalized pulse. Note the small amount of residual ISI in the equalized pulse. The MATLAB script for this computation is given below.

    % MATLAB script for Illustrative Problem 10, Chapter 6.
    echo on
    T=1;
    Fs=2/T;
    Ts=1/Fs;
    c_opt=[-2.2 4.9 -3 4.9 -2.2];
    t=-5*T:T/2:5*T;
    x=1./(1+((2/T)*t).^2);              % sampled pulse
    equalized_x=filter(c_opt,1,[x 0 0]);
    equalized_x=equalized_x(3:length(equalized_x));   % to take care of the delay of two samples
    % now, let us downsample the equalizer output
    for i=1:2:length(equalized_x),
      downsampled_equalizer_output((i+1)/2)=equalized_x(i);
    end;
    % plotting commands follow

Figure 6.29: Graph of the original pulse and the equalized pulse in Illustrative Problem 6.10: (a) original pulse; (b) equalized pulse.

One drawback of the zero-forcing equalizer is that it ignores the presence of additive noise; as a consequence, its use may result in significant noise enhancement. This is easily seen by noting that in a frequency range where C(f) is small, the channel equalizer G_E(f) = 1/C(f) compensates by placing a large gain in that frequency range. Consequently, the noise in that frequency range is greatly enhanced.
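The five-tap design above reduces to a 5 x 5 linear solve, which is easy to verify numerically. The following Python/NumPy sketch (a language substitution for the book's MATLAB scripts) builds the matrix X from x(t) = 1/(1 + (2t/T)^2) with tau = T/2 and solves Xc = q; the zero-forcing conditions then hold exactly at the five controlled sampling instants, while residual ISI remains outside them.

```python
import numpy as np

K = 2                                        # five taps: n = -K..K
x = lambda t: 1.0 / (1.0 + (2.0 * t) ** 2)   # channel-distorted pulse, with T = 1

# X[m, n] = x(mT - n*T/2) for m, n = -K..K
m = np.arange(-K, K + 1)
X = x(m[:, None] - 0.5 * m[None, :])
q = np.zeros(2 * K + 1)
q[K] = 1.0                                   # force q(0) = 1, q(+-T) = q(+-2T) = 0

c_opt = np.linalg.solve(X, q)
print(np.round(c_opt, 1))                    # symmetric taps; cf. eq. (6.6.11)

# check the zero-forcing conditions at the controlled instants
q_check = X @ c_opt
```

By symmetry of the pulse, the resulting tap vector is symmetric about the center tap; the forced samples come out exact to machine precision, but sampling q(t) between the controlled instants would reveal the residual ISI noted above.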
An alternative is to relax the zero-ISI condition and select the channel equalizer characteristic such that the combined power in the residual ISI and the additive noise at the output of the equalizer is minimized. A channel equalizer that is optimized based on the minimum mean-square-error (MMSE) criterion accomplishes this goal.

To elaborate, let us consider the noise-corrupted output of the FIR equalizer, which is

    z(t) = SUM_{n=-K}^{K} c_n y(t - n tau)                                   (6.6.12)

where y(t) is the input to the equalizer. The equalizer output is sampled at times t = mT.
Thus, we obtain

    z(mT) = SUM_{n=-K}^{K} c_n y(mT - n tau)                                 (6.6.13)

The desired response at the output of the equalizer at t = mT is the transmitted symbol a_m, and the error is defined as the difference between a_m and z(mT). Then the mean-square error (MSE) between the actual output sample z(mT) and the desired value a_m is

    MSE = E |z(mT) - a_m|^2
        = E | SUM_{n=-K}^{K} c_n y(mT - n tau) - a_m |^2
        = SUM_{n=-K}^{K} SUM_{k=-K}^{K} c_n c_k R_y(n-k)
          - 2 SUM_{k=-K}^{K} c_k R_ay(k) + E(|a_m|^2)                        (6.6.14)

where the correlations are defined as

    R_y(n-k) = E[ y*(mT - n tau) y(mT - k tau) ]
    R_ay(k)  = E[ y(mT - k tau) a_m* ]                                       (6.6.15)

and the expectation is taken with respect to the random information sequence {a_m} and the additive noise. (In this development we allow the signals x(t) and y(t), as well as the data sequence, to be complex-valued.) The minimum-MSE solution is obtained by differentiating (6.6.14) with respect to the equalizer coefficients {c_n}. Thus we obtain the necessary conditions for the minimum MSE as

    SUM_{n=-K}^{K} c_n R_y(n-k) = R_ay(k),   k = 0, +-1, +-2, ..., +-K       (6.6.16)

These are 2K+1 linear equations for the equalizer coefficients. In contrast to the zero-forcing solution described previously, these equations depend on the statistical properties (the autocorrelation) of the noise as well as on the ISI, through the autocorrelation R_y(n).

In practice, the autocorrelation matrix R_y(n) and the cross-correlation vector R_ay(n) are unknown a priori. However, these correlation sequences can be estimated by transmitting a test signal over the channel and using the time-average estimates

    R_y^(n)  = (1/K) SUM_{k=1}^{K} y*(kT - n tau) y(kT)
    R_ay^(n) = (1/K) SUM_{k=1}^{K} y(kT - n tau) a_k*                        (6.6.17)

in place of the ensemble averages to solve for the equalizer coefficients given by (6.6.16).

Illustrative Problem 6.11 Let us consider the same channel-distorted pulse x(t) as in Illustrative Problem 6.10, but now let us design the five-tap equalizer based on the minimum-MSE criterion. The information symbols have zero mean and unit variance and are uncorrelated, i.e.,

    E(a_n) = 0,    E(a_n a_m*) = 0 for n != m,    E(|a_n|^2) = 1

and the additive noise v(t) has zero mean and autocorrelation

    phi_vv(tau) = (N_0 / 2) delta(tau)

Solution: The equalizer tap coefficients are obtained by solving (6.6.16) with K = 2 and tau = T/2. The matrix with elements R_y(n-k) is simply

    R_y = X'X + (N_0 / 2) I

where X is given by (6.6.9) and I is the identity matrix, and the vector with elements R_ay(k) is given as

    R_ay = [1/5  1/2  1  1/2  1/5]'

A plot of the equalized pulse, obtained with the coefficients that solve (6.6.16) for N_0 = 0.01, is shown in Figure 6.30. The MATLAB script for this computation is given below.

    % MATLAB script for Illustrative Problem 11, Chapter 6.
    echo on
    T=1;
    for n=-2:2,
      for k=-2:2,
        temp=0;
        for i=-2:2,
          temp=temp+(1/(1+(n-i)^2))*(1/(1+(k-i)^2));
        end;
        X(k+3,n+3)=temp;
      end;
    end;
    N0=0.01;                            % assuming that N0=0.01
    Ry=X+(N0/2)*eye(5);
    Riy=[1/5 1/2 1 1/2 1/5];
    c_opt=inv(Ry)*Riy';                 % optimal tap coefficients
    % find the equalized pulse
    t=-3:1/2:3;
    x=1./(1+(2*t/T).^2);
    equalized_pulse=conv(x,c_opt);
    % decimate the pulse to get the samples at the symbol rate
    decimated_equalized_pulse=equalized_pulse(1:2:length(equalized_pulse));
    % plotting commands follow

Figure 6.30: Plot of the equalized pulse in Illustrative Problem 6.11.

6.6.1 Adaptive Linear Equalizers

We have shown that the tap coefficients of a linear equalizer can be determined by solving a set of linear equations. In the zero-forcing optimization criterion, the linear equations are given by (6.6.8); if the optimization criterion is based on minimizing the MSE, the optimum equalizer coefficients are determined by solving the set of linear equations given by (6.6.16). In both cases, we may express the set of linear equations in the general matrix form

    B c = d                                                                  (6.6.18)

where B is a (2K+1) x (2K+1) matrix, c is a column vector representing the 2K+1 equalizer coefficients, and d is a (2K+1)-dimensional column vector. The solution of (6.6.18) yields

    c_opt = B^{-1} d                                                         (6.6.19)

In practical implementations of equalizers, the solution of (6.6.18) for the optimum coefficient vector is usually obtained by an iterative procedure that avoids the explicit computation of the inverse of the matrix B.
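Before turning to iterative solutions, here is a direct numerical check of the MMSE design of Illustrative Problem 6.11, sketched in Python/NumPy in place of the MATLAB script. It uses the same R_y = X'X + (N_0/2)I structure and also illustrates the regularizing effect of the noise term: a larger N_0 shrinks the equalizer gains, which is exactly the mechanism by which the MMSE solution avoids the noise enhancement of the zero-forcing equalizer.

```python
import numpy as np

x = lambda t: 1.0 / (1.0 + (2.0 * t) ** 2)   # channel-distorted pulse, with T = 1
K = 2
m = np.arange(-K, K + 1)
X = x(m[:, None] - 0.5 * m[None, :])          # X[m, n] = x(mT - n*T/2)
R_ay = x(0.5 * m)                             # [1/5 1/2 1 1/2 1/5]

def mmse_taps(N0):
    Ry = X.T @ X + (N0 / 2.0) * np.eye(2 * K + 1)
    return np.linalg.solve(Ry, R_ay)

c_small = mmse_taps(0.01)                     # the value used in the problem
c_large = mmse_taps(1.0)                      # a (hypothetical) much noisier channel
print(np.linalg.norm(c_small), np.linalg.norm(c_large))
```

Since R_y = X'X + (N_0/2)I acts like a ridge (regularized) system, the norm of the tap vector decreases monotonically as N_0 grows: the equalizer becomes more conservative rather than inverting the channel exactly.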
The simplest iterative procedure is the method of steepest descent, in which one begins by arbitrarily choosing the coefficient vector, say c_0. This initial choice corresponds to a point on the quadratic MSE criterion surface in the (2K+1)-dimensional space of coefficients. The gradient vector, found by taking the derivatives of the MSE with respect to each of the 2K+1 filter coefficients, is then computed at this point on the criterion surface, and each tap coefficient is changed in the direction opposite to its corresponding gradient component; the change in the jth tap coefficient is proportional to the size of the jth gradient component. At the kth iteration the gradient vector is

    g_k = B c_k - d                                                          (6.6.20)

and the coefficient vector c_k is updated according to the relation

    c_{k+1} = c_k - DELTA g_k                                                (6.6.21)

where DELTA is the step-size parameter for the iterative procedure. To ensure convergence, DELTA is chosen to be a small positive number. In that case the gradient vector g_k converges toward zero, i.e., g_k -> 0 as k -> infinity, and the coefficient vector c_k -> c_opt, as illustrated in Figure 6.31 for a two-dimensional optimization.

Figure 6.31: Example of the convergence characteristics of a gradient algorithm.

In general, convergence of the equalizer tap coefficients to c_opt cannot be attained in a finite number of iterations with the steepest-descent method. However, the optimum solution c_opt can be approached as closely as desired in a few hundred iterations. In digital communication systems that employ channel equalizers, each iteration corresponds to a time interval for sending one symbol; hence, a few hundred iterations to achieve convergence to c_opt corresponds to a fraction of a second.

Adaptive channel equalization is required for channels whose characteristics change with time. In such a case, the ISI varies with time, and the channel equalizer must track these time variations in the channel response and adapt its coefficients to reduce the ISI. In the context of the above discussion, the optimum coefficient vector c_opt varies with time due to time variations in the matrix B and, for the case of the MSE criterion, time variations in the vector d. Under these conditions, the iterative method described above can be modified to use estimates of the gradient components. Thus, the algorithm for adjusting the equalizer tap coefficients may be expressed as

    c^_{k+1} = c^_k - DELTA g^_k                                             (6.6.22)

where g^_k denotes an estimate of the gradient vector g_k and c^_k denotes the estimate of the tap coefficient vector.

In the case of the MSE criterion, the gradient vector g_k given by (6.6.20) may also be expressed as

    g_k = -E(e_k y_k*)

An estimate g^_k of the gradient vector at the kth iteration is computed as

    g^_k = -e_k y_k*                                                         (6.6.23)

where e_k denotes the difference between the desired output from the equalizer at the kth time instant and the actual output z(kT), and y_k denotes the column vector of the 2K+1 received signal values contained in the equalizer at time instant k. The error signal e_k is expressed as

    e_k = a_k - z_k                                                          (6.6.24)

where z_k = z(kT) is the equalizer output given by (6.6.13) and a_k is the desired symbol. Hence, by substituting (6.6.23) into (6.6.22), we obtain the adaptive algorithm for optimizing the tap coefficients (based on the MSE criterion) as

    c^_{k+1} = c^_k + DELTA e_k y_k*                                         (6.6.25)

Since an estimate of the gradient vector is used in (6.6.25), the algorithm is called a stochastic gradient algorithm; it is also known as the LMS algorithm.

A block diagram of an adaptive equalizer that adapts its tap coefficients according to (6.6.25) is illustrated in Figure 6.32. Note that the difference between the desired output a_k and the actual output z_k from the equalizer is used to form the error signal e_k. This error is scaled by the step-size parameter DELTA, and the scaled error signal DELTA e_k multiplies the received signal values {y(kT - n tau)} at the 2K+1 taps. The products DELTA e_k y*(kT - n tau) at the 2K+1 taps are then added to the previous values of the tap coefficients to obtain the updated tap coefficients, according to (6.6.25). This computation is repeated as each new signal sample is received; thus, the equalizer coefficients are updated at the symbol rate.

Figure 6.32: Linear adaptive equalizer based on the MSE criterion.

Initially, the adaptive equalizer is trained by the transmission of a known pseudorandom sequence {a_m} over the channel. At the demodulator, the equalizer employs the known sequence to adjust its coefficients. Upon initial adjustment, the adaptive equalizer switches from a training mode to a decision-directed mode, in which case the decisions at the output of the detector are sufficiently reliable that the error signal is formed by computing the difference between the detector output and the equalizer output, i.e.,

    e_k = a~_k - z_k                                                         (6.6.26)

where a~_k is the output of the detector. In general, decision errors at the output of the detector occur infrequently; consequently, such errors have little effect on the performance of the tracking algorithm given by (6.6.25).

A rule of thumb for selecting the step-size parameter in order to ensure convergence and good tracking capabilities in slowly varying channels is

    DELTA = 1 / (5 (2K+1) P_R)                                               (6.6.27)

where P_R denotes the received signal-plus-noise power, which can be estimated from the received signal.

Illustrative Problem 6.12 Let us implement an adaptive equalizer based on the LMS algorithm given in (6.6.25). The number of taps selected for the equalizer is 2K+1 = 11, and the received signal-plus-noise power P_R is normalized to unity. The channel characteristic is given by the vector x as

    x = [0.05  -0.063  0.088  -0.126  -0.25  0.9047  0.25  0  0.126  0.038  0.088]

Solution: The convergence characteristics of the stochastic gradient algorithm in (6.6.25) are illustrated in Figure 6.33 for two different step sizes. These graphs were obtained from a computer simulation of the 11-tap adaptive equalizer; they represent the mean-square error averaged over several realizations.
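The simulation loop is straightforward to reproduce. The sketch below (Python/NumPy in place of the book's MATLAB, using the same 11-tap channel vector, step size, and training arrangement) runs the LMS recursion (6.6.25) and averages the squared error over independent realizations; the averaged error should fall substantially as the taps converge.

```python
import numpy as np

rng = np.random.default_rng(0)
isi = np.array([0.05, -0.063, 0.088, -0.126, -0.25, 0.9047,
                0.25, 0.0, 0.126, 0.038, 0.088])    # channel; main tap at index 5
N, K, delta, sigma = 500, 5, 0.115, 0.01
n_real = 200                                        # realizations to average over
mse_av = np.zeros(N - 2 * K)

for _ in range(n_real):
    info = rng.choice([-1.0, 1.0], size=N)          # known training symbols
    y = np.convolve(info, isi)[:N] + sigma * rng.standard_normal(N)
    c = np.zeros(2 * K + 1)
    c[K] = 1.0                                      # initial tap estimate
    for k in range(N - 2 * K):
        y_k = y[k:k + 2 * K + 1]                    # 2K+1 samples in the delay line
        e_k = info[k] - c @ y_k                     # error signal, eq. (6.6.24)
        c = c + delta * e_k * y_k                   # LMS update, eq. (6.6.25)
        mse_av[k] += e_k ** 2
mse_av /= n_real
print(mse_av[:5].mean(), mse_av[-50:].mean())       # early vs. converged average MSE
```

The alignment of `info[k]` with the window `y[k:k+11]` exploits the fact that the channel's dominant tap sits at index 5, so the desired symbol appears at the center of the equalizer delay line.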
As shown, when A is decreased, the convergence is slowed somewhat, but a lower MSE is achieved, indicating that the estimated coefficients are closer to c 0 p ( . The MATLAB script for this example is given below. % MATLAB echo on N=500; script for Illustrative Problem 12, Chapter 6. sequence % length of the information 270 CHAPTER 6. DIGITAL TRANSMISSION THROUGH 8 A NDLI MI TED CHANNELS Number of iterations Figure 6.33: Initial convergence characteristics of the LMS algorithm with different step sizes. K=5; actuaLisi=[0.05 - 0 . 0 6 3 0 . 0 8 8 - 0 . 1 2 6 - 0 . 2 5 0 . 9 0 4 7 0 . 2 5 0 0.126 0 . 0 3 8 0.088]: sigma=0.01; delta=0.115; Num.of.reaiizat]on$=1000, mse_av=zeros(1,N-2*K); for j=1 :Num_oLrealizauons. % compute the average over a number of % the information sequence for i=1:N, if (rand<0.5), info(i)=-1; else info(i)=1; end; end; % the channel output y=fiUer(actuaLisi,1 ,info); for i=1:2:N, [noised) noiseii+1 ))=gngauss(sigma); end: y=y+noise; % now the equalization part follows realizations 6.7. Nonlinear Equalizers 271 estimated.t=[0 0 0 0 0 1 0 0 0 0 for k = 1 : N - 2 * K . y_k=y(k;k+2*K); z_k=estima[ecLc*y_k.' ; e.k=info(k)—z_k; 0]; % initial estimate of ISI estimated_c=esUmated_c+delta*e.k*y.k; mse(k)=e_k"2. end; mse .av=mse. av+mse; end; mse_av=mse-av/Num_oLrea!izations; % plotting commands follow % iwu«-.w/iKire error versus iterations Although we have described in some detail the operation of an adaptive equalizer that is optimized on the basis of the M S E criterion, the operation of an adaptive equalizer based on the zero-forcing method is very similar. The major difference lies in the method for estimating the gradient vectors at each iteration. A block diagram of an adaptive zero-forcing equalizer is shown in Figure 6.34. Figure 6.34: An adaptive zero-forcing equalizer. 6.7 Nonlinear Equalizers The linear filter equalizers described above are very effective on channels, such as wire line telephone channels, where the ISI is not severe. 
The severity of the ISI is directly related to the spectral characteristics of the channel and not necessarily to the time span of the ISI. For example, consider the ISI resulting from two channels, illustrated in Figure 6.35. T h e time span for the ISI in Channel A is five symbol intervals on each side of the desired signal 272 CHAPTER 6. DIGITAL TRANSMISSION THROUGH 8 A NDLI MI TED CHANNELS component, which has a value of 0.72. On the other hand, the time span for the ISI in Channel B is one symbol interval on each side of the desired signal component, which has a value of 0.815. The energy of the total response is normalized to unity for both channels. 0.815 0.72 0.4 07 0.36 Q 2 1 0.4 07 0 04 0.07 . T T i . I .05 r | -0.21 -0.5 I ? I "t3 T r — T - °07 la) Channel A (b) Channel B Figure 6.35: Two channels with ISI. In spite of the shorter ISI span, channel B results in more severe ISI. This is evidenced in the frequency response characteristics of these channels, which are shown in Figure 6.36. We observe that channel B has a spectral null (the frequency response C ( f ) = 0 for some frequencies in the band | / | < W) at / = 1/2T. whereas this does not occur in the case of channel A. Consequently, a linear equalizer will introduce a large gain in its frequency response to compensate for the channel null. Thus, the noise in channel B will be enhanced much more than in channel A. This implies that the performance of the linear equalizer for channel B will be sufficiently poorer than that for channel A. In general, the basic limitation of a linear equalizer is that it performs poorly on channels having spectral nulls. Such channels are often encountered in radio communications, such as ionospheric transmission at frequencies below 30 MHz and mobile radio channels, such as those used for cellular radio communications. 
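The spectral null in channel B is easy to exhibit numerically. Taking channel B's sampled response as (0.407, 0.815, 0.407) (the values shown in Figure 6.35), its T-spaced frequency response essentially vanishes at f = 1/2T, since |0.815 - 2*0.407| = 0.001. The Python/NumPy sketch below evaluates |C(f)| and the gain 1/|C(f)| that a zero-forcing linear equalizer would need there:

```python
import numpy as np

b = np.array([0.407, 0.815, 0.407])        # channel B samples (T-spaced)
f = np.linspace(0.0, 0.5, 501)             # frequency in cycles per symbol (f*T)

# frequency response of the T-spaced channel: C(f) = sum_k b_k e^{-j 2 pi f k}
C = b @ np.exp(-2j * np.pi * np.outer(np.arange(3), f))
mag = np.abs(C)

print(mag[0])                              # gain at f = 0 (= 1.629)
print(mag[-1])                             # gain at f = 1/2T: a near-null of 0.001
print(1.0 / mag[-1])                       # the zero-forcing gain needed there
```

Any noise falling near f = 1/2T is amplified by this enormous factor, which is why a linear equalizer performs so poorly on channel B despite its short ISI span.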
A decision-feedback equalizer (DFE) is a nonlinear equalizer that employs previous decisions to eliminate the ISI caused by previously detected symbols on the current symbol to be detected. A simple block diagram for a DFE is shown in Figure 6.37.

Figure 6.37: Block diagram of a DFE.

The DFE consists of two filters. The first filter is called a feedforward filter; it is generally a fractionally spaced FIR filter with adjustable tap coefficients. This filter is identical in form to the linear equalizer described above. Its input is the received filtered signal y(t), sampled at some rate that is a multiple of the symbol rate, e.g., at rate 2/T. The second filter is a feedback filter. It is implemented as an FIR filter with symbol-spaced taps having adjustable coefficients. Its input is the set of previously detected symbols, and its output is subtracted from the output of the feedforward filter to form the input to the detector. Thus, we have

    z_m = SUM_{n=1}^{N_1} c_n y(mT - n tau) - SUM_{n=1}^{N_2} b_n a~_{m-n}

where {c_n} and {b_n} are the adjustable coefficients of the feedforward and feedback filters, respectively; a~_{m-n}, n = 1, 2, ..., N_2, are the previously detected symbols; N_1 is the length of the feedforward filter; and N_2 is the length of the feedback filter. Based on the input z_m, the detector determines which of the possible transmitted symbols is closest in distance to z_m; it makes its decision and outputs a~_m. What makes the DFE nonlinear is the nonlinear characteristic of the detector that provides the input to the feedback filter.

The tap coefficients of the feedforward and feedback filters are selected to optimize some desired performance measure. For mathematical simplicity, the MSE criterion is usually applied, and a stochastic gradient algorithm is commonly used to implement an adaptive DFE. Figure 6.38 illustrates the block diagram of an adaptive DFE whose tap coefficients are adjusted by means of the LMS stochastic gradient algorithm.

Figure 6.38: Adaptive DFE.

We should mention that the decision errors from the detector that are fed to the feedback filter have a small effect on the performance of the DFE. In general, a small loss in performance of 1 to 2 dB is possible at error rates below 10^-2, but the decision errors in the feedback filter are not catastrophic.

Although the DFE outperforms a linear equalizer, it is not the optimum equalizer from the viewpoint of minimizing the probability of error in the detection of the information sequence {a_k} from the received signal samples {y_k}. In a digital communication system that transmits information over a channel that causes ISI, the optimum detector is a maximum-likelihood symbol sequence detector, which produces at its output the most probable symbol sequence {a^_k} for the given received sampled sequence {y_k}. That is, the detector finds the sequence {a^_k} that maximizes the likelihood function

    LAMBDA({a_k}) = ln p({y_k} | {a_k})

where p({y_k} | {a_k}) is the joint probability of the received sequence {y_k} conditioned on {a_k}. The detector that finds the sequence of symbols maximizing this joint conditional probability is called the maximum-likelihood sequence detector. An algorithm that implements maximum-likelihood sequence detection (MLSD) is the Viterbi algorithm, which was originally devised for decoding convolutional codes, as described in Section 8.3.2. For a description of this algorithm in the context of sequence detection in the presence of ISI, the reader is referred to [3, 4].

The major drawback of MLSD for channels with ISI is the exponential growth of its computational complexity as a function of the span of the ISI. Consequently, MLSD is practical only for channels where the ISI spans only a few symbols and the ISI is severe, in the sense that it causes a severe degradation in the performance of a linear equalizer or a decision-feedback equalizer. For example, Figure 6.39 illustrates the error probability performance of the Viterbi algorithm for a binary PAM signal transmitted through channel B (see Figure 6.35). For purposes of comparison, we also illustrate the probability of error for a DFE. Both results were obtained by computer simulation. We observe that the performance of the ML sequence detector is about 4.5 dB better than that of the DFE at an error probability of 10^-4. Hence, this is one example where the ML sequence detector provides a significant performance gain on a channel with a relatively short ISI span.

In conclusion, channel equalizers are widely used in digital communication systems to mitigate the effects of the ISI caused by channel distortion. Linear equalizers are generally used for high-speed modems that transmit data over telephone channels. For wireless (radio) transmission, such as mobile cellular communications and interoffice communications, the multipath propagation of the transmitted signal results in severe ISI; such channels require more powerful equalizers. The decision-feedback equalizer and the MLSD are two nonlinear channel equalizers that are suitable for radio channels with severe ISI.

Figure 6.39: Error probability of the Viterbi algorithm and of the DFE (with correct bits fed back and with detected bits fed back) for a binary PAM signal transmitted through channel B in Figure 6.35, as a function of SNR in dB (10 log gamma).
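The feedback-cancellation idea behind the DFE can be made concrete with a minimal sketch. Assume, hypothetically, a T-spaced channel with a single postcursor, h = (1, 0.9), a trivial one-tap feedforward filter, and a one-tap feedback filter b_1 = 0.9 matched to the postcursor. In the absence of noise, subtracting b_1 * a~_{m-1} removes the postcursor ISI exactly, so a simple sign detector recovers the data:

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.choice([-1.0, 1.0], size=200)     # transmitted binary symbols
h = np.array([1.0, 0.9])                  # hypothetical channel: strong postcursor
y = np.convolve(a, h)[:a.size]            # noise-free channel output

b1 = 0.9                                  # feedback tap matched to the postcursor
a_hat = np.zeros_like(a)
prev = 0.0                                # no symbol precedes the first one
for m in range(a.size):
    z = y[m] - b1 * prev                  # feedforward output minus feedback output
    a_hat[m] = 1.0 if z >= 0 else -1.0    # detector decision
    prev = a_hat[m]                       # the decision enters the feedback filter

print(np.array_equal(a_hat, a))           # True: the postcursor ISI is fully cancelled
```

Note the nonlinearity: the feedback path carries detected symbols, not the received samples, so (unlike a linear equalizer) the subtraction removes ISI without amplifying noise; the cost is the possibility of error propagation when a decision is wrong, which, as noted above, typically costs only 1 to 2 dB.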
Problems

6.1 The Fourier transform of the rectangular pulse in Illustrative Problem 6.1 and the power spectrum S_v(f) can be computed numerically with MATLAB by using the discrete Fourier transform (DFT) or the FFT algorithm. Let T = 1 for this computation. Sample the rectangular pulse g(t) at t = k/10 for k = 0, 1, 2, ..., 127. This yields the sequence {g_k} of sample values of g(t). Use MATLAB to compute the 128-point DFT of {g_k} and plot the values |G_m|^2 for m = 0, 1, 2, ..., 127. Also plot the exact spectrum |G(f)|^2 given in (6.13) and compare the two results.

6.2 Repeat the computation in Problem 6.1 when the pulse g(t) is given as ..., and 0 otherwise.

6.3 Write a MATLAB program to compute the power spectrum S_v(f) of the signal V(t) when the pulse g(t) is given as ..., and 0 otherwise, and the sequence {a_n} of signal amplitudes has the specified correlation function. Let us normalize T = 1 and σ_a^2 = 1.

6.4 Use MATLAB to design an FIR linear-phase filter that models a lowpass bandlimited channel that has a 1/2-dB ripple in the passband |f| < 3000 Hz and a stopband attenuation of -40 dB for |f| > 3500 Hz.

6.5 Write a MATLAB program to design an FIR linear-phase filter that models a lowpass bandlimited channel with desired amplitude response A(f) = 1 for |f| < 3000 and A(f) = 0 for |f| > 3000, via the window method, using a Hanning window. Plot the impulse response and the frequency response.

6.6 Write a MATLAB program that generates the impulse response for the two-path multipath channel in Illustrative Problem 6. , and plot the impulse response for p = 0.95 and a delay of 5 samples.

Then form |G_T(k)|^2 and compute the (2N - 1)-point inverse DFT of |G_T(k)|^2, and map the precoded sequence into the transmitted amplitude levels {a_k}.
Plot the theoretical error probability for binary PAM with no IS I and compare the Monte Carlo simulation results with this ideal performance.9 Write a MATLAB program that generates the sampled version of the transmit filter impulse response g r i t ) given by (6.9) for an arbitrary value of the roll-off factor a .22) to form the input to the detector and use the detection rule in (6.5.000 bits and measure the bit-error probability for cr2 = 0.36). a2 = OA.000 binary data {±1} are transmitted through this channel. and measure the error rate when 10. Perform the simulation for 10. recover the data signal {D*}.278 CHAPTER 6.5 and a 2 = 1. Run the program for any pseudorandom data sequence (Ojt} for M = 2 and M — 4 transmitted amplitude levels and check the results.9 and compare this result with the ideal impulse response obtained by sampling xrc(t) at a rate Fx = 4/7".5.1. I I. Describe the major differences. using the relation given by (6. Plot and compare the frequency response of the discrete-time filters with those in Problem 6.10 Write a MATLAB program that computes the overall impulse response of the cascade of any transmit filter gr ( " ) with its matched filter at the receiver.28) to recover the data. (Use a 4N-point D F T of gr(n) by padding gT (n) with 3 N zeros. cr 2 = 0. a2 = 0. otherwise .7 Write a MATLAB simulation program that implements channel I in Illustrative Problem 6.9. where the precoding and amplitude sequence {a*} are performed as in Illustrative Problem 6.2. « = 0 n = ±l 0. You should observe some small degradation in the performance of the duobinary system.13 Write a MATLAB program that performs a Monte Carlo simulation of a binary PAM communication system that employs duobinary signal pulse. 6.8 Repeat Problem 6.) 6. 6.0. evaluate and plot the magnitude of the frequency response characteristic for this filter. Then from the transmitted sequence. Add Gaussian noise to the received sequence {£>*} as indicated in (6. 
6.7 for the following channel: 6. The channel is corrupted by A W G N with variance a" = 0. This yields Gj{k) .9.12 Write a MATLAB program that takes a data sequence {Z)t}.1 (or more) 0 ' s and compute the (2/V — l)-point (or more) DFT.11 Repeat Problem 6. 0.5.6. DIGITAL TRANSMISSION THROUGH 8 A NDLI MI TED CHANNELS 6. Evaluate and plot gj(n) for a = { and N — 31.9 for N = 21 and N = 4 ! . precodes it for a modified duobinary pulse transmission system to produce {/>*}. 6. Evaluate this overall impulse response for the filter in Problem 6. form the received noise-free sequence (£>* = a k — £4-2} and. Also. cr 2 = 0. Pad grin) with N . and cr2 — I.5. This computation may be performed by use of the D F T as follows.5. Evaluate and plot the output of this zero-forcing equalizer for 50 sampled values. The pulse is sampled at the rate 2/7" and equalized by a zero-forcing equalizer with 2fi" -f 1 = 11 taps. a closer match to X r c ( / ) ' . Also.01. Nonlinear Equalizers 279 6.0. What are the major differences in these frequency response characteristics? 6. Does the higher sampling rate result in a better frequency response characteristic.5. given as input the sampled values of the pulse x(t) taken at the symbol rate and the spectral density of the additive noise Nq. 6. 0.16 for N = 21 and N = 41. Write a MATLAB program to solve for the coefficients of the zero-forcing equalizer. N0 = 0. 6.5. 6. 0. compute and plot the output of the cascade of this filter with its matched filter using the procedure described in Problem 6. evaluate and plot the magnitude of the frequency response of this filter.20 Repeat Problem 6. Compare the frequency responses of these filters with the frequency response of the filter designed in Problem 6. Also..16.1. A = 5.01. i. = 0 n = ±l n = ±3 n = ±4 NO = 0.18).9 for a sampling rate Fv = 8/7".r(f) are 1. and Nq ~ 0.7. 
Evaluate and plot GRIN) for N = 31.16 Write a MATLAB program that generates the sampled version of the transmit filter impulse response g r i t ) given by (6.14. and N = 61. Use the program to evaluate the coefficients of an 11 -tap equalizer when the sampled values of .1.17 Repeat Problem 6.19 and comment on the results as No is varied. Does this higher sampling rate result in a better approximation of the discrete-time filter impulse response to the ideal filter impulse response? 6.21) for the modified duobinary pulse specified by (6. = 8/7". ' 6 . evaluate the minimum MSE for the optimum equalizer coefficients.. and No .18 Repeat Problem 6.6.3.21 Write a genera! MATLAB program for computing the tap coefficients of an FIR equalizer of arbitrary length 2 K 4-1 based on the MSE criterion.15 For the filter designed in Problem 6.10 for the filter designed in Problem 6.10.19 for a MSE equalizer with Sq = 0. 0. Compare this sampled impulse response with the ideal impulse response obtained by sampling x r c ( 0 at a rate Z7.e.5.14 Repeat Problem 6.16. . 6. Compare these equalizer coefficients with those obtained in Problem 6.19 Consider the channel distorted pulse jt(r) given in Illustrative Problem 6.1.1.10. /Vo = 0. 6. DIGITAL TRANSMISSION THROUGHBANDLIMITED CHANNELS 6. Perform a Monte Carlo simulation of the system using 1000 training (binary) symbols and 10.24. The MSE equalizer is also an FIR filter with symbol-spaced tap coefficients.1. the equalizer employs the output of the detector in forming the error signal. Compare the measured error rate with that of an ideal channel with no ISJ.21.21. Training symbols are transmitted initially to train the equalizer.22 For the channel characteristics given in Problem 6. and Nq = 1.280 CHAPTER 6. Write a MATLAB program that computes the output of the equalizer of a specified length when the input is the sampled channel characteristic. For simplicity.24 . The channel is modeled as an FIR filter with symbol-spaced values. 
symbol-spaced equalizer for the channel response given in Problem 6. AWCN Figure P6. Use the program to evaluate the output of a zero-forcing. Compare these equalizer coefficients with the values of the coefficients obtained in Problem 6. 6.23 The amount of residual IS1 at the output of an equalizer can be evaluated by convolving the channel sampled response with the equalizer coefficients and observing the resulting output sequence. Use M) = 0.21.01. evaluate the coefficients of the MSE equalizer and the minimum MSE when the number of equalizer taps is 2 ! . In the data mode.24 Write a MATLAB Monte Carlo simulation program that simulates the digital communication system that is modeled in Figure P6. consider the case where the equalizer is a symbol-spaced equalizer and the channel sampled response also consists of symbol-spaced samples.000 binary data symbols for the channcl model given in Problem 6.21 and comment on whether the reduction in the MSE obtained with the longer equalizer is sufficiently large to justify its use. 1) where Am is the amplitude of the mth waveforms and gr(t) is a pulse whose shape determines the spectral characteristics of the transmitted signal.Chapter 7 Digital Transmission via Carrier Modulation 7. Recall that the signal amplitude takes the discrete values Am = (2m — 1 — M)d. as illustrated in Figure 7. the signal waveforms have the form sm(t) = AmgT(t) (7. most communication channels are bandpass channels. we consider four types of carrier-modulated signals that are suitable for bandpass channels: amplitude-modulated signals. 7. 281 m = 1. In this chapter. the only way to transmit signals through such channels is by shifting the frequency of the information-bearing signal to the frequency band of the channel.2. the information-bearing signal is transmitted directly through the channel without the use of a sinusoidal carrier. where W is the bandwidth of | G r ( / ) | 2 . hence. In such a case.2. 
and frequency-shift keying.

7.1 Preview

In the two previous chapters we considered the transmission of digital information through baseband channels.

7.2 Carrier-Amplitude Modulation

In baseband digital PAM, the spectrum of the baseband signals is assumed to be contained in the frequency band |f| ≤ W, where W is the bandwidth of |G_T(f)|^2. Recall that the signal amplitude takes the discrete values A_m = (2m - 1 - M)d, m = 1, 2, ..., M, where 2d is the Euclidean distance between two adjacent signal points. To transmit the digital signal waveforms through a bandpass channel, the baseband signal waveforms s_m(t), m = 1, 2, ..., M, are multiplied by a sinusoidal carrier of the form cos 2πf_c t, as shown in Figure 7.2, where f_c is the carrier frequency (f_c ≫ W) and corresponds to the center frequency in the passband of the channel. Thus, the transmitted signal waveforms are expressed as

u_m(t) = A_m g_T(t) cos 2πf_c t,  m = 1, 2, ..., M    (7.2.2)

In the special case when the transmitted pulse shape g_T(t) is rectangular, i.e.,

g_T(t) = √(2/T) for 0 ≤ t ≤ T, and 0 otherwise    (7.2.3)

the amplitude-modulated carrier signal is usually called amplitude-shift keying (ASK). In this case the PAM signal is not bandlimited.

Figure 7.1: Energy density spectrum of the transmitted signal g_T(t).

Figure 7.2: Amplitude modulation of a sinusoidal carrier cos(2πf_c t) by the baseband PAM signal.

We note that impressing the baseband signal s_m(t) onto the amplitude of the carrier signal cos 2πf_c t does not change the basic geometric representation of the digital PAM signal waveforms. Since multiplication of two signals in the time domain corresponds to the convolution of their spectra in the frequency domain, amplitude modulation of the carrier cos 2πf_c t by the baseband signal waveforms s_m(t) shifts the spectrum of the baseband signal by an amount f_c and, thus, places the signal
into the passband of the channel, as illustrated in Figure 7.3. The bandpass signal is a double-sideband suppressed-carrier (DSB-SC) AM signal. Recall that the Fourier transform of the carrier is [δ(f - f_c) + δ(f + f_c)]/2. Hence, the spectrum of the amplitude-modulated signal is

U_m(f) = (A_m/2)[G_T(f - f_c) + G_T(f + f_c)]    (7.2.4)

Figure 7.3: Spectra of (a) the baseband and (b) the amplitude-modulated signal.

The bandpass PAM signal waveforms may be represented in general as

u_m(t) = s_m ψ(t)    (7.2.5)

where the signal waveform ψ(t) is defined as

ψ(t) = g_T(t) cos 2πf_c t    (7.2.6)

and

s_m = A_m,  m = 1, 2, ..., M    (7.2.7)

denotes the signal points that take the M values on the real line, as shown in Figure 7.4.

Figure 7.4: Signal-point constellation for the PAM signal (points at ..., -5d, -3d, -d, 0, d, 3d, 5d, ...).

The signal waveform ψ(t) is normalized to unit energy; that is,

∫ ψ^2(t) dt = 1    (7.2.8)

But

∫ ψ^2(t) dt = (1/2) ∫ g_T^2(t) dt + (1/2) ∫ g_T^2(t) cos 4πf_c t dt    (7.2.9)

and

∫ g_T^2(t) cos 4πf_c t dt = 0    (7.2.10)

because the bandwidth W of g_T(t) is much smaller than the carrier frequency, i.e., f_c ≫ W. In such a case, g_T(t) is essentially constant within any one cycle of cos 4πf_c t; hence, the integral in (7.2.10) is equal to zero for each cycle of the integrand. In view of (7.2.10), it follows that

(1/2) ∫ g_T^2(t) dt = 1    (7.2.11)

Therefore, g_T(t) must be appropriately scaled so that (7.2.8) and (7.2.11) are satisfied.
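The frequency shift described by (7.2.4) can be visualized numerically. Here is a Python/NumPy sketch (the book performs a similar computation in MATLAB in Illustrative Problem 7.1); the pulse, sampling rate, and carrier frequency are arbitrary choices:

```python
import numpy as np

fs = 1000.0                          # sampling rate (arbitrary)
t = np.arange(-5, 5, 1 / fs)
g = np.sinc(t)                       # baseband pulse g_T(t), bandwidth W = 0.5 Hz
fc = 40.0                            # carrier frequency, fc >> W
u = g * np.cos(2 * np.pi * fc * t)   # DSB-SC amplitude-modulated signal

f = np.fft.fftfreq(len(t), 1 / fs)
G = np.abs(np.fft.fft(g))            # baseband spectrum magnitude
U = np.abs(np.fft.fft(u))            # bandpass spectrum magnitude

f_peak_baseband = abs(f[np.argmax(G)])    # energy concentrated near f = 0
f_peak_modulated = abs(f[np.argmax(U)])   # energy concentrated near f = fc
```

The modulated spectrum is centered at ±fc, and its peak magnitude is half that of the baseband spectrum, matching the factor A_m/2 in (7.2.4).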
Figure 7.f (r) are the quadrature components of the notse.2.2.+n (7.14) where n represents the additive noise component at the output of the correlator. Us variance can be expressed as /oo I* ( f ) \ -00 2 S n ( f ) d f (7. The Fourier transform of i jr(t) is .2.7.5.15) where * £ ( / ) is the Fourier transform of i/f (r) and Sn ( / ) is the power spectral density of the additive noise. which is represented as n(t) = nL (t) cos 2nfct .2.6). (r) and «. For illustrative purposes we consider a correlation-type demodulator. as shown in Figure 7.1 Demodulation of PAM Signals The demodulation of a bandpass digital PAM signal may be accomplished in one of several ways by means of correlation or matched filtering. By cross correlating the received signal r(t) with i j / ( t ) given by (7.2. echo oil T=1. % MATLAB script for Illustrated Problem !.14). delta. else .2. for i=1:N.1 In an amplitude-modulated digital PAM system. % amplitude t=-5*T-t-delta-T:delta-T:5*T. .J if — f | < J ~ otherwise w (7. DIGITAL TRANSMISSION VIA CARRIER MODULATION *(/) - \ [Gr(f .16) and the power spectral density of the bandpass additive noise process is i Na S A f ) =\ 2° [0.17) By substituting (7. .5. which is the input to the amplitude detector. that the probability of error of the optimum detector for the carrier-modulated PAM signal is identical to that of baseband PAM. Figure 7.2.2. g_T(i) = sinc(t(i)/T)*(cos(pi#alpha*t(i)/T)/(1 —4*alpha"2*t{i)~2/T"2)). Evaluate and graph the spectrum of the baseband signal and the spectrum of the amplitude-modulated signal. the transmitter filter with impulse response gj(f) has a square-root raised-cosine spectral characteristic as described in Illustrative Problem 6. The carrier frequency is / r = 4 0 / T .17) into (7.— l)iVu where £ av i> is the average energy per bit.6 illustrates these two spectral characteristics. % earner frequency A_m=1.2. % roll-off factor fc=40/T.5. we obtain <*l = M)/2.T=T/200.f r ) + Grif + /c)] (7-2.7. 
The Fourier transform of ψ(t) is

Ψ(f) = (1/2)[G_T(f - f_c) + G_T(f + f_c)]    (7.2.16)

and the power spectral density of the bandpass additive noise process is

S_n(f) = N_0/2 for ||f| - f_c| ≤ W, and 0 otherwise    (7.2.17)

By substituting (7.2.16) and (7.2.17) into (7.2.15) and evaluating the integral, we obtain σ_n^2 = N_0/2.

It is apparent from (7.2.14) that the probability of error of the optimum detector for the carrier-modulated PAM signal is identical to that of baseband PAM; that is,

P_M = (2(M - 1)/M) Q( √( (6 log2 M)/(M^2 - 1) · E_avb/N_0 ) )

where E_avb is the average energy per bit.

ILLUSTRATIVE PROBLEM

Illustrative Problem 7.1 In an amplitude-modulated digital PAM system, the transmitter filter with impulse response g_T(t) has a square-root raised-cosine spectral characteristic as described in Illustrative Problem 6.5, with a roll-off factor α = 0.5. The carrier frequency is f_c = 40/T. Evaluate and graph the spectrum of the baseband signal and the spectrum of the amplitude-modulated signal.

SOLUTION

Figure 7.6 illustrates these two spectral characteristics. The MATLAB script for this computation is given below.

% MATLAB script for Illustrative Problem 7.1
echo on
T=1;
delta_T=T/200;              % sampling interval
alpha=0.5;                  % roll-off factor
fc=40/T;                    % carrier frequency
A_m=1;                      % amplitude
t=-5*T+delta_T:delta_T:5*T; % time axis
N=length(t);
for i=1:N,
  if (abs(t(i))~=T/(2*alpha)),
    g_T(i)=sinc(t(i)/T)*(cos(pi*alpha*t(i)/T)/(1-4*alpha^2*t(i)^2/T^2));
  else
    g_T(i)=0;               % the value of g_T is 0 at t = T/(2*alpha)
  end;
end;
G_T=abs(fft(g_T));                 % spectrum of g_T
u_m=A_m*g_T.*cos(2*pi*fc*t);       % the modulated signal
U_m=abs(fft(u_m));                 % spectrum of the modulated signal
% actual frequency scale
f=-0.5/delta_T:1/(delta_T*(N-1)):0.5/delta_T;
% plotting commands follow
figure(1);
plot(f,fftshift(G_T));
axis([-1/T 1/T 0 max(G_T)]);
figure(2);
plot(f,fftshift(U_m));

Figure 7.6: Spectra of the baseband signal and the amplitude-modulated (bandpass) signal.

7.3 Carrier-Phase Modulation

In carrier-phase modulation the information that is transmitted over a communication channel is impressed on the phase of the carrier. Since the range of the carrier phase is 0 ≤ θ < 2π, the carrier phases used to transmit digital information via digital-phase modulation are θ_m = 2πm/M, for m = 0, 1, ..., M - 1, where M = 2^k and k is the number of information bits per transmitted symbol. The general representation of a set of M carrier-phase-modulated signal waveforms is

u_m(t) = A g_T(t) cos(2πf_c t + 2πm/M),  m = 0, 1, ..., M - 1    (7.3.1)

where g_T(t) is the transmitting filter pulse shape, which determines the spectral characteristics of the transmitted signal, and A is the signal amplitude. Thus, for binary phase modulation (M = 2), the two carrier phases are θ_0 = 0 and θ_1 = π rad. This type of digital phase
Consequently.i. The preferred assignment is to use Gray encoding. the signal amplitude is normalized to unity.2 Generate the constant-envelope PSK signal waveforms given by (1. For convenience. Thus.2.3. the energy of these two basis functions is normalized to unity. 4.3. = ( ^ c o s ^ y^sinl^n) (7. Hence. and 8 are illustrated in Figure 7.9) By appropriately normalizing the pulse shape gr(t). only a single bit error occurs in the fc-bit sequence with Gray encoding when noise causes the erroneous selection of an adjacent phase to the transmitted phase. f. Cluipter 7.u2). u2=sqn(2^Bs/T)«cos(2*pi«fc*t+4»pi/M). u5=sqrt(2*Es/T)»cos(2*pt*fc*t+10«pi/M).1. u3=sqrt(2»Es/T)*cos(2*pi»fc»t+6*pi/M). p)ot(t.8: PSK signal constellations. Figure 7.9 illustrates the eight waveforms for the case in which fc = 6/7". p!oi(t.uO).1. u7=sqrt(2*Es/T)»cos(2*pi»fc*t+14*pi/M): % plotting commands follow subplot(8.4). N=100. 1 00 010« 011 # # 001 I 1 \% 11 M = 2 no* ( M = 4 • 100 m* M = 8 So •lOl Figure 7.6). subplot(8. uI=sqn(2*Es/T)»cos(2*pi*fc*t+2»pi/M).1.1).3).2). The MATLAB script for this computation is given below. plot(t. delta_T=T/(N-1).5).1. M=8. Es=T/2. subplol(8. plot(t. u6=sqn(2*Es/T)*cos(2»pL»fc*t+12*pi/M). u0=sqrtt2*Es/T)*cos(2«pi+fc*i). % carrier % number frequency of samples .290 CHAPTER 7. script for Illustrative Problem 2.1. plot(t.1.u4). fc=6/T. DIGITAL TRANSMISSION VIA CARRIER MODULATION 0. % MATLAB e c h o on T=1. subplol(8. t=0:delta_T:T.u3): subplot(8.ul). u4=sqrt(2*Es/T)*cos(2»pi*fc*t+8*pi/M). subpIot{8. subplot(8.ns{t) sin 2 j r / c f (7.1.3.1.2 0.1. subplot(8.7). Carrier-Phase Modulation 291 plot(i.9: M = 8 constant-amplitude PSK waveforms.4 0.3.1 Phase Demodulation and Detection The received bandpass signal in a signaling interval from an AWGN channel may be expressed as r ( f ) =um{t) = um(t) + n(t) + nc(t) cos 2 7 x f c t .6 0. which may be expressed as . plot(t.5 0. p!ol(t.u7).9).u6).7 0. 
T h e outputs of the two correlators yield the noise-corrupted signal components.3.11) where n t ( i ) and n s ( f ) are the two quadrature components of the additive noise.6 0. 7.1 0. The received signal may be correlated with < j/\{t) and ^ 2 ( 0 given by (7.3 0.3. 0.9 t Figure 7.u5).8). J —r>o ». For M > 4. The variance of nL and is E(n~) = E(n. Since binary phase modulation is identical to binary PAM.n = ( v ^ cos ^ where nc and nx are defined as 1 f00 + nt / E j " sin ~ «.) (7.12) .15) Because all signals have equal energy.v = ^ / iT(/)».) = 0 and E(nrns) = 0.3.1 M . Four-phase modulation may be viewed as two binary phasemodulation systems on quadrature (orthogonal) carriers. Consequently.J— O O (7. Thus.1 (7. there is no simple .3. r2) as &r — tan"' — n (7. The probability of error at the detector for phase modulation in an AWGN channel may be found in any textbook that treats digital communications. sm) = r • sm.3. DIGITAL TRANSMISSION VIA CARRIER MODULATION r = sm 4.3.•(?) arc ?ero-mean Gaussian random processes that are uncorrected.) = y Nq (7. As a consequence.292 CHAPTER 7. m =0. £(«.3.(»)iir . we obtain the correlation metrics C ( r .16) and select the signal from the set {i m } whose phase is closest to 9r.. an equivalent deiector metric for digital phase modulation is to compute the phase of the received signal vector r = ( n .14) The optimum detector projects the received signal vector r onto each of the M possible transmitted signal vectors |s /ri } and selects the vector corresponding to the largest projection.•) = £(». the probability of a bit error is identical to that for binary phase modulation.13) The quadrature noise components n t -(/) and «. the probability of error is where is the energy per bit. two fc-bit symbols corresponding to adjacent signal phases differ in only a single bit. W h e n a Gray code is used in the mapping. 
The equivalent bit-error probability for M-ary phase modulation is also difficult to derive due to the dependence of the mapping of fc-bit symbols into the corresponding signal phases. Carrier-Amplitude Modulation 293 closed-form expression for the probability of a symbol error. Figure 7. A good approximation for P\t is where k — \og2M bits per symbol.7. most fc-bit . dB Figure 7.10 illustrates the symbol error probability as a function of the SNR S N R / b i t .2.10: Probability of a symbol error for M-ary PSK. Because the most probable errors d u e to noise result in the erroneous selection of an adjacent phase to the true phase. 3. 01.5). 11.1. and (0. We begin by generating a sequence of quaternary (2-bit) symbols that are mapped into the corresponding four-phase signal points.3 We shall perform a Monte Carlo simulation of an M — 4 PSK communication system that models the detector as the one that computes the correlation metrics given in (7.0. where the subintervals correspond to the pairs of information bits 00.3. These pairs of bits are used to select the signal phase vector sm. Hence.15).5. which is the output of the signal correlator and the input to the detector.0.0. we simulate the generation of the random vector r given by (7. we use a random number generator that generates a uniform random number in the range (0. JjjHiJMilfc As shown.1). respectively. The model for the system to be simulated is shown in Figure 7. (0. 10.11: Block diagram of an M = 4 PSK system for a Monte Carlo simulation.75).19) -^CBSEEElBEESEmm Illustrative Problem 7.25).12). the equivalent bit-error probability for A/-ary phase modulation is well approximated as Pb*\Pt4 k (7.25. as shown in Figure 7. DIGITAL TRANSMISSION VIA CARRIER MODULATION symbol errors contain only a single bit error. This range is subdivided into four equal intervals.3.75. Figure 7. To accomplish this task.11.0). .8 for M = 4. (0. (0.294 CHAPTER 7. (pb.ps]=cm-sm32(snrJn-dB) % % % [pb. hold semilogy(SNRindB I . 
end.12 is the bit-error rate.3. signal-to-noise ratio in dB. for i=1 :length(SNRindB2).2. si ]=[—1 01. st0=[0 . s01 = [0 11.12 illustrates the results of the Monte Carlo simulation for the transmission of /V-10.12) and computes the projection (dot product) of r onto the four possible signal vectors s m . % generation of the data source . SNRindB2=0:0. snr=10"(snr. Its decision is based on selecting the signal point corresponding in the largest projection.1:10. semilogy(SNRindB2.bit. function (pb.000 symbols at different values of the SNR parameter £h/Na.dB/10).symboLerr.smld.1 ] . Chapter 7.' o ' ) .prb(i)=Qfiinct(sqn(2+SNR)). echo on SNRindB 1=0:2:10.psI=cm-sm32(SNRiniiBI(i)). which is defined as Pb ^ P m ! 2 . sgma=sqrt(E/snr)/2. Figure 7. prb( i)=ps.err. For convenience. for i=1 :lengih(SNRindBI).err.prb.prb).psJ=cmsm32(snrJnjiB) CM-SM32 finds the probability of bit error and symbol given value of snrJn^dB. we may normalize the variance to a 2 = 1 and control the SNR in the received signal by scaling the signal energy parameter 3TV. % the signal mapping s00=[1 0).3. s inid „ s y mbo L err.7. The detector observes the received signal vector r = i m + n as given in (7. E=1. Also shown in Figure 7.prb. and symbol errors and bit errors are counted. where = £t/2 is the bit energy. % MATLAB wnpi for Illustrative Problem 3. Carrier-Amplitude Modulation 295 The additive noise components n c and n s are statistically independent zero-mean Gaussian random variables with variance o2. SNR=exp(SNRindB2(i)*log< 10)/10).in. The MATLAB scripts for this Monte Carlo simulation are given below. given by (7.' * •). smld -biueri-prb(i)=pb. The output decisions from the detector are compared with the transmitted symbols. % energy per symbol % signal-to-noise ratio % noise variance error for the N=10000.smtd. end: % Plotting commands follow 7c simulated bit and symbol error rates % signal-to-noise rutin % theoretical bit-error rate semilogy(SNRindBl.theo. 
for i=1:N,
  % a uniform random variable between 0 and 1
  temp=rand;
  if (temp<0.25),
    % with probability 1/4, source output is "00"
    dsource1(i)=0; dsource2(i)=0;
  elseif (temp<0.5),
    % with probability 1/4, source output is "01"
    dsource1(i)=0; dsource2(i)=1;
  elseif (temp<0.75),
    % with probability 1/4, source output is "10"
    dsource1(i)=1; dsource2(i)=0;
  else
    % with probability 1/4, source output is "11"
    dsource1(i)=1; dsource2(i)=1;
  end;
end;
% detection and the probability of error calculation
numofsymbolerror=0;
numofbiterror=0;
for i=1:N,
  % the received signal at the detector, for the ith symbol, is
  n(1)=gngauss(sgma);
  n(2)=gngauss(sgma);
  if ((dsource1(i)==0) & (dsource2(i)==0)),
    r=s00+n;
  elseif ((dsource1(i)==0) & (dsource2(i)==1)),
    r=s01+n;
  elseif ((dsource1(i)==1) & (dsource2(i)==0)),
    r=s10+n;
  else
    r=s11+n;
  end;
  % The correlation metrics are computed below
  c00=dot(r,s00); c01=dot(r,s01); c10=dot(r,s10); c11=dot(r,s11);
  % The decision on the ith symbol is made next
  c_max=max([c00 c01 c10 c11]);
  if (c00==c_max),
    decis1=0; decis2=0;
  elseif (c01==c_max),
    decis1=0; decis2=1;
  elseif (c10==c_max),
    decis1=1; decis2=0;
  else
    decis1=1; decis2=1;
  end;
  % Increment the error counter, if the decision is not correct
  symbolerror=0;
  if (decis1~=dsource1(i)),
    numofbiterror=numofbiterror+1;
    symbolerror=1;
  end;
  if (decis2~=dsource2(i)),
    numofbiterror=numofbiterror+1;
    symbolerror=1;
  end;
  if (symbolerror==1),
    numofsymbolerror=numofsymbolerror+1;
  end;
end;
ps=numofsymbolerror/N;       % since there are totally N symbols
pb=numofbiterror/(2*N);      % since 2N bits are transmitted

Also shown in Figure 7.12 are the bit-error rate, which is defined as P_b ≈ P_M/2, and the corresponding theoretical error probability.

Figure 7.12: Performance of a four-phase PSK system from the Monte Carlo simulation.

7.3.2 Differential Phase Modulation and Demodulation

The demodulation of the phase-modulated carrier signal, as described above, requires that the carrier phase components ψ_1(t) and ψ_2(t) be phase-locked to the received carrier-modulated signal; thus, we achieve coherent phase demodulation. In general, this means that the receiver must estimate the carrier-phase offset of the received signal caused by a transmission delay through the channel and compensate for this carrier-phase offset in the cross correlation of the received signal with the two reference components ψ_1(t) and ψ_2(t). The estimation of the carrier-phase offset is usually performed by use of a phase-locked loop (PLL).

Another type of carrier-phase modulation is differential phase modulation, in which the transmitted data are differentially encoded prior to the modulator. The encoding is performed by a relatively simple logic circuit preceding the modulator. In differential encoding, the information is conveyed by phase shifts relative to the previous signal interval. For example, in binary phase modulation the information bit 1 may be transmitted by shifting the phase of the carrier by 180° relative to the previous carrier phase, whereas the information bit 0 is transmitted by a zero-phase shift relative to the phase in the preceding signaling interval. In four-phase modulation, the relative phase shifts between successive intervals are 0°, 90°, 180°, and 270°, corresponding to the information bits 00, 01, 11, and 10, respectively. The generalization of differential encoding for M > 4 is straightforward. The phase-modulated signals resulting from this encoding process are called differentially encoded. Differentially encoded PSK signaling that is demodulated and detected as described below is called differential PSK (DPSK). The demodulation and detection of DPSK is illustrated in Figure 7.13.

Demodulation and detection of the differentially encoded phase-modulated signal may be performed as follows. Suppose that we demodulate the differentially encoded signal by cross-correlating r(t) with g_T(t) cos 2πf_c t and -g_T(t) sin 2πf_c t. At the kth signaling interval, the two components of the demodulator output may be represented in complex-valued form as

r_k = √E_s e^{j(θ_k - φ)} + n_k    (7.3.20)

where θ_k is the phase angle of the transmitted signal at the kth signaling interval, φ is the carrier phase, and n_k = n_kc + j n_ks is the noise. Similarly, the received signal vector at the output of the demodulator in the preceding signaling interval is the complex-valued quantity

r_{k-1} = √E_s e^{j(θ_{k-1} - φ)} + n_{k-1}    (7.3.21)

The decision variable for the phase detector is the phase difference between these two complex numbers. Equivalently, we can project r_k onto r_{k-1} and use the phase of the resulting complex number; that is,

r_k r*_{k-1} = E_s e^{j(θ_k - θ_{k-1})} + √E_s e^{j(θ_k - φ)} n*_{k-1} + √E_s e^{-j(θ_{k-1} - φ)} n_k + n_k n*_{k-1}    (7.3.22)

which, in the absence of noise, yields the phase difference θ_k - θ_{k-1}. Thus the mean value of r_k r*_{k-1} is independent of the carrier phase. The received signal phase θ_r at the detector is mapped into the one of the M possible transmitted signal phases {θ_m} that is closest to θ_r. Following the detector is a relatively simple phase comparator that compares the phases of the detected signal over two consecutive intervals to extract the transmitted information. We observe that the demodulation of a differentially encoded phase-modulated signal does not require the estimation of the carrier phase.

The probability of error for DPSK in an AWGN channel is relatively simple to derive for binary (M = 2) phase modulation.
The phase-modulated signals resulting from this encoding process arc called differentially encoded.22) which. The received signal phase 9 — tan""1 N/Ri at the detector is mapped into one of the M possible transmitted signal phases { 0 } that is closest to 9 .3. In four-phase modulation. R M R We observe that the demodulation of a differentially encoded phase-modulated signal does not require the estimation of the carrier phase. suppose that we demodulate the differentially encoded signal by cross correlating r(r> with g r i O c o s 2 j r / t f and —grit) smhr fct.3. whereas the information bit 0 is transmitted by a zero-phase shift relative to the phase in the preceding signaling interval. The result is P2 = (7. and 10. the two components of the demodulator output may be represented in complex-valued form as rk = v / J ^ V ' * .20) where 9 is the phase angle of the transmitted signal at the &th signaling interval. 11.23) . r4r. yields the phase difference 9 — 9 _ i. respectively. corresponding to the information bits 00. that is. and nk = nkc + jnkx is the noise. 4> is the carrier phase. Following the detector is a relatively simple phase comparator that compares the phases of the detected signal over two consecutive intervals to extract the transmitted information. DIGITAL TRANSMISSION VIA CARRIER MODULATION the phase of the carrier by 180 relative to the previous carrier phase. The probability of error for D P S K in an A W G N channel is relatively simple to derive for binary ( M = 2) phase modulation.21) The decision variable for the phase detector is the phase difference between these two complex numbers. Equivalently. in the absencc of noise. and 270°. « v ^ ^ - 1 . \ ^ dividing through by the new set of decision metrics becomes Jt = y/£~s + Re(nk y = lm(nt + nj-1) + «£_[) (7.3.i ) + «*«*-] (7.3.22) can be absorbed into the Gaussian noise components j and n k without changing their statistical properties.22) can be expressed as = £ t + + < . 
Figure 7.13: Block diagram of DPSK demodulator (outputs sampled at t = kT).

The graph of (7.3.23) is shown in Figure 7.14. Also shown in this figure is the probability of error for binary PSK. We observe that at error probabilities below 10^{-4}, the difference in SNR between binary PSK and binary DPSK is less than 1 dB.

Figure 7.14: Probability of error for binary PSK and DPSK.

For M > 2, the error-probability performance of a DPSK demodulator and detector is extremely difficult to evaluate exactly. The major difficulty is encountered in the determination of the probability density function for the phase of the random variable r_k r_{k-1}^*, given by (7.3.22). However, an approximation to the performance of DPSK is easily obtained, as we now demonstrate.

Without loss of generality, suppose the phase difference theta_k - theta_{k-1} = 0. Furthermore, the exponential factors e^{-j(\theta_{k-1}-\phi)} and e^{j(\theta_k-\phi)} in (7.3.22) can be absorbed into the Gaussian noise components n_{k-1} and n_k without changing their statistical properties. Therefore, (7.3.22) can be expressed as

    r_k r_{k-1}^* = E_s + \sqrt{E_s}\,(n_k + n_{k-1}^*) + n_k n_{k-1}^*          (7.3.24)

The complication in determining the probability density function of the phase is the term n_k n_{k-1}^*. However, at SNRs of practical interest, this term is small relative to the dominant noise term \sqrt{E_s}(n_k + n_{k-1}^*). If we neglect the term n_k n_{k-1}^* and also normalize r_k r_{k-1}^* by dividing through by \sqrt{E_s}, the new set of decision metrics becomes

    x = \sqrt{E_s} + \mathrm{Re}(n_k + n_{k-1}^*)
    y = \mathrm{Im}(n_k + n_{k-1}^*)                                             (7.3.25)

The variables x and y are uncorrelated Gaussian random variables with identical variances sigma^2 = N_0. The phase is

    \theta_r = \tan^{-1} \frac{y}{x}                                             (7.3.26)

At this stage we have a problem that is identical to the one for phase-coherent demodulation. The only difference is that the noise variance is now twice as large as in the case of PSK. Thus we conclude that the performance of DPSK is 3 dB poorer than that for PSK. This result is relatively good for M >= 4, but it is pessimistic for M = 2 in the sense that the loss in binary DPSK relative to binary PSK is less than 3 dB at large SNR.

ILLUSTRATIVE PROBLEM

Illustrative Problem 7.4 Implement a differential encoder for the case of M = 8 DPSK.

SOLUTION The signal points are the same as those for PSK; however, for DPSK these signal points represent the phase change relative to the phase of the previous transmitted signal point. The MATLAB script for implementing the differential encoder is given below.

% MATLAB script for Illustrative Problem 7.4, Chapter 7.
mapping=[0 1 3 2 7 6 4 5];            % for Gray mapping
E=1;
M=8;
sequence=[0 1 0 0 1 1 0 0 1 1 1 1 1 1 0 0 0 0];
[e]=cm_dpske(E,M,mapping,sequence);   % e is the differential encoder output

ILLUSTRATIVE PROBLEM

Illustrative Problem 7.5 Perform a Monte Carlo simulation of an M = 4 DPSK communication system. The model for the system to be simulated is shown in Figure 7.15.

SOLUTION The uniform random number generator (RNG) is used to generate the pairs of bits {00, 01, 11, 10}. Each 2-bit symbol is mapped by the differential encoder into one of the four signal points

    s_m = [\cos(\pi m/2) \ \ \sin(\pi m/2)],   m = 0, 1, 2, 3

Two Gaussian RNGs are used to generate the noise components [n_c  n_s]. The differential detector basically computes the phase difference between r_k and r_{k-1}; mathematically, this computation can be performed as in (7.3.22). The error counter counts the symbol errors in the detected sequence.

Figure 7.15: Block diagram of M = 4 DPSK system for the Monte Carlo simulation.
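The accumulate-and-map logic of the book's cm_dpske encoder can be sketched compactly. The following Python analogue (an illustration, not the book's code) pads the bit sequence to a multiple of k = log2 M bits, accumulates the mapped phase increments, and emits constellation coordinates:

```python
import numpy as np

def dpske(E, M, mapping, bits):
    """Differentially encode a binary sequence into M-ary DPSK signal points.
    mapping[i] is the phase-change index assigned to the k-bit group with value i."""
    k = int(np.log2(M))
    bits = list(bits) + [0] * (-len(bits) % k)    # pad with zeros, as cm_dpske does
    theta, points = 0.0, []
    for i in range(0, len(bits), k):
        index = int("".join(map(str, bits[i:i + k])), 2)
        theta = (theta + 2 * np.pi * mapping[index] / M) % (2 * np.pi)
        points.append((np.sqrt(E) * np.cos(theta), np.sqrt(E) * np.sin(theta)))
    return np.array(points)
```

For M = 4 with Gray mapping [0 1 3 2], the bit pairs 01 and 11 produce cumulative phases of 90 and 270 degrees, i.e. the points (0, 1) and (0, -1).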
In terms of the sampled correlator outputs, the differential detector forms

    r_k r_{k-1}^* = (r_{ck} + j r_{sk})(r_{c,k-1} - j r_{s,k-1})
                  = (r_{ck} r_{c,k-1} + r_{sk} r_{s,k-1}) + j (r_{sk} r_{c,k-1} - r_{ck} r_{s,k-1})
                  \equiv x_k + j y_k

and theta_k = tan^{-1}(y_k/x_k) is the phase difference. The value of theta_k is compared with the possible phase differences {0, 90, 180, 270 degrees}, and a decision is made in favor of the phase that is closest to theta_k. The detected phase is then mapped into the pair of information bits.

Figure 7.16 illustrates the results of the Monte Carlo simulation for the transmission of N = 10,000 symbols at different values of the SNR parameter E_b/N_0, where E_b = E_s/2 is the bit energy. Also shown in Figure 7.16 is the theoretical value of the symbol error rate based on the approximation that the term n_k n_{k-1}^* is negligible; we observe from Figure 7.16 that this approximation results in an upper bound to the error probability. The MATLAB scripts for this Monte Carlo simulation are given below.

% MATLAB script for Illustrative Problem 7.5, Chapter 7.
echo on
SNRindB1=0:2:12;
SNRindB2=0:0.1:12;
for i=1:length(SNRindB1),
  smld_err_prb(i)=cm_sm34(SNRindB1(i));   % simulated error rate
end;
for i=1:length(SNRindB2),
  SNR=exp(SNRindB2(i)*log(10)/10);        % signal-to-noise ratio
  theo_err_prb(i)=2*Qfunct(sqrt(SNR));    % theoretical symbol error rate
end;
% Plotting commands follow
semilogy(SNRindB1,smld_err_prb,'*');
hold
semilogy(SNRindB2,theo_err_prb);

function [p]=cm_sm34(snr_in_dB)
% [p]=cm_sm34(snr_in_dB)
% CM_SM34 finds the probability of error for the given
% value of snr_in_dB, signal-to-noise ratio in dB.
N=10000;
M=4;
E=1;                              % energy per symbol
snr=10^(snr_in_dB/10);            % signal-to-noise ratio
sgma=sqrt(E/(4*snr));             % noise standard deviation
% generation of the data source follows
for i=1:2*N,
  temp=rand;                      % a uniform random variable between 0 and 1
  if (temp<0.5),
    dsource(i)=0;                 % with probability 1/2, source output is "0"
  else
    dsource(i)=1;                 % with probability 1/2, source output is "1"
  end;
end;
% differential encoding of the data source follows
mapping=[0 1 3 2];
[diff_enc_output]=cm_dpske(E,M,mapping,dsource);
% received signal is then
for i=1:N,
  [n(1) n(2)]=gngauss(sgma);
  r(i,:)=diff_enc_output(i,:)+n;
end;
% detection and the probability of error calculation
numoferr=0;
prev_theta=0;
for i=1:N,
  theta=angle(r(i,1)+j*r(i,2));
  delta_theta=mod(theta-prev_theta,2*pi);
  if ((delta_theta<pi/4) | (delta_theta>7*pi/4)),
    decis=[0 0];
  elseif (delta_theta<3*pi/4),
    decis=[0 1];
  elseif (delta_theta<5*pi/4),
    decis=[1 1];
  else
    decis=[1 0];
  end;
  prev_theta=theta;
  % increase the error counter if the decision is not correct
  if ((decis(1)~=dsource(2*i-1)) | (decis(2)~=dsource(2*i))),
    numoferr=numoferr+1;
  end;
end;
p=numoferr/N;

function [enc_comp]=cm_dpske(E,M,mapping,sequence)
% [enc_comp]=cm_dpske(E,M,mapping,sequence)
% CM_DPSKE differentially encodes a sequence.
% E is the average energy, M is the number of constellation points,
% and mapping is the vector defining how the constellation points are
% allocated. "sequence" is the uncoded binary data sequence.
k=log2(M);
N=length(sequence);
% if N is not divisible by k, append zeros
remainder=rem(N,k);
if (remainder~=0),
  for i=N+1:N+k-remainder,
    sequence(i)=0;
  end;
  N=N+k-remainder;
end;
theta=0;                          % initially, assume that theta=0
for i=1:k:N,
  index=0;
  for j=i:i+k-1,
    index=2*index+sequence(j);
  end;
  index=index+1;
  theta=mod(2*pi*mapping(index)/M+theta,2*pi);
  enc_comp((i+k-1)/k,1)=sqrt(E)*cos(theta);
  enc_comp((i+k-1)/k,2)=sqrt(E)*sin(theta);
end;

Figure 7.16: Performance of four-phase DPSK system from the Monte Carlo simulation (the solid curve is an upper bound based on the approximation that neglects the noise term n_k n_{k-1}^*).

7.4 Quadrature Amplitude Modulation

A quadrature amplitude modulation (QAM) signal employs two quadrature carriers, cos 2 pi f_c t and sin 2 pi f_c t, each of which is modulated by an independent sequence of information bits. The transmitted signal waveforms have the form

    u_m(t) = A_{mc}\, g_T(t) \cos 2\pi f_c t + A_{ms}\, g_T(t) \sin 2\pi f_c t,   m = 1, 2, \ldots, M      (7.4.1)

where {A_mc} and {A_ms} are the sets of amplitude levels that are obtained by mapping k-bit sequences into signal amplitudes. For example, Figure 7.17 illustrates a 16-QAM signal constellation that is obtained by amplitude-modulating each quadrature carrier by M = 4 PAM. In general, rectangular signal constellations result when two quadrature carriers are each modulated by PAM.

Figure 7.17: M = 16-QAM signal constellation.

More generally, QAM may be viewed as a form of combined digital-amplitude and digital-phase modulation. Thus the transmitted QAM signal waveforms may be expressed as

    u_{mn}(t) = A_m\, g_T(t) \cos(2\pi f_c t + \theta_n),   m = 1, 2, \ldots, M_1,   n = 1, 2, \ldots, M_2      (7.4.2)

If M_1 = 2^{k_1} and M_2 = 2^{k_2}, the combined amplitude- and phase-modulation method results in the simultaneous transmission of k_1 + k_2 = log_2 M_1 M_2 binary digits occurring at a symbol rate R_b/(k_1 + k_2). Figure 7.18 illustrates the functional block diagram of a QAM modulator.

Figure 7.18: Functional block diagram of modulator for QAM.

It is clear that the geometric signal representation of the signals given by (7.4.1) and (7.4.2) is in terms of two-dimensional signal vectors of the form

    s_m = (\sqrt{E_s}\, A_{mc},\ \sqrt{E_s}\, A_{ms}),   m = 1, 2, \ldots, M      (7.4.3)

Examples of signal-space constellations for QAM are shown in Figure 7.19. Note that M = 4 QAM is identical to M = 4 PSK.

7.4.1 Demodulation and Detection of QAM

Let us assume that a carrier-phase offset is introduced in the transmission of the signal through the channel. In addition, the received signal is corrupted by additive Gaussian noise.
Hence, r(t) may be expressed as

    r(t) = A_{mc}\, g_T(t) \cos(2\pi f_c t + \phi) + A_{ms}\, g_T(t) \sin(2\pi f_c t + \phi) + n(t)      (7.4.4)

where phi is the carrier-phase offset and

    n(t) = n_c(t) \cos 2\pi f_c t - n_s(t) \sin 2\pi f_c t

The received signal is correlated with the two phase-shifted basis functions

    \psi_1(t) = g_T(t) \cos(2\pi f_c t + \hat{\phi})
    \psi_2(t) = g_T(t) \sin(2\pi f_c t + \hat{\phi})      (7.4.5)

as illustrated in Figure 7.20, and the outputs of the correlators are sampled and passed to the detector. The phase-locked loop (PLL) shown in Figure 7.20 estimates the carrier-phase offset phi of the received signal and compensates for this phase offset by phase-shifting psi_1(t) and psi_2(t) as indicated in (7.4.5). The clock shown in Figure 7.20 is assumed to be synchronized to the received signal, so that the correlator outputs are sampled at the proper instant in time. Under these conditions, the outputs from the two correlators are

    r_1 = A_{mc} + n_c \cos\phi - n_s \sin\phi
    r_2 = A_{ms} + n_c \sin\phi + n_s \cos\phi      (7.4.6)

where

    n_c = \frac{1}{2}\int_0^T n_c(t)\, g_T(t)\, dt, \qquad
    n_s = \frac{1}{2}\int_0^T n_s(t)\, g_T(t)\, dt      (7.4.7)

The noise components are zero-mean, uncorrelated Gaussian random variables with variance N_0/2.

Figure 7.19: (a) Rectangular and (b), (c) circular QAM signal constellations.

Figure 7.20: Demodulation and detection of QAM signals.

The optimum detector computes the distance metrics

    D(r, s_m) = |r - s_m|^2,   m = 1, 2, \ldots, M      (7.4.8)

where r = (r_1, r_2) and s_m is given by (7.4.3).

7.4.2 Probability of Error for QAM in an AWGN Channel

In this section, we consider the performance of QAM systems that employ rectangular signal constellations. Rectangular QAM signal constellations have the distinct advantage of being easily generated as two PAM signals impressed on phase-quadrature carriers. In addition, they are easily demodulated.

For rectangular signal constellations in which M = 2^k, where k is even, the QAM signal constellation is equivalent to two PAM signals on quadrature carriers, each having \sqrt{M} = 2^{k/2} signal points. Because the signals in the phase-quadrature components are perfectly separated by coherent detection, the probability of error for QAM is easily determined from the probability of error for PAM. Specifically, the probability of a correct decision for the M-ary QAM system is

    P_c = (1 - P_{\sqrt{M}})^2      (7.4.9)

where P_{\sqrt{M}} is the probability of error of a \sqrt{M}-ary PAM with one-half the average power in each quadrature signal of the equivalent QAM system. By appropriately modifying the probability of error for \sqrt{M}-ary PAM, we obtain

    P_{\sqrt{M}} = 2\left(1 - \frac{1}{\sqrt{M}}\right) Q\!\left(\sqrt{\frac{3}{M-1}\,\frac{E_{av}}{N_0}}\right)      (7.4.10)

where E_av/N_0 is the average SNR per symbol. Therefore, the probability of a symbol error for the M-ary QAM is

    P_M = 1 - (1 - P_{\sqrt{M}})^2      (7.4.11)

We note that this result is exact for M = 2^k when k is even. On the other hand, when k is odd, there is no equivalent \sqrt{M}-ary PAM system. This is no problem, however, because it is rather easy to determine the error rate for a rectangular signal set. If we employ the optimum detector that bases its decisions on the optimum distance metrics given by (7.4.8), it is relatively straightforward to show that the symbol-error probability is tightly upper-bounded as

    P_M \le 1 - \left[1 - 2 Q\!\left(\sqrt{\frac{3 E_{av}}{(M-1) N_0}}\right)\right]^2
        \le 4\, Q\!\left(\sqrt{\frac{3 k E_{b,av}}{(M-1) N_0}}\right)      (7.4.12)

for any k >= 1, where E_{b,av}/N_0 is the average SNR per bit. The probability of a symbol error is plotted in Figure 7.21 as a function of the average SNR per bit.
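The expressions (7.4.10)-(7.4.11) are easy to evaluate numerically. The snippet below (Python rather than the book's MATLAB Qfunct, purely illustrative) implements them with Q(x) = erfc(x/sqrt(2))/2, and also builds the rectangular 16-QAM grid with levels +-d, +-3d to confirm the average symbol energy Eav = 10 d^2 assumed in the book's cm_sm41 script:

```python
import math
import numpy as np

def Q(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def qam_ser(M, snr_symbol):
    """Exact symbol-error probability (7.4.11) for square M-QAM,
    given the average SNR per symbol Eav/N0 on a linear scale."""
    P_sqrtM = 2 * (1 - 1 / math.sqrt(M)) * Q(math.sqrt(3 * snr_symbol / (M - 1)))
    return 1 - (1 - P_sqrtM) ** 2

# Rectangular 16-QAM: each quadrature amplitude is a 4-PAM level.
d = 1.0
levels = np.array([-3 * d, -d, d, 3 * d])
constellation = np.array([(a, b) for a in levels for b in levels])
Eav = float(np.mean(np.sum(constellation ** 2, axis=1)))   # average symbol energy
```

With d = 1 the average energy works out to 2 * (9 + 1 + 1 + 9)/4 = 10, and the error probability falls monotonically with SNR, as Figure 7.21 shows.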
Figure 7.21: Probability of a symbol error for QAM.

ILLUSTRATIVE PROBLEM

Illustrative Problem 7.6 Perform a Monte Carlo simulation of an M = 16-QAM communication system using a rectangular signal constellation. The model of the system to be simulated is shown in Figure 7.22.

SOLUTION The uniform random number generator (RNG) is used to generate the sequence of information symbols corresponding to the 16 possible 4-bit combinations of b_1, b_2, b_3, b_4. The information symbols are mapped into the corresponding signal points, as illustrated in Figure 7.23, which have the coordinates [A_mc, A_ms]. Two Gaussian RNGs are used to generate the noise components [n_c, n_s]. The channel-phase shift phi is set to 0 for convenience. Consequently, the received signal-plus-noise vector is

    r = [A_{mc} + n_c,\ A_{ms} + n_s]

The detector computes the distance metrics given by (7.4.8) and decides in favor of the signal point that is closest to the received vector r. The error counter counts the symbol errors in the detected sequence.

Figure 7.22: Block diagram of an M = 16-QAM system for the Monte Carlo simulation.

Figure 7.24 illustrates the results of the Monte Carlo simulation for the transmission of N = 10,000 symbols at different values of the SNR parameter E_b/N_0, where E_b = E_av/4 is the bit energy. Also shown in Figure 7.24 is the theoretical value of the symbol-error probability given by (7.4.10) and (7.4.11). The MATLAB scripts for this problem are given below.

% MATLAB script for Illustrative Problem 7.6, Chapter 7.
echo on
SNRindB1=0:2:15;
SNRindB2=0:0.1:15;
M=16;
k=log2(M);
for i=1:length(SNRindB1),
  smld_err_prb(i)=cm_sm41(SNRindB1(i));    % simulated error rate
end;
for i=1:length(SNRindB2),
  SNR=exp(SNRindB2(i)*log(10)/10);         % signal-to-noise ratio
  % theoretical symbol error rate
  theo_err_prb(i)=4*Qfunct(sqrt(3*k*SNR/(M-1)));
end;
% Plotting commands follow
semilogy(SNRindB1,smld_err_prb,'*');
hold
semilogy(SNRindB2,theo_err_prb);

function [p]=cm_sm41(snr_in_dB)
% [p]=cm_sm41(snr_in_dB)
% CM_SM41 finds the probability of error for the given
% value of snr_in_dB, SNR in dB.
N=10000;
d=1;                               % min. distance between symbols
Eav=10*d^2;                        % energy per symbol
snr=10^(snr_in_dB/10);             % SNR per bit (given)
sgma=sqrt(Eav/(8*snr));            % noise standard deviation
M=16;
% generation of the data source follows
for i=1:N,
  temp=rand;                       % a uniform R.V. between 0 and 1
  dsource(i)=1+floor(M*temp);      % a number between 1 and 16, uniform
end;
% mapping to the signal constellation follows
mapping=[-3*d 3*d;  -d 3*d;   d 3*d;  3*d 3*d;
         -3*d d;    -d d;     d d;    3*d d;
         -3*d -d;   -d -d;    d -d;   3*d -d;
         -3*d -3*d; -d -3*d;  d -3*d; 3*d -3*d];
for i=1:N,
  qam_sig(i,:)=mapping(dsource(i),:);
end;
% received signal
for i=1:N,
  [n(1) n(2)]=gngauss(sgma);
  r(i,:)=qam_sig(i,:)+n;
end;
% detection and error probability calculation
numoferr=0;
for i=1:N,
  % metric computation follows
  for j=1:M,
    metrics(j)=(r(i,1)-mapping(j,1))^2+(r(i,2)-mapping(j,2))^2;
  end;
  [min_metric decis]=min(metrics);
  if (decis~=dsource(i)),
    numoferr=numoferr+1;
  end;
end;
p=numoferr/N;

Figure 7.23: M = 16-QAM signal constellation for the Monte Carlo simulation.

7.5 Carrier-Frequency Modulation

Above, we described methods for transmitting digital information by modulating either the amplitude of the carrier, the phase of the carrier, or the combined amplitude and phase. Digital information can also be transmitted by modulating the frequency of the carrier.

As we will observe from our treatment below, digital transmission by frequency modulation is a modulation method that is appropriate for channels that lack the phase stability that is necessary to perform carrier-phase estimation. In contrast, the linear modulation methods that we have introduced (namely, PAM, coherent PSK, and QAM) require the estimation of the carrier phase to perform phase-coherent detection.

Figure 7.24: Performance of M = 16-QAM system from the Monte Carlo simulation.

7.5.1 Frequency-Shift Keying

The simplest form of frequency modulation is binary frequency-shift keying (FSK). In binary FSK we employ two different frequencies, say, f_1 and f_2 = f_1 + Delta f, to transmit a binary information sequence. Thus the two signal waveforms may be expressed as

    u_1(t) = \sqrt{2 E_b / T_b}\ \cos 2\pi f_1 t,   0 \le t \le T_b
    u_2(t) = \sqrt{2 E_b / T_b}\ \cos 2\pi f_2 t,   0 \le t \le T_b      (7.5.1)

where E_b is the signal energy per bit and T_b is the duration of the bit interval. The choice of frequency separation Delta f = f_2 - f_1 is considered below.

More generally, M-ary FSK may be used to transmit a block of k = log_2 M bits per signal waveform. In this case, the M signal waveforms may be expressed as

    u_m(t) = \sqrt{2 E_s / T}\ \cos(2\pi f_c t + 2\pi m \Delta f\, t),   m = 0, 1, \ldots, M-1,   0 \le t \le T      (7.5.2)

where E_s = k E_b is the energy per symbol, T = k T_b is the symbol interval, and Delta f is the frequency separation between successive frequencies; that is, Delta f = f_m - f_{m-1}, where f_m = f_c + m Delta f. Note that the M FSK waveforms have equal energy.

The frequency separation Delta f determines the degree to which we can discriminate among the M possible transmitted signals. As a measure of the similarity (or dissimilarity) between a pair of signal waveforms, we use the correlation coefficient

    \gamma_{mn} = \frac{1}{E_s} \int_0^T u_m(t)\, u_n(t)\, dt      (7.5.3)

By substituting for u_m(t) and u_n(t) in (7.5.3), we obtain

    \gamma_{mn} = \frac{1}{T}\int_0^T \cos 2\pi (m-n)\Delta f\, t\ dt
                + \frac{1}{T}\int_0^T \cos[4\pi f_c t + 2\pi (m+n)\Delta f\, t]\ dt
                = \frac{\sin 2\pi (m-n)\Delta f T}{2\pi (m-n)\Delta f T}      (7.5.4)

where the second integral vanishes when f_c >> 1/T. A plot of gamma_mn as a function of the frequency separation Delta f is given in Figure 7.25. We observe that the signal waveforms are orthogonal when Delta f is a multiple of 1/2T. Hence, the minimum frequency separation between successive frequencies for orthogonality is 1/2T. We also note that the minimum value of the correlation coefficient is gamma_mn = -0.217, which occurs at the frequency separation Delta f = 0.715/T.

M-ary orthogonal FSK waveforms have a geometric representation as M M-dimensional orthogonal vectors, given as

    s_0 = (\sqrt{E_s}, 0, \ldots, 0)
    s_1 = (0, \sqrt{E_s}, \ldots, 0)
      \vdots
    s_{M-1} = (0, \ldots, 0, \sqrt{E_s})      (7.5.5)-(7.5.7)

where the basis functions are

    \psi_m(t) = \sqrt{2/T}\ \cos 2\pi (f_c + m \Delta f) t,   0 \le t \le T      (7.5.8)

The distance between pairs of signal vectors is d = \sqrt{2 E_s} for all m, n, which is also the minimum distance among the M signals.

Figure 7.25: Cross-correlation coefficient as a function of frequency separation for FSK signals.

7.5.2 Demodulation and Detection of FSK Signals

Let us assume that the FSK signals are transmitted through an additive white Gaussian noise channel. Furthermore, we assume that each signal is delayed in the transmission through the channel. Consequently, the filtered received signal at the input to the demodulator may be expressed as

    r(t) = \sqrt{2 E_s / T}\ \cos(2\pi f_c t + 2\pi m \Delta f\, t + \phi_m) + n(t)      (7.5.9)

where phi_m denotes the phase shift of the mth signal (due to the transmission delay) and n(t) represents the additive bandpass noise, which may be expressed as

    n(t) = n_c(t) \cos 2\pi f_c t - n_s(t) \sin 2\pi f_c t      (7.5.10)

The demodulation and detection of the M FSK signals may be accomplished by one of two methods. One approach is to estimate the M carrier-phase shifts {phi_m} and perform phase-coherent demodulation and detection. In phase-coherent demodulation, the received signal r(t) is correlated with each of the M possible received signals cos(2 pi f_c t + 2 pi m Delta f t + phi_m-hat) for m = 0, 1, ..., M - 1, where {phi_m-hat} are the carrier-phase estimates. A block diagram illustrating this type of demodulation is shown in Figure 7.26. It is interesting to note that when the estimates are exact, i.e. phi_m-hat = phi_m for m = 0, 1, ..., M - 1, the M signals are equivalent to the M-ary baseband orthogonal signals described in Section 5.4.

Figure 7.26: Phase-coherent demodulation of M-ary FSK signals.
independent of the values of the phase shift provided that the frequency separation between successive frequencies is A / = 1 / T. (7.13) In the following development we assume that A / = 1/ T.27: Demodulation of M-ary signals for noncoherent detection. so that the signals are orthogonal. .e.5. k (7.5. the sampled values to the detector are rmt = = v ^ cos 0 m + n m c sin0m + nm.318 where n k c and CHAPTER 7. the signal components in the samples rand r k s will vanish. In such a case. the other 2(M — 1) correlator outputs consist of noise only. 7. Carrier-Frequency Modulation 319 It can be shown that the 2 M noise samples }and i«jt. mutually uncorrelated Gaussian random variables with equal variance CR~ = Nq}2. so that the received signal in the absence of noise is r ( 0 = cos ( i n f j t + ^ .l (7. .5. r / 1 / . M . .5. the joint probability density function for r„. P [s m was transmitted j R\ ~ P{sm | r>.. An equivalent detector is one that computes the squared envelopes d = r l r + ri.1 (7. rks I ^ q 1 .tct1 (7.r} arc zero-mean..' m = 0. . Consequently. M = 0. I L L U S T R A T I V E P R O B L E M Illustrative Problem 7. the optimum detector is called a square-law detector.15) Given the 2 M observed random variables ( r ^ . When the signals are equally probable. the optimum detector selects the signal that corresponds to the maximum of the posterior probabilities.\ M . a .16) where r is the 2A/-dimensional vector with elements {rkc. « = 1.16) computes the signal envelopes.5. } ^ . . In this case.L and r m y conditioned on <j>m is 1 Intrand for m ^ k. the optimum detcctor specified by (7. and / a = / i + 1/7).2. In this case the optimum detector is called an envelope detector. defined as = m = 0. 0<t<Th Numerically implement the correlation-type demodulator for the FSK signals. {r^}.. « 2 ( 0 = cos27r/2f. we have frk(rk<: n*) = 2.5. i.17) and selects the signal corresponding to the largest envelope of the set {r m }. I M . . 
The channel imparts a phase shift of 4> — 45° on each of the transmitted signals.e.7 Consider a binary communication system that employs the two FSK signal waveforms given as Mi(/) = c o s 2 .18) and selects the signal corresponding to the largest. . . 0<t<Th 0 < t < Tb where f\ = 1000/ 7).I (7.5.5. I. % noise variance for i=1 :N. t=0:Tb/(N-1):Tb. % the correlator outputs are computed next . % MATLAB scrip! for Illustrative echo on Tb=1. based on the signal ti|(r) being transmitted. phi=pi/4. f2=(! + 1/Tb.320 — CHAPTER 7. .(5000) r 2 = r 2 2 t (5000) + r\j. A MATLAB program that implements the correlations numerically is given below. The graphs of the correlator outputs are shown in Figure 7. Problem 7. as illustrated in Figure 7.28. 2 . 5000 k = 1 . ul=cos(2*pi*fl»0. fMOOO/Tb. end. the received signal r is sgma=1. % number of samples u2=cos(2*pi*f2«t): % af. N=5000.fuminjj thai it! is transmitted. Chapter 7. r(i)=cos(2+pi*f 1 »l(i)+phi)+gngau5s(sgma). Thus the correlator outputs are r i<( fc > = ! > ( £ ) » • ( £ ) . DIGITAL TRANSMISSION VIA CARRIER MODULATION We sample the received signal r(t ) at a rate F . u j ( r ) = sin 2 x f \ t .2 n-0 k rls(k) = v2 . r / j i . The correlation demodulator multiplies [r(n/Fx)} by the sampled version of m ( r ) = c o s 2 . . i n ( 0 = cos27r/2i. (5000) and selects the information bit corresponding to the larger decision variable. . = r l (5000) + rf.. .2 5000 k = 1. and u t ( 0 = sin I n / i t .5000 The detector is a square-law detector that computes the two decision variables r. = 5 0 0 0 / Th in the bit interval 7V Thus the received signal r(t) is represented by the 5000 samples {r(n/FJ}. * = 5000 riAk) = ( j r ) l 'i ( ] r ) ' J k = l.27. 7. % decision variables rl=r!c{5000)"2+rls(5000)~2.5. rlc(k)=rlc(k —1 )+r(k)*ul(k). r2c(k)=r2c(k-1 )+r(k)*u2(k). T h e p r o b a b i l i t y o f a s y m b o l pM = y { . rls(k)=rls(k —1 )+r(k)*vl(k). 7.5.19) . r 1 s( 1 )=r(1)*vl(1). for k=2:N. 
r2s(1)=r(1)*v2(1).28: Output of correlators for binary F S K demodulation.n ^ / n + I h / N . X ) n ^ ( V M n l ) J _ e . r2c(1)=r(1 )*u2(1).5. ( n + i ) (7. rlc(1)=r(1 )*ul(1). r2s(k)=r2s(k —1 )+r(k)»v2(k). Carrier-Frequency Modulation 321 vl=sin(2»pi*fl*t): v2=sin(2*pi*f2*t). r2=r2c(5000)"2+r2s(5000)" 2.3 Probability of Error for Noncoherent Detection of FSK FSK T h e d e r i v a t i o n o f t h e p e r f o r m a n c e o f t h e o p t i m u m e n v e l o p e d e t e c t o r f o r t h e M-ary error m a y be expressed as s i g n a l s c a n b e f o u n d in m o s t t e x t s o n d i g i t a l c o m m u n i c a t i o n . end. % plolling commands follow m Figure 7. 5.20) For M > 2. We observe that for any given bit-error probability. dB Figure 7. The cost of increasing M is the bandwidth required to transmit the signals. or Shannon limit. 32. which is pz = (7. SNR/bit. the cn-or probability can be made arbitrarily small provided that the SNR per bit exceeds . the SNR per bit decreases as M increases.29: Probability of a bit error for noncoherent detection of orthogonal F S K signals.322 CHAPTER 7. for any digital communication system transmitting information through an AWGN channel. this expression reduces to the probability of error for binary FSK.1 . In the limit as M oo. 8. T h i s is the channel capacity limit. DIGITAL TRANSMISSION VIA CARRIER MODULATION When M = 2. 4. the bit-error probability may be obtained from the symbol error probability by means of the relation 2*-1 Ph = ^tZTipM (7 5 21) ' ' The bit-error probability is plotted in Figure 7. 6 d B . Since the frequency separation between adjacent frequencies is A / = 1/7" for signal orthogonality. .29 as a function of the SNR per bit for M = 2. 16. The block diagram of the binary FSK system to be simulated is shown in Figure 7. and the detector is a square-taw detector.> 0 as M -»• oo.8 Perform a Monte Carlo simulation of a binary FSK communication system in which the signal waveforms are given by (7. 
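The key property of the square-law detector in (7.5.18) is that its decision does not depend on the unknown phase shift phi_m. In the following noise-free Python sketch (illustrative only), the transmitted tone is recovered for every trial value of the phase:

```python
import numpy as np

def square_law_detect(r):
    """Select the FSK tone with the largest squared envelope.
    r is an (M, 2) array of (r_mc, r_ms) correlator output pairs."""
    return int(np.argmax(r[:, 0] ** 2 + r[:, 1] ** 2))

Es, M, m = 1.0, 4, 2                         # tone m = 2 transmitted, noise-free
decisions = []
for phi in np.linspace(0.0, 2 * np.pi, 17):
    r = np.zeros((M, 2))
    # only the matched correlator pair carries signal, as in (7.5.12)
    r[m] = [np.sqrt(Es) * np.cos(phi), np.sqrt(Es) * np.sin(phi)]
    decisions.append(square_law_detect(r))
```

Every decision equals the transmitted index m, regardless of phi, which is why no carrier-phase estimate is needed.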
Figure 7.28: Output of correlators for binary FSK demodulation.

7.5.3 Probability of Error for Noncoherent Detection of FSK

The derivation of the performance of the optimum envelope detector for the M-ary FSK signals can be found in most texts on digital communication. The probability of a symbol error may be expressed as

    P_M = \sum_{n=1}^{M-1} (-1)^{n+1} \binom{M-1}{n} \frac{1}{n+1}\, e^{-n E_s /[(n+1) N_0]}      (7.5.19)

When M = 2, this expression reduces to the probability of error for binary FSK, which is

    P_2 = \frac{1}{2} e^{-E_b/2N_0}      (7.5.20)

For M > 2, the bit-error probability may be obtained from the symbol-error probability by means of the relation

    P_b = \frac{2^{k-1}}{2^k - 1} P_M      (7.5.21)

The bit-error probability is plotted in Figure 7.29 as a function of the SNR per bit for M = 2, 4, 8, 16, 32. We observe that for any given bit-error probability, the SNR per bit decreases as M increases. In the limit as M goes to infinity, the error probability can be made arbitrarily small provided that the SNR per bit exceeds the Shannon limit of -1.6 dB. This is the channel capacity limit for any digital communication system transmitting information through an AWGN channel.

The cost of increasing M is the bandwidth required to transmit the signals, since the frequency separation between adjacent frequencies must be Delta f = 1/T for signal orthogonality. The bandwidth required for the M signals is W = M/T. The bit rate is R = k/T, where k = log_2 M. Therefore, the bit-rate-to-bandwidth ratio is

    \frac{R}{W} = \frac{\log_2 M}{M}      (7.5.22)

We observe that R/W goes to 0 as M goes to infinity.

Figure 7.29: Probability of a bit error for noncoherent detection of orthogonal FSK signals.

ILLUSTRATIVE PROBLEM

Illustrative Problem 7.8 Perform a Monte Carlo simulation of a binary FSK communication system in which the signal waveforms are given by (7.5.1), where f_2 = f_1 + 1/T_b, and the detector is a square-law detector. The block diagram of the binary FSK system to be simulated is shown in Figure 7.30.

Figure 7.30: Block diagram of a binary FSK system for the Monte Carlo simulation.

SOLUTION Since the signals are orthogonal, when u_1(t) is transmitted, the first demodulator output is

    r_{1c} = \sqrt{E_b}\cos\phi + n_{1c}
    r_{1s} = \sqrt{E_b}\sin\phi + n_{1s}

and the second demodulator output is

    r_{2c} = n_{2c}
    r_{2s} = n_{2s}

where n_1c, n_1s, n_2c, and n_2s are mutually statistically independent zero-mean Gaussian random variables with variance sigma^2, and phi represents the channel-phase shift. In the above expression, the channel-phase shift phi may be set to zero for convenience. The square-law detector computes

    r_1^2 = r_{1c}^2 + r_{1s}^2
    r_2^2 = r_{2c}^2 + r_{2s}^2

and selects the information bit corresponding to the larger of these two decision variables. An error counter measures the error rate by comparing the transmitted sequence to the output of the detector.

Figure 7.31 illustrates the measured error rate and compares it with the theoretical error probability given by (7.5.20). The MATLAB programs that implement the Monte Carlo simulation are given below.

% MATLAB script for Illustrative Problem 7.8, Chapter 7.
echo on
SNRindB1=0:2:15;
SNRindB2=0:0.1:15;
for i=1:length(SNRindB1),
  smld_err_prb(i)=cm_sm52(SNRindB1(i));   % simulated error rate
end;
for i=1:length(SNRindB2),
  SNR=exp(SNRindB2(i)*log(10)/10);        % signal-to-noise ratio
  theo_err_prb(i)=(1/2)*exp(-SNR/2);      % theoretical error rate
end;
% Plotting commands follow
semilogy(SNRindB1,smld_err_prb,'*');
hold
semilogy(SNRindB2,theo_err_prb);

function [p]=cm_sm52(snr_in_dB)
% [p]=cm_sm52(snr_in_dB)
% CM_SM52 returns the probability of error for the given
% value of snr_in_dB, signal-to-noise ratio in dB.
N=10000;
Eb=1;
d=1;
snr=10^(snr_in_dB/10);            % signal-to-noise ratio per bit
sgma=sqrt(Eb/(2*snr));            % noise standard deviation
phi=0;
% generation of the data source follows
for i=1:N,
  temp=rand;                      % a uniform random variable between 0 and 1
  if (temp<0.5),
    dsource(i)=0;
  else
    dsource(i)=1;
  end;
end;
% detection and the probability of error calculation
numoferr=0;
for i=1:N,
  % demodulator outputs
  if (dsource(i)==0),
    r0c=sqrt(Eb)*cos(phi)+gngauss(sgma);
    r0s=sqrt(Eb)*sin(phi)+gngauss(sgma);
    r1c=gngauss(sgma);
    r1s=gngauss(sgma);
  else
    r0c=gngauss(sgma);
    r0s=gngauss(sgma);
    r1c=sqrt(Eb)*cos(phi)+gngauss(sgma);
    r1s=sqrt(Eb)*sin(phi)+gngauss(sgma);
  end;
  % square-law detector outputs
  r0=r0c^2+r0s^2;
  r1=r1c^2+r1s^2;
  % decision is made next
  if (r0>r1),
    decis=0;
  else
    decis=1;
  end;
  % increment the error counter if the decision is not correct
  if (decis~=dsource(i)),
    numoferr=numoferr+1;
  end;
end;
p=numoferr/N;
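Equation (7.5.19) and its binary special case (7.5.20) can be cross-checked numerically. The Python helper below (illustrative; the book's scripts use MATLAB) evaluates the finite sum directly:

```python
import math

def fsk_noncoherent_ser(M, EsN0):
    """Symbol-error probability (7.5.19) for noncoherent orthogonal M-FSK,
    with EsN0 the SNR per symbol Es/N0 on a linear scale."""
    return sum((-1) ** (n + 1) * math.comb(M - 1, n) / (n + 1)
               * math.exp(-n * EsN0 / (n + 1))
               for n in range(1, M))
```

For M = 2 (where Es = Eb) the sum collapses to the single term (1/2) exp(-Eb/2N0), which is exactly (7.5.20), and for fixed M the error probability decreases with SNR.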
Figure 7.31: Performance of a binary FSK system from the Monte Carlo simulation.

In this chapter we discussed the demodulation schemes for digital carrier-modulated systems. In the demodulation of PAM, PSK, and QAM, we assumed that we have complete knowledge of the carrier frequency and phase. Phase synchronization in coherent demodulation is essential, and phase errors can result in considerable performance degradation. In this section we discuss methods to generate sinusoids with the same frequency and phase as the carrier at the demodulator. These methods are studied under the title of carrier synchronization and apply to both the analog and the digital carrier-modulation systems discussed in this chapter and Chapter 3. Another type of synchronization, called timing synchronization, clock synchronization, or timing recovery, is encountered only in digital communication systems. We briefly discuss this type of synchronization problem in this section as well.

Carrier synchronization is achieved by employing a phase-locked loop (PLL). A phase-locked loop is a nonlinear feedback-control system for controlling the phase of the local oscillator. The PLL is driven by a sinusoidal signal at the carrier frequency (or a multiple of it). In order to obtain the sinusoidal signal to drive the PLL, the DSB-modulated signal

u(t) = Ac m(t) cos(2π fc t + φ(t))    (7.6.1)

where m(t) = ±1, is squared to obtain

u²(t) = Ac² m²(t) cos²(2π fc t + φ(t)) = Ac²/2 + (Ac²/2) cos(4π fc t + 2φ(t))    (7.6.2)

The reason that we do not deal directly with u(t) is that usually the process m(t) is zero-mean, so the power content of u(t) at fc is zero. Obviously, the squared signal has a component at 2fc. Now, if the signal u²(t) is passed through a bandpass filter tuned to 2fc, the output will be a sinusoidal signal with central frequency 2fc, a phase of 2φ(t),
and an amplitude of Ac²|H(2fc)|/2. Without loss of generality we can assume that the amplitude is unity; i.e., the input to the PLL is

r(t) = cos(4π fc t + 2φ(t))    (7.6.3)

The PLL consists of a multiplier, a loop filter, and a voltage-controlled oscillator (VCO), as shown in Figure 7.32.

Figure 7.32: The phase-locked loop.

If we assume that the output of the VCO is sin(4π fc t + 2φ̂(t)), then at the input of the loop filter we have

e(t) = cos(4π fc t + 2φ(t)) sin(4π fc t + 2φ̂(t))
     = (1/2) sin(2φ̂(t) − 2φ(t)) + (1/2) sin(8π fc t + 2φ(t) + 2φ̂(t))    (7.6.4)

Note that e(t) has a high- and a low-frequency component. The role of the loop filter is to remove the high-frequency component and to make sure that φ̂(t) follows closely the variations in φ(t). A simple loop filter is a first-order lowpass filter with a transfer function of

G(s) = (1 + τ2 s)/(1 + τ1 s)    (7.6.5)

where τ1 ≫ τ2. After removal of the second and fourth harmonics, the PLL reduces to the one shown in Figure 7.33.

Figure 7.33: The phase-locked loop after removal of high-frequency components.

Assuming that φ̂(t) closely follows changes in φ(t), the difference 2φ(t) − 2φ̂(t) is very small, and we can use the approximation

sin(2φ̂(t) − 2φ(t)) ≈ 2φ̂(t) − 2φ(t)    (7.6.6)

With this approximation, the only nonlinear component in Figure 7.33 is replaced by a linear component, resulting in the linearized PLL model shown in Figure 7.34. If we denote the input of the VCO as v(t), then the output of the VCO will be a sinusoid whose instantaneous frequency deviation from 2fc is proportional to v(t). But the instantaneous frequency of the VCO output is 2fc + (1/π) dφ̂(t)/dt; therefore,

dφ̂(t)/dt = (K/2) v(t)    (7.6.7)

where K is some proportionality constant.
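The decomposition of e(t) into a low- and a high-frequency term is just the product-to-sum identity cos(a) sin(b) = ½ sin(b − a) + ½ sin(a + b). A quick numerical check (a standalone Python sketch of ours; the parameter values are arbitrary):

```python
import math

# Check cos(a)*sin(b) = 0.5*sin(b - a) + 0.5*sin(a + b), the identity behind
# the phase detector, with a = 4*pi*fc*t + 2*phi and b = 4*pi*fc*t + 2*phi_hat.
fc, phi, phi_hat = 100.0, 0.30, 0.25   # arbitrary illustrative values
max_diff = 0.0
for k in range(1000):
    t = k * 1e-4
    a = 4 * math.pi * fc * t + 2 * phi
    b = 4 * math.pi * fc * t + 2 * phi_hat
    e = math.cos(a) * math.sin(b)
    low = 0.5 * math.sin(2 * phi_hat - 2 * phi)    # slowly varying term
    high = 0.5 * math.sin(8 * math.pi * fc * t + 2 * phi + 2 * phi_hat)
    max_diff = max(max_diff, abs(e - (low + high)))
```

The two sides agree to floating-point precision, which is why the loop filter can isolate the constant term ½ sin(2φ̂ − 2φ) that drives the phase correction.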
Note that

2φ̂(t) = K ∫ from −∞ to t of v(τ) dτ    (7.6.8)

The model shown in Figure 7.34 is a linear control system with a forward gain of G(s) and a feedback gain of K/s; therefore, the transfer function of the system is given by

H(s) = Φ̂(s)/Φ(s) = [KG(s)/s] / [1 + KG(s)/s]    (7.6.9)

and with the first-order model for G(s) assumed before, H(s) is given as

H(s) = (1 + τ2 s) / (1 + (τ2 + 1/K)s + τ1 s²/K)    (7.6.10)

Figure 7.34: The linearized model for a phase-locked loop.

Now let us assume that up to a certain time φ̂(t) ≈ φ(t), so Δφ(t) ≈ 0. At this time some abrupt change causes a jump in φ(t) that can be modeled as a step, i.e., Φ(s) = K1/s. With H(s) as the transfer function, the error will be

ΔΦ(s) = Φ(s) − Φ̂(s) = [1 − H(s)] Φ(s) = [s(1 + τ1 s) / (K + (1 + Kτ2)s + τ1 s²)] Φ(s)    (7.6.12)

and, with Φ(s) = K1/s,

ΔΦ(s) = K1(1 + τ1 s) / (K + (1 + Kτ2)s + τ1 s²)    (7.6.13)

Now, by using the final value theorem of the Laplace transform, which states that

lim as t → ∞ of f(t) = lim as s → 0 of sF(s)    (7.6.14)

as long as all poles of sF(s) have negative real parts, we conclude that

lim as t → ∞ of Δφ(t) = lim as s → 0 of s ΔΦ(s) = lim as s → 0 of K1 s(1 + τ1 s)/(K + (1 + Kτ2)s + τ1 s²) = 0    (7.6.15)

In other words, a simple first-order loop filter results in a PLL that can track jumps in the input phase. The transfer function (7.6.10) can be written in the standard form

H(s) = [(2ζωn − ωn²/K)s + ωn²] / (s² + 2ζωn s + ωn²)    (7.6.16)

where ωn = sqrt(K/τ1) is the natural frequency and ζ = ωn(τ2 + 1/K)/2 is the damping factor.
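The zero steady-state error predicted by the final value theorem can be checked numerically. The sketch below (our own Python stand-in for the state-space simulation that is done in MATLAB in the illustrative problem that follows) integrates the closed-loop response for the loop-filter values used there (τ2 = 0.01, τ1 = 1, K = 1, giving H(s) = (0.01s + 1)/(s² + 1.01s + 1)) with forward Euler:

```python
# Forward-Euler step response of H(s) = (0.01 s + 1)/(s^2 + 1.01 s + 1),
# written in controllable canonical state-space form:
#   x' = A x + B u,  y = C x  (D = 0), with u(t) a unit step in the phase.
A = [[-1.01, -1.0], [1.0, 0.0]]
B = [1.0, 0.0]
C = [0.01, 1.0]
dt, steps = 0.01, 2000        # integrate out to t = 20 s
x = [0.0, 0.0]
y = 0.0
for _ in range(steps):
    u = 1.0                   # unit step in the input phase
    dx = [A[0][0] * x[0] + A[0][1] * x[1] + B[0] * u,
          A[1][0] * x[0] + A[1][1] * x[1] + B[1] * u]
    x = [x[0] + dt * dx[0], x[1] + dt * dx[1]]
    y = C[0] * x[0] + C[1] * x[1]   # estimated phase, tracks the step
# After 20 s the loop has locked: y is essentially 1, i.e. zero phase error.
```

Since the system's DC gain is 1 and its poles lie in the left half-plane, the simulated output settles to the input step, confirming (7.6.15).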
Illustrative Problem 7.9 [First-order PLL] Assuming that

G(s) = (1 + 0.01s)/(1 + s)

and K = 1, determine and plot the response of the PLL to an abrupt change of height 1 in the input phase.

Here τ2 = 0.01 and τ1 = 1; therefore, ωn = 1 and ζ = 0.505, which results in

H(s) = (0.01s + 1)/(s² + 1.01s + 1)

Thus the response to φ(t) = u(t) — or, equivalently, Φ(s) = 1/s — is given by

Φ̂(s) = H(s)/s = (0.01s + 1)/(s³ + 1.01s² + s)

In order to determine and plot the time response φ̂(t) to the input u(t), we note that we have to determine the output of a system with transfer function H(s) to the input u(t). This can be done most easily by using state-space techniques. We will employ the MATLAB function tf2ss.m, which returns the state-space model of a system described by its transfer function. The function tf2ss.m takes the numerator and the denominator of the transfer function H(s) and returns A, B, C, D, its state-space representation, in the form

dx(t)/dt = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)

This representation can be approximated by

x(t + Δt) = x(t) + Ax(t)Δt + Bu(t)Δt
y(t) = Cx(t) + Du(t)

or, equivalently,

x(i + 1) = x(i) + Ax(i)Δt + Bu(i)Δt
y(i) = Cx(i) + Du(i)

For this problem it is sufficient to choose u(t) to be a step function and the numerator and the denominator vectors of H(s) to be [0.01 1] and [1 1.01 1], respectively. With this choice of numerator and denominator vectors, the state-space parameters of the system will be

A = [−1.01 −1; 1 0],  B = [1; 0],  C = [0.01 1],  D = 0

After determining the state-space representation of the system, the step response is obtained numerically. The plot of the output of the PLL is given in Figure 7.35. As we can see from Figure 7.35, the output of the PLL eventually follows the input; however, the speed with which it follows the input depends on the loop filter parameters and K, the VCO proportionality constant. The MATLAB script for this problem is given below.

% MATLAB script for Illustrative Problem 9, Chapter 7.
echo on
num=[0.01 1];
den=[1 1.01 1];
[a,b,c,d]=tf2ss(num,den);
dt=0.01;
u=ones(1,2001);
x=zeros(2,2001);
for i=1:2000
  x(:,i+1)=x(:,i)+dt.*a*x(:,i)+dt.*b*u(i);
  y(i)=c*x(:,i)+d*u(i);
end
t=[0:dt:20];
plot(t(1:2000),y)

Figure 7.35: The response of the PLL to an abrupt change in the input phase in Illustrative Problem 7.9.

7.6.2 Clock Synchronization

In Chapter 5 and in this chapter we have seen that a popular implementation of the optimal receiver makes use of matched filters and samplers at the matched filter output. In all these cases we assumed that the receiver has complete knowledge of the sampling instant and can sample perfectly at this time. Systems that achieve this type of synchronization between the transmitter and the receiver are called timing-recovery, clock-synchronization, or symbol-synchronization systems.

A simple implementation of clock synchronization employs an early-late gate. The operation of an early-late gate is based on the fact that in a PAM communication system the output of the matched filter is the autocorrelation function of the basic pulse signal used in the PAM system (possibly with some shift). The autocorrelation function is maximized at the optimum sampling time and is symmetric. This means that, in the absence of noise, at
sampling times T⁺ = T + δ and T⁻ = T − δ the output of the sampler will be equal, and the optimum sampling time is obviously the midpoint between the early and late sampling times,

T = (T⁺ + T⁻)/2    (7.6.18)

A typical matched filter output, together with the optimum, early, and late samples, is shown in Figure 7.36.

Figure 7.36: The matched filter output and early and late samples.

Now let us assume that we are not sampling at the optimal sampling time T but instead are sampling at T1. If we take two extra samples at T⁺ = T1 + δ and T⁻ = T1 − δ, these samples are not symmetric with respect to the optimum sampling time T and, therefore, their magnitudes will not be equal. Suppose, for instance, that T1 comes after the optimum sampling time T; then T⁻ = T1 − δ is closer to T than T⁺ = T1 + δ is and, as the figure shows, this results in

|y(T⁻)| > |y(T⁺)|    (7.6.19)

Therefore, when |y(T⁻)| > |y(T⁺)|,
the correct sampling time is before the assumed sampling time, and the sampling should be done earlier. Conversely, when |y(T⁻)| < |y(T⁺)|, the correct sampling time comes later, and the sampling time should be delayed. From Figure 7.36 it is clear that, when |y(T⁻)| = |y(T⁺)|, the sampling time is correct and no correction is necessary. The early-late gate synchronization system therefore takes three samples, at T1, T⁻ = T1 − δ, and T⁺ = T1 + δ, then compares |y(T⁻)| and |y(T⁺)| and, depending on their relative values, generates a signal to correct the sampling time.

Illustrative Problem 7.10 [Clock synchronization] A binary PAM communication system uses a raised-cosine waveform with a rolloff factor of 0.4. The system transmission rate is 4800 bits/s. Write a MATLAB file that simulates the operation of an early-late gate for this system.

Since the transmission rate is 4800 bits/s, we have

T = 1/4800    (7.6.22)

and with α = 0.4, the expression for the raised-cosine waveform becomes

x(t) = sinc(4800t) cos(1920πt) / (1 − 0.64(4800t)²)

This signal obviously extends from −∞ to +∞, but for all practical purposes it is sufficient to consider only the interval |t| ≤ 3T, which is roughly [−0.6 × 10⁻³, 0.6 × 10⁻³]. The plot of this signal is given in Figure 7.37. Truncating the raised-cosine pulse to this interval and computing the autocorrelation function results in the waveform shown in Figure 7.38. In this particular example the length of the autocorrelation function is 1201, and the maximum (i.e., the optimum sampling time) occurs at the 600th component. In the MATLAB script given below, the raised-cosine signal and the autocorrelation function are first computed and plotted; the early-late gate then corrects the sampling time. Two cases are examined: one when the incorrect sampling time is 700 and one when it is 500. In both cases the early-late gate corrects the sampling time to the optimum 600.
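The whole experiment — the truncated raised-cosine pulse, its autocorrelation, and the early-late correction — can also be sketched in Python. The grid of 601 samples at T/100 spacing and the wrong starting indices 700 and 500 match the text; the function names and the iteration cap are ours:

```python
import math

T = 1 / 4800.0     # symbol duration at 4800 bits/s
ALPHA = 0.4        # roll-off factor

def raised_cosine(t):
    """x(t) = sinc(t/T) cos(pi*alpha*t/T) / (1 - 4 alpha^2 t^2 / T^2),
    with the removable singularities handled explicitly."""
    if t == 0.0:
        return 1.0
    s = t / T
    sinc = math.sin(math.pi * s) / (math.pi * s)
    denom = 1.0 - 4.0 * ALPHA ** 2 * s ** 2
    if abs(denom) < 1e-9:                  # t = +-T/(2 alpha): L'Hopital limit
        return (math.pi / 4.0) * sinc
    return sinc * math.cos(math.pi * ALPHA * s) / denom

# 601 pulse samples on [-3T, 3T], then the full autocorrelation
# (length 1201, peak at index 600), normalized to a unit peak.
x = [raised_cosine(n * T / 100) for n in range(-300, 301)]
y = [sum(x[i] * x[i + k - 600]
         for i in range(max(0, 600 - k), min(601, 1201 - k)))
     for k in range(1201)]
peak = y[600]
y = [v / peak for v in y]

def early_late(y, n0, d=60, step=1, tol=0.01, max_steps=1000):
    """Shift the sampling index until the early and late samples balance."""
    n = n0
    for _ in range(max_steps):
        if abs(abs(y[n + d]) - abs(y[n - d])) < tol:
            break
        n = n + step if abs(y[n + d]) > abs(y[n - d]) else n - step
    return n

n_late = early_late(y, 700)    # started sampling too late
n_early = early_late(y, 500)   # started sampling too early
```

Both starting points are pulled back to (very near) the optimum index 600, because the symmetric autocorrelation makes the early and late samples equal only around its peak.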
Figure 7.37: The raised-cosine signal in Illustrative Problem 7.10.

Figure 7.38: The autocorrelation function of the raised-cosine signal.

% MATLAB script for Illustrative Problem 10, Chapter 7.
echo on
alpha=0.4;
T=1/4800;
t=[-3*T:1.001*T/100:3*T];
x=sinc(t./T).*(cos(pi*alpha*t./T)./(1-4*alpha^2*t.^2/T^2));
pause % Press any key to see a plot of x(t).
plot(t,x)
y=xcorr(x);
ty=[t-3*T,t(2:length(t))+3*T];
pause % Press any key to see a plot of the autocorrelation of x(t).
plot(ty,y)
d=60;       % Early and late advance and delay
ee=0.01;    % Precision
e=1;        % Step size
n=700;      % The incorrect sampling time
while abs(abs(y(n+d))-abs(y(n-d)))>=ee
  if abs(y(n+d))-abs(y(n-d))>0
    n=n+e;
  elseif abs(y(n+d))-abs(y(n-d))<0
    n=n-e;
  end
end
pause % Press any key to see the corrected sampling time
n
n=500;      % Another incorrect sampling time
while abs(abs(y(n+d))-abs(y(n-d)))>=ee
  if abs(y(n+d))-abs(y(n-d))>0
    n=n+e;
  elseif abs(y(n+d))-abs(y(n-d))<0
    n=n-e;
  end
end
pause % Press any key to see the corrected sampling time
n

Problems

7.1 In Illustrative Problem 7.1, the signal waveform ψ(t) is given by (7.2.6). Use a sampling rate of Fs = 20,000 samples/s on the signal waveform ψ(t) and compute the energy of ψ(t) by approximating the integral in (7.2.8) by the summation

(1/Fs) Σ over n = 0, ..., N − 1 of ψ²(n/Fs)

where N is the number of samples. Write a MATLAB program to generate the samples and perform the computation of the signal energy as described above.

7.2 Repeat Problem 7.1 when the transmitter filter has a square-root raised-cosine spectral characteristic with roll-off factor α = 1.

7.3 Repeat Problem 7.1 when the transmitter has a square-root duobinary spectral characteristic.

7.4 The purpose of this problem is to demonstrate that (7.2.9) and (7.2.10) hold by evaluating (7.2.9) numerically using MATLAB. The pulse gT(t) may be assumed to be rectangular:

gT(t) = 1 for 0 ≤ t ≤ 2, and 0 otherwise

Let the carrier frequency be fc = 2000 Hz. Evaluate and graph the spectrum of the baseband signal and the amplitude-modulated PAM signal.

7.5 The cross correlation of the received signal r(t) with ψ(t), as given by (7.2.14), may be performed numerically using MATLAB. Write a MATLAB program that computes the correlator output

y(n) = (1/Fs) Σ over k = 0, ..., n of r(k/Fs) ψ(k/Fs),  n = 0, 1, ..., N − 1

where Fs is the sampling frequency and N is the number of samples. Evaluate and graph y(n) when r(t) = ψ(t), where ψ(t) is the waveform described in Problem 7.1 and the carrier frequency is fc = 40/T.

7.6 Evaluate and graph the correlation {y(n)} in Problem 7.5 when the carrier frequency is fc = 80/T.

7.7 In Illustrative Problem 7.2, the eight PSK waveforms had a constant amplitude. Instead of the rectangular pulse gT(t), suppose that the signal pulse shape is

gT(t) = (1/2)(1 − cos 2πt/T) for 0 ≤ t ≤ T, and 0 otherwise

Write a MATLAB program to compute and graph the M = 8-PSK signal waveforms for the case in which fc = 6/T.

7.8 Write a MATLAB program that numerically computes the cross correlation of the received signal r(t) for a PSK signal with the two basis functions ψ1(t) and ψ2(t) given by (7.3.9). That is, compute

yc(n) = (1/Fs) Σ over k = 0, ..., n of r(k/Fs) ψ1(k/Fs)
ys(n) = (1/Fs) Σ over k = 0, ..., n of r(k/Fs) ψ2(k/Fs)

for n = 0, 1, ..., N − 1, where N is the number of samples of r(t) and Fs = 10,000 samples/s. Evaluate and plot these correlation sequences when r(t) = sT(t)ψ1(t), where sT(t) = 2 for 0 ≤ t ≤ 2 and 0 otherwise, for the same parameters given in Problem 7.4.

7.9 Write a MATLAB program that performs a Monte Carlo simulation of an M = 4-PSK communication system, as described in Illustrative Problem 7.4, but modify the detector so that it computes the received signal phase θr as given by (7.3.16) and selects the signal point whose phase is closest to θr.

7.10 Write a MATLAB program that implements a differential encoder and a differential decoder for an M = 4 DPSK system. Check the operation of the encoder and decoder by passing a sequence of 2-bit symbols through the cascade of the encoder and decoder and verify that the output sequence is identical to the input sequence.

7.11 Write a MATLAB program that performs a Monte Carlo simulation of a binary DPSK communication system. In this case the transmitted signal phases are θ = 0° and θ = 180°; a Δθ = 0° phase change corresponds to the transmission of a zero, and a Δθ = 180° phase change corresponds to the transmission of a one. Perform the simulation for N = 10,000 bits at different values of the SNR parameter Eb/N0. It is convenient to normalize Eb to unity; then, with σ² = N0/2, the SNR is Eb/N0 = 1/2σ², where σ² is the variance of the additive noise component. Hence, the SNR can be controlled by scaling the variance of the additive noise component. Plot and compare the measured error rate of the binary DPSK with the theoretical error probability given by (7.3.23).

7.12 Write a MATLAB program that generates and graphs the M = 8-QAM signal waveforms given by (7.4.2) for the signal constellation shown in Figure P7.12. Assume that the pulse waveform gT(t) is rectangular:

gT(t) = 1 for 0 ≤ t ≤ T, and 0 otherwise

Figure P7.12: The M = 8-QAM signal constellation, with points (±1, ±1) and (±3, ±1).

7.13 Repeat Problem 7.12 when the pulse waveform gT(t) is given as

gT(t) = (1/2)(1 − cos 2πt/T) for 0 ≤ t ≤ T, and 0 otherwise

7.14 Write a MATLAB program that performs a Monte Carlo simulation of an M = 8-QAM communication system for the signal constellation shown in Figure P7.12. Perform the simulation for N = 10,000 (3-bit) symbols at different values of the SNR parameter Eav/N0. It is convenient to normalize Eav to unity; then, with σ² = N0/2, the SNR is Eav/N0 = 1/2σ², where σ² is the variance of each of the two additive noise components. Plot and compare the measured symbol-error rate of the QAM system with the upper bound on the theoretical error probability given by (7.4.12).

7.15 Repeat the simulation in Problem 7.14 for the M = 8 signal constellation shown in Figure P7.15. Compare the error probabilities for the two M = 8-QAM signal constellations and indicate which constellation gives the better performance.

Figure P7.15: An alternative M = 8-QAM signal constellation.

7.16 Consider the binary FSK signals of the form

u1(t) = sqrt(2Eb/Tb) cos 2πf1t,  0 ≤ t ≤ Tb
u2(t) = sqrt(2Eb/Tb) cos 2πf2t,  0 ≤ t ≤ Tb,  f2 = f1 + 1/Tb

Let f1 = 1000/Tb. By sampling the two waveforms at the rate Fs = 5000/Tb, we obtain 5000 samples in the bit interval 0 ≤ t ≤ Tb. Write a MATLAB program that generates the 5000 samples for each of u1(t) and u2(t), compute the cross correlation

(1/N) Σ over n = 0, ..., N − 1 of u1(n/Fs) u2(n/Fs)

and, thus, verify numerically the orthogonality condition for u1(t) and u2(t).

7.17 Use the MATLAB program given in Illustrative Problem 7.7 to compute and graph the correlator outputs when the received signal is

r(t) = cos(…),  0 ≤ t ≤ T

7.18 Use the MATLAB program given in Illustrative Problem 7.7 to compute and graph the correlator outputs when the transmitted signal is u1(t) and the received signal is

r(t) = cos(…),  0 ≤ t ≤ T

7.19 Write a MATLAB program that performs a Monte Carlo simulation of a quaternary (M = 4) FSK communication system that employs the frequencies fk = f1 + (k − 1)/(2T), k = 1, 2, 3, 4. The detector is a square-law detector. Perform the simulation for N = 10,000 (2-bit) symbols at different values of the SNR parameter Eb/N0, and record the number of symbol errors and bit errors. Plot and compare the measured symbol- and bit-error rates with the theoretical symbol- and bit-error probabilities given by (7.5.19) and (7.5.21).

7.20 In Illustrative Problem 7.9 it was assumed that the input phase has an abrupt jump, and the simulation showed that a first-order loop filter can track such a change. Now assume that the input phase changes according to a ramp, i.e., starts to increase linearly. Simulate the performance of a first-order PLL in this case and determine whether the loop is capable of tracking such a change.

7.21 Repeat Illustrative Problem 7.10 with a rectangular pulse shape in the presence of AWGN for SNR values of 20, 10, 5, and 0 dB.
Chapter 8

Channel Capacity and Coding

8.1 Preview

The objective of any communication system is to transmit information generated by an information source from one location to another. The medium over which information is transmitted is called the communication channel. We have already seen in Chapter 4 that the information content of a source is measured by the entropy of the source, and the most common unit for this quantity is bits. We have also seen that an appropriate mathematical model for an information source is a random process.

In this chapter, we consider appropriate mathematical models for communication channels. In particular, we consider two types of channels: the binary symmetric channel (BSC) and the additive white Gaussian noise (AWGN) channel. We also discuss a quantity called the channel capacity that is defined for any communication channel and gives a fundamental limit on the amount of information that can be transmitted through the channel. The second part of this chapter is devoted to coding techniques for reliable communication over communication channels. We discuss the two most commonly used coding methods, block and convolutional coding. Encoding and decoding techniques for these codes and their performance are discussed in detail in the later sections of this chapter.

8.2 Channel Model and Channel Capacity

A communication channel transmits the information-bearing signal to the destination. In this transmission, the information-carrying signal is subject to a variety of changes. Some of these changes are deterministic in nature, e.g., attenuation and linear and nonlinear distortion; some are probabilistic, e.g., addition of noise, multipath fading, etc. Since deterministic changes can be considered as special cases of random changes, in the most general sense the mathematical model for a communication channel is a stochastic dependence between the input and the output signals. In the simplest case this can be modeled as a conditional probability relating each output of the channel to its corresponding input.
at rates R < C reliable transmission over the channel is possible. A schematic model for this channel is shown in Figure 8. the channel output alphabet y. Figure 8. A binary symmetric channel corresponds to the case X = y = {0. CHANNEL CAPACITY AND CODING input alphabet X .2.r = 0) = f .1 : A binary symmetric channel (BSC).1) where / ( X . n = E E ^ i ^ l l jre% yey ( 822 --> where the mutual information is in bits and the logarithm is in base 2. One special case of the discrete memoryless channel is the binary. Reliable transmission is possible when there exists a sequence of codes with increasing block length. channel capacity is the maximum rate at which reliable transmission of information over the channel is possible. and at rates R > C reliable transmission is not possible.2. and the channel transition probability matrix p(y | v).n PU) (8.1. 1} and p(y = 0 j x — 1) = p(>' = I | .symmetric channel (BSC) that can be considered as a mathematical model for binary transmission over a Gaussian channel with hard decisions at the output. The mutual information between two random variables X and Y is defined as '(*. The parameter i is called the crossover probability of the channel. Plot the capacity of the resulting channel as a function of y . and the channel input is a process that satisfies an input power constraint of P . 2.2.8.2.4) Another important channel model is the bandlimited additive white Gaussian noise channel with an input power constraint. Assume that y changes f r o m .2 0 dB to 20 dB.6) Illustrative Problem 8. Shannon showed that the capacity of this channel in bits per second is given by C = W log ( 1 + | bits/second (8. 1. the capacity in bits per transmission is given by C = ^log(l + ^ I L L U S T R A T I V E P R O B L E M (8.2. 
Plot the error probability of the channel as a function of Y = (8-2.2.1 [Binary symmetric channel capacity] Binary data are transmitted over an additive white Gaussian noise channel using BPSK signaling and hard-decision decoding at the output using optimal matched filter detection.5) \ NQW } For a discrete-time additive white Gaussian noise channel with input power constraint P and noise variance a 2 . Z(t) X(l) x ( f ) = n (^p) j y ( t ) Figure 8. the noise is Gaussian and white with a (twosided) power spectral density of ^ 0 / 2 . W]. .(1 — .7) where is the energy in each B P S K signal and JVO/2 is the noise power spectral density.I V . The channel is bandlimited to [ . This channel is modeled as shown in Figure 8.2.2: Bandlimited additive white Gaussian noise channel. t ) . Channel Model and Channel Capaci ty 345 where e is the crossover probability of the channel and Hb(-) denotes the binary entropy function: Hh{x) = -x l o g ( .c) l o g ( 1 — x) (8. " (gamma_db. gamina= 10. pause % Press a key to see a plot of ermr probability vs.9) = 1 . Figure 8. capacity=1. Chapter if.1.Mb [Q ( % / 2 y ) ) to obtain a plot of C versus y . Here we use the relation C = 1 Hb{p) (8.2.3: BPSK error probability versus y = 2. This plot is shown in Figure 8.—entropy 2(p_error).#gamraa)).e rror) xlabeJ(' S N R / b i t ' ) title('Error p r o b a b i l i t y v e r s u s SNR/bit') . The error probability of the BPSK with optimal detection is given by P = Q ( y / 2 Y ) (8. CHANNEL CAPACITY AND CODING 1.3.8) The corresponding plot is shown in Figure 8.20). echo on gamma./10). The M A T L A B script for this problem is given below. % MATLAB script for Illustrative Problem !.error=q(sqn(2.2.db=[-20:0.4.p . SNR/bit cif se milogx( gamma.346 CHAPTER 8. p. The desired plot is given in Figure 8. when P / N o or W tends to infinity. W h e n P/Nq tends to infinity. ' ) pause % Press u key to see a plot of channel capacity vs. The capacity as a function of bandwidth is plotted in Figure 8.5. 
2.6.8. . SWR/6i! elf semilogx(gamma.2 [Gaussian channel capacity] 1. the capacity behaves differently. the capacity of the channel also tends to zero.4: Channel capacity versus y = ylabeK' E r r o r Prob.2. In particular.capacity) xlabeK ' SNR/bi t *) titlef'Channel c a p a c i t y v e r s u s ylabelf'Channel c a p a c i t y " ) I L L U S T R A T I V E SNR/bit') P R O B L E M Illustrative Problem 8. 0 0 2.2 0 and 30 dB. P. As is seen in the plots. when either P/Nq or W tend to zero. what is the channel capacity when W increases indefinitely? MaMIIHhlJ* 1. Plot the capacity of an additive white Gaussian noise channel with a bandwidth of W = 3000 Hz as a function of P / N for values of P / N between . A the capacity of an additive white Gaussian noise channel with P/Nq = 25 dB as a function of IV. the capacity also tends to infinity. However. Channel Model and Channel Capaci ty 347 Figure 8. Figure 8. CHANNEL CAPACITY AND CODING Figure 8.6: Capacity as a function of bandwidth in an A W G N channel. .348 CHAPTER S.5: Capacity of an A W G N channel with W = 3000 Hz as a function of P/Nq. 20050 50:100000]./10).8.510:10:5000. for p(X — A) — p(X = —A) = 5. = R.105:5 500.*log2(1+pn0/3000).10) (8. the capacity does go to a certain limit. For this input distribution. pause % Press a key to see a plot of channel capacity vs. the output distribution is given by p ( y ) = _J====e-iy-A?/2crl ( 8 2 1 2 ) 2 V w .db/10). bandwidth clf semilogxiw.*log2(1+pn0.3 [Capacity of binary input AWGN channel] A binary input additive white Gaussian noise channel is modeled by the two binary input levels A and — A and additive zero-mean Gaussian noise with variance o~.2. and p(y \ X = -A) ~ JS(-A. y. Due to the symmetry in the problem the capacity is achieved for uniform input distribution. i. However.2. pn0-db=25. pause % Press a key to see a plot of channel capacity vs.(7 2 ). p(y | X = A) ~ A^(A.5. Plot the capacity of this channel as a function of Aja.1:30). when W tends to infinity. 
bandwidth i n an A W G N channel') xlabel(' B a n d w i d t h ( H z ) ' ) ylabe!('Capacity ( b i t s / s e c o n d ) ' ) I L L U S T R A T I V E P R O B L E M Illustrative Problem 8. In this case X = { — A.4427— No The M A T L A B script for this problem is given below.To determine this limiting value we have lim w^cc W l o g . % MATLAB script for Illustrative Problem 2.capacity) title(' Capaci ty v s ../w). pn0=10"(pn0.~(pn0_db. Chapter S. P/NO Clf se mi I ogx (p nG.db=[-20:0.11) P = 1.a2). A}. capacity=3000. P/NO in an A W G N channel') x label {' P/NO ') ylabel(' C a p a c i t y ( b i t s / s e c o n d ) ') clear w=[1:10.e.capac 1 1 y) title(' C a p a c i t y v s .12:2:100. ( \ -f — — ) V NQW J P iVo In 2 (8. capacity=w.5025:25:20000.2. echo on pn0. pn0=10. Channel Mode! and Channel Capacity 349 as shown in Figure 8. which is determined by P / N o . 2.7.2:20]. Y) = ~ f p(y\X 2 J-oo + ^ A) ] o g P i y l X 2 " p(y) A ) dy dy (8./ p(y | X = -A) 2! J—oo log 3 p(y | X = .-(a-c!b/10). a=l0. Chapter 8.A) p(y) Simple integration and change of variables results in I(X'. CHANNEL CAPACITY AND CODING and the mutual information between the input and the output is given by / (X.15) (8. % MATLAB script for Illustrative Problem 3.2. A plot of the resulting curve is shown in Figure 8.2.350 CHAPTER S. for i=1:201 . echo on a-db=[—20:0.14) Using these relations we can calculatc / ( X . A/a Figure 8.Y)=l-f where du (8.13) 1 f°° . / ) for various values of £ and plot the result. The M A T L A B script for this problem follows.7: Capacity of a binary input A W G N channel as a function of SNR=A/CT. c. and therefore the capacity is given by Both Ch and C j . Si\'R plat. Chapter 8.A. c.5*f(i)+0.5«f(i)+0.a(i)+5.3. g(i)-quad( • i i 3 _ 8 f u r .db=[-13:0.1e-3.and soft-decision schemes] A binary input channel uses the two input levels A and .-a(i)).5*g(i): end pause % Press a key to see the capacity curves semi logx{a. 
% MATLAB script for Illustrative Problem 4.a(i)+5.a(i)-5.[ ].8.5*g(i): end pause % Press a key to see capacity vs. g(i)=quad(' i l 3 _ 8 f u n ' .c_hard) .a(i}).1a—3. For the hard-decision case the crossover probability of the resulting binary symmetric channel is Q(A/tr). ' . the output is used directly without quantization (soft decision). as expected.-a(t)): c(i)=0. soft.[ ].8. are shown in Figure 8.4 [Comparison of hard.-a(i)-5.c) Ulle('Capacity versus SNR i n binary input A W G N channel') xlabel('SNR') ylabeK'Capacity ( b i t s / t r a n s m i s s i o n ) ') 351 Illustrative Problem 8. c_soft(i)=0.1e-3. semilogn(a.a(i)). an optimal decision is made on each input level (hard decision). T h e soft-decision part is similar to Illustrative Problem 8.a. The output of the channel is the sum of the input and an additive white Gaussian noise with mean 0 and variance a 2 .-a{i)-5. In one case.-a(i)+5. Soft-decision decoding outperforms harddecision decoding at all values of A/a.-a(i)+5. echo on a.2."(a_db/10). and in the other case.a(i)—5. T h e MATLAB script for this problem is given below. a=10.1 e -3. Channel Model and Channel Capaci ty f(i)=quad(' il3_8Eun'. This channel is used under two different conditions.[]. Plot the capacity in each case as a function of A ja.hard=1-entropy 2(q(a)): for i=1:53 f(i)=quad(' i l 3 _ 8 f u n ' .5:13].[]. <?. Illustrative Problem 8. the capacity as a function of P / N o is similar to the curve shown in Figure 8. Note that for constant P / N o .130:50:300.5.5500:500:10000]: pnO_db=[—20:1 30}.25:20:100.6. Chapter .— P — ) 2 \ How) Plot the capacity as a function of both W and — P/No• The desired plot is shown in Figure 8. CHANNEL CAPACITY AND CODING Figure 8.db/10>.400:100:1000. w=[1:5:20.1250:250:5000. The MATLAB script file for this problem is given below.9. for i=1 45 for j=1:51 c(i. For constant bandwidth.j)=w(i)*log2[1+pn0(j)/w(i)).352 CHAPTER S.8: Plots of C/y and Cy versus SNR = A / a . 
ILLUSTRATIVE PROBLEM

Illustrative Problem 8.5 [Capacity versus bandwidth and SNR] The capacity of a bandlimited AWGN channel with input power constraint P and bandwidth W is given by

    C = W log2( 1 + P/(N0 W) )

Plot the capacity as a function of both W and P/N0.

SOLUTION: The desired plot is shown in Figure 8.9. Note that for constant P/N0 the plot reduces to the capacity-versus-bandwidth curve obtained earlier, and for constant bandwidth it reduces to the capacity-versus-P/N0 curve obtained earlier. The MATLAB script file for this problem is given below.

% MATLAB script for Illustrative Problem 5, Chapter 8.
echo off
w=[1:5:20,25:20:100,130:50:300,400:100:1000,1250:250:5000,5500:500:10000];
pn0_db=[-20:1:30];
pn0=10.^(pn0_db/10);
for i=1:45
  for j=1:51
    c(i,j)=w(i)*log2(1+pn0(j)/w(i));
  end
end
echo on
pause % Press a key to see capacity vs. W and P/N0
surfl(w,pn0_db,c')
title('Capacity vs. bandwidth and SNR')

Figure 8.9: Capacity as a function of bandwidth and SNR in an AWGN channel.

ILLUSTRATIVE PROBLEM

Illustrative Problem 8.6 [Capacity of discrete-time AWGN channel] Plot the capacity of the discrete-time additive white Gaussian noise channel as a function of the input power P and the noise variance sigma^2.

SOLUTION: The capacity of the discrete-time AWGN channel is C = (1/2) log2(1 + P/sigma^2) bits per channel use. The desired plot is given in Figure 8.10.

Figure 8.10: Capacity of the discrete-time AWGN channel as a function of the signal power (P) and the noise power (sigma^2).
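The discrete-time capacity formula used in this problem can be sketched directly (Python shown in place of MATLAB, as an illustrative aid):

```python
import math

def dt_awgn_capacity(P, sigma2):
    """Capacity (bits per channel use) of the discrete-time AWGN
    channel with input power P and noise variance sigma2."""
    return 0.5 * math.log2(1.0 + P / sigma2)
```

The capacity grows with the signal power and shrinks as the noise variance grows, which is exactly the shape of the surface plotted in Figure 8.10.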
The MATLAB script file for this problem is given below.

% MATLAB script for Illustrative Problem 6, Chapter 8.
echo on
p_db=[-20:1:20];
p=10.^(p_db/10);
np_db=[-20:1:20];
np=10.^(np_db/10);
for i=1:41
  for j=1:41
    c(i,j)=0.5*log2(1+p(i)/np(j));
  end
end
pause % Press a key to see the plot
surfl(np_db,p_db,c)

8.3 Channel Coding

Communication through noisy channels is subject to errors. In order to decrease the effect of errors and achieve reliable communication, it is necessary to transmit sequences that are as different as possible, so that the channel noise will not change one sequence into another. This means some redundancy has to be introduced to increase the reliability of communication. The introduction of redundancy results in transmission of extra bits and a reduction of the transmission rate.

Channel coding schemes can generally be divided into two classes: block codes and convolutional codes. In block coding, binary source output sequences of length k are mapped into binary channel input sequences of length n; therefore, the rate of the resulting code is k/n bits per transmission. Such a code is called an (n, k) block code and consists of 2^k codewords of length n. The mapping of the information source outputs into channel inputs is done independently, and the output of the encoder depends only on the current input sequence of length k and not on the previous input sequences. In convolutional encoding, source outputs of length k0 are mapped into n0 channel inputs, but the channel inputs depend not only on the most recent k0 source outputs but also on the last (L-1)k0 inputs of the encoder.

One of the simplest block codes is the simple repetition code, in which there are two messages to be transmitted over a binary symmetric channel. Instead of transmitting a 0 and a 1 for the two messages, two sequences, one consisting of all 0's and one consisting of all 1's, are transmitted. The length of the two sequences is chosen to be some odd number n. The encoding process is

    0 -> 00...0  (n zeros, n odd)                                   (8.3.1)
    1 -> 11...1  (n ones, n odd)                                    (8.3.2)

The decoding is a simple majority vote decoding; that is, if the majority of the received symbols are 1's, the decoder decides in favor of a 1, and if the majority are 0's, the decoder decides in favor of a 0. An error occurs if at least (n+1)/2 of the transmitted symbols are received in error.
Since the channel is a binary symmetric channel with crossover probability epsilon, the error probability can be expressed as

    p_e = SUM_{k=(n+1)/2}^{n} C(n,k) epsilon^k (1-epsilon)^{n-k}    (8.3.3)

For example, with n = 5 and epsilon = 0.001 we have

    p_e = SUM_{k=3}^{5} C(5,k) (0.001)^k (0.999)^{5-k} ~ 9.99 x 10^-9 ~ 10^-8    (8.3.4)

This means that by employing the channel five times, instead of just once, we can reduce the error probability from 0.001 to about 10^-8. Of course, a price has been paid for this more reliable performance, and that price is a reduction in the rate of transmission and an increase in the complexity of the system. The rate of transmission has been decreased from one binary message per one use of the channel to one binary message per five uses of the channel. The complexity of the system has been increased because now we have to use an encoder (which has a very simple structure) and a decoder, which implements majority vote decoding.

More reliable transmission in this problem is achieved if we increase n. For instance, for n = 9 we have

    p_e = SUM_{k=5}^{9} C(9,k) (0.001)^k (0.999)^{9-k} ~ 1.3 x 10^-13    (8.3.6)

From the above it seems that if we want to reduce the error probability to zero, we have to increase n indefinitely and, therefore, reduce the transmission rate to zero. This, however, is not the case. Shannon showed that one can achieve asymptotically reliable communication (i.e., p_e -> 0) by keeping the rate of transmission below the channel capacity, which in the above case is

    C = 1 - H_b(0.001) = 1 - 0.0114 = 0.9886 bits/transmission     (8.3.5)

This, however, is achieved by employing encoding and decoding schemes that are much more complex than the simple repetition code.

ILLUSTRATIVE PROBLEM

Illustrative Problem 8.7 [Error probability in simple repetition codes] Assuming that epsilon = 0.3 in a binary symmetric channel, plot p_e as a function of the block length n.

SOLUTION: We derive p_e for odd values of n from 1 to 61. The resulting plot is shown in Figure 8.11.
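The repetition-code error probability used in these examples can be evaluated with a short Python sketch (an illustrative counterpart to the MATLAB computation of Illustrative Problem 8.7):

```python
from math import comb

def repetition_error_prob(n, eps):
    """Message-error probability of an n-fold repetition code over a
    BSC with crossover probability eps, majority-vote decoding (n odd)."""
    t = (n + 1) // 2   # number of channel errors needed to flip the majority
    return sum(comb(n, k) * eps**k * (1 - eps)**(n - k)
               for k in range(t, n + 1))

print(repetition_error_prob(5, 0.001))   # the n = 5 example above
```

For n = 1 the formula reduces to the raw channel error probability eps, and it decreases rapidly as n grows, matching the hand calculations for n = 5 and n = 9.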
The MATLAB script file for this problem is given below.

% MATLAB script for Illustrative Problem 7, Chapter 8.
echo on
ep=0.3;
for i=1:2:61
  p(i)=0;
  for j=(i+1)/2:i
    p(i)=p(i)+prod(1:i)/(prod(1:j)*prod(1:(i-j)))*ep^j*(1-ep)^(i-j);
  end
end
pause % Press a key to see the plot
stem((1:2:61),p(1:2:61))
xlabel('n')
ylabel('pe')
title('Error probability as a function of n in simple repetition code')

Figure 8.11: Error probability of a simple repetition code for epsilon = 0.3 and n = 1, 3, ..., 61.

8.3.1 Linear Block Codes

Linear block codes are the most important and widely used class of block codes. A block code is linear if any linear combination of two codewords is a codeword. In the binary case, this means that the sum of any two codewords is a codeword. In linear block codes the codewords form a k-dimensional subspace of an n-dimensional space. Linear block codes
are described in terms of their generator matrix G, which is a k x n binary matrix such that each codeword c can be written in the form

    c = uG                                                          (8.3.7)

where u is the binary data sequence of length k (the encoder input). Obviously, the all-0 sequence of length n is always a codeword of an (n, k) linear block code.

An important parameter in a linear block code, which determines its error-correcting capabilities, is the minimum (Hamming) distance of the code, defined as the minimum Hamming distance between any two distinct codewords. The minimum distance of a code is denoted by d_min, and we have

    d_min = min_{i != j} d_H(c_i, c_j)                              (8.3.8)

For linear codes the minimum distance is equal to the minimum weight of the code, defined by

    w_min = min_{c_i != 0} w(c_i)                                   (8.3.9)

i.e., the minimum number of 1's in any nonzero codeword.

ILLUSTRATIVE PROBLEM

Illustrative Problem 8.8 [Linear block codes] The generator matrix for a (10, 4) linear block code is given by

        [ 1 0 0 1 1 1 0 1 1 1 ]
    G = [ 1 1 1 0 0 0 1 1 1 0 ]
        [ 0 1 1 0 1 1 0 1 0 1 ]
        [ 1 1 0 1 1 1 1 0 0 1 ]

Determine all the codewords and the minimum weight of the code.

SOLUTION: In order to obtain all codewords, we have to use all information sequences of length 4 and find the corresponding encoded sequences. Since there is a total of 16 binary sequences of length 4, there will be 16 codewords. Let U denote a 2^k x k matrix whose rows are all possible binary sequences of length k, starting from the all-0 sequence and ending with the all-1 sequence, arranged so that the decimal representation of each row is smaller than the decimal representation of all rows below it:

    0 0 0 0
    0 0 0 1
    0 0 1 0
    0 0 1 1
    0 1 0 0
    0 1 0 1
    0 1 1 0
    0 1 1 1
    1 0 0 0
    1 0 0 1
    1 0 1 0
    1 0 1 1
    1 1 0 0
    1 1 0 1
    1 1 1 0
    1 1 1 1

We have

    C = UG                                                          (8.3.10)

where C is the matrix of codewords, in this case a 16 x 10 matrix whose rows are the codewords:

    0 0 0 0 0 0 0 0 0 0
    1 1 0 1 1 1 1 0 0 1
    0 1 1 0 1 1 0 1 0 1
    1 0 1 1 0 0 1 1 0 0
    1 1 1 0 0 0 1 1 1 0
    0 0 1 1 1 1 0 1 1 1
    1 0 0 0 1 1 1 0 1 1
    0 1 0 1 0 0 0 0 1 0
    1 0 0 1 1 1 0 1 1 1
    0 1 0 0 0 0 1 1 1 0
    1 1 1 1 0 0 0 0 1 0
    0 0 1 0 1 1 1 0 1 1
    0 1 1 1 1 1 1 0 0 1
    1 0 1 0 0 0 0 0 0 0
    0 0 0 1 0 0 1 1 0 0
    1 1 0 0 1 1 0 1 0 1

A close inspection of the codewords shows that the minimum distance of the code is d_min = 2.
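The codeword generation C = UG (mod 2) can also be expressed in a few lines of Python, shown here as an illustrative cross-check of the MATLAB computation (the generator matrix is the one given in the problem statement):

```python
from itertools import product

# Generator matrix of the (10, 4) code from Illustrative Problem 8.8
G = [[1, 0, 0, 1, 1, 1, 0, 1, 1, 1],
     [1, 1, 1, 0, 0, 0, 1, 1, 1, 0],
     [0, 1, 1, 0, 1, 1, 0, 1, 0, 1],
     [1, 1, 0, 1, 1, 1, 1, 0, 0, 1]]

def codewords(G):
    """All 2^k codewords c = uG (mod 2), u running through all k-bit words."""
    k, n = len(G), len(G[0])
    return [tuple(sum(u[i] * G[i][j] for i in range(k)) % 2 for j in range(n))
            for u in product([0, 1], repeat=k)]

C = codewords(G)
# For a linear code, d_min equals the minimum nonzero codeword weight
d_min = min(sum(c) for c in C if any(c))
print(d_min)
```

Because the code is linear, the XOR of any two codewords is again a codeword, which the test below also spot-checks.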
The MATLAB script file for this problem is given below.

% MATLAB script for Illustrative Problem 8, Chapter 8.
echo on
k=4;
% generate U, denoting all information sequences
for i=1:2^k
  for j=k:-1:1
    if rem(i-1,2^(-j+k+1))>=2^(-j+k)
      u(i,j)=1;
    else
      u(i,j)=0;
    end
  end
end
% define G, the generator matrix
g=[1 0 0 1 1 1 0 1 1 1;
   1 1 1 0 0 0 1 1 1 0;
   0 1 1 0 1 1 0 1 0 1;
   1 1 0 1 1 1 1 0 0 1];
% generate codewords
c=rem(u*g,2);
% find the minimum distance
w_min=min(sum((c(2:2^k,:))'));

A linear block code is in the systematic form if its generator matrix is of the form

    G = [ I_k | P ]                                                 (8.3.12)

where I_k denotes the k x k identity matrix and P is a k x (n-k) matrix. In a systematic code, the first k binary symbols in a codeword are the information bits, and the remaining n-k binary symbols are the parity-check symbols. The parity-check matrix of a code is any (n-k) x n binary matrix H such that for all codewords c we have

    cH^t = 0                                                        (8.3.13)

Obviously, we will have

    GH^t = 0                                                        (8.3.14)

and if G is in systematic form, then

    H = [ P^t | I_{n-k} ]                                           (8.3.15)

Hamming Codes

Hamming codes are (2^m - 1, 2^m - m - 1) linear block codes with minimum distance 3 and a very simple parity-check matrix. The parity-check matrix, which is an m x (2^m - 1) matrix, has all binary sequences of length m, except the all-0 sequence, as its columns.
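The relations G = [I_k | P], H = [P^t | I_{n-k}], and GH^t = 0 can be illustrated with a (7, 4) Hamming code. In the Python sketch below (illustrative, not from the text), P is chosen so that the columns of H run through all seven nonzero binary sequences of length 3; this is one valid choice, equivalent to the book's arrangement up to column ordering.

```python
from itertools import product

# One valid parity part for a systematic (7, 4) Hamming code
P = [[1, 1, 0],
     [0, 1, 1],
     [1, 1, 1],
     [1, 0, 1]]
k, m = 4, 3
n = k + m

G = [[int(i == j) for j in range(k)] + P[i] for i in range(k)]      # G = [I | P]
H = [[P[j][i] for j in range(k)] + [int(i == j) for j in range(m)]  # H = [P^t | I]
     for i in range(m)]

# G H^t = 0 (mod 2): every codeword satisfies all parity checks
GHt = [[sum(G[r][t] * H[s][t] for t in range(n)) % 2 for s in range(m)]
       for r in range(k)]

# Enumerate the 16 codewords and find the minimum nonzero weight
words = [tuple(sum(u[i] * G[i][t] for i in range(k)) % 2 for t in range(n))
         for u in product([0, 1], repeat=k)]
d_min = min(sum(w) for w in words if any(w))
```

The enumeration confirms d_min = 3, the defining property of Hamming codes.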
For instance, for m = 3 we have a (7, 4) code whose parity-check matrix in systematic form is

        [ 1 0 1 1 1 0 0 ]
    H = [ 1 1 1 0 0 1 0 ]                                           (8.3.16)
        [ 0 1 1 1 0 0 1 ]

From this we have

        [ 1 0 0 0 1 1 0 ]
    G = [ 0 1 0 0 0 1 1 ]                                           (8.3.17)
        [ 0 0 1 0 1 1 1 ]
        [ 0 0 0 1 1 0 1 ]

This code has 2^4 = 16 codewords, and its minimum distance is d_min = 3.

ILLUSTRATIVE PROBLEM

Illustrative Problem 8.9 [Hamming codes] Find all the codewords of the (15, 11) Hamming code and verify that its minimum distance is equal to 3.

SOLUTION: For the (15, 11) Hamming code, m = 4, so there are 2^11 = 2048 codewords, each of length 15, and the rate of the code is 11/15 = 0.733. In order to verify the minimum distance of the code, we use a MATLAB script similar to the one used in Illustrative Problem 8.8. The generator matrix is in the systematic form g = [I_11 | P], where the rows of the parity part P are the eleven binary sequences of length 4 with weight at least 2. The MATLAB script is given below.

% MATLAB script for Illustrative Problem 9, Chapter 8.
echo on
k=11;
for i=1:2^k
  for j=k:-1:1
    if rem(i-1,2^(-j+k+1))>=2^(-j+k)
      u(i,j)=1;
    else
      u(i,j)=0;
    end
  end
end
g=[1 0 0 0 0 0 0 0 0 0 0 0 0 1 1;
   0 1 0 0 0 0 0 0 0 0 0 0 1 0 1;
   0 0 1 0 0 0 0 0 0 0 0 0 1 1 0;
   0 0 0 1 0 0 0 0 0 0 0 0 1 1 1;
   0 0 0 0 1 0 0 0 0 0 0 1 0 0 1;
   0 0 0 0 0 1 0 0 0 0 0 1 0 1 0;
   0 0 0 0 0 0 1 0 0 0 0 1 0 1 1;
   0 0 0 0 0 0 0 1 0 0 0 1 1 0 0;
   0 0 0 0 0 0 0 0 1 0 0 1 1 0 1;
   0 0 0 0 0 0 0 0 0 1 0 1 1 1 0;
   0 0 0 0 0 0 0 0 0 0 1 1 1 1 1];
c=rem(u*g,2);
w_min=min(sum((c(2:2^k,:))'));

Running the script gives w_min = 3, verifying that the minimum distance of this code is 3.

Performance of Linear Block Codes

Linear block codes can be decoded using either soft-decision decoding or hard-decision decoding. In soft-decision decoding, the received signal is mapped into the codeword whose corresponding signal is at minimum Euclidean distance from the received signal. The performance of this decoding scheme depends on the distance structure of the code, but a tight upper bound, particularly at high values of the SNR, can be obtained in terms of the minimum distance of the code. The message-error probability in this case is upper-bounded by

    P_e <= (M - 1) Q( d_E,min / sqrt(2 N0) )                        (8.3.19)

where d_E,min is the minimum Euclidean distance between the signals corresponding to distinct codewords and M = 2^k is the number of codewords.

In a hard-decision decoding scheme, first a bit-by-bit decision is made on the components of the codeword, and then the decoding is performed using a minimum Hamming distance criterion; that is, the received sequence is mapped into the codeword at minimum Hamming distance from it. The message-error probability of a linear block code with minimum distance d_min, in hard-decision decoding, is upper bounded by

    P_e <= (M - 1) [ 4p(1 - p) ]^{d_min/2}                          (8.3.20)

where p denotes the error probability of the binary channel (error probability in demodulation).
For binary antipodal and orthogonal signaling, the minimum Euclidean distance of the code is given by

    d_E,min = 2 sqrt( d_min E )       for antipodal signaling
    d_E,min = sqrt( 2 d_min E )       for orthogonal signaling

which results in

    P_e <= (M - 1) Q( sqrt( d_min E / N0 ) )       for orthogonal signaling    (8.3.21)
    P_e <= (M - 1) Q( sqrt( 2 d_min E / N0 ) )     for antipodal signaling

In these inequalities d_min is the minimum Hamming distance of the code, N0 is the one-sided noise power spectral density, and E denotes the energy per component of the codewords. Since each codeword has n components, the energy per codeword is nE, and since each codeword carries k information bits, the energy per bit, E_b, is given by

    E_b = nE/k = E/R_c                                              (8.3.23)

where R_c = k/n denotes the rate of the code. In terms of E_b, the above relations can be written as

    P_e <= (M - 1) Q( sqrt( d_min R_c E_b / N0 ) )     for orthogonal signaling    (8.3.24)
    P_e <= (M - 1) Q( sqrt( 2 d_min R_c E_b / N0 ) )   for antipodal signaling

The bounds obtained above are usually useful only for large values of gamma_b = E_b/N0; for smaller gamma_b values, the bounds become very loose and can even exceed 1.

ILLUSTRATIVE PROBLEM

Illustrative Problem 8.10 [Performance of hard-decision decoding] Assuming that the (15, 11) Hamming code is used with antipodal signaling and hard-decision decoding, plot the message-error probability as a function of gamma_b = E_b/N0.

SOLUTION: Since antipodal signaling is employed, the error probability of the binary channel is given by

    p = Q( sqrt( 2E/N0 ) ) = Q( sqrt( 2 R_c E_b / N0 ) )            (8.3.25)
The M A T L A B function for computing the bound on message-error probability of a linear block code when hard-decision decoding and antipodal signaling is employed is given below.n .12: Error probability as a function of Yb for a (15.\ No ) (8. gam ma_ d b .l ) [ 4 p ( l . 10. "(d_rnin/2): In the MATLAB script given below. semilogy(gamma.15.ha. I I ) H a m m i n g code is used with an orthogonal binary modulation scheme instead of the antipodal scheme.366 CHAPTER S.h(5_a( 10. CHANNEL CAPACITY AND CODING % p-eJtd.en.b)).«R_c *gDmma_b)). plot the message-error probability as a function of yb = The problem is similar to Illustrative Problem S.3.31b Rc (8.number signaling is used.il. R_c=k/n.gamma_b|=p.err.*(4*p_b. " (gamma_db/10).11 [Hard-decision decoding] If the (15.e Jid-ii( gamma ulb J.3). except that the crossover probability of the equivalent binary symmetric channel (after hard-decision decoding) is given by (8. Chapter 8.30) (8.djn E-b/NJ) bits in the code of the code in) gamma_dbJ = lower EJj'NJ) = higher of information distance n = code block length d-jni/i .16. p_err=(2~k —1). |p.29) Using the relation £ we obtain --. gam ma_db Ji. gamtna_b=10.gamma.e.3. in = p.dbl gamma jdbJi k .p.minimum gamma-db=[gamma.b=:q(sqrt(2.n. p. % % % % % % % % m Mattah function for computing error probability hard-decision decoding of a linear block code when antipodal lp_err.11.31) .3. the above MATLAB function is employed to plot error probability versus Yb• % MATLAB script for Illustrative Problem 10.b.k.dbJ:(gamma-db_h-gamma-dbJ)/20:gamnna-iJb_h].*(1 —p.ha) Illustrative Problem 8. Pe < ( 2 U " l ) [ 4 p ( l . It is also instructive to plot the two error probability bounds for orthogonal and antipodal signaling on the same figure.~(gamma-b-db/10). T h e M A T L A B file for this problem is given below. qc|=q(sqri(0. Channel Coding and.3.13. In fact. This is done in Figure 8.7333^\ /— ^ 1. b_db=[ —4:1:14): gamma_b=1O.*qq).eiT=2047*qq "2.14.- 367 = 2047 4ß (8. 
p _ err) Figure 8.13.3. echo on gamma. finally. p. As w e observe from Figure 8. for these values of Yb the bound on the error probability is larger than 1.733£b\ /. — — .8. .»gamma. pause % Press a key to see p. % MATLA8 script for Illustrative Problem II. Chapter fi.13: Error probability versus Yb of a (15. .11) code with orthogonal signaling and hard-decision decoding.32) The p e -versus-£/>/<Vo plot is shown in Figure 8./ ? ) ] " /0.*{3-2.733. The superior performance of the antipodal signaling compared to orthogonal signaling is readily seen from a comparison of these two plots.b)).ß .err versus guoima J J curve I og log( gam m a_ b. for lower values of Yb the derived bound is too loose. / /o. gamma-idbJi.eJid-t>(gamma-dbJ. I L L U S T R A T I V E P R O B L E M Illustrative Problem 8.!:(gamnia_db-h—gamma_dbJ)/20:gamma_db-h].l.k."(gamma_db/10).hd.number of information bits in the code n .k. p-err=(2~k-1) *(4»p.»gamma_b)).e.db.min/2).mjn) % p-eJidj>m % % % % % % % % Matlab function for computing error probabiliry in hard-decision decoding of a linear block code when orthogonal signaling is used.gamma. The MATLAB function for computing the message-error probability in the case of hard-decision decoding with orthogonal signaling is given below.lower E-b/NJ) gammci-dbJi . p-b=q(sqrt(R.dbjsp.12 [Soft-decision decoding] Solve Illustrative Problem 8.higher EJ//NJ) k .ofgamma.code block length d-inin = minimum distance of the code ga/nma_db=[gamnia_db.gammu-dbj a p.c=k/n.n. function [p.c. gamma_b=10.err. Ipjtrr. J4: Comparison of antipodal and orthogonal signaling. .gamma.11 when soft-decision decoding is used instead of hard-decision decoding. CHANNEL CAPACITY AND CODING Pe 10' Figure 8.*(1 —p_b)).db. R.d-min) gumma ~dbJ .n^.h.368 CHAPTER 8.b."{d. £imi[1 — 3. — ) y 10 Nq ] for orthogonal signaling 2047Q { for antipodal signaling The corresponding plots arc shown in Figure 8. The superior performance of antipodal signaling is obvious from these plots. 
Figure 8.14: Comparison of antipodal and orthogonal signaling with hard-decision decoding.

SOLUTION: In this case we have to use Equation (8.3.24) to find an upper bound for the error probability. We have

    P_e <= (M - 1) Q( sqrt( d_min R_c E_b / N0 ) )     for orthogonal signaling
    P_e <= (M - 1) Q( sqrt( 2 d_min R_c E_b / N0 ) )   for antipodal signaling

In the problem under study, d_min = 3, R_c = 11/15, and M - 1 = 2^11 - 1 = 2047. Therefore,

    P_e <= 2047 Q( sqrt( 2.2 E_b/N0 ) )     for orthogonal signaling
    P_e <= 2047 Q( sqrt( 4.4 E_b/N0 ) )     for antipodal signaling

The corresponding plots are shown in Figure 8.15. The superior performance of antipodal signaling is obvious from these plots.

Figure 8.15: Message-error probability versus gamma_b for soft-decision decoding.

Two MATLAB functions, one for computing the error probability for antipodal signaling and one for computing the error probability for orthogonal signaling when soft-decision decoding is employed, are given below.
function [p_err,gamma_db]=p_e_sd_a(gamma_db_l,gamma_db_h,k,n,d_min)
% p_e_sd_a.m  Matlab function for computing the error probability in
%             soft-decision decoding of a linear block code when
%             antipodal signaling is used.
%             gamma_db_l = lower E_b/N_0
%             gamma_db_h = higher E_b/N_0
%             k = number of information bits in the code
%             n = code block length
%             d_min = minimum distance of the code
gamma_db=[gamma_db_l:(gamma_db_h-gamma_db_l)/20:gamma_db_h];
gamma_b=10.^(gamma_db/10);
R_c=k/n;
p_err=(2^k-1).*q(sqrt(2.*d_min.*R_c.*gamma_b));

function [p_err,gamma_db]=p_e_sd_o(gamma_db_l,gamma_db_h,k,n,d_min)
% p_e_sd_o.m  Matlab function for computing the error probability in
%             soft-decision decoding of a linear block code when
%             orthogonal signaling is used.
%             gamma_db_l = lower E_b/N_0
%             gamma_db_h = higher E_b/N_0
%             k = number of information bits in the code
%             n = code block length
%             d_min = minimum distance of the code
gamma_db=[gamma_db_l:(gamma_db_h-gamma_db_l)/20:gamma_db_h];
gamma_b=10.^(gamma_db/10);
R_c=k/n;
p_err=(2^k-1).*q(sqrt(d_min.*R_c.*gamma_b));

In Figure 8.16, four plots corresponding to antipodal and orthogonal signaling with soft- and hard-decision decoding are shown. The MATLAB script that generates this figure is given below.

% MATLAB script for Illustrative Problem 12, Chapter 8.
[p_err_ha,gamma_b]=p_e_hd_a(-4,14,11,15,3);
[p_err_ho,gamma_b]=p_e_hd_o(-4,14,11,15,3);
[p_err_sa,gamma_b]=p_e_sd_a(-4,14,11,15,3);
[p_err_so,gamma_b]=p_e_sd_o(-4,14,11,15,3);
semilogy(gamma_b,p_err_ha,gamma_b,p_err_ho,gamma_b,p_err_sa,gamma_b,p_err_so)
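The two soft-decision bounds can be packaged in one Python helper (an illustrative translation of p_e_sd_a and p_e_sd_o; following the bounds used above, the constant inside the square root is 2 for antipodal and 1 for orthogonal signaling):

```python
import math

def qfunc(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pe_sd(gamma_b_db, antipodal=True, k=11, n=15, d_min=3):
    """Soft-decision union bound (M-1) Q(sqrt(c * d_min * Rc * gamma_b)),
    with c = 2 for antipodal and c = 1 for orthogonal signaling."""
    gamma_b = 10.0 ** (gamma_b_db / 10.0)
    c = 2.0 if antipodal else 1.0
    return (2**k - 1) * qfunc(math.sqrt(c * d_min * (k / n) * gamma_b))
```

At any fixed gamma_b the antipodal bound is smaller than the orthogonal one, reproducing the ordering seen in Figure 8.15.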
Figure 8.16: Comparison of antipodal/orthogonal signaling and soft/hard-decision decoding.

8.3.2 Convolutional Codes

In block codes each sequence of k information bits is mapped into a sequence of n channel inputs in a fixed way regardless of the previous information bits. In convolutional codes each sequence of k0 information bits is mapped into a channel input sequence of length n0, but the channel input sequence depends not only on the most recent k0 information bits but also on the last (L-1)k0 inputs of the encoder. Therefore, the encoder has the structure of a finite-state machine, where at each time instance the output sequence depends not only on the input sequence, but also on the state of the encoder, which is determined by the last (L-1)k0 inputs of the encoder. The parameter L is called the constraint length of the convolutional code.*

The schematic diagram for a convolutional code with k0 = 2, n0 = 3, and L = 4 is shown in Figure 8.17. In this convolutional encoder the information bits are loaded into the shift register 2 bits at a time, and the last 2 information bits in the shift register move out. The 3 encoded bits are then computed as shown in the figure and are transmitted over the channel. The rate of this code is, therefore, R = 2/3. Note that the 3 encoder outputs transmitted over the channel depend on the 2 information bits loaded into the shift register as well as on the contents of the first three stages (6 bits) of the shift register. The contents of the last stage (2 bits) have no effect on the output, since they leave the shift register as soon as the 2 new information bits are loaded into it. A binary convolutional code is, therefore, a finite-state machine with 2^{(L-1)k0} states.

* Some authors define m = L k0 as the constraint length, and some prefer L - 1.
Figure 8.17: A convolutional code with k0 = 2, n0 = 3, and L = 4.

A convolutional code is usually defined in terms of its generator sequences g1, g2, ..., g_{n0}. The ith component of g_j, 1 <= i <= k0 L and 1 <= j <= n0, is 1 if the ith element of the shift register is connected to the combiner corresponding to the jth bit in the output, and 0 otherwise. As soon as g1, g2, ..., g_{n0} are specified, the convolutional code is uniquely determined. We also define the generator matrix of the convolutional code as the n0 x k0 L matrix G whose rows are g1, g2, ..., g_{n0}. For example, for the convolutional code depicted in Figure 8.17 we have

        [ 0 0 1 0 0 0 1 0 ]
    G = [ 0 0 0 0 1 0 0 0 ]
        [ 0 0 0 0 0 1 1 1 ]

It is helpful to assume that the shift register that generates the convolutional code is loaded with 0's before the first information bit enters it (i.e., the encoder is initially in the zero state) and that the information-bit sequence is padded with (L-1)k0 0's to bring the convolutional encoder back to the all-0 state. We also assume that the length of the information-bit sequence (the input to the convolutional encoder) is a multiple of k0; if it is not, we pad it with 0's such that the resulting length is a multiple of k0. This is done before adding the (L-1)k0 0's indicated earlier in this paragraph. If, after the first zero-padding, the length of the input sequence is n k0, the length of the output sequence
Representation of Convolutional C o d e s W e have seen that a convolutional c o d e can be represented either by the structure of the encoder or G . therefore. Thus. We have also seen that a convolutional encoder can b e . . In this case it is sufficient to add one 0.1) x 3 = 36 The zero-padding required to make sure that the encoder starts from the all-0 state and returns to the all-0 state adds (Z.kQ. Therefore. since we have 0 0 0 0 1 0 1 0 0 0 0 0 1 0 0 0 0 0 0 ! I 1 1110 G = 0 0 we obtain no = 3 and L = 4 {this is also obvious from Figure 8.m.^ I M B I ^ j z ^ J L f J J d ^ U I l d ^ Illustrative Problem 8.t 0 0 0 0 0 0 1).374 CHAPTER S. output=cnv. the length of the information sequence is 17. we have the following information sequence i 00 1 1 100 1 10 0 0 0 Now. which gives a length of 18. the sequence under study becomes 0000001001 1 1001 1000011 10000000 Using the function cnv_encd. k0=2.17). CHANNEL CAPACITY AND CODING . the output sequence is found to be 0 0 0 0 0 1 10 1 1 1 1 1 0 1 0 1 1 1 0 0 1 1 0 1 0 0 1 0 0 1 1 1 i l l The M A T L A B script to solve this problem is given below. m Here. input=[1 0 0 1 1 1 0 0 1 1 0 0 0 0 1 1 1]. 18 ( _ + 4 . g=[0 0 1 0 1 0 0 1 .input). therefore.13 [Convolutional encoder] Determine the output of the convolutional encoder shown in Figure 8.encd{g. which is not a multiple of ko = 2. extra zero-padding will be done. 00.01. Consider the convolutional code with ICQ = 1. Obviously. The trellis diagram for this code is shown in Figure 8. for each clock cycle and branches corresponding to transitions between these states. no — 2.19: The trellis diagram for the convolutional code shown in Figure 8. therefore. and d. Channel Coding 375 represented as a finite-state machine and. c. In order to draw the trellis diagram for this code. Therefore. this code can be represented by a finite-state machine with four states corresponding to different possible contents of the first two elements of the shift register.19. 
on the time axis. namely.8. no — 2. and 11. the four states are denoted by black dots.18.18: A convolutional encoder with ko — I. a trellis diagram is a sequence of 2(i. *o = ' Figure 8.18. A trellis diagram is a state transition diagram plotted versus time. and the transitions between states are indicated .-i)*n s t a t e s > shown as dots. which corresponds to clock cycles. respectively. we have to draw four dots corresponding to each state for each clock cycle and then connect them according to various transitions that can take place between states. can be described by a state transition diagram representing the finite-state machine. and L = 3 shown in Figure 8. and L = 3. Let us represent these four states by the letters a.3. As we can see in Figure 8. b.19. 10. A more widely used method for representation of convolutional codes is in terms of their trellis diagram. Figure 8. and return to the all-0 state. where a denotes the number of l's in the output bit sequence and /? is the number of l ' s in the corresponding input sequence for that branch. any codeword of a convolutional encoder corresponds to a path through the trellis that starts from the all-0 state and returns to the all-0 state. To obtain the transfer function of the convolutional code. Also note that we always start from the all-0 state (state a). the number of states is 2 6 = 64. the exponent of D shows the number of 1 's in the codeword corresponding to that path (or equivalentiy the Hamming distance of the codeword with the all-0 codeword). 7) corresponds to a path through the trellis starting from the all-0 state and ending at the ali-0 state. The Transfer Function of a ConvolutionaJ Code For each convolutional code the transfer function gives information about the various paths through the trellis that start from the all-0 state and return to this state for the first time. All the other states are denoted as in-between states. J).18 is given by T(D. 
by branches connecting these dots. On each branch connecting two states, two binary symbols indicate the encoder output corresponding to that transition. Also note that we always start from the all-0 state (state a), move through the trellis following the branches corresponding to the given input sequence, and return to the all-0 state. According to the coding convention described before, any codeword of a convolutional encoder corresponds to a path through the trellis that starts from the all-0 state and returns to the all-0 state; therefore, codewords of a convolutional code correspond to paths through the corresponding trellis. The number of states in the trellis increases exponentially with the constraint length of the convolutional code; for example, for the convolutional encoder shown in Figure 8.17, the number of states is 2^6 = 64, and the structure of the trellis is much more complex.

The Transfer Function of a Convolutional Code

For each convolutional code the transfer function gives information about the various paths through the trellis that start from the all-0 state and return to this state for the first time. To obtain the transfer function of the convolutional code, we split the all-0 state into two states, one denoting the starting state and one denoting the first return to the all-0 state; any self-loop at the all-0 state is ignored. All the other states are denoted as in-between states. Corresponding to each branch connecting two states, a function of the form D^a N^b J is defined, where a denotes the number of 1's in the output bit sequence for that branch and b is the number of 1's in the corresponding input sequence. To obtain the transfer function of a convolutional code, we can then use all the rules that can be used to obtain the transfer function of a flow graph. The transfer function of the convolutional code is the transfer function of the flow graph between the starting all-0 state and the final all-0 state and is denoted by T(D, N, J). As we will see later, the transfer function of a convolutional code plays a major role in bounding the error probability of the code. For more details on deriving the transfer function of a convolutional code, see [1].

Each term of T(D, N, J) corresponds to a path through the trellis starting from the all-0 state and ending at the all-0 state. The exponent of J indicates the number of branches spanned by that path; the exponent of D shows the number of 1's in the codeword corresponding to that path (or, equivalently, the Hamming distance of the codeword from the all-0 codeword); and the exponent of N indicates the number of 1's in the input information sequence. Following the rules for deriving the transfer function, it is easy to show that the transfer function of the code shown in Figure 8.18 is given by

    T(D, N, J) = D^5 N J^3 / ( 1 - D N J - D N J^2 )

which, when expanded, can be expressed as

    T(D, N, J) = D^5 N J^3 + D^6 N^2 J^4 + D^6 N^2 J^5 + D^7 N^3 J^5 + ...

From the above expression for T(D, N, J), we can see that there exists one codeword with Hamming weight 5,
The transfer function of the convolutional code is, therefore, the transfer function of the flow graph between the starting all-0 state and the final all-0 state and is denoted by T(D, N, J). The smallest power of D in the expansion of T(D, N, J) is called the free distance of the convolutional code and is denoted by d_free. In this example, d_free = 5.

Decoding of Convolutional Codes

There exist many algorithms for decoding of convolutional codes. The Viterbi algorithm is probably the most widely used decoding method for convolutional codes; upon receiving the channel output, it searches through the trellis to find the path that is most likely to have generated the received sequence. This algorithm is particularly interesting because it is a maximum-likelihood decoding algorithm. If hard-decision decoding is used, this algorithm finds the path that is at the minimum Hamming distance from the received sequence, and if soft-decision decoding is employed, the Viterbi algorithm finds the path that is at the minimum Euclidean distance from the received sequence.

In hard-decision decoding, the channel is binary and memoryless (the fact that the channel is memoryless follows from the fact that the channel noise is assumed to be white). We want to choose a path through the trellis whose codeword, denoted by c, is at minimum Hamming distance from the quantized received sequence y. Since the desired path starts from the all-0 state and returns to the all-0 state, we assume that this path spans a total of m branches, and since each branch corresponds to n0 bits of the encoder output, the total number of bits in each of c and y is mn0. We denote the sequence of bits corresponding to the ith branch by c_i and y_i, where 1 <= i <= m and each c_i and y_i is of length n0. The Hamming distance between c and y is, therefore,

d(c, y) = sum over i = 1 to m of d(c_i, y_i)   (8.33)

In soft-decision decoding we have a similar situation, with three differences:

1. Instead of y we are dealing directly with the vector r, the vector output of the optimal (matched filter type or correlator type) digital demodulator.

2. Instead of the binary 0, 1 sequence c, we are dealing with the corresponding sequence c' whose components are c'_ij = -sqrt(Ec) if c_ij = 0 and c'_ij = +sqrt(Ec) if c_ij = 1, for 1 <= i <= m and 1 <= j <= n0.

3. Instead of Hamming distance we are using Euclidean distance. This is a consequence of the fact that the channel under study is an additive white Gaussian noise channel.

From the above we have

d_E^2(c', r) = sum over i = 1 to m of d_E^2(c'_i, r_i)   (8.34)

The important fact that makes this problem easy to solve is that the distance between the received sequence and a path through the trellis, in both cases of interest, can be written as the sum of distances corresponding to the individual branches of the path. This is easily observed from Equations (8.33) and (8.34).

Now let us assume that we are dealing with a convolutional code with k0 = 1. This means that there are only two branches entering each state in the trellis. If the optimal path at a certain point passes through state S, there are two paths that connect the previous states S1 and S2 to this state (see Figure 8.20). If we want to see which one of these two branches is a good candidate to minimize the overall distance, we have to add the overall (minimum) metrics at states S1 and S2 to the metrics of the branches connecting these two states to the state S. Then, obviously, the branch that has the minimum total metric accumulation up to state S is a candidate to be considered for the states after S, and the other branch is simply not a suitable candidate and is deleted. This branch is called a survivor at state S. After the survivor at state S is determined, we also save the minimum metric up to this state, and we can move to the next state. This procedure is continued until we reach the all-0 state at the end of the trellis. For cases where k0 > 1, the only difference is that at each stage we have to choose one survivor path from among the 2^k0 branches entering each state.

Figure 8.20: Justification of the Viterbi algorithm.
From Equations (8.33) and (8.34), we see that the generic form of the problem we have to solve is: given a vector a, find a path through the trellis starting at the all-0 state and ending at the all-0 state such that some distance measure between a and the sequence b corresponding to that path is minimized. The above procedure can be summarized in the following algorithm, known as the Viterbi algorithm.

1. Parse the received sequence into m subsequences, each of length n0.

2. Draw a trellis of depth m for the code under study. For the last L - 1 stages of the trellis, draw only the paths corresponding to the all-0 input sequences. (This is done because we know that the input sequence has been padded with k0(L - 1) 0's.)

3. Set l = 1 and set the metric of the initial all-0 state equal to 0.

4. Find the distance of the lth subsequence of the received sequence to all branches connecting the lth-stage states to the (l + 1)st-stage states of the trellis.

5. Add these distances to the metrics of the lth-stage states to obtain the metric candidates for the (l + 1)st-stage states. For each state of the (l + 1)st stage, there are 2^k0 candidate metrics, each corresponding to one branch ending at that state.

6. For each state at the (l + 1)st stage, choose the minimum of the candidate metrics, label the branch corresponding to this minimum value as the survivor, and assign the minimum of the metric candidates as the metric of the (l + 1)st-stage state.

7. If l = m, go to the next step; otherwise, increase l by 1 and go to step 4.

8. Starting with the all-0 state at the (m + 1)st stage, go back through the trellis along the survivors to reach the initial all-0 state. This path is the optimal path, and the input-bit sequence corresponding to it is the maximum-likelihood decoded information sequence. To obtain the best guess about the input-bit sequence, remove the last k0(L - 1) 0's from this sequence.

As we can see from the above algorithm, the decoding cannot be started until the whole sequence (which in the case of convolutional codes can be very long) is received, and the total surviving paths have to be stored. The resulting decoding delay and the amount of memory required for decoding a long information sequence are unacceptable in practice. A suboptimal solution that does not cause these problems is therefore desirable. One such approach, which is referred to as path memory truncation, is that the decoder at each stage searches only delta stages back in the trellis and not to the start of the trellis. With this approach, at the (delta + 1)st stage the decoder makes a decision on the input bits corresponding to the first stage of the trellis (the first k0 bits), and future received bits do not change this decision. It is then required to keep only the surviving paths corresponding to the last delta stages, and the decoding delay is k0*delta bits. Computer simulations have shown that if delta >= 5L, the degradation in performance due to path memory truncation is negligible.

Illustrative Problem 8.14 [Viterbi decoding] Let us assume that in hard-decision decoding, the quantized received sequence is

y = 01101111010001

The convolutional code is the one given in Figure 8.18. Find the maximum-likelihood information sequence and the number of errors.

The code is a (2, 1) code with L = 3. The length of the received sequence y is 14. This means that m = 7, and we have to draw a trellis of depth 7. Also note that since the input information sequence is padded with k0(L - 1) = 2 zeros, for the final two stages of the trellis we will draw only the branches corresponding to all-0 inputs. This also means that the actual length of the input sequence is 5, which, after padding with two 0's, has increased to 7. The trellis diagram for this case is shown in Figure 8.21, together with the parsed received sequence y. Note that in drawing the trellis in the last two stages, we have considered only the zero inputs to the encoder (notice, therefore, that in the final two stages there exist no dashed lines corresponding to 1 inputs).

Now the metric of the initial all-0 state is set to 0 and the metrics of the next stage are computed. In this step there is only one branch entering each state, so there is no comparison, and the metrics (which are the Hamming distances between that part of the received sequence and the branches of the trellis) are added to the metric of the previous state. In the next stage there exists no comparison either. In the fourth stage, for the first time, we have two branches entering each state. This means that a comparison has to be made here, and survivors are to be chosen. From the two branches that enter each state, the one that corresponds to the least total accumulated metric remains as a survivor, and the other branches are deleted (marked by a small circle on the trellis). If at any stage two paths result in the same metric, each one of them can be a survivor; such cases have been marked by a question mark in the trellis diagram. The procedure is continued to the final all-0 state of the trellis; then, starting from that state, we move along the surviving paths to the initial all-0 state. This path, which is denoted by a heavy line through the trellis, is the optimal path. The input-bit sequence corresponding to this path is 1100000, where the last two 0's are not information bits but were added to return the encoder to the all-0 state. Therefore, the information sequence is 11000. The corresponding codeword for the selected path is 11101011000000, which is at Hamming distance 4 from the received sequence. All other paths through the trellis correspond to codewords that are at greater Hamming distance from the received sequence.

Figure 8.21: The trellis diagram for Viterbi decoding of the sequence (01101111010001).

For soft-decision decoding a similar procedure is followed, with squared Euclidean distances substituted for Hamming distances. The MATLAB function viterbi.m given below employs the Viterbi algorithm to decode a channel output. This algorithm can be used both for soft-decision and hard-decision decoding of convolutional codes. The separate file metric.m defines the metric used in the decoding process. For hard-decision decoding this metric is the Hamming distance, and for soft-decision decoding it is the Euclidean distance. For cases where the channel output is quantized, the metric is usually the negative of the log-likelihood, -log p(channel output | channel input).
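The decoding steps of Illustrative Problem 8.14 can be reproduced in a compact sketch (Python rather than the book's MATLAB; the generator rows [1,0,1] and [1,1,1] are an assumption consistent with the codeword 11101011000000 quoted above, and all function names are our own):

```python
# Hard-decision Viterbi decoding for a rate-1/2, L = 3 convolutional code.
G = [[1, 0, 1], [1, 1, 1]]   # assumed generators for the encoder of Figure 8.18

def branch_output(state, u):
    """Encoder output for input bit u; state holds (u_{t-1}, u_{t-2}) as 2 bits."""
    regs = [u, state >> 1, state & 1]
    return [sum(g * r for g, r in zip(row, regs)) % 2 for row in G]

def viterbi_hard(received, m):
    INF = 10**9
    metric = [0, INF, INF, INF]          # start in the all-0 state
    paths = {0: []}
    for i in range(m):
        r = received[2*i:2*i + 2]        # parse into subsequences of length n0 = 2
        new_metric, new_paths = [INF] * 4, {}
        inputs = (0, 1) if i < m - 2 else (0,)   # tail: only all-0 inputs
        for s in range(4):
            if metric[s] == INF:
                continue
            for u in inputs:
                out = branch_output(s, u)
                ns = (u << 1) | (s >> 1)          # next state
                cand = metric[s] + sum(a != b for a, b in zip(out, r))
                if cand < new_metric[ns]:         # keep the survivor at each state
                    new_metric[ns] = cand
                    new_paths[ns] = paths[s] + [u]
        metric, paths = new_metric, new_paths
    return paths[0][:m - 2], metric[0]   # strip the k0(L-1) padded zeros

y = [0,1,1,0,1,1,1,1,0,1,0,0,0,1]
info, dist = viterbi_hard(y, 7)          # info = [1,1,0,0,0], dist = 4
```

Under these assumptions the sketch reproduces the result of the worked example: information sequence 11000 with accumulated Hamming metric 4.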
A number of short m-files called by viterbi.m are also given below.

function [decoder_output,survivor_state,cumulated_metric]=viterbi(G,k,channel_output)
%VITERBI  The Viterbi decoder for convolutional codes.
%         [decoder_output,survivor_state,cumulated_metric]=viterbi(G,k,channel_output)
%         G is a n x Lk matrix, each row of which determines the connections from
%         the shift register to the n-th output of the code; k/n is the rate of the code.
%         survivor_state is a matrix showing the optimal path through the trellis.
%         The metric is given in a separate function metric(x,y) and can be
%         specified to accommodate hard and soft decision.
%         This algorithm minimizes the metric rather than maximizing the likelihood.
n=size(G,1);
% check the sizes
if rem(size(G,2),k) ~= 0
  error('Size of G and k do not agree')
end
if rem(size(channel_output,2),n) ~= 0
  error('channel output not of the right size')
end
L=size(G,2)/k;
number_of_states=2^((L-1)*k);
% generate state transition matrix, output matrix, and input matrix
for j=0:number_of_states-1
  for l=0:2^k-1
    [next_state,memory_contents]=nxt_stat(j,l,L,k);
    input(j+1,next_state+1)=l;
    branch_output=rem(memory_contents*G',2);
    nextstate(j+1,l+1)=next_state;
    output(j+1,l+1)=bin2deci(branch_output);
  end
end
state_metric=zeros(number_of_states,2);
depth_of_trellis=length(channel_output)/n;
channel_output_matrix=reshape(channel_output,n,depth_of_trellis);
survivor_state=zeros(number_of_states,depth_of_trellis+1);
% start decoding of non-tail channel outputs
for i=1:depth_of_trellis-L+1
  flag=zeros(1,number_of_states);
  if i <= L
    step=2^((L-i)*k);
  else
    step=1;
  end
  for j=0:step:number_of_states-1
    for l=0:2^k-1
      branch_metric=0;
      binary_output=deci2bin(output(j+1,l+1),n);
      for ll=1:n
        branch_metric=branch_metric+metric(channel_output_matrix(ll,i),binary_output(ll));
      end
      if ((state_metric(nextstate(j+1,l+1)+1,2) > state_metric(j+1,1)+branch_metric) ...
          | flag(nextstate(j+1,l+1)+1)==0)
        state_metric(nextstate(j+1,l+1)+1,2) = state_metric(j+1,1)+branch_metric;
        survivor_state(nextstate(j+1,l+1)+1,i+1)=j;
        flag(nextstate(j+1,l+1)+1)=1;
      end
    end
  end
  state_metric=state_metric(:,2:-1:1);
end
% start decoding of the tail channel outputs
for i=depth_of_trellis-L+2:depth_of_trellis
  flag=zeros(1,number_of_states);
  last_stop=number_of_states/(2^((i-depth_of_trellis+L-2)*k));
  for j=0:last_stop-1
    branch_metric=0;
    binary_output=deci2bin(output(j+1,1),n);
    for ll=1:n
      branch_metric=branch_metric+metric(channel_output_matrix(ll,i),binary_output(ll));
    end
    if ((state_metric(nextstate(j+1,1)+1,2) > state_metric(j+1,1)+branch_metric) ...
        | flag(nextstate(j+1,1)+1)==0)
      state_metric(nextstate(j+1,1)+1,2) = state_metric(j+1,1)+branch_metric;
      survivor_state(nextstate(j+1,1)+1,i+1)=j;
      flag(nextstate(j+1,1)+1)=1;
    end
  end
  state_metric=state_metric(:,2:-1:1);
end
% generate the decoder output from the optimal path
state_sequence=zeros(1,depth_of_trellis+1);
state_sequence(1,depth_of_trellis)=survivor_state(1,depth_of_trellis+1);
for i=1:depth_of_trellis
  state_sequence(1,depth_of_trellis-i+1)=survivor_state((state_sequence(1,depth_of_trellis+2-i)+1),...
  depth_of_trellis-i+2);
end
decoder_output_matrix=zeros(k,depth_of_trellis-L+1);
for i=1:depth_of_trellis-L+1
  dec_output_deci=input(state_sequence(1,i)+1,state_sequence(1,i+1)+1);
  dec_output_bin=deci2bin(dec_output_deci,k);
  decoder_output_matrix(:,i)=dec_output_bin(k:-1:1)';
end
decoder_output=reshape(decoder_output_matrix,1,k*(depth_of_trellis-L+1));
cumulated_metric=state_metric(1,1);

function distance=metric(x,y)
if x==y
  distance=0;
else
  distance=1;
end

function [next_state,memory_contents]=nxt_stat(current_state,input,L,k)
binary_state=deci2bin(current_state,k*(L-1));
binary_input=deci2bin(input,k);
next_state_binary=[binary_input,binary_state(1:(L-2)*k)];
next_state=bin2deci(next_state_binary);
memory_contents=[binary_input,binary_state];

function y=bin2deci(x)
l=length(x);
y=(l-1:-1:0);
y=2.^y;
y=x*y';

function y=deci2bin(x,l)
y=zeros(1,l);
i=1;
while x>=0 & i<=l
  y(i)=rem(x,2);
  x=(x-y(i))/2;
  i=i+1;
end
y=y(l:-1:1);

Illustrative Problem 8.15 [Viterbi decoding] Repeat Illustrative Problem 8.14 using the MATLAB function viterbi.m.

It is enough to use the m-file viterbi.m with the inputs

G = [1 0 1; 1 1 1],   k = 1

channel_output = [0 1 1 0 1 1 1 1 0 1 0 0 0 1]

which results in

decoder_output = [1 1 0 0 0]

and an accumulated metric of 4.

Error Probability Bounds for Convolutional Codes

Finding bounds on the error performance of convolutional codes is different from the method used to find error bounds for block codes, because here we are dealing with sequences of very large length. Since the length of the input sequence is usually large, sooner or later some errors will occur; the longer the input sequence, the higher the probability of making errors. Therefore it makes sense to normalize the number of bit errors to the length of the input sequence. The number of errors is a random variable that depends both on the channel characteristics (the signal-to-noise ratio in soft-decision decoding and the crossover probability in hard-decision decoding) and the length of the input sequence. A measure that is usually adopted for comparing the performance of convolutional codes is the expected number of bits received in error per input bit. To find a bound on this quantity, we first derive a bound on the average number of bits in error for each input sequence of length k.

Let us assume that the all-0 sequence is transmitted* and that, up to stage l in the decoding, there has been no error; that is, up to this stage the all-0 path through the trellis has the minimum metric. Now k information bits enter the encoder and result in moving to the next stage in the trellis. We are interested in finding a bound on the expected number of errors that can occur due to this input block of length k. Moving to the (l + 1)st stage, it is possible that another path through the trellis will have a metric less than the all-0 path and therefore cause errors. If this happens, we must have a path through the trellis that merges with the all-0 path for the first time at the (l + 1)st stage and has a metric less than the all-0 path. Such an event is called the first error event, and the corresponding probability is called the first error event probability. This situation is depicted in Figure 8.22. Our first step is bounding the first error event probability.

Figure 8.22: The path corresponding to the first error event.

*Because of the linearity of convolutional codes, we can, without loss of generality, make this assumption.

Let P2(d) denote the probability that a path through the trellis that is at Hamming distance d from the all-0 path is the survivor at the (l + 1)st stage. We can then bound the first error event probability by

P_e <= sum over d = d_free to infinity of a_d P2(d)

where on the right-hand side we have included all paths through the trellis that merge with the all-0 path at the (l + 1)st stage; a_d denotes the number of paths at Hamming distance d from the all-0 path, and P2(d) denotes the error probability for a path at Hamming distance d from the all-0 path. The value of P2(d) depends on whether soft- or hard-decision decoding is employed. For soft-decision decoding, if antipodal signaling (binary PSK) is used, we have

P2(d) = Q( sqrt(2 gamma_b R_c d) )

Using the well-known upper bound on the Q-function

Q(x) <= (1/2) e^(-x^2/2)

we obtain

P2(d) <= (1/2) e^(-gamma_b R_c d)   (8.35)

Therefore,

P_e <= (1/2) T1(D) evaluated at D = e^(-R_c gamma_b)

where T1(D) = T(D, N, J) with N = J = 1.

To find a bound on the average number of bits in error for k input bits, we note that each path through the trellis causes a certain number of input bits to be decoded erroneously. For a general term a_d D^d N^f(d) in the expansion of T(D, N, J) with J = 1, there is a total of f(d) nonzero input bits. Here we are somewhat sloppy in notation: the power of N is not strictly a function of d, but we are denoting it by f(d); this, however, does not have any effect on the final result. The average number of input bits in error is obtained by multiplying the probability of choosing each path by the total number of input errors that would result if that path were chosen. Hence, the average number of bits in error for k input bits, Pb(k), can be bounded by

Pb(k) <= sum over d = d_free to infinity of a_d f(d) P2(d)
      <= (1/2) sum over d = d_free to infinity of a_d f(d) e^(-gamma_b R_c d)   (8.36)

If we define

T2(D, N) = T(D, N, J) with J = 1 = sum over d of a_d D^d N^f(d)

then

dT2(D, N)/dN = sum over d of a_d f(d) D^d N^(f(d)-1)

and, using (8.35) and (8.36), we obtain

Pb(k) <= (1/2) dT2(D, N)/dN evaluated at N = 1, D = e^(-gamma_b R_c)

To obtain the average number of bits in error for each input bit, we have to divide this bound by k. Thus, in the soft-decision case, the final result is

Pb <= (1/2k) dT2(D, N)/dN evaluated at N = 1, D = e^(-gamma_b R_c)

For hard-decision decoding, the basic procedure follows the above derivation; the only difference is the bound on P2(d). It can be shown (see [1]) that P2(d) can be bounded by

P2(d) <= [4p(1 - p)]^(d/2)

Using this result, it is straightforward to show that in hard-decision decoding the probability of error is upper-bounded as

Pb <= (1/k) dT2(D, N)/dN evaluated at N = 1, D = sqrt(4p(1 - p))

A comparison of hard-decision decoding and soft-decision decoding for convolutional codes shows that here, as in the case of linear block codes, soft-decision decoding outperforms hard-decision decoding by a margin of roughly 2 dB in additive white Gaussian noise channels.
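For the code of Figure 8.18, T2(D, N) = D^5 N / (1 - 2DN), so dT2/dN at N = 1 equals D^5/(1 - 2D)^2, and the soft-decision bound can be evaluated numerically. The following sketch (Python; function names and the chosen SNR grid are our own) plugs D = e^(-R_c gamma_b) into that expression:

```python
import math

def pb_soft_bound(gamma_b, Rc=0.5, k=1):
    """Soft-decision bit-error bound Pb <= (1/2k) * D^5/(1-2D)^2, D = exp(-Rc*gamma_b).

    Valid for the rate-1/2, L = 3 code of Figure 8.18, whose transfer function
    gives dT2(D,N)/dN |_{N=1} = D^5/(1-2D)^2 (series converges only for D < 1/2).
    """
    D = math.exp(-Rc * gamma_b)
    assert D < 0.5, "bound diverges: need Rc*gamma_b > ln 2"
    return D**5 / (1 - 2 * D)**2 / (2 * k)

# evaluate at a few values of Eb/N0 (in dB, converted to a linear scale)
snr_db = [4, 6, 8, 10]
bounds = [pb_soft_bound(10 ** (x / 10)) for x in snr_db]
```

As expected, the bound decreases monotonically with increasing SNR, falling off roughly as e^(-5 R_c gamma_b) because the dominant term is the d_free = 5 path.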
Problems

8.1 Write a MATLAB script to plot the capacity of a binary symmetric channel with crossover probability p as a function of p for 0 <= p <= 1. For what value of p is the capacity minimized, and what is the minimum value?

8.2 A binary nonsymmetric channel is characterized by the conditional probabilities p(0|1) = 0.2 and p(1|0) = 0.3. Plot the mutual information I(X; Y) between the input and the output of this channel as a function of p = P(X = 1). For what value of p is the mutual information maximized, and what is the value of this maximum?

8.3 A Z-channel is a binary input, binary output channel with input and output alphabets X = Y = {0, 1}, characterized by p(0|1) = e and p(1|0) = 0. Plot I(X; Y) as a function of p = P(X = 1) for several values of e, and determine the channel capacity in each case.

8.4 A binary input, ternary output channel is characterized by the input and output alphabets X = {0, 1} and Y = {0, 1, 2} and a given set of transition probabilities. Plot I(X; Y) as a function of p = P(X = 1), and determine the channel capacity.

8.5 A ternary input, binary output channel is characterized by the input and output alphabets X = {0, 1, 2} and Y = {0, 1} and a given set of transition probabilities. Plot I(X; Y) as a function of p1 = P(X = 1) and p2 = P(X = 2), and determine the channel capacity.

8.6 Plot the capacity of a binary symmetric channel that employs binary orthogonal signaling as a function of Eb/N0. Compare these results with those obtained for antipodal signals.

8.7 Repeat Illustrative Problem 8.3, but assume that the two transmitted signals are equal-energy and orthogonal. How do your results differ from those obtained in Illustrative Problem 8.3?

8.8 Compare the plots of the capacity for hard decision and soft decision when orthogonal signals are employed.

8.9 Plot the capacity of a binary symmetric channel that uses orthogonal signals. Do this once with the assumption of coherent detection and once with the assumption of noncoherent detection. Show the two plots on the same figure and compare the results.

8.10 Write a MATLAB script that generates the generator matrix of a Hamming code in the systematic form for any given m.

8.11 Repeat Illustrative Problem 8.10 using orthogonal signaling with coherent and noncoherent detection.

8.12 Use Monte Carlo simulation to plot the error probability versus gamma_b, once with hard-decision and once with soft-decision decoding. Assume that the modulation scheme is binary antipodal, let gamma_b be in the interval from 3 to 9 dB, and choose the length of your information sequence appropriately. Plot the results on the same figure and compare your results with the theoretical bounds.

8.13 Repeat Illustrative Problem 8.12, but instead of orthogonal and antipodal signaling compare the performance of coherent and noncoherent demodulation of orthogonal signals under soft-decision decoding.

8.14 Use Monte Carlo simulation to plot the error probability versus gamma_b in Problem 8.13.

8.15 Use MATLAB to find the output of the convolutional encoder shown in Figure 8.18 when the input sequence is 110010101010010111101011111010.

8.16 A convolutional code is described by

G = [1 0 1 1 1 1 1 0 1 1 0 1 1]

a. If k = 1, determine the output of the encoder when the input sequence is 110010101010010111101011111010.
b. Repeat part a with k = 2.

8.17 In Problem 8.16, after obtaining the output of the encoder, change the first 6 bits of the received sequence and decode the result using Viterbi decoding. Compare the decoder output with the transmitted sequence. How many errors have occurred? Repeat the problem once by changing the last 6 bits in the received sequence and once by changing the first 3 bits and the last 3 bits, and compare the results. In all cases the Hamming metric is used.

8.18 Generate an equiprobable binary sequence of length 1000. Encode the sequence using the convolutional code shown in Figure 8.18. Generate four random binary error sequences, each of length 2000, with probability of 1 equal to 0.01, 0.05, 0.1, and 0.2, respectively. Add (modulo 2) each of these error sequences to the encoded sequence and use the Viterbi algorithm to decode the result. In each case compare the decoded sequence with the encoder input and determine the bit-error rate.

8.19 Use Monte Carlo simulation to plot the bit-error rate versus gamma_b in a convolutional encoder using the code shown in Figure 8.18.

8.20 The encoder of Figure 8.18 is used to transmit information over a channel with two inputs and three outputs. This is the case where the output of a Gaussian channel is quantized to three levels. The outputs are denoted by 0, 1, and 2. The conditional probabilities of the channel are given by p(0|0) = p(1|1) = 0.9, p(1|0) = p(0|1) = 0.01, and p(2|0) = p(2|1) = 0.09. Use the Viterbi algorithm to decode the received sequence

02020110202112002220110101020220111112

Chapter 9

Spread Spectrum Communication Systems

9.1 Preview

Spread spectrum signals for digital communications were originally developed and used for military communications either (1) to provide resistance to hostile jamming, (2) to hide the signal by transmitting it at low power, thus making it difficult for an unintended listener to detect its presence in noise, or (3) to make it possible for multiple users to communicate through the same channel. Today, however, spread spectrum signals are being used to provide reliable communications in a variety of commercial applications, including mobile vehicular communications and interoffice wireless communications.

The basic elements of a spread spectrum digital communication system are illustrated in Figure 9.1. We observe that the channel encoder and decoder and the modulator and demodulator are the basic elements of a conventional digital communication system. In addition to these elements, a spread spectrum system employs two identical pseudorandom sequence generators, one of which interfaces with the modulator at the transmitting end and the second of which interfaces with the demodulator at the receiving end. These two generators produce a pseudorandom or pseudonoise (PN) binary-valued sequence that is used to spread the transmitted signal in frequency at the modulator and to despread the received signal at the demodulator.

Time synchronization of the PN sequence generated at the receiver with the PN sequence contained in the received signal is required to properly despread the received spread spectrum signal. In a practical system, synchronization is established prior to the transmission of information by transmitting a fixed PN bit pattern that is designed so that the receiver will detect it with high probability in the presence of interference. After time synchronization of the PN sequence generators is established, the transmission of information commences. In the data mode, the communication system usually tracks the timing of the incoming received signal and keeps the PN sequence generator in synchronism.

In this chapter we consider two basic types of spread spectrum signals for digital communications, namely, direct-sequence (DS) spread spectrum and frequency-hopped (FH) spread spectrum. Two types of digital modulation are considered in conjunction with spread spectrum, namely, PSK and FSK. PSK modulation is generally used with DS spread spectrum and is appropriate for applications where phase coherence between the transmitted signal and the received signal can be maintained over a time interval that spans several symbol (or bit) intervals. FSK modulation is commonly used with FH spread spectrum and is appropriate in applications where phase coherence of the carrier cannot be maintained due to time variations in the transmission characteristics of the communications channel.

9.2 Direct-Sequence Spread Spectrum Systems

Let us consider the transmission of a binary information sequence by means of binary PSK. The information rate is R bits per second, and the bit interval is Tb = 1/R seconds. The available channel bandwidth is Bc hertz, where Bc >> R. At the modulator the bandwidth of the information signal is expanded to W = Bc hertz by shifting the phase of the carrier pseudorandomly at a rate of W times per second according to the pattern of the PN generator.
The resulting modulated signal is called a direct-sequence (DS) spread spectrum signal.

Figure 9.1: Model of spread spectrum digital communication system.

The information-bearing baseband signal is denoted as v(t) and is expressed as

v(t) = sum over n from -infinity to infinity of a_n g_T(t - nTb)   (9.2.1)

where {a_n = +/-1, -infinity < n < infinity} and g_T(t) is a rectangular pulse of duration Tb. This signal is multiplied by the signal from the PN sequence generator, which may be expressed as

c(t) = sum over n from -infinity to infinity of c_n p(t - nTc)   (9.2.2)

where {c_n} represents the binary PN code sequence of +/-1's and p(t) is a rectangular pulse of duration Tc, as illustrated in Figure 9.2. This multiplication operation serves to spread the bandwidth of the information-bearing signal (whose bandwidth is approximately R hertz) into the wider bandwidth occupied by the PN generator signal c(t) (whose bandwidth is approximately 1/Tc). The spectrum spreading is illustrated in Figure 9.3, which shows, in simple terms using rectangular spectra, the convolution of the two spectra: the narrow spectrum corresponding to the information-bearing signal and the wide spectrum corresponding to the signal from the PN generator.

Figure 9.2: Generation of a DS spread spectrum signal. (a) PN signal. (b) Data signal. (c) Product signal.

The product signal v(t)c(t), also illustrated in Figure 9.2, is used to amplitude-modulate the carrier Ac cos 2*pi*fc*t and, thus, to generate the DSB-SC signal

u(t) = Ac v(t) c(t) cos 2*pi*fc*t   (9.2.3)

Since v(t)c(t) = +/-1 for any t, it follows that the carrier-modulated transmitted signal may also be expressed as

u(t) = Ac cos[2*pi*fc*t + theta(t)]   (9.2.4)

Figure 9.3: Convolution of spectra of the (a) data signal with the (b) PN code signal.
Here θ(t) = 0 when v(t)c(t) = 1 and θ(t) = π when v(t)c(t) = −1.

As an example, let us consider a sinusoidal interfering signal of the form

i(t) = A_J cos 2πf_J t    (9.2.9)

where f_J is a frequency within the bandwidth of the transmitted signal. The effect of multiplying the interference i(t) with c(t) is to spread the bandwidth of i(t) to W hertz; the result is a wideband interference with power-spectral density J_0 = P_J/W, where P_J = A_J²/2 is the average power of the interference. Since the desired signal is demodulated by a matched filter (or correlator) that has a bandwidth R, the total power in the interference at the output of the demodulator is

J_0 R = P_J R / W = P_J / (W/R) = P_J / (T_b/T_c) = P_J / L_c    (9.2.10)

Therefore, the power in the interfering signal is reduced by an amount equal to the bandwidth expansion factor W/R. The factor W/R = T_b/T_c = L_c is called the processing gain of the spread spectrum system. The reduction in interference power is the basic reason for using spread spectrum signals to transmit digital information over channels with interference.

In summary, the PN code sequence is used at the transmitter to spread the information-bearing signal into a wide bandwidth for transmission over the channel. By multiplying the received signal with a synchronized replica of the PN code signal, the desired signal is despread back to a narrow bandwidth, whereas any interference signals are spread over a wide bandwidth. The net effect is a reduction in the interference power by the factor W/R, which is the processing gain of the spread spectrum system. The PN code sequence {c_n} is assumed to be known only to the intended receiver; any other receiver that does not have knowledge of the PN code sequence cannot demodulate the signal.

Figure 9.4: Demodulation of DS spread spectrum signal.
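The interference-power reduction in (9.2.10) can be checked numerically: correlating a sinusoidal jammer against a random chip sequence attenuates its power by roughly the processing gain L_c. A rough Monte Carlo sketch (the tone frequency, L_c = 100, and bit count are arbitrary choices):

```python
import math
import random

random.seed(2)
Lc = 100          # processing gain W/R
nbits = 2000
w0 = 1.0          # jammer frequency in radians per sample, arbitrary

# jammer power at the correlator input (time average of i(n)^2 for a unit tone)
jam = [math.sin(w0 * n) for n in range(nbits * Lc)]
p_in = sum(x * x for x in jam) / len(jam)          # ~ 1/2

# jammer power at the correlator output: multiply by PN chips,
# average over each bit interval (correlator normalized by Lc)
p_out = 0.0
for i in range(nbits):
    chips = [random.choice((-1, 1)) for _ in range(Lc)]
    corr = sum(jam[i * Lc + k] * chips[k] for k in range(Lc)) / Lc
    p_out += corr * corr
p_out /= nbits

reduction = p_in / p_out     # should be on the order of Lc = 100
```

The measured `reduction` fluctuates around L_c, in line with (9.2.10).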
The use of a PN code sequence provides a degree of privacy (or security) that is not possible to achieve with conventional modulation. The primary cost for this security and performance gain against interference is an increase in channel bandwidth utilization and in the complexity of the communication system.

9.2.2 Probability of Error

In an AWGN channel, the probability of error for a DS spread spectrum system employing binary PSK is identical to the probability of error for conventional (unspread) binary PSK:

P_b = Q( sqrt(2E_b/N_0) )    (9.2.11)

On the other hand, if the interference is the sinusoidal signal given by (9.2.9) with power P_J, its multiplication with c(t) results in a wideband interference with power-spectral density J_0 = P_J/W. In this case, the probability of error is (approximately)

P_b = Q( sqrt(2E_b/J_0) )    (9.2.12)

Here, we have ignored the AWGN, which is assumed to be negligible; i.e., N_0 << P_J/W. If we account for the AWGN in the channel, the error probability is expressed as

P_b = Q( sqrt( 2E_b/(N_0 + P_J/W) ) )    (9.2.13)

The Jamming Margin

When the interference signal is a jamming signal, we may express E_b/J_0 as

E_b/J_0 = (P_S/R)/(P_J/W) = (W/R)/(P_J/P_S)    (9.2.14)

Now, suppose we specify a required E_b/J_0 to achieve a desired performance. Then, using a logarithmic scale, we may express (9.2.14) as

10 log(P_J/P_S) = 10 log(W/R) − 10 log(E_b/J_0)

(P_J/P_S)_dB = (W/R)_dB − (E_b/J_0)_dB    (9.2.15)
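The decibel bookkeeping in (9.2.15) is a one-line computation. In the sketch below, the processing gain of 1000 and the required E_b/J_0 of 10 dB are illustrative values:

```python
import math

def jamming_margin_db(W_over_R, EbJ0_db):
    # (P_J/P_S)_dB = (W/R)_dB - (E_b/J_0)_dB, from (9.2.15)
    return 10 * math.log10(W_over_R) - EbJ0_db

margin = jamming_margin_db(1000, 10.0)   # 30 dB - 10 dB = 20 dB
```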
The ratio (P_J/P_S)_dB is called the jamming margin. This is the relative power advantage that a jammer may have without disrupting the communication system.

Illustrative Problem 9.1 Suppose that we require an E_b/J_0 = 10 dB to achieve reliable communication with binary PSK. Determine the processing gain that is necessary to provide a jamming margin of 20 dB.

Solution: By using (9.2.15), we find that the processing gain (W/R)_dB = 30 dB; i.e., W/R = L_c = 1000. This means that with W/R = 1000, the average jamming power at the receiver may be 100 times the power P_S of the desired signal, and we would still maintain reliable communication.

Performance of Coded Spread Spectrum Signals

As shown in Chapter 8, when the transmitted information is coded by a binary linear (block or convolutional) code, the SNR at the output of a soft-decision decoder is increased by the coding gain, defined as

Coding gain = R_c d_min    (9.2.16)

where R_c is the code rate and d_min is the minimum Hamming distance of the code. Therefore, the effect of the coding is to increase the jamming margin by the coding gain. Thus, (9.2.15) may be modified as

(P_J/P_S)_dB = (W/R)_dB + (CG)_dB − (E_b/J_0)_dB    (9.2.17)

where (CG)_dB denotes the coding gain in decibels.

9.2.3 Two Applications of DS Spread Spectrum Signals

In this subsection we briefly describe the use of DS spread spectrum signals in two applications. First, we consider an application in which the signal is transmitted at very low power, so that a listener would encounter great difficulty in trying to detect the presence of the signal. A second application is multiple-access radio communications.

Low-Detectability Signal Transmission

In this application the information-bearing signal is transmitted at a very low power level relative to the background channel noise and thermal noise that is generated in the front end of a receiver. If the DS spread spectrum signal occupies a bandwidth W and the power-spectral density of the additive noise is N_0 watts/hertz, the average noise power in the bandwidth W is P_N = W N_0. The average received signal power at the intended receiver is P_R.
If we wish to hide the presence of the signal from receivers that are in the vicinity of the intended receiver, the signal is transmitted at a power level such that P_R/P_N << 1. The intended receiver can recover the weak information-bearing signal from the background noise with the aid of the processing gain and the coding gain. However, any other receiver that has no knowledge of the PN code sequence is unable to take advantage of the processing gain and the coding gain. Consequently, the presence of the information-bearing signal is difficult to detect. We say that the transmitted signal has a low probability of being intercepted (LPI), and it is called an LPI signal. The probability of error given in Section 9.2.2 applies as well to the demodulation and decoding of LPI signals at the intended receiver.

Illustrative Problem 9.2 A DS spread spectrum signal is to be designed such that the power ratio at the intended receiver is P_R/P_N = 0.01 for an AWGN channel, and the desired value of E_b/N_0 for acceptable performance is 10. Let us determine the minimum value of the processing gain required to achieve this E_b/N_0.

Solution: We may write E_b/N_0 as

E_b/N_0 = P_R T_b / N_0 = P_R L_c T_c / N_0

Since N_0 = P_N/W and W = 1/T_c, we have N_0 = P_N T_c, and hence

E_b/N_0 = L_c (P_R/P_N)

Since E_b/N_0 = 10 and P_R/P_N = 10^−2, it follows that the necessary processing gain is L_c = 1000.

Code Division Multiple Access

The enhancement in performance obtained from a DS spread spectrum signal through the processing gain and the coding gain can be used to enable many DS spread spectrum signals to occupy the same channel bandwidth, provided that each signal has its own pseudorandom (signature) sequence. Thus it is possible to have several users transmit messages simultaneously over the same channel bandwidth. This type of digital communication, in which each transmitter/receiver user pair has its own distinct signature code for transmitting over a common channel bandwidth, is called code division multiple access (CDMA).

In digital cellular communications, a base station transmits signals to N_u mobile receivers using N_u orthogonal PN sequences, one for each intended receiver. These N_u signals are perfectly synchronized at transmission, so that they arrive at each mobile receiver in synchronism. Consequently, due to the orthogonality of the N_u PN sequences, each intended receiver can demodulate its own signal without interference from the other transmitted signals that share the same bandwidth. However, this type of synchronism cannot be maintained in the signals transmitted from the mobile transmitters to the base station (the uplink, or reverse link).
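The synchronized-downlink case just described can be sketched with Walsh–Hadamard signature sequences (the particular orthogonal set is our choice for illustration; the text does not specify one). Two users share the channel, and each receiver's correlator sees zero contribution from the other user:

```python
def hadamard(n):
    # Sylvester construction: n orthogonal +/-1 rows, n a power of 2
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

H = hadamard(4)
w1, w2 = H[1], H[2]                 # orthogonal signature sequences, two users

b1 = [1, -1, -1, 1]                 # data bits, user 1
b2 = [-1, -1, 1, 1]                 # data bits, user 2
# synchronized downlink: chip-wise sum of the two spread signals
tx = [b1[i] * w1[k] + b2[i] * w2[k] for i in range(4) for k in range(4)]

# receiver 1 correlates with its own signature; user 2 contributes exactly zero
rx1 = [1 if sum(tx[4 * i + k] * w1[k] for k in range(4)) >= 0 else -1
       for i in range(4)]
```

On the uplink this clean separation is lost, which is why the residual cross correlations discussed next matter.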
However, orthogonality of the pseudorandom sequences among the N_u users generally is difficult to achieve, especially if N_u is large. In the demodulation of each DS spread spectrum signal at the base station, the signals from the other simultaneous users of the channel appear as additive interference. Let us determine the number of simultaneous signals that can be accommodated in a CDMA system. We assume that all signals have identical average powers at the base station. In many practical systems, the received signal power level from each user is monitored at the base station, and power control is exercised over all simultaneous users by use of a control channel that instructs the users on whether to increase or decrease their power levels. With such power control, if there are N_u simultaneous users, the desired signal-to-interference power ratio at a given receiver is

P_S / P_N = P_S / ((N_u − 1) P_S) = 1 / (N_u − 1)    (9.2.18)

Equivalently, since the interference from the other N_u − 1 users is spread over the bandwidth W, the effective bit-energy-to-interference-density ratio is

E_b/J_0 = (W/R) / (N_u − 1)    (9.2.19)

From this relation we can determine the number of users that can be accommodated simultaneously. In determining the maximum number of simultaneous users of the channel, we implicitly assumed that the pseudorandom code sequences used by the various users are orthogonal and that the interference from other users adds on a power basis only.
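From (9.2.19), the processing gain directly caps the number of simultaneous users. The values below (W/R = 1000 and a required E_b/J_0 of 10) are illustrative choices, not figures from the text:

```python
def max_cdma_users(W_over_R, EbJ0_required):
    # Eb/J0 = (W/R)/(Nu - 1)  =>  Nu = 1 + (W/R)/(Eb/J0), from (9.2.19)
    return 1 + int(W_over_R / EbJ0_required)

nu = max_cdma_users(1000, 10.0)   # uncoded system, illustrative numbers
```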
In fact, the design of a large set of pseudorandom sequences with good correlation properties is an important problem that has received considerable attention in the technical literature. We briefly treat this problem in Section 9.3.

Illustrative Problem 9.3 Suppose that the desired level of performance for a user in a CDMA system is achieved when E_b/J_0 = 10. Let us determine the maximum number of simultaneous users that can be accommodated in the CDMA system if the bandwidth-to-bit-rate ratio is 100 and the coding gain is 6 dB.

Solution: From the basic relationship given in (9.2.17), with P_J/P_S = N_u − 1, we have

10 log(N_u − 1) = (W/R)_dB + (CG)_dB − (E_b/J_0)_dB = 20 + 6 − 10 = 16 dB

Consequently, N_u − 1 = 10^1.6 ≈ 40; hence

N_u = 41 users

Illustrative Problem 9.4 The objective of this problem is to demonstrate the effectiveness of a DS spread spectrum signal in suppressing sinusoidal interference via Monte Carlo simulation. The block diagram of the system to be simulated is illustrated in Figure 9.5.

Figure 9.5: Model of DS spread spectrum system for Monte Carlo simulation.

Solution: A uniform random number generator (RNG) is used to generate a sequence of binary information symbols (±1). Each information bit is repeated L_c times, where L_c corresponds to the number of PN chips per information bit. The resulting sequence, which contains L_c repetitions per bit, is multiplied by a PN sequence c(n) generated by another uniform RNG. To this product sequence we add white Gaussian noise with variance σ² = N_0/2 and sinusoidal interference of the form i(n) = A sin ω_0 n, where 0 < ω_0 < π and the amplitude of the sinusoid is selected to satisfy A < L_c. The demodulator performs the cross correlation with the PN sequence and sums (integrates) the blocks of L_c signal samples that constitute each information bit.
The output of the summer is fed to the detector, which compares this signal with the threshold of zero and decides on whether the transmitted bit is +1 or −1. The error counter counts the number of errors made by the detector. The results of the Monte Carlo simulation are shown in Figure 9.6 for three different values of the amplitude of the sinusoidal interference, with L_c = 20. The variance of the additive noise was kept fixed in these simulations, and the level of the desired signal was scaled to achieve the desired SNR for each simulation run. Also shown in Figure 9.6 is the measured error rate when the sinusoidal interference is removed. The MATLAB scripts for the simulation program are given below.
% MATLAB script for Illustrative Problem 4, Chapter 9.
echo on
Lc=20;                          % number of chips per bit
A1=3;                           % amplitude of the first sinusoidal interference
A2=7;                           % amplitude of the second sinusoidal interference
A3=12;                          % amplitude of the third sinusoidal interference
A4=0;                           % fourth case: no interference
w0=1;                           % frequency of the sinusoidal interference in radians
SNRindB=0:2:30;
SNRindB4=0:1:8;
for i=1:length(SNRindB),
  % measured error rates
  smld_err_prb1(i)=ss_Pe94(SNRindB(i),A1,Lc,w0);
  smld_err_prb2(i)=ss_Pe94(SNRindB(i),A2,Lc,w0);
  smld_err_prb3(i)=ss_Pe94(SNRindB(i),A3,Lc,w0);
end;
for i=1:length(SNRindB4),
  % measured error rate when there is no interference
  smld_err_prb4(i)=ss_Pe94(SNRindB4(i),A4,Lc,w0);
end;
% plotting commands follow

function [p]=ss_Pe94(snr_in_dB,A,Lc,w0)
% [p]=ss_Pe94(snr_in_dB,A,Lc,w0)
%       SS_PE94 finds the measured error rate. The function
%       returns the measured probability of error for the given values of
%       snr_in_dB, A, Lc and w0.
snr=10^(snr_in_dB/10);
sgma=1;                         % noise standard deviation is fixed
Eb=2*sgma^2*snr;                % signal level required to achieve the given
                                % signal-to-noise ratio
E_chip=Eb/Lc;                   % energy per chip
N=10000;                        % number of bits transmitted
% The generation of the data, noise, interference, decoding process, and error
% counting are performed all together in order to decrease the run time of the
% program. This is accomplished by avoiding very large sized vectors.
num_of_err=0;
for i=1:N,
  % generate the next data bit
  temp=rand;
  if (temp<0.5),
    data=-1;
  else
    data=1;
  end;
  % repeat it Lc times, i.e., divide it into chips
  for j=1:Lc,
    repeated_data(j)=data;
  end;
  % PN sequence for the duration of the bit is generated next
  for j=1:Lc,
    temp=rand;
    if (temp<0.5),
      pn_seq(j)=-1;
    else
      pn_seq(j)=1;
    end;
  end;
  % the transmitted signal is
  trans_sig=sqrt(E_chip)*repeated_data.*pn_seq;
  % AWGN with variance sgma^2
  noise=sgma*randn(1,Lc);
  % interference
  n=(i-1)*Lc+1:i*Lc;
  interference=A*sin(w0*n);
  % received signal
  rec_sig=trans_sig+noise+interference;
  % determine the decision variable from the received signal
  temp=rec_sig.*pn_seq;
  decision_variable=sum(temp);
  % making decision
  if (decision_variable<0),
    decision=-1;
  else
    decision=1;
  end;
  % if it is an error, increment the error counter
  if (decision~=data),
    num_of_err=num_of_err+1;
  end;
end;
% then the measured error probability is
p=num_of_err/N;

Figure 9.6: Error-rate performance of the system from the Monte Carlo simulation.

9.3 Generation of PN Sequences

A pseudorandom, or PN, sequence is a code sequence of 1's and 0's whose autocorrelation has properties similar to those of white noise. In this section, we briefly describe the construction of some PN sequences and their autocorrelation and cross correlation properties. By far,
the most widely known binary PN code sequences are the maximum-length shift-register sequences. A maximum-length shift-register sequence, or m-sequence for short, has a length L = 2^m − 1 bits and is generated by an m-stage shift register with linear feedback, as illustrated in Figure 9.7. The sequence is periodic with period L, and each period contains 2^(m−1) ones and 2^(m−1) − 1 zeros. Table 9.1 lists the shift register connections for generating maximum-length sequences.

Figure 9.7: General m-stage shift register with linear feedback.

Table 9.1: Shift-register connections for generating ML sequences (stages connected to the modulo-2 adder, for m = 2 through 34).

In DS spread spectrum applications, the binary sequence with elements {0, 1} is mapped into a corresponding binary sequence with elements {−1, 1}. We shall call the equivalent sequence {c_n} with elements {−1, 1} a bipolar sequence. An important characteristic of a periodic PN sequence is its autocorrelation function, which is usually defined in terms of the bipolar sequence {c_n} as

R_c(m) = Σ_{n=1}^{L} c_n c_{n+m},   0 ≤ m ≤ L − 1    (9.3.1)

where L is the period of the sequence. Since the sequence {c_n} is periodic with period L, the autocorrelation sequence is also periodic with period L. Ideally, a PN sequence should have an autocorrelation function with correlation properties similar to those of white noise; that is, the ideal autocorrelation sequence for {c_n} is R_c(0) = L and R_c(m) = 0 for 1 ≤ m ≤ L − 1. In the case of m-sequences, the autocorrelation sequence is
R_c(m) = L,    for m = 0
R_c(m) = −1,   for 1 ≤ m ≤ L − 1    (9.3.2)

For long m-sequences, the size of the off-peak values of R_c(m) relative to the peak value R_c(0)—i.e., the ratio R_c(m)/R_c(0) = −1/L—is small and, from a practical viewpoint, inconsequential. Therefore, m-sequences are very close to ideal PN sequences when viewed in terms of their autocorrelation function.

In some applications the cross correlation properties of PN sequences are as important as the autocorrelation properties. For example, in CDMA each user is assigned a particular PN sequence. Ideally, the PN sequences among users should be mutually orthogonal, so that the level of interference experienced by one user from transmissions of other users is zero. However, the PN sequences used in practice by different users exhibit some correlation.

To be specific, let us consider the class of m-sequences. It is known that the periodic cross correlation function between a pair of m-sequences of the same period can have relatively large peaks. Table 9.2 lists the peak magnitude R_max for the periodic cross correlation between pairs of m-sequences for 3 ≤ m ≤ 12. Also listed in Table 9.2 is the number of m-sequences of length L = 2^m − 1 for 3 ≤ m ≤ 12. We observe that the number of m-sequences of length L increases rapidly with m. We also observe that, for most sequences, the peak magnitude R_max of the cross-correlation function is a large percentage of the peak value of the autocorrelation function. Consequently, m-sequences are not suitable for CDMA communication systems. Although it is possible to select a small subset of m-sequences that have relatively smaller cross correlation peak values than R_max, the number of sequences in the set is usually too small for CDMA applications.
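The two-valued autocorrelation of (9.3.2) is easy to verify by generating an m-sequence with a linear-feedback shift register. The sketch below assumes the primitive polynomial x^5 + x^3 + 1 for m = 5 (our choice of feedback taps, not necessarily the convention used in Table 9.1):

```python
def lfsr_msequence(taps, m):
    # Fibonacci LFSR over GF(2): s[n] = XOR of s[n - t] for t in taps.
    # With a primitive feedback polynomial the period is 2^m - 1.
    L = 2 ** m - 1
    state = [0] * (m - 1) + [1]      # state[k] holds s[n-1-k]; any nonzero fill
    seq = []
    for _ in range(L):
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        seq.append(state[-1])        # output the oldest stage
        state = [fb] + state[:-1]
    return seq

def periodic_autocorr(seq, shift):
    # bipolar autocorrelation R_c(m) of (9.3.1), with 0/1 mapped to -1/+1
    L = len(seq)
    c = [2 * b - 1 for b in seq]
    return sum(c[n] * c[(n + shift) % L] for n in range(L))

seq = lfsr_msequence((5, 2), 5)      # s[n] = s[n-5] XOR s[n-2]: x^5 + x^3 + 1
R = [periodic_autocorr(seq, m) for m in range(31)]
```

For this L = 31 m-sequence, R[0] = 31 and every off-peak value equals −1, exactly as (9.3.2) predicts.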
Table 9.2: Peak cross correlations of m-sequences and Gold sequences.

                     m-sequences                   Gold sequences
  m   L = 2^m − 1   Number   R_max   R_max/R_c(0)   R_max   R_max/R_c(0)
  3         7          2        5        0.71           5        0.71
  4        15          2        9        0.60           9        0.60
  5        31          6       11        0.35           9        0.29
  6        63          6       23        0.36          17        0.27
  7       127         18       41        0.32          17        0.13
  8       255         16       95        0.37          33        0.13
  9       511         48      113        0.22          33        0.06
 10      1023         60      383        0.37          65        0.06
 11      2047        176      287        0.14          65        0.03
 12      4095        144     1407        0.34         129        0.03

Methods for generating PN sequences with better periodic cross correlation properties than m-sequences have been developed by Gold [5], [6] and by Kasami [7]. Gold sequences are constructed by taking a pair of specially selected m-sequences, called preferred m-sequences, and forming the modulo-2 sum of the two sequences for each of the L cyclically shifted versions of one sequence relative to the other sequence. Thus, L Gold sequences are generated, as illustrated in Figure 9.8. For large L and m odd, the maximum value of the cross-correlation function between any pair of Gold sequences is R_max = √(2L).

Kasami [7] described a method for constructing PN sequences by decimating an m-sequence. In Kasami's method of construction, every (2^(m/2) + 1)st bit of an m-sequence is selected. This method of construction yields a smaller set of PN sequences compared with Gold sequences, but their maximum cross correlation value is R_max = √L.

It is interesting to compare the peak value of the cross correlation function for Gold sequences and for Kasami sequences with a known lower bound for the maximum cross correlation between any pair of binary sequences of period L. Given a set of N sequences of period L, a lower bound on their maximum cross correlation is

R_max ≥ L √( (N − 1)/(NL − 1) )    (9.3.3)

which, for large values of L and N, is well approximated as R_max ≥ √L. Hence, we observe that Kasami sequences satisfy the lower bound; hence, they are optimal. On the other hand, Gold sequences with m odd have R_max = √(2L), so they are slightly suboptimal.

Besides the well-known Gold sequences and Kasami sequences, there are other binary sequences that are appropriate for CDMA applications. The interested reader is referred to the papers by Scholtz [8] and by Sarwate and Pursley [9].
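The comparison with the bound (9.3.3) can be tabulated directly. For m = 10 (L = 1023), the Kasami peak 2^(m/2) + 1 = 33 essentially meets the bound, while the Gold peak 2^((m+2)/2) + 1 = 65 is about a factor √2 larger (the Gold set size of L + 2 sequences is used for N):

```python
import math

def welch_bound(L, N):
    # lower bound (9.3.3) on the peak cross correlation of N sequences of period L
    return L * math.sqrt((N - 1) / (N * L - 1))

m = 10
L = 2 ** m - 1                      # 1023
t_gold = 2 ** ((m + 2) // 2) + 1    # peak cross correlation, Gold sequences, m even
t_kasami = 2 ** (m // 2) + 1        # peak cross correlation, Kasami sequences
bound = welch_bound(L, L + 2)       # Gold set: L sums plus the two m-sequences
```

Here `bound` comes out just below 32 ≈ √L, so `t_kasami` sits essentially on the bound and `t_gold` about √2 above it.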
Figure 9.8: Generation of Gold sequences of length 31.

Illustrative Problem 9.5 Let us generate the L = 31 Gold sequences that result from taking the modulo-2 sum of the two shift-register outputs shown in Figure 9.8. The maximum value of the cross correlation for these sequences is R_max = 9.

Solution: The MATLAB script for performing this computation is given below. The 31 sequences generated are shown in Table 9.3.

% MATLAB script for Illustrative Problem 5, Chapter 9.
echo on
% first determine the maximal length shift register sequences;
% we'll take the initial shift register content as "00001"
connections1=[1 0 1 0 0];
connections2=[1 1 1 0 1];
sequence1=ss_mlsrs(connections1);
sequence2=ss_mlsrs(connections2);
% cyclically shift the second sequence and add it to the first one
L=2^length(connections1)-1;
for shift_amount=0:L-1,
  temp=[sequence2(shift_amount+1:L) sequence2(1:shift_amount)];
  gold_seq(shift_amount+1,:)=(sequence1+temp)-2*floor((sequence1+temp)/2);
end;
% find the max value of the cross correlation for these sequences
max_cross_corr=0;
for i=1:L-1,
  for j=i+1:L,
    % equivalent bipolar sequences
    c1=2*gold_seq(i,:)-1;
    c2=2*gold_seq(j,:)-1;
    for m=0:L-1,
      shifted_c2=[c2(m+1:L) c2(1:m)];
      corr=abs(sum(c1.*shifted_c2));
      if (corr>max_cross_corr),
        max_cross_corr=corr;
      end;
    end;
  end;
end;
% note that max_cross_corr turns out to be 9 in this example
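A Python counterpart of this computation, using a length-31 preferred pair with feedback polynomials x^5 + x^3 + 1 and x^5 + x^3 + x^2 + x + 1 (the reciprocals of the classic preferred pair x^5 + x^2 + 1 and x^5 + x^4 + x^3 + x^2 + 1; whether this matches the exact register connections of Figure 9.8 is an assumption), confirms that the periodic cross correlation is three-valued with peak magnitude 9:

```python
def lfsr(taps, m):
    # Fibonacci LFSR over GF(2): s[n] = XOR of s[n - t] for t in taps
    L = 2 ** m - 1
    state = [0] * (m - 1) + [1]
    seq = []
    for _ in range(L):
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        seq.append(state[-1])
        state = [fb] + state[:-1]
    return seq

def cross_corr_values(a, b):
    # set of periodic cross correlations over all relative shifts (bipolar)
    L = len(a)
    ca = [2 * x - 1 for x in a]
    cb = [2 * x - 1 for x in b]
    return {sum(ca[n] * cb[(n + s) % L] for n in range(L)) for s in range(L)}

u = lfsr((5, 2), 5)             # s[n] = s[n-5] ^ s[n-2]
v = lfsr((5, 4, 3, 2), 5)       # s[n] = s[n-5] ^ s[n-4] ^ s[n-3] ^ s[n-2]
vals = cross_corr_values(u, v)
```

For a preferred pair with m = 5, the cross correlation takes only the values {−1, 7, −9}, so the peak magnitude is t(5) = 2^3 + 1 = 9, matching the R_max = 9 quoted above.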
% find the max value of the cross correlation max-cross. c 2 = f c 2 ( m + 1 :L) c 2 ( 1 : m ) ] .seq(shift_amount + 1 . t e m p = [ s e q u e n c e 2 ( s h i f t . SPREA D SPECTRUM COMMUNICATION S YSTEMS — « I M I f e H A U M J j s U J d ^ l Illustrative Problem 9. if ( c o n > m a x .c o r r ) .floor({sequencel+temp).a m o u n t ) | .1 . r o s s . The 31 sequences generated are shown in Table 9. Figure 9. Frequency-Hopped Spread Spectrum 409 % imntit register contents % first element of the seque?VIE regis ters=[zeros( 1 . there is an identical PN sequence generator. may be selected to be either equal to the symbol rate. also. The frequency-hopping rate. say. The resultant signal is then demodulated by means of an FSK demodulator. We shall consider only a hopping rate equal to the symbol rate. denoted as Rh. there are multiple hops per symbol. we may specify 1 m — 1 possible carrier frequencies. seq(1) —regis t<5rs(m)> for i=2:L. the modulator selects one of two frequencies. / o or f \ . it is difficult to maintain phase coherence in the synthesis of the frequencies used in the hopping pattern and.. new_reg_cont(j)=regis[ers(j-1)+conneclions(j)*seq(i-1).9. Thus. the pseudorandom frequency translation introduced at the transmitter is removed at the demodulator by mixing the synthesizer output with the received signal. For example. which is used to control the output of the frequency synthesizer. FSK modulation with noncoherent demodulation is usually employed in FH spread spectrum systems. is equal to or lower than the symbol rate. or higher than the symbol rate. If Rh is higher than the symbol rate. A block diagram of the transmitter and receiver for a FH spread spectrum system is shown in Figure 9. if binary F S K is employed. If Rf. synchronized with the received signal. The selection of the frequency slot(s) in each signal interval is made pseudorandomly according to the output from a PN generator.4. 
The modulation is either binary or M-ary FSK (MFSK). For example, if binary FSK is employed, the modulator selects one of two frequencies, f_0 or f_1, corresponding to the transmission of a 0 or a 1. The resulting binary FSK signal is translated in frequency by an amount that is determined by the output sequence from the PN generator, which is used to select a frequency f_c that is synthesized by the frequency synthesizer. For example, by taking m bits from the PN generator, we may specify 2^m − 1 possible carrier frequencies. This frequency-translated signal is transmitted over the channel. Figure 9.10 illustrates a FH signal pattern.

At the receiver, there is an identical PN sequence generator, synchronized with the received signal, which is used to control the output of the frequency synthesizer. Thus, the pseudorandom frequency translation introduced at the transmitter is removed at the demodulator by mixing the synthesizer output with the received signal. The resultant signal is then demodulated by means of an FSK demodulator. A signal for maintaining synchronism of the PN sequence generator with the FH received signal is usually extracted from the received signal.

Although binary PSK modulation generally yields better performance than binary FSK, it is difficult to maintain phase coherence in the synthesis of the frequencies used in the hopping pattern and, also, in the propagation of the signal over the channel as the signal is hopped from one frequency to another over a wide bandwidth. Consequently, FSK modulation with noncoherent demodulation is usually employed in FH spread spectrum systems.

The frequency-hopping rate, denoted as R_h, may be selected to be either equal to the symbol rate, lower than the symbol rate, or higher than the symbol rate. If R_h is equal to or lower than the symbol rate, the FH system is called a slow-hopping system. If R_h is higher than the symbol rate, i.e., if there are multiple hops per symbol, the FH system is called a fast-hopping system. We shall consider only a hopping rate equal to the symbol rate.
Table 9.3: Table of Gold sequences from Illustrative Problem 9.5.

Figure 9.9: Block diagram of an FH spread spectrum system.
Figure 9.10: An example of an FH pattern.

9.4.1 Probability of Error for FH Signals

Let us consider a FH system in which binary FSK is used to transmit the digital information. The hop rate is 1 hop per bit, and the demodulation and detection are noncoherent. In an AWGN channel, the probability of error of such a system is

P_2 = (1/2) e^(−ρ_b/2)    (9.4.1)

where ρ_b = E_b/N_0.

The same result applies if the interference is a broadband signal or jammer with a flat spectrum that covers the entire FH band of width W. In this case, N_0 is replaced by N_0 + J_0, where J_0 = P_J/W is the spectral density of the interference, P_J is the average power of the broadband interference, and W is the available channel bandwidth. Assuming that J_0 >> N_0, the SNR can be expressed as

ρ_b = E_b/J_0 = (P_S/R)/(P_J/W) = (W/R)/(P_J/P_S)    (9.4.2)

where P_S is the average signal power, R is the bit rate, W/R is the processing gain, and P_J/P_S is the jamming margin for the FH spread spectrum signal.

Slow FH spread spectrum systems are particularly vulnerable to partial-band interference that may result either from intentional jamming or in FH CDMA systems.
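Equations (9.4.1) and (9.4.2) translate directly into a pair of helper functions; the numeric values in the last two lines are arbitrary illustrative choices:

```python
import math

def p2_awgn(rho_b):
    # noncoherent binary FSK, one hop per bit: (9.4.1), also valid
    # for full-band jamming with rho_b = Eb/J0
    return 0.5 * math.exp(-rho_b / 2)

def rho_b_broadband(W_over_R, Pj_over_Ps):
    # Eb/J0 = (W/R)/(Pj/Ps) for a flat jammer over the whole FH band: (9.4.2)
    return W_over_R / Pj_over_Ps

rho = rho_b_broadband(1000, 100)   # processing gain 1000, jamming margin 100
pe = p2_awgn(rho)                  # error probability at Eb/J0 = 10
```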
To be specific, suppose that the partial-band interference is modeled as a zero-mean Gaussian random process with a flat power-spectral density over a fraction of the total bandwidth W and zero in the remainder of the frequency band. In the region or regions where the power-spectral density is nonzero, its value is S_J(f) = J_0/α, where 0 < α ≤ 1 and α is the fraction of the frequency band occupied by the interference. In other words, the interference average power P_J is assumed to be constant regardless of α.

Suppose that the partial-band interference comes from a jammer that selects α to optimize the effect on the communications system. In an uncoded slow-hopping system with binary FSK modulation and noncoherent detection, the transmitted frequencies are selected with uniform probability in the frequency band W. Consequently, the received signal will be jammed with probability α, and it will not be jammed with probability 1 − α. When it is jammed, the probability of error is (1/2) e^(−αρ_b/2), and when it is not jammed, the detection of the signal is assumed to be error-free. Therefore, the average probability of error is

P_2(α) = (α/2) e^(−αρ_b/2)    (9.4.3)

where ρ_b = E_b/J_0.
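A quick numerical scan of (9.4.3) over the jamming fraction shows where the worst case lies; the grid resolution and the choice ρ_b = 20 are arbitrary:

```python
import math

def p2_alpha(alpha, rho_b):
    # average error probability (9.4.3) under partial-band jamming
    return 0.5 * alpha * math.exp(-alpha * rho_b / 2)

rho = 20.0
# scan the jamming fraction; the maximum sits near alpha = 2/rho
alphas = [k / 1000 for k in range(1, 1001)]
worst_alpha = max(alphas, key=lambda a: p2_alpha(a, rho))
worst_pe = p2_alpha(worst_alpha, rho)
```

The scan peaks at α ≈ 2/ρ_b = 0.1, anticipating the closed-form optimization carried out next.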
Figure 9.4.11 illustrates the error rate as a function of the SNR ρb for several values of α. The jammer is assumed to optimize its strategy by selecting α to maximize the probability of error. By differentiating P2(α) and solving for the value of α that maximizes P2(α), we find that the jammer's best choice of α is

    α* = 2/ρb,   ρb ≥ 2
    α* = 1,      ρb < 2                                      (9.4.4)

The corresponding error probability for the worst-case partial-band jammer is

    P2 = 1/(e·ρb),   ρb ≥ 2                                  (9.4.5)

which is also shown in Figure 9.4.11. Whereas the error probability decreases exponentially for full-band jamming, as given by (9.4.3) with α = 1, the error probability for worst-case partial-band jamming decreases only inversely with Eb/J0.

[Figure 9.4.11: Performance of binary FSK with partial-band interference — error probability versus SNR/bit (dB), showing curves for several values of α (e.g., α = 0.01) and the worst-case partial-band jamming curve.]

Illustrative Problem 9.6 Via Monte Carlo simulation, demonstrate the performance of an FH digital communication system that employs binary FSK and is corrupted by worst-case partial-band interference.

The block diagram of the system to be simulated is shown in Figure 9.12.
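The maximization leading to (9.4.4) and (9.4.5) is a one-line calculus step; for completeness, a sketch:

```latex
\frac{d}{d\alpha}\left[\frac{\alpha}{2}\,e^{-\alpha\rho_b/2}\right]
= \frac{1}{2}\,e^{-\alpha\rho_b/2}\left(1 - \frac{\alpha\rho_b}{2}\right) = 0
\quad\Longrightarrow\quad \alpha^{*} = \frac{2}{\rho_b}
```

This is an admissible choice only when 2/ρb ≤ 1, i.e., ρb ≥ 2; otherwise the maximum over 0 < α ≤ 1 occurs at the endpoint α = 1. Substituting α* = 2/ρb back into P2(α) = (α/2) exp(−αρb/2) gives P2 = (1/ρb)e^{−1} = 1/(e·ρb).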
[Figure 9.12: Model of a binary FSK system with partial-band interference for the Monte Carlo simulation.]

A uniform random number generator (RNG) is used to generate the binary information sequence, which is the input to the FSK modulator. The output of the FSK modulator is corrupted by additive Gaussian noise with probability α. A second uniform RNG is used to determine when the additive Gaussian noise corrupts the signal and when it does not. The variance of each of the noise components is σ² = J0/2α.

In the presence of noise, the input to the detector, assuming that a 0 is transmitted, is

    r1 = (√Eb·cos φ + n1c)² + (√Eb·sin φ + n1s)²
    r2 = n2c² + n2s²

where φ represents the channel-phase shift and n1c, n1s, n2c, n2s represent the additive noise components. In the absence of noise, r1 = Eb and r2 = 0; hence, no errors occur at the detector. For simplicity, we may set φ = 0 and normalize J0 by setting it equal to unity. Then ρb = Eb/J0 = Eb. Since σ² = J0/2α and α = 2/ρb, where α is given by (9.4.4), it follows that σ² = ρb/4.

Figure 9.13 illustrates the error rate that results from the Monte Carlo simulation. Also shown in the figure is the theoretical value of the probability of error for worst-case partial-band interference, given by (9.4.5). The MATLAB scripts for the simulation program are given below.

% MATLAB script for Illustrative Problem 9.6, Chapter 9.
echo on
rho_b1=0:5:35;                          % rho in dB for the simulated error rate
rho_b2=0:0.1:35;                        % rho in dB for the theoretical error rate computation
for i=1:length(rho_b1),
  smld_err_prb(i)=ss_pe96(rho_b1(i));   % simulated error rate
end;
for i=1:length(rho_b2),
  temp=10^(rho_b2(i)/10);
  if (temp>2)
    theo_err_rate(i)=1/(exp(1)*temp);    % theoretical error rate if rho > 2
  else
    theo_err_rate(i)=(1/2)*exp(-temp/2); % theoretical error rate if rho < 2
  end;
end;
% Plotting commands follow.

function [p]=ss_pe96(rho_in_dB)
% [p]=ss_pe96(rho_in_dB)
%   SS_PE96 finds the measured error rate. The value of the
%   signal-to-interference ratio in dB is given as an
%   input to the function.
rho=10^(rho_in_dB/10);
Eb=rho;                         % energy per bit
if (rho>2),                     % optimal alpha if rho > 2
  alpha=2/rho;
else                            % optimal alpha if rho < 2
  alpha=1;
end;
sgma=sqrt(1/(2*alpha));         % noise standard deviation
N=10000;                        % number of bits transmitted
% Generation of the data sequence.
for i=1:N,
  temp=rand;
  if (temp<0.5),
    data(i)=1;
  else
    data(i)=0;
  end;
end;
% Find the received signals.
for i=1:N,
  % The transmitted signal:
  if (data(i)==0),
    r1c(i)=sqrt(Eb); r1s(i)=0;
    r2c(i)=0;        r2s(i)=0;
  else
    r1c(i)=0;        r1s(i)=0;
    r2c(i)=sqrt(Eb); r2s(i)=0;
  end;
  % The received signal is found by adding noise with probability alpha.
  if (rand<alpha),
    r1c(i)=r1c(i)+gngauss(sgma);
    r1s(i)=r1s(i)+gngauss(sgma);
    r2c(i)=r2c(i)+gngauss(sgma);
    r2s(i)=r2s(i)+gngauss(sgma);
  end;
end;
% Make the decisions and count the number of errors made.
num_of_err=0;
for i=1:N,
  r1=r1c(i)^2+r1s(i)^2;         % first decision variable
  r2=r2c(i)^2+r2s(i)^2;         % second decision variable
  % The decision is made next.
  if (r1>r2),
    decis=0;
  else
    decis=1;
  end;
  % Increment the counter if this is an error.
  if (decis~=data(i)),
    num_of_err=num_of_err+1;
  end;
end;
% The measured bit-error rate is then
p=num_of_err/N;
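A convenient sanity check when running the scripts: at any single SNR point the measured rate should be close to the worst-case value in (9.4.5). For example, at an SNR of 10 dB, i.e., ρb = 10:

```latex
P_2 \;=\; \frac{1}{e\,\rho_b} \;=\; \frac{1}{10e} \;\approx\; 3.7\times 10^{-2}
```

so with N = 10000 transmitted bits the simulation should produce on the order of a few hundred errors at this point; a count far from that suggests a coding error.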
[Figure 9.13: Error-rate performance of FH binary FSK system with partial-band interference — Monte Carlo simulation.]

9.4.2 Use of Signal Diversity to Overcome Partial-Band Interference

The performance of the FH system corrupted by partial-band interference as described in the previous section is very poor. For example, for the system to achieve an error probability of 10^-6, the SNR required at the detector is almost 60 dB in the presence of worst-case interference. By comparison, in the absence of partial-band interference, the SNR required in AWGN is about 10 dB. Consequently, the loss in SNR due to the presence of the partial-band interference is approximately 50 dB, which is excessively high.

The way to reduce the effect of partial-band interference on the FH spread spectrum system is through signal diversity; i.e., the same information bit is transmitted on multiple frequency hops, and the signals from the multiple transmissions are weighted and combined together at the input to the detector. To be specific, suppose that each information bit is transmitted on two successive frequency hops. The resulting system is called a dual diversity system. In this case, the two inputs to the combiner are either both corrupted by interference, or one of the two transmitted signals is corrupted by interference, or neither of the two transmitted signals is corrupted by interference. The combiner is assumed to know the level of the interference and forms the combined decision variables

    x = w1·r11 + w2·r12
    y = w1·r21 + w2·r22                                      (9.4.6)

where r11, r21 are the two outputs of the square-law device for the first transmitted signal and r12, r22 are the outputs of the square-law device from the second transmitted signal. The weights w1 and w2 are set to 1/σi², where σi² is the variance of the additive noise plus interference. When σi² is small, as would be the case when there is no interference, the weight placed on the received signal is large. On the other hand, when interference is present, σi² is large, and the weight placed on the received signal is small. In other words, the combiner de-emphasizes the received signal components that are corrupted by interference. The two components x and y from the combiner are fed to the detector, which decides in favor of the larger signal component.

The performance of the FH signal with dual diversity is now dominated by the case in which both signal transmissions are corrupted by interference. However, the probability of this event is proportional to α². Consequently, the probability of error for the worst-case partial-band interference has the form

    P2(2) = K2/ρb²,   ρb ≥ 2                                 (9.4.7)

where K2 is a constant and ρb = Eb/J0. Thus, the probability of error for dual diversity decreases inversely as the square of the SNR. As a consequence, an increase in SNR by a factor of 10 (10 dB) results in a decrease of the error probability by a factor of 100. Hence, an error probability of 10^-6 can be achieved with an SNR of about 30 dB with dual diversity, compared to 60 dB (1000 times larger) for no diversity.

More generally, if each information bit is transmitted on D frequency hops, where D is the order of diversity, the probability of error has the form

    P2(D) = KD/ρb^D,   ρb ≥ 2                                (9.4.8)

where KD is a constant. Since signal diversity as described above is a trivial form of coding (repetition coding), it is not surprising to observe that, instead of repeating the transmission of each information bit D times, we may use a code with a minimum Hamming distance equal to D and soft-decision decoding of the outputs from the square-law devices.

ILLUSTRATIVE PROBLEM

Illustrative Problem 9.7 Repeat the Monte Carlo simulation for the FH system considered in Illustrative Problem 9.6, but now employ dual diversity.

The combiner is assumed to know the level of the interference. In the absence of interference, the weight used in the combiner is set to w = 10, which corresponds to σ² = 0.1, a value that may be typical of the level of the additive Gaussian noise. On the other hand, when interference is present, the weight is set to w = 1/σ² = 2/E, where E is constrained to be E > 4.
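The SNR figures quoted above follow directly from (9.4.5) and (9.4.7). Taking K2 ≈ 1 for the order-of-magnitude comparison (the exact constant is not given here):

```latex
\text{No diversity: } \frac{1}{e\,\rho_b} = 10^{-6}
\;\Rightarrow\; \rho_b = \frac{10^{6}}{e} \approx 3.7\times 10^{5}
\approx 55.7\ \mathrm{dB}
\qquad
\text{Dual diversity: } \frac{K_2}{\rho_b^{2}} = 10^{-6},\; K_2 \approx 1
\;\Rightarrow\; \rho_b \approx 10^{3} = 30\ \mathrm{dB}
```

This reproduces the "almost 60 dB" versus "about 30 dB" comparison made in the text.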
Each information bit is transmitted on two successive frequency hops, so the total energy per bit in the two hops is Eb = 2E, and the SNR per hop is ρb/2. Therefore, the error probability is plotted as a function of ρb = Eb/J0. The results of the Monte Carlo simulation are illustrated in Figure 9.14. The MATLAB scripts for the simulation program are given below.

% MATLAB script for Illustrative Problem 9.7, Chapter 9.
echo on
rho_b=0:2:24;                           % rho in dB
for i=1:length(rho_b),
  smld_err_prb(i)=ss_pe97(rho_b(i));    % simulated error rate
end;
% Plotting commands follow.

function [p]=ss_pe97(rho_in_dB)
% [p]=ss_pe97(rho_in_dB)
%   SS_PE97 finds the measured error rate. The value of the
%   signal-to-interference ratio in dB is given as an input
%   to the function.
rho=10^(rho_in_dB/10);
Eb=rho;                         % energy per information bit
E=Eb/2;                         % energy per symbol transmitted
% The optimal value of alpha.
if (rho>2),
  alpha=2/rho;
else
  alpha=1;
end;
% The variance of the additive noise.
if (E>1),
  sgma=sqrt(E/2);
else
  sgma=sqrt(1/2);
end;
N=10000;                        % number of bits transmitted
% Generation of the data sequence.
for i=1:N,
  temp=rand;
  if (temp<0.5),
    data(i)=1;
  else
    data(i)=0;
  end;
end;
% Find the transmitted signals.
for i=1:N,
  if (data(i)==0),
    tr11c(i)=sqrt(E); tr11s(i)=0;
    tr12c(i)=sqrt(E); tr12s(i)=0;
    tr21c(i)=0;       tr21s(i)=0;
    tr22c(i)=0;       tr22s(i)=0;
  else
    tr11c(i)=0;       tr11s(i)=0;
    tr12c(i)=0;       tr12s(i)=0;
    tr21c(i)=sqrt(E); tr21s(i)=0;
    tr22c(i)=sqrt(E); tr22s(i)=0;
  end;
end;
% Find the received signals, make the decisions, and count the number of errors.
num_of_err=0;
for i=1:N,
  % Determine if there is jamming.
  if (rand<alpha),              % jamming on the first transmission
    jamming1=1;
  else
    jamming1=0;
  end;
  if (rand<alpha),              % jamming on the second transmission
    jamming2=1;
  else
    jamming2=0;
  end;
  % The received signals are:
  if (jamming1==1)
    r11c=tr11c(i)+gngauss(sgma); r11s=tr11s(i)+gngauss(sgma);
    r21c=tr21c(i)+gngauss(sgma); r21s=tr21s(i)+gngauss(sgma);
  else
    r11c=tr11c(i); r11s=tr11s(i);
    r21c=tr21c(i); r21s=tr21s(i);
  end;
  if (jamming2==1)
    r12c=tr12c(i)+gngauss(sgma); r12s=tr12s(i)+gngauss(sgma);
    r22c=tr22c(i)+gngauss(sgma); r22s=tr22s(i)+gngauss(sgma);
  else
    r12c=tr12c(i); r12s=tr12s(i);
    r22c=tr22c(i); r22s=tr22s(i);
  end;
  % Compute the decision variables; first the weights are computed.
  if (jamming1==1),
    w1=1/sgma^2;
  else
    w1=10;
  end;
  if (jamming2==1),
    w2=1/sgma^2;
  else
    w2=10;
  end;
  % The intermediate decision variables:
  r11=r11c^2+r11s^2;
  r12=r12c^2+r12s^2;
  r21=r21c^2+r21s^2;
  r22=r22c^2+r22s^2;
  % The resulting decision variables x and y are computed as follows.
  x=w1*r11+w2*r12;
  y=w1*r21+w2*r22;
  % Make the decision.
  if (x>y),
    decis=0;
  else
    decis=1;
  end;
  % Increment the counter if this is an error.
  if (decis~=data(i)),
    num_of_err=num_of_err+1;
  end;
end;
% The measured bit-error rate is then
p=num_of_err/N;

[Figure 9.14: Error-rate performance of FH dual diversity binary FSK with partial-band interference — Monte Carlo simulation.]

Problems

9.1 Write a MATLAB program to perform a Monte Carlo simulation of a DS spread spectrum system that transmits the information via binary PSK through an AWGN channel. Assume that the processing gain is 10. Plot the measured error rate as a function of the SNR.

9.2 Write a MATLAB program to perform a Monte Carlo simulation of a DS spread spectrum system that operates in an LPI mode. The processing gain is 20 (13 dB), and the desired power signal-to-noise ratio Ps/Pn at the receiver prior to despreading the signal is −5 dB or smaller. Plot a graph of the measured error rate versus the SNR.

9.3 Repeat the Monte Carlo simulation described in Illustrative Problem 9.4 for a processing gain of 10 and plot the measured error rate.

9.4 Write a MATLAB program that implements an m = 12-stage maximal-length shift register and generate three periods of the sequence. Is the resulting sequence periodic? If so, what is the period of the sequence? Compute and sketch the autocorrelation sequence of the resulting (bipolar) sequence using (9.1.1).

9.5 Write a MATLAB program that implements an m = 3-stage and an m = 4-stage maximal-length shift register and form the modulo-2 sum of their output sequences. Is the resulting sequence periodic? If so, what is the period of the sequence? Compute and graph the periodic autocorrelation function of the equivalent bipolar sequence given by (9.1.1).

9.6 Write a MATLAB program to compute the autocorrelation sequence of the L = 31 Gold sequences that were generated in Illustrative Problem 9.3.

9.7 An FH binary orthogonal FSK system employs an m = 7-stage shift register to generate a periodic maximal-length sequence of length L = 127. Each state of the shift register selects one of N = 127 nonoverlapping frequency bands in the hopping pattern. Write a MATLAB program that simulates the selection of the center frequency and the generation of the two frequencies in each of the N = 127 frequency bands. Show the frequency selection pattern for the first 10 bit intervals.

9.8 Write a Monte Carlo program to simulate a FH digital communication system that employs binary FSK with noncoherent (square-law) detection. The system is corrupted by partial-band interference with spectral density J0/α, where α = 0.1. The interference is spectrally flat over the fraction α of the frequency band W. Plot the measured error rate for this spread spectrum system versus the SNR Eb/J0.

9.9 Repeat the Monte Carlo simulation in Illustrative Problem 9.7 when the weight used at the combiner in the absence of interference is set to w = 100 and, with interference, the weight is w = 1/σ² = 2/E, where the signal energy is E > 4. Plot the measured error rate from the Monte Carlo simulation for this dual diversity system, compare this performance with that obtained in Illustrative Problem 9.7, and thus demonstrate whether there is any performance gain from signal diversity in this case.
Kay Fundamentals of Statistical Signal Processing, Volume 2 Detection Theory 1998digital communications matlab programmesCircuit Design for RF Transceivers(Book)Error Control CodingPrinciples of Digital CommunicationKluwer - Introduction to Parallel Processing - Algorithms and ArchitecturesSolution Manual of Statistical Digital Signal Processing Modeling by MonsonHSolutions to Selected Exercises for Stochastic Processes Theory for Applications ( R. G. Gallager )Error Control Systems for Digital Communication and Storage (Stephen B. Wicker)DigitalCommMS.pdfPrinciples of Communication 5Ed R. E. Ziemer, William H. Tranter Solutions ManualMore From Regislane Paiva LimaSkip carouselcarousel previouscarousel nextIngles Tecnico MecanicaFotodiodo2Papoulis a. Probability, Random Variables, Stochastic Processes (3ed., MGH, sMatemática AplicadaFooter MenuBack To TopAboutAbout ScribdPressOur blogJoin our team!Contact UsJoin todayInvite FriendsGiftsLegalTermsPrivacyCopyrightSupportHelp / FAQAccessibilityPurchase helpAdChoicesPublishersSocial MediaCopyright © 2018 Scribd Inc. .Browse Books.Site Directory.Site Language: English中文EspañolالعربيةPortuguês日本語DeutschFrançaisTurkceРусский языкTiếng việtJęzyk polskiBahasa indonesiaSign up to vote on this titleUsefulNot usefulMaster Your Semester with Scribd & The New York TimesSpecial offer for students: Only $4.99/month.Master Your Semester with a Special Offer from Scribd & The New York TimesRead Free for 30 DaysCancel anytime.Read Free for 30 DaysYou're Reading a Free PreviewDownloadClose DialogAre you sure?This action might not be possible to undo. Are you sure you want to continue?CANCELOK