Fundamentals of Electrical Engineering I



By: Don Johnson

Online: <http://cnx.org/content/col10040/1.9/>

CONNEXIONS, Rice University, Houston, Texas

This selection and arrangement of content as a collection is copyrighted by Don Johnson. It is licensed under the Creative Commons Attribution 1.0 license (http://creativecommons.org/licenses/by/1.0). Collection structure revised: August 6, 2008. PDF generated: October 15, 2012. For copyright and attribution information for the modules contained in this collection, see the Attributions section.

Table of Contents

1 Introduction
  1.1 Themes
  1.2 Signals Represent Information
  1.3 Structure of Communication Systems
  1.4 The Fundamental Signal
  1.5 Introduction Problems
  Solutions

2 Signals and Systems
  2.1 Complex Numbers
  2.2 Elemental Signals
  2.3 Signal Decomposition
  2.4 Discrete-Time Signals
  2.5 Introduction to Systems
  2.6 Simple Systems
  2.7 Signals and Systems Problems
  Solutions

3 Analog Signal Processing
  3.1 Voltage, Current, and Generic Circuit Elements
  3.2 Ideal Circuit Elements
  3.3 Ideal and Real-World Circuit Elements
  3.4 Electric Circuits and Interconnection Laws
  3.5 Power Dissipation in Resistor Circuits
  3.6 Series and Parallel Circuits
  3.7 Equivalent Circuits: Resistors and Sources
  3.8 Circuits with Capacitors and Inductors
  3.9 The Impedance Concept
  3.10 Time and Frequency Domains
  3.11 Power in the Frequency Domain
  3.12 Equivalent Circuits: Impedances and Sources
  3.13 Transfer Functions
  3.14 Designing Transfer Functions
  3.15 Formal Circuit Methods: Node Method
  3.16 Power Conservation in Circuits
  3.17 Electronics
  3.18 Dependent Sources
  3.19 Operational Amplifiers
  3.20 The Diode
  3.21 Analog Signal Processing Problems
  Solutions

4 Frequency Domain
  4.1 Introduction to the Frequency Domain
  4.2 Complex Fourier Series
  4.3 Classic Fourier Series
  4.4 A Signal's Spectrum
  4.5 Fourier Series Approximation of Signals
  4.6 Encoding Information in the Frequency Domain
  4.7 Filtering Periodic Signals
  4.8 Derivation of the Fourier Transform
  4.9 Linear Time Invariant Systems
  4.10 Modeling the Speech Signal
  4.11 Frequency Domain Problems
  Solutions

5 Digital Signal Processing
  5.1 Introduction to Digital Signal Processing
  5.2 Introduction to Computer Organization
  5.3 The Sampling Theorem
  5.4 Amplitude Quantization
  5.5 Discrete-Time Signals and Systems
  5.6 Discrete-Time Fourier Transform (DTFT)
  5.7 Discrete Fourier Transforms (DFT)
  5.8 DFT: Computational Complexity
  5.9 Fast Fourier Transform (FFT)
  5.10 Spectrograms
  5.11 Discrete-Time Systems
  5.12 Discrete-Time Systems in the Time-Domain
  5.13 Discrete-Time Systems in the Frequency Domain
  5.14 Filtering in the Frequency Domain
  5.15 Efficiency of Frequency-Domain Filtering
  5.16 Discrete-Time Filtering of Analog Signals
  5.17 Digital Signal Processing Problems
  Solutions

6 Information Communication
  6.1 Information Communication
  6.2 Types of Communication Channels
  6.3 Wireline Channels
  6.4 Wireless Channels
  6.5 Line-of-Sight Transmission
  6.6 The Ionosphere and Communications
  6.7 Communication with Satellites
  6.8 Noise and Interference
  6.9 Channel Models
  6.10 Baseband Communication
  6.11 Modulated Communication
  6.12 Signal-to-Noise Ratio of an Amplitude-Modulated Signal
  6.13 Digital Communication
  6.14 Binary Phase Shift Keying
  6.15 Frequency Shift Keying
  6.16 Digital Communication Receivers
  6.17 Digital Communication in the Presence of Noise
  6.18 Digital Communication System Properties
  6.19 Digital Channels
  6.20 Entropy
  6.21 Source Coding Theorem
  6.22 Compression and the Huffman Code
  6.23 Subtleties of Coding
  6.24 Channel Coding
  6.25 Repetition Codes
  6.26 Block Channel Coding
  6.27 Error-Correcting Codes: Hamming Distance
  6.28 Error-Correcting Codes: Channel Decoding
  6.29 Error-Correcting Codes: Hamming Codes
  6.30 Noisy Channel Coding Theorem
  6.31 Capacity of a Channel
  6.32 Comparison of Analog and Digital Communication
  6.33 Communication Networks
  6.34 Message Routing
  6.35 Network architectures and interconnection
  6.36 Ethernet
  6.37 Communication Protocols
  6.38 Information Communication Problems
  Solutions

7 Appendix
  7.1 Decibels
  7.2 Permutations and Combinations
  7.3 Frequency Allocations
  Solutions

Index

Attributions
Chapter 1 Introduction

1.1 Themes
(This content is available online at <http://cnx.org/content/m0000/2.18/>.)

From its beginnings in the late nineteenth century, electrical engineering has blossomed from focusing on electrical circuits for power, telegraphy and telephony to focusing on a much broader range of disciplines. However, the underlying themes are relevant today: power creation and transmission and information have been the underlying themes of electrical engineering for a century and a half. This course concentrates on the latter theme: the representation, manipulation, transmission, and reception of information by electrical means. This course describes what information is, how engineers quantify information, and how electrical signals represent information.

Information can take a variety of forms. When you speak to a friend, your thoughts are translated by your brain into motor commands that cause various vocal tract components (the jaw, the tongue, the lips) to move in a coordinated fashion. Information arises in your thoughts and is represented by speech, which must have a well defined, broadly known structure so that someone else can understand what you say. Utterances convey information in sound pressure waves, which propagate to your friend's ear. There, sound energy is converted back to neural activity, and, if what you say makes sense, she understands what you say. Your words could have been recorded on a compact disc (CD), mailed to your friend and listened to by her on her stereo. Information can take the form of a text file you type into your word processor. You might send the file via e-mail to a friend, who reads it and understands it. From an information theoretic viewpoint, all of these scenarios are equivalent, although the forms of the information representation (sound waves, plastic, and computer files) are very different.

Engineers, who don't care about information content, categorize information into two different forms: analog and digital. Analog information is continuous valued; examples are audio and video. Digital information is discrete valued; examples are text (like what you are reading now) and DNA sequences.

The conversion of information-bearing signals from one energy form into another is known as energy conversion or transduction. All conversion systems are inefficient since some input energy is lost as heat, but this loss does not necessarily mean that the conveyed information is lost. Conceptually we could use any form of energy to represent information, but electric signals are uniquely well-suited for information representation, transmission (signals can be broadcast from antennas or sent through wires), and manipulation (circuits can be built to reduce noise and computers can be used to modify information). Thus, we will be concerned with how to
• represent all forms of information with electrical signals,
• encode information as voltages, currents, and electromagnetic waves,
• manipulate information-bearing electric signals with circuits and computers, and
• receive electric signals and convert the information expressed by electric signals back into a useful form.
Telegraphy represents the earliest electrical information system, and it dates from 1837. At that time, electrical science was largely empirical, and only those with experience and intuition could develop telegraph systems. Electrical science came of age when James Clerk Maxwell (http://www-groups.dcs.st-andrews.ac.uk/~history/Mathematicians/Maxwell.html) proclaimed in 1864 a set of equations that he claimed governed all electrical phenomena. These equations predicted that light was an electromagnetic wave, and that energy could propagate. Because of the complexity of Maxwell's presentation, the development of the telephone in 1876 was due largely to empirical work. Once Heinrich Hertz confirmed Maxwell's prediction of what we now call radio waves in about 1882, Maxwell's equations were simplified by Oliver Heaviside and others, and were widely read. This understanding of fundamentals led to a quick succession of inventions (the wireless telegraph in 1899, the vacuum tube in 1905, and radio broadcasting) that marked the true emergence of the communications age.

During the first part of the twentieth century, circuit theory and electromagnetic theory were all an electrical engineer needed to know to be qualified and produce first-rate designs. Consequently, circuit theory served as the foundation and the framework of all of electrical engineering education. At mid-century, three "inventions" changed the ground rules. These were the first public demonstration of the first electronic computer (1946), the invention of the transistor (1947), and the publication of A Mathematical Theory of Communication by Claude Shannon (http://www.lucent.com/minds/infotheory/) in 1948. Although conceived separately, these creations gave birth to the information age, in which digital and analog communication systems interact and compete for design preferences. About twenty years later, the laser was invented, which opened even more design possibilities.

Thus, the primary focus shifted from how to build communication systems (the circuit theory era) to what communications systems were intended to accomplish. Only once the intended system is specified can an implementation be selected. Today's electrical engineer must be mindful of the system's ultimate goal, and understand the tradeoffs between digital and analog alternatives, and between hardware and software configurations in designing information systems.

note: Thanks to the translation efforts of Rice University's Disability Support Services (http://www.dss.rice.edu/), this collection is now available in a Braille-printable version. Please download the .zip file (http://cnx.org/content/m0000/latest/FundElecEngBraille.zip) containing all the necessary .dxb and image files.
1.2 Signals Represent Information
(This content is available online at <http://cnx.org/content/m0001/2.27/>.)

Whether analog or digital, information is represented by the fundamental quantity in electrical engineering: the signal. Stated in mathematical terms, a signal is merely a function. Analog signals are continuous-valued; digital signals are discrete-valued. The independent variable of the signal could be time (speech, for example), space (images), or the integers (denoting the sequencing of letters and numbers in the football score).

1.2.1 Analog Signals
Analog signals are usually signals defined over continuous independent variable(s). Speech (Section 4.10) is produced by your vocal cords exciting acoustic resonances in your vocal tract. The result is pressure waves propagating in the air, and the speech signal thus corresponds to a function having independent variables of space and time and a value corresponding to air pressure: s(x, t) (here we use vector notation x to denote spatial coordinates). When you record someone talking, you are evaluating the speech signal at a particular spatial location, x₀ say. An example of the resulting waveform s(x₀, t) is shown in Figure 1.1 (Speech Example).

[Figure 1.1 (Speech Example): A speech signal's amplitude relates to tiny air pressure variations. Shown is a recording of the vowel "e" (as in "speech"); the horizontal axis is time and the vertical axis is amplitude.]

Photographs are static, and are continuous-valued signals defined over space. Black-and-white images have only one value at each point in space, which amounts to its optical reflection properties. In Figure 1.2 (Lena), an image is shown, demonstrating that it (and all other images as well) are functions of two independent spatial variables.

[Figure 1.2 (Lena): On the left is the classic Lena image, which is used ubiquitously as a test image. It contains straight and curved lines, complicated texture, and a face. On the right is a perspective display of the Lena image as a signal: a function of two spatial variables. The colors merely help show what signal values are about the same size. In this image, signal values range between 0 and 255; why is that?]

Color images have values that express how reflectivity depends on the optical spectrum. Painters long ago found that mixing together combinations of the so-called primary colors (red, yellow and blue) can produce very realistic color images. Thus, images today are usually thought of as having three values at every point in space, but a different set of colors is used: how much of red, green and blue is present. Mathematically, color pictures are multivalued (vector-valued) signals: s(x) = (r(x), g(x), b(x))ᵀ.

Interesting cases abound where the analog signal depends not on a continuous variable, such as time, but on a discrete variable. For example, temperature readings taken every hour have continuous (analog) values, but the signal's independent variable is (essentially) the integers.
Thus. yellow and bluecan produce very realistic color images. 5 00 nul 01 soh 02 stx 03 etx 04 eot 05 enq 06 ack 07 bel 08 bs 09 ht 0A nl 0B vt 0C np 0D cr 0E so 0F si 10 dle 11 dc1 12 dc2 13 dc3 14 dc4 15 nak 16 syn 17 etb 18 car 19 em 1A sub 1B esc 1C fs 1D gs 1E rs 1F us 20 sp 21 ! 22 " 23 # 24 $ 25 % 26 & 27 ' 28 ( 29 ) 2A * 2B + 2C .1: The ASCII translation table shows how standard keyboard characters are represented by integers.3 Structure of Communication Systems 7 Fundamental model of communication s(t) Source x(t) Transmitter message r(t) Channel modulated message Receiver corrupted modulated message Figure 1. 2F / 30 0 31 1 32 2 33 3 34 4 35 5 36 6 37 7 38 8 39 9 3A : 3B . this table displays rst the so-called 7-bit code (how many characters in a seven-bit code?). 3C < 3D = 3E > 3F ? 40 @ 41 A 42 B 43 C 44 D 45 E 46 F 47 G 48 H 49 I 4A J 4B K 4C L 4D M 4E N 4F 0 50 P 51 Q 52 R 53 S 54 T 55 U 56 V 57 W 58 X 59 Y 5A Z 5B [ 5C \ 5D ] 5E ^ 5F _ 60 ' 61 a 62 b 63 c 64 d 65 e 66 f 67 g 68 h 69 i 6A j 6B k 6C l 6D m 6E n 6F o 70 p 71 q 72 r 73 s 74 t 75 u 76 v 77 w 78 x 79 y 7A z 7B { 7C | 7D } 7E ∼ 7F del Table 1. Mnemonic characters correspond to control characters. 1. The numeric codes are represented in hexadecimal (base-16) notation. In pairs of columns. some of which may be familiar (like cr for carriage return) and some not (bel means a "bell").3: The Fundamental Model of Communication.17/>. 7 s(t) This content is available online at <http://cnx.org/content/m0002/2. then the character the number represents. Sink demodulated message . 2D - 2E . else the communication system cannot be considered reliable. The fundamental model of communications is portrayed in Figure 1. However. we represent a system as a box. the receiver must do its best to produce a received message sˆ (t) that resembles s (t) as much as possible.3 (Fundamental model of communication).4: System A system operates on its input signal x (t) y(t) to produce an output y (t).) Transmitted signals next pass through the next stage. the inverse system must exist.6 CHAPTER 1. information sources produce signals. A s (t). INTRODUCTION Denition of a system x(t) Figure 1. Thus. In the communications model.com/minds/infotheory/ . the signal received by the receiver. s (t) passing through a block labeled transmitter that produces the In the case of a radio transmitter. how it is corrupted and manipulated. speech. or several signals to produce more signals or to simply absorb them (Figure 1. the transmitter should not operate in such a way that the message s (t) cannot be recovered from x (t). This block diagram. It is this result that modern communications systems exploit. and how it is ultimately received is summarized by interconnecting block diagrams: The outputs of one or more systems serve as the inputs to others. However. because of the channel. and attenuated among many possibilities. the same block diagram applies although the systems can be very dierent. Examples of time-domain signals produced by a source are music. typed characters are encapsulated in packets. output signals by arrows pointing away. we rst need to understand the big picture to appreciate the context in which the electrical engineer works. graphical representation is known as a We denote input signals by lines having arrows pointing into the box. how information ows.lucent. The block diagram has the message signal x (t). and transmitter design and receiver design focus on how best to jointly fend o the channel's eects on signals. 
1.3 Structure of Communication Systems
(This content is available online at <http://cnx.org/content/m0002/2.17/>.)

[Figure 1.3 (Fundamental model of communication): The Fundamental Model of Communication. The source produces the message s(t); the transmitter produces the modulated message x(t); the channel delivers the corrupted modulated message r(t); the receiver produces the demodulated message ŝ(t); and the sink absorbs it.]

The fundamental model of communications is portrayed in Figure 1.3 (Fundamental model of communication). In this fundamental model, each message-bearing signal, exemplified by s(t), is analog and is a function of time. A system operates on zero, one, or several signals to produce more signals or to simply absorb them (Figure 1.4 (Definition of a system)). In electrical engineering, we represent a system as a box, receiving input signals (usually coming from the left) and producing from them new output signals. This graphical representation is known as a block diagram. We denote input signals by lines having arrows pointing into the box, output signals by arrows pointing away. As typified by the communications model, how information flows, how it is corrupted and manipulated, and how it is ultimately received is summarized by interconnecting block diagrams: the outputs of one or more systems serve as the inputs to others.

[Figure 1.4 (Definition of a system): A system operates on its input signal x(t) to produce an output y(t).]

In the communications model, the source produces a signal that will be absorbed by the sink. Examples of time-domain signals produced by a source are music, speech, and characters typed on a keyboard. Signals can also be functions of two variables (an image is a signal that depends on two spatial variables) or more (television pictures, that is, video signals, are functions of two spatial variables and time). Thus, information sources produce signals. In physical systems, each signal corresponds to an electrical voltage or current. To be able to design systems, we must understand electrical science and technology. However, we first need to understand the big picture to appreciate the context in which the electrical engineer works.

In communication systems, messages (signals produced by sources) must be recast for transmission. The block diagram has the message s(t) passing through a block labeled transmitter that produces the signal x(t). In the case of a radio transmitter, it accepts an input audio signal and produces a signal that physically is an electromagnetic wave radiated by an antenna and propagating as Maxwell's equations predict. In the case of a computer network, typed characters are encapsulated in packets, attached with a destination address, and launched into the Internet. From the communication systems big picture perspective, the same block diagram applies although the systems can be very different. In any case, the transmitter should not operate in such a way that the message s(t) cannot be recovered from x(t). In the mathematical sense, the inverse system must exist, else the communication system cannot be considered reliable. (It is ridiculous to transmit a signal in such a way that no one can recover the original. However, clever systems exist that transmit signals so that only the "in crowd" can recover them. Such cryptographic systems underlie secret communications.)

Transmitted signals next pass through the next stage, the evil channel. Nothing good happens to a signal in a channel: it can become corrupted by noise, distorted, and attenuated among many possibilities. The channel cannot be escaped (the real world is cruel), and transmitter design and receiver design focus on how best to jointly fend off the channel's effects on signals. The channel is another system in our block diagram, and produces r(t), the signal received by the receiver. If the channel were benign (good luck finding such a channel in the real world), the receiver would serve as the inverse system to the transmitter, and yield the message with no distortion. However, because of the channel, the receiver must do its best to produce a received message ŝ(t) that resembles s(t) as much as possible. Shannon (http://www.lucent.com/minds/infotheory/) showed in his 1948 paper that reliable (for the moment, take this word to mean error-free) digital communication was possible over arbitrarily noisy channels. It is this result that modern communications systems exploit, and why many communications systems are going digital. The module on Information Communication (Section 6.1) details Shannon's theory of information, and there we learn of Shannon's result and how to use it.

Finally, the received message is passed to the information sink that somehow makes use of the message. In the communications model, the source is a system having no input but producing an output; a sink has an input and no output.

Understanding signal generation and how systems work amounts to understanding signals, the nature of the information they represent, and how information is transformed between analog and digital forms. This understanding demands two different fields of knowledge. One is electrical science: how are signals represented and manipulated electrically? The second is signal science: what is the structure of signals, no matter what their source, what is their information content, and what capabilities does this structure force upon communication systems?
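Because each block in Figure 1.3 is a system, the whole model can be mimicked as a composition of functions. The sketch below is an invented toy example in Python/NumPy; the scaling transmitter, the additive-noise channel, and the noise level are assumptions made for illustration, not definitions taken from the text.

    import numpy as np

    # Toy version of the fundamental model: message -> transmitter ->
    # channel -> receiver. All block definitions here are made up.
    def transmitter(s):
        return 2.0 * s                          # simple "modulation": scaling

    def channel(x):
        rng = np.random.default_rng(0)
        return x + 0.1 * rng.standard_normal(x.shape)  # additive noise

    def receiver(r):
        return r / 2.0                          # inverse of the transmitter

    t = np.linspace(0, 1, 500)
    s = np.cos(2 * np.pi * 5 * t)               # the message s(t), sampled
    s_hat = receiver(channel(transmitter(s)))   # received message estimate
    print(np.max(np.abs(s - s_hat)))            # nonzero: the channel corrupts

Note how the receiver here acts as the transmitter's inverse system; the residual error is entirely the channel's doing, echoing the discussion above.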
1.4 The Fundamental Signal
(This content is available online at <http://cnx.org/content/m0003/2.15/>.)

1.4.1 The Sinusoid
The most ubiquitous and important signal in electrical engineering is the sinusoid.

Sine Definition:
s(t) = A cos(2πft + φ) or A cos(ωt + φ)   (1.1)

A is known as the sinusoid's amplitude, and determines the sinusoid's size. The amplitude conveys the sinusoid's physical units (volts, lumens, etc). The frequency f has units of Hz (Hertz) or s⁻¹, and determines how rapidly the sinusoid oscillates per unit time. The temporal variable t always has units of seconds, and thus the frequency determines how many oscillations/second the sinusoid has. AM radio stations have carrier frequencies of about 1 MHz (one mega-hertz, or 10⁶ Hz), while FM stations have carrier frequencies of about 100 MHz. Frequency can also be expressed by the symbol ω, which has units of radians/second; clearly, ω = 2πf. In communications, we most often express frequency in Hertz. Finally, φ is the phase, and determines the sine wave's behavior at the origin (t = 0). It has units of radians, but we can express it in degrees, realizing that in computations we must convert from degrees to radians. Note that if φ = −π/2, the sinusoid corresponds to a sine function, having a zero value at the origin.

A sin(2πft + φ) = A cos(2πft + φ − π/2)   (1.2)

Thus, the only difference between a sine and cosine signal is the phase; we term either a sinusoid.

We can also define a discrete-time variant of the sinusoid: A cos(2πfn + φ). Here, the independent variable is n and represents the integers. Frequency now has no dimensions, and takes on values between 0 and 1.

Exercise 1.4.1 (Solution at the end of the chapter.)
Show that cos(2πfn) = cos(2π(f + 1)n), which means that a sinusoid having a frequency larger than one corresponds to a sinusoid having a frequency less than one.

note: Notice that we shall call either sinusoid an analog signal. Only when the discrete-time signal takes on a finite set of values can it be considered a digital signal.

Exercise 1.4.2 (Solution at the end of the chapter.)
Can you think of a simple signal that has a finite number of values but is defined in continuous time? Such a signal is also an analog signal.
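Exercise 1.4.1 can be checked numerically as well as algebraically. A minimal Python/NumPy verification (added here as an illustration; the frequency value is arbitrary) follows.

    import numpy as np

    # Numerical check of Exercise 1.4.1: frequencies f and f + 1 give the
    # same discrete-time sinusoid at every integer n.
    n = np.arange(10)
    f = 0.2
    print(np.allclose(np.cos(2 * np.pi * f * n),
                      np.cos(2 * np.pi * (f + 1) * n)))   # True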
1.4.2 Communicating Information with Signals
The basic idea of communication engineering is to use a signal's parameters to represent either real numbers or other signals. The technical term is to modulate the carrier signal's parameters to transmit information from one place to another.

To explore the notion of modulation, we can send a real number (today's temperature, for example) by changing a sinusoid's amplitude accordingly. If we wanted to send the daily temperature, we would keep the frequency constant (so the receiver would know what to expect) and change the amplitude at midnight. We could relate temperature to amplitude by the formula A = A₀(1 + kT), where A₀ and k are constants that the transmitter and receiver must both know.

If we had two numbers we wanted to send at the same time, we could modulate the sinusoid's frequency as well as its amplitude. This modulation scheme assumes we can estimate the sinusoid's amplitude and frequency; we shall learn that this is indeed possible.

Now suppose we have a sequence of parameters to send. We have exploited all of the sinusoid's two parameters. What we can do is modulate them for a limited time (say T seconds), and send two parameters every T seconds. This simple notion corresponds to how a modem works. Here, typed characters are encoded into eight bits, and the individual bits are encoded into a sinusoid's amplitude and frequency. We'll learn how this is done in subsequent modules, and, more importantly, we'll learn what the limits are on such digital communication schemes.

1.5 Introduction Problems
(This content is available online at <http://cnx.org/content/m10353/2.18/>.)

Problem 1.1: RMS Values
The rms (root-mean-square) value of a periodic signal is defined to be

s = √( (1/T) ∫₀ᵀ s²(t) dt )

where T is defined to be the signal's period: the smallest positive number such that s(t) = s(t + T).
a) What is the period of s(t) = A sin(2πf₀t + φ)?
b) What is the rms value of this signal? How is it related to the peak value?
c) What is the period and rms value of the depicted (Figure 1.5) square wave, generically denoted by sq(t)?
d) By inspecting any device you plug into a wall socket, you'll see that it is labeled "110 volts AC". What is the expression for the voltage provided by a wall socket? What is its rms value?

[Figure 1.5: The square wave sq(t), alternating between +A and −A; the axis markings show t = −2 and t = 2.]
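The rms definition in Problem 1.1 can be approximated numerically by sampling one period densely. The sketch below is an added illustration with an arbitrary peak amplitude and frequency; it suggests, without proving, the standard relation between a sinusoid's rms and peak values.

    import numpy as np

    # Approximate s_rms = sqrt((1/T) * integral over one period of s^2(t))
    # for a sinusoid; A and f0 are arbitrary test values.
    A, f0 = 110.0, 60.0
    T = 1 / f0
    t = np.linspace(0, T, 100000, endpoint=False)
    s = A * np.sin(2 * np.pi * f0 * t)
    rms = np.sqrt(np.mean(s ** 2))    # sample mean approximates the integral
    print(rms, A / np.sqrt(2))        # the two agree (about 77.8 here)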
Problem 1.2: Modems
The word "modem" is short for "modulator-demodulator." Modems are used not only for connecting computers to telephone lines, but also for connecting digital (discrete-valued) sources to generic channels. In this problem, we explore a simple kind of modem, in which binary information is represented by the presence or absence of a sinusoid (presence representing a "1" and absence a "0"). Consequently, the modem's transmitted signal that represents a single bit has the form

x(t) = A sin(2πf₀t), 0 ≤ t ≤ T

Within each bit interval T, the amplitude is either A or zero.
a) What is the smallest transmission interval that makes sense with the frequency f₀?
b) Assuming that ten cycles of the sinusoid comprise a single bit's transmission interval, what is the datarate of this transmission scheme?
c) Now suppose instead of using "on-off" signaling, we allow one of several amplitude values during any transmission interval. If N different values for the amplitude are used, what is the resulting datarate?
d) The classic communications block diagram applies to the modem. Discuss how the transmitter must interface with the message source, since the source is producing letters of the alphabet, not bits.

Problem 1.3: Advanced Modems
To transmit symbols, such as letters of the alphabet, RU computer modems use two frequencies (1600 and 1800 Hz) and several amplitude levels. A transmission is sent for a period of time T (known as the transmission or baud interval) and equals the sum of two amplitude-weighted carriers.

x(t) = A₁ sin(2πf₁t) + A₂ sin(2πf₂t), 0 ≤ t ≤ T

We send successive symbols by choosing an appropriate frequency and amplitude combination, and sending them one after another.
a) What is the smallest transmission interval that makes sense to use with the frequencies given above? In other words, what should T be so that an integer number of cycles of the carrier occurs?
b) Sketch (using Matlab) the signal that the modem produces over several transmission intervals. Make sure your axes are labeled.
c) Using your signal transmission interval, how many amplitude levels are needed to transmit ASCII characters at a datarate of 3,200 bits/s? Assume use of the extended (8-bit) ASCII code.

note: We use a discrete set of values for A₁ and A₂. If we have N₁ values for A₁ and N₂ values for A₂, we have N₁N₂ possible symbols that can be sent during each T second interval. To convert this number into bits (the fundamental unit of information engineers use to qualify things), compute log₂(N₁N₂).
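Problem 1.3 asks for a Matlab sketch; as an added illustration, an equivalent Python/NumPy construction of Problem 1.2's simpler on-off waveform is given below. The carrier frequency, sampling rate, and bit pattern are made-up values, not quantities specified by the problems.

    import numpy as np

    # On-off signaling (Problem 1.2): each bit interval holds ten carrier
    # cycles; the sinusoid is present for a "1" and absent for a "0".
    f0 = 1000.0                        # assumed carrier frequency (Hz)
    T = 10 / f0                        # bit interval: ten cycles
    fs = 100 * f0                      # sampling rate for plotting
    t_bit = np.arange(0, T, 1 / fs)
    bits = [1, 0, 1, 1, 0]             # arbitrary message
    x = np.concatenate([b * np.sin(2 * np.pi * f0 * t_bit) for b in bits])
    print(x.size, 1 / T)               # samples generated; datarate 1/T bits/s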
Solutions to Exercises in Chapter 1

Solution to Exercise 1.4.1: As cos(α + β) = cos(α)cos(β) − sin(α)sin(β), we have cos(2π(f + 1)n) = cos(2πfn)cos(2πn) − sin(2πfn)sin(2πn) = cos(2πfn).

Solution to Exercise 1.4.2: A square wave takes on the values 1 and −1 alternately. See the plot in the module Elemental Signals (Section 2.2.6: Square Wave).

Chapter 2 Signals and Systems

2.1 Complex Numbers
(This content is available online at <http://cnx.org/content/m0081/2.27/>.)

While the fundamental signal used in electrical engineering is the sinusoid, it can be expressed mathematically in terms of an even more fundamental signal: the complex exponential. Representing sinusoids in terms of complex exponentials is not a mathematical oddity. Fluency with complex numbers and rational functions of complex variables is a critical skill all engineers master. Understanding information and power system designs and developing new systems all hinge on using complex numbers. In short, they are critical to modern electrical engineering, a realization made over a century ago.

2.1.1 Definitions
The notion of the square root of −1 originated with the quadratic formula: the solution of certain quadratic equations mathematically exists only if the so-called imaginary quantity √−1 could be defined. Euler (http://www-groups.dcs.st-and.ac.uk/~history/Mathematicians/Euler.html) first used i for the imaginary unit, but that notation did not take hold until roughly Ampère's time. Ampère (http://www-groups.dcs.st-and.ac.uk/~history/Mathematicians/Ampere.html) used the symbol i to denote current (intensité de current). It wasn't until the twentieth century that the importance of complex numbers to circuit theory became evident. By then, using i for current was entrenched, and electrical engineers chose j for writing complex numbers.

An imaginary number has the form jb = √(−b²). A complex number, z, consists of the ordered pair (a, b), where a is the real component and b is the imaginary component (the j is suppressed because the imaginary component of the pair is always in the second position). The imaginary number jb equals (0, b). Note that a and b are real-valued numbers.

Figure 2.1 (The Complex Plane) shows that we can locate a complex number in what we call the complex plane. Here, a, the real part, is the x-coordinate, and b, the imaginary part, is the y-coordinate.

[Figure 2.1 (The Complex Plane): A complex number is an ordered pair (a, b) that can be regarded as coordinates in the plane.]

From analytic geometry, we know that locations in the plane can be expressed as the sum of vectors, with the vectors corresponding to the x and y directions. Consequently, a complex number z can be expressed as the (vector) sum z = a + jb, where j indicates the y-coordinate. This representation is known as the Cartesian form of z. An imaginary number can't be numerically added to a real number; rather, this notation for a complex number represents vector addition, but it provides a convenient notation when we perform arithmetic manipulations.

Some obvious terminology. The real part of the complex number z = a + jb, written as Re(z), equals a. We consider the real part as a function that works by selecting that component of a complex number not multiplied by j. The imaginary part of z, Im(z), equals b: that part of a complex number that is multiplied by j. Again, both the real and imaginary parts of a complex number are real-valued.

The complex conjugate of z, written as z*, has the same real part as z but an imaginary part of the opposite sign.

z = Re(z) + jIm(z)
z* = Re(z) − jIm(z)   (2.1)

Using Cartesian notation, the following properties easily follow.
• If we add two complex numbers, the real part of the result equals the sum of the real parts and the imaginary part equals the sum of the imaginary parts. This property follows from the laws of vector addition: a₁ + jb₁ + a₂ + jb₂ = a₁ + a₂ + j(b₁ + b₂). In this way, the real and imaginary parts remain separate.
• The product of j and a real number is an imaginary number: ja. The product of j and an imaginary number is a real number: j(jb) = −b, because j² = −1. Consequently, multiplying a complex number by j rotates the number's position by 90 degrees.

Exercise 2.1.1 (Solution at the end of the chapter.)
Use the definition of addition to show that the real and imaginary parts can be expressed as a sum/difference of a complex number and its conjugate: Re(z) = (z + z*)/2 and Im(z) = (z − z*)/(2j).

Complex numbers can also be expressed in an alternate form, polar form, which we will find quite useful. Polar form arises from the geometric interpretation of complex numbers. The Cartesian form of a complex number can be re-written as

a + jb = √(a² + b²) · ( a/√(a² + b²) + j · b/√(a² + b²) )

By forming a right triangle having sides a and b, we see that the real and imaginary parts correspond to the cosine and sine of the triangle's base angle. We thus obtain the polar form for complex numbers.

z = a + jb = r∠θ
r = |z| = √(a² + b²)
a = r cos(θ), b = r sin(θ), θ = arctan(b/a)   (2.2)

The quantity r is known as the magnitude of the complex number z, and is frequently written as |z|. The quantity θ is the complex number's angle. In using the arc-tangent formula to find the angle, we must take into account the quadrant in which the complex number lies.

Exercise 2.1.2 (Solution at the end of the chapter.)
Convert 3 − 2j to polar form.

2.1.2 Euler's Formula
Surprisingly, the polar form of a complex number z can be expressed mathematically as

z = r e^(jθ)   (2.3)

To show this result, we use Euler's relations that express exponentials with imaginary arguments in terms of trigonometric functions.

e^(jθ) = cos(θ) + j sin(θ)
cos(θ) = (e^(jθ) + e^(−jθ))/2
sin(θ) = (e^(jθ) − e^(−jθ))/(2j)   (2.4)

The first of these is easily derived from the Taylor's series for the exponential.

e^x = 1 + x/1! + x²/2! + x³/3! + ···

Substituting jθ for x, we find that

e^(jθ) = 1 + j·θ/1! − θ²/2! − j·θ³/3! + ···

because j² = −1, j³ = −j, and j⁴ = 1. Grouping separately the real-valued terms and the imaginary-valued ones,

e^(jθ) = (1 − θ²/2! + ···) + j(θ/1! − θ³/3! + ···)

The real-valued terms correspond to the Taylor's series for cos(θ), the imaginary ones to sin(θ), and Euler's first relation results. The remaining relations are easily derived from the first. Because of Euler's relation, we see that multiplying the exponential in (2.3) by a real constant corresponds to setting the radius of the complex number by the constant.
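Most programming languages provide this complex arithmetic directly; in Python the imaginary unit is written 1j. The following added sketch checks the Cartesian-to-polar conversion of Exercise 2.1.2 and the 90-degree rotation property of j.

    import cmath

    z = 3 - 2j                         # the number in Exercise 2.1.2
    r, theta = abs(z), cmath.phase(z)  # magnitude and angle in radians
    print(r, theta)                    # sqrt(13) ~ 3.606 and ~ -0.588 rad
    print(r * cmath.exp(1j * theta))   # back to Cartesian form: (3-2j)
    print(1j * z)                      # multiplying by j rotates 90 degrees: (2+3j)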
2.1.3 Calculating with Complex Numbers
Adding and subtracting complex numbers expressed in Cartesian form is quite easy: you add (subtract) the real parts and imaginary parts separately.

z₁ ± z₂ = (a₁ ± a₂) + j(b₁ ± b₂)   (2.5)

To multiply two complex numbers in Cartesian form is not quite as easy, but follows directly from following the usual rules of arithmetic.

z₁z₂ = (a₁ + jb₁)(a₂ + jb₂) = a₁a₂ − b₁b₂ + j(a₁b₂ + a₂b₁)   (2.6)

Note that we are, in a sense, multiplying two vectors to obtain another vector. Complex arithmetic provides a unique way of defining vector multiplication.

Exercise 2.1.3 (Solution at the end of the chapter.)
What is the product of a complex number and its conjugate?

Division requires mathematical manipulation. We convert the division problem into a multiplication problem by multiplying both the numerator and denominator by the conjugate of the denominator.

z₁/z₂ = (a₁ + jb₁)/(a₂ + jb₂)
      = ((a₁ + jb₁)(a₂ − jb₂)) / ((a₂ + jb₂)(a₂ − jb₂))
      = (a₁a₂ + b₁b₂ + j(a₂b₁ − a₁b₂)) / (a₂² + b₂²)   (2.7)

Because the final result is so complicated, it's best to remember how to perform division (multiplying numerator and denominator by the complex conjugate of the denominator, as in (2.7)) than trying to remember the final result.

The properties of the exponential make calculating the product and ratio of two complex numbers much simpler when the numbers are expressed in polar form.

z₁z₂ = r₁e^(jθ₁) · r₂e^(jθ₂) = r₁r₂ e^(j(θ₁+θ₂))   (2.8)
z₁/z₂ = (r₁e^(jθ₁)) / (r₂e^(jθ₂)) = (r₁/r₂) e^(j(θ₁−θ₂))   (2.9)

To multiply, the radius equals the product of the radii and the angle the sum of the angles. To divide, the radius equals the ratio of the radii and the angle the difference of the angles. When the original complex numbers are in Cartesian form, it's usually worth translating into polar form, then performing the multiplication or division (especially in the case of the latter). Addition and subtraction of polar forms amounts to converting to Cartesian form, performing the arithmetic operation, and converting back to polar form.

Example 2.1
When we solve circuit problems, the crucial quantity, known as a transfer function, will always be expressed as the ratio of polynomials in the variable s = j2πf. What we'll need to understand the circuit's effect is the transfer function in polar form. For instance, suppose the transfer function equals

(s + 2)/(s² + s + 1), with s = j2πf   (2.10)

Performing the required division is most easily accomplished by first expressing the numerator and denominator each in polar form, then calculating the ratio.

(s + 2)/(s² + s + 1) = (j2πf + 2)/(1 − 4π²f² + j2πf)   (2.11)

= ( √(4 + 4π²f²) e^(j arctan(πf)) ) / ( √((1 − 4π²f²)² + 4π²f²) e^(j arctan(2πf/(1 − 4π²f²))) )   (2.12)

= √( (4 + 4π²f²)/(1 − 4π²f² + 16π⁴f⁴) ) · e^(j(arctan(πf) − arctan(2πf/(1 − 4π²f²))))   (2.13)
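Example 2.1's polar-form answer can be spot-checked by evaluating the transfer function numerically. In the added sketch below (with an arbitrary test frequency), the denominator's angle is computed with the two-argument arctangent, since the plain arctan formula requires the quadrant bookkeeping noted earlier.

    import numpy as np

    f = 1.5                            # arbitrary test frequency
    s = 1j * 2 * np.pi * f
    H = (s + 2) / (s ** 2 + s + 1)     # the transfer function of Example 2.1

    mag = np.sqrt((4 + 4 * np.pi**2 * f**2) /
                  (1 - 4 * np.pi**2 * f**2 + 16 * np.pi**4 * f**4))
    ang = np.arctan(np.pi * f) - np.arctan2(2 * np.pi * f,
                                            1 - 4 * np.pi**2 * f**2)
    print(abs(H), mag)                 # magnitudes agree
    print(np.angle(H), ang)            # angles agree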
2.2 Elemental Signals
(This content is available online at <http://cnx.org/content/m0004/2.29/>.)

Elemental signals are the building blocks with which we build complicated signals. By definition, elemental signals have a simple structure. Exactly what we mean by the "structure of a signal" will unfold in this section of the course. Signals are nothing more than functions defined with respect to some independent variable, which we take to be time for the most part. Very interesting signals are not functions solely of time; one great example is an image. For it, the independent variables are x and y (two-dimensional space). Video signals are functions of three variables: two spatial dimensions and time. Fortunately, most of the ideas underlying modern signal theory can be exemplified with one-dimensional signals.

2.2.1 Sinusoids
Perhaps the most common real-valued signal is the sinusoid.

s(t) = A cos(2πf₀t + φ)   (2.14)

For this signal, A is its amplitude, f₀ its frequency, and φ its phase.

2.2.2 Complex Exponentials
The most important signal is complex-valued, the complex exponential.

s(t) = A e^(j(2πf₀t + φ)) = A e^(jφ) e^(j2πf₀t)   (2.15)

Here, j denotes √−1. A e^(jφ) is known as the signal's complex amplitude. Considering the complex amplitude as a complex number in polar form, its magnitude is the amplitude A and its angle the signal phase. The complex amplitude is also known as a phasor. The complex exponential cannot be further decomposed into more elemental signals, and is the most important signal in electrical engineering! Mathematical manipulations at first appear to be more difficult because complex-valued numbers are introduced. In fact, early in the twentieth century, mathematicians thought engineers would not be sufficiently sophisticated to handle complex exponentials even though they greatly simplified solving circuit problems. Steinmetz (http://www.invent.org/hall_of_fame/139.html) introduced complex exponentials to electrical engineering, and demonstrated that "mere" engineers could use them to good effect and even obtain right answers! See Complex Numbers (Section 2.1) for a review of complex numbers and complex arithmetic.

The complex exponential defines the notion of frequency: it is the only signal that contains only one frequency component. The sinusoid consists of two frequency components: one at the frequency f₀ and the other at −f₀.

Euler relation:
cos(2πft) = (e^(j2πft) + e^(−j2πft))/2   (2.16)
sin(2πft) = (e^(j2πft) − e^(−j2πft))/(2j)   (2.17)
e^(j2πft) = cos(2πft) + j sin(2πft)   (2.18)

The complex exponential signal can thus be written in terms of its real and imaginary parts using Euler's relation. Thus, sinusoidal signals can be expressed as either the real or the imaginary part of a complex exponential signal (the choice depending on whether cosine or sine phase is needed), or as the sum of two complex exponentials. These two decompositions are mathematically equivalent to each other.

Decomposition:
A cos(2πft + φ) = Re(A e^(jφ) e^(j2πft))   (2.19)
A sin(2πft + φ) = Im(A e^(jφ) e^(j2πft))   (2.20)

Using the complex plane, we can envision the complex exponential's temporal variations (Figure 2.2). The magnitude of the complex exponential is A, and the initial value of the complex exponential at t = 0 has an angle of φ. As time increases, the locus of points traced by the complex exponential is a circle (it has constant magnitude of A). The number of times per second we go around the circle equals the frequency f. The time taken for the complex exponential to go around the circle once is known as its period T, and equals 1/f. The projections onto the real and imaginary axes of the rotating vector representing the complex exponential signal are the cosine and sine signals of Euler's relation ((2.16)).

[Figure 2.2: Graphically, the complex exponential scribes a circle in the complex plane as time evolves. Its real and imaginary parts are sinusoids. The rate at which the signal goes around the circle is the frequency f, and the time taken to go around is the period T. A fundamental relationship is T = 1/f.]
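The decomposition equations (2.19) and (2.20) are easy to confirm numerically. The added Python/NumPy sketch below builds the complex exponential from its phasor and compares real and imaginary parts against the corresponding sinusoids; the parameter values are arbitrary.

    import numpy as np

    A, f, phi = 2.0, 3.0, np.pi / 4    # arbitrary amplitude, frequency, phase
    t = np.linspace(0, 1, 1000)
    phasor = A * np.exp(1j * phi)      # the complex amplitude A e^(j phi)
    x = phasor * np.exp(1j * 2 * np.pi * f * t)
    print(np.allclose(x.real, A * np.cos(2 * np.pi * f * t + phi)))  # True
    print(np.allclose(x.imag, A * np.sin(2 * np.pi * f * t + phi)))  # True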
2.2.3 Real Exponentials
As opposed to complex exponentials which oscillate, real exponentials decay.

s(t) = e^(−t/τ)   (2.21)

[Figure 2.3 (Exponential): The real exponential; the curve falls to e⁻¹ at t = τ.]

The quantity τ is known as the exponential's time constant, and corresponds to the time required for the exponential to decrease by a factor of 1/e, which approximately equals 0.368. A decaying complex exponential is the product of a real and a complex exponential.

s(t) = A e^(jφ) e^(−t/τ) e^(j2πft) = A e^(jφ) e^((−1/τ + j2πf)t)   (2.22)

In the complex plane, this signal corresponds to an exponential spiral. For such signals, we can define complex frequency as the quantity multiplying t.

2.2.4 Unit Step
The unit step function (Figure 2.4) is denoted by u(t), and is defined to be

u(t) = 0 if t < 0; 1 if t > 0   (2.23)

[Figure 2.4: The unit step.]

Origin warning: This signal is discontinuous at the origin. Its value at the origin need not be defined, and doesn't matter in signal theory. This kind of signal is used to describe signals that "turn on" suddenly. For example, to mathematically represent turning on an oscillator, we can write it as the product of a sinusoid and a step: s(t) = A sin(2πft) u(t).

2.2.5 Pulse
The unit pulse (Figure 2.5) describes turning a unit-amplitude signal on for a duration of Δ seconds, then turning it off.

pΔ(t) = 0 if t < 0; 1 if 0 < t < Δ; 0 if t > Δ   (2.24)

[Figure 2.5: The pulse pΔ(t).]

We will find that this is the second most important signal in communications.

2.2.6 Square Wave
The square wave (Figure 2.6) sq(t) is a periodic signal like the sinusoid. It too has an amplitude and a period, which must be specified to characterize the signal. We find subsequently that the sine wave is a simpler signal than the square wave.

[Figure 2.6 (Square Wave): The square wave, with amplitude A and period T.]
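Each elemental signal above is a one-line function. The sketch below (added for illustration) codes the step, pulse, and real exponential in Python/NumPy; the value chosen at the step's discontinuity is arbitrary, exactly as the origin warning says.

    import numpy as np

    def unit_step(t):
        return np.where(t > 0, 1.0, 0.0)            # value at t = 0 is arbitrary

    def pulse(t, delta):
        return unit_step(t) - unit_step(t - delta)  # on for 0 < t < delta

    def real_exponential(t, tau):
        return np.exp(-t / tau)                     # falls to 1/e at t = tau

    t = np.linspace(-1.0, 2.0, 7)
    print(unit_step(t))
    print(pulse(t, 1.0))
    print(real_exponential(np.array([0.0, 0.5, 1.0]), tau=1.0))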
2.4 Discrete-Time Signals (This content is available online at <http://cnx.org/content/m0009/2.12/>.)

So far, we have treated what are known as analog signals and systems. Mathematically, analog signals are functions having continuous quantities as their independent variables, such as space and time. Discrete-time signals (Section 5.5) are functions defined on the integers; they are sequences. One of the fundamental results of signal theory (Section 5.3) will detail conditions under which an analog signal can be converted into a discrete-time one and retrieved without error. This result is important because discrete-time signals can be manipulated by systems instantiated as computer programs. Subsequent modules describe how virtually all analog signal processing can be performed with software.

As important as such results are, discrete-time signals are more general, encompassing signals derived from analog ones and signals that aren't. For example, the characters forming a text file form a sequence, which is also a discrete-time signal. We must deal with such symbolic-valued (p. 180) signals and systems as well.

As with analog signals, we seek ways of decomposing real-valued discrete-time signals into simpler components, and we will discover that virtually every signal can be decomposed into a sum of complex exponentials, and that this decomposition is very useful. With this approach leading to a better understanding of signal structure, we can exploit that structure to represent information (create ways of representing information with signals) and to extract information (retrieve the information thus represented). For symbolic-valued signals, the approach is different: We develop a common representation of all symbolic-valued signals so that we can embody the information they contain in a unified way. From an information representation perspective, the most important issue becomes, for both real-valued and symbolic-valued signals, what is the most parsimonious and compact way to represent information so that it can be extracted later.

2.4.1 Real- and Complex-valued Signals

A discrete-time signal is represented symbolically as s(n), where n = {..., −1, 0, 1, ...}. We usually draw discrete-time signals as stem plots to emphasize the fact they are functions defined only on the integers. We can delay a discrete-time signal by an integer just as with analog ones.

Figure 2.7: The discrete-time cosine signal is plotted as a stem plot. Can you find the formula for this signal?

2.4.2 Complex Exponentials

The most important signal is, of course, the complex exponential sequence:

s(n) = e^{j2πfn}   (2.26)

2.4.3 Sinusoids

Discrete-time sinusoids have the obvious form s(n) = A cos(2πfn + φ). As opposed to analog complex exponentials and sinusoids, which can have their frequencies be any real value, frequencies of their discrete-time counterparts yield unique waveforms only when f lies in the interval (−1/2, 1/2]. This property can be easily understood by noting that adding an integer to the frequency of the discrete-time complex exponential has no effect on the signal's value:

e^{j2π(f+m)n} = e^{j2πfn} e^{j2πmn} = e^{j2πfn}   (2.27)

This derivation follows because the complex exponential evaluated at an integer multiple of 2π equals one.

2.4.4 Unit Sample

The second-most important discrete-time signal is the unit sample, which is defined to be

δ(n) = 1 if n = 0;  0 otherwise   (2.28)

A delayed unit sample, one occurring at n = m, has the expression δ(n − m) and equals one when n = m. Examination of a discrete-time signal's plot, like that of the cosine signal shown in Figure 2.7, reveals that all signals consist of a sequence of delayed and scaled unit samples.
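The frequency ambiguity expressed by (2.27) is easy to see numerically: the discrete-time exponentials at f and f + 1 are the same sequence, sample for sample. A small sketch, added here, assuming Python with NumPy:

    import numpy as np

    n = np.arange(-10, 11)          # integer "time"
    f = 0.2                         # a frequency inside (-1/2, 1/2]

    s1 = np.exp(1j * 2 * np.pi * f * n)
    s2 = np.exp(1j * 2 * np.pi * (f + 1) * n)   # frequency raised by an integer

    print(np.allclose(s1, s2))      # True: identical sequences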
Figure 2.8: The unit sample.

Because the value of a sequence at each integer m is denoted by s(m) and the unit sample delayed to occur at m is written δ(n − m), we can decompose any signal as a sum of unit samples delayed to the appropriate location and scaled by the signal value:

s(n) = Σ_{m=−∞}^{∞} s(m) δ(n − m)   (2.29)

This kind of decomposition is unique to discrete-time signals, and will prove useful subsequently.

Discrete-time systems can act on discrete-time signals in ways similar to those found in analog signals and systems. Because of the role of software in discrete-time systems, many more different systems can be envisioned and constructed with programs than can be with analog signals. In fact, a special class of analog signals can be converted into discrete-time signals, processed with software, and converted back into an analog signal, all without the incursion of error. For such signals, systems can be easily produced in software, with equivalent analog realizations difficult, if not impossible, to design. Furthermore, the transmission and reception of discrete-time signals, like e-mail, is accomplished with analog signals and systems. Understanding how discrete-time and analog signals and systems intertwine is perhaps the main goal of this course.

2.4.5 Symbolic-valued Signals

Another interesting aspect of discrete-time signals is that their values do not need to be real numbers. We do have real-valued discrete-time signals like the sinusoid, but we also have signals that denote the sequence of characters typed on the keyboard. Such characters certainly aren't real numbers, and as a collection of possible signal values, they have little mathematical structure other than that they are members of a set. More formally, each element of the symbolic-valued signal s(n) takes on one of the values {a1, ..., aK}, which comprise the alphabet A. This technical terminology does not mean we restrict symbols to being members of the English or Greek alphabet. They could represent keyboard characters, bytes (8-bit quantities), or integers that convey daily temperature. Whether controlled by software or not, discrete-time systems are ultimately constructed from digital circuits, which consist entirely of analog circuit elements.

[Media Object] (This media object is a LabVIEW VI. Please view or download it at <SignalApprox.llb>.)
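In the same spirit as the LabVIEW demonstration above, the decomposition (2.29) can be checked with a few lines of Python (an addition to the text, assuming NumPy); for a finite-length signal the doubly infinite sum reduces to a finite one.

    import numpy as np

    def delta(n):                  # unit sample
        return np.where(n == 0, 1.0, 0.0)

    n = np.arange(0, 8)
    s = np.cos(2 * np.pi * 0.1 * n)            # any discrete-time signal will do

    # (2.29): s(n) = sum over m of s(m) * delta(n - m)
    rebuilt = sum(s[m] * delta(n - m) for m in range(len(s)))
    print(np.allclose(rebuilt, s))             # True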
2.5 Introduction to Systems (This content is available online at <http://cnx.org/content/m0005/2.19/>.)

Signals are manipulated by systems. Mathematically, we represent what a system does by the notation y(t) = S(x(t)), with x representing the input signal and y the output signal.

Figure 2.9: Definition of a system. The system depicted has input x(t) and output y(t). Mathematically, systems operate on function(s) to produce other function(s). In many ways, systems are like functions, rules that yield a value for the dependent variable (our output signal) for each value of its independent variable (its input signal). The notation y(t) = S(x(t)) corresponds to this block diagram. We term S(·) the input-output relation for the system.

This notation mimics the mathematical symbology of a function: A system's input is analogous to an independent variable and its output the dependent variable. For the mathematically inclined, a system is a functional: a function of a function (signals are functions).

Simple systems can be connected together (one system's output becomes another's input) to accomplish some overall design. Interconnection topologies can be quite complicated, but usually consist of weaves of three basic interconnection forms.

2.5.1 Cascade Interconnection

Figure 2.10: The cascade configuration.

The simplest form is when one system's output is connected only to another's input. Mathematically, w(t) = S1(x(t)) and y(t) = S2(w(t)), with the information contained in x(t) processed by the first, then the second system. This is the cascade configuration. In some cases, the ordering of the systems matters; in others it does not. For example, in the fundamental model of communication (Figure 1.3: Fundamental model of communication) the ordering most certainly matters.

2.5.2 Parallel Interconnection

Figure 2.11: The parallel configuration.

A signal x(t) is routed to two (or more) systems, with this signal appearing as the input to all systems simultaneously and with equal strength. Block diagrams have the convention that signals going to more than one system are not split into pieces along the way. Two or more systems operate on x(t) and their outputs are added together to create the output y(t). Thus, y(t) = S1(x(t)) + S2(x(t)), and the information in x(t) is processed separately by both systems.

2.5.3 Feedback Interconnection

Figure 2.12: The feedback configuration.

The subtlest interconnection configuration has a system's output also contributing to its input. Engineers would say the output is "fed back" to the input through system 2, hence the terminology. The mathematical statement of the feedback interconnection (Figure 2.12) is that the feed-forward system produces the output from an error signal, y(t) = S1(e(t)). The input e(t) equals the input signal minus the output of some other system's output to y(t): e(t) = x(t) − S2(y(t)). Feedback systems are omnipresent in control problems, with the error signal used to adjust the output to achieve some condition defined by the input (controlling) signal. For example, in a car's cruise control system, x(t) is a constant representing what speed you want, and y(t) is the car's speed as measured by a speedometer. In this application, system 2 is the identity system (output equals input).
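Because systems are functionals, the cascade and parallel interconnections translate directly into function composition and pointwise addition. The sketch below, an addition to the text written in plain Python with systems represented as functions acting on sampled signals, is one way to express these diagrams; the feedback form is omitted because it defines y(t) only implicitly and generally requires solving an equation.

    def cascade(S1, S2):
        # y = S2(S1(x)): the first system's output feeds the second's input
        return lambda x: S2(S1(x))

    def parallel(S1, S2):
        # y = S1(x) + S2(x): both systems see the same input; outputs add
        return lambda x: [a + b for a, b in zip(S1(x), S2(x))]

    # Two simple systems acting on a sampled signal (a list of values)
    double = lambda x: [2 * v for v in x]
    negate = lambda x: [-v for v in x]

    x = [1.0, 2.0, 3.0]
    print(cascade(double, negate)(x))    # [-2.0, -4.0, -6.0]
    print(parallel(double, negate)(x))   # [1.0, 2.0, 3.0]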
2.6 Simple Systems (This content is available online at <http://cnx.org/content/m0006/2.24/>.)

Systems manipulate signals, creating output signals derived from their inputs. Why the following are categorized as "simple" will only become evident towards the end of the course.

2.6.1 Sources

Sources produce signals without having input. Examples would be oscillators that produce periodic signals like sinusoids and square waves and noise generators that yield signals with erratic waveforms (more about noise subsequently). Simply writing an expression for the signals they produce specifies sources. A sine wave generator might be specified by y(t) = A sin(2πf0 t) u(t), which says that the source was turned on at t = 0 to produce a sinusoid of amplitude A and frequency f0. We like to think of these as having controllable parameters, like amplitude and frequency.

2.6.2 Amplifiers

An amplifier (Figure 2.13) multiplies its input by a constant known as the amplifier gain:

y(t) = G x(t)   (2.30)

Figure 2.13: An amplifier.

The gain can be positive or negative (if negative, we would say that the amplifier inverts its input) and its magnitude can be greater than one or less than one. If less than one, the amplifier actually attenuates. A real-world example of an amplifier is your home stereo. You control the gain by turning the volume control.

2.6.3 Delay

A system serves as a time delay (Figure 2.14) when the output signal equals the input signal at an earlier time:

y(t) = x(t − τ)   (2.31)

Figure 2.14: A delay.

Here, τ is the delay. The way to understand this system is to focus on the time origin: The output at time t = τ equals the input at time t = 0. Thus, if the delay is positive, the output emerges later than the input, and plotting the output amounts to shifting the input plot to the right. The delay can be negative, in which case we say the system advances its input. Such systems are difficult to build (they would have to produce signal values derived from what the input will be), but we will have occasion to advance signals in time.
2.6.4 Time Reversal

Here, the output signal equals the input signal flipped about the time origin:

y(t) = x(−t)   (2.32)

Figure 2.15: A time reversal system.

Again, such systems are difficult to build, but the notion of time reversal occurs frequently in communications systems.

Exercise 2.6.1 (Solution on p. 37.)
Mentioned earlier was the issue of whether the ordering of systems mattered: if we have two systems in cascade, does the output depend on which comes first? Determine if the ordering matters for the cascade of an amplifier and a delay and for the cascade of a time-reversal system and a delay.

2.6.5 Derivative Systems and Integrators

Systems that perform calculus-like operations on their inputs can produce waveforms significantly different than present in the input. Derivative systems operate in a straightforward way: A first-derivative system would have the input-output relationship y(t) = d/dt x(t). Integral systems have the complication that the integral's limits must be defined. It is a signal theory convention that the elementary integral operation have a lower limit of −∞, and that the value of all signals at t = −∞ equals zero. A simple integrator would have input-output relation

y(t) = ∫_{−∞}^{t} x(α) dα   (2.33)

2.6.6 Linear Systems

Linear systems are a class of systems rather than having a specific input-output relation. They have the property that when the input is expressed as a weighted sum of component signals, the output equals the same weighted sum of the outputs produced by each component. When S(·) is linear,

S(G1 x1(t) + G2 x2(t)) = G1 S(x1(t)) + G2 S(x2(t))   (2.34)

for all choices of signals and gains. This general input-output relation property can be manipulated to indicate specific properties shared by all linear systems.

• S(G x(t)) = G S(x(t)). The colloquialism summarizing this property is "Double the input, you double the output." Note that this property is consistent with alternate ways of expressing gain changes: Since 2x(t) also equals x(t) + x(t), the linear system definition provides the same output no matter which of these is used to express a given signal.

• S(0) = 0. If the input is identically zero for all time, the output of a linear system must be zero. This property follows from the simple derivation S(0) = S(x(t) − x(t)) = S(x(t)) − S(x(t)) = 0.

Just why linear systems are so important is related not only to their properties, which are divulged throughout this course, but also because they lend themselves to relatively simple mathematical analysis. Said another way, "They're the only systems we thoroughly understand!"

We can find the output of any linear system to a complicated input by decomposing the input into simple signals. Equation (2.34) says that when a system is linear, its output to a decomposed input is the sum of outputs to each input. For example, if x(t) = e^{−t} + sin(2πf0 t), the output S(x(t)) of any linear system equals y(t) = S(e^{−t}) + S(sin(2πf0 t)).

2.6.7 Time-Invariant Systems

Systems that don't change their input-output relation with time are said to be time-invariant. The mathematical way of stating this property is to use the signal delay concept described in Simple Systems (Section 2.6.3: Delay):

(y(t) = S(x(t))) ⇒ (y(t − τ) = S(x(t − τ)))   (2.35)

If you delay (or advance) the input, the output is similarly delayed (advanced). Thus, a time-invariant system responds to an input you may supply tomorrow the same way it responds to the same input applied today; today's output is merely delayed to occur tomorrow.

The collection of linear, time-invariant systems are the most thoroughly understood systems. Much of the signal processing and system theory discussed here concentrates on such systems. For example, electric circuits are, for the most part, linear and time-invariant. Nonlinear ones abound, but characterizing them so that you can predict their behavior for any input remains an unsolved problem.

Linear, Time-Invariant Table

Input-Output Relation        Linear    Time-Invariant
y(t) = d/dt x(t)             yes       yes
y(t) = d²/dt² x(t)           yes       yes
y(t) = (d/dt x(t))²          no        yes
y(t) = dx/dt + x             yes       yes
y(t) = x1 + x2               yes       yes
y(t) = x(t − τ)              yes       yes
y(t) = cos(2πft) x(t)        yes       no
y(t) = x(−t)                 yes       no
y(t) = x²(t)                 no        yes
y(t) = |x(t)|                no        yes
y(t) = m x(t) + b            no        yes

Table 2.1
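Properties like those in Table 2.1 can be probed numerically: approximate a system on sampled signals, then test the definitions (2.34) and (2.35) on random inputs. A failed test proves non-linearity or time variance; a passed test merely fails to find a counterexample. A minimal sketch, added here, assuming Python with NumPy; the circular shift stands in for an ideal delay, which is an approximation.

    import numpy as np
    rng = np.random.default_rng(0)

    def delay(x, m):
        return np.roll(x, m)        # circular shift as a stand-in for delay

    def is_linear(S, n=64):
        x1, x2 = rng.normal(size=n), rng.normal(size=n)
        g1, g2 = 2.0, -3.0
        return np.allclose(S(g1 * x1 + g2 * x2), g1 * S(x1) + g2 * S(x2))

    def is_time_invariant(S, n=64, m=5):
        x = rng.normal(size=n)
        return np.allclose(S(delay(x, m)), delay(S(x), m))

    square = lambda x: x**2          # y(t) = x^2(t): nonlinear, time-invariant
    reverse = lambda x: x[::-1]      # y(t) = x(-t): linear, not time-invariant

    print(is_linear(square), is_time_invariant(square))     # False True
    print(is_linear(reverse), is_time_invariant(reverse))   # True False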
2.7 Signals and Systems Problems (This content is available online at <http://cnx.org/content/m10348/2.29/>.)

Problem 2.1: Complex Number Arithmetic
Find the real part, imaginary part, the magnitude and angle of the complex numbers given by the following expressions:
a) −1
b) (1 + √3 j) / 2
c) 1 + j + e^{jπ/2}
d) e^{jπ/3} + e^{jπ} + e^{−jπ/3}

Problem 2.2: Discovering Roots
Complex numbers expose all the roots of real (and complex) numbers. For example, there should be two square-roots, three cube-roots, etc. of any number. Find the following roots.
a) What are the cube-roots of 27? In other words, what is 27^{1/3}?
b) What are the fifth roots of 3 (3^{1/5})?
c) What are the fourth roots of one?

Problem 2.3: Cool Exponentials
Simplify the following (cool) expressions:
a) j^j
b) j^{2j}
c) j^{j^j}

Problem 2.4: Complex-valued Signals
Complex numbers and phasors play a very important role in electrical engineering. Solving systems for complex exponentials is much easier than for sinusoids, and linear systems analysis is particularly easy.
a) Find the phasor representation for each, and re-express each as the real and imaginary parts of a complex exponential. What is the frequency (in Hz) of each? In general, are your answers unique? If so, prove it; if not, find an alternative answer for the complex exponential representation.
i) 3 sin(24t)
ii) √2 cos(2π 60t + π/4)
iii) 2 cos(t + π/6) + 4 sin(t − π/3)
b) Show that for linear systems having real-valued outputs for real inputs, that when the input is the real part of a complex exponential, the output is the real part of the system's output to the complex exponential (see Figure 2.16):

S(Re(A e^{j2πft})) = Re(S(A e^{j2πft}))

Figure 2.16: [Block diagrams showing that taking the real part before the system S and after the system S yields the same output.]

Problem 2.5:
For each of the indicated voltages, write it as the real part of a complex exponential (v(t) = Re(V e^{st})). Explicitly indicate the value of the complex amplitude V and the complex frequency s. Represent each complex amplitude as a vector in the V-plane, and indicate the location of the frequencies in the complex s-plane.
a) v(t) = cos(5t)
b) v(t) = sin(8t + π/4)
c) v(t) = e^{−t}
d) v(t) = e^{−3t} sin(4t + 3π/4)
e) v(t) = 5 e^{2t} sin(8t + 2π)
f) v(t) = −2
g) v(t) = 4 sin(2t) + 3 cos(2t)
h) v(t) = 2 cos(100πt + π/6) − √3 sin(100πt + π/2)

Problem 2.6:
Express each of the following signals (Figure 2.17) as a linear combination of delayed and weighted step functions and ramps (the integral of a step).

Figure 2.17: [The signals s(t) for parts (a) through (e); plots not reproduced here.]

Problem 2.7: Linear, Time-Invariant Systems
When the input to a linear, time-invariant system is the signal x(t), the output is the signal y(t) (Figure 2.18).
a) Find and sketch this system's output when the input is the depicted signal (Figure 2.19).
b) Find and sketch this system's output when the input is a unit step.

Figure 2.18: [The input x(t) and the resulting output y(t).]
Figure 2.19: [The input signal for part (a).]

Problem 2.8: Linear Systems
The depicted input x(t) (Figure 2.20) to a linear, time-invariant system yields the output y(t).
a) What is the system's output to a unit step input u(t)?
b) What will the output be when the input is the depicted square wave (Figure 2.21)?

Figure 2.20: [The input x(t) and the resulting output y(t).]
Figure 2.21: [The square wave input for part (b).]

Problem 2.9: Communication Channel
A particularly interesting communication channel can be modeled as a linear, time-invariant system. When the transmitted signal x(t) is a pulse, the received signal r(t) is as shown (Figure 2.22).
a) What will be the received signal when the transmitter sends the pulse sequence x1(t) (Figure 2.23)?
b) What will be the received signal when the transmitter sends the pulse signal x2(t) (Figure 2.23) that has half the duration as the original?

Figure 2.22: [The transmitted pulse x(t) and the received signal r(t).]
Figure 2.23: [The pulse sequence x1(t) and the half-duration pulse x2(t).]

Problem 2.10: Analog Computers
So-called analog computers use circuits to solve mathematical problems, particularly when they involve differential equations. Suppose we are given the following differential equation to solve:

dy(t)/dt + a y(t) = x(t)

In this equation, a is a constant.
a) When the input is a unit step (x(t) = u(t)), the output is given by y(t) = (1 − e^{−at}) u(t). What is the total energy expended by the input?
b) Instead of a unit step, suppose the input is a unit pulse (unit-amplitude, unit-duration) delivered to the circuit at time t = 10. What is the output voltage in this case? Sketch the waveform.
Solutions to Exercises in Chapter 2

Solution to Exercise 2.1.1 (p. 14)
z + z* = a + jb + a − jb = 2a = 2 Re(z). Similarly, z − z* = a + jb − (a − jb) = 2jb = 2j Im(z).

Solution to Exercise 2.1.2 (p. 15)
To convert 3 − 2j to polar form, we first locate the number in the complex plane, in the fourth quadrant. The distance from the origin to the complex number is the magnitude r, which equals √13 = √(3² + (−2)²). The angle equals −arctan(2/3), or −0.588 radians (−33.7 degrees). The final answer is √13 ∠ (−33.7 degrees).

Solution to Exercise 2.1.3 (p. 16)
zz* = (a + jb)(a − jb) = a² + b². Thus, zz* = r² = (|z|)².

Solution to Exercise 2.3.1 (p. 22)
sq(t) = Σ_{n=−∞}^{∞} (−1)^n A p_{T/2}(t − nT/2)

Solution to Exercise 2.6.1 (p. 28)
In the first case, order does not matter; in the second, it does. "Delay" means t → t − τ; "time-reverse" means t → −t.
Case 1: y(t) = G x(t − τ), and the way we apply the gain and delay the signal gives the same result.
Case 2: Time-reverse then delay: y(t) = x((−t) − τ). Delay then time-reverse: y(t) = x(−(t − τ)) = x(−t + τ).

Chapter 3: Analog Signal Processing

3.1 Voltage, Current, and Generic Circuit Elements (This content is available online at <http://cnx.org/content/m0011/2.14/>.)

We know that information can be represented by signals; now we need to understand how signals are physically realized. Over the years, electric signals have been found to be the easiest to use. Voltage and currents comprise the electric instantiations of signals. Thus, we need to delve into the world of electricity and electromagnetism. The systems used to manipulate electric signals directly are called circuits, and they refine the information representation or extract information from the voltage or current. In many cases, they make nice examples of linear systems.

Voltage is electric potential and represents the "push" that drives electric charge from one place to another. What causes charge to move is a physical separation between positive and negative charge. A battery generates, through electrochemical means, excess positive charge at one terminal and negative charge at the other, creating an electric field. Electric charge can arise from many sources, the simplest being the electron. When a conductor connects the positive and negative potentials, current flows, and circuit theory can be used to understand how current flows in reaction to electric fields.

Electrons comprise current flow in many cases. When we say that "electrons flow through a conductor," what we mean is that the conductor's constituent atoms freely give up electrons from their outer shells. "Flow" thus means that electrons hop from atom to atom, driven along by the applied electric potential. A missing electron, however, is a virtual positive charge. Electrical engineers call these holes, and in some materials, particularly certain semiconductors, current flow is actually due to holes. Because electrons have a negative charge, electrons move in the opposite direction of positive current flow: Negative charge flowing to the right is equivalent to positive charge moving to the left. It is important to understand the physics of current flow in conductors to appreciate the innovation of new electronic devices.

Current flow also occurs in nerve cells found in your brain. Here, neurons "communicate" using propagating voltage pulses that rely on the flow of positive ions (potassium and sodium primarily, and to some degree calcium) across the neuron's outer wall. Thus, current can come from many sources. A generic circuit element places a constraint between the classic variables of a circuit: voltage and current. Voltage is defined across a circuit element, with the positive sign denoting a positive voltage drop across the element. Current flows through circuit elements.
3.2 Ideal Circuit Elements (This content is available online at <http://cnx.org/content/m0012/2.21/>.)

The elementary circuit elements, the resistor, capacitor, and inductor, impose linear relationships between voltage and current.

Figure 3.1: The generic circuit element.

For every circuit element we define a voltage and a current. The element has a v-i relation defined by the element's physical properties. In defining the v-i relation, we have the convention that positive current flows from positive to negative voltage drop, as depicted in Figure 3.1. Voltage has units of volts, and both the unit and the quantity are named for Volta (http://www.bioanalytical.com/info/calendar/97/volta.htm). Current has units of amperes, and is named for the French physicist Ampère (http://www-groups.dcs.st-and.ac.uk/~history/Mathematicians/Ampere.html). Current flows through circuit elements and through conductors, which we indicate by lines in circuit diagrams.

Voltages and currents also carry power. Again using the convention shown in Figure 3.1 for circuit elements, the instantaneous power consumed by the element at each moment of time is given by the product of the voltage and current:

p(t) = v(t) i(t)

A positive value for power indicates that at time t the circuit element is consuming power; a negative value means it is producing power. With voltage expressed in volts and current in amperes, power defined this way has units of watts. Just as in all areas of physics and chemistry, power is the rate at which energy is consumed or produced. Consequently, energy is the integral of power:

E(t) = ∫_{−∞}^{t} p(α) dα

Again, positive energy corresponds to consumed energy and negative energy corresponds to energy production. Note that a circuit element having a power profile that is both positive and negative over some time interval could consume or produce energy according to the sign of the integral of power. The units of energy are joules, since a watt equals joules/second.

Exercise 3.2.1 (Solution on p. 116.)
Residential energy bills typically state a home's energy usage in kilowatt-hours. Is this really a unit of energy? If so, how many joules equals one kilowatt-hour?

3.2.1 Resistor

Figure 3.2: Resistor. v = Ri.

In a resistor, the voltage is proportional to the current, with the constant of proportionality R known as the resistance:

v(t) = R i(t)

Resistance has units of ohms, denoted by Ω, and is named for the German electrical scientist Georg Ohm (http://www-groups.dcs.st-and.ac.uk/~history/Mathematicians/Ohm.html). Sometimes, the v-i relation for the resistor is written i = Gv, with G, the conductance, equal to 1/R. Conductance has units of Siemens (S), and is named for the German electronics industrialist Werner von Siemens (http://w4.siemens.de/archiv/en/persoenlichkeiten/werner_von_siemens.html).

When resistance is positive, as it is in most cases, a resistor consumes power. A resistor's instantaneous power consumption can be written one of two ways:

p(t) = R i²(t) = (1/R) v²(t)

As the resistance approaches infinity, we have what is known as an open circuit: No current flows, but a non-zero voltage can appear across the open circuit. As the resistance becomes zero, the voltage goes to zero for a non-zero current flow. This situation corresponds to a short circuit; a superconductor physically realizes a short circuit. The resistor is far and away the simplest circuit element.

3.2.2 Capacitor

Figure 3.3: Capacitor. i = C dv(t)/dt.
The capacitor stores charge, and the relationship between the charge stored and the resultant voltage is q = Cv. The constant of proportionality, the capacitance, has units of farads (F), and is named for the English experimental physicist Michael Faraday (http://www.iee.org.uk/publish/faraday/faraday1.html). As current is the rate of change of charge, the v-i relation can be expressed in differential or integral form:

i(t) = C dv(t)/dt   or   v(t) = (1/C) ∫_{−∞}^{t} i(α) dα   (3.1)

If the voltage across a capacitor is constant, then the current flowing into it equals zero. In this situation, the capacitor is equivalent to an open circuit.

The power consumed/produced by a voltage applied to a capacitor depends on the product of the voltage and its derivative:

p(t) = C v(t) dv(t)/dt

This result means that a capacitor's total energy expenditure up to time t is concisely given by

E(t) = (1/2) C v²(t)

This expression presumes the fundamental assumption of circuit theory: all voltages and currents in any circuit were zero in the far distant past (t = −∞).

3.2.3 Inductor

Figure 3.4: Inductor. v = L di(t)/dt.

The inductor stores magnetic flux, with larger valued inductors capable of storing more flux. Inductance has units of henries (H), and is named for the American physicist Joseph Henry (http://www.si.edu/archives//ihd/jhp/). The differential and integral forms of the inductor's v-i relation are

v(t) = L di(t)/dt   or   i(t) = (1/L) ∫_{−∞}^{t} v(α) dα   (3.2)

The power consumed/produced by an inductor depends on the product of the inductor current and its derivative:

p(t) = L i(t) di(t)/dt

and its total energy expenditure up to time t is given by

E(t) = (1/2) L i²(t)
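The energy formula E(t) = (1/2) C v²(t) follows from integrating p(t) = C v dv/dt, and that integration is easy to reproduce numerically. A sketch, added to the text, assuming Python with NumPy; the capacitance and voltage waveform are arbitrary choices.

    import numpy as np

    C = 1e-6                               # 1 microfarad
    t = np.linspace(0.0, 1e-3, 100001)
    v = 5.0 * (1.0 - np.exp(-t / 1e-4))    # a voltage rising from 0 toward 5 V

    i = C * np.gradient(v, t)              # capacitor v-i relation: i = C dv/dt
    p = v * i                              # instantaneous power

    # Trapezoidal integration of the power over time
    E = (0.5 * (p[1:] + p[:-1]) * np.diff(t)).sum()

    print(E, 0.5 * C * v[-1]**2)           # the two agree to numerical accuracy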
For example.3 Ideal and Real-World Circuit Elements Source and linear circuit elements are ideal circuit elements. and can be purchased in any supermarket.5: + v – is (b) The voltage source on the left and current source on the right are like all circuit elements in that they have a particular relationship between the voltage and current dened for them. we'll learn why later.org/content/m0014/2. For example. on the other hand. the smart engineer must be aware of the frequency ranges over which his ideal models match reality well. physical circuit elements can be readily found that well approximate the ideal. the way the resistor is constructed introduces inductance and capacitance eects. 10% is common. For example. the more money you pay). 9 10 This content is available online at <http://cnx. physical devices are manufactured to close tolerances (the tighter the tolerance. When two people dene variables according to their individual preferences.2) that the Do recall in dening your voltage and v-i relations for the elements presume that positive current ow is in the same direction as positive voltage drop. the signs of their variables may not agree. Once you dene voltages and currents. we want to determine the voltage across the resistor labeled by its value R2 . current variables (Section 3. we have a total of six voltages and currents that must be either specied or determined. we analyze the circuitunderstand what it accomplishesby dening currents and voltages for all circuit elements. we must write a set of equations that allow us to nd all the voltages and currents that can be dened for every circuit element. this . we need to solve some set of equations so that we relate the output voltage vout to the source voltage. To understand what this circuit accomplishes. Once the values for the voltages and currents are calculated.44 CHAPTER 3. You can dene the directions for positive current ow and positive voltage drop any way you like. As shown in the middle. On the bottom is the block diagram that corresponds to the circuit.6: The circuit shown in the top two gures is perhaps the simplest circuit that performs a signal processing function. Recasting this problem mathematically. R1 + – vin + R2 vout – (a) i 1 + v1 – i vin + – + v – iout R1 + R2 vout – (b) vin(t) Source System vout(t) (c) Figure 3. we need six nonredundant equations to solve for the six unknown voltages and currents. By specifying the source. input is provided by the voltage source vin and the output is the voltage vout across the resistor label The R2 . Until we have more knowledge about how circuits work. It would be simplea little too simple at this pointif we could instantly write down the one equation that relates these two voltages. ANALOG SIGNAL PROCESSING This circuit is the electrical embodiment of a system having its input provided by a source system producing vin (t). we have one. Because we have a three-element circuit. but current ow and voltage drop values for each element will agree. they may be positive or negative according to your denition. and then solving the circuit and element equations. Two nodes are explicitly indicated in Figure 3. The convention is to discard the equation for the (unlabeled) node at the bottom of the circuit. Kirchho's Laws.1: Kirchho 's Current Law).4. What this law means physically is that charge cannot accumulate in a node.4. the places where circuit elements attach to each other are called nodes. where do we get the other three equations we need? 
What we need to solve every circuit problem are mathematical statements that express how the circuit elements are interconnected. Said another way, we need the laws that govern the electrical connection of circuit elements. First of all, the places where circuit elements attach to each other are called nodes. Two nodes are explicitly indicated in Figure 3.6; a third is at the bottom where the voltage source and resistor R2 are connected. Electrical engineers tend to draw circuit diagrams (schematics) in a rectilinear fashion. Thus the long line connecting the bottom of the voltage source with the bottom of the resistor is intended to make the diagram look pretty. This line simply means that the two elements are connected together. The interconnection laws are statements, in mathematical terms, of what a connection among circuit elements means. They are named for Gustav Kirchhoff (http://en.wikipedia.org/wiki/Gustav_Kirchhoff), a nineteenth century German physicist, and come in two forms: one for current (Section 3.4.1) and one for voltage (Section 3.4.2). These laws, Kirchhoff's Laws, are essential to analyzing this and any circuit.

3.4.1 Kirchhoff's Current Law

At every node, the sum of all currents entering or leaving a node must equal zero. What this law means physically is that charge cannot accumulate in a node; what goes in must come out. In the example, below, we have a three-node circuit and thus have three KCL equations:

(−i) − i1 = 0
i1 − i2 = 0
i + i2 = 0

Note that the current entering a node is the negative of the current leaving the node.

Figure 3.7: The circuit shown is perhaps the simplest circuit that performs a signal processing function. The input is provided by the voltage source labelled vin and the output is the voltage vout across the resistor R2.

Given any two of these KCL equations, we can find the other by adding or subtracting them. Thus, one of them is redundant and, in mathematical terms, we can discard any one of them. The convention is to discard the equation for the (unlabeled) node at the bottom of the circuit.

Exercise 3.4.1 (Solution on p. 116.)
In writing KCL equations, you will find that in an n-node circuit, exactly one of them is always redundant. Can you sketch a proof of why this might be true? Hint: It has to do with the fact that charge won't accumulate in one place on its own.

3.4.2 Kirchhoff's Voltage Law (KVL)

The voltage law says that the sum of voltages around every closed loop in the circuit must equal zero. A closed loop has the obvious definition: Starting at a node, trace a path through the circuit that returns you to the origin node. KVL expresses the fact that electric fields are conservative: The total work performed in moving a test charge around a closed path is zero. The KVL equation for our circuit is

v1 + v2 − v = 0

In writing KVL equations, we follow the convention that an element's voltage enters with a plus sign when, in traversing the closed path, we go from the positive to the negative of the voltage's definition.
For the example circuit (Figure 3.7), we have three v-i relations, two KCL equations, and one KVL equation for solving for the circuit's six voltages and currents:

v-i:   v = vin,   v1 = R1 i1,   vout = R2 iout
KCL:   (−i) − i1 = 0,   i1 − iout = 0
KVL:   −v + v1 + vout = 0

We have exactly the right number of equations! Eventually, we will discover shortcuts for solving circuit problems; for now, we want to eliminate all the variables but vout and determine how it depends on vin and on resistor values; we temporarily eliminate the quantity we seek. The KVL equation can be rewritten as vin = v1 + vout. Substituting into it the resistors' v-i relations, we have vin = R1 i1 + R2 iout. One of the KCL equations says i1 = iout, which means that vin = R1 iout + R2 iout = (R1 + R2) iout. Solving for the current in the output resistor, we have iout = vin / (R1 + R2). We have now solved the circuit: We have expressed one voltage or current in terms of sources and circuit-element values. Using the v-i relation for the output resistor, we obtain the quantity we seek:

vout = (R2 / (R1 + R2)) vin

Exercise 3.4.2 (Solution on p. 116.)
Referring back to Figure 3.6, a circuit should serve some useful purpose. What kind of system does our circuit realize and, in terms of element values, what are the system's parameter(s)?

To find any other circuit quantities, we can back substitute this answer into our original equations or ones we developed along the way.
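The same six-equation system can also be handed to a computer. The sketch below, an addition to the text assuming Python with NumPy and arbitrary component values, stacks the v-i relations, KCL, and KVL for the circuit of Figure 3.7 into matrix form A z = b with z = (v, v1, vout, i, i1, iout) and solves; the result matches vout = R2 vin / (R1 + R2).

    import numpy as np

    R1, R2, vin = 1000.0, 2000.0, 3.0     # arbitrary element values
    # Unknown vector z = (v, v1, vout, i, i1, iout)
    A = np.array([
        [ 1, 0, 0,  0,   0,   0],   # v = vin             (source)
        [ 0, 1, 0,  0, -R1,   0],   # v1 = R1 i1          (v-i)
        [ 0, 0, 1,  0,   0, -R2],   # vout = R2 iout      (v-i)
        [ 0, 0, 0, -1,  -1,   0],   # (-i) - i1 = 0       (KCL)
        [ 0, 0, 0,  0,   1,  -1],   # i1 - iout = 0       (KCL)
        [-1, 1, 1,  0,   0,   0],   # -v + v1 + vout = 0  (KVL)
    ])
    b = np.array([vin, 0, 0, 0, 0, 0])

    v, v1, vout, i, i1, iout = np.linalg.solve(A, b)
    print(vout, R2 * vin / (R1 + R2))     # both print 2.0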
3.5 Power Dissipation in Resistor Circuits (This content is available online at <http://cnx.org/content/m17305/1.7/>.)

We can find voltages and currents in simple circuits containing resistors and voltage or current sources. We should examine whether these circuit variables obey the Conservation of Power principle: since a circuit is a closed system, it should not dissipate or create energy. For the moment, our approach is to investigate first a resistor circuit's power consumption/creation. Later, we will prove that because of KVL and KCL, all circuits conserve power.

As defined on p. 40, the instantaneous power consumed/created by every circuit element equals the product of its voltage and current. The total power consumed/created by a circuit equals the sum of each element's power:

P = Σ_k v_k i_k

Recall that each element's current and voltage must obey the convention that positive current is defined to enter the positive-voltage terminal. With this convention, a positive value of v_k i_k corresponds to consumed power, a negative value to created power. Because the total power in a circuit must be zero (P = 0), some circuit elements must create power while others consume it.

Consider the simple series circuit in Figure 3.6. In performing our calculations, we defined the current iout to flow through the positive-voltage terminals of both resistors and found it to equal iout = vin / (R1 + R2). The voltage across the resistor R2 is the output voltage and we found it to equal vout = (R2 / (R1 + R2)) vin. Consequently, calculating the power for this resistor yields

P2 = R2 iout² = (R2 / (R1 + R2)²) vin²

Since resistors are positive-valued, P2 is positive; this resistor dissipates power because we showed (p. 41) that the power consumed by any resistor equals either of the following:

p(t) = R i²(t) = v²(t) / R   (3.3)

We conclude that both resistors in our example circuit consume power, which points to the voltage source as the producer of power. The current flowing into the source's positive terminal is −iout. Consequently, the power calculation for the source yields

−(vin iout) = −(1 / (R1 + R2)) vin²

We conclude that the source provides the power consumed by the resistors, no more, no less.

Exercise 3.5.1 (Solution on p. 116.)
Calculate the power consumed/created by the resistor R1 in our simple circuit example.

Exercise 3.5.2 (Solution on p. 116.)
Confirm that the source produces exactly the total power consumed by both resistors.

This result is quite general: sources produce power and the circuit elements, especially resistors, consume it. But where do sources get their power? Again, circuit theory does not model how sources are constructed, but the theory decrees that all sources must be provided energy to work.

But where does a resistor's power go? By Conservation of Power, the dissipated power must be absorbed somewhere. The answer is not directly predicted by circuit theory, but is by physics: Current flowing through a resistor makes it hot; its power is dissipated by heat.

note: A physical wire has a resistance and hence dissipates power (it gets warm just like a resistor in a circuit). In fact, the resistance of a wire of length L and cross-sectional area A is given by

R = ρL / A

The quantity ρ is known as the resistivity and presents the resistance of a unit-length, unit cross-sectional area material constituting the wire. Resistivity has units of ohm-meters. Most materials have a positive value for ρ, which means the longer the wire, the greater the resistance and thus the power dissipated. The thicker the wire, the smaller the resistance. Superconductors have zero resistivity and hence do not dissipate power. If a room-temperature superconductor could be found, electric power could be sent through power lines without loss!
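Conservation of power is easy to confirm for the voltage divider, as a quick sketch shows (an addition to the text; plain Python with arbitrary values): the element powers v_k i_k, computed with positive current entering each positive-voltage terminal, sum to zero.

    R1, R2, vin = 1000.0, 2000.0, 3.0

    iout = vin / (R1 + R2)          # current around the loop
    v1 = R1 * iout                  # voltage across R1
    vout = R2 * iout                # voltage across R2

    p_source = vin * (-iout)        # current enters the source's + terminal as -iout
    p_R1 = v1 * iout
    p_R2 = vout * iout

    print(p_source + p_R1 + p_R2)   # 0.0: consumed power equals produced power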
3.6 Series and Parallel Circuits (This content is available online at <http://cnx.org/content/m10674/2.9/>.)

Figure 3.8: The circuit shown is perhaps the simplest circuit that performs a signal processing function. The input is provided by the voltage source labelled vin and the output is the voltage vout across the resistor labelled R2.

The input-output relationship for this system, found in this particular case by voltage divider, takes the form of a ratio of the output voltage to the input voltage:

vout / vin = R2 / (R1 + R2)

In this way, we express how the components used to build the system affect the input-output relationship. Because this analysis was made with ideal circuit elements, we might expect this relation to break down if the input amplitude is too high (Will the circuit survive if the input changes from 1 volt to one million volts?) or if the source's frequency becomes too high. In any case, this important way of expressing input-output relationships, as a ratio of output to input, pervades circuit and system theory.

The current i1 is the current flowing out of the voltage source. Because it equals i2, we have that vin / i1 = R1 + R2:

Resistors in series: The series combination of two resistors acts, as far as the voltage source is concerned, as a single resistor having a value equal to the sum of the two resistances.

This result is the first of several equivalent circuit ideas: In many cases, a complicated circuit when viewed from its terminals (the two places to which you might attach a source) appears to be a single circuit element (at best) or a simple combination of elements at worst. Thus, the equivalent circuit for a series combination of resistors is a single resistor having a resistance equal to the sum of its component resistances.

Figure 3.9: The resistor (on the right) is equivalent to the two resistors (on the left) and has a resistance equal to the sum of the resistances of the other two resistors.

Thus, the circuit the voltage source "feels" (through the current drawn from it) is a single resistor having resistance R1 + R2. Note that in making this equivalent circuit, the output voltage can no longer be defined: The output resistor labeled R2 no longer appears. Thus, this equivalence is made strictly from the voltage source's viewpoint.

Resistors connected in such a way that current from one must flow only into another (currents in all resistors connected this way have the same magnitude) are said to be connected in series. For the two series-connected resistors in the example, the voltage across one resistor equals the ratio of that resistor's value and the sum of resistances times the voltage across the series combination. This concept is so pervasive it has a name: voltage divider.

One interesting simple circuit (Figure 3.10) has two resistors connected side-by-side, what we will term a parallel connection, rather than in series. Here, applying KVL reveals that all the voltages are identical: v1 = v and v2 = v. This result typifies parallel connections.

Figure 3.10: A simple parallel circuit.

To write the KCL equation, note that the top node consists of the entire upper interconnection section. The KCL equation is iin − i1 − i2 = 0. Using the v-i relations, we find that

iout = (R1 / (R1 + R2)) iin

Exercise 3.6.1 (Solution on p. 116.)
Suppose that you replaced the current source in Figure 3.10 by a voltage source. How would iout be related to the source voltage? Based on this result, what purpose does this revised circuit have?

This circuit highlights some important properties of parallel circuits. You can easily show that the parallel combination of R1 and R2 has the v-i relation of a resistor having resistance (1/R1 + 1/R2)^{−1} = R1 R2 / (R1 + R2). A shorthand notation for this quantity is R1 || R2. As the reciprocal of resistance is conductance (Section 3.2.1: Resistor), we can say that for a parallel combination of resistors, the equivalent conductance is the sum of the conductances.

Figure 3.11: [The parallel combination: R1 || R2 = R1 R2 / (R1 + R2).]

Similar to voltage divider (p. 48) for series resistances, we have current divider for parallel resistances. The current through a resistor in parallel with another is the ratio of the conductance of the first to the sum of the conductances. Thus, for the depicted circuit, i2 = (G2 / (G1 + G2)) i. Expressed in terms of resistances, current divider takes the form of the resistance of the other resistor divided by the sum of resistances: i2 = (R1 / (R1 + R2)) i.

Figure 3.12: [Current divider.]
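The combination rules compose, so a pair of helper functions covers most resistor-network reductions. A sketch, added here in plain Python; the function names are ours, not from the text:

    def series(*resistors):
        # Series rule: resistances add
        return sum(resistors)

    def parallel(*resistors):
        # Parallel rule: conductances add
        return 1.0 / sum(1.0 / r for r in resistors)

    def voltage_divider(vin, r_top, r_bottom):
        return vin * r_bottom / (r_top + r_bottom)

    def current_divider(iin, r_self, r_other):
        # Current through r_self: the *other* resistance over the sum
        return iin * r_other / (r_self + r_other)

    print(parallel(2000.0, 2000.0))               # 1000.0
    print(voltage_divider(3.0, 1000.0, 2000.0))   # 2.0
    print(current_divider(1.0, 2000.0, 1000.0))   # one third of the current

These helpers also reduce the network of Example 3.1 below in one line: RT = parallel(R1, series(parallel(R2, R3), R4)).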
The input-output relation for the above circuit without a load is vout = (R2 / (R1 + R2)) vin. Suppose we want to pass the output signal into a voltage measurement device, such as an oscilloscope or a voltmeter. In system-theory terms, we want to pass our circuit's output to a sink. For most applications, we can represent these measurement devices as a resistor, with the current passing through it driving the measurement device through some type of display. In circuits, a sink is called a load; thus, we describe a system-theoretic sink as a load resistance RL. Thus, we have a complete system built from a cascade of three systems: a source, a signal processing system (simple as it is), and a sink.

Figure 3.13: The simple attenuator circuit (Figure 3.8) is attached to an oscilloscope's input, forming a cascade of source, system, and sink.

We must analyze afresh how this revised circuit, shown in Figure 3.13, works. Rather than defining eight variables and solving for the current in the load resistor, let's take a hint from other analysis (series rules (p. 48), parallel rules (p. 49)). Resistors R2 and RL are in a parallel configuration: The voltages across each resistor are the same while the currents are not. Because the voltages are the same, we can find the current through each from their v-i relations: i2 = vout / R2 and iL = vout / RL. Considering the node where all three resistors join, KCL says that the sum of the three currents must equal zero. Said another way, the current entering the node through R1 must equal the sum of the other two currents leaving the node. Therefore, i1 = i2 + iL, which means that i1 = vout (1/R2 + 1/RL).

Let Req denote the equivalent resistance of the parallel combination of R2 and RL: Req = (1/R2 + 1/RL)^{−1}. The KVL equation written around the leftmost loop has vin = v1 + vout. Using R1's v-i relation and substituting for i1, we find

vin = vout (R1/Req + 1)   or   vout = (Req / (R1 + Req)) vin

Thus, we have the input-output relationship for our entire system having the form of voltage divider, but it does not equal the input-output relation of the circuit without the voltage measurement device. We can not measure voltages reliably unless the measurement device has little effect on what we are trying to measure. We should look more carefully to determine if any values for the load resistance would lessen its impact on the circuit. Comparing the input-output relations before and after, what we need is Req ≈ R2. As Req = (1/R2 + 1/RL)^{−1}, the approximation would apply if 1/R2 ≫ 1/RL, or R2 ≪ RL. This is the condition we seek:

Voltage measurement: Voltage measurement devices must have large resistances compared with that of the resistor across which the voltage is to be measured.

Another valuable lesson emerges from this example concerning the difference between cascading systems and cascading circuits. In system theory, systems can be cascaded without changing the input-output relation of intermediate systems. In cascading circuits, this ideal is rarely true unless the circuits are so designed; because the resistors R1 and R2 can have virtually any value, you can never make the resistance of your voltage measurement device big enough. Said another way, a circuit cannot be designed in isolation that will work in cascade with all other circuits. Electrical engineers deal with this situation through the notion of specifications: Under what conditions will the circuit perform as designed? Thus, you will find that oscilloscopes and voltmeters have their internal resistances clearly stated, enabling you to determine whether the voltage you measure closely equals what was present before they were attached to your circuit. Furthermore, since our resistor circuit functions as an attenuator, with the attenuation (a fancy word for gains less than one) depending only on the ratio of the two resistor values (R2 / (R1 + R2) = (1 + R1/R2)^{−1}), we can select any values for the two resistances we want to achieve the desired attenuation.
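The thresholds in the exercise that follows can be explored numerically before being derived. This sketch, an addition to the text in plain Python with arbitrary divider values, computes the relative change in gain caused by attaching a load RL:

    R1, R2 = 1000.0, 2000.0

    def gain(RL=None):
        # Attenuator gain, optionally with a load RL across R2
        Req = R2 if RL is None else 1.0 / (1.0 / R2 + 1.0 / RL)
        return Req / (R1 + Req)

    g0 = gain()                        # unloaded: R2 / (R1 + R2)
    for RL in (10 * R2, 100 * R2, 1000 * R2):
        g = gain(RL)
        print(RL / R2, (g0 - g) / g0)  # load ratio, relative gain error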
Exercise 3.6.2 (Solution on p. 116.)
Let's be more precise: How much larger would a load resistance need to be to affect the input-output relation by less than 10%? By less than 1%?

Example 3.1

Figure 3.14: [The example circuit: R1 across the terminals, with R2 in parallel with R3, that combination in series with R4.]

We want to find the total resistance of the example circuit. To apply the series and parallel combination rules, it is best to first determine the circuit's structure: What is in series with what and what is in parallel with what, at both small- and large-scale views. We have R2 in parallel with R3; this combination is in series with R4. This series combination is in parallel with R1. Note that in determining this structure, we started away from the terminals and worked toward them. In most cases, this approach works well; try it first. The total resistance expression mimics the structure:

RT = R1 || (R2 || R3 + R4)

RT = (R1 R2 R3 + R1 R2 R4 + R1 R3 R4) / (R1 R2 + R1 R3 + R2 R3 + R2 R4 + R3 R4)

Such complicated expressions typify circuit "simplifications." A simple check for accuracy is the units: Each component of the numerator should have the same units (here Ω³) as well as in the denominator (Ω²); the ratio of the numerator's and denominator's units should be ohms, since the entire expression is to have units of resistance. Checking units does not guarantee accuracy, but can catch many errors.

Design is in the hands of the engineer, and he or she must recognize what have come to be known as loading effects. The designer of this circuit must thus specify not only what the attenuation is, but also the resistance values employed so that integrators (people who put systems together from component systems) can combine systems together and have a chance of the combination working.

Exercise 3.6.3 (Solution on p. 116.)
Contrast a series combination of resistors with a parallel one. Which variable (voltage or current) is the same for each and which differs? What are the equivalent resistances? When resistors are placed in series, is the equivalent resistance bigger, in between, or smaller than the component resistances? What is this relationship for a parallel combination?

3.7 Equivalent Circuits: Resistors and Sources (This content is available online at <http://cnx.org/content/m0020/2.24/>.)

We have found that the way to think about circuits is to locate and group parallel and series resistor combinations. Those resistors not involved with variables of interest can be collapsed into a single resistance. This result is known as an equivalent circuit: from the viewpoint of a pair of terminals, a group of resistors functions as a single resistor, the resistance of which can usually be found by applying the parallel and series rules.

Figure 3.15: Series and parallel combination rules. (a) Series combination rule: RT = Σ_{n=1}^{N} Rn, with vn = (Rn / RT) v. (b) Parallel combination rule: GT = Σ_{n=1}^{N} Gn, with in = (Gn / GT) i.

Figure 3.15 summarizes the series and parallel combination results. These results are easy to remember and very useful. Keep in mind that for series combinations, voltage and resistance are the key quantities, while for parallel combinations current and conductance are more important. In series combinations, the currents through each element are the same; in parallel ones, the voltages are the same.
Let's consider our simple attenuator circuit (shown in Figure 3.16) from the viewpoint of the output terminals. We want to find the v-i relation for the output terminal pair, and then find the equivalent circuit for the boxed circuit. To perform this calculation, use the circuit laws and element relations, but do not attach anything to the output terminals. We seek the relation between v and i that describes the kind of element that lurks within the dashed box. The result is

v = (R1 || R2) i + (R2 / (R1 + R2)) vin   (3.4)

Figure 3.16: [The attenuator circuit viewed from its output terminals.]

If the source were zero, it could be replaced by a short circuit, which would confirm that the circuit does indeed function as a parallel combination of resistors. However, the source's presence means that the circuit is not well modeled as a resistor.

Figure 3.17: The Thévenin equivalent circuit.

This result generalizes to include sources in a very interesting and useful way. For any circuit containing resistors and sources, the v-i relation will be of the form

v = Req i + veq   (3.5)

and the Thévenin equivalent circuit for any such circuit is that of Figure 3.17. This equivalence applies no matter how many sources or resistors may be present in the circuit. If we consider the simple circuit of Figure 3.16, we find it has the v-i relation at its terminals of

v = Req i + veq   (3.6)

Comparing the two v-i relations, we find that they have the same form. In this case the Thévenin equivalent resistance is Req = R1 || R2 and the Thévenin equivalent source has voltage veq = (R2 / (R1 + R2)) vin. Thus, from the viewpoint of the terminals, you cannot distinguish the two circuits. Because the equivalent circuit has fewer elements, it is easier to analyze and understand than any other alternative.
Example 3.2

[Figure 3.19: the circuit for Example 3.2: a current source iin with resistors R1, R2, and R3; the output terminals are taken across R3.]

For the circuit depicted in Figure 3.19, let's derive its Thévenin equivalent two different ways. Starting with the open/short-circuit approach, let's first find the open-circuit voltage voc. We have a current divider relationship, as R1 is in parallel with the series combination of R2 and R3. Thus,

voc = (iin R3 R1)/(R1 + R2 + R3)

When we short-circuit the terminals, no voltage appears across R3, and thus no current flows through it. In short, R3 does not affect the short-circuit current, and can be eliminated. We again have a current divider relationship:

isc = −(iin R1)/(R1 + R2)

Thus, the Thévenin equivalent resistance is Req = −voc/isc = R3(R1 + R2)/(R1 + R2 + R3).

To verify, let's find the equivalent resistance by reaching inside the circuit and setting the current source to zero. Because the current is now zero, we can replace the current source by an open circuit. R3 is now in parallel with the series combination of R1 and R2, and we obtain the same result: Req = R3 ∥ (R1 + R2).

As you might expect, equivalent circuits come in two forms: the voltage-source oriented Thévenin equivalent 15 and the current-source oriented Mayer-Norton equivalent (Figure 3.20).

[Figure 3.20: All circuits containing sources and resistors can be described by simpler equivalent circuits: the Thévenin equivalent (voltage source veq in series with Req) and the Mayer-Norton equivalent (current source ieq in parallel with Req).]

To derive the Mayer-Norton equivalent, note that the v-i relation for the Thévenin equivalent can be written as

v = Req i + veq (3.9)

or

i = v/Req − ieq (3.10)

where ieq = veq/Req is the Mayer-Norton equivalent source. The Mayer-Norton equivalent shown in Figure 3.20 can be easily shown to have this v-i relation. Note that both variations have the same equivalent resistance. The short-circuit current equals the negative of the Mayer-Norton equivalent source. Choosing which equivalent to use depends on the application, not on what is actually inside the circuit.

15 "Finding Thévenin Equivalent Circuits" <http://cnx.org/content/m0021/latest/>
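A minimal sketch that repeats Example 3.2's two derivations numerically (the element values are assumptions, not from the text):

    def parallel(a, b):
        return a * b / (a + b)

    R1, R2, R3, iin = 1.0, 2.0, 3.0, 1.0          # ohms and amperes (assumed)
    voc = iin * R3 * R1 / (R1 + R2 + R3)          # open-circuit voltage
    isc = -iin * R1 / (R1 + R2)                   # short-circuit current
    Req_meas = -voc / isc                          # from terminal measurements
    Req_calc = parallel(R3, R1 + R2)               # by collapsing the dead circuit
    ieq = voc / Req_meas                           # Mayer-Norton source
    print(Req_meas, Req_calc, ieq)                 # the two resistances agree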
Exercise 3.7.2 (Solution on p. 116.)
Find the Mayer-Norton equivalent circuit for the circuit below.

[Figure 3.21: a current source iin with resistors R1, R2, and R3.]

Equivalent circuits can be used in two basic ways. The first is to simplify the analysis of a complicated circuit by realizing that any portion of a circuit can be described by either a Thévenin or Mayer-Norton equivalent. Which one is used depends on whether what is attached to the terminals is a series configuration (making the Thévenin equivalent the best) or a parallel one (making the Mayer-Norton the best).

Another application is modeling. When we buy a flashlight battery, either equivalent circuit can accurately describe it. These models help us understand the limitations of a battery. Since batteries are labeled with a voltage specification, they should serve as voltage sources, and the Thévenin equivalent is the natural choice. If a load resistance RL is placed across its terminals, the voltage output can be found using voltage divider: v = (veq RL)/(RL + Req). If we have a load resistance much larger than the battery's equivalent resistance, then, to a good approximation, the battery does serve as a voltage source. If the load resistance is much smaller, we certainly don't have a voltage source (the output voltage depends directly on the load resistance). Consider now the Mayer-Norton equivalent; the current through the load resistance is given by current divider, and equals i = −(ieq Req)/(RL + Req). For a current that does not vary with the load resistance, this resistance should be much smaller than the equivalent resistance. If the load resistance is comparable to the equivalent resistance, the battery serves as neither a voltage source nor a current source. Thus, when you buy a battery, you get a voltage source if its equivalent resistance is much smaller than the equivalent resistance of the circuit to which you attach it. On the other hand, if you attach it to a circuit having a small equivalent resistance, you bought a current source.

Léon Charles Thévenin: He was an engineer with France's Postes, Télégraphe et Téléphone. In 1883, he published (twice!) a proof of what is now called the Thévenin equivalent while developing ways of teaching electrical engineering concepts at the École Polytechnique. He did not realize that the same result had been published by Hermann Helmholtz 16, the renowned nineteenth-century physicist, thirty years earlier.

Hans Ferdinand Mayer: After earning his doctorate in physics in 1920, he turned to communications engineering when he joined Siemens & Halske in 1922. In 1926, he published in a German technical journal the Mayer-Norton equivalent. During his interesting career, he rose to lead Siemens' Central Laboratory in 1936, surreptitiously leaked to the British all he knew of German warfare capabilities a month after the Nazis invaded Poland, was arrested by the Gestapo in 1943 for listening to BBC radio broadcasts, spent two years in Nazi concentration camps, and went to the United States for four years to work for the Air Force and Cornell University before returning to Siemens in 1950. He rose to a position on Siemens' Board of Directors before retiring.

16 http://www-gap.dcs.st-and.ac.uk/∼history/Mathematicians/Helmholtz.html
Edward L. Norton: 17 Edward Norton was an electrical engineer who worked at Bell Laboratory from its inception in 1922. In the same month when Mayer's paper appeared, Norton wrote in an internal technical memorandum a paragraph describing the current-source equivalent. No evidence suggests Norton knew of Mayer's publication.

3.8 Circuits with Capacitors and Inductors 18

[Figure 3.22: A simple RC circuit. The source vin drives the series combination of a resistor R and a capacitor C; the output voltage vout is taken across the capacitor.]

Let's consider a circuit having something other than resistors and sources. Because of KVL, we know that vin = vR + vout. The current through the capacitor is given by i = C dvout/dt, and this current equals that passing through the resistor. Substituting vR = Ri into the KVL equation and using the v-i relation for the capacitor, we arrive at

RC dvout/dt + vout = vin (3.11)

The input-output relation for circuits involving energy storage elements takes the form of an ordinary differential equation, which we must solve to determine what the output voltage is for a given input. In contrast to resistive circuits, where we obtain an explicit input-output relation, we now have an implicit relation that requires more work to obtain answers.

At this point, we could learn how to solve differential equations. Note first that even finding the differential equation relating an output variable to a source is often very tedious. The parallel and series combination rules that apply to resistors don't directly apply when capacitors and inductors occur. We would have to slog our way through the circuit equations, simplifying them until we finally found the equation that related the source(s) to the output. At the turn of the twentieth century, a method was discovered that not only made finding the differential equation easy, but also simplified the solution process in the most common situation. Although not original with him, Charles Steinmetz 19 presented the key paper describing the impedance approach in 1893. It allows circuits containing capacitors and inductors to be solved with the same methods we have learned to solve resistor circuits. To use impedances, we must master complex numbers. Although the arithmetic of complex numbers is mathematically more complicated than with real numbers, the increased insight into circuit behavior and the ease with which circuits are solved with impedances is well worth the diversion. But more importantly, the impedance concept is central to engineering and physics, having a reach far beyond just circuits.

17 http://www.ece.rice.edu/∼dhj/norton
18 This content is available online at <http://cnx.org/content/m0023/2.12/>.
19 http://www.invent.org/hall_of_fame/139.html
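Although the rest of the chapter develops the impedance shortcut, (3.11) can always be attacked head-on. A minimal sketch that integrates the differential equation with forward Euler (the element values and the unit-step input are assumptions):

    # Integrate RC dvout/dt + vout = vin for a unit-step input.
    R, C = 1e3, 1e-6          # 1 kOhm, 1 uF, so RC = 1 ms (assumed)
    dt, T = 1e-6, 5e-3        # time step and duration in seconds
    vin, vout = 1.0, 0.0      # step input, initially uncharged capacitor
    t = 0.0
    while t < T:
        dvout = (vin - vout) / (R * C)   # from the differential equation
        vout += dvout * dt
        t += dt
    print(vout)   # approaches vin: about 0.993 after five time constants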
3.9 The Impedance Concept 20

Rather than solving the differential equation that arises in circuits containing capacitors and inductors, let's pretend that all sources in the circuit are complex exponentials having the same frequency. Although this pretense can only be mathematically true, this fiction will greatly ease solving the circuit no matter what the source really is.

[Figure 3.23: Simple Circuit. The RC circuit of Figure 3.22, now driven by vin = Vin e^{j2πft}.]

For the above example RC circuit (Figure 3.23 (Simple Circuit)), let vin = Vin e^{j2πft}. The complex amplitude Vin determines the size of the source and its phase. The critical consequence of assuming that sources have this form is that all voltages and currents in the circuit are also complex exponentials, having amplitudes governed by KVL, KCL, and the v-i relations, and the same frequency as the source. To appreciate why this should be true, let's investigate how each circuit element behaves when either the voltage or current is a complex exponential. For the resistor, v = Ri. When v = V e^{j2πft}, then i = (V/R) e^{j2πft}. Thus, if the resistor's voltage is a complex exponential, so is the current, with an amplitude I = V/R (determined by the resistor's v-i relation) and a frequency the same as the voltage. Clearly, if the current were assumed to be a complex exponential, so would the voltage. For a capacitor, i = C dv/dt. Letting the voltage be a complex exponential, we have i = CV j2πf e^{j2πft}; the amplitude of this complex exponential is I = CV j2πf. Finally, for the inductor, where v = L di/dt, assuming the current to be a complex exponential results in the voltage having the form v = LI j2πf e^{j2πft}, making its complex amplitude V = LI j2πf.

The major consequence of assuming complex exponential voltages and currents is that the ratio Z = V/I for each element does not depend on time, but does depend on source frequency. This quantity is known as the element's impedance.

[Figure 3.24: Impedance. (a) Resistor: ZR = R. (b) Capacitor: ZC = 1/(j2πfC). (c) Inductor: ZL = j2πfL.]

The impedance is, in general, a complex-valued, frequency-dependent quantity. For example, the magnitude of the capacitor's impedance is inversely related to frequency, and has a phase of −π/2. This observation means that if the current is a complex exponential and has constant amplitude, the amplitude of the voltage decreases with frequency.

Let's consider Kirchhoff's circuit laws. When voltages around a loop are all complex exponentials of the same frequency, we have

Σn vn = Σn Vn e^{j2πft} = 0 (3.12)

which means

Σn Vn = 0 (3.13)

and the complex amplitudes of the voltages obey KVL. We can easily imagine that the complex amplitudes of the currents obey KCL as well.

What we have discovered is that source(s) equaling a complex exponential of the same frequency forces all circuit variables to be complex exponentials of the same frequency. Consequently, the ratio of voltage to current for each element equals the ratio of their complex amplitudes, which depends only on the source's frequency and element values. This situation occurs because the circuit elements are linear and time-invariant. For example, suppose we had a circuit element where the voltage equaled the square of the current: v(t) = K i²(t). If i(t) = I e^{j2πft}, then v(t) = K I² e^{j2π2ft}, meaning that voltage and current no longer have the same frequency and that their ratio is time-dependent.

Because for linear circuit elements the complex amplitude of voltage is proportional to the complex amplitude of current (V = ZI), assuming complex exponential sources means circuit elements behave as if they were resistors, where instead of resistance, we use impedance. Because complex amplitudes for voltage and current also obey Kirchhoff's laws, we can solve circuits using voltage and current divider and the series and parallel combination rules by considering the elements to be impedances.

20 This content is available online at <http://cnx.org/content/m0024/2.23/>.
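The three impedances in Figure 3.24 translate directly into code as complex numbers. A minimal sketch (the frequency and element values are assumptions):

    import math

    def Z_R(R, f):
        return complex(R, 0.0)        # a resistor's impedance ignores frequency

    def Z_C(C, f):
        return 1.0 / (1j * 2 * math.pi * f * C)

    def Z_L(L, f):
        return 1j * 2 * math.pi * f * L

    f = 1e3                           # 1 kHz source (assumed)
    print(abs(Z_C(1e-6, f)))          # capacitor magnitude falls as 1/f
    print(Z_L(1e-3, f))               # inductor impedance is purely imaginary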
3.10 Time and Frequency Domains 21

When we find the differential equation relating the source and the output, we are faced with solving the circuit in what is known as the time domain. What we emphasize here is that it is often easier to find the output if we use impedances. Because impedances depend only on frequency, we find ourselves in the frequency domain. A common error in using impedances is keeping the time-dependent part, the complex exponential, in the fray. The entire point of using impedances is to get rid of time and concentrate on frequency. Only after we find the result in the frequency domain do we go back to the time domain and put things back together again.

To illustrate how the time domain, the frequency domain, and impedances fit together, consider the time domain and frequency domain to be two work rooms. Since you can't be two places at the same time, you are faced with solving your circuit problem in one of the two rooms at any point in time. Impedances and complex exponentials are the way you get between the two rooms. Security guards make sure you don't try to sneak time-domain variables into the frequency-domain room and vice versa. Figure 3.25 (Two Rooms) shows how this works.

[Figure 3.25: Two Rooms. The time and frequency domains are linked by assuming signals are complex exponentials. In the time-domain room, signals can have any form, and differential equations, KVL, KCL, and superposition apply. In the frequency-domain room, only complex amplitudes appear (v(t) = V e^{j2πft}, i(t) = I e^{j2πft}), and impedances, transfer functions, voltage and current divider, KVL, KCL, and superposition apply, yielding Vout = Vin · H(f).]

As we unfold the impedance story, we'll see that the powerful use of impedances suggested by Steinmetz greatly simplifies solving circuits, alleviates us from solving differential equations, and suggests a general way of thinking about circuits. Because of the importance of this approach, let's go over how it works.

1. Even though it's not, pretend the source is a complex exponential. We do this because the impedance approach simplifies finding how input and output are related. If it were a voltage source having voltage vin = p(t) (a pulse), still let vin = Vin e^{j2πft}. We'll learn how to "get the pulse back" later.
2. With a source equaling a complex exponential, all variables in a linear circuit will also be complex exponentials having the same frequency. The circuit's only remaining "mystery" is what each variable's complex amplitude might be. To find these, we consider the source to be a complex number (Vin here) and the elements to be impedances.
3. We can now solve, using series and parallel combination rules, how the complex amplitude of any variable relates to the source's complex amplitude.

Example 3.3
To illustrate the impedance approach, we refer to the RC circuit (Figure 3.26) below, and we assume that vin = Vin e^{j2πft}.

[Figure 3.26: Simple Circuits. (a) A simple RC circuit. (b) The impedance counterpart for the RC circuit. Note that the source and output voltage are now complex amplitudes.]

Using impedances, the complex amplitude of the output voltage Vout can be found using voltage divider:

Vout = (ZC/(ZC + ZR)) Vin = ((1/(j2πfC))/((1/(j2πfC)) + R)) Vin = (1/(j2πfRC + 1)) Vin

If we refer to the differential equation for this circuit (shown in Circuits with Capacitors and Inductors (Section 3.8) to be RC dvout/dt + vout = vin) and let the output and input voltages be complex exponentials, we obtain the same relationship between their complex amplitudes. Thus, using impedances is equivalent to using the differential equation and solving it when the source is a complex exponential.

In fact, we can find the differential equation directly using impedances. If we cross-multiply the relation between input and output amplitudes,

Vout (j2πfRC + 1) = Vin

and then put the complex exponentials back in, we have

RC j2πf Vout e^{j2πft} + Vout e^{j2πft} = Vin e^{j2πft}

In the process of defining impedances, note that the factor j2πf arises from the derivative of a complex exponential. We can reverse the impedance process and revert back to the differential equation:

RC dvout/dt + vout = vin

This is the same equation that was derived much more tediously in Circuits with Capacitors and Inductors (Section 3.8). Finding the differential equation relating output to input is far simpler when we use impedances than with any other technique.

21 This content is available online at <http://cnx.org/content/m10708/2.10/>.
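The voltage-divider computation above is easy to mirror numerically. A minimal sketch (the element and source values are assumptions):

    import cmath, math

    R, C = 1e3, 1e-6
    f, Vin = 100.0, 1.0                       # 100 Hz source, unit amplitude
    ZR = complex(R, 0.0)
    ZC = 1.0 / (1j * 2 * math.pi * f * C)
    Vout = ZC / (ZC + ZR) * Vin               # voltage divider in the frequency domain
    print(abs(Vout), cmath.phase(Vout))       # output magnitude and phase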
Exercise 3.10.1 (Solution on p. 116.)
Suppose you had an expression where a complex amplitude was divided by j2πf. What time-domain operation corresponds to this division?

3.11 Power in the Frequency Domain 23

Recalling that the instantaneous power consumed by a circuit element, or by an equivalent circuit that represents a collection of elements, equals the voltage times the current entering the positive-voltage terminal, p(t) = v(t) i(t), what is the equivalent expression using impedances? The resulting calculation reveals more about power consumption in circuits and introduces the concept of average power.

When all sources produce sinusoids of frequency f, the voltage and current for any circuit element or collection of elements are sinusoids of the same frequency:

v(t) = |V| cos(2πft + φ)
i(t) = |I| cos(2πft + θ)

Here, the complex amplitude of the voltage V equals |V| e^{jφ} and that of the current is |I| e^{jθ}. We can also write the voltage and current in terms of their complex amplitudes using Euler's formula (Section 2.1.2: Euler's Formula):

v(t) = (1/2)(V e^{j2πft} + V* e^{−j2πft})
i(t) = (1/2)(I e^{j2πft} + I* e^{−j2πft})

Multiplying these two expressions and simplifying gives

p(t) = (1/4)(V I* + V* I + V I e^{j4πft} + V* I* e^{−j4πft})
     = (1/2) Re(V I*) + (1/2) Re(V I e^{j4πft})
     = (1/2) Re(V I*) + (1/2)|V||I| cos(4πft + φ + θ)

We define (1/2) V I* to be complex power. The real part of complex power is the first term, and since it does not change with time, it represents the power consistently consumed/produced by the circuit. The second term varies with time at a frequency twice that of the source; conceptually, this term details how power "sloshes" back and forth in the circuit because of the sinusoidal source.

From another viewpoint, the real part of complex power represents long-term energy consumption/production. Energy is the integral of power and, as the integration interval increases, the first term appreciates while the time-varying term "sloshes." Consequently, the most convenient definition of the average power consumed/produced by any circuit is in terms of complex amplitudes:

Pave = (1/2) Re(V I*) (3.14)

Exercise 3.11.1 (Solution on p. 116.)
Suppose the complex amplitudes of the voltage and current have fixed magnitudes. What phase relationship between voltage and current maximizes the average power? In other words, how are φ and θ related for maximum power dissipation?

23 This content is available online at <http://cnx.org/content/m17308/1.2/>.
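Equation (3.14) is a one-liner once the complex amplitudes are known. A minimal sketch with assumed amplitudes and phases:

    import cmath, math

    V = cmath.rect(5.0, math.radians(30))    # |V| = 5 V at phase 30 deg (assumed)
    I = cmath.rect(2.0, math.radians(-15))   # |I| = 2 A at phase -15 deg (assumed)
    Pave = 0.5 * (V * I.conjugate()).real    # eq. (3.14)
    print(Pave)   # equals 0.5*|V||I|*cos(phi - theta) = 5*cos(45 deg), about 3.54 W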
Because the complex amplitudes of the voltage and current are related by the equivalent impedance, average power can also be written as

Pave = (1/2) Re(Z) |I|² = (1/2) Re(1/Z) |V|²

These expressions generalize the results (3.3) we obtained for resistor circuits. We have derived a fundamental result: only the real part of impedance contributes to long-term power dissipation. Of the circuit elements, only the resistor dissipates power; capacitors and inductors dissipate no power in the long term. It is important to realize that these statements apply only for sinusoidal sources. If you turn on a constant voltage source in an RC circuit, charging the capacitor does consume power.

Exercise 3.11.2 (Solution on p. 116.)
In an earlier problem (Section 1.5.1: RMS Values), we found that the rms value of a sinusoid was its amplitude divided by √2. What is average power expressed in terms of the rms values of the voltage and current (Vrms and Irms, respectively)?

3.12 Equivalent Circuits: Impedances and Sources 24

When we have circuits with capacitors and/or inductors as well as resistors and sources, Thévenin and Mayer-Norton equivalent circuits can still be defined by using impedances and complex amplitudes for voltages and currents. For any circuit containing sources, resistors, capacitors, and inductors, the input-output relation for the complex amplitudes of the terminal voltage and current is

V = Zeq I + Veq
I = V/Zeq − Ieq

with Veq = Zeq Ieq. Thus, we have Thévenin and Mayer-Norton equivalent circuits as shown in Figure 3.27 (Equivalent Circuits).

[Figure 3.27: Equivalent Circuits. (a) Equivalent circuits with resistors: the Thévenin equivalent (veq in series with Req) and the Mayer-Norton equivalent (ieq in parallel with Req). (b) Equivalent circuits with impedances: the same structures with Veq, Ieq, and Zeq, where all variables are complex amplitudes.]

Comparing the first, simpler figure with the slightly more complicated second figure, we see two differences. First of all, more circuits (all those containing linear elements, in fact) have equivalent circuits that contain equivalents. Secondly, the terminal and source variables are now complex amplitudes, which carries the implicit assumption that the voltages and currents are single complex exponentials, all having the same frequency.

Example 3.4

[Figure 3.28: Simple RC Circuit. The source Vin drives R in series; the capacitor C lies across the output terminals, whose complex voltage and current are V and I.]

Let's find the Thévenin and Mayer-Norton equivalent circuits for Figure 3.28 (Simple RC Circuit). The open-circuit voltage and short-circuit current techniques still work, except we use impedances and complex amplitudes. The open-circuit voltage corresponds to the transfer function we have already found. When we short the terminals, the capacitor no longer has any effect on the circuit, and the short-circuit current Isc equals Vin/R. Consequently, we have

Veq = Vin/(1 + j2πfRC)
Ieq = Vin/R
Zeq = R/(1 + j2πfRC)

The equivalent impedance can also be found by setting the source to zero and finding the impedance using series and parallel combination rules. In our case, the resistor and capacitor are in parallel once the voltage source is removed (setting it to zero amounts to replacing it with a short circuit). Thus, Zeq = R ∥ (1/(j2πfC)) = R/(1 + j2πfRC). Again, we should check the units of our answer. Note in particular that j2πfRC must be dimensionless. Is it?

3.13 Transfer Functions 25

The ratio of the output and input amplitudes for Figure 3.29 (Simple Circuit), known as the transfer function or the frequency response, is given by

Vout/Vin = H(f) = 1/(j2πfRC + 1) (3.15)

Implicit in using the transfer function is that the input is a complex exponential, and the output is also a complex exponential having the same frequency. The transfer function reveals how the circuit modifies the input amplitude in creating the output amplitude; it completely describes how the circuit processes the input complex exponential to produce the output complex exponential. The circuit's function is thus summarized by the transfer function. In fact, circuits are often designed to meet transfer function specifications.

24 This content is available online at <http://cnx.org/content/m0030/2.20/>.
25 This content is available online at <http://cnx.org/content/m0028/2.20/>.
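Because H(f) is simply a complex-valued function of frequency, a transfer function like (3.15) can be tabulated directly. A minimal sketch (the element values are assumptions):

    import cmath, math

    def H(f, R=1e3, C=1e-6):
        # RC lowpass transfer function, eq. (3.15): H(f) = 1/(j 2 pi f RC + 1)
        return 1.0 / (1j * 2 * math.pi * f * R * C + 1)

    for f in (10.0, 159.0, 1000.0):   # below, near, and above 1/(2 pi RC)
        print(f, abs(H(f)), cmath.phase(H(f)))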
[Figure 3.29: Simple Circuit. A simple RC circuit with source vin and output voltage vout taken across the capacitor.]

[Figure 3.30: Magnitude and phase of the transfer function of the RC circuit shown in Figure 3.29 (Simple Circuit) when 1/(RC) = 1. (a) |H(f)| = 1/√((2πfRC)² + 1); (b) ∠H(f) = −arctan(2πfRC). The magnitude falls to 1/√2 at f = ±1/(2πRC), where the phase passes through ∓π/4.]

This transfer function has many important properties and provides all the insights needed to determine how the circuit functions. First of all, note that we can compute the frequency response for both positive and negative frequencies. Recall that sinusoids consist of the sum of two complex exponentials, one having the negative frequency of the other; we will consider how the circuit acts on a sinusoid soon. Do note that the magnitude has even symmetry: the negative frequency portion is a mirror image of the positive frequency portion, |H(−f)| = |H(f)|. The phase has odd symmetry: ∠H(−f) = −∠H(f). These properties of this specific example apply to all transfer functions associated with circuits. Consequently, we don't need to plot the negative frequency component; we know what it is from the positive frequency part.

Secondly, the magnitude equals 1/√2 of its maximum gain (1 at f = 0) when 2πfRC = 1 (the point at which the two terms in the denominator of the magnitude are equal). The frequency fc = 1/(2πRC) defines the boundary between two operating ranges.

• For frequencies below this frequency, the circuit does not much alter the amplitude of the complex exponential source.
• For frequencies greater than fc, the circuit strongly attenuates the amplitude. Thus, when the source frequency is in this range, the circuit's output has a much smaller amplitude than that of the source.

For these reasons, this frequency is known as the cutoff frequency. In this circuit the cutoff frequency depends only on the product of the resistance and the capacitance. Thus, a cutoff frequency of 1 kHz occurs when 1/(2πRC) = 10³, or RC = 10⁻³/(2π) = 1.59 × 10⁻⁴. Thus resistance-capacitance combinations of 1.59 kΩ and 100 nF or 10 Ω and 1.59 µF result in the same cutoff frequency.

The phase shift caused by the circuit at the cutoff frequency precisely equals −π/4. Thus, below the cutoff frequency, phase is little affected, but at higher frequencies the phase shift caused by the circuit approaches −π/2. This phase shift corresponds to the difference between a cosine and a sine.
We can use the transfer function to find the output when the input voltage is a sinusoid for two reasons. First of all, a sinusoid is the sum of two complex exponentials, each having a frequency equal to the negative of the other. Secondly, because the circuit is linear, superposition applies. If the source is a sine wave, we know that

vin(t) = A sin(2πft) = (A/(2j))(e^{j2πft} − e^{−j2πft}) (3.16)

Since the input is the sum of two complex exponentials, we know that the output is also a sum of two similar complex exponentials, the only difference being that the complex amplitude of each is multiplied by the transfer function evaluated at each exponential's frequency:

vout(t) = (A/(2j)) H(f) e^{j2πft} − (A/(2j)) H(−f) e^{−j2πft} (3.17)

As noted earlier, the transfer function is most conveniently expressed in polar form: H(f) = |H(f)| e^{j∠H(f)}. Furthermore, |H(−f)| = |H(f)| (even symmetry of the magnitude) and ∠H(−f) = −∠H(f) (odd symmetry of the phase). The output voltage expression simplifies to

vout(t) = (A/(2j)) |H(f)| e^{j2πft + j∠H(f)} − (A/(2j)) |H(f)| e^{−j2πft − j∠H(f)} = A |H(f)| sin(2πft + ∠H(f)) (3.18)

The circuit's output to a sinusoidal input is also a sinusoid, having a gain equal to the magnitude of the circuit's transfer function evaluated at the source frequency and a phase equal to the phase of the transfer function at the source frequency. It will turn out that this input-output relation description applies to any linear circuit having a sinusoidal source.

Exercise 3.13.1 (Solution on p. 117.)
This input-output property is a special case of a more general result. Show that if the source can be written as the imaginary part of a complex exponential, vin(t) = Im(V e^{j2πft}), the output is given by vout(t) = Im(V H(f) e^{j2πft}). Show that a similar result also holds for the real part.

The notion of impedance arises when we assume the sources are complex exponentials. This assumption may seem restrictive; what would we do if the source were a unit step? When we use impedances to find the transfer function between the source and the output variable, we can derive from it the differential equation that relates input and output. The differential equation applies no matter what the source may be. As we have argued, it is far simpler to use impedances to find the differential equation (because we can use series and parallel combination rules) than any other method. In this sense, we have not lost anything by temporarily pretending the source is a complex exponential. In fact we can also solve the differential equation using impedances! Thus, despite the apparent restrictiveness of impedances, assuming complex exponential sources is actually quite general.
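Equation (3.18) says the output sinusoid's amplitude and phase come straight from H(f). A minimal sketch for the RC circuit (the values are assumptions):

    import cmath, math

    def H(f, RC=1e-3):
        return 1.0 / (1j * 2 * math.pi * f * RC + 1)

    A, f = 1.0, 159.0                # unit-amplitude sine near the cutoff (assumed)
    gain = abs(H(f))                 # about 1/sqrt(2) at f = 1/(2 pi RC)
    shift = cmath.phase(H(f))        # about -pi/4 there
    # vout(t) = A * gain * sin(2*pi*f*t + shift), per eq. (3.18)
    print(gain, math.degrees(shift))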
3.14 Designing Transfer Functions 26

If the source consists of two (or more) signals, we know from linear system theory that the output voltage equals the sum of the outputs produced by each signal alone. In short, linear circuits are a special case of linear systems, and therefore superposition applies. In particular, suppose these component signals are complex exponentials, each of which has a frequency different from the others. The transfer function portrays how the circuit affects the amplitude and phase of each component, allowing us to understand how the circuit works on a complicated signal. Those components having a frequency less than the cutoff frequency pass through the circuit with little modification, while those having higher frequencies are suppressed. The circuit is said to act as a filter, filtering the source signal based on the frequency of each component complex exponential. Because low frequencies pass through the filter, we call it a lowpass filter to express more precisely its function.

We have also found the ease of calculating the output for sinusoidal inputs through the use of the transfer function. Once we find the transfer function, we can write the output directly, as indicated by the output of a circuit for a sinusoidal input (3.18).

Example 3.5

[Figure 3.31: RL circuit. The source vin drives the series combination of a resistor R and an inductor L; the output is the inductor current iout.]

Let's apply these results to a final example, in which the input is a voltage source and the output is the inductor current. The source voltage equals Vin = 2 cos(2π60t) + 3. We want the circuit to pass constant (offset) voltage essentially unaltered (save for the fact that the output is a current rather than a voltage) and remove the 60 Hz term. Because the input is the sum of two sinusoids (a constant is a zero-frequency cosine), our approach is to

1. find the transfer function using impedances;
2. use it to find the output due to each input component;
3. add the results;
4. find element values that accomplish our design criteria.

Because the circuit is a series combination of elements, let's use voltage divider to find the transfer function between Vin and V, the inductor's voltage, then use the v-i relation of the inductor to find its current:

Iout/Vin = (j2πfL/(R + j2πfL)) · (1/(j2πfL)) = 1/(j2πfL + R) = H(f) (3.19)

where the first factor is the voltage divider and the second is the inductor's admittance. [Do the units check?] The form of this transfer function should be familiar; it is a lowpass filter, and it will perform our desired function once we choose element values properly. The total output due to our source is

iout = 2|H(60)| cos(2π60t + ∠H(60)) + 3 · H(0) (3.20)

The constant term is easiest to handle: the output is given by 3|H(0)| = 3/R. Thus, the value we choose for the resistance will determine the scaling factor of how voltage is converted into current. For the 60 Hz component signal, the output current is 2|H(60)| cos(2π60t + ∠H(60)).

The cutoff frequency for this filter occurs when the real and imaginary parts of the transfer function's denominator equal each other: 2πfc L = R, which gives fc = R/(2πL). We want this cutoff frequency to be much less than 60 Hz. Suppose we place it at, say, 10 Hz. This specification would require the component values to be related by R/L = 20π = 62.8. The transfer function at 60 Hz would be

|H(60)| = |1/(j2π60L + R)| = (1/R)|1/(6j + 1)| = (1/R)(1/√37) ≈ 0.16 (1/R) (3.21)

which yields an attenuation (relative to the gain at zero frequency) of about 1/6, and results in a 60 Hz output amplitude of 0.33/R relative to the constant term's amplitude of 3/R. A factor of 10 relative size between the two components seems reasonable. Having a 100 mH inductor would require a 6.28 Ω resistor. An easily available resistor value is 6.8 Ω; thus, this choice results in cheaply and easily purchased parts. To make the resistance bigger would require a proportionally larger inductor. Unfortunately, even a 1 H inductor is physically large; consequently, low cutoff frequencies require small-valued resistors and large-valued inductors. The choice made here represents only one compromise.

The phase of the 60 Hz component will very nearly be −π/2, leaving the output to be (0.33/R) cos(2π60t − π/2) = (0.33/R) sin(2π60t). The waveforms for the input and output are shown in Figure 3.32 (Waveforms).

[Figure 3.32: Waveforms. Input and output waveforms for the example RL circuit when the element values are R = 6.28 Ω and L = 100 mH. Note that the sinusoid's phase has indeed shifted; the lowpass filter not only reduced the 60 Hz signal's amplitude, but also shifted its phase by 90°.]

26 This content is available online at <http://cnx.org/content/m0031/2.21/>.
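A minimal numerical check of Example 3.5's design numbers, using the chosen R = 6.28 Ω and L = 100 mH:

    import cmath, math

    R, L = 6.28, 0.100        # ohms, henries: cutoff fc = R/(2 pi L) = 10 Hz
    def H(f):
        return 1.0 / (R + 1j * 2 * math.pi * f * L)

    print(R / (2 * math.pi * L))              # cutoff frequency, 10 Hz
    print(abs(H(60)) / abs(H(0)))             # attenuation at 60 Hz, about 1/6
    print(2 * abs(H(60)), 3 * abs(H(0)))      # 60 Hz vs. constant-term amplitudes
    print(math.degrees(cmath.phase(H(60))))   # phase near -80 degrees, close to -90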
3.15 Formal Circuit Methods: Node Method 27

In some (complicated) cases, we cannot use the simplification techniques (such as parallel or series combination rules) to solve for a circuit's input-output relation. In other modules, we wrote v-i relations and Kirchhoff's laws haphazardly, solving them more on intuition than procedure. We need a formal method that produces a small, easy set of equations that lead directly to the input-output relation we seek. One such technique is the node method.

[Figure 3.33: Node Voltage. The source vin drives R1 toward node 2; R2 and R3 tie node 2 to the reference. The node voltages are e1 and e2.]

The node method begins by finding all nodes (places where circuit elements attach to each other) in the circuit. We call one of the nodes the reference node; the choice of reference node is arbitrary, but it is usually chosen to be a point of symmetry or the "bottom" node. For the remaining nodes, we define node voltages en that represent the voltage between the node and the reference. These node voltages constitute the only unknowns; all we need is a sufficient number of equations to solve for them. In our example, we have two node voltages. The very act of defining node voltages is equivalent to using all the KVL equations at your disposal. The reason for this simple, but astounding, fact is that a node voltage is uniquely defined regardless of what path is traced between the node and the reference. Because two paths between a node and reference have the same voltage, the sum of voltages around the loop equals zero.

In some cases, a node voltage corresponds exactly to the voltage across a voltage source. In such cases, the node voltage is specified by the source and is not an unknown. For example, in our circuit, e1 = vin; thus, we need only find one node voltage.

The equations governing the node voltages are obtained by writing KCL equations at each node having an unknown node voltage, using the v-i relations for each element. In our example, the only circuit equation is

(e2 − vin)/R1 + e2/R2 + e2/R3 = 0 (3.22)

A little reflection reveals that when writing the KCL equations for the sum of currents leaving a node, that node's voltage will always appear with a plus sign, and all other node voltages with a minus sign. Systematic application of this procedure makes it easy to write node equations and to check them before solving them. Also remember to check units at this point: every term should have units of current. In our example, solving for the unknown node voltage is easy:

e2 = (R2 R3/(R1 R2 + R1 R3 + R2 R3)) vin (3.23)

Have we really solved the circuit with the node method? Along the way, we have used KVL, KCL, and the v-i relations. Previously, we indicated that the set of equations resulting from applying these laws is necessary and sufficient. This result guarantees that the node method can be used to "solve" any circuit. One fallout of this result is that we must be able to find any circuit variable given the node voltages and sources. All circuit variables can be found using the v-i relations and voltage divider. For example, the current through R3 equals e2/R3.

[Figure 3.34: A current source iin drives R1; R2 connects node 1 to node 2, and R3 ties node 2 to the reference. The indicated current i flows through R3.]

The presence of a current source in the circuit does not affect the node method greatly; just include it in writing KCL equations as a current leaving the node. The circuit shown in Figure 3.34 has three nodes, requiring us to define two node voltages. The node equations are

e1/R1 + (e1 − e2)/R2 − iin = 0 (Node 1)
(e2 − e1)/R2 + e2/R3 = 0 (Node 2)

Note that the node voltage corresponding to the node for which we are writing KCL enters with a positive sign, the others with a negative sign, and that the units of each term are amperes. Rewrite these equations in the standard set-of-linear-equations form:

e1 (1/R1 + 1/R2) − e2 (1/R2) = iin
−e1 (1/R2) + e2 (1/R2 + 1/R3) = 0

Solving these equations gives

e1 = ((R2 + R3)/R3) e2
e2 = (R1 R3/(R1 + R2 + R3)) iin

To find the indicated current, we simply use i = e2/R3.

27 This content is available online at <http://cnx.org/content/m0032/2.22/>.
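The node equations are just a small linear system. A minimal sketch that solves the current-source circuit's standard-form equations (the element values are assumptions) and checks the closed-form answers:

    R1, R2, R3, iin = 1.0, 2.0, 3.0, 1.0       # assumed values
    a11, a12 = 1/R1 + 1/R2, -1/R2
    a21, a22 = -1/R2, 1/R2 + 1/R3
    b1, b2 = iin, 0.0
    det = a11 * a22 - a12 * a21                 # Cramer's rule for the 2x2 system
    e1 = (b1 * a22 - b2 * a12) / det
    e2 = (a11 * b2 - a21 * b1) / det
    print(e1, e2, e2 / R3)                      # e2 matches R1*R3*iin/(R1+R2+R3)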
Example 3.6: Node Method Example

[Figure 3.35: The circuit for Example 3.6: the source vin connects to node 1 through a 1 Ω resistor and to node 2 through a 2 Ω resistor; 1 Ω resistors tie each node to the reference, and a 1 Ω resistor joins nodes 1 and 2. The output current i flows through the 1 Ω resistor at node 2.]

In this circuit (Figure 3.35), we cannot use the series/parallel combination rules: the vertical resistor at node 1 keeps the two horizontal 1 Ω resistors from being in series, and the 2 Ω resistor prevents the two 1 Ω resistors at node 2 from being in series. We really do need the node method to solve this circuit! Despite having six elements, we need only define two node voltages. The node equations are

(e1 − vin)/1 + e1/1 + (e1 − e2)/1 = 0 (Node 1)
(e2 − vin)/2 + (e2 − e1)/1 + e2/1 = 0 (Node 2)

Solving these equations yields e1 = (6/13) vin and e2 = (5/13) vin. The output current equals e2/1 = (5/13) vin. One unfortunate consequence of using the elements' numeric values from the outset is that it becomes impossible to check units while setting up and solving equations.

Exercise 3.15.1 (Solution on p. 117.)
What is the equivalent resistance seen by the voltage source?

Node Method and Impedances

[Figure 3.36: Node Method and Impedances. Modification of the circuit shown previously to illustrate the node method and the effect of adding the resistor R2: Vin drives R1 toward the node with voltage E; C and R2 tie that node to the reference, and Vout is taken across R2.]

The node method applies to RLC circuits, without significant modification from the methods used on simple resistive circuits, if we use complex amplitudes. We rely on the fact that complex amplitudes satisfy KVL, KCL, and impedance-based v-i relations. In the example circuit, we define complex amplitudes for the input and output variables and for the node voltages. We need only one node voltage here, and its KCL equation is

(E − Vin)/R1 + E j2πfC + E/R2 = 0

with the result

E = (R2/(R1 + R2 + j2πf R1 R2 C)) Vin

To find the transfer function between input and output voltages, we compute the ratio E/Vin. The transfer function's magnitude and angle are

|H(f)| = R2/√((R1 + R2)² + (2πf R1 R2 C)²)
∠H(f) = −arctan(2πf R1 R2 C/(R1 + R2))

This circuit differs from the one shown previously (Figure 3.29: Simple Circuit) in that the resistor R2 has been added across the output. What effect has it had on the transfer function, which in the original circuit was a lowpass filter having cutoff frequency fc = 1/(2πR1C)? As shown in Figure 3.37 (Transfer Function), adding the second resistor has two effects: it lowers the gain in the passband (the range of frequencies for which the filter has little effect on the input) and increases the cutoff frequency.

[Figure 3.37: Transfer Function. Transfer functions of the circuits shown in Figure 3.36 (Node Method and Impedances), with R1 = 1, R2 = 1, and C = 1. Without R2 the passband gain is 1 and the cutoff is 1/(2πR1C); with R2 the passband gain is R2/(R1 + R2) and the cutoff is (R1 + R2)/(2πR1R2C).]

When R2 = R1, as shown on the plot, the passband gain becomes half of the original, and the cutoff frequency increases by the same factor. Thus, adding R2 provides a "knob" by which we can trade passband gain for cutoff frequency.

Exercise 3.15.2 (Solution on p. 117.)
We can change the cutoff frequency without affecting passband gain by changing the resistance in the original circuit. Does the addition of the R2 resistor help in circuit design?

3.16 Power Conservation in Circuits 28

Now that we have a formal method (the node method) for solving circuits, we can use it to prove a powerful result: KVL and KCL are all that are required to show that all circuits conserve power, regardless of what elements are used to build the circuit.

[Figure 3.38: Part of a general circuit to prove Conservation of Power. Three elements (1, 2, 3) carrying currents i1, i2, and i3 join at nodes a, b, and c.]

First of all, define node voltages for all nodes in a given circuit. Any node chosen as the reference will do. For example, in the portion of a large circuit (Figure 3.38) depicted here, we define node voltages for nodes a, b, and c. With these node voltages, we can express the voltage across any element in terms of them. For example, the voltage across element 1 is given by v1 = eb − ea. The instantaneous power for element 1 becomes

v1 i1 = (eb − ea) i1 = eb i1 − ea i1

Writing the power for the other elements, we have

v2 i2 = ec i2 − ea i2
v3 i3 = ec i3 − eb i3

When we add together the element power terms, we discover that once we collect terms involving a particular node voltage, it is multiplied by the sum of currents leaving the node minus the sum of currents entering. For example, for node b, we have eb(i3 − i1). We see that the currents that multiply each node voltage will obey KCL. Consequently, we conclude that the sum of element powers must equal zero in any circuit, regardless of the elements used to construct the circuit:

Σk vk ik = 0

28 This content is available online at <http://cnx.org/content/m17317/1.2/>.
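A minimal numeric illustration of Σk vk ik = 0, reusing the current-source circuit solved earlier (same assumed values); the power the source produces exactly balances what the resistors dissipate:

    R1, R2, R3, iin = 1.0, 2.0, 3.0, 1.0
    e1, e2 = 5.0/6.0, 0.5                      # node voltages found earlier
    p_R1 = e1**2 / R1                          # power consumed by each resistor
    p_R2 = (e1 - e2)**2 / R2
    p_R3 = e2**2 / R3
    p_src = -e1 * iin                          # the source consumes negative power
    print(p_R1 + p_R2 + p_R3 + p_src)          # 0.0 (up to rounding)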
The simplicity and generality with which we proved this result generalizes to other situations as well. In particular, when the sources are complex exponentials, note that the complex amplitudes of voltages and currents obey KVL and KCL, respectively. Consequently, we have Σk Vk Ik = 0. Furthermore, the complex conjugates of the currents also satisfy KCL, which means we also have Σk Vk Ik* = 0. And finally, we know that evaluating the real part of an expression is linear. Finding the real part of this power-conservation expression gives the result that average power is also conserved in any circuit:
note: Σk (1/2) Re(Vk Ik*) = 0

This proof of power conservation can be generalized in another very interesting way. All we need is a set of voltages that obey KVL and a set of currents that obey KCL. Thus, for a given circuit topology (the specific way elements are interconnected), the voltage and current sets need not arise from the same circuit. We can take a circuit and measure all the voltages. We can then make element-for-element replacements and, if the topology has not changed, measure a set of currents. The sum of the products of these element voltages and currents will also be zero! Even more interesting is the fact that the elements don't matter: the voltages and currents can even be measured at different times, and the sum of v-i products is still zero:

Σk vk(t1) ik(t2) = 0

3.17 Electronics 29

So far we have analyzed electrical circuits: the source signal has more power than the output variable, be it a voltage or a current. Power has not been explicitly defined, but no matter: resistors, capacitors, and inductors as individual elements certainly provide no power gain, and circuits built of them will not magically do so either. Such circuits are termed electrical, in distinction to those that do provide power gain: electronic circuits. Providing power gain, such as your stereo reading a CD and producing sound, is accomplished by semiconductor circuits that contain transistors. The basic idea of the transistor is to let the weak input signal modulate a strong current provided by a source of electrical power (the power supply) to produce a more powerful signal. A physical analogy is a water faucet: by turning the faucet back and forth, with the turning achieved by the input signal, the water flow varies accordingly, and has much more power than was expended in turning the handle. The waterpower results from the static pressure of the water in your plumbing, created by the water utility pumping the water up to your local water tower. The power supply is like the water tower, and the faucet is the transistor. Just as in this analogy, a power supply is a source of constant voltage, as the water tower is supposed to provide a constant water pressure.

A device that is much more convenient for providing gain (and other useful features as well) than the transistor is the operational amplifier, also known as the op-amp. An op-amp is an integrated circuit (a complicated circuit involving several transistors constructed on a chip) that provides a large voltage gain if you attach the power supply. We can model the op-amp with a new circuit element: the dependent source.

3.18 Dependent Sources 30

A dependent source is either a voltage or current source whose value is proportional to some other voltage or current in the circuit. Thus, there are four different kinds of dependent sources; to describe an op-amp, we need a voltage-dependent voltage source. However, the standard circuit-theoretical model for a transistor 31 contains a current-dependent current source.

[Figure 3.39: The dependent sources. Of the four possible dependent sources, depicted is a voltage-dependent voltage source (value kv) in the context of a generic circuit.]

Dependent sources do not serve as inputs to a circuit like independent sources. They are used to model active circuits: those containing electronic elements. The RLC circuits we have been considering so far are known as passive circuits.

[Figure 3.40: op-amp. The op-amp has four terminals to which connections can be made: inputs attach to nodes a and b, and the output is node c. As the circuit model on the right shows, the op-amp serves as an amplifier for the difference of the input node voltages, with input resistance Rin, output resistance Rout, and dependent source G(ea − eb).]

Figure 3.40 (op-amp) shows the circuit symbol for the op-amp and its equivalent circuit in terms of a voltage-dependent voltage source. Here, the output voltage equals an amplified version of the difference of node voltages appearing across its inputs. The dependent source model portrays how the op-amp works quite well. As in most active circuit schematics, the power supply is not shown, but it must be present for the circuit model to be accurate. Most operational amplifiers require both positive and negative supply voltages for proper operation.

Because dependent sources cannot be described as impedances, and because the dependent variable cannot "disappear" when you apply parallel/series combining rules, circuit simplifications such as current and voltage divider should not be applied in most cases. Analysis of circuits containing dependent sources essentially requires the use of formal methods, like the node method (Section 3.15). Using the node method for such circuits is not difficult, with node voltages defined across the source treated as if they were known (as with independent sources).

29 This content is available online at <http://cnx.org/content/m0035/2.14/>.
30 This content is available online at <http://cnx.org/content/m0053/2.8/>.
31 "Small Signal Model for Bipolar Transistor" <http://cnx.org/content/m1019/latest/>
Consider the circuit shown on the top in Figure 3.41 (feedback op-amp). Note that the op-amp is placed in the circuit "upside-down," with its inverting input at the top and serving as the only input. As we explore op-amps in more detail in the next section, this configuration will appear again and again, and its usefulness will be demonstrated.

[Figure 3.41: feedback op-amp. The top circuit depicts an op-amp in a feedback amplifier configuration: vin drives R into the inverting input, RF feeds the output back to that input, and the load RL sits across the output vout. On the bottom is the equivalent circuit, which integrates the op-amp circuit model into the circuit.]

To determine how the output voltage is related to the input voltage, we apply the node method. Only two node voltages, v and vout, need be defined; the remaining nodes are across sources or serve as the reference. The node equations are

(v − vin)/R + v/Rin + (v − vout)/RF = 0 (3.24)

(vout − (−G)v)/Rout + (vout − v)/RF + vout/RL = 0 (3.25)

Note that no special considerations were used in applying the node method to this dependent-source circuit. Solving these to learn how vout relates to vin yields

((RF Rout/(Rout − G RF)) (1/Rout + 1/RF + 1/RL)(1/R + 1/Rin + 1/RF) − 1/RF) vout = (1/R) vin (3.26)

This expression represents the general input-output relation for this circuit, known as the standard feedback configuration. Once we learn more about op-amps (Section 3.19), in particular what their typical element values are, the expression will simplify greatly. Do note that the units check, and that the parameter G of the dependent source is a dimensionless gain.

3.19 Operational Amplifiers 32

[Figure 3.42: Op-Amp. The op-amp circuit symbol and its equivalent circuit: inputs at nodes a and b, output at node c, with Rin, Rout, and the dependent source G(ea − eb).]

Op-amps not only have the circuit model shown in Figure 3.42 (Op-Amp), but their element values are very special.

• The input resistance, Rin, is typically large, on the order of 1 MΩ.
• The output resistance, Rout, is small, usually less than 100 Ω.
• The voltage gain, G, is large, exceeding 10⁵.

The large gain catches the eye; it suggests that an op-amp could turn a 1 mV input signal into a 100 V one. If you were to build such a circuit (attaching a voltage source to node a, attaching node b to the reference, and looking at the output) you would be disappointed. In dealing with electronic components, you cannot forget the unrepresented but needed power supply.

Unmodeled limitations imposed by power supplies: it is impossible for electronic components to yield voltages that exceed those provided by the power supply, or for them to yield currents that exceed the power supply's rating. Typical power supply voltages required for op-amp circuits are ±15 V. Attaching the 1 mV signal not only would fail to produce a 100 V signal, the resulting waveform would be severely distorted. While a desirable outcome if you are a rock & roll aficionado, high-quality stereos should not distort signals. Another consideration in designing circuits with op-amps is that these element values are typical: careful control of the gain can only be obtained by choosing a circuit so that its element values dictate the resulting gain, which must be smaller than that provided by the op-amp.

3.19.1 Inverting Amplifier

[Figure 3.43: opamp. The top circuit depicts the op-amp in the feedback amplifier configuration; the bottom circuit integrates the op-amp circuit model into it.]

The feedback configuration shown in Figure 3.43 (opamp) is the most common op-amp circuit for obtaining what is known as an inverting amplifier. The expression

((RF Rout/(Rout − G RF)) (1/Rout + 1/RF + 1/RL)(1/R + 1/Rin + 1/RF) − 1/RF) vout = (1/R) vin (3.27)

provides the exact input-output relationship. In choosing element values with respect to op-amp characteristics, we can simplify the expression dramatically.

• Make the load resistance, RL, much larger than Rout. This situation drops the term 1/RL from the second factor of (3.27).
• Make the resistor, R, smaller than Rin, which means that the 1/Rin term in the third factor is negligible.

With these two design criteria, the expression (3.27) becomes

((RF Rout/(Rout − G RF)) (1/Rout + 1/RF)(1/R + 1/RF) − 1/RF) vout = (1/R) vin (3.28)

Because the gain is large and the resistance Rout is small (smaller than RF), the first term in the product becomes −1/G, leaving us with

((−1/G)(1/R + 1/RF) − 1/RF) vout = (1/R) vin (3.29)

If we select the values of RF and R so that GR ≫ RF, this factor will no longer depend on the op-amp's inherent gain, and it will equal −1/RF. Under these conditions, we obtain the classic input-output relationship for the op-amp-based inverting amplifier:

32 This content is available online at <http://cnx.org/content/m0036/2.32/>.
vout = −(RF/R) vin (3.30)

Consequently, the gain provided by our circuit is entirely determined by our choice of the feedback resistor RF and the input resistor R. It is always negative, and can be less than one or greater than one in magnitude. It cannot exceed the op-amp's inherent gain, and should not produce such large outputs that distortion results (remember the power supply!). Interestingly, note that this relationship does not depend on the load resistance. This effect occurs because we use load resistances large compared to the op-amp's output resistance. This observation means that, if careful, we can place op-amp circuits in cascade without incurring the effect of succeeding circuits changing the behavior (transfer function) of previous ones; see this problem (Problem 3.44).

3.19.2 Active Filters

As long as design requirements are met, the input-output relation for the inverting amplifier also applies when the feedback and input circuit elements are impedances (resistors, capacitors, and inductors). Here is our inverting amplifier with impedances.

[Figure 3.44: opamp. The inverting amplifier with a feedback impedance ZF and an input impedance Z: Vout/Vin = −ZF/Z.]

As opposed to design with passive circuits, electronics is more flexible (a cascade of circuits can be built so that each has little effect on the others) and gain (an increase in power and amplitude) can result.
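Since the inverting configuration's gain is simply −ZF/Z, checking a design amounts to one complex division. A minimal sketch (the component values are assumptions):

    def inverting_gain(ZF, Z):
        # Inverting amplifier with impedances: Vout/Vin = -ZF/Z
        return -ZF / Z

    R, RF = 1e3, 1e4                                  # assumed resistor values
    print(inverting_gain(complex(RF), complex(R)))    # resistive case: gain -10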
Example 3.7
Let's design an op-amp circuit that functions as a lowpass filter. We want the transfer function between the output and input voltage to be

H(f) = K/(1 + jf/fc)

where K equals the passband gain and fc is the cutoff frequency. Let's assume that the inversion (negative gain) does not matter. With the transfer function of the above op-amp circuit in mind, let's consider some choices.

• ZF = K, Z = 1 + jf/fc. This choice means the feedback impedance is a resistor and that the input impedance is a series combination of an inductor and a resistor. In circuit design, we try to avoid inductors because they are physically bulkier than capacitors; the inductor rules this choice out.
• ZF = 1/(1 + jf/fc), Z = 1/K. Consider the reciprocal of the feedback impedance (its admittance): ZF⁻¹ = 1 + jf/fc. Since this admittance is a sum of admittances, this expression suggests the parallel combination of a resistor (value 1 Ω) and a capacitor (value 1/fc F). We have the right idea, but the values (like 1 Ω) are not right. Consider the general RC parallel combination; its admittance is 1/RF + j2πfC. Letting the input resistance equal R, the transfer function of the op-amp inverting amplifier now is

H(f) = −(RF/R)/(1 + j2πf RF C)

Thus, the gain equals RF/R and the cutoff frequency equals 1/(2πRF C).

To complete our example, let's assume we want a lowpass filter that emulates what the telephone companies do: signals transmitted over the telephone have an upper frequency limit of about 3 kHz. For the second design choice, the transfer function requirement means

RF C = 1/(2π × 3 × 10³) = 5.3 × 10⁻⁵

Let's also desire a voltage gain of ten: RF/R = 10, which means R = RF/10. Many choices for resistance and capacitance values are possible: a 1 µF capacitor and a 53 Ω resistor, 10 nF and 5.3 kΩ, or 10 pF and 5.3 MΩ would all theoretically give the right product. Unless you have a high-power application (this isn't one) or ask for high-precision components, costs don't depend heavily on component values as long as you stay close to standard values. For resistors, easily obtained values have the form r·10^d, where easily obtained values of r are 1, 1.5, 3.3, 4.7, and 6.8, and the decades span 0-8.

We also need to ask for less gain than the op-amp can provide itself. Because the feedback "element" is an impedance (a parallel resistor-capacitor combination), we require |ZF|/R = RF/(R|1 + j2πf RF C|) < 10⁵ for all frequencies of interest. As this impedance decreases with frequency, the design specification RF/R = 10 means that this criterion is easily met. Recall also that we must have R < Rin; as the op-amp's input impedance is about 1 MΩ, we don't want R too large, and this requirement means that the last choice for resistor/capacitor values won't work. Thus, the first two choices for the resistor and capacitor values (as well as many others in this range) will work well. Additional considerations, like parts cost, might also enter into the picture. Creating a specific transfer function with op-amps does not have a unique answer; the choice made here represents one that uses cheaply and easily purchased parts.

Exercise 3.19.1 (Solution on p. 117.)
What is special about the resistor values; why these rather odd-appearing values for r?
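A minimal check of the telephone-bandwidth design above, using one workable choice of values consistent with RF C = 5.3 × 10⁻⁵ (the specific numbers are illustrative):

    import math

    RF, C = 53.0, 1e-6            # one workable resistor/capacitor choice
    R = RF / 10.0                 # sets the passband gain to ten
    fc = 1.0 / (2 * math.pi * RF * C)

    def H(f):
        return -(RF / R) / (1 + 1j * 2 * math.pi * f * RF * C)

    print(fc)                     # about 3 kHz
    print(abs(H(0)), abs(H(fc)))  # gain 10 in the passband, 10/sqrt(2) at cutoff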
3.19.3 Intuitive Way of Solving Op-Amp Circuits
When we meet op-amp design specifications, we can simplify our circuit calculations greatly, so much so that we don't need the op-amp's circuit model to determine the transfer function. Here is our inverting amplifier.

[Figure 3.45: the inverting amplifier with the op-amp's equivalent circuit (Rin, Rout, dependent source -Gv) shown explicitly]
[Figure 3.46: the inverting amplifier redrawn using the op-amp's idealized behavior, with node voltage e and currents iin, i, and iF labeled]

When we take advantage of the op-amp's characteristics (large input impedance, large gain, and small output impedance), we note the two following important facts.

• The current iin must be very small. The voltage produced by the dependent source is 10^5 times the voltage v. Thus, the voltage v must be small: if the output is about 1 V, the voltage v = 10^-5 V, making the current iin = v/Rin = 10^-11 A. Consequently, we can ignore iin in our calculations and assume it to be zero.
• Because of this assumption (essentially no current flows through Rin), the voltage v must also be essentially zero. This means that in op-amp circuits, the voltage across the op-amp's input is basically zero.

Armed with these approximations, let's return to our original circuit as shown in Figure 3.46. The node voltage e is essentially zero, meaning that it is essentially tied to the reference node. Thus, the current through the resistor R equals vin/R. Furthermore, the feedback resistor appears in parallel with the load resistor. Because the current going into the op-amp is zero, all of the current flowing through R flows through the feedback resistor (iF = i)! The voltage across the feedback resistor v therefore equals vin RF / R. Because the left end of the feedback resistor is essentially attached to the reference node, the voltage across it equals the negative of that across the output resistor: vout = -v = -(vin RF / R). Using this approach makes analyzing new op-amp circuits much easier. When using this technique, check to make sure the results you obtain are consistent with the assumptions of essentially zero current entering the op-amp and nearly zero voltage across the op-amp's inputs.

Example 3.8
Let's try this analysis technique on a simple extension of the inverting amplifier configuration shown in Figure 3.47 (Two Source Circuit).

[Figure 3.47: Two-source, single-output op-amp circuit. Sources vin(1) and vin(2) drive resistors R1 and R2 into the inverting input; RF is the feedback resistor and RL the load.]

If either of the source-resistor combinations were not present, the inverting amplifier remains, and we know that transfer function. By superposition, we know that the input-output relation is

vout = -(RF/R1) vin(1) - (RF/R2) vin(2)   (3.31)

When we start from scratch, the node joining the three resistors is at the same potential as the reference, e ≈ 0, and the sum of currents flowing into that node is zero. Thus, the current i flowing in the resistor RF equals vin(1)/R1 + vin(2)/R2. Because the feedback resistor is essentially in parallel with the load resistor, the voltages must satisfy v = -vout. In this way, we obtain the input-output relation given above. What utility does this circuit have? Can the basic notion of the circuit be extended without bound?

3.20 The Diode
The resistor, capacitor, and inductor are linear circuit elements in that their v-i relations are linear in the mathematical sense. Voltage and current sources are (technically) nonlinear devices: stated simply, doubling the current through a voltage source does not double the voltage. A more blatant, and very useful, nonlinear circuit element is the diode. Its input-output relation has an exponential form:

i(t) = I0 (e^((q/kT) v(t)) - 1)   (3.32)

Here, the quantity q represents the charge of a single electron in coulombs, k is Boltzmann's constant, and T is the diode's temperature in K. At room temperature, the ratio kT/q = 25 mV. The constant I0 is the leakage current, and is usually very small. Viewing this v-i relation in Figure 3.48, the nonlinearity becomes obvious. When the voltage is positive, current flows easily through the diode. This situation is known as forward biasing. When we apply a negative voltage, the current is quite small, and equals I0, known as the leakage or reverse-bias current. A less detailed model for the diode has any positive current flowing through the diode when it is forward biased, and no current when negative biased. Note that the diode's schematic symbol looks like an arrowhead; the direction of current flow corresponds to the direction the arrowhead points.

[Figure 3.48: v-i relation and schematic symbol for the diode. Here, the diode parameters were room temperature and I0 = 1 µA.]
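To get a feel for how sharply the exponential turns on, here is a short numeric sketch (not from the text) that evaluates (3.32) with the parameters quoted for Figure 3.48 (room temperature, I0 = 1 µA):

```python
import numpy as np

I0 = 1e-6          # leakage current (A), as quoted for Figure 3.48
kT_over_q = 0.025  # 25 mV at room temperature

def diode_current(v):
    """Diode v-i relation: i = I0 * (exp(v / (kT/q)) - 1)."""
    return I0 * (np.exp(v / kT_over_q) - 1)

for v in [-0.5, -0.1, 0.0, 0.1, 0.2, 0.3]:
    print(f"v = {v:5.2f} V  ->  i = {diode_current(v): .3e} A")
# Negative voltages give i very close to -I0 (reverse bias); positive voltages
# make the current grow exponentially (forward bias).
```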
[Figure 3.49: diode circuit. The source vin drives a series diode feeding the resistor R, across which vout is defined.]

Because of the diode's nonlinear nature, we cannot use impedances nor series/parallel combination rules to analyze circuits containing them. The reliable node method can always be used; it only relies on KVL for its application, and KVL is a statement about voltage drops around a closed path regardless of whether the elements are linear or not. Thus, for this simple circuit we have

vout/R = I0 (e^((q/kT)(vin - vout)) - 1)   (3.33)

This equation cannot be solved in closed form. We must understand what is going on from basic principles, using computational and graphical aids. As an approximation, when vin is positive, current flows through the diode (so the diode is forward biased) so long as the voltage vout is smaller than vin. If the source is negative or vout "tries" to be bigger than vin, the diode is reverse-biased, and the reverse-bias current flows through the diode.

We can of course numerically solve Figure 3.49 (diode circuit) to determine the output voltage when the input is a sinusoid (a numerical sketch appears at the end of this section). To learn more, let's express this equation graphically. We plot each term as a function of vout for various values of the input voltage vin; where they intersect gives us the output voltage. The left side, the current through the output resistor, does not vary itself with vin, and thus we have a fixed straight line. As for the right side, which expresses the diode's v-i relation, the point at which the curve crosses the vout axis gives us the value of vin. Clearly, the two curves will always intersect just once for any vin, and for positive vin the intersection occurs at a value for vout smaller than vin. This reduction is smaller if the straight line has a shallower slope, which corresponds to using a bigger output resistor. For negative vin, the diode is reverse-biased and the output voltage equals -(R I0).

[Figure 3.50: input and output waveforms for the diode circuit, showing how the exponential nonlinearity distorts a sinusoidal input]

We need to detail the exponential nonlinearity to determine how the circuit distorts the input voltage waveform. What utility might this simple circuit have? The diode's nonlinearity cannot be escaped here, and the clearly evident distortion must have some practical application if the circuit were to be useful. This circuit, known as a half-wave rectifier, is present in virtually every AM radio twice, and each serves very different functions! We'll learn what functions later.

[Figure 3.51: logarithmic amplifier. An inverting op-amp configuration with input resistor R and a diode as the feedback element.]

Here is a circuit involving a diode that is actually simpler to analyze than the previous one. We know that the current through the resistor must equal that through the diode. Thus, the diode's current is proportional to the input voltage. As the voltage across the diode is related to the logarithm of its current, we see that the input-output relation is

vout = -(kT/q) ln(vin/(R I0) + 1)   (3.34)

Clearly, the name logarithmic amplifier is justified for this circuit.
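Here is the promised numerical sketch for the half-wave rectifier's node equation (3.33). It is illustrative only; the parameter values (I0 = 1 µA, kT/q = 25 mV, R = 1 kΩ) are assumptions, and bisection is used because the left and right sides of (3.33), viewed as functions of vout, cross exactly once:

```python
import numpy as np

I0, kT_over_q, R = 1e-6, 0.025, 1e3   # assumed parameter values

def solve_vout(vin):
    """Bisect f(vout) = diode current - resistor current; the root is the output."""
    f = lambda vout: I0 * (np.exp((vin - vout) / kT_over_q) - 1) - vout / R
    lo, hi = -R * I0, max(vin, 1e-9)   # vout always lies between -R*I0 and vin
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:                 # resistor current too small: raise vout
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# One period of a sinusoidal input: positive peaks pass (slightly reduced),
# negative half-cycles are clipped near -R*I0.
for t in np.linspace(0, 1, 9):
    vin = np.sin(2 * np.pi * t)
    print(f"vin = {vin: .3f} V -> vout = {solve_vout(vin): .4f} V")
```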
3.21 Analog Signal Processing Problems

Problem 3.1: Simple Circuit Analysis
[Figure 3.52: (a) Circuit a, (b) Circuit b, (c) Circuit c. Each is driven by the current i, with the voltage v indicated; circuit c contains a capacitor-inductor series combination.]
For each circuit shown in Figure 3.52, the current i equals cos(2πt).
a) What is the voltage across each element and what is the voltage v in each case?
b) For the last circuit, are there element values that make the voltage v equal zero for all time? If so, what element values work?
c) Again, for the last circuit, if zero voltage were possible, what circuit element could substitute for the capacitor-inductor series combination that would yield the same voltage?

Problem 3.2: Solving Simple Circuits
[Figure 3.53: (a) Circuit A, a voltage source vin driving resistors R1 and R2, with i1 indicated; (b) Circuit B, a 15 A current source, a 20 Ω resistor, and a load resistor RL.]
a) Write the set of equations that govern Circuit A's behavior.
b) Solve these equations for i1: in other words, express this current in terms of element and source values by eliminating non-source voltages and currents.
c) For Circuit B, find the value for RL that results in a current of 5 A passing through it.
d) What is the power dissipated by the load resistor RL in this case?

Problem 3.3: Equivalent Resistance
For each of the following circuits (Figure 3.54), find the equivalent resistance using series and parallel combination rules.
[Figure 3.54: (a) circuit a, (b) circuit b, (c) circuit c, (d) circuit d. Resistor networks built from R1-R5 and unit-valued resistors.]
Calculate the conductance seen at the terminals for circuit (c) in terms of each element's conductance. Compare this equivalent conductance formula with the equivalent resistance formula you found for circuit (b). How is the circuit (c) derived from circuit (b)?

Problem 3.4: Superposition Principle
One of the most important consequences of circuit laws is the Superposition Principle: The current or voltage defined for any element equals the sum of the currents or voltages produced in the element by the independent sources. This Principle has important consequences in simplifying the calculation of circuit variables in multiple source circuits.
[Figure 3.55: a circuit driven by both a voltage source vin and a current source iin, with the current i indicated]
a) For the depicted circuit (Figure 3.55), find the indicated current using any technique you like (you should use the simplest).
b) You should have found that the current i is a linear combination of the two source values: i = C1 vin + C2 iin. This result means that we can think of the current as a superposition of two components, each of which is due to a source. We can find each component by setting the other sources to zero. Thus, to find the voltage source component, you can set the current source to zero (an open circuit) and use the usual tricks. To find the current source component, you would set the voltage source to zero (a short circuit) and find the resulting current. Calculate the total current i using the Superposition Principle. Is applying the Superposition Principle easier than the technique you used in part (1)?
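The Superposition Principle itself is easy to check numerically. The sketch below uses an assumed two-source resistive circuit (R1 in series with the voltage source, R2 to ground, and the current source injecting into their common node; these are illustrative elements, not those of Figure 3.55) and verifies that the total response equals the sum of the single-source responses:

```python
import numpy as np

R1, R2 = 2.0, 3.0   # assumed resistor values (ohms), for illustration only

def branch_current(vin, iin):
    """Node equation: (vA - vin)/R1 + vA/R2 = iin; return the current through R2."""
    vA = (vin / R1 + iin) / (1 / R1 + 1 / R2)
    return vA / R2

vin, iin = 5.0, 1.5
i_both = branch_current(vin, iin)
i_v = branch_current(vin, 0.0)   # current source zeroed (open circuit)
i_i = branch_current(0.0, iin)   # voltage source zeroed (short circuit)
print(i_both, i_v + i_i)         # the two agree: i = C1*vin + C2*iin
assert np.isclose(i_both, i_v + i_i)
```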
Problem 3.5: Current and Voltage Divider
Use current or voltage divider rules to calculate the indicated circuit variables in Figure 3.56.
[Figure 3.56: (a) circuit a, (b) circuit b, (c) circuit c]

Problem 3.6: Thévenin and Mayer-Norton Equivalents
Find the Thévenin and Mayer-Norton equivalent circuits for the following circuits (Figure 3.57).
[Figure 3.57: (a) circuit a, (b) circuit b, (c) circuit c]

Problem 3.7: Detective Work
In the depicted circuit (Figure 3.58), the circuit N1 has the v-i relation v1 = 3 i1 + 7 when is = 2.
[Figure 3.58: circuit N1 connected to network N2, which contains the source is and the resistor R; i1 and v1 are defined at the interface]
a) Find the Thévenin equivalent circuit for circuit N2.
b) With is = 2, determine R such that i1 = -1.

Problem 3.8: Bridge Circuits
Circuits having the form of Figure 3.59 are termed bridge circuits.
[Figure 3.59: bridge circuit. The current source iin drives R1-R4 arranged as a bridge, with the current i and the voltage vout indicated.]
a) What resistance does the current source see when nothing is connected to the output terminals?
b) What resistor values, if any, will result in a zero voltage for vout?
c) Assume R1 = 1 Ω, R2 = 2 Ω, R3 = 2 Ω and R4 = 4 Ω. Find the current i when the current source iin is Im((4 + 2j) e^(j2π20t)). Express your answer as a sinusoid.

Problem 3.9: Cartesian to Polar Conversion
Convert the following expressions into polar form. Plot their location in the complex plane.
a) (1 + √(-3))^2
b) 3 + j4
c) (2 - j6/√3) / (2 + j6/√3)
d) (4 - j3)(1 + j(1/2))
e) 3e^(jπ) + 4e^(jπ/2)
f) (√3 + j) 2√2 e^(-jπ/4)
g) 3 / (1 + j3π)

Problem 3.10: The Complex Plane
The complex variable z is related to the real variable u according to z = 1 + e^(ju).
• Sketch the contour of values z takes on in the complex plane.
• What are the maximum and minimum values attainable by |z|?
• Sketch the contour the rational function (z - 1)/(z + 1) traces in the complex plane.

Problem 3.11: Cool Curves
In the following expressions, the variable x runs from zero to infinity. What geometric shapes do the following trace in the complex plane?
a) e^(jx)
b) 1 + e^(jx)
c) e^(-x) e^(jx)
d) e^(jx) + e^(j(x + π/4))
Problem 3.12: Trigonometric Identities and Complex Exponentials
Show the following trigonometric identities using complex exponentials. In many cases, they were derived using this approach.
a) sin(2u) = 2 sin(u) cos(u)
b) cos^2(u) = (1 + cos(2u))/2
c) cos^2(u) + sin^2(u) = 1
d) d/du (sin(u)) = cos(u)

Problem 3.13: Transfer Functions
Find the transfer function relating the complex amplitudes of the indicated variable and the source shown in Figure 3.60. Plot the magnitude and phase of the transfer function.
[Figure 3.60: (a) circuit a, (b) circuit b, (c) circuit c, (d) circuit d. Each is driven by vin, with an output v or iout indicated.]

Problem 3.14: Using Impedances
Find the differential equation relating the indicated variable to the source(s) using impedances for each circuit shown in Figure 3.61.
[Figure 3.61: (a) circuit a and (b) circuit b, containing R1, R2, L, C with sources vin and iin; (c) circuit c and (d) circuit d, containing L1, L2, R, C with source iin]

Problem 3.15: Measurement Chaos
The following simple circuit (Figure 3.62) was constructed but the signal measurements were made haphazardly. When the source was sin(2πf0 t), the current i(t) equaled (√2/3) sin(2πf0 t + π/4) and the voltage v2(t) = (1/3) sin(2πf0 t).
[Figure 3.62: source vin(t) driving series impedances Z1 and Z2, with v1(t) across Z1 and v2(t) across Z2]
a) What is the voltage v1(t)?
b) Find the impedances Z1 and Z2.
c) Construct these impedances from elementary circuit elements.

Problem 3.16: Transfer Functions
In the following circuit (Figure 3.63), the voltage source equals vin(t) = 10 sin(t/2).
[Figure 3.63: the circuit, with vout indicated]
a) Find the transfer function between the source and the indicated output voltage.
b) For the given source, find the output voltage.

Problem 3.17: A Simple Circuit
You are given this simple circuit (Figure 3.64).
[Figure 3.64: current source iin driving a resistive network, with iout indicated]
a) What is the transfer function between the source and the indicated output current?
b) If the output current is measured to be cos(2t), what was the source?

Problem 3.18: Circuit Design
[Figure 3.65: a circuit containing R, C, and L, with input vin and output vout]
a) Find the transfer function between the input and the output voltages for the circuits shown in Figure 3.65.
b) At what frequency does the transfer function have a phase shift of zero? What is the circuit's gain at this frequency?
c) Specifications demand that this circuit have an output impedance (its equivalent impedance) less than 8 Ω for frequencies above 1 kHz, the frequency at which the transfer function is maximum. Find element values that satisfy this criterion.

Problem 3.19: Equivalent Circuits and Power
Suppose we have an arbitrary circuit of resistors that we collapse into an equivalent resistor using the series and parallel rules. Is the power dissipated by the equivalent resistor equal to the sum of the powers dissipated by the actual resistors comprising the circuit? Let's start with simple cases and build up to a complete proof.
a) Suppose resistors R1 and R2 are connected in parallel. Show that the power dissipated by R1 ∥ R2 equals the sum of the powers dissipated by the component resistors.
b) Now suppose R1 and R2 are connected in series. Show the same result for this combination.
c) Use these two results to prove the general result we seek.
Problem 3.20: Power Transmission
The network shown in the figure represents a simple power transmission system. The generator produces 60 Hz and is modeled by a simple Thévenin equivalent. The transmission line consists of a long length of copper wire and can be accurately described as a 50 Ω resistor.
[Figure 3.66: (a) Simple power transmission system: power generator (Vg, Rs), lossy power transmission line (RT), and a 100 Ω load; (b) Modified load circuit]
a) Determine the load current IL and the average power the generator must produce so that the load receives 1,000 watts of average power. Why does the generator need to generate more than 1,000 watts of average power to meet this requirement?
b) Suppose the load is changed to that shown in the second figure. Now how much power must the generator produce to meet the same power requirement? Why is it more than it had to produce to meet the requirement for the resistive load?
c) The load can be compensated to have a unity power factor (see exercise (Exercise 3.11.2)) so that the voltage and current are in phase for maximum power efficiency. The compensation technique is to place a circuit in parallel to the load circuit. What element works and what is its value?
d) With this compensated circuit, how much power must the generator produce to deliver 1,000 watts of average power to the load?

Problem 3.21: Optimal Power Transmission
The following figure (Figure 3.67) shows a general model for power transmission. The power generator is represented by a Thévenin equivalent and the load by a simple impedance. In most applications, the source components are fixed while there is some latitude in choosing the load.
[Figure 3.67: source Vg with internal impedance Zg driving a load impedance ZL]
a) Suppose we wanted to maximize "voltage transmission": make the voltage across the load as large as possible. What choice of load impedance creates the largest load voltage? What is the largest load voltage?
b) If we wanted the maximum current to pass through the load, what would we choose the load impedance to be? What is this largest current?
c) What choice for the load impedance maximizes the average power dissipated in the load? What is the most power the generator can deliver?
note: One way to maximize a function of a complex variable is to write the expression in terms of the variable's real and imaginary parts, evaluate derivatives with respect to each, set both derivatives to zero and solve the two equations simultaneously.
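For Problem 3.21(c), the classic answer is the conjugate match, ZL = Zg*. A small grid search (a sketch with assumed Vg and Zg values, following the note's advice of treating the real and imaginary parts separately) makes this plausible without any calculus:

```python
import numpy as np

Vg = 10.0        # assumed source amplitude (peak)
Zg = 50 + 30j    # assumed Thevenin impedance of the generator

def load_power(ZL):
    """Average power in the load: P = 0.5*|I|^2*Re(ZL), with I = Vg/(Zg+ZL)."""
    I = Vg / (Zg + ZL)
    return 0.5 * abs(I) ** 2 * ZL.real

R = np.linspace(1, 200, 400)        # candidate load resistances
X = np.linspace(-200, 200, 801)     # candidate load reactances
RR, XX = np.meshgrid(R, X)
P = np.vectorize(load_power)(RR + 1j * XX)
idx = np.unravel_index(P.argmax(), P.shape)
print("best ZL ~", RR[idx] + 1j * XX[idx])   # close to conj(Zg) = 50 - 30j
print("P_max   ~", P.max(), "vs |Vg|^2/(8*Rg) =", abs(Vg) ** 2 / (8 * Zg.real))
```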
Problem 3.22: Big is Beautiful
Sammy wants to choose speakers that produce very loud music. He has an amplifier and notices that the speaker terminals are labeled "8 Ω source."
a) What does this mean in terms of the amplifier's equivalent circuit?
b) Any speaker Sammy attaches to the terminals can be well-modeled as a resistor. Choosing a speaker amounts to choosing the values for the resistor. What choice would maximize the voltage across the speakers?
c) Sammy decides that maximizing the power delivered to the speaker might be a better choice. What values for the speaker resistor should be chosen to maximize the power delivered to the speaker?

Problem 3.23: Sharing a Channel
Two transmitter-receiver pairs want to share the same digital communications channel. The transmitter signals will be added together by the channel. Receiver design is greatly simplified if first we remove the unwanted transmission (as much as possible). Each transmitter signal has the form
xi(t) = A sin(2πfi t), 0 ≤ t ≤ T
where the amplitude is either zero or A and each transmitter uses its own frequency fi. Each frequency is harmonically related to the bit interval duration T, where transmitter 1 uses the frequency 1/T. The datarate is 10 Mbps.
a) Draw a block diagram that expresses this communication scenario.
b) Find circuits that the receivers could employ to separate unwanted transmissions. Assume the received signal is a voltage and the output is to be a voltage as well.
c) Find the second transmitter's frequency so that the receivers can suppress the unwanted transmission by at least a factor of ten.

Problem 3.24: Circuit Detective Work
In the lab, the open-circuit voltage measured across an unknown circuit's terminals equals sin(t). When a 1 Ω resistor is placed across the terminals, a voltage of (1/√2) sin(t + π/4) appears.
a) What is the Thévenin equivalent circuit?
b) What voltage will appear if we place a 1 F capacitor across the terminals?

Problem 3.25: Mystery Circuit
We want to determine as much as we can about the circuit lurking in the impenetrable box shown in Figure 3.68. A voltage source vin = 2 volts has been attached to the left-hand terminals, leaving the right terminals for tests and measurements.
[Figure 3.68: a box labeled "Resistors" with the source vin at the left and the variables i and v at the right-hand terminals]
a) Sammy measures v = 10 volts when a 1 Ω resistor is attached to the terminals. Samantha says he is wrong. Who is correct and why?
b) When nothing is attached to the right-hand terminals, a voltage of v = 1 volt is measured. What circuit could produce this output?
c) When a current source is attached so that i = 2 amp, the voltage v is now 3 volts. What resistor circuit would be consistent with this and the previous part?

Problem 3.26: More Circuit Detective Work
The left terminal pair of a two terminal-pair circuit is attached to a testing circuit. The test source equals vin(t) = sin(t) (Figure 3.69).
[Figure 3.69: testing circuit. vin in series with a 1 Ω resistor drives the unknown circuit, with v and i defined at the right-hand terminals.]
We make the following measurements.
• With nothing attached to the terminals on the right, the voltage v(t) equals (1/√2) cos(t + π/4).
• When a wire is placed across the terminals on the right, the current i(t) was -sin(t).
a) What is the impedance seen from the terminals on the right?
b) Find the voltage v(t) if a current source is attached to the terminals on the right so that i(t) = sin(t).

Problem 3.27: Linear, Time-Invariant Systems
For a system to be completely characterized by a transfer function, it needs not only to be linear, but also to be time-invariant. A system is said to be time-invariant if delaying the input delays the output by the same amount. Mathematically, if S(x(t)) = y(t), meaning y(t) is the output of the system S(•) when x(t) is the input, S(•) is time-invariant if S(x(t - τ)) = y(t - τ) for all delays τ and all inputs x(t). Note that both linear and nonlinear systems have this property. For example, a system that squares its input is time-invariant.
a) Show that if a circuit has fixed circuit elements (their values don't change over time), its input-output relationship is time-invariant. Hint: Consider the differential equation that describes a circuit's input-output relationship. What is its general form? Examine the derivative(s) of delayed signals.
b) Show that impedances cannot characterize time-varying circuit elements (R, L, and C). Consequently, show that linear, time-varying systems do not have a transfer function.
c) Determine the linearity and time-invariance of the following. Find the transfer function of the linear, time-invariant (LTI) one(s).
i) diode
ii) y(t) = x(t) sin(2πf0 t)
iii) y(t) = x(t - τ0)
iv) y(t) = x(t) + N(t)
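The time-invariance test in Problem 3.27 is easy to explore numerically (a sketch, using assumed signals and delays): delay the input, run the system, and compare with the delayed output. The modulator y(t) = x(t) sin(2πf0 t) fails the test; the pure delay passes it.

```python
import numpy as np

t = np.linspace(0, 2, 2001)
x = np.sin(2 * np.pi * 3 * t)
shift = 100                              # input delay, in samples

def delayed(sig, n):
    """Delay a sampled signal by n samples (zero-padded at the start)."""
    return np.concatenate([np.zeros(n), sig[:-n]])

f0 = 5.0
modulator = lambda sig: sig * np.sin(2 * np.pi * f0 * t)   # y(t) = x(t) sin(2*pi*f0*t)
pure_delay = lambda sig: delayed(sig, 50)                  # y(t) = x(t - tau0)

for name, system in [("modulator", modulator), ("pure delay", pure_delay)]:
    err = np.max(np.abs(system(delayed(x, shift)) - delayed(system(x), shift)))
    print(f"{name:10s}: max |S(x(t-T)) - y(t-T)| = {err:.3f}")
# The modulator's error is large (time-varying system); the delay's is zero.
```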
Problem 3.28: Long and Sleepless Nights
Sammy went to lab after a long, sleepless night, and constructed the circuit shown in Figure 3.70. He cannot remember what the circuit, represented by the impedance Z, was. Clearly, this forgotten circuit is important as the output is the current passing through it.
[Figure 3.70: source vin in series with R, with C and the impedance Z completing the circuit; the output is iout through Z]
a) What is the Thévenin equivalent circuit seen by the impedance?
b) In searching his notes, Sammy finds that the circuit is to realize the transfer function
H(f) = 1 / (j10πf + 2)
Find the impedance Z as well as values for the other circuit elements.

Problem 3.29: A Testing Circuit
The simple circuit here (Figure 3.71) was given on a test.
[Figure 3.71: source vin driving the impedance Z in series with a 1 Ω resistor, across which vout is defined]
When the voltage source is 5 sin(t), the current i(t) = √2 cos(t - arctan(2) - π/4).
a) What is the voltage vout(t)?
b) What is the impedance Z at the frequency of the source?

Problem 3.30: Black-Box Circuit
You are given a circuit (Figure 3.72) that has two terminals for attaching circuit elements.
[Figure 3.72: a box labeled "Circuit" with terminal voltage v(t) and terminal current i(t)]
When you attach a voltage source equaling sin(t) to the terminals, the current through the source equals 4 sin(t + π/4) - 2 sin(4t). When no source is attached (open-circuited terminals), the voltage across the terminals has the form A sin(4t + φ).
a) What will the terminal current be when you replace the source by a short circuit?
b) If you were to build a circuit that was identical (from the viewpoint of the terminals) to the given one, what would your circuit be?
c) For your circuit, what are A and φ?

Problem 3.31: Solving a Mystery Circuit
Sammy must determine as much as he can about a mystery circuit by attaching elements to the terminal and measuring the resulting voltage. When he attaches a 1 Ω resistor to the circuit's terminals, he measures the voltage across the terminals to be 3 sin(t). When he attaches a 1 F capacitor across the terminals, the voltage is now 3√2 sin(t - π/4).
a) What voltage should he measure when he attaches nothing to the mystery circuit?
b) What voltage should Sammy measure if he doubled the size of the capacitor to 2 F and attached it to the circuit?

Problem 3.32: Find the Load Impedance
The depicted circuit (Figure 3.73) has a transfer function between the output voltage and the source equal to
H(f) = -8π²f² / (8π²f² + 4 + j6πf)
[Figure 3.73: source vin driving a network that includes the load impedance ZL, with vout indicated]
a) Sketch the magnitude and phase of the transfer function.
b) At what frequency does the phase equal π/2?
c) Find a circuit that corresponds to this load impedance. Is your answer unique? If so, show it to be so; if not, give another example.

Problem 3.33: Analog Hum Rejection
Hum refers to corruption from wall socket power that frequently sneaks into circuits. Hum gets its name because it sounds like a persistent humming sound. We want to find a circuit that will remove hum from any signal. A Rice engineer suggests using a simple voltage divider circuit (Figure 3.74) consisting of two series impedances.
[Figure 3.74: voltage divider. Vin drives Z1 in series with Z2, with Vout taken across Z2.]
a) The impedance Z1 is a resistor. The Rice engineer must decide between two circuits (Figure 3.75) for the impedance Z2. Which of these will work?
[Figure 3.75: the two candidate circuits for Z2, a series LC combination and a parallel LC combination]
b) Picking one circuit that works, choose circuit element values that will remove hum.
c) Sketch the magnitude of the resulting frequency response.

Problem 3.34: An Interesting Circuit
[Figure 3.76: current source iin driving a resistive network, with vout indicated]
a) For the circuit shown in Figure 3.76, find the transfer function.
b) What is the output voltage when the input has the form iin = 5 sin(2000πt)?
Problem 3.35: A Simple Circuit
You are given the depicted circuit (Figure 3.77).
[Figure 3.77: current source iin driving a network of unit-valued elements, with vout indicated]
a) What is the transfer function between the source and the output voltage?
b) What will the voltage be when the source equals sin(t)?
c) Many function generators produce a constant offset in addition to a sinusoid. If the source equals 1 + sin(t), what is the output voltage?

Problem 3.36: An Interesting and Useful Circuit
The depicted circuit (Figure 3.78) has interesting properties, which are exploited in high-performance oscilloscopes. The portion of the circuit labeled "Oscilloscope" represents the scope's input impedance; R2 = 1 MΩ and C2 = 30 pF (note the label under the channel 1 input in the lab's oscilloscopes). A probe is a device to attach an oscilloscope to a circuit, and it has the indicated circuit inside it.
[Figure 3.78: probe (R1 in parallel with C1) in series with the oscilloscope input (R2 in parallel with C2), with vin at the probe tip and vout across the scope input]
a) Suppose for a moment that the probe is merely a wire and that the oscilloscope is attached to a circuit that has a resistive Thévenin equivalent impedance. What would be the effect of the oscilloscope's input impedance on measured voltages?
b) Using the node method, find the transfer function relating the indicated voltage to the source when the probe is used.
c) Plot the magnitude and phase of this transfer function when R1 = 9 MΩ and C1 = 2 pF.
d) For a particular relationship among the element values, the transfer function is quite simple. Find that relationship and describe what is so special about it.
e) The arrow through C1 indicates that its value can be varied. Select the value for this capacitor to make the special relationship valid. What is the impedance seen by the circuit being measured for this special value?

Problem 3.37: A Circuit Problem
You are given the depicted circuit (Figure 3.79).
[Figure 3.79: source vin driving an RLC network, with v indicated]
a) Find the differential equation relating the output voltage to the source.
b) What is the impedance seen by the capacitor?

Problem 3.38: Analog Computers
Because the differential equations arising in circuits resemble those that describe mechanical motion, we can use circuit models to describe mechanical systems. An ELEC 241 student wants to understand the suspension system on his car. Without a suspension, the car's body moves in concert with the bumps in the road. A well-designed suspension system will smooth out bumpy roads, reducing the car's vertical motion. If the bumps are very gradual (think of a hill as a large but very gradual bump), the car's vertical motion should follow that of the road. The student wants to find a simple circuit that will model the car's motion. He is trying to decide between two circuit models (Figure 3.80). Here, road and car displacements are represented by the voltages vroad(t) and vcar(t), respectively.
[Figure 3.80: the two candidate circuits relating vroad to vcar]
a) Which circuit would you pick? Why?
b) For the circuit you picked, what will be the amplitude of the car's motion if the road has a displacement given by vroad(t) = 1 + sin(2t)?

Problem 3.39: Transfer Functions and Circuits
You are given the depicted network (Figure 3.81).
[Figure 3.81: source vin driving an RC network, with vout indicated]
a) Find the transfer function between Vin and Vout.
b) Sketch the magnitude and phase of your transfer function. Label important frequency, amplitude and phase values.
c) Find vout(t) when vin(t) = sin(t/2 + π/4).

Problem 3.40: Fun in the Lab
You are given an unopenable box that has two terminals sticking out. You assume the box contains a circuit. You measure the voltage sin(t + π/4) across the terminals when nothing is connected to them and the current √2 cos(t) when you place a wire across the terminals.
a) Find a circuit that has these characteristics.
b) You attach a 1 H inductor across the terminals. What voltage do you measure?

Problem 3.41: Dependent Sources
Find the voltage vout in each of the depicted circuits (Figure 3.82).
[Figure 3.82: (a) circuit a, a current source iin with R1, R2, RL and the dependent source βib; (b) circuit b, containing the dependent source 3i]

Problem 3.42: Operational Amplifiers
Find the transfer function between the source voltage(s) and the indicated output voltage for the circuits shown in Figure 3.83.
[Figure 3.83: (a) op-amp a, (b) op-amp b, (c) op-amp c, (d) op-amp d]

Problem 3.43: Op-Amp Circuit
The following circuit (Figure 3.84) is claimed to serve a useful purpose.
[Figure 3.84: op-amp circuit with resistors R and capacitors C, input Vin, and output current Iout through the load RL]
a) What is the transfer function relating the complex amplitude of the output signal, the current Iout, to the complex amplitude of the input, the voltage Vin?
b) What equivalent circuit does the load resistor RL see?
c) Find the output current when vin = V0 e^(-t/τ).
Problem 3.44: Why Op-Amps are Useful
The circuit (Figure 3.85) of a cascade of op-amp circuits illustrates the reason why op-amp realizations of transfer functions are so useful.
[Figure 3.85: cascade of two inverting op-amp stages, with impedances Z1 and Z2 around the first op-amp and Z3 and Z4 around the second]
a) Find the transfer function relating the complex amplitude of the voltage vout(t) to the source. Show that this transfer function equals the product of each stage's transfer function.
b) What is the load impedance appearing across the first op-amp's output?
c) Figure 3.86 illustrates that sometimes designs can go wrong. Find the transfer function for this op-amp circuit (Figure 3.86), and then show that it can't work! Why can't it?
[Figure 3.86: a flawed op-amp circuit built from a 1 µF capacitor, a 1 kΩ resistor, a 10 nF capacitor, and a 4.7 kΩ resistor]

Problem 3.45: Operational Amplifiers
Consider the depicted circuit (Figure 3.87).
[Figure 3.87: two-stage op-amp circuit with elements R1, C1, R2, C2, R3, and R4]
a) Find the transfer function relating the voltage vout(t) to the source.
b) In particular, R1 = 530 Ω, C1 = 1 µF, R2 = 5.3 kΩ, C2 = 0.01 µF, and R3 = R4 = 5.3 kΩ. Characterize the resulting transfer function and determine what use this circuit might have.

Problem 3.46: Designing a Bandpass Filter
We want to design a bandpass filter that has the transfer function

H(f) = 10 · j(f/fl) / ((j(f/fl) + 1)(j(f/fh) + 1))

Here, fl is the cutoff frequency of the low-frequency edge of the passband and fh is the cutoff frequency of the high-frequency edge. We want fl = 1 kHz and fh = 10 kHz.
a) Plot the magnitude and phase of this frequency response. Label important amplitude and phase values and the frequencies at which they occur.
b) Design a bandpass filter that meets these specifications. Specify component values.
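As a sanity check on the specification in Problem 3.46 (a sketch; it evaluates only the stated transfer function, with no component values assumed), we can tabulate the magnitude and phase across the band:

```python
import numpy as np

fl, fh = 1e3, 10e3   # passband edges from the problem statement

def H(f):
    """Bandpass transfer function: 10 * (j f/fl) / ((j f/fl + 1)(j f/fh + 1))."""
    return 10 * (1j * f / fl) / ((1j * f / fl + 1) * (1j * f / fh + 1))

for f in [100, 1e3, 3e3, 10e3, 100e3]:
    print(f"f = {f:8.0f} Hz: |H| = {abs(H(f)):6.2f}, "
          f"phase = {np.degrees(np.angle(H(f))):7.1f} deg")
# Well inside the passband (fl << f << fh) the gain approaches 10; at each
# edge the magnitude drops by a factor of sqrt(2) relative to the passband.
```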
Problem 3.47: Pre-emphasis or De-emphasis?
In audio applications, prior to analog-to-digital conversion, signals are passed through what is known as a pre-emphasis circuit that leaves the low frequencies alone but provides increasing gain at increasingly higher frequencies beyond some frequency f0. De-emphasis circuits do the opposite and are applied after digital-to-analog conversion. After pre-emphasis, digitization, conversion back to analog and de-emphasis, the signal's spectrum should be what it was. The op-amp circuit here (Figure 3.88) has been designed for pre-emphasis or de-emphasis (Samantha can't recall which).
[Figure 3.88: op-amp circuit with R = 1 kΩ, RF = 1 kΩ, and C = 80 nF]
a) Is this a pre-emphasis or de-emphasis circuit? Find the frequency f0 that defines the transition from low to high frequencies.
b) What is the circuit's output when the input voltage is sin(2πft), with f = 4 kHz?
c) What circuit could perform the opposite function to your answer for the first part?

Problem 3.48: Active Filter
Find the transfer function of the depicted active filter (Figure 3.89).
[Figure 3.89: active filter built from op-amps and elements R, R1, R2, Rf, C1, and C2]

Problem 3.49: This is a Filter?
You are given a circuit (Figure 3.90).
[Figure 3.90: op-amp circuit with elements Rin, R1, R2, and C, input Vin, and output Vout]
a) What is this circuit's transfer function? Plot the magnitude and phase.
b) If the input signal is the sinusoid sin(2πf0 t), what will the output be when f0 is larger than the filter's cutoff frequency?

Problem 3.50: Optical Receivers
In your optical telephone, the receiver circuit had the form shown (Figure 3.91).
[Figure 3.91: a photodiode driving an op-amp with feedback impedance Zf; the output is Vout]
This circuit served as a transducer, converting light energy into a voltage vout. The photodiode acts as a current source, producing a current proportional to the light intensity falling upon it. As is often the case in this crucial stage, the signals are small and noise can be a problem. Thus, the op-amp stage serves to boost the signal and to filter out-of-band noise.
a) Find the transfer function relating light intensity to vout.
b) What should the circuit realizing the feedback impedance Zf be so that the transducer acts as a 5 kHz lowpass filter?
c) A clever engineer suggests an alternative circuit (Figure 3.92) to accomplish the same task. Determine whether the idea works or not. If it does, find the impedance Zin that accomplishes the lowpass filtering task. If not, show why it does not work.
[Figure 3.92: the alternative circuit, with input impedance Zin and a unit-valued feedback element]

Problem 3.51: Reverse Engineering
The depicted circuit (Figure 3.93) has been developed by the TBBG Electronics design group. They are trying to keep its use secret; we, representing RU Electronics, have discovered the schematic and want to figure out the intended application. Assume the diode is ideal.
[Figure 3.93: op-amp circuit with R1 = 1 kΩ, R2 = 1 kΩ, C = 31.8 nF, and a diode]
a) Assuming the diode is a short-circuit (it has been removed from the circuit), what is the circuit's transfer function?
b) With the diode in place, what is the circuit's output when the input voltage is sin(2πf0 t)?
c) What function might this circuit have?

Solutions to Exercises in Chapter 3

Solution to Exercise 3.1.1 (p. 40)
One kilowatt-hour equals 3,600,000 watt-seconds, which indeed directly corresponds to 3,600,000 joules.

Solution to Exercise 3.4.1 (p. 45)
KCL says that the sum of currents entering or leaving a node must be zero. If we consider two nodes together as a "supernode", KCL applies as well to currents entering the combination. Since no currents enter an entire circuit, the sum of currents must be zero. If we had a two-node circuit, the KCL equation of one must be the negative of the other. We can combine all but one node in a circuit into a supernode; KCL for the supernode must be the negative of the remaining node's KCL equation. Consequently, specifying n - 1 KCL equations always specifies the remaining one.

Solution to Exercise 3.4.2 (p. 46)
The circuit serves as an amplifier having a gain of R2/(R1 + R2).

Solution to Exercise 3.5.1 (p. 47)
The power consumed by the resistor R1 can be expressed as
(vin - vout) iout = (R1/(R1 + R2)²) vin²

Solution to Exercise 3.5.2 (p. 47)
(1/(R1 + R2)) vin² = (R1/(R1 + R2)²) vin² + (R2/(R1 + R2)²) vin²

Solution to Exercise 3.6.1 (p. 49)
Replacing the current source by a voltage source does not change the fact that the voltages are identical. Consequently, vin = R2 iout or iout = vin/R2. This result does not depend on the resistor R1, which means that we simply have a resistor (R2) across a voltage source. The two-resistor circuit has no apparent use.

Solution to Exercise 3.6.2 (p. 51)
Req = (R2 RL)/(R2 + RL) = R2/(1 + R2/RL). Thus, a 10% change means that the ratio R2/RL must be less than 0.1; a 1% change means that R2/RL must be less than 0.01.

Solution to Exercise 3.7.1 (p. 53)
In a series combination of resistors, the current is the same in each; in a parallel combination, the voltage is the same. For a series combination, the equivalent resistance is the sum of the resistances, which will be larger than any component resistor's value. For a parallel combination, the equivalent conductance is the sum of the component conductances, which is larger than any component conductance; the equivalent resistance is therefore smaller than any component resistance.

Solution to Exercise 3.7.2 (p. 55)
voc = (R2/(R1 + R2)) vin and isc = -vin/R1 (resistor R2 is shorted out in this case). Thus, veq = (R2/(R1 + R2)) vin and Req = (R1 R2)/(R1 + R2).

Solution to Exercise 3.10.1 (p. 56)
ieq = (R1/(R1 + R2)) iin and Req = R3 ∥ (R1 + R2).

Solution to Exercise 3.11.1 (p. 63)
Division by j2πf arises from integrating a complex exponential:
(1/(j2πf)) V ⇔ ∫ V e^(j2πft) dt

Solution to Exercise 3.11.2 (p. 63)
For maximum power dissipation, the imaginary part of complex power should be zero. As the complex power is given by V I* = |V||I| e^(j(φ-θ)), zero imaginary part occurs when the phases of the voltage and currents agree.
Solution to Exercise 3.15.1 (p. 64)
Pave = Vrms Irms cos(φ - θ). The cosine term is known as the power factor.

Solution to Exercise 3.15.2 (p. 68)
The key notion is writing the imaginary part as the difference between a complex exponential and its complex conjugate:
Im(V e^(j2πft)) = (V e^(j2πft) - V* e^(-j2πft)) / (2j)   (3.35)
The response to V e^(j2πft) is V H(f) e^(j2πft), and the Superposition Principle says that the output to the imaginary part is Im(V H(f) e^(j2πft)). As H(-f) = H(f)*, the response to V* e^(-j2πft) is V* H(f)* e^(-j2πft). The same argument holds for the real part: Re(V e^(j2πft)) → Re(V H(f) e^(j2πft)).

Solution to Exercise 3.16.1 (p. 74)
To find the equivalent resistance, we need to find the current flowing through the voltage source. This current equals the current we have just found plus the current flowing through the other vertical 1 Ω resistor. This current equals e1/1 = (6/13) vin, making the total current through the voltage source (flowing out of it) (11/13) vin. Thus, the equivalent resistance is 13/11 Ω.

Solution to Exercise 3.17.1 (p. 76)
Not necessarily, especially if we desire individual knobs for adjusting the gain and the cutoff frequency.

Solution to Exercise 3.19.1 (p. 83)
The ratio between adjacent values is about √2.

Chapter 4 Frequency Domain

4.1 Introduction to the Frequency Domain
In developing ways of analyzing linear circuits, we invented the impedance method because it made solving circuits easier. Along the way, we developed the notion of a circuit's frequency response or transfer function. This notion, which also applies to all linear, time-invariant systems, describes how the circuit responds to a sinusoidal input when we express it in terms of a complex exponential. We also learned the Superposition Principle for linear systems: The system's output to an input consisting of a sum of two signals is the sum of the system's outputs to each individual component.

The study of the frequency domain combines these two notions (a system's sinusoidal response is easy to find and a linear system's output to a sum of inputs is the sum of the individual outputs) to develop the crucial idea of a signal's spectrum. We begin by finding that the set of signals that can be represented as a sum of sinusoids is very large. In fact, all signals can be expressed as a superposition of sinusoids.

As this story unfolds, we'll see that information systems rely heavily on spectral ideas. For example, radio, television, and cellular telephones transmit over different portions of the spectrum. In fact, spectrum is so important that communications systems are regulated as to which portions of the spectrum they can use by the Federal Communications Commission in the United States and by International Treaty for the world (see Frequency Allocations (Section 7.3)). Calculating the spectrum is easy: The Fourier transform defines how we can find a signal's spectrum.

4.2 Complex Fourier Series
In an earlier module (Exercise 2.3.1), we showed that a square wave could be expressed as a superposition of pulses. As useful as this decomposition was in that example, it does not generalize well to other periodic signals: How can a superposition of pulses equal a smooth signal like a sinusoid? Because of the importance of sinusoids to linear systems, you might wonder whether they could be added together to represent a large number of periodic signals. You would be right and in good company as well. Euler and Gauss in particular worried about this problem, and Jean Baptiste Fourier got the credit even though tough mathematical issues were not settled until later. They worked on what is now known as the Fourier series: representing any periodic signal as a superposition of sinusoids.

But the Fourier series goes well beyond being another signal decomposition method. Rather, the Fourier series begins our journey to appreciate how a signal can be described in either the time-domain or the frequency-domain with no compromise: it makes no difference whether we have a time-domain or a frequency-domain characterization of the signal.
Let s(t) be a periodic signal with period T. We want to show that periodic signals, even those that have constant-valued segments like a square wave, can be expressed as a sum of harmonically related sine waves: sinusoids having frequencies that are integer multiples of the fundamental frequency. Because the signal has period T, the fundamental frequency is 1/T. The complex Fourier series expresses the signal as a superposition of complex exponentials having frequencies k/T, k = {..., -1, 0, 1, ...}:

s(t) = Σ_{k=-∞}^{∞} ck e^(j2πkt/T)   (4.1)

with ck = (1/2)(ak - j bk). The real and imaginary parts of the Fourier coefficients ck are written in this unusual way for convenience in defining the classic Fourier series. The zeroth coefficient equals the signal's average value and is real-valued for real-valued signals: c0 = a0. The family of functions {e^(j2πkt/T)} are called basis functions and form the foundation of the Fourier series. No matter what the periodic signal might be, these functions are always present and form the representation's building blocks. They depend only on the signal period T, and are indexed by k.

Key point: Assuming we know the period, knowing the Fourier coefficients is equivalent to knowing the signal.

To find the Fourier coefficients, we note the orthogonality property

∫₀^T e^(j2πkt/T) e^(-j2πlt/T) dt = T if k = l; 0 if k ≠ l   (4.2)

Assuming for the moment that the complex Fourier series "works," we can find a signal's complex Fourier coefficients, its spectrum, by exploiting the orthogonality properties of harmonically related complex exponentials. Simply multiply each side of (4.1) by e^(-j2πlt/T) and integrate over the interval [0, T]:

ck = (1/T) ∫₀^T s(t) e^(-j2πkt/T) dt,   c0 = (1/T) ∫₀^T s(t) dt   (4.3)

Exercise 4.2.1 (Solution on p. 167.)
What is the complex Fourier series for a sinusoid?

Example 4.1
Finding the Fourier series coefficients for the square wave sqT(t) is very simple. Mathematically, this signal can be expressed as

sqT(t) = 1 if 0 < t < T/2; -1 if T/2 < t < T

The expression for the Fourier coefficients has the form

ck = (1/T) ∫₀^{T/2} e^(-j2πkt/T) dt - (1/T) ∫_{T/2}^T e^(-j2πkt/T) dt   (4.4)

note: When integrating an expression containing j, treat it just like any other constant.

The two integrals are very similar, one equaling the negative of the other. The final expression becomes

ck = (-2/(j2πk)) ((-1)^k - 1) = 2/(jπk) if k odd; 0 if k even   (4.5)

Thus, the complex Fourier series for the square wave is

sq(t) = Σ_{k ∈ {..., -3, -1, 1, 3, ...}} (2/(jπk)) e^(j2πkt/T)   (4.6)

Consequently, the square wave equals a sum of complex exponentials, but only those having frequencies equal to odd multiples of the fundamental frequency 1/T. The coefficients decay slowly as the frequency index k increases. This index corresponds to the k-th harmonic of the signal's period.
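A short numerical cross-check of (4.3) (a sketch, not part of the original text): approximate the integral with a Riemann sum for the square wave of Example 4.1 and compare against the analytic coefficients 2/(jπk) for odd k.

```python
import numpy as np

T = 1.0
t = np.linspace(0, T, 10000, endpoint=False)
sq = np.where(t < T / 2, 1.0, -1.0)        # one period of the square wave

def fourier_coeff(s, k):
    """Riemann-sum approximation of c_k = (1/T) * integral of s(t)*exp(-j2*pi*k*t/T)."""
    return np.mean(s * np.exp(-2j * np.pi * k * t / T))

for k in range(1, 8):
    ck = fourier_coeff(sq, k)
    exact = 2 / (1j * np.pi * k) if k % 2 == 1 else 0.0
    print(f"k={k}: numeric {complex(ck):.4f}, analytic {exact}")
```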
A signal's Fourier series spectrum ck has interesting properties.

Property 4.1: If s(t) is real, ck = c-k* (real-valued periodic signals have conjugate-symmetric spectra). This result follows from the integral that calculates the ck from the signal. Furthermore, this result means Re(ck) = Re(c-k): the real part of the Fourier coefficients for real-valued signals is even. Similarly, Im(ck) = -Im(c-k): the imaginary parts of the Fourier coefficients have odd symmetry. Consequently, if you are given the Fourier coefficients for positive indices and zero and are told the signal is real-valued, you can find the negative-indexed coefficients, hence the entire spectrum. This kind of symmetry, ck = c-k*, is known as conjugate symmetry.

Property 4.2: If s(-t) = s(t), which says the signal has even symmetry about the origin, c-k = ck. Given the previous property for real-valued signals, the Fourier coefficients of even signals are real-valued. A real-valued Fourier expansion amounts to an expansion in terms of only cosines, which is the simplest example of an even signal.

Property 4.3: If s(-t) = -s(t), which says the signal has odd symmetry, c-k = -ck. Therefore, the Fourier coefficients are purely imaginary. The square wave is a great example of an odd-symmetric signal.

Property 4.4: The spectral coefficients for a periodic signal delayed by τ, s(t - τ), are ck e^(-j2πkτ/T), where ck denotes the spectrum of s(t). Delaying a signal by τ seconds results in a spectrum having a linear phase shift of -2πkτ/T in comparison to the spectrum of the undelayed signal. Note that the spectral magnitude is unaffected. Showing this property is easy.

Proof:
(1/T) ∫₀^T s(t - τ) e^(-j2πkt/T) dt = (1/T) ∫_{-τ}^{T-τ} s(t) e^(-j2πk(t+τ)/T) dt = (1/T) e^(-j2πkτ/T) ∫_{-τ}^{T-τ} s(t) e^(-j2πkt/T) dt   (4.7)

Note that the range of integration extends over a period of the integrand. Consequently, it should not matter how we integrate over a period, which means that ∫_{-τ}^{T-τ}(·)dt = ∫₀^T(·)dt, and we have our result.

The complex Fourier series obeys Parseval's Theorem, one of the most important results in signal analysis.

Theorem 4.1: Parseval's Theorem
Average power calculated in the time domain equals the power calculated in the frequency domain.

(1/T) ∫₀^T s²(t) dt = Σ_{k=-∞}^{∞} |ck|²   (4.8)

This result is a (simpler) re-expression of how to calculate a signal's power than with the real-valued Fourier series expression for power. This general mathematical result says you can calculate a signal's power in either the time domain or the frequency domain.

Let's calculate the Fourier coefficients of the periodic pulse signal shown here (Figure 4.1). The pulse width is Δ, the period T, and the amplitude A.

[Figure 4.1: Periodic pulse signal p(t): pulses of width Δ and amplitude A repeating with period T]

The complex Fourier spectrum of this signal is given by

ck = (1/T) ∫₀^Δ A e^(-j2πkt/T) dt = -(A/(j2πk)) (e^(-j2πkΔ/T) - 1)

At this point, simplifying this expression requires knowing an interesting property:

1 - e^(-jθ) = e^(-jθ/2) (e^(jθ/2) - e^(-jθ/2)) = e^(-jθ/2) 2j sin(θ/2)

Armed with this result, we can simply express the Fourier series coefficients for our pulse sequence:

ck = A e^(-jπkΔ/T) sin(πkΔ/T) / (πk)   (4.9)

Because this signal is real-valued, we find that the coefficients do indeed have conjugate symmetry: ck = c-k*. The periodic pulse signal has neither even nor odd symmetry; consequently, no additional symmetry exists in the spectrum. Because the spectrum is complex valued, to plot it we need to calculate its magnitude and phase:

|ck| = A |sin(πkΔ/T) / (πk)|
∠(ck) = -πkΔ/T + π neg(sin(πkΔ/T)/(πk)) sign(k)   (4.10)

The function neg(·) equals -1 if its argument is negative and zero otherwise. The somewhat complicated expression for the phase results because the sine term can be negative; magnitudes must be positive, leaving the occasional negative values to be accounted for as a phase shift of π.

[Figure 4.2: Periodic Pulse Sequence. The magnitude and phase of the periodic pulse sequence's spectrum are shown for positive frequency indices. Here Δ/T = 0.2 and A = 1.]

Also note the presence of a linear phase term (the first term in ∠(ck) is proportional to frequency k/T). Comparing this term with that predicted from delaying a signal, a delay of Δ/2 is present in our signal. Advancing the signal by this amount centers the pulse about the origin, leaving an even signal, which in turn means that its spectrum is real-valued. Thus, our calculated spectrum is consistent with the properties of the Fourier spectrum.

Exercise 4.2.2 (Solution on p. 167.)
What is the value of c0? Recalling that this spectral coefficient corresponds to the signal's average value, does your answer make sense?
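The following sketch (illustrative, using Δ/T = 0.2 and A = 1 as in Figure 4.2) computes the pulse spectrum numerically, reports the phase as a wrapped value the way numerical packages do (np.angle here, like MATLAB's angle function discussed next), and checks Parseval's theorem with a truncated sum:

```python
import numpy as np

T, Delta, A = 1.0, 0.2, 1.0
t = np.linspace(0, T, 20000, endpoint=False)
p = np.where(t < Delta, A, 0.0)            # one period of the pulse train

ck = lambda k: np.mean(p * np.exp(-2j * np.pi * k * t / T))
for k in range(7):
    c = ck(k)
    print(f"k={k}: |ck| = {abs(c):.4f}, angle = {np.angle(c): .3f} rad (wrapped)")

# Parseval: time-domain power equals the sum of |ck|^2 over all k.
power_time = np.mean(p ** 2)               # (1/T)*integral of p^2 = A^2 * Delta/T
power_freq = sum(abs(ck(k)) ** 2 for k in range(-300, 301))  # truncated sum
print(power_time, power_freq)              # both near 0.2
```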
The phase plot shown in Figure 4.2 (Periodic Pulse Sequence) requires some explanation as it does not seem to agree with what (4.10) suggests. There, the phase has a linear component, with a jump of π every time the sinusoidal term changes sign. We must realize that any integer multiple of 2π can be added to a phase at each frequency without affecting the value of the complex spectrum. We see that at frequency index 4 the phase is nearly -π. The phase at index 5 is undefined because the magnitude is zero in this example. At index 6, the phase value predicted by the formula is a little less than -2π. Because we can add 2π without affecting the value of the spectrum at index 6, the result is a slightly negative number as shown. Thus, the formula and the plot do agree. In phase calculations like those made in MATLAB, values are usually confined to the range [-π, π) by adding some (possibly negative) multiple of 2π to each phase value.

4.3 Classic Fourier Series
The classic Fourier series as derived originally expressed a periodic signal (period T) in terms of harmonically related sines and cosines:

s(t) = a0 + Σ_{k=1}^{∞} ak cos(2πkt/T) + Σ_{k=1}^{∞} bk sin(2πkt/T)   (4.11)

The complex Fourier series and the sine-cosine series are identical, each representing a signal's spectrum. The Fourier coefficients, ak and bk, express the real and imaginary parts respectively of the spectrum, while the coefficients ck of the complex Fourier series express the spectrum as a magnitude and phase. Equating the classic Fourier series (4.11) to the complex Fourier series (4.1), an extra factor of two and a complex conjugate become necessary to relate the Fourier coefficients in each:

ck = (1/2)(ak - j bk)

Exercise 4.3.1 (Solution on p. 167.)
Derive this relationship between the coefficients of the two Fourier series.

Just as with the complex Fourier series, we can find the Fourier coefficients using the orthogonality properties of sinusoids. Note that the cosine and sine of harmonically related frequencies, even the same frequency, are orthogonal:

∫₀^T sin(2πkt/T) cos(2πlt/T) dt = 0, k ∈ Z, l ∈ Z
∫₀^T sin(2πkt/T) sin(2πlt/T) dt = T/2 if (k = l) and (k ≠ 0) and (l ≠ 0); 0 if (k ≠ l) or (k = 0 = l)
∫₀^T cos(2πkt/T) cos(2πlt/T) dt = T/2 if (k = l) and (k ≠ 0) and (l ≠ 0); T if k = 0 = l; 0 if k ≠ l   (4.12)

These orthogonality relations follow from the following important trigonometric identities:

sin(α) sin(β) = (1/2)(cos(α - β) - cos(α + β))
cos(α) cos(β) = (1/2)(cos(α + β) + cos(α - β))
sin(α) cos(β) = (1/2)(sin(α + β) + sin(α - β))   (4.13)

These identities allow you to substitute a sum of sines and/or cosines for a product of them. Each term in the sum can be integrated by noticing one of two important properties of sinusoids.

• The integral of a sinusoid over an integer number of periods equals zero.
• The integral of the square of a unit-amplitude sinusoid over a period T equals T/2.
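The orthogonality relations (4.12) are easy to confirm numerically (a minimal sketch, with T = 1 assumed for convenience):

```python
import numpy as np

T = 1.0
t = np.linspace(0, T, 100000, endpoint=False)
dt = T / t.size

def inner(f, g):
    """Approximate the integral of f(t)*g(t) over one period."""
    return np.sum(f * g) * dt

for k in range(0, 4):
    for l in range(0, 4):
        cc = inner(np.cos(2 * np.pi * k * t / T), np.cos(2 * np.pi * l * t / T))
        sc = inner(np.sin(2 * np.pi * k * t / T), np.cos(2 * np.pi * l * t / T))
        print(f"k={k} l={l}: cos*cos = {cc: .3f}, sin*cos = {sc: .3f}")
# cos*cos gives T/2 on the diagonal (T when k = l = 0) and 0 elsewhere;
# sin*cos is zero for every index pair, exactly as (4.12) states.
```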
To use these identities and properties, let's, for example, multiply the Fourier series for a signal by the cosine of the l-th harmonic, cos(2πlt/T), and integrate. The idea is that, because integration is linear, the integration will sift out all but the term involving al:

∫₀^T s(t) cos(2πlt/T) dt = ∫₀^T a0 cos(2πlt/T) dt + Σ_{k=1}^{∞} ak ∫₀^T cos(2πkt/T) cos(2πlt/T) dt + Σ_{k=1}^{∞} bk ∫₀^T sin(2πkt/T) cos(2πlt/T) dt   (4.14)

The first and third terms are zero; in the second, the only non-zero term in the sum results when the indices k and l are equal (but not zero), in which case we obtain al T/2. If k = 0 = l, we obtain a0 T. Consequently,

al = (2/T) ∫₀^T s(t) cos(2πlt/T) dt, l ≠ 0

All of the Fourier coefficients can be found similarly:

a0 = (1/T) ∫₀^T s(t) dt
ak = (2/T) ∫₀^T s(t) cos(2πkt/T) dt, k ≠ 0
bk = (2/T) ∫₀^T s(t) sin(2πkt/T) dt   (4.15)

Exercise 4.3.2 (Solution on p. 167.)
The expression for a0 is referred to as the average value of s(t). Why?

Exercise 4.3.3 (Solution on p. 167.)
What is the Fourier series for a unit-amplitude square wave?

Example 4.2
Let's find the Fourier series representation for the half-wave rectified sinusoid:

s(t) = sin(2πt/T) if 0 ≤ t < T/2; 0 if T/2 ≤ t < T   (4.16)

Begin with the sine terms in the series; to find bk we must calculate the integral

bk = (2/T) ∫₀^{T/2} sin(2πt/T) sin(2πkt/T) dt   (4.17)

Using our trigonometric identities turns our integral of a product of sinusoids into a sum of integrals of individual sinusoids, which are much easier to evaluate:

∫₀^{T/2} sin(2πt/T) sin(2πkt/T) dt = (1/2) ∫₀^{T/2} (cos(2π(k-1)t/T) - cos(2π(k+1)t/T)) dt = T/4 if k = 1; 0 otherwise   (4.18)

Thus,
b1 = 1/2, b2 = b3 = ··· = 0

On to the cosine terms. The average value, which corresponds to a0, equals 1/π. The remainder of the cosine coefficients are easy to find, but yield the complicated result

ak = -(2/π) · 1/(k² - 1) if k ∈ {2, 4, ...}; 0 if k odd   (4.19)

Thus, the Fourier series for the half-wave rectified sinusoid has non-zero terms for the average, the fundamental, and the even harmonics.

[Figure 4.3: Fourier Series spectrum of a half-wave rectified sine wave. The ak (upper) and bk (lower) coefficients are plotted versus the frequency index k.]

4.4 A Signal's Spectrum
A periodic signal, such as the half-wave rectified sinusoid, consists of a sum of elemental sinusoids. A plot of the Fourier coefficients as a function of the frequency index, such as shown in Figure 4.3 (Fourier Series spectrum of a half-wave rectified sine wave), displays the signal's spectrum. The word "spectrum" implies that the independent variable, here k, corresponds somehow to frequency. Each coefficient is directly related to a sinusoid having a frequency of k/T. Thus, if we half-wave rectified a 1 kHz sinusoid, k = 1 corresponds to 1 kHz, k = 2 to 2 kHz, etc. A periodic signal can be defined either in the time domain (as a function) or in the frequency domain (as a spectrum).
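Here is a numerical check of Example 4.2's coefficients (a sketch, not from the text): compute ak and bk by (4.15) with a Riemann sum and compare with a0 = 1/π, b1 = 1/2, and ak = -(2/π)/(k² - 1) for even k.

```python
import numpy as np

T = 1.0
t = np.linspace(0, T, 100000, endpoint=False)
s = np.where(t < T / 2, np.sin(2 * np.pi * t / T), 0.0)   # half-wave rectified sinusoid

a0 = np.mean(s)                                           # (1/T)*integral of s(t)
print(f"a0 = {a0:.4f} (1/pi = {1 / np.pi:.4f})")
for k in range(1, 6):
    ak = 2 * np.mean(s * np.cos(2 * np.pi * k * t / T))   # (2/T)*integral
    bk = 2 * np.mean(s * np.sin(2 * np.pi * k * t / T))
    exact_ak = -(2 / np.pi) / (k ** 2 - 1) if k % 2 == 0 else 0.0
    exact_bk = 0.5 if k == 1 else 0.0
    print(f"k={k}: ak = {ak: .4f} ({exact_ak: .4f}), bk = {bk: .4f} ({exact_bk: .4f})")
```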
A subtle, but very important, aspect of the Fourier spectrum is its uniqueness: You can unambiguously find the spectrum from the signal (decomposition (4.15)) and the signal from the spectrum (composition). Thus, any aspect of the signal can be found from the spectrum and vice versa. A signal's frequency-domain expression is its spectrum.

A fundamental aspect of solving electrical engineering problems is whether the time or frequency domain provides the most understanding of a signal's properties and the simplest way of manipulating it. The uniqueness property says that either domain can provide the right answer. As a simple example, suppose we want to know the (periodic) signal's maximum value. Clearly the time domain provides the answer directly. To use a frequency domain approach would require us to find the spectrum, form the signal from the spectrum and calculate the maximum; we're back in the time domain!

Another feature of a signal is its average power. A signal's instantaneous power is defined to be its square. The average power is the average of the instantaneous power over some time interval. For a periodic signal, the natural time interval is clearly its period; for nonperiodic signals, a better choice would be entire time or time from onset. For a periodic signal, the average power is the square of its root-mean-squared (rms) value. We define the rms value of a periodic signal to be

rms(s) = √((1/T) ∫₀^T s²(t) dt)   (4.20)

and thus its average power is

power(s) = rms²(s) = (1/T) ∫₀^T s²(t) dt   (4.21)

Exercise 4.4.1 (Solution on p. 167.)
What is the rms value of the half-wave rectified sinusoid?

To find the average power in the frequency domain, we need to substitute the spectral representation of the signal into this expression:

power(s) = (1/T) ∫₀^T (a0 + Σ_{k=1}^{∞} ak cos(2πkt/T) + Σ_{k=1}^{∞} bk sin(2πkt/T))² dt

The square inside the integral will contain all possible pairwise products. However, the orthogonality properties (4.12) say that most of these cross-terms integrate to zero. The survivors leave a rather simple expression for the power we seek:

power(s) = a0² + (1/2) Σ_{k=1}^{∞} (ak² + bk²)   (4.22)

It could well be that computing this sum is easier than integrating the signal's square.

[Figure 4.4: Power Spectrum of a Half-Wave Rectified Sinusoid. Ps(k) is plotted versus the frequency index k.]

Furthermore, the contribution of each term in the Fourier series toward representing the signal can be measured by its contribution to the signal's average power. Thus, the power contained in a signal at its k-th harmonic is (ak² + bk²)/2. The power spectrum, Ps(k), such as shown in Figure 4.4 (Power Spectrum of a Half-Wave Rectified Sinusoid), plots each harmonic's contribution to the total power. In high-end audio, deviation of a sine wave from the ideal is measured by the total harmonic distortion, which equals the total power in the harmonics higher than the first compared to the power in the fundamental.
4.5 Fourier Series Approximation of Signals
(This content is available online at <http://cnx.org/content/m10687/2.10/>.)

It is interesting to consider the sequence of signals that we obtain as we incorporate more terms into the Fourier series approximation of the half-wave rectified sine wave (Example 4.2). Define $s_K(t)$ to be the signal containing $K+1$ Fourier terms.
$$s_K(t)=a_0+\sum_{k=1}^{K}a_k\cos\!\left(\frac{2\pi kt}{T}\right)+\sum_{k=1}^{K}b_k\sin\!\left(\frac{2\pi kt}{T}\right)\qquad(4.23)$$

Figure 4.5 (Fourier Series spectrum of a half-wave rectified sine wave) shows how this sequence of signals portrays the signal more accurately as more terms are added.

[Figure 4.5 (Fourier Series spectrum of a half-wave rectified sine wave): The Fourier series spectrum of a half-wave rectified sinusoid is shown in the upper portion. The cumulative effect of adding terms to the Fourier series is shown in the bottom portion for $K=0$, $1$, $2$, and $4$ over two periods. The dashed line is the actual signal, with the solid line showing the finite series approximation to the indicated number of terms, $K+1$.]

We need to assess quantitatively the accuracy of the Fourier series approximation so that we can judge how rapidly the series approaches the signal. When we use a $K+1$-term series, the error, the difference between the signal and the $K+1$-term series, corresponds to the unused terms from the series.
$$\epsilon_K(t)=\sum_{k=K+1}^{\infty}a_k\cos\!\left(\frac{2\pi kt}{T}\right)+\sum_{k=K+1}^{\infty}b_k\sin\!\left(\frac{2\pi kt}{T}\right)\qquad(4.24)$$

To find the rms error, we must square this expression and integrate it over a period. Again, the integral of most cross-terms is zero, leaving
$$\mathrm{rms}(\epsilon_K)=\sqrt{\frac{1}{2}\sum_{k=K+1}^{\infty}\left(a_k^2+b_k^2\right)}\qquad(4.25)$$

Figure 4.6 (Approximation error for a half-wave rectified sinusoid) shows how the error in the Fourier series for the half-wave rectified sinusoid decreases as more terms are incorporated. In particular, the use of four terms, as shown in the bottom plot of Figure 4.5, has a rms error (relative to the rms value of the signal) of about 3%. The Fourier series in this case converges quickly to the signal.

[Figure 4.6 (Approximation error for a half-wave rectified sinusoid): The rms error calculated according to (4.25) is shown as a function of the number of terms in the series for the half-wave rectified sinusoid. The error has been normalized by the rms value of the signal.]

We can look at Figure 4.7 (Power spectrum and approximation error for a square wave) to see the power spectrum and the rms approximation error for the square wave.

[Figure 4.7 (Power spectrum and approximation error for a square wave): The upper plot shows the power spectrum of the square wave, and the lower plot the rms error of the finite-length Fourier series approximation to the square wave. The asterisk denotes the rms error when the number of terms $K$ in the Fourier series equals 99.]

Because the Fourier coefficients decay more slowly here than for the half-wave rectified sinusoid, the rms error is not decreasing quickly. Said another way, the square wave's spectrum contains more power at higher frequencies than does the half-wave rectified sinusoid. This difference between the two Fourier series results because the half-wave rectified sinusoid's Fourier coefficients are proportional to $\frac{1}{k^2}$ while those of the square wave are proportional to $\frac{1}{k}$. In fact, after 99 terms of the square wave's approximation, the error is bigger than 10 terms of the approximation for the half-wave rectified sinusoid. Mathematicians have shown that no signal has an rms approximation error that decays more slowly than it does for the square wave.
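The error formula (4.25) is also easy to evaluate numerically. The Matlab sketch below (not part of the original text) reproduces the kind of calculation behind Figure 4.6, using the closed-form coefficients of Example 4.2; the infinite sum is truncated at a large index, an arbitrary choice.

% Relative rms error of the K+1-term series for the half-wave rectified
% sinusoid, per (4.25), normalized by the signal's rms value.
kmax = 1000;                        % stand-in for the infinite sum in (4.25)
k = 1:kmax;
a = zeros(1, kmax); b = zeros(1, kmax);
b(1) = 1/2;
a(2:2:end) = -2 ./ (pi * (k(2:2:end).^2 - 1));
a0 = 1/pi;
total = a0^2 + 0.5*sum(a.^2 + b.^2);            % total power, per (4.22)
err = zeros(1, 11);
for K = 0:10
  err(K+1) = sqrt(0.5 * sum(a(K+1:end).^2 + b(K+1:end).^2));
end
relerr = err / sqrt(total);         % drops to about 3% once K reaches 4
plot(0:10, relerr);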
Exercise 4.5.1 (Solution on p. 167.)
Calculate the harmonic distortion for the square wave.

More than just decaying slowly, the Fourier series approximation shown in Figure 4.8 (Fourier series approximation of a square wave) exhibits interesting behavior.

[Figure 4.8 (Fourier series approximation of a square wave): Fourier series approximation to $\mathrm{sq}(t)$. The number of terms in the Fourier sum is indicated in each plot ($K=1$, $5$, $11$, $49$), and the square wave is shown as a dashed line over two periods.]

Although the square wave's Fourier series requires more terms for a given representation accuracy, when comparing plots it is not clear that the two are equal. Does the Fourier series really equal the square wave at all values of $t$? In particular, at each step-change in the square wave, the Fourier series exhibits a peak followed by rapid oscillations. As more terms are added to the series, the oscillations seem to become more rapid and smaller, but the peaks are not decreasing. For the Fourier series approximation for the half-wave rectified sinusoid (Figure 4.5), no such behavior occurs. What is happening?

Consider this mathematical question intuitively: Can a discontinuous function, like the square wave, be expressed as a sum, even an infinite one, of continuous signals? One should at least be suspicious, and in fact, it can't be thus expressed. This issue brought Fourier (http://www-groups.dcs.st-and.ac.uk/~history/Mathematicians/Fourier.html) much criticism from the French Academy of Science (Laplace, Lagrange, Monge and LaCroix comprised the review committee) for several years after its presentation in 1807. It was not resolved for almost a century, and its resolution is interesting and important to understand from a practical viewpoint.

The extraneous peaks in the square wave's Fourier series never disappear; they are termed Gibbs' phenomenon after the American physicist Josiah Willard Gibbs. They occur whenever the signal is discontinuous, and will always be present whenever the signal has jumps.

Let's return to the question of equality; how can the equal sign in the definition of the Fourier series be justified? The partial answer is that pointwise (each and every value of $t$) equality is not guaranteed. However, mathematicians later in the nineteenth century showed that the rms error of the Fourier series was always zero.
$$\lim_{K\rightarrow\infty}\mathrm{rms}(\epsilon_K)=0$$

What this means is that the error between a signal and its Fourier series approximation may not be zero, but that its rms value will be zero! It is through the eyes of the rms value that we redefine equality. The usual definition of equality is called pointwise equality: Two signals $s_1(t)$, $s_2(t)$ are said to be equal pointwise if $s_1(t)=s_2(t)$ for all values of $t$. A new definition of equality is mean-square equality: Two signals are said to be equal in the mean square if $\mathrm{rms}(s_1-s_2)=0$. For Fourier series, Gibbs' phenomenon peaks have finite height and zero width: The error differs from zero only at isolated points, whenever the periodic signal contains discontinuities, and equals about 9% of the size of the discontinuity. The value of a function at a finite set of points does not affect its integral.
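The 9% figure is easy to see numerically. In the Matlab sketch below (not part of the original text), the square wave's partial sums are built from its odd-harmonic sine coefficients, $\frac{4}{\pi k}$; the sampling density and the values of $K$ are arbitrary choices. The overshoot hugs 9% of the jump no matter how many terms are added.

% Gibbs' phenomenon: the peak overshoot of the square wave's Fourier
% partial sums does not shrink as terms are added.
T = 1; t = linspace(0, T, 100000);
for K = [49 199 799]
  sq = zeros(size(t));
  for k = 1:2:K                     % the square wave has odd harmonics only
    sq = sq + (4/(pi*k)) * sin(2*pi*k*t/T);
  end
  overshoot = max(sq) - 1;          % the jump size is 2
  fprintf('K = %3d: overshoot = %.3f (%.1f%% of the jump)\n', ...
          K, overshoot, 100*overshoot/2);
end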
This effect underlies the reason why defining the value of a discontinuous function at its discontinuity, like we refrained from doing in defining the step function (Section 2.2.4: Unit Step), is meaningless. Whatever you pick for a value has no practical relevance for either the signal's spectrum or for how a system responds to the signal. The Fourier series value "at" the discontinuity is the average of the values on either side of the jump.

4.6 Encoding Information in the Frequency Domain
(This content is available online at <http://cnx.org/content/m0043/2.17/>.)

To emphasize the fact that every periodic signal has both a time and frequency domain representation, we can exploit both to encode information into a signal. Refer to the Fundamental Model of Communication (Figure 1.3: Fundamental model of communication). We have an information source, and want to construct a transmitter that produces a signal $x(t)$. For the source, let's assume we have information to encode every $T$ seconds. For example, we want to represent typed letters produced by an extremely good typist (a key is struck every $T$ seconds). Let's consider the complex Fourier series formula in the light of trying to encode information.
$$x(t)=\sum_{k=-K}^{K}c_ke^{j\frac{2\pi kt}{T}}\qquad(4.26)$$

We use a finite sum here merely for simplicity (fewer parameters to determine). An important aspect of the spectrum is that each frequency component $c_k$ can be manipulated separately: Instead of finding the Fourier spectrum from a time-domain specification, let's construct it in the frequency domain by selecting the $c_k$ according to some rule that relates coefficient values to the alphabet. In defining this rule, we want to always create a real-valued signal $x(t)$. Because of the Fourier spectrum's properties (Property 4.1, p. 121), the spectrum must have conjugate symmetry. This requirement means that we can only assign positive-indexed coefficients (positive frequencies), with negative-indexed ones equaling the complex conjugate of the corresponding positive-indexed ones.

Assume we have $N$ letters to encode: $\{a_1,\dots,a_N\}$. One simple encoding rule could be to make a single Fourier coefficient be non-zero and all others zero for each letter. For example, if $a_n$ occurs, we make $c_n=1$ and $c_k=0$, $k\neq n$. In this way, the $n^{\text{th}}$ harmonic of the frequency $\frac{1}{T}$ is used to represent a letter. Note that the bandwidth, the range of frequencies required for the encoding, equals $\frac{N}{T}$. Another possibility is to consider the binary representation of the letter's index. For example, if the letter $a_{13}$ occurs, converting 13 to its base 2 representation, we have $13=1101_2$. We can use the pattern of zeros and ones to represent directly which Fourier coefficients we "turn on" (set equal to one) and which we "turn off."

Exercise 4.6.1 (Solution on p. 168.)
Compare the bandwidth required for the direct encoding scheme (one nonzero Fourier coefficient for each letter) to the binary number scheme. Compare the bandwidths for a 128-letter alphabet. Since both schemes represent information without loss (we can determine the typed letter uniquely from the signal's spectrum) both are viable. Which makes more efficient use of bandwidth and thus might be preferred?

Exercise 4.6.2 (Solution on p. 168.)
Can you think of an information-encoding scheme that makes even more efficient use of the spectrum? In particular, can we use only one Fourier coefficient to represent $N$ letters uniquely?

We can create an encoding scheme in the frequency domain (p. 133) to represent an alphabet of letters; a small numerical sketch of this construction follows.
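In the Matlab sketch below (not part of the original text), the chosen letter index, 13, and the period are illustrative only. Conjugate symmetry is enforced by adding each positive-indexed coefficient's contribution together with its complex conjugate, which is what keeps $x(t)$ real-valued.

% Encode the letter a13 (1101 in base 2) by turning Fourier coefficients
% on and off, per the binary scheme described above.
T = 1; t = linspace(0, T, 1000);
bits = [1 1 0 1];                   % 13 = 1101_2: which ck to turn on
x = zeros(size(t));
for k = 1:length(bits)
  if bits(k)                        % ck = 1, and c(-k) = conj(ck)
    x = x + 2*real(exp(1j*2*pi*k*t/T));
  end
end
plot(t, x);                         % a real-valued, T-periodic waveform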
But, as this information-encoding scheme stands, we can represent one letter for all time. However, we note that the Fourier coefficients depend only on the signal's characteristics over a single period. We could change the signal's spectrum every $T$ as each letter is typed. In this way, we turn spectral coefficients on and off as letters are typed, thereby encoding the entire typed document. For the receiver (see the Fundamental Model of Communication (Figure 1.3: Fundamental model of communication)) to retrieve the typed letter, it would simply use the Fourier formula for the complex Fourier spectrum ("Complex Fourier Series and Their Properties", (2) <http://cnx.org/content/m0065/latest/#complex>) for each $T$-second interval to determine what each typed letter was. Figure 4.9 (Encoding Signals) shows such a signal in the time domain.

[Figure 4.9 (Encoding Signals): The encoding of signals via the Fourier spectrum is shown over three "periods." In this example, only the third and fourth harmonics are used, as shown by the spectral magnitudes corresponding to each $T$-second interval plotted below the waveforms. Can you determine the phase of the harmonics from the waveform?]

In this Fourier-series encoding scheme, we have used the fact that spectral coefficients can be independently specified and that they can be uniquely recovered from the time-domain signal over one "period." Do note that the signal representing the entire document is no longer periodic. By understanding the Fourier series' properties (in particular that coefficients are determined only over a $T$-second interval), we can construct a communications system. This approach represents a simplification of how modern modems represent text that they transmit over telephone lines.

4.7 Filtering Periodic Signals
(This content is available online at <http://cnx.org/content/m0044/2.11/>.)

The Fourier series representation of a periodic signal makes it easy to determine how a linear, time-invariant filter reshapes such signals in general. The fundamental property of a linear system is that its input-output relation obeys superposition: $L(a_1s_1(t)+a_2s_2(t))=a_1L(s_1(t))+a_2L(s_2(t))$. Because the Fourier series represents a periodic signal as a linear combination of complex exponentials, we can exploit the superposition property. Furthermore, we found for linear circuits that their output to a complex exponential input is just the frequency response evaluated at the signal's frequency times the complex exponential. Said mathematically, if $x(t)=e^{j\frac{2\pi kt}{T}}$, then the output $y(t)=H\!\left(\frac{k}{T}\right)e^{j\frac{2\pi kt}{T}}$ because $f=\frac{k}{T}$. Thus, if $x(t)$ is periodic, thereby having a Fourier series, a linear circuit's output to this signal will be the superposition of the output to each component.
$$y(t)=\sum_{k=-\infty}^{\infty}c_kH\!\left(\frac{k}{T}\right)e^{j\frac{2\pi kt}{T}}\qquad(4.27)$$

Thus, the output has a Fourier series, which means that it too is periodic. Its Fourier coefficients equal $c_kH\!\left(\frac{k}{T}\right)$. To obtain the spectrum of the output, we simply multiply the input spectrum by the frequency response. The circuit modifies the magnitude and phase of each Fourier coefficient. Note especially that while the Fourier coefficients do not depend on the signal's period, the circuit's transfer function does depend on frequency, which means that the circuit's output will differ as the period varies.
Example 4.3
The periodic pulse signal, such as shown on the left part of Figure 4.10 ($\frac{\Delta}{T}=0.2$), serves as the input to an $RC$-circuit that has the transfer function (calculated elsewhere (Figure 3.30: Magnitude and phase of the transfer function))
$$H(f)=\frac{1}{1+j2\pi fRC}\qquad(4.28)$$

Figure 4.10 (Filtering a periodic signal) shows how the output changes as we vary the filter's cutoff frequency. Note how the signal's spectrum extends well above its fundamental frequency. Having a cutoff frequency ten times higher than the fundamental does perceptibly change the output waveform, rounding the leading and trailing edges. As the cutoff frequency decreases (center, then left), the rounding becomes more prominent, with the leftmost waveform showing a small ripple.

[Figure 4.10 (Filtering a periodic signal): A periodic pulse signal, such as shown on the left part ($\frac{\Delta}{T}=0.2$), serves as the input to an $RC$ lowpass filter. The input's period was 1 ms (millisecond). The filter's cutoff frequency was set to the various values indicated in the top row ($f_c$ = 100 Hz, 1 kHz, and 10 kHz); the top plots show the pulse signal's spectrum and the filter's transfer function for each cutoff. The bottom plots show the output signals derived from the Fourier series coefficients shown in the top row.]

Exercise 4.7.1 (Solution on p. 168.)
What is the average value of each output waveform? The correct answer may surprise you.

This example also illustrates the impact a lowpass filter can have on a waveform. The simple $RC$ filter used here has a rather gradual frequency response, which means that higher harmonics are smoothly suppressed. Later, we will describe filters that have much more rapidly varying frequency responses, allowing a much more dramatic selection of the input's Fourier coefficients.

More importantly, we have calculated the output of a circuit to a periodic input without writing, much less solving, the differential equation governing the circuit's behavior. Furthermore, we made these calculations entirely in the frequency domain. Using Fourier series, we can calculate how any linear circuit will respond to a periodic input.
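The bottom row of Figure 4.10 can be reproduced directly from (4.27). The Matlab sketch below (not part of the original text) uses the periodic pulse train's Fourier coefficients, the transfer function (4.28), and the example's nominal values ($T$ = 1 ms, $\frac{\Delta}{T}=0.2$, 1 kHz cutoff); the number of harmonics kept is an arbitrary choice.

% Output of the RC lowpass filter to the periodic pulse train, synthesized
% entirely in the frequency domain: y(t) = sum ck H(k/T) exp(j 2 pi k t/T).
A = 1; T = 1e-3; Delta = 0.2*T;      % pulse amplitude, period, width
fc = 1000; RC = 1/(2*pi*fc);         % cutoff frequency fixes RC
K = 100; t = linspace(0, 2*T, 2000); % two periods of the output
y = zeros(size(t));
for k = -K:K
  if k == 0
    ck = A*Delta/T;                  % the pulse train's average value
  else
    ck = A*exp(-1j*pi*k*Delta/T) * sin(pi*k*Delta/T)/(pi*k);
  end
  H = 1/(1 + 1j*2*pi*(k/T)*RC);      % transfer function (4.28) at f = k/T
  y = y + ck * H * exp(1j*2*pi*k*t/T);
end
plot(t, real(y));                    % the imaginary part is roundoff only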
4.8 Derivation of the Fourier Transform
(This content is available online at <http://cnx.org/content/m0046/2.21/>.)

Fourier series clearly open the frequency domain as an interesting and useful way of determining how circuits and systems respond to periodic input signals. Can we use similar techniques for nonperiodic signals? What is the response of the filter to a single pulse? Addressing these issues requires us to find the Fourier spectrum of all signals, both periodic and nonperiodic ones. We need a definition for the Fourier spectrum of a signal, periodic or not. This spectrum is calculated by what is known as the Fourier transform.

Let $s_T(t)$ be a periodic signal having period $T$. We want to consider what happens to this signal's spectrum as we let the period become longer and longer. We denote the spectrum for any assumed value of the period by $c_k(T)$. We calculate the spectrum according to the familiar formula
$$c_k(T)=\frac{1}{T}\int_{-\frac{T}{2}}^{\frac{T}{2}}s_T(t)e^{-\frac{j2\pi kt}{T}}dt\qquad(4.29)$$
where we have used a symmetric placement of the integration interval about the origin for subsequent derivational convenience. Let $f$ be a fixed frequency equaling $\frac{k}{T}$; we vary the frequency index $k$ proportionally as we increase the period. Define
$$S_T(f)\equiv Tc_k(T)=\int_{-\frac{T}{2}}^{\frac{T}{2}}s_T(t)e^{-(j2\pi ft)}dt\qquad(4.30)$$
making the corresponding Fourier series
$$s_T(t)=\sum_{k=-\infty}^{\infty}S_T(f)e^{j2\pi ft}\frac{1}{T}\qquad(4.31)$$

As the period increases, the spectral lines become closer together, becoming a continuum. Therefore,
$$\lim_{T\rightarrow\infty}s_T(t)\equiv s(t)=\int_{-\infty}^{\infty}S(f)e^{j2\pi ft}df\qquad(4.32)$$
with
$$S(f)=\int_{-\infty}^{\infty}s(t)e^{-(j2\pi ft)}dt\qquad(4.33)$$

$S(f)$ is the Fourier transform of $s(t)$ (the Fourier transform is symbolically denoted by the uppercase version of the signal's symbol) and is defined for any signal for which the integral (4.33) converges.

Example 4.4
Let's calculate the Fourier transform of the pulse signal (Section 2.2.5: Pulse), $p(t)$.
$$P(f)=\int_{-\infty}^{\infty}p(t)e^{-(j2\pi ft)}dt=\int_0^{\Delta}e^{-(j2\pi ft)}dt=\frac{1}{-(j2\pi f)}\left(e^{-(j2\pi f\Delta)}-1\right)$$
$$P(f)=e^{-(j\pi f\Delta)}\frac{\sin(\pi f\Delta)}{\pi f}$$

Note how closely this result resembles the expression for Fourier series coefficients of the periodic pulse signal (4.10).
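The passage from Fourier series to Fourier transform can be watched numerically. The Matlab sketch below (not part of the original text) evaluates $Tc_k(T)$ for two periods with the pulse width fixed at $\Delta=0.2$, as in the figure that follows; as $T$ grows, the spectral lines fill in the same continuum $P(f)$.

% T*ck(T) for a pulse of width Delta inside periods T = 1 and T = 5,
% plotted against the Fourier transform P(f) that the lines approach.
Delta = 0.2;
f = linspace(-20, 20, 2001);
P = exp(-1j*pi*f*Delta) .* sin(pi*f*Delta) ./ (pi*f);
P(isnan(P)) = Delta;                 % limiting value at f = 0
plot(f, abs(P)); hold on;            % the continuum
for T = [1 5]
  k = -20*T:20*T;  fk = k/T;         % harmonics spanning -20..20 Hz
  ST = exp(-1j*pi*fk*Delta) .* sin(pi*fk*Delta) ./ (pi*fk);
  ST(k == 0) = Delta;                % T*c0 = Delta
  plot(fk, abs(ST), 'o');            % spectral lines at spacing 1/T
end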
For example. and inverse transforming the result. a single formula expressing a signal must contain only time or frequency: Both cannot be present simultaneously. Thus. use the wrong exponent sign in evaluating the inverse Fourier transform. Furthermore. Especially important among these properties is Parseval's Theorem. performing a simple calculation in the frequency domain.2: Fourier Transform Properties). Consequently. A signal's time A signal thus "exists" in both the time and frequency domains. Properties of the Fourier transform and some useful transform pairs are provided in the accompanying tables (Table 4. We will learn (Section 4. Z ∞ s2 (t) dt = −∞ Z ∞ 2 (|S (f ) |) df (4. This idea is shown in another module (Section 4.36) −∞ Of practical importance is the conjugate symmetry property: When s (t) is real-valued. and frequency domain representations are uniquely related to each other. This situation mirrors what happens with complex amplitudes in circuits: As we reveal how communications systems work and are designed. We are transforming (in the nontechnical meaning of the word) a signal from one representation to another. but being unequal at a point is indeed minor.35) −∞ these results are indeed valid with minor exceptions. it behooves the wise engineer to use the simpler of the two.) How many Fourier transform operations need to be applied to get the original signal back: F (· · · (F (s))) = s (t)? Note that the mathematical relationships between the time domain and frequency domain versions of the same signal are termed transforms. Showing that you "get back to where you started" is dicult from an analytic viewpoint. This value may dier from how the signal is dened in the time domain. impedances depend on frequency and the time variable cannot appear. a signal.2 (Solution on p.8. 168. really can be dened equally as well (and sometimes more easily) in the frequency domain.9) that nding a linear. we will dene signals entirely in the frequency domain without explicitly nding their time domain variants.) The diering exponent signs means that some curious results occur when we use the wrong sign. We can dene an information carrying signal in either the time or frequency domains. and = We must have s (t) = F −1 (F (s (t))) and s (t) R∞ (4. We express Fourier transform pairs as s (t) ↔ S (f ). time-invariant system's output in the time domain can be most easily calculated by determining the input signal's spectrum. 140 CHAPTER 4. Realizing that the Fourier series is a special case of the Fourier transform.2 Example 4.5 In communications. a very important operation on a signal s (t) is to amplitude modulate it. Short Table of Fourier Transform Pairs s (t) e −(at) S (f ) 1 j2πf +a 2a 4π 2 f 2 +a2 u (t) e(−a)|t|   1 p (t) =  0 if |t| < if |t| > ∆ 2 ∆ 2 sin(πf ∆) πf   1 S (f ) =  0 sin(2πW t) πt if |f | < W if |f | > W Table 4. Using this operation more as an example rather than elaborating the communications aspects here. we want to compute the Fourier transform  the spectrum  of (1 + s (t)) cos (2πfc t) . we simply calculate the Fourier series coecients instead. 
Example 4.5
In communications, a very important operation on a signal $s(t)$ is to amplitude modulate it. Using this operation more as an example rather than elaborating the communications aspects here, we want to compute the Fourier transform, the spectrum, of
$$(1+s(t))\cos(2\pi f_ct)$$

Thus,
$$(1+s(t))\cos(2\pi f_ct)=\cos(2\pi f_ct)+s(t)\cos(2\pi f_ct)$$

For the spectrum of $\cos(2\pi f_ct)$, we use the Fourier series. Its period is $\frac{1}{f_c}$, and its only nonzero Fourier coefficients are $c_{\pm1}=\frac{1}{2}$. The second term is not periodic unless $s(t)$ has the same period as the sinusoid. Using Euler's relation, the spectrum of the second term can be derived as
$$s(t)\cos(2\pi f_ct)=\left(\int_{-\infty}^{\infty}S(f)e^{j2\pi ft}df\right)\cos(2\pi f_ct)$$

Using Euler's relation for the cosine,
$$s(t)\cos(2\pi f_ct)=\frac{1}{2}\int_{-\infty}^{\infty}S(f)e^{j2\pi(f+f_c)t}df+\frac{1}{2}\int_{-\infty}^{\infty}S(f)e^{j2\pi(f-f_c)t}df$$
$$=\frac{1}{2}\int_{-\infty}^{\infty}S(f-f_c)e^{j2\pi ft}df+\frac{1}{2}\int_{-\infty}^{\infty}S(f+f_c)e^{j2\pi ft}df$$
$$=\int_{-\infty}^{\infty}\frac{S(f-f_c)+S(f+f_c)}{2}e^{j2\pi ft}df$$

Exploiting the uniqueness property of the Fourier transform, we have
$$\mathcal{F}(s(t)\cos(2\pi f_ct))=\frac{S(f-f_c)+S(f+f_c)}{2}\qquad(4.37)$$

This component of the spectrum consists of the original signal's spectrum delayed and advanced in frequency. The spectrum of the amplitude modulated signal is shown in Figure 4.12. Once amplitude modulated, the resulting spectrum has "lines" corresponding to the Fourier series components at $\pm f_c$ and the original triangular spectrum shifted to components at $\pm f_c$ and scaled by $\frac{1}{2}$.

[Figure 4.12: A signal which has a triangular shaped spectrum $S(f)$, confined to $|f|\leq W$, is shown in the top plot. Its amplitude modulated version $X(f)$, with components $S(f+f_c)$ and $S(f-f_c)$ occupying $f_c-W\leq|f|\leq f_c+W$, is shown below. Note how in this figure the signal $s(t)$ is defined in the frequency domain.]

To find the time domain representation of such a frequency-domain-defined signal, we simply use the inverse Fourier transform.

Exercise 4.8.3 (Solution on p. 168.)
What is the signal $s(t)$ that corresponds to the spectrum shown in the upper panel of Figure 4.12?
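The spectral shifting in (4.37) is easy to display with a discrete approximation. The Matlab sketch below (not part of the original text) modulates a smooth baseband test pulse; the sampling rate, pulse shape, and carrier frequency are all arbitrary choices. The modulated spectrum shows the baseband shape moved out to $\pm f_c$ and halved in amplitude.

% Amplitude modulation viewed in the frequency domain via the FFT.
fs = 10000; t = 0:1/fs:1-1/fs;       % one second at 10 kHz sampling
s = exp(-((t - 0.5)/0.01).^2);       % baseband pulse, a few tens of Hz wide
fc = 1000;                           % carrier frequency in Hz
x = s .* cos(2*pi*fc*t);             % amplitude modulation
f = (0:length(t)-1) * fs/length(t);  % FFT bin frequencies
plot(f, abs(fft(s))/fs, f, abs(fft(x))/fs);
xlim([0 2*fc]);                      % the copy at fc has half the height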
Exercise 4.8.4 (Solution on p. 168.)
What is the power in $x(t)$, the amplitude-modulated signal? Try the calculation in both the time and frequency domains.

In this example, we call the signal $s(t)$ a baseband signal because its power is contained at low frequencies. Signals such as speech and the Dow Jones averages are baseband signals. The baseband signal's bandwidth equals $W$, the highest frequency at which it has power. Since $x(t)$'s spectrum is confined to a frequency band not close to the origin (we assume $f_c\gg W$), we have a bandpass signal. The bandwidth of a bandpass signal is not its highest frequency, but the range of positive frequencies where the signal has power. Thus, in this example, the bandwidth is $2W$ Hz. Why a signal's bandwidth should depend on its spectral shape will become clear once we develop communications systems.

4.9 Linear Time Invariant Systems
(This content is available online at <http://cnx.org/content/m0048/2.18/>.)

When we apply a periodic input to a linear, time-invariant system, the output is periodic and has Fourier series coefficients equal to the product of the system's frequency response and the input's Fourier coefficients (Filtering Periodic Signals (4.27)). The way we derived the spectrum of a non-periodic signal from periodic ones makes it clear that the same kind of result works when the input is not periodic: If $x(t)$ serves as the input to a linear, time-invariant system having frequency response $H(f)$, the spectrum of the output is $X(f)H(f)$.

Example 4.6
Let's use this frequency-domain input-output relationship for linear, time-invariant systems to find a formula for the $RC$-circuit's response to a pulse input. We have expressions for the input's spectrum and the system's frequency response.
$$P(f)=e^{-(j\pi f\Delta)}\frac{\sin(\pi f\Delta)}{\pi f}\qquad(4.38)$$
$$H(f)=\frac{1}{1+j2\pi fRC}\qquad(4.39)$$

Thus, the output's Fourier transform equals
$$Y(f)=e^{-(j\pi f\Delta)}\frac{\sin(\pi f\Delta)}{\pi f}\cdot\frac{1}{1+j2\pi fRC}\qquad(4.40)$$

You won't find this Fourier transform in our table, and the required integral is difficult to evaluate as the expression stands. This situation requires cleverness and an understanding of the Fourier transform's properties. In particular, recall Euler's relation for the sinusoidal term and note the fact that multiplication by a complex exponential in the frequency domain amounts to a time delay. Let's momentarily make the expression for $Y(f)$ more complicated.
$$e^{-(j\pi f\Delta)}\frac{\sin(\pi f\Delta)}{\pi f}=e^{-(j\pi f\Delta)}\frac{e^{j\pi f\Delta}-e^{-(j\pi f\Delta)}}{j2\pi f}=\frac{1}{j2\pi f}\left(1-e^{-(j2\pi f\Delta)}\right)\qquad(4.41)$$

Consequently,
$$Y(f)=\frac{1}{j2\pi f}\left(1-e^{-(j2\pi f\Delta)}\right)\frac{1}{1+j2\pi fRC}\qquad(4.42)$$

The table of Fourier transform properties (Table 4.2: Fourier Transform Properties) suggests thinking about this expression as a product of terms.
• Multiplication by $\frac{1}{j2\pi f}$ means integration.
• Multiplication by the complex exponential $e^{-(j2\pi f\Delta)}$ means delay by $\Delta$ seconds in the time domain.
• The term $1-e^{-(j2\pi f\Delta)}$ means, in the time domain, subtract the time-delayed signal from its original.
• The inverse transform of the frequency response is $\frac{1}{RC}e^{-\frac{t}{RC}}u(t)$.

We can translate each of these frequency-domain products into time-domain operations in any order we like because the order in which multiplications occur doesn't affect the result. Let's start with the product of $\frac{1}{j2\pi f}$ (integration in the time domain) and the transfer function:
$$\frac{1}{j2\pi f}\cdot\frac{1}{1+j2\pi fRC}\ \leftrightarrow\ \left(1-e^{-\frac{t}{RC}}\right)u(t)\qquad(4.43)$$

The middle term in the expression (4.42) for $Y(f)$ consists of the difference of two terms: the constant 1 and the complex exponential $e^{-(j2\pi f\Delta)}$. Because of the Fourier transform's linearity, we simply subtract the results.
$$Y(f)\ \leftrightarrow\ \left(1-e^{-\frac{t}{RC}}\right)u(t)-\left(1-e^{-\frac{t-\Delta}{RC}}\right)u(t-\Delta)\qquad(4.44)$$

Note how, in delaying the signal, we carefully included the unit step. The second term in this result does not begin until $t=\Delta$. Thus, the waveforms shown in the Filtering Periodic Signals (Figure 4.10: Filtering a periodic signal) example mentioned above are exponentials. We say that the time constant of an exponentially decaying signal equals the time it takes to decrease by $\frac{1}{e}$ of its original value. Thus, the time constants of the rising and falling portions of the output equal the product of the circuit's resistance and capacitance.

Exercise 4.9.1 (Solution on p. 168.)
Derive the filter's output by considering the terms in (4.42) in the order given. Integrate last rather than first. You should get the same answer.

In this example, we used the table extensively to find the inverse Fourier transform, relying mostly on what multiplication by certain factors, like $\frac{1}{j2\pi f}$ and $e^{-(j2\pi f\Delta)}$, meant. We essentially treated multiplication by these factors as if they were transfer functions of some fictitious circuit. The transfer function $\frac{1}{j2\pi f}$ corresponded to a circuit that integrated, and $e^{-(j2\pi f\Delta)}$ to one that delayed. We even implicitly interpreted the circuit's transfer function as the input's spectrum! This approach to finding inverse transforms, breaking down a complicated expression into products and sums of simple components, is the engineer's way of breaking down the problem into several subproblems that are much easier to solve and then gluing the results together. Along the way we may make the system serve as the input, but in the rule $Y(f)=X(f)H(f)$, which term is the input and which is the transfer function is merely a notational matter (we labeled one factor with an $X$ and the other with an $H$).

4.9.1 Transfer Functions
The notion of a transfer function applies well beyond linear circuits. Although we don't have all we need to demonstrate the result as yet, all linear, time-invariant systems have a frequency-domain input-output relation given by the product of the input's Fourier transform and the system's transfer function. Thus, linear circuits are a special case of linear, time-invariant systems. As we tackle more sophisticated problems in transmitting, manipulating, and receiving information, we will assume linear systems having certain properties (transfer functions) without worrying about what circuit has the desired property. At this point, you may be concerned that this approach is glib, and rightly so. Later we'll show that by involving software we really don't need to be concerned about constructing a transfer function from circuit elements and op-amps.

4.9.2 Commutative Transfer Functions
Another interesting notion arises from the commutative property of multiplication (exploited in an example above (Example 4.6)): We can rather arbitrarily choose an order in which to apply each product. Consider a cascade of two linear, time-invariant systems. Because the Fourier transform of the first system's output is $X(f)H_1(f)$ and it serves as the second system's input, the cascade's output spectrum is $X(f)H_1(f)H_2(f)$. Because this product also equals $X(f)H_2(f)H_1(f)$, the cascade having the linear systems in the opposite order yields the same result. Furthermore, the cascade acts like a single linear system, having transfer function $H_1(f)H_2(f)$. This result applies to other configurations of linear, time-invariant systems as well; see this Frequency Domain Problem (Problem 4.13). Engineers exploit this property by determining what transfer function they want, then breaking it down into components arranged according to standard configurations. Using the fact that op-amp circuits can be connected in cascade with the transfer function equaling the product of its components' transfer functions (see the corresponding analog signal processing problem in Chapter 3), we find a ready way of realizing designs. We now understand why op-amp implementations of transfer functions are so important.
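The commutativity of cascades is simple to confirm numerically. The Matlab sketch below (not part of the original text) builds two transfer functions from this section's ingredients, an RC lowpass and a pure delay, and applies them to a test input in both orders; all parameter values are arbitrary choices.

% Cascade order does not matter: X(f)H1(f)H2(f) = X(f)H2(f)H1(f).
fs = 10000; N = 1000; t = (0:N-1)/fs;
x = exp(-((t - 0.02)/0.002).^2);        % a smooth test input
f = [0:N/2, -N/2+1:-1] * fs/N;          % FFT bin frequencies
RC = 1e-3; tau = 0.01;
H1 = 1 ./ (1 + 1j*2*pi*f*RC);           % RC lowpass
H2 = exp(-1j*2*pi*f*tau);               % tau-second delay
y12 = ifft(fft(x) .* H1 .* H2);
y21 = ifft(fft(x) .* H2 .* H1);
max(abs(y12 - y21))                     % zero: the two orders agree exactly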
4.10 Modeling the Speech Signal
(This content is available online at <http://cnx.org/content/m0049/2.29/>.)

[Figure 4.13 (Vocal Tract): The vocal tract is shown in cross-section, with the nasal cavity, oral cavity, lips, teeth, tongue, vocal cords, air flow, and lungs labeled. Air pressure produced by the lungs forces air through the vocal cords that, when under tension, produce puffs of air that excite resonances in the vocal and nasal cavities. What are not shown are the brain and the musculature that control the entire speech production process.]
system choice is governed by the physics underlying the actual production process. We concentrate rst on the vowel production mechanism. and/or chemistry of the problem are nonlinear. Nonlinear models are far more dicult at the current state of knowledge to understand. the air passes through more or less freely. 168.13 (Vocal Tract) shows the actual speech production system and Figure 4.146 CHAPTER 4. we can use linear systems in our model with a fair amount of accuracy. in western . but not necessarily a sucient level of accuracy. it is the control input that carries information. You can imagine what the airow is like on the opposite side of the rubber band or the vocal cords. impressing it on the system's output. and information engineers frequently prefer linear models because they provide a greater level of comfort. This dierence is also readily apparent in the speech signal itself. After puberty. bringing the teeth together. and lips. and nasal cavity the tract. we shall assume that the pitch period is constant.15 (Speech Spectrum)." Rounding the lips. the dierence between a statement and a question is frequently expressed by pitch changes. spreading the teeth. To simplify our speech modeling eort. Speech specialists tend to name the mouth." and "Let's go to the park?". note the sound dierences between "Let's go to the park. lips. so much so that we describe it as being noise. when consonants such as "f" are produced. .9) contains pitch frequency fundamental frequency 1 or the T . The sound pressure signal thus produced enters the mouth behind the tongue. which has the eect of lowering their pitch frequency to the range 80-160 Hz. pitch is much less important. The resulting output airow is quite erratic. the vocal cords do not produce a periodic output. For some consonants.147 speech. With this simplication. Whereas the organ pipe has the simple physical structure of a straight tube. Spreading the lips. A sentence can be read in a monotone fashion without completely destroying the information expressed by the sentence. the vocal cords of males undergo a physical change. teeth. time-invariant system that has a frequency response typied by several peaks. and bringing the tongue toward the front portion of the roof of the mouth produces the sound "ee. For example. vocal The physics governing the sound disturbances produced in the vocal tract and those of an organ pipe are quite similar. the cross-section of the vocal tract "tube" varies along its length because of the positions of the tongue. If we could examine the vocal cord output. Before puberty. the so-called nasal sounds "n" and "m" have this property." These variations result in a linear. we collapse the vocal-cord-lung system as a simple source that produces the periodic pulse signal (Figure 4. It is these positions that are controlled by the brain to produce the vowel sounds. the vocal cords are placed under much less tension. as shown in Figure 4. However. For example. The spectrum of this signal (4. Going back to mechanism. teeth. tongue.1). The vocal cords' periodic output can be well described by the periodic pulse train periodic pulse signal (Figure 4. harmonics of the frequency pitch frequency for normal speech ranges between 150-400 Hz for both males and females. The primary dierence between adult male and female/prepubescent speech is pitch. We dene noise carefully later when we delve into communication problems. we could probably discern whether the speaker was male or female. 
$p_T(t)$, with $T$ denoting the pitch period. The spectrum of this signal (4.9) contains harmonics of the frequency $\frac{1}{T}$, what is known as the pitch frequency or the fundamental frequency F0. Before puberty, the pitch frequency for normal speech ranges between 150-400 Hz for both males and females. After puberty, the vocal cords of males undergo a physical change, which has the effect of lowering their pitch frequency to the range 80-160 Hz, because the vocal cords are placed under much less tension. If we could examine the vocal cord output, we could probably discern whether the speaker was male or female. This difference is also readily apparent in the speech signal itself. The primary difference between adult male and female/prepubescent speech is pitch.

The vocal cords' periodic output, acting as the input to the vocal tract, creates acoustic disturbances. The sound pressure signal thus produced enters the mouth behind the tongue and exits primarily through the lips and to some extent through the nose. Speech specialists tend to name the mouth, tongue, teeth, lips, and nasal cavity the vocal tract. The physics governing the sound disturbances produced in the vocal tract and those of an organ pipe are quite similar. Whereas the organ pipe has the simple physical structure of a straight tube, the cross-section of the vocal tract "tube" varies along its length because of the positions of the tongue, teeth, and lips. It is these positions that are controlled by the brain to produce the vowel sounds. Spreading the lips, bringing the teeth together, and bringing the tongue toward the front portion of the roof of the mouth produces the sound "ee." Rounding the lips, spreading the teeth, and positioning the tongue toward the back of the oral cavity produces the sound "oh." These variations result in a linear, time-invariant system that has a frequency response typified by several peaks, as shown in Figure 4.15 (Speech Spectrum).

[Figure 4.15 (Speech Spectrum): The ideal frequency response of the vocal tract as it produces the sounds "oh" and "ee" are shown on the top left and top right, respectively; the spectral peaks, the formants F1 through F5, are marked on the frequency axis (0 to 5000 Hz). The bottom plots show speech waveforms corresponding to these sounds over about 20 ms.]

These peaks are known as formants, and are numbered consecutively from low to high frequency. Speech signal processors would say that the sound "oh" has a higher first formant frequency than the sound "ee," with F2 being much higher during "ee." F2 and F3 (the second and third formants) have more energy in "ee" than in "oh." Rather than serving as a filter, rejecting high or low frequencies, the vocal tract serves to shape the spectrum of the vocal cords.

However, when consonants are produced the vocal tract behaves differently. Going back to the vocal cord mechanism, for some consonants the vocal cords vibrate just as in vowels; for example, the so-called nasal sounds "n" and "m" have this property. For others, such as "f," the vocal cords do not produce a periodic output; air is forced through a constriction, which results in turbulent flow. The resulting output airflow is quite erratic, so much so that we describe it as being noise. We define noise carefully later when we delve into communication problems.

In the time domain, we have a periodic signal, the pitch, serving as the input to a linear system. We know that the output, the speech signal we utter and that is heard by others and ourselves, will also be periodic. Example time-domain speech signals are shown in Figure 4.15 (Speech Spectrum), where the periodicity is quite apparent.

Exercise 4.10.2 (Solution on p. 168.)
From the waveform plots shown in Figure 4.15 (Speech Spectrum), determine the pitch period and the pitch frequency.

Since speech signals are periodic, speech has a Fourier series representation given by a linear circuit's response to a periodic signal (4.27). Because the acoustics of the vocal tract are linear, we know that the spectrum of the output equals the product of the pitch signal's spectrum and the vocal tract's frequency response. We thus obtain the fundamental model of speech production.
$$S(f)=P_T(f)H_V(f)\qquad(4.45)$$

Here, $H_V(f)$ is the transfer function of the vocal tract system. The Fourier series for the vocal cords' output, derived earlier (4.10), is
$$c_k=Ae^{-\frac{j\pi k\Delta}{T}}\frac{\sin\!\left(\frac{\pi k\Delta}{T}\right)}{\pi k}\qquad(4.46)$$
and is plotted on the top in Figure 4.16 (voice spectrum). If we had, for example, a male speaker with about a 110 Hz pitch ($T\approx9.1$ ms) saying the vowel "oh", the spectrum of his speech model is shown in Figure 4.16(b) (voice spectrum).

[Figure 4.16 (voice spectrum): (a) The vocal cords' output spectrum $P_T(f)$. (b) The vocal tract's transfer function $H_V(f)$, shown as the thin, smooth line, is superimposed on the spectrum of actual male speech corresponding to the sound "oh." The pitch lines corresponding to harmonics of the pitch frequency are indicated.]

The pitch lines are the most prominent feature of the measured spectrum,
and we realize from our model that they are due to the vocal cords' periodic excitation of the vocal tract. The model spectrum idealizes the measured spectrum, and captures all the important features. The vocal tract's shaping of the line spectrum is clearly evident in the measured spectrum, but difficult to discern exactly, especially at the higher frequencies; the model transfer function for the vocal tract makes the formants much more readily evident.

Exercise 4.10.3 (Solution on p. 168.)
The Fourier series coefficients for speech are related to the vocal tract's transfer function only at the frequencies $\frac{k}{T}$, $k\in\{1,2,\dots\}$; see previous result (4.10). Would male or female speech tend to have a more clearly identifiable formant structure when its spectrum is computed? Consider, for example, how the spectrum shown on the right in Figure 4.16 (voice spectrum) would change if the pitch were twice as high ($\approx300$ Hz).

When we speak, pitch and the vocal tract's transfer function are not static; they change according to their control signals to produce speech. Engineers typically display how the speech spectrum changes over time with what is known as a spectrogram, such as the one shown in Figure 4.17 (spectrogram). In our sample utterance, the periodicity, which indicates the pitch, is visible during the vowels, but not during the consonants (like the "ce" in "Rice").

[Figure 4.17 (spectrogram): Displayed is the spectrogram (frequency 0-5000 Hz versus time) of the author saying "Rice University." Blue indicates low energy portions of the spectrum, with red indicating the most energetic portions. Below the spectrogram is the time-domain speech signal, where the periodicities can be seen.]
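The fundamental model (4.45) suggests a crude synthesis recipe, sketched below in Matlab (not part of the original text): excite a cascade of resonators with a periodic pulse train. The formant frequencies, pole radius, and pitch used here are assumed illustrative values, and two resonances are far simpler than a real vocal tract, but the output already has a vowel-like buzz.

% A toy version of the speech model: pitch pulses through two resonators.
fs = 8000; t = 0:1/fs:0.5;
F0 = 110;                                 % assumed pitch frequency (Hz)
pT = zeros(size(t));
pT(1:round(fs/F0):end) = 1;               % impulse-like vocal cord pulses
formants = [500 1000];                    % assumed formants, roughly "oh"
s = pT;
for F = formants                          % cascade of two-pole resonators
  r = 0.97;
  den = [1, -2*r*cos(2*pi*F/fs), r^2];    % poles at r*exp(+/- j 2 pi F/fs)
  s = filter(1, den, s);
end
% sound(s/max(abs(s)), fs)                % uncomment to listen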
From everyday life, we know that speech contains a wealth of information. We want to determine how to transmit and receive it. Efficient and effective speech transmission requires us to know the signal's properties and its structure (as expressed by the fundamental model of speech production). We see from Figure 4.17 (spectrogram), for example, that speech contains significant energy from zero frequency up to around 5 kHz. Effective speech transmission systems must be able to cope with signals having this bandwidth. It is interesting that one system that does not support this 5 kHz bandwidth is the telephone: Telephone systems act like a bandpass filter passing energy between about 200 Hz and 3.2 kHz. The most important consequence of this filtering is the removal of high frequency energy. In our sample utterance, the "ce" sound in "Rice" contains most of its energy above 3.2 kHz; this filtering effect is why it is extremely difficult to distinguish the sounds "s" and "f" over the telephone. Try this yourself: Call a friend and determine if they can distinguish between the words "six" and "fix". If you say these words in isolation so that no context provides a hint about which word you are saying, your friend will not be able to tell them apart. Radio does support this bandwidth (see more about AM and FM radio systems (Section 6.11)).

Efficient speech transmission systems exploit the speech signal's special structure: What makes speech speech? You can conjure many signals that span the same frequencies as speech (car engine sounds, violin music, dog barks) but don't sound at all like speech. We shall learn later that transmission of any 5 kHz bandwidth signal requires about 80 kbps (thousands of bits per second) to transmit digitally. Speech signals can be transmitted using less than 1 kbps because of their special structure. To reduce the "digital bandwidth" so drastically means that engineers spent many years developing signal processing and coding methods that could capture the special characteristics of speech without destroying how it sounds. If you used a speech transmission system to send a violin sound, it would arrive horribly distorted; speech transmitted the same way would sound fine. Exploiting the special structure of speech requires going beyond the capabilities of analog signal processing systems; we need to do more than filtering to determine the speech signal's structure. Many speech transmission systems work by finding the speaker's pitch and the formant frequencies. Fundamentally, we need to manipulate signals in more ways than are possible with analog systems. Such flexibility is achievable (but not without some loss) with programmable digital systems. The fundamental model for speech indicates how engineers use the physics underlying the signal generation process and exploit its structure to produce a systems model that suppresses the physics while emphasizing how the signal is "constructed."

4.11 Frequency Domain Problems
(This content is available online at <http://cnx.org/content/m10350/2.42/>.)

Problem 4.1: Simple Fourier Series
Find the complex Fourier series representations of the following signals without explicitly calculating Fourier integrals. What is the signal's period in each case?
a) $s(t)=\sin(t)$
b) $s(t)=\sin^2(t)$
c) $s(t)=\cos(t)+2\cos(2t)$
d) $s(t)=\cos(2t)\cos(t)$
e) $s(t)=\cos\!\left(10\pi t+\frac{\pi}{6}\right)(1+\cos(2\pi t))$
f) $s(t)$ given by the depicted waveform (Figure 4.18).

[Figure 4.18: the unit-amplitude waveform $s(t)$ for Problem 4.1(f), with time-axis marks at $t=\frac{1}{8}$, $\frac{3}{8}$, $\frac{4}{8}$, and 1.]

Problem 4.2: Fourier Series
Find the Fourier series representation for the following periodic signals (Figure 4.19). For the third signal, find the complex Fourier series for the triangle wave without performing the usual Fourier integrals.
let's generalize the nature of the approximation to including any set of 2K + 1 terms: We'll always include the c0 and the negative indexed term corresponding to ck . always minimizes the meansquared error). How would you characterize this circuit? c) Let vin (t) be a square-wave of period T. the most frequently used error measure is the mean-squared error. the square wave is passed through a system that delays its input. Let the delay τ be indeed delayed. d) Use Matlab to nd the output's waveform for the cases the two kinds of results you found? The software in delineates e) Instead of the depicted circuit. One convenient way of nding approximations for periodic signals is to truncate their Fourier series. What value of T fourier2. mat contains these data (daylight hours in the rst row. Examining the plots of input and output. then daily temperatures would be proportional to the number of daylight hours. Hot Days The daily temperature is a consequence of several eects.22) shows that the average daily high temperature does not behave that way. Texas. we want to understand the temperature component of our environment using Fourier series and linear system theory. a) Let the length of day serve as the sole input to a system having an output equal to the average daily temperature. corresponding average daily highs in the second) for Houston. The le temperature. would you say that the system is linear or not? How did you reach you conclusion? . The plot (Figure 4. If this were the dominant eect. one of them being the sun's heating.22 In this problem.5: Long. 95 14 Temperature 85 13 80 75 Daylight 12 70 65 Daylight Hours Average High Temperature 90 11 60 55 10 50 0 50 100 150 200 Day 250 300 350 Figure 4.21 Problem 4.155 1 s(t) 1 2 t Figure 4. 7: Duality in Fourier Transforms "Duality" means that the Fourier transform and the inverse Fourier transform are very similar. c) How are these answers related? What is the general relationship between the Fourier transform of and the inverse transform of s (t) s (f )? 1 s(t) 1 S(f) f t 1 1 (a) (b) Figure 4.23(a)).156 CHAPTER 4. c4 ) FREQUENCY DOMAIN of the complex Fourier series for each signal. let's concentrate only on the rst harmonic. c) What is the harmonic distortion in the two signals? Exclude c0 from this calculation. respectively. What is the phase shift between input and output signals? e) Find the transfer function of the simplest possible linear model that would describe the data. . In particular. quently. a) Calculate the Fourier transform of the signal shown below (Figure 4. the waveform s (t) s (f ) in the time domain and the spectrum Conse- have a Fourier transform and an inverse Fourier transform. a) b) c) d) x (t) = e−(a|t|) −(at) x (t) = te u (t)  1 if |f | < W X (f ) =  0 if |f | > W x (t) = e−(at) cos (2πf0 t) u (t) Problem 4. . Would days be hotter? If so. d) Because the harmonic distortion is small. b) Calculate the inverse Fourier transform of the spectrum shown below (Figure 4. What are their spectral properties? a) Calculate the Fourier transform of the single pulse shown below (Figure 4.. give a physical explanation for the phase shift. . Characterize and interpret the structure of this model.23 Problem 4. by how much? Problem 4.8: Spectra of Pulse Sequences Pulse sequences occur often in digital communication and in other elds as well.. b) Find the rst ve terms (c0 . that are very similar.23(b)).6: Fourier Transform Pairs Find the Fourier or inverse Fourier transform of the following. 
b) Find the first five terms ($c_0,\dots,c_4$) of the complex Fourier series for each signal.
c) What is the harmonic distortion in the two signals? Exclude $c_0$ from this calculation.
d) Because the harmonic distortion is small, let's concentrate only on the first harmonic. What is the phase shift between input and output signals? Give a physical explanation for the phase shift.
e) Find the transfer function of the simplest possible linear model that would describe the data. Characterize and interpret the structure of this model.
f) Predict what the output would be if the model had no phase shift. Would days be hotter? If so, by how much?

Problem 4.6: Fourier Transform Pairs
Find the Fourier or inverse Fourier transform of the following.
a) $x(t)=e^{-(a|t|)}$
b) $x(t)=te^{-(at)}u(t)$
c) $X(f)=\begin{cases}1&\text{if }|f|<W\\0&\text{if }|f|>W\end{cases}$
d) $x(t)=e^{-(at)}\cos(2\pi f_0t)u(t)$

Problem 4.7: Duality in Fourier Transforms
"Duality" means that the Fourier transform and the inverse Fourier transform are very similar. Consequently, the waveform $s(t)$ in the time domain and the spectrum $s(f)$ have a Fourier transform and an inverse Fourier transform, respectively, that are very similar.
a) Calculate the Fourier transform of the signal shown below (Figure 4.23(a)).
b) Calculate the inverse Fourier transform of the spectrum shown below (Figure 4.23(b)).
c) How are these answers related? What is the general relationship between the Fourier transform of $s(t)$ and the inverse transform of $s(f)$?

[Figure 4.23: (a) the signal $s(t)$ (unit amplitude, unit duration); (b) the spectrum $S(f)$ (unit amplitude, unit width).]

Problem 4.8: Spectra of Pulse Sequences
Pulse sequences occur often in digital communication and in other fields as well. What are their spectral properties?
a) Calculate the Fourier transform of the single pulse shown below (Figure 4.24(a)).
b) Calculate the Fourier transform of the two-pulse sequence shown below (Figure 4.24(b)).
c) Calculate the Fourier transform for the ten-pulse sequence shown below (Figure 4.24(c)). You should look for a general expression that holds for sequences of any length.
d) Using Matlab, plot the magnitudes of the three spectra. Describe how the spectra change as the number of repeated pulses increases.

[Figure 4.24: (a) a single unit-amplitude pulse; (b) a two-pulse sequence; (c) a ten-pulse sequence extending to $t=9$; all pulses have the same width.]

Problem 4.9: Spectra of Digital Communication Signals
One way to represent bits with signals is shown in Figure 4.25. If the value of a bit is a "1", it is represented by a positive pulse of duration $T$. If it is a "0", it is represented by a negative pulse of the same duration. To represent a sequence of bits, the appropriately chosen pulses are placed one after the other.

[Figure 4.25: the pulses representing a "1" (positive, duration $T$) and a "0" (negative, duration $T$).]

a) What is the spectrum of the waveform that represents the alternating bit sequence ...01010101...?
b) This signal's bandwidth is defined to be the frequency range over which 90% of the power is contained. What is this signal's bandwidth?
c) Suppose the bit sequence becomes ...00110011.... Now what is the bandwidth?

Problem 4.10: Lowpass Filtering a Square Wave
Let a square wave (period $T$) serve as the input to a first-order lowpass system constructed as an $RC$ filter. We want to derive an expression for the time-domain response of the filter to this input.
a) First, consider the response of the filter to a simple pulse, having unit amplitude and width $\frac{T}{2}$. Derive an expression for the filter's output to this pulse.
b) Noting that the square wave is a superposition of a sequence of these pulses, what is the filter's response to the square wave?
c) The nature of this response should change as the relation between the square wave's period and the filter's cutoff frequency changes. How long must the period be so that the response does not achieve a relatively constant value between transitions in the square wave? What is the relation of the filter's cutoff frequency to the square wave's spectrum in this case?

Problem 4.11: Mathematics with Circuits
Simple circuits can implement simple mathematical operations, such as integration and differentiation. We want to develop an active circuit (it contains an op-amp) having an output that is proportional to the integral of its input. For example, you could use an integrator in a car to determine distance traveled from the speedometer.
a) What is the transfer function of an integrator?
b) Find an op-amp circuit so that its voltage output is proportional to the integral of its input for all signals.

Problem 4.12: Where is that sound coming from?
We determine where sound is coming from because we have two ears and a brain. Sound travels at a relatively slow speed and our brain uses the fact that sound will arrive at one ear before the other. As shown here (Figure 4.26), a sound coming from the right arrives at the left ear $\tau$ seconds after it arrives at the right ear.

[Figure 4.26: a sound wave arriving from the right reaches the right ear as $s(t)$ and the left ear as $s(t-\tau)$.]

Once the brain finds this propagation delay, it can determine the sound direction. In an attempt to model what the brain might do, RU signal processors want to design an optimal system that delays each ear's signal by some amount then adds them together. $\Delta_l$ and $\Delta_r$ are the delays applied to the left and right signals respectively. The idea is to determine the delay values according to some criterion that is based on what is measured by the two ears.
a) What is the transfer function between the sound signal $s(t)$ and the processor output $y(t)$?
b) One way of determining the delay $\tau$ is to choose $\Delta_l$ and $\Delta_r$
[Figure 4.26: A sound wave s(t) arriving from the right; the left ear receives the delayed version s(t − τ).]

Once the brain finds this propagation delay, it can determine the sound direction. In an attempt to model what the brain might do, RU signal processors want to design an optimal system that delays each ear's signal by some amount and then adds them together; ∆l and ∆r are the delays applied to the left and right signals, respectively. The idea is to determine the delay values according to some criterion that is based on what is measured by the two ears.
a) What is the transfer function between the sound signal s(t) and the processor output y(t)?
b) One way of determining the delay τ is to choose ∆l and ∆r to maximize the power in y(t). How are these maximum-power processing delays related to τ?

Problem 4.13: Arrangements of Systems
Architecting a system of modular components means arranging them in various configurations to achieve some overall input-output relation. For each of the following (Figure 4.27), determine the overall transfer function between x(t) and y(t).

[Figure 4.27: (a) system a: x(t) passes through H1(f) and H2(f) in cascade; (b) system b: x(t) feeds H1(f) and H2(f) in parallel, their outputs combining to form y(t); (c) system c: a feedback arrangement with error signal e(t), H1(f) in the forward path, and H2(f) in the feedback path.]

The overall transfer function for the cascade (first depicted system) is particularly interesting. What does it say about the effect of the ordering of linear, time-invariant systems in a cascade?

Problem 4.14: Filtering
Let the signal s(t) = sin(πt)/(πt) be the input to a linear, time-invariant filter having the transfer function shown below (Figure 4.28). Find the expression for y(t), the filter's output.

[Figure 4.28: The transfer function H(f): unit gain for |f| ≤ 1/4 and zero otherwise.]

Problem 4.15: Circuits Filter!
A unit-amplitude pulse with duration of one second serves as the input to an RC circuit having transfer function
H(f) = j2πf / (4 + j2πf)
a) How would you categorize this transfer function: lowpass, highpass, bandpass, other?
b) Find a circuit that corresponds to this transfer function.
c) Find an expression for the filter's output.

Problem 4.16: Reverberation
Reverberation corresponds to adding to a signal its delayed version.
a) Assuming τ represents the delay, what is the input-output relation for a reverberation system? Is the system linear and time-invariant? If so, find the transfer function; if not, what linearity or time-invariance criterion does reverberation violate?
b) A music group known as the ROwls is having trouble selling its recordings. The record company's engineer gets the idea of applying different delays to the low and high frequencies and adding the result to create a new musical effect. Thus, the ROwls' audio would be separated into two parts (one less than the frequency f0, the other greater than f0); these would be delayed by τl and τh respectively, and the resulting signals added. Draw a block diagram for this new audio processing system, showing its various components.
c) How does the magnitude of the system's transfer function depend on the two delays?

Problem 4.17: Echoes in Telephone Systems
A frequently encountered problem in telephones is echo. Here, because of acoustic coupling between the earpiece and microphone in the handset, what you hear is also sent to the person talking. That person thus not only hears you, but also hears her own speech delayed (because of propagation delay over the telephone network) and attenuated (the acoustic coupling gain is less than one). Furthermore, the same problem applies to you as well: the acoustic coupling occurs in her handset as well as yours.
a) Develop a block diagram that describes this situation.
b) Find the transfer function between your voice and what the listener hears.
c) Each telephone contains a system for reducing echoes using electrical means. What simple system could null the echoes?

Problem 4.18: Effective Drug Delivery
In most patients, it takes time for the concentration of an administered drug to achieve a constant level in the blood stream.
Typically, if the drug concentration in the patient's intravenous line is Cd u(t), the concentration in the patient's blood stream is Cp (1 − e^(−at)) u(t).
a) Assuming the relationship between the delivered drug concentration and the concentration in the patient's blood can be described as a linear, time-invariant system, what is the transfer function?
b) Sometimes, the drug delivery system goes awry and delivers drugs with little control. What would the patient's drug concentration be if the delivered concentration were a ramp? More precisely, if it were Cd t u(t)?
c) A clever doctor wants to have the flexibility to slow down or speed up the patient's drug concentration. In other words, the concentration is to be Cp (1 − e^(−bt)) u(t), with b bigger or smaller than a. How should the delivered drug concentration signal be changed to achieve this concentration profile?

Problem 4.19: Catching Speeders with Radar
RU Electronics has been contracted to design a Doppler radar system. Radar transmitters emit a signal that bounces off any conducting object. Differences between what is sent and the radar return are processed and features of interest extracted. In Doppler systems, the object's speed along the direction of the radar beam is the feature the design must extract. The transmitted signal is a sinusoid: x(t) = A cos(2πfc t). The measured return signal equals B cos(2π((fc + ∆f) t + φ)), where the Doppler offset frequency ∆f equals 10v, with v the car's velocity coming toward the transmitter.
a) Design a system that uses the transmitted and return signals as inputs and produces ∆f.
b) One problem with designs based on overly simplistic design goals is that they are sensitive to unmodeled assumptions. How would you change your design, if at all, so that whether the car is going away from or toward the transmitter could be determined?
c) Suppose two objects traveling at different speeds provide returns. How would you change your design, if at all, to accommodate multiple returns?

Problem 4.20: Demodulating an AM Signal
Let m(t) denote the signal that has been amplitude modulated:
x(t) = A (1 + m(t)) sin(2πfc t)
Radio stations try to restrict the amplitude of the signal m(t) so that it is less than one in magnitude. The frequency fc is very large compared to the frequency content of the signal. What we are concerned about here is not transmission, but reception.
a) The so-called coherent demodulator simply multiplies the signal x(t) by a sinusoid having the same frequency as the carrier and lowpass filters the result. Analyze this receiver and show that it works. Assume the lowpass filter is ideal.
b) One issue in coherent reception is the phase of the sinusoid used by the receiver relative to that used by the transmitter. Assuming that the sinusoid of the receiver has a phase φ, how does the output depend on φ? What is the worst possible value for this phase?
c) The incoherent receiver is more commonly used because of the phase sensitivity problem inherent in coherent reception. Here, the receiver full-wave rectifies the received signal and lowpass filters the result (again ideally). Analyze this receiver. Does its output differ from that of the coherent receiver in a significant way?

Problem 4.21: Unusual Amplitude Modulation
We want to send a band-limited signal having the depicted spectrum (Figure 4.29(a)) with amplitude modulation in the usual way. I.B. Different suggests using the square-wave carrier shown below (Figure 4.29(b)).
Well, it is different, but his friends wonder if any technique can demodulate it.
a) Find an expression for X(f), the Fourier transform of the modulated signal.
b) Sketch the magnitude of X(f), being careful to label important magnitudes and frequencies.
c) What demodulation technique obviously works?
d) I.B. challenges three of his friends to demodulate x(t) some other way. One friend suggests modulating x(t) with cos(πt/2), another wants to try modulating with cos(πt), and the third thinks cos(3πt/2) will work. Sketch the magnitude of the Fourier transform of the signal each student's approach produces. Which student comes closest to recovering the original signal? Why?

[Figure 4.29: (a) the message spectrum S(f), bandlimited to |f| < 1/4; (b) the square-wave carrier.]

Problem 4.22: Sammy Falls Asleep...
While sitting in ELEC 241 class, Sammy falls asleep during a critical time when an AM receiver is being described. The received signal has the form r(t) = A (1 + m(t)) cos(2πfc t + φ), where the phase φ is unknown. The message signal is m(t); it has a bandwidth of W Hz and a magnitude less than 1 (|m(t)| < 1). The instructor drew a diagram (Figure 4.30) for a receiver on the board; Sammy slept through the description of what the unknown systems were.

[Figure 4.30: The receiver block diagram: r(t) is multiplied by cos(2πfc t) and by sin(2πfc t); each product is lowpass filtered to W Hz, producing xc(t) and xs(t), which feed unknown systems marked "?".]

a) What are the signals xc(t) and xs(t)?
b) What would you put in for the unknown systems that would guarantee that the final output contained the message regardless of the phase? Hint: Think of a trigonometric identity that would prove useful.
c) Sammy may have been asleep, but he can think of a far simpler receiver. What is it?

Problem 4.23: Jamming
Sid Richardson College decides to set up its own AM radio station KSRR. The resident electrical engineer decides that she can choose any carrier frequency and message bandwidth for the station. A rival college decides to jam its transmissions by transmitting a high-power signal that interferes with radios that try to receive KSRR. The jamming signal jam(t) is what is known as a sawtooth wave (depicted in Figure 4.31) having a period known to KSRR's engineer.

[Figure 4.31: The sawtooth jamming signal jam(t), amplitude A, period T.]

a) Find the spectrum of the jamming signal.
b) Can KSRR entirely circumvent the attempt to jam it by carefully choosing its carrier frequency and transmission bandwidth? If so, find the station's carrier frequency and transmission bandwidth in terms of T, the period of the jamming signal; if not, show why not.

Problem 4.24: AM Stereo
A stereophonic signal consists of a "left" signal l(t) and a "right" signal r(t) that convey sounds coming from an orchestra's left and right sides, respectively. To transmit these two signals simultaneously, the transmitter first forms the sum signal s+(t) = l(t) + r(t) and the difference signal s−(t) = l(t) − r(t). Then, the transmitter amplitude-modulates the difference signal with a sinusoid having frequency 2W, where W is the bandwidth of the left and right signals. The sum signal and the modulated difference signal are added, the sum amplitude-modulated to the radio station's carrier frequency fc, and transmitted. Assume the spectra of the left and right signals are as shown (Figure 4.32).

[Figure 4.32: The spectra L(f) and R(f), each bandlimited to |f| < W.]

a) What is the expression for the transmitted signal? Sketch its spectrum.
b) Show the block diagram of a stereo AM receiver that can yield the left and right signals as separate outputs.
c) What signal would be produced by a conventional coherent AM receiver that expects to receive a standard AM signal conveying a message signal having bandwidth W?

Problem 4.25: Novel AM Stereo Method
A clever engineer has submitted a patent for a new method for simultaneously transmitting two signals in the same transmission bandwidth as commercial AM radio. As shown (Figure 4.33), her approach is to modulate the positive portion of the carrier with one signal and the negative portion with a second.

[Figure 4.33: An example transmitter waveform, with amplitude between −1.5 and 1.5 plotted over ten time units.]

In detail, the two message signals m1(t) and m2(t) are bandlimited to W Hz and have maximal amplitudes equal to 1. The carrier has a frequency fc much greater than W. The transmitted signal x(t) is given by
x(t) = A (1 + a m1(t)) sin(2πfc t) if sin(2πfc t) ≥ 0
x(t) = A (1 + a m2(t)) sin(2πfc t) if sin(2πfc t) < 0
In all cases, 0 < a < 1. The plot shows the transmitted signal when the messages are sinusoids: m1(t) = sin(2πfm t) and m2(t) = sin(2π 2fm t), where 2fm < W. You, as the patent examiner, must determine whether the scheme meets its claims and is useful.
a) Provide a more concise expression for the transmitted signal x(t) than given above.
b) What is the receiver for this scheme? It would yield both m1(t) and m2(t) from x(t).
c) Find the spectrum of the positive portion of the transmitted signal.
d) Determine whether this scheme satisfies the design criteria, allowing you to grant the patent. Explain your reasoning.

Problem 4.26: A Radical Radio Idea
An ELEC 241 student has the bright idea of using a square wave instead of a sinusoid as an AM carrier. The transmitted signal would have the form x(t) = A (1 + m(t)) sqT(t), where the message signal m(t) would be amplitude-limited: |m(t)| < 1.
a) Assuming the message signal is lowpass and has a bandwidth of W Hz, what values for the square wave's period T are feasible? In other words, do some combinations of W and T prevent reception?
b) Assuming reception is possible, can standard radios receive this innovative AM transmission? If so, show how a coherent receiver could demodulate it; if not, show how the coherent receiver's output would be corrupted. Assume that the message bandwidth W = 5 kHz.

Problem 4.27: Secret Communication
An amplitude-modulated secret message m(t) has the following form:
r(t) = A (1 + m(t)) cos(2π(fc + f0) t)
The message signal has a bandwidth of W Hz and a magnitude less than 1 (|m(t)| < 1). The idea is to offset the carrier frequency by f0 Hz from standard radio carrier frequencies. Thus, "off-the-shelf" coherent demodulators would assume the carrier frequency is fc Hz. Here, f0 < W.
a) Sketch the spectrum of the demodulated signal produced by a coherent demodulator tuned to fc Hz.
b) Will this demodulated signal be a scrambled version of the original? If so, how so; if not, why not?
c) Can you develop a receiver that can demodulate the message without knowing the offset frequency f0?

Problem 4.28: Signal Scrambling
An excited inventor announces the discovery of a way of using analog technology to render music unlistenable without knowing the secret recovery method. The idea is to modulate the bandlimited message m(t) by a special periodic signal s(t) that is zero during half of its period, which renders the message unlistenable and, superficially at least, unrecoverable (Figure 4.34).
[Figure 4.34: One period of the modulating signal s(t): unit peak amplitude, nonzero only for 0 ≤ t ≤ T/2 (axis markings at T/4, T/2, and T).]

a) What is the Fourier series for the periodic signal?
b) What are the restrictions on the period T so that the message signal can be recovered from m(t) s(t)?
c) ELEC 241 students think they have "broken" the inventor's scheme and are going to announce it to the world. How would they recover the original message without having detailed knowledge of the modulating signal?

Solutions to Exercises in Chapter 4

Solution to Exercise 4.1.1 (p. 120)
Because of Euler's relation,
sin(2πft) = (1/2j) e^(j2πft) − (1/2j) e^(−j2πft)
Thus, c1 = 1/(2j), c−1 = −1/(2j), and the other coefficients are zero.

Solution to Exercise 4.2.1 (p. 124)
Write the coefficients of the complex Fourier series in Cartesian form as ck = Ak + jBk and substitute into the expression for the complex Fourier series:
Σ_{k=−∞}^{∞} ck e^(j2πkt/T) = Σ_{k=−∞}^{∞} (Ak + jBk) e^(j2πkt/T)
Simplifying each term in the sum using Euler's formula,
(Ak + jBk) e^(j2πkt/T) = (Ak + jBk) (cos(2πkt/T) + j sin(2πkt/T))
= Ak cos(2πkt/T) − Bk sin(2πkt/T) + j (Ak sin(2πkt/T) + Bk cos(2πkt/T))
We now combine terms that have the same frequency index in magnitude. Because the signal is real-valued, the coefficients of the complex Fourier series have conjugate symmetry: c−k = ck* or A−k = Ak and B−k = −Bk. After we add the positive-indexed and negative-indexed terms, each term in the Fourier series becomes 2Ak cos(2πkt/T) − 2Bk sin(2πkt/T). To obtain the classic Fourier series (4.11), we must have 2Ak = ak and 2Bk = −bk.

Solution to Exercise 4.2.2 (p. 125)
The average of a set of numbers is the sum divided by the number of terms. Viewing signal integration as the limit of a Riemann sum, the integral corresponds to the average.

Solution to Exercise 4.3.1 (p. 123)
c0 = A∆/T. This quantity clearly corresponds to the periodic pulse signal's average value.

Solution to Exercise 4.3.2 (p. 125)
We found that the complex Fourier series coefficients are given by ck = 2/(jπk). The coefficients are pure imaginary, which means ak = 0. The coefficients of the sine terms are given by bk = −2 Im(ck), so that
bk = 4/(πk) if k is odd, and 0 if k is even.
Thus, the Fourier series for the square wave is
sq(t) = Σ_{k ∈ {1,3,…}} (4/(πk)) sin(2πkt/T)

Solution to Exercise 4.3.3 (p. 127)
The rms value of a sinusoid equals its amplitude divided by √2. As a half-wave rectified sine wave is zero during half of the period, its rms value is A/2, since the integral of the squared half-wave rectified sine wave equals half that of a squared sinusoid.

Solution to Exercise 4.4.1 (p. 128)
Total harmonic distortion equals (Σ_{k=2}^{∞} (ak² + bk²)) / (a1² + b1²). Clearly, this quantity is most easily computed in the frequency domain. However, the numerator equals the square of the signal's rms value minus the power in the average and the power in the first harmonic.
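The square-wave coefficients in Solution 4.3.2 are easy to sanity-check numerically. The following small Python sketch, our own illustration rather than part of the text, approximates ck = (1/T) ∫ sq(t) e^(−j2πkt/T) dt by a Riemann sum:

import numpy as np

T = 1.0
t = np.linspace(0.0, T, 200_000, endpoint=False)
sq = np.where(t < T / 2, 1.0, -1.0)          # unit-amplitude square wave

for k in range(1, 6):
    ck = np.mean(sq * np.exp(-2j * np.pi * k * t / T))   # Riemann-sum estimate of c_k
    expected = 2 / (1j * np.pi * k) if k % 2 else 0.0
    print(k, np.round(ck, 6), np.round(expected, 6))

Odd k reproduce 2/(jπk) and even k vanish, matching the derivation above.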
the spacing between spectral lines is smaller.1 (p. The bottom-right panel has a period of about 0.7.2 (p. 133) 1− N signals directly encoded require a bandwidth of N = 128. Clearly.3 (p.009 s. Solution to Exercise 4. Solution to Exercise 4. Solution to Exercise 4. a constant input (a zero-frequency sinusoid) should yield a constant output. Solution to Exercise 4. 148) In the bottom-left panel. 131) Total harmonic distortion in the square wave is Solution to Exercise 4. which equals a frequency of 111 Hz. 142) The result is most easily found in the spectrum's formula: the power in the signal-related part of half the power of the signal x (t) is s (t).9. Solution to Exercise 4. 143) t 1 − RC u (t).1 (p. binary encoding 128 is superior. Using a binary representation. Delaying the frequency −(t−∆) 1 RC u (t − ∆).1 (p. we need T . We need two more to get us back −∞ yields where we started.10.8. we can consider each separately.1 (p. two Fourier transforms applied to s (t) ∞ S (f ) ej2πf (−t) df = s (−t) −∞ R∞ R∞ S (f ) e−(j2πf t) df = −∞ S (f ) ej2πf (−t) df = s (−t).8. and equals W  sin(πW t) πW t s (t) = 2 Solution to Exercise 4.8. the binary-encoding scheme has a factor of  1 4 2 2 π FREQUENCY DOMAIN = 20%. Because integration and signal-delay are linear. a frequency of 154 Hz. Doubling the pitch frequency to 300 Hz for Figure 4.6. The periodic output indicates nonlinear behavior. 146) If the glottis were linear. We know that F (S (f )) = s (−t). Therefore.2 (p. Now we integrate this sum.05 smaller bandwidth.1 (p.5. 136) Because the lter's gain at zero frequency equals one. Solution to Exercise 4. . 134) We can use N dierent amplitude values at only one frequency to represent the various letters.168 CHAPTER 4. Solution to Exercise 4. For 7 = 0. the integral of a delayed signal equals the delayed version of the integral.4 (p.16 (voice spectrum) would amount to removing every other spectral line. 141) The signal is the inverse Fourier transform of the triangularly shaped spectrum.8. Solution to Exercise 4. the period is about 0. 150) Because males have a lower pitch frequency.44). 139) Z ∞ F (S (f )) = S (f ) e−(j2πf t) df = Z −∞ Solution to Exercise 4. Despite such fundamental dierences. and linear systems parallel what previous chapters described.2. can be performed quickly enough to be calculated as the signal is produced. linear ltering. the speech sent over digital cellular telephones. n) for a discrete-"time" two-dimensional signal like a photo taken with a digital camera. The key reason why digital signal processing systems have a technological advantage today is the puter: com- computations. presenting This content is available online at <http://cnx. The modern denition of a computer is an electronic device that performs calculations on data. All analog systems operate in real time. a system that produces its output as rapidly as the input arises is said to be a real-time system. but the similarities should not be construed as "analog wannabes. continuity has no meaning for sequences. functions s (n) to denote a discrete-time one-dimensional signal s (m. 3 This content is available online at <http://cnx. This exibility comes at a price. 1 2 169 . dened only for the integers.3/>. we must understand a little about how computers compute. This exibility has obvious appeal. 2 and programmability means that the signal processing system can be easily changed. We must explicitly worry about the delity of converting analog signals into digital ones.2 Introduction to Computer Organization 3 5. 
like the Fourier transform. Clearly. These similarities make it easy to understand the denitions and why we need them.org/content/m10781/2.or complex-valued functions of a continuous variable such as time or space  we can dene digital ones as well. We will also discover that digital systems enjoy an algorithmic advantage that contributes to rapid processing speeds: Computations can be restructured in non-obvious ways to speed the processing. the theory underlying digital signal processing mirrors that for analog signals: Fourier transforms.org/content/m10263/2. For example. a consequence of how computers work.1 Computer Architecture To understand digital signal processing systems. The music stored on CDs. Only recently have computers become fast enough to meet real-time requirements while performing non-trivial signal processing. Programmability means that we can perform signal processing operations impossible with analog systems (circuits). and the video carried by digital television all evidence that analog signals can be accurately converted to digital ones and back again. We thus use the notation such as a digital music recording and Digital signals are sequences.1 Introduction to Digital Signal Processing 1 Not only do we have analog signals  signals that are real. How do computers perform signal processing? 5. Sequences are fundamentally dierent than continuous-time signals.29/>. digital ones that depend on a computer to perform system computations may or may not work in real time. and has been widely accepted in the marketplace. we need real-time signal processing systems. Taking a systems viewpoint for the moment." We will discover that digital signal processing is not an approximation to analog processing.Chapter 5 Digital Signal Processing 5. we could use stick gure counting or Roman numerals. or symbolic (obeying 4 Each computer instruction that performs an elementary numeric calculation  any law you like).2 Representing Numbers Focusing on numbers. this representation relies on integer-valued computations. This problem is At its heart. The An example of a symbolic computation is sorting a list of names. D/A converters). a memory. all numbers can represented by the positional notation system. A/D (analog-to-digital) converter. Computers are in which computational steps occur periodically according to ticks of a clock. That is incredibly fast! A "step" does not. The sum or product of two integers is also an integer. however. • Computers perform integer (discrete-valued) computations. Alternative number representation systems exist. necessarily mean a computation like an addition. positional representation system uses the position of digits ranging from 0 to b-1 5 The b-ary to denote a number. Organization of a Simple Computer CPU Memory I/O Interface Keyboard Figure 5. clocked devices.2. etc. and output devices (monitors. The generic computer contains input devices (keyboard. DIGITAL SIGNAL PROCESSING the results to humans or other computers in a variety of (hopefully useful) ways. which means that the clock speed need not express the computational speed. computational unit. but the quotient of two integers is likely to not be an integer. Computational speed is expressed in units of millions of instructions/second (Mips). computers break such computations down into several stages. What I/O devices might be present on a given computer vary greatly. Your 1 GHz computer (clock speed) may have a computational speed of 200 Mips.170 CHAPTER 5. printers. 
Computer calculations can be numeric (obeying the laws of arithmetic). For example. The computational unit is the computer's heart. logical (obeying the laws of an algebra).1: CRT Disks Network Generic computer hardware organization. This description be- lies clock speed: When you say "I have a 1 GHz computer. a multiplication." you mean that your computer takes 1 nanosecond to perform each step. unfortunately. and usually consists of a central processing unit (CPU). and an a input/output (I/O) interface.). mouse. 5. These were useful in ancient times. an addition. or a division  does so only for integers. • A simple computer operates fundamentally in discrete time. How does a computer deal with numbers that have digits to the right of the decimal point? addressed by using the so-called oating-point representation of real numbers. but very limiting when it comes to arithmetic calculations: ever tried to divide two Roman numerals? 4 5 . Complex numbers (Section 2. as n= ∞ X Mathematically. we tack on a special bitthe computer's memory consists of an ordered sequence of represent an unsigned number ranging from 0 to 255." thereby representing "1" or "0. so that the digits representing this number are d0 = 5. 221. This same 4 3 2 1 0 number in binary (base 2) equals 11001 (1 × 2 + 1 × 2 + 0 × 2 + 0 × 2 + 1 × 2 ) and 19 in hexadecimal and we succinctly express (base 16). each digit of which is known as a bit (binary digit). dk ∈ {0. 32. −128 to 127. positional systems represent the dk bk . 6 You need one more bit to do that. d1 = 2.) For both 32-bit and 64-bit integer representations. Digital computers use the base 2 or binary number representation.2: The various ways numbers are represented in binary are illustrated.1) can be thought of as two real numbers that obey special rules to manipulate them. To represent signed values. each bit is represented as a voltage that is either "high" or "low. usually expressed in terms of the number of bits. A byte can therefore respectively. b − 1} (5. The number 25 in base 10 equals 2×101 +5×100 . an unsigned 32-bit number can represent integers ranging between 0 and enough to enumerate every human in the world! 6 Exercise 5. what are the largest numbers that can be represented if a sign bit must also be included. a collection of eight bits. commonly assumed to be due to us having ten ngers.1) k=0 n in base-b as nb = dN dN −1 . . Humans use base 10. rather having only a nite number of bytes is the problem.1 232 − 1 (4. dk ∈ {0. But a computer cannot represent The fault is not with the binary number system.2) k=−∞ All numbers can be represented by their sign. Since we want to store many numbers in a computer's memory. and 64. are 16.295). Number representations on computers d7 d6 d5 d4 d3 d2 d1 d0 unsigned 8-bit integer s d6 d5 d4 d3 d2 d1 d0 signed 8-bit integer s s exponent mantissa floating point Figure 5. Fractions between zero and one are represented the same way. . it takes an innite number of bits to represent that have a π. a number almost big (Solution on p." sign bitto express the sign. . . b − 1} (5. we are restricted to those nite binary representation. While a gigabyte of memory may seem to be a lot.171 quantity b is known as the positive integer n base of the number system. Thus. .2. Common lengths. d0 . integer and fractional parts. .294. Here. . If we take one of the bits and make it the sign bit. we can make the same byte to represent numbers ranging from all possible real numbers. . . .967. 
Large integers can be represented by an ordered sequence of bytes. and all other dk equal zero. The bytes. The number of bytes for the exponent and mantissa components of oating point numbers varies. −1 X f= dk bk . . .the mantissa m . but there are always some that cannot. The choice of base denes which do and which don't. the latter situation occurring when the integer exceeds the range of numbers that a limited set of bytes can represent. the binary representation system is used. providing an extra bit to represent the mantissa a little more accurately. x = m2e (5.6 does not have an exact binary representation. they can be represented exactly in a computer using the binary positional notation.2. note that 1/3 = 0.2 (Solution on p. the number 2.) What are the largest and smallest numbers that can be represented in 32-bit oating point? in 64-bit oating point that has sixteen bits allocated to the exponent? Note that both exponent and mantissa require a sign bit. the fractional 2. Electronic circuits that make up the physical computer can add and subtract integers without error. 0. but with a little more complexity.. the number part of which has an exact binary representation. If you were thinking that base 10 numbers would solve this inaccuracy.. 8 See if you can nd this representation.600000079. 9 Realizing that real numbers can be only represented approximately is quite important.625 × 22 . A computer's representation of integers is either perfect or only approximate.. which means 7 The number zero is an exception to this rule. representations have similar representation problems: if the number powers of two to yield a fraction lying between 1/2 and 1 that has a x Floating point can be multiplied/divided by enough nite binary-fraction representation.172 CHAPTER 5. how about numbers having nonzero digits to the right of the decimal point? In other words. 1 . Note that this approximation has a much longer decimal expansion. when does addition cause problems?) 7 In some computers.6 will be represented as 2. how are numbers that have fractional parts represented? For such numbers.. and underlies the entire eld of numerical analysis. For example. but has nite representation in base 3. The sign of the mantissa represents the sign of the number and the exponent can be a signed integer.5 equals 8 However. This increasing accuracy means that more numbers can be represented exactly. In single precision oating point numbers. not catastrophically in error as with integers. this normalization is taken to an extreme: the leading binary digit is not explicitly expressed. Otherwise. (This statement isn't quite true. This level of accuracy may not suce in numerical calculations. but with one byte (sometimes two bytes) reserved to represent the exponent e of a power-of-two multiplier for the number . and only be represented approximately in oating point. 221. and it only oating point number having a zero fraction. So long as the integers aren't too large.333333. has an innite representation in decimal (and binary for that matter).3) The mantissa is usually taken to be a binary fraction having a magnitude in the range that the binary representation is such that is the d−1 = 1.typically 4 or 8 .. 9 Note that there will always be numbers that have an innite representation in any chosen positional system. the number is represented exactly in oating point. This convention is known as the hidden-ones notation. point numbers consume 8 bytes. Such inexact numbers have an innite binary representation. 
DIGITAL SIGNAL PROCESSING While this system represents integers well. which require 32 bits (one byte for the exponent and the remaining 24 bits for the mantissa). we can only represent the number approximately. the number 2. and quadruple precision 16 bytes. Exercise 5. which seeks to predict the numerical accuracy of any computation.expressed by the remaining bytes.. Double precision oating The more bits used in the mantissa.to represent the number. 1  2. The oating-point system uses a number of bytes . the greater the accuracy. Note the carries that might occur. Exercise 5. represents a must be true for the statement to be true. any computer using base-2 representations and arithmetic can also easily evaluate logical statements.4) 1×0=0 10 subtraction of two single-digit binary numbers yields the same bit as addition. It laid the foundation for what we now call Boolean algebra. and an array of such voltages express numbers akin to positional notation. . which expresses as equations logical statements. 12 We assume that we do not use oating-point A/D converters.2. 221.3 (Solution on p.20/>. A and B.) Add twenty-ve and seven in base 2. Signals processed by digital computers must discrete-valued: their values must be proportional to the integers. Logic circuits perform arithmetic operations. This restriction means that both the time axis and the amplitude axis must be quantized: a multiple of the integers. The variables of logic indicate truth or falsehood. In contrast. statement that both A and B A ∩ B. Computers use high and low voltage values to express a bit. The Irish mathematician falsehood by a "0. a signal's value can no longer be any real number. analog-to-digital 10 A carry means that a computation performed at a given position aects other positions as well. yields a value of truth if either is true. no one has found a way of performing the amplitude quantization step without introducing an unrecoverable error. Note that if we represent truth by a "1" and binary multiplication corresponds to AND and addition (ignoring carries) to XOR. You use this kind of statement to A and B occur. XOR. 11 This content is available online at <http://cnx. the AND of tell search engines that you want to restrict hits to cases where both of the events the OR of A and B. More importantly. be Consequently.1 Analog-to-Digital Conversion Because of the way computers are organized.3 Computer Arithmetic and Logic The binary addition and multiplication tables are Note that if carries are ignored. Why is the result "nice"? Also note that the logical operations of AND and OR are equivalent to binary addition (again if carries are ignored).173 5. equals the union of A ∪ B and A ∩ B . A ∪ B. and as described in the next section. signal must be represented by a nite number of bytes.   0+0=0                      0+1=1    1 + 1 = 10    1+0=1       0×0=0   0×1=0    1×1=1   (5. Here.3.2.org/content/m0050/2. This fact makes an integer-based computational device much more powerful than might be apparent. the exclusive or operator.3 The Sampling Theorem 11 5. The signals that can be sampled without introducing error are interesting. 5. 1 + 1 = 10 is an example of a computation that involves a carry. They must each be 12 Quite surprisingly. we can make a signal "samplable" by ltering. the Sampling Theorem allows us to quantize the time axis without error for some signals." George Boole discovered this equivalence in the mid-nineteenth century. conversion introduces error. 
5.2.3 Computer Arithmetic and Logic

The binary addition and multiplication tables are

0 + 0 = 0    0 + 1 = 1    1 + 1 = 10    1 + 0 = 1
0 × 0 = 0    0 × 1 = 0    1 × 1 = 1    1 × 0 = 0    (5.4)

Note that if carries are ignored, subtraction of two single-digit binary numbers yields the same bit as addition. (A carry means that a computation performed at a given position affects other positions as well; 1 + 1 = 10 is an example of a computation that involves a carry.) Computers use high and low voltage values to express a bit, and an array of such voltages express numbers akin to positional notation. Logic circuits perform arithmetic operations.

Exercise 5.2.3 (Solution on p. 221.)
Add twenty-five and seven in base 2. Note the carries that might occur. Why is the result "nice"?

The variables of logic indicate truth or falsehood. A ∩ B, the AND of A and B, represents a statement that both A and B must be true for the statement to be true. You use this kind of statement to tell search engines that you want to restrict hits to cases where both of the events A and B occur. A ∪ B, the OR of A and B, yields a value of truth if either is true. XOR, the exclusive-or operator, is true only when exactly one of A and B is true. Note that if we represent truth by a "1" and falsehood by a "0," binary multiplication corresponds to AND and addition (ignoring carries) to XOR. The mathematician George Boole discovered this equivalence in the mid-nineteenth century. It laid the foundation for what we now call Boolean algebra, which expresses as equations logical statements. More importantly, any computer using base-2 representations and arithmetic can also easily evaluate logical statements. This fact makes an integer-based computational device much more powerful than might be apparent.
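The AND/XOR correspondence takes only a few lines to verify; the Python sketch below is our own illustration:

# For single bits: multiplication is AND; addition with carries ignored is XOR.
for a in (0, 1):
    for b in (0, 1):
        assert a * b == (a & b)            # multiplication <-> AND
        assert (a + b) % 2 == (a ^ b)      # mod-2 addition  <-> XOR

print(bin(25), "+", bin(7), "=", bin(25 + 7))   # Exercise 5.2.3: 0b11001 + 0b111 = 0b100000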
n ∈ {. As we narrow the pulse.23/>. What property characterizes the ones going the same direction? If we satisfy the Sampling Theorem's conditions. making be s (nTs ). convince yourself that less than two samples/period will not suce to specify it. 221. 14 This content is available online at <http://cnx.4 Amplitude Quantization 14 The Sampling Theorem says that if we sample a bandlimited signal without error from its samples s (nTs ). what signal would your resulting undersampled signal become? 5. In these ways. many sound cards do post-sampling lters. −1. 1. For 1 Ts . They sample at high frequencies. the signal will change only slightly during each pulse. High-quality sampling systems ensure that no aliasing occurs by unceremoniously lowpass ltering the signal (cuto frequency being slightly lower than the Nyquist frequency) before sampling.) The Sampling Theorem (as stated) does not mention the pulse width ∆. known today as the corresponds to the highest frequency at which a signal can contain energy and remain compatible with The frequency the Sampling Theorem. . s (t) pTs (t) will simply If indeed the Nyquist frequency equals the signal's highest frequency. 221. 0.2 (Solution on p. for that matter. If the sampling rate 1 Ts is not high enough. . at least two samples will occur within the period of the signal's highest frequency sinusoid. If.3.) To gain a better appreciation of aliasing.2: Digital Signals) form.2. Let the sampling interval Ts be 1. . aliasing will not occur. . In this delightful case.5 and 4.05 kHz in our example).3. . s (t) fast enough.) What is the simplest bandlimited signal? Using this signal. • the sampling interval Ts DIGITAL SIGNAL PROCESSING is small enough so that the individual components in the sum do not overlap Ts < 1/2W . . spectral lines go as the period decreases. What is the eect of this parameter on our ability to recover a signal from its samples (assuming the Sampling Theorem's two conditions are met)? Nyquist frequency Shannon sampling frequency 1 and the . Such systems therefore vary the anti-aliasing lter's cuto frequency as the sampling rate varies. the nonzero values of the signal samples. the signal contains frequencies beyond the sound card's Nyquist frequency. Because not have anti-aliasing lters or. such quality features cost money. the signal's ∆ smaller and smaller. 0. it can be recovered Sampling is only the rst phase of acquiring data into a computer: Computational processing further requires that the samples be values are converted into digital (Section 1. quantized: analog In short.1 kHz for example. Exercise 5.org/content/m0051/2.1 Sampling Theorem. we will have performed . some will move to the left and some to the right. consider two values for the square wave's period: 3. }. however. the resulting aliasing can be impossible to remove. and hope the signal contains no frequencies above the Nyquist frequency (22. sketch the spectrum of a sampled square wave. Note in particular where the simplicity consider only the spectral repetitions centered at − T1s . Exercise 5. we can recover the original signal by lowpass ltering with a lter having a cuto frequency equal to W x (t) Hz. the sampling signal captures the sampled signal's temporal variations in a way that leaves all the original signal's structure intact.3. (Solution on p. analog-to-digital (A/D) conversion. 2Ts . 2 where B is the number of bits used in the A/D conversion process (3 in the case depicted here).625 in this scheme. the quantization interval 2 . 
upon conversion back to an analog value. Assuming we can scale the signal without [−1.75 are assigned the integer value six and.) Recalling the plot of average daily highs in this frequency domain problem (Problem 4. the original amplitude value cannot be recovered without error.625.177 Q[s(nTs)] 7 ∆ 6 5 4 3 2 1 0 –1 –0. This distortion is irreversible. 1].25. they all become 0. First it is sampled. 1. Thus. why is this plot so jagged? Interpret this eect in terms of analog-to-digital conversion.5 1 s(nTs) (a) signal 1 1 sampled signal 7 0. We dene a quantization interval to be the range of values assigned to the same integer.5 and 0. the so-called . it is (Solution on p.625. we'll dene this range to be A/D converter assigns amplitude values in this range to a set of integers.25 0 4 0 3 -0. 221. 1] to one of eight integers between 0 and 7. all inputs having values lying between 0. Note how the sampled signal waveform becomes distorted after amplitude quantization. For example.5 and 0. . 2B − 1 for each sampled input. The width of a single 2 quantization interval ∆ equals B . . assigns an amplitude equal to the value lying halfway in the quantization interval. for our example three-bit A/D converter. Furthermore.4. in general. . .75 6 0. then amplitude-quantized to three bits. Because values lying anywhere within a quantization interval are assigned the same value for computer processing.75 become 0.75 –1 amplitude-quantized and sampled signal 0 -1 (b) Figure 5.5 shows how a three-bit A/D converter assigns input values to the integers.25 2 -0. 2B Exercise 5.5 0. In analog-to-digital conversion. Figure 5. A phenomenon reminiscent of the errors incurred in representing numbers on a computer prevents signal amplitudes from being converted with no error into a binary number representation. the device that converts integers to amplitudes.5). For example the two signal values between 0.5: A three-bit A/D converter assigns voltage in the range [−1.1 ∆ is 0. A of the integers  0. the D/A converter. the B -bit converter produces one aecting the information it expresses. The error introduced by converting a signal from analog to digital form by sampling and amplitude quantization then back again would be half the quantization interval for each amplitude value.5 5 0.5 1 -0. Thus. it can be reduced (but not eliminated) by using more bits in the A/D converter. The integer 6 would be assigned to the amplitude 0. the signal is assumed to lie within a predened range. Typically. The bottom panel shows a signal going through the analog-to-digital. 5dB 2 (5. sinusoid.178 CHAPTER 5. To calculate the rms value. It can be shown that if the computer processing is linear.5 Exercise 5. 221. 222. (Solution on p.76. along with a typical signal's value before amplitude s (nTs ) and after Q (s (nTs )).) [−1. we must square the error and average it over the interval. constant term 10log1. we can process them using digital hardware or software. DIGITAL SIGNAL PROCESSING A/D error equals half the width of a quantization interval: 1 . The illustration (Figure 5. . 1]. every bit increase in the A/D converter yields a 6 dB increase in the signal-to-noise ratio. computer processing. we nd that the signal-to- noise ratio for the analog-to-digital conversion process equals SNR = 1 2 2−(2(B−1)) 12 = 3 2B 2 = 6B + 10log1. 2B the more bits available in the A/D converter. the signal power is the square of the rms amplitude: power (s) =  signal-to-noise ratio.4. To what signal-to-noise ratio does this correspond? 
Once we have acquired signals with an A/D converter. the smaller the quantization error. we need to compute the which equals the ratio of the signal power and the quantization error power.6: quantization Its width is ∆ A single quantization interval is shown.10)  21 2 ∆ 12 converter equals 2 2B = 2−(B−1) .3 What would the amplitude (Solution on p. As we have xed the input-amplitude range. and unsampling is equivalent to some analog linear system. 222. A]? This derivation assumed the signal's amplitude lay in the range quantization signal-to-noise ratio be if it lay in the range Exercise 5. Assuming the signal is a √1 2 2 = 1 2 . ∆ } ε s(nTs) Q[s(nTs)] Figure 5. the result of sampling.6) details a single quantization interval.  denotes the error thus incurred.) Music on a CD is stored to 16-bit accuracy.4. . r rms () = = Since the quantization interval width for a B -bit  1 ∆ R ∆ 2 −∆ 2 2 d (5. we note that no matter into which quantization interval the signal's value falls. and the quantization error is denoted by To nd the power in the quantization error.) How many bits would be required in the A/D converter to ensure that the maximum amplitude quantization error was less than 60 db smaller than the signal's peak value? Exercise 5.11) Thus.4 (Solution on p. Why go to all the bother if the same function can be accomplished using analog techniques? Knowing when digital processing excels and when it does not is an important issue. the error will have the same characteristics.4. To analyze the amplitude quantization error more deeply. [−A.2 The equals 1. For symbolic-valued signals.org/content/m10342/2. they are sequences. the most important issue becomes.7: The discrete-time cosine signal is plotted as a stem plot. the approach is dierent: We develop a common representation of all symbolic-valued signals so that we can embody the information they contain in a unied way.179 5.2 Complex Exponentials The most important signal is. such as space and time. Because this approach leading to a better understanding of signal structure.13) = ej2πf n This derivation follows because the complex exponential evaluated at an integer multiple of Thus. −1. Discrete-time signals are functions dened on the integers.1 Real. . we need only consider frequency to have a value in some unit-length interval. 5. From an information representation perspective.12) is dimensionless and that adding an integer to the frequency of the discrete-time complex exponential has no eect on the signal's value. we can exploit that structure to represent information (create ways of representing information with signals) and to extract information (retrieve the information thus represented).15/>. where n = {.and Complex-valued Signals A discrete-time signal is represented symbolically as s (n). . . analog signals are functions having as their independent variables continuous quantities. . the complex exponential sequence. for both real-valued and symbolic-valued signals.5. 0. }. As with analog signals.5 Discrete-Time Signals and Systems 15 Mathematically. . Cosine sn 1 … n … Figure 5. . 5. ej2π(f +m)n = ej2πf n ej2πmn (5. A signal delayed by m samples has the expression s (n − m). 15 This content is available online at <http://cnx. s (n) = ej2πf n Note that the frequency variable f (5. 1. of course. eciency: what is the most parsimonious and compact way to represent information so that it can be extracted later. 2π equals one. we seek ways of decomposing discrete-time signals into simpler components. 
5.5 Discrete-Time Signals and Systems
(This content is available online at <http://cnx.org/content/m10342/2.15/>.)

Mathematically, analog signals are functions having as their independent variables continuous quantities, such as space and time. Discrete-time signals are functions defined on the integers; they are sequences. As with analog signals, we seek ways of decomposing discrete-time signals into simpler components. Because this approach leads to a better understanding of signal structure, we can exploit that structure to represent information (create ways of representing information with signals) and to extract information (retrieve the information thus represented). For symbolic-valued signals, the approach is different: we develop a common representation of all symbolic-valued signals so that we can embody the information they contain in a unified way. From an information representation perspective, the most important issue becomes, for both real-valued and symbolic-valued signals, efficiency: what is the most parsimonious and compact way to represent information so that it can be extracted later?

5.5.1 Real- and Complex-valued Signals

A discrete-time signal is represented symbolically as s(n), where n = {…, −1, 0, 1, …}. We usually draw discrete-time signals as stem plots to emphasize the fact they are functions defined only on the integers. We can delay a discrete-time signal by an integer just as with analog ones: a signal delayed by m samples has the expression s(n − m).

[Figure 5.7: The discrete-time cosine signal is plotted as a stem plot. Can you find the formula for this signal?]

5.5.2 Complex Exponentials

The most important signal is, of course, the complex exponential sequence:

s(n) = e^(j2πfn)    (5.12)

Note that the frequency variable f is dimensionless and that adding an integer to the frequency of the discrete-time complex exponential has no effect on the signal's value:

e^(j2π(f+m)n) = e^(j2πfn) e^(j2πmn) = e^(j2πfn)    (5.13)

This derivation follows because the complex exponential evaluated at an integer multiple of 2π equals one.

5.5.3 Sinusoids

Discrete-time sinusoids have the obvious form s(n) = A cos(2πfn + φ). As opposed to analog complex exponentials and sinusoids that can have their frequencies be any real value, frequencies of their discrete-time counterparts yield unique waveforms only when f lies in the interval (−1/2, 1/2]. This choice of frequency interval is arbitrary; we can also choose the frequency to lie in the interval [0, 1). How to choose a unit-length interval for a sinusoid's frequency will become evident later.

5.5.4 Unit Sample

The second-most important discrete-time signal is the unit sample, which is defined to be

δ(n) = 1 if n = 0, and 0 otherwise    (5.14)

[Figure 5.8: The unit sample.]

Examination of a discrete-time signal's plot, like that of the cosine signal shown in Figure 5.7, reveals that all signals consist of a sequence of delayed and scaled unit samples. Because the value of a sequence at each integer m is denoted by s(m) and the unit sample delayed to occur at m is written δ(n − m), we can decompose any signal as a sum of unit samples delayed to the appropriate location and scaled by the signal value:

s(n) = Σ_{m=−∞}^{∞} s(m) δ(n − m)    (5.15)

This kind of decomposition is unique to discrete-time signals, and will prove useful subsequently.

5.5.5 Unit Step

The unit step in discrete-time is well-defined at the origin, as opposed to the situation with analog signals:

u(n) = 1 if n ≥ 0, and 0 if n < 0    (5.16)

5.5.6 Symbolic Signals

An interesting aspect of discrete-time signals is that their values do not need to be real numbers. We do have real-valued discrete-time signals like the sinusoid, but we also have signals that denote the sequence of characters typed on the keyboard. Such characters certainly aren't real numbers, and as a collection of possible signal values, they have little mathematical structure other than that they are members of a set. More formally, each element of the symbolic-valued signal s(n) takes on one of the values {a1, …, aK} which comprise the alphabet A. This technical terminology does not mean we restrict symbols to being members of the English or Greek alphabet. They could represent keyboard characters, bytes (8-bit quantities), or integers that convey daily temperature.

5.5.7 Discrete-Time Systems

Discrete-time systems can act on discrete-time signals in ways similar to those found in analog signals and systems. Because of the role of software in discrete-time systems, many more different systems can be envisioned and "constructed" with programs than can be with analog signals. In fact, a special class of analog signals can be converted into discrete-time signals, processed with software, and converted back into an analog signal, all without the incursion of error. For such signals, systems can be easily produced in software, with equivalent analog realizations difficult, if not impossible, to design. Whether controlled by software or not, discrete-time systems are ultimately constructed from digital circuits, which consist entirely of analog circuit elements. Furthermore, the transmission and reception of discrete-time signals, like e-mail, is accomplished with analog signals and systems. Understanding how discrete-time and analog signals and systems intertwine is perhaps the main goal of this course.

5.6 Discrete-Time Fourier Transform (DTFT)
(This content is available online at <http://cnx.org/content/m10247/2.31/>.)

The Fourier transform of the discrete-time signal s(n) is defined to be

S(e^(j2πf)) = Σ_{n=−∞}^{∞} s(n) e^(−j2πfn)    (5.17)

Frequency here has no units. As should be expected, this definition is linear, with the transform of a sum of signals equaling the sum of their transforms. Real-valued signals have conjugate-symmetric spectra: S(e^(−j2πf)) = S(e^(j2πf))*.

Exercise 5.6.1 (Solution on p. 222.)
A special property of the discrete-time Fourier transform is that it is periodic with period one: S(e^(j2π(f+1))) = S(e^(j2πf)). Derive this property from the definition of the DTFT.

Because of this periodicity, we need only plot the spectrum over one period to understand completely the spectrum's structure; typically, we plot the spectrum over the frequency range [−1/2, 1/2]. When the signal is real-valued, we can further simplify our plotting chores by showing the spectrum only over [0, 1/2]: the spectrum at negative frequencies can be derived from positive-frequency spectral values.

When we obtain the discrete-time signal via sampling an analog signal, the Nyquist frequency (p. 176) corresponds to the discrete-time frequency 1/2. To show this, note that a sinusoid having a frequency equal to the Nyquist frequency 1/(2Ts) has a sampled waveform that equals
cos(2π × (1/(2Ts)) × nTs) = cos(πn) = (−1)^n
The exponential in the DTFT at frequency 1/2 equals e^(−j2πn/2) = e^(−jπn) = (−1)^n, meaning that discrete-time frequency equals analog frequency multiplied by the sampling interval
we have a highpass spectrum (Figure 5.9 (Spectrum of exponential signal) shows indeed 1 2 0.23) we choose. As the duration of each pulse in the periodic sampling signal pTs (t) narrows.1 Thus.  |S ej2πf | = q 1 (5. Thus. and fA DIGITAL SIGNAL PROCESSING represent discrete-time and analog frequency variables. Practical systems use a small value ∆ signal (Figure 4. Ts 2Ts 2 Let's compute the discrete-time Fourier transform of the exponentially decaying sequence an u (n). to maintain a mathematically viable Sampling Theorem. decreases to A∆ . Examination of the periodic pulse decreases. the amplitude A must Ts 1 increase as . clearly demonstrating the periodicity of all discrete-time spectra. 2]. The angle has units of degrees.5) is shown over the frequency range [-2.9: The spectrum of the exponential signal (a = 0. .183 Spectrum of exponential signal |S(ej2πf)| 2 1 f -2 -1 0 1 2 ∠S(ej2πf) 45 -2 -1 1 2 f -45 Figure 5. 5 0 f 0. The "trick" is to consider the dierence between the series' sum and the sum of the series multiplied by α.) Derive this formula for the nite geometric series sum.5 and a = −0.27) .2 Analogous to the analog pulse signal. −1  NX e−(j2πf n) S ej2πf = (5. DIGITAL SIGNAL PROCESSING Angle (degrees) Spectral Magnitude (dB) Spectra of exponential signals Figure 5.5 f a = –0.11 (Spectrum of length-ten pulse).5 0.10: 20 a = 0.24) otherwise The Fourier transform of this sequence has the form of a truncated geometric series.5 a = 0.) S ej2πf  = = 1−e−(j2πf N ) 1−e−(j2πf ) N) e−(jπf (N −1)) sin(πf sin(πf ) (5.9 10 0 a = 0.5? Example 5.184 CHAPTER 5.26) all values of α.2 for (Solution on p. What is the apparent relationship between the spectra for a = 0.25) n=0 For the so-called nite geometric series.   1 s (n) =  0 if 0≤n≤N −1 (5. Applying this result yields (Figure 5.5 a = 0. 222.6. let's nd the spectrum of the length-N pulse sequence. Exercise 5.5 -10 90 45 a = –0.9 -90 -45 The spectra of several exponential signals are shown. we know that N +n 0 −1 X n=n0 α n = α n0 1 − αN 1−α (5. Thus. the pulse's duration. our transform can be concisely expressed as S ej2πf = e−(jπf (N −1)) dsinc (πf ). which is known as the  dsinc (x).29) 2 = s (n) The Fourier transform pairs in discrete-time are  P∞ S ej2πf = n=−∞ s (n) e−(j2πf n)  R1 s (n) = −2 1 S ej2πf ej2πf n df 2 (5.11: The spectrum of a length-ten pulse is shown.30) .185 discrete-time sinc sin(N x) sin(x) . we nd that R 1 2 − 21  S ej2πf ej2πf n df 1 2 = R = P s (m) e−(j2πf m) ej2πf n df R 21 (−(j2πf ))(m−n) df mm s (m) − 1 e − 12 P mm (5. Can you explain the rather complicated appearance of the phase? The inverse discrete-time Fourier transform is easily derived from the following relationship: R 1 2 − 12   1 =  0 e−(j2πf m) ej2πf n df if m=n if m 6= n (5. The ratio of sine functions has the generic form of function The discrete-time pulse's spectrum contains many ripples. the number of which increase with N.28) = δ (m − n) Therefore. Spectrum of length-ten pulse Figure 5. 6. Exercise 5. 222. DIGITAL SIGNAL PROCESSING The properties of the discrete-time Fourier transform mirror those of the analog Fourier transform. for which there is no formula. . How then would you compute the spectrum? For example. where δ (n) is the unit sample (Fig- ure 5.org/content/m0506/latest/> This content is available online at <http://cnx. for analog signals no similar exact spectrum computation exists. One important common property is Parseval's Theorem. the double sum collapses into a single sum because nonzero values occur only P n = m. 
Figure 5.11 (Spectrum of length-ten pulse): [Magnitude and phase of the length-ten pulse's spectrum.] The spectrum of a length-ten pulse is shown. Can you explain the rather complicated appearance of the phase?

The inverse discrete-time Fourier transform is easily derived from the following relationship:

∫_{-1/2}^{1/2} e^{-j2πfm} e^{j2πfn} df = 1 if m = n, 0 if m ≠ n; that is, it equals δ(m - n)    (5.28)

where δ(n) is the unit sample (Figure 5.8: Unit sample). Therefore, we find that

∫_{-1/2}^{1/2} S(e^{j2πf}) e^{j2πfn} df = ∫_{-1/2}^{1/2} Σ_m s(m) e^{-j2πfm} e^{j2πfn} df = Σ_m s(m) ∫_{-1/2}^{1/2} e^{-j2πf(m-n)} df = s(n)    (5.29)

The Fourier transform pairs in discrete-time are

S(e^{j2πf}) = Σ_{n=-∞}^{∞} s(n) e^{-j2πfn}
s(n) = ∫_{-1/2}^{1/2} S(e^{j2πf}) e^{j2πfn} df    (5.30)

The properties of the discrete-time Fourier transform mirror those of the analog Fourier transform. The DTFT properties table ("Properties of the DTFT" <http://cnx.org/content/m0506/latest/>) shows similarities and differences. One important common property is Parseval's Theorem.

Σ_{n=-∞}^{∞} |s(n)|^2 = ∫_{-1/2}^{1/2} |S(e^{j2πf})|^2 df    (5.31)

To show this important property, we simply substitute the Fourier transform expression into the frequency-domain expression for power.

∫_{-1/2}^{1/2} |S(e^{j2πf})|^2 df = ∫_{-1/2}^{1/2} (Σ_n s(n) e^{-j2πfn}) (Σ_m s(m) e^{-j2πfm})* df = Σ_{n,m} s(n) s(m)* ∫_{-1/2}^{1/2} e^{j2πf(m-n)} df    (5.32)

Using the orthogonality relation (5.28), the integral equals δ(m - n). Thus, the double sum collapses into a single sum because nonzero values occur only when n = m, giving Parseval's Theorem as a result. We term Σ_n s^2(n) the energy in the discrete-time signal s(n), in spite of the fact that discrete-time signals don't consume (or produce, for that matter) energy. This terminology is a carry-over from the analog world.

Exercise 5.6.3 (Solution on p. 222.)
Suppose we obtained our discrete-time signal from values of the product s(t) p_{Ts}(t), where the duration of the component pulses in p_{Ts}(t) is Δ. How is the discrete-time signal energy related to the total energy contained in s(t)? Assume the signal is bandlimited and that the sampling rate was chosen appropriate to the Sampling Theorem's conditions.
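Parseval's Theorem is also easy to verify numerically. The sketch below (an addition, not from the original text) compares the energy of Example 5.1's exponential sequence, computed in the time domain, with a numerical integration of |S(e^{j2πf})|^2 over one period; both should equal 1/(1 - a^2).

a = 0.5;
n = 0:100;                                % 0.5^100 is negligible, so this is
s = a.^n;                                 % effectively the entire signal
energy_time = sum(abs(s).^2);             % left side of (5.31)
f = linspace(-0.5, 0.5, 100001);
S = 1 ./ (1 - a*exp(-1j*2*pi*f));         % closed-form spectrum (5.21)
energy_freq = trapz(f, abs(S).^2);        % right side of (5.31)
[energy_time, energy_freq, 1/(1 - a^2)]   % all three agree closely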
5.7 Discrete Fourier Transforms (DFT)

The discrete-time Fourier transform (and the continuous-time transform as well) can be evaluated when we have an analytic expression for the signal. Suppose we just have a signal, such as the speech signal used in the previous chapter, for which there is no formula. How then would you compute the spectrum? For example, how did we compute a spectrogram such as the one shown in the speech signal example (Figure 4.17: spectrogram)? The Discrete Fourier Transform (DFT) allows the computation of spectra from discrete-time data. While in discrete-time we can exactly calculate spectra, for analog signals no similar exact spectrum computation exists. For analog-signal spectra, one must build special devices, which turn out in most cases to consist of A/D converters and discrete-time computations. Certainly discrete-time spectral analysis is more flexible than continuous-time spectral analysis.

The formula for the DTFT (5.17) is a sum, which conceptually can be easily computed save for two issues.

• Signal duration. The sum extends over the signal's duration, which must be finite to compute the signal's spectrum. It is exceedingly difficult to store an infinite-length signal in any case, so we'll assume that the signal extends over [0, N-1].
• Continuous frequency. Subtler than the signal-duration issue is the fact that the frequency variable is continuous: it may only need to span one period, like [-1/2, 1/2] or [0, 1], but the DTFT formula as it stands requires evaluating the spectrum at all frequencies within a period. Let's compute the spectrum at a few frequencies; the most obvious ones are the equally spaced ones f = k/K, k ∈ {0, ..., K-1}.

We thus define the discrete Fourier transform (DFT) to be

S(k) = Σ_{n=0}^{N-1} s(n) e^{-j2πnk/K},  k ∈ {0, ..., K-1}    (5.33)

Here, S(k) is shorthand for S(e^{j2πk/K}).

We can compute the spectrum at as many equally spaced frequencies as we like. Note that you can think about this computationally motivated choice as sampling the spectrum; more about this interpretation later. The issue now is how many frequencies are enough to capture how the spectrum changes with frequency. One way of answering this question is determining an inverse discrete Fourier transform formula: given S(k), k = {0, ..., K-1}, how do we find s(n), n = {0, ..., N-1}? Presumably, the formula will be of the form s(n) = Σ_{k=0}^{K-1} S(k) e^{j2πnk/K}. Substituting the DFT formula in this prototype inverse transform yields

s(n) = Σ_{k=0}^{K-1} Σ_{m=0}^{N-1} s(m) e^{-j2πmk/K} e^{j2πnk/K}    (5.34)

Note that the orthogonality relation we use so often has a different character now.

Σ_{k=0}^{K-1} e^{-j2πkm/K} e^{j2πkn/K} = K if m = {n, n ± K, n ± 2K, ...}, 0 otherwise    (5.35)

We obtain nonzero values whenever the two indices differ by multiples of K. We can express this result as K Σ_l δ(m - n - lK). Thus, our formula becomes

s(n) = Σ_{m=0}^{N-1} s(m) K Σ_{l=-∞}^{∞} δ(m - n - lK)    (5.36)

The integers n and m both range over {0, ..., N-1}. To have an inverse transform, we need the sum to be a single unit sample for m, n in this range. If it were not, then s(n) would equal a sum of values, and we would not have a valid transform: once going into the frequency domain, we could not get back unambiguously! Clearly, the term l = 0 always provides a unit sample (we'll take care of the factor of K soon). If we evaluate the spectrum at fewer frequencies than the signal's duration, the term corresponding to m = n + K will also appear for some values of m, n = {0, ..., N-1}. This situation means that our prototype transform equals s(n) + s(n + K) for some values of n. The only way to eliminate this problem is to require K ≥ N: we must have at least as many frequency samples as the signal's duration. In this way, we can return from the frequency domain we entered via the DFT.

Exercise 5.7.1 (Solution on p. 222.)
When we have fewer frequency samples than the signal's duration, some discrete-time signal values equal the sum of the original signal values. Given the sampling interpretation of the spectrum, characterize this effect a different way.

Another way to understand this requirement is to use the theory of linear equations. If we write out the expression for the DFT as a set of linear equations,

s(0) + s(1) + ... + s(N-1) = S(0)
s(0) + s(1) e^{-j2π/K} + ... + s(N-1) e^{-j2π(N-1)/K} = S(1)
...
s(0) + s(1) e^{-j2π(K-1)/K} + ... + s(N-1) e^{-j2π(N-1)(K-1)/K} = S(K-1)    (5.37)

we have K equations in N unknowns if we want to find the signal from its sampled spectrum. This requirement is impossible to fulfill if K < N; our orthogonality relation essentially says that if we have a sufficient number of equations (frequency samples), the resulting set of equations can indeed be solved.
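The K ≥ N requirement can be seen directly in a small numerical experiment. The following sketch (an addition, not from the original text) implements (5.33) and the prototype inverse with matrices; with K = 6 frequency samples of a length-8 signal, the reconstructed values come back as s(n) + s(n + K) wherever n + K falls inside the signal's duration, exactly as predicted.

N = 8; K = 6;                          % deliberately fewer frequencies than samples
s = (0:N-1)';                          % a simple test signal
n = 0:N-1; k = 0:K-1;
E = exp(-1j*2*pi*n'*k/K);              % N-by-K matrix of exponentials
S = E.' * s;                           % DFT (5.33): S(k) = sum_n s(n) e^{-j2*pi*nk/K}
sr = real(exp(1j*2*pi*n'*k/K) * S)/K;  % prototype inverse transform
[s sr]                                 % sr = [6 8 2 3 4 5 6 8]': aliased in time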
By convention, the number of DFT frequency values K is chosen to equal the signal's duration N. The discrete Fourier transform pair consists of

Discrete Fourier Transform Pair
S(k) = Σ_{n=0}^{N-1} s(n) e^{-j2πnk/N}
s(n) = (1/N) Σ_{k=0}^{N-1} S(k) e^{j2πnk/N}    (5.38)

Example 5.3
Use this demonstration to perform DFT analysis of a signal. [This media object is a LabVIEW VI. Please view or download it at <DFTanalysis.llb>]

Example 5.4
Use this demonstration to synthesize a signal from a DFT sequence. [This media object is a LabVIEW VI. Please view or download it at <DFT_Component_Manipulation.llb>]

5.8 DFT: Computational Complexity

We now have a way of computing the spectrum for an arbitrary signal: the Discrete Fourier Transform (DFT) (5.33) computes the spectrum at N equally spaced frequencies from a length-N sequence. An issue that never arises in analog "computation," like that performed by a circuit, is how much work it takes to perform the signal processing operation such as filtering. In computation, this consideration translates to the number of basic computational steps required to perform the needed processing. The number of steps, known as the complexity, becomes equivalent to how long the computation takes (how long must we wait for an answer). Complexity is not so much tied to specific computers or programming languages but to how many steps are required on any computer. Thus, a procedure's stated complexity says that the time taken will be proportional to some function of the amount of data used in the computation and the amount demanded.

For example, consider the formula for the discrete Fourier transform. For each frequency we choose, we must multiply each signal value by a complex number and add together the results. For a real-valued signal, each real-times-complex multiplication requires two real multiplications, meaning we have 2N multiplications to perform. To add the results together, we must keep the real and imaginary parts separate. Adding N numbers requires N-1 additions. Consequently, each frequency requires 2N + 2(N-1) = 4N - 2 basic computational steps. As we have N frequencies, the total number of computations is N(4N - 2).

In complexity calculations, we only worry about what happens as the data lengths increase, and take the dominant term (here the 4N^2 term) as reflecting how much work is involved in making the computation. As multiplicative constants don't matter, since we are making a "proportional to" evaluation, we find the DFT is an O(N^2) computational procedure. This notation is read "order N-squared." Thus, if we double the length of the data, we would expect the computation time to approximately quadruple.
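This quadrupling is easy to observe. The sketch below is an addition, not from the original text; timings are machine-dependent and tic/toc granularity is coarse, so treat the numbers as rough. It applies the DFT as a matrix-vector product, an O(N^2) operation, at successive doublings of N.

for N = [512 1024 2048]
  s = randn(N, 1);
  W = exp(-1j*2*pi*(0:N-1)'*(0:N-1)/N);  % DFT matrix
  tic, S = W*s; t = toc;                 % O(N^2) matrix-vector product
  fprintf('N = %4d: %.4f s\n', N, t)
end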
Exercise 5.8.1 (Solution on p. 222.)
In making the complexity evaluation for the DFT, we assumed the data to be real. Three questions emerge. First of all, the spectra of such signals have conjugate symmetry, meaning that negative frequency components (k = {N/2 + 1, ..., N - 1} in the DFT (5.33)) can be computed from the corresponding positive frequency components. Does this symmetry change the DFT's complexity? Secondly, suppose the data are complex-valued; what is the DFT's complexity now? Finally, a less important but interesting question: suppose we want K frequency values instead of N; now what is the complexity?

5.9 Fast Fourier Transform (FFT)

One wonders if the DFT can be computed faster: does another computational procedure, an algorithm, exist that can compute the same quantity, but more efficiently? We could seek methods that reduce the constant of proportionality, but do not change the DFT's complexity O(N^2). Here, we have something more dramatic in mind: can the computations be restructured so that a smaller complexity results?

In 1965, IBM researcher Jim Cooley and Princeton faculty member John Tukey developed what is now known as the Fast Fourier Transform (FFT). It is an algorithm for computing the DFT that has order O(N log N) for certain length inputs. Now when the length of data doubles, the spectral computation time will not quadruple as with the DFT algorithm; instead, it approximately doubles. Later research showed that no algorithm for computing the DFT could have a smaller complexity than the FFT. Surprisingly, historical work has shown that Gauss (http://www-groups.dcs.st-and.ac.uk/~history/Mathematicians/Gauss.html) developed the same algorithm in the early nineteenth century, but did not publish it! After the FFT's rediscovery, not only was the computation of a signal's spectrum greatly speeded, but the fact that it was an algorithm meant that computations had a flexibility not available to analog implementations.

Exercise 5.9.1 (Solution on p. 222.)
Before developing the FFT, let's try to appreciate the algorithm's impact. Suppose a short-length transform takes 1 ms. We want to calculate a transform of a signal that is 10 times longer. Compare how much longer a straightforward implementation of the DFT would take in comparison to an FFT implementation.

To derive the FFT, we assume that the signal's duration is a power of two: N = 2^L. Consider what happens to the even-numbered and odd-numbered elements of the sequence in the DFT calculation.

S(k) = s(0) + s(2) e^{-j2π·2k/N} + ... + s(N-2) e^{-j2π(N-2)k/N} + s(1) e^{-j2πk/N} + s(3) e^{-j2π·3k/N} + ... + s(N-1) e^{-j2π(N-1)k/N}
     = [s(0) + s(2) e^{-j2πk/(N/2)} + ... + s(N-2) e^{-j2π(N/2-1)k/(N/2)}]
       + [s(1) + s(3) e^{-j2πk/(N/2)} + ... + s(N-1) e^{-j2π(N/2-1)k/(N/2)}] e^{-j2πk/N}    (5.39)

Each term in square brackets has the form of a length-N/2 DFT. The first one is a DFT of the even-numbered elements, and the second of the odd-numbered elements. The first DFT is combined with the second multiplied by the complex exponential e^{-j2πk/N}. The half-length transforms are each evaluated at frequency indices k = 0, ..., N-1. Normally, the number of frequency indices in a DFT calculation ranges between zero and the transform length minus one. The computational advantage of the FFT comes from recognizing the periodic nature of the discrete Fourier transform: the FFT simply reuses the computations made in the half-length transforms and combines them through additions and the multiplication by e^{-j2πk/N}, which is not periodic over N/2.
Figure 5.12 (Length-8 DFT decomposition) illustrates this decomposition. As it stands, we now compute two length-N/2 transforms (complexity 2O(N^2/4)), multiply one of them by the complex exponential (complexity O(N)), and add the results (complexity O(N)). At this point, the total complexity is still dominated by the half-length DFT calculations, but the proportionality coefficient has been reduced.

Now for the fun. Because N = 2^L, each of the half-length transforms can be reduced to two quarter-length transforms, each of these to two eighth-length ones, etc. This decomposition continues until we are left with length-2 transforms. This transform is quite simple, involving only additions. Thus, the first stage of the FFT has N/2 length-2 transforms (see the bottom part of Figure 5.12: Length-8 DFT decomposition). Pairs of these transforms are combined by adding one to the other multiplied by a complex exponential. Each pair requires 4 additions and 2 multiplications, giving a total number of computations equaling 6·(N/4) = 3N/2. This number of computations does not change from stage to stage. Because the number of stages, the number of times the length can be divided by two, equals log2(N), the number of arithmetic operations equals (3N/2) log2(N), which makes the complexity of the FFT O(N log2 N).

Figure 5.12 (Length-8 DFT decomposition): [Panel (a): a length-8 DFT split into a length-4 DFT of the even-indexed inputs (s0, s2, s4, s6) and a length-4 DFT of the odd-indexed inputs (s1, s3, s5, s7), combined after multiplication by e^{-j2πk/8}. Panel (b): the fully decomposed version built from 4 length-2 DFTs and 2 length-4 DFTs.] The initial decomposition of a length-8 DFT into the terms using even- and odd-indexed inputs marks the first phase of developing the FFT algorithm. When these half-length transforms are successively decomposed, we are left with the diagram shown in the bottom panel that depicts the length-8 FFT computation.

Doing an example will make the computational savings more obvious. Let's look at the details of a length-8 DFT. As shown in Figure 5.12 (Length-8 DFT decomposition), we first decompose the DFT into two length-4 DFTs, with the outputs added and subtracted together in pairs. Considering Figure 5.12 as the frequency index goes from 0 through 7, we recycle values from the length-4 DFTs into the final calculation because of the periodicity of the DFT output. Examining how pairs of outputs are collected together, we create the basic computational element known as a butterfly (Figure 5.13: Butterfly).

Figure 5.13 (Butterfly): [Two inputs a and b produce the outputs a + b e^{-j2πk/N} and a - b e^{-j2πk/N}.] The basic computational element of the fast Fourier transform is the butterfly. It takes two complex numbers, represented by a and b, and forms the quantities shown. Each butterfly requires one complex multiplication and two complex additions.

By considering together the computations involving common output frequencies from the two half-length DFTs, we see that the two complex multiplies are related to each other (e^{-j2π(k+N/2)/N} = -e^{-j2πk/N}), and we can reduce our computational work even further. By further decomposing the length-4 DFTs into two length-2 DFTs and combining their outputs, we arrive at the diagram summarizing the length-8 fast Fourier transform (Figure 5.12: Length-8 DFT decomposition). Although most of the complex multiplies are quite simple (multiplying by e^{-jπ/2} means swapping real and imaginary parts and changing their signs), let's count those for purposes of evaluating the complexity as full complex multiplies. We have N/2 = 4 complex multiplies and N = 8 complex additions for each stage, and log2(N) = 3 stages, making the number of basic computations (3N/2) log2(N), as predicted.

Exercise 5.9.2 (Solution on p. 222.)
Note that the orderings of the input sequence in the two parts of Figure 5.12 (Length-8 DFT decomposition) aren't quite the same. Why not? How is the ordering determined?

Other "fast" algorithms were discovered, all of which make use of how many common factors the transform length N has. In number theory, the number of prime factors a given integer has measures how composite it is. The numbers 16 and 81 are highly composite (equaling 2^4 and 3^4 respectively), the number 18 is less so (2^1·3^2), and 17 not at all (it's prime). In over thirty years of Fourier transform algorithm development, the original Cooley-Tukey algorithm is far and away the most frequently used. It is so computationally efficient that power-of-two transform lengths are frequently used regardless of the actual length of the data.

Exercise 5.9.3 (Solution on p. 222.)
Suppose the length of the signal were 500. How would you compute the spectrum of this signal using the Cooley-Tukey algorithm? What would the length N of the transform be?
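The even/odd decomposition (5.39) translates almost line for line into a recursive program. The following MATLAB function (saved as simplefft.m) is a sketch added here for illustration: it assumes a column-vector input whose length is a power of two, and is meant to expose the structure rather than compete with the built-in fft.

function S = simplefft(s)
% Decimation-in-time FFT following (5.39); s must be a column vector
% whose length is a power of two.
N = length(s);
if N == 1
  S = s;                            % a length-1 transform is the sample itself
else
  Se = simplefft(s(1:2:N));         % DFT of even-indexed elements s(0), s(2), ...
  So = simplefft(s(2:2:N));         % DFT of odd-indexed elements s(1), s(3), ...
  w = exp(-1j*2*pi*(0:N/2-1).'/N);  % complex exponential e^{-j2*pi*k/N}
  S = [Se + w.*So; Se - w.*So];     % butterflies reuse the half-length outputs
end

The second half of the output exploits the periodicity noted above: e^{-j2π(k+N/2)/N} = -e^{-j2πk/N}. A quick check: with s = randn(8,1), max(abs(simplefft(s) - fft(s))) is at roundoff level.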
5.10 Spectrograms

We know how to acquire analog signals for digital processing (pre-filtering, sampling, and A/D conversion (Section 5.3 and Section 5.4)) and to compute spectra of discrete-time signals (using the FFT algorithm (Section 5.9)). Let's put these various components together to learn how the spectrogram shown in Figure 5.14 (Speech Spectrogram), which is used to analyze speech, is calculated. The speech was sampled at a rate of 11.025 kHz and passed through a 16-bit A/D converter.

Point of interest: Music compact discs (CDs) encode their signals at a sampling rate of 44.1 kHz. We'll learn the rationale for this number later. The 11.025 kHz sampling rate for the speech is 1/4 of the CD sampling rate, and was the lowest available sampling rate commensurate with speech signal bandwidths available on my computer.

Exercise 5.10.1 (Solution on p. 222.)
Looking at Figure 5.14 (Speech Spectrogram), the signal lasted a little over 1.2 seconds. How long was the sampled signal (in terms of samples)? What was the datarate during the sampling process in bps (bits per second)? Assuming the computer storage is organized in terms of bytes (8-bit quantities), how many bytes of computer memory does the speech consume?

Figure 5.14 (Speech Spectrogram): [Spectrogram of the phrase "Rice University": frequency from 0 to 5000 Hz vertically, time from 0 to 1.2 s horizontally, with the syllables "Ri-ce Uni-ver-si-ty" marked along the time axis.]

The resulting discrete-time signal, shown in the bottom of Figure 5.15 (Spectrogram Hanning vs. Rectangular), clearly changes its character with time. To display these spectral changes, the long signal was sectioned into frames: comparatively short, contiguous groups of samples. Conceptually, a Fourier transform of each frame is calculated using the FFT. Each frame is not so long that significant signal variations are retained within a frame, but not so short that we lose the signal's spectral character. Roughly speaking,
the speech signal's spectrum is evaluated over successive time segments and stacked side by side so that the x-axis corresponds to time and the y-axis to frequency, with color indicating the spectral amplitude.

An important detail emerges when we examine each framed signal (Figure 5.15: Spectrogram Hanning vs. Rectangular). At the frame's edges, the signal may change very abruptly, a feature not present in the original signal. A transform of such a segment reveals a curious oscillation in the spectrum, an artifact directly related to this sharp amplitude change. A better way to frame signals for spectrograms is to apply a window: shape the signal values within a frame so that the signal decays gracefully as it nears the edges. This shaping is accomplished by multiplying the framed signal by the sequence w(n). In sectioning the signal, we essentially applied a rectangular window: w(n) = 1, 0 ≤ n ≤ N-1. A much more graceful window is the Hanning window; it has the cosine shape w(n) = (1/2)(1 - cos(2πn/N)). As shown in Figure 5.15 (Spectrogram Hanning vs. Rectangular), this shaping greatly reduces spurious oscillations in each frame's spectrum. Considering the spectrum of the Hanning windowed frame, we find that the oscillations resulting from applying the rectangular window obscured a formant (the one located at a little more than half the Nyquist frequency).

Figure 5.15 (Spectrogram Hanning vs. Rectangular): [A 1024-sample waveform with 256-sample frames marked, the Hanning and rectangular windows, and the length-512 FFT spectra that result from each windowing.] The top waveform is a segment 1024 samples long taken from the beginning of the "Rice University" phrase. Computing Figure 5.14 (Speech Spectrogram) involved creating frames, here demarked by the vertical lines, that were 256 samples long, and finding the spectrum of each. If a rectangular window is applied (corresponding to extracting a frame from the signal), oscillations appear in the spectrum (middle of the bottom row). Applying a Hanning window gracefully tapers the signal toward frame edges, thereby yielding a more accurate computation of the signal's spectrum at that moment of time.

Exercise 5.10.2 (Solution on p. 222.)
What might be the source of these oscillations? To gain some insight, what is the length-2N discrete Fourier transform of a length-N pulse? The pulse emulates the rectangular window, and certainly has edges. Compare your answer with the length-2N transform of a length-N Hanning window.
If you examine the windowed signal sections in sequence, to see windowing's effect on signal amplitude, we find that we have managed to amplitude-modulate the signal with the periodically repeated window (Figure 5.16: Non-overlapping windows). To alleviate this problem, frames are overlapped (typically by half a frame duration). This solution requires more Fourier transform calculations than needed by rectangular windowing, but the spectra are much better behaved and spectral changes are much better captured.

Figure 5.16 (Non-overlapping windows): [The original speech segment above, and the same segment after multiplication by non-overlapping Hanning windows below.] In comparison with the original speech segment shown in the upper plot, the non-overlapped Hanning windowed version shown below it is very ragged. Clearly, spectral information extracted from the bottom plot could well miss important features present in the original.

Figure 5.17 (Overlapping windows for computing spectrograms) illustrates these computations. The speech signal, such as shown in the speech spectrogram (Figure 5.14: Speech Spectrogram), is sectioned into overlapping, equal-length frames, with a Hanning window applied to each frame. The spectra of each of these is calculated and displayed in spectrograms with frequency extending vertically, window time location running horizontally, and spectral magnitude color-coded.

Figure 5.17 (Overlapping windows for computing spectrograms): [The speech segment with its sequence of overlapping Hanning windows, an FFT applied to each windowed frame, and the log spectral magnitudes assembled into the spectrogram.] The original speech segment and the sequence of overlapping Hanning windows applied to it are shown in the upper portion. Frames were 256 samples long and a Hanning window was applied with a half-frame overlap. A length-512 FFT of each frame was computed, with the magnitude of the first 257 FFT values displayed vertically, with spectral amplitude values color-coded.

Exercise 5.10.3 (Solution on p. 222.)
Why the specific values of 256 for N and 512 for K? Another issue: how was the length-512 transform of each length-256 windowed frame computed?
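Putting the pieces together, a minimal spectrogram computation might look like the sketch below (an addition, not from the original text; the vector speech is an assumed placeholder for the sampled signal). It follows the recipe just described: 256-sample frames, a Hanning window, half-frame overlap, and length-512 FFTs, keeping the first 257 magnitude values of each.

Nframe = 256; K = 512; step = Nframe/2;        % frame length, FFT length, overlap
w = 0.5*(1 - cos(2*pi*(0:Nframe-1)'/Nframe));  % Hanning window
nframes = floor((length(speech) - Nframe)/step) + 1;
sgram = zeros(K/2 + 1, nframes);
for m = 1:nframes
  frame = speech((m-1)*step + (1:Nframe));     % extract one overlapping frame
  S = fft(w .* frame(:), K);                   % windowed, zero-padded FFT
  sgram(:, m) = abs(S(1:K/2+1));               % frequencies 0 through 1/2
end
imagesc(log(sgram + eps)), axis xy             % log magnitude: time vs. frequency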
5.11 Discrete-Time Systems

When we developed analog systems, interconnecting the circuit elements provided a natural starting place for constructing useful devices. In discrete-time signal processing, we are not limited by hardware considerations but by what can be constructed in software.

Exercise 5.11.1 (Solution on p. 222.)
One of the first analog systems we described was the amplifier (Section 2.6.2: Amplifiers). We found that implementing an amplifier was difficult in analog systems, requiring an op-amp at least. What is the discrete-time implementation of an amplifier? Is this especially hard or easy?

In fact, we will discover that frequency-domain implementation of systems, wherein we multiply the input signal's Fourier transform by a frequency response, is not only a viable alternative, but also a computationally efficient one. We begin with discussing the underlying mathematical structure of linear, shift-invariant systems, and devise how software filters can be constructed.

5.12 Discrete-Time Systems in the Time-Domain

A discrete-time signal s(n) is delayed by n_0 samples when we write s(n - n_0), with n_0 > 0. Choosing n_0 to be negative advances the signal along the integers. As opposed to analog delays (Section 2.6.3: Delay), discrete-time delays can only be integer valued. In the frequency domain, delaying a signal corresponds to a linear phase shift of the signal's discrete-time Fourier transform: s(n - n_0) ↔ e^{-j2πfn_0} S(e^{j2πf}).

Linear discrete-time systems have the superposition property.

S(a_1 x_1(n) + a_2 x_2(n)) = a_1 S(x_1(n)) + a_2 S(x_2(n))    (5.40)

A discrete-time system is called shift-invariant (analogous to time-invariant analog systems (p. 29)) if delaying the input delays the corresponding output: if S(x(n)) = y(n), then a shift-invariant system has the property

S(x(n - n_0)) = y(n - n_0)    (5.41)

We use the term shift-invariant to emphasize that delays can only have integer values in discrete-time, while in analog signals, delays can be arbitrarily valued.

We want to concentrate on systems that are both linear and shift-invariant. It will be these that allow us the full power of frequency-domain analysis and implementations. Because we have no physical constraints in "constructing" such systems, we need only a mathematical specification. In analog systems, the differential equation specifies the input-output relationship in the time-domain. The corresponding discrete-time specification is the difference equation.

y(n) = a_1 y(n-1) + ... + a_p y(n-p) + b_0 x(n) + b_1 x(n-1) + ... + b_q x(n-q)    (5.42)

Here, the output signal y(n) is related to its past values y(n-l), l = {1, ..., p}, and to the current and past values of the input signal x(n). The system's characteristics are determined by the choices for the number of coefficients p and q and the coefficients' values {a_1, ..., a_p} and {b_0, b_1, ..., b_q}.

aside: There is an asymmetry in the coefficients: where is a_0? This coefficient would multiply the y(n) term in (5.42). We have essentially divided the equation by it, which does not change the input-output relationship. We have thus created the convention that a_0 is always one.
As opposed to differential equations, which only provide an implicit description of a system (we must somehow solve the differential equation), difference equations provide an explicit way of computing the output for any input. We simply express the difference equation by a program that calculates each output from the previous output values, and the current and previous inputs. Difference equations are usually expressed in software with for loops. A MATLAB program that would compute the first 1000 values of the output has the form

for n=1:1000
  y(n) = sum(a.*y(n-1:-1:n-p)) + sum(b.*x(n:-1:n-q));
end

An important detail emerges when we consider making this program work; in fact, as written it has (at least) two bugs. What input and output values enter into the computation of y(1)? We need values for y(0), y(-1), ..., values we have not yet computed. To compute them, we would need more previous values of the output, which we have not yet computed; to compute these values, we would need even earlier values, ad infinitum. The way out of this predicament is to specify the system's initial conditions: we must provide output values that occurred before the input started. These values can be arbitrary, but the choice does impact how the system responds to a given input. One choice gives rise to a linear system: make the initial conditions zero. The reason lies in the definition of a linear system (Section 2.6.6: Linear Systems): the only way that the output to a sum of signals can be the sum of the individual outputs occurs when the initial conditions in each case are zero.

Exercise 5.12.1 (Solution on p. 223.)
The initial condition issue resolves making sense of the difference equation for inputs that start at some index. However, the program will not work because of a programming, not conceptual, error. What is it? How can it be "fixed?"

Example 5.5
Let's consider the simple system having p = 1 and q = 0.

y(n) = ay(n-1) + bx(n)    (5.43)

To compute the output at some index, this difference equation says we need to know the previous output y(n-1) and what the input signal is at that moment of time. In more detail, let's compute this system's output to a unit-sample input: x(n) = δ(n). Because the input is zero for negative indices, we start by trying to compute the output at n = 0.

y(0) = ay(-1) + b    (5.44)

What is the value of y(-1)? Because we have used an input that is zero for all negative indices, it is reasonable to assume that the output is also zero. Certainly, the difference equation would not describe a linear system (Section 2.6.6: Linear Systems) if an input that is zero for all time did not produce a zero output. With this assumption, y(-1) = 0, leaving y(0) = b. For n > 0, the input unit-sample is zero, which leaves us with the difference equation y(n) = ay(n-1), n > 0. We can envision how the filter responds to this input by making a table.

y(n) = ay(n-1) + bδ(n)    (5.45)

  n     x(n)    y(n)
 -1      0       0
  0      1       b
  1      0       ba
  2      0       ba^2
  :      :       :
  n      0       ba^n

Table 5.1

Coefficient values determine how the output behaves. The parameter b can be any value, and serves as a gain. The effect of the parameter a is more complicated (see Table 5.1). If it equals zero, the output simply equals the input times the gain b. For all non-zero values of a, the output lasts forever; such systems are said to be IIR (Infinite Impulse Response). The reason for this terminology is that the unit sample is also known as the impulse (especially in analog situations), and the system's response to the "impulse" lasts forever. If a is positive and less than one, the output is a decaying exponential. When a = 1, the output is a unit step. If a is negative and greater than -1, the output oscillates while decaying exponentially. When a = -1, the output changes sign forever, alternating between b and -b.
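MATLAB's built-in filter function implements exactly this kind of difference equation (with the convention that the a coefficients appear negated in its second argument and that initial conditions are zero), so the table above can be checked directly. A small sketch, added here for illustration:

a = 0.5; b = 1;            % example coefficient values
x = [1 zeros(1, 9)];       % unit-sample input delta(n)
y = filter(b, [1 -a], x)   % zero initial conditions assumed, yielding
                           % b, b*a, b*a^2, ... exactly as in Table 5.1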
More dramatic effects occur when |a| > 1; whether positive or negative, the output signal becomes larger and larger, growing exponentially.

Figure 5.18: [The unit-sample input x(n), and the outputs y(n) for a = 0.5, b = 1; a = -0.5, b = 1; and a = 1, b = 1.] The input to the simple example system, a unit sample, is shown at the top, with the outputs for several system parameter values shown below.

Positive values of a are used in population models to describe how population size increases over time. Here, n might correspond to generation. The difference equation says that the number in the next generation is some multiple of the previous one. If this multiple is less than one, the population becomes extinct; if greater than one, the population flourishes. The same difference equation also describes the effect of compound interest on deposits. Here, n indexes the times at which compounding occurs (daily, monthly, etc.), a equals the compound interest rate plus one, and b = 1 (the bank provides no gain). In signal processing applications, we typically require that the output remain bounded for any input. For our example, that means that we restrict |a| < 1 and choose values for it and the gain according to the application.

Exercise 5.12.2 (Solution on p. 223.)
Note that the difference equation (5.42) does not involve terms like y(n+1) or x(n+1) on the equation's right side. Can such terms also be included? Why or why not?

Example 5.6
A somewhat different system has no "a" coefficients. Consider the difference equation

y(n) = (1/q)(x(n) + ... + x(n-q+1))    (5.46)

Because this system's output depends only on current and previous input values, we need not be concerned with initial conditions. When the input is a unit-sample, the output equals 1/q for n = {0, ..., q-1}, then equals zero thereafter. Such systems are said to be FIR (Finite Impulse Response) because their unit sample responses have finite duration. Plotting this response (Figure 5.19) shows that the unit-sample response is a pulse of width q and height 1/q. This waveform is also known as a boxcar, hence the name boxcar filter given to this system. We'll derive its frequency response and develop its filtering interpretation in the next section. For now, note that the difference equation says that each output value equals the average of the input's current and previous values. Thus, the output equals the running average of the input's previous q values. Such a system could be used to produce the average weekly temperature (q = 7) that could be updated daily.

Figure 5.19: [The unit-sample response of a length-5 boxcar filter: height 1/5 over five samples.] The plot shows the unit-sample response of a length-5 boxcar filter.

[This media object is a LabVIEW VI. Please view or download it at <DiscreteTimeSys.llb>]

5.13 Discrete-Time Systems in the Frequency Domain

As with analog linear systems, we need to find the frequency response of discrete-time systems. We used impedances to derive the frequency response directly from the circuit's structure. The only structure we have so far for a discrete-time system is the difference equation. We proceed as when we used impedances: let the input be a complex exponential signal. When we have a linear, shift-invariant system, the output should also be a complex exponential of the same frequency, changed in amplitude and phase. These amplitude and phase changes comprise the frequency response we seek. The complex exponential input signal is x(n) = X e^{j2πfn}. Note that this input occurs for all values of n; no need to worry about initial conditions here. Assume the output has a similar form: y(n) = Y e^{j2πfn}. Plugging these signals into the fundamental difference equation (5.42), we have

Y e^{j2πfn} = a_1 Y e^{j2πf(n-1)} + ... + a_p Y e^{j2πf(n-p)} + b_0 X e^{j2πfn} + b_1 X e^{j2πf(n-1)} + ... + b_q X e^{j2πf(n-q)}    (5.47)

The assumed output does indeed satisfy the difference equation if the output complex amplitude is related to the input amplitude by

Y = ((b_0 + b_1 e^{-j2πf} + ... + b_q e^{-j2πqf}) / (1 - a_1 e^{-j2πf} - ... - a_p e^{-j2πpf})) X
This relationship corresponds to the system's frequency response or, by another name, its transfer function. We find that any discrete-time system defined by a difference equation has a transfer function given by

H(e^{j2πf}) = (b_0 + b_1 e^{-j2πf} + ... + b_q e^{-j2πqf}) / (1 - a_1 e^{-j2πf} - ... - a_p e^{-j2πpf})    (5.48)

Furthermore, because any discrete-time signal can be expressed as a superposition of complex exponential signals, and because linear discrete-time systems obey the Superposition Principle, the transfer function relates the discrete-time Fourier transform of the system's output to the input's Fourier transform.

Y(e^{j2πf}) = X(e^{j2πf}) H(e^{j2πf})    (5.49)

Example 5.7
The frequency response of the simple IIR system (difference equation given in a previous example (Example 5.5)) is given by

H(e^{j2πf}) = b / (1 - a e^{-j2πf})    (5.50)

This Fourier transform occurred in a previous example; the exponential signal spectrum (Figure 5.10: Spectra of exponential signals) portrays the magnitude and phase of this transfer function. When the filter coefficient a is positive, we have a lowpass filter; negative a results in a highpass filter. The larger the coefficient in magnitude, the more pronounced the lowpass or highpass filtering.

Example 5.8
The length-q boxcar filter (difference equation found in a previous example (Example 5.6)) has the frequency response

H(e^{j2πf}) = (1/q) Σ_{m=0}^{q-1} e^{-j2πfm}    (5.51)

This expression amounts to the Fourier transform of the boxcar signal (Figure 5.19). There we found that this frequency response has a magnitude equal to the absolute value of dsinc(πf); see the length-10 filter's frequency response (Figure 5.11: Spectrum of length-ten pulse). We see that boxcar filters, length-q signal averagers, have a lowpass behavior, with a cutoff frequency of 1/q.

Exercise 5.13.1 (Solution on p. 223.)
Suppose we multiply the boxcar filter's coefficients by a sinusoid: b_m = (1/q) cos(2πf_0 m). Use Fourier transform properties to determine the transfer function. How would you characterize this system: does it act like a filter? If so, what kind of filter, and how do you control its characteristics with the filter's coefficients?

These examples illustrate the point that systems described (and implemented) by difference equations serve as filters for discrete-time signals. The filter's order is given by the number p of denominator coefficients in the transfer function (if the system is IIR) or by the number q of numerator coefficients if the filter is FIR. When a system's transfer function has both terms, the system is usually IIR, and its order equals p regardless of q. By selecting the coefficients and the filter type, filters having virtually any frequency response desired can be designed. This design flexibility can't be found in analog systems. In the next section, we detail how analog signals can be filtered by computers, offering a much greater range of filtering possibilities than is possible with circuits.
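Both transfer functions are simple to evaluate numerically. The sketch below (an addition, not from the original text) plots the magnitudes of (5.50) and (5.51) over one period, showing the first-order IIR filter's lowpass behavior for a = 0.5 alongside the boxcar averager's dsinc-shaped response.

f = linspace(-0.5, 0.5, 501);                    % one period of frequency
a = 0.5; b = 1;
H_iir = b ./ (1 - a*exp(-1j*2*pi*f));            % first-order IIR (5.50)
q = 10;
H_box = (1/q)*sum(exp(-1j*2*pi*(0:q-1)'*f), 1);  % length-10 boxcar (5.51)
plot(f, abs(H_iir), f, abs(H_box))
xlabel('f'), legend('IIR, a = 0.5', 'boxcar, q = 10')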
5.14 Filtering in the Frequency Domain

Because we are interested in actual computations rather than analytic calculations, we must consider the details of the discrete Fourier transform. To compute the length-N DFT, we assume that the signal has a duration less than or equal to N.

Because frequency responses have an explicit frequency-domain specification (5.48) in terms of filter coefficients, we don't have a direct handle on which signal has a Fourier transform equaling a given frequency response. Finding this signal is quite easy. First of all, note that the discrete-time Fourier transform of a unit sample equals one for all frequencies. Because the input and output of linear, shift-invariant systems are related to each other by Y(e^{j2πf}) = H(e^{j2πf}) X(e^{j2πf}), a unit-sample input, which has X(e^{j2πf}) = 1, results in the output's Fourier transform equaling the system's transfer function.

Exercise 5.14.1 (Solution on p. 223.)
This statement is a very important result. Derive it yourself.

In the time-domain, the output for a unit-sample input is known as the system's unit-sample response, and is denoted by h(n). Combining the frequency-domain and time-domain interpretations of a linear, shift-invariant system's unit-sample response, we have that h(n) and the transfer function are Fourier transform pairs in terms of the discrete-time Fourier transform.

h(n) ↔ H(e^{j2πf})    (5.52)

Returning to the issue of how to use the DFT to perform filtering, we can analytically specify the frequency response and derive the corresponding length-N DFT by sampling the frequency response.

H(k) = H(e^{j2πk/N}),  k = {0, ..., N-1}    (5.53)

Computing the inverse DFT yields a length-N signal no matter what the actual duration of the unit-sample response might be. If the unit-sample response has a duration less than or equal to N (it's a FIR filter), computing the inverse DFT of the sampled frequency response indeed yields the unit-sample response. If, however, the duration exceeds N, errors are encountered. The nature of these errors is easily explained by appealing to the Sampling Theorem. By sampling in the frequency domain, we have the potential for aliasing in the time domain (sampling in one domain, be it time or frequency, can result in aliasing in the other) unless we sample fast enough. Here, the duration of the unit-sample response determines the minimal sampling rate that prevents aliasing. For FIR systems (they, by definition, have finite-duration unit-sample responses) the number of required DFT samples equals the unit-sample response's duration: N ≥ q.

Exercise 5.14.2 (Solution on p. 223.)
Derive the minimal DFT length for a length-q unit-sample response using the Sampling Theorem. Because sampling in the frequency domain causes repetitions of the unit-sample response in the time domain, sketch the time-domain result for various choices of the DFT length N.

Exercise 5.14.3 (Solution on p. 223.)
Express the unit-sample response of a FIR filter in terms of difference equation coefficients. Note that the corresponding question for IIR filters is far more difficult to answer: consider the example (Example 5.5).

For IIR systems, we cannot use the DFT to find the system's unit-sample response: aliasing of the unit-sample response will always occur. Consequently, we can only implement an IIR filter accurately in the time domain with the system's difference equation. Frequency-domain implementations are restricted to FIR filters.
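The sampling-then-inverting procedure, and the time-domain aliasing that a too-short DFT produces, can both be seen in a few lines. The sketch below (an addition, not from the original text) samples the boxcar's frequency response (5.51) at N = 8 points; the inverse DFT returns the length-5 unit-sample response exactly because N ≥ q. Rerunning it with N = 4 wraps the response around in time.

q = 5; N = 8;                                    % N >= q, so no time-domain aliasing
k = 0:N-1;
H = (1/q)*sum(exp(-1j*2*pi*(0:q-1)'*(k/N)), 1);  % H(k) = H(e^{j2*pi*k/N}), as in (5.53)
h = real(ifft(H))                                % 0.2 0.2 0.2 0.2 0.2 0 0 0
% With N = 4 instead, ifft yields 0.4 0.2 0.2 0.2: the tail aliases onto n = 0.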
Another issue arises in frequency-domain filtering that is related to time-domain aliasing, this time when we consider the output. Assume we have an input signal having duration N_x that we pass through a FIR filter having a length-(q+1) unit-sample response. What is the duration of the output signal? The difference equation for this filter is

y(n) = b_0 x(n) + ... + b_q x(n-q)    (5.54)

This equation says that the output depends on current and past input values, with the input value q samples previous defining the extent of the filter's memory of past input values. For example, the output at index N_x depends on x(N_x) (which equals zero), on x(N_x - 1), through x(N_x - q). Thus, the output returns to zero only after the last input value passes through the filter's memory. As the input signal's last value occurs at index N_x - 1, the last nonzero output value occurs when n - q = N_x - 1, or n = q + N_x - 1. Thus, the output signal's duration equals q + N_x.

Exercise 5.14.4 (Solution on p. 223.)
In words, we express this result as "The output's duration equals the input's duration plus the filter's duration minus one." Demonstrate the accuracy of this statement.

The main theme of this result is that a filter's output extends longer than either its input or its unit-sample response. Thus, to avoid aliasing when we use DFTs, the dominant factor is not the duration of the input or of the unit-sample response, but of the output. Since the longest signal among the input, unit-sample response, and output is the output, it is that signal's duration that determines the transform length: the number of values at which we must evaluate the frequency response's DFT must be at least q + N_x, and we must compute the same-length DFT of the input. To accommodate a signal shorter than the DFT length, we simply zero-pad the input: ensure that for indices extending beyond the signal's duration the signal is zero. Frequency-domain filtering, diagrammed in Figure 5.20, is accomplished by storing the filter's frequency response as the DFT H(k), computing the input's DFT X(k), multiplying them to create the output's DFT Y(k) = H(k) X(k), and computing the inverse DFT of the result to yield y(n).

Figure 5.20: [Block diagram: x(n) → DFT → X(k), multiplied by H(k) to give Y(k), then IDFT → y(n).] To filter a signal in the frequency domain, first compute the DFT of the input, multiply the result by the sampled frequency response, and finally compute the inverse DFT of the product. The DFT's length must be at least the sum of the input's and unit-sample response's durations minus one. We calculate these discrete Fourier transforms using the fast Fourier transform algorithm, of course.

Before detailing this procedure further, let's clarify why so many new issues arose in trying to develop a frequency-domain implementation of linear filtering. The frequency-domain relationship between a filter's input and output is always true: Y(e^{j2πf}) = H(e^{j2πf}) X(e^{j2πf}). The Fourier transforms in this result are discrete-time Fourier transforms; for example, X(e^{j2πf}) = Σ_n x(n) e^{-j2πfn}. Unfortunately, using this relationship to perform filtering is restricted to the situation when we have analytic formulas for the frequency response and the input signal. The reason why we had to "invent" the discrete Fourier transform (DFT) has the same origin: the spectrum resulting from the discrete-time Fourier transform depends on the continuous frequency variable f. That's fine for analytic calculation, but computationally we would have to make an uncountably infinite number of computations.

note: Did you know that two kinds of infinities can be meaningfully defined? A countably infinite quantity means that it can be associated with a limiting process involving the integers; an uncountably infinite quantity cannot be so associated. The number of rational numbers is countably infinite (the numerator and denominator correspond to locating the rational by row and column; the total number so located can be counted, voila!); the number of irrational numbers is uncountably infinite. Guess which is "bigger?"

The DFT computes the Fourier transform at a finite set of frequencies (it samples the true spectrum), which can lead to aliasing in the time-domain unless we sample sufficiently fast. The sampling interval here is 1/K for a length-K DFT: faster sampling to avoid aliasing thus requires a longer transform calculation. Since the longest signal among the input, unit-sample response, and output is the output, it is that signal's duration that determines the transform length. We simply extend the other two signals with zeros (zero-pad) to compute their DFTs.

Example 5.9
Suppose we want to average daily stock prices taken over last year to yield a running weekly average (average over five trading sessions). The filter we want is a length-5 averager (as shown in the unit-sample response (Figure 5.19)), and the input's duration is 253 (365 calendar days minus weekend days and holidays). The output duration will be 253 + 5 - 1 = 257, and this determines the transform length we need to use. Because we want to use the FFT, we are restricted to power-of-two transform lengths; we need to choose any FFT length that exceeds the required DFT length. As it turns out, 256 is a power of two (2^8 = 256), and this length just undershoots our required length. To use frequency-domain techniques, we must use length-512 fast Fourier transforms.

Figure 5.21 shows the input and the filtered output. The MATLAB programs that compute the filtered output in the time and frequency domains are

Time Domain
h = [1 1 1 1 1]/5;                  % length-5 averager
y = filter(h,1,[djia zeros(1,4)]);  % zero-pad so the output attains full duration

Frequency Domain
h = [1 1 1 1 1]/5;
DJIA = fft(djia, 512);              % length-512 transform of the input
H = fft(h, 512);                    % sampled frequency response
Y = H.*DJIA;                        % multiply the spectra
y = ifft(Y);                        % return to the time domain

Figure 5.21 (Dow-Jones Industrial Average): [The index and its filtered version over 253 trading days of 1997.] The blue line shows the Dow Jones Industrial Average from 1997, and the red one the length-5 boxcar-filtered result that provides a running weekly average of this market index. Note the "edge" effects in the filtered output.
These artifacts can be handled in two ways: we can just ignore the edge effects, or the data from the previous and succeeding years' last and first weeks, respectively, can be placed at the ends.

note: The filter program has the feature that the length of its output equals the length of its input. To force it to produce a signal having the proper length, the program zero-pads the input appropriately. MATLAB's fft function automatically zero-pads its input if the specified transform length (its second argument) exceeds the signal's length.

The frequency-domain result will have a small imaginary component (largest value is 2.2 × 10^-11) because of the inherent finite-precision nature of computer arithmetic. Because of the unfortunate misfit between signal lengths and favored FFT lengths, the number of arithmetic operations in the time-domain implementation is far less than those required by the frequency-domain version: 514 versus 62,271. If the input signal had been one sample shorter, the frequency-domain computations would have been more than a factor of two less (28,696), but still far more than in the time-domain implementation.

An interesting signal processing aspect of this example is demonstrated at the beginning and end of the output. The ramping up and down that occurs can be traced to assuming the input is zero before it begins and after it ends. The filter "sees" these initial and final values as the difference equation passes over the input.

5.15 Efficiency of Frequency-Domain Filtering

To determine for what signal and filter durations a time- or frequency-domain implementation would be the most efficient, we need only count the computations required by each. For the time-domain, difference-equation approach, we need N_x (2q + 1). The frequency-domain approach requires three Fourier transforms, each requiring (K/2) log2(K) computations for a length-K FFT, and the multiplication of two spectra (6K computations). The output-signal-duration-determined length must be at least N_x + q. Thus, we must compare

N_x (2q + 1) ↔ 6(N_x + q) + (3/2)(N_x + q) log2(N_x + q)

Exact analytic evaluation of this comparison is quite difficult (we have a transcendental equation to solve). Insight into this comparison is best obtained by dividing by N_x.

2q + 1 ↔ 6(1 + q/N_x) + (3/2)(1 + q/N_x) log2(N_x + q)

With this manipulation, we are evaluating the number of computations per sample. For any given value of the filter's order q, the right side, the number of frequency-domain computations, will exceed the left if the signal's duration is long enough. However, for filter durations greater than about 10, as long as the input is at least 10 samples, the frequency-domain approach is faster, so long as the FFT's power-of-two constraint is advantageous.

The frequency-domain approach is not yet viable; what will we do when the input signal is infinitely long? The difference-equation scenario fits perfectly with the envisioned digital filtering structure (Figure 5.24), but so far we have required the input to have limited duration (so that we could calculate its Fourier transform). The solution to this problem is quite simple: section the input into frames, filter each, and add the results together. To section a signal means expressing it as a linear combination of length-N_x non-overlapping "chunks." Because the filter is linear, filtering a sum of terms is equivalent to summing the results of filtering each term.

x(n) = Σ_{m=-∞}^{∞} x(n - mN_x) ⇒ y(n) = Σ_{m=-∞}^{∞} y(n - mN_x)    (5.55)

As illustrated in Figure 5.22, note that each filtered section has a duration longer than the input. Consequently, we must literally add the filtered sections together, not just butt them together.

Figure 5.22: [The noisy input signal sectioned into frames; the filtered, overlapped sections; and their sum forming the output.] The noisy input signal is sectioned into length-48 frames, each of which is filtered using frequency-domain techniques. Each filtered section is added to other outputs that overlap to create the signal equivalent to having filtered the entire input. The sinusoidal component of the signal is shown as the red dashed line.

Computational considerations reveal a substantial advantage for a frequency-domain implementation over a time-domain one. The number of computations for a time-domain implementation essentially remains constant whether we section the input or not; thus, the number of computations for each output is 2q + 1. In the frequency-domain approach, computation counting changes because we need only compute the filter's frequency response H(k) once, which amounts to a fixed overhead; we need only compute two DFTs and multiply them to filter a section. Letting N_x denote a section's length, the number of computations for a section amounts to (N_x + q) log2(N_x + q) + 6(N_x + q). In addition, we must add the filtered outputs together; the number of terms to add corresponds to the excess duration of the output compared with the input (q). The frequency-domain approach thus requires (1 + q/N_x) log2(N_x + q) + 7q/N_x + 6 computations per output value. For even modest filter orders, the frequency-domain approach is much faster.

Exercise 5.15.1 (Solution on p. 223.)
Show that as the section length increases, the frequency-domain approach becomes increasingly more efficient.

Note that the choice of section duration is arbitrary. Once the filter is chosen, we should section so that the required FFT length is precisely a power of two: choose N_x so that N_x + q = 2^L.
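This sectioned procedure (often called overlap-add) takes only a few lines to express. The sketch below is an addition, not from the original text; the signal length is chosen so the arithmetic comes out evenly. It filters length-60 sections with length-64 FFTs, adds the overlapping tails, and confirms agreement with the difference-equation result.

h = ones(1,5)/5; q = length(h) - 1;         % length-5 boxcar, q = 4
Nx = 60; K = Nx + q;                        % K = 64, a power of two
x = randn(1, 300);                          % 300 samples = 5 sections of length 60
H = fft(h, K);                              % frequency response: the fixed overhead
y = zeros(1, length(x) + q);
for m = 1:length(x)/Nx
  seg = x((m-1)*Nx + (1:Nx));               % one section
  yseg = real(ifft(fft(seg, K) .* H));      % frequency-domain filtering of the section
  idx = (m-1)*Nx + (1:K);
  y(idx) = y(idx) + yseg;                   % add the overlapping filtered sections
end
max(abs(y - filter(h, 1, [x zeros(1,q)])))  % matches the time-domain result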
Example 5.10
We want to lowpass filter a signal that contains a sinusoid and a significant amount of noise. The example shown in Figure 5.22 shows a portion of the noisy signal's waveform. If it weren't for the overlaid sinusoid, discerning the sine wave in the signal would be virtually impossible. One of the primary applications of linear filters is noise removal: preserve the signal by matching the filter's passband with the signal's spectrum, and greatly reduce all other frequency components that may be present in the noisy signal.

A smart Rice engineer has selected a FIR filter having a unit-sample response corresponding to a period-17 sinusoid: h(n) = (1/17)(1 - cos(2πn/17)), n = {0, ..., 16}, which makes q = 16. Its frequency response (determined by computing the discrete Fourier transform) is shown in Figure 5.23. To apply it, we can select the length of each section so that the frequency-domain filtering approach is maximally efficient: choose the section length N_x so that N_x + q is a power of two. To use a length-64 FFT, each section must be 48 samples long. Filtering with the difference equation would require 33 computations per output, while the frequency-domain approach requires a little over 16; this frequency-domain implementation is over twice as fast! Figure 5.22 shows how frequency-domain filtering works.

Figure 5.23: [The unit-sample response h(n) of the length-17 Hanning filter on the left, and its spectral magnitude |H(e^{j2πf})| on the right.] The figure shows the unit-sample response of a length-17 Hanning filter on the left and the frequency response on the right. This filter functions as a lowpass filter having a cutoff frequency of about 0.1.

We note that the noise has been dramatically reduced, with a sinusoid now clearly visible in the filtered output. Some residual noise remains because noise components within the filter's passband appear in the output as well as the signal.

Exercise 5.15.2 (Solution on p. 223.)
Note that when compared to the input signal's sinusoidal component, the output's sinusoidal component seems to be delayed. What is the source of this delay? Can it be removed?

Implementing the digital filter shown in the A/D block diagram (Figure 5.24) with a frequency-domain implementation requires some additional signal management not required by time-domain implementations. Conceptually, a real-time, time-domain filter could accept each sample as it becomes available, calculate the difference equation, and produce the output value, all in less than the sampling interval T_s. Frequency-domain approaches don't operate on a sample-by-sample basis; instead, they operate on sections. They filter in real time by producing N_x outputs for the same number of inputs faster than N_x T_s. Because they generally take longer to produce an output section than the sampling interval duration, we must filter one section while accepting into memory the next section to be filtered. In programming, the operation of building up sections while computing on previous ones is known as buffering. Buffering can also be used in time-domain filters, but isn't required.
5.16 Discrete-Time Filtering of Analog Signals

Because of the Sampling Theorem (Section 5.3.2: The Sampling Theorem), we can process, in particular filter, analog signals "with a computer" by constructing the system shown in Figure 5.24. To use this system, we are assuming that the input signal has a lowpass spectrum and can be bandlimited without affecting important signal aspects. Bandpass signals can also be filtered digitally, but require a more complicated system. Highpass signals cannot be filtered digitally. Note that the input and output filters must be analog filters; trying to operate without them can lead to potentially very inaccurate digitization.

Figure 5.24: [Block diagram: x(t) → LPF (cutoff W) → sampler at t = nT_s with 1/T_s > 2W → quantizer Q[·], giving x(n) = Q[x(nT_s)] → Digital Filter → y(n) → D/A → LPF (cutoff W) → y(t).] To process an analog signal digitally, the signal x(t) must be filtered with an anti-aliasing filter (to ensure a bandlimited signal) before A/D conversion. This lowpass filter (LPF) has a cutoff frequency of W Hz, which determines allowable sampling intervals T_s. The greater the number b of bits in the amplitude quantization portion Q[·] of the A/D converter, the greater the accuracy of the entire system. The resulting digital signal x(n) can now be filtered in the time-domain with a difference equation or in the frequency domain with Fourier transforms. The resulting output y(n) then drives a D/A converter and a second anti-aliasing filter (having the same bandwidth as the first one).

Another implicit assumption is that the digital filter can operate in real time: the computer and the filtering algorithm must be sufficiently fast so that outputs are computed faster than input values arrive. The sampling interval, which is determined by the analog signal's bandwidth, thus determines how long our program has to compute each output y(n). The computational complexity for calculating each output with a difference equation (5.42) is O(p + q). Frequency-domain implementation of the filter is also possible. The idea begins by computing the Fourier transform of a length-N portion of the input x(n), multiplying it by the filter's transfer function, and computing the inverse transform of the result. This approach seems overly complex and potentially inefficient. Detailing the complexity, however, we have O(N log N) for the two transforms (computed using the FFT algorithm) and O(N) for the multiplication by the transfer function, which makes the total complexity O(N log N) for N input values. A frequency-domain implementation thus requires O(log N) computational complexity for each output value.

The complexities of time-domain and frequency-domain implementations depend on different aspects of the filtering: the time-domain implementation depends on the combined orders of the filter, while the frequency-domain implementation depends on the logarithm of the Fourier transform's length. It could well be that in some problems the time-domain version is more efficient (more easily satisfies the real-time requirement), while in others the frequency-domain approach is faster. In the latter situations, it is the FFT algorithm for computing the Fourier transforms that enables the superiority of frequency-domain implementations. Because complexity considerations only express how algorithm running time increases with system parameter choices, we need to detail both implementations to determine which will be more suitable for any given filtering problem. Filtering with a difference equation is straightforward, and the number of computations that must be made for each output value is 2(p + q).

Exercise 5.16.1 (Solution on p. 223.)
Derive this value for the number of computations for the general difference equation (5.42).

5.17 Digital Signal Processing Problems

Problem 5.1: Sampling and Filtering
The signal s(t) is bandlimited to 4 kHz. We want to sample it, but it has been subjected to various signal processing manipulations.

a) What sampling frequency (if any works) can be used to sample the result of passing s(t) through an RC highpass filter with R = 10 kΩ and C = 8 nF?
b) What sampling frequency (if any works) can be used to sample the c) The signal show how and nd the smallest sampling rate that can be used.42/>. 223. shown in Figure 5.) Derive this value for the number of computations for the general dierence equation (5. show how the signal can be recovered from these samples. Find an expression for and sketch the spectrum of the sampled signal.17 Digital Signal Processing Problems Problem 5. with f0 = 8kHz and φ =? Can the modulated signal be sampled so that the original signal can be recovered from the modulated signal regardless of the phase value φ? If so.org/content/m10351/2. show why not. Filtering with a dierence equation is straightforward. and the number of computations that must be made for each output value is Exercise 5. DIGITAL SIGNAL PROCESSING the FFT algorithm for computing the Fourier transforms that enables the superiority of frequency-domain implementations.2: Non-Standard Sampling Using the properties of the Fourier series can ease nding a signal's spectrum. we need to detail both implementations to determine which will be more suitable for any given ltering problem. but it has been subjected to various signal processing manipulations. Because complexity considerations only express how algorithm running-time increases with system parameter choices.1: The signal s (t) 30 Sampling and Filtering is bandlimited to 4 kHz. a) Suppose a signal s (t) T . .1 2 (p + q). c) Suppose this signal is used to sample a signal bandlimited to 1 T Hz. (Solution on p. d) Does aliasing occur? If so. why not? .26 a) Find the Fourier spectrum of this signal.25 Problem 5.4: The signal s (t) Bandpass Sampling has the indicated spectrum.209 Pulse Signal p(t) A … ∆ ∆ T/2 … ∆ 3T/2 t T 2T ∆ ∆ –A Figure 5. He multiplies the bandlimited signal by the depicted periodic pulse signal to perform sampling (Figure 5.3: A Dierent Sampling Scheme A signal processing engineer from Texas A&M claims to have developed an improved sampling scheme. p(t) A … ∆ ∆ ∆ ∆ … t Ts 5Ts 4 Ts 4 Figure 5. TS be related to the signal's bandwidth? If not.26). how should Problem 5. b) Will this scheme work? If so. If we sample it at precisely the Nyquist rate. a) Let s (t) be a sinusoid having frequency W Hz. We can express the quantized sample the quantization error at the nth sample. Problem 5. W Hz. Show that this is indeed the case. . how large is maximum quantization error? d) We can describe the quantization error as noise. Assuming the converter rounds. DIGITAL SIGNAL PROCESSING S(f) –2W –W W 2W f Figure 5.6: Hardware Error An A/D converter has a curious hardware problem: Every other sampling pulse is half its normal amplitude (Figure 5. you do have to be careful how you do so. how accurately do the samples convey the sinusoid's amplitude? In other words.210 CHAPTER 5. b) How fast would you need to sample for the amplitude estimate to be within 5% of the true value? c) Another issue in sampling is the inherent amplitude quantization produced by A/D converters. one wonders whether a lower sampling rate could be used. where  (t) represents the maximum voltage allowed by the converter is b bits. In addition to the rms value of a signal. Assume Vmax volts and that it quantizes amplitudes to Q (s (nTs )) as s (nTs ) +  (t). While true in principle.5: s (t) from its samples. What is the signal-to-noise ratio of the quantization error for a full-range sinusoid? Express your result in decibels. with a power proportional to the square of the maximum error. 
an important aspect of a signal is its peak value. which equals max {|s (t) |}. we can sample it at any rate original signal can be extracted from the samples. nd the worst case example. and nd the system that reconstructs Problem 5. Sampling Signals 1 Ts > 2W and recover the waveform This statement of the Sampling Theorem can be taken to mean that all information about the If a signal is bandlimited to exactly.28).27 a) What is the minimum sampling rate for this signal suggested by the Sampling Theorem? b) Because of the particular structure of this spectrum. Note that the pulse sequence is backwards from the binary representation. but a simple circuit illustrates how they work.29) serves as the input to a rst-order RC lowpass lter. ∆ A 1 0 0 T 1 2T 1 3T 4T t Figure 5. Thus.28 a) Find the Fourier series for this signal. converter. We want to design the lter and the parameters ∆ and T so that the output voltage at time 4T (for a 4-bit converter) is proportional .29 This signal (Figure 5. Let's assume we have a B -bit converter.211 p(t) A …A 2 … ∆ ∆ ∆ ∆ 2T 3T ∆ t T 4T Figure 5. the number 13 has the binary representation 1101 (1310 For a 4-bit = 1 × 23 + 1 × 22 + 0 × 21 + 1 × 20 ) and would be represented by the depicted pulse sequence. we want to convert numbers having a into a voltage proportional to that number. the number by a sequence of B B -bit representation The rst step taken by our simple converter is to represent pulses occurring at multiples of a time interval T. b) Can this signal be used to sample a bandlimited signal having highest frequency Problem 5. The presence of a pulse indicates a 1 in the corresponding bit position. and pulse absence means a 0 occurred.7: W = 1 2T ? Simple D/A Converter Commercial digital-to-analog converters don't work this way. We'll see why that is. 0. . n = {0. .8: Discrete-Time Fourier Transforms Find the Fourier transforms of the following sequences. Show the circuit that works. DIGITAL SIGNAL PROCESSING to the number. . 1. . . where S ej2πf a) b) c) d)  s (n) is some sequence having Fourier transform . etc. . he samples TS = 2. samples. a) The discrete-time Fourier transform of b) The discrete-time Fourier transform of c) The discrete-time Fourier transform of   cos2 π n if n = {−1.9: Spectra of Finite-Duration Signals Find the indicated spectra for the following signals.10: Just Whistlin' Sammy loves to whistle and decides to record and analyze his whistling in lab. 0. n (−1) s (n) s (n) cos (2πf0 n)  s n  if n (even) 2 x (n) =  0 if n (odd) ns (n) Problem 5. This combination of pulse creation and ltering constitutes our simple D/A converter. so he grabs 30 consecutive. 3T should be twice that of a pulse produced at 2T . .212 CHAPTER 5.or over-sample his whistle? b) What is the discrete-time Fourier transform of c) How does the 32-point DFT of x (n) x (n) θ? depend on and how does it depend on θ? . his sa (t) = sin (4000t).5 × 10−4 to obtain s (n) = sa (nTS ). the voltage due to a pulse at which in turn is twice that of a pulse at • T. 7} 4 s (n) =  0 if otherwise d) The length-8 DFT of the previous signal. The requirements are • The voltage at time t = 4T should diminish by a factor of 2 the further the pulse occurs from this time. Problem 5. but arbitrarily chosen. 29} a) Did Sammy under. −1. 2} s (n) =  0 if otherwise   sin π n if n = {0. 1} 4 s (n) =  0 if otherwise   n if n = {−2. To analyze the spectrum. The 4-bit D/A converter must support a 10 kHz sampling rate. 
He calls this sequence x (n) and realizes he can write it as x (n) = sin (4000nTS + θ) . How do the converter's parameters change with sampling rate and number of bits in the converter? Problem 5. In other words. . He is a very good whistler. Sammy (wisely) whistle is a pure sinusoid that can be described by his recorded whistle with a sampling interval of decides to analyze a few samples at a time. a) Show that b) If h (n) x (n) = P denotes the i x (i) δ (n − i) δ (n) is the unit-sample. q} and zero otherwise. The key idea is that a sequence can be written as a weighted linear combination of unit samples. . . assume our lter is FIR. . 1 for n = {0.13: sin πn 4 ?  A Special Discrete-Time Filter Consider a FIR lter governed by the dierence equation y (n) = q + 1.30 a) What is the dierence equation that denes this lter's input-output relationship? b) What is this lter's transfer function? c) What is the lter's output when the input is Problem 5.213 Problem 5. c) In particular. If the what is the duration of the lter's output to this signal? 1 2 2 1 x (n + 2) + x (n + 1) + x (n) + x (n − 1) + x (n − 2) 3 3 3 3 a) Find this lter's unit-sample response.11: Discrete-Time Filtering We can nd the input-output relation for a discrete-time lter much more easily than for analog lters.30) unit-sample reponse. shift-invariant lter to a unit-sample inputnd an expression for the output.12: A Digital Filter A digital lter has the depicted (Figure 5.   1 if n = 0 δ (n) =  0 otherwise where unit-sample responsethe output of a discrete-time linear. q an d) Let the lter be a boxcar averager: be a pulse of unit height and Problem 5. h (n) = q+1 q+1 duration N . Let the input odd integer. Find the lter's output when N = 2 . h(n) 2 1 –1 0 1 2 3 4 n Figure 5. with the unit-sample response having duration input has duration N. . . 1.14: Simulating the Real World Much of physics is governed by dierntial equations. c) Suppose we take a sequence and stretch it out by a factor of three.   s x (n) =  0 Sketch the sequence x (n) n 3  if n = 3m . m = {. DIGITAL SIGNAL PROCESSING b) Find this lter's transfer function. we can approximate it. The digital lter described by the dierence equation y (n) = x (n) − x (n − 1) resembles the derivative formula.15: Derivatives The derivative of a sequence makes little sense.31)? x(n) 3 2 1 0 1 2 3 Figure 5. . but still. For example. −1. . what classic lter category does it fall into). what will be the simulated output? sinusoid. a) What is this lter's transfer function? b) What is the lter's output to the depicted triangle input (Figure 5. Characterize this transfer function (i. how should the sampling interval T be chosen so that the approximation works well? Problem 5. x (t) is a c) Assuming the unit step. . suppose we have the dierential equation dy (t) + ay (t) = x (t) dt and we approximate the derivative by d y (nT ) − y ((n − 1) T ) y (t) |t=nT ' dt T where T essentially amounts to a sampling interval. 0. We want to explore how well it works. .. . What is the lter's output to this input? particular. a) What is the dierence equation that must be solved to approximate the dierential equation? b) When x (t) = u (t). The idea is to replace the derivative with a discrete-time approximation and solve the resulting dierential equation. what is the output at the indices where the input x (n) is intentionally zero? In Now how would you characterize this system? Problem 5.31 4 n 5 6 .214 CHAPTER 5. } otherwise for some example s (n). .e. 
and we want to use signal processing methods to simulate physical problems. let the input be a boxcar signal and the unit-sample response also be a boxcar. Under what conditions will d y (n) be proportional to dt x (t) |t=nTs ? the lter act like a dierentiator? In other words. . Show that this approach is too simplistic. He will. N < K? Find the inverse DFT of the product of the DFTs of two length-3 boxcars. The result of part (b) would then be the lter's output if we could implement the lter with length-4 DFTs. use the FFT algorithm. c) If we could use DFTs to perform linear ltering.16: The DFT Let's explore the DFT and its properties.19: A Digital Filter A digital lter is described by the following dierence equation: 1 y (n) = ay (n − 1) + ax (n) − x (n − 1) . but he is behind schedule and needs to get his results as quickly as possible. 2N − 1}. where b) Consider the special case where K = 4. b) Does a Parseval's Theorem hold for the DCT? c) You choose to transmit information about the signal s (n) according to the DCT coecients. a = √ 2 You . says that retrieving the wanted DFTs is easy: Just nd the real and imaginary parts of S (k). when will Problem 5. an Aggie who knows some signal processing. . p.215 c) Suppose the signal x (n) is a sampled analog signal: x (n) = x (nTs ). which one would you send? Problem 5. a) What is the length-K DFT of length-N boxcar sequence. a) Find the inverse DCT. So that you can use what you just calculated.18: Discrete Cosine Transform (DCT) The discrete cosine transform of a length-N sequence is dened to be Sc (k) = N −1 X  s (n) cos n=0 Note that the number of frequency terms is 2πnk 2N  2N − 1: k = {0. where s1 (n) s2 (n) are two real-valued signals of which he needs to compute the spectra. the DFTs of the original signals? b) Sammy's friend. it should be true that the product of the input's DFT and the unit-sample response's DFT equals the output's DFT. He gets the idea of two transforms at one time by computing the transform of s (n) = s1 (n) + js2 (n). 215)? d) What would you need to change so that the product of the DFTs of the input and unit-sample response in this case equaled the DFT of the ltered output? Problem 5. c) While his friend's idea is not correct. or course. The issue is whether he can computing and retrieve the individual DFTs from the result or not. Does the actual output of the boxcar-lter equal the result found in the previous part (list. . it does give him an idea. What approach will work? Hint: Use the symmetry properties of the DFT. d) How does the number of computations change with this approach? Will Sammy's idea ultimately lead to a faster computation of the required DFTs? Problem 5. could only send one. .17: DSP Tricks Sammy is faced with computing lots of discrete Fourier transforms. a) What will be the DFT S (k) of this complex-valued signal in terms of S1 (k) and S2 (k). )? πn 2 . special purpose. a) What is the lter's transfer function? How would you characterize it? πn 2 ? c) What is the lter's output when the input is the depicted discrete-time square wave (Figure 5. the input sequence is zero for output to be y (n) = δ (n) + δ (n − 1). y (n) = y (n − 1) + x (n) − x (n − 4) a) Find this lter's unit sample response. n< sin  Can his measurement be correct? In other words. why not? Problem 5. highpass.21: Yet Another Digital Filter A lter has an input-output relationship given by the dierence equation y (n) = 1 1 1 x (n) + x (n − 1) + x (n − 2) 4 2 4 .216 CHAPTER 5. is there an input that can yield this output? 
If so.32)? b) What is the lter's output when the input equals cos  x(n) 1 … … n –1 Figure 5. nd the input x (n) that gives rise to this output. DIGITAL SIGNAL PROCESSING a) What is this lter's unit sample response? b) What is this lter's transfer function? c) What is this lter's output when the input is Problem 5. b) What is the lter's transfer function? How would you characterize this lter (lowpass. 0.20: sin πn 4 ?  Another Digital Filter A digital lter is determined by the following dierence equation. Sammy measures the c) Find the lter's output when the input is the sinusoid d) In another case.. then becomes nonzero. If not.. .32 . a) What is the lter's unit-sample response? b) What is the discrete-Fourier transform of the output? c) What is the time-domain expression for the output? Problem 5. −1.22: A Digital Filter in the Frequency Domain We have a lter with the transfer function  H ej2πf = e−(j2πf ) cos (2πf ) operating on the input signal x (n) = δ (n) − δ (n − 2) that yields the output y (n).25: The signal x (n) x (n) = cos πn 2  + 2sin 2πn ? 3  Detective Work equals δ (n) − δ (n − 1). show why not.217 Problem 5. .24: what is the input? Digital Filtering A digital lter has an input-output relationship expressed by the dierence equation y (n) = x (n) + x (n − 1) + x (n − 2) + x (n − 3) 4 . then b) What is this system's output when the input is c) If the output is observed to be Problem 5. b) Find the output when x (n) = cos (2πf0 n).  sin πn 2 ? y (n) = δ (n) + δ (n − 1). x (n) served as the input to a linear FIR (nite impulse response) lter. Problem 5. 0. if not. linear system produces an output x (n) equals a unit sample. } when its input . shift invariant. . the y (n) = δ (n) − δ (n − 1) + 2δ (n − 2). a) Find the length-8 DFT (discrete Fourier transform) of this signal.26: A discrete-time. 0. Is this statement true? If so. indicate why and nd b) You are told that when output was the system's unit sample response. c) How would you describe this system's function? y (n) = {1. a) Find the dierence equation governing the system. b) What is this lter's output when Problem 5.23: Digital Filters A discrete-time system is governed by the dierence equation y (n) = y (n − 1) + x (n) + x (n − 1) 2 a) Find the transfer function for this system. a) Plot the magnitude and phase of this lter's transfer function. . b) Assuming the sampling rate is fs to what analog frequency does f0 correspond? c) A more general approach is to design a lter having a frequency response the absolute value of a cosine:  |H ej2πf | ∝ |cos (πf N ) |. Does Sammy's result mean that Samantha's answer is wrong? c) The homework problem says to lowpass-lter the sequence by multiplying its DFT by   1 H (k) =  0 if k = {0. the signal and the accompanying hum have been sampled. her group partner Sammy says that he computed the inverse DFT of her answer and got δ (n + 1) + δ (n − 1).27: Time Reversal has Uses  H ej2πf . pass the result through an A/D converter.28: DIGITAL SIGNAL PROCESSING x (n) is passed through this system to is then passed through the system to yield the x (n) and y (n)? Removing Hum The slang word hum represents power line waveforms that creep into signals because of poor circuit construction. our clever engineer wants to design a digital AM receiver.30: DFTs A problem on Samantha's homework asks for the δ (n − 7). what is the output's signal-to-noise ratio? Problem 5. signal w (−n) A discrete-time system has transfer function w (n). perform all the demodulation with digital signal processing systems. 
What we seek are lters that can remove hum. Will this ltering algorithm work? If so. Assume in this problem that the carrier frequency is always a large the message signal's bandwidth even multiple of W. the 60 Hz signal (and its harmonics) are added to the desired signal. Select the parameter N and the sampling rate so that the frequencies at which the cosine equals zero correspond to 60 Hz and its odd harmonics through the fth. The receiver would bandpass the received signal. not only can the fundamental but also its rst few harmonics be removed. magnitude proportional to In this way. a) Find lter coecients for the length-3 FIR lter that can remove a sinusoid having f0 digital frequency from its input. why not? . if not.218 CHAPTER 5. Usually. a) What is the smallest sampling rate that would be needed? b) Show the block diagram of the least complex digital AM receiver. nd the ltered output. 1. and end with a D/A converter to produce the analog message signal. d) Find the dierence equation that denes this lter. 8-point DFT of the discrete-time signal δ (n − 1) + a) What answer should Samantha obtain? b) As a check. Problem 5. A signal yield the signal time-reversed time-reversed What is the transfer function between Problem 5. we want to design a digital lter for hum removal. Problem 5. 7} otherwise and then computing the inverse DFT. The output y (−n). In this problem. c) Assuming the channel adds white noise and that a b-bit A/D converter is used.29: Digital AM Receiver Thinking that digital implementations are always better. thus reconstituting an N -point block.34: Signal Compression Because of the slowness of the Internet. In other words. a) What is the block diagram for your lter implementation? Explicitly denote which components are analog.32: Echoes Echoes not only occur in canyons. b) What sampling rate must be used and how many bits must be used in the A/D converter for the acquired signal's signal-to-noise ratio to be at least 60 dB? For this calculation. but also in auditoriums and telephone circuits. First of all. a) Find the dierence equation of the system that models the production of echoes. How much data should be processed at once to produce an ecient algorithm? What length transform should be used? c) Is the analyst's information correct that FFT techniques produce more accurate averages than any others? Why or why not? Problem 5. In one situation where the echoed signal has been sampled. ELEC 241 students are asked to write the most ecient (quickest) x (n) is 1. the input signal x (n) emerges as x (n) + a1 x (n − n1 ) + a2 x (n − n2 ).31: Stock Market Data Processing Because a trading week lasts ve days. and n2 = 25. which are digital (a computer performs the task). assume the signal is a sinusoid. The receiver would assemble the transmitted spectrum and compute the inverse DFT. c) If the lter is a length-128 FIR lter (the duration of the lter's unit-sample response equals 128). what is the transfer function of your system? Problem 5. He b-bits. program that has the same input-output relationship. you must break the tie. b) To simulate this echo system. what system's output is the signal Problem 5. and send these over the network. Suppose the duration of a1 = Because of the undecided vote. . Which approach is more ecient and why? c) Find the transfer function and dierence equation of the system that suppresses the echoes. lossy signal compression becomes important if you want signals to be received quickly.33: x (n)? 
Digital Filtering of Analog Signals RU Electronics wants to develop a lter that would be used in analog applications. Half the class votes to just program the dierence equation 2 5 while the other half votes to program a frequency domain approach that exploits the speed of the FFT. a2 = . and compute its then would discard (zero the spectrum) at half of the frequencies. stock markets frequently compute running averages each day over the previous ve trading days to smooth price uctuations. The technical stock analyst at the Buy-LoSell-Hi brokerage rm has heard that FFT ltering techniques work better than any others (in terms of producing more accurate averages). and will serve as a lowpass lter. and which interface between analog and digital worlds. should it be implemented in the time or frequency domain? d) Assuming H ej2πf  is the transfer function of the digital lter. but that is implemented digitally. The lter is to operate on signals that have a 10 kHz bandwidth.219 Problem 5. with the echoed signal as the input. he would section the signal into length-N blocks. An enterprising 241 student has proposed a scheme based on frequency-domain processing. n1 = 10.000 and that 1 1 . a) What is the dierence equation governing the ve-day averager for daily stock prices? b) Design an ecient FFT-based ltering algorithm for the broker. quantize them to N -point DFT. would the proposed scheme yield satisfactory results? . DIGITAL SIGNAL PROCESSING a) At what frequencies should the spectrum be zeroed to minimize the error in this lossy compression scheme? b) The nominal way to represent a signal digitally is to use simple waveform. b-bit quantization of the time-domain How long should a section be in the proposed scheme so that the required number of bits/sample is smaller than that nominally required? c) Assuming that eective compression can be achieved.220 CHAPTER 5. For 32-bit oating point.221 Solutions to Exercises in Chapter 5 Solution to Exercise 5. the pulse duration has no signicant eect on recovering a signal from its samples.2. 176) 9863 10 2±(127) = 1.036.7 × 1038 (5.3. the largest number is we have 9. signed integers. and these samples would appear to have arisen from a lower frequency sinusoid. the largest (smallest) numbers are For 64-bit oating point.2. At the Nyquist frequency. . In oating point.4.647 and for b = 64. The only eect of pulse duration is to unequally weight the spectral repetitions. Solution to Exercise 5. 110012 + 1112 = 1000002 = 32.9 × 10−39 ). 172) For b = 32.223. Reducing the sampling rate would result in fewer samples/period.807 or about Solution to Exercise 5. Because we are only concerned with the repetition centered about the origin.3 (p.372.3. the negative frequency lines move to the left and the positive frequency ones to the right.2. As the square wave's period decreases.854. 9. the number of bits in the exponent determines the largest and smallest representable numbers. 173) 25 = 110112 and 7 = 1112 .2 (p. We nd that Solution to Exercise 5.1 (p. the high temperature's amplitude was quantized as a form of A/D conversion. The dashed lines correspond to the frequencies about which the spectral repetitions (due to sampling with Ts = 1) occur. Solution to Exercise 5. 176) The simplest bandlimited signal is the sine wave.483. the largest number is about Solution to Exercise 5. .3 (p.2 × 1018 . 176) f=1 f = –1 T=4 f T = 3. Thus. 171) For b-bit 2b−1 − 1. 
exactly two samples/period would occur.33 The square wave's spectrum is shown by the bolder set of lines centered about the origin.1 (p.2 (p.775.1 (p.3. we have 2. 177) The plotted temperatures were quantized to the nearest degree.5 f Figure 5.147. Solution to Exercise 5. 4 kbps. the spectrum of the samples equals the analog spectrum. The datarate is 11025 × 16 = 176. Solution to Exercise 5. but the complexity remains unchanged. 184) α N +n 0 −1 X αn − n=n0 N +n 0 −1 X αn = αN +n0 − αn0 n=n0 which.9. 191) The transform can have any greater than or equal to the actual duration of the signal. When only K frequencies are needed. Solution to Exercise 5.9.1 (p. 178) Solving 2−B = .6. 195) These numbers are powers-of-two. 189) O (KN ). Using the FFT. 188) When the signal is real-valued. a factor of 3 less than the DFT would have needed. Recall that the FFT is an algorithm to compute the DFT (Section 5.8  = P∞ = P∞ = n=−∞ dB.2 × 11025 = 13230. 194) The oscillations are due to the boxcar window's Fourier transform. after manipulation.3 (p.10.9.6.2 (p.56) = S e Solution to Exercise 5. s (n) e−(j2π(f +1)n) −(j2πn) s (n) e−(j2πf n) n=−∞ e P∞ −(j2πf n) n=−∞ s (n) e  j2πf (5. 1024.222 CHAPTER 5. quantization interval ∆= Solution to Exercise 5. which equals the sinc function.4. To use the Cooley-Tukey algorithm. the complexity is Solution to Exercise 5.001 results in B = 10 bits. A].4. Solution to Exercise 5. which demands retaining all frequency values.1 (p. we simply zero-pad the signal. 186) If the sampling frequency exceeds the Nyquist frequency. The storage bytes. Solution to Exercise 5. the length of the resulting zero-padded signal can be 512. Thus.2 (p.1 (p. a 1ms computing time would increase by a factor of about 10log2 10 = 33. 191) The upper panel has not used the FFT algorithm to compute the length-4 DFTs while the lower one has. If a DFT required 1ms to compute. Solution to Exercise 5. 2B 2 The signal-to-noise ratio does not depend on the signal amplitude. Solution to Exercise 5.4 (p. Solution to Exercise 5.10.7). Solution to Exercise 5. the complexity is again the same. Extending the length of the signal this way merely means we are sampling the frequency axis more nely than required. Solution to Exercise 5.1 (p.10. 192) Number of samples equals required would be 26460 1. etc.2 (p. but over the normalized analog frequency original signal's energy multiplied by fT. and signal having ten times the duration would require 100ms to compute.1 (p. We simply pad the signal with zero-valued samples until a computationally advantageous signal length results. To compute a longer transform than the input signal's duration.8. samples long. 178) A 16-bit A/D converter yields a SNR of Solution to Exercise 5. If the data are complex-valued.3 (p. we may only need half the spectral values. 181) S ej2π(f +1) 6 × 16 + 10log1. yields the geometric sum formula.5 = 97.3 (p.4. . 178) With an A/D range of [−A. the A 2A and the signal's rms value (again assuming it is a sinusoid) is √ .7. DIGITAL SIGNAL PROCESSING Solution to Exercise 5. 187) This situation amounts to aliasing in the time-domain. and the FFT algorithm can be exploited with these lengths. The ordering is determined by the algorithm.2 (p.6. the energy in the sampled signal equals the T.3 (p. but digital ones can be programmed.1 (p. 200) It now acts like a bandpass lter with a center frequency of f0 and a bandwidth equal to twice of the original lowpass lter. we split the input into log2 (Nx + q) + 7 Nqx + 6 per input in the section. To x it. 
Doing so would require the output to emerge before the input arrives! Solution to Exercise 5. Solution to Exercise 5. we must start the signals later in the array.1 (p. 208) p+q+1 2 (p + q).11. Solution to Exercise 5. the transform length must equal or exceed the signal's duration.16. Solution to Exercise 5. All lters have phase shifts.14.2 (p. this quantity decreases as Nx increases.6) of a length-q boxcar lter. Solution to Exercise 5.1 (p. 201) L In sampling a discrete-time signal's Fourier transform times equally over [0. 207) The delay is not computational delay herethe plot shows the rst output value is aligned with the lter's rst inputalthough in real systems this is an important consideration. S (k) ↔ ∞ X s (n − iL) (5.3 (p. the lter's phase shift:   cos 2πf n − φ 2πf  Rather. Solution to Exercise 5. 195) In discrete-time signal processing. The time-domain implementation requires a total of computations. Thus.13. Solution to Exercise 5. the delay is due to A phase-shifted sinusoid is equivalent to a time-delayed one: cos (2πf n − φ) = . Thus. it stays constant. and this condition is not allowed in MATLAB. such terms can cause diculties.14. the corresponding signal equals the periodic repetition of the original signal. 197) The indices can be negative.1 (p. This delay could be removed if the lter introduced no phase shift. Thus the statement is correct. denote the input's total duration.4 (p. 205) Let N and the signal's Nx .58) bm δ (n − m) (5.2 (p.223 Solution to Exercise 5. the total number of arithmetic operations .12. 198) Such terms would require the system to know what future input or output values would be before the current value was computed.14.1 (p. 2π) to form the DFT.15.59) m=0 The unit-sample response equals h (n) = q X m=0 which corresponds to the representation described in a problem (Example 5. N Nx sections. We have equals multiplications and p+q−1 additions.12. a very easy operation to perform. Solution to Exercise 5. or 2q + 1 computations per input value. Solution to Exercise 5. an amplier amounts to a multiplication.2 (p. For the time-domain implementation.57) i=−∞ To avoid aliasing (in the time domain). Such lters do not exist in analog form. Because we divide again by Nx to nd the number of computations per input value in the entire input. but not in real time.15. 202) The unit-sample response's duration is q+1 Solution to Exercise 5. 201) The DTFT of the unit sample equals a constant (equaling 1). 201) The dierence equation for an FIR lter has the form q X y (n) = bm x (n − m) (5.14.1 (p. the Fourier transform of the output equals the transfer function. Thus. each of which requires  1+ q Nx  N (2q + 1) In the frequency domain. 224 CHAPTER 5. DIGITAL SIGNAL PROCESSING . some old like AM radio. broadcast communication. and to what extent can the receiver tolerate errors in the received information? 2. What is the nature of the information source. Fundamentally digital signals. or from many to many. Shannon showed that once in this form. The audio compact disc (CD) and the digital videodisk (DVD) are now considered digital communications systems. and computer les consist of a sequence of bytes.lucent. http://www.org/content/m0513/2. the development of cheap. they also aect the information content. like computer networks. signals express information. we know how to convert analog a properly engineered system can communicate digital information with no error despite the fact that the communication channel thrusts noise onto all transmissions. 
a form of "discrete-time" signal despite the fact that the index sequences byte position. communications) and electromagnetic radiation ( 1 2 This content is available online at <http://cnx. Information comes neatly packaged in both analog and digital forms. signals into digital ones. 1. place to another. with communication design considerations used throughout. What are the channel's characteristics and how do they aect the transmitted signal? In short. like a telephone conference call or a chat room. amplitude-quantized signals. The answer is to use a digital commu- In most cases.1 Information Communication 1 As far as a communications engineer is concerned. digital as well as analog transmission are accomplished using analog signals.3: Fundamental model of communication). is clearly an analog signal. but to transmit it from one point-to-point communication. and the creation of high-bandwidth communication systems. like voltages in Ethernet (an example of wireless) in cellular telephone. for example. not time Communication systems endeavor not to manipulate information. what are we going to send and how are we going to send it? Interestingly. Because of the Sampling Theorem. like computer les (which are a special case of symbolic signals). so-called systems can be fundamentally analog. Communication sample. Speech. Because systems manipulate signals. some new like computer networks. This startling result has no counterpart in analog systems. Go back to the fundamental model of communication (Figure 1. The question as to which 2 fundamental is better.8/>.Chapter 6 Information Communication 6. from one place to many others. AM radio will remain noisy.com/minds/infotheory/ 225 wireline . The convergence of these theoretical and engineering results on communi- cations systems has had important consequences in other arenas. Communications design begins with two fundamental considerations. you should convert all information-bearing signals into discrete-time. are in the proper form. because of Claude Shannon's work on a theory of information published in 1948. high-performance computers. nication strategy. has been answered. We describe and analyze several such systems. or digital. like radio. This chapter develops a common theory that underlies how such systems work. analog or digital communication. which are also governed by Maxwell's equations. Maxwell's equations neatly summarize the physics of all electromagnetic phenomena. In contrast to wireline channels where the receiver takes in only the transmitter's signal. This content is available online at <http://cnx. The equations as written here are simpler versions that apply to free-space propagation and conduction in metals. and optic ber transmission. Perhaps the most important aspect of them is that they are linear with respect to the electrical and magnetic elds. Some wireline channels operate in broadcast modes: one or more transmitter is connected to several receivers. Wireless channels are much more public. σ ρ is the charge density. We are not going to solve Maxwell's equations here. wireline channels are more private and much less prone to interference. ∇×E =− ∂ (µH) ∂t (6. Consequently.org/content/m0099/2. One simple example of this situation is cable television.226 CHAPTER 6. do bear in mind that a fundamental understanding of communications channels ultimately depends on uency with Maxwell's equations. 6. A noisier channel subject to interference compromises the exibility of wireless communication. Thus. 
while the frowny face says that interference and noise are much more prevalent than in wireline situations. coaxial cable or optic ber. the channel is one of several wires connecting transmitter to receiver. radio. and for circuits. Nonlinear media are becoming increasingly important in optic ber communications. 6. Wireline channels physically connect transmitter to receiver with a "wire" which could be a twisted pair. Computer networks can be found that operate in point-to-point or in broadcast modes. letting receiver electronics select wanted signals and disregarding others. This content is available online at <http://cnx.1) div (E) = ρ ∇ × H = σE + ∂ (E) ∂t div (µH) = 0 where E H the magnetic eld.29/>. thereby allowing portable transmission and reception. Kircho 's Laws represent special cases of these equations is the electric eld. Listening in on a conversation requires that the wire be tapped and the voltage measured. the receiver's antenna will react to electromagnetic radiation coming from any source.3 Wireline Channels 4 Wireline channels were the rst used for electrical communications in the mid-nineteenth century for the telegraph. You will hear the term note: tetherless networking applied to completely wireless computer networks. µ magnetic permeability. with a transmitter's antenna radiating a signal that can be received by any antenna suciently close enough. electrical conductivity.org/content/m0100/2. including cir- cuits. the elds (and therefore the voltages and currents) resulting from two or more sources will note: Nonlinear electromagnetic media do exist.  dielectric permittivity. 3 4 Here. channels connect a single transmitter to a single receiver: a Simple wireline point-to-point connection as with the telephone.2 Types of Communication Channels Electrical communications channels are either INFORMATION COMMUNICATION 3 wireline or wireless channels. add.13/>. The transmitter . This feature has two faces: The smiley face says that a receiver can take in transmissions from any source. . In the case of single-wire communications. Single-wire metallic channels cannot support high-quality signal transmission having a bandwidth beyond a few hundred Hertz over any appreciable distance.and noise-free. In fact. wherein the wires are wrapped about each other. is what Ethernet uses as its channel.2 (Circuit Model for a Transmission Line) for an innitesimally small length. In either case. wireline channels form a dedicated circuit between transmitter and receiver. fondly called "co-ax" by engineers. Telephone cables are one example of a coaxial cable. and nds use in cable television and Ethernet. Coaxial cable. which all have the circuit model shown in Figure 6. How these pairs of wires are physically congured greatly aects their transmission characteristics.µd Figure 6. by the time signals arrive at the receiver.1 (Coaxial Cable Cross-section). Another is a dielectric material in between. Both twisted pair and co-ax are examples of transmission lines. These information-carrying circuits are designed so that interference from nearby electromagnetic sources is minimized. ground for the reference node in circuits originated in single-wire You can imagine that the earth's electrical characteristics are highly variable. One example is twisted pair. and they are. most wireline channels today essentially consist of pairs of conducting wires Figure 6. 
As we shall nd subsequently.1: dielectric ri central conductor outer conductor Coaxial cable consists of one conductor wrapped around the central conductor.εd. Coaxial Cable Cross-section insulation σ σ rd σd. where a concentric conductor surrounds a central wire with twisted pair channel. and the transmitter applies a message-related voltage across the pair.227 simply creates a voltage related to the message signal and applies it to the wire(s). We must have a circuit a closed paththat supports current ow. several transmissions can share the circuit by amplitude modulation techniques. they are relatively interference. the term telegraphs. the earth is used as the current's return path. This type of cable supports broader bandwidth signals than twisted pair. Consequently. commercial cable TV is an example. This circuit model arises from solving Maxwell's equations for the particular transmission line geometry. Thus. σd )   ln rrdi   ∼ µd rd ln L= 2π ri ∼ G= For twisted pair. the element values depend the outer radius of the dielectric dielectric constant d .228 CHAPTER 6. For coaxial cable. R ri . this notation represents that element values here have per-unit-length units. ∼ For example. Note that all the circuit elements have values expressed by the product of a constant times a length. The inductance and the capacitance derive from transmission line geometry. Element values depend on geometry and the properties of materials used to construct the transmission line. t) and i (x.3) π arccosh d 2r  πσ arccosh d 2r  δ + arccosh 2r  d 2r  The voltage between the two conductors and the current owing through them will depend on distance along the transmission line as well as time. and the parallel conductance from the medium between the wire pair.2: I(x+∆x) V(x+∆x) … – The so-called distributed parameter model for two-wire cables has the depicted circuit model structure. the series resistance on the inner conductor's radius σ. INFORMATION COMMUNICATION Circuit Model for a Transmission Line I(x–∆x) + I(x) ˜ R∆x ˜ L∆x ˜ G∆x … V(x–∆x) + ˜ C∆x ˜ R∆x + ˜ L∆x ˜ G∆x V(x) ˜ C∆x – – Figure 6. ∼ R= rd . having a separation r d between the conductors that have conductivity σ and common radius and that are immersed in a medium having dielectric and magnetic properties. The series resistance comes from the conductor used in the wires and from the conductor's geometry. the conductivity of the conductors and magnetic permittivity 1 2πδσ ∼ C=  1 1 + rd ri µd of the dielectric as  (6. and the conductivity σd . x When we place a sinusoidal source at one end of the transmission line.2) 2πd   ln rrdi 2 (π. these voltages and currents will also be sinusoidal because the transmission line model consists of linear circuit elements. has units of ohms/meter. As is customary in analyzing linear . t). the element values are then ∼ R= ∼ C= ∼ G= µ L= π ∼  1 πrδσ (6. We express this dependence as v (x.  ∼  ∼ d I (x) = − G +j2πf C V (x) dx (6. because must segregate the solution for negative and positive unless V+ = 0 x. we nd from KCL.229 circuits. d2 dx2 V (x) = γ 2 V+ e−(γx) + V− eγx  (6. t) = Using the transmission line circuit model. V+ e(−(a+jb))x + V− e(a+jb)x The voltage cannot increase without limit.11) . The quantities V+ and V− are constants determined by the source and physical considerations. ∼ ∼  ∼ ∼ d2 V (x) = +j2πf +j2πf V (x) G C R L dx2 (6.2 (Circuit Model for a Transmission Line). 
in this region.4) V-I relation for RL series ∼ ∼ V (x) − V (x + ∆ (x)) = I (x) R +j2πf L ∆ (x) Rearranging and taking the limit ∆ (x) → 0 yields the so-called (6. KVL. and v-i relations the equations governing the complex amplitudes. Expressing γ in terms of its real and imaginary parts in our solution shows that such increases are a (mathematical) possibility. we can obtain a single equation that governs how the voltage's or the current's complex amplitude changes with position along the transmission line.   V e(−(a+jb))x if x > 0 + V (x) =  V− e(a+jb)x if x < 0 x<0 These physical constraints give us a (6. Taking the derivative of the second equation and plugging the rst equation into the result yields the equation governing the voltage. physically possible solutions for voltage amplitude cannot increase with distance along the transmission line.8) This equation's solution is Calculating its second derivative and comparing the result with our equation for the voltage can check this solution. v (x.9) = γ 2 V (x) γ satises  r  ∼ ∼  ∼ ∼ = ± G +j2πf C R +j2πf L Our solution works so long as the quantity γ (6. we express voltages and currents as the real part of complex exponential signals. we The rst term will increase exponentially for V− for x > 0. t) = Re I (x) ej2πf t  . a similar result applies to cleaner solution. a (f ) V (x) = is always positive. let the spatial origin be the middle of the transmission line model Figure 6. KCL at Center Node ∼ ∼ I (x) = I (x − ∆ (x)) − V (x) G +j2πf C ∆ (x) (6. and write circuit variables as a complex amplitudehere dependent on distancetimes a complex exponential: Re V (x) ej2πf t  and i (x.6)  ∼  ∼ d V (x) = − R +j2πf L I (x) dx By combining these equations. For example.5) transmission line equations.10) = ± (a (f ) + jb (f )) Thus. and we express it in terms of real and imaginary parts as indicated. γ depends on frequency. Because the circuit model contains simple circuit elements.7) V (x) = V+ e−(γx) + V− eγx (6. Considering the spatial region x > 0. We denote this propagation 2πf | ∼ ∼  ∼ ∼ G +j2πf C R +j2πf L ∼ j2πf LR ∼ and (6. One period of this variation. the voltage appeared to move to the right with a speed equal to speed by c.) Find the propagation speed in terms of physical parameters for both the coaxial cable and twisted pair examples. also known as the attenuation constant. the quantity under the radical simplies and we nd the propagation speed to be 1 limit c = q (6. 294. L.  2πf b (assuming b > 0). we can solve for the current's complex amplitude.230 CHAPTER 6. we would also see a sinusoidal voltage. t) = Re V+ e−(ax) ej(2πf t−bx) The complex exponential portion has the form of a the voltage (take its picture at t = t1 ). t= If we could take a snapshot of we would see a sinusoidally varying waveform along the transmission wavelength. It equals the reciprocal of manufacturers in units of dB/m.16) . known as the picture at some later time propagating wave. f 2 . we nd that  ∼ ∼  d V (x) = − (γV (x)) = − R +j2πf L I (x) dx which means that the ratio of voltage and current complex amplitudes does not depend on distance.3. b (f ). Exercise 6. equals λ = 2π b . INFORMATION COMMUNICATION exponentially along a transmission space constant.12)   2πf (t2 − t1 ) 2πf t2 − bx = 2πf (t1 + t2 − t1 ) − bx = 2πf t1 − b x − b the second waveform appears to be the rst one.6). for example.15) = Z0 The quantity Z0 is known as the transmission line's characteristic impedance. Because line. which depends on frequency. 
and it equals c=| Im r ∼ In the high-frequency region where to ∼ ∼ −4 π 2 . a (f ). which means the transmission line appears resistive in this high-frequency regime. we know that the voltage's complex amplitude will is proportional to The complete solution for the voltage has the form   v (x. (6. By using the second of the transmission line equation (6.1 (Solution on p. this propagation speed is a fraction (one-third to two-thirds) of the speed of light. V (x) I(x) r∼ ∼ R+j2πf L ∼ G+j2πf C = ∼ (6. is the distance over which the voltage This solution suggests that voltages (and currents too) will decrease line. also provides insight into how transmission lines work. e−(jbx) .13) ∼ j2πf C G. but delayedshifted to the rightin space. C .14) ∼∼ f →∞ LC For typical coaxial cable. and is expressed by γ . If we were to take a second t2 . the characteristic impedance is real. Note that when the signal frequency is suciently high. v u∼ uL limit Z0 = t ∼ f →∞ C (6. Thus. The 1 e . decreases by a factor of The presence of the imaginary part of Because the solution for x>0 vary sinusoidally in space. Note that in the high-frequency regime that the space constant is approximately zero.4 Wireless Channels 5 Wireless channels exploit the prediction made by Maxwell's equation that electromagnetic elds propagate in free space like light. In this situation. An antenna radiates a given amount of power into free space. and it propagates down a cylinder of glass. having spectral energy at low frequencies.) What is the limiting value of the space constant in the high frequency regime? 6. In wireline communication. Considering a sphere centered at the transmitter. which is found by integrating the radiated power over the surface of the sphere.3. From the encompassing view of Maxwell's equations. which means the attenuation is quite small.15/>. a 1 MHz electromagnetic eld has a wavelength of 300 m. Here. Because most information signals are baseband signals. physical connectiona circuitbetween transmitter and receiver. we use transmission lines for high-frequency wireline signal communication. In general terms. Optic ber communication has exactly the same properties as other transmission lines: Signal strength decays exponentially according to the ber's space constant and propagates at some speed less than light would in free space. and ideally this power propagates without loss in all directions.3.3 (Solution on p. it creates an electromagnetic eld that propagates in all directions (although antenna geometry aects how much power ows in any given direction) that induces electric currents in the receiver's antenna. the total power. the lower the frequency the bigger the antenna must be. the dominant factor is the relation of the antenna's size to the eld's wavelength. To summarize.) From tables of physical constants. Because no electric conductors are present and the ber is protected by an opaque insulator. they must be modulated to higher frequencies to be transmitted over wireless channels. the electromagnetic eld is light. For example. 294. Consequently. must be constant regardless of the sphere's radius. we have a direct. optic ber transmission is interference-free. Thus. if 5 This content is available online at <http://cnx. Antenna geometry determines how energetic a eld a voltage of a given frequency creates.org/content/m0101/2. 294. When a voltage is applied to an antenna. 
The fundamental equation relating frequency and wavelength for a propagating wave is λf = c Thus.231 Typical values for characteristic impedance are 50 and 75 Ω. Compare this frequency with that of a mid-frequency cable television signal.2 (Solution on p. how the signal diminishes as the receiver moves further from the transmitter derives by considering how radiated power changes with distance from the transmitting antenna. nd the frequency of a sinusoid in the middle of the visible light range. For most antenna-based wireless systems. When we select the transmission line characteristics and the transmission frequency so that we operate in the highfrequency regime. Transmitted signal amplitude does decay exponentially along the transmission line. the only dierence is the electromagnetic signal's frequency. This requirement results from the conservation of energy. A related transmission line is the optic ber. wavelength and frequency are inversely related: High frequency corresponds to small wavelengths. Antennas having a size or distance from the ground comparable to the wavelength radiate elds most eciently. Exercise 6. Exercise 6. p (d) represents the power . we don't have two conductorsin fact we have noneand the energy is propagating in what corresponds to the dielectric material of the coaxial cable. signals are not ltered as they propagate along the transmission line: The characteristic impedance is real-valuedthe tranmission line's equivalent impedance is a resistorand all the signal's components at various frequencies propagate at the same speed. 14/>. integrated with respect to direction at a distance d INFORMATION COMMUNICATION from the antenna. but also one antenna may not "see" another because of the earth's curvature. c = = √1 µ0 0 3 × 108 m/s (6. it sets an upper limit on how fast signals can propagate from one place to another. In wireless channels. not only does radiation loss occur (p. Losses in wireline channels are explored in the Circuit Models module (Section 6.4. the further from the transmitter the receiver is located. For this quantity to be a constant.17) Thus. 6 This content is available online at <http://cnx.18) Known familiarly as the speed of light. AR = for some value of the constant k. . where repeaters can extend the distance between transmitter and receiver beyond what passive losses the wireline channel imposes.5 Line-of-Sight Transmission 6 Long-distance transmission over either kind of channel encounters attenuation problems. this delay would be two to three times longer because of the slower propagation speed.232 CHAPTER 6. Exercise 6.) Why don't signals attenuate according to the inverse-square law in a conductor? What is the dierence between the wireline and wireless cases? The speed of propagation is governed by the dielectric constant µ0 and magnetic permeability 0 of free space.org/content/m0538/2. 6.1 (Solution on p. If a lossless (zero space constant) coaxial cable connected the East and West coasts. Because signals travel at a nite speed. we must have p (d) ∝ which means that the received signal amplitude AR 1 d2 must be proportional to the transmitter's amplitude AT and inversely related to distance from the transmitter. the inverse-distance attenuation found in wireless channels persists across all frequencies.3). a signal travels across the United States in 16 ms. 231). kAT d (6. a receiver senses a transmitted signal only after a time delay directly related to the propagation speed: ∆ (t) = d c At the speed of light. 
the total power will be p (d) 4πd2 . the weaker the received signal. a reasonably small time delay. Whereas the attenuation found in wireline channels can be controlled by physical parameters and choice of transmission frequency. 294. org/content/m0539/2. a mathematical physicist with strong engineering interests.1 (6. networks of antennas sprinkle the countryside (each located on the highest hill possible) to provide long-distance wireless communications: Each antenna receives energy from one antenna and retransmits to another.233 dLOS }h earth R Figure 6. 295.5. what happens to wavelength when carrier frequency decreases? Using a 100 m antenna would provide line-of-sight transmission over a distance of 71. the inventor of wireless telegraphy. . boldly tried such long distance communication without any evidence  either empirical or theoretical  that it was possible. Assuming both antennas have height h above the earth's surface.4 km.38 × 106 m). Line-of-sight transmission means the transmitting and receiving antennae can "see" each other as shown.5.3: Two antennae are shown each having the same height. What is the range of cellular telephone where the handset antenna has essentially zero height? Exercise 6.) Derive the expression of line-of-sight distance using only the Pythagorean Theorem.) Can you imagine a situation wherein global wireless communication is possible with only one transmitting antenna? In particular. At the turn of the century. This kind of network is known as a relay network. The maximum distance at which they can see each other. like ship-to-shore communication. dLOS . 6. long distance wireless communication. maximum line-of-sight distance is p √ dLOS = 2 2hR + h2 ' 2 2Rh where R is the earth's radius ( Exercise 6. Consequently.6 The Ionosphere and Communications 7 If we were limited to line-of-sight communications.2 (Solution on p. but only at night. Line-of-sight communication has the transmitter and receiver antennas in visual contact with each other. would be impossible. of course). who hypothesized that an invisible electromagnetic "mirror" surrounded the earth. occurs when the sighting line just grazes the earth's surface. propagating electromagnetic energy does not follow the earth's surface.19) 6. (Solution on p. When the experiment worked. Generalize it to the case where the antennas have dierent heights (as is the case with commercial radio and cellular telephone).10/>. physicists scrambled to determine why (using Maxwell's equations. 7 This content is available online at <http://cnx. Using such very tall antennas would provide wireless communication within a town or between closely spaced population centers. Marconi. It was Oliver Heaviside. 294. At the usual radio frequencies. subject to interference and noise. Of great importance in satellite communications is the transmission delay.org/content/m0540/2. 231). It's time to be more precise about what these quantities are and how they dier.20) Calculations yield R = 42200km. 295.7. Interference represents man-made signals. where the time for one revolution about the equator exactly matches the earth's rotation time of one day. The problem with such interference is that it occupies the same frequency band as the desired communication signal.7 Communication with Satellites 8 Global wireless communication relies on satellites.17/>. Cellular telephone channels are subject to adjacent-cell phone conversations using the same signal frequency.24 s.org/content/m0515/2. M 3 R as GM T 2 4π 2 the earth's mass. 
INFORMATION COMMUNICATION What he meant was that at optical frequencies (and others as it turned out). but becomes a mirror at night when solar radiation diminishes.8. This content is available online at <http://cnx.10/>.000 km when we substitute minimum and maximum ionospheric altitudes. Exercise 6. Telephone lines are subject to power-line interference (in the United States a distorted 60 Hz sinusoid). a plasma that encompasses the earth at altitudes hi between 80 and 180 km that reacts to solar radiation: It becomes transparent at Marconi's frequencies during the day. √ The communication delay encountered with a single reection in this channel is 2 2Rhi +hi 2 . but at the frequencies Marconi used.1 (Solution on p. Exercise 6. for transatlantic communication. The time for electromagnetic elds to propagate to a geosynchronous satellite and return is 0. at least two reections would be required.234 CHAPTER 6. Satellites will move across the sky unless they are in geosynchronous orbits. TV satellites would require the homeowner to continually adjust his or her antenna if the satellite weren't in geosynchronous orbit. which ranges between 2. . which ranges c between 6.) In addition to delay. Calculate the attenuation incurred by radiation going to the satellite (one-way loss) with that encountered by Marconi (total going up and down). assuming the ionosphere acts like a perfect mirror. requiring satellite transmitters to use frequencies that pass through it. again a small time interval. (6.8 and 10 ms. 6. a signicant delay. The maximum distance along the earth's surface that can be reached by a single ionospheric reection is 2Rarccos  R R+hi  . Note that the attenuation calculation in the ionospheric case. the mirror was transparent. Here. This distance does not span the United States or cross the Atlantic. ground stations transmit to orbiting satellites that amplify the signal and retransmit it back to earth. is not a straightforward application of the propagation loss formula (p.8 Noise and Interference 9 We have mentioned that communications are. how would the receiver remove it? 8 9 This content is available online at <http://cnx. which This altitude greatly exceeds that of the ionosphere.1 (Solution on p.) Suppose interference occupied a dierent frequency band. 6. the propagation attenuation encountered in satellite communication far exceeds what occurs in ionospheric-mirror based communication. to varying degrees.010 and 3. it reected electromagnetic radiation back to earth. and has a similar structure. 295. Newton's equations applied to orbiting bodies predict that the time T for one orbit is related to distance from the earth's center r R= where G is the gravitational constant and corresponds to an altitude of 35700km. He had predicted the existence of the ionosphere. f2 ] f2 =2 Ps (f ) df (6. Because signals must have negative frequency components that mirror positive frequency ones. Satellite channels are subject to deep space noise arising from electromagnetic radiation pervasive in the galaxy. the power spectrum of the (6. and we need a way of describing such signals despite the fact we can't write a formula for the noise white noise. we can write an explicit expression for it that may contain some unknown aspects (how large it is. (|H (f ) |) 6. The signal propagates through the channel at a speed equal to or less than the speed of light. and its value at any frequency is unrelated to the phase at any other frequency. 
6.8 Noise and Interference
[This content is available online at <http://cnx.org/content/m0515/2.17/>.]

We have mentioned that communications are, to varying degrees, subject to interference and noise. It's time to be more precise about what these quantities are and how they differ.

Interference represents man-made signals. Telephone lines are subject to power-line interference (in the United States a distorted 60 Hz sinusoid). Cellular telephone channels are subject to adjacent-cell phone conversations using the same signal frequency. The problem with such interference is that it occupies the same frequency band as the desired communication signal, and has a similar structure.

Exercise 6.8.1 (Solution on p. 295.)
Suppose interference occupied a different frequency band; how would the receiver remove it?

Noise signals have little structure and arise from both human and natural sources. Satellite channels are subject to deep space noise arising from electromagnetic radiation pervasive in the galaxy. Thermal noise plagues all electronic circuits that contain resistors. Thus, in receiving small amplitude signals, receiver amplifiers will most certainly add noise as they boost the signal's amplitude. Because interference has man-made structure, we can write an explicit expression for it that may contain some unknown aspects (how large it is, for example); we use the notation i(t) to represent interference. We can't write a formula for a noise signal like we can for interference, and we need a way of describing such signals despite that fact. We use the notation n(t) to represent a noise signal's waveform, and we define noise in terms of its power spectrum.

Because of the emphasis here on frequency-domain power, we are led to define the power spectrum Ps(f) of a non-noise signal s(t) to be the magnitude-squared of its Fourier transform:

    Ps(f) ≡ |S(f)|²    (6.21)

Because of Parseval's Theorem ["Parseval's Theorem", (1) <http://cnx.org/content/m0047/latest/#parseval>], integrating the power spectrum over any range of frequencies equals the power the signal contains in that band. Because signals must have negative frequency components that mirror positive frequency ones, we routinely calculate the power in a spectral band as the integral over positive frequencies multiplied by two:

    Power in [f1, f2] = 2 ∫ from f1 to f2 of Ps(f) df    (6.22)

The most widely used noise model is white noise. It is defined entirely by its frequency-domain characteristics.
• White noise has constant power at all frequencies.
• At each frequency, the phase of the noise spectrum is totally uncertain: It can be any value in between 0 and 2π, and its value at any frequency is unrelated to the phase at any other frequency.
• When noise signals arising from two different sources add, the resultant noise signal has a power equal to the sum of the component powers.

For white noise, the power spectrum equals the constant N0/2. With this definition, the power in a frequency band [f1, f2] equals N0(f2 − f1).

When we pass a signal through a linear, time-invariant system, the output's spectrum equals the product (p. 142) of the system's frequency response and the input's spectrum. Thus, the power spectrum of the system's output is given by

    Py(f) = |H(f)|² Px(f)    (6.23)

This result applies to noise signals as well. When we pass white noise through a filter, the output is also a noise signal but with power spectrum |H(f)|² N0/2.
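Equations (6.22) and (6.23) together say that the power emerging from an ideal unity-gain lowpass filter driven by white noise is N0 · W. A small numerical check in Python (ours; the values of N0 and W are arbitrary illustrations):

```python
import numpy as np

N0 = 1e-8                                   # white-noise power spectrum is N0/2
W = 4e3                                     # lowpass filter bandwidth (Hz)

f = np.linspace(-50e3, 50e3, 100001)        # frequency axis
Px = (N0 / 2) * np.ones_like(f)             # white noise: constant power spectrum
H = (np.abs(f) <= W).astype(float)          # ideal unity-gain lowpass filter

Py = np.abs(H)**2 * Px                      # equation (6.23)
print(np.trapz(Py, f), N0 * W)              # both about 4e-5: output power = N0*W
```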
6.9 Channel Models
[This content is available online at <http://cnx.org/content/m0516/2.11/>.]

Both wireline and wireless channels share characteristics, allowing us to use a common model for how the channel affects transmitted signals.
• The transmitted signal is usually not filtered by the channel.
• The signal can be attenuated.
• The signal propagates through the channel at a speed equal to or less than the speed of light, which means that the channel delays the transmission.
• The channel may introduce additive interference and/or noise.

Letting α represent the attenuation introduced by the channel, the receiver's input signal is related to the transmitted one by

    r(t) = αx(t − τ) + i(t) + n(t)    (6.24)

This expression corresponds to the system model for the channel shown in Figure 6.4. Adding the interference and noise is justified by the linearity property of Maxwell's equations. In this book, we shall assume that the noise is white.

Figure 6.4: The channel component of the fundamental model of communication (Figure 1.3: Fundamental model of communication) has the depicted form: the transmitted signal x(t) passes through a delay τ and an attenuation α, after which the interference i(t) and the noise n(t) add to produce r(t).

Exercise 6.9.1 (Solution on p. 295.)
Is this model for the channel linear?

As expected, the signal that emerges from the channel is corrupted, but does contain the transmitted signal. Communication system design begins with detailing the channel model, then developing the transmitter and receiver that best compensate for the channel's corrupting behavior. We characterize the channel's quality by the signal-to-interference ratio (SIR) and the signal-to-noise ratio (SNR). The ratios are computed according to the relative power of each within the transmitted signal's bandwidth. Assuming the signal x(t)'s spectrum spans the frequency interval [fl, fu], these ratios can be expressed in terms of power spectra:

    SIR = (2α² ∫ from 0 to ∞ of Px(f) df) / (2 ∫ from fl to fu of Pi(f) df)    (6.25)

    SNR = (2α² ∫ from 0 to ∞ of Px(f) df) / (N0 (fu − fl))    (6.26)

In most cases, the interference and noise powers do not vary for a given receiver. Variations in signal-to-interference and signal-to-noise ratios arise from the attenuation because of transmitter-to-receiver distance variations.
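A discrete-time simulation of the channel model (6.24) can be written in a few lines. This Python sketch is ours, not the text's: the function name, the parameter values, and the omission of the interference term are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def channel(x, alpha, tau, fs, noise_std):
    """Discrete-time version of r(t) = alpha*x(t - tau) + n(t); the
    interference term i(t) of (6.24) is omitted here for brevity."""
    shift = int(round(tau * fs))                      # delay in samples
    delayed = np.concatenate((np.zeros(shift), x))[:len(x)]
    return alpha * delayed + noise_std * rng.standard_normal(len(x))

fs = 8000.0                                           # sampling rate (Hz)
t = np.arange(0, 0.1, 1 / fs)
x = np.sin(2 * np.pi * 440 * t)                       # a test transmission
r = channel(x, alpha=0.1, tau=0.005, fs=fs, noise_std=0.02)
```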
6.10 Baseband Communication
[This content is available online at <http://cnx.org/content/m0517/2.19/>.]

Point of Interest: We use analog communication techniques for analog message signals, like music, speech, and television. Transmission and reception of analog signals using analog results in an inherently noisy received signal (assuming the channel adds noise, which it almost certainly does).

The simplest form of analog communication is baseband communication. Here, the transmitted signal equals the message times a transmitter gain:

    x(t) = Gm(t)    (6.27)

An example, which is somewhat out of date, is the wireline telephone system. You don't use baseband communication in wireless systems simply because low-frequency signals do not radiate well. The receiver in a baseband system can't do much more than filter the received signal to remove out-of-band noise (interference is small in wireline channels). Assuming the signal occupies a bandwidth of W Hz (the signal's spectrum extends from zero to W), the receiver applies a lowpass filter having the same bandwidth, as shown in Figure 6.5.

Figure 6.5: The receiver for baseband communication systems is quite simple: a lowpass filter of bandwidth W, the same bandwidth as the signal, producing m̂(t) from r(t).

We use the signal-to-noise ratio of the receiver's output m̂(t) to evaluate any analog-message communication system. Assume that the channel introduces an attenuation α and white noise of spectral height N0/2. The filter does not affect the signal component (we assume its gain is unity) but does filter the noise, removing frequency components above W Hz. In the filter's output, the received signal power equals α²G²·power(m) and the noise power equals N0W, which gives a signal-to-noise ratio of

    SNR_baseband = α²G²·power(m) / (N0 W)    (6.28)

The signal power power(m) will be proportional to the bandwidth W; thus, in baseband communication the signal-to-noise ratio varies only with transmitter gain and channel attenuation and noise level.
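For concreteness, here is (6.28) evaluated in Python for one illustrative operating point (all parameter values below are our assumptions, chosen only to show the arithmetic):

```python
import math

alpha = 1e-2          # channel attenuation
G = 100.0             # transmitter gain
power_m = 0.5         # message power, power(m)
N0, W = 1e-10, 4e3    # noise spectral height and signal bandwidth (Hz)

snr = alpha**2 * G**2 * power_m / (N0 * W)          # equation (6.28)
print(f"{snr:.3g}, or {10 * math.log10(snr):.1f} dB")
```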
6.11 Modulated Communication
[This content is available online at <http://cnx.org/content/m0518/2.26/>.]

Especially for wireless channels, like commercial radio and television, but also for wireline systems like cable television, an analog message signal must be modulated: The transmitted signal's spectrum occurs at much higher frequencies than those occupied by the signal.

The key idea of modulation is to affect the amplitude, frequency, or phase of what is known as the carrier sinusoid. Frequency modulation (FM) and less frequently used phase modulation (PM) are not discussed here; we focus on amplitude modulation (AM). The amplitude modulated message signal has the form

    x(t) = Ac (1 + m(t)) cos(2πfc t)    (6.29)

where fc is the carrier frequency and Ac the carrier amplitude. Also, the message signal's amplitude is assumed to be less than one: |m(t)| < 1. From our previous exposure to amplitude modulation (see the Fourier Transform example (Example 4.5)), we know that the transmitted signal's spectrum occupies the frequency range [fc − W, fc + W], assuming the signal's bandwidth is W Hz (see the figure (Figure 6.6)). The carrier frequency is usually much larger than the signal's highest frequency: fc ≫ W, which means that the transmitter antenna and carrier frequency are chosen jointly during the design process.

Ignoring the attenuation and noise introduced by the channel for the moment, reception of an amplitude modulated signal is quite easy (see Problem 4.20). The so-called coherent receiver multiplies the input signal by a sinusoid and lowpass-filters the result (Figure 6.6):

    m̂(t) = LPF(x(t) cos(2πfc t)) = LPF(Ac (1 + m(t)) cos²(2πfc t))    (6.30)

Because of our trigonometric identities, we know that

    cos²(2πfc t) = (1/2)(1 + cos(2π·2fc t))    (6.31)

At this point, the message signal is multiplied by a constant and by a sinusoid at twice the carrier frequency. Multiplication by the constant term returns the message signal to baseband (where we want it to be!) while multiplication by the double-frequency term yields a very high frequency signal. The lowpass filter removes this high-frequency signal, leaving only the baseband signal. Thus, the received signal is

    m̂(t) = Ac (1 + m(t)) / 2    (6.32)

Exercise 6.11.1 (Solution on p. 295.)
This derivation relies solely on the time domain; derive the same result in the frequency domain. You won't need the trigonometric identity with this approach.

Because it is so easy to remove the constant term by electrical means (we insert a capacitor in series with the receiver's output), we typically ignore it and concentrate on the signal portion of the receiver's output when calculating signal-to-noise ratio.

Figure 6.6: The AM coherent receiver, along with the spectra of key signals, is shown for the case of a triangular-shaped signal spectrum: r(t) passes through a bandpass filter (center frequency fc, bandwidth 2W), is multiplied by cos(2πfc t), and is lowpass filtered (bandwidth W) to yield m̂(t). The dashed line indicates the white noise level. Note that the filters' characteristics, cutoff frequency and center frequency for the bandpass filter, must be matched to the modulation and message parameters.
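The coherent receiver of (6.30) through (6.32) is straightforward to simulate. A Python sketch follows (ours; it assumes scipy is available, and the sample rate, carrier, and message below are illustrative choices, with a Butterworth filter standing in for the ideal lowpass filter):

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs, fc, W = 200e3, 20e3, 1e3          # sample rate, carrier, message bandwidth
t = np.arange(0, 0.02, 1 / fs)
m = 0.5 * np.sin(2 * np.pi * 400 * t)           # message with |m(t)| < 1
x = (1 + m) * np.cos(2 * np.pi * fc * t)        # AM signal (6.29), with Ac = 1

v = x * np.cos(2 * np.pi * fc * t)              # multiply by the carrier, (6.30)
b, a = butter(4, W / (fs / 2))                  # lowpass filter of bandwidth W
m_hat = filtfilt(b, a, v)                       # approximately (1 + m(t))/2, (6.32)
```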
6.12 Signal-to-Noise Ratio of an Amplitude-Modulated Signal
[This content is available online at <http://cnx.org/content/m0541/2.18/>.]

When we consider the much more realistic situation in which a channel introduces attenuation and noise, we can make use of the just-described receiver's linear nature to directly derive the receiver's output. The attenuation affects the output in the same way as the transmitted signal: It scales the output signal by the same amount. The white noise, on the other hand, should be filtered from the received signal before demodulation. We must thus insert a bandpass filter having bandwidth 2W and center frequency fc: This filter has no effect on the received signal-related component, but does remove out-of-band noise power. As shown in the triangular-shaped signal spectrum (Figure 6.6), we apply the coherent receiver to this filtered signal, with the result that the demodulated output contains noise that cannot be removed: It lies in the same spectral band as the signal.

As we derive the signal-to-noise ratio in the demodulated signal, let's also calculate the signal-to-noise ratio of the bandpass filter's output r̃(t). The signal component of r̃(t) equals αAc m(t) cos(2πfc t). This signal's Fourier transform equals

    (αAc/2) (M(f + fc) + M(f − fc))    (6.33)

making the power spectrum

    (α²Ac²/4) (|M(f + fc)|² + |M(f − fc)|²)    (6.34)

Exercise 6.12.1 (Solution on p. 295.)
If you calculate the magnitude-squared of the first equation, you don't obtain the second unless you make an assumption. What is it?

Thus, the total signal-related power in r̃(t) equals (α²Ac²/2)·power(m). The noise power equals the integral of the noise power spectrum; because the power spectrum is constant over the transmission band, this integral equals the noise amplitude N0/2 times the filter's bandwidth 2W. The so-called received signal-to-noise ratio, the signal-to-noise ratio after the de rigueur front-end bandpass filter and before demodulation, equals

    SNR_r = α²Ac²·power(m) / (4N0W)    (6.35)

The demodulated signal is m̂(t) = αAc m(t)/2 + nout(t). Clearly, the signal power equals α²Ac²·power(m)/4. To determine the noise power, we must understand how the coherent demodulator affects the bandpass noise found in r̃(t). Because we are concerned with noise, we must deal with the power spectrum since we don't have the Fourier transform available to us. Letting P(f) denote the power spectrum of r̃(t)'s noise component, the power spectrum after multiplication by the carrier has the form

    (P(f + fc) + P(f − fc)) / 4    (6.36)

The delay and advance in frequency indicated here result in two spectral noise bands falling in the low-frequency region of the lowpass filter's passband. Thus, the total noise power in this filter's output equals 2 · (N0/2) · W · 2 · (1/4) = N0W/2. The signal-to-noise ratio of the receiver's output thus equals

    SNR_m̂ = α²Ac²·power(m) / (2N0W) = 2·SNR_r    (6.37)

Better performance, as measured by the SNR, occurs as it increases. Let's break down the components of this signal-to-noise ratio to better appreciate how the channel and the transmitter parameters affect communications performance.
• More transmitter power, increasing Ac, increases the signal-to-noise ratio proportionally.
• The carrier frequency fc has no effect on SNR, but we have assumed that fc ≫ W.
• The signal bandwidth W enters the signal-to-noise expression in two places: implicitly through the signal power and explicitly in the expression's denominator. If the signal spectrum had a constant amplitude as we increased the bandwidth, signal power would increase proportionally. On the other hand, our transmitter enforced the criterion that signal amplitude was constant. Signal amplitude essentially equals the integral of the magnitude of the signal's spectrum. (note: This result isn't exact, but we do know that m(0) = ∫ from −∞ to ∞ of M(f) df.) Enforcing the signal amplitude specification means that as the signal's bandwidth increases we must decrease the spectral amplitude, with the result that the signal power remains constant. Thus, increasing signal bandwidth does indeed decrease the signal-to-noise ratio of the receiver's output.
• Increasing channel attenuation, moving the receiver farther from the transmitter, decreases the signal-to-noise ratio as the square: signal-to-noise ratio decreases as distance-squared between transmitter and receiver.
• Noise added by the channel adversely affects the signal-to-noise ratio.

In summary, amplitude modulation provides an effective means for sending a bandlimited signal from one place to another. For wireline channels, using baseband or amplitude modulation makes little difference in terms of signal-to-noise ratio. For wireless channels, amplitude modulation is the only alternative. The one AM parameter that does not affect signal-to-noise ratio is the carrier frequency fc: We can choose any value we want so long as the transmitter and receiver use the same value. However, suppose someone else wants to use AM and chooses the same carrier frequency. The two resulting transmissions will add, and both receivers will produce the sum of the two signals. What we clearly need to do is talk to the other party and agree to use separate carrier frequencies. As more and more users wish to use radio, we need a forum for agreeing on carrier frequencies and on signal bandwidth. On earth, this forum is the government. In the United States, the Federal Communications Commission (FCC) strictly controls the use of the electromagnetic spectrum for communications. Separate frequency bands are allocated for commercial AM, FM, cellular telephone (the analog version of which is AM), short wave (also AM), and satellite communications.

Exercise 6.12.2 (Solution on p. 295.)
Suppose all users agree to use the same signal bandwidth. How closely can the carrier frequencies be spaced while avoiding communications crosstalk? What is the signal bandwidth for commercial AM? How does this bandwidth compare to the speech bandwidth?
6.13 Digital Communication
[This content is available online at <http://cnx.org/content/m0519/2.14/>.]

Effective, error-free transmission of a sequence of bits, a bit stream {b(0), b(1), ...}, is the goal here. We found that analog schemes, as represented by amplitude modulation, always yield a received signal containing noise as well as the message signal when the channel adds noise. Digital communication schemes are very different. Once we decide how to represent bits by analog signals that can be transmitted over wireline (like a computer network) or wireless (like digital cellular telephone) channels, we will then develop a way of tacking on communication bits to the message bits that will reduce channel-induced errors greatly. In theory, digital communication errors can be zero, even though the channel adds noise!

We represent a bit by associating one of two specific analog signals with the bit's value. Thus, if b(n) = 0, we transmit the signal s0(t); if b(n) = 1, send s1(t). These two signals comprise the signal set for digital communication and are designed with the channel and bit stream in mind. In virtually every case, these signals have a finite duration T common to both signals; this duration is known as the bit interval. Exactly what signals we use ultimately affects how well the bits can be received. Interestingly, baseband and modulated signal sets can yield the same performance. Other considerations determine how signal set choice affects digital communication performance. As in all communication systems, we design transmitter and receiver together.

Exercise 6.13.1 (Solution on p. 295.)
What is the expression for the signal arising from a digital transmitter sending the bit stream b(n), n = {..., −1, 0, 1, ...}, using the signal set s0(t), s1(t), each signal of which has duration T?

6.14 Binary Phase Shift Keying
[This content is available online at <http://cnx.org/content/m10280/2.14/>.]

A commonly used example of a signal set consists of pulses that are negatives of each other (Figure 6.7):

    s0(t) = A pT(t)
    s1(t) = −A pT(t)    (6.38)

Figure 6.7: The baseband signal set: s0(t) is a pulse of amplitude A lasting from 0 to T, and s1(t) is its negative. Here, we have a baseband signal set suitable for wireline transmission.

The entire bit stream b(n) is represented by a sequence of these signals. Mathematically, the transmitted signal has the form

    x(t) = Σn (−1)^b(n) A pT(t − nT)    (6.39)

and graphically Figure 6.8 shows what a typical transmitted signal might be.

Figure 6.8: The upper plot shows a baseband signal set transmitting the bit sequence 0110; the lower one shows an amplitude-modulated variant suitable for wireless channels.

This way of representing a bit stream, where changing the bit changes the sign of the transmitted signal, is known as binary phase shift keying and abbreviated BPSK. The name comes from concisely expressing this popular way of communicating digital information. The word "binary" is clear enough (one binary-valued quantity is transmitted during a bit interval). Changing the sign of a sinusoid amounts to changing (shifting) the phase by π (although we don't have a sinusoid yet). The word "keying" reflects back to the first electrical communication system, which happened to be digital as well: the telegraph.

The datarate R of a digital communication system is how frequently an information bit is transmitted. In this example it equals the reciprocal of the bit interval: R = 1/T. Thus, for a 1 Mbps (megabit per second) transmission, we must have T = 1 µs.

The choice of signals to represent bit values is arbitrary to some degree. We could also have made the negative-amplitude pulse represent a 0 and the positive one a 1. This choice is indeed arbitrary and will have no effect on performance assuming the receiver knows which signal represents which bit. Clearly, we do not want to choose signal set members to be the same; we couldn't distinguish bits if we did so.

A simple signal set for both wireless and wireline channels amounts to amplitude modulating a baseband signal set (more appropriate for a wireline channel) by a carrier having a frequency harmonic with the bit interval:

    s0(t) = A pT(t) sin(2πkt/T)
    s1(t) = −A pT(t) sin(2πkt/T)    (6.40)

Figure 6.9: The modulated signal set: s0(t) and s1(t) are opposite-sign bursts of a sinusoid, each lasting one bit interval T.

Exercise 6.14.1 (Solution on p. 295.)
What is the value of k in this example?

This signal set is also known as a BPSK signal set. We'll show later that indeed both signal sets provide identical performance levels when the signal-to-noise ratios are equal.
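Generating the baseband BPSK waveform of (6.39) takes only a few lines of Python (a sketch of ours; the amplitude, bit interval, sample rate, and bit pattern are illustrative assumptions):

```python
import numpy as np

A, T, fs = 1.0, 1e-3, 100e3           # amplitude, bit interval, sampling rate
bits = [0, 1, 1, 0]                   # the bit stream b(n)
ns = int(T * fs)                      # samples per bit interval

# Equation (6.39): x(t) = sum_n (-1)^b(n) * A * p_T(t - nT)
x = np.concatenate([((-1) ** b) * A * np.ones(ns) for b in bits])
```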
What is the transmission bandwidth of these signal sets? We need only consider the baseband version, as the second is an amplitude-modulated version of the first. The bandwidth is determined by the bit sequence. If the bit sequence is constant (always 0 or always 1), the transmitted signal is a constant, which has zero bandwidth. The worst-case, bandwidth-consuming, bit sequence is the alternating one shown in Figure 6.10. In this case, the transmitted signal is a square wave having a period of 2T.

Figure 6.10: Here we show the transmitted waveform corresponding to an alternating bit sequence: a square wave of period 2T.

From our work in Fourier series, we know that this signal's spectrum contains odd-harmonics of the fundamental, which here equals 1/(2T). Thus, strictly speaking, the signal's bandwidth is infinite. In practical terms, we use the 90%-power bandwidth to assess the effective range of frequencies consumed by the signal. The first and third harmonics contain that fraction of the total power, meaning that the effective bandwidth of our baseband signal is 3/(2T) or, expressing this quantity in terms of the datarate, 3R/2. Thus, a digital communications signal requires more bandwidth than the datarate: a 1 Mbps baseband system requires a bandwidth of at least 1.5 MHz. Listen carefully when someone describes the transmission bandwidth of digital communication systems: Did they say "megabits" or "megahertz"?

Exercise 6.14.2 (Solution on p. 295.)
Write a formula, in the style of the baseband signal set ["Signal Sets", Figure 2 <http://cnx.org/content/m0542/latest/#g1001>], for the transmitted signal that emerges when we use this modulated signal.

Exercise 6.14.3 (Solution on p. 295.)
Show that indeed the first and third harmonics contain 90% of the transmitted power. If the receiver uses a front-end filter of bandwidth 3/(2T), what is the total harmonic distortion of the received signal?

Exercise 6.14.4 (Solution on p. 295.)
What is the 90% transmission bandwidth of the modulated signal set?
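The 90% figure is easy to confirm numerically from the Fourier series of a square wave, whose k-th harmonic (k odd) has amplitude 4A/(πk). The following Python check is ours and only illustrates the claim; the full argument is left to the exercise above.

```python
import math

# Square wave of amplitude A and period 2T: the k-th harmonic (k odd) has
# amplitude 4A/(pi*k), hence power (1/2)*(4A/(pi*k))**2; the total power is A**2.
A = 1.0
power = lambda k: 0.5 * (4 * A / (math.pi * k)) ** 2

print((power(1) + power(3)) / A**2)   # about 0.901: ~90% in the first and third harmonics
```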
6.15 Frequency Shift Keying
[This content is available online at <http://cnx.org/content/m0545/2.12/>.]

In frequency-shift keying (FSK), the bit affects the frequency of a carrier sinusoid:

    s0(t) = A pT(t) sin(2πf0 t)
    s1(t) = A pT(t) sin(2πf1 t)    (6.41)

Figure 6.11: The FSK signal set: s0(t) and s1(t) are bursts of sinusoids having frequencies f0 and f1, respectively, each lasting one bit interval T.

The frequencies f0, f1 are usually harmonically related to the bit interval. In the depicted example, f0 = 3/T and f1 = 4/T. As can be seen from the transmitted signal for our example bit stream (Figure 6.12), the transitions at bit interval boundaries are smoother than those of BPSK.

Figure 6.12: This plot shows the FSK waveform for the same bitstream used in the BPSK example (Figure 6.8).

To determine the bandwidth required by this signal set, we again consider the alternating bit stream. Think of it as two signals added together: the first comprised of the signal s0(t), the zero signal, s0(t), zero, etc., and the second having the same structure but interleaved with the first and containing s1(t) (Figure 6.13).

Figure 6.13: The depicted decomposition of the FSK-modulated alternating bit stream into its frequency components simplifies the calculation of its bandwidth.

Each component can be thought of as a fixed-frequency sinusoid multiplied by a square wave of period 2T that alternates between one and zero. This baseband square wave has the same Fourier spectrum as our BPSK example, but with the addition of the constant term c0. This quantity's presence changes the number of Fourier series terms required for the 90% bandwidth: Now we need only include the zero and first harmonics to achieve it. The bandwidth thus equals, with f0 < f1,

    (f1 + 1/(2T)) − (f0 − 1/(2T)) = f1 − f0 + 1/T

If the two frequencies are harmonics of the bit-interval duration, f0 = k0/T and f1 = k1/T with k1 > k0, the bandwidth equals (k1 − k0 + 1)/T. If the difference between harmonic numbers is 1, then the FSK bandwidth is smaller than the BPSK bandwidth. If the difference is 2, the bandwidths are equal, and larger differences produce a transmission bandwidth larger than that resulting from using a BPSK signal set.

6.16 Digital Communication Receivers
[This content is available online at <http://cnx.org/content/m0520/2.18/>.]

The receiver interested in the transmitted bit stream must perform two tasks when the received waveform r(t) begins.
• It must determine when bit boundaries occur: The receiver needs to synchronize with the transmitted signal. Because transmitter and receiver are designed in concert, both use the same value for the bit interval T. Synchronization can occur because the transmitter begins sending with a reference bit sequence, known as the preamble. This reference bit sequence is usually the alternating sequence as shown in the square wave example ["Transmission Bandwidth", Figure 1 <http://cnx.org/content/m0544/latest/#g1003>] and in the FSK example (Figure 6.13). The receiver knows what the preamble bit sequence is and uses it to determine when bit boundaries occur. This procedure amounts to what is known in digital hardware as self-clocking signaling: The receiver of a bit stream must derive the clock (when bit boundaries occur) from its input signal. Because the receiver usually does not determine which bit was sent until synchronization occurs, it does not know when during the preamble it obtained synchronization. The transmitter signals the end of the preamble by switching to a second bit sequence. The second preamble phase informs the receiver that data bits are about to come and that the preamble is almost over.
• Once synchronized and data bits are transmitted, the receiver must then determine every T seconds what bit was transmitted during the previous bit interval. We focus on this aspect of the digital receiver because this strategy is also used in synchronization.
The receiver for digital communication is known as a matched filter. This receiver, shown in Figure 6.14 (Optimal receiver structure), multiplies the received signal by each of the possible members of the transmitter signal set, integrates the product over the bit interval, and compares the results. Whichever path through the receiver yields the largest value corresponds to the receiver's decision as to what bit was sent during the previous bit interval. For the next bit interval, the multiplication and integration begins again, with the next bit decision made at the end of the bit interval. Mathematically, the received value of b(n), which we label b̂(n), is given by

    b̂(n) = argmax_i ∫ from nT to (n+1)T of r(t) si(t) dt    (6.42)

You may not have seen the argmax notation before: argmax_i {·} yields the value of the index i that maximizes its argument.

Figure 6.14 (Optimal receiver structure): The optimal receiver structure for digital communication faced with additive white noise channels is the depicted matched filter: the received signal r(t) is multiplied by each candidate signal si(t − nT), each product is integrated over the bit interval, and the path yielding the largest output determines the bit.

Let's assume a perfect channel for the moment: The received signal equals the transmitted one. If bit 0 were sent using the baseband BPSK signal set, the integrator outputs would be

    ∫ from nT to (n+1)T of r(t) s0(t) dt = A²T
    ∫ from nT to (n+1)T of r(t) s1(t) dt = −A²T    (6.43)

If bit 1 were sent,

    ∫ from nT to (n+1)T of r(t) s0(t) dt = −A²T
    ∫ from nT to (n+1)T of r(t) s1(t) dt = A²T    (6.44)

Note that the precise numerical value of the integrator's output does not matter; what does matter is its value relative to the other integrator's output.
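A discrete-time version of the matched filter decision (6.42) is short enough to show in full. This Python sketch is our illustration (the function name and parameter values are assumptions):

```python
import numpy as np

def matched_filter_bit(r, s0, s1, dt):
    """One bit decision, equation (6.42): correlate the received samples with
    each member of the signal set and pick the index of the larger output."""
    y0 = np.sum(r * s0) * dt
    y1 = np.sum(r * s1) * dt
    return 0 if y0 >= y1 else 1

A, T, n = 1.0, 1e-3, 100              # amplitude, bit interval, samples per bit
dt = T / n
s0 = A * np.ones(n)                   # baseband BPSK signal set
s1 = -s0
print(matched_filter_bit(s0, s0, s1, dt))   # 0: the outputs are +A^2*T and -A^2*T
```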
Exercise 6.16.1 (Solution on p. 296.)
Can you develop a receiver for BPSK signal sets that requires only one multiplier-integrator combination?

Exercise 6.16.2 (Solution on p. 296.)
What is the corresponding result when the amplitude-modulated BPSK signal set is used?

Clearly, this receiver would always choose the bit correctly. Channel attenuation would not affect this correctness; it would only make the values smaller, but all that matters is which is largest.

6.17 Digital Communication in the Presence of Noise
[This content is available online at <http://cnx.org/content/m0546/2.15/>.]

When we incorporate additive noise into our channel model, so that r(t) = α si(t) + n(t), errors can creep in. If the transmitter sent bit 0 using a BPSK signal set (Section 6.14), the integrators' outputs in the matched filter receiver (Figure 6.14: Optimal receiver structure) would be:

    ∫ from nT to (n+1)T of r(t) s0(t) dt = αA²T + ∫ from nT to (n+1)T of n(t) s0(t) dt
    ∫ from nT to (n+1)T of r(t) s1(t) dt = −αA²T + ∫ from nT to (n+1)T of n(t) s1(t) dt    (6.45)

It is the quantities containing the noise terms that cause errors in the receiver's decision-making process. Because they involve noise, the values of these integrals are random quantities drawn from some probability distribution that vary erratically from bit interval to bit interval. Because the noise has zero average value and has an equal amount of power in all frequency bands, the values of the integrals will hover about zero. What is important is how much they vary. If the noise is such that its integral term is more negative than −αA²T, then the receiver will make an error, deciding that the transmitted zero-valued bit was indeed a one. The probability that this situation occurs depends on three factors:

• Signal Set Choice: The difference between the signal-dependent terms in the integrators' outputs (equations (6.45)) defines how large the noise term must be for an incorrect receiver decision to result. What affects the probability of such errors occurring is the energy in the difference of the received signals in comparison to the noise term's variability. The signal-difference energy equals

    ∫ from 0 to T of (s1(t) − s0(t))² dt

For our BPSK baseband signal set, the difference-signal-energy term is 4α²A⁴T².
• Variability of the Noise Term: We quantify variability by the spectral height N0/2 of the white noise added by the channel.
• Probability Distribution of the Noise Term: The value of the noise terms relative to the signal terms and the probability of their occurrence directly affect the likelihood that a receiver error will occur. For the white noise we have been considering, the underlying distributions are Gaussian.

Deriving the following expression for the probability the receiver makes an error on any bit transmission is complicated but can be found here ["Detection of Signals in Noise" <http://cnx.org/content/m16253/latest/>] and here ["Continuous-Time Detection Theory" <http://cnx.org/content/m11406/latest/>]:

    pe = Q(√( (∫ from 0 to T of (s1(t) − s0(t))² dt) / (2N0) ))
       = Q(√(2α²A²T / N0)) for the BPSK case    (6.46)

Here Q(·) is the integral Q(x) = (1/√(2π)) ∫ from x to ∞ of e^(−α²/2) dα. This integral has no closed form expression, but it can be accurately computed.
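The error mechanism of (6.45) can be seen directly by simulation: integrate r(t)·s0(t) over many noisy bit intervals and count how often the result goes negative. This Python sketch is ours; the per-sample noise standard deviation below is our discrete-time stand-in for white noise of spectral height N0/2, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

alpha, A, T, n, N0 = 0.5, 1.0, 1e-3, 100, 1e-4
dt = T / n
sigma = np.sqrt(N0 / (2 * dt))        # sampled white noise of spectral height N0/2
s0 = A * np.ones(n)

trials = 50000
noise = sigma * rng.standard_normal((trials, n))
y = np.sum((alpha * s0 + noise) * s0, axis=1) * dt   # integral of r(t)*s0(t), bit 0 sent
print(np.mean(y < 0))   # about 0.013, matching Q(sqrt(2*alpha^2*A^2*T/N0)) in (6.46)
```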
As Figure 6.15 illustrates, Q(·) is a decreasing, very nonlinear function.

Figure 6.15: The function Q(x) is plotted in semilogarithmic coordinates. Note that it decreases very rapidly for small increases in its argument. For example, when x increases from 4 to 5, Q(x) decreases by a factor of 100.

The term A²T equals the energy expended by the transmitter in sending the bit; we label this term Eb. We arrive at a concise expression for the probability the matched filter receiver makes a bit-reception error:

    pe = Q(√(2α²Eb / N0))    (6.47)

Figure 6.16 shows how the receiver's error rate varies with the signal-to-noise ratio α²Eb/N0.

Figure 6.16: The probability that the matched-filter receiver makes an error on any bit transmission is plotted against the signal-to-noise ratio of the received signal. The upper curve shows the performance of the FSK signal set, the lower (and therefore better) one the BPSK signal set.

Exercise 6.17.1 (Solution on p. 296.)
Derive the expression for the probability of error that would result if the FSK signal set were used.

6.18 Digital Communication System Properties
[This content is available online at <http://cnx.org/content/m10282/2.9/>.]

Results from the Receiver Error module (Section 6.17) reveal several properties about digital communication systems.
• As the received signal becomes increasingly noisy, whether due to increased distance from the transmitter (smaller α) or to increased noise in the channel (larger N0), the probability the receiver makes an error approaches 1/2. In such situations, the receiver performs only slightly better than the "receiver" that ignores what was transmitted and merely guesses what bit was transmitted. Consequently, it becomes almost impossible to communicate information when digital channels become noisy.
• As the signal-to-noise ratio increases, performance gains (smaller probability of error pe) can be easily obtained. At a signal-to-noise ratio of 12 dB, the probability the receiver makes an error equals 10⁻⁸. In words, one out of one hundred million bits will, on the average, be in error. A 1 dB improvement in signal-to-noise ratio can result in a factor of 10 smaller pe.
• Signal set choice can make a significant difference in performance. All BPSK signal sets, baseband or modulated, yield the same performance for the same bit energy. The BPSK signal set does perform much better than the FSK signal set once the signal-to-noise ratio exceeds about 5 dB.

Exercise 6.18.1 (Solution on p. 296.)
Derive the probability of error expression for the modulated BPSK signal set, and show that its performance identically equals that of the baseband BPSK signal set.

The matched-filter receiver provides impressive performance once adequate signal-to-noise ratios occur. You might wonder whether another receiver might be better. The answer is that the matched-filter receiver is optimal: No other receiver can provide a smaller probability of error than the matched filter regardless of the SNR. Furthermore, no signal set can provide better performance than the BPSK signal set, where the signal representing a bit is the negative of the signal representing the other bit. The reason for this result rests in the dependence of the probability of error pe on the difference between the noise-free integrator outputs: For a given Eb, no other signal set provides a greater difference.

How small should the error probability be? Out of N transmitted bits, on the average N·pe bits will be received in error. Do note the phrase "on the average" here: Errors occur randomly because of the noise introduced by the channel, and we can only predict the probability of occurrence. Since bits are transmitted at a rate R, errors occur at an average frequency of R·pe. Suppose the error probability is an impressively small number like 10⁻⁶. Data on a computer network like Ethernet is transmitted at a rate R = 100 Mbps, which means that errors would occur roughly 100 per second. This error rate is very high, requiring a much smaller pe to achieve a more acceptable average occurrence rate for errors. Because Ethernet is a wireline channel, the channel noise is small and the attenuation low, which means obtaining very small error probabilities is not difficult. We do have some tricks up our sleeves, however, that can essentially reduce the error rate to zero without resorting to expending a large amount of energy at the transmitter. We need to understand digital channels (Section 6.19) and Shannon's Noisy Channel Coding Theorem (Section 6.30).
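Two of the numbers quoted above are quick to verify. A Python check (ours; it assumes scipy is available for the complementary error function):

```python
import numpy as np
from scipy.special import erfc

def Q(x):
    """The Gaussian tail integral used in the error-probability expressions."""
    return 0.5 * erfc(x / np.sqrt(2))

snr = 10 ** (12 / 10)                   # signal-to-noise ratio alpha^2*Eb/N0 = 12 dB
print(Q(np.sqrt(2 * snr)))              # about 1e-8, the value quoted in the text

R, pe = 100e6, 1e-6                     # Ethernet-like rate and error probability
print(R * pe)                           # 100.0 bit errors per second on average
```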
6.19 Digital Channels
[This content is available online at <http://cnx.org/content/m0102/2.14/>.]

Let's review how digital communication systems work within the Fundamental Model of Communication (Figure 1.3: Fundamental model of communication). As shown in Figure 6.17 (DigMC), the message is a single bit. The entire analog transmission/reception system, which is discussed in Digital Communication (Section 6.13), Signal Sets ["Signal Sets" <http://cnx.org/content/m0542/latest/>], BPSK Signal Set ["BPSK signal set" <http://cnx.org/content/m0543/latest/>], Transmission Bandwidth ["Transmission Bandwidth" <http://cnx.org/content/m0544/latest/>], Frequency Shift Keying (Section 6.15), Digital Communication Receivers (Section 6.16), Factors in Receiver Error (Section 6.17), Digital Communication System Properties ["Digital Communication System Properties" <http://cnx.org/content/m0547/latest/>], and Error Probability ["Error Probability" <http://cnx.org/content/m0548/latest/>], can be lumped into a single system known as the digital channel.

Figure 6.17 (DigMC): The steps in transmitting digital information are shown in the upper system: the symbolic-valued signal s(m) passes through a source coder to produce the bit stream b(n), which is transmitted as x(t), received as r(t), detected as b̂(n), and decoded into ŝ(m). The lower block diagram shows an equivalent system wherein the analog portions are combined and modeled by a transition diagram: transmitting a 0 is received as a 0 with probability 1 − pe and as a 1 with probability pe, and symmetrically for a transmitted 1.

Digital channels are described by transition diagrams, which indicate the output alphabet symbols that result for each possible transmitted symbol and the probabilities of the various reception possibilities. The probabilities on transitions coming from the same symbol must sum to one. For the matched-filter receiver and the signal sets we have seen, the depicted transition diagram, which shows how each transmitted bit could be received, captures how transmitted bits are received: For example, transmitting a 0 results in the reception of a 1 with probability pe (an error) or a 0 with probability 1 − pe (no error). This model is known as a binary symmetric channel, and the probability of error pe is the sole parameter of the digital channel; it encapsulates signal set choice, channel properties, and the matched-filter receiver. With this simple but entirely accurate model, we can concentrate on how bits are received. The symbolic-valued signal s(m) forms the message, and it is encoded into a bit sequence b(n); the indices differ because more than one bit/symbol is usually required to represent the message by a bitstream. From the received bitstream b̂(n), the received symbolic-valued signal ŝ(m) is derived.

6.20 Entropy
[This content is available online at <http://cnx.org/content/m0070/2.14/>.]

Communication theory has been formulated best for symbolic-valued signals. Claude Shannon [http://www.lucent.com/minds/infotheory/] published in 1948 The Mathematical Theory of Communication, which became the cornerstone of digital communication. He showed the power of probabilistic models for symbolic-valued signals, which allowed him to quantify the information present in a signal. In the simplest signal model, each symbol can occur at index n with a probability Pr[ak], k = {1, ..., K}. What this model says is that for each signal value a K-sided coin is flipped (note that the coin need not be fair). For this model to make sense, the probabilities must be numbers between zero and one and must sum to one:

    0 ≤ Pr[ak] ≤ 1    (6.48)

    Σ from k=1 to K of Pr[ak] = 1    (6.49)

This coin-flipping model assumes that symbols occur without regard to what preceding or succeeding symbols were, a false assumption for typed text. Despite this probabilistic model's over-simplicity, the ideas we develop here also work when more accurate, but still probabilistic, models are used. The key quantity that characterizes a symbolic-valued signal is the entropy of its alphabet:

    H(A) = −Σk Pr[ak] log2 Pr[ak]    (6.50)

Because we use the base-2 logarithm, entropy has units of bits. For this definition to make sense, we must take special note of symbols having probability zero of occurring. A zero-probability symbol never occurs; thus, we define 0·log2 0 = 0 so that such symbols do not affect the entropy. The maximum value attainable by an alphabet's entropy occurs when the symbols are equally likely (Pr[ak] = Pr[al]); in this case, the entropy equals log2 K. The minimum value occurs when only one symbol occurs; it has probability one of occurring and the rest have probability zero.

Exercise 6.20.1 (Solution on p. 296.)
Derive the maximum-entropy results, both the numeric aspect (entropy equals log2 K) and the theoretical one (equally likely symbols maximize entropy). Derive the value of the minimum entropy alphabet.

Example 6.1
A four-symbol alphabet has the following probabilities.

    Pr[a0] = 1/2,  Pr[a1] = 1/4,  Pr[a2] = 1/8,  Pr[a3] = 1/8

Note that these probabilities sum to one as they should. As 1/2 = 2⁻¹, log2(1/2) = −1. The entropy of this alphabet equals

    H(A) = −( (1/2) log2(1/2) + (1/4) log2(1/4) + (1/8) log2(1/8) + (1/8) log2(1/8) )
         = −( (1/2)(−1) + (1/4)(−2) + (1/8)(−3) + (1/8)(−3) )
         = 1.75 bits    (6.51)
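Definition (6.50) translates directly into code. A minimal Python sketch (ours) that reproduces Example 6.1 and the equally-likely maximum:

```python
import math

def entropy(probs):
    """H(A) = -sum_k Pr[a_k] * log2 Pr[a_k], with 0*log2(0) taken to be 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([1/2, 1/4, 1/8, 1/8]))    # 1.75 bits, as in Example 6.1
print(entropy([1/4] * 4))               # 2.0 bits = log2(4), the equally-likely maximum
```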
6.21 Source Coding Theorem
[This content is available online at <http://cnx.org/content/m0091/latest/>.]

The significance of an alphabet's entropy rests in how we can represent it with a sequence of bits. Bit sequences form the "coin of the realm" in digital communications: they are the universal way of representing symbolic-valued signals. We convert back and forth between symbols and bit-sequences with what is known as a codebook: a table that associates symbols to bit sequences. In creating this table, we must be able to assign a unique bit sequence to each symbol so that we can go between symbol and bit sequences without error.

Point of Interest: You may be conjuring the notion of hiding information from others when we use the name codebook for the symbol-to-bit-sequence table. There is no relation to cryptology, which comprises mathematically provable methods of securing information. The codebook terminology was developed during the beginnings of information theory just after World War II.

As we shall explore in some detail elsewhere, digital communication (Section 6.13) is the transmission of symbolic-valued signals from one place to another. When faced with the problem, for example, of sending a file across the Internet, we must first represent each character by a bit sequence. Because we want to send the file quickly, we want to use as few bits as possible. However, we don't want to use so few bits that the receiver cannot determine what each character was from the bit sequence. For example, we could use one bit for every character: File transmission would be fast but useless because the codebook creates errors.

Shannon [http://www.lucent.com/minds/infotheory/] proved in his monumental work what we call today the Shannon Source Coding Theorem. Let B(ak) denote the number of bits used to represent the symbol ak. The average number of bits B̄(A) required to represent the entire alphabet equals Σ from k=1 to K of B(ak) Pr[ak]. The Source Coding Theorem states that the average number of bits needed to accurately represent the alphabet need only satisfy

    H(A) ≤ B̄(A) < H(A) + 1    (6.52)

Thus, the alphabet's entropy specifies to within one bit how many bits on the average need to be used to send the alphabet. The smaller an alphabet's entropy, the fewer bits required for digital transmission of files expressed in that alphabet.

Example 6.2
A four-symbol alphabet has the following probabilities.

    Pr[a0] = 1/2,  Pr[a1] = 1/4,  Pr[a2] = 1/8,  Pr[a3] = 1/8

and an entropy of 1.75 bits (Example 6.1). Let's see if we can find a codebook for this four-letter alphabet that satisfies the Source Coding Theorem. The simplest code to try is known as the simple binary code: convert the symbol's index into a binary number and use the same number of bits for each symbol by including leading zeros where necessary.

    a0 ↔ 00   a1 ↔ 01   a2 ↔ 10   a3 ↔ 11    (6.53)

Whenever the number of symbols in the alphabet is a power of two (as in this case), the average number of bits B̄(A) equals log2 K, which equals 2 in this case. Because the entropy equals 1.75 bits, the simple binary code indeed satisfies the Source Coding Theorem (we are within one bit of the entropy limit) but you might wonder if you can do better.
If we chose a codebook with differing numbers of bits for the symbols, a smaller average number of bits can indeed be obtained. The idea is to use shorter bit sequences for the symbols that occur more often. One codebook like this is

    a0 ↔ 0   a1 ↔ 10   a2 ↔ 110   a3 ↔ 111    (6.54)

Now B̄(A) = 1·(1/2) + 2·(1/4) + 3·(1/8) + 3·(1/8) = 1.75. We can reach the entropy limit! The simple binary code is, in this case, less efficient than the unequal-length code. Using the efficient code, we can transmit the symbolic-valued signal having this alphabet 12.5% faster. Furthermore, we know that no more efficient codebook can be found because of Shannon's Theorem.

6.22 Compression and the Huffman Code
[This content is available online at <http://cnx.org/content/m0092/2.19/>.]

Shannon's Source Coding Theorem (6.52) has additional applications in data compression. Here, we have a symbolic-valued signal source, like a computer file or an image, that we want to represent with as few bits as possible. Compression schemes that assign symbols to bit sequences are known as lossless if they obey the Source Coding Theorem; they are lossy if they use fewer bits than the alphabet's entropy. Using a lossy compression scheme means that you cannot recover a symbolic-valued signal from its compressed version without incurring some error. You might be wondering why anyone would want to intentionally create errors, but lossy compression schemes are frequently used where the efficiency gained in representing the signal outweighs the significance of the errors.

Shannon's Source Coding Theorem states that symbolic-valued signals require on the average at least H(A) number of bits to represent each of its values, which are symbols drawn from the alphabet A. What is not discussed there is a procedure for designing an efficient source coder: one guaranteed to produce the fewest bits/symbol on the average. That source coder is not unique, and one approach that does achieve that limit is the Huffman source coding algorithm.

Point of Interest: In the early years of information theory, the race was on to be the first to find a provably maximally efficient source coding algorithm. The race was won by then MIT graduate student David Huffman in 1954, who worked on the problem as a project in his information theory course. We're pretty sure he received an "A."

• Create a vertical table for the symbols, the best ordering being in decreasing order of probability.
• Form a binary tree to the right of the table. A binary tree always has two branches at each node. Build the tree by merging the two lowest probability symbols at each level, making the probability of the node equal to the sum of the merged nodes' probabilities. If more than two nodes/symbols share the lowest probability at a given level, pick any two; your choice won't affect B̄(A).
• At each node, label each of the emanating branches with a binary number. The bit sequence obtained from passing from the tree's root to the symbol is its Huffman code.

Example 6.3
The simple four-symbol alphabet used in the Entropy (Example 6.1) and Source Coding (Example 6.2) modules has the following probabilities

    Pr[a0] = 1/2,  Pr[a1] = 1/4,  Pr[a2] = 1/8,  Pr[a3] = 1/8
The Human code does satisfy the Source Coding Theoremits average length is within one bit of the alphabet's entropybut you might wonder if a better code existed. The bit sequence obtained by traversing the tree from the root to the symbol denes that symbol's binary code. Furthermore. a4 . and verify the claimed average code length and alphabet entropy. and P r [a4 ] = 20 .75 The average number of bits required to represent this alphabet equals bits.) Derive the Human code for this second set of probabilities. we may not be able to achieve the entropy limit.75 bits.68 bits. . . Exercise 6. }. with the root node (the one at which the tree begins) dening the codewords. the had the probabilities P r [a1 ] = entropy limit is 1. 296.1). . s (m) = {a2 . a1 .18 (Human Coding Tree). However.255 P r [a1 ] = 1 4 P r [a2 ] = 1 8 P r [a3 ] = 1 8 and an entropy of 1. Human Coding Tree Symbol Probability 1 a1 a2 a3 a4 Figure 6. a1 . . This alphabet has the Human coding tree shown in Figure 6. David Human showed mathematically that no other code could achieve a shorter average code than his.18: 2 1 4 1 8 1 8 Source Code 0 0 10 0 1 2 0 1 1 4 1 110 1 111 We form a Human code for a four-letter alphabet having the indicated probabilities of occurrence. and therefore dierent code. The binary tree created by the algorithm extends to the right. 1. a2 . If our symbols 1 1 1 1 2 .1 (Solution on p. clearly a dierent tree. . P r [a2 ] = 4 . The code thus obtained is not unique as we could have labeled the branches coming out of each node dierently. P r [a3 ] = 5 . . which is the Shannon entropy limit for this source alphabet.22. could well result.75 bits (Example 6. telegraph relied on a network In short.4 The rst electrical communications systemthe telegraphwas digital.) − Calculate what the relation between T and the average bit rate B (A) R is. Example 6. an infrequent error can devastate the ability to translate a bitstream into a symbolic signal.23. it communicated text over wireline connections using a binary codethe Morse codeto represent individual letters. To capture how often bits must be transmitted to keep up with the source's production of symbols. the telegrapher needs to insert a pause to inform the receiver when letter boundaries occur. note: A good example of this need is the Morse Code: Between each letter.) Show by example that a bitstream produced by a Human code is not necessarily self-synchronizing. Exercise 6. how does the receiver determine when symbols begin and end? If you required created a source code that a separation marker in the bitstream between symbols. the bit sequences that represent individual symbols can have diering lengths so the bitstream index m does not increase in lock step with the symbol-valued signal's index n. the not unlike the basics of modern computer networks. presumably getting the message closer to its destination. we can only compute − B (A) averages. synchronization can easily be lost even if the receiver started "in synch" with the source. Exercise 6. 6. To say it It was also far ahead of some needed technologies. presaged modern communications would be an understatement. To separate codes for each letter.) Sketch an argument that prex coding. A subtlety of source coding is whether we need "commas" in the bitstream. (Solution on p. whether derived from a Human code or not. will provide unique decoding when an unequal number of bits/symbol are used in the code. If our source code averages bits/symbol and symbols are produced at a rate R. 
6.23 Subtleties of Coding
[This content is available online at <http://cnx.org/content/m0093/2.16/>.]

In the Huffman code, the bit sequences that represent individual symbols can have differing lengths, so the bitstream index m does not increase in lock step with the symbol-valued signal's index n. To capture how often bits must be transmitted to keep up with the source's production of symbols, we can only compute averages. If our source code averages B̄(A) bits/symbol and symbols are produced at a rate R, the average bit rate equals B̄(A)·R, and this quantity determines the bit interval duration T.

Exercise 6.23.1 (Solution on p. 296.)
Calculate what the relation between T and the average bit rate B̄(A)·R is.

A subtlety of source coding is whether we need "commas" in the bitstream. When we use an unequal number of bits to represent symbols, how does the receiver determine when symbols begin and end? If you created a source code that required a separation marker in the bitstream between symbols, it would be very inefficient since you are essentially requiring an extra symbol in the transmission stream.

note: A good example of this need is the Morse Code: Between each letter, the telegrapher needs to insert a pause to inform the receiver when letter boundaries occur.

Huffman showed that his (maximally efficient) code had the prefix property: No code for a symbol began another symbol's code. Once you have the prefix property, the bitstream is partially self-synchronizing: Once the receiver knows where the bitstream starts, we can assign a unique and correct symbol sequence to the bitstream. In information theory, no commas are placed in the bitstream, but you can unambiguously decode the sequence of symbols from the bitstream.

Exercise 6.23.2 (Solution on p. 296.)
Sketch an argument that prefix coding, whether derived from a Huffman code or not, will provide unique decoding when an unequal number of bits/symbol are used in the code.

However, having a prefix code does not guarantee total synchronization: After hopping into the middle of a bitstream, can we always find the correct symbol boundaries? The self-synchronization issue does mitigate the use of efficient source coding algorithms.

Exercise 6.23.3 (Solution on p. 296.)
Show by example that a bitstream produced by a Huffman code is not necessarily self-synchronizing. Are fixed-length codes self-synchronizing?

Another issue is bit errors induced by the digital channel. Despite the small probabilities of error offered by good signal set design and the matched filter, an infrequent error can devastate the ability to translate a bitstream into a symbolic signal. If errors occur (and they will), synchronization can easily be lost even if the receiver started "in synch" with the source. We need ways of reducing reception errors without demanding that pe be smaller.
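The prefix property makes comma-free decoding trivial: scan bits and emit a symbol the moment the accumulated bits match a codeword. A Python sketch (ours) using the codebook (6.54), decoding the bitstream from the Huffman example:

```python
codebook = {"0": "a0", "10": "a1", "110": "a2", "111": "a3"}   # prefix code (6.54)

def decode(bits):
    symbols, word = [], ""
    for bit in bits:
        word += bit
        if word in codebook:                 # the prefix property guarantees that
            symbols.append(codebook[word])   # the first codeword match is correct
            word = ""
    return symbols

print(decode("101100111010"))   # ['a1', 'a2', 'a0', 'a3', 'a0', 'a1']
```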
Instead of the communication model (Figure 6.20). Shannon did not demonstrate an error correcting code that would achieve this remarkable feat. Shannon's result proves it exists. In any case.20: To correct errors that occur in the digital channel. The idea is for the transmitter to send not only the symbol- derived bits emerging from the source coder but also additional bits derived from the coder's bit stream. Properly designed channel coding can greatly reduce the probability (from the uncoded value of pe ) that a data bit be received in error remains pe b (n) is received incorrectly even when the probability of c (l) or becomes larger. the transmitter inserts a channel coder before analog modulation.24 Channel Coding 37 We can. Shannon's Noisy Channel Coding Theorem (Section 6.25 Repetition Codes 38 Perhaps the simplest error correcting code is the 37 38 repetition code. a channel coder and decoder are added to the communication system. This system forms the Fundamental Model of Digital Communication.org/content/m0071/2. These additional bits.17: DigMC) shown previously. This block diagram shown there forms the Fundamental Model of Digital Communication. s(m) Source b(n) c(l) Source Coder c(l) b(n) Channel Decoder Digital Channel Channel Coder s(m) Source Decoder Sink Figure 6. This content is available online at <http://cnx. help the receiver determine if an error has occurred in the data bits (the important bits) or in the error-correction bits. correct errors made by the receiver with only the error-lled bit stream emerging from the digital channel available to us.259 6.org/content/m10782/2. that should not prevent us from studying commonly used error correcting codes that not only nd their way into all digital communication systems.30) says that if the data aren't transmitted too quickly. that error correction codes exist that can correct all the bit errors introduced by the channel. in fact. the transmitter encodes pe . Thus. the channel coder produces three. an odd number of times in fact. let's consider the three-fold repetition code: for every bit b (n) emerging from the source coder. 0 0 110 that with probability 0 2 pe (1 − pe ) 101 1) Bit 010 100 into a 3 0 as 000. Here.21: Digital Transmitter l 2/R’ t T 3T 6T The upper portion depicts the result of directly modulating the bit stream transmitted signal x (t) using a baseband BPSK signal set. we know that more of the bits should be correct rather than in 2 error. c (l) has a data rate three times higher than that of the original The coding table illustrates when errors can be corrected and when they can't by the majority-vote decoder. This reduction in the bit interval means that the transmitted energy/bit decreases by a factor of three. the resulting three times smaller than the uncoded version.260 CHAPTER 6. the transmitter sends the data bit several times. For example. Coding Table Code 000 001 Probability (1 − pe ) pe (1 − pe ) 2 011 pe 2 (1 − pe ) 1 111 2 pe (1 − pe ) 1 2 1 3 1 pe (1 − pe ) pe (1 − pe ) pe 0 2 Table 6. Because the 1 . INFORMATION COMMUNICATION Repetition Code bn 1 x(t) 0 Digital Transmitter n 1/R’ 2/R’ t T 2T Channel Coder x(t) cl 1 0 1/R’ Figure 6. the bit stream emerging from the channel coder bit stream b (n). If that bit stream passes through a (3.1) channel coder to yield the bit stream transmitted signal requires a bit interval T b (n) into a is the datarate produced by the source c (l). The last column shows the results of the majority-vote .1: In this example. 
In this example, the transmitter encodes 0 as 000. The channel creates an error (changing a 0 into a 1) with probability pe. The first column lists all possible received datawords, the second the probability of each dataword being received, and the last the bit announced by the majority-vote decoder. When the decoder produces 0, it successfully corrected the errors introduced by the channel, if there were any (the top row corresponds to the case in which no errors occurred). Thus, if one bit of the three is received in error, the receiver can correct the error; if more than one error occurs, the channel decoder announces the bit is 1 instead of the transmitted value of 0. The error probability of the decoder is the sum of the probabilities of the rows for which the decoder produces 1:

Pr[b(n) decoded incorrectly] = 3 pe^2 (1 - pe) + pe^3

This probability of a decoding error is always less than pe, the uncoded value, so long as pe < 1/2.

Exercise 6.25.1 (Solution on p. 296.)
Demonstrate mathematically that this claim is indeed true. Is 3 pe^2 (1 - pe) + pe^3 <= pe?
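The following MATLAB sketch evaluates the decoding-error probability numerically, assuming a matched-filter BPSK receiver for which pe = Q(sqrt(2 Eb/N0)); the Q-function is written with the built-in erfc so no toolbox is needed. It also previews the effect, discussed next, of reducing the energy per bit by a factor of three to keep up with the coded datarate.

    % Three-fold repetition code: decoding-error probability (sketch).
    Q = @(x) 0.5*erfc(x/sqrt(2));           % Q-function via built-in erfc
    EbN0dB = -5:0.5:10;
    snr = 10.^(EbN0dB/10);
    pe = Q(sqrt(2*snr));                    % uncoded BPSK bit-error probability
    pe3 = Q(sqrt(2*snr/3));                 % energy/bit is three times smaller
    decoded = 3*pe3.^2.*(1 - pe3) + pe3.^3; % majority-vote decoder error
    semilogy(EbN0dB, pe, EbN0dB, decoded);
    xlabel('E_b/N_0 (dB)'); ylabel('error probability');
    legend('no coding', '(3,1) code, majority vote');

For every pe < 1/2, 3 pe^2 (1 - pe) + pe^3 < pe; the plot shows, however, that once the threefold energy reduction is taken into account, the repetition code performs worse than no coding at all.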
6.26 Block Channel Coding
(This content is available online at <http://cnx.org/content/m0094/2.15/>.)

Because of the higher datarate imposed by the channel coder, the probability of bit error occurring in the digital channel increases relative to the value obtained when no channel coding is used. The bit interval duration must be reduced by K/N in comparison to the no-channel-coding situation, which means the energy per bit Eb goes down by the same amount. For the three-fold repetition code, the bit interval must decrease by a factor of three if the transmitter is to keep up with the data stream, as illustrated in Figure 6.21 (Repetition Code).

Point of Interest: It is unlikely that the transmitter's power could be increased to compensate. Such is the sometimes-unfriendly nature of the real world.

The question thus becomes: Is the effective error probability lower with channel coding even though the error probability for each transmitted bit is larger? The answer is no: using a repetition code for channel coding cannot ultimately reduce the probability that a data bit is received in error. The ultimate reason is the repetition code's inefficiency: transmitting one data bit for every three transmitted bits is too inefficient for the amount of error correction provided.

Exercise 6.26.1 (Solution on p. 296.)
Using MATLAB, calculate the probability a bit is received incorrectly with a three-fold repetition code. Show that when the energy per bit Eb is reduced by 1/3, this probability is larger than the no-coding probability of error.

The repetition code (p. 259) represents a special case of what is known as block channel coding. For every K bits that enter the block channel coder, it inserts an additional N - K error-correction bits to produce a block of N bits for transmission. We use the notation (N,K) to represent a given block code's parameters; the three-fold repetition code has K = 1 and N = 3. A block code's coding efficiency E equals the ratio K/N, and quantifies the overhead introduced by channel coding. The rate at which bits must be transmitted again changes: so-called data bits b(n) emerge from the source coder at an average rate B(A), and they exit the channel coder at a rate 1/E higher. We represent the fact that the bits sent through the digital channel operate at a different rate by using the index l for the channel-coded bit stream c(l). Note that the blocking (framing) imposed by the channel coder does not correspond to symbol boundaries in the bit stream b(n), especially when we employ variable-length source codes.

Does any error-correcting code reduce communication errors when real-world constraints are taken into account? The answer now is yes. To understand channel coding, we need first to develop a general framework for channel codes and discover what it takes for a code to be maximally efficient: correct as many errors as possible using the fewest error correction bits as possible (making the efficiency K/N as large as possible).

6.27 Error-Correcting Codes: Hamming Distance
(This content is available online at <http://cnx.org/content/m10283/2.29/>.)

So-called linear codes create error-correction bits by combining the data bits linearly. The phrase "linear combination" means here single-bit binary arithmetic, in which subtraction and addition are equivalent (Table 6.2):

0 ⊕ 0 = 0    1 ⊕ 1 = 0    0 ⊕ 1 = 1    1 ⊕ 0 = 1
0 · 0 = 0    1 · 1 = 1    0 · 1 = 0    1 · 0 = 0

For example, the three-fold repetition code is a linear code:

c(1) = b(1)
c(2) = b(1)
c(3) = b(1)

or, more concisely, c = G b, where

    [1]        [c(1)]
G = [1]    c = [c(2)]    b = [b(1)]        (6.55)
    [1]        [c(3)]

The length-K (in this simple example, K = 1) block of data bits is represented by the vector b, and the length-N output block of the channel coder, known as a codeword, by c. The generator matrix G defines all block-oriented linear channel coders.

As we consider other block codes, the simple idea of the decoder taking a majority vote of the received bits won't generalize easily. We need a broader view that takes into account the distance between codewords: a length-N codeword means that the receiver must decide among the 2^N possible datawords to select which of the 2^K codewords was actually transmitted. We can think of the datawords geometrically. We define the Hamming distance between binary datawords c1 and c2, denoted by d(c1, c2), to be the minimum number of bits that must be "flipped" to go from one word to the other. In our table of binary arithmetic, we see that adding a 1 corresponds to flipping a bit. Because distance corresponds to flipping bits, the Hamming distance can be expressed as

d(c1, c2) = sum(c1 ⊕ c2)

Exercise 6.27.1 (Solution on p. 297.)
Show that adding the error vector col[1, 0, ..., 0] to a codeword flips the codeword's leading bit and leaves the rest unaffected.
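In MATLAB, single-bit binary arithmetic is arithmetic modulo 2, so both the Hamming distance and the matrix form of the coder take one line each. A minimal sketch:

    % Hamming distance and the (3,1) coder in matrix form.
    c1 = [0 0 0]; c2 = [1 1 1];
    d = sum(xor(c1, c2))        % Hamming distance: 3
    G = [1; 1; 1];              % generator matrix of the (3,1) code
    c = mod(G*1, 2)             % codeword for the data bit b(1) = 1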
The error correction capability of a channel code is limited by how close together any two error-free blocks are. Bad codes would produce blocks close together, which would result in ambiguity when assigning a block of data bits to a received block. Figure 6.22 shows the geometry for the (3,1) repetition code.

Figure 6.22: [Three panels, with the axes being bit positions in the data block.] In a (3,1) repetition code, only 2 of the possible 8 three-bit data blocks are codewords. In the left plot, the filled circles represent the codewords [0 0 0] and [1 1 1]; the unfilled ones correspond to the other possible three-bit blocks. The center plot shows that the distance between the codewords is 3; calculating the Hamming distance geometrically means following the axes rather than going "as the crow flies". The right plot shows the datawords that result when one error occurs as the codeword goes through the channel: the three datawords are unit distance from the original codeword. Note that the received dataword groups do not overlap, which means the code can correct all single-bit errors.

In our (3,1) code, the distance between codewords is 3 bits, so a dataword received with a single-bit error lies a distance of 1 from the transmitted codeword and the decoder assigns it correctly. Note that if a dataword lies a distance of 1 from two codewords, it is impossible to determine which codeword was actually sent, and the code cannot correct the channel-induced error. This criterion means that if any two codewords are only two bits apart, some channel-induced errors cannot be corrected; to have a code that can correct all single-bit errors, codewords must have a minimum separation of three. Our repetition code has this property.

To perform decoding when errors occur, we want to find the codeword (one of the filled circles in Figure 6.22) that has the highest probability of occurring: the one closest to the one received. The number of errors the channel introduces equals the number of ones in the error vector e; the probability of one particular bit being flipped anywhere in a codeword is pe (1 - pe)^(N-1), and the probability of any particular error vector decreases with the number of errors it contains.

Introducing code bits increases the probability that any bit arrives in error (because bit interval durations decrease), while using a well-designed error-correcting code corrects bit reception errors. Do we win or lose by using an error-correcting code? The answer is that we can win if the code is well-designed; the (3,1) repetition code demonstrates that we can lose (Exercise 6.26.1). We also need a systematic way of finding the codeword closest to any received dataword.

A much better code than our (3,1) repetition code is the following (7,4) code:

c(1) = b(1)
c(2) = b(2)
c(3) = b(3)
c(4) = b(4)
c(5) = b(1) ⊕ b(2) ⊕ b(3)
c(6) = b(2) ⊕ b(3) ⊕ b(4)
c(7) = b(1) ⊕ b(2) ⊕ b(4)

where the generator matrix is

    [1 0 0 0]
    [0 1 0 0]
    [0 0 1 0]
G = [0 0 0 1]
    [1 1 1 0]
    [0 1 1 1]
    [1 1 0 1]

In this (7,4) code, 2^4 = 16 of the 2^7 = 128 possible blocks arriving at the channel decoder correspond to error-free transmission and reception.
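A short MATLAB sketch makes the geometry concrete for the (7,4) code: it generates all sixteen codewords from the generator matrix above and verifies that the minimum distance between codewords is 3. (Because the code is linear, the minimum distance equals the smallest number of ones in any non-zero codeword, a fact derived in the next paragraphs.)

    % All 16 codewords of the (7,4) code and their minimum distance.
    G = [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1; ...
         1 1 1 0; 0 1 1 1; 1 1 0 1];
    cw = zeros(16, 7);
    for k = 0:15
      b = bitget(k, 4:-1:1)';           % one of the 16 data blocks
      cw(k+1, :) = mod(G*b, 2)';
    end
    w = sum(cw, 2);                     % number of ones in each codeword
    dmin = min(w(w > 0))                % prints 3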
Error correction amounts to searching for the codeword c closest to the received block, in terms of the Hamming distance between the two. The quantity to examine, therefore, in designing error correcting codes is the minimum distance between codewords:

dmin = min d(ci, cj),  ci ≠ cj

To have a channel code that can correct all single-bit errors, we must have

dmin >= 3   (6.56)

Exercise 6.27.2 (Solution on p. 297.)
Suppose we want a channel code to have an error-correction capability of n bits. What must the minimum Hamming distance between codewords dmin be?

How do we calculate the minimum distance between codewords? Because we have 2^K codewords, the number of possible unique pairs equals 2^(K-1) (2^K - 1), which can be a large number. Recall that our channel coding procedure is linear, with c = G b. Therefore ci ⊕ cj = G (bi ⊕ bj). Because bi ⊕ bj always yields another block of data bits, we find that the difference between any two codewords is another codeword! Thus, to find dmin we need only compute the number of ones that comprise all non-zero codewords. Finding these codewords is easy once we examine the coder's generator matrix. Note that the columns of G are codewords (why is this?), and that all codewords can be found by all possible pairwise sums of the columns. To find dmin, we need only count the number of ones in each column and in the sums of columns. For our example (7,4) code, G's first column has three ones, the next one four, and the last two three. Considering sums of column pairs next: because the upper portion of G is an identity matrix, the upper portion of any sum of two columns must have exactly two ones, and because the bottom portion of each column differs from that of every other column in at least one place, the bottom portion of the sum must have at least one one. Triple sums will have at least three ones in their upper portion. Thus, every non-zero codeword contains at least three ones: dmin = 3, and we have a channel coder that can correct all occurrences of one error within a received 7-bit block.

6.28 Error-Correcting Codes: Channel Decoding
(This content is available online at <http://cnx.org/content/m0072/2.20/>.)

Because the idea of channel coding has merit (so long as the code is efficient), let's develop a systematic procedure for performing channel decoding. One way of checking for errors is to try recreating the error correction bits from the data portion of the received block. Using matrix notation, we make this calculation by multiplying the received block ĉ by the matrix H known as the parity check matrix. It is formed from the generator matrix G by taking the bottom, error-correction portion of G and attaching to it an identity matrix. For our (7,4) code,

H = [1 1 1 0 | 1 0 0]
    [0 1 1 1 | 0 1 0]        (6.57)
    [1 1 0 1 | 0 0 1]

where the left portion is the lower portion of G and the right portion is the identity. The parity check matrix thus has size (N - K) x N, and the result of multiplying this matrix with a received word is a length-(N - K) binary vector. If no digital channel errors occur, meaning we receive a codeword so that ĉ = c, then H ĉ = 0.
Exercise 6.28.1 (Solution on p. 297.)
Show that H c = 0 for all the columns of G. In other words, show that H G = 0, an (N - K) x K matrix of zeroes. Does this property guarantee that all codewords also satisfy H c = 0?

When the received bits ĉ do not form a codeword, H ĉ does not equal zero, indicating the presence of one or more errors induced by the digital channel. Because the presence of an error can be mathematically written as ĉ = c ⊕ e, with e a vector of binary values having a 1 in those positions where a bit error occurred, we have

H ĉ = H c ⊕ H e = H e

Exercise 6.28.2 (Solution on p. 298.)
Show that adding the error vector (1, 0, ..., 0)^T to a codeword flips the codeword's leading bit and leaves the rest unaffected.

Because the result of the product H ĉ is a length-(N - K) vector of binary values, we can have 2^(N-K) - 1 non-zero values that correspond to non-zero error patterns e. For example, if the first bit is received in error, e = (1, 0, 0, 0, 0, 0, 0)^T, and simple calculations show that multiplying this vector by H results in the first column of H. To perform our channel decoding,

1. compute (conceptually at least) H ĉ;
2. if this result is zero, no detectable or correctable error occurred;
3. if non-zero, consult a table of length-(N - K) binary vectors to associate them with the minimal error pattern that could have resulted in the non-zero result;
4. add the error vector thus obtained to the received vector ĉ to correct the error (because c ⊕ e ⊕ e = c);
5. select the data bits from the corrected word to produce the received bit sequence b(n).

The phrase minimal in the third item raises the point that a double (or triple or quadruple ...) error occurring during the transmission/reception of one codeword can create the same received word as a single-bit error or no error in another codeword. For example, (1, 0, 0, 0, 1, 0, 1)^T and (0, 1, 0, 0, 1, 1, 1)^T are both codewords in the example (7,4) code; the second results when the first one experiences three bit errors (first, second, and sixth bits). Such an error pattern cannot be detected by our coding strategy, but such multiple error patterns are very unlikely to occur. Our receiver uses the principle of maximum probability: an error-free transmission is much more likely than one with three errors, provided the bit-error probability pe is small enough.

Exercise 6.28.3 (Solution on p. 298.)
How small must pe be so that a single-bit error is more likely to occur than a triple-bit error?

This analysis corresponds to our decoding table (Table 6.3): we associate each parity check matrix multiplication result with its error pattern, and add that error pattern to the received word.

Parity Check Matrix (Table 6.3):

e       | He
1000000 | 101
0100000 | 111
0010000 | 110
0001000 | 011
0000100 | 100
0000010 | 010
0000001 | 001
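The entire decoding procedure fits in a few lines of MATLAB. The sketch below transmits one (7,4) codeword, flips its fifth bit, and recovers the data block; the syndrome H ĉ is matched against the columns of H, which is exactly the table lookup described above.

    % Syndrome decoding for the (7,4) code (sketch).
    G = [eye(4); 1 1 1 0; 0 1 1 1; 1 1 0 1];
    H = [1 1 1 0 1 0 0; ...
         0 1 1 1 0 1 0; ...
         1 1 0 1 0 0 1];                 % [lower portion of G | identity]
    b = [1 0 1 1]';                      % data block
    c = mod(G*b, 2);                     % transmitted codeword
    r = c; r(5) = 1 - r(5);              % channel flips the fifth bit
    s = mod(H*r, 2);                     % syndrome equals H times the error vector
    if any(s)
      [~, pos] = ismember(s', H', 'rows');  % which column of H matches
      r(pos) = 1 - r(pos);               % correct the single-bit error
    end
    bhat = r(1:4)'                       % recovered data bits: 1 0 1 1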
6.29 Error-Correcting Codes: Hamming Codes
(This content is available online at <http://cnx.org/content/m0097/2.25/>.)

For the (7,4) example, we have 2^(N-K) - 1 = 7 error patterns that can be corrected. We start with single-bit error patterns and multiply them by the parity check matrix. If we obtain unique answers, we are done; if two or more error patterns yield the same result, we can try double-bit error patterns. In our case, single-bit error patterns give a unique result. If more than one error occurs (unlikely though it may be), this "error correction" strategy usually makes the error worse, in the sense that more bits are changed from what was transmitted.

Codes for which the number of correctable error patterns, 2^(N-K) - 1, equals the codeword length N are known as Hamming codes, and the following table (Table 6.4) provides the parameters of these codes. Hamming codes are the simplest single-bit error correction codes, and the generator/parity check matrix formalism for channel coding and decoding works for them. Note that our (7,4) code has the length and number of data bits that perfectly fit correcting single-bit errors; this pleasant property arises because the number of error patterns that can be corrected equals the codeword length N.

Hamming Codes (Table 6.4):

N   | K   | E (efficiency)
3   | 1   | 0.33
7   | 4   | 0.57
15  | 11  | 0.73
31  | 26  | 0.84
63  | 57  | 0.90
127 | 120 | 0.94

Because the bit stream emerging from the source coder is segmented into four-bit blocks, the fair way of comparing coded and uncoded transmission is to compute the probability of block error: the probability that any bit in a block remains in error despite error correction, and regardless of whether the error occurs in the data bits or in the coding bits.

Figure 6.23 (Probability of error occurring): [Probability of block error, plotted on a logarithmic axis from 10^0 down to 10^-8, versus the signal-to-noise ratio (dB), for uncoded (K = 4) transmission and for the (7,4) code.] The probability of an error occurring in K = 4 transmitted data bits equals 1 - (1 - pe)^4, as (1 - pe)^4 equals the probability that the four bits are received without error. The upper curve displays how this probability of an error anywhere in the four-bit block varies with the signal-to-noise ratio. When a (7,4) single-bit error correcting code is used, the transmitter reduces the energy it expends during a single-bit transmission by 4/7, appending three extra bits for error correction. Now the probability of any bit in the seven-bit block being in error after error correction equals 1 - (1 - pe')^7 - 7 pe' (1 - pe')^6, where pe' is the probability of a bit error occurring in the channel when channel coding occurs. Here 7 pe' (1 - pe')^6 equals the probability of exactly one of the seven bits emerging from the channel in error; the channel decoder corrects this type of error, and all data bits in the block are then received correctly.

Figure 6.23 shows that if the signal-to-noise ratio is large enough, the (7,4) code's error correction capability compensates for the increased error probability due to the necessitated reduction in bit energy: channel coding yields a smaller error probability, and is worth the additional systems required to make it work.
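The curves of Figure 6.23 can be regenerated with a few lines of MATLAB, again assuming BPSK with pe = Q(sqrt(2 Eb/N0)) and the 4/7 energy reduction the code imposes:

    % Block error probability: uncoded versus (7,4) coded (cf. Figure 6.23).
    Q = @(x) 0.5*erfc(x/sqrt(2));
    snr = 10.^((-5:0.25:10)/10);          % Eb/N0
    pe  = Q(sqrt(2*snr));                 % uncoded channel
    pe7 = Q(sqrt(2*snr*4/7));             % coded channel: energy/bit scaled by 4/7
    uncoded = 1 - (1 - pe).^4;
    coded   = 1 - (1 - pe7).^7 - 7*pe7.*(1 - pe7).^6;
    semilogy(10*log10(snr), uncoded, 10*log10(snr), coded);
    xlabel('signal-to-noise ratio (dB)');
    ylabel('probability of block error');
    legend('uncoded (K = 4)', '(7,4) code');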
Unfortunately, for large blocks, the probability of multiple-bit errors can exceed that of single-bit errors unless the channel single-bit error probability pe is very small. Consequently, we would need to enhance such a code's error correcting capability by adding double-bit as well as single-bit error correction.

Exercise 6.29.1 (Solution on p. 298.)
What must the relation between N and K be for a code to correct all single- and double-bit errors with a "perfect fit"?

6.30 Noisy Channel Coding Theorem
(This content is available online at <http://cnx.org/content/m0073/2.12/>.)

As the block length becomes larger, more error correction will be needed. Do codes exist that can correct all errors? Perhaps the crowning achievement of Claude Shannon's creation of information theory (see http://www.lucent.com/minds/infotheory/) answers this question. His result comes in two complementary forms: the Noisy Channel Coding Theorem and its converse.

6.30.1 Noisy Channel Coding Theorem
Let E denote the efficiency of an error-correcting code: the ratio of the number of data bits to the total number of bits used to represent them. If the efficiency is less than the capacity of the digital channel,

E < C   (6.58)

then an error-correcting code exists that has the property that, as the length of the code increases, the probability of an error occurring in the decoded block approaches zero:

limit as N -> infinity of Pr[block error] = 0   (6.59)

6.30.2 Converse to the Noisy Channel Coding Theorem
If E > C, the probability of an error in a decoded block must approach one regardless of the code that might be chosen:

limit as N -> infinity of Pr[block error] = 1   (6.60)

This result astounded communication engineers when Shannon published it in 1948. Analog communication always yields a noisy version of the transmitted signal; in digital communication, error correction can be powerful enough to correct all errors as the block length increases. The key for this capability to exist is that the code's efficiency be less than the channel's capacity. These results mean that it is possible to transmit digital information over a noisy channel (one that introduces errors) and receive the information without error if the code is sufficiently inefficient compared to the channel's characteristics. Generally, a channel's capacity changes with the signal-to-noise ratio: as one increases or decreases, so does the other. The capacity measures the overall error characteristics of a channel (the smaller the capacity, the more frequently errors occur), and an overly efficient error-correcting code will not build in enough error correction capability to counteract channel errors. For a binary symmetric channel, the capacity is given by

C = 1 + pe log2(pe) + (1 - pe) log2(1 - pe)  bits/transmission

Figure 6.24 (capacity of a channel) shows how capacity varies with error probability.

Figure 6.24: [Upper panel: capacity (bits) falls from 1 at pe = 0 to 0 at pe = 0.5; lower panel: capacity (bits) rises toward 1 as the signal-to-noise ratio increases from -10 to 10 dB.] The capacity per transmission through a binary symmetric channel is plotted as a function of the digital channel's error probability (upper) and as a function of the signal-to-noise ratio for a BPSK signal set (lower).

For example, our (7,4) Hamming code has an efficiency of 0.57, and codes having the same efficiency but longer block sizes can be used on additive noise channels where the signal-to-noise ratio exceeds 0 dB.
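The capacity expression is easily plotted; the following sketch reproduces the upper panel of Figure 6.24.

    % Capacity of the binary symmetric channel (upper panel of Figure 6.24).
    pe = linspace(0.001, 0.999, 500);
    C = 1 + pe.*log2(pe) + (1 - pe).*log2(1 - pe);   % bits/transmission
    plot(pe, C);
    xlabel('error probability p_e'); ylabel('capacity (bits)');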
6.31 Capacity of a Channel
(This content is available online at <http://cnx.org/content/m0098/2.13/>.)

In addition to the Noisy Channel Coding Theorem and its converse (Section 6.30), Shannon also derived the capacity for a bandlimited (to W Hz) additive white noise channel. For this case, the signal set is unrestricted, even to the point that more than one bit can be transmitted each "bit interval." Instead of constraining channel code efficiency, the revised Noisy Channel Coding Theorem states that some error-correcting code exists such that, as the block length increases, error-free transmission is possible if the source coder's datarate, B(A) R, is less than capacity:

C = W log2(1 + SNR)  bits/s   (6.61)

This result sets the maximum datarate of the source coder's output that can be transmitted through the bandlimited channel with no error. (The bandwidth restriction arises not so much from channel properties as from spectral regulation, especially for wireless channels.) Shannon's proof of his theorem was very clever, and did not indicate what this code might be; it has never been found. Codes such as the Hamming code work quite well in practice to keep error rates low, but they remain greater than zero. Until the "magic" code is found, more important in communication system design is the converse. It states that if your data rate exceeds capacity, errors will overwhelm you no matter what channel coding you use. For this reason, capacity calculations are made to understand the fundamental limits on transmission rates.

Exercise 6.31.1 (Solution on p. 298.)
The first definition of capacity applies only for binary symmetric channels, and represents the number of bits/transmission. The second result states capacity more generally, having units of bits/second. How would you convert the first definition's result into units of bits/second?

Example 6.5
The telephone channel has a bandwidth of 3 kHz and a signal-to-noise ratio exceeding 30 dB (at least they promise this much). The maximum data rate a modem can produce for this wireline channel, and hope that errors will not become rampant, is the capacity:

C = 3 x 10^3 log2(1 + 10^3) = 29.901 kbps   (6.62)

Thus, the so-called 33 kbps modems operate right at the capacity limit.

Note that the data rate allowed by the capacity can exceed the bandwidth when the signal-to-noise ratio exceeds 0 dB. Our results for BPSK and FSK indicated that the bandwidth they require exceeds 1/T. What kind of signal sets might be used to achieve capacity? Modem signal sets send more than one bit/transmission using a number of techniques, one of the most popular of which is multi-level signaling. Here, we can transmit several bits during one transmission interval by representing the bits with some signal's amplitude. For example, two bits can be sent with a signal set comprised of a sinusoid with amplitudes of +/-A and +/-A/2.
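Example 6.5 amounts to one line of MATLAB:

    % Capacity of the telephone channel (Example 6.5).
    W = 3e3;                  % bandwidth in Hz
    SNR = 10^(30/10);         % 30 dB signal-to-noise ratio
    C = W*log2(1 + SNR)       % 2.9901e+04 bits/s, i.e., 29.901 kbps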
6.32 Comparison of Analog and Digital Communication
(This content is available online at <http://cnx.org/content/m0074/2.11/>.)

Analog communication systems, amplitude modulation (AM) radio being a typifying example, can inexpensively communicate a bandlimited analog signal from one location to another (point-to-point communication) or from one point to many (broadcast). Although it is not shown here, the coherent receiver provides the largest possible signal-to-noise ratio for the demodulated message. An analysis (Section 6.12) of this receiver thus indicates that some residual error will always be present in an analog system's output.

Although analog systems are less expensive in many cases than digital ones for the same application, digital systems offer much more efficiency, better performance, and much greater flexibility.

* Efficiency: The Source Coding Theorem allows quantification of just how complex a given message source is, and allows us to exploit that complexity by source coding (compression). In analog communication, the only parameters of interest are message bandwidth and amplitude; we cannot exploit signal structure to achieve a more efficient communication system.
* Performance: Because of the Noisy Channel Coding Theorem, we have a specific criterion by which to formulate error-correcting codes that can bring us as close to error-free transmission as we might want. Even though we may send information by way of a noisy channel, digital schemes are capable of error-free transmission while analog ones cannot overcome channel disturbances; see this problem (Problem 6.15) for a comparison.
* Flexibility: Digital communication systems can transmit real-valued discrete-time signals, which could be analog ones obtained by analog-to-digital conversion, and symbolic-valued ones (computer data, for example). Any signal that can be transmitted by analog means can be sent by digital means, with the only issue being the number of bits used in A/D conversion (how accurately do we need to represent signal amplitude). Images can be sent by analog means (commercial television), but better communication performance occurs when we use digital systems (HDTV). In addition to digital communication's ability to transmit a wider variety of signals than analog systems, point-to-point digital systems can be organized into global (and beyond as well) systems that provide efficient and flexible information transmission. Computer networks, explored in the next section, are what we call such systems today. Even analog-based networks, such as the telephone system, employ modern computer networking ideas rather than the purely analog systems of the past.

Consequently, with the increased speed of digital computers, the development of increasingly efficient algorithms, and the ability to interconnect computers to form a communications infrastructure, digital communication is now the best choice for many situations.

6.33 Communication Networks
(This content is available online at <http://cnx.org/content/m0075/2.11/>.)

Communication networks elaborate the Fundamental Model of Communications (Figure 1.3: Fundamental model of communication). That model describes point-to-point communications well, wherein the link between transmitter and receiver is straightforward, and they have the channel to themselves. One modern example of this communications mode is the modem that connects a personal computer with an information server via a telephone line. The key aspect, some would say flaw, of this model is that the channel is dedicated: only one communications link through the channel is allowed for all time. Regardless of whether we have a wireline or wireless channel, communication bandwidth is precious, and if it could be shared without significant degradation in communications performance (measured by signal-to-noise ratio for analog signal transmission and by bit-error probability for digital transmission), so much the better.

Figure 6.25 (Communication Network): [A source and a sink connected through a mesh of nodes and links.] The prototypical communications network, whether it be the postal service, cellular telephone, or the Internet, consists of nodes interconnected by links. Messages formed by the source are transmitted within the network by dynamic routing. Two routes are shown; the longer one would be used if the direct link were disabled or congested.

The idea of a network first emerged with perhaps the oldest form of organized communication: the postal service. Most communication networks, even modern ones, share many of its aspects.

* A user writes a letter, serving in the communications context as the message source.
* This message is sent to the network by delivery to one of the network's public entry points. Entry points in the postal case are mailboxes, post offices, or your friendly mailman or mailwoman picking up the letter.
* The communications network delivers the message in the most efficient (timely) way possible, trying not to corrupt the message while doing so.
* The message arrives at one of the network's exit points, and is delivered to the recipient (what we have termed the message sink).

Exercise 6.33.1 (Solution on p. 298.)
Develop the network model for the telephone system, making it as analogous as possible with the postal service-communications network metaphor.

What is most interesting about the network system is the ambivalence of the message source and sink about how the communications link is made. What they do care about is message integrity and communications efficiency. Furthermore, today's networks use heterogeneous links: communication paths that form the Internet use wireline, optical fiber, and satellite communication links.
6.34 Message Routing
(This content is available online at <http://cnx.org/content/m0076/2.9/>.)

Focusing on electrical networks, most analog ones make inefficient use of communication links because truly dynamic routing is difficult, if not impossible, to obtain. In radio networks, such as commercial television, each station has a dedicated portion of the electromagnetic spectrum, and this spectrum cannot be shared with other stations or used in any other than the regulated way. The telephone network is more dynamic, but once it establishes a call the path through the network is fixed. The users of that path control its use, and may not make efficient use of it (long pauses while one person thinks, for example). Telephone network customers would be quite upset if the telephone company momentarily disconnected the path so that someone else could use it. This kind of connection through a network, fixed for the duration of the communication session, is known as a circuit-switched connection.

The first electrical communications network was the telegraph. Here the network consisted of telegraph operators who transmitted the message efficiently using Morse code and routed the message so that it took the shortest possible path to its destination, while taking into account internal network failures (downed lines, drunken operators). Morse code, which assigned a sequence of dots and dashes to each letter of the alphabet, served as the source coding algorithm. The signal set consisted of a short and a long pulse. Rather than a matched filter, the receiver was the operator's ear, and he wrote the message (translating from received bits to symbols). From today's perspective, the fact that this nineteenth century system handled digital communications is astounding.

Note: Because of the need for a comma between dot-dash sequences to define letter (symbol) boundaries, the average number of bits/symbol, as described in Subtleties of Coding (Example 6.4), exceeded the Source Coding Theorem's upper bound.

Internally, communication networks do have point-to-point communication links between network nodes well described by the Fundamental Model of Communications. However, many messages share the communications channel between nodes using what we call time-domain multiplexing: rather than the continuous communications mode implied in the Model as presented, message sequences are sent, sharing in time the channel's capacity. At a grander viewpoint, the network must route messages (decide what nodes and links to use) based on destination information, the address, that is usually separate from the message information. Routing in networks is necessarily dynamic: the complete route taken by messages is formed as the network handles the message, and can consider issues like inoperative and overly busy links. Note that no omnipotent router views the network as a whole and pre-determines every message's route; in the telephone system, routing takes place when you place the call, and the route is fixed once the phone starts ringing. Modern communication networks strive to achieve the most efficient (timely) and most reliable information delivery system possible.

During the 1960s, it was becoming clear that not only was digital communication technically superior, but also that the wide variety of communication modes (computer login, file transfer, and electronic mail) needed a different approach than point-to-point. The notion of computer networks was born then, and what was then called the ARPANET, now called the Internet, was born. Computer networks elaborate the basic network model by subdividing messages into smaller chunks called packets (Figure 6.26), each of which has its own address and is routed independently of others. The rationale for the network enforcing smaller transmissions was that large file transfers would consume network resources all along the route and, because of the long transmission time, a communication failure might require retransmission of the entire file. By creating packets, the network can better manage congestion.
The analogy is that the postal service, rather than sending a long letter in the envelope you provide, places each page in a separate envelope, addresses each page's envelope using the address on your envelope, and mails them separately. The network does need to make sure packet sequence (page numbering) is maintained, and the network exit point must reassemble the original message accordingly.

Figure 6.26: [A packet's fields: Receiver Address | Transmitter Address | Data Length (bytes) | Data | Error Check.] Long messages, such as files, are broken into separate packets, then transmitted over computer networks. A packet, like a letter, contains the destination address, the return address (transmitter address), and the data. The data includes the message part and a sequence number identifying its order in the transmitted message.

Communications networks are now categorized according to whether they use packets or not. A system like the telephone network is said to be circuit switched: the network establishes a fixed route that lasts the entire duration of the message. Circuit switching has the advantage that once the route is determined, the users can use the capacity provided them however they like. Its main disadvantage is that the users may not use their capacity efficiently, clogging network links and nodes along the way. Packet-switched networks continuously monitor network utilization, and route messages accordingly. Thus, messages can, on the average, be delivered efficiently, but the network cannot guarantee a specific amount of capacity to the users.

6.35 Network architectures and interconnection
(This content is available online at <http://cnx.org/content/m0077/2.10/>.)

The network structure, its architecture (Figure 6.25), typifies what are known as wide area networks (WANs). The nodes, and users for that matter, are spread geographically over long distances. "Long" has no precise definition, and is intended to suggest that the communication links vary widely. The Internet is certainly the largest WAN, spanning the entire earth and beyond. Local area networks, LANs, employ a single communication link and special routing. Perhaps the best known LAN is Ethernet (see <http://cnx.org/content/m0078/latest/>). LANs connect to other LANs and to wide area networks through special nodes known as gateways (Figure 6.27). In the Internet, a computer's address consists of a four byte sequence, which is known as its IP address (Internet Protocol address). An example address is 128.42.4.32: each byte is separated by a period. The first two bytes specify the computer's domain (here Rice University). Computers are also addressed by a more human-readable form: a sequence of alphabetic abbreviations representing institution, type of institution, and computer name. A given computer has both names (128.42.4.32 is the same as soma.rice.edu). Data transmission on the Internet requires the numerical form. So-called name servers translate between alphabetic and numerical forms, and the transmitting computer requests this translation before the message is sent to the network.
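As a small illustration, the numerical form really is just four bytes, and MATLAB can pull them out of the dotted-decimal string directly:

    % The four bytes of an IP address (illustration).
    addr = '128.42.4.32';
    bytes = sscanf(addr, '%d.%d.%d.%d')'   % [128 42 4 32]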
Figure 6.27: [A wide-area network with three LANs attached through gateways; computers A, B, C, D attach to the LANs.] The gateway serves as an interface between local area networks and the Internet. The two gateways shown here translate between LAN and WAN protocols; one of these also interfaces between two LANs, presumably because together the two LANs would be geographically too dispersed.

6.36 Ethernet
(This content is available online at <http://cnx.org/content/m10284/2.13/>.)

Figure 6.28: [A coaxial cable of length L with terminators Z0 at each end; Computer A and Computer B attach through transceivers.] The Ethernet architecture consists of a single coaxial cable terminated at either end by a resistor having a value equal to the cable's characteristic impedance. Computers attach to the Ethernet through an interface known as a transceiver because it sends as well as receives bit streams represented as analog voltages.

Ethernet uses as its communication medium a single length of coaxial cable (Figure 6.28). This cable serves as the "ether", through which all digital data travel. Electrically, computers interface to the coaxial cable through a device known as a transceiver. This device is capable of monitoring the voltage appearing between the core conductor and the shield, as well as applying a voltage to it. Conceptually, it consists of two op-amps, one applying a voltage corresponding to a bit stream (transmitting data) and another serving as an amplifier of Ethernet voltage signals (receiving data). The signal set for Ethernet resembles that shown in BPSK Signal Sets, with one signal the negative of the other. Computers are attached in parallel, resulting in the circuit model for Ethernet shown in Figure 6.29.
Exercise 6.36.1 (Solution on p. 298.)
From the viewpoint of a transceiver's sending op-amp, what is the load it sees and what is the transfer function between this output voltage and some other transceiver's receiving circuit? Why should the output resistor Rout be large?

Figure 6.29: [Top: a transceiver modeled as a source xA(t) in series with an output resistance Rout, driving the coax of impedance Z0 and producing the received signal rA(t). Bottom: the equivalent circuit, with all sources xA(t), xB(t), ... and their resistances Rout in parallel with Z0.] The top circuit expresses a simplified circuit model for a transceiver. The output resistance Rout must be much larger than Z0 so that the sum of the various transmitter voltages add to create the Ethernet conductor-to-shield voltage that serves as the received signal r(t) for all transceivers. In this case, the equivalent circuit shown in the bottom circuit applies.

No one computer has more authority than any other to control when and how messages are sent. Without scheduling authority, you might well wonder how one computer sends to another without the (large) interference that the other computers would produce if they transmitted at the same time. The innovation of Ethernet is that computers schedule themselves by a random-access method. This method relies on the fact that all packets transmitted over the coaxial cable can be received by all transceivers, regardless of which computer might actually be the intended recipient. In communications terminology, Ethernet directly supports broadcast. Each computer goes through the following steps to send a packet.

1. The computer senses the voltage across the cable to determine if some other computer is transmitting.
2. If another computer is transmitting, wait until the transmissions finish and go back to the first step. If the cable has no transmissions, begin transmitting the packet.
3. If the receiver portion of the transceiver determines that no other computer is also sending a packet, continue transmitting the packet until completion.
4. On the other hand, if the receiver senses interference from another computer's transmissions, immediately cease transmission, wait a random amount of time, and then attempt the transmission again (go to step 1) until only one computer transmits and the others defer.

The condition wherein two (or more) computers' transmissions interfere with others is known as a collision.

The reason two computers waiting to transmit may not sense the other's transmission immediately arises because of the finite propagation speed of voltage signals through the coaxial cable. The longest time any computer must wait to determine if its transmissions do not encounter interference is 2L/c, where L is the coaxial cable's length. The maximum-length specification for Ethernet is 1 km. Assuming a propagation speed of 2/3 the speed of light, this time interval is more than 10 µs. As analyzed in Problem 6.31, the number of these time intervals required to resolve a collision is, on the average, less than two!

Exercise 6.36.2 (Solution on p. 298.)
Why does the factor of two enter into this equation? (Consider the worst-case situation of two transmitting computers located at the Ethernet's ends.)

Thus, despite not having separate communication paths among the computers to coordinate their transmissions, the Ethernet random access protocol allows computers to communicate with only a slight degradation in efficiency, as measured by the time taken to resolve collisions relative to the time the Ethernet is used to transmit information.

A subtle consideration in Ethernet is the minimum packet size Pmin. The time required to transmit such packets equals Pmin/C, where C is the Ethernet's capacity in bps. Ethernet now comes in two different types, each with individual specifications, the most distinguishing of which is capacity: 10 Mbps and 100 Mbps. If the minimum transmission time is such that the beginning of the packet has not propagated the full length of the Ethernet before the end-of-transmission, it is possible that two computers will begin transmission at the same time and, by the time their transmissions cease, neither packet will have propagated to the other computer. In this case, computers in-between the two will sense a collision, which renders both computers' transmissions senseless to them, without the two transmitting computers knowing a collision has occurred at all! For Ethernet to succeed, we must have the minimum packet transmission time exceed twice the voltage propagation time:

Pmin/C > 2L/c   or   Pmin > 2LC/c   (6.63)

Thus, for the 10 Mbps Ethernet having a 1 km maximum length specification, the minimum packet size is 200 bits.

Exercise 6.36.3 (Solution on p. 298.)
The 100 Mbps Ethernet was designed more recently than the 10 Mbps alternative. To maintain the same minimum packet size as the earlier, slower version, what should its length specification be? Why should the minimum packet size remain the same?

6.37 Communication Protocols
(This content is available online at <http://cnx.org/content/m0080/2.19/>.)

The complexity of information transmission in a computer network (reliable transmission of bits across a channel, routing, and directing information to the correct destination within the destination computer's operating system) demands an overarching concept of how to organize information delivery. No unique set of rules satisfies the various constraints communication channels and network organization place on information transmission. For example, random access issues in Ethernet are not present in wide-area networks such as the Internet.
A protocol is a set of rules that governs how information is delivered. For example, to use the telephone network, the protocol is to pick up the phone, listen for a dial tone, dial a number having a specific number of digits, wait for the phone to ring, and say hello. In radio, the station uses amplitude or frequency modulation with a specific carrier frequency and transmission bandwidth, and you know to turn on the radio and tune in the station. In technical terms, no one protocol or set of protocols can be used for any communication situation. Be that as it may, communication engineers have found that a common thread runs through the organization of the various protocols. This grand design of information transmission organization runs through all modern networks today.

What has been defined as a networking standard is a layered, hierarchical protocol organization. As shown in Figure 6.30 (Protocol Picture), protocols are organized by function and level of detail.

Figure 6.30 (Protocol Picture): [The ISO network protocol standard's layers, from least to most detailed: Application, Presentation, Session, Transport, Network, Data Link, Physical, with example protocols at their approximate levels: http, telnet, tcp, ip, error correcting codes (ecc), and the signal set.] Protocols are organized according to the level of detail required for information transmission. Protocols at the lower levels (shown toward the bottom) concern reliable bit transmission. Higher level protocols concern how bits are organized to represent information, what kind of information is defined by bit sequences, what software needs the information, and how the information is to be interpreted. Bodies such as the IEEE (Institute for Electronics and Electrical Engineers) and the ISO (International Standards Organization) define standards such as this. Despite being a standard, it does not constrain protocol implementation so much that innovation and competitive individuality are ruled out.

Segregation of information transmission, manipulation, and interpretation into these categories directly affects how communication systems are organized, and what role(s) software systems fulfill. Although not thought about in this way in earlier times, this organizational structure governs the way communication engineers think about all communication systems, from radio to the Internet.

Exercise 6.37.1 (Solution on p. 298.)
How do the various aspects of establishing and maintaining a telephone conversation fit into this layered protocol organization?

We now explicitly state whether we are working in the physical layer (signal set design, for example), the data link layer (source and channel coding), or any other layer. IP abbreviates Internet Protocol, and governs gateways (how information is transmitted between networks having different internal organizations). TCP (transmission control protocol) governs how packets are transmitted through a wide-area network such as the Internet. Telnet is a protocol that concerns how a person at one computer logs on to another computer across a network. A moderately high level protocol such as telnet is not concerned with what data links (wireline or wireless) might have been used by the network or how packets are routed. Rather, it establishes connections between computers and directs each byte (presumed to represent a typed character) to the appropriate operating system component at each end. It is not concerned with what the characters mean or what programs the person is typing to. That aspect of information transmission is left to protocols at higher layers.
Recently, an important set of protocols created the World Wide Web. These protocols exist independently of the Internet. The Internet insures that messages are transmitted efficiently and intact; the Internet is not concerned (to date) with what messages contain. HTTP (hypertext transfer protocol) frames what messages contain and what should be done with the data. The extremely rapid development of the Web on top of an essentially stagnant Internet is but one example of the power of organizing how information transmission occurs without overly constraining the details.

6.38 Information Communication Problems
(This content is available online at <http://cnx.org/content/m10352/2.29/>.)

Problem 6.1: Signals on Transmission Lines
A modulated signal needs to be sent over a transmission line having a characteristic impedance of Z0 = 50 Ohms. So that the signal does not interfere with signals others may be transmitting, it must be bandpass filtered so that its bandwidth is 1 MHz and centered at 3.5 MHz. The filter's gain should be one in magnitude. An op-amp filter (Figure 6.31) is proposed.

Figure 6.31: [An op-amp filter: Vin drives R1 in series with C1 into the op-amp's inverting input; R2 in parallel with C2 forms the feedback path; the op-amp output drives the transmission line, modeled as the load Z0.]

a) What is the transfer function between the input voltage and the voltage across the transmission line?
b) Find values for the resistors and capacitors so that the design goals are met.

Problem 6.2: Noise in AM Systems
The signal s-hat(t) emerging from an AM communication system consists of two parts: the message signal, s(t), and additive noise. The plot (Figure 6.32) shows the message spectrum S(f) and noise power spectrum PN(f). The noise power spectrum lies completely within the signal's band, and has a constant value there of N0/2.

Figure 6.32: [S(f) has amplitude A at f = 0, falling to A/2 at f = +/-W; PN(f) is constant at N0/2 over -W <= f <= W.]

a) What is the message signal's power? What is the signal-to-noise ratio?
b) Because the power in the message decreases with frequency, the signal-to-noise ratio is not constant within subbands. What is the signal-to-noise ratio in the upper half of the frequency band?
c) A clever 241 student suggests filtering the message before the transmitter modulates it so that the signal spectrum is balanced (constant) across frequency. Realizing that this filtering affects the message signal, the student also realizes that the receiver must compensate for the filtering so that the message arrives intact. Draw a block diagram of this communication system. How does this system's signal-to-noise ratio compare with that of the usual AM radio?

Problem 6.3: Complementary Filters
Complementary filters usually have opposite filtering characteristics (like a lowpass and a highpass) and have transfer functions that add to one. Mathematically, H1(f) and H2(f) are complementary if

H1(f) + H2(f) = 1

We can use complementary filters to separate a signal into two parts by passing it through each filter. Each output can then be transmitted separately and the original signal reconstructed at the receiver. Let's assume the message is bandlimited to W Hz and that H1(f) = a / (a + j2(pi)f).

a) What circuits would be used to produce the complementary filters?
b) Sketch a block diagram for a communication system (transmitter and receiver) that employs complementary signal transmission to send a message m(t).
c) What is the receiver's signal-to-noise ratio? How does it compare to the standard system that sends the signal by simple amplitude modulation?
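A quick numerical check of Problem 6.3's filters: with H1(f) = a/(a + j2(pi)f), the complementary filter must be H2(f) = 1 - H1(f) = j2(pi)f/(a + j2(pi)f), a highpass. The cutoff parameter below is an arbitrary choice for plotting.

    % Complementary filters: H1 lowpass, H2 = 1 - H1 highpass (Problem 6.3).
    a = 2*pi*1000;                   % assumed filter parameter
    f = linspace(0, 5000, 1000);
    H1 = a ./ (a + 1j*2*pi*f);
    H2 = 1 - H1;                     % equals (1j*2*pi*f) ./ (a + 1j*2*pi*f)
    plot(f, abs(H1), f, abs(H2));
    xlabel('frequency (Hz)'); ylabel('|H(f)|');
    max(abs(H1 + H2 - 1))            % zero to machine precision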
Problem 6.4: Phase Modulation
A message signal m(t) phase modulates a carrier if the transmitted signal equals

x(t) = A sin(2(pi)fc t + phi_d m(t))

where phi_d is known as the phase deviation. In this problem, the phase deviation is small. As with all analog modulation schemes, assume that |m(t)| < 1, the message is bandlimited to W Hz, and the carrier frequency fc is much larger than W.

a) What is the transmission bandwidth?
b) Find a receiver for this modulation scheme.
c) What is the signal-to-noise ratio of the received signal?

Hint: Use the facts that cos(x) is approximately 1 and sin(x) is approximately x for small x.

Problem 6.5: Digital Amplitude Modulation
Two ELEC 241 students disagree about a homework problem. The issue concerns the discrete-time signal s(n) cos(2(pi)f0 n), where the signal s(n) has no special characteristics and the modulation frequency f0 is known. Sammy says that he can recover s(n) from its amplitude-modulated version by the same approach used in analog communications. Samantha says that approach won't work.

a) What is the spectrum of the modulated signal?
b) Who is correct? Why?
c) The teaching assistant does not want to take sides. He tells them that if s(n) cos(2(pi)f0 n) and s(n) sin(2(pi)f0 n) were both available, s(n) can be recovered. What does he have in mind?

Problem 6.6: Anti-Jamming
One way for someone to keep people from receiving an AM transmission is to transmit noise at the same carrier frequency. Thus, if the carrier frequency is fc, so that the transmitted signal is AT (1 + m(t)) sin(2(pi)fc t), the jammer would transmit AJ n(t) sin(2(pi)fc t + phi). The noise n(t) has a constant power density spectrum over the bandwidth of the message m(t). The channel adds white noise of spectral height N0/2.

a) What would be the output of a traditional AM receiver tuned to the carrier frequency fc?
b) RU Electronics proposes to counteract jamming by using a different modulation scheme. The scheme's transmitted signal has the form AT (1 + m(t)) c(t), where c(t) is a periodic carrier signal (period 1/fc) having the indicated waveform (Figure 6.33). What is the spectrum of the transmitted signal with the proposed scheme? Assume the message bandwidth W is much less than the fundamental carrier frequency fc.
c) The jammer, unaware of the change, is transmitting with a carrier frequency of fc, while the receiver tunes a standard AM receiver to a harmonic of the carrier frequency. What is the signal-to-noise ratio of the receiver tuned to the harmonic having the largest power that does not contain the jammer?

Figure 6.33: [One period of the unit-amplitude carrier c(t): it equals 1 from 0 to 1/(4fc), -1 from 1/(4fc) to 3/(4fc), and 1 from 3/(4fc) to 1/fc.]

Problem 6.7: Secret Communications
A system for hiding AM transmissions has the transmitter randomly switching between two carrier frequencies f1 and f2. "Random switching" means that one carrier frequency is used for some period of time, switches to the other for some other period of time, back to the first, etc. The receiver knows what the carrier frequencies are, but not when carrier frequency switches occur. Consequently, the receiver must be designed to receive the transmissions regardless of which carrier frequency is used. Assume the message signal has bandwidth W Hz. The channel attenuates the transmitted signal x(t) and adds white noise of spectral height N0/2.

a) How different should the carrier frequencies be so that the message could be received?
b) What receiver would you design?
c) What signal-to-noise ratio for the demodulated signal does your receiver yield?
Problem 6.8: AM Stereo
Stereophonic radio transmits two signals simultaneously that correspond to what comes out of the left and right speakers of the receiving radio. While FM stereo is commonplace, AM stereo is not, but is much simpler to understand and analyze. An amazing aspect of AM stereo is that both signals are transmitted within the same bandwidth as used to transmit just one. Assume the left and right signals are bandlimited to W Hz.

x(t) = A (1 + ml(t)) cos(2(pi)fc t) + A mr(t) sin(2(pi)fc t)

a) Find the Fourier transform of x(t). What is the transmission bandwidth and how does it compare with that of standard AM?
b) Let us use the coherent demodulator shown in Figure 6.34 as the receiver. Show that this receiver indeed works: it produces the left and right signals separately.
c) Assume the channel adds white noise to the transmitted signal. Find the signal-to-noise ratio of each signal.

Figure 6.34: [The received signal x(t) is multiplied by cos(2(pi)fc t) and by sin(2(pi)fc t) in two parallel paths, each followed by a lowpass filter of bandwidth W Hz.]

Problem 6.9: A Novel Communication System
A clever system designer claims that the depicted transmitter (Figure 6.35) has, despite its complexity, advantages over the usual amplitude modulation system. The message signal m(t) is bandlimited to W Hz, and the carrier frequency satisfies fc >> W. The channel attenuates the transmitted signal and adds white noise of spectral height N0/2.

Figure 6.35: [The message m(t) is multiplied by A sin(2(pi)fc t) and passed through the filter H(f); this output is added to the product of m(t) and A cos(2(pi)fc t) to form x(t).]

The transfer function H(f) is given by

H(f) = j for f < 0,  H(f) = -j for f > 0

a) Find an expression for the spectrum of x(t). Sketch your answer.
b) Show that the usual coherent receiver demodulates this signal.
c) Find the signal-to-noise ratio that results when this receiver is used.
d) Find a superior receiver (one that yields a better signal-to-noise ratio), and analyze its performance.

Problem 6.10: Multi-Tone Digital Communication
In a so-called multi-tone system, several bits are gathered together and transmitted simultaneously on different carrier frequencies during a T second interval. For example, B bits would be transmitted according to

x(t) = A sum over k = 0, ..., B-1 of bk sin(2(pi)(k+1) f0 t),  0 <= t < T   (6.64)

Here, f0 is the frequency offset for each bit, and it is harmonically related to the bit interval T. The value of bk is either -1 or +1.

a) Find a receiver for this transmission scheme.
b) An ELEC 241 alumnus likes digital systems so much that he decides to produce a discrete-time version. He samples the received signal (sampling interval Ts = T/N). How should N be related to B, the number of simultaneously transmitted bits?
c) The alumnus wants to find a simple form for the receiver so that his software implementation runs as efficiently as possible. How would you recommend he implement the receiver?
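A discrete-time sketch of the multi-tone scheme shows why it works: sampled sinusoids at harmonically related frequencies are orthogonal over the bit interval, so correlating against each subcarrier recovers each bit independently. The parameters below are illustrative choices only (B = 4 bits, N = 64 samples).

    % Multi-tone transmission and a correlation receiver (sketch).
    B = 4; T = 1e-3; f0 = 1/T;          % f0 harmonically related to T
    N = 64; t = (0:N-1)*(T/N);          % N samples per interval
    b = [1 -1 -1 1];                    % bits to transmit
    x = zeros(1, N);
    for k = 0:B-1
      x = x + b(k+1)*sin(2*pi*(k+1)*f0*t);
    end
    bhat = zeros(1, B);
    for k = 0:B-1                       % correlate with each subcarrier
      bhat(k+1) = sign(sum(x.*sin(2*pi*(k+1)*f0*t)));
    end
    bhat                                % recovers b

All B correlations can be computed at once from the imaginary part of an FFT of the samples, which hints at the efficient implementation asked for in part (c).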
Problem 6.11: City Radio Channels
In addition to additive white noise, metropolitan cellular radio channels also contain multipath: the attenuated signal and a delayed, further attenuated signal are received superimposed. As shown in Figure 6.36, multipath occurs because the buildings reflect the signal and the reflected path length between transmitter and receiver is longer than the direct path.

[Figure 6.36: the direct path and a reflected path (off a building) between the transmitter and the receiver.]

a) Assume that the length of the direct path is d meters and the reflected path is 1.5 times as long. What is the model for the channel, including the multipath and the additive noise?
b) Assume d is 1 km. Find and sketch the magnitude of the transfer function for the multipath component of the channel. How would you characterize this transfer function?
c) Would the multipath affect AM radio? If not, why not; if so, how so? Analog cellular telephone uses amplitude modulation to transmit voice. Would analog cellular telephone, which operates at much higher carrier frequencies (800 MHz vs. 1 MHz for radio), be affected or not?
d) How would the usual AM receiver be modified to minimize multipath effects? Express your modified receiver as a block diagram.

Problem 6.12: Downlink Signal Sets
In digital cellular telephone systems, the base station (transmitter) needs to relay different voice signals to several telephones at the same time. Rather than send signals at different frequencies, a clever Rice engineer suggests using a different signal set for each data stream. For example, for two simultaneous data streams, she suggests BPSK signal sets that have the depicted basic signals (Figure 6.37). Thus, bits are represented in data stream 1 by s1(t) and −s1(t) and in data stream 2 by s2(t) and −s2(t), each of which are modulated by a 900 MHz carrier. The transmitter sends the two data streams so that their bit intervals align. Each receiver uses a matched filter for its receiver. The requirement is that each receiver not receive the other's bit stream.

[Figure 6.37: the basic signals s1(t) and s2(t), pulses of amplitude ±A defined over the bit interval [0, T).]

a) What is the block diagram describing the proposed system?
b) What is the transmission bandwidth required by the proposed system?
c) Will the proposal work? Does the fact that the two data streams are transmitted in the same bandwidth at the same time mean that each receiver's performance is affected? Can each bit stream be received without interference from the other?

Problem 6.13: Mixed Analog and Digital Transmission
A signal m(t) is transmitted using amplitude modulation in the usual way. The signal has bandwidth W Hz, and the carrier frequency is fc. In addition to sending this analog signal, the transmitter also wants to send ASCII text in an auxiliary band that lies slightly above the analog transmission band. Using an 8-bit representation of the characters and a simple baseband BPSK signal set (the constant signal +1 corresponds to a 0, the constant −1 to a 1), the data signal d(t) representing the text is transmitted at the same time as the analog signal m(t). The transmission signal spectrum is as shown (Figure 6.38), and has a total bandwidth B.

[Figure 6.38: the spectrum X(f): an analog band of width 2W centered at the carrier frequency fc, with the digital band just above it, for a total bandwidth B.]

a) Write an expression for the time-domain version of the transmitted signal in terms of m(t) and the digital signal d(t).
b) What is the maximum datarate the scheme can provide in terms of the available bandwidth?
c) Find a receiver that yields both the analog signal and the bit stream.

Problem 6.14: Digital Stereo
Just as with analog communication, it should be possible to send two signals simultaneously over a digital channel. Assume you have two CD-quality signals (each sampled at 44.1 kHz with 16 bits/sample). One suggested transmission scheme is to use a quadrature BPSK scheme. If b(1)(n) and b(2)(n) each represent a bit stream, the transmitted signal has the form

x(t) = A Σn [ b(1)(n) sin(2πfc(t − nT)) p(t − nT) + b(2)(n) cos(2πfc(t − nT)) p(t − nT) ]

where p(t) is a unit-amplitude pulse having duration T and b(1)(n), b(2)(n) equal either +1 or −1 according to the bit being transmitted for each signal. The channel adds white noise and attenuates the transmitted signal.
a) What value would you choose for the carrier frequency fc?
b) What is the transmission bandwidth?
c) What receiver would you design that would yield both bit streams?
Problem 6.15: Digital and Analog Speech Communication
Suppose we transmit speech signals over comparable digital and analog channels. We want to compare the resulting quality of the received signals. Assume the transmitters use the same power, and the channels introduce the same attenuation and additive white noise. Assume the speech signal has a 4 kHz bandwidth and, in the digital case, is sampled at an 8 kHz rate with eight-bit A/D conversion. Assume simple binary source coding and a modulated BPSK transmission scheme.
a) What is the transmission bandwidth of the analog (AM) and digital schemes?
b) Assume the speech signal's amplitude has a magnitude less than one. What is the maximum amplitude quantization error introduced by the A/D converter?
c) In the digital case, each bit in a quantized speech sample is received in error with probability pe that depends on the signal-to-noise ratio Eb/N0. However, errors in each bit have a different impact on the error in the reconstructed speech sample. Find the mean-squared error between the transmitted and received amplitude.
d) In the digital case, the recovered speech signal can be considered to have two noise sources added to each sample's true value: One is the A/D amplitude quantization noise and the second is due to channel errors. Because these are separate, the total noise power equals the sum of these two. What is the signal-to-noise ratio of the received speech signal as a function of pe?
e) Compute and plot the received signal's signal-to-noise ratio for the two transmission schemes as a function of channel signal-to-noise ratio.
f) Compare and evaluate these systems.

Problem 6.16: Source Compression
Consider the following 5-letter source.

Letter   Probability
a        0.5
b        0.25
c        0.125
d        0.0625
e        0.0625
Table 6.5

a) Find this source's entropy.
b) Show that the simple binary coding is inefficient.
c) Find an unequal-length codebook for this sequence that satisfies the Source Coding Theorem. Does your code achieve the entropy limit?
d) How much more efficient is this code than the simple binary code?

Problem 6.17: Source Compression
Consider the following 5-letter source.

Letter   Probability
a        0.4
b        0.2
c        0.15
d        0.15
e        0.1
Table 6.6

a) Find this source's entropy.
b) Show that the simple binary coding is inefficient.
c) Find the Huffman code for this source. What is its average code length?
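A calculator-style check for source-coding problems like 6.16 and 6.17: the short Matlab sketch below computes the entropy of the Table 6.5 source and the average length of one unequal-length prefix code. The codeword lengths used are an illustrative assumption, not the unique answer.

  p = [0.5 0.25 0.125 0.0625 0.0625];  % Table 6.5 letter probabilities
  H = -sum(p.*log2(p))                 % entropy: 1.875 bits
  Lsimple = ceil(log2(length(p)))      % simple binary code: 3 bits per letter
  len = [1 2 3 4 4];                   % lengths of the prefix code 0, 10, 110, 1110, 1111
  Lcode = sum(p.*len)                  % average length: 1.875 bits, the entropy

For this source the probabilities are all powers of 1/2, so an unequal-length code can meet the entropy limit exactly.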
Problem 6.18: Speech Compression
When we sample a signal, such as speech, we quantize the signal's amplitude to a set of integers. For a b-bit converter, signal amplitudes are represented by 2^b integers. Although these integers could be represented by a binary code for digital transmission, we should consider whether a Huffman coding would be more efficient.
a) Load into Matlab the segment of speech contained in y.mat. Its sampled values lie in the interval (-1, 1). To simulate a 3-bit converter, we use Matlab's round function to create quantized amplitudes corresponding to the integers [0 1 2 3 4 5 6 7]:

  y_quant = round(3.5*y + 3.5);      % map (-1,1) onto the integers 0 through 7

The following Matlab program computes the number of times each quantized value occurs:

  for n = 0:7
    count(n+1) = sum(y_quant == n);  % occurrences of the quantized value n
  end

Find the relative frequency of occurrence of quantized amplitude values.
b) Find the entropy of this source.
c) Find the Huffman code for this source. How would you characterize this source code in words?
d) How many fewer bits would be used in transmitting this speech segment with your Huffman code in comparison to simple binary coding?

Problem 6.19: Digital Communication
In a digital cellular system, a signal bandlimited to 5 kHz is sampled with a two-bit A/D converter at its Nyquist frequency. The sample values are found to have the shown relative frequencies.

Sample Value   Probability
0              0.15
1              0.35
2              0.3
3              0.2
Table 6.7

We send the bit stream consisting of Huffman-coded samples using one of the two depicted signal sets (Figure 6.39).

[Figure 6.39: the two candidate signal sets: in Signal Set 1 the basic signals s0(t) and s1(t) are pulses of amplitude A lasting T seconds; Signal Set 2 uses amplitudes A and ±A/2 over the same interval.]

a) What is the datarate of the compressed source?
b) Which choice of signal set maximizes the communication system's performance?
c) With no error-correcting coding, what signal-to-noise ratio would be needed for your chosen signal set to guarantee that the bit error probability will not exceed 10^−3? If the receiver moves twice as far from the transmitter (relative to the distance at which the 10^−3 error rate was obtained), how does the performance change?

Problem 6.20: Signal Compression
Letters drawn from a four-symbol alphabet have the indicated probabilities.

Letter   Probability
a        1/3
b        1/3
c        1/4
d        1/12
Table 6.8

a) What is the average number of bits necessary to represent this alphabet?
b) Using a simple binary code for this alphabet, a two-bit block of data bits naturally emerges. Find an error correcting code for two-bit data blocks that corrects all single-bit errors.
c) How would you modify your code so that the probability of the letter a being confused with the letter d is minimized? If so, what is your new code; if not, demonstrate that this goal cannot be achieved.

Problem 6.21: Universal Product Code
The Universal Product Code (UPC), often known as a bar code, labels virtually every sold good. An example (Figure 6.40) of a portion of the code is shown. Here a sequence of black and white bars, each having width d, presents an 11-digit number (consisting of decimal digits) that uniquely identifies the product. In retail stores, laser scanners read this code, and after accessing a database of prices, enter the price into the cash register.

[Figure 6.40: a portion of the UPC bar code: alternating black and white bars, each of width d.]

a) How many bars must be used to represent a single digit?
b) A complication of the laser scanning system is that the bar code must be read either forwards or backwards. Now how many bars are needed to represent each digit?
c) What is the probability that the 11-digit code is read correctly if the probability of reading a single bit incorrectly is pe?
d) How many error correcting bars would need to be present so that any single bar error occurring in the 11-digit code can be corrected?

Problem 6.22: Error Correcting Codes
A code maps pairs of information bits into codewords of length 5 as follows.

Data   Codeword
00     00000
01     01101
10     10111
11     11010
Table 6.9

a) What is this code's efficiency?
b) Find the generator matrix G and parity-check matrix H for this code.
c) Give the decoding table for this code. How many patterns of 1, 2, and 3 errors are correctly decoded?
d) What is the block error probability (the probability of any number of errors occurring in the decoded codeword)?
Problem 6.23: Digital Communication
A digital source produces sequences of nine letters with the following probabilities.

letter        a     b     c     d     e     f      g      h      i
probability   1/4   1/8   1/8   1/8   1/8   1/16   1/16   1/16   1/16
Table 6.10

a) Find a Huffman code that compresses this source. How does the resulting code compare with the best possible code?
b) A clever engineer proposes the following (6,3) code to correct errors after transmission through a digital channel.

c1 = d1           c4 = d1 ⊕ d2 ⊕ d3
c2 = d2           c5 = d2 ⊕ d3
c3 = d3           c6 = d1
Table 6.11

What is the error correction capability of this code?
c) The channel's bit error probability is 1/8. What kind of code should be used to transmit data over this channel?

Problem 6.24: Overly Designed Error Correction Codes
An Aggie engineer wants not only to have codewords for his data, but also to hide the information from Rice engineers (no fear of the UT engineers). He decides to represent 3-bit data with 6-bit codewords in which none of the data bits appear explicitly.

c1 = d1 ⊕ d2      c4 = d1 ⊕ d2 ⊕ d3
c2 = d2 ⊕ d3      c5 = d1 ⊕ d2
c3 = d1 ⊕ d3      c6 = d1 ⊕ d2 ⊕ d3
Table 6.12

a) Find the generator matrix G and parity-check matrix H for this code.
b) Find a 3×6 matrix that recovers the data bits from the codeword.
c) What is the error correcting capability of the code?

Problem 6.25: Error Correction?
It is important to realize that when more transmission errors occur than can be corrected, error correction algorithms believe that a smaller number of errors have occurred and correct accordingly. For example, consider a (7,4) Hamming code having the generator matrix

      1 0 0 0
      0 1 0 0
      0 0 1 0
  G = 0 0 0 1
      1 1 1 0
      0 1 1 1
      1 1 0 1

This code corrects all single-bit errors, but if a double-bit error occurs, it corrects using a single-bit error correction approach.
a) How many double-bit errors can occur in a codeword?
b) For each double-bit error pattern, what is the result of channel decoding? Express your result as a binary error sequence for the data bits.
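Problems 6.22 through 6.25 all reduce to modulo-2 matrix arithmetic, which Matlab can mimic with mod(..., 2). A minimal sketch using the (7,4) generator matrix as reconstructed in Problem 6.25; the parity-check matrix H shown here is one consistent choice, an assumption rather than part of the problem statement.

  G = [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1;  % data bits pass straight through
       1 1 1 0; 0 1 1 1; 1 1 0 1];          % parity bits
  H = [1 1 1 0 1 0 0;                       % one parity-check matrix with H*G = 0 (mod 2)
       0 1 1 1 0 1 0;
       1 1 0 1 0 0 1];
  b = [1 0 1 1]';                           % a data block
  c = mod(G*b, 2);                          % the corresponding codeword
  mod(H*c, 2)                               % all-zero syndrome: a valid codeword
  e = zeros(7,1); e(3) = 1;                 % single-bit error in position 3
  mod(H*(c+e), 2)                           % nonzero syndrome equals column 3 of H

The syndrome of a corrupted codeword equals the column of H at the error position, which is exactly how single-bit errors are located and corrected.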
Problem 6.26: Selective Error Correction
We have found that digital transmission errors occur with a probability that remains constant no matter how "important" the bit may be. For example, in transmitting digitized signals, errors occur as frequently for the most significant bit as they do for the least significant bit. Yet, the former errors have a much larger impact on the overall signal-to-noise ratio than the latter. Rather than applying error correction to each sample value, why not concentrate the error correction on the most important bits? Assume that we sample an 8 kHz signal with an 8-bit A/D converter. We use single-bit error correction on the most significant four bits and none on the least significant four.
a) How many error correction bits must be added to provide single-bit error correction on the most significant bits?
b) How large must the signal-to-noise ratio of the received signal be to insure reliable communication?
c) Assume that once error correction is applied, only the least significant 4 bits can be received in error. How much would the output signal-to-noise ratio improve using this error correction scheme?

Problem 6.27: Compact Disk
Errors occur in reading audio compact disks. Very few errors are due to noise in the compact disk player; most occur because of dust and scratches on the disk surface. Because scratches span several bits, a single-bit error is rare; several consecutive bits in error are much more common. Assume that scratch and dust-induced errors are four or fewer consecutive bits long. The audio CD standard requires 16-bit, 44.1 kHz analog-to-digital conversion of each channel of the stereo analog signal.
a) How many error-correction bits are required to correct scratch-induced errors for each 16-bit sample?
b) Rather than use a code that can correct several errors in a codeword, a clever 241 engineer proposes interleaving consecutive coded samples. As the cartoon (Figure 6.41) shows, the bits representing coded samples are interspersed before they are written on the CD. The CD player de-interleaves the coded data, then performs error-correction. Now, evaluate this proposed scheme with respect to the non-interleaved one.

[Figure 6.41: a 4-way interleaver: the bits of sample n (1111), sample n+1 (1010), sample n+2 (0000), and sample n+3 (0101) are interspersed to produce the stream 1100100111001001.]

Problem 6.28: Communication System Design
RU Communication Systems has been asked to design a communication system that meets the following requirements.
• The baseband message signal has a bandwidth of 10 kHz.
• The RUCS engineers find that the entropy H of the sampled message signal depends on how many bits b are used in the A/D converter (see the table below).
• The signal is to be sent through a noisy channel having a bandwidth of 25 kHz centered at 2 MHz and a signal-to-noise ratio within that band of 10 dB.
• Once received, the message signal must have a signal-to-noise ratio of at least 20 dB.

b   H
3   2.19
4   3.25
5   4.28
6   5.35
Table 6.13

Can these specifications be met? Justify your answer.

Problem 6.29: HDTV
As HDTV (high-definition television) was being developed, the FCC restricted this digital system to use in the same bandwidth (6 MHz) as its analog (AM) counterpart. HDTV video is sampled on a 1035 × 1840 raster at 30 images per second for each of the three colors. The least-acceptable picture received by television sets located at an analog station's broadcast perimeter has a signal-to-noise ratio of about 10 dB.
a) Using signal-to-noise ratio as the criterion, how many bits per sample must be used to guarantee that a high-quality picture, which achieves a signal-to-noise ratio of 20 dB, can be received by any HDTV set within the same broadcast region?
b) Assuming the digital television channel has the same characteristics as an analog one, how much compression must HDTV systems employ?

Problem 6.30: Digital Cellular Telephones
In designing a digital version of a wireless telephone, you must first consider certain fundamentals. First of all, the quality of the received signal, as measured by the signal-to-noise ratio, must be at least as good as that provided by wireline telephones (30 dB), and the message bandwidth must be the same as wireline telephone. The signal-to-noise ratio of the allocated wireless channel, which has a 5 kHz bandwidth, measured 100 meters from the tower is 70 dB. The desired range for a cell is 1 km. Can a digital cellphone system be designed according to these criteria?

Problem 6.31: Optimal Ethernet Random Access Protocols
Assume a population of N computers want to transmit information on a random access channel. The access algorithm works as follows.
• Before transmitting, flip a coin that has probability p of coming up heads.
• If only one of the N computers' coins comes up heads, its transmission occurs successfully, and the others must wait until that transmission is complete and then resume the algorithm.
• If none or more than one head comes up, the N computers will either remain silent (no heads) or a collision will occur (more than one head). This unsuccessful transmission situation will be detected by all computers once the signals have propagated the length of the cable, and the algorithm resumes (return to the beginning).
a) What is the optimal probability to use for flipping the coin? In other words, what should p be to maximize the probability that exactly one computer transmits?
b) What is the probability of one computer transmitting when this optimal value of p is used as the number of computers grows to infinity?
c) Using this optimal probability, what is the average number of coin flips that will be necessary to resolve the access so that one computer successfully transmits?
d) Evaluate this algorithm. Is it realistic? Is it efficient?
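The optimization in Problem 6.31(a) is easy to explore numerically before deriving it. A minimal Matlab sketch (N = 10 computers is an assumed value) evaluates the probability that exactly one of N coins comes up heads, N·p·(1−p)^(N−1), over a grid of p:

  N = 10;                            % population of computers (assumed)
  p = linspace(0.001, 0.5, 1000);    % candidate coin probabilities
  P = N*p.*(1-p).^(N-1);             % probability that exactly one head comes up
  [Pmax, i] = max(P);
  p(i), Pmax                         % numerically, the best p sits near 1/N

Sweeping N upward shows the maximized probability settling toward a constant, which previews part (b).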
Problem 6.32: Repeaters
Because signals attenuate with distance from the transmitter, repeaters are frequently employed for both analog and digital communication. For example, let's assume that the transmitter and receiver are D m apart, and a repeater is positioned halfway between them (Figure 6.42). What the repeater does is amplify its received signal to exactly cancel the attenuation encountered along the first leg and to re-transmit the signal to the ultimate receiver. However, the signal the repeater receives contains white noise as well as the transmitted signal. The receiver experiences the same amount of white noise as the repeater.

[Figure 6.42: transmitter, a D/2 link to the repeater, then a D/2 link to the receiver; total separation D.]

a) What is the block diagram for this system?
b) For an amplitude-modulation communication system, what is the signal-to-noise ratio of the demodulated signal at the receiver? Is this better or worse than the signal-to-noise ratio when no repeater is present?
c) For digital communication, we must consider the system's capacity. Is the capacity larger with the repeater system than without it? If so, when; if not, why not?

Problem 6.33: Designing a Speech Communication System
We want to examine both analog and digital communication alternatives for a dedicated speech transmission system. Assume the speech signal has a 5 kHz bandwidth. The wireless link between transmitter and receiver is such that 200 watts of power can be received at a pre-assigned carrier frequency. We have some latitude in choosing the transmission bandwidth, but the noise power added by the channel increases with bandwidth with a proportionality constant of 0.1 watt/kHz.
a) Design an analog system for sending speech under this scenario. What is the received signal-to-noise ratio under these design constraints?
b) How many bits must be used in the A/D converter to achieve the same signal-to-noise ratio?
c) Is the bandwidth required by the digital channel to send the samples without error greater or smaller than the analog bandwidth?
Problem 6.34: Digital vs. Analog
You are the Chairman/Chairwoman of the FCC. The frequency band 3 MHz to 3.5 MHz has been allocated for a new high-quality AM band. Each station licensed for this band will transmit signals having a bandwidth of 10 kHz, twice the message bandwidth of what current stations can send.
a) How many stations can be allocated to this band and with what carrier frequencies?
b) Looking ahead, conversion to digital transmission is not far in the future. The characteristics of the new digital radio system need to be established and you are the boss! Detail the characteristics of the analog-to-digital converter that must be used to prevent aliasing and ensure a signal-to-noise ratio of 25 dB.
c) Without employing compression, how many digital radio stations could be allocated to the band if each station used BPSK modulation? Evaluate this design approach.

Solutions to Exercises in Chapter 6

Solution to Exercise 6.2.1 (p. 230)
In both cases, the answer depends less on geometry than on material properties. For coaxial cable, c = 1/√(μd εd). For twisted pair, c = 1/√(με) · √( arccosh(d/2r) / ( δ/(2r) + arccosh(d/2r) ) ).

Solution to Exercise 6.3.1 (p. 231)
You can find these frequencies from the spectrum allocation chart (Section 7.3). Light in the middle of the visible band has a wavelength of about 600 nm, which corresponds to a frequency of 5 × 10^14 Hz. Cable television transmits within the same frequency band as broadcast television (about 200 MHz, or 2 × 10^8 Hz). Thus, the visible electromagnetic frequencies are over six orders of magnitude higher!

Solution to Exercise 6.3.2 (p. 231)
As frequency increases, 2πf C̃ ≫ G̃ and 2πf L̃ ≫ R̃. In this high-frequency region,

γ = j2πf √(L̃C̃) · √( (1 + G̃/(j2πf C̃)) (1 + R̃/(j2πf L̃)) )
  ≃ j2πf √(L̃C̃) · ( 1 + (1/2)(1/(j2πf)) (G̃/C̃ + R̃/L̃) )
  = j2πf √(L̃C̃) + (1/2) ( G̃ √(L̃/C̃) + R̃ √(C̃/L̃) )    (6.65)

Thus, the attenuation (space) constant equals the real part of this expression, and equals a(f) = (G̃ Z0 + R̃/Z0) / 2.

Solution to Exercise 6.3.3 (p. 232)
As shown previously (6.11), voltages and currents in a wireline channel, which is modeled as a transmission line having resistance, capacitance and inductance, decay exponentially with distance. The exponential decay of wireline channels occurs because they have losses and some filtering. The inverse-square law governs free-space propagation because such propagation is lossless, with the inverse-square law a consequence of the conservation of power.

Solution to Exercise 6.5.1 (p. 233)

[Figure 6.43: the line-of-sight geometry: two antennas of heights h1 and h2 on an earth of radius R, with tangency distances d1 and d2.]

Use the Pythagorean Theorem, (h + R)² = R² + d², where h is the antenna height, d is the distance from the top of the earth to a tangency point with the earth's surface, and R the earth's radius. The line-of-sight distance between two earth-based antennae equals

dLOS = √(2 h1 R + h1²) + √(2 h2 R + h2²)    (6.66)

As the earth's radius is much larger than the antenna height, we have to a good approximation that dLOS ≃ √(2 h1 R) + √(2 h2 R). If one antenna is at ground elevation, say h2 = 0, the other antenna's range is √(2 h1 R).
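Equation (6.66) is easy to evaluate numerically; in the Matlab sketch below, the antenna heights are assumed example values, not quantities from the text.

  R  = 6.37e6;                       % earth's radius, meters
  h1 = 100; h2 = 0;                  % antenna heights, meters (example values)
  dLOS = sqrt(2*h1*R + h1^2) + sqrt(2*h2*R + h2^2)   % about 3.57e4 m, i.e. 35.7 km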
Solution to Exercise 6.5.2 (p. 234)
Transmission to the satellite, known as the uplink, encounters inverse-square law power losses. Reflecting off the ionosphere not only encounters the same loss, but twice: reflection is the same as transmitting exactly what arrives, which means that the total loss is the product of the uplink and downlink losses. The geosynchronous orbit lies at an altitude of 35700 km; the ionosphere begins at an altitude of about 50 km. The amplitude loss in the satellite case is proportional to 2.8 × 10^−8; for Marconi, it was proportional to 4.4 × 10^−10. Marconi was very lucky.

Solution to Exercise 6.7.1 (p. 234)
As frequency decreases, wavelength increases and can approach the distance between the earth's surface and the ionosphere. Assuming a distance between the two of 80 km, the relation λf = c gives a corresponding frequency of 3.75 kHz. Such low carrier frequencies would be limited to low bandwidth analog communication and to low datarate digital communications. The US Navy did use such a communication scheme to reach all of its submarines at once.

Solution to Exercise 6.8.1 (p. 235)
If the interferer's spectrum does not overlap that of our communications channel (the interferer is out-of-band), we need only use a bandpass filter that selects our transmission band and removes other portions of the spectrum.

Solution to Exercise 6.9.1 (p. 236)
The additive-noise channel is not linear because it does not have the zero-input-zero-output property (even though we might transmit nothing, the receiver's input consists of noise).

Solution to Exercise 6.11.1 (p. 238)
X(f) = (1/2) M(f − fc) + (1/2) M(f + fc).

Solution to Exercise 6.12.1 (p. 239)
The key here is that the two spectra M(f − fc) and M(f + fc) do not overlap, because we have assumed that the carrier frequency fc is much greater than the signal's highest frequency. Multiplying at the receiver by the carrier shifts this spectrum to fc and to −fc, and scales the result by half. The signal-related portion of the transmitted spectrum is given by

(1/2) X(f − fc) + (1/2) X(f + fc) = (1/4) (M(f − 2fc) + M(f)) + (1/4) (M(f + 2fc) + M(f))
                                  = (1/4) M(f − 2fc) + (1/2) M(f) + (1/4) M(f + 2fc)    (6.67)

The signal components centered at twice the carrier frequency are removed by the lowpass filter, while the baseband signal M(f) emerges.

Solution to Exercise 6.12.2 (p. 240)
Separation is 2W. Commercial AM signal bandwidth is 5 kHz. Speech is well contained in this bandwidth, much better than in the telephone!

Solution to Exercise 6.13.1 (p. 241)
x(t) = Σ_{n=−∞}^{∞} s_{b(n)}(t − nT).

Solution to Exercise 6.14.1 (p. 243)
k = 4.

Solution to Exercise 6.14.2 (p. 243)
x(t) = Σn (−1)^{b(n)} A pT(t − nT) sin(2πkt/T).

Solution to Exercise 6.14.3 (p. 243)
The harmonic distortion is 10%.

Solution to Exercise 6.14.4 (p. 244)
Twice the baseband bandwidth, because both positive and negative frequencies are shifted to the carrier by the modulation: 3R.
Solution to Exercise 6.16.1 (p. 246)
In BPSK, the signals are negatives of one another: s1(t) = −s0(t). Consequently, the output of each multiplier-integrator combination is the negative of the other. Choosing the largest therefore amounts to choosing which one is positive. We only need to calculate one of these. If it is positive, we are done. If it is negative, we choose the other signal.

Solution to Exercise 6.16.2 (p. 247)
The matched filter outputs are ±A²T/2 because the sinusoid has less power than a pulse having the same amplitude; the cross term normally obtained in computing the magnitude-squared equals zero.

Solution to Exercise 6.17.1 (p. 249)
The noise-free integrator outputs differ by αA²T, the factor-of-two smaller value than in the baseband case arising because the sinusoidal signals have less energy for the same amplitude. Stated in terms of Eb, the difference equals 2αEb, just as in the baseband case.

Solution to Exercise 6.17.2 (p. 249)
The noise-free integrator output difference now equals αA²T = 2αEb. The noise power remains the same as in the baseband case, which from the probability of error equation (6.46) yields pe = Q(√(2α²Eb/N0)).

Solution to Exercise 6.18.1 (p. 252)
Equally likely symbols each have a probability of 1/K. Thus, H(A) = −Σk (1/K) log2(1/K) = log2 K. To prove that this is the maximum-entropy probability assignment, we must explicitly take into account that the probabilities sum to one. Focus on a particular symbol, say the first. The probability Pr[a0] appears twice in the entropy formula: in the terms Pr[a0] log2 Pr[a0] and (1 − (Pr[a0] + · · · + Pr[aK−2])) log2 (1 − (Pr[a0] + · · · + Pr[aK−2])). The derivative with respect to this probability (and all the others) must be zero. The derivative equals log2 Pr[a0] − log2 (1 − (Pr[a0] + · · · + Pr[aK−2])), and all other derivatives have the same form (just substitute your letter's index). Thus, each probability must equal the others, and we are done. For the minimum entropy answer, one term is 1 · log2 1 = 0, and the others are 0 · log2 0, which we define to be zero also. The minimum value of entropy is zero.

Solution to Exercise 6.20.1 (p. 255)
The Huffman coding tree for the second set of probabilities is identical to that for the first (Figure 6.18 (Huffman Coding Tree)). The average code length is 1·(1/2) + 2·(1/4) + 3·(1/5) + 3·(1/20) = 1.75 bits. The entropy calculation is straightforward: H(A) = −( (1/2) log (1/2) + (1/4) log (1/4) + (1/5) log (1/5) + (1/20) log (1/20) ) = 1.68 bits.

Solution to Exercise 6.22.1 (p. 256)
T = 1 / (B̄(A) R).

Solution to Exercise 6.23.1 (p. 256)
Because no codeword begins with another's codeword, the first codeword encountered in a bit stream must be the right one. Note that we must start at the beginning of the bit stream; jumping into the middle does not guarantee perfect decoding. The end of one codeword and the beginning of another could be a codeword, and we would get lost.

Solution to Exercise 6.23.2 (p. 256)
Consider the bitstream …0110111… taken from the bitstream 0|10|110|110|111|…. We would decode the initial part incorrectly, then would synchronize. If we had a fixed-length code (say 00, 01, 10, 11), the situation is much worse: jumping into the middle leads to no synchronization at all!

Solution to Exercise 6.25.1 (p. 261)
This question is equivalent to 3pe(1 − pe) + pe² ≤ 1, or 2pe² − 3pe + 1 ≥ 0. Because this is an upward-going parabola, we need only check where its roots are. Using the quadratic formula, we find that they are located at 1/2 and 1. Consequently, in the range 0 ≤ pe ≤ 1/2, the error rate produced by coding is smaller.

Solution to Exercise 6.26.1 (p. 261)
With no coding, the average bit-error probability pe is given by the probability of error equation (6.47): pe = Q(√(2α²Eb/N0)). With a threefold repetition code, the bit-error probability is given by 3(p′e)²(1 − p′e) + (p′e)³, where p′e = Q(√(2α²Eb/3N0)). Plotting this reveals that the increase in bit-error probability out of the channel because of the energy reduction is not compensated by the repetition coding.

[Figure 6.44: error probability with and without (3,1) repetition coding, plotted versus signal-to-noise ratio; the coded curve lies above the uncoded one.]
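The comparison behind Figure 6.44 can be regenerated with a few lines of Matlab; α = 1 is assumed here, and Q(x) is formed from the base erfc function.

  Q   = @(x) 0.5*erfc(x/sqrt(2));   % Gaussian tail probability Q(x)
  snr = 10.^((0:0.5:10)/10);        % Eb/N0 from 0 to 10 dB
  pe  = Q(sqrt(2*snr));             % uncoded bit-error probability
  pep = Q(sqrt(2*snr/3));           % per-transmission error with 1/3 the energy
  pe3 = 3*pep.^2.*(1-pep) + pep.^3; % majority vote fails on 2 or 3 errors
  semilogy(10*log10(snr), pe, 10*log10(snr), pe3)
  xlabel('Signal-to-Noise Ratio (dB)'), legend('Uncoded', '(3,1) repetition')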
Solution to Exercise 6.27.1 (p. 262)
In binary arithmetic (see this table55), adding 0 to a binary value results in that binary value, while adding 1 results in the opposite binary value.

Solution to Exercise 6.28.1 (p. 264)
dmin = 2n + 1.

Solution to Exercise 6.28.2 (p. 265)
When we multiply the parity-check matrix times any codeword, the result consists of the sum of an entry from the lower portion of G and itself that, by the laws of binary arithmetic, is always equal to zero. Because the code is linear (the sum of any two codewords is a codeword), we can generate all codewords as sums of columns of G. Since multiplying by H is also linear, Hc = 0.

Solution to Exercise 6.29.1 (p. 265)
In a length-N block, N single-bit and N(N − 1)/2 double-bit errors can occur. The number of non-zero vectors resulting from Hĉ must equal or exceed the sum of these two numbers:

2^(N−K) − 1 ≥ N + N(N − 1)/2, or 2^(N−K) ≥ (N² + N + 2)/2    (6.68)

The first two solutions that attain equality are (5,1) and (90,78) codes. However, other than the single-bit error correcting Hamming code, no perfect code exists (perfect codes satisfy relations like (6.68) with equality).

Solution to Exercise 6.29.2 (p. 265)
The probability of a single-bit error in a length-N block is N pe (1 − pe)^(N−1), and a triple-bit error has probability C(N, 3) pe³ (1 − pe)^(N−3). For the first to be greater than the second, we must have pe < 1 / ( √((N − 1)(N − 2)/6) + 1 ). For N = 7, pe < 0.31.

Solution to Exercise 6.31.1 (p. 269)
To convert to bits/second, we divide the capacity stated in bits/transmission by the bit interval duration T.

Solution to Exercise 6.33.1 (p. 271)
The network entry point is the telephone handset, which connects you to the nearest station. Dialing the telephone number informs the network of who will be the message recipient. The network looks up where the destination corresponding to that number is located, and routes the call accordingly. The route remains fixed as long as the call persists.

Solution to Exercise 6.36.1 (p. 275)
The transmitting op-amp sees a load of Rout + Z0/N, where N is the number of transceivers other than this one attached to the coaxial cable. The transfer function to some other transceiver's receiver circuit is Z0/N divided by this load.

Solution to Exercise 6.36.2 (p. 276)
The worst-case situation occurs when one computer begins to transmit just before the other's packet arrives. Transmitters must sense a collision before packet transmission ends. The time taken for one computer's packet to travel the Ethernet's length and for the other computer's transmission to arrive equals the round-trip, not one-way, propagation time.

Solution to Exercise 6.36.3 (p. 276)
The cable must be a factor of ten shorter: It cannot exceed 100 m. Different minimum packet sizes mean different packet formats, making connecting old and new systems together more complex than need be.

Solution to Exercise 6.37.1 (p. 277)
When you pick up the telephone, you initiate a dialog with your network interface by dialing the number. The telephone system forms an electrical circuit between your handset and your friend's handset. Your friend receives the message via the same device, the handset, that served as the network entry point. What you say amounts to high-level protocol, while establishing the connection and maintaining it corresponds to low-level protocol.

55 "Error Correction" <http://cnx.org/content/m0095/latest/#table1>
7.1 Decibels1

The decibel scale expresses amplitudes and power values logarithmically. The definitions for these differ, but are consistent with each other.

power(s, in decibels) = 10 log ( power(s) / power(s0) )    (7.1)

amplitude(s, in decibels) = 20 log ( amplitude(s) / amplitude(s0) )    (7.2)

Here, power(s0) and amplitude(s0) represent a reference power and amplitude, respectively. Quantifying power or amplitude in decibels essentially means that we are comparing quantities to a standard or that we want to express how they changed. You will hear statements like "The signal went down by 3 dB" and "The filter's gain in the stopband is −60 dB" (decibels is abbreviated dB).

Exercise 7.1.1  (Solution on p. 304.)
The prefix "deci" implies a tenth; a decibel is a tenth of a Bel. Who is this measure named for?

The consistency of these two definitions arises because power is proportional to the square of amplitude:

power(s) ∝ amplitude²(s)    (7.3)

Plugging this expression into the definition for decibels, we find that

10 log ( power(s) / power(s0) ) = 10 log ( amplitude²(s) / amplitude²(s0) ) = 20 log ( amplitude(s) / amplitude(s0) )

Because of this consistency, stating relative change in terms of decibels is unambiguous. A factor of 10 increase in amplitude corresponds to a 20 dB increase in both amplitude and power!

1 This content is available online at <http://cnx.org/content/m0082/2.16/>.
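This consistency is easy to verify numerically; a small Matlab sketch:

  power_dB     = @(r) 10*log10(r);   % power ratio expressed in dB
  amplitude_dB = @(r) 20*log10(r);   % amplitude ratio expressed in dB
  power_dB(2)                        % doubling the power: about 3 dB
  amplitude_dB(sqrt(2))              % amplitude up by sqrt(2): also about 3 dB
  amplitude_dB(10)                   % a factor of 10 in amplitude: 20 dB
  power_dB(100)                      % the matching factor of 100 in power: 20 dB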
The accompanying table provides "nice" decibel values. Converting decibel values back and forth is fun, and tests your ability to think of decibel values as sums and/or differences of the well-known values and of ratios as products and/or quotients. This conversion rests on the logarithmic nature of the decibel scale. For example, to find the decibel value for √2, we halve the decibel value for 2; 26 dB equals 10 + 10 + 6 dB, which corresponds to a ratio of 10 × 10 × 4 = 400. Decibel quantities add; ratio values multiply.

Power Ratio   dB
1             0
√2            1.5
2             3
√10           5
4             6
5             7
8             9
10            10
0.1           −10

Figure 7.1: Common values for the decibel. The decibel values for all but the powers of ten are approximate, but are accurate to a decimal place.

One reason decibels are used so much is the frequency-domain input-output relation for linear systems: Y(f) = X(f) H(f). Because the transfer function multiplies the input signal's spectrum, to find the output amplitude at a given frequency we simply add the filter's gain in decibels (relative to a reference of one) to the input amplitude at that frequency. This calculation is one reason that we plot transfer function magnitude on a logarithmic vertical scale expressed in decibels.

7.2 Permutations and Combinations2

7.2.1 Permutations and Combinations

The lottery "game" consists of picking k numbers from a pool of n. For example, you select 6 numbers out of 60. To win, the order in which you pick the numbers doesn't matter; you only have to choose the right set of 6 numbers. The chances of winning equal the number of different length-k sequences that can be chosen; picking 6 numbers from 60 is the lottery problem with k = 6.

Exercise 7.2.1  (Solution on p. 304.)
What are the chances of winning the lottery? Assume you pick 6 numbers from the numbers 1-60.

A related, but different, problem is selecting the batting lineup for a baseball team. Now the order matters, and many more choices are possible than when order does not matter.

Solving these kinds of problems amounts to understanding permutations (the number of ways of choosing things when order matters, as in baseball lineups) and combinations (the number of ways of choosing things when order does not matter, as in lotteries and bit errors). Calculating permutations is the easiest. If we are to pick k numbers from a pool of n, we have n choices for the first one. For the second choice, we have n − 1. The number of length-two ordered sequences is therefore n(n − 1). Continuing to choose until we make k choices means the number of permutations is n(n − 1)(n − 2)···(n − k + 1). This result can be written in terms of factorials as n!/(n − k)!, with n! = n(n − 1)(n − 2)···1. For mathematical convenience, we define 0! = 1.

When order does not matter, the number of combinations equals the number of permutations divided by the number of orderings. The number of ways a pool of k things can be ordered equals k!. Thus, once we choose the nine starters for our baseball game, we have 9! = 362,880 different lineups! The symbol for the combination of k things drawn from a pool of n is C(n, k), and it equals n!/((n − k)! k!).

Exercise 7.2.2  (Solution on p. 304.)
What does the sum of binomial coefficients equal? In other words, what is Σ_{k=0}^{n} C(n, k)?

Combinatorials occur in interesting places. For example, Newton derived that the n-th power of a sum obeyed the formula

(x + y)^n = C(n,0) x^n + C(n,1) x^(n−1) y + C(n,2) x^(n−2) y² + · · · + C(n,n) y^n.

Answering such questions occurs in many applications beyond games. In digital communications, for example, you might ask how many possible double-bit errors can occur in a codeword. A related problem is calculating the probability that any two bits are in error in a length-n codeword when p is the probability of any bit being in error. The probability of any particular two-bit error sequence is p²(1 − p)^(n−2). The probability of a two-bit error occurring anywhere equals this probability times the number of combinations: C(n,2) p²(1 − p)^(n−2). Note that the probability that zero or one or two, etc., errors occur must be one; in other words, something must happen to the codeword! That means that we must have

C(n,0)(1 − p)^n + C(n,1) p(1 − p)^(n−1) + C(n,2) p²(1 − p)^(n−2) + · · · + C(n,n) p^n = 1.

Can you prove this?

2 This content is available online at <http://cnx.org/content/m10262/2.13/>.
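Matlab has these quantities built in; a short sketch checking the ideas above (the n and p values in the error-probability lines are assumed for illustration):

  nchoosek(60, 6)                 % C(60,6): the number of possible lottery picks
  n = 7; p = 0.01;                % codeword length and bit-error probability (assumed)
  P2 = nchoosek(n,2) * p^2 * (1-p)^(n-2)    % probability of exactly two bit errors
  k = 0:n;
  Pk = arrayfun(@(kk) nchoosek(n,kk), k) .* p.^k .* (1-p).^(n-k);
  sum(Pk)                         % the error-count probabilities sum to exactly 1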
7.3 Frequency Allocations3

To prevent radio stations from transmitting signals on top of each other, the United States and other national governments in the 1930s began regulating the carrier frequencies and power outputs stations could use. With increased use of the radio spectrum for both public and private use, this regulation has become increasingly important. This is the so-called Frequency Allocation Chart, which shows what kinds of broadcasting can occur in which frequency bands. Detailed radio carrier frequency assignments are much too detailed to present here.

[Figure 7.2: the United States Frequency Allocation Chart (U.S. Department of Commerce, National Telecommunications and Information Administration, Office of Spectrum Management, March 1996), showing the radio services allocated to each band from 3 kHz to 300 GHz.]

3 This content is available online at <http://cnx.org/content/m0083/2.12/>.
(S-E) 40.0 928 929 932 935 940 941 944 960 1215 RADIONAVIGATION SATELLITE (S-E) 12.2 ISM – 2450.0 Amateur FIXED LAND MOBILE FIXED LAND MOBILE MOBILE FIXED FIXED 10.3 FIXED Standard Frequency and Time Signal Satellite (S-E) 202.6 47.0 2500 110 130 2000 MOBILE** FIXED 61 90 RADIOLOCATION MOBILE 21.0375 157.25 BROADCASTING Radiolocation FIXED 24. RADIONAV.0 17.8 ± .3 MOBILE MOBILE EARTH EXPLORATION SAT.01 25.4 76. (E-S) FIXED MOBILE Astronomy 72. SAT.96 27.0 248.1 26. (S-E) MET. AERONAUTICAL MOBILE (OR) 146.7 STANDARD FREQ.0 5.6 1613.0 Radiolocation 3. (S-E) Mob.0 RADIO NAVIGATION FIXED FIXED † (1999) FIXED AERONAUTICAL MOBILE (R) ISM – 13.175 FIXED Mobile MET.875 FIXED SATELLITE (E-S) MOBILE Radiolocation LAND MOBILE 65.230 AERONAUTICAL MOBILE (R) MOBILE FIXED LAND MOBILE Radio Astronomy LAND MOBILE 403.0 AERONAUTICAL MOBILE (OR) 1300 Space Research (E-S) 7.250 GHz 59-64 GHz IS DESIGNATED FOR UNLICENSED DEVICES 300 m 1 MHz THE RADIO SPECTRUM MF 64. DEPARTMENT OF COMMERCE National Telecommunications and Information Administration Office of Spectrum Management M March 1996 FIXED MINISTRATIO N Primary T AD E T OF CO TMEN MM A IC A M TIO NS & INF OR R PA DE N Secondary Permitted U.0 10.05 FIXED FIXED SATELLITE (S-E) MOBILE FIXED -6 Å 10 24Hz RADIOLOCATION EARTH EXPLORATION SATELLITE (Passive) MOBILE SATELLITE (Space to Earth) AERONAUTICAL MOBILE SATELLITE (R) (space to Earth) 14.7 29.35 STANDARD FREQ.1 STANDARD FREQ. SAT. SAT.35 1850-1910 AND 1930-1990 MHzARE ALLOCA TED TO PCS.BROADCASTING CASTING SATELLITE 32.560 ± .0 FIXED 2150 S) MOBILE MOBILE (LOS) 16.5 FIXED FIXED AMATEUR BROADCASTING FIXED AERONAUTICAL MOBILE (R) ISM – 27.30 FIXED SATELLITE (S-E) Mobile Satellite (S-E) FIXED 7.965 9.3 AERONAUTICAL RADIONAVIGATION (Ground) 39.5 FIXED RADIO MOBILE* * SATELLITE ASTRONOMY (E-S) RADIONAVIGATION 3.0 MOBILE AMATEUR C Microwaves FIXED Radiolocation Radiolocation Meteorological Aids Aeronautical Mobile AERONAUTICAL RADIONAVIGATION FIXED MOBILE* LAND MOBILE MOBILE FIXED MOBILE SATELLITE (E-S) 30 cm 1 GHz FIXED SATELLITE (E-S) X UHF SPACE RESEARCH (S-E) (deep space only) FIXED RADIOLOCATION AERONAUTICAL RADIONAVIGATION MARITIME RADIONAVIGATION 335 405 4. AND TIME SIGNAL AERONAUTICAL RADIONAVIGATION 29.25 25. (S-E) SPACE OPN.55 FIXED BROADCASTING (AM RADIO) 250.56 450.9 Space Research (Passive) AERONAUTICAL RADIONAV.5 EARTH SPACE EXPLORATION RESEARCH SATELLITE SPACE RESEARCH (Passive) RADIO ASTRONOMY BROADCASTING SATELLITE Radiolocation 328.65 5.78 18. RADIO DET.8 MOBILE FIXED FIXED 231.855 23.5 71.0 AMATEUR 300 MHz FIXED SATELLITE (E-S) MOBILE FIXED 3 x 10 5m 3 kHz EARTH EXPLORATION SAT.7 FIXED FIXED SATELLITE MOBILE (E-S) AERONAUTICAL RADIONAV.0 23.0 MOBILE SATELLITE (S-E) SPACE RESEARCH (S-E) AMATEUR MOBILE FIXED 42. Space Res.25 RADIO ASTRONOMY METEOROLOGICAL AIDS METEOROLOGICAL SATELLITE FIXED SAT.0 MOBILE ** FIXED FIXED FIXED SATELLITE (S-E) 200.4 174. TLM) (Space to Earth) (Space to Earth) MARITIME MOBILE SATELLITE MOBILE SATELLITE (S-E) (space to Earth) 14. 2. We use decibels today because common values are small integers. . not absolute dierences. If we used Bels. which aren't as elegant.2 (p. 063. 301) Because of Newton's binomial theorem.304 APPENDIX Solutions to Exercises in Chapter 7 Solution to Exercise 7.1 (p.2. they would be decimal fractions.1. 301) 60  6 = 60! 54!6! = 50. In other words. matter to us. percentage. 860.1 (p. Solution   to Exercise 7. 299) Alexander Graham Bell. 
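Both combinatorial answers are quick to verify; the derivation below is a restatement in LaTeX, where the generic symbols n and k and the intermediate products are our additions rather than notation from the exercises, and where the only ingredient assumed is Newton's binomial theorem with x = y = 1:

\[
\sum_{k=0}^{n} \binom{n}{k}
  = \sum_{k=0}^{n} \binom{n}{k}\, 1^{k}\, 1^{\,n-k}
  = (1+1)^{n}
  = 2^{n}
\]

\[
\binom{60}{6}
  = \frac{60!}{54!\,6!}
  = \frac{60 \cdot 59 \cdot 58 \cdot 57 \cdot 56 \cdot 55}{6!}
  = \frac{36{,}045{,}979{,}200}{720}
  = 50{,}063{,}860
\]

Cancelling 54! against 60! leaves only six factors in the numerator, which is why choosing 6 numbers out of 60 is easy to evaluate by hand. The decibel solution admits a similarly concrete check: a factor-of-ten power ratio is 10 dB but only 1 Bel, and a factor of two is about 3 dB but 0.3 Bel, so the ratios that arise in practice land on small decibel integers.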
Index of Keywords and Terms

Keywords are listed by the section with that keyword (page numbers are in parentheses). Keywords do not necessarily appear in the text of the page; they are merely associated with that section. Ex. apples, § 1.1 (1). Terms are referenced by the page they appear on. Ex. apples, 1.

[The alphabetical entries, A through Z — from "active circuits," "address," and "algorithm" through "wireline channel," "World Wide Web," and "zero-pad" — key several hundred terms to their sections and pages. The individual entries are omitted here.]
Attributions

Collection: Fundamentals of Electrical Engineering I
Edited by: Don Johnson
URL: http://cnx.org/content/col10040/1.9/
License: http://creativecommons.org/licenses/by/1.0/

[Per-module attribution entries follow in the original. Every module is "By: Don Johnson" and "Copyright: Don Johnson," is hosted under http://cnx.org/content/, and carries a Creative Commons Attribution license (version 1.0, 2.0, or 3.0). The modules attributed, in collection order: "Themes"; "Signals Represent Information"; "Structure of Communication Systems"; "The Fundamental Signal"; "Introduction Problems"; "Complex Numbers"; "Elemental Signals"; "Signal Decomposition"; "Discrete-Time Signals"; "Introduction to Systems"; "Simple Systems"; "Signals and Systems Problems"; "Voltage, Current, and Generic Circuit Elements"; "Ideal Circuit Elements"; "Ideal and Real-World Circuit Elements"; "Electric Circuits and Interconnection Laws"; "Power Dissipation in Resistor Circuits"; "Series and Parallel Circuits"; "Equivalent Circuits: Resistors and Sources"; "Circuits with Capacitors and Inductors"; "The Impedance Concept"; "Time and Frequency Domains"; "Power in the Frequency Domain"; "Equivalent Circuits: Impedances and Sources"; "Transfer Functions"; "Designing Transfer Functions"; "Power Conservation in Circuits"; "Electronics"; "Dependent Sources"; "Formal Circuit Methods: Node Method"; "Operational Amplifiers"; "The Diode"; "Analog Signal Processing Problems"; "Introduction to the Frequency Domain"; "Complex Fourier Series"; "Classic Fourier Series"; "A Signal's Spectrum"; "Fourier Series Approximation of Signals"; "Encoding Information in the Frequency Domain"; "Filtering Periodic Signals"; "Derivation of the Fourier Transform"; "Linear Time Invariant Systems"; "Modeling the Speech Signal"; "Frequency Domain Problems"; "Introduction to Digital Signal Processing"; "Introduction to Computer Organization"; "The Sampling Theorem"; "Amplitude Quantization"; "Discrete-Time Signals and Systems"; "Discrete-Time Fourier Transform (DTFT)"; "Discrete Fourier Transform (DFT)" (used here as "Discrete Fourier Transforms (DFT)"); "DFT: Computational Complexity"; "Fast Fourier Transform (FFT)"; "Spectrograms"; "Discrete-Time Systems"; "Discrete-Time Systems in the Time-Domain"; "Discrete-Time Systems in the Frequency Domain"; "Filtering in the Frequency Domain"; "Efficiency of Frequency-Domain Filtering"; "Discrete-Time Filtering of Analog Signals"; "Digital Signal Processing Problems"; "Information Communication"; "Types of Communication Channels"; "Wireline Channels"; "Wireless Channels"; "Line-of-Sight Transmission"; "The Ionosphere and Communications"; "Communication with Satellites"; "Noise and Interference"; "Channel Models"; "Baseband Communication"; "Modulated Communication"; "Signal-to-Noise Ratio of an Amplitude-Modulated Signal"; "Digital Communication"; "Binary Phase Shift Keying"; "Frequency Shift Keying"; "Digital Communication Receivers"; "Digital Communication in the Presence of Noise"; "Digital Communication System Properties"; "Digital Channels"; "Entropy"; "Source Coding Theorem"; "Compression and the Huffman Code"; "Subtleties of Source Coding" (used here as "Subtleties of Coding"); "Channel Coding"; "Repetition Codes"; "Block Channel Coding"; "Error-Correcting Codes: Hamming Distance"; "Error-Correcting Codes: Channel Decoding"; "Error-Correcting Codes: Hamming Codes"; "Noisy Channel Coding Theorem"; "Capacity of a Channel"; "Comparison of Analog and Digital Communication"; "Communication Networks"; "Message Routing"; "Network architectures and interconnection"; "Ethernet"; "Communication Protocols"; "Information Communication Problems"; "Decibels"; "Permutations and Combinations"; "Frequency Allocations". The per-module URLs, version numbers, and page ranges are omitted here.]

Fundamentals of Electrical Engineering I

The course focuses on the creation, manipulation, transmission, and reception of information by electronic means. Elementary signal theory; time- and frequency-domain analysis; Sampling Theorem. Digital information theory; digital transmission of analog signals; error-correcting codes.

About Connexions

Since 1999, Connexions has been pioneering a global system where anyone can create course materials and make them fully accessible and easily reusable free of charge. We are a Web-based authoring, teaching and learning environment open to anyone interested in education, including students, teachers, professors and lifelong learners. We connect ideas and facilitate educational communities.

Connexions's modular, interactive courses are in use worldwide by universities, community colleges, K-12 schools, distance learners, and lifelong learners. Connexions materials are in many languages, including English, Spanish, Chinese, Japanese, Italian, Vietnamese, French, Portuguese, and Thai. Connexions is part of an exciting new information distribution system that allows for Print on Demand Books. Connexions has partnered with innovative on-demand publisher QOOP to accelerate the delivery of printed course materials and textbooks into classrooms worldwide at lower prices than traditional academic publishers.