Adaptive Filter Theory


Author: Simon Haykin
Publisher: Pearson Education
ISBN: 9780273764083
Publication year: 2013
Pages: 912
Weight: 1.10 kg
Availability: In stock
Dispatched within 5 days

For courses in Adaptive Filters. Haykin examines both the mathematical theory behind various linear adaptive filters and the elements of supervised multilayer perceptrons. In its fifth edition, this highly successful book has been updated and refined to stay current with the field and develop concepts in as unified and accessible a manner as possible.
Contents
Preface
Acknowledgments
Background and Preview: 1. The Filtering Problem 2. Linear Optimum Filters 3. Adaptive Filters 4. Linear Filter Structures 5. Approaches to the Development of Linear Adaptive Filters 6. Adaptive Beamforming 7. Four Classes of Applications 8. Historical Notes Bibliography
Chapter 1 Stochastic Processes and Models: 1.1 Partial Characterization of a Discrete-Time Stochastic Process 1.2 Mean Ergodic Theorem 1.3 Correlation Matrix 1.4 Correlation Matrix of Sine Wave Plus Noise 1.5 Stochastic Models 1.6 Wold Decomposition 1.7 Asymptotic Stationarity of an Autoregressive Process 1.8 Yule-Walker Equations 1.9 Computer Experiment: Autoregressive Process of Order Two 1.10 Selecting the Model Order 1.11 Complex Gaussian Processes 1.12 Power Spectral Density 1.13 Properties of Spectral Density 1.14 Transmission of a Stationary Process Through a Linear Filter 1.15 Cramer Spectral Representation for a Stationary Process 1.16 Power Spectrum Estimation 1.17 Other Statistical Characteristics of a Stochastic Process 1.18 Polyspectra 1.19 Spectral-Correlation Density 1.20 Summary and Discussion Problems Bibliography
Chapter 2 Wiener Filters: 2.1 Linear Optimum Filtering: Statement of the Problem 2.2 Principle of Orthogonality 2.3 Minimum Mean-Square Error 2.4 Wiener-Hopf Equations 2.5 Error-Performance Surface 2.6 Multiple Linear Regression Model 2.7 Example 2.8 Linearly Constrained Minimum-Variance Filter 2.9 Generalized Sidelobe Cancellers 2.10 Summary and Discussion Problems Bibliography
Chapter 3 Linear Prediction: 3.1 Forward Linear Prediction 3.2 Backward Linear Prediction 3.3 Levinson-Durbin Algorithm 3.4 Properties of Prediction-Error Filters 3.5 Schur-Cohn Test 3.6 Autoregressive Modeling of a Stationary Stochastic Process 3.7 Cholesky Factorization 3.8 Lattice Predictors 3.9 All-Pole, All-Pass Lattice Filter 3.10 Joint-Process Estimation 3.11 Predictive Modeling of Speech 3.12 Summary and Discussion Problems Bibliography
Chapter 4 Method of Steepest Descent: 4.1 Basic Idea of the Steepest-Descent Algorithm 4.2 The Steepest-Descent Algorithm Applied to the Wiener Filter 4.3 Stability of the Steepest-Descent Algorithm 4.4 Example 4.5 The Steepest-Descent Algorithm as a Deterministic Search Method 4.6 Virtue and Limitation of the Steepest-Descent Algorithm 4.7 Summary and Discussion Problems Bibliography
Chapter 5 Method of Stochastic Gradient Descent: 5.1 Principles of Stochastic Gradient Descent 5.2 Application: Least-Mean-Square (LMS) Algorithm 5.3 Gradient-Adaptive Lattice Filtering Algorithm 5.4 Other Applications of Stochastic Gradient Descent 5.5 Summary and Discussion Problems Bibliography
Chapter 6 The Least-Mean-Square (LMS) Algorithm: 6.1 Signal-Flow Graph 6.2 Optimality Considerations 6.3 Applications 6.4 Statistical Learning Theory 6.5 Transient Behavior and Convergence Considerations 6.6 Efficiency 6.7 Computer Experiment on Adaptive Prediction 6.8 Computer Experiment on Adaptive Equalization 6.9 Computer Experiment on Minimum-Variance Distortionless-Response Beamformer 6.10 Summary and Discussion Problems Bibliography
Chapter 7 Normalized Least-Mean-Square (LMS) Algorithm and Its Generalization: 7.1 Normalized LMS Algorithm: The Solution to a Constrained Optimization Problem 7.2 Stability of the Normalized LMS Algorithm 7.3 Step-Size Control for Acoustic Echo Cancellation 7.4 Geometric Considerations Pertaining to the Convergence Process for Real-Valued Data 7.5 Affine Projection Adaptive Filters 7.6 Summary and Discussion Problems Bibliography
Chapter 8 Block-Adaptive Filters: 8.1 Block-Adaptive Filters: Basic Ideas 8.2 Fast Block LMS Algorithm 8.3 Unconstrained Frequency-Domain Adaptive Filters 8.4 Self-Orthogonalizing Adaptive Filters 8.5 Computer Experiment on Adaptive Equalization 8.6 Subband Adaptive Filters 8.7 Summary and Discussion Problems Bibliography
Chapter 9 Method of Least Squares: 9.1 Statement of the Linear Least-Squares Estimation Problem 9.2 Data Windowing 9.3 Principle of Orthogonality Revisited 9.4 Minimum Sum of Error Squares 9.5 Normal Equations and Linear Least-Squares Filters 9.6 Time-Average Correlation Matrix Φ 9.7 Reformulation of the Normal Equations in Terms of Data Matrices 9.8 Properties of Least-Squares Estimates 9.9 Minimum-Variance Distortionless Response (MVDR) Spectrum Estimation 9.10 Regularized MVDR Beamforming 9.11 Singular-Value Decomposition 9.12 Pseudoinverse 9.13 Interpretation of Singular Values and Singular Vectors 9.14 Minimum-Norm Solution to the Linear Least-Squares Problem 9.15 Normalized Least-Mean-Square (LMS) Algorithm Viewed as the Minimum-Norm Solution to an Underdetermined Least-Squares Estimation Problem 9.16 Summary and Discussion Problems Bibliography
Chapter 10 The Recursive Least-Squares (RLS) Algorithm: 10.1 Some Preliminaries 10.2 The Matrix Inversion Lemma 10.3 The Exponentially Weighted RLS Algorithm 10.4 Selection of the Regularization Parameter 10.5 Update Recursion for the Sum of Weighted Error Squares 10.6 Example: Single-Weight Adaptive Noise Canceller 10.7 Statistical Learning Theory 10.8 Efficiency 10.9 Computer Experiment on Adaptive Equalization 10.10 Summary and Discussion Problems Bibliography
Chapter 11 Robustness: 11.1 Robustness, Adaptation, and Disturbances 11.2 Robustness: Preliminary Considerations Rooted in H∞ Optimization 11.3 Robustness of the LMS Algorithm 11.4 Robustness of the RLS Algorithm 11.5 Comparative Evaluations of the LMS and RLS Algorithms from the Perspective of Robustness 11.6 Risk-Sensitive Optimality 11.7 Trade-Offs Between Robustness and Efficiency 11.8 Summary and Discussion Problems Bibliography
Chapter 12 Finite-Precision Effects: 12.1 Quantization Errors 12.2 Least-Mean-Square (LMS) Algorithm 12.3 Recursive Least-Squares (RLS) Algorithm 12.4 Summary and Discussion Problems Bibliography
Chapter 13 Adaptation in Nonstationary Environments: 13.1 Causes and Consequences of Nonstationarity 13.2 The System Identification Problem 13.3 Degree of Nonstationarity 13.4 Criteria for Tracking Assessment 13.5 Tracking Performance of the LMS Algorithm 13.6 Tracking Performance of the RLS Algorithm 13.7 Comparison of the Tracking Performance of the LMS and RLS Algorithms 13.8 Tuning of Adaptation Parameters 13.9 Incremental Delta-Bar-Delta (IDBD) Algorithm 13.10 Autostep Method 13.11 Computer Experiment: Mixture of Stationary and Nonstationary Environmental Data 13.12 Summary and Discussion Problems Bibliography
Chapter 14 Kalman Filters: 14.1 Recursive Minimum Mean-Square Estimation for Scalar Random Variables 14.2 Statement of the Kalman Filtering Problem 14.3 The Innovations Process 14.4 Estimation of the State Using the Innovations Process 14.5 Filtering 14.6 Initial Conditions 14.7 Summary of the Kalman Filter 14.8 Kalman Filter as the Unifying Basis for RLS Algorithms 14.9 Variants of the Kalman Filter 14.10 Summary and Discussion Problems Bibliography
Chapter 15 Square-Root Adaptive Filters: 15.1 Square-Root Kalman Filters 15.2 Building Square-Root Adaptive Filters on Their Kalman Filter Counterparts 15.3 QRD-RLS Algorithm 15.4 Adaptive Beamforming 15.5 Inverse QRD-RLS Algorithm 15.6 Finite-Precision Effects 15.7 Summary Problems Bibliography
Chapter 16 Order-Recursive Adaptive Filters: 16.1 Order-Recursive Adaptive Filters Using Least-Squares Estimation: An Overview 16.2 Adaptive Forward Linear Prediction 16.3 Adaptive Backward Linear Prediction 16.4 Conversion Factor 16.5 Least-Squares Lattice (LSL) Predictor 16.6 Angle-Normalized Estimation Errors 16.7 First-Order State-Space Models for Lattice Filtering 16.8 QR-Decomposition-Based Least-Squares Lattice (QRD-LSL) Filters 16.9 Fundamental Properties of the QRD-LSL Filter 16.10 Computer Experiment on Adaptive Equalization 16.11 Recursive LSL Filters Using a Posteriori Estimation Errors 16.12 Recursive LSL Filters Using a Priori Estimation Errors with Error Feedback 16.13 Relation Between Recursive LSL and RLS Algorithms 16.14 Finite-Precision Effects 16.15 Summary and Discussion Problems Bibliography
Chapter 17 Blind Deconvolution: 17.1 Overview of Blind Deconvolution 17.2 Channel Identifiability Using Cyclostationary Statistics 17.3 Subspace Decomposition for Fractionally Spaced Blind Identification 17.4 Bussgang Algorithm for Blind Equalization 17.5 Extension of the Bussgang Algorithm to Complex Baseband Channels 17.6 Special Cases of the Bussgang Algorithm 17.7 Fractionally Spaced Bussgang Equalizers 17.8 Estimation of Unknown Probability Distribution Function of Signal Source 17.9 Summary and Discussion Problems Bibliography
Epilogue: 1. Robustness, Efficiency, and Complexity 2. Kernel-Based Nonlinear Adaptive Filtering Bibliography
Appendix A Theory of Complex Variables: A.1 Cauchy-Riemann Equations A.2 Cauchy's Integral Formula A.3 Laurent's Series A.4 Singularities and Residues A.5 Cauchy's Residue Theorem A.6 Principle of the Argument A.7 Inversion Integral for the z-Transform A.8 Parseval's Theorem Bibliography
Appendix B Computation of Derivatives in the Complex Domain: B.1 Differentiability and Analyticity B.2 Wirtinger Derivatives B.3 Matrix and Vector Derivatives B.4 Newton Updates Bibliography
Appendix C Method of Lagrange Multipliers: C.1 Optimization Involving a Single Equality Constraint C.2 Optimization Involving Multiple Equality Constraints C.3 Optimum Beamformer Bibliography
Appendix D Estimation Theory: D.1 Likelihood Function D.2 Cramer-Rao Inequality D.3 Properties of Maximum-Likelihood Estimators D.4 Conditional Mean Estimator Bibliography
Appendix E Eigenanalysis: E.1 The Eigenvalue Problem E.2 Properties of Eigenvalues and Eigenvectors E.3 Low-Rank Modeling E.4 Eigenfilters E.5 Eigenvalue Computations Bibliography
Appendix F Langevin Equation of Nonequilibrium Thermodynamics: F.1 Brownian Motion F.2 Langevin Equation Bibliography
Appendix G Rotations and Reflections: G.1 Plane Rotations G.2 Two-Sided Jacobi Algorithm G.3 Cyclic Jacobi Algorithm G.4 Householder Transformation G.5 The QR Algorithm Bibliography
Appendix H Complex Wishart Distribution: H.1 Definition H.2 The Chi-Square Distribution as a Special Case H.3 Properties of the Complex Wishart Distribution H.4 Expectation of the Inverse Correlation Matrix Φ⁻¹(n) Bibliography
Glossary
Bibliography
Suggested Readings
Index

Axess

Installments     Installment Amount   Total Amount
Single payment   -                    950.00 TL
2 months         501.13 TL            1002.25 TL
3 months         337.25 TL            1011.75 TL
6 months         174.96 TL            1049.75 TL
9 months         120.86 TL            1087.75 TL
12 months        93.81 TL             1125.75 TL

cardFinans

Installments     Installment Amount   Total Amount
Single payment   -                    950.00 TL
2 months         501.13 TL            1002.25 TL
3 months         337.25 TL            1011.75 TL
6 months         174.96 TL            1049.75 TL
9 months         120.86 TL            1087.75 TL
12 months        93.81 TL             1125.75 TL

Bonus

Installments     Installment Amount   Total Amount
Single payment   -                    950.00 TL
2 months         501.13 TL            1002.25 TL
3 months         337.25 TL            1011.75 TL
6 months         174.96 TL            1049.75 TL
9 months         120.86 TL            1087.75 TL
12 months        93.81 TL             1125.75 TL

World

Installments     Installment Amount   Total Amount
Single payment   -                    950.00 TL
2 months         501.13 TL            1002.25 TL
3 months         337.25 TL            1011.75 TL
6 months         174.96 TL            1049.75 TL
9 months         120.86 TL            1087.75 TL
12 months        93.81 TL             1125.75 TL

Maximum

Installments     Installment Amount   Total Amount
Single payment   -                    950.00 TL
2 months         501.13 TL            1002.25 TL
3 months         337.25 TL            1011.75 TL
6 months         174.96 TL            1049.75 TL
9 months         120.86 TL            1087.75 TL
12 months        93.81 TL             1125.75 TL

Paraf

Installments     Installment Amount   Total Amount
Single payment   -                    950.00 TL
2 months         501.13 TL            1002.25 TL
3 months         337.25 TL            1011.75 TL
6 months         174.96 TL            1049.75 TL
9 months         120.86 TL            1087.75 TL
12 months        93.81 TL             1125.75 TL

Credit Card (Single Payment)

Installments     Installment Amount   Total Amount
Single payment   -                    950.00 TL

Payment can be made with any card carrying the Bonus, Maximum, Paraf, Cardfinans, Axess or World program.
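
As a quick arithmetic check on the schedule above (the same figures apply to every card program listed), each plan's per-month amount is its total divided by the number of months, and the total itself implies a fixed surcharge over the 950.00 TL single-payment price. The short Python sketch below is purely illustrative; the variable names and the surcharge calculation are ours, not the store's.

    # Illustrative check of the installment table above (not the store's billing code).
    CASH_PRICE = 950.00  # single-payment ("tek çekim") price in TL

    # months -> total charged, copied from the table above
    plan_totals = {2: 1002.25, 3: 1011.75, 6: 1049.75, 9: 1087.75, 12: 1125.75}

    for months, total in plan_totals.items():
        per_month = total / months            # matches the listed monthly amounts to within a kuruş of rounding
        surcharge = total / CASH_PRICE - 1.0  # implied markup over the cash price
        print(f"{months:>2} months: {per_month:7.2f} TL/month, total {total:.2f} TL "
              f"({surcharge:.1%} over the single-payment price)")

For example, the 12-month plan works out to 1125.75 / 12 ≈ 93.81 TL per month, an 18.5% surcharge over paying 950.00 TL up front.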