Central Library, Indian Institute of Technology Delhi

Sequential Monte Carlo methods for nonlinear discrete-time filtering [electronic resource] / Marcelo G.S. Bruno.

By: Bruno, Marcelo G. S.
Material type: Text
Series: Synthesis digital library of engineering and computer science | Synthesis lectures on signal processing ; # 11
Publication details: San Rafael, Calif. (1537 Fourth Street, San Rafael, CA 94901 USA) : Morgan & Claypool, c2013
Description: 1 electronic text (xi, 87 p.) : ill., digital file
ISBN: 9781627051200 (electronic bk.)
Subject(s): Signal processing -- Mathematics | Electric filters, Digital | Monte Carlo method | Bayesian statistical decision theory | particle filtering | sequential Monte Carlo methods | Bayesian estimation | distributed estimation
Additional physical formats: Print version: No title
DDC classification: 621.3822
LOC classification: TK5102.9 | .B787 2013
Online resources: Abstract with links to resource
Contents:
Preface -- Acknowledgments -- Introduction -- Bayesian estimation of static vectors -- The stochastic filtering problem -- Sequential Monte Carlo methods -- Sampling/importance resampling (SIR) filter -- Importance function selection -- Markov chain Monte Carlo move step -- Rao-Blackwellized particle filters -- Auxiliary particle filter -- Regularized particle filters -- Cooperative filtering with multiple observers -- Application examples -- Summary -- A. Appendix A -- B. Appendix B -- Bibliography -- Author's biography.
Abstract: In these notes, we introduce particle filtering as a recursive importance sampling method that approximates the minimum-mean-square-error (MMSE) estimate of a sequence of hidden state vectors in scenarios where the joint probability distribution of the states and the observations is non-Gaussian and, therefore, closed-form analytical expressions for the MMSE estimate are generally unavailable. We begin with a review of Bayesian approaches to static (i.e., time-invariant) parameter estimation. In the sequel, we describe the solution to the problem of sequential state estimation in linear, Gaussian dynamic models, which corresponds to the well-known Kalman (or Kalman-Bucy) filter. Next, we move to the general nonlinear, non-Gaussian stochastic filtering problem and present particle filtering as a sequential Monte Carlo approach to solve that problem in a statistically optimal way. We review several techniques to improve the performance of particle filters, including importance function optimization, particle resampling, Markov chain Monte Carlo move steps, auxiliary particle filtering, and regularized particle filtering. We also discuss Rao-Blackwellized particle filtering as a technique that is particularly well-suited for many relevant applications such as fault detection and inertial navigation. Finally, we conclude the notes with a discussion of the emerging topic of distributed particle filtering using multiple processors located at remote nodes in a sensor network. Throughout the notes, we often assume a more general framework than most introductory textbooks by allowing either the observation model or the hidden state dynamic model to include unknown parameters. In a fully Bayesian fashion, we treat those unknown parameters also as random variables. Using suitable dynamic conjugate priors, that approach can then be applied to perform joint state and parameter estimation.
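As a rough illustration of the bootstrap sampling/importance resampling (SIR) filter described in the abstract, the following Python sketch (not taken from the book) runs a basic SIR particle filter on a hypothetical scalar nonlinear state-space model; the state equation, noise variances, and particle count are assumptions chosen only for this example.

    import numpy as np

    rng = np.random.default_rng(0)

    def propagate(particles):
        # Nonlinear state transition with additive Gaussian process noise (assumed toy model).
        return (0.5 * particles
                + 25.0 * particles / (1.0 + particles**2)
                + rng.normal(0.0, np.sqrt(10.0), size=particles.shape))

    def likelihood(y, particles):
        # Likelihood of observation y_k = x_k**2 / 20 + v_k with v_k ~ N(0, 1) (assumed toy model).
        return np.exp(-0.5 * (y - particles**2 / 20.0) ** 2)

    def sir_filter(ys, num_particles=500):
        # Bootstrap SIR filter: propagate particles through the transition prior,
        # weight them by the likelihood, form the MMSE estimate, then resample.
        particles = rng.normal(0.0, 1.0, num_particles)       # draw initial particles from the prior
        estimates = []
        for y in ys:
            particles = propagate(particles)                   # sample from the transition prior
            weights = likelihood(y, particles) + 1e-300        # guard against total weight underflow
            weights /= weights.sum()                           # normalize importance weights
            estimates.append(np.sum(weights * particles))      # approx. MMSE estimate E[x_k | y_{1:k}]
            idx = rng.choice(num_particles, size=num_particles, p=weights)  # multinomial resampling
            particles = particles[idx]
        return np.array(estimates)

    # Simulate T observations from the same (assumed) model and run the filter.
    T = 50
    x, ys = 0.0, []
    for _ in range(T):
        x = 0.5 * x + 25.0 * x / (1.0 + x**2) + rng.normal(0.0, np.sqrt(10.0))
        ys.append(x**2 / 20.0 + rng.normal(0.0, 1.0))
    print(sir_filter(np.array(ys))[:5])

A real application would replace this toy model with the application's own state and observation equations, and would typically use one of the refinements surveyed in the lecture (optimized importance functions, MCMC move steps, auxiliary or regularized resampling, Rao-Blackwellization).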
Holdings
Item type: Ebooks
Current library: Indian Institute of Technology Delhi - Central Library
Status: Available

Mode of access: World Wide Web.

System requirements: Adobe Acrobat Reader.

Part of: Synthesis digital library of engineering and computer science.

Series from website.

Includes bibliographical references (p. 81-85).


Abstract freely available; full-text restricted to subscribers or individual document purchasers.



Also available in print.

Title from PDF t.p. (viewed on February 17, 2013).
