I was looking at a grad course series with the following descriptions:
------------------------------
ECE 275A. Parameter Estimation I (4)
Linear least squares (batch, recursive, total, sparse, pseudoinverse, QR, SVD); Statistical figures of merit (bias, consistency, Cramér-Rao lower bound, efficiency); Maximum likelihood estimation (MLE); Sufficient statistics; Algorithms for computing the MLE including the Expectation-Maximization (EM) algorithm. The problem of missing information; the problem of outliers. (Recommended prerequisites: ECE 109 and ECE 153.) Prerequisites: graduate standing.
ECE 275B. Parameter Estimation II (4)
The Bayesian statistical framework; Parameter and state estimation of Hidden Markov Models, including Kalman filtering and the Viterbi and Baum-Welch algorithms. A solid foundation is provided for follow-up courses in Bayesian machine learning theory. (Recommended prerequisites: ECE 153.) Prerequisites: ECE 275A; graduate standing.
---------------------------------------
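
To give a concrete sense of the 275A topics, here is a minimal sketch of my own (not course material) showing the same linear least-squares problem solved via the pseudoinverse, QR, and SVD, i.e. the three numerical routes named in the description. The matrix sizes, noise level, and random seed are made-up example values; everything else is standard NumPy.

# Minimal sketch: three equivalent ways to solve min_x ||A x - b||^2,
# illustrating the "pseudoinverse, QR, SVD" items in the ECE 275A description.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5))        # tall, full-column-rank design matrix
x_true = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(100)

# 1) Moore-Penrose pseudoinverse: x = A^+ b
x_pinv = np.linalg.pinv(A) @ b

# 2) QR factorization: A = Q R, then solve R x = Q^T b
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

# 3) SVD: A = U S V^T, then x = V S^{-1} U^T b
U, S, Vt = np.linalg.svd(A, full_matrices=False)
x_svd = Vt.T @ ((U.T @ b) / S)

# All three give the same least-squares solution (up to rounding).
assert np.allclose(x_pinv, x_qr) and np.allclose(x_qr, x_svd)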
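And for 275B, a similarly minimal sketch of my own (again, not course material): a scalar Kalman filter running predict/update cycles, which is about the simplest instance of the hidden-Markov-model state estimation the description mentions. All the model constants and the true state value are arbitrary choices for illustration.

# Minimal sketch: one-dimensional Kalman filter tracking a constant hidden state,
# illustrating the "state estimation ... Kalman filtering" topic in ECE 275B.
import numpy as np

F, H = 1.0, 1.0          # state transition and observation models (scalar)
Q, R = 1e-3, 1e-1        # process and measurement noise variances

x_hat, P = 0.0, 1.0      # initial state estimate and its variance
rng = np.random.default_rng(1)
true_state = 0.5

for _ in range(20):
    z = true_state + np.sqrt(R) * rng.standard_normal()    # noisy measurement

    # Predict step: propagate the estimate and its uncertainty
    x_pred = F * x_hat
    P_pred = F * P * F + Q

    # Update step: blend prediction and measurement via the Kalman gain
    K = P_pred * H / (H * P_pred * H + R)
    x_hat = x_pred + K * (z - H * x_pred)
    P = (1.0 - K * H) * P_pred

print(f"estimate after 20 measurements: {x_hat:.3f} (true value 0.5)")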