For many problems, the computer scientists tell us to use expectation-maximization (EM). For example, in fitting a distribution with a mixture of Gaussians, EM is apparently the bee's knees. This surprises me, because the EM optimization is so slow and predictable; I am guessing that a more aggressive optimization might beat it. Of course, a more aggressive optimization might not be protected by the same guarantees as EM (which is super stable, even in high dimensions). It would be a service to humanity to investigate this and report the places where EM can be beaten. Then again, this may all have been done; I would ask my local experts before embarking.
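For concreteness, here is a minimal sketch of the EM updates for a one-dimensional Gaussian mixture, assuming numpy. The quantile-based initialization and the fixed iteration count are my own choices for the sketch, not anything canonical; a real comparison against a more aggressive optimizer would also want a convergence criterion.

```python
import numpy as np

def em_gmm_1d(x, n_components=2, n_iter=200):
    """Fit a 1-D Gaussian mixture by expectation-maximization.

    Returns (weights, means, variances) after n_iter EM updates.
    """
    n = len(x)
    # crude deterministic initialization: means at evenly spaced quantiles,
    # shared variance equal to the overall sample variance
    means = np.quantile(x, (np.arange(n_components) + 0.5) / n_components)
    variances = np.full(n_components, np.var(x))
    weights = np.full(n_components, 1.0 / n_components)
    for _ in range(n_iter):
        # E step: responsibilities (posterior component probabilities),
        # computed in log space for numerical stability
        log_prob = (-0.5 * (x[:, None] - means) ** 2 / variances
                    - 0.5 * np.log(2 * np.pi * variances)
                    + np.log(weights))
        log_norm = np.logaddexp.reduce(log_prob, axis=1, keepdims=True)
        resp = np.exp(log_prob - log_norm)
        # M step: responsibility-weighted maximum-likelihood updates
        nk = resp.sum(axis=0)
        weights = nk / n
        means = (resp * x[:, None]).sum(axis=0) / nk
        variances = (resp * (x[:, None] - means) ** 2).sum(axis=0) / nk
    return weights, means, variances

# fit synthetic data drawn from two well-separated Gaussians
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-5, 1, 500), rng.normal(5, 1, 500)])
w, mu, var = em_gmm_1d(x)
```

Each EM iteration is cheap and guaranteed not to decrease the likelihood, which is exactly the slow-but-safe behavior complained about above.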
Do EM or its implementations go by other names? I can't say I've ever heard of it, but I've been fitting Gaussians and Gaussian mixtures often enough that I think I should have by now.
@Adam: Not that I know of. But here's the Wikipedia page.
Some prior art on comparing EM to gradient-based optimization: http://www.utstat.toronto.edu/~rsalakhu/papers/emecg.pdf
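For context on what "gradient-based" means here, a minimal numpy sketch: plain gradient ascent on the mixture log-likelihood, optimizing only the component means with weights and variances held fixed. That simplification and the step size are my assumptions to keep the sketch short; the linked paper treats the full model with fancier optimizers.

```python
import numpy as np

def log_likelihood_and_grad(means, x, weights, variances):
    """Mixture log-likelihood and its gradient with respect to the means."""
    log_prob = (-0.5 * (x[:, None] - means) ** 2 / variances
                - 0.5 * np.log(2 * np.pi * variances)
                + np.log(weights))
    log_norm = np.logaddexp.reduce(log_prob, axis=1)
    resp = np.exp(log_prob - log_norm[:, None])
    ll = log_norm.sum()
    # d ll / d mu_k = sum_i r_ik (x_i - mu_k) / var_k
    grad = (resp * (x[:, None] - means)).sum(axis=0) / variances
    return ll, grad

# same kind of synthetic data: two well-separated unit-variance Gaussians
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-5, 1, 500), rng.normal(5, 1, 500)])
weights = np.array([0.5, 0.5])
variances = np.array([1.0, 1.0])
means = np.quantile(x, [0.25, 0.75])
lr = 1e-3  # fixed step size; an assumption, not a tuned value
for _ in range(500):
    ll, grad = log_likelihood_and_grad(means, x, weights, variances)
    means = means + lr * grad
```

Note the gradient is the responsibility-weighted residual, so one gradient step looks a lot like a damped EM mean update; the interesting question is whether a line search or conjugate directions (as in the paper) buy a real speedup.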