When compressive learning fails: blame the decoder or the sketch?

Publication
In Proceedings of iTWIST'20, Paper-ID: 22, Nantes, France, December 2-4, 2020

Abstract: In compressive learning, a mixture model (a set of centroids or a Gaussian mixture) is learned from a sketch vector, which serves as a highly compressed representation of the dataset. This requires solving a non-convex optimization problem, so in practice approximate heuristics (such as CLOMPR) are used instead. In this work we explore, through numerical simulations, the properties of this non-convex optimization landscape and the behavior of these heuristics.
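
To make the sketching step mentioned in the abstract concrete, here is a minimal Python sketch of how a dataset can be compressed into a single sketch vector by averaging random Fourier features. The function name, the frequency-sampling scale, and the synthetic data are illustrative assumptions, not the authors' exact implementation or the CLOMPR decoder itself.

```python
import numpy as np

def compute_sketch(X, Omega):
    """Empirical sketch: average of complex exponentials exp(i * x^T Omega)
    over the dataset X (n samples, d features). Omega is a d x m matrix of
    random frequencies, so the sketch has m complex entries."""
    return np.exp(1j * X @ Omega).mean(axis=0)

# Illustrative usage on synthetic mixture data (all choices are hypothetical)
rng = np.random.default_rng(0)
d, n, m = 2, 10_000, 100                    # dimension, dataset size, sketch size
centers = 5 * rng.normal(size=(3, d))       # 3 cluster centers
X = centers[rng.integers(3, size=n)] + rng.normal(size=(n, d))
Omega = rng.normal(scale=1.0, size=(d, m))  # random frequencies (scale is a guess)
z = compute_sketch(X, Omega)                # m complex numbers summarize n*d values
print(z.shape)                              # (100,)
```

Learning the mixture then amounts to finding centroids (or Gaussian parameters) whose sketch matches z, which is the non-convex optimization problem the paper studies.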