Abstract: Randomized sets of binary tests have proven effective in solving a variety of image processing and vision problems. However, the exponential growth of their memory usage with the size of the sets hampers their implementation on the memory-constrained hardware generally available in low-power embedded systems. Our paper addresses this limitation by formulating the conventional semi-naive Bayesian ensemble decision rule in terms of posterior class probabilities, instead of class-conditional distributions of binary test realizations. Subsequent clustering of the posterior class distributions computed at training time sharply reduces the memory footprint of large binary test sets, while preserving their high accuracy. Our validation considers a smart metering application scenario, and demonstrates that up to 80% of the memory usage can be saved at constant accuracy.