Abstract: This work presents and compares three realistic scenarios for performing near-sensor decision making based on Dimensionality Reduction (DR) of high-dimensional signals in the context of highly constrained hardware. The studied DR techniques are learned according to two alternative strategies: one whose parameters are learned in a compressed signal representation, as obtained by random projections in a compressive sensing device, and one whose parameters are learned in the original signal domain. For both strategies, inference is nevertheless performed in the compressed domain, using a dedicated algorithm that depends on the selected learning technique. Our results, based on two common datasets, show that performing inference in the compressed domain is competitive with the classical classification strategy (inference in the original signal domain) in terms of memory and computational requirements. We also show that this approach is particularly well suited to embedded applications on hardware with limited resources, even under specific hardware design constraints.
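To make the compressed-domain pipeline concrete, the following is a minimal, self-contained sketch (not the paper's actual method): signals are compressed with a random Gaussian projection, as in compressive sensing, and both learning and inference then operate only on the compressed representation. The nearest-centroid classifier, the dimensions `N` and `M`, and the toy two-class data are illustrative assumptions, not details taken from this work.

```python
import random

random.seed(0)

N, M = 16, 4  # illustrative original and compressed dimensions (M << N)

# Random Gaussian projection matrix Phi (M x N), as used in compressive sensing.
Phi = [[random.gauss(0.0, 1.0 / M ** 0.5) for _ in range(N)] for _ in range(M)]

def project(x):
    """Compress an N-dimensional signal to M dimensions: y = Phi @ x."""
    return [sum(p * xi for p, xi in zip(row, x)) for row in Phi]

def sample(cls):
    """Toy two-class data: low-amplitude vs high-amplitude signals plus noise."""
    base = 0.0 if cls == 0 else 1.0
    return [base + random.gauss(0.0, 0.1) for _ in range(N)]

# "Learning in the compressed domain": class centroids of projected samples.
train = [(project(sample(c)), c) for c in (0, 1) for _ in range(20)]
centroids = {}
for c in (0, 1):
    ys = [y for y, lbl in train if lbl == c]
    centroids[c] = [sum(col) / len(ys) for col in zip(*ys)]

def classify(x):
    """Nearest-centroid inference performed entirely on the compressed signal."""
    y = project(x)
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda c: dist(y, centroids[c]))
```

Note that `classify` never reconstructs the original signal: only the small `M x N` projection and `M`-dimensional centroids must be stored, which is the memory and compute saving the abstract refers to.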