In this paper, we propose a learning-based approach for alias minimization of 1-D signals. Given an under-sampled test speech signal and a training set of several speech signals, each of which is available both under-sampled and sampled above the Nyquist rate, we estimate the non-aliased frequencies of the test signal from the training set. The test signal and each under-sampled training signal are first interpolated to the length of the non-aliased signals. They are then divided into a number of segments, and the discrete cosine transform (DCT) is computed for each segment. Assuming that the lower frequencies are non-aliased and minimally distorted, we replace the aliased DCT coefficients of the test signal with the best match from the training set. The non-aliased test signal is then reconstructed by taking the inverse DCT. Comparison with the standard interpolation technique, in terms of both subjective and quantitative analysis, indicates better performance. © 2010 IEEE.
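The pipeline described above (interpolate, segment, per-segment DCT, replace aliased coefficients with the best training match, inverse DCT) can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the function name `dealias`, the segment length, the number of low-frequency coefficients kept, and the nearest-neighbor match on low-frequency DCT coefficients are all assumptions made for the sketch.

```python
import numpy as np
from scipy.fft import dct, idct
from scipy.signal import resample

def dealias(test_lo, train_pairs, seg_len=64, n_keep=16, target_len=None):
    """Hypothetical sketch of the abstract's pipeline.

    test_lo     : under-sampled test signal (1-D array).
    train_pairs : list of (under-sampled, fully-sampled) training signals.
    seg_len     : DCT segment length (illustrative choice).
    n_keep      : number of low DCT coefficients assumed non-aliased.
    """
    if target_len is None:
        target_len = 2 * len(test_lo)  # assume 2x under-sampling

    # Step 1: interpolate the under-sampled test signal to the
    # length of the non-aliased (fully sampled) signals.
    test_up = resample(test_lo, target_len)

    # Build, per training segment, a low-frequency "key" from the
    # interpolated under-sampled version and the full DCT of the
    # fully sampled version.
    keys, full_dcts = [], []
    for lo, hi in train_pairs:
        lo_up = resample(lo, len(hi))
        for s in range(0, len(hi) - seg_len + 1, seg_len):
            keys.append(dct(lo_up[s:s + seg_len], norm='ortho')[:n_keep])
            full_dcts.append(dct(hi[s:s + seg_len], norm='ortho'))
    keys = np.array(keys)

    # Step 2: for each test segment, keep the low (non-aliased)
    # coefficients and swap in the high coefficients of the nearest
    # training segment, then invert the DCT.
    out = np.zeros(target_len)
    for s in range(0, target_len - seg_len + 1, seg_len):
        c = dct(test_up[s:s + seg_len], norm='ortho')
        best = np.argmin(np.linalg.norm(keys - c[:n_keep], axis=1))
        c[n_keep:] = full_dcts[best][n_keep:]
        out[s:s + seg_len] = idct(c, norm='ortho')
    return out
```

The nearest-neighbor search over low-frequency coefficients is one plausible reading of "the best search from the training set"; the paper itself may use a different matching criterion.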