This article analyzes how the L1 minimization method fails in the theory of signal processing known as “compressive sampling” or “compressive sensing,” which aims to reconstruct an original signal from very few samples. The samples are obtained by multiplying a sample matrix by a sparse signal, and the signal can then be recovered by L1 minimization. However, it is sometimes impossible to recover the original signal by finding the solution of minimum L1 norm, so it is necessary to generate data sets that reveal when and why L1 minimization fails. We show that random sample matrices drawn from Gaussian and uniform distributions fail for very different reasons than those drawn from Bernoulli distributions.
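The sampling-and-recovery pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's experimental setup: the dimensions, sparsity level, and random seed are arbitrary choices, and the L1 minimization is solved by recasting it as a linear program (splitting the signal into nonnegative positive and negative parts) with `scipy.optimize.linprog`.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 50, 20, 3          # signal length, number of samples, sparsity (arbitrary)

# Gaussian sample matrix and a k-sparse signal
A = rng.standard_normal((m, n))
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)

# Samples: multiply the sample matrix by the sparse signal
y = A @ x

# L1 minimization: min ||x||_1 subject to A x = y,
# written as an LP over x = u - v with u, v >= 0
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]
```

When recovery succeeds, `x_hat` matches `x`; the failure cases studied in this article are instances where the minimum-L1-norm solution differs from the original sparse signal.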