Context
Attempting to reproduce an illustrative example of compressive sampling from Candes-Wakin 2008, specifically the L1 recovery of a sparse signal shown in Fig. 2 on p. 5. Running my code (below) gives ~12-20% error rates for exact reconstruction, whereas the example suggests we should get close to 0%.
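For reference, the recovery program in that figure is basis pursuit; in the notation of my code below (`A` the row-sampled transform matrix, `y` the observed coefficients):

```latex
\min_{x \in \mathbb{R}^n} \; \|x\|_1 \quad \text{subject to} \quad A x = y
```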
Question
Where does my implementation go wrong? Perhaps it's (i) the order of operations (taking the sampling indices first, followed by the inverse DCT), or (ii) the ECOS solver that cvxpy invokes by default (instead of a more appropriate solver)?
Note: Fig. 2 in Candes-Wakin suggests reconstruction from complex-valued Fourier coefficients, whereas I use the discrete cosine transform, along the lines of section 3.2 of Rish and Grabarnik's book (link from Prof. Rish's website).
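If the complex-vs-real distinction turns out to matter, one way to sample genuine Fourier coefficients while keeping a real-valued solver is to stack the real and imaginary parts of a partial DFT matrix. A sketch under that assumption (dimensions arbitrary):

```python
import numpy as np

n, m = 128, 40
rng = np.random.default_rng(1)
idx = rng.choice(n, size=m, replace=False)

# Unitary DFT matrix; the rows at idx give m complex Fourier measurements.
F = np.fft.fft(np.eye(n), axis=0) / np.sqrt(n)
A_complex = F[idx]

# Split into real and imaginary parts so the constraint A_real @ x == y stays real.
A_real = np.vstack([A_complex.real, A_complex.imag])   # shape (2m, n)

x = rng.standard_normal(n)
y = A_real @ x   # equals Re and Im of the sampled Fourier coefficients, stacked
```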
Code (numpy 1.20.x, scipy, cvxpy 1.1.14)
```python
import numpy as np
import scipy.fftpack as spfft
import cvxpy as cvx
from matplotlib import pyplot as plt


def set_signal(low: int = -1, high: int = 1, n: int = 10) -> np.ndarray:
    """Return a bounded uniform random vector in R^n."""
    s = np.random.uniform(low=low, high=high, size=(n, 1))
    return s


def sparsify_signal(s: np.ndarray, n_sparse: int = 1) -> np.ndarray:
    """Set n_sparse randomly chosen elements of s to zero."""
    if not isinstance(s, np.ndarray):
        s = np.array(s)
    indices_to_zero = np.random.choice(a=len(s), size=n_sparse, replace=False)
    s[indices_to_zero] = 0
    return s


# PARAMS
N_dim = 512       # dimension of the space the original signal lives in
S_dim = 30        # support size: number of non-zero entries in the original signal
null_dim = N_dim - S_dim
m_samples = 60    # alternative: S_dim * log(N_dim), for some log base...
delta = 1e-5      # lower bound on nonzero solution terms; used in post-opt analysis

# INITIALIZE
X = set_signal(n=N_dim)  # won't reconstruct this, but a sparse version of it
X_sparse = sparsify_signal(s=X, n_sparse=null_dim)
yt = spfft.dct(X_sparse, norm='ortho')  # project sparse signal to (real) Fourier domain

# PROJECT AND SAMPLE
sample_indices = np.random.choice(N_dim, m_samples, replace=False)  # fix coordinates to sample
sample_indices.sort()               # plotting convenience
yt_sample = yt[sample_indices]      # restrict sparse signal to random sample indices
yt_sample = yt_sample.squeeze()

# Discrete cosine transform as a matrix representation
A = spfft.dct(np.identity(N_dim), norm='ortho', axis=0)
A = A[sample_indices]

# Optimization
vx = cvx.Variable(N_dim)                    # reconstruct a signal in N_dim space
objective = cvx.Minimize(cvx.norm(vx, 1))
constraints = [A @ vx == yt_sample]         # constraint in Fourier space
opt_problem = cvx.Problem(objective, constraints)
result = opt_problem.solve(verbose=True)

# Summarize optimization
print("status: {}".format(opt_problem.status))
optimal_x = vx.value.reshape(N_dim, 1)      # unsqueeze

# Number of nonzero elements in the solution (its cardinality or diversity)
nnz_l1 = (np.absolute(optimal_x) > delta).sum()
E = np.isclose(optimal_x, X_sparse)
E_support = np.isclose(optimal_x[sample_indices], X_sparse[sample_indices])
# We could compute the L2 error, but the claim is exact reconstruction...probabilistically
error_rate = 1 - (E.sum() / N_dim)
S_error_rate = 1 - (E_support.sum() / m_samples)
print('Found a feasible x in R^{} that has {} nonzeros.'.format(N_dim, nnz_l1))
print('Reconstruction error rate is %0.2f' % error_rate)
print('Support reconstruction error rate is %0.2f' % S_error_rate)

# PLOT
plt.plot(X_sparse, '.', color='b', label='original sparse signal')
plt.plot(optimal_x, '.', color='g', label='approximation')
plt.legend()
plt.show()
```
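One sanity check I have been considering for hypothesis (i): `scipy.fftpack.dct` transforms along the last axis by default, so on an `(n, 1)` column vector like the one `set_signal` returns, the default call transforms each length-1 row rather than the signal itself. A small self-contained sketch (dimensions arbitrary) comparing the matrix form against the explicit transform:

```python
import numpy as np
import scipy.fftpack as spfft

np.random.seed(7)
n = 16
x_col = np.random.randn(n, 1)   # column vector, as set_signal produces

# DCT as a matrix, applied to the flattened signal
A_full = spfft.dct(np.identity(n), norm='ortho', axis=0)
via_matrix = A_full @ x_col.ravel()

# Explicit transforms: default axis (-1) vs. axis=0
default_axis = spfft.dct(x_col, norm='ortho')           # transforms length-1 rows
along_axis0 = spfft.dct(x_col, norm='ortho', axis=0)    # transforms the signal

print(np.allclose(via_matrix, along_axis0.ravel()))     # matrix form matches axis=0
print(np.allclose(via_matrix, default_axis.ravel()))    # default axis need not match
```

If the two prints disagree, the measurement vector `yt_sample` and the matrix `A` in my script are built from different transforms.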