
Context

I am attempting to reproduce an illustrative example of compressive sampling from Candes-Wakin 2008, specifically the L1 recovery of a sparse signal shown in Fig. 2 on p. 5. Running my code (below) yields ~12-20% error rates for exact reconstruction, whereas the example suggests we should get close to 0%.

Question

Where does my implementation go wrong? Perhaps it is (i) the order of operations (sampling indices first, followed by the inverse DCT), or (ii) the ECOS solver that cvxpy invokes (instead of a more appropriate solver)?

Note: Fig. 2 in Candes-Wakin suggests reconstruction from complex-valued Fourier coefficients, whereas I use the discrete cosine transform, along the lines of Rish-Grabarnik, Section 3.2 of their book (linked from Prof. Rish's website).

Code (numpy 1.20.x, scipy, cvxpy 1.1.14):

```python
import numpy as np
import scipy.fftpack as spfft
import cvxpy as cvx
from matplotlib import pyplot as plt


def set_signal(low: int = -1, high: int = 1, n: int = 10) -> np.ndarray:
    """Return a bounded uniform random vector in R^n."""
    s = np.random.uniform(low=low, high=high, size=(n, 1))
    return s


def sparsify_signal(s: np.ndarray, n_sparse: int = 1) -> np.ndarray:
    """Set n_sparse randomly chosen elements of s to zero."""
    if not isinstance(s, np.ndarray):
        s = np.array(s)
    indices_to_zero = np.random.choice(a=len(s), size=n_sparse, replace=False)
    s[indices_to_zero] = 0
    return s


# PARAMS
N_dim = 512     # dimension of the space where the original signal lives
S_dim = 30      # support size: number of nonzero entries in the original signal
null_dim = N_dim - S_dim
m_samples = 60  # alternative: S_dim * log(N_dim), for some log base...
delta = 1e-5    # lower bound on nonzero solution terms; used in post-opt analysis

# INITIALIZE
X = set_signal(n=N_dim)  # won't reconstruct this, but a sparse version of it
X_sparse = sparsify_signal(s=X, n_sparse=null_dim)
yt = spfft.dct(X_sparse, norm='ortho')  # project sparse signal to (real) Fourier domain

# PROJECT AND SAMPLE
sample_indices = np.random.choice(N_dim, m_samples, replace=False)  # fix coordinates to sample
sample_indices.sort()  # plotting convenience
yt_sample = yt[sample_indices]  # restrict sparse signal to random sample indices
yt_sample = yt_sample.squeeze()

# Discrete cosine transform as a matrix representation
A = spfft.dct(np.identity(N_dim), norm='ortho', axis=0)
A = A[sample_indices]

# Optimization
vx = cvx.Variable(N_dim)  # reconstruct a signal in N_dim space
objective = cvx.Minimize(cvx.norm(vx, 1))
constraints = [A @ vx == yt_sample]  # constraint in Fourier space
opt_problem = cvx.Problem(objective, constraints)
result = opt_problem.solve(verbose=True)

# Summarize optimization
print("status: {}".format(opt_problem.status))
optimal_x = vx.value.reshape(N_dim, 1)  # unsqueeze

# Number of nonzero elements in the solution (its cardinality or diversity)
nnz_l1 = (np.absolute(optimal_x) > delta).sum()
E = np.isclose(optimal_x, X_sparse)
E_support = np.isclose(optimal_x[sample_indices], X_sparse[sample_indices])
# we could compute the L2 error, but the claim is exact reconstruction...probabilistically
error_rate = 1 - (E.sum() / N_dim)
S_error_rate = 1 - (E_support.sum() / m_samples)
print('Found a feasible x in R^{} that has {} nonzeros.'.format(N_dim, nnz_l1))
print('Reconstruction error rate is %0.2f' % error_rate)
print('Support Reconstruction error rate is %0.2f' % S_error_rate)

# PLOT
plt.plot(X_sparse, '.', color='b', label='original sparse signal')
plt.plot(optimal_x, '.', color='g', label='approximation')
plt.legend()
```

1 Answer

We're after the problem:

$$\begin{aligned} \arg \min_{\boldsymbol{x}} \quad & {\left\| \boldsymbol{x} \right\|}_{1} \\ \text{subject to} \quad & \hat{F} \boldsymbol{x} = \boldsymbol{y} \end{aligned}$$

where $ \hat{F} $ is a subset of the rows of a unitary matrix.
In the example from Emmanuel J. Candes, Michael B. Wakin - An Introduction To Compressive Sampling, it is a subsampled DFT matrix.

The actual analysis of the example is given in Robust Uncertainty Principles: Exact Signal Reconstruction from Highly Incomplete Frequency Information. In this case there are 3 main parameters:

  1. The number of samples ($ N $): 256 (512 in their code).
  2. The number of nonzero samples ($ \left| T \right| $): varies from 2 to 32.
  3. The number of given frequency samples ($ \left| \Omega \right| $): varies from 4 to 64.
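To make the setup concrete, the measurement model with these three parameters can be sketched as follows. This is an illustrative version only: the seed, variable names, and the ±1 sign pattern of the nonzeros are my assumptions, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 256          # number of samples (N)
T_card = 16      # number of nonzero entries (|T|)
Omega_card = 64  # number of given frequency samples (|Omega|)

# Sparse signal: random support with random +/-1 entries (an assumed sign pattern)
x = np.zeros(N)
support = rng.choice(N, size=T_card, replace=False)
x[support] = rng.choice([-1.0, 1.0], size=T_card)

# Unitary DFT matrix and a random row subset Omega
F = np.fft.fft(np.eye(N)) / np.sqrt(N)
omega = rng.choice(N, size=Omega_card, replace=False)
F_hat = F[omega]   # subsampled DFT: the \hat{F} in the problem above
y = F_hat @ x      # observed frequency samples
```

The recovery problem then asks for the minimum-L1 vector consistent with `y` under `F_hat`.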

I solved the problem using my solution to How Can L1 Norm Minimization with Linear Equality Constraints (Basis Pursuit / Sparse Representation) Be Formulated as Linear Programming.
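The LP reformulation can be sketched like this. Note this is an illustrative version, not the code from my repository: it uses SciPy's `linprog` rather than my LP solver, and a real subsampled DCT matrix instead of the complex DFT, so no real/imaginary splitting is needed. Parameter values and the seed are arbitrary.

```python
import numpy as np
from scipy.optimize import linprog
import scipy.fftpack as spfft

rng = np.random.default_rng(1)
N, S, m = 128, 8, 48  # ambient dim, sparsity, number of measurements

# Sparse ground truth and a row-subsampled orthonormal DCT as the sensing matrix
x_true = np.zeros(N)
x_true[rng.choice(N, S, replace=False)] = rng.standard_normal(S)
A = spfft.dct(np.eye(N), norm='ortho', axis=0)[rng.choice(N, m, replace=False)]
y = A @ x_true

# Basis pursuit as an LP over z = [x; t]:
#   min 1^T t   s.t.   x - t <= 0,  -x - t <= 0,  A x = y
c = np.concatenate([np.zeros(N), np.ones(N)])
I = np.eye(N)
A_ub = np.block([[I, -I], [-I, -I]])
b_ub = np.zeros(2 * N)
A_eq = np.hstack([A, np.zeros((m, N))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * (2 * N), method='highs')
x_hat = res.x[:N]  # recovered signal
```

Since `x_true` is feasible for the LP, the optimum's L1 norm can never exceed that of `x_true`; exact recovery means the two coincide (with high probability at these parameter values).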

The results I get are:

[Plot of the recovery results]

These agree with their numerical examples.
The code is available at my StackExchange Codes Signal Processing Q78143 GitHub repository (look at the SignalProcessing\Q78143 folder).

  • thank you -- if I'm reading your code right, one of the key elements of your solution is to (i) use a discrete Fourier transform (not a DCT, as alluded to in the Rish reference) and (ii) apply the equality constraints to the real and imaginary parts separately, as in this line. Commented Sep 24, 2021 at 2:11
  • You could use the DCT matrix if you build it as an orthogonal matrix. Since it is a real transform, you won't need the trick I did. Commented Sep 24, 2021 at 3:27
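The orthogonality claim in that last comment is easy to verify: SciPy's DCT-II matrix with `norm='ortho'` satisfies $D^T D = I$, so it can stand in for the unitary $\hat{F}$ above without any real/imaginary splitting. A quick illustrative check (small `N` chosen arbitrarily):

```python
import numpy as np
import scipy.fftpack as spfft

N = 64
# Build the DCT-II matrix column by column by transforming the identity
D = spfft.dct(np.eye(N), norm='ortho', axis=0)
print(np.allclose(D.T @ D, np.eye(N)))  # True
```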
