

is optimal, then ||x0||1 = ||xℓ1||1. Hence x0 is a solution of BP, and in view of the uniqueness assumption one sees that x0 = xℓ1. This proves that every subsequence (xγk) converges to the point xℓ1, and the same then holds for the whole sequence (xγ).

A.4 General uncertainty principle

The following theorem is a generalization of the Donoho–Stark uncertainty principle. The proof is taken from [30], but it was originally given by A. Pinkus.

Theorem A.4 Let Ψ and Φ be two distinct bases for C^n, and let their mutual coherence be µ(Ψ, Φ). If a nonzero vector x has the representations α and β in these bases, with supports S and T respectively, then

|S| · |T| ≥ 1/µ(Ψ, Φ)²,

which further gives, in view of the relation between the geometric and arithmetic means, that

1/µ(Ψ, Φ) ≤ √(|S| · |T|) ≤ (|S| + |T|)/2.   (A.19)
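As a numerical sanity check (not part of the original proof), the bound (A.19) can be illustrated with the classical spike/Fourier pair, where it is attained with equality by a Dirac comb. The sketch below uses NumPy; the dimension n = 16 and the zero-detection tolerance are illustrative assumptions.

```python
import numpy as np

n = 16  # illustrative dimension; any perfect square gives a tight case

# The two bases: spikes (identity) and the unitary discrete Fourier basis
Psi = np.eye(n)
Phi = np.fft.fft(np.eye(n), norm="ortho")

# Mutual coherence: largest inner product between columns of the two bases
mu = np.max(np.abs(Psi.conj().T @ Phi))  # equals 1/sqrt(n) = 0.25 here

# A Dirac comb with spacing sqrt(n) is sparse in both bases
x = np.zeros(n)
x[::4] = 1.0

alpha = x.copy()                 # coefficients in the spike basis
beta = np.linalg.solve(Phi, x)   # coefficients in the Fourier basis
S = int(np.sum(np.abs(alpha) > 1e-10))  # |S| = 4
T = int(np.sum(np.abs(beta) > 1e-10))   # |T| = 4

# The chain of inequalities in (A.19), attained with equality for this x
assert 1 / mu <= np.sqrt(S * T) + 1e-9 <= (S + T) / 2 + 1e-9
```

For this x both supports have size 4, so 1/µ = √(|S|·|T|) = (|S|+|T|)/2 = 4, showing that (A.19) cannot be improved in general.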

As a consequence of this theorem one obtains a uniqueness condition for sparse solutions of the system (1.1) in cases where the matrix has two-ortho structure, so that the coherence of the matrix equals the mutual coherence of the bases. Indeed, consider the system [Ψ|Φ]x = y and suppose that x and x′ are two of its solutions; this leads to the result of Gribonval and Nielsen (see p. 9).
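The two-ortho situation can also be checked numerically: for A = [Ψ|Φ] with Ψ the identity and Φ the unitary DFT basis, the coherence of A coincides with the mutual coherence µ(Ψ, Φ). A minimal sketch (the dimension n = 16 is an arbitrary assumption):

```python
import numpy as np

n = 16  # illustrative dimension (assumption)

# Two-ortho matrix: spikes concatenated with the unitary DFT basis
Psi = np.eye(n)
Phi = np.fft.fft(np.eye(n), norm="ortho")
A = np.hstack([Psi, Phi])

# Coherence of A: largest off-diagonal entry of the absolute Gram matrix;
# all columns of A already have unit norm
G = np.abs(A.conj().T @ A)
np.fill_diagonal(G, 0.0)
coherence_A = G.max()

# Mutual coherence of the two bases
mu = np.abs(Psi.conj().T @ Phi).max()

# Within each orthonormal block the off-diagonal inner products vanish,
# so only the cross-block products contribute
assert np.isclose(coherence_A, mu)
```

Because each block is orthonormal, the only nonzero off-diagonal Gram entries are the cross products between the two bases, which is exactly why the two coherence notions agree for 2-ortho matrices.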

References

[1] R. Baraniuk, M. Davenport, R. DeVore and M. Wakin, ”A simple proof of the restricted isometry property for random matrices,” Constructive Approximation, 2007.

[2] T.T. Cai and T. Jiang, ”Limiting laws of coherence of random matrices with applications to testing covariance structure and construction of compressed sensing matrices,” The Annals of Statistics, vol. 39, pp. 2330–2355, 2011.

[3] T.T. Cai and A. Zhang, ”Sharp RIP bound for sparse signal and low-rank matrix recovery,” Applied and Computational Harmonic Analysis, vol. 35, pp. 74–93, 2013.

[4] T.T. Cai and L. Wang, ”Orthogonal matching pursuit for sparse signal recovery with noise,” IEEE Transactions on Information Theory, vol. 57, pp. 4680–4688, 2011.

[5] T.T. Cai, L. Wang and G. Xu, ”New bounds for restricted isometry constants,” IEEE Transactions on Information Theory, vol. 56, pp. 4388–4394, 2010.

[6] T.T. Cai, L. Wang and G. Xu, ”Stable recovery of sparse signals and an oracle inequality,” IEEE Transactions on Information Theory, vol. 56, no. 7, pp. 3516–3522, 2010.

[7] E.J. Candès and M.B. Wakin, ”An introduction to compressive sampling,” IEEE Signal Processing Magazine, pp. 21–30, 2008.

[8] E.J. Candès and T. Tao, ”Near optimal signal recovery from random projections: Universal encoding strategies?,” IEEE Transactions on Information Theory, vol. 52, no. 12, pp. 5406–5425, 2006.

[9] E.J. Candès and T. Tao, ”Decoding by linear programming,” IEEE Transactions on Information Theory, vol. 51, no. 12, pp. 4203–4215, 2005.

[10] E.J. Candès, J. Romberg and T. Tao, ”Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information,” IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, 2006.

[11] E.J. Candès and J. Romberg, ”Sparsity and incoherence in compressive sampling,” Inverse Problems, vol. 23, no. 3, pp. 969–985, 2007.

[12] E.J. Candès, ”The restricted isometry property and its implications for compressed sensing,” Comptes Rendus de l’Académie des Sciences, Paris, Série I, vol. 346, pp. 589–592, 2008.

[13] E.J. Candès, M. Rudelson, T. Tao and R. Vershynin, ”Stable signal recovery from incomplete and inaccurate measurements,” Communications on Pure and Applied Mathematics, vol. 59, pp. 1207–1223, 2006.

[14] S.S. Chen, D.L. Donoho and M.A. Saunders, ”Atomic decomposition by basis pursuit,” SIAM Journal on Scientific Computing, vol. 20, no. 1, pp. 33–61, 1998.

[15] A. Cohen, W. Dahmen and R. DeVore, ”Compressed sensing and best k-term approximation,” Journal of the American Mathematical Society, vol. 22, no. 1, pp. 211–231, 2009.

[16] M.A. Davenport, M.F. Duarte, Y.C. Eldar and G. Kutyniok, ”Introduction to compressed sensing,” Preprint 93.

[17] M.A. Davenport and M.B. Wakin, ”Analysis of orthogonal matching pursuit using the restricted isometry property,” Preprint, 2009.

[18] G. Davis, S. Mallat and Z. Zhang, ”Adaptive time-frequency decomposition with matching pursuits,” Optical Engineering, vol. 33, p. 2183, 1994.

[19] D.L. Donoho and M. Elad, ”Maximal sparsity representation via ℓ1 minimization,” Proceedings of the National Academy of Sciences, vol. 100, pp. 2197–2202, 2003.

[20] D.L. Donoho and M. Elad, ”Optimally sparse representation in general (non-orthogonal) dictionaries via ℓ1-minimization,” Proceedings of the National Academy of Sciences, USA, vol. 100, pp. 2197–2202, 2003.

[21] D.L. Donoho, ”For most large underdetermined systems of linear equations the minimal ℓ1-norm solution is also the sparsest solution,” Communications on Pure and Applied Mathematics, vol. 59, no. 7, pp. 907–934, 2006.

[22] D.L. Donoho, ”For most large underdetermined systems of linear equations the minimal ℓ1-norm solution approximates the sparsest near-solution,” Communications on Pure and Applied Mathematics, vol. 59, no. 6, pp. 797–829, 2006.

[23] D.L. Donoho, ”Neighborly polytopes and sparse solution of underdetermined linear equations,” Technical Report, Stanford University, 2005.

[24] D.L. Donoho, ”Fast solution of ℓ1-minimization problems when the solution may be sparse,” IEEE Transactions on Information Theory, vol. 54, no. 11, 2008.

[25] D.L. Donoho and X. Huo, ”Uncertainty principles and ideal atomic decomposition,” IEEE Transactions on Information Theory, vol. 47, no. 7, pp. 2845–2862, 2001.

[26] D.L. Donoho and P.B. Stark, ”Uncertainty principles and signal recovery,” SIAM Journal on Applied Mathematics, vol. 49, no. 3, pp. 906–931, 1989.

[27] C. Dossal, ”A necessary and sufficient condition for exact recovery by ℓ1 minimization,” Comptes Rendus de l’Académie des Sciences, Paris, Série I, vol. 350, pp. 117–120, 2012.

[28] C. Dossal, M.-L. Chabanol, G. Peyré and J. Fadili, ”Sharp support recovery from noisy random measurements by ℓ1 minimization,” Applied and Computational Harmonic Analysis, vol. 33, no. 1, pp. 24–43, 2012.

[29] B. Efron, T. Hastie, I.M. Johnstone and R. Tibshirani, ”Least angle regression,” The Annals of Statistics, vol. 32, no. 2, pp. 407–499, 2004.

[30] M. Elad, Sparse and Redundant Representations. Berlin: Springer, 2010.

[31] J. Ellenberg, ”Fill in the blanks: Using math to turn lo-res datasets into hi-res samples,” 2010. Available: www.wired.com/2010/02/ff_algorithm/all/1. [Retrieved: 29 December 2016].

[32] M. Fornasier and H. Rauhut, ”Compressive sensing,” Handbook of Mathematical Methods in Imaging, Springer, 2010.

[33] S. Foucart and H. Rauhut, A Mathematical Introduction to Compressive Sensing. Birkhäuser, 2013.

[34] S. Foucart and M. Lai, ”Sparsest solutions of underdetermined linear systems via ℓq-minimization for 0 < q ≤ 1,” Applied and Computational Harmonic Analysis, vol. 26, no. 3, pp. 395–407, 2009.

[35] J.J. Fuchs, ”On sparse representation in arbitrary redundant bases,” IEEE Transactions on Information Theory, vol. 50, no. 6, pp. 1341–1344, 2004.

[36] J.J. Fuchs, ”More on sparse representations in arbitrary bases.”

[37] J.J. Fuchs, ”Recovery of exact sparse representations in the presence of bounded noise,” IEEE Transactions on Information Theory, vol. 51, no. 10, pp. 3601–3608, 2005.

[38] R. Gribonval and M. Nielsen, ”Sparse representations in unions of bases,” IEEE Transactions on Information Theory, vol. 49, pp. 3320–3325, 2003.

[39] N. Hurley and S. Rickard, ”Comparing measures of sparsity,” IEEE Transactions on Information Theory, vol. 55, no. 10, pp. 4723–4741, 2009.

[40] Y. Jin, Y.-H. Kim and B.D. Rao, ”Support recovery of sparse signals,” 2010. Available: https://arxiv.org/abs/1003.0888. [Retrieved: April 2017].

[41] D.S. Johnson and L.A. McGeoch, ”The traveling salesman problem: A case study in local optimization,” Local Search in Combinatorial Optimization. London, 1997.

[42] D.L. Kreher and D.R. Stinson, Combinatorial Algorithms: Generation, Enumeration, and Search. CRC Press, 1999.

[43] S. Kunis and H. Rauhut, ”Random sampling of sparse trigonometric polynomials II – Orthogonal matching pursuit versus basis pursuit,” Preprint, 2006. Available: https://www.tu-chemnitz.de/mathematik/preprint/2006/PREPRINT_06.pdf. [Retrieved: 17 April 2017].

[44] R. Maleh, ”Improved RIP analysis of orthogonal matching pursuit,” 2011. Available: https://arxiv.org/abs/1102.4311. [Retrieved: 17 April 2017].

[45] C.F. Mecklenbräuker, P. Gerstoft and E. Zöchmann, ”Beamforming of the residuals is the LASSO’s dual,” IEEE Transactions on Signal Processing, 2015.

[46] Q. Mo and S. Li, ”New bounds on the restricted isometry constant δ2k,” Applied and Computational Harmonic Analysis, vol. 31, pp. 460–468, 2011.

[47] B.K. Natarajan, ”Sparse approximate solutions to linear systems,” SIAM Journal on Computing, vol. 24, no. 2, pp. 227–234, 1995.

[48] Y. Pati, R. Rezaiifar and P. Krishnaprasad, ”Orthogonal matching pursuit: Recursive function approximation with application to wavelet decomposition,” Asilomar Conference on Signals, Systems and Computers, 1993.

[49] M.D. Plumbley, ”On polar polytopes and the recovery of sparse representations,” 2005. Available: https://www.eecs.qmul.ac.uk/~markp/2005/Plumbley05-polar.pdf. [Retrieved: April 2017].

[50] C. Ramirez, V. Kreinovich and M. Argaez, ”Why ℓ1 is a good approximation to ℓ0: A geometric explanation,” Journal of Uncertain Systems, vol. 7, no. 3, pp. 203–207.

[51] H. Rauhut, ”Compressive sensing and structured random matrices,” Lecture notes. Available: http://perso-math.univ-mlv.fr/users/banach/workshop2010/talks/Rauhut.pdf. [Retrieved: 17 April 2017].

[52] G. Reeves and M. Gastpar, ”Sampling bounds for sparse support recovery in the presence of noise,” IEEE International Symposium on Information Theory – Proceedings, pp. 2187–2191, 2008.

[53] G. Reeves and M. Gastpar, ”Approximate sparsity pattern recovery: Information-theoretic lower bounds,” IEEE Transactions on Information Theory, vol. 59, no. 6, pp. 3451–3465, 2013.

[54] C. Rego, D. Gamboa, F. Glover and C. Osterman, ”Traveling salesperson problem heuristics: leading methods, implementations and latest advances,” European Journal of Operational Research, vol. 211, no. 3, pp. 427–441, 2011.

[55] I. Rish and G. Grabarnik, Sparse Modeling: Theory, Algorithms and Applications. New York: CRC Press, 2014.

[56] M. Rudelson and R. Vershynin, ”On sparse reconstruction from Fourier and Gaussian measurements,” Communications on Pure and Applied Mathematics, vol. 61, pp. 1024–1045, 2008.

[57] L. Shepp and B. Logan, ”The Fourier reconstruction of a head section,” IEEE Transactions on Nuclear Science, vol. 21, no. 3, pp. 21–43, 1974.

[58] Z. Sun, ”Sparse optimization lecture: Sparse recovery guarantees,” lecture notes. Available: http://www.math.ucla.edu/~wotaoyin/summer2013/slides/Lec03_SparseRecoveryGuarantees.pdf. [Retrieved: 16 April 2017].

[59] M.A. Sustik, J.A. Tropp, I.S. Dhillon and R.W. Heath, ”On the existence of equiangular tight frames,” Linear Algebra and its Applications, vol. 426, pp. 619–635, 2007.

[60] R.J. Tibshirani, ”The lasso problem and uniqueness,” Electronic Journal of Statistics, vol. 7, pp. 1456–1490, 2013.

[61] J.A. Tropp, ”Greed is good: Algorithmic results for sparse approximation,” IEEE Transactions on Information Theory, vol. 50, no. 11, pp. 2231–2242, 2004.

[62] J.A. Tropp, ”Just relax: Convex programming methods for identifying sparse signals in noise,” IEEE Transactions on Information Theory, vol. 52, no. 3, pp. 1030–1051, 2006.

[63] J.A. Tropp and A.C. Gilbert, ”Signal recovery from random measurements via orthogonal matching pursuit: The Gaussian case,” ACM Report, 2007.

[64] M.J. Wainwright, ”Sharp thresholds for high-dimensional and noisy sparsity recovery using ℓ1-constrained quadratic programming (Lasso),” IEEE Transactions on Information Theory, vol. 55, pp. 2183–2202, 2009.

[65] L.R. Welch, ”Lower bounds on the maximum cross correlation of signals,” IEEE Transactions on Information Theory, vol. 20, no. 3, pp. 397–399, 1974.

[66] J. Zhang et al., ”Noisy sparse recovery based on parameterized quadratic programming by thresholding,” EURASIP Journal on Advances in Signal Processing, 2011.
