TY - GEN
T1 - Greedy Bayesian double sparsity dictionary learning
AU - Serra, Juan G.
AU - Villena, Salvador
AU - Molina, Rafael
AU - Katsaggelos, Aggelos K.
N1 - Funding Information:
Research funded by the Spanish Ministry of Economy and Competitiveness (MINECO) projects TIN2013-43880-R and DPI2016-77869-C2-2-R, and the US Department of Energy grant DE-NA0002520.
Publisher Copyright:
© 2017 IEEE.
PY - 2018/2/20
Y1 - 2018/2/20
N2 - This work presents a greedy Bayesian dictionary learning (DL) algorithm where not only the signals but also the dictionary representation matrix admits a sparse representation. This double-sparsity (DS) model has been shown to be superior to the standard sparse approach in some image processing tasks, where sparsity is imposed only on the signal coefficients. We present a new Bayesian approach which addresses typical shortcomings of regularization-based DS algorithms: the required prior knowledge of the true noise level and the need for parameter tuning. Our model estimates the noise and sparsity levels as well as the model parameters from the observations and frequently outperforms state-of-the-art dictionary-based techniques by taking into account the uncertainty of the estimates. Additionally, we introduce a versatile notation which generalizes denoising, inpainting, and compressive sensing problem formulations. Finally, theoretical results are validated with denoising experiments on a set of images.
AB - This work presents a greedy Bayesian dictionary learning (DL) algorithm where not only the signals but also the dictionary representation matrix admits a sparse representation. This double-sparsity (DS) model has been shown to be superior to the standard sparse approach in some image processing tasks, where sparsity is imposed only on the signal coefficients. We present a new Bayesian approach which addresses typical shortcomings of regularization-based DS algorithms: the required prior knowledge of the true noise level and the need for parameter tuning. Our model estimates the noise and sparsity levels as well as the model parameters from the observations and frequently outperforms state-of-the-art dictionary-based techniques by taking into account the uncertainty of the estimates. Additionally, we introduce a versatile notation which generalizes denoising, inpainting, and compressive sensing problem formulations. Finally, theoretical results are validated with denoising experiments on a set of images.
KW - Bayesian Inference
KW - Dictionary Learning
KW - Sparse Representation
UR - http://www.scopus.com/inward/record.url?scp=85045309754&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85045309754&partnerID=8YFLogxK
U2 - 10.1109/ICIP.2017.8296619
DO - 10.1109/ICIP.2017.8296619
M3 - Conference contribution
AN - SCOPUS:85045309754
T3 - Proceedings - International Conference on Image Processing, ICIP
SP - 1935
EP - 1939
BT - 2017 IEEE International Conference on Image Processing, ICIP 2017 - Proceedings
PB - IEEE Computer Society
T2 - 24th IEEE International Conference on Image Processing, ICIP 2017
Y2 - 17 September 2017 through 20 September 2017
ER -