This work presents a greedy Bayesian dictionary learning (DL) algorithm in which not only the signals but also the dictionary representation matrix admit a sparse representation. This double-sparsity (DS) model has been shown to outperform the standard sparse approach, where sparsity is imposed only on the signal coefficients, in several image processing tasks. We present a new Bayesian approach that addresses typical shortcomings of regularization-based DS algorithms: the required prior knowledge of the true noise level and the need for parameter tuning. Our model estimates the noise and sparsity levels, as well as the model parameters, from the observations, and by taking the uncertainty of these estimates into account it frequently outperforms state-of-the-art dictionary-based techniques. Additionally, we introduce a versatile notation that generalizes the denoising, inpainting, and compressive sensing problem formulations. Finally, the theoretical results are validated with denoising experiments on a set of images.