Scientific research is not about "preferring" one algorithm over another; it is about *understanding*. I am interested in understanding Boltzmann machines, and I am interested in understanding all kinds of auto-encoders. The world of unsupervised learning is wide open: there is no clear winner, but there are many interesting questions.

As far as unsupervised pre-training goes, denoising auto-encoders are somewhat easier to train and use than RBMs but give about the same results, so I would go for the former if I had to make a choice. These days, though, many more interesting algorithms have been proposed, such as the ladder networks for semi-supervised learning, which involve a funny architecture that looks like a stack of denoising auto-encoders with a joint training objective over all layers, plus a supervised learning objective.
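To make the denoising auto-encoder idea mentioned above concrete (corrupt the input, then train the network to reconstruct the *clean* version), here is a minimal single-layer sketch in plain NumPy. The toy data, tied weights, 30% masking noise, and learning rate are arbitrary choices for illustration, not settings from any particular paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data lying near a 5-dimensional manifold in 20 dimensions,
# so a small hidden layer has real structure to capture.
Z = rng.random((200, 5))
A = rng.normal(0.0, 1.0, (5, 20))
X = sigmoid(Z @ A)

n_in, n_hidden = 20, 8
# Tied weights: the decoder reuses the encoder matrix, transposed.
W = rng.normal(0.0, 0.1, (n_in, n_hidden))
b_h = np.zeros(n_hidden)
b_o = np.zeros(n_in)

def reconstruct(X):
    # Encode then decode, with no corruption (used for evaluation).
    return sigmoid(sigmoid(X @ W + b_h) @ W.T + b_o)

err_before = np.mean((reconstruct(X) - X) ** 2)

lr = 0.5
for epoch in range(300):
    # Corruption step: randomly zero out ~30% of the input entries.
    X_noisy = X * (rng.random(X.shape) > 0.3)
    h = sigmoid(X_noisy @ W + b_h)      # encode the corrupted input
    X_hat = sigmoid(h @ W.T + b_o)      # decode
    # Squared-error backprop; the target is the clean input X,
    # which is what makes this a *denoising* auto-encoder.
    d_out = (X_hat - X) * X_hat * (1.0 - X_hat)
    d_hid = (d_out @ W) * h * (1.0 - h)
    W -= lr * (X_noisy.T @ d_hid + d_out.T @ h) / len(X)
    b_h -= lr * d_hid.mean(axis=0)
    b_o -= lr * d_out.mean(axis=0)

err_after = np.mean((reconstruct(X) - X) ** 2)
```

A ladder-style network would stack several such corrupt-and-reconstruct layers and train them jointly, adding a supervised loss on top, rather than training one layer at a time as in classic unsupervised pre-training.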