Blog

Self-supervised learning with pretext invariant representations

Introduction

Recently, I have started to experiment with self-supervised learning techniques. Over the last few months, new methods have been developing rapidly, and self-supervised learning is approaching the performance of transfer learning. This family of methods makes it possible to use unlabeled data to improve performance on a downstream task (e.g. classification, detection, and so on). To better understand the idea behind these algorithms, visit here or here. Self-supervised learning is used not only in the computer vision domain; here is an example of its successful use in automatic speech recognition: https://ai.
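As a concrete illustration of the pretext-invariance idea from the title, below is a minimal sketch of a PIRL-style contrastive (NCE) objective: the representation of an image is pulled toward the representation of its pretext-transformed version and pushed away from negatives. This is only a sketch assuming PyTorch; the function name and the way negatives are obtained (e.g. sampled from a memory bank) are illustrative assumptions, not the paper's exact implementation.

    import torch
    import torch.nn.functional as F

    def pirl_nce_loss(v_img, v_aug, v_neg, tau=0.07):
        # v_img: (B, D) representations of the original images
        # v_aug: (B, D) representations of their pretext-transformed versions
        # v_neg: (N, D) negatives, e.g. drawn from a memory bank (assumption)
        v_img = F.normalize(v_img, dim=1)
        v_aug = F.normalize(v_aug, dim=1)
        v_neg = F.normalize(v_neg, dim=1)
        pos = torch.sum(v_img * v_aug, dim=1, keepdim=True) / tau  # (B, 1)
        neg = v_img @ v_neg.t() / tau                              # (B, N)
        logits = torch.cat([pos, neg], dim=1)
        # the positive pair sits in column 0, so the target class is 0
        labels = torch.zeros(v_img.size(0), dtype=torch.long, device=v_img.device)
        return F.cross_entropy(logits, labels)

    # Example with random features, just to show the shapes involved:
    loss = pirl_nce_loss(torch.randn(8, 128), torch.randn(8, 128), torch.randn(1024, 128))

Minimizing this loss makes the representation invariant to the pretext transform, which is the core idea behind pretext-invariant representation learning.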

Learn probabilistic PCA first to better understand Variational Autoencoders

I am preparing a presentation on Variational Autoencoders for my university colleagues, and I have been thinking about what could help them understand the topic better. I noticed that the derivation of the loss function can cause problems, as it requires understanding of and intuition for probabilistic concepts. I think that understanding probabilistic PCA first helps one to understand Variational Autoencoders faster. Probabilistic PCA is a great and simple algorithm, and its big advantage is that all the probability distributions it uses are Gaussian.
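To make that claim concrete, here is the generative model of probabilistic PCA (as introduced by Tipping and Bishop): a standard Gaussian prior over the latent z, a linear-Gaussian likelihood, and, as a consequence, a Gaussian marginal and a closed-form Gaussian posterior:

    p(z) = \mathcal{N}(z \mid 0, I)
    p(x \mid z) = \mathcal{N}(x \mid Wz + \mu, \sigma^2 I)
    p(x) = \mathcal{N}(x \mid \mu, WW^\top + \sigma^2 I)
    p(z \mid x) = \mathcal{N}(z \mid M^{-1} W^\top (x - \mu), \sigma^2 M^{-1}), \quad M = W^\top W + \sigma^2 I

Because the exact posterior is available in closed form, there is nothing to approximate here. A VAE replaces the linear map Wz + \mu with a neural network, which makes the posterior intractable and motivates the encoder and the ELBO-based loss.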