Variational Inference

Preliminaries It is usually the case that we have a dataset $\mathcal{D} = \{x_1, \cdots, x_N\}$ and a parametrized family of distributions $p_\theta(x)$. We would like to find the parameters that best describe the data. This is typically done using [[MLE and MAP|maximum likelihood estimation (MLE)]], in which the optimal parameters are those that maximize the log likelihood of the data. Mathematically speaking, $$ \hat{\theta}_\mathrm{MLE} = \arg\max_\theta \frac{1}{N}\sum_{i=1}^{N}\log p_{\theta}(x_i)....
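The MLE objective above can be sketched concretely. This is a minimal toy example, not from the post itself: it assumes a Gaussian family $p_\theta$ with known scale and finds the location parameter by maximizing the average log likelihood over a grid.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy dataset D = {x_1, ..., x_N}, drawn from a Gaussian (an assumption for illustration)
data = rng.normal(loc=2.0, scale=1.0, size=1000)

def avg_log_likelihood(mu, sigma, x):
    # (1/N) * sum_i log p_theta(x_i) for a Gaussian p_theta with parameters (mu, sigma)
    return np.mean(-0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2))

# Grid search stands in for arg max; for a Gaussian with fixed sigma,
# the MLE of mu is the sample mean, so the optimum should land near it.
mus = np.linspace(0.0, 4.0, 401)
lls = [avg_log_likelihood(m, 1.0, data) for m in mus]
mu_hat = mus[int(np.argmax(lls))]
print(mu_hat)  # close to data.mean()
```

In practice one would use gradient-based optimization rather than a grid, but the objective being maximized is the same average log likelihood.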

March 7, 2023 · 14 min · Saeed Hedayatian

In Praise of Einsum

This is a short note about the einsum functionality that is present in numpy, jax, etc. Understanding what it does is a bit tricky (naturally, since it can do the job of many other functions), but it is also very useful and can help a lot with linear algebraic computations. I will use numpy’s np.einsum() notation, but the underlying concepts are the same regardless of syntactic differences in other libraries....
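As a quick taste of the "one function, many jobs" point, here is a small sketch (the specific arrays are made up for illustration) showing several common operations expressed in the same np.einsum() notation:

```python
import numpy as np

A = np.arange(6).reshape(2, 3)
B = np.arange(12).reshape(3, 4)

# Matrix multiplication: C[i, k] = sum_j A[i, j] * B[j, k]
C = np.einsum('ij,jk->ik', A, B)

# Trace: repeat an index to walk the diagonal, omit it on the right to sum
t = np.einsum('ii->', np.eye(3))

# Outer product and row sums use the exact same index notation
v = np.arange(3.0)
outer = np.einsum('i,j->ij', v, v)
rowsum = np.einsum('ij->i', A)
```

Each of these would otherwise be a separate call (`@`, `np.trace`, `np.outer`, `sum(axis=...)`); einsum subsumes them by letting the index string describe the computation.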

February 19, 2023 · 8 min · Saeed Hedayatian