Welcome to 🐙 oktopus!
oktopus is all about Bayes’ Law:
\[\log \underbrace{p(\theta | \mathbf{y})}_\text{posterior} = \log \underbrace{p(\mathbf{y} | \theta)}_\text{likelihood} + \log \underbrace{p(\theta)}_\text{prior} + \overbrace{h(\mathbf{y})}^\text{doesn't depend on $\theta$}\]
In other words, the posterior combines the prior information with the information gained from the observed data (the likelihood); the remaining term \(h(\mathbf{y}) = -\log p(\mathbf{y})\) is independent of \(\theta\) and merely normalizes the posterior.
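The log-form of Bayes' law can be checked numerically on a toy discrete model. The sketch below is not part of oktopus; it is a plain NumPy/SciPy example using a hypothetical two-hypothesis coin model, where \(h(\mathbf{y})\) is computed as the normalization constant:

```python
import numpy as np
from scipy.stats import binom

# Hypothetical example: a coin whose heads probability is either 0.3 or 0.7,
# with a uniform prior, after observing y = 6 heads in 10 flips.
theta = np.array([0.3, 0.7])
log_prior = np.log(np.array([0.5, 0.5]))
log_like = binom.logpmf(6, 10, theta)

# log posterior = log likelihood + log prior + h(y),
# where h(y) = -log p(y) normalizes the posterior over theta.
log_joint = log_like + log_prior
log_evidence = np.logaddexp.reduce(log_joint)  # log p(y)
log_posterior = log_joint - log_evidence

posterior = np.exp(log_posterior)  # sums to 1; favors theta = 0.7
```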
With that in mind, oktopus provides an easy interface to solve problems such as:
- Maximum Likelihood Estimator (MLE):
\[\arg \min_{\theta \in \Theta} - \log p(\mathbf{y} | \theta)\]
- Fisher Information Matrix:
\[\mathbb{E}\left[\nabla_\theta\log p(\mathbf{y} | \theta)\left[\nabla_\theta\log p(\mathbf{y} | \theta) \right]^{\textrm{T}} \right]\]
- Maximum a Posteriori Probability Estimator (MAP):
\[\arg \min_{\theta \in \Theta} - \log p(\theta | \mathbf{y})\]
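To illustrate the first problem above: an MLE can be obtained by numerically minimizing the negative log-likelihood. The sketch below uses a hypothetical Gaussian model with plain SciPy, not the oktopus API:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Simulated data from a Gaussian with mean 5 and standard deviation 2.
rng = np.random.default_rng(42)
y = rng.normal(loc=5.0, scale=2.0, size=1000)

def neg_log_likelihood(theta):
    # Parameterize sigma on the log scale so the optimizer is unconstrained.
    mu, log_sigma = theta
    return -norm.logpdf(y, loc=mu, scale=np.exp(log_sigma)).sum()

result = minimize(neg_log_likelihood, x0=[0.0, 0.0])
mu_hat, sigma_hat = result.x[0], np.exp(result.x[1])
```

The estimates recover the true parameters up to sampling noise; for a Gaussian they coincide with the sample mean and the (biased) sample standard deviation.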
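The Fisher information, being an expectation of squared scores, can be approximated by a Monte Carlo average. The Bernoulli model below is a hypothetical example where the exact value, \(1/(\theta(1-\theta))\) per observation, is available for comparison:

```python
import numpy as np

# Score of a single Bernoulli observation:
#   d/dtheta log p(y|theta) = y/theta - (1 - y)/(1 - theta)
theta = 0.3
rng = np.random.default_rng(0)
y = rng.binomial(1, theta, size=200_000)
score = y / theta - (1 - y) / (1 - theta)

fisher_mc = np.mean(score**2)               # Monte Carlo estimate of E[score^2]
fisher_exact = 1.0 / (theta * (1 - theta))  # exact Fisher information
```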
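Finally, a MAP estimate simply adds the negative log-prior to the MLE objective. The sketch below uses a hypothetical Gaussian likelihood with a Gaussian prior on the mean, a conjugate pair whose closed-form answer checks the numerical one:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(1)
sigma = 2.0
y = rng.normal(loc=5.0, scale=sigma, size=20)

# Gaussian prior on the mean: mu ~ N(mu0, tau^2)
mu0, tau = 0.0, 3.0

def neg_log_posterior(mu):
    # -log p(mu | y) up to the theta-independent constant h(y)
    return (-norm.logpdf(y, loc=mu, scale=sigma).sum()
            - norm.logpdf(mu, loc=mu0, scale=tau))

map_numeric = minimize_scalar(neg_log_posterior).x

# Conjugate closed form: precision-weighted average of data and prior.
n = len(y)
map_exact = (tau**2 * y.sum() + sigma**2 * mu0) / (n * tau**2 + sigma**2)
```

With a diffuse prior (large `tau`) the MAP estimate approaches the MLE; with a tight prior it is pulled toward `mu0`.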