Prior Distributions

class oktopus.prior.Prior[source]

A base class for a prior distribution. Unlike a Likelihood, a prior is a PDF that depends solely on the parameters, not on the observed data.

Attributes

name A name associated with the prior

Methods

__call__(params) Calls evaluate()
evaluate(params) Evaluates the negative of the log of the PDF at params
fit([optimizer]) Minimizes the evaluate() function using scipy.optimize.minimize(), scipy.optimize.differential_evolution(), scipy.optimize.basinhopping(), or skopt.gp.gp_minimize().
gradient(params) Returns the gradient of the loss function evaluated at params
evaluate(params)[source]

Evaluates the negative of the log of the PDF at params

Parameters:

params : scalar or array-like

Value at which the PDF will be evaluated

Returns:

value : scalar

Value of the negative of the log of the PDF at params

name

A name associated with the prior
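The interface above can be illustrated with a small standalone sketch (independent of oktopus, and not part of its API): a hypothetical subclass that implements evaluate() as the negative log of an exponential PDF, with __call__ forwarding to evaluate() as the table describes.

```python
import math

class Prior:
    """Minimal stand-in for the oktopus.prior.Prior pattern:
    a callable that returns the negative log of a PDF."""
    def __call__(self, params):
        return self.evaluate(params)

class ExponentialPrior(Prior):
    """Hypothetical example prior: -log pdf of Exp(rate),
    i.e. rate * x - log(rate) for x >= 0, +inf outside the support."""
    def __init__(self, rate, name=None):
        self.rate = rate
        self.name = name  # a name associated with the prior

    def evaluate(self, params):
        if params < 0:
            return math.inf  # zero density outside the support
        return self.rate * params - math.log(self.rate)

prior = ExponentialPrior(rate=1.0, name="decay")
print(prior(2.0))  # __call__ forwards to evaluate(); 1.0 * 2.0 - log(1.0) = 2.0
```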

class oktopus.prior.JointPrior(*args)[source]

Combine independent priors by summing the negative log of their distributions.

Notes

*args are stored in self.components.

Examples

>>> from oktopus import UniformPrior, GaussianPrior, JointPrior
>>> jp = JointPrior(UniformPrior(-0.5, 0.5), GaussianPrior(0., 1.))
>>> jp.evaluate((0., 0.))
0.0
>>> jp((0., 0.)) # jp is also a callable to evaluate()
0.0

Attributes

*args (tuple of instances of Prior) Instances of Prior to be combined

Methods

__call__(params) Calls evaluate()
evaluate(params) Computes the sum of the negative log of each distribution given in args evaluated at params.
fit([optimizer]) Minimizes the evaluate() function using scipy.optimize.minimize(), scipy.optimize.differential_evolution(), scipy.optimize.basinhopping(), or skopt.gp.gp_minimize().
gradient(params) Computes the gradient of the sum of the negative log of each distribution given in args evaluated at params.
evaluate(params)[source]

Computes the sum of the negative log of each distribution given in args evaluated at params.

Parameters:

params : tuple

Values at which the JointPrior instance will be evaluated. This must have the same length as the number of priors used to initialize the object

Returns:

value : scalar

Sum of the negative of the log of each distribution given in args

gradient(params)[source]

Computes the gradient of the sum of the negative log of each distribution given in args evaluated at params.

Parameters:

params : tuple

Values at which the JointPrior instance will be evaluated. This must have the same length as the number of priors used to initialize the object

Returns:

value : scalar

Gradient of the sum of the negative of the log of each distribution given in args
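The behaviour shown in the JointPrior example can be reproduced with plain numpy (a standalone sketch, not oktopus code): each component prior contributes one parameter, and the joint value is just the sum of the component negative log PDFs. The constant-free Gaussian term below matches the GaussianPrior docstring example.

```python
import numpy as np

def neg_log_uniform(x, lb, ub):
    # -log(1 / (ub - lb)) inside [lb, ub), +inf outside the support
    return np.log(ub - lb) if lb <= x < ub else np.inf

def neg_log_gaussian(x, mean, var):
    # quadratic term only (constant dropped), as in the GaussianPrior example
    return 0.5 * (x - mean) ** 2 / var

def joint(params):
    # one parameter per component prior, summed in negative log space
    u, g = params
    return neg_log_uniform(u, -0.5, 0.5) + neg_log_gaussian(g, 0.0, 1.0)

print(joint((0.0, 0.0)))  # 0.0, as in the JointPrior docstring example
print(joint((0.6, 0.0)))  # inf: 0.6 falls outside the uniform support
```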

mean
class oktopus.prior.UniformPrior(lb, ub, name=None)[source]

Computes the negative log pdf for an n-dimensional independent uniform distribution.

Examples

>>> from oktopus import UniformPrior
>>> unif = UniformPrior(0., 1.)
>>> unif(.5)
-0.0
>>> unif(1)
inf

Attributes

lb (scalar or array-like) Lower bounds (inclusive)
ub (scalar or array-like) Upper bounds (exclusive)

Methods

__call__(params) Calls evaluate()
evaluate(params)
fit([optimizer]) Minimizes the evaluate() function using scipy.optimize.minimize(), scipy.optimize.differential_evolution(), scipy.optimize.basinhopping(), or skopt.gp.gp_minimize().
gradient(params)
evaluate(params)[source]
gradient(params)[source]
mean

Returns the mean of the uniform distributions

variance

Returns the variance of the uniform distributions
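
The mean and variance of a uniform distribution on [lb, ub) have standard closed forms, which these properties presumably return; a standalone sketch of the arithmetic:

```python
def uniform_mean(lb, ub):
    # midpoint of the interval
    return (lb + ub) / 2.0

def uniform_variance(lb, ub):
    # standard closed form for a continuous uniform distribution
    return (ub - lb) ** 2 / 12.0

print(uniform_mean(0.0, 1.0))      # 0.5
print(uniform_variance(0.0, 1.0))  # 1/12, about 0.0833
```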

class oktopus.prior.GaussianPrior(mean, var, name=None)[source]

Computes the negative log pdf for an n-dimensional independent Gaussian distribution.

Examples

>>> from oktopus import GaussianPrior
>>> prior = GaussianPrior(0, 1)
>>> prior(2.)
2.0

Attributes

mean (scalar or array-like) Mean
var (scalar or array-like) Variance

Methods

__call__(params) Calls evaluate()
evaluate(params)
fit([optimizer]) Minimizes the evaluate() function using scipy.optimize.minimize(), scipy.optimize.differential_evolution(), scipy.optimize.basinhopping(), or skopt.gp.gp_minimize().
gradient(params)
evaluate(params)[source]
gradient(params)[source]
mean
variance
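
Judging from the example above (prior(2.) returning 2.0 rather than roughly 2.92), the constant 0.5 * log(2 * pi * var) term appears to be dropped, leaving only the quadratic term. A standalone sketch of that computation (an assumption based on the example, not oktopus source):

```python
import numpy as np

def gaussian_neg_log_pdf(x, mean, var):
    # quadratic term only; the 0.5 * log(2 * pi * var) constant appears to
    # be omitted, which matches the docstring example prior(2.) -> 2.0
    return np.nansum(0.5 * (np.asarray(x) - mean) ** 2 / var)

print(gaussian_neg_log_pdf(2.0, 0.0, 1.0))         # 2.0
print(gaussian_neg_log_pdf([1.0, -1.0], 0.0, 1.0)) # 0.5 + 0.5 = 1.0
```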
class oktopus.prior.LaplacianPrior(mean, var, name=None)[source]

Computes the negative log pdf for an n-dimensional independent Laplacian distribution.

Examples

>>> from oktopus import LaplacianPrior
>>> prior = LaplacianPrior(0, 2)
>>> prior(1.)
1.0

Attributes

mean (scalar or array-like) Mean
var (scalar or array-like) Variance

Methods

__call__(params) Calls evaluate()
evaluate(params)
fit([optimizer]) Minimizes the evaluate() function using scipy.optimize.minimize(), scipy.optimize.differential_evolution(), scipy.optimize.basinhopping(), or skopt.gp.gp_minimize().
gradient(params) Returns the gradient of the loss function evaluated at params
evaluate(params)[source]
mean
variance
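The Laplace scale b relates to the variance by var = 2 * b**2, and the example above (prior(1.) returning 1.0 with var = 2, hence b = 1) suggests that, as with GaussianPrior, the log(2 * b) constant is dropped. A standalone sketch under that assumption:

```python
import numpy as np

def laplacian_neg_log_pdf(x, mean, var):
    # scale b follows from the Laplace variance relation var = 2 * b**2
    b = np.sqrt(var / 2.0)
    # absolute-deviation term only; the log(2 * b) constant appears to be
    # omitted, which matches the docstring example prior(1.) -> 1.0
    return np.nansum(np.abs(np.asarray(x) - mean) / b)

print(laplacian_neg_log_pdf(1.0, 0.0, 2.0))  # |1 - 0| / 1 = 1.0
```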

Inheritance Diagram

Inheritance diagram of oktopus.prior