Loss Functions¶

class oktopus.loss.LossFunction [source]¶

An abstract class for an arbitrary loss (cost) function. This type of function appears frequently in estimation problems where the best estimator (given a set of observed data) is the one which minimizes some sort of objective function.
Methods

__call__(params)
    Calls evaluate()

evaluate(params)
    Returns the loss function evaluated at params

fit([optimizer])
    Minimizes the evaluate() function using scipy.optimize.minimize(), scipy.optimize.differential_evolution(), scipy.optimize.basinhopping(), or skopt.gp.gp_minimize()

gradient(params)
    Returns the gradient of the loss function evaluated at params

hessian(params)
    Returns the Hessian matrix of the loss function evaluated at params
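Concrete losses are written by subclassing LossFunction and implementing evaluate(); the remaining methods listed above are inherited. The sketch below is illustrative and not part of oktopus: the SquaredLoss name and the convention that the model callable takes the whole parameter vector are assumptions.

import autograd.numpy as np
from oktopus.loss import LossFunction

class SquaredLoss(LossFunction):
    """Hypothetical quadratic loss: sum of squared residuals."""

    def __init__(self, data, model):
        self.data = data      # observed values, array-like
        self.model = model    # assumed: callable mapping a parameter vector to predictions

    def evaluate(self, params):
        # The one method a concrete subclass must supply:
        # parameter vector in, scalar objective value out.
        return np.nansum((self.data - self.model(params)) ** 2)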
evaluate(params) [source]¶

Returns the loss function evaluated at params

Parameters:
    params : ndarray
        Parameter vector of the model

Returns:
    loss_fun : scalar
        The scalar value of the loss function evaluated at params
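Because __call__() simply delegates to evaluate(), a loss object can also be called directly. A tiny sketch, where loss and theta are placeholder names:

value = loss.evaluate(theta)
value = loss(theta)   # equivalent: __call__ forwards to evaluate()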
fit(optimizer='minimize', **kwargs) [source]¶

Minimizes the evaluate() function using scipy.optimize.minimize(), scipy.optimize.differential_evolution(), scipy.optimize.basinhopping(), or skopt.gp.gp_minimize().

Parameters:
    optimizer : str
        Optimization algorithm. Options are:

        - 'minimize' uses scipy.optimize.minimize()
        - 'differential_evolution' uses scipy.optimize.differential_evolution()
        - 'basinhopping' uses scipy.optimize.basinhopping()
        - 'gp_minimize' uses skopt.gp.gp_minimize()

        'minimize' is usually robust enough and is therefore recommended whenever a good initial guess can be provided. The remaining options are global optimizers, which might provide better results precisely in those cases where a close enough initial guess cannot be obtained trivially.

    kwargs : dict
        Dictionary of additional arguments passed on to the chosen optimizer.

Returns:
    opt_result : scipy.optimize.OptimizeResult object
        Object containing the results of the optimization process. Note: this is also stored in self.opt_result.
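As a sketch of both modes, assuming loss is any concrete LossFunction instance and that the keyword arguments are forwarded to the chosen SciPy routine (x0 for minimize, bounds for differential_evolution):

import numpy as np

# Local search: scipy.optimize.minimize() starts from an initial guess x0
# (the L1Norm example below forwards x0 the same way).
result = loss.fit(x0=np.array([1.0]))

# Global search: scipy.optimize.differential_evolution() requires bounds
# on each parameter instead of a starting point.
result = loss.fit(optimizer='differential_evolution', bounds=[(0.0, 10.0)])

print(result.x, result.fun)   # best-fit parameters and objective value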
class oktopus.loss.L1Norm(data, model, regularization=None) [source]¶

Defines the L1 norm loss function. The L1 norm is usually useful to optimize the “median” model, i.e., it is more robust to outliers than the quadratic loss function.

\[\arg \min_{\theta \in \Theta} \sum_k |y_k - f(x_k, \theta)|\]

Examples

>>> from oktopus import L1Norm
>>> import autograd.numpy as np
>>> np.random.seed(0)
>>> data = np.random.exponential(size=50)
>>> def constant_model(a):
...     return a
>>> l1norm = L1Norm(data=data, model=constant_model)
>>> result = l1norm.fit(x0=np.mean(data))
>>> result.x
array([ 0.83998338])
>>> print(np.median(data))  # the analytical solution
0.839883776803
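Continuing the example above, the inherited gradient() and hessian() methods can be evaluated at the fitted solution; a sketch, with return values not shown:

g = l1norm.gradient(result.x)   # gradient of the L1 loss at the optimum
H = l1norm.hessian(result.x)    # Hessian matrix at the same point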
Attributes

data : array-like
    Observed data

model : callable
    A functional form that defines the model

regularization : callable
    A functional form that defines the regularization term

Methods
__call__(params)
    Calls evaluate()

evaluate(params)
    Returns the loss function evaluated at params

fit([optimizer])
    Minimizes the evaluate() function using scipy.optimize.minimize(), scipy.optimize.differential_evolution(), scipy.optimize.basinhopping(), or skopt.gp.gp_minimize()

gradient(params)
    Returns the gradient of the loss function evaluated at params

hessian(params)
    Returns the Hessian matrix of the loss function evaluated at params
regularization¶
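The regularization argument accepts a callable that maps the model parameters to a penalty term. A hedged sketch of passing one in: the ridge function below is hypothetical, it is assumed to take the same arguments as the model, and how (and with what weight) L1Norm folds the term into the objective is defined by the class itself, not shown here.

import autograd.numpy as np
from oktopus import L1Norm

def constant_model(a):
    return a

def ridge(a):          # hypothetical penalty on the single parameter
    return a ** 2

data = np.random.exponential(size=50)
l1norm = L1Norm(data=data, model=constant_model, regularization=ridge)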