Objectives


Usage of objectives

An objective function (or loss function, or optimization score function) is one of the two parameters required to compile a model:

Scala:

model.compile(loss = "mean_squared_error", optimizer = "sgd")

Python:

model.compile(loss='mean_squared_error', optimizer='sgd')

An objective instance can also be passed instead of a string identifier, which allows its parameters to be configured:

Scala:

model.compile(loss = MeanSquaredError(sizeAverage = true), optimizer = "sgd")

Python:

model.compile(loss=MeanSquaredError(size_average=True), optimizer='sgd')
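
For example, a complete compile step might look like the following Python sketch. The import paths here follow the Analytics Zoo Keras-style API layout and may differ across releases; the model architecture and layer sizes are purely illustrative.

from zoo.pipeline.api.keras.models import Sequential
from zoo.pipeline.api.keras.layers import Dense
from zoo.pipeline.api.keras.objectives import MeanSquaredError

# Build a small regression model (layer sizes are illustrative)
model = Sequential()
model.add(Dense(8, activation="relu", input_shape=(4,)))
model.add(Dense(1))

# Pass a loss instance instead of a string identifier
model.compile(loss=MeanSquaredError(size_average=True), optimizer="sgd")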

Available objectives

MeanSquaredError

The mean squared error criterion. For input a, target b, and total number of elements n:

loss(a, b) = 1/n * sum(|a_i - b_i|^2)

Scala:

loss = MeanSquaredError(sizeAverage = true)

Parameters:

sizeAverage: whether the loss is averaged over the total number of elements. If false, the losses are summed instead. Default is true.

Python:

loss = MeanSquaredError(size_average=True)

Parameters:

size_average: whether the loss is averaged over the total number of elements. If False, the losses are summed instead. Default is True.
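
The formula can be reproduced with a standalone NumPy sketch (an illustration of the math, not the library's implementation):

import numpy as np

def mean_squared_error(a, b, size_average=True):
    # sum of squared differences, divided by n when size_average is true
    se = np.sum((a - b) ** 2)
    return se / a.size if size_average else se

a = np.array([1.0, 2.0, 3.0])
b = np.array([1.5, 2.0, 2.0])
print(mean_squared_error(a, b))  # (0.25 + 0.0 + 1.0) / 3 = 0.4167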

MeanAbsoluteError

Measures the mean absolute value of the element-wise difference between input and target.

Scala:

loss = MeanAbsoluteError(sizeAverage = true)

Parameters:

sizeAverage: whether the loss is averaged over the total number of elements. If false, the losses are summed instead. Default is true.

Python:

loss = MeanAbsoluteError(size_average=True)

Parameters:

size_average: whether the loss is averaged over the total number of elements. If False, the losses are summed instead. Default is True.
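
A standalone NumPy sketch of the computation (illustrative only):

import numpy as np

def mean_absolute_error(a, b, size_average=True):
    # sum of |a_i - b_i|, divided by n when size_average is true
    ae = np.sum(np.abs(a - b))
    return ae / a.size if size_average else ae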

BinaryCrossEntropy

Also known as logloss.

Scala:

loss = BinaryCrossEntropy(weights = null, sizeAverage = true)

Parameters:

weights: weights over the input dimensions, used to rescale the loss of each element. Default is null.
sizeAverage: whether losses are averaged over observations for each mini-batch. Default is true.

Python:

loss = BinaryCrossEntropy(weights=None, size_average=True)

Parameters:

weights: weights over the input dimensions, used to rescale the loss of each element. Default is None.
size_average: whether losses are averaged over observations for each mini-batch. Default is True.
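
A NumPy sketch of the computation (the eps clipping value is illustrative; the library may clip differently):

import numpy as np

def binary_cross_entropy(y_pred, y_true, weights=None, size_average=True):
    # -[y*log(p) + (1-y)*log(1-p)], optionally rescaled element-wise by weights
    eps = 1e-7
    p = np.clip(y_pred, eps, 1.0 - eps)
    loss = -(y_true * np.log(p) + (1.0 - y_true) * np.log(1.0 - p))
    if weights is not None:
        loss = loss * weights
    total = np.sum(loss)
    return total / y_pred.size if size_average else total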

SparseCategoricalCrossEntropy

A loss often used in multi-class classification problems with SoftMax as the last layer of the neural network. By default, the input (y_pred) is expected to contain the probabilities of each class, and the target (y_true) is expected to be the class label, starting from 0.

Scala:

loss = SparseCategoricalCrossEntropy(logProbAsInput = false, zeroBasedLabel = true, weights = null, sizeAverage = true, paddingValue = -1)

Parameters:

logProbAsInput: whether the input is already log-probabilities instead of probabilities. Default is false.
zeroBasedLabel: whether the target labels start from 0. Default is true; if false, labels are expected to start from 1.
weights: weights of each class, if given. Default is null.
sizeAverage: whether losses are averaged over observations for each mini-batch. Default is true.
paddingValue: if the target equals this value, the sample is skipped during training (it contributes zero loss and zero gradient). Default is -1.

Python:

loss = SparseCategoricalCrossEntropy(log_prob_as_input=False, zero_based_label=True, weights=None, size_average=True, padding_value=-1)

Parameters:

log_prob_as_input: whether the input is already log-probabilities instead of probabilities. Default is False.
zero_based_label: whether the target labels start from 0. Default is True; if False, labels are expected to start from 1.
weights: weights of each class, if given. Default is None.
size_average: whether losses are averaged over observations for each mini-batch. Default is True.
padding_value: if the target equals this value, the sample is skipped during training (it contributes zero loss and zero gradient). Default is -1.
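
A NumPy sketch of the default behavior (probabilities in, zero-based labels; weights and padding_value are omitted for brevity):

import numpy as np

def sparse_categorical_cross_entropy(y_pred, y_true,
                                     log_prob_as_input=False,
                                     zero_based_label=True,
                                     size_average=True):
    # y_pred: (n, num_classes) probabilities; y_true: (n,) integer labels
    log_p = y_pred if log_prob_as_input else np.log(y_pred + 1e-7)
    labels = y_true if zero_based_label else y_true - 1  # shift 1-based labels
    picked = log_p[np.arange(len(labels)), labels]       # log-prob of true class
    total = -np.sum(picked)
    return total / len(labels) if size_average else total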

MeanAbsolutePercentageError

Computes the mean absolute percentage error between input and target.

Scala:

loss = MeanAbsolutePercentageError()

Python:

loss = MeanAbsolutePercentageError()

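A NumPy sketch of the computation (the clipping of the denominator is illustrative):

import numpy as np

def mean_absolute_percentage_error(y_pred, y_true):
    # 100 * mean(|y_true - y_pred| / |y_true|), with the denominator kept away from zero
    eps = 1e-7
    diff = np.abs((y_true - y_pred) / np.clip(np.abs(y_true), eps, None))
    return 100.0 * np.mean(diff)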

MeanSquaredLogarithmicError

Computes the mean squared logarithmic error between input and target.

Scala:

loss = MeanSquaredLogarithmicError()

Python:

loss = MeanSquaredLogarithmicError()
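
A NumPy sketch of the computation (the eps clipping value is illustrative):

import numpy as np

def mean_squared_logarithmic_error(y_pred, y_true):
    # mean((log(y_pred + 1) - log(y_true + 1))^2); clipping avoids log of negatives
    eps = 1e-7
    first = np.log(np.clip(y_pred, eps, None) + 1.0)
    second = np.log(np.clip(y_true, eps, None) + 1.0)
    return np.mean((first - second) ** 2)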

CategoricalCrossEntropy

This is the same as the cross entropy criterion, except that the target tensor is a one-hot tensor.

Scala:

loss = CategoricalCrossEntropy()

Python:

loss = CategoricalCrossEntropy()
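
A NumPy sketch of the computation (illustrative only):

import numpy as np

def categorical_cross_entropy(y_pred, y_true):
    # y_true is one-hot; y_pred holds per-class probabilities
    eps = 1e-7
    return -np.mean(np.sum(y_true * np.log(np.clip(y_pred, eps, 1.0)), axis=-1))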

Hinge

Creates a criterion that optimizes a two-class classification hinge loss (margin-based loss) between input x (a Tensor of dimension 1) and target y (a Tensor containing 1s or -1s).

Scala:

loss = Hinge(margin = 1.0, sizeAverage = true)

Parameters:

margin: the margin of the hinge loss. Default is 1.0.
sizeAverage: whether losses are averaged over observations for each mini-batch. Default is true.

Python:

loss = Hinge(margin=1.0, size_average=True)

Parameters:

margin: the margin of the hinge loss. Default is 1.0.
size_average: whether losses are averaged over observations for each mini-batch. Default is True.
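
A NumPy sketch of the computation, assuming y holds +1/-1 labels:

import numpy as np

def hinge(x, y, margin=1.0, size_average=True):
    # each margin violation contributes max(0, margin - y*x)
    loss = np.maximum(0.0, margin - y * x)
    total = np.sum(loss)
    return total / x.size if size_average else total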

RankHinge

Hinge loss for pairwise ranking problems.

Scala:

loss = RankHinge(margin = 1.0)

Parameters:

margin: the margin of the ranking loss. Default is 1.0.

Python:

loss = RankHinge(margin=1.0)

Parameters:

margin: the margin of the ranking loss. Default is 1.0.
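
A NumPy sketch, assuming the scores arrive as paired positive/negative arrays (the library's actual input layout may differ):

import numpy as np

def rank_hinge(pos_score, neg_score, margin=1.0):
    # positive items should outscore negatives by at least `margin`
    return np.mean(np.maximum(0.0, margin - pos_score + neg_score))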

SquaredHinge

Creates a criterion that optimizes a two-class classification squared hinge loss (margin-based loss) between input x (a Tensor of dimension 1) and target y (a Tensor containing 1s or -1s).

Scala:

loss = SquaredHinge(margin = 1.0, sizeAverage = true)

Parameters:

margin: the margin of the squared hinge loss. Default is 1.0.
sizeAverage: whether losses are averaged over observations for each mini-batch. Default is true.

Python:

loss = SquaredHinge(margin=1.0, size_average=True)

Parameters:

margin: the margin of the squared hinge loss. Default is 1.0.
size_average: whether losses are averaged over observations for each mini-batch. Default is True.
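
A NumPy sketch of the computation, assuming y holds +1/-1 labels:

import numpy as np

def squared_hinge(x, y, margin=1.0, size_average=True):
    # like the hinge loss, but margin violations are squared
    loss = np.maximum(0.0, margin - y * x) ** 2
    total = np.sum(loss)
    return total / x.size if size_average else total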

Poisson

Computes the Poisson loss between input and target.

Scala:

loss = Poisson()

Python:

loss = Poisson()
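
A NumPy sketch of the computation (the eps term is illustrative):

import numpy as np

def poisson(y_pred, y_true):
    # mean(y_pred - y_true * log(y_pred)); eps keeps the log finite
    eps = 1e-7
    return np.mean(y_pred - y_true * np.log(y_pred + eps))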

CosineProximity

Computes the negative of the mean cosine proximity between predictions and targets.

Scala:

loss = CosineProximity()

Python:

loss = CosineProximity()
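
A NumPy sketch of the computation (illustrative only):

import numpy as np

def cosine_proximity(y_pred, y_true):
    # negative mean cosine similarity along the last axis
    def l2_normalize(v):
        return v / np.maximum(np.linalg.norm(v, axis=-1, keepdims=True), 1e-7)
    return -np.mean(np.sum(l2_normalize(y_true) * l2_normalize(y_pred), axis=-1))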

KullbackLeiblerDivergence

Loss is calculated as:

y_true = K.clip(y_true, K.epsilon(), 1)
y_pred = K.clip(y_pred, K.epsilon(), 1)
loss = K.sum(y_true * K.log(y_true / y_pred), axis=-1)

Scala:

loss = KullbackLeiblerDivergence()

Python:

loss = KullbackLeiblerDivergence()
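
The same formula in a standalone NumPy sketch (eps stands in for K.epsilon()):

import numpy as np

def kullback_leibler_divergence(y_pred, y_true):
    # clip both distributions to [eps, 1], then sum y_true * log(y_true / y_pred)
    eps = 1e-7
    y_true = np.clip(y_true, eps, 1.0)
    y_pred = np.clip(y_pred, eps, 1.0)
    return np.sum(y_true * np.log(y_true / y_pred), axis=-1)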