TFOptimizer

TFOptimizer is used for optimizing a TensorFlow model with respect to its training variables on Spark/BigDL.

Create a TFOptimizer:

import tensorflow as tf
from zoo.tfpark import TFOptimizer
from bigdl.optim.optimizer import *

# Build a TensorFlow graph whose inputs come from a TFDataset,
# then define a scalar loss tensor on top of it.
loss = ...

# Wrap the loss in a TFOptimizer with Adam and train for 5 epochs.
optimizer = TFOptimizer.from_loss(loss, Adam(1e-3))
optimizer.optimize(end_trigger=MaxEpoch(5))

For a tensorflow.keras model:

from zoo.tfpark import TFOptimizer
from bigdl.optim.optimizer import *
from tensorflow.keras.models import Model

model = Model(inputs=..., outputs=...)

model.compile(optimizer='rmsprop',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# dataset is a TFDataset that yields the model's features and labels
optimizer = TFOptimizer.from_keras(model, dataset)
optimizer.optimize(end_trigger=MaxEpoch(5))

Methods

from_loss (factory method)

Create a TFOptimizer from a TensorFlow loss tensor. The loss tensor must come from a TensorFlow graph that only takes TFDataset.tensors and the tensors in tensor_with_value as inputs.

from_loss(loss, optim_method, session=None, val_outputs=None,
          val_labels=None, val_method=None,
          clip_norm=None, clip_value=None, metrics=None,
          tensor_with_value=None, **kwargs)

Arguments

from_keras (factory method)

Create a TFOptimizer from a tensorflow.keras model. The model must be compiled.

from_keras(keras_model, dataset, optim_method=None, **kwargs)

Arguments

set_train_summary

Set a TrainSummary object to collect statistics during training (such as loss and throughput), which can later be read back for visualization.

set_train_summary(summary)

Arguments

set_val_summary

Set a ValidationSummary object to collect statistics computed during validation, which can later be read back for visualization.

set_val_summary(summary)

Arguments

set_constant_gradient_clipping

Clip each gradient element into the range [min_value, max_value] during training.

set_constant_gradient_clipping(min_value, max_value)

Arguments
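Constant clipping clamps every gradient element independently. A minimal sketch of the effect in plain Python (clip_constant is a hypothetical helper for illustration, not part of the TFOptimizer API):

```python
def clip_constant(gradients, min_value, max_value):
    # Clamp each gradient element into [min_value, max_value].
    return [max(min_value, min(max_value, g)) for g in gradients]

grads = [-3.5, -0.2, 0.0, 0.7, 4.1]
clipped = clip_constant(grads, -1.0, 1.0)
print(clipped)  # -> [-1.0, -0.2, 0.0, 0.7, 1.0]
```

Note that this changes the direction of the gradient vector whenever any single element is clipped, unlike norm-based clipping below.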

set_gradient_clipping_by_l2_norm

Clip gradients during training so that their L2 norm does not exceed clip_norm.

set_gradient_clipping_by_l2_norm(clip_norm)

Arguments
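L2-norm clipping rescales the whole gradient vector when its norm is too large, preserving its direction. A minimal sketch in plain Python (clip_by_l2_norm is a hypothetical helper for illustration, not part of the TFOptimizer API):

```python
import math

def clip_by_l2_norm(gradients, clip_norm):
    # If the L2 norm of the gradient vector exceeds clip_norm,
    # scale the whole vector down to that norm; otherwise leave it as-is.
    norm = math.sqrt(sum(g * g for g in gradients))
    if norm <= clip_norm:
        return gradients
    scale = clip_norm / norm
    return [g * scale for g in gradients]

grads = [3.0, 4.0]                      # L2 norm = 5.0
clipped = clip_by_l2_norm(grads, 1.0)   # -> approximately [0.6, 0.8]
```

Because only the magnitude changes, the relative proportions between gradient components are kept intact.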

optimize

Run the training until end_trigger fires (for example, MaxEpoch(5)).

optimize(end_trigger=None)

Arguments