Tensorflow 2.0: custom keras metric caused tf.function retracing warning

When I use the following custom metric (keras-style):

from sklearn.metrics import classification_report, f1_score
from tensorflow.keras.callbacks import Callback

class Metrics(Callback):
    def __init__(self, dev_data, classifier, dataloader):
        super().__init__()  # make sure the base Callback is initialized
        self.best_f1_score = 0.0
        self.dev_data = dev_data
        self.classifier = classifier
        # Predictor is a custom helper class (defined elsewhere)
        self.predictor = Predictor(classifier, dataloader)
        self.dataloader = dataloader

    def on_epoch_end(self, epoch, logs=None):
        print("start to evaluate....")
        _, preds = self.predictor(self.dev_data)
        y_trues, y_preds = [self.dataloader.label_vector(v["label"]) for v in self.dev_data], preds
        f1 = f1_score(y_trues, y_preds, average="weighted")
        print(classification_report(y_trues, y_preds,
                                    target_names=self.dataloader.vocab.labels))
        if f1 > self.best_f1_score:
            self.best_f1_score = f1
            self.classifier.save_model()
            print("best metrics, save model...")

I obtained the following warning:

W1106 10:49:14.171694 4745115072 def_function.py:474] 6 out of the last 11 calls to <function _make_execution_function.<locals>.distributed_function at 0x14a3f9d90> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings is likely due to passing python objects instead of tensors. Also, tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. Please refer to https://www.tensorflow.org/beta/tutorials/eager/tf_function#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.

This warning occurs when a TF function is retraced because its arguments change in shape or dtype (for Tensors) or even in value (for Python or NumPy objects and variables).
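As a minimal illustration (not taken from the question's code), a tf.function traces once for every new Python argument value, but reuses an existing trace for tensors of the same shape and dtype:

import tensorflow as tf

@tf.function
def square(x):
    print("tracing with", x)  # this print only runs while tracing, not on every call
    return x * x

square(1)               # new Python value -> traces
square(2)               # another Python value -> traces again
square(tf.constant(3))  # traces once for this tensor signature
square(tf.constant(4))  # same shape/dtype -> reuses the existing trace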

In the general case, the fix is to decorate the custom function that you pass to Keras/TF with @tf.function(experimental_relax_shapes=True). This option tries to detect and avoid unnecessary retracing, but it is not guaranteed to solve the issue.

In your case, I guess Predictor is a custom class, so place @tf.function(experimental_relax_shapes=True) directly above the definition of Predictor.predict().
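The exact method names depend on your code, but as a rough sketch (assuming Predictor wraps the classifier's forward pass in a predict() method), the decorator goes right above that method:

import tensorflow as tf

class Predictor:
    def __init__(self, classifier, dataloader):
        self.classifier = classifier
        self.dataloader = dataloader

    # Relax argument shapes so variable-length batches can reuse one trace
    @tf.function(experimental_relax_shapes=True)
    def predict(self, inputs):
        return self.classifier(inputs, training=False)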

I have encountered a similar problem with a Keras custom metric (TensorFlow 2.0.0, running on CPU) and the same retracing warning. Using @tf.function(experimental_relax_shapes=True) will probably solve your problem.

Add this line after importing tensorflow:

tf.compat.v1.disable_eager_execution()
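Keep in mind that this switches the whole program back to TF1-style graph execution, so it changes how the rest of your code runs, not just this callback.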


Also note that in TF2, Keras model.fit already wraps the training step in a tf.function under the hood, so you don't need to pass run_eagerly=False; it is False by default. run_eagerly is only needed if you actually want op-by-op (eager) execution, e.g. for debugging.
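For example (a generic sketch, not the question's model), the default already runs fit() in graph mode, and eager execution only needs to be switched on explicitly:

import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")  # the train step is wrapped in tf.function by default

# Only for op-by-op debugging would you opt out of graph execution:
# model.run_eagerly = True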


Comments
  • Where should the decorator be added?