This can be useful to visualize weights and biases and verify that they are changing in an expected way. Histograms can be found in the Time Series or Histograms dashboards; distributions can be found in the Distributions dashboard.

Additional TensorBoard dashboards are automatically enabled when you log other types of data. For example, the Keras TensorBoard callback lets you log images and embeddings as well. You can see what other dashboards are available in TensorBoard by clicking on the "inactive" dropdown towards the top right.

When training with methods such as tf.GradientTape(), use tf.summary to log the required information.

Use the same dataset as above, but convert it to tf.data.Dataset to take advantage of batching capabilities:

```python
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test))
train_dataset = train_dataset.shuffle(60000).batch(64)
```

The training code follows the advanced quickstart tutorial, but shows how to log metrics to TensorBoard. Choose loss and optimizer:

```python
loss_object = tf.keras.losses.SparseCategoricalCrossentropy()
```

Create stateful metrics that can be used to accumulate values during training and logged at any point:

```python
# Define our metrics
train_loss = tf.keras.metrics.Mean('train_loss', dtype=tf.float32)
```
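To make the pieces above concrete, here is a minimal sketch of a tf.GradientTape() training loop that accumulates loss into the stateful metric, then logs a scalar (and weight histograms) to a summary writer once per epoch. The tiny random dataset, the two-layer model, and the `logs/gradient_tape/train` directory are illustrative placeholders, not part of the original tutorial:

```python
import tensorflow as tf

# Hypothetical tiny dataset standing in for MNIST; image shape (28, 28)
# matches what the tutorial uses.
x_train = tf.random.uniform((64, 28, 28))
y_train = tf.random.uniform((64,), maxval=10, dtype=tf.int64)
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(32)

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(10),
])

loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.Adam()

# Stateful metric: accumulates batch losses until reset.
train_loss = tf.keras.metrics.Mean('train_loss', dtype=tf.float32)

# Summary writer pointing at a log directory TensorBoard can watch.
writer = tf.summary.create_file_writer('logs/gradient_tape/train')

for epoch in range(2):
    for x, y in train_dataset:
        with tf.GradientTape() as tape:
            logits = model(x, training=True)
            loss = loss_object(y, logits)
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        train_loss(loss)  # accumulate into the stateful metric

    with writer.as_default():
        # Log the accumulated metric once per epoch, then reset it.
        tf.summary.scalar('loss', train_loss.result(), step=epoch)
        # Weight histograms feed the Histograms/Distributions dashboards.
        for var in model.trainable_variables:
            tf.summary.histogram(var.name, var, step=epoch)
    train_loss.reset_state()
```

Because the metric is stateful, you can call `.result()` and log it at any point during training, not only at epoch boundaries.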