Python-Tensorboard 2.7.0: TensorFlow's Visualization Toolkit

Latest Release: 2.7.0

The 2.7 minor series tracks TensorFlow 2.7.

Features

  • Time Series plugin
    • Run selection is now based on a regex filter (#5252)
    • Run match logic matches run name and alias (#5334, #5351)
    • Prepare Time Series for promotion to the first tab (#5291)
    • Improve/persist tag filter in the URL (#5249, #5236, #5263, #5265, #5271, #5300)
    • Show sample count on image cards (#5250)
    • Keep all digits for step values (#5325)
    • Remove pinned view while filtering (#5324)
    • Show relative time in tooltip (#5319)
    • UI: style improvements, adjust scroll position
  • Core
    • Resizable run table sidebar (#5219)
    • Support for fsspec filesystems (#5248)
  • Hparams
    • Treat no data as an empty experiment rather than an error (#5273)
    • Add tf.stop_gradient in tf.summary.histogram (#5311) - thanks @allenlavoie

Bug fixes

  • Dark mode improvements and fixes (#5318)
  • Time Series
    • Improve visibility logic (#5234, #5235)
    • Reset PluginType filter when selected all (#5272)
  • PR curve plugin: display correct thresholds (#5191)
  • Line chart
    • Recreate charts upon fatal renderer errors (#5237)
    • Fix zoom interaction (#5215)
    • Skip axis label render based on visibility (#5317)
  • Dropdown UI fixes (#5194, #5199, #5242)
  • Navigation handling (#5223, #5216)
  • Documentation
    • Document the Time Series dashboard (#5193)
    • Update README.md to include no-data example (#5163)

TensorBoard is a suite of web applications for inspecting and understanding your TensorFlow runs and graphs.

This README gives an overview of key concepts in TensorBoard, as well as how to interpret the visualizations TensorBoard provides. For an in-depth example of using TensorBoard, see the tutorial: TensorBoard: Getting Started. Documentation on how to use TensorBoard to work with images, graphs, hyperparameters, and more is linked from there, along with tutorial walk-throughs in Colab.

You may also be interested in the hosted TensorBoard solution at TensorBoard.dev. You can use TensorBoard.dev to easily host, track, and share your ML experiments for free. For example, this experiment shows a working example featuring the scalars, graphs, and histograms dashboards.

TensorBoard is designed to run entirely offline, without requiring any access to the Internet. For instance, it can run on your local machine, behind a corporate firewall, or in a datacenter.

Usage

Before running TensorBoard, make sure you have generated summary data in a log directory by creating a summary writer:

# sess.graph contains the graph definition; that enables the Graph Visualizer.

file_writer = tf.summary.FileWriter('/path/to/logs', sess.graph)
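
If you are using TensorFlow 2.x, a writer for the same logdir can be created with tf.summary.create_file_writer instead. A minimal sketch (the log path and the logged metric are placeholders):

import tensorflow as tf

# TF2-style sketch of creating a summary writer; the example above uses the
# TF1 tf.summary.FileWriter API instead.
file_writer = tf.summary.create_file_writer('/path/to/logs')
with file_writer.as_default():
    for step in range(100):
        tf.summary.scalar('loss', 1.0 / (step + 1), step=step)  # hypothetical metric
file_writer.flush()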

For more details, see the TensorBoard tutorial. Once you have event files, run TensorBoard and provide the log directory. If you're using a precompiled TensorFlow package (e.g. you installed via pip), run:

tensorboard --logdir path/to/logs

Or, if you are building from source:

bazel build tensorboard:tensorboard
./bazel-bin/tensorboard/tensorboard --logdir path/to/logs

# or even more succinctly
bazel run tensorboard -- --logdir path/to/logs

This should print that TensorBoard has started. Next, connect to http://localhost:6006.

TensorBoard requires a logdir to read logs from. For info on configuring TensorBoard, run tensorboard --help.

TensorBoard can be used in Google Chrome or Firefox. Other browsers might work, but there may be bugs or performance issues.

Key Concepts

Summary Ops: How TensorBoard gets data from TensorFlow

The first step in using TensorBoard is acquiring data from your TensorFlow run. For this, you need summary ops. Summary ops are ops, just like tf.matmul and tf.nn.relu, which means they take in tensors, produce tensors, and are evaluated from within a TensorFlow graph. However, summary ops have a twist: the Tensors they produce contain serialized protobufs, which are written to disk and sent to TensorBoard. To visualize the summary data in TensorBoard, you should evaluate the summary op, retrieve the result, and then write that result to disk using a summary.FileWriter. A full explanation, with examples, is in the tutorial.

The supported summary ops include tf.summary.scalar, tf.summary.histogram, tf.summary.image, tf.summary.audio, and tf.summary.text.
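
As a rough sketch of the evaluate-and-write flow described above (using the TF1-compatible API; the loss tensor and log path are placeholders, not part of the original example):

import tensorflow as tf

# Sketch: build a summary op, evaluate it in a session, and write the resulting
# serialized protobuf to the logdir with a FileWriter.
tf.compat.v1.disable_eager_execution()
loss = tf.constant(0.5)                                   # stand-in for a real tensor
loss_summary = tf.compat.v1.summary.scalar('loss', loss)  # the summary op
writer = tf.compat.v1.summary.FileWriter('/path/to/logs')

with tf.compat.v1.Session() as sess:
    serialized = sess.run(loss_summary)            # evaluate -> serialized protobuf
    writer.add_summary(serialized, global_step=0)  # write it to disk
writer.flush()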

Tags: Giving names to data

When you make a summary op, you will also give it a tag. The tag is basically a name for the data recorded by that op, and will be used to organize the data in the frontend. The scalar and histogram dashboards organize data by tag, and group the tags into folders according to a directory/like/hierarchy. If you have a lot of tags, we recommend grouping them with slashes.
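
For example (a sketch with hypothetical tag names, using the TF2 API):

import tensorflow as tf

# Slashes in the tag names group related charts into "losses/" and "metrics/"
# folders in the scalar dashboard.
writer = tf.summary.create_file_writer('/path/to/logs')
with writer.as_default():
    tf.summary.scalar('losses/total', 0.25, step=0)
    tf.summary.scalar('losses/regularization', 0.01, step=0)
    tf.summary.scalar('metrics/accuracy', 0.9, step=0)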

Event Files & LogDirs: How TensorBoard loads the data

summary.FileWriters take summary data from TensorFlow and write it to a specified directory, known as the logdir. Specifically, the data is written to an append-only record dump that will have "tfevents" in the filename. TensorBoard reads data from a full directory, and organizes it into the history of a single TensorFlow execution.

Why does it read the whole directory, rather than an individual file? You might have been using supervisor.py to run your model, in which case if TensorFlow crashes, the supervisor will restart it from a checkpoint. When it restarts, it will start writing to a new events file, and TensorBoard will stitch the various event files together to produce a consistent history of what happened.

Runs: Comparing different executions of your model

You may want to visually compare multiple executions of your model; for example, suppose you've changed the hyperparameters and want to see if it's converging faster. TensorBoard enables this through different "runs". When TensorBoard is passed a logdir at startup, it recursively walks the directory tree rooted at logdir looking for subdirectories that contain tfevents data. Every time it encounters such a subdirectory, it loads it as a new run, and the frontend will organize the data accordingly.

For example, here is a well-organized TensorBoard log directory, with two runs, "run1" and "run2".

/some/path/mnist_experiments/
/some/path/mnist_experiments/run1/
/some/path/mnist_experiments/run1/events.out.tfevents.1456525581.name
/some/path/mnist_experiments/run1/events.out.tfevents.1456525585.name
/some/path/mnist_experiments/run2/
/some/path/mnist_experiments/run2/events.out.tfevents.1456525385.name

To load these runs, point TensorBoard at the parent directory:

tensorboard --logdir /some/path/mnist_experiments

Logdir & Logdir_spec (Legacy Mode)

You may also pass a comma separated list of log directories, and TensorBoard will watch each directory. You can also assign names to individual log directories by putting a colon between the name and the path, as in

tensorboard --logdir_spec name1:/path/to/logs/1,name2:/path/to/logs/2

This flag (--logdir_spec) is discouraged and can usually be avoided. TensorBoard walks log directories recursively; for finer-grained control, prefer using a symlink tree. Some features may not work when using --logdir_spec instead of --logdir.

The Visualizations

Scalar Dashboard

TensorBoard's Scalar Dashboard visualizes scalar statistics that vary over time; for example, you might want to track the model's loss or learning rate. As described in Key Concepts, you can compare multiple runs, and the data is organized by tag. The line charts have the following interactions:

  • Clicking on the small blue icon in the lower-left corner of each chart will expand the chart

  • Dragging a rectangular region on the chart will zoom in

  • Double clicking on the chart will zoom out

  • Mousing over the chart will produce crosshairs, with data values recorded in the run-selector on the left.

Additionally, you can create new folders to organize tags by writing regular expressions in the box in the top-left of the dashboard.

Histogram Dashboard

The Histogram Dashboard displays how the statistical distribution of a Tensor has varied over time. It visualizes data recorded via tf.summary.histogram. Each chart shows temporal "slices" of data, where each slice is a histogram of the tensor at a given step. It's organized with the oldest timestep in the back, and the most recent timestep in front. By changing the Histogram Mode from "offset" to "overlay", the perspective will rotate so that every histogram slice is rendered as a line and overlaid on one another.
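
For example, a short sketch (TF2 API; the tensor values are synthetic):

import tensorflow as tf

# Log the distribution of a tensor at each step; the Histogram dashboard (and
# the Distribution dashboard below) read data written this way.
writer = tf.summary.create_file_writer('/path/to/logs')
with writer.as_default():
    for step in range(10):
        values = tf.random.normal([1000], mean=0.0, stddev=1.0 + 0.1 * step)
        tf.summary.histogram('activations', values, step=step)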

Distribution Dashboard

The Distribution Dashboard is another way of visualizing histogram data from tf.summary.histogram. It shows some high-level statistics on a distribution. Each line on the chart represents a percentile in the distribution over the data: for example, the bottom line shows how the minimum value has changed over time, and the line in the middle shows how the median has changed. Reading from top to bottom, the lines have the following meaning: [maximum, 93%, 84%, 69%, 50%, 31%, 16%, 7%, minimum]

These percentiles can also be viewed as standard deviation boundaries on a normal distribution: [maximum, μ+1.5σ, μ+σ, μ+0.5σ, μ, μ-0.5σ, μ-σ, μ-1.5σ, minimum] so that the colored regions, read from inside to outside, have widths [σ, 2σ, 3σ] respectively.

Image Dashboard

The Image Dashboard can display PNGs that were saved via tf.summary.image. The dashboard is set up so that each row corresponds to a different tag, and each column corresponds to a run. Since the image dashboard supports arbitrary PNGs, you can use this to embed custom visualizations (e.g. matplotlib scatterplots) into TensorBoard. This dashboard always shows you the latest image for each tag.
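
For example, a minimal sketch (TF2 API; the images here are random placeholders):

import tensorflow as tf

# tf.summary.image expects a [k, height, width, channels] batch; float values
# should be in [0, 1].
writer = tf.summary.create_file_writer('/path/to/logs')
with writer.as_default():
    images = tf.random.uniform([4, 28, 28, 1])  # four hypothetical grayscale images
    tf.summary.image('samples', images, step=0, max_outputs=4)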

Audio Dashboard

The Audio Dashboard can embed playable audio widgets for audio saved via a tf.summary.audio. The dashboard is set up so that each row corresponds to a different tag, and each column corresponds to a run. This dashboard always embeds the latest audio for each tag.
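
For example, a minimal sketch (TF2 API; the waveform is synthetic):

import math
import tensorflow as tf

# tf.summary.audio expects [k, frames, channels] float data in [-1, 1] plus a
# sample rate.
writer = tf.summary.create_file_writer('/path/to/logs')
with writer.as_default():
    t = tf.linspace(0.0, 1.0, 16000)           # one second at 16 kHz
    tone = tf.sin(2.0 * math.pi * 440.0 * t)   # a 440 Hz sine wave
    tf.summary.audio('tone', tf.reshape(tone, [1, -1, 1]), sample_rate=16000, step=0)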

Graph Explorer

The Graph Explorer can visualize a TensorFlow graph, enabling inspection of the TensorFlow model. To get the best use of the graph visualizer, you should use name scopes to hierarchically group the ops in your graph - otherwise, the graph may be difficult to decipher. For more information, including examples, see the graph visualizer tutorial.
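
For example, a sketch of grouping ops with a name scope (the layer here is made up; assumes the TF1-compatible graph API):

import tensorflow as tf

# Ops created inside tf.name_scope('dense_layer') collapse into a single
# expandable "dense_layer" node in the Graph Explorer.
tf.compat.v1.disable_eager_execution()
x = tf.compat.v1.placeholder(tf.float32, [None, 784], name='input')
with tf.name_scope('dense_layer'):
    w = tf.Variable(tf.zeros([784, 10]), name='weights')
    b = tf.Variable(tf.zeros([10]), name='bias')
    logits = tf.matmul(x, w) + b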

Embedding Projector

The Embedding Projector allows you to visualize high-dimensional data; for example, you may view your input data after it has been embedded in a high-dimensional space by your model. The embedding projector reads data from your model checkpoint file, and may be configured with additional metadata, like a vocabulary file or sprite images. For more details, see the embedding projector tutorial.

Text Dashboard

The Text Dashboard displays text snippets saved via tf.summary.text. Markdown features including hyperlinks, lists, and tables are all supported.
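
For example (TF2 API; the note text is arbitrary):

import tensorflow as tf

# The logged string is rendered as Markdown by the Text dashboard.
writer = tf.summary.create_file_writer('/path/to/logs')
with writer.as_default():
    tf.summary.text('notes', '**Run 1**: learning rate = `0.001`', step=0)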

Frequently Asked Questions

My TensorBoard isn't showing any data! What's wrong?

First, check that the directory passed to --logdir is correct. You can also verify this by navigating to the Scalars dashboard (under the "Inactive" menu) and looking for the log directory path at the bottom of the left sidebar.

If you're loading from the proper path, make sure that event files are present. TensorBoard walks its logdir recursively, so it's fine if the data is nested under a subdirectory. Ensure the following shows at least one result:

find DIRECTORY_PATH | grep tfevents

You can also check that the event files actually contain data by running tensorboard in inspect mode:

tensorboard --inspect --logdir DIRECTORY_PATH

TensorBoard is showing only some of my data, or isn't properly updating!

Update: the experimental --reload_multifile=true option can now be used to poll all "active" files in a directory for new data, rather than only the most recent one as described below. A file is "active" as long as it received new data within the last --reload_multifile_inactive_secs seconds (default: 4000).

This issue usually comes about because of how TensorBoard iterates through the tfevents files: it progresses through the event files in timestamp order, and only reads one file at a time. Let's suppose we have files with timestamps a and b, where a < b. Once TensorBoard has read all the events in a, it will never return to it, because it assumes any new events are being written in the more recent file. This could cause an issue if, for example, you have two FileWriters simultaneously writing to the same directory. If you have multiple summary writers, each one should be writing to a separate directory.
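
For example, a common pattern is one writer per run subdirectory (a sketch using the TF2 API; the paths are placeholders):

import tensorflow as tf

# Each writer gets its own subdirectory, so TensorBoard sees two runs ("train"
# and "eval") instead of two writers interleaving event files in one directory.
train_writer = tf.summary.create_file_writer('/path/to/logs/train')
eval_writer = tf.summary.create_file_writer('/path/to/logs/eval')

with train_writer.as_default():
    tf.summary.scalar('loss', 0.3, step=0)
with eval_writer.as_default():
    tf.summary.scalar('loss', 0.4, step=0)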

Does TensorBoard support multiple or distributed summary writers?

Update: the experimental --reload_multifile=true option can now be used to poll all "active" files in a directory for new data, where a file is "active" if it received new data within the last --reload_multifile_inactive_secs seconds (default: 4000).

No. TensorBoard expects that only one events file will be written to at a time, and multiple summary writers means multiple events files. If you are running a distributed TensorFlow instance, we encourage you to designate a single worker as the "chief" that is responsible for all summary processing. See supervisor.py for an example.

I'm seeing data overlapped on itself! What gives?

If you are seeing data that seems to travel backwards through time and overlap with itself, there are a few possible explanations.

  • You may have multiple executions of TensorFlow that all wrote to the same log directory. Please have each TensorFlow run write to its own logdir.

    Update: the experimental --reload_multifile=true option can now be used to poll all "active" files in a directory for new data, where a file is "active" if it received new data within the last --reload_multifile_inactive_secs seconds (default: 4000).

  • You may have a bug in your code where the global_step variable (passed to FileWriter.add_summary) is being maintained incorrectly.

  • It may be that your TensorFlow job crashed, and was restarted from an earlier checkpoint. See How to handle TensorFlow restarts, below.

As a workaround, try changing the x-axis display in TensorBoard from steps to wall_time. This will frequently clear up the issue.

How should I handle TensorFlow restarts?

TensorFlow is designed with a mechanism for graceful recovery if a job crashes or is killed: TensorFlow can periodically write model checkpoint files, which enable you to restart TensorFlow without losing all your training progress.

However, this can complicate things for TensorBoard; imagine that TensorFlow wrote a checkpoint at step a, and then continued running until step b, and then crashed and restarted at timestamp a. All of the events written between a and b were "orphaned" by the restart event and should be removed.

To facilitate this, we have a SessionLog message in tensorflow/core/util/event.proto which can record SessionStatus.START as an event; like all events, it may have a step associated with it. If TensorBoard detects a SessionStatus.START event with step a, it will assume that every event with a step greater than a was orphaned, and it will discard those events. This behavior may be disabled with the flag --purge_orphaned_data false (in versions after 0.7).

How can I export data from TensorBoard?

The Scalar Dashboard supports exporting data; you can click the "enable download links" option in the left-hand bar. Then, each plot will provide download links for the data it contains.

If you need access to the full dataset, you can read the event files that TensorBoard consumes by using the summary_iterator method.
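
For example, a sketch of iterating over an event file (summary_iterator is available as tf.compat.v1.train.summary_iterator in TF 2.x; the file path is a placeholder):

import tensorflow as tf

path = '/path/to/logs/events.out.tfevents.1456525581.name'
for event in tf.compat.v1.train.summary_iterator(path):
    for value in event.summary.value:
        if value.HasField('simple_value'):  # scalar summaries
            print(event.step, value.tag, value.simple_value)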

Can I make my own plugin?

Yes! You can clone and tinker with one of the examples and make your own, amazing visualizations. More documentation on the plugin system is described in the ADDING_A_PLUGIN guide. Feel free to file feature requests or questions about plugin functionality.

Once satisfied with your own groundbreaking new plugin, see the distribution section on how to publish to PyPI and share it with the community.

Can I customize which lines appear in a plot?

Using the custom scalars plugin, you can create scalar plots with lines for custom run-tag pairs. However, within the original scalars dashboard, each scalar plot corresponds to data for a specific tag and contains lines for each run that includes that tag.

Can I visualize margins above and below lines?

Margin plots (that visualize lower and upper bounds) may be created with the custom scalars plugin. The original scalars plugin does not support visualizing margins.

Can I create scatterplots (or other custom plots)?

This isn't yet possible. As a workaround, you could create your custom plot in your own code (e.g. matplotlib), write it into a Summary protobuf (core/framework/summary.proto), and add it to your FileWriter. Then, your custom plot will appear in the TensorBoard image tab.
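
A sketch of that workaround using the TF2 image summary API (the TF1 route via Summary protos and FileWriter.add_summary works similarly; the plot data here is made up):

import io

import matplotlib.pyplot as plt
import tensorflow as tf

# Render the figure to PNG bytes in memory, decode it to a tensor, and log it
# as an image summary so it appears in the Image dashboard.
fig, ax = plt.subplots()
ax.scatter([1.0, 2.0, 3.0], [4.0, 1.0, 3.0])
buf = io.BytesIO()
fig.savefig(buf, format='png')
plt.close(fig)

image = tf.image.decode_png(buf.getvalue(), channels=4)
image = tf.expand_dims(image, 0)  # add a batch dimension: [1, height, width, 4]

writer = tf.summary.create_file_writer('/path/to/logs')
with writer.as_default():
    tf.summary.image('custom_scatterplot', image, step=0)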

Is my data being downsampled? Am I really seeing all the data?

TensorBoard uses reservoir sampling to downsample your data so that it can be loaded into RAM. You can modify the number of elements it will keep per tag by using the --samples_per_plugin command line argument (ex: --samples_per_plugin=scalars=500,images=20). Alternatively, you can change the source code in tensorboard/backend/application.py. See this Stack Overflow question for some more information.

I get a network security popup every time I run TensorBoard on a Mac!

Versions of TensorBoard prior to TensorBoard 2.0 would by default serve on host 0.0.0.0, which is publicly accessible. For those versions of TensorBoard, you can stop the popups by specifying --host localhost at startup.

In TensorBoard 2.0 and up, --host localhost is the default. Use --bind_all to restore the old behavior of serving to the public network on both IPv4 and IPv6.

Can I run tensorboard without a TensorFlow installation?

TensorBoard 1.14+ can be run with a reduced feature set if you do not have TensorFlow installed. The primary limitation is that as of 1.14, only the following plugins are supported: scalars, custom scalars, image, audio, graph, projector (partial), distributions, histograms, text, PR curves, mesh. In addition, there is no support for log directories on Google Cloud Storage.

How can I contribute to TensorBoard development?

See DEVELOPMENT.md.

I have a different issue that wasn't addressed here!

First, try searching our GitHub issues and Stack Overflow. It may be that someone else has already had the same issue or question.

General usage questions (or problems that may be specific to your local setup) should go to Stack Overflow.

If you have found a bug in TensorBoard, please file a GitHub issue with as much supporting information as you can provide (e.g. attaching events files, including the output of tensorboard --inspect, etc.).

Comments

  • Downloading hparams table should include the Trial ID column

    Jan 12, 2022

    Downloading the hparams table as a csv omits the Trial ID column. It would be helpful if this was included along with the logged hyperparameters and metrics for completeness.

  • build(deps): bump follow-redirects from 1.14.1 to 1.14.7

    Jan 12, 2022

    Bumps follow-redirects from 1.14.1 to 1.14.7.

    Commits
    • 2ede36d Release version 1.14.7 of the npm package.
    • 8b347cb Drop Cookie header across domains.
    • 6f5029a Release version 1.14.6 of the npm package.
    • af706be Ignore null headers.
    • d01ab7a Release version 1.14.5 of the npm package.
    • 40052ea Make compatible with Node 17.
    • 86f7572 Fix: clear internal timer on request abort to avoid leakage
    • 2e1eaf0 Keep Authorization header on subdomain redirects.
    • 2ad9e82 Carry over Host header on relative redirects (#172)
    • 77e2a58 Release version 1.14.4 of the npm package.
    • Additional commits viewable in compare view


    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
  • build(deps): bump engine.io from 4.1.1 to 4.1.2

    Jan 13, 2022

    Bumps engine.io from 4.1.1 to 4.1.2.

    Release notes

    Sourced from engine.io's releases.

    4.1.2

    :warning: This release contains an important security fix :warning:

    A malicious client could send a specially crafted HTTP request, triggering an uncaught exception and killing the Node.js process:

    RangeError: Invalid WebSocket frame: RSV2 and RSV3 must be clear
        at Receiver.getInfo (/.../node_modules/ws/lib/receiver.js:176:14)
        at Receiver.startLoop (/.../node_modules/ws/lib/receiver.js:136:22)
        at Receiver._write (/.../node_modules/ws/lib/receiver.js:83:10)
        at writeOrBuffer (internal/streams/writable.js:358:12)

    This bug was introduced by this commit, included in [email protected], so previous releases are not impacted.

    Thanks to Marcus Wejderot from Mevisio for the responsible disclosure.

    Bug Fixes

    • properly handle invalid data sent by a malicious websocket client (a70800d)


    Changelog

    Sourced from engine.io's changelog.

    4.1.2 (2022-01-11)

    :warning: This release contains an important security fix :warning:

    A malicious client could send a specially crafted HTTP request, triggering an uncaught exception and killing the Node.js process:

    RangeError: Invalid WebSocket frame: RSV2 and RSV3 must be clear
        at Receiver.getInfo (/.../node_modules/ws/lib/receiver.js:176:14)
        at Receiver.startLoop (/.../node_modules/ws/lib/receiver.js:136:22)
        at Receiver._write (/.../node_modules/ws/lib/receiver.js:83:10)
        at writeOrBuffer (internal/streams/writable.js:358:12)

    This bug was introduced by this commit, included in [email protected], so previous releases are not impacted.

    Thanks to Marcus Wejderot from Mevisio for the responsible disclosure.

    Bug Fixes

    • properly handle invalid data sent by a malicious websocket client (a70800d)
    Commits
    • c6315af chore(release): 4.1.2
    • a70800d fix: properly handle invalid data sent by a malicious websocket client
    • See full diff in compare view


    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
  • Adding node and cell number for tensorboard graph

    Jan 14, 2022

    I am trying to trace the tensorboard graph for https://github.com/promach/gdas

    However, from what I can observe so far, the TensorBoard graph does not indicate user-understandable node numbers or cell numbers, which makes it very difficult to track down the connections within the graph.

    Any suggestions?


    type:support 
  • build(deps): bump marked from 2.1.3 to 4.0.10

    Jan 15, 2022

    Bumps marked from 2.1.3 to 4.0.10.

    Release notes

    Sourced from marked's releases.

    v4.0.10

    4.0.10 (2022-01-13)

    Bug Fixes

    • security: fix redos vulnerabilities (8f80657)

    v4.0.9

    4.0.9 (2022-01-06)

    Bug Fixes

    v4.0.8

    4.0.8 (2021-12-19)

    Bug Fixes

    v4.0.7

    4.0.7 (2021-12-09)

    Bug Fixes

    v4.0.6

    4.0.6 (2021-12-02)

    Bug Fixes

    v4.0.5

    4.0.5 (2021-11-25)

    Bug Fixes

    • table after paragraph without blank line (#2298) (5714212)

    v4.0.4

    4.0.4 (2021-11-19)

    ... (truncated)

    Commits
    • ae01170 chore(release): 4.0.10 [skip ci]
    • fceda57 build [skip ci]
    • 8f80657 fix(security): fix redos vulnerabilities
    • c4a3ccd Merge pull request from GHSA-rrrm-qjm4-v8hf
    • d7212a6 chore(deps-dev): Bump jasmine from 4.0.0 to 4.0.1 (#2352)
    • 5a84db5 chore(deps-dev): Bump rollup from 2.62.0 to 2.63.0 (#2350)
    • 2bc67a5 chore(deps-dev): Bump markdown-it from 12.3.0 to 12.3.2 (#2351)
    • 98996b8 chore(deps-dev): Bump @​babel/preset-env from 7.16.5 to 7.16.7 (#2353)
    • ebc2c95 chore(deps-dev): Bump highlight.js from 11.3.1 to 11.4.0 (#2354)
    • e5171a9 chore(release): 4.0.9 [skip ci]
    • Additional commits viewable in compare view


    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
  • time-namespaced state: fix experiment data initial loading on timeNamespacedState enabled

    Jan 18, 2022

    fix experiment data initial loading on timeNamespacedState enabled.

    This only happens when the browser is started directly on the experiment view and timeNamespacedState is enabled from the param. NamespaceUpdateOption is set to NEW initially and gets updated to UNCHANGED on any navigation afterwards. However, when the experiment view loads initially there are no navigation actions, so the option remains NEW, which generates a new namespaceId. In the fix we set the value to UNCHANGED after the first run of the app navigation flow. However, this is tricky to cover with unit tests: InitAction runs only once through the changed code path, and triggering any follow-up actions triggers navigation$. Added webtests to catch this. (googlers, please see cl/421637297)

  • AttributeError: 'int' object has no attribute 'encode'

    Jan 21, 2019

    When I run the following code from the What-If Tool notebook:

    #@title Convert dataset to tf.Example protos {display-mode: "form"}
    examples = df_to_examples(df)

    I get the following error:


    AttributeError                            Traceback (most recent call last)
    in ()
          2 #@title Convert dataset to tf.Example protos {display-mode: "form"}
          3
    ----> 4 examples = df_to_examples(df)

    in df_to_examples(df, columns)
         70     example.features.feature[col].float_list.value.append(row[col])
         71   elif row[col] == row[col]:
    ---> 72     example.features.feature[col].bytes_list.value.append(row[col].encode('utf-8'))
         73   examples.append(example)
         74   return examples

    AttributeError: 'int' object has no attribute 'encode'

    type:bug stat:awaiting response plugin:what-if-tool 
  • TensorBoard doesn't load new data when summaries are stored in GCS.

    Jan 10, 2018

    I have been storing summaries in a GCS bucket, and running TensorBoard on a Compute Engine instance. This setup worked fairly well. Coming back from the holiday break, I noticed that TensorBoard no longer reloads new summary events. The only way for me to see new data is to restart the TensorBoard process.

    type:bug core:backend stat:awaiting response 
  • Blank Page in Browser (and other error)

    Jan 7, 2020

    When starting TensorBoard I only get a blank white page. Tested on multiple browsers and scripts. I just updated to the new TensorFlow 2.0; yesterday, with the older version, it worked. If I select the page content it just shows 4 blue vertical lines. Inspecting the web page with Chrome shows a lot of HTML code. Using VSCode, Win10, Chrome and Firefox.

    I start tensorboard with any of these:

    tensorboard --logdir ./logs
    tensorboard --logdir ./logs --bind_all
    tensorboard --logdir ./logs --host localhost --port 8080

    Results are the same.

    strange other error

    Also, with the new version I get a strange error if I take the official example and create the log_dir in the following way:

    log_dir="logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")

    It raises the error "tensorflow.python.framework.errors_impl.NotFoundError: Failed to create a directory: logs/fit/20200107-131107\train; No such file or directory [Op:CreateSummaryFileWriter]", so I have to use os.path.join or just use no subdirectories.

    Example file to reconstruct error

    from datetime import datetime
    import tensorflow as tf
    
    mnist = tf.keras.datasets.mnist
    (x_train, y_train),(x_test, y_test) = mnist.load_data()
    
    def create_model():
      return tf.keras.models.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(512, activation='relu'),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(10, activation='softmax')
      ])
    
    
    model = create_model()
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    
    log_dir= datetime.now().strftime("%Y%m%d-%H%M%S")
    tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
    
    model.fit(x=x_train, 
              y=y_train, 
              epochs=1, 
              validation_data=(x_test, y_test), 
              callbacks=[tensorboard_callback])
    
    
    

    Diagnostics

    Diagnostics output
    --- check: autoidentify
    INFO: diagnose_tensorboard.py version d515ab103e2b1cfcea2b096187741a0eeb8822ef
    
    --- check: general
    INFO: sys.version_info: sys.version_info(major=3, minor=6, micro=7, releaselevel='final', serial=0)
    INFO: os.name: nt
    INFO: os.uname(): N/A
    INFO: sys.getwindowsversion(): sys.getwindowsversion(major=10, minor=0, build=17763, platform=2, service_pack='')
    
    --- check: package_management
    INFO: has conda-meta: False
    INFO: $VIRTUAL_ENV: None
    
    --- check: installed_packages
    INFO: installed: tensorboard==2.1.0
    INFO: installed: tensorflow==2.0.0
    INFO: installed: tensorflow-estimator==2.0.1
    
    --- check: tensorboard_python_version
    INFO: tensorboard.version.VERSION: '2.1.0'
    
    --- check: tensorflow_python_version
    INFO: tensorflow.__version__: '2.0.0'
    INFO: tensorflow.__git_version__: 'v2.0.0-rc2-26-g64c3d382ca'
    
    --- check: tensorboard_binary_path
    INFO: which tensorboard: b'C:\\Python3\\Scripts\\tensorboard.exe\r\n'
    
    --- check: addrinfos
    socket.has_ipv6 = True
    socket.AF_UNSPEC = <AddressFamily.AF_UNSPEC: 0>
    socket.SOCK_STREAM = <SocketKind.SOCK_STREAM: 1>
    socket.AI_ADDRCONFIG = <AddressInfo.AI_ADDRCONFIG: 1024>
    socket.AI_PASSIVE = <AddressInfo.AI_PASSIVE: 1>
    Loopback flags: <AddressInfo.AI_ADDRCONFIG: 1024>
    Loopback infos: [(<AddressFamily.AF_INET6: 23>, <SocketKind.SOCK_STREAM: 1>, 0, '', ('::1', 0, 0, 0)), (<AddressFamily.AF_INET: 2>, <SocketKind.SOCK_STREAM: 1>, 0, '', ('127.0.0.1', 0))]
    Wildcard flags: <AddressInfo.AI_PASSIVE: 1>
    Wildcard infos: [(<AddressFamily.AF_INET6: 23>, <SocketKind.SOCK_STREAM: 1>, 0, '', ('::', 0, 0, 0)), (<AddressFamily.AF_INET: 2>, <SocketKind.SOCK_STREAM: 1>, 0, '', ('0.0.0.0', 0))]
    
    --- check: readable_fqdn
    INFO: socket.getfqdn(): 'dnpc.ddns.lcl'
    
    --- check: stat_tensorboardinfo
    INFO: directory: C:\Users\Public\Documents\Wondershare\CreatorTemp\.tensorboard-info
    INFO: os.stat(...): os.stat_result(st_mode=16895, st_ino=3096224744537609, st_dev=1926005927, st_nlink=1, st_uid=0, st_gid=0, st_size=0, st_atime=1578399251, st_mtime=1578399251, st_ctime=1578397643)
    INFO: mode: 0o40777
    
    --- check: source_trees_without_genfiles
    INFO: tensorboard_roots (1): ['C:\\Python3\\lib\\site-packages']; bad_roots (0): []
    
    --- check: full_pip_freeze
    INFO: pip freeze --all:
    absl-py==0.7.0
    astor==0.7.1
    attrs==19.1.0
    backcall==0.1.0
    bayespy==0.5.18
    bleach==3.1.0
    cachetools==4.0.0
    certifi==2019.3.9
    chardet==3.0.4
    click==7.0
    clickclick==1.2.2
    colorama==0.4.1
    connexion==1.1.15
    cycler==0.10.0
    Cython==0.29.11
    decorator==4.3.2
    defusedxml==0.5.0
    dlib==19.17.0
    entrypoints==0.3
    flask==1.0.3
    gast==0.2.2
    gmplot==1.2.0
    google-auth==1.10.0
    google-auth-oauthlib==0.4.1
    google-pasta==0.1.8
    gpx-parser==0.0.4
    grpcio==1.26.0
    h5py==2.9.0
    idna==2.8
    imageio==2.5.0
    importlib==1.0.4
    inflection==0.3.1
    ipykernel==5.1.0
    ipython==7.3.0
    ipython-genutils==0.2.0
    ipywidgets==7.4.2
    itsdangerous==1.1.0
    jedi==0.13.3
    Jinja2==2.10
    joblib==0.13.2
    jsonschema==3.0.1
    jupyter==1.0.0
    jupyter-client==5.2.4
    jupyter-console==6.0.0
    jupyter-core==4.4.0
    Keras-Applications==1.0.8
    Keras-Preprocessing==1.0.8
    kiwisolver==1.0.1
    Markdown==3.0.1
    MarkupSafe==1.1.1
    matplotlib==3.0.2
    mistune==0.8.4
    nbconvert==5.4.1
    nbformat==4.4.0
    networkx==2.2
    notebook==5.7.4
    numpy==1.16.1
    oauthlib==3.1.0
    openapi-spec-validator==0.2.7
    opt-einsum==3.1.0
    pandas==0.24.1
    pandocfilters==1.4.2
    parso==0.3.4
    pickleshare==0.7.5
    Pillow==6.0.0
    pip==19.2.3
    primo==1.0
    prometheus-client==0.6.0
    prompt-toolkit==2.0.9
    protobuf==3.6.1
    pyasn1==0.4.8
    pyasn1-modules==0.2.7
    Pygments==2.3.1
    pykalman==0.9.5
    pyparsing==2.3.1
    PyQt5==5.12.1
    PyQt5-sip==4.19.15
    pyrsistent==0.14.11
    python-dateutil==2.6.0
    pytz==2018.9
    pywinpty==0.5.5
    pyyaml==5.1
    pyzmq==18.0.1
    qtconsole==4.4.3
    requests==2.21.0
    requests-oauthlib==1.3.0
    rsa==4.0
    scikit-learn==0.21.2
    scipy==1.3.3
    seaborn==0.9.0
    Send2Trash==1.5.0
    setuptools==44.0.0
    six==1.12.0
    sklearn==0.0
    swagger-server==1.0.0
    swagger-spec-validator==2.4.3
    tensorboard==2.1.0
    tensorflow==2.0.0
    tensorflow-estimator==2.0.1
    termcolor==1.1.0
    terminado==0.8.1
    testpath==0.4.2
    tornado==6.0.1
    traitlets==4.3.2
    typing==3.6.2
    urllib3==1.24.3
    wcwidth==0.1.7
    webencodings==0.5.1
    Werkzeug==0.14.1
    wheel==0.32.3
    widgetsnbextension==3.4.2
    wrapt==1.11.2
    
    

    Next steps

    No action items identified. Please copy ALL of the above output, including the lines containing only backticks, into your GitHub issue or comment. Be sure to redact any sensitive information.

    core:frontend os:windows stat:awaiting tensorflower 
  • add base_url support and base_url tests

    Aug 28, 2017

    The following PR adds base_url support for TensorBoard so that a user can specify the base path as follows:

    tensorboard --logdir=/path/to/log_files --base_url=/path/to/tensorboard
    

    TensorBoard will then be accessible at (assuming the default port of 6006):

    localhost:6006/path/to/tensorboard/
    

    The changes are mostly in applications.py, adding the base_url as a prefix to each self.data_applications[path].

    Additionally, a test for base_url support was added to applications_test.py.

  • old html5lib makes pip dysfunctional

    Sep 29, 2017

    html5lib dependency should be updated to 1.0b10

    Windows 10 x64, Anaconda 5.0.0 with Python 3.6.2: installing tensorflow-gpu pulls in tensorboard, which downgrades html5lib, which leaves pip dysfunctional.

    type:bug type:build/install 
  • Feature Request: Add an optional command-line argument for a prefix URL

    Jun 16, 2017

    (I've migrated this issue from https://github.com/tensorflow/tensorflow/issues/10655. It was originally posted by @mebersole. It had four thumbs up emoji there.)

    Would like to add a command-line argument that allows TensorBoard to run at a different URL location than the root domain. So for example:

    • command-line: --base_url runhere
    • URL location: http://<address>:6006/runhere/

    Motivation for request: There are locations where only minimal ports are open and it would be great to use nginx (or similar) to route TensorBoard through port 80.

    stat:contributions welcome type:feature core:backend 