TensorFlow 1.8.0 Released

Overview:


The team behind TensorFlow has unveiled TensorFlow 1.8.0, just a week after the release candidate version


Numerous exciting features introduced and various bugs fixed for tf.data, tf.keras and Eager Execution


Many other miscellaneous changes have been made; read on to find out more


 


Introduction


The TensorFlow updates keep on rolling! Less than a month ago, the team behind this ultra-popular library released TensorFlow 1.7 for the general public, with TensorRT integration and the TensorFlow Debugger plugin among its features.


Now, they have unveiled the full version of TensorFlow 1.8.0, just a week after the release candidate. It contains modifications and improvements to previously launched features like Eager Execution and tf.keras.


In this article, we’ll take a look at the main features that come packaged in this release.




Let us have a look at the major features and improvements in TensorFlow 1.8.0:


To run an Estimator model on multiple GPUs on one machine, you can now pass tf.contrib.distribute.MirroredStrategy() to tf.estimator.RunConfig().
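Here's a minimal sketch of what that might look like (my_model_fn stands in for a hypothetical, user-defined model function):

    import tensorflow as tf

    # Mirror the model across all GPUs available on this machine
    distribution = tf.contrib.distribute.MirroredStrategy()
    config = tf.estimator.RunConfig(train_distribute=distribution)

    # my_model_fn is a placeholder for your own Estimator model function
    estimator = tf.estimator.Estimator(model_fn=my_model_fn, config=config)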


To support prefetching to GPU memory, tf.contrib.data.prefetch_to_device() has been added.


BoostedTreesClassifier and BoostedTreesRegressor have been added as pre-made estimators.
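A rough sketch of the new pre-made estimator (the feature column below is a made-up example; these estimators expect bucketized inputs):

    # Boosted trees operate on bucketized features in this release
    fc = tf.feature_column.bucketized_column(
        tf.feature_column.numeric_column("x"),
        boundaries=[0.0, 0.5, 1.0])

    classifier = tf.estimator.BoostedTreesClassifier(
        feature_columns=[fc],
        n_batches_per_layer=1)  # batches of data used per tree layer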


A 3rd generation pipeline config for Cloud TPUs has been added, which improves performance and usability.


tf.contrib.bayesflow is moving out to its own repo.


tf.contrib.{proto,rpc} has been added to allow generic proto parsing and RPC communication.


Very recently (and over the last couple of updates) tf.data, tf.keras and Eager Execution were released and demonstrated at the TensorFlow Dev Summit! Here are the major features and improvements in each:


tf.data:


To enable prefetching dataset elements to GPU memory, tf.contrib.data.prefetch_to_device has been added.
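A minimal sketch, assuming an existing in-memory features tensor:

    dataset = tf.data.Dataset.from_tensor_slices(features)
    dataset = dataset.batch(32)
    # Stage the next element in GPU memory while the current one is processed
    dataset = dataset.apply(tf.contrib.data.prefetch_to_device("/gpu:0"))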


Addition of tf.contrib.data.AUTOTUNE, which allows the tf.data runtime to automatically tune prefetch buffer sizes based on your system and environment.
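For example, rather than hand-tuning a buffer size, you might write:

    dataset = dataset.prefetch(buffer_size=tf.contrib.data.AUTOTUNE)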


Addition of tf.contrib.data.make_csv_dataset for building datasets from CSV files.
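A quick sketch, assuming a hypothetical train.csv with a header row and a column named "label":

    dataset = tf.contrib.data.make_csv_dataset(
        "train.csv",           # hypothetical CSV file
        batch_size=32,
        label_name="label")    # column to split off as the label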


Eager Execution:


With eager execution enabled, Datasets can be used as standard Python iterables (for batch in dataset:); Dataset.__iter__() and Dataset.make_one_shot_iterator() can both be used to create iterators.
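For example:

    import tensorflow as tf

    tf.enable_eager_execution()

    dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4]).batch(2)
    for batch in dataset:  # the Dataset behaves as a standard Python iterable
        print(batch)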


Automatic device placement has been enabled.


tf.GradientTape has moved out of contrib.
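A small sketch of computing a gradient with eager execution enabled:

    x = tf.constant(3.0)
    with tf.GradientTape() as tape:
        tape.watch(x)  # constants must be watched explicitly
        y = x * x
    grad = tape.gradient(y, [x])[0]  # dy/dx = 2x = 6.0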


tf.keras:


The Fashion-MNIST dataset has been added.
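Loading it mirrors the existing MNIST helper:

    (train_images, train_labels), (test_images, test_labels) = \
        tf.keras.datasets.fashion_mnist.load_data()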


New data preprocessing functions: image/random_brightness, sequence/TimeseriesGenerator, and text/hashing_trick.
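Two quick sketches of these helpers (the text and series below are made-up examples):

    import numpy as np
    from tensorflow import keras

    # Map words to integer indices in a fixed-size hashing space
    indices = keras.preprocessing.text.hashing_trick(
        "the quick brown fox", n=100)

    # Produce sliding windows over a series for sequence models
    series = np.arange(50).reshape(50, 1)
    gen = keras.preprocessing.sequence.TimeseriesGenerator(
        series, series, length=10, batch_size=8)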


 


Other important features and changes


Accelerated Linear Algebra (XLA):


Select and scatter in reference util and evaluator now use lexicographical order to break ties.


TensorFlow Debugger (tfdbg) CLI:


Nodes can now be excluded by regular expressions during tensor-filter operations.


Spurious background colors in some text terminals have been fixed.


tf.contrib:


Added the meta-distribution BatchReshape, which reshapes batch dimensions.


tf.contrib.layers.recompute_grad can be used for explicit gradient checkpointing on TPU.
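A rough sketch of how the decorator might be applied (block is a hypothetical layer function):

    # Recompute this block's activations during the backward pass
    # instead of storing them, trading extra compute for memory
    @tf.contrib.layers.recompute_grad
    def block(x):
        return tf.layers.dense(x, 128, activation=tf.nn.relu)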


Addition of tf.contrib.framework.argsort.
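For example:

    values = tf.constant([3.0, 1.0, 2.0])
    order = tf.contrib.framework.argsort(values)  # indices that would sort values: [1, 2, 0]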


DNNBoostedTreeCombinedEstimator can now work with core versions of feature columns as well as losses.


Added non-linear image warping ops: tf.contrib.image.sparse_image_warp, tf.contrib.image.dense_image_warp, and tf.contrib.image.interpolate_spline.
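As a sketch, dense_image_warp displaces each pixel by a per-pixel flow field (image and flow are hypothetical tensors here):

    # image: [batch, height, width, channels]; flow: [batch, height, width, 2]
    warped = tf.contrib.image.dense_image_warp(image, flow)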


Bug fixed in tf.contrib.opt.MultitaskOptimizerWrapper where types of tensors were mismatched.


There are a few other changes, which you can see on the GitHub page.


 


Our take on this:


In less than a month's time, the TensorFlow team has provided updates and bug fixes to their latest release. TensorFlow has also provided a guide to install r1.8 on your machine. The number of features they have added in such a short time has us excited about what's coming up next.


But a quick glance at Reddit shows that the ML community is divided over the pace of TensorFlow's updates. New versions are being rolled out at a never-before-seen frequency, and that has become a source of some agitation among data scientists.

