Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

June 17, 2017

TensorFlow 1.2 Hits The Streets!

Filed under: TensorFlow — Patrick Durusau @ 8:23 pm

TensorFlow 1.2

I’m not copying the features and improvements here; better that you download TensorFlow 1.2 and experience them for yourself!
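If you want a quick sanity check once you've upgraded (how you install will depend on your environment, e.g. `pip install --upgrade tensorflow`), a minimal sketch in Python looks like this:

```python
import tensorflow as tf

# Confirm the installed release (expect a '1.2.x' string)
print(tf.__version__)

# Smoke test: build and run a trivial graph, TF 1.x style
hello = tf.constant("Hello, TensorFlow 1.2!")
with tf.Session() as sess:
    print(sess.run(hello).decode())
```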

An incomplete list of the models available at TensorFlow Models:

  • adversarial_crypto: protecting communications with adversarial neural cryptography.
  • adversarial_text: semi-supervised sequence learning with adversarial training.
  • attention_ocr: a model for real-world image text extraction.
  • autoencoder: various autoencoders.
  • cognitive_mapping_and_planning: implementation of a spatial memory based mapping and planning architecture for visual navigation.
  • compression: compressing and decompressing images using a pre-trained Residual GRU network.
  • differential_privacy: privacy-preserving student models from multiple teachers.
  • domain_adaptation: domain separation networks.
  • im2txt: image-to-text neural network for image captioning.
  • inception: deep convolutional networks for computer vision.
  • learning_to_remember_rare_events: a large-scale life-long memory module for use in deep learning.
  • lm_1b: language modeling on the one billion word benchmark.
  • namignizer: recognize and generate names.
  • neural_gpu: highly parallel neural computer.
  • neural_programmer: neural network augmented with logic and mathematic operations.
  • next_frame_prediction: probabilistic future frame synthesis via cross convolutional networks.
  • object_detection: localizing and identifying multiple objects in a single image.
  • real_nvp: density estimation using real-valued non-volume preserving (real NVP) transformations.
  • resnet: deep and wide residual networks.
  • skip_thoughts: recurrent neural network sentence-to-vector encoder.
  • slim: image classification models in TF-Slim.
  • street: identify the name of a street (in France) from an image using a Deep RNN.
  • swivel: the Swivel algorithm for generating word embeddings.
  • syntaxnet: neural models of natural language syntax.
  • textsum: sequence-to-sequence with attention model for text summarization.
  • transformer: spatial transformer network, which allows the spatial manipulation of data within the network.
  • tutorials: models described in the TensorFlow tutorials.
  • video_prediction: predicting future video frames with neural advection.

And your TensorFlow model is ….?
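
If you're starting from scratch, here is a minimal sketch of a trainable model in the TF 1.x graph style; the toy data and hyperparameters are made up purely for illustration:

```python
import numpy as np
import tensorflow as tf

# Hypothetical toy data: learn y = 3x + 1 from noisy samples
x_train = np.random.rand(100, 1).astype(np.float32)
y_train = 3.0 * x_train + 1.0 + 0.05 * np.random.randn(100, 1).astype(np.float32)

# A one-layer linear model: placeholders, a dense layer, a loss, an optimizer
x = tf.placeholder(tf.float32, shape=[None, 1])
y = tf.placeholder(tf.float32, shape=[None, 1])
pred = tf.layers.dense(x, units=1)           # computes W*x + b
loss = tf.reduce_mean(tf.square(pred - y))   # mean squared error
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(200):
        _, l = sess.run([train_op, loss], feed_dict={x: x_train, y: y_train})
    print("final loss:", l)
```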

Enjoy!
