Show HN: A general-purpose encoder-decoder framework for Tensorflow (github.com/google)
54 points by dennybritz on March 13, 2017 | 7 comments


There is already a seq2seq in the tree under contrib, is this one different from/replacing it?

https://github.com/tensorflow/tensorflow/tree/master/tensorf...


This is a good question and we should probably add it to the FAQ. tf.contrib.seq2seq is a low-level library that you can use to build seq2seq models; it is used internally by this project. The key difference is that google/seq2seq is an end-to-end pipeline that you can run on your own data and that comes with a lot of bells and whistles.


To someone not active in the Tensorflow community, it is really not obvious what this is for. What are typical use cases? Why does the world need this?


Encoder-decoder models are a common deep learning technique for sequence-to-sequence tasks. They have had big wins lately in NLP tasks such as translation, POS tagging, and dialogue generation.
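To make the idea concrete, here is a tiny, untrained sketch of the encoder-decoder pattern in plain NumPy (not this library's API; all weight names like W_enc are made up for illustration). The encoder folds a variable-length input sequence into one fixed-size vector, and the decoder unrolls an output sequence from that vector:

```python
import numpy as np

rng = np.random.default_rng(0)
HIDDEN, VOCAB = 8, 5

# Hypothetical random (untrained) weights for a tiny RNN encoder/decoder.
W_enc = rng.normal(size=(HIDDEN, HIDDEN + VOCAB)) * 0.1
W_dec = rng.normal(size=(HIDDEN, HIDDEN + VOCAB)) * 0.1
W_out = rng.normal(size=(VOCAB, HIDDEN)) * 0.1

def encode(tokens):
    """Fold a variable-length token sequence into one fixed-size vector."""
    h = np.zeros(HIDDEN)
    for t in tokens:
        x = np.eye(VOCAB)[t]                       # one-hot input token
        h = np.tanh(W_enc @ np.concatenate([h, x]))
    return h                                       # the "thought vector"

def decode(h, steps, start=0):
    """Unroll the decoder from the encoded state, feeding outputs back in."""
    out, prev = [], start
    for _ in range(steps):
        x = np.eye(VOCAB)[prev]
        h = np.tanh(W_dec @ np.concatenate([h, x]))
        prev = int(np.argmax(W_out @ h))           # greedy decoding
        out.append(prev)
    return out

summary = encode([1, 3, 2, 4])        # input sequence -> fixed vector
output = decode(summary, steps=3)     # fixed vector -> output sequence
```

Training adjusts the weights so that `decode(encode(source))` reproduces the target sequence; real systems use LSTMs/GRUs and attention rather than this bare RNN.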

The Tensorflow documentation has an okay writeup about seq2seq models: https://www.tensorflow.org/tutorials/seq2seq

The author of the library also has a small blurb about it on his blog: http://www.wildml.com/deep-learning-glossary/#seq2seq


Are there any examples of seq2seq networks being used for tasks other than NLP? For example, can it be used for something like noise removal?


Yes, these models can be applied to a lot of non-NLP tasks. For example, I've seen seq2seq models applied to medical record prediction, program generation, etc. Noise removal seems like a good candidate.
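One way to frame noise removal as a seq2seq problem is to build (noisy input, clean target) training pairs and train the model to map one to the other. A minimal sketch, where the corruption scheme (random character substitution) is just an assumption for illustration:

```python
import random

def corrupt(text, p=0.2, alphabet="abcdefghijklmnopqrstuvwxyz ", seed=42):
    """Randomly replace a fraction p of characters to simulate noise."""
    rng = random.Random(seed)
    return "".join(rng.choice(alphabet) if rng.random() < p else c
                   for c in text)

clean = ["hello world", "seq2seq models", "noise removal"]
# (noisy source, clean target) pairs: exactly the input format a
# seq2seq training pipeline expects for a denoising task.
pairs = [(corrupt(s), s) for s in clean]
```

The model then learns the source-to-target mapping the same way a translation model does; the "languages" just happen to be noisy and clean versions of the same data.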


Noise removal? See this page (https://blog.keras.io/building-autoencoders-in-keras.html) and search for "Application to image denoising"



