This is a good question and we should probably add it to the FAQ. tf.contrib.seq2seq is a low-level library for building seq2seq models; this project uses it internally. The key difference is that google/seq2seq is an end-to-end pipeline that you can run on your own data and that comes with a lot of bells and whistles.
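To make "low-level" concrete, here is a minimal sketch of a training graph assembled by hand from the tf.contrib.seq2seq primitives (TF 1.x API). All of the dimensions and placeholder names are made up for illustration, and a real model would shift the decoder inputs right by one token for teacher forcing:

    import tensorflow as tf  # TF 1.x, where tf.contrib.seq2seq lives

    # Toy dimensions -- all hypothetical.
    batch_size, src_len, tgt_len = 32, 10, 12
    vocab_size, embed_dim, num_units = 1000, 64, 128

    src_ids = tf.placeholder(tf.int32, [batch_size, src_len])
    tgt_ids = tf.placeholder(tf.int32, [batch_size, tgt_len])

    embedding = tf.get_variable("embedding", [vocab_size, embed_dim])
    src_emb = tf.nn.embedding_lookup(embedding, src_ids)
    tgt_emb = tf.nn.embedding_lookup(embedding, tgt_ids)

    # Encoder: a plain LSTM; its final state conditions the decoder.
    enc_cell = tf.contrib.rnn.LSTMCell(num_units)
    _, enc_state = tf.nn.dynamic_rnn(enc_cell, src_emb, dtype=tf.float32)

    # Decoder: wired together from the contrib building blocks.
    # (A real model feeds the target sequence shifted right by one.)
    helper = tf.contrib.seq2seq.TrainingHelper(
        tgt_emb, tf.fill([batch_size], tgt_len))
    decoder = tf.contrib.seq2seq.BasicDecoder(
        tf.contrib.rnn.LSTMCell(num_units), helper, initial_state=enc_state)
    outputs = tf.contrib.seq2seq.dynamic_decode(decoder)[0]

    # Project to the vocabulary and compute the per-timestep loss.
    logits = tf.layers.dense(outputs.rnn_output, vocab_size)
    loss = tf.contrib.seq2seq.sequence_loss(
        logits, tgt_ids, tf.ones([batch_size, tgt_len]))

google/seq2seq wraps this kind of graph assembly, plus input pipelines, attention, beam search, and checkpointing, behind YAML-configured training scripts, so you don't have to write it yourself.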
To someone not active in the TensorFlow community, it is really not obvious what this is for. What are typical use cases? Why does the world need this?
Encoder-decoder models are a very common architecture for sequence-to-sequence learning in deep learning. Lately they have produced big wins in NLP tasks such as machine translation, POS tagging, and dialogue generation.
Yes, these models can be applied to a lot of non-NLP tasks. For example, I've seen seq2seq models applied to medical record prediction, program generation, etc. Noise removal seems like a good candidate.
https://github.com/tensorflow/tensorflow/tree/master/tensorf...