Text summarization has been one of the most prominent problems in natural language processing and deep learning in recent years. In general, text summarization produces a short note on a large text document. Our main purpose is to create a short, fluent, and understandable abstractive summary of a text document. To build the summarizer, we used the Amazon Fine Food Reviews dataset, which is available on Kaggle. We took the review text descriptions as input data and generated a simple summary of each review description as output. To help produce a comprehensive summary, we used a bidirectional RNN with LSTMs in the encoding layer and an attention model in the decoding layer, and we applied a sequence-to-sequence model to generate short summaries of the food descriptions. Abstractive text summarization poses several challenges, such as text preprocessing, vocabulary counting, handling missing words, word embedding, improving model efficiency by reducing the loss value, and producing a fluent machine-generated summary. In this paper, the main goal was to increase the efficiency and reduce the training loss of the sequence-to-sequence model in order to build a better abstractive text summarizer. In our experiment, we successfully reduced the training loss to a value of 0.036, and our abstractive text summarizer is able to create a short English-language summary of English text.
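The attention model in the decoding layer can be illustrated with a minimal dot-product attention sketch. This is plain Python with toy vectors, not the paper's implementation: the actual model uses learned bidirectional LSTM states and trained alignment parameters, so the function names and the two-dimensional vectors here are illustrative assumptions only.

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of raw scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_context(decoder_state, encoder_states):
    # Dot-product attention: score each encoder hidden state against
    # the current decoder state, normalize the scores with softmax,
    # and return the weighted sum (the context vector) plus the weights.
    scores = [sum(d * h for d, h in zip(decoder_state, state))
              for state in encoder_states]
    weights = softmax(scores)
    dim = len(decoder_state)
    context = [sum(w * state[i] for w, state in zip(weights, encoder_states))
               for i in range(dim)]
    return context, weights

# Toy example: three encoder hidden states and one decoder query.
encoder_states = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
decoder_state = [1.0, 0.0]
context, weights = attention_context(decoder_state, encoder_states)
```

At each decoding step, the context vector is concatenated with the decoder state to predict the next summary word, so positions of the input review that align with the current output word receive the largest weights.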