Text summarization, the task of producing a concise summary of a text while preserving its overall meaning, is an important problem in Natural Language Processing (NLP). Automatic Bangla text summarization remains under-researched and under-resourced, which motivated this study. Given the scarcity of resources and tools for Bangla NLP, this research set out to explore and develop a robust abstractive summarization model: a deep learning system capable of generating concise summaries of Bangla news articles. The model uses a bidirectional LSTM-based encoder-decoder architecture trained on a curated dataset of Bangla articles and their corresponding summaries. Preprocessing steps included text cleaning, tokenization, and embedding with pre-trained word vectors. Evaluation was conducted using the BLEU and ROUGE metrics, with the model achieving competitive performance against extractive baselines. The results indicate the model's effectiveness in producing coherent summaries and highlight the potential of neural approaches for low-resource languages like Bangla. Future work involves experimenting with transformer-based models and multilingual pre-training to further improve summarization quality. Overall, this work demonstrates the viability of neural abstractive summarization for Bangla, supports the development of NLP applications for low-resource languages, and points to pre-trained multilingual models as a promising direction for future multilingual summarization research.
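To make the preprocessing steps concrete, the sketch below shows one plausible pipeline in Python: regex-based cleaning restricted to the Bangla Unicode block, whitespace tokenization, and construction of an embedding matrix from pre-trained word vectors. The cleaning rules, vector file format, and 300-dimensional size are illustrative assumptions, not the paper's exact configuration.

```python
# A minimal preprocessing sketch: cleaning, tokenization, and an
# embedding matrix built from pre-trained word vectors. All rules and
# sizes here are illustrative assumptions.
import re
import numpy as np

def clean(text: str) -> str:
    """Keep Bangla characters and the danda (।); collapse whitespace."""
    text = re.sub(r"[^\u0980-\u09FF।\s]", " ", text)  # Bangla Unicode block
    return re.sub(r"\s+", " ", text).strip()

def tokenize(text: str) -> list[str]:
    return clean(text).split()

def load_vectors(path: str, dim: int = 300) -> dict[str, np.ndarray]:
    """Read word vectors in the common text format: word v1 v2 ... vN."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            if len(parts) == dim + 1:
                vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def embedding_matrix(vocab: dict[str, int], vectors: dict, dim: int = 300) -> np.ndarray:
    """Rows of pre-trained vectors indexed by the model's vocabulary;
    words missing from the vector file get small random initializations."""
    matrix = np.random.normal(scale=0.1, size=(len(vocab), dim)).astype(np.float32)
    for word, idx in vocab.items():
        if word in vectors:
            matrix[idx] = vectors[word]
    return matrix
```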
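The following is a minimal sketch of a bidirectional LSTM encoder-decoder of the kind described above, written with Keras. The vocabulary size, embedding dimension, and hidden sizes are hypothetical placeholders rather than the paper's actual hyperparameters; the point is only to show how the forward and backward encoder states are concatenated to initialize the decoder.

```python
# A sketch of a BiLSTM encoder-decoder for abstractive summarization,
# assuming Keras. Sizes are illustrative placeholders.
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB_SIZE = 30000   # hypothetical Bangla vocabulary size
EMB_DIM = 300        # e.g., dimension of pre-trained word vectors
HIDDEN = 256

# Encoder: embed the article tokens and run a bidirectional LSTM.
enc_inputs = layers.Input(shape=(None,), name="article_tokens")
enc_emb = layers.Embedding(VOCAB_SIZE, EMB_DIM, mask_zero=True)(enc_inputs)
enc_out, fh, fc, bh, bc = layers.Bidirectional(
    layers.LSTM(HIDDEN, return_state=True))(enc_emb)
# Concatenate forward/backward states to initialize the decoder.
state_h = layers.Concatenate()([fh, bh])
state_c = layers.Concatenate()([fc, bc])

# Decoder: a unidirectional LSTM conditioned on the encoder states,
# predicting the summary one token at a time (teacher forcing).
dec_inputs = layers.Input(shape=(None,), name="summary_tokens")
dec_emb = layers.Embedding(VOCAB_SIZE, EMB_DIM, mask_zero=True)(dec_inputs)
dec_out, _, _ = layers.LSTM(
    2 * HIDDEN, return_sequences=True, return_state=True)(
    dec_emb, initial_state=[state_h, state_c])
dec_logits = layers.Dense(VOCAB_SIZE, activation="softmax")(dec_out)

model = Model([enc_inputs, dec_inputs], dec_logits)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

At inference time the decoder would be run autoregressively, feeding each predicted token back as the next input (e.g., with greedy or beam-search decoding), rather than with teacher forcing as during training.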
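Finally, a hedged sketch of how BLEU and ROUGE can be computed on whitespace-tokenized Bangla text. ROUGE-N is implemented by hand here because common off-the-shelf ROUGE tokenizers assume English; BLEU uses NLTK's sentence_bleu with smoothing, since short summaries often lack higher-order n-gram matches. The example sentence pair is hypothetical.

```python
# Metric computation for one (reference, prediction) pair on
# whitespace-tokenized Bangla text. The sentences are hypothetical.
from collections import Counter
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def ngrams(tokens: list[str], n: int) -> Counter:
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n_f1(reference: list[str], prediction: list[str], n: int = 1) -> float:
    """ROUGE-N F1: clipped n-gram overlap between reference and prediction."""
    ref, pred = ngrams(reference, n), ngrams(prediction, n)
    overlap = sum((ref & pred).values())
    if overlap == 0:
        return 0.0
    recall = overlap / sum(ref.values())
    precision = overlap / sum(pred.values())
    return 2 * precision * recall / (precision + recall)

reference = "সরকার নতুন শিক্ষানীতি ঘোষণা করেছে".split()
prediction = "সরকার নতুন শিক্ষানীতি ঘোষণা করল".split()

print("ROUGE-1 F1:", rouge_n_f1(reference, prediction, n=1))
print("ROUGE-2 F1:", rouge_n_f1(reference, prediction, n=2))
# Smoothed BLEU; sentence_bleu expects a list of reference token lists.
print("BLEU:", sentence_bleu([reference], prediction,
                             smoothing_function=SmoothingFunction().method1))
```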