This study explores Bangla tense classification using four recurrent neural network (RNN) architectures, GRU, LSTM, Bi-LSTM, and Bi-GRU, on a meticulously constructed and balanced dataset representing the Present, Future, and Past tenses. Text data was sourced from diverse platforms, including blogs, Facebook posts, magazines, and newspapers, to capture linguistic variability. Annotation by three human experts with majority voting ensured high-quality labels. After extensive preprocessing, the models were evaluated using precision, recall, F1-score, and accuracy. All architectures achieved over 95% accuracy, with GRU emerging as the most effective (96% accuracy) owing to its computational efficiency and sequential modelling capability. Bi-GRU demonstrated comparable performance, leveraging bidirectional processing for enhanced contextual understanding. These findings highlight GRU’s suitability for computationally constrained tasks and provide a framework for improving low-resource language processing. Future work could integrate advanced embeddings and attention mechanisms to enhance performance further.
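To make the architecture choice concrete, the following is a minimal NumPy sketch of the GRU recurrence underlying the best-performing model, followed by a softmax projection of the final hidden state onto the three tense classes. All dimensions, weights, and inputs here are illustrative placeholders, not the paper's actual configuration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, params):
    """One GRU time step: update gate z, reset gate r, candidate state."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    z = sigmoid(x @ Wz + h @ Uz + bz)               # update gate
    r = sigmoid(x @ Wr + h @ Ur + br)               # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh + bh)   # candidate state
    return (1 - z) * h + z * h_tilde                # interpolated new state

rng = np.random.default_rng(0)
d_in, d_h = 8, 16  # illustrative embedding and hidden sizes
params = tuple(
    m for _ in range(3)  # one (W, U, b) triple per gate/candidate
    for m in (rng.normal(0, 0.1, (d_in, d_h)),
              rng.normal(0, 0.1, (d_h, d_h)),
              np.zeros(d_h))
)

# Run a toy sequence of 5 token embeddings through the GRU,
# then project the final hidden state to 3 tense classes.
h = np.zeros(d_h)
for x in rng.normal(size=(5, d_in)):
    h = gru_step(x, h, params)

W_out = rng.normal(0, 0.1, (d_h, 3))
logits = h @ W_out
probs = np.exp(logits) / np.exp(logits).sum()  # softmax over Present/Future/Past
print(probs.shape)
```

In a trained classifier the weights would be learned end-to-end (e.g. with cross-entropy loss over the three tense labels); the sketch only shows why the GRU is lighter than an LSTM: it maintains a single hidden state and two gates rather than separate cell and hidden states with three gates.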