Multilabel text classification is a crucial task in natural language processing (NLP). However, unlike English, NLP research for Bengali, one of the most widely spoken languages in the world, is still in its infancy, and comparatively little work targets the language specifically. Addressing this gap is essential for effective information management and structured data handling. This research presents a novel human-annotated Bangla sentence dataset of 10,000 sentences, each categorized into one of five classes (assertive, interrogative, imperative, optative, or exclamatory) according to the function and purpose of the sentence in Bangla grammar. Six transformer-based pre-trained multilingual and monolingual models, namely BERT, ALBERT, RoBERTa, DistilBERT, XLNet, and BanglaBERT, were fine-tuned on this multilabeled dataset and evaluated on the sentence classification task. The results show that BanglaBERT outperformed all other models, achieving an accuracy of 99.03%. The study concludes that BanglaBERT delivers exceptional accuracy and precision, surpassing both the other BERT variants and popular machine learning models such as LSTM, RNN, SVC, DT, KNN, and RF on Bangla text classification. These findings indicate that BERT-based models have great potential for capturing complex language nuances and context, and are critical to advancing NLP tasks, especially in Bangla language processing.
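To make the fine-tuning setup concrete, the following is a minimal, self-contained sketch using the Hugging Face Transformers library. The checkpoint name (csebuetnlp/banglabert), the toy Bangla examples, and all hyperparameters (learning rate, sequence length, batch contents) are illustrative assumptions for exposition, not the paper's exact configuration.

```python
# Sketch: fine-tuning a pre-trained BanglaBERT checkpoint for 5-way Bangla
# sentence-type classification. Checkpoint, data, and hyperparameters are
# assumptions; the paper's actual training configuration may differ.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

LABELS = ["assertive", "interrogative", "imperative", "optative", "exclamatory"]

tokenizer = AutoTokenizer.from_pretrained("csebuetnlp/banglabert")
model = AutoModelForSequenceClassification.from_pretrained(
    "csebuetnlp/banglabert",
    num_labels=len(LABELS),
    id2label=dict(enumerate(LABELS)),
)

# Toy labeled batch standing in for the 10,000-sentence annotated dataset.
sentences = ["আমি ভাত খাই।", "তুমি কোথায় যাচ্ছ?"]  # assertive, interrogative
labels = torch.tensor([0, 1])

batch = tokenizer(
    sentences, padding=True, truncation=True, max_length=128, return_tensors="pt"
)

# One gradient step: passing `labels` makes the model return a
# cross-entropy classification loss alongside the logits.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
outputs = model(**batch, labels=labels)
outputs.loss.backward()
optimizer.step()

# Inference: the argmax over the five logits is the predicted sentence type.
model.eval()
with torch.no_grad():
    logits = model(**batch).logits
print([LABELS[i] for i in logits.argmax(dim=-1).tolist()])
```

The other evaluated models (BERT, ALBERT, RoBERTa, DistilBERT, XLNet) follow the same pattern by swapping in the corresponding checkpoint name, which is what makes a uniform comparison across the six transformers straightforward.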