The CheckThat! Lab addresses the problem of disinformation. We participated in CheckThat! Lab Task 2, which focuses on the classification of subjectivity in news articles. The shared task provided datasets in six languages (Arabic, Dutch, English, German, Italian, and Turkish), as well as a multilingual dataset combining all six. We applied standard preprocessing to each of the six monolingual datasets and to the multilingual one, and fine-tuned a transformer-based pretrained model, XLM-RoBERTa large, for our official submission to CLEF Task 2. Our submissions ranked 1st, 1st, 2nd, 5th, 2nd, 2nd, and 3rd on the leaderboard for the multilingual, Arabic, Dutch, English, German, Italian, and Turkish datasets, respectively. In addition, we evaluated BERT and multilingual BERT (BERT-m) on the same subjectivity task. Our study showed that XLM-RoBERTa large outperformed both BERT and BERT-m on all performance measures for the datasets provided in the shared task.