Driver fatigue poses a critical threat to road safety, necessitating robust detection methods to reduce traffic accidents and their societal burden. Deep neural networks have recently been applied effectively to Electroencephalography (EEG)-based driving fatigue detection. Nevertheless, most existing models, particularly those relying on extensive pooling, struggle to capture long-range dependencies within the input images. To address this issue, we propose a Correlation-based Channel Selection (CCS) with Vision Transformer (ViT) approach for driver fatigue detection using EEG signals. Our methodology introduces a Channel Selection (CS) block that uses CCS to systematically identify the most informative EEG channels for fatigue detection. We then apply the Continuous Wavelet Transform (CWT) to convert each selected EEG channel into a time-frequency spectral image. Finally, the resulting images, which encompass both temporal and spectral information, are concatenated and fed into the ViT model for classification as either normal or fatigued. The proposed model is evaluated on a publicly available EEG dataset containing recordings from twelve subjects. It achieves an accuracy of 95.83% in the combined-subject setting with the selected channels and an average per-subject accuracy of 99.925%, demonstrating its potential for robust driver fatigue detection.
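The pipeline described above (channel selection, CWT, concatenation into a single image) can be sketched in NumPy as follows. This is an illustrative sketch only: the ranking criterion (mean absolute Pearson correlation with the other channels) and the hand-rolled Morlet wavelet are assumptions for demonstration, not the paper's exact CCS rule or CWT implementation.

```python
import numpy as np

def select_channels(eeg, k=4):
    """Correlation-based channel selection (CCS) sketch.

    Ranks channels by their mean absolute Pearson correlation with all
    other channels and keeps the top-k. (This criterion is an assumption;
    the paper's exact CCS rule may differ.)
    eeg: array of shape (n_channels, n_samples).
    """
    corr = np.corrcoef(eeg)              # (C, C) channel correlation matrix
    np.fill_diagonal(corr, 0.0)          # ignore self-correlation
    scores = np.abs(corr).mean(axis=1)   # mean |r| per channel
    return np.sort(np.argsort(scores)[::-1][:k])

def morlet_cwt(x, scales, w0=6.0):
    """Minimal continuous wavelet transform with a Morlet wavelet.

    Returns a (len(scales), len(x)) time-frequency magnitude map,
    i.e. one spectral image per channel.
    """
    out = np.empty((len(scales), len(x)))
    for i, s in enumerate(scales):
        t = np.arange(-4 * s, 4 * s + 1)
        wavelet = np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2)
        wavelet /= np.sqrt(s)            # approximate scale normalization
        out[i] = np.abs(np.convolve(x, wavelet, mode="same"))
    return out

# Toy usage: 8 synthetic channels, 2 s at 200 Hz.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 400))
picked = select_channels(eeg, k=4)
spectra = [morlet_cwt(eeg[c], scales=np.arange(2, 32)) for c in picked]
image = np.concatenate(spectra, axis=0)  # stacked time-frequency image
print(image.shape)                       # (4 channels x 30 scales, 400 samples)
```

In a real setting, the concatenated `image` would then be resized into fixed patches and fed to a ViT classifier; that stage is omitted here since it depends on the chosen ViT implementation.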