The progress of Brain-Computer Interface (BCI) technology is at the forefront of modern innovation, offering transformative potential across a broad spectrum of applications. A vital component of this progress is the identification of Motor Imagery (MI) patterns in electroencephalogram (EEG) signals, which play a crucial role in assisting individuals with disabilities, enabling device and environmental control, and augmenting human capabilities. Despite this promise, limited accuracy in decoding brain signals has hindered the broader adoption of BCIs in practical settings. This paper presents a model that integrates the fast Fourier transform with multi-head attention mechanisms to classify EEG-based MI, distinguishing movements of the left and right hands. Notably, the model requires minimal preprocessing and does not rely on artifact removal techniques. It has been assessed on three benchmark datasets (BCI Competition IV-2a, IV-2b, and III-IIIb), achieving accuracies of 75.67%, 76.15%, and 74.21%, respectively, under Leave-One-Subject-Out (LOSO) cross-validation. These findings underscore the model’s robust performance and its potential to advance the state of MI classification.
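The two ingredients named above, spectral features from the fast Fourier transform followed by attention over channels, can be illustrated with a minimal numpy sketch. This is not the paper's architecture (the abstract specifies no layer sizes, channel counts, or head counts); the trial shape, the 64 retained frequency bins, the single attention head, and the linear readout are all assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy EEG trial: 22 channels x 1000 time samples (assumed shape,
# chosen to resemble BCI Competition IV-2a recordings).
eeg = rng.standard_normal((22, 1000))

# 1) Spectral features via the FFT: magnitude spectrum per channel,
#    keeping the first 64 frequency bins (an arbitrary illustrative cut).
spec = np.abs(np.fft.rfft(eeg, axis=1))[:, :64]    # shape (22, 64)

# 2) Scaled dot-product attention over channels. A multi-head version
#    would run several such heads with separate projections and
#    concatenate their outputs; one head keeps the sketch short.
d = spec.shape[1]
Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
Q, K, V = spec @ Wq, spec @ Wk, spec @ Wv
scores = Q @ K.T / np.sqrt(d)
attn = np.exp(scores - scores.max(axis=1, keepdims=True))
attn /= attn.sum(axis=1, keepdims=True)            # rows sum to 1
context = attn @ V                                  # attended features (22, 64)

# 3) Linear readout to two classes (left vs. right hand) with softmax;
#    weights here are random, standing in for trained parameters.
w_out = rng.standard_normal((d, 2)) / np.sqrt(d)
logits = context.mean(axis=0) @ w_out
probs = np.exp(logits - logits.max())
probs /= probs.sum()                                # class probabilities (2,)
```

In a trained model the projection and readout matrices would be learned end-to-end; the sketch only shows how the FFT features and the attention step compose.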