Breast cancer is one of the leading causes of cancer-related morbidity worldwide, underscoring the need for advanced diagnostic tools that improve early detection and treatment outcomes. This study introduces MammoSegNet, a novel convolutional neural network architecture optimized for precise segmentation of mammographic images. MammoSegNet combines Inception-ResNet blocks, Squeeze-and-Excitation (SE) modules, and dilated convolutions to enable multi-scale feature extraction and efficient attention refinement while maintaining low computational complexity. Its performance was rigorously evaluated on the BCDR-D01 and INbreast datasets to assess robustness and generalization: the model was trained on BCDR-D01 with stratified fivefold cross-validation and tested on the unseen INbreast dataset via Monte Carlo cross-validation. Preprocessing included region-of-interest (ROI) isolation to concentrate on relevant areas, normalization to standardize pixel intensities, and data augmentation to expand the dataset and improve robustness. In addition, a specialized image enhancement method, peak feature intensity transformation (PFIT), was designed to amplify diagnostic features while preserving structural integrity. Comparative evaluations confirmed MammoSegNet's superior performance across metrics, achieving 97% accuracy on BCDR-D01 and 95% on INbreast. Statistical t-tests validated these improvements, and visual heatmaps demonstrated the model's effectiveness in isolating tumor regions. These findings establish MammoSegNet as a promising tool for improving the accuracy and reliability of breast cancer diagnosis in clinical applications.
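The normalization step mentioned above can be illustrated with a minimal sketch. The abstract does not specify the exact scheme, so this assumes a common min-max rescaling of pixel intensities to [0, 1]; the function name and epsilon guard are illustrative, not from the paper.

```python
import numpy as np

def normalize_intensities(img, eps=1e-8):
    """Min-max normalize pixel intensities to [0, 1].

    A common choice for standardizing mammograms before training;
    the paper's exact normalization scheme is not given in the abstract.
    """
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    # eps guards against division by zero on constant images
    return (img - lo) / (hi - lo + eps)
```

Applied per image (or per ROI), this maps the darkest pixel to 0 and the brightest to (approximately) 1, so downstream layers see a consistent intensity range across scans.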
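The SE modules referenced in the architecture apply channel-wise attention in two steps: a "squeeze" (global average pooling per channel) followed by an "excite" (a small bottleneck MLP producing sigmoid gates that rescale each channel). The sketch below shows the generic mechanism in plain NumPy with explicitly passed weights; it is an illustration of SE attention in general, not the authors' exact MammoSegNet configuration.

```python
import numpy as np

def squeeze_excite(x, w1, b1, w2, b2):
    """Generic Squeeze-and-Excitation gating on a feature map.

    x       : (N, C, H, W) feature map
    w1, b1  : bottleneck weights, C -> C // r (reduction ratio r)
    w2, b2  : expansion weights, C // r -> C
    Weights are passed explicitly for illustration; in a real network
    they would be learned layer parameters.
    """
    s = x.mean(axis=(2, 3))                        # squeeze: (N, C) channel stats
    h = np.maximum(s @ w1 + b1, 0.0)               # ReLU bottleneck
    g = 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))       # sigmoid gates in (0, 1)
    return x * g[:, :, None, None]                 # excite: rescale each channel
```

Because every gate lies in (0, 1), the module can only attenuate channels, letting the network emphasize diagnostically informative feature maps at negligible computational cost.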