Owing to the difficulty of brain tumor segmentation, this paper proposes a strategy for extracting brain tumors from three-dimensional Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) scans using a 3D U-Net architecture and ResNet50, followed by conventional classification methods. In this research, among the deep learning methods examined, ResNet50 achieved an accuracy of 98.96% and the 3D U-Net scored 97.99%, while a traditional Convolutional Neural Network (CNN) reached 97.90% accuracy on the 3D MRI data. In addition, an image fusion approach combines the multimodal images into a single fused image to extract more features from the medical images. Furthermore, the loss function was evaluated using several Dice-based metrics, and Dice results were obtained on specific test cases: the average of the Dice coefficient and soft Dice loss over three test cases was 0.0980, while for two test cases the sensitivity and specificity were 0.0211 and 0.5867, respectively, using patch-level predictions. A software integration pipeline was also built to deploy the trained model on a web server so that it can be accessed from the software system through a Representational State Transfer (REST) API. Finally, the proposed models were validated with the Area Under the Receiver Operating Characteristic Curve (AUC–ROC) and the confusion matrix, and compared with existing research articles to better understand the underlying problem. Through this comparative analysis, we extracted meaningful insights regarding brain tumor segmentation and identified potential gaps. The proposed model can be adapted for everyday and healthcare use to identify infected regions and brain cancer across various imaging modalities.
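For clarity on the segmentation metrics reported above, the following is a minimal sketch of a standard soft Dice loss for a 3D volume; it assumes a binary voxel mask and voxel-wise probabilities, and is not necessarily the exact variant used in the paper:

```python
import numpy as np

def soft_dice_loss(y_true, y_pred, epsilon=1e-6):
    """Soft Dice loss for a 3D segmentation volume.

    y_true, y_pred: arrays of shape (D, H, W); y_true is the binary
    ground-truth mask, y_pred holds voxel-wise probabilities in [0, 1].
    The epsilon term keeps the ratio stable when both masks are empty.
    """
    intersection = np.sum(y_true * y_pred)
    denominator = np.sum(y_true) + np.sum(y_pred)
    dice_coefficient = (2.0 * intersection + epsilon) / (denominator + epsilon)
    return 1.0 - dice_coefficient

# Example: a perfect prediction yields a Dice coefficient of ~1, so the loss is ~0.
mask = np.zeros((8, 8, 8), dtype=np.float32)
mask[2:6, 2:6, 2:6] = 1.0
print(soft_dice_loss(mask, mask))  # ~0.0
```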
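Likewise, the REST-based deployment mentioned above could resemble the sketch below, shown here only as an illustration assuming Flask, a saved Keras model file named "brain_tumor_3d_unet.h5", and a "/segment" endpoint; none of these names are taken from the paper:

```python
# Minimal sketch of serving a trained segmentation model over REST (assumed setup).
from flask import Flask, request, jsonify
import numpy as np
from tensorflow import keras

app = Flask(__name__)
model = keras.models.load_model("brain_tumor_3d_unet.h5")  # hypothetical file name

@app.route("/segment", methods=["POST"])
def segment():
    # Expect a flattened MRI patch plus its shape in the JSON body.
    payload = request.get_json()
    volume = np.array(payload["volume"], dtype=np.float32).reshape(payload["shape"])
    # Add batch and channel dimensions assumed by a 3D U-Net style model.
    prediction = model.predict(volume[np.newaxis, ..., np.newaxis])
    # Return a thresholded binary mask as nested lists.
    return jsonify({"mask": (prediction[0] > 0.5).astype(int).tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```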