
Advanced Breast Cancer Detection using Deep Learning from Mammogram and Histopathological Images
IEEE BASE PAPER TITLE:
Optimizing Breast Cancer Mammogram Classification Through a Dual Approach: A Deep Learning Framework Combining ResNet50, SMOTE, and Fully Connected Layers for Balanced and Imbalanced Data
IEEE BASE PAPER ABSTRACT:
Breast cancer is a global health concern where early and accurate diagnosis is crucial. Mammogram scans provide detailed imaging but require expert interpretation, which is time-consuming. While deep learning shows promise in medical image analysis, the prevalence of imbalanced datasets in medical diagnosis hinders the development of accurate and reliable classification models.
We propose a novel deep learning framework for breast cancer classification from mammogram scans. The framework addresses imbalanced data through a two-module pipeline built around the Synthetic Minority Over-sampling Technique (SMOTE): one module applies SMOTE to the entire dataset to balance the class distribution, while the second holds out 20% of the original imbalanced data for evaluation and applies SMOTE only to the remaining 80%.
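SMOTE's core idea, interpolating between a minority-class sample and one of its nearest minority-class neighbours, can be sketched without any library as follows. The feature dimensionality, neighbour count, and sample sizes below are illustrative; a production pipeline would typically apply imbalanced-learn's SMOTE to extracted feature vectors rather than this hand-rolled version.

```python
import numpy as np

def smote_oversample(X_min, n_new, k=5, rng=None):
    """Generate n_new synthetic minority samples by interpolating between
    each sample and one of its k nearest minority-class neighbours
    (the core idea of SMOTE, shown here without the imblearn library)."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    # pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None] - X_min[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # a sample is not its own neighbour
    nn = np.argsort(d, axis=1)[:, :k]           # k nearest neighbours per sample
    synth = []
    for _ in range(n_new):
        i = rng.integers(n)                     # pick a minority sample
        j = nn[i, rng.integers(min(k, n - 1))]  # and one of its neighbours
        lam = rng.random()                      # interpolation factor in [0, 1)
        synth.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.vstack(synth)

# toy imbalanced feature set: 10 minority vectors oversampled by 40 synthetic ones
X_min = np.random.default_rng(0).normal(size=(10, 4))
X_new = smote_oversample(X_min, n_new=40, rng=1)
print(X_new.shape)  # (40, 4)
```

Because each synthetic point lies on a segment between two real minority samples, the oversampled class stays inside the region the minority data already occupies.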
The framework incorporates a blockwise Convolutional Neural Network (CNN), using VGG16 preprocessing for input standardization and ResNet50 for feature extraction. A fully connected classification model, consisting of multiple dense layers with batch normalization and dropout for regularization, was developed to classify the extracted features.
The model architecture was iteratively refined to combat overfitting, with the final version incorporating three dense layers (128, 256, and 128 neurons) with dropout rates of 0.5. Our model achieved 99% accuracy on a balanced dataset and 90% on an imbalanced portion. The framework includes an interpretable visualization technique for randomly selected predictions across all classes.
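The dense classification head described above can be sketched in Keras roughly as follows. The input dimensionality (2,048, i.e. globally pooled ResNet50 features), the ReLU activations, the layer ordering, and the four-way softmax output are assumptions not stated in the abstract.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_classifier_head(n_features=2048, n_classes=4):
    """Sketch of the paper's fully connected head: three dense layers
    (128, 256, 128 neurons), each followed by batch normalization and
    0.5 dropout, on top of pooled ResNet50 features."""
    m = models.Sequential()
    m.add(layers.Input(shape=(n_features,)))
    for units in (128, 256, 128):
        m.add(layers.Dense(units, activation="relu"))
        m.add(layers.BatchNormalization())
        m.add(layers.Dropout(0.5))          # combats the overfitting noted above
    m.add(layers.Dense(n_classes, activation="softmax"))
    return m

head = build_classifier_head()
print(head.output_shape)  # (None, 4)
```

In practice this head would be compiled with a categorical cross-entropy loss and trained on the SMOTE-balanced feature vectors.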
Our approach significantly improves diagnostic accuracy in breast cancer classification from Mammogram scans, effectively addressing the challenge of imbalanced data in medical image analysis. This work contributes to medical image analysis and computer-aided diagnosis. The proposed techniques for handling imbalanced data and providing interpretable results can be extended to improve diagnostic accuracy across various medical conditions.
ALGORITHM / MODEL USED:
Xception, MobileNet, and DenseNet201 architectures.
OUR PROPOSED PROJECT ABSTRACT:
Breast cancer remains one of the leading causes of mortality among women worldwide, necessitating efficient and accurate diagnostic tools. This project presents an advanced deep learning-based diagnostic framework that integrates two imaging modalities, mammograms and histopathological images, for enhanced breast cancer detection. The system is built with Python as the backend language, Flask as the web framework, and HTML, CSS, and JavaScript for the frontend interface.
For the mammogram analysis, two separate convolutional neural network architectures, Xception and MobileNet, were employed. The mammogram dataset comprises 3,044 training images classified into four BI-RADS categories: BI-RADS 1 (1,865 images), BI-RADS 3 (387), BI-RADS 4 (408), and BI-RADS 5 (384). The Xception model achieved a training accuracy of 96% and a test accuracy of 90%, while the MobileNet model demonstrated superior generalization with a training accuracy of 96% and a test accuracy of 94%.
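A quick sanity check confirms that the stated per-class counts sum to 3,044 and quantifies the imbalance (BI-RADS 1 alone is over 61% of the training data), which is what motivates class-aware evaluation metrics later on:

```python
# Class counts exactly as stated for the mammogram training set.
counts = {"BI-RADS 1": 1865, "BI-RADS 3": 387, "BI-RADS 4": 408, "BI-RADS 5": 384}

total = sum(counts.values())
shares = {k: round(100 * v / total, 1) for k, v in counts.items()}
imbalance = max(counts.values()) / min(counts.values())

print(total)                # 3044
print(shares)               # BI-RADS 1 dominates at ~61.3%
print(round(imbalance, 1))  # ~4.9 : 1 majority-to-minority ratio
```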
In parallel, the histopathological image classification module utilizes the DenseNet201 architecture to distinguish between benign and malignant tissue samples using the BreakHis 400X dataset. This dataset includes 1,693 high-resolution biopsy images, with 371 benign and 777 malignant images used for training and 176 benign and 369 malignant images for testing. The DenseNet201 model achieved a training accuracy of 98% and a validation accuracy of 88%.
The system provides a comprehensive performance analysis based on Accuracy, Precision, Recall, F1-Score, and Confusion Matrix. MobileNet recorded an accuracy of 94.3%, precision of 81.7%, recall of 94.3%, and F1-score of 94.3%, outperforming the Xception model, which achieved an accuracy of 90.7%, precision of 72.3%, recall of 90.7%, and F1-score of 90.7%. For the histopathological analysis, DenseNet201 achieved an accuracy of 88.1%, precision of 91.3%, recall of 88.1%, and F1-score of 88.1%. Additionally, class distribution statistics are visualized to illustrate the dataset balance and support interpretability.
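The reported pattern in which recall and F1-score equal accuracy is characteristic of weighted averaging over classes: weighted recall is mathematically identical to accuracy, since it sums per-class true positives over the total count. A minimal scikit-learn sketch, using synthetic labels because the project's real predictions are not available here:

```python
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, confusion_matrix)

# Toy 4-class predictions: ~90% of labels kept, the rest replaced at random.
# The actual averaging scheme used by the project is an assumption.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 4, size=200)
y_pred = np.where(rng.random(200) < 0.9, y_true, rng.integers(0, 4, size=200))

acc = accuracy_score(y_true, y_pred)
prec = precision_score(y_true, y_pred, average="weighted", zero_division=0)
rec = recall_score(y_true, y_pred, average="weighted")   # equals accuracy by construction
f1 = f1_score(y_true, y_pred, average="weighted")
cm = confusion_matrix(y_true, y_pred)

print(f"acc={acc:.3f} prec={prec:.3f} rec={rec:.3f} f1={f1:.3f}")
print(cm.shape)  # (4, 4)
```

With `average="weighted"`, each class's score is weighted by its support, so the dominant BI-RADS 1 class pulls the averages toward its own per-class performance.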
By combining radiological and microscopic imaging modalities, this project offers a robust and intelligent diagnostic solution for breast cancer, enhancing early detection and assisting clinical decision-making with improved accuracy and reliability.
SYSTEM REQUIREMENTS:
HARDWARE REQUIREMENTS:
- System : Intel Core i3 processor.
- Hard Disk : 20 GB.
- Monitor : 15’’ LED.
- Input Devices : Keyboard, Mouse.
- Ram : 8 GB.
SOFTWARE REQUIREMENTS:
- Operating System : Windows 10 / 11.
- Coding Language : Python 3.12.0.
- Web Framework : Flask.
- Frontend : HTML, CSS, JavaScript.
REFERENCE:
Abdullah Fahad A. Alshamrani and Faisal Saleh Zuhair Alshomrani, “Optimizing Breast Cancer Mammogram Classification Through a Dual Approach: A Deep Learning Framework Combining ResNet50, SMOTE, and Fully Connected Layers for Balanced and Imbalanced Data,” IEEE Access, vol. 13, 2025.
Frequently Asked Questions (FAQs) & Answers:
1. What is the main objective of this project?
The primary objective is to develop an AI-powered diagnostic system that uses deep learning to detect breast cancer from both mammogram and histopathological images. This dual-modality approach enhances the reliability and accuracy of diagnosis by combining radiological and tissue-level analysis.
2. What are the two types of images used in this project?
The project uses:
- Mammogram images, classified under BI-RADS categories (BI-RADS 1, 3, 4, and 5), for early screening.
- Histopathological images from biopsy slides, classified as benign or malignant, for confirmatory diagnosis.
3. Which deep learning models are used in the project?
Three models are used:
- Xception for mammogram classification
- MobileNet for mammogram classification
- DenseNet201 for histopathological image classification
4. What is the size and structure of the dataset used?
- Mammogram dataset: 3,044 training images across four classes (BI-RADS 1, 3, 4, and 5)
- Histopathology dataset: 1,693 images (benign and malignant) from the BreakHis 400X dataset
5. What is the architecture of the system?
The system follows a dual-module architecture:
- One branch processes mammograms using the Xception and MobileNet models.
- Another branch processes histopathological images using DenseNet201.
- Both branches operate independently and provide separate predictions via a unified web interface.
6. What technologies are used to build this project?
- Backend: Python, TensorFlow, Keras
- Frontend: HTML, CSS, JavaScript
- Web framework: Flask
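A minimal sketch of how such a Flask backend might expose the two prediction branches. The route names and the stub predictor are illustrative assumptions; a real deployment would load the trained MobileNet and DenseNet201 models at startup instead of the stub.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

# Stub standing in for the trained Keras models (hypothetical helper;
# the real app would run the uploaded image through the chosen network).
def predict_stub(image_bytes, modality):
    labels = {"mammogram": "BI-RADS 1", "histopathology": "benign"}
    return {"modality": modality, "prediction": labels[modality]}

@app.route("/predict/<modality>", methods=["POST"])
def predict(modality):
    # each imaging modality is served by its own independent branch
    if modality not in ("mammogram", "histopathology"):
        return jsonify(error="unknown modality"), 404
    file = request.files.get("image")
    if file is None:
        return jsonify(error="no image uploaded"), 400
    return jsonify(predict_stub(file.read(), modality))
```

The frontend would POST the uploaded file to `/predict/mammogram` or `/predict/histopathology` and render the returned JSON alongside the stored performance metrics.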
7. What are the performance results of the models?
- Xception: 96% training accuracy, 90% test accuracy
- MobileNet: 96% training accuracy, 94% test accuracy
- DenseNet201: 98% training accuracy, 88% validation accuracy
Evaluation metrics include accuracy, precision, recall, F1-score, and the confusion matrix.
8. Is the system real-time and user-friendly?
Yes. The system features a Flask-based web application where users can upload images and receive predictions along with performance metrics in real time. The interface is intuitive and responsive across devices.
9. What makes this project different from the base IEEE paper?
While the base paper focused only on mammogram classification using ResNet50 and SMOTE, this project expands the functionality by:
- Incorporating both mammogram and histopathological data
- Using multiple deep learning models (Xception, MobileNet, DenseNet201)
- Delivering predictions through a complete web interface
10. Is the system suitable for clinical or research applications?
Yes. While the system is designed as a research prototype, it can be adapted for clinical decision support, telemedicine platforms, or academic use. It is also scalable and can be extended to other types of cancer detection with relevant datasets.



