
AI-Powered Alzheimer’s Disease Detection using Deep Learning
IEEE BASE PAPER TITLE:
Design of a CNN–Swin Transformer Model for Alzheimer’s Disease Prediction Using MRI Images
IEEE BASE PAPER ABSTRACT:
Alzheimer’s Disease (AD) is a progressive neurological condition that deteriorates memory, cognition, and behavior, especially in older adults. Timely identification is essential to enhance patient outcomes and inform therapy choices. This research introduces an extensive deep learning model for multiclass Alzheimer’s disease stage classification using structural MRI images from the publicly accessible OASIS dataset. The procedure begins with thorough preprocessing, including skull stripping, axial slice extraction, and intensity normalization to guarantee uniform input quality. A Deep Convolutional Generative Adversarial Network (DCGAN) is used to produce realistic synthetic MRI slices, thereby addressing data imbalance and enhancing class representation and training stability. The proposed EffSwin-XNet model is a novel hybrid deep learning framework that strategically fuses EfficientNet-B0 and the Swin Transformer, enabling both local and global feature extraction from MRI brain images; a feature-fusion attention mechanism then adaptively emphasizes discriminative features, representing a significant advancement over conventional convolutional neural networks. Grad-CAM is used for explainability to visualize the brain regions influencing each classification decision, thereby increasing clinical confidence. The model attains a classification accuracy of 95.3%, surpassing traditional CNN and hybrid benchmarks. This study presents an enhanced, interpretable, and efficient method for stage-wise categorization of Alzheimer’s Disease, demonstrating significant potential for use in clinical decision-support systems for early classification and intervention.
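Grad-CAM itself is a general technique; the snippet below is a minimal Keras-style sketch of how such a heatmap is commonly computed for a trained single-output classifier. The `last_conv_layer_name` argument is a placeholder for the backbone's final convolutional layer, and this is not the base paper's implementation.

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, last_conv_layer_name, class_index=None):
    """Return a Grad-CAM heatmap for one preprocessed image of shape (H, W, C)."""
    # Model that maps the input to (last conv feature maps, predictions).
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(last_conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        if class_index is None:
            class_index = int(tf.argmax(preds[0]))   # explain the top prediction
        class_score = preds[:, class_index]
    # Gradient of the class score w.r.t. the convolutional feature maps.
    grads = tape.gradient(class_score, conv_out)
    # Channel importance weights: global average of the gradients.
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))
    # Weighted sum of feature maps, then ReLU and normalization to [0, 1].
    cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
    cam = cam / (tf.reduce_max(cam) + 1e-8)
    return cam.numpy()   # upscale and overlay on the MRI slice for display
```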
ALGORITHM / MODEL USED:
MobileNet, InceptionV3.
OUR PROPOSED PROJECT ABSTRACT:
Alzheimer’s Disease (AD) is a progressive neurodegenerative disorder that severely impacts memory, cognition, and daily functioning. Early detection and classification of AD stages are crucial for effective treatment planning and patient care. This project, titled “AI-Powered Alzheimer’s Disease Detection using Deep Learning”, presents a web-based system developed using Python as the backend language, Flask as the web framework, and HTML, CSS, and JavaScript for the frontend interface. The system leverages advanced deep learning models to analyze MRI brain scan images for automated classification into four stages of Alzheimer’s: Mild Demented, Moderate Demented, Very Mild Demented, and Non Demented.
The dataset consists of MRI images organized in a 70-15-15 split across training, validation, and testing sets. To enhance model generalization and reduce overfitting, data augmentation techniques such as rotation, flipping, and contrast adjustments were applied, emphasizing the importance of augmentation in medical imaging tasks. The training set includes 6272 Mild Demented, 4524 Moderate Demented, 6720 Non Demented, and 6272 Very Mild Demented images, while the test and validation sets contain proportionate distributions of the four classes.
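Concretely, an augmentation pipeline of this kind can be sketched with TensorFlow/Keras preprocessing layers as below; the directory name, image size, and layer settings are illustrative assumptions rather than the project's exact configuration.

```python
import tensorflow as tf

IMG_SIZE = (224, 224)   # assumed input size; MobileNet accepts 224x224 inputs
BATCH = 32

# Load the training split; each class folder name becomes a label.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/train", image_size=IMG_SIZE, batch_size=BATCH)

# Rotation, flipping, and contrast adjustment, applied only during training.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomRotation(0.05),      # small random rotations
    tf.keras.layers.RandomFlip("horizontal"),  # left-right flips
    tf.keras.layers.RandomContrast(0.2),       # random contrast adjustment
])

train_ds = train_ds.map(
    lambda x, y: (augment(x, training=True), y),
    num_parallel_calls=tf.data.AUTOTUNE,
).prefetch(tf.data.AUTOTUNE)
```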
Two deep learning architectures were employed and evaluated: MobileNet and InceptionV3. MobileNet achieved a training accuracy of 99% and a test accuracy of 98.35%, while InceptionV3 attained a training accuracy of 99.48% and a test accuracy of 94.25%. Experimental results demonstrate that MobileNet outperformed InceptionV3 in terms of test accuracy, making it a reliable choice for real-time AD detection.
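A hedged sketch of how such a transfer-learning classifier can be assembled with tf.keras.applications is shown below; the frozen backbone, classification head, and hyperparameters are assumptions for illustration, not the project's exact training setup.

```python
import tensorflow as tf

NUM_CLASSES = 4  # Mild, Moderate, Very Mild, Non Demented

def build_classifier(backbone_name="mobilenet", input_shape=(224, 224, 3)):
    """Build an ImageNet-pretrained backbone with a small classification head."""
    if backbone_name == "mobilenet":
        base = tf.keras.applications.MobileNet(
            include_top=False, weights="imagenet", input_shape=input_shape)
        preprocess = tf.keras.applications.mobilenet.preprocess_input
    else:  # "inceptionv3"
        base = tf.keras.applications.InceptionV3(
            include_top=False, weights="imagenet", input_shape=input_shape)
        preprocess = tf.keras.applications.inception_v3.preprocess_input

    base.trainable = False  # train only the new head first

    inputs = tf.keras.Input(shape=input_shape)
    x = preprocess(inputs)
    x = base(x, training=False)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    x = tf.keras.layers.Dropout(0.3)(x)
    outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)

    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_classifier("mobilenet")
# model.fit(train_ds, validation_data=val_ds, epochs=20)  # datasets as in the earlier sketch
```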
The developed system provides a secure, user-friendly web interface where brain scans can be analyzed efficiently to predict Alzheimer’s stages with high accuracy. By integrating deep learning with medical imaging, this project contributes to the advancement of AI-assisted healthcare, supporting clinicians in early diagnosis and improving the prospects of timely interventions for patients.
SYSTEM REQUIREMENTS:
HARDWARE REQUIREMENTS:
- System : Intel Core i3 Processor.
- Hard Disk : 20 GB.
- Monitor : 15’’ LED.
- Input Devices : Keyboard, Mouse.
- RAM : 8 GB.
SOFTWARE REQUIREMENTS:
- Operating System : Windows 10 / 11.
- Coding Language : Python 3.12.0.
- Web Framework : Flask.
- Frontend : HTML, CSS, JavaScript.
REFERENCE:
Velu and N. Jaisankar, “Design of a CNN–Swin Transformer Model for Alzheimer’s Disease Prediction Using MRI Images,” IEEE Access, vol. 13, 2025.
Frequently Asked Questions (FAQs) and Answers
1. What is the main objective of this project?
The main objective of this project is to detect and classify Alzheimer’s Disease into four stages — Mild Demented, Moderate Demented, Very Mild Demented, and Non Demented — using MRI brain scan images with the help of deep learning models.
2. Which technologies are used to develop this system?
• Backend Programming Language: Python
• Web Framework: Flask
• Frontend Technologies: HTML, CSS, JavaScript, Bootstrap
• Deep Learning Models: MobileNet and InceptionV3
3. What dataset is used in the project?
The dataset consists of MRI brain scan images categorized into four classes: Mild Demented, Moderate Demented, Very Mild Demented, and Non Demented. The dataset is split into 70% training, 15% validation, and 15% testing subsets. Augmented images are included in the training set to improve model robustness.
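One common way to produce such a 70/15/15 split from a single folder-per-class dataset is the split-folders package; the sketch below assumes that layout, and the directory names are placeholders.

```python
# pip install split-folders
import splitfolders

# Input: dataset_raw/<class_name>/*.jpg  ->  Output: dataset/{train,val,test}/<class_name>/
splitfolders.ratio(
    "dataset_raw",            # placeholder path to the original class folders
    output="dataset",
    seed=42,                  # fixed seed so the split is reproducible
    ratio=(0.70, 0.15, 0.15), # train / validation / test proportions
)
```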
4. How many images are used for training, validation, and testing?
• Training Set: Mild Demented – 6272, Moderate Demented – 4524, Very Mild Demented – 6272, Non Demented – 6720
• Validation Set: Mild Demented – 1344, Moderate Demented – 969, Very Mild Demented – 1344, Non Demented – 1440
• Testing Set: Mild Demented – 1344, Moderate Demented – 971, Very Mild Demented – 1344, Non Demented – 1440
5. What is the role of data augmentation in this project?
Data augmentation techniques such as rotation, flipping, and contrast adjustment are applied to increase the dataset size artificially. This helps reduce overfitting, improves generalization, and strengthens the reliability of the deep learning models.
6. What are the results achieved by the models?
• MobileNet: Training Accuracy – 99%, Test Accuracy – 98.35%
• InceptionV3: Training Accuracy – 99.48%, Test Accuracy – 94.25%
7. How does the system work?
1. User uploads an MRI brain scan through the web interface.
2. The image is pre-processed (resized and normalized).
3. The selected model (MobileNet or InceptionV3) predicts the class.
4. The result is displayed along with a description of the predicted Alzheimer’s stage.
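A minimal Flask sketch of this upload, preprocess, predict, and display flow is given below; the saved model filename, upload field name, class ordering, and index.html template are assumptions for illustration rather than the project's actual code.

```python
import numpy as np
import tensorflow as tf
from flask import Flask, render_template, request
from PIL import Image

app = Flask(__name__)
model = tf.keras.models.load_model("mobilenet_ad.h5")   # assumed saved model file
# Class order must match the label order used during training (alphabetical here).
CLASSES = ["Mild Demented", "Moderate Demented", "Non Demented", "Very Mild Demented"]

def preprocess(file_storage, size=(224, 224)):
    """Resize the uploaded scan and scale pixel values to [0, 1]."""
    img = Image.open(file_storage).convert("RGB").resize(size)
    arr = np.asarray(img, dtype=np.float32) / 255.0
    return arr[np.newaxis, ...]            # add the batch dimension

@app.route("/", methods=["GET", "POST"])
def predict():
    result = None
    if request.method == "POST":
        scan = request.files["mri_scan"]   # hypothetical name of the upload field
        probs = model.predict(preprocess(scan))[0]
        result = CLASSES[int(np.argmax(probs))]
    return render_template("index.html", result=result)

if __name__ == "__main__":
    app.run(debug=True)
```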
8. Can this system be used in real-time medical practice?
The system is designed as a decision-support tool. While it shows high accuracy, it should be used to assist doctors and healthcare professionals rather than as a standalone diagnostic system.
9. Why were MobileNet and InceptionV3 chosen?
• MobileNet is lightweight, fast, and efficient for real-time applications.
• InceptionV3 is deeper, capable of extracting detailed features, and widely used in medical image classification.
10. Who can use this system?
• Healthcare professionals for diagnostic support.
• Researchers working on medical imaging and AI in healthcare.
• Students for academic and project purposes.
11. Is the user interface accessible to non-technical users?
Yes. The frontend is designed with a simple and intuitive interface that allows users to upload an MRI scan and view the prediction results with descriptive details.