An Efficient Deep Convolutional Neural Network Approach for Multiclass Skin Cancer Classification
Volume: 14 - Issue: 02 - Date: 01-02-2025
Approved ISSN: 2278-1412
Published Id: IJAECESTU444 | Page No.: 101-107
Author: Cholke Dnyaneshwar Ramdas
Co-Author: Dr. Tripti Arjariya
Abstract:- Skin cancer is one of the most prevalent and life-threatening diseases, making early and accurate
diagnosis crucial for effective treatment. In this study, we propose an efficient deep convolutional neural
network (DCNN)-based approach for the multiclass classification of skin cancer using dermoscopic images.
The model is trained and evaluated on the HAM10000 dataset, which contains a diverse set of skin lesion
images. The proposed DCNN architecture is optimized to enhance feature extraction and classification
performance by leveraging deep learning techniques, ensuring robustness against variations in lesion
appearance, size, and texture.
To assess the effectiveness of the proposed model, we conduct a comparative analysis against state-of-the-art
deep learning architectures, including VGG16, VGG19, DenseNet121, DenseNet201, and MobileNetV2.
Performance metrics such as accuracy, precision, recall, F1-score, specificity, and area under the curve
(AUC) are used for evaluation. The experimental results demonstrate that the proposed DCNN model
outperforms existing transfer learning-based models, achieving superior classification accuracy and
robustness.
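The per-class metrics named above can be computed directly from a multiclass confusion matrix. The sketch below is illustrative only, not the authors' code; the 3-class confusion matrix is a made-up toy example, and a real evaluation of the HAM10000 classes would use a 7x7 matrix.

```python
# Illustrative sketch: per-class precision, recall, and F1 from a
# multiclass confusion matrix (not the authors' evaluation code).

def per_class_metrics(cm):
    """cm[i][j] = number of samples with true class i predicted as class j."""
    n = len(cm)
    metrics = {}
    for c in range(n):
        tp = cm[c][c]
        fp = sum(cm[r][c] for r in range(n)) - tp  # predicted c, true class differs
        fn = sum(cm[c]) - tp                       # true class c, predicted differently
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if (precision + recall) else 0.0)
        metrics[c] = (precision, recall, f1)
    return metrics

# Toy 3-class confusion matrix (hypothetical counts)
cm = [[50, 2, 3],
      [4, 45, 1],
      [2, 3, 40]]
print(per_class_metrics(cm))
```

Macro-averaging these per-class values (and computing specificity from the true negatives analogously) yields the aggregate scores typically reported alongside accuracy and AUC.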
This research contributes to the advancement of automated skin cancer detection by providing a reliable,
efficient, and scalable deep learning-based diagnostic tool. Future work will focus on further improving
model generalization through advanced augmentation techniques, real-time deployment in clinical settings,
and integration with telemedicine platforms to facilitate early skin cancer detection.
Key Words:- Skin Cancer Classification, Deep Convolutional Neural Network, HAM10000 Dataset, Multiclass Classification, Computer-Aided Diagnosis, Deep Learning.
Area:-Engineering