Advancing U.S. Healthcare with LLM–Diffusion Hybrid Models for Synthetic Skin Image Generation and Dermatological AI
Abstract
The integration of large language models (LLMs) with diffusion-based generative architectures has redefined the boundaries of medical image synthesis, particularly in dermatological diagnostics. This study presents a novel hybrid model for synthetic skin image generation that leverages the textual understanding of LLMs and the generative precision of diffusion models. The dataset was derived from the UCI Skin Segmentation Dataset, consisting of high-resolution dermal samples categorized into skin and non-skin classes. Following extensive preprocessing and feature extraction, semantic conditioning through LLMs was applied to guide the diffusion process, yielding highly realistic and clinically relevant synthetic skin images. Experimental results demonstrate superior performance compared to traditional GANs and autoencoder-based models, achieving a Structural Similarity Index (SSIM) of 0.982, a peak signal-to-noise ratio (PSNR) of 38.7 dB, and a Fréchet Inception Distance (FID) of 5.43, indicating exceptional image fidelity and diversity. The proposed model also facilitates data augmentation for machine learning models in dermatology, improving classification accuracy by 7.5% on average. Beyond its academic relevance, this hybrid architecture holds considerable potential for U.S. healthcare applications: enabling scalable skin disease datasets, supporting dermatological AI training, and improving diagnostic precision in rural and underserved communities.
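For context on the fidelity figures reported above: PSNR measures, in decibels, how closely a synthetic image matches a reference, with higher values indicating lower pixel-wise distortion. The sketch below is a minimal pure-Python illustration of the metric only; the function names and pixel values are hypothetical and are not the study's implementation or data.

```python
import math

def mse(img_a, img_b):
    """Mean squared error between two equally sized grayscale images
    (nested lists of pixel intensities)."""
    total, count = 0.0, 0
    for row_a, row_b in zip(img_a, img_b):
        for pa, pb in zip(row_a, row_b):
            total += (pa - pb) ** 2
            count += 1
    return total / count

def psnr(img_a, img_b, max_val=255.0):
    """Peak signal-to-noise ratio in decibels; higher means the synthetic
    image is closer to the reference. Identical images give infinity."""
    err = mse(img_a, img_b)
    if err == 0:
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / err)

# Toy 2x2 reference vs. synthetic patch (illustrative values only)
ref = [[120, 130], [140, 150]]
syn = [[121, 129], [141, 151]]
print(round(psnr(ref, syn), 1))  # every pixel off by 1 -> MSE of 1
```

SSIM and FID follow the same pattern of comparing synthetic output against real references, but SSIM additionally models local luminance, contrast, and structure, while FID compares feature distributions of whole image sets rather than individual image pairs.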
Article information
Journal
Journal of Medical and Health Studies
Volume (Issue)
6 (5)
Pages
83-90
Published
Copyright
Open access

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
