Unleashing Customization in GANs through Delineation-Guided Image Synthesis
Abstract
Interacting with AI systems through text alone can be challenging, especially when conveying complex visual concepts. This paper presents an AI system that leverages a multi-GAN framework, integrating specialized Generative Adversarial Networks (GANs) such as Pix2Pix, SketchGAN, DCGAN, and ESRGAN, to interpret user sketches and generate high-fidelity visual content from them. By arranging these GANs in a sequential pipeline, the system improves image-synthesis quality through targeted stages, from sketch refinement to high-resolution enhancement. This staged approach also supports real-time, interactive image editing, letting users communicate with the AI more intuitively than through text alone. As a rapid and precise visualization tool, the system streamlines design workflows in industries such as architecture and fashion, while advancing AI towards more sophisticated, human-like intelligence that fosters creativity and production efficiency.
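The sequential pipeline described above can be sketched as a chain of stages, each consuming the previous stage's output. The following is a minimal illustrative sketch only: the stage functions are stubs standing in for the trained networks, and their names, signatures, and the dictionary-based "image" representation are assumptions for illustration, not the authors' actual implementation or any library API.

```python
# Hypothetical sketch of the sequential multi-GAN pipeline.
# Each stub stands in for a trained network; a real system would
# load model weights and operate on image tensors instead.

def refine_sketch(sketch):
    # SketchGAN stage (assumed role): clean up stroke noise in the user's sketch.
    return {"stage": "refined", "data": sketch}

def translate_to_image(refined):
    # Pix2Pix stage (assumed role): translate the refined sketch into an image.
    return {"stage": "translated", "data": refined["data"]}

def synthesize_details(image):
    # DCGAN stage (assumed role): add texture and detail to the translation.
    return {"stage": "detailed", "data": image["data"]}

def upscale(image):
    # ESRGAN stage (assumed role): super-resolve to the final high-resolution output.
    return {"stage": "upscaled", "data": image["data"]}

def pipeline(sketch):
    # Stages run strictly in sequence, each feeding the next.
    result = refine_sketch(sketch)
    result = translate_to_image(result)
    result = synthesize_details(result)
    return upscale(result)

out = pipeline("user_sketch")
print(out["stage"])  # upscaled
```

The design choice this illustrates is modularity: because each network handles one targeted sub-task, a stage can be retrained or swapped independently without disturbing the rest of the pipeline.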