This project implements a Pix2Pix conditional GAN that generates realistic images from edge outlines, using object masks from the COCO 2017 dataset. The model learns to translate binary edge maps into colored, high-resolution images of dogs and other objects, using a U-Net-based generator and a PatchGAN discriminator.
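A minimal sketch of the U-Net-based generator described above, assuming TensorFlow/Keras. The depth, filter counts, and 64x64 resolution are illustrative placeholders, not the project's actual configuration; the real model would use more down/up blocks at higher resolution.

```python
import tensorflow as tf


def downsample(filters, size, apply_batchnorm=True):
    # Encoder block: strided conv halves the spatial resolution.
    block = tf.keras.Sequential()
    block.add(tf.keras.layers.Conv2D(filters, size, strides=2,
                                     padding="same", use_bias=False))
    if apply_batchnorm:
        block.add(tf.keras.layers.BatchNormalization())
    block.add(tf.keras.layers.LeakyReLU())
    return block


def upsample(filters, size):
    # Decoder block: transposed conv doubles the spatial resolution.
    block = tf.keras.Sequential()
    block.add(tf.keras.layers.Conv2DTranspose(filters, size, strides=2,
                                              padding="same", use_bias=False))
    block.add(tf.keras.layers.BatchNormalization())
    block.add(tf.keras.layers.ReLU())
    return block


def build_generator(img_size=64):
    # Binary edge map in (1 channel), tanh-scaled RGB image out (3 channels).
    inputs = tf.keras.layers.Input(shape=(img_size, img_size, 1))
    down_stack = [downsample(64, 4, apply_batchnorm=False),
                  downsample(128, 4),
                  downsample(256, 4)]
    up_stack = [upsample(128, 4), upsample(64, 4)]

    x = inputs
    skips = []
    for down in down_stack:
        x = down(x)
        skips.append(x)

    # U-Net skip connections: concatenate matching encoder features.
    for up, skip in zip(up_stack, reversed(skips[:-1])):
        x = up(x)
        x = tf.keras.layers.Concatenate()([x, skip])

    x = tf.keras.layers.Conv2DTranspose(3, 4, strides=2, padding="same",
                                        activation="tanh")(x)
    return tf.keras.Model(inputs=inputs, outputs=x)


generator = build_generator()
fake = generator(tf.random.uniform((1, 64, 64, 1)), training=False)
print(fake.shape)  # (1, 64, 64, 3)
```

The skip connections let low-level edge structure flow directly to the decoder, which is what lets the generator keep outlines sharp while hallucinating color and texture.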
Generative_Image_Rotation: Using a Pix2Pix cGAN to transform randomly oriented protoplanetary disk images into standardized face-on views for astronomical research.
This project implements an image-to-image translation model using the Pix2Pix architecture in TensorFlow. The goal is to translate input images into realistic target images, trained on the TU-Graz dataset.
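In the Pix2Pix architecture, the generator is paired with a PatchGAN discriminator that scores overlapping image patches rather than the whole image, which pushes the generator toward locally realistic texture. A minimal Keras sketch, assuming a 1-channel conditioning input and 64x64 resolution (both illustrative, not the project's actual settings):

```python
import tensorflow as tf


def build_patchgan(img_size=64):
    # The conditional discriminator sees the input image and a real or
    # generated output concatenated along the channel axis.
    inp = tf.keras.layers.Input(shape=(img_size, img_size, 1))  # condition
    tar = tf.keras.layers.Input(shape=(img_size, img_size, 3))  # real/fake
    x = tf.keras.layers.Concatenate()([inp, tar])

    for filters in (64, 128, 256):
        x = tf.keras.layers.Conv2D(filters, 4, strides=2, padding="same")(x)
        x = tf.keras.layers.LeakyReLU(0.2)(x)

    # One logit per receptive-field patch instead of one per image.
    patch_logits = tf.keras.layers.Conv2D(1, 4, padding="same")(x)
    return tf.keras.Model(inputs=[inp, tar], outputs=patch_logits)


disc = build_patchgan()
scores = disc([tf.zeros((1, 64, 64, 1)), tf.zeros((1, 64, 64, 3))],
              training=False)
print(scores.shape)  # (1, 8, 8, 1): an 8x8 grid of patch scores
```

Because each output logit corresponds only to a local patch, the discriminator has far fewer parameters than a full-image classifier and focuses the adversarial loss on high-frequency detail; an L1 term in the generator loss typically handles global structure.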
This project uses a MobileNetV2 encoder with a Pix2Pix-style decoder to perform precise semantic segmentation, distinguishing objects in images, such as identifying flooded regions in images of flood events. The purpose is to enable accurate object delineation for applications like disaster response and environmental monitoring.
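The encoder-decoder combination above can be sketched as follows, assuming TensorFlow/Keras: a MobileNetV2 backbone tapped at several resolutions feeds a stack of Pix2Pix-style upsampling blocks with skip connections. The 128x128 input size, untrained weights, and two-class output are illustrative assumptions; the layer names are the standard ones in the Keras MobileNetV2 implementation.

```python
import tensorflow as tf


def upsample(filters, size):
    # Pix2Pix-style decoder block: transposed conv doubles resolution.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2DTranspose(filters, size, strides=2,
                                        padding="same", use_bias=False),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.ReLU(),
    ])


def build_segmenter(img_size=128, num_classes=2):
    base = tf.keras.applications.MobileNetV2(
        input_shape=(img_size, img_size, 3),
        include_top=False, weights=None)  # weights=None: no download here
    # Feature maps tapped at successively coarser resolutions.
    layer_names = ["block_1_expand_relu",   # 64x64
                   "block_3_expand_relu",   # 32x32
                   "block_6_expand_relu",   # 16x16
                   "block_13_expand_relu",  # 8x8
                   "block_16_project"]      # 4x4
    outputs = [base.get_layer(n).output for n in layer_names]
    encoder = tf.keras.Model(inputs=base.input, outputs=outputs)

    inputs = tf.keras.layers.Input(shape=(img_size, img_size, 3))
    *skips, x = encoder(inputs)
    up_stack = [upsample(512, 3), upsample(256, 3),
                upsample(128, 3), upsample(64, 3)]
    for up, skip in zip(up_stack, reversed(skips)):
        x = up(x)
        x = tf.keras.layers.Concatenate()([x, skip])

    # Final upsample back to full resolution; per-pixel class logits.
    x = tf.keras.layers.Conv2DTranspose(num_classes, 3, strides=2,
                                        padding="same")(x)
    return tf.keras.Model(inputs=inputs, outputs=x)


model = build_segmenter()
logits = model(tf.zeros((1, 128, 128, 3)), training=False)
print(logits.shape)  # (1, 128, 128, 2)
```

In practice the encoder would be initialized with ImageNet weights (`weights="imagenet"`) and optionally frozen, so only the lightweight decoder is trained on the flood-segmentation data.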
A free and open-source tool for star removal in astronomy images: a GAN model implemented in TensorFlow and trained to remove stars from astronomical images.