
pix2pix: Image-to-image translation with conditional GANs

A Torch implementation of paired image-to-image translation using conditional GANs (cGANs).

Stats (snapshot 06:50 AM, trend steady):
Stars: 10.6K · Forks: 1.7K · Downloads: 8.3M
Rankings: #236 overall · #83 in AI & ML
7-day change: +3 stars, +1 fork

Learn more about pix2pix

pix2pix is a generative model implementation based on conditional adversarial networks (cGANs) designed for paired image-to-image translation tasks. It uses a generator-discriminator architecture where the generator learns to map from input images to output images conditioned on the input, while the discriminator distinguishes real output pairs from generated ones. The framework supports bidirectional translation (AtoB or BtoA) and includes training and testing pipelines with support for GPU acceleration via CUDA. Common applications include converting semantic segmentation maps to photorealistic images, generating building facades from architectural drawings, and translating between day and night scene photographs.
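The paper's combined objective pairs an adversarial term with an L1 reconstruction term (weighted by λ, set to 100 in the paper). A minimal NumPy sketch of the generator-side objective; the function name and toy inputs are illustrative, not the repo's own code:

```python
import numpy as np

def pix2pix_generator_loss(d_fake, fake_img, real_img, lam=100.0):
    """Generator objective: adversarial term (fool the discriminator on
    the generated pair) plus lambda-weighted L1 distance to the target."""
    eps = 1e-12                                  # avoid log(0)
    adv = -np.mean(np.log(d_fake + eps))         # push D(x, G(x)) toward 1
    l1 = np.mean(np.abs(real_img - fake_img))    # keep output near target
    return adv + lam * l1

# Toy case: discriminator fully fooled and output identical to target,
# so both terms are (numerically) zero.
loss = pix2pix_generator_loss(np.array([1.0]), np.zeros((3, 4, 4)),
                              np.zeros((3, 4, 4)))
```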


1. Conditional adversarial framework

Uses paired image data with a discriminator that evaluates both input and output together, enabling more structured translations compared to unconditional GANs. This conditioning mechanism allows the model to learn task-specific mappings rather than generating arbitrary outputs.
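Concretely, "evaluating input and output together" usually means the discriminator scores the image pair as one stacked tensor. A NumPy sketch of that conditioning step, with illustrative shapes:

```python
import numpy as np

input_img = np.random.rand(3, 256, 256)    # e.g. a semantic label map
output_img = np.random.rand(3, 256, 256)   # real or generated photo

# The conditional discriminator sees both images at once: stack them
# along the channel axis to form a single 6-channel input.
pair = np.concatenate([input_img, output_img], axis=0)
```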

2. Modest data requirements

Achieves reasonable results on relatively small datasets, such as 400 images for facade generation trained in approximately 2 hours on a single GPU. This contrasts with many deep learning approaches that typically require substantially larger training sets.

3. Bidirectional translation support

Supports training in both directions (AtoB and BtoA) through a command-line parameter, allowing the same architecture to learn reverse mappings without separate model implementations. This flexibility enables experimentation with different translation directions on the same dataset.
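In effect, the direction switch only decides which side of each paired sample is the source and which is the target. A small illustrative helper (the function name is hypothetical; the 'AtoB'/'BtoA' strings mirror the repo's flag):

```python
def split_pair(pair, direction='AtoB'):
    """Return (source, target) from an (A, B) pair for the given
    translation direction; 'BtoA' simply swaps the roles."""
    a, b = pair
    return (a, b) if direction == 'AtoB' else (b, a)

# Facades example: translating photos back to label maps flips the flag.
src, tgt = split_pair(('labels', 'photo'), direction='BtoA')
```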


from models import create_model
from options.test_options import TestOptions
from data import create_dataset

# Parse test-time options from the command line.
opt = TestOptions().parse()
opt.num_threads = 0        # single-threaded data loading for testing
opt.batch_size = 1         # process one image pair at a time
opt.direction = 'AtoB'     # translate from domain A to domain B

model = create_model(opt)
model.setup(opt)           # load and configure the networks
dataset = create_dataset(opt)

for i, data in enumerate(dataset):
    model.set_input(data)                 # unpack a batch from the loader
    model.test()                          # run inference without gradients
    visuals = model.get_current_visuals()
    generated_image = visuals['fake_B']   # translated output for AtoB
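The fake_B output is typically a channels-first array scaled to [-1, 1]. A sketch of the usual post-processing to a displayable uint8 image, assuming that convention; the helper name is illustrative, not the repo's own utility:

```python
import numpy as np

def to_uint8_image(fake_b):
    """Map a (C, H, W) array in [-1, 1] to an (H, W, C) uint8 image."""
    arr = (np.transpose(fake_b, (1, 2, 0)) + 1.0) / 2.0 * 255.0
    return arr.astype(np.uint8)

img = to_uint8_image(np.ones((3, 8, 8)))   # all +1.0 maps to pure white
```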

