Show HN: Efficient-VDVAE, an open-source memory-efficient deep hierarchical VAE

Louay Hazami · Rayhane Mama · Ragavan Thurairatnam

MIT license

Efficient-VDVAE is a memory and compute efficient very deep hierarchical VAE. It converges faster and is more stable than current hierarchical VAE models. It also achieves SOTA likelihood-based performance on several image datasets.

Pre-trained model checkpoints

We provide checkpoints of pre-trained models on MNIST, CIFAR-10, Imagenet 32×32, Imagenet 64×64, CelebA 64×64, CelebAHQ 256×256 (5-bits and 8-bits), FFHQ 256×256 (5-bits and 8-bits), CelebAHQ 1024×1024 and FFHQ 1024×1024 in the links in the table below. All provided models are the ones trained for table 4 of the paper.

Dataset | Pytorch Logs | Pytorch Checkpoints | JAX Logs | JAX Checkpoints | Negative ELBO
------- | ------------ | ------------------- | -------- | --------------- | -------------
MNIST | | | link | link | 79.09 nats
CIFAR-10 | | | link | link | 2.87 bits/dim
Imagenet 32×32 | | | link | link | 3.58 bits/dim
Imagenet 64×64 | | | link | link | 3.30 bits/dim
CelebA 64×64 | | | link | link | 1.83 bits/dim
CelebAHQ 256×256 (5-bits) | Queued | Queued | link | link | 0.51 bits/dim
CelebAHQ 256×256 (8-bits) | Queued | Queued | link | link | 1.35 bits/dim
FFHQ 256×256 (5-bits) | Queued | Queued | link | link | 0.53 bits/dim
FFHQ 256×256 (8-bits) | link | link | link | link | 2.17 bits/dim
CelebAHQ 1024×1024 | | | link | link | 1.01 bits/dim
FFHQ 1024×1024 | | | link | link | 2.30 bits/dim

Notes:

  • Downloading from the “Checkpoints” link will download the minimal files required to resume training/perform inference. The minimal files are the model checkpoint file and the saved hyper-parameters of the run (defined further below).
  • Downloading from the “Logs” link will download additional training logs such as tensorboard files or images saved during training. “Logs” also holds the saved hyper-parameters of the run.
  • Downloaded “Logs” and/or “Checkpoints” must always be unzipped in their implementation folder (efficient_vdvae_torch for Pytorch checkpoints and efficient_vdvae_jax for JAX checkpoints); see the example after this list.
  • Some of the model checkpoints are missing in either Pytorch or JAX for the moment. We will update them soon.
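
As an illustration, unzipping a downloaded JAX checkpoint archive into its implementation folder could look like the following (the archive name is hypothetical; use whatever file you actually downloaded):

unzip MNIST_checkpoints.zip -d efficient_vdvae_jax/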

Pre-requisites

To run this codebase, you need:

  • A machine running a Linux-based OS (tested on Ubuntu 20.04 LTS)
  • GPUs (ideally more than 16GB)
  • Docker
  • Python 3.7 or higher
  • CUDA 11.1 or higher (can be installed from here; see the quick version checks after this list)
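
If you want to sanity-check these pre-requisites before building anything, the usual tools are enough (purely optional):

nvidia-smi         # NVIDIA driver and available GPU memory
nvcc --version     # CUDA toolkit version (11.1 or higher)
python3 --version  # Python version (3.7 or higher)
docker --version   # Docker availability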

We recommend running all of the code below inside a Linux screen session or any other terminal multiplexer, since some commands can take hours/days to finish and you don't want them to die when you close your terminal.
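
For example, with screen (the session name is arbitrary):

screen -S efficient_vdvae   # start a named session and run the long commands inside it
# detach with Ctrl-a d; re-attach later with:
screen -r efficient_vdvae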

Set up

To build the docker image used in both the Pytorch and JAX implementations:

cd build
docker build -t efficient_vdvae_image .

All code executions must be done inside a docker container. To start the docker container, we provide a utility script:

sh docker_run.sh  # Starts the container and attaches terminal
cd /workspace/Efficient-VDVAE  # Inside the docker container
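
If you are curious what such a utility script typically contains, a minimal sketch would be along the lines below (this is not the repo's actual docker_run.sh; the flags and mount path are assumptions):

#!/bin/bash
# Start an interactive container with GPU access and the repository mounted at /workspace
docker run --gpus all -it --rm \
    -v "$(pwd):/workspace/Efficient-VDVAE" \
    efficient_vdvae_image /bin/bash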

Setup datasets

All datasets can be automatically downloaded and pre-processed with the convenience script we provide:

cd data_scripts
sh download_and_preprocess.sh <dataset_name>

Notes:

  • <dataset_name> can be one of (imagenet32, imagenet64, celeba, celebahq, ffhq). The MNIST and CIFAR-10 datasets will get automatically downloaded later when training the model, and they do not require any dataset setup (a concrete invocation example follows these notes).
  • For the celeba dataset, a manual download of the img_align_celeba.zip and list_eval_partition.txt files is necessary. Both files must be placed under /dataset_dumps/.
  • img_align_celeba.zip download link.
  • list_eval_partition.txt download link.
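
For example, to fetch and pre-process one of the supported datasets listed above (celebahq picked arbitrarily here):

cd data_scripts
sh download_and_preprocess.sh celebahq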

Setting the hyper-parameters

In this repository, we use the hparams library (already included in the Dockerfile) for hyper-parameter management:

  • Specify all run parameters (number of GPUs, model parameters, etc.) in one .cfg file.
  • Hparams evaluates any expression used as “value” in the .cfg file. “value” can be any basic python object (floats, strings, lists, etc.) or any basic python expression (1/2, max(3, 7), etc.) as long as the evaluation does not require any library imports and does not rely on other values from the .cfg.
  • Hparams saves the configuration of previous runs for reproducibility, resuming training, etc.
  • All hparams are saved by name, and re-using the same name will reload the old run instead of creating a new one.
  • The .cfg file is split into sections for readability, and all parameters in the file are accessible as class attributes in the codebase for convenience.
  • The HParams object keeps a global state throughout all the scripts in the code.

We highly recommend taking a deeper look into how this library works by reading the hparams library documentation, the parameters description and figures 4 and 5 in the paper before trying to run Efficient-VDVAE.

We have heavily tested the robustness and stability of our approach, so changing the model/optimization hyper-parameters to reduce memory load should not introduce any drastic instabilities that would make the model untrainable. That is of course as long as the changes do not negate the important stability points we describe in the paper.
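
To give a flavour of the format described above, here is a hypothetical fragment of such a .cfg file (the section and parameter names are illustrative only, not the repo's actual ones; refer to the provided hparams.cfg and the egs folder for the real parameters):

[run]
name = my_first_run         # re-using this name later reloads the saved run instead of creating a new one
num_gpus = 2

[optimization]
learning_rate = 1e-3 / 2    # values can be plain python expressions
batch_size = max(4, 8)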

Training the Efficient-VDVAE

To run Efficient-VDVAE in Torch:

cd efficient_vdvae_torch
# Set the hyper-parameters in "hparams.cfg" file
# Set "NUM_GPUS_PER_NODE" in "train.sh" file
sh train.sh

To run Efficient-VDVAE in JAX:

cd efficient_vdvae_jax
# Set the hyper-parameters in "hparams.cfg" file
python train.py

If you want to run the model with fewer GPUs than are available on the hardware, for example 2 GPUs out of 8:

CUDA_VISIBLE_DEVICES=0,1 sh train.sh  # For torch
CUDA_VISIBLE_DEVICES=0,1 python train.py  # For JAX

Models automatically create checkpoints during training. To resume a model from its last checkpoint, set its <run_name> in the hparams.cfg file and re-run the same training commands.

Since training commands save the hparams of the defined run in the .cfg file, restarting a pre-existing run (by re-using its name in hparams.cfg) requires clearing the saved run first; we provide a convenience script for resetting saved runs:

cd efficient_vdvae_torch  # or cd efficient_vdvae_jax
sh reset.sh <run_name>  # <run_name> is the first field in hparams.cfg
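
For example, to wipe a previously saved run named my_first_run (a hypothetical name):

cd efficient_vdvae_torch
sh reset.sh my_first_run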

Note:

  • To make things easier for new users, we provide example hparams.cfg files that can be used, under the egs folder. A detailed description of the role of each parameter is also inside hparams.cfg.

Monitoring the training process

While writing this codebase, we put extra emphasis on verbosity and logging. Aside from the printed logs on the terminal (during training), you can monitor the training progress and keep track of useful metrics using Tensorboard:

# While outside efficient_vdvae_torch or efficient_vdvae_jax
# Run outside the docker container
tensorboard --logdir . --port <port_id> --reload_multifile True

In the browser, navigate to localhost:<port_id> to visualize all saved metrics.

If Tensorboard is not installed (outside the docker container):

pip install --upgrade tensorboard

Inference with the Efficient-VDVAE

Efficient-VDVAE supports several inference modes:

  • “reconstruction”: Encodes then decodes the test set images and computes test NLL and SSIM.
  • “generation”: Generates random images from the prior distribution. Randomness is controlled by the run.seed parameter.
  • “div_stats”: Pre-computes the average KL divergence stats used to determine the turned-off variates (refer to section 7 of the paper). Note: This mode must be run before “encoding” mode and before trying to do masked “reconstruction” (refer to hparams.cfg for a detailed description; see also the workflow sketch after the inference command below).
  • “encoding”: Extracts the latent distribution from the inference model, pruned to the quantile defined by the synthesis.variates_masks_quantile parameter. This latent distribution is usable in downstream tasks.

To run the inference:

cd efficient_vdvae_torch  # or cd efficient_vdvae_jax
# Set the inference mode in "logs-<run_name>/hparams-<run_name>.cfg"
# Set the same <run_name> in "hparams.cfg"
python synthesize.py
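
For instance, since “div_stats” must be run before “encoding” (see the modes list above), a masked-encoding workflow would roughly look like this, switching the mode in the saved hparams file between the two runs:

cd efficient_vdvae_jax
# 1. In "logs-<run_name>/hparams-<run_name>.cfg", set the inference mode to "div_stats"
python synthesize.py
# 2. In the same file, switch the inference mode to "encoding"
python synthesize.py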

Notes:

  • Since training a model with a given name saves that configuration under logs-<run_name>/hparams-<run_name>.cfg for reproducibility and error reduction, any changes that one wants to make at inference time should be applied to the saved hparams file (logs-<run_name>/hparams-<run_name>.cfg) instead of the main hparams.cfg file.
  • The torch implementation currently doesn't support multi-GPU inference. The JAX implementation does.

Potential TODOs

  • Make data loaders Out-Of-Core (OOC) in Pytorch
  • Make data loaders Out-Of-Core (OOC) in JAX
  • Update the pre-trained model checkpoints
  • Add Fréchet Inception Distance (FID) and Inception Score (IS) as measures of sample quality performance.
  • Improve the format of the encoded dataset used in downstream tasks (output of the encoding mode, if there is a need)
  • Write a decoding mode API (if needed).

Bibtex

If you use this codebase, please cite our paper:

@article{hazami2022efficient,
  title={Efficient-VDVAE: Less is more},
  author={Hazami, Louay and Mama, Rayhane and Thurairatnam, Ragavan},
  year={2022}
}
