diff --git a/Stylegan2_ada_Custom_Training.ipynb b/Stylegan2_ada_Custom_Training.ipynb
new file mode 100755
index 0000000000000000000000000000000000000000..758121d1571e0f3c891a5017dbae128dd333f501
--- /dev/null
+++ b/Stylegan2_ada_Custom_Training.ipynb
@@ -0,0 +1,464 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "cPI5E5y0pujD"
+   },
+   "source": [
+    "<p align=\"center\">\n",
+    "    <a\n",
+    "    href=\"https://youtu.be/dcb4Ckpkx2o\"\n",
+    "    target=\"_blank\"\n",
+    "    rel=\"noopener noreferrer\">\n",
+    "    <img\n",
+    "    alt=\"Night Sky Latent Walk\"\n",
+    "    width=\"350\" height=\"350\"\n",
+    "    src=\"https://github.com/ArthurFDLR/GANightSky/blob/main/.github/random_walk.gif?raw=true\">\n",
+    "    </a>\n",
+    "</p>\n",
+    "\n",
+    "# 🚀 StyleGAN2-ADA for Google Colab\n",
+    "\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "ktqaMJUZuOl7"
+   },
+   "source": [
+    "1. [Install StyleGAN2-ADA on your Google Drive](#scrollTo=5YcUMPQp6ipP)\n",
+    "2. [Train a custom model](#scrollTo=Ti11YiPAiQpb)\n",
+    "3. [Generate images from pre-trained model](#scrollTo=f0A9ZNtferpk)\n",
+    "4. [Latent space exploration](#scrollTo=5yG1UyHXXqsO)\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "5YcUMPQp6ipP"
+   },
+   "source": [
+    "## Install StyleGAN2-ADA on your Google Drive"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "SI_i1MwgpzOD"
+   },
+   "source": [
+    "StyleGAN2-ADA only works with TensorFlow 1. Run the next cell before anything else to make sure we’re using TF1 and not TF2 (the `%tensorflow_version` magic only exists on Colab).\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "id": "iKYAU7Wub3WW"
+   },
+   "outputs": [],
+   "source": [
+    "%tensorflow_version 1.x\n",
+    "!nvidia-smi"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "19_1uXab3gND"
+   },
+   "source": [
+    "Then, mount your Drive to the Colab notebook:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "id": "pxxYlEKI9Gis"
+   },
+   "outputs": [],
+   "source": [
+    "from google.colab import drive\n",
+    "from pathlib import Path\n",
+    "\n",
+    "content_path = Path('/').absolute() / 'content'\n",
+    "drive_path = content_path / 'drive'\n",
+    "drive.mount(str(drive_path))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "epV6TDzAjox1"
+   },
+   "source": [
+    "Finally, run this cell to install StyleGAN2-ADA on your Drive. If you’ve already installed the repository, it skips the installation and only checks for updates; otherwise, it installs all the necessary files. Besides, the **in**, **out**, **datasets**, and **training** folders are generated for data storage. Everything remains available on your Google Drive in the **StyleGAN2-ADA** folder even after closing this notebook.\n",
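+    "\n",
+    "After it runs, the layout on your Drive should look roughly like this (assuming the default folder names used in the cell below):\n",
+    "\n",
+    "```\n",
+    "StyleGAN2-ADA/\n",
+    "├── stylegan2-ada/   (cloned repository)\n",
+    "├── in/\n",
+    "├── out/\n",
+    "├── datasets/\n",
+    "│   └── source/      (upload your zipped dataset here)\n",
+    "└── training/        (training runs and network snapshots)\n",
+    "```"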
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "id": "8HX77jscX2zV"
+   },
+   "outputs": [],
+   "source": [
+    "stylegan2_repo_url = 'https://github.com/dvschultz/stylegan2-ada' # or https://github.com/NVlabs/stylegan2-ada\n",
+    "project_path = drive_path / 'MyDrive' / 'StyleGAN2-ADA'\n",
+    "stylegan2_repo_path = project_path / 'stylegan2-ada'\n",
+    "\n",
+    "# Create the project folder if it does not exist yet\n",
+    "if not project_path.is_dir():\n",
+    "    %mkdir \"{project_path}\"\n",
+    "%cd \"{project_path}\"\n",
+    "\n",
+    "# Create the storage folders\n",
+    "for folder in ['in', 'out', 'datasets', 'training']:\n",
+    "    if not (project_path / folder).is_dir():\n",
+    "        %mkdir {folder}\n",
+    "if not (project_path / 'datasets' / 'source').is_dir():\n",
+    "    %mkdir \"{project_path / 'datasets' / 'source'}\"\n",
+    "\n",
+    "# Clone (or update) StyleGAN2-ADA\n",
+    "!git config --global user.name \"ArthurFDLR\"\n",
+    "!git config --global user.email \"arthfind@gmail.com\"\n",
+    "if stylegan2_repo_path.is_dir():\n",
+    "    !git -C \"{stylegan2_repo_path}\" fetch origin\n",
+    "    !git -C \"{stylegan2_repo_path}\" checkout origin/main -- *.py\n",
+    "else:\n",
+    "    print(\"Install StyleGAN2-ADA\")\n",
+    "    !git clone {stylegan2_repo_url}"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "Ti11YiPAiQpb"
+   },
+   "source": [
+    "## Train a custom model"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "ioqYi9NzkUfG"
+   },
+   "source": [
+    "Once you have installed StyleGAN2-ADA on your Google Drive and set up the working directory, upload your training images as a single zip archive to the **datasets/source** folder, as expected by the next cell."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "id": "OlV5HIEqiZvu"
+   },
+   "outputs": [],
+   "source": [
+    "dataset_name = 'NightSky'\n",
+    "datasets_source_path = project_path / 'datasets' / 'source' / (dataset_name + '.zip')\n",
+    "if datasets_source_path.is_file():\n",
+    "    print(\"Dataset ready for import.\")\n",
+    "else:\n",
+    "    print('Upload your image dataset as {}'.format(datasets_source_path))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "-y1-tvr5617d"
+   },
+   "source": [
+    "Unfortunately, large datasets may exceed the Google Drive download quota after a few training runs: StyleGAN2 reads the dataset from Drive multiple times during training. You may therefore have to import your dataset into the session's local storage. However, very large files sometimes cannot be copied directly from Drive *(Input/Output error)*.\n",
+    "\n",
+    "Run this cell to copy your zipped dataset from your Drive and unzip it in the local session.\n",
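+    "\n",
+    "Before converting the images to `.tfrecords` in the next section, it can also help to check that every extracted image shares one square resolution (ideally a power of two), since `dataset_tool.py` expects a uniform dataset. Here is a minimal, optional sketch to run after the extraction cell below; it assumes the archive unpacks to an `images/` folder (as the following cells do) and uses Pillow, which is preinstalled on Colab:\n",
+    "\n",
+    "```python\n",
+    "from collections import Counter\n",
+    "from pathlib import Path\n",
+    "from PIL import Image\n",
+    "\n",
+    "# Tally the (width, height) of every image in the extracted folder.\n",
+    "image_dir = Path('/content/dataset/images')  # assumed extraction target\n",
+    "sizes = Counter(Image.open(p).size for p in image_dir.iterdir()\n",
+    "                if p.suffix.lower() in {'.png', '.jpg', '.jpeg'})\n",
+    "print(sizes)  # ideally a single entry such as (1024, 1024)\n",
+    "```"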
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "id": "eQZGo4g5y7rh"
+   },
+   "outputs": [],
+   "source": [
+    "local_dataset_path = content_path / 'dataset'\n",
+    "if not local_dataset_path.is_dir():\n",
+    "    print(\"Importing dataset...\")\n",
+    "    %mkdir \"{local_dataset_path}\"\n",
+    "    %cp -a \"{project_path / 'datasets' / 'source' / (dataset_name + '.zip')}\" \"{local_dataset_path}\"\n",
+    "    print(\"Zip file successfully imported\")\n",
+    "else:\n",
+    "    print('Zip file already imported')\n",
+    "\n",
+    "import zipfile\n",
+    "with zipfile.ZipFile(str(local_dataset_path / (dataset_name + '.zip')), 'r') as zip_ref:\n",
+    "    zip_ref.extractall(str(local_dataset_path))\n",
+    "print('Extraction completed')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "HeS9tDvt61VG"
+   },
+   "source": [
+    "### Convert dataset to .tfrecords"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "_Q58MJbckLUc"
+   },
+   "source": [
+    "Next, we need to convert our image dataset to a format that StyleGAN2-ADA can read: `.tfrecords`.\n",
+    "\n",
+    "This can take a while."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "id": "IjH8kBDo3kFP"
+   },
+   "outputs": [],
+   "source": [
+    "local_images_path = local_dataset_path / 'images'\n",
+    "local_dataset_path /= 'tfr'\n",
+    "\n",
+    "if (local_dataset_path).is_dir():\n",
+    "    print('\\N{Heavy Exclamation Mark Symbol} Dataset already created \\N{Heavy Exclamation Mark Symbol}')\n",
+    "    print('Delete current dataset folder ({}) to regenerate tfrecords.'.format(local_dataset_path))\n",
+    "else:\n",
+    "    %mkdir \"{local_dataset_path}\"\n",
+    "    !python \"{stylegan2_repo_path / 'dataset_tool.py'}\" create_from_images \\\n",
+    "        \"{local_dataset_path}\" \"{local_images_path}\""
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "8DvTupHzP2s_"
+   },
+   "source": [
+    "There are numerous arguments for tuning the training of your model, and you will certainly have to experiment to obtain nice results. Here are the most popular parameters:\n",
+    "\n",
+    "* *mirror:* Should the images be mirrored left to right?\n",
+    "* *mirrory:* Should the images be mirrored top to bottom?\n",
+    "* *snap:* How often should the model generate image samples and a network pickle (.pkl file)?\n",
+    "* *resume:* Which network pickle should training resume from?\n",
+    "\n",
+    "To see all the options, run the following `help` cell.\n",
+    "\n",
+    "Please note that Google Colab Pro gives access to V100 GPUs, which drastically decreases (~3x) processing time compared to P100 GPUs."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "id": "Fxu7CA0Qb1Yd"
+   },
+   "outputs": [],
+   "source": [
+    "!python \"{stylegan2_repo_path / 'train.py'}\" --help"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "id": "jOftFoyiDU3s"
+   },
+   "outputs": [],
+   "source": [
+    "training_path = project_path / 'training' / dataset_name\n",
+    "if not training_path.is_dir():\n",
+    "    %mkdir \"{training_path}\"\n",
+    "\n",
+    "# How often should the model generate samples and a .pkl file?\n",
+    "snapshot_count = 2\n",
+    "# Should the images be mirrored left to right?\n",
+    "mirrored = True\n",
+    "# Should the images be mirrored top to bottom?\n",
+    "mirroredY = False\n",
+    "# Metrics to evaluate during training\n",
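+    "# (None disables metric computation, which is otherwise slow; see the --help cell above for the --metrics option)\n",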
+    "metric_list = None\n",
+    "# Augmentation pipeline ('bgc' = blit + geometric + color transforms)\n",
+    "augs = 'bgc'\n",
+    "\n",
+    "# Network pickle (or predefined name) to resume training from\n",
+    "resume_from = 'ffhq1024'\n",
+    "\n",
+    "!python \"{stylegan2_repo_path / 'train.py'}\" --outdir=\"{training_path}\" \\\n",
+    "    --data=\"{local_dataset_path}\" --resume=\"{resume_from}\" \\\n",
+    "    --snap={snapshot_count} --augpipe={augs} \\\n",
+    "    --mirror={mirrored} --mirrory={mirroredY} \\\n",
+    "    --metrics={metric_list} #--dry-run"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "f0A9ZNtferpk"
+   },
+   "source": [
+    "## Generate images from pre-trained model\n",
+    "\n",
+    "Once everything is set up, you can generate images using a pre-trained network. You can naturally use [your own model once it is trained](#scrollTo=Ti11YiPAiQpb&uniqifier=1) or one of the models NVlabs published on [their website](https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada/pretrained/).\n",
+    "\n",
+    "<p align=\"center\">\n",
+    "  <img\n",
+    "  alt=\"Night Sky Latent Walk\"\n",
+    "  width=\"450\" height=\"300\"\n",
+    "  src=\"https://github.com/ArthurFDLR/GANightSky/blob/main/.github/Random_Generation.png?raw=true\">\n",
+    "</p>"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "id": "xnlrb9QzgaGp"
+   },
+   "outputs": [],
+   "source": [
+    "%pip install opensimplex\n",
+    "!python \"{stylegan2_repo_path / 'generate.py'}\" generate-images --help"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "id": "qQqYjeRsfYD2"
+   },
+   "outputs": [],
+   "source": [
+    "from numpy import random\n",
+    "seed_init = random.randint(10000)\n",
+    "nbr_images = 6\n",
+    "\n",
+    "generation_from = 'https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada/pretrained/ffhq.pkl'\n",
+    "\n",
+    "!python \"{stylegan2_repo_path / 'generate.py'}\" generate-images \\\n",
+    "    --outdir=\"{project_path / 'out'}\" --trunc=0.7 \\\n",
+    "    --seeds={seed_init}-{seed_init+nbr_images-1} --create-grid \\\n",
+    "    --network={generation_from}"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "5yG1UyHXXqsO"
+   },
+   "source": [
+    "## Latent space exploration"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "kYLhEFGMtJIw"
+   },
+   "source": [
+    "It is also possible to explore the latent space associated with our model and [generate videos like this one](https://youtu.be/dcb4Ckpkx2o).\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "id": "veceGR6QYA93"
+   },
+   "outputs": [],
+   "source": [
+    "%pip install opensimplex\n",
+    "!python \"{stylegan2_repo_path / 'generate.py'}\" generate-latent-walk --help"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "id": "UjsN05ksZYZ7"
+   },
+   "outputs": [],
+   "source": [
+    "from numpy import random\n",
+    "walk_types = ['line', 'sphere', 'noiseloop', 'circularloop']\n",
+    "latent_walk_path = project_path / 'out' / 'latent_walk'\n",
+    "if not latent_walk_path.is_dir():\n",
+    "    %mkdir \"{latent_walk_path}\"\n",
+    "\n",
+    "# Network snapshot to explore (update it to one of your own training snapshots)\n",
+    "explored_network = '/content/drive/MyDrive/StyleGAN2-ADA/models/network-snapshot-026392.pkl'\n",
+    "\n",
+    "seeds = [random.randint(10000) for _ in range(100)]\n",
+    "print(\"Base seeds:\", ','.join(map(str, seeds)))\n",
+    "!python \"{stylegan2_repo_path / 'generate.py'}\" generate-latent-walk --network=\"{explored_network}\" \\\n",
+    "    --outdir=\"{latent_walk_path}\" --trunc=0.7 --walk-type=\"{walk_types[2]}\" \\\n",
+    "    --seeds={','.join(map(str, seeds))} --frames {len(seeds)*20}"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "H9878RlntrDp"
+   },
+   "source": [
+    "## While you wait ...\n",
+    "\n",
+    "... learn more about Generative Adversarial Networks and StyleGAN2-ADA:\n",
+    "\n",
+    "* [This Night Sky Does Not Exist](https://arthurfindelair.com/thisnightskydoesnotexist/): Images generated by a model trained with this notebook on Google Colab Pro.\n",
+    "* [This **X** Does Not Exist](https://thisxdoesnotexist.com/): Collection of sites showing the power of GANs.\n",
+    "* [Karras, Tero, et al. _Training Generative Adversarial Networks with Limited Data._ NeurIPS 2020.](https://arxiv.org/pdf/2006.06676.pdf): The paper introducing StyleGAN2-ADA.\n",
+    "* [Official implementation of StyleGAN2-ADA](https://github.com/NVlabs/stylegan2-ada)\n",
+    "* [StyleGAN v2: notes on training and latent space exploration](https://towardsdatascience.com/stylegan-v2-notes-on-training-and-latent-space-exploration-e51cf96584b3): Interesting article from Towards Data Science"
+   ]
+  }
+ ],
+ "metadata": {
+  "accelerator": "GPU",
+  "colab": {
+   "collapsed_sections": [],
+   "machine_shape": "hm",
+   "name": "Stylegan2-ada Custom Training.ipynb",
+   "private_outputs": true,
+   "provenance": []
+  },
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.8.10"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 1
+}