From b14022ea92986bbcfeccb81fd9a0116b92807448 Mon Sep 17 00:00:00 2001
From: Kevin Black <12429600+kvablack@users.noreply.github.com>
Date: Tue, 4 Jul 2023 01:21:46 -0700
Subject: [PATCH] Update README.md

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 09956b1..561a06b 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,6 @@
 # ddpo-pytorch
 
-This is an implementation of [Denoising Diffusion Policy Optimization (DDPO)](https://rl-diffusion.github.io/) in PyTorch with support for [low-rank adaptation (LoRA)](https://huggingface.co/docs/diffusers/training/lora). Unlike our original research code (which you can find [here](https://github.com/jannerm/ddpo)), this implementation runs on GPUs, and if LoRA is enabled, requires less than 10GB of GPU memory to finetune a Stable Diffusion-sized model!
+This is an implementation of [Denoising Diffusion Policy Optimization (DDPO)](https://rl-diffusion.github.io/) in PyTorch with support for [low-rank adaptation (LoRA)](https://huggingface.co/docs/diffusers/training/lora). Unlike our original research code (which you can find [here](https://github.com/jannerm/ddpo)), this implementation runs on GPUs, and if LoRA is enabled, requires less than 10GB of GPU memory to finetune Stable Diffusion!
 
 ![DDPO](teaser.jpg)
 
@@ -42,4 +42,4 @@ The image at the top of this README was generated using LoRA! However, I used a
 You can find the exact configs I used for the 4 experiments in `config/dgx.py`. For example, to run the aesthetic quality experiment:
 ```bash
 accelerate launch scripts/train.py --config config/dgx.py:aesthetic
-```
\ No newline at end of file
+```