From 9cbe82177a2ec2ab8566b3f60babc7bae103759d Mon Sep 17 00:00:00 2001
From: Parth38 <58384863+Parth38@users.noreply.github.com>
Date: Sat, 2 Dec 2023 06:27:38 -0600
Subject: [PATCH 1/2] Update value_guided_sampling.py

Changed the scheduler step call because the predict_epsilon parameter is no
longer present in the latest DDPMScheduler.
---
 src/diffusers/experimental/rl/value_guided_sampling.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/diffusers/experimental/rl/value_guided_sampling.py b/src/diffusers/experimental/rl/value_guided_sampling.py
index dfb27587d7d5..f46d3ac98b17 100644
--- a/src/diffusers/experimental/rl/value_guided_sampling.py
+++ b/src/diffusers/experimental/rl/value_guided_sampling.py
@@ -113,7 +113,7 @@ def run_diffusion(self, x, conditions, n_guide_steps, scale):
             prev_x = self.unet(x.permute(0, 2, 1), timesteps).sample.permute(0, 2, 1)
 
             # TODO: verify deprecation of this kwarg
-            x = self.scheduler.step(prev_x, i, x, predict_epsilon=False)["prev_sample"]
+            x = self.scheduler.step(prev_x, i, x)["prev_sample"]
 
             # apply conditions to the trajectory (set the initial state)
             x = self.reset_x0(x, conditions, self.action_dim)

From 08c16b896e00fed472933225117257c32d4ec863 Mon Sep 17 00:00:00 2001
From: Parth38 <58384863+Parth38@users.noreply.github.com>
Date: Sat, 2 Dec 2023 07:01:57 -0600
Subject: [PATCH 2/2] Update value_guided_sampling.md

Updated a link to a working notebook.
---
 docs/source/en/api/pipelines/value_guided_sampling.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/en/api/pipelines/value_guided_sampling.md b/docs/source/en/api/pipelines/value_guided_sampling.md
index 01b7717f49f8..3c7e4977a68a 100644
--- a/docs/source/en/api/pipelines/value_guided_sampling.md
+++ b/docs/source/en/api/pipelines/value_guided_sampling.md
@@ -24,7 +24,7 @@ The abstract from the paper is:
 
 *Model-based reinforcement learning methods often use learning only for the purpose of estimating an approximate dynamics model, offloading the rest of the decision-making work to classical trajectory optimizers. While conceptually simple, this combination has a number of empirical shortcomings, suggesting that learned models may not be well-suited to standard trajectory optimization. In this paper, we consider what it would look like to fold as much of the trajectory optimization pipeline as possible into the modeling problem, such that sampling from the model and planning with it become nearly identical. The core of our technical approach lies in a diffusion probabilistic model that plans by iteratively denoising trajectories. We show how classifier-guided sampling and image inpainting can be reinterpreted as coherent planning strategies, explore the unusual and useful properties of diffusion-based planning methods, and demonstrate the effectiveness of our framework in control settings that emphasize long-horizon decision-making and test-time flexibility.*
 
-You can find additional information about the model on the [project page](https://diffusion-planning.github.io/), the [original codebase](https://github.com/jannerm/diffuser), or try it out in a demo [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/reinforcement_learning_with_diffusers.ipynb).
+You can find additional information about the model on the [project page](https://diffusion-planning.github.io/), the [original codebase](https://github.com/jannerm/diffuser), or try it out in a demo [notebook](https://colab.research.google.com/drive/1rXm8CX4ZdN5qivjJ2lhwhkOmt_m0CvU0#scrollTo=6HXJvhyqcITc&uniqifier=1). The script to run the model is available [here](https://github.com/huggingface/diffusers/tree/main/examples/reinforcement_learning).
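For context on the first patch: `predict_epsilon=False` told `step()` to treat the model output as the denoised sample rather than predicted noise; in recent `diffusers` releases this choice is made once on the scheduler via its `prediction_type` config instead of being passed to `step()`. The snippet below is only an illustrative sketch of that call pattern, assuming a recent `DDPMScheduler`; the tensor shapes are hypothetical and a random tensor stands in for the UNet's trajectory prediction.

```python
import torch
from diffusers import DDPMScheduler

# predict_epsilon=False used to tell step() that the model predicts the clean
# sample; newer DDPMScheduler versions take this via the prediction_type config.
scheduler = DDPMScheduler(num_train_timesteps=1000, prediction_type="sample")
scheduler.set_timesteps(100)

x = torch.randn(1, 32, 14)  # hypothetical (batch, horizon, transition_dim) trajectory
for i in scheduler.timesteps:
    prev_x = torch.randn_like(x)  # stand-in for the UNet's predicted trajectory
    # the deprecated kwarg is gone; the rest of the step() call is unchanged
    x = scheduler.step(prev_x, i, x)["prev_sample"]
```

Configuring the prediction behaviour up front keeps the per-step call signature stable, which is why the pipeline can simply drop the kwarg without any other changes.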