Sanghyun Son

CS PhD @ UMD

Gradient Informed Proximal Policy Optimization


Conference paper


Sanghyun Son, Laura Yu Zheng, Ryan Sullivan, Yi-Ling Qiao, Ming Lin
NeurIPS, 2023

arXiv · GitHub
Cite

APA
Son, S., Zheng, L. Y., Sullivan, R., Qiao, Y.-L., & Lin, M. (2023). Gradient Informed Proximal Policy Optimization. In NeurIPS.


Chicago/Turabian
Son, Sanghyun, Laura Yu Zheng, Ryan Sullivan, Yi-Ling Qiao, and Ming Lin. “Gradient Informed Proximal Policy Optimization.” In NeurIPS, 2023.


MLA
Son, Sanghyun, et al. “Gradient Informed Proximal Policy Optimization.” NeurIPS, 2023.


BibTeX

@inproceedings{son2023a,
  title = {Gradient Informed Proximal Policy Optimization},
  year = {2023},
  author = {Son, Sanghyun and Zheng, Laura Yu and Sullivan, Ryan and Qiao, Yi-Ling and Lin, Ming},
  booktitle = {NeurIPS}
}

Poster at NeurIPS 2023

Abstract

We introduce a novel policy learning method that integrates analytical gradients from differentiable environments with the Proximal Policy Optimization (PPO) algorithm. To incorporate analytical gradients into the PPO framework, we introduce the concept of an α-policy that stands as a locally superior policy. By adaptively modifying the α value, we can effectively manage the influence of analytical policy gradients during learning. To this end, we suggest metrics for assessing the variance and bias of analytical gradients, reducing dependence on these gradients when high variance or bias is detected. Our proposed approach outperforms baseline algorithms in various scenarios, such as function optimization, physics simulations, and traffic control environments.
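
The abstract's core idea, blending PPO's surrogate objective with analytical gradients from a differentiable environment and down-weighting the latter when they look unreliable, can be sketched roughly as below. This is a minimal, illustrative Python/PyTorch sketch and not the paper's implementation: the names (alpha, adapt_alpha, high_var_threshold) and the simple variance-based adaptation rule are assumptions standing in for the metrics described in the paper.

# Illustrative sketch: mixing analytical (differentiable-simulator) policy
# gradients with PPO's clipped surrogate via an alpha weight. Not the authors'
# code; the blending and adaptation rules here are simplified assumptions.

import torch

def ppo_clip_loss(log_prob_new, log_prob_old, advantages, clip_eps=0.2):
    # Standard PPO clipped surrogate objective (written as a loss to minimize).
    ratio = torch.exp(log_prob_new - log_prob_old)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -torch.mean(torch.min(unclipped, clipped))

def adapt_alpha(analytic_grads, alpha, high_var_threshold=1.0, step=0.05):
    # Toy adaptation rule (assumption): shrink alpha when per-sample analytical
    # gradients disagree strongly (high empirical variance), grow it otherwise.
    grad_var = analytic_grads.var(dim=0).mean()
    if grad_var > high_var_threshold:
        alpha = max(0.0, alpha - step)
    else:
        alpha = min(1.0, alpha + step)
    return alpha

def combined_loss(log_prob_new, log_prob_old, advantages,
                  differentiable_returns, alpha):
    # Blend the zeroth-order PPO surrogate with a first-order term that
    # backpropagates through the differentiable environment's returns.
    loss_ppo = ppo_clip_loss(log_prob_new, log_prob_old, advantages)
    loss_analytic = -differentiable_returns.mean()  # maximize returns
    return (1.0 - alpha) * loss_ppo + alpha * loss_analytic

if __name__ == "__main__":
    torch.manual_seed(0)
    # Fake batch standing in for rollout data from a differentiable simulator.
    n = 64
    log_prob_old = torch.randn(n)
    log_prob_new = log_prob_old + 0.01 * torch.randn(n, requires_grad=True)
    advantages = torch.randn(n)
    differentiable_returns = torch.randn(n, requires_grad=True)
    per_sample_grads = torch.randn(n, 8)  # placeholder analytical gradient samples

    alpha = adapt_alpha(per_sample_grads, alpha=0.5)
    loss = combined_loss(log_prob_new, log_prob_old, advantages,
                         differentiable_returns, alpha)
    loss.backward()
    print(f"alpha={alpha:.2f}, loss={loss.item():.4f}")

In this sketch, alpha near 0 recovers ordinary PPO, while larger alpha leans on gradients backpropagated through the environment; the paper's actual criteria for adjusting α are based on its proposed variance and bias metrics.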

