Execution-based Code Generation using Deep Reinforcement Learning

Published in arXiv, 2023

Recommended citation: Parshin Shojaee, Aneesh Jain, Sindhu Tipirneni and Chandan K. Reddy. (2023). "Execution-based Code Generation using Deep Reinforcement Learning." arXiv. https://arxiv.org/abs/2301.13816

The utilization of programming language (PL) models, pretrained on large-scale code corpora, as a means of automating software engineering processes has demonstrated considerable potential in streamlining various code generation tasks such as code completion, code translation, and program synthesis. However, current approaches mainly rely on supervised fine-tuning objectives borrowed from text generation, neglecting sequence-level features of code, including but not limited to compilability as well as syntactic and functional correctness. To address this limitation, we propose PPOCoder, a new framework for code generation that combines pretrained PL models with Proximal Policy Optimization (PPO) deep reinforcement learning and incorporates execution feedback as an external source of knowledge into model optimization. PPOCoder is transferable across different code generation tasks and PLs. Extensive experiments on three code generation tasks demonstrate the effectiveness of our proposed approach compared to SOTA methods, improving the success rate of compilation and functional correctness across different PLs. Our code can be found here.
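To make the two ingredients of the abstract concrete, below is a minimal, hypothetical sketch in Python of (1) an execution-based reward that checks a generated program for compilability and runs it against unit tests, and (2) a standard PPO clipped surrogate loss over token log-probabilities. The reward levels, function names, and test format are illustrative assumptions, not PPOCoder's actual reward design or training code.

```python
import subprocess
import tempfile

import torch


def execution_reward(generated_code: str, unit_tests: str, timeout: float = 5.0) -> float:
    """Grade a candidate program by compiling and executing it with its tests.

    Reward levels are assumptions for illustration:
      -1.0  syntax error (fails to compile)
      -0.5  runtime error, timeout, or failing tests
      +1.0  all tests pass
    """
    # Syntax check first, a rough analogue of a compilability signal.
    try:
        compile(generated_code, "<candidate>", "exec")
    except SyntaxError:
        return -1.0

    # Execute the candidate together with its unit tests in a subprocess.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(generated_code + "\n\n" + unit_tests)
        path = f.name
    try:
        result = subprocess.run(["python", path], capture_output=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return -0.5
    return 1.0 if result.returncode == 0 else -0.5


def ppo_clipped_loss(new_logprobs: torch.Tensor,
                     old_logprobs: torch.Tensor,
                     advantages: torch.Tensor,
                     clip_eps: float = 0.2) -> torch.Tensor:
    """PPO clipped surrogate objective over the tokens of a generated program."""
    ratio = torch.exp(new_logprobs - old_logprobs)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()
```

In a full fine-tuning loop, the scalar execution reward would be assigned to each sampled program, commonly combined with a penalty that keeps the policy close to the pretrained model, and advantages would be estimated before applying the clipped loss; those details are omitted here.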
