XLCoST: A Benchmark Dataset for Cross-lingual Code Intelligence
Published in ICLR 2023 Workshop on DL4C, 2023
Recommended citation: Ming Zhu, Aneesh Jain, Karthik Suresh, Roshan Ravindran, Sindhu Tipirneni, Chandan K. Reddy. (2022). "XLCoST: A Benchmark Dataset for Cross-lingual Code Intelligence." ICLR 2023 Workshop on DL4C. https://openreview.net/forum?id=CuP9bgvsKA
Recent advances in machine learning have significantly improved the understanding of source code data and achieved good performance on a number of downstream tasks. Open source repositories like GitHub enable this process with rich unlabeled code data. However, the lack of high-quality labeled data has largely hindered the progress of several code-related tasks, such as program translation, summarization, synthesis, and code search. This paper introduces XLCoST (Cross-Lingual Code SnippeT dataset), a new benchmark dataset for cross-lingual code intelligence. Our dataset contains fine-grained parallel data from 8 languages (7 commonly used programming languages and English) and supports 10 cross-lingual code tasks. To the best of our knowledge, it is the largest parallel dataset for source code in terms of both size and number of languages. We also provide the performance of several state-of-the-art baseline models for each task. We believe this new dataset can be a valuable asset for the research community and facilitate the development and validation of new methods for cross-lingual code intelligence.