Too Noisy To Learn: Enhancing Data Quality for Code Review Comment Generation
Code review is an important practice in software development, yet it is time-consuming and requires substantial effort. Recent work has leveraged open-source datasets to train neural models for automating code review tasks, including code review comment generation. However, a significant portion of noisy comments (e.g., vague or non-actionable feedback) persists in these datasets despite existing cleaning methods based on heuristics and machine learning. Such remaining noise may lead models to generate low-quality review comments, yet removing it requires a complex semantic understanding of both code changes and natural-language comments. In this paper, we investigate the impact of such noise on review comment generation and propose a novel approach that uses large language models (LLMs) to further clean these datasets. Based on an empirical study on a large-scale code review dataset, our LLM-based approach achieves 66-85% precision in detecting valid comments. Fine-tuning state-of-the-art code review models on the predicted valid comments (the cleaned models) yields generated review comments that are 13.0% and 12.4% more similar to valid human-written comments than those of the original models. We also find that the cleaned models generate more informative and relevant comments than the original models. Our findings underscore the critical impact of dataset quality on the performance of review comment generation, and we advocate for further research into cleaning training data to enhance the practical utility and quality of automated code review.
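To make the cleaning step concrete, the sketch below shows one way an LLM could be prompted to label a (diff, comment) pair as valid or noisy before fine-tuning. This is a minimal illustration only: the prompt wording, the model name, and the helper functions are assumptions and do not reflect the paper's actual pipeline or prompts.

```python
# Illustrative sketch (not the authors' method): LLM-based filtering of noisy
# review comments from a (diff, comment) training corpus.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = """You are auditing training data for a code review comment generator.
Given a code change (diff) and a human-written review comment, answer with a
single word: "valid" if the comment is specific, actionable feedback on the
change, or "noisy" if it is vague, off-topic, or non-actionable.

Diff:
{diff}

Comment:
{comment}

Answer:"""


def is_valid_comment(diff: str, comment: str) -> bool:
    """Return True if the LLM judges the comment to be valid training data."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model; the paper's LLM is not named here
        messages=[{"role": "user",
                   "content": PROMPT.format(diff=diff, comment=comment)}],
        temperature=0,
    )
    return response.choices[0].message.content.strip().lower().startswith("valid")


def clean_dataset(pairs):
    """Keep only the (diff, comment) pairs predicted as valid for fine-tuning."""
    return [(d, c) for d, c in pairs if is_valid_comment(d, c)]
```

In such a setup, the filtered output of `clean_dataset` would replace the raw corpus when fine-tuning the downstream code review models, which is the general idea the abstract describes.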