MSR 2025
Mon 28 - Tue 29 April 2025 Ottawa, Ontario, Canada
co-located with ICSE 2025
Mon 28 Apr 2025 16:35 - 16:40 at 214 - LLMs for Code Chair(s): Ali Ouni

Large Language Models (LLMs), such as transformer-based neural networks with billions of parameters, have become increasingly prevalent in software engineering (SE). These models, trained on extensive datasets that include code repositories, exhibit remarkable capabilities for SE tasks. However, evaluating their effectiveness poses significant challenges, primarily due to the potential overlap between the datasets used for training and those employed for evaluation. To address this issue, we introduce SnipGen, a comprehensive repository mining tool designed to leverage prompt engineering across various downstream tasks for code generation. SnipGen aims to mitigate data contamination by generating robust testbeds and crafting tailored data points to assist researchers and practitioners in evaluating LLMs on code-related tasks. In our exploratory study, SnipGen mined approximately 227K data points from 338K recent code changes in GitHub commits, focusing on method-level granularity. SnipGen features a collection of prompt templates that can be combined to create a Chain-of-Thought-like sequence of prompts, enabling a nuanced assessment of LLMs’ code generation quality. By providing the mining tool, the methodology, and the dataset, SnipGen empowers researchers and practitioners to rigorously evaluate and interpret LLMs’ performance in software engineering contexts.
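The idea of combining prompt templates into a Chain-of-Thought-like sequence could be sketched as follows. This is a minimal illustrative sketch, not SnipGen's actual API: the template names, placeholders, and the `build_chain` helper are all hypothetical assumptions for exposition.

```python
# Hypothetical sketch of SnipGen-style prompt chaining. Template names and
# structure are illustrative assumptions, not SnipGen's real implementation.

# A small library of prompt templates, each targeting one reasoning step
# for a method-level code generation task.
TEMPLATES = {
    "context": ('Given the following method signature and docstring:\n'
                '{signature}\n"""{docstring}"""'),
    "reason": "First, describe step by step what the method should do.",
    "generate": "Now write the method body in Python.",
    "critique": "Review the generated body for edge cases and correct it if needed.",
}


def build_chain(signature: str, docstring: str,
                steps=("context", "reason", "generate", "critique")) -> list:
    """Compose templates into an ordered, Chain-of-Thought-like prompt sequence."""
    prompts = []
    for step in steps:
        # Unused placeholders in a template are simply ignored by format().
        prompts.append(TEMPLATES[step].format(signature=signature,
                                              docstring=docstring))
    return prompts


chain = build_chain("def add(a, b):", "Return the sum of a and b.")
for prompt in chain:
    print(prompt)
```

Each element of the returned list would be sent to the model in turn, with earlier responses carried forward as conversation context, so later prompts (e.g. the critique step) can refine earlier outputs.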

Mon 28 Apr

Displayed time zone: Eastern Time (US & Canada)

16:00 - 17:30
LLMs for Code (Technical Papers / Data and Tool Showcase Track / Tutorials) at 214
Chair(s): Ali Ouni ETS Montreal, University of Quebec
16:00
10m
Talk
How Much Do Code Language Models Remember? An Investigation on Data Extraction Attacks before and after Fine-tuning
Technical Papers
Fabio Salerno Delft University of Technology, Ali Al-Kaswan Delft University of Technology, Netherlands, Maliheh Izadi Delft University of Technology
16:10
10m
Talk
Can LLMs Generate Higher Quality Code Than Humans? An Empirical Study
Technical Papers
Mohammad Talal Jamil Lahore University of Management Sciences, Shamsa Abid National University of Computer and Emerging Sciences, Shafay Shamail LUMS, DHA, Lahore
Pre-print
16:20
10m
Talk
Prompt Engineering or Fine-Tuning: An Empirical Assessment of LLMs for Code
Technical Papers
Jiho Shin York University, Clark Tang, Tahmineh Mohati University of Calgary, Maleknaz Nayebi York University, Song Wang York University, Hadi Hemmati York University
16:30
5m
Talk
Drawing Pandas: A Benchmark for LLMs in Generating Plotting Code
Data and Tool Showcase Track
Timur Galimzyanov JetBrains Research, Sergey Titov JetBrains Research, Yaroslav Golubev JetBrains Research, Egor Bogomolov JetBrains Research
Pre-print
16:35
5m
Talk
SnipGen: A Mining Repository Framework for Evaluating LLMs for Code
Data and Tool Showcase Track
Daniel Rodriguez-Cardenas William & Mary, Alejandro Velasco William & Mary, Denys Poshyvanyk William & Mary
Pre-print
16:50
40m
Tutorial
Harmonized Coding with AI: LLMs for Qualitative Analysis in Software Engineering Research
Tutorials
Christoph Treude Singapore Management University, Youmei Fan Nara Institute of Science and Technology, Tao Xiao Kyushu University, Hideaki Hata Shinshu University