MSR 2025
Mon 28 - Tue 29 April 2025 Ottawa, Ontario, Canada
co-located with ICSE 2025
Mon 28 Apr 2025, 16:20 - 16:30, at 214 - LLMs for Code. Chair(s): Ali Ouni

The rapid advancement of large language models (LLMs) has greatly expanded the potential for automating code-related tasks. Two primary methodologies are used in this domain: prompt engineering and fine-tuning. Prompt engineering applies different strategies to query general-purpose LLMs such as ChatGPT, while fine-tuning adapts pre-trained models, such as CodeBERT, by training them further on task-specific data. Despite the growth of both areas, a comprehensive comparative analysis of the two approaches for code models is still lacking. In this paper, we evaluate GPT-4 using three prompt engineering strategies (basic prompting, in-context learning, and task-specific prompting) and compare it against 17 fine-tuned LLMs on three code-related tasks: code summarization, code generation, and code translation. Our results indicate that GPT-4 with prompt engineering does not consistently outperform fine-tuned models. For instance, in code generation, GPT-4 trails the fine-tuned models by 28.3 percentage points on the MBPP dataset, and it shows mixed results for code translation. We also conducted a user study with 27 graduate students and 10 industry practitioners. The study revealed that GPT-4 with conversational prompts, which incorporate human feedback during the interaction, improved performance significantly compared to automated prompting; participants often provided explicit instructions or added context during these interactions. These findings suggest that GPT-4 with conversational prompting holds significant promise for automated code-related tasks, whereas fully automated prompt engineering without human involvement still requires further investigation.
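
To make the three automated prompting strategies concrete, below is a minimal sketch in Python of how such prompts might be assembled for the code summarization task. The prompt wordings, the example snippet, and the helper names are illustrative assumptions for exposition, not the exact templates used in the paper.

# Illustrative sketch of the three automated prompting strategies compared in the
# paper (basic prompting, in-context learning, task-specific prompting), applied to
# code summarization. Prompt wordings below are assumptions, not the paper's templates.

CODE_TO_SUMMARIZE = """\
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)
"""

def basic_prompt(code: str) -> str:
    # Basic prompting: a single instruction, no examples, no task-specific guidance.
    return f"Summarize the following Python function:\n\n{code}"

def in_context_prompt(code: str, examples: list[tuple[str, str]]) -> str:
    # In-context learning: prepend a few (code, summary) demonstrations before the query.
    shots = "\n\n".join(f"Code:\n{c}\nSummary: {s}" for c, s in examples)
    return f"{shots}\n\nCode:\n{code}\nSummary:"

def task_specific_prompt(code: str) -> str:
    # Task-specific prompting: constrain the role, output format, and content.
    return (
        "You are an expert Python developer. Write a one-sentence docstring-style "
        "summary stating what the function does, its input, and its return value.\n\n"
        f"{code}"
    )

if __name__ == "__main__":
    few_shot_examples = [
        ("def add(a, b):\n    return a + b", "Returns the sum of two numbers."),
    ]
    variants = [
        ("basic", basic_prompt(CODE_TO_SUMMARIZE)),
        ("in-context", in_context_prompt(CODE_TO_SUMMARIZE, few_shot_examples)),
        ("task-specific", task_specific_prompt(CODE_TO_SUMMARIZE)),
    ]
    for name, prompt in variants:
        print(f"--- {name} ---\n{prompt}\n")

Each variant would be sent to the model as-is in the automated setting; the conversational setting studied in the user study differs in that a human inspects the response and adds instructions or context in follow-up turns.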

Mon 28 Apr

Displayed time zone: Eastern Time (US & Canada)

16:00 - 17:30
LLMs for Code
Technical Papers / Data and Tool Showcase Track / Tutorials at 214
Chair(s): Ali Ouni ETS Montreal, University of Quebec
16:00
10m
Talk
How Much Do Code Language Models Remember? An Investigation on Data Extraction Attacks before and after Fine-tuning
Technical Papers
Fabio Salerno Delft University of Technology, Ali Al-Kaswan Delft University of Technology, Netherlands, Maliheh Izadi Delft University of Technology
16:10
10m
Talk
Can LLMs Generate Higher Quality Code Than Humans? An Empirical Study
Technical Papers
Mohammad Talal Jamil Lahore University of Management Sciences, Shamsa Abid National University of Computer and Emerging Sciences, Shafay Shamail Lahore University of Management Sciences (LUMS), Lahore
Pre-print
16:20
10m
Talk
Prompt Engineering or Fine-Tuning: An Empirical Assessment of LLMs for Code
Technical Papers
Jiho Shin York University, Clark Tang, Tahmineh Mohati University of Calgary, Maleknaz Nayebi York University, Song Wang York University, Hadi Hemmati York University
16:30
5m
Talk
Drawing Pandas: A Benchmark for LLMs in Generating Plotting Code
Data and Tool Showcase Track
Timur Galimzyanov JetBrains Research, Sergey Titov JetBrains Research, Yaroslav Golubev JetBrains Research, Egor Bogomolov JetBrains Research
Pre-print
16:35
5m
Talk
SnipGen: A Mining Repository Framework for Evaluating LLMs for Code
Data and Tool Showcase Track
Daniel Rodriguez-Cardenas William & Mary, Alejandro Velasco William & Mary, Denys Poshyvanyk William & Mary
Pre-print
16:50
40m
Tutorial
Harmonized Coding with AI: LLMs for Qualitative Analysis in Software Engineering Research
Tutorials
Christoph Treude Singapore Management University, Youmei Fan Nara Institute of Science and Technology, Tao Xiao Kyushu University, Hideaki Hata Shinshu University