Towards Detecting Prompt Knowledge Gaps for Improved LLM-guided Issue Resolution
This program is tentative and subject to change.
Large language models (LLMs) have become essential in software development, especially for issue resolution. However, despite their widespread use, significant challenges persist in the quality of LLM responses to issue resolution queries. LLM interactions often yield incorrect, incomplete, or ambiguous information, largely due to knowledge gaps in prompt design, which can lead to unproductive exchanges and reduced developer productivity. In this paper, we analyze 433 developer-ChatGPT conversations within GitHub issue threads to examine the impact of prompt knowledge gaps and conversation styles on issue resolution. We identify four main knowledge gaps in developer prompts: Missing Context, Missing Specifications, Multiple Context, and Unclear Instructions. Assuming that conversations within closed issues contributed to successful resolutions while those in open issues did not, we find that ineffective conversations contain knowledge gaps in 54.7% of prompts, compared to only 13.2% in effective ones. Additionally, we observe seven distinct conversational styles, with Directive Prompting, Chain of Thought, and Responsive Feedback being the most prevalent. We find that knowledge gaps are present in all styles of conversations, with Missing Context being the most frequent challenge developers face in issue-resolution conversations. Based on our analysis, we identify key textual and code-related heuristics (Specificity, Contextual Richness, and Clarity) that are associated with successful issue closure and help assess prompt quality. These heuristics lay the foundation for an automated tool that can dynamically flag unclear prompts and suggest structured improvements. To test feasibility, we developed a lightweight browser extension prototype for detecting prompt gaps that can be easily adapted to other tools within developer workflows.
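To make the three heuristics concrete, below is a minimal, hypothetical sketch of how a prompt-gap detector might approximate Specificity, Contextual Richness, and Clarity with simple textual signals. This is not the authors' tool or the browser extension prototype described in the abstract; the function names (score_prompt, flag_gaps), the regex signals, and the 0.5 threshold are all assumptions made for illustration only.

```python
# Hypothetical sketch (not the paper's implementation): approximates the
# three prompt-quality heuristics named in the abstract with simple
# textual signals. All signals and thresholds are illustrative assumptions.
import re


def score_prompt(prompt: str) -> dict[str, float]:
    """Return rough 0-1 scores for Specificity, Contextual Richness, Clarity."""
    words = prompt.split()

    # Specificity: concrete artifacts (file names, version numbers) suggest
    # the prompt pins down what is being asked about.
    specific_tokens = re.findall(r"[\w./-]+\.(?:py|js|java|cpp)|v?\d+\.\d+", prompt)
    specificity = min(1.0, len(specific_tokens) / 3)

    # Contextual Richness: code snippets and error output give the model
    # material to ground its answer in.
    has_code_block = prompt.count("`") >= 6  # at least one fenced snippet
    has_error_text = bool(re.search(r"(?i)traceback|error|exception", prompt))
    contextual_richness = (has_code_block + has_error_text) / 2

    # Clarity: an explicit question or directive, and a length that is
    # neither a bare one-liner nor an unbounded wall of text.
    has_ask = "?" in prompt or bool(
        re.match(r"(?i)(fix|explain|write|refactor|why|how)\b", prompt.strip())
    )
    reasonable_length = 10 <= len(words) <= 500
    clarity = (has_ask + reasonable_length) / 2

    return {
        "specificity": specificity,
        "contextual_richness": contextual_richness,
        "clarity": clarity,
    }


def flag_gaps(prompt: str, threshold: float = 0.5) -> list[str]:
    """List heuristics scoring below the (assumed) threshold."""
    return [name for name, s in score_prompt(prompt).items() if s < threshold]


if __name__ == "__main__":
    # A vague prompt: flags 'specificity' and 'contextual_richness'.
    print(flag_gaps("fix my code"))
```

A real detector of this kind would presumably learn its signals and thresholds from labeled conversations rather than hard-coding them, but even this crude version shows how under-specified prompts such as "fix my code" can be flagged before they are sent.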
Tue 29 Apr | Displayed time zone: Eastern Time (US & Canada)
14:00 - 15:30

14:00 (10m) Talk | Automatic High-Level Test Case Generation using Large Language Models | Technical Papers | Navid Bin Hasan (Bangladesh University of Engineering and Technology), Md. Ashraful Islam (Bangladesh University of Engineering and Technology), Junaed Younus Khan (Bangladesh University of Engineering and Technology), Sanjida Senjik (Bangladesh University of Engineering and Technology), Anindya Iqbal (Bangladesh University of Engineering and Technology, Dhaka, Bangladesh)

14:10 (10m) Talk | Prompting in the Wild: An Empirical Study of Prompt Evolution in Software Repositories | Technical Papers | Mahan Tafreshipour (University of California, Irvine), Aaron Imani (University of California, Irvine), Eric Huang (University of California, Irvine), Eduardo Santana de Almeida (Federal University of Bahia), Thomas Zimmermann (University of California, Irvine), Iftekhar Ahmed (University of California, Irvine) | Pre-print

14:20 (10m) Talk | Towards Detecting Prompt Knowledge Gaps for Improved LLM-guided Issue Resolution | Technical Papers | Ramtin Ehsani (Drexel University), Sakshi Pathak (Drexel University), Preetha Chatterjee (Drexel University, USA) | Pre-print

14:30 (10m) Talk | Intelligent Semantic Matching (ISM) for Video Tutorial Search using Transformer Models | Technical Papers

14:40 (10m) Talk | Language Models in Software Development Tasks: An Experimental Analysis of Energy and Accuracy | Technical Papers | Negar Alizadeh (Universiteit Utrecht), Boris Belchev (University of Twente), Nishant Saurabh (Utrecht University), Patricia Kelbert (Fraunhofer IESE), Fernando Castor (University of Twente)

14:50 (10m) Talk | TriGraph: A Probabilistic Subgraph-Based Model for Visual Code Completion in Pure Data | Technical Papers | Anisha Islam (Department of Computing Science, University of Alberta), Abram Hindle (University of Alberta)

15:00 (5m) Talk | Inferring Questions from Programming Screenshots | Technical Papers | Faiz Ahmed (York University), Xuchen Tan (York University), Folajinmi Adewole (York University), Suprakash Datta (York University), Maleknaz Nayebi (York University)

15:05 (5m) Talk | Human-In-The-Loop Software Development Agents: Challenges and Future Directions | Industry Track | Jirat Pasuksmit (Atlassian), Wannita Takerngsaksiri (Monash University), Patanamon Thongtanunam (University of Melbourne), Kla Tantithamthavorn (Monash University), Ruixiong Zhang (Atlassian), Shiyan Wang (Atlassian), Fan Jiang (Atlassian), Jing Li (Atlassian), Evan Cook (Atlassian), Kun Chen (Atlassian), Ming Wu (Atlassian)

15:10 (5m) Talk | FormalSpecCpp: A Dataset of C++ Formal Specifications Created Using LLMs | Data and Tool Showcase Track | Madhurima Chakraborty (University of California, Riverside), Peter Pirkelbauer (Lawrence Livermore National Laboratory), Qing Yi (Lawrence Livermore National Laboratory)