
Unlocking Repair Success: The Power of Local Context in Software Mining


Tokenization, a crucial step in natural language processing (NLP), breaks text down into smaller units called tokens, and a model can only attend to a fixed number of tokens at once. This input window puts a hard limit on how much context can be added: beyond a certain point, further increases have no effect because the extra tokens simply do not fit. In this article, we explore the impact of varying context sizes on repair success rates in neural program repair (NPR) models and highlight the importance of systematically documenting context choices.
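To make the window constraint concrete, here is a minimal Python sketch of how context is lost once the token budget runs out. The whitespace tokenizer and the 512-token budget are illustrative stand-ins, not the actual tokenizer or limit used in the work discussed here:

```python
def tokenize(text: str) -> list[str]:
    # Stand-in for a real subword tokenizer (e.g. BPE); illustration only.
    return text.split()

def truncate_to_window(source_lines: list[str], max_tokens: int = 512) -> list[str]:
    """Accumulate context lines until the model's token budget is exhausted.

    Past this point, adding more context has no effect: the extra lines
    would fall outside the fixed-size input window anyway.
    """
    kept, used = [], 0
    for line in source_lines:
        cost = len(tokenize(line))
        if used + cost > max_tokens:
            break  # window full; any remaining context is silently dropped
        kept.append(line)
        used += cost
    return kept
```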
Context Matters: Varying Context Sizes Impact Repair Success
Context plays a significant role in repair success: varying the context size from 2 to 56 lines yields relative improvements of 16% to 29%. Extending the context further still brings relative gains, and these gains are comparable to those offered by some entire automated program repair (APR) approaches. Ensembles leveraging up to 80 lines of context perform well, and other context strategies may improve performance further.
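As a rough illustration of what "varying context sizes" means in practice, the following hypothetical sweep builds inputs with different amounts of context around a buggy line. The even pre/post split and the specific sizes are assumptions made for the sketch, not the paper's protocol:

```python
def extract_context(file_lines: list[str], bug_line: int, context_lines: int) -> list[str]:
    """Return `context_lines` lines of context, split evenly around the bug."""
    before = context_lines // 2
    after = context_lines - before
    start = max(0, bug_line - before)
    end = min(len(file_lines), bug_line + 1 + after)
    return file_lines[start:end]

file_lines = [f"line {i}" for i in range(200)]  # stand-in for a buggy source file

# Sweep context sizes in the range the article discusses (2 to 56 lines).
for size in (2, 8, 16, 32, 56):
    snippet = extract_context(file_lines, bug_line=120, context_lines=size)
    # ...feed `snippet` to the NPR model and record whether the repair succeeds
```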
Systematic Documentation is Key: Clearly Documenting Context Choices
Given the impact of context on repair success, it is essential to systematically document the context size used in any experiment. Approaches in the literature, however, do not always report this clearly, so detailed information about context choices is crucial. A vague description like "10 lines of code surrounding the buggy code" can be interpreted in different ways: it may mean 10 lines before and 10 lines after the bug, or 10 lines in total split across the two sides, and the two readings produce very different inputs.
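The ambiguity is easy to see in code. These two hypothetical helpers are both defensible readings of "10 lines of code surrounding the buggy code", yet they return different snippets:

```python
def surrounding_total(lines: list[str], bug_line: int, n: int = 10) -> list[str]:
    # Reading 1: n lines in total, split around the bug (5 before, 5 after for n=10).
    before = n // 2
    return lines[max(0, bug_line - before): bug_line + 1 + (n - before)]

def surrounding_each_side(lines: list[str], bug_line: int, n: int = 10) -> list[str]:
    # Reading 2: n lines before AND n lines after (20 lines of context for n=10).
    return lines[max(0, bug_line - n): bug_line + 1 + n]
```

The second reading feeds the model twice as much context as the first, and given the 16% to 29% swings reported above, that difference is too large to leave undocumented.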
Context Window Position Matters: Distributing Context Pre and Post Bug
Beyond systematically documenting the context size, it also matters how the context is distributed before (pre) and after (post) the buggy code: the same total budget can be split evenly across the two sides or skewed toward one. The article calls on the community to clearly document this choice as well.
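One way to make both choices unambiguous is to parametrize them separately. This sketch shows the kind of self-documenting configuration the article's recommendation suggests; the names are hypothetical, not an interface from the paper:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ContextConfig:
    """Explicit, reportable context choices for an NPR experiment."""
    pre_lines: int   # lines of context before the buggy line
    post_lines: int  # lines of context after the buggy line

def extract(lines: list[str], bug_line: int, cfg: ContextConfig) -> list[str]:
    start = max(0, bug_line - cfg.pre_lines)
    end = min(len(lines), bug_line + 1 + cfg.post_lines)
    return lines[start:end]

# "10 lines surrounding the bug", stated without ambiguity:
cfg = ContextConfig(pre_lines=5, post_lines=5)
```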
Conclusion: Tokenization and Context Size Impact Repair Success
Tokenization is a critical step in NLP, and the fixed input window it feeds means that context cannot grow usefully beyond a certain point. Within that limit, the size of the context has a clear impact on repair success rates, and systematically documenting the choices made helps the community compare and improve NPR models. By also reporting how context is distributed before and after the bug, we can tune our models for better repair success rates.