Research & Position Papers
Submission Link: https://llm4code2025.hotcrp.com/
This workshop accepts both research papers and position papers:
- Research papers (4–8 pages, including references) on novel approaches, tools, datasets, or studies.
- Position papers (1–4 pages, including references) on novel ideas and positions that are not yet fully developed.
All papers must be submitted via HotCRP and will be reviewed double-blind. Submissions should comply with the IEEE format (in line with ICSE 2025) and present an original contribution. At least one author of each accepted paper must register for the workshop and present the paper there.
By default, all accepted papers will appear in the ICSE 2025 workshop proceedings (the archival option). We also provide a non-archival option in the submission system (HotCRP) for authors who prefer not to have their papers included in the proceedings; for non-archival papers, the camera-ready version will only be posted and advertised on our workshop website (not listed on DBLP).

Regardless of which option you choose (archival or non-archival), the submission must be fully original (not accepted or published anywhere else) at submission time, and at least one author must register for the workshop and present.

The official publication date of the workshop proceedings is the date the proceedings are made available by IEEE. This date may be up to two weeks prior to the first day of ICSE 2025 and affects the deadline for any patent filings related to published work.
🏆 We will announce the best paper awards (up to 10% of papers) during the workshop!
Topics
This workshop welcomes submissions on the following topics, among others:
- LLM applications to code-related tasks
  - LLM-based code generation
  - LLM-based fuzzing
  - LLM-based test generation
  - LLM-based GUI testing
  - LLM-based mobile application testing
  - LLM-based fault localization
  - LLM-based program repair
  - LLM-based vulnerability detection
  - LLM-based program analysis
  - LLM-based code comprehension
  - LLM-based reverse engineering
  - LLM-based code evolution and maintenance
  - LLM-based refactoring
  - LLM-based software engineering agents
- Datasets and evaluation
  - Datasets for LLM4Code pre-training
  - Datasets for LLM4Code post-training
  - Datasets for LLM4Code evaluation
  - Automated dataset generation/augmentation (e.g., via LLMs)
  - Empirical studies on LLM4Code
- Model design and optimization for LLM4Code
  - Model architecture design
  - Model hyperparameter tuning
  - Prompt tuning and prompt engineering
  - Pre-training objective design
  - Model alignment
  - Model distillation
  - Model optimization/quantization