| Title, Authors, Venue | Cited by | Year |
| --- | --- | --- |
| Open Source Language Models Can Provide Feedback: Evaluating LLMs' Ability to Help Students Using GPT-4-As-A-Judge. C Koutcheme, N Dainese, S Sarsa, A Hellas, J Leinonen, P Denny. Proceedings of the 2024 on Innovation and Technology in Computer Science …, 2024 | 17 | 2024 |
| In-context symbolic regression: Leveraging large language models for function discovery. M Merler, K Haitsiukevich, N Dainese, P Marttinen. arXiv preprint arXiv:2404.19094, 2024 | 7 | 2024 |
| Benchmarking Educational Program Repair. C Koutcheme, N Dainese, S Sarsa, J Leinonen, A Hellas, P Denny. arXiv preprint arXiv:2405.05347, 2024 | 6 | 2024 |
| Evaluating language models for generating and judging programming feedback. C Koutcheme, N Dainese, S Sarsa, A Hellas, J Leinonen, S Ashraf, et al. Proceedings of the 56th ACM Technical Symposium on Computer Science …, 2025 | 5 | 2025 |
| Generating Code World Models with Large Language Models Guided by Monte Carlo Tree Search. N Dainese, M Merler, M Alakuijala, P Marttinen. 38th Conference on Neural Information Processing Systems (NeurIPS 2024), 2024 | 5 | 2024 |
| Using Program Repair as a Proxy for Language Models' Feedback Ability in Programming Education. C Koutcheme, N Dainese, A Hellas. Workshop on Innovative Use of NLP for Building Educational Applications, 165-181, 2024 | 3 | 2024 |
| Reader: Model-based language-instructed reinforcement learning. N Dainese, P Marttinen, A Ilin. Conference on Empirical Methods in Natural Language Processing, 16583-16599, 2023 | 3 | 2023 |
| Can docstring reformulation with an LLM improve code generation? N Dainese, A Ilin, P Marttinen. Proceedings of the 18th Conference of the European Chapter of the …, 2024 | 2 | 2024 |
| Deep Reinforcement Learning methods for StarCraft II Learning Environment. N Dainese, 2020 | | 2020 |