Acknowledgments: Funding and Support for Explanatory Feedback Research
We acknowledge funding from the Richard King Mellon Foundation and the Learning Engineering Virtual Institute, as well as the invaluable guidance of key collaborators on this research.

Table of Links
2. Background
2.1 Effective Tutoring Practice
2.2 Feedback for Tutor Training
2.3 Sequence Labeling for Feedback Generation
2.4 Large Language Models in Education
3. Method
3.1 Dataset and 3.2 Sequence Labeling
3.3 GPT Facilitated Sequence Labeling
4. Results
6. Limitation and Future Works
APPENDIX
B. Input for Fine-Tuning GPT-3.5
C. Scatter Matrix of the Correlation on the Outcome-based Praise
D. Detailed Results of Fine-Tuned GPT-3.5 Model's Performance
8. ACKNOWLEDGMENTS
This work is supported by funding from the Richard King Mellon Foundation (Grant #10851) and the Learning Engineering Virtual Institute (https://learning-engineering-virtual-institute.org/). Any opinions, findings, and conclusions expressed in this paper are those of the authors. We also wish to express our gratitude to Dr. Ralph Abboud and Dr. Carolyn P. Rosé for their invaluable guidance and recommendations, and to Yiyang Zhao and Yuting Wang for their assistance in verifying the rating scheme.
:::info This paper is available on arXiv under a CC BY 4.0 DEED license.
:::
:::info Authors:
(1) Jionghao Lin, Carnegie Mellon University (jionghal@cs.cmu.edu);
(2) Eason Chen, Carnegie Mellon University (easonc13@cmu.edu);
(3) Zifei Han, University of Toronto (feifei.han@mail.utoronto.ca);
(4) Ashish Gurung, Carnegie Mellon University (agurung@andrew.cmu.edu);
(5) Danielle R. Thomas, Carnegie Mellon University (drthomas@cmu.edu);
(6) Wei Tan, Monash University (wei.tan2@monash.edu);
(7) Ngoc Dang Nguyen, Monash University (dan.nguyen2@monash.edu);
(8) Kenneth R. Koedinger, Carnegie Mellon University (koedinger@cmu.edu).
:::