LARK: NLP & AI Research Lab @ CU
Anchored Answers: Unravelling Positional Bias in GPT-2's Multiple-Choice Questions
Using principles from mechanistic interpretability, we investigate and correct positional bias in GPT-2 models on multiple-choice questions.
Ruizhe Li, Yanjun Gao
Learning to Maximize Mutual Information for Chain-of-Thought Distillation
We propose a new training paradigm for Chain-of-Thought distillation from the perspective of the information bottleneck.
Xin Chen, Hanxian Huang, Yanjun Gao, Yi Wang, Jishen Zhao, Ke Ding