
An Empirical Analysis on Large Language Models in Debate Evaluation


Author

Pinxin Liu

Mentors

Hangfeng He and Chenliang Xu

Abstract

In this study, we investigate the capabilities and inherent biases of advanced large language models (LLMs) such as GPT-3.5 and GPT-4 in the context of debate evaluation. We find that LLMs' performance exceeds that of humans and surpasses state-of-the-art methods fine-tuned on extensive datasets. We additionally explore and analyze biases present in LLMs, including positional bias, lexical bias, and order bias, which may affect their evaluative judgments. Our findings reveal a consistent bias in both GPT-3.5 and GPT-4 towards the second candidate response presented, attributable to prompt design. We also uncover a lexical bias in both GPT-3.5 and GPT-4, especially when label sets carry connotations such as sequential or numerical order, highlighting the critical need for careful selection of label verbalizers in prompt design. Additionally, our analysis indicates that both models tend to favor the debate's concluding side as the winner, suggesting an end-of-discussion bias. Finally, we find that prompt engineering strategies are not effective in alleviating these biases.
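The positional and order bias probes described above amount to judging the same debate twice with the presentation order of the two sides swapped and checking whether the verdict changes. The following minimal sketch illustrates that idea; the prompt wording and the `query_llm` helper are illustrative assumptions standing in for any chat-completion call, not the study's actual prompts or code.

```python
# Sketch of a positional/order bias probe for an LLM debate judge.
# The same debate is evaluated twice, once with Pro presented first and
# once with Con presented first; a verdict that flips with the order is
# evidence of positional bias.

def build_prompt(first_label, first_args, second_label, second_args):
    """Assemble an evaluation prompt; the only difference between the two
    probes is which side is presented first."""
    return (
        "You are judging a debate. Decide which side argued better.\n\n"
        f"{first_label} side arguments:\n{first_args}\n\n"
        f"{second_label} side arguments:\n{second_args}\n\n"
        f"Answer with exactly one word: {first_label} or {second_label}."
    )

def query_llm(prompt: str) -> str:
    """Hypothetical placeholder for an actual LLM call (e.g. a GPT-3.5 or
    GPT-4 chat completion); assumed to return the model's verdict as text."""
    raise NotImplementedError

def probe_positional_bias(pro_args: str, con_args: str) -> dict:
    """Judge the same debate with both presentation orders and report
    whether the verdict depends on position."""
    pro_first = query_llm(build_prompt("Pro", pro_args, "Con", con_args)).strip()
    con_first = query_llm(build_prompt("Con", con_args, "Pro", pro_args)).strip()
    return {
        "pro_first_verdict": pro_first,
        "con_first_verdict": con_first,
        "order_sensitive": pro_first != con_first,
    }
```

Running such a probe over many debates and aggregating how often `order_sensitive` is true gives one way to quantify the positional bias discussed above; the lexical bias analysis can be framed analogously by swapping the label verbalizers (e.g. numbered versus named sides) while holding the debate content fixed.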
