Fund Project:

National Natural Science Foundation of China (62376057); Southeast University Start-up Research Fund (RF1028623234).


Research Progress in Evaluation Techniques for Large Language Models
Author: 赵睿卓, 曲紫畅, 陈国英, 王坤龙, 徐哲炜, 柯文俊, 汪鹏
Affiliation:

1.Beijing Computer Technology and Applied Research Institute, Beijing 100854, China;2.School of Computer Science and Engineering, Southeast University, Nanjing 211189, China


    Abstract:

    With the widespread application of large language models (LLMs), their evaluation has become crucial. Beyond measuring performance on downstream tasks, evaluation must also cover potential risks, such as the possibility that LLMs violate human values or are induced by malicious inputs into unsafe behavior. This paper analyzes the commonalities and differences among traditional software, deep learning models, and large models, and, drawing on the metric systems of traditional software testing and deep learning model evaluation, summarizes existing work along the dimensions of functional evaluation, performance evaluation, alignment evaluation, and security evaluation of LLMs. It also introduces evaluation benchmarks for large models. Finally, based on existing research and potential opportunities and challenges, the future directions and development prospects of LLM evaluation techniques are discussed.

Cite this article:

赵睿卓, 曲紫畅, 陈国英, 王坤龙, 徐哲炜, 柯文俊, 汪鹏. Research Progress in Evaluation Techniques for Large Language Models[J]. Journal of Data Acquisition and Processing, 2024, (3): 502-523.

History
  • Received: 2024-03-29
  • Revised: 2024-05-10
  • Published online: 2024-06-14