Knowledge Distillation of Large Language Models Based on Chain of Thought
Author:
Affiliation:

1. School of Computer Science and Technology, Xidian University, Xi’an 710000, China; 2. Key Laboratory of Counter-Terrorism Command & Information Engineering of Ministry of Education (Approval), Engineering University of PAP, Xi’an 710086, China

Fund Project:

    Abstract:

    Chain-of-thought (CoT) prompting enables large language models to work through complex tasks step by step, giving them stronger capabilities in commonsense reasoning, mathematical and logical reasoning, and interpretability. The main drawback of CoT, however, is its reliance on massive language models, which typically have tens of billions of parameters and are difficult to deploy at scale. To address this issue, this paper proposes a CoT-based knowledge distillation method for large language models, whose main goal is to fully exploit the reasoning ability of large language models and, through knowledge distillation, guide small models in solving complex tasks. A large model serves as the teacher and a small model as the student; the student is fine-tuned on reasoning data obtained from the teacher. A series of carefully designed techniques, including a modified data generation procedure, clustering-based sampling of question-answer exemplars, heuristic error correction of exemplars, and adaptive answer generation, makes the teacher's generation process more efficient and yields reasoning data of higher quality and in larger quantity. This allows the student model to be fine-tuned more effectively, endowing it with strong reasoning ability and achieving efficient knowledge distillation. The proposed framework aims to establish an effective knowledge transfer mechanism in which the deep reasoning of large models guides small models, providing more intelligent and efficient solutions to complex tasks. In this way, we hope to overcome the challenges of deploying large models and to promote the application and advancement of language models in the real world.
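
    The abstract gives only a high-level description of the pipeline, so the implementation details are not published here. As a minimal, purely illustrative sketch of one step it mentions, the snippet below shows how clustering-based sampling of question exemplars could work: questions are embedded, grouped with k-means, and the question closest to each cluster centre is kept as a demonstration for the teacher model. The sentence-transformers and scikit-learn dependencies, the embedding model name, and the number of demonstrations are assumptions for illustration, not part of the paper.

# Illustrative sketch only (not the authors' released code): clustering-based
# selection of diverse question exemplars, as mentioned in the abstract.
# Assumes the sentence-transformers and scikit-learn packages; the embedding
# model name and the number of demonstrations are placeholder choices.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

def sample_diverse_questions(questions, n_demos=8, seed=0):
    """Return one representative question per k-means cluster."""
    encoder = SentenceTransformer("all-MiniLM-L6-v2")   # assumed embedding model
    embeddings = encoder.encode(questions, normalize_embeddings=True)

    kmeans = KMeans(n_clusters=n_demos, random_state=seed, n_init="auto")
    labels = kmeans.fit_predict(embeddings)

    selected = []
    for c in range(n_demos):
        members = np.where(labels == c)[0]
        # keep the member question closest to the cluster centre
        dists = np.linalg.norm(embeddings[members] - kmeans.cluster_centers_[c], axis=1)
        selected.append(questions[members[np.argmin(dists)]])
    return selected

# Usage: the selected questions would then be answered step by step by the
# teacher model, and the resulting (question, rationale, answer) triples used
# to fine-tune the student model.
# demos = sample_diverse_questions(question_pool, n_demos=8)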

Cite this article

Li Ronghan, Pu Rongcheng, Shen Jianan, Li Dongdong, Miao Qiguang. Knowledge Distillation of Large Language Models Based on Chain of Thought[J]. 数据采集与处理 (Journal of Data Acquisition and Processing), 2024, (3): 547-558

History
  • Received: 2024-04-02
  • Revised: 2024-04-26
  • Accepted:
  • Published online: 2024-06-14