Survey of Interpretable Deep TSK Fuzzy Systems
Author:
Affiliation:

School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214000, China

Author biography:

Corresponding author:

Fund projects:

National Natural Science Foundation of China (61972181); Natural Science Foundation of Jiangsu Province (BK20191331); Postgraduate Research & Practice Innovation Program of Jiangsu Province (KYCX22-2315).



Abstract:

Deep neural networks have achieved breakthrough success in many fields, yet most of these deep models are highly opaque. In many high-stakes domains, such as healthcare, finance and transportation, there are stringent requirements on model safety, fairness and transparency. How to build explainable artificial intelligence (XAI) in practice has therefore become a current research hotspot. As a promising route toward XAI, fuzzy artificial intelligence has been attracting growing attention for its semantic interpretability. In particular, combining highly interpretable Takagi-Sugeno-Kang (TSK) fuzzy systems with deep models not only avoids the rule explosion that a single TSK fuzzy system suffers from, but also achieves satisfactory test generalization performance while preserving interpretability. This paper surveys interpretable deep TSK fuzzy systems built on the stacked generalization principle, analyzes their representative models, summarizes their practical application scenarios, and finally examines the challenges and opportunities they face.
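To make the TSK terminology above concrete, the following is a minimal first-order TSK inference sketch in Python. The Gaussian membership functions, the two rules and the input values are hypothetical toy choices for illustration only, not taken from any model surveyed in the paper:

```python
import numpy as np

def gaussian_mf(x, center, sigma):
    """Degree to which x belongs to a Gaussian fuzzy set."""
    return np.exp(-((x - center) ** 2) / (2 * sigma ** 2))

def tsk_predict(x, centers, sigmas, consequents):
    """First-order TSK inference: each rule fires with the product of
    its per-dimension membership degrees; the output is the
    firing-strength-weighted average of the rules' linear consequents."""
    mu = gaussian_mf(x[None, :], centers, sigmas)  # (n_rules, n_dims)
    firing = mu.prod(axis=1)                       # rule firing strengths
    weights = firing / firing.sum()                # normalized strengths
    x_aug = np.concatenate(([1.0], x))             # [1, x1, ..., xd]
    rule_outputs = consequents @ x_aug             # one linear model per rule
    return float(weights @ rule_outputs)

# Two hypothetical rules over a 2-D input:
#   IF x is near (0, 0) THEN y = x1 + x2
#   IF x is near (1, 1) THEN y = 1 - x1
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
sigmas = np.array([[0.5, 0.5], [0.5, 0.5]])
consequents = np.array([[0.0, 1.0, 1.0],
                        [1.0, -1.0, 0.0]])
y = tsk_predict(np.array([0.2, 0.3]), centers, sigmas, consequents)
```

Each rule reads as an IF-THEN statement, which is the source of the semantic interpretability referred to above; the rule-explosion problem arises because, when fuzzy sets are defined per input dimension, the number of such rules grows exponentially with the input dimensionality.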

    Abstract:

While existing deep neural networks have achieved great success in various application scenarios, they remain black boxes, which makes them poorly suited to high-stakes fields such as healthcare, finance and transportation. Explainable artificial intelligence (XAI) has therefore become a hot research topic in recent years. Among existing XAI approaches, fuzzy AI systems can strike an excellent trade-off between performance and interpretability, so interpretable deep Takagi-Sugeno-Kang (TSK) fuzzy systems have been drawing more and more attention. We first state the concept of classical TSK fuzzy systems, then give a comprehensive overview of interpretable deep TSK fuzzy systems based on the stacked generalization principle, covering their structures, representative models and application scenarios, and finally discuss their future development directions in light of the existing problems.
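The stacked generalization principle mentioned in the abstract can be shown schematically. The sketch below uses plain least-squares units as stand-ins for TSK modules purely to illustrate the layering idea, where each level receives the raw features plus the previous level's prediction as an extra input; it is not the construction of any specific surveyed model, and the data are synthetic:

```python
import numpy as np

# Synthetic regression data (hypothetical, for illustration only).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

def fit_unit(X, y):
    """Least-squares fit of one stacking unit (stand-in for a TSK module)."""
    Xa = np.hstack([np.ones((len(X), 1)), X])
    w, *_ = np.linalg.lstsq(Xa, y, rcond=None)
    return w

def predict_unit(w, X):
    Xa = np.hstack([np.ones((len(X), 1)), X])
    return Xa @ w

# Stacked generalization: level l sees the raw features plus the
# prediction of level l-1 appended as one extra input feature.
features, units = X, []
for level in range(3):
    w = fit_unit(features, y)
    units.append(w)
    pred = predict_unit(w, features)
    features = np.hstack([X, pred[:, None]])
```

Because each unit in the stack stays small and interpretable while later levels refine earlier outputs, this layout is what lets deep TSK systems deepen the model without multiplying the rule base of any single TSK system.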

Cite this article:

WANG Shitong, XIE Runshan, ZHOU Erhao. Survey of interpretable deep TSK fuzzy systems[J]. Journal of Data Acquisition and Processing, 2022, 37(5): 935-951.

History
  • Received: 2021-08-12
  • Revised: 2022-09-01
  • Accepted:
  • Published online: 2022-09-25