A Point Cloud Classification and Segmentation Network Fusing Fine-Grained Feature Encoding
DOI:
Author:
Affiliation:

1. Liaoning Technical University; 2. Shenyang Ligong University

Author biography:

Corresponding author:

Fund project:

Applied Basic Research Project of the Department of Science & Technology of Liaoning Province (2022JH2/101300274)

    Abstract:

Effective extraction of point cloud features is key to analyzing and processing 3D point cloud scenes. To address the problem that current deep learning methods extract feature information insufficiently and struggle to capture deep semantic information, a network fusing fine-grained feature encoding is proposed to improve accuracy on point cloud classification and segmentation tasks. First, the feature extraction module contains two sub-modules: a dilated graph convolution module, which extracts richer geometric information than plain graph convolution, and a fine-grained feature encoding module, which captures the detailed features of local regions. Second, the two streams are fused dynamically through learnable parameters, allowing the contextual information of each point to be learned effectively. Finally, all extracted features are summed and passed through a channel-wise affinity attention module, which emphasizes distinct channels and helps the feature map avoid potential redundancy. In classification experiments on the ModelNet40 and ScanObjectNN datasets, the network achieves overall accuracies of 93.3% and 80.0%, respectively; in part segmentation experiments on the ShapeNet Part dataset, it achieves a mean IoU of 85.6%. The results show that the proposed network compares favorably with current mainstream methods.
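
The abstract gives only a high-level description of the architecture. As a rough illustration of the pipeline it outlines (a dilated graph convolution stream, a fine-grained local encoding stream, learnable-weight fusion, and channel-wise affinity attention), the following is a minimal PyTorch sketch. Every module design, name, and hyperparameter here (knn_indices, DilatedGraphConv, FineGrainedEncoder, ChannelAffinityAttention, the scalar fusion weight alpha, k=20, dilation=2) is an assumption made for illustration, not the authors' implementation.

```python
# Minimal PyTorch sketch of the pipeline the abstract describes. All names,
# shape conventions, and hyperparameters are assumptions for illustration;
# the authors' actual modules are not specified in the abstract.
import torch
import torch.nn as nn


def knn_indices(x, k, dilation=1):
    """Return (B, N, k) neighbour indices, taking every `dilation`-th neighbour
    so the graph covers a wider region at the same neighbourhood size.
    x: (B, C, N) point features."""
    inner = -2 * torch.matmul(x.transpose(2, 1), x)           # (B, N, N)
    sq = torch.sum(x ** 2, dim=1, keepdim=True)               # (B, 1, N)
    neg_dist = -sq - inner - sq.transpose(2, 1)               # negative squared distance
    idx = neg_dist.topk(k * dilation, dim=-1).indices         # (B, N, k*dilation)
    return idx[:, :, ::dilation]                              # dilated sampling


def edge_features(x, idx):
    """DGCNN-style edge features [x_j - x_i, x_i] -> (B, 2C, N, k)."""
    B, C, N = x.shape
    k = idx.shape[-1]
    offset = torch.arange(B, device=x.device).view(B, 1, 1) * N
    flat = (idx + offset).reshape(-1)
    pts = x.transpose(2, 1).reshape(B * N, C)
    nbrs = pts[flat].view(B, N, k, C)
    ctr = x.transpose(2, 1).unsqueeze(2).expand(B, N, k, C)
    return torch.cat([nbrs - ctr, ctr], dim=-1).permute(0, 3, 1, 2)


class DilatedGraphConv(nn.Module):
    """Graph convolution over a dilated kNN graph: wider geometric context
    than a plain kNN graph at the same k."""
    def __init__(self, in_ch, out_ch, k=20, dilation=2):
        super().__init__()
        self.k, self.dilation = k, dilation
        self.mlp = nn.Sequential(nn.Conv2d(2 * in_ch, out_ch, 1, bias=False),
                                 nn.BatchNorm2d(out_ch), nn.LeakyReLU(0.2))

    def forward(self, x):                                     # x: (B, C, N)
        e = self.mlp(edge_features(x, knn_indices(x, self.k, self.dilation)))
        return e.max(dim=-1).values                           # (B, out_ch, N)


class FineGrainedEncoder(nn.Module):
    """Encodes detail from the immediate (undilated) neighbourhood; average
    pooling is used here so fine local variation is not discarded."""
    def __init__(self, in_ch, out_ch, k=20):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Conv2d(2 * in_ch, out_ch, 1, bias=False),
                                 nn.BatchNorm2d(out_ch), nn.LeakyReLU(0.2))

    def forward(self, x):
        e = self.mlp(edge_features(x, knn_indices(x, self.k)))
        return e.mean(dim=-1)


class ChannelAffinityAttention(nn.Module):
    """Re-weights channels by their pairwise affinity so that distinct
    channels are emphasized and redundant ones are suppressed."""
    def forward(self, x):                                     # x: (B, C, N)
        aff = torch.bmm(x, x.transpose(1, 2))                 # (B, C, C)
        # Larger weight for *less* similar channel pairs -> favours diversity.
        attn = torch.softmax(aff.max(dim=-1, keepdim=True).values - aff, dim=-1)
        return x + torch.bmm(attn, x)                         # residual refinement


class FusedFeatureBlock(nn.Module):
    """The two streams, fused by a learnable weight, then channel attention."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.dgc = DilatedGraphConv(in_ch, out_ch)
        self.fge = FineGrainedEncoder(in_ch, out_ch)
        self.alpha = nn.Parameter(torch.tensor(0.5))          # learnable fusion weight
        self.caa = ChannelAffinityAttention()

    def forward(self, x):                                     # x: (B, C, N)
        fused = self.alpha * self.dgc(x) + (1 - self.alpha) * self.fge(x)
        return self.caa(fused)
```

For example, FusedFeatureBlock(3, 64) applied to a (B, 3, N) tensor of xyz coordinates yields (B, 64, N) per-point features; stacking several such blocks and summing their outputs, as the abstract describes, would form the backbone for the classification and segmentation heads.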

History
  • Received: 2023-04-17
  • Revised: 2023-10-12
  • Accepted: 2023-11-09
  • Online publication date: