## Preface

This article describes how to combine the Dual Coordinate Attention Feature Extraction (DCAFE) module with YOLO26. DCAFE follows a "parallel coordinate attention with dual-pooling fusion" design: average-pooling and max-pooling branches capture features in parallel, and a channel-adaptive adjustment then generates attention weights that strengthen the feature representation. We introduce DCAFE into YOLO26, modify and register the relevant code, and configure a yolo26-DCAFE.yaml model file.

## Introduction

### Abstract

Accurate classification of medicinal flowers is a prerequisite for their important roles in healthcare, pharmaceuticals, and the food and cosmetics industries. Knowing medicinal flower species is also necessary for conserving and maintaining biodiversity. However, classifying medicinal flowers in the wild is challenging because of large intra-class variation, small inter-class differences, and visual similarity to other plant categories. Many deep learning methods, convolutional neural networks in particular, have been applied to image-based object classification tasks, but existing methods struggle to capture complex feature maps such as petal texture and structural variation across flowers. This paper proposes a novel method, Flora-NET, which for the first time combines coordinate attention with involution neural networks for classification in the floriculture industry. The method introduces two new feature-refinement modules: Dual Coordinate Attention Feature Extraction (DCAFE) and involution-based feature refinement (Inv-FR). DCAFE uses a parallel coordinate attention scheme that combines average pooling and max pooling; Inv-FR further enhances spatial information through serial involution layers with adaptive kernels. Flora-NET was trained and tested on two publicly available medicinal flower datasets. Extensive experiments show that its accuracy exceeds existing deep neural classification methods by about 6.50% and 5.59% on the two benchmark datasets. Hyperparameter tuning, k-fold cross-validation, and ablation studies also validate the effectiveness of Flora-NET. Code and datasets are available at https://github.com/ersachingupta11/Flora-NET.

### Links

- Paper address
- Code address: https://github.com/ersachingupta11/Flora-NET

## Basic principle

### Core design

The core of DCAFE is **parallel coordinate attention with dual-pooling fusion**, which removes the limitation of standard coordinate attention (CA), where only average pooling is used:

- **Coordinate attention basis**: features are encoded along the horizontal (X) and vertical (Y) directions to capture position-sensitive information, letting the model localize flower regions precisely.
- **Parallel dual pooling**: average pooling (captures global, smooth features and suppresses background noise) and max pooling (preserves local key features and highlights flower texture and edges) run as two independent branches whose outputs are fused, avoiding the information loss caused by a single pooling type.
- **Channel-adaptive adjustment**: 1×1 convolutions reduce and then restore the channel dimension, balancing model complexity against representational power.

### Detailed structure and processing flow

Let the input feature map be \( F \in \mathbb{R}^{H \times W \times C} \) (H: height, W: width, C: channels). DCAFE processes it as follows.

#### 1. Direction-wise feature encoding (parallel dual-pooling branches)

- **Average-pooling branch**: apply horizontal (kernel size \((H, 1)\)) and vertical (kernel size \((1, W)\)) average pooling to the input, yielding \( X_{AvgPool} \in \mathbb{R}^{H \times 1 \times C} \) and \( Y_{AvgPool} \in \mathbb{R}^{1 \times W \times C} \), which capture globally smooth features.
- **Max-pooling branch**: likewise apply horizontal and vertical max pooling, yielding \( X_{MaxPool} \in \mathbb{R}^{H \times 1 \times C} \) and \( Y_{MaxPool} \in \mathbb{R}^{1 \times W \times C} \), which preserve locally sharp features such as petal edges and pistil contours.

#### 2. Concatenation and channel reduction

Within each branch, concatenate the horizontal and vertical encodings into a \((H + W) \times 1 \times C\) feature vector (e.g., a 64×64×96 input yields 128×1×96 after concatenation). A 1×1 convolution then reduces the channels from \(C\) to \(C / D_r\) with reduction ratio \( D_r = 32 \), cutting computation and producing the intermediate feature vector \( f^a \in \mathbb{R}^{(H + W) \times 1 \times (C / D_r)} \).

#### 3. Attention-weight generation

Split the reduced vector along the spatial dimension into a horizontal tensor \( f_h^a \in \mathbb{R}^{H \times 1 \times (C / D_r)} \) and a vertical tensor \( f_w^a \in \mathbb{R}^{W \times 1 \times (C / D_r)} \). Apply a 1×1 convolution to each to restore the channel count to \(C\), then a sigmoid activation to generate the attention-weight matrices \( s_h \in \mathbb{R}^{H \times 1 \times C} \) (horizontal) and \( s_w \in \mathbb{R}^{1 \times W \times C} \) (vertical).

#### 4. Feature weighting and fusion

In each branch (average/max pooling), multiply the weight matrices with the original input feature map \( F \) to obtain the weighted feature maps \( Y^a \) (average-pooling branch) and \( Y^m \) (max-pooling branch). The two branch outputs are then combined, producing the enhanced feature map (1,260 enhanced channels in the paper's configuration) that is passed on to the subsequent Inv-FR module.

## Core code

```python
import torch
import torch.nn as nn


class CoordAttMeanMax(nn.Module):
    def __init__(self, inp, oup, groups=32):
        super(CoordAttMeanMax, self).__init__()
        # Direction-wise pooling: (None, 1) keeps H, (1, None) keeps W.
        self.pool_h_mean = nn.AdaptiveAvgPool2d((None, 1))
        self.pool_w_mean = nn.AdaptiveAvgPool2d((1, None))
        self.pool_h_max = nn.AdaptiveMaxPool2d((None, 1))
        self.pool_w_max = nn.AdaptiveMaxPool2d((1, None))

        mip = max(8, inp // groups)  # bottleneck width after channel reduction

        self.conv1_mean = nn.Conv2d(inp, mip, kernel_size=1, stride=1, padding=0)
        self.bn1_mean = nn.BatchNorm2d(mip)
        self.conv2_mean = nn.Conv2d(mip, oup, kernel_size=1, stride=1, padding=0)

        self.conv1_max = nn.Conv2d(inp, mip, kernel_size=1, stride=1, padding=0)
        self.bn1_max = nn.BatchNorm2d(mip)
        self.conv2_max = nn.Conv2d(mip, oup, kernel_size=1, stride=1, padding=0)

        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = x
        n, c, h, w = x.size()

        # Mean-pooling branch
        x_h_mean = self.pool_h_mean(x)                      # (n, c, h, 1)
        x_w_mean = self.pool_w_mean(x).permute(0, 1, 3, 2)  # (n, c, w, 1)
        y_mean = torch.cat([x_h_mean, x_w_mean], dim=2)     # (n, c, h + w, 1)
        y_mean = self.relu(self.bn1_mean(self.conv1_mean(y_mean)))
        x_h_mean, x_w_mean = torch.split(y_mean, [h, w], dim=2)
        x_w_mean = x_w_mean.permute(0, 1, 3, 2)

        # Max-pooling branch
        x_h_max = self.pool_h_max(x)
        x_w_max = self.pool_w_max(x).permute(0, 1, 3, 2)
        y_max = torch.cat([x_h_max, x_w_max], dim=2)
        y_max = self.relu(self.bn1_max(self.conv1_max(y_max)))
        x_h_max, x_w_max = torch.split(y_max, [h, w], dim=2)
        x_w_max = x_w_max.permute(0, 1, 3, 2)

        # Generate attention weights
        x_h_mean = self.conv2_mean(x_h_mean).sigmoid()
        x_w_mean = self.conv2_mean(x_w_mean).sigmoid()
        x_h_max = self.conv2_max(x_h_max).sigmoid()
        x_w_max = self.conv2_max(x_w_max).sigmoid()

        # Expand weights to the original spatial shape
        x_h_mean = x_h_mean.expand(-1, -1, h, w)
        x_w_mean = x_w_mean.expand(-1, -1, h, w)
        x_h_max = x_h_max.expand(-1, -1, h, w)
        x_w_max = x_w_max.expand(-1, -1, h, w)

        # Reweight the input with each branch, then sum the two branches
        attention_mean = identity * x_w_mean * x_h_mean
        attention_max = identity * x_w_max * x_h_max
        return attention_mean + attention_max


class DCAFE(nn.Module):
    def __init__(self, in_channels, out_channels):
        super(DCAFE, self).__init__()
        self.coord_att = CoordAttMeanMax(in_channels, out_channels)

    def forward(self, x):
        return self.coord_att(x)
```
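To make the shape bookkeeping of steps 1-4 concrete, here is a minimal, framework-agnostic sketch in NumPy (channels-last layout for clarity; the 1×1 convolution stacks are deliberately replaced by an identity, which is an assumption for illustration only):

```python
import numpy as np

# Toy input: H=4, W=6, C=3 feature map (channels-last here;
# the PyTorch module above uses NCHW instead).
H, W, C = 4, 6, 3
rng = np.random.default_rng(0)
F = rng.standard_normal((H, W, C))

# Step 1 - direction-wise encoding: pool over one spatial axis, keep the other.
x_avg = F.mean(axis=1, keepdims=True)   # (H, 1, C) horizontal average pooling
y_avg = F.mean(axis=0, keepdims=True)   # (1, W, C) vertical average pooling
x_max = F.max(axis=1, keepdims=True)    # (H, 1, C) horizontal max pooling
y_max = F.max(axis=0, keepdims=True)    # (1, W, C) vertical max pooling

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Step 3 (simplified) - pretend the conv stacks are identity and go
# straight from encodings to sigmoid weights s_h, s_w.
s_h_avg, s_w_avg = sigmoid(x_avg), sigmoid(y_avg)
s_h_max, s_w_max = sigmoid(x_max), sigmoid(y_max)

# Step 4 - broadcasting (H,1,C) * (1,W,C) * (H,W,C) reweights every position.
Y_a = F * s_h_avg * s_w_avg   # average-pooling branch output
Y_m = F * s_h_max * s_w_max   # max-pooling branch output
out = Y_a + Y_m               # fused output, same shape as F

print(out.shape)  # (4, 6, 3)
```

The broadcast multiply is why the module never has to materialize a full H×W attention map per direction: the H×1 and 1×W weight vectors expand implicitly.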
## Introducing the code into YOLO26

Create a new `attention` directory under `ultralytics/nn/` in the project root, then create a file named `DCAFE.py` inside it and copy the module code into it. The file consists of a small Conv-BN-SiLU helper plus the same `CoordAttMeanMax` and `DCAFE` classes shown in the core-code section above:

```python
import torch
import torch.nn as nn


class CBS(nn.Module):
    """Conv + BatchNorm + SiLU helper block."""

    def __init__(self, in_channels, out_channels, kernel_size=1, stride=1):
        super(CBS, self).__init__()
        self.conv = nn.Conv2d(in_channels, out_channels,
                              kernel_size=kernel_size, stride=stride, padding=0)
        self.bn = nn.BatchNorm2d(out_channels)
        self.silu = nn.SiLU()

    def forward(self, x):
        return self.silu(self.bn(self.conv(x)))


# ... followed by the CoordAttMeanMax and DCAFE classes
#     exactly as listed in the core-code section.
```
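One detail worth checking in `CoordAttMeanMax` is the bottleneck width `mip = max(8, inp // groups)`: with the default `groups=32` it plays the role of the reduction ratio \(D_r\), floored at 8 channels so very narrow layers are not over-compressed. A quick check for the channel counts used at the DCAFE insertion points later in the yaml:

```python
def bottleneck_width(inp, groups=32):
    # Mirrors mip = max(8, inp // groups) in CoordAttMeanMax.
    return max(8, inp // groups)

# Channel counts at the three DCAFE insertion points (P3, P4, P5),
# plus a narrow layer where the floor of 8 kicks in.
widths = {c: bottleneck_width(c) for c in (64, 256, 512, 1024)}
print(widths)  # {64: 8, 256: 8, 512: 16, 1024: 32}
```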
## Registration

In `ultralytics/nn/tasks.py`, make the following changes.

Step 1: add the import:

```python
from ultralytics.nn.attention.DCAFE import DCAFE
```

Step 2: in `def parse_model(d, ch, verbose=True):`, add `DCAFE` to the modules it recognizes, so that `DCAFE` entries in the yaml resolve to the new class.

## Configuring yolo26-DCAFE.yaml

Create `ultralytics/cfg/models/26/yolo26-DCAFE.yaml`:

```yaml
# Ultralytics AGPL-3.0 License - https://ultralytics.com/license

# Ultralytics YOLO26 object detection model with P3/8 - P5/32 outputs
# Model docs: https://docs.ultralytics.com/models/yolo26
# Task docs: https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80 # number of classes
end2end: True # whether to use end-to-end mode
reg_max: 1 # DFL bins
scales: # model compound scaling constants, i.e. model=yolo26n.yaml will call yolo26.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 260 layers, 2,572,280 parameters, 2,572,280 gradients, 6.1 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 260 layers, 10,009,784 parameters, 10,009,784 gradients, 22.8 GFLOPs
  m: [0.50, 1.00, 512] # summary: 280 layers, 21,896,248 parameters, 21,896,248 gradients, 75.4 GFLOPs
  l: [1.00, 1.00, 512] # summary: 392 layers, 26,299,704 parameters, 26,299,704 gradients, 93.8 GFLOPs
  x: [1.00, 1.50, 512] # summary: 392 layers, 58,993,368 parameters, 58,993,368 gradients, 209.5 GFLOPs

# YOLO26n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 2, C3k2, [256, False, 0.25]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 2, C3k2, [512, False, 0.25]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 2, C3k2, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 2, C3k2, [1024, True]]
  - [-1, 1, SPPF, [1024, 5, 3, True]] # 9
  - [-1, 2, C2PSA, [1024]] # 10

# YOLO26n head
head:
  - [-1, 1, nn.Upsample, [None, 2, nearest]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 2, C3k2, [512, True]] # 13
  - [-1, 1, nn.Upsample, [None, 2, nearest]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 2, C3k2, [256, True]] # 16 (P3/8-small)
  - [-1, 1, DCAFE, [256]] # 17 (P3/8-small)
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]] # cat head P4
  - [-1, 2, C3k2, [512, True]] # 20 (P4/16-medium)
  - [-1, 1, DCAFE, [512]] # 21 (P4/16-medium)
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]] # cat head P5
  - [-1, 1, C3k2, [1024, True, 0.5, True]] # 24 (P5/32-large)
  - [-1, 1, DCAFE, [1024]] # 25 (P5/32-large)
  - [[17, 21, 25], 1, Detect, [nc]] # Detect(P3, P4, P5)
```
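The Detect entry `[[17, 21, 25], 1, Detect, [nc]]` must point at the three DCAFE outputs. Since the backbone occupies layer indices 0-10, the head entries start at 11; the quick index count below (plain Python, not ultralytics code) confirms the wiring:

```python
# Index sanity check for the head section of yolo26-DCAFE.yaml.
# The backbone occupies layer indices 0-10, so head entries start at 11.
head = [
    "nn.Upsample", "Concat", "C3k2",   # 11-13 (upsample to P4, fuse)
    "nn.Upsample", "Concat", "C3k2",   # 14-16 (P3/8-small)
    "DCAFE",                           # 17
    "Conv", "Concat", "C3k2",          # 18-20 (P4/16-medium)
    "DCAFE",                           # 21
    "Conv", "Concat", "C3k2",          # 22-24 (P5/32-large)
    "DCAFE",                           # 25
]
idx = {i + 11: name for i, name in enumerate(head)}
dcafe_layers = [i for i, n in sorted(idx.items()) if n == "DCAFE"]
print(dcafe_layers)  # [17, 21, 25] -> matches Detect's [[17, 21, 25], ...]
```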
## Training script

```python
import warnings

warnings.filterwarnings("ignore")

from ultralytics import YOLO

if __name__ == "__main__":
    # Change to your own model config path
    model = YOLO("./ultralytics/cfg/models/26/yolo26-DCAFE.yaml")
    # Change to your own dataset path
    model.train(
        data="./ultralytics/cfg/datasets/coco8.yaml",
        cache=False,
        imgsz=640,
        epochs=10,
        single_cls=False,  # whether this is single-class detection
        batch=8,
        close_mosaic=10,
        workers=0,
        optimizer="MuSGD",
        amp=True,
        project="runs/train",
        name="yolo26-DCAFE",
    )
```

## Results
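For readers unfamiliar with step 2 of the registration above: `parse_model` resolves the module-name column of each yaml row to a Python class and builds the layer from the parsed arguments. The sketch below is a deliberately simplified, hypothetical stand-in for that mechanism (none of these names are the real ultralytics internals); it only illustrates why one import plus one dispatch entry is enough for a yaml row like `[-1, 1, DCAFE, [256]]` to work:

```python
# Hypothetical, simplified stand-in for parse_model's name -> class dispatch;
# the real ultralytics implementation differs in detail.
class Conv:
    def __init__(self, c1, c2, k=1, s=1):
        self.c1, self.c2 = c1, c2

class DCAFE:
    def __init__(self, c1, c2):
        self.c1, self.c2 = c1, c2

MODULES = {"Conv": Conv, "DCAFE": DCAFE}

def build_layer(entry, ch_in):
    """entry mirrors one yaml row: [from, repeats, module, args]."""
    _frm, _repeats, name, args = entry
    cls = MODULES[name]
    # An attention layer keeps the spatial size; output channels come
    # from the yaml args, and for DCAFE they match the input channels.
    return cls(ch_in, args[0])

layer = build_layer([-1, 1, "DCAFE", [256]], ch_in=256)
print(type(layer).__name__, layer.c1, layer.c2)  # DCAFE 256 256
```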