# YOLOv8 Improvement: Integrating the Triplet Attention Mechanism
## Preface

This article introduces Triplet Attention, a lightweight method that computes attention weights by capturing cross-dimension interaction through a three-branch structure. The method uses rotation operations to build dependencies between the channel and spatial dimensions, effectively encoding inter-channel and spatial information at very low computational cost. We integrate the Triplet Attention module into the Neck of YOLOv8, inserting it after the C2f modules that follow feature fusion, to strengthen the model's extraction of key features. Experiments show that YOLOv8 with Triplet Attention delivers a clear gain in detection accuracy on object detection tasks at negligible extra computational cost.

> This post is part of the series **YOLOv8改进大全** (a comprehensive collection of YOLOv8 improvements covering convolution layers, lightweight designs, attention mechanisms, loss functions, Backbone, SPPF, Neck, and detection heads). Series link: YOLOv8改进专栏.

## Introduction

### Abstract

Benefiting from the ability to build interdependencies among channels or spatial locations, attention mechanisms have recently been extensively studied and broadly applied in a variety of computer vision tasks. In this paper, we investigate lightweight but effective attention mechanisms and present Triplet Attention, a novel method for computing attention weights by capturing cross-dimension interaction using a three-branch structure. For an input tensor, Triplet Attention builds inter-dimensional dependencies by rotation operations followed by residual transformations, and encodes inter-channel and spatial information with negligible computational overhead. Our method is simple and efficient, and can easily be plugged into classic backbone networks as an add-on module. We demonstrate its effectiveness on a range of challenging tasks, including image classification on ImageNet-1k and object detection on the MSCOCO and PASCAL VOC datasets. Furthermore, we provide extensive insight into the performance of Triplet Attention by visually inspecting GradCAM and GradCAM++ results. The empirical evaluation of our method supports the importance of capturing cross-dimension dependencies when computing attention weights. Code is publicly available at https://github.com/LandskapeAI/triplet-attention.

### Links

- Paper link: not preserved in this copy
- Code: https://github.com/LandskapeAI/triplet-attention

## How It Works

Triplet Attention processes the input with three parallel branches: one branch attends over the (C, H) plane after swapping the channel and height axes, one over the (C, W) plane after swapping the channel and width axes, and one standard spatial branch over (H, W). Each branch compresses its first non-batch axis with a Z-pool (per-position max and mean), applies a 7×7 convolution followed by a sigmoid to produce attention weights, gates the input, and the three branch outputs are averaged.

## Core Code

```python
import torch
import torch.nn as nn


class BasicConv(nn.Module):
    def __init__(self, in_planes, out_planes, kernel_size, stride=1, padding=0,
                 dilation=1, groups=1, relu=True, bn=True, bias=False):
        super(BasicConv, self).__init__()
        self.out_channels = out_planes
        self.conv = nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size,
                              stride=stride, padding=padding, dilation=dilation,
                              groups=groups, bias=bias)
        self.bn = (nn.BatchNorm2d(out_planes, eps=1e-5, momentum=0.01, affine=True)
                   if bn else None)
        self.relu = nn.ReLU() if relu else None

    def forward(self, x):
        x = self.conv(x)
        if self.bn is not None:
            x = self.bn(x)
        if self.relu is not None:
            x = self.relu(x)
        return x


class ChannelPool(nn.Module):
    def forward(self, x):
        # Concatenate max and mean along the channel axis: (N, C, H, W) -> (N, 2, H, W)
        return torch.cat(
            (torch.max(x, 1)[0].unsqueeze(1), torch.mean(x, 1).unsqueeze(1)), dim=1
        )


class SpatialGate(nn.Module):
    def __init__(self):
        super(SpatialGate, self).__init__()
        kernel_size = 7
        self.compress = ChannelPool()
        self.spatial = BasicConv(2, 1, kernel_size, stride=1,
                                 padding=(kernel_size - 1) // 2, relu=False)

    def forward(self, x):
        x_compress = self.compress(x)
        x_out = self.spatial(x_compress)
        scale = torch.sigmoid_(x_out)
        return x * scale


class TripletAttention(nn.Module):
    def __init__(self, gate_channels, reduction_ratio=16, pool_types=["avg", "max"],
                 no_spatial=False):
        # gate_channels, reduction_ratio and pool_types are kept for interface
        # compatibility but are not used by this implementation.
        super(TripletAttention, self).__init__()
        self.ChannelGateH = SpatialGate()
        self.ChannelGateW = SpatialGate()
        self.no_spatial = no_spatial
        if not no_spatial:
            self.SpatialGate = SpatialGate()

    def forward(self, x):
        # Branch 1: swap C and H, gate, swap back.
        x_perm1 = x.permute(0, 2, 1, 3).contiguous()
        x_out1 = self.ChannelGateH(x_perm1)
        x_out11 = x_out1.permute(0, 2, 1, 3).contiguous()
        # Branch 2: swap C and W, gate, swap back.
        x_perm2 = x.permute(0, 3, 2, 1).contiguous()
        x_out2 = self.ChannelGateW(x_perm2)
        x_out21 = x_out2.permute(0, 3, 2, 1).contiguous()
        if not self.no_spatial:
            # Branch 3: plain spatial gate, then average the three branches.
            x_out = self.SpatialGate(x)
            x_out = (1 / 3) * (x_out + x_out11 + x_out21)
        else:
            x_out = (1 / 2) * (x_out11 + x_out21)
        return x_out
```

## Adding the Code

Under `ultralytics/nn/` in the project root, create a new `attention` directory, then create a file named `TripletAttention.py` inside it and paste in the following code:

```python
import torch
import torch.nn as nn


class BasicConv(nn.Module):
    def __init__(self, in_planes, out_planes, kernel_size, stride=1, padding=0,
                 dilation=1, groups=1, relu=True, bn=True, bias=False):
        super(BasicConv, self).__init__()
        self.out_channels = out_planes
        self.conv = nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size,
                              stride=stride, padding=padding, dilation=dilation,
                              groups=groups, bias=bias)
        self.bn = (nn.BatchNorm2d(out_planes, eps=1e-5, momentum=0.01, affine=True)
                   if bn else None)
        self.relu = nn.ReLU() if relu else None

    def forward(self, x):
        x = self.conv(x)
        if self.bn is not None:
            x = self.bn(x)
        if self.relu is not None:
            x = self.relu(x)
        return x


class ZPool(nn.Module):
    def forward(self, x):
        # Concatenate max and mean along the channel axis: (N, C, H, W) -> (N, 2, H, W)
        return torch.cat(
            (torch.max(x, 1)[0].unsqueeze(1), torch.mean(x, 1).unsqueeze(1)), dim=1
        )


class AttentionGate(nn.Module):
    def __init__(self):
        super(AttentionGate, self).__init__()
        kernel_size = 7
        self.compress = ZPool()
        self.conv = BasicConv(2, 1, kernel_size, stride=1,
                              padding=(kernel_size - 1) // 2, relu=False)

    def forward(self, x):
        x_compress = self.compress(x)
        x_out = self.conv(x_compress)
        scale = torch.sigmoid_(x_out)
        return x * scale


class TripletAttention(nn.Module):
    def __init__(self, no_spatial=False):
        super(TripletAttention, self).__init__()
        self.cw = AttentionGate()
        self.hc = AttentionGate()
        self.no_spatial = no_spatial
        if not no_spatial:
            self.hw = AttentionGate()

    def forward(self, x):
        # Branch 1: rotate so C and H swap, gate, rotate back.
        x_perm1 = x.permute(0, 2, 1, 3).contiguous()
        x_out1 = self.cw(x_perm1)
        x_out11 = x_out1.permute(0, 2, 1, 3).contiguous()
        # Branch 2: rotate so C and W swap, gate, rotate back.
        x_perm2 = x.permute(0, 3, 2, 1).contiguous()
        x_out2 = self.hc(x_perm2)
        x_out21 = x_out2.permute(0, 3, 2, 1).contiguous()
        if not self.no_spatial:
            x_out = self.hw(x)
            x_out = (1 / 3) * (x_out + x_out11 + x_out21)
        else:
            x_out = (1 / 2) * (x_out11 + x_out21)
        return x_out
```

## Registering the Module

Make the following two changes in `ultralytics/nn/tasks.py`.

Step 1: add the import:

```python
from ultralytics.nn.attention.TripletAttention import TripletAttention
```

Step 2: in `def parse_model(d, ch, verbose=True):`, add a branch for the new module. Since this `TripletAttention` preserves the input shape and its constructor takes no channel argument, it is enough to record the output channel count (prepending `c2` to `args` would wrongly pass the channel count as `no_spatial`):

```python
        elif m in {TripletAttention}:
            c2 = ch[f]  # channel-preserving module; no channel argument needed
```

## Configuration: `yolov8_TripletAttention.yaml`

Create `ultralytics/cfg/models/v8/yolov8_TripletAttention.yaml`:

```yaml
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv8 object detection model with P3-P5 outputs. For usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 2  # number of classes
scales:  # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024]  # YOLOv8n summary: 225 layers, 3157200 parameters, 3157184 gradients, 8.9 GFLOPs
  s: [0.33, 0.50, 1024]  # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients, 28.8 GFLOPs
  m: [0.67, 0.75, 768]   # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients, 79.3 GFLOPs
  l: [1.00, 1.00, 512]   # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512]   # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]]    # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]]   # 1-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]]   # 3-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]]   # 5-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]]  # 7-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]]     # 9

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 6], 1, Concat, [1]]  # cat backbone P4
  - [-1, 3, C2f, [512]]  # 12

  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 4], 1, Concat, [1]]  # cat backbone P3
  - [-1, 3, C2f, [256]]  # 15 (P3/8-small)
  - [-1, 1, TripletAttention, []]  # 16

  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 12], 1, Concat, [1]]  # cat head P4
  - [-1, 3, C2f, [512]]  # 19 (P4/16-medium)
  - [-1, 1, TripletAttention, []]  # 20

  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 9], 1, Concat, [1]]  # cat head P5
  - [-1, 3, C2f, [1024]]  # 23 (P5/32-large)
  - [-1, 1, TripletAttention, []]  # 24

  - [[16, 20, 24], 1, Detect, [nc]]  # Detect(P3, P4, P5)
```

## Training Script

```python
from ultralytics import YOLO

# Path to the modified model configuration
yaml_path = "ultralytics/cfg/models/v8/yolov8_TripletAttention.yaml"

# Initialize the YOLO model from the specified YAML file
model = YOLO(yaml_path)

# Print model information
model.info()

if __name__ == "__main__":
    # Train the model with the specified parameters
    results = model.train(
        data="ultralytics/datasets/original-license-plates.yaml",
        name="yolov8_TripletAttention",
        epochs=10,
        workers=8,
        batch=1,
    )
```
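To make the "rotation" idea concrete: each branch of Triplet Attention permutes the NCHW tensor so that the channel axis trades places with one spatial axis, applies its gate, then permutes back. The following stdlib-only sketch (the `permuted_shape` helper is mine, for illustration; the real code calls `Tensor.permute`) shows what the two swaps do to a typical feature-map shape:

```python
def permuted_shape(shape, order):
    """Shape a tensor of `shape` would have after tensor.permute(*order)."""
    return tuple(shape[i] for i in order)

nchw = (1, 64, 32, 32)  # (batch, channels, height, width)

# Branch 1: swap C and H, so the gate mixes channel info with the height axis.
cw = permuted_shape(nchw, (0, 2, 1, 3))
# Branch 2: swap C and W, so the gate mixes channel info with the width axis.
hc = permuted_shape(nchw, (0, 3, 2, 1))

print(cw)  # (1, 32, 64, 32)
print(hc)  # (1, 32, 32, 64)

# Both permutations are self-inverse, which is why applying the same
# permute again restores the original NCHW layout after gating.
print(permuted_shape(cw, (0, 2, 1, 3)) == nchw)  # True
```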
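The Z-pool used inside each branch is just a per-position reduction of the channel values to their max and mean, compressing a C-channel map into a 2-channel descriptor for the 7×7 gate convolution. A pure-Python sketch of that reduction at a single spatial position (the `z_pool` helper is illustrative, not part of the repo):

```python
def z_pool(channel_values):
    """Reduce the channel values at one spatial position to [max, mean],
    mirroring torch.cat([x.max(1)[0], x.mean(1)], dim=1) applied pointwise."""
    return [max(channel_values), sum(channel_values) / len(channel_values)]

# A position with 4 channel activations becomes a 2-value descriptor.
print(z_pool([1, 4, 2, 5]))  # [5, 3.0]
```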
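Why does the `parse_model` branch only need `c2 = ch[f]`? The parser keeps a running list `ch` of every layer's output channel count, and a module that returns a tensor of the same shape as its input simply forwards the channel count of the layer it reads from. A toy sketch of that bookkeeping (the names and the layer list are mine; the real `parse_model` in `ultralytics/nn/tasks.py` is far more involved):

```python
ch = [3]  # running list of output channels; start with the RGB input
# (module name, output channels, or None for a channel-preserving module)
layers = [("Conv", 64), ("C2f", 128), ("TripletAttention", None)]

for name, c_out in layers:
    f = -1  # in this toy chain, every layer reads from the previous one
    c2 = ch[f] if c_out is None else c_out  # attention modules copy ch[f]
    ch.append(c2)

print(ch)  # [3, 64, 128, 128]
```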
