PyTorch: Getting Intermediate Feature Maps

While studying DeepLabV3+ I noticed that it needs feature maps from different layers, so I paid special attention to the relevant code. Before I had fully figured it out, I also needed to reproduce the TSGB paper, which likewise uses feature maps from different layers, so I carefully debugged this code. The core code is as follows:

from collections import OrderedDict

import torch
import torchvision
from torch import nn


class IntermediateLayerGetter(nn.ModuleDict):
    """Module wrapper that returns intermediate layers from a model

    It has a strong assumption that the modules have been registered
    into the model in the same order as they are used.
    This means that one should **not** reuse the same nn.Module
    twice in the forward if you want this to work.

    Additionally, it is only able to query submodules that are directly
    assigned to the model. So if `model` is passed, `model.feature1` can
    be returned, but not `model.feature1.layer2`.

    Arguments:
        model (nn.Module): model on which we will extract the features
        return_layers (Dict[name, new_name]): a dict containing the names
            of the modules for which the activations will be returned as
            the key of the dict, and the value of the dict is the name
            of the returned activation (which the user can specify).
    """
    def __init__(self, model, return_layers):
        if not set(return_layers).issubset([name for name, _ in model.named_children()]):
            raise ValueError("return_layers are not present in model")

        orig_return_layers = return_layers
        return_layers = {k: v for k, v in return_layers.items()}
        layers = OrderedDict()
        for name, module in model.named_children():
            layers[name] = module
            if name in return_layers:
                del return_layers[name]
            if not return_layers:
                break

        super(IntermediateLayerGetter, self).__init__(layers)
        self.return_layers = orig_return_layers

    def forward(self, x):
        out = OrderedDict()
        for name, module in self.named_children():
            x = module(x)
            if name in self.return_layers:
                out_name = self.return_layers[name]
                out[out_name] = x
        return out


# example
m = torchvision.models.resnet18(pretrained=True)
# extract layer1 and layer3, giving as names `feat1` and `feat2`
new_m = torchvision.models._utils.IntermediateLayerGetter(
    m, {'layer1': 'feat1', 'layer3': 'feat2'})
out = new_m(torch.rand(1, 3, 224, 224))
print([(k, v.shape) for k, v in out.items()])
# [('feat1', torch.Size([1, 64, 56, 56])), ('feat2', torch.Size([1, 256, 14, 14]))]

Method 2

Besides the method above, while browsing Zhihu I found that someone had also written up a related approach. I have not debugged it carefully yet; I am reposting it here as a backup and will analyze it when I have time.
This method is reposted from Zhihu: https://zhuanlan.zhihu.com/p/362985275

import torch
from torch import nn


class TestForHook(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear_1 = nn.Linear(in_features=2, out_features=2)
        self.linear_2 = nn.Linear(in_features=2, out_features=1)
        self.relu = nn.ReLU()
        self.relu6 = nn.ReLU6()
        # the original post also defines an initialize() method for
        # weight initialization; it is omitted from this excerpt

    def forward(self, x):
        linear_1 = self.linear_1(x)
        linear_2 = self.linear_2(linear_1)
        relu = self.relu(linear_2)
        relu_6 = self.relu6(relu)
        layers_in = (x, linear_1, linear_2)
        layers_out = (linear_1, linear_2, relu)
        return relu_6, layers_in, layers_out


features_in_hook = []
features_out_hook = []


def hook(module, fea_in, fea_out):
    features_in_hook.append(fea_in)
    features_out_hook.append(fea_out)
    return None


net = TestForHook()

"""
# First approach: hook by layer type; this gets messy when the
# network contains several layers of the same type
net_children = net.children()
for child in net_children:
    if not isinstance(child, nn.ReLU6):
        child.register_forward_hook(hook=hook)
"""

"""
I recommend the approach below. My own network has many layers inside
Sequential blocks; with this approach you can simply print(net) first,
find the name of the layer you need, and hook it by name.
"""
layer_name = 'relu6'  # named_modules() reports the attribute name, 'relu6'
for (name, module) in net.named_modules():
    if name == layer_name:
        module.register_forward_hook(hook=hook)

# run a forward pass so the hook actually fires
net(torch.rand(1, 2))

print(features_in_hook)   # the hooked layer's inputs
print(features_out_hook)  # the hooked layer's outputs
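One detail worth adding to the hook approach: `register_forward_hook` returns a handle, so a hook can be removed once you have captured what you need (otherwise it keeps appending on every forward pass). A minimal sketch; the `features` list, `save_output` function, and the toy network are made up for illustration:

```python
import torch
from torch import nn

features = []


def save_output(module, inp, out):
    # store a detached copy of the layer's output
    features.append(out.detach())


net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
# hook the first Linear layer and keep the returned handle
handle = net[0].register_forward_hook(save_output)

net(torch.rand(3, 4))
print([f.shape for f in features])  # [torch.Size([3, 8])]

handle.remove()        # detach the hook
net(torch.rand(3, 4))  # this pass is no longer recorded
print(len(features))   # still 1
```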

