【RuntimeError: set_sizes_and_strides is not allowed on a Tensor created from .data or .detach().】
A problem I hit while training a GAN. The generator is updated first, and then, when computing the discriminator loss on the generated samples, the original code looked like this:
# update only the discriminator D; freeze the generator T
requires_grad(D, flag=True)
requires_grad(T, flag=False)
optimizer_D.zero_grad()
# discriminator loss on the real sample and the three detached generated samples
loss_D = 3 * D.compute_loss(real_d, (real_mask * 2 - 1), 1) + \
         D.compute_loss(d1.detach(), (real_mask * 2 - 1), 0) + \
         D.compute_loss(d21.detach(), (g_mask * 2 - 1), 0) + \
         D.compute_loss(d22.detach(), (g_mask * 2 - 1), 0)
# gradient penalty term, weighted by opt.lambda_gp
loss_gp = opt.lambda_gp * (compute_gradient_penalty(D, real_d, d21.detach()) +
                           compute_gradient_penalty(D, real_d, d22.detach()))
The code throws this error as soon as it reaches the computation of loss_D:
RuntimeError: set_sizes_and_strides is not allowed on a Tensor created from .data or .detach().
If your intent is to change the metadata of a Tensor (such as sizes / strides / storage / storage_offset)
without autograd tracking the change, remove the .data / .detach() call and wrap the change in a with torch.no_grad(): block.
For example, change:
x.data.set_(y)
to:
with torch.no_grad():
    x.set_(y)
Roughly, the message says that some operation cannot be performed on a tensor obtained via detach(). But!! When I used a different generator model before, this error never appeared, and the error only involves the discriminator's forward pass. What is more, the loss_gp computation contains the very same operations that the loss_D computation needs, yet loss_gp never throws this error (I checked carefully and am absolutely certain of this). Very confusing. I tried cutting the gradient with requires_grad_(False) and also the fix recommended in the message, and neither solved the problem. In the end, replacing detach() with detach_() made the error go away.
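For what it's worth, the two calls are not interchangeable. A tensor returned by .detach() is a shallow copy whose size/stride/storage metadata is locked, so if anything inside this discriminator's compute_loss / forward changes its argument's metadata in place (set_, resize_, an in-place transpose or squeeze, ...), it raises exactly this kind of error. detach_() instead detaches d1/d21/d22 themselves from the graph, and the tensor handed to D is then an ordinary tensor whose metadata may still be changed. A minimal sketch of the difference, independent of the GAN code above and based only on the behaviour the error message describes:

import torch

src = torch.randn(6)
x = torch.randn(2, 3, requires_grad=True)

# .detach() returns a shallow copy whose metadata (sizes/strides/storage) is locked
y = x.detach()
try:
    y.set_(src)          # in-place metadata change on the detached copy
except RuntimeError as e:
    print(e)             # "... is not allowed on a Tensor created from .data or .detach()."

# .detach_() detaches x itself in place: autograd no longer tracks x,
# but x is not a ".detach()-created" copy, so the same change is allowed
x.detach_()
x.set_(src)              # fine

Note that detach_() mutates the generator outputs themselves (d1, d21, d22 lose their grad_fn), which presumably only works here because the generator has already been updated before this block runs.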
A big fat question mark. Still puzzled...
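As for why loss_gp never complains: I do not have this compute_gradient_penalty, but if it follows the usual WGAN-GP recipe, roughly as in the hypothetical sketch below, then the detached fake sample is only used in the interpolation arithmetic, which allocates a brand-new tensor, and it is that new tensor, not the locked .detach() copy, that goes through D. Even if D's forward reshapes its input in place, the locked copy is never the tensor being modified. Every name in the sketch is the generic WGAN-GP version, not necessarily the code used here:

import torch
from torch import autograd

def compute_gradient_penalty(D, real_samples, fake_samples):
    # Hypothetical WGAN-GP penalty; assumes 4-D image batches (B, C, H, W).
    alpha = torch.rand(real_samples.size(0), 1, 1, 1, device=real_samples.device)
    # Ordinary arithmetic: `interpolates` is a freshly allocated tensor, so its
    # metadata is not locked even though fake_samples came from .detach().
    interpolates = (alpha * real_samples + (1 - alpha) * fake_samples).requires_grad_(True)
    d_interpolates = D(interpolates)
    gradients = autograd.grad(
        outputs=d_interpolates,
        inputs=interpolates,
        grad_outputs=torch.ones_like(d_interpolates),
        create_graph=True,
        retain_graph=True,
        only_inputs=True,
    )[0]
    gradients = gradients.reshape(gradients.size(0), -1)
    return ((gradients.norm(2, dim=1) - 1) ** 2).mean()

By contrast, D.compute_loss(d21.detach(), ...) hands the locked shallow copy straight to the discriminator, which matches the observation that only that path fails.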
