1、Import the required modules
import torch
import numpy as np
from torch.autograd import Variable
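The snippets below rely on the old Variable API, so they were presumably written against an early PyTorch release (the 0.3/0.4 era); output formats may differ slightly on newer versions. A quick way to check what you have installed:

# Print the installed versions so the outputs below are easier to interpret.
print("torch:", torch.__version__)
print("numpy:", np.__version__)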
2、Conversions between tensor types
a = torch.ones(2, 3)                 # FloatTensor filled with ones
print("a:", a)
float_a = a.data.float()             # convert to FloatTensor (a no-op here, a is already float)
print("float_a:", float_a)
int_a = a.type(torch.IntTensor)      # convert to IntTensor
print("int_a:", int_a)
b = torch.eye(2, 3).data.double()    # convert to DoubleTensor
print("b:", b)
a_ = a.type_as(b)                    # convert a to the same type as b
print("a_ type:", a_.type())
print("a_:", a_)
a: tensor([[ 1., 1., 1.],
        [ 1., 1., 1.]])
float_a: tensor([[ 1., 1., 1.],
        [ 1., 1., 1.]])
int_a: tensor([[ 1, 1, 1],
        [ 1, 1, 1]], dtype=torch.int32)
b: tensor([[ 1., 0., 0.],
        [ 0., 1., 0.]], dtype=torch.float64)
a_ type: torch.DoubleTensor
a_: tensor([[ 1., 1., 1.],
        [ 1., 1., 1.]], dtype=torch.float64)
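For reference, the same conversions can also be written with the dtype shorthands (.int(), .double()) or, on PyTorch 0.4 and later, with .to(); a minimal sketch, assuming a 0.4+ install:

# Equivalent conversions, assuming PyTorch 0.4+ where .to() accepts a dtype.
a = torch.ones(2, 3)              # default dtype: torch.float32
b = torch.eye(2, 3).double()      # torch.float64
int_a = a.int()                   # same result as a.type(torch.IntTensor)
double_a = a.to(torch.float64)    # same result as a.type(torch.DoubleTensor)
like_b = a.to(b.dtype)            # same effect as a.type_as(b)
print(int_a.dtype, double_a.dtype, like_b.dtype)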
3、CPU <-> GPU
There is no GPU on this machine (being broke limits what I can do), so the GPU branch below never actually runs.
print("GPU可用数目:",torch
.cuda
.device_count
())
var
= torch
.Tensor
(2,3)
if torch
.cuda
.is_available
():
var
= var
.cuda
()
print("var:",var
)
var
= var
.cuda
().data
.cpu
().numpy
()
Number of available GPUs: 0
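On PyTorch 0.4 and later, a common device-agnostic pattern is to build a torch.device once and move tensors with .to(device) instead of calling .cuda()/.cpu() by hand; a minimal sketch, assuming a 0.4+ install:

# Pick the device once, then move tensors with .to(); works with or without a GPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
var = torch.zeros(2, 3).to(device)     # lands on the GPU if available, otherwise stays on CPU
print("var device:", var.device)
var_np = var.cpu().numpy()             # always bring it back to CPU before calling .numpy()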
4、tensor <-> numpy
a = np.ones((2, 3))
a_tensor = torch.from_numpy(a)       # numpy array -> tensor (the two share memory)
print("a:", a)
print("a_tensor:", a_tensor)
b = a_tensor.numpy()                 # tensor -> numpy array (also shares memory)
print("b:", b)
a: [[1. 1. 1.]
 [1. 1. 1.]]
a_tensor: tensor([[ 1., 1., 1.],
        [ 1., 1., 1.]], dtype=torch.float64)
b: [[1. 1. 1.]
 [1. 1. 1.]]
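One point worth remembering is that torch.from_numpy and .numpy() do not copy data: the tensor and the array share the same underlying memory, so an in-place change to one shows up in the other. A small sketch to confirm this:

# from_numpy / numpy() share memory: in-place edits propagate in both directions.
a = np.ones((2, 3))
a_tensor = torch.from_numpy(a)
a[0, 0] = 5.0                 # modify the numpy array in place
print(a_tensor[0, 0])         # the tensor reflects the change (value 5)
a_tensor[1, 1] = 7.0          # modify the tensor in place
print(a[1, 1])                # the array reflects the change too (value 7)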
5、Variable
var_tensor = Variable(torch.Tensor(2, 3))                 # wrap an uninitialized tensor in a Variable
print("var_tensor:", var_tensor)
var_numpy = var_tensor.data.numpy()                       # Variable -> numpy array via .data
var_to_tensor = Variable(torch.from_numpy(var_numpy))     # numpy array -> tensor -> Variable
print("var_numpy:", var_numpy)
print("var_to_tensor:", var_to_tensor)
var_tensor: tensor(1.00000e-39 *
       [[ 0.0000, 0.0000, 0.0000],
        [ 0.0000, 9.4592, 0.0000]])
var_numpy: [[4.203895e-45 0.000000e+00 1.401298e-45]
 [0.000000e+00 9.459202e-39 0.000000e+00]]
var_to_tensor: tensor(1.00000e-39 *
       [[ 0.0000, 0.0000, 0.0000],
        [ 0.0000, 9.4592, 0.0000]])
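Since PyTorch 0.4, Variable has been merged into Tensor, so the wrapping above is no longer needed: any tensor can track gradients via requires_grad=True, and .detach() replaces the old .data access before converting to numpy. A minimal sketch, assuming a 0.4+ install:

# On PyTorch 0.4+, plain tensors replace Variable; requires_grad enables autograd.
x = torch.zeros(2, 3, requires_grad=True)
y = (x * 2).sum()
y.backward()                      # gradients flow without any Variable wrapper
x_np = x.detach().numpy()         # detach from the graph before converting to numpy
print("x.grad:", x.grad)
print("x_np:", x_np)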
Since the author's knowledge is limited, the content above cannot be guaranteed to be error-free. If you spot a mistake, please point it out in the comments below. Thanks!