Running PyTorch on Multiple GPUs on a Single Machine


I. Single machine, single GPU

1. Set the current device (GPU id)

# the first method: restrict visible devices from the shell
CUDA_VISIBLE_DEVICES=gpu_id python XXX.py

# the second method: set the device inside the script
torch.cuda.set_device(gpu_id)
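As a quick sanity check after either method, you can ask PyTorch which device is active; a minimal sketch, assuming at least one GPU is present:

import torch

if torch.cuda.is_available():
    torch.cuda.set_device(0)               # make GPU 0 the current device
    print(torch.cuda.current_device())     # -> 0
    print(torch.cuda.get_device_name(0))   # -> the GPU model name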

2. Move objects (tensor, Variable, model, …) to GPU memory

# 1. Tensor
ten1 = torch.FloatTensor(2).cuda()

# 2. Variable
ten1 = torch.FloatTensor(2)

# first Variable, then .cuda()
V1_cpu = autograd.Variable(ten1)
V1 = V1_cpu.cuda()

# first .cuda(), then Variable
ten1_cuda = ten1.cuda()
V2 = autograd.Variable(ten1_cuda)
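Note that autograd.Variable has been merged into Tensor since PyTorch 0.4, so in current code the same steps look like this; a minimal sketch:

import torch

# PyTorch >= 0.4: a tensor with requires_grad=True plays the old Variable role
ten1 = torch.randn(2, requires_grad=True)
V1 = ten1.cuda() if torch.cuda.is_available() else ten1

# or allocate directly on the target device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
V2 = torch.randn(2, device=device, requires_grad=True)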

3. Get a copy of the object in CPU memory, so it can interoperate with NumPy

V1_cpu = V1.cpu()
V2_cpu = V2.cpu()
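The copy back to host memory is required because .numpy() only works on CPU tensors; a minimal sketch:

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
V1 = torch.randn(2, device=device)

arr = V1.cpu().numpy()            # .numpy() requires a CPU tensor

# tensors that track gradients must be detached first
V2 = torch.randn(2, requires_grad=True)
arr2 = V2.detach().cpu().numpy()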

II. Single machine, multiple GPUs

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

A torch.device contains a device type ('cpu' or 'cuda') and an optional device ordinal for that type.

model = Model(input_size, output_size)  # init the model
if torch.cuda.device_count() > 1:       # returns the number of GPUs available
    model = nn.DataParallel(model)
model.to(device)

NB: The batch size should be larger than the number of GPUs used, because nn.DataParallel splits each input batch across the GPUs.
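Putting the pieces together, here is a minimal runnable sketch of the pattern above; Model, input_size, and output_size are hypothetical stand-ins for your own definitions, and the batch of 8 satisfies the batch-size rule on a typical 2-4 GPU machine:

import torch
import torch.nn as nn

input_size, output_size, batch_size = 10, 2, 8

class Model(nn.Module):
    def __init__(self, input_size, output_size):
        super().__init__()
        self.fc = nn.Linear(input_size, output_size)

    def forward(self, x):
        return self.fc(x)

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = Model(input_size, output_size)
if torch.cuda.device_count() > 1:
    # DataParallel splits each batch across the available GPUs,
    # which is why the batch size should exceed the GPU count
    model = nn.DataParallel(model)
model.to(device)

x = torch.randn(batch_size, input_size).to(device)
out = model(x)    # each GPU processes batch_size / num_gpus samples
print(out.shape)  # torch.Size([8, 2])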

These links may help: DATA PARALLELISM and "pytorch 多GPU训练总结 (DataParallel的使用)" (a Chinese-language summary of multi-GPU training with DataParallel).

CUDA SEMANTICS also explains how to select a device in cross-GPU operations.
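For reference, device selection in cross-GPU code follows the pattern below; a sketch assuming at least two GPUs, where the torch.cuda.device context manager changes which GPU an ordinal-less "cuda" refers to:

import torch

cuda0 = torch.device("cuda:0")

x = torch.randn(2, device=cuda0)   # allocated on GPU 0

with torch.cuda.device(1):
    # inside this context, "cuda" with no ordinal means GPU 1
    y = torch.randn(2, device=torch.device("cuda"))

# operands of an operation must live on the same device,
# so move y to GPU 0 before combining it with x
z = x + y.to(cuda0)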
