
From https://pytorch.org/

to install PyTorch on macOS, the following is stated:

conda install pytorch torchvision -c pytorch # MacOS Binaries dont support CUDA, install from source if CUDA is needed 

Why would one want to install PyTorch without CUDA enabled?

The reason I ask is that I receive this error:

    ---------------------------------------------------------------------------
    AssertionError                            Traceback (most recent call last)
    <ipython-input> in <module>()
         78 # predicted = outputs.data.max(1)[1]
         79
    ---> 80 output = model(torch.tensor([[1,1]]).float().cuda())
         81 predicted = output.data.max(1)[1]
         82

    ~/anaconda3/lib/python3.6/site-packages/torch/cuda/__init__.py in _lazy_init()
        159         raise RuntimeError(
        160             "Cannot re-initialize CUDA in forked subprocess. " + msg)
    --> 161     _check_driver()
        162     torch._C._cuda_init()
        163     _cudart = _load_cudart()

    ~/anaconda3/lib/python3.6/site-packages/torch/cuda/__init__.py in _check_driver()
         73 def _check_driver():
         74     if not hasattr(torch._C, '_cuda_isDriverSufficient'):
    ---> 75         raise AssertionError("Torch not compiled with CUDA enabled")
         76     if not torch._C._cuda_isDriverSufficient():
         77         if torch._C._cuda_getDriverVersion() == 0:

    AssertionError: Torch not compiled with CUDA enabled

when attempting to execute this code:

    import torch
    import torch.nn as nn
    import torch.utils.data as data_utils

    x = torch.tensor([[0,0], [0,1], [1,0]]).float()
    print(x)
    y = torch.tensor([0,1,1]).long()
    print(y)

    my_train = data_utils.TensorDataset(x, y)
    my_train_loader = data_utils.DataLoader(my_train, batch_size=2, shuffle=True)

    # Device configuration
    device = 'cpu'
    print(device)

    # Hyper-parameters
    input_size = 2
    hidden_size = 100
    num_classes = 2
    learning_rate = 0.001
    num_epochs = 10    # not defined in the snippet as posted; placeholder value
    model_iters = 1    # not defined in the snippet as posted; placeholder value

    train_dataset = my_train
    train_loader = my_train_loader

    pred = []
    for i in range(0, model_iters):
        # Fully connected neural network with one hidden layer
        class NeuralNet(nn.Module):
            def __init__(self, input_size, hidden_size, num_classes):
                super(NeuralNet, self).__init__()
                self.fc1 = nn.Linear(input_size, hidden_size)
                self.relu = nn.ReLU()
                self.fc2 = nn.Linear(hidden_size, num_classes)

            def forward(self, x):
                out = self.fc1(x)
                out = self.relu(out)
                out = self.fc2(out)
                return out

        model = NeuralNet(input_size, hidden_size, num_classes).to(device)

        # Loss and optimizer
        criterion = nn.CrossEntropyLoss()
        optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

        # Train the model
        total_step = len(train_loader)
        for epoch in range(num_epochs):
            for i, (images, labels) in enumerate(train_loader):
                # Move tensors to the configured device
                images = images.reshape(-1, 2).to(device)
                labels = labels.to(device)

                # Forward pass
                outputs = model(images)
                loss = criterion(outputs, labels)

                # Backward and optimize
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()

                print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'
                      .format(epoch+1, num_epochs, i+1, total_step, loss.item()))

    # This is the line that raises the AssertionError on a CPU-only build
    output = model(torch.tensor([[1,1]]).float().cuda())

To fix this error, do I need to build PyTorch from source with CUDA already installed?

  • "Why would one want to install PyTorch without CUDA enabled?": those who don't have a CUDA-capable GPU might want to. Are you on the Mac platform? If so, are you certain you have a CUDA-capable GPU installed in your Mac? If you installed as you indicated (via conda), it seems evident that your PyTorch does not have CUDA enabled, which would be consistent with the assertion error. It's also puzzling that you specify device = 'cpu' in your script, but then call: output = model(torch.tensor([[1,1]]).float().cuda()) Commented Jan 2, 2019 at 23:00
  • @RobertCrovella thanks Robert. I incorrectly assumed that CUDA is required to run PyTorch code; I also did not realize that CUDA is not part of PyTorch itself. To write code that is cross-compatible between CPU and GPU, do I need to include/exclude .cuda()? Commented Jan 2, 2019 at 23:19
  • @blue-sky remove any .cuda() calls and use device instead to achieve such compatibility. Commented Jan 3, 2019 at 2:29
  • To elaborate: you'll need to use .to(device) instead of .cuda(). Depending on the value of device, the GPU can then be used. Typically this is done like so: device = torch.device('cuda' if torch.cuda.is_available() else 'cpu'). See the sketch after these comments. Commented Mar 10, 2020 at 14:28
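Putting the comments together, a minimal device-agnostic sketch might look like the following; the tiny nn.Linear model and the input tensor here are illustrative placeholders, not code from the question:

    import torch
    import torch.nn as nn

    # Pick the GPU when a CUDA-enabled build and device are present, else use the CPU
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

    model = nn.Linear(2, 2).to(device)        # move the model to the device once
    x = torch.tensor([[1., 1.]]).to(device)   # move inputs the same way
    output = model(x)                         # no hard-coded .cuda() calls anywhere

The same script then runs unchanged on a CUDA machine and on a CPU-only Mac.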

3 Answers


To summarize and expand on the comments:

  • CUDA is a proprietary Nvidia technology for general-purpose computing on Nvidia GPUs; it is not licensed for use on other vendors' hardware.
  • Very few MacBook Pros have a CUDA-capable Nvidia GPU. Take a look here to see whether your MBP has an Nvidia GPU. Then, look at the table here to see whether that GPU supports CUDA.
  • The same story holds for the iMac, iMac Pro, and Mac Pro.
  • Therefore, PyTorch is installed without CUDA support by default on macOS.

This PyTorch GitHub issue mentions that very few Macs have Nvidia processors: https://github.com/pytorch/pytorch/issues/30664

If your Mac does have a CUDA-capable GPU, then to use CUDA commands on macOS you'll need to recompile PyTorch from source with the correct build options. A quick way to see what your current build supports is shown below.
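For reference, a small sketch for checking whether an installed PyTorch build was compiled with CUDA support (these are standard torch attributes, nothing specific to this question):

    import torch

    print(torch.__version__)           # installed build version
    print(torch.version.cuda)          # None on a CPU-only build
    print(torch.cuda.is_available())   # False without a CUDA build and a working driver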


1 Comment

"you'll need to recompile PyTorch from source with the correct build options" – what does that mean?

When using the Hugging Face "phi2" model with the sample code, I received the same error; using "mps" instead of "cuda" worked:

torch.set_default_device("mps")
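If you want to guard that call, a small sketch (assuming PyTorch 2.0 or later, where torch.set_default_device exists, together with the standard MPS availability check):

    import torch

    # Use Apple's Metal Performance Shaders backend when it is available
    if torch.backends.mps.is_available():
        torch.set_default_device("mps")   # new tensors are created on the Apple GPU
    else:
        print("MPS not available; staying on the CPU")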



You can use the PyTorch MPS backend if your Mac does not have an Nvidia GPU. Documentation: https://pytorch.org/docs/stable/notes/mps.html

