
I am trying to create a convolutional model in PyTorch where

  • one layer is fixed (initialized to prescribed values)
  • another layer is learned (but initial guess taken from prescribed values).

Here is sample code for the model definition:

import torch.nn as nn

class Net(nn.Module):
    def __init__(self, weights_fixed, weights_guess):
        super(Net, self).__init__()
        self.convL1 = nn.Conv1d(1, 3, 3, bias=False)
        self.convL1.weight = weights_fixed   # I want to keep these weights fixed
        self.convL2 = nn.Conv1d(3, 1, 1, bias=False)
        self.convL2.weight = weights_guess   # I want to learn these weights

    def forward(self, inp_batch):
        out1 = self.convL1(inp_batch)
        out2 = self.convL2(out1)
        return out2

and sample usage:

weights_fixed = ...
weights_guess = ...
model = Net(weights_fixed, weights_guess)

loss_fn = nn.CrossEntropyLoss()
optim = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

train_dataset = ...  # define training set here
for (X, y) in train_dataset:
    optim.zero_grad()
    out = model(X)
    loss = loss_fn(out, y)
    loss.backward()
    optim.step()

How can I make weights_fixed stay fixed and weights_guess learnable?

My guess would be

weights_fixed = nn.Parameter(W1, requires_grad=False)
weights_guess = nn.Parameter(W2, requires_grad=True)

where, for the sake of completeness,

import numpy as np
import torch

order = 2
krnl = np.zeros((3, order + 1))
krnl[:, 0] = [ 0. ,  1., 0. ]
krnl[:, 1] = [-0.5,  0., 0.5]
krnl[:, 2] = [ 1. , -2., 1. ]
W1 = torch.tensor(krnl)

a = np.array((1., 2., 3.))
W2 = torch.tensor(a)
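Note that nn.Conv1d stores its weight with shape (out_channels, in_channels, kernel_size), so W1 and W2 as built above would still need reshaping before they can serve as the layer weights. A sketch of the reshaping I assume is intended, taking each column of krnl as one kernel (the explicit float32 dtype avoids a mismatch with Conv1d's default):

# continuing from the snippet above (krnl and a already defined)
# convL1 = nn.Conv1d(1, 3, 3) expects weight shape (3, 1, 3)
W1 = torch.tensor(krnl.T, dtype=torch.float32).reshape(3, 1, 3)
# convL2 = nn.Conv1d(3, 1, 1) expects weight shape (1, 3, 1)
W2 = torch.tensor(a, dtype=torch.float32).reshape(1, 3, 1)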

But I am utterly confused. Any suggestions or references would be greatly appreciated. Of course I went over the PyTorch docs, but they did not add clarity to my understanding.


4 Answers


Just wrap the learnable parameter with nn.Parameter (requires_grad=True is the default, so there is no need to specify it), and keep the fixed weight as a Tensor without the nn.Parameter wrapper.

All nn.Parameter weights are automatically added to net.parameters(), so when you train with optimizer = optim.SGD(net.parameters(), lr=0.01), the fixed weight will not be updated.

So basically this:

weights_fixed = W1
weights_guess = nn.Parameter(W2)
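One caveat with this answer as written: assigning a plain Tensor to self.convL1.weight raises a TypeError, because a module attribute that is already an nn.Parameter only accepts an nn.Parameter or None. A minimal sketch of one way to keep the fixed weight as a plain tensor anyway, using the functional API; register_buffer is my own addition so the tensor still moves with .to()/.cuda():

import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self, weights_fixed, weights_guess):
        super(Net, self).__init__()
        # A buffer is a plain tensor: saved in the state_dict and moved with
        # the module, but never returned by net.parameters().
        self.register_buffer("weights_fixed", weights_fixed)
        # An nn.Parameter is picked up by net.parameters() automatically.
        self.weights_guess = nn.Parameter(weights_guess)

    def forward(self, inp_batch):
        out1 = F.conv1d(inp_batch, self.weights_fixed)   # fixed convolution
        out2 = F.conv1d(out1, self.weights_guess)        # learnable convolution
        return out2

With this layout the question's optimizer line, torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9), works unchanged, because weights_guess is the only parameter.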
1

You can pass the optimizer only the parameters that you want to learn:

optim = torch.optim.SGD(model.convL2.parameters(), lr=0.1, momentum=0.9)
# the optimizer now skips the parameters of convL1

If your model has more layers, convert the parameter generators to lists and concatenate them:

params_to_update = list(model.convL2.parameters()) + list(model.convL3.parameters())
optim = torch.optim.SGD(params_to_update, lr=0.1, momentum=0.9)

as described here: https://discuss.pytorch.org/t/giving-multiple-parameters-in-optimizer/869
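For what it's worth, itertools.chain does the same job without materializing lists (convL3 here is the hypothetical extra layer from the example above, not part of the question's model):

import itertools

# chain() lazily concatenates the two parameter generators;
# the optimizer accepts any iterable of parameters.
params_to_update = itertools.chain(model.convL2.parameters(),
                                   model.convL3.parameters())
optim = torch.optim.SGD(params_to_update, lr=0.1, momentum=0.9)

Note that with this approach gradients are still computed for convL1 during backward(); they are simply never applied, because the optimizer never sees those parameters.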

0

You can do this:

# this will mostly be inside your class
self.convL1.weight.requires_grad = False

And this is where you define the optimizer:

optimizer = optim.SGD(filter(lambda p: p.requires_grad, net.parameters()), lr=0.1)

So the optimizer will only use the parameters that have gradients enabled.
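Put together, a minimal sketch of this approach against the Net from the question (assuming weights_fixed and weights_guess are already nn.Parameters of the right shape):

model = Net(weights_fixed, weights_guess)
model.convL1.weight.requires_grad = False   # freeze the fixed layer

# Only the parameters that still require gradients reach the optimizer.
optimizer = torch.optim.SGD(
    filter(lambda p: p.requires_grad, model.parameters()),
    lr=0.1,
    momentum=0.9,
)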

0

Modify your model definition to be:

import torch.nn as nn

class Net(nn.Module):
    def __init__(self, weights_fixed, weights_guess):
        super(Net, self).__init__()
        self.convL1 = nn.Conv1d(1, 3, 3, bias=False)
        self.convL1.weight = weights_fixed   # I want to keep these weights fixed
        self.convL1.weight.requires_grad = False
        self.convL2 = nn.Conv1d(3, 1, 1, bias=False)
        self.convL2.weight = weights_guess   # I want to learn these weights

    def forward(self, inp_batch):
        out1 = self.convL1(inp_batch)
        out2 = self.convL2(out1)
        return out2
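A quick, hypothetical sanity check for this (or any of the approaches above), assuming the training-loop variables from the question (X, y, loss_fn, optim) are in scope:

# Snapshot the fixed weights, take one step, and verify only convL2 moved.
before = model.convL1.weight.detach().clone()
optim.zero_grad()
loss_fn(model(X), y).backward()
optim.step()
assert torch.equal(model.convL1.weight, before), "convL1 should stay fixed"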
