I'm playing around with genetic algorithms in PyTorch, and I'm looking for a more efficient way to mutate the weights of a network (i.e., apply a small random modification to them).
Right now I have a suboptimal solution where I loop through the parameters and apply a random modification to every single element:
```python
import numpy as np
import torch

child_agent = network()

with torch.no_grad():  # needed so the in-place updates don't trip autograd
    for param in child_agent.parameters():
        if len(param.shape) == 4:  # weights of a Conv2d layer
            for i0 in range(param.shape[0]):
                for i1 in range(param.shape[1]):
                    for i2 in range(param.shape[2]):
                        for i3 in range(param.shape[3]):
                            param[i0][i1][i2][i3] += mutation_power * np.random.randn()
        elif len(param.shape) == 2:  # weights of a linear layer
            for i0 in range(param.shape[0]):
                for i1 in range(param.shape[1]):
                    param[i0][i1] += mutation_power * np.random.randn()
        elif len(param.shape) == 1:  # biases of a linear or conv layer
            for i0 in range(param.shape[0]):
                param[i0] += mutation_power * np.random.randn()
```

This solution is tied to my architecture and needs recoding if I decide to add more layers. Is there a way to do this more efficiently and cleanly? Preferably one that works regardless of what my network architecture looks like.
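I suspect the per-element loops could be replaced by drawing a whole noise tensor per parameter, something roughly like the sketch below (untested; `mutate` is just a name I made up, and `mutation_power=0.02` is a placeholder value):

```python
import torch

def mutate(agent, mutation_power):
    """Add Gaussian noise to every parameter, whatever its shape."""
    with torch.no_grad():
        for param in agent.parameters():
            # randn_like draws noise with the same shape/dtype/device as param,
            # so conv weights, linear weights, and biases are all handled alike
            param.add_(mutation_power * torch.randn_like(param))
    return agent

child_agent = mutate(network(), mutation_power=0.02)
```

If that's on the right track, it would drop all the shape-specific branches, since the noise tensor automatically matches whatever shape each parameter has. Is this the recommended approach, or is there something better?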
Thanks