As suggested by the title: is it possible to fix (i.e., freeze) part of a neural network while training?
Mathematica provides a way to extract part of a neural network and combine it with new layers to make a new one:

    newNet = NetChain[{Take[oldNet, 3], 10, Ramp, 10}]

It would be very helpful to be able to fix the layers taken from the old network. This way one can reuse a trained network and investigate how transferable the features in a neural network are (cf. https://arxiv.org/abs/1411.1792).
Is `LearningRateMultipliers` what you are looking for? And here are examples of transfer learning.
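A minimal sketch of this approach: `NetTrain` accepts a `LearningRateMultipliers` option, and a multiplier of `None` freezes the matching layers. The exact layer specification depends on how `NetChain` nests or flattens the copied sub-chain, so check the layer positions in your own net (`trainingData` here is a placeholder for your data):

    (* reuse the first three layers of a previously trained net *)
    newNet = NetChain[{Take[oldNet, 3], 10, Ramp, 10}]

    (* freeze the transferred part; unlisted layers train normally *)
    (* if NetChain flattened the sub-chain into layers 1..3, use 1 ;; 3 -> None instead *)
    trained = NetTrain[newNet, trainingData,
      LearningRateMultipliers -> {1 -> None}]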