I have a behaviour vector representing some identity, and I need to binary-classify each instance as malicious or benign, ideally with a normalised severity score.
For that I can use any of a variety of models: a linear classifier, a kernelized SVM, a Random Forest, etc.
The issue is that once the classifier has been trained, I'd like to let the user configure which behaviours are more (or less) critical.
For example, one behaviour might be encryption performed by some process, and a user fearing ransomware might want to make that behaviour more significant.
With a linear classifier (which I'd like to avoid), simply multiplying the weight vector W element-wise by the user's configuration vector would do the trick, as in the sketch below. What can be done in a kernelized SVM, Random Forest, DNN, etc. to achieve an equivalent result?
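To make the linear case concrete, here is a minimal sketch of what I mean. It uses scikit-learn's `LogisticRegression` on toy data; `user_config` is a hypothetical per-behaviour configuration vector (values > 1 amplify a behaviour, values < 1 dampen it):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))             # 200 identities, 5 behaviours
y = (X[:, 0] + X[:, 2] > 0).astype(int)   # toy malicious/benign labels

clf = LogisticRegression().fit(X, y)

# e.g. the user doubles the importance of behaviour 2 ("encryption")
user_config = np.array([1.0, 1.0, 2.0, 1.0, 1.0])
w_adjusted = clf.coef_[0] * user_config   # element-wise reweighting of W

# Adjusted decision score, squashed to a normalised [0, 1] severity
scores = X @ w_adjusted + clf.intercept_[0]
severity = 1.0 / (1.0 + np.exp(-scores))
labels = (severity > 0.5).astype(int)     # malicious = 1, benign = 0
```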