Pre-trained models are widely used across different tasks. I wonder whether a model pre-trained on data from domain A will work well on data from domain B.

For example, if I fine-tune a model (trained on ImageNet) to solve a classification problem in the biomedical domain, will this pretrain-then-fine-tune approach be better than training my own model from scratch?

1 Answer

In general, a model trained on a given domain will outperform one trained on a different domain. However, there is no free lunch in data science. In your case this means it is impossible to say in advance whether a pre-trained model will outperform your own model, because it depends on

  1. How well the pre-trained model generalizes to the biomedical domain, which is very hard to estimate without actually testing it.
  2. How well your own model will perform. This in turn depends on your computational resources, data quality and volume, optimization techniques, experience, and a ton of other factors.

In short: you'll have to try it out!
  • Thanks for your reply. Yes, I agree with you. I just read some papers that use a ResNet pre-trained on ImageNet even though their data domain is not similar to ImageNet's. So I wonder whether pre-trained weights give the model better generalization capacity even on a new data domain. Commented Oct 21, 2022 at 3:49
  • It might also be very worth your while to take a look at fine-tuning. It will allow you to combine the strength of a pre-trained network and the domain knowledge advantage of training a new one. stats.stackexchange.com/questions/331369/… Commented Oct 21, 2022 at 11:22
