$\begingroup$

I've made a wristwatch model. It consists of many parts, so I've created an instance of the finished project so that I can easily transform it as a whole (rotate, scale, or move it) for rendering according to my needs.

However, I created the model approximately 4 times larger than a real-life wristwatch. I did this because I was advised to when I was a beginner, so that I wouldn't have to deal with certain modelling 'issues', like zooming in on tiny parts or entering very small values in nodes (e.g. Bump or Noise nodes). It simply felt more convenient.

For rendering, however, I was advised that it's better to work at real-life scale, so I scaled the instance down by a factor of 4.

There's a texture I'm using for the watch's dial (the highlighted blue surface):

[screenshot: the watch dial with the textured surface highlighted in blue]

Here's an example image texture I'm using:

[image: the dial's image texture]

For testing, I rendered the same scene using both approaches:

  1. Keeping original scale of instance
  2. Scaling it down 4 times smaller

Note that the wristwatch was kept at almost the same size in the rendered result by adjusting the camera, so the watch didn't actually look 4 times smaller in the render. Here's a rough illustration to explain further:

[illustration: both renders framed so the watch fills roughly the same portion of the image]

I did not notice a significant difference in the quality of that texture, but I still suspect scaling affects the quality of image textures; my comparison test leaves me unsure.

Given all that, is this approach of modelling at a large scale and then scaling the instance down significantly good or bad practice? Can it lead to quality issues with image textures (or perhaps other issues as well)?

If it does create quality issues, then, other than rendering at the instance's original scale, are there ways to optimize the texture so that scaling down doesn't noticeably affect render quality?

$\endgroup$
  • $\begingroup$ What could change is a result of the camera (the camera probably has real-life dimensions in both cases), so one render might show a different depth of field and a different focus. $\endgroup$ Commented Jul 12 at 13:53

1 Answer

$\begingroup$

No. Honestly, you've already done the science and tested it yourself, and your results are correct. There's no reason it would reduce image texture quality: the exact same interpolation formula is used to stretch the image over the model, and the size is virtual. The "DPI" of the "printer" that puts the image on the model is essentially the numerical precision of the floating-point numbers Blender uses for geometry. You could eventually run into floating-point errors if the model were gigantic or tiny, but that's not going to be an issue here, and at that point you'd have other problems with the 3D geometry itself anyway.
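To make the two points above concrete, here's a minimal sketch in Python with NumPy standing in for the renderer's sampler. `sample_nearest` is a hypothetical helper, not Blender's actual code: the point is that its inputs are only the texture and a UV coordinate, so the object's world scale never enters the lookup. The last two lines show where scale *could* bite: the gap between adjacent float32 values grows with magnitude.

```python
import numpy as np

rng = np.random.default_rng(0)
texture = rng.random((256, 256))  # a grayscale "image texture"

def sample_nearest(tex, uv):
    """Nearest-neighbour texture lookup: UV in [0,1)^2 -> texel value.
    Note the signature: object scale is not a parameter at all."""
    h, w = tex.shape
    x = int(uv[0] * w) % w
    y = int(uv[1] * h) % h
    return tex[y, x]

# UVs are stored per-vertex and are unchanged by the object's transform,
# so the same surface point samples the same texel at any scale.
uv = (0.371, 0.842)
assert sample_nearest(texture, uv) == sample_nearest(texture, uv)

# Where scale *could* matter: float32 precision depends on magnitude.
print(np.spacing(np.float32(1.0)))      # gap between floats near 1 unit
print(np.spacing(np.float32(10000.0)))  # ~8000x larger gap near 10000 units
```

At 4x scale (or 1/4x), coordinates stay well within the range where float32 spacing is far below anything visible, which matches the test result in the question.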

$\endgroup$
