When I downscale an image enough, the result looks like a different image. I'm not talking about fidelity: I know about the various algorithms that can be selected in Photoshop when downscaling, but what is it that makes the image look different?
For example, say you have a 1024x1024 pixel image of a face. When you downscale it to 64x64, it no longer looks like a face; it looks like lungs or something. There is a difference between bicubic, nearest neighbor, and bilinear in terms of fidelity, but what is it that my mind sees that makes the image look different?
If I were to manually manipulate the pixels (rather than just selecting an algorithm when changing the image size), what would I do to make the downscaled image be perceived as the same face, only smaller?
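To make concrete what I mean by manually manipulating pixels: here's a rough sketch (not my actual workflow, just an illustration) of the most naive manual approach I can think of, a box average where each output pixel is the mean of a block of input pixels. This is roughly what "area-average" resampling does, and it's the kind of per-pixel operation I'd be doing by hand:

```python
import numpy as np

def box_downscale(img, factor):
    """Naive area-average downscale: each output pixel is the mean
    of a factor x factor block of input pixels."""
    h, w = img.shape[:2]
    assert h % factor == 0 and w % factor == 0
    # Group the image into factor x factor blocks, then average each block.
    blocks = img.reshape(h // factor, factor, w // factor, factor, -1)
    return blocks.mean(axis=(1, 3)).astype(img.dtype)

# Example: a synthetic 1024x1024 RGB image reduced to 64x64 (factor 16).
img = np.random.randint(0, 256, (1024, 1024, 3), dtype=np.uint8)
small = box_downscale(img, 16)
print(small.shape)  # (64, 64, 3)
```

My question is what I would change in a loop like this (weighting? sharpening? something perceptual?) so that the 64x64 result still reads as the same face.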