While developing mipmaps and new interpolation techniques for them, such as trilinear and EWA, I've found an unexpected problem, and I need your advice and opinions on how to proceed.
You can see in the first comment below a drawing explaining the problem.
What I found was this: using a black-and-white "checker" pattern from a standard sRGB PNG file, I got surprising interpolation results, where the mipmaps showed too dark a gray when mixing the black and white pixels. The gray was approximately 0.216 in YafaRay's internal linear RGB space, while I was expecting a middle gray of linear 0.5 (mixing black 0.0 and white 1.0, both "extreme" values that are unaffected by the sRGB-to-linear conversion).
While investigating this I found out that YafaRay interpolates the texture pixels first and does the color space conversion afterwards. So the first step was, for example, mixing sRGB black and white into a gray of sRGB 0.5, which then decodes to approximately linear 0.216.
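A quick sketch of the standard sRGB piecewise decode shows where the dark gray comes from: averaging the texels first produces sRGB 0.5, which decodes to roughly linear 0.214 (close to the ~0.216 I measured; a pure 2.2 gamma gives ~0.218). The function below is the textbook sRGB formula, not YafaRay's actual code:

```python
def srgb_to_linear(c: float) -> float:
    """Standard sRGB electro-optical transfer function (piecewise)."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

# Interpolating first: black and white sRGB texels average to sRGB 0.5,
# which then decodes to a dark linear gray.
mixed_srgb = (0.0 + 1.0) / 2.0
print(srgb_to_linear(mixed_srgb))   # ~0.214, the "too dark" gray

# Decoding first: black and white decode to linear 0.0 and 1.0,
# and their average is the expected middle gray.
mixed_linear = (srgb_to_linear(0.0) + srgb_to_linear(1.0)) / 2.0
print(mixed_linear)                 # 0.5
```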
I've tested other renderers such as Blender Internal and LuxRender, and both seem to do the same, but I think it's wrong.
How I believe it should work is: do the color space conversion first for each texture pixel used in the interpolation, and interpolate the linear RGB values afterwards. I think this is the "correct" way to do it, avoiding the artifacts and wrong colors that result from interpolating sRGB values directly. I'm really surprised that Blender Internal and (especially) LuxRender do it the same way as YafaRay!
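As a sketch of the order of operations I'm proposing (the function names are just for illustration, not YafaRay's actual API), bilinear sampling would decode each of the four texels to linear before blending:

```python
def srgb_to_linear(c: float) -> float:
    """Standard sRGB piecewise decode."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

def bilinear_decode_first(t00, t10, t01, t11, fx, fy):
    """Decode the four sRGB texels to linear, then blend bilinearly."""
    l00, l10, l01, l11 = (srgb_to_linear(t) for t in (t00, t10, t01, t11))
    top = l00 * (1 - fx) + l10 * fx
    bottom = l01 * (1 - fx) + l11 * fx
    return top * (1 - fy) + bottom * fy

# Sampling exactly between black and white checker texels now gives
# the expected middle gray instead of a dark one:
print(bilinear_decode_first(0.0, 1.0, 0.0, 1.0, 0.5, 0.5))  # 0.5
```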
The problem is this: if we move the color space conversion before the interpolation, I will have to do many more color conversions per texture pixel (texel). For bilinear it multiplies the color space calculations by 4. For bicubic, even more (a 4×4 neighborhood means 16). For trilinear mipmap interpolation, by 8. For EWA mipmap interpolation, by *a lot*. The options I see are:
* Keep it as it is now. I'm not happy with this idea, as wrong colors can result from interpolating sRGB texels directly.
* Change to decoding the color space before interpolation: best for color accuracy, but it could make renders quite a bit slower.
* Add a parameter to choose between the "old way" and the "new, correct but slower way". However, I'm not sure this should be user-adjustable: why would anyone choose a setting that can give wrong color results?
* Create "linear" decoded versions of the loaded sRGB textures and use them during the render instead of the originals, so no color space decoding takes place during rendering, only during texture loading. The problem with this approach is that I would need to keep both the original sRGB texture and the decoded linear version in RAM, effectively duplicating (or more, if mipmaps are used) the memory used for textures.
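The last option could be sketched like this (again just an illustration, with hypothetical names, assuming texels stored as floats): the whole texture is decoded once at load time, and sampling then interpolates the precomputed linear values with no per-texel conversion.

```python
def srgb_to_linear(c: float) -> float:
    """Standard sRGB piecewise decode."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

def decode_texture(srgb_texels):
    """Decode an entire texture once at load time; the render then
    interpolates these linear values directly."""
    return [srgb_to_linear(t) for t in srgb_texels]

srgb_row = [0.0, 0.5, 1.0]            # one row of sRGB texels
linear_row = decode_texture(srgb_row) # kept alongside the original
print(linear_row)                     # [0.0, ~0.214, 1.0]
```

This trades the per-sample conversion cost for the extra memory described above, since the linear copy (and its mipmaps) lives in RAM next to the sRGB original.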
Please let me know what you think about this.
Thanks and best regards!