YafaRay Texture Interpolation and Color Spaces, maybe wrong?

Component: YafaRay Core
Category: bug report
Assigned: David Bluecame
Status: ready to commit


While developing mipmaps and new interpolation techniques for them such as Trilinear and EWA, I've found an unexpected problem, and I need your advice and opinions about how to proceed about it.

You can see in the first comment below a drawing explaining the problem.

What I found was this: when I was using a black and white "checker" pattern in a standard sRGB PNG file, I got surprising results from the interpolation, where the mipmaps showed too dark a gray when mixing the black and white pixels. The gray was approximately 0.216 in the internal YafaRay linear RGB space, while I was expecting a middle gray of linear 0.5 (mixing black 0.0 and white 1.0, both "extreme" values which are unaffected by the sRGB-to-linear conversion).

While investigating this I found out that YafaRay does the interpolation of the texture pixels first and the color space conversion afterwards. Therefore the first step was, for example, mixing the sRGB black and white into a gray of sRGB 0.5, which translates to approximately linear 0.216.
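This effect can be reproduced with a minimal, language-agnostic sketch (Python here, purely illustrative; YafaRay itself is C++). Note that the exact sRGB curve gives about 0.214, and a pure gamma-2.2 approximation gives about 0.218, both close to the ~0.216 observed:

```python
def srgb_to_linear(c: float) -> float:
    """sRGB electro-optical transfer function (IEC 61966-2-1)."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

# Step 1 (current YafaRay order): interpolate the sRGB texels first.
# Mixing sRGB black (0.0) and white (1.0) gives sRGB 0.5.
srgb_mix = 0.5 * 0.0 + 0.5 * 1.0

# Step 2: decode the color space on the interpolated result.
linear_result = srgb_to_linear(srgb_mix)
print(round(linear_result, 3))  # ~0.214, not the expected 0.5
```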

I've tested other renderers such as Blender Internal and Luxrender and both seem to do the same, but I think it's wrong.

How I believe it should work is: do the color space conversion first for each texture pixel used for interpolation, and afterwards interpolate the linear RGB values. I think this is the "correct" way to do it, avoiding artifacts and wrong color results from interpolating sRGB colors directly. I'm really surprised that Blender Internal and (especially) LuxRender do it the same way as YafaRay!
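Comparing the two orders side by side in a small sketch (illustrative Python, not YafaRay code) shows why decode-then-interpolate gives the expected result:

```python
def srgb_to_linear(c: float) -> float:
    """sRGB electro-optical transfer function (IEC 61966-2-1)."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

black_srgb, white_srgb = 0.0, 1.0

# Current order: interpolate the sRGB texels, then decode the result.
wrong = srgb_to_linear(0.5 * black_srgb + 0.5 * white_srgb)   # ~0.214

# Proposed order: decode each texel to linear, then interpolate.
right = 0.5 * srgb_to_linear(black_srgb) + 0.5 * srgb_to_linear(white_srgb)
print(wrong, right)  # right is exactly 0.5, the expected middle gray
```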

The problem is this: if we move the color space conversion before the interpolation, I will have to do many more color conversions per texture pixel (texel). For bilinear it would multiply the color space calculations by 4. For bicubic, a lot more (by 16, since it reads a 4x4 block of texels). For trilinear mipmap, by 8. For EWA mipmap interpolation, by *a lot*.
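The factor of 4 for bilinear can be seen in a small sketch (illustrative Python; the function name and call-counting are mine, not YafaRay's API). Decoding moves from once per lookup to once per texel read:

```python
def srgb_to_linear(c: float, counter: list) -> float:
    """sRGB decode, counting how many times it is invoked."""
    counter[0] += 1
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def bilinear_linear_first(texels, dx, dy, counter):
    """Bilinear interpolation of a 2x2 sRGB texel block, decoding each
    texel to linear first. texels is [[c00, c10], [c01, c11]]."""
    lin = [[srgb_to_linear(t, counter) for t in row] for row in texels]
    top = lin[0][0] * (1 - dx) + lin[0][1] * dx
    bot = lin[1][0] * (1 - dx) + lin[1][1] * dx
    return top * (1 - dy) + bot * dy

calls = [0]
# A 2x2 checker neighborhood, sampled at its center.
result = bilinear_linear_first([[0.0, 1.0], [1.0, 0.0]], 0.5, 0.5, calls)
print(result, calls[0])  # 0.5 from 4 decodes, vs 1 decode in the old scheme
```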

Possible solutions:

* Keep it as it is now. I'm not happy with the idea as wrong colors can result from interpolating sRGB texels directly.

* Change to decode color space before interpolation: it would be best for color accuracy, but could make renders quite a bit slower.

* Add a parameter to choose between the "old way" and the "new, correct but slower way". However, I'm not sure this should be user-adjustable: why choose a setting that can give wrong color results?

* Create "linear" decoded versions of the loaded sRGB textures and use them during the render instead of the originals, so no color space decoding takes place during rendering, only during texture loading. The problem with this approach is that I need to keep the original sRGB and the decoded linear in RAM, effectively duplicating (or more if mipmaps are used) the amount of memory used for textures.

In the next comment I will attach a drawing showing the problem. Please let me know what you think about this.

Thanks and best regards!



This is a drawing showing my findings. Please let me know if you believe I'm wrong here:

Interpolation and Color Spaces problems.jpg 89.61 KB


An interesting article about interpolating in sRGB vs linear RGB:



And this video mentioned in the article above is quite interesting too:



Hi, David,

Indeed, from the beginning of our use of YafaRay we have always had problems with the colors; we were never able to get the correct colors despite several attempts using the various RGB color maps, linear or sRGB. The result was always different from our color map! In fact, we wrote our own interpolation to try to reproduce the same colors, but honestly there was no logical explanation for it!

In the latest versions we tried to follow your advice, but the results were not always perfectly correct!

Unfortunately we cannot help you technically with resolving this problem, but we can confirm that there is something wrong!

Good luck in solving this further problem!






Hello, John.

Thank you for your comments. I'm not sure if it's asking too much, but would it be possible for you to tell me whether Edificius decodes the sRGB textures before interpolating or if it does the interpolation directly on the original sRGB texture pixels and decodes the color space only for the result?

Thanks and best regards!


Hi David,

On the textures we don't do any kind of interpolation.

We simply save the texture file in its original format and write the path of the file in the YafaRay XML.





Status: needs review » ready to commit


This problem was difficult to solve in a reasonable manner. I had to settle on a compromise solution: creating a new parameter to let the user decide how interpolation behaves with respect to color space decoding.

The change is in this new commit, which is a large one:



I believe that the correct way to interpolate textures is to decode color space into linear space every time a texel is read, before the texels are used for interpolation. That way the interpolation is calculated correctly in linear color space.

Unfortunately there are negative consequences of decoding the color space before interpolation: it requires many more calculations, as the color space is decoded not on the result of the interpolation but every time a texel is read to be interpolated. For bilinear that means multiplying the color space decoding calculations by 4. For bicubic, by 16! For trilinear, by 8, and for EWA... it depends on each case, but it could be a lot!

In order to avoid that, and to provide a backward compatibility path, I've created a new per-texture setting:
* A ColorSpace Interpolation parameter in Blender-Exporter. This is set per texture; if the texture is set to "Default", it will use the global setting with the same name in the main render tab
* <colorspace_interpolation_method sval="xxxxxx"/> in XML interface

The way this will work is as follows:
* <colorspace_interpolation_method sval="old-legacy"/>  This will use the original YafaRay color space decoding after interpolation. However, it's no longer recommended because it's not mathematically correct and it can show differences when using mipmaps compared to non-mipmap interpolation
* <colorspace_interpolation_method sval="linear-memory"/>  This will use the new (hopefully mathematically correct!) interpolation after color space decoding. However, rendering *will* be slower than the old system, as many more color space decoding calculations need to take place.
* <colorspace_interpolation_method sval="linear-speed"/>  This will use the new (hopefully mathematically correct!) interpolation after color space decoding. To avoid the speed penalty of the "linear-memory" method, in this case I created a "clone" of the original texture in RAM with the colors already decoded to linear. Therefore the interpolation reads linear texel values from RAM and does not have to do any color space decoding during the render. This makes rendering faster, probably even a bit faster than the original old-legacy YafaRay method. However, it comes with a penalty in RAM usage, as the memory to store textures will double!
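The "linear-speed" idea can be sketched as follows (hypothetical Python class of my own naming, not YafaRay's actual implementation): pay the decoding cost once at load time, and keep a linear copy that lookups read directly:

```python
def srgb_to_linear(c: float) -> float:
    """sRGB electro-optical transfer function (IEC 61966-2-1)."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

class LinearSpeedTexture:
    """Sketch of the 'linear-speed' method: decode the whole sRGB
    texture to linear once at load time, so lookups during the render
    need no per-texel color space conversion."""
    def __init__(self, srgb_pixels):
        # The RAM cost: a second, fully decoded linear copy is kept.
        self.linear = [srgb_to_linear(p) for p in srgb_pixels]

    def texel(self, i):
        return self.linear[i]  # already linear, no decoding here

tex = LinearSpeedTexture([0.0, 0.5, 1.0])
print(tex.texel(1))  # ~0.214, decoded once at load, free at render time
```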

I also had another problem while implementing the new methods explained above. The linear-speed method, which is based on generating linear color copies of the original textures, makes the copies in the same internal Color Buffer format as the original. If the texture uses Optimization=optimized, YafaRay until now stored the original texture with 8 bits per color channel. However, when creating linear copies of the original textures, 8 bits per channel does not have enough bit depth to store all the details of the decoded color. Therefore I had to increase the "optimized" color buffer size from RGB888 (3 bytes, 8 bits per channel) to RGB101010 (4 bytes, 10 bits per channel) and from RGBA8888 (4 bytes, 8 bits per channel) to RGBA1010108 (5 bytes, 10 bits per channel for RGB and 8 bits for A). This way I hope to keep all color information even after decoding the color space.
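A 10-bit-per-channel packing of the kind described can be sketched like this (an illustrative layout of my own; YafaRay's actual bit arrangement may differ):

```python
def pack_rgb101010(r: float, g: float, b: float) -> int:
    """Pack three [0,1] floats into one 32-bit word, 10 bits per
    channel (1024 levels instead of 256 with 8 bits)."""
    q = lambda c: min(1023, max(0, round(c * 1023)))
    return (q(r) << 20) | (q(g) << 10) | q(b)

def unpack_rgb101010(word: int):
    """Recover the three channels; max quantization error ~1/2046."""
    return tuple(((word >> s) & 1023) / 1023 for s in (20, 10, 0))

w = pack_rgb101010(0.214, 0.5, 1.0)
print(unpack_rgb101010(w))  # each channel within ~0.001 of the input
```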

Therefore, when using "optimized" textures, RAM usage will increase by around 20-25% compared to the previous v3.1.1 version. I could not find any reasonable way to avoid this while keeping the color accuracy. If RAM is a concern for a certain render, the user can choose "linear-memory", which is slower but uses a lot less RAM, or choose optimization="compressed", which reduces the RAM usage further at the cost of losing some color precision/information.


Status: ready to commit » needs work

I'm not yet convinced... I will try to do a more "intelligent" automatic handling of buffers depending on the type of image and the expected usage (coloring, bump mapping, normal mapping, stencil, etc.). I would like to avoid this parameter and the "double RAM" requirement for linear-speed.


Hi, David,

First of all, we thank you for your excellent work and the time you devote to the continuous improvement of YafaRay.

We have started testing the new version, and indeed the additional RAM usage is very high; we see around 30-40% more!

I hope we can find a solution that does not require all this extra RAM.

Yours sincerely


ACCA Software


Yes, unfortunately this is what I expected. I will have to find another solution. I'll let you know asap.


Status: needs work » ready to commit


To avoid the "double RAM" requirement of the new "correct" texture interpolation, I've simplified the interpolation interface and removed the "colorspace_interpolation_method" option introduced in the previous commit. From now on, all image textures are stored in RAM as linear RGB and are no longer stored "as is". The "old-legacy" interpolation method used in old YafaRay versions is no longer available.

The change is here: https://github.com/YafaRay/Core/commit/003d61469f564d4e442d3d2f8360ff4ec...

From now on the user is responsible for selecting the correct ColorSpace for all textures, including bump maps, normal maps, etc. For example, non-RGB / stencil / bump / normal map textures are typically already linear, and the user should select "linearRGB" in the texture properties; if the user keeps the default sRGB for them by mistake, YafaRay will (incorrectly) apply the sRGB->LinearRGB conversion, causing the values to be wrong.

However, I've added a "fail safe": for any "float" textures, bump maps, normal maps, etc., when getting colors after interpolation, YafaRay will do an "inverse" color conversion back to the original color space. This way, even a mistake in the user's color space selection for bump maps, normal maps, etc. will not cause any significant problems in the image, as the values will be converted back to their original color space. However, in this case rendering will be slower and artifacts can potentially appear, because the interpolation takes place in the wrong color space. For optimal results, the user must correctly select the color space for all textures.
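The fail-safe round trip can be sketched like this (illustrative Python; it shows only the math of the inverse conversion, ignoring the interpolation step that happens between decode and re-encode):

```python
def srgb_to_linear(c: float) -> float:
    """sRGB electro-optical transfer function (IEC 61966-2-1)."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c: float) -> float:
    """Exact inverse of srgb_to_linear."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# A bump/normal map value that is already linear, but mistakenly tagged
# as sRGB by the user: it gets decoded on load...
stored = srgb_to_linear(0.75)

# ...and the fail-safe converts the (interpolated) result back, so the
# original value is recovered, up to interpolation artifacts.
recovered = linear_to_srgb(stored)
print(round(recovered, 6))  # ~0.75
```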


Thanks David,

We will try the new version soon and, as always, we will let you know how it goes!