Madshrimps Forum Madness


vpegorar 16th July 2003 15:39

High precision blending on 128-bit color card
 
Hi everybody!

I am doing volume rendering in OpenGL under Windows using 3D textures. Since I am rendering scientific data, I want as much detail as possible, which means using a large number of slices. As I increase the number of slices, I recompute each slice's color so that its alpha value gets smaller and smaller, in order to keep the same overall attenuation. The problem is that I eventually reach a point where the alpha values are so small that they no longer have any effect.
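To make the problem concrete, here is a small standalone program (the baseline slice count and per-slice alpha are made-up numbers, not my real data) that applies the usual opacity correction, keeping (1 - alpha)^N constant as the slice count N grows, and shows how quickly the corrected alpha falls below the 1/255 step of an 8-bit channel:

#include <cmath>
#include <cstdio>

// Illustration only: the reference slice count and alpha below are invented.
// Opacity correction keeps (1 - alpha)^N constant, so the per-slice alpha
// shrinks as the slice count N grows; an 8-bit channel rounds it to 1/255 steps.
int main()
{
    const double referenceSlices = 128.0;  // assumed baseline slice count
    const double referenceAlpha  = 0.02;   // assumed per-slice alpha at that baseline

    for (double slices = 128.0; slices <= 4096.0; slices *= 2.0) {
        double alpha = 1.0 - std::pow(1.0 - referenceAlpha, referenceSlices / slices);
        double stored8bit = std::floor(alpha * 255.0 + 0.5) / 255.0;  // what an 8-bit alpha keeps
        std::printf("slices=%5.0f  alpha=%.6f  stored(8-bit)=%.6f\n",
                    slices, alpha, stored8bit);
    }
    return 0;
}

With these invented numbers, the stored 8-bit value rounds to exactly zero once the slice count reaches a few thousand, which is the behaviour I am running into.
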
The only solution I can see would be to carry out the computation with values coded on at least 2 bytes instead of one, but I don't know how to do that. If I set the texture's internal format to GL_ALPHA16 instead of GL_ALPHA8, glGetTexLevelParameteriv(GL_TEXTURE_3D_EXT, 0, GL_TEXTURE_ALPHA_SIZE, &val2) still gives me 8 as a result, while glGetTexLevelParameteriv(GL_TEXTURE_3D_EXT, 0, GL_TEXTURE_INTERNAL_FORMAT, &val1) returns the same internal format I asked for. Moreover, even if the texture were stored with 16 bits instead of 8, I am not sure the data wouldn't be reduced to 1 byte during the computation anyway. I have tried changing the pixel format descriptor and using glutInitDisplayString, hoping this would define the format OpenGL uses for its calculations, but it seems that an alpha size larger than 8 bits is not supported.
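Stripped down, this is roughly what I am doing on the texture side (the function and variable names below are just placeholders, and I assume glTexImage3D has already been fetched with wglGetProcAddress, since opengl32.dll does not export it):

#include <cstdio>
#ifdef _WIN32
#include <windows.h>   // must come before the GL headers on Windows
#endif
#include <GL/gl.h>
#include <GL/glext.h>  // GL_TEXTURE_3D, PFNGLTEXIMAGE3DPROC, ...

// Assumed to have been obtained with wglGetProcAddress("glTexImage3D")
// once a context is current; the name is a placeholder.
extern PFNGLTEXIMAGE3DPROC glTexImage3D_ptr;

// Placeholder helper: upload one alpha-only 3D texture level and report
// the precision the driver actually allocated for it.
void uploadAndCheckAlphaPrecision(GLsizei w, GLsizei h, GLsizei d,
                                  const GLushort* voxels)
{
    glTexImage3D_ptr(GL_TEXTURE_3D, 0, GL_ALPHA16,         // 16-bit alpha requested
                     w, h, d, 0,
                     GL_ALPHA, GL_UNSIGNED_SHORT, voxels);  // 16-bit source data

    GLint alphaBits = 0, internalFmt = 0;
    glGetTexLevelParameteriv(GL_TEXTURE_3D, 0, GL_TEXTURE_ALPHA_SIZE, &alphaBits);
    glGetTexLevelParameteriv(GL_TEXTURE_3D, 0, GL_TEXTURE_INTERNAL_FORMAT, &internalFmt);

    // The sized internal format is only a request: alphaBits tells me what
    // was really allocated, while internalFmt just echoes what I asked for.
    std::printf("alpha bits = %d, internal format = 0x%X\n", alphaBits, internalFmt);
}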

I am actually using a GeForce FX 5900 Ultra, which has true 128-bit color precision (thanks to the CineFX 2.0 engine), according to <http://www.nvidia.com/view.asp?PAGE=fx_5900>.
Rather than resorting to tricks in OpenGL, I would prefer, if possible, to use this 128-bit hardware feature, but I still don't know how. The rendering is currently definitely not computed with 32 bits per color channel, so how can I "activate" it? I hope this feature is not restricted to floating-point pbuffers, which do not support blending.

Does anybody know how to solve my problem?

Thanks.

