High precision blending on 128-bit color card
 
16th July 2003, 15:39   #1
vpegorar
 
Posts: n/a

Hi everybody!

I am doing volume rendering in OpenGL under Windows using 3D textures. Since I am rendering scientific data, I want as much detail as possible, and as a consequence I want to use a lot of slices. As I increase the number of slices, I compute the slice colors so that the alpha value of each slice gets smaller and smaller, in order to keep the same overall attenuation. The problem is that I reach a point where the alpha values are so small that they no longer have any effect.
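
For reference, this is essentially the per-slice opacity correction I apply (a minimal sketch; correctedAlpha, alphaRef, dRef and dNew are just illustrative names for the reference alpha and the old and new slice spacings):

[code]
#include <math.h>

/* Opacity correction for slice-based volume rendering: when the slice
 * spacing shrinks from dRef to dNew (i.e. more slices), each slice's
 * alpha must shrink so that the accumulated attenuation along the ray
 * stays the same. */
float correctedAlpha(float alphaRef, float dRef, float dNew)
{
    return 1.0f - powf(1.0f - alphaRef, dNew / dRef);
}
[/code]

With an 8-bit alpha channel the smallest representable non-zero value is 1/255, so once the corrected alpha falls to that order of magnitude the extra slices stop contributing anything.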
The only possibility I see to solve my problem would be to do the computation with values coded on at least 2 bytes instead of only one, but I don't know how to do that. If I set the internal format of the texture to GL_ALPHA16 instead of GL_ALPHA8, glGetTexLevelParameteriv(GL_TEXTURE_3D_EXT, 0, GL_TEXTURE_ALPHA_SIZE, &val2) still gives me 8 as a result, while glGetTexLevelParameteriv(GL_TEXTURE_3D_EXT, 0, GL_TEXTURE_INTERNAL_FORMAT, &val1) returns the internal format I asked for. Moreover, even if the texture were stored with 16 bits instead of 8, I am not sure the data wouldn't be reduced to 1 byte per component during blending. I have tried changing the pixel format descriptor and using glutInitDisplayString, hoping this would define the format used for OpenGL's calculations, but it seems that alpha sizes bigger than 8 bits are not supported.
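
Here is, schematically, what I am doing when I create and query the texture (a sketch only; voxels, width, height and depth stand for my real data, and I use the core glTexImage3D name, which on Windows has to be fetched through wglGetProcAddress):

[code]
#include <GL/gl.h>
#include <GL/glext.h>   /* GL_TEXTURE_3D, GL_ALPHA16 on Windows */

/* Create the volume texture with a 16-bit alpha internal format and
 * ask the driver what it actually allocated.  GL_ALPHA16 is only a
 * request: the implementation may silently fall back to 8 bits. */
void createVolumeTexture(const GLushort *voxels,
                         GLsizei width, GLsizei height, GLsizei depth)
{
    GLint internalFormat = 0, alphaBits = 0;

    glTexImage3D(GL_TEXTURE_3D, 0, GL_ALPHA16,
                 width, height, depth, 0,
                 GL_ALPHA, GL_UNSIGNED_SHORT, voxels);

    glGetTexLevelParameteriv(GL_TEXTURE_3D, 0,
                             GL_TEXTURE_INTERNAL_FORMAT, &internalFormat);
    glGetTexLevelParameteriv(GL_TEXTURE_3D, 0,
                             GL_TEXTURE_ALPHA_SIZE, &alphaBits);

    /* Here internalFormat comes back as GL_ALPHA16, but alphaBits is
     * still 8, so the extra precision does not seem to be there. */
}
[/code]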

Actually, I am using a GeForce FX 5900 Ultra, which has true 128-bit color precision (thanks to the CineFX 2.0 engine), according to <http://www.nvidia.com/view.asp?PAGE=fx_5900>.
Instead of resorting to tricks in OpenGL, I would rather use this 128-bit hardware feature if possible, but I still don't know how. The rendering is currently definitely not computed with 32 bits per color channel, so how can I "activate" it? I hope this feature is not restricted to floating-point pbuffers, which do not support blending.
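
As far as I understand, 32 bits per channel is only exposed through a floating-point pbuffer, which would be requested roughly like this (a sketch, assuming the WGL_ARB_pixel_format, WGL_ARB_pbuffer and WGL_NV_float_buffer extensions are exposed; chooseFloatPbufferFormat and hdc are just illustrative names):

[code]
#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>   /* WGL_*_ARB and WGL_FLOAT_COMPONENTS_NV tokens */

/* Ask for a 128-bit (32 bits per channel) floating-point pbuffer
 * pixel format on the window's device context hdc. */
int chooseFloatPbufferFormat(HDC hdc)
{
    const int attribs[] = {
        WGL_DRAW_TO_PBUFFER_ARB, GL_TRUE,
        WGL_SUPPORT_OPENGL_ARB,  GL_TRUE,
        WGL_FLOAT_COMPONENTS_NV, GL_TRUE,   /* NV float-buffer extension */
        WGL_RED_BITS_ARB,   32,
        WGL_GREEN_BITS_ARB, 32,
        WGL_BLUE_BITS_ARB,  32,
        WGL_ALPHA_BITS_ARB, 32,
        0
    };
    int  pixelFormat = 0;
    UINT numFormats  = 0;

    /* The ARB entry point has to be fetched at run time. */
    PFNWGLCHOOSEPIXELFORMATARBPROC wglChoosePixelFormatARB =
        (PFNWGLCHOOSEPIXELFORMATARBPROC)
            wglGetProcAddress("wglChoosePixelFormatARB");

    if (!wglChoosePixelFormatARB ||
        !wglChoosePixelFormatARB(hdc, attribs, NULL, 1,
                                 &pixelFormat, &numFormats) ||
        numFormats == 0)
        return 0;   /* no 128-bit float format available */

    return pixelFormat;   /* would then go to wglCreatePbufferARB() */
}
[/code]

But blending is exactly what such a float buffer does not give me, which is why I am asking whether the 128-bit precision is reachable any other way.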

Does anybody know how to solve my problem?

Thanks.
 