python programming
category: general [glöplog]
<rant type="pointless">of course, mr. i-know-everything-except-how-to-release-a-proper-final :))</rant>
beh. above message meant for plek of course :)
At least my GPU doesn't play the role of polyfiller when I'm watching demos that use software rendering :)
oh it does actually.
it's just one biiiiiig fullscreen quad :)))
and before i forget....
plek: BALFASZ!!! (hungarian for "dumbass")
guess this is another "excellent" example of what happens with pouet threads... :)
No, it won't wait for the ESCAPE key, but there is actual, serious research going on into using the GPU for stuff traditionally done with CPUs...
i'm aware of this direction, and surely it's quite interesting - but still, the gpu won't do everything for you. you can trick it into doing _some_ computations,
those well-suited to the architecture. anyway, the results of the latest research are hardly used in demos... (and we also need compatibility. badly. for a research paper you don't)
and someone calling himself a scener and not even *wanting* to use the extra 2-3 ghz computing power the cpu gives him/her nowadays - now that's a really sad situation :(
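(for illustration: the "trick" usually looks something like this - a minimal sketch assuming OpenGL with ARB_fragment_program plus <GL/gl.h>/glext.h; input_data_tex and compute_prog are made-up names. the data goes in as a texture, and the "computation" is one fragment program run over a fullscreen quad:)

  /* bind the input data as a texture; the output lands in the framebuffer */
  glBindTexture(GL_TEXTURE_2D, input_data_tex);
  glEnable(GL_FRAGMENT_PROGRAM_ARB);
  glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, compute_prog); /* the per-texel "kernel" */
  glBegin(GL_QUADS);           /* one biiiiig fullscreen quad, as noted above :) */
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
  glEnd();
  /* one result per pixel - getting it back to the CPU is the painful part */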
But if the GFX card is 17 (!!!) times faster, you can easily say "fuck that CPU" - even if it were twice as fast as it is now, that wouldn't bring you much performance-wise. And compatibility: since when do sceners care much about it? It has to run on one of the compo machines and on a coder's machine... and nothing else. The Kolor demo engine did not even try to run on ATI cards until recently. Of course, this is not very nice of them, but it's acceptable. Heck, there are still things which can be done with NVidia register combiners, but not yet with shaders!
Even for the cases where CPU power is enough, you want to combine it somehow with 3D acceleration. And there the problem begins.
* The CPU transforms geometry much, much, much slower than the GFX card does.
* Uploading image data onto the GPU is a fast process. Downloading it from the GPU is extremely slow! You can count on reading back something in the order of 40 full frames per second, but if you go and grab multiple layers... it is simply waaay too slow.
Ergo: the only things that can reasonably be done are transforming a minor part of the geometry on the CPU and generating some textures - but you cannot reasonably post-process the output. Back then, some games even did the complete drawing on the CPU (such as Outcast, IIRC), just to do the things which are now trivially done in shaders.
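(in GL terms the asymmetry looks roughly like this - a sketch, and the exact numbers depend on the card and the bus:)

  /* upload: cheap - the driver can pipeline this */
  glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
  /* readback: expensive - this stalls the CPU until the GPU has finished */
  glReadPixels(0, 0, w, h, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
  /* at 640x480x32bpp that is ~1.2 MB per frame, so 40 readbacks/s is only
     ~47 MB/s - and every single one serializes the CPU against the GPU */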
I am all for amazing stuff being done - be it a CPU eater or something that requires an NVidia card... And for research results being put into demos!
(hmmm... didn't i recently say i was for the least common denominator of OpenGL 1.4 + ARB shaders? don't listen to me, i'm a liar)
Quote:
and someone calling himself a scener and not even *wanting* to use the extra 2-3 ghz computing power the cpu gives him/her nowadays - now that's a really sad situation :(
I agree, but the days of the "Oooh, look at my 5,000 polycube starfield routine" type demos are so totally over. Overall design is what makes a popular demo, and you don't need the latest CPU or graphics card for that...
OTOH, once the demo scene starts doing stuff with pixel shaders, I think we'll be seeing some amazing new ideas. 2D post-process filters using z-buffer data from the 3D renderer could lead to some pretty wicked new effects. [hint, hint!] ;)
> 2D post-process filters using z-buffer data from the 3D
> renderer could lead to some pretty wicked new effects.
Too bad that using the zb data is quite impossible on the PC, that you need a second pass and a special vertex/pixel shader (or a wicked D3DTEXTURETRANSFORM setup) for it, and that most of the demoscene's engines aren't clean enough for a "for (pass=0; pass<2; pass++) RenderScene(pass);". :)
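(roughly what the two-pass version would look like - a sketch in D3D8 style; RenderScene, DrawFullscreenQuad and the surface/texture handles are made up, not any real engine's API:)

  for (int pass = 0; pass < 2; pass++) {
      if (pass == 0) {
          /* pass 0: render the scene into a texture, writing eye-space
             depth into a colour/alpha channel, since the real z-buffer
             can't be sampled directly */
          dev->SetRenderTarget(depth_rt_surface, zbuf_surface);
          RenderScene(pass);            /* shaders output depth here */
      } else {
          /* pass 1: fullscreen quad with the depth texture bound; the
             pixel shader does the 2D post-filtering */
          dev->SetRenderTarget(backbuf_surface, zbuf_surface);
          dev->SetTexture(0, depth_rt_texture);
          DrawFullscreenQuad();
      }
  }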
eye: But if the GFX card is 17 (!!!) times faster
it is not. there exists a particular computation which can be done with a particular gfx card 17 times faster than with a particular processor...
you want to combine it somehow with 3D acceleration
that's exactly what i wrote about 15 posts ago :)
And there the problem begins. [etcetc]
pleeease, don't be so down-to-earth :) a cpu can be used to do other things than rotating vectors, too :)))
Rabit: I agree, but the days of the "Oooh, look at my 5,000 polycube starfield routine" type demos are so totally over
they were already over by the time Dope ("6800 polygons") and Stars (the bee) were released - but that doesn't mean demos shouldn't contain technically advanced code
...I think we'll be seeing some amazing new ideas. 2D post-process filters using z-buffer data from the 3D renderer could lead to some pretty wicked new effects. [hint, hint!] ;)
for example focal blur? :) i've been thinking about this for at least three or four years :)
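(the per-pixel idea behind depth-based focal blur, written as plain C for clarity - the names are made up; in a pixel shader this becomes the usual saturate + lerp:)

  /* blend between a sharp and a pre-blurred copy of the frame, by how
     far the pixel's depth is from the focal plane */
  float blur = fabsf(depth - focal_depth) * focal_range;
  if (blur > 1.0f) blur = 1.0f;                     /* saturate */
  out_r = sharp_r + blur * (blurred_r - sharp_r);   /* lerp sharp -> blurred */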
Heck, there are still things which can be done with NVidia register combiners, but not yet with shaders!
WTF?
for example focal blur? :) i've been thinking about this for at least three or four years :)
I remember at least 3 prods (Raw Confessions, Coma, Schism) using that, the oldest of them being from 2002...
gargaj: there's always a chance to improve - i don't remember being satisfied with any i saw :)
(but i'll check them out right now to be sure :)
to correct myself: i won't check them out, as they don't work on my machine :(
but for example i saw raw confessions several times at parties, and i'm quite sure i would notice a nice focal blur in a demo :)
a nice focal blur is something you don't notice at first. a good dof effect is always subtle. but if you want some uberblur, be my guest.
gargaj: you always notice an effect you've been thinking about for years :)) - and in general i tend to notice effects...
btw schism (including the focal blur) worked without pixel shaders too (guess it would be nicer with shaders) - the focal blur was ok but i can easily imagine better :)
the pic is nice - but i want to see it in motion :)
It's not impressive anymore :(
I am going back to CPC
6800=7800
They say, though, that this was a joke.
Nevertheless, it was many times more impressive on my 486 with a GUS than the screenshot link that Gargaj posted.
I just watched some older NVidia GeForce 3 & 4 demos. They are so fucking beautiful! Guys, the scene still has a *long* way to go to reach that. Or can anyone name some demos of *that* visual quality and creative GPU feature use?
you just can't seem to tell the difference between "paid job" and "free-time hobby", can you?
and besides, why don't YOU do something "impressive" instead of trying to prove your superior knowledge on everything?
(and FYI, "register combiners" are a subset of pixel shaders)
ok, i'm a smartass. :> i'll do something sane someday. And hey, Gargaj, if i didn't appreciate the demoscene, which is obviously a free-time hobby, what would i be doing here?
Wasn't there some trick with depth buffer manipulation which wasn't possible with shaders until recently? Ok, i don't really know, i just think i heard something somewhere. I'm not yet through with the basics. ;)
Quote:
I just watched some older NVidia GeForce 3 & 4 demos. They are so fucking beautiful! Guys, scene has yet a *long* way to go to reach that. Or can anyone name me some demos of *that* visual quality and creative GPU feature use?
eye/midiclub: They look nice, but much of the 'visual quality' comes from the fact that NVIDIA has enough cash to hire professional animators, texture artists, modelers, and programmers to work months to produce something that basically shows off one cool effect. Also, NVIDIA demos have very little replay value compared to demoscene productions. :)
Quote:
Too bad that using the zb data is quite impossible on the PC, that you need a second pass and a special vertex/pixel shader (or a wicked D3DTEXTURETRANSFORM setup) for it, and that most of the demoscene's engines aren't clean enough for a "for (pass=0; pass<2; pass++) RenderScene(pass);". :)
kb: Heh, probably true. Though two pass isn't THAT much harder to implement, really. :)
A quick glance at the DirectX SDK shows that there are some ways to read the zb, at least with the CPU (IDirect3DSurface8::LockRect, I assume, given a lockable depth format), then feed it back as a texture for shader processing. But I guess that defeats the purpose of relying 100% on the GPU. :)
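(something like this, assuming the depth surface was created with a lockable format such as D3DFMT_D16_LOCKABLE - which almost nothing uses, and which is exactly why kb calls it "quite impossible" in the general case:)

  IDirect3DSurface8* zb;
  dev->GetDepthStencilSurface(&zb);
  D3DLOCKED_RECT lr;
  if (SUCCEEDED(zb->LockRect(&lr, NULL, D3DLOCK_READONLY))) {
      /* lr.pBits now points at the raw depth values; copy them into a
         texture you can feed back to the pixel shader */
      zb->UnlockRect();
  }
  zb->Release();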
blala: I wasn't necessarily thinking of just focal blur type effects, but yeah those would be cool to see more of.
..and they are not impressive.