adaptive cpu usage throttling
category: general [glöplog]
Hey,
I'm trying to make a demo for low-end platforms (which I have no access to right now). Think an integrated Intel GPU, for instance.
Right now I have a particle-based effect whose performance is correlated with the number of active particles × the number of forces/constraints.
Is there a good strategy for throttling the CPU usage of such an effect, if I'm willing to compromise on the look somewhat?
I've been thinking of monitoring CPU usage and time-to-next-frame over a given period and altering the number of active particles.
Anyway, does anyone have experience with this kind of effect throttling depending on the host?
Easiest would probably be to run a small benchmark at the beginning (rendering black on black or some such) and throttle based on that.
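Something like this minimal sketch could do it (all names, the `step` callback, and the constants are made up for illustration): time the particle update for a fixed wall-clock budget at startup, then scale the particle count linearly from the measured cost per step.

```python
import time

def benchmark_particle_budget(step, target_frame_ms=16.0,
                              bench_ms=200.0, batch=1000):
    """Run the (hypothetical) particle update `step(n)` for roughly
    bench_ms of wall-clock time and estimate how many particles fit
    into one frame's budget, assuming cost scales linearly with n."""
    n = batch
    start = time.perf_counter()
    iterations = 0
    while (time.perf_counter() - start) * 1000.0 < bench_ms:
        step(n)            # simulate n particles once
        iterations += 1
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    ms_per_step = elapsed_ms / max(iterations, 1)
    # Linear scaling: how many particles fit in target_frame_ms?
    return max(batch, int(n * target_frame_ms / ms_per_step))
```

The linear-scaling assumption breaks down for O(n²) interactions, so for those you'd benchmark at two sizes and fit the curve instead.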
My experience is that throttling always looks horrible and should be avoided like the plague.
Indeed, even if you manage to play with the number of particles, you're going to get additional trouble figuring out how to keep the blending nice across different particle counts. I.e. your particle effect will look like 4 blobs on the lowest-end configuration and like a big white shit on the high-end hardware. Sorry if I'm a bit rough, but you get the idea. :)
I thought about this a while back, for an effect where I can vary the number of polys and the texture resolution for speed or quality. This kind of got abandoned before I wrote the throttling part, but my idea was to run a benchmark like sol suggested, but to do it on the loading screen as a background effect while it's doing fairly light work like loading textures. I figured a not-too-smooth loading bar effect was better than a plain loading bar :)
That should give a slightly low performance target, which would help keep the demo smooth if the OS starts arsing around in the background.
might as well just use the good old "Quality: good/medium/bad" setup option then... i suppose the main problem is the whole aesthetics tradeoff. at least i wouldn't want to have my effects look worse on slightly less capable computers.
What some games are doing now is changing the size of the viewport dynamically depending on framerate and upscaling at the end. For that you'd probably best use just a smoothed framerate for the metric (averaged over the last second or two for example).
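A sketch of that idea (the class name, thresholds, and step sizes are all invented for illustration): smooth the frame time with an exponential moving average, then nudge a render-scale factor toward a target frame time, dropping faster than it recovers so the image doesn't visibly pump.

```python
class ResolutionScaler:
    """Drive a render-target scale factor from a smoothed frame time."""

    def __init__(self, target_ms=16.7, alpha=0.05,
                 min_scale=0.5, max_scale=1.0):
        self.target_ms = target_ms
        self.alpha = alpha            # EMA weight: small = slow, stable
        self.min_scale = min_scale
        self.max_scale = max_scale
        self.avg_ms = target_ms
        self.scale = max_scale

    def on_frame(self, frame_ms):
        # Exponential moving average keeps single spikes from causing pops.
        self.avg_ms += self.alpha * (frame_ms - self.avg_ms)
        if self.avg_ms > self.target_ms * 1.1:
            # Over budget: drop resolution quickly.
            self.scale = max(self.min_scale, self.scale - 0.02)
        elif self.avg_ms < self.target_ms * 0.9:
            # Headroom: recover resolution slowly.
            self.scale = min(self.max_scale, self.scale + 0.01)
        return self.scale
```

The 10% dead band around the target is what keeps it from oscillating every frame.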
gargaj: so set a lower limit. Would you like your effects to look more awesome on future super high end computers? :)
you could also try to correlate the fps count with the number of particles, i.e. if you get awesomely high fps => increase the particle count and vice versa.
(btw. this is correct usage of i.e.)
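That fps-to-particle-count feedback loop could be sketched like this (the thresholds, step factors, and clamps are arbitrary placeholders): shrink the pool when fps drops below a floor, grow it when there's headroom, with a dead band in between so the count doesn't flicker.

```python
def adjust_particle_count(count, fps, lo=55.0, hi=70.0,
                          step_down=0.9, step_up=1.05,
                          min_count=500, max_count=200_000):
    """Multiplicative feedback controller for the particle pool size.
    Shrinks aggressively below `lo` fps, grows gently above `hi` fps;
    the dead band between lo and hi avoids oscillation."""
    if fps < lo:
        count = int(count * step_down)
    elif fps > hi:
        count = int(count * step_up)
    # Clamp so the effect never fully disappears or explodes.
    return max(min_count, min(max_count, count))
```

Called once per frame (or per smoothed-fps window), it converges on whatever count the host can sustain.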
Quote: (btw. this is correct usage of i.e.)
Indeed :)
Wonder what recent fairlight demos would look like on a netbook using this method? Probably 'abstract'.
Quote:
Wonder what recent fairlight demos would look like on a netbook using this method? Probably 'abstract'.
Might be that the 'create an empty black OpenGL window' or 'show a white dot in the centre of the window' tutorials would have been a 100% visual match in such a case :D
I don't get the point.
The particle system running in software?! Why not just underclock the CPU, or let the thread sleep for a fixed amount of time? It's just like simulating old hardware. I don't get the adaptive part; it doesn't make sense. And the GPU will handle the shit in a different manner anyway. But fixed-function pipes, for example, are, well... fixed. So there shouldn't be any difference to recent hardware, except the number of FPS you get, which you have to control or estimate for the old hardware.
I'm working on a similar case. All I did was read out device caps and rough hardware data, and estimate performance and graphics issues based on running a demo from the time that hardware was current, which was around 2000. Guess which 'demo' I used ;).
The issue here is simple: he wants to parametrize the effect adaptively based on the client's CPU performance. No viewports involved, no GPU involved (as far as mentioned, at least), none of that. But I'd have to go with nystep and ask yourself this: is this "feature" of any importance to the project you're working on? If so, carefully measure and find a heuristic that makes the effect look similar, just more or less detailed, on both ends of the spectrum. Therein lies the challenge, and it's completely tied to the effect/visual itself; there is no common solution.
I actually threw out the question to figure out whether this was a silly idea to pursue. It does seem that experimenting with a controlling parameter, calibrated against the resources of various hardware and then tuned, would work. I don't mind if it's effect-dependent, just more work.
And to be more specific, yeah, I'm also wondering about the GPU part of the equation, for instance how many polys are acceptable. That seems harder to measure too.
The issue is this: on a general-purpose (non-fixed) platform, how to "degrade gracefully."
(It's not such a big deal, but that demo isn't meant for a competition.)
The viewport adaptation method seems rather attractive for pixel-shader-based effects. Isn't Wipeout HD using this method on the PS3?
Tesla by Sunflower does this, if I remember correctly. When I got a new PC back then, it suddenly had more particles on screen.
All games fuck with buffer resolutions to accommodate speed.
depending on the system behaviour, try to always show N particles on screen, but compute just M particles based on the available CPU/GPU, then approximate/interpolate the other N-M. might do the job
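One way to sketch that idea (purely illustrative: 2D positions only, with made-up names and a jitter constant): simulate M real particles, then synthesize the remaining N−M screen particles by interpolating between random pairs of real ones.

```python
import random

def expand_particles(real_positions, total_n, jitter=0.1, rng=None):
    """Given M simulated 2D positions, synthesize total_n on-screen
    positions by interpolating between random pairs of real particles,
    plus a little jitter so the clones aren't visibly on straight lines.
    A real effect would interpolate velocity, colour, size, etc. too."""
    rng = rng or random.Random(0)
    out = list(real_positions)          # the M "real" particles
    m = len(real_positions)
    while len(out) < total_n:
        ax, ay = real_positions[rng.randrange(m)]
        bx, by = real_positions[rng.randrange(m)]
        t = rng.random()                # blend factor along the pair
        out.append((ax + (bx - ax) * t + rng.uniform(-jitter, jitter),
                    ay + (by - ay) * t + rng.uniform(-jitter, jitter)))
    return out
```

Since the fakes are derived each frame from the real ones, they inherit the motion of the simulation for free.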
I like your idea rmeht .. it sounds worth a good try :)
sounds alright, yeah. hope it looks that way too, since individual interaction between particles (and therefore directly coupled to their amount) can make or break an effect.
demoscene design rule n.17: if you move them fast and glowing enough, no one will notice the difference *g*
i doubt particle physics is the real problem here
very good point theyeti.. it turns out that, after having a good look, some unexpected part of the system is taking way more CPU time than I expected (compared to the physics!)
@rmeht good point there. The simulation issue would be solved, but the 'oldies' might be a lil weak to actually draw the same amount of particles. The FPS would just go down, and reducing the amount would probably break the look.
Well, it depends on the actual effect how to solve the visual problem.
Smoke-type effects are kinda easy to degrade without losing too much detail; complex particle systems, rather not.
you can compute a different set of "real" particles every frame, so every shown particle gets correctly updated within a short timeslice. won't work on very dynamic systems tho, yes
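A minimal sketch of that round-robin scheme (function names and the 60 fps assumption are hypothetical): update one slice of the pool per frame, cycling through the whole pool every `slices` frames, and compensate with a proportionally larger timestep.

```python
def update_round_robin(particles, integrate, frame, slices=4):
    """Update only 1/slices of the particle pool this frame.
    `integrate(p, dt)` is a hypothetical per-particle step; it is
    called with slices * dt so each particle covers the frames it
    skipped (assuming a nominal 60 fps frame time here)."""
    n = len(particles)
    k = frame % slices
    start = k * n // slices
    end = (k + 1) * n // slices
    for i in range(start, end):
        particles[i] = integrate(particles[i], dt=slices * (1.0 / 60.0))
    return particles
```

As noted above, the larger effective timestep is exactly what hurts on very dynamic systems: fast-changing forces get sampled too coarsely.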
I'd really like to see a 'shot' of the actual effect. It would help finding a smart solution. mmh.