4k intro, C vs assembler?
category: code [glöplog]
ATI/AMD perhaps?
Use #version 430 and be happy.
maybe "#version 430 compatibility".
And what do you need those features for anyways? :)
https://twitter.com/zarfeblong/status/478993927081852929
*wry*
Thanks for your replies! It is an AMD card indeed, but the actual problem was that I was messing about with a vertex shader and varyings and everything, which obviously isn't necessary.
glCreateShaderProgramv is pretty great :)
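Since it keeps coming up: `#version 430` plus separate shader objects is what makes this so compact — no vertex shader, no link/attach boilerplate. A rough sketch of the usual 4k setup (not runnable as-is: it assumes a GL 4.3 context with resolved function pointers, and the shader source is just a placeholder):

```cpp
// Placeholder fragment shader -- with a compatibility context and glRects()
// you don't need a vertex shader or varyings at all.
const char* src =
    "#version 430\n"
    "uniform float t;\n"
    "out vec4 c;\n"
    "void main() { c = vec4(fract(gl_FragCoord.xy / 256.0), sin(t) * 0.5 + 0.5, 1.0); }\n";

// One call compiles AND links a single-stage program:
GLuint prog = glCreateShaderProgramv(GL_FRAGMENT_SHADER, 1, &src);

GLuint pipe;
glGenProgramPipelines(1, &pipe);
glUseProgramStages(pipe, GL_FRAGMENT_SHADER_BIT, prog);
glBindProgramPipeline(pipe);

// per frame:
glProgramUniform1f(prog, glGetUniformLocation(prog, "t"), time);
glRects(-1, -1, 1, 1);   // fullscreen quad, compatibility profile
```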
Hio. I was testing Crinkler 2.0 + Visual Studio 2015 + Windows10 (no particular reason, just making sure the intro frameworks I have online were up to date), and while playing around I did a mini-finding. I didn't know in which thread to drop it, so I'll do it here.
I got quite some savings by moving away from the old FPU_Intrinsics+/QIfist+__fastcall+fltused combo and embracing a new AVX2+__vectorcall+customSinCos() setup.
Of course it changes little for simple 1k intros or empty/shell frameworks, but for real intros the difference I am seeing is pretty big (20–100 bytes) if the intro is CPU-heavy (has a CPU music synth or CPU scripting/animation). I'm pretty happy of course, but I'd love to know if other people can test and confirm this with their own code.
ps - if I had written my intro in assembler I'd have never been able to perform this test with a few mouse clicks and find the trick (sometimes being lazy has its advantages I guess!)
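For the curious, the customSinCos() part can be as simple as a short polynomial plus range reduction, so the compiler inlines it and keeps everything in SIMD registers instead of dragging in the CRT's sin/cos (and fltused). The names and coefficients below are my own guess at the idea, not iq's actual code; __vectorcall is MSVC-only, hence the macro:

```cpp
#include <cassert>
#include <cmath>

#ifdef _MSC_VER
#define VECCALL __vectorcall   // pass float args in SIMD registers (MSVC)
#else
#define VECCALL                // other compilers: default calling convention
#endif

// Hypothetical customSin: reduce x into [-pi, pi), then evaluate a 9th-order
// odd polynomial (truncated Taylor series) -- absolute error below 0.01.
static float VECCALL customSin(float x)
{
    const float TWO_PI = 6.28318530f;
    x -= TWO_PI * std::floor(x / TWO_PI + 0.5f);   // x is now in [-pi, pi)
    const float x2 = x * x;
    // x - x^3/3! + x^5/5! - x^7/7! + x^9/9!, in nested (Horner) form
    return x * (1.0f - x2 / 6.0f * (1.0f - x2 / 20.0f *
               (1.0f - x2 / 42.0f * (1.0f - x2 / 72.0f))));
}

static float VECCALL customCos(float x)
{
    return customSin(x + 1.57079633f);   // cos(x) = sin(x + pi/2)
}
```

With /arch:AVX2 this presumably compiles down to a handful of fused multiply-adds and no CRT import, which would be where the byte savings come from — that's my reading of the post, not a measurement.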
Thanks for sharing IQ, good finding. It's kind of expected for __vectorcall / AVX extensions, as it ends up using fewer registers, but still nice to see a real-world example / confirmation. Pretty encouraging for doing some native 4k.
Also I am quite looking forward to SIMD operations in C++17. I wonder where the "committee" is going there, hopefully in a good direction ;-)
we'll just wait a while and see what SPIR-V shaders on Vulkan bring along :D
Quote:
Hacking GCN via OpenGL
This is insane!
:U
:D Crazy shit, man!
@tomkh
Quote:
Also I am quite looking forward to SIMD operations in C++17. I wonder where the "committe" is going there,hopefully in a good direction;-)
I'd rather have full C++11 (and possibly 14, but this seems to be too much to ask for) support in GCC, VS and Clang... ;) And std::filesystem so I could finally dump the boost crap...
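For reference, the boost::filesystem API maps almost 1:1 onto the proposed std::filesystem, so the dumping is mostly mechanical — a sketch with a made-up helper (directory listing filtered by extension), just with the boost:: prefix swapped for std::

```cpp
#include <cassert>
#include <filesystem>
#include <fstream>
#include <string>
#include <vector>

namespace fs = std::filesystem;

// Hypothetical helper: collect all regular files under `dir` with the
// given extension (e.g. ".frag"). Same shape as the boost version.
std::vector<fs::path> listFiles(const fs::path& dir, const std::string& ext)
{
    std::vector<fs::path> out;
    for (const auto& entry : fs::directory_iterator(dir))
        if (entry.is_regular_file() && entry.path().extension() == ext)
            out.push_back(entry.path());
    return out;
}
```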
raer: we are mixing two things here: the core C++ front-end (pure language features), which IMHO should be minimalistic and even more tailored for optimization (more hints to the optimizer, efficient vector math etc.), and the standard library built on top of it (yet unified across vendors), possibly developed by a different group of people etc... So one doesn't exclude the other, really.
Quote:
Hacking GCN via OpenGL
Wow... This is art.
[rant]
Yeah. You're right. C++ and the STL are not the same thing. What I'm saying is that I'd rather have FULL, solid compiler + STL support for all the good stuff already there in 11 and 14 instead of the newest features.
C++ already feels like a new language and I like it, but when delegating constructors work on GCC 4.something, but not on VS2013 (Update X), and in Clang only when it's Tuesday and the moon is full, I start losing my sanity... ;) Especially when you need to develop cross-platform stuff (Win, Linux, Mac).
So I'll probably stick with '11 for 2016 and start using '14 features in 2017. '17 will be usable maybe by 2019. That's how it feels atm...
[/rant]
wow. an edit button would rule... ;)
Quote:
Hacking GCN via OpenGL
As much as I was impressed with the work done, I just wish 1) it wasn't written (and illustrated) like it was done by a 12 year old from 9gag and 2) the "UNREGISTERED" label on Sublime Text wasn't on prominent display :)
Yeh, the hacking is awesome, but those "meme" pictures or whatever don't seem to match the target audience.
Y u so judgemental? ;) The guy apparently works hard and does a lot of good (check out his homepage), so I'll totally forgive him for lurking in silly forums after work to overreact at and occasionally take inspiration from :P
I know those memes are worn-out, but at least it is easy to understand what he felt during his discoveries.
raer: actually your ranting is not unjustified. I started digging into these SIMD rumours for C++17, and from what I understand so far it is more about under-the-hood vectorization/parallelization, which is not much use for cg/gamedev folks, who (I presume) are rather waiting for something like portable vector types (à la OpenCL/GLSL/HLSL) as primitive types, automatically optimized for SSE/AVX/FMA/etc...
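To make that concrete, here's roughly what people mean by portable vector types — a GLSL-style vec4 sketched as a plain struct (all names hypothetical; a standardized version would be guaranteed to map onto SSE/AVX registers instead of relying on the auto-vectorizer):

```cpp
#include <cassert>
#include <cmath>

// Hypothetical GLSL-style vec4 as a value type with component-wise ops.
struct vec4 {
    float x, y, z, w;
};

inline vec4 operator+(vec4 a, vec4 b) { return {a.x + b.x, a.y + b.y, a.z + b.z, a.w + b.w}; }
inline vec4 operator*(vec4 a, float s) { return {a.x * s, a.y * s, a.z * s, a.w * s}; }

inline float dot(vec4 a, vec4 b) { return a.x * b.x + a.y * b.y + a.z * b.z + a.w * b.w; }
inline float length(vec4 a)      { return std::sqrt(dot(a, a)); }
```

Today a good compiler will often vectorize this anyway; the point of having it as a primitive type would be the guarantee, shader-style syntax, and the SSE/AVX/FMA codegen being the compiler's problem rather than a wall of intrinsics.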