a topic about graphics cards again (sorry)
category: general [glöplog]
I was wondering if any of you ever had to poll the card type (not just the vendor) and the available texture memory. How is this possible? Not through glGetString; is there another extension?
so for the "card type"
Code:
GL_RENDERER: Returns the name of the renderer. This name is typically specific to a particular configuration of a hardware platform. It does not change from release to release.
and thus
Code:
glGetString(GL_RENDERER);
As for the available video memory, I unfortunately don't know; apparently since DX7 the caps no longer return the "video memory" value, and OpenGL seems like it never had this kind of functionality. I might be wrong though, in which case I'd love to learn.
Navis: I had this discussion with Ryg long ago and the outcome, if my memory doesn't fail me once again, was that it was impossible to get a 100% reliable value, depending on the type of card you had.
Maybe it has changed since then; I never bothered trying again, but I would be really interested indeed.
it has something to do with Win32_VideoController apparently. Investigating..
I think OGL sucks in this department. You might want to roll your own stuff with
glPrioritizeTextures
and
glAreTexturesResident
maybe you can have a look at the implementation of SDL_GetVideoInfo
I'm running something called Win32_VideoController. Seems to work quite OK, but that would be Windows-only.
yeah, unfortunately MSDN says "Windows Server 2003, Windows XP, Windows 2000, and Windows NT 4.0: This class is supported."
argh. I should really learn to read. it says "supported", not "not supported". gotta see the eye doctor pronto :)
Navis: for Linux there's always /proc/meminfo, but I'm not sure it gives the video memory information.
Basically I'm looking for a "shading model" capabilities string (FP 3.0, 4.0, etc.). Wonder where that would be.
I don't think that information is available, but of course you can always go the trial-and-error way: for example, write two shaders with SM-specific code and try to compile them; the one that compiles is supported :). It's really a brute-force way of doing things, but it will work.
shouldn't these be available with glGetString?
glGetString returns only 4 things, and I can't see anything on support for shader model xyz... I wonder if Cg does that...
http://www.opengl.org/wiki/Shading_languages:_How_to_detect_shader_model%3F
Rare: nope, it can return the extensions supported but it won't say anything more than GL_ARB_fragment_shader et al.
uhm. that is basically what I meant...
there's also GL_ARB_shading_language_xxx (100 for a proper OGL 2.0 driver)
yeah, but navis wants to learn the exact SM(s) supported, and the extensions won't give it away. in the end a GTX 295 supports fragment shaders, but so did a GeForce 4 Ti.
If you use OpenGL with GLSL and use GL_ARB_shading_language_xxx to detect the version you should be pretty much ok, shouldn't you?!
Navis, there are plenty of new, kinda cool extensions, and on the ATI side this one may be of interest for the video memory usage: GL_ATI_meminfo
It seems to appear here with the latest ATI drivers on a 4870, but I didn't test it yet.
If anyone wants to go to COM hell and use WMI, here's what's working for me:
Code:
#define _WIN32_DCOM //for COM
#include <windows.h>
#include <comdef.h>  //for COM
#include <Wbemidl.h> //for WMI
#include <tchar.h>   //for _stprintf_s
#pragma comment(lib, "wbemuuid.lib") //for WMI

#define _PRINT_INFORMATION

TCHAR wmi_temp[1024] = {0}; //helper array for conversion

//convert WMI BSTR to regular TCHAR array. sucks.
TCHAR * wmi_BSTR2TCHAR(BSTR in)
{
    //you have to go through _bstr_t to have it work in ANSI and Unicode
    _bstr_t bstrIntermediate(in);
    if (_stprintf_s(wmi_temp, 1024, _T("%s"), (LPCTSTR)bstrIntermediate) > 0) {
        //return array pointer
        return wmi_temp;
    }
    return NULL;
}

int getVideoRam()
{
    HRESULT hres;
    //initialize COM
    hres = CoInitializeEx(0, COINIT_MULTITHREADED);
    if (FAILED(hres)) {
#ifdef _PRINT_INFORMATION
        printf("Failed to initialize COM library. Error code = 0x%x\n", hres);
#endif
    }
    else {
        //COM successfully initialized, initialize security
        hres = CoInitializeSecurity(NULL,
            -1,                          //COM negotiates authentication service for us
            NULL,                        //authentication services - none
            NULL,                        //reserved
            RPC_C_AUTHN_LEVEL_DEFAULT,   //authentication level
            RPC_C_IMP_LEVEL_IMPERSONATE, //impersonation level
            NULL,                        //authentication info
            EOAC_NONE,                   //additional capabilities
            NULL);                       //reserved
        if (FAILED(hres)) {
#ifdef _PRINT_INFORMATION
            printf("Failed to initialize security. Error code = 0x%x\n", hres);
#endif
            CoUninitialize();
        }
        else {
            //security impersonation worked, obtain the initial locator to
            //Windows Management on the host computer
            IWbemLocator *pLoc = NULL;
            hres = CoCreateInstance(CLSID_WbemLocator, 0, CLSCTX_INPROC_SERVER,
                                    IID_IWbemLocator, (LPVOID *)&pLoc);
            if (FAILED(hres)) {
#ifdef _PRINT_INFORMATION
                printf("Failed to create IWbemLocator object. Error code = 0x%x\n", hres);
#endif
                CoUninitialize();
            }
            else {
                //we have a connection to Windows Management now
                IWbemServices *pSvc = NULL;
                //connect to the root\cimv2 namespace with the current user and
                //obtain the pSvc pointer to make IWbemServices calls
                hres = pLoc->ConnectServer(_bstr_t(L"ROOT\\CIMV2"), //WMI namespace
                    NULL,   //user name
                    NULL,   //user password
                    0,      //locale
                    NULL,   //security flags
                    0,      //authority
                    0,      //context object
                    &pSvc); //IWbemServices proxy
                if (FAILED(hres)) {
#ifdef _PRINT_INFORMATION
                    printf("Could not connect to \"root\\cimv2\" namespace. Error code = 0x%x\n", hres);
#endif
                    pLoc->Release();
                    CoUninitialize();
                }
                else {
                    //successfully connected to the root\cimv2 WMI namespace;
                    //set the IWbemServices proxy so that impersonation of the user (client) occurs
                    hres = CoSetProxyBlanket(pSvc,   //the proxy to set
                        RPC_C_AUTHN_WINNT,           //authentication service
                        RPC_C_AUTHZ_NONE,            //authorization service
                        NULL,                        //server principal name
                        RPC_C_AUTHN_LEVEL_CALL,      //authentication level
                        RPC_C_IMP_LEVEL_IMPERSONATE, //impersonation level
                        NULL,                        //client identity
                        EOAC_NONE);                  //proxy capabilities
                    if (FAILED(hres)) {
                        //no, they be stealin' my proxy blanket!
#ifdef _PRINT_INFORMATION
                        printf("Could not set proxy blanket. Error code = 0x%x\n", hres);
#endif
                        pSvc->Release();
                        pLoc->Release();
                        CoUninitialize();
                    }
                    else {
                        //use the IWbemServices pointer to query WMI for the video controller info
                        IEnumWbemClassObject *pEnumerator = NULL;
                        hres = pSvc->ExecQuery(bstr_t("WQL"),
                            bstr_t("SELECT * FROM Win32_VideoController"),
                            WBEM_FLAG_FORWARD_ONLY | WBEM_FLAG_RETURN_IMMEDIATELY,
                            NULL,
                            &pEnumerator);
                        if (FAILED(hres)) {
#ifdef _PRINT_INFORMATION
                            printf("Query for Win32_VideoController failed. Error code = 0x%x\n", hres);
#endif
                            pSvc->Release();
                            pLoc->Release();
                            CoUninitialize();
                        }
                        else {
                            //got the enumerator, walk the result objects
                            IWbemClassObject *pclsObj = NULL;
                            ULONG uReturn = 0;
                            while (pEnumerator) {
                                hres = pEnumerator->Next(WBEM_INFINITE, 1, &pclsObj, &uReturn);
                                if (0 == uReturn) {
                                    break;
                                }
                                VARIANT vtProp;
                                //get the adapter name
                                hres = pclsObj->Get(L"Name", 0, &vtProp, 0, 0);
                                if (hres == WBEM_S_NO_ERROR) {
                                    _tprintf(_T("Video card name is \"%s\"\n"), wmi_BSTR2TCHAR(vtProp.bstrVal));
                                }
                                VariantClear(&vtProp);
                                //get the adapter RAM; note AdapterRAM is a uint32, so it tops out at 4GB
                                hres = pclsObj->Get(L"AdapterRAM", 0, &vtProp, 0, 0);
                                if (hres == WBEM_S_NO_ERROR) {
                                    printf("Video RAM: %uMB\n", vtProp.uintVal / (1024 * 1024));
                                }
                                VariantClear(&vtProp);
                                //release the class object before fetching the next one
                                pclsObj->Release();
                            }
                            //do cleanup
                            pEnumerator->Release();
                            pSvc->Release();
                            pLoc->Release();
                            CoUninitialize();
                            return true;
                        }
                    }
                }
            }
        }
    }
    return false;
}
Yuck! Thanks for reminding me why I don't code on windows...
short answer:
no
long answer:
somehow yes, but nothing reliable. what are you looking for, physical video memory size? logical video memory size? as for the "turbocache" stuff, nvidia changed things a bit (having memory on the card as well as using main memory...). it's possible to grab the card's vendor string, id, whatever, and go for the vendor-specific sdk (nvidia provides nvapi for that; it's quite easy to grab the physical/logical memory with it). i don't know about ati/amd or intel sdks out there. in gl there are even some nasty extensions providing some memory information (although i'd hardly touch them). games used to provide a configuration option for textures (low, medium, high quality and so on).
i'm saving the "its up to the driver where data gets stored"-discussion over here :)
anyways, i can't say i wouldn't need this kind of information from time to time; when streaming large amounts of data it's worth knowing when your driver starts swapping data out. but the effort of getting a reliable value grows once you start supporting multiple platforms, and so on and so on...
right now i would suggest going for that configuration option. :)