psycoding

3D acceleration without OGL / D3D


Recommended Posts

How can I access the GPU / VRAM directly, without using OpenGL or Direct3D? (Ideally I'd use it from assembler.) Thanks in advance.

I don't know about NVIDIA's side, but at least for ATI cards you can find open specifications at http://www.x.org/docs/AMD/.

I haven't read any of them, so I don't know the specifics, but I hope that helps you in the direction you want.

It's simple. You simply have to write code to interface with the display driver (separate code for each type of display hardware on the market, of course), using the non-publicly-available specifications (or even the publicly-available ones) for the hardware.

Actually, come to think of it, that's not easy at all.

Really, why would you ever want to do this? The APIs used to interface with graphics hardware exist for a very good reason - no sane person would want to write that much device-specific code.

If your rationale is "I want my app to be faster", I suggest taking a step back and asking yourself whether you really think you can code a better, faster interface to the hardware than the hundreds of people who have intimate access to the specifications and a ton of resources.

Since every game on the market uses either D3D or OpenGL, and they seem to run fine, the speed of the API is obviously not an issue.

If it's for some...other...reason, might I suggest taking the "just say no" option?

This IS, after all, one of the key reasons gaming on Windows managed to take over from DOS in the first place (although it was probably inevitable eventually, and it's hard to see any other outcome now - but isn't that always the case?). Homebrewing code for every piece of graphics hardware on the market was such a pain 15 years ago, when there were like 10-20 different specifications (at least among the popular ones... I'm sure there were many more). I can only imagine trying to write for the hundreds of graphics cards we have today... and we're no longer just trying to write to some new-fangled 16-bit or 32-bit display with resolutions > 320x240; now we're doing massive amounts of calculation and 2D/3D rendering on top of it.

Why do you want to do this anyway? (just curious)

Cheers
-Scott

If you don't need such hardware access for graphics, maybe you could look into CUDA and OpenCL (I think ATI has something similar; I don't remember its name right now). These are APIs for using the GPU as a general-purpose processor, so perhaps what they offer is what you need.

I think you will never access the hardware directly, without going through an upper-level API: there are simply too many devices out there (at least 4 generations for each manufacturer, perhaps fewer for Intel, and several different versions in each generation).

Quote:
Original post by cignox1
If you don't need such hardware access for graphics, maybe you could look into CUDA and OpenCL (I think ATI has something similar; I don't remember its name right now). These are APIs for using the GPU as a general-purpose processor, so perhaps what they offer is what you need.

ATI's was called Close To Metal (CTM); now AMD has the Stream SDK, which among other things allows you to do GPGPU computing.

I'm writing my own 3D engine, mainly for my radiosity project, and I want to write everything on my own, not using any code other than mine, except the framebuffer... and if it were possible, I'd do that part myself too.
Now that I'm finished with most of the optimizations, I can see that using it in my radiosity project is just painfully slow with even a very simple scene, even though in triangle rendering my texture mapper surprisingly beats OpenGL when many small triangles are used; and that's what radiosity is all about, many, many small triangles. Because of this I wanted to get hardware access, since using OpenGL/D3D is no alternative for me...
So something like what's mentioned above, the GPU as a general-purpose processor, is exactly what I was searching for.

I see it as an exercise, and it's very interesting.
I just hate using black-boxed code, even if it's fast.

What are you using to create your window? Because if it's Win32, then that's a black box too...

Better write your own compiler and linker while you're at it - you wouldn't want to rely on black-box code that you didn't even compile yourself.
