Cyndanera

OpenGL: Access OpenGL from the GPU? asm?

Recommended Posts

The hardware drivers are pretty much all proprietary, so unless you are a big enough player for Intel/AMD/NVidia to directly support your platform... you don't.

That said, "your own OS" in this day and age is almost certainly a Linux or Android fork, and those come with GPU drivers.


If you want to use OpenGL on an OS made from scratch, you will literally have to implement OpenGL from scratch too! OpenGL itself is just a text document describing how implementations of it (which you find in your graphics drivers) should behave.
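
To make that concrete: "implementing OpenGL" just means providing a library that exports the entry points the spec describes and turns them into work for your own back end. A minimal sketch in C, where every type and name below is a stand-in rather than any real driver API:

    /* Stand-in GL types/constants -- on a from-scratch OS you define these
     * yourself to match the values given in the spec headers. */
    typedef float        GLfloat;
    typedef unsigned int GLbitfield;
    #define GL_COLOR_BUFFER_BIT 0x00004000

    /* Hypothetical back-end state owned by your own driver. */
    typedef struct {
        float clear_color[4];
    } my_gl_context;

    static my_gl_context g_ctx;   /* current context; single-threaded for brevity */

    /* Your libGL exports each function with the signature the spec defines. */
    void glClearColor(GLfloat r, GLfloat g, GLfloat b, GLfloat a)
    {
        g_ctx.clear_color[0] = r;
        g_ctx.clear_color[1] = g;
        g_ctx.clear_color[2] = b;
        g_ctx.clear_color[3] = a;
    }

    void glClear(GLbitfield mask)
    {
        if (mask & GL_COLOR_BUFFER_BIT) {
            /* Here your implementation would clear the framebuffer, either by
             * emitting a command to your GPU driver or in software. */
        }
    }

Everything an application calls, from glClear to glDrawElements, has to be backed by code like this that you (or a vendor driver) supply.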

AMD and Intel publish a lot of information on their hardware, e.g. http://developer.amd.com/resources/developer-guides-manuals/, which is everything you need to know to write a driver. See also: https://en.wikipedia.org/wiki/Free_and_open-source_graphics_device_driver


For a simpler entry point, you might take a look at the Mesa 3D library, which has grown alongside the evolution of OpenGL and was developed for systems that don't have access to vendor GL drivers. Mesa also contains driver logic for accessing AMD/NVidia/Intel GPUs.
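
For example, Mesa's off-screen interface (OSMesa) gives you a GL context backed purely by Mesa's software rasteriser, with no vendor driver involved. A rough sketch, assuming you have built or ported Mesa with OSMesa support and link against libOSMesa:

    /* Render with OpenGL into a plain memory buffer using Mesa's software
     * path -- no GPU driver required. */
    #include <GL/osmesa.h>
    #include <GL/gl.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const int width = 256, height = 256;

        /* RGBA colour buffer, 16-bit depth, no stencil/accum, no shared context. */
        OSMesaContext ctx = OSMesaCreateContextExt(OSMESA_RGBA, 16, 0, 0, NULL);
        if (!ctx) { fprintf(stderr, "OSMesaCreateContextExt failed\n"); return 1; }

        /* OSMesa renders into a buffer you allocate yourself. */
        void *buffer = malloc(width * height * 4);
        if (!OSMesaMakeCurrent(ctx, buffer, GL_UNSIGNED_BYTE, width, height)) {
            fprintf(stderr, "OSMesaMakeCurrent failed\n");
            return 1;
        }

        /* From here on it's ordinary OpenGL, executed entirely on the CPU. */
        glClearColor(0.2f, 0.3f, 0.8f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        glFinish();

        printf("first pixel byte: %u\n", ((unsigned char *)buffer)[0]);

        OSMesaDestroyContext(ctx);
        free(buffer);
        return 0;
    }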


You could always try to reverse engineer the driver too, but that's a lot of work. Alternatively, you could use the source of Linux's open-source drivers for AMD/NVidia as a reference (although the last time I checked they were pretty far behind their closed-source counterparts, especially in the case of NVidia, since NVidia doesn't publish any documentation at all).

8 hours ago, Hodgman said:

AMD and Intel publish a lot of information on their hardware, e.g. http://developer.amd.com/resources/developer-guides-manuals/, which is everything you need to know to write a driver.

Does this documentation include how to communicate with the command processor?  I thought it was just an ISA reference?


Yeah, I think the ISA docs tell you how to program the CUs, and the register docs tell you how to program the rest of the HW. E.g. writing to the VGT_DRAW_INITIATOR register (which is mapped to GPU address 0x287f0) launches a draw. Interestingly, when I searched the interwebs for that constant name, Google came up with the source for a Wii U emulator!

https://github.com/decaf-emu/decaf-emu/blob/master/src/libdecaf/src/modules/gx2/gx2_draw.cpp
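
To show what "writing a register launches a draw" means at the lowest level, here is a purely conceptual C sketch. Only the VGT_DRAW_INITIATOR name and the 0x287f0 offset come from the post above; the mapping code and programming sequence are hypothetical, and real drivers normally submit PM4 command packets through a ring buffer so that the GPU's command processor performs this register write itself:

    #include <stdint.h>

    #define VGT_DRAW_INITIATOR 0x287f0u   /* register offset quoted above */

    /* Hypothetical: base of the GPU's register BAR after your kernel maps it. */
    static volatile uint32_t *gpu_mmio;

    static inline void gpu_write_reg(uint32_t offset, uint32_t value)
    {
        /* Registers are 32 bits wide, so index by offset/4 into the window. */
        gpu_mmio[offset / 4] = value;
    }

    void kick_draw(uint32_t draw_initiator_value)
    {
        /* In a real driver, shaders, vertex buffers and render targets would
         * all have been programmed (via many other registers/packets) first. */
        gpu_write_reg(VGT_DRAW_INITIATOR, draw_initiator_value);
    }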

5 minutes ago, Hodgman said:

and the register docs tell you how to program the rest of the HW.

That doesn't seem to have been updated since 2012. Well, looking at the open-source drivers seems to be the way to go.



