cgrant

Member

  • Content Count: 446
  • Joined
  • Last visited

Community Reputation: 1848 Excellent

1 Follower

About cgrant

  • Rank: Member

Personal Information

  • Role: Programmer
  • Interests: Design, Programming

  1. cgrant

    Shaders on embedded systems

    Hard to say what will happen when you run your application on a device. There are several things to consider when writing shaders for any system. 1. What level of shader support is available, e.g. SMx or GLSL ES x.x. This is the first and foremost factor that decides whether or not your shader will 'work' on the system. 2. For some APIs, like GLSL ES, the same shader can fail on different HW from different vendors even when it is spec compliant. Again, this will only manifest itself during testing. In general the question being asked is too vague, as it's hard to say what will happen when your shader runs without even seeing the shader code (the description alone is not a good indicator). However, if you stick to making your shader compliant with the shader specification of the API being used, then you are more than halfway there.
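
As a rough illustration of points 1 and 2, here is a minimal sketch, assuming a current OpenGL ES 3.0 context and the standard GLES3/gl3.h header (the function and shader names are made up), that queries the reported shading-language version and compiles a conservative, spec-compliant fragment shader on the target device, reading the vendor compiler's log:

```cpp
// Sketch only: assumes a current OpenGL ES 3.0 context and loaded GL entry points.
#include <GLES3/gl3.h>
#include <cstdio>

// A deliberately conservative, spec-compliant GLSL ES 3.00 fragment shader:
// explicit #version, explicit default precision, no vendor extensions.
static const char* kFragSrc =
    "#version 300 es\n"
    "precision mediump float;\n"   // fragment shaders require a default float precision
    "in vec2 vUv;\n"
    "uniform sampler2D uTex;\n"
    "out vec4 oColor;\n"
    "void main() { oColor = texture(uTex, vUv); }\n";

bool CheckShaderSupport()
{
    // 1. What level of shader support does this device report?
    std::printf("GLSL version: %s\n",
                reinterpret_cast<const char*>(glGetString(GL_SHADING_LANGUAGE_VERSION)));

    // 2. Compile on the target device and read the log: the same source can
    //    fail on one vendor's compiler and pass on another's.
    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &kFragSrc, nullptr);
    glCompileShader(fs);

    GLint ok = GL_FALSE;
    glGetShaderiv(fs, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        char log[1024];
        glGetShaderInfoLog(fs, sizeof(log), nullptr, log);
        std::printf("compile failed on this device:\n%s\n", log);
    }
    glDeleteShader(fs);
    return ok == GL_TRUE;
}
```
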
  2. Also, it's more than just re-inventing the wheel, as not all hardware is created equal. As an API designer you want features in the API to map directly to the HW as much as possible (efficiency), but with numerous different HW configurations, this is no easy task. Which then spawns the next issue: in order to be as efficient as possible on a platform/HW that doesn't have a direct mapping for API features, 'hacks' are then introduced into the API, which muddy the interface and introduce unnecessary complexity. However, the biggest reason, IMO (and others have mentioned this above), is politics...
  3. cgrant

    About OpenGL used in games

    Game devs do not update OpenGL drivers, as those are provided by IHVs, e.g. Intel, NVIDIA, AMD. I'm thinking you meant to say they updated the game to support/use OpenGL 4.x features, but understand that the two are different. If you are using the legacy fixed-function pipeline (i.e. no shaders) I would not expect any performance increase. Also, I'm not going to make any comment in regards to FPS, as this is not a valid performance metric, especially as a developer (there are tons of posts on this forum describing why it doesn't really give any real indication of perf loss/gain). Hope that answers your question.
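
On the performance-metric point, a minimal sketch of what "absolute clock time" could look like, assuming a desktop GL 3.3+ context with a function loader already initialized (the loader header and function name here are placeholders):

```cpp
// Sketch only: measuring GPU time for a render pass in milliseconds with a
// timer query (OpenGL 3.3+ / ARB_timer_query), instead of quoting FPS.
#include <glad/glad.h>   // or whatever GL loader/header your project already uses
#include <cstdio>

void TimeRenderPass()
{
    GLuint query = 0;
    glGenQueries(1, &query);

    glBeginQuery(GL_TIME_ELAPSED, query);
    // ... issue the draw calls you want to measure ...
    glEndQuery(GL_TIME_ELAPSED);

    // Blocks until the result is available; fine for profiling, not for shipping code.
    GLuint64 nanoseconds = 0;
    glGetQueryObjectui64v(query, GL_QUERY_RESULT, &nanoseconds);
    std::printf("pass took %.3f ms\n", nanoseconds / 1.0e6);

    glDeleteQueries(1, &query);
}
```
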
  4. Sorry to say, but that is where you come in. If all that was already done, then what does that leave you to do, besides just 'mimicking' what already exists? If you know all the bits and pieces, then the design of how to fit them all together is up to you, as everyone's use case will be different. There is no magic recipe for an 'engine' or 'renderer' architecture. However, there are a few best practices or common practices that smart people on this forum can give you pointers on. My suggestion is to dive in, maybe do a high-level design mock-up based on your intended use and then ask to have it critiqued, for example. Or better yet, figure out what it is you actually need before you start thinking about minute details... those will come in good time.
  5. I don't have a definitive answer, but does it really make a difference in your use case? It seems that even in the Vulkan case you would still need to do some translation, since you have your own version of object handles.
  6. If you are expecting a LAB format in any API you are going to be out of luck. Just because the format says RGB/RGBA, there is no constraint on what the actual data represents. If the LAB data you have can be quantized into any of the supported texture formats, then there is nothing stopping you from using that texture format to upload your texture. The caveat is that D3D has no concept of LAB textures, so you will have to manually handle the interpretation of the texture values. So it can be done; you just have to convert/process the values after you sample the texture.
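
For example, one possible way to quantize CIELAB data into an ordinary RGBA8 texture; the ranges and helper names below are assumptions for illustration, and the inverse mapping still has to be done manually in the shader after sampling:

```cpp
// Sketch only: quantizing CIELAB values into a plain RGBA8 texture.  The API
// just sees RGBA; your shader has to apply the inverse mapping after sampling.
#include <algorithm>
#include <cstddef>
#include <cstdint>

struct Lab { float L, a, b; };   // assumed ranges: L in [0,100], a/b in [-128,127]

static uint8_t ToByte(float x, float lo, float hi)
{
    float t = (x - lo) / (hi - lo);          // normalize to [0,1]
    t = std::clamp(t, 0.0f, 1.0f);
    return static_cast<uint8_t>(t * 255.0f + 0.5f);
}

void PackLabToRGBA8(const Lab* src, uint8_t* dst, size_t count)
{
    for (size_t i = 0; i < count; ++i) {
        dst[i * 4 + 0] = ToByte(src[i].L,    0.0f, 100.0f);  // "R" channel holds L
        dst[i * 4 + 1] = ToByte(src[i].a, -128.0f, 127.0f);  // "G" channel holds a
        dst[i * 4 + 2] = ToByte(src[i].b, -128.0f, 127.0f);  // "B" channel holds b
        dst[i * 4 + 3] = 255;                                 // unused
    }
    // In the shader, undo the mapping (L = r * 100, a = g * 255 - 128, b = b * 255 - 128)
    // and then convert LAB to whatever interpretation you actually need.
}
```
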
  7. Seems like you already answered your question. "Increase performance" is a very vague term; in order to quantify performance you need metrics. In either case, since you mentioned a 2D use case, if all your objects are in the same plane (i.e. no parallax), then using just 2 floats to represent position makes sense. There are certain inherent behaviors of the GPU that you cannot control and which may have no bearing on what you put in, the vertex shader's position output size being one of them. The general guidance is to use the smallest type (size and count) possible while maintaining enough precision, in order to reduce the amount of data transferred from CPU to GPU (bandwidth).
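
A small sketch of what that could look like with a 2-float position attribute (the struct and function names are made up; assumes an ES 3.0 / GL 3.x style setup with a VAO already bound):

```cpp
// Sketch only: a compact 2D vertex layout using two floats for position.  The
// vertex shader still writes a vec4 to gl_Position; that output size is fixed
// by the pipeline regardless of how small your inputs are.
#include <GLES3/gl3.h>

struct Vertex2D {
    float x, y;   // position, 2 floats
    // uv, color, etc. can likewise use the smallest types that keep enough precision
};

void SetupPositionAttrib(GLuint vbo)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex2D), (void*)0);
    glEnableVertexAttribArray(0);
}

// Matching vertex shader (GLSL ES 3.00):
//   layout(location = 0) in vec2 aPos;
//   void main() { gl_Position = vec4(aPos, 0.0, 1.0); }
```
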
  8. Buffers, as far as I know, are immutable with respect to size... The overhead of the driver managing the growth or shrinkage of buffers outweighs the gain. You as the application developer have to manage resources. If it's the case that 'suddenly you realize you need to render more vertices', then you have the conditions that you could use to figure out the high-water mark for your buffer size. If you don't want to waste memory, you can also create multiple fixed-size buffers, but this may require you to submit multiple draw calls. In the end I think you have to consider your use case and plan your buffer creation accordingly, under the assumption that you cannot resize buffers directly.
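
As one possible way to handle it, a sketch (names are made up) that treats the GL buffer as fixed-size and only throws it away and reallocates when an application-chosen high-water mark is exceeded:

```cpp
// Sketch only: "resizing" a vertex buffer really means discarding the old
// storage and allocating a larger one; the application tracks the capacity.
#include <GLES3/gl3.h>
#include <cstddef>

struct GrowableVB {
    GLuint buffer   = 0;
    size_t capacity = 0;   // bytes currently allocated on the GPU
};

void UploadVertices(GrowableVB& vb, const void* data, size_t bytes)
{
    if (bytes > vb.capacity) {
        if (vb.buffer) glDeleteBuffers(1, &vb.buffer);
        glGenBuffers(1, &vb.buffer);
        glBindBuffer(GL_ARRAY_BUFFER, vb.buffer);
        vb.capacity = bytes * 2;   // grow past the new high-water mark
        glBufferData(GL_ARRAY_BUFFER, vb.capacity, nullptr, GL_DYNAMIC_DRAW);
    } else {
        glBindBuffer(GL_ARRAY_BUFFER, vb.buffer);
    }
    glBufferSubData(GL_ARRAY_BUFFER, 0, bytes, data);   // fill only what this frame needs
}
```
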
  9. I'm confused. What does 'look correct' mean? The linear conversion happens when the SRV is read. If you write the sampled result to an RTV (if I'm interpreting this correctly), then the linearized value is what's actually written. Unless you are doing the exact same gamma-encoding step as the original input, I would not expect the result to look the same.
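
For illustration, one D3D11 arrangement where the read is linearized and the write is re-encoded, assuming both textures were created with a TYPELESS or matching *_SRGB format (the function name is made up):

```cpp
// Sketch only (D3D11): sampling through an *_SRGB SRV linearizes on read; to get a
// gamma-encoded result back out, the render target view must also be *_SRGB so the
// hardware re-encodes on write.
#include <d3d11.h>

HRESULT CreateSrgbViews(ID3D11Device* device,
                        ID3D11Texture2D* srcTex, ID3D11Texture2D* dstTex,
                        ID3D11ShaderResourceView** srv, ID3D11RenderTargetView** rtv)
{
    D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
    srvDesc.Format              = DXGI_FORMAT_R8G8B8A8_UNORM_SRGB;   // read: sRGB -> linear
    srvDesc.ViewDimension       = D3D11_SRV_DIMENSION_TEXTURE2D;
    srvDesc.Texture2D.MipLevels = 1;
    HRESULT hr = device->CreateShaderResourceView(srcTex, &srvDesc, srv);
    if (FAILED(hr)) return hr;

    D3D11_RENDER_TARGET_VIEW_DESC rtvDesc = {};
    rtvDesc.Format        = DXGI_FORMAT_R8G8B8A8_UNORM_SRGB;          // write: linear -> sRGB
    rtvDesc.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE2D;
    return device->CreateRenderTargetView(dstTex, &rtvDesc, rtv);
}
```
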
  10. Yes and no; depending on the alignment of the uniforms, you may find that you have issues with indexing and getting the expected values at the correct indices (the GLSL ES specification discusses this). Yes, that is how uniform arrays are defined. I would suggest downloading the OpenGL GLSL ES specification for the version you are using as a reference while developing, as it comes in really handy at times. Cheers.
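
A small sketch of declaring and filling a uniform array of vec4s from the application side (names are made up; assumes GLSL ES 3.00 and an already-linked program):

```cpp
// Sketch only: default-block uniform array filled with one glUniform4fv call.
#include <GLES3/gl3.h>

// GLSL ES 3.00 side:
//   uniform vec4 uColors[8];
//   ... color = uColors[index]; ...

void SetColorArray(GLuint program, const float colors[8][4])
{
    glUseProgram(program);
    // "uColors[0]" names the first element; the count argument fills consecutive elements.
    GLint loc = glGetUniformLocation(program, "uColors[0]");
    glUniform4fv(loc, 8, &colors[0][0]);
    // Individual elements can also be updated via "uColors[3]" etc. if you only touch a slice.
}
```
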
  11. The driver may not be smart enough, or it may just make the lazy assumption that all shared resources must be synced on a context switch whether or not they are dirty. If this worked previously without issue then it's most likely a driver change that brought about the issue.
  12. To add to what others have mentioned, the description you gave is lacking any meaningful info for others to provide/suggest a solution. 1. How was timing done? I keep mentioning this in every other beginner post: FPS is NOT a good performance metric. Give us absolute clock time... meaning seconds, milliseconds, nanoseconds. 2. Where or what is FloatBuffer and how is it implemented? 3. What do your shaders look like? 4. What does your rendering pass look like? Too many unknowns... if you see what I'm trying to get at.
  13. If you have shared resources (implied, given that you are using shared contexts), then the driver has to ensure a consistent view of each resource when each context becomes active. The only way to do this is through some form of synchronization, which I think others have pointed out. This goes for all shareable resources... IIRC the specification points this out too. Without this automatic 'synchronization' the driver cannot ensure coherency, as with what you mentioned it's possible that one context may be modifying the resource while another is reading it (which implies a multi-threaded setup). If you are not using multiple threads, then having multiple contexts really makes no sense... a single context would work fine, since each window will supply its own device context, which is all that *MakeCurrent cares about. Even in this case there will still be a price to pay for calling *MakeCurrent when switching windows. If your application is multithreaded then there is no need to call *MakeCurrent more than once to initialize the context on a given thread, as once that association is made it never changes unless you manually call *MakeCurrent on the SAME thread with a different context.
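
A sketch of the multithreaded case on Windows/WGL (the struct and thread function are made up; assumes the contexts were created and shared up front), where each render thread calls wglMakeCurrent exactly once:

```cpp
// Sketch only (WGL): each thread binds its own context once at startup; the
// thread<->context association stays put until that thread changes it itself.
#include <windows.h>

struct GLThreadArgs {
    HDC           dc;       // device context of the window this thread renders to
    HGLRC         context;  // context created up front and shared (e.g. via wglShareLists)
    volatile bool running;
};

DWORD WINAPI RenderThread(LPVOID param)
{
    GLThreadArgs* args = static_cast<GLThreadArgs*>(param);

    // One wglMakeCurrent per thread is enough; no need to re-make it every frame.
    wglMakeCurrent(args->dc, args->context);

    while (args->running) {
        // ... render with this thread's context; any shared resources may be
        //     synchronized by the driver behind your back ...
        SwapBuffers(args->dc);
    }

    wglMakeCurrent(nullptr, nullptr);
    return 0;
}
```
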
  14. cgrant

    COM Interface best practices

    I think the common practice was bullet point #3 wherein you use the least common denominator and then fetch the latest interface if required.
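
For example, in D3D11 terms that might look like creating against the baseline ID3D11Device and then asking for the newer interface (the helper name is made up):

```cpp
// Sketch only (D3D11): create against the baseline interface, then QueryInterface
// for the newer one and fall back gracefully if the runtime doesn't provide it.
#include <d3d11_1.h>

ID3D11Device1* TryGetDevice1(ID3D11Device* baseDevice)
{
    ID3D11Device1* device1 = nullptr;
    if (SUCCEEDED(baseDevice->QueryInterface(__uuidof(ID3D11Device1),
                                             reinterpret_cast<void**>(&device1)))) {
        return device1;   // caller owns the extra reference; Release() when done
    }
    return nullptr;       // older runtime: stick with the least-common-denominator interface
}
```
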
  15. FBOs are basically free, as an FBO is just a 'name' OpenGL object. FBO attachments are not, as textures or renderbuffers require memory as well as a name. So you are not really saving anything by using just a single FBO. With that said, did you profile the code to see if there is even a need for this 'optimization'? Although I don't recall the spec covering the solution you propose, the attachments are part of the FBO state, so I'm going to say this is undefined behavior, a.k.a. it does not work consistently, a.k.a. don't do it. The spec explicitly calls out reading and writing the same attachment as not allowed, as it creates a feedback loop.
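
If the goal was to avoid the feedback loop, a common alternative is to ping-pong between two FBOs, each with its own attachment; a rough sketch (names are made up, textures/FBOs assumed to be created elsewhere):

```cpp
// Sketch only: two cheap FBO names, each with its own color attachment, used to
// ping-pong a post-processing chain so a texture is never read and written at once.
#include <GLES3/gl3.h>

struct PingPong {
    GLuint fbo[2];
    GLuint color[2];   // the attachments are what actually cost memory
};

void RunPasses(PingPong& pp, int passes, int width, int height)
{
    int src = 0, dst = 1;
    for (int i = 0; i < passes; ++i) {
        glBindFramebuffer(GL_FRAMEBUFFER, pp.fbo[dst]);   // write to one attachment...
        glViewport(0, 0, width, height);
        glBindTexture(GL_TEXTURE_2D, pp.color[src]);      // ...while sampling the other
        // ... draw fullscreen pass ...
        int tmp = src; src = dst; dst = tmp;               // swap roles for the next pass
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
```
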