About cgrant

  • Rank

Personal Information

  • Role
  • Interests

Recent Profile Visitors

The recent visitors block is disabled and is not being shown to other users.

  1. Sorry to say, but that is where you come in. If all of that were already done, what would be left for you to do besides mimicking what already exists? If you know all the bits and pieces, then the design of how to fit them together is up to you, as everyone's use case will be different. There is no magic recipe for an 'engine' or 'renderer' architecture. However, there are a few best practices that smart people on this forum can give you pointers on. My suggestion is to dive in: do a high-level design mock-up based on your intended use and then ask to have it critiqued, for example. Or better yet, figure out what it is you actually need before you start thinking about minute details...those will come in good time.
  2. I don't have a definitive answer, but does it really make a difference in your use case? It seems that even in the Vulkan case you would still need to do some translation, since you have your own version of object handles.
  3. If you are expecting a LAB format in any API, you are going to be out of luck. Just because a format says RGB/RGBA, there is no constraint on what the actual data represents. If the LAB data you have can be quantized into any of the supported texture formats, then there is nothing stopping you from using that format to upload your texture. The caveat is that D3D has no concept of LAB textures, so you will have to manually handle the interpretation of the texel values. So it can be done; you just have to convert/process the values after you sample the texture.
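As a minimal sketch of the quantization idea above (the channel mapping here is my own assumption, not a standard encoding): pack L* into the R channel and a*/b* into G/B of an ordinary RGBA8 texel, then decode after sampling. The CPU-side functions below stand in for what the shader would do after the texture fetch.

```c
#include <assert.h>
#include <math.h>
#include <stdint.h>

/* Hypothetical packing: L in [0,100] -> R, a and b in [-128,127] -> G and B.
 * The API neither knows nor cares that the bytes are Lab; your code must
 * interpret them after sampling. */
void lab_to_rgba8(float L, float a, float b, uint8_t texel[4])
{
    texel[0] = (uint8_t)(L / 100.0f * 255.0f + 0.5f);
    texel[1] = (uint8_t)(a + 128.0f);
    texel[2] = (uint8_t)(b + 128.0f);
    texel[3] = 255;
}

/* The decode step you would mirror in the shader after texture sampling. */
void rgba8_to_lab(const uint8_t texel[4], float *L, float *a, float *b)
{
    *L = texel[0] / 255.0f * 100.0f;
    *a = (float)texel[1] - 128.0f;
    *b = (float)texel[2] - 128.0f;
}
```

The round trip is lossy by up to half a quantization step in L, which is the price of reusing an 8-bit RGBA format.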
  4. It seems like you already answered your own question. "Increase performance" is a very vague term; in order to quantify performance you need metrics. In any case, since you mentioned a 2D use case, if all your objects are in the same plane (i.e. no parallax), then using just 2 floats to represent position makes sense. There are certain inherent behaviors of the GPU that you cannot control and which have no bearing on what you put in, the vertex shader's position output size being one of them. The general guidance is to use the smallest type (in size and component count) that maintains enough precision, in order to reduce the amount of data transferred from CPU to GPU (bandwidth).
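To make the bandwidth point concrete, here are two hypothetical vertex layouts for a 2D sprite (the struct names are illustrative). Shrinking the position attribute from four floats to two halves its upload cost, regardless of the fact that the shader's position output stays a vec4 internally.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical layouts: a padded 4-float position vs a 2D-only position. */
struct VertexFat  { float pos[4]; float uv[2]; };  /* 24 bytes per vertex */
struct VertexSlim { float pos[2]; float uv[2]; };  /* 16 bytes per vertex */
```

For a batch of 10,000 vertices, that is 80 KB less data crossing the bus per upload, which is the kind of metric you can actually measure.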
  5. Buffers, as far as I know, are immutable with respect to size...the overhead of the driver managing the growth or shrinkage of buffers outweighs the gain. You, as the application developer, have to manage resources. If it's the case that 'suddenly you realize you need to render more vertices', then you have the conditions you could use to figure out the high-water mark for your buffer size. If you don't want to waste memory, you can also create multiple fixed-size buffers, but this may require you to submit multiple draw calls. In the end, I think you have to consider your use case and plan your buffer creation accordingly, under the assumption that you cannot resize buffers directly.
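A sketch of the high-water-mark idea, using plain heap memory as a stand-in for a GPU buffer (the `CpuBuffer` type and growth policy are assumptions for illustration). Since buffer storage cannot be resized in place, "growing" means allocating a new, larger buffer and abandoning the old one; growing to the next power of two amortizes how often that happens.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Stand-in for a GPU vertex buffer; with a real API you would recreate
 * the buffer (e.g. a fresh glBufferData allocation) instead of realloc. */
typedef struct {
    unsigned char *data;
    size_t capacity;   /* bytes currently allocated */
    size_t used;       /* bytes needed this frame */
} CpuBuffer;

static size_t next_pow2(size_t n)
{
    size_t p = 1;
    while (p < n) p <<= 1;
    return p;
}

/* Ensure at least `needed` bytes; never shrink. Returns the capacity. */
size_t buffer_reserve(CpuBuffer *buf, size_t needed)
{
    if (needed > buf->capacity) {
        size_t newcap = next_pow2(needed);
        unsigned char *grown = realloc(buf->data, newcap);
        if (grown) { buf->data = grown; buf->capacity = newcap; }
    }
    buf->used = needed;
    return buf->capacity;
}
```

After a few frames the capacity settles at the high-water mark and no further recreation occurs, which is the behavior you want from a GPU buffer as well.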
  6. I'm confused. What does 'look correct' mean? The linear conversion happens when the SRV is read. If you write the sampled result to an RTV (if I'm interpreting this correctly), then the linearized value is what's actually written. Unless you apply the exact same gamma encoding step as the original input, I would not expect the result to look the same.
  7. Yes and no. Depending on the alignment of uniforms, you may have issues with indexing and getting the expected values at the correct indices (the GLSL ES specification discusses this). Yes, that is how uniform arrays are defined. I would suggest downloading the OpenGL GLSL ES specification for the version you are using as a reference during development, as it comes in really handy at times. Cheers.
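One concrete instance of the alignment issue, assuming the array lives in a uniform block with the std140 layout: the spec rounds the array stride of a `float` array up to that of a vec4 (16 bytes), so a tightly packed C `float[4]` does not match the GPU-side layout. A C mirror struct has to pad each element explicitly (the struct names here are illustrative):

```c
#include <assert.h>
#include <stddef.h>

/* Mirror of `uniform Block { float weights[4]; };` under std140 rules:
 * each array element occupies a full 16-byte slot, not 4 bytes. */
struct Std140FloatElem { float value; float pad[3]; };
struct Std140Block     { struct Std140FloatElem weights[4]; };
```

Uploading a packed `float[4]` into such a block would put `weights[1]` at byte 4 instead of byte 16, which is exactly the "wrong value at the index" symptom.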
  8. The driver may not be smart enough, or may just make the lazy assumption that all shared resources must be synchronized on a context switch whether or not they are dirty. If this worked previously without issue, then it's most likely a driver change that brought about the problem.
  9. To add to what others have mentioned, the description you gave is lacking any meaningful info for others to provide/suggest a solution. 1. How was timing done? I keep mentioning this in every other beginner post: FPS is NOT a good performance metric. Give us absolute clock time, meaning seconds, milliseconds, nanoseconds. 2. Where or what is FloatBuffer, and how is it implemented? 3. What do your shaders look like? 4. What does your rendering pass look like? Too many unknowns...is what I'm trying to get at.
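A quick illustration of why FPS is a poor metric: FPS is the reciprocal of frame time, so the same "+10 FPS" corresponds to wildly different amounts of real work saved depending on the baseline.

```c
#include <assert.h>

/* Convert a frame rate to the absolute time one frame actually takes. */
double frame_ms(double fps) { return 1000.0 / fps; }
```

Going from 30 to 40 FPS saves about 8.3 ms per frame; going from 90 to 100 FPS saves barely 1.1 ms, even though both read as "+10 FPS". Milliseconds are the honest unit.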
  10. If you have shared resources (implied, given that you are using shared contexts), then the driver has to ensure a consistent view of each resource when each context becomes active. The only way to do this is through some form of synchronization, which I think others have pointed out. This goes for all shareable resources...IIRC the specification points this out too. Without this automatic 'synchronization' the driver cannot ensure coherency, since, as you mentioned, it is possible that one context is modifying a resource while another is reading it (which implies a multi-threaded setup). If you are not using multiple threads, then having multiple contexts really makes no sense; a single context would work fine, since each window supplies its own device context, which is all that *MakeCurrent cares about. Even in this case there will still be a price to pay for calling *MakeCurrent when switching windows. If your application is multithreaded, then there is no need to call *MakeCurrent more than once to initialize the context on a given thread: once that association is made, it never changes unless you manually call *MakeCurrent on the SAME thread with a different context.
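The "once per thread" binding can be pictured as a thread-local slot. The sketch below uses a hypothetical stub (`fake_make_current` is not a real WGL/GLX call) purely to show the semantics: a binding made on one thread is invisible to every other thread, so each thread calls *MakeCurrent once with its own context and never again.

```c
#include <pthread.h>
#include <stddef.h>

/* Thread-local stand-in for the driver's current-context binding. */
_Thread_local void *g_current_context = NULL;

/* Stub for wglMakeCurrent / glXMakeCurrent: binds on the calling thread. */
void fake_make_current(void *ctx) { g_current_context = ctx; }

/* Worker thread: bind its own context once, then just use it. */
void *worker(void *ctx)
{
    fake_make_current(ctx);
    return g_current_context;  /* visible only on this thread */
}
```

Note this only models the binding rule, not the resource synchronization the driver performs between shared contexts.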
  11. cgrant

    COM Interface best practices

    I think the common practice was bullet point #3, wherein you create using the least-common-denominator interface and then query for the latest interface if required.
  12. FBOs are basically free, as an FBO is just a 'name' (an OpenGL object). FBO attachments are not, as textures and renderbuffers require memory as well as a name. So you are not really saving anything by using just a single FBO. With that said, did you profile the code to see if there is even a need for this 'optimization'? Although I don't recall the spec covering the solution you propose, the attachments are part of the FBO state, so I'm going to say this is undefined behavior, a.k.a. it does not work consistently, a.k.a. don't do it. The spec explicitly calls out reading from and writing to the same attachment as not allowed, as it creates a feedback loop.
  13. I don't have any links to share right now, but I've experienced cases in the past (I want to say on AMD hardware) where the texture parameters would not take effect unless they were specified before the call that creates the texture store. That is not to say the texture would not display, as the default params would be enough. A quick test would be to change the filtering/wrap mode and see. However, if all is working well for you, then no worries.
  14. Why are you limiting yourself to 2 threads? I'm currently working on my editor using the Sony ATF framework (WinForms version), and in that application the idle message pumps the 'main thread', as this is the thread that does all the rendering. The UI does not block, and should not block unless you like being frustrated. Whenever a drag-and-drop operation is performed, a loading task is issued which performs the resource loading. Resource loading happens in different phases and is fully threaded: file I/O comes first; if the resource is a graphics resource, an additional task is dispatched to load it (there is nothing stopping you from creating additional OpenGL contexts on other threads to do the loading); if the task does not require graphics resources, a callback is scheduled to indicate its completion. There are a lot of specific details I left out, as they are more or less tied to my current architecture. I do admit it was a little tricky getting the drag-and-drop behavior to work (mostly sync issues). However, without a proper thread/multithreading/task system in place, you are going to find yourself just hacking stuff to pieces. Like others have mentioned, though, editors for the most part are not realtime, in the sense that runtime performance should be a mode of the editor rather than the editor itself. If that is not the case, then what you seem to be aiming at is an in-engine editor, which you alluded to having a separate design plan for. Why would you want them to be different? Wouldn't that be doing twice the work? My approach, which I touched on above, utilizes the same framework that the actual application built with the editor would use, and so far I cannot complain.
  15. Side note: your texture parameters must be specified before creating the actual texture store with any of the glTexImage* functions.