cgrant

Member
  • Content count

    440
  • Joined

  • Last visited

Community Reputation

1841 Excellent

1 Follower

About cgrant

  • Rank
    Member

Personal Information

  • Industry Role
    Programmer
  • Interests
    Design
    Programming


  1. Seems like you already answered your own question. "Increase performance" is a very vague term; to quantify performance you need metrics. In any case, since you mentioned a 2D use case, if all your objects are in the same plane ( i.e. no parallax ), then using just 2 floats to represent position makes sense. There are certain inherent behaviors of the GPU that you cannot control and which may have no bearing on what you put in, the vertex shader position output size being one of them. The general guidance is to use the smallest type ( size and count ) possible while maintaining enough precision, in order to reduce the amount of data transferred from CPU to GPU ( bandwidth ).
  2. Buffers, as far as I know, are immutable with respect to size. The overhead of the driver managing the growth or shrinkage of buffers outweighs the gain, so you as the application developer have to manage resources. If it's the case that 'suddenly you realize you need to render more vertices', then you have the conditions you could use to figure out the high water mark for your buffer size. If you don't want to waste memory, you can also create multiple fixed-size buffers, but this may require you to submit multiple draw calls. In the end I think you have to consider your use case and plan your buffer creation accordingly, under the assumption that you cannot resize buffers directly.
  3. I'm confused. What does 'looks correct' mean? The linear conversion happens when the SRV is read. If you write the sampled result to an RTV ( if I'm interpreting this correctly ), then the linearized value is what's actually written. Unless you perform the exact same gamma encoding step as the original input, I would not expect the result to look the same.
  4. Yes and no. Depending on the alignment of the uniform, you may have issues with indexing and getting the expected values at the correct indices ( the GLSL ES specification discusses this ). Yes, that is how uniform arrays are defined. I would suggest downloading the OpenGL GLSL ES specification for the version you are using as a reference during development, as it comes in really handy at times. Cheers.
  5. The driver may not be smart enough, or may just make the lazy assumption that all shared resources must be synchronized on context switch whether or not they are dirty. If this worked previously without issue, then it's most likely a driver change that brought about the problem.
  6. To add to what others have mentioned, the description you gave is lacking any meaningful info for others to provide/suggest a solution. 1. How was timing done? I keep mentioning this in every other beginner post: FPS is NOT a good performance metric. Give us absolute clock time, meaning seconds, milliseconds, or nanoseconds. 2. Where or what is FloatBuffer and how is it implemented? 3. What do your shaders look like? 4. What does your rendering pass look like? Too many unknowns... is what I'm trying to get at.
  7. If you have shared resources ( implied, given that you are using shared contexts ), then the driver has to ensure a consistent view of each resource when each context becomes active. The only way to do this is through some form of synchronization, which I think others have pointed out. This goes for all shareable resources... IIRC the specification points this out too. Without this automatic 'synchronization' the driver cannot ensure coherency, since, as you mentioned, one context may be modifying a resource while another is reading it ( which implies a multi-threaded setup ). If you are not using multiple threads, then having multiple contexts really makes no sense, as a single context would work fine: each window supplies its own device context, which is all that *MakeCurrent cares about. Even in this case there will still be a price to pay for calling *MakeCurrent when switching windows. If your application is multithreaded, then there is no need to call *MakeCurrent more than once to initialize the context on a thread, as once that association is made it never changes unless you manually call *MakeCurrent on the SAME thread with a different context.
  8. cgrant

    COM Interface best practices

    I think the common practice was bullet point #3, wherein you use the least common denominator and then fetch the latest interface if required.
  9. FBOs are basically free, as an FBO is just a 'name' OpenGL object. FBO attachments are not, as textures or renderbuffers require memory as well as a name. So you are not really saving anything by using just a single FBO. With that said, did you profile the code to see if there is even a need for this 'optimization'? Although I don't recall the specs covering the solution you propose, the attachments are part of the FBO state, so I'm going to say this is undefined behavior, aka it does not work consistently, aka don't do it. The specs explicitly call out reading and writing the same attachment as not allowed, as it creates a feedback loop.
  10. I don't have any links to share right now, but I've experienced cases in the past ( I want to say on AMD HW ) where the texture parameters would not take effect unless specified before the call that creates the texture store. That is not to say the texture would not display, as the default params would be enough. A quick test would be to change the filtering/wrap mode and see. However, if all is working well for you, then no worries.
  11. Why are you limiting yourself to 2 threads? I'm currently working on my editor using the Sony ATF framework ( WinForms version ), and in that application the idle message pumps the 'main thread', as this is the thread that does all the rendering. The UI does not block, and should not block unless you like being frustrated. Whenever a drag and drop operation is performed, a loading task is issued which performs the resource loading. Resource loading happens in different phases and is fully threaded:
      - File I/O. If the resource is a graphics resource, then dispatch an additional task to load the graphics resource. There is nothing stopping you from creating additional OpenGL contexts on other threads to do the loading.
      - If the task does not require graphics resources, then a callback is scheduled to indicate its completion.
      There are a lot of specific details that I left out, as they are more or less tied to my current architecture. I do admit it was a little tricky getting the drag and drop behavior to work ( mostly sync issues ). However, without a proper thread/multithread/task system in place, you are going to find yourself just hacking stuff to pieces. Like others have mentioned though, editors for the most part are not realtime, in the sense that runtime performance should be a mode of the editor vs being the actual editor. If that is not the case, then what you seem to be aiming at is an in-engine editor, which you alluded to having a separate design plan for. Why would you want them to be different? Wouldn't that be doing twice the work? My approach that I touched on above utilizes the same framework that the actual application built with the editor would use, and so far I cannot complain.
  12. Side note: your texture parameters must be specified before creating the actual texture store with any of the glTexImage* functions.
  13. Just to be pedantic: 4K, as I think it's being used here, has nothing to do with GPU-supported texture resolution, as that is a HW limitation. Most modern GPUs are capable of supporting 16K textures, but as mentioned, such large-resolution textures come with a price.
  14. I did miss that 2D part, but yeah, the only sane way you are going to get by is to do some form of scaling from your default resolution. I still think there will be some distortion: even though the app is rendered at resolution X, the native display has a specific resolution ( which can be queried ). You can go the route of having assets for a few common resolutions ( gotta do the research ), or redo your assets so that they correspond to some other metric, e.g. aspect ratio, instead of a fixed resolution. Does your app work at any resolution other than 1024 x 768 currently? If not, can you say why?
  15. Why would you, off the bat, limit the resolution of your application with no indication as to why this would help? If it's just for the sake of, let's say, running on a PC, then this is not optimal. Most frameworks these days ( SFML most likely included ) have the ability to enumerate the device capabilities; using that as your starting point would present a more flexible design. If it turns out that a specific resolution is too high to support your application's features, then you can use the said result(s) from enumeration to dial back the resolution. Also, if you get caught up on resolution you will soon find it to be a pain: there is no such thing as a single mobile resolution, especially on Android, where devices are so wide and varied.