About cgrant

  1. cgrant

    Optimization suggestions

    Are we talking full 3D models for the crops, or sprites? If they are 3D models, you may want to try an LOD scheme in which only the closest crops within a certain range are full 3D models and everything beyond that range is drawn as a billboard. If the crops are already rendered as billboards, you can probably do some form of sorting (if this is not being done already) or use a more efficient billboard representation. In any case this is just shooting in the dark; without accurate profiling we'll all be guessing where the issue lies. What is your average frame time? FPS is NOT frame time, we need clock time. I don't know what hardware (GPU) you are targeting, but have you run the application through a graphics profiler to see where the bottleneck is? Vertex transform? Fill rate? Memory access? It could be anything, but this has to be done up front before anyone can suggest an accurate corrective approach.
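A distance-based LOD pick of the kind described above can be sketched in a few lines; the range constants and names here are invented for illustration:

```cpp
#include <cassert>

// Hypothetical LOD cutoffs in world units -- tune for the actual scene.
const float kMeshRange      = 30.0f;   // full 3D model inside this distance
const float kBillboardRange = 120.0f;  // billboard out to here, cull beyond

enum class CropLod { Mesh, Billboard, Culled };

// Pick an LOD from the camera-to-crop offset. Squared distances avoid
// one sqrt per crop, which matters when there are thousands of them.
CropLod selectLod(float dx, float dy, float dz) {
    const float d2 = dx * dx + dy * dy + dz * dz;
    if (d2 <= kMeshRange * kMeshRange)           return CropLod::Mesh;
    if (d2 <= kBillboardRange * kBillboardRange) return CropLod::Billboard;
    return CropLod::Culled;
}
```

The same comparison can drive which draw list (mesh batch vs. billboard batch) each crop lands in every frame.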
  2. I guess I don't fully understand what you are trying to achieve. If all you are after is a simple integral index, why would you need more than a single-channel texture? Are you using separate indices for array textures? If you are using indices, then the associated geometry must have a way to specify an index into this 'index buffer', which is just another level of indirection. If that is the case, why not just store the actual index you are trying to look up instead? All in all, DXT is lossy, so chances are you will not get back the same index that was written into the texture prior to compression.
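To make the 'store the actual index' point concrete: an uncompressed single-channel 8-bit texture (e.g. GL_R8) round-trips small indices exactly, whereas DXT offers no such guarantee. A sketch, with the encode/decode helpers made up for illustration:

```cpp
#include <cassert>
#include <cmath>

// One texel of a GL_R8-style texture holds one index, 0..255.
unsigned char encodeIndex(int index) {
    return static_cast<unsigned char>(index);
}

// Samplers hand the channel back normalized to [0,1]; undo that.
int decodeIndex(float sampled) {
    return static_cast<int>(std::lround(sampled * 255.0f));
}
```

Run the same round trip through a block-compressed format and the decoded value can land on a neighboring index, which is exactly the failure mode described above.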
  3. cgrant

    Duplicating a VAO

    The vertex and index buffer ids are independent of the VAO id. Granted, I did not spend much time reading the code, and pardon me if this sounds harsh, as there may be a language misunderstanding as well. So from the top (and maybe this will help to better understand what you are trying to achieve): 1) What exactly do you mean by duplicating a VAO? A VAO cannot be duplicated; each VAO needs its own separate id. 2) What does cloning the cube mean? If you mean being able to draw the same cube multiple times, then there is no need to create more than one instance of the cube; just draw the same cube multiple times with a different transform each time.
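To illustrate point 2: the only per-cube state is a model matrix, so one VAO plus one matrix per draw is all 'cloning' takes. The draw loop would bind the single VAO once and upload a different matrix (e.g. via glUniformMatrix4fv) before each draw call; a sketch of building those matrices, column-major as OpenGL expects:

```cpp
#include <array>
#include <cassert>

// Column-major 4x4 translation matrix; the translation lands in
// elements 12..14, the layout glUniformMatrix4fv expects by default.
std::array<float, 16> translation(float x, float y, float z) {
    return { 1, 0, 0, 0,
             0, 1, 0, 0,
             0, 0, 1, 0,
             x, y, z, 1 };
}
```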
  4. I guess the question is what exactly are you expecting or trying to achieve? If you are changing the output type of your fragment shader, how are you visualizing the output? Please remember that no matter what your fragment shader's output is, if you are expecting visual feedback then that output will still be converted to some 'visual' by the windowing system responsible for realizing the contents of the framebuffer. In the end, the display you are looking at operates on RGB primaries (8-bit for the most part), so integral types are somewhat meaningless unless you are using them for intermediate buffers. Also, as stated above, the framebuffer being rendered to has a big impact on how the resulting fragment shader output gets 'converted' and stored.
  5. cgrant

    Questions About OpenGL 4.6 + SPIR-V

    AFAIK, SPIR-V is just SPIR-V; the properties you are attributing to it being byte code really have nothing to do with SPIR-V specifically. The GLSL compiler in the driver most likely already uses some intermediate representation after tokenizing the shader, so nothing new there. The only difference is that SPIR-V is standardized, while each vendor would have had their own proprietary intermediate representation. With that said, it stands to reason that the issues you've mentioned with respect to OpenGL and shader usage would most likely still remain in that driver path. Apart from being able to create/load the intermediate representation of your shader, I wouldn't expect much difference between SPIR-V + OpenGL and GLSL + OpenGL.
  6. cgrant

    clCreateFromGLBuffer crash

    So I dug up my example code that does roughly the same thing you are doing, and I had an explicit note in the code (granted, this is on an AMD platform) to create the OpenCL context prior to creating any GL objects, especially buffer objects. I would expect context creation to fail if you did NOT specify a valid OpenGL context handle and device context. I don't know what other objects SFML creates in the process of its OpenGL context initialization, but it would be good to investigate. Also, how are you retrieving/initializing your OpenCL entry points? Remember, OpenCL follows a similar model to OpenGL: in your example you are using GLEW to initialize your OpenGL entry points (function pointers). What are you using for OpenCL?
  7. cgrant

    clCreateFromGLBuffer crash

    I think I had this issue once, and it was related to the order of creation of the CL and GL contexts. If you've validated everything irreversible mentioned above and all is well, try swapping the order of your context creation.
  8. cgrant

    Shaders on embedded systems

    Hard to say what will happen when you run your application on a device. There are several things to consider when writing shaders for any system. 1. What level of shader support is available, e.g. SMx or GLSL ES x.x? This is the first and foremost factor that decides whether or not your shader will 'work' on the system. 2. For some APIs, like OpenGL ES, the same shader can fail on different hardware from different vendors even when it is spec compliant. Again, this will only manifest itself during testing. In general, the question being asked is too vague, as it's hard to say what will happen when your shader runs without even seeing the shader code (the description alone is not a good indicator). However, if you stick to making your shader compliant with the shader specification of the API being used, then you are more than halfway there.
  9. Also, it's more than just re-inventing the wheel, as not all hardware is created equal. As an API designer you want features in the API to map directly to the hardware as much as possible (efficiency), but with numerous different hardware configurations, this is no easy task. That spawns the next issue: in order to be as efficient as possible on platforms/hardware that don't have a direct mapping for an API feature, 'hacks' are introduced into the API, which muddies the interface and introduces unnecessary complexity. However, the biggest reason, IMO (and others have mentioned this above), is politics...
  10. cgrant

    About OpenGL used in games

    Game devs do not update OpenGL drivers; those are provided by IHVs, e.g. Intel, Nvidia, AMD. I'm thinking you meant to say they updated the game to support/use OpenGL 4.x features, but understand that the two are different. If you are using the legacy fixed-function pipeline (i.e. no shaders), I would not expect any performance increase. Also, I'm not going to make any comment in regards to FPS, as this is not a valid performance metric, especially for a developer (there are tons of posts on this forum describing why it doesn't really give any real indication of perf loss/gain). Hope that answers your question.
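As a side note on why frame time, not FPS, is the useful metric: the same FPS delta can represent wildly different amounts of real work. A quick sketch of the conversion:

```cpp
#include <cassert>
#include <cmath>

// Frame time is the linear measure of work; FPS is its reciprocal.
double frameTimeMs(double fps) { return 1000.0 / fps; }

// Dropping from 1000 to 500 FPS costs only 1 ms/frame extra,
// while dropping from 60 to 30 FPS costs an extra ~16.7 ms/frame.
```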
  11. Sorry to say, but that is where you come in. If all of that were already done, what would it leave you to do besides just mimicking what already exists? If you know all the bits and pieces, then the design of how to fit them together is up to you, as everyone's use case will be different. There is no magic recipe for an 'engine' or 'renderer' architecture. However, there are a few best practices or common practices that smart people on this forum can give you pointers on. My suggestion is to dive in, maybe do a high-level design mock-up based on your intended use, and then ask to have it critiqued, for example. Or better yet, figure out what it is you actually need before you start thinking about minute details... those will come in good time.
  12. I don't have a definitive answer, but does it really make a difference in your use case? It seems that even in the Vulkan case you would still need to do some translation, since you have your own version of object handles.
  13. If you are expecting a LAB format in any API, you are going to be out of luck. Just because a format says RGB/RGBA, there is no constraint on what the actual data represents. If the LAB data you have can be quantized into any of the supported texture formats, then there is nothing stopping you from using that format to upload your texture. The caveat is that D3D has no concept of LAB textures, so you will have to manually handle the interpretation of the texture values. So it can be done; you just have to convert/process the values after you sample the texture.
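As an example of what 'convert/process after sampling' might look like, here is one possible (made-up) quantization of a CIELAB triple into the 0..255 channels of an ordinary RGBA8 texture, with the inverse applied after sampling:

```cpp
#include <cassert>
#include <cmath>

struct Lab { float L, a, b; };

// L in [0,100] maps to [0,255]; a,b in [-128,127] map to [0,255].
unsigned char packL(float L)  { return static_cast<unsigned char>(std::lround(L / 100.0f * 255.0f)); }
unsigned char packAB(float v) { return static_cast<unsigned char>(std::lround(v + 128.0f)); }

// What the shader (or CPU readback) would do after sampling the texel.
Lab unpack(unsigned char r, unsigned char g, unsigned char b) {
    return { r / 255.0f * 100.0f,
             static_cast<float>(g) - 128.0f,
             static_cast<float>(b) - 128.0f };
}
```

The L channel loses a little precision to 8-bit quantization, which is the trade-off the paragraph above describes; a 16-bit-per-channel format would tighten it.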
  14. Seems like you already answered your own question. 'Increase performance' is a very vague term; in order to quantify performance you need metrics. In either case, since you mentioned a 2D use case, if all your objects are in the same plane (i.e. no parallax), then using just 2 floats to represent position makes sense. There are certain inherent behaviors of the GPU that you cannot control and that may have no bearing on what you put in, the vertex shader's position output size being one of them. The general guidance is to use the smallest type (size and count) possible while maintaining enough precision, in order to reduce the amount of data transferred from CPU to GPU (bandwidth).
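A concrete illustration of the 'smallest type possible' guidance, using two hypothetical vertex layouts for a flat 2D case:

```cpp
#include <cassert>
#include <cstdint>

// A wasteful layout: vec4 position plus full-float UVs.
struct VertexFat  { float pos[4]; float uv[2]; };

// A trimmed layout: vec2 position plus normalized 16-bit UVs
// (uploaded with glVertexAttribPointer(..., GL_UNSIGNED_SHORT, GL_TRUE, ...)).
struct VertexLean { float pos[2]; std::uint16_t uv[2]; };

// Per-vertex upload shrinks from 24 bytes to 12 -- half the bandwidth.
```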
  15. Buffers, as far as I know, are immutable with respect to size. The overhead of the driver managing the growth or shrinkage of buffers outweighs the gain; you as the application developer have to manage resources. If it's the case that 'suddenly you realize you need to render more vertices', then you have the very conditions you could use to figure out the high-water mark for your buffer size. If you don't want to waste memory, you can also create multiple fixed-size buffers, but this may require you to submit multiple draw calls. In the end, I think you have to consider your use case and plan your buffer creation accordingly, under the assumption that you cannot resize buffers directly.
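One way to act on that advice is to track a high-water mark and recreate the buffer only on growth, doubling the capacity to amortize reallocations. A sketch (the class and sizes are invented for illustration; the 'recreate' step would be a fresh glBufferData at the new capacity):

```cpp
#include <cassert>
#include <cstddef>

// Tracks the most vertices the application has ever needed at once and
// reports when the GPU-side buffer must be recreated with a larger size.
class VertexBudget {
public:
    // Returns true when the backing buffer needs to be recreated larger.
    bool request(std::size_t vertices) {
        if (vertices <= capacity_) return false;          // still fits
        while (capacity_ < vertices)
            capacity_ = capacity_ ? capacity_ * 2 : 1024; // double to amortize
        return true;
    }
    std::size_t capacity() const { return capacity_; }
private:
    std::size_t capacity_ = 0;
};
```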
By using GameDev.net, you agree to our community Guidelines, Terms of Use, and Privacy Policy.