cgrant

Member
  • Content Count

    450
  • Joined

  • Last visited

Community Reputation

1848 Excellent

1 Follower

About cgrant

  • Rank
    Member

Personal Information

  • Role
    Programmer
  • Interests
    Design
    Programming


  1. I guess: what exactly are you expecting or trying to achieve? If you are changing the output type of your fragment shader, how are you visualizing the output? Remember that no matter what the output of your fragment shader is, if you are expecting visual feedback then that output will still be converted to some 'visual' by the windowing system responsible for realizing the contents of the framebuffer. In the end the display you are looking at operates on RGB primaries (8-bit for the most part), so integral types are somewhat meaningless unless you are using them for intermediate buffers... Also, as stated above, the framebuffer being rendered to has a big impact on how the resulting fragment shader output gets 'converted' and stored.
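     As a rough sketch of that last point (not from the original thread; names are made up): an integer fragment output only makes sense against an integer color attachment, and even then you need another pass to turn it into something displayable.

         // C++/OpenGL sketch: pair a GLSL 'out uint' with an unsigned-integer attachment.
         GLuint tex = 0, fbo = 0;
         glGenTextures(1, &tex);
         glBindTexture(GL_TEXTURE_2D, tex);
         // GL_R32UI stores the raw integer written by the shader; no normalization
         // and no sRGB conversion happens on write.
         glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, 256, 256, 0,
                      GL_RED_INTEGER, GL_UNSIGNED_INT, nullptr);
         glGenFramebuffers(1, &fbo);
         glBindFramebuffer(GL_FRAMEBUFFER, fbo);
         glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                GL_TEXTURE_2D, tex, 0);
         // Fragment shader side: layout(location = 0) out uint result;
         // To visualize it, a later pass has to sample this texture (usampler2D)
         // and map the values to RGB itself.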
  2. cgrant

    Questions About OpenGL 4.6 + SPIR-V

    afaik, SPIR-V is just SPIR-V; the attributes you are attaching to it being byte code really have nothing to do with SPIR-V itself. The GLSL compiler in the driver most likely already lowers the shader to some intermediate representation after tokenizing it, so nothing new there. The only difference is that SPIR-V is standardized, whereas each vendor would have had their own proprietary intermediate representation. With that said, it stands to reason that the issues you've mentioned wrt OpenGL and shader usage would most likely still remain in that driver path. Apart from being able to create/load the intermediate representation of your shader up front, I wouldn't expect that much difference between SPIR-V + OpenGL vs GLSL.
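     For reference, a minimal sketch of what that path looks like in GL 4.6 (the spirvData/spirvSize variables are assumed to hold a SPIR-V blob compiled offline, e.g. with glslangValidator):

         GLuint shader = glCreateShader(GL_VERTEX_SHADER);
         // Hand the precompiled SPIR-V module to the driver...
         glShaderBinary(1, &shader, GL_SHADER_BINARY_FORMAT_SPIR_V,
                        spirvData, spirvSize);
         // ...then specialize it instead of calling glCompileShader.
         glSpecializeShader(shader, "main", 0, nullptr, nullptr);
         GLint ok = GL_FALSE;
         glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
         // Linking, uniforms, draw calls etc. proceed exactly as with GLSL,
         // which is why the rest of the GL-side issues don't go away.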
  3. cgrant

    clCreateFromGLBuffer crash

    So I dug up my example code that does roughly the same thing you are doing, and I had an explicit note in the code (granted this is on an AMD platform) to create the OpenCL context prior to creating any GL objects, especially buffer objects. I would expect context creation to fail if you did NOT specify a valid OpenGL context handle and device context. I don't know what other objects SFML creates in the process of its OpenGL context initialization, but it would be good to investigate. Also, how are you retrieving/initializing your OpenCL entry points? Remember OpenCL follows a similar model to OpenGL: in your example you are using GLEW to initialize your OpenGL entry points (function pointers). What are you using for OpenCL?
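     For what it's worth, the sharing setup looks roughly like this (Windows/WGL sketch; platform, device and glBufferId are placeholders, error checking trimmed):

         // The GL context must be current so the sharing properties are valid.
         // Per the note above, create the CL context before any GL buffer objects;
         // glBufferId is then created afterwards, before clCreateFromGLBuffer.
         cl_context_properties props[] = {
             CL_GL_CONTEXT_KHR,   (cl_context_properties)wglGetCurrentContext(),
             CL_WGL_HDC_KHR,      (cl_context_properties)wglGetCurrentDC(),
             CL_CONTEXT_PLATFORM, (cl_context_properties)platform,
             0
         };
         cl_int err = CL_SUCCESS;
         cl_context ctx = clCreateContext(props, 1, &device, nullptr, nullptr, &err);
         // Only once a valid sharing context exists does this call stand a chance:
         cl_mem clBuf = clCreateFromGLBuffer(ctx, CL_MEM_READ_WRITE, glBufferId, &err);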
  4. cgrant

    clCreateFromGLBuffer crash

    I think I had this issue once, and it was related to the order of creation of the CL and OpenGL contexts. If you've validated everything irreversible mentioned above and all is well, try swapping the order of your context creation.
  5. cgrant

    Shaders on embedded systems

    Hard to say what will happen when you run your application on a device. There are several things to consider when writing shaders for any system. 1. What level of shader support is available, e.g. SMx, GLSL ES x.x. This is the first and foremost factor that decides whether or not your shader will 'work' on the system. 2. For some APIs, like OpenGL ES, the same shader can fail on different HW from different vendors even when it is spec compliant. Again, this will only manifest itself during testing. In general the question being asked is too vague, as it's hard to say what will happen when your shader runs without even seeing the shader code (the description alone is not a good indicator). However, if you stick to making your shader compliant with the shader specification of the API being used, then you are more than half-way there.
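     A quick way to sanity-check point 1 at runtime (plain GL/GL ES calls, assuming a current context and <cstdio>):

         const GLubyte* glVersion   = glGetString(GL_VERSION);
         const GLubyte* glslVersion = glGetString(GL_SHADING_LANGUAGE_VERSION);
         printf("GL:   %s\nGLSL: %s\n", glVersion, glslVersion);
         // Limits a shader can silently blow past on smaller GPUs are also queryable:
         GLint maxFragUniformVectors = 0;
         glGetIntegerv(GL_MAX_FRAGMENT_UNIFORM_VECTORS, &maxFragUniformVectors);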
  6. Also, it's more than just re-inventing the wheel, as not all hardware is created equal. As an API designer you want features in the API to map directly to the HW as much as possible (efficiency), but with numerous different HW configurations this is no easy task. Which then spawns the next issue: in order to be as efficient as possible on platforms/HW that don't have a direct mapping for API features, 'hacks' are then introduced into the API, which muddies the interface and introduces unnecessary complexity. However, the biggest reason, imo, and others have mentioned this above, is politics...
  7. cgrant

    About OpenGL used in games

    Game devs do not update OpenGL drivers, as those are provided by the IHVs, e.g. Intel, Nvidia, AMD. I'm thinking you meant to say they updated the game to support/use OpenGL 4.x features, but understand that the two are different. If you are using the legacy fixed-function pipeline (i.e. no shaders) I would not expect any performance increase. Also, I'm not going to make any comment in regards to FPS, as this is not a valid performance metric, especially as a developer (tons of posts on this forum describe why it doesn't really give any real indication of perf loss/gain). Hope that answers your question.
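     If you do want a number to compare builds with, measure milliseconds per frame rather than FPS; FPS is non-linear (1000 -> 500 FPS is a 1 ms change, while 60 -> 30 FPS is ~16.7 ms). A trivial sketch:

         #include <chrono>
         using Clock = std::chrono::steady_clock;

         auto frameStart = Clock::now();
         // ... update + render + swap ...
         auto frameEnd = Clock::now();
         double frameMs =
             std::chrono::duration<double, std::milli>(frameEnd - frameStart).count();
         // Compare frameMs across changes; average over many frames for stability.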
  8. Sorry to say, but that is where you come in. If all that was already done, then what does that leave you to do, besides just 'mimicking' what already exists? If you know all the bits and pieces, then the design of how to fit them all together is up to you, as everyone's use case will be different. There is no magic recipe for an 'engine' or 'renderer' architecture. However, there are a few best practices or common practices that smart people on this forum can give you pointers on. My suggestion is to dive in, maybe do a high-level design mock-up based on your intended use and then ask to have it critiqued, for example. Or better yet, figure out what it is you actually need before you start thinking about minute details... those will come in good time.
  9. Don't have a definitive answer, but does it really make a difference in your use case? It seems that even in the Vulkan case you would still need to do some translation, since you have your own version of object handles.
  10. If you are expecting a LAB format in any API you are going to be out of luck. Just because the format says RGB/RGBA, there is no constraint on what the actual data represents. If the LAB data you have can be quantized into any of the supported texture formats, then there is nothing stopping you from using that texture format to upload your texture. The caveat is that D3D has no concept of LAB textures, so you will have to manually handle the interpretation of the texture values. So it can be done, you just have to convert/process the values after you sample the texture.
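     A sketch of what "quantize and reinterpret" can look like on the CPU side (the Lab struct and value ranges are assumptions about your data; the API only ever sees an RGBA8 texture):

         struct Lab { float L, a, b; };   // L in [0,100], a/b in [-128,127]

         inline void packLab(const Lab& in, unsigned char out[4])
         {
             out[0] = (unsigned char)(in.L / 100.0f * 255.0f + 0.5f); // L -> 0..255
             out[1] = (unsigned char)(in.a + 128.0f + 0.5f);          // a -> 0..255
             out[2] = (unsigned char)(in.b + 128.0f + 0.5f);          // b -> 0..255
             out[3] = 255;                                            // unused
         }
         // After sampling (values normalized to 0..1), the shader reverses this:
         //   L = r * 100.0;  a = g * 255.0 - 128.0;  b = b_ * 255.0 - 128.0;
         // and then does the LAB -> RGB conversion itself if it needs displayable color.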
  11. Seems like you already answered your own question. 'Increase performance' is a very vague term; in order to quantify performance you need metrics. In either case, since you mentioned a 2D use case, if all your objects are in the same plane (i.e. no parallax), then using just 2 floats to represent position makes sense. There are certain inherent behaviors of the GPU that you cannot control and which may have no bearing on what you put in, the vertex shader position output size being one of those. The general guidance is to use the smallest type (size and count) possible while maintaining enough precision, in order to reduce the amount of data transferred from CPU to GPU (bandwidth).
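     In GL terms that just means declaring the attribute with 2 components; the vertex shader can still output vec4(pos, 0.0, 1.0) to gl_Position. A sketch (Vertex2D and vbo are made-up names):

         struct Vertex2D { float x, y; float u, v; };

         glBindBuffer(GL_ARRAY_BUFFER, vbo);
         // 2 floats per position instead of 4: half the position bandwidth per vertex.
         glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex2D),
                               (void*)offsetof(Vertex2D, x));
         glEnableVertexAttribArray(0);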
  12. Buffers, as far as I know, are immutable wrt size... The overhead of the driver managing the growth or shrinkage of buffers outweighs the gain; you as the application developer have to manage resources. If it's the case that you 'suddenly realize you need to render more vertices', then you have the conditions you could use to figure out the high-water mark for your buffer size. If you don't want to waste memory, then you can also create multiple fixed-size buffers, but this may require you to submit multiple draw calls. In the end I think you have to consider your use case and plan your buffer creation accordingly, under the assumption that you cannot resize buffers directly.
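     One way to handle the "grow on demand" case yourself, since the storage itself can't be resized (sketch, GL 3.1+; names and the doubling/high-water-mark policy are up to you):

         GLuint growBuffer(GLuint oldBuf, GLsizeiptr oldSize, GLsizeiptr newSize)
         {
             GLuint newBuf = 0;
             glGenBuffers(1, &newBuf);
             glBindBuffer(GL_COPY_WRITE_BUFFER, newBuf);
             glBufferData(GL_COPY_WRITE_BUFFER, newSize, nullptr, GL_DYNAMIC_DRAW);

             // Copy the old contents across, then throw the old buffer away.
             glBindBuffer(GL_COPY_READ_BUFFER, oldBuf);
             glCopyBufferSubData(GL_COPY_READ_BUFFER, GL_COPY_WRITE_BUFFER,
                                 0, 0, oldSize);
             glDeleteBuffers(1, &oldBuf);
             return newBuf;
         }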
  13. I'm confused. What does 'look correct' mean? The linear conversion happens when the SRV is read. If you write the sampled result to an RTV (if I'm interpreting this correctly) then the linearized value is what's actually written. Unless you are doing the exact same gamma encoding step as the original input, I would not expect the result to look the same.
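     Concretely: either render into an *_SRGB render target so the hardware re-encodes on write, or re-encode manually before writing. The encode is the inverse of the decode the SRV performed on read (sketch in C++ for illustration; the same math applies in the shader):

         #include <cmath>

         float linearToSrgb(float linear)
         {
             // piecewise sRGB encode, inverse of the read-time linearization
             return linear <= 0.0031308f
                  ? linear * 12.92f
                  : 1.055f * powf(linear, 1.0f / 2.4f) - 0.055f;
         }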
  14. Yes and no. Depending on the alignment of the uniform, you may find that you have issues with indexing and getting the expected values at the correct indices (the GLSL ES specification discusses this). Yes, that is how uniform arrays are defined. I would suggest downloading the OpenGL GLSL ES specification for the version you are using as a reference while developing, as it comes in really handy at times. Cheers.
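     For reference, the usual application-side pattern for a uniform array, e.g. "uniform vec4 offsets[8];" in the shader (names here are hypothetical):

         GLint loc = glGetUniformLocation(program, "offsets"); // "offsets[0]" also valid
         float data[8 * 4] = { /* 8 vec4s, tightly packed */ };
         glUniform4fv(loc, 8, data);
         // Sticking to vec4 elements also sidesteps the array-stride/padding surprises
         // the spec describes for float and vec3 arrays in std140 uniform blocks.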
  15. The driver may not be smart enough, or it just makes the lazy assumption that all shared resources must be synced on a context switch whether or not they are dirty. If this worked previously without issue then it's most likely a driver change that brought about the issue.