About jakovo

  1. Forget this post... the problem was not in the code; the source-code string buffer was not terminated with '\0', and that was causing the compilation issues. Sorry for taking your time!
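For anyone who hits the same thing, here is a minimal sketch (the file name and helper function are hypothetical) of loading shader source into a buffer and making sure it ends with '\0' before handing it to the compiler:

```cpp
// Hypothetical sketch: read a shader file into memory and guarantee the
// buffer is null-terminated, since glShaderSource (when no lengths array
// is passed) expects C strings. The file name is made up for illustration.
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

std::vector<char> loadShaderSource(const std::string& path)
{
    std::ifstream file(path, std::ios::binary);
    std::vector<char> buffer((std::istreambuf_iterator<char>(file)),
                             std::istreambuf_iterator<char>());
    buffer.push_back('\0'); // the missing terminator that broke compilation
    return buffer;
}
```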
  2. Hi everyone,

  Is there any particular restriction on the number of SSBOs I can use in a compute shader?... In the code below, as soon as I add the Input2 block the shader fails to compile... if I remove it, compilation goes ok.

    #extension GL_ARB_compute_shader : enable
    #extension GL_ARB_shader_storage_buffer_object : enable

    layout(std430) buffer;

    layout(binding = 0) buffer Input0
    {
        float array1[];
    };

    layout(binding = 1) buffer Input1
    {
        float array2[];
    };

    // if I remove only this block, the code compiles ok
    layout(binding = 2) buffer Input2
    {
        float array3[];
    };

    layout(local_size_x = 128) in;

    void main()
    {
    }

  I did a search to see if there were any restrictions on the number of SSBOs I can use or something, but found nothing that could hint at what the problem might be... any ideas?
  3. Is there any good material out there about writing an engine for DX12/Vulkan?... I'm starting to get into DX12/Vulkan. I've written a small engine for DX11/9 before, but as I understand it, writing an engine for the new APIs requires a different approach, and its architecture should be somewhat different from that of a traditional DX11/OpenGL one... so I'd like to read a little about what I should keep in mind when rewriting my engine for the new APIs. Thanks
  4. jakovo

    Custom Shader Builder

    Thanks Hodgman, that sounds exactly like what I was looking for. So I just have to create a C# app which runs the fx compiler with different arguments, and tell Visual Studio to use that app when building the solution instead of fxc.exe... sounds much easier than I initially thought. I thought I'd have to build some kind of Visual Studio macro or such. Thanks!
  5. Hi everyone, I want to make a custom shader builder for my shaders, so that I can build the same file multiple times with different #defines set... this way I can write a shader once, with multiple #defines controlling the behavior for when the engine provides a normal map, uses x number of lights, has transparency, etc. I believe this is called an ubershader. Has anyone built a custom builder in Visual Studio to do something like this? Or does anyone know of examples of how it can be done?... I have no idea how to tell Visual Studio to build the same file multiple times passing different arguments. Thanks!
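As a rough illustration of the ubershader idea (not any particular tool's behavior), here is a sketch that enumerates all on/off combinations of a set of #defines and builds one fxc.exe command line per combination; the shader name, macro names and shader profile are made-up examples, though /T, /E, /D and /Fo are real fxc flags:

```cpp
// Hypothetical sketch: generate one fxc.exe invocation per #define
// permutation of an ubershader. Macro and file names are illustrative.
#include <string>
#include <vector>

std::vector<std::string> buildFxcCommands(const std::string& shaderName,
                                          const std::vector<std::string>& defines)
{
    std::vector<std::string> cmds;
    const size_t n = defines.size();
    // Enumerate all 2^n on/off combinations of the defines.
    for (size_t mask = 0; mask < (size_t(1) << n); ++mask)
    {
        std::string cmd = "fxc.exe /T ps_5_0 /E main";
        std::string suffix;
        for (size_t i = 0; i < n; ++i)
        {
            if (mask & (size_t(1) << i))
            {
                cmd += " /D " + defines[i] + "=1";
                suffix += "_" + defines[i];
            }
        }
        // Each permutation gets its own compiled output file.
        cmd += " /Fo " + shaderName + suffix + ".cso " + shaderName + ".hlsl";
        cmds.push_back(cmd);
    }
    return cmds;
}
```

A custom build step (or the C# wrapper mentioned above) would then simply run each generated command line.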
  6. I've worked on a couple of games where we did this splitting... in both cases the engine knew which bone an animation would be split from. For example, your artist could create a running animation and a shooting animation independently... we would set the pelvis as the splitting bone, so you could play the shooting animation from the pelvis upwards (ignoring it from the pelvis downwards) and the running animation from the pelvis downwards (ignoring the upper part of the animation)... and you would get a "running and shooting" animation. The same could be done with the head: we could specify the neck bone for splitting animations, so that we could rotate the head depending on where the user was looking (not animated by the artist), while everything below was animated according to the player's current action (running, shooting, standing still, etc.).
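A minimal sketch of the splitting idea described above, assuming a made-up skeleton representation where each bone stores its parent index and a pose is reduced to one float per bone for brevity (a real pose would hold full local transforms):

```cpp
// Hypothetical sketch of splitting a skeleton at a bone: the split bone and
// its descendants take the upper-body clip's pose, everything else takes the
// lower-body clip's pose. Bone names and the Pose type are illustrative.
#include <string>
#include <vector>

struct Skeleton {
    std::vector<std::string> names;   // bone names
    std::vector<int> parents;         // parent index per bone, -1 for root
};

// True if 'bone' is the split bone itself or one of its descendants.
bool isDescendantOrSelf(const Skeleton& s, int bone, int splitBone)
{
    for (int b = bone; b != -1; b = s.parents[b])
        if (b == splitBone) return true;
    return false;
}

using Pose = std::vector<float>; // one value per bone stands in for a transform

Pose splitBlend(const Skeleton& s, const Pose& lower, const Pose& upper,
                int splitBone)
{
    Pose out(s.names.size());
    for (size_t i = 0; i < out.size(); ++i)
        out[i] = isDescendantOrSelf(s, int(i), splitBone) ? upper[i] : lower[i];
    return out;
}
```

With a split at a spine or neck bone, the legs keep following the running clip while the torso and arms follow the shooting clip.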
  7. C++ being an extension of C, at the core there's really not much difference between the two... C++ just adds new features to the language, abstracts some things, and fixes a few little things here and there, to the point that you could almost compile C code with a C++ compiler without problems... so by learning C you're already learning the basics of C++. C also makes you think closer to the metal; you can easily figure out the assembly produced by your C code, and that's good... if you then move to C++ and see how it abstracts some things, you can figure out what's going on inside those abstractions (nothing is pure magic), and you can decide whether they're good for what you're doing or not. But don't give it too much thought: as long as you understand what's going on with the code you write, it doesn't matter which you learn first... the important thing is to take the next step in your career ;)
  8. What about creating a "transitional group of cells"? You can easily check whether the whole cell is within the LOD distance or just part of it... if it's only partially within the LOD distance, you know which side of your cell is in LOD n and which is in LOD n-1... so you create a transitional group of cells which holds the positions from the lower LOD on one side and the positions from the higher LOD on the other, and interpolate the positions of the in-between vertices of that group from LOD n to LOD n-1.
  9. jakovo

    PBR 3D Models

    Complementing what Hodgman already said, you can find a nice guide to PBR in Allegorithmic's free two-part PBR Guide (targeted mainly at artists, but technical enough for programmers as well). And in a couple of months Matt Pharr and others will release the third edition of the book Physically Based Rendering: From Theory to Implementation. Hope that helps!
  10. jakovo

    Colorimetry: Violet

    Thanks everyone! I didn't know cone cells still respond to some degree to other frequencies as well. That makes much more sense and explains why violet's RGB includes red... However, I still have a small doubt. If we look at the frequency response graph https://en.wikipedia.org/wiki/File:1416_Color_Sensitivity.jpg at around ~400nm (where violet is), we can see that the green and red cones' absorbance is almost the same (around 35-40), while the blue cones' absorbance is almost double that (around ~90)... (I corroborated this graph's info with the original research paper to make sure it was accurate.) Wouldn't that mean that in RGB, to see violet we would need values of roughly R 40%, G 40%, B 90%? (While in reality violet is more like R 40%, G 0%, B 60%.) Sorry, just trying to see the correlation between physical cone sensitivity and RGB values. Thanks!
  11. Ok, so... Violet is a color of visible light at the high-frequency end of the spectrum, with a wavelength between 380nm-450nm and a frequency between 668THz-789THz (traveling through air, I assume). It is considered a monochromatic color (meaning it is defined only by its wavelength/frequency, not by a combination with others).

  Now, our retina can only perceive red, green and blue... each of the three types of cone cells is most sensitive to wavelengths/frequencies close to red, green or blue... and depending on the combination of the intensities of these three, the brain interpolates so that we see, for example, yellow (somewhere between green and red). If we look at any light spectrum chart, we can see more or less where red, green and blue are located, and between them the colors that come from mixing them.

  However, if I look at the RGB values of violet, it's a mix of red and blue... which are at opposite ends of the visible spectrum! If violet really depends only on being an electromagnetic wave with a wavelength between 380nm-450nm (668THz-789THz), and our retina can only perceive color by mixing RGB... then: how can a monochromatic color at the high-frequency end of the spectrum (where the blues are) be perceived by our eyes as a mix with red, when the "red cells" are only sensitive to the wavelengths/frequencies at the opposite end of the spectrum??? Thanks!
  12. jakovo

    What is a wavefront?

    Oh! I see... I was misinterpreting the first quote. Your explanation makes perfect sense now... thanks Hodgman.
  13. Looking at AMD's documentation on the GCN architecture ( http://developer.amd.com/community/blog/2014/05/16/codexl-game-developers-analyze-hlsl-gcn/ ), it is a little confusing what exactly a wavefront is. It says: [quote not preserved] Ok, so a thread is a wavefront... but later on it says: [quote not preserved] Ok, so... a thread is not a wavefront as in the sentence before, but a wavefront can have multiple threads... [quote not preserved] So... a shader has different wavefronts (and wavefronts have threads). Also, a little confusingly, it says: [quote not preserved] But this contradicts the 64 mentioned in the first quote... Can someone explain? Thanks!
  14. Thanks Jason Z, Ok, so you have a Material object with the DepthStencil / Blend / Rasterizer states... and each Object in your scene references the particular Material it will use... something like this:

    class Material
    {
        RasterizerState   rs;
        BlendState        bs;
        DepthStencilState pss;
    };

    class Object
    {
        // other stuff
        shared_ptr<Material> mat;
    };

  Am I right?... Now, in the actual rendering code, how do you handle mirrors in your case?... I mean, you'd have to draw to the stencil buffer the area where the mirror is (which needs one particular set of states), then the renderer needs to draw the reflected scene objects in that area (which implies another set of states), and finally draw the actual mirror with some transparency over the reflected area (which again implies yet another set of states). Thanks!
  15. How do you think it would be best to handle DepthStencilState / BlendState / RasterizerState?

  Associating each object in the scene with its own states, like this:

    class Object
    {
        // shaders and stuff
        RasterizerState   rs;
        BlendState        bs;
        DepthStencilState pss;
    };

    void Render()
    {
        for( Object obj : ObjectList )
        {
            SetRasterizerState( obj.rs );
            SetBlendState( obj.bs );
            SetDepthStencilState( obj.pss );
            Draw();
        }
    }

  Or having a flag in each object indicating how it is going to be drawn, and setting up the states inside the rendering function:

    class Object
    {
        // shaders and stuff
        int flags; // uses transparency / casts shadows / etc.
    };

    void Render()
    {
        // Draw normal objects
        {
            RasterizerState   rs;  // set up accordingly
            SetRasterizerState( rs );
            BlendState        bs;  // set up accordingly
            SetBlendState( bs );
            DepthStencilState pss; // set up accordingly
            SetDepthStencilState( pss );
            for( Object obj : NormalObjectList )
            {
                Draw();
            }
        }

        // Draw objects with transparency
        {
            RasterizerState   rs;  // set up accordingly
            SetRasterizerState( rs );
            BlendState        bs;  // set up accordingly
            SetBlendState( bs );
            DepthStencilState pss; // set up accordingly
            SetDepthStencilState( pss );
            for( Object obj : TransparentObjectList )
            {
                Draw();
            }
        }

        // etc.
    }

  So far I was using the first one, and it was working ok, but then I started to implement mirrors, which require first rendering the mirror to the stencil buffer, then rendering the scene on the stenciled area, and then rendering the mirror's glass with some sort of transparency over the reflected scene... which would be hard to do with each object associated with its own DepthStencil / Blend / Rasterizer states.

  However, sometimes you would like to render a specific object in wireframe mode, and it sounds more natural to just modify that object's RasterizerState...

  So... how do you experienced people handle your states in your code?