About wcassella


Personal Information

  • Role
    3D Artist
    Creative Director
    Game Designer
    UI/UX Designer
  1. wcassella

    Open Source games with good architecture design

    As much as I like Unreal, it's a great example of this. Since it's a commercial game engine, Epic has a strong motivation to make the most common things easy to use, even at the cost of efficiency and simplicity. They've added a fully functional garbage collector and reflection system to C++, for example. As long as you keep context in mind, I think browsing others' code is usually helpful, if a bit slow. Do it if you want to find things to avoid as much as things to try out.
  2. Since I've finished school and my previous game project, I've had a bit of down time before I start work (not game-dev related, unfortunately). I've decided to try my hand at writing something I've always been a little bit interested in: UI frameworks. Ideally I'd want something that offers a similar user experience to an immediate-mode framework, but with better support for things like styling and animation (and isn't horribly ugly). If completed, I'd want to be able to use it for tools as well as in-game UI. Anyway, I've got a few loose ideas floating around in my head, so I thought I'd throw them out here to see if anyone has any insight. I have a decent amount of experience using a variety of UI frameworks, but I've never actually implemented one before, so if any of this sounds incorrect it's probably because I'm thinking about it from the wrong angle.

     My first idea was to just use the same sorts of controls (and a similar layout algorithm) to those offered by WPF, but I'm a bit concerned that regenerating the entire UI every frame with that sort of system would be too inefficient. It requires two passes over every element: first to compute required width/height (bottom-up), and second to arrange children (top-down). This is the same design Unreal uses for its editor. I began working on a simple implementation, but one snag I ran into was with something like a WrapPanel. Say you have the following scenario (assuming you're creating a vertical wrap panel):

       • The horizontal space allocated to each element in a column is determined by the amount of horizontal space required by the widest element in that column.
       • If the WrapPanel runs out of vertical space, a new column is created (regardless of remaining horizontal space).
       • Images compute required size based on the width and height of the image, but may scale up as long as they remain proportional (so allocating more horizontal space also requires more vertical space).

     In a given wrap panel you have some narrow elements, an image, and a wide element. Based on individually computed height requirements, it is determined that all elements can fit in one column, with the wide element being the widest. By allocating more horizontal space to the image, it now requires more vertical space, and the wide element can no longer fit in the column, creating a new one. Now the element that set the width of the column no longer exists in the column, which is weird. I can think of a few solutions to this problem:

       • Only allocate height in a vertical wrap panel based on what the element computed. This would effectively disallow elements that must stretch proportionally from stretching at all. This seems like the most sensible solution, but I don't believe it's what WPF is doing (playing around with it, I'm really not sure at all what WPF is doing).
       • Place the wide element in a new column, and recompute layout for elements in the first column with the width of the next widest element (inefficient).
       • Place the wide element in a new column, and keep the elements in the first column as they are (might cause some elements to be clipped).
       • Keep all elements in the same column (will definitely cause some elements to be clipped).
       • Just force WrapPanels to have uniformly sized elements. I can't think of any use cases off the top of my head where this would be a significant issue.

     On the complete opposite end of the complexity spectrum, one layout solution would be to simply disallow elements from computing width/height based on their contents. Elements may stretch to fill their container, but UI is laid out in a single top-to-bottom pass where the allocated width and height for each element are known up front. This may be acceptable for in-game UIs (which are typically very restrained in their complexity), but it breaks completely when faced with lists of non-uniformly sized elements. This could be loosened to a degree by requiring only one axis to be fixed (i.e., allocated width is known but height may be computed, or allocated height is known but width may be computed), but that faces problems if you have, say, a horizontal list (where the height allocated for each element is known, but not the width) which contains a vertical list (where the width allocated for each element is supposed to be known, but not the height). Even if that issue were somehow resolved, I imagine this would create quite a burden on the UI programmer, who would have to do a lot of guessing and checking on the requirements for certain elements, which in some cases may not even be possible.

     One loosely formed idea I had was to use the layout to generate a linear set of instructions for computing coordinates/dimensions, ordered by the dependencies elements have on one another. Auto-sized elements would generate instructions to have the dimensions of their contained elements computed first, then compute their own coordinates/dimensions, then compute the coordinates of their contained elements. Fixed-size elements would simply compute their contained elements' coordinates and dimensions directly. This is very similar to the first approach (and I imagine it could suffer from similar or even worse complexity issues), except that the generated instructions wouldn't distinguish between a bottom-up or top-down pass, which could potentially reduce the amount of redundant work. I haven't thought this idea through very thoroughly, however, so I could be wrong.

     Anyway, I'd love to get a second opinion on these ideas, especially when it comes to performance. I wouldn't be opposed to introducing some burden on the application to specify which elements have become invalid between UI iterations if necessary, but ideally those would be very coarse-grained.
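The two-pass measure/arrange scheme described above can be sketched roughly as follows. This is a minimal sketch, not WPF's actual implementation; all type and method names here are made up for illustration:

```cpp
#include <algorithm>
#include <cassert>
#include <memory>
#include <vector>

// Bottom-up Measure pass computes desired sizes; top-down Arrange pass
// assigns final rectangles. A vertical stack panel is the simplest case.
struct Size { float w = 0, h = 0; };
struct Rect { float x = 0, y = 0, w = 0, h = 0; };

struct Element {
    Size desired;   // filled in by Measure
    Rect bounds;    // filled in by Arrange
    virtual ~Element() = default;
    virtual void Measure(Size avail) = 0;
    virtual void Arrange(Rect finalRect) { bounds = finalRect; }
};

// A leaf element with a fixed preferred size.
struct FixedBox : Element {
    Size preferred;
    explicit FixedBox(Size s) : preferred(s) {}
    void Measure(Size) override { desired = preferred; }
};

// A vertical stack: desired width is the widest child, desired height is
// the sum of child heights; Arrange stacks children top to bottom.
struct VStack : Element {
    std::vector<std::unique_ptr<Element>> children;

    void Measure(Size avail) override {
        desired = {};
        for (auto& c : children) {
            c->Measure(avail);  // bottom-up: children first
            desired.w = std::max(desired.w, c->desired.w);
            desired.h += c->desired.h;
        }
    }

    void Arrange(Rect finalRect) override {
        bounds = finalRect;
        float y = finalRect.y;
        for (auto& c : children) {  // top-down: parent hands out space
            c->Arrange({finalRect.x, y, finalRect.w, c->desired.h});
            y += c->desired.h;
        }
    }
};
```

The WrapPanel problem in the post shows up exactly where this sketch is too simple: here Arrange trusts the sizes computed during Measure, but a proportionally scaling image invalidates its measured height once Arrange gives it a different width.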
  3. I recently decided to buy a new keyboard. I've never used a mechanical keyboard before, and I keep hearing about how awesome they are, so I did some research and picked out the Das Keyboard 4 Professional (with MX Brown switches). Everyone who's reviewed it seems to say it's the best keyboard they've ever owned, but I've been using it for a few days and... I'm not sure I like it very much. I haven't owned very many keyboards in my life, so either I've peaked too soon, or it just isn't for me.

     The overall design is fantastic: the construction feels extremely solid and the media controls are well thought out. However, I dislike how stiff the keys are, and the key depression distance feels a bit too deep. Some of the larger keys have a different sound and feel than the others; the backspace key in particular wiggles a bit more and sometimes makes a high-pitched squeak when pressed. The resting angle is fairly flat, so I have to hold my arms up in an awkward way when I type on it. I don't particularly mind the lack of back-lighting or macro keys, but for $170 I'd expect the absence of extra features to mean that everything else was perfect. It's kind of disappointing, because the simplicity of this keyboard is super refreshing. Most others I checked out looked like what you'd get if you bred a spaceship with a Christmas tree. Having two USB 3.0 ports on the back is a great addition as well, and something I haven't seen on anything else.

     As for replacements, I'm currently looking into the Corsair K70. I'm hoping the shorter bottom-out distance will make it less exhausting to type on, while still providing a nice crisp feeling. As for the lights, I'm planning on setting them to somewhere between 'white' and 'off'. Anyway, I'm the most indecisive person I know (maybe?), so I'd love/hate to see any other keyboard choices you guys can give me. Ultimately the only thing it needs to have is a number pad (and the rest of the keyboard too), since I use Blender a lot.
  4. Thanks to help I got here, I've now got a light mapping system that can generate nice-looking shadows and global illumination. However, I'm a little confused about how to effectively use the results in my rendering pipeline. Should I separate the shadow map and GI map (maybe putting the shadow mask in the alpha component) so that I can still get dynamic specular highlights for the precomputed light source, but only in the directly lit areas? Is there a more effective way of achieving that which I'm not thinking of? Is there anything specific to PBR (energy conservation, combining with environment maps) that I should keep in mind? Thanks
  5. wcassella

    Lightmap global illumination issues

      Wouldn't that make the expression:

          float3 result = sum * (1.0f / (numSamples * 2 * Pi));

      Edit: Reading that again, I'm actually not sure. There is definitely too much light when it's not done as sum / Area, though.

      Double Edit: Yeah, you're definitely right. The reason I was getting too much light was that I was dividing by the number of batch samples (8), rather than the total number of samples (128). Anyway, thank you! It looks much better now.

      I'm still getting slightly too much light in areas where you wouldn't expect it (around the corners, even in the dark), so I'm going to check my direct buffer sampling code to make sure I didn't make any mistakes there.

      Update: I was accidentally swizzling the barycentric coordinates for the ray hit location. It now looks like this: [screenshot]
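For reference, the standard Monte Carlo normalization for uniform hemisphere sampling (pdf = 1/(2π)) multiplies the sum over all N samples by 2π/N, where N is the total sample count rather than the per-batch count (the bug discussed above). A minimal sketch under that assumption; all names are hypothetical:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>

constexpr float kPi = 3.14159265358979f;

// Estimator of the hemisphere integral of f, given N sampled values of f
// drawn uniformly over the hemisphere (pdf = 1 / (2*pi)):
//   integral ~= (2*pi / N) * sum(f_i)
// N must be the TOTAL number of samples, not the per-batch count.
float integrate_hemisphere(const float* samples, std::size_t numSamples) {
    float sum = 0.0f;
    for (std::size_t i = 0; i < numSamples; ++i)
        sum += samples[i];
    return sum * (2.0f * kPi / float(numSamples));
}
```

As a sanity check, a constant integrand f = 1 should integrate to the hemisphere's solid angle, 2π, regardless of sample count.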
  6. I've decided to ditch the lightmapping library I was using previously, since it ran unbelievably slowly and had some strange artifacts that I didn't want to deal with. In its place, I've just gone with integrating embree and doing ray tracing on the CPU. The basic procedure for what I'm doing so far is this:

         direct_buff = black image
         for each texel in the lightmap:
             cast a ray originating from the texel, in the direction of the directional light source
             if the ray was not occluded:
                 influence = max(dot(light_dir, texel_norm), 0)
                 direct_buff[texel] = influence * light_color
             endif
         endfor

         indirect_buff = copy(direct_buff)
         for each texel in the lightmap:
             compute a TBN matrix for this texel
             create 128 random unit vectors on the hemisphere represented by the TBN matrix
             cast a ray originating from the texel, in the direction of each vector
             for each ray that hit something:
                 albedo = albedo value stored at the hit coordinate
                 direct = direct lighting value stored at the hit coordinate
                 influence = dot(ray_dir, texel_norm) / (distance * distance)
                 indirect_buff[texel] += influence * albedo * direct
             endfor
         endfor
         return indirect_buff

     It's not ideal, since I'm using a physically based rendering pipeline and this has no consideration for any of the usual PBR stuff other than albedo. However, as long as it looks decent, I'm fine with it.

     So I whipped up a simple scene similar to the Cornell Box. When unlit, it looks like this: [screenshot]

     Computing just the direct lighting value (with the light source outside the box) and multiplying by albedo, I get this: [screenshot]

     Ok, looks pretty good. Then as a test, I output just the indirect influence (not multiplying by albedo, and greatly reducing brightness): [screenshot]

     Makes sense, I think. Surfaces that are close to other surfaces get more indirect ray hits and influence than those that aren't. Should it be that bright around the corners, though? I'm not sure...

     Anyway, here's what it looks like with the final output terms (and multiplying by albedo): [screenshot]

     Ehh, yeah, that doesn't look quite right. Why is there such an insane amount of light in the corners, even where there's no direct lighting? I'm getting some nice color bleeding from the red wall onto the left cube, but that doesn't distract from the rest of it. Here's another shot, taken from behind the right box: [screenshot]

     Yeah, there definitely shouldn't be that much light there. I'll check my math, but if anyone could give me any tips for where I might be going wrong, I'd appreciate it. I can post the code, but it's pretty gross at the moment.

     Thanks!
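The "create random unit vectors on the hemisphere represented by the TBN matrix" step above could look roughly like this. This is a sketch assuming uniform (not cosine-weighted) sampling, and all names are made up:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };

// Rotate a tangent-space direction v into world space using the TBN basis:
// result = v.x * T + v.y * B + v.z * N
static Vec3 mul_tbn(const Vec3& t, const Vec3& b, const Vec3& n, const Vec3& v) {
    return { t.x * v.x + b.x * v.y + n.x * v.z,
             t.y * v.x + b.y * v.y + n.y * v.z,
             t.z * v.x + b.z * v.y + n.z * v.z };
}

// Uniform sample on the +z hemisphere (pdf = 1 / (2*pi)).
static Vec3 sample_hemisphere(std::mt19937& rng) {
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    float u1 = uni(rng), u2 = uni(rng);
    float z = u1;  // cos(theta), uniform in [0, 1)
    float r = std::sqrt(std::max(0.0f, 1.0f - z * z));
    float phi = 2.0f * 3.14159265f * u2;
    return { r * std::cos(phi), r * std::sin(phi), z };
}
```

Every generated direction is unit-length and lies on the normal's side of the surface, which is what the gather loop above relies on.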
  7. I'm currently in the process of integrating a lightmapper into my game engine, using this library. Some of the sample images seem to have a bit too much light around the corners, but overall it looks pretty good. 99.9% of my game is static, so I think this is the best solution for general-purpose lighting in my case. If anyone knows of any other libraries or tools I should look into, though, I'd love to hear your suggestions.

     Anyway, my engine's mesh format supports having 1-2 UV layouts per mesh. This way you can have one UV layout for materials and another for light mapping, but if you only have one, then light mapping will fall back to using the material layout. In both cases they're artist-controlled, which I prefer.

     My idea for generating the scene's lightmap was to pack all UV layouts for static objects into one image, where each is scaled with respect to its size and importance in the scene, using an algorithm like this. One issue with this, however, is that for small objects the margin between UV islands is going to get really small, to the point where bleeding might become a significant issue. One approach to fixing this would be to regenerate UV layouts for the entire scene during light mapping, but that's just more complication that I'd like to avoid.

     So I have two questions here:

       • Have you encountered this issue, was it significant, and if so, what did you do to fix it?
       • Are there any other light mapping tips or libraries you can suggest before I get heavily invested in this?

     Thanks!
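To put a rough number on the margin concern above: the gutter between islands has to be measured in texels of the final atlas, so a chart that gets scaled down needs a proportionally larger margin in its own UV space to keep the same texel gutter. A tiny sketch of that arithmetic; the function names and parameters are hypothetical:

```cpp
#include <cassert>
#include <cmath>

// Margin in atlas UV space corresponding to a gutter of N texels.
float margin_uv(int atlasSize, int gutterTexels) {
    return float(gutterTexels) / float(atlasSize);
}

// A chart scaled to occupy `chartScale` of the atlas (0 < chartScale <= 1)
// needs its authored margin enlarged by 1 / chartScale to keep the same
// texel gutter after packing.
float margin_in_chart_uv(int atlasSize, int gutterTexels, float chartScale) {
    return margin_uv(atlasSize, gutterTexels) / chartScale;
}
```

So a small object packed at a quarter scale into a 1024-pixel atlas needs four times the margin in its own layout, which is where the bleeding risk comes from.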
  8. I've read in several places at this point that to get the most out of your CPU's cache, it's important to pack relevant data together and access it in a linear fashion. That way the first access to that region of memory loads it into a cache line, and the remaining accesses will be much cheaper. One thing that isn't clear to me, however, is how many cache lines you can have "active" at once.

     So for example, if you have a series of 3D vectors and you lay them out like this:

         [xxxx...] [yyyy...] [zzzz...]

     And then you access your data as:

         for (std::size_t i = 0; i < len; ++i)
         {
             auto x_i = x[i];
             auto y_i = y[i];
             auto z_i = z[i];
             // Do something with x, y, and z
         }

     Does each array get its own cache line? Or does accessing the 'y' element push 'x' out of the cache, and then accessing the 'z' element push 'y' out of the cache? If you were to iterate backwards, would that cause more cache misses than iterating forwards?

     On another note, while I try to follow best practices for this stuff where possible, I literally have zero idea how effective (or ineffective) it is, since I have no tools for profiling it, and I don't have time to write one version of something and then test it against another. Are there any free (or free-for-students) tools for cache profiling on Windows? I'd love to use Valgrind, but I don't have anything that can run Linux that is also powerful enough to run my game.

     Thanks!
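For context on the question above: an L1 data cache holds many independent lines (for example, a 32 KiB cache with 64-byte lines holds 512 of them), and each of the three streams only occupies a line or two at a time, so in practice they don't evict one another (barring pathological alignment where all three arrays map to the same cache sets). A self-contained sketch of the structure-of-arrays access pattern in question:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// SoA layout as described above: three separate arrays, walked linearly.
// Each stream pulls in one 64-byte line at a time (16 floats), so the
// loop touches about three lines concurrently out of hundreds available.
float sum_lengths_squared(const std::vector<float>& x,
                          const std::vector<float>& y,
                          const std::vector<float>& z) {
    float total = 0.0f;
    for (std::size_t i = 0; i < x.size(); ++i)
        total += x[i] * x[i] + y[i] * y[i] + z[i] * z[i];
    return total;
}
```

Iterating backwards is also fine for the cache itself, though hardware prefetchers are sometimes tuned better for ascending strides; that is exactly the kind of claim a cache profiler should verify rather than guesswork.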
  9. wcassella

    GLSL Mat3 troubles

    Thanks for the heads up. Is that due to OpenGL doing interpolation between vertices to get the values for the fragment shader?

    Anyway, ultimately I've come to the conclusion that this is a driver bug. I'm developing on a Surface Book with that wacky hybrid GPU thing, and when I run my game with the integrated GPU it works, but when I run it with the dedicated Nvidia GPU it does not. I'll just stick to constructing the TBN matrix in the fragment shader.
  10. Having recently gotten tangent vectors working, I wanted to try out normal mapping. I'm constructing a TBN matrix in my vertex shader like so (including the whole thing for clarity):

          // basic.vert
          #version 430 core

          uniform mat4 model;
          uniform mat4 view;
          uniform mat4 projection;

          in vec3 v_position;
          in vec3 v_normal;
          in vec3 v_tangent;
          in vec2 v_texcoord;

          out VS_OUT
          {
              vec3 position;
              vec2 texcoord;
              vec3 tangent;
              vec3 bitangent;
              vec3 normal;
              mat3 TBN;
          } vs_out;

          void main()
          {
              gl_Position = projection * view * model * vec4(v_position, 1);
              vs_out.position = (model * vec4(v_position, 1)).xyz;
              vs_out.texcoord = v_texcoord;

              // Compute TBN matrix
              // Not using bitangent sign, I was *going* to add that next
              vec3 v_bitangent = cross(v_normal, v_tangent);
              vec3 T = normalize(vec3(model * vec4(v_tangent, 0.0)));
              vec3 B = normalize(vec3(model * vec4(v_bitangent, 0.0)));
              vec3 N = normalize(vec3(model * vec4(v_normal, 0.0)));

              vs_out.tangent = T;
              vs_out.bitangent = B;
              vs_out.normal = N;
              vs_out.TBN = mat3(T, B, N);
          }

      You may have noticed that some of my shader outputs are redundant with the TBN matrix; I'll get to that.

      So just to test this out, I wrote my fragment shader like so:

          // basic.frag
          #version 430 core

          uniform sampler2D diffuse;

          in VS_OUT
          {
              vec3 position;
              vec2 texcoord;
              vec3 tangent;
              vec3 bitangent;
              vec3 normal;
              mat3 TBN;
          } fs_in;

          layout (location = 0) out vec3 out_position;
          layout (location = 1) out vec3 out_normal;
          layout (location = 2) out vec4 out_diffuse;
          layout (location = 3) out float out_specular;

          void main()
          {
              out_position = fs_in.position;
              out_normal = fs_in.TBN[2];
              out_diffuse = texture(diffuse, fs_in.texcoord);
              out_specular = 0.3;
          }

      Finally, the viewport shader just outputs the normal buffer instead of doing any shading. However, I just get a black screen, with no errors.

      So I changed the out_normal assignment line in my fragment shader to:

          out_normal = fs_in.normal;

      And it works. After a lot of head scratching, I ended up with this:

          // basic.vert
          // ...
          out VS_OUT
          {
              // ...
              mat4 TBN;
          } vs_out;
          // ...
          vs_out.TBN = mat4(vec4(T, 0), vec4(B, 0), vec4(N, 0), vec4(0, 0, 0, 1));

          // basic.frag
          // ...
          in VS_OUT
          {
              // ...
              mat4 TBN;
          } fs_in;
          // ...
          out_normal = fs_in.TBN[2].xyz;

      And it works. That obviously isn't really what I'd want, but I have no idea what to believe anymore. Is this a driver bug, or am I doing something stupid (it wouldn't be the first time)?

      Edit - Some additional notes

      Using (when TBN is a mat3):

          out_normal = fs_in.TBN[0];

      outputs the bitangent vector, and:

          out_normal = fs_in.TBN[1];

      outputs the normal vector. fs_in.tangent/bitangent/normal always return the expected result, so I may end up just constructing the TBN matrix in the fragment shader.
  11. These are all really helpful, thanks! I've been busy with rendering code recently, but now I've got a chance to try these out.
  12. Hi,

      I'm working on a game built on my own engine for my senior thesis in college. Right now I'm having some trouble with physics integration. Currently I'm using Bullet, which I've had great success with in the past for things like rigid body simulation and collision detection, but now I'm trying to use the built-in character controller (btKinematicCharacterController), and it's... not great.

      The problems:

        • I'm getting strange behavior where the movement will skip slightly every few frames. It looks really bad in first-person, and I'd like to get rid of it as soon as possible. I've done some tests, and I'm sure at this point that it has nothing to do with my rendering code, or anything that isn't controlled by Bullet.
        • The way it detects whether you're on the ground doesn't seem to be very intelligent, so if you hit your head on something mid-jump, it allows you to jump again.
        • The documentation says users of the API are supposed to implement interaction with rigid bodies themselves, but when I tested it, interaction already seems to work (you can push objects around, and they can push you). Problem is, I can't figure out how to control it.

      I'm not sure about the rest of Bullet, but the brief time I spent poking through the character controller source kinda killed the confidence I had in the quality of the code. There's tons of stuff that's just commented or ifdef'd out, and the parameter to btKinematicCharacterController::jump has a default value that leaves it uninitialized, which causes you to fly off into space unless you supply the argument yourself.

      What I need:

      The game I'm trying to make is a fairly simple first-person puzzle game. All I really need from the physics engine is solid collision detection, and controlled interaction with certain objects (basically just pushing blocks around). I don't have a lot of time to focus on physics (I'm already worried about finishing what I have left to do in the time I have, even if this were already fixed). I've considered switching to PhysX, but I'd rather keep the game source code as easy to redistribute as possible, and integrating PhysX is just more time spent not working on other things.

      Does anyone know any good tricks to beat the Bullet kinematic character controller into shape, or a good implementation I can use in its place? Is it worth switching to a different physics engine?

      Thanks
  13. I'm working on a new game, and for this one I've decided to go with a more pure ECS design than my last one. In particular, Entities are represented with unique IDs, Components are POD structs with no inheritance requirements (though they must be default- and copy-constructible), and Systems do not contain any non-reproducible state.

      I should clarify that last bit: in my approach, Systems may contain whatever intermediate data they need, as long as it can be reproduced from the state of the Entities and Components once the game is reloaded. For example, a CColliderComponent would only contain basic collision shape data, whereas the BulletPhysicsSystem would own the actual btCollisionShape objects that correspond to those components, and update them accordingly.

      The problem I now face is that the BulletPhysicsSystem needs to know when new colliders have been created or destroyed, or when their shapes have changed. To do that, I've decided to add a new element: 'Tags'. Tags are like components of components, though they are more representative of events. Unlike components, you can have multiple instances of the same type of tag on a single component, and they are not persistent: they only live to the end of the current frame, and then all tags are destroyed.

      In such a system, when a CColliderComponent is created, it is given the TNewComponent tag. When its shape is changed, it's given the TCollisionShapeChanged tag. When it's destroyed, it is given the TDestroyedComponent tag, and then destroyed at the end of the frame. That way, the BulletPhysicsSystem can do a pass where it checks for specific tags on all components it understands, and updates its internal state accordingly, allowing components and tags to remain simple POD structures.

      I just came up with this idea a few hours ago, and it's more than likely that there are some serious flaws with it, so I'm interested in hearing how anyone here has managed to keep components as POD while still updating things where necessary. I don't think the system setting a field on a component should be responsible for knowing exactly everyone who cares about modifications to that field, so it seems that some sort of design for notifying the relevant systems would be useful.
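The tag scheme described above might be sketched like this. It is a minimal sketch, and everything except the names taken from the post (CColliderComponent, the T* tags) is hypothetical:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Components stay POD; tags are per-frame event records that interested
// systems scan before the frame ends.
using EntityId = std::uint32_t;

struct CColliderComponent {     // POD component: just collision shape data
    float halfExtents[3];
};

enum class TagType { TNewComponent, TCollisionShapeChanged, TDestroyedComponent };

struct Tag {
    TagType type;
    EntityId entity;            // whose component the tag refers to
};

struct TagBuffer {
    std::vector<Tag> tags;
    void add(TagType t, EntityId e) { tags.push_back({t, e}); }
    // Tags are not persistent: the frame loop calls this after all
    // systems have run their tag-scanning pass.
    void clear_end_of_frame() { tags.clear(); }
};

// A system pass, e.g. the physics system counting colliders it must
// (re)create in its internal Bullet state this frame.
int count_tags(const TagBuffer& buf, TagType t) {
    int n = 0;
    for (const Tag& tag : buf.tags)
        if (tag.type == t) ++n;
    return n;
}
```

Because multiple tags of the same type can coexist (two shape changes in one frame produce two TCollisionShapeChanged records), systems that only care about the latest state may want to deduplicate by entity during their pass.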
  14. In one of his QuakeCon keynotes, John Carmack said something along the lines of: "Just because you have the best chisel in the world, that doesn't make you a sculptor."

      A lot of people think "I've got all these awesome ideas for images, now if only I knew how to draw!", but there's a problem with that. If tomorrow everyone on earth were magically able to effortlessly produce on paper the images they had in their minds, the people who had trained to do that would still make better images, since they've come to understand what makes an image good, rather than just having a good idea.

      I'd summarize it like this: there's a craft to the art, but there's also an art to the craft. You simply cannot have one without the other.
