
BrentMorris

Members
  • Content count

    88
  • Joined

  • Last visited

Community Reputation

1223 Excellent

About BrentMorris

  • Rank
    Member
  1. The differences are all the same to me, except that .NET Core can run on Linux! In a cloud environment, Linux installations are also leaner and cheaper than Windows ones.
  2. If you only care about swaying, and not wind patterns in a global sense, you could probably determine the sway using the value noise that Shadertoy is fond of: https://www.shadertoy.com/view/lsf3WH. You can use it to generate a cheap approximation of Perlin noise on the GPU. It's bound to look better than a trig wave, and I believe (don't quote me on this) that it's cheaper than sampling from a gradient texture too. You could probably layer a couple of samples together to get erratic results that look like gusts. You can do all of the classic noise-combination tricks: sampling at different scales, translating the sample positions at different speeds, multiplying normalized samples together, etc. There's going to be some magical franken-combination that looks good, I think.

     Edit: I messed around with it a little out of curiosity. I think this could look pretty good. Paste this into Shadertoy:

     // Created by inigo quilez - iq/2013
     // License Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.
     //
     // Value Noise (http://en.wikipedia.org/wiki/Value_noise), not to be confused with Perlin
     // Noise, is probably the simplest way to generate noise (a random smooth signal with
     // mostly all its energy in the low frequencies) suitable for procedural texturing/shading,
     // modeling and animation.
     //
     // It produces lower quality noise than Gradient Noise (https://www.shadertoy.com/view/XdXGW8)
     // but it is slightly faster to compute. When used in a fractal construction, the blockiness
     // of Value Noise gets quickly hidden, making it a very popular alternative to Gradient Noise.
     //
     // The principle is to create a virtual grid/lattice over the plane and assign one random
     // value to every vertex in the grid. When querying a noise value at an arbitrary point in
     // the plane, the grid cell containing the query point is determined, the random values of
     // its four corner vertices are fetched, and they are bilinearly interpolated with a smooth
     // interpolant.
     //
     // Value Noise 2D:    https://www.shadertoy.com/view/lsf3WH
     // Value Noise 3D:    https://www.shadertoy.com/view/4sfGzS
     // Gradient Noise 2D: https://www.shadertoy.com/view/XdXGW8
     // Gradient Noise 3D: https://www.shadertoy.com/view/Xsl3Dl
     // Simplex Noise 2D:  https://www.shadertoy.com/view/Msf3WH

     float hash(vec2 p)  // replace this by something better
     {
         p = 50.0*fract( p*0.3183099 + vec2(0.71,0.113) );
         return -1.0+2.0*fract( p.x*p.y*(p.x+p.y) );
     }

     float noise( in vec2 p )
     {
         vec2 i = floor( p );
         vec2 f = fract( p );
         vec2 u = f*f*(3.0-2.0*f);
         return mix( mix( hash( i + vec2(0.0,0.0) ), hash( i + vec2(1.0,0.0) ), u.x),
                     mix( hash( i + vec2(0.0,1.0) ), hash( i + vec2(1.0,1.0) ), u.x), u.y);
     }

     float fbm( in vec2 p )
     {
         mat2 m = mat2( 1.6, 1.2, -1.2, 1.6 );
         float f = 0.0;
         f += 0.5000*noise( p ); p = m*p;
         f += 0.2500*noise( p ); p = m*p;
         f += 0.1250*noise( p ); p = m*p;
         f += 0.0625*noise( p );
         return f;
     }

     float norm( in float f )
     {
         return 0.5 + 0.5 * f;
     }

     void mainImage( out vec4 fragColor, in vec2 fragCoord )
     {
         vec2 p = fragCoord.xy / iResolution.xy;
         vec2 uv = p * vec2(iResolution.x/iResolution.y, 1.0);
         float time = iGlobalTime;
         uv *= 8.0;
         vec2 uv1 = uv * 0.5 + vec2(time * 0.3, time * 0.7) * 0.15;
         vec2 uv2 = uv + vec2(time, -time) * 0.23;
         float f1 = norm(fbm(uv1));
         float f2 = norm(fbm(uv2));
         float f = f1 * f2;
         fragColor = vec4( f, f, f, 1.0 );
     }

     You would just evaluate something like that at each grass position.
  3. 1) Detect edge vertices when generating a mesh.
     2) Detect which edge vertices are 'odd'; these are the ones that mismatch with the next LOD and create holes.
     3) Get the neighboring even-vertex edge positions.
     4) Set each odd-vertex position to the average of its two neighboring even positions.

     This should close your cracks. The coarser LOD's edge follows a straight line between the two even positions; the more detailed mesh just has another sample point at the half-way mark.

     Your next problem is knowing when to close the cracks on a side. The most straightforward approach is probably to regenerate a mesh whenever a neighboring mesh changes LOD. There is another approach where you store two positions per vertex (the main position and an edge-transition position) plus a bitflag mask for which edge the vertex is on (or zero if it isn't on an edge). When you render the mesh, you pass in a bitflag for which edges of the mesh border coarser LODs. In the shader you check the edge mask of the vertex against that neighboring-LOD bitmask, and if the vertex is on an edge bordering a coarser LOD, you output the transition position instead. The extra vertex data isn't that bad in the grand scheme of things, and this lets you reuse the same pre-generated mesh without recalculating it every time a neighboring LOD changes. All you have to do is pass in the neighboring-LOD bitmask.
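The averaging step and the per-vertex edge-mask selection could be sketched like this (all the names here, `Vertex`, `EDGE_NORTH`, `resolvePosition`, are invented for illustration; in practice the select would live in your vertex shader):

```c
#include <assert.h>

typedef struct { float x, y, z; } Vec3;

typedef struct {
    Vec3 position;     /* normal position inside this LOD */
    Vec3 transition;   /* midpoint of the two neighboring even edge vertices */
    unsigned edgeMask; /* which mesh edge this vertex lies on; 0 if interior */
} Vertex;

enum { EDGE_NORTH = 1, EDGE_EAST = 2, EDGE_SOUTH = 4, EDGE_WEST = 8 };

/* Step 4: snap an odd edge vertex onto the line between its even neighbors. */
static Vec3 average(Vec3 a, Vec3 b)
{
    Vec3 r = { (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f, (a.z + b.z) * 0.5f };
    return r;
}

/* What the vertex shader would do: if this vertex sits on an edge that
 * borders a coarser neighbor, output the precomputed transition position. */
static Vec3 resolvePosition(const Vertex *v, unsigned coarserNeighborMask)
{
    if (v->edgeMask & coarserNeighborMask)
        return v->transition;
    return v->position;
}
```

The mesh data never changes; only the per-draw `coarserNeighborMask` uniform does, which is the whole appeal of the bitmask approach.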
  4. Use [Alt + Print Screen] to take a screenshot of the active window instead of everything. Especially useful if you ever need to take a specific screenshot and you have 2+ monitors. Edit to fit the topic: I was caught cropping a screenshot of a specific window out of a 3 monitor Print-Screen when I was told about it.
  5. My guess is that you are doing one-bit alpha on the grass, and that the grass texture is mipmapped. The mipmap downsampling is producing results that make your 1-bit alpha disappear in the higher mip levels. In the past I have fixed this with a distance-field alpha instead of a 1-bit interpreted alpha channel. As long as you are using a 1-bit alpha you can adjust your threshold to make the farther mips 'thinner' or 'thicker'; both versions tend to look bad. A thinner threshold will make grass disappear like you are seeing. Here's the paper: http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf. You use that technique, but for grass instead of text. A distance field is much more tolerant of mip downsampling.
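The core of the distance-field idea is just a threshold test, optionally softened for magnification. A minimal sketch (function names are mine, not from the paper; the texture is assumed to store a signed distance remapped to [0, 1] with 0.5 on the grass edge):

```c
#include <assert.h>
#include <math.h>

/* Hard alpha test: the texel is 'inside' if the sampled distance is past
 * the threshold. Unlike a 1-bit mask, the distance survives mip
 * downsampling gracefully because it varies smoothly across the texture. */
static int alphaTest(float sampledDistance, float threshold)
{
    return sampledDistance >= threshold;
}

/* Softened edge for magnification: smoothstep across a narrow band
 * centered on the edge value, instead of a hard cut. */
static float smoothAlpha(float sampledDistance, float edge, float width)
{
    float t = (sampledDistance - (edge - width)) / (2.0f * width);
    if (t < 0.0f) t = 0.0f;
    if (t > 1.0f) t = 1.0f;
    return t * t * (3.0f - 2.0f * t);
}
```

Raising or lowering `threshold` per mip level is the 'thinner/thicker' adjustment mentioned above.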
  6. Somebody correct me if I'm wrong, but if every thread running a shader takes the same branch, it is nearly (though not entirely) the same cost as if the changes were compiled in with defines. If you are rendering an object with a specific set of branches that every thread will take, it may not be a big deal. If threads take different branches, you will eat the cost of all branches.
  7. Those Y's look correct mirrored vertically (over the X-axis) to me. What rendering API are you using? Your data is correct, but you might be interpreting the rendering side of it incorrectly. Most "draw circle/line" APIs work with positive Y values moving downwards instead of upwards: the top-left of the screen is (0, 0) and the bottom-left corner is (0, height). Try rendering those points at (x, imageHeight - y) and see if they match up to the circles.
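The flip suggested above is a one-liner; for clarity, here is exactly the transform meant by "(x, imageHeight - y)":

```c
#include <assert.h>

/* Mirror a y-up coordinate into a y-down (top-left origin) drawing API.
 * x passes through unchanged; only y is flipped about the image height. */
static int flipY(int y, int imageHeight)
{
    return imageHeight - y;
}
```

If your API addresses discrete pixels rather than continuous coordinates, you may want `imageHeight - 1 - y` instead so the last row maps to row 0.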
  8. I understand why the ternary operator branches on the GPU if there is any in-line computation, like:

     // Branching on the (value+1)!
     float value = 3;
     value = (value < 4) ? (value) : (value+1);

     but if the ternary operator is literally switching between two variables, should it necessarily branch?

     // Why should this branch?
     float value = 3;
     float valuePlus = value + 1;
     value = (value < 4) ? value : valuePlus;

     // It should be equivalent to:
     float value = 3;
     float valuePlus = value + 1;
     float values[2];
     values[0] = valuePlus; // chosen when (value < 4) is false, i.e. index 0
     values[1] = value;     // chosen when (value < 4) is true,  i.e. index 1
     int valueIndex = (int)(value < 4);
     value = values[valueIndex];
     // And no branching!

     At least in my head it should just be an address offset between two values. I know this is a nitpicky branching concern, but it's an interesting mini-optimization depending on the usage. Does anyone know if GPU drivers detect and make this sort of optimization, or is this a non-concern?
  9. Can you kick that river texture up to an 8-bit texture? You could do a proper Gaussian blur on the texture and get some falloff. It will still be blocky, but at least it won't have a 1-bit abrupt dropoff.
  10. Really? I would at least expect soft blending over the span of a single texel from your river texture with any kind of *linear blending. The size of a river texel from your screenshot looks pretty huge, so it should make some kind of a difference! The blend range should be on par with the blending you see between the river/land outside of the helicopter. How are you blending the river texture with the other grass texture in your shader?   If this really can't be solved with a texture sampler, you might have to stream in high resolution textures for smaller areas. Depending on your needs you could go with a more sophisticated geomipmapping solution if you really need some draw distance. If not that maybe you could do some kind of percentage closer filtering, where you take a number of river texture samples around a sample position and get the average of them. It could look noisy depending on how you do it, though.
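The percentage-closer-style averaging mentioned above amounts to taking several taps of the 1-bit river texture around the sample position and averaging the hits into a soft coverage value. A minimal sketch (the 'texture' is a plain array; clamping at the border is just one possible address mode, and the names are illustrative):

```c
#include <assert.h>
#include <math.h>

static int clampi(int v, int lo, int hi)
{
    return v < lo ? lo : (v > hi ? hi : v);
}

/* width x height 1-bit texture; kernel is (2*radius+1)^2 taps.
 * Returns the fraction of taps that landed on a river texel. */
static float pcfCoverage(const unsigned char *tex, int width, int height,
                         int x, int y, int radius)
{
    int dx, dy, taps = 0, hits = 0;
    for (dy = -radius; dy <= radius; dy++) {
        for (dx = -radius; dx <= radius; dx++) {
            int sx = clampi(x + dx, 0, width - 1);
            int sy = clampi(y + dy, 0, height - 1);
            hits += tex[sy * width + sx] ? 1 : 0;
            taps++;
        }
    }
    return (float)hits / (float)taps;
}
```

A fixed grid of taps like this can look banded; jittering the tap positions per pixel trades that banding for noise.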
  11. It looks like that river texture is being sampled with a point texture sampler. The nearest river pixel value is returned, which gives it the blocky look. You need to use a texture sampler with filtering that returns a blend of nearby river pixel values. Try using a bilinear texture sampler. If that doesn't look good enough generate mipmaps (as mhagain mentioned) and use a trilinear/anisotropic texture sampler to make it so the texture values are blended together smoothly.
  12. OpenGL

    http://vulkan-tutorial.com/assets/Khronos-Vulkan-GDC-Mar15.pdf   The events thing sounds pretty awesome. Command buffers can fire off completion events! You can query an event to see if it has completed yet, or you can wait on an event to finish if you need to. We can finally do GPU->CPU readback without stalling the driver until it catches up! That's a whole lot more reassuring to me than "it will probably be done in ~3 frames!"
  13. I consider an intermediate level to be when you are comfortable enough with a language that you fight with concepts instead of the language that you are working with. When your thought process becomes "How does this work, and what's the best way to make it happen?" instead of "How do I do this, and why is it not compiling?". You are 'intermediate' when you can comfortably express your thoughts in code, and you are more concerned about the concepts behind making a piece of code work the way you want it to. You will know the right questions to ask to accomplish roughly anything that you want to do.   Beyond that I consider an advanced level to be when you have specific knowledge about different types of programming. These can include graphics, networking, front/back-end web development, etc. You start to learn the best practices, interesting ways of doing things, and the specifics of a field. It's best not to put a label on how "advanced" you are, because at this point it really just depends on what you know.
  14. I don't completely follow you! Are you saying that the wave geometry is showing up on the reflection, and you don't want it to be there? Or that the reflection from the surface isn't reflecting how you think it should be? A screenshot would be helpful to illustrate the issue.
  15. I don't think this has been done, and there's probably a good reason, but does it seem feasible to use a cloud service to augment your own servers? If your net code was built to mirror a particular cloud service, you could build an abstracted "spin up an instance" function that prefers your own servers over the cloud service's servers when they aren't busy. This might not work very well if the game in question requires a lot of interaction between your own instances and cloud instances, though. I think it could work well as a means of keeping your game available to everyone who wants it while you expand your own server capacity.
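The "spin up an instance" abstraction described above could be as simple as a pool scan with a cloud fallback. A minimal sketch (everything here, `OwnServer`, `spinUpInstance`, is invented for illustration; a NULL return means "rent a cloud instance instead"):

```c
#include <assert.h>
#include <stddef.h>

typedef struct {
    const char *name;
    int busy;
} OwnServer;

/* Prefer an idle server from our own pool; claim and return it.
 * Returns NULL when every owned server is busy, signalling the caller
 * to fall back to the cloud provider. */
static OwnServer *spinUpInstance(OwnServer *pool, int count)
{
    int i;
    for (i = 0; i < count; i++) {
        if (!pool[i].busy) {
            pool[i].busy = 1;  /* claim it */
            return &pool[i];
        }
    }
    return NULL;
}
```

The key design point is that callers only ever see "an instance", so shifting load between owned hardware and the cloud never touches game code.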