
venzon

Members
  • Content count

    229
  • Joined

  • Last visited

Community Reputation

256 Neutral

About venzon

  • Rank
    Member
  1. I realized I wasn't redistributing all of the .dll files needed to make this work out of the box for someone without GTKmm installed. Version 10.10.16 fixes this, so please give it a shot. The latest version will always be available at this link: http://code.google.com/p/renderhog/downloads/list
  2. What? I've released the first public version of RenderHog (version 10.10.09.2).

What's RenderHog? RenderHog is a cross-platform GLSL shader development environment. RenderHog takes care of interacting with OpenGL 3 and lets the user load assets, set GL state, set render pass inputs and outputs, and set up variables, all through a node-tree interface. This lets the shader author concentrate on writing shaders instead of writing supporting code. The interface is designed to be familiar to users of AMD's RenderMonkey tool while improving on that tool in several areas.

Why? The last release of RenderMonkey was in 2008, and it has several shortcomings: interface performance is poor, it's Windows-only, and it lacks features found in the newest OpenGL specifications. I want to fix those issues. In addition, I wanted to teach myself (and others by example, since it's open source) the OpenGL 3 way of setting up a rendering pipeline without using any deprecated functions.

Limitations: This initial release works, but many of the nodes you can add don't have complete option sets filled in yet. For example, in the "render state" node you can enable/disable depth testing and set the depth test mode, but there's no option yet to set alpha blending modes, enable/disable alpha testing, and so on. The same goes for sampler state: you get mag filter state and min filter state, but that's about it. There isn't any geometry shader support yet. Because RenderHog is cross-platform, it only supports OpenGL; there's no DirectX support. It'd be nice to add DirectX or Cg support later, but this isn't a priority for me. On Windows, the draw area is refreshed at a low rate and the framerate counter doesn't work properly. Any other known issues are on the issue list: http://code.google.com/p/renderhog/issues/list. Feel free to add to the list if you find new problems.

Website: The website contains a description of the interface and a short tutorial: http://renderhog.googlecode.com

Download: This is a link to the Windows installer: http://code.google.com/p/renderhog/downloads/list There aren't any binary packages for Linux or OS X, but you can download the source and compile it yourself. I haven't tried compiling on OS X, but it should compile under Linux with no problems (as long as you have the prerequisite libraries).

Screenshot [Edited by - venzon on October 20, 2010 9:24:35 PM]
  3. I implemented this in OpenGL/GLSL and at first I was disappointed with the results... but after playing with it a bit I got pretty decent results. The most important thing I found was clamping Normal.xy before doing the texture lookups:

Normal.xy = clamp(Normal.xy, -float2(1.0,1.0)*0.4, float2(1.0,1.0)*0.4); // Prevent over-blurring in high-contrast areas! -venzon
Normal.xy *= vPixelViewport * 2; // Increase pixel size to get more blur
float4 Scene0 = tex2D( ColorTextureSampler, i_TexCoord.xy );
float4 Scene1 = tex2D( ColorTextureSampler, i_TexCoord.xy + Normal.xy );
float4 Scene2 = tex2D( ColorTextureSampler, i_TexCoord.xy - Normal.xy );
float4 Scene3 = tex2D( ColorTextureSampler, i_TexCoord.xy + float2(Normal.x, -Normal.y) );
float4 Scene4 = tex2D( ColorTextureSampler, i_TexCoord.xy - float2(Normal.x, -Normal.y) );

[Edited by - venzon on October 2, 2010 12:32:22 AM]
  4. Quote:Original post by shiqiu1105 But I dont have any matrix library. And it bothers me to write one. Is there any matrix library availale??? GLM is a decent matrix library, though sparse on documentation.
  5. Here's a simple approach: http://petrocket.blogspot.com/2010/01/simple-flexibile-atmosphere-shaders.html
  6. Here is the exact method that 3dsmax uses: http://area.autodesk.com/blogs/chris/how_the_3ds_max_scanline_renderer_computes_tangent_and_binormal_vectors_for_normal_mapping
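The linked page covers 3ds Max's exact algorithm; as a rough sketch of the general idea (the standard per-triangle derivation from positions and UVs, not necessarily identical to what 3ds Max does), the computation looks like this in Python, with the function name `compute_tangent` purely illustrative:

```python
def compute_tangent(p0, p1, p2, uv0, uv1, uv2):
    """Per-triangle tangent/bitangent from positions and UVs (illustrative sketch)."""
    # Edge vectors in position space...
    e1 = [p1[i] - p0[i] for i in range(3)]
    e2 = [p2[i] - p0[i] for i in range(3)]
    # ...and the corresponding deltas in UV space.
    du1, dv1 = uv1[0] - uv0[0], uv1[1] - uv0[1]
    du2, dv2 = uv2[0] - uv0[0], uv2[1] - uv0[1]
    # Solve the 2x2 system that maps the UV deltas onto the position edges.
    r = 1.0 / (du1 * dv2 - du2 * dv1)
    tangent   = [(dv2 * e1[i] - dv1 * e2[i]) * r for i in range(3)]
    bitangent = [(du1 * e2[i] - du2 * e1[i]) * r for i in range(3)]
    return tangent, bitangent
```

A renderer would typically accumulate these per-triangle results at each vertex, orthonormalize against the vertex normal, and normalize; those details (including handedness) are exactly where implementations such as 3ds Max's differ.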
  7. Oh, I understand now. Your graphics API will interpolate vertex normals for you if you're using OpenGL or DirectX, but if you're writing your own software renderer, the method you used should work, as far as I know.
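In a software renderer, "interpolate vertex normals" usually means barycentric interpolation followed by renormalization. A minimal Python sketch (the function name is hypothetical):

```python
import math

def interp_normal(n0, n1, n2, w0, w1, w2):
    """Barycentrically interpolate three vertex normals and renormalize.

    w0 + w1 + w2 is assumed to be 1 (the pixel's barycentric weights).
    """
    n = [w0 * n0[i] + w1 * n1[i] + w2 * n2[i] for i in range(3)]
    length = math.sqrt(sum(c * c for c in n))
    return [c / length for c in n]
```

The renormalization matters: a weighted average of unit vectors is generally shorter than unit length, which would darken lighting if used directly.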
  8. In Portal, Valve used a camera motion blur that samples in an elliptical path: http://www.valvesoftware.com/publications/2008/GDC2008_PostProcessingInTheOrangeBox.pdf Maybe you could do something similar, sampling around an elliptical path that corresponds to the wheel's rotation? You'd have to store information about the center of the rotation somehow....
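This is not Valve's actual code, but the sampling pattern itself is easy to sketch: generate N offsets evenly spaced around an ellipse and average the texture samples taken at those positions. A Python sketch, with `ellipse_offsets` a made-up helper name:

```python
import math

def ellipse_offsets(center, radius_u, radius_v, n):
    """N texture-space sample positions evenly spaced around an ellipse."""
    return [(center[0] + radius_u * math.cos(2 * math.pi * k / n),
             center[1] + radius_v * math.sin(2 * math.pi * k / n))
            for k in range(n)]
```

For the wheel case you'd presumably derive the center and radii from the wheel hub's screen-space position, which is the "store information about the center of the rotation" problem mentioned above.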
  9. If your .obj exporter can export per-vertex normals, you have enough information to be able to render the object with the smooth areas smooth and the flat areas flat. Your 3d package's exporter should take care of getting the vertex normals correct for triangles that belong to smoothing groups.
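When an exporter has to generate those smooth normals itself, the usual technique is to sum the face normals of all triangles sharing a vertex (within a smoothing group) and normalize. A Python sketch under that assumption, with hypothetical function names:

```python
import math

def face_normal(p0, p1, p2):
    """Unnormalized face normal (cross product of two edges); its length is
    twice the triangle's area, which gives a natural area weighting."""
    e1 = [p1[i] - p0[i] for i in range(3)]
    e2 = [p2[i] - p0[i] for i in range(3)]
    return [e1[1] * e2[2] - e1[2] * e2[1],
            e1[2] * e2[0] - e1[0] * e2[2],
            e1[0] * e2[1] - e1[1] * e2[0]]

def smooth_vertex_normals(vertices, triangles):
    """Accumulate (area-weighted) face normals at each vertex, then normalize."""
    normals = [[0.0, 0.0, 0.0] for _ in vertices]
    for a, b, c in triangles:
        fn = face_normal(vertices[a], vertices[b], vertices[c])
        for idx in (a, b, c):
            for i in range(3):
                normals[idx][i] += fn[i]
    out = []
    for n in normals:
        length = math.sqrt(sum(x * x for x in n)) or 1.0
        out.append([x / length for x in n])
    return out
```

Flat (faceted) areas are then handled by not sharing vertices across the smoothing-group boundary, so each copy of a vertex keeps its own face normal.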
  10. How often do you write shaders, and how much experience do you have with them?
At the moment I'd say an average of one shader a week. I've been writing shaders in my spare time for maybe 3 years.

What shader language do you use?
I use GLSL for hobby projects and HLSL at work.

What contexts have you written them in (your own game engine, XNA, WPF, GPGPU experiments, etc.)?
In my own game engine at home and in our in-house engine at work.

What tool(s) do you use to write shader code?
Notepad++ in Windows, Kate in Linux.

Have you tried a shader IDE like RenderMonkey or FX Composer, and what was your experience?
I love RenderMonkey and use it often. My chief complaints with RenderMonkey are: 1) it's hard for our artists to quickly preview a scene consisting of, say, five objects, each with their own diffuse and normal maps; 2) the camera controls are limited (an FPS-style walk-around camera mode would be nice); 3) the integrated shader text editor has odd quirks and bugs; 4) it's hard to set up an effect that involves using a shader to recursively process an image into smaller and smaller sizes (or vice versa); 5) it should be easier to set up a full-screen-quad pass; 6) it should be easier to share shader code amongst passes. I have used FX Composer and didn't like the interface. It was confusing (UI overload!), slow, and crashed often.

Have you ever used a visual tool to author a shader program (such as the shader exporter from 3DS Max), and what was your experience?
Nope.

What difficulties do you routinely encounter when writing shaders?
Getting everything into the right coordinate spaces (especially when our artists use multiple tools that generate assets with different coordinate spaces), and debugging/visualizing problems with the input data.

What have the technical restrictions of shaders prevented you from doing, which you wish you could do?
It's frustrating to really need a feature from DX10 or DX11 but be tied to DX9. Also, I wish I could take many more texture samples per pixel.

What's been the shader effect that was the most difficult/time-consuming for you to write? (no code necessary, but screenshots would be sweet)
I've spent forever tweaking SSAO shaders. I'm still not satisfied with my results. :-(

What shader effect are you most proud of?
I recently wrote a lighting uber-shader that is technically not that innovative, but uses an elegant collection of standard, best-practice techniques to produce some outstanding results in the hands of our artists, while fitting nicely into our art pipeline.

[Edited by - venzon on August 16, 2010 7:03:23 PM]
  11. It won't always provide correct results, but you could render depth from an ortho camera on each axis, then use that to bound your particles.
  12. Thanks for the responses, folks. Clamping can indeed produce some pretty nasty artifacts when blurring HDR images. Discarding the off-screen pixels and re-adjusting weights seems to work all right. It still has artifacts, but it's passable. Here's how I implemented it: I set the address mode to clamp-to-border with a border color of 0,0,0,0. Since the image I'm blurring has alpha 1 everywhere, I use the alpha to detect how many samples I got from within the image. In the pixel shader, I do this (pseudocode):

for (int x = 0; x < kernelsize.x; x++) {
    for (int y = 0; y < kernelsize.y; y++) {
        color += getPixel(coordinates) * kernel[x][y];
    }
}
color.rgb /= color.a;

If I replace the division by color.a at the end with division by my kernel normalization value, then I end up with the vignette effect, which is useful in some situations (I like how it looks for a bloom buffer).
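A 1D Python analogue of the approach described above may make it concrete (the helper name and the explicit border handling stand in for the clamp-to-border sampler; this is a sketch, not the actual shader):

```python
def blur_with_alpha_renorm(image, kernel):
    """1D blur where off-image samples return the border color (0, 0) and the
    accumulated alpha records how much kernel weight landed inside the image."""
    half = len(kernel) // 2
    out = []
    for i in range(len(image)):
        val = acc_a = 0.0
        for k, w in enumerate(kernel):
            j = i + k - half
            # clamp-to-border: outside the image, value and alpha are both 0
            v, a = image[j] if 0 <= j < len(image) else (0.0, 0.0)
            val += v * w
            acc_a += a * w
        # dividing by the accumulated alpha renormalizes for the lost samples
        out.append(val / acc_a)
    return out
```

With a constant image this returns exactly the constant even at the borders, whereas dividing by the full kernel normalization would darken the edges (the vignette effect mentioned above).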
  13. I'm applying a particularly large Gaussian blur kernel as a post-process effect. I'm encountering artifacts at the edges of the image, where part of the kernel reaches off the screen. I tried wrap, clamp-to-edge, and mirror for the texture addressing mode, but these all introduce their own artifacts. Some other ideas I had but haven't tried yet:
* discard off-screen samples and adjust the kernel's weights accordingly (I suspect this would cause sharpening of the image around the border)
* render the scene with a larger FOV and a higher resolution than the framebuffer (this sounds like a huge pain)
Is there a correct way to do this? How do Photoshop, etc., handle this?
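The first idea above (discard off-screen samples and adjust the weights) can be sketched in a few lines: per output pixel, keep only the kernel taps that land in-image and rescale them to sum to 1. A 1D Python sketch, with `renormalized_weights` a hypothetical helper name:

```python
def renormalized_weights(kernel, i, width):
    """Kernel weights for output pixel i of a 1D image of the given width,
    keeping only in-image taps and rescaling them to sum to 1."""
    half = len(kernel) // 2
    kept = [(k, w) for k, w in enumerate(kernel) if 0 <= i + k - half < width]
    total = sum(w for _, w in kept)
    return {k: w / total for k, w in kept}
```

In the interior all taps survive and the weights are unchanged; at the border the surviving weights are scaled up, which preserves overall brightness but can slightly sharpen the image near the edge.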
  14. This SSAO technique does a great job of eliminating self-occlusion. It uses surface normal information.
  15. Quote:Original post by DvDmanDT Turbo C/C++ are actually quite nice to work with. To be honest, I think it's easier to get things working with that than with modern libraries for C++. I don't want to sound negative, but with all the trouble you're having, maybe you should question your assumption and try a modern development environment.