
  1. dougbinks

    C++ as a scripting language

      Runtime Compiled C++ is also used in http://kythera.ai/ which is in Star Citizen and other titles such as Umbra https://umbragame.com/   Unsurprisingly, since I'm the author of RCC++, it's also in my game http://www.avoyd.com   Lots of edits to this, as by heck is the editor great at losing what you type if it can't parse it.
  2.   UE4's editor UI is written using their own UI framework called Slate, running on top of their rendering layer - so a port of the rendering engine brings a port of the editor UI with it, which is handy.   A common approach is to use web technology, either through a simple embedded web server in your game or by embedding WebKit via something such as the Chromium Embedded Framework (other alternatives can likely be found through a search on these forums).
  3.   Yes, it's tricky. Eric Lengyel's transvoxel algorithm is one of the best for marching cubes (http://www.terathon.com/voxels/), and Miguel Cepero's Voxel Farm seams are pretty decent http://procworld.blogspot.fr/2013/07/emancipation-from-skirt.html, though I'm not sure what technique he uses as he has dual contour voxels. I use simple skirts for now with the skirt normal adjusted to minimize transition contrast, though I've not had time to cater for certain edge cases such as the one you spotted.   The main issue with shadow maps for me is the lack of light propagation in occluded volumes, such as caves. Otherwise they're great.
  4. Funnily enough I've been thinking about the lighting system in my own work recently, and there are some similarities as I'm using an Octree as well.   You can probably do decent per-vertex lighting the Minecraft way by subdividing the large voxels. In my own game I subdivide large nodes to the smallest voxel size for nearby regions, and use higher nodes of the branch of the Octree for level of detail in the distance. Hopefully you can see the LODs working in this (debug rendering) video. This approach has the advantage of fast vertex generation without seams, using skirts between LOD regions. So if you use this approach and have slow-moving or stationary lights you should be able to do fairly decent lighting which gives you some global-illumination-like properties.   I'd disagree that deferred rendering doesn't scale well - indeed that's its purpose. Culling can be done via tiles in DX11+ (or the equivalent in OpenGL), see Andrew Lauritzen's work, or you can use the stencil buffer. I've been considering using the voxel octree itself to do CPU-side culling, but haven't looked into it yet.   I'm also considering voxel cone tracing or light propagation volumes, doing the work CPU-side using the octree and then uploading the results asynchronously to either a light map or a 3D texture cascade. I've no results or even back-of-the-napkin calculations of feasibility yet, but will try to blog the results when I do.
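The tile-based culling mentioned above can be sketched CPU-side: for each screen tile, keep only the lights whose screen-space bounding circle overlaps the tile rectangle. This is an illustrative sketch under invented names (not from any shipping renderer); a real tiled deferred renderer would also cull against each tile's min/max depth, as in Lauritzen's work.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Hypothetical screen-space light: centre and bounding radius in pixels.
struct ScreenLight { float x, y, radius; };

// Returns the indices of lights affecting the tile with top-left corner
// (tileX0, tileY0) and side tileSize. Sketch only: no depth-bounds test.
std::vector<int> CullLightsForTile(const std::vector<ScreenLight>& lights,
                                   float tileX0, float tileY0, float tileSize) {
    std::vector<int> result;
    for (int i = 0; i < static_cast<int>(lights.size()); ++i) {
        // Closest point on the tile rectangle to the light centre.
        float cx = std::clamp(lights[i].x, tileX0, tileX0 + tileSize);
        float cy = std::clamp(lights[i].y, tileY0, tileY0 + tileSize);
        float dx = lights[i].x - cx, dy = lights[i].y - cy;
        // Circle-vs-rect overlap test.
        if (dx * dx + dy * dy <= lights[i].radius * lights[i].radius)
            result.push_back(i);
    }
    return result;
}
```

The same per-tile light lists can then be consumed by the shading pass, one tile at a time.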
  5. dougbinks

    Dynamic Resolution Rendering

    Only just spotted this was put up here - although the author link has my name, it wasn't posted here by me, as the original article came from some work I did whilst at Intel (many thanks @Gaiiden for putting it up). I know the last post here was some years back, but I thought I'd answer some of the questions since I have a few minutes spare!   @Doug Rogers (a) - there's pretty much no discussion in the literature about this technique, and when I made the presentation at GDC I was unaware of any product using dynamic (versus static) resolution scaling. I hope there's still value in having an open discussion and data + sample code even if the idea isn't new. Only the art came from Project Offset, though I didn't have time (or the desire in a demo) to implement the rendering techniques needed to show off their wonderful artwork as well as possible. Our in-house sample team artist did a great job in converting things for a demo though!   @Doug Rogers (b) - The sample uses both the CPU measure of frame time and a GPU measure of rendered frame time to calculate what the current frame time is. If you use just the CPU rate then you can't accurately measure how much under the refresh time you are, and if you use the GPU rate you can miss some CPU-side pipeline issues. I also remove cases where the CPU frame time spikes, as this can be caused by non-rendering stalls. The control seems stable, but you could easily add hysteresis bands if needed. I think an improvement might be to use set bands of resolution scales with the resolution lerping between them to keep things stable and smooth.   @Doug Rogers (c) - Agreed, understanding your rendering pipeline and art limitations is important. With many modern games using deferred lighting and complex post-processing pipelines, frame times tend to scale with pixel count over a wide range.
If your frame times don't depend on resolution, dynamic resolution makes it easy to increase the resolution to the maximum (or even go to super-sampling).   @Hodgman - It probably would have been better in frame times ;) However, whilst inter-frame times need to be measured as times so as to be easily added up, the overall frame rate is sometimes easier to measure against refresh rates. Mostly I just needed folk to see that there's fairly good scaling for a reasonably realistic scene in terms of polycount.   @Lewis_1985 - the code is single threaded, and when tested on a dual-core MacBook Air system with the same generation CPU & GPU but under much lower power constraints it gives better scaling, and has the nice side effect of allowing the resolution to scale as the GPU power is lowered by the system when the CPU starts eating into its budget (which I tested by running a second high-CPU-bound process alongside, but sadly I don't have the data any more).   @InvalidPointer - I was also concerned about multi-resolution temporal data, and thought I'd have to fix the resolution in pair sets, resulting in some flickering when it changes. It turned out to just work fairly well. As for load factors, I'd love to have a decent PS time measure on a cross-platform basis, but failing that I'd agree that multiple measures are the way forwards. I'm no longer with Intel, but if I get around to trying it I'd like to measure indicators prior to rendering so as to be able to predict the resolution to set. Vertex count, particle & deferred light numbers (perhaps with a simple area measure) and move/turn rates if using motion blur could all help.   Oops. Long post on an old topic, apologies!   [FYI if you tried out the demo when this article came out, you might want to take a look at the updated code, which has a few improvements.]
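The control loop described above can be sketched as a minimal dynamic resolution controller. All names and constants here are invented for illustration (this is not the sample's actual code): it low-pass filters the measured frame time and nudges the resolution scale toward the value that would hit the target, assuming frame time scales roughly with pixel count, i.e. with the square of the scale.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Illustrative controller sketch; ResolutionController/Update are invented names.
class ResolutionController {
public:
    explicit ResolutionController(float targetFrameTimeMs)
        : m_targetMs(targetFrameTimeMs), m_scale(1.0f), m_smoothedMs(targetFrameTimeMs) {}

    // frameTimeMs: max of the CPU and GPU frame-time measures, with CPU
    // spikes from non-rendering stalls already filtered out by the caller.
    float Update(float frameTimeMs) {
        // Low-pass filter so single-frame noise doesn't drive the scale.
        const float kAlpha = 0.1f;
        m_smoothedMs += kAlpha * (frameTimeMs - m_smoothedMs);

        // If time ~ pixels ~ scale^2, the scale that hits the target
        // satisfies smoothed * (ideal/current)^2 = target.
        float idealScale = m_scale * std::sqrt(m_targetMs / m_smoothedMs);

        // Move gently toward the ideal scale and clamp to a sane range.
        m_scale += 0.25f * (idealScale - m_scale);
        m_scale = std::clamp(m_scale, 0.25f, 1.0f);
        return m_scale;
    }

private:
    float m_targetMs;
    float m_scale;
    float m_smoothedMs;
};
```

Hysteresis bands, or lerping between a fixed set of scales as suggested above, would slot in where the clamp is applied.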
  6. dougbinks

    Spherical Wave expansion effect

    I think I would be tempted to use a post-process effect for this.   Calculate the radial distance using screen position and depth (you should be able to google how to do this); then if this radius is smaller than the current 'sphere radius' you output the colour data from your colour buffer, and if it's larger you apply an edge filter to draw outlines. This won't draw polygons as such, so if you want that effect you may need to draw polygon edges, or a per-triangle colour which you then edge filter in another buffer.
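A rough CPU-side sketch of that per-pixel decision (in practice this would live in a pixel shader; the names and the simple pinhole view-space reconstruction are illustrative assumptions):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Reconstruct a view-space position from normalized device coordinates
// (ndcX, ndcY in [-1,1]) and linear view-space depth, for a simple
// perspective projection described by tanHalfFovY and aspect ratio.
Vec3 ReconstructViewPos(float ndcX, float ndcY, float linearDepth,
                        float tanHalfFovY, float aspect) {
    return { ndcX * tanHalfFovY * aspect * linearDepth,
             ndcY * tanHalfFovY * linearDepth,
             linearDepth };
}

// The per-pixel test the post process would make: inside the expanding
// sphere, pass the scene colour through; outside, use the edge filter.
bool InsideWave(const Vec3& viewPos, const Vec3& waveOriginView, float waveRadius) {
    float dx = viewPos.x - waveOriginView.x;
    float dy = viewPos.y - waveOriginView.y;
    float dz = viewPos.z - waveOriginView.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz) < waveRadius;
}
```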
  7. dougbinks

    implementing scripting

      You can use our system outside of RCC++ with only a small amount of effort - have a look at RuntimeProtector.h and the implementation functions in RuntimeObjectSystem_Platform*.cpp   [Edit] Note that this will likely interfere with any crash handler like Breakpad, but since it's primarily intended for development I'd simply turn it off if you use Breakpad in shipping code. I intend to look into a solution at some point.
  8. dougbinks

    implementing scripting

      All output is piped through a logging interface, and if you copy this to either stdio (on Linux/Mac OS X) or OutputDebugString (Windows) you get full double-click-to-go-to-error support as well. Debugging is also handled similarly, with the added benefit of the game not stopping due to error catching (though this has issues with gdb which I need to sort out). See http://runtimecompiledcplusplus.blogspot.fr/2011/09/crash-protection.html   Its initial intended use is for scripting-style additions to an engine. For this purpose it should be easier to add than something like Lua or Python. The problem at the moment is the lack of documentation, which I do hope to fix.
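The double-click-to-source behaviour relies on the output matching the format the tools expect: Visual Studio's output window recognises `file(line): message`, while most Unix toolchains emit `file:line: message`. A hypothetical formatting helper (names invented) illustrating the difference:

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// Format a diagnostic so double-clicking it in the IDE / editor output
// jumps to the source location. Platform-specific formats as above.
std::string FormatDiagnostic(const char* file, int line, const char* message) {
    char buf[512];
#ifdef _WIN32
    // Visual Studio output window format: file(line): message
    std::snprintf(buf, sizeof(buf), "%s(%d): %s\n", file, line, message);
#else
    // GCC/Clang-style format understood by most Unix editors: file:line: message
    std::snprintf(buf, sizeof(buf), "%s:%d: %s\n", file, line, message);
#endif
    return buf;
}
```

A logging callback would pass the result to OutputDebugString on Windows or fputs(stderr) elsewhere.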
  9. dougbinks

    implementing scripting

      The approach in RCC++ is somewhat different from what I'm aware other teams have used in the past: rather than simply allowing shared libraries (DLLs on Windows) to be reloaded dynamically, RCC++ detects changes in files, compiles the minimal set of source files needed into a shared library, and then links this in. This gives a quicker turnaround at the cost of some developer effort in code markup using macros.     Thanks :)
  10.   The lack of constexpr is certainly a nuisance. You can do compile-time hashing with VS though - have a look here (and in the comments): http://www.altdevblogaday.com/2011/10/27/quasi-compile-time-string-hashing/
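For compilers that do support constexpr (C++11 onwards), compile-time string hashing becomes straightforward; here's a minimal recursive FNV-1a sketch using the standard 32-bit FNV offset basis and prime:

```cpp
#include <cassert>
#include <cstdint>

// 32-bit FNV-1a, evaluatable at compile time (C++11 single-expression
// constexpr style). 2166136261 is the FNV offset basis, 16777619 the prime;
// unsigned overflow on the multiply is well-defined.
constexpr uint32_t Fnv1a(const char* str, uint32_t hash = 2166136261u) {
    return *str ? Fnv1a(str + 1, (hash ^ static_cast<uint32_t>(*str)) * 16777619u)
                : hash;
}

// Evaluated entirely at compile time, so usable as e.g. switch case labels.
static_assert(Fnv1a("") == 2166136261u, "empty string hashes to the offset basis");
static_assert(Fnv1a("Render") != Fnv1a("Audio"), "distinct ids for distinct names");
```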
  11.   An automatic local id system is probably doable, but there are gotchas. I think the example you use may not work, as the template function may get instantiated multiple times (at least once for each DLL), leading to multiple static members with different id values for the same type.
  12.   In your case typeid would likely work as well as a dynamic cast - both end up calling strcmp in order to get cross-DLL compatibility. For this reason, performance-aware applications like games generally end up writing their own type system like the one I've described, and additionally many games build without RTTI.     Note that it has to be the exact same version - the debug version is not compatible with the release version.
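A minimal sketch of the kind of hand-rolled type system referred to above (names invented; real engine versions are usually generated by macros): each interface reports a type name, and a cast walks the inheritance chain comparing names with strcmp, so the check also works across DLL boundaries where string-literal addresses differ.

```cpp
#include <cassert>
#include <cstring>

// Root interface: every type answers "are you (derived from) this name?".
struct IObject {
    virtual ~IObject() {}
    virtual const char* GetTypeName() const { return "IObject"; }
    virtual void* CastTo(const char* typeName) {
        // strcmp, not pointer comparison, for cross-DLL safety.
        return std::strcmp(typeName, "IObject") == 0 ? this : nullptr;
    }
};

struct IRenderable : public IObject {
    const char* GetTypeName() const override { return "IRenderable"; }
    void* CastTo(const char* typeName) override {
        if (std::strcmp(typeName, "IRenderable") == 0) return this;
        return IObject::CastTo(typeName);  // fall through to base types
    }
};

// dynamic_cast replacement: returns nullptr when the type doesn't match.
template <typename T>
T* TypeCast(IObject* obj, const char* typeName) {
    return obj ? static_cast<T*>(obj->CastTo(typeName)) : nullptr;
}
```

The per-class boilerplate is the cost referred to above; engines typically hide it behind a DECLARE_TYPE-style macro.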
  13.   You cannot dynamic_cast to a type which has not been fully defined, so the limitations are the same as for the GetInterface system. Note that each plugin does not need to have its own interface; plugins can implement interfaces you have defined in your engine.
  14. dougbinks

    implementing scripting

      Looks great :)
  15.   If you want some form of fast type conversion without dynamic_cast you'll need an interface type which you derive from, yes. In order for a plugin writer to be able to use code, they need the declaration of that code at a minimum - there's little use being able to cast a type to another type if they can't then use that type. So yes, you'll need to distribute interfaces for the functionality. An alternative is to write a message-passing or signal/slot type system, but this still requires that you define some way for programmers to understand how to use it.     Not quite sure I'm getting the issue here. If you want someone to be able to use an interface they need the declaration of that interface, which is usually in a header file.     It's worth your while following your current course through to completion if you've already got some way in. The issue I'm referring to is not really a premature optimization, but just a good point to start from. There's some good material to be found by searching the web for information on data-oriented design (often referred to as DOD). It's also a very good principle not to overly generalize systems until you need to - there's some good information on that in this blog and comment discussion.