

Member Since 08 Apr 2011
Offline Last Active Feb 19 2016 05:52 PM

Topics I've Started

Procedurally regenerating identical content on different machines (floating point determinism)

17 February 2016 - 10:44 PM

I've been designing the architecture for a distributed procedural content simulation engine for some time now, and recently it's become apparent that floating point determinism is going to be a thorn in my side in my efforts to enable distributed setups. So, for those of you with experience in (or reasonable knowledge of) floating point determinism issues, I'd really like to know how you'd approach this scenario:


Imagine you have two machines. The machines can be reasonably guaranteed to use modern, common hardware and operating systems (PowerPCs, IBM mainframes and other gotchas are not relevant here). In other words, both machines will either be modern commodity x64 servers on a rack, or the relatively modern gaming machines of two friends who are sharing the workload I'll describe. The operating systems can be constrained to the standard three (Windows/OSX/Linux).


Anyway, imagine we have two machines, and one of them is going to procedurally generate some content based on known inputs. Let's say the content is a simple Perlin noise-based terrain. Taking this a step further, let's say we procedurally generate a set of physical objects (for brevity, perhaps they're simple wooden cubes of varying sizes and weights). We then use physics algorithms to scatter them around the terrain and allow them to settle based on physical characteristics such as gravity and the shape of the terrain.


Here's the catch:

  1. We don't know in advance which of the two machines will do the work.
  2. The machine that generates the content will not be able to guarantee that the other machine is available at the time that the work needs to be performed.
  3. The other machine will, later on, have to generate the exact same outputs under the same conditions.
  4. We don't want to store the overall result as there will likely be too much data.
  5. It'd be nice to be able to offload some of the compute work (where relevant and reasonable) to the GPU (whatever is available on the machine in question).

Some places that I am willing to compromise:

  1. Reducing floating point precision
  2. Storing a limited subset of source data, as long as bulkier derivative data can be regenerated as needed
  3. Using large integers instead of floats
  4. Using fixed point calculations

Some additional questions:

  1. Is it possible to write integer-only versions of the popular coherent noise algorithms (Perlin, simplex, fractional Brownian motion, etc.)?
  2. Can I get away with forcing IEEE754 floating point accuracy or will that compromise speed too much?
  3. Are 64-bit integers a thing on GPUs yet, or likely to become a thing any time soon?

As an addendum to this, I'm considering the possibility that this problem doesn't really have a good solution at this point in time, and that perhaps I need to be willing to simply store a lot of data and ensure that it is available when a machine needs to generate new content.

Thoughts on the Boost.Build system? (as opposed to CMake?)

04 October 2015 - 04:53 PM

For C++ on Windows, I've been preferring to develop in Sublime Text rather than Visual Studio. It's more lightweight, there's less magic going on, and I've got it configured the way I like. And I really hate VC++'s filtered-folder project system. I just want to see my source directory and not have to screw around with the project file every time I want to add a child folder. Also, as someone from a non-C++ background, I feel I'm learning more by running my own build script (just a custom PowerShell script at the moment) and having it invoke CL.exe. CMake is all well and good, but my project is built around a large number of plugins which each have to be built separately, and... well, I guess the way CMake clutters up my folders with a million files just irks me a bit. That, and I've read a number of stories from other projects where the developers were complaining about CMake problems that made their lives harder rather than easier on Linux. I like things to be simple and clean.


Even so, I know that maintaining your own set of build scripts can get complicated and become a pain, particularly when you want to target the other major OSes as well. CMake's not the only show in town. What do you guys think about Boost's build system? I built Boost for the first time a few weeks ago and found the build process to be very clean and straightforward. Still, CMake seems to be what the majority of dev teams gravitate towards, so I'd like to get the opinions of those more experienced than myself with cross-platform C and C++ development. If you know of something else really solid, I'd be interested in that too.

Uses for unordered access views?

05 July 2015 - 01:12 AM

As I learn, I keep coming across documentation references to unordered access views. I've googled around and seen a lot of abstract talk about what they're for, i.e. you can read from and write to them in shaders, but haven't really seen much in terms of practical use cases. It looks like they're useful in compute shaders for storing the results of massively parallel operations, but I'm sure I've also seen references to using them in normal graphics pipeline operations.

What are some good uses for UAVs? What do you use them for?

Selecting texture formats

24 June 2015 - 08:26 PM

When creating a new dynamic texture, there are so many texture formats to choose from! Obviously you pick what is right for the job, as textures are used for a lot more than just texturing, and different image sources have different types of image data.

I'm assuming that data on the GPU is converted on the fly to whatever the GPU wants to use, meaning you can use different texture formats and everything still "works", but I could do with an expert opinion or two on this subject.

Is there anything I should keep in mind with regards to texture format selection and consistency across an application? I just read here that BGRA is a bit more efficient on the GPU than RGBA, but then I looked back at my existing code and I see I have some texture initialization code that uses RGBA and also that my swap chain uses BGRA. Am I potentially creating headaches for myself, or efficiency issues, if I don't try to use the same format everywhere? Is the chosen format abstracted away from me on the GPU? Anything else I should know?

Request: add [SharpDX] tag to DirectX and XNA forum

24 June 2015 - 07:40 PM

In the DirectX and XNA forum, there is a tag option for SlimDX, which is pretty out of date now, but nothing for SharpDX, which is much more up to date and much better maintained. I dare say it's probably even more common than SlimDX these days. Could we get a tag for it in the dropdown list?