
Promit

Member Since 29 Jul 2001

#5157547 Is working in terminal/console really a waste of time?

Posted by Promit on 02 June 2014 - 10:06 AM

I love it when the "UNIX" (because let's face it, you are all using Linux anyway and Linux isn't a UNIX) guys pop up to explain that all they need is a text editor and not a fancy IDE. As if emacs or vi were simple text editors -- for most practical purposes, they are IDEs in their own right.




#5157435 Is working in terminal/console really a waste of time?

Posted by Promit on 01 June 2014 - 06:10 PM

I think I wrote console-based programs for all of four months before getting bored with that garbage. No regrets. However, a lot of tooling, utilities, scripts, etc. are console based, so it's quite important to be able to work with standard streams (stdin/stdout/stderr), piped streams, console IO, unattended processing, and the like. Console programs are valuable, and so is writing them.
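
As a trivial illustration of the streams in question (just a sketch, not a tool from any real project), here's a filter that reads lines from stdin, writes results to stdout, and keeps diagnostics on stderr so they don't pollute a pipeline:

#include <cctype>
#include <cstddef>
#include <iostream>
#include <string>

int main()
{
    std::string line;
    std::size_t count = 0;
    while (std::getline(std::cin, line))
    {
        for (char& c : line)
            c = static_cast<char>(std::toupper(static_cast<unsigned char>(c)));
        std::cout << line << '\n';   // results go to stdout
        ++count;
    }
    std::cerr << "processed " << count << " lines\n";   // diagnostics stay on stderr
    return 0;
}

Run it unattended in a pipeline, e.g. generator | ./upcase > out.txt 2> upcase.log.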




#5157356 C++ XML Serialization

Posted by Promit on 01 June 2014 - 10:58 AM

I developed a solution a while back that works relatively well in practice, and I've been using it at work for some time now. It's targeted towards JSON via JsonCpp, but the basic ideas should adapt to XML. It is not simple, though; it leans on some fairly arcane C++ template machinery.
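
For a rough flavor of the approach -- this is only a minimal sketch with invented names, not the actual solution mentioned above -- one way to get "describe the members once, use them for both reading and writing" with JsonCpp:

#include <json/json.h>   // JsonCpp; some installs use <jsoncpp/json/json.h>
#include <string>

struct JsonWriter { Json::Value root; };
struct JsonReader { Json::Value root; };

// Write path: the generic template copies a member into the tree.
template <typename T>
void SerializeValue(JsonWriter& ar, const char* name, const T& value)
{
    ar.root[name] = value;
}

// Read path: small overloads pull members back out, one per supported type.
inline void SerializeValue(JsonReader& ar, const char* name, int& value)
{
    value = ar.root[name].asInt();
}

inline void SerializeValue(JsonReader& ar, const char* name, std::string& value)
{
    value = ar.root[name].asString();
}

// A user type describes its members exactly once and works in both directions.
struct PlayerState
{
    int hitPoints = 100;
    std::string name = "hero";

    template <typename Archive>
    void Serialize(Archive& ar)
    {
        SerializeValue(ar, "hitPoints", hitPoints);
        SerializeValue(ar, "name", name);
    }
};

// Usage:
//   PlayerState original;
//   JsonWriter out;
//   original.Serialize(out);   // out.root now holds {"hitPoints":100,"name":"hero"}
//   JsonReader in{ out.root };
//   PlayerState loaded;
//   loaded.Serialize(in);      // loaded now matches original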




#5157201 Why do graphic intensive game use a lot of energy

Posted by Promit on 31 May 2014 - 02:17 PM

Allow me to add slightly more technical detail here. Each transistor on a processor consumes power while it's active -- a certain amount while open, less while closed. There's also energy consumed in switching, but even when not in use they drain some power. Most chips, whether CPUs or GPUs, are divided into large groups of transistors or higher-level functional units. These units can be powered off independently, allowing the chip to drastically cut its power use when mostly idle. So when you're doing light computing tasks, both the CPU and GPU shut off most of the chip and run in minimal modes. Historically, GPUs also integrated a very low-power 2D mode that was used for desktop interface work; since user interfaces are now mostly 3D rendered, that separate mode is no longer present. Pretty much all modern CPUs and GPUs run at flexible clock speeds and voltages as well. When the system reduces clock speeds, it can reduce supply voltages too, which cuts back on internal current leakage and overall power consumption quite a bit.
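
(Rough, illustrative numbers: dynamic switching power scales roughly as P ≈ C·V²·f, so dropping the clock from 1 GHz to 500 MHz while also lowering the supply from 1.2 V to 0.9 V cuts dynamic power to about 0.5 × (0.9/1.2)² ≈ 28% of the original, before even counting the extra leakage savings from the lower voltage.)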

 

When you fire up a graphics-intensive game, it forces clocks to maximum and all internal processing units to fully active. The memory chips begin switching at high speed, so they pull more power. Heat increases, which spins up at least one internal fan. Depending on the game, the hard drive or optical drive may be brought up to speed too, which also increases power usage. Even bringing the network interfaces to full speed adds a bit more.




#5157069 Scott Meyers - The Last Thing D Needs

Posted by Promit on 30 May 2014 - 07:02 PM

I am actually going to kick this out of the Lounge.




#5156284 Amateur Video Game Development - Cost Questions

Posted by Promit on 27 May 2014 - 09:26 AM

Not to offend, but have you actually attempted any of the ground research here? Game budgets have been published for a wide variety of commercial and indie titles, and there have been a wide array of Kickstarter campaigns for games as well. Have you even looked at the current campaigns? This isn't an idle or minor thing -- you can't go into a business venture without any clue of what's involved: not just the money, but the development timeline, management requirements, and so on. On top of that, your lack of any "on the ground" experience with what day-to-day development looks like is a major liability.

 

Quote: "In summation: If I plan out every possible aspect of the game in text / GDD / sketches what will my costs be for a PC simulation / building / interactive settlement management game?"

Exactly the same as if you'd planned nothing at all. On-paper game designs have no value. Ultimately, the cost on this type of project is going to depend on how many people you intend to pay, for how many work hours, for how many months. I don't know, but I'll assume that you don't intend to provide any material support (computing hardware, etc.) to your workers. Assuming you use existing tech or mod an existing game in order to minimize the technical development, the ultimate timeline will be based on how long it takes to get the game mechanics right. That doesn't have to take more than 4-6 months, if you don't actually want to see any of your money back. With the right programmer and artists motivated properly (a team of 3-5 in total?), I think the whole thing could be put together inside a year. Back of the envelope, that puts the cost somewhere in the $300K range assuming no major mishaps, maybe even less if you really go skeletal. Of course, you have little or no basis on which to judge whether things are actually going well schedule- and budget-wise, so you pretty much have to hope your team is doing the right thing.

 

None of that is looking at the revenue side, of course. If you'd like to handle this in a way that will bring back more than a few thousand dollars, then the best advice is probably to just not start...




#5155339 Is this commenting macro safe to use?

Posted by Promit on 22 May 2014 - 11:32 PM

It might be safe, but it's certainly horrifying. At the very least I would use a macro that calls Foo, and change that macro's definition depending on whether DEBUG is defined. Something like:

#ifdef DEBUG
#define CALL_FOO(a, b, c, d, e) Foo(a, b, c, d, e)
#else
#define CALL_FOO(a, b, c, d, e) Foo(a, b)
#endif
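
For illustration (the arguments here are invented), a call site then reads the same in both builds:

CALL_FOO(device, buffer, __FILE__, __LINE__, "upload");   // release builds expand this to Foo(device, buffer)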

But I would revisit why you're doing this in the first place, and think about whether there's something you could do in the implementation of Foo itself, or in your overall code. There's not really enough info here to work with...




#5155265 Using the ARB_multi_draw_indirect command

Posted by Promit on 22 May 2014 - 01:53 PM

There's MultiDrawElements as well. You should go over the spec for the extension carefully:

https://www.opengl.org/registry/specs/ARB/multi_draw_indirect.txt

http://www.opengl.org/registry/specs/ARB/draw_indirect.txt
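
For the non-indirect flavor, here's a minimal sketch (counts and offsets are invented; it assumes an element array buffer of GLuint indices is already bound, plus a loader such as glad or GLEW):

#include <glad/glad.h>   // or whatever loader provides the entry points

// Issues three sub-ranges of the bound GL_ELEMENT_ARRAY_BUFFER with one call.
void SubmitThreeRanges()
{
    const GLsizei counts[3] = { 36, 36, 36 };
    const GLvoid* offsets[3] = {
        (const GLvoid*)(0 * sizeof(GLuint)),
        (const GLvoid*)(36 * sizeof(GLuint)),
        (const GLvoid*)(72 * sizeof(GLuint)),
    };
    glMultiDrawElements(GL_TRIANGLES, counts, GL_UNSIGNED_INT, offsets, 3);
}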




#5155252 Using the ARB_multi_draw_indirect command

Posted by Promit on 22 May 2014 - 01:01 PM

First, read the NVIDIA slides if you haven't already. multi_draw_indirect starts at slide 63.

 

Consider the signature of glDrawArrays:

void glDrawArrays(GLenum mode, GLint first, GLsizei count);

This function takes a mode and (essentially) two integer parameters. Instead of issuing one such call per draw, create a buffer:

{ [first | count],
[first | count],
[first | count],
[first | count],
[first | count] }

Now you can call MultiDrawArrays, passing appropriate pointers into this buffer, and a single call will submit five draws at once. Or you can call MultiDrawArraysIndirect, with a single pointer and a stride. Here's the trick: the Indirect variant understands a buffer binding called DRAW_INDIRECT_BUFFER, so you can upload that buffer into GPU memory (a buffer object) and execute the draws from there. Why would you bother doing that by hand? On its own, you wouldn't. But this is the cleverest part: you can use GPU compute to generate the buffer without any copies. And here's another bit mentioned by the slides: an extension called shader_draw_parameters adds a DrawID to the shader, telling you which of the five draws is currently executing. You can use that value to select between, say, multiple modelview matrices passed into the shader.
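
To make that concrete, a minimal sketch (buffer contents and draw parameters are invented; it assumes a GL 4.3 / ARB_multi_draw_indirect context and a loader such as glad or GLEW):

#include <glad/glad.h>   // or whatever loader provides the GL 4.3 entry points

// Matches the command layout defined in the draw_indirect spec.
struct DrawArraysIndirectCommand
{
    GLuint count;
    GLuint instanceCount;   // called "primCount" in the spec text
    GLuint first;
    GLuint baseInstance;
};

// Builds the five-command buffer and submits all five draws with one call.
void SubmitFiveDraws()
{
    DrawArraysIndirectCommand cmds[5];
    for (GLuint i = 0; i < 5; ++i)
        cmds[i] = { 36, 1, i * 36, 0 };   // count, instances, first, baseInstance

    GLuint indirectBuffer;
    glGenBuffers(1, &indirectBuffer);
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
    glBufferData(GL_DRAW_INDIRECT_BUFFER, sizeof(cmds), cmds, GL_DYNAMIC_DRAW);

    // With a DRAW_INDIRECT_BUFFER bound, the second argument is a byte offset
    // into that buffer rather than a client pointer. A compute shader writing
    // into indirectBuffer could replace the CPU-side fill above.
    glMultiDrawArraysIndirect(GL_TRIANGLES, nullptr, 5,
                              sizeof(DrawArraysIndirectCommand));
}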

 

The tricky part is setting up all of your input data to leverage as much of this as possible. You need to share buffers and as many shader parameters as possible, and use DrawID cleverly.




#5155106 deferred shading question

Posted by Promit on 21 May 2014 - 12:23 PM

You may find this presentation useful: The Rendering Technology of Killzone 2




#5154949 why the alphablend is a better choice than alphatest to implement transparent...

Posted by Promit on 20 May 2014 - 07:27 PM

The fixed-function alpha test hasn't existed in OpenGL ES since 2.0 anyway; you're required to implement it manually using the discard operation in the fragment shader. Discard essentially forces that draw call onto the slow path, just like alpha blend, and in those cases no hidden surface removal is available. You may want to review the Smedberg recommendations about how to handle the various cases.
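
For anyone who hasn't written the manual version, here's a minimal sketch of alpha test via discard in an ES 2.0-style fragment shader (the varying, uniform, and cutoff names are invented), held in a C++ string literal as you'd feed it to glShaderSource:

const char* alphaTestFragmentShader = R"(
    precision mediump float;
    varying vec2 vTexCoord;
    uniform sampler2D uDiffuse;
    uniform float uAlphaCutoff;   // e.g. 0.5

    void main()
    {
        vec4 color = texture2D(uDiffuse, vTexCoord);
        if (color.a < uAlphaCutoff)
            discard;              // replaces the old fixed-function alpha test
        gl_FragColor = color;
    }
)";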




#5154948 What OpenGL book do you recommend for experts?

Posted by Promit on 20 May 2014 - 07:23 PM

I found Insights supremely useful -- you probably already know how closely Riccio follows the state of the industry with regard to OpenGL. Of course, the big problem is that any book with too many concrete recommendations is likely doomed, given how quickly drivers, implementations, and specs shift in current-day GL. There's too much room for information to go stale.




#5154658 Prevent Losing Entire Project To Malware

Posted by Promit on 19 May 2014 - 11:28 AM


Quote: "Run an up-to-date OS (Windows XP, Vista, 7 and 8.0 do not count)."
I'm sorry, but 7 absolutely DOES count (assuming it's fully service-packed). As long as it's not an end-of-life product, Microsoft continues to issue security patches for it. There's nothing about 8.1 that improves security over 7 when both systems are properly maintained.

 

First of all: any file you don't want to lose should be able to survive the total physical destruction of any given computer you own. Ideally all of them, and your house. Personally I like using a combination of externally hosted cloud backup services, internal backup, and good old external source control. Second: you need to figure out how and why you're getting virused, because that's a problem in itself.




#5153838 Cases for multithreading OpenGL code?

Posted by Promit on 15 May 2014 - 03:20 PM

The long and short of it is that multiple contexts and context switching are so horrifically broken on the driver side, across all vendors for all platforms, that there is nothing to gain and everything to lose in going down this road. If you want to go down the multithreaded GL road in any productive way whatsoever, it'll be through persistent mapped buffers and indirect draws off queued up buffers. More info here: http://www.slideshare.net/CassEveritt/beyond-porting
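
To give a flavor of what that looks like, here's a minimal sketch of the persistent-mapping side (all names invented; it assumes GL 4.4 / ARB_buffer_storage and a loader such as glad or GLEW):

#include <glad/glad.h>   // or whatever loader provides the GL 4.4 entry points
#include <cstring>

struct PersistentBuffer
{
    GLuint     buffer = 0;
    void*      cpuPtr = nullptr;
    GLsizeiptr size   = 0;
};

// Created once on the GL thread; the mapping stays valid for the buffer's lifetime.
PersistentBuffer CreatePersistentBuffer(GLsizeiptr size)
{
    PersistentBuffer pb;
    pb.size = size;

    const GLbitfield flags = GL_MAP_WRITE_BIT |
                             GL_MAP_PERSISTENT_BIT |
                             GL_MAP_COHERENT_BIT;

    glGenBuffers(1, &pb.buffer);
    glBindBuffer(GL_ARRAY_BUFFER, pb.buffer);
    glBufferStorage(GL_ARRAY_BUFFER, size, nullptr, flags);   // immutable storage
    pb.cpuPtr = glMapBufferRange(GL_ARRAY_BUFFER, 0, size, flags);
    return pb;
}

// Callable from any worker thread: no GL calls, just a memcpy into the mapping.
// Only the thread that owns the context ever issues the draws.
void WriteVertices(PersistentBuffer& pb, GLsizeiptr offset,
                   const void* data, GLsizeiptr bytes)
{
    std::memcpy(static_cast<char*>(pb.cpuPtr) + offset, data, bytes);
}

In a real renderer you'd partition the buffer into per-frame regions and guard each region with glFenceSync/glClientWaitSync before reusing it; GL_MAP_COHERENT_BIT makes the CPU writes visible, but it doesn't stop you from stomping on data the GPU is still reading.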




#5150461 Go for video game programming?

Posted by Promit on 29 April 2014 - 07:27 PM

The problem with GC in games isn't really performance, particularly for the indie crowd who are using those types of languages. At least, not directly. GC is plenty fast. What's lacking is control. The GC based languages are horrified that the application might want to exercise any kind of control or hinting about what to GC and when. I don't want random allocations to block and trigger GC. I want to be able to dispatch a bunch of GPU calls, then tell the runtime "hey, you've got 3ms to do as much incremental GC as you can". I want to be able to control the balance of GC time and convergence, and force full GC when required. I'd love to be able to tag objects explicitly with lifetime hints.

 

And frankly, I need to be able to rely on a consistent implementation underneath with known characteristics. I get why the language designers don't want this, but it's a huge practical problem for games. We NEED to be able to offer real-time guarantees.





