
Promit

Member Since 29 Jul 2001

#5158448 Why do large engines use their own string class

Posted by Promit on 05 June 2014 - 11:18 AM

It also uses a custom allocator (presumably without the psychosis of std allocator) and supports misc things like being constructed on top of an existing char*. When I see stuff like this, I'm reminded of a comment that appears in the Fluid Studios mmgr code: "Kids, please don't try this at home. We're trained professionals here." :)




#5158272 Metal API .... whait what

Posted by Promit on 04 June 2014 - 09:31 PM

I suspect the difference in performance between Metal and ES is going to be gigantic, considerably overshadowing PC API advances. Major engines gain the functionality, which means most of the devs get it, and pretty much the whole ecosystem wins. Plus we're talking about a serious competitive advantage against Android and Windows Phone.




#5158017 glGetUniformLocation Failing on uniform

Posted by Promit on 03 June 2014 - 11:32 PM

It's worth noting that you're allowed to pass a uniform location of -1 into the glUniform* functions, so in practice you don't really have to worry about whether the uniform exists or not. Just pass the location you got in and all will be well.
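
For example (the program and uniform names here are hypothetical; the point is that a -1 location is silently ignored rather than raising an error):

GLint tint = glGetUniformLocation(program, "u_Tint"); // may be -1 if the uniform is unused or optimized out
glUseProgram(program);
glUniform4f(tint, 1.0f, 0.5f, 0.5f, 1.0f);            // fine even when tint == -1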




#5158015 Overhead of GL_DYNAMIC_STORAGE_BIT or GL_MAP_WRITE_BIT when there's no...

Posted by Promit on 03 June 2014 - 11:25 PM

Personally I think you're making things needlessly difficult. Vectors aren't supposed to support this usage. Just block out your own memory (array/vector/whatever) and give the meshes a pointer and an offset. As for interleaving, just keep in mind that on-chip GPU bandwidth is gigantic and vertices are few; interleaving unused attributes just means a bit of extra copy bandwidth. On the other hand, some hardware (all NV chips circa 2008 -- not sure what the situation is these days) can't use separated streams, so the driver has to go in and interleave the streams into spare memory before the draw is executed. I also suspect that using mismatched vertex formats between the vertex buffer and the shader will cause a few internal recompiles, though that's relatively minor.
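
A minimal sketch of what I mean by blocking out your own memory (all the names here are made up, not code from the thread): one shared array for everything, with each mesh just remembering where its vertices start.

#include <vector>   // plus your usual GL headers for GLint/GLsizei

struct Vertex { float position[3]; float normal[3]; float uv[2]; }; // interleaved

struct MeshRange
{
    GLint   firstVertex;  // offset into the shared array, in vertices
    GLsizei vertexCount;
};

std::vector<Vertex> sharedVertices; // one allocation for all meshes

MeshRange AddMesh(const Vertex* src, GLsizei count)
{
    MeshRange range{ (GLint)sharedVertices.size(), count };
    sharedVertices.insert(sharedVertices.end(), src, src + count);
    return range;
}

// Upload sharedVertices once, then per mesh:
// glDrawArrays(GL_TRIANGLES, range.firstVertex, range.vertexCount);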




#5157974 Metal API .... whait what

Posted by Promit on 03 June 2014 - 06:37 PM

Given the course this conversation has taken, I'm booting it out of Horrors and into Graphics.

 

As far as the Metal API design goes, there are no real surprises (apart from the C++ shading language thing). I'm a little dismayed that command buffers are transient-only objects, as I'd much prefer to build them once and just reissue them. Anything to fix the moronic way GL handles pipeline state is good. Anything to get away from the moronic way GLSL handles everything is good. Everything to [...] threading is good. The list just goes on.

 

In the meantime we're seeing multiple vendors clearly draw battle lines on APIs, to the point that being dependent on GL may well become a significant competitive disadvantage. Most people aren't drinking the MultiDrawIndirect kool-aid (and it wasn't even in the cards for ES). The interesting questions are: just how much fragmentation are we going to see before things pull back together? Does GL have legs? Or will we get some new standard that is reliable across platforms and vendors?




#5157971 Overhead of GL_DYNAMIC_STORAGE_BIT or GL_MAP_WRITE_BIT when there's no...

Posted by Promit on 03 June 2014 - 06:28 PM

As with anything in OpenGL, maybe. If the driver decides to take your flag at face value, then yes, there are implications for internal memory allocation that will create overhead. This may or may not have actual consequences in practice, especially depending on the driver involved. Keep in mind that until recently, pretty much everything had WRITE set anyway because that was the only way to get data into buffers in the first place.
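
For reference, here's roughly what the two extremes look like with ARB_buffer_storage (the buffer contents and size are hypothetical):

// No flags: the contents are immutable after creation, which gives the driver the most freedom.
glBufferStorage(GL_ARRAY_BUFFER, size, initialData, 0);

// Updatable from the CPU: the driver has to keep the storage reachable for glBufferSubData and mapping.
glBufferStorage(GL_ARRAY_BUFFER, size, nullptr, GL_DYNAMIC_STORAGE_BIT | GL_MAP_WRITE_BIT);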

 

That said, I don't understand what the problem is that's leading you to this question in the first place, but it sounds like you're abusing std::vector in a way you shouldn't be. It's also worth noting that drivers and hardware HATE it when you keep separate vertex streams like that. Interleaved = good. Separate = bad. I'd deal with that mistake first.
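
By interleaved I mean something along these lines (a sketch with made-up attribute locations, not your code): all of a vertex's attributes packed together in one buffer, described with a single stride.

#include <cstddef>   // offsetof

struct Vertex { float pos[3]; float normal[3]; float uv[2]; };

glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, pos));
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, normal));
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, uv));
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);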




#5157547 Is working in terminal/console really a waste of time?

Posted by Promit on 02 June 2014 - 10:06 AM

I love it when the "UNIX" guys (because let's face it, you are all using Linux anyway, and Linux isn't a UNIX) pop up to explain that all they need is a text editor and not a fancy IDE. As if emacs or vi were simple text editors -- for most practical purposes, they are IDEs in their own right.




#5157435 Is working in terminal/console really a waste of time?

Posted by Promit on 01 June 2014 - 06:10 PM

I think I wrote console-based programs for all of four months before getting bored with that garbage. No regrets. However, a lot of tooling, utilities, scripts, etc. are console-based, so it's quite important to be able to work with standard streams (stdin/stdout/stderr), piped streams, console I/O, unattended processing, and so on. Console programs are valuable, and so is writing them.
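
The classic shape of such a program, just as a sketch: read stdin, write results to stdout, complain on stderr, so it composes with pipes and redirection.

#include <iostream>
#include <string>

int main()
{
    std::string line;
    while (std::getline(std::cin, line))
        std::cout << line << '\n';   // transform/filter each line here
    if (std::cin.bad())
    {
        std::cerr << "read error\n";
        return 1;
    }
    return 0;
}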




#5157356 C++ XML Serialization

Posted by Promit on 01 June 2014 - 10:58 AM

I developed a solution a while back that works relatively well in practice, and I've been using it at work for some time now. It's targeted towards JSON via JsonCpp, but the basic ideas should be adaptable. It is not simple, leaning on some fairly arcane C++/template stuff.
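
To be clear, the code below is not that solution -- just a rough, minimal sketch of the general shape such a scheme can take with JsonCpp: one overload per primitive type, plus generic overloads that defer to a per-type description.

#include <json/json.h>
#include <string>
#include <vector>

// Primitive overloads.
inline void Serialize(Json::Value& out, int v)                { out = v; }
inline void Serialize(Json::Value& out, float v)              { out = v; }
inline void Serialize(Json::Value& out, const std::string& v) { out = v; }

// Containers.
template <typename T>
void Serialize(Json::Value& out, const std::vector<T>& v)
{
    out = Json::Value(Json::arrayValue);
    for (const T& item : v) { Json::Value e; Serialize(e, item); out.append(e); }
}

// Anything else is expected to describe its own fields.
template <typename T>
void Serialize(Json::Value& out, const T& v) { v.Describe(out); }

struct Player
{
    std::string name;
    int score = 0;
    void Describe(Json::Value& out) const
    {
        Serialize(out["name"], name);
        Serialize(out["score"], score);
    }
};

// Usage: Json::Value root; Serialize(root, somePlayer); then write it out with your JsonCpp writer of choice.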




#5157201 Why do graphic intensive game use a lot of energy

Posted by Promit on 31 May 2014 - 02:17 PM

Allow me to add slightly more technical detail here. Each transistor on a processor consumes power while it's active -- a certain amount while open, less while closed. There's also energy consumed in switching, but even when not in use they drain some power. Most chips, whether CPUs or GPUs, are divided into large groups of transistors that form higher-level units. These units can be powered off independently, allowing the chip to drastically cut back on its power use when mostly idle. So when you're doing light computing tasks, both the CPU and GPU shut off most of the chip and run in minimal modes.

Historically, GPUs also integrated a very low-power 2D mode that was used for desktop interface work. Since user interfaces are now mostly 3D rendered, that particular mode is no longer present.

Pretty much all modern CPU and GPU chips run at flexible clock speeds and voltages as well. When the system reduces clock speeds, it can reduce supply voltages too, which cuts back on internal current leakage and overall power consumption quite a bit.

 

When you fire up a graphics-intensive game, it forces clocks to maximum and all internal processing units to fully active. The memory chips will begin switching at high speed, so that will pull more power. Heat increases, which will also trigger at least one internal fan. Depending on the game, the hard drive or optical drive may be brought up to speed too, which will also increase power usage. Even bringing the network interfaces to full speed will increase power usage.




#5157069 Scott Meyers - The Last Thing D Needs

Posted by Promit on 30 May 2014 - 07:02 PM

I am actually going to kick this out of the Lounge.




#5156284 Amateur Video Game Development - Cost Questions

Posted by Promit on 27 May 2014 - 09:26 AM

Not to offend, but have you actually attempted to do any of the ground research here? Game budgets have been published for a wide variety of commercial and indie titles, and there have been a wide array of Kickstarter campaigns for games as well. Have you even looked at the current campaigns? This isn't an idle or minor thing -- you can't go into a business venture without any clue of what's involved. Not just the monetary involvement, but the development timeline, management requirements, etc. On top of that, your lack of any "on the ground" experience of what the day-to-day in development looks like is a major liability.

 

 


Quote: "In summation: If I plan out every possible aspect of the game in text / GDD / sketches what will my costs be for a PC simulation / building / interactive settlement management game?"

Exactly the same as if you'd planned nothing at all. On-paper game designs have no value. Ultimately, the cost of this type of project is going to depend on how many people you intend to pay, for how many work hours, for how many months. I don't know, but I'll assume that you don't intend to provide any material support (computing hardware etc.) to your workers. Assuming you use existing tech or mod an existing game in order to minimize the technical development, the ultimate timeline will be based on how long it takes to get the game mechanics right. That doesn't have to take more than 4-6 months, if you don't actually want to see any of your money back. With the right programmer and artists motivated properly (a team of 3-5 in total?), I think the whole thing could be put together inside a year. Back of the envelope, that puts the cost somewhere in the $300K range assuming no major mishaps, maybe even less if you really go skeletal. Of course, you have little or no basis on which to judge whether things are actually going well schedule- and budget-wise, so you pretty much have to hope your team is doing the right thing.
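
Just to show the arithmetic behind that ballpark (nothing more than dividing the numbers above; the per-head figure is illustrative, not a salary quote):

$300K spread across a team of 4 over 12 months works out to roughly $6,250 per person per month, all-in.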

 

None of that is looking at the revenue side, of course. If you'd like to handle this in a way that will bring back more than a few thousand dollars, then the best advice is probably to just not start...




#5155339 Is this commenting macro safe to use?

Posted by Promit on 22 May 2014 - 11:32 PM

It might be safe, but it's certainly horrifying. At the very least I would use a macro that calls Foo, and change the definition of that macro based on DEBUG or not. Something like:

#ifdef DEBUG
#define CALL_FOO(a, b, c, d, e) Foo(a, b, c, d, e)
#else
#define CALL_FOO(a, b, c, d, e) Foo(a, b)
#endif
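
For illustration only -- this Foo is hypothetical, and the release expansion only compiles if Foo has defaults (or an overload) for the trailing parameters:

void Foo(int a, int b, const char* file = nullptr, int line = 0, const char* note = nullptr);

CALL_FOO(x, y, __FILE__, __LINE__, "extra debug info"); // Foo(x, y, ...) in DEBUG, Foo(x, y) otherwise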

But I would revisit why you're doing this in the first place and think about whether there's something you can do with the implementation of Foo itself, or with your overall code. There's not really enough info here to work with...




#5155265 Using the ARB_multi_draw_indirect command

Posted by Promit on 22 May 2014 - 01:53 PM

There's MultiDrawElements as well. You should go over the spec for the extension carefully:

https://www.opengl.org/registry/specs/ARB/multi_draw_indirect.txt

http://www.opengl.org/registry/specs/ARB/draw_indirect.txt




#5155252 Using the ARB_multi_draw_indirect command

Posted by Promit on 22 May 2014 - 01:01 PM

First, read the NVIDIA slides if you haven't already. multi_draw_indirect starts at slide 63.

 

Consider the signature of glDrawArrays:

void glDrawArrays(GLenum mode, GLint first, GLsizei count);

This function takes a mode and (basically) two integer parameters. Instead of submitting one such call per draw, create a buffer:

{ [first | count],
[first | count],
[first | count],
[first | count],
[first | count] }

Now you can call MultiDrawArrays, passing appropriate pointers into this buffer, and a single call will submit five draws at once. Or you can call MultiDrawArraysIndirect, with a single pointer and a stride. Here's the trick: Indirect understands a buffer binding called DRAW_INDIRECT_BUFFER, so you can upload the buffer above into GPU memory (a buffer object) and execute it from there. Why would you want to do that? On its own, you wouldn't -- but this is the cleverest part: you can use GPU compute to generate that buffer in place, without any copies. And here's another bit mentioned in the slides: there's an extension called shader_draw_parameters that adds a DrawID to the shader, telling you which draw this is (1/2/3/4/5). You can use that value to select between, say, multiple modelview matrices passed into the shader.
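
Concretely, for the indirect path each record in the DRAW_INDIRECT_BUFFER has the layout below (this comes from the draw_indirect spec; the buffer and variable names in the sketch are hypothetical):

typedef struct {
    GLuint count;          // same as glDrawArrays' count
    GLuint instanceCount;  // 1 for a plain, non-instanced draw
    GLuint first;          // same as glDrawArrays' first
    GLuint baseInstance;
} DrawArraysIndirectCommand;

DrawArraysIndirectCommand cmds[5] = { /* { count, 1, first, 0 } for each of the five draws */ };
glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
glBufferData(GL_DRAW_INDIRECT_BUFFER, sizeof(cmds), cmds, GL_STATIC_DRAW);
glMultiDrawArraysIndirect(GL_TRIANGLES, nullptr, 5, 0);   // offset 0 into the bound buffer, tightly packed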

 

The tricky part is setting up all of your input data to leverage as much of this as possible. You need to share buffers and as many shader parameters as possible, and use DrawID cleverly.





