Promit

Member Since 29 Jul 2001
Offline Last Active Yesterday, 01:02 PM

#5160653 Build a render farm using DirectX and radiosity?

Posted by Promit on 15 June 2014 - 10:51 AM

Direct3D always runs in the context of a particular window. However, you don't need a monitor in order to create a window, and the window doesn't need to be visible anywhere.
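For what it's worth, here's a rough sketch of that kind of headless setup, assuming D3D11 (where the window only matters once you create a swap chain, which a render node doesn't need); it's illustrative, not production code:

```cpp
// Minimal headless D3D11 setup: no visible window, no swap chain.
// Rendering goes into an off-screen texture that you can read back.
#include <d3d11.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D11Device> device;
    ComPtr<ID3D11DeviceContext> context;
    D3D_FEATURE_LEVEL level;

    // Device creation takes no HWND at all.
    HRESULT hr = D3D11CreateDevice(
        nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
        nullptr, 0, D3D11_SDK_VERSION,
        &device, &level, &context);
    if (FAILED(hr))
        return 1;

    // An off-screen render target stands in for the back buffer.
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width = 1024;
    desc.Height = 1024;
    desc.MipLevels = 1;
    desc.ArraySize = 1;
    desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.BindFlags = D3D11_BIND_RENDER_TARGET;

    ComPtr<ID3D11Texture2D> target;
    hr = device->CreateTexture2D(&desc, nullptr, &target);
    return SUCCEEDED(hr) ? 0 : 1;
}
```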




#5159540 Contacting press

Posted by Promit on 10 June 2014 - 10:53 AM

Personally, I also appreciate the talk by Ben Kuchera (formerly of Ars Technica, now of Polygon) about how things look from the journalist's side.




#5158833 CG or GLSL ?

Posted by Promit on 06 June 2014 - 08:02 PM

Cg has been discontinued, so I wouldn't go with that one.




#5158779 Metal API .... whait what

Posted by Promit on 06 June 2014 - 02:41 PM

Also, the AZDO presentation has a problem that tends to show up in NVIDIA presentations: it's only applicable to NVIDIA hardware and relies on a lot of custom-built fast paths. Never mind that the features are unavailable, inconsistent, or just plain implode on competing vendors' drivers. (And sometimes this is explicitly stated IN the NV presentations O_O) And then they act like everyone is crazy to be building these other APIs because their custom fast path and extensions supposedly do the same thing.

 

None of which even begins to cover the total failure to support even basic advances in graphics on the ES side.




#5158569 Metal API .... whait what

Posted by Promit on 05 June 2014 - 05:24 PM

Very interesting comments, Hodgman. If only the mobile world were as simple as the desktop chips... dead serious. So much more variation in how the mobile platforms work, much more poorly documented, and the drivers... dear god the drivers.




#5158448 Why do large engines use their own string class

Posted by Promit on 05 June 2014 - 11:18 AM

It also uses a custom allocator (presumably without the psychosis of std::allocator) and supports miscellaneous things like being constructed on top of an existing char*. When I see stuff like this, I'm reminded of a comment that appears in the Fluid Studios mmgr code: "Kids, please don't try this at home. We're trained professionals here." :)
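Just to illustrate the "constructed on top of an existing char*" part, the idea boils down to a non-owning wrapper along these lines (a made-up sketch, not that engine's actual class):

```cpp
// Hypothetical non-owning string wrapper: sits on top of an existing
// char* buffer, so constructing it never allocates or copies.
#include <cstddef>
#include <cstring>

class StringRef
{
public:
    // Adopt an externally owned buffer; the caller controls its lifetime.
    StringRef(const char* data, std::size_t length)
        : m_data(data), m_length(length) {}

    explicit StringRef(const char* cstr)
        : m_data(cstr), m_length(std::strlen(cstr)) {}

    const char* data() const { return m_data; }
    std::size_t size() const { return m_length; }

private:
    const char* m_data;    // not owned
    std::size_t m_length;
};
```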




#5158272 Metal API .... whait what

Posted by Promit on 04 June 2014 - 09:31 PM

I suspect the difference in performance between Metal and ES is going to be gigantic, considerably overshadowing PC API advances. Major engines gain the functionality, which means most of the devs get it, and pretty much the whole ecosystem wins. Plus we're talking about a serious competitive advantage against Android and Windows Phone.




#5158017 glGetUniformLocation Failing on uniform

Posted by Promit on 03 June 2014 - 11:32 PM

It's worth noting that you're allowed to pass a uniform location of -1 into the glUniform* functions, so in practice you don't really have to worry about whether the uniform exists or not. Just pass the location you got in and all will be well.
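For example (a sketch; the "u_tint" uniform name and the glad loader are just placeholders for whatever you're using):

```cpp
#include <glad/glad.h>  // or whichever GL function loader you use

// Sets a color uniform if the shader has one. If "u_tint" was optimized
// out, glGetUniformLocation returns -1 and glUniform4f silently ignores it.
void setTint(GLuint program, float r, float g, float b, float a)
{
    GLint loc = glGetUniformLocation(program, "u_tint");  // may be -1
    glUseProgram(program);
    glUniform4f(loc, r, g, b, a);  // legal even when loc == -1
}
```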




#5158015 Overhead of GL_DYNAMIC_STORAGE_BIT or GL_MAP_WRITE_BIT when there's no...

Posted by Promit on 03 June 2014 - 11:25 PM

Personally I think you're making things needlessly difficult. Vectors aren't supposed to support this usage. Just block out your own memory (array/vector/whatever) and give the meshes a pointer and an offset. As far as the interleaving thing, just keep in mind that on-chip GPU bandwidth is gigantic and vertices are few. Interleaving unused attributes just means a bit of extra copy bandwidth. On the other hand, some hardware (all NV chips circa 2008, not sure what the situation is these days) can't use separated streams, so the driver has to go in and interleave the streams into spare memory before the draw is executed. I also suspect that using mismatched vertex formats between the vertex buffer and the shader will cause a few internal recompiles, though that's relatively minor.
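A rough sketch of the "one block of memory, plus a pointer/offset per mesh" idea, with made-up names:

```cpp
// One shared, interleaved vertex pool; each mesh just remembers where
// its vertices start within it. The whole pool backs a single GPU buffer.
#include <cstdint>
#include <vector>

struct Vertex            // interleaved: position, normal, uv in one stream
{
    float px, py, pz;
    float nx, ny, nz;
    float u, v;
};

struct MeshRange         // what a mesh needs in order to issue its draw
{
    std::uint32_t firstVertex;   // offset into the shared pool
    std::uint32_t vertexCount;
};

struct VertexPool
{
    std::vector<Vertex> vertices;

    MeshRange allocate(std::uint32_t count)
    {
        MeshRange range{ static_cast<std::uint32_t>(vertices.size()), count };
        vertices.resize(vertices.size() + count);
        return range;
    }
};
```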




#5157974 Metal API .... whait what

Posted by Promit on 03 June 2014 - 06:37 PM

Given the course this conversation has taken, I'm booting it out of Horrors and into Graphics.

 

As far as the Metal API design goes, there are no real surprises (apart from the C++ shading language thing). I'm a little dismayed that command buffers are transient-only objects, as I'd much prefer to build them once and just reissue. Anything to fix the moronic way GL handles pipeline state is good. Anything to get away from the moronic way GLSL handles everything is good. Everything to [...] threading is good. The list just goes on.

 

In the meantime we're seeing multiple vendors clearly draw out battle lines on APIs, to the point that being dependent on GL may well become a significant competitive disadvantage. Most people aren't drinking the MultiDrawIndirect kool-aid (and it wasn't even in the cards for ES). The interesting questions are, just how much fragmentation are we going to see before things pull back together? Does GL have legs? Or will we get some new standard that is reliable across platforms and vendors?




#5157971 Overhead of GL_DYNAMIC_STORAGE_BIT or GL_MAP_WRITE_BIT when there's no...

Posted by Promit on 03 June 2014 - 06:28 PM

As with anything in OpenGL, maybe. If the driver decides to take your flag at face value, then yes, there are implications for internal memory allocation that will create overhead. This may or may not have actual consequences in practice, especially depending on the driver involved. Keep in mind that until recently, pretty much everything had WRITE set anyway because that was the only way to get data into buffers in the first place.
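For example, with GL 4.4 immutable buffer storage the flags are exactly where you state those intentions, so it's worth only asking for what you need (a sketch, assuming a GL 4.4 context and a function loader):

```cpp
#include <glad/glad.h>  // or whichever GL function loader you use

// Static geometry uploaded once at creation: no CPU-write or map flags.
GLuint createStaticVertexBuffer(const void* data, GLsizeiptr size)
{
    GLuint buf = 0;
    glGenBuffers(1, &buf);
    glBindBuffer(GL_ARRAY_BUFFER, buf);
    glBufferStorage(GL_ARRAY_BUFFER, size, data, 0);
    // If you really do plan to push CPU updates later, say so explicitly:
    // glBufferStorage(GL_ARRAY_BUFFER, size, nullptr, GL_DYNAMIC_STORAGE_BIT);
    return buf;
}
```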

 

That said, I don't understand what problem is leading you to this question in the first place, but it sounds like you're abusing std::vector in a way you shouldn't be. It's also worth noting that drivers and hardware HATE it when you keep separate vertex streams like that. Interleaved = good. Separate = bad. I'd deal with that mistake first.




#5157547 Is working in terminal/console really a waste of time?

Posted by Promit on 02 June 2014 - 10:06 AM

I love it when the "UNIX" guys (because let's face it, you're all using Linux anyway, and Linux isn't a UNIX) pop up to explain that all they need is a text editor, not a fancy IDE. As if emacs or vi were simple text editors -- for most practical purposes, they are IDEs in their own right.




#5157435 Is working in terminal/console really a waste of time?

Posted by Promit on 01 June 2014 - 06:10 PM

I think I wrote console-based programs for all of four months before getting bored with that garbage. No regrets. However, a lot of tooling, utilities, scripts, and so on are console-based. So it's quite important to be able to work with standard streams (stdin/stdout/stderr), piped streams, console I/O, unattended processing, etc. Console programs are valuable, and so is writing them.
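The classic shape is a filter over the standard streams, e.g. (a trivial sketch):

```cpp
// Reads lines from stdin, writes them to stdout, and keeps diagnostics on
// stderr -- the pattern that makes a tool usable in pipes and scripts.
#include <cstddef>
#include <iostream>
#include <string>

int main()
{
    std::string line;
    std::size_t count = 0;
    while (std::getline(std::cin, line))
    {
        std::cout << line << '\n';   // data goes to stdout
        ++count;
    }
    std::cerr << "processed " << count << " lines\n";  // diagnostics to stderr
    return 0;
}
```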




#5157356 C++ XML Serialization

Posted by Promit on 01 June 2014 - 10:58 AM

I developed a solution a while back that works relatively well in practice, and I've been using it at work for some time now. It's targeted towards JSON via JsonCpp, but the basic ideas should be adaptable. It is not simple, leaning on some fairly arcane C++/template stuff.
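Very roughly, and with made-up names rather than the actual code, the core idea is a single per-type description that both a reading and a writing visitor can walk:

```cpp
// Hypothetical sketch of visitor-based serialization over JsonCpp.
#include <json/json.h>
#include <string>

struct JsonWriter                    // copies fields into a Json::Value
{
    Json::Value root;
    template <typename T>
    void field(const char* name, const T& value) { root[name] = value; }
};

struct JsonReader                    // pulls fields back out again
{
    const Json::Value& root;
    void field(const char* name, int& value)         { value = root[name].asInt(); }
    void field(const char* name, std::string& value) { value = root[name].asString(); }
};

struct Player
{
    std::string name;
    int score = 0;

    template <typename Visitor>      // one description drives both directions
    void serialize(Visitor& v)
    {
        v.field("name", name);
        v.field("score", score);
    }
};
```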




#5157201 Why do graphic intensive game use a lot of energy

Posted by Promit on 31 May 2014 - 02:17 PM

Allow me to add slightly more technical detail here. Each transistor on a processor consumes power while it's active -- a certain amount while open, less while closed. There's also energy consumed in switching, but even when not in use they are draining some power.

Most chips, whether CPUs or GPUs, are divided into large groups of transistors or higher-level units. These units can be powered off independently, allowing the chip to drastically cut back on its power use when mostly idle. So when you're doing light computing tasks, both the CPU and GPU shut off most of the chip and run in minimal modes. Historically, GPUs also integrated a very low power 2D mode that was used for desktop interface stuff. Since user interfaces are now mostly 3D rendered, that particular setting is no longer present.

Pretty much all modern CPU and GPU chips are running at flexible clock speeds and voltages as well. When the system reduces clock speeds, it's able to reduce supply voltages as well. This cuts back on internal current leakage and overall power consumption quite a bit.

 

When you fire up a graphics-intensive game, it forces clocks to maximum and brings all internal processing units fully active. The memory chips will begin switching at high speed, so that will pull more power. Heat increases, which will also trigger at least one internal fan. Depending on the game, the hard drive or optical drive may be brought up to speed too, which will also increase power usage. Even bringing the network interfaces to full speed will increase power usage.





