
Member Since 29 Jul 2001

#5171780 OpenGL ES on desktops

Posted by Promit on 05 August 2014 - 09:19 PM

We run the same render path for desktop and mobile GL. I have a "proxy" header that defines functions like gl::Clear and gl::VertexAttribPointer instead. The proxy renames a few functions as needed, and a few things are switched on compile-time or runtime flags (not many). I don't try to fully emulate correct GL ES behavior on desktop; I just want my ES code to run. This proxy does that and works across Windows, Linux, Mac, and iOS.


As for shaders, I found that trying to share them across desktop and mobile was a catastrophe. I now use hlsl2glsl (linked above) with some custom patches for full Mac and ES support. I should remember to assemble those changes into a pull request some day.

#5168425 Textures tear at distance

Posted by Promit on 22 July 2014 - 11:42 AM

I just realized playing with znear seems to help a lot. I've moved it to .5 from .01 and the z-fighting seems to have almost vanished! =D
Correct. Overall Z precision is a function of (zfar - znear) / znear. Because znear sits in the denominator, it has a huge impact on your available depth precision; the larger you can make it, the better.

#5168068 Prevent Paging

Posted by Promit on 20 July 2014 - 11:04 PM

Run a 64-bit process, memory-map everything in your game using the default settings (i.e. don't lock pages), and then let the OS do its job. Interfering with the underlying memory management will only make things worse. Memory mapping also frees you from worrying about what's allocated in the first place.


If things start paging, buy a bigger server.


And against my better judgement, I am going to point out the Windows function VirtualLock.

#5168047 Get a IDirect3DVertexShader9 from a ID3DXEffect interface

Posted by Promit on 20 July 2014 - 07:18 PM

Call GetVertexShader? Presumably you need to provide a handle to a pass (GetPass/GetPassByName), though the documentation doesn't really bother to say.

#5168036 Patenting an Algorithm?

Posted by Promit on 20 July 2014 - 05:28 PM

In other domains, things are different. Take, for example, video coding standards like H.264. These are not self-contained. You can't release a new standard, or a small update to the standard, every couple of months and declare all Blu-ray players sold up to that date deprecated. When you create the standard it must be top notch, state of the art. Also, you can't just show a couple of PowerPoint slides conveying the general idea, because you actually want every implementation of that standard ever built to behave exactly the same. You have to provide a reference implementation, which shows exactly every single operation.

If only that were consistent with reality.


Most industry-developed standards, including H.264, are developed in committee by a wide group of participating members. These members typically hold patents on some aspect of the technology, and they agree to make that technology available to the standardization group. Why? Because everyone with an IP stake in that particular standard agrees that everyone else with a stake can use it, putting all of the members on an even footing without anyone having to pay large license fees. This part makes sense.


BUT: let's say you have a patent whose invention doesn't end up in the published standard. At that point you no longer have a stake contributed to the working group, which means you're no longer part of the royalty deal. You have to buy a license! Which is expensive. Oops. So what happens when a group like MPEG-LA gets together is not only about choosing the most competent technology; it is also about many parties vying to get as much of their IP into the standard as possible, regardless of its technical merits. Somewhere at the intersection of that technical and financial back-and-forth is where the final standard is set.


Vorbis, Dirac, Theora, etc. are developed through collaboration between open source volunteers and groups who were never party to these big standards discussions. Their argument is that by focusing strictly on technical excellence, rather than engaging in proxy patent battles, they can produce a superior final product. I don't know to what extent that holds up in reality, but Vorbis has certainly held its own technically against AAC, never mind the aging and relatively poor MP3 standard. Theora doesn't seem to fare as well. We also have the unusual case of WebP/WebM, where an independent proprietary technology was opened up after the fact. Layered on top is the reality that none of the open codecs have any traction in hardware decoders, which has become more and more of a problem over time.


MPEG-LA has also wielded its total contributed patent chest as a weapon against competitors, notably Microsoft's VC-1 in the HD-DVD era.


At the end of the day, patents create significant distortions in how things are created, shared, and licensed. Whether those distortions are positive or negative depends on what you're talking about and from what perspective you're looking at it. There's a real sense in the software industry that patents as a whole have created more problems than they've solved.

#5167632 How to pack your assests into one binary file (custom file format etc)

Posted by Promit on 18 July 2014 - 10:35 AM

All you need is a file with a header section, an index section, and all the actual file data following. The index can be as simple as a list of pairs: file name and offset. To build the file, take all your input files and write them into a buffer while making index entries that record each name and offset. Then build your header and index, and dump the whole thing to disk.


(You can improve this process by writing placeholder header and index blocks to a file, then write the files directly to the target without an intermediary buffer. Then seek back to the beginning of the file and write the correct header and index.)


Reading is simple: load the index, use it to find the data. Maybe even memory map the whole thing.


You can layer compression and more complex file arrangements on top of this scheme, but it's not really necessary. This is effectively how zip and tar files are laid out. Personally I just use a library like PhysicsFS instead of developing my own formats.

#5167516 Patenting an Algorithm?

Posted by Promit on 17 July 2014 - 08:49 PM

I have a close friend who spent some time in patent law. Regardless of the issues you're asking about, there's something very important to understand: patents have no value if you can't defend/prosecute them. Let's say you have a patent on your algorithm and somebody is using it without your permission. What are you going to do about it? Nothing, that's what. Because you don't have the quarter million dollars to start the case.

#5166579 obj file format

Posted by Promit on 13 July 2014 - 08:50 AM

Personally I'd re-export them as triangles in a modeling tool, if possible.

#5166314 is this much importatnt to have a strong fixed gdd?

Posted by Promit on 11 July 2014 - 04:44 PM

I am of the opinion that the best place to put a GDD is a paper shredder. Prototype ideas and see if they work. Writing up a whole concept serves no purpose.

#5165462 Loading textures to the GPU

Posted by Promit on 07 July 2014 - 11:58 PM

The texture is completely decompressed by the API unless the file is in a special compression format that the GPU itself can decode. Those are the various DXT/BC formats, typically stored inside DDS files. Common image formats (PNG, JPEG, TGA, etc.) are decompressed by the API on the CPU.

#5165395 Converting OpenGL 3.1 code for Multi-sampled Framebuffers to be OpenGL 2.1 co...

Posted by Promit on 07 July 2014 - 05:21 PM

The key is to understand that extensions are fundamentally written as two pieces:

1) An overall synopsis explaining what the extension does and what the motivations behind the extension are.

2) A diff that explains how the extension modifies the core spec


The "written against" part belongs to #2, explaining what differences the extension creates against that particular spec version. It doesn't mean that version is required; as a practical matter it just tells you when the extension was written. The specific spec version is only needed to give precise definitions for all the behavioral modifications the extension makes.

#5165389 Converting OpenGL 3.1 code for Multi-sampled Framebuffers to be OpenGL 2.1 co...

Posted by Promit on 07 July 2014 - 05:01 PM

I believe this is the document you need:


Looks like this particular extension is part of the first wave of "core extensions", meaning the functions are designed to require no code changes between the extension and core forms. All you have to do is load the function pointers (which GLEW will do automatically) and enjoy.

#5164232 How to capture video of gameplay

Posted by Promit on 01 July 2014 - 11:25 PM

Buy FRAPS, and an SSD if you need high resolutions or frame rates. There are many options out there, and all of them manage to disappoint except FRAPS. It's very much worth it.

#5164071 Variance Shadow Map fades when shadow is close to occluder

Posted by Promit on 01 July 2014 - 11:28 AM

This is a natural and obnoxious artifact of VSM that can be addressed, to some extent, with the light bleeding fix. It has soured me on VSM considerably in recent years. You might find this presentation helpful: http://developer.download.nvidia.com/presentations/2008/GDC/GDC08_SoftShadowMapping.pdf

ESM, or VSM+ESM, may be a useful approach.

#5164063 OpenGL ES 2.0, FBO, low performance

Posted by Promit on 01 July 2014 - 10:50 AM

I don't think you should be using two contexts. You should probably use a single context with two off-screen FBOs rendering to textures, then composite both textures into the context's actual renderbuffer.