
Vincent_M

Member Since 16 Jan 2007
Offline Last Active Yesterday, 02:20 AM

Posts I've Made

In Topic: Current-Gen Lighting

15 December 2014 - 12:34 PM

Thanks, TiagoCosta. I checked out the websites, and I see many PowerPoint presentations. I've been to SIGGRAPH's website before, but again, it's just syllabi and PowerPoint slides that seem to supplement the talks. There are a few interesting ideas in the slides, but nothing goes into depth (understandably, slides should only contain the concise, important points made in the live talk). Are there videos that are supposed to accompany them, or were the live presentations only for people who attended those SIGGRAPH talks?


In Topic: Current-Gen Lighting

11 December 2014 - 03:10 PM


Physically based rendering has been a household word for many years now.
No, they do not use “last-generation but higher-resolution textures”.  Read up on physically based rendering, albedo, shininess, roughness, specular reflectance, etc.

I thought they might have been using physically based lighting, but I wasn't sure whether even current consoles were capable of it yet. I've been trying to find more information on real-time physically based lighting for the past two months now. From what Marmoset's website says, the term is used pretty generically and could mean a bunch of things. I found a book on the subject, but it isn't aimed at real-time applications.

 


To add to L.Spiro's list, read on Image-Based Lighting (IBL), Cook-Torrance, energy conservation, Tonemapping, ambient obscurance etc.

I'll read up on those terms. I've heard of image-based lighting, but I always thought it had something to do with deferred rendering. I also thought Cook-Torrance was more of a raytracing algorithm describing how light is reflected/refracted when entering a medium, and I have heard of tone mapping. Sounds like I have a lot of research to cover!
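For reference, here's a minimal CPU-side sketch in C++ of the Cook-Torrance specular term mentioned above (with a GGX distribution, Schlick Fresnel, and Schlick-GGX geometry) plus Reinhard tone mapping. The function names and constants are my own; this is reference math, not any particular engine's implementation:

```cpp
// Reference math only -- a sketch, not any engine's implementation.
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// GGX (Trowbridge-Reitz) normal distribution function.
static float distributionGGX(float NdotH, float roughness) {
    float a2 = roughness * roughness * roughness * roughness;
    float d  = NdotH * NdotH * (a2 - 1.0f) + 1.0f;
    return a2 / (3.14159265f * d * d);
}

// Schlick's approximation to Fresnel reflectance.
static float fresnelSchlick(float VdotH, float F0) {
    return F0 + (1.0f - F0) * std::pow(1.0f - VdotH, 5.0f);
}

// Schlick-GGX geometry (shadowing/masking) term.
static float geometrySchlickGGX(float NdotX, float roughness) {
    float k = (roughness + 1.0f) * (roughness + 1.0f) / 8.0f;
    return NdotX / (NdotX * (1.0f - k) + k);
}

// Cook-Torrance specular BRDF: D * F * G / (4 * NdotL * NdotV).
float cookTorranceSpecular(Vec3 N, Vec3 V, Vec3 L, Vec3 H,
                           float roughness, float F0) {
    float NdotL = std::max(dot(N, L), 0.0f);
    float NdotV = std::max(dot(N, V), 0.0f);
    float NdotH = std::max(dot(N, H), 0.0f);
    float VdotH = std::max(dot(V, H), 0.0f);
    float D = distributionGGX(NdotH, roughness);
    float F = fresnelSchlick(VdotH, F0);
    float G = geometrySchlickGGX(NdotL, roughness)
            * geometrySchlickGGX(NdotV, roughness);
    return (D * F * G) / std::max(4.0f * NdotL * NdotV, 1e-4f);
}

// Reinhard tone mapping: compresses HDR radiance toward [0, 1).
float tonemapReinhard(float hdr) { return hdr / (1.0f + hdr); }
```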


In Topic: Does glMapBuffer() Allocate Client-Side Memory?

10 December 2014 - 12:22 PM

glMapBufferRange() sounds like it'd be a hassle haha... Is glMapBuffer() generally faster than glBufferSubData(), or is it mainly there for convenience? It doesn't seem like it would cut down on any overhead.
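For context, the usual answer is that glBufferSubData() always copies through the driver, while mapping hands you a pointer directly; it's glMapBufferRange()'s flags that let you skip synchronization, which plain glMapBuffer() can't do. A minimal sketch of the difference, assuming a GL context and loaded headers; the buffer size and vertex data below are made up:

```cpp
#include <cstring>  // std::memcpy; assumes GL headers and a context exist

static const float vertices[] = { 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f };

static void uploadVertices() {
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, 64 * 1024, nullptr, GL_DYNAMIC_DRAW);

    // Option 1: glBufferSubData -- simple, but the driver copies the
    // data itself and may stall if the GPU is still reading the buffer.
    glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(vertices), vertices);

    // Option 2: map only the range being written; INVALIDATE_RANGE lets
    // the driver discard that region instead of synchronizing with the
    // GPU -- control you don't get from plain glMapBuffer().
    void* ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, sizeof(vertices),
                                 GL_MAP_WRITE_BIT |
                                 GL_MAP_INVALIDATE_RANGE_BIT);
    if (ptr) {
        std::memcpy(ptr, vertices, sizeof(vertices));
        glUnmapBuffer(GL_ARRAY_BUFFER);
    }
}
```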


In Topic: Texture Arrays

07 December 2014 - 08:03 PM

I believe that the sparse texture extension would allow you to create a 1024x1024xN texture array and then for the slice with the 512x512 texture you would simply leave the unused mip level unallocated. If you need for example both RGBA8 and RGBA32F then create two different texture arrays.

I've read about sparse textures, but I never knew the context in which they'd be helpful. I always thought they'd be useful for things like procedural terrain rendering and John Carmack's megatextures. That's the thing with learning modern OpenGL from The OpenGL Programming Guide (8th Edition): it goes into detail on how each API function works, but doesn't provide much context on how they could be used in real-world techniques. Providing some examples would really help reinforce what the API's actually doing. Then, once I'm familiar with how it works and why it's used like that, I can develop my own techniques from there. Requiring all textures to be of the same format makes sense to me too: I could have multiple texture arrays with different formats, depending on the rendering technique I'm using for my meshes.

 

I'm finding the OpenGL SuperBible (6th Edition) to be more helpful there!
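A sketch of the sparse-array suggestion above, assuming a context that exposes GL_ARB_sparse_texture; the sizes, mip count, and layer indices are made up for illustration:

```cpp
static GLuint createSparseArray() {
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D_ARRAY, tex);

    // The sparse flag must be set BEFORE allocating (virtual) storage.
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_SPARSE_ARB, GL_TRUE);
    glTexStorage3D(GL_TEXTURE_2D_ARRAY, 11, GL_RGBA8, 1024, 1024, 4);

    // Layer 0 really is 1024x1024: commit physical pages for its mip 0.
    glTexPageCommitmentARB(GL_TEXTURE_2D_ARRAY, 0, 0, 0, 0,
                           1024, 1024, 1, GL_TRUE);

    // Layer 1 only has a 512x512 source: leave mip 0 uncommitted and
    // commit mip 1 instead, so nothing backs the unused top level.
    glTexPageCommitmentARB(GL_TEXTURE_2D_ARRAY, 1, 0, 0, 1,
                           512, 512, 1, GL_TRUE);

    // Production code should round commitment regions to the page size
    // reported via GL_VIRTUAL_PAGE_SIZE_X/Y/Z_ARB.
    return tex;
}
```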

 

 

You can upload your 512x512 texture into mip level 1 (assuming 0-index) of another layer, and in your shader specify an lod bias so you effectively use mip level 1 as your "base level".

I hadn't thought of that, and it sounds like a good idea. Since I'm only expecting power-of-two textures, I could use the largest texture as the highest-resolution mip level and the lowest-resolution texture as my starting mip level. I've read about mipmap bias controls, but I'm not too familiar with how to actually use them yet. Less memory is wasted with your solution compared to mine, though. My idea was to upscale the smaller textures to match the largest, but that multiplies a texture's memory footprint by 4x for every doubling of its dimensions: going from 256x256 to 1024x1024, for example, would bloat my smaller texture 16x in memory. I could also down-sample instead, which would probably yield better results, since it follows mipmap methodology and reduces the memory footprint. Of course, for optimal quality and performance, same-sized textures should probably be provided.
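A minimal sketch of the "upload into mip 1" idea, assuming a 1024x1024 RGBA8 array already allocated with glTexStorage3D (so mip 1 of each slice is 512x512); the layer index and pixel pointer here are hypothetical:

```cpp
static void uploadSmallSlice(GLint layer, const unsigned char* pixels) {
    // Upload the smaller texture into mip LEVEL 1 of its slice;
    // level 0 of this slice is simply never sampled.
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 1 /* level */,
                    0, 0, layer, 512, 512, 1,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}
// In the fragment shader, a per-material bias of +1.0 then shifts
// sampling down one mip for that slice, e.g.: texture(atlas, uvw, 1.0);
```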


In Topic: OpenGL 5 - Release?

01 December 2014 - 03:13 PM


^What Phantom said -- one of the biggest features is basically that we're finally getting threading support in D3D12/GLNext/Mantle.
D3D11 almost has great multi-core support, but the internal architecture hobbles the performance gains.
GL has always had multi-threading support, but it's never actually been possible to use it to reduce CPU-side overhead by scaling over multiple cores...
Vincent_M, on 30 Nov 2014 - 10:43 PM, said:
 
TheChubu, on 28 Nov 2014 - 01:43 AM, said:
I wonder, if they're actually making a new API, and it's good and all... how much of an impact would it actually have?
It would make a huge difference. Look at the current public releases of OpenGL vs DirectX. DirectX is a nicely written API compared to OpenGL in its current state, with all of its deprecated features and function parameters that have been re-purposed across versions. But despite DirectX's pleasant interface, it's Windows-only and runs a little slower than OpenGL on Windows.
 
From the data I've seen, this isn't at all true.

The oft-cited (but not intended as a benchmark) Valve L4D2 comparison was with a D3D9 renderer vs GL... and D3D9 was renowned for having huge per-draw-call overheads.
To illustrate why the L4D2 data is not a good benchmark to look at, the difference between their two datapoints is a mere 0.4ms of CPU time.
The story wasn't "We rewrote our entire renderer and did a huge amount of optimization work, and saved 0.4ms in the process" -- and even if it had been, such a tiny optimization would make it a non-story: just 2.4% of a 60Hz frame saved.
It's very frustrating when people like this take a stupidly small number of data points that aren't actually from a benchmark and then write whole articles about them as if they were actual data (by the way, their entire "Why do we still use Direct3D?" section is just plain wrong).

Well, this is embarrassing. I thought OpenGL was usually just faster! :lol:

 


As Vincent said, so many API features don't map to the hardware, so using these old APIs (especially OpenGL's deprecated features) incurs large and unpredictable CPU-side overheads.

I remember moving over from OpenGL ES 1.1 to ES 2.0 when the 3GS came out. I started learning the features and had a lot of "ah-hah" moments. Not only did shaders allow for some pretty interesting effects, they also let us do things with fewer state changes, since GL_TEXTURE_2D no longer had to be enabled.
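Roughly the contrast being described, as a hedged sketch; the texture, program, and uniform-location handles are hypothetical values from earlier setup:

```cpp
static void drawTexturedES11(GLuint tex) {
    // ES 1.1 fixed function: texturing is global state you toggle.
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, tex);
    // ... draw ...
    glDisable(GL_TEXTURE_2D);
}

static void drawTexturedES20(GLuint program, GLuint tex, GLint uDiffuseLoc) {
    // ES 2.0: the enable cap is gone -- the shader decides whether it
    // samples, so that's one less piece of state to churn per draw.
    glUseProgram(program);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, tex);
    glUniform1i(uDiffuseLoc, 0);  // point the sampler uniform at unit 0
    // ... draw ...
}
```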

 

I'm not sure if Mantle will ever take off as a widely adopted API, but I think it has kicked off the next era of graphics APIs that work in sync with how GPUs actually operate nowadays.

