
Member Since 04 May 2004
Offline Last Active Aug 08 2013 01:18 AM

Posts I've Made

In Topic: John Carmack joins Oculus as CTO

08 August 2013 - 01:13 AM


Their SDK isn't quite up to scratch when it comes to professional game middleware, so having a CTO who's not only used a lot of middleware but has also developed major engines will surely help in that area.


My understanding is that game development for the console has been lackluster so far.

Well, the consumer version isn't out yet, so you shouldn't expect much mainstream support for it either.

I'd also avoid calling it "a console" -- it's just a type of display device for your PC.


Ah, I misunderstood what the project actually was. I thought it was a self-contained portable gaming unit. Didn't realize it actually requires a PC to run. Oculus status downgraded to "meh".


The status should be the other way around: yet another console that won't catch on and that nobody will develop for = meh. A real new tool that can be used with any compatible game on PC = not meh at all.

In Topic: Google chrome easter egg?

24 July 2013 - 04:37 AM

Edit²: When I disable "predict network actions to improve page load performance" under advanced settings, it's gone. Apparently Chrome loads the first search result in the background, which then starts playing automatically...

Wait, so Chrome preloads *and* runs pages you may never click on? Am I the only one thinking "ouch" for privacy & security there?

In Topic: C++ DX API, help me get it?

09 July 2013 - 08:23 AM


4) Why all the if(FAILED(bla))? Why isn't the code throwing?


 NEVER use exceptions in C++! I love HRESULT error codes
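For anyone unfamiliar with the pattern being argued over: D3D calls return an HRESULT, and callers test it with the FAILED macro. A common middle ground is a tiny wrapper that promotes failed HRESULTs into exceptions. A minimal sketch (the HRESULT/S_OK/E_FAIL/FAILED definitions below are portable stand-ins for the real ones from the Windows SDK, so the snippet compiles anywhere; DoSomething is a made-up placeholder for any D3D call):

```cpp
#include <cstdint>
#include <cstdio>
#include <sstream>
#include <stdexcept>

// Portable stand-ins for the Windows SDK definitions (normally in <winerror.h>).
using HRESULT = std::int32_t;
constexpr HRESULT S_OK   = 0;
constexpr HRESULT E_FAIL = static_cast<HRESULT>(0x80004005);
constexpr bool FAILED(HRESULT hr) { return hr < 0; }

// Placeholder for any HRESULT-returning API call.
HRESULT DoSomething(bool ok) { return ok ? S_OK : E_FAIL; }

// Style 1: plain error-code checking, as seen throughout the D3D samples.
void ErrorCodeStyle() {
    if (FAILED(DoSomething(true)))
        std::fprintf(stderr, "DoSomething failed\n");
}

// Style 2: a thin wrapper that converts a failed HRESULT into an exception.
void ThrowIfFailed(HRESULT hr, const char* what) {
    if (FAILED(hr)) {
        std::ostringstream msg;
        msg << what << " failed with HRESULT 0x" << std::hex
            << static_cast<std::uint32_t>(hr);
        throw std::runtime_error(msg.str());
    }
}

void ExceptionStyle() {
    ThrowIfFailed(DoSomething(true), "DoSomething");  // no-op on success
}
```

Neither style is free: the wrapper costs nothing on the success path but makes error handling non-optional, which is exactly what the error-code camp objects to.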

6) Why is everything taking a pointer? I get the point for large (or medium) objects, but why does something like the feature level, which isn't a large object or an array, and which you're likely to use once (or maybe twice!) in your whole application, get passed by pointer? I'm new to C++, but unless I've got it wrong, it means you must create a (local or global) variable, assign a value to it, and pass a pointer to it; if it were by reference you could just pass in D3D_FEATURE_LEVEL_11_0, for example.
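The complaint in the quote above can be shown in miniature. The names below (FeatureLevel, CreateDevicePtr, CreateDeviceRef) are all made up for illustration and stand in for the D3D-style signatures:

```cpp
// Toy enum standing in for D3D_FEATURE_LEVEL (illustration only).
enum FeatureLevel { Level_10_0, Level_11_0 };

// C-style COM signature: the caller must materialise a variable
// just to take its address.
int CreateDevicePtr(const FeatureLevel* level) {
    return (*level == Level_11_0) ? 11 : 10;
}

// What a C++-style signature could look like: a const reference binds
// to a temporary, so the caller can pass the enumerator directly.
int CreateDeviceRef(const FeatureLevel& level) {
    return (level == Level_11_0) ? 11 : 10;
}

int CallBoth() {
    FeatureLevel lvl = Level_11_0;       // extra ceremony forced by the pointer API
    int a = CreateDevicePtr(&lvl);
    int b = CreateDeviceRef(Level_11_0); // no local variable needed
    return a + b;
}
```

One reason C-compatible APIs take pointers anyway is that NULL can signal "use the default", which a reference cannot express.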


Use CComPtr (atlbase.h) if you don't like pointers...
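CComPtr (and the newer Microsoft::WRL::ComPtr) are Windows-only, but the idea behind them is just RAII over COM's AddRef/Release reference counting. A stripped-down sketch of the mechanism, using a fake ref-counted interface in place of IUnknown so it compiles anywhere (assignment is deliberately left out to keep the sketch short):

```cpp
// Minimal stand-in for a COM interface (real code would use IUnknown).
struct FakeUnknown {
    static int alive;                 // how many objects currently exist
    int refs = 1;
    FakeUnknown() { ++alive; }
    ~FakeUnknown() { --alive; }
    void AddRef()  { ++refs; }
    void Release() { if (--refs == 0) delete this; }
};
int FakeUnknown::alive = 0;

// Bare-bones ComPtr: releases the interface when it goes out of scope,
// which is the whole point of CComPtr / WRL::ComPtr.
template <typename T>
class ComPtr {
public:
    ComPtr() = default;
    explicit ComPtr(T* p) : ptr_(p) {}                    // adopts an existing reference
    ComPtr(const ComPtr& o) : ptr_(o.ptr_) { if (ptr_) ptr_->AddRef(); }
    ComPtr& operator=(const ComPtr&) = delete;            // omitted for brevity
    ~ComPtr() { if (ptr_) ptr_->Release(); }
    T* operator->() const { return ptr_; }
    T* Get() const { return ptr_; }
private:
    T* ptr_ = nullptr;
};
```

With this in place the object is released on every exit path, including early returns after a FAILED check, which is where manual Release calls tend to get forgotten.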




Well... I really don't care about PVOID, LPVOID, FLOAT, etc...

D3D is designed for performance and user control. If you don't like that, you can use something like Irrlicht/Ogre/Panda/whatever.


None of this relates to performance at all, unless you consider "setting up DirectX" a performance-critical part of an application, where it's important to save nanoseconds on object instantiation and avoid four-byte copies here and there. Performance is not the reason here (I could understand it for per-frame actions, but not here), nor does any of it relate to user control. Anyway, I already had my answers earlier in this thread: it's just COM limitations & legacy code.


So now I'm just looking for a thin wrapper around it, NOT an engine: something like DirectX for actual C++ rather than COM/C.

Can anyone recommend such a thing? Something very thin, where I could still refer to the DX documentation but use it in a more "modern C++" way. C++11 is fine (within Visual Studio 2013's limitations).
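Short of a full wrapper library, one very thin idiom worth knowing is std::unique_ptr with a deleter that calls Release() instead of delete; that alone gives scoped lifetimes for COM objects with almost no wrapper code. A sketch using a stand-in interface (FakeDevice and MakeDevice are made up; real code would wrap e.g. ID3D11Device):

```cpp
#include <memory>

// Stand-in for a COM interface; real code would use e.g. ID3D11Device.
struct FakeDevice {
    static int alive;                  // how many devices currently exist
    FakeDevice() { ++alive; }
    void Release() { delete this; }
private:
    ~FakeDevice() { --alive; }         // COM objects are destroyed via Release()
};
int FakeDevice::alive = 0;

// A deleter that calls Release() instead of delete.
struct ComReleaser {
    template <typename T>
    void operator()(T* p) const { if (p) p->Release(); }
};

template <typename T>
using com_ptr = std::unique_ptr<T, ComReleaser>;

// Factory standing in for a D3D creation call.
com_ptr<FakeDevice> MakeDevice() {
    return com_ptr<FakeDevice>(new FakeDevice);
}
```

Unlike a full ComPtr, unique_ptr gives move-only single ownership, which is often all that setup code actually needs.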

In Topic: C++ DX API, help me get it?

09 July 2013 - 06:02 AM

OK, so as I expected, most of it is to be blamed on legacy or COM; that actually makes me quite happy to hear.


Is there any library on the C++ side that does the same as SharpDX does for .NET? (Keep the same low-level API access, but wrap it in namespaced classes with default constructors etc. Something that would feel more "modern C++"-ish without being an engine, but that would be a good base for starting one without writing my own wrappers for everything?)

In Topic: So, Direct3D 11.2 is coming :O

07 July 2013 - 06:51 PM


I just wish they had lifted the 16K texture size limit at the same time as they introduced tiled resources; I wonder if they'll do that in DX12. It would be really nice to just load up those high-res 64-128K textures and not have to manage them.

OK, so let's do the maths...

Those sizes are texels per side:
32k => 32,768
64k => 65,536
128k => 131,072

Of course we now need to square those numbers:
32k => 1,073,741,824
64k => 4,294,967,296
128k => 17,179,869,184

But that's just the texel count, so let's make it RGBA8, x4 to get it into bytes, then /1024 a few times to get the numbers sane:
32k => 4,294,967,296 bytes => 4.0GB (BC3 -> 1GB, BC1 -> 0.5GB)
64k => 17,179,869,184 bytes => 16.0GB (BC3 -> 4GB, BC1 -> 2.0GB)
128k => 68,719,476,736 bytes => 64.0GB (BC3 -> 16GB, BC1 -> 8.0GB)

Of course that is only the top-level mip; the full mip chain adds roughly another 1/3 on top (so ~5.3GB, ~21.3GB and ~85.3GB all in).
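The arithmetic above is easy to mechanise. A quick sketch (sizes in bytes; bytesPerTexel is 4.0 for RGBA8, 1.0 for BC3, 0.5 for BC1; the function names are my own):

```cpp
#include <cstdint>

// Bytes for the top mip of a square texture with the given edge length in texels.
std::uint64_t topMipBytes(std::uint64_t edge, double bytesPerTexel) {
    return static_cast<std::uint64_t>(edge * edge * bytesPerTexel);
}

// A full mip chain adds roughly 1/3 on top of the top level
// (the geometric series 1 + 1/4 + 1/16 + ... converges to 4/3).
std::uint64_t withMips(std::uint64_t topBytes) {
    return topBytes * 4 / 3;
}
```

For example, topMipBytes(131072, 4.0) gives the 64GB figure quoted above for a 128K RGBA8 texture, before mips.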

Now, my GPU (an AMD HD 7970) has 3GB of memory, and the largest memory on a single GPU is 6GB, on something like an NV Titan.

So, there is a slight size issue with trying to load textures that big into VRAM, AND most of it will be wasted, as you simply won't be looking at the data.

16K with PRT is a sane solution, even if it does involve a bit more work.


I don't really get your point; the whole point of tiled resources is that you don't load all of that, but can get more detail when needed. I actually have textures that large, and of course I don't want to display the whole thing at full quality, but the same applies to 16K. I'd just like all the work to be offloaded to DX: instead of an array of 16K textures, I'd like to handle it as a single 64K texture when, functionally, it is an actual single 64K texture being mapped and not an atlas of smaller textures.


If you mean issues with loading from disk: I used Amplify with Unity, and it's working just fine with 512K textures, and that's a software solution!