
Member Since 29 Jul 2001
Offline Last Active Yesterday, 11:09 PM

#5231271 What are your opinions on DX12/Vulkan/Mantle?

Posted by Promit on 27 May 2015 - 10:51 AM

That's going to depend on the on-the-ground realities of driver and operating system support, when the dust finally settles. Remember, GL sounds like a great cross-platform Direct3D killer on paper, but doesn't live up to that in reality. We still don't know how Vulkan will fare in real life. I expect that major engines, including DICE/Frostbite, will simply support both.

#5231269 Graphics engines: what's more common; standard vertex structure or dynami...

Posted by Promit on 27 May 2015 - 10:50 AM

I have a couple base types that are byte-for-byte matched to their shaders (now mandatory in Metal), and a couple specialized types that are for specific things but also matched to shaders. So there's essentially a utility vertex, a skinned model vertex, a scene model vertex, and a few specialty things like water surface vertices. Technically the underlying mesh format is data driven and allows you to declare any vertex type you like, but the tooling will only give you a few types to export.


I don't like to custom build and interleave vertex formats based on the shaders, though I've seen this done. I'd rather pay some extra transfer cost than deal with the static memory use explosion when shader formats differ in trivial ways. I also don't like to deinterleave the streams and bind them separately as KaiserJohan suggested, because this is actually not optimal on the GPU side. In a few specialized cases we do use vertices that assemble from multiple streams, but I try to avoid it.
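
As a sketch of what a fixed, shader-matched vertex type can look like (the names and fields here are illustrative, not the actual engine's types), a static_assert catches accidental padding that would break the byte-for-byte match with the shader:

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical scene-model vertex, laid out to match the shader's
// input structure byte-for-byte. All-float fields avoid padding.
struct SceneVertex {
    float position[3];   // 12 bytes
    float normal[3];     // 12 bytes
    float uv[2];         //  8 bytes
};
static_assert(sizeof(SceneVertex) == 32,
              "SceneVertex must match the shader's 32-byte input layout");

// Offsets handed to the API's vertex declaration should be derived from
// the struct rather than hard-coded, so they can't silently drift:
constexpr size_t kNormalOffset = offsetof(SceneVertex, normal); // 12
constexpr size_t kUVOffset     = offsetof(SceneVertex, uv);     // 24
```

The static_assert is the cheap insurance: if anyone adds a field or the compiler inserts padding, the mismatch fails at compile time instead of producing garbage geometry.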

#5230744 Movie IP rights

Posted by Promit on 24 May 2015 - 05:48 PM

It varies depending on the exact deal that was struck to bring the movie to market. The publisher is most likely to hold the IP, but sometimes the actual production studio does. In any case, if it's not some random indie thing, your likelihood of getting rights - or even a response - is in the "lottery odds" territory.

#5230621 Increased frame time with less workload

Posted by Promit on 23 May 2015 - 05:32 PM

I suspect your low frame rate is hit when you look in a direction that causes polys to be drawn to the viewport back to front.

A depth prepass would be an easy test.

#5230608 Increased frame time with less workload

Posted by Promit on 23 May 2015 - 03:18 PM

Is backface culling enabled?

#5230530 what Motor/ Cognitive skills are required in this game?

Posted by Promit on 22 May 2015 - 11:34 PM

I work on game development for a Neurology lab, and as a result get to spend every day chatting with neurologists/neuroscientists and looking at how those things relate to game development and concepts that are common in our world. 


Coming from that as my day job, reading you guys discuss it from the outside is an absolutely fascinating experience.

#5230185 software rendering win32

Posted by Promit on 20 May 2015 - 10:07 PM

The problem is mostly that you're doing it through GDI; the approach in the code you shared previously was never really an acceptable course. If you want to do a full soft renderer, the way to do it is to spool up a window using the hardware renderer in D3D, request access to the backbuffer, and then modify the pixels there. The actual rendering part of the code won't change much at all from what you had before; it is just a lot more efficient to upload to video memory this way.


See this document for the basic instructions on what to do, after you have a D3D 9 device up and running: https://msdn.microsoft.com/en-us/library/windows/desktop/bb172231%28v=vs.85%29.aspx


Of course it would also be a substantial performance improvement to use SSE 1/2/3/4 vector instructions to compute things.
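
As an illustration of the kind of per-pixel work SSE speeds up (this example is mine, not from the thread): brightening an ARGB buffer with a saturating add touches 16 bytes, four pixels, per instruction:

```cpp
#include <emmintrin.h>  // SSE2 intrinsics
#include <cstdint>

// Saturating add of `amount` to every channel of every pixel,
// 4 pixels (16 bytes) per iteration. For simplicity this assumes
// `count` is a multiple of 4 and `pixels` is 16-byte aligned.
void BrightenSSE2(uint32_t* pixels, int count, uint8_t amount)
{
    const __m128i bias = _mm_set1_epi8(static_cast<char>(amount));
    for (int i = 0; i < count; i += 4) {
        __m128i p = _mm_load_si128(reinterpret_cast<__m128i*>(pixels + i));
        p = _mm_adds_epu8(p, bias);  // per-byte saturating add, no overflow wrap
        _mm_store_si128(reinterpret_cast<__m128i*>(pixels + i), p);
    }
}
```

The scalar equivalent needs a clamp per channel; the vector version gets the saturation for free from the instruction itself.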


You asked one more question previously which was drowned in all the noise, regarding cracks between polygons in a rasterizer. I think this article may have the answers you need: http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter42.html

#5229723 win32 cpu render bottleneck

Posted by Promit on 18 May 2015 - 06:01 PM

I'm going to ignore your nonsense reasoning and simply say this: Everyone else here is working on modern machines, modern software, modern techniques, and so on. If you insist on living twenty years ago that's your prerogative, but the amount of help available to you will likely be minimal. Note that nobody seriously did what you're doing in any era of game development. Mainstream development jumped straight from DOS pixel pushing to DirectX. Forcing single pixel commands through GDI was never an acceptable approach.


I also note that you waited three hours before complaining about not getting a reply. One to two days would be more appropriate. We are not all here solely for your needs.

#5229409 How does Clash of Clans (mobile) keep track of timing?

Posted by Promit on 17 May 2015 - 12:00 AM

> CoC doesn't load without internet access. The activity bar/notification bar can be based on local time, but the game should be checking when you open the app.

This. CoC will not load without a connection and it will not tolerate more than maybe 10-15 seconds offline (during which your actions get queued up and won't take effect).


That said, all you have to do offline is save the finish time via NSUserDefaults. What I do is to save a bunch of stuff in there, and then put a hash into Keychain. This trips if they tamper with the user defaults and I can respond appropriately. This is probably not robust to jailbreaks but oh well.
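
The scheme described, in sketch form: hash the saved fields together with an app secret, store the hash somewhere harder to edit (the Keychain, in the post), and recompute on load. A minimal illustration; the toy FNV-1a hash and the salt here are stand-ins, and a real implementation would use a proper keyed hash such as HMAC-SHA256:

```cpp
#include <cstdint>
#include <string>

// Toy FNV-1a hash over salt + data. Illustrative only; use a real
// keyed hash in production. The salt stands in for a per-app secret.
uint64_t TamperHash(const std::string& savedData, const std::string& salt)
{
    uint64_t h = 1469598103934665603ull;
    for (unsigned char c : salt + savedData) {
        h ^= c;
        h *= 1099511628211ull;
    }
    return h;
}

// On save: persist savedData (e.g. in user defaults) and TamperHash()
// separately (e.g. in the Keychain). On load, verify they still agree:
bool LooksUntampered(const std::string& savedData, const std::string& salt,
                     uint64_t storedHash)
{
    return TamperHash(savedData, salt) == storedHash;
}
```

If the player edits the saved data without also updating the separately stored hash, the check fails and the game can respond however it likes.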

#5227357 GI ground truth for comparison

Posted by Promit on 05 May 2015 - 12:42 PM

Isn't POVRay a common choice for this sort of thing?

#5226724 Interpreting a performance profiling results

Posted by Promit on 01 May 2015 - 11:19 AM

> My code is highlighted. Why is the graphics dll at the same level as WinMainCRTStartup()? Looking at JonsGame::Game::Run(), there are also functions popping up from way deeper.

Much of the NV graphics stuff actually happens in an outside thread, which is why you're seeing it at the same level. It's top level on a thread running alongside your game, rather than being called directly from it. The pop up functions are glitches that show up in most profilers when a proper callstack isn't available; ignore them. 


What I can see right now is about 4.25% of your actual CPU time is being spent running your own code, and the rest of it is outside.

> What are these random Sleeps() for example?

Those are most likely pauses due to vertical sync. Right now you're spending so much time waiting on the driver that a CPU profile is going to give you relatively little information. Run the game long enough to get somewhere in the range of 10k samples inside your Run function, and then maybe you'll start seeing something useful. But it looks to me like you either need a GPU profiler, or have nothing interesting to profile in the first place.

#5226720 Which is the best university in the "world" for Computer Graphics?

Posted by Promit on 01 May 2015 - 11:01 AM

Not "Best" because that's an idiotic, meaningless, and ultimately unproductive question. But read SIGGRAPH papers over the years and see what universities publish regularly, to get a feel for the research side of things. A few notables in the US:

UC Berkeley

UC Los Angeles (UCLA)

University of North Carolina (UNC Chapel Hill)

Carnegie Mellon

Massachusetts Institute of Technology (MIT)

New York University (NYU)

California Institute of Technology (CalTech)

Georgia Institute of Technology (Georgia Tech)

University of Washington


Outside the US, ETH Zurich caught my attention.


Specializations shift over time depending on the students and faculty currently at the school and their interests, so there's little point in pinning any particular specialty on any particular university.

#5226602 Interpreting a performance profiling results

Posted by Promit on 30 April 2015 - 05:27 PM

In general there are two ways to start dealing with the profile: Top-down inclusive time, and bottom-up exclusive time.


Top-down inclusive time tells you how much time was spent inside a function plus any functions it called (children). Then you can see which functions it called, and how much time was spent in those functions+children, and so on all the way down to the leaves of the call tree. This gives you an overall birds-eye view of where your system spends its time.


Bottom-up exclusive time is really about identifying the largest leaves on the tree. This gives you how much time was spent in each function excluding any outside calls it made, which allows you to focus on the computations that consume the most actual computing resources. It's a little bit trickier of a view to work in, as the major offenders here can often be disparate and it may not be immediately obvious what to do with them. It's most challenging with functions that are called from many different places to do many different system tasks.


Always start with those two views of the data.
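
The two views can be illustrated with a tiny call tree (illustrative code, not tied to any particular profiler): exclusive time is a node's own time alone, while inclusive time adds in every descendant.

```cpp
#include <vector>

// A profiled call-tree node: exclusive (self) time plus children.
struct CallNode {
    double selfMs;                 // exclusive time: this function only
    std::vector<CallNode> children;
};

// Inclusive time = self time + inclusive time of every child.
double InclusiveMs(const CallNode& n)
{
    double total = n.selfMs;
    for (const CallNode& c : n.children)
        total += InclusiveMs(c);
    return total;
}
```

For a root doing 5 ms of its own work with children at 10 ms and 3 ms, the top-down view reports 18 ms inclusive at the root, while the bottom-up view points straight at the 10 ms leaf.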

#5225589 Is AP Computer Science Worth It?

Posted by Promit on 26 April 2015 - 01:10 AM

The long and short of it is that as far as high school courses go, it's one of the most useful and pretty universally accepted at colleges. 

#5225280 Vertex Projection (without gluProject)

Posted by Promit on 24 April 2015 - 01:01 PM

Did you take into account the fact that in OpenGL, the origin of the viewport is at the bottom left, with Y increasing upward?
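
Concretely, after projection, mapping a normalized-device-coordinate point to window pixels and then flipping to a top-left origin looks like this (a sketch, assuming a viewport covering the whole window):

```cpp
struct Point2 { float x, y; };

// Map an NDC point (x, y in [-1, 1]) to window pixels the way OpenGL's
// default viewport transform does, then flip Y for conventions whose
// origin is the top-left corner with +Y pointing down.
Point2 NdcToTopLeftWindow(float ndcX, float ndcY, float width, float height)
{
    // OpenGL window coordinates: origin bottom-left, +Y up.
    float wx = (ndcX * 0.5f + 0.5f) * width;
    float wy = (ndcY * 0.5f + 0.5f) * height;
    // Flip to top-left origin.
    return { wx, height - wy };
}
```

Forgetting the final flip is the classic symptom here: everything renders, just mirrored vertically.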