
Promit

Member Since 29 Jul 2001

#5199293 Difference between clock tick and clock cycle?

Posted by Promit on Yesterday, 01:33 PM

There are many, many different clocks in any given computing device. You always need to be clear about which clock you're talking about, or the terminology is pointless. Clock cycles typically refer to the clock signal driving a processor, usually the CPU or occasionally the GPU, but those are not the only clocks around. 




#5199037 C++ how to declare something that isn't declared?!?

Posted by Promit on 18 December 2014 - 07:12 PM

It can be written simpler still:

struct X { struct X* ptr; };
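
If it helps to see it in use, here's a minimal sketch of that self-referential struct linking two nodes (the variable names and the printf are my own illustration, not from the thread):

#include <cstdio>

struct X { struct X* ptr; };

int main() {
    struct X b = { nullptr };  // end of the chain
    struct X a = { &b };       // a points to b through the self-referential member
    std::printf("a points to b: %s\n", a.ptr == &b ? "yes" : "no");
    return 0;
}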



#5198376 Current-Gen Lighting

Posted by Promit on 15 December 2014 - 12:54 PM

Full conference proceedings are generally behind paid subscriptions, not published for free. I'm not sure whether SIGGRAPH in particular makes full recordings available; I know GDC does. But I strongly recommend you go through all the SIGGRAPH slides in detail, as well as looking up any materials they reference. There is a LOT in there. It might be easiest to start from 2012 and work your way forward in time from there.

 

I also like the FilmicGames blog by John Hable.




#5198083 Getting Bounding Box For Sphere on Screen

Posted by Promit on 14 December 2014 - 02:02 AM

Last time I did this, I just rendered a sphere :D I mean you're issuing a draw call anyway, who cares about a couple dozen extra polys...




#5197450 Physical Based Models

Posted by Promit on 10 December 2014 - 02:14 PM

Honestly, I haven't seen enough consistency in which texture maps physically based renderers expect to easily produce generic stock models that work well. Albedo is already somewhat variable, but the wide variety of specular/reflectance/roughness maps in use nowadays is a tricky problem. A lot of engines use oddball encodings, and there's no particularly good way to distribute the maps, even as raw floating point.




#5196894 Does glMapBuffer() Allocate Client-Side Memory?

Posted by Promit on 07 December 2014 - 10:12 PM

Okay, let's break this down.


At first, I thought: "Great! Direct access to GPU memory..."

Wrong.


I got the suspicion that glMapBuffer() is really copying whatever data to a client-side pool, then copying the modified data back, and destroying the temporary pool once glUnmapBuffer()'s called.

Close.


At first, I thought glMapBuffer() actually returned a pointer to the GPU's memory, but now it sounds like glMapBuffer()'s doing behind-the-scenes client-side copying, depending on the driver. Is my suspicion correct?

Mostly.

 

So here's the deal: MapBuffer returns a pointer which you can write to. What this pointer actually refers to is at the driver's discretion, but as a practical matter it's going to be client memory. (The platforms that can return direct memory won't do it through GL.) This may be memory that the GPU can access via DMA, which means the GPU can initiate and manage the copy operation without the CPU/driver's participation. The driver also doesn't necessarily need to allocate this memory fresh; it can keep reusing the same block of memory over and over, as long as you remember to Unmap.
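
To make that concrete, here's a minimal sketch of the map/write/unmap pattern being described, including the orphaning trick that lets the driver recycle a block (vbo, vertices, and size are my own placeholders, and error handling is trimmed):

// Assumes a valid GL context and an existing buffer object 'vbo'.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
// Orphan the old store so the driver can hand back a fresh or recycled
// client-side block instead of stalling until the GPU is done with it.
glBufferData(GL_ARRAY_BUFFER, size, NULL, GL_DYNAMIC_DRAW);
void* ptr = glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
if (ptr) {
    memcpy(ptr, vertices, size);     // writing into driver-owned memory
    glUnmapBuffer(GL_ARRAY_BUFFER);  // the driver schedules the copy/DMA here
}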
 


I thought operating systems typically provide ALL memory, regardless of where in the system it's located, with its own unique range of memory addresses. For example, memory addresses 0x00000000 to 0x2000000 point to main system memory while 0x20000001 to 0x2800000 all point to the GPU's memory. These memory ranges are dictated by the amount of recognized system memory and GPU memory (including virtual memory stored in page files).

Not so much. Windows Kernel 6.x (Vista) gained the ability to map GPU memory into the virtual address space of a particular process, but that's more about internal management of multitasking with the GPU than having much to do with application code. It's not going to live in the same physical memory address space used for main system memory, though, and you can't read/write to it arbitrarily.




#5196731 Why is math transformation taxing to most CPUs?

Posted by Promit on 06 December 2014 - 10:07 PM

Well, the author may need to replenish the oil in his lantern, if he wants to shed light on anything.




#5196698 What's the simplest method to implement "toon-style shading"?

Posted by Promit on 06 December 2014 - 04:08 PM

Step 1: quantize the N dot L lighting term. Simple as: ndotl = ndotl > 0.5 ? 0.7 : 0.2; Just toy with the threshold and result values there. You can do more clever stuff - try smoothstep rather than a hard transition. You can also quantize to more colors, say 4, but I never liked how it looks.

Step 2: outlined edges. Take the dot product of the normal and the view vector, and if it falls below a threshold (i.e. the surface is nearly edge-on to the camera), set the output color to black and exit.
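
Putting both steps together, here's a minimal C++-flavored sketch of the shading math (the Vec3 type, function names, and exact threshold values are my own placeholders, mirroring what the fragment shader would do):

struct Vec3 { float x, y, z; };

static float dot3(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// n, l, v are unit-length normal, light, and view vectors.
Vec3 toonShade(const Vec3& albedo, const Vec3& n, const Vec3& l, const Vec3& v) {
    // Step 2: the normal is nearly perpendicular to the view at a silhouette.
    if (dot3(n, v) < 0.2f)
        return Vec3{0.0f, 0.0f, 0.0f};  // outline: output black

    // Step 1: quantize the N dot L term into two bands.
    float band = dot3(n, l) > 0.5f ? 0.7f : 0.2f;
    return Vec3{albedo.x * band, albedo.y * band, albedo.z * band};
}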




#5196564 Why is math transformation taxing to most CPUs?

Posted by Promit on 05 December 2014 - 11:43 PM


1) The computer must track all of the vertices of all of the objects in the 3-D world, including the ones you cannot currently see.
 
Question 1: When the book says "the computer", does it really mean the program itself, the CPU that does the processing of the "addresses", or the RAM where the addresses are stored?

The vertices have to be stored in RAM, of course. The program is responsible for managing that data.


2) This calculation process, called transformation, is extremely taxing to most CPUs.
 
Question 2: Is it because the CPU cannot process the address of the data quickly per frame? Does it lead to a slow frame-rate in the game?

GPUs were invented because CPUs can't keep up. Let's assume you have a million vertices to deal with (quite low nowadays). Each one needs its position multiplied by a matrix to transform it to 2D space: four multiply-add operations for each of the matrix's four rows. Let's crudely call it 50 instructions to process each vertex, giving us 50M total per frame. Now we want to run at 60 frames per second, which puts us at 50M * 60 = 3 billion calculations per second. If we optimistically assume one calculation per cycle out of a 3 GHz CPU (this is very unrealistically optimistic), we've pretty much consumed the entire amount of CPU that we had doing just vertices. Let's not forget pixels need to be calculated, physics needs to be calculated, oh, and the actual game needs to fit in there somewhere.
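
For a sense of what's being counted, here's a minimal sketch of that per-vertex transform in plain scalar C++, with no SIMD (all names are my own illustration):

struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };  // row-major

// One matrix * vector transform: 16 multiplies and 12 adds per vertex.
Vec4 transform(const Mat4& m, const Vec4& v) {
    return Vec4{
        m.m[0][0]*v.x + m.m[0][1]*v.y + m.m[0][2]*v.z + m.m[0][3]*v.w,
        m.m[1][0]*v.x + m.m[1][1]*v.y + m.m[1][2]*v.z + m.m[1][3]*v.w,
        m.m[2][0]*v.x + m.m[2][1]*v.y + m.m[2][2]*v.z + m.m[2][3]*v.w,
        m.m[3][0]*v.x + m.m[3][1]*v.y + m.m[3][2]*v.z + m.m[3][3]*v.w,
    };
}

// The loop the estimate is about: a million of these, 60 times a second.
void transformAll(const Mat4& mvp, const Vec4* in, Vec4* out, int count) {
    for (int i = 0; i < count; ++i)
        out[i] = transform(mvp, in[i]);
}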




#5191016 Any opinions of what to do next?

Posted by Promit on 03 November 2014 - 04:50 PM

HDR, if you don't have that. Then I'd start tackling some more complex materials, potentially PBR and more sophisticated specular models.




#5190666 64-bit ARM CPU?

Posted by Promit on 01 November 2014 - 08:41 PM

Bam!

http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-823-computer-system-architecture-fall-2005/

 

Note that writing in assembly does not necessarily produce the fastest possible code, even in math libraries. The compiler can beat you, even if you copy its output, thanks to global optimization. More important than writing it, though: I maintain that being able to read and debug assembly code remains an enormously valuable skill for any systems-level programmer.




#5190510 Why don't you use GCC on windows?

Posted by Promit on 31 October 2014 - 10:48 PM

Historically, GCC generated pretty shoddy binary code compared to VC anyway. The build chain on Windows was a mess too, requiring either the MinGW port, which tended to be out of date, or aaaauuuuuggggh Cygwin. Lastly, most devs want to use VS just because it's the best of the IDEs, and VC++ is automatically supported while other compilers require messy configuration. I don't know if all of that is still true, but those are the classic reasons.

 

Nowadays, the emergence of clang as a serious toolchain powerhouse has served to starkly show how dated both GCC and VC are. Frankly clang makes a mockery of the other compilers in practically every metric, whether it's build times, code quality, language features, whatever. A wide array of developers are familiar with it thanks to Apple platform development. I'm very curious to see what happens when clang finally becomes usable on Windows. 

 

All that said, it's pretty straightforward to write C++03 code, even with dashes of C++11, that runs smoothly across the major compilers. My choice of compiler doesn't affect how I develop the code at all.




#5189434 [Fixed]Image parsing : Read all file or x bytes at a time ?

Posted by Promit on 27 October 2014 - 10:27 AM

I prefer to memory-map the file and skip the memory allocation part entirely. TGA of course requires some decompression code, so you will have to allocate a destination array. If your slowdown is in debug builds only, you may be running up against the debug-time sanity checks in std::vector.
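
Here's a minimal POSIX sketch of the memory-mapping approach (on Windows the equivalent is CreateFileMapping/MapViewOfFile; error handling is trimmed and the function name is my own):

#include <stddef.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

// Map the whole file read-only: no read() calls, no staging buffer.
const unsigned char* mapFile(const char* path, size_t* outSize) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) return NULL;
    struct stat st;
    fstat(fd, &st);
    void* data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);  // the mapping stays valid after the descriptor is closed
    if (data == MAP_FAILED) return NULL;
    *outSize = (size_t)st.st_size;
    return (const unsigned char*)data;  // parse the TGA header straight from this
}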




#5189314 Mac or PC - Really, this is a programming question.

Posted by Promit on 26 October 2014 - 07:44 PM

Hardware: both.

Desktop environment: Win/VS, Mac/Xcode or Xamarin, Linux/Code::Blocks

I use VMs regularly, and dual-boot Linux in a few specific cases (off USB 3 portable drives), but not as a general-use thing.

 

Our core tech runs across all of these platforms, although iOS is the only public platform release right now.




#5188794 What Is Your Game Design Technique?

Posted by Promit on 23 October 2014 - 01:16 PM

No documents. No documents.

 

We have a solid general idea of what the core gameplay goal is. Early on, this can shift and adjust and tweak every week or two, then every month or two as prototypes crystallize. From there, it's all about playing them with very harsh judgement and deciding what is or isn't fun. It's completely iterative, and we never quite know where the game will wind up. But on-paper designs do nothing for me.





