Ohforf sake

Member Since 04 Mar 2008
Offline Last Active Today, 09:41 AM

Posts I've Made

In Topic: Ideas for a seminar in computer graphics

19 April 2015 - 04:26 AM

Two months is not a lot of time, especially since you (presumably?) won't be working full time on it.

 

Anyway, here are two programming-heavy ideas off the top of my head:

 

Global Illumination is kinda hard, especially given your limited time frame and experience. However, this comes to mind:

http://codeflow.org/entries/2012/aug/25/webgl-deferred-irradiance-volumes/

It is a neat approach that might be feasible within the two months if you push yourself a bit. There seems to be code available in case you get stuck, and you might be able to come up with a creative improvement of your own.

 

Another idea I always wanted to implement, which involves light in an unusual way, is a content-creation tool that helps with texturing models. Usually, triangle meshes are unwrapped and the textures are then painted directly. You could instead create a tool where, rather than painting the final texture(s) directly, you set up a couple of projectors around the object, like spotlights that project an image (hence the need for light and shadows). The artists would then paint the images of those projectors, and your tool would bake the final model textures from them.

To some degree this is already supported in the major modelling packages, but you could enhance it by allowing projectors to mix and combine colors so that reusable dirt or rust decals can be layered on top. You could also allow the projectors to affect not only the color textures, but also the textures for the other material parameters.

The downside is that such a tool can be rather GUI heavy. The upside is that you can easily "scale" the project according to your progress. E.g. start with a purely non-GUI application that reads the projector positions and mesh from a Blender export, displays a preview, and performs the bake. Then add features like a GUI, different projector types, etc. until the two months are over.
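The baking step in that idea boils down to answering, for each texel's surface point, which pixel of each projector's image covers it. Here is a minimal sketch for the simplest case of an orthographic projector; all names and the orthographic simplification are my own, not a reference implementation:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 sub(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }

// Orthographic projector: maps a world-space surface point to [0,1]^2
// texture coordinates of the projected image. 'origin' is the projector
// position, 'right'/'up' are unit axes spanning the projection plane,
// and 'width'/'height' are the image's world-space extents. Returns
// whether the point falls inside the projected image at all.
bool projectorUV(Vec3 p, Vec3 origin, Vec3 right, Vec3 up,
                 float width, float height, float& u, float& v)
{
    Vec3 d = sub(p, origin);
    u = dot(d, right) / width  + 0.5f;  // center the image on the origin
    v = dot(d, up)    / height + 0.5f;
    return u >= 0.0f && u <= 1.0f && v >= 0.0f && v <= 1.0f;
}
```

A real tool would use a full view-projection matrix per projector (to support perspective "spotlight" projectors) plus a depth test against a shadow map, so that occluded surfaces don't receive the decal.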


In Topic: Normal Map Generator (from diffuse maps)

18 April 2015 - 07:42 AM

Thanks for sharing your code.

 

However, I do share the sentiment of the others that this can be a bad idea. While it is true that 99% of all players, artists, and programmers won't be able to tell that the normal maps are broken, they will be able to tell that it looks bad, or at least not "right". Trust me, I've been there, done that. On a commercial project, too.

 

The real problem comes later, though. Once a significant fraction of all materials has broken normal maps, the spec/gloss maps get adapted to somehow counteract the effect. Then the lighting. All of a sudden you can no longer switch individual assets over to "good" normal maps, because doing so would break the entire setup. And before you know it, "bad looking" becomes your new art style that every new asset has to adhere to, because otherwise the game would not look coherent.

 

If you don't have the time or resources to make actual normal maps, then not using normal maps, or using funky ones, might be the right choice. There are very good-looking games out there that aren't photorealistic. But it should be a conscious choice.


In Topic: C++ cant find a match for 16 bit float and how to convert 32 bit float to 16...

11 April 2015 - 03:00 AM

For large amounts of data, there are also SIMD intrinsics that can do this:

half -> float: _mm_cvtph_ps and _mm256_cvtph_ps
float -> half: _mm_cvtps_ph and _mm256_cvtps_ph
see https://software.intel.com/sites/landingpage/IntrinsicsGuide/

Oh, I just noticed you aren't doing this on a PC. But some ARM processors support similar conversion functions. See for example: https://gcc.gnu.org/onlinedocs/gcc/Half-Precision.html
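If neither the x86 intrinsics nor the ARM instructions are available, the conversion can also be done in plain C++ with a bit of bit twiddling. A minimal scalar sketch follows; the function name, the flush-to-zero handling of 32-bit denormal inputs, and the ties-rounded-up rounding are my own simplifications, not part of any standard API:

```cpp
#include <cstdint>
#include <cstring>

// Convert a 32-bit float to an IEEE 754 binary16 bit pattern.
// Rounds to nearest (ties rounded up); flushes 32-bit denormal
// inputs to zero, which is close enough for a sketch.
std::uint16_t float_to_half(float f)
{
    std::uint32_t x;
    std::memcpy(&x, &f, sizeof(x));          // type-pun without UB
    std::uint32_t sign = (x >> 16) & 0x8000u;
    std::int32_t  exp  = std::int32_t((x >> 23) & 0xFF) - 127 + 15; // rebias
    std::uint32_t mant = x & 0x7FFFFFu;

    if (exp >= 31) {                          // overflow, inf, or NaN
        if (((x >> 23) & 0xFF) == 255 && mant != 0)
            return std::uint16_t(sign | 0x7E00u);   // NaN
        return std::uint16_t(sign | 0x7C00u);       // +/- infinity
    }
    if (exp <= 0) {                           // half-precision subnormal
        if (exp < -10) return std::uint16_t(sign);  // too small -> zero
        mant |= 0x800000u;                    // add the implicit leading 1
        std::uint32_t shift = std::uint32_t(14 - exp);
        std::uint16_t h = std::uint16_t(mant >> shift);
        if (mant & (1u << (shift - 1))) h++;  // round to nearest
        return std::uint16_t(sign | h);
    }
    std::uint16_t h =
        std::uint16_t(sign | (std::uint32_t(exp) << 10) | (mant >> 13));
    if (mant & 0x1000u) h++;                  // round; carry may bump exponent
    return h;
}
```

The hardware instructions above are of course much faster for bulk data; a scalar routine like this is mainly useful as a fallback or for verifying the SIMD path.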

In Topic: How would I read/write bit by bit from/to a file?

10 April 2015 - 05:24 AM

I'm guessing here that by "decimal" you mean "as text"?

If so, this is your problem:

std::uint8_t F = 10111001;
std::ofstream K("C:/Users/WDR/Desktop/kml.enc", std::ios::binary);
for(int i = 0; i < 256; i++)
{
    K << F;
}


ostream's operator<< always performs formatted text output, even if the stream was opened in binary mode. This is a bit braindead, I know... Try:
std::uint8_t F = 0b10111001; // btw, your version was missing the 0b prefix
std::ofstream K("C:/Users/WDR/Desktop/kml.enc", std::ios::binary);
for(int i = 0; i < 256; i++)
{
    K.write(reinterpret_cast<const char*>(&F), sizeof(F)); // write() takes a const char*
}
You should get a 256-byte file where every byte is 0b10111001.
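Since the thread title asks about writing bit by bit rather than byte by byte: the usual trick is to accumulate bits in a byte-sized buffer and only emit whole bytes. A minimal sketch (the class name and the MSB-first bit order are my own choices):

```cpp
#include <cstdint>
#include <vector>

// Minimal bit writer: buffers bits MSB-first and collects whole bytes.
// A real implementation would stream bytes out via ofstream::write and
// also provide a matching bit reader.
class BitWriter
{
public:
    void writeBit(bool bit)
    {
        current_ = std::uint8_t((current_ << 1) | (bit ? 1u : 0u));
        if (++count_ == 8) {            // a full byte is ready
            bytes_.push_back(current_);
            current_ = 0;
            count_ = 0;
        }
    }

    // Pad the last partial byte with zero bits and return the buffer.
    const std::vector<std::uint8_t>& finish()
    {
        while (count_ != 0) writeBit(false);
        return bytes_;
    }

private:
    std::vector<std::uint8_t> bytes_;
    std::uint8_t current_ = 0;
    int count_ = 0;
};
```

Writing the bits 1,0,1,1,1,0,0,1 produces the single byte 0b10111001, matching the example above; the resulting buffer can then be handed to ofstream::write in one go.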

In Topic: Relation between TFLOPS and Threads in a GPU?

09 April 2015 - 01:13 PM

Peak performance (in FLoating point OPerations per Second = FLOPS) is the theoretical upper limit on how many computations a device can sustain per second. If a Titan X were doing nothing other than computing 1 + 2 * 3, then it could do that 3,072,000,000,000 times per second, and since there are two operations in there (an addition and a multiplication) this amounts to 6,144,000,000,000 FLOPS, or 6.144 TFLOPS. But you only get that speed if you never read any data, never write back any results, and never do anything other than a multiply followed by an addition.
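The arithmetic behind that number is just cores × clock × FLOPs-per-core-per-cycle. As a sanity check (the 3072-core count and the roughly 1 GHz clock are assumptions for a Maxwell-generation Titan X, and the 2 comes from counting a fused multiply-add as two operations):

```cpp
// Theoretical peak = cores * clock * FLOPs retired per core per cycle.
double peak_flops(int cores, double clock_hz, int flops_per_cycle)
{
    return double(cores) * clock_hz * double(flops_per_cycle);
}
// peak_flops(3072, 1.0e9, 2) gives 6.144e12, i.e. 6.144 TFLOPS.
```

Real clocks boost and throttle, so vendor-quoted peak numbers are themselves approximations of this same formula.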

 

A "thread" (and Krohm rightfully warned of its use as a marketing buzzword) is generally understood to be an execution context. If a device executes a program, this refers to the current state, such as the current position in the program, the current values of the local variables, etc.

 

Threads and peak performance are two entirely different things!

 

Some compute devices (some Intel CPUs, some AMD CPUs, Sun Niagara CPUs, and most GPUs) can store more than one execution context, aka "thread", on the chip so that they can interleave the execution of both/all of them. For CPUs at least, this falls under the term "hardware threads", and it is done for performance reasons. But it does not affect the theoretical peak performance of the device, only how much of that peak you can actually use. And the direct relationship between the maximum number of hardware threads, the number actually used, and the achieved performance ... is very complicated. It depends on lots of different factors like memory throughput, memory latency, access patterns, the actual algorithm, and so on.

So if this is what you are asking about, then you might have to look into how GPUs work and how certain algorithms make use of that.

