Lightness1024 | Member Since 06 Aug 2009
Offline | Last Active May 18 2013 08:07 AM
Posted by Lightness1024 on 02 April 2013 - 09:03 AM
Depending on the depth of your displacement, the light is supposedly so much farther away that the vector will not be very different. It will make a difference only at very grazing angles AND in deep regions of the depth map AND with lights very close to the parallaxed surface. That is a lot of conditions, which suggests the calculation overhead to get the correct vector is not necessary.
If you really want that vector: you know the UV used to fetch the color? Then you also know the UV to read the height map. You just have to take the world position of the plane at UV b and move along the inverse normal by a distance dictated by the height map's value (times an artist factor that tunes the depth extent).
To get the world position of b: you can use the rasterized world position passed from the vertex shader through the varyings (which is the world position of a) and displace it according to the ddx/ddy of the depth of the position reprojected into view space, multiplied by the UV difference rescaled into view space (using the min/max of the vertex-buffer coordinates and the world matrix to estimate this scale factor).
This one is complicated and imprecise.
Another way would be determining the vector from a to the impact point.
Or, the one I recommend: find the local position (in plane space) using the UV. Normally the UV should range from 0 to 1 (as a float2) and simply be an exact equivalent of the local coordinate in object space. Then you just need to make it 3D by putting 0 in the Z (since it is a plane) and multiply by the world matrix to get the world coordinate. There you go.
There remains the problem of shadowing: you need to evaluate whether that ray is free to reach the light or not. Same principle as what you did to find b.
Posted by Lightness1024 on 28 March 2013 - 09:27 AM
It is far from natural that updating a few thousand things takes that long.
In Extreme Carnage I'm updating the AI of 2000 to 3000 enemies stored in a linked list (not the fastest thing to iterate over...), doing lots of ray casts for each of them, and it runs in real time (> 60 FPS).
Stronger even: the Intel ray tracing library, and many indie demos (on CPU), can cast millions of rays per second, and one cast is much heavier than updating a sprite id.
So you definitely have a bug. What is the language? Could it be a garbage collection issue rather than a looping issue? Can you run on macOS? If yes, use the built-in Instruments profiler. Or valgrind + cachegrind on Linux. Or VTune or AMD CodeAnalyst on Windows.
Posted by Lightness1024 on 20 March 2013 - 09:23 AM
Area lighting has only recently come to be supported in engines, through light propagation volumes (CryEngine 3) and sparse voxel octree cone tracing (Unreal Engine 4).
It was previously supported through static light mapping, using lengthy final gathering computations. Final gathering is the second step after photon mapping: it generally shoots a fixed number of sample rays per surface patch, distributed over the patch's hemisphere, and each ray brings back the contribution of the nearest photon at its impact point, if it makes sense (the photon has an opposing impact normal, etc.). Preparing such a solution takes minutes or even hours.
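The final-gathering loop just described can be sketched roughly like this. The scene, the photon map, and the ray cast are stubbed out (the stub names are invented for illustration); only the fixed-budget hemisphere sampling structure is the point.

```cpp
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };

// Cosine-weighted direction on the hemisphere around +Z.
Vec3 sampleHemisphere(std::mt19937& rng) {
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    float u1 = uni(rng), u2 = uni(rng);
    float r = std::sqrt(u1), phi = 6.2831853f * u2;
    return { r * std::cos(phi), r * std::sin(phi), std::sqrt(1.0f - u1) };
}

// Stub: a real gatherer traces the ray and looks up the nearest photon
// at the hit point (rejecting photons whose normal opposes the surface).
float nearestPhotonRadiance(const Vec3& /*dir*/) { return 0.5f; }

// Gather irradiance at one surface patch with a fixed ray budget.
float finalGather(int numSamples, std::mt19937& rng) {
    float sum = 0.0f;
    for (int i = 0; i < numSamples; ++i)
        sum += nearestPhotonRadiance(sampleHemisphere(rng));
    // The cosine weight is folded into the sampling distribution.
    return sum / float(numSamples);
}
```

Doing this per patch for an entire light map is what makes the bake take minutes to hours.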
Now, you're free to try and invent your own method. Don't forget to publish it and present it at SIGGRAPH 2013.
Posted by Lightness1024 on 20 March 2013 - 09:08 AM
It is easy to load a model yourself, but then you have to animate it, and that is far from easy, even for a full-time professional. Skinning, animation blending, animation tree editing, skeleton retargeting... it is all very common stuff, but still a multi-year project.
I don't recommend trying it, just go for Unity.
Using OpenGL directly to make a game is kind of a thing of the past.
Posted by Lightness1024 on 17 March 2013 - 09:01 AM
MVC is far from helping parallelization; it just separates data from view. If you have a process that operates in parallel on your data and you were careful to make the data<->view links thread safe, then you can talk about parallelization, but MVC didn't help with it one bit.
Some parallelization can be done with chunk separation if the treatment is independent, like kernels in CUDA/OpenCL or fragments in shader languages. This concept can be applied by the Thrust library on C++ containers, for example, or by OpenMP on loops as compiler pragmas.
Or thread groups managed by hand. None of those concepts relate to MVC.
There are other parallelization ideas in promises and futures, or in functional programming with immutable data, which open the door to parallel treatment, but I have no knowledge of this applied in practice.
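As a minimal sketch of the promises/futures idea, here is independent chunked work dispatched with std::async and joined through futures. The work function and the two-way split are illustrative choices, not something from the post:

```cpp
#include <future>
#include <numeric>
#include <vector>

// Sum one independent chunk of the data.
int sumRange(const std::vector<int>& data, std::size_t begin, std::size_t end) {
    return std::accumulate(data.begin() + begin, data.begin() + end, 0);
}

int parallelSum(const std::vector<int>& data) {
    std::size_t mid = data.size() / 2;
    // The halves don't share mutable state, so both tasks may run on
    // separate threads; get() joins and collects the results.
    auto lo = std::async(std::launch::async, sumRange, std::cref(data), 0, mid);
    auto hi = std::async(std::launch::async, sumRange, std::cref(data), mid, data.size());
    return lo.get() + hi.get();
}
```

Nothing MVC-ish here either: what makes it parallelizable is the independence of the chunks, not how the data is presented.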
How MVC is applied to a game depends on the subsystem, but there is mostly never a choice anyway, because what resides on the graphics card must be a copy most of the time. So your model is the level representation in CPU memory, the view is what is displayed by the engine, and the controller is the player and other dynamic mechanics...
This also applies at smaller scales in various places.
Posted by Lightness1024 on 16 March 2013 - 09:38 AM
Most game companies have their own engine; only newer studios (created two years ago at most) use ready-made engines, and most of them go for Unity because of license fees. CryEngine is a risk because it is too expensive; only Unreal would be a good idea to learn, because the pricing is progressive according to sales. Also, it's the best engine... of all time. Just that. (The v4, that is.) It will teach you the best technologies and practices, in terms of tooling integration etc., and it is probably the best example anyway of what a company with its own engine aims at.
However, it's not a question of resume.
Seriously, if I were to see the resume of a guy bragging with a list of "known engines" or "engines worked with", I would find that so lame that I would put the paper aside and look at the next candidate.
If you want to say that you have experience with an engine, say it naturally in your cover letter.
What is important, rather, is having general 3D knowledge. Read the research papers! When you know who Kajiya, Nishita, Blinn, Kaplanyan, Ramamoorthi, Torrance, Dachsbacher, Hanrahan, Jensen, Schlick, Debevec, Perlin, Nayar, Lefebvre, Crassin, Neyret... are, what their work is and all of its ramifications, then and only then have you conquered the knowledge necessary to continue in this industry.
I insist that it is crucial to read up on all of that, plus NVIDIA research, ATI research and things like GPU Gems and co., not just know an engine, which only gives you practice with the technology at one frame T of its history.
Also, games are not only a matter of graphics, but also of specific game mechanics, tooling, and various other production-pipeline stuff. Embracing the whole "corporate engine" is a plus because you can work more efficiently in a huge codebase while thinking about human factors: not enforcing your own coding rules on everybody is an example of how to ease team work. And you will need diplomacy and politics to help the company move forward; when you want to move your project from svn to git, you're going to need those, I tell you...
I don't know how many hours I could continue with that, but to sum it all up, I wanted to give you another perspective, because you seem hot-headed and a bit stubborn on technology matters. C++ is great, but my personal opinion is that game companies have already passed a turning point where C++ is becoming too expensive. A lot of studios died last year (more than 50), because of a lack of clients or promising projects, says the press; I believe it is rather a problem of being too expensive because of C++, and especially the way it is used for game dev. It is enough to read the paper about EASTL to get a glimpse! They code everything themselves for god knows what obscure reason. Many are predicting the death of AAA games against casual games. The CEO of Crytek himself said that the upcoming generation of consoles is probably the last.
They are all responsible and can only blame themselves. It's not all because of C++; C++ is an awesome language, but the way game companies use it has a big role in this global decline.
I hope I gave you some perspective.
Posted by Lightness1024 on 13 March 2013 - 09:20 AM
The answer is --- "during development and to deal with changes in behavior of OS, API and library functions".
It seems we both agree that once we have our applications working (or even just functions or subsystems working), we almost don't get any errors at all. However, when we write a couple thousand lines of new code, we might have made a mistake, inserted a typo, or misunderstood how some OS/API/library function/service is supposed to work [in some situations]. So that's mostly what error checking is for.
This might imply we can just remove the error checking from each function or subsystem after we get it working. There was a short time in my life when I did that. But I discovered fairly soon why that's not a wise idea. The answer is... our programs are not stand-alone. We call OS functions, we call API functions, we call support library functions, we call functions in other libraries we create for other purposes and later find helpful for our other applications. And sometimes other folks add bugs to those functions, or add new features that we did not anticipate, or handle certain situations differently (often in subtle ways). If we remove all our error catching (rather than disabling it with #ifdef DEBUG or equivalent), we tend to run into extremely annoying and difficult-to-identify bugs at random times in the future as those functions in the OS, APIs and support libraries change.
There is another related problem with the "no error checking" approach too. If our application calls functions in lots of different APIs and support libraries, it doesn't help us much if the functions in those support libraries blow themselves up when something goes wrong. That leaves us with few clues as to what went wrong. So in an application that contains many subsystems, and many support libraries, we WANT those functions to return error values to our main application so we can figure out what went wrong with as little hassle as possible.
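The point about support libraries returning error values rather than blowing up can be sketched like this. All the names here (LoadResult, loadMesh, describe) are invented for illustration:

```cpp
#include <cstdio>
#include <string>

enum class LoadResult { Ok, FileNotFound, BadFormat };

// Library code: never abort; classify the failure and hand it back,
// so the calling application keeps a usable clue about what went wrong.
LoadResult loadMesh(const std::string& path) {
    std::FILE* f = std::fopen(path.c_str(), "rb");
    if (!f) return LoadResult::FileNotFound;
    // ... parse here; malformed data would return LoadResult::BadFormat ...
    std::fclose(f);
    return LoadResult::Ok;
}

// Application code: turn the error value into a diagnostic.
const char* describe(LoadResult r) {
    switch (r) {
        case LoadResult::Ok:           return "ok";
        case LoadResult::FileNotFound: return "file not found";
        default:                       return "bad format";
    }
}
```

The same function dropped into a different application still reports its failures the same way, which is exactly the subsystem-reuse property argued for below.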
You seem like a thoughtful programmer, so I suspect you do what I do --- you try to write much, most or almost all of your code in an efficient but general way so it can be adopted as a subsystem in other applications. While the techniques you prefer work pretty well in "the main application", they aren't so helpful if portions of your applications become a support library. At this point in my career, almost every application I write (even something huge like my entire 3D simulation/graphics/game engine) is designed to become a subsystem in something larger and more inclusive. So I sorta think of everything I write now as a subsystem, and worry how convenient and helpful it will be for an application that adopts it.
Anyway, those are my thoughts. No reason you need to agree or follow my suggestions. If you only write "final apps" that will never be subsystems in other apps, your approaches are probably fine. I admit to never having programmed with RAII, and to generally avoiding nearly everything that isn't "lowest-level" and "eternal". The "fads" never end, and 99% of everything called "a standard" turns out to be gone in 5 or 10 years... which obsoletes the applications that adopted those fads/standards along with them. I never run into these problems, because I never adopt any standards that don't look reliably "eternal" to me. Conventional errors are eternal. OS exception mechanisms are eternal. Also, all the functions in my libraries are C functions and can be called by C applications compiled with C compilers (in other words, the C function protocol is eternal). This makes my applications as generally applicable as possible... not just to my own applications, but to the widest possible variety of others too.
There's no reason you or anyone else needs to make these same policy decisions. I am fully aware that most people chase fads their entire lives, and most of the code they write becomes lame, problematic or worthless after a few years --- not because their code was bad, but because assumptions they and support libraries adopted are replaced by other fads or become obsolete. All I can say is, my policies accomplish what I want extremely effectively. Most of the code I write is part of a very large, very long term application that will end up taking 20 years to complete (and will then be enhanced and extended indefinitely). So I literally must not adopt any fads, or anything that might become a fad in the next 30 years. You would be completely correct to respond that not everyone needs to write in such an "eternal", "bomb proof" and "future proof" manner as I do. People can make their own decisions. That's fine with me. I hope that's fine with you too.
One final comment that is also somewhat specific to my long term application (and therefore a requirement for every subsystem I develop). This application must be able to run for years, decades, centuries. True, I don't count on this: the application is inherently designed to recognize and create "stable points" (sorta like "restore points" in Windows), and therefore be able to crash, restart and pick up where it left off without "losing its mind". But the intention isn't to crash, restart, restore very often... the attempt is to design in such a way that this never happens. Yet the application must be able to handle this situation reliably, efficiently and effectively. Perhaps the best example of this kind of system is an exploration spacecraft that travels and explores asteroids, moons, planets (from orbit) and the solar system in general. The system must keep working, no matter what. And if "no matter what" doesn't work out, it needs to restart-restore-continue without missing a beat. Now you'll probably say, "Right... so go ahead and let it crash". And I'd say that maybe that would work... maybe. But physical systems are too problematic for this approach, in my opinion. Not only do physical machines wear and break, they go out of alignment, they need to detect problems, realign themselves, reinitialize themselves, replace worn or broken components when necessary, and so forth. And those are only the problems with the mechanisms themselves. The number of unexpected environments and situations that might be encountered is limitless, and the nature of many of these is not predictable in advance (except in the very most general senses).
I suppose I have developed a somewhat different way of looking at applications as a result of needing to design something so reliable. It just isn't acceptable to let things crash and restart again. That would lead to getting stuck in endless loops... trying to do something, failing, resetting, restarting... and repeating endlessly. A seriously smart system needs to detect and record every problem it can, because that is all evidence that the system will need to figure out what it needs to fix, when it needs to change approach, how it needs to change its approach, and so forth. This leads to a "never throw away potentially useful information" premise. Not every application needs to be built this way. I understand that.
In short: why choose to have your code full of error checking (which breaks code flow and makes the code harder to read - that is really undeniable, IMO) to handle errors that are rare and unrecoverable anyway? Leave those to exceptions (or just crash the process), and keep the error checking code for cases where you can intelligently handle them and take appropriate action. It's best not to conflate exceptional conditions with expected errors.
I'm not sure what "error driven code" is supposed to be. In my programs, including my 3D simulation/graphics/game engine, errors are extremely rare, pretty much vanishingly rare. You could say, this (and many programs) are "bomb proof" in the sense that they are rock solid and have no "holes". Unfortunately, things go wrong in rare situations with OS API functions and library functions, including OpenGL, drivers, and so forth... so even "a perfect application" needs to recognize and deal with errors.
You're not the only one to take this approach; in a less strict fashion, the Linux kernel guidelines somehow follow that mentality as well. I like the idea, though I'll never practice it, because I love my "fads" and high-level libraries too much; they're so much fun. It's fun to learn and apply practices, e.g. from patterns, or from boost stuff like optional, tuples, MPL, functions, lambdas: typical fads. But genetic evolution works by keeping the best. Some companies encourage ideas so that out of the emulsion they can keep the best (free Friday). If we try lots of software engineering stuff, we are free to throw away 80% of it after 5 years and decide it was not so nice after the hype has passed, but the 20% could stick around for the next 50 years, so it was worth the effort.
Posted by Lightness1024 on 10 March 2013 - 03:16 AM
I'm a natural advocate of Stroppy's solution, which corresponds better to the equivalent in pseudo code. Errors can be handled via exceptions for better messages/codes.
especially now that there is std::move.
Posted by Lightness1024 on 06 March 2013 - 08:20 AM
You can use libfreetype and ask the library for that metric. You can then generate your own bitmaps that you copy to DirectX surfaces. Otherwise you just hope that DirectX font rendering will be close enough; if an approximation is enough for you, that will do.
Posted by Lightness1024 on 06 March 2013 - 08:16 AM
Maybe learn Unity; it would teach you sane designs like entity components. And you're in luck: it is C#.
Also, I believe there is a beginner's guide somewhere on this website.
Posted by Lightness1024 on 03 March 2013 - 08:54 AM
Design patterns are simply patterns that are frequently observed in good (or bad) software.
Years ago, when I started with C and SDL, I didn't care about design patterns at all; now I find myself surrounded by their chaos and can't simply code without leaning on one of them.
So my question is, where to start in design patterns?
People say "I did this thing lots of times", and name it, and talk about it publicly, and then it gets called a pattern.
You can read books or web pages on design patterns all you want, they will help teach you things you can do, and things that are not recommended.
The thing about the patterns is that they are useful in discussing what is going on.
Patterns are not some magical thing that you must use; they are just designs that people noticed were frequently found in code.
There are various wikis filled up with thousands of design patterns. If you don't have a design pattern that fits your goals, write a design of your own.
Yeah, but naming something a pattern also allows a community to refine it to its purest and most robust form, like boost did with smart pointers for example, or iterators, optional, variants, destruction closures and whatever else you name it...
Entities/components, seriously, is not a pattern. A pattern is a factory, or a visitor; it is a part of the code (a very small part).
Entity/component, however, is the paradigm you choose to code with: instead of functional, or object oriented, or procedural, you choose entity/component.
It is much more than a pattern to me; it is a paradigm. It encompasses your whole program, and you code into it, not the other way around.
Though, same property: naming it allows the community to refine it.
I see one big issue, which was covered by some of the 3 links: adoption by peers in the same company. Ahem; the Tony Hawk's guy had a code base that was ONLY 3 years old, and very, very understanding coworkers, before the adoption finally went through. Frankly, in my company, the chance of adoption is zero. Yeah, absolute-zero Kelvin cold.
Posted by Lightness1024 on 03 March 2013 - 03:59 AM
I tried that in almost exactly your situation, but it was a failure (only 2 days spent on it, though).
For three reasons:
- difficult to find the correct smearing direction for each pixel, though I'm sure there is a way had I put in a bit more effort
- serious issue with desaturation over the whole sky: to get "god rays" you somehow need to "add haze", and that is terrible for the result; your image becomes whiter and desaturated overall. I tried various operators, like multiplication, or a mix of mul and add; none were good. Anyway, consider using some kind of pow() function to add god rays only near the solid angle that subtends the sun, because physically, in-scattering mainly happens on light paths around this angle anyway.
- impossible to use with tiled rendering.
I will add one reason that could be a bother in your particular case: you have animated clouds, so you will need to re-run this, and it is damn costly (100 samplings per pixel). So I suggest you use the fact that your rotation is very slow to smear progressively (in multiple passes over multiple frames).
But again, if I were you, I would still attempt this method; it seemed the easiest and best fit for cool results in a simple fashion for this case. I'm just saying it will require serious engineering. Kenny Mitchell gives us a theory; in practice it's always another story.
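The two ingredients above, the pow() falloff around the sun's solid angle and the ~100-sample radial smear, can be sketched in plain C++ standing in for shader code. The exponent, sample count and decay are illustrative knobs, not values from the post:

```cpp
#include <algorithm>
#include <cmath>
#include <functional>

// cosAngle = dot(viewDir, sunDir); a high sharpness confines the
// in-scattering boost to directions near the sun, limiting desaturation.
float godRayWeight(float cosAngle, float sharpness) {
    return std::pow(std::max(cosAngle, 0.0f), sharpness);
}

// Screen-space radial smear toward the sun: march from the pixel's UV
// toward sunUV, accumulating brightness samples with a decay per step.
float radialSmear(std::function<float(float, float)> brightness,
                  float u, float v, float sunU, float sunV,
                  int numSamples, float decay) {
    float du = (sunU - u) / numSamples, dv = (sunV - v) / numSamples;
    float sum = 0.0f, weight = 1.0f, total = 0.0f;
    for (int i = 0; i < numSamples; ++i) {
        u += du; v += dv;
        sum += brightness(u, v) * weight;
        total += weight;
        weight *= decay; // samples far from the pixel contribute less
    }
    return sum / total; // normalized so pure haze doesn't brighten the frame
}
```

Multiplying the smear by the weight, instead of adding haze everywhere, is one way to attack the whole-sky desaturation problem described in the second bullet.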
Posted by Lightness1024 on 19 February 2013 - 09:05 AM
Yes, my clouds can move, in lower resolution; after any move is done, the high resolution bakes itself into a half cube (upper sky dome) using tiled rendering.
It's too slow for pure real time; it would drop the system to around 1 or 2 FPS at full resolution in full real time.
The biggest issue is the number of times the fractal needs to be evaluated per pixel, knowing the fractal is made by iteratively reading a noise texture with UV coordinates that are multiplied exponentially (octaves). I calculated that at max quality, one pixel = 1800 texture reads. Of course it only works so well because the texture is small enough to fit in the cache. But I would even suggest using registers (constant buffers) to pass an even smaller noise base; that could help, I guess.
Moreover, in my technique the shader was so complex that on DX9 it was difficult to compile, because I would very often exceed the max number of registers: the presence of loops, dependence on variables outside the loops, and arrays whose sizes multiply with the nested loops...
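The octave accumulation behind the per-pixel cost can be sketched on the CPU as classic fBm: each octave re-reads the same small noise source at exponentially scaled UVs. The hash-based noise() here is a stand-in for the tiled noise texture, and the gain/lacunarity values are illustrative:

```cpp
#include <cmath>

// Cheap value-noise stand-in for a small, cache-resident noise-texture fetch.
float noise(float u, float v) {
    float s = std::sin(u * 12.9898f + v * 78.233f) * 43758.5453f;
    return s - std::floor(s); // in [0, 1)
}

float fbm(float u, float v, int octaves) {
    float sum = 0.0f, amp = 0.5f, freq = 1.0f, norm = 0.0f;
    for (int i = 0; i < octaves; ++i) {
        sum  += amp * noise(u * freq, v * freq);
        norm += amp;
        freq *= 2.0f;  // UVs scale exponentially per octave
        amp  *= 0.5f;  // higher octaves add finer, fainter detail
    }
    return sum / norm; // normalized back to [0, 1)
}
```

Every call to noise() stands for one texture read, so nesting this inside ray-march steps is how the read count multiplies into the hundreds per pixel.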
Anyway, the night, sunset and dawn are handled with a mix of empirical tunings and the effects of aerial perspective (Rayleigh). The empirical stuff is mainly the intensity of the sun light, and the sky light that I use as a second source from the top. Normally you should evaluate the irradiance of the sky as a 3D function (encoded into spherical harmonics or as a lookupable envmap), but a single scalar is more than enough in the sky's case because of its uniformity across the hemisphere.
The quality of the technique is that changing the light direction really changes the volume impression we get of the clouds. It does not use scattering formulas, though; it is already slow enough, so instead it uses empirical formulas.
About the parameters: there are literally tens of them (~50), but I reduced them to 10 "master" parameters that drive the others through empirical ramps I adjusted to look "cute". So the user can move the sun, change the time of day, or change the amount of coverage, and in any combination it remains controlled and the best compromise. It took days to adjust.