Lightness1024, Member Since 06 Aug 2009
Offline Last Active Sep 10 2013 03:36 AM
- Group Members
- Active Posts 179
- Profile Views 3,797
- Submitted Links 0
- Member Title Member
- Age Unknown
- Birthday Unknown
Posted by Lightness1024 on 02 April 2013 - 09:03 AM
Depending on the depth of your displacement, the light is presumably so much farther away that the vector will not be very different. It will only make a difference at very grazing angles AND in deep regions of the depth map AND with lights very close to the parallaxed surface. That is a lot of conditions, which suggests that the overhead of computing the correct vector is not necessary.
If you really want that vector: you know the UV used to fetch the color? Then you also know the UV used to read the height map. You just have to take the world position of the plane at UV b and go along the inverse normal for a distance dictated by the height map's value (times an artist factor that tunes the depth extent).
To get the world position of b: you can take the rasterized world position passed from the vertex shader through the varyings (which is the world position of a) and displace it according to the ddx/ddy of the depth of the position reprojected into view space, multiplied by the UV difference rescaled into view space (using the min/max of the vertex buffer coordinates and the world matrix to estimate this scale factor).
This one is complicated and imprecise.
Another way would be to determine the vector from a to the impact point.
Or, the one I recommend: find the local position (in plane space) using the UV. Normally the UV ranges from 0 to 1 (as a float2) and is simply an exact equivalent of the local coordinate in object space. Then you just need to make it 3D by putting 0 in the Z (since it is a plane) and multiply by the world matrix to get the world coordinate. There you go.
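As a rough sketch of that recommended method, the UV-to-world lift could look like this on the CPU side (hand-rolled column-major matrix math; the names and conventions here are mine, not from any particular engine):

```cpp
#include <array>

// Hypothetical 4x4 column-major matrix, and a 3D point.
using Vec3 = std::array<float, 3>;
using Mat4 = std::array<float, 16>;

// Transform a point (w = 1) by a column-major 4x4 matrix.
Vec3 transformPoint(const Mat4& m, const Vec3& p)
{
    Vec3 r;
    for (int i = 0; i < 3; ++i)
        r[i] = m[i] * p[0] + m[4 + i] * p[1] + m[8 + i] * p[2] + m[12 + i];
    return r;
}

// UV in [0,1] assumed to be an exact equivalent of the plane's local XY:
// lift it to 3D with z = 0 and take it to world space.
Vec3 planeUvToWorld(const Mat4& world, float u, float v)
{
    return transformPoint(world, Vec3{u, v, 0.0f});
}
```

In a shader this collapses to one matrix multiply; only the world matrix is needed to land in world space.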
That leaves the problem of shadowing: you need to evaluate whether that ray is free to reach the light or not, using the same principle as what you did to find b.
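That visibility test can be sketched as a march along the light ray through the height field. This toy 1D version only shows the principle (all names invented; a real shader would sample a 2D height map along the light's UV-space direction):

```cpp
#include <cstddef>
#include <vector>

// 1D slice of a height field, heights in [0,1]. March from the surface
// point towards the light and report whether any height sample rises
// above the ray, i.e. whether the point is shadowed. 'startX' is in
// texel units, 'startH' is the surface height there, and 'slope' is the
// light direction's rise per texel.
bool isShadowed(const std::vector<float>& height,
                std::size_t startX, float startH, float slope)
{
    float rayH = startH;
    for (std::size_t x = startX + 1; x < height.size(); ++x)
    {
        rayH += slope;                     // walk up along the light ray
        if (rayH >= 1.0f) return false;    // left the field: light reached
        if (height[x] > rayH) return true; // terrain blocks the ray
    }
    return false;
}
```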
Posted by Lightness1024 on 28 March 2013 - 09:27 AM
It is far from natural that updating a few thousand things takes that long.
In Extreme Carnage I'm updating the AI of 2000 to 3000 enemies stored in a linked list (not the fastest thing to iterate over...), doing lots of ray casts for each of them, and it runs in real time (> 60 FPS).
Stronger still: Intel's ray tracing library, and many indie demos (on CPU), are able to cast millions of rays per second, and one ray cast is much heavier than updating a sprite.
So you definitely have a bug. What is the language? Could it be a garbage collection issue rather than a looping issue? Can you run on Mac OS? If yes, use the built-in Instruments profiler. Or valgrind + cachegrind on Linux. Or VTune or AMD CodeAnalyst on Windows.
Posted by Lightness1024 on 20 March 2013 - 09:23 AM
Area lighting has only recently come to be supported in engines, through the use of light propagation volumes (CryEngine 3) and sparse voxel octree cone tracing (Unreal Engine 4).
Before that it was supported through static light mapping, using lengthy final gathering computations. Final gathering is the second step after photon mapping: it generally shoots a fixed number of sample rays over the hemisphere of each surface patch and gathers the contribution of the nearest photon at each impact point, when that makes sense (the photon has an opposing impact normal, etc.). Preparing such a solution takes minutes or even hours.
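For reference, the hemisphere sampling that final gathering relies on can be sketched like this (a cosine-weighted mapping, one common choice among several):

```cpp
#include <array>
#include <cmath>

// Map two uniform random numbers in [0,1) to a cosine-weighted direction
// on the hemisphere around +Z -- one classic way a final gatherer picks
// its fixed budget of sample rays per surface patch. Sketch only; a real
// gatherer would also trace these rays and look up nearby photons.
std::array<double, 3> cosineSampleHemisphere(double u1, double u2)
{
    const double pi = std::acos(-1.0);
    const double r = std::sqrt(u1);       // radius on the unit disk
    const double phi = 2.0 * pi * u2;     // angle on the disk
    // Project the disk sample up onto the unit hemisphere.
    return {r * std::cos(phi), r * std::sin(phi), std::sqrt(1.0 - u1)};
}
```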
Now you're free to try and invent your own method. Don't forget to publish it and present it at SIGGRAPH 2013.
Posted by Lightness1024 on 20 March 2013 - 09:08 AM
It is easy to load a model yourself, but then you have to animate it, and that is far from easy, even for a full-time professional. Skinning, animation blending, animation tree editing, skeleton retargeting... it is all very common stuff, yet a multi-year project.
I don't recommend trying it; just go for Unity.
Using OpenGL directly to make a game is kind of a thing of the past.
Posted by Lightness1024 on 17 March 2013 - 09:01 AM
MVC is far from helping parallelization; it just separates data from view. If you have a process that operates on your data in parallel, and you were careful to make the data<->view links thread safe, then you can talk about parallelization, but MVC didn't help it one bit.
Some parallelization can be done with chunk separation when the treatment is independent, like kernels in CUDA/OpenCL or fragments in a shader language. This concept can be applied to C++ containers by the Thrust library, for example, or to loops by OpenMP compiler pragmas.
Or thread groups managed by hand. None of these concepts relate to MVC.
There are other parallelization ideas in promises and futures, or in functional programming with immutable data, which open the door to parallel treatment, but I have no knowledge of these applied in practice.
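For what it's worth, the chunk-separation idea can be sketched with standard C++ futures (standard library only; the default launch policy lets the runtime decide whether to actually spawn a thread):

```cpp
#include <cstddef>
#include <future>
#include <numeric>
#include <vector>

// Sum two independent halves of a container, with a future carrying one
// partial result back: "chunk separation when the treatment is
// independent", in plain C++.
long parallelSum(const std::vector<long>& data)
{
    const std::size_t half = data.size() / 2;
    auto front = std::async([&data, half] {
        return std::accumulate(data.begin(), data.begin() + half, 0L);
    });
    const long back = std::accumulate(data.begin() + half, data.end(), 0L);
    return front.get() + back;  // blocks (or runs deferred) until done
}
```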
How MVC is applied to a game depends on the subsystem, but there is mostly no choice anyway, because what resides on the graphics card must be a copy most of the time. So your model is the level representation in CPU memory, the view is what is displayed by the engine, and the controller is the player and other dynamic mechanics...
this also applies on smaller scales in various places.
Posted by Lightness1024 on 16 March 2013 - 09:38 AM
Most game companies have their own engine; only newer studios (created two years ago at most) use ready-made engines, and most of them go for Unity because of the license fees. CryEngine is a risk because it is too expensive; only Unreal would be a good one to learn, because its pricing scales progressively with sales. It is also the best engine... of all time. Just that. (The v4, that is.) It will teach you the best technologies and practices in terms of tooling integration, etc., and it is probably the best example anyway of what a company with its own engine aims at.
However, it's not a question of the resume.
Seriously, if I saw the resume of a guy bragging with a list of "known engines" or "engines worked with", I would find that so lame that I would put the paper aside and look at the next candidate.
If you want to say that you have experience with an engine, say it naturally in your cover letter.
But what matters, rather, is having general 3D knowledge. Read the research papers! When you know who Kajiya, Nishita, Blinn, Kaplanyan, Ramamoorthi, Torrance, Dachsbacher, Hanrahan, Jensen, Schlick, Debevec, Perlin, Nayar, Lefebvre, Crassin, Neyret... are, what their work is and all of its ramifications, then and only then have you conquered the knowledge necessary to carry on in this industry.
I insist that it is crucial to read all of that, plus NVIDIA research, ATI research and things like GPU Gems and co., rather than to know an engine, which only gives you practice with the technology at one frame T of its history.
Also, games are not only a matter of graphics, but also of specific game mechanics, tooling and various other production-pipeline-related things, and embracing the whole "corporate engine" is a plus because you can work more efficiently in a huge codebase while thinking about human factors. Not enforcing your own coding rules on everybody is one example of how to ease teamwork. You will also need diplomacy and politics to help the company move forward: when you want to move your project from svn to git, you're going to need those, I tell you...
I don't know how many hours I could go on with that, but to sum it all up, I wanted to give you another perspective, because you seem hot-headed and a bit stubborn on technology matters. C++ is great, but my personal opinion is that game companies have already passed a turning point where C++ is becoming too expensive. A lot of studios died last year (more than 50), because of a lack of clients or promising projects, says the press; I believe it is rather a problem of being too expensive because of C++, and especially because of the way it is used for game dev. It is enough to read the paper about EASTL to get a glimpse: they code everything themselves for god knows what obscure reason. Many are predicting the death of AAA games against casual games. The CEO of Crytek himself said that the upcoming generation of consoles is probably the last.
They are all responsible and they can only blame themselves. It's not all because of C++; C++ is an awesome language, but the way game companies use it has a big role in this global decline.
I hope I gave you some perspective.
Posted by Lightness1024 on 14 March 2013 - 10:51 AM
Hey there are some really captivating stories there.
richardjdare: yours was kind of sad :'(
Schrompf: yours was a bit bitter.
And the best hacker medal goes to: DracoLacertae.
My turn then. I'm self-taught at first, then academy-taught, and both worlds complemented each other very well.
At around 13 I started with QBasic, but it took me a year to get good enough at imperative algorithmics to start making a game, basically a Mario clone.
I had a mentor at the time, the same age but about two years ahead in terms of comprehension, and he had a knack for actually reading books, which I hadn't.
Then I went to Visual Basic 6, following in the tracks of my mentor.
(by the way, who is this man : http://www.irisa.fr/alf/index.php?option=com_content&view=article&id=94&Itemid=15)
I made several little games, like a Worms game and a live-chat HTML formatter for messages in AOL chatrooms.
I also did a serious Worm game on the TI-89 calculator, but its integrated BASIC language was too slow. Also, I had to print the whole code out because the screen was too small, and my code was all in one huge function with lots of gotos.
So I went over to C to harvest performance on that machine; gcc was my first C teacher. I wrote another horribly coded but perfectly functional game called "envahisseurs de l'espace" (Space Invaders).
Right after that, I moved back to PC, and with an illegal copy of Visual Studio 6 I started my biggest indie project to date: Projet SERHuM. I planned on it taking 5 years, but 5 years later I was only at about 10% of the whole dev, so I gave up.
In the meantime I had joined a "classe préparatoire", a special elitist course that prepares for French engineering schools.
So basically I ended up with the 40 heads of class of the town's high schools, doing math (12 hrs/week of courses), physics (11 h/w), electronics (5 h) and mechanics (5 h), plus 4 hours of severely graded weekly tests, for two years. And I'm not even mentioning the almost equivalent time you are expected to work at home.
During this period the teachers shouted at us and told us we were hopeless, yet at the same time they couldn't stop bragging that this course path is the golden one and that all the most important people of the country took it (which is 70% true).
Then I took the entrance exams for the two major lists of "Grandes Écoles" (engineering schools) of the country, and some other private ones. I got accepted to the private ones, but the quality of the teaching was not as good as my first public school choice, the ENSEIRB. So I went there for 3 years and could never have been happier. We were taught true computer science from the Unix perspective all along. The school was associated with the Bordeaux 1 University laboratory (the LaBRI), which is where Schlick published his PhD (for those who have seen his name doing Fresnel reflections in shaders, for example).
In parallel to the engineering school, I took supplementary lessons from the University to complete a Master's degree (which is generally looked down on by engineers, because the engineering diploma is considered superior).
This allowed me to study multimedia from the academic point of view, so I learned the canonical way: color spaces, Fourier and Laplace transforms, CELP coders, image processing operators, as well as the classic literature of image rendering theory: the rendering equation and such.
I also had to review Antoine Bouthors' papers about cloud rendering (http://www-evasion.imag.fr/Membres/Antoine.Bouthors/) during my Master's, while doing other school projects like a compiler with flex and yacc, a distributed compilation system to learn networking, proper third-normal-form databases, and assistant-researcher work making graphics visualizers for a task-scheduling set of libraries/algorithms that the LaBRI is working on (http://runtime.bordeaux.inria.fr/Runtime/).
After that I went to Japan to do some research on supercomputers, then back in France I worked 4 years at e-on software, which gave me my greatest skill leap after my internship at Etranges Libellules. E-on software has many people who graduated from the best schools of the country, Centrale and Polytechnique, and even if I had some practical C++ tricks to teach, I had a lot of work practices to learn, and things about 3D rendering. This gave me the chance to attend SIGGRAPH with a full conference pass, and as an exhibitor as well, since we were showing Vue and LumenRT at our booth.
While there I could implement crazy stuff like a message-based OpenGL engine, water rendering, caustics, tree rendering, cloud rendering and even real-time indirect lighting...
But I decided it was the time to go back to Japan and now, believe it or not, I work at the desk just beside L.Spiro at tri-Ace, and I do tooling for artists and designers.
As an indie, I previously presented my 2D car game on gamedev: http://www.gamedev.net/topic/564828-extreme-carnage---shoot-cars-buy-weapons-plant-defense-turrets/
I also did Nuclear Age on the same engine: http://forum.games-creators.org/showthread.php?t=7837
and extracted the engine into: http://sourceforge.net/projects/carnage-engine/
and many other little stuffs.
What I learned about self-teaching is that there is a severe limit. Isolation and self-learning can get you somewhere, but when you are surrounded by amazingly intelligent people you suddenly realize that there is a "next level", and you strive to go play in that same playground. Basically, you're pulled forward by the "masters" of the field. Then it all becomes so thrilling. You understand more and more with the years of experience, the research papers read, re-read, re-re-read...
You realize that the world is very small, and you are generally not more than one person away from knowing, e.g., the CEO of NVIDIA, Carmack, Torvalds, demo groups like Farbrausch or, in my case, the guys of Narbacular Drop (Portal, Portal 2...), Cyril Crassin or Eric Bruneton. Yeah, even you, jcabeleira: we know each other through one person, who is one of my colleagues right now.
To all the community I say: you all rock, let us all make great games!
Posted by Lightness1024 on 13 March 2013 - 09:20 AM
The answer is --- "during development and to deal with changes in behavior of OS, API and library functions".
It seems we both agree that once we have our applications working (or even just functions or subsystems working), we almost don't get any errors at all. However, when we write a couple thousand lines of new code, we might have made a mistake, inserted a typo, or misunderstood how some OS/API/library function/service is supposed to work [in some situations]. So that's mostly what error checking is for.
This might imply we can just remove the error checking from each function or subsystem after we get it working. There was a short time in my life when I did that. But I discovered fairly soon why that's not a wise idea. The answer is... our programs are not stand-alone. We call OS functions, we call API functions, we call support library functions, we call functions in other libraries we create for other purposes and later find helpful for our other applications. And sometimes other folks add bugs to those functions, or add new features that we did not anticipate, or handle certain situations differently (often in subtle ways). If we remove all our error catching (rather than disabling it with #ifdef DEBUG or equivalent), we tend to run into extremely annoying and difficult-to-identify bugs at random times in the future, as those functions in the OS, APIs and support libraries change.
There is another related problem with the "no error checking" approach too. If our application calls functions in lots of different APIs and support libraries, it doesn't help us much if the functions in those support libraries blow themselves up when something goes wrong. That leaves us with few clues as to what went wrong. So in an application that contains many subsystems, and many support libraries, we WANT those functions to return error values to our main application so we can figure out what went wrong with as little hassle as possible.
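To make the "return error values" point concrete, a minimal sketch of that style (all names invented, not from any particular codebase): the helper reports *what* failed instead of aborting, and the layer above decides what to do.

```cpp
#include <cstdio>
#include <string>

// Possible outcomes of the (hypothetical) config loader below.
enum class Status { Ok, FileMissing, BadFormat };

// Read a single integer from a file, reporting failures to the caller
// instead of crashing or silently continuing.
Status loadConfig(const std::string& path, int& outValue)
{
    std::FILE* f = std::fopen(path.c_str(), "r");
    if (!f)
        return Status::FileMissing;          // recoverable, caller decides
    int value = 0;
    const bool parsed = (std::fscanf(f, "%d", &value) == 1);
    std::fclose(f);
    if (!parsed)
        return Status::BadFormat;
    outValue = value;
    return Status::Ok;
}
```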
You seem like a thoughtful programmer, so I suspect you do what I do --- you try to write much, most or almost all of your code in an efficient but general way so it can be adopted as a subsystem in other applications. While the techniques you prefer work pretty well in "the main application", they aren't so helpful if portions of your applications become a support library. At this point in my career, almost every application I write (even something huge like my entire 3D simulation/graphics/game engine) is designed to become a subsystem in something larger and more inclusive. So I sorta think of everything I write now as a subsystem, and worry how convenient and helpful it will be for an application that adopts it.
Anyway, those are my thoughts. No reason you need to agree or follow my suggestions. If you only write "final apps" that will never be subsystems in other apps, your approaches are probably fine. I admit to never having programmed with RAII, and to generally avoiding nearly everything that isn't "lowest-level" and "eternal". The "fads" never end, and 99% of everything called "a standard" turns out to be gone in 5 or 10 years... which obsoletes the applications that adopted those fads/standards along with them. I never run into these problems, because I never adopt any standards that don't look reliably "eternal" to me. Conventional errors are eternal. OS exception mechanisms are eternal. Also, all the functions in my libraries are C functions and can be called by C applications compiled with C compilers (in other words, the C function protocol is eternal). This makes my applications as generally applicable as possible... not just to my own applications, but to the widest possible variety of others too.
There's no reason you or anyone else needs to make these same policy decisions. I am fully aware that most people chase fads their entire lives, and most of the code they write becomes lame, problematic or worthless after a few years --- not because their code was bad, but because assumptions they and support libraries adopted are replaced by other fads or become obsolete. All I can say is, my policies accomplish what I want extremely effectively. Most of the code I write is part of a very large, very long term application that will end up taking 20 years to complete (and will then be enhanced and extended indefinitely). So I literally must not adopt any fads, or anything that might become a fad in the next 30 years. You would be completely correct to respond that not everyone needs to write in such an "eternal", "bomb proof" and "future proof" manner as I do. People can make their own decisions. That's fine with me. I hope that's fine with you too.
One final comment that is also somewhat specific to my long term application (and therefore a requirement for every subsystem I develop). This application must be able to run for years, decades, centuries. True, I don't count on this: the application is inherently designed to recognize and create "stable points" (sorta like "restore points" in Windows), and is therefore able to crash, restart and pick up where it left off without "losing its mind". But the intention isn't to crash, restart and restore very often... the attempt is to design in such a way that this never happens. Yet the application must be able to handle this situation reliably, efficiently and effectively. Perhaps the best example of this kind of system is an exploration spacecraft that travels and explores asteroids, moons, planets (from orbit) and the solar system in general. The system must keep working, no matter what. And if "no matter what" doesn't work out, it needs to restart-restore-continue without missing a beat. Now you'll probably say, "Right... so go ahead and let it crash". And I'd say that maybe that would work... maybe. But physical systems are too problematic for this approach in my opinion. Not only do physical machines wear and break, they go out of alignment, they need to detect problems, realign themselves, reinitialize themselves, replace worn or broken components when necessary, and so forth. And those are only the problems with the mechanisms themselves. The number of unexpected environments and situations that might be encountered is limitless, and the nature of many of these is not predictable in advance (except in the very most general senses).
I suppose I have developed a somewhat different way of looking at applications as a result of needing to design something so reliable. It just isn't acceptable to let things crash and restart again. That would lead to getting stuck in endless loops... trying to do something, failing, resetting, restarting... and repeating endlessly. A seriously smart system needs to detect and record every problem it can, because that is all evidence that the system will need to figure out what it needs to fix, when it needs to change approach, how it needs to change its approach, and so forth. This leads to a "never throw away potentially useful information" premise. Not every application needs to be built this way. I understand that.
In short: why choose to have your code full of error checking (which breaks code flow and makes the code harder to read - that is really undeniable, IMO) to handle errors that are rare and unrecoverable anyway? Leave those to exceptions (or just crash the process), and keep the error checking code for cases where you can intelligently handle them and take appropriate action. It's best not to conflate exceptional conditions with expected errors.
I'm not sure what "error driven code" is supposed to be. In my programs, including my 3D simulation/graphics/game engine, errors are extremely rare, pretty much vanishingly rare. You could say, this (and many programs) are "bomb proof" in the sense that they are rock solid and have no "holes". Unfortunately, things go wrong in rare situations with OS API functions and library functions, including OpenGL, drivers, and so forth... so even "a perfect application" needs to recognize and deal with errors.
You're not the only one to take this approach; in a less strict fashion, the Linux kernel guidelines somehow follow that mentality as well. I like the idea, though I'll never practice it, because I love my "fads" and high-level libraries too much: they're so much fun. It's fun to learn and apply practices, e.g. from patterns, or from Boost stuff like optional, tuples, MPL, functions, lambdas. Typical fads. But genetic evolution works by keeping the best. Some companies encourage ideas so that out of the emulsion they can keep the best (free Fridays). If we try lots of software engineering things, we are free to throw 80% of them away after 5 years and decide they were not so nice once the hype has passed, but the 20% could stick around for the next 50 years, so it was worth the effort.
Posted by Lightness1024 on 10 March 2013 - 03:16 AM
I'm a natural advocate of Stroppy's solution, which corresponds better to the equivalent in pseudo code. Errors can be handled via exceptions for better messages/codes.
Especially now that there is std::move.
Posted by Lightness1024 on 06 March 2013 - 08:20 AM
You can use libfreetype and ask the library for that metric; you can then generate your own bitmaps that you copy to DirectX surfaces. Otherwise, you just hope that DirectX font rendering will be close enough; if an approximation is enough for you, that will do.
Posted by Lightness1024 on 06 March 2013 - 08:16 AM
Maybe learn Unity; it would teach you sane designs like entity components, and you're in luck: it is C#.
Also, I believe there is a beginner's guide somewhere on this website.
Posted by Lightness1024 on 03 March 2013 - 08:54 AM
Design patterns are simply patterns that are frequently observed in good (or bad) software.
Years ago when I started with C and SDL I didn't care about design patterns at all, and now I find myself surrounded by their chaos and simply can't code without leaning on one of them.
So my question is: where do I start with design patterns?
People say "I did this thing lots of times", and name it, and talk about it publicly, and then it gets called a pattern.
You can read books or web pages on design patterns all you want, they will help teach you things you can do, and things that are not recommended.
The thing about the patterns is that they are useful in discussing what is going on.
Patterns are not some magical thing that you must use; they are just designs that people noticed were frequently found in code.
There are various wikis filled up with thousands of design patterns. If you don't have a design pattern that fits your goals, write a design of your own.
Yeah, but naming something a pattern also allows a community to refine it to its purest and most robust form, like Boost did with smart pointers for example, or iterators, optional, variants, destruction closures and whatever else you can name...
Entity/components, seriously, is not a pattern. A pattern is a factory, or a visitor; it is a part of the code (a very small part).
Entity/component, however, is the paradigm you choose to code with. So instead of functional, or object oriented, or procedural, you choose entity/component.
It sits well above patterns for me; it is a paradigm. It encompasses your whole program, and you code into it, not the other way around.
Though, by the same property, naming it will allow the community to refine it.
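To make the contrast concrete, here is a toy sketch of the entity/component idea: an entity is just a bag of components looked up by type, not a node in a class hierarchy. (Illustrative only; real engines use far more elaborate storage.)

```cpp
#include <memory>
#include <typeindex>
#include <typeinfo>
#include <unordered_map>

// Base class only exists so the map can own heterogeneous components.
struct Component { virtual ~Component() = default; };
struct Position : Component { float x = 0, y = 0; };
struct Health   : Component { int hp = 100; };

class Entity
{
public:
    // Create and attach a component of type C, returning it for setup.
    template <class C> C& add() {
        auto c = std::make_unique<C>();
        C& ref = *c;
        components_[typeid(C)] = std::move(c);
        return ref;
    }
    // Look a component up by type; nullptr if the entity doesn't have it.
    template <class C> C* get() {
        auto it = components_.find(typeid(C));
        return it == components_.end() ? nullptr
                                       : static_cast<C*>(it->second.get());
    }
private:
    std::unordered_map<std::type_index, std::unique_ptr<Component>> components_;
};
```

The point of the paradigm is that behavior is driven by which components an entity happens to carry, not by what class it was born as.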
I see one big issue, which was covered by some of the 3 links: adoption by peers in the same company. Ahem: the Tony Hawk's guy had an ONLY 3-year-old code base, and very, very understanding coworkers, to finally get the adoption through. Frankly, in my company the chance of adoption is zero. Yeah, absolute zero Kelvin.
Posted by Lightness1024 on 03 March 2013 - 03:59 AM
I tried that in almost exactly your situation, but it was a failure (only 2 days spent on it, though).
For a few reasons:
- it is difficult to find the correct smearing direction for each pixel, though I'm sure there is a way had I put in a bit of effort
- there is a serious issue with desaturation over the whole sky. To get "god rays" you somehow need to "add haze", and that is terrible for the result: your image becomes whiter and desaturated overall. I tried various operators, like multiplication, or a mix of mul and add; none were good. Anyway, consider using some kind of pow() function to add god rays only near the solid angle that subtends the sun, because physically, in-scattering mainly happens on light paths around this angle anyway.
- it is impossible to use with tiled rendering.
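The pow() falloff suggested in the second point, as a tiny sketch (the function name and exponent are mine, purely for illustration): weight the god-ray contribution by how close the view direction is to the sun direction, so in-scattering only shows up near the solid angle that subtends the sun.

```cpp
#include <algorithm>
#include <cmath>

// 'cosViewSun' is the dot product of the normalized view direction and
// sun direction; 'sharpness' is the artist-tuned exponent that narrows
// the glow around the sun.
double godRayWeight(double cosViewSun, double sharpness)
{
    return std::pow(std::max(cosViewSun, 0.0), sharpness);
}
```

In a shader this is a one-liner multiplied into the radial-blur accumulation.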
I will add one reason that could be a bother in your particular case: you have animated clouds, so you will need to re-run this, and it is damn costly (100 samplings per pixel). So I suggest you use the fact that your rotation is very slow to smear progressively (in multiple passes over multiple frames).
But again, if I were you, I would still attempt this method; it seems the easiest and best fit for cool results in a simple fashion for this case. I'm just saying it will require serious engineering. Kenny Mitchell gives us the theory; in practice it's always another story.