
Lightness1024

Member Since 06 Aug 2009
Last Active Sep 18 2014 06:43 AM

#5179320 Thesis idea: offline collision detection

Posted by Lightness1024 on 10 September 2014 - 07:46 AM

Frankly, in this day and age, if you aren't aware of the entire historical bibliography on a subject, then any idea a human can possibly have has already been at least thought about, probably tried, and potentially published if it has any value.

I hope you're not thinking about a PhD thesis? Because I don't see how any academy in the world would let someone enter a three-to-four-year research cycle on an idea that sounds of little utility, proposed by a person who has basically next to no knowledge of the field.

Sorry to sound really harsh; I just want to calm things down. At least go and read everything you need to first. Other ideas will come up while reading papers, only for you to realize later that the same thing was proposed in a paper two years afterwards, which you will also read, and then you will have another idea, which either turns out not to work or is covered by some even later paper. This cycle goes on until, if you are lucky and clever, you finally arrive at the idea that actually advances the state of the art. BUT, a PhD lasting three to four years, there is a good chance that some other team will publish very similar work before you finish... Yep.

 

Good luck anyway :-)




#5179314 Deep water simulation

Posted by Lightness1024 on 10 September 2014 - 07:35 AM

Of course that is the reason. And you get another problem that is much worse:

your baked animation will be tiled and repeated!

Not only is it very difficult to make it tileable in space, you must also make it repeatable in time, and getting both of those right is pretty tricky.

 

In the era of shader model 2 (cf. the Ocean sample in AMD RenderMonkey), water was indeed made using baked animated noise.
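For what it's worth, the classic way to satisfy both constraints when baking is to build the height field from waves that each complete an integer number of cycles across the tile and over the loop duration. A rough sketch of such a baker (my own illustration, parameter names included):

    #include <cmath>
    #include <vector>

    // kx, ky: integer wave counts across the tile      -> tileable in space.
    // cycles: integer number of periods over loopSeconds -> repeats in time.
    struct Wave { float kx, ky, cycles, amp; };

    // Bakes one frame of a res x res height tile at time t.
    std::vector<float> bakeHeightFrame(int res, float t, float loopSeconds,
                                       const std::vector<Wave>& waves)
    {
        const float twoPi = 6.2831853f;
        std::vector<float> h(res * res, 0.0f);
        for (int y = 0; y < res; ++y)
            for (int x = 0; x < res; ++x)
            {
                float u = x / static_cast<float>(res);   // 0..1 across the tile
                float v = y / static_cast<float>(res);
                float value = 0.0f;
                for (const Wave& w : waves)
                    value += w.amp * std::sin(twoPi * (w.kx * u + w.ky * v +
                                                       w.cycles * t / loopSeconds));
                h[y * res + x] = value;
            }
        return h;
    }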

 

Also, regarding huge textures, think about the bandwidth: not only is the memory footprint large, but today's graphics cards are limited by memory bandwidth rather than by raw ALU throughput.




#5168379 How to pack your assets into one binary file (custom file format etc)

Posted by Lightness1024 on 22 July 2014 - 08:42 AM

How about putting all of those into a zip archive using zlib? It has one of the most permissive licenses around, it may solve exactly your packing problem, and it provides compression as a cherry on top of the cake.
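Something like this is enough to get started (a rough sketch with a layout of my own invention, not a real .zip; for actual .zip archives you would use minizip or a similar wrapper on top of zlib):

    #include <zlib.h>
    #include <cstdint>
    #include <cstdio>
    #include <string>
    #include <vector>

    struct Asset { std::string name; std::vector<unsigned char> data; };

    // Concatenates assets as [nameLen][name][dataLen][data]..., compresses the
    // blob with zlib, and writes the uncompressed size followed by the
    // compressed bytes (the size is needed later by uncompress()).
    bool packAssets(const std::vector<Asset>& assets, const char* outPath)
    {
        std::vector<unsigned char> blob;
        for (const Asset& a : assets)
        {
            uint32_t nameLen = static_cast<uint32_t>(a.name.size());
            uint32_t dataLen = static_cast<uint32_t>(a.data.size());
            const unsigned char* p = reinterpret_cast<const unsigned char*>(&nameLen);
            blob.insert(blob.end(), p, p + sizeof(nameLen));
            blob.insert(blob.end(), a.name.begin(), a.name.end());
            p = reinterpret_cast<const unsigned char*>(&dataLen);
            blob.insert(blob.end(), p, p + sizeof(dataLen));
            blob.insert(blob.end(), a.data.begin(), a.data.end());
        }

        uLongf compressedSize = compressBound(static_cast<uLong>(blob.size()));
        std::vector<unsigned char> compressed(compressedSize);
        if (compress(compressed.data(), &compressedSize,
                     blob.data(), static_cast<uLong>(blob.size())) != Z_OK)
            return false;

        std::FILE* f = std::fopen(outPath, "wb");
        if (!f) return false;
        uint32_t originalSize = static_cast<uint32_t>(blob.size());
        std::fwrite(&originalSize, sizeof(originalSize), 1, f);
        std::fwrite(compressed.data(), 1, compressedSize, f);
        std::fclose(f);
        return true;
    }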




#5168377 Inter-thread communication

Posted by Lightness1024 on 22 July 2014 - 08:27 AM

For what it's worth, if anybody knows this software:

http://www.e-onsoftware.com/products/vue/

I'm the co-author of its OpenGL rendering, together with Christophe Riccio (http://www.g-truc.net/).

So basically, the OpenGL preview viewports of Vue have a GUI-thread message producer and render-thread message consumer queue system, based on a pair of std::vectors that are swapped during consumption (each frame). Each message push takes the lock (boost mutex + lock), and so does each consumption pass before swapping the queues. It just so happens that in common situations more than 200,000 messages are pushed per second, and it is in no way a bottleneck.
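Roughly, the queue looks like this (a minimal sketch with names of my own, not the actual Vue code, using std::mutex where we use boost):

    #include <mutex>
    #include <vector>

    // Minimal sketch of the double-buffered message queue described above.
    // Producers push under the lock; the consumer swaps the two vectors under
    // the same lock once per frame, then processes its copy without holding it.
    template <typename Msg>
    class SwapQueue
    {
    public:
        void push(const Msg& m)
        {
            std::lock_guard<std::mutex> lock(mutex_);
            producer_.push_back(m);
        }

        template <typename Fn>
        void consume(Fn&& handle)   // called once per frame on the render thread
        {
            {
                std::lock_guard<std::mutex> lock(mutex_);
                producer_.swap(consumer_);   // O(1): just swaps internal pointers
            }
            for (const Msg& m : consumer_)
                handle(m);
            consumer_.clear();               // keeps its capacity for the next frame
        }

    private:
        std::mutex mutex_;
        std::vector<Msg> producer_;
        std::vector<Msg> consumer_;
    };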

I just mean to say: if you are afraid of 512 locks per frame, there is a serious problem in your "don't do too much premature optimization" thinking process that needs fixing.

I agree that it is great fun to attempt to write a lock-free queue, but for production code it is frankly not worth the risk, and plainly misplaced focus time.

 

Now, just about the filtering idea: one day, just to see, I wrote a filter to avoid useless duplication of messages. It worked, but it was slower than the raw, dumb queue. I'm not saying it will necessarily be the same in your case (just try), but in my case being clever meant being slower.




#5142969 Cascaded shadow map splits

Posted by Lightness1024 on 28 March 2014 - 07:36 PM

The biggest problem with cascaded shadow maps is computing the frustums of the cameras used to render the shadows. There are several kinds of policies.

The most common is surely the one that cuts the main camera's view frustum into sub-parts according to distance and uses a bounding volume of each slice to create an encompassing orthographic frustum for the shadow camera.
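For the record, the slice distances for that policy are often chosen with the so-called practical split scheme, which blends uniform and logarithmic splits. A small sketch (illustrative, not necessarily what your engine does):

    #include <cmath>
    #include <vector>

    // Practical split scheme: lambda = 1 gives purely logarithmic splits,
    // lambda = 0 gives purely uniform splits. Returns the far distance of
    // each cascade slice in view-space depth.
    std::vector<float> computeCascadeSplits(int cascadeCount, float nearZ, float farZ,
                                            float lambda = 0.75f)
    {
        std::vector<float> splits(cascadeCount);
        for (int i = 0; i < cascadeCount; ++i)
        {
            float p        = (i + 1) / static_cast<float>(cascadeCount);
            float logSplit = nearZ * std::pow(farZ / nearZ, p);
            float uniSplit = nearZ + (farZ - nearZ) * p;
            splits[i]      = lambda * logSplit + (1.0f - lambda) * uniSplit;
        }
        return splits;
    }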

 

The bounding-volume policy loses efficiency because of the bounding-volume-of-a-bounding-volume step, so lots of shadow pixels end up off screen and are never used. In other words, you lose resolution in the visible zone.

 

Therefore some recent solutions use compute shaders to determine the actual, pixel-perfect minimum and maximum depth visible in a given image. You can then fit the slices of the camera frustum tightly, giving very high resolution shadows, especially in scenes that are somewhat enclosed by walls.

 

There is another, very simple policy for shadow frustums: just center the shadow camera on the view camera's position and zoom out in the direction of the light, with each cascade zooming out a bit more and thus logically covering more distance in view space. But this has the problem of computing shadows behind the viewer, where they may be unnecessary.

I say "may" because you never know when a long oblique shadow must fall from a high-rise building located far behind you; this is why the simple scheme is also popular.

 

In my opinion, it is your split scheme that fails. You should visualize the shadow zones by sampling red, blue and green to obtain something like this:

http://i.msdn.microsoft.com/dynimg/IC340432.jpg

Once you get this, debugging will be easy.




#5051207 Man Hours Necessary to Make a Game

Posted by Lightness1024 on 08 April 2013 - 09:03 AM

I was going to say 300 hours, but that would only be the programming part. You need an equivalent amount for the art, and the gameplay will take a variable amount of time depending on how far you want to fine-tune it. So roughly put, between 600 and 700 hours will get you there.

You should however drop C++ and go for C#, unless you want (almost) direct portability to Linux and Mac. C# saves you the time of having to set up and understand Boost and the STL, memory overrun issues, the dangers of temporary references and other joys. C++ requires you to code while concentrated and focused at 100%, with rigor (apply RAII, careful design thinking with SOLID, avoid code smells...), whereas in C# you can code correctly even after a beer, and the compiler is much, much nicer in its messages; sometimes it even gives you the solution directly. Also, there are virtually no build times, which accelerates the code-to-test cycle. I could go on and on...

I know SFML has C# bindings, but SFML is not an engine, and you will lose time having to do other things like a level editor and so on. Though you're in luck: C# has built-in serialization, which will save you tons of time for saving and loading your levels and the current game state, even if you write the editor yourself.

Even with all that, I recommend trying to find a platform-game engine with an editor, if you can find one that will not limit your project. Because yes, engines are good, BUT a new game may not be doable in an engine that is already finished; even with the most generic design from its author, you cannot support all future ideas and game paradigms.

Good luck, go for it.


#5049190 Direction to light in parallax/relief mapping

Posted by Lightness1024 on 02 April 2013 - 09:03 AM

Depending on the depth of your displacement, the light is presumably so much farther away that the vector will not be very different. It will only make a difference at very grazing angles AND in deep regions of the depth map AND with lights very close to the parallaxed surface. That many conditions suggest the calculation overhead to get the correct vector is not necessary.

If you really want that vector: you know the UV used to fetch the color, so you also know the UV to read the height map. You just have to take the world position of the plane at UV b and move along the inverse normal by a distance dictated by the heightmap's value (times an artist factor that tunes the depth extent).

 

To get the world position of b: one option is to use the rasterized world position passed from the vertex shader through the varyings (which is the world position of a) and displace it according to the ddx/ddy of the depth of the position reprojected into view space, multiplied by the UV difference rescaled into view space (using the min/max of the vertex buffer coordinates and the world matrix to estimate this scale factor).

This one is complicated and imprecise.

 

Another way would be to determine the vector from a to the impact point.

 

Or, the one I recommend: find the local position (in plane space) using the UV. Normally the UV should range from 0 to 1 (as a float2) and simply be an exact equivalent of the local coordinate in object space. Then you just need to make it 3D by putting 0 in Z (since it is a plane) and multiply by the world matrix to get the world coordinate (and by the view-projection matrix afterwards if you need it in clip space). There you go.
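A little sketch of that last approach, assuming the surface is a plane whose object-space XY matches the 0..1 UV range (names are illustrative, math done with GLM):

    #include <glm/glm.hpp>

    // World position of point b: lift the UV to a local plane position (Z = 0),
    // transform it by the world matrix, then push it down along the inverse
    // normal by the height-map value scaled by the artist depth factor.
    glm::vec3 worldPositionAtUV(const glm::vec2& uv, float heightSample,
                                float depthScale, const glm::mat4& worldMatrix,
                                const glm::vec3& worldNormal)
    {
        glm::vec3 localOnPlane(uv.x, uv.y, 0.0f);
        glm::vec3 worldOnPlane = glm::vec3(worldMatrix * glm::vec4(localOnPlane, 1.0f));
        return worldOnPlane - worldNormal * (heightSample * depthScale);
    }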

 

There remains the problem of shadowing: you need to evaluate whether that ray can reach the light unobstructed. Same principle as what you did to find b.




#5047662 How to update a lot of random tiles in a 2D array?

Posted by Lightness1024 on 28 March 2013 - 09:27 AM

It is far from natural that updating a few thousand things takes that long.

In Extreme Carnage I update the AI of 2000 to 3000 enemies stored in a linked list (not the fastest thing to iterate over...), do lots of ray casts for each of them, and it runs in real time (> 60 FPS).

Stronger still: the Intel ray tracing library, and many indie demos (on CPU), are able to cast millions of rays per second, and one ray cast is much heavier than updating a sprite id.

So you definitely have a bug. What is the language? Could you have a garbage collection issue rather than a looping issue? Can you run on macOS? If yes, use the built-in Instruments profiler. Or Valgrind + Cachegrind on Linux. Or VTune or AMD CodeAnalyst on Windows.




#5044910 Area lights (Forward and Deferred)

Posted by Lightness1024 on 20 March 2013 - 09:23 AM

Area lighting has only recently come to be supported in real-time engines, through light propagation volumes (CryENGINE 3) and sparse voxel octree cone tracing (Unreal Engine 4).

It was previously supported through static light mapping, using lengthy final-gathering computations. Final gathering is the second step after photon mapping: it typically shoots a fixed number of sample rays over the hemisphere of each surface patch and gathers the contribution of the nearest photons at each impact point, when it makes sense (the photon has an opposing impact normal, etc.). It takes minutes or even hours to prepare such a solution.

 

Now, you're free to try and invent your own method. Don't forget to publish it and present it at SIGGRAPH 2013 :)




#5044903 Loading a model into OpenGL

Posted by Lightness1024 on 20 March 2013 - 09:08 AM

It is easy to load a model yourself, but then you have to animate it, and that is far from easy, even for a full-time professional. Skinning, animation blending, animation tree editing, skeleton retargeting... it is all very common stuff, yet a multi-year project.

I don't recommend trying it, just go for Unity.

Using OpenGL directly to make a game is kind of a thing of the past.




#5044198 The games that everybody writes.

Posted by Lightness1024 on 18 March 2013 - 07:07 AM

worms/gorilla

solar striker

frog




#5043965 mvc, games and multicore

Posted by Lightness1024 on 17 March 2013 - 09:01 AM

MVC is far from helping parallelization; it just separates data from view. If you have a process that operates on your data in parallel and you were careful to make the data<->view links thread-safe, then you can talk about parallelization, but MVC didn't help with it one bit.

Some parallelization can be done by chunk separation when the processing is independent, like kernels in CUDA/OpenCL or fragments in a shader language. This concept is applied by the Thrust library to C++ containers, for example, or by OpenMP to loops via compiler pragmas.

Or thread groups managed by hand. None of these concepts relates to MVC.
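A tiny example of the "independent chunks" idea with OpenMP (my own illustration; compile with -fopenmp or /openmp):

    #include <cmath>
    #include <vector>

    // Each iteration touches only its own element, so the pragma can split the
    // range across cores with no locking at all.
    void updateHeights(std::vector<float>& heights, float dt)
    {
        const int n = static_cast<int>(heights.size());
        #pragma omp parallel for
        for (int i = 0; i < n; ++i)
            heights[i] += dt * std::sin(0.1f * i);
    }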

There are other parallelization ideas in promises and futures, or in programming with immutable data, which open the door to parallel processing, but I have no knowledge of these applied in practice.

 

How MVC is applied to a game depends on the subsystem, but mostly there is no choice anyway, because what resides on the graphics card must be a copy most of the time. So your model is the level representation in CPU memory, the view is what is displayed by the engine, and the controller is the player and the other dynamic mechanics...

 

This also applies at smaller scales in various places.




#5043691 Existing 3D Game Engine for Gameplay Programming

Posted by Lightness1024 on 16 March 2013 - 09:38 AM

Most game companies have their own engine; only newer studios (created at most two years ago) use ready-made engines, and most of them go for Unity because of the license fees. CryENGINE is a risk because it is too expensive; only Unreal would be a good one to learn, because the pricing is progressive according to sales. It is also the best engine... of all time. Just that. (The v4, that is.) And it will teach you the best technologies and practices in terms of tooling integration and so on; it is probably the best example anyway of what a company with its own engine aims at.

However, it's not a question of the resume.

Seriously, if I saw the resume of a guy who brags with a list of "known engines" or "engines worked with", I would find that so lame that I would put the paper aside and move on to the next candidate.

If you want to say that you have experience with an engine, say it naturally in your cover letter.

What is important, rather, is having general 3D knowledge. Read the research papers! When you know who Kajiya, Nishita, Blinn, Kaplanyan, Ramamoorthi, Torrance, Dachsbacher, Hanrahan, Jensen, Schlick, Debevec, Perlin, Nayar, Lefebvre, Crassin and Neyret are, what their work is and all of its ramifications, then and only then have you conquered the knowledge necessary to carry this industry forward.

I insist that it is crucial to read all of that, plus NVIDIA research, AMD research and things like GPU Gems and company, rather than knowing an engine, which only gives you practice with the technology at one instant T of its history.

Also, games are not only a matter of graphics but also of specific game mechanics, tooling and various other production-pipeline work, and embracing the whole "corporate engine" is a plus because you can work more efficiently in a huge codebase while thinking about human factors; not enforcing your own coding rules on everybody is an example of how to ease team work. Using diplomacy and politics to help the company move forward also matters: when you want your project to move from svn to git, you're going to need those, I tell you...

I don't know for how many hours I could go on, but to sum it up, I wanted to give you another perspective because you seem hot-headed and a bit stubborn on technology matters. C++ is great, but my personal opinion is that game companies have already passed a turning point where C++ is becoming too expensive. More than 50 studios died last year, because of a lack of clients or promising projects according to the press; I believe it is rather a problem of being too expensive because of C++, and especially because of the way it is used for game development. Reading the paper about EASTL is enough to get a glimpse: they code everything themselves for God knows what obscure reason. Many are predicting the death of AAA games in favor of casual games. The CEO of Crytek himself said that the upcoming generation of consoles is probably the last.

They are all responsible and they can only blame themselves. It's not all because of C++; C++ is an awesome language, but the way game companies use it plays a big role in this global decline.

I hope I gave you some perspective.




#5043096 Anyone here a self-taught graphics programmer?

Posted by Lightness1024 on 14 March 2013 - 10:51 AM

Hey, there are some really captivating stories here.

richardjdare: yours was kind of sad :'(

Schrompf: yours was a bit bitter

and the best hacker medal goes to DracoLacertae

 

My turn then. I'm self-taught at first, then academy-taught, and the two worlds complemented each other very well.

At around 13 I started with QBasic, but it took me a year to get good enough at imperative programming to start making a game, basically a Mario clone:

http://projets.6mablog.com/post/2008/05/22/periode-1999-2000

I had a mentor at the time, the same age as me but about two years ahead in terms of comprehension, and he had a knack for really reading books, which I hadn't.

Then I moved on to Visual Basic 6, following in the tracks of my mentor.

(By the way, here is who that man is today: http://www.irisa.fr/alf/index.php?option=com_content&view=article&id=94&Itemid=15)

I made several little games, like a Worms clone and a live-chat HTML formatter for messages in AOL chatrooms.

I also made a serious Worms game on the TI-89 calculator, but its built-in BASIC language was too slow. I also had to print the whole code out because the screen was too small, and my code was all in one huge function with lots of gotos.

So I moved to C to harvest performance on that machine; gcc was my first C teacher. I wrote another horribly coded but perfectly functional game called "envahisseurs de l'espace" (space invaders).

Directly after that, I moved back to the PC and, with an illegal copy of Visual Studio 6, started my biggest indie project to date: Projet SERHuM. I planned on it taking 5 years, but 5 years later I was only at about 10% of the whole development, so I gave up.

In the meantime I had joined the "classe préparatoire", a special elitist course that prepares students for the French engineering schools.

So basically, I ended up with the 40 top students of the town's high schools, doing math (12 hours of courses per week), physics (11 h/w), electronics (5 h) and mechanics (5 h), plus 4 hours of severely graded weekly tests, for two years. And I'm not counting the almost equivalent time you are expected to work at home.

During this period the teachers shouted at us and told us we were hopeless, yet at the same time they couldn't stop bragging that this course path is the golden one and that all the most important people of the country took it (which is 70% true).

Then I took the entrance exams for the two major lists of "Grandes Écoles" (engineering schools) of the country, and some private ones as well. I was accepted by the private ones, but the quality of the teaching was not as good as at my first-choice public school, the ENSEIRB. So I went there for 3 years and could never have been happier. We were taught true computer science from the Unix perspective all along. The school was associated with the Bordeaux 1 University laboratory (the LaBRI), which is where Schlick published his PhD (for those who have seen his name while implementing Fresnel reflections in shaders, for example).

In parallel with the engineering school I took supplementary lessons from the university to complete a master's degree (which engineers generally look down on, because the engineering diploma is considered superior).

This allowed me to study multimedia from the academic point of view, so I learned things the canonical way: color spaces, Fourier and Laplace transforms, CELP coders, image processing operators, as well as the classic literature of rendering theory: the rendering equation and so on.

I also had to review Antoine Bouthors' papers on cloud rendering (http://www-evasion.imag.fr/Membres/Antoine.Bouthors/) during my master's, while also doing other school projects: a compiler with flex and yacc, a distributed compilation system to learn networking, proper third-normal-form databases, and assistant-researcher work building graphics visualizers for a set of task-scheduling libraries/algorithms that the LaBRI works on (http://runtime.bordeaux.inria.fr/Runtime/).

 

After that I went to Japan to do some research on supercomputers, then back in France I worked 4 years at e-on software, which was my greatest skill leap after my internship at Etranges Libellules. E-on software has many people who graduated from the best schools of the country, Centrale and Polytechnique, and even if I had some practical C++ tricks to teach, I had many working practices to learn, plus plenty about 3D rendering. It also gave me the chance to attend SIGGRAPH with a full conference pass, and as an exhibitor too, since we show Vue and LumenRT at our booth.

I got to implement crazy stuff while there, like a message-based OpenGL engine, water rendering, caustics, tree rendering, cloud rendering and even real-time indirect lighting...

 

But I decided it was time to go back to Japan, and now, believe it or not, I work at the desk right next to L. Spiro at tri-Ace, and I do tooling for artists and designers.

 

As an indie, I previously presented my 2D car game here on GameDev: http://www.gamedev.net/topic/564828-extreme-carnage---shoot-cars-buy-weapons-plant-defense-turrets/

I also made Nuclear Age on the same engine: http://forum.games-creators.org/showthread.php?t=7837

and extracted the engine into: http://sourceforge.net/projects/carnage-engine/

and many other little things.

 

What I learned about self-teaching is that there is a severe limit. Isolation and self-learning can get you somewhere, but when you are surrounded by amazingly intelligent people you suddenly realize that there is a "next level", and you strive to go play in that same playground. Basically, you are pulled forward by the "masters" of the field. Then it all becomes thrilling. You understand more and more with the years of experience and the research papers read, re-read, re-re-read...

You realize that the world is very small, and you are generally no more than one person away from knowing, say, the CEO of NVIDIA, Carmack, Torvalds, demo groups like Farbrausch, or in my case the guys behind Narbacular Drop (Portal, Portal 2...), Cyril Crassin or Eric Bruneton. Yes, even you, jcabeleira: we know each other through one person, who is one of my colleagues right now.

To the whole community I say: you all rock, let's all make great games!




#5042726 Best Practice for Values Return C/C++

Posted by Lightness1024 on 13 March 2013 - 09:20 AM

The answer is --- "during development and to deal with changes in behavior of OS, API and library functions".

 

It seems we both agree that once we have our applications working (or even just functions or subsystems working), we almost don't get any errors at all.  However, when we write a couple thousand lines of new code, we might have made a mistake, inserted a typo, or misunderstood how some OS/API/library function/service is supposed to work [in some situations].  So that's mostly what error checking is for.

 

This might imply we can just remove the error checking from each function or subsystem after we get it working.  There was a short time in my life when I did that.  But I discovered fairly soon why that's not a wise idea.  The answer is... our programs are not stand-alone.  We call OS functions, we call API functions, we call support library functions, we call functions in other libraries we create for other purposes and later find helpful for our other applications.  And sometimes other folks add bugs to those functions, or add new features that we did not anticipate, or handle certain situations differently (often in subtle ways).  If we remove all our error catching (rather than omitting it with #ifdef DEBUG or equivalent), we tend to run into extremely annoying and difficult-to-identify bugs at random times in the future as those functions in the OS, APIs and support libraries change.

 

There is another related problem with the "no error checking" approach too.  If our application calls functions in lots of different APIs and support libraries, it doesn't help us much if the functions in those support libraries blow themselves up when something goes wrong.  That leaves us with few clues as to what went wrong.  So in an application that contains many subsystems, and many support libraries, we WANT those functions to return error values to our main application so we can figure out what went wrong with as little hassle as possible.

 

You seem like a thoughtful programmer, so I suspect you do what I do --- you try to write much, most or almost all of your code in an efficient but general way so it can be adopted as a subsystem in other applications.  While the techniques you prefer work pretty well in "the main application", they aren't so helpful if portions of your applications become a support library.  At this point in my career, almost every application I write (even something huge like my entire 3D simulation/graphics/game engine) is designed to become a subsystem in something larger and more inclusive.  So I sorta think of everything I write now as a subsystem, and worry how convenient and helpful it will be for an application that adopts it.

 

Anyway, those are my thoughts.  No reason you need to agree or follow my suggestions.  If you only write "final apps" that will never be subsystems in other apps, your approaches are probably fine.  I admit to never having programmed with RAII, and generally avoiding nearly everything that isn't "lowest-level" and "eternal".  The "fads" never end, and 99% of everything called "a standard" turns out to be gone in 5 or 10 years... which obsoletes applications that adopt those fads/standards along with them.  I never run into these problems, because I never adopt any standards that don't look reliably "eternal" to me.  Conventional errors are eternal.  OS exception mechanisms are eternal.  Also, all the functions in my libraries are C functions and can be called by C applications compiled with C compilers (in other words, the C function protocol is eternal).  This makes my applications as generally applicable as possible... not just to my own applications, but to the widest possible variety of others too.

 

There's no reason you or anyone else needs to make these same policy decisions.  I am fully aware that most people chase fads their entire lives, and most of the code they write becomes lame, problematic or worthless after a few years --- not because their code was bad, but because assumptions they and support libraries adopted are replaced by other fads or become obsolete.  All I can say is, my policies accomplish what I want extremely effectively.  Most of the code I write is part of a very large, very long term application that will end up taking 20 years to complete (and will then be enhanced and extended indefinitely).  So I literally must not adopt any fads, or anything that might become a fad in the next 30 years.  You would be completely correct to respond that not everyone needs to write in such an "eternal", "bomb proof" and "future proof" manner as I do.  People can make their own decisions.  That's fine with me.  I hope that's fine with you too.

 

One final comment that is also somewhat specific to my long term application (and therefore a requirement for every subsystem I develop).  This application must be able to run for years, decades, centuries.  True, I don't count on this: the application is inherently designed to recognize and create "stable points" (sorta like "restore points" in Windows), and therefore be able to crash, restart and pick up where it left off without "losing its mind".  But the intention isn't to crash, restart and restore very often... the attempt is to design in such a way that this never happens.  Yet the application must be able to handle this situation reliably, efficiently and effectively.  Perhaps the best example of this kind of system is an exploration spacecraft that travels and explores asteroids, moons, planets (from orbit) and the solar system in general.  The system must keep working, no matter what.  And if "no matter what" doesn't work out, it needs to restart-restore-continue without missing a beat.  Now you'll probably say, "Right... so go ahead and let it crash".  And I'd say that maybe that would work... maybe.  But physical systems are too problematic for this approach in my opinion.  Not only do physical machines wear and break, they go out of alignment, they need to detect problems, realign themselves, reinitialize themselves, replace worn or broken components when necessary, and so forth.  And those are only the problems with the mechanisms themselves.  The number of unexpected environments and situations that might be encountered are limitless, and the nature of many of these are not predictable in advance (except in the very most general senses).

 

I suppose I have developed a somewhat different way of looking at applications as a result of needing to design something so reliable.  It just isn't acceptable to let things crash and restart again.  That would lead to getting stuck in endless loops... trying to do something, failing, resetting, restarting... and repeating endlessly.  A seriously smart system needs to detect and record every problem it can, because that is all evidence that the system will need to figure out what it needs to fix, when it needs to change approach, how it needs to change its approach, and so forth.  This leads to a "never throw away potentially useful information" premise.  Not every application needs to be built this way.  I understand that.

 

 

I'm not sure what "error driven code" is supposed to be.  In my programs, including my 3D simulation/graphics/game engine, errors are extremely rare, pretty much vanishingly rare.  You could say, this (and many programs) are "bomb proof" in the sense that they are rock solid and have no "holes".  Unfortunately, things go wrong in rare situations with OS API functions and library functions, including OpenGL, drivers, and so forth... so even "a perfect application" needs to recognize and deal with errors.

In short: why choose to have your code full of error checking (which breaks code flow and makes the code harder to read - that is really undeniable, IMO) to handle errors that are rare and unrecoverable anyway? Leave those to exceptions (or just crash the process), and keep the error checking code for cases where you can intelligently handle them and take appropriate action. It's best not to conflate exceptional conditions with expected errors.

 

You're not the only one to take this approach; in a less strict fashion, the Linux kernel guidelines somewhat follow that mentality as well. I like the idea, though I'll never practice it because I love my "fads" and high-level libraries too much: they're so much fun. It's fun to learn and apply practices, e.g. from design patterns, or from Boost goodies like optional, tuples, MPL, functions, lambdas. Typical fads. But genetic evolution works by keeping the best; some companies encourage ideas so that out of the resulting emulsion they can keep the best (free Fridays). If we try lots of software engineering stuff, we are free to throw away 80% of it after 5 years and decide it was not so nice once the hype has passed, but the 20% could stick around for the next 50 years, so it was worth the effort.
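To make the trade-off concrete, here is a tiny sketch of my own (not code from either of us) contrasting an error-code return that the caller must check with an optional-style return where "no value" is the only failure signal (boost::optional back then, std::optional today):

    #include <cerrno>
    #include <cstdio>
    #include <optional>
    #include <string>

    // Style 1: error code out-parameter; the caller can log or react to outError.
    bool readFileErrorCode(const std::string& path, std::string& outText, int& outError)
    {
        std::FILE* f = std::fopen(path.c_str(), "rb");
        if (!f) { outError = errno; return false; }
        char buf[4096];
        std::size_t n;
        while ((n = std::fread(buf, 1, sizeof(buf), f)) > 0)
            outText.append(buf, n);
        std::fclose(f);
        return true;
    }

    // Style 2: optional result; simpler call sites, but the failure reason is dropped.
    std::optional<std::string> readFileOptional(const std::string& path)
    {
        std::string text;
        int ignoredError = 0;
        if (!readFileErrorCode(path, text, ignoredError))
            return std::nullopt;
        return text;
    }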





