LouisCastricato

Members
  • Content count: 18
Community Reputation: 139 Neutral

About LouisCastricato

  • Rank: Member
  1. Multi-agent game suggestions

    If you meant specifically about interactive media, I strongly recommend getting a good foundation in statistics, reinforcement learning, alternative planners, and computational narratology (http://narrative.csail.mit.edu/cmn12/proceedings.pdf).   Edit: I misunderstood entirely; I thought you said you wanted to know how to learn it, haha. The overall goal of my research (it doesn't matter how long it takes; I have multiple universities lined up to help if need be) is to be able to read through a book, extract all of its raw story, simulate said book, and then generalize it. In other words, I want to take half of book A and seamlessly combine it with half of book B. A university in Austria is willing to help (providing researchers and equipment) next spring on generalizing simulated plot lines, so this is something that is actively pushing towards reality. For reference, if their research is successful, it will be made open source (just that section, not the entirety of the software).
  2. Multi-agent game suggestions

    I'm going to tell you what my professor told me when I started becoming interested in multiagent AI (about two years ago), since I still think it's the best advice I've ever been given on a subject: try to make The Sims, or at least your version of it. It doesn't need to be amazing, it doesn't need to be well written, and it probably doesn't even need to be optimized. But try to implement a small Sims-like game based on the knowledge of AI you have now. Just tinker until you're satisfied with it (e.g., until you think the agents are good enough to survive on their own for a while). Don't use code from any external sources; only write it from your own knowledge.   Then, once you're done with that, rewrite all of it using a type of state machine or controller you've never worked with. Only read the papers that are crucial to optimizing and improving your Sims game. You wrote it as an FSM? Rewrite everything as a continuous fuzzy logic controller. Didn't use planning? Be sure to include HTNs.   After that, start looking into optimizing it, and THEN look at how other people have implemented their versions of The Sims. Compare your results with theirs and see what you could have done better.   When I was first learning this stuff, that took me about four months in total, but I learned a TON about AI. It was awful at times: the subjects were overwhelming, and I never really knew if I had implemented things correctly. But tinkering with AI subjects from the ground up, rather than just learning them from a tutorial series or book, felt amazing. I strongly recommend not buying any introduction-to-AI books or watching any tutorial series unless you absolutely need to (e.g., for a class, or if you're entirely lost). I always felt like they hurt more than helped.
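    Something like this is all the starting point I mean. A minimal sketch (the needs, states, and numbers here are completely made up for illustration): needs drift upward each tick, and a trivial FSM switches the agent to whichever activity services the most urgent need.

    import random

    class Agent:
        def __init__(self, name):
            self.name = name
            self.needs = {"hunger": 0.0, "energy": 0.0, "social": 0.0}
            self.state = "idle"

        def tick(self):
            # Needs drift upward each tick; a higher value is more urgent.
            for need in self.needs:
                self.needs[need] += random.uniform(0.0, 0.1)
            # Trivial FSM transition: service the most urgent need.
            worst = max(self.needs, key=self.needs.get)
            self.state = {"hunger": "eating", "energy": "sleeping",
                          "social": "chatting"}[worst]
            self.needs[worst] = max(0.0, self.needs[worst] - 0.5)

    agents = [Agent(f"sim_{i}") for i in range(3)]
    for step in range(10):
        for a in agents:
            a.tick()
        print(step, [(a.name, a.state) for a in agents])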
  3. Multi-agent game suggestions

    Yup. The majority of my prebake process is taking a procedurally generated decision pool of about 7.3 million decisions and doing reduction after reduction to bring it below 100K. Without prebaking, it takes my computer a solid 5 minutes to iterate over a few million agents. With the prebake, since most of the agents are clustered (semi-supervised), it usually only takes 10 seconds or so for a group of that size. This is mostly because we leave each cluster with only higher-level decisions available to it, and rather than evaluating type-2 fuzzy logic controllers during runtime, we bake all of that into a nice Markov random field. The quality difference isn't noticeable whatsoever from the perspective of a player (at least for the people we've tested it on).   It isn't perfect; there is still a ton more work to do. The environment needs to remain Markovian throughout the entire simulation (of course, this is a requirement of most multiagent reinforcement learners anyway), the primary decision pool cannot change in any way (although how decisions are inferred can change easily), and due to the limitations of CUDA, fuzzy decision trees can only branch so many times before we hit DR. All of this will be fixed eventually, though.   On to Unpersons for a second. That's primarily a demonstration of the software. It's by a studio we purchased last December on the strength of a fairly nice track record of cool roguelike games (Unpersons is the first English game they've made, otherwise I'd link the others; I can't find them since my Spanish isn't as good as I hoped). A major component of the software is the ability to read through annotated text and extract starting states for time series functions. So we have the player write biographies for all of the main characters, and then do our prebake phase during runtime. Since the game doesn't actually have many agents (50 or so), we can do the prebake phase in only a few seconds. So it's more that you get to play through a roguelike you've written the story for than anything else. Really cool in concept, and in practice (while it is a bit limited, as we didn't want the player to have to annotate everything) it's quite awesome. We use some auto-annotation methods after asking the player a few questions. It isn't just a set of combo boxes and checkboxes; it's an actual text editor, which I absolutely love. The main characters also generate their own journals during runtime through some pretty cool NLG algorithms that we have a linguist working on with us.
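    Roughly, the reduce-and-bake shape looks like this. This is a heavily simplified sketch, not the actual pipeline (a real bake conditions on neighboring state in the Markov random field, and the clusters, decisions, and counts below are invented): cluster decision logs offline, keep only each cluster's top-k decisions, and let runtime do a cheap weighted draw instead of running fuzzy inference per agent.

    import random
    from collections import Counter

    def prebake(cluster_logs, k=5):
        """cluster_logs: {cluster_id: [decision, ...]} gathered offline."""
        baked = {}
        for cluster, decisions in cluster_logs.items():
            counts = Counter(decisions)
            top = counts.most_common(k)          # reduction: top-k decisions only
            total = sum(c for _, c in top)
            baked[cluster] = [(d, c / total) for d, c in top]
        return baked

    def runtime_decision(baked, cluster):
        # Runtime is a weighted draw from the baked table: O(k) per agent,
        # no controller evaluation needed.
        decisions, weights = zip(*baked[cluster])
        return random.choices(decisions, weights=weights, k=1)[0]

    logs = {"miners": ["dig"] * 60 + ["rest"] * 25 + ["trade"] * 15,
            "knights": ["patrol"] * 50 + ["train"] * 30 + ["rest"] * 20}
    table = prebake(logs, k=2)
    print([runtime_decision(table, "miners") for _ in range(5)])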
  4. Multi-agent game suggestions

    @Jeffery   That may have actually been my article.   I'm currently doing active research on artificially limited decision spaces and constant-time planning for interactive narrative in multiagent simulations (I have been for about the past 10 months). Note that when I say artificially limited decision space, I mean during runtime: I only present the agents with decisions that are crucial to their survival, directly assist in reaching their own plot points, or are highly relevant to their subdemographic (e.g., miners vs. knights). Storytelling in games: http://dotpowered.net/news/6 http://dotpowered.net/news/10 http://dotpowered.net/news/14 Architecture: http://dotpowered.net/news/8 http://dotpowered.net/news/12   During the prebake phase, I use HTNs quite a bit to break down plot points. After that, I do discrete time series feature prediction via DRNNs. I take the outputted Markov random field and attempt to construct a behavior tree from it.   During runtime, when an agent wants to go to the next stage of its plot, it only uses the behavior tree rather than having to do planning. The catch is that this behavior tree isn't an "if A then B." It looks at the agent's time series up to that point (every previous major state transition it has encountered, e.g., going from a peasant to a miner), and it helps the agent determine its next state transition over a Markov random field. It's extremely easy to account for the player's actions, as they're just another transition state. Originally, when agents conflicted, I left the fate of the system to the player, but I moved that to being managed by a PD-WoLF implementation. The player can still have large effects on conflict resolution, just not as large as they did originally. PD-WoLF is also a lot more entropy-resistant than leaving things up to the player, so it helps keep the system deterministic.   In practice this works extremely well, and I can generate nearly 12 hours of story in only 20 seconds on a GTX 760 2GB. Actual story planning is nearly 100% independent of the number of agents present in the game; however, it does depend on the number of clusters, which is usually about 200 in a larger game (400K NPCs). Agent collaboration is a HUGE overhead, though. Even after months and months of work to reduce thread interdependence, it is still the biggest bottleneck in the system.
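    To illustrate the runtime lookup (an invented sketch, not the production code; the states and probabilities are placeholders): the next major transition is drawn from a precomputed distribution keyed on a suffix of the agent's transition history, so advancing the plot is a table lookup rather than a planner call.

    import random

    # Precomputed offline: P(next state | recent transition history).
    TRANSITIONS = {
        ("peasant",): [("miner", 0.6), ("soldier", 0.4)],
        ("peasant", "miner"): [("foreman", 0.7), ("merchant", 0.3)],
        ("peasant", "soldier"): [("knight", 0.8), ("deserter", 0.2)],
    }

    def next_state(history, window=2):
        # Match the longest known suffix of the agent's history.
        for n in range(min(window, len(history)), 0, -1):
            key = tuple(history[-n:])
            if key in TRANSITIONS:
                states, weights = zip(*TRANSITIONS[key])
                return random.choices(states, weights=weights, k=1)[0]
        return history[-1]  # no known transition: stay in the current state

    story = ["peasant"]
    for _ in range(2):
        story.append(next_state(story))
    print(" -> ".join(story))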
  5. Just a nice video that I found

    https://www.youtube.com/watch?v=lGar7KC6Wiw   I think this sums up quite well what the majority of high schoolers and even college kids think of when they hear about the game industry. This video may have been posted here before, but I felt the urge to share it nevertheless.   On a side note, this was my perspective when I took up game development as a hobby almost 6 years ago now. Even though most of us won't admit it, I think we've all viewed the industry like this at one point or another.
  6. Implementing pathfinding into AI

    If you have any interest in slightly more realistic-looking pathfinding, Left 4 Dead is always a favourite: http://www.valvesoftware.com/publications/2009/ai_systems_of_l4d_mike_booth.pdf (still requires a navmesh, though).
  7. That could work. I'll try that tomorrow.   On another note, regarding stabilization: one of my colleagues took a look at a paper on Popov's stability theory for fuzzy logic control systems and followed it quite closely. We were hoping the system would stabilize after 10,000-20,000 iterations; in actuality, it was stable by the time it had computed the second advertisement.   Screenshot in case you're curious: http://i.imgur.com/sjLwsnK.png   After 3K iterations, the system no longer had any noise. The only collapse happened at around iteration 52K.   Edit: I didn't realize I posted this from my personal account lol
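    For anyone curious, a simple way to flag that kind of stabilization (an illustrative sketch of mine, not the actual implementation; the window size and tolerance are placeholders) is to watch the variance of the controller output over a sliding window:

    from statistics import pvariance

    def first_stable_iteration(outputs, window=100, tol=1e-4):
        # Returns the first iteration at which the last `window` outputs
        # have variance below `tol`, or None if that never happens.
        for i in range(window, len(outputs) + 1):
            if pvariance(outputs[i - window:i]) < tol:
                return i
        return None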
  8. Metal shader correct?

    You could just tint the environment color. There isn't much of an issue with that.
  9. Approximation of Normals in Screen Space

    http://www.cse.ust.hk/~pang/papers/ID0225.pdf It's a guided approach, but it gives a baseline for this style of work. Honestly, though, multiple viewpoints or multiple lighting setups can uniquely determine the solution.   Thanks! That really helped. Based on my current system, I can do the method described in the paper without user input (besides the picture).
  10. Approximation of Normals in Screen Space

      If you have a mobile device, then it can record video, and video can without a doubt reconstruct 3D surfaces. Let me know if you're interested in this or if you want to stick with the static one-picture approach.   -= Dave   I think I wanna stay with the static approach, since I wouldn't have much of a science paper if I didn't (mainly because I want to do something new and extremely challenging).
  11. Anyone here a self-taught graphics programmer?

    I started with Python when I was 7 years old, doing some general effect writing for the Blender game engine. I never really took it seriously until I was about 12, when I wrote my first game: a small marble game that I recently ported to Windows Phone (did the port about 3 years ago).   Currently, I don't program as much as I used to, as my time is consumed by writing science papers on the methods I develop. I do miss writing engines, though.   I still have the website up for my old engine (the project is long dead): wirezapp.net
  12. Approximation of Normals in Screen Space

      I do like how that sounds, since one of the algorithms I developed finds shadows within the image and parents each to a light source. From that, I can find an estimated light direction. Do you mind elaborating on the technique you're explaining? Perhaps provide some links.   I apologize; this is the worst case you can find yourself in. In general, the solution is underdetermined, because a gradient has two components for a surface and you have one equation. If you can find two highlights in your image (from different light sources), then it's very easy to solve. General photometric stereo techniques require at minimum two equations. Intuitively, this means the normals can take any isotropic rotation and give the same lighting intensity. For example, imagine a ball lit with an intensity of 0.747: any normal with a rotation of 45 degrees from the +Z axis would satisfy this equation. However, that doesn't stop an algorithm from working. Given enough ingenuity and some user input, you can still solve it. There have been published algorithms that accomplish what you are looking for, but it's a guided process and it generates depth. From depth, it's easy to get back to normals. If you are still looking to go this way, let me know and I'll dig up the paper that does this when I get home from work. Do you have any other information? If you are working with computer vision, then typically you have either 3D information or at least depth?   -= Dave   I have no form of depth information or 3D scene information.   My algorithm detects a multi-level gradient by calculating an estimated rate of decay of each visible shadow within the room. That being said, utilizing that, I can detect where shadows overlay, or where more than one shadow is visible.   The final objective is a bit on the sci-fi end, but seems more and more practical every day that I work on this: I want to make a 3D scanner that can work on any existing mobile device, without any form of optical modifications or user input. Of all the issues I have, the two largest are normal approximation without any form of depth or 3D data, and threshold approximation, so the AI can classify whether the image contains a pattern of interest.   PS: For the time being, let's pretend performance doesn't matter.   The reason I am trying to approximate normals is that ambient occlusion requires them. My idea is that, since AO gives depth perception to video games and special effects, why can't it give computer vision applications depth perception? I think it may come down to a matter of just solving for X.
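    To make the "two equations" point concrete, here is the textbook photometric-stereo solve (a generic sketch, not anyone's algorithm from this thread). With Lambertian shading, intensity = albedo * dot(n, l), so three or more images under known light directions determine the scaled normal per pixel by least squares; with a single image (k = 1), the same system is underdetermined, which is exactly the problem described above.

    import numpy as np

    def normals_from_lights(intensities, lights):
        """intensities: (k, h, w) images; lights: (k, 3) unit light directions."""
        k, h, w = intensities.shape
        I = intensities.reshape(k, -1)                   # (k, h*w) pixel stack
        G, *_ = np.linalg.lstsq(lights, I, rcond=None)   # solve lights @ G = I
        albedo = np.linalg.norm(G, axis=0)               # per-pixel albedo
        n = G / np.maximum(albedo, 1e-8)                 # unit normals, (3, h*w)
        return n.reshape(3, h, w), albedo.reshape(h, w)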
  13. Best way to do terrain?

    KD rendering is a good option, and voxels and fractals also work well.   But I personally enjoy using DX11 tessellation for my terrain rendering, since I can let the GPU do almost all of the work.
  14. Turning the ground white from snow

    If you want this to look really amazing, layer a texture over the ground as snow begins to fall. This texture can be dynamically generated or prebaked (it just depends on what quality and performance you want). As the snow begins to fall, I strongly feel that a glow effect might make it look pretty amazing.   I like the idea of the noise function, but I think it might be better if the snow overlay is generated based on the terrain it falls on, i.e., it clusters on the sides of hills, spreads out on hilltops, etc.
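    As a sketch of that last idea (illustrative only; the rates and exponent are placeholders): derive slope from the heightmap and let flat ground accumulate snow faster than steep faces, then use the result as a blend mask over the ground texture.

    import numpy as np

    def accumulate_snow(heightmap, snowfall_rate=0.1, steps=10):
        gy, gx = np.gradient(heightmap)           # slope via finite differences
        up = 1.0 / np.sqrt(gx**2 + gy**2 + 1.0)   # up-component of the normal, 0..1
        snow = np.zeros_like(heightmap)
        for _ in range(steps):
            # Flat areas (up ~ 1) collect snow; steep faces (up ~ 0) shed it.
            snow += snowfall_rate * up**2
        return np.clip(snow, 0.0, 1.0)            # blend mask for the snow overlay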
  15. How can I achieve this?

    Nothing advanced at all.   At most, it's a variant of rim lighting, but considering how outdated the graphics look, I doubt that's what they were using (although it may give quite nice results).   More than likely, it's just prebaked into the textures (could be a specularity map) during art design; nothing fancy. Alternatively, if you like that effect but don't want to have to make the textures, you could use a toon shader.   Microsoft (XNA) has very nice examples of the methods I described above.   Also, if you REALLY want it to look pixelated, just make your mesh and look for an example of how to generate voxels from a triangular mesh. If you really wanna get fancy, you could also program your application to remap the textures of the original model onto the voxels.
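    For reference, rim lighting comes down to a single grazing-angle term (the generic formulation, not whatever that game actually shipped): surfaces facing away from the viewer get a bright fringe.

    import numpy as np

    def rim_term(normal, view_dir, power=3.0):
        # normal, view_dir: unit 3-vectors. Bright at grazing angles, dark head-on.
        facing = max(0.0, float(np.dot(normal, view_dir)))
        return (1.0 - facing) ** power

    print(rim_term(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0])))  # 0.0
    print(rim_term(np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])))  # 1.0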