MilchoPenchev

Member
  • Content Count

    293
  • Joined

  • Last visited

Community Reputation

1178 Excellent

About MilchoPenchev

  • Rank
    Member

  1. MilchoPenchev

    My Biggest Fear for the Future of Human-Computer Interfaces

I have to disagree a little with you here.

First, you start talking about UI and UI design - true, more or less every application puts its own little twist on its UI, both visually and organizationally. But while using Win7, I ran into a lot of programs that stuck to the Win7 look. Windows 8 and the whole Metro tile concept is new, and most people have yet to write software for it. Given enough time, a unified UI will come into play.

Second, the part about the Unix command line made me laugh. I'm not one of those *nix gurus with detailed, intricate knowledge of the shell commands and related material, but I do use Fedora on a daily basis at work, and I found the command line to be just as disjoint as the UI is between major pieces of software. Want to find something using the shell? Easy, it's: find [switches] [path] [expression]. But wait, want to look for text inside some files? grep [options] [expression] [files]

To put it another way, "find" takes the haystack first, then the needle, while grep takes the needle first, then the haystack (using the old looking-for-a-needle-in-a-haystack analogy). Sure, it's minor, and sure, I can remember it, but it's just as problematic as remembering different UI paradigms and designs. It's actually worse in some ways, because the [options] those programs accept differ from each other. Most shell commands have options that differ, even when potentially trying to represent the same thing. These programs were not written as a unified tool - each was tackled individually, often by different people, over different periods of time.

For those who read webcomics, the somewhat recent xkcd on this subject makes me laugh. And you know what? I couldn't think of a valid tar command either. F*ck. It's not that these tools aren't useful - they are. And it's not that these tools aren't powerful - they also are. But they are just as disjoint and often harder to learn than any UI, for the sole reason that a UI can visually represent information to make it easier to use, while with the command line you're at best stuck reading the online man pages. (At worst, you're stuck reading the man pages under vim.)

That's in terms of UI. None of this suggests that you shouldn't learn shell commands, and I think any programmer should have some knowledge of them, since they will inevitably be forced to use them one way or another.

---

Now for your second point, about control and APIs. I think your example of copying a picture from Facebook to Gmail isn't accurate - the problem isn't that Facebook doesn't provide some common API - it does - the images are transferred in a standard way and are accessible via standard means. Gmail can also accept drag-and-drop images. Not entirely surprisingly, then, in Chromium you can actually drag an image from one site and drop it right into Gmail. Yeah. You can. Try it.

Still, while you have a point, the UI design and differing paradigms of applications have nothing to do with the control and interaction of those applications. The UI is just a UI - the data handling is what's important, and that is actually being worked on. And even command line tools and piped input/output are limited in what they can do. As far as I know, there is no easy way to use a command line tool, or a combination thereof, to copy an image from a site and insert it into Gmail. Maybe someone will prove me wrong, but in the end those tools weren't designed for that sort of work - they were mostly designed for text-based operations.

My point is, command line tools have their own discrepancies and inconsistencies, despite still being very useful, and there actually are efforts in some areas to improve application interaction, similar to what you're talking about. However, UI design and differing UI paradigms aren't really related to the common API problem, nor will having one unified interface guarantee free interaction.
  2. MilchoPenchev

    Video of current progress

That looks pretty cool, and it kind of reminds me of an old game called Motherload (by XGen Studios). I'm curious if you've played it - it's probably the only Flash game that had me addicted. I'm not saying you're making a clone or copying them, just that they got some things right.

My only critique is that there seems to be a lot of darkness even near the digging machine, and it's kind of hard to tell what the surrounding tiles are. That's fine if it's what you're going for, but it might be more useful if players don't have to squint when playing - well, at least I did :)
  3. Well, first, I had a chance to try out your alpha (from your last post). Some feedback:
- The camera seems to shake a lot. I've never gotten motion sickness from games, but that's probably the closest I've ever been to it. Can you stabilize it a bit? Maybe just have it shake when your health is low (if even then, since a lot of people have motion sickness issues).
- I couldn't figure out what happened when I clicked an inventory item (i.e. the gun). It didn't seem to equip at first, but then after some random clicking I did equip it.
- This is probably something you're aware of, but the mouse isn't locked to the window, so if you move it around it may exit the window, and clicking will then cause your game to lose focus. Since I have dual monitors, fullscreen didn't help (the mouse still moved to the other monitor).

Now for this post: it's interesting that you've sort of combined what I'd normally think of as item endurance and item quality (I'm coming from an RPG point of view). It may work well, depending on which properties of the items you change (a gun's accuracy and stopping power don't really degrade with usage - it may jam more often, though, as you said, due to metallic buildup on the inside of the barrel). Anyway, as a pure game mechanic it's interesting.

I was wondering more in terms of why 12 exactly? Why not, say, 10 or 15? I'm not saying 12 is wrong, just that I'd pick some reason for it, I suppose. A lot of RPGs that have item durability give it out of 100, since durability out of 100 is easy to express as a percentage. I understand that 100 may just be way too much, but I was wondering if you've considered other item quality scales and what effects they would have on gameplay.

Again, nothing wrong with 12 - in fact I was half expecting you to say you went with 12 because it's one of those numbers that's easy to subdivide. 12 is divisible by 2, 3, 4 and 6 - which, for example, is one reason people cite the feet/inches system as useful (1 foot = 12 inches) - because it provides a clear understanding of things like one half, one third, one quarter (and one sixth, though that's not as commonly used).

Anyway, that's my two cents. Thanks for taking the time to write the post.
  4. MilchoPenchev

    Alpha release - EARLY!

    I'm curious - what was the design decision behind choosing 12 levels of quality? Specifically, can you go into a bit more detail on that, maybe in another blog post? I, at least, think it would be worth it. Also, gratz on the alpha :)
  5. MilchoPenchev

    2D Skeleton Woes

About two months ago, I started writing a 2D game. Given that my previous work was on 3D deformable terrain, I figured a 2D game would be a nice change of pace and give me less hassle. I was right... mostly.

Character animation in 3D is not a simple task. There's some great software out there to help you animate it; heck, two years ago even I wrote a simple character animation program that could automatically attach a mesh to a skeleton. But enough nostalgia! It seems that skeleton animation in 2D should be easier. You only have one simple rotation angle to worry about, and no need to account for the gimbal lock problem using those pesky yet mathematically beautiful quaternions. Here's a simple 2D textured skeleton in my game:

Seems straightforward enough to animate. You don't necessarily have to worry about vertex attachments; you can give each bone its own individual sprite and design them so that they blend together. But you don't want to design your animations twice, do you? A walking animation should be playable both walking LEFT and RIGHT, so you need a way to easily flip animations. Unlike in 3D, where you can rotate an animation about some axis to orient it in the proper direction, in 2D you have to actually mirror the animation.

I'm cheap, so I decided to go for a cop-out - I'm going to flip only the sprites instead of the whole skeleton. Sound good? Yeah. But we can't just flip the image in the sprite - you have to take the actual sprite rectangle and mirror all its vertices along a certain axis. Since SFML doesn't support this, it was time to write my own CustomSprite class. Still, not the hardest thing. The simple code for flipping a vertex along an arbitrary x-axis-aligned line:

sf::Vector2f CustomSprite::FlipHorizontal( const sf::Vector2f &point ) const
{
    sf::Vector2f pt = point;
    pt.x = m_axisIntersect - ( pt.x - m_axisIntersect );
    return pt;
}

The variable m_axisIntersect specifies the line along which to flip. The same method can be used to flip along an arbitrary y-axis-aligned line. So, here are the results:

OK, the actual bones (which may be a little hard to notice - they're the thin blue lines) aren't flipped, but the sprite flip seems to have worked fine. The results look promising so far. Except, I forgot - my character isn't always going to stand oriented straight up. Due to the physics of the game, he will lean on slopes and corners. Here's an example:

So, wait, what happens if I use the direction flip on a slope? Well... Oh, right. I'm flipping along the axis-aligned line that passes through the center of the character, so of course - the program is doing exactly what I told it to do, even if what I told it to do was wrong. It looks like I'm going to have to apply a mirroring along an arbitrarily oriented line now.

Mirroring around an arbitrary line isn't that bad, though it's certainly more involved. Suppose we have a line, passing through the point p, by which you want to mirror. The basics are then:
1. Translate all points by the vector -p, so the origin of the line matches the global origin.
2. Rotate all points so that the line you want to mirror by is aligned with one of the axes.
3. Mirror around that axis using the same method as above.
4. Undo step 2.
5. Undo step 1.
(A quick code sketch of these steps is at the end of this post.)

While I was thinking about this, I realized that I actually have all my sprites on the model in local coordinates already - they store their positions relative to the model's origin, which is the center point through which the mirroring line will have to pass. And I'm already setting the model's orientation when I touch a slope, so I already have a function that rotates it. In fact, I was setting the rotation like this:

float angle = atan2( -up.x, up.y )*180.f/(float)PI;
m_rootBone.SetRotation( angle );

However, I knew that when I mirrored the model, I could simply re-adjust the 'up' vector that the model received so that it was facing the right direction:

float angle = atan2( -up.x, up.y )*180.f/(float)PI;
if ( m_rootBone.GetSpriteFlip().first == CustomSprite::xAxisFlip )
{
    m_rootBone.SetRotation( 360.f - angle );
}
else
{
    m_rootBone.SetRotation( angle );
}

And the results now looked good:

Sure, the actual skeleton was nowhere near what the displayed sprites showed, but that doesn't matter. The skeleton is only used to draw sprites, not for collision or any other purpose. In the end, making a 2D skeleton was overall easier than a full 3D skeleton, but it had some challenges that you don't face when dealing with 3D.
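As an illustrative aside (not the actual CustomSprite implementation), the five mirroring steps above boil down to something like the standalone helper below. The function name and parameters are hypothetical; it just follows the translate / rotate / flip / un-rotate / un-translate recipe using SFML's vector type.

#include <SFML/System/Vector2.hpp>
#include <cmath>

// Mirrors 'point' across an arbitrary line passing through 'origin' at
// angle 'lineAngleRad' (radians, measured from the +x axis).
sf::Vector2f MirrorAcrossLine(const sf::Vector2f &point,
                              const sf::Vector2f &origin,
                              float lineAngleRad)
{
    // 1. Translate so the line's origin coincides with the global origin.
    sf::Vector2f v(point.x - origin.x, point.y - origin.y);

    // 2. Rotate by -angle so the mirror line lies along the x axis.
    float c = std::cos(-lineAngleRad), s = std::sin(-lineAngleRad);
    sf::Vector2f r(v.x * c - v.y * s, v.x * s + v.y * c);

    // 3. Mirror across the x axis (same idea as FlipHorizontal, applied to y).
    r.y = -r.y;

    // 4. Undo the rotation.
    c = std::cos(lineAngleRad); s = std::sin(lineAngleRad);
    sf::Vector2f u(r.x * c - r.y * s, r.x * s + r.y * c);

    // 5. Undo the translation.
    return sf::Vector2f(u.x + origin.x, u.y + origin.y);
}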
  6. MilchoPenchev

    Concurrent programming is hard, mmmkay?

    I think I see it, message away! Concurrent programming IS hard. I always try to remember never to make any assumptions about the order of events, and to explicitly guarantee the ordering I actually need. Of course, that's easier said than done...
  7. Your post should have said "Stop Right There, Criminal Scum!" Ideas aren't useless, but the execution of an idea is what makes it a reality, which means the execution inevitably shapes the result nearly as much as, or even just as much as, the idea itself. So, if you still have your design ideas, or copies of the docs, trying again with another team will probably produce different, and hopefully better, results than the guys who left. On that note, though - why DID you hand over the design docs? Sure, those people are free to start off by themselves with your idea in mind (unless contractually obligated not to, which doesn't sound like the case here), but you are also absolutely free not to hand them anything at all. I would've simply bid them farewell, auf Wiedersehen and adieu, wished them luck, and kept any work I created to myself.
  8. I'm posting this as an exercise/lesson; hopefully it's useful to someone. Eight years after I started learning C++, I was still caught off guard by this. Basically, I had code like this (ignore the LOG_DEBUG - that was just put there when I was testing this):

struct ViewMember
{
    ViewMember(Widget *wid, int wei) : widget(wid), weight(wei) { }
    ~ViewMember() { LOG_DEBUG
  9. MilchoPenchev

    Water, water, everywhere...

Well, although we plan on stopping the user from digging through the core, mostly to limit data storage, it is currently possible to do what you described. To answer your questions:

A) Yes, the water will stop falling when you dig through the core. Technically it isn't falling, since it has no notion of gravity, but it is spreading. However, it will stop when it touches another body of water.

B) The oceans wouldn't drain. The current method of water spreading is a copy-density method - in other words, infinite water. The reason for this is that if, instead of copying densities, the water densities got transferred (moved from one voxel to another), even a tiny dig could trigger a huge chain reaction that would propagate along the entire body of water. It would eventually die down because of the minimum density transfer (there are 255 levels of density, and the minimum transfer is 1), but it would still cause a huge water update reaction - which isn't computationally feasible. On the bright side, this type of water spreading works great for oceans, seas, rivers and other large bodies of water, where it's unrealistic to assume you could drain them all. We are considering the density-transfer method for user-placed water: if the user has a bucket of water, it's a very finite source, so it would be more computationally feasible to move all the density around.

Thanks for reading.
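To illustrate the difference between the two spreading strategies, here is a rough sketch for a single pair of neighboring voxels. The names and exact rules are made up for illustration - this is not the engine's actual update code - but it shows why the copy variant never drains a source while the transfer variant conserves water.

#include <cstdint>
#include <algorithm>

using Density = std::uint8_t;   // 0..255 levels of density, as described above

// Copy-density ("infinite water"): the source keeps its density and the
// neighbor is only raised toward it, so large bodies never drain.
void SpreadCopy(Density source, Density &neighbor)
{
    if (source > 1)
        neighbor = std::max(neighbor, static_cast<Density>(source - 1));
}

// Transfer-density (conserving): density actually moves, so a finite amount
// of water (e.g. a placed bucket) can be used up. The minimum transfer of 1
// is what eventually damps the chain reaction mentioned above.
void SpreadTransfer(Density &source, Density &neighbor)
{
    if (source > neighbor)
    {
        Density amount = std::max<Density>(1, (source - neighbor) / 2);
        source   -= amount;
        neighbor += amount;
    }
}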
  10. MilchoPenchev

    Water, water, everywhere...

I've been working on water, slowly progressing forward. To those who might wonder: keeping track of, generating, storing and updating water when you're dealing with a 5 km planetoid (our current test planet) isn't quite straightforward. This is sort of a back-post, since I already had basic water in my last post, but this goes a bit more in depth.

The water simulation we went with is not like anything I've read about. There were several methods I considered before going with what we have now.

First was particle water. Pros: good water simulation; realistic waves, breaking, etc. are possible. Cons: hard to extract a surface, and impossible to keep track of all particles on any significant planetary scale. This was obviously not going to work for us.

Next, height-field based water. Pros: significantly less storage, easy surface extraction, decent water simulation. Cons: breaking waves are harder (though not impossible), and how do you do height-field water on a spherical planet? The answer: not well. You can either split it into 6 separate height fields, or try to create one in polar coordinates based on an even point distribution. This is too bad, because back before we went for an actual planet, on flat 2D terrain, this was my top choice.

What I went with: storing water in a 3D voxel density grid, much like terrain. Pros: the storage concerns were already figured out - water can be stored in the same data blocks as the terrain - so it's possible on a planetary scale. Cons: it's not a very realistic simulation, and it's hard to make huge waves. There was also one other pro, which I didn't realize until later - updating water was made somewhat easier by the fact that it's stored on a grid. Of course, the grid is NOT oriented with the surface, yet thanks to the range of densities [-127, 127] it was possible to achieve a perfectly smooth water surface anywhere on the planet, despite the grid being all squirrelly.

Here are some screenshots of the apparently misaligned grid and the nonetheless smooth water surface:

And here is a video of the new water shader: [video]

And a video with the older shader, but the only video of water spreading in a huge hole: [video]

Update: a video of the water on a small planet (200 m radius): [video]

For more info, and a demo of the project, you can visit http://blog.milchopenchev.com. Thanks.
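As an aside on why a signed density range gives a smooth surface despite the misaligned grid: the usual isosurface trick (an assumption about the general technique, not a description of our exact extraction code) is to place the surface where the density crosses zero between two neighboring voxels, interpolated from the two signed values rather than snapped to the voxel boundary.

#include <cstdint>

// Returns the fraction (0..1) along the edge from voxel A to voxel B at
// which the water surface (density == 0) lies. dA and dB are densities in
// [-127, 127] and are assumed to have opposite signs.
float SurfaceCrossing(std::int8_t dA, std::int8_t dB)
{
    return static_cast<float>(dA) / static_cast<float>(dA - dB);
}

For example, densities of +10 and -20 on two neighboring voxels put the surface about a third of the way along the edge, which is what lets the surface sit smoothly at any angle to the grid.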
  11. MilchoPenchev

    Grass, water and detail textures

    Here's a video of the work we've done on water, grass and detail textures. There's also a new build with these features on the blog: http://blog.milchopenchev.com We haven't really had time to post a detailed description of the technicals behind the water or detail maps, but hopefully we will soon. As always, thanks for reading. [video]
  12. MilchoPenchev

    Triplanar texturing and normal mapping

How we handled normal maps when also doing tri-planar texturing. Note: this is a duplicate post from our project blog: http://blog.milchopenchev.com - the formatting may be a bit off, sorry.

For our texturing, we had no choice but to use tri-planar texture mapping - since we generate an actual planet, the terrain can be oriented in any direction. Combined with the fact that the terrain is diggable, we had to make the texturing adapt to any angle. Triplanar mapping was the perfect solution.

Doing normal mapping on top of triplanar mapping may seem hard at first, but it's only a little harder than triplanar texture mapping. To obtain the final fragment color for triplanar mapping, you basically sample the same texture as though it were projected along the three planes (see diagram on the right). Once you have a sample from each of these planar projections, you combine the three samples depending on the normal vector of the fragment. The normal vector essentially tells you how close to each plane the projection actually is. So if you have a mostly horizontal surface, the normal vector will be vertical, and you sample mostly from the horizontal projection.

The same principle can be used to compute the normal from a normal map. Instead of sampling from the texture, you sample from the normal map. The RGB color you get gives you the normal vector as seen in that plane. Then you combine these normals using the same weights that you use to mix the texture samples.

Basically, you obtain three normal vectors, one in each plane, each with a coordinate system aligned with the texture on that side. In the picture on the right, red, green and blue are the axes on each projection of the texture, while the dark purple is a sample normal vector. You can imagine that the closer the fragment's normal is to a plane, the more it samples from that plane. One difference from texture mapping is that when the fragment's normal is close to a plane's normal but faces the opposite direction, you have to reverse the normal map's result.

This is what the code for obtaining the normal of one texture from its three normal-map projections looks like in our terrain shader:

vec4 bump1 = texture2DArray(normalArray, vec3(coordXY.xy, index));
vec4 bump2 = texture2DArray(normalArray, vec3(coordXZ.xy, index));
vec4 bump3 = texture2DArray(normalArray, vec3(coordYZ.xy, index));

vec3 bumpNormal1 = bump1.r * vec3(1, 0, 0) + bump1.g * vec3(0, 1, 0) + bump1.b * vec3(0, 0, 1);
vec3 bumpNormal2 = bump2.r * vec3(0, 0, 1) + bump2.g * vec3(1, 0, 0) + bump2.b * vec3(0, 1, 0);
vec3 bumpNormal3 = bump3.r * vec3(0, 1, 0) + bump3.g * vec3(0, 0, 1) + bump3.b * vec3(1, 0, 0);

return vec3(weightXY * bumpNormal1 + weightXZ * bumpNormal2 + weightYZ * bumpNormal3);

Where weightXY, weightXZ and weightYZ are determined from the normal calculated at that fragment:

weightXY = fNormal.z;
weightXZ = fNormal.y;
weightYZ = fNormal.x;

I realize it sounds a bit counter-intuitive that we need a normal before we can calculate the per-fragment normals, but this normal can be obtained by other means, such as per-vertex normal calculations. (We obtain it through density difference calculations of the voxels.)

Finally, to get good results you need a good normal texture. We only had time to create one (neither of us is a graphics designer), so here's a video of the rock triplanar normal map, with a short day length on our planet: [video]
  13. MilchoPenchev

    PrEdiTer project intro

The Procedural Editable Terrain project is just what it sounds like - a project to build an engine for terrain that is both procedurally generated and editable (lowering, raising, etc.). The project evolved from my previous project for plain procedural terrain. One other person has joined me on this project and has been helping with various tasks.

Currently the project has the basic functionality described above, plus some additional things. The major features currently are:
- Persistent Perlin-noise based terrain generation
- Data stored in discrete voxels, allowing for modification of the terrain
- Planet generation of variable radius, currently tested with a 100 km radius, theoretically supporting much larger
- Planet-based biomes - desert, savanna, temperate, polar - distributed with variation across the planet
- Basic physics from extracted terrain information - accurate collision detection with the terrain
- Different LODs; on more powerful PCs, comfortably supports half a kilometer of viewing distance
- Custom terrain shader supporting custom lighting, triplanar texturing and blending between any two textures
- Custom sky shader displaying the moving sun and related effects

So, in an effort to increase the number of people aware of my work to 4, I'm going to be posting some blog entries describing some ideas. You can read more technical detail on the blog, where you can also download the current version of the program: http://blog.milchopenchev.com Currently, a lot of the options are not exposed to the user through a nice interface, but some are accessible via a console (~). Here are some screenshots:
  14. Well, it's been a long time since I've posted here. The terrain project I've been working on has resumed, under the name PrEdiTer - Procedural Editable Terrain. Its new blog is here: http://blog.milchopenchev.com
  15. The terrain is finally starting to look like... well... terrain. With normal-based texture coordinate assignment and 3D texturing, the results are promising. The updated version is also up on my site, available for download - in fact, I encourage you to try it and would appreciate any feedback.

The main problem was how to assign texture coordinates properly. Part of the solution was to use the normal vector and the elevation of the vertex. If the terrain at a point is not approximately horizontal (based on the normal vector), then grass or snow can't hold on to it, so it's set to the rock texture. The height is only used to determine how the horizontally-oriented faces are textured - with snow or grass. However, if only the height is used, there's a clear cut-off line for the switch, which looks unnatural. So I added a (consistent) variance of +/- 100 m to the height, and the clear-cut line dissolved.

The visibility remains ~1.5 km, which is illustrated in the last screenshot - that mountain is 1 km high. I also increased the amount of detailed Perlin noise (I use 2 Perlin noise functions), which now generates some interesting-looking formations, like the overhang and the archway seen below. So, here are the screenshots:
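As a footnote to the texturing rule above, here is a rough sketch of that selection logic. The function name, the steepness threshold, the snow-line height and the assumption of a y-up vertical axis are all made up for illustration - they are not the project's actual values.

#include <cmath>

enum class TerrainTexture { Rock, Grass, Snow };

// Picks a texture from the surface normal's vertical component, the vertex
// elevation, and a consistent per-location noise value in [-1, 1].
TerrainTexture PickTexture(float normalY, float elevation, float noise)
{
    // Steep faces (normal far from vertical) can't hold grass or snow.
    if (normalY < 0.7f)                       // assumed steepness threshold
        return TerrainTexture::Rock;

    // Horizontal faces: choose grass vs. snow by height, with +/- 100 m of
    // consistent variance so the snow line isn't a clean cut-off.
    const float snowLine = 800.0f;            // assumed snow-line elevation
    if (elevation + noise * 100.0f > snowLine)
        return TerrainTexture::Snow;

    return TerrainTexture::Grass;
}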