About JTippetts
  1. Expectations for all your code, always

    I'm terrible at writing code. Always have been, and while for some time there I was improving, in recent years I've been getting worse, I think. I can usually write something that achieves what I need it to do, but when it comes time to polish, clean up, formalize the tests, and expunge the warnings... heh. Yeah, THAT will happen. Sure.
  2. You'd probably need to consult with Trion Worlds' legal team about that. A quick check of their site doesn't yield any promising contact info, so it's likely they're not open to being approached about those kinds of arrangements. If you're hoping to base a business plan on running an ArcheAge server, you are highly likely to be disappointed.
  3. Node Graph UI for Accidental Noise Library

    So, I decided I'd go ahead and implement the expression node now: It works pretty well.
  4. Node Graph UI for Accidental Noise Library

    I actually have considered it, yeah. As soon as I've got the existing stuff all wired up, I'll probably work on it. The functionality is mostly there, to parse out a string expression, so I don't think it'll be too troublesome. As for the arrows and buttons on the output box... those are the first 64x64 block of pixels in the UI texture. I stuck a 64x64 BorderImage widget in there, with the idea that eventually it'll show a preview image of the noise module that automatically updates, and when you create a BorderImage in the Urho3D editor, it automatically assigns the UI texture. I'll change that soon.
  5. So the last couple days I've been working on something that I've wanted to do for a long time now. I've been building it as part of a terrain editor. It's still mostly incomplete, but so far you can create nodes, drag links to link/unlink them, then output them to a greyscale image. Still in the works: a few node types to implement, and a lot of glue code for saving/loading node graphs, hooking them up to generate terrain and terrain layer weights, etc... But it's coming along pretty nicely.

    In the process, I have made a few fixes and additions to the noise library to help with matters. One of the main additions is the Fractal node. The library, of course, has had fractals since the beginning, but the way they were implemented made it tough to allow them in the node graph editor. So instead, I implemented a special function type that takes as input a chain of functions for the layer noise, as well as other functions to provide the fractal parameters. Internally, the function iterates over the number of octaves and accumulates the noise value. At each octave, the layer function chain is re-seeded with a new seed: the re-seed travels the function graph and sets a new value for any Seed-type node in the chain. This small change has opened up some easier ways of making fractals.

    Additionally, I have added a Seeder module, which exposes this internal re-seeding operation, and a Randomize module. The Randomize module takes a seed or seed chain and uses it to randomize an output value within a range. It's kinda weird to talk about, so instead I'll demonstrate with a real-world solution to a common problem. Here is a fractal image of ridged noise: This fractal is generated by summing successive layers, where each layer is a Value noise basis passed through an Abs operation before being summed with the previous layers.
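    That octave loop, with its per-octave re-seed of the layer chain, fits in a few lines of Python. To be clear, this is my own minimal sketch for illustration; the names (value_noise, fractal) and the hash constants are made up here and are not ANL's actual API:

```python
import math
import random

def value_noise(x, y, seed):
    """Toy lattice value noise: a repeatable random value at each
    integer grid point, bilinearly interpolated in between."""
    def lattice(ix, iy):
        # hash the lattice point and the seed into a value in [-1, 1]
        return random.Random((ix * 73856093) ^ (iy * 19349663) ^ seed).uniform(-1.0, 1.0)
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    fx, fy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)  # smoothstep fade
    top = lattice(ix, iy) * (1 - fx) + lattice(ix + 1, iy) * fx
    bot = lattice(ix, iy + 1) * (1 - fx) + lattice(ix + 1, iy + 1) * fx
    return top * (1 - fy) + bot * fy

def fractal(x, y, seed, octaves=6, lacunarity=2.0, gain=0.5, layer=value_noise):
    """Fractal-node sketch: sum the layer chain over the octaves,
    handing the chain a brand-new seed at every octave."""
    total, amp, freq = 0.0, 1.0, 1.0
    for octave in range(octaves):
        layer_seed = seed + octave * 1013  # the per-octave re-seed step
        total += layer(x * freq, y * freq, layer_seed) * amp
        freq *= lacunarity
        amp *= gain
    return total
```

    Because the re-seed happens inside the loop, whatever chain is passed as `layer` sees a different seed at every octave; with an Abs applied to each layer before summing, this same loop produces the ridged variant described above.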
It creates the Ridged Multifractal variant, but you can see in that image that there are grid-based artifacts. These kinds of artifacts are common: each layer of noise is generated on a grid, so the grid biases tend to amplify one another as the fractal is built. You can mitigate this somewhat with different values for lacunarity (the value that scales the frequency for each successive layer), but that only slightly reduces the visible artifacts; it doesn't eliminate them altogether.

A long time ago, I advocated applying a randomized axial rotation to each layer of noise, rotating the noise function around a specifiable axis in order to un-align the individual grid bases and prevent the grid biases from amplifying one another. Previously, these rotations were built directly into the fractals, but that is no longer necessary. The new Randomize and Fractal nodes make this easy to incorporate in a more flexible way (or eliminate, if you really want artifacts): In this screenshot, you can see that I have set up a Fractal node, and for the layer source I specify a gradient basis fed through an Abs function. That function in turn feeds a RotateDomain node, which feeds the layer input of the fractal. Attached to the angle input on the fractal is a Randomize node that randomizes a value in the range 0 to 3. The result is this: You can see that the grid artifacts are gone.

The fractal iterates the layers, and at each layer it re-seeds the Layer input chain, so any node marked as a seed is reset to a new value each time. The Gradient basis node (which has a seed) is re-seeded, and so is the Randomize node that specifies the rotation angle. Each noise layer therefore generates a different pattern than the other layers, and is rotated by a different amount around the Z axis. This misaligns the grid biases, preventing them from amplifying each other, and gives a nice, non-artifacty fractal pattern.
I still have quite a bit to do in implementing the rest of the noise functions in ANL. But there you go, that's what I'm working on right now.
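    The whole arrangement — a fresh seed plus a Randomize-style rotation angle per octave, fed through a RotateDomain step — can be sketched in self-contained Python. Again, this is hypothetical illustration code with my own names and constants, not ANL's API, and a toy value-noise basis stands in for the gradient basis:

```python
import math
import random

def vnoise(x, y, seed):
    """Toy [0, 1) lattice value noise standing in for a gradient basis."""
    def h(i, j):
        return random.Random((i * 2654435761) ^ (j * 40503) ^ seed).random()
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    fx, fy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)  # smoothstep fade
    a = h(ix, iy) * (1 - fx) + h(ix + 1, iy) * fx
    b = h(ix, iy + 1) * (1 - fx) + h(ix + 1, iy + 1) * fx
    return a * (1 - fy) + b * fy

def ridged_rotated(x, y, seed, octaves=6, lacunarity=2.0, gain=0.5):
    """Ridged fractal where each octave (a) re-seeds its basis and
    (b) draws a Randomize-style rotation angle in [0, 3], so the
    per-layer grid bases never line up with one another."""
    total, amp, freq = 0.0, 1.0, 1.0
    for octave in range(octaves):
        layer_seed = seed ^ (octave * 0x9E3779B9)            # re-seed the chain
        angle = random.Random(layer_seed).uniform(0.0, 3.0)  # Randomize node
        c, s = math.cos(angle), math.sin(angle)
        rx, ry = x * c - y * s, x * s + y * c                # RotateDomain about Z
        ridge = 1.0 - abs(vnoise(rx * freq, ry * freq, layer_seed) * 2.0 - 1.0)  # Abs -> ridge
        total += ridge * amp
        freq *= lacunarity
        amp *= gain
    return total
```

    Because the angle is derived from the same per-octave seed, every layer is both a different noise pattern and a different rotation, which is what keeps the grid biases from stacking.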
  6. Most enjoyable grinding mechanic?

    Consensus? You make funny jokes. You probably won't find a consensus about _anything_ on the internet, but especially about something like grinding. There are those who hate grinding in any form. There are those who are okay with some grinding, as long as it doesn't feel too, you know... _grindy_. Then there are those who can Mountain Dew-up, hunker down, and grind the absolute _shit_ out of things from sun-up to sun-down. And everything in between. Personally, I can grind pretty determinedly if I feel the rewards are commensurate with the time spent grinding. Nailing the rewards just right, though, can be tricky, and I don't think there is any one-size-fits-all formula for it.
  7. 2D Game Engine Advice

    You might take a look at Urho3D. It's open-source, supports many different platforms (Windows, mobile, Raspberry Pi, etc...) and rendering backends (D3D11, OpenGL, etc...), and has both 2D and 3D support. It supports scripting via AngelScript or Lua, and web applications by way of Emscripten. It's a pretty decent engine, and it's the one I'm using for my game.
  8. Adding the directional shadows to the dungeon areas has made a huge improvement for me.
  9. One Last Try

    You still use just way too many words to say nothing at all.
  10. The main issue for me is that the walls and such in your dungeon look sorta painted onto the floor. The walls and obstacles in the outdoor locations have directional light shadows that give them a sense of depth, but that is missing in the dungeon. It might help to add those in, even if it doesn't technically "make sense" to have directional shadows inside.
  11. I really have nothing of value to contribute to this discussion. Hi, Olu! How ya been?
  12. Marching Cubes with Multiple Materials

  12. In the past, I never bothered with marching different meshes for different terrain materials. I just marched the terrain as a single mesh, then used vertex colors (generated after marching the surface, using various techniques) to blend between terrain textures in the shader. Something like this (very quick example): With a tri-planar shader that displays different textures for the top surface than for the side surfaces, you can just paint the v-colors (either procedurally, or by hand in a post-process step if that is your wish) for different materials, and the shader will handle blending between the types and applying the tri-planar projection.

    A single color layer provides for 5 base terrain materials, if you count black (0,0,0,0) as one material and red (1,0,0,0), green (0,1,0,0), blue (0,0,1,0) and alpha (0,0,0,1) as the others. Provide another RGBA v-color layer and you can bump that to 9. Doing it this way, you don't have to settle for sharp edges between terrain types, since the shader will smoothly blend between materials as needed, and you don't deal with the hassle of marching multiple terrain meshes.
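    For what it's worth, the channel math is simple. Here's a hypothetical Python sketch of it (the real work happens per-pixel in the shader; the function names here are mine): one RGBA vertex color resolves into five material weights, with "black" taking whatever weight the four channels don't claim:

```python
def material_weights(r, g, b, a):
    """Resolve one RGBA vertex color into weights for 5 materials:
    material 0 ('black') takes whatever weight the R, G, B and A
    channels don't claim, then everything is normalized to sum to 1."""
    base = max(0.0, 1.0 - (r + g + b + a))
    total = base + r + g + b + a
    return [w / total for w in (base, r, g, b, a)]

def blend(samples, weights):
    """Weighted sum of per-material texture samples (RGB tuples here);
    the shader would do this after the tri-planar projection."""
    out = [0.0, 0.0, 0.0]
    for (sr, sg, sb), w in zip(samples, weights):
        out[0] += sr * w
        out[1] += sg * w
        out[2] += sb * w
    return tuple(out)
```

    A vertex painted pure red (1,0,0,0) is 100% material 1, an unpainted vertex falls back entirely to material 0, and anything in between blends smoothly.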
  13. Learning I've no skill yet still overthinking!

    As long as you put the idea of making it an MMO firmly out of your mind, the rest should be easily doable. The MMO adds a degree of complexity that you are not nearly ready to tackle.
  14. Yes, if the high poly is made by subdividing the low poly, then it's not too difficult to bake yourself. You can build a workflow that ensures that. Some tools, however, allow dynamic subdivision while sculpting (see something like Sculptris, or Blender's dynamic subdiv). This allows more mesh detail in areas that need it, but can break the relationship between the high poly and the low poly. If your workflow includes this, then the math becomes more difficult. Additionally, I feel like you overstate the difficulty of using a 3D tool to bake normal maps. Tutorial videos make it seem more difficult than it really is. Once you understand the process, it can be a very quick thing. The actual baking setup and bake can take mere minutes. Even for tiling textures, I now prefer Blender over my own older hand-tooled processes: having access to all the other tools of a general-purpose 3D tool makes all the difference.
  15. I've done normal maps the way you describe, and it does have its uses. However, baking normal maps is a mapping process, mapping from high-detail geometry to low-detail geometry. In the case of baking a tiling planar normal map, you are going from high-detail geometry arrayed on a plane, to a plane. It's a pretty easy mapping, one-to-one with the XY coordinates of a particular location on the plane. A given feature simply projects directly to the plane 'below' it. When baking an arbitrary mesh, however, it's not so simple. In that case, you need to find the point on the high-poly mesh that most directly corresponds with a given point on the low-poly mesh. This involves projecting onto the surface of the high-poly, and this projection is NOT a simple projection onto a 2D plane. Once you advance to that level of complication, it's much easier to do the work in a 3D package. Not to say that it CAN'T be done in your own way, just that the math and logistics become a LOT more complex.
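    To make the planar case concrete, here's a toy Python sketch of my own (not any package's baking API): because every feature projects straight down onto the plane, the baked normal at (x, y) is just the normalized height gradient, estimated with central differences and packed into the usual RGB encoding:

```python
import math

def bake_planar_normal(height, x, y, step=1.0, strength=1.0):
    """Planar-bake sketch: sample the heightfield around (x, y),
    build the normal from the height gradient, and pack it into
    normal-map bytes. No ray casting or projection search needed."""
    dhdx = (height(x + step, y) - height(x - step, y)) / (2.0 * step)
    dhdy = (height(x, y + step) - height(x, y - step)) / (2.0 * step)
    nx, ny, nz = -dhdx * strength, -dhdy * strength, 1.0
    mag = math.sqrt(nx * nx + ny * ny + nz * nz)
    nx, ny, nz = nx / mag, ny / mag, nz / mag
    # pack [-1, 1] components into the usual [0, 255] normal-map bytes
    return tuple(int(round((c * 0.5 + 0.5) * 255.0)) for c in (nx, ny, nz))
```

    A flat heightfield bakes to the familiar (128, 128, 255) "flat blue". Baking an arbitrary mesh replaces that one-line gradient with a search for the corresponding point on the high-poly surface, which is exactly the complexity a 3D package handles for you.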