
JTippetts

Member Since 04 Jul 2003

#5272423 How to do seamless editing on multiple variants of a texture?

Posted by JTippetts on 23 January 2016 - 08:08 PM

You might take a look at this blog post from the Path of Exile developers. It details a technique they used in PoE that utilizes Wang tiles. With Wang tiles, the edges of texture variants are made to match up with specific other edges so that the tiles can be combined in different patterns. With a Wang tile set, it is possible to tile the plane indefinitely without obvious repetition.




#5271884 Pathfinding and Databases

Posted by JTippetts on 19 January 2016 - 01:24 PM

Just to add a bit to the pile-on: a common problem for novices is learning to differentiate what they think is a huge amount of memory from what the computer thinks is a huge amount. It sounds like you might be making a text-based MUD. Relative to modern RAM capacities, such a thing is extremely lightweight. On modern multi-gigabyte systems, you could likely have hundreds of thousands, if not millions, of rooms resident in RAM. I'd posit that you could have at least as many rooms loaded as you could conceivably create and flesh out within the next five years or more, barring large-scale procedural generation.

I can remember working on MUDs in the 90s that had hundreds of rooms, took many hours to explore, and could be held entirely within the paltry RAM capacities of the day. Imagine how much you can do now.


#5267693 Replacing ECS with component based architecture, or traditional OOP, or somet...

Posted by JTippetts on 23 December 2015 - 03:07 PM

OP: I think some of your confusion comes from the boilerplate code that frameworks such as EntityX use under the hood to implement their behavior. Part of the complexity involves structuring an object based on some bit of data that describes that object's structure. For example, it is easy enough to 'compose' an object using concrete members to create an aggregate class:

struct Thing
{
  Position pos;
  Orientation orient;
  Velocity vel;
  OtherData d1;
  SomeMoreData d2;
};
This structure is enforced at compile-time, making ad-hoc object description cumbersome; such a composition can be every bit as rigid as a deep inheritance hierarchy, which is also enforced at compile-time.

Composition frameworks instead build a system where an aggregate object can be described and instanced at run-time from a data description. Objects in these frameworks are ad-hoc in nature, and quite often instanced through a system that reads a data file in some format and populates a generic container with a list of components. The structure of the object is not determined at compile time. Due to the flexibility of such a system, quite a bit of seemingly-complex boilerplate code is necessary. That is what EntityX and others offer: they write the boilerplate so you don't have to.

Essentially, such a framework needs to implement a few standard behaviors:
1) Construct an aggregate object from a descriptor of components.
2) Facilitate communication between objects and components (i.e., via event passing or some other scheme).
3) Handle update/render/etc. main-loop functionality.
4) Provide tools for object lifetime management, instancing, serialization, etc.

While the concepts of a composition-based framework are relatively simple, the concrete execution of them in C++ quite often involves some fairly complex code. Additionally, such a system is STILL probably going to make extensive use of inheritance. For example, an object that implements the basic container for a list of components works well if all components inherit from a base Component class, to enable storing them in a single vector. Objects that can receive events work well if they inherit from some sort of base EventReceiver class, so that the core systems can hand off an event to an object without caring about its ultimate type.


#5267561 How Does One Program A Voxel Editor?

Posted by JTippetts on 22 December 2015 - 05:19 PM

Create the voxel renderer first. That'll help you learn more about voxels. The editor will be essentially your renderer plus a bunch of UI and UX related stuff that has very, very little to do with actual voxels. After writing the voxel renderer, you may find that another voxel editor suits your purposes well, thus saving you the actual effort of writing your own editor.


#5265313 Where to learn to create a graphic framework for Lua?

Posted by JTippetts on 07 December 2015 - 12:54 PM

I mainly work with programming and am very eager to learn more deep relationships between programming languages, compilers and hardware. Therefore, I prefer to only have to do the Lua programming part of the framework with least external manipulation such as wrapping a specially structured DLL and using the C API.


The problem is that Lua CAN'T do any of what you want (window management, etc.) without being bound to other code, usually written in C or C++. Out of the box, Lua offers only the basic language constructs and a few utility libraries (table, string, math, etc.), which are themselves written as C modules bound and exposed to Lua. That is what libraries such as Love2D offer: utility code, useful for games, that is already bound and exposed to Lua.

That's not to say that I refuse to learn it and do it, though I am quite lost on your answer... I am going a bit too deep than I am confident with as I never actually received any formal education in computer science, but rather self-taught to program well. I mean the best I could do is install Love2D and Lua using installers since I barely understood a video for manual installation of Lua, yet the main reason I am asking this question is really to see if anyone could link me to a tutorial of this part of computer science, preferably specialized in creating a graphic framework for Lua.


There probably isn't a specialized tutorial for creating graphic libraries specifically for Lua, since it's really two questions: 1) creating a graphic library, and 2) binding it to Lua. With enough work, just about any graphical library can be bound to Lua.

Like the main reason I am doing this is that even for a popular language such as Java I only know of a library called batik that allows svg loading, manipulation and drawing, to which there is not many tutorials, and for a embedded language like Lua which is less popular I am surprised a library like Love2D even exists, but it doesn't support vector graphics. Therefore, I thought I would aim, possibly too high, to learn of graphic frameworks and create my own simple one for vector graphics in Lua.


Read the Programming in Lua text. It talks specifically about binding external libraries to Lua. Additionally, there are third-party tools and resources that can make the job easier, though many of them have slipped away into obsolescence. But before you go confusing yourself too much more, just read PIL. Seriously, it's useful.


#5264122 Looking for a free 2D engine

Posted by JTippetts on 29 November 2015 - 11:11 AM

You could take a look at Urho3D, which has recently expanded with a healthy set of 2D functionality.


#5257815 How to create art like this?

Posted by JTippetts on 18 October 2015 - 03:07 PM

Personally, I suspect that was done the old-fashioned way. For a static still like that, drawing and painting by hand would probably be the easiest approach.


#5257584 Game Engine Creation: What are the biggest challenges faced?

Posted by JTippetts on 16 October 2015 - 05:42 PM

Adoption is a big problem. In a world where Unity, UE, and others exist and are readily available, it becomes MUCH harder to convince anyone to adopt yours. You need to offer something they can't get from one of the bigger offerings. So there is a significant chance that, if you are doing it with the intention that others use it, you will just be wasting your time.


#5257042 Architecture for Isometric Style Game

Posted by JTippetts on 13 October 2015 - 08:05 AM

It is best to keep update and rendering operations separate, so that was a good change.


Do you make a lot of use of partial transparency in your sprites, or are they all alpha masked instead? If masked, then you might be able to sort your scene front to back with depth testing enabled to take advantage of early rejection.


Do you pack your sprites into texture atlases? Doing so will allow you to combine your draws into batches, rather than issuing a single draw call per object.


There are quite a few optimizations that can be made if you study the ways that more modern scene architectures work.




#5255704 Thoughts on the Boost.Build system? (as opposed to CMake?)

Posted by JTippetts on 05 October 2015 - 12:50 PM

If you do an out-of-source build, you can isolate the generated CMake files to a build directory tree and avoid polluting your source.

I tried Boost.Build briefly, and found it to be (like much of Boost itself) somewhat obtuse and difficult to use. I thrashed around with a number of others, including Premake, which I liked due to being Lua-based, but I always end up coming back to CMake.


#5253572 Why 2D?

Posted by JTippetts on 22 September 2015 - 08:32 PM

have to deal with one more dimension, use a 3D modelling software and some other logic things


This hand-waving, though, does hide a LOT of what can make it difficult for a beginner to jump into 3D games. Every additional piece of software you add, or additional complication, serves as a distraction from the main principles, and represents another large chunk of knowledge that must be assimilated before you can get to the real work.

However, I'd say that once you get past the engine/framework details, you are right that there isn't much to distinguish 3D from 2D, complexity-wise. Since it's recommended that beginners skip making their own engine, and since there is an abundance of engines out there that provide the framework for you, it's probably just fine to start with 3D.


#5252109 collision not working on corona sdks

Posted by JTippetts on 13 September 2015 - 08:08 PM

What do you expect to happen? What actually happens? What other things have you tried?

If you just dump a pile of code and say "it's not working", you probably won't get much of a response. You have to provide more information.


#5251713 Workstation for game design/3d Modeling/rendering

Posted by JTippetts on 11 September 2015 - 07:39 AM

Hobbyist? Pretty much any halfway-modern computer will do as long as you give it plenty of RAM. As a hobbyist, you probably won't be pushing the number of polys that the pros do, so going super high-end is probably just a waste of cash. I'm on a shitty HP box from Costco, and I have no problem doing medium-level character sculpts of around 250,000 triangles in Blender or Sculptris. Get a graphics card with good OpenCL support, and Blender can use it for rendering pretty nicely. We've gotten to the point where consumer-grade mid- to low-end hardware is more than enough for anything a hobbyist game developer is likely to throw at it.


#5250748 Perlin noise to cloud alike look texture

Posted by JTippetts on 05 September 2015 - 05:01 PM

Yes, gradient noise has the effect that if you input integer coordinates, it always returns 0. This was by design; Perlin wanted a function that would throw the peaks and valleys off of the integral lattice, hence the somewhat non-obvious approach of using a gradient vector and a dot product.


#5250718 Perlin noise to cloud alike look texture

Posted by JTippetts on 05 September 2015 - 01:27 PM

A couple of issues I see here.

1)
a = a + perlin.GetPerlinNoiseAtA(x, y, 256, 256, perlin.Gradient);
a = a + perlin.GetPerlinNoiseAtA(x/2.0, y/2.0, 256/2, 256/2, perlin.Gradient2);
a = a + perlin.GetPerlinNoiseAtA(x/4.0, y/4.0, 256/4, 256/4, perlin.Gradient4);
a = a + perlin.GetPerlinNoiseAtA(x/8.0, y/8.0, 256/8, 256/8, perlin.Gradient8);
a = a + perlin.GetPerlinNoiseAtA(x/16.0, y/16.0, 256/16, 256/16, perlin.Gradient16);
a = a + perlin.GetPerlinNoiseAtA(x/32.0, y/32.0, 256/32, 256/32, perlin.Gradient32);
You are not scaling each successive layer, so each layer has exactly the same weight of contribution to the final output. This has a tendency to produce white noise as more layers are added. Instead, you'll probably get better results doing:

a = a + perlin.GetPerlinNoiseAtA(x, y, 256, 256, perlin.Gradient);
a = a + perlin.GetPerlinNoiseAtA(x/2.0, y/2.0, 256/2, 256/2, perlin.Gradient2) * 0.5;
a = a + perlin.GetPerlinNoiseAtA(x/4.0, y/4.0, 256/4, 256/4, perlin.Gradient4) * 0.25;
a = a + perlin.GetPerlinNoiseAtA(x/8.0, y/8.0, 256/8, 256/8, perlin.Gradient8) * 0.125;
a = a + perlin.GetPerlinNoiseAtA(x/16.0, y/16.0, 256/16, 256/16, perlin.Gradient16) * 0.0625;
a = a + perlin.GetPerlinNoiseAtA(x/32.0, y/32.0, 256/32, 256/32, perlin.Gradient32) * 0.03125;
Note that hard-coding the constants like this is not really optimal, but that's the gist of it. This way, the smaller-scale features don't have as much overall impact as the larger-scale features. So layer 0 defines the overall large-scale shape of the terrain, and each successive layer adds smaller and smaller detail.
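The hard-coded constants above can be replaced with a loop. A sketch of the standard fractal-sum ("fBm") form, using a placeholder baseNoise in place of your GetPerlinNoiseAtA (the indexing differs from your divide-by-two scheme, but the halving of each layer's weight is the same idea):

```cpp
#include <cassert>
#include <cmath>

// Placeholder single-layer noise in [-1, 1]; not real Perlin noise,
// just a stand-in so the octave loop is runnable.
double baseNoise(double x, double y) {
    return std::sin(x * 12.9898 + y * 78.233);
}

// Fractal sum: each octave doubles the frequency and multiplies the
// amplitude by `persistence` (0.5 here), so fine detail contributes less.
double fbm(double x, double y, int octaves, double persistence = 0.5) {
    double sum = 0.0, amplitude = 1.0, frequency = 1.0, norm = 0.0;
    for (int i = 0; i < octaves; ++i) {
        sum += baseNoise(x * frequency, y * frequency) * amplitude;
        norm += amplitude;        // track the maximum possible amplitude
        amplitude *= persistence;
        frequency *= 2.0;
    }
    return sum / norm;            // normalize back into baseNoise's range
}
```

Dividing by the accumulated amplitude keeps the result in the same range regardless of octave count, which the hard-coded version doesn't do.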

2) Storing an M×N array of gradients for each layer is just not a very elegant way of doing things. Essentially, the heart of Perlin noise is determining which unit-sized grid cell a value lies within and determining the set of 4 enclosing gradients. Most implementations do this by performing a hash on the integral coordinates of the 4 corners and using this hash to access a one-dimensional table of gradients. Look at the reference implementation. (Although the reference implementation is not as clear as I would like.) It uses a table called p[], which is filled with the integers from 0 to 255, each entered twice, and the whole table randomly shuffled. Then input coordinates (in the range of 0 to 255) are hashed using this table:

int A = p[X  ]+Y, AA = p[A]+Z, AB = p[A+1]+Z,      // HASH COORDINATES OF
    B = p[X+1]+Y, BA = p[B]+Z, BB = p[B+1]+Z;      // THE 8 CUBE CORNERS,
The AA, AB, BA, BB values are then the hashes for the corners and can be used to index a look-up table. (The reference implementation 'cheats' a little by eliminating the actual look-up table and using the grad() function to compute a gradient directly from the hash value.)

Using a hash like this saves you from having to generate a 2D table for each layer. By combining a seed with your hash (hashing integral coords + seed together) you can generate a different 'pattern' for each layer.
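A sketch of that seeded hash (the permutation contents here are a deterministic stand-in, not Perlin's shuffled table; real implementations shuffle 0..255 randomly and store the table doubled so the second index needs no wrap check):

```cpp
#include <cassert>
#include <cstdint>

// Doubled permutation table, as in the reference implementation.
static unsigned char perm[512];

void initPerm() {
    for (int i = 0; i < 256; ++i) {
        // Stand-in permutation: i -> (i*97 + 131) mod 256 is a bijection
        // because 97 is odd. A real table would be randomly shuffled.
        unsigned char v = (unsigned char)((i * 97 + 131) & 255);
        perm[i] = perm[i + 256] = v;
    }
}

// Hash the integer corner (x, y) together with a seed; the result can
// index a gradient table (or feed a grad()-style switch). Folding the
// seed into the lookup gives each layer its own pattern from one table.
int hashCoords(int x, int y, unsigned int seed) {
    return perm[(perm[(x + seed) & 255] + y) & 255];
}
```

Note the & 255 masks: they are what limit this scheme to a period of 256, which is exactly the limitation the long-period hashing paper below addresses.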

The reference implementation's hash algorithm is limited to a period of 256, which means that for input values greater than 255, the pattern will start to repeat itself. For many uses, this is okay, but if you are doing something like implementing large-scale worlds such as in Minecraft, then you'll probably want to look into a different hashing algorithm. This paper by Ares Lagae and Philip Dutré describes a form of long-period hashing that works similarly to how Perlin's hashing works, but allows hashing to much larger periods than 256. I implement a form of this in my noise library hash routines.



