
frob

Member Since 12 Mar 2005
Offline Last Active Yesterday, 09:41 PM

#5289437 Which alignment to use?

Posted by frob on 30 April 2016 - 11:03 AM

Thanks everyone for the comments!
 

 

What should you actually choose for alignment? 4 byte, power of two, 16 byte?

 
For a memory manager, provide alignment as a parameter with a default that matches the build settings.

 

About allowing a parameter: I'm not sure how useful this is here, since this memory manager acts entirely in the background of the engine. It's true that the developer of the component is responsible for registering it and thus creating the pool, so I could allow it as a parameter. But since all this does is allocate components that are used exclusively by the ECS, I don't think it makes sense here. Thanks for the pointer anyway (I will need to think about more general memory management strategies at some point).

 

  
As it has been publicly released, you might look over the EASTL implementation. Note how they provide a method with a default alignment, and another accepting parameters for custom alignment and custom offsets for arrays.
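
From memory, the allocator interface is shaped roughly like this (a sketch; check the actual EASTL source for the exact signatures):

class allocator
{
public:
    // Default alignment, chosen by the allocator.
    void* allocate(size_t n, int flags = 0);

    // Caller-specified alignment, plus an offset for cases such as
    // arrays where the aligned address is not the start of the block.
    void* allocate(size_t n, size_t alignment, size_t offset, int flags = 0);

    void  deallocate(void* p, size_t n);
};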




#5289188 Obtaining angle of object in relation to camera direction

Posted by frob on 28 April 2016 - 09:51 PM

For computing the angles the law of cosines still applies. If you need to rotate from one coordinate space to another, do that by multiplying the vector by the transformation (or the transformation by the vector, depending on your data layout); then you're in the different coordinate space. Once both vectors are in the same space, the law of cosines computes your angle.

Your updated description does not change that. The angles are still calculated by the same formula, no matter what points are involved or what transformations they go through.
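
As a sketch (with a hypothetical Vec3 type; any math library's equivalents will do), the computation looks like this:

#include <cmath>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

float length(const Vec3& v)
{
    return std::sqrt(dot(v, v));
}

// Angle between two vectors, in radians. This is the law of cosines
// specialized to vectors sharing an origin: cos(theta) = a.b / (|a||b|).
float angleBetween(const Vec3& a, const Vec3& b)
{
    return std::acos(dot(a, b) / (length(a) * length(b)));
}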


#5289148 Obtaining angle of object in relation to camera direction

Posted by frob on 28 April 2016 - 03:56 PM

Giving upvotes because the answer is correct.

 

The mathematics of 3D worlds is linear algebra.  You need to understand the fundamentals of linear algebra to program 3D software effectively.  Otherwise you will be entirely at the mercy of people online to provide your basic formulas every time you need them.

 

The mathematics of 2D worlds is trigonometry and geometry. You need to understand the fundamentals to program 2D software effectively.

 

For how to apply the law of cosines: you can compute the angle between any two vectors using the function provided above. It applies just as well in 2D if you set the third component to zero; the two vectors are the legs of a triangle, and the inverse cosine is used to calculate the angle between those legs.

 

If you need the last line without 'math notation', the angle is equal to the inverse cosine of the dot product of two normalized vectors.  If you don't know what those words mean, work through this or similar.




#5289146 Generate this kind of 2d burst or pulse

Posted by frob on 28 April 2016 - 03:39 PM

Both the ring and the fading internal shape could be handled easily enough with a shader, if you can do that in your code. The shader could look at each pixel's distance from the center: if it is at the outer ring's distance, set it to one color; inside that distance, set it to a fraction relative to the distance.
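
A sketch of that per-pixel logic, written here as plain C++ (the same math would go in a fragment shader; names and thresholds are hypothetical):

// Returns an intensity in [0, 1] for a pixel at 'dist' from the burst
// center, given the current ring radius and the ring's thickness.
float burstIntensity(float dist, float ringRadius, float ringThickness)
{
    if (dist > ringRadius)
        return 0.0f;                 // outside the burst entirely
    if (dist > ringRadius - ringThickness)
        return 1.0f;                 // the solid outer ring
    return dist / ringRadius;        // interior fades toward the center
}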

 

Otherwise I'd consider a 1D texture drawn across a large disk that gets scaled up over time.

 
 

If you want to draw it manually: checking the documentation, it looks like they've got drawing functions such as DrawCircle() and DrawSolidCircle(). If those don't work, there are also spline drawing tools. That should cover the doughnut; the inside might require some gradient drawing, if the API supports it.




#5289144 Is possible to recover game data for Android?

Posted by frob on 28 April 2016 - 03:28 PM

Is it possible?  Yes, on some devices it may be possible.  

 

It may not be easy, it isn't directly supported, and there are many devices where it won't work. The design of many persistent storage chips these days doesn't work in a way that is friendly to the recovery techniques used for magnetic or optical storage, and some hardware will not surrender the old data no matter how hard you try. Other hardware will gladly restore deleted data if the software invokes the right commands.

 

If you are asking for security purposes, know that people can get around it. There are many ways to do so, including using emulators and custom hardware, so it is not secure.

 

If you are asking because you accidentally lost your game data, sorry for your loss; I recommend frequent backups.




#5289143 Which alignment to use?

Posted by frob on 28 April 2016 - 03:21 PM

What should you actually choose for alignment? 4 byte, power of two, 16 byte?

 

For a memory manager, provide alignment as a parameter with a default that matches the build settings. Look up the platform documentation for default alignment requirements.
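
A minimal sketch of such a parameter (hypothetical names; assumes the alignment is a power of two at least as large as a pointer): over-allocate, round the returned pointer up, and stash the original pointer so it can be freed later.

#include <cstddef>
#include <cstdint>
#include <cstdlib>

void* alignedAlloc(std::size_t size,
                   std::size_t alignment = alignof(std::max_align_t))
{
    void* raw = std::malloc(size + alignment - 1 + sizeof(void*));
    if (!raw)
        return nullptr;
    std::uintptr_t base = reinterpret_cast<std::uintptr_t>(raw) + sizeof(void*);
    std::uintptr_t aligned = (base + alignment - 1) & ~(alignment - 1);
    reinterpret_cast<void**>(aligned)[-1] = raw;  // remember for the free call
    return reinterpret_cast<void*>(aligned);
}

void alignedFree(void* p)
{
    if (p)
        std::free(reinterpret_cast<void**>(p)[-1]);
}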

 

By default Visual Studio uses 8-byte alignment but it can be changed with compiler options and #pragma commands.

 

Certain functions and instructions require objects to be at specific alignments. For example, loads into 128-bit SIMD registers have both an unaligned operation (MOVDQU) and a much faster aligned operation (MOVDQA) which crashes if the data is not 16-byte aligned.  Allow programmers the opportunity to require a different alignment if they know their needs require it.

 

Note that a few recent graphics APIs have 4KB alignment requirements for certain buffers. When first introduced, that size broke quite a few custom memory managers that used naive alignment strategies.




#5289142 Data alignment on ARM processors

Posted by frob on 28 April 2016 - 03:04 PM

Very few chipsets allow placing values at the wrong alignment.

 

x86 is one of the few that allows integers to be placed at any alignment: accessing a 4-byte integer at any offset is allowed, but it suffers a performance penalty that is not directly visible to you.  On most other chipsets that crashes.

 

Note that for other data types, misaligned data can also cause crashes even on x86, such as trying to load into XMM, YMM, or other SIMD registers.

float value = doLittleBigEndianConversion(doUnalignedReading(reinterpret_cast<int*>(ucharPtr + offset)));

 

Icky.  I disagree, because you are still operating on the data through the wrong type, an int*.

 

Better to pack and unpack data by creating an object of the correct type, such as an int32, or float, or double, or whatever, then doing only operations that do not rely on alignment, such as memcpy or single-byte accesses.

 

Often that means packer and unpacker classes that process your stream.  Inside, you have a function similar to this:

 

float unpackFloat(unsigned char* offset) {
  // Other code and static assertions ensure float is four bytes and otherwise that our processor is supported
  float result;
  *((unsigned char*)(&result))     = *(offset);
  *((unsigned char*)(&result) + 1) = *(offset + 1);
  *((unsigned char*)(&result) + 2) = *(offset + 2);
  *((unsigned char*)(&result) + 3) = *(offset + 3);
  return result;
}
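
The same byte copies can also be written as a single memcpy (a sketch; compilers typically reduce this to one unaligned load on hardware that allows it):

#include <cstring>

float unpackFloatViaMemcpy(const unsigned char* offset) {
  float result;
  std::memcpy(&result, offset, sizeof(result));  // memcpy has no alignment requirement
  return result;
}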

 

Repeat with packing functions and unpacking functions for all types you care about. 

 

 

There are many serialization libraries that are already written and debugged that do this for you. No need to reinvent the wheel.




#5289032 "Could not find or load main class . . ."

Posted by frob on 27 April 2016 - 09:20 PM

Jar files require a manifest file, which must be located inside the file under /META-INF/MANIFEST.MF
 
The manifest file needs to contain specific information so the .jar can be run.  Many items on that link are optional, such as the entries for signing the jar, but a minimal manifest would probably be:

Manifest-Version: 1.0
Class-Path: HelloWorld.jar
Main-Class: HelloWorld
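
Assuming that manifest is saved as MANIFEST.MF next to the class file, building and running the jar would look something like this:

jar cfm HelloWorld.jar MANIFEST.MF HelloWorld.class
java -jar HelloWorld.jar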




#5288920 Returning by value is inevitable?

Posted by frob on 27 April 2016 - 09:07 AM

That's interesting, I didn't know there was a global setting for this. Is there any downside to using global /Gv specifically?

 

A few.

 

* It is currently Microsoft specific, so you cannot mix results from multiple compilers.  

* It is new enough that few other tools and languages support it for library and .obj file formats.

* Code must be rebuilt with the option, and existing old code may not work well with it.

* It requires hardware with SSE2, which has been standard since 2001.

* It has potential support for modern hardware introduced since 2011 that implements AVX (Advanced Vector Extensions).

 

Otherwise it is an incremental advance beyond __fastcall. New hardware introduced new registers, so new calling conventions that use those registers make sense.
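
For illustration, a sketch of a function that benefits from the convention (the __vectorcall keyword is MSVC-specific; /Gv makes it the default):

#include <immintrin.h>

__m128 __vectorcall madd(__m128 a, __m128 b, __m128 c)
{
    // The three vectors arrive directly in XMM registers, so no stack
    // loads are needed before the math can begin.
    return _mm_add_ps(_mm_mul_ps(a, b), c);
}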

 

 

GCC and *nix systems have used a slightly different but similar system since the introduction of the AMD64 ABI, back when the 64-bit extensions were first introduced by AMD and then incorporated by Intel.




#5288915 what is meant by Gameplay?

Posted by frob on 27 April 2016 - 08:46 AM

I imagine the cost-benefit will be better in a larger organisation where you have a lot of people that need to implement game-specific stuff, and you also have a support organisation to keep the scripting language stocked with enough features and debugging tools to be useful.
 
With just a few people, where everyone is part engine and part game implementer, and everyone already knows C++ well, there is little reason to use a scripting language.

 
This is exactly what several of the posts were talking about.
 
When you are working on a large project, something where rebuilding and restarting takes 10 minutes or so, and you have a large number of people working on the project, it makes a lot of sense to build and maintain the system.
 
Let's assume a big project where the scripting system saves an average of 5 minutes each time a build is needed.  And let's say an average of 3 builds per person per day, since some people seldom make changes.  Consider that you may have 200 people using the scripting system on a project for 2 years.  So 5 minutes saved per build * 3 builds per person per day * 200 people * 400 days = 1,200,000 minutes saved = 20,000 hours saved = 10 work-years saved.  Even if it takes a full work-year to build the system it is still a worthwhile gain.
 
Now let's assume a small project.  Small projects typically have fast builds, so let's say an average of 30 seconds saved each build. And let's say 10 builds per person per day, since they are so closely involved with the implementation.  Two people use the scripting system for one year.  So 0.5 minutes saved per build * 10 builds per person per day * 2 people * 200 days = 2,000 minutes saved = about 33 hours saved.  For such a small team it doesn't really make sense to invest in creating a scripting system unless it can be implemented within a day or two.




#5288859 Where to learn 2D Math for game dev

Posted by frob on 26 April 2016 - 09:03 PM

calculus - haven't found much use for it myself.

 

Calculus is one of those fun subjects that you don't appreciate until you need it; then it is amazing.

 

Vector calculus is used all the time in both graphics and physics. If you want to deform objects in graphics, then to make their normals work out you need to figure out how the deformations change the shape and adjust the normals appropriately. Same thing in physics: parametric shapes are defined by math functions, and figuring out angles to bounce and otherwise react requires calculus to derive the formulas.

 

... Either that, or you can look up the existing formulas as best you can and just assume that they got it all right, instead of actually knowing the math yourself.

 

Splines in general are calculus, but again many programmers don't bother to understand the math; they just search out a formula they can run through a matrix multiply to get their results.

 

Many game programmers get by with iterative methods and accumulated values.  It is more precise to compute the endpoints and the rates of change, then directly compute the intermediate values, rather than accumulating values (and also accumulating error).  A little bit of simple calculus as you implement the system lets you compute directly and solve for any point in the middle.

 

 

Any time you are using an iterative solution, like accumulating a tiny bit of motion every update, it is usually better to figure out the proper math for it.  Accumulated values also accumulate error, and once you've added a tiny number every update for a few minutes, your accumulated error quickly becomes significant.  Better for the programmer to find the rate of change (that's usually calculus!) and then write a function to compute the value directly.
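
A small sketch of the difference (hypothetical names; constant velocity for simplicity):

#include <cstdio>

int main()
{
    const float velocity = 2.5f;    // units per second
    const float dt = 1.0f / 60.0f;  // one 60 Hz frame

    // Iterative: accumulate a tiny step every frame. Each addition
    // rounds, and the rounding error accumulates along with the value.
    float accumulated = 0.0f;
    for (int frame = 0; frame < 60 * 600; ++frame)  // ten minutes
        accumulated += velocity * dt;

    // Direct: compute the value from the rate of change and the
    // elapsed time. One rounding, no accumulated error.
    float direct = velocity * (dt * 60 * 600);

    std::printf("accumulated=%f direct=%f\n", accumulated, direct);
    return 0;
}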




#5288797 Is two people enough for a team?

Posted by frob on 26 April 2016 - 12:39 PM

My friend and I are starting work on our game. The goal is to use this project on our resumes as well.

 

When it comes to "experience" on a resume, it means paid professional work experience, not stuff you did on your own.  

 

If you have a section of your resume devoted to your hobby project where you and your friends built something, and that something did not end up being a commercial success, what you've got doesn't really count except to show interest in the field.

 

Now if your side project gets a million downloads and becomes a major success, then things are a little different. Then you are an entrepreneur who successfully started your own business, and the experience looks great.  Statistically that is unlikely to happen.




#5288788 OOP and DOD

Posted by frob on 26 April 2016 - 12:20 PM

I think I'm even more confused now than I was before. People, and even you, are saying that DOD is more efficient, but at the same time you're saying that it's not? It is but it isn't? :(

 

Efficiency in code is a strange thing.  Typically the inefficiencies are not what you expect.  Generally, the first time a program is analyzed for performance, all kinds of weird stuff shows up. You might discover billions of calls to string comparison, or billions of iterations through some loop deep down in the system.

 

It is rare for performance problems to be exactly where you expect them to be, unless you've been doing performance analysis for years, in which case you can occasionally guess right.

 

 

Data oriented design is more efficient because you wrote and tested for that specific thing. You spent time and effort ensuring that the system will efficiently travel across the data.  Perhaps you have a half million particles and you build a particle system: your data oriented design will ensure that the CPU streams quickly and efficiently over those half million particles, taking advantage of cache effects and whatever prefetch commands are available on the system.

 

It takes a great deal of testing to ensure something that claims to be data oriented really is.

 

 

 

In my studies, it was said that computationally speaking OOP has a larger footprint because data is spread throughout memory rather than being stored in one place. It would be more resource intensive to perform searches on that data than a DOD approach would lend itself to. Was that wrong?

 

Yes, that is a poor generalization.  

 

Object oriented programming is based around clusters of objects and operations. There is typically no significant difference in memory footprint. Data can potentially be scattered across memory if that is how the programmer wrote it.  This is not a problem by itself if the operations are naturally scattered.  However, if operations are sequential and memory cannot be linearly traversed, then your code might not take advantage of certain benefits from locality of data. Note there are several conditions involved there.

 

Data oriented development means actively seeking out those conditions and intentionally taking action to ensure that when you perform sequential operations, the memory is traversed in a hardware-friendly way.

 

Programmers who follow object oriented rules can completely violate the rules of data oriented development.  They can also closely follow the rules of data oriented development.  The two are unrelated.

 

 

 

You mention that DOD is an approach that prefers continuous strands of data, which would be easier and quicker to handle than v-table searches. I don't understand.

 

It has nothing to do with vtables.  

 

Going back to a particle system example:

 

A naive programmer might make a particle object. They'll give the particle a position, mass, velocity, and several other variables. Then they will create an array of 500,000 particle objects.  When they run the code, they will iterate over each object, calling 500,000 functions. Each function will update the object, then return.  They'll provide many different functions to manipulate the particles, and whenever they work with the system, call each function 500,000 times.

 

A more savvy programmer might make a class of particles. They will create an array of positions, an array of masses, an array of velocities, and a way to set the number of particles, taking care to ensure the processing steps through memory one CPU cache line at a time.  Generally, working within the cache you only pay the cost of the initial load, so if four values fit in a cache line at once you pay for one and the other three are effectively free. The programmer can then set the number to 500,000 particles and make a single function call that traverses all 500,000 particles at once. They'll provide exactly the same functionality to manipulate the particles as above, but each operation is completed with a single call that processes all particles, rather than an enormous number of calls that process one particle each.

 

An even more savvy programmer might take advantage of SIMD calls on the system to process 4, 8, or more particles at a time rather than processing them individually in a tight loop. Then instead of just having four in the cache and paying for a single cache load, they'll also only pay for a single operation rather than four, giving even faster results.
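
A minimal sketch of the two layouts (hypothetical names); the second also gives a vectorizing compiler a clean shot at the SIMD version:

#include <vector>

// Naive layout: an object per particle, one function call per particle.
struct Particle {
    float px, py, pz;
    float vx, vy, vz;
    float mass;
};
std::vector<Particle> particles;  // 500,000 objects, updated one by one

// Data oriented layout: one contiguous array per attribute,
// one call updates everything.
struct ParticleSystem {
    std::vector<float> px, py, pz;
    std::vector<float> vx, vy, vz;

    void integrate(float dt) {
        // Tight loops over contiguous floats; the hardware prefetcher
        // streams each array, and the compiler can turn these loops
        // into SIMD operations that process several particles at once.
        const size_t n = px.size();
        for (size_t i = 0; i < n; ++i) px[i] += vx[i] * dt;
        for (size_t i = 0; i < n; ++i) py[i] += vy[i] * dt;
        for (size_t i = 0; i < n; ++i) pz[i] += vz[i] * dt;
    }
};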

 

 

All three of them have the same effect of updating 500,000 particles. Two of them just take better advantage of the hardware's capabilities.

 

 

Also note that this type of design only works in limited situations.  There need to be LOTS of things that can be accessed sequentially: lots of particles being shifted, lots of pixels in an image filter.  If there are a small number of items, perhaps only a few hundred or a few thousand, the benefit is probably too small to be worth it.  And if the items cannot be arranged sequentially, the design change does not work at all.

 

 

 

 I want to develop for mobile, where performance and resource usage appear to be quite important.

 

Your concern is admirable but misplaced.  This is something you should not worry about roughly 97% of the time.  In the rare 3% or so of cases, it will be very obvious that you need to do something about it; you will not miss it by accident.

 

The rules for optimization are:

 

1. Don't worry about it.

2. Advanced:  Don't worry about it yet.

3. (Experts Only): Don't worry about it until after you've carefully measured, then only worry about those specific pieces indicated by measurement.




#5288750 what is meant by Gameplay?

Posted by frob on 26 April 2016 - 09:00 AM

That's actually what led me to asking this question on gameplay, as I was wondering why I can't just do it in C++ instead of a scripting language.

 

Typically because of who is doing the scripting, along with when and how they are doing it.

 

 

When writing the code in C++ it needs to be compiled.  The C++ compilation model is excellent at optimizing and eliding code for high performance, but does so at the cost of long compile times.  Depending on the project and the scope of the changes, a compilation may take seconds, or a change that affects everything in a huge system could take hours.

 

Scripting languages often (but not always) allow scripts to be edited and reloaded while the main program is running.  The script system is stopped, the changes are applied, and the script system is restarted.

 

If you are working with core programmers who are spending all their time in the code and constantly doing big compiles, working in C++ for object scripting may not be a big problem.  Hopefully they'll use tools like Edit and Continue (an amazing piece of technology!), but they can do this type of thing readily.

 

If you are working with designers who rarely work with code, and who generally don't have the tools installed for doing big compiles, working in a scripting language like Lua is easier and faster. They can make the small change to the script, have it validated on the fly, and see the results immediately.  This is good for productivity, especially when a change takes a few seconds to see rather than 10 minutes or more to see. This also enables people to make changes even if they don't have the heavyweight compilers and build systems on their computer.
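
As a sketch of why the reload is so cheap (assuming the Lua 5.2+ C API and a hypothetical gameplay.lua script; error handling kept minimal):

#include <cstdio>
#include <lua.hpp>  // the Lua C API headers

// Reload the gameplay script while the engine keeps running.
void reloadScripts(lua_State* L)
{
    if (luaL_dofile(L, "gameplay.lua") != LUA_OK)
    {
        // On failure, report and keep the previously loaded version.
        std::printf("script error: %s\n", lua_tostring(L, -1));
        lua_pop(L, 1);
    }
}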

 

 

You can use C++ if you want; it doesn't hurt anything when you are an individual working on a hobby project.  When you are a professional on a large team of hundreds of people for multiple years, a scripting tool that can be dynamically reloaded can save work-years of time, meaning several hundred thousand dollars or even millions of dollars over the scope of the multi-year project.




#5288749 OOP and DOD

Posted by frob on 26 April 2016 - 08:49 AM

 From what I gathered it seems that DOD, or functional programming, is faster than OOP but OOP is better for more complex behaviors.

 

I'm not sure how you gathered that.

 

Let's try again.

 

Functional programming is something entirely different. That is about the design of the language, and it is a spectrum: on one side is 'functional programming', on the other is 'imperative programming' or 'procedural programming.'  The family of languages including C++, Java, C#, etc., are all imperative languages.  Contrast this with functional languages like SQL, where you tell it what you want (select whatever from tables where this=that) and the system determines which algorithms and internal execution patterns to use.

 

 

Data Oriented programming and Object Oriented programming are not at odds and are not necessarily faster or slower.  Data Oriented is a concept, meaning your implementation takes consideration of the underlying data and its flow through the machine.  Object oriented is also a concept, meaning you have clusters of functionality around a single responsibility operating on a cluster of data.  

 

 

OOP is clusters of behaviors around a blob of data. General guidance is to have a single responsibility for a cluster. If you are only considering the behaviors and take no thought about the underlying machine, it is possible to write code that does not perform well, but that is true of any situation where you are taking no thought about the underlying machine.

 

DOD means designing around smooth flow of data. General guidance is to have long, continuous strands of data that flow through the cache and are processed with a predictable stride. If you are only considering flow of data and take no thought about the interplay of behavior, it is possible to write code that cannot easily handle complex behavior, but that is true of any situation where you do not think in terms of systemic complexity.

 

 

 

The two are fully independent of each other.  Making your tasks into tighter or looser clusters (object oriented) is completely independent of making your data more or less memory/cache/CPU friendly (data oriented).   You can write code that follows both object oriented concepts and data oriented concepts.  You can write code that follows only one or the other.  You can write code that follows neither.





