
frob

Member Since 12 Mar 2005

#5173151 Why terrain mesh consists of elements of unequal size and has vertical walls?

Posted by frob on 12 August 2014 - 02:12 PM

For your wall, as others have pointed out, you specified different resolutions which can cause effects like that. Sounds like you got that figured out.

 

 

For the third image in your post, Gian-Reto is correct that it comes from the dynamic LOD system. To reduce processing it automatically downsamples the mesh for the less critical areas. Exactly how critical an area is depends on several factors, including its depth, but also how prominently it is displayed on the screen (such as mountain peaks in the distance) and the mathematically accumulated visual error. Whenever the system switches between LOD levels, the adjacent region needs to be broken up carefully to avoid t-junctions.

 

Picture found online of a t-junction in terrain:

 

T-Junction.jpg

 

It has to break up the mesh in a specific way to prevent the tiny holes and the slight flickering that can occur at those boundaries.
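As a rough illustration of the idea (a sketch of one common approach, not the engine's actual code): the extra vertices on the finer side of a seam can be snapped onto the coarser side's edge, so both LODs generate identical positions along the boundary and no gap can open:

#include <vector>

// Hypothetical helper: make the shared edge of a fine patch match a
// neighbour rendering at half the resolution. Every odd vertex on the
// fine edge is moved onto the midpoint of its even neighbours, so both
// sides of the seam agree exactly and no T-junction crack appears.
void stitchEdgeToCoarserNeighbour(std::vector<float>& edgeHeights)
{
    // edgeHeights.size() is assumed to be 2^n + 1 (an odd count).
    for (std::size_t i = 1; i + 1 < edgeHeights.size(); i += 2)
    {
        edgeHeights[i] = 0.5f * (edgeHeights[i - 1] + edgeHeights[i + 1]);
    }
}

Real terrain systems usually do this through the index buffer rather than by moving vertices, but the principle is the same: the two sides of the seam must produce identical positions.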

 

 

Finally, for your other settings, you have a fairly dense texture map. Usually you want somewhere around 2-4 meters per pixel, then use models to represent critical things on the terrain. You've got about 0.67 meters per pixel, which means you'll be spending more processing time on rendering. It is not a wrong value, but it can mean additional LOD processing, additional draw calls, and additional rendering effort on the terrain.  Given the tradeoff most games prefer to spend the effort on game models rather than the ground.
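To put hypothetical numbers on that tradeoff (the terrain size here is made up purely for illustration, not taken from your project):

#include <cstdio>

int main()
{
    const float terrainMeters = 1024.0f;              // hypothetical terrain edge length
    const float densities[]   = { 0.67f, 2.0f, 4.0f }; // meters per texel

    for (float metersPerTexel : densities)
    {
        float texels    = terrainMeters / metersPerTexel;
        float megabytes = texels * texels * 4.0f / (1024.0f * 1024.0f); // RGBA8 control map
        std::printf("%.2f m/texel -> %4.0f texels per side, ~%.1f MB\n",
                    metersPerTexel, texels, megabytes);
    }
}

At 0.67 m/texel the control map is roughly 35 times larger in memory than at 4 m/texel for the same terrain.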

 

As you are just learning, those settings are fine, but as you gain experience and your scenes become more complicated you will eventually need to make decisions about where to spend your computing power. Decreasing the terrain density is an easy tradeoff, since dense terrains, and especially dense control textures, can consume quite a lot of resources.  

 

For now, even though the many options are confusing when you start out, just know that there are good reasons for the choices, and they become important details in bigger projects.




#5173142 Understanding the Z-coordinate.

Posted by frob on 12 August 2014 - 01:46 PM

so it's all about the near-z far-z of the projection matrix?

 

After projection, yes.

 

After projection, the x and y coordinates are relative to your viewing frustum (the 2D display screen) and the z is the depth within the screen between the near and far plane.  

 

You shouldn't assume too much about the exact depth within the frustum because there are several different algorithms (z-buffer, w-buffer, 1/Z, logarithmic Z, Irregular Z, Z', etc) that adjust the relative scale between 0.0 and 1.0.  That is, a z value of 0.5 might not mean it is exactly halfway between the near and far plane, but you know 0.4 is closer than 0.5. 
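As an illustration, here is the depth mapping for a plain perspective projection that sends view-space z in [near, far] to [0, 1] (an assumed D3D-style convention; other APIs and depth schemes differ):

#include <cstdio>

// Post-projection depth for a perspective projection mapping view-space z
// in [n, f] onto [0, 1]. Assumed here for illustration only; the exact
// mapping depends on the API and projection you use.
float projectedDepth(float z, float n, float f)
{
    return (f * (z - n)) / (z * (f - n));
}

int main()
{
    const float n = 0.1f, f = 100.0f;
    std::printf("view z = %6.2f -> depth %.4f\n", 0.2f,   projectedDepth(0.2f,   n, f));
    std::printf("view z = %6.2f -> depth %.4f\n", 50.05f, projectedDepth(50.05f, n, f));
    // Roughly: 0.2 units from the eye already maps to ~0.5, while the
    // geometric midpoint of the frustum maps to ~0.999.
}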

 

 

For beginners the projected z value is not much use beyond an optimization hint. Stuff up close means you don't draw stuff that is behind it, and you can reduce overdraw by drawing the nearest items first.  Later you will find all kinds of wonderful algorithms that use and manipulate the depth buffer.  You can then use the projected z difference for assorted effects, depth images, advanced lighting techniques, advanced depth of field processing, and more.




#5173130 what is a template from c++ intent?

Posted by frob on 12 August 2014 - 12:34 PM

 

Yes, you are right, I have never noticed std::vector can interleave objects, not only pointers. I avoided std:: lib always and its vectors.

 

I have coded a pooled memory manager. And I wonder whether I should do it with a template or with just the sizeof operator as I currently have. My point is whether the template - if not needed - has a runtime performance impact, or no runtime impact at all.

 

 

No, there is no additional runtime cost to templates. 

 

In fact the opposite is frequently true.

 

Template functions are typically defined in headers and the compiler nearly always places them inline, eliminating the function call overhead. Usually the compiler is then able to further optimize the code to eliminate load/store patterns, hoist invariants out of loops, or otherwise speed things up. Many of those deeply nested template functions can be eliminated when layer after layer is removed through inline optimizations, very often reducing what would be very slow in debug builds to a very simple set of release-build code. As an example, some of the built-in containers will go through ten or more indirections with assorted tests in debug mode, but in release mode those are optimized away completely, resulting in a single direct memory access. Algorithm libraries can sometimes pull in large chunks of raw code, resulting in many optimization opportunities.
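To tie that back to your pooled memory manager: a thin templated front end over a sizeof-based pool is resolved entirely at compile time and, after inlining, costs nothing over calling the raw pool yourself. A minimal sketch, assuming a hypothetical byte-oriented pool interface rather than your actual code:

#include <cstddef>
#include <new>
#include <utility>

// Hypothetical untyped pool, standing in for a sizeof-based manager.
// The stand-in just forwards to operator new/delete; a real pool would
// hand out blocks from a preallocated arena instead.
struct RawPool
{
    void* allocate(std::size_t bytes) { return ::operator new(bytes); }
    void  release(void* p)            { ::operator delete(p); }
};

// Typed wrapper. T is substituted at compile time; after inlining,
// create<Foo>(pool) compiles to the same code you would write by hand
// with pool.allocate(sizeof(Foo)) plus a placement new.
template <typename T, typename... Args>
T* create(RawPool& pool, Args&&... args)
{
    void* memory = pool.allocate(sizeof(T));
    return new (memory) T(std::forward<Args>(args)...);
}

template <typename T>
void destroy(RawPool& pool, T* object)
{
    object->~T();
    pool.release(object);
}

The template buys you type safety and automatic constructor/destructor calls; it does not add any dispatch or bookkeeping at runtime.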

 

Lambda functions similarly offer an optimization opportunity to the compiler. In the worst case it devolves to a function call, but in the best case everything in the function can be elided or inlined. 
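A simple illustration of the difference (not a benchmark): qsort calls its comparison through a function pointer the optimizer usually cannot see through, while std::sort is instantiated for the lambda's exact type and can inline the comparison:

#include <algorithm>
#include <cstdlib>
#include <vector>

// C-style: every comparison is an opaque call through a function pointer.
int compareInts(const void* a, const void* b)
{
    const int lhs = *static_cast<const int*>(a);
    const int rhs = *static_cast<const int*>(b);
    return (lhs > rhs) - (lhs < rhs);
}

void sortBothWays(std::vector<int>& v)
{
    std::qsort(v.data(), v.size(), sizeof(int), compareInts);

    // C++ style: std::sort is instantiated for this exact lambda type,
    // so the comparison can be inlined straight into the sorting loop.
    std::sort(v.begin(), v.end(), [](int a, int b) { return a < b; });
}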

 

Those are just two of many examples of how the C++ compilation model can be leveraged to produce even faster, more efficient code. Of course there is a cost to be paid when it comes to compilation. The cost to pull in all those headers, to inline everything, to evaluate hundreds or even thousands of potential template substitutions is one of the reasons people dislike the language -- all these tasks increase build times. But the end result is an executable that is heavily optimized and CAN BE extremely difficult to beat in terms of performance.

 

There are exactly two C++ features that have a performance cost beyond their direct usage. Those two features are C++ exceptions and RTTI.  All commercial games I have worked on or studied have disabled both of these features. Both of these were heavily debated and nearly removed from the original C++ standard and their use remains a contentious issue to this day. Everything else you can use and feel comfortable knowing that you are using a library that is well defined, debugged, and fast.

 

There are some good reasons to use alternatives to the standard C++ libraries. It is not because of a fear of templates, but because the functionality the standard library offers can be improved on in some measurable way. One oft-cited article in the C++ standard working group is the fairly old writeup of EA's Standard Template Library. It evaluates several things in the 2003 standard that had somewhat better solutions, along with metrics about why the alternative form (which also relied heavily on templates) produced somewhat better results on a range of platforms. Note that those who wrote it did not fear the standard libraries and the language functionality. Instead they embraced it and identified ways to make the system even better by leveraging templates and other features in slightly different ways. Then they ran side-by-side comparisons, improving it further. I liked how in the timing comparisons they also call out the few cases where the library is less performant than the system's library, and how they are usually offset by dramatic performance improvements in other areas.

 

 

As for your not noticing the templates, even in the original 1998 C++ standard the treatment of templates and the libraries that use them represented about two thirds of the language standard. If you actually learn C++ (instead of just learning from a book about C++) then templates are impossible to miss.

 

If you don't use templates in your code, you are not using C++.  Instead you are likely using "C with classes".




#5172978 what is a template from c++ intent?

Posted by frob on 11 August 2014 - 07:01 PM

A template is not a variable of any kind. It is not a macro.


I like to think of them as cookie cutters. A cookie cutter defines the shape of the cookie, and you can use it on sugar cookie dough, chocolate chip dough, peanut butter dough, oatmeal cookie dough, or any other flavor, even one the person who made the cookie cutter hadn't thought about.

Similarly, templates are not classes by themselves. They are tools the compiler can adjust and manipulate to create code. The compiler can substitute any kind of typename it wants, even typenames that you did not plan on as the template author.

The compiler uses a template to generate code. You provide the typenames or values that are missing, and the compiler will generate a class or function or other content that uses what you provided to create source code for the class. Then it will compile that source code.
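A minimal example of that idea: the author of this template only needed to assume the type supports operator<, so the compiler can stamp out versions for types that did not exist when the template was written.

#include <iostream>
#include <string>

// The "cookie cutter": a shape with a hole where the type goes.
template <typename T>
T largest(const T& a, const T& b)
{
    return (a < b) ? b : a;
}

int main()
{
    std::cout << largest(3, 7) << '\n';       // instantiates largest<int>
    std::cout << largest(2.5, 1.5) << '\n';   // instantiates largest<double>

    // A "flavor" the template author may never have planned for:
    // std::string works too, because it supports operator<.
    std::cout << largest(std::string("apple"), std::string("pear")) << '\n';
}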


#5172939 Which DirectX Version?

Posted by frob on 11 August 2014 - 04:25 PM

The concepts will migrate easily. The math and theory are directly applicable. It is only the source code and the version specific details that need to be changed.

 

At the learning stage (and as this is For Beginners) it really doesn't matter which version you use. Learn the concepts of the pipeline, learn the mechanics of manipulating the data. Those are the important, transferable parts.

 

In a corporate setting there will be a decision by senior team members and management to target a specific age of computers, so you'll pick your interfaces based on compatibility and feature requirements. For now, just pick whatever works for you. If DX10 works for you, use it as long as you want.




#5172916 Game Institute (again)

Posted by frob on 11 August 2014 - 03:18 PM


$49 is nothing.

 

Well... in the long term $49 is nothing. I agree that these days I'll shell out $50 for a book without much thought. However, the OP is a teenager who may not have access to much money.  In the short term, if $49 is too much for you, then learn from other sources.

 

At your age (17 or 18) and pre-college status, there is not much you can do over two months that will radically transform your knowledge and ability. 

 

If you want to study a language or learn a tool or technology, then go for it.  But for the next several years you will be studying a wide range of topics in depth. Two months of your own undirected study is less than 5% of what you should get out of a 4 year degree. Sure, it is greater than zero, but be realistic about how much it actually is.

 

I encourage you to find some hobby projects you can build while in school beyond your class projects. If you can find ways to use the knowledge in your classes then great, but building some projects outside of class is a great way to build a useful portfolio and to gain experience.




#5172910 Can't solve problems without googling?

Posted by frob on 11 August 2014 - 03:09 PM

You do both.
 

Right now you are still young. Many people go to college and get a degree and feel like they know everything and are the worldwide expert on all topics. Those people are wrong. You really aren't all that great at it yet, no matter how good you think you are. A recent college grad starts out at the bottom rung, they are entry level, they are inexperienced. As a recent high school grad, you are even that much less knowledgeable than a college grad who has been studying the field for four years.

 

 

There are really only two ways to gain experience. Do things wrong yourself, and watch other people do things wrong. Do both.

 

You will do a lot of things wrong over your career.  You will try something and it will fail. You will start down one path and realize you could have done something much better. You will sometimes need to stick with your mistakes when you do not have time or resources to correct them. You can also learn much more by taking notes of your actions and, at the end of the project or perhaps every 2-3 weeks, reviewing what you did wrong and what you did right over the interval.

 

The other way is to learn from others. You can gain a lot of insights by reading and learning what others did to succeed and to fail. You can read about best practices. You can talk with a lot of people and discover what they are doing that is both different and similar to your actions. You can work with various groups to get a good feel of their practices, and compare what works and what doesn't.

 

 

 

As for skill, some of it is natural ability. Some people are naturally quite skilled at problem solving, others naturally struggle. Just like some people are naturally faster runners, some people are naturally more coordinated, some people are naturally stronger, some people are naturally more linguistic, some are more imaginative, some are more mechanical, and so on. Even so, most people are able to improve their skill if they apply some effort.

 

You get better at it by doing it a lot, and by investing some time in practicing and filling in gaps in your skill and experience through practice and study.




#5172903 Current Gen Console Supported Programming/Scripting Languages?

Posted by frob on 11 August 2014 - 02:45 PM

1) Does the PS4, Vita, Xbox One, Wii U, or 3DS allow games to be written in C++?

 

Yes. That is the primary language for most modern consoles.

 

2) Do they support C++11?

 

More or less. Every implementation has its quirks.

 

Even the latest gcc 4.9 has a short list of C++11 features that are not fully supported. Some compilers have more gaps than others.

 

Compilers are also quickly gaining C++14 functionality, if you want that.

 

 

3) Could I use Embedded Python in my C++ Application?
 

Sure.

 

You'll need to build the libraries as a part of your app. Quite a few games support scripting tools internally. 
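For reference, a minimal sketch of what embedding the stock CPython interpreter looks like (assuming you build and link against the Python library; each platform and console toolchain has its own build details):

// Minimal CPython embedding sketch. Build against the Python headers and
// library, e.g. with something along the lines of:
//   g++ host.cpp $(python3-config --cflags --embed --ldflags)
#include <Python.h>

int main()
{
    Py_Initialize();                                      // start the interpreter inside your process
    PyRun_SimpleString("print('hello from embedded python')");
    Py_Finalize();                                        // shut it down again
    return 0;
}

A real game would expose its own functions to the interpreter and load script files from data, but the host-side lifecycle is the same.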

 

The first party groups (Microsoft, Nintendo, etc) have their own rules about things that are not allowed to be scripted (i.e. run from data) and what must  be run from code directly. For example, you might be required to pre-compile your scripts if the existing system prefers to use a JIT execution model. 




#5172805 Comfortable keyboard for coding

Posted by frob on 11 August 2014 - 09:22 AM

For ergonomics, it is hard to go wrong with the Microsoft Natural Ergonomic Keyboard 4000.




#5172549 Programming using Amazon Mobile Associates API and have a legality question

Posted by frob on 09 August 2014 - 09:59 PM

I'd just email their legal group and ask.


#5172481 Using Leaked Code Derived From An Open Source Project

Posted by frob on 09 August 2014 - 12:12 PM

1) Assuming I want to fork the rendering engine and keep it private (which the license permits), can I also add my own copyright notice under a different license to source files in the modified engine that I add? So if I create a new class and I modify an existing class from the original code to link to it, can that new class still have my copyright under a different license? I assume the code I add to the original class cannot be licensed differently.
 
2)  Let's say  I set up a company and develop software derived from software covered by the Boost License and an employee leaks the derived code. Seeing as the derived code is also covered by the Boost License, can anyone obtaining the leaked code legally use it for their own purposes? Even though it may be covered by an open source license, it was not the intention of the author for the code to be distributed.
 


1. The original code is covered under the original license, so you must do what the license says about including their license text verbatim, with at least one blank line above and below the license as required. You can use either the long form or the short form, as described in the Boost FAQ. If you make additions to the file it becomes a work of mixed authorship and things get a little more complicated. You should include your name as a contributor and copyright holder for a portion of the file. Exactly what wording you want to use is up to you and your lawyer. I've seen quite a few variations of "Portions of this file are copyright foomatic, inc, 1994-2014" and similar.

2. The Boost license is not viral. There is no requirement that your changes be licensed under the same terms. As you control the copyright you have several legal options. Perhaps the easiest is a public notice on the web site where your content is released, something like "On {date} we were informed of a defective version released by someone. Please be certain you download the products directly from us. We are not disclosing their name or website so we don't inadvertently promote them. Contact us if you need additional details." Other options involve leveraging the legal system, which may not be worth pursuing if you, the leaker, or the unauthorized receiver do not have much money. Leaked secrets may be worth a discussion with a lawyer depending on the value (either to you or to them) of what was taken.


#5172050 Safely eating potatoeses

Posted by frob on 07 August 2014 - 08:36 AM

Those 7.22 decimal digits are important.  Remember you are dealing with both accuracy and precision.

 

Floating point numbers work in base 2, not base 10.  The decimal value 0.1 cannot be directly represented in base 2. It becomes the repeating binary fraction 0.000110011001100110011...  The stored value is within the specified precision and is considered an accurate conversion.

 

If you cut that repeating expansion off at double precision (roughly 16 decimal digits) and convert back to decimal, you get a value very slightly below 0.1, something like 0.09999999999999999. So while it is accurate within the required precision, it is not perfectly equal.

 

Also note that while the floating point standard requires 7 digits, the C++ language standard for numeric limits specifies 6 for the "number of digits, q, such that a floating-point number with q decimal digits can be rounded into a floating-point representation and back without loss of precision." Visual Studio uses the same constant. 

 

In other words, even though you have 7.22 digits going from decimal to binary, you only have 6 digits if you intend to go round trip and show the number back to a human in decimal.
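You can see both halves of that directly (a small demonstration using the standard numeric_limits constants):

#include <cstdio>
#include <limits>

int main()
{
    float f = 0.1f;

    // Printing far more digits than the type holds exposes the binary
    // approximation hiding behind the decimal literal.
    std::printf("%.20f\n", f);   // 0.10000000149011611938...

    // digits10 (6 for float) is how many decimal digits are guaranteed to
    // survive a decimal -> float -> decimal round trip.
    std::printf("digits10 = %d\n", std::numeric_limits<float>::digits10);

    // max_digits10 (9 for float) is how many you must print to get the
    // exact same float back when you read the text in again.
    std::printf("max_digits10 = %d\n", std::numeric_limits<float>::max_digits10);
}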

 

 

It all gets back to those two rules:

1. Floating point numbers are approximations. (Consequences: Do not ever consider them as exact values. Always use range operations when testing for equivalence. They have a narrow range of precision, or significant figures. Etc.)

2. Floating point inaccuracies accumulate. (Consequences: Small errors accumulate when used repeatedly. Shifting scales can cause catastrophic precision loss. Etc.)
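In code, rule 1 usually turns into a comparison against a tolerance rather than operator==. A minimal sketch (the right tolerance is a judgment call that depends on the scale of your data):

#include <algorithm>
#include <cmath>

// Compare floats by relative error instead of exact equality.
// The default tolerance here is only a common starting point.
bool nearlyEqual(float a, float b, float tolerance = 1e-5f)
{
    const float scale = std::max({1.0f, std::fabs(a), std::fabs(b)});
    return std::fabs(a - b) <= tolerance * scale;
}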




#5171962 Fan-made games - best ways to get approval

Posted by frob on 06 August 2014 - 04:51 PM

Also note that typically fan-made content does not have the express permission of the content owner.

 

It is a difficult legal area.  If they explicitly grant you permission then they cannot control what you do with it. With that permission in hand, you could be seen as having the ability to build a product that competes with them. 

 

Even if your product today does not compete and is just a fun little fan project, that doesn't mean that two or three or five years from now it won't be a project that competes against theirs.  Someday they will notice there is a project that uses their name, looks like theirs, takes place in the same world, and is not making them any money. That's when the C&D order will come out. And if you have a written waiver granting you permission to use their IP, then they are in a sad place. Most lawyers are smart enough to see that in the distance.

 

So your official answer is almost certainly going to be "no" unless you are negotiating a paid license. It is not in the company's interest to grant you a waiver to use their content.

 

 

 

The options are either that the company sends out C&D orders routinely, or that they quietly encourage it without explicit permission. Even when a company distributes modding toolkits, the license is very clear that you are not authorized to use any of the company's IP, but they don't usually enforce it. Sometimes people who monitor the company's forums or social websites might compliment the fan-made content, but you are unlikely to ever officially be given the company's blessing to use their content.




#5171708 eating potatoe

Posted by frob on 05 August 2014 - 03:22 PM

What is the reason behind the question?  What problem are you trying to solve?

 

Yes, the GPU can be used for general processing.  General purpose GPU programming goes by the rather utilitarian acronym GPGPU.

 

Basically you are making a tradeoff.  You give up the benefits of the CPU, which is very versatile, and replace it with a massively parallel processing system designed for dense grids of data.

 

This is why things like bitcoin miners love running code on the GPU. They get away from a 6-processor or 8-processor system that has gigabytes of memory and other resources, and exchange it for thousands of tiny processors. Their processing task is small but needs to be repeated on an enormous collection of data. They can load all the data up into a texture and run a series of shaders to get the desired results.

 

 

Crossing the boundary between the systems is relatively expensive.

 

So in order to help give you good information, why are you asking the questions about integer and floating point performance?




#5171684 potatoe

Posted by frob on 05 August 2014 - 12:40 PM


 

(Gentle reminder that this is a For Beginners post.)

What does this mean? Is this directed at me? I'm not allowed to ask for help understanding a beginner type float... at all, in this section of the forums? or is it directed to others to keep the complex parts that are associated with floats away? Is this just an indication that your answer is for a beginners post?

 

Just that this is the For Beginners forum.  It has some special rules.

 

Specifically: Keep the audience in mind as well when engaging in in-thread conversation -- it is very easy to digress into advanced topics that intimidate and exclude beginners. Some such digression is permissible, but be mindful.

 

Once they hit a certain complexity they tend to move over to the General Programming, Game Programming, or other more appropriate forum.

 

The original post had two simple questions that were great for beginners.  The follow-up questions had some complexity that pushed it rather far out there.

 

 

So I don't derail too much, I'll get back to your questions that were asked specifically:

 

1. Any usage of floats is normally much expensive than Int operations right?

 

Some operations are faster, some are slower. The modern PC has a very complex architecture. It can perform a large number of operations simultaneously. The internal processing ports inside the core can potentially complete several integer operations per cycle, but only one floating point operation per cycle. Some of those internal ports may be busy on a slow operation while a different internal port can quickly resolve a different value.

 

2. If 32 bit floats can hold data from -3.14 * 10 ^ -38 to 3.14 * 10 ^ 38 without precision loss then why would anyone use Ints if they can only store from -2 * 10 ^ 9 to 2 * 10 ^ 9 ?

 

As you discovered, that range is incorrect. A 32-bit float can only hold about six decimal digits of precision. Those can be huge numbers like 5.67890e17, or small numbers like 1.23456e-14. Digits outside of that precision are generally lost. Trying to mix and match values at different scales results in catastrophic failure as numbers are shifted around by the floating exponent. Integers can hold any number up to 32 bits in size. Floats can hold some of the integers within that range exactly, but switch to approximations once you go above about 16.7 million (2^24).
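A tiny demonstration of where the exact integer range runs out (16777216 is 2^24):

#include <cstdio>

int main()
{
    float f = 16777216.0f;            // 2^24: the last point where every integer fits exactly
    std::printf("%.1f\n", f + 1.0f);  // prints 16777216.0 -- the +1 is lost
    std::printf("%.1f\n", f + 2.0f);  // prints 16777218.0 -- only even values are representable here

    int i = 16777216;
    std::printf("%d\n", i + 1);       // prints 16777217 -- the int is exact
}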

 

 

3. So practically the actual maximum safe range without losing precision is only 2^24 right?

 

Nope. If you add a very small number to a very large number, they both get converted into the same scale. So with the numbers above, 5.67890e17 and 1.23456e-14, an operation will likely convert the second number up to 0.00000e17. So even though the individual number has not lost precision, the operation takes place at a precision that obliterates its value.  

 

If you had a different number, say 2.34567e15, it would get converted to about 0.02345e17, losing precision because the exponent needed to float elsewhere. This is one of many reasons floating point approximations break down when they are repeatedly applied. They often are not as precise as the programmers thought they were.
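To make that concrete with those example values (a small demonstration, nothing more):

#include <cstdio>

int main()
{
    float big    = 5.67890e17f;
    float tiny   = 1.23456e-14f;
    float medium = 2.34567e15f;

    // The tiny value is far below one unit-in-the-last-place of the big
    // value, so the addition changes nothing at all.
    std::printf("%g\n", big + tiny);            // prints 5.6789e+17
    std::printf("%d\n", (big + tiny) == big);   // prints 1

    // The medium value does register, but anything below the big value's
    // unit-in-the-last-place (roughly 3.4e10 at this magnitude) is rounded away.
    std::printf("%g\n", big + medium);          // about 5.71236e+17
}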

 

4.0 - Rather beyond the scope of a For Beginners post. 

 

4.1 If it doesn't take much effort, could someone explain to me the "Round to nearest, ties to even" rule? I keep re-reading the description but don't quite get it.

 

If they always rounded up, values would tend to slowly inflate. If they always rounded down, values would slowly drift negative. If they rounded toward zero, values would slowly shrink to zero. Bankers were among the earliest people to discover the problem, and invented banker's rounding. Since a tie always rounds toward the even number it tends to balance out on average, sometimes going up and sometimes going down.

 

For decimals, the 'tie' happens at one half, exactly halfway between two integers. Under this rule 0.5 rounds toward zero (the even neighbor) rather than one.

 

Rounding to even works like this:

0.4 --> 0

0.5 --> 0

0.6 --> 1

...

1.4 --> 1

1.5 --> 2

1.6 --> 2

For some more critical values on the halfway mark:

2.5 --> 2

3.5 --> 4

-0.5 --> -0

-1.5 --> -2
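The C and C++ runtimes default to the same mode (round to nearest, ties to even), so you can reproduce the table with std::rint. A small demonstration:

#include <cmath>
#include <cstdio>

int main()
{
    // std::rint rounds using the current rounding mode, which defaults to
    // round-to-nearest, ties-to-even.
    const double values[] = { 0.5, 1.5, 2.5, 3.5, -0.5, -1.5 };
    for (double v : values)
    {
        std::printf("%5.1f -> %5.1f\n", v, std::rint(v));
    }
    // 0.5 -> 0, 1.5 -> 2, 2.5 -> 2, 3.5 -> 4, -0.5 -> -0, -1.5 -> -2
}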

 

5. ... is there an easy way to tell how float operation speed differs from int operation speed? I am interested in the typical x86/x64.

 

More complicated operations take more time, but you cannot know the exact time. The internal pipeline is rather complicated; even heavily simplified descriptions of it should give an overview of why it is so difficult to estimate the performance. In general, some of the ALU ports are able to perform multiple integer operations per cycle. That is not true of the FPU operations. Of course, some integer operations take multiple cycles, and some FPU operations can be done quickly while others take a long time.

 

Not all floating point operations take a similar time. Transcendental functions like logarithm or trig functions generally take much longer than simple floating point addition. Two different logarithms may require different amounts of time. Some floating point operations can be recognized during decoding (like loading zero or one) and they can require no time inside the core.





