
frob

Member Since 12 Mar 2005

#5173475 Loading Uncompressed Textures

Posted by frob on 13 August 2014 - 08:00 PM

 

The typical workflow for artists is to use photoshop images (psd files) for their work and export textures as one of the compressed formats supported by cards, such as DXT1 or DXT5 or whatever.

This is a very bad idea. Don't let your artists produce or export to the final format. Ever.

It all starts innocently: the artists commit the DXT-compressed textures and you are happy. Then you realize that all textures are compressed as DXT5 instead of DXT1, even those without an alpha channel, because DXT5 has twice the bitrate and thus MUST HAVE twice the quality. You ask them to remedy the mistake, but going through all the textures in the repo and re-exporting them manually is too much work and apparently too error-prone.
At this point you can forget any effort to get an automated export running, because the DXT-compressed versions and the (presumably) source material have somehow gotten out of sync. And there are multiple copies of the source material, and no one really knows which ones are the most recent.

And then you consider changing the way the textures are compressed, or even the compression format, but your only viable source material is the already lossy DXT1/5-compressed textures.
 

 

It depends on your workflow and the discipline that is kept in the office.

 

Everywhere I've been, the art budget was fixed in stone in advance. The maximum size of each model, both meshes and textures, is specified and is included in the acceptance criteria. If a single asset is over budget it is well known, because it shows up on the feature dashboard that tracks the metrics of every asset, and it needs to be approved by the art lead, the art director, the feature designer, the tech lead, and the project manager. When it happens it is usually just "this is an important model" followed by "yup", "yup", "yup", "go ahead".

 

Since the asset sizes show up in several metrics and are reviewed daily, it would be fairly hard for an artist to slip in a 4 MB DXT file without at least one person noticing.

 

If your art process is such that nothing gets reviewed and there is no accountability, sure, I'll agree it is a bad thing. But that is bad because of a lack of policy and a lack of discipline.




#5173203 Loading Uncompressed Textures

Posted by frob on 12 August 2014 - 04:54 PM

1) The textures are loaded in-editor using the FreeImage library. Would this be a good way to go about it, or should I consider compression-encoding non-PVRTC textures?

2) Does a custom file format sound like a good idea?

 

1) As a general-purpose engine, supporting more image types is generally a good idea. The typical workflow for artists is to use Photoshop images (psd files) for their work and export textures as one of the compressed formats supported by cards, such as DXT1 or DXT5 or whatever. These formats are usually supported directly by the card so no further processing is necessary. Some people are tempted to support JPEG and GIF and similar formats; those are great for the Internet or places where size is more important than fidelity, not so much in games. Uncompressed raw images are something of a last resort, used for compatibility when all else has failed.
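To illustrate the "no further processing" point, here is a minimal sketch assuming OpenGL with the EXT_texture_compression_s3tc extension available; the helper name and the idea that the compressed blocks come from your asset pipeline are assumptions for the example.

    #include <GL/gl.h>
    #include <GL/glext.h>  // for GL_COMPRESSED_RGBA_S3TC_DXT5_EXT

    // Upload an already-compressed DXT5 image; the driver stores the blocks
    // as-is, with no decode or re-encode on the CPU.
    GLuint uploadDxt5(const void* blocks, GLsizei blockBytes, GLsizei width, GLsizei height)
    {
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                               GL_COMPRESSED_RGBA_S3TC_DXT5_EXT,
                               width, height, 0, blockBytes, blocks);
        return tex;
    }

On platforms where glCompressedTexImage2D is exposed as an extension entry point you would fetch it through your usual GL loader first.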

 

2) Files are how you store data. If you need to store data your options are fairly limited: rely on someone else's format, use a markup language (XML, YAML, whatever), or make up your own format. Those are the only options. If you decide to make your own format, a frequent suggestion is to make it exactly match the layout used in memory so you don't need to parse it; you can load it directly and set your pointers to offsets within the data.
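As a rough sketch of that suggestion, here is a hypothetical texture container whose on-disk header mirrors the in-memory struct; the names and fields are made up for illustration, not an established format.

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Hypothetical on-disk header laid out exactly as it is used in memory.
    struct TextureFileHeader
    {
        char     magic[4];     // e.g. "TEX0"
        uint32_t width;
        uint32_t height;
        uint32_t format;       // engine-specific format enum
        uint32_t pixelOffset;  // byte offset of the pixel data from the file start
        uint32_t pixelSize;    // size of the pixel data in bytes
    };

    // Read the whole file into one block; "parsing" is just pointer arithmetic.
    bool loadTexture(const char* path, std::vector<unsigned char>& blob)
    {
        std::FILE* f = std::fopen(path, "rb");
        if (!f) return false;
        std::fseek(f, 0, SEEK_END);
        long size = std::ftell(f);
        std::fseek(f, 0, SEEK_SET);
        blob.resize(static_cast<std::size_t>(size));
        bool ok = std::fread(blob.data(), 1, blob.size(), f) == blob.size();
        std::fclose(f);
        if (!ok || blob.size() < sizeof(TextureFileHeader)) return false;

        const TextureFileHeader* header =
            reinterpret_cast<const TextureFileHeader*>(blob.data());
        const unsigned char* pixels = blob.data() + header->pixelOffset;
        (void)pixels; // hand the header and pixel pointer to the renderer here
        return true;
    }

In practice you would also pin down endianness, alignment, and versioning before committing to a layout like this.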




#5173197 What's the industry like?

Posted by frob on 12 August 2014 - 04:29 PM


 

3. hours

 

3. Long. Google "EA spouse" and "video game crunch."

 

Also, it depends on location and job.

 

In many parts of the world overtime pay is strictly mandated.

 

In the US it is mandated but loosely enforced, and a specific class of "computer professionals" is specifically exempted from overtime pay. If you are a designer, artist, modeler, or tester and your US-based employer is not paying you for overtime, it is time to put in a call to the appropriate government agency, either at the state level or the Department of Labor, to find the right group. Let them know that your company is requiring unpaid overtime and work with them to get the investigation started. Every few years there tend to be regional shake-ups where the state investigates and finds all kinds of unpaid overtime violations, occasionally some people categorized as contractors who legally must be employees, and so on. It is the law, not your contract, that specifies whether you are entitled to overtime pay.

 

Repeating: the Fair Labor Standards Act specifies that your specific work activities, not your job title and not your employment agreement, are the basis for overtime exemptions. Doctors, lawyers, company owners, and "computer professionals" (now called 'programmers') are nearly always exempt. Many people get a job in this industry and, since the other artists or testers or designers are working extra-long hours, they don't think about reporting it and assume things are just that way. Yes, professionalism says you do the job you are paid to do, but there is a difference between voluntarily staying an extra hour or two because you want to finish versus your boss mandating that you work 50 or 60 hours. This is an area where many companies (either intentionally or accidentally) end up skirting the law.

 

The EA Spouse lawsuit was one of those periodic regional purges. After the EA case quite a few other SoCal studios had visits from the state. Washington had one about two years later, as did my state. The government doesn't like it because it results in less tax being paid to them.

 

If you are a programmer, if you own a certain share of the small business, or if you are in a management position with both 2+ direct reports AND less than 50% of your time spent building stuff, you fall into the three groups that are generally legally exempt from overtime in the US. If you are the corporate lawyer you are also exempt, but you would have already known that. Everyone else in the game industry (designers, testers, animators, modelers, etc.) is legally entitled to overtime when the company requests extra work hours, even if your contract calls you exempt and even if your boss tells you otherwise.




#5173151 Why terrain mesh consists of elements of unequal size and has vertical walls?

Posted by frob on 12 August 2014 - 02:12 PM

For your wall, as others have pointed out, you specified different resolutions, which can cause effects like that. Sounds like you got that figured out.

 

 

For the third image in your post, Gian-Reto is correct that it comes from the dynamic LOD system. To reduce processing it automatically downsamples the mesh for less critical areas. Exactly how critical an area is depends on several factors, including its depth, how prominently it is displayed on the screen (such as mountain peaks in the distance), and the mathematically accumulated visual error. Wherever adjacent regions are at different LOD levels, the boundary needs to be broken up carefully to avoid T-junctions.

 

Picture found online of a t-junction in terrain:

 

[attached image: T-Junction.jpg]

 

It has to break up the mesh a certain way to prevent tiny holes, or to prevent some slight flickering, which can occur at boundaries.

 

 

Finally, for your other settings, you have a fairly dense texture map. Usually you want somewhere around 2-4 meters per pixel, then use models to represent critical things on the terrain. You've got about 0.67 meters per pixel, which means you'll be spending more processing time on rendering. It is not a wrong value, but it can mean additional LOD processing, additional draw calls, and additional rendering effort on the terrain.  Given the tradeoff most games prefer to spend the effort on game models rather than the ground.

 

As you are just learning, those settings are fine, but as you gain experience and your scenes become more complicated you will eventually need to make decisions about where to spend your computing power. Decreasing the terrain density is an easy tradeoff, since dense terrains, and especially dense control textures, can consume quite a lot of resources.

 

For now, even though the many options are confusing when you start out, just know that there are good reasons for the choices, and they become important details in bigger projects.




#5173142 Understanding the Z-coordinate.

Posted by frob on 12 August 2014 - 01:46 PM

so it's all about the near-z far-z of the projection matrix?

 

After projection, yes.

 

After projection, the x and y coordinates are relative to your viewing frustum (the 2D display screen) and the z is the depth within the screen between the near and far plane.  

 

You shouldn't assume too much about the exact depth within the frustum because there are several different algorithms (z-buffer, w-buffer, 1/Z, logarithmic Z, Irregular Z, Z', etc) that adjust the relative scale between 0.0 and 1.0.  That is, a z value of 0.5 might not mean it is exactly halfway between the near and far plane, but you know 0.4 is closer than 0.5. 
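As a concrete illustration of that non-linearity, here is a small sketch assuming a standard perspective projection that maps the near plane to 0 and the far plane to 1 (other conventions differ, which is part of the point). The function name and the chosen near/far values are just for the example.

    #include <cstdio>

    // Depth after a typical perspective projection (near -> 0, far -> 1):
    // depth = f * (z - n) / (z * (f - n)) for view-space distance z.
    float projectedDepth(float zView, float n, float f)
    {
        return (f * (zView - n)) / (zView * (f - n));
    }

    int main()
    {
        const float n = 0.1f, f = 100.0f;
        // The geometric midpoint of the frustum lands near depth 1, not 0.5...
        std::printf("z = 50.05  -> depth %.4f\n", projectedDepth(50.05f, n, f));
        // ...while depth 0.5 corresponds to a point quite close to the near plane.
        std::printf("z =  0.1998 -> depth %.4f\n", projectedDepth(0.1998f, n, f));
        return 0;
    }

Most of the precision is packed near the near plane, which is why the ordering of depth values is reliable but their absolute position within the frustum is not.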

 

 

For beginners the projected z value is not much use beyond an optimization hint. Stuff up close means you don't draw stuff that is behind it, and you can reduce overdraw by drawing the nearest items first.  Later you will find all kinds of wonderful algorithms that use and manipulate the depth buffer.  You can then use the projected z difference for assorted effects, depth images, advanced lighting techniques, advanced depth of field processing, and more.




#5173130 what is a template from c++ intent?

Posted by frob on 12 August 2014 - 12:34 PM

 

Yes, you are right, I had never noticed that std::vector can hold objects directly, not only pointers. I have always avoided the std:: library and its vectors.

 

I have coded a pooled memory manager, and I wonder whether I should do it with a template or with just the sizeof operator, as I currently have. My point is whether a template, if not needed, has a runtime performance impact or no runtime impact at all.

 

 

No, there is no additional cost to templates. 

 

In fact the opposite is frequently true.

 

Template functions are typically defined in headers and the compiler nearly always inlines them, eliminating the function call overhead. Usually the compiler is then able to further optimize the code to eliminate load/store patterns, hoist invariants out of loops, or otherwise speed things up. Many of those deeply nested template functions can be eliminated as layer after layer is removed through inlining, very often reducing what would be very slow in a debug build to a very simple set of release-build code. As an example, some of the built-in containers will go through ten or more indirections with assorted tests in debug mode, but in release mode that is optimized away completely, resulting in a single direct memory access. Algorithm libraries can sometimes pull in large chunks of raw code, resulting in many optimization opportunities.
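A minimal sketch of why that works: because the template body is visible at the call site, the compiler can instantiate and inline it rather than emitting an opaque call. The function below is invented for the example.

    #include <cstdio>
    #include <vector>

    // Visible in the header, so the compiler can inline every use of it.
    template <typename T>
    T clampValue(T value, T low, T high)
    {
        return value < low ? low : (value > high ? high : value);
    }

    int main()
    {
        std::vector<int> values(1000, 3);
        long long total = 0;
        // In a debug build operator[] may go through several checked layers;
        // in an optimized build this loop usually collapses to plain pointer
        // arithmetic with clampValue folded directly into it.
        for (std::size_t i = 0; i < values.size(); ++i)
            total += clampValue(values[i], 0, 10);
        std::printf("%lld\n", total);
        return 0;
    }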

 

Lambda functions similarly offer an optimization opportunity to the compiler. In the worst case it devolves to a function call, but in the best case everything in the function can be elided or inlined. 
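The classic comparison (a sketch, not a benchmark) is std::sort with a lambda versus qsort with a function pointer: the sort template is instantiated with the lambda's unique type, so the comparison can be inlined into the sorting loop, while qsort is stuck with an indirect call.

    #include <algorithm>
    #include <cstdlib>
    #include <vector>

    // qsort only ever sees an opaque function pointer.
    static int compareInts(const void* a, const void* b)
    {
        return *static_cast<const int*>(a) - *static_cast<const int*>(b);
    }

    int main()
    {
        std::vector<int> a{5, 2, 8, 1};
        std::vector<int> b(a);

        // Indirect call per comparison; hard for the optimizer to see through.
        std::qsort(a.data(), a.size(), sizeof(int), compareInts);

        // The comparator's body is typically inlined into the instantiated sort.
        std::sort(b.begin(), b.end(), [](int x, int y) { return x < y; });
        return 0;
    }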

 

Those are just two of many examples of how the C++ compilation model can be leveraged to produce faster, more efficient code. Of course there is a cost to be paid when it comes to compilation. The cost to pull in all those headers, to inline everything, and to evaluate hundreds or even thousands of potential template substitutions is one of the reasons people dislike the language -- all these tasks increase build times. But the end result is an executable that is heavily optimized and CAN BE extremely difficult to beat in terms of performance.

 

There are exactly two C++ features that have a performance cost beyond their direct usage. Those two features are C++ exceptions and RTTI.  All commercial games I have worked on or studied have disabled both of these features. Both of these were heavily debated and nearly removed from the original C++ standard and their use remains a contentious issue to this day. Everything else you can use and feel comfortable knowing that you are using a library that is well defined, debugged, and fast.

 

There are some good reasons to use alternatives to the standard C++ libraries. It is not because of a fear of templates, but because the functionality the standard library offers can be improved on in some measurable way. One oft-cited article in the C++ standards working group is the fairly old writeup of EA's Standard Template Library (EASTL). It evaluates several things in the 2003 standard that had somewhat better solutions, along with metrics showing why the alternative forms (which also relied heavily on templates) produced somewhat better results on a range of platforms. Note that those who wrote it did not fear the standard libraries or the language functionality. Instead they embraced them and identified ways to make the system even better by leveraging templates and other features in slightly different ways. Then they ran side-by-side comparisons, improving it further. I liked how, in the timing comparisons, they also call out the few cases where their library is less performant than the system's library, and how those cases are usually offset by dramatic performance improvements in other areas.

 

 

As for your not noticing the templates, even in the original 1998 C++ standard the treatment of templates and the libraries using them represented about 2/3 of the language standard. If you actually learn C++ (instead of just learning one book about C++) then templates are impossible to miss.

 

If you don't use templates in your code, you are not using C++.  Instead you are likely using "C with classes".




#5172978 what is a template from c++ intent?

Posted by frob on 11 August 2014 - 07:01 PM

A template is not a variable of any kind. It is not a macro.


I like to think of them as cookie cutters. A cookie cutter defines the shape of the cookie, and you can use it with sugar cookie dough, chocolate chip dough, peanut butter dough, oatmeal cookie dough, or any other flavor, even a flavor the person who made the cookie cutter hadn't thought about.

Similarly, templates are not classes by themselves. They are tools the compiler can adjust and manipulate to create code. The compiler can substitute any kind of typename it wants, even typenames that you did not plan on as the template author.

The compiler uses a template to generate code. You provide the typenames or values that are missing, and the compiler uses what you provided to generate the source for a class or function or other construct. Then it compiles that generated code.
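A tiny sketch of that generation step; the function here is invented for the example, and the comments note which instantiations the compiler stamps out.

    #include <iostream>
    #include <string>

    // One "cookie cutter": a separate function is generated for every type
    // it is used with, including types the author never anticipated.
    template <typename T>
    T largest(T a, T b)
    {
        return b < a ? a : b;
    }

    int main()
    {
        std::cout << largest(3, 7) << '\n';                  // instantiates largest<int>
        std::cout << largest(2.5, 1.5) << '\n';              // instantiates largest<double>
        std::cout << largest(std::string("ant"),
                             std::string("bee")) << '\n';    // instantiates largest<std::string>
        return 0;
    }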


#5172939 Which DirectX Version?

Posted by frob on 11 August 2014 - 04:25 PM

The concepts will migrate easily. The math and theory are directly applicable. It is only the source code and the version specific details that need to be changed.

 

At the learning stage (and as this is For Beginners) it really doesn't matter which version you use. Learn the concepts of the pipeline, learn the mechanics of manipulating the data. Those are the important, transferable parts.

 

In a corporate world there will be a decision by senior team members and management to target a specific age of computers, so you'll pick your interfaces based on compatibility and feature requirements. For now, just pick whatever works for you. If DX10 works for you, use it as long as you want.




#5172916 Game Institute (again)

Posted by frob on 11 August 2014 - 03:18 PM


$49 is nothing.

 

Well... in the long term $49 is nothing. I agree that these days I'll shell out $50 for a book without much thought. However, the OP is a teenager who may not have access to much money.  In the short term, if $49 is too much for you, then learn from other sources.

 

At your age (17 or 18) and pre-college status, there is not much you can do over two months that will radically transform your knowledge and ability. 

 

If you want to study a language or learn a tool or technology, then go for it. But for the next several years you will be studying a wide range of topics in depth. Two months of your own undirected study is less than 5% of what you should get out of a 4-year degree. Sure, it is greater than zero, but be realistic about how much it actually is.

 

I encourage you to find some hobby projects you can build while in school beyond your class projects. If you can find ways to use the knowledge in your classes then great, but building some projects outside of class is a great way to build a useful portfolio and to gain experience.




#5172910 Can't solve problems without googling?

Posted by frob on 11 August 2014 - 03:09 PM

You do both.
 

Right now you are still young. Many people go to college and get a degree and feel like they know everything and are the worldwide expert on all topics. Those people are wrong. You really aren't all that great at it yet, no matter how good you think you are. A recent college grad starts out at the bottom rung, they are entry level, they are inexperienced. As a recent high school grad, you are even that much less knowledgeable than a college grad who has been studying the field for four years.

 

 

There are really only two ways to gain experience. Do things wrong yourself, and watch other people do things wrong. Do both.

 

You will do a lot of things wrong over your career. You will try something and it will fail. You will start down one path and realize you could have done something much better. You will sometimes need to stick with your mistakes when you do not have the time or resources to correct them. You can also learn much more by taking notes on your actions and, at the end of the project or perhaps every 2-3 weeks, reviewing what you did wrong and what you did right over that interval.

 

The other way is to learn from others. You can gain a lot of insights by reading and learning what others did to succeed and to fail. You can read about best practices. You can talk with a lot of people and discover what they are doing that is both different and similar to your actions. You can work with various groups to get a good feel of their practices, and compare what works and what doesn't.

 

 

 

As for skill, some of it is natural ability. Some people are naturally quite skilled at problem solving, others naturally struggle. Just like some people are naturally faster runners, some people are naturally more coordinated, some people are naturally stronger, some people are naturally more linguistic, some are more imaginative, some are more mechanical, and so on. Even so, most people are able to improve their skill if they apply some effort.

 

You get better at it by doing it a lot, and by investing some time in filling in the gaps in your skill and experience through practice and study.




#5172903 Current Gen Console Supported Programming/Scripting Languages?

Posted by frob on 11 August 2014 - 02:45 PM

1) Does the PS4, Vita, Xbox One, Wii U, or 3DS allow games to be written in C++?

 

Yes. That is the primary language for most modern consoles.

 

2) Do they support C++11?

 

More or less. Every implementation has its quirks.

 

Even the latest gcc 4.9 has a short list of C++11 features that are not fully supported. Some compilers have longer lists than others.

 

Compilers are also quickly gaining C++14 functionality, if you want that.

 

 

3) Could I use Embedded Python in my C++ Application?
 

Sure.

 

You'll need to build the libraries as a part of your app. Quite a few games support scripting tools internally. 
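As a very rough sketch of what "build the libraries as part of your app" looks like with CPython's embedding API (the script text here is just a placeholder):

    #include <Python.h>

    int main()
    {
        Py_Initialize();                                      // start the embedded interpreter
        PyRun_SimpleString("print('hello from a script')");   // run script text from your data
        Py_Finalize();                                        // shut it down with the app
        return 0;
    }

You compile and link this against the Python headers and library you ship with the game, rather than whatever happens to be installed on the machine.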

 

The first party groups (Microsoft, Nintendo, etc) have their own rules about things that are not allowed to be scripted (i.e. run from data) and what must  be run from code directly. For example, you might be required to pre-compile your scripts if the existing system prefers to use a JIT execution model. 




#5172805 Comfortable keyboard for coding

Posted by frob on 11 August 2014 - 09:22 AM

For ergonomics, it is hard to go wrong with the Microsoft Natural Ergonomic Keyboard 4000.




#5172549 Programming using Amazon Mobile Associates API and have a legality question

Posted by frob on 09 August 2014 - 09:59 PM

I'd just email their legal group and ask.


#5172481 Using Leaked Code Derived From An Open Source Project

Posted by frob on 09 August 2014 - 12:12 PM

1) Assuming I want to fork the rendering engine and keep it private (which the license permits), can I also add my own copyright notice under a different license to source files in the modified engine that I add? So if I create a new class and I modify an existing class from the original code to link to it, can that new class still have my copyright under a different license? I assume the code I add to the original class cannot be licensed differently.
 
2)  Let's say  I set up a company and develop software derived from software covered by the Boost License and an employee leaks the derived code. Seeing as the derived code is also covered by the Boost License, can anyone obtaining the leaked code legally use it for their own purposes? Even though it may be covered by an open source license, it was not the intention of the author for the code to be distributed.
 


1. The original code is covered under the original license, so you must do what the license says about including their license verbatim, with at least one blank line above and below the license as required. You can use either the long form or the short form, as described in the Boost FAQ. If you add to the file it becomes a work of mixed authorship and things get a little more complicated. You should include your name as a contributor and copyright holder for a portion of the file. Exactly what wording you want to use is up to you and your lawyer. I've seen quite a few variations of "Portions of this file are copyright foomatic, inc, 1994-2014" and similar.
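One possible arrangement of such a mixed-authorship header, using the standard Boost short-form notice; the names and years are placeholders, and as noted above the exact wording belongs with your lawyer:

    // Copyright (c) Original Author 2010.
    // Distributed under the Boost Software License, Version 1.0.
    // (See accompanying file LICENSE_1_0.txt or copy at
    //  http://www.boost.org/LICENSE_1_0.txt)
    //
    // Portions of this file are copyright Foomatic, Inc., 1994-2014,
    // and are distributed under <your chosen license terms>.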

2. The Boost license is not viral. There is no requirement that your changes be licensed under the same terms. As you control the copyright you have several legal options. Perhaps the easiest is a public notice on the web site where your content is released, something like "On {date} we were informed of a defective version released by someone else. Please be certain you download the products directly from us. We are not disclosing their name or website so we don't inadvertently promote them. Contact us if you need additional details." Other options involve leveraging the legal system, which may not be worth pursuing if you, the leaker, or the unauthorized receiver do not have much money. Leaked secrets may be worth a discussion with a lawyer, depending on the value (either to you or to them) of what was taken.


#5172050 Safely eating potatoeses

Posted by frob on 07 August 2014 - 08:36 AM

Those 7.22 decimal digits are important.  Remember you are dealing with both accuracy and precision.

 

Floating point numbers work in base 2, not base 10. The decimal value 0.1 cannot be represented exactly in base 2; it becomes the repeating binary fraction 0.000110011001100110011... The stored value is within the specified precision and is considered an accurate conversion.

 

That binary approximation of the decimal 0.1 is precise enough for 16 decimal places; converting back to decimal gives roughly 0.099999999999999972. So while it is accurate within the required precision of 7 decimal digits, it is not perfectly equal.

 

Also note that while the floating point standard requires 7 digits, the C++ language standard for numeric limits specifies 6 for the "number of digits, q, such that a floating-point number with q decimal digits can be rounded into a floating-point representation and back without loss of precision." Visual Studio uses the same constant. 

 

In other words, even though you have 7.22 digits going from decimal to binary, you only have 6 digits if you intend to go round trip and show the number back to a human in decimal.
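A quick way to see both numbers on your own machine; this is a sketch, and the exact digits printed for 0.1f will vary with the compiler's formatting, but the value will not be exactly 0.1:

    #include <cstdio>
    #include <limits>

    int main()
    {
        // Digits guaranteed to survive a decimal -> float -> decimal round trip.
        std::printf("float digits10 = %d\n", std::numeric_limits<float>::digits10);

        // Printing with more digits than a float holds exposes the nearest
        // representable binary value rather than exactly 0.1.
        std::printf("0.1f stored as  %.20f\n", 0.1f);
        return 0;
    }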

 

 

It all gets back to those two rules:

1. Floating point numbers are approximations. (Consequences: Do not ever consider them as exact values. Always use range operations when testing for equivalence, as in the sketch after these rules. They have a narrow range of precision, or significant figures. Etc.)

2. Floating point inaccuracies accumulate. (Consequences: Small errors accumulate when used repeatedly. Shifting scales can cause catastrophic precision loss. Etc.)
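A minimal sketch of rule 1 in practice; the tolerances are illustrative and should be chosen to match the scale of your own data:

    #include <algorithm>
    #include <cmath>

    // Compare two floats with a tolerance instead of ==.
    bool approximatelyEqual(float a, float b,
                            float relTol = 1e-5f, float absTol = 1e-8f)
    {
        float scale = std::max(std::fabs(a), std::fabs(b));
        return std::fabs(a - b) <= std::max(absTol, relTol * scale);
    }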





