Rattenhirn (Member Since 29 Jun 2004)
Posted by Rattenhirn on 13 March 2016 - 12:17 PM
Posted by Rattenhirn on 04 March 2016 - 12:46 PM
Yes you can!!
Indeed, that's a case of B hiding something it inherits from A. It's not good practice because it can be rather confusing. Some compilers will emit a warning.
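A minimal sketch of that hiding case (A, B and the member names are illustrative, not from the original question):

```cpp
// B::value hides A::value; this is hiding, not overriding (nothing is virtual).
struct A {
    int value() { return 1; }
};

struct B : A {
    int value() { return 2; }  // hides A::value
};

int callValue(B& b) {
    return b.value();          // resolves to B::value
}

int callBase(B& b) {
    return b.A::value();       // the hidden member is still reachable when qualified
}
```

Note that the hidden member doesn't disappear; it just needs explicit qualification to be reached, which is exactly why compilers warn about it.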
Posted by Rattenhirn on 04 March 2016 - 04:19 AM
So let's get that out of the way by making two definitions:
Type: a class, struct or simple type (int, float,...) definition
Instance: a named, usable instance of a type
Now the answer:
Types can be derived from other types, and when that happens, members can be added, overridden and hidden, but never renamed or removed.
Instances are of exactly one type and contain exactly what is defined by the type definition, not more, not less.
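A small illustration of those two definitions (all names here are mine):

```cpp
#include <string>

// Base and Derived are types; Derived adds a member and overrides another.
struct Base {
    int x = 1;
    virtual int id() { return 0; }
    virtual ~Base() {}
};

struct Derived : Base {
    std::string name = "derived";    // added member
    int id() override { return 1; }  // overridden member
};

int describe() {
    Derived d;     // an instance of exactly one type: Derived
    Base& b = d;   // a Base view of the same instance; nothing is removed or renamed
    return b.id(); // dynamic dispatch still reaches Derived::id
}
```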
I hope that helps!
Posted by Rattenhirn on 21 September 2015 - 01:51 AM
Hello, you're mixing up a couple of things.
Firstly, you want to do stuff on the same line. This is easier than you think:
Level level; level.Load("level1");
Tadaa! However, in general, it is more readable to use shorter lines, especially since debuggers usually work on a per line basis. So, stepping through the line above will likely be quite awkward.
Secondly, you're talking about new, constructors and destructors.
Let me try to untangle this a bit.
Whether you use "new" or not, an object's constructor and destructor will still be run (if it has any). So they don't have anything to do with the use of "new".
Instead, "new" changes where the object will live and therefore influences its lifetime.
In short, without "new", the object will live until the current scope ends (at the next "}"). With "new", the object will live until "delete" is called on it.
For a more detailed explanation, this looks to be fairly comprehensive:
For the sake of completeness, here's how the above example would work with "new", again as a single line:
Level* level = new Level(); level->Load("level1");
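To make the lifetime difference visible, here's a sketch with a stand-in Level type (the counter and function names are mine) whose constructor and destructor track how many instances are alive:

```cpp
// Stand-in Level type that counts live instances via constructor/destructor.
static int liveLevels = 0;

struct Level {
    Level()  { ++liveLevels; }
    ~Level() { --liveLevels; }
    void Load(const char*) {}
};

int automaticLifetime() {
    {
        Level level;            // no "new": dies at the closing brace
        level.Load("level1");
    }                           // destructor runs here
    return liveLevels;         // back to 0
}

int manualLifetime() {
    Level* level = new Level(); // "new": lives until delete
    level->Load("level1");
    int alive = liveLevels;     // still 1 at this point
    delete level;               // destructor runs now
    return alive;
}
```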
I hope that helps!
Posted by Rattenhirn on 30 August 2015 - 02:44 AM
I think Civilization V does a good job at exploring the global impact of religions. Another example would be Black & White, showing religion's effect on primitive cultures.
On a personal level, many RPGs explore various aspects of the religious lifestyle. For example, Leliana's or Cassandra's story in the Dragon Age series, especially part 3.
In addition, there are many games that deal with aspects of religion via satire, which can be very entertaining indeed.
I don't think it would work as core gameplay element though.
Posted by Rattenhirn on 29 August 2015 - 09:05 AM
Stencil buffers are not an option because stencil testing is done after the pixel shader is executed and since the execution of the pixel shader takes a relatively long time, this would not increase performance.
Nope, stencil ops (like depth ops) are done before the pixel shader is run, unless the pixel shader contains stencil or depth instructions itself (which is not very common). Maybe you're confusing it with the alpha test, which is indeed run after the pixel shader, because it depends on whatever the pixel shader outputs.
Posted by Rattenhirn on 29 August 2015 - 06:36 AM
I think the correct way to handle the camera-inside-cube problem is to clip the cube against the near plane before transforming it. But the result is no longer a cube, so it's not very straightforward.
But why don't you just build geometry of the cube and render it using your post processing shader? This way the GPU / driver takes care of all that complicated clipping business. If you turn culling off, it will work even when the cube intersects with the near clip plane. If that, for some reason, won't work, you can still render the cube with an empty shader and write to the stencil buffer and you basically have what you did before, but faster, easier and more exact.
I hope that helps!
Posted by Rattenhirn on 03 August 2015 - 03:26 AM
Negative revenue can be caused by bounced payments (i.e. people using expired or stolen credit cards) or by people backing out of their purchase (less likely). The bounced payments would explain why it's only on one day, because those things are probably processed only once a month.
While these are possible explanations, I have no idea how Apple handles these things, so, again, I suggest you contact support. It could always be that there was indeed an error and you get your $200,-- back. ;)
Posted by Rattenhirn on 27 July 2015 - 03:07 AM
C++ is not an option due to the object instance member function issue. Believe me, this is a monumental issue for us. Our entire application model depends on events and callbacks. Not being able to set callbacks on "application classes" would be devastating.
We won't be switching to another compiler as nothing comes close to the compiler we are using in terms of optimization. This means whatever subset of C++ is supported by our compiler is the feature set we get to choose from. As stated previously, this compiler seems content with the C++ features available circa 1999, which basically means classes, inheritance, and polymorphism, to name a few.
C++ has very rich support for callback mechanisms, but most of them come with very specific restrictions, which are necessary because C++ doesn't want to trade performance for convenience.
For starters you have the exact same mechanism as C, namely function pointers. They are extended to be able to call static class members.
Then you have pointers to member functions, which might be what you want. Since member functions have a hidden first parameter, which is a pointer of the type of the class containing the member, you need to specify that. Look here for an example: https://isocpp.org/wiki/faq/pointers-to-members#fnptr-vs-memfnptr-types
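A short sketch of such a pointer to member function (Listener, onEvent and fireEvent are made-up names for illustration):

```cpp
// The callback type must spell out the class of the hidden "this" parameter.
struct Listener {
    int hits = 0;
    void onEvent(int n) { hits += n; }
};

typedef void (Listener::*Callback)(int);

void fireEvent(Listener* target, Callback cb, int payload) {
    (target->*cb)(payload);  // invoke the member function through the pointer
}

int demo() {
    Listener l;
    fireEvent(&l, &Listener::onEvent, 3);
    fireEvent(&l, &Listener::onEvent, 4);
    return l.hits;
}
```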
This is not used a lot, because people want the type of the first parameter to be configurable. Here you have two options:
1) Static type deduction: that's what templates do. I think someone posted an example of this already on this thread.
2) Dynamic type deduction: that's what virtual functions are for. A common way to get this done is to declare an interface class (class only consisting of pure virtual functions) and use a pointer to this class as callback attachment point.
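A sketch of the interface-class approach from option 2 (Button, IClickHandler and the rest are illustrative names):

```cpp
// Interface class consisting only of pure virtual functions.
class IClickHandler {
public:
    virtual void onClick(int x, int y) = 0;
    virtual ~IClickHandler() {}
};

// The attachment point stores a pointer to the interface, not a concrete type.
class Button {
public:
    void setHandler(IClickHandler* h) { handler = h; }
    void click(int x, int y) { if (handler) handler->onClick(x, y); }
private:
    IClickHandler* handler = nullptr;
};

class CountingHandler : public IClickHandler {
public:
    int clicks = 0;
    void onClick(int, int) override { ++clicks; }
};

int demo() {
    Button b;
    CountingHandler h;
    b.setHandler(&h);  // any class implementing the interface will do
    b.click(1, 2);
    b.click(3, 4);
    return h.clicks;
}
```

The upside over pointers to members is that the attached object can be of any type, as long as it implements the interface; the cost is one virtual call per invocation.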
Pre C++11 has all these already and they work fine.
There are lots of fancy implementations that combine those two in various ways, but it's hard to recommend one without knowing the exact use cases.
Header file hell constitutes about half of my wish list. That in itself is a massive wish / feature change. Besides, I am not going off and inventing a language anytime soon. I know what an enormous task that is just to get something simple running. I would rather spend my time writing code for the products my company produces so that the consumer is happier.
Indeed, that's why I recommended picking a language that has the stuff you want and just not use the stuff you don't want. You can start today with that! ;)
Maybe I'm too used to it, but I don't consider "header file hell" particularly problematic. Sure, it could be more elegant, but there are good reasons why it's implemented like this. Languages that don't have headers lose a lot of optimization opportunities. The only thing that could be better perf-wise is whole program optimization, but IMHO it's even worse than headers (vastly increased compilation times) and offers little gain.
Last time I tried, I could not for the life of me get a custom compiler / linker / tool chain to work inside visual studio. That was either due to my inexperience or it could be as I surmised at the time, which was that VS was so heavily integrated with the MS tools it couldn't care less about anything else.
Do you mean to tell me that I can take any old command line compiler, linker, and assembler and hook it into visual studio with some custom property pages without having to write an entire project system plugin? Please enlighten me more wise sage, because I spent many weeks and months trying to do this. I am not talking about a makefile project either, but a full fledged custom tool chain integration.
From what I can tell, VS has only integrated 'cross platform' tool chain stuff because of android (gcc, clang, etc.) This does not help me. I am not an Android developer.
I have been using VS for cross platform development (PC + various consoles) since version 2003. From 2005 on they officially supported multiple platforms and with 2010 and the introduction of MS Build it has become almost nice. However, this stuff has always been un- or underdocumented and I haven't found any nice resources after a quick google.
My hope is that it will become more mainstream as VS 2015 gains traction. If you can't wait, send me a PM and I'll try to write it up, but it'll take me weeks, because I have a quite busy schedule in the near future. :/
I hope that helps!
Posted by Rattenhirn on 26 July 2015 - 12:00 PM
Garbage collection: Garbage collection in its current form is unworkable. However, we have come up with some ideas for garbage collection that would be very good, clean, and would not affect performance in our major use case. Essentially a garbage collector that only runs when you need it to (idle hook) and can be preempted (interrupted) at any time to process higher priority events. Once the system is idle again, the collector resumes where it left off. This is actually possible. The major hurdle is that the C language does not support the reference tracking we need to implement it.
This won't work for various reasons IMO:
1) A big (the biggest?) chunk of perf lost in GC is caused by tracking which things are garbage. It may not show up in the profiler, because it's _everywhere_. Your suggestion doesn't address this at all.
2) GCs already only run when needed or on idle, everything else would be silly.
2a) Usually it's needed when memory is running low, so there's no way to delay it. Note that there are multiple processes competing for memory, so even if you're very conservative with your memory, other processes might not be. If they all use GC, memory will be low all the time.
2b) Preempting the GC might sound like a good idea, but you only want to interrupt it, when it's taking too long. On the next run it will take even longer, because a lot of garbage is left over. The longer it takes, the earlier it gets interrupted, the more garbage piles up. Ultimately you'll run into the low memory condition discussed in 2a much quicker and gain nothing.
We have researched moving to visual studio; however, we can't integrate a custom project and or compiler easily.
Not true, you can integrate new platforms easily by extending the platform folders in the ms build directory and you can have new project types (what for?) as well. Take a look at VS2015, it has a lot more documented stuff for multi platform development, including already prepared integration of gcc, clang, gdb and lldb.
All in all, I recommend you check out C++ again. Use a decent IDE (Visual Studio) and write a restrictive style guide, detailing what features to use and what not to use. You can start with the C feature set and allow classes and other non-controversial C++ features.
I see no need to invent a new language, because you mostly listed things you don't want and very few things that you want (only GC, which is a bad idea for perf), so you should be fine with picking C++ and restricting its use.
I hope that helps!
Edit: Fixed messed up quotes
Posted by Rattenhirn on 06 July 2015 - 12:02 PM
Firstly, there are three char types in C/C++: signed char, unsigned char and char. The latter can be signed or unsigned, depending on the platform, but is a distinct type in any case. If you want a printable character, use "char"; if you want a number, use one of the other two.
Secondly, what ASCII character would you expect for 253? ASCII is only defined up to 127, because it is a 7 bit code (http://www.asciitable.com/). Above that you are entering the murky world of code pages, meaning that the result is pretty much dependent on platform and locale. Here's a very good intro for that topic: http://www.joelonsoftware.com/articles/Unicode.html
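A small sketch of what the bit pattern 253 turns into in each flavour (the wrap to -3 assumes the usual two's complement representation, which every mainstream platform uses):

```cpp
// unsigned char has range 0..255, so 253 is representable as-is.
int asUnsigned() {
    unsigned char c = 253;
    return c;
}

// signed char has range -128..127; 253 wraps around to -3 (two's complement).
int asSigned() {
    signed char c = static_cast<signed char>(253);
    return c;
}
```

Plain char behaves like one or the other depending on the platform, which is exactly why you shouldn't use it when you mean "small integer".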
I hope that helps!
Posted by Rattenhirn on 25 June 2015 - 03:22 PM
The best scripting language for C++ is C++. It integrates perfectly and is very powerful. ;)
Posted by Rattenhirn on 07 June 2015 - 06:32 AM
Hi, I have very little experience with Unity, so I can't tell you what your options there are. I can, however, give some insights on your questions from a general point of view.
Firstly, there are only two cases where texture atlases can be beneficial nowadays:
1. Reduce draw calls. By combining textures into one, more things can be drawn with one draw call. This means that the draw calls also need to have the same render states, the same vertex and index buffers, the same shaders, and, if instancing is not available, the same shader parameters.
Basically, when draw calls are completely identical except for their bound textures, they can be combined, saving on some of the overhead. There are very few cases where this actually happens, so you should go ahead and analyze your typical scenes to estimate what the possible gains are.
Also, when the platform has texture arrays or bindless textures, use these methods instead.
2. Reduce texture memory requirements. In most cases today, texture atlases will actually need more memory than individual textures. If you're targeting platforms that have limitations like square-only textures, only POT ("power of two") texture dimensions, or mipmapping only for POT textures, atlases can be beneficial.
None of the platforms have any of these limitations IIRC, definitely not PC.
Also, if your textures have already been manually combined by the artist, as shown in this image, texture space is used very efficiently, so that can be beneficial. But there's no good way to do this automatically, and it's quite difficult to get right manually as well. Those have the additional advantage of reducing draw calls too.
Texture atlases have a lot of downsides. They are a PITA to create and manage, and they are prone to artifacts due to filtering and mipmapping, so in my opinion they are rarely worth the effort.
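For illustration, the basic UV bookkeeping an atlas forces on you looks roughly like this (a sketch under my own assumptions: a regular grid of tiles, while real atlases often pack arbitrary rectangles, and the filtering/mipmap bleed between neighbouring tiles isn't handled here):

```cpp
struct UV { float u, v; };

// Remap a [0,1] UV into one tile of a tilesX x tilesY atlas grid.
// tileX/tileY select the tile (0-based).
UV remapToAtlas(UV uv, int tileX, int tileY, int tilesX, int tilesY) {
    UV out;
    out.u = (tileX + uv.u) / tilesX;
    out.v = (tileY + uv.v) / tilesY;
    return out;
}
```

Every mesh or sprite that moves into the atlas needs its UVs run through something like this, which is part of the management pain mentioned above.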
On to your questions:
Ad 1. Modern 3D games won't use texture atlases at all, for the reasons given above. If they do, they make them as big as possible, limited only by hardware capabilities and actual needs.
Ad 2. Atlases are exactly like normal textures, so all limitations for textures apply to atlas textures as well, plus a few additional ones. I know of no recent platform that requires square textures or gives performance benefits for square textures. The performance hit that Unity might be talking about is that when it encounters a platform with only square textures, it has to make all non-square textures square on the fly, increasing load times, memory usage and texture memory pressure. But a platform like that would be an ancient one anyways...
Ad 3. It's up to you how you organize your atlases. Remember, your goal is to reduce draw calls. You can put leaf and trunk textures in separate atlases and use them in two draw calls, or you can put them in the same atlas and use them in two draw calls; no difference in the number of draw calls. The latter method _may_ save one texture switch, if the draw calls are sequential, but that would come with a render state and possibly a shader switch as well, so not much upside.
Or, you could put them in the same atlas and draw them in one call, using the semi-transparent settings. You save one draw call, but render the opaque part inefficiently and possibly with artifacts. Many options, none of them are easy wins.
Ad 4. Very few savings possible by grouping them. In addition, I guess these armor pieces might have different meshes anyways, so it's not possible to combine these draw calls in the first place.
Ad 5. Yes, things that are likely to be used by the same shader, vertex/indexbuffer and renderstates, and are rendered in sequence, so they can be combined. Not much usually fits that bill.
Here are a few things that come to mind where texture atlases might be beneficial:
- vegetation / grass
- particle systems
- gui, menus and other 2d elements
In short, I think you'd be better off finding something else to increase performance. Did you do measurements? Are you sure that draw calls are your #1 performance killer? PCs nowadays can handle quite a lot of those with no problems...
I hope that helps!
Posted by Rattenhirn on 06 June 2015 - 10:32 AM