Rattenhirn - Member Since 29 Jun 2004
Posted by Rattenhirn on 03 August 2015 - 03:26 AM
Negative revenue can be caused by bounced payments (i.e. people using expired or stolen credit cards) or by people backing out of their purchase (less likely). Bounced payments would explain why it shows up only on one day, because those things are probably processed only once a month.
While these are possible explanations, I have no idea how Apple handles these things, so, again, I suggest you contact support. It could always be that there was indeed an error and you get your $200,-- back. ;)
Posted by Rattenhirn on 27 July 2015 - 03:07 AM
C++ is not an option due to the object instance member function issue. Believe me, this is a monumental issue for us. Our entire application model depends on events and callbacks. Not being able to set callbacks on "application classes" would be devastating.
We won't be switching to another compiler as nothing comes close to the compiler we are using in terms of optimization. This means whatever subset of C++ is supported in our compiler is the feature set we get to choose from. As stated previously, this compiler seems content with the C++ features available circa 1999, which means basically classes, inheritance, and polymorphism, to name a few.
C++ has very rich support for callback mechanisms, but most of them have very specific restrictions, which are necessary because C++ doesn't want to trade performance for convenience.
For starters you have the exact same mechanism as C, namely function pointers. They are extended to be able to call static class members.
Then you have pointers to member functions, which might be what you want. Since member functions have a hidden first parameter (a pointer to the class containing the member), you need to specify that type. Look here for an example: https://isocpp.org/wiki/faq/pointers-to-members#fnptr-vs-memfnptr-types
This is not used a lot, because people want the type of the first parameter to be configurable. Here you have two options:
1) Static type deduction: that's what templates do. I think someone posted an example of this already on this thread.
2) Dynamic type deduction: that's what virtual functions are for. A common way to get this done is to declare an interface class (class only consisting of pure virtual functions) and use a pointer to this class as callback attachment point.
Pre C++11 has all these already and they work fine.
There are lots of fancy implementations that combine those two in various ways, but it's hard to recommend one without knowing the exact use cases.
Header file hell constitutes about half of my wish list. That in itself is a massive wish / feature change. Besides, I am not going off and inventing a language anytime soon. I know what an enormous task that is just to get something simple running. I would rather spend my time writing code for the products my company produces so that the consumer is happier.
Indeed, that's why I recommended picking a language that has the stuff you want and just not use the stuff you don't want. You can start today with that! ;)
Maybe I'm too used to it, but I don't consider "header file hell" particularly problematic. Sure, it could be more elegant, but there are good reasons why it's implemented like this. Languages that don't have headers lose a lot of optimization opportunities. The only thing that could be better perf wise is whole program optimization, but IMHO it's even worse than headers (vastly increased compilation times) and offers little gain.
Last time I tried, I could not for the life of me get a custom compiler / linker / tool chain to work inside visual studio. That was either due to my inexperience or it could be as I surmised at the time, which was that VS was so heavily integrated with the MS tools it couldn't care less about anything else.
Do you mean to tell me that I can take any old command line compiler, linker, and assembler and hook it into visual studio with some custom property pages without having to write an entire project system plugin? Please enlighten me, wise sage, because I spent many weeks and months trying to do this. I am not talking about a makefile project either, but a full fledged custom tool chain integration.
From what I can tell, VS has only integrated 'cross platform' tool chain stuff because of android (gcc, clang, etc.) This does not help me. I am not an Android developer.
I have been using VS for cross platform development (PC + various consoles) since version 2003. From 2005 on they officially supported multiple platforms and with 2010 and the introduction of MS Build it has become almost nice. However, this stuff has always been un- or underdocumented and I haven't found any nice resources after a quick google.
My hope is that it will become more mainstream as VS 2015 gains traction. If you can't wait, send me a PM and I'll try to write it up, but it'll take me weeks, because I have a quite busy schedule in the near future. :/
I hope that helps!
Posted by Rattenhirn on 26 July 2015 - 12:00 PM
Garbage collection: Garbage collection in its current form is unworkable. However, we have come up with some ideas for garbage collection that would be very good, clean, and would not affect performance in our major use case. Essentially a garbage collector that only runs when you need it to (idle hook) and can be preempted (interrupted) at any time to process higher priority events. Once the system is idle again, the collector resumes where it left off. This is actually possible. The major hurdle is that the C language does not support the reference tracking we need to implement it.
This won't work for various reasons IMO:
1) A big (the biggest?) chunk of perf lost in GC is caused by tracking which things are garbage. It may not show up in the profiler, because it's _everywhere_. Your suggestion doesn't address this at all.
2) GCs already only run when needed or on idle, everything else would be silly.
2a) Usually it's needed when memory is running low, so there's no way to delay it. Note that there are multiple processes competing for memory, so even if you're very conservative with your memory, other processes might not be. If they all use GC, memory will be low all the time.
2b) Preempting the GC might sound like a good idea, but you only want to interrupt it, when it's taking too long. On the next run it will take even longer, because a lot of garbage is left over. The longer it takes, the earlier it gets interrupted, the more garbage piles up. Ultimately you'll run into the low memory condition discussed in 2a much quicker and gain nothing.
We have researched moving to visual studio; however, we can't integrate a custom project and or compiler easily.
Not true, you can integrate new platforms easily by extending the platform folders in the MSBuild directory, and you can have new project types (what for?) as well. Take a look at VS2015, it has a lot more documented stuff for multi-platform development, including already prepared integration of gcc, clang, gdb and lldb.
All in all, I recommend you check out C++ again. Use a decent IDE (Visual Studio) and write a restrictive style guide, detailing what features to use and what not to use. You can start with the C feature set and allow classes and other non-controversial C++ features.
I see no need to invent a new language, because you mostly listed things you don't want and very few that you want (only GC, which is a bad idea for perf), so you should be fine with picking C++ and restricting its use.
I hope that helps!
Edit: Fixed messed up quotes
Posted by Rattenhirn on 06 July 2015 - 12:02 PM
Firstly, there are three char types in C/C++: signed char, unsigned char and char. The latter can be signed or unsigned, depending on platform, but is a distinct type in any case. If you want a printable character, use "char"; if you want a number, use one of the other two.
Secondly, what ASCII character would you expect for 253? ASCII is only defined up to 127, because it is a 7 bit code (http://www.asciitable.com/). Above that you are entering the murky world of code pages, meaning that the result is pretty much dependent on platform and locale. Here's a very good intro for that topic: http://www.joelonsoftware.com/articles/Unicode.html
I hope that helps!
Posted by Rattenhirn on 25 June 2015 - 03:22 PM
The best scripting language for C++ is C++. It integrates perfectly and is very powerful. ;)
Posted by Rattenhirn on 07 June 2015 - 06:32 AM
Hi, I have very little experience with Unity, so I can't tell you what your options there are. I can, however, give some insights on your questions from a general point of view.
Firstly, there are only two cases where texture atlases can be beneficial nowadays:
1. Reduce draw calls. By combining textures into one, more things can be drawn with one draw call. This means that the draw calls also need to have the same render states, the same vertex and index buffers, the same shaders, and, if instancing is not available, the same shader parameters.
Basically, when draw calls are completely identical except for their bound textures, they can be combined, saving on some of the overhead. There are very few cases where this actually applies, so you should go ahead and analyze your typical scenes to estimate what the possible gains are.
Also, when the platform has texture arrays or bindless textures, use these methods instead.
2. Reduce texture memory requirements. In most cases today, texture atlases actually need more memory than individual textures. If you're targeting platforms that have limitations like only square textures, only POT ("power of two") texture dimensions or mipmapping only for POT textures, atlases can be beneficial.
None of the platforms have any of these limitations IIRC, definitely not PC.
Also, if your textures have already been manually combined by the artist, as shown in this image, texture space is used very efficiently, so that can be beneficial. But there's no good way to do this automatically, and it's quite difficult to get right manually as well. Those have the additional advantage of reducing draw calls.
Texture atlases have a lot of downsides. They are a PITA to create and manage and they are prone to artifacts due to filtering and mipmapping, so in my opinion they are rarely worth the effort.
On to your questions:
Ad 1. Modern 3D games won't use texture atlases at all, for the reasons given above. If they do, they make them as big as possible, limited only by hardware capabilities and actual needs.
Ad 2. Atlases are exactly like normal textures, so all limitations for textures apply to atlas textures as well, plus a few additional ones. I know of no recent platform that requires square textures or has performance benefits for square textures. The performance hit that Unity might be talking about is that, when it encounters a platform with only square textures, it has to make all non-square textures square on the fly, increasing load times, memory usage and texture memory pressure. But a platform like that would be an ancient one anyways...
Ad 3. It's up to you how you organize your atlases. Remember, your goal is to reduce draw calls. You can put leaf and trunk textures in separate atlases and use them in two draw calls, or you can put them in the same atlas and use them in two draw calls; no difference in the number of draw calls. The latter method _may_ save one texture switch, if the draw calls are sequential, but that would come with a render state and possibly a shader switch as well, so not much upside.
Or, you could put them in the same atlas and draw them in one call, using the semi-transparent settings. You save one draw call, but render the opaque part inefficiently and possibly with artifacts. Many options, none of them are easy wins.
Ad 4. Very few savings possible by grouping them. In addition, I guess these armor pieces might have different meshes anyways, so it's not possible to combine these draw calls in the first place.
Ad 5. Yes, things that are likely to use the same shader, vertex/index buffer and render states, and that are rendered in sequence, can be combined. Not much usually fits that bill.
Here are a few things that come to mind where texture atlases might be beneficial:
- vegetation / grass
- particle systems
- gui, menus and other 2d elements
In short, I think you'd be better off finding something else to increase performance. Did you do measurements? Are you sure that draw calls are your #1 performance killer? PCs nowadays can handle quite a lot of those with no problems...
I hope that helps!
Posted by Rattenhirn on 06 June 2015 - 10:32 AM
Posted by Rattenhirn on 08 May 2015 - 09:53 AM
I think you made an error substituting the variables with your values:
The first case:
A - B = C -> 15 - 5 = 10
A = 15
B = 5
C = 10
The second case:
B = A - C -> 10 = 15 - 5
A = 15
B = 10
C = 5
That can't make any sense! ;)
Posted by Rattenhirn on 17 July 2014 - 05:54 AM
If you're looking for speed, I recommend LZ4 (http://en.wikipedia.org/wiki/LZ4_(compression_algorithm)), if you're looking for high compression LZMA is pretty much the best freely available algorithm (as stated above). If you want a good trade off between those, zlib is still pretty competitive.
If WinRAR is faster than your LZ77 implementation, then your LZ77 implementation is very very slow.
Posted by Rattenhirn on 11 March 2014 - 03:54 AM
One more note: making by-value parameters (like the pointers pD3dscene and pLightingEffectId in your example) const does not really give you any benefits, because the values are copied anyways.
But it's a matter of personal preference really...
Posted by Rattenhirn on 10 March 2014 - 04:02 PM
If you want the pointer itself const, you have to put a const after the *.
int * ptr; // non const pointer to non const object
const int * ptr; // non const pointer to const object
int const * ptr; // non const pointer to const object (same as above)
int * const ptr = something; // const pointer to non const object
const int * const ptr = something; // const pointer to const object
int const * const ptr = something; // const pointer to const object (same as above)
I think these are all cases and I hope that helps!
Posted by Rattenhirn on 23 February 2014 - 05:55 AM
data compression means to move data from the space dimension to the time dimension...
"Lossless compression reduces bits by identifying and eliminating statistical redundancy."
So nothing is moved between dimensions, whatever that might even mean in this case.
Maybe you are referring to the space/time trade-offs that are common with CS algorithms, including compression. For instance, a better compression rate will typically take a longer time. So by spending more time working on the data one can save space or, vice versa, by using more space, one can cut down the processing time.
Is this what you mean by any chance?
Posted by Rattenhirn on 23 February 2014 - 03:44 AM
After that you can find more detailed videos on the same channel.
Posted by Rattenhirn on 08 February 2014 - 05:58 AM
Next time you make a comparison, only change one thing at a time and the result will be less confusing! ;)