The most difficult thing is having clueless people on your team. Not necessarily programmers, it can be project managers or whatever. They need things slowly explained to them over and over again. But they usually still remain clueless.
Existing interfaces are difficult to change, because it would make people modify code that they already consider "done". They hate that. Usually it's easier to suggest a new interface instead. Incidentally, this is why stuff becomes bloated over time.
I have no idea how this would be done for Java, but for C++ I believe you can just submit a paper describing what you want to change, and the ISO standards committee will take it under consideration.
So you are making a DLL, presumably for developers to use, and want some kind of copy protection?
My first thought is "forget it".
As a buyer of many libraries (or actually persuader of boss to buy libraries) I can safely say that I would never even consider buying any library that doesn't come with full source code. A binary only DLL that "phones home" or something like that is completely out of the question. Always.
Reading the first post again, you are planning to sell your DLL for about $10-$20. Have you really thought this through? How many are you likely to sell? If the library is useful, you can sell it for many times that price. My boss easily pays $1000 for a library without blinking, if I say I need it.
You can really put anything you want in sections of the executable, it's all just bytes anyway. The sections can be called anything also. The only difference is the flags that the OS uses to allocate the memory. Pure data sections could for example be allocated as non-executable or read-only.
I'm not sure what he's on about when he says that data in the code section could give you performance issues. The data needs to be somewhere, and I don't see why loading it from one arbitrary memory address would be worse than loading it from another. It might even be beneficial to have it near the code, because the jump-table data would likely end up in the same memory page as the code that uses it... but I'm not sure, so don't quote me on that. The compiler writers are usually not morons, though, so it's a pretty safe bet that they know what they are doing.
Could someone maybe answer that? What has changed here?
Not much has changed in the 64-bit API. A lot of code written for 32-bit Windows should just be a matter of recompiling.
The big thing to watch out for is pointer values that are commonly stored as integers in certain cases. SetWindowLong/GetWindowLong come to mind; they have been replaced with SetWindowLongPtr/GetWindowLongPtr, which use the data type LONG_PTR instead of LONG. If you use those, your code will work on both 32 and 64 bit.
As for the original question, yes, the book is still pretty much relevant. New functions are added to the API every now and then, and that's about as far as the changes in the API go. The stuff that the book talks about should work. Microsoft is really good at backwards compatibility.
There's no need to practice something like "scrum". It's mostly a joke to keep project managers busy with something, so the rest can get actual work done.
We used to have a development method that worked pretty well. Then orders came from the owners to adopt scrum. In practice, things just turned into the wild west instead, with everyone just doing whatever they please.
Amazingly, this has enabled us to get more work done with higher quality than before.
One way could be to have a std::map somewhere that maps the class-name string of each object to its CRuntimeClass pointer. When you create your views, look up the name and use the CRuntimeClass pointer you find.
Obviously the map then needs to be populated somewhere... One way of doing it is to create a new preprocessor macro to use instead of the regular IMPLEMENT_DYNCREATE: it does everything IMPLEMENT_DYNCREATE does, and additionally adds the CRuntimeClass to your map by creating an instance of a simple struct/class whose constructor registers your object in the map. You can solve the creation-order uncertainty by accessing the map through a function, where the map is a static unique_ptr inside the function that you initialize on first access.
It's not pretty, but it's a common trick to implement an object factory with self-registering objects. The alternative is, as you already figured out, to use long chains of if-statements.
The fact that you can pass any lua_State you want into your functions means that you could easily create two, zero, or four thousand lua_State objects and use all of them (or none of them) freely. This is the exact opposite of what a singleton is meant to accomplish.
Ok, well, this is good. So it's not the end of the world to just drop in a bunch of free functions organised in namespaces? I shall proceed with doing that then.
If it's indeed not the end of the world, I'd like to add why I thought it might be. I often come across experienced C++ and OOP programmers who preach using classes, OOP, and all the features of the language. I thought there might be a better way to go about it than just using free functions, or wrapping functions in a class instead of a namespace.
Attempting to shoehorn everything into an OOP model is a mindset from the late 90s/early 00s.
The truth is that OOP should be another tool in your box. It's useful a lot of the time, but it's not for everything.
I'm not quite sure, but I think that you can't use resources in the Express editions.
You can, but you have to edit the resource file with a text editor or some other resource editor. What is not included is the resource editor itself.
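For reference, a hand-edited .rc file might look something like this. The identifiers and file names are purely illustrative; they would normally be #defines in your resource.h.

```
// Illustrative only: IDD_ABOUT and IDI_APPICON are assumed to be
// defined in resource.h; "app.ico" is a placeholder file name.
#include "resource.h"

IDI_APPICON ICON "app.ico"

IDD_ABOUT DIALOGEX 0, 0, 200, 100
CAPTION "About"
BEGIN
    DEFPUSHBUTTON "OK", 1, 70, 76, 60, 14
END
```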
Try right-clicking the resource file and going to "Properties". In the "General" section, make sure that "Item Type" is set to "Resource". Also check that the include path for the resource contains the folder where resource.h is.