
About BrianL

  1. Precompiled Headers...

    When a file is set to compile using precompiled headers, the compiler ignores everything before that include. As a result, your 'using precompiled headers' case looked like this to the compiler:

    ```cpp
    #include "GraphicsPrecompiled.h"
    #include "../GraphicsInclude.h"
    #else
    #include "NewModelFuncs.h"
    #endif
    ```

    ...which obviously isn't valid.
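A corrected arrangement might look like the sketch below. The `#if` condition name is hypothetical (the post doesn't show the original one); the point is only the ordering:

```cpp
// The precompiled header include must come first; the compiler discards
// everything before it, so the conditional block has to follow it intact.
#include "GraphicsPrecompiled.h"   // always first

#if USE_NEW_MODEL_FUNCS            // hypothetical switch name
#include "NewModelFuncs.h"
#else
#include "../GraphicsInclude.h"
#endif
```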
  2. Capitalizing Variables

    Standards (or consensus at best) are language specific. Java and C# have relatively well-established standards compared to C++. In general, consistency is most important. Whatever you do, stick with it. If you are working in someone else's code, follow their standards. Do a search for camelCase to find discussions on various standards (I'm not advocating it specifically, but it's a decent search keyword).
  3. Many-Core processors -- AI application

    Multithreading isn't an AI problem; it is an architectural problem. The most pragmatic solution is to convert any chunks of code that take a significant amount of time into tasks. AI code is a good candidate for this, as there are plenty of heavyweight operations in it. At the same time, updating a dynamic mesh or calculating a ray intersection with the world fits the same model. The right solution here, in my opinion, is to get a task framework running and to leverage it heavily in the AI systems. Moving AI to other threads (ideally on another processor) will separate AI from the biggest CPU user: the CPU cost of rendering. That said, I respectfully disagree that the main barrier in games is lack of CPU/memory power. In my personal experience, the biggest challenges have been working with designers to get what they really want and dealing with relatively static presentation (ie lack of high quality animation/audio generation). The AI logic side of the system can be designed to scale relatively easily compared to asset generation.
    1) Including a header in a cpp file 'pastes' the contents of the header into it at compile time.

    2) Static controls scope/visibility; const controls the ability to change.

    3) You can't have multiple visible symbols with the same name.

    If you define a variable in a header, it is going to be pasted into every cpp file including the header. By default, this definition is visible across modules, which causes a link error. If you use static when defining the variable, it is going to add a locally visible instance of the variable to every module. This won't generate a link error but will cause confusion; if you change the value of the variable, the change will only be visible in your module! If you don't change it, you are wasting memory, as you have storage for more than one instance at a time. extern'ing is what you want to do here. Use static only if you want the variable to be private to the module it is in - ie you don't want to extern it.
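The extern pattern described above looks like this in practice. File and variable names are made up for illustration; the two "files" are shown as one listing:

```cpp
// config.h - declare the variable; no storage is created here, so every
// cpp file that includes this header refers to the same single instance.
extern int g_MaxPlayers;

// config.cpp - define it exactly once; this is where the storage lives.
int g_MaxPlayers = 8;

// Contrast: 'static int g_MaxPlayers = 8;' in the header would silently
// give every module its own private copy - the confusion described above.
```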
  5. performance issue...

    300 virtual function calls won't be a problem at all.
  6. Material Override design

    A few ideas:

    1) Separate out your 'material definition' and your 'material instance', potentially as different types. Instances point at definitions. Definitions are immutable except at creation.

    2) Keep a single 'material' structure, but require cloning it before it is writeable. If done through a factory or some other interface, you could define a 'finalization' stage after which your material isn't editable.
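Idea (1) can be sketched as an immutable definition type plus a lightweight instance type that points at it. All names and fields here are illustrative, not from a real engine:

```cpp
#include <memory>
#include <string>

// Immutable after creation: const members, shared read-only via pointer.
struct MaterialDefinition {
    const std::string shader;
    const float       glossiness;
};

// Many instances can share one definition; only instance state mutates.
struct MaterialInstance {
    std::shared_ptr<const MaterialDefinition> def;  // points at a definition
    float glossinessOverride;                       // per-instance override
};

MaterialInstance MakeInstance(std::shared_ptr<const MaterialDefinition> d) {
    return MaterialInstance{d, d->glossiness};      // start from the default
}
```

Because the definition is const behind a shared_ptr, editing an instance can never corrupt the materials of other objects sharing the same definition.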
  7. There are a bunch of patterns for customizing templated classes.

    1) Use policy templates to specialize your class. There is plenty of info on the net, as this is a very common technique.

    2) Externalize your customization. Have your templated class call a free function which you can specialize via template function specialization.

    3) Go with runtime customization/polymorphism.

    If anything, C++ gives you too many options here. ;)
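Pattern (2) looks something like the sketch below, using an overload as the external customization point. `Describe`, `Enemy`, and `Logger` are hypothetical names for illustration:

```cpp
#include <string>

// Default behavior, used for any type without a customization.
template <typename T>
std::string Describe(const T&) { return "unknown"; }

struct Enemy {};

// Customization: a plain overload found via argument-dependent lookup.
std::string Describe(const Enemy&) { return "enemy"; }

// The templated class never needs to be touched to support new types;
// it just calls the free function.
template <typename T>
struct Logger {
    std::string Log(const T& v) { return Describe(v); }
};
```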
  8. Use of the Standard C++ Library

    The main performance pitfalls are:

    1) Poor use. For example, building your matrix class out of vectors of vectors isn't a good idea.

    2) Use of the wrong container. If you aren't going to add/remove often and are going to iterate frequently, a flat container like a vector will probably perform much better than a list. Node-based containers are particularly poor performers on modern consoles. This probably isn't something you need to worry about unless you are making a high performance game for them though.

    3) Fragmentation in low memory environments. Again, not much of an issue unless you are working in a constrained environment.

    4) Executable size. Depending on the compiler, executables can grow in size rapidly. Heavy template library use can make a 10 meg difference in executable size in a half-million line application. If your compiler supports comdat folding, this is much less of an issue. It isn't really that big of a deal outside of a memory constrained environment.

    There are certainly times when stl isn't a good choice, but most only apply in special cases. Also keep in mind that you can use stl algorithms without the containers.
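The last point - algorithms without containers - can be shown in a couple of lines. The function name is made up for the example:

```cpp
#include <algorithm>

// Standard algorithms operate on iterator ranges, and raw pointers into a
// plain C array are valid iterators - no std::vector required.
int SmallestScore(int* scores, int count) {
    std::sort(scores, scores + count);
    return scores[0];
}
```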
    Two little notes:

    1) That code can break if you run on a compiler that supports strict aliasing optimizations if they are enabled. Casting through a union is the most common workaround. See this page for more details:

    2) The compiler/CPU can do screwy stuff when converting between an integer and a float if you set the wrong bits. We had a bug in our endian swapping code a while back where converting an int to a float via address casting caused NaNs in very specific cases. I don't recall the specifics, but it went all the way down to getting the value into a register. I think it was doing something like:

    ```cpp
    float fData;
    Read(&fData, sizeof(4)); // Reads in unswapped data, which involves interpreting the data as an int.
    Swap(&fData, sizeof(4)); // Swaps the data to the correct endian
    ```

    If the unswapped data bits were arranged just right, the value stored in fData was converted to a NaN, basically corrupting the data before the Swap occurred. Basically, we needed to do this instead:

    ```cpp
    uint32 nData;
    Read(&nData, sizeof(4));               // Reads in unswapped data
    Swap(&nData, sizeof(4));               // Swaps the data to the correct endian
    float fData = UnionCast<float>(nData); // Convert the representation to a float
    ```

    Moral: Be really careful when converting between types. Even when you know what you are doing, it's very easy to introduce either a compiler specific bug or a value specific bug.
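The post doesn't show `UnionCast` itself; a common way to write it (matching the "cast through a union" workaround mentioned in note 1) is:

```cpp
#include <cstdint>

// Reinterpret the bit pattern of one type as another without an
// aliasing-unsafe pointer cast. Sizes must match exactly.
template <typename To, typename From>
To UnionCast(From from) {
    static_assert(sizeof(To) == sizeof(From), "size mismatch");
    union { From f; To t; } u;
    u.f = from;
    return u.t;
}
```

Note that union type punning is technically implementation-defined in C++ (it is the sanctioned idiom in C), but major compilers such as GCC and MSVC support it; `memcpy` is the other common portable option.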
    1) Textures can be compressed using hardware supported compression (ie DXT, etc). These don't need to be decompressed and are smaller/faster to load.

    2) General compression (ie zlib) can help load times if file IO is a bottleneck. Profile your file loading times vs smaller file + decompression. Heavy compression will definitely take longer, but you can get a nice size reduction with relatively minimal compression, which can be a win as file IO is all sorts of slow.
    Depends entirely on your needs. If you need primarily temporary strings, you could make a simple TString&lt;MAX_PATH&gt; class which creates a string with an std::string style interface. With a template parameter specifying the internal array size, you wouldn't need to worry about heap allocations. An std::string-like interface would make it easy to change between implementations.

    The two extremes: you could try to do away with strings entirely and switch over to an ID based approach (if you plan on having a huge number of identifiers, etc), no dynamic content, etc. Or you could just go with std::strings and procrastinate on the issue until you hit problems. Both are totally viable and depend on the needs of your project and what tradeoffs you need to make.
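A minimal sketch of the TString&lt;N&gt; idea, assuming a deliberately tiny std::string-style surface (a real version would mirror more of the interface):

```cpp
#include <cstddef>
#include <cstring>

// Fixed-capacity string stored inline: no heap allocation, ever.
// Overflowing input is truncated rather than reallocated.
template <std::size_t N>
class TString {
    char        data_[N + 1] = {};
    std::size_t size_        = 0;
public:
    TString() = default;
    TString(const char* s) {
        size_ = std::strlen(s);
        if (size_ > N) size_ = N;        // truncate to capacity
        std::memcpy(data_, s, size_);
        data_[size_] = '\0';
    }
    const char* c_str() const { return data_; }
    std::size_t size()  const { return size_; }
    bool        empty() const { return size_ == 0; }
};
```

Because the whole object lives wherever it is declared (stack, member, array), swapping it in and out for std::string later is mostly a typedef change if the interfaces line up.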
  12. random number problem

    It sounds like you may have some memory stomping going on. If a specific location in memory is getting corrupted, you might be able to use a watchpoint to trap when it happens.

    If you are concerned the storage/restoration may be causing issues, fill in a buffer and copy it. Encode it, decode it, and verify the results are the same with a byte comparison. As you are leaving 2 bytes filled with garbage data in your current structure, you will pick up some garbage values. You might want to either manually set these to some value or just memset your whole buffer at the start to a known value to get rid of the randomness.

    I would also suggest converting to either streams or unions for writing your data. In my experience, either of these approaches is much less error prone than dealing with the offsets directly. For example, you could do something like:

    ```cpp
    struct Entry
    {
        int nStorage;
        int nRandomNumber;
    };

    union EntryBuffer
    {
        Entry EntryList[31]; // leave the last 8 bytes for the CRC
        BYTE  buffer[256];
    };

    EntryBuffer buff;
    buff.EntryList[0].nStorage      = 1;
    buff.EntryList[0].nRandomNumber = rand() % n;
    ...
    buff.buffer[255] = Calculate_CRC(buff, 255);
    EncodeBuffer(&buff.buffer);
    ```

    ...or something along those lines. Using a binary stream writing class is another option. These are actually pretty simple to put together.
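The "binary stream writing class" option can be as small as this sketch: a writer that appends raw bytes to a growable buffer so callers never compute offsets by hand. The class name and interface are made up for illustration:

```cpp
#include <cstdint>
#include <type_traits>
#include <vector>

class BinaryWriter {
    std::vector<std::uint8_t> bytes_;
public:
    // Append the raw bytes of any trivially copyable value; the buffer
    // tracks the current offset for you.
    template <typename T>
    void Write(const T& value) {
        static_assert(std::is_trivially_copyable<T>::value, "raw write only");
        const auto* p = reinterpret_cast<const std::uint8_t*>(&value);
        bytes_.insert(bytes_.end(), p, p + sizeof(T));
    }
    const std::vector<std::uint8_t>& Data() const { return bytes_; }
};
```

A matching reader, plus endian swapping inside `Write`, makes the pair a drop-in replacement for hand-maintained offset math.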
  13. STRIPS planning implementation?

    Wouldn't an informed forward search (ie A*) address the 'only consider actions which solve towards the goal' issue?
  14. STRIPS planning implementation?

    A simple implementation of that definition of goal based planning should be pretty easy. Just encode your goal in a search space state. Have a method to determine the distance between your initial state and the goal state. Do a search (either forward or backward) between the states using the actions as operators.

    At least in GOAP, goals were special because they were prioritized outside of the planning system. Basically, the AI decided what to do via goals and then how to do it via the planner. The search direction itself shouldn't matter very much if you use an admissible search algorithm - at least behaviorally; the size of the space you search may vary.
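The recipe above (goal as a state, actions as operators, search between states) can be sketched with world facts as bitmask states and a breadth-first forward search. Everything here is a toy assumption; a real planner would use A* with the distance heuristic mentioned above:

```cpp
#include <queue>
#include <set>
#include <utility>
#include <vector>

// A STRIPS-style action: preconditions that must hold, plus add/delete
// effects, all over a bitmask of world facts.
struct Action {
    unsigned pre, add, del;
};

// Returns the length of a shortest plan from start to goal, or -1 if none.
int PlanLength(unsigned start, unsigned goal, const std::vector<Action>& actions) {
    std::queue<std::pair<unsigned, int>> open;
    std::set<unsigned> seen{start};
    open.push({start, 0});
    while (!open.empty()) {
        auto [s, d] = open.front();
        open.pop();
        if ((s & goal) == goal) return d;           // goal facts satisfied
        for (const Action& a : actions) {
            if ((s & a.pre) != a.pre) continue;     // preconditions unmet
            unsigned next = (s | a.add) & ~a.del;   // apply effects
            if (seen.insert(next).second) open.push({next, d + 1});
        }
    }
    return -1;
}
```

Swapping the queue for a priority queue ordered by cost-so-far plus a distance estimate turns this into the informed forward search from the previous reply.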
  15. STRIPS planning implementation?

    The F.E.A.R. planner absolutely works, but I've found it overkill for many problems. Designers generally know what they want to see behaviorally. Most STRIPS style AI planning work I've done has basically been using the system to emulate what an HTN planner represents more explicitly. You might want to look at simple HTN planning or Halo 3 style behavior trees instead. Both seem like better ways to directly encode design requirements.
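The "represents more explicitly" point is that an HTN writes the decomposition down directly: a compound task is just an ordered list of subtasks. A toy sketch, with all task names made up:

```cpp
#include <string>
#include <vector>

// A task with no subtasks is primitive (directly executable); otherwise
// it is compound and decomposes into its subtasks in order.
struct Task {
    std::string name;
    std::vector<Task> subtasks;
};

// Flatten a task tree into the ordered primitive actions - the "plan".
void Decompose(const Task& t, std::vector<std::string>& out) {
    if (t.subtasks.empty()) { out.push_back(t.name); return; }
    for (const Task& sub : t.subtasks) Decompose(sub, out);
}
```

Where a STRIPS planner would have to rediscover "move to cover, then shoot" from action preconditions, the HTN author states that ordering outright, which maps closely to how designers describe behavior.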