Ralph Trickey

Community Reputation: 230 Neutral

About Ralph Trickey

  1. Ralph Trickey

    Smarter AI pathfinding.

      Working like a charm, but I'd like to have more influence over when it should stand in line and when it should circle around. To start with, the multiplier can be derived from the creature's current health. Maybe even switch to another enemy at appropriate times, but I guess the latter should be handled by my FSM. I do something like that in my game: different types of units have different weights, and the overall targets are set by a higher-level AI.
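A minimal sketch of the weighted target-selection idea described above (the names, scoring formula, and constants are hypothetical, not from any particular game): each candidate enemy gets a score from a per-unit-type weight, scaled by the attacker's current health so that a wounded unit discounts distant targets more steeply.

```cpp
#include <cstddef>
#include <vector>

struct Enemy {
    float typeWeight;   // how attractive this unit type is as a target
    float distance;     // distance from the attacker
};

// Score one candidate: closer and higher-weight targets score higher.
// healthFraction (0..1) scales aggression: a wounded attacker penalizes
// distance more heavily, so it prefers nearby, safer targets.
float targetScore(const Enemy& e, float healthFraction) {
    return e.typeWeight * healthFraction - e.distance * (2.0f - healthFraction);
}

// Pick the index of the best-scoring enemy, or -1 if the list is empty.
int pickTarget(const std::vector<Enemy>& enemies, float healthFraction) {
    int best = -1;
    float bestScore = 0.0f;
    for (std::size_t i = 0; i < enemies.size(); ++i) {
        float s = targetScore(enemies[i], healthFraction);
        if (best < 0 || s > bestScore) { best = (int)i; bestScore = s; }
    }
    return best;
}
```

A higher-level AI could bias `typeWeight` per objective rather than keeping it fixed per unit type.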
  2. Apologies if this is the wrong group for this question. The subject says it all. Graphics performance isn't critical; I plan to use programming and not the editor for most of the graphics. I'm comfortable with C++, C#, and GPU basics, so that isn't an issue. I'd rather use a 3D engine for performance on mobile, allowing quicker scrolling, resizing, etc. Let me know if I'm missing something, but I'm assuming that mobile GPUs and libraries are not optimized for bit-by-bit manipulation like GDI+. Am I missing anything? Are there tools out there that will target PC/iPad/Android using C++?

     Unreal pros: Source code is available. They have an asset store; it's sparse, but I suspect it will grow quickly.
     Unreal cons: They are just switching models into more of a mass-market instead of a boutique business. I haven't yet looked into how Unreal's GUI layer handles resizable screens, etc.
     Unity pros: Cheaper total cost. Bigger asset store. More developers using it.
     Unity cons: I have been looking into using a Unity graphics layer with a C++ layer for the game logic. Transitioning between the layers through a C interface will be awkward, debugging could be more challenging because of it, and performance could be slower. More expensive up-front cost. Mono is several versions behind, and its GC could be slow.
  3. Ralph Trickey

    UE4, CRYENGINE, and UNITY 5: Will it work?

    I'm 99% sure that there will be a free Unity 5 option. You should be able to send Unity $0 to preorder it and get access to Unity 4 until it's released ;)
  4. Is there any better way to do this, any input on which way is more likely to work, or an entirely new approach, for that matter? I'm looking at designing a simple, procedurally generated, random 2D outdoor map using hexes, a bit stylized like a board-game map. Also turn-based. Ideally, I would like to be able to define the hexes with 'overlays' of textures: overlay 1 might be ground, overlay 2 might be some trees in the NW corner, overlay 3 might be a road from N to S, overlay 4 might be a river from the SE to the W, etc. I want to use a texture atlas for each layer and modify the UV values to pick a different part of that atlas for the specific graphic. In DOS/Windows, the way to do this is to take over the entire screen and paint the first layer, then the second, then the third, and so on, sprite by sprite, pixel by pixel. That works great on a PC, but I'd really like to take advantage of the GPU for smooth panning and zooming (plus some 3D decals, etc.). One way would be to do this the DOS way: paint into a texture atlas (or several atlases), use that on the hexes, and let the GPU do its magic. That seems the fastest for drawing, but it is more complex. A different way would be to give the layers slightly different depths (which would limit how closely the camera could zoom, but that might be OK since I plan to limit the camera). I haven't seen any way to do this without many meshes/objects (one per layer per hex). This seems simpler, but I've got concerns about how well it would work. I'll probably try #2 first since it is simpler (and I like simple ;) But I hope someone else has a better idea or input on whether that's going to hit issues. Thanks in advance.
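The atlas-indexing step common to both options above can be sketched like this (a hypothetical square atlas; the names are mine, not from any engine): given a tile index, compute the UV sub-rectangle for that atlas cell, so one quad per layer can select its graphic just by changing its UVs.

```cpp
struct UVRect { float u0, v0, u1, v1; };

// Map a tile index into a square texture atlas of atlasSize x atlasSize
// cells (e.g. a 4x4 atlas holds 16 tiles), returning the UV rectangle
// for that cell. Index 0 is the top-left cell, counting row by row.
UVRect atlasUV(int tileIndex, int atlasSize) {
    float cell = 1.0f / atlasSize;
    int col = tileIndex % atlasSize;
    int row = tileIndex / atlasSize;
    return { col * cell, row * cell, (col + 1) * cell, (row + 1) * cell };
}
```

In practice you would also inset the rectangle by half a texel to avoid bleeding between neighboring atlas cells when mipmapping or filtering.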
  5. Ralph Trickey

    Video Games Podcasts I Listen To

    I like Drunken Gamer's Radio. They're more console-centric, but a heck of a lot of fun, although they've gone off the deep end recently. http://robotpanic.com Also, GiantBombcast is sometimes entertaining, but can be a bit lengthy. http://giantbomb.com Retronauts is what it sounds like, at http://1up.com. I haven't tried the other ones there. I'll check out the last 2 you mentioned; I wasn't aware of them.
  6. I think you'd want to load your objects at the start of the game, level, or save, and not on every access. Since that's true, speed doesn't matter as much. I don't think that using SQL is going to save any code over XML. XML has the advantage that, as long as you plan ahead, it may be easier to upgrade, and it can let someone running 1.5.7 load the data from a 1.1 save game. I like XML because I can read and edit it by hand, and it has a flexible format, so I can skip fields easily, etc. XML can also help promote a modding community, since modders don't need special tools. The downside is that it takes up a lot more room on disk. In the end, I don't think it matters much. Use the one you're most comfortable with and plan ahead by figuring out how you're going to do versioning. Ralph
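As a hedged illustration of the versioning point (this file format is invented for the example, not taken from any real game): store the save-format version on the root element, so a newer loader can detect an old save and supply defaults for fields added since then instead of failing.

```xml
<!-- Hypothetical save file. The version attribute lets 1.5.7 code
     detect a 1.1 save and fill in defaults for newer fields. -->
<saveGame version="1.1">
  <player name="Ralph" gold="250"/>
  <!-- "stamina" was added in 1.5; a 1.1 save simply omits it, and the
       loader substitutes a default rather than rejecting the file. -->
</saveGame>
```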
  7. Ralph Trickey

    Is OOP better for developing games?

    Quote:Original post by Alpha_ProgDes At the very least its keywords seem to suggest that. But yes, it's not strict OOP. Geez, this brings back memories of comp.lang.c++ <grin>. I remember those arguments from way back. Can we please not argue semantics? C++, C#, Eiffel, Java, even VB all allow me to encapsulate objects and data in a way that helps me hide complexity. That's the important piece, not whether they're 'strict OO'. Ralph
  8. Ralph Trickey

    Path finding with minimal object instantiation

    You can actually do the tree as an array too. As you walk through the nodes, store the node that you came from in the parent array so that you can walk back through it. In C++, I've got the following structures. The code isn't as clear as the tree-based structure, but it does no allocations after the initial one. In my case, it's a static variable, so it's initialized at startup. It does take up more space, but it does no allocations and is blazing fast.

        bool CL[H_BORDER+1][V_BORDER+1];               // Closed list
        bool OL[H_BORDER+1][V_BORDER+1];               // Open list
        unsigned short G[H_BORDER+1][V_BORDER+1];      // G cost
        shortPoint parent[H_BORDER+1][V_BORDER+1];     // Parent - to find the path
        shortPoint priorityQueue[H_BORDER*V_BORDER+4]; // Heap storage

    I can't find where I originally saw this idea, but it shouldn't be difficult to program, although getting all the details right is a pain.
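The walk-back step described above can be sketched in isolation like this (the grid sizes, `Point` alias, and sentinel convention are chosen for illustration; the original uses fixed C arrays instead): once the search reaches the goal, the path is recovered by following `parent` entries back to the start, with no allocation beyond the result vector.

```cpp
#include <utility>
#include <vector>

using Point = std::pair<int, int>;  // (x, y)

// Recover the path from goal back to start by walking the parent grid.
// parent[y][x] holds the cell the search came from when it reached
// (x, y); the start cell points at itself as a sentinel.
std::vector<Point> walkBack(const std::vector<std::vector<Point>>& parent,
                            Point start, Point goal) {
    std::vector<Point> path;
    Point cur = goal;
    while (cur != start) {
        path.push_back(cur);
        cur = parent[cur.second][cur.first];  // note the (x, y) indexing
    }
    path.push_back(start);
    return path;  // goal-to-start order; reverse if the caller wants start-first
}
```

With the fixed arrays from the post, the same loop works unchanged; only the indexing expression differs.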
  9. Ralph Trickey

    Many-Core processors -- AI application

    Quote:Original post by WeirdoFu This may sound kind of stupid, and kind of out of place, but all this talk about L1 cache started to make me wonder. Is it even possible to specify where to put a specific block of code? ...

    When you create a thread on the Win32 platform, you can specify the 'processor affinity', or which core you want the code to run on. I assume that other platforms and OSes have this functionality as well.

    Quote:Original post by WeirdoFu All this talk of VMs reminds me of an ongoing argument/debate I've had with some co-workers. I say ongoing because we've had them on and off over the past 2 years. (One of them is an AI programmer who has been working on consoles for most of his career, with a lot of experience in asm, while the other is about as knowledgeable about C++ as you can get without being on the standards committee.) They are generally allergic to VMs for a variety of reasons. First, performance. Garbage collection is usually a big issue, as there are inherent overheads and it can be rather cumbersome at times. I think their general thought is, "programmers who use C++ and don't know how to properly manage memory have no place in the gaming industry." Also, the whole purpose of a VM is to enable scripting. By enabling scripting, which usually means programming by scripters or producers, you expose yourself to a whole new set of optimization issues. If you implemented, say, A* in code, you can optimize a lot of its behaviors and control a lot of performance-related issues. If you implement it in script, and it's implemented by someone who doesn't care, you've just lost a chunk of performance there, no matter how optimized your VM is, which just becomes another layer of overhead. (I know A* is a bad example, but it was the first thing that came to mind.) Also, optimizing the bytecode that scripts get compiled to is another possible headache. Because, essentially, you're almost trying to do what the C++ compiler may already do for you, just reinventing the wheel in a slightly different form. Not to mention that if something goes wrong with your optimization and creates critical but hard-to-reproduce bugs, that becomes another headache in itself.

    I'm not sure I understand this argument; there's a confusion of items here. VMs are NOT built to enable scripting; they're built mainly to enable cross-platform work, as far as I know. Scripting languages are not compiled, and are going to have severe performance penalties. This has nothing to do with memory management or any other issues. They do allow for rapid prototyping and short development cycles, and are useful for some things like level design. There are some initiatives, like the DLR, that hope to cut down on the performance limitations.

    When talking about VMs, let's break it down a little and look at Managed C++ and C++. Memory management is one example: C++ has it up front, and I believe most people use reference-counted classes to take care of it. Managed C++ has it as part of the library, which takes care of it on a background thread. To use either one, you need to understand what the internals are doing. For every deallocation, you're going to take a CPU hit. For C++ it's a small, consistent hit; for Managed C++ there is no immediate hit, because it happens later when the background thread runs. For either language, you should avoid allocating and deallocating in loops when possible.

    For either language, you're going to be going through an optimizer to compile the code. Managed C++ does this at load time; C++ does it before it ships. On the PC platform, that means Managed C++ can take advantage of the actual CPU type, while C++ has to compile to the lowest common denominator. It's possible that Managed C++ will be slower on some PCs, and there is always the possibility of a processor-specific bug being found, although I haven't heard of any. In my limited tests, Managed C++ was faster than the regular MS version of C++ (ignoring memory management). That's probably because the normal version had to compile assuming a late-model PIII, and the .NET version could take advantage of some newer features.

    Quote:Original post by WeirdoFu It really doesn't matter if it was 25 years ago, or 2 years ago, or 2 months ago. Programming is still programming. There will always be good programmers and bad programmers, and those who are just godly at what they do. One thing has not changed in the past 5-10 years, though. The simple fact is, a lot of games developed for the PC bank on increases in average hardware specs to cover the bloat that was injected during development. Then people start to forget why virtual machines were invented in the first place. It was to have code that runs cross-platform without that much effort from the programmer. But if you're developing for PC, then there's no multi-platform involved. I remember when Java first came out in the mid-90s. It wasn't a viable solution for ANYTHING that required any form of critical performance. Why is Java so popular now? Because all that processor performance we have can mask over its inherent inefficiency. Why did people start using VMs and running scripting languages for games? Partially because they could. There's no performance gain there. You don't add any more features to a game than the simple fact that any yahoo can now pick up the script and mess with it. I feel that you can probably do more with AI just with those processor cycles you get back from NOT using a VM. (Sorry, I'm kind of allergic to VMs too.)

    I'm not sure why the swing back to VMs happened. (Yes, back; languages like Lisp and VMs like the UCSD p-System have been around for many decades, as far back as the Apple II, and were very usable.) I think it's just an artifact of the fact that Sun/Java wanted to be multi-platform, and MS/.NET wanted to have one compiler back end instead of many, and chose to do that with a VM. The one potential advantage I see is that it may become possible to write code that recognizes you've got 16 additional non-x86 processors it can share with the graphics system, and compile some of the code for them. I don't know if that will ever happen, but those possibilities are starting to open up.

    Programming IS changing, BTW. MS is looking heavily at functional programming. Languages like C# now offer things like lambda expressions and LINQ. Both of these should help me write code faster. Tying this back to the title, MS is pushing functional languages and constructs, and these have the potential to work better with multiple CPUs. They are also very interested in asymmetric multiprocessing, because that has a large impact on OS design and their own programs. My 2c, Ralph
  10. There are a couple of other possible refinements. One is to determine the approximate square that you want to end up in, look at the neighbors within a certain radius, assign a weight to each possible square, and then do the pathfinding check to that square. That way you can easily weight the squares with good defensive or offensive terrain, and also weight the squares with units already in them. Unless you do the pathfinding for each possible square it's going to be suboptimal, but it can give reasonable results. If you've got a lot of unreachable squares then it may not work, but... For the archers, you can either use this approach with more weight given to squares on the perimeter of the objective, or (depending on the game) set them up just like the other units, but when moving, have them stop the moment they get into range. BTW, 20 seconds for a 10x10 grid is pretty slow; I don't know if it's the ActionScript or how A* was implemented. For the example shown, you should have no more than about 20 items max that are on the priority queue (not heap) or have been evaluated. Since you're using a priority queue, the answer is going to be the shortest path (depending on how you define shortest). I believe the worst case would be if the blocking squares were surrounding the green square, and even that should be only about 100 evaluations. My 2c, Ralph
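The destination-weighting refinement above can be sketched as follows (the scoring formula, field names, and penalty constant are all hypothetical): score each candidate square near the objective by terrain value minus distance, heavily penalize occupied squares, then run pathfinding only to the winner.

```cpp
#include <cstddef>
#include <cstdlib>
#include <vector>

struct Square {
    int x, y;
    float terrainBonus;  // defensive/offensive value of the terrain
    bool occupied;       // another unit already stands here
};

// Score a candidate destination near the objective: good terrain helps,
// distance from the objective hurts, and an existing occupant hurts a lot.
float squareScore(const Square& s, int objX, int objY) {
    float dist = (float)(std::abs(s.x - objX) + std::abs(s.y - objY));
    float score = s.terrainBonus - dist;
    if (s.occupied) score -= 100.0f;  // strongly discourage stacking
    return score;
}

// Choose the best square from the candidate list (already filtered to
// some radius around the objective); returns -1 if the list is empty.
int bestSquare(const std::vector<Square>& candidates, int objX, int objY) {
    int best = -1;
    float bestScore = 0.0f;
    for (std::size_t i = 0; i < candidates.size(); ++i) {
        float s = squareScore(candidates[i], objX, objY);
        if (best < 0 || s > bestScore) { best = (int)i; bestScore = s; }
    }
    return best;
}
```

For archers, the same function works with the sign of the distance term flipped near the objective, so perimeter squares outrank the center.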
  11. Ralph Trickey

    AI-assisted game balancing

    Quote:Original post by Steadtler Quote:Original post by ToohrVyk The problem is that I want to be able to change the rules a lot in order to find the best balance, and I'd rather not have to generate a new heuristic every single time. Implementation detail... that's what software engineering is for! But my degree is in Computer Science; I'm not an engineer! If you just want to check, but don't care about how to always play the best game, the problem space sounds like it's small enough that you might be able to use Monte Carlo: run 10,000 games with random counters and pick the best 10 and eyeball them for a pattern, or pick the best 100 and see which counters aren't used. Ralph
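The Monte Carlo suggestion can be sketched like this (the "game", its single tunable parameter, and the scoring are placeholders; a real balancer would run a full simulated match per trial): draw many random rule settings, score each, and keep the top few for a designer to eyeball.

```cpp
#include <algorithm>
#include <cstdlib>
#include <random>
#include <vector>

struct Trial {
    int counterStrength;  // the randomized rule parameter being tuned
    double score;         // outcome of the simulated game
};

// Placeholder "game": scores how close the parameter is to a sweet spot.
// In a real balancer this would simulate an entire match under the rules.
double simulateGame(int counterStrength) {
    return -std::abs(counterStrength - 42);
}

// Run n random trials and return the best k, sorted by score descending,
// so a designer can inspect the winners for a pattern.
std::vector<Trial> monteCarloBalance(int n, int k, unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_int_distribution<int> dist(0, 100);
    std::vector<Trial> trials;
    for (int i = 0; i < n; ++i) {
        int c = dist(rng);
        trials.push_back({c, simulateGame(c)});
    }
    std::sort(trials.begin(), trials.end(),
              [](const Trial& a, const Trial& b) { return a.score > b.score; });
    if ((int)trials.size() > k) trials.resize(k);
    return trials;
}
```

Looking at which parameter values never appear in the top trials is often as informative as the winners themselves, as the post suggests.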
  12. Ralph Trickey

    CPU consumption in games

    Quote:Original post by md_lasalle Hi all! I've been testing my game on different machines, and I noticed that the CPU consumption was different from machine to machine. Why do some machines get 100% CPU while others have 50% and some others have 12-20%? I tried playing with my main loop and by testing on every machine. Here is my code:

        MSG msg;
        while (!done) {
            while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
                TranslateMessage(&msg);
                DispatchMessage(&msg);
                if (msg.message == WM_QUIT)
                    return 0;
            }
        }

    If there's anything wrong, or an unsafe way of doing things, please tell me. I tried adding a Sleep(1) before the ZeroMemory; it caused the consumption to go down quite a bit, but on one of the computers I had lag when the keyboard or mouse was in use, as if a WM_MESSAGE would make it lag. Thanks for any light on this.

    If it's a turn-based game, you should probably be using GetMessage; it's friendlier to laptops. You might also consider it for pause loops if they aren't doing anything.

        MSG msg;
        while (GetMessage(&msg, NULL, 0, 0)) {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        return 0;  // GetMessage returns 0 on WM_QUIT
  13. Ralph Trickey

    Software Engineering for Games

    Quote:Original post by FlashChump Quote:Original post by Ralph Trickey How many people in the project (programmers and artists)? How many are remote? Is this project similar to a project that you've done before? Ralph I'm the only developer. One artist, one writer, and perhaps a designer. All local. The project has similar aspects, but the way everything comes together is always a new unknown. The good news is that you're a small enough shop that almost anything will work. I'd recommend not getting too tied up with a specific methodology. Any of the Agile ones should help; just remember that doesn't mean you should 'cowboy' it, just that the design work may be spread out more instead of being strictly up front. You may want to look at Test-Driven Development; I've done it on some smaller projects, and it has some promise. You need to take the time to write the design document. It will help the writer and artist get started earlier, and give you direction to make sure you don't go too far off track. You can either include enough in the design document for them or include a prototype. I'd probably try to get an early working version, so that they have something to work from, but I wouldn't rush it so much that I'd have to throw it away because the class design was screwy. Risk management is done by determining the riskiest parts and doing them first; hopefully, near the end you will be down to implementing things that you've done before and that are low-risk, so you can estimate them accurately. For class design, etc., I'm a huge fan of Enterprise Architect (www.SparxSystems.com). It's an excellent tool for doing prototyping like that and for determining and documenting the roles and responsibilities of the classes. It's the only reasonably priced UML modeling tool that I've seen. It's also got some tools for prototyping UIs simply; I've found that otherwise, people get too tied up with trivia. Ralph
  14. Ralph Trickey

    Performance : Ints vs. Floats?

    Quote:Original post by mattnewport Figures for instruction latencies and throughput are often published in the processor documentation. You can find latency and throughput information for a variety of Intel processors in the IA-32 Intel Architecture Optimization Reference Manual. Thanks. That should help a lot once I get through it.<g>.
  15. Ralph Trickey

    Performance : Ints vs. Floats?

    Awesome, thanks guys. I'm doing mostly additions, so a worst-case factor of 4 for * and 11 for / I can live with. I'm probably losing more than that on average by having to do * 100 / 100 around the calculations. Right now, everything is integral, so I doubt I'll hit the worst end of the multiplication at least. I'm not as sure about the divisions, but they're pretty simple. I'll definitely evaluate them again, though, to see if I can store the data so as to make them multiplications instead. I think I can in a lot of cases, because the number is often a fraction between 1 and 100. I'll be programming in C++ or C# on a PC, so alignment shouldn't be an issue. I don't expect to be doing any GPU programming in the near future, so I'll take a look at that when I need to. Thanks, Ralph
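The "* 100 / 100" bookkeeping described above is essentially fixed-point arithmetic with a scale of 100; a minimal sketch of that convention (the names and the choice of scale are mine):

```cpp
// Fixed-point with a scale of 100: the int value 250 represents 2.50.
// Addition and subtraction work directly on the scaled values; only
// multiplication and division need rescaling.
const int SCALE = 100;

int fpFromWhole(int whole) { return whole * SCALE; }

// (a/SCALE) * (b/SCALE) = a*b / SCALE^2, so divide one SCALE back out.
int fpMul(int a, int b) { return a * b / SCALE; }

// (a/SCALE) / (b/SCALE) = a/b, so multiply SCALE back in before dividing.
int fpDiv(int a, int b) { return a * SCALE / b; }
```

Storing reciprocals pre-scaled (as the post considers) turns each `fpDiv` into an `fpMul`, trading a division for a multiplication at the cost of one extra stored value.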