Teknofreek

Member
  • Content Count

    471
  • Joined

  • Last visited

Community Reputation

331 Neutral

About Teknofreek

  • Rank
    Member
  1. Quote:Original post by Washu
     Ownership. In this case it would seem that the best alternative would be to have the one who allocates the pen be the one to dispose of it, as it's the only one who knows if the pen is disposable or not.

     Hmm, I suppose I could try to rework things that way. However, that still feels like an error-prone approach. Is there no idiom in the .NET world to help with cases like this? In the C++ world, there are many times when you wish to re-use objects. A classic example would be an asset manager that allows you to re-use textures, models, etc. Usually you'd track these shared resources with smart pointers, or pass out handles and keep an internal reference count. I feel like I'm missing something, and that there should be a better way to handle IDisposable objects in a similar fashion.
  2. Quote:Original post by Washu
     Attempting to dispose of a builtin pen will cause an ArgumentException to be thrown (as they are considered immutable (CantChangeImmutableObjects)).

     Precisely. That's my problem. I guess I should have phrased my question a bit better. The question is: if you write a class that holds a disposable object, like a Pen, how do you know when you should call Dispose on it and when you shouldn't? Also, just so we don't get hung up on the details, I've used a Pen and a Brush in my example, but this same situation could occur with any classes, even ones you create yourself. Essentially, I don't see how you can handle disposable objects correctly if they can be shared. And yet, sometimes sharing disposable objects (e.g. commonly used pens and brushes) is quite desirable.

     Quote:Original post by Washu
     I should note that you are not implementing dispose properly. A proper implementation will typically look like:

     Yeah, I know. I left out the finalizer and common dispose method because my example code already seemed a bit long. But thanks for mentioning that. I don't want my example to teach any bad practices ;)
  3. I'm writing a class that will hold a pen and a brush as member variables. So, I want to implement IDisposable so that I can dispose of these member objects properly. However, I'm not sure how to handle this correctly, since these objects may have been newed by the client or they may be one of the stock pens or brushes. Here's a simplified example of what's happening:

     public class Square : IDisposable
     {
         public Pen Outline
         {
             get { return m_outline; }
             set { m_outline = value; }
         }

         public Brush Fill
         {
             get { return m_fill; }
             set { m_fill = value; }
         }

         public Square( float x, float y, float size )
         {
             m_x = x;
             m_y = y;
             m_size = size;
         }

         private float m_x;
         private float m_y;
         private float m_size;
         private Pen m_outline = null;
         private Brush m_fill = null;
         private bool m_disposed = false;

         public void Draw( Graphics gfx )
         {
             if( m_fill != null )
             {
                 gfx.FillRectangle( m_fill, m_x, m_y, m_size, m_size );
             }
             if( m_outline != null )
             {
                 gfx.DrawRectangle( m_outline, m_x, m_y, m_size, m_size );
             }
         }

         public void Dispose()
         {
             if( !m_disposed )
             {
                 if( m_outline != null ) { m_outline.Dispose(); }
                 if( m_fill != null ) { m_fill.Dispose(); }
                 m_disposed = true;
             }
             GC.SuppressFinalize( this );
         }
     }

     // later on...
     public partial class MyControl : Control
     {
         protected override void OnPaint( PaintEventArgs e )
         {
             // This will work fine
             using( Square square1 = new Square( 0, 0, 100 ) )
             {
                 square1.Outline = new Pen( Color.Red, 1.0f );
                 square1.Fill = new SolidBrush( Color.Yellow );
                 square1.Draw( e.Graphics );
             }

             // This will throw an exception
             using( Square square2 = new Square( 0, 150, 100 ) )
             {
                 square2.Outline = Pens.Red;
                 square2.Fill = Brushes.Yellow;
                 square2.Draw( e.Graphics );
             }
         }
     }

     Is there a common solution to cases like this, where you may or may not share a disposable object?
  4. Quote:Original post by AAAP
     I thought boost::function was an implementation of a functor?

     To be honest, I've never used boost::function, so I can't say for sure. However, I thought it was a way to wrap function pointers, pointers to member functions, or functors in such a manner that you could use them interchangeably. Either way, if you want to use different inputs to different functions, using functors seems the sanest and simplest way to me. Just initialize or update each command object any way you want. Then, when you iterate through and call them, just call Execute() or whatever. FWIW, I've used this approach before in situations like this, and it's worked great for me! -John
  5. Quote:Original post by AAAP
     so this puts me in the same situation that i was in before, essentially. there's only room for one "kind of function" in the command container. Any tips?

     I may be missing something, but I would prefer functors to function pointers. One of the main reasons is that you can de-couple the initialization from the calling. The upshot is that you can make all of the functor calls the same. For example, they could take no arguments and return a bool indicating success... or whatever. However, since each command is now a class, you can initialize it however you like. So, let's say some command needs to know where you clicked. You could pass the point into the constructor and store it off. However, the call would still be "bool Execute();". I think this approach is simpler than trying to support an arbitrary calling mechanism. Plus, it means you can do nifty things like derive commands and such :) -John
  6. Teknofreek

    STL Saves or Enslaves?

     If you find a problem with speed, it probably will have little to do with WHICH linked list you use (STL or your own). Instead, your speed problems will probably stem from the typical use of a list, where you dynamically allocate objects and add them to the list. If you need to speed things up, you should consider whether there's a strategy that would allow you to allocate some, most, or all of your objects at once. Even if objects are created and destroyed frequently, you could still use a pooling scheme. With this in hand, it probably doesn't matter all that much whether you put the active ones at the front of a sorted array, point to them in a linked list, etc. Hope that helps! -John
  7. The way to avoid these problems is to de-couple your components. As long as your ModelManager needs to know about the Renderer, or vice versa, there won't be a simple solution. You'll either need a global access point or a way to pass the renderer, possibly through numerous layers, to the parts that need it. Neither of those situations is desirable. And, in fact, that's one of the main problems many people have with Singletons: they make it easy to create dependencies between sub-systems.

     Instead, consider a design where the ModelManager doesn't need to know about the Renderer, and vice versa. If you had a layer above these two systems, you could potentially alleviate this problem. For example, if there were a RenderManager that could get a list of meshes to be drawn from the ModelManager and then pass that list to the Renderer, then neither of those would need to know about the other. The ModelManager would simply be responsible for creating a list of meshes, and the Renderer would simply render a list of meshes. -John
  8. I haven't tried this, but I believe what you want to do is something like this:

     struct CObj { int zug; };

     template <typename T>
     void allocate( T*& obj, int count )
     {
         obj = static_cast<T*>( malloc( sizeof( T ) * count ) );
     }

     int main()
     {
         CObj* oPtr = NULL;
         allocate( oPtr, 1 );
         oPtr->zug = 1;
         free( oPtr );
     }

     Basically, take the pointer by non-const reference (T*&) so the function can assign to it. T will then be deduced as CObj, so sizeof( T ) gives the size of the object rather than the size of a pointer, and the result of malloc needs a cast in C++. Like I said, I haven't tried it, but I believe this will work. -John
  9. Quote:Original post by Sneftel
     I have written exporters using the Max SDK, and using MaxScript. I would not recommend the latter. MaxScript is a bad, bad language. It's okay for quick one-off automation scripts; it is not suitable for long-term maintained software.

     I've also used both. I'm curious why you think MaxScript is so bad. I've actually had a lot of success with it. In fact, for long-term maintenance, it's a bit nicer in many ways, especially since you don't HAVE to re-write your MaxScripts every release, whereas with most releases you have to, at the very least, re-compile all of your plugins.

     I'd also toss a third suggestion into the mix: use MaxScript, and if you find things that take too long, or run into areas where you can't access some data, then write a simple plugin that exposes new functionality to MaxScript. On the last project where I did a lot of heavy Max tool creation, I used this hybrid approach a lot. It's really nice. You can keep all of your high-level logic and functionality in MaxScript, and then call your custom functions to do the heavy lifting on the SDK side.

     With all that being said, if this is for a commercial game, then I'd recommend just licensing RAD Game Tools' Granny exporter and being done with it =) -John
  10. Teknofreek

    Rhythm Game

     I haven't played these games much, but from the little I have played, I seem to remember the patterns of arrows being the same every time. If that's the case, it makes me wonder whether the arrows, and the timing of when to step on each arrow, aren't simply authored by someone. In other words, I think it might not all be auto-magically detected by the program. Imagine an editor where you could scrub time and look at a waveform and then go, "Right, here's a nice beat at 4.57 seconds. I'll put a left arrow there." Whether or not this is the way those games work, if you're going to be using pre-defined music tracks, then this is probably an easy way to get things working well without a lot of technical trickery and guesswork. Just an idea... -John
  11. Teknofreek

    Just a rant...

     I'm with ZQJ. It sounds like there's a decent chance that you're corrupting memory. If that's the problem, then you could spend tons of time chasing down the symptom instead of the cause. I'd make sure the memory you're accessing looks valid. -John
  12. Teknofreek

    std/stl is the root to all evil

     Quote:Original post by ProgramMax
     That is surprisingly close to what I'm saying. Not only does the C++ Standard recommend using an alternative to vector if you want to push_back, but Sutter even suggests never even looking at vector in such a case. He gives the only exception of contiguous memory.

     That's a fair assessment. And I think that's a key thing people have left out in this discussion. Contiguous memory is often quite desirable for performance reasons.

     First, let's concede that appending (I never liked the STL names) items to an array is not needed when that array is not dynamic. I think that's obvious, but I'm stating it here since this discussion has seemed to dwell on whether or not each use case could be re-factored to use a static array instead of a dynamic one. Could some of the examples given use static arrays, since they are sized exactly once and could be pre-computed offline? Sure. But let's not dwell on that. Instead, let's focus on cases where a collection of items will be re-sized a number of times.

     I'd like to state the obvious again by pointing out that no container is perfect. When choosing a container, it basically comes down to balancing the pros and cons of each. When choosing between dynamic arrays and lists, I believe the following are some important questions to ask yourself:

     1. Do I need random access to items?
     2. How often am I accessing items?
     3. How often am I accessing all of the items? (e.g. in a loop)
     4. What is my usage pattern for accessing items?
     5. How often am I adding/removing items?
     6. Do I care in what order the items are added relative to others?
     7. How often do I need to re-order items?

     Simply put, when you loop through all of the items in a container a lot more often than you add, remove, or re-order items in it, a dynamic array is *usually* the better choice. The reason is that accessing contiguous memory is much faster than shuffling all over memory.
     When you shuffle around memory, you are likely to incur a cache miss. When this happens, you effectively stall until that piece of memory is loaded. This has become even more important with the latest batch of consoles, where the number of operations per clock has increased greatly but memory speed hasn't increased as much. What that means is that when you incur a cache miss, you are missing out on a lot of computation that could be going on.

     Let's consider the example of a collection of players again. The frequency of players joining or leaving the game is extremely low; it could happen as infrequently as once every 30 minutes. However, the frequency of accessing the collection of players is tremendous! It'll happen at least once per frame, and quite likely many times per frame. In this case, a dynamic array is the better choice.

     This usage pattern happens a lot in games. There are many collections that change infrequently but are iterated through often: collections of visible objects in the view frustum, active banks of shader constants, the set of dynamic lights that are active in the world, and so on. Basically, anything that will remain fixed for many frames, or even change just once per frame, but will be accessed often, is a good candidate for a re-sizeable array.

     Now, I also sense that there's contention about why you'd want to add items to the end of an array. The answer is simple. You're usually doing one of two things:

     1. Filling up the array with items sequentially, in a pre-conceived order.
     2. Adding items whose order you don't care about; since adding to the end is the fastest method, you choose that.

     In fact, I'd contend that if you often want to add items to some place other than the end of the array, that's the point at which you should consider using another container. When it comes to choosing where to add an item to an array, the end is always the best choice if possible.
     Therefore, push_back is exactly what you want to have and use if you both:

     1. Want to have a re-sizeable array.
     2. Want to add items to that array in the fastest way possible.

     Well, perhaps this will help shed some light on why you'd want this. At the very least, I hope it can spark some more constructive banter ;-) -John
  13. Teknofreek

    Colored specular or grayscale??

     Quote:Original post by Dirge
     Can anyone else rationalize it or perhaps offer a counter-point?

     I can rationalize it. It's very easy to get caught up in CG terms. The first thing you have to do is take a step back and think about what you're doing. For example, in the real world there's no such thing as specular. The Blinn/Phong specular term is a gross approximation of the way that light reflects off of a surface. Specular highlights, in particular, vaguely model the way that lights, which are infinitely brighter than the objects around them, are reflected off of a surface. And, since lights are often round-ish in shape, and since their energy becomes diffuse as it travels through the air, the round shape is a close enough approximation to work.

     Another thing to realize is that not all surfaces reflect light in the same way. For example, metal objects tend to tint the color of the light they reflect. The bottom line is that the way light is reflected, scattered, etc. by a surface is much more complex than the equations we commonly use to model it. The same can be said of light transport as well. So, when you're basically faking what happens in the real world, there's no harm in straying from the simple CG methods you're used to. If it makes it look better, then do it!

     Another point I'd like to add is that we often define surfaces at a very high level. Take your screenshots, for example. You have a sphere with a texture map of the earth on it. The way that clouds scatter light is different from the way light is reflected off of water, which is different from the way light interacts with sand, rock, grass, and everything else you see on the surface. And all of this is influenced by the scattering of light through the atmosphere. Simply put, trying to use a simple lighting model, with a fake specular color, to represent the entire world has little to do with reality. The same can be said for most surfaces we display.
     Think about the metal on the side of a tank, for example. The metal will most likely have dust on it, rust in some places, grime and grease around areas, and so on. The way each of these materials interacts with light in the real world is quite different. However, we tend to slap a texture on it for color, and then we get fancy by allowing the amount of specular to vary over the surface. As I'm sure you can see by now, this is a pretty naive way to define the surface(s) we're actually depicting.

     On a final note, it's also important to realize that we're working in an artistic medium. When they shoot movies, they often go to great lengths to alter the lighting of the scenes they shoot. They place bounce cards for light and use many other tricks to give a scene the look they want. Then they go into post and alter it further. Often, you want to depict things in a better-than-real way. It's simply what we do.

     Quote:Original post by Dirge
     To use an entire texture just for specular immediately brings to mind additional SetTexture calls, extra shader texture samples, and obviously the additional storage costs. And yet even with these downsides, I'm very tempted to do this myself.

     It's all a balance. 100 million boring, flat-lit polygons won't look nearly as good as 25 million polys that look great. Worry about performance when it becomes a problem. And when you do, figure out which things you should cut down to achieve your goal. This often varies between games. For example, if you're doing a DOOM-type game, where you're mostly bound inside tight corridors, then the best thing you can do is make sure your culling is as good as possible. After all, if you're only drawing 50 objects, it doesn't matter if each of them uses 8 textures and 3 passes apiece. On the other hand, if you're drawing a jungle, you might want to focus on the best way to batch large numbers of objects, and so on. Well, I hope that was informative.
     As I said at the start, it's all too easy to get caught up in the way we're used to using CG. We're all guilty of it. Just remember to take a step back every so often and realize that our job really revolves around faking things ;) -John
  14. Teknofreek

    3ds Max Hardware shader plugin

     Quote:Original post by Guoshima
     I know, but in the book it seems like a DirectX 9 plugin or something, because in the material editor, the material name is DirectX 9 Shader ..

     That plugin comes with Max. Since version 6, I believe. -John
  15. Teknofreek

    Gamebryo

     Quote:Original post by krum
     I used Gamebryo when it was still called NetImmerse while working on a prototype of a Harry Potter MMO for EA. That was a long time ago. Frankly, I found it a joy to work with, and in fact, if I had the chance, it would be my first choice for pretty much any job.

     I've worked with several middleware products, but I've never worked with Gamebryo. I'd be curious to hear what you liked and didn't like about it in more detail, if you don't mind =) -John