JPatrick

Member (GDNet+) · 42 posts · Community Reputation: 258 Neutral
  1. JPatrick

     New Year's Resolution

     Unfortunately, it appears that DBP10 is dramatically sooner than I thought it would be. There's no way I'll be able to get anything ready by March, and I don't relish the prospect of waiting another year. Perhaps it's an opportunity, though. I'm growing increasingly fond of SlimDX, and would enjoy no longer having to constantly second-guess every method and algorithm to avoid 360 performance pitfalls. If I punt XNA entirely, I rid myself of that baggage, plus make distribution much easier (unless they start packing the XNA runtimes into Windows Update, nobody will ever download them). This also has the distinct benefit of dodging the content pipeline, which, while an interesting concept, always felt too "magic" to me, and too tied into the IDE and project files. Being able to use a "plain" project is appealing.

     Also, from what I've been hearing, XBLIG is a bit of a black hole unless you have a total breakaway hit that strikes just the right chords, like Avatar Drop. Let's face it, a 4D game is pretty esoteric and difficult to even explain, much less play (most likely; we'll just have to see when it's done). I don't even own my own 360, and have no particular desire to own one, much less give my payment credentials to a service that can't be cancelled without jumping through frustrating support-call hoops. It's a shame to lose the huge 360 install base and the ability to just say "it's on XBLIG, check it out," but I think the pros outweigh the cons. Plus, making a SlimDX framework for the game will be something that's reusable later for other projects that I want to do after this.

     Which brings me around to the New Year's resolution part. I want to put this thing to bed within 2010, so that I can move on to other, less eccentric, projects. A playable concept demo should be enough to gauge interest, and could be expanded on from there if it ends up being worthwhile. Otherwise, at least I can say I did it, and move on without feeling like a complete quitter :/

     Almost forgot. Speaking of quitting, I've shelved the emulator project. Not due to any technical hurdles or anything; there was just no reason to continue. It was a wrapper of bsnes, a fantastic and extremely accurate SNES emulator. My goal was to add shader support and such, but recent developments on the emulator itself have rendered that goal redundant. I don't consider it a waste in the least, though. I learned a lot about SlimDX that will serve me well, and a good change of pace is always nice. I just need to focus completely on the 4D project in the coming months and see if I can at least post some progress before my GDNet+ expires.
  2. JPatrick

    New Approach

     I suppose taking a break from the editor actually did some good after all. I was constantly dreading working on it, and putting it off to do other things. Recently, however, I sat down and really thought about why that is. The main reason, I suppose, is obvious and hardly unique to me: GUI programming is complicated, but boring. There are a million details that have to be "just so" for a GUI-heavy app to be considered decent and presentable. Any particular detail may be simple in itself, but their sum can add up to a level of complexity that can quickly become pathological if not carefully architected first. I started to find myself buried under a pile of nuance and minutiae that was incredibly demotivating.

     The second reason was also pretty clear, but until now I was plugging my ears and humming a tune, ignoring it. My entire concept of a good editor was based upon experience using editors that were released for consumption, widely used, and officially supported products in their own right. This editor will only ever be used by me, is applicable only to this specific project, requires no support of any kind, and doesn't have a team of dedicated tool developers to make it their focus. Any effort spent making it into a "proper" product is wasted effort, distracting from the ultimate goal of making a game. The editor exists solely to ease the burden of producing levels, and nothing more.

     And that was it. My initial concept of "easing the burden" was by default a full-on GUI app like all the others I'd ever used. It was either that, or just writing a raw text file and throwing it at the engine; there is nothing in between. Or is there? I paused to consider the only other "editor" I was using at the time: the WPF designer. It had never occurred to me until recently, but in all the time that I've used it, I've NEVER clicked, dragged, instantiated, deleted, altered, moved, or otherwise manipulated a single thing in the designer window itself. Everything I ever did was done in the markup window, the designer window existing solely as feedback for my alterations. All that time, and I'd never considered my burden to be anything but sufficiently eased.

     GUI layout is plenty different from level layout, but in the end, my levels are going to be very simple arrangements of basic 4D shapes. Anything so complicated as to require delicate manipulation with a mouse-rich interface will be too complex for the player to comprehend and navigate in the game, to say nothing of too complex to render in real time on the 360. So, to drive this home, I plan to simplify my efforts significantly to a markup description of the scene, and a feedback window to view the results from different angles. While perhaps not as ideal as a proper editor, I feel the burden will be sufficiently eased to meet my needs, while freeing most of my architectural efforts to be used on the game itself. In other words, I'm too lazy and incompetent to produce a proper tool.
  3. JPatrick

    [SlimDX] and VS 2010 Beta 2

     I've been using SlimDX in beta 2 for a few days now. VizOne is right about supportedRuntime. Specifically, you can add a .config file for your app as follows:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <startup useLegacyV2RuntimeActivationPolicy="true">
            <supportedRuntime version="v4.0"/>
          </startup>
        </configuration>

     After dropping this in it started working as normal.
  4. JPatrick

    Crazy Corruption

     Everything was going pretty smoothly. The emulator core was wrapped up in a nice managed interface, the WPF UI was hosting a WinForms control that could be rendered to via SlimDX, XAudio2 output was going through SlimDX, etc. All the pieces were falling into place... except for the occasional startup crash. I was using VS 2010 beta 1, so I was just writing it off as some possible incompatibility between SlimDX and the beta framework. After doing a little refactoring, however, I was suddenly getting crashes 15-30 seconds in, and startup crashes much more frequently. I was making use of a few threads for the different components (one for the core itself, one for audio, with rendering and UI on the main thread). Intermittent crashes absolutely scream "synchronization problem," so naturally I started there. Reviewing my WorkerThread class carefully, though, I just couldn't see what might be wrong.

     Why not use the debugger? Unfortunately, the startup crashes seemed to kill the app even with the debugger attached, which is why I expected something external was going wrong at first. Oh how wrong I was, but that's getting ahead of myself. After refactoring, though, the crashes were happening later as well, not just during startup, so I decided to see if the debugger could catch them again. Fired it up, waited a little bit, and: FatalExecutionEngineError. That's... not good. Especially when that was only the error half the time. Other times I'd get a NullReferenceException from things that couldn't possibly be null, StackOverflowExceptions when the call stack couldn't possibly be that deep, a "this" reference that was the wrong type or even missing altogether, or maybe just a good old-fashioned AccessViolationException. It was absolute insanity. The managed heap was obviously becoming deeply corrupted somehow.

     The emulator core has a fibers library to implement coroutines for synchronizing all the various parts of the system, which is part of the reason I was interested in it. I've always been a fan of coroutines, and don't think they get enough play in general, but I digress. I was wary in the beginning of what effect that kind of thing might have on the CLR, but I'd seen articles around that illustrated how to use fibers with .NET, so I figured perhaps it wasn't a problem. The coroutine library, in addition to implementing them in assembly, also has support for just using the Windows fiber API directly. Just to make sure nothing was going wrong there, I switched it to use the Windows fibers version, but no joy. Content for the moment that the problem wasn't there, I turned back to SlimDX. I commented out the video code; still crashing. I commented out the audio code; still crashing. Updated to the latest version of the emulator source; still crashing. What the FRACK. Seriously. I was failing utterly at debugging. Maybe I'd have to use WinDbg or something and look at the core dump, but I'm just not that hardcore. Without any kind of clue as to what was REALLY wrong, I'd just have to shelve the whole thing and go back to working on the Marble4D editor. Giving up on it for the night, I tried to get some sleep (which REALLY sucks when you have a mysterious unresolved bug).

     In the morning I took a step back and tried to reason through it. I just KNEW it had to be something relating to fibers, since in the beginning I was unsure whether they would work while hosted by the CLR at all. Perhaps there was something to that. Wouldn't video and audio always have crashed, though, if that were the case, since the events are triggered by the core? Well, video refresh is triggered after the coroutine scheduler comes back to the main thread, and the way I was buffering up audio, I have the video refresh callback in the native interface also fire the audio event with all the samples buffered during that entire frame. If not those, then what... Then it hit me. In one of the obscure crashes, the "this" reference for my worker thread manager object was corrupted and listed as a DevicePolledEventArgs. INPUT! The emulator core waits as long as it can to poll input, to get the very latest state. The implication was that the polling was happening in a fiber, rather than on the main thread. The input request triggers a callback that crosses the managed barrier back into my app, which fires an input event with a DevicePolledEventArgs. One of the things I changed in refactoring was having the input event create a new one each time the event fires, rather than reusing the same one over and over. I was getting that strange feeling where you just KNOW the problem is there, even if you're not sure exactly why. I commented out the input callbacks, crossed my fingers, and ran it again. No crash. I ran it again and again and again and again, both from inside the IDE and from Explorer. Still nothing (yet).

     It would seem that fibers and other forms of "fake" threads interfere with the managed heap and the GC, at least in beta 1. This is a definite case in support of my personal programming mantra: a brain is the best debugger. The unfortunate implication is that I'll have to poll input before the frame starts, rather than waiting until the last possible moment, but them's the breaks. If anyone reads this and is an expert on fibers, I'd be very interested in your thoughts. Is calling back into CLR code from a native fiber known to be unsafe? Am I just doing something wrong? At any rate, once I have some kind of input up and running, I'll post some screenshots to reveal what system and emulator specifically I'm wrapping. I'll probably keep working on this until around the new year, then switch gears back to Marble4D with a fresh outlook. Hopefully I'll be able to get some kind of simple demo ready for DBP 10. It's definitely nice to crawl out from under all the limitations imposed by the 360 CLR for a while and use more features and APIs. Definitely learning a LOT from this, and SlimDX rules. :)
  5. Disappointed after my XNA module player idea went bust, I was itching for another side project to change things up and keep from getting burned out working on the same thing constantly. Most of the other ideas I had, while simpler than a 4D game to be sure, were full-fledged projects in their own right. I wanted something smaller and less architecturally taxing, so I could get charged up from brisk progress and tangible results. In addition, I wanted something that would exercise a different skillset than just another XNA project.

     I've been a fan of emulators for a long time, so when the notion got floated for a more Windows-centric fork of one of my favorite emulators, I had an idea. The emulator is written in native code, as most are. Unfortunately, it's been years since I've touched the stuff. However, there was a middle ground: C++/CLI. Using C++/CLI, the core could be wrapped into a CLI-compliant object model, thus facilitating the use of my more recent skillset. I'm no fan of C++ to be perfectly honest, but the simple fact is that it's ubiquitous, particularly in the games industry. So I figure it's best to maintain a moderate working knowledge of it, particularly since I had already invested the effort to learn it years ago.

     To its credit, though, C++/CLI does a fantastic job of interop between native and managed code. I was constantly googling the syntax to get things like events and properties working, but that was to be expected. Also to be expected was general rustiness all around. I was constantly forgetting things like the semicolon after the class definition, the ^ before reference types, proper includes, etc. Speaking of which, one thing I sure as hell don't miss is header files. They feel so clumsy and archaic compared to the proper module system I had grown accustomed to. Preprocessor macros are also a minefield of errors waiting to happen. That aside, it's still relatively painless to get a managed class up and running that can then be consumed by C#. I was constantly trying to break it with different combinations of managed/unmanaged calls and marshalling, but every reasonable scenario I could think of was working. I'm sure there are scenarios where things get nasty, but for now it's working great, even in x64 mode (which took registry hacks to get working in VC++ Express, but thankfully I found an automated script that handled that for me).

     So we'll see how it goes. I'm definitely enjoying this a lot as a fresh change of pace, and am finding it easier to work on it for longer. I'll discuss it in more detail and show pictures if and when it gets working. Ideas are still brewing for the 4D game, and in time I'll be able to attack it again from a fresh perspective.
  6. I suppose it depends somewhat on your scenario. In my case, targeting the 360 with XNA, I chose to attempt a lockless task pool for maximum possible utilization of all the cores. The interface is pretty standard, with the task component exposing a DoTasks method that takes a list of tasks and blocks until all tasks in the queue have been completed. Internally it uses a shared "next task" index consumed by all the threads, and incremented safely with Interlocked.CompareExchange. I used ManualResetEvents in an airlock pattern to stop the worker threads once all work items have been consumed (A closes, B opens, threads run tasks then block on A; B closes, A opens, threads run tasks then block on B). I get about as close to 100% CPU utilization as I think I can reasonably expect for my app, so I wrote up a journal post about it and included the source. If you're interested, you can find it here. I hope it helps.
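     To make the shape of that concrete, here is a minimal, hypothetical sketch of such an airlock-style pool. It is not the source linked above: the names are stand-ins, the 360-specific details are only noted in comments, and the bookkeeping is reduced to the essentials described in the reply.

        using System;
        using System.Collections.Generic;
        using System.Threading;

        // Hypothetical airlock-style task pool. Workers claim task indices with a
        // CompareExchange loop and alternate between two ManualResetEvent "gates".
        public sealed class TaskPool
        {
            private readonly ManualResetEvent _gateA = new ManualResetEvent(false);
            private readonly ManualResetEvent _gateB = new ManualResetEvent(false);
            private readonly ManualResetEvent _batchDone = new ManualResetEvent(false);
            private Action[] _tasks = new Action[0];
            private int _nextTask;          // shared "next task" index
            private int _workersRemaining;  // workers still draining the current batch
            private readonly int _workerCount;
            private bool _useGateA = true;  // which gate opens for the next batch

            public TaskPool(int workerCount)
            {
                _workerCount = workerCount;
                for (int i = 0; i < workerCount; i++)
                {
                    Thread t = new Thread(WorkerLoop);
                    t.IsBackground = true;
                    t.Start();
                    // On the 360 you would also pin each worker to a hardware thread
                    // from inside WorkerLoop (e.g. Thread.SetProcessorAffinity).
                }
            }

            // Blocks until every task in the list has been executed by the workers.
            public void DoTasks(IList<Action> tasks)
            {
                _tasks = new Action[tasks.Count];
                tasks.CopyTo(_tasks, 0);
                _nextTask = 0;
                _workersRemaining = _workerCount;
                _batchDone.Reset();

                // Airlock: open the gate the workers are parked on; once the batch is
                // drained they will park on the other gate, which stays closed.
                ManualResetEvent open = _useGateA ? _gateA : _gateB;
                open.Set();

                _batchDone.WaitOne();

                // Every worker has passed through the open gate by now (the countdown
                // in DrainBatch only reaches zero after all of them have), so it is
                // safe to close it again for the batch after next.
                open.Reset();
                _useGateA = !_useGateA;
            }

            private void WorkerLoop()
            {
                while (true)
                {
                    _gateA.WaitOne();
                    DrainBatch();
                    _gateB.WaitOne();
                    DrainBatch();
                }
            }

            private void DrainBatch()
            {
                int index;
                while ((index = ClaimNextTask()) >= 0)
                    _tasks[index]();

                // The last worker out of the batch wakes the caller in DoTasks.
                if (Interlocked.Decrement(ref _workersRemaining) == 0)
                    _batchDone.Set();
            }

            private int ClaimNextTask()
            {
                while (true)
                {
                    int index = _nextTask;
                    if (index >= _tasks.Length)
                        return -1;
                    // Advance the shared index only if nobody else claimed this slot first.
                    if (Interlocked.CompareExchange(ref _nextTask, index + 1, index) == index)
                        return index;
                }
            }
        }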
  7. JPatrick

    *Crickets*

     Since it's been over a month without a post, I feel compelled to make one, even if just to make sure there's an entry for August. The wheels are still turning, albeit slowly. Another heatwave a while back sapped my motivation, and more recently I've been tempted by other, much simpler, projects. I'm not going to give up on this, though. A 4D game simply must exist, and come hell or high water it's going to happen.

     Anyway, work on the editor progresses slowly, and has revealed cause for refactoring. The first thing wrong was my unnecessarily complex polychoron object model. I had a "Polychoron Part" class, which a polychoron would contain one or more of. As an example, I have a PlatformTesseract, which will be the main surface object the levels are made of. The top and bottom of these have a grid-line shader applied, and all 6 sides (left/right, front/back, kata/ana) are just a solid color. This requires 2 separate geometry buffers, one for each shader. One polychoron part would be for the top and bottom, the other for the sides. This worked, but was ugly and unwieldy. Since polychora are subclassed for specific purposes anyway, and they already contain a list of the cells that they're made of, I shifted the responsibility of managing different surface materials to the polychoron subclass, rather than this awkward, extraneous part class. This worked swimmingly and greatly simplified the slice code, allowing me to completely delete the part class. If refactoring were Tetris (and it kind of is), deleting an entire source file is like clearing 4 lines at once.

     With that fresh, clean refactored feeling, I set out to allow my game objects to be consumed by the editor. This presented another problem (of course it did, why wouldn't it). In the editor, 4D objects are to be displayed in 2D panels that cover each permutation of axes. The question, then, is what's the best way to flatten the objects into the 2D plane without them just becoming jumbled messes. I decided that objects should draw the least number of lines possible (in other words, only their actual edges). Unfortunately, I have no way to achieve this yet. Currently, polychora are just a list of 3D cells, each of which is sliced into a 2D face for rendering, the sum of the 2D faces making up the full 3D "slice" of the object, as detailed in one of my first posts. However, nothing in this process has anything to do with edges. My first instinct was just to draw the slices as wireframe, then flatten them down into 2D for the editor, but this will yield a lot of useless extra lines where the slice seams are (and there are a lot). These seams are typically invisible in 3D, except for the very occasional single-pixel gap wrought by floating point error. A little AA can smooth over those gaps in 3D, but what to do in 2D with all the extra lines? Trying to detect "useless" lines dynamically would just be a mess, so I'm not even going to go down that road. Instead, what I plan to do is further elaborate on the Cell class so that it knows which 2D faces it's made of. I can then write a slicing algorithm analogous to the current one, which will slice the 2D faces into 1D line segments. This should give me an optimal wireframe, ideal for editing. Additionally, slicing an object down to edges will allow me to render objects as wireframe in-game, which could be useful for some kind of extra-dimensional awareness mechanic to alert the player of nearby objects on the hidden axis.

     My new long-term goal is to regroup and try for DBP 10, if there is one. It seems to be a pretty successful contest each year, though, so I don't see why they'd stop.
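     As a rough illustration of that face-to-segment slicing idea, here is a hedged sketch, assuming a face is stored as a loop of Vector4 vertices and that the slice is taken along W for concreteness; none of these types or names come from the actual project.

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;

        static class FaceSlicer
        {
            // Slices a planar 2D face (a vertex loop in 4D) against the hyperplane w = sliceW.
            // Each edge that crosses the hyperplane contributes one interpolated point; for a
            // convex face that yields 0 or 2 points, i.e. a 1D line segment.
            public static bool SliceFace(Vector4[] faceVertices, float sliceW,
                                         out Vector4 segmentStart, out Vector4 segmentEnd)
            {
                List<Vector4> hits = new List<Vector4>(2);
                for (int i = 0; i < faceVertices.Length; i++)
                {
                    Vector4 a = faceVertices[i];
                    Vector4 b = faceVertices[(i + 1) % faceVertices.Length];
                    float da = a.W - sliceW;
                    float db = b.W - sliceW;

                    // The edge crosses the hyperplane if its endpoints lie on opposite sides.
                    if ((da < 0.0f) != (db < 0.0f))
                    {
                        float t = da / (da - db);        // interpolation factor along the edge
                        hits.Add(Vector4.Lerp(a, b, t));
                    }
                }

                if (hits.Count >= 2)
                {
                    segmentStart = hits[0];
                    segmentEnd = hits[1];
                    return true;
                }

                segmentStart = segmentEnd = Vector4.Zero;
                return false;
            }
        }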
  8. JPatrick

    Sounds Like a Hack

     While the side project is shelved for now, I did discover something interesting that might be useful to other XNA users working with sound. XNA 3.0 added the SoundEffect API to bypass the complexity of XACT. Unfortunately, the sounds must still be authored ahead of time and can only be instantiated through the content pipeline... Right? NOT SO! This is a total unabashed hack, but it works on both PC and 360. I successfully generated a sine wave at runtime with custom loop points, and it works great (after getting the units right for the loop point, that is, but more on that later). Also, this is NOT suitable for "interactive" audio, which is to say you can't have a rolling buffer of continuously generated sound data. It almost works for that, but the gap between buffers is noticeable, and especially jarring on the 360. Here's to hoping they improve that in a future XNA release. Nevertheless, the ability to generate sound effects at runtime still provides interesting possibilities.

     Anyway, down to business. The first thing that bars our way is the fact that SoundEffect has no public constructor. This can be easily remedied with the crowbar that is reflection:

        _SoundEffectCtor = typeof(SoundEffect).GetConstructor(
            BindingFlags.NonPublic | BindingFlags.Instance,
            Type.DefaultBinder,
            new Type[] { typeof(byte[]), typeof(byte[]), typeof(int), typeof(int), typeof(int) },
            null);

     As can be seen, SoundEffect has a private constructor that takes 2 byte arrays and 3 ints. Fantastic. So... what are they? Digging deeper with Reflector (which is a tool any .NET developer should have handy), we find that the first byte array is a WAVEFORMATEX structure, and the second byte array is the PCM data. The first 2 ints are the loop region start and the loop region length (measured in samples, NOT bytes), and the final int is the duration of the sound in milliseconds. I'm not sure why that's a parameter, since it could be computed from the wave format and the data itself, but whatever. While most of the parameters are straightforward, we'll need to construct a WAVEFORMATEX byte by byte. Fortunately, the MSDN page for it tells us what we need to know. Eventually, I came up with this:

        #if WINDOWS
        static readonly byte[] _WaveFormat = new byte[]
        {
            // WAVEFORMATEX little endian
            0x01, 0x00,             // wFormatTag
            0x02, 0x00,             // nChannels
            0x44, 0xAC, 0x00, 0x00, // nSamplesPerSec
            0x10, 0xB1, 0x02, 0x00, // nAvgBytesPerSec
            0x04, 0x00,             // nBlockAlign
            0x10, 0x00,             // wBitsPerSample
            0x00, 0x00              // cbSize
        };
        #elif XBOX
        static readonly byte[] _WaveFormat = new byte[]
        {
            // WAVEFORMATEX big endian
            0x00, 0x01,             // wFormatTag
            0x00, 0x02,             // nChannels
            0x00, 0x00, 0xAC, 0x44, // nSamplesPerSec
            0x00, 0x02, 0xB1, 0x10, // nAvgBytesPerSec
            0x00, 0x04,             // nBlockAlign
            0x00, 0x10,             // wBitsPerSample
            0x00, 0x00              // cbSize
        };
        #endif

     The first thing that should be apparent is that it's different for the PC and the 360. This is because the 360 is big-endian, whereas PCs are little-endian. This also applies to the PCM data itself. The first member is the format of the wave (0x1 for PCM). Next is the number of channels (2 for stereo), then the sample rate (44100 Hz in hex), bytes per second (sample rate times atomic size), bytes per atomic unit (two 2-byte samples), bits per sample (16), and the size of the extended data block (0, since PCM doesn't have one). This will give us a pretty standard 44.1kHz, 16-bit, stereo wave to work with. It could just as easily be made mono with the appropriate adjustments.

     The next parameter is the sound data itself. This is stored as a series of 16-bit values alternating between the left and right channels. Here's a snippet that generates a sine wave:

        _WavePos = 0.0F;
        float waveIncrement = MathHelper.TwoPi * 440.0F / 44100.0F;
        for (int i = 0; i < _SampleData.Length; i += 4)
        {
            short sample = (short)(Math.Round(Math.Sin(_WavePos) * 4000.0));
        #if WINDOWS
            _SampleData[i + 0] = (byte)(sample);
            _SampleData[i + 1] = (byte)(sample >> 8);
            _SampleData[i + 2] = (byte)(sample);
            _SampleData[i + 3] = (byte)(sample >> 8);
        #elif XBOX
            _SampleData[i + 0] = (byte)(sample >> 8);
            _SampleData[i + 1] = (byte)(sample);
            _SampleData[i + 2] = (byte)(sample >> 8);
            _SampleData[i + 3] = (byte)(sample);
        #endif
            _WavePos += waveIncrement;
        }

     This will generate a 440Hz (A) tone. Again, notice the endian difference, and how the 16-bit sample is sliced into 2 bytes for placement into the array. It's written to the array twice so that the tone will sound in both channels.

     Next we have the loop region. loopStart is the inclusive sample offset of the beginning of the loop, and loopStart + loopLength is the exclusive ending sample. In this context, a sample includes both the left and right channel samples, so really a 4-byte atomic block. If you pass in values measured in bytes, playback will run past the end of your sound and the app will die a sudden and painful death. Finally, the duration parameter. I just calculate the length of the sound in milliseconds and pass it in (soundData.Length * 250 / 44100). I'm not sure if this parameter actually has an effect on anything, but it's still prudent to set it.

     Once you have all this, you can just invoke the constructor and supply your arguments, and you should get a nice new SoundEffect from which you can spawn instances and play them just as you would with one from the content pipeline. That about covers it. Certainly not as useful as full real-time audio would be, but I thought it was cool anyway, and hopefully it will be useful for some scenarios at least.
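     For completeness, here is a hedged sketch of that final invocation, reusing the field names from the snippets above; the loop values and the duration math are assumptions pieced together from the description, not code from the original post.

        // Assumed: _SoundEffectCtor, _WaveFormat, and _SampleData are the fields shown above.
        int loopStart = 0;                                  // in 4-byte sample frames, not bytes
        int loopLength = _SampleData.Length / 4;            // loop over the whole buffer
        int durationMs = _SampleData.Length * 250 / 44100;  // same duration formula as above

        SoundEffect effect = (SoundEffect)_SoundEffectCtor.Invoke(
            new object[] { _WaveFormat, _SampleData, loopStart, loopLength, durationMs });

        SoundEffectInstance instance = effect.CreateInstance();
        instance.IsLooped = true;
        instance.Play();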
  9. JPatrick

    Undo/Redo and Debug Anecdote

     Oddly enough, undo/redo was actually rather easy to implement. WPF's routed command facility makes it a snap to wire up the shortcuts and add the callbacks. The implementation is simply two stacks of an IUndoRedo interface. New actions are pushed onto the undo stack, and the redo stack is cleared. If an action is undone, it is popped from the undo stack and pushed onto the redo stack, and vice versa when it is redone. Custom actions can implement this interface and their Undo and Redo methods will be invoked appropriately. (A minimal sketch of this arrangement is at the end of this entry.) Currently I have a MultiPropertyChangedAction that listens to the MultiPropertyGrid and will save all the previous values of a property when it is about to change, so that it can be easily undone. That alone covers a lot of what needs to be undo-able. Other things will include spawning a new object, deleting objects, dragging an object around in the viewports, etc.

     ----------

     EDIT: Yoink. I jinxed the side project by talking about it. Turns out SoundEffect has more limitations than I realized. Pitch shifting is constrained to +/- 1 octave, and there's no effective way to start playing from an offset within a sound. Oh well, back to the editor I guess.

     ----------

     Since the side project has been shelved for now, I'll instead regale any readers with a humorous tale of debugging woe. While running the game and mouse-looking around, things were working great. For some reason I alt-tabbed out to look at something else, and clicked the "show desktop" button to minimize everything, including the game. After bringing the game up again, I noticed something was missing... All of my tesseracts were gone. Empty nothingness staring back at me like the void of anxiety inside at the realization of another obscure bug. Did it freeze? Was there another race condition in my task manager? Thankfully, the FPS counter was still in the corner dutifully ticking away, so it didn't freeze. The numbers looked right, too, and went down just as they should as I pushed the key to spawn more tesseracts. It was still running, but why wasn't I seeing anything?

     Being relatively new to shaders and all that jazz, I immediately suspected something was broken in my rendering code. It was going blank after a minimize, so maybe some kind of lost device situation? To test, I ran it again and hit ctrl-alt-del to bring up the interrupt menu, which I'm pretty sure causes a lost device. Canceling and going back to the desktop, all the tesseracts were still there. They would ONLY disappear after a minimize, not for any other reason. Even so, maybe my shader constants were getting messed up somehow. XNA claims to be able to recover most kinds of graphics assets fully after a lost device, but maybe there was some kind of bug with minimizing? I added a special key that would reset the constants to appropriate values when pressed. I ran the game, and before minimizing, I pressed the key. The display didn't change, since the values were still correct, and the console reported that the values had been set. So far, so good. I minimize and reopen to emptiness. Crossing my fingers, I press the key. Nothing. Seething with frustration, I mash the key and fill the console output with "minimize test," but my rage was insufficient to sway the program to render once more. What the hell was wrong? Maybe I just wasn't asking it nicely enough. RenderStates.PrettyPlease | RenderStates.WithCherry? I start reading my Update and Draw methods again and again, trying to find out what the eff was wrong.

     If all my shader constants and render states were fine, it had to be something else. Maybe camera updates were going wonky or something. In desperation, I completely comment out the camera code so the view can't be moved at all, and run again. Holding my breath, I minimize and reopen, only to be met with... A floating red tesseract. YES. I took a quick break to relish the discovery of the problem area. Relaxed and confident, I plow into the camera code. It was obviously getting moved in such a way that you could no longer see the scene. How, though? There was code to clamp the camera coordinates to reasonable values on both rotational axes, so even the most erratic movement should be fine. Perplexed, I add a line to print out the camera angles when a key is pressed. Before minimizing I get the typical -180 to 180 horizontal and -90 to 90 vertical. Minimizing and reopening yet again, I push the key and see values of -Infinity and NaN. Maybe next I'll- wait, what? I don't care how high your mouse DPI is, you're not going to be scrolling to -Infinity anytime soon. Besides, my input manager will normalize the coordinates based on the client window size, so- Oh. It seems that when you come back from being minimized, the IsActive flag in Game becomes true a few updates before the client width and height are set back to nonzero values. Slapped an if around it and all is well. NaN is fun stuff.
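     Returning to the undo/redo part: here is a minimal sketch of the two-stack arrangement described at the top of this entry. The interface and manager names are stand-ins, not the project's actual types.

        using System.Collections.Generic;

        public interface IUndoRedo
        {
            void Undo();
            void Redo();
        }

        public class UndoRedoManager
        {
            private readonly Stack<IUndoRedo> _undo = new Stack<IUndoRedo>();
            private readonly Stack<IUndoRedo> _redo = new Stack<IUndoRedo>();

            // New actions are pushed onto the undo stack and invalidate the redo stack.
            public void Record(IUndoRedo action)
            {
                _undo.Push(action);
                _redo.Clear();
            }

            public bool CanUndo { get { return _undo.Count > 0; } }
            public bool CanRedo { get { return _redo.Count > 0; } }

            public void Undo()
            {
                if (_undo.Count == 0) return;
                IUndoRedo action = _undo.Pop();
                action.Undo();
                _redo.Push(action);
            }

            public void Redo()
            {
                if (_redo.Count == 0) return;
                IUndoRedo action = _redo.Pop();
                action.Redo();
                _undo.Push(action);
            }
        }

     Wiring it into WPF would then just be a CommandBinding for ApplicationCommands.Undo and ApplicationCommands.Redo whose Executed handlers call Undo()/Redo() and whose CanExecute handlers check CanUndo/CanRedo.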
  10. It's embarrassing that it took a month to finish just this one control, but I think I can finally put it to rest and move on. It helps to remind myself that this control could be useful in any future WPF app I ever make. Out of the box it can edit any class that provides a string converter, flags and non-flags enums (of any underlying type), arbitrary structs, and classes that provide a default constructor. I think that's sufficient coverage for a lot of cases, even without adding custom type editors. I'll still probably do that anyway, at least for stuff like colors. Here's a few shots of the control: the control before items are added and after a single item is added, and the flags, struct, and class editors. Simply check the boxes in the flags dropdown for the combination you want. The struct and class editors are just a dialog with a nested MPG, the main difference being that for classes the "null" checkbox is available. A red outline is shown around editors whose contents cannot be assigned back to the property. The background of the editor cell will be gray if the value differs among the objects selected by the MPG. Getting all the keyboard input, focus, mouse capture, data binding, etc. of this thing working correctly was at times enormously frustrating. I could itemize the challenges, but honestly I'm so tired of this control that I don't really want to drag my memory through it again. If anyone is using WPF and is interested, though, I'd be happy to discuss it and share the code.

     Switching gears... Somewhere along the line I took a break and refactored my XNA input manager, since I wasn't quite happy with it. The primary reason for making one was to provide quick methods for checking whether buttons were just pressed or released during the current frame. Beyond that, though, I also wanted to rein in the inconsistent input APIs. The keyboard, gamepad, and mouse classes all expose their states in slightly different ways, and I wanted to have one single flat state for all digital inputs, and another for all analog inputs. For digital, I combined all inputs exposed by the 3 devices into a single, large enum, and allow the user to query whether an input is down, up, just pressed, or just released. This should make it easy to bind controls to any device, or any combination of devices. Similarly for analog, I made an enumeration of all available axes, and normalize them to a range of -1.0 to 1.0. The user can then query the current sample, or the delta from the last sample. This works great for all the gamepad axes, but was slightly awkward for the mouse. To make it work, the input manager recenters the pointer after every mouse sampling. This allows one set of code to correctly handle camera control from either the mouse or the gamepad (albeit with different sensitivities). For the gamepad, camera movement is proportional to the current sample. If the player is holding the stick steady at 1.0, the camera will move at a certain speed, a different speed at 0.5, etc. For the mouse, camera movement is typically handled with the mouse delta, rather than the mouse sample. Recentering the mouse, however, effectively turns the sample into a delta, and we get the expected behavior. One final challenge was the aspect ratio of the mouse. Normalizing the horizontal and vertical axes to -1 to 1 means that movement along the axes has different magnitudes, as the screen is not square. To deal with this, I provide aspect-corrected mouse axes, as well as the non-aspect ones (which are themselves useful for tracking pointer position in view space).

     Now I still have the undo/redo stack to implement, then it's finally back to working with 4D things. The level format ought to be interesting, especially if I add spatial partitioning...
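     As a hedged illustration of the recentering trick, here is a small sketch; the class and member names are hypothetical, not the actual input manager.

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Input;

        public class MouseAxisSampler
        {
            private readonly GameWindow _window;

            public MouseAxisSampler(GameWindow window)
            {
                _window = window;
            }

            // Returns the normalized per-frame mouse movement. X is aspect-corrected so a
            // given physical movement produces the same magnitude on both axes.
            public Vector2 Sample()
            {
                int width = _window.ClientBounds.Width;
                int height = _window.ClientBounds.Height;
                if (width <= 0 || height <= 0)
                    return Vector2.Zero;   // e.g. while minimized (see the NaN anecdote above)

                int centerX = width / 2;
                int centerY = height / 2;

                MouseState state = Mouse.GetState();
                float x = (state.X - centerX) / (float)centerX;   // -1..1 across the window
                float y = (state.Y - centerY) / (float)centerY;

                Mouse.SetPosition(centerX, centerY);              // recenter: the next sample is a delta

                float aspect = width / (float)height;
                return new Vector2(x * aspect, y);
            }
        }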
  11. JPatrick

    Bleh

     Anyone in the Seattle area right now is, I'm sure, aware of the loathsome heatwave taking place. 90-plus-degree temperatures in an area where it's not uncommon to have no AC at all do not foster motivation. I'm not a big fan of the day star to begin with, and record-breaking heat is doing little to sway me. Anyway, the icy wind blasting through the window now is doing much to revitalize me, and I'm continuing to pursue the MultiPropertyGrid.

     Types that support String in their TypeConverter are editable via a simple text box, while enums are handled with a combo box. User-entered strings are validated through the type converter to make sure they're valid, and if they fail, the user is notified with a message box and the control's error template is activated. Flags enums might be a little more complicated; maybe a multi-select listbox. Beyond that, types will need to supply their own WPFTypeEditorBase-derived editor control through the Editor attribute. The type is then retrieved through the TypeDescriptor interface, instantiated, and data-bound into the property grid. Once this is out of the way I'll need to handle an undo/redo stack somehow, then maybe I'll finally be able to focus on the level format. At this point it's abundantly clear that there's no way I'll be ready in time for DBP 09, but it's still a useful deadline to keep motivation up.
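     The TypeConverter round trip described above boils down to something like the following hedged sketch (a hypothetical helper, not the MultiPropertyGrid's actual code):

        using System;
        using System.ComponentModel;

        static class ConverterHelper
        {
            // Returns true and the converted value if the user's text is valid for the property type.
            public static bool TryConvertFromText(Type propertyType, string text, out object value)
            {
                TypeConverter converter = TypeDescriptor.GetConverter(propertyType);
                if (converter != null && converter.CanConvertFrom(typeof(string)))
                {
                    try
                    {
                        value = converter.ConvertFromString(text);
                        return true;
                    }
                    catch (Exception)
                    {
                        // Fall through: notify the user and light up the control's error template.
                    }
                }
                value = null;
                return false;
            }
        }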
  12. As others have mentioned, System.Reflection will provide you with the facilities you need to programmatically manipulate arbitrary types at runtime; the hardest part will be constructing the actual UI for it on the 360, since XNA provides none out of the box. I just thought I'd address your question about storage. If you're relatively new to C#, I strongly encourage you to read up on data types and their semantics, primarily the difference between reference and value types, and value type boxing. A "reference" in C# is not the same thing as it is in C++. Boxing is also particularly important since you're targeting the 360, and it can have a non-trivial performance impact there if it occurs too often.
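     To make the reflection and boxing point concrete, here is a small hedged sketch; the settings type is invented purely for illustration.

        using System;
        using System.Reflection;

        class TweakableSettings
        {
            public float Gravity { get; set; }
            public int Lives { get; set; }
        }

        static class ReflectionDemo
        {
            static void Main()
            {
                TweakableSettings settings = new TweakableSettings { Gravity = 9.8f, Lives = 3 };

                foreach (PropertyInfo prop in typeof(TweakableSettings).GetProperties(
                    BindingFlags.Public | BindingFlags.Instance))
                {
                    // GetValue/SetValue traffic in object, so value types (float, int) get
                    // boxed here -- the per-frame garbage the reply warns about on the 360.
                    object boxed = prop.GetValue(settings, null);
                    Console.WriteLine("{0} = {1}", prop.Name, boxed);
                }

                typeof(TweakableSettings).GetProperty("Lives").SetValue(settings, 5, null);
            }
        }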
  13. JPatrick

    .Net read only collections

     I've been frustrated by this somewhat as well, in that it feels unclean to have exposed methods that modify a collection that really shouldn't be modified, but over time I've accepted ReadOnlyCollection as adequate. For me, it helps to remember that in .NET there's really no concept of a truly immutable object. The reflection API can crack open almost anything and gives you quick and easy access to private members. If somebody really, REALLY wants to do it, they will. As a result, I generally consider a good-faith effort to warn the user not to do something as enough of a precaution. Having no public add method would be ideal, but an add method that throws an exception is sufficient warning that it shouldn't be done. In short, I think preventing accidental misuse is really the best that can be hoped for. Beyond that, let them do what they will.
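     For reference, the usual shape of the ReadOnlyCollection pattern being discussed looks something like this hedged sketch (class and member names invented):

        using System.Collections.Generic;
        using System.Collections.ObjectModel;

        public class Level
        {
            private readonly List<string> _objects = new List<string>();
            private readonly ReadOnlyCollection<string> _objectsView;

            public Level()
            {
                // A live, read-only view over the private list. Callers can enumerate and
                // index it, and any mutating member they reach through IList<T> throws
                // NotSupportedException -- the "sufficient warning" mentioned above.
                _objectsView = _objects.AsReadOnly();
            }

            public ReadOnlyCollection<string> Objects
            {
                get { return _objectsView; }
            }

            // Mutation stays internal to the owning class.
            internal void AddObject(string name)
            {
                _objects.Add(name);
            }
        }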
  14. Coincidentally enough, I've just been confronted with this in my app less than a day after investigating. Unfortunately, in my case, I absolutely have to use OneWayToSource, since the source is not a dependency object. Here's a clever hack that I just came up with to get around it for my scenario; it may or may not be useful to you:

        class SuppressInitialBindingValidationRule : ValidationRule
        {
            bool _IsFirstUpdate;

            public SuppressInitialBindingValidationRule()
            {
                _IsFirstUpdate = true;
            }

            public override ValidationResult Validate(object value, CultureInfo cultureInfo)
            {
                if (_IsFirstUpdate)
                {
                    _IsFirstUpdate = false;
                    return new ValidationResult(false, "Initial binding ignored.");
                }
                else
                {
                    return new ValidationResult(true, null);
                }
            }
        }

     Instantiate one of these and add it as a validation rule to the binding. It will force the initial push that occurs when you first set the binding to fail, and the value will not reach the source. The caveat is that it will also trip the error template for the control, but that can probably be dealt with as well somehow. In my case, I'll probably be setting the value to itself again immediately, which will reset the template. Perhaps there's another, cleaner way...

     EDIT: Using Validation.ClearInvalid on the binding expression after setting the binding clears the validation error and prevents the error template from kicking in.
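     A hedged usage sketch from hypothetical code-behind (the control, source object, and property names are invented; the rule itself is the one above):

        using System.Windows.Controls;
        using System.Windows.Data;

        public partial class MainWindow : System.Windows.Window
        {
            // Hypothetical source; OneWayToSource is needed because it is not a DependencyObject.
            private readonly MySettings _settings = new MySettings();

            public MainWindow()
            {
                InitializeComponent();

                Binding binding = new Binding("SomeSourceProperty");
                binding.Mode = BindingMode.OneWayToSource;
                binding.Source = _settings;
                binding.ValidationRules.Add(new SuppressInitialBindingValidationRule());

                // myTextBox is assumed to be declared in the XAML.
                myTextBox.SetBinding(TextBox.TextProperty, binding);

                // The suppressed initial push trips the error template; clear it right away,
                // as the EDIT above suggests.
                BindingExpression expression = myTextBox.GetBindingExpression(TextBox.TextProperty);
                Validation.ClearInvalid(expression);
            }
        }

        // Hypothetical non-DependencyObject binding source.
        public class MySettings
        {
            public string SomeSourceProperty { get; set; }
        }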
  15. I'm still in the process of learning WPF myself, so I was curious and decided to play around with this. Here's what I think is happening: before the binding is set, the String dependency property just has a simple backing field for the o2 instance. Setting a binding, however, changes it from a backing field to a BindingExpression which links it to the binding source. If you try BindingOperations.GetBindingExpression() before and after setting the binding, you'll notice it's null before, and an instance after. Doing this appears to clobber any value that may have been there already. When you think about it, this makes perfect sense for every type of binding except OneWayToSource. Even for OneWayToSource bindings, though, it still somewhat makes sense when you realize that bindings are usually set immediately in the constructor, either through XAML or in code. This usually happens before the property is ever used at all, so in most situations it's probably never noticed.

     Quote: "When I set BindingMode to TwoWay, everything works as expected."

     I'm not sure that it does. If you set a value-changed callback on the String property and then use a TwoWay binding, you'll notice that it immediately pulls in the value from its binding source (the Bool property in this case). Since the default value for bools is false, the converter will yield "nope," which is the same value you're setting on String right off the bat. If you instead set String to "yep" and try the two-way binding, it still comes out as "nope" after the binding is set. I'm not sure there's anything you can do about this except either setting your bindings before using the property, or caching the value, setting the binding, then setting the value back again.
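     That last workaround would look roughly like this hedged sketch; the target, property, and binding here are generic stand-ins, not the types from this thread.

        using System.Windows;
        using System.Windows.Data;

        static class BindingHelper
        {
            // "Cache the value, set the binding, set it back" for any dependency property.
            public static void SetBindingPreservingValue(
                DependencyObject target, DependencyProperty property, BindingBase binding)
            {
                object cached = target.GetValue(property);   // the value set before the binding existed

                // This replaces the local value with a BindingExpression, clobbering it...
                BindingOperations.SetBinding(target, property, binding);

                // ...so push the cached value again; with OneWayToSource this now reaches the source.
                target.SetValue(property, cached);
            }
        }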