Tom

GDNet+ Basic
  • Content count: 935
Community Reputation: 352 Neutral

About Tom
  • Rank: GDNet+
  1. It depends entirely on the size of your models. If a model is supposed to be 2 m tall and measures 200 units in your editor, the conversion factor is 200 units / 2 m = 100 units per meter.
  2. I've scoured the GD.Net forums and the interwebs according to Google, and I cannot for the life of me find a solution. I successfully threw together a little WinForms application using C# and SlimDX that incorporates Direct3D10 and Direct2D interop. It was simple: D3D cleared the buffer, and D2D drew a rectangle. My next step was to draw a sprite using D3D10 rather than D2D so I can do all sorts of stuff that D2D simply cannot do (like change the color, for example). I've got a 32-bit PNG with transparency that I use to test it, and I converted it to a DDS (DXT5) so I'd have another format to test. I load the image using ShaderResourceView.FromFile() and assign it to a SpriteInstance. I set up the view matrix, and everything draws okay . . . except that the sprite is opaque. So, I do some research and discover how to set up the blend state, as follows.

[code]
_Device.OutputMerger.SetTargets(_RenderView);
_Device.Rasterizer.SetViewports(new Viewport(0, 0, this.ClientSize.Width, this.ClientSize.Height, 0f, 1f));
_Device.OutputMerger.BlendState = BlendState.FromDescription(_Device, new BlendStateDescription()
{
    IsAlphaToCoverageEnabled = false,
    SourceBlend = BlendOption.SourceAlpha,
    DestinationBlend = BlendOption.InverseSourceAlpha,
    BlendOperation = SlimDX.Direct3D10.BlendOperation.Add,
    SourceAlphaBlend = BlendOption.Zero,
    DestinationAlphaBlend = BlendOption.Zero,
    AlphaBlendOperation = SlimDX.Direct3D10.BlendOperation.Add
});
_Device.OutputMerger.BlendState.Description.SetBlendEnable(0, true);
_Device.OutputMerger.BlendState.Description.SetWriteMask(0, (ColorWriteMaskFlags)0x0F);
[/code]

Unfortunately, this does nothing. If I change IsAlphaToCoverageEnabled to true, it renders a kind of color-keyed image where the alpha looks 1-bit, but I cannot for the life of me get smooth alpha. Ideas are greatly appreciated, and thanks in advance. If you need more info, like the entire code file, or screenshots, let me know.

[b]Update:[/b] After learning that Sprite changes the render state, I put the above code [i]after[/i] BeginDraw and received interesting results: with IsAlphaToCoverageEnabled set to false, the sprite appears opaque as before, but when I set it to true, the sprite disappears altogether. I might also mention that I'm not using a depth/stencil buffer, and the z-scale is 1.

[b]Resolved:[/b] I honestly have no idea what I did. I was playing around with blend modes and moving lines of code around, and eventually it just popped. My sprite is blended correctly now, and I have no idea why. Before it magically started working, I'd set the BlendFactor to (1f, 1f, 1f, 1f), which had no effect at the time. I separated the BlendStateDescription from the assignment so it appears in its own block, moved IsAlphaToCoverageEnabled to the bottom of the block, and it suddenly worked. I then moved it back to the top, and it [i]still[/i] worked. It's quite a mystery. (A sketch of the state-creation ordering that likely explains this appears after this list.)
  3. In the SimpleModel10 sample, I don't understand the purpose of the following code:

[code]
var rt = new Texture2D(device, new Texture2DDescription
{
    ArraySize = 1,
    BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource,
    CpuAccessFlags = CpuAccessFlags.None,
    Format = Format.R8G8B8A8_UNorm,
    Height = 128,
    Width = 128,
    MipLevels = 1,
    OptionFlags = ResourceOptionFlags.None,
    SampleDescription = new SampleDescription(1, 0),
    Usage = ResourceUsage.Default
});
var dxgiSurface = rt.AsSurface();
rt.Dispose();
dxgiSurface.Dispose();
[/code]

It creates and destroys a texture and surface for no apparent reason, so I'm just wondering what the reason is.
  4. I figured it out. The 7th bit (position 6) indicates whether a second byte is present, and then the second byte is simply shifted left six bits and OR'ed with the remaining six bits in the first byte. I got screwed up because I was looking at the wrong counts . . . in the case of 684 faces, it was actually counting indices, in which case there are 2052, and the binary works out. Thanks to anyone who strained your brain on this. I still don't know why they cut off the first byte at the 7th bit instead of the 8th, but, whatever. (Maybe they left the 8th bit open for three- and four-byte counts.) A sketch of this decoding appears after this list.
  5. Hi there. I've been analyzing proprietary file formats for a while now, in an attempt to make a (somewhat) universal converter. I'm now analyzing the Unreal static mesh format (specifically, the old one for UE2), the raw version of which is, according to Google, completely undocumented, and I have noticed something quite strange. The mesh is broken into sections for vertices, vertex colors, faces, edges, mapping coordinates, etc., and at the top of each section is a count of the number of elements the section contains. The count is what I'm having trouble with. It isn't a regular signed or unsigned 16-bit value but a combination of up to two bytes whose individual values have no direct correlation with the count. The actual count is derived using some bit math I can't figure out. Here are some examples, from the Aidabear (raw) mesh file:

Vertex Section: The count is given as two bytes, 0x7F (127) and 0x07 (7), in that order, so together they would be 0x077F (1919) or 0x7F07 (32519). The actual vertex count is 511. In binary, you get this:

0x7F = 0111 1111
0x07 = 0000 0111
511  = 0000 0001 1111 1111

We'll come back to this in a minute.

Face Section: The count is given as 0x44 (68) and 0x20 (32), in that order, so together they are 0x2044 (8260) or 0x4420 (17440). The actual face count is 684. In binary, you get this:

0x44 = 0100 0100
0x20 = 0010 0000
684  = 0000 0010 1010 1100

I can't tell how these values correlate. We'll come back to it.

Edge Section: This is the one that I thought made sense until I studied the other two sections. The count is given as 0x6C (108) and 0x24 (36), so together they are 0x246C (9324) or 0x6C24 (27684). The actual edge count is 1174. In binary, you get this:

0x6C = 0110 1100
0x24 = 0010 0100
1174 = 0000 0100 1001 0110

Now, let's assume for the moment that the 7th bit (at position 6) in the first byte (0x6C) is a marker that tells the mesh loader to use a second byte (in this case, 0x24), which is stacked onto the end (i.e., shifted six bits and OR'ed together). This gives us 0000 1001 0010 1100, which is one bit out of place, so let's also drop the lowest bit (position 0). That gives us the correct value. In short: ((0x24 << 6) | (0x6C & MASK)) >> 1 = 0x0496 (the actual count), where MASK = 0011 1111. That's an awful lot of math for something that fits easily into two bytes.

Problem: this does not work with the other two sections. You can do the math real quick, but really you just have to look at the binary values posted here and see right away that they don't match. In the Vertex section, you get the correct value (511) by leaving the lowest bit in 0x7F alone . . . but what in this particular case dictates that it must remain untouched?

So, I'm turning to wiser programmers like you to help me figure this one out. If it helps, here's another example, from Arachnid_Gib_s03:

Vertex Section: The count is given as 0x0E (14), which is the actual number of vertices. There is only one byte. (This is also the case in the Face and Edge sections, which contain 12 and 32 elements, respectively, described using exactly one byte.) Something in this value (0x0E) is missing that would otherwise tell the mesh loader to look for a second byte. It seems most likely to be the 7th bit (since two of the three first bytes in the AidaBear file have a zero in their lowest bit, the lowest bit can't be the marker). Since I've only looked at two models so far (the two mentioned here), I can't tell for sure.

If I find a model with exactly 127 vertices, it might help, but I'm skeptical that one even exists, and I'm not about to serialize every one of the 2,452 models by hand! So . . . with this information, how do you get 0x02AC (684) from combining 0x44 and 0x20? How do you get 0x01FF (511) from combining 0x7F and 0x07? What do the extraneous bits represent? I thank you in advance for your help. Hopefully I'll get this figured out soon, but any timely assistance you can offer is greatly appreciated. It is quite possible that the mesh file contains other data that dictates the size of the count field in each section. If no one can help me with the bit math, I'll start looking elsewhere. The file format is incredibly obscure, and, as far as I can tell, documentation is nonexistent.

[Edited by - Tom on September 22, 2010 1:43:39 PM]
  6. I'm looking at the source. Lua.Close() calls lua_close(), and Dispose calls Close() in addition to destroying some other things. There's also an internal (Friend in VB) dispose method that calls lua_unref(), but I don't know where/if this is called. In effect, a call to Dispose should close the VM and free up resources, but when I set Lua to null and try to re-create it, I get .NET errors about protected memory or uninstantiated objects . . . it's not very consistent.
  7. Hi there. I recently started using Lua and C# together thanks to LuaInterface. So far everything works great except for one tiny detail: I'm not able to reset Lua at run-time without generating either an exception or a memory leak, and I'm not sure why. I've tried disposing the Lua object itself and re-creating it, and I've tried closing the Lua object and re-creating it. I've even tried just setting it to null and re-creating it, knowing full well that it's going to leak memory but doing it anyway because I ran out of ideas. If someone already knows of a clean way to do this in LuaInterface 2.0.3, please let me know. I'd like to be able to dump the VM and start fresh, rather than trying to clean every single value manually (though I did write a dispose method for handling tables). A sketch of the dispose-and-recreate pattern I'm describing appears after this list.
  8. Yes, but then I have to cast every enum as int, which is specifically what I want to avoid.
  9. How are you measuring and presenting CPU usage?
  10. Found a solution here. Not especially pretty, but it gets the job done in nearly all cases.
  11. Hi there. I want to do something like this:

[code]
public static bool Supports<T>(this T value, T flag) where T : int
{
    return ((value & flag) != 0);
}
[/code]

Obviously this doesn't work because you can't constrain a generic parameter to int, and it wouldn't support other integral types anyway (e.g., uint, short, etc.). Is there an alternate way to accomplish this through an extension method (preferably without a cast), or am I stuck casting to a regular method? Thanks in advance for your help. (One possible workaround is sketched after this list.) [Edited by - Tom on February 1, 2010 10:29:21 AM]
  12. This would be a useful constant for declaring default scales. XNA has it, and I use it quite often. Writing it into SlimDX would make it look much cleaner alongside other such constructs like Vector3.Zero and Quaternion.Identity, rather than having my own "Vector3_One" or "new Vector3(1f)" sticking out like a sore thumb. Logically, Vector2.One should also be included for use in 2-D.
  13. And there's absolutely no type decorator for signed or unsigned shorts like there is for pretty much every other type? e.g., 0f, 0u, 0ul, etc. This seems like a terrifically horrible design.
  14. Hi. Is there a way to turn this:

[code]
ushort c0 = reader.ReadUInt16();
ushort c1 = reader.ReadUInt16();
ushort c2 = (ushort)((int)(2 * c0 + c1 + 1) / 3);
[/code]

...into this?

[code]
ushort c0 = reader.ReadUInt16();
ushort c1 = reader.ReadUInt16();
ushort c2 = (2 * c0 + c1 + 1) / 3;
[/code]

The problem is that the compiler treats the literals as ints and promotes the ushort operands to int as well, so it won't implicitly narrow the result back to ushort. I believe this has more to do with the operators than the actual values. Anyway, please tell me there's a way to get around this hideous forced type-casting. (A small helper that at least hides the cast is sketched after this list.)
  15. Found one: http://www.rpmanager.com/otherGear.htm. It's called ChannelOps.
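
Regarding the blend-state mystery in item 2: a likely explanation is that Direct3D 10 state objects are immutable once created, so calling SetBlendEnable/SetWriteMask on the Description of an already-created BlendState has no effect on the device. Below is a minimal sketch, using only the SlimDX types and calls that already appear in that post, of the ordering that would make the enable flag take effect. The original fix was never identified, so treat this as an educated guess rather than the confirmed resolution; the wrapper class and method name are made up for illustration.

[code]
using SlimDX.Direct3D10;

static class SpriteBlend
{
    // Sketch only: configure the description fully, *then* create the state object.
    // D3D10 state objects are immutable, so SetBlendEnable/SetWriteMask must be
    // applied to the description before BlendState.FromDescription, not afterward.
    public static BlendState Create(Device device)
    {
        var blendDesc = new BlendStateDescription()
        {
            IsAlphaToCoverageEnabled = false,
            SourceBlend = BlendOption.SourceAlpha,
            DestinationBlend = BlendOption.InverseSourceAlpha,
            BlendOperation = SlimDX.Direct3D10.BlendOperation.Add,
            SourceAlphaBlend = BlendOption.Zero,
            DestinationAlphaBlend = BlendOption.Zero,
            AlphaBlendOperation = SlimDX.Direct3D10.BlendOperation.Add
        };
        blendDesc.SetBlendEnable(0, true);                    // enable blending on render target 0
        blendDesc.SetWriteMask(0, (ColorWriteMaskFlags)0x0F); // write all four channels

        return BlendState.FromDescription(device, blendDesc);
    }
}

// Per the post's update, assign the state after BeginDraw so the sprite batch
// doesn't reset it:
//     _Device.OutputMerger.BlendState = SpriteBlend.Create(_Device);
[/code]

The alpha-channel blend (SourceAlphaBlend/DestinationAlphaBlend) is left exactly as in the original post; it only matters if the render target's alpha channel is consumed later.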
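
For the count encoding worked out in items 4 and 5, here is a minimal sketch of the decoding as described there: bit 6 (0x40) of the first byte flags that a second byte follows, the low six bits of the first byte are the low bits of the count, and the second byte supplies bits 6 through 13. This is inferred from the samples in those posts, not from any official UE2 documentation, and the possible three- or four-byte continuation via bit 7 is only the speculation mentioned in item 4, so it is not handled here.

[code]
using System.IO;

static class CompactCount
{
    // Decodes the one- or two-byte section count described above (an assumption
    // based on the AidaBear and Arachnid_Gib_s03 samples).
    public static int Read(BinaryReader reader)
    {
        byte b0 = reader.ReadByte();
        int count = b0 & 0x3F;          // low six bits of the count
        if ((b0 & 0x40) != 0)           // bit 6 set: another byte follows
        {
            byte b1 = reader.ReadByte();
            count |= b1 << 6;           // second byte supplies bits 6..13
        }
        return count;
    }
}
[/code]

Checked against the posted samples: 0x7F, 0x07 decodes to 511; 0x44, 0x20 to 2052 (the index count for 684 faces, per item 4); 0x6C, 0x24 to 2348 (presumably two indices per edge for the 1174 edges, following item 4's "wrong counts" observation); and a lone 0x0E to 14.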
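
For the LuaInterface reset question in items 6 and 7, this is only a sketch of the dispose-and-recreate pattern those posts describe. The Lua type, Dispose(), and Close() come from the posts themselves; the forced garbage-collection pass is an assumption sometimes suggested for finalizer-backed native wrappers, not a confirmed fix for the protected-memory errors mentioned in item 6.

[code]
using System;
using LuaInterface;

class ScriptHost
{
    private Lua _lua = new Lua();

    // Sketch: throw away the current VM and start a fresh one.
    public void Reset()
    {
        _lua.Dispose();                 // closes the underlying lua_State (per item 6)
        GC.Collect();                   // assumption: let finalizable wrappers run down
        GC.WaitForPendingFinalizers();  // before standing up a new VM
        _lua = new Lua();               // clean state, no values carried over
    }
}
[/code]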
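
For the extension-method question in item 11: the link in item 10 did not survive, so the following is only one common workaround, not necessarily the solution found there. It constrains T to struct and funnels the values through Convert.ToInt64, which accepts enums and the integral types at the cost of a conversion (and boxing) per call.

[code]
using System;

public static class FlagsExtensions
{
    // Works for enums, int, uint, short, ushort, byte, long, etc.
    // The conversion goes through IConvertible, so there is no compile-time
    // guarantee that T is integral; a non-integral struct fails at run time,
    // and ulong values above long.MaxValue would overflow.
    public static bool Supports<T>(this T value, T flag) where T : struct
    {
        long v = Convert.ToInt64(value);
        long f = Convert.ToInt64(flag);
        return (v & f) != 0;
    }
}

// Usage, with a hypothetical flags enum:
//     [Flags] enum Caps { None = 0, Read = 1, Write = 2 }
//     bool canWrite = someCaps.Supports(Caps.Write);
[/code]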
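
For the ushort arithmetic in item 14: C# defines no arithmetic operators on ushort, so both operands are promoted to int and the narrowing cast back to ushort cannot be avoided. The best that can be done is to hide the cast in one place behind a small helper; the class and method names below are made up for illustration.

[code]
static class ReaderMath
{
    // Hypothetical helper: the operands are promoted to int, the arithmetic is
    // done in int, and the result is narrowed back to ushort exactly once, here.
    public static ushort Interp(ushort a, ushort b)
    {
        return (ushort)((2 * a + b + 1) / 3);
    }
}

// ushort c2 = ReaderMath.Interp(c0, c1);
[/code]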