About ValMan

  1. Sprite rotation woes

    Set the identity transform only once before drawing all the walls; there is no need to set it for each wall. How does your collision detection work? What do you mean by rotating the thing and not its axes? You are not using a camera transform in your program, so no axis should be affected.
  2. Sprite rotation woes

    [b]"Isn't spritebatch for XNA? Or is there something else I need to do?"[/b]

The ID3DXSprite interface is used to render a sprite batch. This increases efficiency when sprites (each being 4 vertices and 2 triangle primitives) share the same texture, transform and shaders, because all sprites that share these render options can be submitted to the video card in one draw call (meaning DrawIndexedPrimitive). Sprite batches are also convenient because ID3DXSprite will sort your sprites within a batch by Z order or texture if you specify. For 2D games, you usually want both options on when drawing the world, and neither when drawing UI.

In your case, the player sprite does NOT share the same transform as the walls, so it cannot be drawn in the same batch, since render states have to be set. Chances are you are also using a different texture, in which case drawing the player will break the batch no matter what. 3D games face exactly the same issues, and you start worrying about the number of batches you submit only when you find your performance unacceptable. I personally target 60 FPS with v-sync, and if my game runs at 59-60 on a debug build I find no reason to worry about optimizing.

[b]"Also are you basically saying that I should have multiple id3dSprite->Begin() and id3dSprite->End() function calls?"[/b]

Yes, but let me correct my previous post - there is no need to end the batch by calling End() after every batch. End() will attempt to restore all render states on the device to the values captured at the time of the Begin() call, which is unnecessary until you finish all drawing for the frame. Call Flush() instead to submit the current batch, then change render states for the next batch. SetTransform may cause Flush() to be called automatically - I use a custom sprite class in my engine, so I don't remember the nuts and bolts of D3DXSprite as well as I used to. In any case, make sure to set the transform to an identity matrix before drawing things that aren't supposed to be transformed, as opposed to using D3DXMatrixRotationZ with an angle of 0.

[b]"Also I was unaware of the memory leak thing. Do you have anything I could use to read up further on this?"[/b]

The new operator in C++ allocates an object on the heap. That object remains allocated until you explicitly deallocate it with delete. If the object is never deallocated, you have a memory leak - your program bit off a chunk of memory and never returned it. This is especially problematic inside loops, such as this render loop. Every time the loop runs, new D3DXVECTOR3 objects are allocated to pass to the Draw method and never released. So every frame you lose at least 24 bytes of heap memory (a D3DXVECTOR3 takes at least 12 bytes, and you have two heap allocations with new). Over time this will make your program run slower and then crash. Read about heap memory allocation and the new/delete operators in C++.
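The leak described above can be made visible in a small stand-alone sketch. Vec3 and DrawSprite are hypothetical stand-ins for D3DXVECTOR3 and ID3DXSprite::Draw (the real types need d3dx9), with a live-object counter added purely so the leak is observable:

```cpp
#include <cassert>

// Stand-in for D3DXVECTOR3 (assumed layout: three floats), with a counter
// of live objects so the leak can be observed.
struct Vec3 {
    float x, y, z;
    static int liveCount;
    Vec3(float x_, float y_, float z_) : x(x_), y(y_), z(z_) { ++liveCount; }
    ~Vec3() { --liveCount; }
};
int Vec3::liveCount = 0;

// Hypothetical stand-in for Sprite->Draw: it only reads the vectors.
void DrawSprite(const Vec3* center, const Vec3* position) {
    (void)center; (void)position;
}

void LeakyFrame() {
    // BUG: allocated on the heap every frame and never deleted.
    DrawSprite(new Vec3(0, 0, 0), new Vec3(10, 20, 0));
}

void CleanFrame() {
    // Fix: stack allocation - destroyed automatically on return.
    Vec3 center(0, 0, 0), position(10, 20, 0);
    DrawSprite(&center, &position);
}
```

Calling LeakyFrame once leaves two Vec3 objects alive forever; CleanFrame leaves none.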
  3. Sprite rotation woes

    Looks like you are on the right path. The workflow for drawing rotated and non-rotated sprites is typically the following:

Begin batch
Set identity transform (means no transform will take place)
Draw some non-transformed sprites...
End batch
Begin batch
Set transform (a matrix generated by the RotationX, RotationY, RotationZ or Transformation2D functions in your case)
Draw some rotated or otherwise transformed sprites...
End batch
Begin batch
Set identity transform
Draw some non-transformed sprites...
End batch

Here are the things you are missing:

1. Every time you change the transform, you need to Begin/End a sprite batch before calling Sprite->SetTransform to set another transform. The way rendering works, a sprite batch gets drawn all at once (which is the whole point of sprite batching), with the last transform you set for that batch. Setting the transform more than once for the same batch will not have any effect.

2. To "reset" a transform, you pass an identity matrix to SetTransform to cancel out any previous rotation, etc. D3DXMatrixIdentity will generate an identity matrix for you.

So in your case: call Sprite->Begin, set your transform matrix generated by D3DXMatrixRotationZ, draw the player sprite, call Sprite->End, call Sprite->SetTransform passing an identity matrix generated by D3DXMatrixIdentity, then Begin, then draw your walls, then End. This will draw the player sprite rotated and the walls untransformed.

When rotating sprites, you should also be aware of the point you are rotating about, called the pivot point or hot spot depending on terminology. If I remember correctly, D3DXSprite generates its vertices so that any Z rotation applied will automatically rotate the sprite using its center as the pivot point. However, if you want to change this pivot point, you can do so with a translation matrix generated by D3DXMatrixTranslation, multiplied with the RotationZ matrix to get a rotation about a different pivot point. The translation before the rotation is an offset from the current pivot point to the new point.

The last thing I saw in your code is that you are using the new operator to pass D3DXVECTORs into the Sprite->Draw method. In C++ this creates a memory leak, so I recommend allocating the position variables on the stack and taking their address instead. Of course, when you progress to the point of having a real game, your data structures for the walls and the player should already use D3DXVECTORs, and you can just take the address of those.

For your walls you will probably not want to use transform matrices, because that will make collision detection more difficult later. If you represent your walls as AABBs internally (meaning an x, y, width, height rectangle structure), drawing them will be more efficient and collision detection will be easier later on.

If you still have problems, maybe post a screen or two. I don't know what "rotating the wall, but not visibly" means.
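The AABB suggestion above can be sketched in a few lines; the struct layout and the Overlaps helper are illustrative, not from the original code:

```cpp
#include <cassert>

// Axis-aligned bounding box as suggested: x, y, width, height.
struct AABB {
    float x, y, w, h;
};

// Two AABBs overlap when their ranges intersect on both axes.
bool Overlaps(const AABB& a, const AABB& b) {
    return a.x < b.x + b.w && b.x < a.x + a.w &&
           a.y < b.y + b.h && b.y < a.y + a.h;
}
```

With walls stored this way, the same rectangle drives both the sprite's screen placement and the collision test, with no transform matrix involved.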
  4. GUI Programming

    I don't believe Windows uses messages to retrieve simple properties like width or position. In Win32 you would use GetClientRect to get the width/height of a window's client area and GetWindowRect to get the size and position of the window in screen coordinates. I think the reason messages are used for some properties (such as window text, WM_SETTEXT for example) is that the designers wanted you to be able to "overload" that function and return your own data. Since Win32 was written in C, they had no virtual function mechanism, so they used messages, which is the next closest thing.

I personally use the "messages" concept only for commands and notifications; the rest is handled by virtual functions. Every GUI element may have the following functions. Properties shared by all subclasses of the GUI element are simple functions:

[code]
POINT GetPosition() const;
void SetPosition(POINT ptPosition);
void SetWidth(int nWidth);
void SetHeight(int nHeight);
int GetWidth() const;
int GetHeight() const;
void GetClientRect(RECT& rcOut) const;
DWORD GetFlags() const;
SIZE GetSize() const;
void SetSize(SIZE size);
[/code]

etc...

Actions and properties that may have to be overloaded by derived classes are virtual functions:

[code]
virtual void Render();
virtual void Deserialize(File& rFile);
virtual void Serialize(File& rFile);
virtual void SetFlags(DWORD dwFlags);
[/code]

Events, commands and notifications are also virtual functions:

[code]
virtual int OnCommand(int nCommandID, GuiElement* pSender, int nParam);
virtual int OnNotify(int nNotifyID, GuiElement* pSender, int nParam);
virtual void OnRenderClient(const RECT& rcClip);
virtual void OnRenderBackground(const RECT& rcClip);
virtual void OnKeyDown(int nKeyCode);
virtual void OnChar(int nChar);
virtual void OnKeyUp(int nKeyCode);
virtual void OnMouseLDown(POINT pt);
virtual void OnMouseLUp(POINT pt);
virtual void OnMouseMove(POINT pt);
virtual void OnFocus(GuiElement* pPrevFocus);
virtual void OnDefocus(GuiElement* pNewFocus);
[/code]

Then I also like to specify command IDs and notify IDs within the classes that actually use them:

[code]
class ScrollBar: public GuiElement
{
public:
    //
    // Constants
    //
    enum NOTIFY { NOTIFY_SCROLL };
};

int OverlappedWindow::OnNotify(int nNotifyID, GuiElement* pSender, int nParam)
{
    if(ScrollBar::NOTIFY_SCROLL == nNotifyID)
    {
        m_nFirstVisibleItem = nParam;
        UpdateView();
    }

    return 0;
}
[/code]

The whole mechanism works by having a message loop within the game that calls all those virtual functions. I have EngineWndProc as a static function within the Engine class, and when it gets WM_MOUSEMOVE, for example, it loops through a linked list of top-level GuiElements and checks which one is under the mouse and not disabled. Then it calls that element's OnMouseMove function and passes the mouse position converted to the element's local coordinates. The same goes for everything else: WM_KEYDOWN checks which element has focus and calls the OnKeyDown virtual function of that element.
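The dispatch loop described in the last paragraph can be sketched roughly like this; all names are hypothetical, and the Win32 POINT/RECT types are replaced by local stand-ins so the sketch is self-contained:

```cpp
#include <cassert>
#include <vector>

struct POINT_ { int x, y; };
struct RECT_ { int left, top, right, bottom; };

struct GuiElement {
    RECT_ bounds{};
    bool enabled = false;
    POINT_ lastMouse{0, 0};
    virtual ~GuiElement() {}
    bool HitTest(POINT_ pt) const {
        return enabled && pt.x >= bounds.left && pt.x < bounds.right &&
               pt.y >= bounds.top && pt.y < bounds.bottom;
    }
    // Receives the position already converted to local coordinates.
    virtual void OnMouseMove(POINT_ pt) { lastMouse = pt; }
};

// Called on WM_MOUSEMOVE; elements are ordered front (index 0) to back.
// Routes the event to the first enabled element under the cursor.
GuiElement* DispatchMouseMove(std::vector<GuiElement*>& elements, POINT_ pt) {
    for (GuiElement* e : elements) {
        if (e->HitTest(pt)) {
            e->OnMouseMove({pt.x - e->bounds.left, pt.y - e->bounds.top});
            return e;
        }
    }
    return nullptr;
}
```

The same front-to-back walk with a "has focus" check instead of a hit test handles WM_KEYDOWN.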
  5. Rendering and interacting with an UI

    Depending on how complicated you want it to be, you will need a list of windows arranged by Z-order, a way to identify the foreground window, the focus window (receives keyboard events), the captured window (receives mouse events exclusively until reset), the active window, etc. You'll also need a way to specify hidden windows, windows that can't receive focus (disabled), can't be activated, can't be (or can be) dragged, are always on top or bottom of the Z-order (topmost/bottommost), or windows that are modal (receive both keyboard and mouse events exclusively). Then there is the issue of passing focus around, tab order, and the events that cause windows to become active or inactive, or gain or lose focus. You may also have to look at clipping, windows with buffers, and parent-child relationships. You will probably need a way to subclass the UI windows to handle events as well, which I've done with dynamically created classes and virtual functions. There is a lot involved, in other words - if you want to do it yourself for learning purposes, you should look at examples first.

Here are a few things I hadn't integrated into my GUI from day one and wish I had:

* Make sure the format is easy to understand if you are going to be loading from file. A text format like XML works well. If you decide to create everything in code, use scripting or a separate DLL so you don't have to recompile the whole game. My first GUI format was binary, so I ended up having to create it in code and then save it to a binary file, since I didn't have time to make an editor.

* Integrate an automatic layout system into the positioning and sizing of your whole GUI. Even if you use a text format to position and size the different controls in your GUI, it still takes forever to lay out stuff by hand. The alternative is to write a GUI editor, but most people will not want to take the time to write tools when they want to work on the game. A simple layout engine that works like HTML page layout, specifying margins, padding, breaks, alignment and attachment, will make this considerably faster.

* Use a styling mechanism to specify graphics. I started off specifying texture file names and coordinates on the texture atlas for every control. So if I had four buttons that used the same textures for all four states (normal, hover, pushed, disabled), the source texture coordinates and the texture path would be duplicated for each. I switched to a styling mechanism where I have a style file (like CSS) and the definition for each GUI control specifies the name of the style it uses. The control then reads the texture path and coordinates from the appropriate entry in the style file when loading. This also gives you switchable themes for your GUI, making it more reusable in your other games.
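The styling mechanism from the last bullet might look roughly like this; a minimal sketch where the Style fields and the lookup are assumptions, not the author's actual format:

```cpp
#include <cassert>
#include <map>
#include <string>

// One entry in the CSS-like style file: where on which texture atlas
// a control's graphics live.
struct Style {
    std::string texturePath;
    int srcX, srcY, srcW, srcH; // source rectangle on the texture atlas
};

// Shared table of styles; controls store only a style name, so four
// buttons with identical graphics all reference one entry.
struct StyleSheet {
    std::map<std::string, Style> styles;
    const Style* Find(const std::string& name) const {
        auto it = styles.find(name);
        return it == styles.end() ? nullptr : &it->second;
    }
};
```

Swapping themes then amounts to loading a different StyleSheet while control definitions stay untouched.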
  6. Video Settings of my Game

    I think the texture detail setting is supposed to reduce texture memory usage, which means you physically have to load smaller-resolution textures. SetLOD only controls the mip level used from a texture whose mip levels are already loaded. In order to selectively load mip map levels from a texture, I think you would have to load the texture at default resolution, then go through all the levels (see GetLevelCount) and call GetSurfaceLevel to copy data into another, "reduced" texture while skipping the first n levels, depending on how low the quality is. Then unload the original texture and use the "reduced" texture in its place. There is also a Filtering setting in most games, which controls the sampler state between Point, Linear and Anisotropic (see D3DTEXTUREFILTERTYPE). Newer implementations may also have Pyramidal, Quad, etc. You set that with SetSamplerState for the MagFilter and MinFilter states.
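The mip-skipping arithmetic implied above is simple to sketch: each mip level halves the previous one's dimensions (clamped at 1), so skipping n levels divides the top-level size by 2^n. The helper names here are mine:

```cpp
#include <cassert>

// Dimension of the new top level after skipping `levelsSkipped` mips.
int MipDimension(int topDimension, int levelsSkipped) {
    int d = topDimension >> levelsSkipped;
    return d < 1 ? 1 : d;
}

// Number of levels left in the "reduced" texture (never below 1).
int ReducedLevelCount(int levelCount, int levelsSkipped) {
    int n = levelCount - levelsSkipped;
    return n < 1 ? 1 : n;
}
```

For example, a 1024x1024 texture with a full 11-level chain reduced by two levels becomes a 256x256 texture with 9 levels, cutting its memory use to roughly one sixteenth.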
  7. I looked at the Instancing sample in the SDK. I have one question: if I store the positions, colors and tex coords of my quads in shader constants as arrays, then what should the verts themselves contain? Should I have each vert contain just the diffuse color as the only component, so I have some kind of data to send? Seems like a huge waste, since most characters will be the same color, and all verts in one character are guaranteed to be the same color.
  8. I am using D3D9; I believe instancing is a D3D10 or D3D9.0c-only feature? The major part of the slow-down is in rendering text. Somehow I doubt that sending 500 or so draw calls is going to be faster than sending 2 draw calls with 250 quads each. In fact, without instancing, I'm pretty sure it wouldn't be faster.
  9. Hello, this is a revisit of http://www.gamedev.net/community/forums/topic.asp?topic_id=576359. I implemented my own mechanism for rendering quads, lines, points and point sprites using a material system as a replacement for D3DXSprite. The performance I am getting is far below what I was getting with D3DXSprite previously, so I am trying to figure out what I can do to improve it.

What I tried:

- Analysis: probably not GPU bound. Resolution, texture size, shader complexity, actual number of vertices per renderable, and blend states don't make a lot of difference.
- Profiled graphics using PIX. Found that D3DXSprite uses a static index buffer and a circular locking scheme for the vertex buffer. Implemented both: performance improved, but not enough.
- Profiled using AMD CodeAnalyst. Most time is spent generating vertices and moving data to system memory and video memory buffers, which is how the scheme works. For font rendering, most time is spent accessing a std::map to look up character info by its code.
- Added "restore pass" functionality to the effect system to avoid saving/restoring the state of the entire pipeline. Performance didn't improve.
- Added caching of certain parameters, such as the WorldViewProjection matrix, to avoid setting them more than once per frame. Performance didn't improve.
- Added caching of the current effect technique to avoid calling SetTechnique when not needed. Performance didn't improve.
- Added a concatenation feature to reduce the number of renderables. If the last renderable has the same primitive type/material/transform/z-order, the verts from the renderable-to-be-added get concatenated onto the last renderable. Performance doubled, but still not enough.
- Made vertex structures contain only primitive data types (without explicit constructors), because the AMD profiler shows a lot of time spent initializing them. No performance improvement.

Statistics (release mode, pure device, release D3D runtime, running 800x600 fullscreen 60Hz, no v-sync):

- 6 renderables, 216 triangles, 6 batches, 24 state changes, 48 filtered state changes >> 0.74 ms per frame, 1221 fps.
- 8 renderables, 1476 triangles, 8 batches, 32 state changes, 64 filtered state changes >> 1.69 ms per frame, 539 fps.

That was on a last-generation computer: 1.64 GHz, 2 GB RAM, NVIDIA Quadro NVS 285. On my last-generation multimedia laptop, the first case runs at 240 fps and the second at 54 fps. The last binary compiled using D3DXSprite did 240 fps in both cases on the same laptop. On the laptop, they don't run fast enough to pull off 60 Hz with v-sync on, which is below my quality standards.

Thanks, and I appreciate your suggestions for improvement. Val
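The concatenation feature mentioned above might be sketched like this; the types and fields are hypothetical simplifications of the engine's actual renderables:

```cpp
#include <cassert>
#include <vector>

struct Vertex { float x, y; };

struct Renderable {
    int primitiveType, materialId, transformId, zOrder; // the batching key
    std::vector<Vertex> verts;
    bool SameKey(const Renderable& o) const {
        return primitiveType == o.primitiveType && materialId == o.materialId &&
               transformId == o.transformId && zOrder == o.zOrder;
    }
};

// When the renderable being queued shares the previous one's batching key,
// append its vertices instead of starting a new renderable (and thus a new
// draw call).
void QueueRenderable(std::vector<Renderable>& queue, const Renderable& r) {
    if (!queue.empty() && queue.back().SameKey(r)) {
        queue.back().verts.insert(queue.back().verts.end(),
                                  r.verts.begin(), r.verts.end());
    } else {
        queue.push_back(r);
    }
}
```

Two quads with identical state collapse into one renderable of 8 vertices; a material change starts a fresh entry.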
  10. For anyone interested, the problem was resolved. It turns out GetGlyphOutline returns different values when it's called with the GGO_METRICS flag versus the GGO_BITMAP flag. GGO_METRICS is used to get only the metrics, while GGO_BITMAP is used to retrieve the bitmap image of the glyph, which also returns metrics. I was using part of the spacing info from the metrics returned with GGO_METRICS and part from the metrics returned with GGO_BITMAP, which resulted in an incorrect "origin" value being set for the glyph. This caused certain glyphs to appear 1-2 pixels below the baseline. I modified my code to call GetGlyphOutline with the GGO_METRICS flag only for estimating how much space the next character to be copied into the texture atlas will take. I call GetGlyphOutline with GGO_BITMAP when I am ready to retrieve the glyph image, and I use the final spacing from that call to set the glyph origin. The problem with kerning was also resolved - a silly logic error in my code. GetKerningPairs works fine and returns reliable information.
  11. Hello, my engine uses an Effects system and has a state manager on a pure device, just like the State Manager sample in the SDK. Because my states are cached, I found that the state blocks the Effect uses internally to save and restore state cause my cached states to become out of sync with the real states, because my state manager class gets no notification of the states set when the state block's Apply() runs. This in turn causes total havoc, with useful states being filtered out when they shouldn't be.

I tried to solve this problem by creating a wrapper around state blocks that works only with the pure device and records in-memory cached states at the same time as the states in the D3D state block. I used that, along with the do-not-save-state flag for the Effect system, to create my own save/restore mechanism that restores pipeline states to default agreed-upon states and does the same with the in-memory shadow to keep everything in sync. This I succeeded in; however, when I ran the program and compared execution speeds, I found that all the extra code used to sync cached states and real states actually slows everything down noticeably - defeating the whole point of having a state manager.

The solution the State Manager sample uses is to set the relevant states in the first pass of every technique - but for an application with lots of shaders and states to keep track of, this is not feasible. Another solution that would work is to dirty all states every time an effect is applied, but that also defeats the point of having a state manager. The only other solution I can think of is dropping the state manager and using only state blocks to set state for all the effects, instead of allowing the effects system to change any states. That way I can record a state block for each pass based on the default agreed-upon states plus the states specified for that pass in the effects file, then apply the state block before each pass. Does anyone have an idea of the best way to do this?
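The redundant-state filtering and the "dirty all states" workaround discussed above can be sketched with a simple cache. This is a minimal illustration, not the engine's actual state manager; the device call itself is omitted:

```cpp
#include <cassert>
#include <map>

// Caches render states so duplicate sets can be filtered out. DirtyAll()
// invalidates the cache so the next set of every state goes through -- the
// heavy-handed fix for state blocks applied behind the manager's back.
class StateManagerSketch {
public:
    // Returns true when the state would actually be set on the device,
    // false when the call was filtered as redundant.
    bool SetRenderState(int state, unsigned long value) {
        auto it = m_cache.find(state);
        if (it != m_cache.end() && it->second == value)
            return false; // same value already set: filter it out
        m_cache[state] = value;
        return true;
    }

    // Forget everything cached, e.g. after an effect's state block Apply().
    void DirtyAll() { m_cache.clear(); }

private:
    std::map<int, unsigned long> m_cache;
};
```

The trade-off described in the post is exactly this: DirtyAll restores correctness after an un-notified Apply(), but every call after it passes through unfiltered, erasing the manager's benefit.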
  12. Thank you for your replies. I fixed the horizontal inter-letter spacing; that works fine now. The spacing of characters from the baseline is still incorrect. I am using exactly the same formula as what you posted. With the font Courier New at point size 11, TrueType, glyphs packed into the texture with their black boxes as bounding boxes, characters like "e", "s" and "o" fall a few pixels below the baseline. I am using the Ascent - Origin formula to calculate the Y position of the glyph relative to the current drawing position, exactly the way you do. The origin of those problematic letters is 6 pixels down from the top of the glyph image's black box, the same as the origin of some non-problematic letters such as "n". I am guessing there is some magical value that I'm supposed to be adding or subtracting that is meaningful for "e", "s" and "o" but zero for most other letters. I have no idea what that value could be - maybe something in the TEXTMETRIC structure?
  13. Hello, I implemented custom font rendering in my engine using bitmapped fonts (loaded at runtime from a TrueType or raster font). Most of it is working well; however, I am getting visual artifacts in the horizontal and vertical spacing of drawn glyphs from the baseline and from each other. It was very challenging for me to figure out how to apply correct spacing, so perhaps I just haven't figured it out all the way. The biggest problem is the spacing of some letters from the baseline in the second label on the form, the one with the Courier New TrueType font applied. Some of the glyphs are "riding low", as you can see. The first label also has weird spacing between "e" and "s". Please take a look at the pseudo code below - am I doing everything right?

1. Add pre-draw spacing to the draw position before drawing the glyph. This is the abcA value from the Win32 ABC structure.

DrawPosition.x += Spacing.A

2. Draw the glyph using the following coordinates:

X = DrawPosition.x + KerningPairs[PrevChar][ThisChar].KernAmount;
Y = DrawPosition.y + TextMetrics.Ascent - GlyphMetrics.Origin.y;

3. Add post-draw spacing to the draw position after drawing the glyph. These are the abcB and abcC values from the ABC struct.

DrawPosition.x += Spacing.B + Spacing.C

[Edited by - ValMan on October 12, 2010 5:35:46 PM]
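The three steps of the pseudo code above can be collapsed into one small function to check the arithmetic. The field names here are mine; the values are assumed to come from the Win32 ABC struct, TEXTMETRIC, and the glyph metrics mentioned in the post:

```cpp
#include <cassert>

// Per-glyph spacing inputs: abcA/abcB/abcC from the ABC struct, plus the
// glyph origin's height above the baseline.
struct GlyphMetricsLite { int abcA, abcB, abcC, originY; };

struct DrawResult { int drawX, drawY, newPenX; };

DrawResult PlaceGlyph(int penX, int penY, int ascent,
                      const GlyphMetricsLite& g, int kernAmount) {
    int x = penX + g.abcA;               // 1. pre-draw spacing (abcA)
    DrawResult r;
    r.drawX = x + kernAmount;            // 2. kerning shifts this glyph only
    r.drawY = penY + ascent - g.originY; //    black-box top relative to baseline
    r.newPenX = x + g.abcB + g.abcC;     // 3. post-draw spacing (abcB + abcC)
    return r;
}
```

Note that the kerning amount affects only where the current glyph is drawn, not where the pen advances to, matching the pseudo code.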
  14. [EDIT] Updating for anyone who might later hit the same problem. It appears the cause of this error was two-fold.

The reason the problem was happening only in Debug mode is that vertex buffers get cleared to zeros by the debug runtime for validation purposes - so if you try to draw something and pass a cleared vertex buffer, you get no output. So, no output in PIX either.

The reason for the weird flashing and wrong texturing is that I was calling DrawIndexedPrimitive and passing it the wrong data to render (which only shows up in Release mode, because otherwise that "wrong data" gets cleared and produces no output). Specifically, when I passed the BaseVertexIndex value to DrawIndexedPrimitive, I had no idea that this value actually gets added to every index in the index buffer. Perhaps the docs weren't clear, or perhaps I am a bad reader. In any case, I was passing BaseVertexIndex thinking it was a simple offset to where reading starts, when in reality this value gets added to every index.

As of this time I haven't gotten it to work yet - still digging through the details - but I am happy that I found the cause of this error. It gave me headaches for 4 months.

[Edited by - ValMan on September 30, 2010 1:08:02 PM]
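The BaseVertexIndex behavior described above can be modeled in plain code: the vertex fetched for index i is indices[StartIndex + i] + BaseVertexIndex, i.e. the offset is added to every index rather than being a raw read offset into the vertex buffer:

```cpp
#include <cassert>
#include <vector>

// Model of the index-to-vertex mapping DrawIndexedPrimitive performs:
// BaseVertexIndex is added to the value read from the index buffer,
// not to the position where reading starts.
int FetchedVertex(const std::vector<int>& indices, int startIndex,
                  int i, int baseVertexIndex) {
    return indices[startIndex + i] + baseVertexIndex;
}
```

So with indices {0, 1, 2, 2, 1, 3} and BaseVertexIndex 100, the draw references vertices 100-103, not vertices 0-3 read from an offset of 100.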
  15. Hello, I wrote my own batching system to replace D3DXSprite because I wanted more flexibility. Recently I started getting this problem: when I drop my in-game console and start typing, some of the triangles rendered later in a batch than the console text suddenly start flashing with the wrong texture. It's as if, for the duration of one frame, when a new pair of triangles gets added to the render batch for a typed character, something in the batch processing code goes wrong and one of the renderables gets the texture from the previous or next renderable in the sequence.

I realize there is nothing you can do that would be much better than what I can do, since I wrote the code, but the reason I am asking is that when I do a PIX run and record every single frame from start to finish... the flashing doesn't show up! How can this be? If I can see some triangles with the wrong texture applied for a split second, then at least one frame must have those triangles textured with the wrong texture. Yet as I looked through every single frame in the PIX capture, all the rendering was spot-on perfect and all the D3D calls were correct.

What's worse, this problem only happens in Release mode! So I can't even step through anything to debug and try to see what goes on. Has anyone had an experience like this? Can you recommend some other debugging method to catch what's happening? I just finished writing some debugging code that dumps the contents of the render queue each time it gets submitted to DirectX, looked carefully through the thousands of entries for every triangle and every batch, and everything looks perfect as well! I really don't know what to do any more.