gfxCahd

Members

  • Content count: 56

Community Reputation

234 Neutral

About gfxCahd

  • Rank: Member
  1. So, no ideas? I was wondering whether I should render my text at its native scale to a texture, and then shrink that texture when rendering it to the screen. Efficiency-wise it shouldn't be much of a problem if I store and reuse the text texture, recreating it only when the text is modified. Code-wise it wouldn't be as clean as my current set-up (rendering each glyph at run-time). Is that what most games use in order to get nice small text? (A rough sketch of the idea is below.)
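    A minimal sketch (SharpDX, Direct3D 11) of the cache-and-shrink idea: draw the text once at native glyph size into an off-screen target, then sample that texture on a single scaled-down quad each frame. The per-glyph draw call is a hypothetical placeholder for the existing code.

        using SharpDX;
        using SharpDX.Direct3D11;
        using SharpDX.DXGI;

        // Off-screen target at the text's native (unscaled) size.
        var texture = new Texture2D(device, new Texture2DDescription
        {
            Width = textWidth,
            Height = textHeight,
            MipLevels = 1,
            ArraySize = 1,
            Format = Format.B8G8R8A8_UNorm,
            SampleDescription = new SampleDescription(1, 0),
            Usage = ResourceUsage.Default,
            BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource,
        });

        using (var rtv = new RenderTargetView(device, texture))
        {
            context.Rasterizer.SetViewport(0, 0, textWidth, textHeight);
            context.OutputMerger.SetRenderTargets(rtv);
            context.ClearRenderTargetView(rtv, new Color4(0, 0, 0, 0));
            // DrawGlyphs(context);  // existing per-glyph quad rendering, unchanged
            context.OutputMerger.SetRenderTargets((RenderTargetView)null);
        }
        var textSrv = new ShaderResourceView(device, texture);
        // Each frame: draw one quad sampling textSrv at the reduced size;
        // recreate the texture and SRV only when the string changes.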
  2. Yeah, I know about signed distance fields. But my issue is with small font sizes. As I understand it, the best method for small fonts is still textured quads.
  3. I am rendering text letter by letter as textured quads (my glyphs are in a texture atlas). Though my glyphs' positions and dimensions in the texture atlas are integers (in texel space), their rendered screen size and coordinates will be floats, mainly due to scaling. This results in text that does not have a consistent appearance. I know that any degree of scaling will degrade the quality of each letter; that's expected. What I am trying to solve is the case where (apparently due to sub-pixel positioning) the same letters in a line of text are rendered differently (e.g. some a's are blurry, while others are sharper). Forcing their screen coordinates to be integers solves the consistency issue, but ruins their overall positioning (I go to the trouble of calculating kerning, and forcing integer positioning renders that pointless). Is there a solution to this? The problem exists with any rendered quad; it's just more obvious when what is rendered is text. (One common compromise is sketched below.)
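    A minimal sketch of one common compromise (my suggestion, not something from this thread): accumulate the pen position in floats so fractional kerning and advances are preserved, but round each glyph's final screen position to whole pixels so identical letters always sample the atlas the same way. ShapedGlyphs, DrawQuad and the glyph fields are hypothetical placeholders.

        float penX = lineStartX;
        foreach (var (glyph, kerning) in ShapedGlyphs(text))
        {
            penX += kerning;                       // fractional kerning kept here
            int drawX = (int)Math.Round(penX);     // snapped only at draw time
            int drawY = (int)Math.Round(baselineY);
            DrawQuad(drawX, drawY, glyph);         // texel-aligned quad, consistent sampling
            penX += glyph.Advance;                 // rounding error never accumulates
        }

    Because the rounding happens per glyph while the accumulator stays fractional, kerning still shifts where each letter lands, but no letter is ever drawn at a sub-pixel offset.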
  4. Rendering Perspective Text

    Thanks, but I'm using my own custom classes. I guess my question is a bit on the abstract side, more related to architecture than anything else. Spritebatch draws quads in screen space. BillboardRenderer draws quads at world coordinates, depending on the camera frustum. I just wonder if there is an elegant way to combine the two, to have perspective text. (The core difference is sketched below.)
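    A minimal sketch of one way to look at the combination (the variable names are mine, not the poster's classes): the two renderers differ only in the transform their quad vertices go through, so a spritebatch that accepts an arbitrary matrix can do both.

        using SharpDX;

        // Screen-space text: pixel coordinates straight to clip space.
        Matrix screenTransform = Matrix.OrthoOffCenterLH(0, viewportWidth, viewportHeight, 0, 0f, 1f);

        // Perspective text: the same quads, pushed through the camera instead.
        Matrix perspectiveTransform = textWorld * view * projection;

    If the spritebatch takes its transform as a parameter (instead of hard-coding the orthographic one), perspective text falls out without a separate billboard path.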
  5. I have a spritebatch class, and a number of classes that use it, in order to render textures and text to the screen. I also have a billboard rendering class, which renders sprites in a 3D world. It turns out I need a combination of these two classes' abilities: I need to render 2D text in a 3D world. I am not going to create new "label" classes just to use the billboard renderer; that's just stupid. Applying matrix transformations to my text before sending it to the spritebatch is also cumbersome. (For some reason, too, images and text rendered this way are fixed to pixel coordinates, resulting in a weird "wobble" effect when they, or the camera, move around in the world.) So, the way I see it, I have two choices:

     1) Render my labels and whatnot through the spritebatch class to a renderTarget (not the screen), and then use the billboard renderer to render the final result in the 3D world.

     2) Re-write my spritebatch class to actually use the billboard renderer internally, and thus give me the option to render my labels etc. any way I want.

     Any suggestions? The first option is the easiest to code, and I have seen many people suggest that approach, but I can't help finding it extremely inefficient (all this switching between the renderTarget and the backbuffer should have quite an effect on performance). (Option 1 is sketched below.)
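    A minimal sketch (SharpDX, D3D11) of option 1, assuming the label target and its views already exist; spriteBatch and billboards are hypothetical stand-ins for the poster's classes:

        // Save the backbuffer target, redirect the unchanged 2D path to the label target.
        var previous = context.OutputMerger.GetRenderTargets(1);
        context.OutputMerger.SetRenderTargets(labelRtv);
        // spriteBatch.DrawString(...);                      // existing 2D text path
        context.OutputMerger.SetRenderTargets(previous[0]);  // restore the backbuffer
        // billboards.Draw(labelSrv, labelWorldPosition);    // existing 3D path

    The target switch itself is just a pipeline state change; the real cost is the extra fill and memory, and it only recurs per frame if the text is re-rendered instead of cached.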
  6. Compute Shader not running

        I ran the SharpDX equivalent: device.CheckFeatureSupport(SharpDX.Direct3D11.Feature.D3D10XHardwareOptions), which returns true. As you said, the 9500GT is a D3D10 card, so it can run a D3D11 compute shader with some limitations (feature level 10, cs_4_0), such as only having a single Unordered Access View available (hurray for shoving everything into a single array...) https://msdn.microsoft.com/en-us/library/windows/desktop/ff476331%28v=vs.85%29.aspx
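    A minimal sketch of that capability check wrapped into one test (the helper name is mine):

        using SharpDX.Direct3D;
        using SharpDX.Direct3D11;

        bool CanRunComputeShaders(Device device)
        {
            // Full compute support is guaranteed at feature level 11_0 and up;
            // 10.x hardware gets cs_4_x only if the driver opts in.
            if (device.FeatureLevel >= FeatureLevel.Level_11_0)
                return true;
            return device.CheckFeatureSupport(Feature.D3D10XHardwareOptions);
        }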
  7. Compute Shader not running

      Just tried that; the gfx card still "skips" running the shader the first time.
  8. Compute Shader not running

    Ok, I created a new project (just a Windows console app) which creates a device, sets the compute shader, and updates a few times. No multithreading. The problem persists. I also found that "some" compute shaders actually will run the first time on my hardware, but only if they are very simple (basically just copying a constant value or thread id to the RWBuffer). No multiple branches, or else my card craps out and doesn't run the shader the first time around. So, I think I have exhausted all possibilities. The reference device works just fine with my code, and multithreading has no effect either way, so I guess I either live with this strange hack, or get a new gfx card and put the burden on the user.
  9. Compute Shader not running

      DERP! I should have thought of that, thank you! Ok, so running my original code on either the reference or WARP device produces the correct result. The problem (the compute shader not running the first time it's dispatched) only exists when using my hardware, a GeForce 9500GT. Yes, I'm testing that still...
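    A minimal sketch (SharpDX) of the driver-type swap behind this test; the code path is identical, only the DriverType changes (the Debug flag is my assumption, consistent with the debug-layer warnings mentioned elsewhere in the thread):

        using SharpDX.Direct3D;
        using SharpDX.Direct3D11;

        var hardware  = new Device(DriverType.Hardware,  DeviceCreationFlags.Debug); // 9500GT: first Dispatch does nothing
        var warp      = new Device(DriverType.Warp,      DeviceCreationFlags.Debug); // software device: correct result
        var reference = new Device(DriverType.Reference, DeviceCreationFlags.Debug); // slow but exact: correct result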
  10. Compute Shader not running

    Ok, so I got the latest driver for my card (GeForce 9500GT), and the problem persists. After some gnashing of teeth, I found the following hack. The first time I need to run my compute shader, before calling dispatch, I do the following:

     • copy from the RWBuffer to the staging buffer (the RWBuffer is of course empty at this point; the compute shader hasn't run yet),
     • map the staging buffer,
     • unmap the staging buffer.

     I then proceed with my dispatch calls etc. as normal (sketch below). This works, fixing the bug on my machine at least. But any ideas on what causes this behaviour? I'm afraid this will come back to bite me in the *ss later on if I just ignore it and move on... P.S. The function that runs the compute shader on the device is called from a different thread than the one that created the device, but while that thread runs, nothing else uses the device.
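    A minimal sketch (SharpDX, D3D11) of the warm-up hack; rwBuffer is the UAV-bound buffer and stagingBuffer a CPU-readable copy of the same size (both created elsewhere; the names are mine):

        using SharpDX.Direct3D11;

        // Warm-up: touch the (still empty) buffer once before the first real dispatch.
        context.CopyResource(rwBuffer, stagingBuffer);
        context.MapSubresource(stagingBuffer, 0, MapMode.Read, MapFlags.None);
        context.UnmapSubresource(stagingBuffer, 0);   // nothing is actually read

        // ...then dispatch as normal; on the 9500GT the shader now runs the first time.
        context.ComputeShader.Set(computeShader);
        context.Dispatch(threadGroupsX, 1, 1);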
  11. Compute Shader not running

        Yeah, all I get are a few STATE_CREATION WARNING #0 messages I still need to track down, nothing else.
  12. So, I am using a compute shader in this project of mine. The problem is, the first time I call dispatch, the compute shader simply does nothing. All the data returned (through use of a staging buffer) is zero. I have verified that it's not a problem in my code; I ran the same project on a separate computer (which happens to have a newer gfx card) and the compute shader behaves properly (i.e. it does the work when it's told to). After reading this thread: www.gamedev.net/topic/661232-compute-shader-runs-more-than-once/ I have come to the conclusion that it must be a driver bug. That being said, how could I deal with this possibility in my project? How could I check from the CPU, in each update of my game loop, that the compute shader has actually run? (One possible check is sketched below.)
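    A minimal sketch of one possible CPU-side check (my suggestion, not from the thread): reserve one element of the RWBuffer as a sentinel the shader always writes (e.g. output[0] = frameStamp in HLSL), then read it back after Dispatch. Buffer names and the sentinel convention are assumptions.

        using SharpDX.Direct3D11;

        bool ComputeShaderRan(DeviceContext context, SharpDX.Direct3D11.Buffer rwBuffer,
                              SharpDX.Direct3D11.Buffer staging, uint expectedStamp)
        {
            context.CopyResource(rwBuffer, staging);
            var box = context.MapSubresource(staging, 0, MapMode.Read, MapFlags.None);
            uint stamp = SharpDX.Utilities.Read<uint>(box.DataPointer);
            context.UnmapSubresource(staging, 0);
            return stamp == expectedStamp;   // mismatch => the dispatch never ran
        }

    Note that mapping a staging buffer for read forces a CPU/GPU sync, so this check is itself a small per-frame cost.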
  13. Load DDS file to CPU

    Eh... It's like including a reference to XNA when all you want is to import a DDS file. So... I think I'll start cannibalizing. It just seemed strange that a MS framework (WIC) wouldn't support a MS file type (DDS) out of the box.
  14. I want to load a DDS file (mipmaps included) to the CPU. That means loading its values into a D3D11_SUBRESOURCE_DATA in D3D11 (in the case of SharpDX, that would be loaded into a DataRectangle). I can't use the graphics device and its context (multithreading issues), plus it's silly to load a texture to the graphics card just for the CPU to read it back. It seems, though, that WIC doesn't support DDS? I've found suggestions to use DirectX TK, but I am using SharpDX. So is there some easy solution to this, or do I have to write a managed version of whatever DirectX TK does to read DDS? (The fixed header layout is sketched below.) -thanks
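    A minimal sketch of the first step a managed port would take (the scope is my assumption; this reads only the fixed 128-byte preamble, and compressed or DX10-extended formats need more handling):

        using System;
        using System.IO;

        struct DdsHeader
        {
            public int Width, Height, MipMapCount;
            public uint FourCC;     // e.g. 'DXT1'; 'DX10' means an extended header follows
        }

        static DdsHeader ReadDdsHeader(BinaryReader r)
        {
            if (r.ReadUInt32() != 0x20534444)    // magic "DDS " (little-endian)
                throw new InvalidDataException("Not a DDS file.");
            if (r.ReadUInt32() != 124)           // dwSize of DDS_HEADER
                throw new InvalidDataException("Bad DDS header size.");

            r.ReadUInt32();                      // dwFlags
            var h = new DdsHeader { Height = r.ReadInt32(), Width = r.ReadInt32() };
            r.ReadUInt32();                      // dwPitchOrLinearSize
            r.ReadUInt32();                      // dwDepth
            h.MipMapCount = Math.Max(1, r.ReadInt32());
            r.BaseStream.Seek(11 * 4, SeekOrigin.Current);  // dwReserved1[11]
            r.ReadUInt32();                      // DDS_PIXELFORMAT.dwSize (32)
            r.ReadUInt32();                      // DDS_PIXELFORMAT.dwFlags
            h.FourCC = r.ReadUInt32();
            // ...RGB masks and caps follow; the mip chain starts right after byte 128.
            return h;
        }

    From there, each mip level's pitch and byte size follow from the format, which is what the D3D11_SUBRESOURCE_DATA entries need.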
  15. Synchronizing Compute Shader

    Ah, ok, thanks! I was hoping I could save 0.3 milliseconds or so in overhead, but it was not to be...