tonemgub

Members
  • Content count: 437
  • Joined
  • Last visited

Community Reputation

2008 Excellent

About tonemgub

  • Rank: Member
  1. I don't think these are mistakes. A sliding collision response done with floating point is supposed to stop the sphere a small distance short of the sliding plane; otherwise, floating-point error would let the sphere pass through the plane. The veryCloseDistance value also might not be a mistake. My guess is that it is the distance to the plane projected onto the velocity vector. Maybe that value was already available earlier in the algorithm and is being reused instead of deriving a new value along the normal, which would add more floating-point error. The rest of the algorithm probably does its distance calculations along the velocity rather than along the plane normal, too.

It might also be that veryCloseDistance is velocity-based so that the sphere approaches the plane gradually, at a rate that depends on the angle of the velocity vector to the plane, because floating-point errors cause more trouble at sharper angles. In any case, the velocity-based veryCloseDistance will never be smaller than the normal-based one, so this is not really going to make the algorithm fail. It just spreads the actual collision detection across the next few frames, until the velocity-based value matches the normal-based one. And from what you said, that is exactly how the algorithm is supposed to work, right?

I haven't read the document though - just guessing.
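Roughly what I mean, as a sketch only (the names and the distToCollision input are mine, not from the paper):

    // Stop the sphere 'veryCloseDistance' short of the first contact point,
    // measured along the velocity vector, so floating-point error can't
    // push it through the plane.
    #include <cmath>

    struct Vec3 { float x, y, z; };

    static Vec3 Add(const Vec3& a, const Vec3& b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
    static Vec3 Scale(const Vec3& v, float s)     { return { v.x * s, v.y * s, v.z * s }; }
    static float Length(const Vec3& v)            { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

    // distToCollision = distance along 'velocity' at which the sphere would
    // first touch the plane this frame.
    Vec3 MoveUpToContact(const Vec3& position, const Vec3& velocity,
                         float distToCollision, float veryCloseDistance)
    {
        float travel = distToCollision - veryCloseDistance;  // stop short of the plane
        if (travel < 0.0f)
            travel = 0.0f;                                   // already within the epsilon
        Vec3 dir = Scale(velocity, 1.0f / Length(velocity));
        return Add(position, Scale(dir, travel));
    }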
  2. OpenGL

    I'm not sure if this is the problem, but you are storing the four vertex positions of a quad followed by the four texture coordinates in your vertex buffer. Instead, you should store each vertex's texture coordinate immediately after that vertex's position. I think part of your texture coordinates is being interpreted as vertex positions, which is what forms that weird grid. Your vertex format declaration might also be wrong.
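A minimal sketch of the interleaved layout I mean (the struct and the values are just illustrative):

    // Each vertex carries its own texture coordinate right after its position,
    // instead of 4 positions followed by 4 texture coordinates.
    struct QuadVertex
    {
        float x, y, z;   // position
        float u, v;      // texture coordinate for this vertex
    };

    QuadVertex quad[4] =
    {
        { -1.0f, -1.0f, 0.0f,   0.0f, 1.0f },
        { -1.0f,  1.0f, 0.0f,   0.0f, 0.0f },
        {  1.0f, -1.0f, 0.0f,   1.0f, 1.0f },
        {  1.0f,  1.0f, 0.0f,   1.0f, 0.0f },
    };
    // The vertex declaration then uses a stride of sizeof(QuadVertex), with the
    // texture coordinate at an offset of 3 * sizeof(float).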
  3. Volume Tiled Resources. Same as in this demo: https://www.youtube.com/watch?v=SN2ayVd9-3E   Here's a slideshow: http://developer.download.nvidia.com/assets/events/GDC15/Dunn_Alex_SparseFluidSimulation.pdf
  4. Old Microsoft mouse broke and I bought a new one for a few bucks. An M90, with huge "possible radiation damage!" warnings stamped all over the instruction manual. On the box it's all "top provider", blah blah. Came here after finding this thread: http://www.gamedev.net/topic/446736-the-invisible-laser-mouse/ Unfortunately, mine doesn't simply say "don't look into it!"..

Should I be worried? Had a quick comb of the house for other boxes. Couldn't find any, but peering at the desktops, it's Microsoft, wireless, etc. I've bought two mice before - both of them wireless, one a Logitech. That was a time in my life when I never bothered to read the manual first, though I should've kept it somewhere just in case. Too many house moves make that an obscure somewhere. Had a quick search online. Invisible laser beam for some Logitech. Surface-visibility protection for a wireless one. Constant invisible beam (when left upside down) for another - not sure if wireless. A review site (not Logitech's) rated the exact mouse 3/4 towards a positive recommendation. Logitech staff replied to a poster on their forum stating that no wireless mouse emits radiation - a question asked and answered in 2008 by a now-retired employee who registered in 2007.

Now I'm scared for my life. Not really. I'm sure, being a "top provider", it's got most cases taken care of. For over 14s. Computer hibernation of a million days, accidental mouse drops, mouse cleanings, another person's back as a mouse mat, the habit of side-to-side sways while impatiently waiting for pages to load. What if I wet-cleaned the mouse pad itself? How dry does it need to be before it can be used again? Or the glass desk? This isn't even counting the amount of tissue-assisted dust cleaning with the old mouse.. Should I be worried about tissue fluff left inside?

I want to be prepared for the one instance in which I am somehow different in the way I treat my mouse. Gotta be more careful with water spills than ever, for one thing.. And no, I haven't tried it yet. Thank you for sticking with me through my highly paranoid rantings.
  5. X references the list of mesh(es) it needs, and each mesh references the textures & shaders it needs.

The renderer doesn't need to know that. It just needs a list of polygons, textures and shaders to send to the GPU.

The rendering queue. It takes as input all the info it needs about resources (like the references between resources mentioned above) to decide when one of X's resources is no longer needed or when new resources must be loaded - but it does not hold any info about X. Everything it holds is considered a primitive resource that can be sent directly to the GPU for rendering, even transformation matrices. The goal here is to send as little data as possible to the graphics card every frame, so the rendering queue must keep track of how many times each resource is referenced in the current frame and discard resources that are no longer referenced. The rendering queue must also be optimized towards reducing the number of draw calls, so when new resources are added, it should sort them based on specific criteria, like the textures and/or shaders they use.

The drawing code should not be bound to X, and it does not need to care what X is. Drawing should be implemented in a single general-purpose Render() function that just takes the list of polygons, textures and shaders from the (optimized) render queue and passes them to the GPU. If you have things like different vertex formats for different meshes, then the vertex formats should be added as resources to the render queue (and referenced by X) and then used in the Render() function. Gameplay logic is handled in a separate Update() function, which fills in the higher-level data about X - like its position. The Update function handles interactions between all the Xs and Ys, and then places them into the render queue. It does not handle resources. Resources are allocated/discarded only by the rendering queue, when X is added to the queue or when Y is removed.

Only X itself should know what it is and the ways it can interact with all the Ys. X should have virtual methods that can be called like this in the Update function: X->InteractWith(Y). Ys should also have virtual methods that tell X what they are, so X can decide what to do in the Interact method - like Y->Position(). Initially, you should start by defining only a single class for both X and Y, and stuff as much functionality in there as possible, along with the methods required for updating. Later, as you find that you need different kinds of data or behaviour, you can start declaring X, Y and Z classes derived from that base class, which override the base class's InteractWith(Y) and/or Position() methods. Then you can also try using dynamic_cast to tell X apart from Y and decide which methods to call between them, but ideally X, Y and Z shouldn't declare additional methods for interaction... X should be able to figure out what it needs to know about Y just by using the Y->Position() method that is already available in the base class and overridden by Y. When you start using dynamic_cast, things get complicated fast. Or, to put it another way: the interactions should not depend on what X or Y is, only on specific properties of X and Y, like the position. Only the end user really needs to know that X is X and Y is Y, and for that you simply add something like a "Name" data member to the base class of X and Y, which holds the info needed to tell the user that X is X and Y is Y.

Or, an even simpler explanation: instead of thinking that the code (the interactions) must be written around what X is, try thinking about it the other way around: what X does (the code in its Interact methods) defines what X is.
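A rough sketch of that single-base-class idea (all class and method names here are made up):

    #include <string>

    struct Vec3 { float x, y, z; };

    class Entity
    {
    public:
        virtual ~Entity() {}

        // Generic properties that interactions are allowed to depend on.
        virtual Vec3 Position() const { return position; }

        // Each entity decides for itself how to react to another entity.
        virtual void InteractWith(Entity& /*other*/) {}   // default: do nothing

        std::string name;            // only used to tell the user what this is
    protected:
        Vec3 position = { 0.0f, 0.0f, 0.0f };
    };

    class Player : public Entity
    {
    public:
        void InteractWith(Entity& other) override
        {
            // Depends only on what the base class exposes, not on the
            // concrete type of 'other'.
            Vec3 target = other.Position();
            // ... e.g. compare distances, move towards 'target', etc.
            (void)target;
        }
    };

    // In Update(): for every relevant pair, call a->InteractWith(*b);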
  6. Actually, I believe this is exactly what Mario used. I played different versions of Mario a lot, and IIRC the player could still fall through gaps even while running, when still speeding up (or when slowing back down to normal speed). Most likely the gap-skip happened within one or two frames, and gravity didn't have time to kick in in that short a time. To increase the effect, you could also round the player's position to integers (and round gravity down). I think Mario does this, since it always displays the character at whole-pixel coordinates.
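Purely as speculation, a tiny sketch of why a per-frame ground check skips the gap (tile size and speed are made up):

    // 16-pixel tiles; tile column 5 is a one-tile-wide gap, columns 4 and 6 are solid.
    int TileUnder(float x) { return static_cast<int>(x) / 16; }

    void GapSkipExample()
    {
        float x        = 4 * 16.0f + 12.0f;   // near the right edge of tile 4
        float runSpeed = 24.0f;               // more than one tile per frame at full run

        int before = TileUnder(x);            // 4 -> solid, no fall this frame
        x += runSpeed;
        int after  = TileUnder(x);            // 6 -> solid, no fall next frame either
        // Tile 5 (the gap) is never sampled, so gravity never gets a frame to act.
        (void)before; (void)after;
    }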
  7. Why do you want to mimic FlattenPath? Just use something simple, like a multiple of the text size for the Bezier resolution (resolution = number of segments, which also determines the final number of triangles). A safe resolution would probably be glyph_width * glyph_height, since no Bezier curve leaves the glyph's bounding box, and the worst case for a Bezier would be to cover every pixel in the box (hypothetically - this will never actually happen).

Alternatively, you could always use a constant, high-enough Bezier resolution that looks good at the largest text size you're going to draw; it will still look OK when scaled down, even if you end up with 1000 segments (or triangles) covering the same pixel.

Anyway, like I've been saying - you really have your work cut out for you if you decide to go ahead with tessellating the font data yourself.

If you just want to draw screen-space text, you should use GDI to draw each character into a texture, then use that as a texture atlas for drawing text. I believe this is what you originally intended, and I've already explained how you can deal with your alpha-blending problem in my first reply. After investigating a bit, it seems this is also how ID3DXFont works - it can only be used for screen-space fonts. When I mentioned tessellation, I was thinking of what D3DXCreateText does, not D3DXCreateFontIndirect - I've never used either myself, only made some assumptions based on the samples I saw, so sorry if I misled you.
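For example, a brute-force, fixed-resolution flattening of one cubic Bezier by uniform parameter steps (names are mine; a simpler alternative to adaptive subdivision):

    #include <vector>

    struct Point { float x, y; };

    // Approximates the cubic Bezier p0..p3 with 'segments' line segments.
    std::vector<Point> FlattenCubicBezier(Point p0, Point p1, Point p2, Point p3, int segments)
    {
        std::vector<Point> out;
        out.reserve(segments + 1);
        for (int i = 0; i <= segments; ++i)
        {
            float t = static_cast<float>(i) / segments;
            float u = 1.0f - t;
            // Bernstein basis of the cubic Bezier
            float b0 = u * u * u;
            float b1 = 3.0f * u * u * t;
            float b2 = 3.0f * u * t * t;
            float b3 = t * t * t;
            out.push_back({ b0 * p0.x + b1 * p1.x + b2 * p2.x + b3 * p3.x,
                            b0 * p0.y + b1 * p1.y + b2 * p2.y + b3 * p3.y });
        }
        return out;
    }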
  8. I also found info about ear clipping for triangulation some time ago, but I remember I couldn't use it (or it wasn't enough) for font triangulation, because it only works on a single polygon, whereas the glyph data is made up of multiple polygons: clockwise polygons must be filled, counter-clockwise ones are holes. You would somehow have to merge the "hole" polygons and the filled ones into a single polygon - maybe by adding some extra "cutting" edges. And even then, you will have glyphs made up of multiple, separate filled polygons (like the "i"), so you would have to detect this and avoid joining them into a single polygon. While it kept me awake a few nights, I quickly gave up on all that when I found out about SDF, so I can't help you more than this. I never got to an actual implementation - just a lot of thoughts.

You don't need to worry about rasterizing the Beziers yourself... Once you've split them into line segments and triangulated them along with the rest of the glyph data, DirectX can handle them perfectly fine (as floating point). Or maybe I misunderstood what you meant by "rasterizing a Bezier"? The simplest algorithm for splitting Bezier curves into line segments is probably recursive subdivision.

EDIT: Actually, the Bezier curves from GetGlyphOutline are cubic splines (4 control points), not quadratic (3 control points), so you need to find an appropriate algorithm. The subdivision algorithm I mentioned only works for quadratic Beziers (that I know of).
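For what it's worth, the recursive-subdivision idea for a quadratic Bezier looks roughly like this (my own names; pick the flatness tolerance yourself):

    #include <vector>

    struct Point { float x, y; };

    static Point Mid(Point a, Point b) { return { (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f }; }

    // Appends the end points of the line segments approximating the curve p0..p2.
    void SubdivideQuadratic(Point p0, Point p1, Point p2, float tolerance, std::vector<Point>& out)
    {
        // Flatness test: how far the control point is from the chord's midpoint.
        Point chordMid = Mid(p0, p2);
        float dx = p1.x - chordMid.x;
        float dy = p1.y - chordMid.y;
        if (dx * dx + dy * dy <= tolerance * tolerance)
        {
            out.push_back(p2);                 // flat enough: one segment up to p2
            return;
        }

        // de Casteljau split at t = 0.5
        Point l1 = Mid(p0, p1);
        Point r1 = Mid(p1, p2);
        Point m  = Mid(l1, r1);                // point on the curve at t = 0.5
        SubdivideQuadratic(p0, l1, m, tolerance, out);
        SubdivideQuadratic(m, r1, p2, tolerance, out);
    }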
  9. Sorry, I can't help you with tessellation. It's a pretty complex subject, requiring a lot of math. It's not just something you can plug into your code as a single algorithm - or if it is, I haven't yet found such an algorithm that wasn't part of a full-fledged library. Some thoughts of my own:

The most you can get from the Windows API is the glyph data, using GetGlyphOutline. The glyph data is stored as multiple polylines. Each polyline is made up of lines and Bezier curves, and there is also an API function that you can use to flatten the glyph data returned from GetGlyphOutline, but I can't remember the name; it turns the Bezier curves into lines, which is more useful for generating triangles.

However, the glyph data does not provide any info about which side of the polylines must be filled (whether a polyline represents a filled polygon or a hole in the glyph). The Windows font rasterizer decides what to fill from the winding of the polylines - if they are clockwise, they are filled; if not, they are holes.

IIRC, OpenGL's default tessellation can be used this way too: you can tell it to treat clockwise polygons as filled and counter-clockwise polygons as holes. DirectX doesn't have this. But even OpenGL's tessellator doesn't handle concave polygons, and I think the polylines from the glyph data are made up of both convex and concave parts. The Windows rasterizer probably determines the per-polygon winding by summing up the windings at each polygon vertex. OpenGL doesn't do that - I think it just splits higher-order polys into triangles and then uses the winding of each individual triangle (or it just assumes that a filled polygon is clockwise all the way around, and a hole polygon counter-clockwise).

See if this VB example helps you. I haven't tested it myself to see what it does, though. Just looking at the screenshot of the program and its description, I think that with a bit of effort you could also use it as a design-time tool, to generate the vertices for any font you want to use and store them with your project. I think the polygons it outputs are convex - you just have to draw each of them as a triangle fan in DirectX.

Oh, and you should also look into Signed Distance Field bitmap fonts. They are just like (black and white) bitmap fonts, except that each bitmap pixel represents a distance to the glyph's outline instead of an actual color, and the distances are negative outside the glyph and positive inside. They can be rendered as triangle lists (one quad per character) using a special pixel shader; you can find it easily by googling "Signed Distance Field font". I think rastertek.net had a tutorial, and there is also a tool to generate the SDF bitmap texture somewhere here on gamedev.net. This is what I'm currently using (and I think most video games use it too). The only downside is that sharp corners in the glyphs get slightly rounded at larger text sizes, but it's really not that noticeable.
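In case it helps, a rough sketch of grabbing the raw outline with GetGlyphOutline and walking the contour records (error handling and the actual flattening/triangulation are left out; FixedToFloat is just my helper):

    #include <windows.h>
    #include <vector>

    static float FixedToFloat(const FIXED& f)
    {
        return f.value + f.fract / 65536.0f;
    }

    void WalkGlyphOutline(HDC hdc, wchar_t ch)
    {
        GLYPHMETRICS gm = {};
        MAT2 identity = {};                 // identity transform
        identity.eM11.value = 1;
        identity.eM22.value = 1;

        // First call asks only for the required buffer size.
        DWORD size = GetGlyphOutlineW(hdc, ch, GGO_NATIVE, &gm, 0, nullptr, &identity);
        if (size == GDI_ERROR || size == 0)
            return;

        std::vector<BYTE> buffer(size);
        GetGlyphOutlineW(hdc, ch, GGO_NATIVE, &gm, size, buffer.data(), &identity);

        // The buffer holds one TTPOLYGONHEADER per contour, each followed by
        // a run of TTPOLYCURVE records (lines and splines).
        DWORD offset = 0;
        while (offset < size)
        {
            const TTPOLYGONHEADER* poly =
                reinterpret_cast<const TTPOLYGONHEADER*>(buffer.data() + offset);
            float startX = FixedToFloat(poly->pfxStart.x);
            float startY = FixedToFloat(poly->pfxStart.y);
            (void)startX; (void)startY;     // contour start point

            DWORD cur = offset + sizeof(TTPOLYGONHEADER);
            while (cur < offset + poly->cb)
            {
                const TTPOLYCURVE* curve =
                    reinterpret_cast<const TTPOLYCURVE*>(buffer.data() + cur);
                // curve->wType is TT_PRIM_LINE or TT_PRIM_QSPLINE here
                // (TT_PRIM_CSPLINE if you request GGO_BEZIER instead);
                // curve->apfx holds curve->cpfx points to flatten.
                cur += sizeof(TTPOLYCURVE) + (curve->cpfx - 1) * sizeof(POINTFX);
            }
            offset += poly->cb;
        }
    }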
  10. It doesn't look like actual collision to me. That would probably be too expensive (but what do I know? :) ).   Google turned up this: http://codea.io/talk/discussion/comment/19977/#Comment_19977
  11. ID3DXFont does not use GDI. It tessellates font glyphs straight into polygons. That's because GDI is slower.

Anyway, I did implement the exact thing you are trying to do, once. I used an ARGB texture though, initially filled with black, and after drawing the text I set the alpha values to 0 "manually" for every black pixel (because I used black for SetBkColor). For better performance, you can also use DrawText to get the rectangle of the drawn text and only fix up the alpha values inside that rectangle. Clearing the whole texture to black (before getting the DC and drawing the text) should be done on the GPU, since it's probably faster and you can clear the alpha to 0.0 at the same time. IIRC, I did it this way because I wanted to experiment with other GDI stuff, like the AlphaBlend function. This way you also get per-pixel alpha, so you can apply different transparency levels to different pieces of text drawn on the same texture (but you'd have to implement that separately, like I described).

If you don't want to use a texture with an alpha channel, you can also do the same thing in a pixel shader. Just return a 0.0 alpha value from your pixel shader whenever the input texel is black. When it is any other color, return 1.0 for opaque text, or the value you already have from the vertices' diffuse color for transparent text. Compared to my method above, this way you have to redraw the text to the texture and the fullscreen quad every time you want to change the transparency of the text, since there's no per-pixel alpha in the texture.

As for setting the blending states: http://www.directxtutorial.com/Lesson.aspx?lessonid=9-4-10
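Roughly what that "fill the alpha manually" step looked like, assuming a lockable D3DFMT_A8R8G8B8 texture (names are illustrative, error checks trimmed):

    #include <d3d9.h>

    // After GDI has drawn text onto a black background in this texture,
    // make every still-black pixel transparent and everything else opaque.
    // (The texture must be in a lockable pool for LockRect to succeed.)
    void MakeBlackTransparent(IDirect3DTexture9* texture, int width, int height)
    {
        D3DLOCKED_RECT lr = {};
        if (FAILED(texture->LockRect(0, &lr, nullptr, 0)))
            return;

        for (int y = 0; y < height; ++y)
        {
            DWORD* row = reinterpret_cast<DWORD*>(
                static_cast<BYTE*>(lr.pBits) + y * lr.Pitch);
            for (int x = 0; x < width; ++x)
            {
                if ((row[x] & 0x00FFFFFF) == 0)   // RGB still pure black
                    row[x] &= 0x00FFFFFF;         // alpha = 0 (transparent)
                else
                    row[x] |= 0xFF000000;         // alpha = 255 (opaque text)
            }
        }
        texture->UnlockRect(0);
    }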
  12.   Wow. I like that simple explanation a lot. I tried to make sense of this once, but it's way too tedious, and you'd have to be a genius to figure out the side-effects of T-junctions from all that.
  13. I found that info about 7h using memory addresses here: http://www.phatcode.net/res/221/files/vbe20.pdf - sorry, I thought you were using the same document. Heck, I didn't even know there was a 3.0 version. All PM BIOS VESA functions use memory addresses, IIRC. I don't know why GetDisplayStart is returning 0 for you. If you change it to something else with SetDisplayStart, does it return the new values?
  14. Have you tried doing a GetDisplayStart first, then adding (or subtracting) your screen height to/from the returned DX, and then passing the CX and DX values back to SetDisplayStart? Also, take a look at the "Protected Mode considerations" chapter of the VBE document. It seems to say something specific about function 07h. In PM, the CX and DX values are actually a memory address, not the scan-line & pixel position as in RM. I miss VESA. :)
  15. The per-channel intensities are still linear, though. You could use an sRGB texture to store them, but what's the point if all your calculations are in linear space?
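For reference, the standard sRGB transfer functions that an sRGB texture format would apply implicitly - if everything stays in linear space, this is exactly the step being skipped:

    #include <cmath>

    // Standard sRGB transfer function (per channel, input in [0, 1]).
    float LinearToSrgb(float c)
    {
        return (c <= 0.0031308f) ? 12.92f * c
                                 : 1.055f * std::pow(c, 1.0f / 2.4f) - 0.055f;
    }

    float SrgbToLinear(float c)
    {
        return (c <= 0.04045f) ? c / 12.92f
                               : std::pow((c + 0.055f) / 1.055f, 2.4f);
    }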