About avenkov

  1. Internationalization (usually shortened to i18n) is a clear requirement for any successful game or application. Especially when players are allowed to interact (chat) with each other, it is essential that they can do so in their mother tongue. While this requirement is relatively easy to meet for many Western languages, others, such as East Asian ones, require the use of input method editors (IMEs) that translate multiple keystrokes into final glyphs displayed on the screen. In this post I'll describe the basics of integrating an IME in a game and some of the challenges it poses. We recently added full IME support to Coherent UI, so most of the pain points were experienced first-hand.

Although most modern font rendering libraries have no issues showing Unicode content, generating it is another story. On most operating systems it relies on IME functionality. The input method detects the currently selected language and, as the user types keys (some methods support mouse input too), automatically proposes a valid character for that combination. For instance, the Pinyin input method relies on the user entering the pinyin of a Chinese character to receive a list of matching final characters that would otherwise be impossible to type.

[Figure: Some of the components during an IME composition]

While the user types characters, an "IME composition" is started - that means the combination might produce different outputs, and it is not committed to the text field until the user accepts it (usually by pressing 'space'). The composition can also be discarded and restarted. In the screenshot above the "composition string" is not final but shows an intermediate representation (that's why it's underlined). Part of the input has already been translated to Chinese characters, while some of it remains to be resolved. The possible Chinese characters in our example that match the current Latin characters are shown in the "candidate list".
The user can select from it, either through a number or with the mouse; PgUp/PgDn scroll the candidate list to show more options.

Most IME methods are fairly complex pieces of software, so implementing an ad-hoc one in your application, although possible, is not something I'd recommend. Usually we rely on the OS to facilitate the use of the currently selected IME. In this post I'll assume the Chinese Simplified Pinyin input method; other languages like Japanese and Korean have some differences. How to communicate and show the various IME-related data is platform-specific and more or less well documented in the API references.

All relevant systems (Windows, Mac OS X, Linux) have default implementations for handling IME input - they can show the composition window and the candidate list and submit the text to the application. If you are creating a standard window with text fields, you can safely use the system implementation. The default code works because the stock controls have already been written to interact properly with the input method. What we want, however, is to implement IME in an application like a game that:

  • usually renders fullscreen
  • does not use any of the default OS input controls

Imagine an OpenGL game where you have to input the player name. You could still try to rely on the OS to draw and show the IME-related windows, but the result will be very awkward. First, your IME composition string and candidate list will appear at your window's origin (the OS has no way of knowing where your text field is on-screen). Fullscreen applications, on Windows at least, will struggle to show the IME windows, and an extremely noticeable and unpleasant z-fighting will begin between the application and the IME windows. Finally, if the user is playing the game we don't want him to start typing IME characters, so the OS should somehow know which key events to ignore.
We need to accomplish a list of tasks to have a good, reliable IME implementation:

  • Draw as much of the IME-related information yourself (candidate list, composition window, etc.) as possible. This has the added benefit that you can style it any way you want.
  • Notify the OS when the game is in "typing" mode - the user is writing something in a text field.
  • Notify the OS when the user has cancelled the current composition by un-focusing the text field without committing the text.
  • Accept notifications from the OS when the composition has been committed or cancelled, and update the UI accordingly.
  • Read the OS hints about the composition string and display them. For instance, you could underline the current on-going composition as a hint to the user.

Drawing the IME-related windows shouldn't be a problem for any capable UI library. On Windows you must also pay attention when the candidate list changes - when the user presses PgUp/PgDn during a composition, the list might change and a Windows message is received. Also, when the numbers 1-9 are pressed, a candidate is selected and the text committed. On most systems the user can also select a candidate by clicking it with the mouse; users are accustomed to this and you should provide the functionality too.

Your UI library must support a way to tell you three things:

  • Is a text input field in focus now - so that we can enable IME at the OS level and start listening to its events.
  • Where is the caret - so that we can position the candidate list under it, as users are accustomed to.
  • Has the user changed the focus to something without text input capabilities - so that you can tell the OS to cancel the composition.

Additionally, the library should have some notion of a "composition string" - a temporary string, marked visually in some way, that is thrown away as soon as the user commits the composition and replaced with the final characters. Upon composition cancel, it should just be deleted.
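The composition lifecycle described above can be sketched as a small state holder. This is a minimal illustration, not a real integration: the class and method names are hypothetical, and in practice these callbacks would be wired to the platform's IME notifications (for example the WM_IME_COMPOSITION and WM_IME_ENDCOMPOSITION messages on Windows).

```python
# Minimal sketch of the composition lifecycle. All names here are
# hypothetical - a real integration would call these methods from the
# platform's IME event handlers.

class TextField:
    def __init__(self):
        self.committed = ""       # text already accepted into the field
        self.composition = ""     # temporary string, rendered underlined

    def on_composition_update(self, text):
        # OS reports an updated, not-yet-committed composition string
        self.composition = text

    def on_composition_commit(self, text):
        # user accepted the composition (e.g. pressed space or picked
        # a candidate): replace the temporary string with final glyphs
        self.committed += text
        self.composition = ""

    def on_composition_cancel(self):
        # field lost focus or the OS cancelled the composition
        self.composition = ""

    def display_text(self):
        # the UI would draw the composition part underlined
        return self.committed + self.composition


field = TextField()
field.on_composition_update("ni")      # user typed 'n', 'i'
field.on_composition_update("nihao")
field.on_composition_commit("你好")     # user selected a candidate
print(field.display_text())            # 你好
```

The key property the UI library must guarantee is the last one: the composition string never survives a commit or a cancel.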
These requirements appear simple enough but might require substantial coding effort. For reference, see the implementation of the IME (CustomUI) sample in the DirectX SDK - their CDXUTIMEEditBox class alone stands at 1000 lines of code, almost all of it IME-related. Alas, the OS interaction can also be somewhat tricky - especially if you need to support all desktop platforms. The way things are done on Windows is radically different from Linux and Mac OS X. Although the verbosity of the Windows API in the IME-related parts might be off-putting in the beginning, it is by far the most sane.

In conclusion, good IME support is mostly a matter of how flexible your UI library is. The OS plumbing is tricky but manageable; the UI-related requirements, however, can become quite difficult for simple libraries.

References

Using an Input Method Editor in a Game
  2. In this blog series I write about modern volume rendering techniques for real-time applications and why I believe their importance will grow in the future. If you haven't read part one - an introduction to the topic and an overview of volume rendering techniques - please check it out first. In this second post I'll explain the technical basics that most solutions share. Throughout the series I'll concentrate on 'realistic', smooth rendering - not the 'blocky' kind you can see in games like Minecraft.

Types of Techniques

Volume rendering techniques can be divided into two main categories - direct and indirect. Direct techniques produce a 2D image from the volume representation of the scene. Almost all modern algorithms use some variation of ray-casting and do their calculations on the GPU. You can read more on the subject in the papers "Efficient Sparse Voxel Octrees" and "GigaVoxels". Although direct techniques produce great-looking images, they have some drawbacks that hinder their wide usage in games:

  • Relatively high per-frame cost. The calculations rely heavily on compute shaders, and while modern GPUs have great performance with them, they are still primarily designed to draw triangles.
  • Difficulty mixing with other meshes. For some parts of the virtual world we might still want to use regular triangle meshes. The tools for editing them are well-known to artists, and moving everything to a voxel representation may be prohibitively difficult.
  • Difficult interop with other systems. Most physics systems, for instance, require triangle representations of the meshes.

Indirect techniques, on the other hand, generate a transitory representation of the mesh: effectively they create a triangle mesh from the volume. Moving to a more familiar triangle mesh has many benefits.
The polygonization (the transformation from voxels to triangles) can be done only once - on game/level load. After that, the triangle mesh is rendered every frame. GPUs are designed to work well with triangles, so we expect better per-frame performance. We also don't need radical changes to our engine or third-party libraries, because they probably work with triangles anyway. In all the posts in this series I'll talk about indirect volume rendering techniques - both the polygonization process and the ways we can use the created mesh effectively and render it fast, even if it's huge.

What is a Voxel?

A voxel is the building block of our volume surface. The name 'voxel' comes from 'volume element', and it is the 3D counterpart of the more familiar pixel. Every voxel has a position in 3D space and some properties attached to it. Although we can attach any property we'd like, all the algorithms we'll discuss require at least a scalar value that describes the surface. In games we are mostly interested in rendering the surface of an object and not its internals - this gives us some room for optimization. More technically speaking, we want to extract an isosurface from a scalar field (our voxels). The set of voxels that will generate our mesh is usually parallelepipedal in shape and is called a 'voxel grid'. If we employ a voxel grid, the positions of the voxels in it are implicit.

The scalar we set in every voxel is usually the value of a distance function at the point in space where the voxel is located. The distance function has the form f(x, y, z) = dxyz, where dxyz is the shortest distance from the point (x, y, z) to the surface. If the voxel is "inside" the mesh, the value is negative. If you imagine a ball as the mesh in our voxel grid, all voxels inside the ball will have negative values, all voxels outside the ball positive values, and all voxels exactly on the surface will have a value of 0.
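The ball example above can be made concrete with a few lines of code. This is a sketch, not production code: it fills a small voxel grid with the signed distance to a sphere, whose distance function at point p is simply |p - c| - r for center c and radius r.

```python
# Sketch: filling a voxel grid with a signed distance function.
# The surface is a sphere; its signed distance is |p - c| - r
# (negative inside, zero on the surface, positive outside).

import math

def sphere_distance(p, center, radius):
    dx, dy, dz = (p[i] - center[i] for i in range(3))
    return math.sqrt(dx * dx + dy * dy + dz * dz) - radius

# Fill an N x N x N grid; the voxel positions are implicit in the
# indices, exactly as described for a voxel grid above.
N = 8
center, radius = (3.5, 3.5, 3.5), 2.5
grid = [[[sphere_distance((x, y, z), center, radius)
          for z in range(N)] for y in range(N)] for x in range(N)]

print(grid[3][3][3] < 0)   # a voxel inside the sphere: negative value
print(grid[0][0][0] > 0)   # a corner voxel outside: positive value
```

Any shape with a computable distance function can fill the grid the same way; the polygonization algorithms that follow only ever look at these scalar values.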
[Figure: Cube polygonized with an MC-based algorithm - notice the loss of detail on the edge]

Marching Cubes

The simplest and most widely known polygonization algorithm is called 'marching cubes'. There are many techniques that give better results, but its simplicity and elegance are still well worth looking at. Marching cubes is also the base of many more advanced algorithms and will give us a frame of reference in which we can compare them more easily.

The main idea is to take 8 voxels at a time, forming the eight corners of an imaginary cube. We work with each cube independently of all the others and generate triangles inside it - hence we "march" over the grid. To decide what exactly has to be generated, we use just the signs of the voxels at the corners, which form one of 256 cases (there are 2^8 possible sign combinations). A precomputed table of those cases tells us which vertices to generate, where, and how to combine them into triangles. Vertices are always generated on the edges of the cube, and their exact positions are computed by interpolating the values of the voxels at the two corners of the edge. I'll not go into the details of the implementation - it is pretty simple and widely available on the Internet - but I want to underline some points that are valid for most MC-based algorithms:

  • The algorithm expects a smooth surface. Vertices are never created inside a cube, only on its edges. If a sharp feature happens to fall inside a cube (very likely), it will be smoothed out. This makes the algorithm good for meshes with more organic forms, like terrain, but unsuitable for surfaces with sharp edges, like buildings. To reproduce a sufficiently sharp feature you'd need a very high resolution voxel grid, which is usually unfeasible.
  • The algorithm is fast. The very difficult calculation of which triangles should be generated in each case is pre-computed in a table; the operations on each cube itself are very simple.
  • The algorithm is easily parallelizable.
Each cube is independent of the others and can be processed in parallel; the algorithm is in the "embarrassingly parallel" family.

After marching all the cubes, the mesh is composed of all the generated triangles. Marching cubes tends to generate many tiny triangles, which can quickly become a problem for large meshes. If you plan to use it in production, beware that it doesn't always produce 'watertight' meshes - there are configurations that generate holes. This is pretty unpleasant and is fixed by later algorithms.

In the next post I'll discuss the requirements of a good volume rendering implementation for a game in terms of polygonization speed and rendering performance, and I'll look into ways to achieve them with more advanced techniques.

References

Cyril Crassin, Fabrice Neyret, Sylvain Lefebvre, Elmar Eisemann. 2009. GigaVoxels: Ray-Guided Streaming for Efficient and Detailed Voxel Rendering.
Samuli Laine, Tero Karras. 2010. Efficient Sparse Voxel Octrees.
Paul Bourke. 1994. Polygonising a Scalar Field.
Marching cubes on Wikipedia.
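To close out the marching cubes discussion, the two core steps - building the 8-bit case index from corner signs, and placing a vertex on an edge by interpolation - can be sketched in a few lines. The full 256-entry triangle table is omitted here; Paul Bourke's article in the references contains the complete tables.

```python
# Sketch of the two core steps of marching cubes:
# (1) building the 8-bit case index from the signs of the corner voxels,
# (2) placing a vertex on an edge by interpolating the corner values.
# The 256-entry triangle table that maps a case index to triangles is
# omitted (see Paul Bourke's "Polygonising a Scalar Field").

def case_index(corner_values):
    """corner_values: 8 signed distance samples at the cube corners.
    Each negative ("inside") corner sets one bit of the index."""
    index = 0
    for i, v in enumerate(corner_values):
        if v < 0:
            index |= 1 << i
    return index  # 0..255; 0 and 255 mean the cube is entirely out/in

def edge_vertex(p0, p1, v0, v1):
    """Place a vertex at the zero crossing of the field between corners
    p0 and p1, whose values v0 and v1 have opposite signs."""
    t = v0 / (v0 - v1)  # fraction of the way from p0 to the surface
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))

# A cube with exactly one corner inside the surface:
values = [-0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5]
print(case_index(values))                             # 1
print(edge_vertex((0, 0, 0), (1, 0, 0), -0.5, 0.5))   # (0.5, 0.0, 0.0)
```

Note how the interpolation in edge_vertex is what produces the smoothing mentioned above: the vertex always lies on an edge, never at a sharp feature inside the cube.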
  3. A couple of months ago Sony revealed their upcoming MMO title EverQuest Next. What made me really excited about it was their decision to base their world on a volume representation, which enables them to show amazing videos like this one. I've been very interested in volume rendering for a long time, and in this series I'd like to point out the techniques that are most suitable for games today and in the near future, explaining the details of some of the algorithms as well as their practical implementations. This first post introduces the concept of volume rendering and its greatest benefits for games.

Volume rendering is a well-known family of algorithms that project a set of 3D samples onto a 2D image. It is used extensively in a wide range of fields such as medical imaging (MRI and CT visualization), industry, biology, and geophysics. Its usage in games, however, has been relatively modest, with some interesting use cases in games like Delta Force, Outcast, C&C Tiberian Sun, and others. The usage of volume rendering faded until recently, when we saw an increase in its popularity and a sort of "rediscovery".

[Figure: A voxel-based scene with complex geometry]

In games we are usually interested just in the surface of a mesh - its internal composition is seldom of interest, in contrast to medical applications. Relatively few applications have selected volume rendering over the usual polygon-based mesh representations. Volumes, however, have two characteristics that are becoming increasingly important for modern games - destructibility and procedural generation. Games like Minecraft have shown that players are very much engaged by the possibility of creating their own worlds and shaping them the way they want. On the other hand, titles like Red Faction place an emphasis on the destruction of the surrounding environment. Both these games, although very different, have essentially the same technology requirement.
Destructibility (and of course constructibility) is a property that game designers are actively seeking. One way to achieve mesh modification is to apply it to traditional polygonal models, but this has proved to be a quite complicated matter. Middleware solutions like NVIDIA APEX solve polygon mesh destructibility, but usually still require input from a designer, and the construction part remains largely unsolved.

[Figure: Minecraft unleashed the creativity of users]

Volume rendering can help a lot here. A 3D grid of volume elements (voxels) is a much more natural representation of the mesh than a collection of triangles. The volume already contains the important information about the shape of the object, and its modification is close to what happens in the real world: we either add or subtract volumes from one another. Many artists already work in a similar way in tools like ZBrush. Voxels themselves can contain any data we like, but usually they define a distance field - that means every voxel encodes a value indicating how far we are from the surface of the mesh. Material information is also embedded in the voxel. With such a definition, constructive solid geometry (CSG) operations on voxel grids become trivial: we can freely add or subtract any volume we'd like from our mesh. This brings a tremendous amount of flexibility to the modelling process.

Procedural generation is another important feature with many advantages. First and foremost, it can save a lot of human effort and time. Level designers can generate a terrain procedurally and then just fine-tune it, instead of having to start from absolute zero and work out every tedious detail. These savings are especially relevant when very large environments have to be created - as in MMORPG games. With the new generation of consoles, with more memory and power, players will demand much more and better content.
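Just how trivial CSG becomes on distance fields is worth showing. For signed distance values (negative inside the volume), union, intersection, and subtraction reduce to per-voxel min/max operations; this is the standard formulation, sketched here on single values rather than whole grids.

```python
# Sketch: CSG operations on signed distance values (negative inside).
# Applied per voxel over two grids, these combine whole volumes.

def csg_union(d1, d2):
    return min(d1, d2)        # inside either volume

def csg_subtract(d1, d2):
    return max(d1, -d2)       # inside d1 but not inside d2

def csg_intersect(d1, d2):
    return max(d1, d2)        # inside both volumes

# A point 1 unit inside shape A (-1.0) and 2 units outside shape B (2.0):
print(csg_union(-1.0, 2.0))      # -1.0 (inside the union)
print(csg_subtract(-1.0, 2.0))   # -1.0 (B doesn't cover this point)
print(csg_intersect(-1.0, 2.0))  # 2.0  (not inside both)
```

Carving a crater out of terrain, for example, is just csg_subtract between the terrain grid and a sphere's distance function - which is exactly why destructibility falls out of the representation almost for free.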
Only with the use of procedural content generation will the creators of virtual worlds be able to achieve the variety needed for future games. In short, procedural generation means that we create the mesh from a mathematical function with relatively few input parameters. No sculpting by an artist is required, at least for the first rough version of the model. Developers can also achieve high compression ratios and save a lot of download bandwidth and disk space by using procedural content generation: the surface is represented implicitly, with functions and coefficients, instead of heightmaps or 3D voxel grids (two popular methods for surface representation used in games). We already see huge savings from procedurally generated textures - why shouldn't the same apply to 3D meshes?

The use of volume rendering is not restricted to meshes. Today we see some other uses too, including:

  • Global illumination (see the great work in Unreal Engine 4)
  • Fluid simulation
  • GPGPU ray-marching for visual effects

In the next posts in the series I'll give a list of, and details on, the modern volume rendering algorithms that I believe have the greatest potential to be used in current and near-future games.
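The "few input parameters" point can be illustrated with a toy example. Real terrain generators use gradient noise (Perlin or simplex); here a simple sum of sines stands in for the function, and the frequencies and amplitudes are the only data needed to reproduce the entire surface - nothing else has to be stored or downloaded.

```python
# Sketch: a procedurally generated heightfield defined entirely by a
# mathematical function and a handful of parameters. A real generator
# would use gradient noise (Perlin/simplex); a sum of sines stands in
# here to keep the example self-contained.

import math

def terrain_height(x, z, freq=0.1, amp=5.0):
    # two "octaves": broad hills plus a band of finer detail
    return (amp * math.sin(freq * x) * math.cos(freq * z)
            + 0.25 * amp * math.sin(4 * freq * x + 1.3))

# The whole heightmap is implied by the function and its parameters;
# evaluating it on demand replaces storing a 64x64 grid of samples.
heights = [[terrain_height(x, z) for z in range(64)] for x in range(64)]
print(len(heights), len(heights[0]))   # 64 64
```

The compression argument in the text is exactly this: shipping freq and amp (a few bytes) instead of the evaluated grid, at the cost of evaluating the function at load time.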