Syranide

Members
  • Content count: 570
  • Joined
  • Last visited

Community Reputation: 375 Neutral

About Syranide

  • Rank: Advanced Member

  1. Images please!

     Anyway, you talk about visibly propagating waves; to me this suggests the size of your transition/interpolation area is too big, i.e., it should be perhaps 10-20% at most (see the sketch below). Make it larger than that and you sacrifice quality (for the given performance hit) and the terrain always looks like it's changing.

     Also, note that heightmaps are not well suited to high-frequency detail. I used real-world data in my implementation of geoclipmaps, which was somewhat high-frequency; I tried everything from low-quality to high-quality meshes and flew over the terrain at high speeds, and even then I can't say the transitions bothered me except in the worst high-frequency areas. So I would say it's a problem with your implementation (or heightmap resolution).

     Also! I recommend that you don't compute the normals when sampling; sample them from a separate texture instead. It hides the transitions a bit better, with the added bonus that you can use higher-resolution normal textures (which produces really good visuals even with low-quality meshes).

     One more thing (I don't remember the specifics at the moment): in your transition area, remember that the morph applies to triangles, so you can't just sample at the midpoint if you want exact results, you need to do multiple samples (if that doesn't make sense, just ignore it). Otherwise you will find that certain triangles pop in higher-frequency areas (e.g. an /\ shape will pop significantly rather than interpolate), but whether that matters depends on your terrain.
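     Roughly the kind of morph-factor calculation I mean, as a plain C++ sketch (the names and the 15% figure are made up for illustration); the vertex only blends towards the coarser level in the outer fraction of the ring, so most of the terrain is always at full detail:
[code]
#include <algorithm>

// Sketch only: blend weight for one vertex of a geoclipmap ring.
// distFromCenter     - distance from the viewer (in grid units of this level)
// ringInnerRadius    - where this level starts
// ringOuterRadius    - where this level ends (coarser level takes over)
// transitionFraction - e.g. 0.15f, i.e. morph only in the outer 15% of the ring
float MorphWeight(float distFromCenter,
                  float ringInnerRadius,
                  float ringOuterRadius,
                  float transitionFraction)
{
    const float ringWidth      = ringOuterRadius - ringInnerRadius;
    const float transitionSize = ringWidth * transitionFraction;
    const float morphStart     = ringOuterRadius - transitionSize;

    // 0 = full fine detail, 1 = fully morphed to the coarser level.
    const float w = (distFromCenter - morphStart) / transitionSize;
    return std::min(1.0f, std::max(0.0f, w));
}

// The vertex height is then something like:
//   height = fineHeight + MorphWeight(...) * (coarseHeight - fineHeight);
[/code]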
  2. That's weird, I found your topic second from the top... I must've accidentally been on another page. Anyway, I did a quick Photoshop blur on your "original" image again, with a radius of 2, and I think it turned out pretty good, a lot better than your second image, which seems to remove a lot of features while not really fixing the jaggedness.

     [img]http://s8.postimage.org/pa0nc9bv9/Untitled_4awdawd.png[/img]

     Again, I'm not really read up on this, but it seems to me that involving any significant decision-making in the processing would ruin the realtime quality, making features appear/disappear and behave erratically, whereas blurring and similar solutions have a more consistent and fluid look (although not as high quality when looking at individual frames), and you could also get the result cheaply anti-aliased that way (if not using FXAA).

     Just read your update: if you want that "continuous shape" look, it seems to me you have to give up the smaller features entirely and fit some very rough curves over it all (though that could easily make it very "blobby" instead if you don't tune it carefully), or possibly just use more blur. I'm curious, though; looking at that image, I would guess they don't use the buffer itself, but rather interpret the positions of the body parts and then render a human model instead.
  3. [quote name='Hodgman' timestamp='1344251234' post='4966635'] The technique in this paper might be of use to you: [url="http://research.microsoft.com/en-us/um/people/kopf/pixelart/"]http://research.micr.../kopf/pixelart/[/url] Here's the comparison with hq4x: [url="http://research.microsoft.com/en-us/um/people/kopf/pixelart/supplementary/comparison_hq4x.html"]http://research.micr...rison_hq4x.html[/url] However, it's pretty expensive... I like the blur and threshold idea from Syranide; it would be very efficient. You could even apply an anti-aliasing filter such as FXAA before the upsampling step to get smoother results. [/quote]

     I actually thought about that specific Microsoft article too, but I imagine it would be too jittery/erratic for real-time use, as even tiny variations in the input could introduce major changes in the output (it is mind-numbingly cool, though!). If I'm not mistaken, I think they even mention somewhere that it has some issues with animations, but perhaps I'm wrong. This is not my area at all, but it seems to me that some kind of "blurring algorithm" is needed to keep it fluid and consistent between frames; anything that too intelligently decides on individual pixels seems likely to cause erratic behavior in realtime.

     Running FXAA before the upsampling actually seems like a really good idea, I have to say; if it works well it would remove the wobbly, jagged look and could actually end up looking really good. I was going to suggest some basic algorithm for filling various edges and gaps with grey pixels as a way to smooth out the original image and minimize the wobbly look, but it seems FXAA should just be better in every way.
  4. I'm not all that familiar with the hqx upsamplers, but from my limited reading it seems like it's simply a matter of changing the interpolation tables; the default implementation clearly prefers to keep sharp features where possible, which is something you don't seem to want. Whether it's possible, or how easy it is, to change the interpolation tables to prefer smoothness and still get the intended result, I have no idea, but that seems to be your main issue (you are not using it for the pixel-art upscaling it was intended for).

     Otherwise, depending on what kind of quality you want, nearest-neighbour upscaling followed by blurring and then a threshold to give a black/white image yields quite similar results, although the output is obviously a lot more "round". The following is a quick and dirty test in Photoshop with a 4x gaussian blur (if you upsample with bilinear rather than nearest, as I did, you get slightly better and less wobbly results); a rough code sketch of the same idea is below the image.

     [img]http://img814.imageshack.us/img814/9456/67653822.png[/img]
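     To make the blur-and-threshold idea concrete, here's a rough CPU-side sketch in plain C++ (not what I used for the Photoshop test above; a real version would run on the GPU, probably with a gaussian blur): nearest-neighbour upscale, a few box-blur passes, then threshold back to black/white.
[code]
#include <cstddef>
#include <vector>

// Sketch: nearest-neighbour upscale of a grayscale image (0..1 per pixel).
std::vector<float> UpscaleNearest(const std::vector<float>& src,
                                  int w, int h, int factor)
{
    std::vector<float> dst(static_cast<size_t>(w) * factor * h * factor);
    for (int y = 0; y < h * factor; ++y)
        for (int x = 0; x < w * factor; ++x)
            dst[static_cast<size_t>(y) * w * factor + x] =
                src[static_cast<size_t>(y / factor) * w + x / factor];
    return dst;
}

// Sketch: one pass of a simple 3x3 box blur (ignores the borders for brevity).
std::vector<float> BoxBlur3x3(const std::vector<float>& src, int w, int h)
{
    std::vector<float> dst(src);
    for (int y = 1; y < h - 1; ++y)
        for (int x = 1; x < w - 1; ++x)
        {
            float sum = 0.0f;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    sum += src[static_cast<size_t>(y + dy) * w + (x + dx)];
            dst[static_cast<size_t>(y) * w + x] = sum / 9.0f;
        }
    return dst;
}

// Sketch: threshold back to a hard black/white silhouette.
void Threshold(std::vector<float>& img, float cutoff)
{
    for (float& p : img)
        p = (p >= cutoff) ? 1.0f : 0.0f;
}
[/code]
     More blur passes (or a bigger kernel) before the threshold gives the rounder look you see in the image above.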
  5. I'm not sure if I really understand your issue, but I assume you are having issues with the objects rendering in the wrong order and overlapping incorrectly for your isometric view? An image or an explanation of the actual issue would be helpful.
  6. Texture masks perhaps? Font support?

     Personally I would recommend focusing your time on texture loading and sprite batching, and abstracting all of that away from the user as much as possible. In this day and age the programmer/artist shouldn't have to manually create tilemaps/atlases just to group commonly used assets for performance; i.e., there should be "no difference" between loading two different textures and loading one texture that contains both. And depending on your intent, being able to specify the center point of a sprite may be a good idea, among other things like that.

     Basically, I would personally focus on removing all of the housekeeping from the code and the programmer, and if necessary sacrifice a bit of performance if it significantly simplifies things for the programmer; there's more than enough performance to go around for most reasonable 2D uses today. But then again, it depends on your intent; perhaps there should be two levels, a low-level and a high-level API... though that may just be unnecessary abstraction and work. A rough sketch of the kind of interface I have in mind is below.
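     Something like this, purely as an illustration of the interface I mean (all names made up, declarations only, no implementation):
[code]
#include <cstdint>
#include <string>

// The user just loads "sprites" and draws them; whether two sprites ended up
// packed into the same atlas page, and how draw calls are batched, is entirely
// the renderer's problem.
struct SpriteHandle { uint32_t id; };

class Renderer2D
{
public:
    // Internally this may pack the image into a shared atlas page.
    SpriteHandle LoadSprite(const std::string& path);

    // Optional per-sprite data the user actually cares about.
    void SetCenterPoint(SpriteHandle sprite, float cx, float cy);

    // Queued, not drawn immediately; the renderer sorts/batches by atlas page.
    void Draw(SpriteHandle sprite, float x, float y, float rotation = 0.0f);

    // Flushes the queued sprites in as few draw calls as it can manage.
    void EndFrame();
};
[/code]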
  7. [quote name='YogurtEmperor' timestamp='1329476680' post='4913862'] The format only allows one set of data per block. The format is simple in order to make fetches faster. The best route is to generate two images, but I am mainly thinking about consoles when I say that. If a PC game has too many textures it may be worth it to take the hit. As for how noticeable it is, well, I certainly noticed it when I first tried it. Of course I had already been studying that image for a long while gathering all the types of artifacts that needed to be eliminated. But it is also more noticeable on some other images, especially cartoon ones. That would be a possible way of going about it, but I will save that exercise for the reader. Before leaving work today I pulled a coworker over to my desk. I said, “This is the original image. Below that there are 2 more images. One was made by my tool and one was made by ATI. Which one do you think looks better?” He got close and stared for a long while, unable to decide. Finally he pointed at mine (unknowingly) and said it looks better because the “FREE-TO-PLAY PVP MMORPG” looks jaggier in the ATI result. L. Spiro [/quote]

     Indeed, the DXT format itself is rather fixed, but it seems to me that it would be pretty much trivial to splice together a proper DXT texture at load time: the texture stored on disk could be generated with some "average" decoder in mind, with a bunch of alternate color pairs and blocks prepended to it, which are then used to replace the corresponding color pairs/blocks in the "base texture" depending on which GPU the host computer uses (rough sketch below). How practical and useful that is, I don't know, but if there are some 3-4 different DXT decoders on PCs, generating and distributing a unique texture for each of them seems quite wasteful when one could instead just replace blocks or color sets so that the final texture produces close to the same result on each.

     Perhaps one could even go ahead and generate unique textures for the different target decoders, pick one as the primary, XOR all the other textures against it, and apply some really cheap compression: perfect results for all decoders, and the XORed differences should hopefully end up very compressible. But of course I realize this may be of no interest to you ;)
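     Roughly what I mean by splicing at load time, assuming the standard 8-byte DXT1 block layout (two RGB565 endpoints plus 32 bits of 2-bit indices); the patch-table format here is entirely made up:
[code]
#include <cstddef>
#include <cstdint>
#include <vector>

// Standard DXT1/BC1 block: two RGB565 endpoint colors + 16 two-bit indices.
struct Dxt1Block
{
    uint16_t color0;
    uint16_t color1;
    uint32_t indices;
};

// Hypothetical patch entry: "for the detected decoder, block N should use these
// endpoints (or this whole replacement block) instead of the base texture's".
struct BlockPatch
{
    uint32_t blockIndex;
    uint16_t color0;
    uint16_t color1;
    bool     replaceIndices;
    uint32_t indices;
};

// Splice the patches for the detected decoder into the base texture in place.
void ApplyDecoderPatches(Dxt1Block* blocks, size_t blockCount,
                         const std::vector<BlockPatch>& patches)
{
    for (const BlockPatch& p : patches)
    {
        if (p.blockIndex >= blockCount)
            continue;
        Dxt1Block& b = blocks[p.blockIndex];
        b.color0 = p.color0;
        b.color1 = p.color1;
        if (p.replaceIndices)
            b.indices = p.indices;
    }
}
[/code]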
  8. [quote name='YogurtEmperor' timestamp='1329435067' post='4913778'] Thank you both. I remember a topic from a few months ago in which I was accused of wasting my time reinventing wheels. It did take more time than I expected but it was certainly rewarding, and I was able to uncover some important facts that could be beneficial to a lot of people/companies when considering their image quality. Mr. Gotanda was also not aware of the differences in how NVIDIA decodes their images, and by my calculations this special decoding method was very likely to have been invented specifically for the PlayStation 3. Had my company been aware of this difference before, the quality of their PlayStation 3 textures could have been improved. The ATI results differ by more than 1 value in many places too, which suggests more than just truncation. I wish they would publish their decoding method. It’s not like it will help NVIDIA or anything, but it would help developers striving for better image quality. If the ATI decompression method were exposed, then a “perfect” tool could be made to tailor to each of their decompression methods individually. L. Spiro [/quote]

     I found it quite interesting that different GPUs use different percentages for the interpolated colors; I would never have guessed that (see the small illustration below). I'm curious whether it could be reasonably improved by simply embedding different "color pairs" for each block in a texture, rather than generating a unique texture for each piece of hardware, so that one could compensate at load time for the (three?) common PC hardware configurations. One could possibly even let the algorithm generate some unique replacement blocks where it deems that a significant improvement (rather than just another color pair). Or are the hardware differences really so minor in practice that the effect only shows up in mathematical measurements and not perceptually?

     As for AMD decoding, shouldn't it be quite easy to just generate a bunch of hand-coded blocks with specific gradients and look at what AMD outputs (assuming you have an AMD card)? It would seem to me that there can't be anything so complicated going on behind the scenes that it couldn't be "easily" understood with a bit of testing. Of course, there may be differences between different models...

     EDIT: After looking at the NVIDIA implementation, I take that back!

     EDIT: I couldn't find any actual numbers, but it would be interesting if the percentages weren't symmetrical, as one could then also exploit the order of the two colors as a further optimization.
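     Just to illustrate what I mean by the percentages: in the spec-style decode, the two interpolated colors are simply 2/3-1/3 mixes of the endpoints, so a decoder that uses slightly different weights nudges every interpolated texel. The "altered" weights below are made up purely for illustration and are not any particular vendor's numbers.
[code]
#include <cstdio>

// Mix one 8-bit channel of the two endpoint colors with the given weight.
static int Interp(int a, int b, float weightB)
{
    return static_cast<int>(a * (1.0f - weightB) + b * weightB + 0.5f);
}

int main()
{
    const int c0 = 200, c1 = 80;   // one channel of the two block endpoints

    // "Ideal" DXT1 four-color mode: c2 = (2*c0 + c1)/3, c3 = (c0 + 2*c1)/3.
    std::printf("ideal   c2=%d c3=%d\n",
                Interp(c0, c1, 1.0f / 3.0f), Interp(c0, c1, 2.0f / 3.0f));

    // Hypothetical decoder that rounds the weights slightly differently
    // (illustrative numbers only).
    std::printf("altered c2=%d c3=%d\n",
                Interp(c0, c1, 0.34f), Interp(c0, c1, 0.66f));
    return 0;
}
[/code]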
  9. Wow, great work, it's even quite hard to tell the original and compressed one apart at a distance.
  10. [quote name='pas059' timestamp='1328091126' post='4908325'] Hi Syranide, Thanks your code, which is in native C++. With SlimDX (and also SharpDX and MDX) which are managed frameworks desgned to used with .Net languages (C#, C++ managed, VB,...), not all the methods of Direct3D9 are exposed, and notably [color=#660066]CreateAdditionalSwapChain[/color](). So, i started to use the "traditional" way: disposing ressources that must be disposed, reseting device, recreate resources. The first results seems sufficient, but this takes more time to code and the execution is probably slower. Anyway, i think that i have no other choice. Thanks again, Pascal [/quote]

     Yeah, if it isn't exposed then that makes it hard, to say the least, unless you can patch it in yourself. Also, as I believe I mentioned above, Direct3D9Ex pretty much prevents lost devices from occurring entirely (on Vista and up), but I would assume your library doesn't support that either, as I'd expect it to have just done that internally for you if it did (a bare-bones sketch of the Ex creation path is below). Your last option would be to use D3DPOOL_MANAGED for textures, if exposed; however, it's not without issues and doesn't actually solve the problem, it just makes things a bit faster since a copy of the texture is kept in system memory at all times.
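     For anyone else reading, the Ex path is basically just a different creation call (bare-bones sketch, error handling omitted, and of course only useful if your wrapper exposes it):
[code]
#include <d3d9.h>

// Bare-bones Direct3D9Ex device creation (Vista and up). With an Ex device the
// classic "device lost" state essentially goes away, so no Reset() dance.
IDirect3DDevice9Ex* CreateD3D9ExDevice(HWND hwnd, D3DPRESENT_PARAMETERS& d3dpp)
{
    IDirect3D9Ex* d3d = NULL;
    Direct3DCreate9Ex(D3D_SDK_VERSION, &d3d);

    IDirect3DDevice9Ex* device = NULL;
    d3d->CreateDeviceEx(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd,
                        D3DCREATE_HARDWARE_VERTEXPROCESSING,
                        &d3dpp, NULL, &device);
    d3d->Release();
    return device;
}
[/code]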
  11. [code]
// Release our references to the old back buffer and depth/stencil surfaces.
LPDIRECT3DSURFACE9 rendersurf_old;
LPDIRECT3DSURFACE9 depthsurf_old;
m_device->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &rendersurf_old);
m_device->GetDepthStencilSurface(&depthsurf_old);
rendersurf_old->Release();
depthsurf_old->Release();

// Match the new client area size.
RECT rect;
GetClientRect(m_hwnd, &rect);
int width = rect.right - rect.left;
int height = rect.bottom - rect.top;
m_d3dpp.BackBufferWidth = m_width = width;
m_d3dpp.BackBufferHeight = m_height = height;

// Create a new swap chain and depth/stencil surface at the new size.
DXASSERT(m_device->CreateAdditionalSwapChain(&m_d3dpp, &m_swapchain));
DXASSERT(m_swapchain->GetBackBuffer(0, D3DBACKBUFFER_TYPE_MONO, &m_rendersurface));
DXASSERT(m_device->CreateDepthStencilSurface(m_width, m_height, m_d3dpp.AutoDepthStencilFormat, m_d3dpp.MultiSampleType, m_d3dpp.MultiSampleQuality, TRUE, &m_depthsurface, 0));
[/code]
     This is the basic code I'm using, and I'm pretty sure it's based on the code from that post.
  12. 3rd reply here [url="http://www.codeguru.com/forum/showthread.php?t=492308"]http://www.codeguru....ad.php?t=492308[/url] has the solution I believe. Also, I realize that I've been calling it the backbuffer, but you of course need to replace the entire swap chain.
  13. [quote name='pas059' timestamp='1327998785' post='4907924'] Hi Postie, Do you mean that objects like VertexBuffer, Mesh have to be freed? and then recreated after updating? this can take many time in some cases. As it was not necessary to do this in MDX, i'm suprising. Do you have a link? thanks again, Pascal [/quote]

     Sadly that is the case with Direct3D 9; Direct3D 9Ex does not require it, I believe. Anyway, as above, you can create your own backbuffer to replace the default one, and that backbuffer can then be recreated at will without resetting the device. I don't really know what MDX is, but I would assume it uses one of the above.
  14. [quote name='schupf' timestamp='1327957633' post='4907768'] I haven't done anything with projected textures, so I am not quite sure how to do it. I understand I could use a orthographic projection (with a huuuge box) to project a texture with many cloud shapes onto the terrain. But isn't this overly complicated? Plus, how do I make the clouds move? I need some scrolling texture coordinates on my terrain vertices but I can't get up with a good idea how to implement this:/ [/quote]

     If you want a simple solution and don't have separate geometry on top of the terrain, you can simply pass a cloud texture to your terrain shader and multiply it against the output terrain color; you also pass the current time to the shader as a variable and use it to offset the cloud texture coordinates... voila, moving cloud shadows on the terrain. A rough sketch of the math is below.
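     The real version belongs in the terrain pixel shader; here is the same math as a plain C++ sketch just to show the idea, with SampleCloudTexture() standing in for an ordinary wrapped texture fetch and all the constants made up:
[code]
#include <cmath>

// Stand-in for a bilinear cloud texture fetch with wrap addressing; a cheap
// procedural pattern here just so the sketch is self-contained (0 = clear sky).
static float SampleCloudTexture(float u, float v)
{
    return 0.5f + 0.5f * std::sin(u * 6.2832f) * std::sin(v * 6.2832f);
}

// Returns the factor to multiply the lit terrain color by.
float CloudShadowFactor(float worldX, float worldZ, float timeSeconds)
{
    const float cloudScale     = 1.0f / 512.0f;  // world units -> cloud UV
    const float windU          = 0.010f;         // scroll speed in UV/sec
    const float windV          = 0.003f;
    const float shadowStrength = 0.5f;           // how dark the shadows get

    const float u = worldX * cloudScale + windU * timeSeconds;
    const float v = worldZ * cloudScale + windV * timeSeconds;

    const float cloud = SampleCloudTexture(u, v);
    return 1.0f - cloud * shadowStrength;
}
[/code]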
  15. [quote name='turch' timestamp='1327940035' post='4907671'] I've emulated input processing using raw input before when I couldn't use WM_CHAR for various reasons. I had a structure that kept track of which keys were pressed and released, and #defined a character code for each character. Each code was stored in 16 bits, with the low 8 used for the "base" key, and the high 8 used for various modifiers (alt, shift, caps lock). There was a look up table with each key, and when any other part of the program was sent input, it used the table to convert the key code into something specific. Holding down shift + 1, for example, sent the following binary info [code] 00000001 00010100 [/code] If the input receiver wanted an actual character, they would look up the entire number in the array and get back ! If they only cared about the key that was pressed, they masked off the high 8 bits and looked up the low 8 bits, getting back 1 [/quote]

     True, but getting the printable character is not really an issue with RawInput using ToAscii. However, if you want dead keys (and a few other minor things, it seems) to work properly, you need to feed ToAscii the keyboard state, as obtained with GetKeyboardState, and doing that I've never really been able to get it to work. If I enable NOLEGACY I even get only lower-case characters, which makes me wonder whether NOLEGACY also prevents GetKeyboardState from doing its job. I could implement that state tracking myself, no problem, but how would one know which keys are toggleable, or when to "untoggle" them, i.e., the dead keys, which is required to properly combine a dead key with whatever is pressed afterwards? Note that ToAscii and ToUnicode specifically have this functionality.

     I did try to emulate it using WM_KEYDOWN and ToAscii (roughly the approach sketched below), and I got it to work, but not 100%: the order in which you released the keys incorrectly affected the output, and there were a bunch of other issues too. It seems like there should be an easy way to do this, as that appears to be the reason ToAscii and ToUnicode exist, yet I have never seen source code that uses them to achieve the proper result, and I've been unable to do so myself.
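     For reference, the naive WM_KEYDOWN path I'm talking about looks roughly like this (sketch only, using ToUnicode rather than ToAscii, but the idea is the same); this is the part that "mostly" works, and the dead-key/release-order quirks above are exactly where it falls apart:
[code]
#include <windows.h>

// On WM_KEYDOWN: grab the current keyboard state and let ToUnicode do the
// character translation, including dead-key handling.
void OnKeyDown(WPARAM wParam, LPARAM lParam)
{
    BYTE keyState[256];
    if (!GetKeyboardState(keyState))
        return;

    const UINT scanCode = (lParam >> 16) & 0xFF;

    WCHAR buffer[4] = { 0 };
    const int count = ToUnicode(static_cast<UINT>(wParam), scanCode,
                                keyState, buffer, 4, 0);

    if (count > 0)
    {
        // buffer[0..count-1] holds the translated character(s).
    }
    else if (count == -1)
    {
        // Dead key: nothing to output yet, it should combine with the next key.
    }
}
[/code]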