
s_p_oneil

Members
  • Content count: 279
  • Joined
  • Last visited

Community Reputation
  443 Neutral

About s_p_oneil
  • Rank: Member
  1. SiCrane - shultays is being helpful, and IMO your comments and attitude are uncalled for.

     shultays - If you're interested, you can improve your class a good bit without a lot of effort by adding shift-left and shift-right functions. You can then implement a more efficient multiply using shift-and-add, as well as a fairly efficient division using shift-and-subtract. If you Google "shift and subtract division", you'll see it's not that bad, and it shouldn't take you more than an hour or two once you figure it out.

     kubapl - If you just want a quick solution to a specific problem and you're willing to try a scripting language like Python or Ruby, SiCrane's advice could make your life about a hundred times easier. However, if your goal is to master C++, follow everyone else's advice. Figuring out how to deal with third-party libraries will teach you some things, and learning how to implement your own math classes will teach you other things. If you really want to master C++, I recommend you do both: create a new class based on shultays', then link in a library like MP and run some performance comparisons between the two.
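For readers curious what shift-and-subtract division looks like in practice, here is a minimal sketch. It operates on plain 32-bit words rather than a bignum class (the function name and word type are mine, not from shultays' code); in a bignum class the same loop works once shift-left, shift-right, compare, and subtract are in place.

```cpp
#include <cassert>
#include <cstdint>

// Restoring (shift-and-subtract) division: walk the dividend's bits from
// most to least significant, shifting each one into the remainder and
// subtracting the divisor whenever it fits.
void shiftSubtractDivide(uint32_t dividend, uint32_t divisor,
                         uint32_t &quotient, uint32_t &remainder)
{
    assert(divisor != 0);
    quotient = 0;
    remainder = 0;
    for (int bit = 31; bit >= 0; --bit) {
        remainder = (remainder << 1) | ((dividend >> bit) & 1u); // bring down the next bit
        if (remainder >= divisor) {       // the divisor fits: subtract and
            remainder -= divisor;         // record a 1 in the quotient
            quotient |= (1u << bit);
        }
    }
}
```

For a bignum, the only change is that the shift, compare, and subtract steps become calls to the class's own operators.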
  2. I disagree with jack, or at least with the way he answered. The honest truth is that if you're writing a Windows GUI application, it makes no sense to use C++. If you're writing back-end server components, it makes no sense to use C++. If you're writing a database application, it makes no sense to use C++. In fact, there are very few types of applications where using C++ still makes sense. All the other times, you're just wasting time and money. The last time I looked, 48% of the job offers were for Java, 48% were for C#, about 2% were for C++, and 2% for miscellaneous (I'm not counting web programming jobs), so apparently I'm not the only one who has realized this. The only time C++ makes sense anymore is when you have no better choice (i.e. writing a wrapper to interface between C/C++ and a higher-level language) or when you need to be able to squeeze the last drop of performance out of the platform (i.e. games). Obviously C++ won't die completely. Even Fortran still has a strong following. C++ will have a niche in games and a few other areas of programming. And of course, there's an awful lot of existing C++ code out there to maintain.
  3. Quote: Original post by PolyVox
     "Don't want to drag this off topic as the OP seems to be looking for a CPU solution, but are you aware that in GPU Gems 2 there is an article (Chapter 20) on implementing cubic filtering in terms of linear filters? You can get it for free on the NVidia website. Also you might find this interesting:"

     Thanks for the tip, but I wrote chapter 16 of that book. NVidia sent me 6 copies of it. ;-)
  4. Quote: Original post by avocados
     "Oops, I meant bilinear interpolation doesn't work well. This project is not in a 3d API. I need to smoothly scale noise for some perlin noise in procedural textures."

     If you're using the original Perlin noise implementation, it already has cubic interpolation built into it (and requires a much smaller buffer in memory thanks to its lattice function). The improved Perlin noise uses quintic, which does the same thing but replaces the cubic() function with a slightly more expensive function. Quintic is a bit smoother at the edges, but IMO produces a curve that doesn't look as good. If you're using a simple 2D or 3D buffer of random values and trying to interpolate between those values yourself, then you should start by implementing cubic interpolation as I've indicated and see how it looks. Depending on what you're using the noise for, you may find that cubic is not good enough (it wasn't good enough for my needs). The best method I've found for something like that is Catmull-Rom spline interpolation (which I've implemented in C and Cg). It is significantly more expensive, but gives perfectly smooth results.
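For reference, here are the two fade curves being compared, written as plain C++ (the function names cubicFade/quinticFade are mine): cubicFade is the smoothing polynomial from the original Perlin noise, and quinticFade is the one from improved Perlin noise. Both map 0 to 0 and 1 to 1 with zero slope at the endpoints; the quintic additionally flattens the second derivative there.

```cpp
#include <cassert>
#include <cmath>

// Original Perlin noise fade: 3r^2 - 2r^3 (C1 continuous at the endpoints).
double cubicFade(double r)   { return r * r * (3.0 - 2.0 * r); }

// Improved Perlin noise fade: 6r^5 - 15r^4 + 10r^3 (C2 continuous).
double quinticFade(double r) { return r * r * r * (r * (r * 6.0 - 15.0) + 10.0); }
```

Swapping one for the other in an interpolation loop is a one-line change, which makes it easy to compare how each looks on your own textures.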
  5. Cubic is the same as linear, but as you interpolate values in a line from 0 to 1, you apply a simple function to convert the line into a smooth curve. The cubic function in my Cg shaders looks like this:

         // Cubic smoothing, yields C1 continuous curve from 0..1
         float  cubic(float  r) { return r*r*(3.0-2.0*r); }
         float2 cubic(float2 r) { return r*r*(3.0-2.0*r); }
         float3 cubic(float3 r) { return r*r*(3.0-2.0*r); }
         float4 cubic(float4 r) { return r*r*(3.0-2.0*r); }

     Then you would use it kind of like this (I cut and pasted from a larger shader and changed some things, so I doubt this would compile without tweaks):

         float4 bicubic(sampler2D t, float2 coord)
         {
             int2 n = floor(coord);
             float2 r = fract(coord);
             float2 w = cubic(r);
             return lerp(lerp(tex2Dfetch(t, int2(n.x,   n.y)),
                              tex2Dfetch(t, int2(n.x+1, n.y)),   w.x),
                         lerp(tex2Dfetch(t, int2(n.x,   n.y+1)),
                              tex2Dfetch(t, int2(n.x+1, n.y+1)), w.x),
                         w.y);
         }

     If you remove the call to cubic(), you get bilinear interpolation. All the cubic() call really does is change the distribution of lookup points between two texels. Where linear would give you noticeable lines, cubic smooths them out a bit (though they are usually still noticeable). EDIT: Sorry it's not in C, but this is what I had on hand, and it shouldn't be too hard for you to convert it.
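Since the shader above isn't in C, here is a rough CPU-side sketch of the same idea under my own assumptions: sample a W x H grid of floats bilinearly, but run the fractional coordinates through the cubic curve first. The function name, the grid layout, and the edge clamping are mine, not part of the original shader.

```cpp
#include <cassert>
#include <cmath>

// Cubic smoothing, same curve as the Cg version above.
static float cubicf(float r) { return r * r * (3.0f - 2.0f * r); }

// Bilinear lookup into a row-major W x H float grid, with the fractional
// coordinates smoothed by cubicf(). Neighbors are clamped at the edges.
float bicubicSample(const float *grid, int W, int H, float x, float y)
{
    int nx = (int)floorf(x), ny = (int)floorf(y);
    float wx = cubicf(x - (float)nx);   // smoothed horizontal weight
    float wy = cubicf(y - (float)ny);   // smoothed vertical weight
    int nx1 = (nx + 1 < W) ? nx + 1 : W - 1;
    int ny1 = (ny + 1 < H) ? ny + 1 : H - 1;
    float top = grid[ny  * W + nx] + (grid[ny  * W + nx1] - grid[ny  * W + nx]) * wx;
    float bot = grid[ny1 * W + nx] + (grid[ny1 * W + nx1] - grid[ny1 * W + nx]) * wx;
    return top + (bot - top) * wy;
}
```

Dropping the two cubicf() calls turns this back into ordinary bilinear interpolation, which makes it easy to compare the two side by side.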
  6. I've been playing around with OpenGL 3.1 and its new uniform buffer objects. I can get it to work perfectly with only one named uniform buffer. As soon as I try to declare a second, it breaks (even if I don't use it). I've tested this with a GTS250 on driver versions 190.38 and 190.62. I have wrappers around every GL call that throw an exception if glGetError() returns an error code, and no error code is being set anywhere in my program. After linking any GLSL shader program, I run this code:

         // Register and/or assign any uniform blocks this technique uses
         int nActiveBlocks = 0;
         gl.getProgramiv(this->getHandle(), GL_ACTIVE_UNIFORM_BLOCKS, &nActiveBlocks);
         for(int nBlock = 0; nBlock < nActiveBlocks; nBlock++)
         {
             char szBlock[256];
             int nSize;
             this->getActiveUniformBlockName(nBlock, 256, NULL, szBlock);
             this->getActiveUniformBlockiv(nBlock, GL_UNIFORM_BLOCK_DATA_SIZE, &nSize);
             UniformBufferObject *pUniformBuffer = gl.getManager()->createUniformBlock(szBlock, nSize);
             this->uniformBlockBinding(nBlock, pUniformBuffer->getBindIndex());
         }

     Right now 3 shader programs get linked, and each runs through this loop. The first time the GL manager sees a uniform block with a specific name, it creates a new UniformBufferObject and automatically assigns it a unique bind index. On subsequent checks, it makes sure the size hasn't changed and returns the existing buffer object. Right now I'm only using one uniform block named "Transform", and I am trying to add another block. In my main game loop, I update the data in the "Transform" buffer and ensure that it is bound to its assigned index:

         GL::Uniform::Transform t;
         ...
         GL::UniformBufferObject *pTransform = gl.getManager()->getUniformBlock("Transform", sizeof(GL::Uniform::Transform));
         pTransform->updateBlock(t);
         pTransform->bindBlock();

     When my shader program only declares one uniform block, everything works and renders perfectly. If I define a second uniform block in my shader programs (without using it), I get a blank screen.
In the debugger, I see my "Transform" block being assigned index 1 instead of index 0, and this seems to be causing the problem. If I force "pTransform->bindBlock()" to use index 0, it starts working again (even though uniformBlockBinding told the driver to use index 1). It seems to me like the driver is ignoring the numbers I pass to uniformBlockBinding and is always using binding 0 for all blocks. When I call glGetActiveUniformBlockiv with GL_UNIFORM_BLOCK_BINDING it returns 1, so I know I set it properly, but the shader will only use the values in my uniform buffer if I bind it to index 0. Can anyone see anything I've missed, or does it seem like a driver bug? Named uniform blocks are useless if you can only use one of them. I wish I could post a small complete program duplicating the problem, but GL 3.1 requires way too much code for that.

Thanks,
Sean
  7. To elaborate on my previous post, you could do something like this. In the header:

         template <class T>
         class PixelBuffer
         {
             ...
             void cleanup();
         };

     In the source file, define the methods, then force the compiler to instantiate the template for each specific type (the definitions should come before the explicit instantiations so the compiler can see them):

         template <class T>
         void PixelBuffer<T>::cleanup()
         {
             ...
         }

         // Force the compiler to build the template using specific types
         template class PixelBuffer<char>;
         template class PixelBuffer<unsigned char>;
         template class PixelBuffer<short>;
         template class PixelBuffer<unsigned short>;
         template class PixelBuffer<int>;
         template class PixelBuffer<unsigned int>;
         template class PixelBuffer<long>;
         template class PixelBuffer<unsigned long>;
         template class PixelBuffer<float>;
         template class PixelBuffer<double>;
  8. It looks like your "cpp file for class" is in a separate library. If that's the case, you can't put templated methods in a cpp file like that; they must be in the header file. The alternative is to explicitly instantiate the template in your cpp file for every type you plan to use. That causes the compiler to build all of those instantiations into your LIB file, so they're available to the other modules you want to link to it.
  9. You can use math to generate most environmental things at run-time. That's what I do for my planet renderer (http://sponeil.net/). It will only get you so far, but it works well for things like terrain, sky, water, clouds, some plants, etc. For the rest, I agree with everyone else about using something like Blender to create some simple models. You can also search for web sites with royalty-free models and textures, but it can be frustrating to get something that's not really what you want and to have no way to fix it. Depending on what kind of look you want, you can try going back to math and logic to generate some things. If you look at Spore, it has an interface to let players design the shape of their own creatures/buildings/vehicles. There were a number of stock elements that you could drag to a specific spot on your creature/building/vehicle, and then use the mouse to change the position/shape/size of those elements. It also let you pick colors or pick from a set of pre-defined color patterns. There's no reason you couldn't try something like that for your own game, and simply use the editor yourself to put your models together.
  10. Looks good, Formski. The biggest problem left will be how sunrise/sunset looks from space. If you set the camera on the ground with the sun at the horizon, and then back the camera into space (keeping the sun at the horizon), the sunset should get redder as the sun is going through more atmosphere, but instead it turns bright blue again. In the CPU demo, it gets redder like it's supposed to, but the lack of a pixel shader for the phase function makes it look bad. I believe this problem is due to the inaccuracies in the function that's replacing the lookup table. The higher the angle gets, the worse the accuracy is. However, I had to cut a few corners to get the shader code to fit in the GeForce FX and the Radeon 9600, so something there may be causing it as well. ;-)
  11. Quote: Original post by Ashkan
      "@s_p_oneil: Big thumbs-up for Sean. Are you planning to improve on the current technique? Any new projects you're working on? By the way, I get a connection time-out error while trying to reach your site. Anybody else experiencing the same problem?"

      Thanks, and try sponeil.net (not sponeil.org). My previous ISP screwed up and gave sponeil.org to some squatter. I haven't done any 3D graphics work since I published the GPU Gems article and the 4th GamaSutra article. I've been too busy, too tired, and no one has been willing to pay me to work part-time on it. ;-) My only current game/graphics project is that I'm writing a Ruby extension for the SFML library (sfml.sourceforge.net). I'm trying to get a nice and simple game library in a simple programming language I like so I can teach my son the basics of programming (he's 8). SFML uses OpenGL, so I might eventually add some 3D classes to it and then play around with some 3D algorithms in Ruby (for faster prototyping with cleaner code). I'm hoping to find another part-time contract, though. I need money to keep the kids in Montessori school. ;-)
  12. Wow. I haven't visited GameDev in quite a while, and when I come back I just happen to see a thread about my scattering algorithm on the front page (talk about a coincidence).

      AvengerDr: Yes, you can use it to render the atmosphere from space. Go to http://sponeil.net to download the source for everything. If you can't get your hands on GPU Gems 2, send me an email and I'll see if I can find the article on it. (Gamedev has my article on how to do it on the CPU, which also explains the basic idea of the algorithm. You should read both for a better understanding.)

      FoxHunter2: A number of the parameters you pass to it need to be within fairly strict ranges. It looks like some parameters you have are not within the necessary ranges. (The GPU Gems 2 article explains more about why it's so strict.)

      FoxHunter2 and Formski: nVidia cards clip the primary and secondary colors to the 0-1 range between the vertex shader and fragment shader. This causes some nasty artifacts. To fix it, change the shaders to pass those values in some other variable.

      Everyone: This algorithm was really designed for HDR rendering, which uses an exponential exposure function to scale the colors down to the 0-1 range. The screenshots (like the ones Formski posted) look much worse without it. If you look at the screenshots on my home page, I think you'll agree they look much better, even with the nVidia artifacts.

      One more thing: My project hasn't been updated in a while, and the shaders were written back in the GeForce FX days. There are better ways to do a number of things now, from doing texture lookups in the vertex shader to get the lookup table back (much more accurate), to performing the exposure function in the scattering shader without having to render to a float p-buffer and using an extra pass.
  13. Try using 2 8-bit channels (luminance + alpha) instead of one 16-bit channel. It's usually much easier to set up and use. Then access it in the shader as a normal 2D texture with 2 channels.
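A minimal sketch of the CPU side of that two-channel trick (the helper names pack16/unpack16 are hypothetical): split each 16-bit value into a high and a low byte for the luminance and alpha channels, then recombine them after the texture fetch.

```cpp
#include <cassert>
#include <cstdint>

// Split a 16-bit value across two 8-bit texture channels.
void pack16(uint16_t v, uint8_t &luminance, uint8_t &alpha)
{
    luminance = (uint8_t)(v >> 8);    // high byte -> luminance channel
    alpha     = (uint8_t)(v & 0xFF);  // low byte  -> alpha channel
}

// Recombine the two channels; in a shader the normalized equivalent is
// (luminance * 255.0 * 256.0 + alpha * 255.0) on the 0..1 sampled values.
uint16_t unpack16(uint8_t luminance, uint8_t alpha)
{
    return (uint16_t)((luminance << 8) | alpha);
}
```

Note that plain bilinear filtering of the two channels doesn't reproduce filtering of the original 16-bit value (the bytes interpolate independently), so this works best with nearest-neighbor fetches.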
  14. It's the fourth article in my Procedural Universe series. Fortunately, GamaSutra no longer requires a login just to read articles. Here's the link: http://www.gamasutra.com/features/20060112/oneil_01.shtml Please let me know what you think, Sean
  15. Quote: Original post by rollo
      "hmm, I run into problems with ScatterCPU/ScatterGPU. They both segfault on line 280 in PixelBuffer.cpp:

          ((float *)m_pBuffer)[nIndex++] = fRayleighDensityRatio;

      I didn't get any other warning or anything.
      EDIT: I'm running this on 64-bit linux, so maybe there are some pointer size problems. I'll poke around quickly and see if I can find anything.
      EDIT: I didn't find anything obvious, but there was quite a lot of pointer magic and alignment stuff going on, so I'm sure it's in there somewhere ;)"

      I've never tried it on a 64-bit CPU, but that function doesn't need to be called for ScatterGPU. Comment the call out in ScatterGPU, and that demo should stop crashing.