About solenoidz

  1. I have to write a thesis on graphics programming, and I need ideas for a couple of effects to implement in a demo and describe. I'm thinking of relatively simple effects such as: SSAO, bump mapping, parallax occlusion mapping, deferred lighting, screen-space atmospheric scattering, edge/silhouette detection, cartoon rendering, etc. Can you guys please suggest some more effects I can implement and describe? I don't want to run out of ideas. If you back up your suggestions with some links and papers, you will be priceless :) Thank you.
  2. Ok, thanks. I know about this tool, but I'm trying to avoid porting altogether if possible :) I mean, the people at Wine put so much effort into translating D3D calls to OpenGL calls on Mac and Linux, and it would be great if that could be used directly, without any porting on my side.
  3. Hello people. First of all, I hope I'm in the right section of the forum with this topic. Ok, here's the situation: I'm developing a Direct3D 9.0 game on Windows and it's all good. Lately I started thinking about making it cross-platform. I need it to run on Linux and eventually Mac OS X. I installed Ubuntu on my rig as a second boot, installed Wine + Winetricks, and found out my DirectX 9.0 game runs just fine on Ubuntu under Wine. Now my question is: is it a good idea to deploy a 3D application (or any arbitrary application) on Linux using Wine, i.e. forcing users to install Wine and run the game through it? Since Wine can install some Microsoft components and libraries, such as Visual C++ runtime libraries and Windows DLLs that are part of the operating system, am I looking for trouble and legal issues by doing it that way? Does anybody deploy a Windows application on Linux using Wine, and is there some way to make the installation and configuration automatic, so the user isn't forced to install Wine beforehand manually?
  4. Maybe you are facing a similar issue: http://h10025.www1.hp.com/ewfrf/wc/document?cc=us&lc=en&docname=c02948560 http://stackoverflow.com/questions/16823372/forcing-machine-to-use-dedicated-graphics-card
  5. solenoidz

    Leadwerks 3.1 Enters Beta; Heading to Steam Dev Days

    Did you consider Qt Creator and its visual GUI editor for making your cross-platform editor? Why Code::Blocks, why..
  6. solenoidz

    Direct X 11 really worth it?

    Not to disagree with your other points (tying D3D to Windows upgrades *is* stupid) -- but I would not call GCM/GCX OpenGL-like in the slightest. If you're porting to PS4, you're learning a new/different API, that's probably closest to Mantle, then D3D11, then GL4.   I meant using PSGL, which is OpenGL ES 1(.1) compliant, plus support for vertex array objects and NVIDIA Cg shaders. So for example, if I decide to use OpenGL ES 1.0, I can target all versions of Windows, all web browsers (WebGL, maybe even IE), iOS, Android, Linux, Mac, and PlayStation 3 (through PSGL, though I've heard of its terrible performance). DX9, DX10, and DX11 also have differences in implementation. @mark ds Yep, I'm trying to make such a framework too, but then comes the language difference... For example, some platforms use Java, some C++, some Objective-C, etc.
  7. solenoidz

    Direct X 11 really worth it?

    I'm in a pretty similar dilemma, or even trilemma. I have a DX9.0 engine, and when I see where Microsoft is going with DX11+ I don't like it. They did this sort of thing in the past - DX10 was exclusive to Vista and Win7, and now DX11.2 is exclusive to Win8.1, and so on. I had a WinXP box and I was stuck developing with DX9. Then I started to look in OpenGL's direction, where I could use the latest version on WinXP. I'm thinking about these things lately: 1. Is OpenGL in general (including OpenGL ES) worth learning more than DX in the long run? 2. Is the OpenGL game "market share" growing? How many devices and platforms can I target if I invest time in learning OpenGL-like APIs - PlayStation 3/PS4, WebGL, Android, iOS, Linux, Mac, and of course all versions of Windows? 3. Are SteamOS and the Steam Box really going to shift the PC game market in favour of Linux and OpenGL? As a small indie developer, I need to know what is worth learning and what is going to fade in the future. I don't know exactly what platform I'm going to target in the long run. Now I'm making a Windows game; tomorrow I could be making an iOS or Android game, or even a PS4 game. If I'm proficient in the OpenGL style of programming, can I be more competitive and flexible in jumping between platforms? Is sticking to DX11 (and Win7/8 and eventually Xbox) going to give me more than sticking to OpenGL 4+ and the other OpenGLs to cover all the other platforms?
  8. solenoidz

    Global illumination techniques

    Thank you, people. The floating-point texture that stores color and the number of times a pixel has been lit, plus selecting the closest pixels to each probe via that kind of rendering, were interesting things to learn.
  9. solenoidz

    Global illumination techniques

      Thank you, but I still have some uncertainties. Yes, I'm using a deferred renderer, and so do Crytek, but their method is much more complex after generating the SH coefficients for every light probe in space. They finally end up with 3 volume textures and sample from those per pixel. In my case, I'm not sure how big this sphere volume should be. In the case of my point light, I make it as big as the light radius, but here the light probe doesn't store distance from the surface, just the incoming radiance. Also, as far as I get it, I need to shade every pixel with its closest probe. If I simply draw sphere volumes as you suggested, I could shade pixels that are far behind the current probe I'm rendering (in screen space) and that should be shaded by a different probe behind the current one. Should I pass all the probe positions and other data that fall within the view frustum, and in the shader, for every pixel, get its world-space position, find the closest probe, and shade the pixel with that? That seems a bit limiting and slow also. Thanks.
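      The per-pixel search floated above can be sketched on the CPU like this. It's a minimal sketch, not anyone's actual implementation: `closest_probe` is a hypothetical helper name, and in a real deferred pass this loop would run in the pixel shader over probe positions passed as shader constants.

      ```python
      # Brute-force nearest-probe search for one shaded pixel.
      # Illustrative only; not part of any real engine API.

      def closest_probe(pixel_world_pos, probe_positions):
          """Return the index of the probe nearest to a pixel's world-space position."""
          best_index = -1
          best_dist_sq = float("inf")
          for i, (px, py, pz) in enumerate(probe_positions):
              dx = pixel_world_pos[0] - px
              dy = pixel_world_pos[1] - py
              dz = pixel_world_pos[2] - pz
              dist_sq = dx * dx + dy * dy + dz * dz  # squared distance, no sqrt needed
              if dist_sq < best_dist_sq:
                  best_index, best_dist_sq = i, dist_sq
          return best_index
      ```

      Running this for every pixel against every frustum probe is exactly the "limiting and slow" concern: the cost is pixels x probes, which is why replies in this thread lean toward structures where the lookup is cheaper than a linear search.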
  10. solenoidz

    Global illumination raving's

    Ok, but under one condition - if the idea is impossible to implement or is not going to work at all, you must tell me and save me from struggling. I've seen your implementation of GI, so I value your opinion. Basically, for every Virtual Point Light I have generated, I want to render a hemisphere at the light's position with the same color, oriented along the surface normal. As a render target I want to use a 2D texture and render a vertical slice of these hemispheres around the camera to that texture, limiting the hemispheres rendered to that slice by clipping them with the near and far clipping planes. So every hemisphere gets rendered to the corresponding slice, based on its vertical position in the world. Then, when I generate my volume texture from the slices already rendered, I have a volume with texels colored by those hemispheres. I want to avoid the light-propagation step in the volume texture. Those hemispheres could have a gradient texture applied when rendered, so the further a texel is from the center of the original Virtual Point Light, the less light it receives from it. It could be an artist-created gradient texture for nice effects. I fired up 3ds Max for a quick illustration of what I mean. The first picture is the overall idea, and then a camera with clipping planes enabled to render the slices, which could then be used to construct the volume texture. No spherical harmonics though - the volume would store a pure bleeding color.   [attachment=18063:gi0.jpg] [attachment=18064:gi1.jpg] [attachment=18065:gi2.jpg] [attachment=18066:gi3.jpg]
  11. Ok guys, I'm trying to wrap my head around some global illumination tricks and I have some questions.   Since I'm planning to use DirectX 9.0c, which doesn't support rendering to a volume texture, I intend to use a 2D unwrapped version (for example 1024x32) and use it as a render target. So far, so good. About the propagation technique, I have this idea: instead of propagating the light energy through a volume in the shader, I plan to simply render some kind of hemispherical geometry (probably instanced) with some kind of gradient texture applied to it, to simulate light fading with distance. I mean, I have this hemispherical mesh, and when I generate my VPLs' positions, normals, and colors, I simply render such a mesh at each position, oriented towards the normal and using the appropriate color. As a render target I'm planning to use the unwrapped texture mentioned above (1024x32), with a camera looking straight down in an orthographic projection rendering every vertical slice of the scene to a part of that 1024x32 texture, while offsetting the viewport accordingly - 32 vertical slices in total. I should adjust the near and far clip planes in order to render the hemispheres that fall within each slice. I will probably run into issues, because if the slices are too thin and my hemispheres are too large, they will be clipped by the near and far clipping planes and nothing will be rendered, even if I disable backface culling. How can I cope with this problem? Should I fill the hemispheres with "geometry" too, so that if they're clipped by the planes the inside vertices still get rendered? Should I study point-based rendering, or is there something neat and easy I'm missing? As a result, I should have a 1024x32 texture containing the bounced light of the scene as horizontal slices. Because I'm using a 2D texture, I can't make use of hardware trilinear interpolation as with volume textures.
I need to do this on my own, sampling every rendered pixel 18 times for "nearby" texels and averaging them, which seems slow. Can I instead copy the 2D 1024x32 texture to a volume texture and make use of hardware interpolation between adjacent texels? Can I do it at once, not slice by slice? I hope the memory layout of 2D and volume textures with the same pixel format is identical, so I can simply lock the volume texture, lock the 2D texture, and memcpy the bytes...   Or should I take a different route to GI under DirectX 9.0?   Thanks for your help, in advance.
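  The addressing and manual filtering described above can be sketched in a few lines. This assumes the layout from the post - a 32x32x32 volume stored as a 1024x32 atlas, slice z occupying columns z*32..z*32+31 - and uses made-up helper names; the real thing would be HLSL `tex2D` fetches, but the index math and lerp chain are the point:

  ```python
  # Manual trilinear sampling of a 32^3 volume unwrapped into a 1024x32 2D atlas.
  # `tex` is modeled as a row-major list of rows; coordinates are in texel units.
  # Illustrative sketch, not D3D9 API code.

  SLICE = 32  # slice width/height and slice count

  def sample_volume(tex, x, y, z):
      """Trilinear sample of the unwrapped volume at fractional texel coords."""
      x0, y0, z0 = int(x), int(y), int(z)
      x1 = min(x0 + 1, SLICE - 1)
      y1 = min(y0 + 1, SLICE - 1)
      z1 = min(z0 + 1, SLICE - 1)
      fx, fy, fz = x - x0, y - y0, z - z0

      def fetch(xi, yi, zi):
          # slice zi starts at column zi * SLICE in the 2D atlas
          return tex[yi][zi * SLICE + xi]

      def lerp(a, b, t):
          return a + (b - a) * t

      # bilinear filter inside the two neighbouring slices, then lerp across slices
      c00 = lerp(fetch(x0, y0, z0), fetch(x1, y0, z0), fx)
      c10 = lerp(fetch(x0, y1, z0), fetch(x1, y1, z0), fx)
      c01 = lerp(fetch(x0, y0, z1), fetch(x1, y0, z1), fx)
      c11 = lerp(fetch(x0, y1, z1), fetch(x1, y1, z1), fx)
      return lerp(lerp(c00, c10, fy), lerp(c01, c11, fy), fz)
  ```

  Note it only ever touches 8 texels (2 bilinear fetches per slice pair plus one lerp across slices), not 18 - with hardware bilinear filtering on the 2D atlas it drops to 2 `tex2D` fetches and one lerp.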
  12. solenoidz

    backing GI

    I'm not sure if I understand the question correctly, but can't you place your probes around the level in your engine? Then, for every probe, render a cube map located at the probe's position, convert the cube maps to sets of SH coefficients, and store them. At render time, render spheres (for example) at your probes' positions, pass the SH coefficients for each probe to the shader, and for every pixel, get its world position and calculate the pixel->probe vector, which can be used as a normal to extract the light coming from that direction stored in the SH, then shade the pixel with that "global light".
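    The "cube map to SH coefficients, then look up by direction" step above can be illustrated with a toy 2-band (4-coefficient) version. This is a minimal sketch with made-up function names, assuming roughly uniform directional samples standing in for the rendered cube map; production probes typically use 3 bands (9 coefficients) per color channel:

    ```python
    import math

    # Project directional radiance samples onto 2-band real spherical harmonics,
    # then evaluate the result for an arbitrary pixel->probe direction.
    # Illustrative sketch; a real baker would iterate cube-map texels with
    # solid-angle weights instead of a plain sample list.

    def sh_basis(d):
        """First 4 real SH basis functions for unit direction d = (x, y, z)."""
        x, y, z = d
        return [0.282095, 0.488603 * y, 0.488603 * z, 0.488603 * x]

    def project_radiance(samples):
        """samples: list of (direction, radiance) pairs, directions ~uniform on the sphere."""
        coeffs = [0.0, 0.0, 0.0, 0.0]
        for d, radiance in samples:
            for i, b in enumerate(sh_basis(d)):
                coeffs[i] += radiance * b
        weight = 4.0 * math.pi / len(samples)  # Monte Carlo normalisation over the sphere
        return [c * weight for c in coeffs]

    def eval_sh(coeffs, d):
        """Reconstruct the radiance arriving from direction d."""
        return sum(c * b for c, b in zip(coeffs, sh_basis(d)))
    ```

    A sanity check: projecting constant radiance 1.0 over the sphere and evaluating in any direction gives back ~1.0, which is what makes the compression lossless for smooth lighting.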
  13. solenoidz

    Global illumination techniques

    Ok, thanks. If I'm forced to choose between the method you propose and a volume texture covering the entire level, what would you people suggest? A volume texture could be a huge memory hog. For a big level it would need a pretty high resolution in order to prevent noticeable errors in the ambient illumination, especially ambient light (or shadow) bleeding through thin level geometry. I'm also not really sure how to build that volume texture, because Direct3D 9.0 doesn't support rendering to a volume texture. I could probably unwrap it into a single 2D texture, but then I'd need to interpolate manually in the shaders between the different "depth" slices, I guess.
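    The "memory hog" worry is easy to quantify: an uncompressed volume texture's size grows with the cube of its resolution. A back-of-the-envelope helper (illustrative, not a D3D9 call):

    ```python
    # Uncompressed volume texture size; default assumes RGBA8 (4 bytes/texel).

    def volume_texture_bytes(width, height, depth, bytes_per_texel=4):
        return width * height * depth * bytes_per_texel

    # A 256x256x256 RGBA8 volume is already 64 MiB;
    # doubling each axis to 512^3 takes it to 512 MiB.
    ```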
  14. solenoidz

    Global illumination techniques

    Guys, thanks for the explanations. I have a few more questions, though... Let's say I have built the world grid of SH probes. I understand the part about rendering a cube map at every probe's position and converting each cube map to SH coefficients for storage and bandwidth reduction. What I don't really get is how I'm supposed to render level geometry with that grid of SH probes. The papers I've read only explain how to render movable objects and characters. I can understand that - they calculate the closest probe to the object and shade it with that probe. But they seem to do it per object, because the movable objects and characters are relatively small compared to the grid. But I want to render level geometry and be able to beautifully illuminate indoor scenes. In my test room I have many light probes - the room is big, and I don't think I should search for the single closest light probe to the room - after all, they are all contained in the room. I need to do it per vertex, I guess. Am I supposed to find the closest probe to every vertex and calculate lighting for that vertex based on that probe's SH coefficients? But I render a whole room with one draw call - how do I pass so many ambient probes to the vertex shader and, in the shader, find the closest probe for every vertex? And what if I want to do it per pixel? There is definitely something I'm missing here... Thanks in advance.
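    One common answer to "how do I find the nearest probe per vertex or per pixel without searching" is to place the probes on a uniform 3D grid, so the nearest probe falls out of a pure index computation on the world position - no probe list needs to be passed at all if the coefficients live in a texture. A hedged sketch, with illustrative names and grid parameters:

    ```python
    # Map a world position to its nearest probe on a uniform grid.
    # All constants below are made-up examples.

    GRID_ORIGIN = (0.0, 0.0, 0.0)   # world-space corner of the probe grid
    GRID_SPACING = 2.0              # distance between neighbouring probes
    GRID_DIMS = (16, 8, 16)         # probe counts along x, y, z

    def probe_index(world_pos):
        """Flat index of the grid probe nearest to a world-space position."""
        idx = []
        for axis in range(3):
            i = round((world_pos[axis] - GRID_ORIGIN[axis]) / GRID_SPACING)
            idx.append(max(0, min(GRID_DIMS[axis] - 1, i)))  # clamp to the grid
        x, y, z = idx
        return (z * GRID_DIMS[1] + y) * GRID_DIMS[0] + x
    ```

    In a shader the same arithmetic becomes a texture coordinate, so the "per pixel" case is just sampling the probe data at the pixel's world position, with filtering blending neighbouring probes for free.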
  15. solenoidz

    4K monitor vs Multiple Displays

    As far as I can tell, about the aspect ratios - the wider the ratio, the smaller the actual display area for a given diagonal. I mean, the manufacturers advertise and sell us diagonals, not screen areas. For example, a 19'' 4:3 monitor has a bigger area than a 19'' 16:9 monitor.
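    The claim checks out with a couple of lines of math: with width:height = w:h and width² + height² = diagonal², the area follows directly. A quick sketch (function name is just for illustration):

    ```python
    import math

    # Viewable area in square inches for a given diagonal and aspect ratio.
    # width : height = ratio_w : ratio_h, and width^2 + height^2 = diagonal^2.

    def screen_area(diagonal_inches, ratio_w, ratio_h):
        scale = diagonal_inches / math.hypot(ratio_w, ratio_h)
        return (ratio_w * scale) * (ratio_h * scale)
    ```

    For a 19'' diagonal this gives about 173 sq in at 4:3 versus about 154 sq in at 16:9 - roughly 11% less glass for the same advertised size.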