Lunatix

Member
  • Content count: 51
  • Joined
  • Last visited

Community Reputation

144 Neutral

About Lunatix

  • Rank: Member

  1. Umm.. okay, this wasn't clear enough >_< I don't use any OpenGL functions; these are my own functions, but I'd like them to behave the way OpenGL 1.1 did.. it was more a question of whether my thoughts are right or wrong..
  2. Hi! Currently I'm writing a little raytracing library. Everything worked very well, but now I'm implementing matrices for the transformations, so I'd like to get a few things right, because I think they could be wrong:
     - LoadMatrix: simply resets all push/pop stages and loads the given values into CurrentMatrix.
     - PushMatrix: creates a new matrix, copies the current stage and pushes the current matrix onto the stack.
     - MultMatrix: multiplies CurrentMatrix * ObjectMatrix.
     - PopMatrix: restores the last pushed stage.
     - Normal matrix: upper 3x3 of CurrentMatrix, inverted and transposed (recomputed every time LoadMatrix or MultMatrix is called).
     - Setting a vertex: Vector(x, y, z) is multiplied by CurrentMatrix.
     - Setting a normal: Normal(nx, ny, nz) is multiplied by CurrentMatrix -- or, when it is computed by the intersection algorithm, that normal is used as-is (and not transformed by the normal matrix?).
     - The lights: position and direction are recomputed each time LoadMatrix is called, not multiplied by CurrentMatrix.
     - Rays of projection: computed once per SetViewport - because our world is transformed, it isn't necessary to transform them.
     Would be nice if someone could tell me whether my thoughts are right or not.. Happy coding, Lunatix
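    A minimal sketch of how that kind of matrix stack could look in C++ (the Mat4/Mat3/MatrixStack names and layout are placeholders, not the actual library code; the normal matrix is the inverse-transpose of the upper 3x3, which is just the cofactor matrix divided by the determinant):

[code]
#include <array>
#include <vector>

// 4x4 matrix, row-major: m[row][col]
struct Mat4 {
    std::array<std::array<float, 4>, 4> m{};
    static Mat4 Identity() {
        Mat4 r;
        for (int i = 0; i < 4; ++i) r.m[i][i] = 1.0f;
        return r;
    }
};

struct Mat3 {
    std::array<std::array<float, 3>, 3> m{};
};

// C = A * B
static Mat4 Mul(const Mat4& a, const Mat4& b) {
    Mat4 r;
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) {
            float s = 0.0f;
            for (int k = 0; k < 4; ++k) s += a.m[i][k] * b.m[k][j];
            r.m[i][j] = s;
        }
    return r;
}

// Normal matrix = transpose(inverse(upper 3x3 of M)).
// transpose(inverse) equals the cofactor matrix divided by the determinant,
// so the cofactors are computed directly.
static Mat3 NormalMatrix(const Mat4& M) {
    const auto& a = M.m;
    float c00 =   a[1][1] * a[2][2] - a[1][2] * a[2][1];
    float c01 = -(a[1][0] * a[2][2] - a[1][2] * a[2][0]);
    float c02 =   a[1][0] * a[2][1] - a[1][1] * a[2][0];
    float c10 = -(a[0][1] * a[2][2] - a[0][2] * a[2][1]);
    float c11 =   a[0][0] * a[2][2] - a[0][2] * a[2][0];
    float c12 = -(a[0][0] * a[2][1] - a[0][1] * a[2][0]);
    float c20 =   a[0][1] * a[1][2] - a[0][2] * a[1][1];
    float c21 = -(a[0][0] * a[1][2] - a[0][2] * a[1][0]);
    float c22 =   a[0][0] * a[1][1] - a[0][1] * a[1][0];
    float det = a[0][0] * c00 + a[0][1] * c01 + a[0][2] * c02;
    float inv = (det != 0.0f) ? 1.0f / det : 0.0f;
    Mat3 n;
    n.m[0] = { c00 * inv, c01 * inv, c02 * inv };
    n.m[1] = { c10 * inv, c11 * inv, c12 * inv };
    n.m[2] = { c20 * inv, c21 * inv, c22 * inv };
    return n;
}

// The stack semantics described above: LoadMatrix resets everything, PushMatrix
// saves the current matrix, MultMatrix appends a transform, PopMatrix restores
// the last saved state.
class MatrixStack {
public:
    MatrixStack() : current(Mat4::Identity()), normal(NormalMatrix(current)) {}

    void LoadMatrix(const Mat4& m) {
        stack.clear();                   // reset all push/pop stages
        current = m;
        normal = NormalMatrix(current);  // recompute whenever current changes
    }
    void PushMatrix() { stack.push_back(current); }
    void PopMatrix() {
        if (stack.empty()) return;       // nothing to restore
        current = stack.back();
        stack.pop_back();
        normal = NormalMatrix(current);
    }
    void MultMatrix(const Mat4& object) {
        current = Mul(current, object);  // CurrentMatrix * ObjectMatrix
        normal = NormalMatrix(current);
    }

    Mat4 current;   // CurrentMatrix: applied to vertices
    Mat3 normal;    // inverse-transpose of the upper 3x3: applied to normals
private:
    std::vector<Mat4> stack;
};
[/code]
    One difference from OpenGL 1.1 worth noting: glLoadMatrix only replaces the top entry of the current stack and leaves the rest of the stack alone, so a LoadMatrix that clears all push/pop stages is stricter than what GL does.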
  3. Hey! Thank you Ravyne, this information is very good :) I will keep it in mind when I start with OpenCL. And finally, I have managed to get my new library to work, and now it's about 10 times faster than before :) If someone's interested, read more on our (LunaGameWorx) blog in the raytracing section: [url="http://gameworx.org/index.php/raytracing-articles/"]http://gameworx.org/...acing-articles/[/url] But I think 740ms per frame is still slow when I see something like this -> [url="http://dinkla.net/de/programming/cpp-parallel.html"]http://dinkla.net/de...p-parallel.html[/url] In the videos, when the application starts up, they are using only one thread for computing and get 8 frames per second! Once again, 10 times faster than mine.. are there some tricks to speed up calculations or function calls in C++? And should I use float or double vectors? My vector struct uses doubles, because I thought vectors should have good precision in a raytracer or rasterizer..
     Edit: I switched to Release and /Os -> now it's only 103ms per frame and ~9fps! Awesome =D Maybe I should write a bit more inline asm in my vectors Oo
     Edit 2: Found the SIMD SSE2 switch, now I'm at 13fps and 71ms per frame!
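    As a small illustration of the float-versus-double question (just a sketch; the vector struct below is made up, not the library's actual type): with the SSE2 switch the compiler works on 128-bit registers, which hold four floats but only two doubles, and floats also halve the memory traffic per ray, so single precision is usually plenty for a raytracer at this scale.

[code]
#include <xmmintrin.h>   // SSE intrinsics (4 floats per 128-bit register)

// 16-byte aligned float vector; the fourth lane is padding so _mm_load_ps is legal.
struct alignas(16) Vec3f {
    float x = 0.0f, y = 0.0f, z = 0.0f, pad = 0.0f;
};

// Dot product with SSE: one packed multiply, then sum the first three lanes.
inline float Dot(const Vec3f& a, const Vec3f& b) {
    __m128 m = _mm_mul_ps(_mm_load_ps(&a.x), _mm_load_ps(&b.x));
    float lanes[4];
    _mm_storeu_ps(lanes, m);
    return lanes[0] + lanes[1] + lanes[2];
}

int main() {
    Vec3f a{1.0f, 2.0f, 3.0f};
    Vec3f b{4.0f, 5.0f, 6.0f};
    return Dot(a, b) == 32.0f ? 0 : 1;   // 1*4 + 2*5 + 3*6 = 32
}
[/code]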
  4. samoth: In some cases you're right - I shouldn't worry about it. But in my case, I want to gain experience! I don't want to just look up to the people who write libraries like OpenGL or DirectX and think "these people must be some kind of gods" *_* - no, I want to be one of those people who know the mechanics behind the scenes - just a little bit! By the way, I got it working. I searched around and my solution uses CreateDIBSection, BitBlt and other basic GDI functions, and I think it works well. After programming this I found a similar method in the SDL source code. And now I can call "wrtCreateContext(MyPanel.Handle);" from C# and my library does all the steps necessary to create a GdiPixelBuffer and an rtContext for drawing.
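    For anyone finding this later, the CreateDIBSection/BitBlt route looks roughly like the sketch below (a minimal Win32 example; the struct and function names are illustrative, not the library's wrt* API):

[code]
#include <windows.h>
#include <cstdint>

// A 32-bit DIB section we can write raytraced pixels into directly,
// selected into a memory DC so it can be blitted onto the window.
struct GdiBuffer {
    HDC       memDC  = nullptr;
    HBITMAP   dib    = nullptr;
    uint32_t* pixels = nullptr;   // BGRA, top-down rows
    int       width  = 0;
    int       height = 0;
};

bool CreateBuffer(GdiBuffer& buf, HDC windowDC, int w, int h) {
    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = w;
    bmi.bmiHeader.biHeight      = -h;          // negative height -> top-down rows
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    void* bits = nullptr;
    buf.dib = CreateDIBSection(windowDC, &bmi, DIB_RGB_COLORS, &bits, nullptr, 0);
    if (!buf.dib) return false;

    buf.memDC = CreateCompatibleDC(windowDC);
    SelectObject(buf.memDC, buf.dib);
    buf.pixels = static_cast<uint32_t*>(bits);
    buf.width  = w;
    buf.height = h;
    return true;
}

// After the raytracer has filled buf.pixels, copy the frame to the window.
void Present(const GdiBuffer& buf, HDC windowDC) {
    BitBlt(windowDC, 0, 0, buf.width, buf.height, buf.memDC, 0, 0, SRCCOPY);
}
[/code]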
  5. When I get the raytracing working again, I will modify my raytracing code a bit and use OpenCL for the computation. The thing was that I had my own "Pixmap" which the final colors were drawn to - and I had to copy those colors (public struct RTColor) to the Bitmap via "MyBitmap.SetPixel(System.Drawing.Color, ..)". This took too long, and I didn't want to write unsafe code, because I think that if you're programming in a managed environment, you shouldn't use unsafe code until no other solution is left. So that point, plus the fact that I had to make unsafe calls to get OpenCL working with C#, plus the speed advantage of C++, led me to this solution - rewriting the raytracer's core in C++. And I don't think C# is very well suited to this type of project, because having the GC manage all those vectors, colors and rays is (for me) too heavy a performance hit. My core is finished now, and my first (because simpler) solution was an rtContext which has an abstract rtPixelBuffer class to which it outputs the final pixels. So, in C#, I call "wrtCreateContext(HDC DeviceContext)" via DllImport, which creates a "GdiPixelBuffer", and the result is a valid rtContext with an rtPixelBuffer (GdiPixelBuffer)..
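    The layering described above could be sketched like this (just the idea; the real signatures and members in the library may differ, and the GDI details are the same as in the CreateDIBSection sketch above):

[code]
#include <windows.h>
#include <cstdint>

// Abstract output surface: the raytracing core only writes pixels through this
// interface and never needs to know whether the backend is GDI, SDL or anything else.
class rtPixelBuffer {
public:
    virtual ~rtPixelBuffer() = default;
    virtual void SetPixel(int x, int y, uint32_t bgra) = 0;
    virtual void Present() = 0;   // push the finished frame to the target
};

// GDI-backed implementation; the bodies would do what the CreateDIBSection
// sketch above does (create the DIB section, write into its bits, BitBlt).
class GdiPixelBuffer : public rtPixelBuffer {
public:
    explicit GdiPixelBuffer(HDC windowDC) : dc(windowDC) { /* CreateDIBSection etc. */ }
    void SetPixel(int x, int y, uint32_t bgra) override { (void)x; (void)y; (void)bgra; }
    void Present() override { /* BitBlt to dc */ }
private:
    HDC dc;
};

struct rtContext {
    rtPixelBuffer* target = nullptr;
};

// Exported for the C# side, which calls it via DllImport.
extern "C" __declspec(dllexport) rtContext* wrtCreateContext(HDC deviceContext) {
    rtContext* context = new rtContext();
    context->target = new GdiPixelBuffer(deviceContext);
    return context;
}
[/code]
    On the C# side, wrtCreateContext is then declared with DllImport and the returned context is kept as an opaque IntPtr handle.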
  6. Very funny, powly K ;) I didn't want that kind of "direct" access; it was more a question of how OpenGL or DirectX bring the final pixels onto the window, just to understand the technique. I tried something like creating a panel and setting a bitmap as its background image, then writing to that image - this operation cost ~400ms (in C#; my first raytracing approach was written in it, but I have now ported the code to C++ as a library for more speed). And because of that lack of performance, I thought I could get a bit more "direct" access..
  7. Thank you, this is a good approach / hint! Now I can turn to Google with a more specific "knowledge" of my problem ;) I think I will get this working.. because I'd like to get "direct" access to a buffer, without calling GDI or using bitmaps for my raytracer.. such a "step in the middle" costs too much time. Thanks
  8. Hello! I would like to know how a graphics API like OpenGL finally draws to a window's handle.. I know what a PIXELFORMATDESCRIPTOR or an HDC is, but my question is - how does OpenGL draw the final "pixels" to the control / handle? I downloaded the Mesa GL source code and I'm trying to find something.. but maybe someone knows a sample, or another solution for my problem?
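    For context, the Windows-side plumbing around an HDC looks roughly like this (a sketch with error handling omitted); the part I'm really asking about is what happens inside SwapBuffers, i.e. how the back buffer finally ends up on the window:

[code]
#include <windows.h>
#include <GL/gl.h>

// Minimal WGL setup: the pixel format ties the HDC to a framebuffer owned by the
// driver, and SwapBuffers is the call that finally presents the pixels to the
// window. A software renderer does the equivalent step itself with a GDI blit
// (CreateDIBSection + BitBlt or SetDIBitsToDevice).
HGLRC CreateGLContext(HDC hdc) {
    PIXELFORMATDESCRIPTOR pfd = {};
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cDepthBits = 24;

    int format = ChoosePixelFormat(hdc, &pfd);
    SetPixelFormat(hdc, format, &pfd);

    HGLRC rc = wglCreateContext(hdc);
    wglMakeCurrent(hdc, rc);
    return rc;
}

// Per frame: issue GL calls, then present the back buffer.
void EndFrame(HDC hdc) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ... draw calls ...
    SwapBuffers(hdc);   // the driver copies/flips the back buffer onto the window
}
[/code]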
  9. Idea for a Fog "Layer"

    You're right, I see it too =D Maybe something interesting: here's an article about animated clouds via Perlin noise.. sounds interesting o_o [url="http://freespace.virgin.net/hugo.elias/models/m_clouds.htm"]http://freespace.virgin.net/hugo.elias/models/m_clouds.htm[/url]
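    The core of that article is just summing octaves of smoothed noise; a tiny sketch of the idea (the hash function and constants are arbitrary picks, not taken from the article):

[code]
#include <cmath>
#include <cstdint>

// Pseudo-random value in [0, 1) for an integer lattice point.
static float Hash(int x, int y) {
    uint32_t n = static_cast<uint32_t>(x) * 374761393u + static_cast<uint32_t>(y) * 668265263u;
    n = (n ^ (n >> 13)) * 1274126177u;
    return (n & 0xFFFFFF) / static_cast<float>(0x1000000);
}

static float Smooth(float t) { return t * t * (3.0f - 2.0f * t); }

// Value noise: smoothly interpolate the four surrounding lattice values.
static float ValueNoise(float x, float y) {
    int xi = static_cast<int>(std::floor(x));
    int yi = static_cast<int>(std::floor(y));
    float tx = Smooth(x - xi), ty = Smooth(y - yi);
    float a = Hash(xi, yi),     b = Hash(xi + 1, yi);
    float c = Hash(xi, yi + 1), d = Hash(xi + 1, yi + 1);
    float top    = a + (b - a) * tx;
    float bottom = c + (d - c) * tx;
    return top + (bottom - top) * ty;
}

// Cloud density in roughly [0, 1): sum a few octaves, drift them over time.
float CloudDensity(float x, float y, float time) {
    float sum = 0.0f, amplitude = 0.5f, frequency = 1.0f;
    for (int octave = 0; octave < 4; ++octave) {
        sum       += amplitude * ValueNoise(x * frequency + time, y * frequency);
        amplitude *= 0.5f;
        frequency *= 2.0f;
    }
    return sum;
}
[/code]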
  10. Idea for a Fog "Layer"

    Okay, thank you guys for your suggestions. I think I will use the two quads with cloud textures... I thought it could be something more *magical* ;)
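    In case it helps someone later, the two-quad version boils down to something like the sketch below (fixed-function GL; the heights, scroll speeds and alpha are arbitrary picks):

[code]
#include <windows.h>
#include <GL/gl.h>

// Two large textured quads hovering over the map at slightly different heights,
// with their texture coordinates scrolled at different speeds so the cloud layers
// drift apart. Alpha blending lets the terrain underneath show through.
void DrawFogLayers(GLuint cloudTexture, float time, float mapSize) {
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, cloudTexture);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glDepthMask(GL_FALSE);               // transparent layers: don't write depth

    const float heights[2] = { 2.0f, 2.5f };
    const float speeds[2]  = { 0.010f, 0.017f };

    for (int layer = 0; layer < 2; ++layer) {
        float offset = time * speeds[layer];
        glColor4f(1.0f, 1.0f, 1.0f, 0.4f);
        glBegin(GL_QUADS);
            glTexCoord2f(offset,        offset);        glVertex3f(0.0f,    heights[layer], 0.0f);
            glTexCoord2f(offset + 4.0f, offset);        glVertex3f(mapSize, heights[layer], 0.0f);
            glTexCoord2f(offset + 4.0f, offset + 4.0f); glVertex3f(mapSize, heights[layer], mapSize);
            glTexCoord2f(offset,        offset + 4.0f); glVertex3f(0.0f,    heights[layer], mapSize);
        glEnd();
    }

    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);
}
[/code]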
  11. Luna Game Worx

  12. I've written my own (voxel) terrain renderer, so I can say that the first thing that differs from your renderer is that Minecraft's blocks are 3 times bigger than yours. The second thing: for such big geometric structures, use vertex buffer objects and recalculate your geometry only once per change. Also play with the sizes - 32x128x32 is a good chunk size. And make sure you only keep chunks within view range + precache range in memory. (If you are interested in pics, click here: [url="https://www.facebook.com/pages/Luna-Game-Worx/286480244750104"]https://www.facebook.com/pages/Luna-Game-Worx/286480244750104[/url] ;)
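    A rough sketch of the "rebuild only on change" idea (GLEW is used here just to get the buffer-object entry points; the chunk layout and the elided face extraction are placeholders, not code from my renderer):

[code]
#include <GL/glew.h>
#include <vector>

// One chunk of the voxel terrain. Its mesh is rebuilt into a VBO only when a
// block actually changes (dirty flag), never per frame.
struct Chunk {
    static const int W = 32, H = 128, D = 32;     // the chunk size suggested above
    unsigned char blocks[W * H * D] = {};         // 0 = air, otherwise a block type
    GLuint vbo = 0;
    int    vertexCount = 0;
    bool   dirty = true;                          // set whenever a block is added/removed

    void RebuildIfDirty() {
        if (!dirty) return;
        std::vector<float> verts;                 // x, y, z per vertex
        // ... walk the blocks and emit faces only where a solid block borders air ...
        if (vbo == 0) glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(float),
                     verts.empty() ? nullptr : verts.data(), GL_STATIC_DRAW);
        vertexCount = static_cast<int>(verts.size() / 3);
        dirty = false;
    }

    void Draw() const {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, nullptr);
        glDrawArrays(GL_QUADS, 0, vertexCount);
        glDisableClientState(GL_VERTEX_ARRAY);
    }
};
[/code]
    Per frame you then only call RebuildIfDirty and Draw for the chunks inside the view range + precache range; everything further away gets unloaded.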
  13. I need an idea for rendering fog, for example like in Greed Corp: [img]http://www.eprison.de/pics/games/1273/pic_1279561239.jpg[/img] What do you think Greed Corp is doing here? Simple sprites with cloudy textures, floating at a defined level over the map? Or is there a shader solution? Regards, Lunatix
  14. Yay, Nearest did the trick, thank you! Sorry for the simple question... I must have had a little blackout or something >_<
  15. Hi! I'm building a voxel shooter game and I'm stuck on the textures. I have a chunk and map system and am now trying to texture the voxels via tile texturing. All tiles of the texture are 32x32 and the texture itself is 512x512. [Picture 1] [img]http://www.abload.de/img/minecraft_test_003asgidy.jpg[/img] [Picture 2] [img]http://www.abload.de/img/minecraft_test_003boyisc.jpg[/img] [Picture 3] [img]http://static.allegro.cc/image/cache/c/9/c9ca1a3751b402e348a6173d9ca222f5.png[/img] So, as you can see, the tiles look blurred [Pic. 2] and there are seams between them [Pic. 1]. I thought the mipmapping was causing it, so I also added one extra pixel to prevent bleeding [Pic. 3] - but nothing seems to work. For example, the Minecraft tiles are 16x16 - and they are sharp as hell. So I would like to ask if somebody knows a good texture setup for this... maybe I have to write my own "BuildMipMaps" function? With a "Pixel Resize" downsampling mode like in Paint Shop Pro?
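    As noted in the follow-up above, nearest filtering turned out to be the fix; a minimal sketch of that texture setup (the parameter choices are mine, not a quote from any engine):

[code]
#include <windows.h>
#include <GL/gl.h>

// Sharp, Minecraft-style tiles: nearest-neighbour filtering so the 32x32 texels
// are not blended together. The remaining seams come from neighbouring tiles
// bleeding into each other on the 512x512 atlas, which the extra padding pixels
// around each tile are meant to absorb once mipmaps shrink the texture.
void SetupTileTexture(GLuint texture) {
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    // Without mipmaps, plain GL_NEAREST also works for the minification filter;
    // with mipmaps, staying "nearest" between levels keeps the blocky look.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_NEAREST);
}
[/code]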