stephanh

Members
  • Content count

    176
Community Reputation

174 Neutral

About stephanh

  • Rank
    Member
  1. Shapefiles with heightmap terrain?

    This is not really a DirectX question. One way is to rasterize the shapefile so you get a bitmap for water just like the one you have for elevation. Have a look at gdal_rasterize (http://www.gdal.org/gdal_rasterize.html) for how to rasterize a vector shapefile into a bitmap. If you have both the elevation and the hydro data as GeoTIFFs, you can easily georeference one against the other. Another way is to use the polygons in the shapefile directly. The GDAL C++ library supports reading SHP files, so you could use it to parse the polygons and determine the water areas from them. -- stephan [Edited by - stephanh on January 12, 2009 7:42:16 AM]
  2. Hi, the USER object count of my application slowly increases. After several hours it reaches the default limit of 10,000 USER objects per process. While there are tons of tools to track down GDI and kernel handles (like !htrace/!handle in WinDbg, Sysinternals Process Explorer and the like), I couldn't find anything that gives more information about the USER objects allocated by a process. There also seem to be no official Windows APIs to access this information. Finding out what type of objects are leaking would be very helpful. Any ideas? Thanks a lot.
  3. Hi, I am having a hard time getting mental ray to produce good AO maps for an architectural model we have. I am trying to get the model ready for real-time visualization. The model was done by an external studio (we don't have any artists in-house). The model is properly unwrapped, but the AO maps produced by mental ray are very blocky, no matter how many samples I use or how large I set the target texture. Additionally, the maps seem to be shifted: occluded regions are shrunk, and there's occlusion where no occluders are around. Do you know any resources on how to properly set up mental ray to produce good AO maps? Or any hints & tips? Thank you very much! Edit: Also tried the FAOgen demo; while it produces nice AO maps, it lacks control. Rooms need stronger attenuation along the ray distance than outside structures do. -- stephan [Edited by - stephanh on July 7, 2008 10:27:43 AM]
  4. Distant Terrain Rendering

    The far clip plane matters surprisingly little for depth precision. You can even push it to infinity. I usually keep my near plane around 1.0 - 1.5 for a human-sized observer and push the far plane as far as necessary. Rendering terrain that large, assuming by 16km^2 you mean a ~16k*16k heightfield, is more of a storage problem: you are dealing with 1 GiB of raw float height samples. The data has to be paged in on demand using a LOD scheme that supports some kind of out-of-core rendering. Additionally, it might be wise to compress that data. Your LOD sounds like chunked LOD, which should be able to handle such amounts of data (with paging). But I doubt the terrain is that detailed in Oblivion; ground clutter and close object meshes do their share to conceal nearby terrain detail, and simple fog or atmospheric effects provide a good depth impression. How large is your current heightmap? What hardware do you target?
  5. Distant Terrain Rendering

    I don't know how it's done in Oblivion, never played it. But as you said, either use a terrain LOD scheme which handles very large datasets well (e.g. geoclipmapping) or render the far-away landscape into a cubemap which gets updated every now and then (every 20 frames or so). Distant level geometry was done that way in Shadow of the Colossus (PS2): The Making Of "Shadow Of The Colossus" Edit: typo geomipmapping -> geoclipmapping [Edited by - stephanh on June 24, 2008 7:53:39 AM]
  6. Hi, I am currently playing around with rendering cross-sections in real-time architectural visualization. The goal is to cap cross-sections between the clip plane and the geometry, which would otherwise reveal backfaces. The current approach is to render the geometry double-sided with a user clip plane while tracking backfaces in the stencil buffer (using two-sided stencil). The clip planes themselves are rendered in a second pass, only drawing the pixels marked as backfaces in the stencil buffer. This works reasonably well but requires a very clean model with proper back/front faces. Additionally, it doesn't work on geometry like glass windows, which is rendered two-sided anyway. Are there any other approaches to get capped cross-sections? Thanks for any tips. -- stephan [Edited by - stephanh on June 23, 2008 2:40:11 AM]
  7. Hi, is it possible to pass native types between two C++/CLI mixed assemblies? My setup looks like the following:

     assembly A:

     public ref class SceneNode {
         // ...managed stuff...
         Engine::Node* GetNode();
     };

     assembly B:

     public ref class SceneEditor {
         SceneEditor(SceneNode^ node) {
             Engine::Node* native = node->GetNode();
             // ...
         }
     };

     B depends on A. If I add A to B as a reference, the GetNode method is missing from the assembly; linking against the assembly leads to undefined tokens for all managed classes/types. If I put A and B into the same project/assembly, it works flawlessly. Does anybody know how to get this working in two assemblies (without wrapping the Node pointer in an IntPtr or something similar)? Thanks a lot!
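    For reference, C++/CLI keeps native types private to their assembly by default, which would explain GetNode disappearing from A's metadata when referenced from B. A sketch of the usual MSVC workaround, assuming `#pragma make_public` applies cleanly to your `Engine::Node` type:

```cpp
// In assembly A, before the type appears in any public signature:
// make the native type's metadata visible to referencing assemblies.
#pragma make_public(Engine::Node)

public ref class SceneNode {
public:
    Engine::Node* GetNode();   // now callable from assembly B
};
```

    Both assemblies still need to agree on the native type's layout, i.e. share the same Engine headers.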
  8. Quote:Original post by Prune Quote:Original post by LeGreg Wrong, right, there is none. I disagree. The way it should be done is the one that makes the least visual difference, for the human visual system, from an equivalent image displayed on an HDR screen. The same blind testing with human subjects used to study audio and video codecs needs to be applied here to find an approximately optimal mapping operator. See "Evaluation of Tone Mapping Operators using a High Dynamic Range Display". Surprisingly, the answer might be different when choosing the "best" TMO with vs. without a reference image.
  9. rendering lightnings

    You could try a line mesh built dynamically with an L-system, then use a glow effect on the L-system lightning-bolt tree. It doesn't look that bad (http://www.selleri.org/Blender/tuts/Lightnings.pdf) and could easily be done in real time.
  10. If I am not mistaken, data scattering can only be implemented at the vertex level. Do your computations into a render target and use vertex texture fetching in a second pass to scatter the fragment data, e.g. for histogram generation.
  11. As you pointed out, you have several options. With billboarding there's always a huge fill-rate cost, but you could try grouping trees and using dynamic impostors for them to reduce the overdraw a little bit. Take care to schedule the updating of the impostors over several frames. Parallax mapping was also used in this context, see http://portal.acm.org/citation.cfm?id=1174448 (ACM account required, maybe available somewhere else too), but it only looks good from pretty far away, though somewhat better than just baking the canopy color into the terrain textures, of course. Decaudin and Neyret used volume textures (http://www-imagis.imag.fr/Publications/2004/DN04/). Roettger fills a volume with billboards in a pseudorandom manner, decreasing the density view-dependently (if I understood that right). Paper: http://www.stereofx.org/papers/VEGETATION.PDF Source: http://www.stereofx.org/terrain.html (scroll down to the Fränkische demo). I'd first try dynamic impostors, extending your billboard "forest" renderer.
  12. Reverse engineer c++ to UML

    Quote:Original post by Fred@ss Out of curiosity, what do you do to learn from what already exists? By studying the documentation. I dare to claim that any larger project worth looking at has some kind of architecture/design docs.
  13. Reverse engineer c++ to UML

    I have tried UML tools in the past, from the Visio for Enterprise Architects VSS-plugin to IBM Rational Rose. The C++ reverse-engineering results for a large project were very suboptimal. In my opinion, one is better off manually modelling the important parts of a project at the required granularity. C++ isn't easy to parse; I guess that's why most free UML tools only support C# or Java for reverse-engineering.
  14. multi-threading and games

    Quote:Original post by Palidine Going forward I think it's going to make more sense to go with a componentized architecture instead of a hierarchical one. i.e. objects are composed of components that don't directly talk to each other through function calls and pointers, but rather through messages or something. That way you can have objects concurrently updating in different threads (or even different parts of the same object updating in different threads). Very true. Game devs are discovering actor systems, which have been used in other domains for years. Concerning concurrency especially, the (game) industry lags behind heavily, something I don't really understand given all the talk about "next-gen" technology. The content is next-gen, the hardware maybe, but software-wise we are only just leaving the stone age.
  15. Search for the Model-View-Controller pattern: http://en.wikipedia.org/wiki/Model_view_controller Ordinary software-engineering practices apply to game dev too. And read the thread 15 lines below yours: http://www.gamedev.net/community/forums/topic.asp?topic_id=454472 It's about linking physics to graphics, i.e. driving the visualization from the simulation or game logic, if you want. -- I'd really like to know approximately how much redundant information a forum contains, with the same questions popping up every month. I am looking forward to a reply-pattern catalog. [Edited by - stephanh on July 9, 2007 9:56:50 AM]