About Koder4Fun

  1. I know that changing code carries the risk of introducing bugs, but in this case see it as an opportunity to start learning about 2D XNA rendering. A far simpler method is to pack all the tiles you need into a single texture (a theme) with a paint program, and use it to render the level. The tiles can be rendered with calls to [b]SpriteBatch methods[/b]. This way you can reach a steady 60 fps. To increase performance you can skip all tiles completely outside the current view. You can also unlock XNA from vsync, to check the real performance, by adding [b]IsFixedTimeStep = false;[/b] to the game object's constructor. Here [url="http://msdn.microsoft.com/en-us/library/bb194908.aspx"]http://msdn.microsof...y/bb194908.aspx[/url] you will find all you need to load a texture (your packed tiles) and draw a sprite (in your case, a tile). You can use the same class to draw the ship, projectiles, etc. Last notes: keep in mind that accessing resources at a low level should be done only when no other way can be found, and that C# is a managed language, a little slower than C/C++. To move blocks of data on .NET 4 you can use the Buffer class: [url="http://msdn.microsoft.com/en-us/library/System.Buffer(v=vs.100).aspx"]http://msdn.microsoft.com/en-us/library/System.Buffer(v=vs.100).aspx[/url]
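    To make the SpriteBatch approach concrete, here is a minimal sketch of a culled tile loop. I'm assuming a packed [b]tileSheet[/b] texture, a [b]map[x, y][/b] array of tile indices, and camera offsets in pixels; all of these names are illustrative, not from any real project:
[code]
// Draw only the tiles that fall inside the current view.
int firstX = Math.Max(0, (int)(cameraX / tileSize));
int firstY = Math.Max(0, (int)(cameraY / tileSize));
int lastX = Math.Min(mapWidth - 1, (int)((cameraX + viewWidth) / tileSize));
int lastY = Math.Min(mapHeight - 1, (int)((cameraY + viewHeight) / tileSize));

spriteBatch.Begin();
for (int y = firstY; y <= lastY; ++y)
{
    for (int x = firstX; x <= lastX; ++x)
    {
        int tile = map[x, y];
        // The source rectangle selects one tile inside the packed sheet
        // (tilesPerRow = how many tiles fit in one row of the sheet).
        Rectangle src = new Rectangle((tile % tilesPerRow) * tileSize,
                                      (tile / tilesPerRow) * tileSize,
                                      tileSize, tileSize);
        Vector2 pos = new Vector2(x * tileSize - cameraX, y * tileSize - cameraY);
        spriteBatch.Draw(tileSheet, pos, src, Color.White);
    }
}
spriteBatch.End();
[/code]
The source rectangle picks one tile out of the packed sheet, so the whole level is drawn from a single texture inside one Begin()/End() pair.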
  2. GPUs operate only on triangles, so to get the same shading you need the colors on the vertices to be the same [b]and the triangulation to be the same[/b] (a quad is composed of 2 triangles). However, your quad (I think the XNA shot is the first one) seems to be rotated 90° around the axis you are looking along. I don't know whether you use a camera or not. If you render the quad vertices directly, without transformations, you may need to swap the X/Y components and set Z (the depth into the screen) to 0.0. Also, if you pass the quad vertices directly to the shader, use a [b]Vector4[/b] and specify W = 1.0 (so {X, Y, 0.0, 1.0}). W = 1.0 indicates a position, not a direction vector. This fourth component is important when you apply matrix transforms in the shaders. Another thing: XNA is based on DirectX 9.0c, which is [b]slightly different[/b] from DX 10 and later.
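    For example, a sketch of a quad corner and a normal passed directly to the shader (the values are only illustrative):
[code]
// A position has W = 1.0, so matrix translation applies to it;
// a direction (e.g., a normal) has W = 0.0 and ignores translation.
Vector4 corner = new Vector4(x, y, 0.0f, 1.0f);   // position
Vector4 normal = new Vector4(nx, ny, nz, 0.0f);   // direction
// Vector4.Transform(corner, world) is moved by world's translation;
// Vector4.Transform(normal, world) is only rotated/scaled.
[/code]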
  3. Thanks to all, guys! You've given me several different solutions to the problem; now I'll test which fits best for me. [quote name='pacobarter' timestamp='1345415093' post='4971244'] As an estimation (max bound), you can compute the minimum of all nine distances between the vertices of the triangles. Then, filtered by a threshold, you can choose candidates and compute the real distance between each vertex of one triangle and the plane that represents the other triangle. [/quote] I do this (I work with polys, not triangles, but basically it's the same). I also check the distance from center to center of the two polygons. The only distance I still need to check is when the minimum distance is made by a segment whose endpoints lie inside the polygons. [quote name='jefferytitan' timestamp='1345406263' post='4971194'] If speed's not a huge concern, perhaps the below would work: [url="http://cgm.cs.mcgill.ca/~orm/mind2p.html"]http://cgm.cs.mcgill...orm/mind2p.html[/url] [url="http://en.wikipedia.org/wiki/Rotating_calipers"]http://en.wikipedia....tating_calipers[/url] Personally I'd just go the AABB way for simplicity and speed, and then optimise if it's needed. Assuming you don't use single huge polygons, the chances of AABBs overlapping for arbitrary polygons should be pretty low. [/quote] I've already seen this approach, but it's for 2D. If I'm correct, when I translate it to 3D the support lines become planes parallel to the polygon's normal... Thanks again.
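    As a building block for the 3D polygon-to-polygon distance discussed above, here is a sketch of the classic squared distance between two segments (clamping the closest-point parameters to [0, 1]). It covers the vertex-vertex, vertex-edge, and edge-edge cases; the face-interior case still needs a separate point-to-plane projection test:
[code]
// Squared minimum distance between segments p1+s*(q1-p1) and p2+t*(q2-p2), s,t in [0,1].
static float SegmentSegmentDistSq(Vector3 p1, Vector3 q1, Vector3 p2, Vector3 q2)
{
    Vector3 d1 = q1 - p1, d2 = q2 - p2, r = p1 - p2;
    float a = Vector3.Dot(d1, d1), e = Vector3.Dot(d2, d2), f = Vector3.Dot(d2, r);
    const float EPS = 1e-6f;
    float s, t;
    if (a <= EPS && e <= EPS) { s = t = 0.0f; }                         // both degenerate to points
    else if (a <= EPS) { s = 0.0f; t = MathHelper.Clamp(f / e, 0.0f, 1.0f); }
    else
    {
        float c = Vector3.Dot(d1, r);
        if (e <= EPS) { t = 0.0f; s = MathHelper.Clamp(-c / a, 0.0f, 1.0f); }
        else
        {
            float b = Vector3.Dot(d1, d2);
            float denom = a * e - b * b;                                 // always >= 0
            s = denom > EPS ? MathHelper.Clamp((b * f - c * e) / denom, 0.0f, 1.0f) : 0.0f;
            t = MathHelper.Clamp((b * s + f) / e, 0.0f, 1.0f);
            s = MathHelper.Clamp((b * t - c) / a, 0.0f, 1.0f);           // re-project s after clamping t
        }
    }
    Vector3 c1 = p1 + d1 * s, c2 = p2 + d2 * t;
    return (c1 - c2).LengthSquared();
}
[/code]
Taking the minimum of this function over all edge pairs of the two polygons, plus the point-to-plane projection tests, gives the true minimum distance.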
  4. I'm sorry, it's true... The [b]polygons are convex and planar and the vertices are ordered counter-clockwise[/b]. I already have an algorithm to check whether two convex polys intersect, so I can filter that case out and return zero distance; otherwise I will use the algorithm I'm searching for. The distance must be [b]correct or at least underestimated[/b]: if the returned distance is bigger than the real one, the lightmap's contribution is skipped, causing illumination artifacts (darker regions). I'm not searching for speed; I precalculate these distances only once during radiosity setup. [quote]I have another question now: is a radiosity solution based on hemicubes automatically unit-less? I thought that the inverse-square law comes from the area a polygon fills on the hemicube faces, so the perspective reduces it automatically... and so the units of the scene would not matter. But now, measuring the distance directly from the polys, I think I need to convert the units to meters for it to work well?[/quote] I've partially solved it: the assertions made in the box above are wrong. I need to check the solid angle and thus account for the square of the distance between the hemicube origin and the patch center. I have also added a UnitsToMeters float to define the scene scale. Now it's all more correct. The distance algorithm [b]is still needed[/b]...
  5. After some hours of searching, and some implementations that don't work, I'm hoping for help from people who know more about this topic. I need this for an iterative radiosity renderer, using the gathering method, written in C# and using the GPU (with XNA) to speed up the rendering. For each polygon I have a lightmap [b]and a list of lightmaps visible from it[/b]. That part works OK. The idea is that when [b]all lightmaps in the "visible list"[/b] have an average energy less than a threshold, I can skip the light gathering for the entire polygon's lightmap. At first I considered only the average energy, but I found that by accounting for the distance (or better, the square of the distance, which is the physical decay of light) I can rescale the average energy and check it against the threshold to do a better filtering:
[code]
public void GatherEnergy(float accuracy, ref Hemicube h, ref SceneRenderer scene)
{
    bool skip = true;
    for (int i = 0; i < VisibleLightMaps.Length; ++i)
    {
        if ((VisibleLightMaps[i].lightmap.AverageEnergy / VisibleLightMaps[i].distSquared) > energyThreshold)
        {
            skip = false;
            break;
        }
    }

    if (skip)
        return;

    if (ForDetailMesh)
        GatherEnergyForDetailMesh(accuracy, ref h, ref scene);
    else
        GatherEnergyForPoly(accuracy, ref h, ref scene);

    CalculateMaxAndAverageEnergy();
}
[/code]
NOTE: the distance I need is the [b]real minimum distance[/b] (or distance squared) between the polygons. The points that define the distance segment can be on a vertex, on an edge, or inside the polygon. I hope I've been clear. Thanks in advance for your help.
  6. C# Game Programming Audio Issue Fix

    The last Assert can fail only if the file or path doesn't exist, or if there is not enough memory to allocate the buffer. Given today's PC configurations I don't think it's a memory problem; more likely a [b]wrong construction of the path string[/b], or a folder name that contains spaces, which can cause problems. You can add this code at the beginning of the method to check that the path is correctly recognized:
[code]
System.Diagnostics.Debug.Assert(File.Exists(Path), "File not found");
[/code]
For a quick check, be sure to have the game [b]in the same folder on the real and virtual machine[/b] and see if it works. Also check that the real and virtual machines both run a 32-bit or both a 64-bit system; it's important for run-time DLL linking. If you compile the project for 64-bit, OpenAL searches for 64-bit libraries; compiling for 32-bit, it looks for 32-bit libs. Another thing to check is the version of the .NET Framework installed on the real and virtual machines.
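    For example, a sketch of building the path relative to the executable, so it resolves the same way on both machines (the content folder and file names here are only illustrative):
[code]
// Build the file path from the executable's folder instead of hard-coding it.
string baseDir = AppDomain.CurrentDomain.BaseDirectory;
string soundPath = System.IO.Path.Combine(baseDir, "Content", "shoot.wav");
System.Diagnostics.Debug.Assert(System.IO.File.Exists(soundPath), "File not found: " + soundPath);
[/code]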
  7. 3d camera

    I don't know your knowledge of windows forms programming, but if your GUI is complex you can add the GraphicsDeviceControl to a windows form (like in the tutorial) [b]to display only the scene[/b], and handle all the GUI with windows forms controls. It's simpler (for me) and you get better performance due to the different threads that .NET uses for the GUI and the code logic. I know the Neoforce project, and I think it's better suited to a real-time environment, like in-game menus etc. As you can see in the attached image (an editor based on XNA/.NET that I'm developing), all the interface is done with windows forms, and the 4 viewports are made as in the "windows forms series", with the enhancements I wrote about in my previous post.
  8. 3d camera

    Please be more explicit. Do you have to implement everything from zero? Have you already developed a camera class in an XNA game, and do you want to make something similar in a windows forms project started from the "windows forms series" XNA tutorials? Anyway, I'll try a reply:
[list]
[*]Add the camera class to the project, or write a new one (in this case it's easier to develop it in an XNA game project for debugging and move it to windows forms when it works).
[*]Add all the needed XNA assembly references: Microsoft.Xna.Framework, Microsoft.Xna.Framework.Graphics, Microsoft.Xna.Framework.Game (and maybe others).
[*]Derive a control from GraphicsDeviceControl; you can name it ViewportControl.
[*]Intercept the windows forms events of ViewportControl by overriding: OnKeyDown(), OnKeyUp(), OnMouseDown(), OnMouseUp(), OnMouseClick(), etc.
[*]In the body of those methods, update the camera by moving and/or rotating it, for example:
[/list]
[code]
// handle mouse wheel movement and react with a camera action
protected override void OnMouseWheel(MouseEventArgs e)
{
    HandleCamera(e.Button, 0.0f, 0.0f, e.Delta);
    base.OnMouseWheel(e);
}
[/code]
[list]
[*]Override the Draw method in ViewportControl like this:
[/list]
[code]
protected override void Draw()
{
    // code to draw the scene, like in the Draw() method of an XNA game.
}
[/code]
If you want continuous rendering you need to add these lines at the end of the Initialize() method of ViewportControl (you need to override it from the base class):
[code]
// Hook the idle event to constantly redraw
Application.Idle += delegate { Invalidate(); };
Invalidate();
[/code]
otherwise you need to manually update the rendering by calling the Invalidate() method on ViewportControl whenever the camera or the objects in the scene move. Look at this link from App Hub: [url="http://xbox.create.msdn.com/en-US/education/catalog/sample/winforms_series_1"]http://xbox.create.m...nforms_series_1[/url] I hope I've hit the target [img]http://public.gamedev.net//public/style_emoticons/default/smile.png[/img]
  9. C# Game Programming Audio Issue Fix

    I see you've solved your issue on a real PC. But please paste the method that contains the Assert() to give us a chance to help you; without context it's impossible to say what causes the problem. Also, for virtual machines, you need to properly configure the virtual sound card. I don't know what virtualization software you use, but all of them need this configuration.
  10. I don't know if I understood correctly... Dynamic lights in reflections are rendered the same way you render them in the scene. You need to set up a [b]multi-pass render[/b] with [b]additive blending[/b] enabled, to sum the contributions of all incident lights, like this:
[code]
render the scene with only ambient light
set alpha blending to additive blending
for each dynamic light
{
    set on the shader: origin, direction (if a directional light), and intensity of the light
    (if you have written custom shaders you know the standard Lambertian/Phong calculations, in the simplest case)
    render the scene again (illuminated only by this light)
}
[/code]
I'm not sure, but I think you can do this with the standard XNA effects too, if you are not familiar with shader programming. To optimize this you can send more than one light at a time to the shader (you need to write a custom shader), and you should cull all objects not visible from the rendered light. Keep in mind that a spot light has a frustum like a camera, and you can use it to cull geometry. For omni and directional lights you can calculate the maximum distance the light can reach, and cull the unlit geometry. If you need to cast shadows, it's better to render one light, with its shadow map, at a time. With no shadows, the multi-light implementation is better. I hope you understand me; if not, please post.
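    In XNA 4.0 terms, the pseudocode above might look like the following sketch. It assumes a custom effect exposing [b]AmbientOnly[/b], [b]LightPosition[/b] and [b]LightColor[/b] parameters, and a DrawScene() helper; those names are mine, not standard XNA:
[code]
// Pass 1: lay down depth and the ambient term.
device.BlendState = BlendState.Opaque;
effect.Parameters["AmbientOnly"].SetValue(true);
DrawScene(effect);

// Additive passes: one per light, summed into the framebuffer.
device.BlendState = BlendState.Additive;
device.DepthStencilState = DepthStencilState.DepthRead;  // depth is already written
effect.Parameters["AmbientOnly"].SetValue(false);
foreach (Light light in visibleLights)
{
    effect.Parameters["LightPosition"].SetValue(light.Position);
    effect.Parameters["LightColor"].SetValue(light.Color);
    DrawScene(effect);  // the scene lit by this light only
}
[/code]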
  11. I thought of this only now... Obviously you can [b]cut down the detail of the scene[/b] when you render cubemaps for reflections/shadow casting:
[list]
[*]using simplified/optimized meshes;
[*]shortening the maximum rendering distance.
[/list]
  12. Custom keybinds/name input?

    The key is to [b]define a list of actions[/b] needed by your game: up, down, left, right, jump, fire, etc., and to store in these actions the selected keys:
[code]
Keys up, down, left, right, jump, fire;
[/code]
Now, when you enter your options menu to configure the "jump" action (for example), you need to listen for pressed keys and, when [b]exactly one key[/b] is pressed, store the jump binding like this:
[code]
jump = Keyboard.GetState().GetPressedKeys()[0];
[/code]
You need to take care to [b]wait for the release[/b] of the key used in the options menu before entering "key selection mode" (if you use Enter, you need to wait for Enter's release to avoid binding jump to the Enter key). This link can help you (but I don't know your programming level): [url="http://joecrossdevelopment.wordpress.com/2012/05/17/xna-engine-part-1-input-module-class-layout-and-goals/"]http://joecrossdevel...yout-and-goals/[/url] Check this link too; it's useful for GamePad/Mouse/Keyboard input design: [url="http://classes.soe.ucsc.edu/cmps020/Winter08/lectures/controller-keyboard-input.pdf"]http://classes.soe.u...board-input.pdf[/url]
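    Waiting for the release can be done with the usual XNA pattern of comparing the previous and current keyboard states (a sketch; previous, selectingKey and jump are assumed to be fields):
[code]
KeyboardState current = Keyboard.GetState();

// Enter "key selection mode" only when Enter goes from down to up,
// so the press that opened the menu can't be captured as the binding.
if (previous.IsKeyDown(Keys.Enter) && current.IsKeyUp(Keys.Enter))
    selectingKey = true;

if (selectingKey)
{
    Keys[] pressed = current.GetPressedKeys();
    if (pressed.Length == 1)   // exactly one key is down
    {
        jump = pressed[0];
        selectingKey = false;
    }
}

previous = current;   // save for the next frame
[/code]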
  13. If you want real-time reflections on moving objects you need cubemaps, to gain fast access in pixel shaders, but rendering the scene 6 times per frame is not reasonable. You can [b]time-slice[/b] the work. Keeping in mind a playable frame rate (like 60 FPS), you can update [b]only one cubemap face per frame;[/b] after 6 frames (in this case) you have the whole cubemap updated, with a [b]refresh rate of 1/6 of the current FPS.[/b] You can see a little "swimming" of the reflections, but it's not very noticeable. The same goes for point-light shadow mapping (you need a cubemap or [url="http://graphicsrunner.blogspot.it/2008/07/dual-paraboloid-shadow-maps.html"]Dual-Paraboloid Shadow Maps[/url]). This approach can also be used for non-graphics problems, to [b]distribute work on the CPU or GPU[/b] over time (for example, AI).
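    A sketch of the time-slicing idea for a single reflection cubemap in XNA 4.0 (faceToUpdate is a field that persists across frames; DrawSceneFromProbe is a hypothetical helper that renders with the view matrix of the given face):
[code]
// Update one face per frame: the whole cubemap refreshes every 6 frames.
CubeMapFace face = (CubeMapFace)faceToUpdate;
device.SetRenderTarget(reflectionCube, face);   // reflectionCube is a RenderTargetCube
device.Clear(Color.Black);
DrawSceneFromProbe(probePosition, face);
device.SetRenderTarget(null);
faceToUpdate = (faceToUpdate + 1) % 6;
[/code]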
  14. Independent resolution

    I've seen the blog and downloaded the source code, which is for XNA 3.0. If you use XNA 4.0 you must change a line in your spriteBatch.Begin() call, like this:
[code]
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend,
                  SamplerState.LinearClamp, DepthStencilState.None,
                  RasterizerState.CullNone, null,
                  Resolution.getTransformationMatrix());
[/code]
Note: if you want [b]old-style blocky pixels[/b], change SamplerState.LinearClamp to [b]SamplerState.PointClamp[/b]. The small image with a black border is due to the wrong settings in:
[code]
Resolution.SetVirtualResolution(1024, 768);
Resolution.SetResolution(1280, 800, false);
[/code]
[b]SetResolution() must always be set to the real size of the window[/b] (or the [b]real resolution in full-screen[/b]):
[code]
Resolution.SetVirtualResolution(1024, 768);
Resolution.SetResolution(GraphicsDevice.DisplayMode.Width, GraphicsDevice.DisplayMode.Height, true);
[/code]
Now you can freely change SetVirtualResolution() to any resolution you want. With these corrections the code works properly on XNA 4.0 (I have tested it) and retains the correct aspect ratio when drawing with SpriteBatch, without changing any other code. As a last note, if you prefer a [b]black border[/b] you need to change the [b]_Device.GraphicsDevice.Clear(Color.CornflowerBlue);[/b] inside [b]Resolution.BeginDraw();[/b] (in resolution.cs). Hope this helped. If you have questions, please ask.
  15. Floating Point Accuracy Problems

    [quote name='NMO' timestamp='1344253981' post='4966639'] Which dot product do you mean? I only use the dot product to calc the intersection vertex: [/quote] I'm sorry, I meant the distance from the plane (that is, a dot product plus (or minus, depending on the coordinate system) the plane's distance from the origin). I know the document you refer to for BSP, and it's not so good for a practical implementation. You can find on the internet the source code of the Quake tools (qbsp, light, vis) used to build levels; check the Quake 1 tools, which are simpler. There you have a working implementation of a "thick plane" BSP tree; to be more specific, a leafy BSP tree. The polygons are stored only in the leaves; the polygons lying on partition planes are classified by a [b]dot product of the polygon normal against the plane normal[/b] (you can get the normal of your plane ax+by+cz+d=0 by building the vector [a,b,c]) and are sent to the back or front child node. [quote name='NMO' timestamp='1344253981' post='4966639'] At the moment I simply use the first polygon from the list as the selection plane. [/quote] You can select the partition plane with two methods in mind: [b]least splitting or best balancing[/b]. The method you select [b]depends on the use you make of the tree:[/b] if the tree is used for rendering, a best-balancing approach is better; for a collision-detection tree you can use a least-splitting approach. [quote name='NMO' timestamp='1344253981' post='4966639'] Another BSP tree specific question: Wouldn't it be better to work with triangles instead of polygons? I think triangles should be easier to debug. The only problem I can think of is splitting but this is acutally also not problematic because when i split a triangle by a plane I get a triangle and a 4-gon. The 4-gon can be triangulated in linear time very easily. I drew a picture to show this. [/quote] Triangles are not so good: you add more splits during BSP creation. Using planar polygons you can better manage the "thick plane" method and use epsilon values to check the planarity of a polygon. Keep in mind that floating-point round-off error generates non-perfectly-planar polygons after several splits.
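    A sketch of the thick-plane classification described above (epsilon is the plane "thickness" that absorbs round-off error; the names are illustrative):
[code]
enum PolySide { Front, Back, OnPlane, Spanning }

// Signed distance of a point from the plane ax+by+cz+d = 0, with n = [a,b,c] of unit length.
static float PlaneDist(Vector3 n, float d, Vector3 p)
{
    return Vector3.Dot(n, p) + d;
}

// Classify a polygon against a partition plane with a thickness of +/- epsilon.
static PolySide ClassifyPoly(Vector3 n, float d, Vector3[] verts, float epsilon)
{
    bool front = false, back = false;
    foreach (Vector3 v in verts)
    {
        float dist = PlaneDist(n, d, v);
        if (dist > epsilon) front = true;
        else if (dist < -epsilon) back = true;
    }
    if (front && back) return PolySide.Spanning;   // the polygon must be split
    if (front) return PolySide.Front;
    if (back) return PolySide.Back;
    return PolySide.OnPlane;   // route by the sign of dot(polyNormal, planeNormal)
}
[/code]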