Member Since 28 May 2011
Online Last Active Today, 01:13 AM

Posts I've Made

In Topic: what methodologies of reading this forum you use?

16 June 2014 - 07:17 AM

Categorizing based on multiple things would work if the poster was forced to select both subject and type of thread.

In Topic: aa wire box neat tricks?

15 June 2014 - 12:07 PM

You basically need a list of the edges and then loop over them to render lines.


You could abstract this into some sort of generic "geometry class" that describes a shape (box, pyramid, whatever else you have) and can give you:

-List of vertices

-List of edges (indices of the start and end vertices)

-List of surfaces



If using C++, I would make one class for each shape and then use templates to write a wiremesh renderer that can draw any of them (basically compile-time polymorphism, since you probably don't want the runtime overhead of virtual calls):

    template <typename T>
    void drawGenericShape(const T& shape)
    {
        for (const auto& e : shape.getEdges())  // drawLine = whatever line-drawing call you have
            drawLine(shape.getVertices()[e.start], shape.getVertices()[e.end]);
    }

However, to create the specific shape class that represents a box, I would still just manually hardcode the vertices. Unless you want it to support 2- and 4-dimensional boxes too, it's not really a useful target for generalization, and you'll have to use a different approach for every type of shape anyway. So yeah, I would just make an array of vertices, an array of edges, and an array of surfaces.
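A minimal sketch of what that hardcoded box class could look like. All names here (Vec3, Edge, Box, getVertices, getEdges) are illustrative, not from any particular engine:

```cpp
#include <array>
#include <cstddef>

struct Vec3 { float x, y, z; };
struct Edge { std::size_t start, end; };  // indices into the vertex list

struct Box {
    // 8 corners of a unit cube, hardcoded as suggested above
    const std::array<Vec3, 8>& getVertices() const {
        static const std::array<Vec3, 8> v{{
            {0,0,0}, {1,0,0}, {1,1,0}, {0,1,0},
            {0,0,1}, {1,0,1}, {1,1,1}, {0,1,1}
        }};
        return v;
    }
    // 12 edges: bottom square, top square, four vertical connectors
    const std::array<Edge, 12>& getEdges() const {
        static const std::array<Edge, 12> e{{
            {0,1},{1,2},{2,3},{3,0},   // bottom face
            {4,5},{5,6},{6,7},{7,4},   // top face
            {0,4},{1,5},{2,6},{3,7}    // verticals
        }};
        return e;
    }
};
```

Any other shape (pyramid etc.) would be its own class with the same three accessors, and the templated renderer above would accept it unchanged.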

In Topic: problem with my reflection idea

14 June 2014 - 05:31 PM

Yeah, you usually do mirrors by rendering the scene from the mirror's perspective.


Think of the mirror as a plane.


Now, to render whatever the mirror should see, simply reflect the real camera to the other side of the mirror plane and render from that mirrored camera (only into the area of the screen covered by the mirror) on top of the real scene. The mirrored camera should not render anything behind the mirror plane, so you only render the part of the scene on one side of the mirror, the side that can actually be reflected off of it.
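The reflection step is just mirroring the camera position across the plane. A quick sketch (the vector type and function names are made up for illustration), using c' = c - 2·dot(c - p, n)·n for a plane through point p with unit normal n:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Reflect point c across the plane through p with unit normal n.
Vec3 reflectAcrossPlane(Vec3 c, Vec3 p, Vec3 n) {
    // signed distance from c to the plane
    float d = dot({c.x - p.x, c.y - p.y, c.z - p.z}, n);
    // move c twice that distance back through the plane
    return { c.x - 2*d*n.x, c.y - 2*d*n.y, c.z - 2*d*n.z };
}
```

The camera's orientation has to be mirrored the same way (reflect the forward and up vectors too), and you'd clip against the mirror plane so nothing behind it gets drawn.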


Idk how to do that in Unity tho.

In Topic: Neural networks fundamentally flawed?

13 June 2014 - 07:53 AM

You can't just take a random blob of neurons and expect them to become intelligent after you train them enough. That works for some simple cases, but for more complicated ones you need genetic algorithms or manual work to create a large-scale structure, just like in the brain: the low-level structures might be simple-ish neural networks like that, but at the high level everything is connected in very specific ways, and every area is specialized for some specific task.


Pick the right inputs and it will work. A random blob of pixels might not be the best input you can provide. It might work, but it might take a thousand years for a good configuration to be found.

In Topic: Engine for physics and rendering from scratch

12 June 2014 - 12:01 PM

If you want to take full advantage of the GPU, you have to use one of those 'abstract' APIs (like OpenGL), because you don't have full control over the GPU. GPUs are not all the same, and OpenGL and friends abstract away those differences so you can treat them all the same, which is why you need an API.


There are also APIs directed toward general computation rather than graphics specifically (OpenCL, for example). That might be somewhat of a compromise between performance and wanting to implement everything from the lowest level.


But those APIs make the task more complicated (and it is already complicated if you start from low-level rasterization), so you might want to reconsider whether you really need it to perform that well. It's easier to do everything on the CPU; the GPU interface is not really clear and does not really integrate with whatever you use to write the rest of the code, since you have to use a different language to execute code on the GPU.


You could start by writing all the code on the CPU, then restructure it into an easily parallelizable form, then multithread it or use GPGPU APIs (OpenCL etc.) to run parts of it on the GPU.
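The "restructure, then multithread" step could look like this: once each element's update is independent of the others, you can split the range across threads with nothing but the standard library. The per-element work here (doubling a value) is a stand-in for real per-particle physics:

```cpp
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

// Update a sub-range; each element is independent, no shared writes.
void updateRange(std::vector<float>& data, std::size_t begin, std::size_t end) {
    for (std::size_t i = begin; i < end; ++i)
        data[i] *= 2.0f;  // stand-in for the real physics step
}

// Split the work into one contiguous chunk per thread and join.
void updateParallel(std::vector<float>& data, unsigned numThreads) {
    std::vector<std::thread> workers;
    std::size_t chunk = data.size() / numThreads;
    for (unsigned t = 0; t < numThreads; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end = (t + 1 == numThreads) ? data.size() : begin + chunk;
        workers.emplace_back(updateRange, std::ref(data), begin, end);
    }
    for (auto& w : workers) w.join();
}
```

Code shaped like this also maps naturally onto a GPGPU kernel later, since each "chunk" becomes a batch of independent work items.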