
#5131024 FPS Camera calculations problem

Posted by Álvaro on 13 February 2014 - 06:52 AM

viewMatrix = {
xAxis.x, yAxis.x, zAxis.x, 0,
xAxis.y, yAxis.y, zAxis.y, 0,
xAxis.z, yAxis.z, zAxis.z, 0,
-dot(xAxis, eye), -dot(yAxis, eye), -dot(zAxis, eye), 1
};
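For reference, here is one common way to derive those three axes from an eye position, a target, and an up vector (a sketch assuming a right-handed system with the camera looking down its local -z; the helper names are mine):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
Vec3 normalize(Vec3 v) {
    double len = std::sqrt(dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}

// Build the camera basis used in the view matrix above.
void camera_axes(Vec3 eye, Vec3 target, Vec3 up,
                 Vec3 &xAxis, Vec3 &yAxis, Vec3 &zAxis) {
    zAxis = normalize(sub(eye, target));  // forward axis points from target to eye
    xAxis = normalize(cross(up, zAxis));  // right axis
    yAxis = cross(zAxis, xAxis);          // true up (already unit length)
}
```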

#5130861 Calling a virtual function, illegal call of non-static member function?

Posted by Álvaro on 12 February 2014 - 12:44 PM

You can't call a member function from a static function, because the member function needs an instance on which to be called. I don't understand the rest of your post.

#5130823 how alive is oop?

Posted by Álvaro on 12 February 2014 - 09:37 AM

I don't think these trends change much in the timescale of a couple of years.
I use objects all the time, but my programming is in no way "oriented" towards objects. Objects are just a tool, and a pretty useful one because it emphasizes interfaces between parts of the code, which allows large systems to be built without getting overwhelmed by complexity. But to me "Object Oriented Programming" is a bit like "Nail Oriented Roofing".
Even if you think of your programming as "procedural" and "modular", chances are you'll end up using objects naturally. For instance, if you have a type `Image', it is very likely you'll end up having a module that implements functions that operate on Image variables.
In C, the header file would look something like this:
/* ... */
typedef struct {
  unsigned char R, G, B, A;
} Color;
typedef struct {
  unsigned width, height;
  Color *pixel_data;
} Image;
/* Create a new image, allocating memory for its pixel data. */
void create_image(Image *image, unsigned width, unsigned height);
/* Clear an image by setting every pixel to the given color */
void clear_image(Image *image, Color color);
/* Set or get a single pixel. */
void set_pixel(Image *image, int x, int y, Color color);
Color get_pixel(Image const *image, int x, int y);
/* Copy the source image onto the destination, top-left corner at (left_x, top_y). */
void bit_blit(Image *destination, Image const *source, int left_x, int top_y);
/* ... */
I made the switch from C to C++ in 2001. Before then, my C code often looked a lot like this example. If you find yourself writing code in this style, you too are using objects. What are called "member functions" in C++ are essentially the functions that take a first argument of type `Image *'. When it has type `Image const *', it corresponds to a "const member function". In C++, instead of `set_pixel(&my_image, x, y, my_color);' you would call the member function as `my_image.set_pixel(x, y, my_color);'.
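As a sketch, the C header above translates to a C++ class along these lines (the storage layout is illustrative):

```cpp
#include <vector>

struct Color { unsigned char R, G, B, A; };

// The Image* first argument becomes the implicit `this`;
// Image const* becomes a const member function.
class Image {
public:
    Image(unsigned width, unsigned height)
        : width_(width), height_(height), pixel_data_(width * height) {}

    void set_pixel(int x, int y, Color color) { pixel_data_[y * width_ + x] = color; }
    Color get_pixel(int x, int y) const { return pixel_data_[y * width_ + x]; }  // const member

private:
    unsigned width_, height_;
    std::vector<Color> pixel_data_;
};
```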
Once you get used to using objects, a lot of your code will naturally be organized this way. However, there are things that in my mind don't fit this model at all. For instance, mathematical functions like `log' don't naturally take a first argument that is a pointer to anything, so it's better to use a free function (i.e., a non-member function) in this case.
A project can be anywhere in the spectrum between not using objects at all and the extreme of organizing all the code as objects. All the projects I have worked on are somewhere in the middle, with lower-level parts of the code using objects only sporadically and higher-level parts being closer to the OOP extreme.
There are good reasons why you may not want to use OOP when dealing with low-level code. For instance, modern hardware is very good at operating on batches of data, either through SIMD instructions or using GPUs. A programmer using OOP in a straightforward manner can easily write code that does things on a large collection of objects one object at a time, perhaps calling a virtual function on each object (similar to calling a function pointer that is part of the type, in non-OOP terms). That eliminates any possibility of using a SIMD implementation, and therefore results in code that doesn't make optimal use of the computational resources available. An alternative design would have separate arrays for each specific type of object, so instead of calling a virtual function on each object, you can call a function that performs a fixed operation on a collection of objects, and then a SIMD implementation becomes possible. Check out this thread: http://www.gamedev.net/topic/575076-what-is-data-oriented-programming/
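To make the contrast concrete, here is a tiny sketch (the types and names are mine) of the two designs:

```cpp
// Object-at-a-time: a virtual call per object; the compiler cannot batch these.
struct Entity {
    float x = 0.0f;
    virtual void update(float dt) { x += dt; }
    virtual ~Entity() {}
};

// Data-oriented alternative: one plain function over a contiguous array,
// a loop the compiler can vectorize with SIMD instructions.
void update_all(float *xs, int n, float dt) {
    for (int i = 0; i < n; ++i)
        xs[i] += dt;
}
```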

#5130578 is this linear?

Posted by Álvaro on 11 February 2014 - 12:18 PM

I do not understand this thing with tangent



Here's how you can rasterize a sphere:

 * Among the lines that pass through the camera and are contained in a plane that contains the center of the sphere and the vertical direction, select the two that are tangent to the sphere. Use them to compute the top row and the bottom row of the rasterization.

 * For each row between those two:

   * Among the horizontal lines that pass through the camera and are contained in a plane that contains the row we are rasterizing, select the two that are tangent to the sphere. Use them to compute the left end and the right end of the scanline.


The steps of computing the tangent lines can be reduced to solving a quadratic equation, so you'll have to use a square root. Unless you can come up with a Bresenham-style algorithm, which is perhaps possible.
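For instance, in the 2D slice the tangent computation might look like this (a sketch with the camera at the origin: a line y = m*x is tangent to a circle with center (a, b) and radius r exactly when the distance from the center to the line equals r, which gives a quadratic in the slope m):

```cpp
#include <cmath>
#include <utility>

// Slopes of the two lines through the origin tangent to the circle
// (x - a)^2 + (y - b)^2 = r^2.  From (m*a - b)^2 = r^2 * (m^2 + 1) we get
// (a^2 - r^2) m^2 - 2ab m + (b^2 - r^2) = 0.  Assumes a^2 > r^2, so
// neither tangent is vertical and the circle does not contain the origin.
std::pair<double, double> tangent_slopes(double a, double b, double r) {
    double A = a * a - r * r;
    double B = -2.0 * a * b;
    double C = b * b - r * r;
    double root = std::sqrt(B * B - 4.0 * A * C);  // the square root mentioned above
    return {(-B - root) / (2.0 * A), (-B + root) / (2.0 * A)};
}
```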


I haven't thought about how to compute the depth of each pixel. Computing a square root might be unavoidable, but you can always come up with some approximation that is good enough for this purpose.

#5130555 is this linear?

Posted by Álvaro on 11 February 2014 - 10:09 AM

Well, you could compute the leftmost and rightmost tangent to the sphere contained in the plane determined by the camera and the scanline. You might be able to reduce that to a Bresenham-style algorithm, but even if you have to take one square root per scanline it should be fast enough.

#5130527 is this linear?

Posted by Álvaro on 11 February 2014 - 07:25 AM

You should probably read this: http://www.altdevblogaday.com/2012/04/29/software-rasterizer-part-2/

I am sure OpenGL does the right thing for all quantities being interpolated.

I have never tried to rasterize a sphere before. Traditionally you divide the sphere up into triangles and rasterize those.

#5130362 is this linear?

Posted by Álvaro on 10 February 2014 - 01:23 PM

No, you can't interpolate depth-buffer values linearly. Lines map to lines, but distances are not preserved (so the middle point of a segment will not generally map to the middle point of the projection of the segment). I believe for depth values you can linearly interpolate 1/z.


You can mark the points in the sphere where the ray to the camera is tangential to the sphere. They form a circle. Since a circle is a conic section, its projection will also be a conic section. I am not sure what you intend to do with the "radius" you are asking about, and I am not even sure what the radius of an ellipse is. Note you could also get a parabola or a piece of a hyperbola, depending on whether the sphere has points with z=0 or points with z<0.


Triangles project to triangles as long as all three vertices have z>0.

#5130357 is this linear?

Posted by Álvaro on 10 February 2014 - 12:59 PM

Yes, you are using a projective mapping, which maps lines into lines (and conic sections into conic sections, so the oblong contours of your spheres after projection are actually ellipses).

#5129952 1D, 2D and 3D

Posted by Álvaro on 08 February 2014 - 05:16 PM

What of distance etc. because i saw a formula that is dx/something + dy/something. I can't remember what it was divided by

Look, if you can't remember the formula, we won't be able to tell you whether it works in any number of dimensions. To first order, everything works the same. Things having to do with rotations might look a bit different, because the group of rotations in 3D is significantly harder to handle than the group of rotations in 2D (and there is only one "rotation" in 1D).
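As an example of something that carries over unchanged, the Euclidean distance formula works the same in any number of dimensions:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Euclidean distance: the square root of the sum of squared coordinate
// differences, whatever the number of dimensions.  Assumes equal sizes.
double distance(const std::vector<double> &a, const std::vector<double> &b) {
    double sum = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i)
        sum += (a[i] - b[i]) * (a[i] - b[i]);
    return std::sqrt(sum);
}
```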

#5129742 sizeof() behaviour with inner classes (MSVC12)

Posted by Álvaro on 07 February 2014 - 07:00 PM

Yes, it is due to data alignment. I am sure Google can fill you in on the details.

#5129691 fractal result by accident

Posted by Álvaro on 07 February 2014 - 03:03 PM

what way this antyaliasing works here? average of many subsamples per pixel?


Yes, I took 200 random samples inside each pixel and averaged the colors.

#5129580 fractal result by accident

Posted by Álvaro on 07 February 2014 - 07:47 AM

OK. Here's my last attempt at trying to explain why this isn't a fractal. This is an image similar to what you posted:

[image]
This is what happened after I zoomed in a bit and I used anti-aliasing:

[image]
#5129550 human intelligence

Posted by Álvaro on 07 February 2014 - 05:40 AM

A computer can perform calculations faster than MOST human brains... however there are people who have beaten computers at calculating ridiculous numeric calculations

That statement is absurd. Shakuntala Devi took 28 seconds to compute 7686369774870 * 2465099745779. That's quite a feat, no doubt. It would be hard for me to measure how long it takes my laptop to make that computation, but the order of magnitude is 0.00000001 seconds.

Thank you Alvaro, I didn't realize my 20 cents calculator was more intelligent than me, just because it can perform calculations I can't do myself. You seem to have a very good, clever and nice idea about what intelligence is.

Clearly I was making the argument that calculators are more intelligent than people because they can multiply faster. Good reading comprehension!

#5129549 fractal result by accident

Posted by Álvaro on 07 February 2014 - 05:23 AM

for me i may repeat it seem this is not worse fractal than sierpiński carpet

Take a square. If you scale it up by a factor of 3 in every direction, you get a figure composed of 9 copies of the original square. We can then define the dimension of the square as log(9)/log(3)=2 (that is, what power of the scaling factor gives you the number of copies).

If you scale the Sierpinski carpet up by a factor of 3 in every direction, you get a figure composed of 8 copies of the original Sierpinski carpet. Therefore its dimension is log(8)/log(3) = 1.89278926071437231130... That's why we call that a fractal.
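That computation is the similarity dimension, which is easy to put in code (a sketch):

```cpp
#include <cmath>

// Similarity dimension: the power of the scale factor that yields the
// number of self-similar copies, i.e. log(copies) / log(scale).
double similarity_dimension(double copies, double scale) {
    return std::log(copies) / std::log(scale);
}
```

similarity_dimension(9, 3) gives 2 for the square, and similarity_dimension(8, 3) gives about 1.8928 for the Sierpinski carpet.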

I have no idea why you still think your image is a fractal. The way I see it, what you plotted is a couple of hundred concentric circles, which when sampled with a regular grid result in a spectacular moiré pattern. It's not like we are saying your image isn't pretty: It just has little to do with fractals.

I found this link: http://www.nahee.com/spanky/www/fractint/circle_type.html (Notice the "not a fractal" part.)

#5129529 ratio and number of samples

Posted by Álvaro on 07 February 2014 - 03:18 AM

Let's imagine each head has a true probability of getting a question right, but this probability is hidden to you. If you start with a prior that the probability is uniformly distributed between 0 and 1, after A correct answers and B incorrect answers the posterior distribution is a beta distribution with parameters alpha=A+1 and beta=B+1. Now we can easily compute the expected value of the hidden probability to be the mean of the posterior distribution, alpha/(alpha+beta) = (A+1)/(A+B+2).

So in your case, the first head gets an expected hidden probability of 2/3, while the other head's is 4/6, which is the same. However, looking at the mean only is a bit simplistic. The probability that the second head is more intelligent than the first head is actually 10/21 ≈ 0.47619.

But of course, the answer would be different with a different prior distribution of hidden probabilities.
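The numbers above can be checked with a few lines of code (a sketch, assuming the first head answered 1 of 1 correctly and the second 3 of 4, matching the 2/3 and 4/6 means quoted above; the closed-form tail of the Beta(4, 2) posterior is worked out by hand in the comment):

```cpp
#include <cmath>

// Posterior mean of the hidden probability under a uniform prior, after
// A correct and B incorrect answers: Beta(A+1, B+1) has mean (A+1)/(A+B+2).
double posterior_mean(int A, int B) {
    return (A + 1.0) / (A + B + 2.0);
}

// P(p2 > p1) for p1 ~ Beta(2, 1) and p2 ~ Beta(4, 2), estimated with a
// midpoint Riemann sum.  The inner integral P(p2 > x), i.e. the integral of
// the Beta(4, 2) density 20 y^3 (1 - y) from x to 1, is 1 - 5x^4 + 4x^5.
double prob_second_better() {
    const int n = 100000;
    double sum = 0.0;
    for (int i = 0; i < n; ++i) {
        double x = (i + 0.5) / n;
        double f1 = 2.0 * x;  // Beta(2, 1) density
        double tail2 = 1.0 - 5.0 * std::pow(x, 4) + 4.0 * std::pow(x, 5);
        sum += f1 * tail2 / n;
    }
    return sum;  // close to 10/21
}
```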

Is this homework?