
Geometrian

Member Since 10 Apr 2007
-----

#5092341 CS Degree - Is it worth it?

Posted by Geometrian on 07 September 2013 - 02:50 PM

I was in exactly the same boat--I knew how to program and had solid experience with multiple programming languages, especially Python and C++. On top of that, I had reverse engineered many graphics algorithms (for example, I had implemented a GPU cloth simulation based on GLSL and FBOs before I ever even applied anywhere).
 
Basically, I didn't take any of the introductory classes; I started immediately on higher-level coursework (for example, I took the graduate course in graphics algorithms my first semester). Especially at a big university, there's always more to learn about your field. I quickly learned about functional programming languages, asymptotic analysis, and design patterns. I was constantly learning, and I eventually realized I wanted to double major in abstract mathematics just to get the most out of my future coursework.
 
The point is, universities will teach you. That's kind of what they do. As others have mentioned, a CS degree is not just about programming--if that's all you can do, you're a software engineer, not a computer scientist. And there's a huge difference.
 
Plus, being at a university is wonderful in its own right. Basically everyone has a triple-digit IQ (which for me was a refreshing change from high school), and by and large you can learn whatever you want. There are almost always core requirements, but beyond those you have much, much more leeway in choosing.




#5090843 glGetUniformLocation not working

Posted by Geometrian on 01 September 2013 - 01:38 PM

Call glUseProgram before setting uniforms with glUniform*. Also, "projectionMatrix", "modelViewMatrix", and possibly "in_Color" won't exist if they're unused: the GLSL compiler optimizes unused uniforms and attributes out, and their queried locations come back as -1.
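For example (a minimal sketch, assuming a linked program object and an extension loader like GLEW already initialized), you can detect optimized-out uniforms by checking for a -1 location:

#include <GL/glew.h>  // or any other extension loader; context assumed current
#include <cstdio>

void check_uniforms(GLuint prog) {
    glUseProgram(prog);  // bind the program before any glUniform* calls
    const char* names[] = { "projectionMatrix", "modelViewMatrix", "in_Color" };
    for (const char* name : names) {
        GLint loc = glGetUniformLocation(prog, name);
        if (loc == -1)
            std::printf("\"%s\" not found; unused uniforms get optimized out\n", name);
    }
}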




#5086158 Is my frame time too high?

Posted by Geometrian on 15 August 2013 - 10:04 AM

Also note that glClear doesn't exactly "clear" the screen as such. Looping over each viewport pixel is slow, so the driver optimizes it with hardware flags: when the pixel in question is actually needed, that's when the "clear" happens. The practical upshot is that measuring glClear by itself isn't actually meaningful as a "frame".

I seem to be able to render my entire terrain (which is a relatively complex mipmapped terrain of 4096x4096) as fast as 1.25fps. When I'm not looking at the terrain (it uses a quadtree), I get down below 0.5ms/frame.

This is a much more genuine benchmark, and, at a guess, I'd say you're CPU-bound. I'm guessing you're using ~33.6M triangles, which on a 560M should be easily interactive, if not fully realtime. On my 580M, a naïve implementation (no quad strips, immediate mode compiled into a display list) runs at 8.5fps. With 4096 quad strips, I already get 44fps worst case.
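Incidentally, if you want to time the GPU work itself rather than CPU-side calls, a timer query is the way to go. A rough sketch (GL_TIME_ELAPSED from GL 3.3 / ARB_timer_query; assumes a context and loader are already set up):

#include <GL/glew.h>
#include <cstdio>

// Measures actual GPU time for a frame's commands, instead of timing
// glClear on the CPU (which the driver defers anyway).
void timed_frame() {
    GLuint query;
    glGenQueries(1, &query);

    glBeginQuery(GL_TIME_ELAPSED, query);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ... draw the scene here ...
    glEndQuery(GL_TIME_ELAPSED);

    GLuint64 ns = 0;
    glGetQueryObjectui64v(query, GL_QUERY_RESULT, &ns);  // blocks until the result is ready
    std::printf("GPU time: %.3f ms\n", ns * 1e-6);

    glDeleteQueries(1, &query);
}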




#5064158 Hardware/Software rasterizer vs Ray-tracing

Posted by Geometrian on 23 May 2013 - 08:41 AM

I feel like implementing a GPU (or software) raytracer just to solve OIT is a bad idea. GPU raytracers are extremely powerful, but they have the drawbacks you saw. I would have initially recommended compositing a hybrid approach, but at that point it makes more sense to jump straight to a full raytracing solution. You saw the performance figures in those papers; that flexibility comes at a steep price.

The solutions to OIT are classic and well-known, and, by contrast, they are simpler and faster. You're right that on-the-fly sorting is prone to problems, especially with cyclically overlapping geometry, and it doesn't scale well. I suggest you reexamine the depth peeling algorithms in particular.

You can do this in two passes, if you're careful, by using MRT. In the first pass, render opaque fragments normally into buffer 0. For semi-transparent fragments, do single-pass depth peeling, packing premultiplied RGB and depth into RGBA floating point render targets (buffers 1 to n). For the second pass, just blend the color buffers 1 to n onto buffer 0.

Exactly how the alpha channel is stored (premultiplied or not) may be the subject of some consternation, but that general idea would definitely work. Note that for single-pass depth peeling, you'll need shader mutexes; I made a GLSL fragment program that does single-pass depth peeling this way.
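To make the second pass concrete, here's a rough sketch of the compositing state, assuming the peeled layers hold premultiplied RGBA (the standard "over" blend for premultiplied color is GL_ONE, GL_ONE_MINUS_SRC_ALPHA); draw_fullscreen_quad is a stand-in for whatever viewport-quad helper you have:

#include <GL/glew.h>

void draw_fullscreen_quad();  // assumed helper: textured quad covering the viewport

// Second pass: composite peeled layers (buffers 1..n) onto opaque buffer 0.
void composite_layers(const GLuint* layer_textures, int n) {
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);  // "over" for premultiplied alpha
    glDepthMask(GL_FALSE);                        // leave the depth buffer alone
    for (int i = n - 1; i >= 0; --i) {            // back-to-front
        glBindTexture(GL_TEXTURE_2D, layer_textures[i]);
        draw_fullscreen_quad();
    }
    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);
}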




#5056673 OpenGL Erroneous Context Version

Posted by Geometrian on 25 April 2013 - 09:53 AM

Why are you creating a dummy window?
That should be done only if you need multisampling without FBO, in order to find an appropriate pixel format.

That's actually the eventual plan.

However, the real reason is to make the design cleaner. A context requires a window to exist when it's created, and tying it to a user-visible window supposes that that window will be around forever. The way I've structured the architecture, a context wrapper object contains its own invisible window, so the window that "owns" the context is guaranteed to live for as long as the context does. This lets the user create and destroy windows at will without affecting the context's existence.
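A minimal sketch of the idea (all names here are hypothetical stand-ins, not my actual API; the point is that C++ member ordering makes the hidden window outlive the native context):

// Hypothetical types standing in for the real platform window/context.
struct Window        { explicit Window(bool visible) { (void)visible; /* create platform window */ } };
struct NativeContext { explicit NativeContext(Window&) { /* wglCreateContext etc. */ } };

class GLContext {
public:
    GLContext() : hidden_(false), ctx_(hidden_) {}
private:
    Window hidden_;      // invisible window owned by the context wrapper
    NativeContext ctx_;  // destroyed before hidden_, per reverse member order
};

int main() {
    GLContext context;        // exists independently of any visible window
    Window user_window(true); // user windows can come and go at will
}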

 

In all other cases, you should do the following:
[...]

Don't I need to load extensions before using wglCreateContextAttribsARB?
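(For context: wglCreateContextAttribsARB must itself be fetched with wglGetProcAddress, which only works while some legacy context is current, hence the classic bootstrap. A rough Windows-only sketch, using only standard WGL calls; wglext.h is from the OpenGL registry:)

#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>  // for PFNWGLCREATECONTEXTATTRIBSARBPROC

// Create a throwaway window + legacy context solely to load the extension.
PFNWGLCREATECONTEXTATTRIBSARBPROC load_wglCreateContextAttribsARB() {
    HWND wnd = CreateWindowA("STATIC", "", WS_POPUP, 0, 0, 1, 1,
                             NULL, NULL, NULL, NULL);
    HDC dc = GetDC(wnd);

    PIXELFORMATDESCRIPTOR pfd = {};
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    SetPixelFormat(dc, ChoosePixelFormat(dc, &pfd), &pfd);

    HGLRC rc = wglCreateContext(dc);  // legacy context
    wglMakeCurrent(dc, rc);           // must be current for wglGetProcAddress

    PFNWGLCREATECONTEXTATTRIBSARBPROC fn =
        (PFNWGLCREATECONTEXTATTRIBSARBPROC)
            wglGetProcAddress("wglCreateContextAttribsARB");

    // Tear down the dummy objects; fn remains usable with a new DC afterward.
    wglMakeCurrent(NULL, NULL);
    wglDeleteContext(rc);
    ReleaseDC(wnd, dc);
    DestroyWindow(wnd);
    return fn;
}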




#5041249 Odd Shading Problem

Posted by Geometrian on 09 March 2013 - 02:22 PM

You . . . need a primer on the vertex stage of the pipeline.
 
This page has a nice diagram: http://www.opengl.org/documentation/specs/version1.1/glspec1.1/node23.html
See also http://www.opengl.org/wiki/Vertex_Transformation




#5041046 Article Topics: Math and Physics

Posted by Geometrian on 08 March 2013 - 08:04 PM

3D vectors! Especially dot products and cross products. These are indispensable for even the most basic graphics programming.
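For example, a few lines of vector math already get you triangle normals and diffuse lighting (a self-contained sketch):

#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return { v.x/len, v.y/len, v.z/len };
}

int main() {
    // Cross product: the normal of a triangle (a,b,c).
    Vec3 a{0,0,0}, b{1,0,0}, c{0,1,0};
    Vec3 n = normalize(cross(Vec3{b.x-a.x, b.y-a.y, b.z-a.z},
                             Vec3{c.x-a.x, c.y-a.y, c.z-a.z}));
    // Dot product: the Lambertian diffuse term.
    Vec3 light = normalize({0.5f, 0.5f, 1.0f});
    float diffuse = std::fmax(dot(n, light), 0.0f);
    std::printf("normal=(%g,%g,%g) diffuse=%g\n", n.x, n.y, n.z, diffuse);
}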




#5039883 Learning GLSL all over again

Posted by Geometrian on 06 March 2013 - 12:25 AM

The only output of a fragment shader is going to be a vec4, since you are writing a color (or value) to a render target.

Also depth.
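A small illustration (GLSL embedded in a C++ string; assuming #version 130 or later, where gl_FragDepth is writable):

// Fragment shaders write a color (vec4) per render target, but may also
// write depth via the built-in gl_FragDepth.
const char* fragment_source = R"GLSL(
    #version 130
    out vec4 frag_color;
    void main() {
        frag_color   = vec4(1.0, 0.5, 0.0, 1.0);  // the vec4 color output
        gl_FragDepth = gl_FragCoord.z + 0.01;     // optional depth output
    }
)GLSL";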




#5035852 Recommendations of A Langauge

Posted by Geometrian on 23 February 2013 - 02:53 PM

You say this like it's a negative. Multiple inheritance is one of the worst features of C++; there is a reason later languages got rid of it.

No. Multiple inheritance has comparatively few applications, but that doesn't mean it's never the Right Thing. As the FAQ says: "People who spout off one-size-fits-all rules . . . make your design decisions without knowing your requirements. . . . there are some situations where a solution with multiple inheritance is cheaper to build, debug, test, optimize, and maintain than a solution without multiple inheritance."

 

Operator overloading is another one of those language features that was so badly abused that its value certainly becomes questionable.

No. If a language feature is abused, that doesn't mean the language feature is bad. It means that the programmers who abuse it are stupid.

In this case, not having it forces a hypothetical Java BigNum class to have an API like: "new BigNum(4).exponentiate(51).mod(6).subtract(1)". You laugh, but I have often seen method chaining of such cruftitude in production code.
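For comparison, a hypothetical C++ BigNum with overloaded operators (the class is a toy, fixed-width stand-in made up for this example, using a small exponent so it doesn't overflow; it exists only to show the call-site syntax):

#include <cstdio>

// Toy stand-in for a real arbitrary-precision BigNum.
struct BigNum {
    long long v;
    explicit BigNum(long long v) : v(v) {}
    BigNum operator%(BigNum o) const { return BigNum(v % o.v); }
    BigNum operator-(BigNum o) const { return BigNum(v - o.v); }
};

BigNum pow(BigNum b, int e) {  // naive exponentiation; fine for toy values
    BigNum r(1);
    while (e--) r = BigNum(r.v * b.v);
    return r;
}

int main() {
    // C++ with overloading, vs. Java-style:
    //   new BigNum(4).exponentiate(5).mod(6).subtract(1)
    BigNum x = pow(BigNum(4), 5) % BigNum(6) - BigNum(1);
    std::printf("%lld\n", x.v);  // prints 3 (4^5 = 1024; 1024 mod 6 = 4; 4 - 1 = 3)
}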

 

Lack of implicit control over memory manage[ment] is the only real missing feature that actually hurts the language, and even in that case, 99% of the time this is an advantage as well.

I agree somewhat, but being unaware of how memory is structured is a common pitfall of novice programmers, and exclusive use of Java encourages it. Teaching ignorance of resource management is not merely suboptimal; it's irresponsible.

I'm not going to deny that C++ is somewhat messy, or that, at least compared to Java and C#, its syntax is somewhat less intuitive. I don't fancy perpetuating a holy war about which is better, though, mostly because I don't really care. I will stick with my recommendation not because I necessarily like C++ better, but because C/C++ is the de facto standard for games, game engines, and high-performance computing in general.




#5035646 Recommendations of A Langauge

Posted by Geometrian on 22 February 2013 - 07:41 PM

There have been a lot of extremely similar threads on this in the past. As usual, my advice is to:

1: Start with Python, because it doesn't get in your way while you're learning the basics

2: End up with C++ when you need more power

3: Avoid Java, because it is restrictive and bloated

4: Avoid game engines, because they prevent you from learning, get in the way, and take a lot of the fun out of development.




#5035644 What do you call the Java class that has the main method?

Posted by Geometrian on 22 February 2013 - 07:38 PM

When I am forced to use Java, I refer to it as the "Main Class" and name it "Main".

 

Using anything more is overkill, and this is consistent with naming schemes in other languages.




#5032863 C++ & OpenGL for 3d game engine

Posted by Geometrian on 15 February 2013 - 06:11 PM

You may find this a good tip.

 

But yes, for graphics programming, go with C++ and OpenGL. Python and PyOpenGL may be a good alternative if you're not a strong programmer yet.

 

Most compilers come with GL headers (or they're available as a fairly accessible download).

 

My recommendation is to start making a simple OpenGL application. At the risk of self-promotion, my Introduction to OpenGL Programming covers the basics of making a working cross-platform OpenGL 2 program (using SDL as a backend). It's about 100 lines long (mostly comments) and is intended exactly for people needing to get started with a minimum of fuss.
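In the same spirit, here's a minimal sketch of such a program (using SDL2 here, not the tutorial's exact code; error checking omitted for brevity):

#include <SDL2/SDL.h>
#include <SDL2/SDL_opengl.h>

int main(int argc, char* argv[]) {
    (void)argc; (void)argv;
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window* window = SDL_CreateWindow("OpenGL",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
        640, 480, SDL_WINDOW_OPENGL);
    SDL_GLContext context = SDL_GL_CreateContext(window);

    bool running = true;
    while (running) {
        SDL_Event event;
        while (SDL_PollEvent(&event))
            if (event.type == SDL_QUIT) running = false;

        glClearColor(0.2f, 0.3f, 0.4f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // ... draw here ...
        SDL_GL_SwapWindow(window);
    }

    SDL_GL_DeleteContext(context);
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}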




#5029908 Microsoft confirms XNA is over

Posted by Geometrian on 07 February 2013 - 06:40 PM

Well, I never used XNA, mostly because it was in C#. But I'm sure they'll make a replacement for it.

It's called "OpenGL". ;)




#5027210 I dont know were to start AT ALL

Posted by Geometrian on 30 January 2013 - 10:27 AM

Good advice about starting with programming, especially Python. I've had great experience with PyGame/Python; hacking up a game is possible in less than a day.

 

Also, re game engines:

http://scientificninja.com/blog/write-games-not-engines

http://www.gamedev.net/topic/636523-how-come-many-of-you-prefer-to-make-games-from-scratch-rather-than-use-an-engine/#entry5015647




#5025459 Haskell kicking my Asskell

Posted by Geometrian on 25 January 2013 - 10:20 AM

See, this is why I don't like inconsistency.

Try Scheme or some other Lisp then.





