Member Since 10 Jun 2012
Offline Last Active Mar 15 2014 12:50 AM

#5134346 C++ IDEs for Linux

Posted by on 25 February 2014 - 12:14 AM

Still, KDE won't be on my arcade machine. It won't even have a desktop environment since I'll be cramped for processing power. Not sure if that'll be a problem as I'll be running a GLUT window from a command line program.
KDevelop, like every other IDE, only requires its dependencies on your development machine. It does not force you to add any dependencies for deployment.

I haven't figured out the licensing, or what kind of royalties I'd have to pay Nokia if it came down to me producing a commercial product.
The LGPL licensed Qt option sounds like what you'd want.

#5134131 C++ IDEs for Linux

Posted by on 24 February 2014 - 10:24 AM

KDevelop looks beautiful! This is the first time I've heard of it. Tell me, is it as "bloated" as Eclipse?

It runs much better than Eclipse here. "Bloated" is subjective; the only way you'll know for sure is to try it.

I would also vote for QtCreator. KDevelop is nice but not really suited to cross-platform development (not everyone wants to install the whole KDE on Windows)

The OP specifically asked for Linux and didn't mention cross-platform.
Furthermore, I don't think most people consider having to use different IDEs on different platforms a deal breaker.

#5134067 C++ IDEs for Linux

Posted by on 24 February 2014 - 04:28 AM

My hands-down favourite is KDevelop.
Before finding KDevelop I had tried:
* Code::Blocks
* CodeLite
* QtCreator

I haven't looked back since switching to it.

#5041993 Does anyone use fixed point math anymore?

Posted by on 11 March 2013 - 02:01 PM

In addition to the uses for fixed point already stated (e.g. large worlds), it is also used to make code deterministic across machines, platforms, etc.

It is possible to do this with floating-point code, but your mileage may vary (in my experience it is challenging).

One example of this is RTS games, where inputs are broadcast to all clients and each client must update its state and stay in sync.

#4965240 Imported OBJ Looks Funky... :(

Posted by on 01 August 2012 - 10:50 AM

In addition to what has been said about the indexing base, your code makes several assumptions about the contents of the OBJ file, in particular:
  • Triangles only (i.e. always 3 sets of indices for faces, no more, no less)
  • Texture AND normal data is always present (i.e. face indices are always of the form: #/#/# )
I would check the file by hand to make sure these assumptions are not violated.

Also, I'm not sure what you mean by changing the topology to a triangle list; given the above assumptions, the trivial way of rendering the parsed OBJ data is as a triangle list.

#4960087 Easy OpenGL Directional Lighting Question

Posted by on 17 July 2012 - 12:00 PM

Also, are there any gotcha's with lighting when it comes to directional lighting?

From http://www.opengl.or...tml/glLight.xml : When glLight* is called with the GL_POSITION argument, the "position is transformed by the modelview matrix when glLight is called (just as if it were a point), and it is stored in eye coordinates."

#4960078 Tool to mathematically create a mesh

Posted by on 17 July 2012 - 11:38 AM

I've only played with this on Linux, but it seems to fit your description best: http://k3dsurf.sourceforge.net/

#4959997 FPU mode

Posted by on 17 July 2012 - 07:34 AM

Look at the source code in the link; it will likely help: http://www.christian...rojekte/fpmath/
EDIT: The source is buried at the bottom, here is the direct link: http://www.christian-seiler.de/projekte/fpmath/xpfpa.h

#4958913 Odd stack write violation in struct (C++)

Posted by on 13 July 2012 - 03:03 PM

Just taking a shot in the dark:
The following lines look odd to me:
[source lang="cpp"]D3D11_BUFFER_DESC cbBufferDesc;
ZeroMemory(&cbPerObjectBuffer, sizeof(D3D11_BUFFER_DESC));[/source]
I think you might have mixed up your arguments.

#4957408 Beginners: Fixed-Function vs. Shaders

Posted by on 09 July 2012 - 02:44 PM

I think rnlf makes the shader route sound more daunting than it really is. There is no need to try to create a pure, non-deprecated OpenGL 3.0 demo on the first try.

There is a natural progression you can take from the pure fixed pipeline to purely non-deprecated functionality, e.g.:
  • Start with a simple OpenGL "Hello world" style program.
  • Change it so that it uses the programmable pipeline with trivial shaders.
  • Convert it to use vertex arrays.
  • Convert it to use vertex buffer objects.
  • Create your own matrix management routines alongside the fixed-pipeline ones (this lets you verify your routines),
    e.g. using glLoadMatrix to switch easily between the matrices used.
  • Modify the shader so that you upload the matrix using something like glUniformMatrix* and use only your matrices.*
  • etc.
* If you are aiming for purely non-deprecated code.

There are some basics you'll want to learn regardless of which route you choose: colors, normals, textures, transformations, etc.
Whether you learn these basics using the fixed-function pipeline or shaders won't matter; it is a small step to go from one to the other.

That being said, once you understand the basics there is no reason to deal with the fixed-function pipeline.
Beyond the basics, IMO, effects become more complicated to implement using the fixed-function pipeline, and sometimes require dealing with OpenGL extensions (not a big deal, but more annoying than not having to).

#4955650 Search for free multiplatform archive library

Posted by on 04 July 2012 - 09:24 AM

No, it's not.

Let me clarify: AFAIK you can't write to archives, only to regular files.

#4955619 Selecting units in a 3d evironment

Posted by on 04 July 2012 - 07:42 AM

Does anyone have a link to an article, or some sample code?
Doesn't matter which language.

While my personal code is rather tailored to my application, MathGeoLib's frustum(.h|.cpp) code is very clean (though I personally prefer the "radar" approach to intersection testing).

An example of what I described:
If your view transform is something like this:
Translate1 * Scale * RotateZ * RotateX * RotateY * Translate2

then your camera transform (its inverse) is this:
Translate2' * RotateY' * RotateX' * RotateZ' * Scale' * Translate1'

Now you can extract the position of the camera from the resulting matrix (the translation column).
Note: You'll likely want the left/up/forward vectors to be normalized. As an optimization, you can instead compute:
Translate2' * RotateY' * RotateX' * RotateZ' * Translate1' * Scale' (applying the inverse scaling only to the translation part of the matrix).
That way the columns in the rotation part of the matrix are all unit length.

This gives you the camera information.

Take the coordinates of the tap and convert them from window coordinates to normalized device coordinates; then your ray's direction is simply:
Forward * near-plane distance + Left * x (in NDC) + Up * y (in NDC)
where forward/left/up are extracted from the camera matrix as described above, and the ray starts from your camera's position (the translation part extracted from the matrix).
Note: you might have to flip some of the signs in the direction equation depending on the convention you use for the window-coordinate origin.

Disclaimer: I don't remember why there's a conversion to NDC nor do I have the time to verify it right now.
Also, I had written in my notes, that this assumes a symmetric frustum, but I can't remember why either.

#4955330 Selecting units in a 3d evironment

Posted by on 03 July 2012 - 09:27 AM

The first, revert the clicked screen coordinate to world coordinates by using your projection in reverse. This is done by inverting the modelviewprojection matrix, which if you aren't familiar with it is going to be tricky. Then use the world coordinate to look up the nearest unit.

Why invert?

What I do sounds very similar to what he is already doing: computing the camera matrix (the inverse of the view matrix, constructed directly rather than by inverting), which produces the world-space position as well as the basis vectors of the camera. When I have window coordinates to pick, I convert them to normalized device coordinates, then use the world-space information plus projection information (i.e. the near-plane distance) to construct a ray, then perform a search to find the hit. If scaling is involved in the camera-matrix calculation, I use the properties of matrices to avoid normalizing the basis/direction vectors.

I like this approach because I need the world-space camera information for other functions such as culling, billboarding, etc.

If you need pixel-perfect picking then the selection-buffer approach described by Zoomulator is a simple solution, though if the user does not tap exactly on a pixel covered by the unit it will not register a hit.