
Member Since 20 Nov 2005
Offline Last Active Aug 22 2016 01:50 PM

#5170724 Quaternion-Rotation to Degree (Edge-jump)

Posted by on 31 July 2014 - 04:33 PM

But if I'd convert it into a rotation-matrix, I'd still get the problem at the edges, wouldn't I?

This was supposed to mean that the conversion problem that occurs when converting to degrees would also occur when converting to a matrix.


There are no conversion problems when converting a quaternion to a matrix (*) - did you mean from matrix to Euler angles? Then yes, whatever the perceived problem with quaternion->Euler is (I am not sure what the problem actually is - are the values wrong? Why do you care that the values jump around a bit?), it would probably crop up again.

Euler angles are usually terrible to work with (expensive, capricious and, shall I say, bloody useless. IMHO, aka YMMV) - I would repeat the advice to use a quaternion and/or matrix where appropriate instead.

*) A quaternion transforms fairly easily into a neat equivalent rotation-only matrix (an orthonormal basis - i.e. a set of unit-length, mutually orthogonal axis vectors).
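For illustration, a minimal sketch of that conversion (JavaScript, since the thread does not fix a language; assumes the quaternion is already normalized):

```javascript
// Convert a unit quaternion (w, x, y, z) into a 3x3 rotation matrix,
// returned row-major as a flat array of 9 numbers.
function quatToMatrix(w, x, y, z) {
  const xx = x * x, yy = y * y, zz = z * z;
  const xy = x * y, xz = x * z, yz = y * z;
  const wx = w * x, wy = w * y, wz = w * z;
  return [
    1 - 2 * (yy + zz), 2 * (xy - wz),     2 * (xz + wy),
    2 * (xy + wz),     1 - 2 * (xx + zz), 2 * (yz - wx),
    2 * (xz - wy),     2 * (yz + wx),     1 - 2 * (xx + yy),
  ];
}
```

The columns of the result are exactly those orthonormal basis vectors - no angle extraction, no edge cases.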


#5166135 c++ basics

Posted by on 11 July 2014 - 12:59 AM

Just to clarify what the previous post is saying:


"using namespace std;"


... don't do that. Especially as a beginner - one benefits from a clean namespace, not polluted with a bazillion things from std that a beginner does not even know exist and that will bite him in the back. See: http://stackoverflow.com/questions/1452721/why-is-using-namespace-std-considered-bad-practice


I, personally, have never seen a necessity for "using namespace whatever" and consequently have never used it.

#5165240 [GLSL] send array receive struct

Posted by on 07 July 2014 - 06:46 AM

uniform float lights[MAX_LIGHTS * 15];
uniform Light lights[MAX_LIGHTS];

Did you account for the padding?

I do not see how Light is defined - but it likely has some padding (check the spec). Also, did you define a standard layout for the structure?
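A minimal sketch of why the two declarations do not overlay cleanly - assuming a hypothetical Light struct of vec3s inside a std140 uniform block (the member names here are illustrative, not from the original post):

```glsl
// Hypothetical struct under the std140 layout:
struct Light {
    vec3 position;   // base alignment 16 -> 12 bytes of data + 4 bytes padding
    vec3 color;      // again 12 bytes + padding
    float intensity; // a lone float after a vec3 may slot into that padding,
                     // depending on member order
};

layout(std140) uniform LightBlock {
    // each array element is rounded up to a 16-byte boundary, so this is
    // NOT laid out like a tightly packed float[MAX_LIGHTS * 15] array
    Light lights[MAX_LIGHTS];
};
```

So uploading a tight CPU-side float array against a struct array will shear the data unless the padding is accounted for on the CPU side.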

#5153167 Ignore "Reload from disk?" prompt || Python conversion?

Posted by on 12 May 2014 - 05:11 PM

Need to meditate on your Google-fu a bit.


The, shall I say, obvious first search would be "ignoreChanged" (with the quote marks, of course) - which will give a bunch of apparently topical Sublime Text links at the top.

#5148335 BRDF gone wrong

Posted by on 20 April 2014 - 05:01 AM

Unexpected black patches are indicative of NaN values - you might want to check/pinpoint that (GLSL has an "isnan" function from 1.30 onwards ... iirc).


Possible sources of NaN: http://en.wikipedia.org/wiki/NaN
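The usual suspects are the same in any IEEE-754 floating point environment; a quick illustration (JavaScript here for brevity, but the same expressions yield NaN in shader code):

```javascript
// Each of these classic operations produces NaN:
const nanSources = [
  0 / 0,               // indeterminate division
  Infinity - Infinity, // indeterminate subtraction
  0 * Infinity,        // indeterminate multiplication
  Math.sqrt(-1),       // square root of a negative number
  Math.log(-1),        // logarithm of a negative number
  Math.acos(2),        // acos/asin outside [-1, 1]
];
console.log(nanSources.every(Number.isNaN)); // true
```

In a BRDF the common real-world triggers are things like normalizing a zero-length vector or pow() with a negative base.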

#5142261 Javascript Memory leak

Posted by on 26 March 2014 - 04:05 AM

True that.


(sorry for being a jackass - i am just easily amused)

#5142248 Javascript Memory leak

Posted by on 26 March 2014 - 03:23 AM

anyway, it just looks stupid that the garbage collector have that zigzag pattern at all
what memory does it allocate ? it uses one global variable and that's all
I didn't even declare a new variable.
This is strange indeed

1. Garbage sources.

While your example program itself does not visibly generate garbage - there are still plenty of garbage sources, e.g.: the virtual machine interpreting the JavaScript, JIT compilation, and whatever other internal structures it needs to implement the functionality of JavaScript that is used or assumed to be used.

Case in point of a possible source: calling a function needs a local scope/closure to be created and destroyed (technically, a good JIT optimizer can, under specific conditions, prevent it from using the generic GC for cleanup).
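A sketch of how an innocent-looking function can still feed the GC (a hypothetical example, not code from the original thread):

```javascript
let total = 0; // the single "global" the program visibly uses

function tick(n) {
  // Each call can still allocate behind the scenes: the call's scope,
  // the temporary array below, and two short-lived arrow-function
  // closures - all of it garbage once tick() returns.
  const squares = [...Array(n).keys()].map(i => i * i); // e.g. [0, 1, 4]
  total += squares.reduce((a, b) => a + b, 0);
  return total;
}
```

Run something like this on an interval and the heap graph will show exactly the sawtooth the dev tools display, even though "no variables are declared" between ticks.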

2. The zigzag pattern.

Garbage collection is overhead (i.e. in a sense no useful work is done - GC is hardly the goal of any program). Hence GC implementations avoid actually doing it while they do not need to (i.e. while there is plenty of free memory) - allowing memory usage to grow steadily until the collector finally decides to do the work, causing a sudden drop in memory usage => zigzag.


it seems to leak according to chrome dev tools

I have not used the Chrome dev tools, but I guess you are misinterpreting what they tell you.

I don't think this is a case of circular dependencies or anything to do with DOM

Yep. However, memory is not freed until the GC says so => every unused crumb of memory piles up over time until the GC gets rid of it all, all at once.

is this normal in javascript ?

Yes. It is common in all languages that rely heavily on mark-and-sweep style GC, not just JavaScript.

One might have a leak problem if memory usage grows over GC cycles (i.e. the bottom line of the zigzag keeps rising).

#5142072 Javascript Memory leak

Posted by on 25 March 2014 - 01:16 PM

/.../ and please explain me where I am wrong.

He already did:

What is "written" to setInterval is a reference to the function.

You have already stated a whole pile of nonsense, /.../

Except that his "pile of nonsense" is actually correct and your statements are not.

My guess is that you did not notice that the lambda function given to setInterval is not given as a string. This guess is also supported by the error you made in your example:

setInterval(heavy(), 1); // this is equal to: setInterval(undefined, 1), given that heavy() returns undefined
instead of what you probably meant:
setInterval("heavy()", 1);
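To make the distinction concrete, a small sketch (heavy is a stand-in for whatever expensive function is being scheduled):

```javascript
function heavy() { /* some expensive work; implicitly returns undefined */ }

// Passing a reference: the function object itself is handed over,
// and nothing runs until the timer fires.
const byReference = heavy;

// Calling it first: heavy() executes immediately, once, and its return
// value (undefined) is what setInterval would actually receive.
const byCall = heavy();

// Passing a string: the legacy form, re-evaluated (eval-style) each tick.
const byString = "heavy()";
```

setInterval(heavy, 1) - the bare reference, no parentheses, no quotes - is the form that schedules the function itself.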

#5139374 Needing some quick tips on how to organize my code

Posted by on 15 March 2014 - 09:56 PM

I like how C::B handles its highlighting color schemes, maybe I am just used to it...
It also has some neat features for C++, such as active/inactive code highlighting (to name one):

Just used to it.

I have never used Code::Blocks myself - does it have highlighting options comparable to VS (see my pic in the previous post [VS has quite a lot of type separation too - but I have colored almost all of them with the same color])?

The active/inactive code highlighting is obviously present in VS too - and its presence in CB hints that it too might have some Intellisense-esque capabilities, hence the question.

#5139373 Needing some quick tips on how to organize my code

Posted by on 15 March 2014 - 09:38 PM

@dejaime, well... I'm extremely (really) picky when it comes to colors. I can't stand writing code in a white background, and I like to just get my hands dirty when I'm learning something. All the IDEs I tried (well, CB, VC and DC, don't know any others), kind of got in the way of it. There's always something that needs to be set or some intricacy that needs to be understood (i.e., project templates, MS's main() arguments), or otherwise something that doesn't work for very specific reasons. I've been away from C++ for years because of this. Also, they clutter my hard drive with project folders (VS is particularly unorganized, it mixes projects from all apps in one folder by default) when all I want at this point is a source file to experiment with. When learning the basics, I need a basic setup to get right down to it and keep me focused and without obstacles.

I would recommend you reconsider - a proper IDE (which Sublime Text, at least, does not seem to be from my cursory examination) is invaluable assistance, especially if you are relatively new. And even more so when you are no longer new and your projects grow to anything above trivial.

I would recommend Visual Studio 2013 (for desktops, Express edition - i.e. free). Its coloring scheme is highly customizable - it even comes with a "dark" theme as a preset option (or as a starting point for your own customizations).

Syntax coloring options include separation of: global/local/member/static-member variables, namespaces/classes/enums/types, static/non-static member functions, macros etc...

Intellisense can also automatically pick up and mark most errors with red squiggles without the need to compile, and its hover-tooltips, as you will see below, are quite informative.

An example of my coloring, slightly altered from the default (I prefer white - I used to prefer dark when I first started out ~20y ago):


edit: Uh, what, why the downvote? That makes no sense.

#5138119 Your one stop shop for the cause of all coding horrors

Posted by on 11 March 2014 - 09:22 AM

I enjoyed this - thanks for sharing.
// somedev1 -  6/7/02 Adding temporary tracking of Login screen
// somedev2 -  5/22/07 Temporary my ass
The history of software development in a nutshell.

#5135912 Large textures are really slow...

Posted by on 02 March 2014 - 03:25 PM

Reread OP:


"One of my main game features (visually) is the overlay texture I'm using.  This overlay texture is full screen, and covers the entire screen which gives it a nice transparent pattern."


And the post following it:



Depth testing is absolutely useless there - as I mentioned in my original post. I take it you did not read any of it.

#5135557 Unmaintainable code

Posted by on 28 February 2014 - 09:05 PM

write-only programming ?

That is a new one, but strangely familiar.

We used to have (20+ years ago) some custom-engineered computers available in school; they worked well. Except the floppy drive (which was not custom made - probably just whatever was cheapest around) - its ability to read back what it wrote was kind of hit-and-miss. So everyone called their floppies write-only storage.

Yeah. The only time "The **** ate my programming homework." was actually accepted by teachers (I swear, I am not making this up).

Oh, good old days.

#5134284 Changing graphics settings at real time

Posted by on 24 February 2014 - 06:59 PM

I read and understand your arguments. Saying "it's not meant to be" seems awfully closed-minded, but it's the truth. The graphics API's really didn't want it to be. Take OpenGL for instance:

GLFWwindow* window;
window = glfwCreateWindow( 1024, 768, "MyWindow", NULL, NULL);
You'd have to re-create the OpenGL-context to change the resolution. The same goes for D3D, you lose the device when you change settings.
I guess what I'm saying is, I'm just amazed that this hasn't been solved yet.

GLFW is not OpenGL, and window management and whatnot have nothing to do with OpenGL either (WGL is for that side).
Short story:
* Each window has a device context => encapsulated stuff for GDI.
* We want a sensible pixel format (RGBA8, double-buffered) for it that has hardware acceleration.
* We get a rendering context for it.
* Windows/GDI is there to composite all that stuff on screen.
At no point does anything we care about care what the screen resolution is (*)(**).
Just change the resolution and reposition/resize your window, and add/remove borders if the user wanted windowed mode too. There is no need to recreate the OpenGL context (why would you? Do you want Windows' software GL 1.1 implementation instead?).

(*) Windows/GDI will have a bit more work to do internally when your framebuffer is RGBA8 but the screen is in some silly format (like a 256-color palette). But that is not our concern (it mattered slightly in the ancient days, when using an equally silly framebuffer was a reasonable compromise).
(**) The OpenGL API specification does not mention screen resolution changes => nothing is allowed to be lost because of a screen resolution change. This fact is even mentioned by some specs, e.g.: http://www.opengl.org/registry/specs/ARB/vertex_buffer_object.txt "Do buffer objects survive screen resolution changes, etc.?". However, WGL is not OpenGL, so for example p-buffers (***), if you happen to use them, might be lost.
(***) do not confuse with pixelbuffer, p-buffer -> https://www.opengl.org/wiki/P-buffer

edit: Oh, I forgot to comment on the last line: it has never been a problem to begin with ... on the OGL side of the fence, at least. On the D3D side, afaik, things were considerably less rosy.

#5134057 GLSL and GLU Teapot

Posted by on 24 February 2014 - 03:40 AM

It used to be that Mac only supported OpenGL 3.2 (there was a fakery way to make the OS believe it has 3.3; I am not sure whether any official support has since been added) - which, iirc, does not have attribute layout locations in core. The extension for it might be available though, as TheCubu noted.

Glass_Knife, I did not notice you saying how the "Not working" manifests itself. What is the error message given? Are you sure you have OGL 3.3 available in the first place?