Archived

This topic is now archived and is closed to further replies.

help me yes i'm serious


Recommended Posts

Is there any difference between glColor3f(1, 1, 1), glColor3f(1.0f, 1.0f, 1.0f), and glColor3f(1.0, 1.0, 1.0)?


No, it makes almost no difference here. The fastest (by a very small margin) would be 1.0f, because it is already a float and isn't an integer, I think.

They all have the same speed at run time, since they're all compiled to the same code.

But 1.0f will be 0.01 milliseconds faster when you compile the code ----> no difference

----------------
- Pouya / FOO!!!
***CENSORED***

There is no difference at all after compilation.

Throwing them into a VC++ application, here is what comes out in the assembly listing.

; 1211 : glColor3f(1, 1, 1);
	push	1065353216		; 3f800000H
	push	1065353216		; 3f800000H
	push	1065353216		; 3f800000H
	call	DWORD PTR __imp__glColor3f@12
; 1212 : glColor3f(1.0f, 1.0f, 1.0f);
	push	1065353216		; 3f800000H
	push	1065353216		; 3f800000H
	push	1065353216		; 3f800000H
	call	DWORD PTR __imp__glColor3f@12
; 1213 : glColor3f(1.0, 1.0, 1.0);
	push	1065353216		; 3f800000H
	push	1065353216		; 3f800000H
	push	1065353216		; 3f800000H
	call	DWORD PTR __imp__glColor3f@12

Mike Roberts
aka milo
mlbobs@telocity.com
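[Editor's note: a listing like the one above can be produced with MSVC's /FA family of compiler switches; the file name colors.c is just a placeholder.]

```shell
rem /FAs writes an .asm listing next to the object file,
rem interleaving source lines (the "; 1211 : ..." comments)
rem with the generated assembly.
cl /c /FAs colors.c
```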

Really?

That's not how it's supposed to work.

1 would be a constant signed integer value of 1
1.0f forces it to be a floating-point value
1.0 defaults to a double

It is an obvious compiler optimization not to convert the numbers at run time. You can at least avoid warnings by telling the compiler you want floats.

Edited by - Magmai Kai Holmlor on July 11, 2000 11:23:32 PM