Linux GCC optimization bug when returning doubles


When calling a C function such as sqrt() (or one I registered myself) that returns float or double from AngelScript,
the function always returns 0. This happens with either GCC 4.5.2 or 4.6.0, and with either AngelScript 2.20.3 or
trunk. I'm on 64-bit Linux, compiling the library statically.

This only happens when I pass optimization flags to GCC when compiling the AngelScript library. As far as I can tell,
any optimization level triggers it, even -O.

Does this also happen on older versions of AngelScript? For example 2.20.1 ?

I did some bigger changes to the code that implements the native calling conventions in version 2.20.2, and it is quite possible I've broken something by mistake.

I've tried to reproduce this bug using multiple versions of AngelScript (2.20.2, 2.20.3, r960, r964 and r965) and haven't been able to. I've tried with -O and -O2; my gcc version is slightly older than the one Jameson is using. Would you mind posting the code you tried, so I can see if I can reproduce this bug?

Andreas, would you like me to set up builds that test AngelScript with optimization flags?

I decided to go ahead and add builds to test the library with -O2. test_feature does fail:

Failed on line 253 in ../../source/test_cdecl_return.cpp

Interesting that it only failed at line 253. That means it was able to successfully return a float and a double, but not a struct with 2 floats.

The calling convention is exactly the same for all 3, i.e. the return value is placed in xmm0.

I suspect the problem is that the gcc optimizer is somehow clearing the xmm registers. It will probably be necessary to disable optimizations for the functions in as_callfunc_x64_gcc.cpp to fix this problem.

Thanks for setting up the new buildbot with optimizations. By the way, what were the changes to turn on optimizations? I tried changing the makefile directly but it caused segmentation faults at the very first test.

I didn't make any changes, just passed -O2 in CXXFLAGS and LDFLAGS. Were you trying a different level?

I'll go ahead and run tests at all the optimization levels to see if the results differ.

-O causes a segfault in testexecute32args.cpp:

(gdb) bt
#0 0x0000000000000008 in ?? ()
#1 0x0000000000000009 in ?? ()
#2 0x000000000000000a in ?? ()
#3 0x000000000000000b in ?? ()
#4 0x000000000000000c in ?? ()
#5 0x000000000000000d in ?? ()
#6 0x000000000000000e in ?? ()
#7 0x000000000000000f in ?? ()
#8 0x0000000000000010 in ?? ()
#9 0x0000000000000011 in ?? ()
#10 0x0000000000000012 in ?? ()
#11 0x0000000000000013 in ?? ()
#12 0x0000000000000014 in ?? ()
#13 0x0000000000000015 in ?? ()
#14 0x0000000000000016 in ?? ()
#15 0x0000000000000017 in ?? ()
#16 0x0000000000000018 in ?? ()
#17 0x0000000000000019 in ?? ()
#18 0x000000000000001a in ?? ()
#19 0x000000000000001b in ?? ()
#20 0x000000000000001c in ?? ()
#21 0x000000000000001d in ?? ()
#22 0x000000000000001e in ?? ()
#23 0x000000000000001f in ?? ()
#24 0x0000000000000020 in ?? ()
#25 0x0000000000000020 in ?? ()
#26 0x00000000004d4907 in CallSystemFunctionNative (context=<value optimized out>, descr=<value optimized out>, obj=<value optimized out>, args=<value optimized out>,
retPointer=<value optimized out>, retQW2=<value optimized out>) at ../../source/as_callfunc_x64_gcc.cpp:482
#27 0x00000000004d3397 in CallSystemFunction (id=<value optimized out>, context=<value optimized out>, objectPointer=<value optimized out>) at ../../source/as_callfunc.cpp:459
#28 0x00000000004a27f5 in asCContext::ExecuteNext (this=0x79c680) at ../../source/as_context.cpp:2039
#29 0x00000000004a5a11 in asCContext::Execute (this=0x79c680) at ../../source/as_context.cpp:1066
#30 0x000000000048874c in ExecuteString (engine=0x7a2200, code=<value optimized out>, mod=<value optimized out>, ctx=0x0) at ../../../../add_on/scripthelper/scripthelper.cpp:154
#31 0x000000000046def4 in TestExecute32Args () at ../../source/testexecute32args.cpp:248
#32 0x00000000004068f0 in main (argc=<value optimized out>, argv=<value optimized out>) at ../../source/main.cpp:312

Unfortunately it seems the optimization also messes with the debug symbols, hence the unusable frames above. -O and -O1 are the same level; -O2 and -O3 both fail with the same error (http://angelscript.jeremyh.net/builders/Full-Linux-64%20Optimized%28-O2%29/builds/0/steps/shell/logs/stdio). -Os also fails in the same place as -O.

You can see a list of the flags and what optimizations each enable at http://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html.
