you can use Code::Blocks or any other ide that compiles your object files and links everything together, so you don't need to get into makefiles if you don't want makefiles.
also, if you don't want to use an ide like Code::Blocks, you can just run g++ main.c -o mystuff -lglut (freeglut's library links as -lglut) plus whatever other -l flags you need.
i don't know if mint has the glut development files, but usually the package is called freeglut-devel (or freeglut3-dev on debian-based distros like mint), or something close to that
i don't suggest clang: it's buggy as hell, it actually compiles slower binaries, and they refuse to even care about bug reports (or at least about my bug reports; they claim it's not a bug, that i'm doing it wrong, etc).
JohnnyCode: i disagree. maybe in *theory* you can get the biggest performance, but using gpgpu would limit the number of compatible systems so much, and create so much extra work and so many compatibility issues, that it would basically double the required work and make the whole thing almost unsellable, because only a limited number of users have computers capable of running it. maybe minor things like antialiasing and various other filters can effectively be done on the gpu in this case, but that would not significantly boost the rendering "pipeline", and it should be written with the possibility to disable it and use the software fallback.
have some fake color multiplier even in the dark locations, otherwise you will get this. i use an ambient light multiplier of 0.6 or 0.8 for shadowed places; those are ideal for my taste, but the particular number basically varies with every scene.
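the idea is just to floor the light term so shadowed texels never go fully black. a minimal sketch in c++ (the struct, names and numbers are mine for illustration, not any standard api):

```cpp
#include <algorithm>

struct Color { float r, g, b; };

// hypothetical shading helper: the diffuse term is clamped to an ambient
// floor (e.g. 0.6-0.8 for shadowed areas), so dark spots keep some color
inline Color shade(Color texel, float diffuse, float ambient)
{
    float f = std::max(diffuse, ambient); // never darker than the ambient floor
    return { texel.r * f, texel.g * f, texel.b * f };
}
```

with ambient = 0.6, a fully shadowed texel still keeps 60% of its texture color instead of collapsing to black.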
i don't suggest studying outdated technologies like q2. those optimization tricks will not work any more, and keep in mind that we have superscalar cpus now. also remember that you will need aggressively multithreaded code to achieve good speed with nice quality. just write your renderer, and if you find a part that is too slow, then meditate/study/doodle on that particular problem.
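the "aggressively multithreaded" part usually starts with splitting the framebuffer across worker threads. a rough sketch, assuming one thread per hardware core and interleaved rows for load balancing (the fill is a stand-in for real shading):

```cpp
#include <algorithm>
#include <cstdint>
#include <thread>
#include <vector>

// shade the framebuffer with one worker per hardware thread;
// each worker takes every n-th row so uneven scenes balance out
void render_bands(std::vector<uint32_t>& fb, int width, int height)
{
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < n; ++t)
        workers.emplace_back([&, t] {
            for (int y = (int)t; y < height; y += (int)n)
                for (int x = 0; x < width; ++x)
                    fb[y * width + x] = 0xFF000000u; // stand-in for real shading
        });
    for (auto& w : workers) w.join();
}
```

rows never overlap between workers, so no locking is needed on the framebuffer itself.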
vista and win7 have the same opengl fallback that windows xp had. xp, vista and win7 all have a software-rendered opengl fallback, plus a special d3d9-based fallback which is deactivated. that last one does not work with generic software at all: since it's deactivated and doesn't officially exist, it can't be accessed, it can only be turned on by installing special tools. microsoft's software opengl renderer, the generic fallback, renders a few thousand polygons at around half to 1 fps even on the newest core i7 cpus, so it doesn't really work in practice, and doesn't work with generic games or software at all. the microsoft wrappers support opengl 1.1 without any extensions except vsync control and abgr, and they crash on most texture formats or even on the most basic functions. microsoft has left this fallback unchanged since ~2001. in practice both are useless; no software is compatible with them. microsoft was unable to deliver their opengl 1.5 wrapper/implementation at all; they discontinued development after a few months of struggling, and their opengl 1.5 plans are not available any more. http://msdn.microsoft.com/en-us/library/windows/desktop/ee417756%28v=vs.85%29.aspx
(the situation is the same with directx: they originally planned a fast, well-multithreaded software renderer fallback for directx too, but they cancelled that as well. the software renderer is only available after installing the directx sdk, and only with special linking and initialization in the application, so it can't be used in practice. it also runs at 0.5-1 fps, so it's useless in practice either way.)
Do you really think that EA would be investing development resources towards developing a Mantle version of their flagship engine if the whole thing was just a smoke screen put up by AMD?
why not? especially if amd pays enough for porting the graphics engine to their new api? i don't see why a decision by EA would have any relevance to the current situation, unless some developer is an EA fan. it's like saying drinking coffee is good because hitler drank a lot of coffee too
hurray to amd for reinventing the wheel and making pointless hype around it
and yes, we are still talking about gpus - and only about gpus. if we didn't have gpus, we wouldn't need such apis at all. based on the measurement below, opengl can do almost the same performance as they promise with mantle, so i still don't see why mantle would be good for me in reality.
because opengl can do 100k+ draw calls at playable framerates too on decent hardware. i should measure directx too, but i am under linux at the moment.
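the usual way to reach those numbers on any api is batching: pay the per-call overhead once by merging many small meshes into one array before a single draw call. a pure-c++ sketch of just the data merge (no gl calls; the vertex struct is made up for illustration):

```cpp
#include <cstddef>
#include <vector>

struct Vertex { float x, y, z; };

// merge many small meshes into one vertex array, so one draw call can
// submit them all instead of paying the api overhead once per mesh
std::vector<Vertex> batch(const std::vector<std::vector<Vertex>>& meshes)
{
    std::size_t total = 0;
    for (const auto& m : meshes) total += m.size();

    std::vector<Vertex> out;
    out.reserve(total);                 // one allocation up front
    for (const auto& m : meshes)
        out.insert(out.end(), m.begin(), m.end());
    return out;
}
```

the merged array then goes into one buffer upload and one draw call, which is where most of the "100k draw calls" benchmarks get their headroom from.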
OpenGL vendor string: Advanced Micro Devices, Inc.
OpenGL renderer string: ATI Radeon HD 4200
OpenGL version string: 3.3.11672 Compatibility Profile Context
OpenGL shading language version string: 3.30
uname -r -a
Linux a1 3.4.6-2.10-desktop #1 SMP PREEMPT Thu Jul 26 09:36:26 UTC 2012 (641c197) x86_64 x86_64 x86_64 GNU/Linux
Managed code can be as fast as, or even faster than, native code sometimes.
managed code has never been faster than native code in real-life code, and probably never will be. native code outperforms managed code by 2-10x according to the REAL benchmarks, done by programmers who were at least able to turn on compiler optimisation flags (most c# evangelists fail even at this basic point). whoever wants speed-critical code must use a true programming language (oh god, i will get a lot of downvotes from the c# warriors, but at least i am honest). however, this is offtopic here; i don't know why somebody brought c# up in this thread anyway, it's not a good synonym for this question. (and also i am sure c# programmers will have a library to access this api too, so it's irrelevant, let's forget about it already.)
You're also wrong about OpenGL
OpenGL ES, a different api? yeah, like directx9 is a different api from directx10....
this is just playing with words after losing the real arguments. however, we haven't even started any debate, so this discussion is starting extremely wrong already
first of all, you don't optimize OpenGL because OpenGL is not software, it's a specification
oh great, you must be some kind of brain surgeon, i think
i don't think you are really clear on how opengl works (no, i will not teach it even if you paid me), but congratulations for pointing out that opengl is a specification - nobody said otherwise - maybe you should apply to be a professor at a university or something, and write a doctoral dissertation on the fact that opengl is a specification (no offense!)
i was talking about opengl driver vendors, of course; you may have heard about drivers at least.
including the very first ARB-approved extension - multitexture
that's the most insignificant extension to opengl as far as api speed is concerned. the first heavyweight thing for the api was the vertex array and display list capability in opengl 1.1, then element arrays and, later, vertex buffer objects. these made the api like spaghetti inside. however, the es versions and the core profiles from version 3 onwards at least dropped the older paths, keeping only the ones most significant for the generation they are meant for.
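the point of element arrays is that shared vertices are stored once and referenced by small indices. a hedged sketch in plain c++ of what that data looks like on the application side (no gl calls; the struct and function names are mine):

```cpp
#include <map>
#include <tuple>
#include <vector>

struct V { float x, y, z; };

// turn a flat triangle soup into an element array: unique vertices plus
// an index list, which is the shape glDrawElements-style paths consume
void index_mesh(const std::vector<V>& soup,
                std::vector<V>& verts, std::vector<unsigned>& indices)
{
    std::map<std::tuple<float, float, float>, unsigned> seen;
    for (const V& v : soup) {
        auto key = std::make_tuple(v.x, v.y, v.z);
        auto it = seen.find(key);
        if (it == seen.end()) {
            it = seen.emplace(key, (unsigned)verts.size()).first;
            verts.push_back(v);          // first time we see this vertex
        }
        indices.push_back(it->second);   // every triangle corner is an index
    }
}
```

a quad split into two triangles goes from 6 stored vertices down to 4 plus 6 small indices, and the savings grow with mesh connectivity.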
One is that Mantle only tackles one bottleneck, whereas many other bottlenecks exist in a graphics pipeline.
yes, there are many kinds of bottlenecks in opengl. however, mantle will mostly lose its speed later due to architectural reasons; this is unavoidable. if we compare opengl and mantle on future hardware, we will see that mantle has no advantage over opengl, which basically scales from anything in the last two decades to anything in the coming decades at least.
Likewise, if it turns out that you need significantly more complex code to use the API correctly
yes, they will need to manually code a lot of things that opengl probably offered for them. however, if they are coming from opengl es2, they probably already needed such code anyway, for example to tessellate their quads into triangles and to manage matrices themselves. i suggest careful skepticism towards mantle; i am afraid we will see the reinvention of opengl es3. at least i hope they don't want to give us another useless and unsupported opencl fork, and will give us a true graphics api, because if i wanted fully generic programming, i would code it on the cpu (and yeah, i already do that).
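the quad tessellation mentioned above is tiny but illustrative: es2 has no quad primitive, so every quad (a,b,c,d) becomes the two triangles (a,b,c) and (a,c,d). a sketch over index lists (names are mine):

```cpp
#include <array>
#include <vector>

// split each quad (a,b,c,d) into triangles (a,b,c) and (a,c,d) -
// the kind of helper an es2-style renderer already carries around
std::vector<std::array<unsigned, 3>>
quads_to_triangles(const std::vector<std::array<unsigned, 4>>& quads)
{
    std::vector<std::array<unsigned, 3>> tris;
    tris.reserve(quads.size() * 2);
    for (const auto& q : quads) {
        tris.push_back({q[0], q[1], q[2]});
        tris.push_back({q[0], q[2], q[3]});
    }
    return tris;
}
```

this particular diagonal split assumes convex quads; concave ones would need the other diagonal or real triangulation.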
samoth: c# game developers are mostly beginners, or not programmers at all. games created in c# are very limited due to the slowness of the language, so the speed-critical parts, such as game engines, are done in c++ and called as dlls. in my opinion, the problem with mantle is that it will become outdated after 1 or 2 generations of newer gpus, since the internal workings of the gpu will probably change so much that the api will no longer be as fast as originally intended, because it will not fit the hardware so well. after a few generations, the api will have the same speed issues that opengl (or directx) does. opengl, however, is widely used on every platform and has been heavily optimized over the past 20 years by the gpu manufacturers, so mantle will not be as much faster over its long-term life cycle as originally promised, while being very limited compared to opengl, which has many, many features, and compatibility even with hardware from the age of the 3dinosaurs, running older, newer, whatever games on a common code base. since they don't give out any specification, i will assume it's just pure crap until i get a freely downloadable specification that proves otherwise.
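the usual mechanism for that "c++ engine as a dll called from c#" split is an unmangled extern "C" export that the c# side binds with DllImport. a minimal sketch of the c++ side (the macro and function name are made up for illustration):

```cpp
// c++ side of a hypothetical engine dll: an unmangled entry point that
// a c# host could bind with [DllImport("engine")]
#if defined(_WIN32)
#  define ENGINE_API extern "C" __declspec(dllexport)
#else
#  define ENGINE_API extern "C"
#endif

ENGINE_API int engine_render_frame(int frame)
{
    // stand-in for the speed-critical native work
    return frame + 1;
}
```

extern "C" disables c++ name mangling so the managed runtime can find the symbol by name; the actual types crossing the boundary should stay plain c types.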