
Vulkan is Next-Gen OpenGL


I did recompile all the loader and layer libraries from source using the VS 2015 compiler, actually, and am using the debug versions.  Though I guess, since they're MSVC-generated, much or all of the debugging information is only in the PDBs?  Maybe I should give up on trying to use MinGW for this...


I was wondering how many AAA games would be using Vulkan. I heard that the Frostbite engine's Mantle renderer would be converted to Vulkan, but that it would only be used on platforms that don't support DirectX 12.


Aha!  I finally managed to solve the problem!

 

I was reading this page more closely, and it turns out that the loader needs to be able to find the layers via registry keys that point to the [tt]VkLayer_xxx.json[/tt] manifests.  I added them, and it all works perfectly now, even through MinGW instead of VS2015!
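For anyone hitting the same problem, a quick way to verify that the loader actually sees the layer manifests is to enumerate them at runtime. This is only a minimal sketch (it assumes the Vulkan headers and loader are already set up for your toolchain):

#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

int main()
{
    // Ask the loader how many layers it discovered (via the registry keys on Windows).
    uint32_t count = 0;
    vkEnumerateInstanceLayerProperties(&count, nullptr);

    std::vector<VkLayerProperties> layers(count);
    vkEnumerateInstanceLayerProperties(&count, layers.data());

    // If the VkLayer_xxx.json registry entries are missing, the layers simply
    // won't show up here, even though the layer DLLs exist on disk.
    for (const VkLayerProperties& layer : layers)
        std::printf("%s : %s\n", layer.layerName, layer.description);

    return 0;
}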


I was wondering how many AAA games would be using Vulkan. I heard that the Frostbite engine's Mantle renderer would be converted to Vulkan, but that it would only be used on platforms that don't support DirectX 12.


It makes sense for an engine to have a Vulkan backend, since it makes an Android port (almost) straightforward. There isn't the GL/GLES "ambiguity" anymore, and the render pass concept makes things like a G-buffer achievable on tiled architectures while bringing some memory bandwidth savings on desktop (so it makes sense to invest in render passes even if you don't target mobile at first).
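To make the render-pass point a bit more concrete, here is a hedged sketch of a two-subpass G-buffer render pass; the formats, attachment count, and the surrounding objects (device, images, pipelines) are assumptions for illustration only:

// Sketch only: one G-buffer colour attachment plus depth, with a second
// subpass that reads the G-buffer as an input attachment. On tiled GPUs this
// lets the lighting subpass read the G-buffer from on-chip tile memory.
VkAttachmentDescription attachments[2] = {};
attachments[0].format         = VK_FORMAT_R8G8B8A8_UNORM;         // G-buffer (assumed format)
attachments[0].samples        = VK_SAMPLE_COUNT_1_BIT;
attachments[0].loadOp         = VK_ATTACHMENT_LOAD_OP_CLEAR;
attachments[0].storeOp        = VK_ATTACHMENT_STORE_OP_DONT_CARE; // never has to leave the tile
attachments[0].stencilLoadOp  = VK_ATTACHMENT_LOAD_OP_DONT_CARE;
attachments[0].stencilStoreOp = VK_ATTACHMENT_STORE_OP_DONT_CARE;
attachments[0].initialLayout  = VK_IMAGE_LAYOUT_UNDEFINED;
attachments[0].finalLayout    = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
attachments[1] = attachments[0];
attachments[1].format         = VK_FORMAT_D32_SFLOAT;              // depth (assumed format)
attachments[1].finalLayout    = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL;

VkAttachmentReference gbufferWrite = { 0, VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL };
VkAttachmentReference gbufferRead  = { 0, VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL };
VkAttachmentReference depthWrite   = { 1, VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL };

VkSubpassDescription subpasses[2] = {};
subpasses[0].pipelineBindPoint       = VK_PIPELINE_BIND_POINT_GRAPHICS;
subpasses[0].colorAttachmentCount    = 1;
subpasses[0].pColorAttachments       = &gbufferWrite;   // subpass 0: fill the G-buffer
subpasses[0].pDepthStencilAttachment = &depthWrite;
subpasses[1].pipelineBindPoint       = VK_PIPELINE_BIND_POINT_GRAPHICS;
subpasses[1].inputAttachmentCount    = 1;
subpasses[1].pInputAttachments       = &gbufferRead;    // subpass 1: lighting reads it back
// (a real lighting subpass would also write to a colour attachment; omitted here)

VkSubpassDependency dependency = {};
dependency.srcSubpass      = 0;
dependency.dstSubpass      = 1;
dependency.srcStageMask    = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
dependency.dstStageMask    = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT;
dependency.srcAccessMask   = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
dependency.dstAccessMask   = VK_ACCESS_INPUT_ATTACHMENT_READ_BIT;
dependency.dependencyFlags = VK_DEPENDENCY_BY_REGION_BIT;           // key for tilers

VkRenderPassCreateInfo renderPassInfo = {};
renderPassInfo.sType           = VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO;
renderPassInfo.attachmentCount = 2;
renderPassInfo.pAttachments    = attachments;
renderPassInfo.subpassCount    = 2;
renderPassInfo.pSubpasses      = subpasses;
renderPassInfo.dependencyCount = 1;
renderPassInfo.pDependencies   = &dependency;

VkRenderPass renderPass;
vkCreateRenderPass(device, &renderPassInfo, nullptr, &renderPass);  // 'device' assumed to exist

The BY_REGION dependency is what lets a tiler keep the G-buffer in tile memory instead of spilling it to DRAM, which is where the bandwidth saving comes from.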

I remember XCOM being ported to iOS while keeping all of its mechanics and graphical complexity, and I think Vulkan may help with that kind of development.


It looks like, although the Mali T880 and the Adreno 530 are marketed as Vulkan compatible, their drivers seem far from working:

http://vulkan.gpuinfo.org/displayreport.php?id=128

http://vulkan.gpuinfo.org/displayreport.php?id=133

 

The Mali doesn't expose the swapchain extension, and there's no MSAA. And the maximum grid size for compute workloads is 0...

Apparently both GPUs support a DX11 feature level, but the tessellation/geometry shader stages are not exposed.
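For anyone who wants to check this on their own device rather than on gpuinfo.org, a rough sketch of the relevant queries (it assumes a VkPhysicalDevice called physicalDevice has already been picked, and that <vector>, <cstring> and <cstdio> are included):

// Check whether the swapchain extension is exposed at all.
uint32_t extCount = 0;
vkEnumerateDeviceExtensionProperties(physicalDevice, nullptr, &extCount, nullptr);
std::vector<VkExtensionProperties> extensions(extCount);
vkEnumerateDeviceExtensionProperties(physicalDevice, nullptr, &extCount, extensions.data());

bool hasSwapchain = false;
for (const VkExtensionProperties& e : extensions)
    if (std::strcmp(e.extensionName, VK_KHR_SWAPCHAIN_EXTENSION_NAME) == 0)
        hasSwapchain = true;

// Check the compute dispatch limits the reports above complain about.
VkPhysicalDeviceProperties props;
vkGetPhysicalDeviceProperties(physicalDevice, &props);
std::printf("swapchain: %s, maxComputeWorkGroupCount: %u x %u x %u\n",
            hasSwapchain ? "yes" : "no",
            props.limits.maxComputeWorkGroupCount[0],
            props.limits.maxComputeWorkGroupCount[1],
            props.limits.maxComputeWorkGroupCount[2]);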

 

I wonder how they got UE4 running on the Galaxy S7. Or maybe they're saving a more formal Vulkan announcement for Android N.


Async compute is looking to be a killer feature for DX12 / Vulkan titles. The benchmarks indicate that Pascal's implementation of async compute isn't half as good as AMD's.


Async compute is looking to be a killer feature for DX12 / Vulkan titles. The benchmarks indicate that Pascal's implementation of async compute isn't half as good as AMD's.

Ah, one should be careful with such a blanket statement (unless you have some more detailed information?). Benchmarks are rarely unbiased and never fair, and interpreting the results can be tricky. I'm not trying to defend nVidia here, and you might quite possibly even be right, but based solely on the linked results, I think this is a bit of a hasty conclusion.

From what one can see in that video, one possible interpretation is "Yay, AMD is so awesome, nVidia sucks". However, another possible interpretation might be "AMD sucks less with Vulkan than with OpenGL". Or worded differently: AMD's OpenGL implementation is poor, and with Vulkan they're up to par.

Consider the numbers. The R9 Fury is meant to attack the GTX 980 (which, by the way, is Maxwell, not Pascal). With Vulkan, it has more or less the same FPS, give or take one frame. It's still way slower than the Pascal GPUs, but comparing against those wouldn't be fair since they're much bigger beasts, so let's skip that. With OpenGL, however, it is some 30+ percent slower than the competing Maxwell card.

All tested GPUs gain from using Vulkan, but for the nVidia ones it's in the 6-8% range while for the AMDs it's in the 30-40% range. I think there are really two ways of interpreting that result.


 

Async compute is looking to be a killer feature for DX12 / Vulkan titles. The benchmarks indicate that Pascal's implementation of async compute isn't half as good as AMD's.

Ah, one should be careful with such a blanket statement (unless you have some more detailed information?). Benchmarks are rarely unbiased and never fair, and interpreting the results can be tricky. I'm not trying to defend nVidia here, and you might quite possibly even be right, but based solely on the linked results, I think this is a bit of a hasty conclusion.

From what one can see in that video, one possible interpretation is "Yay, AMD is so awesome, nVidia sucks". However, another possible interpretation might be "AMD sucks less with Vulkan than with OpenGL". Or worded differently: AMD's OpenGL implementation is poor, and with Vulkan they're up to par.

Consider the numbers. The R9 Fury is meant to attack the GTX 980 (which, by the way, is Maxwell, not Pascal). With Vulkan, it has more or less the same FPS, give or take one frame. It's still way slower than the Pascal GPUs, but comparing against those wouldn't be fair since they're much bigger beasts, so let's skip that. With OpenGL, however, it is some 30+ percent slower than the competing Maxwell card.

All tested GPUs gain from using Vulkan, but for the nVidia ones it's in the 6-8% range while for the AMDs it's in the 30-40% range. I think there are really two ways of interpreting that result.

 

Not really: in all games that have async compute, AMD gains much more than Pascal, including DirectX titles. Check this out if you don't believe me:

http://wccftech.com/nvidia-geforce-gtx-1080-dx12-benchmarks/

Does Doom have a configuration option to disable async compute? That would let you measure the baseline GL->Vulkan gain, and then the async gain separately.

We all knew from the existing data, though, that GCN was built for async and that Nvidia is only now being pressured into it by their marketing department.


 

 

Async compute is looking to be a killer feature for DX12 / Vulkan titles. The benchmarks indicate that Pascal's implementation of async compute isn't half as good as AMD's.

Ah, one should be careful with such a blanket statement (unless you have some more detailed information?). Benchmarks are rarely unbiased and never fair, and interpreting the results can be tricky. I'm not trying to defend nVidia here, and you might quite possibly even be right, but based solely on the linked results, I think this is a bit of a hasty conclusion.

From what one can see in that video, one possible interpretation is "Yay, AMD is so awesome, nVidia sucks". However, another possible interpretation might be "AMD sucks less with Vulkan than with OpenGL". Or worded differently: AMD's OpenGL implementation is poor, and with Vulkan they're up to par.

Consider the numbers. The R9 Fury is meant to attack the GTX 980 (which, by the way, is Maxwell, not Pascal). With Vulkan, it has more or less the same FPS, give or take one frame. It's still way slower than the Pascal GPUs, but comparing against those wouldn't be fair since they're much bigger beasts, so let's skip that. With OpenGL, however, it is some 30+ percent slower than the competing Maxwell card.

All tested GPUs gain from using Vulkan, but for the nVidia ones it's in the 6-8% range while for the AMDs it's in the 30-40% range. I think there are really two ways of interpreting that result.

 

Not really: in all games that have async compute, AMD gains much more than Pascal, including DirectX titles. Check this out if you don't believe me:

http://wccftech.com/nvidia-geforce-gtx-1080-dx12-benchmarks/

 

 

That benchmark shows that samoth is correct. Async Compute seems to be improving performance by 2-5% for AMD, which is far from the massive improvement in Doom. The "AMD is awful at OpenGL" theory seems even more likely now.


There's an instructive comparison here: https://www.youtube.com/watch?v=ZCHmV3c7H1Q

 

At about the 4:35 mark (link) it compares CPU frame times between OpenGL and Vulkan on both AMD and NVIDIA and across multiple generations of hardware.  The take-home I got from that was that yes, AMD does have significantly higher gains than NVIDIA, but those gains just serve to make both vendors more level.  In other words, AMD's OpenGL CPU frame time was significantly worse than NVIDIA's to begin with.

 

So for example, the R9 Fury X went from 16.2 ms to 10.3 ms, whereas the GTX 1080 went from 10.7 ms to 10.0 ms, all of which supports the "AMD's OpenGL implementation is poor, with Vulkan they're up to par" reading.
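For what it's worth, a quick check of those numbers (the frame times are the ones quoted above; the arithmetic is mine):

#include <cstdio>

int main()
{
    // CPU frame times quoted above, in milliseconds.
    const double furyGL = 16.2, furyVK = 10.3;   // R9 Fury X
    const double gtxGL  = 10.7, gtxVK  = 10.0;   // GTX 1080

    // Reduction in CPU frame time when moving from OpenGL to Vulkan.
    std::printf("Fury X:   %.1f%% less CPU time\n", (1.0 - furyVK / furyGL) * 100.0); // ~36%
    std::printf("GTX 1080: %.1f%% less CPU time\n", (1.0 - gtxVK / gtxGL) * 100.0);   // ~7%
    return 0;
}

Roughly a 36% reduction versus a 7% reduction, yet both vendors land at almost the same 10 ms afterwards.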

The simple truth is that for quite some time now NV have had better drivers than AMD, both in DX11 and OpenGL, when it came to performance.
They did a lot of work and took advantage of the fact you could push work to other threads and use that time to optimise the crap out of things.

Vulkan and DX12, however, have different architectures which don't allow NV to burn CPU time to get their performance up. It also shows that when you get the driver out of the way, in the manner these new APIs allow, AMD's hardware can suddenly stretch its legs a bit and get on par with NV.

That is where the majority of the gains come from.

As for async: firstly, on NV, forget it pre-Pascal. If they were going to enable it to work sanely on those cards they would have done so by now; I suspect the front end simply doesn't play nice.

Beyond that, it seems like a gain, BUT it depends on what work is going on, and how much of a win it is will vary per GPU.

I suspect the reason you see slightly bigger gains on AMD is down to the front-end design; the 'gfx' command queue processor can only keep so many work units in flight, so let's pretend that number is 10. When 10 work units have been dispatched into the ALU array it'll stop processing until there is spare space - I think it only retires in order, so even if WU1 finishes before WU0 it still won't have any spare dispatch space. However, AMD also has its ACEs, which can keep more work units in flight; even if you only have access to one of them, that's 2 more WUs in flight in the array (iirc), so you can do more work if you have the space.
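At the API level, feeding those extra front-end slots roughly corresponds to submitting work on a separate compute-capable queue family instead of the graphics queue. A hedged sketch of what that looks like in Vulkan (the physical device, logical device, and the pre-recorded computeCommandBuffer are assumed to exist):

// Find a queue family that supports compute but not graphics, i.e. a
// dedicated compute queue (on GCN this is roughly what maps onto the ACEs).
uint32_t familyCount = 0;
vkGetPhysicalDeviceQueueFamilyProperties(physicalDevice, &familyCount, nullptr);
std::vector<VkQueueFamilyProperties> families(familyCount);
vkGetPhysicalDeviceQueueFamilyProperties(physicalDevice, &familyCount, families.data());

uint32_t computeFamily = UINT32_MAX;
for (uint32_t i = 0; i < familyCount; ++i)
{
    const VkQueueFlags flags = families[i].queueFlags;
    if ((flags & VK_QUEUE_COMPUTE_BIT) && !(flags & VK_QUEUE_GRAPHICS_BIT))
        computeFamily = i;
}

// The device must have been created with a queue from this family; the compute
// work is then simply submitted to that queue and can overlap the graphics queue.
VkQueue computeQueue;
vkGetDeviceQueue(device, computeFamily, 0, &computeQueue);

VkSubmitInfo submit = {};
submit.sType              = VK_STRUCTURE_TYPE_SUBMIT_INFO;
submit.commandBufferCount = 1;
submit.pCommandBuffers    = &computeCommandBuffer;   // assumed: pre-recorded compute work
vkQueueSubmit(computeQueue, 1, &submit, VK_NULL_HANDLE);

Whether that overlapping submission actually wins you anything is exactly the point being argued here: it depends on how much idle ALU time the graphics work leaves behind.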

NV, on the other hand, seems to have a more unified front end (they are piss poor at telling people things, so there's a bit of guesswork involved), which I suspect means they can already get more work in flight, so the same amount of spare time might not be there to take advantage of.

This is all guesswork, however; the main takeaway is that async can be a win, but it very much depends on the work going on within the GPU and on the GPU it is being run on.

The big wins, however, are the API changes, which get the driver out of the way, let you use less CPU time setting things up per thread, AND let you set work up across threads.
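A minimal sketch of that last point, recording command buffers across threads; the function name and surrounding setup are assumptions for illustration, and the key detail is that VkCommandPool is externally synchronized, so each thread owns its own pool and records without locks:

#include <thread>
#include <vector>
#include <vulkan/vulkan.h>

// Each worker thread gets its own command pool and secondary command buffer,
// records its share of the frame in parallel, and the results are later pulled
// into a primary command buffer with vkCmdExecuteCommands on one thread.
void RecordInParallel(VkDevice device, uint32_t queueFamilyIndex, unsigned threadCount)
{
    std::vector<VkCommandPool>   pools(threadCount);
    std::vector<VkCommandBuffer> buffers(threadCount);
    std::vector<std::thread>     workers;

    for (unsigned t = 0; t < threadCount; ++t)
    {
        VkCommandPoolCreateInfo poolInfo = {};
        poolInfo.sType            = VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO;
        poolInfo.queueFamilyIndex = queueFamilyIndex;
        vkCreateCommandPool(device, &poolInfo, nullptr, &pools[t]);

        VkCommandBufferAllocateInfo allocInfo = {};
        allocInfo.sType              = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO;
        allocInfo.commandPool        = pools[t];
        allocInfo.level              = VK_COMMAND_BUFFER_LEVEL_SECONDARY;
        allocInfo.commandBufferCount = 1;
        vkAllocateCommandBuffers(device, &allocInfo, &buffers[t]);

        workers.emplace_back([&, t]()
        {
            VkCommandBufferInheritanceInfo inherit = {};
            inherit.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_INHERITANCE_INFO;

            VkCommandBufferBeginInfo begin = {};
            begin.sType            = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
            begin.pInheritanceInfo = &inherit;

            vkBeginCommandBuffer(buffers[t], &begin);
            // ... record this thread's share of the draw calls here ...
            vkEndCommandBuffer(buffers[t]);
        });
    }

    for (std::thread& w : workers)
        w.join();
}

Nothing like this is possible in GL, where all state lives in a single context that effectively serializes submission onto one thread.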

Yeah, it's both.

AMD's drivers have traditionally been (CPU) slower than NV's. Especially in GL, which is an inconceivable amount of code for a driver (for comparison, NV's GL driver likely dwarfs Unreal Engine's code base).

A lot of driver work is now engine work, letting AMD catch up on CPU performance by handing half of their responsibilities over to smart engine devs, who can now use design instead of heuristics :)

Resource barriers also give engine devs some opportunity to micro-optimize a bit of GPU time, which was the job of magic driver heuristics previously -- and NV's heuristic magic was likely smarter than AMD's.
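As a concrete, hedged illustration of what "barriers as engine work" means: transitioning a render target so a later pass can sample it is now an explicit call the engine places, rather than something the driver infers. A rough sketch, with the VkImage (renderTarget) and VkCommandBuffer (commandBuffer) assumed to exist:

// Explicitly tell the GPU: colour writes to this image are finished, and the
// next fragment shaders will sample it. A GL/D3D11 driver had to guess this;
// here the engine decides exactly where the barrier goes.
VkImageMemoryBarrier barrier = {};
barrier.sType               = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
barrier.srcAccessMask       = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
barrier.dstAccessMask       = VK_ACCESS_SHADER_READ_BIT;
barrier.oldLayout           = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
barrier.newLayout           = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
barrier.image               = renderTarget;            // assumed VkImage
barrier.subresourceRange    = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 };

vkCmdPipelineBarrier(commandBuffer,                     // assumed VkCommandBuffer
                     VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
                     VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
                     0,
                     0, nullptr,    // no global memory barriers
                     0, nullptr,    // no buffer barriers
                     1, &barrier);  // one image barrier

Place it too early or too broadly and you stall the GPU; place it precisely and you beat what a heuristic driver could do.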

AMD were the original Vulkan architects (and probably had disproportionate input into DX12 as well - the benefits of winning the console war), so both APIs fit their HW architecture perfectly (a closer API fit than NV's).

AMD actually can do async compute right (again: perfect API/HW fit), allowing modest gains in certain situations (5-30%), which could mean as much as 5 ms of extra GPU time per frame :o


Axel Gneiting from id Software is porting Quake to Vulkan: https://twitter.com/axelgneiting/status/755988244408381443

Code: https://github.com/Novum/vkQuake

 

This is cool; it's going to be a really nice working example and we'll be able to see exactly how each block of modern Vulkan code relates to the original cruddy old legacy OpenGL 1.1 code.


As for async: firstly, on NV, forget it pre-Pascal. If they were going to enable it to work sanely on those cards they would have done so by now

It's not so much limited to async compute; it seems to be about compute and transfers in combination altogether.

I can confirm that from my own experience using Blender (which on NV uses CUDA) on a desktop computer with a Maxwell GPU, compared to my notebook, which only has the Skylake CPU's integrated graphics.

Yeah, Blender on a notebook with the 3D viewport set to "render": what a fucked-up idea, this cannot possibly work. Guess what: it works way better than on the desktop computer with the dedicated GPU.

Now, looking at the performance counters shown by Process Explorer, it turns out the Maxwell has two shader units busy on average (two!), whereas the cheap integrated Intel GPU has them all 95-100% busy. So I guess if the GPU does not spend its time doing compute, the time must go into doing DMA and switching engines between "transfer" and "compute". Otherwise I couldn't explain why the GPU usage is so darn low.

Now, since this is not an entirely unknown issue, I would seriously expect Pascal to schedule/pipeline that kind of mode switching much better, or even do it in parallel seamlessly (I think I have read something about them adding an additional DMA controller at one point, too, though I believe that was already for Maxwell -- which would make it seem that on yet older generations it's still worse?).


Although it's true that Vulkan performs much faster than OpenGL, I don't think OpenGL is going anywhere anytime soon. Engines such as Unreal and Unity will use Vulkan under the hood, but nobody in their right mind will code in Vulkan (have you seen this "simple" Hello Triangle?). However, I think it'll be interesting to see what happens. Who knows, maybe we'll all end up coding with Vulkan. Computers will continue to get faster, but so will our ambitions, so Vulkan seems like the future of graphics programming, but OpenGL will be here to stay due to its "simplicity" (at least compared to Vulkan).  :D


Modern OpenGL is also ridiculously complex and requires pages of code to render a triangle using the recommended "fast path". No one should be programming in GL either, except a small number of people within engine development teams :D


Modern OpenGL is also ridiculously complex and requires pages of code to render a triangle using the recommended "fast path". No one should be programming in GL either, except a small number of people within engine development teams :D

 

This.

 

OpenGL's reputation for "simplicity", I suspect, stems from John Carmack's comparison with D3D3 dating back to 1996-ish.  Nobody in their right mind would write production-quality, performance-critical OpenGL code in that style any more either.
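For reference, this is roughly the 1996-era style in question; a complete "triangle" really was just this (window and context setup aside), which is where the reputation comes from. A sketch only, reusing the same vertices as the Vulkan sample above:

// Legacy immediate-mode OpenGL 1.1: charmingly short, and exactly the style
// no one should use for performance-critical code today.
glBegin(GL_TRIANGLES);
    glColor3f(1.0f, 0.0f, 0.0f); glVertex3f( 1.0f,  1.0f, 0.0f);
    glColor3f(0.0f, 1.0f, 0.0f); glVertex3f(-1.0f,  1.0f, 0.0f);
    glColor3f(0.0f, 0.0f, 1.0f); glVertex3f( 0.0f, -1.0f, 0.0f);
glEnd();
// The modern "fast path" replaces this with buffer objects, vertex array
// objects, and shaders -- more code up front, far less driver guesswork.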


nobody in their right mind will code in Vulkan (have you seen this "simple" Hello Triangle?).

This isn't a triangle example; it's a simple rendering engine for triangles with an RGB colour at each vertex, which happens to have an easy-to-find hardcoded triangle in the middle as an example of the example:
// Setup vertices
std::vector<Vertex> vertexBuffer =
{
    { {  1.0f,  1.0f, 0.0f }, { 1.0f, 0.0f, 0.0f } },
    { { -1.0f,  1.0f, 0.0f }, { 0.0f, 1.0f, 0.0f } },
    { {  0.0f, -1.0f, 0.0f }, { 0.0f, 0.0f, 1.0f } }
};
uint32_t vertexBufferSize = static_cast<uint32_t>(vertexBuffer.size()) * sizeof(Vertex);

// Setup indices
std::vector<uint32_t> indexBuffer = { 0, 1, 2 };
In example code, the priority is calling the API properly, not good abstractions (which would be confusing).
 
EDIT: quoting loses links. The "triangle example" is https://github.com/SaschaWillems/Vulkan/blob/master/triangle/triangle.cpp from the well known Vulkan examples repository by Sascha Willems. Edited by LorenzoGatti


nobody in their right mind will code in Vulkan (have you seen this "simple" Hello Triangle?).

This isn't a triangle example; it's a simple rendering engine for triangles with an RGB colour at each vertex, which happens to have an easy-to-find hardcoded triangle in the middle as an example of the example:
// Setup vertices
std::vector<Vertex> vertexBuffer =
{
    { {  1.0f,  1.0f, 0.0f }, { 1.0f, 0.0f, 0.0f } },
    { { -1.0f,  1.0f, 0.0f }, { 0.0f, 1.0f, 0.0f } },
    { {  0.0f, -1.0f, 0.0f }, { 0.0f, 0.0f, 1.0f } }
};
uint32_t vertexBufferSize = static_cast<uint32_t>(vertexBuffer.size()) * sizeof(Vertex);

// Setup indices
std::vector<uint32_t> indexBuffer = { 0, 1, 2 };
In example code, the priority is calling the API properly, not good abstractions (which would be confusing).
EDIT: quoting loses links. The "triangle example" is https://github.com/SaschaWillems/Vulkan/blob/master/triangle/triangle.cpp from the well known Vulkan examples repository by Sascha Willems.
I don't think that's what these quotes (about both Vulkan and modern GL) are meant to say, or how they should be read.

Sure, the draw-triangle code is concise, straightforward, and clear (in both GL4 and Vulkan), but it takes about 100 lines of code just to get a context that can do anything at all set up in GL, and about three times as much in Vulkan. Plus, it takes about half a page of code to do what you wish were nothing more than "CreateTexture()" or the like.

Hence the assumption "no sane person will...". Almost everybody, almost all the time, will want to be using an intermediate library which does the Vulkan heavy lifting.
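To illustrate the kind of heavy lifting such a library would hide, here is a hedged sketch of a hypothetical CreateTexture2D() wrapper; the findMemoryType() helper and all error handling are assumed/omitted, and a real version would also have to upload pixel data and insert the layout transitions:

// Hypothetical wrapper: everything below replaces what used to be a single
// glTexImage2D/CreateTexture2D-style call in older APIs.
struct Texture { VkImage image; VkDeviceMemory memory; VkImageView view; };

Texture CreateTexture2D(VkDevice device, VkPhysicalDevice gpu,
                        uint32_t width, uint32_t height, VkFormat format)
{
    Texture tex = {};

    VkImageCreateInfo imageInfo = {};
    imageInfo.sType         = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
    imageInfo.imageType     = VK_IMAGE_TYPE_2D;
    imageInfo.format        = format;
    imageInfo.extent        = { width, height, 1 };
    imageInfo.mipLevels     = 1;
    imageInfo.arrayLayers   = 1;
    imageInfo.samples       = VK_SAMPLE_COUNT_1_BIT;
    imageInfo.tiling        = VK_IMAGE_TILING_OPTIMAL;
    imageInfo.usage         = VK_IMAGE_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_TRANSFER_DST_BIT;
    imageInfo.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
    vkCreateImage(device, &imageInfo, nullptr, &tex.image);

    // Texture memory is allocated and bound by hand.
    VkMemoryRequirements memReqs;
    vkGetImageMemoryRequirements(device, tex.image, &memReqs);

    VkMemoryAllocateInfo allocInfo = {};
    allocInfo.sType           = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
    allocInfo.allocationSize  = memReqs.size;
    allocInfo.memoryTypeIndex = findMemoryType(gpu, memReqs,            // hypothetical helper
                                               VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT);
    vkAllocateMemory(device, &allocInfo, nullptr, &tex.memory);
    vkBindImageMemory(device, tex.image, tex.memory, 0);

    // And the view the shaders will actually sample through.
    VkImageViewCreateInfo viewInfo = {};
    viewInfo.sType            = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
    viewInfo.image            = tex.image;
    viewInfo.viewType         = VK_IMAGE_VIEW_TYPE_2D;
    viewInfo.format           = format;
    viewInfo.subresourceRange = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 };
    vkCreateImageView(device, &viewInfo, nullptr, &tex.view);

    return tex;
}

That is precisely the "half a page of code" being complained about, and precisely what an intermediate library would collapse back into one call.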


Hence the assumption "no sane person will...". Almost everybody, almost all the time, will want to be using an intermediate library which does the Vulkan heavy lifting.

 

This looks to be a good assumption... Once we have really good low-level libraries, we can foresee new higher-level libraries that provide easier means. But then we might end up with a plethora of libraries, surely not compatible with each other... each of them with its own good and bad points.

 

It's also possible for future OpenGL releases, and even Direct3D, to be implemented on top of Vulkan... But maybe I'm wrong here.


Sure, the draw-triangle code is concise, straightforward, and clear (in both GL4 and Vulkan), but it takes about 100 lines of code just to get a context that can do anything at all set up in GL, and about three times as much in Vulkan. Plus, it takes about half a page of code to do what you wish were nothing more than "CreateTexture()" or the like.

Hence the assumption "no sane person will...". Almost everybody, almost all the time, will want to be using an intermediate library which does the Vulkan heavy lifting.


The thing is, creating a context is an absolutely standard task that is also something you will typically write code for once and once only, then reuse that code in subsequent projects (or alternatively, grab somebody else's code off the internet).

