
#5216034 Vulkan is Next-Gen OpenGL

Posted by phantom on 12 March 2015 - 05:25 AM

Yes, instancing and indirect calls are here to stay; the former might break down to a few draw calls in the driver/command processor but is still useful for expressing intent. The latter is a different class of call: instead of embedding the draw information (counts, offsets, etc.) in the command buffer, that information is sourced from a location in GPU memory.

So in command stream terms it is the difference between;
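Roughly speaking (a sketch; the command names here are made up, but the argument block matches the four fields GL's ARB_draw_indirect actually uses):

#include <stdint.h>

// direct: the parameters travel inside the command buffer itself
//   DRAW           count=36  instance_count=1  first=0  base_instance=0
//
// indirect: the command buffer carries only a GPU address; the command
// processor fetches a block like this from GPU memory at execute time
//   DRAW_INDIRECT  addr=<somewhere in GPU memory>
struct DrawArraysIndirectArgs {
    uint32_t count;           // vertex count
    uint32_t instance_count;  // number of instances
    uint32_t first;           // first vertex
    uint32_t base_instance;   // base instance id
};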

Where the 'draw_indirect' command knows to look at a certain register in the command processor for the details.

#5215000 What are your opinions on DX12/Vulkan/Mantle?

Posted by phantom on 06 March 2015 - 01:13 PM

Depending on how you have designed things, the time to port should be pretty low and, importantly, the per-API code for the new Vulkan and DX12 paths will be very thin as they look very much the same in terms of API surface and functionality.

Older APIs might be a problem; however, again this depends on your abstraction level. If you have coupled too closely to D3D11's or OpenGL's style of doing things at the front end then yes, it'll cause you pain - if your abstraction was light enough then you could possibly treat D3D11 and OpenGL as if they were D3D12 and Vulkan from the outside and do the fiddling internally.
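To make that concrete, a 'light enough' abstraction can be as simple as a command-recording interface shaped like the new APIs (a minimal sketch; all the names are mine, not from any real engine):

#include <stdint.h>

// what the front end records; deliberately shaped like DX12/Vulkan
typedef struct DrawPacket {
    uint32_t pipeline;       // handle to a pre-baked pipeline state
    uint32_t vertex_buffer;  // handle to the vertex data
    uint32_t vertex_count;
} DrawPacket;

typedef struct CommandList {
    DrawPacket packets[256];
    uint32_t   count;
} CommandList;

// each backend supplies its own submit; the front end never finds out
// whether the records are translated into a native command buffer
// (D3D12/Vulkan) or simply replayed as immediate calls (D3D11/GL)
typedef struct Backend {
    void (*submit)(struct Backend * self, const CommandList * list);
} Backend;

void record_draw(CommandList * list, DrawPacket packet)
{
    if (list->count < 256)
        list->packets[list->count++] = packet;
}

The D3D11/GL backend does its 'fiddling' inside submit(); the front end stays API-agnostic either way.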

At this point however it depends on where you are with things; if you want to release Soon then you'd be better off forgetting the new APIs and getting finished before going back, gutting your rendering backend and doing it again for another project/product.

In fact I'd probably argue 'releasing soon' is the only reason to stick with the current APIs; if you are just learning then by all means continue, however I would advocate switching to the newer stuff as soon as you can. That might mean waiting for a lib to abstract things a bit if you don't feel like fiddling, but the new APIs look much cleaner and much simpler to learn - a few tricky concepts are wrapped up in them but they aren't that hard to deal with and, with Vulkan at least, it looks like you'll have solid debugging tools and layers built in to help.

I guess if you are a hobbyist you could keep on with the old stuff; but I'd honestly say that if you are looking to be a gfx programmer in the industry switching sooner rather than later will help you out as you'll be useful in the future when you join up.

#5214313 Vulkan is Next-Gen OpenGL

Posted by phantom on 03 March 2015 - 04:06 PM

Asserting rights to IP is not sabotage; if it was then the same claim could be levelled at others in the group too over their various IP claims over the years. Hell, S3 caused issues with their texture compression IP claims.

Btw, MS left the ARB in March of the following year; it has taken 12 years for the rest of them to stop messing up, with the intervening years being a series of mistakes, missteps, bad choices and back-tracking.
(Years, btw, that I lived through as an OpenGL programmer, one-time moderator of this sub-forum, vocal GL advocate vs DX9, and GLSL chapter author for More OpenGL Game Programming... so, I know a thing or two about the history ;))

All of this is of course horribly off topic so should probably stop...

#5214295 Vulkan is Next-Gen OpenGL

Posted by phantom on 03 March 2015 - 02:47 PM

> I was trying to bring up the fact that some people believe that Microsoft sabotaged the OpenGL implementation on Windows to increase DirectX adoption.

Yes, people do enjoy painting MS as the 'big bad' in all of this when, truth be told, 99% of OpenGL's problems were caused by ARB infighting and incompetence (see GL2.0 and Longs Peak/GL3.0). The worst MS ever did was freeze their software implementation at GL1.1 and not ship GL drivers/DLLs via Windows Update with updated graphics drivers (which is a pain, but given they don't test that component I can see why) - they never actively sabotaged things.
(Btw, if your source for this was the Wolfire blog from some time back then... well, forget you read it; it was trash, frankly.)

The biggest tell in all of this is the utter amazement expressed by many long-time graphics programmers that Vulkan seems, well, sane and well thought out... I think many of us are still waiting for the other shoe to drop and won't be 100% convinced until we have working drivers in our hands to play with/test.

#5214170 Vulkan is Next-Gen OpenGL

Posted by phantom on 03 March 2015 - 05:07 AM

Yeah, as I said in the other thread, it seems sane... a Khronos take on the Mantle API.

Interested to see the complete model; do we get separate command queues for graphics and compute? (Based on the ImgTec blog this looks to be the case!) How does it deal with multi-GPU machines? What about upload/download control?

But on the face of it things look sane... which I still find confusing... :D

#5213913 Next-Gen OpenGL To Be Shown Off Next Month

Posted by phantom on 02 March 2015 - 08:30 AM

Meh... names... they could call the thing Wigglebum Fart Goblin for all I care; it could have the coolest name in the world but it's the API that matters.

#5212733 Is it time to upgrade to Dx11?

Posted by phantom on 24 February 2015 - 11:21 AM

> Whatever the case, I apologize to the original poster because of my 'raving paranoia'. (thank you for the smart words phantom)

If you are going to make up bullshit then I'm going to call you on the aforementioned bullshit, simple as.
If, in response to this, you wish to resort to childish down-voting of reasonable posts then carry on; your ire at being called out amuses me and underscores the kind of person you apparently are. :)

#5212482 Is it time to upgrade to Dx11?

Posted by phantom on 23 February 2015 - 11:20 AM

> hmmm... I don't know. Seems like a real nice dangling carrot that precursors an OS subscription-based pay system. Not the cool Epic deal you all know and love but more of an 'able to pull the plug when you don't pay' relationship. Personally, I think it's a trap and your OS will actually cost more in the long run.

Wait... wut?
How can a free OS cost you more in the long run?
You are going to have to explain this because right now it looks like raving paranoia...

> I'd hold off as well but for different reasons. What I'm waiting for is glNext because I feel the giant is about to pull another fast one.

Your paranoia aside, if anyone is going to drop the ball this time it'll be the ARB, as they have a history of ball-dropping which is frankly excellent.

#5212420 Is it time to upgrade to Dx11?

Posted by phantom on 23 February 2015 - 04:14 AM

Might want to hold off a bit if it isn't a major issue yet; DX12 will bring another major API shift and with Win10 going 'free' for anyone with Win7 or Win8 it could well get a lot of traction.

#5211990 Game Engine for Linux

Posted by phantom on 20 February 2015 - 03:04 PM

There is always UE4

#5209206 Next-Gen OpenGL To Be Shown Off Next Month

Posted by phantom on 06 February 2015 - 08:18 PM

Anandtech has up some early DX12 performance benchmarks vs Mantle; http://anandtech.com/show/8962/the-directx-12-performance-preview-amd-nvidia-star-swarm.

TLDR version; DX12 is basically as fast as Mantle (2 or 3 fps slower on the GPU, 4 or 5ms slower on the CPU) and destroys DX11 both in terms of FPS and CPU time to submit.

So, the next OpenGL version has a target for the implementations.

#5208848 Next-Gen OpenGL To Be Shown Off Next Month

Posted by phantom on 05 February 2015 - 03:35 AM

> Mantle is currently in closed beta, which means the API/spec is still a work-in-progress. This is a pretty strange situation where we have games that have already shipped using an API that isn't finished and doesn't publicly exist yet! As is, it's AMD and Windows only, but it is designed and intended to become an open standard (like GL) and be implemented by NV/Intel and Linux/Mac.
> If that happened, it *could* really just kill off GL and replace it... But we know that's not going to happen... NV has a huge advantage in that their GL drivers are extremely optimized compared to their competition, and they won't ever want to throw away that lead.

I was a massive cheerleader for Mantle when it was first unveiled - finally an API which did The Right Thing on the PC... but I'm starting to lose interest now because any sort of public SDK, even a 'beta, no-ship' one, hasn't appeared and there is zero noise about it appearing. Last I heard, from someone at AMD, it was meant to ship towards the end of last year... and now we are in February.

AMD stole a march on everyone by a year; GLNext hasn't been announced and D3D12 is locked away in a preview program... but DX12 is coming soon - we basically know it'll drop with Win10 this year - and GLNext will probably exist in some form around SIGGRAPH, as they have been more bullish about updates in recent years.

IF they dropped an SDK next week I might poke at it, see if I can get a triangle on screen or whatever, but I'm starting to feel like they are a day late and a dollar short, and my real attention is shifting to something which will work on more than one bit of hardware.

#5207393 Some programmers actually hate OOP languages? WHAT?!

Posted by phantom on 29 January 2015 - 04:48 AM

I'm in much the same camp as Hodgman: the OOP focus tends to be towards higher-level systems; the functional influence I've picked up over the years informs how I process data without mutating state (either at all or not too much, depending on the system; either way I'm a 'fan' of chained processing of things and passing stuff about where possible); and the DOD aspects (which, tbh, I was doing before it got a name) make me think about how to lay out that data so that my functional processing style can work nicely with the underlying hardware.
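To put a bit of code behind those words (a tiny sketch, names mine): flat arrays for the hardware's sake, pure functions for mine.

#include <stddef.h>

// DOD: positions and velocities live in flat, contiguous arrays
typedef struct Particles {
    float * pos_x;  // length == count
    float * vel_x;  // length == count
    size_t  count;
} Particles;

// functional style: read-only input, results written to a separate
// output array; nothing is mutated behind the caller's back, and the
// linear walk is exactly what the prefetcher wants
void integrate_x(const Particles * in, float * out_pos_x, float dt)
{
    for (size_t i = 0; i < in->count; ++i)
        out_pos_x[i] = in->pos_x[i] + in->vel_x[i] * dt;
}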

There is no One True Way; there is only the data and how you intend to process it.

Beyond that all we have are words to try and explain how we've put things together.

#5206760 OO where do entity type definitions go?

Posted by phantom on 26 January 2015 - 03:21 PM

I'd like to apologise; when I gave you the earlier advice I assumed your data structures would contain the data required for their respective tasks, not be... well... crap.

Apparently we have WILDLY different meanings for the words 'not very big', as that animal struct is FAR from not being big... because it contains... well... crap.

Let me show you what I see;
struct animalTypeRec {
  char craptakingupacacheline[64];      // 64 bytes
  char craptakingupmorecachespace[36];  // 36 bytes
  int bunchOfNotPerFrameStuff[22];      // 88 bytes
  float heySomethingUsefull;            // 4 bytes
  float oppsUnRelatedAagain[2];         // 8 bytes
  float somethingRelatedTo8BytesAgo;    // 4 bytes
};
Aka 204 bytes (which, conveniently, is already a multiple of the 4-byte alignment the ints and floats require, so no tail padding), of which most is crap.
(For the rest of this we are assuming these live in isolation; the problem changes depending on what is around it in an "animal instance", although with the 100 bytes of crap at the start that just means accessing whatever sits before it in memory will also pull in up to 64 bytes of this rubbish.)

Now, I've got no idea wtf your update loop looks like (although I do recall a cache-missing mess a couple of years ago, so I'm going to go with... hideous), but at a guess I'm going to say that 'speed' and 'turnrate' are the two useful frame-by-frame values in that structure.

"Speed" lives at an offset of 188 into the structure; CPUs on the other hand fetch cached aligned 64bytes at a time. As we are 180 bytes in the CPU will naturally 'skip' the first 128 bytes as we don't need them and will drop us 128bytes into the data meaning it will read from 'trinkets' onwards in your original structure definition.

Or as I like to think of it: [60 bytes of crap we don't need][4 bytes of useful].

At which point I'm taking a guess that 'turnrate' will come into play. In this case it is only 8 bytes further on, which lands in the next cache line, so we read in another 64 bytes: the struct's final 12 bytes plus 52 bytes of whatever follows (probably the opening 52 characters of crap in animals[1]).

So in order to update ONE creature we've had to read in 128 bytes, of which 8 bytes are useful.
Only 1 in 16 bytes transferred was data we wanted.

By anyone's metric that is terrible.
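If you want to sanity-check those numbers yourself, offsetof will happily rat the structure out (a quick throwaway program; 64-byte cache lines assumed):

#include <stdio.h>
#include <stddef.h>

struct animalTypeRec {
  char craptakingupacacheline[64];
  char craptakingupmorecachespace[36];
  int bunchOfNotPerFrameStuff[22];
  float heySomethingUsefull;          // 'speed'
  float oppsUnRelatedAagain[2];
  float somethingRelatedTo8BytesAgo;  // 'turnrate'
};

int main(void)
{
  printf("sizeof  : %zu\n", sizeof(struct animalTypeRec));  // 204
  printf("speed   : offset %zu, cache line %zu\n",
         offsetof(struct animalTypeRec, heySomethingUsefull),
         offsetof(struct animalTypeRec, heySomethingUsefull) / 64);  // 188, line 2
  printf("turnrate: offset %zu, cache line %zu\n",
         offsetof(struct animalTypeRec, somethingRelatedTo8BytesAgo),
         offsetof(struct animalTypeRec, somethingRelatedTo8BytesAgo) / 64);  // 200, line 3
  return 0;
}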

Welcome to the world of 'hot-cold analysis' wherein you work out which data you need together and split your data structures accordingly.

Firstly, I'd dump the char[100] array; that should be a char * to somewhere else - it has no place taking up 100 bytes in that structure.
Secondly, 'Number appearing' doesn't seem like per-instance data so fuck it off somewhere else.
Third, arrange things by access if you really must store them in this mess.

At a punt;
struct lessBullShitty {
  int hp, tohit, takehit_wav;
  int atkdmg, attack_wav;
  float speed, turnrate;
  bool can_climb;
  bool avian;

  int rad, AI;  // no idea what these are...
  int xp, meat, bone, trinkets, hides;  // assume these are loot things

  int mesh, texture;
  float y_offset, scale;
  int animations[5]; // being lazy, name them if you wish

  char * name;
};
If nothing else your structure is now roughly 100 bytes smaller (the exact saving depends on pointer size and padding) and more logically arranged.
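And if you want to go the whole hot/cold hog, the per-frame pair gets its own array entirely (a sketch; the heading-chase update is made up, stand in your real loop):

// hot: the per-frame pair; 8 creatures per 64-byte cache line
struct AnimalHot {
  float speed;
  float turnrate;
};

// per-frame update only ever streams through the hot data; the cold
// stuff (name, loot, sounds, mesh...) never pollutes the cache here
void update_headings(struct AnimalHot * hot, float * heading,
                     const float * desired, int count, float dt)
{
  for (int i = 0; i < count; ++i) {
    float delta = desired[i] - heading[i];   // how far off course we are
    float step  = hot[i].turnrate * dt;      // max turn this frame
    if (delta >  step) delta =  step;        // clamp to the turn rate
    if (delta < -step) delta = -step;
    heading[i] += delta;
  }
}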

I could do the same with your other structures but frankly the massive one is just making me feel sick even thinking about it, so trying to untangle that is a case of taking the Fuck This Train to Nope City.

I will say, however: don't make a massive "all the fucking things lol" structure for things which don't require it; comments like 'used by food' next to 'used by missiles' are a big blinking 'warning: this structure is fucked up' sign which can be seen from space.

I seem to recall taking you to task over your data layout and update loop two years ago on this forum, where I called them out for being piles of shit, so the fact we are here again now with the same bullshit is just frankly annoying.

#5205778 How to get a job in Graphic programming?

Posted by phantom on 21 January 2015 - 08:37 AM

> A more important thing is that Graphics Programming is very rarely an Entry-Level position. Most people go into it after working as a gameplay programmer or similar (or a crapton of school), so if you are only applying to Graphics Programming positions, it's very unlikely you will find a job.

This is key in my opinion.
While it isn't impossible to be hired directly into the role, you'll need a very strong portfolio behind you showing that you can cut the mustard. I've not looked at the links, but judging by the comments made so far it would seem you don't have a degree, which is also going to hold you back from getting a job (I know from experience: I applied to a company before I had my result and got nothing; got the result and suddenly the same company was 'desperate to talk' to me. The paper matters).

But, ultimately, unless you can show you are very good you are unlikely to get hired directly into a graphics programming role, and the UK has no shortage of more experienced graphics programmers kicking about right now, which probably doesn't help you :)

While it might not be your passion, I would see about getting into the industry first and getting a year or two under your belt before trying to grab a graphics role, and during that time work on your skills outside of work hours to make that more likely.