• ### Similar Content

• By khawk
LunarG has released new Vulkan SDKs for Windows, Linux, and macOS based on the 1.1.73 header. The new SDK includes:

Hi, I am currently a college student studying to become a game developer. I need to interview current game developers for a class I'm taking. If anyone seeing this could answer the five questions I have provided below, along with your name, current position, and how many years you've been in the game industry, I'd really appreciate any responses.

Name:
Position:
Year in the industry:

What was your starting salary?
How many hours do you work?
What did you learn outside of school that was useful?
How did you get your job and how hard was it to find it?

-Alex Daughters
• By RyRyB
I got into a conversation a while ago with some fellow game artists, and the prospect of signing bonuses was brought up. Out of the group, I was the only one who had negotiated any sort of sign-on bonus or payment above and beyond base compensation. My goal with this article, and possibly others, is to inform and motivate other artists to work on this aspect of their “portfolio” and start treating their career as a business.
What is a Sign-On Bonus?
Quite simply, a sign-on bonus is a sum of money offered to a prospective candidate in order to get them to join. It is quite common in other industries but rarely seen in games unless it is at the executive level. Unfortunately, conversations centered around artist employment usually stop at base compensation, quite literally leaving money on the table.
Why Ask for a Sign-On Bonus?
There are many reasons to ask for a sign-on bonus. In my experience, it has been to compensate for some delta between how much I need vs. how much the company is offering.
For example, a company has offered a candidate a position paying $50k/year. However, research indicates that the candidate requires $60k/year in order to keep in line with their personal financial requirements and long-term goals. Instead of turning down the offer wholesale, they may ask for a $10k sign-on bonus with actionable terms to partially bridge the gap. Whatever the reason may be, the ask needs to be reasonable. Would you like a $100k sign-on bonus? Of course! Should you ask for it? Probably not. A sign-on bonus is a tool to reduce risk, not a tool to help you buy a shiny new sports car.
Aspects to Consider
Before one goes and asks for a huge sum of money, there are some aspects of sign-on bonus negotiations the candidate needs to keep in mind.
- The more experience you have, the more leverage you have to negotiate.
- You must have confidence in your role as an employee.
- You must have done your research. This includes knowing your personal financial goals and how the prospective offer changes, influences or diminishes those goals.
To the first point, the more experience one has, the better. If the candidate is a junior employee (roughly defined as less than 3 years of industry experience) or looking for their first job in the industry, it is highly unlikely that a company will entertain a conversation about sign-on bonuses. Getting into the industry is highly competitive, and there is very little motivation for a company to pay a sign-on bonus to one candidate when there are dozens (or, in some cases, hundreds) of other candidates who will jump at the first offer.
Additionally, the candidate must have confidence in succeeding at the desired role in the company. They have to know that they can handle the day to day responsibilities as well as any extra demands that may come up during production. The company needs to be convinced of their ability to be a team player and, as a result, is willing to put a little extra money down to hire them. In other words, the candidate needs to reduce the company’s risk in hiring them enough that an extra payment or two is negligible.
And finally, they must know where they sit financially and where they want to be in the short-, mid-, and long-term. Having this information at hand is essential to the negotiation process.
The Role Risk Plays in Employment
The interviewing process is a tricky one for all parties involved and it revolves around the idea of risk. Is this candidate low-risk or high-risk? The risk level depends on a number of factors: portfolio quality, experience, soft skills, etc. Were you late for the interview? Your risk to the company just went up. Did you bring additional portfolio materials that were not online? Your risk just went down and you became more hireable.
If a candidate has an offer in hand, then the company sees enough potential to get a return on their investment with as little risk as possible. At this point, the company is confident in the candidate's ability as an employee (i.e. low risk) and is willing to give them money in return for that ability.
So what now? The candidate has gone through the interview process, and the company has offered them a position and base compensation. Unfortunately, the offer falls below expectations. Here is where the knowledge and research of the position and personal financial goals come in. The candidate has to know what their thresholds and limits are. If they ask for $60k/year and the company is offering $50k, how do they ask for the bonus? Once again, it comes down to risk.
Here is the point to remember: risk is not one-sided. The candidate takes on risk by changing companies as well. The candidate has to leverage the sign-on bonus as a way to reduce risk for both parties.
Here is the important part:
A sign-on bonus reduces the company's risk because they are not committing to an increased salary, and bonus payouts can be staggered and have terms attached to them. The sign-on bonus reduces the candidate's risk because it bridges the gap between the offered compensation and their personal financial requirements.
If the sign-on bonus is reasonable and the company has the finances (explained further below), it is a win-win for both parties and hopefully the beginning of a profitable business relationship.
First off, I am not a business accountant, nor have I managed finances for a business. I am sure it is much more complicated than my example below, and there are a lot of considerations to take into account. In my experience, however, base compensation (i.e. salary) will generally fall into a different line-item category on the financial books than a bonus payout. When companies determine how many open spots they have, it is usually done by department, with per-department salary caps.
For a simplified example, say an environment department's total salary cap is $500k/year. They have 9 artists being paid $50k/year, leaving $50k/year remaining for the 10th member of the team. Remember the example I gave earlier asking for $60k/year? The company cannot offer that salary because it breaks the departmental cap. However, since bonuses typically do not affect departmental caps, the company can pull from a different pool of money without increasing their risk by committing to a higher salary.
Sweetening the Deal
Coming right out of the gate and asking for an upfront payment might be too aggressive a play (i.e. high risk for the company). One way around this is to attach terms to the bonus. What does this mean? Take the situation above. A candidate has an offer for $50k/year but would like a bit more. If, through the course of discussing compensation, they get the sense that $10k upfront is too high, they can offer to break up the payments based on terms. For example, a counter to the initial base compensation offer could look like this:
- $50k/year salary
- $5k bonus payout #1 after 30 days of successful employment
- $5k bonus payout #2 after 365 days (or any length of time) of successful employment

In this example, the candidate is guaranteed $55k/year in total compensation for the first 2 years. If they factor in a standard 3% cost-of-living raise, the first 3 years of employment look like this:
- Year 0-1 = $55,000 ($50,000 + $5,000 payout #1)
- Year 1-2 = $56,500 (($50,000 x 1.03) + $5,000 payout #2)
- Year 2-3 = $53,045 ($51,500 x 1.03)
Now it might not be the $60k/year they had in mind, but it is a great compromise to keep both parties comfortable.
If the Company Says Yes
Great news! The company said yes! What now? Personally, I always request at least a full 24 hours to crunch the final numbers. In the past, I've requested up to a week for full consideration. Even if you know you will say yes, doing due diligence on your finances one last time is always good practice. Plug the numbers into a spreadsheet, look at your bills and expenses again, and review the whole offer (base compensation, bonus, time off/sick leave, medical/dental/vision, etc.). Discuss the offer with your significant other as well, and sleep on it: you will see the offer in a different light when you wake up, so make sure you are not rushing into a situation you will regret.
If the Company Says No
If the company says no, then you have a difficult decision to make. Request time to review the offer and crunch the numbers. If it is a lateral move (same position, different company) then you have to ask if the switch is worth it. Only due diligence will offer that insight and you have to give yourself enough time to let those insights arrive. You might find yourself accepting the new position due to other non-financial reasons (which could be a whole separate article!).
Conclusion/Final Thoughts
When it comes to negotiating during the interview process, it is very easy to take what you can get and run. You might fear that by asking for more, you will disqualify yourself from the position. Keep in mind that the offer has already been extended to you, and a company will not rescind it simply because you came back with a counteroffer. Negotiations are expected at this stage, and by putting forth a creative compromise, your first impression is that of someone who conducts themselves in a professional manner.
Also keep in mind that negotiations do not always go well. There are countless factors that influence whether or not someone gets a sign-on bonus. Sometimes it all comes down to being in the right place at the right time. Just make sure you do your due diligence and are ready when the opportunity presents itself.
Hope this helps!

• I have pretty good experience with multi-GPU programming in D3D12. Now, looking at Vulkan, although there are a few similarities, I cannot wrap my head around a few things due to the extremely sparse documentation (typical Khronos...).
In D3D12, you create a resource on GPU0 that is visible to GPU1 by setting the VisibleNodeMask to 00000011, where the last two bits set mean it is visible to GPU0 and GPU1.
In Vulkan, I can see there is the VkBindImageMemoryDeviceGroupInfoKHR struct, which you add to the pNext chain of VkBindImageMemoryInfoKHR before calling vkBindImageMemory2KHR. You also set the device indices, which I assume are the same as the VisibleNodeMask except that instead of a mask it is an array of indices. Up to this point it's fine.
Let's look at a typical SFR scenario: render the left eye using GPU0 and the right eye using GPU1.
You have two textures. pTextureLeft is exclusive to GPU0, and pTextureRight is created on GPU1 but is visible to GPU0 so it can be sampled from GPU0 when we want to draw it to the swapchain. This is in the D3D12 world. How do I map this to Vulkan? Do I just set the device indices for pTextureRight as { 0, 1 }?
Now comes the command buffer submission part that is even more confusing.
There is the struct VkDeviceGroupCommandBufferBeginInfoKHR. It accepts a device mask which I understand is similar to creating a command list with a certain NodeMask in D3D12.
So for GPU1: since I am only rendering to pTextureRight, I need to set the device mask to 2? (00000010)
For GPU0: since I only render to pTextureLeft and finally sample pTextureLeft and pTextureRight to render to the swap chain, I need to set the device mask to 1? (00000001)
The same applies to VkDeviceGroupSubmitInfoKHR?
Now the fun part: it does not work. Both command buffers render to the textures correctly. I verified this by reading back the textures and storing them as PNG. The left texture is sampled correctly in the final composite pass, but I get black in the area where the right texture should appear. Is there something I am missing? Here is a code snippet too:
```cpp
void Init()
{
    RenderTargetInfo info = {};
    info.pDeviceIndices = { 0, 0 };
    CreateRenderTarget(&info, &pTextureLeft);
    // Need to share this on both GPUs
    info.pDeviceIndices = { 0, 1 };
    CreateRenderTarget(&info, &pTextureRight);
}

void DrawEye(CommandBuffer* pCmd, uint32_t eye)
{
    // Do the draw
    // Begin with device mask depending on eye
    pCmd->Open((1 << eye));
    // If eye is 0, we need to do some extra work to composite pTextureRight and pTextureLeft
    if (eye == 0)
    {
        DrawTexture(0, 0, width * 0.5, height, pTextureLeft);
        DrawTexture(width * 0.5, 0, width * 0.5, height, pTextureRight);
    }
    // Submit to the correct GPU
    pQueue->Submit(pCmd, (1 << eye));
}

void Draw()
{
    DrawEye(pRightCmd, 1);
    DrawEye(pLeftCmd, 0);
}
```

# Vulkan Question concerning internal queue organisation

## Recommended Posts

Hello,

my first post here :-)

About half a year ago I started with C++ (did a little C before) and began poking into graphics programming. Right now I am digging through the various Vulkan tutorials.

A probably naive question that arose is:

If I have a device (in my case a GTX 970 clone) that exposes, on each of two GPUs, two queue families, one with 16 queues for graphics, compute, etc. and another with a single transfer queue, do I lose potential performance if I only use 1 of the 16 graphics queues? Or, in other words, are these queues represented by hardware or are they logical entities?

And how is that handled across different vendors? Do Intel and AMD handle this similarly, or would a program have to account for different behavior across different hardware?

Cheers

gb

##### Share on other sites

Yes, this is very vendor specific.

On AMD you can use multiple queues to do async compute (e.g. doing compute shader and shadow map rendering at the same time).

You can also run multiple compute shaders at the same time, but it's also likely that's slower than running them in order in a single queue.

On NV the first option is possible on recent cards, but the second option is not possible, and they will serialize internally (AFAIK, not sure).

On both vendors it makes sense to use a different queue for data transfer, e.g. a streaming system running while rendering.

Not sure about Intel; AFAIK they recommend just using a single queue for everything.

In practice you need a good reason to use multiple queues, test on each HW, and use different settings for different HW.

E.g. for multithreaded command buffer generation you don't need multiple queues, and a queue per thread would be a bad idea.

##### Share on other sites

Thanks. So I understand that a single graphics queue is the best solution.

Yeah, I could split the 2x16 queues freely among graphics, compute, transfer and sparse, and the family with the single queue is transfer-only. Like this, but twice, for the two devices:

```
VkQueueFamilyProperties[0]:
===========================
queueFlags         = GRAPHICS | COMPUTE | TRANSFER | SPARSE
queueCount         = 16
timestampValidBits = 64
minImageTransferGranularity = (1, 1, 1)

VkQueueFamilyProperties[1]:
===========================
queueFlags         = TRANSFER
queueCount         = 1
timestampValidBits = 64
minImageTransferGranularity = (1, 1, 1)
```

I am not far enough along to test anything on different platforms/devices. My "training" PC is a Debian Linux one. But in principle, if one day I build a basic framework of my own, I would of course aim for a solution that is robust and works across different platforms and manufacturers. That would probably be a compromise and not the ideal one for every case.

##### Share on other sites
1 hour ago, Green_Baron said:

Thanks. So I understand that a single graphics queue is the best solution.

Probably. I'm no graphics pipeline expert, but I'm not aware of a case where using two graphics queues can make sense. (Interested if anybody else knows of one.)

It also makes sense to use 1 graphics queue, 1 upload queue, and 1 download queue on each GPU to communicate (although you don't have this option because you have only one separate transfer queue).

And it makes sense to use multiple compute queues on some hardware.

I verified that GCN can perfectly overlap small compute workloads, but the need to use multiple queues, hence multiple command buffers and synchronization between them, destroyed the advantage in my case.

Personally I think the concept of queues is much too high level and totally sucks. It would be great if we could manage individual CUs at a much lower level. The hardware can do it, but we have no access; VK/DX12 is just a start...

##### Share on other sites

I haven't quite figured out the point/idea behind queue families. It's clear that all queues of a given family share hardware. Also, a GPU is allowed a lot of leeway to rearrange commands, both within command buffers and across command buffers within the same queue. So queues from separate queue families are most likely separate pieces of hardware, but are queues from the same family? I've never been able to get a straight answer on this, but my gut feeling is no.

For example, AMD has 3 queue families, so if you create one queue for each family (one for graphics, one for compute, and one for transfer) you can probably get better performance. But is it possible to get significantly better performance with multiple queues from the same queue family? From what I've been able to gather online, probably not.

While I do agree with JoeJ that queues are poorly designed in Vulkan, I don't think direct control of CUs makes sense, IMHO. I think queues should be essentially driver/software entities. So when you create a device, you select how many queues of what capabilities you need, and the driver gives them to you and maps them to hardware entities however it feels is best. Sort of like how on the CPU we create threads and the OS maps them to cores. No notion of queue families. No need to query what queues exist and try to map them to what you want.

TBH, until they clean this part of the spec up, or at least provide some documentation on what they had in mind, I feel like most people are just going to create 1 graphics queue and 1 transfer queue, and ignore the rest.

Edited by Ryan_001

##### Share on other sites

I'm just beginning to understand how this works, and am far from asking "why". For a newcomer, Vulkan is a little steep in the beginning, and some things seem highly theoretical (like graphics without presentation, and so on).

Thanks for the answers, seems like i'm on the right track :-)

##### Share on other sites
On 7/10/2017 at 6:05 PM, Green_Baron said:

do I lose potential performance if I only use 1 of the 16 graphics queues? Or, in other words, are these queues represented by hardware or are they logical entities?

No, you probably only need 1 queue. They're (hopefully) hardware entities. If you want different bits of your GPU work to be able to run in parallel, then you could use different queues, but you probably have no need for that.

For example, if you launch two different apps at the same time, Windows may make sure that each of them is running on a different hardware queue, which could make them more responsive / less likely to get in each other's way.

On 7/11/2017 at 1:41 AM, JoeJ said:

I'm no graphics pipeline expert, but I'm not aware of a case where using two graphics queues can make sense. (Interested if anybody else knows of one.)

In the future, when vendors start making GPUs that can actually run multiple command buffers in parallel with each other, you could use it in the same way that AMD's async compute works.

On 7/11/2017 at 2:57 AM, Ryan_001 said:

I haven't quite figured out the point/idea behind queue families.

To use OOP as an analogy, a family is a class and a queue is an instance (object) of that class.

On 7/11/2017 at 1:41 AM, JoeJ said:

Personally I think the concept of queues is much too high level and totally sucks. It would be great if we could manage individual CUs at a much lower level. The hardware can do it, but we have no access; VK/DX12 is just a start...

Are you sure about that? AFAIK the queues are an abstraction of the GPU's command engine, which receives draws/dispatches and hands them over to an internal fixed function scheduler.

##### Share on other sites
2 hours ago, Hodgman said:

On 10.7.2017 at 5:41 PM, JoeJ said:

Personally I think the concept of queues is much too high level and totally sucks. It would be great if we could manage individual CUs at a much lower level. The hardware can do it, but we have no access; VK/DX12 is just a start...

Are you sure about that? AFAIK the queues are an abstraction of the GPU's command engine, which receives draws/dispatches and hands them over to an internal fixed function scheduler.

I would have nothing against the queue concept, if only it worked.

You can look at the test project I submitted to AMD: https://github.com/JoeJGit/OpenCL_Fiji_Bug_Report/blob/master/async_test_project.rar

...if you are bored. Here is what I found:

You can run 3 small tasks without synchronization perfectly in parallel, yeah, awesome.

As soon as you add sync, which is only possible by using semaphores, the advantage gets lost due to bubbles. (Maybe semaphores sync with the CPU as well? If so, we have a terrible situation here! We need GPU-only sync between queues.)

And here comes the best part: if you try larger workloads, e.g. 3 tasks with runtimes of 0.2ms, 1ms, and 1ms without async, then going async the first and second tasks run in parallel as expected, although 1ms becomes 2ms, so there is no win. But the third task rises to 2ms as well, even though it runs alone with nothing else; its runtime is doubled for nothing.

It seems there is no dynamic work balancing happening here; it looks like the GPU gets divided somehow and refuses to merge back when possible.

2 hours ago, Hodgman said:

AFAIK the queues are an abstraction of the GPU's command engine, which receives draws/dispatches and hands them over to an internal fixed function scheduler.

Guess not; the numbers don't match. A Fiji has 8 ACEs (if that's the correct name), but I see only 4 compute queues (1 gfx/CS + 3 CS). Nobody knows what happens under the hood, but it needs more work, at least in the drivers.

Access to individual CUs should not be necessary, you're right, guys. But I would be willing to tackle it if it were an improvement.

There are two situations where async compute makes sense:

1. Doing compute while doing ALU-light rendering work. (Not yet tried; all my hope goes into this, but not everyone has rendering work.)

2. Parallelizing and synchronizing small compute tasks; extremely important if we look towards more complex algorithms that reduce work instead of brute-forcing everything. And sadly, this fails so far.

Edited by JoeJ

##### Share on other sites

I think part of your disappointment is the assumption that the GPU won't already be running computations async in parallel in the first place, which means that you expect "async compute" to give a huge boost, when you've actually gotten that boost already.

In a regular situation, if you submit two dispatch calls "A" and "B" sequentially, each containing 8 wavefronts, the GPU's timeline will hopefully show it working on both A and B concurrently: B's wavefronts enter the machine while A's are still in flight. (The original timeline diagram is not preserved here.)

If you go and add any kind of resource transition or sync between those two dispatches, then you end up with a timeline where the pipeline drains all of A before any of B starts, creating a bubble. (Diagram not preserved.)

If you simply want the GPU to work on as many compute tasks back to back without any bubbles, then the new tool in Vulkan for optimizing that situation is manual control over barriers. D3D11/GL will use barriers all over the place where they aren't required (which creates these bubbles and disables concurrent processing of multiple dispatch calls), but Vulkan gives you the power to specify exactly when they're required.

Using multiple queues is not required for this optimization. In fact, using multiple queues requires extra barriers and synchronisation, which is the opposite of what you want. As you mention, a good use for a separate compute queue is keeping the CUs fed while a rasterizer-heavy draw command list is being processed.

Also take note that the structure of these timelines makes profiling the time taken by your stages quite difficult. Note that the front-end processes "B" in between "A" and "A - end of pipe"! If you time from when A reaches the front of the pipe to when it reaches the end of the pipe, you'll also be counting some time taken by the "B" command! If you count the time from when "A" enters the pipe until when "B" enters the pipe, then your timings will be much shorter than reality. The more internal parallelism that you're getting out of the GPU, the more incorrect your timings of individual draws/dispatches will be. Remember to keep that in mind when analyzing any timing data that you collect.

##### Share on other sites

Whooo! I already thought the driver could figure out a dependency graph and do things async automatically, but I also thought that this being reality would be wishful thinking.

This is too good to be true, so I'm still not ready to believe it.

(Actually I have too many barriers, but soon I'll be able to push more independent work to the queue, and I'm curious if I'll get a lot of it for free...)

Awesome! Thanks, Hodgman