S1CA


#5129586 How to find video games internships within the UK.

Posted by S1CA on 07 February 2014 - 08:12 AM

I agree with what Tom and ambershee said. 

 

At the studio I work at:

  1. We DO take on a very small number of interns each year. NOTE this is for university placement years, not for short term school "work experience".
  2. It's done in partnership with universities close to our office. The best N students are chosen from CS or games courses.
  3. AFAIK we don't take on design interns, only programming and art.
  4. The most successful candidates have a real passion for games and for making games; they do a lot of stuff in their own time.

As Tom says, expect to apply to a lot of companies to get an internship (all of them, if you have the time! :)). Expect to apply to even more to get a placement as a designer. There is no shortage of good ideas (or of people queuing up with good ideas) in the games industry; what's needed is people who can make those ideas a reality, which is why the pure design jobs tend to go to people with experience. You may find intern roles as a "level builder", but quite often that's considered a branch of environment art, so unless you have a qualification in, say, architecture or town planning, or are good at art, that might be out.

 

The competition for intern and entry-level games positions is fierce - think about it: everyone in the world currently doing a games course at university or college wants those positions, as do a large number of the people on this board and others like it. So as well as a lot of luck, you need to stand out from the thousands of people you're competing with.

 

How many games have you made yourself? Do you enter competitions like Ludum Dare? (etc etc) How many mods/levels have you made for existing games? Have you learnt to do any programming and/or art? Have you showcased your games (and other creations) much on forums such as this one? Have you covered a broad range of game genres in the stuff you've made (when it's a real 'job' you have to produce good work even for genres and IPs you hate)?

 

The people who are getting those intern roles are doing all of those things and more; if you aren't, you should be! Those things are also major points in your favour when you apply for non-intern entry-level roles.

 

If you have stuff to show, then include links when you're contacting companies. Local games industry networking events can be a mixed bag - they're a good place to meet student, indie and hobbyist developers who you can collaborate with on more games. They can be a good place to get advice from people in the industry. If you have a good portfolio of games (and related things) you've made, they can be a good place to show people (to get feedback, and to ask if companies have any intern places available).

 

Game development conferences are good for similar reasons, if you can afford to go, and they have a higher proportion of companies and professional developers attending.

 

The wider group that the company I work for belongs to also has a graduate scheme that might be of interest to you: https://www.ubisoftgroup.com/en-us/careers/graduateprogram/




#5126487 Is DirectX Supported on other consoles besides Xbox?

Posted by S1CA on 26 January 2014 - 07:24 AM

Dreamcast 'supports' DirectX too :) http://msdn.microsoft.com/en-us/library/ms834190.aspx

 

Thing is, because console hardware is fixed, some of the abstractions in the Direct3D you find on PC are unnecessary, so even on the Xboxen (and the Dreamcast) there's D3D plus another API that lets you get at some of the lower-level aspects of the hardware directly - people tend to use a mixture of both.

 

Regarding PS4, this article: http://www.eurogamer.net/articles/digitalfoundry-how-the-crew-was-ported-to-playstation-4 covers a talk I co-presented at the Develop conference last year and has the few details that Sony allowed us to talk about publicly - as frob says, everything else is still covered by NDAs so isn't open for discussion.




#5112971 Is squeezing performance out of a console the same as premature optimization?

Posted by S1CA on 29 November 2013 - 08:00 AM

Premature optimisation of CODE is bad, but code optimisation is only a small part of achieving good performance. Ideally:

 

0. At the start: Ensure the scope of the design fits with the reality of the hardware.

1. Very early on: Ensure the architecture of the engine + game will give the expected level of performance. This comes from an understanding of the hardware, and from experience.

2. Early on: Set some rough budgets. "We're aiming for 60Hz, so that's ~16ms of GPU time to play with: world rendering must happen in around 8ms, post-processing in 2ms, etc." Same for CPU and memory. Have in-engine profiling that shows how well each system is doing against its budget.

3. Throughout: Design and implement systems with performance in mind ("Cool, so algorithmically this is O(blah), but what effect is that having on the cache?")

4. Throughout: Profile systems and check how close they are to the budgets you set. Adjust budgets if necessary. Analyse any systems that are wildly over budget to see why.

5. Later (alpha-ish onward): Profile systems against the budgets. Analyse all that are over. Optimise the ones that are over or borrow budget from a different system if you can't optimise any further.

 

Mindset is hugely important. Performance is a whole-team thing, not just a programmer thing, and that's important to get across to everyone - have budgets for design ("you can have N cars on screen"), art ("main character models must fit within X MB, use no more than Y textures and have no more than Z bones"), audio ("must fit within N MB and use no more than M DSP effects at any one time"), etc.
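
To make that concrete, here's a rough sketch of the kind of per-system budget table I mean. All the names and numbers are made up for illustration; the point is that every system has an agreed number and the profiler checks against it:

// Minimal per-system GPU budget sketch (names and numbers are illustrative).
#include <cstdio>

struct SystemBudget
{
    const char* name;
    float       budgetMs;   // agreed budget for this system
    float       actualMs;   // filled in from GPU timings each frame
};

// Aiming for 60Hz gives ~16ms of GPU time per frame, split between systems.
SystemBudget g_gpuBudgets[] =
{
    { "World rendering", 8.0f, 0.0f },
    { "Shadows",         3.0f, 0.0f },
    { "Post process",    2.0f, 0.0f },
    { "UI",              1.0f, 0.0f },
};

// Called once per frame after timings are read back; flags anything over budget.
void ReportBudgets()
{
    for (const SystemBudget& b : g_gpuBudgets)
    {
        if (b.actualMs > b.budgetMs)
            std::printf("OVER BUDGET: %s %.2fms (budget %.2fms)\n",
                        b.name, b.actualMs, b.budgetMs);
    }
}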

 

[That's from my experience of shipping console, PC and handheld games, including a few that ran at 60Hz]




#5069582 graphics specialization

Posted by S1CA on 13 June 2013 - 05:37 PM

#1, what frob said. In particular, do as much and learn as much as you can about graphics programming before you try to make the jump. For junior roles and for people with no proven graphics or engine programming experience in the games industry, a good demo that shows you have a good understanding* of the core algorithms and techniques is the only way you can differentiate yourself from all the other people who want to transfer from other industries to games.

[* When I say understanding, I mean it - if I'm interviewing you for my team I'll want to discuss the details of the techniques you chose and what the alternatives might be - from the interviewer's side it's easy to spot the difference between "copied from a book but doesn't understand how it works" and "understands"].

 

 

#2, AAA teams and projects are big enough these days that graphics programming and renderer programming are increasingly two separate (but of course closely related) specialist areas. Many big games use graphics engine middleware or already have their own proprietary engines, so there will be more demand for graphics programmers in the future than there will be for graphics engine programmers. Entry level low-level engine programming jobs are also very very rare. I've worked on a few games now that have had people who spent the majority of their time writing shaders...

 

 

#3, what to learn? I think that writing a game or graphics engine, you'd spend as much time bogged down in software design issues and platform APIs as you would learning actual transferable techniques. Use an off-the-shelf engine and skip the low-level stuff unless that's really, really what you want to be doing.

 

Writing a software rasterizer is a good exercise for understanding a lot of the underlying principles. Be careful not to get carried away with 1990s optimisation techniques and methods though; I'd recommend Fabian Giesen's series of articles for an up-to-date look at the pipeline and rasterizers: http://fgiesen.wordpress.com/category/graphics-pipeline/
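
For a flavour of the approach those articles describe, here's a toy edge-function triangle rasterizer. It's a sketch only (no clipping, no sub-pixel precision, counter-clockwise winding assumed), not production code:

// Toy edge-function triangle rasterizer; fills one triangle into a framebuffer.
#include <algorithm>
#include <cstdint>
#include <vector>

struct Point { int x, y; };

// Twice the signed area of (a, b, c): the edge function of point c against edge ab.
static int EdgeFunction(const Point& a, const Point& b, const Point& c)
{
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

void FillTriangle(std::vector<uint32_t>& framebuffer, int width, int height,
                  Point v0, Point v1, Point v2, uint32_t colour)
{
    // Bounding box of the triangle, clamped to the framebuffer.
    int minX = std::max(std::min({v0.x, v1.x, v2.x}), 0);
    int minY = std::max(std::min({v0.y, v1.y, v2.y}), 0);
    int maxX = std::min(std::max({v0.x, v1.x, v2.x}), width - 1);
    int maxY = std::min(std::max({v0.y, v1.y, v2.y}), height - 1);

    for (int y = minY; y <= maxY; ++y)
        for (int x = minX; x <= maxX; ++x)
        {
            Point p{x, y};
            // A pixel is inside when all three edge functions agree in sign.
            if (EdgeFunction(v1, v2, p) >= 0 &&
                EdgeFunction(v2, v0, p) >= 0 &&
                EdgeFunction(v0, v1, p) >= 0)
                framebuffer[y * width + x] = colour;
        }
}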

 

I'd learn the common basics such as lighting (both the illumination/reflectance part and the implementation part), shadows (having an idea what a shadow map is makes a start, but do you know how to fix the aliasing issues?), skinning/animation, particle and other effects (a common entry-level task), and HDR (fake vs real).

 

Having a rough understanding of common visibility algorithms (PVS vs portals, etc.) can be useful too.

 

A good book to read for ideas on topics to learn? Real-Time Rendering.

 

Look at some of your favourite games - can you explain how everything is rendered? If not, start learning, start guessing, start experimenting.

 

Given the higher-than-average volume of data in graphics, a good choice of data structures and algorithms can matter. Making a good choice means understanding some underlying CPU concepts (Big-O isn't everything!).

 

Knowledge of how GPUs work and where the performance bottlenecks are in the pipeline is quite an engine-y thing, but frame rate is the whole team's concern.

 

Maths, maths and more maths. It's useful for a graphics programmer. SIGGRAPH papers tend to be much easier to read when you understand at least some of the Greek bits ;)




#3850346 d3d device enumeration questions

Posted by S1CA on 14 December 2006 - 12:09 PM

1a) The render target format and the display mode format can be different, thus the use of CheckDeviceFormat() to check whether two different formats will work together. For example, a fullscreen application may have a render target (a.k.a. back buffer) format of D3DFMT_A8R8G8B8 (i.e. with an alpha channel), but the only display mode format that would work with it is D3DFMT_X8R8G8B8 (i.e. no alpha), because the display surface format (a.k.a. front buffer) can't have alpha.

1b) Yes.

1c) Your code should only need to call CheckDeviceType() once at enumeration (i.e. usually start up). You could re-enumerate in response to WM_DISPLAYCHANGE whenever the desktop mode has changed.
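
For example, a minimal windowed-mode check might look like this (the D3D9 calls are real, but error handling is trimmed for brevity):

#include <d3d9.h>

bool CanCreateHALDevice(IDirect3D9* d3d, D3DFORMAT backBufferFormat)
{
    D3DDISPLAYMODE mode;
    d3d->GetAdapterDisplayMode(D3DADAPTER_DEFAULT, &mode);

    // Will this display mode format work with this render target format?
    HRESULT hr = d3d->CheckDeviceType(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
                                      mode.Format, backBufferFormat,
                                      TRUE /* windowed */);
    return SUCCEEDED(hr);
}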


2a) Those are the most common formats you'll find, yes.

2b) Is the list 'complete'? No (take a look at the Buffer Formats table in the documentation for D3DFORMAT to see which you've missed).

2c) Is the list 'correct'? Well, the preference order, and which should/shouldn't be in the table depends on what the priorities and requirements of your application are. If your application needs a stencil buffer, then you only have a choice of 4 formats. If your application needs 32-bits of depth precision, you only have a choice of 2 formats (really 2 flavours of the same format). Whether you need to include any of the formats you've missed off depends on whether you need the special differences/extensions they offer (e.g. being lockable). The 'special' flavours are less supported than the vanilla ones.
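
To illustrate, picking a depth/stencil format usually boils down to walking a preference list with CheckDeviceFormat() and CheckDepthStencilMatch(). The order below is just an example; yours depends on your application's requirements:

#include <d3d9.h>

D3DFORMAT PickDepthStencilFormat(IDirect3D9* d3d, D3DFORMAT adapterFormat,
                                 D3DFORMAT renderTargetFormat)
{
    // Example preference order: stencil first, then plain depth formats.
    const D3DFORMAT preferred[] = { D3DFMT_D24S8, D3DFMT_D24X4S4,
                                    D3DFMT_D24X8, D3DFMT_D16 };

    for (D3DFORMAT fmt : preferred)
    {
        if (SUCCEEDED(d3d->CheckDeviceFormat(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
                                             adapterFormat, D3DUSAGE_DEPTHSTENCIL,
                                             D3DRTYPE_SURFACE, fmt)) &&
            SUCCEEDED(d3d->CheckDepthStencilMatch(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
                                                  adapterFormat, renderTargetFormat,
                                                  fmt)))
            return fmt;
    }
    return D3DFMT_UNKNOWN;   // nothing suitable found
}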


3) I'm sure I already answered this in one of the replies to your original thread. They could probably be used to do similar jobs, though Microsoft's own code in the DXUTEnum.cpp in the DirectX SDK just uses CheckDeviceType() for checking whether a screen mode format and render target format will work together.


4) No. It has its uses in fullscreen too, for example (as the DirectX SDK docs mention), it can be used to check which conversions are valid for StretchRect(). Another example would be if your application wanted to query for hardware accelerated YUV->RGB colour conversion on Present() for video playback (e.g. display a frame of raw YUV into the render target, then Present() to have it appear and be converted at the same time). Usually though, unless your application has special conversion requirements, you don't need the extra work.
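
The query itself is a one-liner; something like this (the call is real, the wrapper around it is just for illustration):

#include <d3d9.h>

bool SupportsYUVConversion(IDirect3D9* d3d)
{
    // Can the hardware colour-convert YUY2 -> X8R8G8B8 (e.g. via StretchRect())?
    return SUCCEEDED(d3d->CheckDeviceFormatConversion(D3DADAPTER_DEFAULT,
                                                      D3DDEVTYPE_HAL,
                                                      D3DFMT_YUY2,
                                                      D3DFMT_X8R8G8B8));
}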


5) No.


6) Another one I thought I'd covered in a previous post. Yep. Your CheckDeviceType() calls would give you an error first anyway, telling you that a D3DDEVTYPE_HAL device can't be created with an 8- or 24-bit DisplayFormat (though some older Matrox cards do support 24-bit).


7) Yes. The DirectX SDK documentation describes it as "Retrieves the current display mode of the adapter." Note the caveat in the docs too: if the display mode is set to one of the more 'exotic' formats, the format returned can be wrong. It's a moot point though: if you're in windowed mode, the desktop won't ever be in any of those formats anyway (since the Windows display settings dialog doesn't expose them). So as long as you only call GetAdapterDisplayMode() for windowed-mode use, or before you create the device, you'll not have any problems.


8a) If your choice is D3DCREATE_HARDWARE_VERTEXPROCESSING or D3DCREATE_SOFTWARE_VERTEXPROCESSING, then as long as the logic you use to decide between them reflects what your application actually uses (e.g. checking the minimum shader version, etc.), the only other thing to remember is to specify D3DUSAGE_SOFTWAREPROCESSING when creating vertex/index buffers with SWVP.
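
A sketch of the kind of decision logic I mean (this assumes the app needs vs_1_1; adjust the checks for your own requirements):

#include <d3d9.h>

DWORD ChooseVertexProcessing(IDirect3D9* d3d)
{
    D3DCAPS9 caps;
    d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps);

    const bool hwTnL     = (caps.DevCaps & D3DDEVCAPS_HWTRANSFORMANDLIGHT) != 0;
    const bool hwShaders = caps.VertexShaderVersion >= D3DVS_VERSION(1, 1);

    return (hwTnL && hwShaders) ? D3DCREATE_HARDWARE_VERTEXPROCESSING
                                : D3DCREATE_SOFTWARE_VERTEXPROCESSING;
}

(And remember: buffers used with SWVP need D3DUSAGE_SOFTWAREPROCESSING at creation time, as above.)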

8b) If your choice includes D3DCREATE_MIXED_VERTEXPROCESSING, then that's a whole new (large) can of worms that affects the whole architecture of your engine... Nowadays D3DCREATE_MIXED_VERTEXPROCESSING is best left alone; it only ever made sense for apps that ran on 1st-generation T&L cards but also wanted to use vertex shaders, and the issues of trying to have a 'mixed' mode renderer aren't worth it any more IMO.

8c) Vertex processing type does bring up performance (incl. a few tricks) and scalability questions, but that's something for you to decide for your own app...


#3475528 Cg/HLSL - Why multiply the normal by this matrix?

Posted by S1CA on 12 February 2006 - 10:38 AM

1a) A normal vector in the traditional sense is perpendicular to the surface it represents. If the matrix you're using to transform that normal vector contains non-uniform scaling or shears/skews, then the resulting vector will no longer be perpendicular to the surface.
A lighting normal that isn't perpendicular to the surface it represents will produce incorrect lighting (usually an incorrect direction, too bright or too dark).

1b) An aside: vertex normals aren't necessarily perpendicular to faces since they represent the average of face normals of faces using that vertex; those face normals are perpendicular to the face though.

1c) If the transformation matrix being used to transform a normal (or similar vector) has (or could have, e.g. from an unknown/external source) non-uniform scaling/shears/skews, then you should transform it by the inverse transpose.


2a) If a matrix is made up only from rotation(s), then the inverse of that matrix is exactly the same as the transpose of that matrix.

2b) The inverse of an inverse or the transpose of a transpose is the original matrix you started out with. So the "inverse transpose" of a matrix containing only rotations is the same as the original matrix [smile].

2c) As Jack said, because normal vectors only represent direction rather than position, translation shouldn't be applied to them. Translation lives in its own part of the matrix, so if your matrix also contains translation, ignoring it is simply a matter of only using the rotation part of the matrix (the top left 3x3 portion - in HLSL, simply cast the matrix to a float3x3).

2d) If the matrix is made only from rotation(s) and **uniform** scaling, then normalising either the rows (or columns, depending on how you think about matrices) of the top-left 3x3 part of the matrix, or the transformed normal vector itself, should sort out the scaling and let you treat it just like a rotation matrix.
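
Putting 2c and 2d together in code, something like this (row-vector convention as in D3D; a sketch, not a library function):

#include <cmath>

struct Vec3 { float x, y, z; };

// Transform a normal by the top-left 3x3 of a row-major 4x4 matrix (translation
// ignored), then renormalise. Valid for rotation + uniform scale only; matrices
// with non-uniform scale or skew need the inverse transpose instead.
Vec3 TransformNormal(const float m[4][4], Vec3 n)
{
    Vec3 r {
        n.x * m[0][0] + n.y * m[1][0] + n.z * m[2][0],
        n.x * m[0][1] + n.y * m[1][1] + n.z * m[2][1],
        n.x * m[0][2] + n.y * m[1][2] + n.z * m[2][2],
    };
    float len = std::sqrt(r.x * r.x + r.y * r.y + r.z * r.z);
    return { r.x / len, r.y / len, r.z / len };
}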


3) Avoiding having to store the inverse transpose as well as the position transform is a common reason for not allowing your artists to use non-uniform scaling or skewing, particularly for skinned meshes where it doubles the bone matrix count (usually unnecessarily).


4) If you want a more complete breakdown of the mathematical reasoning of the above, take a read of Ken Turkowski's "Transformations of surface normal vectors" paper:
http://www.worldserver.com/turk/computergraphics/NormalTransformations.pdf




#2631014 Getting rid of OutputDebugString()

Posted by S1CA on 23 August 2004 - 02:18 PM

OutputDebugString() also still sends output to the debugger in release build configurations. I prefer to use the _RPTn() macros that live in crtdbg.h:

1) They automatically dissolve to nothing in release builds.

2) They take sprintf()-style format strings, so if you want to output the value of a non-string variable you don't need to create a temp buffer and sprintf() into it.

3) There are options for the type of report (just to the debug stream, an assert error box, etc.).

The only real downside is you have to use a different macro depending on the number of parameters you want to output:

_RPT0(_CRT_WARN, "Hello World\n");
_RPT1(_CRT_WARN, "moo is: %d\n", moo);
...
_RPT4(_CRT_WARN, "vars: %d, %s, %f, %p\n", a, b, c, d);
...

Regarding the original question - the only other real way to change what OutputDebugString() compiles to is to do evil things such as #define'ing a replacement.
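
For completeness, the 'evil #define' version would look something like this (a sketch; note that windows.h already defines OutputDebugString as a macro mapping to the A/W variant, so it has to be #undef'd first):

#include <windows.h>

// Compile OutputDebugString() calls away in non-debug builds (matching the
// CRT's _DEBUG convention used by the _RPTn macros).
#ifndef _DEBUG
    #undef OutputDebugString
    #define OutputDebugString(s) ((void)0)
#endif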

Personally I'd just run the search and replace (or similar - there are various tools out there which can scan and edit in multiple source files). Tedious, but not _too_ bad IMO.

