About Thinias

  1. Thinias

    Additive lighting problems

    What you're asking about is just one piece of a process collectively known as "tone mapping".  Even when you are working with an HDR pipeline, at some point it has to get mapped back down to LDR in order to actually be presented.  A simple implementation of this process computes all of your lighting in linear space, defines a "maximum intensity" over the scene, and then scales the lighting in the scene back down based on that "maximum intensity".  While the lighting computations must be linear for this to work correctly, the subsequent scaling does not necessarily need to be linear.  The "maximum intensity" you use can either be hard-coded, passed down as a uniform, or even generated during pixel/compute execution and stored for the scaling to occur in a post-processing pass.  There are other (and more complicated) solutions to this problem.  Try searching for some tone mapping algorithms to get ideas.
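To make that concrete, here is a minimal sketch of the scale-by-maximum-intensity idea.  Everything here is illustrative (the struct name, and the particular non-linear curve, which is the "extended Reinhard" operator) rather than taken from any specific engine:

```cpp
#include <algorithm>
#include <cassert>

// Sketch: scale a linear HDR luminance value back into [0, 1] using a
// per-scene maximum intensity.  The maximum could be hard-coded, passed
// as a uniform, or computed in a prior pass and read back here.
struct ToneMapper {
    float maxIntensity;

    // Linear scale: divide by the scene maximum and saturate.
    float linearScale(float hdr) const {
        return std::min(hdr / maxIntensity, 1.0f);
    }

    // Non-linear (extended Reinhard) scale, constructed so that
    // hdr == maxIntensity maps exactly to 1.0.
    float reinhardScale(float hdr) const {
        float w = maxIntensity;
        return hdr * (1.0f + hdr / (w * w)) / (1.0f + hdr);
    }
};
```

The non-linear variant compresses highlights more gently than a straight divide, which is usually why people reach for curves like this once the linear version starts washing out midtones.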
  2. Thinias

    Ecs Inheritance Problem (C++)

        You seem to be right :) - I can't find any documentation articulating where this misconception came from on my end.  I suspect my own personal training to avoid RTTI caused the misconception.  You're probably right, and this data is probably just being memory mapped.  If it is infrequently accessed, you'll still end up paying for the disk cost of cycling the page back into memory... but if it is frequently accessed, you probably pay for it once and move on.  Since I never frequently access RTTI data... I always end up paging the request in, and hitting disk in the process.   Thanks for helping me to improve my understanding of the subject :).
  3. Thinias

    Ecs Inheritance Problem (C++)

    There are many reasons to avoid dynamic_cast, and I would (almost) never recommend sprinkling it throughout runtime game code.  The most common reasons look something like the following:

      • dynamic_cast relies on RTTI... therefore, in order to use dynamic_cast, you must enable RTTI.  This will significantly bloat your application size, which can be a serious consideration on some media distribution channels (hard disc and OTA download limits for mobile come to mind).
      • dynamic_cast relies on RTTI (noticing a trend?)... the data for which will primarily be stored on disk at runtime.  This means most dynamic_cast operations will necessarily contain a fetch from disk in order to retrieve that data.  Not only does this make the dynamic_cast itself slow, but it can seriously compromise your ability to exercise hardware caching mechanisms for other systems, like a streaming implementation.
      • dynamic_cast relies on RTTI ( :))... therefore, in order to use dynamic_cast, you must enable RTTI.  If you enable it, it will get used.  In most cases, your junior engineers will be the ones using it, without even realizing that they are doing it.  It will prove much more difficult to remove dependencies on RTTI from existing code than it would have been to simply write code that didn't depend on RTTI in the first place.

    The concepts presented by Juliean are also basically the same way that I solved this problem (though I explicitly avoided creating virtual tables on my Component objects), and I don't consider it particularly unclean.  My implementation makes an additional assertion that I have not yet seen articulated here - an Entity can only ever have a single instance of a given Component type.  For example, my entities can have both a ModelComponent and an AnimationComponent, but a single entity can't have two AnimationComponents.
    I also make use of hash tables rather than maps to get average-case constant-time lookups, under the assumption that I will be executing lookups more often than I will be executing inserts or deletes.   I would actually shy away from maintaining this kind of data in globally managed lists.  Those global lists will cost you headaches, cycles, or both the first time someone tries to create, delete, or access a component on a background thread.
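For illustration, a stripped-down sketch of that kind of RTTI-free, one-component-per-type lookup might look like the following.  All names here are invented; a static counter stands in for typeid, and shared_ptr is used so that no virtual destructor (and thus no vtable) is required on Component:

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <unordered_map>

// Each component type gets a small integer id from a static counter
// instead of RTTI.
struct Component { };

inline std::size_t NextTypeId() {
    static std::size_t counter = 0;
    return counter++;
}

template <typename T>
std::size_t TypeId() {
    static const std::size_t id = NextTypeId();
    return id;
}

class Entity {
public:
    // Returns false if a component of this type already exists,
    // enforcing the one-instance-per-type rule.  shared_ptr captures
    // T's deleter at construction, so destruction is correct without
    // a virtual destructor on Component.
    template <typename T>
    bool Add(std::shared_ptr<T> component) {
        return components.emplace(TypeId<T>(), std::move(component)).second;
    }

    // Average-case O(1) hash-table lookup; static_cast is safe because
    // the table key uniquely identifies the stored type.
    template <typename T>
    T* Get() {
        auto it = components.find(TypeId<T>());
        return it == components.end() ? nullptr
                                      : static_cast<T*>(it->second.get());
    }

private:
    std::unordered_map<std::size_t, std::shared_ptr<Component>> components;
};

// Example component types (hypothetical).
struct ModelComponent : Component { int meshId = 7; };
struct AnimationComponent : Component { };
```

Usage follows the constraint described above: `entity.Add(std::make_shared<ModelComponent>())` succeeds once, and a second Add of the same type fails.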
  4. Thinias

    Gldrawpixels To Stencil Buffer

    I am not familiar with OpenCV, but I am familiar with OpenGL.  At a glance, I could not see anything wrong with the gl calls you provided, so I set up a dummy project to prove it.  Under OpenGL 4.5, using glew-2.0.0 and glfw-3.2 as a wrapper, I was successful in this proof.  My test loads the first image you provided as Stencil.bmp, writes it to a bound buffer, reads it back from that buffer, and writes the result to disk.  I set up a few dummy objects in order to prove I could use your gl code exactly as-is, without modifications.  I have included my source (I was not allowed to attach a .cpp file, so I included it just as a code-block below), which produces the expected output, StencilGPU.bmp.   These results indicate that your problem lies somewhere in the OpenCV object, not in your OpenGL calls.  Perhaps the mat is storing the image in something akin to compression blocks (which are not sequential), and thus you end up writing a bunch of nonsense to the GPU after the end of the first block?  You might consider spending some time validating that the buffer you are reading from is both sequential and starts from the first scanline (positive stride).  You would also need to validate that the buffer you are writing to is both sequential and starts from the first scanline (positive stride).  It is also possible that just removing OpenCV altogether from the equation in this section of your code would be simpler.   Good luck!
    //Include GLEW
    #include <GL/glew.h>
    //Include GLFW
    #include <GLFW/glfw3.h>
    //Include the standard C++ headers
    #include <stdio.h>
    #include <stdlib.h>
    #include <Windows.h>
    #include <assert.h>
    #include <gdiplus.h>

    using namespace Gdiplus;

    typedef struct RadDim_Dummy_t { int x, y; } RadDim_Dummy;
    typedef struct Mat_Dummy_t { void* data; } Mat_Dummy;

    int GetEncoderClsid(const WCHAR* format, CLSID* pClsid)
    {
        UINT num = 0;  // number of image encoders
        UINT size = 0; // size of the image encoder array in bytes
        ImageCodecInfo* pImageCodecInfo = NULL;

        GetImageEncodersSize(&num, &size);
        if (size == 0)
            return -1; // Failure

        pImageCodecInfo = (ImageCodecInfo*)(malloc(size));
        if (pImageCodecInfo == NULL)
            return -1; // Failure

        GetImageEncoders(num, size, pImageCodecInfo);

        for (UINT j = 0; j < num; ++j)
        {
            if (wcscmp(pImageCodecInfo[j].MimeType, format) == 0)
            {
                *pClsid = pImageCodecInfo[j].Clsid;
                free(pImageCodecInfo);
                return j; // Success
            }
        }

        free(pImageCodecInfo);
        return -1; // Failure
    }

    void SetupStencilBuffer()
    {
        // load the file copied from gamedev into a gdi+ bitmap
        HBITMAP hBitmap = (HBITMAP)LoadImage(NULL, L"Stencil.bmp", IMAGE_BITMAP, 0, 0,
            LR_CREATEDIBSECTION | LR_DEFAULTSIZE | LR_LOADFROMFILE);
        BITMAP bmp;
        GetObject(hBitmap, sizeof(BITMAP), &bmp);
        Bitmap* image = new Bitmap(hBitmap, NULL);
        assert(image);
        assert(image->GetPixelFormat() == PixelFormat8bppIndexed);
        assert(GetPixelFormatSize(image->GetPixelFormat()) == 8);
        const int width = image->GetWidth(), height = image->GetHeight(), size = width * height;

        // grab the buffer itself from the gdi+ bitmap
        BitmapData data;
        Rect rect = Rect(0, 0, width, height);
        Status result = image->LockBits(&rect, ImageLockModeRead | ImageLockModeWrite,
            image->GetPixelFormat(), &data);
        assert(result == Ok);

        // flip the buffer if the scanlines are inverted.
        if (data.Stride < 0)
        {
            image->UnlockBits(&data);
            image->RotateFlip(RotateNoneFlipY);
            Status result = image->LockBits(&rect, ImageLockModeRead | ImageLockModeWrite,
                image->GetPixelFormat(), &data);
            assert(result == Ok);
        }
        assert(data.Stride > 0);

        // allocate and initialize a buffer to copy into from glReadPixels
        UINT8* buffer = new UINT8[size];
        memset(buffer, 0, size);

        // setup some dummy variables so i can copy-paste code without any modifications from gamedev
        GLuint _renderStencilBuffer;
        RadDim_Dummy _radDim = { width, height };
        Mat_Dummy stencilImg = { data.Scan0 };
        Mat_Dummy stencilImgRead = { buffer };

        // start sample code from gamedev
        {
            glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
            glPixelStorei(GL_UNPACK_ROW_LENGTH, _radDim.x);
            glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);
            glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
            glPixelStorei(GL_PACK_ALIGNMENT, 1);
            glPixelStorei(GL_PACK_ROW_LENGTH, _radDim.x);
            glPixelStorei(GL_PACK_SKIP_ROWS, 0);
            glPixelStorei(GL_PACK_SKIP_PIXELS, 0);

            glGenRenderbuffers(1, &_renderStencilBuffer);
            glBindRenderbuffer(GL_RENDERBUFFER, _renderStencilBuffer);
            glRenderbufferStorage(GL_RENDERBUFFER, GL_STENCIL_INDEX, _radDim.x, _radDim.y);

            glDrawPixels(_radDim.x, _radDim.y, GL_STENCIL_INDEX, GL_UNSIGNED_BYTE, stencilImg.data);
            // ...
            glReadPixels(0, 0, _radDim.x, _radDim.y, GL_STENCIL_INDEX, GL_UNSIGNED_BYTE, stencilImgRead.data);
        }
        // end sample code from gamedev

        // copy the buffer back into the Bitmap, and then use the Bitmap to write it back to disk.
        memcpy(data.Scan0, buffer, size);
        image->UnlockBits(&data);

        CLSID clsId;
        int retVal = GetEncoderClsid(L"image/bmp", &clsId);
        image->Save(L"StencilGPU.bmp", &clsId, NULL);

        delete image;
        delete[] buffer; // new[] must be paired with delete[]
    }

    //Define an error callback
    static void error_callback(int error, const char* description)
    {
        fputs(description, stderr);
        _fgetchar();
    }

    //Define the key input callback
    static void key_callback(GLFWwindow* window, int key, int scancode, int action, int mods)
    {
        if (key == GLFW_KEY_ESCAPE && action == GLFW_PRESS)
            glfwSetWindowShouldClose(window, GL_TRUE);
    }

    int main(void)
    {
        //Set the error callback
        glfwSetErrorCallback(error_callback);

        //Initialize GLFW
        if (!glfwInit())
        {
            exit(EXIT_FAILURE);
        }

        //Set the GLFW window creation hints - these are optional
        //glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3); //Request a specific OpenGL version
        //glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3); //Request a specific OpenGL version
        //glfwWindowHint(GLFW_SAMPLES, 4); //Request 4x antialiasing
        //glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

        //Declare a window object
        GLFWwindow* window;

        //Create a window and create its OpenGL context
        window = glfwCreateWindow(640, 480, "Test Window", NULL, NULL);

        //If the window couldn't be created
        if (!window)
        {
            fprintf(stderr, "Failed to open GLFW window.\n");
            glfwTerminate();
            exit(EXIT_FAILURE);
        }

        //This function makes the context of the specified window current on the calling thread.
        glfwMakeContextCurrent(window);

        //Sets the key callback
        glfwSetKeyCallback(window, key_callback);

        //Initialize GLEW
        GLenum err = glewInit();

        //If GLEW hasn't initialized
        if (err != GLEW_OK)
        {
            fprintf(stderr, "Error: %s\n", glewGetErrorString(err));
            return -1;
        }

        //Set a background color
        glClearColor(0.0f, 0.0f, 1.0f, 0.0f);

        GdiplusStartupInput gdiplusStartupInput;
        ULONG_PTR gdiplusToken;
        GdiplusStartup(&gdiplusToken, &gdiplusStartupInput, NULL);

        SetupStencilBuffer();

        //Main Loop
        do
        {
            //Clear color buffer
            glClear(GL_COLOR_BUFFER_BIT);

            //Swap buffers
            glfwSwapBuffers(window);
            //Get and organize events, like keyboard and mouse input, window resizing, etc...
            glfwPollEvents();
        } //Check if the ESC key had been pressed or if the window had been closed
        while (!glfwWindowShouldClose(window));

        //Close OpenGL window and terminate GLFW
        glfwDestroyWindow(window);
        //Finalize and clean up GLFW
        glfwTerminate();
        GdiplusShutdown(gdiplusToken);

        exit(EXIT_SUCCESS);
    }
  5. Thinias

    Separate 32 Bit And 64 Bit Versions

    If your game doesn't actually run in 32-bit mode, shipping a 32-bit client doesn't make any sense.  It's just going to generate negative reviews, refund requests, and cost you new customers/goodwill with existing customers.  If you aren't going to fix your memory usage, just don't ship an x86 client.   I believe you indicated you are targeting Windows only.  If so, an easy solution might be to enable PAE.  However, that's a band-aid.     I echo this sentiment.  Unload your geo, and leave only your simulation in memory.  If you can't do that, separate your render targets from your simulation targets, and then do it.   For expensive simulations that can't be maintained in real-time, it's pretty common to execute interleaved updates; i.e. even frames only update half of the world, and odd frames update the other half.  You get half as many updates, so use twice as big a time-slice per update (and obviously you could split that into 3, 4, n... groups as needed).  It's also a common practice for exceptionally large simulations to save off parts of the simulation to disk which are not currently interactive (in your case, not visible on the screen), and stream those in periodically for their update.  You may only get one update per second on a given section of the simulation, but the end user isn't going to notice that for things that aren't currently interactive anyway.
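A minimal sketch of the interleaved-update idea might look like the following.  The structure and names are invented for illustration; the point is that each group updates every Nth frame with an N-times-larger time slice, so every object still integrates the same total simulated time:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Trivial stand-in for a simulated object.
struct Body { double position = 0.0, velocity = 1.0; };

class InterleavedSimulation {
public:
    InterleavedSimulation(std::size_t bodyCount, std::size_t groupCount)
        : bodies(bodyCount), groups(groupCount) {}

    void Tick(double frameDt) {
        // Only bodies in the current group update, but with a time slice
        // covering all of the frames they sat out.
        const double sliceDt = frameDt * static_cast<double>(groups);
        for (std::size_t i = frame % groups; i < bodies.size(); i += groups)
            bodies[i].position += bodies[i].velocity * sliceDt;
        ++frame;
    }

    std::vector<Body> bodies;

private:
    std::size_t groups;
    std::size_t frame = 0;
};
```

With two groups, after two ticks of dt = 1.0 every body has advanced by the same 2.0 units it would have seen under per-frame updates, at half the per-frame cost.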
  6. Thinias

    Interpolating 2D data

    Resurrecting a zombie for a moment...     Wikipedia is actually an almost universally terrible resource for anything math or science related.  Frequently, the meaning of topics on which I consider myself an expert is completely lost on me when I read the jumbled mess that is the Wikipedia article on that topic.  The referenced article on Kriging is no exception to that rule of thumb, and I would suggest you search for alternative sources of information about the process before abandoning it.  I was able to find one without too much difficulty here, which breaks down the concepts involved in much less abstract terms.  As an example of what I mean, let's look purely at the definition of Kriging from these two sources.   Wikipedia: ArcGIS:   The Wikipedia article is likely more precise... but to someone who doesn't already intimately understand all of the terminology (and likely even to many people who *do* already understand the terminology), it's also complete gibberish.  When you have questions about a topic, Wikipedia is usually a great way to find out what some of your options are; however, I would strongly suggest moving away from it before attempting to actually understand or implement against those options.
  7. The above suggestion is generic and universally applicable.  It is also not always the best answer.  Depending on your data, the graphics API in use (and your level of direct access to that API in the engine you are using), and your program constraints... sometimes a concept similar to the OpenGL selection buffer can be superior.  Usually the criteria fall along something like the following lines:

      • Do you expect to encounter transparency in the target region?  Alpha blending can break the selection buffer, and you should probably stick with ray casting in this case.
      • How many physics polygons will you have to ray-cast against?  Polygon count can increase due to increased geometry count, or due to increased physics model complexity.  More physics polygons = linearly more expensive ray casting.
      • As a corollary, what level of precision do you require?  The greater the desired precision, the more complex physics models need to be, and the more expensive ray-casting becomes.
      • How much screen resolution do you care about being able to detect against?  Greater resolution = linearly more expensive selection buffer generation.
      • Are you CPU bound or GPU bound?  If you are already CPU bound, offloading more work to the GPU (switching from ray-casting to selection buffers) can improve your framerate.  If you are already GPU bound, offloading more work to the CPU (switching from selection buffers to ray-casting) can improve your framerate.

    The basic idea behind a selection buffer is to allocate a buffer in which you will store identifiers to rendered geometry, rather than colors.  This buffer does not need to be the same resolution as the rendered scene buffer.  You then render the scene twice (again, not necessarily at the same resolution - if you only care about a small square in the middle of the screen, only render that small square); during the second pass, you store identifiers to the top-level geometry, rather than colors.  Since you aren't actually storing color data, no pixel shaders need to execute.  Depending on how much of the screen you actually render, the vast majority of your geometry will likely be culled before it even reaches the vertex shader.  When you are done rendering the scene to the selection buffer, all you have to do is query the selection buffer at whatever pixel or pixels you care about to get back the identifier of the object in the foreground at that position.  This has the advantage of being a relatively inexpensive non-approximation; the selection buffer will match the rendered geometry exactly.  Accomplishing the same with ray-casting would be prohibitively expensive for a scene of reasonable complexity.   This article provides some technical details on how to get started with OpenGL selection buffers, including a basic demonstration of how they might imitate ray-casting.  Keep in mind that this concept does not exist in all graphics APIs, and some commonly used engines do not provide enough low-level access to either geometric data or the renderer to execute this functionality.
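As a rough CPU-side illustration of the concept (a software mock-up, not actual OpenGL - a real version would render ids into an integer render target and read the pixel back), each "pixel" stores the identifier of the front-most object drawn there, resolved with an ordinary depth test:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

struct SelectionBuffer {
    int width, height;
    std::vector<std::uint32_t> ids;  // 0 = nothing drawn at this pixel
    std::vector<float> depth;

    SelectionBuffer(int w, int h)
        : width(w), height(h), ids(w * h, 0), depth(w * h, 1.0f) {}

    // "Rasterize" an axis-aligned rect of geometry owned by `id` at depth z.
    void DrawRect(std::uint32_t id, int x0, int y0, int x1, int y1, float z) {
        for (int y = y0; y < y1; ++y)
            for (int x = x0; x < x1; ++x) {
                int i = y * width + x;
                if (z < depth[i]) { depth[i] = z; ids[i] = id; } // depth test
            }
    }

    // Query: which object is in the foreground at this pixel?
    std::uint32_t Pick(int x, int y) const { return ids[y * width + x]; }
};
```

The Pick call is the whole payoff: an exact, per-pixel answer to "what did I click on", with no ray-casting against physics geometry at all.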
  8. Thinias

    Are there too many Unity Developers?

    Large portions of this thread unfortunately devolved into a debate about whether you should build games or build engines.  I'm going to try and take a more direct stab at answering your question.   You can absolutely be successful with Unity, CryEngine, or any other technology stack that you want to use.  Before talking specifically about your Unity experience, I would echo the sentiment that overspecializing in a currently vogue technology is dangerous; nothing stays current forever.  With that said...     This is the wrong way to look at the problem.  With personal projects, your goal should never be to just spit out a game; your goal should be to become a better developer.  If developing your game was exceptionally easy, you probably finished it pretty quickly.  In that case... spend some more time just playing with the technology.   Did you evaluate UI tools?  Could you apply multi-threading to improve your user experience?  How does your application handle memory pressure... and can you do anything to reduce or eliminate the probable hitches associated with garbage collection?  What do your load times look like?  How would you profile your load times, and determine options for reducing them?  If you had multiple content creators working within this infrastructure, what steps could you take to help them work effectively together within Unity's monolithic "scene" model?   These are just a few of the places in which Unity has not historically been spectacular.  Being able to say "I built a game in Unity" is great.  Being able to talk about what problems you encountered and how you overcame them is much, much better.  If you encountered no problems, you probably didn't learn anything either.  If you didn't learn anything, why should a potential employer care that you built it?   Again, don't think about personal projects in terms of spitting out a result.  Think about them in terms of bettering yourself.  
With every new project, strive to learn at least one thing that you didn't already understand.  The rest will take care of itself... regardless of what tech stack you were using at the time.
  9. I think this question seems to make a fairly alarming assumption.  You need to acknowledge that graduating from a university program in "Physics programming" in no way qualifies you as a specialist in Physics programming.  In fact, it probably puts you on par with absolutely any engineer who's been in the video games industry for less than a year... but only in Physics programming!  Those people who have been in the industry probably still have an edge on you in virtually every other discipline :(.  You need to understand that once you enter the marketplace, you are competing with people who literally spend 8-10 hours a day, 5 days a week, 50 weeks a year writing code and solving the exact problems games need solved.  If you want to become a specialist in some area of video game development, you need to think about your education as only one notch in your belt; the only way you're going to catch up to the actual industry specialists is by actually *making some games!*   I'm going to guess that maybe you've heard that breaking into the video games industry is hard, and you're trying to increase your odds of success.  That's a worthy goal, but I'd like to make sure you understand that breaking into the industry isn't actually that hard.  Breaking into the video games industry is no harder than breaking into any other industry which requires some skillset.  If you want to work at McDonald's... okay, if they have a job opening you're hired.  But if you want to work at Cisco, you probably need to know something about routers, embedded chipsets, firewalls, and the network protocol stack... and you probably need to have applied that knowledge to some practical purpose in the past.  If you want to work at USA Swimming, you'd probably better know something about the butterfly, breaststroke, backstroke, and freestyle... and you probably need to have applied that knowledge to some practical purpose in the past :).   It's no different in video games.
If you want to work in video games, you probably need to know something about artificial intelligence, physics, networking, and graphics... and you probably need to have applied that knowledge to some practical purpose in the past.  The reason the video games industry gets a bad reputation for being hard to break into is because very few under-qualified people actually *want* to work at a company like Cisco; the number of people who get excited about router technology is much smaller than the number of people who get excited about video games.  Video game companies get a large number of applicants for engineering positions who know absolutely nothing about software engineering.  I've interviewed a guy in the past who, in response to a very general question about describing debugging practices, failed to even mention a debugger.   I've actually reached the point where I hate it when people ask what I do.  I tell them I work in games, and their eyes light up; they think it's the coolest thing ever!  ... Then they want to talk about it more, and it never takes long before their eyes gloss over and they no longer have any clue what I'm talking about.  There's the disconnect; most people don't understand that having a passion for video games and having a passion for making video games are not the same thing.  If you want to land a job as a software engineer in the video game industry, you only need to do two things:   1) Develop a strong understanding of software engineering fundamentals.  Realize that what you learned in the classroom does not encompass the entirety of what it means to develop a strong understanding of software engineering fundamentals, and you will need to spend time on your own applying the things you've learned at school to projects that pique your interest.   2) Be flexible.  Interview with employers in other cities, counties, or states.  Be willing to relocate.
If you can do both of those things, it really isn't difficult to break into this industry :).  In the process, if things that pique your interest actually are video games... you'll find that you learn enough about Networking, AI, Graphics, and Physics to figure out whatever else you need to know!
  10.   You don't get gimbal lock when using quaternions.  That's the primary reason anyone uses them.  He could potentially introduce gimbal lock by converting from quaternions to Euler angles, but as he's only doing this in an attempt to accomplish his clamp... there's no reason for him to introduce gimbal issues that couldn't exist if he weren't doing that conversion.  I suspect he's more likely building a game where being able to spin the camera farther than 90 degrees would break the illusion.  For instance, a first person <x>.     This is true in a mathematical sense, but not in a practical sense.  "Up" is completely arbitrary, and in scenarios where you are looking straight up you can easily change your view matrix to simply run against a different up vector (like [1,0,0], for instance).  Pick an appropriate up vector such that the handedness of your system is maintained, and there will be no visible implications.  When you're not looking up anymore... change it back.  Most modern engines will handle this for you automatically.   The simplest answer would be to avoid using the quaternion as your starting point.  Save off your camera's roll, pitch, and yaw separately... then update those values instead of the values returned by your camera.  Apply your pitch clamp at that point, convert the resulting Euler angles to a quaternion, and send that to your camera.  Note that in this scenario you can introduce the gimbal lock Johnny Code was concerned about.  If roll is not a valid input, using YZX as your rotation order will solve gimbal issues automatically.
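A sketch of that "store the Euler angles yourself" approach might look like the following.  The conventions (radians, yaw about Y then pitch about X, roll omitted) and all names are assumptions for illustration, not from any particular engine:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct Camera {
    float yaw = 0.0f, pitch = 0.0f; // radians; roll intentionally omitted

    // Update the authoritative angles and clamp pitch here, rather than
    // round-tripping through the camera's quaternion every frame.
    void AddLook(float yawDelta, float pitchDelta) {
        yaw += yawDelta;
        const float limit = 1.55334f; // ~89 degrees, stays off the pole
        pitch = std::max(-limit, std::min(limit, pitch + pitchDelta));
    }

    // Build the orientation quaternion (w, x, y, z) from yaw about Y
    // composed with pitch about X, only after clamping.
    void Orientation(float q[4]) const {
        float cy = std::cos(yaw * 0.5f), sy = std::sin(yaw * 0.5f);
        float cp = std::cos(pitch * 0.5f), sp = std::sin(pitch * 0.5f);
        q[0] = cy * cp;
        q[1] = cy * sp;
        q[2] = sy * cp;
        q[3] = -sy * sp;
    }
};
```

Because the clamp happens on the stored pitch value, the quaternion handed to the renderer can never represent a look direction past the limit, regardless of how input deltas accumulate.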
  11. It looks to me like you are both conceptually doing the same thing: getting rid of your angular discrepancy problems with cos(theta) by rotating the problem space.  Alvaro's answer accomplishes this by explicitly computing the third axis of one of the triangles... whereas CaptainMurphy's answer gets a vector pointed in the same general direction by simply using one of the edges.   These two answers will produce the same (correct) result... at least insofar as they both return positive or negative for the same inputs under an assumption of shared edge GF.  Alvaro's answer doesn't depend on the shared-edge assumption, but under this assumption CaptainMurphy's answer is probably a little faster.
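Both tests ultimately reduce to the sign of a scalar triple product - which side of one triangle's plane the remaining point lies on.  As a sketch (names invented):

```cpp
#include <cassert>

struct Vec3 { double x, y, z; };

Vec3 Sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 Cross(Vec3 a, Vec3 b) { return {a.y * b.z - a.z * b.y,
                                     a.z * b.x - a.x * b.z,
                                     a.x * b.y - a.y * b.x}; }
double Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Positive if p is on the normal side of triangle (a, b, c), negative on
// the other side, zero if coplanar.  The cross product is the "third axis"
// computation; comparing against an edge direction instead of p - a gives
// the same sign under the shared-edge assumption.
double SideOfPlane(Vec3 a, Vec3 b, Vec3 c, Vec3 p) {
    Vec3 normal = Cross(Sub(b, a), Sub(c, a));
    return Dot(normal, Sub(p, a));
}
```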
  12. Thinias

    How do i become a game director someday?

    "Director" is typically a term applied to leadership roles within a game team.  You have Art Directors, Technical Directors, Development Directors, Creative Directors, etc... and it is good that you recognize that there is a lot of hard work involved in acquiring that title.  I would also suggest that there is a lot of talent and luck which goes into it as well.  As with anything worth doing, you need to be comfortable swallowing some risk to pursue your dreams!  I would also note that what you've linked as a "Game Director" sounds a lot like what some companies would call a "Creative Director", and you should look into those kinds of job descriptions as well.   The course load that you've presented sounds like a very well-rounded depiction of what producing a video game involves.  However, I would personally be a little concerned that it's maybe *too* well-rounded.  I see entries describing the following:

      1. Programming
      2. Modeling
        2a. Environments
        2b. Characters
      3. Animation
        3a. Rigging
        3b. VFX
      4. Concept Art
      5. Audio
      6. Design
        6a. Storyboarding
        6b. Level layouts
      7. Production
      8. Localization

    Especially as an entry-level employee (in any industry... not just games), you aren't likely to be hired for your "general" prowess.  Entry-level candidates are more likely to be hired to accomplish a single, specific goal; as an example, my first job in this industry was specifically to provide engineering support to artists in building the UI for a single iteration of EA's Madden.  I had no other responsibilities, and any artistic/design/scheduling prowess I may have presented was irrelevant to my hiring decision.  Someone told me what they wanted built, I told them what artistic assets I needed and how much time it would take, and then I used/extended the technical framework they provided in order to build exactly what they asked for.  No more, no less.  The job was also a contract, which ended simultaneously with that iteration of Madden.
With my hire, no one was looking beyond an immediate need that could be filled by a low-level grunt with a specific skill-set.  That job gave me the platform to progress into the senior engineering role I enjoy today... but now I'm on the flip side of that coin.  Now, I make similar considerations when defining hiring opportunities for others.  Unless you want to start your own company and build your own games, you might want to consider narrowing the scope of your education and becoming very good at one single component of game development; no hiring manager is going to expect you to build an entire game by yourself.  Particularly as an entry-level employee, employers will want you to fill explicit needs on the teams they've already assembled.   This offers a convenient segue into my last point; becoming a "Game Director" is a great goal... but you need to also establish some smaller goals in order to get there.  Breaking into the video game industry as a "Game Director" is an unrealistic expectation; you will need to get some experience first.  Find a component of video game development you're excited about, and pursue that.  From the above, it sounds like you might be interested in game design.  If you think design is exciting, pursue design.  Understand what it means to be a designer and focus your efforts on doing things which will demonstrate your capacity for filling that role when it comes time to interview with a game company.  Understand that part of being a game designer is coming up with great ideas... but most of being a game designer has a lot more to do with communicating those ideas effectively.  The designers on my teams spend most of their days writing wiki documents which define exactly what should happen whenever a player does anything (or even what things the player can do in the first place).
They spend most of their remaining time either in meetings having people point out holes in the documents they've written (which must then be re-written to address the holes), or filling out the equivalent of spreadsheets which drive data that the games we're building will use.  To be honest, the role looks fairly tedious to me... but I'm also not nearly as interested in that aspect of game development as they are.  And that, I guess, is my whole point.
  13. Most people being interviewed for technical positions are already employed elsewhere.  You may be asked questions about why you want to leave, but it won't be a substantial focus of any portion of any interview unless you make it one; career movement happens, it's the norm.  Just make sure your answer is both succinct and positive, and nothing else really matters.   Especially if you're moving to a larger organization, you're more likely to receive questions about why you want to work at this new company than why you want to leave your old one.  Questions like this need more complete answers, and you'll want to tailor that to each specific interview.
  14. HappyCoder's suggestions are very good.  In some ways, the answer here depends on what you mean by efficiency... but in general, if you need to ask questions like the above, you aren't going to create a more efficient solution than importing someone else's library.   If you meant efficiency from a "I need to get this running yesterday" perspective, a library that already does the work for you will almost always be the best answer.  HappyCoder has given you examples.   If you meant efficiency from a runtime perspective, unless you've done this before you're unlikely to produce a faster answer.  Worse, your answer may be inadvertently incomplete, resulting in faster loads but worse performance after loading time is complete.
  15. Thinias

    Unit Movement on 2d Grid

    In my experience, A* solutions frequently boil down to two-part answers.  You define a grid with just enough fidelity to accurately depict all obstacles, and use A* to navigate that grid.  Once you've entered a cell, that cell is now blocked and other obstacles or actors aren't allowed to move into it until you vacate the location.  Once you've gone as far along the grid as possible (you're in the target cell or as close to the target cell as possible), you navigate directly to a specific destination within that cell.  You don't need to consider other obstacles anymore, because the definition of your system prevents other obstacles from consuming the cell while you are already inside it.   Given your description of your architecture, I would expect the server to continue holding onto the computations for all full-cell movements... but once inside the cell simply allow the client to finalize the position on its own.  The server doesn't need to know anything more than "I am occupying this cell."
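A bare-bones sketch of the cell-reservation rule described above (all names invented): a unit entering a cell claims it, other units can't path into it until the occupant leaves, and final positioning inside a claimed cell is left to the client.

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <unordered_set>

struct CellKey {
    int x, y;
    bool operator==(const CellKey& o) const { return x == o.x && y == o.y; }
};

struct CellKeyHash {
    std::size_t operator()(const CellKey& c) const {
        return std::hash<long long>()((static_cast<long long>(c.x) << 32) ^
                                      static_cast<unsigned int>(c.y));
    }
};

// Server-side view of the grid: which cells are currently claimed.
class GridOccupancy {
public:
    // Returns false if another unit already holds the cell, so the
    // pathfinder treats it as blocked.
    bool TryEnter(CellKey c) { return occupied.insert(c).second; }
    void Leave(CellKey c)    { occupied.erase(c); }
    bool IsBlocked(CellKey c) const { return occupied.count(c) != 0; }

private:
    std::unordered_set<CellKey, CellKeyHash> occupied;
};
```

The A* step consults IsBlocked when expanding neighbors; once TryEnter succeeds, everything inside the cell is the client's problem, and the server only ever needs to know "I am occupying this cell."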