    2. Yes, you need to install the OS to a virtual hard drive as usual; then you have a fresh Windows install and you can be sure it has nothing already installed. (I always used an ISO file of a Windows CD, but a USB stick should work too.) The virtual hard drive is just a file on your real computer, so you could copy it to avoid installing the OS again and again for each test. Virtual machines also have built-in options for this, like taking snapshots or duplicating virtual HDs; they take a bit of patience to learn, but they work fine in the end. You can also do things like sharing folders over a virtual network, or allow copy and paste to exchange data. It's pretty comfortable.
    3. Hi! I'm using Visual Studio C++ 2017 for my games. I found malware in one of my projects and cleaned it. The folder has no malware, but when I recompile the game, the new exe file contains the malware again! I have cleaned all my hard drives (it was found in several places), but it still reappears when I recompile that project. NOTE: I can recompile any other project and those exe files don't contain any malware; it's only this one project... For reference, I found it with Emsisoft, which calls it "Gen:Variant.Razy.441994 (B)". I'm using Windows 10. Thanks for any help with this strange issue! Erik
    4. swalk studios

      Legends of Mythology Dev Vlog #2

      Hello everyone! I have just released the latest build of my tech-demo for Legends of Mythology and the second development vlog. Watch the devlog here: https://youtu.be/ZoE4RswW6Ao Get the latest version of the game by joining the discord at: discord.gg/XnXzDgK Thank you for your time!
    5. Alberth


      Not a Unity user, but with such a general question, no, we can't, for the simple reason that we don't know what you're stuck on. In other words, explain to us which tutorial you're trying to do, what you've done, and how you cannot proceed any further, i.e. what is blocking you, and why. Be as precise as possible, as that improves the answers you'll get.
    6. Josheir

      Virtual Machine Questions

      So, the VM is basically loaded with what their new computer has at the start? (Oh, by the way, you're making me paranoid!) Josh
    7. babaliaris

      Lighting: Inside faces are getting lighted too?

      As I said, I tried to do this and it works:

      //Positions             //Normals              //Texels
      //Front face (on z axis)
      -0.5f, -0.5f, 0.5f,     0.0f, 0.0f,  1.0f,     0.0f, 0.0f,
       0.5f, -0.5f, 0.5f,     0.0f, 0.0f,  1.0f,     1.0f, 0.0f,
       0.5f,  0.5f, 0.5f,     0.0f, 0.0f,  1.0f,     1.0f, 1.0f,
       0.5f,  0.5f, 0.5f,     0.0f, 0.0f,  1.0f,     1.0f, 1.0f,
      -0.5f,  0.5f, 0.5f,     0.0f, 0.0f,  1.0f,     0.0f, 1.0f,
      -0.5f, -0.5f, 0.5f,     0.0f, 0.0f,  1.0f,     0.0f, 0.0f,

      //Front face with reversed normals. Drawn a little farther back so the depth test will pass.
      -0.5f, -0.5f, 0.49f,    0.0f, 0.0f, -1.0f,     0.0f, 0.0f,
       0.5f, -0.5f, 0.49f,    0.0f, 0.0f, -1.0f,     1.0f, 0.0f,
       0.5f,  0.5f, 0.49f,    0.0f, 0.0f, -1.0f,     1.0f, 1.0f,
       0.5f,  0.5f, 0.49f,    0.0f, 0.0f, -1.0f,     1.0f, 1.0f,
      -0.5f,  0.5f, 0.49f,    0.0f, 0.0f, -1.0f,     0.0f, 1.0f,
      -0.5f, -0.5f, 0.49f,    0.0f, 0.0f, -1.0f,     0.0f, 0.0f,

      But I think this method destroys performance if, for each face, you need to duplicate the data just to change the normal direction.
    8. Hello everyone! I have just released the latest build of my tech-demo for Legends of Mythology and the second development vlog. Watch Development Vlog #2 here: To get the latest version of the game, join the swalk studios discord and take a look in the #dev-updates channel! Simply follow this discord link to join: https://discord.gg/XnXzDgK
    9. 1. Yes, except for things like 3D acceleration. 2. Yes. 3. You need a shave. Either that, or please put a sticker on your webcam.
    11. I need to test a program on multiple Windows operating systems. Somewhere I read that the way to do this is with virtual machines. Is this dependable? Also, can I use a VM to successfully determine all the additional files (for example, DLLs and runtimes) I will need to install on my clients' computers? Is it true that the VM doesn't see anything on the real computer? Thanks so much; keep it easy if you can, please. Josheir
    12. lougv22

      Tips for game programmer portfolio

      That makes sense and it's along the same lines as what I was thinking. And what about small, fun, project-type games that are quite old, as in from around 2006? Would those be too old to show in a portfolio?
    13. Hello again! I have created a basic Phong model, but I noticed that inside faces are getting lit too. It seems to me that the normals (their direction) are the same outside and inside, and this is why it happens. Take a look at the following video. Outside it seems all right, but when I look inside the cube, the front face, which is lit from the outside, is also lit from the inside. I believe 99% that this is because the inside face is actually using the same normals as the outside one, since in my vertex data normals are initialised for each face (6 in total), not for 12. Do I have to create 72 vertices? 36 for the 6 outside faces and 36 for the inside, with different normals? In the specular calculations, don't be surprised by this: vec3 viewDirection = normalize(fragPosition - viewPos); The viewPos is actually the front vector of the camera, not its position, and it is relative to the view coordinate system, not the world (I'm doing lighting calculations in the world coordinate system). Instead of transforming the viewPos (front of the camera) into the world coordinate system, I just moved it in my head from view to world (like we learned in maths) and came up with the above calculation, which gives me the appropriate vector to get the correct angle between the view direction and the reflection of the light. This is my vertex data:
float vertices[] = {
    // positions           // normals             // texture coords
    // Back face
    -0.5f, -0.5f, -0.5f,    0.0f,  0.0f, -1.0f,    0.0f, 0.0f,
     0.5f, -0.5f, -0.5f,    0.0f,  0.0f, -1.0f,    1.0f, 0.0f,
     0.5f,  0.5f, -0.5f,    0.0f,  0.0f, -1.0f,    1.0f, 1.0f,
     0.5f,  0.5f, -0.5f,    0.0f,  0.0f, -1.0f,    1.0f, 1.0f,
    -0.5f,  0.5f, -0.5f,    0.0f,  0.0f, -1.0f,    0.0f, 1.0f,
    -0.5f, -0.5f, -0.5f,    0.0f,  0.0f, -1.0f,    0.0f, 0.0f,
    // Front face
    -0.5f, -0.5f,  0.5f,    0.0f,  0.0f,  1.0f,    0.0f, 0.0f,
     0.5f, -0.5f,  0.5f,    0.0f,  0.0f,  1.0f,    1.0f, 0.0f,
     0.5f,  0.5f,  0.5f,    0.0f,  0.0f,  1.0f,    1.0f, 1.0f,
     0.5f,  0.5f,  0.5f,    0.0f,  0.0f,  1.0f,    1.0f, 1.0f,
    -0.5f,  0.5f,  0.5f,    0.0f,  0.0f,  1.0f,    0.0f, 1.0f,
    -0.5f, -0.5f,  0.5f,    0.0f,  0.0f,  1.0f,    0.0f, 0.0f,
    // Left face
    -0.5f,  0.5f,  0.5f,   -1.0f,  0.0f,  0.0f,    1.0f, 0.0f,
    -0.5f,  0.5f, -0.5f,   -1.0f,  0.0f,  0.0f,    1.0f, 1.0f,
    -0.5f, -0.5f, -0.5f,   -1.0f,  0.0f,  0.0f,    0.0f, 1.0f,
    -0.5f, -0.5f, -0.5f,   -1.0f,  0.0f,  0.0f,    0.0f, 1.0f,
    -0.5f, -0.5f,  0.5f,   -1.0f,  0.0f,  0.0f,    0.0f, 0.0f,
    -0.5f,  0.5f,  0.5f,   -1.0f,  0.0f,  0.0f,    1.0f, 0.0f,
    // Right face
     0.5f,  0.5f,  0.5f,    1.0f,  0.0f,  0.0f,    1.0f, 0.0f,
     0.5f,  0.5f, -0.5f,    1.0f,  0.0f,  0.0f,    1.0f, 1.0f,
     0.5f, -0.5f, -0.5f,    1.0f,  0.0f,  0.0f,    0.0f, 1.0f,
     0.5f, -0.5f, -0.5f,    1.0f,  0.0f,  0.0f,    0.0f, 1.0f,
     0.5f, -0.5f,  0.5f,    1.0f,  0.0f,  0.0f,    0.0f, 0.0f,
     0.5f,  0.5f,  0.5f,    1.0f,  0.0f,  0.0f,    1.0f, 0.0f,
    // Bottom face
    -0.5f, -0.5f, -0.5f,    0.0f, -1.0f,  0.0f,    0.0f, 1.0f,
     0.5f, -0.5f, -0.5f,    0.0f, -1.0f,  0.0f,    1.0f, 1.0f,
     0.5f, -0.5f,  0.5f,    0.0f, -1.0f,  0.0f,    1.0f, 0.0f,
     0.5f, -0.5f,  0.5f,    0.0f, -1.0f,  0.0f,    1.0f, 0.0f,
    -0.5f, -0.5f,  0.5f,    0.0f, -1.0f,  0.0f,    0.0f, 0.0f,
    -0.5f, -0.5f, -0.5f,    0.0f, -1.0f,  0.0f,    0.0f, 1.0f,
    // Top face
    -0.5f,  0.5f, -0.5f,    0.0f,  1.0f,  0.0f,    0.0f, 1.0f,
     0.5f,  0.5f, -0.5f,    0.0f,  1.0f,  0.0f,    1.0f, 1.0f,
     0.5f,  0.5f,  0.5f,    0.0f,  1.0f,  0.0f,    1.0f, 0.0f,
     0.5f,  0.5f,  0.5f,    0.0f,  1.0f,  0.0f,    1.0f, 0.0f,
    -0.5f,  0.5f,  0.5f,    0.0f,  1.0f,  0.0f,    0.0f, 0.0f,
    -0.5f,  0.5f, -0.5f,    0.0f,  1.0f,  0.0f,    0.0f, 1.0f
};

This is my fragment shader:

#version 330 core

// Fragment output.
out vec4 aPixelColor;

// Normals and texture coordinates.
in vec3 fragNormal;
in vec2 fragTexCoord;
in vec3 fragPosition;

// Light source.
struct LightSource
{
    vec3 position;
    vec3 ambient;
    vec3 color;
};

// Material.
struct Material
{
    sampler2D diffuse;
    sampler2D specular;
    int shininess;
};

// Uniforms.
uniform LightSource light;
uniform Material material;
uniform vec3 viewPos;

// Declare functions.
vec3 GetAmbientColor();
vec3 GetDifffuseColor();
vec3 GetSpecularColor();

void main()
{
    float alpha_value = texture(material.diffuse, fragTexCoord).w;
    vec3 ambient_color = GetAmbientColor();
    vec3 diffuse_color = GetDifffuseColor();
    vec3 specular_color = GetSpecularColor();
    vec3 final_color = ambient_color + diffuse_color + specular_color;

    // Set the final color.
    aPixelColor = vec4(final_color, alpha_value);
}

vec3 GetAmbientColor()
{
    return light.ambient * vec3(texture(material.diffuse, fragTexCoord));
}

vec3 GetDifffuseColor()
{
    vec3 light_direction = normalize(light.position - fragPosition);
    vec3 normal = normalize(fragNormal);
    float diffuse_factor = max(dot(light_direction, normal), 0);
    return (light.color * diffuse_factor) * vec3(texture(material.diffuse, fragTexCoord));
}

vec3 GetSpecularColor()
{
    vec3 light_direction = normalize(fragPosition - light.position);
    vec3 normal = normalize(fragNormal);
    vec3 viewDirection = normalize(fragPosition - viewPos);
    vec3 refrection = normalize(reflect(light_direction, normal));
    float spec_factor = pow(max(dot(refrection, viewDirection), 0), material.shininess);
    return light.color * spec_factor * vec3(texture(material.specular, fragTexCoord));
}

This is the vertex shader:

#version 330 core

layout(location = 0) in vec3 aPos;
layout(location = 1) in vec3 aNormal;
layout(location = 2) in vec2 aTexel;

uniform mat4 model;
uniform mat4 view;
uniform mat4 proj;

out vec3 fragNormal;
out vec2 fragTexCoord;
out vec3 fragPosition;

void main()
{
    gl_Position = proj * view * model * vec4(aPos, 1.0f);
    fragNormal = mat3(transpose(inverse(model))) * aNormal;
    fragTexCoord = aTexel;
    fragPosition = vec3(model * vec4(aPos, 1.0f));
}
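Two common fixes avoid duplicating the cube's vertex data: enable back-face culling (glEnable(GL_CULL_FACE)) so the inward-facing triangles are never rasterized, or flip the normal for back-facing fragments (GLSL provides the gl_FrontFacing built-in for exactly this test). The flip can be sketched numerically; the function below is an illustrative stand-in for the shader logic, not part of the original code, and follows the post's convention that the view direction points from the camera toward the fragment.

```python
# Illustrative sketch (not from the original shader): two-sided diffuse
# lighting by flipping the normal when a fragment is seen from behind,
# instead of duplicating vertices with reversed normals.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def diffuse_factor(normal, light_dir, view_dir):
    """Lambert term; light_dir points toward the light, view_dir from
    the camera toward the fragment (as in the post's shader)."""
    if dot(normal, view_dir) > 0:           # normal faces away from us,
        normal = tuple(-c for c in normal)  # so we see the back side: flip
    return max(dot(normal, light_dir), 0.0)

# Front face of the cube, normal +Z, light in front of it at +Z:
front = diffuse_factor((0, 0, 1), (0, 0, 1), (0, 0, -1))  # seen from outside
back = diffuse_factor((0, 0, 1), (0, 0, 1), (0, 0, 1))    # seen from inside
```

With the flip, the inside of a face is lit using its own flipped normal instead of leaking the outside lighting; plain culling is the cheaper choice when the camera is never meant to be inside the mesh.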
    14. If you're looking to have a portfolio as a programmer, then you need to focus more on code and less on visual examples. This is a common mistake juniors make. It's great to showcase the visual result of your code, but it's more important to show how clean your code is and what you did to achieve the desired result, and to give the recruiter a way to gauge your competency. Even if you make a full-featured game with Unity, you still want to highlight the 'coding' parts, which means you're going to want to share your source code files. You're not applying as a game designer or game artist; you're applying as a game programmer. FYI, do not email game companies your game. Stick to applying for their posted jobs, and if they want to see your game, you'll go through the proper process. Game companies will usually never look at your game in such a manner, as there is potential liability. For example, you send a game as part of your resume to Company A, and four months later they release a game similar to the example you sent. This is common practice everywhere, to my knowledge. The same applies to people trying to pitch game concepts to studios. Best of luck.
    15. DirectX 9 is extremely old (in tech terms) and no longer directly supported by modern hardware. This means that all modern systems that support it have to emulate it, and there is also really no financial motivation to fix bugs like this, because DirectX 9 is no longer a supported platform... This is how games eventually become completely obsolete: even though the modern hardware is fully able to handle what the game is trying to do, it just doesn't speak the same language anymore... It happens to even the best of them eventually.
    16. A simple answer as Zipster might be alluding to, is to look at the characters. Make the warrior have a very low magic stat and the magician have a low physical strength stat.
    17. taoprox

      HTML5 Canvas Online RPG (MORPG)

      I've tried to edit my original post but I guess there is either a time limit or a limit to the number of edits. Just thought I would put up a screenshot here. Thanks, Tao
    18. quarzak stride


      I am new to game development using Unity and I need help learning the basics. The tutorials don't help, so can any of you help me understand?
    19. Septopus

      Tips for game programmer portfolio

      I've never "worked" in the gaming industry, so I can't speak to this specifically, but generally speaking, large email attachments (unless asked for) of any kind are rarely appreciated. A stripped-down web demo with a visible link to download a more complete PC version might be a good idea. From my IT industry experience, I found it's way easier to get people to actually look at something if it only takes one click and doesn't install anything.
    20. I am working on multiple projects, one of which uses the Windows API. 😉 I am trying to catch WM_DISPLAYCHANGE in the WndProc message switch:

    LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam)
    {
        switch (message)
        {
        case WM_DISPLAYCHANGE:
        {

    The problem is that when I catch the event for a resolution not supported by my application, there is a flash of this screen before the event is caught. Is there any way to bypass this? Oh, and I tried changing the WM_PAINT code too. Thanks for any help, Josheir.
    21. "This is the End" combines the squad-based strategy features of XCOM with survival elements; it is based in a steampunk setting. See more at my TIGSource devblog: https://forums.tigsource.com/index.php?topic=65119
    22. Hi, this is my first blog entry here and also the first time that I'm actively participating in Gamedev.net. I am a software developer who writes application software at work; at home, I develop games in my spare time. I used to read Gamedev.net articles and blog posts for inspiration and motivation. Since I have the RSS feed subscribed on my phone, I read about the Gamedev.net Frogger challenge, and I thought that I wasn't interested in Frogger, but that I liked the idea of the community challenges. When the Frogger challenge was over, I read about the dungeon crawler challenge and thought: "Okay guys, now you've got me!" I always wanted to create an old-school RPG in the style of Daggerfall or Ultima Underworld with modern controls (although I never really played them). More than 10 years ago, I started to write a small raycasting renderer in C++. Raycasting is the rendering technique that was used by the first-person shooters of the early 90s (see also [1]). I never really finished that renderer and shelved the project until, some years later, I stumbled across the old code and started porting it to Cython [2] in order to be able to use it from the Python programming language [3]. After some time, I was able to render level geometry (walls, floors, ceilings and sky) as well as sprites. After solving the biggest problems, I lost interest in the project again -- until now. I thought the dungeon crawler challenge was a great opportunity to motivate myself to work on the engine. At the same time, I would create a game to actually prove that the engine was ready to be used. I started to draw some textures in Gimp [4] in order to get myself in the right mood. Then I started to work on unfinished engine features.
The following list shows the features that were still missing at the end of December 2018:

- Level editor
  - Hit tests for mouse clicks
  - Processing mouse scroll events
  - Writing modified level files to disk (for the editor)
- UI
  - Text
  - Buttons
  - Images
  - Containers (layouting child widgets in particular)
- Scheduling tasks (async jobs that are run every frame)
- Collision detection for sprite entities
- Collision detection for level geometry (walls)
- Music playback
- Fullscreen toggle
- Animated sprites
- Directional sprites (sprites look different depending on the view angle)
- Scaling sprites
- Refactorings / cleaning up code
- Documentation & tutorials
- Fixing tons of bugs

Luckily, many of the above features are implemented by now (middle of January 2019) and I can start focusing on the game itself (see the screenshots below; all sprites, textures and UI are hand-drawn using Gimp [4] and Inkscape [5]). The game takes place in a world infested by the curse of the daemon lord Ardor. Burning like a fire and spreading like a plague, the curse causes people to become greedy and grudging; some of them even turn into bloodthirsty monsters. By reaching out to reign supreme, the fire of Ardor burns its way into our world. The player is a nameless warrior who crested the silver mountain in order to enter Ardor's world and defeat him. To open the dimension gate, the player has to beat three dungeons and obtain three soul stones from the daemon's guardians. The following videos show some progress: First attempt: rendering level geometry and sprites; testing textures; navigating through doors (fade-in and fade-out effects). First update: adding a lantern to light up the dungeon. Second update: rendering directional sprites and animations.

[1] Raycasting (Wikipedia): https://en.wikipedia.org/wiki/Ray_casting
[2] Cython: https://cython.org/
[3] Python: https://www.python.org/
[4] Gimp: https://www.gimp.org/
[5] Inkscape: https://www.inkscape.org/
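To make the raycasting technique concrete, here is a minimal sketch of the grid-marching idea behind those early-90s renderers. The map, starting position, and step size are illustrative choices, not from this engine, and real renderers typically use a DDA grid traversal instead of fixed-size steps.

```python
# Illustrative sketch of grid raycasting: march a ray through a tile map
# until it hits a wall cell; the travelled distance is what a renderer
# would use to scale the wall column on screen.

import math

MAP = ["#####",
       "#...#",
       "#...#",
       "#####"]

def cast_ray(x, y, angle, step=0.01, max_dist=10.0):
    """Return the distance from (x, y) to the first wall along `angle`."""
    dx, dy = math.cos(angle), math.sin(angle)
    dist = 0.0
    while dist < max_dist:
        cx, cy = int(x + dx * dist), int(y + dy * dist)
        if MAP[cy][cx] == "#":     # the ray entered a wall cell
            return dist
        dist += step
    return max_dist

# Cast straight along +x from inside the room: the wall is 2.5 units away.
d = cast_ray(1.5, 1.5, 0.0)
```

One such distance per screen column, corrected for the fisheye effect, is essentially all the classic renderer needs for walls.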
    23. timothyjlaird

      Question on which engine should I use

      Monogame may be something to look at. It's a port of XNA, so it's got a very solid tutorial base (most XNA material seems to apply). I'm using it for 3D, but 2D seems solid: http://rbwhitaker.wikidot.com/monogame-2d-tutorials
    24. Sound Master

      Windows 10 makes old games crash fullscreen

      DirectX 9 games.
    25. A few others and I are currently working on a game, so I am looking for a coder to help with the project. The game in question is called Rose Rock Shooter and it's a side-scrolling action shoot 'em up with MOBA-like abilities, built in the Unity engine. I am the pixel artist making the assets for all the characters; we have already finished the first character's animations. I am looking to get another coder on board because we are doing this in our spare time for now and my current coder has a full-time job. The game's concept: the players are welcomed to Rose Rock City, a place on the brink of chaos, where an elite team of some odd characters has been assembled, a group called Phantom Orchid. If you are interested, you can email me at cameron-troup@hotmail.co.uk and I can send you my high concept document and GDD, which have more details on the project. Thanks for reading.
    26. taoprox

      HTML5 Canvas Online RPG (MORPG)

      Hi Awoken, thanks for your comment, I appreciate it! Apart from the minimap in the top left, the background image, the inner equipment icons and the spell icons, everything else I have done myself. Basically, anything that looks good, I have not done. The layout is mine, however, though I guess a lot of game UIs are laid out this way. I am more of a programmer than a designer; I can't even draw a decent stickman 😛 Again, thank you for your reply, it makes me want to continue with it! Tao
    27. lougv22

      Tips for game programmer portfolio

      Thanks for the tips. I may give those a try. Any insights on the question of a web build versus a PC Unity build when applying for game programming jobs? Is the web build the preferred way of showing off your game? Would I be at a disadvantage if I emailed them a PC build instead?
    28. On the one hand, you are absolutely right. The models that fundamental physics uses to find fundamental laws and factors (especially factors) really do work at the micro level and simulate trillions of molecules, atoms and so on. Obviously that kind of simulation requires a month of calculations on a supercomputer per run, which is too slow and expensive not just for games, but for applied science and engineering software too. But on the other hand, you are absolutely wrong. Applied science and engineering software usually simulates the same processes at the macro level, using the fundamental laws and factors found by fundamental science at the micro level (i.e. it integrates the behavior of huge clusters, or even the whole system, instead of simulating each individual atom). That is accurate enough for most applied research and engineering needs, and it requires hours, or at most days, per simulation on a high-end PC or a small cluster. But these integration schemes also have a nonlinear dependence of speed on precision. So by optimizing for speed at the cost of accuracy (or rather, by pushing the scheme to the edge of numerical stability, which makes it much harder to implement) and by limiting some simulation capabilities, we get schemes good enough for real-time simulation of these effects in games. For example, most people here know the classical 2D fire simulation that runs very fast (it could reproduce a fire effect over a 320x200 area on an 80386 at 33 MHz with high FPS). But that scheme comes from a scientific scheme used to model natural convection, just with some limitations, such as a fixed direction for the fire, which cuts the required calculation roughly 6x, and the lowest possible precision (1 byte) for the temperature field. In my university years I used the same scheme for scientific simulations. Optimized for accuracy and without those limitations, it gave 0.1 FPS for a 100x100 area on a Pentium II at 400 MHz, and for modelling flame it gave results with no significant visual differences from the speed-optimized scheme. Moreover, modern simulators for 3D fire and smoke use the same scheme, but mainly with a Lagrangian approach (particles) instead of an Eulerian approach (grids). And so on with everything else: for example, high-end simulation of destruction and deformation (including fabric and so on) uses the same schemes that CAD software uses for strength calculations.
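The classic 2D fire effect mentioned above can be sketched in a few lines. The grid size and cooling constant below are illustrative choices, not values from the post: heat enters at the bottom row, and at each step every cell becomes the average of the three cells beneath it minus a cooling term, which gives the fixed upward "direction of fire" described above.

```python
# Minimal sketch of the classic 2D fire effect: heat rises from a hot
# bottom row; each cell becomes the average of the cells below it, minus
# a small cooling term. W, H and COOLING are illustrative.

W, H = 16, 12          # grid width and height (top row is index 0)
COOLING = 12           # heat lost per propagation step

def step(heat):
    """Propagate heat one step upward (fixed fire direction)."""
    new = [row[:] for row in heat]
    for y in range(H - 1):                 # every row except the bottom
        for x in range(W):
            below = heat[y + 1]
            # average the three cells underneath (wrap horizontally)
            avg = (below[(x - 1) % W] + below[x] + below[(x + 1) % W]) // 3
            new[y][x] = max(0, avg - COOLING)
    return new

def run(steps):
    heat = [[0] * W for _ in range(H)]
    heat[H - 1] = [255] * W                # bottom row: constant heat source
    for _ in range(steps):
        heat = step(heat)
        heat[H - 1] = [255] * W            # keep the source burning
    return heat

heat = run(H)   # after H steps the heat has reached the top row
```

In the demoscene version the bottom row is randomized each frame and the heat values index into a black-red-yellow-white palette; replacing the grid with particles gives the Lagrangian flavor the post mentions.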
    29. Wiljan

      Graphic artist looking for programmer

      Hi, are you still interested in doing some collaborative development?
    30. SoldierOfLight

      Windows 10 makes old games crash fullscreen

      Be more specific. What "old games" are you talking about?
    31. Awoken

      HTML5 Canvas Online RPG (MORPG)

      looks good. Did you create the website UI yourself? It looks very professional.
    32. Windows 10 makes old games crash in fullscreen when pressing Alt+Tab or Alt+F4. What can you do? A lot of people have this problem; I searched for it, and the advice is to use windowed mode. Should I wait until Microsoft fixes this? Why would they make an update that makes things worse? What did they change so that the games crash? Any better information? Thanks.
    33. Hi, I'm not sure if this is the right place to post, but I have created my own 2D game engine using HTML5 Canvas + JavaScript (Node.js). Apart from the Node infrastructure and the graphics, I have built this from scratch. It is still in development, so I can't provide a direct link for you to play, but I have a quick YouTube video to highlight my game and the map editor. I am not very good at video editing, so it's just a quick showcase of running around the map, creatures following and attacking, spells, logging in and out, changing the map and reloading. The graphics are ripped from another game I used to play; obviously I will be replacing these with my own eventually, I just needed something fast to test with. I am self-taught and this is my first large-scale game attempt, but so far so good. Please take a look at the video, and if you have any comments, feel free to post. If the game is not your cup of tea, please don't criticise it. I look forward to your replies, thanks. Tao https://www.youtube.com/watch?v=zUzHNu4mKJ8 (P.S. I recorded this with QuickTime, and for some reason it lags every now and then. It is not the game that is lagging!!)
    34. Project Title: Unnamed, at the moment.
      Description: Explore a beautiful, vibrant environment and do as your mind pleases. In this world your goal is to thrive as a medieval businessman. Trade, harvest, gather and discover the vast wilderness surrounding the cities and villages you operate in. Think of The Elder Scrolls V: Skyrim to get a sense of the scope of this game, then remove everything but the trading and gathering systems: there you have our game. The main focus will be on graphical realism and immersion. Basically, the player will roam around harvesting, gathering, crafting new items and selling them; turning objects around for a profit at different markets will also be possible. Further explanation will be given if you join the team. Do not think I have huge AAA expectations for this game. Features will be limited and the focus will be on graphics. I want to make a greater game, but we have to confine ourselves in the beginning.
      Includes: harvesting resources (metal, crops, wood etc.), trading products, buying properties, quests, hunger and thirst, currency.
      Team Structure: David (me): All in all, I have experience and basic knowledge in all fields except programming. I have worked in Unity, Unreal, Blender, Photoshop and more, but only as a hobby. After a long pause from game making, I am now back again.
      Talent Required: 3D artist(s), C++ programmer(s), music/sound creator, other useful skills. Don't worry, you don't have to be a professional in your field, but please know what you are doing. A portfolio of previous work is required. If you are skilled in multiple fields, that is a bonus.
      Contact: E-mail: davvethegamer@gmail.com (private mail). Are you interested in becoming part of this project? Do not hesitate to contact me. Sincerely, David.
      REMEMBER: THIS GAME IS AT THE IDEA STAGE AND YOU WILL BE PAID BY ROYALTY. IF YOU ARE NOT COMFORTABLE WITH THIS, PLEASE DO NOT CONTACT ME.
    35. Hi all, I planned to create a 2D turn-based strategy game in modern C++. The idea was to use SDL, make it available for at least Linux, Windows and Android, and create everything from scratch, but time is what it is, and unfortunately that will probably be hard to do. As an alternative, I was thinking of using some existing engine that would let me save some time. Any suggestion is much appreciated!
    36. This is a story about writing a plugin for the Unity Asset Store, taking a crack at solving the well-known isometric problems in games, making a little coffee money from it, and also finding out how extensible the Unity editor is. Pictures, code, graphs and thoughts inside.

Prologue

So, one night I found out I had pretty much nothing to do. The coming year wasn't really promising in my professional life (unlike my personal one, but that's a whole other story). Anyway, I got the idea to write something fun for old times' sake, something quite personal, something of my own, but still with a little commercial angle (I just like that warm feeling when your project is interesting to somebody other than your employer). And this went hand in hand with the fact that I had long wanted to check out the possibilities of Unity editor extensions and to see whether the platform is any good for selling the engine's own extensions. I devoted one day to studying the Asset Store: models, scripts, integrations with various services. At first, it seemed like everything had already been written and integrated, often in a number of variants of different quality and detail, just as varied in price and support. So right away I narrowed it down to: code only (after all, I'm a programmer), and 2D only (since I just love 2D, and Unity has just gained decent out-of-the-box support for it). And then I remembered just how many cactuses we had chewed through and how many mice had died when we were making an isometric game before. You wouldn't believe how much time we killed searching for viable solutions, and how many lances we broke trying to sort out this isometry and draw it properly. So, struggling to keep my hands still, I searched by various more and less relevant keywords and couldn't find anything except a huge pile of isometric art, until I finally decided to make an isometric plugin from scratch.
Setting the goals

The first thing I needed was to describe briefly what problems the plugin was supposed to solve and how a developer of isometric games would use it. The isometry problems are: sorting objects by remoteness in order to draw them properly, and an editor extension for creating, positioning and moving isometric objects. With the main objectives for the first version formulated, I set myself a 2-3 day deadline for the first draft. It couldn't be deferred, you see: enthusiasm is a fragile thing, and if you don't have something ready in the first days, there's a great chance you'll ruin it. And the New Year holidays are not as long as they might seem, even in Russia, and I wanted to release the first version within, say, ten days.

Sorting

To put it briefly, isometry is an attempt by 2D sprites to look like 3D models. That, of course, results in dozens of problems. The main one is that the sprites have to be sorted into the order in which they should be drawn, to avoid troubles with mutual overlapping. One screenshot shows correct sorting, where the green sprite is drawn first (2,1) and then the blue one (1,1); another shows incorrect sorting, where the blue sprite is drawn first. In this simple case sorting isn't much of a problem, and there are several options, for example:

- sorting by screen-Y position, which is (isoX + isoY) * 0.5 + isoZ
- drawing from the remotest isometric grid cell, left to right, top to bottom: [(3,3),(2,3),(3,2),(1,3),(2,2),(3,1),...]
- and a whole bunch of other more or less interesting ways

They are all pretty good, fast and working, but only for such single-celled objects, or for columns extended in the isoZ direction. I was interested in a more general solution that would also work for objects extended along one coordinate, or even for "fences" that have no width at all but are extended in one direction and have the necessary height. Another screenshot shows the correct sorting of extended 3x1 and 1x3 objects together with "fences" measuring 3x0 and 0x3. And that's where the trouble begins and we have to decide on the way forward: either split "multi-celled" objects into "single-celled" ones, i.e. cut them vertically and then sort the resulting strips, or think up a new sorting method, more complicated and interesting. I chose the second option, having no particular desire to get into tricky per-object processing, cutting (even automatic cutting), and special-cased logic. For the record, the first way was used in a few famous games, like Fallout 1 and Fallout 2; you can actually see those strips if you dig into the games' data. Now, the second option implies that there is no per-object sorting criterion: no pre-calculated value by which you could sort the objects. If you don't believe me (and I guess many people who have never worked with isometry don't), take a piece of paper and draw small objects measuring, say, 2x8 and 2x2. If you somehow manage to figure out a value for calculating their depth and sorting them, just add an 8x2 object and try to sort them in different positions relative to one another. So there is no such value, but we can still use the dependencies between objects (roughly speaking, which one overlaps which) for a topological sort. We can calculate the objects' dependencies by projecting their isometric coordinates onto the isometric axes.
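The dependency-driven topological sort just described can be sketched end to end (in Python for brevity; the plugin itself is C#, and the scene below is invented for the demo). Under one common axis convention, object B must be drawn before object A when B's origin lies before A's far corner on both isometric axes; a depth-first walk then emits every object after its dependencies.

```python
# Illustrative sketch of topological draw-order sorting for isometric
# objects (Python stand-in for the plugin's C#; scene data invented).

class IsoObject:
    def __init__(self, name, position, size):
        self.name = name
        self.position = position  # (x, y) in isometric cells
        self.size = size          # (w, h) in isometric cells

def depends(obj_a, obj_b):
    """obj_b must be drawn before obj_a when obj_b starts before
    obj_a's far corner on both axes (the Z axis works the same way)."""
    max_x = obj_a.position[0] + obj_a.size[0]
    max_y = obj_a.position[1] + obj_a.size[1]
    return obj_b.position[0] < max_x and obj_b.position[1] < max_y

def topo_draw_order(scene):
    order, visited = [], set()
    def visit(obj):
        if id(obj) in visited:
            return
        visited.add(id(obj))
        for other in scene:                 # O(N) per object -> O(N^2) total
            if other is not obj and depends(obj, other):
                visit(other)                # dependencies are drawn first
        order.append(obj.name)
    for obj in scene:
        visit(obj)
    return order

scene = [IsoObject("green", (2, 0), (1, 1)),
         IsoObject("blue",  (1, 0), (1, 1)),
         IsoObject("red",   (0, 0), (1, 1))]
draw_order = topo_draw_order(scene)
```

Note that the visited set also breaks the dependency cycles that can occur between extended objects in real isometric scenes; the pairwise dependency pass is where the quadratic cost comes from.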
One screenshot shows the blue cube depending on the red one; another shows the green cube depending on the blue one. Pseudocode for the dependency test on two axes (the same works with the Z axis):

bool IsIsoObjectsDepends(IsoObject obj_a, IsoObject obj_b)
{
    var obj_a_max_size = obj_a.position + obj_a.size;
    return obj_b.position.x < obj_a_max_size.x
        && obj_b.position.y < obj_a_max_size.y;
}

With this approach we build dependencies between all the objects, pass among them recursively, and mark the display Z coordinate. The method is quite universal and, most importantly, it works. You can read a detailed description of this algorithm, for example, here or here; the same approach is also used in a popular isometric Flash library (as3isolib). And everything was just great, except that the time complexity of this approach is O(N^2), since we have to compare every object to every other one in order to build the dependencies. I left optimization for later versions, adding only lazy re-sorting so that nothing is re-sorted until something moves. We'll talk about optimization a little later.

Editor extension

From then on, I had the following goals:

- sorting of objects had to work in the editor (not only in the game)
- there had to be a different kind of gizmo arrow (the arrows for moving objects)
- optionally, objects would snap to tiles when moved
- tile sizes would be picked up and set automatically in the isometric world inspector
- AABBs of objects would be drawn according to their isometric sizes
- isometric coordinates would be shown in the object inspector, and changing them would change the object's position in the game world

And all of these goals were achieved. Unity really does allow you to extend its editor considerably. You can add new tabs, windows, buttons, and new fields in the object inspector. If you want, you can even create a customized inspector for a component of exactly the type you need.
You can also draw additional information in the editor's window (in my case, the objects' AABBs), and replace the standard move gizmos too. The problem of sorting inside the editor was solved with the magic ExecuteInEditMode attribute, which makes an object's components run in editor mode just the same way as in a game. All of this was done, of course, not without difficulties and tricks of all kinds, but there was no single problem that I spent more than a couple of hours on (Google, forums and communities helped me resolve all the issues that weren't covered by the documentation).

The screenshot shows my gizmos for moving objects within the isometric world.

Release

So, I got the first version ready and took the screenshots; I even drew an icon and wrote a description. It was time. I set a nominal price of $5, uploaded the plugin to the store and waited for it to be approved by Unity. I didn't think much about the price, since I wasn't really trying to earn big money yet. My purpose was to find out whether there was demand at all and, if so, to estimate it. I also wanted to help developers of isometric games, who had somehow ended up almost completely deprived of tools and extensions. After 5 rather painful days (about the same time I had spent writing the first version, though I knew what I was doing, without the wondering and overthinking that slows down people who have just started working with isometry), I got a response from Unity saying the plugin was approved and I could already see it in the store, along with its zero (so far) sales. I checked in on the local forum, hooked Google Analytics up to the plugin's store page, and prepared to watch the grass grow. It didn't take long before the first sales came, along with feedback on the forum and in the store.
For the remaining days of January, 12 copies of the plugin were sold, which I took as a sign of public interest, and I decided to continue.

Optimization

I was unhappy with two things:

- the time complexity of sorting: O(N^2)
- trouble with garbage collection and general performance

Algorithm

With 100 objects and O(N^2), I had 10,000 iterations to make just to find the dependencies, and then I still had to pass over all of them and assign the display Z for sorting. There had to be a better solution. I tried a huge number of options and lost sleep over this problem. I'm not going to describe all the methods I tried, but I'll describe the one I've found best so far.

First of all, we sort only visible objects, which means we constantly need to know what's in the shot. If a new object appears, we have to add it to the sorting process, and if one of the old ones is gone, we ignore it. Now, Unity doesn't provide a way to get an object's bounding box together with its children in the scene tree. Passing over the children (every time, by the way, since they can be added and removed) would be too slow. We also can't use OnBecameVisible and other such events, because they work only for the parent object. But we can get all the Renderer components from the object and its children. It doesn't sound like the best option, but I couldn't find another way that was as universal with acceptable performance.
List<Renderer> _tmpRenderers = new List<Renderer>();

bool IsIsoObjectVisible(IsoObject iso_object) {
    iso_object.GetComponentsInChildren<Renderer>(_tmpRenderers);
    for ( var i = 0; i < _tmpRenderers.Count; ++i ) {
        if ( _tmpRenderers[i].isVisible ) {
            return true;
        }
    }
    return false;
}

There's a little trick here: the overload of GetComponentsInChildren that fills a provided buffer lets us get the components without allocations, unlike the one that returns a new array of components.

Secondly, I still had to do something about O(N^2). I tried a number of space-partitioning techniques before settling on a simple two-dimensional grid in display space onto which I project my isometric objects. Each sector of the grid holds a list of the isometric objects that cross it. The idea is simple: if the projections of two objects don't intersect, there's no point in building a dependency between them at all. So we pass over all visible objects and build dependencies only within the sectors where it's necessary, lowering the time complexity of the algorithm and increasing performance. The size of each sector is calculated as the average of the sizes of all objects. I found the result more than satisfying.

General performance

Of course, I could write a separate article on this... but let's keep it short. First, we cache components (GetComponent, which we'd otherwise use to find them, is not fast). I recommend everyone to watch themselves when working with anything related to Update: it runs every frame, so you have to be really careful. Also remember interesting features like the custom == operator. There are a lot of things to keep in mind, but the built-in profiler eventually points you at every one of them, which makes them much easier to learn and remember. You also get to really understand the pain of the garbage collector. Need higher performance?
Then forget about anything that can allocate memory, which in C# (especially with the old Mono compiler) means almost anything, from foreach(!) to closures created by lambdas, let alone LINQ, which is now off-limits even in the simplest cases. In the end, instead of C# with its syntactic sugar, you get something resembling C with odd capabilities. Here are some links on the topic you might find helpful: Part1, Part2, Part3.

Results

I've never seen anybody use this optimization technique before, so I was particularly glad to see the results. Where the first versions took literally 50 moving objects to turn the game into a slideshow, it now works well even with 800 objects in a frame: everything spins at top speed, re-sorting takes just 3-6 ms, which is very good for this number of objects in isometry, and after initialization it allocates almost no memory per frame.

Further opportunities

After reading feedback and suggestions, I added a few more features in subsequent versions.

2D/3D mixture

Mixing 2D and 3D in isometric games is an interesting opportunity that minimizes the amount of art you have to draw for different movement and rotation options (for instance, by using 3D models for animated characters). It's not a really hard thing to do, but it requires integration with the sorting system. All you need is to get the bounding box of the model with all its children, and then move the model along the display Z by the box's width:

Bounds IsoObject3DBounds(IsoObject iso_object) {
    var bounds = new Bounds();
    iso_object.GetComponentsInChildren<Renderer>(_tmpRenderers);
    if ( _tmpRenderers.Count > 0 ) {
        bounds = _tmpRenderers[0].bounds;
        for ( var i = 1; i < _tmpRenderers.Count; ++i ) {
            bounds.Encapsulate(_tmpRenderers[i].bounds);
        }
    }
    return bounds;
}

That's an example of how to get the bounding box of a model with all its children, and the screenshot shows what it looks like when it's done.

Custom isometric settings

This one is relatively simple.
I was asked to make it possible to set the isometric angle, the aspect ratio, and the tile height. After suffering through some of the maths involved, you get something like this:

Physics

And here it gets more interesting. Since isometry simulates a 3D world, physics is supposed to be three-dimensional too, with height and everything. I came up with a fascinating trick: I replicate all the physics components, such as Rigidbody, Collider and so on, for the isometric world. From these descriptions and settings I build a copy of the scene in an invisible physical three-dimensional world, using the engine itself and the built-in PhysX. After the simulation step I take the calculated data and feed it back into the duplicate components of the isometric world. I do the same to relay collision and trigger events.

GIF: physics demo of the toolset

Epilogue and conclusions

After I implemented all the suggestions from the forum, I raised the price to 40 dollars so that it wouldn't look like just another cheap plugin with five lines of code. I will be delighted to answer questions and listen to your advice; all kinds of criticism are welcome. Thank you!

Unity Asset Store page link: Isometric 2.5D Toolset
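Coming back to the physics trick described above, the per-frame mirroring of the invisible 3D world into the isometric one might be sketched like this (a rough illustration only; the component names, the axis mapping, and how the hidden body is created are all assumptions, not the plugin's actual API):

```csharp
// Sketch of copying simulation results from a hidden 3D rigidbody back
// into the visible isometric object. Names and axis mapping are assumptions.
using UnityEngine;

public class IsoRigidbodySketch : MonoBehaviour {
    public IsoObject isoObject; // the visible isometric object
    Rigidbody _ghostBody;       // counterpart in the hidden 3D scene,
                                // assigned when that scene is built (omitted here)

    void FixedUpdate() {
        // PhysX has just simulated the hidden scene; copy the result back,
        // mapping 3D (x, z, y) onto isometric (x, y, height).
        var p = _ghostBody.position;
        isoObject.position = new Vector3(p.x, p.z, p.y);
    }
}
```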
    37. Well, the same can be said about game physics: it's just an approximation of reality. It uses a simplified model (based on classical mechanics) to describe the interaction of physical objects. Light transport can be described as light rays going around the scene, ignoring effects like polarisation etc., yet it's a good enough physical model of our visible world. I guess what I'm trying to express is that everything related to game physics (including optics and classical mechanics) is based on a model, a simplified view of the world, that is still able to produce convincing results (to an extent). I guess I'll start with a lighting engine first; the other parts are not that interesting, to me at least. Thanks for the links.
    38. Makusik Fedakusik

      Anyone who wants to write a little game engine?

      No one writes game engines from scratch, and the same was true 20 years ago. You can take some demos from GitHub: https://github.com/SDraw/run-on-coal https://github.com/JoeyDeVries/Cell http://tesseract.gg/ and try to merge them together. Because every single part of a game engine nowadays demands that you become a specialist: an SSE/AVX/NEON hacker, a graphics-API hacker, a physics hacker, a network hacker, and so on.
    39. Irusan, son of Arusan

      Anyone who wants to write a little game engine?

      It really doesn't. In actual physics, the refraction of water comes from the movement of light through the whole body of water and the way individual photons interact millions upon millions of times with individual atoms. Similarly, the surface of water moves because of the interaction of trillions of molecules. No game models this or anything close to it. Graphics is the art of imitating physical reality, not the art of reproducing it. Describing the way shaders mimic the look of water as "physics" reduces the term to near meaninglessness.
    40. Well, if you read my post you will see that I checked memory alignment and the max texture size. Those weren't the problem. Anyway, problem solved.
    41. I'm not a spelling Nazi but you sure do make it hard to want to read your posts.
    42. Variations in thickness often result from numerical errors in scaling. Please make 110% sure you're not scaling anything, from font description to bitmap, or from bitmap to screen. One way to get a handle on it is to dump the bitmap you get from the font as rows of text characters, and use a non-proportional font to check. EDIT: You may want to disable anti-aliasing here. The "g" is drawn incorrectly: the full circle of the letter should rest at the baseline, and the extension at the bottom should be below the baseline. Just look how the "g" is rendered here relative to the other text. Not sure of the details of FreeType any more, but iirc you could have a negative offset wrt the baseline if you had to start below it. As you can see at https://www.freetype.org/freetype2/docs/tutorial/step2.html the bottom of the letter can be below the "origin".
    43. Of course rendering involves optical physics, but that is usually considered part of the graphics stack (shading algorithms) because it is the same (or nearly the same) for any semi-opaque object. By "physics" here I mean the simulation of waves, droplets and so on, which is what distinguishes modern water from the simple semi-opaque plane that simulated water in games 20+ years ago.
    44. Yes, because realistic light interaction requires simulating physical laws, and therefore graphics involves physical calculation, i.e. solving the rendering equation. So I would say you can't really separate those concepts -> realistic-looking water needs "100%" physics.
    45. Hi, I need help setting up ImGui. I am trying to render UI using the ImGui framework on DX12. I followed the ImGui example project for DX12, but so far I've had no luck, even after carefully looking through my code. The debug output shows no errors either. I'm calling the ImGui functions from a separate class with static methods, as indicated below:

void GUI::Initialize(HWND hwnd, ID3D12Device* device, D3D12_CPU_DESCRIPTOR_HANDLE srvCpuHandle, int num_frames_in_flight, DXGI_FORMAT rendertargetformart)
{
    D3D12_DESCRIPTOR_HEAP_DESC fontHeapDesc{};
    fontHeapDesc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE;
    fontHeapDesc.NodeMask = 0;
    fontHeapDesc.NumDescriptors = 1;
    fontHeapDesc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV;
    device->CreateDescriptorHeap(&fontHeapDesc, IID_PPV_ARGS(sm_FontHeap.GetAddressOf()));

    D3D12_GPU_DESCRIPTOR_HANDLE fonthandle = sm_FontHeap->GetGPUDescriptorHandleForHeapStart();

    IMGUI_CHECKVERSION();
    ImGui::CreateContext();
    ImGuiIO& io = ImGui::GetIO(); (void)io;
    ImGui_ImplWin32_Init(hwnd);
    ImGui_ImplDX12_Init(device, num_frames_in_flight, rendertargetformart, srvCpuHandle, fonthandle);
    ImGui::StyleColorsDark();
}

void GUI::Update()
{
    ImGui_ImplDX12_NewFrame();
    ImGui_ImplWin32_NewFrame();
    ImGui::NewFrame();
    {
        ImGui::Begin("Some Window");
        ImGui::Text("Random text here");
        ImGui::Button("Button");
        ImGui::End();
    }
}

void GUI::RenderOverlay(ID3D12GraphicsCommandList* cmdlist)
{
    cmdlist->SetDescriptorHeaps(1, sm_FontHeap.GetAddressOf());
    ImGui::Render();
    ImGui_ImplDX12_RenderDrawData(ImGui::GetDrawData(), cmdlist);
}

void GUI::Shutdown()
{
    ImGui_ImplDX12_Shutdown();
    ImGui_ImplWin32_Shutdown();
    ImGui::DestroyContext();
}

Microsoft::WRL::ComPtr<ID3D12DescriptorHeap> GUI::sm_FontHeap = nullptr;

And then I call these methods in the graphics class:

gpuContext->TransitionResource(currbackbuffer, D3D12_RESOURCE_STATE_RENDER_TARGET);
//gpuContext->SetViewport();
//gpuContext->SetScissorRect();
gpuContext->ClearRenderTarget(currbackbuffer);
gpuContext->ClearDepthStencil(dephbuffer);
gpuContext->SetRenderTargets(currbackbuffer, dephbuffer);

//Render GUI
GUI::RenderOverlay(gpuContext->GetCommandList());

gpuContext->TransitionResource(currbackbuffer, D3D12_RESOURCE_STATE_PRESENT);
gpuContext->ExecuteCommands();
GraphicsRoot::Present();
uint64_t frameFenceVal = gpuContext->Finish();

Any help on this will be appreciated.
    46. Wiljan

      Artist Looking for Programmer

      Hi, are you still interested in finding a programmer?
    47. ECS is a solution direction if inheritance or interfaces fail. From your post I gathered you don't use those much yet, so ECS is a possible solution two steps away from your problem. I'd suggest you get a good understanding of inheritance and interfaces first (in particular, when not to use them) before you venture into ECS. I think my Behavior object is at least in the direction of ECS, but I am not sure. I am more of a pragmatic programmer, picking the ideas that fix the problem rather than knowing exactly what is "proper ECS" or "proper OOP" or "proper Foo-Pattern" or "proper <whatever>".

Nice to see you've already moved on to reading data from a file. As to how to write behavior in a file: whatever you do, it must be text, since that's the only thing you can write in a text file. Other adventure-authoring systems tend to allow writing source code in some form, which they load and interpret, but that's likely too complicated for you at this time (in the general case you end up with scanner and parser generators like yacc or bison; both are C-based, and I have no idea what exists in C#, but it's likely similar). A much simpler form is to give each behavior a name; loading behavior into an Item is then just a list of such names. A switch statement is one option for converting a name to an object in the program. Another option is to use a Dictionary, which is quite simple if none of the behavior objects has state. (That is, they don't have any variables inside that differ between behavior objects in different Items.) In that case, you can make a "Dictionary<string, Behavior> behaviors;" dictionary, where Behavior is the base class (or interface) of all behaviors. Getting a behavior is then a simple dictionary lookup, something like "behaviors.get(loaded_name)" or "behaviors[loaded_name]".
(Not being a C# programmer, I don't know exactly how to do that, but https://stackoverflow.com/questions/12169443/get-dictionary-value-by-key seems to point to a solution.)

If the behavior objects do have state (and I can imagine that being useful), you need to construct a new Behavior object for each Item. The usual solution for that is the Factory pattern. Instead of a dictionary from name to behavior, you have a dictionary from name to behavior-builder. Getting a behavior is then a two-step process: first get the builder (the factory) from the dictionary, then ask the builder for a fresh behavior object. I can imagine that this is too complicated for now, and a switch is the better solution for you at this point.

How to find and perform a behavior at runtime is indeed another puzzle. I would try to avoid splitting the actions as much as possible. The advantage is fewer cases to deal with and (probably more important for the future) simpler expansion of the set of actions. If you want to add "boinc" as a new action (no idea what it would do, just an example), you don't want to have to write a new derived BoincAction, extend all existing behavior code for this new Boinc action, etc. So instead, why not let the command processor ask the Item for a Behavior object matching the given action string (or action ID)? The Item then asks all its behaviors whether they understand that action, and if one does, it is returned. The command processor thus gets (or doesn't get) a behavior object, which it then executes ("behavior, please do your stuff"). Nothing in this setup knows what action is performed exactly, except the behavior object itself, but the latter is supposed to know, eh? The command processor and the Item don't know if you typed "eat", "open", "rub", or (in the future) "boinc".
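A minimal C# sketch of the dictionary and factory ideas described above (the names IBehavior, EatBehavior, and the "eatable" key are invented for illustration, not taken from the original poster's code):

```csharp
// Sketch: mapping behavior names loaded from a text file to behavior objects.
// All type and key names here are invented for illustration.
using System;
using System.Collections.Generic;

interface IBehavior {
    bool Understands(string action); // does this behavior handle the typed action?
    void Perform();
}

class EatBehavior : IBehavior {
    public bool Understands(string action) { return action == "eat"; }
    public void Perform() { Console.WriteLine("You eat it."); }
}

static class BehaviorRegistry {
    // Map a name from the data file to a builder (factory), so every Item
    // gets its own fresh behavior object with its own state.
    static readonly Dictionary<string, Func<IBehavior>> _factories =
        new Dictionary<string, Func<IBehavior>> {
            { "eatable", () => new EatBehavior() },
        };

    public static IBehavior Create(string name) { return _factories[name](); }
}
```

If the behaviors are stateless, the Func<IBehavior> values can simply be replaced by shared IBehavior instances.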
    48. Of course, because physics involves the same simulations as scientific software, with the only difference that it is optimized for speed (which is much, much harder to achieve) instead of precision. Also, if you ask any such PhD what is involved in, for example, a realistic-looking water effect, he or she will answer that it is 99% physics and 1% graphics. Developers (especially highly qualified developers such as PhDs) draw the line between areas according to the sciences involved, unlike gamers (and even managers and promoters), who call anything visible on screen "graphics". If you saw a bar glowing like the metal "graphonium", you would never say it is made of "physonium" metal covered with a thin layer of "graphonium", and you would certainly never suspect that the thin layer consists of two sublayers made from different isotopes of "graphonium". In fact, one of the most important parts of graphics the player never sees on screen at all, because it performs the various clipping algorithms that determine what actually needs to be rendered, without which the game would be far too slow.
    49. jbadams

      Catch the kids: Priest simulator game

      This is not the place to discuss the reputation system, politics, or related topics. I've hidden a couple of posts, let's keep any further discussion on topic please. On topic, I personally find humour that makes light of paedophilia extremely distasteful - I suspect for better or worse that may be a common reaction to your work.
    50. Of course, it has different implementations of both the components and the architecture, but it has the same set of components. If you want to have, for example, characters in your game, you need components for a bone system and inverse kinematics in the engine in any case, regardless of whether there is a single character in your world or the engine models every person in a galaxy. That was already true 25-30 years ago for low-end solutions. For example, in a tunnel walker/shooter you only need to render the couple of rooms the player can see at a time, regardless of the total size of the maze; otherwise it really will be slow. With open-space scenes it is much harder to implement, but the concept is the same: draw only a small part of a big world at a time. That is why demos exist: they clearly show what an engine is able to do for a world of any size.