Search the Community

Showing results for tags 'Custom'.

Found 71 results

  1. printf("Hello %s", "fellow game makers!"); I'm about to make a game engine, and I'm looking for advice!

     Background
     I have a game engine project I've been thinking about a lot lately. I know it's going to be hard; in fact, it's the most challenging programming task I can think of. With that said, I'm willing to put in the time and effort, and I'm looking for some advice to keep the project up and running, so I'm asking you, game makers! I have a lot of passion for this project. I've tried making game engines before, but they have all stalled. Some failed because of lack of focus, some because of a disorganised structure, some because of lack of experience, too big a scope, an unclear destination... you get the point. Now I'm taking a different approach: I'm doing the boring part first, the pre-planning, the research, and so on. That's partly why I'm here asking you. I'll lay out my plan for you.

     Prerequisites
     I'm going to keep technical terms to a minimum, so no spoiling which graphics APIs or libraries I'm going to use; that's just asking for political warfare. This is more about project management, avoiding pitfalls and such. The engine is going to be a 2D engine. When I feel finished (probably in a couple of years) I will expand to 3D, but that's for another time. Because it's a game engine, it should handle any type of 2D game: side-scrolling, top-down, hell, even click adventures!

     Disclaimer
     Sorry if my English is a bit wacky; don't judge! The game list (you'll read about it soon) is just for experience purposes. I don't want to face any kind of legal action because I stole some idea, so it's for personal use only. My own ÜBER-awesome final game, if ever completed, will be released to the public. I first posted this on Stack Overflow and was shut down pretty hard for being too broad a topic, which is understandable. I'm hoping this is the right forum; I'm just looking for some friendly advice. Kinda hard to get on this internet thingamabob...

     The Plan
     Start simple and work my way towards a more and more advanced game engine. My long-term goal is my very own advanced 2D game (of course built on my engine). As a bonus, I might release the source code of the game engine if I'm happy with how it turns out. I believe in short-term goals to keep up my motivation and the feeling of progress, but also in major goals to strive for, to always keep myself challenged, to have bits and pieces to be proud of, and most importantly to have something to look forward to. Some of my older attempts failed because I lost focus and stopped coding for a while. This time around I think it's best to write at least a few lines of code every week. My "average goal" is to code for at least a couple of hours every weekend, just so I don't stop coding, which is the worst pitfall (I think). My strategy is a list of games to make on my journey, always using the list as a unit-testing tool (surely I'll have to redo older games once my engine gets up to speed). The list looks a bit like this.

     Game list, major hits
     1. Pong
     2. One-level platformer (extremely restricted)
     3. Extended one-level platformer with screen scrolling, jumping, etc.
     4. Same level with added sprites/animation
     5. Same level with Goomba-like enemies and a finish line
     6. Multiple levels!
     7. Super major: a complete, short, single-player Mario-like game, with different enemies, levels and of course a boss
     8. Top-down 2D game, Bomberman-like
     9. Bomberman-like multiplayer
     10. ... This goes on for a while: some smaller games, some super major.

     Smaller technical milestones to start with (I know I said "no technical talk", but this is the extent of it):
     101. Graphical window (OK, it's not a game, but I have to start somewhere, right?)
     102. Draw a triangle [draw objects]
     103. Pong, very hardcoded (no help from the game engine for collisions and so on)
     First game: PONG
     201. Textures
     202. Simple physics (gravity, friction) and collision
     203. Player controller
     204. ...
     First platformer: a one-level platformer where I can jump onto objects and such. No enemies, no screen scrolling, just a super simple platformer.
     301. Animation
     302. Add screen scrolling
     303. Static enemies
     304. Super simple AI (move toward the player)
     305. ... Keep adding until I can complete my list of games

     This is of course not the full list; I just don't want to TL;DR. If you are still here, you are the GREATEST!

     Some concerns
     The more games on my list I complete, the longer each next one will take. The more powerful the feature, the longer it will take. Multiplayer, for instance, is no easy task...

     Advice
     Am I on the right track? Would you do something different? What do you think will work, and what is risky? Have you tried making a game engine yourself? What kind of pitfalls did you encounter?
  2. For my test case I have a sphere sliding across a horizontal plane that will eventually come into contact with a vertical plane. I have figured out the correct way to handle a moving sphere against a plane, for sliding, resting contact and bouncing off of it. However, that was always with a single rigid body against a single static collision object. Now there are two static collision objects. The static colliders are added in this order: 1) the vertical plane, 2) the horizontal plane. As a result the loop finds the sliding contact with the horizontal plane (index 1), but it will also find a collision with the vertical plane (index 0) once the sphere reaches it. With the logic I have below, the collision with the vertical plane is never acted upon. How should I ensure that the sphere responds to it?

     float fAccumulator = 0.0f;
     while(fAccumulator < fElapsedTime && mRigidBodyObjects.size() > 0)
     {
         F32 left_time = fElapsedTime - fAccumulator;
         for(unsigned int i = 0; i < mRigidBodyObjects.size(); ++i)
         {
             int j1 = -1;
             RigidBodyCollisionResult crFirstCollisionResult;
             crFirstCollisionResult.fCollisionTime = FLT_MAX;
             RigidBodyCollisionResult crCollisionResult;
             for(unsigned int j = 0; j < mStaticObjects.size(); ++j)
             {
                 crCollisionResult = mRigidBodySolver.Collide(mRigidBodyObjects[i], mStaticObjects[j], left_time);
                 if(crCollisionResult.enCollisionState == WILL_COLLIDE)
                 {
                     if(crCollisionResult.fCollisionTime <= crFirstCollisionResult.fCollisionTime)
                     {
                         crFirstCollisionResult = crCollisionResult;
                         j1 = j;
                     }
                 }
                 else if(crCollisionResult.enCollisionState == HAS_COLLISION || crCollisionResult.enCollisionState == RESTING_CONTACT)
                 {
                     crFirstCollisionResult = crCollisionResult;
                     j1 = j;
                 }
             }
             if(crCollisionResult.enCollisionState == WILL_COLLIDE || crCollisionResult.enCollisionState == NO_COLLISION)
             {
                 mRigidBodyObjects[i]->ApplyGravity();
             }
             if(j1 != -1)
             {
                 if(crFirstCollisionResult.enCollisionState == WILL_COLLIDE && crFirstCollisionResult.fCollisionTime <= fElapsedTime)
                 {
                     mRigidBodyObjects[i]->Update(crFirstCollisionResult.fCollisionTime);
                     mRigidBodySolver.HandleCollisionReponse(mRigidBodyObjects[i], mStaticObjects[j1], crFirstCollisionResult, crFirstCollisionResult.fCollisionTime);
                     fAccumulator += crFirstCollisionResult.fCollisionTime;
                 }
                 else if(crFirstCollisionResult.enCollisionState == HAS_COLLISION || crFirstCollisionResult.enCollisionState == RESTING_CONTACT)
                 {
                     mRigidBodySolver.HandleCollisionReponse(mRigidBodyObjects[i], mStaticObjects[j1], crFirstCollisionResult, left_time);
                     mRigidBodyObjects[i]->Update(left_time);
                     fAccumulator += left_time;
                 }
                 else
                 {
                     mRigidBodySolver.HandleCollisionReponse(mRigidBodyObjects[i], mStaticObjects[j1], crFirstCollisionResult, left_time);
                     mRigidBodyObjects[i]->Update(left_time);
                     fAccumulator += left_time;
                 }
             }
             else
             {
                 mRigidBodyObjects[i]->Update(left_time);
                 fAccumulator = fElapsedTime;
             }
         }
     }
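     One likely reason the vertical plane never gets handled, plus a hedged sketch (it reuses the identifiers from the post above and assumes Collide/HandleCollisionReponse behave as described): the HAS_COLLISION / RESTING_CONTACT branch overwrites crFirstCollisionResult unconditionally, so the resting contact with the horizontal plane (tested last) always replaces an earlier WILL_COLLIDE with the vertical plane, and the gravity check reads crCollisionResult (the last object tested) rather than the selected result. A common alternative is to resolve contacts that already exist, but cap the integration step at the earliest future collision so it cannot be stepped over. A minimal sketch of the inner selection only, not a drop-in fix (gravity handling omitted for brevity):

     // Sketch only: resolve existing contacts, then advance no further than the
     // earliest *future* collision so the outer while-loop lands exactly on it.
     int j1 = -1;
     RigidBodyCollisionResult crFirstCollisionResult;
     float fStepTime = left_time;

     for (unsigned int j = 0; j < mStaticObjects.size(); ++j)
     {
         RigidBodyCollisionResult r = mRigidBodySolver.Collide(mRigidBodyObjects[i], mStaticObjects[j], left_time);

         if (r.enCollisionState == HAS_COLLISION || r.enCollisionState == RESTING_CONTACT)
         {
             // Already touching (e.g. the horizontal plane): resolve it now so sliding keeps working.
             mRigidBodySolver.HandleCollisionReponse(mRigidBodyObjects[i], mStaticObjects[j], r, left_time);
         }
         else if (r.enCollisionState == WILL_COLLIDE && r.fCollisionTime < fStepTime)
         {
             // Future collision (e.g. the vertical plane): remember only the earliest one.
             fStepTime = r.fCollisionTime;
             crFirstCollisionResult = r;
             j1 = (int)j;
         }
     }

     // Integrate only up to the earliest future collision, then respond to it.
     mRigidBodyObjects[i]->Update(fStepTime);
     if (j1 != -1)
     {
         mRigidBodySolver.HandleCollisionReponse(mRigidBodyObjects[i], mStaticObjects[j1], crFirstCollisionResult, fStepTime);
     }
     fAccumulator += fStepTime;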
  3. Mitja Prelovsek

    Alpha Blending Unreal Engine 4 with Lightact

    Do you want to combine Unreal Engine's content with other content in Lightact media server? Sit back and watch this 4 min tutorial to learn how: Alpha Blending Unreal Engine 4 with Lightact:
  4. Mitja Prelovsek

    Alpha Blending Unreal Engine 4 with Lightact

    Do you want to combine Unreal Engine's content with other content in Lightact media server? Sit back and watch this 4 min tutorial to learn how: Alpha Blending Unreal Engine 4 with Lightact: View full story
  5. applicant42

    Update 0.19.0

    Still just a start and very much a WIP, but a new step is finally done: new assets and a new sandbox map… The isometric and alpha math has been rewritten, but it still needs a lot of refactoring and review. http://game.applicant42.com/
  6. jb-dev

    Menu transitions

    From the album: Vaporwave Roguelite

    This is how menus transition from one to another.
  7. Hey everyone! My name is Freya, and I am currently developing a board game called 'Confined'. The game itself is set in a prison, where all players scavenge for items, do missions and interact with other inmates in a desperate attempt to escape... No one can trust one another as opportunities for betrayal and sabotage constantly emerge. If the premise described above interests you, I am looking for all sorts of people to help out! Whether you're an aspiring artist, composer, writer or just a geeky person bursting with ideas, feel free to contact me! Gmail: confinedDev@gmail.com I'll provide more details upon contact. I hope to hear from some of you! 😊
  8. I'm a beginner in C; maybe I don't understand arrays in C yet, because as a web developer I used languages like ASP, Python, PHP, JavaScript and ActionScript, where this was easier. My goal is a two-dimensional array. First dimension: the current level of the game; second dimension: the object structs. When I compile and execute, it seems the first dimension is being used as the structure.

     C:\_dev>gcc struct_def.c -o struct_def.exe -lmingw32
     C:\_dev>struct_def.exe
     result
     x = 420.000000
     x = 0.000000   <- instead of 1200.000000
     C:\_dev>

     The game is very small and I would like to hardcode all the level data, without having to load the levels dynamically and without malloc. I want to iterate over the array to refresh all the positions, do collision checks, etc., then iterate over it again to work out which objects are inside the camera so I can draw them. How can I use the array the way I would like, with the first dimension being the level and the second dimension being all the struct data? Thanks for helping 😊

     C (ISO C99)
     <code>
     #include <stdio.h>
     #include <stdlib.h>
     #include <stdbool.h>

     struct dynamic_pfl {
         bool player;
         int type;
         int pause_time;
         int moving_dir;
         float mspeed;
         float hspeed;
         float vspeed;
         float x;
         float y;
         float friction;
         int width;
         int height;
         int state;
     };

     int main() {
         int level = 5; // max levels of the game
         struct dynamic_pfl arr_dynamic_pfl[level][20] = {
             {false,1,4,2,40,0,0,420,580,0,128,64,0},
             {false,2,4,2,40,0,0,1200,580,0,128,64,0},
             {false,1,4,2,40,0,0,111,580,0,128,64,0},
             {false,2,4,2,40,0,0,2222,580,0,128,64,0},
             {false,2,4,2,40,0,0,666,580,0,128,64,0}
         };
         printf("result\n");
         printf("x = %f\n", arr_dynamic_pfl[0][0].x);
         printf("x = %f", arr_dynamic_pfl[0][1].x);
         return 0;
     }
     </code>
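     A short note on what is happening in the snippet above, with a hedged sketch of one way to get the intended layout: in the initializer of a two-dimensional array, each inner brace pair corresponds to one element of the first dimension (one whole level of 20 structs), not to one struct, so 420 and 1200 end up in arr_dynamic_pfl[0][0] and arr_dynamic_pfl[1][0] rather than in [0][0] and [0][1]. In addition, an array whose size comes from the non-constant variable level is a variable-length array and cannot have an initializer at all, so the dimensions should be compile-time constants. A minimal sketch (the brace structure is the same in C and C++; the constant names are made up for illustration):

     #include <stdio.h>
     #include <stdbool.h>

     struct dynamic_pfl { bool player; int type; int pause_time; int moving_dir;
                          float mspeed, hspeed, vspeed, x, y, friction;
                          int width, height, state; };

     enum { MAX_LEVELS = 5, MAX_OBJECTS = 20 };   /* compile-time sizes, not a VLA */

     /* Outer braces: one group per level. Inner braces: one group per object. */
     static struct dynamic_pfl arr_dynamic_pfl[MAX_LEVELS][MAX_OBJECTS] = {
         {   /* level 0: five objects; the rest of the row is zero-initialized */
             {false,1,4,2,40,0,0, 420,580,0,128,64,0},
             {false,2,4,2,40,0,0,1200,580,0,128,64,0},
             {false,1,4,2,40,0,0, 111,580,0,128,64,0},
             {false,2,4,2,40,0,0,2222,580,0,128,64,0},
             {false,2,4,2,40,0,0, 666,580,0,128,64,0}
         }
         /* levels 1..4 stay zero-initialized until they get their own data */
     };

     int main(void) {
         printf("x = %f\n", arr_dynamic_pfl[0][0].x);   /* 420.000000 */
         printf("x = %f\n", arr_dynamic_pfl[0][1].x);   /* 1200.000000 */
         return 0;
     }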
  9. About me: Hi, my name is Christian and I am an enthusiast for games and simulations in the field of artificial life. I am an applied mathematician and worked for several years as a researcher on mathematical problems in damage mechanics. Currently I am employed as a software engineer.

     Physics engine: What I want to announce here is a 2D physics engine for damageable and glueable rigid bodies. Temperature effects such as radiation and heat conduction are also simulated via certain particles and internal energy distributions. The building blocks (nodes) of the bodies can be programmed with a simple assembler-like language enriched with physical actions, communication, sensing and procreation functions. A portion of code is stored in each node as machine code and is also potentially subject to change. The code is processed by tokens moving/forking on a directed graph which constitutes the body. To give an impression of what the engine looks like in practice, I attach some screencasts showing different examples. The engine performs multithreaded computations and is encapsulated in a C++ framework. I think it could be used to create realistic effects for space games. I am very interested in your ideas, opinions and constructive criticism.
     https://www.youtube.com/watch?v=DG61uprKzWg
     https://www.youtube.com/watch?v=2L3Cr2WwHDc

     Performance: With the current implementation one can simulate roughly 50k-100k building blocks at 30 fps using 8 CPU threads (measured on a Core i7-6700). There is no finished GPU implementation yet, but after experimenting with CUDA I would estimate that 300k could be simulated on my GPU (GeForce GTX 960). To get an idea of the computational effort, consider that every building block in the scene (the size of a pixel in the video) can be glued to or detached from a body and performs calculations or other physical actions. The high degree of dynamics in the simulation (damage and coalescence, the function of nodes can change) was one of the biggest challenges in the development. For broader applications it would be quite nice to port this engine to Unity, but I fear that performance would then decrease substantially.

     Editor: In order to demonstrate the engine I have developed an editor/simulator named alien. It allows you to create and modify simulations filled with bodies. alien provides a pixel view as well as a graph and code editor for designing and visualizing every detail in a scene. As a demonstration I have designed a replicating machine that consumes nearby material (see video below). More complex machines with sensing, communication and attacking skills are also conceivable. Thus the material in the simulated world could be equipped with life-like or even intelligent behavior.
     https://www.youtube.com/watch?v=Slba3g7-LK4

     More information and download: alien-project.org (it's open source)
  10. I have some questions about hex formats (assembler output) and linkers:

      1. I disassembled a raw binary (no ELF, PE, etc. headers) of x64 assembly code and got this result:

         0:  66 89 c8        mov ax,cx
         3:  e8 00 00 00 00  call 8 <gh>
         0000000000000008 <gh>:
         8:  66 89 c2        mov dx,ax

         I understand how the byte offsets work ('66' is byte 0, '89' is byte 1, 'c8' is byte 2, and the call instruction starts at byte 3, which is why '3:' is there), but by that logic, shouldn't 'call gh' be encoded as 'e8 00 00 00 00 00 00 00 08' instead of 'e8 00 00 00 00', since the byte offset of the first instruction of gh ('mov dx,ax') is 8 and the output is 64-bit?

      2. Using the example above, if the endianness is little-endian, how would the assembler swap the bytes? Per instruction? Like going from { 66 89 c8, e8 00 00 00 00 (in case that is correct and I'm wrong in question 1), 66 89 c2 } to { c8 89 66, 00 00 00 00 e8, c2 89 66 }?

      3. And would big-endian then look like the original, un-swapped code from question 2?

      4. Suppose I mark gh as .globl. Would the assembler then create a map/symbol table file where gh is associated with 'e8 00 00 00 00' (again, in case that is correct and I'm wrong in question 1), and would the linker look into such a map file, so that if another object file calls gh, the linker translates 'call gh' into 'e8 00 00 00 00'?
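      An aside on question 1, since the encoding is easy to check: E8 is a relative near call, so its four operand bytes are a signed 32-bit displacement measured from the end of the call instruction, not a 64-bit absolute address, which is why 'call 8 <gh>' at offset 3 encodes as e8 00 00 00 00 (8 - (3 + 5) = 0). A tiny self-contained sketch of the arithmetic, with the values taken from the listing above:

      #include <cstdint>
      #include <cstdio>

      int main() {
          // E8 (call rel32) stores a signed 32-bit displacement measured from the
          // address of the *next* instruction; the call instruction itself is 5 bytes.
          const std::uint64_t call_offset   = 0x3;             // where "call gh" starts
          const std::uint64_t next_insn     = call_offset + 5; // 1 opcode byte + 4 displacement bytes
          const std::uint64_t target_offset = 0x8;             // offset of <gh>

          const std::int32_t rel32 = static_cast<std::int32_t>(target_offset - next_insn);
          std::printf("rel32 = %d\n", rel32);   // prints 0 -> bytes e8 00 00 00 00

          // Endianness only affects how a multi-byte immediate/displacement is laid
          // out *within* its instruction: a displacement of 0x12345678 would follow
          // the E8 as 78 56 34 12. Opcode bytes are not reordered, and instructions
          // are never byte-swapped as a whole.
          return 0;
      }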
  11. Hodgman

    Imperfect Environment Maps

    In 22, our lighting environment is dominated by sunlight, but there are also many small emissive elements everywhere. What we want is for all those bright sunlit metal panels and the many emissive surfaces to be reflected off the vehicles. Being a high-speed racing game, we need a technique with minimal performance impact, and at the same time we would like to avoid large baked data sets in order to support easy track editing within the game.

    This week we got around to trying a technique presented ten years ago for generating large numbers of shadow maps extremely quickly: Imperfect Shadow Maps. In 2008, this technique was a bit ahead of its time -- as indicated by the performance data being measured at 640x480 resolution and 15 frames per second! It is also a technique for generating shadows, for use in conjunction with a different lighting technique -- Virtual Point Lights. In 22, we aren't using Virtual Point Lights or Imperfect Shadow Maps! However, the paper mentions that ISMs can be used to add shadows to environment map lighting... By staying up too late and misreading this section, you could get the idea that you could use the ISM point-cloud rendering ideas to actually generate large numbers of approximate environment maps at low cost... so that's what we implemented.

    Our gameplay code already had access to a point cloud of the track geometry. This data set was generated by simply extracting the vertex positions from the visual mesh of the track -- a portion is visualized below.

    Next we somehow need to associate lighting values with each of these points... Typically for static environments you would use a light-baking system for this, which can spend a lot of time path-tracing the scene (or similar) before saving the results into the point cloud. To keep everything dynamic, we've instead taken inspiration from screen-space reflections. With SSR, the images that you're rendering anyway are re-used to provide data for reflection rays. We are reusing those images to compute lighting values for the points in our point cloud. After the HDR lighting is calculated, the point cloud is frustum-culled and each point is projected onto the screen (after a small random offset is applied to it). If the projected point is close in depth to the stored Z-buffer value at that screen pixel, then the lighting value at that pixel is transferred to the point cloud using a moving average. The random offsets and the moving average allow many different pixels near the point to contribute to its color. Over many frames, the point cloud eventually gets colored in. If the lighting conditions change, the point cloud will update as long as it appears on screen. This works well for a racing game, as the camera is typically looking ahead at sections of track that the car is about to drive into, allowing the point cloud for those sections to be updated with fresh data right before the car arrives.

    Now, if we take the points that are near a particular vehicle, project them onto a sphere, and then unwrap that sphere to 2D UV coordinates (at the moment we are using a world-space octahedral unwrapping scheme, though sphere maps, hemispheres, etc. are also applicable; using view space instead of world space could also help hide seams), then we get an image like the one below.
    [Image: left is the RGB components, right is alpha, which encodes the solid angle each point should have covered if we'd actually drawn the points as discs/spheres instead of as points. Nearby points have bright alpha, while distant points have darker alpha.]

    We can then feed this data through a blurring filter. In the ISM paper they do a push-pull technique using mipmaps, which I've yet to implement. Currently this is a separable blur weighted by the alpha channel. After blurring, I wanted to keep track of which pixels initially had valid alpha values, so a sign bit is used for this: pixels that contain data only thanks to blurring store negative alpha values. Below, left is RGB, middle is positive alpha, right is negative alpha.
    [Images: pass 1 - horizontal; pass 2 - vertical; pass 3 - diagonal; pass 4 - other diagonal, plus alpha mask generation]
    In the final blurring pass, the alpha channel is converted to an actual/traditional alpha value (based on artist-tweakable parameters), which will be used to blend with the regular lighting probes. A typical two-axis separable blur creates distinctive box shapes, but repeating the process with a 45° rotation produces hexagonal patterns instead, which are much closer to circular.

    The result of this is a very approximate, blobby, kind-of-correct environment map, which can be used for image-based lighting. After this step we calculate a mip chain using standard IBL practices for roughness-based lookups.

    The big question is: how much does it cost? On my home PC with an NVIDIA GTX 780 (not a very modern GPU now!), the in-game profiler showed ~45µs per vehicle to create a probe, and ~215µs to copy the screen-space lighting data to the point cloud.

    And how does it look? When teams capture sections of our tracks, emissive elements show that team's color. Below you can see a before/after comparison, where the green team color is now actually reflected on our vehicles.

    In those screens you can see the quick artist-tweaking GUI on the right side. I have to give a shout-out to Omar's Dear ImGui project, which we use to very quickly add these kinds of developer GUIs.
    Point Radius - the size of the virtual discs that the points are drawn as (used to compute the pre-blurring alpha value, dictating the blur radius).
    Gather Radius - the random offset added to each point (in meters) before it's projected to the screen to try to collect some lighting information.
    Depth Threshold - how close the projected point needs to be to the current Z-buffer value in order to collect lighting info from that pixel.
    Lerp Speed - a weight for the moving average.
    Alpha Range - after blurring, scales how softly alpha falls off at the edge of the blurred region.
    Max Alpha - a global alpha multiplier for these dynamic probes - e.g. 0.75 means that 25% of the normal lighting probes will always be visible.
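    Purely as an illustration of the screen-to-point-cloud transfer described above -- not the article's actual implementation, and with every type and function name invented for the sketch -- a CPU-style version of that step might look roughly like this:

    #include <cmath>
    #include <cstdint>
    #include <random>
    #include <vector>

    // Illustrative stand-ins for the engine's real structures.
    struct Float3 { float x = 0, y = 0, z = 0; };
    struct ProbePoint { Float3 position; Float3 color; };   // one point of the track point cloud

    struct ScreenBuffers {
        int width = 0, height = 0;
        std::vector<Float3> hdrColor;  // resolved HDR lighting, one entry per pixel
        std::vector<float>  depth;     // linear depth, one entry per pixel
    };

    // A real implementation would project with the camera's view/projection
    // matrices; this stub exists only so the sketch stays self-contained.
    bool ProjectToScreen(const Float3& /*worldPos*/, const ScreenBuffers& /*screen*/,
                         int& pixelX, int& pixelY, float& viewDepth)
    {
        pixelX = 0; pixelY = 0; viewDepth = 0.0f;
        return false;
    }

    // One frame of the transfer: jitter each point, reproject it, and if it lands
    // close enough to the stored depth, fold the on-screen lighting into its color
    // with a moving average. Over many frames the cloud converges to the lighting.
    void UpdatePointCloudLighting(std::vector<ProbePoint>& points,
                                  const ScreenBuffers& screen,
                                  float gatherRadius,     // random offset in meters
                                  float depthThreshold,   // max |depth difference|
                                  float lerpSpeed)        // moving-average weight
    {
        static std::mt19937 rng{1234};
        std::uniform_real_distribution<float> jitter(-gatherRadius, gatherRadius);

        for (ProbePoint& p : points)
        {
            Float3 jittered { p.position.x + jitter(rng),
                              p.position.y + jitter(rng),
                              p.position.z + jitter(rng) };

            int px, py; float viewDepth;
            if (!ProjectToScreen(jittered, screen, px, py, viewDepth))
                continue;                                   // off screen / behind camera

            const int idx = py * screen.width + px;
            if (std::abs(screen.depth[idx] - viewDepth) > depthThreshold)
                continue;                                   // occluded: pixel belongs to other geometry

            const Float3& lit = screen.hdrColor[idx];
            p.color.x += (lit.x - p.color.x) * lerpSpeed;   // exponential moving average
            p.color.y += (lit.y - p.color.y) * lerpSpeed;
            p.color.z += (lit.z - p.color.z) * lerpSpeed;
        }
    }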
  12. This will be a post mortem explaining how I failed the game dev competition for a tower defense game, so here it goes.

      At the start of June 2018 I found the tower defense challenge post by accident, read the description and was really excited, so I decided to apply and started right away - without any planning whatsoever. I created a new Visual C++ project and after 3 evenings of a couple of hours each I had the basic game mechanics nailed down: enemies could be spawned and moved from waypoint to waypoint along a direction, and randomly placed towers could hit and destroy them. Everything (tiles/enemies/towers) was defined in static int arrays, so I could adjust it however I liked. It was going very smoothly and I was very happy with it.

      The next 2 evenings were a nightmare. I made a new map and suddenly everything was broken. All the towers were shooting randomly, the enemies were not following the waypoints anymore, bullets missed all the enemies and so on; even the spawning was behaving weirdly. It took me 7 hours to find all those bugs and fix them. The following evening I refactored the code, made it more robust and fixed a few bugs. In addition I added basic HUD rendering to display lives, score, money, current wave, etc. Now it was looking really good and the game was already playable.

      The next day, for whatever reason, I decided to use the editor "Tiled" to set everything up. I have no idea why I wanted that; maybe I thought it would save me time or something, but I was wrong. Even after 5 evenings, I still couldn't figure out how to get towers/enemies/waves defined in the TMX file. So I partly gave up on that idea and ended up just defining the walkable tiles and the waypoints in the TMX file. In addition I created a shit-ton of code just to parse a TMX file - including writing a generic XML parser in C99. The only useful thing I made in those 5 evenings was that parser 😞 This entire process took me ~20 hours -> 4*8 wasted work hours minus 2 hours for writing the XML parser.

      Over the next 3 evenings I did a lot of the refactoring needed to get the TMX loaded the way I wanted. The result was not that bad: I could now define all the waypoints and the walkable and placeable tiles in the editor and set up as many spawners as I wanted. This took me about ~6 hours.

      The next 2 evenings were a blast! I was very productive, added a lot of functionality and fixed a lot of bugs. I now had a fine-looking HUD and multiple waves with multiple spawners for each wave. In addition I improved almost every part of the game; even the towers were rotating smoothly towards their targets now, and you could lose or win the game. The "game" finally started to take shape.

      Of course, after a blast comes the opposite: destruction and unproductivity! In one evening I broke the tower rotation, the enemy position prediction, the rendering and even the HUD. Why did everything suddenly break? Probably because I just wanted to make it "even better" -> over-complicating simple things! The following evening I reverted everything and simplified a lot of the game mechanics. Suddenly the enemy prediction worked and the tower rotation was correct and very smooth. But the font rendering seemed to be totally broken now - after switching to a new font - so I had no choice but to keep using the old font 😞

      Many evenings later, with a lot of delays in between, I finally fixed the nasty font rendering bug. It took me over 10 hours to find that bug and 5 seconds to fix it... Now the new font, or any other font, works just fine. In addition I made a few simple functions to render and handle UI buttons - to select the appropriate tower. Then I got sick and could barely do anything, so I was off for over a week.

      After that I wanted to get rid of the ugly dev graphics, so in 2 hours I made a push-style rendering system + an OpenGL implementation and switched everything over to it. It looked exactly the same as before, but now it was much more flexible and I could finally add sprites to the game. The next day I successfully added loading and rendering of sprites in about 30 minutes. Then I searched the net for a free tileset I could use to test the sprite rendering, and after I found one I changed the TMX map to use it. Over the following 4 evenings I wrote a lot of code for parsing/converting/rendering the tilesets from the TMX, but with wrong results. All the UVs were incorrect, and even after spending hours debugging I could not find the bug at all.

      Then there was a full week where I didn't do anything. The motivation was gone. The first evening of the following week I was still not motivated at all, but I wanted to get this finally fixed, so I forced myself to analyze the code again - searching for the UV bug - and after a short time I finally found it... It was just a typo... After fixing that typo all the UVs were correct and everything looked fine.

      The next evening I added 3 more layers to the TMX map, trying to make it look prettier. But there was a problem: the fixed map dimensions were not sufficient to make use of the tile sheet I was going for. So I decided to change the entire system from a fixed map size to a dynamic one, and this was pretty expensive time-wise; it took me around 2-3 hours. Now I had just a few days left before the deadline. Over the following days I moved all the wave/enemy/tower definitions into separate XML files, so I could start making the actual 20 waves/enemies and a few towers. Of course this required me to change a lot of the internal systems, but the XML parser I had written now paid off, and in an hour it was changed very easily. Then I started to fiddle around with the data, trying to add more waves and more towers... Such tasks are not my thing, so it took me two hours just to add another wave and another tower 😞

      So now I had one day left before the deadline and the game was not even close to finished. I had one level, two waves, two towers, two enemies and very basic game mechanics working, without upgradable towers. I also had no final art, no sound or music, not even a menu 😞 The following days I was really depressed about it and did not work on the project at all, so I failed and missed the deadline.

      So here are my reasons why I failed:
      1.) I didn't plan anything. I had no idea which art style I was going for, not the slightest idea what type of waves/towers/enemies I wanted, no idea what the level should look like, and I didn't set any goals or milestones or tasks whatsoever.
      2.) Forcing myself to use the Tiled editor was a huge mistake. For such a little project, one level would have been just fine. So why the heck did I need an editor when I just wanted one level anyway? The only things I needed to set up in the editor were the visual tiles, the walkable and placeable areas and the positions of the spawners, and that's it.
      3.) I added a lot of complexity without thinking it through. At work I always think things through, but for some f*cking reason on private projects I never do, and that always kills me. I should have stuck with the simplest solution in all cases; then I might have finished in time.
      4.) I didn't work on the game continuously. There were too many days when I wasn't working on the game at all. I should at least have done one little thing each day or something like that.

      But not everything was bad; at various points I made a lot of progress, and the last build I made was not that bad. It was playable, you can win or lose the game - it just lacks content in all places. So I decided to finish the game by the end of September - to have at least one finished game made in my life. That's almost two months from today; counting just the days, that should be doable - even with my limited time budget.
  13. Hi everyone. I guess this would be another one of 'those' questions. I'm a .NET programmer and I've been developing games in Unity for 3 years now, and recently finished my 5th game. But in some sense, I feel like I've reached a barrier. I don't want to disregard Unity in any way, nor UE4 (which I used for a month or so), but lately I've been feeling some urge to have more freedom in the development of my projects. That's when the idea of rolling my game from the ground up came back to mind. Besides wanting more control over some low-level details of the game, I want closer control over the scene system, especially so I can use my own map editor. And obviously, I'm excited by the learning prospects. If I were to use an engine again, I would go with Godot. And although I know I could extend Godot to my needs, that's unlikely to happen, because I'd easily get overwhelmed by the engine and the lack of good C++-API-specific documentation (the engine documentation is good, though). So I'm here to ask for your personal opinion. I know C/C++ well enough to start, and I've already toyed around with OpenGL. I also love this topic and data-oriented approaches. Should I use SDL or Allegro? Irrlicht, Urho, Ogre? Maybe build upon Cube or Torque? I'd like to learn while doing it, but how much work would it be to write a simple renderer in pure SDL? And to add shadows or shaders? Should I follow Handmade Hero, maybe? - I'm planning on Windows support, but with porting in mind. Starting with 2D and hoping to dive into 3D soon. - I'm not building an engine, but a game. Thank you all!
  14. Hello, I am designing my own game engine, just as a hobby, in C++. I like C++ as a language, but for development speed when actually making a game with the engine, I wish to use a simpler language like C# or Python to script game objects. Can anyone point me to any free online resources, or link to forum posts, that would help me understand how to enable my engine to use other languages for scripting, please? It would be very much appreciated. Thanks, Alice
  15. Hello, I am trying to design a game architecture. I have the script and story planned, scene by scene, moment by moment. The game's lifespan and replayability have also been planned... However, I lack the skills for building the AI and how it should operate against the players (wandering around for strategic positions), random number generation for attacks and combos, multiplayer features (between mobile devices or within servers), and simpler things like realistic FX for fire or thunder, screen noise and distortion... I am using OpenSpace 3D. Can I get some pointers on how to do each of these? The game I want to make is like Digimon World (PS1), Xenogears, FF7... those games have all the things I want to show, as listed above.
  16. Hi there. I'm looking for some quick opinions, advice or other comments on my custom engine architecture. For better or for worse, I have ended up with an ECS engine. I didn't intend to go this way, but countless searches through Google and the forum seem to confirm that this is the case. I have entities (mere IDs), components (pure data) and systems (holding raw resources and functionality) to operate on them. To be honest, I'm fairly happy with it. However, I have yet to implement any actual logic in my 'game', and I have been looking around for details on the various ways of handling interactivity, specifically interactivity between entities and components. A topic that comes up a lot is events and event queues. I have not liked these: I don't want to add functionality to entities or components, and I don't like the idea of callbacks or event firing going off all over the place. So I have been puzzling over this for the last two or so days. Eventually I gave up on the musing and came to accept that some kind of event system is going to be needed. So I had another look at the Bitsquid blog (recommended on this forum), and something occurred to me: isn't an event really just another form of entity? And if it isn't, why isn't it? I also realised that I already have something pretty similar running in my engine now. Specifically, my (admittedly quite naive) implementation works more or less like this: the scene hands over a list of physicalComponents and their corresponding placementComponents, and the collisionDetection sub-system iterates through them, looking for collisions. If it finds one, it creates a collision, adds it to a list, and moves on to the next one. Once it is finished, the collisionResolution sub-system goes through the list and handles the collisions - again, currently very naively, by bouncing the objects off of one another. So I am wondering if I can just use this same approach to handle logical interactions. Entities with logical requirements have a collection of components related to interactivity (the range, the effect, and so on), and the various sub-systems iterate through potential candidates. If a sub-system notices an interaction, it creates an interactionEntity (with the necessary data) and the interactions are processed by the next sub-system. I guess I'm looking for some feedback on this idea before I start implementing it. The hope is for more granularity in the components, and the ability to add a logical scripting system which combines various components into potential interactions, omitting the need for any kind of event system. Or am I just repeating the general idea of events and event queues in a slightly more complicated way? Additionally, any comments or commentary on this approach (ECS, and so on) would be very gratefully received. I've pretty much run out of resources at this point. Regards, Simon
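      Purely to make the "interactions as data" idea above concrete -- an invented sketch, not code from Bitsquid or any existing engine, with all type and system names made up -- a detection system that emits plain interaction records and a resolution system that consumes them could look roughly like this:

      #include <cstdint>
      #include <unordered_map>
      #include <vector>

      using EntityId = std::uint32_t;

      // Plain-data components, in the spirit of the post.
      struct Placement  { float x = 0, y = 0; };
      struct Interactor { float range = 1.0f; int effect = 0; };   // "can trigger interactions"

      // An interaction is itself just data -- conceptually a short-lived entity.
      struct Interaction { EntityId source; EntityId target; int effect; };

      struct World {
          std::unordered_map<EntityId, Placement>  placements;
          std::unordered_map<EntityId, Interactor> interactors;
          std::vector<Interaction>                 pendingInteractions;  // this frame's "events"
      };

      // Detection system: scans candidates and records interactions as data.
      void DetectInteractions(World& world) {
          for (const auto& [src, actor] : world.interactors) {
              const Placement& a = world.placements[src];
              for (const auto& [dst, b] : world.placements) {
                  if (dst == src) continue;
                  const float dx = a.x - b.x, dy = a.y - b.y;
                  if (dx * dx + dy * dy <= actor.range * actor.range)
                      world.pendingInteractions.push_back({src, dst, actor.effect});
              }
          }
      }

      // Resolution system: consumes the interaction records, then discards them,
      // mirroring the collisionDetection / collisionResolution pair in the post.
      void ResolveInteractions(World& world) {
          for (const Interaction& it : world.pendingInteractions) {
              // apply it.effect from it.source to it.target here
              (void)it;
          }
          world.pendingInteractions.clear();
      }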
  17. Do you know of any papers that cover custom data structures, like lists or binary trees, implemented in HLSL (without CUDA), that keep working correctly no matter how many threads try to use them at any given time?
  18. Steamie Tilted

    Test and feedback

    Hi guys, I just released my first game and would like to know if anyone wants to test it. This is my first attempt at producing a game - a totally new world for me. The link to test it is here: https://play.google.com/store/apps/details?id=com.steamiegames.beatem I hope you like it and have fun. Any feedback will be appreciated. Kind regards, Steamie & Tilted
  19. Hi, guys! I have a rather abstract question, because I don't know from which side to approach its solution, so I would appreciate any information. My task is to create a simple game that generates floor plans, and I am following this excellent algorithm (https://www.hindawi.com/journals/ijcgt/2010/624817/). At the moment I use squarified treemaps (http://www.win.tue.nl/~vanwijk/stm.pdf) and have no problems there: I create a nested array in which the elements are rooms with sizes. The problems start when I try to represent the generated "rooms" as edges and vertices (steps a, b, c, d in the attached picture). That representation would give me access to these elements as special "entities" in future versions of the game. I don't have skills with graphs (and do I even need graphs?) and at the moment I am totally stuck at this step. How can I represent room walls as trees (or graphs?) at this step? Calculate the sizes of the squares (rooms) and convert their sides to vectors? Then, in a loop, find shared vectors (same position on "x" or "y") and mark them as shared walls? My instinct tells me that more elegant and efficient ways exist. Anyway, thanks for any information about this.
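      For what it's worth, a minimal sketch of the straightforward approach hinted at in the question -- treat each axis-aligned room as a rectangle and mark two rooms as sharing a wall when opposite-facing edges lie on the same line and overlap. All names here are illustrative and this is not from the linked paper:

      #include <algorithm>
      #include <cmath>
      #include <cstdio>
      #include <vector>

      // Axis-aligned room produced by the treemap step.
      struct Room { float x, y, w, h; };

      // Overlap length of [a0,a1] and [b0,b1], or 0 if they don't overlap.
      static float Overlap1D(float a0, float a1, float b0, float b1) {
          return std::max(0.0f, std::min(a1, b1) - std::max(a0, b0));
      }

      // Two rooms share a vertical wall if one's right edge equals the other's left
      // edge (within eps) and their y-ranges overlap; analogously for horizontal walls.
      static bool ShareWall(const Room& a, const Room& b, float eps = 1e-4f) {
          const bool touchVertical =
              (std::abs((a.x + a.w) - b.x) < eps || std::abs((b.x + b.w) - a.x) < eps) &&
              Overlap1D(a.y, a.y + a.h, b.y, b.y + b.h) > eps;
          const bool touchHorizontal =
              (std::abs((a.y + a.h) - b.y) < eps || std::abs((b.y + b.h) - a.y) < eps) &&
              Overlap1D(a.x, a.x + a.w, b.x, b.x + b.w) > eps;
          return touchVertical || touchHorizontal;
      }

      int main() {
          // Three rooms on one floor; room 0 and room 1 share a vertical wall.
          std::vector<Room> rooms = { {0, 0, 4, 3}, {4, 0, 3, 3}, {0, 3, 7, 2} };

          // Adjacency list: an implicit graph where rooms are vertices and shared
          // walls are edges -- enough to later turn walls into game "entities".
          std::vector<std::vector<int>> adjacency(rooms.size());
          for (size_t i = 0; i < rooms.size(); ++i)
              for (size_t j = i + 1; j < rooms.size(); ++j)
                  if (ShareWall(rooms[i], rooms[j])) {
                      adjacency[i].push_back((int)j);
                      adjacency[j].push_back((int)i);
                  }

          for (size_t i = 0; i < adjacency.size(); ++i)
              for (int j : adjacency[i])
                  std::printf("room %zu shares a wall with room %d\n", i, j);
          return 0;
      }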
  20. A sticky dilemma. I'm part of a team based in the USA that produces virtual world software for remote business purposes. The businesses that use us are our clients, with users from all over the world (and expanding), but primarily in the USA. Our software makes use of customizable human avatars for each user to use in-world. We have gotten requests from one of our biggest paying clients, and approval from the boss, to include religion-based avatar clothing options (yarmulkes, headscarves, skullcaps and turban head coverings currently, potentially garments too). As our software is used for business, most people want to keep their real-world likeness, which may include some of this clothing because it is a part of their identity. Since this is such a sensitive topic for everyone involved, and we are in a politically charged climate in the USA, we clearly don't want to offend anyone, because they all pay us. In my opinion, even if this request were deemed a cause of losses on the client's part, it is still our company, providing the service, that would be affected primarily. As an emerging company we can't afford to lose users or current/potential clients over something unrelated to the core mechanics or hardware requirements of the product. How do we put it in the avatar creation menu? Keep it with the other head coverings (so as not to upset or offend the religious-wear users via segregation), or separate it (to protect the garments from accidental misuse by ignorant users, and offend everybody)? As difficult as it would be for us to do (right now), do we only allow access to certain users? Would it be going too far to request such information from users, or to have them volunteer it for access? And how do we talk about it with the client? When the concern was brought up, they warned us to be careful about using the term "religious wear", so we switched to the broader "cultural wear", at which point they implied that even that term might offend (because Texas users (very many) would get mad about their cowboy hats not being treated as culturally significant...), and the client tactfully avoided telling us what they want us to call it. How do we have a productive conversation when they have put out a controversial request but are not willing to speak confidently on its behalf?
  21. Liza Shulyayeva

    Go auth (and other) progress

    July 28
    Made a bit more progress on the authentication basics today. Relevant commits are:
    Add http package; have auth0 test delete user it has just registered after test is done
    Create json util; add logout test

    July 29
    Today I focused a bit on the building and installation of snaillifecli. I switched my custom app configuration code for Viper because it apparently integrates really well with Cobra, which is a library to help make CLI applications. It is really tempting to avoid plugging in existing libraries and write everything from scratch, because I'm positive that it would teach me a lot about Go, but the existing solutions seem more than suitable and I want to get to working on actual snails at some point. I also checked in a couple of quick scripts to build and install the app. deployDebug deploys to a subdirectory under GOBIN and copies the config file the app will use alongside the executable. This is really dangerous because it means database credentials are exposed to whoever wants to look in the config file, so it is to be used for local debug purposes only. The deployProd script first runs go-bindata to generate a Go file from the JSON config, so the configuration is compiled into the binary during the build step. This way any sensitive database credentials and such are not directly exposed. Of course, I don't plan on distributing any binary with secret key information in it to external users.
  22. Trym Studios

    Weekly Update #2

    Time for another update on how we are doing. We have now started production and I will continue to update you guys through the weeks ahead; some things we will share and some we won't - can't spoil all the fun, right? But I can at least guarantee that we are on track for a powerful vertical slice.

    Programming: The programmer is now doing water tests - Perlin noise / heightmap - checking calm water and comparing it to our references.

    Modeling: I have spent most of the week gathering reference pictures for assets so I can fill the ship when it's ready. This is a very hard job, but luckily there are a lot of books out there. First up was one of the spyglasses, a Dollond from the mid 17th century. I did some materials testing on that one as well and will finish that asset tomorrow. Soon I will also start texturing, rigging and animating the first-person arms/hands. Until next week!
    http://www.indiedb.com/games/the-whaler-working-title
  23. Zemlaynin

    Need feedback for UI

  24. I am an audio researcher developing new audiovisual technologies, currently interested in new applications for games, especially in the areas of VR arcades, large immersive spaces, 360-degree installations and even escape rooms. I am wondering if anyone has any ideas on how to get 32 independent channels (or more) of real-time audio output from a game engine like Unity, so that they can be spatially mapped to the XY coordinates of virtual objects on a screen, or to the XYZ coordinates of a spatial enclosure, when most game engines only allow fixed, pre-defined output formats such as stereo, 5.1 or 7.1. I have an executive summary of the technology online at (the link also includes my email address): http://bit.ly/pixelphonics Thanks, M
  25. Hi, recently I have been looking into a few renderer designs that I could take inspiration from for my game engine. I stumbled upon the Bitsquid and Our Machinery blogs about how they architect their renderers to support multiple platforms (which is what I am looking to do!). I have gotten fairly far, but I am unsure about a few things they say in the blogs. This is a simplified version of how I understand their setup:
      Render Backend - one per API, used to execute the commands from the RendererCommandBuffer and RendererResourceCommandBuffer.
      Renderer Command Buffer - platform-agnostic command buffer for creating Draw, Compute and Resource Update commands.
      Renderer Resource Command Buffer - platform-agnostic command buffer for creation and deletion of GPU resources (textures, buffers, etc.).
      The render backend has arrays of API-specific resources (e.g. VulkanTexture, D3D11Texture, ...) and each engine-side resource holds a uint32 handle to the render-side resource. Their system is set up for multithreaded usage: building command buffers in parallel, and executing RenderCommandBuffers (but not resource command buffers) in parallel. One thing I would like clarification on: in one of the blog posts they say, "When the user calls a create-function we allocate a unique handle identifying the resource." Where are the handles allocated from? The RenderBackend? How do they do it in a thread-safe way that doesn't kill performance? If anyone has any ideas or any additional resources on the subject, that would be great. Thanks
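      Not from the Bitsquid / Our Machinery posts themselves, but as a hedged illustration of one common way such handles get allocated without a lock dominating the hot path: a fixed-size pool where brand-new indices come from an atomic counter, recycled indices sit behind a very short lock, and a generation counter packed into the handle catches stale use. All names are made up for the sketch:

      #include <atomic>
      #include <cassert>
      #include <cstdint>
      #include <mutex>
      #include <vector>

      // A handle is an index into the backend's resource arrays plus a generation
      // counter, so a stale handle to a freed-and-reused slot can be detected.
      struct TextureHandle {
          std::uint32_t index      = 0;
          std::uint32_t generation = 0;
      };

      class HandleAllocator {
      public:
          explicit HandleAllocator(std::uint32_t capacity)
              : mGenerations(capacity), mNext(0) {}

          // Callable from any thread recording creation commands. The lock around
          // the recycled list is held only for a pop; fresh slots need just one
          // atomic increment.
          TextureHandle Allocate() {
              {
                  std::lock_guard<std::mutex> lock(mFreeMutex);
                  if (!mFree.empty()) {
                      const std::uint32_t index = mFree.back();
                      mFree.pop_back();
                      return { index, mGenerations[index].load(std::memory_order_relaxed) };
                  }
              }
              const std::uint32_t index = mNext.fetch_add(1, std::memory_order_relaxed);
              assert(index < mGenerations.size() && "handle pool exhausted");
              return { index, mGenerations[index].load(std::memory_order_relaxed) };
          }

          // Called when the backend retires the resource: bump the generation so
          // older handles become invalid, then recycle the slot.
          void Free(TextureHandle h) {
              mGenerations[h.index].fetch_add(1, std::memory_order_relaxed);
              std::lock_guard<std::mutex> lock(mFreeMutex);
              mFree.push_back(h.index);
          }

          bool IsValid(TextureHandle h) const {
              return mGenerations[h.index].load(std::memory_order_relaxed) == h.generation;
          }

      private:
          std::vector<std::atomic<std::uint32_t>> mGenerations; // bumped on every Free()
          std::atomic<std::uint32_t>              mNext;        // next never-used index
          std::vector<std::uint32_t>              mFree;        // recycled indices
          std::mutex                              mFreeMutex;   // guards mFree only
      };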