    1. Past hour
    2. jyscal

      Creating A Grid of Images

      Thank you Wyrframe for the advice. To be perfectly honest, I am not sure how the mechanics of pymunk's space.add() work, so I cannot give a definitive answer on that. As far as I am aware from the tutorials I have seen, it simply adds pymunk bodies and shapes to the pymunk space, which you should only need to do once. On the other hand, I thought about what you said about there only ever being 2 sprites being drawn. I have readjusted the code as follows:

```python
class Brick():
    def __init__(self, space):
        self.batch = pyglet.graphics.Batch()
        self.brick_images = ['brick1.png', 'brick2.png']
        self.sprite_list = []
        for x in range(7):
            for y in range(7):
                body = pymunk.Body(body_type=pymunk.Body.KINEMATIC)
                body.position = x * 100 + 75, y * 30 + 340
                image_index = random.randint(0, len(self.brick_images) - 1)
                image = pyglet.image.load(self.brick_images[image_index])
                sprite = pyglet.sprite.Sprite(image, x=body.position.x,
                                              y=body.position.y, batch=self.batch)
                self.sprite_list.append(sprite)
                shape = pymunk.Segment(body, (0, 0), (50, 0), 6)
                shape.elasticity = 0.80
                shape.collision_type = collision_types['brick']
                space.add(body, shape)
```

This creates the grid as I intended. But I am still not sure it is correct, as pyglet has a built-in FPS display. With the previous code (with the grid not showing) it showed a constant 60 FPS. After changing it to this, it creates the grid, but the FPS now sits at around 25. I am assuming it has something to do with loading the images in a loop, although I could be mistaken. Thanks for the help in spotting that only 2 sprites existed, which has at least helped me move forward a little bit more.
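An aside on the FPS question raised above: `pyglet.image.load()` decodes the file from disk on every call, so the 7×7 loop decodes the same two PNGs 49 times. The usual fix is to load each image once, up front, and share it across sprites. A minimal, illustrative sketch of that caching pattern; `fake_load` is a stand-in for `pyglet.image.load` so the sketch runs without pyglet:

```python
import random

def build_image_cache(filenames, load_image):
    """Load each image file exactly once; return a name -> image map."""
    return {name: load_image(name) for name in filenames}

# Stand-in loader so the sketch is self-contained; in the real code this
# would be pyglet.image.load, which decodes the file on every call.
calls = []
def fake_load(name):
    calls.append(name)
    return object()  # placeholder for a decoded image

brick_images = ['brick1.png', 'brick2.png']
cache = build_image_cache(brick_images, fake_load)

# 7x7 grid: every brick reuses an already-decoded image.
sprites = [cache[random.choice(brick_images)] for _ in range(7 * 7)]

assert len(calls) == 2   # two decodes instead of 49
assert len(sprites) == 49
```

With a cache like this in place, the inner loop only constructs sprite objects, which is cheap compared to decoding image files each iteration.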
    3. Scouting Ninja

      ball isn't spawning where desired

      Isn't it because of this:

```csharp
newBall.transform.position = paddle.transform.position + newBall.GetComponent<Ball>().paddleToBallVector; // use the original ball's vector3 to follow it directly
```

When the original ball hits the power up, you could save that point for a time. Then, as the ball moves away, you can spawn a new one at that point.
    4. Today
    5. The thing is that the movement you are using here is the same as Blitz3D's. It also uses delta time and does collision checks. If you want direct movement, there is set_pos(), as in "set object position". This is teleportation-like movement; it doesn't detect collisions. Use it like this:

```gdscript
if Input.is_action_pressed("ui_up"):
    self.set_pos(Vector3(0, 1, 0))
```

To be clear, the Godot move_and_slide() is also a collision check; you don't do any collision checks yourself.

move_and_collide() = the Stop response of Blitz3D. The object stops when it collides.
move_and_slide() = the Slide response of Blitz3D. The object slides when it collides, and uses gravity for inclines when needed.

The Godot RigidBody is a realistic collision system. The velocity setup is needed because you are also doing collision checks.
    6. Wyrframe

      Creating A Grid of Images

      Doesn't look like you're creating a sprite for each brick; you're iterating over a range, moving one of two randomly picked sprites (and only two sprites ever exist, mind you) to the corresponding position on the grid, and then moving on. Also, does `space.add()` copy its arguments and create a new rigid body as described, or does it add a new body which uses those arguments as its position-and-shape parameters (so if you update them later, it updates that body's position and shape)? Either way, I suspect `space.add()` should be returning something you should store in Brick somewhere.
    7. Hello, I am trying to create a grid of images in pyglet and Python, and I am not sure where exactly I am going wrong. The goal is for it to be a Breakout/Arkanoid clone. The problem I am having is getting the brick images to display in a grid. Here is the code that, as far as I can tell, should place the bricks in the correct positions:

```python
class Brick():
    def __init__(self, space):
        # Create the list to hold different sprite bricks and load images
        self.batch = pyglet.graphics.Batch()
        self.brick_images = ['brick1.png', 'brick2.png']
        self.brick_sprites = []
        # 1 out of 5 chance to drop a power pill
        self.chance_to_drop = 1
        # Set the images' anchor point to their center and create sprites
        for i in range(len(self.brick_images)):
            img = pyglet.image.load(self.brick_images[i])
            img.anchor_x = img.width // 2
            img.anchor_y = img.height // 2
            self.brick_sprites.append(pyglet.sprite.Sprite(img))
        for x in range(7):
            for y in range(7):
                self.body = pymunk.Body(body_type=pymunk.Body.KINEMATIC)
                # The position where each pymunk body will be placed
                self.body.position = x * 100 + 75, y * 30 + 340
                self.brick_type = random.randint(0, len(self.brick_sprites) - 1)
                if self.brick_type == 0:
                    sprite = self.brick_sprites[0]
                    # Set the sprite to the same position as the pymunk body
                    sprite.set_position(self.body.position.x, self.body.position.y)
                    sprite.batch = self.batch
                elif self.brick_type == 1:
                    sprite = self.brick_sprites[1]
                    sprite.set_position(self.body.position.x, self.body.position.y)
                    sprite.batch = self.batch
                self.shape = pymunk.Segment(self.body, (0, 0), (50, 0), 6)
                self.shape.elasticity = 0.80
                self.shape.collision_type = collision_types['brick']
                space.add(self.body, self.shape)
        handler = space.add_collision_handler(collision_types['brick'], collision_types['ball'])
        handler.separate = self.remove_brick
```

So what I am trying to accomplish is 7 rows of 7 bricks. As far as I can see the sprites are being created in the loop, but when I run the program only 2 bricks are displayed. I am sure there is something wrong with the way I am looping, but I honestly just cannot see where I am going wrong. I have spent some time trying to find the error but simply cannot see it. I can see that the pyglet brick sprites are NOT being set to the correct x, y of the pymunk body, even though using the same formula for the player paddle object lines up the sprite perfectly:

```python
# Set the sprite to the pymunk object's position
self.image = pyglet.image.load('paddle.png')
self.image.anchor_x = self.image.width // 2
self.image.anchor_y = self.image.height // 2
self.sprite = pyglet.sprite.Sprite(self.image, x=self.position.x, y=self.position.y)
```

I am very confused by this one, and I just hope I have explained everything clearly enough. Thank you for any help or assistance in any way.
    8. I need to get this information in batches to prepare for optimization, because there are about 15 scenes at present and there will be more and more in the future.
    9. Ooohhhh boy. I'm back with more questions. I am having a horrible time with this pick up I'm working on. I've got multiple issues, but I'm trying to make a list and work through them one by one. If you haven't already seen one of my other posts, I'm working on an Arkanoid-clone type game, with a pickup that, when active, makes the ball come back and hit the paddle; as it leaves the paddle, 6 additional balls should be spawned with a short delay between each one, and then they all follow the first ball in a straight line away from the paddle until they hit whatever is in front of them. Each ball is destroyed on impact except for the last ball in the line.

So, onto my problem: currently, the original ball hits the paddle and bounces away fine, and another ball is spawned. However, this ball is not spawned over the paddle like it should be. Sometimes it spawns right next to the paddle, sometimes it spawns halfway across the screen, left or right of the paddle. I can't seem to find any rhyme or reason to its spawn location. I have added code directly to this ball so that right after it is spawned, its location should be initialized over the paddle; I used the same code that I use on the original ball at the beginning of the game. Also, when this ball spawns, it falls. I thought I had it set so that the vector of the original ball would be assigned to this new ball and it would just move in the same direction, but apparently I'm missing something there. I'll post all my code for this pick up and also for the Ball class, and maybe someone can explain what I'm doing wrong. This pickup is really making me lose my mind!
AssaultRiflePickUp class:

```csharp
public class AssaultRiflePickUp : BasePickUp, ISpecialShotPickUp
{
    private int AmmoRemaining = 35;
    private float timer = 0.3f;
    private Vector3 mainBallVector;
    public GameObject newBall;

    void Start()
    {
        // assign paddle to variable
        paddle = GameObject.FindObjectOfType<Paddle>();
        // find the ball and assign it to this variable
        foreach (Ball ball in GameManager.pickUpManager.allBalls)
        {
            if (ball.mainBall)
            {
                mainBall = ball;
            }
        }
    }

    public override void ActivatePickUp(MonoBehaviour coroutineHost)
    {
        Debug.Log("ActivatePickUp");
        // subscribe to events
        mainBall.onBallHitPaddle += AssaultRifleShot;
        mainBall.onBallCollided += DestroyExtraBalls;
    }

    // spawn a new ball, add it to the list of balls, set its spawn location,
    // and set its velocity to match the original ball
    public void SpecialShot()
    {
        Debug.Log("SpecialShot");
        Instantiate(newBall);
        newBall.GetComponent<Ball>().hasStarted = true;
        GameManager.pickUpManager.allBalls.Add(newBall.GetComponent<Ball>());
        newBall.GetComponent<Ball>().paddleToBallVector = newBall.transform.position - paddle.transform.position;
        newBall.transform.position = paddle.transform.position + newBall.GetComponent<Ball>().paddleToBallVector; // use the original ball's vector3 to follow it directly
        newBall.GetComponent<Rigidbody2D>().velocity = mainBallVector;
    }

    public void DestroyExtraBalls(GameObject ball)
    {
        if (GameManager.pickUpManager.allBalls.Count > 1)
        {
        }
        // TODO: Implement observer pattern to watch for balls colliding with
        // something. If they do and they are not the main ball, destroy them.
    }

    public void DestroyPickUp()
    {
        Debug.Log("DestroyPickUp");
        Destroy(GameManager.pickUpManager.pickUpQueue[0]);
        GameManager.pickUpManager.pickUpQueue.RemoveAt(0);
        GameManager.pickUpManager.pickUpActive = false;
    }

    // a timer used for timed pick ups
    public IEnumerator PowerUpTimer(float delay)
    {
        Debug.Log("PowerUpTimer");
        yield return new WaitForSeconds(delay);
        SpecialShot();
    }

    // The coroutine is used to start a short timer so that the balls
    // don't spawn on top of each other all at the same time.
    public void StartCoroutine(float delay, MonoBehaviour coroutineHost)
    {
        coroutineHost.StartCoroutine(PowerUpTimer(delay));
    }

    // Controls the pick up's cycle
    public void AssaultRifleShot(GameObject mainBall)
    {
        Debug.Log("AssaultRifleShot called");
        mainBallVector = mainBall.GetComponent<Rigidbody2D>().velocity; // grab mainBall's direction and speed before making it a destroyable ball
        mainBall.GetComponent<Ball>().mainBall = false; // deactivate the original ball as the mainBall so that it may be destroyed on impact
        if (AmmoRemaining > 0)
        {
            Debug.Log("AssaultRifleShot - Inside If Statement");
            // if there is ammo remaining, set the main ball to just a normal ball
            // so it can be destroyed on impact, and then start the coroutine
            mainBall.GetComponent<Ball>().mainBall = false;
            Debug.Log("AssaultRifleShot - 1");
            mainBall.GetComponent<Ball>().onBallHitPaddle -= AssaultRifleShot; // unsubscribe from the event when mainBall is set to false
            Debug.Log("AssaultRifleShot - 2");
            StartCoroutine(timer, GameManager.monoBehaviour);
            Debug.Log("AssaultRifleShot - 3");
            AmmoRemaining -= 1;
            Debug.Log("AssaultRifleShot - 4");
        }
        else
        {
            DestroyPickUp();
        }
    }
}
```

Side note: there is some base class code for the pick up; if that is needed as well, I can post it, but I don't think it's relevant.
Ball class:

```csharp
public class Ball : MonoBehaviour
{
    private Paddle paddle;
    public Vector3 paddleToBallVector;
    public bool mainBall = false;
    public float Speed { get; set; }
    public float MaxSpeed = 10.198f;
    public bool hasStarted = false;

    public delegate void DestroyBalls(GameObject ball);
    public event DestroyBalls onBallCollided;
    public delegate void BallHitPaddle(GameObject ball);
    public event BallHitPaddle onBallHitPaddle;

    void Start()
    {
        // find the paddle and assign it to variable
        paddle = GameObject.FindObjectOfType<Paddle>();
        paddleToBallVector = this.transform.position - paddle.transform.position;
        Speed = 1;
    }

    // Update is called once per frame
    void Update()
    {
        if (!hasStarted)
        {
            // Lock the ball relative to the paddle.
            this.transform.position = paddle.transform.position + paddleToBallVector;
            // Wait for a mouse press to launch.
            if (Input.GetMouseButtonDown(0))
            {
                hasStarted = true;
                this.GetComponent<Rigidbody2D>().velocity = new Vector2(2f, 10f);
            }
        }
    }

    void OnCollisionEnter2D(Collision2D collider)
    {
        // use this vector2 to adjust the velocity so the ball does not get
        // stuck in a vertical bouncing loop
        if (hasStarted)
        {
            // play a sound when the ball hits something
            AudioSource audio = this.gameObject.GetComponent<AudioSource>();
            audio.Play();
            Vector2 tweak = new Vector2(UnityEngine.Random.Range(0f, 0.2f), UnityEngine.Random.Range(0f, 0.2f));
            // the following lines set the speed and direction of the ball based on
            // built-in physics and the current speed of the ball; Speed can be
            // changed by pick ups
            this.gameObject.GetComponent<Rigidbody2D>().velocity = this.gameObject.GetComponent<Rigidbody2D>().velocity.normalized;
            this.gameObject.GetComponent<Rigidbody2D>().velocity *= Speed * MaxSpeed;
            this.gameObject.GetComponent<Rigidbody2D>().velocity += tweak;
            if (collider.gameObject.name == "Paddle" && this.mainBall)
            {
                if (onBallHitPaddle != null)
                {
                    onBallHitPaddle(this.gameObject);
                }
            }
        }
    }
}
```
    10. I gotta know; why do you need this information? Because you can extract it from the same API that debug overlay is getting it from, if it's that important.
    11. This did fix the error I was getting and is making the ball wait to spawn as desired! Thanks!
    12. bovacu

      Distance to point beyond a rectangle

      Hi, I've solved your problem as shown in the images. In case they are uploaded out of order, the first one is the one with the half-white paper, then the other one. I've solved part of it for general cases, but for the final solution I've taken some random values, as you will see. Bear in mind that Rx is the x value of the upper-left corner of the rectangle, and Rx+w is that x plus the rectangle width. You just need to adapt these to your data. I hope it is clear and understandable and that it can help you. And sorry for the image quality; it is lower than on my mobile.
    13. Zakwayda

      Distance to point beyond a rectangle

      It seems like the most general solution would be to intersect the rectangle with a ray starting at the point in question and passing through the rectangle center, and compute the distance to the intersection point. Depending on the circumstances though you might be able to simplify that. (Relevant details would be things like, is the rectangle axis aligned? Can the point be in the Voronoi region of a vertex of the rectangle, or only an edge? What will the information be used for? And so on.)
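The above can be made concrete for the common special case. Assuming an axis-aligned rectangle and a point outside it, the ray-to-center intersection reduces to a slab test; `distance_to_rect_edge` below is an illustrative sketch under those assumptions, not code from the thread (for a point inside the rectangle it returns 0):

```python
def distance_to_rect_edge(px, py, rx, ry, w, h):
    """Distance from an outside point (px, py) to the rectangle's boundary,
    measured along the ray from the point toward the rectangle's center.
    (rx, ry) is the rectangle's min corner; w and h are its extents."""
    cx, cy = rx + w / 2.0, ry + h / 2.0
    dx, dy = cx - px, cy - py        # direction from the point to the center
    t_enter = 0.0                    # ray parameter where the ray first meets the box
    for p, d, lo, hi in ((px, dx, rx, rx + w), (py, dy, ry, ry + h)):
        if d != 0.0:
            t0, t1 = (lo - p) / d, (hi - p) / d
            t_enter = max(t_enter, min(t0, t1))
    return t_enter * (dx * dx + dy * dy) ** 0.5

# Point 10 units left of a 10x10 rectangle at the origin: the ray to the
# center crosses the left edge 10 units away.
assert abs(distance_to_rect_edge(-10, 5, 0, 0, 10, 10) - 10.0) < 1e-9
```

If the rectangle can be oriented, the same idea works after rotating the point into the rectangle's local frame first.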
    14. And? That's a simple action. If you're judging how easy it is to script an engine based on how short the program is... I don't even know what to say. That's a completely arbitrary thing to judge an engine on, and a very odd thing for a "veteran programmer" to say. The entire movement script in Godot looks something like this:

```gdscript
extends KinematicBody2D

const SPEED = 200

func _physics_process(delta):
    var velocity = Vector2(
        float(Input.is_action_pressed("right")) - float(Input.is_action_pressed("left")),
        float(Input.is_action_pressed("down")) - float(Input.is_action_pressed("up"))
    )
    move_and_slide(velocity.normalized() * SPEED)
```

That's it. It's flexible and succinct, and it honestly shouldn't get much "easier" than that. If you're afraid of typing a few lines of code... I don't know what to say. Keep switching engines to find the shortest, I guess, but like I said before, that's a completely arbitrary thing to be judging an engine on. In terms of ease of use balanced with expandability and popularity (which sounds unimportant, but unless you're self-sufficient you need people to help you solve problems), you're not going to find many (or any, really) better options than Unity or Godot. Pick one and start learning it. And stop expecting it to conform to your irrational expectations; no software will, and unless you learn to adapt to different languages, libraries, and paradigms, you'll constantly have a bad time with game development. It is what it is; stop expecting it to be something else and stop trying to make it into something else.
    15. Not sure how to phrase the question, which made it harder for me to search; I never quite got the answer I wanted. Hoping you can answer here :) Sorry if this is the wrong forum; I think this section is OK? Basically, I want the distance from the point to the rectangle edge, pointing toward the rectangle centre. When looking for answers I mostly got the closest point on the rectangle, which I didn't want (I think). I know how to get the line's total distance, but I'm not sure where to go from there ;[ Any help appreciated. I attached a picture ;]
    16. Vilem Otte

      Back-projection soft shadows

      I've actually been experimenting with various shadowing techniques every evening over the past week, apart from the ones I've already implemented. So far, PCSS and PCMLSM (a PCSS variant where I use mipmaps to get smoother shadows via trilinear filtering) look really good for small lights, but once you increase the light size they collapse (mainly because the penumbra search becomes wrong). Additionally, PCSS suffers from noise; see the following: PCSS - small area light, PCSS - large area light. In PCMLSM I got rid of the PCSS noise, and it looks really good for small area lights: PCMLSM - small area light, PCMLSM - large area light. I intentionally didn't do any attenuation at all. For comparison, an ISM-like approach with lots of small lights instead: this looks far superior to the previous ones, yet the fill rate is huge (notice that you can see the actual shadow maps in the virtual shadow map to the right). This is quite close to the result I'd like to see (for large area lights). Cone-traced shadows are interesting but most likely a no-go, simply because of the resolution of the voxelized scene. I just quickly played with the cone size for the ambient occlusion and the cone direction (to point towards the light), and you can clearly see that while it is generating shadows, they're not well defined and suffer from light leaking (due to, e.g., empty interior voxels of the sphere). Using SDFs instead could be good, but they will still suffer at lower resolutions, not to mention problems with animated models. I'd personally like to try it, yet it might be quite challenging. Back-projection seems quite promising for such scenarios (yet the only working example, which is from NVIDIA, doesn't really work well in this scenario unless you invest a large number of steps in it, which again ends up slow), which might actually render it useless in the end.

I'm now trying to implement the prototype into the actual editor so I can share some nice screenshots and finish up the article (which is still in progress), yet I do have some problems with different light types (point vs. spot), and generally the actual projection, which seems incorrect to me.
    17. For example, in Blitz3D, to handle movement controls you write this one line:

```
If Keydown(up_key) Then MoveEntity Object, 0, 1, 0
```

But in Godot, I don't know it so well, but I think you have to declare "velocity", then write Input.is_action_pressed("ui_up"), then set "velocity" to a value, then call move_and_slide(velocity). All those steps just for a simple motion.
    18. I would suggest you post a lot more details about the project to help get someone interested in contacting you. Usually people will just ignore such threads and move on if you only provide a link for contact and no game details. What is the project about? What has been done? Do you have concept art or anything visually to show? Since this is a rev share project, you need to generate a common interest when bringing people on-board.
    19. TeaTreeTim

      Help with FBXSDK

      I'm sorry, I should have simplified and tidied the code first. In that code I directly access the UVs:

```cpp
int texIndex = mesh->GetTextureUVIndex(polygonIndex, polyVertexIndex);
FbxVector2 lUVValue = lUVElement->GetDirectArray().GetAt(texIndex);
```

Notice I only set 2 UV coordinates, which implies there is only one set. For GetTextureUVIndex you can specify the UV set (basically the material), so that example grabs the default (first) one. Other examples use GetPolygonVertexUVs, which returns an array, and you can use the name in there. That's where the naming matters if you have multiple sets, and you'd need to iterate through them. That's what I used to do, but I didn't tidy my code, so I apologise for that. The thing is, if I included a comprehensive example it would be quite complicated. For example, here is a better way to get your normal (by better I mean it accommodates different systems):

```cpp
FbxGeometryElementNormal* vertexNormal = mesh->GetElementNormal(0);
switch (vertexNormal->GetMappingMode())
{
case FbxGeometryElement::eByControlPoint:
    switch (vertexNormal->GetReferenceMode())
    {
    case FbxGeometryElement::eDirect:
    {
        result.x = static_cast<float>(vertexNormal->GetDirectArray().GetAt(inCtrlPointIndex).mData[0]);
        result.y = static_cast<float>(vertexNormal->GetDirectArray().GetAt(inCtrlPointIndex).mData[1]);
        result.z = static_cast<float>(vertexNormal->GetDirectArray().GetAt(inCtrlPointIndex).mData[2]);
    }
    break;
    case FbxGeometryElement::eIndexToDirect:
    {
        int index = vertexNormal->GetIndexArray().GetAt(inCtrlPointIndex);
        result.x = static_cast<float>(vertexNormal->GetDirectArray().GetAt(index).mData[0]);
        result.y = static_cast<float>(vertexNormal->GetDirectArray().GetAt(index).mData[1]);
        result.z = static_cast<float>(vertexNormal->GetDirectArray().GetAt(index).mData[2]);
    }
    break;
    default:
        throw std::exception("Invalid Reference");
    }
    break;
case FbxGeometryElement::eByPolygonVertex:
    switch (vertexNormal->GetReferenceMode())
    {
    case FbxGeometryElement::eDirect:
    {
        result.x = static_cast<float>(vertexNormal->GetDirectArray().GetAt(inVertexCounter).mData[0]);
        result.y = static_cast<float>(vertexNormal->GetDirectArray().GetAt(inVertexCounter).mData[1]);
        result.z = static_cast<float>(vertexNormal->GetDirectArray().GetAt(inVertexCounter).mData[2]);
    }
    break;
    case FbxGeometryElement::eIndexToDirect:
    {
        int index = vertexNormal->GetIndexArray().GetAt(inVertexCounter);
        result.x = static_cast<float>(vertexNormal->GetDirectArray().GetAt(index).mData[0]);
        result.y = static_cast<float>(vertexNormal->GetDirectArray().GetAt(index).mData[1]);
        result.z = static_cast<float>(vertexNormal->GetDirectArray().GetAt(index).mData[2]);
    }
    break;
    default:
        throw std::exception("Invalid Reference");
    }
    break;
}
```

Here is a better way of getting colour info:

```cpp
FbxColor color;
switch (colorElement->GetMappingMode())
{
case FbxGeometryElement::eByControlPoint:
{
    switch (colorElement->GetReferenceMode())
    {
    case FbxGeometryElement::eDirect:
    {
        color = colorElement->GetDirectArray().GetAt(ctrlPointIndex);
        break;
    }
    case FbxGeometryElement::eIndexToDirect:
    {
        int id = colorElement->GetIndexArray().GetAt(ctrlPointIndex);
        color = colorElement->GetDirectArray().GetAt(id);
        break;
    }
    default:
        break;
    }
    break;
}
case FbxGeometryElement::eByPolygonVertex:
{
    switch (colorElement->GetReferenceMode())
    {
    case FbxGeometryElement::eDirect:
    {
        color = colorElement->GetDirectArray().GetAt(vertIndex);
        break;
    }
    case FbxGeometryElement::eIndexToDirect:
    {
        int id = colorElement->GetIndexArray().GetAt(vertIndex);
        color = colorElement->GetDirectArray().GetAt(id);
        break;
    }
    default:
        break;
    }
    break;
}
default:
    break;
}
```

This gets materials better:

```cpp
int mcount = mesh->GetSrcObjectCount<FbxSurfaceMaterial>();
if (mcount > 0)
{
    for (int index = 0; index < mcount; index++)
    {
        FbxSurfaceMaterial* material = (FbxSurfaceMaterial*)mesh->GetSrcObject<FbxSurfaceMaterial>(index);
        if (material)
        {
            FbxProperty prop = material->FindProperty(FbxSurfaceMaterial::sDiffuse);
            int layered_texture_count = prop.GetSrcObjectCount<FbxLayeredTexture>();
            if (layered_texture_count > 0)
            {
                for (int j = 0; j < layered_texture_count; j++)
                {
                    FbxLayeredTexture* layered_texture = FbxCast<FbxLayeredTexture>(prop.GetSrcObject<FbxLayeredTexture>(j));
                    int lcount = layered_texture->GetSrcObjectCount<FbxTexture>();
                    for (int k = 0; k < lcount; k++)
                    {
                        FbxTexture* texture = FbxCast<FbxTexture>(layered_texture->GetSrcObject<FbxTexture>(k));
                        const char* texture_name = texture->GetName();
                    }
                }
            }
            else
            {
                int texture_count = prop.GetSrcObjectCount<FbxTexture>();
                for (int j = 0; j < texture_count; j++)
                {
                    const FbxTexture* texture = FbxCast<FbxTexture>(prop.GetSrcObject<FbxTexture>(j));
                    const char* texture_name = texture->GetName();
                }
            }
        }
    }
}
else
{
    int layerCount = mesh->GetLayerCount();
    for (int i = 0; i < layerCount; i++)
    {
        fbxsdk::FbxLayer* layer = mesh->GetLayer(i);
        if (layer)
        {
            fbxsdk::FbxLayerElementVertexColor* vColor = layer->GetVertexColors();
            if (vColor) { }
            fbxsdk::FbxLayerElementMaterial* vMat = layer->GetMaterials();
            if (vMat) { }
        }
    }
}
```

So you see, handling every FBX format becomes a bit complicated.
    20. Yesterday
    21. FreneticPonE

      Back-projection soft shadows

      Unfortunately, area shadows are basically an unsolved problem right now. Signed distance fields can be fast, but get expensive when animating a lot or getting too detailed. Cone tracing would probably be leaky and unstable for shadows. Right now there's just not a great answer. But a good answer might be moment-based shadow mapping. It's the fastest/best way I know of to get a large filter radius in renderable time. There are hundreds of pages to read on the stuff; you can read that, anti-aliased moment shadow mapping, some other papers about greatly reducing light leak, etc. But the point is that soft shadow mapping with a large filter is doable on the relative cheap with it. Edit: AFAIR temporally supersampled shadows have been tried before, for anti-aliasing. But you'd have to re-do an entire temporal reprojection pipeline for each light to avoid obvious ghosting, just like visibility-sampling temporal AA. Probably not worth it, either in production time or in milliseconds.
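For context on the moment-based family mentioned above: its simplest member is the two-moment variance shadow map, where a Chebyshev bound turns filtered depth moments into a soft visibility factor; moment shadow mapping extends the same idea to four moments with a sharper bound. An illustrative Python sketch of the two-moment case (not taken from any paper's code):

```python
def chebyshev_visibility(mean, mean_sq, receiver_depth, min_variance=1e-4):
    """Chebyshev upper bound on the lit fraction of the filter region, given
    the first two depth moments E[z] and E[z^2] read from a filtered shadow
    map. min_variance clamps numeric noise from the filtering."""
    if receiver_depth <= mean:
        return 1.0  # receiver in front of the average occluder: fully lit
    variance = max(mean_sq - mean * mean, min_variance)
    diff = receiver_depth - mean
    return variance / (variance + diff * diff)

# A receiver in front of the occluder distribution is fully lit; one well
# behind it gets a small visibility value, i.e. a soft shadow.
assert chebyshev_visibility(0.5, 0.26, 0.4) == 1.0
assert chebyshev_visibility(0.5, 0.26, 0.9) < 0.1
```

Because the moments come from an already-filtered (e.g. mipmapped or blurred) shadow map, the filter radius can be made large without per-pixel sampling cost, which is the property the post is pointing at.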
    22. Hodgman

      Distance Fields

      From memory of my interpretations of their work (take with a grain of salt) - originally they did a full screen shader that had an array of bounding boxes. They'd ray-trace against the boxes, and, if hit, they'd then sphere-trace through a box by reading from its distance field volume-texture. Later, they moved to a single, global "cascaded" volume texture for the whole world (one volume for nearby, another for mid-range, another for everything, all the same resolution but differing world-space sizes meaning that the voxel size is different). They can then do a single full-screen sphere tracing pass. Before that though, each object (which has an individual volume texture for just that object) is copied into the global volume at the appropriate locations.
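The per-box sphere-tracing step described above can be sketched in isolation. In the real pipeline the distance value would come from sampling the distance-field volume texture; in this illustrative version an analytic sphere SDF stands in for that lookup:

```python
def sphere_trace(origin, direction, sdf, max_dist=100.0, eps=1e-4, max_steps=128):
    """Advance along the ray by the distance-field value at each step: the
    largest step guaranteed not to tunnel through a surface."""
    t = 0.0
    for _ in range(max_steps):
        x = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(x)
        if d < eps:
            return t       # close enough to the surface: report a hit
        t += d
        if t > max_dist:
            break
    return None            # ray left the field without hitting anything

# Stand-in for the volume-texture sample: a unit sphere at the origin.
def sphere_sdf(p):
    return (p[0] ** 2 + p[1] ** 2 + p[2] ** 2) ** 0.5 - 1.0

t = sphere_trace((0.0, 0.0, -5.0), (0.0, 0.0, 1.0), sphere_sdf)
assert t is not None and abs(t - 4.0) < 1e-3   # hits the sphere at z = -1
```

The cascaded-volume scheme described above just changes where `sdf` reads from: nearby samples come from the fine cascade, distant ones from the coarser cascades.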
    23. Hello, I'm searching for a coder for my project; I'm a coder too (C#). My project is not a big space simulator, but it is in 2D. Write to me on Discord: Scoutrooper#8313
    24. Yeah, I recommend doing this for ALL of your game assets. Don't save any game-ready / shippable files manually; have them all be generated from intermediate files via some kind of build system. If you want to change your binary model format, or how textures are packed/compressed, or convert text configuration files to some binary format, etc., then you just have to change your build process and recompile the assets. Without an asset compiler, you'd have to go and re-export every model in order to change the model file format, which could be a huge amount of manual labour.
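The core of such an asset build system is just a staleness check plus a pluggable compile step per asset type. A minimal sketch of that idea; `compile_config` and its "key = value" input format are made-up stand-ins for whatever converters a real pipeline would register:

```python
import os
import tempfile

def needs_rebuild(src, dst):
    """Rebuild when the compiled asset is missing or older than its source."""
    return (not os.path.exists(dst)
            or os.path.getmtime(dst) < os.path.getmtime(src))

def build_asset(src, dst, compile_fn):
    """Run the converter only when the output is stale."""
    if needs_rebuild(src, dst):
        compile_fn(src, dst)

# Stand-in converter: turn a "key = value" text file into a trivial
# NUL-separated binary form. Changing the shipped format later means
# changing only this function and re-running the build.
def compile_config(src, dst):
    with open(src) as f, open(dst, 'wb') as out:
        for line in f:
            key, value = line.split('=')
            out.write(key.strip().encode() + b'\0' + value.strip().encode() + b'\0')

folder = tempfile.mkdtemp()
src = os.path.join(folder, 'player.cfg')
dst = os.path.join(folder, 'player.bin')
with open(src, 'w') as f:
    f.write('speed = 10\n')
build_asset(src, dst, compile_config)
with open(dst, 'rb') as f:
    assert f.read() == b'speed\x0010\x00'
```

A real system would add content hashing instead of timestamps, dependency tracking, and one converter per asset type, but the rebuild-on-stale loop above is the skeleton they all share.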
    25. It sounds like you went through a real learning experience. No one goes into business without running into some road blocks. What's important is that you take them as the learning opportunities that they are and apply those lessons to future projects. Which sounds like what you're already doing! It can definitely be disheartening, but don't beat yourself up too much. Keep exploring your passion! Do you mind if I ask about the free-to-play game you're developing?
    26. JessGame90

      Jess Cantrell

      Drawings using Corel Painter
    27. Actually, for my purposes that turned out rather easy, but it is definitely a behind-the-scenes type of thing. First, everything in that API is actually deferred and does not run immediately. This is a painful thing to implement while still keeping the nice API, but it is doable. Now that everything is deferred, and because my engine is fully threaded, I do a lot of lazy cleanup. So, with entities and components being added/removed in bulk, performing a little sort on the data and then updating the indexes is quite efficient. I'm not sure if there is a proper name for my method of insert/delete, but it is generally a couple of memcpy's. I start at the highest thing to be inserted and binary search for the location it should be in the index. I memcpy everything from that location to the end upward in memory by the total number of items to be inserted. I copy in the new index entry, pop it, and repeat till done, but only moving the portion of memory up to the point right before the next insertion. Eventually the gap you made fills in, and you've only moved any given element once. Overall it is a reasonable approach. I think I have some ideas to make it work faster, but I haven't bothered since, in the ridiculous stress test I run, insert/delete hasn't shown up in the top 100 hot spots yet. The test adds/removes on average five thousand components and three thousand entities every second, so it is being beaten on significantly.
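The insert scheme described above (process insertions from the highest position down, shifting the tail once, so each existing element moves at most one time) amounts to a back-to-front merge. An illustrative Python sketch of that idea, not the author's actual code:

```python
def bulk_insert_sorted(index, new_items):
    """Insert many items into a sorted list, moving each existing element at
    most once: fill the result from the back, taking the larger of the next
    existing element and the next (largest) pending insertion."""
    new_items = sorted(new_items)
    result = index + [None] * len(new_items)   # room for the shifted tail
    write = len(result) - 1                    # next slot to fill
    read = len(index) - 1                      # next existing element
    pending = len(new_items) - 1               # next new item, largest first
    while pending >= 0:
        if read >= 0 and index[read] > new_items[pending]:
            result[write] = index[read]        # shift an existing element toward the end
            read -= 1
        else:
            result[write] = new_items[pending]  # drop a new item into the gap
            pending -= 1
        write -= 1
    return result  # the untouched prefix was already in place

assert bulk_insert_sorted([1, 3, 5, 7], [2, 6, 8]) == [1, 2, 3, 5, 6, 7, 8]
```

The C version would replace the element-by-element shifts with ranged memcpy's between insertion points, which is where the "only a couple of memcpy's" efficiency comes from.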
    28. PerezPrograms

      Game Development Reintroduction

      Thank you for the advice.