Sprite sizes and positions on different resolutions


3 hours ago, Angelic Ice said:

Hm, but wouldn't artefacts occur over the entire sprite-structure anyway? I'm not sure if I understand this issue well enough.

You want it to happen over the whole sprite. Scaling works by sampling pixels: a new sprite is made by sampling four or more source pixels and producing one pixel from that data.

I realize it's something you have to see, so I found this Unity topic: https://answers.unity.com/questions/939077/sliced-textures-bleed-into-eachother.html

[Image: 43790-unity-help2.png — sliced textures bleeding into each other]

If you zoom in you can see where the sprites are bleeding into each other.
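To make the sampling idea concrete, here is a rough sketch in Python (a hypothetical `downsample_2x` helper, not any engine's API) of the box filter a renderer effectively applies when halving an image: four source pixels are averaged into one output pixel, which is exactly how neighbouring sprites on a shared sheet end up mixed together.

```python
# Sketch only: 2x2 box-filter downsample of an RGBA image stored as a
# nested list of (r, g, b, a) tuples. Hypothetical helper, no engine API.

def downsample_2x(pixels):
    """Average every 2x2 block of source pixels into one output pixel."""
    height, width = len(pixels), len(pixels[0])
    out = []
    for y in range(0, height - 1, 2):
        row = []
        for x in range(0, width - 1, 2):
            block = [pixels[y][x], pixels[y][x + 1],
                     pixels[y + 1][x], pixels[y + 1][x + 1]]
            # One output pixel is built from four source pixels; if the
            # block straddles two sprites on a sheet, their colors mix.
            row.append(tuple(sum(channel) // 4 for channel in zip(*block)))
        out.append(row)
    return out
```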

3 hours ago, Angelic Ice said:

Not all sprites are fully filled to the very corner

It's actually more noticeable with sprites that don't fill every corner and with ones that use alpha channels. Imagine a walking character where the hand from the next sprite on the sheet bleeds into the current one: what you get is a line that floats in front of the character.

The more the object is scaled the worse it gets.

3 hours ago, Angelic Ice said:

Thus I do not really understand how adding a "border" would fix this?

In the case of an alpha image we use a transparent border, so there is no neighbouring color to sample and nothing mixes in. For a fully filled sprite we take the last pixel and "stretch" it outward.

Google "sprite edge padding", and also look into "sprite bleeding problems".
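For illustration, a rough sketch of both padding strategies (hypothetical helpers, same nested-list image format as before): a transparent border for alpha sprites, and duplicated edge pixels for fully filled ones.

```python
# Sketch only: the two borders described above, as plain-Python helpers.

def pad_transparent(pixels, border=1):
    """Surround an alpha sprite with fully transparent pixels, so the
    sampler has no neighbouring color to blend in."""
    width = len(pixels[0])
    clear = (0, 0, 0, 0)
    blank_row = [clear] * (width + 2 * border)
    padded = [[clear] * border + row + [clear] * border for row in pixels]
    return ([list(blank_row) for _ in range(border)] + padded +
            [list(blank_row) for _ in range(border)])

def pad_edge(pixels, border=1):
    """'Stretch' the outermost pixels of a fully filled sprite outward
    (edge padding), so border samples repeat the sprite's own colors."""
    padded = [[row[0]] * border + row + [row[-1]] * border for row in pixels]
    return ([list(padded[0]) for _ in range(border)] + padded +
            [list(padded[-1]) for _ in range(border)])
```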

 

3 hours ago, Angelic Ice said:

Sorry but I still cannot imagine the process... So, when I have a 64x64 sprite that I load 10 times onto the screen

There are so many misunderstandings; sorry, it's my fault, I should have made something clear from the start: art has nothing to do with the game. :)

If my collision box is 10 by 10 units, that tells me nothing about how many pixels it will cover. If I swap a 64*64 sprite for a 120*10 sprite, it will not affect collisions or gameplay at all. Rendering and gameplay are separate from each other.

So the thing you are describing here, with the 64*64 sprite being used 10 times, isn't how we calculate space. You aren't doing anything wrong, but it is a weird way of thinking about the object.

The post above is where I explain Unity's coordinates: Unity uses the axis/grid approach, as I explained there.
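A small sketch of that separation (hypothetical names and numbers, not Unity's actual API): gameplay lives entirely in abstract units, and only the draw step converts units to pixels, so swapping the sprite's pixel size changes nothing in the simulation.

```python
# Sketch only: gameplay in abstract units, pixels derived at draw time.

PIXELS_PER_UNIT = 32  # assumed conversion factor chosen by the developer

class Crate:
    def __init__(self, x, y):
        self.x, self.y = x, y                 # top-left corner, in units
        self.width, self.height = 10.0, 10.0  # collision box, in units

    def overlaps(self, other):
        # AABB test in units; sprite resolution never appears here.
        return (self.x < other.x + other.width and other.x < self.x + self.width and
                self.y < other.y + other.height and other.y < self.y + self.height)

    def screen_rect(self):
        # Only rendering converts units to pixels; a 64*64 or a 120*10
        # sprite can be drawn into this rect without touching gameplay.
        return (int(self.x * PIXELS_PER_UNIT), int(self.y * PIXELS_PER_UNIT),
                int(self.width * PIXELS_PER_UNIT), int(self.height * PIXELS_PER_UNIT))
```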

 

I feel like there are gaps when I am trying to explain things. Maybe if you tell me what it is you want to do, I can focus on explaining only what you need to know.

5 hours ago, Scouting Ninja said:

There are so many misunderstandings; sorry, it's my fault, I should have made something clear from the start: art has nothing to do with the game. :)

This is what I wanted to learn about in this thread. I know that art should be separated from physics, but I was not sure how to make art independent of resolution and then translate physics into graphics.

So borders are literally bonus pixels, used so that sampling methods do not blend in nearby colours. I think the rotating cube image did explain it fairly well: [Image: border-artifacts.png]

So what I see is that Unity is using abstract units. Their meaning is given by the developer. A unit represents a number that I decide.

Working with percentages would be one way to construct these units, I assume?

And about physics, if I say, an object is defined as such:

x-position: 0.5, y-position: 0.25, width: 0.2, height: 0.2

Is that okay to do? If there is no offset, I assume it is safe to calculate the collider's position like this. But I assume one would usually store the physics x, y, width, and height separately from the sprite's x and y.

One last thing: what's more common, scaling every sprite directly to the screen size, or rendering at the game's native size to a view and scaling that to the screen size? The first would require me to set the scale of each sprite manually, while the latter would make me set the scale of the view, I guess?

14 hours ago, Angelic Ice said:

I think the rotating cube image did explain it fairly well:

Perfectly, I will remember rotation for future reference.

14 hours ago, Angelic Ice said:

So what I see is that Unity is using abstract units. Their meaning is given by the developer. A unit represents a number that I decide.

Working with percentages would be one way to construct these units, I assume?

Yes, exactly. :D

Both Unity and the percentage method use abstract units. The difference is that with the grid we decide how many pixels a unit covers, while with the percentage method the unit is the resolution itself.

This is why I said they are just alterations of the percentage formula. It's easier when you stop thinking of it as a percentage and instead think of it like a grid.

[Image: ZoomedOut.jpg]

Once you start moving the camera around, both systems act exactly the same way: the math is identical and objects behave identically. You can't go wrong, no matter which you choose.

The percentage one is often used with games that don't have a moving camera. That way it is easy to find the location of every object and where it should be rendered.

The grid system is the most common one, because it allows more control and lets the developer decide what is what. It's preferred for "scene"-based games like the ones Unity and Unreal make.
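A sketch of the two conventions side by side (assumed screen size and unit scale, hypothetical helpers): both are abstract units; only the unit-to-pixel factor differs.

```python
# Sketch only: percentage units vs. grid units mapping to the same pixels.

SCREEN_W, SCREEN_H = 800, 600

def percent_to_pixels(px, py):
    # Percentage method: the unit is the resolution itself, so 0..1 spans it.
    return px * SCREEN_W, py * SCREEN_H

PIXELS_PER_UNIT = 40  # grid method: the developer picks this number

def grid_to_pixels(gx, gy):
    return gx * PIXELS_PER_UNIT, gy * PIXELS_PER_UNIT

# The same screen position, expressed in either system:
assert percent_to_pixels(0.5, 0.5) == grid_to_pixels(10, 7.5)  # (400, 300)
```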

15 hours ago, Angelic Ice said:

And about physics, if I say, an object is defined as such:

x-position: 0.5, y-position: 0.25, width: 0.2, height: 0.2

Is this based on screen size? If so, it isn't wrong, but it has a common downside; let's take an example. If it's relative to the screen size and the screen is 800*600, then (0.5 * 800, 0.25 * 600, 0.2 * 800, 0.2 * 600) gives:

position = (400, 150) and size = (160, 120). See the problem?

[Image: Result.jpg]

0.2 by 0.2 is the intended sprite size, but because the screen is 4:3 we get a rectangle and not a square. If the screen is a widescreen, the result is an even longer rectangle.

Is this what you meant?
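One common way around that downside (my assumption here, not something stated above) is to scale both axes by a single uniform factor instead of scaling each axis by its own screen dimension:

```python
# Sketch only: per-axis vs. uniform scaling of the 0.2 x 0.2 sprite.

SCREEN_W, SCREEN_H = 800, 600  # the 4:3 screen from the example above
w_rel, h_rel = 0.2, 0.2        # intended square sprite, relative size

# Per-axis scaling reproduces the stretched rectangle from the example:
naive = (w_rel * SCREEN_W, h_rel * SCREEN_H)  # (160.0, 120.0): stretched

# Scaling both axes by the same factor keeps the sprite square:
scale = min(SCREEN_W, SCREEN_H)               # one uniform unit size
uniform = (w_rel * scale, h_rel * scale)      # (120.0, 120.0): square
```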

 

16 hours ago, Angelic Ice said:

One last thing: what's more common, scaling every sprite directly to the screen size, or rendering at the game's native size to a view and scaling that to the screen size? The first would require me to set the scale of each sprite manually, while the latter would make me set the scale of the view, I guess?

The rendering is scaled.


I gave all these concepts a bit more time, but I still have some questions:

When does the conversion from abstract values to real values happen? When I load an object, do I simply replace the abstract value with the real one, or does that happen just before drawing, so that I keep working with abstract units all the time?

The issue I have with this: when the sprite moves, the graphics position changes too, but do I mutate the abstract values? This is a bit mind-boggling to me.

Also, on another note: when my sprites are 1024x1024, simply because they have a high resolution, how do I handle the game's native screen size? We talked about down- and upsampling between 1280x720 and 1920x1080, but this would result in a very large size. If I have a row of 10 of them and I want them to fit into what the player sees, I would already need a format at least 10240 pixels wide; I think that might be a bit brutal for a game's native size.

Should I lower the size of my sprites? How do quality settings in games come into play? Do they actually draw to a larger native size with higher texture resolution, or do they draw simpler/more complex textures? Or do they start with textures smaller than intended (e.g. 32x32 instead of 64x64) and upsample them, until the settings are set to Very High and the original size (e.g. 64x64) is used?

 

3 hours ago, Angelic Ice said:

When does the conversion from abstract values to real values happen?

All the time? This is confusing because there are lots of different ways of doing it. Most of the time you program on a 1:1 scale and the camera turns it into a 4:3 scale.

In other words, this isn't something you do yourself, unless you also want to build your own render system on top of a graphics API (Application Programming Interface) such as OpenGL or DirectX.

3 hours ago, Angelic Ice said:

The issue I have with this: when the sprite moves, the graphics position changes too, but do I mutate the abstract values? This is a bit mind-boggling to me.

If your sprite moves 1 on the X and your scale is 3:4 (3 pixels per unit on X, 4 on Y), then it will move 3 pixels on the X axis when rendered; moving 1 on the Y moves it 4 pixels on the Y axis. You don't do this yourself; your rendering API or engine will.
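To make the "do I mutate abstract values?" part concrete, here is a sketch of a typical frame (hypothetical loop, no real engine API): movement only ever touches unit-space values, and the pixel conversion is re-derived at every draw.

```python
# Sketch only: gameplay mutates units; draw() derives pixels each frame.

SCALE_X, SCALE_Y = 3, 4  # the 3:4 pixels-per-unit example from above

x, y = 0.0, 0.0  # abstract unit position: the only state you mutate

def update(dx, dy):
    global x, y
    x += dx  # gameplay moves in units...
    y += dy

def draw():
    # ...and rendering converts units to pixels each frame; no pixel
    # value is ever written back into the gameplay state.
    return int(x * SCALE_X), int(y * SCALE_Y)

update(1, 0)
assert draw() == (3, 0)  # moving 1 unit on X renders 3 pixels over
update(0, 1)
assert draw() == (3, 4)
```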

3 hours ago, Angelic Ice said:

Also, on another note: when my sprites are 1024x1024, simply because they have a high resolution, how do I handle the game's native screen size? We talked about down- and upsampling between 1280x720 and 1920x1080, but this would result in a very large size. If I have a row of 10 of them and I want them to fit into what the player sees, I would already need a format at least 10240 pixels wide; I think that might be a bit brutal for a game's native size.

If your sprite is 1024*1024 but only takes up 64*64 pixels on screen, the API will keep down sampling until it has a 64*64 image to fill those pixels. This takes a long time.

It can't render 1024*1024 in a 64*64 space, no matter how much it wants to.

So to avoid this, mipmapping is used. Mip maps do the sampling ahead of time and store the results with the "sprite", instead of sampling the large texture at render time and wasting performance.

See this video: https://drive.google.com/file/d/107_vHcHx_wByMARYj-LOgXnQYVyDhQEj/view?usp=sharing

This was made using a DDS texture that allows me to make custom mips for special effects.

 

So, in other words, if you used 10 sprites, each of them 1024*1024, and rendered them on a 1280x720 screen, then each sprite would end up 1280/10 x 720/10 = 128x72 pixels, meaning X has to be down sampled by a factor of 8 and Y by a factor of about 14 to fit on screen.

This means you are reading roughly 14 times more texture data than needed to actually display that sprite. With a mipmap and anisotropic filtering you would be able to render it around 10 times faster and get better results.
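A back-of-the-envelope sketch of that mip selection (my arithmetic, assuming the standard halving chain 1024, 512, 256, 128, 64, ...): the renderer can grab the prefiltered mip closest to the on-screen size instead of filtering the full texture every frame.

```python
# Sketch only: which mip level fits a given on-screen size.
import math

def mip_level(texture_size, screen_size):
    """First mip level (0 = full size) whose resolution fits the
    on-screen size, assuming each level halves the previous one."""
    return max(0, math.ceil(math.log2(texture_size / screen_size)))

# A 1024-texel axis drawn into the 128x72 slot from the example:
level_x = mip_level(1024, 128)  # 3 -> 1024 / 2**3 = 128 texels wide
level_y = mip_level(1024, 72)   # 4 -> 1024 / 2**4 = 64 texels tall

# Sampling a prefiltered 128x64 mip touches a tiny fraction of the
# texels that filtering the raw 1024x1024 image would.
print(level_x, level_y)
```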

 

In other words, you will only use a 1024*1024 sprite if the sprite covers about 90% of the screen and you know most players will be using a screen of roughly 1280x720.

Bigger sprites do not mean better quality unless your players have bigger screens. If you used a 4K (4096x4096) sprite, it would never display at that size on a 1280x720 screen no matter what you do; the screen just doesn't have enough pixels, so the sprite has to be down sampled.

 

Also, 10 copies of the same sprite don't use 10 texture sheets, so you won't need a 10240*1024 texture; you just render the same sprite more than once. Only 10 unique sprites of 1024*1024 would need a texture of that size, and you would pack it better, like this:

10240*1024 -> 4096*4096, aka a 4K texture, and it will look something like this:

From this: [Image: 10By1.jpg — ten sprites in a single 10240*1024 strip]

To this: [Image: 4By4.jpg — the same sprites packed into a 4096*4096 grid]

Computers like working with power-of-two sizes, meaning that the packed image actually loads and renders faster than the strip above. This is known as an atlas or tile sheet.
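A sketch of that repacking arithmetic (sizes taken from the example above, hypothetical helper): find the smallest power-of-two square that holds a given number of fixed-size tiles in a grid.

```python
# Sketch only: smallest power-of-two square atlas for N equal tiles.

def atlas_size(tile, count):
    side = tile
    while (side // tile) ** 2 < count:
        side *= 2  # stay on powers of two: hardware prefers them
    return side

# Ten unique 1024x1024 sprites: a 1x10 strip would be 10240*1024,
# but a 4x4 grid fits them all into one 4096*4096 (4K) atlas.
assert atlas_size(1024, 10) == 4096
```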

 

What is it you are trying to do? Are you making a rendering API? An engine? A game?

The knowledge you need at the moment depends on what you are doing. The stuff I am telling you took me years of experience as a professional 3D artist and a hobby 2D artist to learn (3D and 2D are the same stuff, just with different names).

There is a lot of information you need before these things make sense, and a lot of it you don't need to know to make games.

