Androphin

Member
  • Content Count

    15
  • Joined

  • Last visited

Community Reputation

107 Neutral

About Androphin

  • Rank
    Member

Personal Information

  • Interests
    Art
    Audio
    Business
    Design
    DevOps
    Education
    Production
    Programming
    QA
  1. Androphin

    Auth tokens

     If I get this right, with JWT a secure connection over HTTPS isn't really necessary, because the sent data is already strongly encrypted and gets decrypted on each side?
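     For context, a minimal sketch of what a standard signed (JWS-style) token of the usual header.payload.signature form carries (plain Java; the claims are illustrative, and encrypted JWE tokens work differently):

         import java.nio.charset.StandardCharsets;
         import java.util.Base64;

         public class JwtPeek {
             public static void main(String[] args) {
                 String headerJson  = "{\"alg\":\"HS256\",\"typ\":\"JWT\"}";
                 String payloadJson = "{\"sub\":\"player1\",\"exp\":1516239022}";

                 // header and payload are base64url-encoded JSON; the third part is a signature
                 Base64.Encoder enc = Base64.getUrlEncoder().withoutPadding();
                 String token = enc.encodeToString(headerJson.getBytes(StandardCharsets.UTF_8)) + "."
                              + enc.encodeToString(payloadJson.getBytes(StandardCharsets.UTF_8)) + "."
                              + "<signature>";

                 // Anyone holding the token can read the first two parts back; the signature
                 // lets the receiver detect tampering, it does not hide the content.
                 String[] parts = token.split("\\.");
                 String payloadBack = new String(Base64.getUrlDecoder().decode(parts[1]), StandardCharsets.UTF_8);
                 System.out.println(payloadBack); // {"sub":"player1","exp":1516239022}
             }
         }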
  2. Look here: https://social.msdn.microsoft.com/Forums/de-DE/7f22624f-d8c9-435b-a546-f1fc470bfb5b/vs2015-c-project-needs-ucrtbaseddll-how-to-install
  3. Hello, I have problems deciding some design questions. About the game: it's like Anno 1404, but multiplayer only.

     1) Accessing the GUI directly (object interaction)
     Should the Notifier have some kind of queue, or should I call it directly, since the player expects an immediate response? A queue would maybe be overkill performance-wise if its processor runs its loop in the render method/loop; as of writing these lines I don't even know why I would have considered choosing a queue in the first place. Searching for a solution to my problems, I came across this article http://gameprogrammingpatterns.com/event-queue.html and it made me wonder whether it is good practice, or even needed for my problems, as a way to wire objects and handle asynchronous tasks and object communication. I don't know if it's a good idea/programming style to let the ActionController do everything as a central station, or to let objects run their own event chain, e.g. a request is triggered from the ActionController, the RequestBuilder sends its request directly to the NetworkManager, and the NetworkManager can call the GUI directly.

     2) Observing game objects
     From time to time the client sends a request (carrying the actions done by the player) to the server to get a new update of the game state and to check whether the actions are allowed and possible. The server calculates and sends back the "cleaned" game state. The client receives the approved game state from the server and transitions/interpolates from its current state to the server's, e.g. undoing disallowed actions with a message in the player's notification feed about what was reverted (not the feed shown in the image above, but an extra one). So there is one observer for the overall game state and smaller ones that observe game objects, such as buildings on the grid that produce items. Is it good to use an observer pattern here? I can't say why, but my intuition says: observers for state the player doesn't initiate directly (burning fuel, producing units over time, ...), listeners for direct actions the player takes (pressing a button to log in, build a building, move an item on the map, pay for something, trade).

     A better example: the player wants to research "Ironswords", so he presses the button for this research. The listener notices the event and creates an observer (or subscribes that event to an existing research observer) which observes the progress over time. If the research is finished after a certain amount of time, a notification is added to the player's feed and the event unsubscribes from the observer, or the observer is killed.
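     To make that last example concrete, a rough sketch of what I have in mind (plain Java; ResearchJob, ResearchObserver and NotificationFeed are just illustrative names, not an existing codebase):

         import java.util.ArrayList;
         import java.util.List;

         interface ResearchObserver {
             void onFinished(String researchName);
         }

         class NotificationFeed implements ResearchObserver {
             @Override
             public void onFinished(String researchName) {
                 System.out.println("Research finished: " + researchName); // add to the player's feed
             }
         }

         class ResearchJob {
             private final String name;
             private final float durationSeconds;            // time until the research completes
             private float elapsed = 0f;
             private final List<ResearchObserver> observers = new ArrayList<>();

             ResearchJob(String name, float durationSeconds) {
                 this.name = name;
                 this.durationSeconds = durationSeconds;
             }

             void subscribe(ResearchObserver o)   { observers.add(o); }
             void unsubscribe(ResearchObserver o) { observers.remove(o); }

             /** Called once per tick from the game loop; returns true when the job is done and can be dropped. */
             boolean update(float deltaSeconds) {
                 elapsed += deltaSeconds;
                 if (elapsed < durationSeconds) return false;
                 for (ResearchObserver o : observers) o.onFinished(name);
                 observers.clear();                           // "the observer gets killed"
                 return true;
             }
         }

     The button listener would then only do something like new ResearchJob("Ironswords", 30f), subscribe the feed to it, and hand the job to whatever updates it every frame.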
  4. Hello, I'm currently working on a mobile MMO game. It's kind of an RTS, but not really, because at this point there are no units/players you would see moving around. (Imagine Minecraft without players or animals: you just see whether a block is set, whether a machine is working, and its current state.) To provide near real-time updates to the players, I thought using Erlang with a headless protocol might be the best solution here. My thoughts:
     - Every transmission with sensitive data needs to be secure (especially the players' personal data).
     - To avoid the overhead of TCP and handshaking on every request, a new/existing player only performs these time- and resource-expensive steps at registration or login. The resulting session is, let's say, valid for 12 h and allows the player to receive/send data.
     - One Erlang module is listening on port 80 (kind of an Erlang web server) to serve HTML pages (anyone can access them).
     Primitive concept sketch:
     Because I'm lacking experience, especially in which protocols to choose, I want to ask for your opinions on that. I looked into several protocols like UDP, RUDP or DTLS, TCP or SRTP and tried to apply them in the way that made the most sense to me (colored in the image), but I'm not really sure. Thanks in advance!
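     To illustrate the session idea (just a sketch; SessionToken and its fields are made-up names, not part of any existing code): the expensive work happens once at login, and every later request only presents a token that the server checks against the validity window.

         import java.time.Duration;
         import java.time.Instant;

         public class SessionToken {
             static final Duration VALIDITY = Duration.ofHours(12);   // the "12 h valid" window from above

             final String tokenId;       // random, unguessable id handed out after registration/login
             final Instant issuedAt;

             SessionToken(String tokenId) {
                 this.tokenId = tokenId;
                 this.issuedAt = Instant.now();
             }

             /** Cheap per-request check instead of a full handshake. */
             boolean isValid(Instant now) {
                 return now.isBefore(issuedAt.plus(VALIDITY));
             }
         }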
  5. I already tried this, and the problem is that the sprites are drawn in the camera's y/x space (origin 0,0 is the bottom-left corner): obs_stream_2018-06-21-0855-06.mp4 and not on the isometric grid anymore: obs_stream_2018-06-21-0859-14.mp4
  6. Exactly, the image "planes" are not parallel to the projection matrix/viewport of the camera, because the camera is looking down at the origin at a 45 degree angle and the sprites get drawn on y/x. This image shows how the sprites are drawn. The rotation matrix rotates the sprites 45 degrees around the y-axis. In your image, the one in the middle shows how I want it. My sprites are perpendicular to the isometric grid; that's why they look skewed.
  7. I'm using an orthographic camera. Due to the direction and position of the cam, I needed a rotation matrix for the isometric perspective (as described in my 3rd post). But the image is drawn at the default y/x, because I don't apply the rotation matrix to it. The sprite batch's projection matrix is set to the orthographic camera's combined one. Source code:

         package com.androtest.iso;

         import com.badlogic.gdx.ApplicationAdapter;
         import com.badlogic.gdx.Gdx;
         import com.badlogic.gdx.InputAdapter;
         import com.badlogic.gdx.graphics.*;
         import com.badlogic.gdx.graphics.g2d.Sprite;
         import com.badlogic.gdx.graphics.g2d.SpriteBatch;
         import com.badlogic.gdx.graphics.g2d.SpriteCache;
         import com.badlogic.gdx.graphics.glutils.ShapeRenderer;
         import com.badlogic.gdx.math.Intersector;
         import com.badlogic.gdx.math.Matrix4;
         import com.badlogic.gdx.math.Plane;
         import com.badlogic.gdx.math.Vector3;
         import com.badlogic.gdx.math.collision.Ray;

         public class Init extends ApplicationAdapter {
             public static final int SCREEN_WIDTH = 800;
             public static final int SCREEN_HEIGHT = 480;

             int WORLD_WIDTH = 200;
             int WORLD_HEIGHT = 200;
             float screenAspectRatio;

             Texture terrain;
             Texture squareDummy;
             Texture tex;
             SpriteBatch batch;

             public class OrthoCamController extends InputAdapter {
                 final OrthographicCamera camera;
                 final Plane xzPlane = new Plane(new Vector3(0, 1, 0), 0);
                 final Vector3 intersection = new Vector3();
                 Sprite lastSelectedTile = null;
                 int zoomLevel = 0;

                 public OrthoCamController(OrthographicCamera camera) {
                     this.camera = camera;
                 }

                 final Vector3 curr = new Vector3();
                 final Vector3 last = new Vector3(-1, -1, -1);
                 final Vector3 delta = new Vector3();

                 @Override
                 public boolean touchDragged(int x, int y, int pointer) {
                     Ray pickRay = camOrtho.getPickRay(x, y);
                     Intersector.intersectRayPlane(pickRay, xzPlane, curr);
                     if (!(last.x == -1 && last.y == -1 && last.z == -1)) {
                         pickRay = camOrtho.getPickRay(last.x, last.y);
                         Intersector.intersectRayPlane(pickRay, xzPlane, delta);
                         delta.sub(curr);
                         camOrtho.position.add(delta.x, delta.y, delta.z);
                     }
                     last.set(x, y, 0);
                     return false;
                 }

                 @Override
                 public boolean touchUp(int x, int y, int pointer, int button) {
                     last.set(-1, -1, -1);
                     return false;
                 }
             }

             OrthographicCamera camOrtho;
             OrthoCamController ctrl;

             final Sprite[][] sprites = new Sprite[3][3];
             final Matrix4 XYtoXZmatrix = new Matrix4();
             final Matrix4 imageFaceCameraMatrix = new Matrix4();
             ShapeRenderer sr;

             static final int LAYERS = 1;
             static final int TILES_X = 20;
             static final int TILES_Z = 20;
             static final int TILE_WIDTH = 30;
             static final int TILE_HEIGHT = 30;
             static final int TILE_HEIGHT_DIAMOND = 28;

             SpriteCache[] caches = new SpriteCache[LAYERS];
             int[] layers = new int[LAYERS];

             @Override
             public void create() {
                 terrain = new Texture(Gdx.files.internal("tile_grass.png"));
                 squareDummy = new Texture(Gdx.files.internal("b2.png"));

                 Pixmap pxm = new Pixmap(2, 2, Pixmap.Format.RGBA8888);
                 pxm.setColor(0f, 0.5f, 1f, 0.2f);
                 pxm.fill();
                 tex = new Texture(pxm);

                 batch = new SpriteBatch();

                 // float cast added, otherwise getWidth()/getHeight() is integer division
                 camOrtho = new OrthographicCamera(WORLD_WIDTH,
                         WORLD_HEIGHT * ((float) Gdx.graphics.getWidth() / Gdx.graphics.getHeight()));
                 camOrtho.position.set(WORLD_WIDTH * 2, WORLD_HEIGHT, WORLD_WIDTH * 2);
                 camOrtho.direction.set(-1, -1, -1);
                 camOrtho.near = 1;
                 camOrtho.far = 10000;
                 camOrtho.update();

                 ctrl = new OrthoCamController(camOrtho);
                 Gdx.input.setInputProcessor(ctrl);

                 XYtoXZmatrix.setToRotation(new Vector3(1, 0, 0), 90);
                 //imageFaceCameraMatrix.setToRotation(new Vector3(0, 1f, 0), 45);
                 //imageFaceCameraMatrix.setToRotation(new Vector3(1f, 0f, -1f), -45);
                 //imageFaceCameraMatrix.setToRotation(new Vector3(1f, 0f, 1f), 30);
                 imageFaceCameraMatrix.setToLookAt(new Vector3(1f, 1f, 1f), new Vector3(0, 1, 0));

                 sr = new ShapeRenderer();

                 for (int z = 0; z < 3; z++) {
                     for (int x = 0; x < 3; x++) {
                         sprites[x][z] = new Sprite(terrain);
                         sprites[x][z].setPosition(x, z);
                         sprites[x][z].setSize(TILE_WIDTH, TILE_HEIGHT);
                         sprites[x][z].flip(false, true);
                     }
                 }

                 for (int i = 0; i < LAYERS; i++) {
                     caches[i] = new SpriteCache();
                     SpriteCache cache = caches[i];
                     cache.beginCache();
                     int colX = 0;
                     int colZ = 0;
                     for (int x = 0; x < TILES_X; x++) {
                         for (int z = 0; z < TILES_Z; z++) {
                             int tileX = colX + x * TILE_WIDTH;
                             int tileZ = colZ + z * TILE_HEIGHT;
                             cache.add(tex, tileX * 1.1f, tileZ * 1.1f, 0, 0, TILE_WIDTH, TILE_HEIGHT);
                         }
                     }
                     layers[i] = cache.endCache();
                 }
             }

             @Override
             public void render() {
                 Gdx.gl.glClearColor(0.154f, 0.200f, 0.184f, 1f);
                 Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
                 Gdx.gl.glEnable(GL20.GL_BLEND);
                 Gdx.gl.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);

                 camOrtho.update();

                 for (int i = 0; i < LAYERS; i++) {
                     SpriteCache cache = caches[i];
                     cache.setProjectionMatrix(camOrtho.combined);
                     cache.setTransformMatrix(XYtoXZmatrix);
                     cache.begin();
                     cache.draw(layers[i]);
                     cache.end();
                 }

                 batch.setProjectionMatrix(camOrtho.combined);
                 batch.setTransformMatrix(imageFaceCameraMatrix);
                 batch.begin();
                 batch.draw(squareDummy, 0, 0);
                 batch.end();

                 sr.setProjectionMatrix(camOrtho.combined);
                 sr.setTransformMatrix(XYtoXZmatrix);
                 sr.begin(ShapeRenderer.ShapeType.Line);
                 sr.setColor(1, 1, 1, 1);
                 sr.line(0, 0, 500, 0);
                 sr.line(0, 0, 0, 500);

                 sr.setTransformMatrix(new Matrix4().setToRotation(new Vector3(1, 0, 0), 0));
                 // x
                 sr.setColor(Color.RED);
                 sr.line(0, 0, 0, 500, 0, 0);
                 // y
                 sr.setColor(Color.GREEN);
                 sr.line(0, 0, 0, 0, 500, 0);
                 // z
                 sr.setColor(Color.BLUE);
                 sr.line(0, 0, 0, 0, 0, 500);
                 sr.end();
             }

             @Override
             public void resize(int width, int height) {
                 camOrtho.viewportWidth = width;
                 camOrtho.viewportHeight = height;
                 camOrtho.update();
             }

             @Override
             public void dispose() {
                 terrain.dispose();
                 squareDummy.dispose(); // was building.dispose(), but there is no 'building' field in this class
                 tex.dispose();
                 batch.dispose();
                 sr.dispose();
             }
         }
  8. That's exactly the problem I have with my dummy image. I don't know how to adjust the image's matrix to the viewport's one.
  9. Hello and thank you. I had to look up what a vertex buffer is and what it does. I'm guessing an image is like a plane, defined by 4 vertices? I will build the system you suggested; it's far easier than my approach. I just want to know how the square result would be possible with the current system. As you mentioned, it looks skewed, but it's the same square image I posted in my second post. So I thought I need to find a vector that displays it, and all the other images, square to the cam.
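     To check my understanding of "an image is a plane defined by 4 vertices", a minimal sketch using libGDX's Mesh (this would go into create(); the attribute aliases and the exact layout are just my assumption of a typical textured quad):

         // one image = one quad: 4 vertices, each with a position (x, y, z) and a
         // texture coordinate (u, v), drawn as two triangles
         Mesh quad = new Mesh(true, 4, 6,
                 new VertexAttribute(VertexAttributes.Usage.Position, 3, "a_position"),
                 new VertexAttribute(VertexAttributes.Usage.TextureCoordinates, 2, "a_texCoord0"));
         quad.setVertices(new float[] {
                 // x,  y,  z,   u,  v
                 0f, 0f, 0f,  0f, 1f,    // bottom left
                 1f, 0f, 0f,  1f, 1f,    // bottom right
                 1f, 1f, 0f,  1f, 0f,    // top right
                 0f, 1f, 0f,  0f, 0f }); // top left
         quad.setIndices(new short[] { 0, 1, 2, 2, 3, 0 }); // the two triangles of the quad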
  10. Thanks. I thought of something like this, but had doubts about how things would behave later, when I want to select a building to interact with. But with an invisible isometric grid as a kind of logic grid, I can just check which tile is clicked and which building occupies it. I like the simplicity of your approach. What I currently have is an orthographic camera looking with direction vector (-1,-1,-1) at the origin. A transform matrix rotates the default Y-X plane to Z-X by 90° around the x-axis. So I'm not sure if it's a good solution to give each building image a transform matrix as well, to make it face the camera. This seems like too much compared to your solution, which just works on Y-X. Am I overcomplicating things? Even so, I can't figure out the right rotation vector to make the images face the camera squarely. Maybe I should refresh some math? With

         final Matrix4 imageFaceCameraMatrix = new Matrix4();
         imageFaceCameraMatrix.setToRotation(new Vector3(0, 1f, 0), 45);

     I get the following (not square). I tried chaining the two matrices, with no success. I just discovered the multiplication methods of the Matrix4 class; maybe the key to a solution is there.
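     A sketch of what I might try next (assuming the untransformed sprite quad's normal is +Z, and reusing camOrtho/batch from the code I posted above; this doesn't constrain the roll around the view direction, so it may still need adjusting):

         // rotate the sprite plane's normal so it points back against the view direction
         Matrix4 faceCam = new Matrix4().setToRotation(
                 new Vector3(0f, 0f, 1f),              // normal of the untransformed sprite quad
                 camOrtho.direction.cpy().scl(-1f));   // ...rotated to look back at the camera
         batch.setTransformMatrix(faceCam);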
  11. Okay, so it seems that I don't understand some tricks about how images work on an isometric grid. If you have a look at a finished game, it looks like there are no image tiles for the buildings; instead, full 2D images are applied to a predefined grid space at once. The windmill occupies a 2x2 grid space (red lines), but needs many more tiles to display the image? (blue lines) In the violet circle, the building takes a 3x3 space on the grid. Is the image composed of tiles, or is it one single image? My last guess would be 3D objects. If I'm drawing an image on my grid, it looks like [first image], but it should be drawn like [second image]. Is this a camera issue, or am I just too stupid to understand something major about isometric perspective? Or does this have something to do with the source image? All the isometric graphics of buildings, trees, and so on that I found on the net are square images already in isometric perspective, like the gray box. Some are a set to compose a building, as seen here https://en.wikipedia.org/wiki/File:Tile_set.png, but buildings composed from such tiles don't look as good as, for example, an image like this one (from a 3D game). That's what I'm trying to achieve: bake a 2D image of such a building in isometric perspective in Blender, and get it onto the grid as a whole, occupying grid space depending on the size of the building. Not possible?
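     To make clear what I mean by "one single image on a logical footprint", a rough sketch (plain libGDX 2D with the classic 2:1 isometric projection; IsoBuildingDrawer and the tile sizes are made-up names/values):

         import com.badlogic.gdx.graphics.Texture;
         import com.badlogic.gdx.graphics.g2d.SpriteBatch;

         // The logic grid only records which cells a building occupies; the building itself
         // is ONE pre-rendered image anchored at its footprint cell and may visually cover
         // far more tiles than its logical footprint.
         public class IsoBuildingDrawer {
             static final int TILE_W = 64;   // assumed 2:1 diamond tile size
             static final int TILE_H = 32;

             // grid cell -> screen position of that cell's diamond origin
             static float screenX(int gx, int gy) { return (gx - gy) * TILE_W * 0.5f; }
             static float screenY(int gx, int gy) { return (gx + gy) * TILE_H * 0.5f; }

             // draw a building occupying e.g. 2x2 cells as a single image
             static void draw(SpriteBatch batch, Texture buildingImage, int gx, int gy) {
                 float x = screenX(gx, gy) - buildingImage.getWidth() * 0.5f; // center on the anchor cell
                 float y = screenY(gx, gy);                                   // image base sits on the cell
                 batch.draw(buildingImage, x, y);
             }
         }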
  12. Hello, I'm asking myself what the better approach is, especially for smartphones with lower computing power: animating some sprites together into a transition, or pre-rendering a short video that is loaded and played from internal storage. Transition with sprites, I guess: https://youtu.be/FzbaZ9DIbaU?t=31 Transition as a video (imagine you would dive into the clouds): https://youtu.be/uK2l6Yrtqhg?t=1121 What's your experience with that? Thanks in advance!
  13. Hello, I'm not sure whether my approach to an isometric grid for a mobile game is a good one. Please have a look at the image. The 100x100 grid is just for positioning the images on the grid. The background comes from the base tile; it is one image and comes in different resolutions for different screen densities. Is this even possible, or a performance killer? Also, I chose 1 unit = 1 meter side length for a tile in world space, because I'm also modeling the buildings in Blender with a 1 m reference object, to get a better sense of the dimensions they would take up in world space. Your thoughts on it? Thanks in advance! Additional information: I'm using libGDX, and the default y/x plane is transformed by a transformation matrix to the z/x plane. I don't know if this is common, but a blog post (https://www.badlogicgames.com/wordpress/?p=2032) did it that way.
  14. Thanks to both of you! I've read about distance fields and I partly understand the concept. A problem I have: the (seamless) textures/patterns in the graphics program (I'm using GIMP), which are raster graphics, are not available in the vector graphics program (Inkscape), and the results I created in GIMP look better so far. But when it comes to scaling down, the following happens. Smartphone HD (1920x1080) texture graphic: https://i.imgur.com/BABxKV5.png Smartphone ldpi (320x240), downscaled texture graphic on top. Smartphone ldpi (320x240), downscaled but with the texture reapplied afterwards, on the bottom. The downscaled texture becomes so fine/small that it doesn't seem "natural" to me, because I can only see the "true" shape of the bubbles on the HD screen; the ldpi screen shows a different image if it's just downscaled. It seems to me that I'm not noticing something major or making a serious mistake in my thinking. Maybe I don't understand some basics about scaling, because https://developer.android.com/training/basics/supporting-devices/screens.html talks about dpi and I'm so pixel-focused!?
  15. Hello, how do you handle your UI graphics for different screen resolutions? 1) Use scalable vector graphics from the beginning to up- or downscale the UI graphics for the target screen? 2) Create graphics at HD (1920x1080) resolution and up-/downscale them for different screens, or create them at 4K resolution in the first place and only downscale, which will result in some of the used textures becoming too small? (I don't know the specific term for that problem.) Thanks in advance! Regards