
  1. Hello all! I want to use the right analog stick of my gamepad to throw a rect based on the angle and intensity of my flick. The gentler the flick, the less force gets applied to the rect's rigidbody velocity. I should be able to move the character left and right and toss this rect at the same time. A good example of this gameplay mechanic would be Skate (Xbox 360); granted, Skate is a lot more complex and 3D, and I just want to toss a rect. So far I've kind of figured it out, but I'm incredibly dissatisfied with the results. I'm able to get the direction of my flick, but it's so sensitive that sometimes I repeatedly toss the rect straight upwards. I also don't feel much control over the strength of the toss. Long story short, I don't feel in control of my flick functionality. The functionality is only one script; if any of you have suggestions on improving it, they would be greatly appreciated. I'll share the code below, and you can also download this little project. It's made for playing with a gamepad, though. Thanks in advance!
```csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class Player : MonoBehaviour
{
    [SerializeField] private GameObject boxObject;

    private Rigidbody2D rigidbody;
    private Vector2 leftInput;
    private Vector2 rightInput;
    private float timeFlicking = 0.0f;

    // Flags
    private bool flicking = false;

    private void Start()
    {
        rigidbody = GetComponent<Rigidbody2D>();
    }

    private void Update()
    {
        // Will use later
        if (flicking) timeFlicking += Time.deltaTime;

        leftInput = new Vector2(Input.GetAxis("Horizontal"), Input.GetAxis("Vertical"));
        rightInput = new Vector2(Input.GetAxis("Right Horizontal"), Input.GetAxis("Right Vertical"));
        float rightInputMagnitude = rightInput.magnitude;

        // Right analog stick moved, and make sure we're not flicking already
        if (rightInput != Vector2.zero && !flicking)
        {
            Debug.Log("Flicking!");
            CreateBox();
            flicking = true;
        }

        if (rightInput == Vector2.zero)
        {
            if (flicking)
            {
                flicking = false;
            }
        }

        Vector2 newVelocity = leftInput * new Vector2(10, 10);
        newVelocity.y = 0;
        rigidbody.velocity = newVelocity;
    }

    private void CreateBox()
    {
        GameObject box = Instantiate(boxObject, transform.position, Quaternion.identity);
        Rigidbody2D rb = box.GetComponent<Rigidbody2D>();

        Vector2 dir = rightInput.normalized;
        box.transform.position = (Vector2) box.transform.position + dir;

        // I should scale this new velocity vector with the magnitude of my rightInput vector?
        rb.velocity = rightInput * new Vector2(40, 40);
    }
}
```

Project (6 MB): https://files.fm/u/ru8p9rgs
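One pattern that usually tames this twitchiness (a sketch of the idea in plain JavaScript, not Unity code; `FlickTracker`, `deadZone`, and `maxThrowSpeed` are names I made up for illustration): ignore the stick inside a dead zone, remember the *peak* deflection while the flick is in progress, and only throw when the stick returns to rest. Throwing on the very first frame the stick leaves center is what makes the direction feel random, because the earliest samples point almost anywhere.

```javascript
// Sketch: track a flick by remembering the strongest stick sample seen,
// and only produce a throw when the stick falls back into the dead zone.
class FlickTracker {
  constructor(deadZone = 0.25, maxThrowSpeed = 40) {
    this.deadZone = deadZone;         // ignore tiny stick wobble
    this.maxThrowSpeed = maxThrowSpeed;
    this.peak = null;                 // strongest sample of the current flick
  }

  // Feed one stick sample {x, y} per frame.
  // Returns a throw velocity {x, y} once per flick, otherwise null.
  sample(stick) {
    const mag = Math.hypot(stick.x, stick.y);

    if (mag > this.deadZone) {
      // Inside a flick: keep the strongest deflection, not the latest one
      if (!this.peak || mag > Math.hypot(this.peak.x, this.peak.y)) {
        this.peak = { x: stick.x, y: stick.y };
      }
      return null;
    }

    if (this.peak) {
      // Stick released: throw using the peak direction and strength
      const p = this.peak;
      this.peak = null;
      const pm = Math.hypot(p.x, p.y);
      const strength = Math.min(pm, 1) * this.maxThrowSpeed;
      return { x: (p.x / pm) * strength, y: (p.y / pm) * strength };
    }

    return null; // idle
  }
}
```

The same structure ports to `Update()`: feed `rightInput` in every frame, and when a vector comes back, apply it to the box's rigidbody velocity.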
  2. Hey guys, thanks again for all the help! I've definitely taken all the advice given to me. I changed to Simplex 3D and I'm using spherical coordinates. I love the results now because there's no stretching. I had trouble understanding why 2D noise wasn't the best choice, but all the explanations helped. (Not the real colors of the planet, just marking areas.)
  3. Gnoll, thanks for answering! I'm definitely going to try Simplex; I found an implementation here. Isn't Simplex under a strict patent, though? I appreciate the answer to the seam question too; I'll definitely give it a try, that was really annoying me. As for cutting off certain areas of the noise, I found the answer in this video. The person who made the tutorial calls it a "falloff map". The tutorial is short too, 11 minutes long. (Could use some improvement, but I'm getting there.) I also found out why my texture looked stretched: I was actually stretching the texture! I'm creating the texture dynamically and was making it 512x512, but spheres apparently prefer wide images. So I went with 1000x500 and it looks so much nicer.
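For anyone else landing here, the "falloff map" the tutorial describes can be sketched in a few lines (an assumed shape, not the tutorial's exact code): a gradient that is 0 at the centre of the map and 1 at the edges, subtracted from the noise so land can never reach the texture borders.

```javascript
// Square-shaped falloff: 0 at the centre of the map, 1 at the borders
function falloff(x, y, width, height) {
  // Normalise to [-1, 1] so the centre is (0, 0)
  const dx = (x / width) * 2 - 1;
  const dy = (y / height) * 2 - 1;
  // Math.hypot(dx, dy) here would give a round island instead of a square one
  return Math.max(Math.abs(dx), Math.abs(dy));
}

// Applied per texel: land height only survives away from the borders
function islandHeight(noise, x, y, width, height) {
  return Math.min(Math.max(noise - falloff(x, y, width, height), 0), 1);
}
```

Tutorials often sharpen the gradient with a curve like `f^a / (f^a + (b - b*f)^a)` before subtracting, but the raw version above already keeps the map edges at sea level.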
  4. I'm trying to use Perlin noise to paint landscapes on a sphere. So far I've been able to make this (the quad is just to get a flatter view of the height map). I'm not influencing the mesh vertex heights yet, but I am creating the noise map on the CPU and passing it to the GPU as a texture, which is what you see above. I've got 2 issues though:

Issue #1: If I get a bit close to the sphere, the detail in the landscape looks bad. I'm aware that I can't get too close, but I also feel I should be able to get better quality at the distance shown above. The detail in the texture looks blurry and stretched; it just looks bad, and I'm not sure what I can do to improve it.

Issue #2: I believe I know why this one occurs, but I don't know how to solve it. If I rotate the sphere, you'll notice something. Click on the image for a better look (notice the seam?). What I think is going on is that some land/noise reaches the edge of the uv/texture, and since texturing the sphere is pretty much like wrapping paper around it, the beginning and end of the texture map meet, and the two sides have different patterns.

Solutions I have in mind for Issue #2:
A) Limit the noise within a certain bounding box, making sure "land" isn't generated near the borders or poles of the texture. Think islands. I just have no idea how to do that.
B) Find a way to make the noise wrap around to the beginning of the uv/texture once it reaches the end, so the beginning and end connect seamlessly. Again, I have no idea how to do that.

I'm kind of rooting for solution A, though; I would be able to make islands that way. Hope I was able to explain myself. If anybody needs any more information, let me know. I'll share the function in charge of making this noise below. The shader isn't doing anything special, just drawing the texture. Thanks!
CPU Noise Texture:

```javascript
const width = 100;
const depth = 100;
const scale = 30.6;

const pixels = new Uint8Array(4 * width * depth);
let i = 0;

for (let z = 0; z < depth; z += 1) {
  for (let x = 0; x < width; x += 1) {
    const octaves = 8;
    const persistance = 0.5;
    const lacunarity = 2.0;

    let frequency = 1.0;
    let amplitude = 1.0;
    let noiseHeight = 0.0;

    // Octave counter renamed so it no longer shadows the pixel index `i`
    for (let o = 0; o < octaves; o += 1) {
      const sampleX = x / scale * frequency;
      const sampleZ = z / scale * frequency;

      let n = perlin2(sampleX, sampleZ);
      noiseHeight += n * amplitude;

      amplitude *= persistance;
      frequency *= lacunarity;
    }

    // perlin2 returns values in [-1, 1], so remap to [0, 1] before writing
    // bytes; otherwise negative heights wrap around in the Uint8Array
    const h = Math.min(Math.max((noiseHeight + 1) * 0.5, 0), 1);

    pixels[i] = h * 255;
    pixels[i + 1] = h * 255;
    pixels[i + 2] = h * 255;
    pixels[i + 3] = 255;
    i += 4;
  }
}
```

GPU GLSL:

```glsl
void main () {
  vec3 diffusemap = texture(texture0, uvcoords).rgb;
  color = vec4(diffusemap, 1.0);
}
```
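For the record, the seam-free version this thread later converged on (3D noise sampled on the sphere itself) can be sketched like this; `noise3` stands in for any 3D Perlin/Simplex function returning values in [-1, 1], and the function shape is my own illustration. Because u = 0 and u = 1 map to the same point on the sphere, the left and right edges of the texture receive identical noise and the seam disappears.

```javascript
// Map each texel to a point on the unit sphere, then sample 3D noise there.
// noise3(x, y, z) is assumed to return values in [-1, 1].
function sphereNoiseTexture(width, height, scale, noise3) {
  const pixels = new Uint8Array(4 * width * height);
  let p = 0;
  for (let y = 0; y < height; y += 1) {
    for (let x = 0; x < width; x += 1) {
      // Texel -> spherical coordinates (longitude wraps, latitude pole to pole)
      const lon = (x / width) * 2 * Math.PI;  // 0..2π
      const lat = (y / height) * Math.PI;     // 0..π
      // Spherical -> cartesian point on the unit sphere
      const sx = Math.sin(lat) * Math.cos(lon);
      const sy = Math.cos(lat);
      const sz = Math.sin(lat) * Math.sin(lon);
      // Sample 3D noise, remap [-1, 1] -> [0, 1] before writing bytes
      const n = (noise3(sx * scale, sy * scale, sz * scale) + 1) * 0.5;
      const v = Math.round(n * 255);
      pixels[p] = v; pixels[p + 1] = v; pixels[p + 2] = v; pixels[p + 3] = 255;
      p += 4;
    }
  }
  return pixels;
}
```

Octave stacking (persistence/lacunarity) drops in exactly as in the 2D version; only the sample position changes.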
  5. Actually that did work, also understood what I was missing now. Thanks a lot Tim!
  6. I'm trying to use values generated with a 2D Perlin noise function to determine the height of each vertex on my sphere, just like a terrain height map but spherical. Unfortunately, I can't seem to figure it out. So far it's easy to push any particular vertex along its calculated normal, and that seems to work. As you can see in the following image, I'm pulling only one vertex along its normal vector. This was accomplished with the following code (no noise yet, btw):

```javascript
// Happens after normals are calculated for every vertex in the model
// xLen and yLen are the segments and rings of the sphere
for (let x = 0; x <= xLen; x += 1) {
  for (let y = 0; y <= yLen; y += 1) {
    // Normals
    const nx = model.normals[index];
    const ny = model.normals[index + 1];
    const nz = model.normals[index + 2];

    let noise = 1.5;

    // Just pull one vert...
    if (x === 18 && y === 12) {
      // Verts
      model.verts[index] = nx * noise;
      model.verts[index + 1] = ny * noise;
      model.verts[index + 2] = nz * noise;
    }

    index += 3;
  }
}
```

But what if I want to use 2D Perlin noise values on my sphere to create mountains on top of it? I thought it would be easy to displace the sphere's vertices using its normals and Perlin noise, but clearly I'm way off. This horrible object was created with the following code:

```javascript
// Happens after normals are calculated for every vertex in the model
// xLen and yLen are the segments and rings of the sphere
// Keep in mind I'm not using a height map image; I'm feeding the noise value directly
for (let x = 0; x <= xLen; x += 1) {
  for (let y = 0; y <= yLen; y += 1) {
    // Normals
    const nx = model.normals[index];
    const ny = model.normals[index + 1];
    const nz = model.normals[index + 2];

    const sampleX = x * 1.5;
    const sampleY = y * 1.5;
    let noise = perlin2(sampleX, sampleY);

    // Update model verts height
    model.verts[index] = nx * noise;
    model.verts[index + 1] = ny * noise;
    model.verts[index + 2] = nz * noise;

    index += 3;
  }
}
```

I have a feeling the direction I'm pulling the vertices is okay; the problem might be the intensity. Perhaps I need to clamp the noise value? I've seen terrain planes where the mesh is created based on the height map image dimensions. In my case, the sphere model verts and normals are already calculated, and I want to add height afterwards (but before creating the VAO). Is there a way I could accomplish this so my sphere displays terrain-like geometry on it? Hope I was able to explain myself properly. Thanks!
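One likely culprit (my assumption, not a confirmed fix from the thread): multiplying the normal by the raw noise moves each vertex to radius `noise`, which collapses vertices toward the centre whenever the noise is near zero or negative. Displacing to `base + amplitude * noise` instead keeps the sphere's shape, with the noise only perturbing the surface. A minimal sketch, with `noise3` standing in for an assumed 3D noise function in [-1, 1], sampled at the vertex's position on the unit sphere so the poles and seam stay consistent:

```javascript
// Displace each vertex to radius (base + amplitude * noise) along its normal.
// verts and normals are flat [x, y, z, x, y, z, ...] arrays; for a unit
// sphere the normal equals the vertex direction, so we rebuild from it.
function displaceSphere(verts, normals, baseRadius, amplitude, noise3) {
  for (let i = 0; i < verts.length; i += 3) {
    const nx = normals[i], ny = normals[i + 1], nz = normals[i + 2];
    // Sample at the point on the unit sphere, not at (ring, segment) indices
    const n = noise3(nx, ny, nz);           // in [-1, 1]
    const r = baseRadius + amplitude * n;   // always near baseRadius
    verts[i]     = nx * r;
    verts[i + 1] = ny * r;
    verts[i + 2] = nz * r;
  }
  return verts;
}
```

With `amplitude` well below `baseRadius`, mountains stay mountains instead of folding the mesh through its own centre.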
  7. Thank you so much! Removing the fract function was exactly the solution! Why was this occurring, though? Could it be due to the welding issue, or maybe floating-point precision? I'll figure out another way to dynamically tile textures. I appreciate all the suggested answers; thanks, all. Update: Hodgeman's answer above explains why this is happening, for those of you wondering.
  8. Hey everybody, still no solution. Zakwayda is right; as much as I tested with a Blender model, I'm sure there's something wrong with my implementation. I've actually uploaded the entire engine to GitHub; hopefully it's not too large a project. [GitHub link] I'll leave a little guide here so nobody has to guess where everything is.

By the way, if you want to run the project, you can either host the client folder or, if you have node installed:

$ npm install
// after dependency download
$ node server

If you're using nodejs, the game will be served at localhost:8080.

By the way, I tried drawing the scene with no mipmapping but got the following warning: "texture bound to texture unit 2 is not renderable. It maybe non-power-of-2 and have incompatible texture filtering." Which is true, my planet texture is w > h. I tried it with mipmapping off, min and mag filters on nearest.

Guide

If you press the lockmouse button, you enter fps mode, enabling you to navigate around using your mouse and arrow keys (not WASD).

CORE
src/engine/loop.js: Game loop. Physics and render loops are called from here.

MAIN ENTRY
src/main.js: This is where I create the planet entities, textures, meshes.
Game.load: calls a function in src/game/loader.js and loads all objs, images, etc.
Game.ready: (line 26) is called once all assets are ready and before the game starts.

UV SPHERE, RENDERING, MESH AND TEXTURE CONSTRUCTORS
src/engine/systems/drawscene.js: (line 109) Entities are rendered here.
src/engine/texture/texture.js: The Texture object for creating new Textures.
src/engine/data/model.js: (line 216) The UV Sphere function is located here; it's the very last static function.
src/engine/data/mesh.js: This is the Mesh object for new Meshes.

SHADER
src/shaders/phong.frag: The only shader I'm using aside from the post-processing ones. You'll notice I'm feeding vec4 color the diffuse texture map only, ignoring my lighting algorithm for now.

MATH
src/engine/math: Here you'll find matrices, vectors, and also a class for calculating normals.
  9. WebGL2 is what I'm using. Actually, I'm also using CLAMP_TO_EDGE; sorry, I should have been more specific. The API I wrote just needs the word "clamp":

if (value === "clamp") wrap = gl.CLAMP_TO_EDGE;

Gnoll, I'm actually going to try that, thanks for your suggestion!
  10. Hey, what's up, Gaxio. No padding as far as I can see; in fact, this is what it looks like: Good point, but I actually tested exporting a UV sphere from Blender and got the same issue. Here's the UV sphere code, which I got from the following tutorial.
  11. I've been following a WebGL2 tutorial on how to make UV spheres, but I noticed a one-pixel-wide line running from the north to the south pole of my UV sphere (you'll have to click on the images to get a better look). Before posting this question, I tried exporting a UV sphere primitive made in Blender, unwrapped with Blender's sphere unwrapping function, and I still got the same line. I wanted to rule out the possibility of this line being caused by my own UV sphere implementation, but I get the same situation with a sphere generated by Blender. I'm also not lighting the scene, just using the diffuse texture map. I also drew a straight line from edge to edge on the diffuse map to make sure the UVs are lined up, and everything is okay... but at some point, you'll see that minor one-pixel separation mentioned above. Would anybody have any clue what might be happening? I can't seem to figure out why this is occurring. Some extra info: I thought I'd mention some of my texture settings just in case it helps: mipmapping: true, wrap: "clamp", minfilter: "linearmipmap", magfilter: "linear"
  12. Scouting Ninja, thanks a lot! That was exactly what was going on. I had no idea what anisotropic filtering does; now I know. I had to set this first:

minfilter = gl.LINEAR_MIPMAP_LINEAR;
magfilter = gl.LINEAR;

and of course:

const max = gl.getParameter(ext.MAX_TEXTURE_MAX_ANISOTROPY_EXT);
gl.texParameterf(gl.TEXTURE_2D, ext.TEXTURE_MAX_ANISOTROPY_EXT, max);

Now my frames are rendering very nicely. For those of you looking for more details on how to do this in WebGL or OpenGL ES, you can check out this tutorial. It's a brief but informative article. Thanks again, Ninja!
  13. I've found two particular visual issues in my renders and can't seem to find a way to solve them. To set up some context: floors, walls, and roof all use diffuse, specular, and normal maps, and all three are tiled in the fragment shader. If you don't mind, take a look at the video below.

If you look at the roof area of the room, the pixels morph depending on the camera movement (the light source is not moving); it looks horrible. I've also noticed that the less I tile the roof, the less morphing and fewer dots you see. Also, if you check out the floor, you'll notice strange artifacts around the lines of the floor. In fact, I'll show you a zoomed-in picture (click for a larger view).

I'm not sure what to do. I've checked whether my video card supports anti-aliasing:

```javascript
function antialiasSamples () {
  const antialias = gl.getContextAttributes().antialias;
  return gl.getParameter(gl.SAMPLES);
}
```

...and it does: 8 samples. I also have a post-processing pass, but I'm blitting the frame buffer and using a multisample texture (8 samples):

```javascript
static toMultisample (input, output) {
  gl.bindFramebuffer(gl.READ_FRAMEBUFFER, input.buffer);
  gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, output.buffer);

  gl.blitFramebuffer(
    0, 0, input.width, input.height,
    0, 0, output.width, output.height,
    gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT,
    gl.NEAREST
  );

  gl.bindFramebuffer(gl.READ_FRAMEBUFFER, null);
  gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, null);
}
```

...and this definitely works, yet I still get the same jagged edges and artifacts on the floor. I even added a feature to my textures which I honestly don't understand very well:

```javascript
const ext = (
  gl.getExtension('EXT_texture_filter_anisotropic') ||
  gl.getExtension('MOZ_EXT_texture_filter_anisotropic') ||
  gl.getExtension('WEBKIT_EXT_texture_filter_anisotropic')
);

if (ext) {
  const max = gl.getParameter(ext.MAX_TEXTURE_MAX_ANISOTROPY_EXT);
  gl.texParameterf(gl.TEXTURE_2D, ext.TEXTURE_MAX_ANISOTROPY_EXT, max);
}
```

...still not much difference.
Are there any tips some of you can offer to remove these issues? Below I'll share some information about my rendering pipeline and my shaders. Thank you very much for reading.

Shader Pastebin links: VERTEX SHADER, FRAGMENT SHADER
WebGL2
There's only one point light in the scene, at vec3(0.0, 3.0, 2.0)
Textures are tiled in the fragment shader (vec2 uvs = fract(uvcoords).xy * uvtile)
Mipmapping is on for all textures

--------

By the way, I thought I'd share a video of a closer look at objects with normal mapping, so you all have an idea of how it's working so far. I'm moving the light left and right, btw.
  14. Hey Randy, thanks a lot for the great answer, very much appreciated. I'm definitely changing the algorithm to the simpler version you're suggesting and changing my character to a capsule collider. Thanks again!
  15. Hey Randy, thanks for the share, I appreciate it. Although I forgot to mention that I'm working on a fast-paced 3D fps that uses some very basic physics. I read that the Minkowski difference was a nice candidate; is that true? I know I'll have to modify my code to avoid tunneling and consider the colliders' velocities, but I wanted to understand the algorithm in its simplest form. By the way, I finally understood how to get the "penetration vector". I like calling it an offset vector instead, since I use this returned vector to push the collider out. Would this algorithm (plus modifications) still be a good idea for a fast-paced fps game with physics? Or, maybe a better question: what would the Minkowski difference be good for in terms of collision? What situations? Thanks!
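For readers wondering what that penetration/offset vector looks like concretely, here's a minimal 2D AABB sketch (my own illustration, not the thread's code, and the function name is made up): the Minkowski difference of two AABBs is itself an AABB, the boxes overlap if and only if it contains the origin, and the smallest push from the origin to the difference box's boundary gives the separation vector.

```javascript
// a, b: { min: {x, y}, max: {x, y} } axis-aligned boxes.
// Returns the vector to translate `a` by to resolve the overlap, or null.
function minkowskiPenetration(a, b) {
  // Minkowski difference box d = a - b (componentwise interval arithmetic)
  const minX = a.min.x - b.max.x, maxX = a.max.x - b.min.x;
  const minY = a.min.y - b.max.y, maxY = a.max.y - b.min.y;

  // Origin outside the difference box -> the boxes don't overlap
  if (minX > 0 || maxX < 0 || minY > 0 || maxY < 0) return null;

  // Boundary point of the difference box nearest to the origin
  // (one candidate per face; one component of each is always 0)
  const candidates = [
    { x: minX, y: 0 }, { x: maxX, y: 0 },
    { x: 0, y: minY }, { x: 0, y: maxY },
  ];
  let best = candidates[0];
  for (const c of candidates) {
    if (Math.abs(c.x) + Math.abs(c.y) < Math.abs(best.x) + Math.abs(best.y)) {
      best = c;
    }
  }

  // Translating `a` by the negated boundary point moves the origin onto
  // the difference box's boundary, i.e. just separates the boxes
  return { x: -best.x, y: -best.y };
}
```

The same interval trick extends to 3D by adding a z pair, which is part of why the technique keeps coming up for simple fps-style physics.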