
Hashbrown

Member
  • Content Count: 60
  • Joined
  • Last visited
  • Days Won: 1

Hashbrown last won the day on June 23 2018

Hashbrown had the most liked content!

Community Reputation

133 Neutral

About Hashbrown

  • Rank
    Member

Personal Information

  • Role
    DevOps
  • Interests
    Art
    Audio
    Business
    Design
    DevOps
    Programming


  1. Shaarigan, thank you very much for the answer, that's all I needed. I found their documentation a bit confusing. Once I figure everything out, I'll post a guide in case others find it confusing as well. Thanks again.
  2. I'm learning how to integrate a scripting language into my C++ game engine, specifically Mozilla's JavaScript engine, SpiderMonkey, but I'm still very confused. First I tried to follow their How to build SpiderMonkey document, but it says I need the right build tools for my platform; in my case that's Windows 10, with Visual Studio 2018 Community as my IDE. I go to that "build tools" documentation for Windows, but it then talks about Building Firefox for Windows. I'm guessing the same tools used to build Firefox are used for SpiderMonkey, but I'm not sure which part of what I'm installing is overkill; I just want to #include "jsapi.h" in my project. I realize it's not as easy as including a bunch of header files, I just don't know where to start or how to do this properly. From what I see in the documentation, using the API is very straightforward (see the embedding sketch after this list), but building is confusing for me. Would anybody know of a more straightforward, step-by-step guide? I can't find one anywhere.
  3. Thanks for the answer! You're right, I use the word GameObject a lot since I use Unity, sorry about that. In the case of my question, I wrote a system (I think) similar to Unity's. You understood perfectly, and I was planning on making the textures during development. Using a grid mesh as you mentioned makes so much sense, and now I'd like to programmatically create my worlds at runtime thanks to your suggestion. So if I wanted to make a very large world, I could create one giant grid mesh and map all the tiles to its texture coordinates? (A sketch of that idea is at the end of this list.) I'll definitely look into the padding situation, I'm not prepared for that issue; hopefully somebody can offer some tips on that, but for now I'm going to work on the tile renderer. Thanks again Zakwayda, I appreciate it!
  4. No problem at all, it's not a dumb question. I should have specified it was a promotional image I took from the site I bought the assets from. Neither my camera nor my world objects are angled in any way; it's pretty much a bunch of quads with their Euler angles at 0, 0, 0.
  5. I purchased a nice tile set for practicing: ...and I'm wondering what would be the best way to tile my game world. At the moment my sprite renderer works nicely; I'm able to render only a portion of a texture given the right x, y, width and height (there's a small sketch of that calculation after this list). I can even animate my sprites. Question: What's the best way to draw my environment, though? I can't imagine creating a bunch of game objects just to fill the screen with floor tiles. I thought of perhaps making a few giant quads, scattering them around the world, and texturing each quad with an image of the environment tiles already placed. Maybe this could avoid making so many game objects, or maybe I'm wrong. Hopefully somebody can point me in the right direction; no code is needed. I'm not using any particular engine, by the way, just OpenGL, C++, and glm for math. Thanks!
  6. Hashbrown

    Any tools that write shader code?

    This is actually pretty awesome and not expensive. Thanks for the share.
  7. Hashbrown

    Any tools that write shader code?

    @pcmaster I just checked the source code for glslsandbox and apparently they're using a WebGL 1 context, which doesn't support anything greater than GLSL 100 as far as I know:

        var contextNames = ["moz-webgl", "webkit-3d", "experimental-webgl", "webgl", "3d"]; // From glslsandbox

    I guess they're trying to target as many browsers as possible. I would have completely ignored WebGL 1 and stuck to WebGL 2 in order to support version 300 es. As for Shdr, they're apparently using THREE.js and I'm not sure which WebGL context that library defaults to.
  8. Hashbrown

    Any tools that write shader code?

    You can also try: http://shdr.bkcore.com/, it's a smaller project but has a clean and slick design. You also have a mesh to work with. Oh and there's also: http://glslsandbox.com/ (GLSL though)
  9. Got it. That makes sense; now I feel stupid, I should have thought of that before posting: make the sprite quad larger when creating it. Thanks a lot Joe, I'll give it a try right now.
  10. When importing sprites in Unity, we get a Pixels Per Unit option. The smaller the value, the larger the sprite looks on screen, which is great for the very small (50x32 px) sprites I download. My question is, how can I accomplish this with OpenGL? Should I make the sprite larger in the frag shader? I don't want to scale the game object; I'd like the image to be rendered at the size I need without changing the scale, similar to Unity. (There's a small sketch of the quad-size calculation after this list.) No code needed, just some suggestions to put me in the right direction. Thanks!
  11. Hello all. I want to use the right analog stick of my gamepad for throwing a rect, based on the angle and intensity with which I flicked the analog stick. So if I flick the analog stick only a little to the right, less force is applied to the rect's rigidbody velocity. I should be able to move the character left and right and also toss this rect at the same time. A good example of this gameplay mechanic would be Skate (Xbox 360). Granted, Skate is a lot more complex and 3D, and I just want to toss a rect. So far I've kind of figured it out, but I'm incredibly dissatisfied with the results. I'm able to get the direction of my flick, but it's so sensitive that sometimes I repeatedly toss the rect completely upwards. I also don't feel much control over the strength with which I toss the rect. Long story short, I don't feel much control over my flick functionality. The functionality is only one script; if any of you have suggestions on improving it, it would be greatly appreciated. I'll share the code below, and you can also download the little project. It's made for playing with a gamepad, though. Thanks in advance!

        using System.Collections;
        using System.Collections.Generic;
        using UnityEngine;

        public class Player : MonoBehaviour
        {
            [SerializeField] private GameObject boxObject;

            private Rigidbody2D rigidbody;
            private Vector2 leftInput;
            private Vector2 rightInput;
            private float timeFlicking = 0.0f;

            // Flags
            private bool flicking = false;

            private void Start()
            {
                rigidbody = GetComponent<Rigidbody2D>();
            }

            private void Update()
            {
                // Will use later
                if (flicking) timeFlicking += Time.deltaTime;

                leftInput = new Vector2(Input.GetAxis("Horizontal"), Input.GetAxis("Vertical"));
                rightInput = new Vector2(Input.GetAxis("Right Horizontal"), Input.GetAxis("Right Vertical"));
                float rightInputMagnitude = rightInput.magnitude;

                // No analog stick movement and make sure we're not flicking already
                if (rightInput != Vector2.zero && !flicking)
                {
                    Debug.Log("Flicking!");
                    CreateBox();
                    flicking = true;
                }

                if (rightInput == Vector2.zero)
                {
                    if (flicking)
                    {
                        flicking = false;
                    }
                }

                Vector2 newVelocity = leftInput * new Vector2(10, 10);
                newVelocity.y = 0;
                rigidbody.velocity = newVelocity;
            }

            private void CreateBox()
            {
                GameObject box = Instantiate(boxObject, transform.position, Quaternion.identity);
                Rigidbody2D rb = box.GetComponent<Rigidbody2D>();
                Vector2 dir = rightInput.normalized;

                box.transform.position = (Vector2) box.transform.position + dir;
                rb.velocity = rightInput * new Vector2(40, 40);
            }
        }

    Project (6 MB): https://files.fm/u/ru8p9rgs
  12. Hey guys, thanks again for all the help! I've definitely taken all the advice given to me: I changed to simplex 3D noise and I'm sampling it using spherical coordinates (there's a small sketch of that mapping after this list). I love the results now because there's no stretching. I had trouble understanding why 2D noise wasn't the best choice, but all the explanations helped. (Not the real colors of the planet, just marking areas.)
  13. Gnoll, thanks for answering! I'm definitely going to try simplex noise, I found an implementation here. Isn't simplex under a strict patent, though? I appreciate you answering the seam question too; I'll definitely give it a try, that was really annoying me. As for cutting off certain areas of the noise, I found the answer in this video. The person who made the tutorial calls it a "falloff map" (a sketch of one common version is after this list). The tutorial is short too, 11 minutes long. (Could use some improvement, but I'm getting there.) I also found out why my texture looked stretched: I was actually stretching the texture! I'm dynamically creating the texture but was making it 512x512, and spheres apparently prefer wide images. So I went with 1000x500 and it looks so much nicer.
  14. I'm trying to use Perlin noise to paint landscapes on a sphere. So far I've been able to make this (the quad is just to get a flatter view of the height map). I'm not influencing the mesh vertices' height yet, but I am creating the noise map on the CPU and passing it to the GPU as a texture, which is what you see above. I've got two issues, though:

    Issue #1: If I get a bit close to the sphere, the detail in the landscape looks bad. I'm aware that I can't get too close, but I also feel that I should be able to get better quality at the distance shown above. The detail in the texture looks blurry and stretched; it just looks bad. I'm not sure what I can do to improve it.

    Issue #2: I believe I know why this one occurs, but I don't know how to solve it. If I rotate the sphere, you'll notice something. Click on the image for a better look (notice the seam?). What I think is going on is that some land/noise reaches the end of the UV/texture, and since texturing the sphere is pretty much like wrapping paper around it, the beginning and end of the texture map connect, and both sides have different patterns.

    Solutions I have in mind for Issue #2:

    A) Maybe limiting the noise within a certain bounding box, making sure "land" isn't generated around the borders or poles of the texture. Think islands. I just have no idea how to do that.

    B) Finding a way to make the noise at the end of the UV/texture continue from the beginning, so that the beginning and end connect seamlessly. Again, I have no idea how to do that.

    I'm kind of rooting for solution A, though, since I would be able to make islands that way. Hope I was able to explain myself; if anybody needs any more information, let me know. I'll share the function in charge of making this noise below. The shader isn't doing anything special besides drawing the texture. Thanks!

    CPU noise texture:

        const width = 100;
        const depth = 100;
        const scale = 30.6;

        const pixels = new Uint8Array(4 * width * depth);

        let i = 0;
        for (let z = 0; z < depth; z += 1) {
          for (let x = 0; x < width; x += 1) {
            const octaves = 8;
            const persistance = 0.5;
            const lacunarity = 2.0;

            let frequency = 1.0;
            let amplitude = 1.0;
            let noiseHeight = 0.0;

            for (let i = 0; i < octaves; i += 1) {
              const sampleX = x / scale * frequency;
              const sampleZ = z / scale * frequency;

              let n = perlin2(sampleX, sampleZ);
              noiseHeight += n * amplitude;

              amplitude *= persistance;
              frequency *= lacunarity;
            }

            pixels[i]   = noiseHeight * 255;
            pixels[i+1] = noiseHeight * 255;
            pixels[i+2] = noiseHeight * 255;
            pixels[i+3] = 255;

            i += 4;
          }
        }

    GPU GLSL:

        void main () {
          vec3 diffusemap = texture(texture0, uvcoords).rgb;
          color = vec4(diffusemap, 1.0);
        }
  15. Actually, that did work, and I understand what I was missing now. Thanks a lot Tim!
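Sketch for post 2: a rough idea of what a jsapi.h "hello world" embedding typically looks like once SpiderMonkey is built and linked. This is only an approximation; header layout, class names, and function signatures change between SpiderMonkey releases, so every name below should be checked against the documentation for the release actually being built.

    #include <cstdio>
    #include <cstring>

    #include "jsapi.h"
    #include "js/CompilationAndEvaluation.h"
    #include "js/Initialization.h"
    #include "js/SourceText.h"

    // A plain global object; most embeddings start from something like this.
    static JSClass globalClass = { "global", JSCLASS_GLOBAL_FLAGS,
                                   &JS::DefaultGlobalClassOps };

    int main()
    {
        if (!JS_Init())                                     // one-time engine startup
            return 1;

        JSContext* cx = JS_NewContext(JS::DefaultHeapMaxBytes);
        if (!cx || !JS::InitSelfHostedCode(cx))
            return 1;

        {
            JS::RealmOptions options;
            JS::RootedObject global(cx, JS_NewGlobalObject(cx, &globalClass, nullptr,
                                                           JS::FireOnNewGlobalHook, options));
            JSAutoRealm realm(cx, global);                  // enter the global's realm

            const char* src = "1 + 2";
            JS::SourceText<mozilla::Utf8Unit> source;
            if (source.init(cx, src, std::strlen(src), JS::SourceOwnership::Borrowed)) {
                JS::CompileOptions opts(cx);
                opts.setFileAndLine("inline", 1);

                JS::RootedValue result(cx);
                if (JS::Evaluate(cx, opts, source, &result))
                    std::printf("script returned %d\n", result.toInt32());
            }
        }

        JS_DestroyContext(cx);                              // tear down in reverse order
        JS_ShutDown();
        return 0;
    }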
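Sketch for post 3: one way the "one giant grid mesh" idea can be laid out with plain C++ and glm. The Atlas struct, the regular-grid layout of the tile sheet, and every name here are assumptions made for illustration, not anything from the thread, and the padding issue mentioned in the post is not handled.

    #include <vector>
    #include <glm/glm.hpp>

    struct Vertex { glm::vec2 position; glm::vec2 uv; };

    // Hypothetical atlas description: tiles laid out in a regular grid inside one texture.
    struct Atlas { int columns; int rows; };

    // UV rectangle covered by a tile index inside the atlas (no padding handled).
    static void tileUV(const Atlas& atlas, int tile, glm::vec2& uvMin, glm::vec2& uvMax)
    {
        int tx = tile % atlas.columns;
        int ty = tile / atlas.columns;
        glm::vec2 tileSize(1.0f / atlas.columns, 1.0f / atlas.rows);
        uvMin = glm::vec2(tx, ty) * tileSize;
        uvMax = uvMin + tileSize;
    }

    // Builds one big vertex/index buffer for a width x height tile map so the whole
    // ground layer can be uploaded once and drawn with a single draw call.
    static void buildTileMesh(const Atlas& atlas, const std::vector<int>& map,
                              int width, int height, float tileWorldSize,
                              std::vector<Vertex>& vertices, std::vector<unsigned>& indices)
    {
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                glm::vec2 uvMin, uvMax;
                tileUV(atlas, map[y * width + x], uvMin, uvMax);

                glm::vec2 p0(x * tileWorldSize, y * tileWorldSize);   // tile corners in world space
                glm::vec2 p1 = p0 + glm::vec2(tileWorldSize);

                unsigned base = static_cast<unsigned>(vertices.size());
                vertices.push_back({ { p0.x, p0.y }, { uvMin.x, uvMin.y } });
                vertices.push_back({ { p1.x, p0.y }, { uvMax.x, uvMin.y } });
                vertices.push_back({ { p1.x, p1.y }, { uvMax.x, uvMax.y } });
                vertices.push_back({ { p0.x, p1.y }, { uvMin.x, uvMax.y } });

                indices.insert(indices.end(), { base, base + 1, base + 2,
                                                base, base + 2, base + 3 });
            }
        }
    }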
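Sketch for post 5: the pixel-rectangle to UV conversion a sprite renderer like the one described needs in order to draw "only a portion of a texture given the right x, y, width and height". The struct and function names are made up for illustration.

    #include <glm/glm.hpp>

    struct UVRect { glm::vec2 min; glm::vec2 max; };

    // Converts a sub-rectangle given in pixels into normalized texture coordinates.
    // rectPos is one corner of the sub-image, rectSize its width/height, and
    // textureSize the full texture's dimensions (all in pixels).
    UVRect subImageUV(glm::vec2 textureSize, glm::vec2 rectPos, glm::vec2 rectSize)
    {
        UVRect r;
        r.min = rectPos / textureSize;
        r.max = (rectPos + rectSize) / textureSize;
        return r;   // whether min is the top or bottom edge depends on how the image was loaded
    }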
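Sketch for posts 9 and 10: the "make the quad larger when creating it" idea, mimicking Unity's Pixels Per Unit. The function name and parameters are made up; the point is only that the quad's world-space size is the sprite's pixel size divided by the chosen pixels-per-unit value, with the object's scale left untouched.

    #include <glm/glm.hpp>

    // A 50x32 px sprite with pixelsPerUnit = 16 becomes a 3.125 x 2.0 unit quad.
    glm::vec2 spriteQuadSize(glm::vec2 spriteSizePixels, float pixelsPerUnit)
    {
        return spriteSizePixels / pixelsPerUnit;
    }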
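Sketch for post 12: the spherical-coordinate sampling that removes the seam and the polar stretching. The original project is JavaScript; this is written in C++ only for illustration, and simplex3 stands in for whatever 3D noise implementation is used. Each texel of the equirectangular texture is mapped to a point on the unit sphere and the noise is sampled there, so texels that meet on the sphere, including the left and right edges of the texture, sample nearby noise values.

    #include <cmath>
    #include <glm/glm.hpp>
    #include <glm/gtc/constants.hpp>

    // Supplied by the 3D noise library of your choice; assumed to return roughly [-1, 1].
    float simplex3(float x, float y, float z);

    // u, v are the texel's normalized coordinates in the equirectangular texture (0..1).
    float sphereNoise(float u, float v, float scale)
    {
        float lon = u * 2.0f * glm::pi<float>();           // longitude: 0 .. 2*pi
        float lat = (v - 0.5f) * glm::pi<float>();         // latitude: -pi/2 .. pi/2

        glm::vec3 p(std::cos(lat) * std::cos(lon),         // point on the unit sphere
                    std::sin(lat),
                    std::cos(lat) * std::sin(lon));
        p *= scale;

        return simplex3(p.x, p.y, p.z) * 0.5f + 0.5f;      // remap to 0..1 for a texture
    }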
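Sketch for post 13: one common form of the "falloff map" mentioned there (the exact curve in the linked video may differ). The value is 0 near the centre of the map and rises to 1 at the borders; subtracting it from the noise pushes terrain near the edges below the land threshold, which is what produces island-like shapes.

    #include <algorithm>
    #include <cmath>

    // x, y index the noise map; width, height are its dimensions in texels.
    float falloff(int x, int y, int width, int height)
    {
        float nx = x / (float)width  * 2.0f - 1.0f;        // -1 .. 1 across the map
        float ny = y / (float)height * 2.0f - 1.0f;

        float d = std::max(std::abs(nx), std::abs(ny));    // square falloff; use the vector length for a round one
        return d * d * (3.0f - 2.0f * d);                  // smoothstep-style easing
    }

    // Usage, inside the pixel loop of the noise generator:
    //   noiseHeight = std::max(0.0f, noiseHeight - falloff(x, z, width, depth));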