dpadam450


  1. You can copy the final image to a buffer, then render the water plane with its four corner coordinates converted to screen space so that the plane matches the final texture exactly. Then run a distortion pass over the underlying objects. This can be as simple as a grayscale noise texture: offset each pixel's lookup by the noise value. As long as you keep depth testing on, it will only distort pixels where the water renders. Honestly, it depends on what technique you want to mimic. I would look at some games or YouTube water-rendering demos and pick the one you like that is easiest.
  2. A one-pixel peak is a point, not a surface, so you should perform the surface-normal calculation I described. If you want the peak's normal to accurately point at the sky, average the normals of the surrounding pixels. That's how per-vertex normals are calculated for a mesh; the same thing applies here.
  3. First, understand that an engine can be a simple while() loop that polls for input, moves an object, and draws it. You can build upon that, continually add new features, and refactor when you need to. I wouldn't worry about organizing it the perfect way up front. I could post a long reply, but if you are starting your own engine from scratch, just implement the features you need/want to work on. Get it working and keep building new features. From an advanced standpoint, the renderer is typically on its own thread. It will have positions/locations stored separately from the positions the game/physics side has. When a new rock is spawned that doesn't move, you could tell the renderer "here is an object, the 3D mesh it belongs to, and the location". For something like a movable character, the gameplay/physics thread will have some kind of pointer/handle so it can tell the render thread "hey, that object, this is where it is now". Things like that. Render class, Camera class; the Camera could do the rendering, and you can render shadows from a camera or the main player's view from a camera. You just have to start somewhere. If you just want to learn rendering tricks and shaders, then work towards that.
  4. Sorry, that is (right - current) = one vector, (upper - current) = the other. Cross the two.
  5. Not sure what I am looking at. I see the shadow is wrong, but what is the giant beam going up? It looks like a strange height map or something. Also odd: you are calculating normals without using the actual current pixel, just the surrounding ones. I would take the current pixel, sample only the right pixel and the upper pixel, and use those to calculate the normal and tangents. I think if you cross the right-sample vector with the upper-sample vector, you will get a more proper normal.
  6. I didn't down-vote, but UVs can go to the edge and beyond. Again, there is no limit to UV space. It just wraps around unless you request clamp-to-edge.
  7. Distance Fields

    I guess if you generate them per instance and pre-compute the distances offline it would make sense. Each instance would have a unique 3d volume.
  8. Distance Fields

    I briefly looked at some of the material. What I'm wondering is, for a screen-space pixel: how does a pixel determine which distance-field volumes overlapping it need to be raycast against for doing AO? Does it have a reference to all the DistanceField volumes and do a compute shader with a BSP tree or something?
  9. Distance Fields

    http://advances.realtimerendering.com/s2015/DynamicOcclusionWithSignedDistanceFields.pdf I get the idea they are using for the distance fields: they are essentially a 3D texture/volume with varying resolution per object. But how are they used/rendered? Are the cubes rendered? How/when/where does a shader evaluate them? Is it a fullscreen shader?
  10. oscillating a 2D object in opengl

    glVertex3f(x1 + r1 * cos(t), y1 + r1 * sin(t), 0.0); You may want to read up a bit more on OpenGL and how transforms work: glLoadIdentity(), glRotatef(), glScalef(). You shouldn't be recalculating vertices this way.
  11. Trying to get texture buffer objects working. I'm planning on updating a buffer every frame with instance matrices; for now I have stored a single color. I stripped my code down to a simple example. Right now the color I am getting is black. I'm wondering if there is something dumb I need to know about TBOs. I have zero glGetError() issues. The buffer should definitely contain data, so I wonder if there isn't something with binding the texture properly. ***I missed it in my example, but I am calling glUniform1i("instanceMatrixBuffer", 11); to properly bind texture 11 to the sampler in the shader.

        glGenVertexArrays(1, &VAO_Handle);
        glBindVertexArray(VAO_Handle);
        glBindBuffer(GL_ARRAY_BUFFER, VBO_Handle);
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)this->m_vboLayout.positionDataOffset);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, VBO_Index_Handle);
        glBindVertexArray(0);

        // ...later
        glBindVertexArray(VAO_Handle);
        static bool doOnce = true;
        if (doOnce)
        {
            doOnce = false;
            glGenBuffers(1, &TBO_Buffer_Handle);
            glBindBuffer(GL_TEXTURE_BUFFER, VBO_Index);
            float data[4] = {1.0, 0.0, 1.0, 1.0};
            glBufferData(GL_TEXTURE_BUFFER, sizeof(float)*4, data, GL_STATIC_DRAW);
            glGenTextures(1, &TBO_Texture_Handle);
            glBindTexture(GL_TEXTURE_BUFFER, Texture_Index);
            glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, VBO_Index);
        }
        glActiveTexture(GL_TEXTURE11);
        glBindBuffer(GL_TEXTURE_BUFFER, VBO_Index);
        glBindTexture(GL_TEXTURE_BUFFER, Texture_Index);
        glDrawElementsInstanced(GL_TRIANGLES, meshToRender->num_faces*3, GL_UNSIGNED_INT, 0, instanceCount);

    GLSL:

        vec4 Color = texelFetch(instanceMatrixBuffer, 0);
  12. Contract Work

    He suggested that nobody can live off 100K. Just stating that not every job is a high paying software job. I mean you have experience in Seattle as a game dude. Did you start at 100K? Just curious.
  13. Contract Work

    https://www.seattletimes.com/seattle-news/politics/a-city-of-riches-most-seattle-filers-make-less-than-50k-irs-data-show/
  14. Contract Work

    Well, that's too pricey in general for running a normal business. If you have the money to blow as a bazillion-dollar corporation, I get that, and I know those numbers exist. Still, it's unrealistic for every programmer to expect that jobs just pay that much, in or out of Seattle. I was looking at around 150K at Amazon, plus the 2-year bonus thing, maybe 3 years ago. My friend made around 50/60K in LA on the Rocket League team with about 3 years of experience. L....A..... right? You never know; there are so many ranges. But again, even for Seattle and across the US, 100K is a pretty reasonable sweet spot (and not a bad number) for 5-10 years of experience.
  15. Contract Work

    The fact that that guy even believes 50K is a good number in Seattle is stupid. Maybe some jobs are low-balling; yes, there are awful jobs that might try to pay low, and I've seen plenty of them. They might not be exactly in Seattle city limits either. But 100K anywhere in the US is the typical average for a mid/senior programmer (10 years of experience). $150/hr is pretty pricey. I could see someone potentially getting away with that, but hmm... 100K/year is $48/hr. Factor in that in the US your employer also covers half of your employment/social-security taxes (6% or whatever), offers some help with healthcare benefits, and should be offering a retirement plan, and 150K is a pretty up-there number for developers. Seattle is definitely inflated because the price to live there is awful. 200K is the top of what anyone should be getting paid in Seattle. I'm not saying specific jobs won't go higher with bonuses, but if someone is shelling out 200K including all benefits, that's not business-smart. From what I know, Slayemin never had real game experience before starting his venture. I think the indie game market is such a risk that diving in head first and hiring an artist was probably a mistake. I'm trying to make an indie game too, but I'm super scarce on budget and keep my day job as a good safety net. Good luck on the journey. Not to discourage you, but VR is a tough spot; I'm not sure what the potential of your game is or how much more content/quality you think you can get for your return on investment, but it is a business at the end of the day. You started with a lot of money, and it sucks to be in this spot. If you are a good dev, then you can certainly rebound with a good job. Work at Amazon for a couple of years and make bank out there.