About FonzTech
  1. Hi! I have a question about making a "Room" manager. By "room", I mean a collection of game objects (more precisely, a collection of subclasses of GameObject), each with its own properties (position, rotation, scale and other type-specific ones). For now, I only want to load a "room" from an external file, then instantiate and set up game objects based on its data. I thought about making a simple text file containing a big array-like structure, where each entry names the class to be instantiated, along with its properties. I'm not very skilled in C++. In Java, I would load classes by name using reflection (setting the security concerns aside for now), then set their properties (again via reflection) based on the data contained in that text file. Think of a JSON structure like this:

```json
[
  {
    "type": "Player",
    "general": {
      "position": "1,2,3",
      "rotation": "1,2,3",
      "scale": "1,2,3"
    },
    "specific": {
      "someSubclassField": "myValue",
      "anotherPlayerField": "myValue2"
    }
  },
  {
    "type": "MainMenu",
    "general": {
      "position": "0,0,0",
      "rotation": "0,0,0",
      "scale": "1,1,1"
    },
    "specific": {
      "isLevelEditorEnabled": true
    }
  }
]
```

Is there some mechanism I can adopt to avoid wiring up game objects at compile time? I would like to avoid a myriad of if statements. Any advice is appreciated. Thanks in advance!
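The usual C++ substitute for reflection-based instantiation is a registry mapping type names to factory functions. A minimal sketch, assuming hypothetical `Player` and `MainMenu` subclasses (the JSON parsing and property assignment are omitted; only the name-to-instance step is shown):

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <memory>
#include <string>

// Minimal stand-in for the engine's GameObject base class.
struct GameObject {
    virtual ~GameObject() = default;
    virtual std::string typeName() const = 0;
};

// Registry mapping a type name (as found in the room file) to a factory.
using Factory = std::function<std::unique_ptr<GameObject>()>;

std::map<std::string, Factory>& registry() {
    static std::map<std::string, Factory> r;
    return r;
}

// Registering a type once makes it instantiable by name at runtime,
// with no per-type if statements in the loader.
template <typename T>
void registerType(const std::string& name) {
    registry()[name] = [] { return std::make_unique<T>(); };
}

// Hypothetical game object types matching the example room file.
struct Player : GameObject {
    std::string typeName() const override { return "Player"; }
};
struct MainMenu : GameObject {
    std::string typeName() const override { return "MainMenu"; }
};

// What the room loader would call for each entry's "type" field.
std::unique_ptr<GameObject> instantiate(const std::string& type) {
    auto it = registry().find(type);
    return it != registry().end() ? it->second() : nullptr;
}
```

Each subclass registers itself once (often via a static initializer), and the room loader stays generic: it only ever sees the base class.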
  2. As for "Does a game company spend on your education?", I meant whether large companies send you to some kind of seminar sometimes, not a 3-4 year full degree XD. Yes, you are really right on this. That's why I found it really difficult. I'd rather stick with OpenGL and try to make some kind of game XD. I hadn't considered that. Thanks for all the replies. This topic made things clearer.
  3. Hi all! Today, a question came up in my mind. Currently, I'm a first-year university student. Every day I try to balance study and work. I mainly work as a freelancer, but I don't really like making websites and mobile apps (which are the most frequent requests from my customers). I do like working with Arduino and Raspberry Pi (I think they are more fun to work with than making the same website / e-commerce / mobile app over and over, and over... and over). But... it's not what I want, despite being a decent source of income...

In my free time, I'm putting all the effort I can into my small OpenGL engine. By the time I graduate, I'd like to have at least a fully functional OpenGL engine, featuring lighting, shadows, ocean simulation (I'm still struggling with it XD) and skeletal animation (and maybe an audio backend, at the end). I succeeded in implementing deferred shading and I'm currently working on lighting. I think I will finish this project in time for my graduation thesis (I have almost three years left XD).

But the question is: what chances do I have of working at a game company? AFAIK, OpenGL is getting kind of deprecated (I know, it will still be supported for a loooooong time). I'd rather finish my project with OpenGL than spend time learning Vulkan, which I found really hard, especially because almost no books or tutorials were available. You can't find anywhere near the amount of resources for Vulkan that exist for OpenGL (I'm talking about the API itself, not about shading techniques).

So, another question, though less important: does a game company spend on your education? Developers who learn DirectX, OpenGL, Vulkan, Metal, etc. do it because they are really interested in it. Otherwise, I would stay here, without struggling, making the same Magento e-commerce or the same ERP software over and over, and over LOL. I don't expect small or indie game companies to spend money on this, but I think BIG companies should care about their employees and keep them up to date on new technologies (Vulkan, for instance). What do you think? What do you advise? Thanks in advance!
  4. Hi all! I think this is the right section to ask my question. It's just a curiosity, but I'll try to express it as a question xD. Anyway, I was reading about the Nyquist–Shannon theorem, which says the minimum sampling frequency for a signal is 2 times the maximum frequency in the signal. Human ears can hear up to about 20 kHz, so now I understand why most audio files are encoded at a sampling rate of 44.1 or 48 kHz. We know that if we stretch an audio sample to make it longer, it becomes distorted, and this effect can be reduced by pitch-shifting the audio down. But if I record an audio sample at a sampling rate of 96 kHz (like professional / studio equipment does), can I stretch the audio by slowing it down two times without distortion? I cannot test this myself, because I don't have a good microphone or a professional audio interface, but I think that after slowing down the audio, the effective sampling rate would still be high enough to avoid distortion, right?

I think it's possible, because if we make an analogy with video: to give the illusion of motion, we need around 23 FPS (23 Hz). So, if we take its double, we have solid motion at 46 FPS (as Thomas Edison said, anything less will strain the eye xD), which is pretty good (I can notice the difference between 46 and 60 FPS, but it's not a lot, at least for me). If we take a slow-motion video captured at 120 FPS and stretch it by two times, the video is still smooth, because the frame rate is still above double the maximum frequency.

So, please let me know if it is possible to stretch audio without distortion... Thanks for any help! Excuse my bad English :(
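The arithmetic behind the idea can be sketched like this (a toy illustration of the Nyquist bookkeeping, not real audio processing; the 30 kHz component is a hypothetical ultrasonic detail that only a high sampling rate can capture):

```cpp
#include <cassert>

// Highest frequency a given sampling rate can represent (Nyquist limit).
double nyquist(double sampleRate) { return sampleRate / 2.0; }

// Playing a recording back at 1/factor speed divides every frequency in it
// by that factor (plain slowdown, i.e. the pitch drops too).
double slowedFrequency(double f, double factor) { return f / factor; }
```

Under these assumptions, a 96 kHz recording can hold content up to 48 kHz; after a 2x slowdown that content sits at or below 24 kHz, so the slowed material still fits in an ordinary 44.1/48 kHz playback stream, which is the intuition behind recording high-speed sources at elevated sample rates.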
  5. From the documentation, I have understood that once you create a rigid body, you must add it to the discrete world by calling the addRigidBody method. So, are the new transformations supposed to be updated automatically once stepSimulation is called? In fact, collision detection sometimes works and sometimes does not. I don't know if it depends on the geometry, but I don't think so... Any clue?   P.S.: I don't get the meaning of the -1 on my post -.-"
  6.   I call the stepSimulation method on my btDiscreteDynamicsWorld object...
  7.   Thanks for the help, but it doesn't change anything... Collisions are still incorrect... why?
  8. Hi! I'm trying to integrate the Bullet physics library into my engine, but some collisions are weird. Also, sometimes I get an assertion because the W component of some objects in the scene becomes 0. I don't know why -.-"   This is how I start the physics simulation:

```cpp
config = new btDefaultCollisionConfiguration();
dispatcher = new btCollisionDispatcher(config);
broadphase = new btDbvtBroadphase();
solver = new btSequentialImpulseConstraintSolver();
world = new btDiscreteDynamicsWorld(dispatcher, broadphase, solver, config);
world->setGravity(btVector3(0.0, -9.81, 0.0));
```

And this is how I create a mesh made of triangles (I know there are more efficient shapes, but it's just to see if it works):

```cpp
collisionShape = new btConvexHullShape();
const unsigned int size = model->vertices.size();
for (unsigned int i = 0; i != size; i++)
    collisionShape->addPoint(btVector3(model->vertices[i].x, model->vertices[i].y, model->vertices[i].z));

motionstate = new btDefaultMotionState(btTransform(
    btQuaternion(rotation.x, rotation.y, rotation.z, 1.0),
    btVector3(position.x, position.y, position.z)
));

btRigidBody::btRigidBodyConstructionInfo rigidBodyCI(
    mass,           // mass, in kg. 0 -> static object, will never move.
    motionstate,
    collisionShape, // collision shape of the body
    btVector3(position.x, position.y, position.z) // local inertia
);
rigidBody = new btRigidBody(rigidBodyCI);
Physics::world->addRigidBody(rigidBody);
```

And this is what happens: https://www.youtube.com/embed/sNFf4Gx_CNM

I have also tried btBvhTriangleMeshShape, creating the object and giving it all the triangles of my mesh, but it doesn't even collide with anything. I followed the official documentation, but it does not work. Am I missing something? I am new to physics engines, so I think I'm missing something.   I also tried scaling my models, because some people said that physics engines work better with large geometry... but nothing.   Any ideas?
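An aside on the W-component assertion above: `btQuaternion(rotation.x, rotation.y, rotation.z, 1.0)` feeds three Euler angles in as raw quaternion components, which in general does not yield a unit quaternion. A library-independent sketch of building a properly normalized quaternion from yaw/pitch/roll (using a minimal hand-rolled struct, not Bullet's API):

```cpp
#include <cassert>
#include <cmath>

// Minimal quaternion type; only what this illustration needs.
struct Quat { double x, y, z, w; };

// Standard yaw (Z), pitch (Y), roll (X) to quaternion conversion.
// The result is a unit quaternion by construction.
Quat fromEuler(double yaw, double pitch, double roll) {
    double cy = std::cos(yaw * 0.5),   sy = std::sin(yaw * 0.5);
    double cp = std::cos(pitch * 0.5), sp = std::sin(pitch * 0.5);
    double cr = std::cos(roll * 0.5),  sr = std::sin(roll * 0.5);
    return {
        sr * cp * cy - cr * sp * sy,   // x
        cr * sp * cy + sr * cp * sy,   // y
        cr * cp * sy - sr * sp * cy,   // z
        cr * cp * cy + sr * sp * sy    // w
    };
}

double norm(const Quat& q) {
    return std::sqrt(q.x * q.x + q.y * q.y + q.z * q.z + q.w * q.w);
}
```

Passing a non-unit rotation into a physics transform is a plausible source of the degenerate-W assertion, since repeated multiplication of non-unit quaternions can drift arbitrarily.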
  9. Link: http://graphics.ucsd.edu/courses/rendering/2005/jdewall/tessendorf.pdf As I understand it, I should take the inverse FFT (FFT^-1) of this texture, which is the ocean in the frequency domain. So, to obtain the displacement map, I should move this texture into a periodic ("tileable") spatial domain via the FFT (also known as the butterfly algorithm). Right? Or am I missing something? I see a lot of other math formulas... I don't know what to do now XD   I want to do this by myself, especially because I'm writing the code in a compute shader. Anyway, this ocean stuff reminds me of some kind of signal processing XD... but in reality I am only trying to apply the math explained in that paper :P
  10. I have edited the first post, since no one has replied yet.
  11. Hi all! I'm struggling to understand Tessendorf's paper. I have read it several times, but nothing. I have implemented the FFT in my project (I know what the FT is and what it is used for), but now I don't know how to proceed. I have never had so many problems understanding math, given a good explanation.   This is what I have understood: 1) First we need to calculate the Phillips spectrum, in the frequency domain. 2) Then we use the inverse FFT to convert it into the spatial domain.  This is what I get once I merge h0(k) and h0(-k) into one texture. And then?? What do I have to do?? Excuse me, I know I may come across as a complete n00b, but this is driving me crazy :(
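Independent of the ocean math, the "frequency domain to spatial domain" step can be sanity-checked on the CPU before porting it to a compute shader. A minimal recursive radix-2 FFT / inverse FFT sketch (array length must be a power of two; a GPU butterfly implementation would be iterative, but the math is the same):

```cpp
#include <cassert>
#include <cmath>
#include <complex>
#include <vector>

using cd = std::complex<double>;

// Recursive radix-2 Cooley-Tukey transform.
// invert = false: forward FFT; invert = true: inverse (caller scales by 1/n).
void fft(std::vector<cd>& a, bool invert) {
    const size_t n = a.size();
    if (n == 1) return;
    std::vector<cd> even(n / 2), odd(n / 2);
    for (size_t i = 0; i < n / 2; i++) {
        even[i] = a[2 * i];
        odd[i]  = a[2 * i + 1];
    }
    fft(even, invert);
    fft(odd, invert);
    const double pi = std::acos(-1.0);
    const double ang = 2.0 * pi / static_cast<double>(n) * (invert ? 1.0 : -1.0);
    for (size_t k = 0; k < n / 2; k++) {
        cd t = std::polar(1.0, ang * static_cast<double>(k)) * odd[k]; // twiddle factor
        a[k]         = even[k] + t;  // butterfly: top output
        a[k + n / 2] = even[k] - t;  // butterfly: bottom output
    }
}

// Inverse FFT = opposite-sign-exponent transform divided by n.
void ifft(std::vector<cd>& a) {
    fft(a, true);
    for (cd& x : a) x /= static_cast<double>(a.size());
}
```

Applying `ifft` to a spectrum built from the Phillips-spectrum amplitudes (row-wise, then column-wise for a 2D texture) is exactly the step that turns the frequency-domain texture into a tileable height field.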
  12. Hi all! I'm trying to create a water effect in OpenGL.   In my Water class, I have two FBOs: one for reflection and one for refraction. I render the flipped scene with a clip distance into the reflection FBO, and the refracted scene with its clip plane into the other FBO. Then I do projective texture mapping in the fragment shader, with some distortion driven by a DU/DV map, which is the derivative of a normal map (I downloaded one online).   I got it working, along with the Fresnel effect (which I discovered to be very simple XD). The problem is that some black pixels appear on these textures. I really don't know why!! I thought it was due to the texture itself, but they appear as I raise the distortion.   Just take a look:   This is the vertex shader:

```glsl
#version 330 core

layout(location = 0) in vec3 vertexPos;
layout(location = 1) in vec2 vertexUV;

out vec4 clipSpace;
out vec2 textureCoords;
out vec3 cameraVector;

uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
uniform vec3 traslation;
uniform vec3 cameraPos;

void main()
{
    vec4 worldPosition = vec4(vertexPos + traslation, 1.0);
    mat4 MVP = projection * view * model;
    gl_Position = MVP * worldPosition;

    // Copy values into out variables
    clipSpace = gl_Position;
    textureCoords = vertexUV;

    // Calculate the camera vector by subtracting two points in world space
    cameraVector = cameraPos - worldPosition.xyz;
}
```

And this is the fragment shader:

```glsl
#version 330 core

out vec4 color;

in vec4 clipSpace;
in vec2 textureCoords;
in vec3 cameraVector; // interpolated from the vertex shader

uniform sampler2D reflectionTexture;
uniform sampler2D refractionTexture;
uniform sampler2D DUDVmap;
uniform float time;

const float WAVE_STRENGTH = 0.0075;

void main()
{
    // Normalized device space: divide everything by the W component
    vec2 ndc = (clipSpace.xy / clipSpace.w) * 0.5 + 0.5;

    // Store coordinates - computationally this is pretty useless
    vec2 coords = vec2(ndc.x, ndc.y);

    // Two distortions make the effect look more real
    vec2 distortion1 = texture2D(DUDVmap, vec2(textureCoords.x + time, textureCoords.y + time)).rg * 2.0 - 1.0;
    vec2 distortion2 = texture2D(DUDVmap, vec2(-textureCoords.x + time, -textureCoords.y + time)).rg * 2.0 - 1.0;
    vec2 totalDistortion = distortion1 + distortion2;

    // Add distortion
    coords += totalDistortion * WAVE_STRENGTH;

    // Clamp distortion, because of the NDC border
    coords = clamp(coords, 0.001, 0.999);

    // Get reflection and refraction color vectors
    vec4 reflectColor = texture2D(reflectionTexture, coords);
    vec4 refractColor = texture2D(refractionTexture, coords);

    // Calculate the cosine between the quad normal and the camera vector
    float theta = dot(normalize(cameraVector), vec3(0.0, 0.0, 1.0));

    // Mix the reflection with the refraction by the above factor
    color = mix(reflectColor, refractColor, theta);
}
```

Please help me. I don't know why this happens! Thanks in advance!

EDIT: SOLVED!!! This topic can be closed! The problem was the wrapping of the texture. Now I set it to GL_CLAMP_TO_EDGE and it's fine!
  13. OpenGL - Can't get FBO to work

SOLVED! Yes, I know there were some misunderstandings on my part, such as the attachments. I have now re-read the documentation up to FBOs. Then, while I was reading about textures: if you looked at my FBO class, the implementation of "CreateEmptyTexture" was missing this part XD. This was driving me crazy, because the framebuffer was clearly bound, but glClearColor didn't even work! Now it works XD
  14. Hi! I'm following the OpenGL tutorials at www.opengl-tutorial.org and I'm stuck at the FBO section. I have set up an FBO class, which I have attached to this message.   This is what I have understood: every FBO has a color buffer and a depth buffer, which both go into a texture. So basically we have to generate these two buffers and an empty texture, passing nullptr to the glTexImage2D function. Then we have to attach that texture to the framebuffer: we write colors into GL_COLOR_ATTACHMENT0, which corresponds to layout 0 in the shader. Finally, we have to bind the framebuffer, so we can draw everything we want into it. Once done, we switch back to framebuffer 0, which is the screen buffer.   This is what I'm trying to do: draw the entire scene by calling only the Draw method of all the active instances, just to see if the FBO class works. BUT NOTHING!!   This is the code which is supposed to draw a cube into the FBO:

```cpp
// Don't judge this code: it's only to see if FBOs actually work XD.
fbo->Bind(); // Render to the FBO texture
Model * model2 = ResourceManager::RequestModel("cube");
model2->textureID = ResourceManager::RequestTexture("texture"); // GLuint texture ID
model2->BindBuffers(); // Pass data to OpenGL
model2->ActiveTexture(ResourceManager::RequestShader("shader")->uniforms["texSampler"]); // Bind texture unit 0 and pass it to the shader
model2->Draw(); // Draw procedure for 3D models, including vertices and UV mapping
FBO::BindDefault(); // Switch back to the screen
```

The above code works normally, but not if I call the Bind method of the fbo object before actually drawing anything. So, if you took a look at my FBO class, I should have that cube drawn into the "renderedTexture" variable, shouldn't I?   But I don't. What am I doing wrong? Thanks in advance!   Excuse my bad English xD
  15.   I was talking about the 3D model itself, not the texture. It appears a bit "jagged"... then I realized that's to be expected, because I'm generating a 3D model from a grayscale bitmap (and 8 bits per pixel isn't that much XD). Once textured properly it doesn't look that bad :P   Anyway, thanks to everyone for all the help!