Showing results for tags '2D' in content posted in Graphics and GPU Programming.



Found 49 results

  1. For a 2D game, does using a float2 for position increase performance in any way? I know that in the end the vertex shader will have to return a float4 anyway, but does using a float2 decrease the amount of data that has to be sent from the CPU to the GPU?
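On the bandwidth half of the question above: a float2 position does halve the per-vertex position data uploaded from the CPU, and the vertex shader can expand it with float4(pos, 0, 1). A small C++ sketch of the saving, assuming 4-byte floats and these hypothetical vertex layouts:

```cpp
#include <cstddef>

// Hypothetical vertex layouts: position stored as float2 vs float4.
struct VertexPos2 { float x, y; };
struct VertexPos4 { float x, y, z, w; };

// Bytes saved per vertex buffer when positions are uploaded as float2.
std::size_t bytesSaved(std::size_t vertexCount) {
    return vertexCount * (sizeof(VertexPos4) - sizeof(VertexPos2));
}
```

For 1000 vertices that is 8 KB less to transfer per upload, on top of the smaller memory footprint.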
  2. Hi, two questions. I am trying to rotate chroma in YUV colour space by separating each of the components and applying the following formulas:

      int Ut = ((U - 128) * cos(hue[H]) + (V - 128) * sin(hue[H])) + 128;
      int Vt = ((V - 128) * cos(hue[H]) - (U - 128) * sin(hue[H])) + 128;

      ...where hue[H] is an array of rotation angles between 0.0 and 360.0. This seems to work OK but is dog slow. Is there a way to speed this up by converting to an integer-only calculation and doing away with cosine and sine? Second, when converting between YUV and RGB, it seems that luma is also affected after rotation. To demonstrate what I mean, I am randomly rotating each line's chroma. The process is: convert from RGB to YUV > rotate chroma > convert to RGB. Top left is the original image, the two images on the right are U and V, and the bottom left is the luma. As you can see, it has some artefacts from the rotation. Is there a way to avoid this? Thanks, CJ
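One common answer to the first question above is to precompute the sines and cosines once per hue angle as fixed-point integers, so the per-pixel work is integer multiplies and shifts only. A hedged C++ sketch (the Q14 fixed-point format and whole-degree table are my assumptions, not from the post):

```cpp
#include <array>
#include <cmath>

// Precompute cos/sin for each whole-degree hue as Q14 fixed point (value * 16384).
struct HueTables {
    std::array<int, 360> cosq, sinq;
    HueTables() {
        const double kPi = 3.14159265358979323846;
        for (int d = 0; d < 360; ++d) {
            double r = d * kPi / 180.0;
            cosq[d] = static_cast<int>(std::lround(std::cos(r) * 16384.0));
            sinq[d] = static_cast<int>(std::lround(std::sin(r) * 16384.0));
        }
    }
};

// Integer-only chroma rotation: the same formula as in the post, but with
// table lookups and a shift instead of calling cos()/sin() per pixel.
inline void rotateChroma(const HueTables& t, int deg, int U, int V, int& Ut, int& Vt) {
    int u = U - 128, v = V - 128;
    Ut = ((u * t.cosq[deg] + v * t.sinq[deg]) >> 14) + 128;
    Vt = ((v * t.cosq[deg] - u * t.sinq[deg]) >> 14) + 128;
}
```

On the second question, clamping Ut/Vt to [0, 255] before converting back to RGB is also worth checking; chroma that leaves the valid range after rotation is a common source of artefacts that look like luma damage.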
  3. I have a game application in Direct3D 9 which creates a couple of views (windows), a couple of swap chains, and a D3D device. In windowed mode with a single view (window/monitor) it works fine, but in fullscreen mode with two views it fails. When I set both views' D3DPRESENT_PARAMETERS Windowed to false, CreateDevice(...) hangs and the application stops responding. When I set just one of the views' Windowed to false (and the other to true), CreateDevice(...) returns D3DERR_INVALIDCALL. What might be happening? Any advice is welcome. Thanks! The code:

      d3d_object = Direct3DCreate9( D3D_SDK_VERSION );
      if ( FAILED( d3d_object->GetDeviceCaps( D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &d3dCaps ) ) )
      {
          swprintf_( errbuf, _T( "Failed to obtain DeviceCaps D3D" ) );
          return FALSE;
      }
      ...
      if ( pcgaming()->active ) // Windowed
      {
          hr = d3d_object->CreateDevice( D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, WindowHandle(),
                                         //D3DCREATE_HARDWARE_VERTEXPROCESSING | D3DCREATE_MULTITHREADED,
                                         D3DCREATE_SOFTWARE_VERTEXPROCESSING | D3DCREATE_MULTITHREADED,
                                         &present_parameters[ 0 ], &ddraw );
      }
      else // Fullscreen
      {
          if ( WindowHandleTop() )
          {
              hr = d3d_object->CreateDevice( D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, WindowHandle(),
                                             D3DCREATE_SOFTWARE_VERTEXPROCESSING | D3DCREATE_MULTITHREADED | D3DCREATE_ADAPTERGROUP_DEVICE, //D3DCREATE_HARDWARE_VERTEXPROCESSING
                                             present_parameters, &ddraw
  4. I really, really would like to have smooth, artifact-free resizing of an app that renders its content using Direct2D. (In point of fact, the app is an editor, but I'm using "game-like" techniques to render the UI, and a solution to this problem would be useful for games.) I've been playing with this for a while, and am frustrated that I have a couple of "almost" solutions, but nothing that works robustly. I wrote up my explorations (with links to code) on my blog. The code happens to be in Rust, but I'm calling WinAPI pretty directly, so a working recipe would be easy enough to adapt to or from C++. I'm feeling stuck enough on this that I'm offering a $2500 bounty for a proper solution. I'd love to be able to make forward progress, and don't want to spend a lot more time on it myself. I'm not 100% sure it's even possible on Windows, though I see glimmers of hope.
  5. Hello guys, I have a problem: I want to draw text with Direct2D1 in SharpDX, and every time I create a DeviceContext with my 2D device I get a System.AccessViolationException. Here's my code (this.Handle = the handle of my Form):

      Device d2dDevice = new Device(this.Handle);
      DeviceContext d2dContext = new DeviceContext(d2dDevice, DeviceContextOptions.None);

      Can someone explain where my mistake is? Greets, Benjamin
  6. Good evening, I want to make a 2D game which involves displaying some debug information, especially for collision, enemy sight ranges and so on. First off, I was thinking about all the shapes I will need for debugging purposes: circles, rectangles, lines, polygons. I am really stuck right now on a fundamental question: where do I store the vertex positions for each line (object)? Currently I am not using a model matrix, because I am using an orthographic projection and set the final position directly in the VBO. That means that if I add a new line, I have to expand the "points" array and re-upload it (call glBufferData again) every time. The other method would be to use a model matrix and a fixed VBO for a line, but it would also be messy to create a line from exactly (0,0) to (100,20) by computing the rotation and scale to make it fit. If I proceed with option 1, "updating the array each frame", I was thinking of having 4 draw calls every frame: one each for the lines VAO, the polygons VAO, and so on. In addition, I am planning to use some sort of ECS-based architecture, so the other question is: should I treat those debug objects as entities/components? To me it would make sense to treat them as entities, but that creates a new issue with the array approach, because each one would then have, for example, a transform component and a render component, plus a special render component for debug objects (no texture etc.). For me the transform component is also just a matrix, but how would I then define a line? Treating them as components wouldn't be a good idea in my eyes, because then I would always need an entity. Then again, an entity is just an ID, so maybe it is a component after all? Regards, LifeArtist
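For the storage question in the post above, a common pattern (a sketch of one option, not the only one) is a per-frame accumulator: each debug shape pushes raw vertices into one CPU-side array, the renderer uploads the whole array once per frame with a single glBufferData and one draw call, then clears it. No per-line VBOs or model matrices are needed:

```cpp
#include <cstddef>
#include <vector>

// CPU-side debug-draw accumulator. Systems call addLine() freely during the
// frame; the renderer uploads lineVerts in one glBufferData, issues one
// GL_LINES draw call, then calls clear() for the next frame.
struct DebugDraw {
    std::vector<float> lineVerts; // x,y pairs, two vertices per line

    void addLine(float x0, float y0, float x1, float y1) {
        lineVerts.insert(lineVerts.end(), {x0, y0, x1, y1});
    }
    std::size_t vertexCount() const { return lineVerts.size() / 2; }
    void clear() { lineVerts.clear(); } // call after uploading each frame
};
```

The same accumulator idea extends to rectangles, circles and polygons (one array per primitive type), which also answers the ECS worry: debug shapes can be plain immediate-mode calls rather than entities.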
  7. Hi, I am having a problem where I am drawing 4000 squares on screen using VBOs and IBOs, but the framerate on my Huawei P9 is only 24 FPS. Considering it has an 8-core CPU and a pretty powerful GPU, I don't think it should be incapable of drawing 4000 textured squares at 60 FPS. I profiled with DDMS and found that most of the time was spent in the put() method of the FloatBuffer, but the strange thing is that if I draw these squares outside of the view frustum, the FPS increases, and I'm not using frustum culling. If you have any ideas what could be causing this, please share them with me. Thank you in advance.
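If the profiler really does point at per-element put() calls, the usual fix is to assemble the frame's vertex data in a persistent staging array and hand it to the buffer in one bulk operation instead of float-by-float. A C++ sketch of the idea (the mapped-buffer pointer is hypothetical; in Java the equivalent is the bulk put(float[]) overload):

```cpp
#include <cstddef>
#include <cstring>
#include <vector>

// Copy a whole frame of staged vertex data into a (hypothetical) mapped GPU
// buffer with a single bulk copy; returns the number of bytes written.
std::size_t bulkUpload(const std::vector<float>& staging, float* mappedBuffer) {
    std::memcpy(mappedBuffer, staging.data(), staging.size() * sizeof(float));
    return staging.size() * sizeof(float);
}
```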
  8. Hello, I am rewriting my pixel-art game and have decided to support the following resolutions: 1920x1080, 1280x720, and 1024x768. I think they all look fine on different-sized monitors. Does anyone have an opinion on any other resolutions I should support? When I started this research, I recall the general response being that what really matters is supporting the aspect ratios. What does this mean exactly, and why? Thank you, Josheir
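On the aspect-ratio point in the post above: 1920x1080 and 1280x720 are both 16:9, while 1024x768 is 4:3, so the three resolutions really give the layout two screen shapes to handle. Reducing a resolution by the greatest common divisor makes this explicit (a small C++17 sketch):

```cpp
#include <numeric>

// Reduce a resolution to its aspect ratio, e.g. 1920x1080 -> 16:9.
struct Ratio { int w, h; };

Ratio aspectRatio(int width, int height) {
    int g = std::gcd(width, height);
    return {width / g, height / g};
}
```

Resolutions with the same ratio can share one scaled layout; a different ratio needs letterboxing or a rearranged UI.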
  9. I'm wondering if I'm missing any steps or making incorrect assumptions anywhere in creating an "infinite zoom" effect in a viewport, similar to what happens in the software Mischief, where you can pan LRUD and zoom in and out. Mischief claims trillions of levels of zoom. In this prototype in Maya, as you can see, I have a front-facing orthographic camera and a tree-like structure of 1x1 planes that have been scaled down and/or translated in Z. Imagine the planes as having arbitrarily sized textures on them. Later I want this to be in 3D stereo or VR, with depth playing a factor as well as size. I am prototyping this in Maya before I move it to my engine in GL with GLM. I can successfully zoom the ortho camera and set camera bookmarks that I can recall. However, I quickly run out of camera zoom resolution as the zoom parameter approaches zero. For example, in this scene pictured above I only have 24 1x1 planes, but when I'm zoomed far enough in that I can only see the smallest one, the camera zoom is near zero (even if I query the internal double value), everything starts to shake, and I can't really go any smaller than 0. With Mischief, I can zoom in way deeper, to where I haven't found a limit, and do it smoothly. How can I keep zooming in with ortho? I tried using a perspective cam and just transforming it, but the perspective distortion warps the perceived position of the planes too much, to where once you zoom in far enough everything begins drifting behind each other. They could also be moving objects back in world space with a nearly flat perspective cam as the user "zooms" in...? Another question is: why have I only seen this implemented in Mischief and not in other programs? Is the implementation that difficult, or not useful? And if it's not useful, why did the Foundry buy out the program?
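One way to avoid the zoom parameter approaching zero, as described above, is to never hand the camera a tiny number at all: keep total zoom as a log-scale value, and split it into an integer "level" (folded into the scene's root transform) plus a camera scale that always stays in [1, 10). Both numbers remain well inside double precision indefinitely. A hedged sketch of that bookkeeping (the split-by-powers-of-ten scheme is my assumption, not how Mischief documents it):

```cpp
#include <cmath>

struct ZoomState {
    double logZoom = 0.0; // total zoom stored as log10; never underflows

    void zoomBy(double factor) { logZoom += std::log10(factor); }

    // Integer level: pre-scale the scene root by 10^level so the camera
    // parameter handed to the graphics API never approaches zero.
    int level() const { return static_cast<int>(std::floor(logZoom)); }

    // Camera scale in [1, 10): what the ortho camera actually uses.
    double camScale() const { return std::pow(10.0, logZoom - level()); }
};
```

Each time level() changes, the renderer rescales the root transform by a power of ten (a "floating origin" for scale); the camera itself only ever sees comfortable values.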
  10. I have a texture rectangle of type RGBA, with an alpha channel like this (0 = 0, x = 255):

      [0][0][0][0][0][0][0]
      [0][0][x][x][0][0][0]
      [0][x][x][x][0][0][0]
      [0][x][x][x][x][x][0]
      [0][0][0][x][x][x][0]
      [0][0][0][x][x][0][0]
      [0][0][0][0][0][0][0]

      I want a picking function for it. Right now I have a very clumsy solution:

      // system init...
      // pixel buffer to texture... (the texture is a handle, a number; I cannot read the pixel matrix back)
      BYTE pixel_alpha[7][7];   // saved copy of the alpha channel matrix
      // ...
      texture.draw(30, 30);     // draw the picture in the window at x = 30, y = 30
      // ...
      if (picking(GetMouseX, GetMouseY)) MessageBox("click!");

      /* picking function */
      bool picking(int x, int y)
      {
          // get mouse position relative to the image
          int mouse_to_image_x = x - 30;
          int mouse_to_image_y = y - 30;
          // mouse is outside the image
          if (mouse_to_image_x < 0 || mouse_to_image_x >= 7 ||
              mouse_to_image_y < 0 || mouse_to_image_y >= 7) return false;
          // inside the image and the alpha channel is 255
          return pixel_alpha[mouse_to_image_x][mouse_to_image_y] == 255;
      }

      But: 1. the alpha-channel matrix uses an excessive amount of memory; 2. I don't think it is efficient. So how should I do this? It is not ray picking; what is this technique called?
  11. Hey guys, I found a great channel on YouTube. I was always a Unity guy and got pretty good at it, but I finally ended up doing stuff on my own without a specific engine, and I love the freedom of it. I found MonoGame, and I found this guy: he is a professional programmer at a AAA game studio and shows really cool stuff for 2D MMORPGs and server-client programming! Definitely check it out: https://www.youtube.com/channel/UCThwyD-sY4PwFm7EM89shhQ Have fun
  12. I'm currently learning OpenGL and have started to make my own 2D engine. So far I've managed to render a rectangle made out of 2 triangles with glDrawElements() and apply a custom shader to it. This is how my code looks so far:

      #include <all>

      int main() {
          Window window("OpenGL", 800, 600);
          Shader shader1("./file.shader");

          float vt[8] = { -0.5f, -0.5f, 0.5f, -0.5f, 0.5f, 0.5f, -0.5f, 0.5f };
          const GLuint indices[6] = { 0, 1, 2, 2, 3, 0 };

          GLuint vao, vbo, ebo;
          glGenVertexArrays(1, &vao);
          glBindVertexArray(vao);

          glGenBuffers(1, &vbo);
          glBindBuffer(GL_ARRAY_BUFFER, vbo);
          glBufferData(GL_ARRAY_BUFFER, sizeof(vt), vt, GL_STATIC_DRAW);
          glEnableVertexAttribArray(0);
          glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, 0);

          glGenBuffers(1, &ebo);
          glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
          glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

          while (window.isOpen()) {
              window.clear(0.0, 0.5, 1.0);
              shader1.use();
              glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, NULL);
              window.update();
          }

          glDeleteBuffers(1, &vbo);
          glDeleteBuffers(1, &ebo);
          glDeleteVertexArrays(1, &vao);
          return 0;
      }

      There are multiple ways to do it, but I've heard that drawing objects and switching shaders one by one is pretty expensive. I'm looking to preload all the data (vertices, shaders) in one buffer, or whatever is most efficient, and when it's all ready, draw it in one call, similar to a batch. Thanks!
  13. I'm looking to render multiple objects (rectangles) with different shaders. So far I've managed to render one rectangle made out of 2 triangles and apply a shader to it, but when it comes to rendering another one I get stuck. I searched for documentation or anything that could help me, but everything only shows how to render a single object. Any tips or help are highly appreciated, thanks! Here's my code for rendering one object with a shader:

      #define GLEW_STATIC
      #include <stdio.h>
      #include <GL/glew.h>
      #include <GLFW/glfw3.h>
      #include "window.h"

      #define GLSL(src) "#version 330 core\n" #src
      // #define ASSERT(expression, msg) if(expression) {fprintf(stderr, "Error on line %d: %s\n", __LINE__, msg); return -1;}

      int main()
      {
          // Init GLFW
          if (glfwInit() != GL_TRUE) {
              std::cerr << "Failed to initialize GLFW" << std::endl;
              exit(EXIT_FAILURE);
          }

          // Create a rendering window with an OpenGL 3.2 core context
          glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
          glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
          glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
          glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
          glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);

          // assign window pointer
          GLFWwindow *window = glfwCreateWindow(800, 600, "OpenGL", NULL, NULL);
          glfwMakeContextCurrent(window);

          // Init GLEW
          glewExperimental = GL_TRUE;
          if (glewInit() != GLEW_OK) {
              std::cerr << "Failed to initialize GLEW" << std::endl;
              exit(EXIT_FAILURE);
          }

          // ----------------------------- RESOURCES ----------------------------- //
          const GLfloat positions[8] = {
              -0.5f, -0.5f,
               0.5f, -0.5f,
               0.5f,  0.5f,
              -0.5f,  0.5f,
          };
          const GLuint elements[6] = { 0, 1, 2, 2, 3, 0 };

          // Create Vertex Array Object
          GLuint vao;
          glGenVertexArrays(1, &vao);
          glBindVertexArray(vao);

          // Create a Vertex Buffer Object and copy the vertex data to it
          GLuint vbo;
          glGenBuffers(1, &vbo);
          glBindBuffer(GL_ARRAY_BUFFER, vbo);
          glBufferData(GL_ARRAY_BUFFER, sizeof(positions), positions, GL_STATIC_DRAW);

          // Specify the layout of the vertex data
          glEnableVertexAttribArray(0); // layout(location = 0)
          glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, 0);

          // Create an Element Buffer Object and copy the index data to it
          GLuint ebo;
          glGenBuffers(1, &ebo);
          glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
          glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(elements), elements, GL_STATIC_DRAW);

          // Create and compile the vertex shader
          const GLchar *vertexSource = GLSL(
              layout(location = 0) in vec2 position;
              void main() {
                  gl_Position = vec4(position, 0.0, 1.0);
              }
          );
          GLuint vertexShader = glCreateShader(GL_VERTEX_SHADER);
          glShaderSource(vertexShader, 1, &vertexSource, NULL);
          glCompileShader(vertexShader);

          // Create and compile the fragment shader
          // (in #version 330 core you must declare your own output;
          // redeclaring the built-in gl_FragColor is invalid)
          const char *fragmentSource = GLSL(
              out vec4 fragColor;
              uniform vec2 u_resolution;
              void main() {
                  vec2 pos = gl_FragCoord.xy / u_resolution;
                  fragColor = vec4(1.0);
              }
          );
          GLuint fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
          glShaderSource(fragmentShader, 1, &fragmentSource, NULL);
          glCompileShader(fragmentShader);

          // Link the vertex and fragment shaders into a shader program
          GLuint shaderProgram = glCreateProgram();
          glAttachShader(shaderProgram, vertexShader);
          glAttachShader(shaderProgram, fragmentShader);
          glLinkProgram(shaderProgram);
          glUseProgram(shaderProgram);

          // Get the uniform's location by name and set its value
          // (the name must match the shader's declaration exactly)
          GLint uRes = glGetUniformLocation(shaderProgram, "u_resolution");
          glUniform2f(uRes, 800.0f, 600.0f);

          // ---------------------------- RENDERING ------------------------------ //
          while (!glfwWindowShouldClose(window)) {
              // Set the clear color, then clear the screen
              glClearColor(0.0f, 0.5f, 1.0f, 1.0f);
              glClear(GL_COLOR_BUFFER_BIT);

              // Draw a rectangle made of 2 triangles -> 6 indices
              glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, NULL);

              // Swap buffers and poll window events
              glfwSwapBuffers(window);
              glfwPollEvents();
          }

          // ---------------------------- CLEANUP ------------------------------ //
          glDeleteProgram(shaderProgram);
          glDeleteShader(fragmentShader);
          glDeleteShader(vertexShader);
          glDeleteBuffers(1, &vbo);
          glDeleteVertexArrays(1, &vao);
          return 0;
      }
  14. Hey, I have a minor problem that prevents me from moving forward with development, and I'm looking for a way to solve it. Overall, I have an sf::VertexArray object and want to render a shader inside its area. The problem is that the shader takes the window as its canvas and only becomes visible within the object's range, which is not what I'm looking for. Here's a stackoverflow link that shows the expected-behaviour image. Any tips or help are really appreciated. I would have accepted that answer, but currently it does not work with #version 330 ...
  15. I've started building a small library that can render pie-menu GUIs in legacy OpenGL, and I'm planning to add some traditional elements too, of course. Its interface is similar to something you'd see in IMGUI. It's written in C. Early version of the library. I'd really love to hear anyone's thoughts on this; any suggestions on what features you'd want to see in a library like this? Thanks in advance!
  16. I have a simple OpenGL engine using SDL working. I'm not using any SDL_image functionality; it's all just OpenGL. I've made 2D tilemap levels with a .txt file before, reading in the characters, using a switch statement to assign them to various textures, and using their position in the file to render them. That method works fine and produces no graphical glitches. Then I made a .tmx map in Tiled and used TmxParser to read the file into my program. I'm able to render the map properly to the screen, but when I move up and down, black lines start to appear between the tiles. The map loads fine at first; it's only once I start moving the camera around that it happens, and only when the camera moves up or down. I made a video to show exactly what I'm seeing. I see a lot of these kinds of issues when searching Google, but nothing that matches what's happening to me: https://www.youtube.com/watch?v=bm6DcJgCUKA I'm not even sure if this is a problem with the code or what it could be, so I don't know what code to include. This problem also happened to me when I was using Unity and importing a Tiled map with Tiled2Unity: there were long horizontal line glitches on the tilemap, and now it's happening to me again in C++. Is this something to do with Tiled? Has anyone encountered this before or know a solution? Any help would be greatly appreciated; I feel stuck on this issue.
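For what it's worth, black lines that appear only while the camera moves are a classic symptom of texture bleeding from a tile atlas: sub-pixel camera positions make the bilinear sampler read the neighbouring tile's edge texels. One common mitigation (a sketch, assuming a square atlas; padding tiles with duplicated border pixels is the more robust alternative) is to inset each tile's UV rectangle by half a texel:

```cpp
// Inset each tile's UV rectangle by half a texel so bilinear filtering
// never samples the neighbouring tile in the atlas.
// Assumes a square atlas of atlasSize pixels and square tiles.
struct UVRect { float u0, v0, u1, v1; };

UVRect tileUVs(int tx, int ty, int tileSize, int atlasSize) {
    float half = 0.5f / atlasSize; // half-texel inset
    return {
        (tx * tileSize) / (float)atlasSize + half,
        (ty * tileSize) / (float)atlasSize + half,
        ((tx + 1) * tileSize) / (float)atlasSize - half,
        ((ty + 1) * tileSize) / (float)atlasSize - half,
    };
}
```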
  17. I have a character in a game, made of multiple sprites in a hierarchy for skeletal animation (also called cutout animation by some). I want to make that character semi-transparent for a while, like a ghost. The problem is, when I naïvely set the transparency on the top-most sprite and all of its children, it doesn't look as I expected, because parts of the sprites that were supposed to be occluded by other sprites can now be seen through their occluders. For example, you can see the body through the clothes, or parts of the limbs inside the body, etc. (#3 in the picture below). I'd rather have it become transparent as a whole, as if it were a single image (#2 in the picture below). So my question is: how is this effect usually achieved in video games? (Preferably in OpenGL, but I'd like to know the general approach, so that I can apply it to whatever tool I use.) My first thought was to render the sprite hierarchy to a texture first, then map that texture onto a simple rectangular mesh and apply opacity to that. But this approach has some drawbacks. One of them is the additional overhead of rendering to texture, especially if there are more "ghosts" on the screen at the same time, all with different transparencies. Another is the overhead in code: it would require a framework for managing these hierarchies and their associated render targets. In other words, a mess! Is there any other, better way to do this than rendering the entire character to a texture? If there isn't, then maybe you could give me some advice on how to organize that mess in the code?
  18. Hi guys, could anyone experienced with DX11 take a look at my Graphics.cpp class? I got fonts rendering correctly with the painter's algorithm, painting over the other 3D stuff each frame. However, whenever I turn the camera left or right, the fonts get squashed narrower and narrower, then disappear completely. It seems like the fix must be a very small change, untying their rendering from the camera direction, but I just can't figure out how to do it under all this rendering complexity. Any tips would be helpful, thanks. https://github.com/mister51213/DirectX11Engine/blob/master/DirectX11Engine/Graphics.cpp
  19. Hello there. I have a rather technical question here, and as my English isn't perfect, feel free to ask if my explanation isn't clear enough. I am building a homemade voxel model creator. My models are a set of blocks, from which I generate a mesh. I would like to texture them using an external 2D image file, so I have to UV-map the vertices of my model, meaning I need a projection of the 3D vertices onto a 2D plane (the texture). I am looking for an algorithm to generate an optimal texture (optimal: having the lowest memory usage while still holding every face, without overlaps). Let me pose the problem more formally; feel free to look at the example if anything is unclear.
      • What I have: a list of planes, where each plane is defined by 2 integers (width, height).
      • What the algorithm should do:
        1) give the smallest (in terms of area) rectangle which can hold every plane, without overlaps;
        2) give the position and orientation of each plane relative to this rectangle. The position (x, y) is where the plane should be mapped; the orientation is 0 or 1, depending on whether the plane is flipped when mapped. Flipped: mapped on the rectangle from (x, y) to (x + height, y + width), instead of (x, y) to (x + width, y + height).
      There might be multiple optimal solutions (as the following example shows); I would like an algorithm that generates at least one of them. Example (notice that the "LEFT" plane is flipped once mapped here): sample.bmp Source code: https://github.com/rpereira-dev/VoxelEngine/blob/master/VoxelEngine/src/com/grillecube/client/renderer/model/editor/mesher/ModelMesherCull.java Screenshot: modelEditor.bmp NB: I've posted this topic in the "Graphics and GPU Programming" section, as the generated texture will be used in an OpenGL context.
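The problem above is rectangle packing (texture atlas packing); finding the truly smallest enclosing rectangle is NP-hard, so practical packers (shelf, skyline, MaxRects) are heuristics. As a baseline, here is a minimal shelf packer for the problem as stated; the fixed atlas width is my assumption, and it does not attempt the flipping described in the post:

```cpp
#include <algorithm>
#include <vector>

// Shelf packing: sort rectangles by height, place them left-to-right on
// "shelves" of a fixed-width atlas, opening a new shelf when a row fills.
struct Rect { int w, h, x = 0, y = 0; };

// Packs rects into an atlas of the given width; returns the used height.
int shelfPack(std::vector<Rect>& rects, int atlasWidth) {
    std::sort(rects.begin(), rects.end(),
              [](const Rect& a, const Rect& b) { return a.h > b.h; });
    int shelfY = 0, shelfH = 0, cursorX = 0;
    for (Rect& r : rects) {
        if (cursorX + r.w > atlasWidth) { // row full: open a new shelf
            shelfY += shelfH;
            cursorX = 0;
            shelfH = 0;
        }
        r.x = cursorX;
        r.y = shelfY;
        cursorX += r.w;
        shelfH = std::max(shelfH, r.h);
    }
    return shelfY + shelfH;
}
```

Trying several atlas widths and keeping the smallest resulting area gives a decent approximation of goal 1); a skyline or MaxRects packer (with 90-degree rotation allowed) would address the flipping in goal 2).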
  20. I am making a 2D game at home, and I am confused about which resolution to use. Can anyone help me choose a resolution?
  21. Hello, I'm writing a research paper on software rasterization algorithms, and at one point I give an example of a triangle rasterization algorithm. The algorithm is really basic. If the triangle is flat-top or flat-bottom, it's possible to determine the minimum and maximum x values for each scan line using the line equation of the edges, then for each scan line fill the pixels between the minimum and maximum x values. If the triangle is of another kind, it's possible to split it into flat-top and flat-bottom triangles (finding the fourth vertex) and draw it using the previous algorithm. I need to cite a reference for this algorithm. I saw it in some book in the '90s, and I can't just write it without a reference; the problem is that I can't remember where I saw it. I already tried "Computer Graphics: Principles and Practice", but the only similar algorithm there is the polygon rasterization algorithm, which is over-engineered for this kind of problem; same with "Computer Graphics: C Version". I also tried "Black Art of 3D Game Programming", which has a similar algorithm, but the algorithm I saw was in another book and slightly different. Does anyone know a book with this kind of algorithm? Any help is appreciated. Thanks.
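For concreteness, the flat-bottom half of the algorithm described above can be sketched like this (the exact vertex and fill conventions are my assumption; books differ on how they round edge crossings):

```cpp
#include <cmath>
#include <utility>
#include <vector>

// Flat-bottom case: apex (x0, y0) on top, horizontal bottom edge at y1
// with endpoints x1 < x2. Walk the two non-horizontal edges one scanline
// at a time, stepping each edge's x by its inverse slope, and fill the
// pixels between the two x values. Returns the number of pixels filled.
int fillFlatBottom(float x0, float y0, float x1, float x2, float y1,
                   std::vector<std::pair<int, int>>& outPixels) {
    float invSlopeL = (x1 - x0) / (y1 - y0); // dx per scanline, left edge
    float invSlopeR = (x2 - x0) / (y1 - y0); // dx per scanline, right edge
    float xl = x0, xr = x0;
    int filled = 0;
    for (int y = (int)std::ceil(y0); y <= (int)y1; ++y) {
        for (int x = (int)std::ceil(xl); x <= (int)xr; ++x) {
            outPixels.push_back({x, y});
            ++filled;
        }
        xl += invSlopeL;
        xr += invSlopeR;
    }
    return filled;
}
```

The flat-top case mirrors this, walking upward from a flat top edge to the bottom apex; the general triangle is split at the middle vertex as the post describes.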
  22. I've been attempting to write a Navier-Stokes fluid simulation using the Cg language in Unity a few times now, with no luck: every attempt has resulted in a blank output. I don't really know how to fix the problem, but from what I can see, I think it might lie in the render-to-texture aspect and not the shaders themselves, so this question might be more Unity-related than shader-related. Just in case: I've read Chapter 38 of GPU Gems, and while I don't fully understand the maths, I do understand the implementation. If I'm correct, the order for each timestep of the simulation is: 1. Advection. 2. Force application. 3. Projection (computing divergence, solving the Poisson equation using Jacobi iterations, gradient subtraction). So there's that. Multiple shaders are required for each step, and I need multiple textures to represent the state of the fluid, such as velocity, density and pressure. Each of those states needs two textures to read from and write to, so I can swap them around after each step has been written to the correct texture. I also need a temporary divergence texture and a texture to display the final result. My first question is: what format should my textures be in? I've heard that you should use RGBA floating-point textures for each one, but I don't know if that's really the case. Now about the implementation. As I said before, I think all my shaders are written properly, as I've compared them to numerous sources, including the GPU Gems code, and they look similar. My shaders are based on this three.js implementation and this one. I want my simulation set up so that the user can click on the screen to add 'ink' and drag the mouse around to add force and velocity to it. While the second implementation is a working Unity example, it isn't set up the way I want, and the three.js implementation does exactly that. So I combined the Splat shader from the three.js implementation into my own project to achieve the 'add force' behaviour I want, but I'm not sure it's working properly. I've attached my shaders below, along with a Unity package file for those who have Unity. If my shaders are all written properly, then the problem lies in how to actually display the result. Currently I'm just feeding a density texture into my Render shader to display it, but it's giving me a blank result. I don't know if there's anything I need to do first before displaying the fluid, so if anyone knows, please let me know. Shaders.zip FluidSimulation.unitypackage
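Since the open question above is whether the shaders or the render-to-texture plumbing are at fault, a CPU reference for one of the steps can help: run the same inputs through both and compare pixel values. A sketch of a single Jacobi iteration for the pressure solve in step 3 (the grid layout and parameter names are my assumptions, following the alpha/rBeta convention used in GPU Gems Chapter 38):

```cpp
#include <vector>

// One Jacobi iteration on an N x N grid stored row-major (index i + N*j).
// For the pressure Poisson solve: alpha = -dx*dx, rBeta = 0.25,
// x = pressure from the previous iteration, b = divergence.
void jacobiStep(int N, const std::vector<float>& x, const std::vector<float>& b,
                std::vector<float>& xOut, float alpha, float rBeta) {
    for (int j = 1; j < N - 1; ++j)
        for (int i = 1; i < N - 1; ++i) {
            int c = i + N * j;
            xOut[c] = (x[c - 1] + x[c + 1] + x[c - N] + x[c + N] + alpha * b[c]) * rBeta;
        }
}
```

If the shader version diverges from this on a tiny grid of hand-picked values, the shader is wrong; if they agree but the screen stays blank, the render-texture setup (formats, swapping, or the final display pass) is the likelier culprit.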
  23. Hi again. Thanks to the good advice from the guys on this thread https://www.gamedev.net/forums/topic/692174-opengl-2d-gui-system-question/?tab=comments#comment-5356606 (thanks guys!) I have my basic 2D GUI working now: buttons, windows/dialogs and sliders all work. Now I want to port my scrollbox and listview controls to OpenGL, but unfortunately I don't have an idea of how 'clipping' a rendered object works. On a 2D canvas, such as 2D SDL or the JavaScript canvas, what I did for the listview/listbox control was draw all items to an offscreen canvas and blit-copy only the visible area to the screen, based on the amount scrolled. Since I don't think OpenGL has a BitBlt-style partial copy of a texture/canvas, how should I approach implementing a control such as a listview? Background: I want a 2D GUI control containing items (like a listview) that are hidden/clipped when scrolled, based on the scroll value. Any tips on how to do this in OpenGL?
  24. Hi, I'm interested in practicing and creating a texture pre-loading system, so that the game I'm developing in VB6 using DirectX 8 performs better when loading a map's graphics. Can you point me to material on the subject? What do I need to know? Thank you!
  25. Hi, can anyone give me a hint on how to write a shader that animates a transparent sweep, i.e. a line sweeping through about 180 degrees across the screen?