
OpenGL
SNES Like 5 Layer 320x240 Graphics Library

19 posts in this topic

https://sites.google.com/site/simpleopenglsneslike/

 

 

NO "NVIDIA or AMD or Intel OpenGL or DirectX License to unlock it's full potential" needed but it's 12FPS if you don't have license, if you have license it takes NO TIME to render at all.
 
GeForce 7600GT or Higher needed.(Consult source code for needed extensions)
 
If you are enthusiast and don't have money, use ths library, connect it to TV and it's awesome.
12FPS for GeForce 7600GT and it's slow only for graphics so it's kind of fixed FPS and if you use CPU time for hard working it won't slow down at all like 12FPS fixed forever unless you do some damned thing...
 
License: LGPL, CC0

 

 

 

ADDED:

It is basically a 2D library with a graphical environment similar to THE CONSOLE, the Super Nintendo Entertainment System. We have these awesome and fast CPUs, so why would we want to use the crippled public OpenGL library for this? Because not only is doing it on the GPU COOL, it also gives you totally free CPU time per frame. The GL draw call is done in another process using nvglvt.dll or similar, so it's basically the same thing as just doing Sleep(1000/12) in another thread.

 

So, if you try to implement this kind of environment, 32-bit 320x240 with 5 layers, using GPU blits only, it gives you about 3 FPS no matter how great your graphics card is.

The more objects you have, the slower it gets. That means you can't make even StarCraft 1 if you only use what is given to you.

 

 

 

The point is, the CPU is FREE and the 12 FPS is spent on graphics alone, so do some crazy stuff like voxel 3D filling on the CPU and path tracing in the vertex shader and so on. It is about using only the uncrippled DP4 vertex shader processing line.

 

 

ADDED2:

 

Think about the Raspberry Pi.

Edited by WalkingTimeBomb

What exactly is this? I see no documentation at all.

Friend, it means nothing to you if you don't understand what I have written.

Try to run it on your old, slow computer. If it gives you 2000 FPS it means nothing to you, but if it gives you 300 FPS you have a problem.

Also, the code is very simple; just follow along through Lesson45.cpp and vs2.txt.

Edited by WalkingTimeBomb

OK friends, either you guys know about the licensing problems or you don't know what you are talking about.

Those of us OpenGL programmers without a proper job have always tried to make some 3D or 2D games on existing hardware.

 

 

I've had access to properly licensed NVIDIA and AMD cards, and even shitty code that loads a 1-megabyte 3D mesh ran at 2000 FPS, and it did it all on the CPU.

I asked: why is it so fast here and so slow at home?

My previous boss said: yeah, you need to PAY NVIDIA and AMD to get full speed out of the OpenGL, DirectX, or assembly SDKs.

What I'm saying here is, there is no need to pay if you are just interested in playing around... I mean, did you guys ever even make a finished game with existing hardware without licensing? I did: https://code.google.com/p/digdigrpg/

It runs fast, but it should run at 500 FPS at least if I had licensed it correctly.

 

 

Play with my source code and you guys will get it. Change int wdt=320 to 160 and int hgt=240 to 120 and you will see the FPS go up to 300.

It runs at 12 FPS not because I load images every frame, but because the NVIDIA driver cripples and blocks it.

The comments above got it right that it uses the vertex shader. Why did I do it that way? The vertex shader is the only thing that is not crippled on unlicensed hardware.

The comments above got it wrong about the loading: loading the PNG every frame does NOT slow it down, and updating the VBO every frame does not slow it down either.

Well, you guys just wanted me to explain it all, but really, if someone doesn't get it, doesn't like it, and doesn't understand WHAT I WROTE IN THE FIRST PLACE, then whether you like it or not, it's NOT FOR YOU.

Edited by WalkingTimeBomb

OK, here is the WHOLE explanation, since you guys want to know what it is all about.

The MFU and other functionality in the vertex shader are all slow and crippled even if you have the correct license.

 

 

Look at my vs2.txt code:

in  vec4 in_Pixel;
in  vec4 in_Layer1;
in  vec4 in_Layer2;
in  vec4 in_Layer3;
in  vec4 in_Layer4;
in  vec4 in_Layer5;
in  vec4 in_Layer6;
in  vec4 in_Layer7;
varying vec3 color;
varying float test;

void main()
{
	gl_Position = gl_ModelViewProjectionMatrix * in_Pixel; // Vertex Array Object's first Vertex Buffer Object; it simulates a pixel. Remember, this is a vertex shader. Don't transform it.

	vec4 curLayer = in_Layer2; // VAO's Second VBO. Color Buffer. Thus, First Layer of Bitmap.
	vec4 colR = vec4(in_Layer1.r, 0.0, 0.0, 0.0); // Read RGB pixel from current Bitmap.
	vec4 colG = vec4(in_Layer1.g, 0.0, 0.0, 0.0);
	vec4 colB = vec4(in_Layer1.b, 0.0, 0.0, 0.0);

	vec4 col2R = vec4(curLayer.r, 0.0, 0.0, 0.0);
	vec4 col2G = vec4(curLayer.g, 0.0, 0.0, 0.0);
	vec4 col2B = vec4(curLayer.b, 0.0, 0.0, 0.0);
	vec4 col2W = vec4(curLayer.a, 0.0, 0.0, 0.0);
	vec4 oneMinusLayer2Alpha2 = vec4(1.0, curLayer.a, 0.0, 0.0); // For blending

	vec4 oneMinusLayer2Alpha1 = vec4(1.0, -1.0, 0.0, 0.0); // For blending
	float colAf = dot(oneMinusLayer2Alpha1, oneMinusLayer2Alpha2); 
        // dot(oneMinusLayer2Alpha1, oneMinusLayer2Alpha2)
        // = 1.0*1.0 + (-1.0)*curLayer.a + 0.0*0.0 + 0.0*0.0 = 1.0 - curLayer.a
	float colRf = dot(colR, vec4(colAf)); //RGB * Alpha
	float colGf = dot(colG, vec4(colAf));
	float colBf = dot(colB, vec4(colAf));
	float col2Rf = dot(col2R, col2W); // RGB * current layer alpha (col2W = curLayer.a), used in the final blend below
	float col2Gf = dot(col2G, col2W);
	float col2Bf = dot(col2B, col2W);
	float colFinalRf = dot(vec4(colRf, 1.0, 0.0, 0.0), vec4(1.0, col2Rf, 0.0, 0.0));
	float colFinalGf = dot(vec4(colGf, 1.0, 0.0, 0.0), vec4(1.0, col2Gf, 0.0, 0.0));
	float colFinalBf = dot(vec4(colBf, 1.0, 0.0, 0.0), vec4(1.0, col2Bf, 0.0, 0.0));

	color = vec3(colFinalRf,colFinalGf,colFinalBf);
}

It is all about using the SO ABUNDANT DP4 unit line in the GPU processor, fed from an average of about 1 GB of prefetchable GPU memory.
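For anyone trying to follow the dot() gymnastics above: as far as I can tell, the chain computes, per channel, layer1 * (1 - layer2.a) + layer2 * layer2.a, i.e. ordinary "over" alpha compositing of layer 2 on top of layer 1, spelled out as DP4s. A minimal equivalent in plain GLSL (same in_Layer1/in_Layer2 inputs as the shader above) would be:

	// Hypothetical one-line equivalent of the per-channel DP4 chain above:
	// standard alpha blending of in_Layer2 over in_Layer1.
	color = mix(in_Layer1.rgb, in_Layer2.rgb, in_Layer2.a);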

Edited by WalkingTimeBomb

I looked at your vertex shader code as well, and I don't think you understand what it is that you're trying to achieve.

 

For one thing, the DP4 (vector) style design ended with the GeForce 7x00 series of cards more than 8 years ago. The G80 and onwards, released starting in 2006, have all been scalar designs, which means that all you get from trying to exploit vector instructions is (potentially) a small pipeline improvement.

 

Also, if all you're trying to do is draw things to a screen, then you're still much worse off doing it on the CPU, blitting the result to the screen using the GPU, and THEN trying to do the blending in a very poor way using a shader.


First of all, let's clear up a misconception: you don't pay nVidia, AMD, Intel, or anyone else to use the OpenGL or DirectX APIs. They are free, and if someone has told you that you have to pay to get better performance, then you have been misinformed (lied to).

 

You can write code, compile it, run it, give away those programs, etc., all without paying anyone, and you will enjoy the same performance as everyone else, including big companies.

 

The reason your program runs slowly is because of the way that it is written, not because of nVidia / AMD, just because of your code.

 

fastCall22 has given you some of the reasons why it is slow and they should be easy enough to understand:

  • Loading the image every frame - ask yourself this question: which is faster, loading something, copying it, then deleting it and repeating that every frame, or loading it once, using it until the program ends, and only then releasing it? The answer is obviously that it is always faster to do less work, so loading/deleting it only once is always going to be faster.
  • Software blitting - this is very slow and unnecessary. You should look into drawing textured quads (2 triangles): load your images (once only), turn them into textures, and apply those textures to your quads, then render those using OpenGL instead of blitting to the layers in software (see the sketch after this list).
  • Updating your buffers every frame - your data isn't changing, so why are you updating them? Create them, fill them with data, and then use them every frame without updating them any more. This goes back to the first problem of doing more work than you need/want to do each frame. This one comes with a second problem, though: you've told OpenGL that you won't update them very often by using the STATIC_DRAW flag... but then you are updating them all of the time. That means that OpenGL has to do a lot of extra work, which means that _you_ are slowing it down.
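Here is a minimal sketch of that "set up once, draw every frame" pattern in plain C++/OpenGL, in the spirit of the NeHe base code. It assumes a GL context already exists, that the buffer-object entry points are available (e.g. via GLEW or the extension loading Lesson45.cpp already does), and that pixels/width/height come from whatever image loader you are using, loaded once at startup:

#include <GL/glew.h>

GLuint tex = 0, vbo = 0;

// One-time setup: upload the image and the quad geometry once.
void setupOnce(const unsigned char* pixels, int width, int height)
{
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);               // uploaded once, not per frame

    const float quad[] = {  // x, y, u, v - two triangles covering one 320x240 layer
        0,0,0,0,  320,0,1,0,  320,240,1,1,
        0,0,0,0,  320,240,1,1,  0,240,0,1 };
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(quad), quad, GL_STATIC_DRAW); // never re-uploaded
}

// Per frame: no image loading, no glBufferData - just bind and draw.
void drawLayer()
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, tex);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glVertexPointer(2, GL_FLOAT, 4 * sizeof(float), (void*)0);
    glTexCoordPointer(2, GL_FLOAT, 4 * sizeof(float), (void*)(2 * sizeof(float)));
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glDrawArrays(GL_TRIANGLES, 0, 6);
}

Drawing all five layers is then just five such textured quads per frame with blending enabled, with zero per-frame uploads.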

Everyone has to start learning somewhere, and I started with the NeHe tutorials a long time ago as well, but part of learning involves listening to what other people are trying to tell you.

 

Andy

 

 

Buddy, just call NVIDIA customer service and ASK FOR IT before you tell lies to people.

 

 

The scalar design was ADDED TO the DP4 unit, not a REPLACEMENT for it.

I don't know which industry you are from, but I am from the industry; I've been here since 1997.

You say software blitting is slow, so then why is PyGame's hardware-assisted blitting slow on a GeForce Titan? Everybody knows about PyGame, so I talk about PyGame, but what I'm actually asking is: can you do 1440x900, 5-layer fullscreen blitting every frame with your current technology?

Edited by WalkingTimeBomb

To clarify, finally: it is 320x240, people. The window of Lesson45.exe is just resized.

At 320x240, software vs. hardware blitting doesn't matter.

The images loaded every frame are 8x16 and 8x8 pixels.

I told you to read the code, and you didn't even read or understand my code, yet you talk like a professional. Don't ruin it; it is going to be a big technology in the future for the Raspberry Pi and other cheap computers.


(most of this is just speculation)

 

You can pay NVidia/AMD to get premium support etc, which might mean some people will help you create the best code for their graphics cards. I can imagine someone saying "you pretty much have to pay for a premium membership to get optimal performance", meaning you need their expertise to be able to utilize their cards to their fullest potential. There's no real technical advantage though, more access to knowledge.

 

You might also get access to some better tools, though I think most of them are free? It may also be that you can use those "NVidia - the way it's meant to be played" logos only if you pay them or enter into some form of partnership.

If your game is very successful, then NVidia and/or AMD may take an extra look at your shaders etc., whether you pay them or not, to make sure the game runs as fast as possible on their cards. This is to make their own cards more competitive, though.


@Krypton - my apologies, I initially thought there was actually hope in here that it was a simple misunderstanding and maybe he could learn something :(

@WalkingTimeBomb - I'm a bit concerned that you're telling people that there are some secret, paid-for drivers/code that make "something" (I cannot figure out what) run much faster once you've paid for it. There isn't. You buy the hardware and you have full access to do with it what you like using the various APIs available. Both AMD and nVidia give away a lot of code samples, for free, that prove you can do anything with your own code that they do with theirs.

Also, don't rely on stating your own years of experience to try and get leverage in discussions on the forums; a lot of us are full-time professional game developers.

I still don't know what it is that you think is so amazing about that code, but it's worth noting that the Raspberry Pi also has a dedicated GPU which people have access to, and for which there are open source drivers.

@All

I won't post again in here; sorry for feeding the troll :/


You can pay NVidia/AMD to get premium support etc, which might mean some people will help you create the best code for their graphics cards. I can imagine someone saying "you pretty much have to pay for a premium membership to get optimal performance", meaning you need their expertise to be able to utilize their cards to their fullest potential. There's no real technical advantage though, more access to knowledge.

 

nVidia's "TWIMTBP" program just gets you visits from one of their engineers, help with optimising things and if you've found a bug with their drivers then it might get prioritised (I think) but a lot of the optimisation is stuff you can do yourself and all of the tools I've ever encountered have been available for free. Although sometimes they have access to slightly newer version that aren't yet public - nothing revolutionary.


Agreed.

 

P.S. You're not supposed to call BufferData every frame when using STATIC_DRAW.
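For reference, a small sketch of the distinction Promit is pointing at, assuming the GL 1.5 buffer entry points are already loaded as in the NeHe base code (staticVbo, dynamicVbo, vertices, frameData, and frameDataSize are placeholder names):

// Geometry that never changes: upload once with GL_STATIC_DRAW, then only bind it.
glBindBuffer(GL_ARRAY_BUFFER, staticVbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);   // once, at load time

// Data that genuinely changes every frame: allocate once with GL_STREAM_DRAW or
// GL_DYNAMIC_DRAW, then refresh the contents with glBufferSubData each frame.
glBindBuffer(GL_ARRAY_BUFFER, dynamicVbo);
glBufferData(GL_ARRAY_BUFFER, frameDataSize, NULL, GL_STREAM_DRAW);          // once, at load time
glBufferSubData(GL_ARRAY_BUFFER, 0, frameDataSize, frameData);               // per frame

The usage flag is a hint that tells the driver where to place the buffer, so declaring STATIC_DRAW and then rewriting the buffer every frame is exactly the kind of extra work described earlier in the thread.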

Edited by Promit
This topic is now closed to further replies.
