dpadam450

Member Since 18 Nov 2005
Offline Last Active Yesterday, 11:08 AM

Topics I've Started

HTML to C++

01 May 2015 - 03:12 PM

I don't know a whole lot about web servers, so I'm wondering if this is possible:

This would be the normal setup. I don't know the finer details, but the web server can open real sockets to a local C++ executable so that it can talk to real-time applications running on another machine. So in this example I have three machines: the phone, the PC running the web server, and the PC running the real-time application.

[ PHONE ]                 [ PC ]                                        [ Real-Time Application Machine 1,2,3...n ]

[ html5 ] --html/js--> [ Web Server ]                                                        |
                             |                                                               |
                      [ c++ executable ] <------ real-time socket connection ------> [ c++ executable ]

Could I possibly do something like this?

[ PHONE ]          [ Real-Time Application Machine 1,2,3...n ]

[ html5 ] <------html/js------> [ c++ executable ]

Assume I have all the HTML/webpage files on the phone. When I click a button, it sends an HTTP request to an IP address (just like it would to a normal web server), only the "server" is actually my C++ application. I just attach data to the GET/POST/whatever HTTP command, and my C++ program can decide what to do with the data and also send data back.

I'm not sure if there's a better way, whether there's a library for this, or what the pitfalls of something like this are. I basically want a phone or PC to be able to open a webpage and control a real-time C++ application that runs on another machine. The real-time PC runs Linux, so I don't know how well it could host a web server if I really needed one, but I still think it would be ideal not to need a web server at all.
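Roughly the kind of thing I have in mind is sketched below: the real-time C++ application itself listens on a TCP port and answers raw HTTP GET requests over POSIX sockets. This is only a rough sketch under my assumptions; the port number, the request handling, and the lack of threading and error checking are placeholders, not a real implementation.

// Sketch: the real-time C++ app answers HTTP requests directly, no separate web server.
// Assumes Linux/POSIX sockets; the port and the "command" handling are placeholders,
// and all error checking is omitted for brevity.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <string>

int main()
{
    int listenFd = socket(AF_INET, SOCK_STREAM, 0);

    sockaddr_in addr{};
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port        = htons(8080);                 // port chosen arbitrarily
    bind(listenFd, (sockaddr*)&addr, sizeof(addr));
    listen(listenFd, 4);

    while (true)
    {
        int clientFd = accept(listenFd, nullptr, nullptr);

        char request[4096] = {};
        read(clientFd, request, sizeof(request) - 1);
        // The phone's JS might send e.g. "GET /command?speed=5 HTTP/1.1".
        // Here the app would parse the request line and act on the data.

        std::string body = "ok";
        std::string response =
            "HTTP/1.1 200 OK\r\n"
            "Content-Type: text/plain\r\n"
            "Content-Length: " + std::to_string(body.size()) + "\r\n"
            "\r\n" + body;
        write(clientFd, response.c_str(), response.size());
        close(clientFd);
    }
}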


Dynamic Tessellation Control

04 March 2015 - 02:19 PM

I'm about to turn tessellation on and play around with it for the first time. The one issue I have is that there are two types of bricks:

Modern:
http://basictextures.com/wp-content/maxfreesize/stone-brick/stone-brick-wall-mossy-00226.jpg

Stone:
http://azbricksource.com/wp-content/gallery/manufactured-stone/beavercreek_002.jpg

The difference is that the modern, flat ones only need to be tessellated around the rectangle edges. The inside of the brick doesn't actually need to be tessellated, because it is flat and tessellating it would add no surface detail.

 

Is this something that should be done in a geometry shader, then, where I can send a texture that acts as a heat map for where and how much tessellation to perform? Is there already support, or planned support, for this?
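For context, the host-side setup I'm picturing is roughly the sketch below: draw with patches and bind a grayscale "tessellation density" texture that the shader stages could sample to pick per-patch tessellation levels. The texture unit, the uniform name, and the mesh variables are placeholders I made up for the example, not anything from a real implementation.

// Host-side sketch only: expose a density/heat-map texture to the tessellation
// stages. Names (program, densityTex, mesh, "uTessDensity") are placeholders.
glUseProgram(program);

glPatchParameteri(GL_PATCH_VERTICES, 3);          // one patch per triangle

glActiveTexture(GL_TEXTURE1);                     // texture unit chosen arbitrarily
glBindTexture(GL_TEXTURE_2D, densityTex);         // grayscale heat map of desired detail
glUniform1i(glGetUniformLocation(program, "uTessDensity"), 1);

// A tessellation control shader could then sample uTessDensity at the patch's
// UVs and write larger or smaller gl_TessLevelOuter / gl_TessLevelInner values.
glBindVertexArray(mesh.vao);
glDrawElements(GL_PATCHES, mesh.indexCount, GL_UNSIGNED_INT, nullptr);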

 


Toggling Between Video Cards

08 February 2015 - 12:19 AM

Pretty sure the answer to this was no, and it probably still is. Some motherboards used to have BIOS settings for disabling the onboard video card, so I assume a BIOS switch or something similar might exist.

 

I would like to buy a new video card for development reasons only; the one I have now runs all games on high, so I don't otherwise need to drop $250 on a newer card.

 

I only want to buy it, though, if I can toggle between my old and new card, so that I can get two benchmarks of my game's performance: one on the old card and one on the new card. Having to restart the computer or change a BIOS setting each time would be fine. I just don't want to have to plug one card in, update drivers, run my game and benchmark, then physically swap in the old graphics card, install its drivers, and benchmark again.

 

Is it possible to have both plugged in and choose between them?


glTexSubImage3D, invalid enum

06 February 2015 - 02:44 PM

Short and sweet: I get an invalid enum on the glTexSubImage3D() call below. This works on my ATI card but not on NVIDIA. I've re-read the docs several times, and my enums look good to me.

GLenum pixelFormat = GL_RGB;

GLuint GL_Index;
glGenTextures(1, &GL_Index);
glBindTexture(GL_TEXTURE_2D_ARRAY, GL_Index);

// Allocate the storage for the whole array.
glTexStorage3D(GL_TEXTURE_2D_ARRAY, numMipMapLevels, pixelFormat, textureSize, textureSize, textures.size());

// Upload each texture into its own layer of the array.
for (int i = 0; i < textures.size(); i++)
{
    Texture* texture = textures[i];

    const int mipLevelUploading = 0;
    const int depth = 1;
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, mipLevelUploading, 0, 0, i, textureSize, textureSize, depth, pixelFormat, GL_UNSIGNED_BYTE, texture->pixels);
    GLenum error = glGetError();
}


Bus Bandwidth

04 February 2015 - 02:23 PM

Back when I first started OpenGL 8 years ago, I had the naive immediate-mode loop:

 

glBegin(GL_TRIANGLES);
for (int i = 0; i < vertices; i++)
{
    // one call per attribute, per vertex
    glNormal3f(); glTexCoord2f(); glVertex3f();
}
glEnd();

 

I used to do this with models of 10,000 verts or so; I remember running gDEBugger and seeing 100,000+ OpenGL function calls.

 

Now I was doing a test on a scene with 4,000 objects (the same six objects duplicated 800 times). I internally track redundant state changes, so I bind each object only once and then draw them all, and in my test case there isn't much pixel overdraw. Here I have only 5,000 OpenGL function calls.
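In other words, the draw loop is now roughly the sketch below: bind each unique mesh once, then issue one draw call per duplicated object. The names (batches, vao, indexCount) are placeholders for illustration, not my actual code.

// Sketch of the batched draw loop: one bind per unique mesh,
// one draw call per duplicated object. Names are placeholders.
for (const MeshBatch& batch : batches)          // the handful of unique meshes
{
    glBindVertexArray(batch.vao);               // bound once per mesh

    for (const Object& obj : batch.objects)     // the duplicated instances
    {
        // per-object transform is set elsewhere; one draw call per object
        glDrawElements(GL_TRIANGLES, batch.indexCount, GL_UNSIGNED_INT, nullptr);
    }
}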

 

Anyway, on that old video card I still used to get around 60 FPS from what I remember. Now, I may have had only 50 objects in my scene back then versus 4,000 now, but my total OpenGL calls per frame today are 5,000; back then it was 100,000+.

Clearly I have a much better motherboard, much faster bus bandwidth, and far fewer OpenGL function calls (though I do have a lot more draw calls). So if it isn't a bandwidth issue, what is slowing the GPU down so much? What is going on internally? It clearly doesn't seem to be a bandwidth problem. What happens when a draw call hits that destroys everything?
 

