


#5192142 Software Fallback?

Posted by TheChubu on 10 November 2014 - 03:16 PM

AFAIK, the HD 3xxx series should go up to OpenGL 3.1 on Windows, and 3.3 on Linux with Mesa. Not sure what the situation is in the OS X world; didn't they support up to 3.2 with those cards?

#5191972 OpenGL - layout(shared) ?

Posted by TheChubu on 09 November 2014 - 07:12 PM

Are you querying the uniform block index for every program your UBO is going to be used with? What happens if you call glUniformBlockBinding after glBindBufferBase? ie, first attach the UBO to a binding slot, then bind it to the program's index for that UBO.


To be honest, I'm fuzzy on the specifics since I moved early on to fixed binding slots in the shader with ARB_shading_language_420pack, which is widely supported.
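For reference, with ARB_shading_language_420pack (or GLSL 4.20+) the binding slot can be fixed in the shader itself, so no glGetUniformBlockIndex/glUniformBlockBinding calls are needed. A minimal sketch (block and member names are made up):

```glsl
#version 330
#extension GL_ARB_shading_language_420pack : require

// Bind this block to UBO binding point 0 directly from the shader;
// the app side then only needs glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo).
layout(std140, binding = 0) uniform PerFrame
{
    mat4 viewProjection;
};
```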

#5191858 What to do if a coding project starts to feel too complex to handle?

Posted by TheChubu on 08 November 2014 - 05:25 PM

Keep coding.

#5191822 Multiple VAOs and VBOs

Posted by TheChubu on 08 November 2014 - 01:16 PM

However, no one seems to be interested in doing anything about it.

Post it to Timothy Lottes/Graham Sellers/Christophe Riccio's Twitter. See what they say...


EDIT: To be clear, I doubt VAOs perform that badly. While you have Valve and a couple of scattered devs saying otherwise, on the other end you have every single OpenGL driver developer out there saying they perform better. And to go against those kinds of people, you'd need bigger names than just Valve saying it: someone known for being a graphics powerhouse (Crytek, DICE), so they're bound to know what the fuck they're doing, or someone who has been working with OpenGL for a long-ass time (say, Carmack).


So far I haven't seen such complaints from other developers that have been porting new games to OpenGL lately (Firaxis, Aspyr, 4A Games, etc).

#5191616 Help, falling at the first hurdle as usual

Posted by TheChubu on 06 November 2014 - 10:55 PM

I have a suggestion for your workflow: copy the default GCC compiler profile and edit the copy instead. Otherwise, every project you start with the GCC profile will have those OpenGL libraries linked.


BTW, reading GLFW's site:


The static version of the GLFW library is named glfw3. When using this version, it is also necessary to link with some libraries that GLFW uses.

When linking a program under Windows that uses the static version of GLFW, you must link with opengl32. On some versions of MinGW, you must also explicitly link with gdi32, while other versions of MinGW include it in the set of default libraries along with other dependencies like user32 and kernel32. If you are using GLU, you must also link with glu32.


The link library for the GLFW DLL is named glfw3dll. When compiling a program that uses the DLL version of GLFW, you need to define the GLFW_DLL macro before any inclusion of the GLFW header. This can be done either with a compiler switch or by defining it in your source code.

A program using the GLFW DLL does not need to link against any of its dependencies, but you still have to link against opengl32 if your program uses OpenGL and glu32 if it uses GLU.

So, you're using the dynamically linked version, right? Have you defined the GLFW_DLL macro? Otherwise, if you want the static version, it seems you're linking against the wrong file...
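To make that concrete, here's roughly what the two link lines look like with MinGW. This is a sketch, not your exact setup; library names and whether gdi32 is needed depend on your GLFW build and MinGW version:

```shell
# Static GLFW: link glfw3 plus its Windows dependencies.
gcc main.c -o app.exe -lglfw3 -lopengl32 -lgdi32

# DLL GLFW: define GLFW_DLL before including the header,
# and link the import library instead.
gcc main.c -o app.exe -DGLFW_DLL -lglfw3dll -lopengl32
```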

#5191257 Multiple VAOs and VBOs

Posted by TheChubu on 04 November 2014 - 09:23 PM

Now, there is a third option: AZDO. AZDO is about using one huge single VBO (or very few of them) and manipulating it manually with unsynchronized mapping and fences (very Mantle-like behavior). Then you place all the meshes in the same VBO at different regions (offsets) of the memory.
AFAIK, and as the AZDO talk explained, doing this the fastest way needs GL_MAP_PERSISTENT_BIT and GL_MAP_COHERENT_BIT, which come from ARB_buffer_storage (core in OpenGL 4.4).


Now, I understand that's very important for dynamic meshes, but can't you get pretty close there with OpenGL 3 hardware without buffer mapping?


ie, make pools of VBOs, and let every mesh have its own offset into the VBO (and possibly, which VBO too since you might have several). You'd update these VBOs only once for most meshes, ie, when a mesh has to be rendered (or better, cached for future rendering). Since we're limited to OpenGL 3 features, we'd be using glBufferSubData for these things.


You still benefit from the fewer state changes, but you'd be foregoing the buffer mapping.
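The pool idea can be sketched as a tiny bump allocator that hands out byte offsets into one shared VBO; all the names here are mine, not from any real engine, and the actual GL upload is only indicated in a comment:

```java
// Minimal sketch of a VBO pool: each mesh gets a byte offset into one big
// buffer. The actual upload at that offset would be a
// glBufferSubData(GL_ARRAY_BUFFER, offset, sizeBytes, data) call, omitted here.
final class VboPool {
    private final int capacityBytes;
    private int next = 0; // bump pointer; no freeing in this sketch

    VboPool(int capacityBytes) {
        this.capacityBytes = capacityBytes;
    }

    // Returns the byte offset for 'sizeBytes' of mesh data, rounded up to
    // 'alignment' bytes, or -1 if the pool is full.
    int allocate(int sizeBytes, int alignment) {
        int offset = (next + alignment - 1) / alignment * alignment;
        if (offset + sizeBytes > capacityBytes) {
            return -1;
        }
        next = offset + sizeBytes;
        return offset;
    }
}
```

Each mesh would then be drawn with its region's offset (e.g. via glDrawElementsBaseVertex), so the VBO binding never has to change between meshes.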

#5190803 The intersection between cubic Béziers

Posted by TheChubu on 02 November 2014 - 05:43 PM

Holy shit. I'd be totally fine if the compiler just noped out of that constructor call. It'd deserve it.

#5190235 Undesired Behavior When Attempting To Find Which Directory Jar Is Located In

Posted by TheChubu on 30 October 2014 - 03:52 PM

btw, you're creating a fileList

 fileList = new ArrayList<File>( );

Then, immediately after, you create a new one:

 fileList = Arrays.asList(rawFiles.listFiles());

Just do the second line; there's no need to create the first fileList. Note that Arrays.asList returns a fixed-size list, though, so if you need to add or remove elements later, wrap it: new ArrayList<>(Arrays.asList(...)).


If you have access to Java 8, you can also do something like:

List<File> outGoing = fileList.stream().filter(f -> f.isDirectory() && f.list().length > 0).collect(Collectors.toList());

#5190052 Creating a Grand Theft Auto type of Game in Delphi.

Posted by TheChubu on 29 October 2014 - 05:45 PM

How long would it take to make such a game if you have a few Developers (such as maybe 5) working on it?

Let's see: a GTA-sized game, with 5 people... 30 years? Maybe more? I'm assuming they have top-notch skills from the get-go. If they have to learn along the way, add another 8 years.


Grand Theft Auto doesn't fit in a subforum called "For Beginners". GTA is anything but for beginners.


Reduce the scope to something a bit more realistic first.

#5189602 OpenGL/GLSL version handling

Posted by TheChubu on 27 October 2014 - 10:05 PM

Move to SDL2? Or to anything else that allows you to create whatever context you want...

#5189537 Mac or PC - Really, this is a programming question.

Posted by TheChubu on 27 October 2014 - 05:07 PM

Hardware: PC or MAC

Linux, Debian distro.


For an IDE I use different Eclipse installs in its various incarnations (Database Tools + Dali plugins when I have to do that kind of database/XML/ORM stuff, JDT-only for my personal projects, e(fx)clipse for JavaFX projects, PyDev for Python, etc).


I also use jEdit, a text editor along the lines of Notepad++ or Atom, with GLSL highlighting for when I have to edit shaders.


I'm not that big into which editor/IDE provides which keyboard shortcut to make XYZ task faster, so I don't customize any of them much beyond setting the dark theme, some editor colors and an Allman-style formatter.

#5189301 OpenGL/GLSL version handling

Posted by TheChubu on 26 October 2014 - 06:40 PM

What's the difference between core and version support?
You should read up on OpenGL profiles then. It seems your card supports core 3.3, and GLSL 3.30.


Mesa drivers don't implement the ARB_compatibility extension; thus you only get 'core' profiles, ie, no deprecated functions.


I'm not sure what that second "version" is; maybe it's just Mesa's software OpenGL implementation, or the version of a dummy context created to query OpenGL data, no idea. But you should be looking at the first one, the 'core' version.


Just try to create an OpenGL 3.3 core/forward compatible context and use #version 330 in your GLSL files. If you can create it and run it, then your rendering issues must be somewhere else (hello different driver implementations!).


Are you using deprecated functions? Did you initially code on an nVidia card? nVidia drivers tend to be more lenient, whereas AMD and possibly Intel drivers strive harder to follow the OpenGL spec to the letter (the first time I ran my shaders on an AMD card, it practically spat in my face :D ).


Then again, sometimes it's nothing to do with the spec; the driver just bugs out under specific circumstances.

#5189292 OpenGL/GLSL version handling

Posted by TheChubu on 26 October 2014 - 05:24 PM

I recently got a laptop that only supports glsl 1.3 and opengl 3.0 (it's a chromebook 14, with chromeOS erased and Ubuntu 14.10 installed over it).
Which one exactly? 


If it's the one with the 2955U Celeron, it might have an Intel HD 4xxx-class GPU or similar, which should run OpenGL 3.3 fine. I'm not sure what kind of Mesa drivers Ubuntu ships tho.


If it's one with a Tegra chip, then it might go up to OpenGL 4.4 if it's recent (not sure what the state of the Linux drivers is for Tegra either).


OpenGL 3.0 only seems kinda weird; most OpenGL 3-class hardware goes up to 3.1 with updated drivers, 3.2 if it supports geometry shaders, or 3.3 if it supports geometry shaders and the vendor feels like supporting it.

#5189115 C++/Java - Libraries/Methods for rendering pixels without hardware acceleration

Posted by TheChubu on 25 October 2014 - 02:55 PM

I'm trying to create an engine that will allow me to update the pixels quickly and efficiently.

That's the reason GPU APIs exist.


Do any of you people know efficient methods (or libraries) for making a "pixel canvas" like this in Java or C++?

Yup: don't update pixel by pixel. It won't work; you'll become extremely CPU-bound at any reasonable resolution.


If you still want to stay away from regular GPU APIs, you could use Swing/Java2D, and draw BufferedImages. BufferedImages allow you to edit pixels if you want.


Again, updating pixel by pixel will be extremely slow, so I dunno how far you can get with that. Java2D provides methods to write images quickly, which will be much more efficient than grabbing the raw pixel array and updating stuff on your own.
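As a rough illustration of the Swing/Java2D route (class and method names here are mine; in a real app you'd blit the image to a component each frame):

```java
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferInt;
import java.util.Arrays;

// Sketch: a CPU-side "pixel canvas" backed by a BufferedImage.
// Bulk writes into the backing int[] beat one setRGB call per pixel.
final class PixelCanvas {
    private final BufferedImage image;

    PixelCanvas(int width, int height) {
        image = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
    }

    // Fill the whole canvas with one 0xRRGGBB color in a single pass.
    void clear(int rgb) {
        int[] pixels = ((DataBufferInt) image.getRaster().getDataBuffer()).getData();
        Arrays.fill(pixels, rgb);
    }

    // Read back a pixel as 0xRRGGBB (getRGB returns ARGB, so mask the alpha).
    int pixelAt(int x, int y) {
        return image.getRGB(x, y) & 0xFFFFFF;
    }

    BufferedImage image() { return image; } // hand this to Graphics.drawImage
}
```

In a Swing app you'd override paintComponent and call g.drawImage(canvas.image(), 0, 0, null); the point is that one bulk raster write per frame is far cheaper than per-pixel calls.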


EDIT: Then again, I'd just grab LibGDX if I were you.

#5188646 Does Valve have a good working methodology?

Posted by TheChubu on 22 October 2014 - 06:42 PM

I think you should read this: http://www.gamespot.com/articles/ex-valve-employee-blasts-the-company-for-feeling-like-high-school/1100-6411126/


Especially the part about the "hidden layer of management".


EDIT: I'm not saying that Valve's flat hierarchy is a lie or anything like that; I'm just pointing to a different point of view on the whole thing, which is a nice reference.