AlanSmithee
Member Since 19 May 2011
Offline, Last Active Yesterday, 11:53 PM
Community Stats
 Group Members
 Active Posts 160
 Profile Views 5,188
 Submitted Links 0
 Member Title Member
 Age Unknown
 Birthday Unknown

Gender
Not Telling
Topics I've Started
degenerate triangles
22 May 2015  08:39 AM
I am making a WebGL 2D renderer to render quads (sprites).
To make the renderer more efficient, I minimize draw calls by keeping all vertex data in the same buffer.
Each vertex has a position, tint color and texture coordinate, making one vertex (3 + 4 + 2) * 4 bytes (float) = 36 bytes.
The maximum number of sprites per draw call is 10,000.
At the moment I am rendering the quads using indices, with drawElements and GL_TRIANGLE_STRIP.
I.e., quad one is made up of indices 012 + 123, quad two of indices 456 + 567, etc. This seems to be working fine. There are, however, no shared vertices (except those that are shared within a quad).
Looking at other solutions to this kind of batching, a lot of people do not use indices, but instead use drawArrays with GL_TRIANGLE_STRIP and degenerate triangles to separate the quads.
I am looking for some input regarding the differences between the two, and if there are any disadvantages to using indices.
Thanks in advance!
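To make the index pattern above concrete, it can be generated once at startup and reused for every batch. This is just a sketch; BuildQuadIndices is a made-up helper name, not something from my code:

```cpp
#include <cstdint>
#include <vector>

// Build a reusable index buffer for `quadCount` quads, following the
// pattern from the post: quad k uses vertices 4k..4k+3, split into
// triangles (4k, 4k+1, 4k+2) and (4k+1, 4k+2, 4k+3).
std::vector<std::uint16_t> BuildQuadIndices(std::size_t quadCount)
{
    std::vector<std::uint16_t> indices;
    indices.reserve(quadCount * 6);
    for (std::size_t q = 0; q < quadCount; ++q)
    {
        const std::uint16_t base = static_cast<std::uint16_t>(q * 4);
        indices.push_back(base);     // first triangle of the quad
        indices.push_back(base + 1);
        indices.push_back(base + 2);
        indices.push_back(base + 1); // second triangle of the quad
        indices.push_back(base + 2);
        indices.push_back(base + 3);
    }
    return indices;
}
```

Note that with 16-bit indices a single buffer tops out at 65536 / 4 = 16384 quads, which still covers a 10,000-sprite batch.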
Encapsulation through anonymous namespaces
22 January 2015  06:10 AM
Hello.
When programming in C++ I often (always) use some method of data encapsulation.
This used to mean defining a class, declaring variables and helper functions private and supplying a public interface of member functions:
class A
{
public:
    int PublicFunction() { return _privateVariable; }

private:
    int _privateVariable;
};
Any user of this class can create an instance of the class and use its public members.
Should implementation specific details happen to change, the public interface of the class can stay the same. This is all fine.
However, the more I code, the more I find myself in situations where there only needs to (or only should) ever be one instance of a class.
There are many solutions to this, such as the singleton pattern, classes with static member functions etc.
But somehow these design patterns have always felt wrong to me. It feels like fitting the problem to the solution, where the solution is always some form of a class, be it a singleton or whatever.
namespace // private, not accessible from outside this source file
{
    int privateVariable = 54;

    int PrivateHelperFunction()
    {
        // can change this to whatever I want without breaking the public interface
        return 123;
    }
}

namespace PublicInterface
{
    bool PerformSomeAction()
    {
        // Can access "private" variables and functions from here:
        return PrivateHelperFunction() == privateVariable;
    }
}
Some possible drawbacks I can think of:
- bad (or incorrect) use of namespaces
- cannot separate the implementation over many source files (would you even want to do this?)
- could it make code less reusable, maybe?
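To make the comparison concrete, this is how I would split the anonymous-namespace approach over a header and a source file (file names are just examples, and the two files are shown as one listing here):

```cpp
// public_interface.h -- users of the module see only this declaration.
namespace PublicInterface
{
    bool PerformSomeAction();
}

// public_interface.cpp -- would #include "public_interface.h".
// Everything in the anonymous namespace has internal linkage, so no
// other translation unit can even name it.
namespace // "private": not accessible from outside this source file
{
    int privateVariable = 54;

    int PrivateHelperFunction()
    {
        // Free to change without breaking the public interface.
        return 123;
    }
}

namespace PublicInterface
{
    bool PerformSomeAction()
    {
        // Can reach the "private" variables and functions from here.
        return PrivateHelperFunction() == privateVariable;
    }
}
```

This addresses the second drawback at least partially: each source file gets its own anonymous namespace, while the public namespace can span several files.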
Problem with perspective in opengl
23 November 2014  01:21 PM
Hello.
I am experiencing a perspective problem when rendering a model in OpenGL.
I am pretty sure there is some problem with my MVP matrix, but I am unable to deduce what it could be.
I am using the identity matrix for my model; the projection matrix and view matrix are defined as below:
glm::mat4 _projectionMatrix = glm::perspective(45.0f, _screenDimensions.x / _screenDimensions.y, 0.1f, 100.0f);
glm::mat4 _viewMatrix = glm::lookAt(glm::vec3(4,0,0), glm::vec3(0,0,0), glm::vec3(0,1,0));
they are multiplied and sent to the vertex shader:
auto mvpMatrix = _projectionMatrix * _viewMatrix * glm::mat4(1.0f);
glUniformMatrix4fv(mvpUniformHandle, 1, GL_FALSE, glm::value_ptr(mvpMatrix));
and used in the shaders:
// Vertex shader
#version 330

layout (location = 0) in vec3 Position;
layout (location = 1) in vec2 TexCoord;

uniform mat4 MVP;

out vec2 TexCoord0;

void main()
{
    gl_Position = MVP * vec4(Position, 1);
    TexCoord0 = TexCoord;
}

// Fragment shader
#version 330

precision highp float;

in vec2 TexCoord0;

uniform sampler2D TextureSampler;

out vec4 FragColor;

void main()
{
    FragColor = texture2D(TextureSampler, TexCoord0.xy);
}
And the result I am getting can be seen in attached file "wrong.png"
However, when searching for answers I found a post on SO where someone was trying to do something with a raycast, and I noticed that the asker had defined his/her projection matrix with the negative of the aspect ratio, so I tried doing that:
glm::mat4 _projectionMatrix = glm::perspective(45.0f, -(_screenDimensions.x / _screenDimensions.y), 0.1f, 100.0f);
and the results actually seem to be correct! (see attached file "right.png")
I also added files "wrong2.png" and "right2.png", which show the model with some rotation / scaling / translation.
But all this makes absolutely 0 sense to me, and makes me think that there is something else that is wrong with my code.
Why would passing a negative aspect ratio make it look as it should (if it even is as it should)?
I am fairly certain there is a problem in either the MVP matrix or how it is used in the shader.
I have used the model data in a WebGL application and it rendered just fine there, so the vertex position and texture coordinate data is fine.
Here is the model loading / drawing code for completeness' sake:
// Vertices, position / texture coord
Vertex vertices[] =
{
    // Front face
    Vertex(glm::vec3(-1.0, -1.0,  1.0), glm::vec2(0.0, 0.0)),
    Vertex(glm::vec3( 1.0, -1.0,  1.0), glm::vec2(1.0, 0.0)),
    Vertex(glm::vec3( 1.0,  1.0,  1.0), glm::vec2(1.0, 1.0)),
    Vertex(glm::vec3(-1.0,  1.0,  1.0), glm::vec2(0.0, 1.0)),
    // Back face
    Vertex(glm::vec3(-1.0, -1.0, -1.0), glm::vec2(1.0, 0.0)),
    Vertex(glm::vec3(-1.0,  1.0, -1.0), glm::vec2(1.0, 1.0)),
    Vertex(glm::vec3( 1.0,  1.0, -1.0), glm::vec2(0.0, 1.0)),
    Vertex(glm::vec3( 1.0, -1.0, -1.0), glm::vec2(0.0, 0.0)),
    // Top face
    Vertex(glm::vec3(-1.0,  1.0, -1.0), glm::vec2(0.0, 1.0)),
    Vertex(glm::vec3(-1.0,  1.0,  1.0), glm::vec2(0.0, 0.0)),
    Vertex(glm::vec3( 1.0,  1.0,  1.0), glm::vec2(1.0, 0.0)),
    Vertex(glm::vec3( 1.0,  1.0, -1.0), glm::vec2(1.0, 1.0)),
    // Bottom face
    Vertex(glm::vec3(-1.0, -1.0, -1.0), glm::vec2(1.0, 1.0)),
    Vertex(glm::vec3( 1.0, -1.0, -1.0), glm::vec2(0.0, 1.0)),
    Vertex(glm::vec3( 1.0, -1.0,  1.0), glm::vec2(0.0, 0.0)),
    Vertex(glm::vec3(-1.0, -1.0,  1.0), glm::vec2(1.0, 0.0)),
    // Right face
    Vertex(glm::vec3( 1.0, -1.0, -1.0), glm::vec2(1.0, 0.0)),
    Vertex(glm::vec3( 1.0,  1.0, -1.0), glm::vec2(1.0, 1.0)),
    Vertex(glm::vec3( 1.0,  1.0,  1.0), glm::vec2(0.0, 1.0)),
    Vertex(glm::vec3( 1.0, -1.0,  1.0), glm::vec2(0.0, 0.0)),
    // Left face
    Vertex(glm::vec3(-1.0, -1.0, -1.0), glm::vec2(0.0, 0.0)),
    Vertex(glm::vec3(-1.0, -1.0,  1.0), glm::vec2(1.0, 0.0)),
    Vertex(glm::vec3(-1.0,  1.0,  1.0), glm::vec2(1.0, 1.0)),
    Vertex(glm::vec3(-1.0,  1.0, -1.0), glm::vec2(0.0, 1.0))
};

glGenBuffers(1, &VBO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

unsigned int Indices[] =
{
     0,  1,  2,   0,  2,  3, // Front face
     4,  5,  6,   4,  6,  7, // Back face
     8,  9, 10,   8, 10, 11, // Top face
    12, 13, 14,  12, 14, 15, // Bottom face
    16, 17, 18,  16, 18, 19, // Right face
    20, 21, 22,  20, 22, 23  // Left face
};

glGenBuffers(1, &IBO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(Indices), Indices, GL_STATIC_DRAW);

// Draw function
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

glUniformMatrix4fv(mvpUniform, 1, GL_FALSE, glm::value_ptr(mvpMatrix));

glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);

glBindBuffer(GL_ARRAY_BUFFER, VBO);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*)0);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*)12);

glActiveTexture(_textureUnit);
glBindTexture(_textureTarget, _textureId);

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IBO);
glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_INT, 0);

glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
Anyone have any suggestion as to what could be wrong?
Thanks in advance for your help!
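One thing I want to rule out (an assumption on my part, since the type of _screenDimensions isn't shown above): if the dimensions were stored as integers, the aspect-ratio division would truncate before glm::perspective ever sees it. Also, as far as I understand, a negative aspect ratio simply mirrors the image along the x axis, so if that "fixes" things, something else is probably mirrored. A minimal sketch of the truncation pitfall, with made-up helper names:

```cpp
// Hypothetical helpers illustrating the aspect-ratio pitfall; neither
// is part of the actual code above.
float AspectTruncating(int w, int h)
{
    // Integer division happens first, THEN the result is converted:
    // 1280 / 720 == 1, so the aspect becomes 1.0f and the image squashes.
    return w / h;
}

float AspectCorrect(int w, int h)
{
    // Cast before dividing to get the true ratio (~1.78 for 16:9).
    return static_cast<float>(w) / static_cast<float>(h);
}
```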
building luabind for gcc 4.8.1
15 January 2014  08:25 PM
Hi.
I hope this is the right forum for this.
I am having some major problems getting luabind to even compile with mingw.
I have searched and followed every tutorial / Stack Overflow thread / other piece of documentation available, and it doesn't help.
Steps taken:
1. Downloaded boost and built bjam with cmd "bootstrap.bat mingw" (tried versions 1.5.5, 1.5.4 and 1.3.4; same results)
2. Downloaded the lua precompiled binaries (version 4.1.0, since that is supposedly the only one compatible with luabind)
3. Added environment variables BOOST_ROOT and LUA_PATH with their respective values
4. Added bjam path to environment variables
5. Downloaded luabind (tried all versions since 2006) and ran this in cmd "bjam stage toolset=gcc"
Output:
"g++" ftemplatedepth128 O0 fnoinline Wall g DLUABIND_BUILDING I "." I"C:\boost_1_54_0" I"C:\mingw_dev_lib\lua41\include" c o "bin\gccmingw 4.7.1\debug\src\overload_rep.o" "src\overload_rep.cpp" ...failed gcc.compile.c++ bin\gccmingw4.7.1\debug\src\overload_rep.o... gcc.compile.c++ bin\gccmingw4.7.1\debug\src\stack_content_by_name.o In file included from ./luabind/wrapper_base.hpp:31:0, from ./luabind/back_reference.hpp:27, from ./luabind/class.hpp:97, from ./luabind/luabind.hpp:28, from src\stack_content_by_name.cpp:25: ./luabind/detail/call_member.hpp:318:1: error: missing binary operator before to ken "(" In file included from ./luabind/back_reference.hpp:27:0, from ./luabind/class.hpp:97, from ./luabind/luabind.hpp:28, from src\stack_content_by_name.cpp:25: ./luabind/wrapper_base.hpp:90:1: error: missing binary operator before token "(" In file included from ./luabind/detail/constructor.hpp:39:0, from ./luabind/class.hpp:98, from ./luabind/luabind.hpp:28, from src\stack_content_by_name.cpp:25: ./luabind/wrapper_base.hpp:90:1: error: missing binary operator before token "(" In file included from ./luabind/detail/constructor.hpp:41:0, from ./luabind/class.hpp:98, from ./luabind/luabind.hpp:28, from src\stack_content_by_name.cpp:25: ./luabind/detail/signature_match.hpp:151:1: error: missing binary operator befor e token "(" ./luabind/detail/signature_match.hpp:227:1: error: missing binary operator befor e token "(" In file included from ./luabind/detail/constructor.hpp:42:0, from ./luabind/class.hpp:98, from ./luabind/luabind.hpp:28, from src\stack_content_by_name.cpp:25: ./luabind/detail/call_member.hpp:318:1: error: missing binary operator before to ken "(" In file included from ./luabind/detail/constructor.hpp:43:0, from ./luabind/class.hpp:98, from ./luabind/luabind.hpp:28, from src\stack_content_by_name.cpp:25: ./luabind/wrapper_base.hpp:90:1: error: missing binary operator before token "("
And it goes on and on with similar messages from different files...
If I run without toolset=gcc it compiles just fine with msvc; the .lib and .bin files are there, just like they should be.
I have looked at :
http://stackoverflow.com/questions/9631762/error-missing-binary-operator-before-token
and
https://svn.boost.org/trac/boost/ticket/6631
which seem to be related, but they don't help my situation.
Someone please help
Thanks in advance / AS
2d logical to view conversion
13 January 2014  09:22 AM
Hi.
I wanted to get some input from you guys on what method to use when converting logical positions and dimensions to their view counterparts and vice versa in a 2D game.
To give you some context, the game is a simple top-down 2D game. The game is competitive and multiplayer. All game-related positions and dimensions (x, y, width, height, etc.) are in a logical Cartesian space, and when rendering they are scaled to view space. The other way around is also true; for example, when getting input from the mouse, the click position is converted from view to logical space.
Two pseudo-code use cases might look like:
render(
    camera.to_view(entity.x), camera.to_view(entity.y),
    camera.to_view(entity.w), camera.to_view(entity.h)
);

get_mouse_pos(
    camera.to_logical(mouse.x), camera.to_logical(mouse.y)
);
So, I can think of two ways to do this.
1. Using a scale that depends on the level and screen width
scale.x = screenWidth / levelWidth;
scale.y = screenHeight / levelHeight;
This will make it so that the same number of logical units is shown no matter what resolution the screen is. The logical positions and dimensions are simply scaled to match the screen resolution.
pros
- gameplay will be the same at all screen resolutions (the same number of logical units is always shown)
cons
- might look weird at a lot of resolutions because of the scaling
2. Using a fixed scale
scale = 10;
This will make it so that (screenWidth / scale) by (screenHeight / scale) logical units are shown. The logical positions and dimensions are always scaled by the same [scale] amount.
pros
- easier to make it look good, since the scaling is the same at all resolutions
cons
- will affect gameplay, since players using a screen with a larger resolution will see more of the level
PS: I say "screen" but you can think of it as a viewport; in case the game is not in fullscreen, everything still applies.
PPS: "positions and dimensions" are really points and vectors... lol
So, I would love to get some feedback on what you think will work the best, thanks in advance! / AS
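For concreteness, here is roughly what I mean by the two strategies; Camera and its member names are made up for the example:

```cpp
// Minimal sketch of the two scaling strategies from the post.
struct Camera
{
    float scaleX;
    float scaleY;

    // Strategy 1: scale derived from screen and level size, so the
    // whole level always fits the screen regardless of resolution.
    static Camera FitLevel(float screenW, float screenH,
                           float levelW, float levelH)
    {
        return Camera{screenW / levelW, screenH / levelH};
    }

    // Strategy 2: fixed pixels-per-logical-unit; larger screens
    // simply see more of the level.
    static Camera Fixed(float scale)
    {
        return Camera{scale, scale};
    }

    // Logical -> view (rendering) and view -> logical (mouse input).
    float ToViewX(float x) const    { return x * scaleX; }
    float ToViewY(float y) const    { return y * scaleY; }
    float ToLogicalX(float x) const { return x / scaleX; }
    float ToLogicalY(float y) const { return y / scaleY; }
};
```

With FitLevel(800, 600, 100, 75) a logical unit maps to 8 pixels on that screen, while Fixed(10) always maps a unit to 10 pixels, so an 800x600 viewport shows 80x60 units but a 1920x1080 one shows 192x108.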