# CyberRascal

1. ## Drawing one GL_TRIANGLES element with glDrawElements makes a quad

Thanks everyone, you were both right: I had defined the near clipping plane too far away from the origin (and the rectangle was very long and narrow), which made the clipped triangle look like the side of an almost-cube. The amount of time I spent trying different things infuriates me (4-5 hours), but at this stage every kind of fiddling around teaches me something...

For example, I realized that OpenGL expects matrices in column-major order. I also found out that transposing reverses the order of matrix multiplication for two matrices a and b: (a * b)T = bT * aT. Yep, being a noob equals having a great time!

Anyway, I did as you (Sponji) suggested and implemented translation via the input matrix instead. Now on to learning about quaternions, Euler angles, rotation matrices and that stuff... Or is that not the right way to achieve rotation? I was thinking of storing translation/rotation with my (as of yet) static models - so an (x, y, z) translation and an (x, y, z, w) quaternion - and applying them as perspective(...) * translation(v) * rotation(q). Does that make sense?

To rotate around, say, the origin as pivot, you apply the rotation matrix last, which applies the actual operation first?
2. ## Drawing one GL_TRIANGLES element with glDrawElements makes a quad

Thanks for the reply!

I am using a self-made obj deserialiser. It handles comments, so all lines starting with # are excluded - there is currently only one face, from vertex 4 to 3 to 7 (which is 3 to 2 to 6 as a 0-based index).

The reason I do indices.size() * 3 is that the size is actually the number of triangles (obj faces), and I think the glDrawElements call takes the total number of indices (3 in this case).

I have verified that indices.size() is 1 and that the call matches the one in my first post. If I use, for example, an orthogonal face which directly faces the 'camera', it gets rendered as a triangle correctly (which is why I suspected my perspective shader). Any ideas?
3. ## Drawing one GL_TRIANGLES element with glDrawElements makes a quad

Hey everyone, I'm basically slowly losing my mind. I have the following code:

```cpp
glBindBuffer(vertices.type(), vertices.id());
glBindBuffer(indices.type(), indices.id());
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, nullptr);
glDrawElements(GL_TRIANGLES, indices.size() * 3, GL_UNSIGNED_INT, 0);
glBindBuffer(indices.type(), 0);
glBindBuffer(vertices.type(), 0);
```

The above code outputs the below image. Vertices are a number of xyzw-format vertices, and indices are three-tuples of indices. In the concrete case below, indices.size() is 1, so the call to glDrawElements is

```cpp
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, 0);
```

This is confirmed by glslDevil, which shows the above call. How can this possibly yield more than 1 triangle?

This is the cube model I am using (I take the 1-indexing used by obj into consideration when deserializing):

```
#
# object Box001
#
v  -0.2500 -0.2500 -0.7500
v  -0.2500 0.2500 -0.7500
v  0.2500 0.2500 -0.7500
v  0.2500 -0.2500 -0.7500
v  -0.2500 -0.2500 0.7500
v  0.2500 -0.2500 0.7500
v  0.2500 0.2500 0.7500
v  -0.2500 0.2500 0.7500
# 8 vertices
g Box001
#f 1 2 3
#f 3 4 1
#f 5 6 7
#f 7 8 5
#f 1 4 6
#f 6 5 1
f 4 3 7
#f 7 6 4
#f 3 2 8
#f 8 7 3
#f 2 1 5
#f 5 8 2
# 12 faces
```

I have a vertex shader doing perspective correction that I've cobbled together without really understanding the math; here's the shader:

```glsl
#version 430

layout(location = 0) in vec4 position;

uniform mat4 perspective;

void main() {
    vec4 offsetPos = position + vec4(0.5f, 0.5f, 0, 0);
    gl_Position = perspective * offsetPos;
}
```

The uniform perspective above has the following value (when bound):

```cpp
matrix<4, 4> perspective(float frustumScale, float z_near, float z_far) {
    matrix<4, 4> mat = { 0 };
    mat[0][0] = frustumScale;
    mat[1][1] = frustumScale;
    mat[2][2] = (z_far + z_near) / (z_near - z_far);
    mat[3][2] = (2 * z_far * z_near) / (z_near - z_far);
    mat[2][3] = -1;
    return mat;
}
```

As far as I know, it should be impossible for the perspective shader to make 3 vertices appear as 4? If I just render all 12 faces of the below cube, it looks alright, but the individual faces (those subject to perspective correction that wouldn't be visible in an orthogonal projection) all look skewed. What am I doing wrong? Do you need more information? Thanks for any help, I'm really a newbie to graphics programming in general.
4. ## Having trouble compiling C++11 chrono with CodeBlocks, MinGW

Really? Exactly that code? Chrono is in the boost namespace...
5. ## How powerful is Java?

Alright, just wanted to clarify. And yes indeed, I was speaking of Java generics. Sorry about that.
6. ## How powerful is Java?

[quote name='Oberon_Command' timestamp='1351970212' post='4996946'] [quote name='Dunmord' timestamp='1351968546' post='4996939'] Every datatype in Java is an object, whereas in C++ native data types are not. Most C++ data types have a 1-1 correlation with assembler data types. (C++) int - dd (Java) int - dd + functions [/quote] This is true in [b]C#[/b], but not Java. Java's "int" type is a primitive type and basically behaves like the "int" type in C++. In C#, "int" is basically syntactic sugar for the System.Int32 type, which is a value-type structure that represents an int. If you want to treat integers like objects in Java, you need to use the Integer class. [/quote] True, but misleading. It sounds like you are saying that the implied performance hit of treating an int as an object applies to C# but not Java, but it's the other way around. Given that a primitive (or struct) in C# behaves like an automatic variable in C++ (same in Java), there is no performance penalty in C#. However, in Java it is not possible to have generic collections of primitives without indirection - you can only have collections of the Integer object, which is a reference type. So for most intents and purposes it's [i]more[/i] true that primitives are objects in Java than the other way around.
7. ## C# List<T>.Find

Why don't you rar it and use, say, [url="http://www.2shared.com/"]http://www.2shared.com/[/url] to host it so we can check it out?
8. ## C# List<T>.Find

Yes, what I meant by the "First" LINQ extension was actually the method CALLED First:

```csharp
var y = myList.First(x => x.ID == id);
```

or

```csharp
var y = myList.FirstOrDefault(x => x.ID == id);
```

This is getting weird... Have you done a Clean Solution and then a Rebuild Solution?
9. ## C# List<T>.Find

Seems to me like that should compile. Try using the LINQ extension First instead (just for kicks). Also, do you have some variable named i in scope? Neither should give you the error you are getting, though... I can't think of anything that could make the lambda be inferred as a List.
10. ## Inheritance vs. Templates

Some negative things about templates for duck-typing:

- Executable bloat
- Compile-time increase
- Can no longer separate header / source

For an algorithm like your example I'd probably go with templates anyway, though.
11. ## checking for end of game in hex

The above is O(n), with n = the number of nodes in the connected graph. At each placement, you could instead cache, for each position, which end nodes its group is connected to. You could then just check whether the neighbours have any elements connected to end nodes - a constant-time end-game check at each placement.
12. ## checking for end of game in hex

The only way a game can end is by placing a marker at a position, so each time a hex is marked, do a breadth-first search until there are no more connected positions of the same color, or until two different end positions are found. I'm at a lecture, writing on my phone, so sorry if I'm terse. Does anyone see anything wrong with my approach?
13. ## Should I learn .NET for making design tools?

[quote name='ChaosEngine' timestamp='1341398804' post='4955586'] It's also slow, bloated and has a very uncertain future. [/quote] I'm not saying you are wrong, even though my experience is not the same, but do you have anything backing that up? [quote name='Fábio Franco GFT(Partner) at Microsoft (MSDN)'] Windows Forms will not have any future development. [/quote] I know WinForms is just a wrapper for the Win32 API, and support isn't going to be dropped for it, but still. Admittedly, I haven't tested WPF on computers older than about 6 years, but I've never noticed anything sluggish. Bloated is a somewhat vague notion. A bloated UI is obviously bad, since it is very cluttered and hard to get an overview of. Some people call C++ bloated; other people call it feature-rich; other people say it has too small a standard library. In WPF you can just use what you want. I've also worked on a commercial WPF app, though a very small one, and found it comfortable to use. YMMV.
14. ## Should I learn .NET for making design tools?

I see many people here recommend WinForms - but it is the poor cousin of WPF. If you want to easily make a good-looking and/or custom-looking GUI, you should learn WPF instead, ESPECIALLY if you don't already know WinForms. Its databinding support is a ridiculous amount better than WinForms', it separates view and logic à la HTML (but more app-centric, of course) via XAML, and it is hardware accelerated, among other things. On one occasion I made the backend in C++, made a wrapper library using C++/CLI, and made it available to end-consumers via WPF. Very smooth. I should point out that for now WPF seems to be Windows-only - relying heavily on D3D, etc., it's probably hard to make a Mono implementation. If I recall correctly, WinForms also had a few problems running on Mono. On a final note, C# is a very pleasant language. It's not quite as powerful as C++ when it comes to templates, among other things, but syntax-wise and in ease of use it's very good.