Community Reputation

1200 Excellent

About Aztral

  1. The 'Thick Client' is dead...

    We write architectural design software in C++ with Qt.  It's a very, very thick client application.  We also have a pretty large (and quickly getting larger) iOS application based on Unity written primarily in C# with occasional native code.
  2. I made this. I wouldn't call it a game. I was (and still am), like you, interested in graphics programming. I had the video, code samples and some technical explanations of how a few things in the engine worked up on a website I created with Joomla.

There wasn't a place I applied to that I didn't hear back from (and that includes a few major studios that I was, frankly, shocked to hear back from). I'm not sure whether that speaks more to my resume or my portfolio project, but it worked out well for me in the end. My experience was that a substantial amount of the discussion I had with potential employers - be it on the phone or during an on-site interview - was related to this project.

A lot of the suggestions on Gamedev are to create polished, completed projects. I'm not saying that's bad advice, but I had a positive experience with a project that I never 'completed' or intended to complete. I wanted to learn OpenGL, so I started implementing things with OpenGL. It started as a COLLADA importer; then I added a terrain generator, a GUI, water features, and the ability to interact with the scene, which led to the terrain editor, etc. I got a LOT out of it, it was fun, and it looked good enough (in my opinion) to get me interviews.
  3. [quote name='gekko' timestamp='1348209993' post='4982255'] Why don't you report it to Microsoft Connect? [/quote] [quote name='gekko' timestamp='1348209993' post='4982255'] And if you do report it, would you mind posting the link? I wouldn't mind tracking it to see what they say. [/quote]

Sure thing.

[quote name='gekko' timestamp='1348209993' post='4982255'] If this is your actual code, you should call std::mem_fn and not std::bind since you aren't binding anything; mem_fn will give you a callable object which takes either a pointer or reference to the object as its first parameter. That said, I can't find any reason why it shouldn't work. If you explicitly handle the conversion yourself using mem_fn, it also works [/quote]

You're right - that was one thing that I tried early on, and it does work properly. It's probably the appropriate solution if we want to be as correct as possible. It also appears that the issue goes away if we use bind as intended (i.e. we actually bind something), like so:

[CODE]
class ObjectBase {
public:
    virtual void vfunc_parm(int i) const { }
};

class ObjectDerived : public ObjectBase {
public:
    void vfunc_parm(int i) const { std::cout << "vfunc_parm i = " << i << std::endl; }
};

const ObjectDerived obj;
auto func = std::bind(&ObjectBase::vfunc_parm, _1, _2);
func(obj, 2);
[/CODE]

The only downside is that this is a problem in many thousands of locations, and determining the appropriate solution in each location will be a tedious process, even if it does make our code base better overall. It's as much a matter of curiosity for me at this point anyway. Conceptually I don't understand why the original code [i]wouldn't[/i] work, even if it's not technically the best solution.
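For reference, a minimal sketch of the std::mem_fn approach gekko describes. The Base/Derived names are hypothetical stand-ins for the classes above, and the member function returns a string instead of printing so the behavior is easy to check:

```cpp
#include <cassert>
#include <functional>
#include <string>

// Hypothetical classes mirroring the ObjectBase/ObjectDerived pattern.
struct Base {
    virtual ~Base() = default;
    virtual std::string name() const { return "base"; }
};
struct Derived : Base {
    std::string name() const override { return "derived"; }
};

// std::mem_fn wraps a pointer-to-member; the resulting callable accepts a
// reference or a pointer to the object as its first argument.
inline std::string call_by_ref(const Base& obj) {
    auto f = std::mem_fn(&Base::name);
    return f(obj);   // reference: virtual dispatch still applies
}

inline std::string call_by_ptr(const Base* obj) {
    auto f = std::mem_fn(&Base::name);
    return f(obj);   // pointer works with the same wrapper
}
```

Both forms resolve the virtual call through the derived class, which is the behavior the bind-based code was relying on.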
  4. [quote name='Codarki' timestamp='1348129080' post='4981961'] Yeah I think that should work. [quote name='Aztral' timestamp='1348082358' post='4981784'] This code compiles and executes fine using the v100 C++ compiler. My question is [i]should[/i] this code work? In the 2010 case the code calls vfunc() of the derived class, using obj as the callee, which is what I would expect. [/quote] The compiler is instantiating std::bind::operator() which takes const ObjectDerived by value. So you're actually using a copy of the obj. "The arguments to bind are copied or moved, and are never passed by reference unless wrapped in std::ref or std::cref." [/quote]

I believe your quote is referring to the arguments actually passed to the std::bind call. I can say for sure that in the case of compiler v100, std::bind::operator() isn't making a copy, as this code:

[code]
#include <algorithm>
#include <functional>
#include <iostream>

class ObjectBase {
public:
    virtual void vfunc() const { std::cout << "vfunc base" << std::endl; }
};

class ObjectDerived : public ObjectBase {
public:
    void vfunc() const { std::cout << "vfunc derived this = " << this << std::endl; }
};

int main(int argc, char *argv[])
{
    using namespace std::placeholders;

    const ObjectDerived obj;
    std::cout << "Addr of obj " << &obj << std::endl;

    auto func = std::bind(&ObjectBase::vfunc, _1);
    func(obj);

    return 0;
}
[/code]

Outputs:

Addr of obj 003EFE20
vfunc derived this = 003EFE20

And this makes sense (to me, at least). If, for example, you are calling std::for_each(container.begin(), container.end(), std::bind(&Class::func, _1)) you would expect Class::func to be called on the actual elements in the container - wouldn't you? Not to mention the performance implications. That said, I can call func with func(&obj), func(dynamic_cast<const ObjectDerived &>(obj)), or func(std::move(obj)), so it does seem to be a problem only in the case where I pass the object itself.

I'd say this became an issue with the introduction of rvalue references and move semantics, but those aren't new to 2012; the same language features existed in 2010.
  5. That is true - though in other cases, when I pass a reference to serve as the 'this' object, it works fine. The only time it is an issue is when I provide a pointer to a member function of the base class and pass a reference to an instance of the derived class. But I suppose that goes back to the 'should it work' question, regardless of whether or not it does work in some cases.
  6. [quote name='swiftcoder' timestamp='1348082846' post='4981789'] My gut feeling is no. It is called the 'this pointer' for a reason. [/quote]

I'm not sure what you mean. In this case shouldn't obj be the callee, and thus its 'this' pointer be used?

[quote name='Bregma' timestamp='1348083874' post='4981794'] Data point: your code compiles and works just peachy on GCC 4.7 [/quote]

I suppose I should have noted this as well - it also compiles just fine with XCode Clang.

[quote name='Servant of the Lord' timestamp='1348084137' post='4981796'] std::function can wrap member functions as (imitating) regular functions; then maybe you can std::bind that? [/quote]

This also compiles in 2010 and fails in 2012. Interestingly, the sample code at http://en.cppreferen...tional/function also fails to compile under MSVS 2012. It is seeming more and more like an issue with the 2012 compiler, but again I don't know the standard well enough to say whether or not it [i]should[/i] compile. It seems like a fundamental and common enough practice that a bug like this would be pretty glaring and wouldn't be present this close to a compiler release date.
  7. In trying to build some existing code from MSVS 2010 in MSVS 2012, I've run into a few (quite a few) compiler errors that were non-existent with the 2010 compiler. Almost all of these errors are in STL/templated code. Trying to figure them out has made me question my sanity, more than anything, but more relevantly my understanding of a few important C++ ideas. Let me preface this by saying that I understand 2012 is still an RC, but the sheer volume of compiler errors I'm getting is troublesome and makes me wonder if I'm doing something that, if it isn't flat out wrong, is at best in a grey area. One example - a very simple test case to reproduce the compiler error:

[CODE]
#include <algorithm>
#include <functional>

class ObjectBase {
public:
    virtual void vfunc() const { }
};

class ObjectDerived : public ObjectBase {
public:
    void vfunc() const { }
};

int main(int argc, char *argv[])
{
    using namespace std::placeholders;

    const ObjectDerived obj;
    auto func = std::bind(&ObjectBase::vfunc, _1);
    func(obj);

    return 0;
}
[/CODE]

This code produces:

ClCompile: std_bind_test.cpp
c:\program files (x86)\microsoft visual studio 11.0\vc\include\functional(1264): error C2100: illegal indirection
c:\program files (x86)\microsoft visual studio 11.0\vc\include\functional(1147) : see reference to function template instantiation '_Rx std::_Pmf_wrap<_Pmf_t,_Rx,_Farg0,_V0_t,_V1_t,_V2_t,_V3_t,_V4_t,_V5_t,<unnamed-symbol>>::operator ()<ObjectDerived>(const _Wrapper &) const' being compiled with [ _Rx=void, _Pmf_t=void (__thiscall ObjectBase::* )(void) const, _Farg0=ObjectBase, _V0_t=std::_Nil, _V1_t=std::_Nil, _V2_t=std::_Nil, _V3_t=std::_Nil, _V4_t=std::_Nil, _V5_t=std::_Nil, <unnamed-symbol>=std::_Nil, _Wrapper=ObjectDerived ]
c:\users\ryan\documents\visual studio 2012\projects\cppeleventest\cppeleventest\std_bind_test.cpp(19) : see reference to function template instantiation 'void std::_Bind<_Forced,_Ret,_Fun,_V0_t,_V1_t,_V2_t,_V3_t,_V4_t,_V5_t,<unnamed-symbol>>::operator ()<const ObjectDerived&>(const ObjectDerived)' being compiled with [ _Forced=true, _Ret=void, _Fun=std::_Pmf_wrap<void (__thiscall ObjectBase::* )(void) const,void,ObjectBase,std::_Nil,std::_Nil,std::_Nil,std::_Nil,std::_Nil,std::_Nil,std::_Nil>, _V0_t=std::_Ph<1> &, _V1_t=std::_Nil, _V2_t=std::_Nil, _V3_t=std::_Nil, _V4_t=std::_Nil, _V5_t=std::_Nil, <unnamed-symbol>=std::_Nil ]
Build FAILED.

My understanding of it so far is that somewhere down the long line of STL code (in an operator() call, I think) the indirection operator is being applied to a non-pointer of type ObjectDerived. I can verify this by implementing the indirection operator for ObjectBase:

[CODE]
const ObjectBase &operator*() const { return *this; }
[/CODE]

When I do so, I can compile and run. This code compiles and executes fine using the v100 C++ compiler. My question is: [i]should[/i] this code work? In the 2010 case the code calls vfunc() of the derived class, using obj as the callee, which is what I would expect. I can work around this simple case pretty easily by calling func(&obj); or func(dynamic_cast<const ObjectBase &>(obj)); but I can't, for example, do this:

[code]
std::vector<ObjectDerived> container;
container.resize(2);
std::for_each(container.begin(), container.end(), std::bind(&ObjectBase::vfunc, _1));
[/code]

I can also work around it by using std::bind(&ObjectDerived::vfunc, _1), but only if that function is actually implemented. Why would binding the derived vs. the base function be resolved differently? (I am actively digging around the STL code to answer that question, but that's quite a chore while I have other things to work on as well.) There are a number of potential workarounds, but they seem hackish and tedious if the code should be working as written. And again, given the sheer volume of template-related compiler errors, I wonder if I should just be holding off for a later release and hoping they get resolved.
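One workaround that sidesteps std::bind entirely is a lambda, which both VS2010 and VS2012 support. This is a minimal sketch, not the original code: the ObjectBase/ObjectDerived names mirror the post, but vfunc is changed to return an int (instead of being void) and the sum_vfunc helper is hypothetical, so the result is checkable:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct ObjectBase {
    virtual ~ObjectBase() = default;
    virtual int vfunc() const { return 0; }
};
struct ObjectDerived : ObjectBase {
    int vfunc() const override { return 1; }
};

// The lambda takes each element by const reference, so there is no bind
// machinery to trip over and no copy of the element is made.
inline int sum_vfunc(const std::vector<ObjectDerived>& container) {
    int total = 0;
    std::for_each(container.begin(), container.end(),
                  [&total](const ObjectDerived& obj) { total += obj.vfunc(); });
    return total;
}
```

Because the lambda spells out the call explicitly, there is no ambiguity about how the placeholder argument is converted, which is exactly where the 2012 compiler trips.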
  8. Portfolio/Resume Advice

    Thanks a lot for the feedback. Sorry I've taken so long to get back to my post - I was traveling this week and had much less available time than I expected.

[quote] Your resume looks reasonably good. It could use some additional detail about what you did. What specifically did you study (content, not course names). [/quote]

Thanks. I'm not sure what specifically to mention while avoiding simply listing standard computer science course content. Interesting school projects and what they involved?

Perhaps on a related note, am I hurting myself by not making my skills more apparent? For example, I want to write C++ code, and every position I'm planning on applying for is asking for C++ skills, but is it clear enough from my resume that I am competent in C++? I want to avoid a bulleted list, but if someone just skims my resume, will it be clear that I know what I know?

[quote] It could use a bit more detail about what you did individually at work, you probably didn't create all the content in your simulators. [/quote]

By content do you mean assets, in the sense of did I work with modelers/animators? Or do you mean specifically what I myself implemented in code?

[quote] You are setting yourself up as a specialist in 3D, which will be limiting if the employers are looking for other aspects of game development. For the entry level they are much more likely to be hiring general gameplay engineers or object scripters. I'd use caution and only send that version of your resume out to companies specifically looking for 3D programmers. You are reducing the pool of possible jobs, which may make it more difficult to find a job. However, you are also pointing out an important skill set, so this falls into personal choice. [/quote]

I thought it might be too specific. I'll use a more general objective for the online resume and tailor it more specifically for each position I apply for. Graphics would be my first choice, but as far as I can tell it's not a likely entry level position.

[quote] You can put together a demo video or not. You are correct that most of them won't view it. They'll prune people out at the job application level far before checking their web site or watching movies. You can drive them toward it if you want. If that is your goal then add some more details that a demo is on your site. [/quote]

Will do. I don't think it can hurt.

[quote] I get a few "You are not authorised to view this resource" pages on your site. I suggest you fix those. Assuming all the code and images are yours, I don't see anything that would stop an employer from interviewing or hiring you. [/quote]

These will hopefully be fixed tonight - I've just hidden them until I get those pages filled up.

I do have a few other questions. First, it's my understanding that if they want references they will ask for them - right? I don't need to list them, and I should definitely avoid a "references available on request"? Second, I'm still finishing up the contract work at my current job. The hard deadline for completion is the end of this year, but it is likely that the work will be done quite a bit earlier than that. Is it appropriate to be applying for jobs if I can't give an exact date when I would be available to start? If so, should I wait until nearer to the project completion to start applying? I'd like to start now and obviously be jobless for the shortest possible amount of time, but I'm not sure if I'll be overlooked on the basis that I can't start immediately or as soon as other candidates. Thanks again.

[quote] here i do not see a UR engine 2 download. But different dates! i have slower pc. [/quote]

I'm not sure what you mean. The UDK is built on UE3, but the work I have been doing still uses UE2, for the simple reason that we are writing the software to be accessible to very low end machines. I'm fairly certain the UE2 Runtime is not available for download (basing that on when I tried to get it to prepare somewhat before starting at my current job, which was over a year ago), and I know from experience that its functionality and editor are nowhere near those of the UDK.

Also, note that the site is still a work in progress. Now that I have some time I'll get it fleshed out a bit.
  9. Yet another resume thread. I would greatly appreciate some feedback. My resume can be found at Most of the rest of the site is not complete, but I have a couple of general questions about it.

A lot of responses to threads here asking for portfolio critiques mention the importance of having smaller, [i]completed[/i] projects in the portfolio. I don't have anything of the sort. Frankly, the time I've spent working independently has been more about enjoying myself and learning what I want to learn (namely graphics) and not so much doing specifically what will make my portfolio better. That said, even if it isn't perfect, I'd still like to present the work that I've done somehow, for whatever it is worth.

Some people mention tech demos as good portfolio pieces, so I figured I would split up my portfolio site into smaller pages, each related to a different sort of tech: for example, a page on terrain rendering (procedural and manual generation), a page for water rendering, my COLLADA importer, my GUI implementation, things like that. Each page will have some sort of explanation regarding how I did what I did, plus related code samples. On top of that, I thought I'd make a 3-5 minute video of everything in action that I talk over and give some explanations, and stick that on the front page.

Assuming all of this sounds reasonable so far, would it be better to split up this 3-5 minute video and put one on each sub-page, or have a longer video right up front? I understand that viewing portfolios is generally a hurried process, and I want to make sure I'm providing something that is most likely to be viewed. Thanks very much for any and all feedback.
  10. Texture Lookup Cost?

    I've been working on terrain rendering for a bit, and I wanted to ask my professor a few questions today, so I took the project in to show him. Unfortunately I hadn't run it on my laptop before, and much to my dismay the frame rate had dropped from 90-100 on my desktop to about 7 on my laptop when rendering similar scenes. For reference, my desktop has a Radeon 5970 and my laptop has a Radeon HD 3400 (mobile card). Clearly there is a difference in horsepower there, but I didn't think the difference would be so drastic when rendering a relatively simple scene. After tinkering with a few things, it turns out the largest performance killer is my terrain fragment shader. These are my terrain shaders; it's nothing complicated:

Vertex:

[code]
attribute vec3 in_Vertex;
attribute vec2 in_TexCoord0;
attribute vec4 in_BlendWeights1;
attribute vec4 in_BlendWeights2;

varying vec3 var_Vertex;
varying vec2 var_TexCoord0;
varying vec4 var_BlendWeights1;
varying vec4 var_BlendWeights2;

void main()
{
    var_Vertex = in_Vertex;
    var_TexCoord0 = in_TexCoord0;
    var_BlendWeights1 = in_BlendWeights1;
    var_BlendWeights2 = in_BlendWeights2;

    gl_Position = gl_ModelViewProjectionMatrix * vec4(in_Vertex, 1.0);
}
[/code]

Fragment:

[code]
varying vec3 var_Vertex;
varying vec2 var_TexCoord0;
varying vec4 var_BlendWeights1;
varying vec4 var_BlendWeights2;

// up to 8 terrain textures
uniform sampler2D texture0;
uniform sampler2D texture1;
uniform sampler2D texture2;
uniform sampler2D texture3;
uniform sampler2D texture4;
uniform sampler2D texture5;
uniform sampler2D texture6;
uniform sampler2D texture7;

void main()
{
    gl_FragColor = (texture2D(texture0, var_TexCoord0.st) * var_BlendWeights1.r) +
                   (texture2D(texture1, var_TexCoord0.st) * var_BlendWeights1.g) +
                   (texture2D(texture2, var_TexCoord0.st) * var_BlendWeights1.b) +
                   (texture2D(texture3, var_TexCoord0.st) * var_BlendWeights1.a) +
                   (texture2D(texture4, var_TexCoord0.st) * var_BlendWeights2.r) +
                   (texture2D(texture5, var_TexCoord0.st) * var_BlendWeights2.g) +
                   (texture2D(texture6, var_TexCoord0.st) * var_BlendWeights2.b) +
                   (texture2D(texture7, var_TexCoord0.st) * var_BlendWeights2.a);
}
[/code]

The terrain generator calculates blending weights for each texture per vertex, and the fragment shader simply applies those weights, as you can see. Is this a poor approach? Perhaps the actual performance answers that question for me. It looks like on my laptop I lose ~20 fps per texture2D call, and ~30 on my desktop. Is it just way too many lookups per fragment? I'm trying to figure out if I should be totally reworking the terrain engine or if it's still salvageable. Thanks.
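One thing worth checking on the CPU side: the eight weighted lookups only amount to a true weighted average if the per-vertex weights sum to 1; otherwise the terrain brightens or darkens wherever the sum drifts. A small sketch of normalizing the weights once at terrain-generation time (the normalize_weights helper name is hypothetical, not from the original code):

```cpp
#include <array>

// Hypothetical helper: normalize eight per-vertex blend weights so they sum
// to 1, making the shader's weighted sum of texels a proper average. Run once
// when the terrain is generated, not per frame.
inline std::array<float, 8> normalize_weights(std::array<float, 8> w) {
    float sum = 0.0f;
    for (float x : w) sum += x;   // total weight across all 8 textures
    if (sum > 0.0f)
        for (float& x : w) x /= sum;
    return w;
}
```

This doesn't reduce the lookup count, but it removes one variable when comparing output between the two machines.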
  11. Renderer Design

    [quote] Opaque geometry can be written in any order provided there is a Z-Buffer. If you have slow shaders for this part a depth pre-pass or rendering front-to-back helps speed it up. [/quote]

[quote] Rendering front-to-back is a GPU optimisation (assuming HiZ/EarlyZ is functioning). Reducing state-changes is largely a CPU optimisation. It would be best to write your sorting code in such a way that it's easy for a user of the library to configure it. [/quote]

This makes sense. I guess it's dependent upon the situation, then, as to which sorting (front-to-back or by vertex shader) would be optimal. I suppose I will take Hodgman's suggestion and make it configurable; this will make it easy to profile and compare performance as well.

In general I tend to understand [i]what[/i] kind of things will offer a performance boost, but to what extent, or how a specific optimization compares to another, I rarely can say for sure - especially in the context of OpenGL calls, where it's hard to really see what is going on. For example, I know that in general minimizing state changes, buffer binds, texture binds, shader changes, shader uniform updates, and draw calls, rendering in a specific order, and basically minimizing anything that requires data to be sent over the bus to the GPU all lead to performance gains, but doing ALL of these things simultaneously strikes me as difficult and quite complicated. I guess a broader question is how much I should be trying to accommodate [i]all[/i] of these versus a couple, and if the latter, which should be focused on above others?

[quote] For a long time changing pixel shader constants was as expensive as changing the shader itself (i.e. Geforce 9800 and older), so there isn't much to be gained by sorting by pixel shader. The hardware cost in changing the shader basically boils down to the hardware (possibly, or not!) needing to have finished processing all the pixels with the old shader before the new one can be loaded and executed. It isn't exactly a stall, but some of the shader cores are going to be idle for a tiny amount of time when that happens. The pathological case would be rendering a very small object (composed of fewer pixels than shader cores), switching the shader, and repeating the process over and over. The throughput approaches some low fraction of the hardware's best case (say 5-10%). I would imagine the newer hardware has addressed this to some degree, but nobody likes talking about it. If you are rendering a lot of pixels per draw call then the switch ends up only costing something like 0.01% of the draw instead of 90%. [/quote]

This is quite useful, thank you. I'll take this to imply that as long as I'm not frequently shading a very small number of pixels before switching fragment shaders, I shouldn't stress too much about swapping the shader. This is the kind of in-depth information that seems useful for answering the question above!

[quote] Most hardware has a way of doing a tri-strip restart index so you don't have to encode a degenerate connector with 2 vertices. This includes D3D (there is a query object to get the index if it is supported). The advantage to the restart is it doesn't pollute the post transform cache and chew up 2 of the vertices it stores. The other possibility is to just use indexed tri-lists all the time instead of strips. They are certainly much easier to work with, though the index buffers are 2-3x larger. [/quote]

As far as I can tell from a bit of googling, my options are indeed to use primitive restart or to use glMultiDrawElements to achieve this. I'm thinking of going with your latter suggestion, since my COLLADA importer already loads models as indexed triangles, and other functionality (like generating geometry for a plane or other primitives) I can easily change to render as triangles instead of triangle strips.
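For anything still loaded as strips, the conversion to indexed triangle lists is mechanical. A sketch under the usual conventions (the strip_to_list name is hypothetical; it assumes even/odd triangles alternate winding and that degenerate "connector" triangles repeat an index):

```cpp
#include <cstdint>
#include <vector>

// Hypothetical helper: expand a triangle-strip index buffer into an indexed
// triangle list. Flips winding on odd triangles and drops degenerates
// (triangles that reuse an index), as produced by strip connectors.
inline std::vector<std::uint32_t>
strip_to_list(const std::vector<std::uint32_t>& strip) {
    std::vector<std::uint32_t> list;
    for (std::size_t i = 2; i < strip.size(); ++i) {
        std::uint32_t a = strip[i - 2], b = strip[i - 1], c = strip[i];
        if (a == b || b == c || a == c) continue;   // skip degenerate connector
        if (i % 2 == 0) { list.push_back(a); list.push_back(b); }
        else            { list.push_back(b); list.push_back(a); }   // flip winding
        list.push_back(c);
    }
    return list;
}
```

As the quote notes, the resulting index buffer is roughly 3x the strip's size, but every draw can then go through one code path.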
[quote] As for unused vertex data, you should be able to split them into separate streams and have unused components be bound to a stream containing a single vertex with a stride of 0, and have it initialized to some nice value that works for all vertices (all 0's, etc). This cuts down on the vertex shader permutations, in that the shader can just assume most of the attributes exist and operate on them as if they do; as long as the math works with the dummy data it is a good tradeoff. [/quote]

By split them into separate streams, do you mean not pack them into interleaved structures? I see what you're saying; again, I'm just trying to figure out what is most optimal. My understanding is that interleaving arrays is faster in general (for caching purposes?), but that if you are going to update a specific attribute of a vertex frequently and not others, the frequently updated attribute should be stored elsewhere. I could be wrong, and I could be (probably am) overthinking this big time.

[quote] 64 bytes per vertex is way too big. Don't use floats unless you need them. e.g. 128-bit colour is most likely overkill. Use bytes or shorts or half-floats where possible. [/quote]

Doh. I probably should have been able to figure this one out myself. 128-bit color is most definitely overkill for what I'm doing.

For vertex data I have been tinkering with making the vertex "structure" configurable on a per-vertex-array basis. For example, a VertexArray will contain an array of

[code]
struct VertexAttribute
{
    GLenum type;       // i.e. GL_FLOAT, GL_UNSIGNED_BYTE
    GLenum components; // components per vertex, i.e. XYZ position would be 3
    uint32 offset;     // offset into the vertex where this attribute starts
};
[/code]

each indicating a different attribute like position, color, normal, etc., which can be dynamically added to a VertexArray depending on what data is pulled from COLLADA or what the process for generating geometry determines is necessary for rendering. Post-loading, some function will use these structs to determine the actual stride of a vertex and pack everything into either interleaved (or non-interleaved, I haven't decided yet) arrays for VBOs. Thanks again, to both of you.
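The post-loading stride computation described above might look something like this. This is a self-contained sketch, not the original code: the GLType enum and pack_attributes helper are stand-ins for the real GLenum constants and whatever function ends up doing the packing:

```cpp
#include <cstdint>
#include <vector>

// Stand-in for the GLenum type constants so the sketch is self-contained.
enum GLType : std::uint32_t { TYPE_FLOAT, TYPE_UNSIGNED_BYTE, TYPE_SHORT };

struct VertexAttribute {
    GLType        type;       // e.g. TYPE_FLOAT
    std::uint32_t components; // e.g. 3 for an XYZ position
    std::uint32_t offset;     // byte offset of this attribute within a vertex
};

// Size in bytes of one component of the given type.
inline std::uint32_t type_size(GLType t) {
    switch (t) {
        case TYPE_FLOAT:         return 4;
        case TYPE_SHORT:         return 2;
        case TYPE_UNSIGNED_BYTE: return 1;
    }
    return 0;
}

// Lay the attributes out back to back for an interleaved array, filling in
// each offset, and return the resulting vertex stride in bytes.
inline std::uint32_t pack_attributes(std::vector<VertexAttribute>& attrs) {
    std::uint32_t stride = 0;
    for (VertexAttribute& a : attrs) {
        a.offset = stride;
        stride += type_size(a.type) * a.components;
    }
    return stride;
}
```

The computed offsets and stride are exactly what glVertexAttribPointer wants for an interleaved VBO, so the attribute descriptions can drive both the packing and the draw-time setup.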
  12. Renderer Design

    As I rework my renderer into something more sophisticated than just brute-forcing its way through all the geometry in a scene, I've come up with a few questions.

- My first question involves rendering order. I read quite frequently that opaque geometry should be rendered front to back, and translucent geometry should be rendered back to front. I also read quite frequently that render data should be sorted on a per-material, per-texture and per-shader basis to avoid OpenGL state changes. Which of these sorts should take priority? I'm assuming it's the former, spatial sort, for a couple of reasons. Translucent geometry obviously HAS to be rendered in a particular order if you want transparency to be correct. I'd also guess that the benefit of early-out offered by rendering opaque geometry from front to back is greater than that offered by limiting state changes (obviously this can vary), though the benefit here is a lot less obvious and seemingly dependent upon the situation. That said, my current plan is to sort spatially and then sort that sorted data by material. I'm certain this will still offer some benefit, as static render data especially is likely to share material, texture and shader with spatially nearby geometry. (For example, my terrain geometry is broken up into small chunks to allow for only rendering visible terrain, but all this terrain shares the same material and chunks are right next to one another.) I'm really just curious as to how these two methods coincide. This is also relevant to where I store data, as data that is likely to be rendered at the same time should be stored in the same VBO to limit the number of bind-buffer calls.

- My next question is also somewhat related to rendering order. When I have data sorted by material, this [i]should[/i] allow me to make significantly fewer gl*Pointer/draw calls, since instead of having to do this per object I can do it per material (hopefully encapsulating several objects). However, to do so I would need to append vertices to objects depending on their rendering order, such that an object rendered first renders a degenerate triangle "going into" the next object. I'm not sure how I would do this if rendering order is dynamic. If it were fixed I could do this when the geometry is initially loaded into a VBO, but since it is not, I'd have to change this degenerate vertex per frame, which seems silly and makes me think I'm going about this incorrectly. This is something I haven't been able to wrap my head around for a while. Sorting by material without worrying about draw calls would at least mean I don't have to bind textures, activate shaders and send material data as frequently, which is good, but it also seems like it should be able to hugely limit the number of draw calls that must be made (which, from my understanding, is a big-time performance boost). I am just not clear how to go about it.

- My third question is less important, in my opinion, but currently I use the following structure to store vertex data:

[code]
struct RenderVertex
{
    float x, y, z;     // position
    float nx, ny, nz;  // normal
    float r, g, b, a;  // color
    float s0, t0;      // multi tex coords
    float s1, t1;
    float s2, t2;
};
[/code]

This works fine, but every time I use only position and tex coords, or position and normals, or some small subset of the struct, I cringe. Would it be worth it to have several different structures depending on the data I actually need for an object? Using this struct exclusively seems like a pretty monstrous waste of memory. I also recall that it is suggested your vertex structure size align to 32 bytes, but I am not sure how important that is.

These are the questions I can think of right now; hopefully they are clear. Thanks for any advice.
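For comparison, a hypothetical compact layout that halves the 64-byte RenderVertex (16 floats) above and lands exactly on the 32-byte figure. The field choices are illustrative assumptions, not a recommendation from the thread:

```cpp
#include <cstdint>

// One possible compact layout: bytes for color, shorts for a single
// normalized texture-coordinate set. Sizes shown per field are exact.
struct CompactVertex {
    float         x, y, z;    // position            (12 bytes)
    float         nx, ny, nz; // normal              (12 bytes)
    std::uint8_t  r, g, b, a; // color                (4 bytes)
    std::uint16_t s0, t0;     // normalized texcoords (4 bytes)
};                            // total: 32 bytes, no padding (4-byte alignment)

static_assert(sizeof(CompactVertex) == 32,
              "vertex should pack to the 32-byte target");
```

The byte color and short texcoords can be handed to OpenGL as normalized integer attributes, so the shader still sees floats in [0, 1] without the storage cost.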
  13. I generally feel like I get the most out of the things I fail horribly at multiple times. We've all been stuck and frustrated, but I daresay only a fraction of people understand the sheer joy one experiences after figuring out a bug six hours in. Quitting something you enjoy doing because it is challenging is, in my opinion, a very poor decision.
  14. OpenGL Texture and Vertex Shader

    Thanks everyone. Plugging in a basic fragment shader (in this case one that just sets the fragment color to white, for now at least) solved the problem.
  15. OpenGL Texture and Vertex Shader

    Makes sense; thanks to both of you. I will give that a shot as soon as I get home. One question comes to mind: since the values the vertex shader outputs are interpolated on their way to the fragment shader (notably the texture coordinates in this case), and my vertex shader doesn't output a texture coordinate (it wouldn't do that by default, would it?), how would the fragment shader determine the texture values to use? Or would it use some default, i.e. the same coordinate for every fragment? That would make sense, as the alpha fading seems to be uniform across the mesh (I think, as far as I can tell with just my eyeballs). This explanation also fits in that the fading seems to follow the same pattern I use to generate the height map (a rather simple wave function).