Prune

Members
  • Content count
    931

Community Reputation

223 Neutral

About Prune

  • Rank
    Advanced Member

Personal Information

  • Location
    Vancouver, Canada
  1. Is there any reason you're not using multi-draw indirect? With that, you can do a single draw call per shader. What I do is pack all of a given type of vertex attribute for all meshes into a single VBO (I don't interleave, because most passes only need positions: the shadow map passes and the z-prepass), then I build a separate command buffer for each <shader, draw-call> pair. Transforms and material properties, including bindless texture handles, live in other buffers that the shader indexes via indirectionBuffer[gl_DrawIDARB], and those that change can be written into persistently mapped buffers. The render thread then becomes very simple (a sketch of the command-buffer setup follows after this list):

     Init: persistently map all dynamic buffers; initial glFenceSync()

     Per frame:
       bool newSync(false);
       if (new dynamic update data available) {
           glClientWaitSync(...);
           // copy whatever changed (transforms, material parameters, texture handles,
           // indirection buffer, draw command buffer) into the triple buffer to begin the DMA transfer
           increment indexes;
           newSync = true;
       }
       ...
       if (newSync) glMemoryBarrier(GL_CLIENT_MAPPED_BUFFER_BARRIER_BIT);
       bind shadow shader, disable color writes
       for each shadow-casting light: bind render target and glMultiDrawElementsIndirectCountARB(...)
       bind z-prepass shader; glMultiDrawElementsIndirectCountARB(...)
       enable color writes
       for each shading pass: set state, bind shader, glMultiDrawElementsIndirectCountARB(...)
       if (newSync) syncs[index] = glFenceSync();
       for each postprocess pass: bind shader and glDrawArrays(GL_TRIANGLE_FAN, 0, 4); // no VAO/VBO bound; just use gl_VertexID to index a constant array in the shader

     The update data is created by the compute threads; on the render thread I only do timestamp-based interpolation to avoid jitter, since the compute threads run asynchronously.
  2. "The term embodied cognition is a philosophical term which has also been studied in psychology and it basically means that our rational thoughts are interconnected with our sensory experiences."   As someone well familiar with the work of one of the major figures in the field, the neurologist Damasio, I'd like to point out that the definition you give is misleading as it only looks at one half of what the body provides to cognition--sensory input. At least as important to the concept of embodied cognition, however, is that the body provides feedback to the mind on itself, not just the rest of the world through the senses. It does this in terms of feedback to the brain about the state of its internal millieu, proprioception (position of joings/muscles and body parts in general), and so on. This feedback is very tightly integrated with emotions and affects both conscious and unconscious cognitive processes. It provides a sort of anchor for the mental self. This is evolutionarily useful, as the mind's ultimate responsibility is maintaining optimal homeostasis in the body, and the benefit of a complex mind over a simple one is that it has the potential to take into consideration future changes in factors that would impact homeostasis, as opposed to being merely reactionary.   I think the most important result from research in embodied cognition that us computer science types ought to note is that it makes human-like AI a far more difficult problem than one of simulating intellect--an artificial mind cannot be human-like if you don't provide it all the input that a body normally provides, complete with the complex and rich feedback every physiological state change in the body causes, and full emulation of the numerous complex feedback loops between the mind and body.   I remember Kurzeweil talking about simulating whole brains at the neuron level back from the late 90s in his books, and now I laugh at his wildly optimistic projections o being able to do so by circa 2020. Yet, Google hired the guy (it boggles the mind). Embodied cognition puts one nail in that coffin: you'd have to simulate the body as well as the brain. And another factor which he ought to have known at the time puts the other nail in the coffin: he was referring to numbers of neurons and comparing to trends of numbers of transistors in supercomputers (a fallacy I see some people do to this day--!), but it's the number of synapses that matter. There are 150 trillion synapses in the human brain, and a synapse's electrical activity is far more complex than a transistor's. This still makes up for the brain's slow eletrochemical signal propagation by a few orders of magnitude.   To end my now-offtopic rant: I'm not saying AI is useless, just that human-like AI is something that is in the far future. We'll have intelligent machines in the coming decades, but they won't be able to understand us at a deep level, because their minds will be alien to ours, and the converse. I leave the question of whether that will have a practical impact on their utility (and/or danger) to us open to the reader.
  3. PhilTaylor, it's because I'd have to install the DirectX SDK just for this (I use OpenGL, not DX). SpaXe, thanks for the link. It seems to require DX10.1, so with my GTX 285-based card I had to run it in software emulation mode, though I don't know which hardware feature missing from DX10 these shadow mapping algorithms actually need... In any case, it's unfortunate that there's VSM and EVSM but no ESM for comparison. I also have to ask: which method both provides smooth shadows and avoids the artifact seen here with EVSM (VSM is even worse)? http://i56.tinypic.com/359gwu1.png I thought EVSM's purpose was exactly to avoid this sort of thing.
  4. There are no binaries at the download link, just source. Can someone post binaries? Thanks in advance.
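
Following up on item 1 above: the block below is a minimal sketch (not the poster's actual code) of the command-buffer side of that approach, i.e. a triple-buffered, persistently mapped GL_DRAW_INDIRECT_BUFFER guarded by fences and consumed with a single glMultiDrawElementsIndirect call per shader. It assumes an OpenGL 4.4+ context with GLEW, a bound VAO whose vertex/index buffers, shaders, and per-draw indirection/material buffers are set up elsewhere; kBuffers, kMaxDraws, initIndirectBuffer, and submitFrame are placeholder names, core glMultiDrawElementsIndirect stands in for the *CountARB variant, and coherent mapping is used instead of the explicit glMemoryBarrier from the pseudocode, purely to keep the sketch short.

    // Minimal sketch: triple-buffered, persistently mapped indirect command buffer.
    #include <GL/glew.h>
    #include <cstdint>
    #include <cstring>
    #include <vector>

    // Layout mandated by the spec for GL_DRAW_INDIRECT_BUFFER contents.
    struct DrawElementsIndirectCommand {
        GLuint count;          // index count for this draw
        GLuint instanceCount;  // usually 1
        GLuint firstIndex;     // offset into the index buffer
        GLuint baseVertex;     // value added to each index
        GLuint baseInstance;   // often reused as a per-draw ID
    };

    constexpr int    kBuffers  = 3;     // triple buffering
    constexpr GLuint kMaxDraws = 4096;  // placeholder upper bound on draws per frame

    GLuint                       gIndirectBuf = 0;
    DrawElementsIndirectCommand* gMapped      = nullptr;  // persistent pointer, valid for the app's lifetime
    GLsync                       gFences[kBuffers] = {};
    int                          gIndex = 0;

    void initIndirectBuffer()
    {
        const GLsizeiptr sliceBytes = kMaxDraws * sizeof(DrawElementsIndirectCommand);
        const GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT;

        glGenBuffers(1, &gIndirectBuf);
        glBindBuffer(GL_DRAW_INDIRECT_BUFFER, gIndirectBuf);
        // Immutable storage is required for persistent mapping.
        glBufferStorage(GL_DRAW_INDIRECT_BUFFER, kBuffers * sliceBytes, nullptr, flags);
        gMapped = static_cast<DrawElementsIndirectCommand*>(
            glMapBufferRange(GL_DRAW_INDIRECT_BUFFER, 0, kBuffers * sliceBytes, flags));
    }

    // Called once per frame with whatever the update/compute threads produced.
    void submitFrame(const std::vector<DrawElementsIndirectCommand>& cmds)
    {
        if (cmds.empty())
            return;

        // Don't scribble over a slice the GPU may still be reading.
        if (gFences[gIndex]) {
            glClientWaitSync(gFences[gIndex], GL_SYNC_FLUSH_COMMANDS_BIT, 1000000000ull);
            glDeleteSync(gFences[gIndex]);
            gFences[gIndex] = nullptr;
        }

        const GLsizei drawCount = static_cast<GLsizei>(cmds.size());
        std::memcpy(gMapped + gIndex * kMaxDraws, cmds.data(),
                    drawCount * sizeof(DrawElementsIndirectCommand));

        // One draw call for all of these meshes; the shader tells them apart via gl_DrawIDARB.
        glBindBuffer(GL_DRAW_INDIRECT_BUFFER, gIndirectBuf);
        const void* offset = reinterpret_cast<const void*>(
            static_cast<uintptr_t>(gIndex) * kMaxDraws * sizeof(DrawElementsIndirectCommand));
        glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT, offset, drawCount, 0);

        // Fence after the draw(s) that read this slice, then advance to the next slice.
        gFences[gIndex] = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
        gIndex = (gIndex + 1) % kBuffers;
    }

In the loop from item 1 the fence would be placed after the last pass that reads the slice rather than right after a single draw, and glMultiDrawElementsIndirectCountARB additionally sources the draw count from a GPU-side parameter buffer instead of taking it from the client.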