  • Latest Forum Activity

    • If you are running your own custom heap, you can also try to expand an existing block of memory when a new one is requested. This may require some custom logic (expand instead of allocate) but can be worthwhile because it keeps your heap as compact as possible. Fragmentation will always happen one way or another, but you can minimize it. There are techniques such as separate heap chunks for objects of a certain size, where small objects are grouped together and large ones likewise. You could also run some kind of garbage collection and memory reordering: instead of raw pointers, hand out a struct that wraps a pointer (its size is the same as a raw pointer, so your objects won't get larger) and that can lock access to the pointer while your heap manager reorders memory to eliminate free gaps, and so on. There are tons of possibilities.
    • You're right, there is tension between them. Communication between systems can require knowledge of other systems, but relying on other systems creates dependencies and unwanted coupling. Dependency inversion can help with part of that: when you've got multiple types of things, provide a base abstraction and implement all the details separately. This also helps with composing objects, since you can maintain a collection of abstract objects and never need to know (nor care) what the concrete objects are. Your systems will have knowledge of each other since that is required for interoperation, but they'll still be open for adding new features to your game without breaking old ones.

      Input handling is nearly always handled by a somewhat complex state machine, coupled with the Chain of Responsibility pattern. The input may be tested first against special high-priority commands, then against input modifiers, then passed to UI commands, then passed to game-simulation commands, and so on. Different game states can cause modules to be processed in different orders, or to be excluded from the processing queue entirely.

      Events or a message bus are quite common in games and provide a good way to broadcast general events. Many systems can attach event listeners: you may have a tutorial that responds to events, achievements that respond, and loggers that record all events matching some criteria. They're great for "fire and forget" situations where you're announcing "this happened" rather than doing specific processing. They need a bit of maintenance: messages that are never broadcast need their listeners and the message removed; messages that are broadcast but never handled need the broadcast and the message removed.

      No matter how you do it, the various systems will always need to interact. Factory methods, composition and collections of abstract interfaces, visitor objects, command objects, and many other patterns are often used to reduce the coupling between systems while still allowing interaction. Each has a cost, and that cost also needs to be balanced; sometimes, when all the costs are considered, tight coupling is the best option. On top of all that, you've got concerns about how each system is discovered. Dependency injection works if you know both the dependency and the system to inject into. A common compromise is a small number of well-known instances, such as a global structure instance (or a pointer to one) that holds pointers to the various major components, with your own rules and policies about when those well-known objects are valid and when they can be modified. Exactly how you handle it also varies based on specific needs.
    • Possibly your normals point into the terrain instead of outwards, or they are uninitialized?
    • I have a question about the "single responsibility principle". I'm a hobbyist gamedev. I try to organize code into different systems, but I often end up with a few classes that know a lot about other classes, and I'd like to know if there is a way to avoid this. As an example: let's say I'm working on a citybuilder (like Cities: Skylines, but really crappy) and I want the player to be able to build roads in the game. To build a road, a lot of different systems have to be used, for example:

      - InputHandling: handle mouse input (clicks, movements) to start and end roads and to determine the position and shape of the road
      - MeshBuilder: generate and constantly update a mesh for the road (or a mesh for a colored "proxy road" to visualize whether the road can be built)
      - Geometry/CollisionHandler: find the closest roads/junctions for "snapping" and check whether the new road collides with existing geometry
      - RoadNetwork (a directed graph of all roads and junctions, used for pathfinding and more): determine whether the road can be built (there may be some "logical" restrictions, e.g. only a limited number of roads can connect to a junction)
      - Scene: add/update the new Mesh/Model
      - UI: give feedback to the user in case of errors (in the sense of "You can't build here.")

      and more. I have no problem separating all that functionality into the mentioned systems or helpers. But then there is a level "above" all that, where I usually end up with classes (here a "RoadBuilderTool") that use all these systems to implement the stuff that actually needs to happen, i.e. the logic of building a road, and those classes depend on all the lower-level classes. "Separation of concerns", the "single responsibility principle" and similar guidelines tell me I should avoid that, but I don't see how to do it here. Perhaps events/messages could be used to avoid the coupling, but events also hide the flow of execution, and that's not great. This is just one example of many: I manage to organize and separate code into different systems, until I don't, and I'm not sure what a better way to do this would be (or even if there is one). Thanks for advice!
    • @matt77hias The code you posted really helps a lot! I like the way you structured it. I have replicated it in my own code and have learned a lot from it already, though I get a black material with no BRDF influence when I compile. It is likely a syntax and/or math issue on my part. I will press on with this tomorrow to try to resolve the issue. I have posted what I have at the moment. I am also not sure what the 'g_inv_pi' in your code was, so I removed it; that, itself, could be the issue. Regardless, I am happy that, at this point, I am able to code at least this much with no reference. Obviously, once the BRDF is implemented properly I will feel much more satisfied. Thanks again for your posts. I cannot express how much they help. I feel that I am close to putting this all together.

```hlsl
///// RasterizerState //////////////////////////////////////////////
RasterizerState disableCulling
{
    CullMode = NONE;
};

///// Vars /////////////////////////////////////////////////////////
float4x4 wvp            : WORLDVIEWPROJECTION;
float3   cameraPosition : CAMERAPOSITION;
float4x4 world          : WORLD;
float    light;
float3   Ks;
float3   Kd;
float    Ns;
float3   E_directional;
float3   lightDirection : DIRECTION < string Object = "DirectionalLight0"; >;

struct vsIn
{
    float3 normal   : NORMAL;
    float2 texCoord : TEXCOORD;
    float4 position : POSITION;
};

struct vsOut
{
    float4 position : SV_POSITION;
    float2 texCoord : TEXCOORD;
    float3 normal   : NORMAL;
    float3 view     : TEXCOORD2;
};

///// Phong Lighting ///////////////////////////////////////////////
float3 phongLighting(float3 n,             // The normalized surface normal vector.
                     float3 l,             // The normalized direction from the surface point to the light source (constant for directional lights).
                     float3 v,             // The normalized direction from the surface point to the eye.
                     float3 Kd,            // The diffuse reflection coefficient of the surface material.
                     float3 Ks,            // The specular reflection coefficient of the surface material.
                     float  Ns,            // The specular exponent of the surface material.
                     float3 E_directional) // The irradiance of the directional light source.
{
    const float3 r        = reflect(-l, n);                 // Calculate the reflection vector using HLSL intrinsics.
    const float  n_dot_l  = saturate(dot(n, l));            // Note the saturate!
    const float  v_dot_r  = saturate(dot(v, r));            // Note the saturate again!
    const float3 diffuse  = Kd;                             // Evaluate the diffuse part of the Phong BRDF.
    const float3 specular = Ks * pow(v_dot_r, Ns);          // Evaluate the specular part of the Phong BRDF.
    const float3 brdf     = diffuse + specular;             // Evaluate the complete Phong BRDF.
    const float3 radiance = brdf * E_directional * n_dot_l; // Combine the BRDF and the irradiance.
    return radiance;
}

///// Vertex Shader ////////////////////////////////////////////////
vsOut vs(vsIn IN)
{
    vsOut OUT = (vsOut)0;
    float3 worldPosition = mul(IN.position, world).xyz;
    OUT.position = mul(IN.position, wvp);
    OUT.view     = normalize(cameraPosition - worldPosition);
    return OUT;
}

///// Pixel Shader /////////////////////////////////////////////////
float4 ps(vsOut IN) : SV_TARGET
{
    float4 OUT;
    OUT.rgb = phongLighting(normalize(IN.normal), normalize(lightDirection),
                            normalize(IN.view), Kd, Ks, Ns, E_directional);
    OUT.a = 1;
    return OUT;
}

///// Technique ////////////////////////////////////////////////////
technique10 main10
{
    pass p0
    {
        SetVertexShader(CompileShader(vs_4_0, vs()));
        SetPixelShader(CompileShader(ps_4_0, ps()));
        SetRasterizerState(disableCulling);
    }
}
```

SICP Workshop


Workshop for those studying "Structure and Interpretation of Computer Programs."

21 topics in this forum

  1. Chapter 3

    • 13 replies
  2. Chapter 2

    • 40 replies
  3. Student status

    • 23 replies
  4. Start Here

    • 0 replies
  5. C++ Bindings

    • 0 replies
  6. SCIP Videos

    • 12 replies
  7. Link to the Text

    • 0 replies