    1. Past hour
    2. If you go to the "careers" tab and the "contractor" sub tab there are a few professional freelance artists that have posted ads on the site. There is also a job board you could post on if you're looking to hire someone. Professional artists looking to do paid work most likely won't come across this post because this is the hobby board, it's unpaid and mostly beginners. You could get some good free help here but it's most likely not what you're looking for.
    3. Fulcrum.013

      Beginning developing

      Of course it's planar. Have you seen any FPS where it isn't? It's just a kind of optimisation that allows using the same G vector for every object, instead of recalculating it for each object every frame, which would involve an expensive vector normalisation operation. 🙃
    4. krb

      Beginning developing

      Having worked in a low-level office job with countless guys obsessed with fantasy sports, I can confidently say you would still have a basement; it would just be filled with pictures and uniforms of grown men that catch a ball instead of whatever you have going on now. Maybe work on time management. I have the same problem sometimes: set times for meals and cook a few beforehand so you have stuff ready to go that's easy to make. That helped me eat more regularly. I've been using my game-making time to try and make projects for game jams. Having set goals and an end point makes the work seem finite and doable.
    5. Today
    6. Hi, As the post states, I need artwork for the game title/logo/whatever you want to call it. Possibly some UI art as well: buttons, borders, UI background, etc. Think medieval, fantasy, MMO. (Technically, it's an MMO, but it doesn't play like an MMO.) This is paid; let's work out a reasonable rate. Pay will be through PayPal. No, I will not pay you up front. We can draft a contract for the work if you feel more comfortable with that. Also, if you, or an artist you know, are great at illustrations and concept art, we can work out another deal/rate of pay for that artwork. If you want to do the art and are interested, please email me at addictcodercs@gmail.com and send samples/a portfolio.
    7. Randy Gaul

      Polygons and the Separating Axis Theorem

      The idea is that instead of computing a min/max interval along each axis, you can use the concept of support points. This can reduce some redundancy in your code and potentially make it easier to pinpoint the problem.
    8. Thanks for the advice everyone! I updated my resume with the feedback I received here, as well as with a link to my newly made portfolio website JonahBrooks.net. I'll try to commit to github more regularly so it doesn't look abandoned, and I'll try to make a more substantial game demo in the coming months. Thanks again for all the feedback and advice! Jonah_Brooks_Resume.pdf
    9. NikiTo

      Beginning developing

      It was never a business. Nobody took money from the kids, and we were all volunteers. I did for that kid what I would do for my own little brother, or for my own child. My conscience is clear. I am not volunteering there anymore. I tried it to see what it was like, but we were forced to force kids to code. Some kids didn't want to code and were watching YouTube instead. The organizers told me to somehow force them to code. Not cool with me. So I stopped volunteering there.
    10. Yesterday
    11. Hodgman

      DirectX interfaces

      They're inherited/implemented internally somewhere else (e.g. inside d3d11.dll). You're not supposed to inherit/implement them, because you're a client of the interface, not an author of the D3D runtime. The purpose of interfaces is to fully hide all implementation details from an object's clients (and sometimes also to enable polymorphism). In this case, we have no idea what the implementation of ID3D11Buffer looks like, because it's hidden behind the interface. This also allows MS to constantly change the implementation behind the scenes (with Windows Update, graphics drivers, etc.), and our software doesn't care, because we talk to those different implementations via the interface.
    12. Dirk Gregorius

      Polygons and the Separating Axis Theorem

      I would not implement it like this. I gave a presentation on the SAT; maybe this is helpful: http://media.steampowered.com/apps/valve/2013/DGregorius_GDC2013.zip
    13. swiftcoder

      Beginning developing

      Around these parts, we generally prefer to encourage newcomers to the field. We also prefer to keep discussion in the For Beginners forum at least somewhat relevant to helping out the original poster. With that kind of attitude, I can only hope you aren't in the business of teaching kids anymore. Not all kids are the same - I certainly would have taken an interesting project over sweets at any age.
    14. If you would like to see the full picture (three .pde files), you can view the code at my GitHub repository here. If interested you can also run the program yourself if you have Processing installed. Here though is the code which implements the SAT to detect collisions and create the MTV. Please tell me if you see any mistakes.

class CollisionHandler {
  PVector detectCollision(PShape shape1, PShape shape2) {
    float magnitude = 999999;
    PVector direction = new PVector();
    ArrayList<PVector> axes = new ArrayList<PVector>();
    axes.addAll(getAxes(shape1));
    axes.addAll(getAxes(shape2));
    for (int i = 0; i < axes.size(); ++i) {
      PVector axis = axes.get(i);
      PVector p1 = project(shape1, axis);
      PVector p2 = project(shape2, axis);
      if (isOverlap(p1, p2)) {
        float overlap = getOverlap(p1, p2);
        if (overlap < magnitude) {
          magnitude = overlap;
          direction = axis;
        }
      } else {
        return null;
      }
    }
    return direction.mult(magnitude);
  }

  ArrayList<PVector> getAxes(PShape shape) {
    ArrayList<PVector> axes = new ArrayList<PVector>();
    for (int i = 0; i < shape.getVertexCount(); ++i) {
      PVector v1 = shape.getVertex(i);
      PVector v2 = shape.getVertex(i + 1 == shape.getVertexCount() ? 0 : i + 1);
      PVector edge = v2.sub(v1);
      PVector axis = new PVector(-edge.y, edge.x);
      axis.normalize();
      axes.add(axis);
    }
    return axes;
  }

  PVector project(PShape shape, PVector axis) {
    float min = axis.dot(shape.getVertex(0));
    float max = min;
    for (int i = 1; i < shape.getVertexCount(); ++i) {
      float projection = axis.dot(shape.getVertex(i));
      if (projection < min) {
        min = projection;
      } else if (projection > max) {
        max = projection;
      }
    }
    return new PVector(min, max);
  }

  boolean isOverlap(PVector p1, PVector p2) {
    return p1.x < p2.y && p1.y > p2.x;
  }

  float getOverlap(PVector p1, PVector p2) {
    return p1.x < p2.y ? p1.y - p2.x : p2.y - p1.x;
  }
}
    15. Dirk Gregorius

      Polygons and the Separating Axis Theorem

      It seems you are computing the penetration wrong, which I guess results in the overshooting. The SAT test is really simple in 2D. You only need the vertices (in some order, CCW or CW) and the transform (translation and rotation). How do you compute the MTV?
    16. Here are two videos showing the collision handling between polygons. The collisions between the hexagon and the triangle are resolved as you would expect, with the hexagon being moved back by the MTV such that it is resting against the triangle. The collisions between the two triangles however are not resolved correctly, with the moving triangle jumping back a small distance upon collision resolution. I would like some help understanding why this happens and how it might be resolved. Also attached is an image showing two rectangles, both specified as being of width 200 pixels and height 140 pixels. One is made using the Processing rect() function and the other is a Polygon object constructed as described in my last post. As you can see, there is a noticeable disparity between their respective dimensions, with the Polygon object having smaller width and height. I would like some help understanding how to create the polygon such that its width and height are those specified by the parameters of its constructor, regardless of initial vertex rotation. Thank you. polygoncollision1.mp4 polygoncollision2.mp4
    17. Sergio Gardeazabal

      Airburner

      Airburner is an air combat game inspired by games like HAWX and Ace Combat, and also by old-school simulators such as F-22 Air Dominance Fighter. I'm trying to combine the best of two worlds: the freedom of a simulator (full axis control, no cutscenes in the middle of gameplay, no taking control of the camera mid-gameplay, etc.) and the quick action of an arcade air combat game. As I like to describe it, Airburner is a fast-paced air combat game where the player is always in control.

The game has been developed in Unity since August of 2017, but I had to pause development twice to work on other projects to raise money: a three-month pause in December, and a two-month pause in July. This is the first prototype I had working in December 2017. At that stage the game barely worked, because I was loading the whole world at the same time, which caused a lot of memory leaks and floating-point precision errors.

In the second stage of development, I started to address those problems by making a custom Unity editor tool that is basically a level streamer. I took the whole world and divided it into chunks, created a database with ScriptableObjects, and at runtime only load the tiles surrounding the player. Also, when the player moves from one tile to another, I move the whole world to the origin to avoid floating-point precision errors. This is a video of the tool working after a week of development.

Since then, over the next two months, I created the editor toolset needed to build a level with the main tool: an editor tool that allows me to load a specific sector; add models, waypoints, spawn points, etc.; and save it to the database. I also created a proxy system for the transform component, maintaining the interface using hiding, so I had to make minimal changes to my code at the time. That proxy system for the transform was necessary because, when streaming the level, there are a lot of elements that don't exist at runtime at any given moment, but I still need to know where they are: for example, waypoints. This is a screenshot of the current state of that editor tool. I plan to release it as an editor toolset on the Unity Asset Store in the future, but I will need to polish the interface for that. This is a video demo that I made in July of the gameplay, with only air enemies at the moment.

Today, after that last pause, I'm optimising my level streaming tool to gain extra performance that I could spend on a larger draw distance or better detail in general. One of those changes is detecting which terrain tiles are visible to the camera and not drawing the tiles that aren't visible. It needs some adjustments, but the result is promising.

Also, at this stage I consider all the art (sound, VFX, UI, 3D models) placeholder, because I'm not an artist; I made what I could, but it is not what I want for the game. So the plan is that when the game is at a more definitive stage of development, one that reflects my original design, I will look to hire artists, UX designers, sound designers, etc. I don't do that now because I don't have the funds yet, so I will probably seek funding or use the savings I am building right now. In the meantime, I'm working on the VFX; this is a video where I'm testing the first iteration of a prototype shader that I made with Amplify, for the Volumetric Explosion framework that I will be using in the game.
    18. Apparently so. I'm not sure why anyone would need that level of protection but there you have it. You can use a plain old int * to do the same thing, the only difference being it can point to any int, and not just one which is a member of class A.
    19. The error in fact means exactly what it says: you cannot use the same command list to record commands that use more than one swap chain buffer. To fix the error, you need to submit the command list and reset it before using it for the next buffer in the swap chain. As for the render targets, using the same back buffer in every frame is incorrect. You need to query current back buffer index from the swap chain every time after you present.
    20. Usually when I think of interfaces, or when I read about them, I see them as they are made in C#: there is a keyword for it, and it's used for classes that cannot be instantiated; they have to be inherited. In C++ they can be made by declaring a member function as a pure virtual function, with = 0, in which case they are called abstract classes. When I was coding with DirectX I used "DirectX interfaces". Actually, if I remember correctly, DirectX uses them for almost everything. An example would be ID3D11Buffer. I know that it's a COM interface, though I don't know much about COM. I never inherited them, though maybe someone would expect to, because they are interfaces. I always used them as "real" objects; I would get their pointer through some function like CreateBuffer() or a similar method. What confuses me is why they are "called" interfaces and have an "I" in front. Are they maybe totally different from the interfaces I described, like C# interfaces, and have some other intended use?
    21. No worries, the thread is not hijacked; it's actually what I wanted. This is not my code; I'm reading a book, and the author used this to explain something else, though I did not understand this part, which is why I'm asking. The declaration of "int A::*pInt;" in B::function() was intentional. So, does putting "A::" in front of it make it "bound" only to objects of A, so the pointer can access only A's members? I'm still confused about that, and about using the pointer with a "." when it's not declared in the original class A.
    22. Oh my god, I'm blind. The method actually has a return type of 'void', not 'Float3'. I guess this is what I get for copy-pasting the method registrations and forgetting to check the return types. This is the correct binding:

if(engine->RegisterObjectMethod("Physics", "void ClearVelocity() const", asMETHOD(Physics, ClearVelocity), asCALL_THISCALL) < 0)
{
    ANGELSCRIPT_REGISTERFAIL;
}

Thank you for the help.
    23. WitchLord

      X64 GCC native CallFunction gives invalid this pointer

      Most likely the problem is with how you've registered the Float3 type. Can you show me how you've registered it? Just the call to RegisterObjectType is needed. I also want to see the C++ declaration of the class so I can verify whether the registration is correct. Even without seeing your code, I would risk a guess and say that it should be registered like this:

engine->RegisterObjectType("Float3", sizeof(Float3), asOBJ_VALUE | asOBJ_POD | asGetTypeTraits<Float3>() | asOBJ_APP_CLASS_ALLFLOATS);

The last flag, asOBJ_APP_CLASS_ALLFLOATS, is probably what you're missing in your code. This flag has no effect on Windows, as the C++ ABI on Windows doesn't change with the members of the classes, but with gcc on Linux it makes a difference, which is why I believe this is the cause of your problem.
    24. Randy Gaul

      Polygons and the Separating Axis Theorem

      I'd like to help but I have trouble understanding what you need help with. Maybe you can try asking a more specific question as opposed to some paragraphs, and also if you can post a video/gif showing your problem that helps a lot as well.
    25. There is almost no information about this error on the internet, so I'm sharing the solution: the same (one) back buffer resource is used in consecutive frames. The most probable cause of this is using only the first descriptor from the render target descriptor heap in all frames, i.e. every frame doing this:

D3D12_CPU_DESCRIPTOR_HANDLE handle( rtvDescriptorHeap->GetCPUDescriptorHandleForHeapStart() );
commandList->OMSetRenderTargets( 1, &handle, false, nullptr );

instead of this:

D3D12_CPU_DESCRIPTOR_HANDLE handle( rtvDescriptorHeap->GetCPUDescriptorHandleForHeapStart() );
handle.ptr += backBufferId * rtvDescriptorSize;
commandList->OMSetRenderTargets( 1, &handle, false, nullptr );
    26. WitchLord

      Typo in scriptfilesystem.cpp

      Thanks. I've fixed this in revision 2548.
    27. I have programmed an implementation of the Separating Axis Theorem to handle collisions between 2D convex polygons. It is written in Processing and can be viewed on Github here. There are a couple of issues with it that I would like some help in resolving. In the construction of Polygon objects, you specify the width and height of the polygon and the initial rotation offset by which the vertices will be placed around the polygon. If the rotation offset is 0, the first vertex is placed directly to the right of the object. If higher or lower, the first vertex is placed clockwise or counter-clockwise, respectively, around the circumference of the object by the rotation amount. The rest of the vertices follow by a consistent offset of TWO_PI / number of vertices. While this places the vertices at the correct angle around the polygon, the problem is that if the rotation is anything other than 0, the width and height of the polygon are no longer the values specified. They are reduced because the vertices are placed around the polygon using the sin and cos functions, which often return values other than 1 or -1. Of course, when the half width and half height are multiplied by a sin or cos value other than 1 or -1, they are reduced. This is my issue. How can I place an arbitrary number of vertices at an arbitrary rotation around the polygon, while maintaining both the intended shape specified by the number of vertices (triangle, hexagon, octagon), and the intended width and height of the polygon as specified by the parameter values in the constructor? 
The Polygon code:

class Polygon {
  PVector position;
  PShape shape;
  int w, h, halfW, halfH;
  color c;
  ArrayList<PVector> vertexOffsets;

  Polygon(PVector position, int numVertices, int w, int h, float rotation) {
    this.position = position;
    this.w = w;
    this.h = h;
    this.halfW = w / 2;
    this.halfH = h / 2;
    this.c = color(255);
    vertexOffsets = new ArrayList<PVector>();
    if (numVertices < 3) numVertices = 3;
    shape = createShape();
    shape.beginShape();
    shape.fill(255);
    shape.stroke(255);
    for (int i = 0; i < numVertices; ++i) {
      PVector vertex = new PVector(position.x + cos(rotation) * halfW, position.y + sin(rotation) * halfH);
      shape.vertex(vertex.x, vertex.y);
      rotation += TWO_PI / numVertices;
      PVector vertexOffset = vertex.sub(position);
      vertexOffsets.add(vertexOffset);
    }
    shape.endShape(CLOSE);
  }

  void move(float x, float y) {
    position.set(x, y);
    for (int i = 0; i < shape.getVertexCount(); ++i) {
      PVector vertexOffset = vertexOffsets.get(i);
      shape.setVertex(i, position.x + vertexOffset.x, position.y + vertexOffset.y);
    }
  }

  void rotate(float angle) {
    for (int i = 0; i < shape.getVertexCount(); ++i) {
      PVector vertexOffset = vertexOffsets.get(i);
      vertexOffset.rotate(angle);
      shape.setVertex(i, position.x + vertexOffset.x, position.y + vertexOffset.y);
    }
  }

  void setColour(color c) {
    this.c = c;
  }

  void render() {
    shape.setFill(c);
    shape(shape);
  }
}

My other issue is that when two polygons with three vertices each collide, they are not always moved out of collision smoothly by the Minimum Translation Vector returned by the SAT algorithm. The polygon moved out of collision by the MTV does not rest against the other polygon as it should; it instead jumps back a small distance. I find this very strange, as I have been unable to replicate this behaviour when resolving collisions between polygons of other vertex quantities, and I cannot find the flaw in the implementation, though it must be there.
What could be causing this incorrect collision resolution, which from my testing appears to only occur between polygons of three vertices? Any help you can provide on these issues would be greatly appreciated. Thank you.
    28. NikiTo

      Beginning developing

      I was once making a shooting game in Scratch with a young boy at Coder Dojo. When the staff told everyone to take a break and pick up some sweets, that boy kept working in Scratch. When his mother came to take him home, I told her that it is not normal for a kid to skip sweets because of Scratch (almost every other kid left Scratch and went to the table with the sweets). I advised her not to bring that kid anymore and not to let him become a programmer.
    29. GoliathForge

      Suggestion for choosing challenges

      is a cheat or pre-made deterrent, so as not to nullify the challenge duration contract. lol... I would argue that the list should be quite large, with one new entry pushed for every one popped. That could add new flavor to potential entries and still keep it a mystery. Epic Ninja Coder.
    30. phil67rpg

      shooting bullets

      why did you downvote me? I am asking a very simple question
    31. Hi everybody, I, Xylvan, announce that Xilvan Design has been building 3D games since 1993. Our kindly official gaming-related pages (please click on each link): Soul of Sphere Platinum v3.75. Age of Dreams:Abyss of Atlantis v1.5. Lights of Dreams IV: Far Above the Clouds v9.17. Candy World II: Another Golden Bones v9.37. Candy Racing Cup: The Lillians Rallies v2.97. Candy World Adventures IV: A Cloud of Starfield v6.57. Candy to the Rescue IV: The Scepter of Thunders v7.07. Candy in Space III: A dog to the Space v5.47. Candy's Space Adventures: The Messages from the Lillians v17.27. Candy's Space Mysteries II: New Mission on the Earthlike Planets v7.27. Discover more than 10 games, coded in Blitz3D by Xylvan (Alexandre) of Xilvan Design. Download them from my new website; plenty of games await you HERE: - New Xilvan Design Website - Hope you will like them all! To watch the videos of our games: - Xilvan Design Youtube Channel - You may want to subscribe to our channel for more info about our new releases! Friendly, Alexandre L., Xilvan Design.
    32. Whatever system goes in place is fine by me because at the end of the day I'm just wanting to make a small game within a short period of time, no matter the game choice.
    33. lawnjelly

      Suggestion for choosing challenges

      I wonder whether having a random item from an approved list might take some of the potential fun out of choosing challenges? I'm sure a voting system could be figured out. If there was a large choice at the start, say 20 choices, each voter could maybe have votes worth 5, 4, 3, 2 and 1, and assign them to their favourites in order. Then the winning challenge would need at least 2 people to have given it a vote, or there could be a second stage as a tiebreaker. We could, if necessary, even do the voting manually, like the Eurovision Song Contest!
    34. The need to join the group is also an obstacle. It took me some time to figure out why I was unable to reply here. (Also, I did not notice the new baseball challenge at all just by looking at the front page and 'unread content', which is what I usually do.)
    35. Dirk Gregorius

      FBX SDK skinned animation

      No problem! I will try to help. Let's talk about the basic algorithm first; this should give you an idea what information/data you need, and should make it much easier for you to navigate the FBX file and retrieve that data. Finally we can talk about implementation details and common pitfalls and mistakes. There are four principal data elements:

1) A transform hierarchy (skeleton)
2) An animation clip
3) One or more meshes
4) A binding for each mesh to the skeleton (cluster)

The first thing you need is the skeleton. I already posted some code showing how to retrieve the skeleton, but let me know if you need more detail. There are two ways to describe a skeleton: hierarchical or linear. The hierarchical representation is a classic tree structure and could look something like this:

struct Bone
{
    Vector Translation;
    Quaternion Rotation;
    Matrix4x4 RelativeTransform;
    Matrix4x4 AbsoluteTransform;
    Matrix4x4 BindPose;
    Bone* Parent;
    std::vector< Bone* > Children;
};

struct Skeleton
{
    Bone* Root;
};

The second option is to 'linearize' the transform hierarchy. This is usually done by traversing in DFS or BFS order, which assures that a parent bone is located before all its children in the array. The linear representation can look something like this:

struct Transform
{
    Vector Translation;
    Quaternion Rotation;
};

struct Skeleton
{
    int BoneCount;
    std::vector< std::string > BoneNames;
    std::vector< int > BoneParents;
    std::vector< Transform > BoneBindPoses;
};

struct Pose
{
    Skeleton* Skeleton;
    std::vector< Transform > RelativeTransforms;
    std::vector< Transform > AbsoluteTransforms;
};

Note that I skipped scale, as it would only make things unnecessarily complicated here. The next thing is the animation clip, which you sample at each frame to retrieve a pose for the skeleton. A simple animation structure looks like this:

struct Animation
{
    int FrameCount;
    float FrameRate;
    std::vector< Vector > TranslationKeys;
    std::vector< Quaternion > RotationKeys;
};

So each frame you get the translation and rotation of each bone and write it into the Bone structure (hierarchical representation), or better, you just extract the current Pose (linear representation). The next step is to deform the mesh using the new skeleton pose. This is where the skeleton/mesh binding comes in. This binding is often referred to as a skin, and it tells you how much a bone influences a vertex of the mesh. Again there are two ways to describe the data, based on the association: either each bone knows which vertices it deforms, or each vertex knows by which bones it is deformed. This looks something like this:

struct Cluster
{
    Bone* Bone;
    std::vector< int > VertexIndices;
    std::vector< float > VertexWeights;
};

#define MAX_BONE_COUNT 4

struct Vertex
{
    Vector Position;
    Vector Normal;
    // ...
    int BoneIndex[ MAX_BONE_COUNT ];
    float BoneWeight[ MAX_BONE_COUNT ];
};

The final step is to apply the new pose to the mesh. For the cluster this looks something like this:

std::vector< Vector > DeformMeshPositions( int ClusterCount, const Cluster* Clusters, const Mesh* Mesh )
{
    std::vector< Vector > VertexPositions;
    VertexPositions.resize( Mesh->VertexCount );
    std::fill( VertexPositions.begin(), VertexPositions.end(), Vector::Zero );

    for ( int i = 0; i < ClusterCount; ++i )
    {
        const Cluster* Cluster = &Clusters[ i ];
        const Bone* Bone = Cluster->Bone;

        Matrix4x4 BoneTransform = Bone->AbsoluteTransform;
        Matrix4x4 BindPose = Bone->BindPose;
        Matrix4x4 Transform = BoneTransform * Inverse( BindPose );

        for ( int k = 0; k < int( Cluster->VertexIndices.size() ); ++k )
        {
            int VertexIndex = Cluster->VertexIndices[ k ];
            float VertexWeight = Cluster->VertexWeights[ k ];
            VertexPositions[ VertexIndex ] += VertexWeight * Transform * Mesh->VertexPositions[ VertexIndex ];
        }
    }

    return VertexPositions;
}

The algorithm takes a vertex in model space into the local space of the bone at bind time. Then it moves to the new location of the bone and transforms it back to model space. Think of it as attaching the vertex to the bone, then moving the bone, and finally detaching the vertex and releasing it at the new location. Finally, apply a weight to this new vertex position and sum it up.

HTH,
-Dirk

PS: IIRC the geometric transform is a concept from 3ds Max. It is a special transform which is applied to the associated shapes of a node, but it is not propagated down the hierarchy. This means you cannot just bake it into your bone transform. The simplest way to deal with it is to ignore it when retrieving the skeleton and then apply it to the mesh vertices when you read them. I'll look up some code and post it here. It is really simple.
    36. I was having trouble loading a model with Assimp; nothing was showing up. I decided to strip my code down to the bare minimum, so I went here https://www.braynzarsoft.net/viewtutorial/q16390-9-transformations and copied the code. I removed the d3dx stuff in it and some other things and got it to compile. Still nothing is showing up on screen. The code from that site should work... it worked before. Is something wrong with my computer or something? It looks fine from all I can see... even in RenderDoc. RenderDoc shows the vertices and indices correctly. Here is the code I'm using. Look at initDx, initScene, render, and updateScene and the shader. Doesn't everything look correct? I basically just copy/pasted the code and nothing is showing up. What would cause this? -edit- OK, well, I got it to show up, but it's distorted. At least something shows up. And I posted the wrong shader earlier.

#include "Source.h"
#include "DDSTextureLoader.h"

//Global Declarations - Interfaces//
IDXGISwapChain* SwapChain;
ID3D11Device* d3d11Device;
ID3D11DeviceContext* d3d11DevCon;
ID3D11RenderTargetView* renderTargetView;
ID3D11DepthStencilView* depthStencilView;
ID3D11Texture2D* depthStencilBuffer;
ID3D11Resource* tex;
ID3D11RasterizerState* rasterState;
ID3D11VertexShader* VS;
ID3D11PixelShader* PS;
ID3DBlob* VS_Blob;
ID3DBlob* PS_Blob;
ID3D11InputLayout* vertLayout;
ID3D11Buffer* fbx_vertex_buf;
ID3D11Buffer* fbx_index_buf;
ID3D11Buffer* cbPerObjectBuffer;
ID3D11ShaderResourceView* fbx_rc_view;
ID3D11SamplerState* fbx_sampler_state;
ID3D11ShaderResourceView* normal_rc_view;
ID3D11Buffer* sqVertexBuf;
ID3D11Buffer* sqIndexBuf;
std::vector<MeshData> meshList;
myConsole* con;

//Global Declarations - Others//
HWND hwnd = NULL;
HRESULT hr;
int Width = 300;
int Height = 300;
DirectX::XMMATRIX cube1World;
DirectX::XMMATRIX WVP;
///////////////**************new**************////////////////////
DirectX::XMMATRIX mesh_world;
///////////////**************new**************////////////////////
DirectX::XMMATRIX camView;
DirectX::XMMATRIX camProjection;
DirectX::XMVECTOR camPosition;
DirectX::XMVECTOR camTarget;
DirectX::XMVECTOR camUp;
///////////////**************new**************////////////////////
DirectX::XMMATRIX Rotation;
DirectX::XMMATRIX Scale;
DirectX::XMMATRIX Translation;
float rot = 0.01f;
///////////////**************new**************////////////////////

//Create effects constant buffer's structure//
struct cbPerObject
{
    DirectX::XMMATRIX WVP;
};
cbPerObject cbPerObj;

struct Vrx
{
    Vrx() {}
    Vrx(float x, float y, float z, float cr, float cg, float cb, float ca)
        : pos(x, y, z), color(cr, cg, cb, ca) {}
    DirectX::XMFLOAT3 pos;
    DirectX::XMFLOAT4 color;
};

D3D11_INPUT_ELEMENT_DESC layout[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "COLOR", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};
UINT numElements = ARRAYSIZE(layout);

// Event table for MyFrame
wxBEGIN_EVENT_TABLE(MyFrame, wxFrame)
    EVT_MENU(wxID_EXIT, MyFrame::OnQuit)
    EVT_CLOSE(MyFrame::OnClose)
wxEND_EVENT_TABLE()

// Implements MyApp& GetApp()
DECLARE_APP(MyApp)
// Give wxWidgets the means to create a MyApp object
IMPLEMENT_APP(MyApp)

bool MyApp::OnInit()
{
    // Create the main application window
    MyFrame *frame = new MyFrame(wxT("Worgen Engine Version 0"));
    // Show it
    frame->Show(true);
    return true;
}

void MyFrame::OnQuit(wxCommandEvent& event)
{
    // Destroy the frame
    Close();
}

void MyFrame::OnClose(wxCloseEvent& event)
{
    timer->Stop();
    //Release the COM Objects we created
    SwapChain->Release();
    d3d11Device->Release();
    d3d11DevCon->Release();
    renderTargetView->Release();
    event.Skip();
}

MyFrame::MyFrame(const wxString& title) : wxFrame(NULL, wxID_ANY, title, wxDefaultPosition)
{
    // Create a menu bar
    wxMenu *fileMenu = new wxMenu;
    // The "About" item should be in the help menu
    wxMenu *helpMenu = new wxMenu;
    helpMenu->Append(wxID_ABOUT, wxT("&About...\tF1"), wxT("About this program."));
    fileMenu->Append(wxID_EXIT, wxT("E&xit\tAlt - X"), wxT("Quit this program"));
    // Now append the freshly created menu to the menu bar...
    wxMenuBar *menuBar = new wxMenuBar();
    menuBar->Append(fileMenu, wxT("&File"));
    menuBar->Append(helpMenu, wxT("&Help"));
    // ... and attach this menu bar to the frame
    SetMenuBar(menuBar);
    // Create a status bar just for fun
    CreateStatusBar(2);
    SetStatusText(wxT("Welcome to Worgen Engine!"));
    nbHierarchy = new wxNotebook(this, wxID_ANY, wxDefaultPosition, wxSize(200, 300));
    nbScene = new wxNotebook(this, wxID_ANY, wxDefaultPosition, wxSize(800, 600));
    nbInspector = new wxNotebook(this, wxID_ANY, wxDefaultPosition, wxSize(200, 300));
    timer = new RenderTimer();
    console = new myConsole(wxSize(800, 300), wxTE_MULTILINE | wxTE_READONLY, this);
    timer->dxPanel = new MyDxPanel((MyFrame*)nbScene);
    wxPanel* hierarchyWindow = new wxPanel(nbHierarchy, wxID_ANY);
    nbHierarchy->AddPage(hierarchyWindow, "Hierarchy", false);
    nbScene->AddPage(timer->dxPanel, "Game", false);
    wxPanel* inspectorWindow = new wxPanel(nbInspector, wxID_ANY);
    nbInspector->AddPage(inspectorWindow, "Inspector", false);
    wxBoxSizer* sizer = new wxBoxSizer(wxHORIZONTAL);
    sizer->Add(nbHierarchy, 0, wxEXPAND, 0);
    sizer->Add(nbScene, 1, wxEXPAND, 0);
    sizer->Add(nbInspector, 0, wxEXPAND, 0);
    wxBoxSizer* console_sizer = new wxBoxSizer(wxVERTICAL);
    console_sizer->Add(sizer, 0, wxEXPAND, 0);
    console_sizer->Add(console, 0, wxEXPAND, 0);
    SetSizerAndFit(console_sizer);
    timer->dxPanel->c = console;
    timer->dxPanel->aLoader = new LoadMesh("C:\\Models\\wally.fbx", console, meshList);
    timer->dxPanel->initDx(timer->dxPanel->GetHWND());
    timer->dxPanel->initScene();
    timer->Start();
}

MyFrame::~MyFrame()
{
    delete timer;
}

wxBEGIN_EVENT_TABLE(MyDxPanel, wxPanel)
    EVT_PAINT(MyDxPanel::OnPaint)
    EVT_ERASE_BACKGROUND(MyDxPanel::OnEraseBackground)
wxEND_EVENT_TABLE()

MyDxPanel::MyDxPanel(MyFrame* parent) : wxPanel(parent)
{
    parentFrame = parent;
}

MyDxPanel::~MyDxPanel()
{
}

void MyDxPanel::OnEraseBackground(wxEraseEvent &WXUNUSED(event))
{
    //empty to avoid flashing
}

void MyDxPanel::updateScene()
{
    //Keep the cubes rotating
    rot += .05f;
    if (rot > 6.26f) rot = 0.0f;
    //Reset cube1World
    cube1World = DirectX::XMMatrixIdentity();
    //Define cube1's world space matrix
    DirectX::XMVECTOR rotaxis = DirectX::XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f);
    Rotation = DirectX::XMMatrixRotationAxis(rotaxis, rot);
    Translation = DirectX::XMMatrixTranslation(0.0f, 0.0f, 4.0f);
    //Set cube1's world space using the transformations
    cube1World = Rotation;
}

void MyDxPanel::render()
{
    //Clear our backbuffer
    float bgColor[4] = { 0.0f, 0.3f, 0.4f, 1.0f };
    d3d11DevCon->ClearRenderTargetView(renderTargetView, bgColor);
    //Refresh the Depth/Stencil view
    d3d11DevCon->ClearDepthStencilView(depthStencilView, D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, 1.0f, 0);
    //Set the WVP matrix and send it to the constant buffer in effect file
    WVP = cube1World * camView * camProjection;
    cbPerObj.WVP = XMMatrixTranspose(WVP);
    d3d11DevCon->UpdateSubresource(cbPerObjectBuffer, 0, NULL, &cbPerObj, 0, 0);
    d3d11DevCon->VSSetConstantBuffers(0, 1, &cbPerObjectBuffer);
    //Draw the first cube
    d3d11DevCon->DrawIndexed(36, 0, 0);
    //Present the backbuffer to the screen
    SwapChain->Present(0, 0);
}

void MyDxPanel::OnPaint(wxPaintEvent& event)
{
    wxPaintDC dc(this);
    updateScene();
    render();
}

void MyDxPanel::initDx(HWND wnd)
{
    //Describe our SwapChain Buffer
    DXGI_MODE_DESC bufferDesc;
    ZeroMemory(&bufferDesc, sizeof(DXGI_MODE_DESC));
    bufferDesc.Width = Width;
    bufferDesc.Height = Height;
    bufferDesc.RefreshRate.Numerator = 60;
    bufferDesc.RefreshRate.Denominator = 1;
    bufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    bufferDesc.ScanlineOrdering = DXGI_MODE_SCANLINE_ORDER_UNSPECIFIED;
    bufferDesc.Scaling = DXGI_MODE_SCALING_UNSPECIFIED;
    //Describe our SwapChain
    DXGI_SWAP_CHAIN_DESC swapChainDesc;
    ZeroMemory(&swapChainDesc, sizeof(DXGI_SWAP_CHAIN_DESC));
    swapChainDesc.BufferDesc = bufferDesc;
    swapChainDesc.SampleDesc.Count = 1;
    swapChainDesc.SampleDesc.Quality = 0;
    swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    swapChainDesc.BufferCount = 1;
    swapChainDesc.OutputWindow = wnd;
    swapChainDesc.Windowed = TRUE;
    swapChainDesc.SwapEffect = DXGI_SWAP_EFFECT_DISCARD;
    //Create our SwapChain
    hr = D3D11CreateDeviceAndSwapChain(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, NULL, NULL, NULL, D3D11_SDK_VERSION, &swapChainDesc, &SwapChain, &d3d11Device, NULL, &d3d11DevCon);
    //Create our BackBuffer
    ID3D11Texture2D* BackBuffer;
    hr = SwapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), (void**)&BackBuffer);
    d3d11Device->CreateRenderTargetView(BackBuffer, NULL, &renderTargetView);
    //Describe our Depth/Stencil Buffer
    D3D11_TEXTURE2D_DESC depthStencilDesc;
    depthStencilDesc.Width = Width;
    depthStencilDesc.Height = Height;
    depthStencilDesc.MipLevels = 1;
    depthStencilDesc.ArraySize = 1;
    depthStencilDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
    depthStencilDesc.SampleDesc.Count = 1;
    depthStencilDesc.SampleDesc.Quality = 0;
    depthStencilDesc.Usage = D3D11_USAGE_DEFAULT;
    depthStencilDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL;
    depthStencilDesc.CPUAccessFlags = 0;
    depthStencilDesc.MiscFlags = 0;
    //Create the Depth/Stencil View
    d3d11Device->CreateTexture2D(&depthStencilDesc, NULL, &depthStencilBuffer);
    d3d11Device->CreateDepthStencilView(depthStencilBuffer, NULL, &depthStencilView);
    //Set our Render Target
    d3d11DevCon->OMSetRenderTargets(1, &renderTargetView, depthStencilView);
}

void MyDxPanel::initScene()
{
    //Compile Shaders from shader file
    HR(D3DCompileFromFile(L"Effects.fx", 0, 0, "VS", "vs_5_0", 0, 0, &VS_Blob, 0));
    HR(D3DCompileFromFile(L"Effects.fx", 0, 0, "PS", "ps_5_0", 0, 0, &PS_Blob, 0));
    //Create the Shader Objects
    hr = d3d11Device->CreateVertexShader(VS_Blob->GetBufferPointer(), VS_Blob->GetBufferSize(), NULL, &VS);
    hr = d3d11Device->CreatePixelShader(PS_Blob->GetBufferPointer(), PS_Blob->GetBufferSize(), NULL, &PS);
    //Set Vertex and Pixel Shaders
d3d11DevCon->VSSetShader(VS, 0, 0); d3d11DevCon->PSSetShader(PS, 0, 0); ///////////////**************new**************//////////////////// //Create the vertex buffer Vrx v[] = { Vrx(-1.0f, -1.0f, -1.0f, 1.0f, 0.0f, 0.0f, 1.0f), Vrx(-1.0f, +1.0f, -1.0f, 0.0f, 1.0f, 0.0f, 1.0f), Vrx(+1.0f, +1.0f, -1.0f, 0.0f, 0.0f, 1.0f, 1.0f), Vrx(+1.0f, -1.0f, -1.0f, 1.0f, 1.0f, 0.0f, 1.0f), Vrx(-1.0f, -1.0f, +1.0f, 0.0f, 1.0f, 1.0f, 1.0f), Vrx(-1.0f, +1.0f, +1.0f, 1.0f, 1.0f, 1.0f, 1.0f), Vrx(+1.0f, +1.0f, +1.0f, 1.0f, 0.0f, 1.0f, 1.0f), Vrx(+1.0f, -1.0f, +1.0f, 1.0f, 0.0f, 0.0f, 1.0f), }; DWORD indices[] = { // front face 0, 1, 2, 0, 2, 3, // back face 4, 6, 5, 4, 7, 6, // left face 4, 5, 1, 4, 1, 0, // right face 3, 2, 6, 3, 6, 7, // top face 1, 5, 6, 1, 6, 2, // bottom face 4, 0, 3, 4, 3, 7 }; D3D11_BUFFER_DESC indexBufferDesc; ZeroMemory(&indexBufferDesc, sizeof(indexBufferDesc)); indexBufferDesc.Usage = D3D11_USAGE_DEFAULT; indexBufferDesc.ByteWidth = sizeof(DWORD) * 12 * 3; indexBufferDesc.BindFlags = D3D11_BIND_INDEX_BUFFER; indexBufferDesc.CPUAccessFlags = 0; indexBufferDesc.MiscFlags = 0; D3D11_SUBRESOURCE_DATA iinitData; iinitData.pSysMem = indices; d3d11Device->CreateBuffer(&indexBufferDesc, &iinitData, &sqIndexBuf); d3d11DevCon->IASetIndexBuffer(sqIndexBuf, DXGI_FORMAT_R32_UINT, 0); D3D11_BUFFER_DESC vertexBufferDesc; ZeroMemory(&vertexBufferDesc, sizeof(vertexBufferDesc)); vertexBufferDesc.Usage = D3D11_USAGE_DEFAULT; vertexBufferDesc.ByteWidth = sizeof(Vertex) * 8; vertexBufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER; vertexBufferDesc.CPUAccessFlags = 0; vertexBufferDesc.MiscFlags = 0; ///////////////**************new**************//////////////////// D3D11_SUBRESOURCE_DATA vertexBufferData; ZeroMemory(&vertexBufferData, sizeof(vertexBufferData)); vertexBufferData.pSysMem = v; hr = d3d11Device->CreateBuffer(&vertexBufferDesc, &vertexBufferData, &sqVertexBuf); //Set the vertex buffer UINT stride = sizeof(Vertex); UINT offset = 0; d3d11DevCon->IASetVertexBuffers(0, 1, 
&sqVertexBuf, &stride, &offset); //Create the Input Layout hr = d3d11Device->CreateInputLayout(layout, numElements, VS_Blob->GetBufferPointer(), VS_Blob->GetBufferSize(), &vertLayout); //Set the Input Layout d3d11DevCon->IASetInputLayout(vertLayout); //Set Primitive Topology d3d11DevCon->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST); //Create the Viewport D3D11_VIEWPORT viewport; ZeroMemory(&viewport, sizeof(D3D11_VIEWPORT)); viewport.TopLeftX = 0; viewport.TopLeftY = 0; viewport.Width = Width; viewport.Height = Height; viewport.MinDepth = 0.0f; viewport.MaxDepth = 1.0f; //Set the Viewport d3d11DevCon->RSSetViewports(1, &viewport); //Create the buffer to send to the cbuffer in effect file D3D11_BUFFER_DESC cbbd; ZeroMemory(&cbbd, sizeof(D3D11_BUFFER_DESC)); cbbd.Usage = D3D11_USAGE_DEFAULT; cbbd.ByteWidth = sizeof(cbPerObject); cbbd.BindFlags = D3D11_BIND_CONSTANT_BUFFER; cbbd.CPUAccessFlags = 0; cbbd.MiscFlags = 0; hr = d3d11Device->CreateBuffer(&cbbd, NULL, &cbPerObjectBuffer); //Camera information ///////////////**************new**************//////////////////// camPosition = DirectX::XMVectorSet(0.0f, 3.0f, -8.0f, 0.0f); ///////////////**************new**************//////////////////// camTarget = DirectX::XMVectorSet(0.0f, 0.0f, 0.0f, 0.0f); camUp = DirectX::XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f); //Set the View matrix camView = DirectX::XMMatrixLookAtLH(camPosition, camTarget, camUp); //Set the Projection matrix camProjection = DirectX::XMMatrixPerspectiveFovLH(0.4f*3.14f, (float)Width / Height, 1.0f, 1000.0f); } RenderTimer::RenderTimer() : wxTimer() { } void RenderTimer::Notify() { dxPanel->Refresh(); } void RenderTimer::start() { wxTimer::Start(10); } cbuffer cbPerObject { float4x4 WVP; }; struct VS_OUTPUT { float4 Pos : SV_POSITION; float4 Color : COLOR; }; VS_OUTPUT VS(float4 inPos : POSITION, float4 inColor : COLOR) { VS_OUTPUT output; output.Pos = mul(inPos, WVP); output.Color = inColor; return output; } float4 PS(VS_OUTPUT input) : 
SV_TARGET { return input.Color; }
    37. I was thinking there could be an approved challenge queue, and for every challenge a random item is picked from the queue and removed. This would eliminate the problems with voting, such as ties or everyone voting for their own idea. Otherwise, maybe only allow two ideas per vote, and increase that amount as more people jump on board.
    38. GoliathForge

      Suggestion for choosing challenges

      There's quite a bit of complication in the grab bag idea, for certain. I feel it would be a lot to ask of the web gurus here to implement. If this direction were desired, the community would have to self-manage until site functionality could trickle in based on what worked, or so I would imagine. Back on topic: I, for one, am still curious about other challenge ideas, even ones that seem way out there or weird, to broaden this discussion.
    39. Fulcrum.013

      Beginning developing

      One day, long years ago, I had such an interesting task, and it progressed so fast that I even forgot to sleep for 3 days. But really, productivity begins to decrease after 8 hours; after 16 hours it decreases dramatically. Short breaks every hour also help sustain productivity longer, and physical exercise is required to keep your head clear, stay productive longer, and keep a good mood. It doesn't have to be hard exercise, just regular: each morning and evening I get on my bike and pedal around the neighborhood buildings for 15 minutes. It also helps preserve a little more health. A cat can help significantly with this, especially a lady-cat. This cute tiny teammate demands a lot of attention, so when you ignore her for too long, she begins attracting yours. My little black home devil has already invented tricks for it, like "turn the router off", "drop the keyboard from the table", and "catch the fingers typing on her favorite pillow". And of course a cat is an endless source of good mood. Just stop dropping cigarette stubs directly on the floor first. A good trick for that: when you eat while coding, leave the empty plate on the table and use it as an ashtray; it is very hard to overflow it, even over 3 days. It is also a good idea to hire a cleaner who will clean at least the working room at least weekly. It usually doesn't cost much, but it can significantly improve the look of your workspace, and so improve your mood. It is also a good idea to remove anything unnecessary for development from the working room. My working room really has only a gamer's armchair and a big 1.8 x 0.5 meter table with no mood collectors like drawers; with the computer tower, three 27" monitors, and a projector installed on it, the free desk space fits no more than 2 plates, a cup, a keyboard, 2 joysticks, 2 USB hubs, a wireless mouse, and one black cat at a time. So it simply has no corners where mood and garbage can collect in big quantities. Really, women can be obsessed too: I know one family where the husband and wife have both been obsessed with competing over who is the better programmer since their first university year, and I guess their little daughter will join that competition very soon.
    40. NikiTo

      Beginning developing

      We swim in an ocean; we have to adapt or die. The ocean says: "you need a career and income to prove yourself as a human being! Nobody cares about inventions nobody paid you for!" If a person loses social contact with the external world, he can lose his social skills (I open my mouth and everybody instantly hates me. What did I say?!?!). If a person is not able to work in a team, he will find it very, very, very hard to get a job in programming. Modern programming practice expects people to work in teams. Everywhere, I was told by everyone: "even if you are the best programmer out there (I am not, just saying), if you cannot work in a team, you will not find work, you will have no money; no money, no worth as a human being, antidepressant pills, anime, hikikomori, hentai, basement..."
    41. lawnjelly

      Suggestion for choosing challenges

      I know I put in far too many suggestions, but the Atari list is a long list! We might need a special polling system to allow us each multiple votes, otherwise we will end up with just a few games with 1 vote each; I hope the forum supports this, lol. And maybe 2 rounds of voting. Good idea about artists and assets. A global grab bag of assets would be fun, especially if there were not enough artists to, e.g., randomly assign an artist to a programmer. We might need some kind of pre-agreement for the programmer to be able to distribute the game, like the assets being licensed as free as long as credit is given, one of the Creative Commons licenses.
    42. I'm trying to expand my framework to run on the Android NDK as well. Obviously I'm running into a few discrepancies between MSVC, UWP, NSK and the Android NDK. I've already ironed out a few other issues where Clang (3.8) needed things adjusted to compile, but one error has me stumped. I'm using a #define to declare and implement cloning methods (insert boo, hiss here). This one works fine on all but the Android NDK. The call inside a class called Component looks like this:

        DECLARE_CLONEABLE( Component, "Component" )

    The define looks like this:

        #define DECLARE_CLONEABLE( xClass, strClassName ) virtual ICloneAble* Clone() \
        { \
            xClass* pNewClass = new xClass( *this ); \
            pNewClass->m_ClassName = strClassName; \
            return pNewClass; \
        } \
        \
        static ICloneAble* CreateNew##xClass##() \
        { \
            xClass* pNewClass = new xClass(); \
            pNewClass->m_ClassName = strClassName; \
            return pNewClass; \
        }

    I'm using the first passed parameter to concatenate a function name (CreateNew##xClass##()). For some reason Clang complains:

        error : pasting formed 'CreateNewComponent(', an invalid preprocessing token
        1> DECLARE_CLONEABLE( Component, "Component" )
        1> ^
        1> p:/common\Interface/ICloneAble.h(15,74) : note: expanded from macro 'DECLARE_CLONEABLE'
        1> static ICloneAble* CreateNew##xClass##() \

    It seems it's irritated by the opening/closing parenthesis, but why? Besides the ugliness of this macro, can you show me how to avoid that error?

    Edit: I've managed to work around the issue by using a fixed name (CreateNewInstance instead of CreateNew##xClass##), but I'd still like to know why one is a problem and the other isn't.
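The trailing `##` is the culprit: the token-pasting operator must produce a single valid preprocessing token, and pasting `xClass` onto the following `(` forms `CreateNewComponent(`, which is not one token. GCC and MSVC are lax about this; Clang rejects it, as the standard requires. Dropping the second `##` fixes it. A minimal self-contained sketch (the ICloneAble base here is reconstructed from the error message, so its exact members are an assumption):

```cpp
#include <string>

struct ICloneAble {
    std::string m_ClassName;
    virtual ICloneAble* Clone() = 0;
    virtual ~ICloneAble() {}
};

// 'CreateNew' ## 'xClass' pastes two identifiers into one valid identifier.
// A second '##' would try to paste that identifier onto '(' -- invalid.
#define DECLARE_CLONEABLE( xClass, strClassName ) \
    virtual ICloneAble* Clone() \
    { \
        xClass* pNewClass = new xClass( *this ); \
        pNewClass->m_ClassName = strClassName; \
        return pNewClass; \
    } \
    static ICloneAble* CreateNew##xClass() \
    { \
        xClass* pNewClass = new xClass(); \
        pNewClass->m_ClassName = strClassName; \
        return pNewClass; \
    }

struct Component : ICloneAble {
    DECLARE_CLONEABLE( Component, "Component" )
};
```

This also explains why the `CreateNewInstance` workaround compiles: a fixed name needs no pasting at all, so there is no chance of forming an invalid token.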
    43. lawnjelly

      Beginning developing

      A healthy obsession with keeping yourself in shape is not a problem, but obsession with sport itself is a more questionable pursuit to spend all your time on than programming, imo. Sports are often full of random chance you have no control over, there are unlikely to be rewards except for a few lucky ones, and any day you can have an injury that ends a sporting career, rendering all that effort wasted. Whereas with programming, in nearly all circumstances you will still be able to use the skill you have gained. It is like the difference between putting your savings in a bank for high interest and gambling them at a casino. Who cares what women who are not right for you think? Don't waste time trying to impress them. Just find one (or several) who wants you for who you are and what you enjoy. Good quote here:
    44. NikiTo

      Beginning developing

      I spend most of my time doing investigation. If I just coded known techniques from technical papers, I would finish a 3D engine in less than a year. But this constant experimenting is eating 90% of my time. I would not try to code yet another engine/app if there were nothing new I think I could "invent"; we already have too many engines out there. Today I forgot to eat again, got very hungry, and my head started hurting. "I will just do that, I will just finish that chunk here... and go eat." The working places at home of some programmers look (sorry for saying it) miserable and depressing. The monitors hurt too. Having to dig into the corners of a 3D model inside the modeling software is painful on the eyes; I turn and turn and turn it until I find a corner from which I have visibility. It gives me headaches. If I were a woman, I would not date somebody so obsessed with something. Somebody could say the problem is that I am too obsessive, but what if I were obsessed with sports instead of programming? I would have a six-pack and medals. Now I have a basement...
    45. You could show the whole code (and screenshots for comparison); maybe we'll see something. Misplaced renormalization? PS: As an aside, one can avoid the manual format conversion (2*n - 1, etc.), at least with DX11 signed formats.
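To illustrate that aside with a plain C++ sketch of the mapping math (not D3D API code): with a `*_SNORM` texture format the sampler already returns values in [-1, 1], so the `2*n - 1` remap in the shader becomes unnecessary. The helper names below are made up for the example.

```cpp
#include <algorithm>

// What the GPU hands the shader for one 8-bit texel channel:

float SampleUnorm8(unsigned char b)   // DXGI_FORMAT_R8_UNORM: yields [0, 1]
{
    return b / 255.0f;
}

float UnormToSigned(float n)          // the manual remap done in the shader
{
    return 2.0f * n - 1.0f;
}

float SampleSnorm8(signed char s)     // DXGI_FORMAT_R8_SNORM: yields [-1, 1] directly
{
    return std::max(s / 127.0f, -1.0f); // -128 also maps to -1 per the D3D spec
}
```

So a normal map stored in an SNORM format can be sampled and used as-is, which removes one easy place to misplace the conversion.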
    46. skyemaidstone

      Terrain Bump Mapping - but not everywhere please

      Thanks guys, I tried lerping between the normal and the bump-mapped normal, but I must have put it in the wrong place. It works pretty nicely now. Weirdly, with bump mapping on, the whole terrain seems slightly darker even where the bump-mapped (blue) texture isn't being used in the splat/blend map, but I can live with that.
    47. Fulcrum.013

      Programming and Higher Mathematics

      Because standard math libraries have 4x4 matrices only, since both AVX and GPUs use 4-component vectors. Also, a 2D rotation is just a 3D rotation around the z axis, so it can be performed using the same mechanism. Matrices also have the feature of accumulating transformations, so you don't need separate code to apply each kind of transformation to the vertex buffer: you can accumulate the required sequence of transformations into a single matrix and then apply it to the vertices with a single vector-by-matrix multiplication per vertex. In an example that only translates vertices this may look really stupid, but in real usage, where it is unknown which transformations a matrix has accumulated, it is much simpler to apply any kind of transformation to vertices with a single kind of operation. So they just follow the concept of superposition, which says that optimal elements do not guarantee an optimal composition; in other words, to win big you have to sacrifice a little. Another similar example is search in a sorted array: linear search is cache friendly but has O(n) complexity, while binary search is not cache friendly but has O(log n) complexity, so to get a much faster algorithm we have to sacrifice the hardware optimization. With matrices it is a little more complex: you sacrifice a small algorithmic and memory optimization to win both algorithmic and hardware optimization at a much larger scale.
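The accumulation point can be shown with a toy 4x4 implementation (a sketch, deliberately not DirectXMath, using the row-vector convention that matches HLSL's `mul(inPos, WVP)` earlier in the thread): composing two transforms into one matrix and applying it once gives the same result as applying them one after the other.

```cpp
#include <array>
#include <cmath>

using Mat4 = std::array<std::array<float, 4>, 4>;
using Vec4 = std::array<float, 4>;

// Row-vector convention: v' = v * M, so translation lives in the last row.
Mat4 Identity() {
    Mat4 m{};
    for (int i = 0; i < 4; ++i) m[i][i] = 1.0f;
    return m;
}

Mat4 Mul(const Mat4& a, const Mat4& b) {          // accumulate: a then b
    Mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r[i][j] += a[i][k] * b[k][j];
    return r;
}

Vec4 Transform(const Vec4& v, const Mat4& m) {    // one multiply per vertex
    Vec4 r{};
    for (int j = 0; j < 4; ++j)
        for (int k = 0; k < 4; ++k)
            r[j] += v[k] * m[k][j];
    return r;
}

Mat4 Translation(float x, float y, float z) {
    Mat4 m = Identity();
    m[3][0] = x; m[3][1] = y; m[3][2] = z;
    return m;
}

Mat4 RotationZ(float a) {                         // 2D rotation as a z-axis 3D rotation
    Mat4 m = Identity();
    m[0][0] =  std::cos(a); m[0][1] = std::sin(a);
    m[1][0] = -std::sin(a); m[1][1] = std::cos(a);
    return m;
}
```

Usage: `Mat4 world = Mul(RotationZ(r), Translation(x, y, z));` then `Transform(v, world)` for each vertex; no per-vertex knowledge of which transforms were accumulated is needed.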
    48. Blend between the vertex normal and the bump-mapped one according to the road/blue channel? normalFromMap = lerp(PSIn.Normal, bumpMapped, splat.b); That could give a smoother transition, given the splat is fine. PS: JoeJ just beat me to it
    49. Simply finalNormal = normalize(bumpedNormal * blue + upVector * (1 - blue)) should work. The value of upVector depends on the space you work in; maybe (0,0,1).
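That formula is just a weighted average of the two normals, renormalized. A quick CPU-side sketch of the same math in plain C++ (illustrative only; the real thing belongs in the shader, and the helper names are made up):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// finalNormal = normalize(bumpedNormal * blue + upVector * (1 - blue))
Vec3 BlendNormal(Vec3 bumped, Vec3 up, float blue)
{
    Vec3 n{ bumped.x * blue + up.x * (1.0f - blue),
            bumped.y * blue + up.y * (1.0f - blue),
            bumped.z * blue + up.z * (1.0f - blue) };
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return { n.x / len, n.y / len, n.z / len };
}
```

At blue = 0 this returns the plain up/vertex normal, at blue = 1 the bump-mapped one, and in between a unit-length mix, which is what removes the hard switching line.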
    50. turanszkij

      Beginning developing

      It's great to hear I'm not the only one. That is not true in my experience, but you need to think long term and keep at it. My general advice would be to keep programming on the side, and one day you will be good. But don't just wait for it, you need to practice, practice, practice, and apply for jobs, that's how you will know when you are good enough. Also try to keep a balance, and practice/pursue other interests. For me, this is drawing and martial arts (tae kwon-do) at the moment. Both of these can also be mastered with a lot of practice, and they really help to stay motivated and not burnt out with programming.
    51. Hi guys, I wanted my roads to look a little more bumpy on my terrain, so I added in bump mapping based on what I had working for the rest of the models. It works and looks nice enough (I'll need to fiddle with the normal map to get the pebbles looking just the right amount of sharp), but a problem cropped up that hadn't occurred to me: I don't want it applied to the whole terrain, just the roads. The road texture is simply added using a blend map, with green for grass, red for rock, and blue for road; the more blue, the more the road texture is used. I don't want the other textures bump mapped. I mean, I guess I could, but for now I'd rather not. So the code is something like:

        float3 normalFromMap = PSIn.Normal;
        if (BumpMapping)
        {
            // read the normal from the normal map
            normalFromMap = tex2D(RoadNormalMapSampler, PSIn.TexCoord * 4);
            // transform to [-1,1]
            normalFromMap = 2.0f * normalFromMap - 1.0f;
            // transform into world space
            normalFromMap = mul(normalFromMap, PSIn.WorldToTangentSpace);
        }
        else
        {
            // transform to [-1,1]
            normalFromMap = 2.0f * normalFromMap - 1.0f;
        }
        // normalize the result
        normalFromMap = normalize(normalFromMap);
        // output the normal, in [0,1] space
        Output.Normal.rgb = 0.5f * (normalFromMap + 1.0f);

    I tried checking whether the blend map's blue component was > 0 and then using the bump mapping, but that just makes a nasty line where it switches between the vertex normal and the normal map. How do I blend between the two methods? Thanks
    52. mrMatrix

      FBX SDK skinned animation

      I'm just confused about what you're talking about and I've gotten lost. This is the last big roadblock for my gamedev endeavours. Is it possible for you to write a little more code for the CPU skinning side of things to get me going with the debugging? I'm sorry to be such a noob at this, but I just can't see the light at the end of the tunnel, if you know what I mean. Also, what did you mean by this on page 1? Could you explain this: // DON'T get the geometric transform here - it does not propagate! Bake it into the mesh instead! This makes it also easier to read the skinning from the clusters! // Export your mesh and bake the geometric transform into the vertex attributes! Also, your ReadAnimation function (the second one): could you clarify that it produces the final animation (skinning) matrix? I always read that I need to apply the inverse bind pose of the joint to that, but you don't specify how to get that exactly, although I've asked you once before in this thread and I wasn't able to deduce what you meant. So if you could clarify that as well, and whether and how to get the bind pose for the joint to go along with your ReadAnimation() function, that would be great.
    53. Hey all! We are a team of 3 looking for more members; we are making an isometric survival RPG. We are looking for members who can make low-poly 3D art: character models, environments, tools, and more. If you're interested and want to know more, email me at rioishere14@gmail.com