About tracegame

  1. Thank you for your great help, now I understand. :D
  2. The code below is from the book Real-Time Collision Detection, Chapter 5 (the ch5.cpp file). There are a couple of expressions I don't know where they come from:

     float denom = a*e-b*b;

     and

     (b*f - c*e) / denom

     What do these mean? I know a is the squared length of d1, e is the squared length of d2, and b is the projection of d1 onto d2, but what does a*e-b*b mean? Where does it come from, and for what reason? Is there a rule or formula related to it? Can somebody point me in the right direction to figure this out? Thanks.

     // Computes closest points C1 and C2 of S1(s)=P1+s*(Q1-P1) and
     // S2(t)=P2+t*(Q2-P2), returning s and t. Function result is squared
     // distance between S1(s) and S2(t)
     float ClosestPtSegmentSegment(Point p1, Point q1, Point p2, Point q2,
                                   float &s, float &t, Point &c1, Point &c2)
     {
         Vector d1 = q1 - p1; // Direction vector of segment S1
         Vector d2 = q2 - p2; // Direction vector of segment S2
         Vector r = p1 - p2;
         float a = Dot(d1, d1); // Squared length of segment S1, always nonnegative
         float e = Dot(d2, d2); // Squared length of segment S2, always nonnegative
         float f = Dot(d2, r);
         // Check if either or both segments degenerate into points
         if (a <= EPSILON && e <= EPSILON) {
             // Both segments degenerate into points
             s = t = 0.0f;
             c1 = p1;
             c2 = p2;
             return Dot(c1 - c2, c1 - c2);
         }
         if (a <= EPSILON) {
             // First segment degenerates into a point
             s = 0.0f;
             t = f / e; // s = 0 => t = (b*s + f) / e = f / e
             t = Clamp(t, 0.0f, 1.0f);
         } else {
             float c = Dot(d1, r);
             if (e <= EPSILON) {
                 // Second segment degenerates into a point
                 t = 0.0f;
                 s = Clamp(-c / a, 0.0f, 1.0f); // t = 0 => s = (b*t - c) / a = -c / a
             } else {
                 // The general nondegenerate case starts here
                 float b = Dot(d1, d2);
                 float denom = a*e-b*b; // Always nonnegative <== Here is the one
                 // If segments not parallel, compute closest point on L1 to L2, and
                 // clamp to segment S1. Else pick arbitrary s (here 0)
                 if (denom != 0.0f) {
                     s = Clamp((b*f - c*e) / denom, 0.0f, 1.0f); // <== Here is the other one
                 } else s = 0.0f;
                 // Compute point on L2 closest to S1(s) using
                 // t = Dot((P1+D1*s)-P2,D2) / Dot(D2,D2) = (b*s + f) / e
                 t = (b*s + f) / e;
                 // If t in [0,1] done. Else clamp t, recompute s for the new value
                 // of t using s = Dot((P2+D2*t)-P1,D1) / Dot(D1,D1) = (t*b - c) / a
                 // and clamp s to [0, 1]
                 if (t < 0.0f) {
                     t = 0.0f;
                     s = Clamp(-c / a, 0.0f, 1.0f);
                 } else if (t > 1.0f) {
                     t = 1.0f;
                     s = Clamp((b - c) / a, 0.0f, 1.0f);
                 }
             }
         }
         c1 = p1 + d1 * s;
         c2 = p2 + d2 * t;
         return Dot(c1 - c2, c1 - c2);
     }
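For reference, both expressions fall out of minimizing the squared distance between the two infinite lines. A sketch of the derivation, using the same a, b, c, e, f as the code:

```latex
% Squared distance between points on the two lines, with r = P1 - P2:
F(s,t) = \lVert r + s\,\mathbf{d}_1 - t\,\mathbf{d}_2 \rVert^2

% Setting \partial F/\partial s = \partial F/\partial t = 0 gives a
% 2x2 linear system in s and t:
a s - b t = -c, \qquad b s - e t = -f

% Cramer's rule yields
s = \frac{b f - c e}{a e - b^2}, \qquad t = \frac{a f - b c}{a e - b^2}
```

The denominator \(ae - b^2 = \lVert d_1\rVert^2\lVert d_2\rVert^2 - (d_1\cdot d_2)^2 \ge 0\) by the Cauchy-Schwarz inequality, with equality exactly when the segments are parallel, which is why the code tests `denom != 0.0f` before dividing.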
  3. Found a friend to help work this out; I'm really not good at algebra. Why prove it? Because usually I just copy and paste code without thinking, and now I'm trying to do things step by step and figure out why the code works. No more black-box coding. Anyway, thanks for the reply; I posted in a few places and only got a reply here. [attachment=31191:Momentum.png]
  4. I read a physics book, Foundation AS3 Animation: Making Things Move, and there is an equation I cannot prove. Why do

     (m0 * v0) + (m1 * v1) = (m0 * v0F) + (m1 * v1F)
     (m0 * v0^2) + (m1 * v1^2) = (m0 * v0F^2) + (m1 * v1F^2)

     give

     v0F = ((m0 - m1) * v0 + 2 * m1 * v1) / (m0 + m1)
     v1F = ((m1 - m0) * v1 + 2 * m0 * v0) / (m0 + m1)

     [attachment=31182:momentum_prove.png]
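For what it's worth, the result follows from the two conservation equations (momentum and kinetic energy) by elementary algebra. A sketch of the derivation, with the same symbols as the post above:

```latex
% Rearrange momentum and energy conservation:
m_0 (v_0 - v_{0F}) = m_1 (v_{1F} - v_1)
\qquad
m_0 (v_0^2 - v_{0F}^2) = m_1 (v_{1F}^2 - v_1^2)

% Factor each difference of squares and divide the second
% equation by the first:
v_0 + v_{0F} = v_1 + v_{1F}

% Substitute v_{1F} = v_0 + v_{0F} - v_1 into momentum conservation
% and solve for each final velocity:
v_{0F} = \frac{(m_0 - m_1)\,v_0 + 2 m_1 v_1}{m_0 + m_1},
\qquad
v_{1F} = \frac{(m_1 - m_0)\,v_1 + 2 m_0 v_0}{m_0 + m_1}
```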
  5. OpenGL

     OK, finally found a book that covers this: Real-Time Collision Detection by Christer Ericson, Chapter 5.3.
  6. Thank you for the information and keywords, really helpful. I came back to post some actual code I found on GitHub: an HTML Mario game with gap dash and some other cool stuff, a complete Mario 1 clone. It works in my Chrome but not Firefox, which is a little strange, but it may help someone, so I'll just leave the link. https://github.com/FullScreenShenanigans/FullScreenMario
  7. Well, "dash" here really means run, since Mario does not have a burst dash like Mega Man. I'm not good at English; "gap dash" is a term I googled, and maybe it's called something else. But I guess you know what I mean if you have played Mario: running across a one-tile gap works in every version of Mario.
  8. I have written a simple 2D platform game, and I'd like to write a Mario clone myself. The problem is how to do a gap dash: in running mode I can dash forward over a one-tile gap, while walking I drop into the gap and fall down. I don't think just changing the speed guarantees I clear the gap tile, because depending on where I start I may still run exactly onto the hole position. Then I thought maybe I could enlarge the collision rectangle by 1 or 2 pixels while in running mode, which would guarantee that on a one-tile gap I do not fall in, because the enlarged rectangle covers the whole collision area. But I am not sure this is a good idea. What's the usual way to do gap-dash collision? I want to learn some new and better ideas.
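To make the idea concrete, here is a minimal sketch of one way to express the "bridge a one-tile gap while running" rule as a grounding test. The grid layout, function name, and rule are my own illustration, not from any Mario source:

```cpp
#include <cassert>
#include <vector>

// groundRow: one row of tiles under the player, 1 = solid, 0 = gap.
// Returns true if the player standing on tile column `col` should stay
// "grounded" while moving right. When running, we also test the tile one
// column ahead, so a single-tile gap is skipped over instead of fallen into.
bool staysGrounded(const std::vector<int>& groundRow, int col, bool running)
{
    if (groundRow[col] == 1) return true; // solid under our feet
    if (!running) return false;           // walking: fall into the gap
    // Running: bridge a gap of exactly one tile if the far side is solid.
    return col + 1 < (int)groundRow.size() && groundRow[col + 1] == 1;
}
```

This amounts to widening the ground probe by one tile while in the run state, which is the same effect as enlarging the collision rectangle, but it keeps the player's actual collision box unchanged for walls and enemies.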
  9. Hi everyone. I am new to ray tracing, and I found an example on Shadertoy: https://www.shadertoy.com/view/ld2Gz3. With a little work I can run this shader in OpenGL now. My question is about an equation I do not understand, in the ray-sphere intersection. Here is the code:

     void intersectSphere(const Sphere sphere, const Ray ray, Material material, inout Output o)
     {
         vec3 d = ray.origin - sphere.origin;
         float a = dot(ray.direction, ray.direction);
         float b = dot(ray.direction, d);
         float c = dot(d, d) - sphere.radius * sphere.radius;
         float g = b*b - a*c;
         if (g > 0.0) {
             float dis = (-sqrt(g) - b) / a;
             if (dis > 0.0 && dis < o.dis) {
                 o.dis = dis;
                 o.origin = ray.origin + ray.direction * dis;
                 o.normal = (o.origin - sphere.origin) / sphere.radius;
                 o.material = material;
             }
         }
     }

     As I know it, g should equal b*b - 4*a*c, and dis should equal (-sqrt(g) - b) / (2*a). But this code uses g = b*b - a*c and dis = (-sqrt(g) - b) / a, and it really works. Why? What's going on here?
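The trick is that this `b` is half of the textbook quadratic's coefficient: the shader's quadratic is `a*t*t + 2*b*t + c = 0`, so the factor of 4 in the discriminant and the factor of 2 in the root both cancel. A quick standalone numeric check (the helper names and test values are mine, chosen for illustration):

```cpp
#include <cassert>
#include <cmath>

// "Half-b" form used by the shader: solves a*t^2 + 2*b*t + c = 0.
float hitHalfB(float a, float b, float c)
{
    float g = b * b - a * c;          // reduced discriminant
    return (-std::sqrt(g) - b) / a;   // nearest root
}

// Textbook form: solves a*t^2 + B*t + C = 0.
float hitFull(float a, float B, float C)
{
    float g = B * B - 4.0f * a * C;
    return (-B - std::sqrt(g)) / (2.0f * a);
}
```

For a ray from (0, 0, -5) along +z against a unit sphere at the origin: a = 1, b = dot(dir, origin - center) = -5, c = 24, and both forms return distance 4 (the ray enters the sphere at z = -1). Substituting B = 2b and C = c into the full form reproduces the half-b form algebraically.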
  10. Thank you, a PBO is exactly what I want.
  11. New to OpenGL. I want to write pixels to a texture one by one. In Direct3D 11 I do something like this:

     hr = mDeviceContext->Map(pNewTexture, subresource, D3D11_MAP_READ_WRITE, 0, &resource);
     float* source = static_cast<float*>(resource.pData);
     // then I can operate on source directly
     for (int i = 0; i < 512; ++i)
     {
         source[i] = height;
     }
     // then unmap it
     mDeviceContext->Unmap(pNewTexture, 0);
     // and copy to the texture I want
     mDeviceContext->CopyResource(audioTex, pNewTexture);

     What is the equivalent code in OpenGL? I tried writing:

     glBindBuffer(GL_TEXTURE_BUFFER, tex2);
     void *ptr = glMapBuffer(GL_TEXTURE_BUFFER, GL_WRITE_ONLY);
     unsigned char data[] = {0, 100, 100, 100};
     memcpy(ptr, data, sizeof(data));
     glUnmapBuffer(GL_TEXTURE_BUFFER);
     glBindBuffer(GL_TEXTURE_BUFFER, 0);

     but this does not seem to change the texture. Instead it accidentally changed my vertex positions, which is strange. What is the right way to do this?
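For reference, the usual mapped-write path in OpenGL (the PBO route mentioned in the reply above) goes through a pixel buffer object bound to GL_PIXEL_UNPACK_BUFFER; while it is bound, glTexSubImage2D sources its pixels from that buffer. Binding `tex2` (a texture name, not a buffer name) to a buffer target is likely what clobbered the vertex data. A sketch, assuming a current GL context and an existing GL_RGBA8 GL_TEXTURE_2D named `tex` of size w x h (all names are mine); this fragment cannot run without a live GL context:

```cpp
// Create the PBO once and size it for one frame of pixel data.
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glBufferData(GL_PIXEL_UNPACK_BUFFER, w * h * 4, NULL, GL_STREAM_DRAW);

// Map the PBO and write pixels one by one, like D3D11's Map/Unmap.
unsigned char* ptr =
    (unsigned char*)glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
for (int i = 0; i < w * h * 4; ++i)
    ptr[i] = 100; // whatever pixel data you need
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);

// With a PBO bound to GL_PIXEL_UNPACK_BUFFER, the last argument of
// glTexSubImage2D is a byte offset into the PBO, not a client pointer.
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
```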
  12. Fixed the sprite render problem: it was caused by the wrong buffer-setting order in the tutorial author's code. The transparent-line problem is because of the window style, which is strange, and I thought I might need some help. In the create-window part:

     int nStyle = WS_CLIPSIBLINGS | WS_CLIPCHILDREN | WS_POPUP;
     //int nStyle = WS_OVERLAPPED | WS_SYSMENU | WS_VISIBLE | WS_MINIMIZEBOX;
     hwnd = CreateWindowEx(WS_EX_APPWINDOW, windowTitle, windowTitle, nStyle,
                           posX, posY, screenWidth, screenHeight,
                           NULL, NULL, m_hInstance, NULL);

     If I use the WS_POPUP style, the same as the Rastertek DX11 tutorial did, the line displays as opaque, which is what I need. But that makes the window undraggable, with no minimize button and no window border, which is not good for dragging around. If I use the WS_OVERLAPPED style, some of the lines display as transparent, which is not what I want. So how can I set the window style to be draggable while the line displays as opaque? WS_OVERLAPPED and WS_POPUP don't seem to work together.

     OK, solved. Leaving the link for whoever has the same problem: the window size and the client size are not the same, so the lines only looked transparent because the client image was being scaled away from its original aspect ratio.

     http://stackoverflow.com/questions/431470/window-border-width-and-height-in-win32-how-do-i-get-it
  13. Thank you, Buckeye, for replying so patiently, and apologies for my bad English and for posting so many scary code segments. I guess the tutorial's render pipeline is wrong, because I found another example in the Rastertek DX11 series, Tutorial 26: Transparency, dropped my draw-line function in there, and everything works fine: no sprite size problem, no cancelled draw-line problem, no transparent-line problem at all. I think I can manage to fix the problem by myself now. Thanks again for the kind reply.
  14. 1. Check that the draw-line and the draw-sprite functions each work correctly by themselves. Yes, they do, and no, they don't when they run together. If I run only one type, it really works.

      2. Check the states each function must have set correctly, and ensure they're set as necessary by each function at the time it's called. Since I am new to DX I am not 100% sure of this. I just followed the tutorial, and there is not much explanation in that video, so some of the code I don't really understand very well.

      3. Does your call m_entityManager->Render( ... ) work properly if blending is disabled? Same as right now: rendering a single object works with Z off and blend off; rendering both still gives the wrong size, or reversed sizes. Does this mean the render state is wrong? What should I do, post some state functions up here? To be honest, that bunch of state descriptions really scares me.
  15. Thanks for the reply. I don't get any errors or warnings right now. By the way, the project is small, just a few files, nothing big, and I use the DirectX SDK 2010, so the libraries are pretty old, nothing new. I can post some code segments, but I am not sure they will make much sense.

     The project's main render function begins with Engine::Render:

     void Engine::Render()
     {
         m_graphics->BeginScene(BG_COLOR_R, BG_COLOR_G, BG_COLOR_B, BG_COLOR_A);
         // render stuff goes here
         D3DXMATRIX viewMatrix = m_camera->GetViewMatrix();
         D3DXMATRIX projMatrix = m_camera->GetOrthoMatrix();
         m_entityManager->Render(m_graphics->GetDeviceContext(), viewMatrix, projMatrix);
         if (m_gameComponent != NULL)
         {
             m_graphics->EnableAlphaBlending(false);
             m_graphics->EnableZBuffer(false);
             D3DMATRIX orthMatrix = m_camera->GetOrthoMatrix();
             m_gameComponent->Render(m_graphics->GetDeviceContext(), viewMatrix, orthMatrix);
         }
         m_graphics->EndScene();
     }

     Here are m_graphics->BeginScene and m_graphics->EndScene():

     void Graphics::BeginScene(float r, float g, float b, float a)
     {
         m_dxManager->BeginScene(r, g, b, a);
     }
     void Graphics::EndScene()
     {
         m_dxManager->EndScene();
     }

     And in DXManager:

     void DXManager::BeginScene(float r, float g, float b, float a)
     {
         float color[4];
         color[0] = r;
         color[1] = g;
         color[2] = b;
         color[3] = a;
         // clear the back buffer
         m_deviceContext->ClearRenderTargetView(m_renderTargetView, color);
         // clear the depth buffer
         m_deviceContext->ClearDepthStencilView(m_depthStencilView, D3D11_CLEAR_DEPTH, 1.0f, 0);
     }
     void DXManager::EndScene()
     {
         if (m_vsync_enabled)
         {
             // Lock to screen refresh rate
             m_swapChain->Present(1, 0);
         }
         else
         {
             // Present as fast as possible
             m_swapChain->Present(0, 0);
         }
     }

     Now m_entityManager->Render(...):

     void EntityManager::Render(ID3D11DeviceContext* deviceContext, D3DXMATRIX viewMatrix, D3DXMATRIX projectionMatrix)
     {
         int size = (int)m_entities.size();
         for (int i = 0; i < size; ++i)
         {
             m_entities[i]->Render(deviceContext, viewMatrix, projectionMatrix);
         }
         // This is what I'm talking about: if I comment out the loop above and
         // instead call the lines below one at a time, it renders correctly.
         // For example, rendering only m_entities[0] is correct, and rendering
         // only m_entities[1] is also correct, but if I render them both, or use
         // the loop above, the sprites display at the wrong size.
         //m_entities[0]->Render(deviceContext, viewMatrix, projectionMatrix);
         //m_entities[1]->Render(deviceContext, viewMatrix, projectionMatrix);
     }
     void Entity::Render(ID3D11DeviceContext* deviceContext, D3DXMATRIX viewMatrix, D3DXMATRIX projectionMatrix)
     {
         if (m_sprite)
         {
             m_sprite->Render(deviceContext, m_worldMatrix, viewMatrix, projectionMatrix);
         }
     }
     void AnimatedSprite::Render(ID3D11DeviceContext* deviceContext, D3DXMATRIX worldMatrix, D3DXMATRIX viewMatrix, D3DXMATRIX projectionMatrix)
     {
         /*Engine::GetEngine()->GetGraphics()->EnableAlphaBlending(true);
         Sprite::Render(deviceContext, worldMatrix, viewMatrix, projectionMatrix);
         Engine::GetEngine()->GetGraphics()->EnableAlphaBlending(false);*/
         // alpha blend
         Engine::GetEngine()->GetGraphics()->EnableAlphaBlending(true);
         if (m_texture)
         {
             m_shader->SetShaderParameters(deviceContext, m_texture->GetTexture());
             m_shader->SetShaderParameters(deviceContext, worldMatrix, viewMatrix, projectionMatrix);
             m_shader->UpdateTransparentBuffer(deviceContext, 1.0f);
             //m_shader->UpdateTransparentBuffer(deviceContext, 0.3f);
             m_vertexBuffer->Render(deviceContext);
         }
         Engine::GetEngine()->GetGraphics()->EnableAlphaBlending(false);
     }
     void Sprite::Render(ID3D11DeviceContext* deviceContext, D3DXMATRIX worldMatrix, D3DXMATRIX viewMatrix, D3DXMATRIX projectionMatrix)
     {
         if (m_texture)
         {
             m_shader->SetShaderParameters(deviceContext, m_texture->GetTexture());
             m_shader->SetShaderParameters(deviceContext, worldMatrix, viewMatrix, projectionMatrix);
             m_vertexBuffer->Render(deviceContext);
         }
     }
     bool Shader::SetShaderParameters(ID3D11DeviceContext* deviceContext, ID3D11ShaderResourceView* texture)
     {
         deviceContext->PSSetShaderResources(0, 1, &texture);
         return true;
     }
     bool Shader::SetShaderParameters(ID3D11DeviceContext* deviceContext, D3DXMATRIX worldMatrix, D3DXMATRIX viewMatrix, D3DXMATRIX projectionMatrix)
     {
         HRESULT result;
         D3D11_MAPPED_SUBRESOURCE mappedResource;
         MatrixBufferType* dataPtr;
         // transpose all matrices
         D3DXMatrixTranspose(&worldMatrix, &worldMatrix);
         D3DXMatrixTranspose(&viewMatrix, &viewMatrix);
         D3DXMatrixTranspose(&projectionMatrix, &projectionMatrix);
         // lock buffer
         result = deviceContext->Map(m_matrixBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);
         if (FAILED(result))
         {
             return false;
         }
         dataPtr = (MatrixBufferType*)mappedResource.pData;
         dataPtr->worldMatrix = worldMatrix;
         dataPtr->viewMatrix = viewMatrix;
         dataPtr->projectionMatrix = projectionMatrix;
         // unlock buffer
         deviceContext->Unmap(m_matrixBuffer, 0);
         // update the values in the shader
         deviceContext->VSSetConstantBuffers(0, 1, &m_matrixBuffer);
         return true;
     }
     void VertexBuffer::Render(ID3D11DeviceContext* deviceContext)
     {
         unsigned int stride = sizeof(VertexType);
         unsigned int offset = 0;
         m_shader->Begin(deviceContext, m_indexCount);
         deviceContext->IASetVertexBuffers(0, 1, &m_vertexBuffer, &stride, &offset);
         deviceContext->IASetIndexBuffer(m_indexBuffer, DXGI_FORMAT_R32_UINT, 0);
         deviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
         m_shader->End(deviceContext);
     }

     Now the shader classes:

     void TransparentShader::Begin(ID3D11DeviceContext* deviceContext, int indexCount)
     {
         deviceContext->PSSetSamplers(0, 1, &m_samplerState);
         Shader::Begin(deviceContext, indexCount);
     }
     void TransparentShader::End(ID3D11DeviceContext* deviceContext)
     {
         deviceContext->PSSetSamplers(0, 0, NULL);
         Shader::End(deviceContext);
     }
     void Shader::Begin(ID3D11DeviceContext* deviceContext, int indexCount)
     {
         // set the vertex input layout
         deviceContext->IASetInputLayout(m_layout);
         // set the vertex and pixel shaders that will be used to render
         deviceContext->VSSetShader(m_vertexShader, NULL, 0);
         deviceContext->PSSetShader(m_pixelShader, NULL, 0);
         // render
         deviceContext->DrawIndexed(indexCount, 0, 0);
     }
     void Shader::End(ID3D11DeviceContext* deviceContext)
     {
         deviceContext->IASetInputLayout(NULL);
         deviceContext->VSSetShader(NULL, NULL, 0);
         deviceContext->PSSetShader(NULL, NULL, 0);
     }

     Will these code segments be useful?