
smile55

Members
  • Content count

    30
  • Joined

  • Last visited

Community Reputation

139 Neutral

About smile55

  • Rank
    Member
  1. [quote name='pcmaster' timestamp='1339053778' post='4946975'] Is it true that width == WIN_WIDTH and height == WIN_HEIGHT? Why do you allocate the damp vector of WIN_WIDTH * (WIN_HEIGHT + 1) elements, instead of WIN_WIDTH * WIN_HEIGHT? [/quote] It's true that width == WIN_WIDTH and height == WIN_HEIGHT. I use WIN_WIDTH * (WIN_HEIGHT + 1) because I manually offset by WIN_WIDTH / 2 to bypass the problem: I copy damp[WIN_WIDTH / 2 : WIN_WIDTH * WIN_HEIGHT + WIN_WIDTH / 2] to the TBO instead of damp[0 : WIN_WIDTH * WIN_HEIGHT]. It solves the problem, but I want to know what causes it.
  2. I was trying to use a TBO to do a simple task: first I fill a buffer object with some data; then I want to visualize the result, so I bind the buffer object to GL_TEXTURE_BUFFER and draw it on the screen. Here is my code. Initialization: [CODE]
/* Just fill the buffer; color changes in the x direction */
std::vector<float> damp(WIN_WIDTH * (WIN_HEIGHT + 1));
for (int i = 0; i < WIN_WIDTH; ++i) {
    for (int j = 0; j < WIN_HEIGHT; ++j) {
        damp[i + j * WIN_WIDTH] = (float)i / WIN_WIDTH;
    }
}

/* Create the TBO and fill it with the data generated above */
glGenBuffers(1, &g_buf_levelset);
glBindBuffer(GL_TEXTURE_BUFFER, g_buf_levelset);
glBufferData(GL_TEXTURE_BUFFER, width * height * sizeof(float), &damp[0], GL_DYNAMIC_COPY);
glBindBuffer(GL_TEXTURE_BUFFER, 0);

/* Create the texture for the TBO */
glGenTextures(1, &g_tex_levelset);
glBindTexture(GL_TEXTURE_BUFFER, g_tex_levelset);
glTexBuffer(GL_TEXTURE_BUFFER, GL_R32F, g_buf_levelset);
glBindTexture(GL_TEXTURE_BUFFER, 0);
[/CODE] Rendering: [CODE]
/* Draw the texture to the screen */
glUseProgram(g_pgm_texture);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_BUFFER, g_tex_levelset);
glUniform1i(g_uniform_texture, 0);
glUniform1i(g_uniform_win_width, WIN_WIDTH);
glUniform1i(g_uniform_win_height, WIN_HEIGHT);
glBindVertexArray(g_quad_vao);
glEnableVertexAttribArray(0);
glDrawArrays(GL_QUADS, 0, 4);
glDisableVertexAttribArray(0);
glBindVertexArray(0);
glBindTexture(GL_TEXTURE_BUFFER, 0);
glUseProgram(0);
[/CODE] Here is the fragment shader: [CODE]
#version 330

uniform samplerBuffer tex;
uniform int win_width;
uniform int win_height;

out vec4 color;

void main(void)
{
    int offset = int(gl_FragCoord.x + gl_FragCoord.y * win_width);
    float dist = texelFetch(tex, offset).r;
    color = vec4(dist, 0.0, 0.0, 1.0);
}
[/CODE] There is no vertex shader; the verts are just the four corners of a screen quad.
I mean they are {(-1.0, -1.0, 0.0), (1.0, -1.0, 0.0), (1.0, 1.0, 0.0), (-1.0, 1.0, 0.0)}, and I set both the modelview and projection matrices to identity. But the image I got was shifted: the left half of the image was moved to the right half of the screen, and the right half of the image wrapped around to the left half. I mean, the distribution of color in the x direction is supposed to be: [0.0 --------------------> 1.0] but I got: [0.5----->1.0 0.0----->0.5] If I change [CODE]
glBufferData(GL_TEXTURE_BUFFER, width * height * sizeof(float), &damp[0], GL_DYNAMIC_COPY);
[/CODE] to [CODE]
glBufferData(GL_TEXTURE_BUFFER, width * height * sizeof(float), &damp[WIN_WIDTH / 2], GL_DYNAMIC_COPY);
[/CODE] the result is correct. So it seems the whole data is offset by WIN_WIDTH / 2. I have checked the code many times and can't figure out why; does anyone have any ideas? Thanks in advance.
  3. [quote name='Red Ant' timestamp='1333007625' post='4926255'] [quote name='smile55' timestamp='1333006533' post='4926250'] Many times I define a class and soon I will feel the design is bad, making the problem more complex. Sometimes I define a lot interfaces today and change all my minds tomorrow and re-write them all.[/quote] That problem won't really go away if you switch to pure C, though, except that you obviously won't be writing any classes. Instead you'll write a set of functions operating on a certain type of data and then change your mind the next day and rewrite them all. [quote name='smile55' timestamp='1333006533' post='4926250'] So I want to know how does professional C programmer write their daily codes? For example, when a linked list is used to organize some data, will you directly operate the pointer to next node in the application's algorithm, or will make an abstraction to hide the details of linked list? [/quote] There are probably various third-party libraries available that provide somewhat generic data structures such as lists and so on. It's just that in C, "generic" usually means mucking around with void pointers and constantly casting to the type of data you're using the structure with. [/quote] Thanks a lot for your replies. Actually I am doing exactly what you described: writing functions that operate on some data, like a class in C++, but with plain functions. The books I have read say that abstraction is important because it decreases complexity, but I always find it hard to do. Sometimes I think programming without abstraction, using plain data, may be more efficient, since I waste less time designing interfaces only to abandon them, and I feel less frustrated. Maybe I read the wrong books: they always present the concepts and list their advantages, but don't teach how to really apply them. Even SICP, of which I read the first two chapters and half of the third and did most of the exercises, didn't help much.
I feel the programs in these books are always too idealized, or just toys, while practical programs are messier. Do you have any recommended books? Or should I read the source code of some open-source projects? And actually my point is not whether there are third-party libraries for C. I'm asking whether professional C programmers write some general data structures once and reuse them afterwards (or just use third-party libraries), or instead fold those data structures directly into the application's algorithms.
  4. I am a CS student, and I have long been writing mixed C/C++ code. I mean I use a C++ compiler such as VS2008, but I am not actually good at OOP, so I just write C-like code in C++. Many times I define a class and soon feel the design is bad and makes the problem more complex. Sometimes I define a lot of interfaces one day, change my mind completely the next, and rewrite them all. So finally I don't want to struggle any more. I know the C grammar pretty well, so I think I'd better just use pure C and forget the C++ features. These days I am trying to write a pure C program, maybe thousands of lines of code. A problem I've met is that I have long been used to STL data structures such as vector, but C doesn't have the STL. I tried to write some general data structures like a linked list, using function pointers for generic operations, but I soon found it's not so easy to make them general. So I want to know: how do professional C programmers write their daily code? For example, when a linked list is used to organize some data, do you operate directly on the pointer to the next node inside the application's algorithm, or do you make an abstraction to hide the details of the linked list? Thanks for your time.
  5. Hi, in a program I drew something on the screen in 2D, and then I wanted to draw some points and lines on top of it. I didn't use a closer z value, so those points and lines couldn't pass the depth test. So I tried to use glPolygonOffset, but ran into trouble. I called glPolygonOffset like this: [code]glPolygonOffset(0.0f, -1.0f);[/code] First I drew quads with the polygon mode set to GL_FILL and GL_POLYGON_OFFSET_FILL enabled, and everything was OK: the wireframes of the quads were displayed correctly on the screen. To draw the points with the same offset settings, I tried enabling GL_POLYGON_OFFSET_FILL, GL_POLYGON_OFFSET_LINE, and GL_POLYGON_OFFSET_POINT, but none of them worked. I searched on Google but didn't find the answer. I am wondering: are points and lines not polygons in GL, so they are not affected by glPolygonOffset? And if they are polygons, which type should I use, GL_POLYGON_OFFSET_FILL or GL_POLYGON_OFFSET_POINT? Thanks.
  6. The specification says that objects like VBOs and textures can be shared between different GL contexts. But someone told me they can only be shared after a call such as wglShareLists. So are they shared automatically between different contexts, or not?
  7. Hi everybody. Currently I am writing a plugin for some software. In this plugin I need some FBOs for off-screen rendering. The problem is that under some conditions the host software changes the OpenGL context behind my back, and the SDK provides no callback to let me react when the context is changed. I have some VBOs and textures, and after checking some documents I learned that they can be shared between multiple contexts automatically, but FBOs can't. I have two solutions: one, create the FBOs every frame, so I don't have to worry about the context changing; two, create my own context in the plugin's code and call wglMakeCurrent to make it current every frame when I need to render. But I think both methods are ugly. I also thought about creating new framebuffer objects on the new context every time I find the context has changed, but then I can't delete the old FBOs, since I don't know whether the old context has been deleted. And I don't know whether those old FBOs are deleted along with their context; if they aren't, I have no way to delete them. If you know an elegant way to handle such a situation, please help me, thanks.
  8. I need to do a logic operation on a render target which is an integer-format texture. The plan is to enable GL logic ops, set the logic operation to OR, and have the fragment shader's output OR-ed with the corresponding value in the integer texture. So after rendering, the integer texture accumulates the results of every fragment covering each pixel. My question is: the output of the fragment shader, gl_FragColor, is a vec4, i.e. four float values. When I pack an integer value in the shader, like 0x00000100, how do I make that integer the output value? Assigning it to gl_FragColor will convert it to float format, I think. I know that in Cg one can declare the return value's type as int4, but how do I do this in GLSL? Thanks for your help.
  9. Quote:Original post by ET3D I checked a small sample program of mine, and it works like you describe things should. I'm using: D3DXMatrixPerspectiveFovLH(&projection, float(M_PI) / 4, float(viewport.Width) / float(viewport.Height), 1, 20); This indeed keeps the height fixed and what fits in the window horizontally changes. (Note, there's no stretching.) Thanks for the replies. I found the details in the documentation: it gives every element of the perspective matrix and how they are computed.
  10. Seems like no problem... but I wonder whether x and y coordinates of 128 are OK. At a z coordinate of 1.0f, won't 128 be too large to appear in the view plane? I think the maximum size of an image that can be displayed on the screen is (tan(fov/2) * aspect) by tan(fov/2) at a z value of 1.0f. But why don't you use ID3DXSprite to draw a texture to the screen?
  11. Quote:Original post by Sean_Seanston Quote:Original post by smile55 4. Restore the old render target. And do NOT forget releasing the surface object used to save the old render target That's only to prevent a memory leak though, isn't it? Because I didn't bother doing that since I was just messing around trying to get it to work first. Right, it's only to prevent a memory leak. But if you don't release these surfaces, the call to reset the device will fail.
  12. To render to a texture: 1. When creating the texture, the usage flag must be D3DUSAGE_RENDERTARGET. 2. Get the surface of the texture using IDirect3DTexture9::GetSurfaceLevel(). 3. Use the SetRenderTarget method of IDirect3DDevice9 to set the render target to the texture's surface; don't forget to save the old surface. 4. Render to the texture. 5. Restore the old render target, and do NOT forget to release the surface object used to save it. Hope this helps.
  13. In my mind, fov determines the vertical range of the view, and, based on fov, Aspect determines the horizontal range. So when the window is resized, if fov is not changed, changing Aspect to a larger number should stretch the image horizontally. Is that right? When I tried it, the image did not seem to be stretched; instead, the top and bottom parts of the image went off the screen.
  14. Quote:Original post by ET3D First of all, the x and y coordinates are in the range [-1, 1] before the viewport transform, not [0, 1]. If you're seeing that there's no scaling with certain changes, to me that'd indicate that you have some transforms that change. If you're using just an identity matrix then you should see scaling. If you're using a perspective transform and you're changing it when the display changes, then anything can happen. Thanks for the replies. Every time I change the size of the backbuffer and reset the device, I set the perspective transform again using the new size. What do you mean by "anything can happen"?
  15. well, you remind me that maybe when the window is resized and i want to change the size of backbuffer, i need to rebuild the perspective transform matrix meanwhile. if the Aspect parameter in the call D3DXMatrixPerspectiveFovLH is different from the ratio of height of the backbuffer and the width of the backbuffer, will the image be stretched? I think it will be, but in my experiment, seem the application will adjust it to the smaller one between width and height of backbuffer.