steve coward

  1. I have several chess transposition-table questions. Below are two implementations of TTables: the first is my current code (based on an incomplete understanding); the second is based on a frequently cited paper ["Parallel Search of Strongly Ordered Game Trees" by T. A. Marsland and M. Campbell, Department of Computing Science, University of Alberta]. For simplicity, only the portions of the negamax routine that touch the ttable are shown.

First question: is the Marsland listing incorrect in how entries are stored at the end of the listing? Shouldn't the upper and lower flags be swapped?

Second question: why does the first listing handily beat the second in head-to-head competition? All other optimizations are turned off; this is just negamax with alpha-beta pruning and a transposition table. The remainder of the code is essentially identical. The only differences are: the first listing's ttable stores a 64-bit integer with bits packed, while the second stores a pointer to a structure (in preparation for saving the best move, which the first listing does not do); and the first listing compares only 58 bits of the hash (due to cramming everything into 64 bits), while the second compares all 64 bits. The first listing seems somewhat arbitrary to me and in places contradicts the second implementation. The second listing makes much more sense.

Third question: what is the optimal action to take upon finding a non-exact entry in the table? I have found examples that do variations A, B and C below. Marsland does C. My current code does B.

[code]
if (pe != NULL && pe->depth >= depth) {
    if (pe->flag == TTABLE_LOWER) {
        // A
        if (pe->m_eval >= beta)  { return (beta); }
        // B
        if (pe->m_eval <= alpha) { return (alpha); }
        // C
        if (pe->m_eval > alpha)  { alpha = pe->m_eval; }
    }
}
[/code]

Fourth question: is there any reason to do a ttable store at depth 0 when the evaluation is incremental (i.e. the code does not call an eval routine at depth 0; the running evaluation is simply returned)?
[code]
// My current implementation
hashhval entry;
if (m_pTTable->FindTTableState(m_currHashVal, entry)) {
    if (entry.depth >= depth) {
        switch (entry.flag) {
        case TTABLE_EXACT:
            return (entry.value);
        case TTABLE_LOWER:
            if (entry.value <= alpha) { return (alpha); }
            break;
        case TTABLE_UPPER:
            if (entry.value >= beta) { return (beta); }
            break;
        case TTABLE_INVALID:
            break;
        }
    }
}

if (depth <= 0) {
    m_pTTable->SaveTTableState(m_currHashVal, iBoard, depth, m_eval, TTABLE_EXACT);
    return (m_eval);
}

doMove();
negaMax();
undoMove();
if (v > bestV) { bestV = v; }

int ttableType = TTABLE_LOWER;
if (bestV > alpha0) {
    ttableType = TTABLE_EXACT;
    alpha = bestV;
}
if (bestV >= beta) {
    m_pTTable->SaveTTableState(m_currHashVal, iBoard, depth, bestV, TTABLE_UPPER);
    return (bestV);
}
m_pTTable->SaveTTableState(m_currHashVal, depth, alpha, ttableType);
[/code]

[code]
// Marsland pseudocode (directly from paper)
CTTEntry* pe;
if ((pe = m_pTTable->FindTTableState(m_currHashVal)) != NULL) {
    if (pe->m_depth >= depth) {
        switch (pe->m_flag) {
        case TTABLE_EXACT:
            return (pe->m_eval);
        case TTABLE_LOWER:
            if (pe->m_eval > alpha) { alpha = pe->m_eval; }
            break;
        case TTABLE_UPPER:
            if (pe->m_eval < beta) { beta = pe->m_eval; }
            break;
        case TTABLE_INVALID:
            break;
        }
        if (alpha >= beta) { return (pe->m_eval); }
    }
}

if (depth <= 0) {
    return (m_eval);
}

doMove();
negaMax();
undoMove();

if ((pe == NULL) || (pe->m_depth <= depth)) {
    if (bestV <= alpha0) {
        m_pTTable->SaveTTableState(m_currHashVal, depth, bestV, TTABLE_UPPER);
    } else if (bestV >= beta) {
        m_pTTable->SaveTTableState(m_currHashVal, depth, bestV, TTABLE_LOWER);
    } else {
        m_pTTable->SaveTTableState(m_currHashVal, depth, bestV, TTABLE_EXACT);
    }
}
[/code]
  2. Modifying VertexBuffer Data

    From my first post - Setting this (the second parameter) to 0 will lock the entire buffer.
  3. Problems displaying .x file

    an excerpt from the .x file:

[code]
Material Chrome {
    0.803922;0.803922;0.803922;1.000000;;
    18.240000;
    0.909000;0.909000;0.909000;;
    0.000000;0.000000;0.000000;;
}

MeshMaterialList {
    1;
    960;
    0,
    . . .
    { Chrome }
}
[/code]

So I do have a material defined. When this .x file is displayed, the sphere is colored gray and no metallic qualities are visible. Is this because I do not have proper lighting defined in my app?
  4. Modifying VertexBuffer Data

    the second parameter to Lock() is "The amount of data (in bytes) that you wish to lock. Setting this to 0 will lock the entire buffer." What does sizeof(sCOLORVERTEX) evaluate to?
  5. I am trying to display a metallic ball in my game. I am using 3ds Max 2011 to create a spherical model with a chrome material, and the Panda plugin to write a .x file. When 3ds Max renders the sphere it looks great: very shiny/reflective. When I view the .x file in the DirectX viewer, however, it appears as a grayscale sphere.

My app (C++, Microsoft Visual C++ 2008 Express) reads .x files, and it properly displays the tiger.x file supplied with the mesh tutorial. But when I display my metallic sphere it appears as a white circle. Changing material attributes such as diffuse or ambient in the debugger does not alter the display of the sphere. When I edit the .x file (it is saved in text format) to include a texture in the material description (otherwise no texture is specified):

[code]
TextureFilename { "cedar.jpg"; }
[/code]

the texture is properly applied to the sphere in my app. Does anybody have any idea what I am doing incorrectly? It appears that I have an issue in how the .x file is created. To render a metallic ball, does the .x file need a texture to be specified?
  6. My game is experiencing an intermittent E_FAIL upon executing the Present() call. Here is the C++/DirectX 9 code. I am using the Visual C++ 2008 Express IDE, an NVIDIA GeForce 9800 GT graphics card, and Vista.

[code]
ddrval = m_pd3dDevice->Present(NULL, NULL, NULL, NULL);
if (ddrval == D3DERR_DEVICELOST) {
    // IDirect3DDevice9::Present will return
    // D3DERR_DEVICELOST if the device is either "lost" or "not reset".
    m_bDeviceLost = true;
} else if (FAILED(ddrval)) {
    SetErrorEncountered(true);
    continue;
}
[/code]

This fails only about 10% of the time, and only on the first attempt to execute the Present() call; failure is marked by a return value of E_FAIL. It works correctly the remaining 90% of the time on the first attempt, and it never fails on the second or later Present() calls. If I run in debug mode, break at the failure, and set the current execution line back to the Present() call, the call works correctly when execution resumes. I am not sure what other info to include, but here are a few settings:

HAL (pure hw vp): NVIDIA GeForce 9800 GT
Windowed_PresentInterval = D3DPRESENT_INTERVAL_IMMEDIATE
DisplayMode.Format = D3DFMT_X8R8G8B8
DisplayMode.RefreshRate = 60
DepthStencilBufferFormat = D3DFMT_UNKNOWN
MultisampleType = D3DMULTISAMPLE_NONE
MultisampleQuality = 0
VertexProcessingType = PURE_HARDWARE_VP
AdapterOrdinal = 0
DevType = D3DDEVTYPE_HAL
BackBufferFormat = D3DFMT_X8R8G8B8

Does anybody have any idea what could be happening? Or how to proceed in debugging? Is this a software or hardware issue? thanks
  7. DX9 SetRenderState() question

    In my app every drawPrimitive() is immediately preceded by a source texture change (except for the rare case when the vertex buffer is full and a D3DLOCK_DISCARD lock is necessary.) So, if I understand the replies to my initial post, my app takes no additional performance hit if each drawPrimitive() is also preceded by a (possibly redundant) SetStreamSource(), SetFVF(), and/or multiple SetRenderState()'s. Is this true? thanks
  8. Is there a performance penalty for calling SetRenderState() if the call is not actually changing a render state attribute? For example, if SetRenderState(D3DRS_SRCBLEND, D3DBLEND_INVSRCALPHA) is performed prior to every DrawPrimitive(), will there be a performance penalty? thanks
  9. Thanks for the quick replies. I verified that the same behavior occurs in a simple assignment such as a = -f(x) + g(x), where both f(x) and g(x) read and pop the same stack of arguments.
  10. I am seeing a difference in the order of evaluation of the operands in this StackPush() call:

[code]
StackPush ( -StackPopNumber ().number + StackPopNumber ().number);
[/code]

In debug mode it appears that the first Pop occurs before the second. In release mode it appears that the second Pop occurs before the first. This is C++ with Visual Express 2005. Any ideas? Details:

Release and Debug (explicit ordering)
=====================================
stack top is 7.0, next entry is 175.0
double top = StackPopNumber().number;
double next = StackPopNumber().number;
StackPush(next - top);
stack top is now 168

Debug
=====
stack top is 7.0, next entry is 175.0
StackPush(-StackPopNumber().number + StackPopNumber().number);
stack top is now 168

Release
=======
stack top is 7.0, next entry is 175.0
StackPush(-StackPopNumber().number + StackPopNumber().number);
stack top is now -168 ?????????

[code]
void StackPush(const double p)
{
    Token t;
    t.type = NUMBER;
    t.number = p;
    TokenStack.push(t);
}

Token StackTop()
{
    return (;
}

Token StackPopNumber()
{
    Token ret;
    while (StackSize() > 0) {
        ret =;
        TokenStack.pop();
        if (ret.type == NUMBER) {
            return ret;
        }
    }
    ret.type = NUMBER;
    ret.number = 0.0;
    return ret;
}

struct Token
{
    TokenType type;
    union {
        double number;
        char identifier[128];
    };
    Token() { }
    Token(const TokenType t) { type = t; }
    Token(const TokenType t, const double n) { type = t; number = n; }
    Token(const TokenType t, const char* i, int len)
    {
        type = t;
        strncpy(identifier, i, len);
        identifier[len] = '\0';
    }
};
[/code]
  11. I am not extremely knowledgeable about advanced 3D rendering concepts. My app is 2D: mostly I just read graphics in from PNG files and then emit those graphics (or rectangular portions of them) as part of the scene I am rendering. So I do not think I am doing texturing; maybe I should not have used the word texture in my initial post, or maybe I am not understanding. The only render states that I change in drawing my scenes involve transparency. Is there a render state to enable texturing?

I have not posted code from my app because it would require posting a large number of lines, and I did not think people would take the time to dig into it. I could give a summary of how it operates if that would be useful. Just posting the draw member function of a tank unit would not give insight into the functioning of my graphics engine/game loop; it would consist of calls to DrawString(), DrawLine(), DrawSprite(), etc. The important point is that the tank draw function does not know anything about the fog of war. The sprites it renders are separate from the sprites for the fog of war. I am hoping that an experienced graphics programmer can give me insight into how adding pixels to one sprite (within PaintShop, outside the running of the app) could affect the display of another sprite when the app is running. thanks!
  12. I am experiencing a DX9 graphics bug that I am unable to locate. The bug causes spurious images to be displayed for some of the sprites in my 2D game under certain circumstances: instead of a tank, some random rectangular portion of a texture in memory will be displayed in its place. I have noticed that when I disable fog of war, the problem disappears (or at least becomes much, much less prevalent). When I enable fog of war but use a blank texture tile for the fog (all pixels erased within PaintShop), the problem also disappears, and it stays gone if I add a few random pixels of color; but add too many and the problem reappears. At first I thought my problem was either a buffer overrun or a timing/synchronization issue, but now I am not so sure. So my question is: how can the contents of one sprite affect other sprites? Regardless of the content of a texture, the contents of my vertex buffers and all other graphics data structures must be the same in each case (i.e. same u, v, x, y, z, texture references, same polygon ordering, same number of DrawPrimitive() calls, etc.), right? Any comments would be greatly appreciated.
  13. Help with boundschecker ifstream problem

    From Wikipedia: "BoundsChecker is a memory checking tool used for C++ software development with Microsoft Visual C++. It is part of the DevPartner for Visual C++ BoundsChecker Suite. Comparable tools are Purify, Insure++ and Valgrind. BoundsChecker can be run in two modes: ActiveCheck, which doesn't instrument the application, and FinalCheck, which does. ActiveCheck performs a less intrusive analysis and monitors all calls by the application to the C Runtime Library, the Windows API and COM objects. By monitoring memory allocations and frees it can detect memory leaks and overruns. Monitoring API and COM calls enables ActiveCheck to check parameters, returns and exceptions and report exceptions when they occur. Thread deadlocks can also be detected by monitoring the synchronization objects and calls, giving actual and potential deadlock detection."

I am using the tool in ActiveCheck mode. That many people are not familiar with it is not surprising to me, given its prohibitive cost; I am using a trial version, though perhaps some people have used it in a business environment. Even though the code I posted is from a much larger DirectX program, I believe the test case is essentially self contained: it consists of declaring and assigning one std::string holding the path to an existing, readable file, and then attempting to open that file for reading. This code exhibits the same problem reported by BoundsChecker:

[code]
std::ifstream ifs;"test.txt", std::ios::in);
[/code]
  14. I am running my app (C++, Visual C++ 2005 Express Edition) under the DevPartner Error Detection program (BoundsChecker) and I am receiving the following error:

Parameter 1, SIZE_T* _PtNumOfCharConverted = 0x00000000 in mbstowcs_s should not be null.

Current Call Stack
------------------
_Fiopen                                                          fiopen.cpp
basic_filebuf<char,struct std::char_traits<char> >::open         fstream
basic_ifstream<char,struct std::char_traits<char> >::basic_ifstream<char,struct std::char_traits<char> >   fstream
ReadRules                                                        tankbattle.cpp

My code:

[code]
std::string m_strAppRootPath;  // as defined in pGameStatusInfo
std::string strRulesFile;

strRulesFile = m_pGameStatusInfo->m_strAppRootPath + "rules" + ".rsrc";
std::ifstream ifs(strRulesFile.c_str(), std::ios::in);  // <-- offending line
if (!ifs) {
    return (D3DAPPERR_FILENOTFOUND);
} else {
    while (!ifs.eof()) {
        ifs.getline(line, sizeof(line), '\n');
[/code]

The library source involved (fstream):

[code]
explicit __CLR_OR_THIS_CALL basic_ifstream(const char *_Filename,
    ios_base::openmode _Mode = ios_base::in,
    int _Prot = (int)ios_base::_Openprot)
    : basic_istream<_Elem, _Traits>(&_Filebuffer)
{   // construct with named file and specified mode
    if (, _Mode | ios_base::in, _Prot) == 0)
        _Myios::setstate(ios_base::failbit);
}

_Myt *__CLR_OR_THIS_CALL open(const char *_Filename,
    ios_base::openmode _Mode,
    int _Prot = (int)ios_base::_Openprot)
{   // open a C stream with specified mode
    _Filet *_File;
    if (_Myfile != 0 || (_File = _Fiopen(_Filename, _Mode, _Prot)) == 0)
        return (0);  // open failed
    _Init(_File, _Openfl);
    _Initcvt((_Cvt *)&_USE(_Mysb::getloc(), _Cvt));
    return (this);  // open succeeded
}
[/code]

strRulesFile evaluates to a proper, full Windows file name. The code compiles without warnings and appears to run properly. If I examine strRulesFile in the debugger, I see that various member variables "deeper" in the std::string structure are not well defined. For example, _Mycont and _Mynextiter are displayed as "CXX0030: Error: expression cannot be evaluated", and the _Bx member _Buf seems to contain garbage.

If I examine in the debugger the _File returned by _Fiopen, it is not equal to 0, but its three member pointer variables (_base, _ptr, _tmpfname) are all listed as 0x00000000 <Bad Ptr>. Is this behavior expected, or am I using ifstream improperly? Thanks for any comments,
  15. I have a bug in my graphics engine. When certain sprites are displayed, or when I pan the view across the map, spurious graphics appear in my game window. When I try to do a screen dump to capture the spurious behavior, I am unable to do so: the captured image is perfect. If I pause my render loop while the spurious graphics are displayed, the paused image never contains the spurious images. I have used every debug method I know, to no avail. I have gathered detailed, cycle-based stats from my engine while the bug is evident but have found no anomalies.

The problem is much more prevalent in my release build. Print statements tend to hide the problem, and placing a Sleep(10) in my render loop also largely eliminates it, so it appears to be timing related. Is this related to a synchronization issue in accessing the vertex buffer? Or perhaps to monitor syncing? But most of the graphics are not affected. I am using Visual C++ 2005 Express and DX9 (the problem is also evident with my older code developed with Visual C++ 6.0). Does anybody have any ideas? Thanks,