
D3DXSprite issues (texture pointer not valid)


I am having some difficulty getting this code to run properly. It compiles fine, but the debug info says that the texture pointer is invalid. I did some research on MSDN but couldn't find anything useful. This code is based on the tutorial on Toymaker, but I have restructured it quite a bit to be useful in a class rather than on its own. Here is the code:

SGSprite.h
#ifndef SGSPRITE_H
#define SGSPRITE_H

#include <d3d9.h>
#include <d3dx9.h>
#include <Dxerr.h>

class SGSprite
{
public:
	SGSprite(void);
	~SGSprite(void);
	void Load(char *file, LPDIRECT3DDEVICE9 device);
	void Draw();
	void SetTranslationRotationScaling(float x, float y, float rot, float scalex, float scaley);
private:
	IDirect3DTexture9 *texture;
	LPD3DXSPRITE sprite;
	D3DXVECTOR2 pos;
	D3DXVECTOR2 scaling;
	D3DXMATRIX mat;
};

#endif

SGSprite.cpp
#include "SGSprite.h"

SGSprite::SGSprite(void)
{
}

SGSprite::~SGSprite(void)
{
	sprite->Release();
}

void SGSprite::Load(char *file, LPDIRECT3DDEVICE9 device)
{
	D3DXCreateTextureFromFile(device, (LPCWSTR)file, &texture);
	D3DXCreateSprite(device, &sprite);
}

void SGSprite::Draw()
{
	sprite->SetTransform(&mat);
	sprite->Begin(D3DXSPRITE_ALPHABLEND);
	sprite->Draw(texture, NULL, NULL, NULL, 0xFFFFFFFF);
	sprite->End();
}

void SGSprite::SetTranslationRotationScaling(float x, float y, float rot, float scalex, float scaley)
{
	pos = D3DXVECTOR2(x, y);
	scaling = D3DXVECTOR2(scalex, scaley);

	D3DXMatrixTransformation2D(&mat, NULL, 0.0f, &scaling, NULL, rot, &pos);
}

I'm not sure what is going on, but what I am sure of is that the file is in the right location and that it is a power of 2. Any help is appreciated. Thanks. -AJ

Try linking against D3DX9D.LIB instead of D3DX9.LIB and see if you get any debug output from D3DX.

My guess is that what YOU think of as the right place for the textures is different from what D3DX thinks. One example of when this can happen is using relative paths and making the false assumption that the 'current' directory is always going to be the same as the directory your executable is in.

A related thing is forgetting that by default MSVC puts executables in \Debug or \Release folders within your project folder.
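
If you want to see where the loader is actually looking, a quick sanity check is to dump the working directory at startup. A minimal sketch using plain Win32 calls (the function name is made up; the text shows up in the debugger's Output window):

void DumpWorkingDirectory()
{
	char cwd[MAX_PATH];
	GetCurrentDirectoryA(MAX_PATH, cwd);	// what relative paths resolve against
	OutputDebugStringA("Working directory: ");
	OutputDebugStringA(cwd);
	OutputDebugStringA("\n");
}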


void SGSprite::Load(char *file, LPDIRECT3DDEVICE9 device)
{
D3DXCreateTextureFromFile(device, (LPCWSTR)file, &texture);
D3DXCreateSprite(device, &sprite);
}



This right here is very, very bad. I'm guessing that when you first put in the D3DXCreateTextureFromFile call, the compiler complained because it expected an LPCWSTR (AKA const wchar_t*, a pointer to a string of Unicode characters) rather than an LPCSTR (AKA const char*, a pointer to ANSI characters). Simply casting your pointer from one type to the other will make the compiler shut up, but it is wrong because you're sending one type of data while the function is expecting something completely different. Worse yet, the function has no idea that you sent it bogus data (since all it knows is the pointer type), and will happily operate on the data as if it were Unicode characters. In this case the function is probably failing because your ANSI string, reinterpreted as Unicode, is nonsense, so no file with that name exists anywhere. In other situations this can lead to very hard-to-find bugs (notice how your program doesn't even crash; the function just fails).
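
Here is a sketch of two ways that call could be fixed (both assume file really holds ANSI text, as in the original class):

// Option 1: call the ANSI-typed version of the function directly.
D3DXCreateTextureFromFileA(device, file, &texture);

// Option 2: actually re-encode the bytes as UTF-16 first. MultiByteToWideChar
// converts the characters; the C-style cast above merely relabeled the pointer.
wchar_t wideFile[MAX_PATH];
MultiByteToWideChar(CP_ACP, 0, file, -1, wideFile, MAX_PATH);
D3DXCreateTextureFromFileW(device, wideFile, &texture);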

Now it's completely understandable if you're confused about how Unicode and ANSI work, since most programmers coming from simple console apps have never even heard of Unicode and are used to just working with char* and std::string. I know I was pretty confused at first when I started Windows programming and had to deal with these issues myself. My advice is to do some research, either at MSDN or elsewhere, on how Unicode works in the Windows API (DirectX handles it almost exactly the same way in most cases), or if you want I will happily explain some of it for you here. Then I would advise writing your apps to use Unicode only, which means using only wchar_t* (or WCHAR* or LPWSTR), using std::wstring if you're using string classes, and prefixing your string literals with an "L" to make them Unicode.

Oh and in case you missed the original point...don't use casts unless you know exactly why you should be doing that cast. And if you are using casts, use the C++ style casts (static_cast, reinterpret_cast, dynamic_cast) rather than C-style casts.
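
To illustrate with the call from the original post (hypothetical names, each line shown in isolation):

const char *file = "ship.bmp";	// ANSI filename

// No cast: the compiler rejects the mismatch. That error is your friend.
//D3DXCreateTextureFromFileW(device, file, &texture);

// C-style cast: compiles, then silently passes garbage at runtime.
//D3DXCreateTextureFromFileW(device, (LPCWSTR)file, &texture);

// reinterpret_cast is equally wrong here, but at least the relabeling
// of the pointer is explicit and easy to spot in a code review.
//D3DXCreateTextureFromFileW(device, reinterpret_cast<LPCWSTR>(file), &texture);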

Is there a way to set the current directory, either in code or with a VC project setting? Thanks.

-AJ

The directory that your application starts in when run under the debugger can be set at Project Properties/Debugging/Working Directory.
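
For the "in code" half of the question, here is a minimal sketch (plain Win32; the function name is made up) that points the working directory at the executable's own folder, so relative paths behave the same inside and outside the IDE:

#include <windows.h>
#include <string.h>

void UseExeFolderAsWorkingDirectory()
{
	char exeDir[MAX_PATH];
	GetModuleFileNameA(NULL, exeDir, MAX_PATH);	// full path to the running .exe
	*strrchr(exeDir, '\\') = '\0';			// strip off the file name
	SetCurrentDirectoryA(exeDir);			// relative paths now resolve here
}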

Thank you for calibrating me, MJP. I did not know there was a difference between the formats. I would be very grateful if you could explain the differences to me and how to use them. Thank you.

-AJ

Just to let you guys know: thanks to a combination of your responses, I got the program running the way I meant it to. So thanks very much.

-AJ

Quote:
Original post by u235
Thank you for calibrating me, MJP. I did not know there was a difference between the formats. I would be very grateful if you could explain the differences to me and how to use them. Thank you.

-AJ


Surely. I'm running out to dinner now, but I'll write up something when I get back. In the meantime, you may want to read through this section of the Windows API documentation ---> Unicode and Character Sets (in fact, pretty much all the relevant information is contained in the pages it links to, but later I'll post some of the important points in summarized form so you don't miss them).

An LPD3DXSPRITE is a pointer to D3DX's sprite batching manager. Although it is confusingly called a sprite, it isn't really what you would think of as a sprite (a pixel image on the screen); it's something that takes sprites, batches them for speed, and puts them on the screen for you. The DX SDK recommends you use only one LPD3DXSPRITE for your entire program and put all your draw calls through one Begin()/End() block.
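
As a sketch of that pattern (illustrative names; it assumes the sprite class exposes its texture and transform):

void DrawAllSprites(LPD3DXSPRITE spriteBatch, SGSprite *sprites, int count)
{
	spriteBatch->Begin(D3DXSPRITE_ALPHABLEND);
	for (int i = 0; i < count; ++i)
	{
		spriteBatch->SetTransform(&sprites[i].mat);	// per-sprite transform
		spriteBatch->Draw(sprites[i].texture, NULL, NULL, NULL, 0xFFFFFFFF);
	}
	spriteBatch->End();	// one flush for the whole frame's sprites
}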

Okay, here's some pertinent info:

The Windows API supports two kinds of strings, each using its own character type. The first kind is multi-byte strings, which are arrays of chars. With these strings each glyph can be either a single byte (char) or multiple bytes, and how the data is interpreted into glyphs depends on the ANSI code page being used. The "standard" code page for Windows in the US is windows-1252, known as "ANSI Latin 1; Western European". These strings are generally referred to as "ANSI" strings throughout the Windows documentation. The Windows headers typedef the type "char" to "CHAR", and also typedef pointers to strings as "LPSTR" and "LPCSTR" (the second being a pointer to a constant string). String literals for this type simply use quotations, like in this example:


const char* sAnsiString = "This is an ANSI string!";


The second kind of string is what is referred to as Unicode strings. There are several Unicode encodings, but in the Windows API "Unicode" generally refers to the UTF-16 encoding. UTF-16 uses two bytes per code unit (a single unit covers the vast majority of characters in use), and therefore in C and C++ the strings are represented as arrays of the type wchar_t (which is two bytes in size on Windows, and therefore referred to as a "wide" character). Unicode is a worldwide standard and supports glyphs from many languages with one standard code page (with multi-byte strings you'd have to use a different code page if you wanted something like kanji). This is obviously a big improvement, which is why Microsoft encourages all newly-written apps to use Unicode exclusively (this is also why a new Visual C++ project defaults to Unicode). The Windows headers typedef the type "wchar_t" to "WCHAR", and also typedef pointers to Unicode strings as "LPWSTR" and "LPCWSTR". String literals for this type use quotations prefixed with an "L", like in this example:


const wchar_t* sUnicodeString = L"This is a Unicode string!";


Okay, so I said that the Windows API supports both the old ANSI strings as well as Unicode strings. It does this through polymorphic types and by using macros for functions that take strings as parameters. Allow me to elaborate on the first part...

The Windows API defines a third character type, and consequently a third string type. This type is "TCHAR", and its definition looks something like this:


#ifdef UNICODE
typedef WCHAR TCHAR;
#else
typedef CHAR TCHAR;
#endif

typedef TCHAR* LPTSTR;
typedef const TCHAR* LPCTSTR;


So as you can see here, how the TCHAR type is defined depends on whether the "UNICODE" macro is defined. In this way, the "UNICODE" macro becomes a sort of switch that lets you say "I'm going to be using Unicode strings, so make my TCHAR a wide character." And this is exactly what Visual C++ does when you set the project's "character set" to Unicode: it defines UNICODE for you. So what you get out of this is the ability to write code that can compile to use either ANSI strings or Unicode strings depending on a macro definition or a compiler setting. This ability is further aided by the TEXT() macro, which will produce either an ANSI or Unicode string literal:


LPCTSTR sTString = TEXT("This could be either an ANSI or Unicode string!");


Now that you know about TCHAR's, things might make a bit more sense if you look at the documentation for any Windows API function that accepts a string. For example, let's look at the documentation for MessageBox. The prototype shown on MSDN looks like this:


int MessageBox(
	HWND hWnd,
	LPCTSTR lpText,
	LPCTSTR lpCaption,
	UINT uType
);


As you can see, it asks for a string of TCHARs. This makes sense, since your app could be using either character type and the API doesn't want to force either type on you (unless you're using VB6 or .NET, of course [smile]). However there's a big problem with this: the functions that make up the Windows API are implemented in precompiled DLLs. Since TCHAR is resolved at compile time, each function had to be compiled as either ANSI or Unicode. So how did MS get around this? They compiled both!

See, the function prototype you see in the documentation isn't actually a prototype of any existing function. It's just a bunch of syntactic sugar to make things look nice while you're learning how a function works, and it tells you how you should be using it. In actuality, every function that accepts strings has two versions: one with an "A" suffix that takes ANSI strings, and one with a "W" suffix that takes Unicode strings. When you call a function like MessageBox, you're actually calling a macro that's defined to one of its two versions depending on whether the UNICODE macro is defined. This means that the Windows headers have something that looks like this:


WINUSERAPI
int
WINAPI
MessageBoxA(
	__in_opt HWND hWnd,
	__in_opt LPCSTR lpText,
	__in_opt LPCSTR lpCaption,
	__in UINT uType);
WINUSERAPI
int
WINAPI
MessageBoxW(
	__in_opt HWND hWnd,
	__in_opt LPCWSTR lpText,
	__in_opt LPCWSTR lpCaption,
	__in UINT uType);
#ifdef UNICODE
#define MessageBox MessageBoxW
#else
#define MessageBox MessageBoxA
#endif


Pretty tricky, eh? With these macros, the ugliness of having two functions is kept reasonably transparent to the programmer (with the disadvantage of causing some confusion among Windows newbies). Of course these macros can be bypassed completely if you want, by simply calling one of the typed versions directly. This is important for programs that dynamically load functions from Windows DLLs at runtime using LoadLibrary and GetProcAddress. Since macros like "MessageBox" don't actually exist in the DLL, you have to specify the name of one of the "real" functions.
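
For example, a sketch of calling MessageBoxW through GetProcAddress (error handling trimmed; the typedef and function names are made up):

// "MessageBox" is only a macro; user32.dll exports MessageBoxA and MessageBoxW.
typedef int (WINAPI *MessageBoxWProc)(HWND, LPCWSTR, LPCWSTR, UINT);

void CallMessageBoxWDynamically()
{
	HMODULE user32 = LoadLibraryW(L"user32.dll");
	MessageBoxWProc pMessageBoxW =
		(MessageBoxWProc)GetProcAddress(user32, "MessageBoxW");
	if (pMessageBoxW != NULL)
		pMessageBoxW(NULL, L"Loaded at runtime!", L"Demo", MB_OK);
	FreeLibrary(user32);
}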

Anyway, that's basically a summarized guide of how the Windows API handles Unicode. With this, you should be able to get started with using Windows API functions, or at least know what kinds of questions to ask when you need something cleared up on the issue.

ADDITIONAL INFO:

The above refers specifically to how the Windows API handles strings. The Visual C++ C run-time library also supports its own _TCHAR type, which is defined in a manner similar to TCHAR except that it uses the _UNICODE macro. It also defines a _T() macro for string literals that functions in the same manner as TEXT(). String functions in the CRT also use the _UNICODE macro, so if you're using these you must remember to define _UNICODE in addition to UNICODE (Visual C++ will define both if you set the character set to Unicode).
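
A tiny sketch of the CRT side:

#include <tchar.h>

// _T() and _TCHAR switch on _UNICODE the same way TEXT() and TCHAR switch
// on UNICODE, and _tcslen resolves to strlen or wcslen to match.
const _TCHAR *msg = _T("Same source, either character set");
size_t len = _tcslen(msg);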

If you use Standard C++ Library classes that work with strings, such as std::string and std::ifstream, and you want Unicode, you can use the wide-char versions. These classes have a "w" prefix, such as std::wstring and std::wifstream. There are no classes that use the TCHAR type; however, if you'd like, you can simply define a tstring or tifstream typedef yourself using the _UNICODE macro.
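
That could look something like this (a sketch following the same convention):

#include <string>
#include <fstream>

// Typedefs that track _UNICODE the same way the CRT's _TCHAR does.
#ifdef _UNICODE
typedef std::wstring tstring;
typedef std::wifstream tifstream;
#else
typedef std::string tstring;
typedef std::ifstream tifstream;
#endif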
