About RJSkywalker
  1. I am trying to set the value of an integer by reference in a macro created in my Blueprint script. The macro does a simple conditional check on an integer and increments it if it is less than a max value. I was following one of the maze-generating tutorials, and the function throws a "failed to resolve term 'Value' passed into 'Target'" error. I am not sure why it cannot deduce the type of Value. The original tutorial was made in UE 4.6, so could this be due to changes made in UE 4.16? I have attached a screenshot of my Blueprint. [Not sure if this is the correct section; I couldn't find a more appropriate one to post this question in.]
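For reference, the macro's intended logic, expressed as a C++ sketch (the function name and signature here are illustrative, not from the Blueprint; in Blueprint terms, the by-ref pin corresponds to the reference parameter):

```cpp
#include <cassert>

// Hypothetical C++ equivalent of the macro: take an integer by reference
// and increment it only while it is below a maximum value.
bool IncrementIfBelowMax(int& Value, int MaxValue)
{
    if (Value < MaxValue)
    {
        ++Value;      // mutates the caller's variable through the reference
        return true;  // increment happened
    }
    return false;     // already at (or past) the limit
}
```

The "failed to resolve term" class of error in Blueprint macros usually means a wildcard/by-ref pin could not infer its concrete type from what was wired into it, which is worth checking before suspecting the engine version.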
  2. Thanks a lot! It gives me a clearer picture now. I wasn't aware this could be related to morph targets. Unfortunately my engine does not support morph targets just yet. 
  3. Hey guys, I had a question regarding the types of animation created in Maya that can be exported to FBX. One of my animators created an animation of an object using Deformers, specifically Bend and Squash. It seems like FBX does not export this property (or even import it). We tried exporting with Bake Animation and the Deformed Models option, but it did not work. Also, the object does not have any bones/joints associated with it. Does anyone know if this is an FBX limitation? Do we need to write a custom FBX exporter to handle this?
  4. Handling png images as fonts

    I think you are spot on! I will discuss this with them and will probably adjust the font exporter. Thanks a lot! 
  5. Handling png images as fonts

    That is what I was thinking. @Shaarigan: We use C++ in our proprietary engine, and the graphics engine uses OpenGL for rendering. We actually get an image made in Photoshop containing the characters, which is not exactly a font per se, nor based on an existing font. @Kylotan: Our custom font exporter actually only supports system fonts. Having the exporter detect an image as a font sounds strange, doesn't it? This is what we usually get: [sharedmedia=gallery:albums:1105] A simple image (like a sprite sheet, perhaps?). Would it be similar to a bitmap font?
  6. Custom Font

  7. Hello everyone, So I have this issue of rendering characters that are not fonts but rather PNG images which the artist gives me. They want to apply certain effects: drop shadows and custom hand-drawn curvatures at times. Our custom font exporter does not support this kind of functionality. Has anyone faced this kind of situation? Do I need to create a custom font object? The artist does not give us any kind of description, just a PNG strip that contains the relevant characters. What I was thinking was to either use this strip directly or to break the strip up per character (which I am guessing is not a good idea), in addition to asking the artist for a description file.
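Treating the strip as a glyph atlas can be sketched like this. Everything here is an assumption, since the artist provides no descriptor: fixed-width glyphs, a single row, and a known character order (`kGlyphOrder` is hypothetical); a real descriptor file from the artist would replace both.

```cpp
#include <cstddef>
#include <string>

// Sub-rectangle of one glyph inside the PNG strip.
struct GlyphRect { int x, y, w, h; };

// Assumed layout: these characters, left to right, all the same width.
const std::string kGlyphOrder = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";

// Map a character to its rectangle in the strip; returns false if the
// character is not present in the assumed layout.
bool LookupGlyph(char c, int glyphW, int glyphH, GlyphRect& out)
{
    std::size_t idx = kGlyphOrder.find(c);
    if (idx == std::string::npos)
        return false;
    out = { static_cast<int>(idx) * glyphW, 0, glyphW, glyphH };
    return true;
}
```

At render time each rectangle becomes a pair of texture coordinates on the strip texture, which is essentially how bitmap fonts work; asking the artist for even a minimal per-character description file removes the fixed-width assumption.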
  8. Hi all! I am trying to render an image that I generate after reading it from a .asc file. I perform all the necessary transformations and then display it on the screen. The output is a PPM image. Initially I used the Windows API BitBlt function to render the image, and it shows perfectly on the screen. Then I tried using OpenGL to render the same image, and it doesn't. If I use DrawPixels, it shows a black screen, and if I use texture mapping, it shows a partial white box. Here are the images and the bits of code. I have worked with Targa images before, but this is the first time I am creating a PPM image; I do not think that is the issue, as I am just trying to render the buffer, which is of type char*. This image is produced using BitBlt. [attachment=18697:screenShot1.jpg] This one using OpenGL. [attachment=18698:screenShot2.jpg] And here is the code from Main.cpp and COpenGLRenderer.cpp.

    DrawFrameBuffer (using BitBlt):

[CODE]
void DrawFrameBuffer()
{
    HBITMAP m_bitmap;
    HDC memDC = CreateCompatibleDC(hDC);

    // display the current image
    char buffer[sizeof(BITMAPINFO)];
    BITMAPINFO* binfo = (BITMAPINFO*)buffer;
    memset(binfo, 0, sizeof(BITMAPINFO));
    binfo->bmiHeader.biSize = sizeof(BITMAPINFOHEADER);

    // create the bitmap
    BITMAPINFOHEADER* bih = &binfo->bmiHeader;
    bih->biBitCount = 3 * 8; // 3 channels
    bih->biWidth = pApp->GetFrameBufferWidth();
    bih->biHeight = pApp->GetFrameBufferHeight();
    bih->biPlanes = 1;
    bih->biCompression = BI_RGB;
    bih->biSizeImage = 0; // for RGB bitmaps we set it to 0

    m_bitmap = CreateDIBSection(hDC, binfo, DIB_RGB_COLORS, 0, 0, 0);
    SelectObject(memDC, m_bitmap);

    binfo->bmiHeader.biBitCount = 0;
    GetDIBits(memDC, m_bitmap, 0, 0, 0, binfo, DIB_RGB_COLORS);
    binfo->bmiHeader.biBitCount = 24;
    binfo->bmiHeader.biHeight = -abs(binfo->bmiHeader.biHeight); // for a top-down image
    SetDIBits(memDC, m_bitmap, 0, pApp->GetFrameBufferHeight(),
              pApp->GetFrameBuffer(), binfo, DIB_RGB_COLORS); // 3rd-to-last argument is the framebuffer

    SetStretchBltMode(hDC, COLORONCOLOR);
    RECT client;
    GetClientRect(hwnd, &client);
    BitBlt(hDC, 0, 0, pApp->GetFrameBufferWidth(), pApp->GetFrameBufferHeight(),
           memDC, 0, 0, SRCCOPY);

    DeleteDC(memDC);
    DeleteObject(m_bitmap);
}
[/CODE]

    DrawFrameBuffer (OpenGL):

[CODE]
void COpenGLRenderer::setFrameBuffer(const char* buffer)
{
    m_pFrameBuffer = (char*)buffer;
    glGenTextures(1, &m_pTextureID);
    glBindTexture(GL_TEXTURE_2D, m_pTextureID);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, m_nWindowWidth, m_nWindowHeight,
                 0, GL_RGB, GL_UNSIGNED_BYTE, buffer);
    //gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, m_nWindowWidth, m_nWindowHeight,
    //                  GL_RGB, GL_UNSIGNED_BYTE, buffer);
}

void COpenGLRenderer::drawFrameBuffer(const char* buffer)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, m_pTextureID);
    glPushMatrix();
    glLoadIdentity();
    glTranslatef(0.0f, 0.0f, -10); // z translation is the last value
    glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex2f(0, 0);
        glTexCoord2f(1, 0); glVertex2f(m_nWindowWidth, 0);
        glTexCoord2f(1, 1); glVertex2f(m_nWindowWidth, m_nWindowHeight);
        glTexCoord2f(0, 1); glVertex2f(0, m_nWindowHeight);
    glEnd();
    glPopMatrix();
    glDisable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, 0);

    /* The DrawPixels technique that did not work:
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glRasterPos2i(0, 0);
    if (buffer != NULL)
    {
        glDrawPixels(m_nWindowWidth, m_nWindowHeight, GL_RGB, GL_BYTE, buffer);
    }
    */
}
[/CODE]

    I have commented out the DrawPixels technique that did not work. Can anyone tell me what might be going on? It is basically trying to render image data stored in a char* variable using different methods. One method works, so I know for sure that the buffer does not contain invalid data.
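Two details in the OpenGL path are worth checking, as a guess rather than a definitive diagnosis. First, glDrawPixels is called with GL_BYTE (signed), while the working BitBlt path treats the buffer as unsigned 8-bit, so GL_UNSIGNED_BYTE is probably intended. Second, OpenGL's default unpack alignment is 4 bytes per row start, which disagrees with tightly packed 24-bit RGB rows whenever width * 3 is not a multiple of 4; either call glPixelStorei(GL_UNPACK_ALIGNMENT, 1) before uploading, or pad each row to the stride computed below (a standalone sketch, independent of any GL context):

```cpp
// OpenGL's default GL_UNPACK_ALIGNMENT is 4: each pixel row in client
// memory is assumed to start on a 4-byte boundary. For tightly packed
// RGB data this only holds when width * bytesPerPixel is a multiple of
// 4; otherwise rows get sheared unless the alignment is set to 1 or the
// buffer is padded to this stride.
int PaddedRowStride(int width, int bytesPerPixel, int alignment)
{
    int tight = width * bytesPerPixel;
    return (tight + alignment - 1) / alignment * alignment; // round up
}
```

For example, a 5-pixel-wide RGB row is 15 bytes tightly packed but 16 bytes under 4-byte alignment, so every row after the first would be read one byte off.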
  9. line segments having common endpoint

    @Alvaro: Thanks for the d >= abs(L0-L1) info; I had figured out the L0+L1 part. Both of these tell you whether a solution exists or not. But I also want to know the configuration of the two line segments, that is, the common point itself, and I want my function to return a list of all such points. Thinking of it in a brute-force way, I can find at least 3 points by moving by L0 parallel to the 3 axes, and the other points can be found by breaking L0 into x, y, and z components, if I am right. So if I have to iterate through a loop, how can I find the range of the loop? It does not go in any particular order when finding the point.
  10. Hey guys, I had a question regarding 2 line segments. Say we have 2 line segments whose origins and lengths are given as (P0, L0) and (P1, L1) respectively. I need to find when they can end at the same point. The line segments lie anywhere in 3D space. One of the approaches I could think of: let's say this common end point is T and the origin points are A and B. Then for the line segments with A and B as origins, A, B, and T must form a triangle, with the length of vector AT = L0 and the length of vector BT = L1. But since the orientation of the line segments is not known, there can be a lot of possibilities. Let's say we choose a particular orientation for line segment AT as (i, j, k), the 1st octant. Now we can move anywhere in space from T, but only by a distance L1, to find BT. This is where I am not sure how to move forward.
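The existence test discussed in this thread is the triangle inequality, and it is worth noting that when a solution exists it is generally not a finite list of points: the common endpoints T form a whole circle in the plane perpendicular to the line P0P1. A minimal sketch of the test (vector type and names are illustrative):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

static double Dist(const Vec3& a, const Vec3& b)
{
    double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// A common endpoint T with |P0 T| = L0 and |P1 T| = L1 exists iff
// |L0 - L1| <= |P0 P1| <= L0 + L1 (triangle inequality). When it does
// (and d > 0), the solutions form a circle: in the plane perpendicular
// to P0->P1 at distance t = (d^2 + L0^2 - L1^2) / (2 d) from P0, with
// radius r = sqrt(L0^2 - t^2).
bool CommonEndpointExists(const Vec3& p0, double l0,
                          const Vec3& p1, double l1)
{
    double d = Dist(p0, p1);
    return std::fabs(l0 - l1) <= d && d <= l0 + l1;
}
```

So rather than looping over candidate points, a function can return the circle itself (center, radius, and plane normal), from which any number of sample points can be generated by sweeping an angle.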
  11. Setting up Nvidia PhysX SDK 3.2

    I found the error: I was not defining WIN32 in the Preprocessor Definitions section of the project properties.
  12. Setting up Nvidia PhysX SDK 3.2

    Hi people, I get a runtime debug assertion failure when I am trying to set up a basic application using the latest Nvidia PhysX 3.2 SDK. I was using the documentation and the tutorial mentioned in this blog: [url="http://mmmovania.blogspot.com/2011/05/getting-started-with-physx-3.html"]http://mmmovania.blo...th-physx-3.html[/url] as the only guide. He does mention a small change required for the 3.2 SDK. This is where I get the assertion: [CODE] gPhysxFoundation = PxCreateFoundation(PX_PHYSICS_VERSION, gDefaultAllocatorCallback, gDefaultErrorCallback); [/CODE] The declarations are: [CODE] PxFoundation* gPhysxFoundation = NULL; static PxDefaultErrorCallback gDefaultErrorCallback; static PxDefaultAllocator gDefaultAllocatorCallback; [/CODE] And the assertion is: Expression: (reinterpret_cast<size_t>(ptr) & 15) == 0 in the file \include\extensions\pxdefaultallocator.h. According to the documentation, PhysX does implement a default version of the allocator class, and on the Windows platform it calls [b]_aligned_malloc(size, 16)[/b]. That matches what is happening: the allocation is not being aligned to 16 bytes. Does anyone know how to solve this issue? I am only starting to learn PhysX and I cannot move forward without solving it.
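For context, the assertion `(ptr & 15) == 0` is checking exactly the property below: that the returned pointer is 16-byte aligned, which PhysX requires of its allocator. A portable sketch of such an allocation, assuming a C++17 compiler (on Windows, PhysX's own default uses `_aligned_malloc(size, 16)` instead of `std::aligned_alloc`):

```cpp
#include <cstddef>
#include <cstdlib>

// Allocate 'size' bytes aligned to a 16-byte boundary, the alignment
// the PhysX assertion is verifying. std::aligned_alloc requires the
// size to be a multiple of the alignment, so round it up first.
void* AllocAligned16(std::size_t size)
{
    std::size_t rounded = (size + 15) / 16 * 16;
    return std::aligned_alloc(16, rounded);
}
```

If the assertion fires out of PhysX's own default allocator, the usual causes are a mismatched build configuration or missing platform defines (as noted in the follow-up post, the fix here turned out to be the missing WIN32 preprocessor definition), not a need to write a custom allocator.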
  13. Basic Ray tracing

    Oh god, I think I just realized I had been reading vertices from the wrong file. Yes, that quad was supposed to be a plane for the texture map that I had used in my rasterization program. I changed the file, and it does not contain the plane, but it still gives that white shade at the bottom. I think it might be due to the light direction or the shading equation; I shall check again.
  14. Basic Ray tracing

    [sharedmedia=gallery:images:2102] [sharedmedia=gallery:images:2103] Output 4 is what I get when I shift the camera position. For the same case, if I invert the z value of the ray direction, I get output 3. I had originally called it 'd': for output 4, d = -5, and for output 3, d = 5. There is still some mistake. I wanted to ask you: in the camera code you gave me, if we are performing all the calculations in world space, then why do we multiply by the view matrix to get the corners of the screen? Wouldn't that take you to camera space?
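On the view-matrix question, one common arrangement (offered as a sketch, not a claim about the code in this thread): the screen corners are first written down in camera space, and multiplying by the camera-to-world transform, which is the inverse of the view matrix, brings them into world space. Equivalently, if the camera's right/up/forward basis vectors are already world-space directions, a primary ray can be built directly from them without any matrix at all. The parameter names here, including the image-plane distance `d`, are illustrative:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 Add(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 Scale(Vec3 v, double s) { return { v.x * s, v.y * s, v.z * s }; }
static Vec3 Normalize(Vec3 v)
{
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return Scale(v, 1.0 / len);
}

// World-space primary ray direction for pixel (px, py): since right, up,
// and forward are world-space directions, their combination stays in
// world space. halfW/halfH are the half-extents of the image plane at
// distance d in front of the camera.
Vec3 PrimaryRayDir(Vec3 right, Vec3 up, Vec3 forward,
                   double d, double halfW, double halfH,
                   int px, int py, int width, int height)
{
    double u = ((px + 0.5) / width  * 2.0 - 1.0) * halfW;  // [-halfW, halfW]
    double v = (1.0 - (py + 0.5) / height * 2.0) * halfH;  // [-halfH, halfH]
    Vec3 dir = Add(Add(Scale(right, u), Scale(up, v)), Scale(forward, d));
    return Normalize(dir);
}
```

The sign flip observed between d = 5 and d = -5 is consistent with the forward axis pointing the wrong way: negating d mirrors the scene through the camera plane, which is exactly the inverted-z symptom described above.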
  15. Ray tracing intermediate images