

zerorepent

Member Since 25 Sep 2007
Offline Last Active Jul 18 2014 03:27 PM

Topics I've Started

cl::vector vs std::vector with cl::Event error

02 April 2013 - 01:26 AM

Hi!

I am trying to figure out why this code generates an error (unhandled exception, access violation reading location...) when releasing the events if __NO_STD_VECTOR is defined, i.e. when using OpenCL's built-in, simpler vector class, while it works perfectly fine with std::vector. I've been scratching my head about this issue for quite some time now. It is as if the destructor of cl::Event is called twice: once in the destructor of the vector and once when leaving the scope of main. But that doesn't feel like the intended behaviour, considering vectors of cl::Event are used everywhere in the C++ API of OpenCL, which in turn makes me believe I've overlooked an obvious error in the code. Still, it works with std::vector and not cl::vector...

#define __NO_STD_STRING
#define __NO_STD_VECTOR

#include <cstdio>  // printf
#include <CL/cl.hpp>

const __int64 FILE_LENGTH = 1024*128;
#define NUM_WRITES 2

int main()
{
	
	cl::CommandQueue commandQueues;
	cl::Program program;
	cl::Context context;
	cl_device_id device_id; 
	VECTOR_CLASS<cl::Platform> platforms;
	VECTOR_CLASS<cl::Device> devices;

	cl_int error;
	error = cl::Platform::get(&platforms);

	if(error !=CL_SUCCESS)//handle error better
	{
		return false;
	}

	cl_device_type type = CL_DEVICE_TYPE_GPU;
#ifdef ENCODE_CPU
	type = CL_DEVICE_TYPE_CPU;
#endif

	error = platforms[0].getDevices (type, &devices);
	if(error != CL_SUCCESS) //handle error better
	{
		return false; 
	}

	context =  cl::Context(devices);

	cl::CommandQueue queue = cl::CommandQueue(context, devices[0]);


	VECTOR_CLASS<cl::Event> m_Events;
	cl::Buffer destinationbuffer[NUM_WRITES];
	int n = 0;
	for(int i = 0;i<NUM_WRITES;i++)
	{
		destinationbuffer[i] = cl::Buffer(context,CL_MEM_ALLOC_HOST_PTR|CL_MEM_READ_ONLY,FILE_LENGTH,0,&error);
		if(error != CL_SUCCESS) //handle error better
		{
			printf("error");
		}
		m_Events.push_back(cl::Event());
		error = queue.enqueueFillBuffer<int>(destinationbuffer[i],n,0,FILE_LENGTH,0,&m_Events[i]);
		if(error != CL_SUCCESS) //handle error better
		{
			printf("error");
		}
	}
	cl::WaitForEvents(m_Events);
	m_Events.clear();
	return 0;
}

(removing m_Events.clear() doesn't change anything)

 

I've been asked not to use std::vector or std::string, so just going with what works is not really an option.

 

The error occurs with both the Intel and the AMD OpenCL SDKs.


Filling opencl buffer/image with zeros

20 March 2013 - 03:33 AM

Hi!

What is the right way to fill an OpenCL buffer (or image) with zeros?

 

cl_int errorCode;
float arg = 0; //or a larger datastructure/type
errorCode = m_Queue->enqueueFillBuffer<float>(destinationBuffer,arg,0,size,NULL,0);

This operation seems really wasteful, but maybe I'm wrong? Would it perhaps be better to map/unmap the buffer and memset it with zeros?

 

For example, I want to create a buffer of size x, fill it with z amount of data (z < x), and fill the rest of the buffer with zeros. Is the only option then to first write the z data, and then make a fill call like the one above with an offset? Is it possible to initialize a buffer already filled with zeros? Googling only seems to turn up information about zero-copy memory (which is interesting in itself, but not what I'm looking for).


How to create ID3D11Device1?

24 October 2012 - 07:11 AM

Hi!
I am trying to use Direct2D with DirectX 11.1 at feature level 11, but I quickly ran into problems. When creating a DirectX 10.1 device you use D3D10CreateDevice1, but a similar function for DirectX 11.1 doesn't seem to exist. I looked at this example http://msdn.microsof...9(v=vs.85).aspx but they use ComPtrs, and I am not that familiar with using them.

They declare
ComPtr<ID3D11Device> device;
ComPtr<ID3D11DeviceContext> context;
use the regular D3D11CreateDevice function, and then use
DX::ThrowIfFailed(
		device.As(&m_d3dDevice)
		);
where m_d3dDevice is an ID3D11Device1. Is that a regular typecast (in the ComPtr way)? If so, is it safe to use
m_d3dDevice = (ID3D11Device1*)device;
I do not use ComPtrs in my project, so in the example above both would be "regular" pointers.

OpenGL shader reflection equivalent?

13 January 2012 - 03:54 AM

Hi
Does OpenGL have a function similar to DirectX's shader reflection? I've been thinking about trying OpenGL a bit. How do people usually keep track of shader variables and the like? Depending on the situation I guess one can hardcode it, but that loses a lot of flexibility; another solution might be some kind of scripting. I'm very curious about how people usually handle this kind of work.

Input layout problem (DX11) after Visual Studio 11 preview installation

16 September 2011 - 04:12 PM

Hi
I get a really weird error and I can't figure out what's wrong, to be honest. I installed the Visual Studio 11 developer preview today (I am starting to think that might have been a mistake). After I installed it I started to get this error written to my output, and I honestly have no idea what to make of it. An old project of mine gets the same error, but crashes after a few render calls. This only occurs when I use the debug flag during creation of the device; my old project worked with the debug flag before without crashing. It happens even if I use VS2010 now, so my guess is that VS11 updated some files for VS2010 as well :/

I still get the error output after removing everything but the initialization of DirectX, the creation of one vertex shader, and the creation of the input layout. Has anyone seen this error before? To be honest, I am starting to think there's a bug in some file that the Visual Studio 11 preview updated or installed, or that I need to set more flags or settings somewhere in order to use the debug flag during device creation.


On input layout creation I get the following error in the output, even though the CreateInputLayout call returns S_OK:
D3D11: ERROR: ID3D11Device::CreateInputLayout: The provided input signature expects to read an element with SemanticName/Index: '(null)'/9714417, but the declaration doesn't provide a matching name. [ STATE_CREATION ERROR #163: CREATEINPUTLAYOUT_MISSINGELEMENT ]



D3D11_INPUT_ELEMENT_DESC polygonLayout[2];
	unsigned int numelements = 2;
	polygonLayout[0].SemanticName = "POSITION";
	polygonLayout[0].SemanticIndex = 0;
	polygonLayout[0].Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
	polygonLayout[0].InputSlot = 0;
	polygonLayout[0].AlignedByteOffset = 0;
	polygonLayout[0].InputSlotClass = D3D11_INPUT_PER_VERTEX_DATA;
	polygonLayout[0].InstanceDataStepRate = 0;

	polygonLayout[1].SemanticName = "COLOR";
	polygonLayout[1].SemanticIndex = 0;
	polygonLayout[1].Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
	polygonLayout[1].InputSlot = 0;
	polygonLayout[1].AlignedByteOffset = D3D11_APPEND_ALIGNED_ELEMENT;
	polygonLayout[1].InputSlotClass = D3D11_INPUT_PER_VERTEX_DATA;
	polygonLayout[1].InstanceDataStepRate = 0;

HRESULT hr = m_Device->CreateInputLayout(polygonLayout, numelements,
		_VShader->m_ShaderBuffer->GetBufferPointer(),
		_VShader->m_ShaderBuffer->GetBufferSize(), &m_layout);

The HLSL I am currently testing on is very simple


cbuffer MatrixBuffer
{
	matrix worldMatrix;
	matrix viewMatrix;
	matrix projectionMatrix;
};


//////////////
// TYPEDEFS //
//////////////
struct VertexInputType
{
    float4 position : POSITION;
    float4 color : COLOR;
};

struct PixelInputType
{
    float4 position : SV_POSITION;
    float4 color : COLOR;
};


////////////////////////////////////////////////////////////////////////////////
// Vertex Shader
////////////////////////////////////////////////////////////////////////////////
PixelInputType ColorVertexShader(VertexInputType input)
{
    PixelInputType output;
    

	// Change the position vector to be 4 units for proper matrix calculations.
    input.position.w = 1.0f;

	// Calculate the position of the vertex against the world, view, and projection matrices.
    output.position = mul(input.position, worldMatrix);
    output.position = mul(output.position, viewMatrix);
    output.position = mul(output.position, projectionMatrix);
    
	// Store the input color for the pixel shader to use.
    output.color = input.color;
    
    return output;
}




////////////////////////////////////////////////////////////////////////////////
// Pixel Shader
////////////////////////////////////////////////////////////////////////////////
float4 ColorPixelShader(PixelInputType input) : SV_TARGET
{
    return input.color;
}

Everything renders correctly (in my current project), but every time I call DrawIndexed the following is written to the output. I think this is due to the previous "error" during input layout creation.

Invalid parameter passed to C runtime function.
Invalid parameter passed to C runtime function.
Invalid parameter passed to C runtime function.
Invalid parameter passed to C runtime function.
D3D11: ERROR: ID3D11DeviceContext::DrawIndexed: Input Assembler - Vertex Shader linkage error: Signatures between stages are incompatible. The input stage requires Semantic/Index ((null),6502817) as input, but it is not provided by the output stage. [ EXECUTION ERROR #342: DEVICE_SHADER_LINKAGE_SEMANTICNAME_NOT_FOUND ]
Invalid parameter passed to C runtime function.
Invalid parameter passed to C runtime function.
Invalid parameter passed to C runtime function.
Invalid parameter passed to C runtime function.
Invalid parameter passed to C runtime function.
Invalid parameter passed to C runtime function.
D3D11: ERROR: ID3D11DeviceContext::DrawIndexed: Vertex Shader - Pixel Shader linkage error: Signatures between stages are incompatible. The input stage requires Semantic/Index ((null),6511892) as input, but it is not provided by the output stage. [ EXECUTION ERROR #342: DEVICE_SHADER_LINKAGE_SEMANTICNAME_NOT_FOUND ]
If I use the code from this tutorial (changed so that the debug flag is set), the same error occurs:
http://www.rastertek.../dx11tut04.html
