
Port HLSL to x86 Assembly


Is there a tutorial or other literature on how to port HLSL to x86 assembly? Is this possible?

 

Thanks!

 


Could you talk a little about why you want to do this? HLSL code you write doesn't run on the CPU to begin with, so this doesn't seem to make much sense.


I'm not aware of any HLSL compiler that will output x86 assembly.

 

What you could maybe do is use the Microsoft HLSL compiler to output HLSL IR, convert that IR to GLSL, then use Mesa to convert the GLSL to LLVM IR, and finally use LLVM to emit x86 assembly.
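If it helps, the first step (getting the Microsoft compiler's output into your hands) can be done from C++ with the D3DCompiler API. A minimal sketch, where the file name "shader.hlsl" and entry point "PSMain" are placeholders, and the later bytecode-to-GLSL and LLVM steps would need external tools not shown here:

```cpp
#include <d3dcompiler.h>
#pragma comment(lib, "d3dcompiler.lib")

// Compile an HLSL pixel shader to D3D bytecode and dump an assembly-style listing.
// "shader.hlsl" and "PSMain" are placeholder names.
void DumpShaderBytecode()
{
    ID3DBlob* bytecode = nullptr;
    ID3DBlob* errors   = nullptr;
    HRESULT hr = D3DCompileFromFile(L"shader.hlsl", nullptr, nullptr,
                                    "PSMain", "ps_5_0", 0, 0,
                                    &bytecode, &errors);
    if (FAILED(hr))
        return;  // errors->GetBufferPointer() holds the compiler messages

    // Human-readable D3D bytecode disassembly -- note: D3D asm, not x86.
    ID3DBlob* listing = nullptr;
    D3DDisassemble(bytecode->GetBufferPointer(), bytecode->GetBufferSize(),
                   0, nullptr, &listing);
    // listing->GetBufferPointer() now points at the text.
}
```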


Could you talk a little about why you want to do this? HLSL code you write doesn't run on the CPU to begin with, so this doesn't seem to make much sense.

 

I need to recreate on the CPU some of the calculations done on the GPU.

 

So, taking advantage of the fact that I am familiar with x86, I wanted to port the HLSL to assembly.

 

I'm not aware of any HLSL compiler that will output x86 assembly.

 

What you could maybe do is use the Microsoft HLSL compiler to output HLSL IR, convert that IR to GLSL, then use Mesa to convert the GLSL to LLVM IR, and finally use LLVM to emit x86 assembly.

 

Thanks! I will take a look.


It depends somewhat on what kinds of operations you're using, but a fair number of HLSL instructions map fairly directly to different flavors of SSE -- if you can use SSE, that will be your highest-performing option. Otherwise, you need a scalar fallback using normal x86 instructions. You can use intrinsic functions for the former.
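For example, HLSL's dot() on a float3 maps almost directly onto a single SSE4.1 intrinsic, with an ordinary scalar version as the fallback. A rough sketch, not tied to any particular shader:

```cpp
#include <smmintrin.h>  // SSE4.1, for _mm_dp_ps

// HLSL: float d = dot(a.xyz, b.xyz);
// SSE4.1 version: the 0x71 mask multiplies the x/y/z lanes and writes the sum
// into the lowest lane of the result.
inline float dot3_sse41(__m128 a, __m128 b)
{
    return _mm_cvtss_f32(_mm_dp_ps(a, b, 0x71));
}

// Scalar fallback using ordinary x86 float math.
inline float dot3_scalar(const float a[3], const float b[3])
{
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}
```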

 

Now, that said -- you don't need to port the HLSL to assembly language. What you need to do is understand what the HLSL is accomplishing and then write a version of the same algorithm in whatever language you choose. It doesn't have to be x86 or SSE assembly/intrinsic functions.

 

How much HLSL is it? Can you post the code?


It depends somewhat on what kinds of operations you're using, but a fair number of HLSL instructions map fairly directly to different flavors of SSE -- if you can use SSE, that will be your highest-performing option. Otherwise, you need a scalar fallback using normal x86 instructions. You can use intrinsic functions for the former.

Now, that said -- you don't need to port the HLSL to assembly language. What you need to do is understand what the HLSL is accomplishing and then write a version of the same algorithm in whatever language you choose. It doesn't have to be x86 or SSE assembly/intrinsic functions.

How much HLSL is it? Can you post the code?

It makes sense, but there is a lot of code, which is why I wanted to port it. Maybe I'd have to replace specific HLSL math functions, but perhaps there is already a library out there?


DX11 WARP runs very well (slowly, but that's not the problem, right?); however, that's not quite what the OP asked, I guess.
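For what it's worth, opting into WARP is just a matter of requesting the WARP driver type when creating the D3D11 device; the shaders then run on the CPU inside Microsoft's software rasterizer. A minimal sketch:

```cpp
#include <d3d11.h>
#pragma comment(lib, "d3d11.lib")

// Create a D3D11 device backed by the WARP software rasterizer instead of the GPU.
// Shaders bound to this device are executed on the CPU by WARP.
bool CreateWarpDevice(ID3D11Device** device, ID3D11DeviceContext** context)
{
    D3D_FEATURE_LEVEL featureLevel;
    HRESULT hr = D3D11CreateDevice(
        nullptr,                // default adapter
        D3D_DRIVER_TYPE_WARP,   // software (WARP) driver
        nullptr, 0,             // no software module, no creation flags
        nullptr, 0,             // default feature levels
        D3D11_SDK_VERSION,
        device, &featureLevel, context);
    return SUCCEEDED(hr);
}
```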



DX11 WARP runs very well (slowly, but that's not the problem, right?); however, that's not quite what the OP asked, I guess.

Well... he said "I need to recreate on the CPU some of the calculations done on the GPU", and I'm not sure exactly what he has in mind, to be honest. I just thought I'd mention it as an option, but you are correct that it is not an HLSL-to-x86-assembly converter in the sense that the assembly could then be extracted and used elsewhere, though I imagine somewhere deep inside there is an HLSL bytecode-to-assembly converter. So, without more info from the OP, I'm not really sure.


Thanks to all.

 

 

DX11 WARP runs very well (slowly, but that's not the problem, right?); however, that's not quite what the OP asked, I guess.

Well... he said "I need to recreate on the CPU some of the calculations done on the GPU", and I'm not sure exactly what he has in mind, to be honest. I just thought I'd mention it as an option, but you are correct that it is not an HLSL-to-x86-assembly converter in the sense that the assembly could then be extracted and used elsewhere, though I imagine somewhere deep inside there is an HLSL bytecode-to-assembly converter. So, without more info from the OP, I'm not really sure.

 

 

I need to recreate all shader instructions on the CPU. My idea was to port all the opcodes generated by the shader to x86 and run them on the CPU to replicate it.

 

HLSL bytecode and x86 seem very similar. I am not familiar with HLSL, but I code in ASM; that's why I was thinking of porting it.
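To make that concrete, a single D3D bytecode instruction such as mad r0, r1, r2, r3 (r0 = r1 * r2 + r3, component-wise) could be emulated with SSE roughly like this. The register-file struct is a made-up stand-in for whatever interpreter state you would build around it:

```cpp
#include <xmmintrin.h>  // SSE

// Hypothetical emulated register file: each shader register is four 32-bit floats.
struct ShaderRegisters
{
    __m128 r[16];  // temporary registers r0..r15
};

// Emulate the D3D bytecode "mad dst, a, b, c" opcode: dst = a * b + c, per component.
inline void emulate_mad(ShaderRegisters& regs, int dst, int a, int b, int c)
{
    regs.r[dst] = _mm_add_ps(_mm_mul_ps(regs.r[a], regs.r[b]), regs.r[c]);
}
```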


HLSL isn't too far syntactically from C++. If it's just some calculations you need, you could probably put that code in a separate file and #include it into both your HLSL and C++ source files. You might need a bit of macro magic to make everything work.
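A minimal sketch of that macro magic, assuming a shared header named shared_math.h (the file name and the blend_color function are invented for illustration). The #ifdef __cplusplus block supplies C++ stand-ins for the HLSL built-ins, so the function at the bottom compiles unchanged in both languages:

```cpp
// shared_math.h -- included by both the .hlsl file and the C++ code.
#ifdef __cplusplus
// On the C++ side, provide stand-ins for the HLSL built-in types and intrinsics used below.
struct float3 { float x, y, z; };
inline float3 operator*(float3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }
inline float3 operator+(float3 a, float3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
inline float3 lerp(float3 a, float3 b, float t) { return a * (1.0f - t) + b * t; }
#endif

// Compiles unchanged as HLSL and as C++.
inline float3 blend_color(float3 a, float3 b, float t)
{
    return lerp(a, b, t);
}
```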


There are sampling instructions which do various kinds of texture filtering, and then there are vector instructions, fences, and many other things, which you'd have to emulate in software.
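For a sense of what emulating even one sample instruction involves, here is a rough bilinear-filtering sketch for a single-channel texture. The Texture struct and texel layout are assumptions, and wrap modes other than clamp, mipmaps, and sRGB are all ignored:

```cpp
#include <algorithm>  // std::clamp (C++17)
#include <cmath>
#include <vector>

// Hypothetical CPU-side texture: single-channel float texels, row-major.
struct Texture
{
    int width = 0, height = 0;
    std::vector<float> texels;  // width * height values

    float at(int x, int y) const  // clamp addressing mode only
    {
        x = std::clamp(x, 0, width - 1);
        y = std::clamp(y, 0, height - 1);
        return texels[y * width + x];
    }
};

// Roughly what a bilinear "sample" instruction does for one channel.
float sample_bilinear(const Texture& tex, float u, float v)
{
    float x = u * tex.width  - 0.5f;  // map UV to texel space
    float y = v * tex.height - 0.5f;
    int x0 = (int)std::floor(x);
    int y0 = (int)std::floor(y);
    float fx = x - x0, fy = y - y0;   // fractional weights
    float top    = tex.at(x0, y0)     * (1 - fx) + tex.at(x0 + 1, y0)     * fx;
    float bottom = tex.at(x0, y0 + 1) * (1 - fx) + tex.at(x0 + 1, y0 + 1) * fx;
    return top * (1 - fy) + bottom * fy;
}
```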

 

HLSL compiles to D3D bytecode, which still isn't really "assembly", and that gets compiled (JIT) into the real hardware instruction set. Do you want to emulate this in software, or what is the reason? I don't understand how the fact that you code in ASM helps you, since you'll have to get familiar with HLSL (which is C++-like) anyway.


There are sampling instructions which do various kinds of texture filtering, and then there are vector instructions, fences, and many other things, which you'd have to emulate in software.

HLSL compiles to D3D bytecode, which still isn't really "assembly", and that gets compiled (JIT) into the real hardware instruction set. Do you want to emulate this in software, or what is the reason? I don't understand how the fact that you code in ASM helps you, since you'll have to get familiar with HLSL (which is C++-like) anyway.

I am disassembling the shader code, and that's why they seem very similar. I will follow your advice.
