
newMe

Member
  • Content Count

    37
  • Joined

  • Last visited

Community Reputation

249 Neutral

About newMe

  • Rank
    Member

Personal Information

  • Role
    Programmer
  • Interests
    Art
    Programming


  1. Thank you. The unorm option did not occur to me because I have never used it. I will give it a try.
  2. Hi. There is the well-known trick of swapping the near and far planes to spread precision evenly across the whole range of a depth buffer. It works with a float depth buffer and, to my knowledge, rests on two facts: a float stores smaller values with more precision, and a depth buffer stores a value proportional to the reciprocal of depth rather than z. With the planes swapped, those two effects roughly cancel out, and we get a buffer with good precision across the whole range. But with an orthographic projection we are left with just the float format's precision, and swapping the planes won't help. Is that right? And is there a remedy for the orthographic projection then? (The two mappings I mean are sketched below.) Thanks.
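     To illustrate, a rough sketch of the two depth mappings (standard D3D clip-space conventions assumed; the functions are mine, not from any API):

         // Perspective: NDC depth is hyperbolic in view-space z (a function
         // of 1/z), so reversing the planes pairs the small, high-precision
         // float values with the far end and precision evens out.
         float PerspectiveDepth(float z, float n, float f)
         {
             return (f / (f - n)) * (1.0f - n / z);
         }

         // Orthographic: NDC depth is linear in z, so swapping n and f only
         // flips the ramp; float precision stays concentrated near zero.
         float OrthographicDepth(float z, float n, float f)
         {
             return (z - n) / (f - n);
         }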
  3. Hi. Does anyone have an idea which would be faster: reading from a big texture, where each thread reads a single sample, or reading from a much smaller buffer, say 64 floats, where every thread reads all 64 floats? This runs in a compute shader (the two patterns are sketched below). Thanks.
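     Roughly the two access patterns I am comparing (a hypothetical sketch, not my actual shader):

         Texture2D<float>          gBigTexture  : register(t0);
         StructuredBuffer<float>   gSmallBuffer : register(t1); // 64 floats
         RWStructuredBuffer<float> gOut         : register(u0);

         [numthreads(64, 1, 1)]
         void CSMain(uint3 dtid : SV_DispatchThreadID)
         {
             // Option A: each thread reads its own sample from the big texture.
             float a = gBigTexture.Load(int3(dtid.x, 0, 0));

             // Option B: every thread walks the same small 64-float buffer,
             // which should stay cache-resident after the first accesses.
             float b = 0.0f;
             for (uint i = 0; i < 64; ++i)
                 b += gSmallBuffer[i];

             gOut[dtid.x] = a + b;
         }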
  4. newMe

    Software to make space frame

     Yes, that is right, I deleted this cube vertex by vertex. I hope you don't mind if I ask another question. While I was playing around with the interface I managed to create vertices and edges in pairs, probably mirrored across some axis, but I don't remember how I did it. Is there a way to create geometry with a mirror option, like in ZBrush: not just mirroring an object, but doing it in edit mode?
  5. newMe

    Software to make space frame

     OK, that looks like what I wanted, but is it possible to start adding vertices to an empty scene? Edit mode only becomes active after creating some mesh, and that mesh gets exported along with the other geometry.
  6. newMe

    Software to make space frame

     My mistake. By standalone I meant an edge that belongs to a structure but does not belong to any face, not a separate edge. It is like a frame structure. It is not for visualization purposes, but rather for a physics simulation. I did not even look at Blender, thinking it was similar to ZBrush, and I cannot think of any way to do it in ZBrush. Thank you for the nice tip. If it really works that way, and if Blender can export to some commonly used formats, OBJ being ideal for me, it will be just what I wanted.
  7. Hi, has anybody come across any software for making a space frame? Or maybe there is some script or plugin to do it in 3ds Max, say, add edges to a poly mesh; I would then read them from the file, no problem. The thing is, in 3ds Max you cannot just add an edge that is not coherent with the face-edge coupling. I need to be able to add standalone edges. Some workaround in 3ds Max would be preferable, but other software that can do this is fine too. Thanks.
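     (For what it is worth, the OBJ format itself can store standalone edges with "l" line statements, so reading them from the file is not the problem; a hand-written example:

         v 0 0 0
         v 1 0 0
         v 1 1 0
         l 1 2
         l 2 3

     It is authoring and exporting them from a modeling tool that I am stuck on.)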
  8. This problem started when I changed the blending stage settings. But before that I had another problem: the transparent edges of my sprites were eating through other sprites, like in this thread: http://www.gamedev.net/topic/524664-transparent-png-on-texture/ AlphaToCoverageEnable worked fine, but then I could not adjust transparency. I added some randomness and rotation to the sprites, but when I use Bandicam these adjustments somehow stop working, I don't know why. With them it looks a bit better; I just cannot record it. The flickering is worse with Bandicam too. That's what I have got so far. I have sort of found a workaround for the flickering: when fading, I render the smoke to a texture and draw that with a sprite, and I changed the smoke to spheres, so instead of expanding the particles to sprites in the geometry shader I expand them to spheres in the domain shader. It is strange, though, how rendering to a texture differs from rendering to the back buffer. It looks like the same process, so why does it flicker in the back buffer?
  9. OK, I have more or less finished it. The idea was to take one pair of append buffers and use them as "fragment" buffers that can spawn particles into other append buffers based on the distance travelled (a rough sketch of the spawn step is below). The particles are then expanded to sprites. The sphere is expanded entirely in the domain shader from a single vertex, and I move its vertices with a displacement map, so the whole process takes place on the GPU only. I don't like the way the sprites interact, though, because instead of blending with each other they flicker and so on.
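     The spawn step, roughly (a simplified sketch; the names and the threshold are placeholders, not my exact code):

         struct Particle { float3 pos; float travelled; };

         StructuredBuffer<Particle>       gFragments : register(t0);
         AppendStructuredBuffer<Particle> gSpawned   : register(u0);

         static const float SPAWN_DISTANCE = 0.5f;

         [numthreads(64, 1, 1)]
         void SpawnCS(uint3 id : SV_DispatchThreadID)
         {
             Particle frag = gFragments[id.x];
             // Once a fragment has travelled far enough, emit a fresh particle;
             // a later pass expands the appended particles to sprites.
             if (frag.travelled >= SPAWN_DISTANCE)
             {
                 Particle p;
                 p.pos       = frag.pos;
                 p.travelled = 0.0f;
                 gSpawned.Append(p);
             }
         }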
  10. OK, though it can take up to a couple of days. It is part of a somewhat bigger thing, so I am not testing anything right now, just thinking about which design path to take. And I don't care about the order; I just want to add new items to the whole bunch, and then they are all treated on equal terms in the next dispatch.
  11. Thank you for the reply. I just did not want to take unnecessary steps. If the answer had been no, I would have tried some workaround; if nobody had answered, I would have tried it myself and shared the result. Luckily you had the answer. Thank you again.
  12. Hi. Is it possible to append to an append buffer several times in one thread, say in a loop? All the examples I have seen just consume and append, or append a new item, but only once per thread. For example, would something like this work?

          Item newItem;
          for (uint i = 0; i < N; ++i)
          {
              newItem.param = i;
              AppendBuffer.Append(newItem);
          }

      Thanks.
  13. Hi. Has anyone noticed how the InterlockedExchange function, performed on floats, increases the size of a compiled shader? Even a quite lengthy shader compiles to a few hundred bytes of code, but adding a single InterlockedExchange blows it up to several thousand. Is this almost tenfold increase normal, and should the function then not be used on floats (it being, I guess, the single atomic function allowed on floats)? The kind of usage I mean is sketched below. Thanks.
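     For reference, the kind of usage I mean (a minimal sketch, not my actual shader):

         groupshared float gsValue;

         [numthreads(64, 1, 1)]
         void CSMain(uint3 gtid : SV_GroupThreadID)
         {
             float previous;
             // InterlockedExchange is the one interlocked intrinsic with a
             // float overload; this single call is what inflates the bytecode.
             InterlockedExchange(gsValue, 1.0f, previous);
         }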
  14. Actually, the only reason for me to have this buffer is to perform an atomic float add, because DirectCompute does not provide that option: http://www.gamedev.net/topic/613648-dx11-interlockedadd-on-floats-in-pixel-shader-workaround/ So for now I have to settle for an integer atomic add, though it is not ideal. Maybe I will figure out something later (the compare-exchange workaround is sketched below).
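     The usual workaround I have seen is a compare-exchange loop over the raw bits (a sketch, assuming a RWByteAddressBuffer; not tested):

         RWByteAddressBuffer gAccum : register(u0);

         void AtomicAddFloat(uint byteAddr, float value)
         {
             uint expected = gAccum.Load(byteAddr);
             uint original;
             [allow_uav_condition]
             for (;;)
             {
                 // Reinterpret the bits, add, and try to publish the result.
                 uint desired = asuint(asfloat(expected) + value);
                 gAccum.InterlockedCompareExchange(byteAddr, expected, desired, original);
                 if (original == expected)
                     break;            // our add won the race
                 expected = original;  // somebody else wrote first; retry
             }
         }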
  15. I have another buffer that is coupled with a staging resource; I can write to it and read back any result on the CPU. It is working, I have no doubt about it; it has been checked against other buffers, so I can read pretty much any value of any variable at any point. All the other buffers work as expected; I have no problems with them. The byte address buffer gives me a hard time with everything but Store and Load: I cannot initialize it, a value does not stay there, and the member functions do not work. It behaves more like shared memory than a buffer. That is why I thought there might be something wrong with the implementation. I know there are some issues with writing to and reading from a byte address buffer, and attention must be paid to the interpretation of the data (the addressing convention I rely on is sketched below), but I still think that is not the problem.
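     Just in case, the addressing convention I rely on (a trivial sketch): ByteAddressBuffer offsets are in bytes, not element indices, and must be 4-byte aligned.

         RWByteAddressBuffer gRaw : register(u0);

         [numthreads(64, 1, 1)]
         void CSMain(uint3 id : SV_DispatchThreadID)
         {
             uint addr = id.x * 4;         // byte offset of the i-th 32-bit slot
             uint v    = gRaw.Load(addr);  // raw 32-bit load
             gRaw.Store(addr, v + 1);      // raw 32-bit store
         }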