BloodOrange1981

Members
  • Content count

    75
  • Joined

  • Last visited

Community Reputation

389 Neutral

About BloodOrange1981

  • Rank
    Member
  1. As someone who knows a bit about the tradeoffs of architectures, and seeing how gaming is currently tethered to x86 (whether XBONE, PS4 or PC): what are your thoughts on RISC-V? And do you believe hardware adoption of it would be beneficial in any way beyond power consumption?
  2. Hi! I would like any suggestions or advice in hunting down a useful file system structure definition language. That may be a misleading term, so allow me to elaborate.

     Currently I'm using a proprietary language built where I work, with its own parsing system. The language needs to be readable by non-tech types, so that when a project starts and we discuss the desired file system structure, they can scan over it easily. For example:

     ```
     |-- reference: ->ref, lock, sync=everyDay
         |-- initialPlan          # This will be for ops
         |-- restore: lock=datasys
             |-- planref: lock
                 |-- <app_code>
                     |-- reference
                 |-- <app>: lock, namerule=[a-z]{2}\d{3}[a-z]?
                     |-- reference
                 |-- <app_section>: lock, namerule=[a-z]{2}\d{3}[a-z]?_\d{4}[a-z]?
                     |-- reference
                     |-- ux
     ```

     This would generate a file system structure where the parts within the angled brackets differ by codenames for the different sections/apps we are working on, and following the colons we define "rules" for access and naming conventions of the folders. If you wanted to access the folder reference under the app_code folder, it would follow this path: reference/restore/planref/app_code/reference/

     If any of that doesn't make sense, please ask. The question is: is there an existing standard that can be used for the same task? We're using our own parser and it isn't the most robust of things, so an 'industry standard' with a variety of parsers, syntax highlighting schemes and/or extensions from the community that uses it would be most welcome. The OS is Linux-based.
  3. Back in 2010 when I signed up for Twitter, it seemed great. I could follow many people in the games industry (as well as people working in movies, art, comics etc. in a variety of roles), and it seemed like a haven for sharing news and ideas; I reached out and had convos with many people who helped me in some way or another. 5 years later, and social networking seems to have devolved into echo chambers of disgust, griping about big problems without actually proposing *doing* anything about them, and/or a platform for crowd-bullying or shaming people. Additionally it's quite a time-eater, but you have to go through the chaff to get the wheat, so time-limiting programs like LeechBlock don't do a lot to help.

     So, to adjust to a post-Twitter personal climate, what are good/great mailing lists or feeds to sign up to for gamedev and creative industries stuff? What about tech? These kinds of subjects:

     Movies - making, reviews, cinematography etc.
     Gamedev - C, C++, C#, tools like Unity/Unreal, graphics (realtime and offline)
     Art/sketches - I know DeviantArt and ArtStation, any more?

     Thanks in advance
  4. Character silhouette visible through geometry problem

    Thanks, I'll try it!
  5. Hi Gamedev community, my problem for today is trying to render a character's silhouette through geometry when there is any occlusion between the camera's POV and the character mesh. I understand how to detect the occlusion (if the vector/ray between the character and the camera intersects other geometry first) and then reorder the rendering calls with depth culling turned off for the character's outline (or composite the outline onto the final image as a kind of post-process), but what if half the character is visible and half the character isn't? E.g. when turning a corner, or half occluded.

     Just so you know, I'm implementing this in Unity, but I'd like a more high-level approach to the problem so that in the future I can try to apply it to other engines and/or rendering APIs. Thanks!
  6. I definitely agree. It was an important lesson, so I'm glad it only took about 40 minutes of coding to really discover this. With regards to the actual loading, Assimp does all of it via

     ```cpp
     Assimp::Importer importer;
     const aiScene* scene = importer.ReadFile(directory + fileName, flags);
     ```

     and the parts I was trying to multithread depend on OpenGL object creation for buffers and textures. I don't really want to go through the tedium of "rolling my own" importer to parse various 3D file formats, so I will have to make do with my code as it stands. Thanks!
  7. Thanks for all the feedback and insight so far. It really astounds me how articulate people can get about programming and the merits and demerits of approaches. That in itself is quite inspiring!
  8. Hi all. I've been investing quite some time in reading about and experimenting with multithreading, in C++ and in general, and it seems lock-free (or supposedly wait-free) data structures and algorithms come up a lot. A good understanding of the new memory model in C++ and of atomics seems necessary for any kind of lock-free programming, and after reading the top response in this thread:

     http://stackoverflow.com/questions/21070747/lock-free-data-structures-in-c-just-use-atomics-and-memory-ordering

     is it REALLY necessary to know much about lock-free stuff? Or can I just skip it for the time being, knowing that lock-free programming is a "thing", and leave it at that? Thanks!
  9. Thanks. Is there any way of sharing a context across threads? I imagine if you create multiple contexts across multiple threads, then created resources will be available only on that thread, i.e.

     ```cpp
     // I create a VAO on thread 1
     glGenVertexArrays(1, &myVAO);

     // I try to use the created resource on another thread, but OpenGL
     // will get very upset and not know what I'm talking about, as it
     // is a separate context
     glBindVertexArray(myVAO);
     ```

     Am I barking mad?
  10. OpenGL Books and Tutorial Recommendation

    OpenGL Superbible is a GREAT book for modern OpenGL. The 6th edition introduces modern features like tessellation and geometry shaders at an early stage, and tends to use drawing commands such as glDrawElements and glDrawArrays rather than data wrapped in "batches" as in the 5th edition. Well worth your money.
  11. Hi again all. I'm a bit of a n00b when it comes to multithreading, but I thought I might be able to take advantage of it for mesh loading using Assimp. Meshes are made up of sub-meshes, so I thought that if I divided the meshes up between cores for scalability and let the threads do their thing, there wouldn't be a problem. Reading from the Assimp scene object isn't a problem, but the generation of vertex array objects on the OpenGL side is an issue.

     ```cpp
     GLuint InitVAO(){
         GLuint VAO;
         glGenVertexArrays(1, &VAO);
         return VAO;
     }
     ```

     In the above case, when called from each thread I get 0xcc generated for each submesh. This is obviously bad! Is there something about OpenGL not being thread-safe that I don't know about? As I said, I'm a bit n00bish with multithreading, so sorry if this is a facepalm-able question.
  12. Thank you very much. I saw the Keith Lantz links before, but all the derivations and mathematical breakdowns made my head hurt. I can understand your explanations a lot better! i.e. "complex conjugate in this case is multiplying by -1". If I have any more problems I'll let you know! Thanks again
  13. Just wanted to see what other people's experience has been with what I'm about to talk about.

      It's safe to assume that most people on this board took an interest in programming when they found out it was a way to make games, or that they liked programming and problem solving and also happened to like games, so they decided to get serious about writing code. Buy a few books, search the web, maybe sign up for a degree - everyone's journey is a little different.

      However, what I found personally was that after the initial discovery phase I became mad about computer science itself, rather than making things, which was my original target. Programming languages, algorithms, CPUs, the myriad of different ways to do one thing, object-oriented programming vs functional, toy projects to test different ways of doing things, assembly and computer architecture, operating systems etc.

      It seemed a new universe, one where I just wanted to know more and more about esoteric data structures and classic CS problems: spending time on YouTube watching Stanford lecture videos and hunting down corresponding course literature online, almost having a nerdgasm when Coursera and Udacity came onto the scene and I could get tuition in GPGPU programming at no cost, looking up programmer job interviews online and really getting my teeth into coding problems.

      I had totally lost sight of my original intention. Only in the last couple of years have I regained my "let's make stuff!" mojo, making a few graphics demos and looking for interesting projects to contribute to. Since returning to England from Japan with games industry experience under my belt and looking for work, I've also been applying for jobs at companies that maybe 4 years ago I would have died to work for - e.g. Amazon, Google - for the amount of interesting problems you'd have to work on daily.

      However, I've realised today that the "software engineer/code enthusiast" part of me is pretty dead. I just want to make awesome experiences and toys for people ("experience" as in a self-contained game, as opposed to a smooth web browsing experience or smooth streaming of video). Subjects that enthused me a few years ago strike me as pretty "meh" today, and I'm wondering about an impending interview I have tomorrow with a big brand company as a software engineer.

      I was just wondering how other people's personal obsessions with things related to what we do have fluctuated over the years - were you an algorithm junkie too? Did you always concentrate on making things and learning where necessary, i.e. "I want to make an AI for my game so I'd best learn about traversing graphs now", rather than "wow, graph theory is so cool, let's investigate!"?

      Also, why do you think this happens? Is there some common psychological fallacy/condition at work here?
  14. Dear Gamedev.net-aratti,

      I've spent the last week or so rendering a simple ocean using Gerstner waves, but I was having issues with tiling, so I decided to start rendering it "properly" and dip my toes into the murky waters of rendering a heightfield using an iFFT.

      There are plenty of papers explaining the basic gist:

      1) calculate a frequency spectrum
      2) use this to create a heightfield, using an iFFT to convert from the frequency domain to the spatial domain, animating with time t

      Since the beginning of this journey I have learned about things like the complex plane, the complex exponential equation, and the FFT in more detail, but after the initial steps of creating an initial spectrum (rendering a texture full of Gaussian numbers with mean 0 and sd of 1, filtered by the Phillips spectrum), I am still totally lost.

      My code for creating the initial data is here (GLSL):

      ```glsl
      float PhillipsSpectrum(vec2 k){
          // kLen is the length of the vector from the centre of the tex
          float kLen = length(k);
          float kSq  = kLen * kLen;

          // Amp is the wave amplitude, passed in as a uniform
          float Amp = amplitude;

          // L = velocity * velocity / gravity
          float L = (velocity * velocity) / 9.81;

          float dir = dot(normalize(waveDir), normalize(k));

          return Amp * (dir * dir) * exp(-1.0 / (kSq * L * L)) / (kSq * kSq);
      }

      void main(){
          // get screen pos - centre is 0.0 and ranges from -0.5 to 0.5
          // in both directions
          vec2 screenPos = vec2(gl_FragCoord.x, gl_FragCoord.y) / texSize - vec2(0.5, 0.5);

          // get random Gaussian numbers
          vec2 randomGauss = vec2(rand(screenPos), rand(screenPos.yx));

          // use the Phillips spectrum as a filter depending on position
          // in the frequency domain
          float Phil  = sqrt(PhillipsSpectrum(screenPos));
          float coeff = 1.0 / sqrt(2.0);

          color = vec3(coeff * randomGauss.x * Phil, coeff * randomGauss.y * Phil, 0.0);
      }
      ```

      which creates a texture like this: [attachment=26468:wave.png]

      Now I am totally lost as to how to:

      a) derive spectrums in three directions from the initial texture
      b) animate this according to time t, as mentioned in this paper (https://developer.nvidia.com/sites/default/files/akamai/gamedev/files/sdk/11/OceanCS_Slides.pdf) on slide 5

      I might be completely stupid and overlooking something really obvious - I've looked at a bunch of papers and just get lost in the formulae even after acquainting myself with their meaning. Please help.
  15. I see. So in the case of an overflow or underflow, set the value to the max or min, e.g.

      ```asm
      movzx eax, bl        ; widen the byte - CMOVcc only works on 16/32/64-bit registers
      add   eax, 15
      mov   ecx, 255       ; saturation value in a register (cmov has no immediate form)
      cmp   eax, ecx
      cmova eax, ecx       ; clamp to 255 if the sum exceeded the byte range
      mov   bl, al
      ```