
All Activity


  1. Past hour
  2. frob

    Why are enums broken

    All this reminds me of the old problem: naming things is hard. Enumeration means something. For historical reasons programmers use enums to name the bit patterns, but generally flags are not an enumeration. An enumeration lists all the things one by one; every name must be present in the enumeration. We don't enumerate FlagsAB, FlagsBC, FlagsABC, FlagsAD, FlagsABD, FlagsACD, FlagsABCD, and so on, but that is what an enumeration would do. The grouping was done with enumerated types because they were silently converted to integers. You can implement flags with type safety using wrappers, using template magic, using a class like a bitset, or similar. But when you do that, you should stop calling them enumerations. At that point they are bit flags, not an enumeration. If that's what you want, implement a flag class instead.
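A flag class of the kind described can be sketched in a few lines of C++. This is a minimal illustration, not a library recommendation; `Permission`, `Flags`, and the member names are made up for the example:

```cpp
#include <cstdint>

// The enum still names the individual bits, but combinations live in a
// distinct Flags type instead of being smuggled through an "enumeration"
// that isn't one.
enum class Permission : std::uint32_t { Read = 1, Write = 2, Execute = 4 };

class Flags {
public:
    constexpr Flags() = default;
    constexpr Flags(Permission p) : bits_(static_cast<std::uint32_t>(p)) {}

    constexpr Flags operator|(Flags o) const { return Flags(bits_ | o.bits_); }
    constexpr bool  has(Permission p) const {
        return (bits_ & static_cast<std::uint32_t>(p)) != 0;
    }

private:
    constexpr explicit Flags(std::uint32_t raw) : bits_(raw) {}
    std::uint32_t bits_ = 0;
};
```

With this shape, `Flags f = Flags(Permission::Read) | Permission::Write;` is type-safe, while accidentally mixing in a raw integer or an unrelated enum no longer compiles silently.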
  3. In my implementation I write the deltas to a bitstream. When writing a property (except bools, which are 1 bit) I either write a 1 followed by the delta, or a 0 indicating that there's no change. For arrays, I either write an array of bits (1 bit for each array element) followed by just the elements that have changed, OR write an array containing the indices of the changed elements. These two strategies are selected depending on which would be smaller; the array is preceded by a single bit indicating which strategy is in use. When writing a delta for a property, you can either write out just the new value, or you can write "oldValue XOR newValue". In the first case, the client simply copies the values from the delta packets over its own values; in the second case, the client XORs the delta packet's values with its own to recover the new values. This second method is much more complex / slow / fragile, but if you apply generic compression to your packets before sending them, this XOR process is likely to produce long strings of zeros, which compress quite well. Doom 3 didn't use a general-purpose compressor like zlib, but instead wrote a simple RLE compressor that only looked for repeated 0s in the bitstream and replaced them with a single zero followed by a 3-bit repetition count.
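The XOR-delta idea is easy to show at byte granularity (a real implementation like the one described works at the bit level; the function names here are made up for illustration). Unchanged bytes XOR to zero, which is exactly what a zero-hunting RLE pass or a general-purpose compressor then squeezes out:

```cpp
#include <cstdint>
#include <vector>

// XOR each new byte against the old one: unchanged bytes become 0.
// Assumes both snapshots have the same size.
std::vector<std::uint8_t> xorDelta(const std::vector<std::uint8_t>& oldState,
                                   const std::vector<std::uint8_t>& newState) {
    std::vector<std::uint8_t> delta(newState.size());
    for (std::size_t i = 0; i < newState.size(); ++i)
        delta[i] = oldState[i] ^ newState[i];
    return delta;
}

// The receiver recovers the new state by XORing the delta back in;
// XOR is its own inverse, so this is literally the same operation.
std::vector<std::uint8_t> applyDelta(const std::vector<std::uint8_t>& oldState,
                                     const std::vector<std::uint8_t>& delta) {
    return xorDelta(oldState, delta);
}
```

If only one field of a large snapshot changed, the delta is almost entirely zeros, so runs of zeros dominate and compress to nearly nothing.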
  4. Rutin

    Learning How to be Better

    There is nothing wrong with making flashy projects for your portfolio, but when you're applying for a programming job the person reviewing your resume and portfolio should be reviewing your code, not your visuals; visuals aren't what matters for a programmer, because you're not applying as an artist. EDIT: I should also add, you're not a "game designer" either, so keep that in mind. What is important is your ability to code.
  5. MintyLyton

    Learning How to be Better

    I was told to make my games visually appealing in my game programming program at college. That idea might have just stuck with me for a while. What I wanted to do was create a list of YouTube tutorials for content that might seem appealing to people. One of the ideas I initially had was a complex inventory system that could use custom items and filter, organize, and delete items based on identifier tags.
  6. How do you do this? Does that mean you build a large system of equations, e.g. for all bodies that form an island, and solve for the exact solution? If so, is this used in games, or is it common to solve contacts independently of each other simply by accumulation and iteration (the 'naive' approach, which worked well enough for me and did not cause jitter, but was not perfectly stiff, of course)? You mean Inverse Dynamics? An IK solver would only be of use to control the target position/velocity/acceleration of the joints. My question is how to calculate the joint torques / forces necessary to get there, e.g. for a robot consisting of many bodies. The problem is pretty similar to the calculation of resting contact forces, but contacts act only in one direction, so I assume the two require / allow different approaches to solve them. I would be happy to understand just one.
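For context, the "naive" accumulate-and-iterate approach mentioned in the question (sequential impulses, essentially projected Gauss-Seidel) can be reduced to a toy one-dimensional sketch. All names are made up for illustration; the key detail is that the *accumulated* impulse is clamped to stay non-negative, not the per-iteration correction:

```cpp
#include <algorithm>
#include <vector>

struct Contact1D {
    double invMass;                   // effective inverse mass along the normal
    double targetVelocity;            // desired post-solve normal velocity
    double accumulatedImpulse = 0.0;  // running total, clamped >= 0
};

// One shared velocity keeps the example tiny. Each pass solves each
// contact exactly for the impulse that reaches its target velocity,
// then clamps the accumulated impulse so contacts only push, never pull.
double solveContacts(double velocity, std::vector<Contact1D>& contacts,
                     int iterations) {
    for (int it = 0; it < iterations; ++it) {
        for (auto& c : contacts) {
            double lambda   = (c.targetVelocity - velocity) / c.invMass;
            double newAccum = std::max(0.0, c.accumulatedImpulse + lambda);
            double applied  = newAccum - c.accumulatedImpulse;
            c.accumulatedImpulse = newAccum;
            velocity += c.invMass * applied;
        }
    }
    return velocity;
}
```

A body approaching the ground (negative velocity) gets pushed to rest; a separating body (positive velocity) is left untouched because the clamp keeps the impulse at zero. The exact-solution alternative the question asks about would instead assemble all island contacts into one LCP and solve it directly.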
  7. Today
  8. Fulcrum.013

    Navigation Meshes and Pathfinding

    Better to add a Minkowski sum. As a result of the stripe algorithm you have a sequence of corners the character has to pass near. Add to each corner a circle with a radius large enough to clear the wall; usually it is the radius of the collision capsule for a human-like character. Then connect the circles with tangent lines. As long as the portals are wide enough to pass through, the character will never walk through the walls. Also, if all NPCs have the same safe radius, the Minkowski sum can be accounted for at navmesh build time: just shift the wall projections inward on the nav planes by the NPC safe radius along the wall normal.
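The per-corner shift described above is plain 2D vector math. A minimal sketch, with hypothetical names (a full implementation would then connect the offset circles with tangent lines, as the post says):

```cpp
#include <cmath>

struct Vec2 { double x, y; };

// Shift a path corner away from the wall by the agent's safe radius,
// along the wall normal -- the same shift the post suggests baking into
// the navmesh at build time when all NPCs share one radius.
Vec2 offsetCorner(Vec2 corner, Vec2 wallNormal, double safeRadius) {
    // Normalize the wall normal so the shift distance is exactly safeRadius.
    double len = std::sqrt(wallNormal.x * wallNormal.x +
                           wallNormal.y * wallNormal.y);
    return { corner.x + wallNormal.x / len * safeRadius,
             corner.y + wallNormal.y / len * safeRadius };
}
```

For a human-like character, `safeRadius` would be the radius of the collision capsule.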
  9. d000hg

    June 2018

    I liked your acorn intro. Interesting to hear the other side of it. For these big conferences it probably should have been a big pair of boobs you had to grab, to get the attention of passers-by! Or perhaps a cow you had to milk (same thing really)
  10. So, is there a reference for what range my lights/values should be in? I also tried setting the sun value to 1000 and the ambient light to 1 while applying the Uncharted tonemapper. I think I'm getting something fundamentally wrong here. Code again:

      void main() {
          vec3 color = texture2D(texture_diffuse, vTexcoord).rgb; // floating point values from 0 - 1000
          // tonemap
          color = Uncharted2Tonemap(color);
          color = gammaCorrection(color);
          outputF = vec4(color, 1.0f);
      }

      Tonemappers map the HDR range to LDR. But what I don't quite get is how this can work properly if they don't "know" the range of your maximum-brightness RGB value in the first place (they only take the RGB values of the specific pixel in your FP buffer as input). The range has to be important if you want to realize (for example) an S-curve in your tonemapper (like in the ACES filmic tonemapper). And that can only happen if you A) pass the range into the tonemapper (if you have an arbitrary range of brightness) or B) the tonemapping algorithm assumes that your values are in a specific "correct" range in the first place.
  11. Rutin

    Learning How to be Better

    In my 18+ years I don't recall taking many notes. I'm more about "doing", not reading and writing notes. I like to create projects as soon as I learn a new concept. In all the fields I'm involved in I rarely write notes, but I retain information like a sponge though... Depending on how you learn, I would suggest that you find something you're passionate about and use that as a way to justify creating projects. For example, if I'm very interested in space, I would tie in my learning by creating space-based programs and games. I wouldn't spend that time making boring bookkeeping examples like you see in many of the older programming books, etc... Unless you're into that sort of thing. If you're struggling to create content you have two options. Find a game or program you really like, and make a clone of it. Your other option would be to check out the hobbyist section on GameDev and join a project. When you're creating a portfolio as a "programmer" it's not meant to be a visual showcase. Your objective is to show how you code, put projects together, and solve problems. Showing that you made a fireball shoot out of a hand visually means nothing, but showing the code behind it does.
  12. Well then I'm having to lie which I dislike. I think it should just be optional.
  13. You don't; just pick some random value.
  14. Members are asked to check their profiles if the profile fields change. Not all of them are required, but the ones that are will trigger a profile review. As for why, industry role helps us understand what content, pages, areas, jobs (with https://gamedev.jobs) and topics people may be interested in seeing. In the end it's about providing a better service.
  15. It's just I'm sure I've been asked this only recently when you rolled out the new forum, maybe more than once. Why do I have to tell you?
  16. Fulcrum.013

    Would unique particles slow down a physics simulation?

    I guess they mark border particles and test collisions only for those subsets. Also, as shown at 2:48, they use some kind of collision prediction. It is a very robust approach: something like putting a moving body's distance-measurement subsystem, which replaces collision detection, to rest until it gets closer to another body. As a result it detects impacts with high accuracy, instead of detecting interpenetrations. This technique was described by Brian Mirtich in his PhD dissertation in the mid-90s, but I have never seen it implemented in game engines before. http://citeseerx.ist.psu.edu/viewdoc/download?doi=
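The core of that Mirtich-style prediction, advancing a body only as far as the current separation guarantees is safe, can be sketched for two spheres. Names are hypothetical; the real algorithm (conservative advancement) iterates this bound toward a time of impact instead of taking a single step:

```cpp
#include <algorithm>

// The gap between two spheres can close no faster than their closing
// speed, so stepping forward by gap / closingSpeed cannot tunnel
// through the impact -- the bodies touch at the surface, they never
// interpenetrate.
double safeTimeStep(double centerDistance, double r1, double r2,
                    double closingSpeed, double maxStep) {
    double gap = centerDistance - r1 - r2;
    if (closingSpeed <= 0.0) return maxStep;  // separating: full step is safe
    return std::min(maxStep, gap / closingSpeed);
}
```

Repeating this until the gap is below a tolerance yields the impact time with high accuracy, which matches the video's behavior of detecting impacts rather than resolving interpenetrations after the fact.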
  17. Anthony Serrano

    HDR tonemapping/exposure values

    A major reason why tonemapping isn't working for you is you're not actually feeding it an image with a high dynamic range. You're using lighting values that are more appropriate to older, non-HDR lighting models. Which is to say, your light:ambient ratio is way too low. Your light source is barely more than twice as bright as the ambient lighting, whereas in the real world (and thus, the lighting conditions HDR is attempting to simulate), that ratio is easily 100:1 or higher, sometimes even in excess of 1000:1. Remember, the entire point of tonemapping is to take an image with a high dynamic range and map that range such that it's displayable on a device with a low dynamic range without the bright parts clipping to pure white or the dark parts clipping to pure black. If you tonemap an image that already has a normal dynamic range, the result is just going to be washed out.
  18. MintyLyton

    Learning How to be Better

    Thanks for the insight. Most of the time I spend making notes is with the intention of retaining information or looking back at them if I need them in the future. I just don't want to mistake that for spending too much time on note taking rather than learning. I think my new question now is where to prioritize learning as a programmer while still looking for a job. As it is now, I'm looking for hobby projects to work on, since I like creating content to learn / practice skills. Also, I'm bad at thinking of content to make to showcase in my portfolio.
  19. I'm looking for any team / people that need a programmer for their project. I'm looking to expand my portfolio which you can see Here. I'm more experienced with Unity but I can spend the time to learn new Engines if that's your preference. I have worked on Unreal Engine 4 before but I might take some time to re-learn it, if the project requires it. Feel free to DM here or use the contact info on my website.
  20. UPDATE: The solution boils down to: DO NOT USE ID2D1Factory::CreateHwndRenderTarget. My D2D initialization is now fairly more complicated; it's the one in the code below. Is it me, or is initializing D2D a goddamned mess?! You create a D3D device, use it to query the DXGI device, use that to create a D2D device, use that to create a D2D context, go back to the DXGI device and use that to get the adapter, use that to get the factory, use that to create the swap chain, use that to access the back buffer, go back to the D2D context and use it to bind your bitmap to the back buffer, and finally set your bitmap as the render target of the context. This is a cursed maze created by an insane person at Microsoft.

      //Init Direct3D
      HRESULT hr;
      ID3D11Device* d3d_device;
      ID3D11DeviceContext* d3d_context;
      UINT creationFlags = D3D11_CREATE_DEVICE_BGRA_SUPPORT;
      D3D_FEATURE_LEVEL featureLevels[] = {
          D3D_FEATURE_LEVEL_11_1, D3D_FEATURE_LEVEL_11_0,
          D3D_FEATURE_LEVEL_10_1, D3D_FEATURE_LEVEL_10_0,
          D3D_FEATURE_LEVEL_9_3, D3D_FEATURE_LEVEL_9_2, D3D_FEATURE_LEVEL_9_1
      };
      hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, 0, creationFlags,
                             featureLevels, _countof(featureLevels),
                             D3D11_SDK_VERSION, &d3d_device, NULL, &d3d_context);
      if (FAILED(hr)) { MessageBox(NULL, "Failed to initialize D3D", "Error", MB_OK); return false; }

      //Init Direct2D
      IDXGIDevice* dxgiDevice;
      d3d_device->QueryInterface(&dxgiDevice);
      D2D1_CREATION_PROPERTIES D2D1DeviceDesc{};
      D2D1DeviceDesc.threadingMode = D2D1_THREADING_MODE_SINGLE_THREADED;
      D2D1DeviceDesc.debugLevel = D2D1_DEBUG_LEVEL_INFORMATION;
      hr = D2D1CreateDevice(dxgiDevice, D2D1DeviceDesc, &m_device);
      if (FAILED(hr)) { MessageBox(NULL, "Failed to create D2D1 Device", "Error", MB_OK); return false; }
      hr = m_device->CreateDeviceContext(D2D1_DEVICE_CONTEXT_OPTIONS_NONE, &m_context);
      if (FAILED(hr)) { MessageBox(NULL, "Failed to create D2D1 Context", "Error", MB_OK); return false; }

      //Create Swap Chain
      IDXGIAdapter* dxgiAdapter = nullptr;
      hr = dxgiDevice->GetAdapter(&dxgiAdapter);
      if (FAILED(hr)) { MessageBox(NULL, "Failed to get Adapter", "Error", MB_OK); return false; }
      IDXGIFactory2* dxgiFactory = nullptr;
      hr = dxgiAdapter->GetParent(IID_PPV_ARGS(&dxgiFactory));
      if (FAILED(hr)) { MessageBox(NULL, "Failed to get dxgiFactory", "Error", MB_OK); return false; }
      //swap chain desc
      DXGI_SWAP_CHAIN_DESC1 swapChainDesc{};
      swapChainDesc.Width = m_winWidth;
      swapChainDesc.Height = m_winHeight;
      swapChainDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
      swapChainDesc.Scaling = DXGI_SCALING_NONE;
      swapChainDesc.SampleDesc.Count = 1;
      swapChainDesc.SampleDesc.Quality = 0;
      swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
      swapChainDesc.AlphaMode = DXGI_ALPHA_MODE_IGNORE;
      swapChainDesc.BufferCount = 2;
      swapChainDesc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL;
      hr = dxgiFactory->CreateSwapChainForHwnd(d3d_device, m_hwnd, &swapChainDesc, NULL, NULL, &m_swapChain);
      if (FAILED(hr)) { MessageBox(NULL, "Failed to create Swap Chain", "Error", MB_OK); return false; }

      //Create Back Buffer
      D2D1_BITMAP_PROPERTIES1 backBufferDesc{};
      backBufferDesc.pixelFormat.format = DXGI_FORMAT_B8G8R8A8_UNORM;
      backBufferDesc.pixelFormat.alphaMode = D2D1_ALPHA_MODE_IGNORE;
      backBufferDesc.dpiX = dpi;
      backBufferDesc.dpiY = dpi;
      backBufferDesc.bitmapOptions = D2D1_BITMAP_OPTIONS_TARGET | D2D1_BITMAP_OPTIONS_CANNOT_DRAW;
      IDXGISurface* dxgiBackBuffer;
      m_swapChain->GetBuffer(0, IID_PPV_ARGS(&dxgiBackBuffer));
      hr = m_context->CreateBitmapFromDxgiSurface(dxgiBackBuffer, &backBufferDesc, &m_backBuffer);
      if (FAILED(hr)) { MessageBox(NULL, "Failed to create back buffer", "Error", MB_OK); return false; }
      m_context->SetTarget(m_backBuffer);
  21. Fulcrum.013

    Would unique particles slow down a physics simulation?

    Really, this approach was established by Sir Isaac Newton, and there can be no other approach. Any issues depend on how precisely Newton's approach is calculated. Also, impact forces cannot be calculated at all, so dynamics simulation usually works with impulses instead of forces. Any issues with resting contact come from simulating it by the rules of mechanical dynamics, while it actually works by the rules of mechanical statics. Did you mean an Inverse Kinematics solver?
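The impulse-based response mentioned above follows a textbook formula (this sketch is not from the thread; the names are made up). For a frictionless collision along the contact normal, the impulse magnitude depends on the restitution coefficient and the relative normal velocity:

```cpp
// Impulse magnitude for a head-on collision along the contact normal:
//   j = -(1 + e) * vRel / (1/m1 + 1/m2)
// where vRel < 0 means the bodies are approaching and e is the
// coefficient of restitution (0 = perfectly plastic, 1 = elastic).
// Each body's velocity then changes by +/- j divided by its mass,
// so no force ever needs to be computed for the impact itself.
double collisionImpulse(double vRel, double e, double m1, double m2) {
    return -(1.0 + e) * vRel / (1.0 / m1 + 1.0 / m2);
}
```

This is why engines work with impulses: the instantaneous force of an impact is unbounded, but the momentum transfer is finite and well defined.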
  22. Hi, I'm a Multimedia Engineering student. I am about to finish my degree and I'm already thinking about what topic to cover in my final college project. I'm interested in procedural animation of creatures with C++ and OpenGL, something like a spider for example. Can someone tell me what issues I should investigate to carry it out? I understand that it has some dependence on artificial intelligence, but I don't know to what extent. Can someone help me find information about it? Thank you very much. Examples: - Procedural multi-legged walking animation - Procedural Locomotion of Multi-Legged Characters in Dynamic Environments
  23. slayemin

    June 2018

    I suppose I should recap my trip to Las Vegas. I got to attend the Dell World Expo at The Venetian, and one of the stations for Dell was dedicated to showing off the application I built for them. Here are a few pictures I took: This is the booth setup before the show begins. There are two podium stations, a set of wireless headphones, and a Leap Motion device attached to a laptop via USB cable. The convention is in full swing and people are coming up and interacting with my application. It's driven entirely with hand gestures. No mouse, keyboard, game pad, etc. This is the first application of its kind in the world. Nobody else has used hand gestures to control and interact with 360 video before. Usually there were healthy crowds of people watching other people try it out. I had three different interactive scenarios people could play through. The game portion was a fun way to learn about Dell's philanthropic programs. I had some people literally go through every single video because engagement was so high. Who wants to watch 15 minutes of corporate feel-good video? These guys do! Just in case, I brought my laptop with me and had it set up to create new builds if I needed to. There were some small bugs and changes I wanted to fix, so I started to create another build. I copied all of the files over... and then disaster happened. I don't know how it happened, but somehow the source code folder was completely empty. I still don't know how that could happen, because I just copy/pasted the root project folder and the source code folder was a subfolder. All of the other subfolders had all of their files copied successfully, so it shall remain a mystery. So, if there was a critical bug, we would either need to just deal with it or I would have to catch an emergency flight back to Seattle. There were bugs, but fortunately they were minor enough that we could brush over them. I had to train the Dell employees how to run the application.
    Fortunately, I had already anticipated this need and tried to simplify the application management to be as easy as possible. Basically, you could jump between scenes with the number buttons, and the first button just resets the whole app. Unfortunately, there were a few small lighting artifacts which popped up on a reset level, so I had to train them how to quit and restart the app. It only took a few seconds, but it did mean that a booth attendant had to always be on hand and paying attention. For most of the event, I stood nearby and just watched people using my app and took notes. Where were the pain points? What assumptions were people making about the interface? How long was user interest being held? What was holding their interest? What mistakes did I make? How can I fix them in the next update? What am I missing? One thing that I realized is that our introduction screen is terrible at attracting attention. I initially wanted to use an interface which trained the user on how to use the application, so to do that, I kept it bare minimalist so that people could focus on only one thing: learning how to grab things. The interface started with a black background, a floating acorn, and a bit of white instructional text. In terms of focusing and training, it doesn't get any simpler or clearer. As far as capturing the attention of passers-by, it was TERRIBLE. So, if you're walking by and all you see is the intro screen waiting for users to engage, you have no idea what the application does or is about, so you'll just keep walking. This means that if you're the booth attendant, you have to be actively engaging with people walking by and trying to hook them. My opening line is always, "Hey, you want to see something amazing?! Come check this out!". This engagement stuff is always a really good skill to have if you're ever giving demos of your game at events like E3, PAX, meetups, game jams, etc.
    After the expo was over, I felt drained and lost a majority of my interest in the application. I don't really know why, but I just got really bored with it. A month later, I'm still bored. My attitude feels like, "Yeah, that was pretty cool and it was hard to pull off, but it's been done now." I was hoping that I would get lots of contacts and leads for more work, but that didn't really happen. Maybe it's my fault. Maybe I needed to be more outgoing and aggressive about getting to know people. Or maybe it wouldn't have mattered one way or another? During the event, we were put in touch with a team at VMware, a subsidiary owned by Dell. They were looking for a vendor who would build them a virtual reality application, and we were the only ones they could find. So, we had a phone call meeting to get an understanding of what they're trying to build. Basically, it was a group of marketing people who had just seen Ready Player One and wanted to build a VR experience for their customer experience team. Great! This sounds like a big project and a good opportunity! I started digging into their requirements. They... really didn't know what they wanted or could do. They said that they want every person in the company to be able to use their VR app, and they have 25,000 people. I asked them if they were going to buy 25,000 VR headsets. They didn't realize they needed to buy a headset. ...Okay... They decided that maybe they didn't want to do a VR app. What about a 3D game app instead? "Sure! I can definitely build one! What do your client workstations look like in terms of hardware specs?" "We run thin clients throughout the whole enterprise." *long silence on my end* "uh... that's not good." So, I asked them to try running a 3D game on their server and playing it on their thin clients. The big, obvious problem is that all of the 3D GPU processing will happen server side, and the amount of GPU processing is going to be a function of the number of connected clients.
    So, can their server GPU handle a high rendering load? I'm still waiting to find out... a month later. I found that they're trying to create a multiplayer app... in VR... with voice over IP... with a content management system backend... supporting up to 25,000 users... on thin clients... in three months! WTF?! Okay, I know I have the capability to build an enterprise-level multiplayer CMS app. It's not going to be easy, but I could pull it off. But probably not alone in three months. I'd have to hire people to help. The problem is, it's going to get very expensive, very quickly. And if I put on my hat of pragmatism +5, I have to ask, "Why not just build an enterprise web app?" "Because we want to do something cool and different." That's a valid reason, especially for marketing folks who need to differentiate themselves from other marketing folks. Anyways, I submitted a ridiculous budget proposal last week. I think this project is going to fail before it starts because it's just not technically feasible, but I will probably just have to end up turning down the project if I don't get fully funded by the end of June. I just can't pull this off in less than three months... a corporate MMORPG in VR. In my mind, I'm already expecting it to fall through, so I'm not getting my hopes up or counting on it to happen. They have to be moving a lot faster than they're moving right now if they want this to get built. In other news, I've been in a bit of a professional rut lately. I need to make money. Money is a resource which enables me to do things, and the lack of money is seriously holding me back. For example, I want to create a 3D VR travel application. I've created a working MVP, so now all I have to do is go out and shoot some footage with a camera. I borrowed a 360 camera, but it sucked so bad that all the footage I shot was unusable. I've been looking hungrily at the Insta360 Pro camera. It's got everything I want and need to make my app. 8K 360 video in stereo.
    Automatic stitching. Image stabilization. Good battery life. Etc. But it costs $3,500, which I don't have. I asked my local community if anyone had one I could borrow, but no replies. So, this project is on hold until I can get enough funds to purchase equipment. *Sigh* My girlfriend has been getting on my case about not making enough money as well. It's really hard on her because I don't contribute enough financially. All of my money-making schemes tend to be long term (6+ months out). And when I get clients, I tend to vastly undercharge for my services. For example, for the Leap Motion app I just made for Dell, I charged at an hourly rate of $75/hour and grossed about $6,500. I should have at least added another zero to that. My girlfriend tells me I am a stubborn fool who won't listen, and I'll always be poor and broke unless I raise my rates. She's entirely right. She said I should 100% stop doing engineering work for a month and instead focus on sales and marketing. Full-time sales and marketing. That's scary. I absolutely hate phone calls, and doing cold calls has zero appeal to me. But my girlfriend is right. Nobody knows who I am or what I do, so how are they supposed to find me and hire me? Nothing is going to fall into my lap just by existing. I need to build a pretty website which highlights my work and abilities. Then I need to promote that website. So, for the next month, I need to focus on self-promotion, sales, and marketing. It's SO tempting to do engineering stuff though. Yesterday I spent a few hours researching machine learning using reinforcement learning. It's really enticing, but it would take a LOT of engineering talent and time to pull off. And I want to try, and I could probably do cool stuff, but it won't help pay for tomorrow's bills. So I kind of need to shelve that desire as well. Harsh.
  24. You are correct, it is initial map download data. I will look into HTTP. Thank you.
  25. drcrack

    Zeal — Online PvP RPG

    https://gfycat.com/AccomplishedDapperHoiho Just a small teaser for our upcoming Warrior Gameplay video
  26. Sounds like a big problem. 1:100 is huge. This means potentially high divergence on the GPU: larger bodies need to check MANY more nodes/cells for collision detection. Same for calculating the response with many tiny bodies. But this can likely be solved by binning bodies by size and running one compute shader dispatch per bin. This way divergence can be reduced a lot, and async compute can run all dispatches in parallel. Sounds good, but in practice it may still be an issue, I guess. The larger problem is probably robust simulation. Large size ratios mean large mass ratios too. Or can you use similar mass for small and big bodies? (I asked how to handle large mass ratios myself in my previous post...) Also, large bodies having many contacts with tiny bodies will likely cause heavy jitter. You'll have to cheat: use large sleep thresholds, things like shock propagation, etc. This solves some problems but also introduces new ones. Another option would be to store and simulate physical properties in the grid cells instead of individual bodies, e.g. average velocity / density per cell, similar to how grid-based fluid simulation works. Then you drive the bodies by the resulting grid velocity vector field. I don't know if PhysX particles already do this, but if you can afford some inaccuracy, this might be the way to go. (And why don't you use PhysX? It's NV exclusive, but you could check out what's possible there.) Can't comment on neural networks, but here's some research I've found: https://homes.cs.washington.edu/~barun/files/icra17_se3nets.pdf If you can, you should tell us more. What kinds of geometry do you want to simulate? Breaking buildings the player can interact with? Or just some debris flying around after explosions, but not affecting anything? It makes a big difference.
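The binning idea from the first part of the post is cheap to sketch on the CPU side (hypothetical names; the dispatch-per-bin part would live in the GPU driver code). Bucketing by log2 of the radius means each bin spans a 2x size range, so every thread in a dispatch does comparable work:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Group body indices by log2(radius / minRadius) so each bin covers a
// 2x size range; one compute dispatch per bin then sees bodies of
// similar size, keeping per-thread cell checks roughly uniform and
// reducing divergence.
std::vector<std::vector<std::size_t>>
binBySize(const std::vector<double>& radii, double minRadius, int binCount) {
    std::vector<std::vector<std::size_t>> bins(binCount);
    for (std::size_t i = 0; i < radii.size(); ++i) {
        int b = static_cast<int>(std::log2(radii[i] / minRadius));
        b = std::max(0, std::min(binCount - 1, b));  // clamp outliers
        bins[b].push_back(i);
    }
    return bins;
}
```

With a 1:100 size ratio this yields about seven occupied bins, and since the dispatches are independent they can overlap on async compute queues, as suggested above.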
