
Chronicles of cyberpunk: Awakening


Hi! In my previous post I talked about my game Chronicles of cyberpunk, and here I will post updates about the second part of this game, Chronicles of cyberpunk: Awakening. It runs on Unreal Engine, and I have been working on it since January 8, 2017 (the first part was released on January 7). This is a standalone game; you don't need to play the first part to understand what is happening, since there is a brief retelling of the events of the first part at the beginning.


Here are some illustrations for the game. I'm now learning how to program in UE4 using C++; my knowledge of C# is useless here. I have also finished 5 locations, but they still need a lot of detail added.
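To show what I mean, here is a minimal, hypothetical UE4 actor in C++ (just an illustration of the style I'm learning, not actual code from the game):

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "DoorActor.generated.h"

UCLASS()
class ADoorActor : public AActor
{
    GENERATED_BODY()

public:
    ADoorActor()
    {
        PrimaryActorTick.bCanEverTick = true; // opt in to per-frame Tick
    }

    // Exposed to the editor, like a public field on a C# script
    UPROPERTY(EditAnywhere, Category = "Door")
    float OpenSpeed = 90.0f;

    virtual void Tick(float DeltaSeconds) override
    {
        Super::Tick(DeltaSeconds);
        // Rotate around the yaw axis a little each frame
        AddActorLocalRotation(FRotator(0.0f, OpenSpeed * DeltaSeconds, 0.0f));
    }
};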

 

  • Similar Content

    • By Alexander Winter
      Jumpaï is a game about creating platformer levels and playing them online with everyone. Will you become the most popular level maker, or will you be a speedrunner holding world records on everyone's levels? More into casual play? No problem! You can happily play through the giant level database or chill at people's hubs. Meet new people, make new friends, and learn to master the game by asking pros or asking people for their favorite level-making tricks. Download here: https://jumpai.itch.io/jumpai Discord: https://discord.gg/dwRTNCG Trailer: (The following screenshots are older but still fairly representative)





      Unlike other games of its genre, Jumpaï is about playing levels with everyone in real time. It's fun to see how other people are playing, and you get to realize you are not the only one failing that jump!

      The game is currently in development and there is still lots to do. I am looking for people willing to help however they can. Developer? Graphic artist? Playtester? Sound designer? Game designer? I welcome any talent. The project is so big that I have work to do in all areas: server backend, UI/UX, game networking, gameplay, and even the website some day. As you can see from the default buttons, the game has been made with LibGDX. This project is a perfect opportunity to get better in various fields as well as to show off your skills.

      If you plan to take an important role in the development of the game, we will discuss how you will get paid once the game generates money. Note that I'm not working on the game full-time; I'm studying full-time, and working on the game is a hobby. The project started in November 2016 and has been progressing steadily.

      So, are you interested? If so, join me on my Discord https://discord.gg/dwRTNCG and I'll answer all your questions.

      Additional screenshots:
       



       
    • By Shtabbbe
      I've had a game idea for a while, and I wanted to finally try to create it.
      It's a 2D open-world tile-based MMO. The concept is that there is one world, multiplayer only, so everyone shares the same world regardless of region, platform, etc.
      I am having trouble deciding what to use to start development. I tried Unity but was put off by some of its negatives, and now I'm stuck. Could anyone recommend some intermediate-friendly 2D engines that can support what I am looking for? Preferably in languages that are, or are similar to, Java, C#, Python, JavaScript, or Lua.
      Thanks for your help; I'm very new at this, if you can't tell.
    • By MoreLion
      Hey all! We are a team of 7 looking for a game designer. I'm a game designer myself, but I need help because I am doing multiple things at once. The game is being developed in UE4.
      The game is a futuristic action-adventure game where you play as a 21-year-old female who has woken up in a simulation, not knowing who or where she is. As everything unfolds, the simulation gets hacked, leaving Eveline with no choice but to escape before she is killed inside the simulation.
      We are also looking for other members, whether you are an animator, a UE4 game developer, or anything else.
      If interested, email liondude12@gmail.com
    • By chiffre
      Introduction:
      In general my questions pertain to the differences between floating- and fixed-point data. Additionally I would like to understand when it can be advantageous to prefer fixed-point representation over floating-point representation in the context of vertex data and how the hardware deals with the different data-types. I believe I should be able to reduce the amount of data (bytes) necessary per vertex by choosing the most opportune representations for my vertex attributes. Thanks ahead of time if you, the reader, are considering the effort of reading this and helping me.
      I found an old topic that shows this is possible in principle, but I am not sure I understand what the pitfalls of using fixed-point representation are, or whether there are any hardware-based performance advantages/disadvantages.
      (TLDR at bottom)
      The Actual Post:
      To my understanding, HLSL/D3D11 offers not just the traditional floating-point model in half, single, and double precision, but also a fixed-point model in the form of signed/unsigned normalized integers in 8-, 10-, 16-, 24-, and 32-bit variants. Both models offer a finite sequence of "grid points". The obvious difference between the two is that the fixed-point model has constant spacing between values in the normalized range of [0,1] or [-1,1], while the floating-point model allows for smaller "deltas" as you get closer to 0 and larger "deltas" the further you are from 0.
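      To illustrate what I mean by constant versus varying spacing, here is a tiny sketch (my own illustration) comparing the uniform step of a 16-bit UNORM grid with the magnitude-dependent spacing of 32-bit floats:

      #include <cmath>
      #include <cstdint>
      #include <cstdio>

      int main() {
          // UNORM16 lays a uniform grid of 65536 points over [0,1]:
          // the step between neighboring values is always 1/65535.
          float x = 0.75f;
          uint16_t q = static_cast<uint16_t>(std::lround(x * 65535.0f)); // quantize
          float back = q / 65535.0f;                                     // dequantize
          std::printf("unorm16 step: %.10f (constant)\n", 1.0f / 65535.0f);
          std::printf("0.75 -> %u -> %.10f\n", static_cast<unsigned>(q), back);

          // float32 spacing (one ulp) grows with the magnitude of the value:
          std::printf("float spacing near 0.001: %g\n",
                      std::nextafterf(0.001f, 1.0f) - 0.001f);
          std::printf("float spacing near 1.0:   %g\n",
                      std::nextafterf(1.0f, 2.0f) - 1.0f);
          return 0;
      }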
      To add some context, let me define a struct as an example:
      struct VertexData {
          float position[3];  // 3x32 bits
          float texCoord[2];  // 2x32 bits
          float normals[3];   // 3x32 bits
      }; // Total of 32 bytes

      Every vertex gets a position, a coordinate on my texture, and a normal to do some light calculations. In this case we have 8x32 = 256 bits (32 bytes) per vertex. Since the texture coordinates lie in the interval [0,1] and the normal vector components are in the interval [-1,1], it would seem useful to use a normalized representation, as suggested in the topic linked at the top of the post. The texture coordinates might as well be represented in a fixed-point model, because it seems most useful to be able to sample the texture in a uniform manner: the pixels don't get any "denser" as we get closer to 0. In other words, the "delta" does not need to become any smaller as the texture coordinates approach (0,0). A similar argument can be made for the normal vector: a normal vector should be normalized anyway, and we want as many points as possible on the sphere of radius 1 around (0,0,0), while we don't care about precision around the origin. Even with large textures such as 4k by 4k (or the maximum allowed by D3D11, 16k by 16k), we only need as many grid points on one axis as there are pixels on one axis. An unsigned normalized 14-bit integer would be ideal, but because it is both unsupported and impractical, we will stick to an unsigned normalized 16-bit integer. The same type should take care of the normal vector coordinates, and might even be a bit overkill.
      struct VertexData {
          float position[3];     // 3x32 bits
          uint16_t texCoord[2];  // 2x16 bits
          uint16_t normals[3];   // 3x16 bits
      }; // Total of 22 bytes

      This seems like a good start, and we might even be able to take it further, but before we pursue that path, here is my first question: can the GPU even work with the data in this format, or is all I have accomplished minimizing CPU-side RAM usage? Does the GPU have to convert the texture coordinates back to a floating-point model when I hand them over to the sampler in my pixel shader? I have looked up the data types for HLSL and I am not sure I even comprehend how to declare the vertex input type in HLSL. Would the following work?
      struct VertexInputType {
          float3 pos;         // this one is obvious
          unorm half2 tex;    // half corresponds to a 16-bit float, so I assume this is wrong, but it is the only 16-bit type I found on the linked MSDN site
          snorm half3 normal; // same as above
      };

      I assume this is possible somehow, as I have found input element formats such as DXGI_FORMAT_R16G16B16A16_SNORM and DXGI_FORMAT_R16G16B16A16_UNORM (also available with a different number of components, as well as different component lengths). I might have to avoid 3-component vectors because there is no 3-component 16-bit input element format, but that is the least of my worries. The next question would be: what happens to my normals if I try to do lighting calculations with them in such a normalized fixed-point format? Is there no issue as long as I take care not to mix floating- and fixed-point data? Or would that work as well? In general this gives rise to the question: how does the GPU handle fixed-point arithmetic? Is it the same as integer arithmetic, and/or is it faster/slower than floating-point arithmetic?
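      For what it's worth, my current understanding (untested, so treat it as a sketch) is that the conversion happens in the input assembler: the layout is declared with the normalized formats on the C++ side, and the shader simply receives floats. Something like this, with offsets matching the struct above padded to a 4-component normal (24 bytes total):

      #include <d3d11.h>

      static const D3D11_INPUT_ELEMENT_DESC layout[] = {
          // position: three 32-bit floats at byte offset 0
          { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT,    0,  0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
          // texCoord: two 16-bit UNORM values, unpacked to a float2 in [0,1]
          { "TEXCOORD", 0, DXGI_FORMAT_R16G16_UNORM,       0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
          // normal: four 16-bit SNORM values (xyz plus one padding component),
          // unpacked to a float4 in [-1,1] by the input assembler
          { "NORMAL",   0, DXGI_FORMAT_R16G16B16A16_SNORM, 0, 16, D3D11_INPUT_PER_VERTEX_DATA, 0 },
      };

      // The HLSL input struct would then just use float types:
      //   struct VertexInputType {
      //       float3 pos    : POSITION;
      //       float2 tex    : TEXCOORD0;
      //       float4 normal : NORMAL;
      //   };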
      Assuming we still have a valid and useful VertexData format, how far could I take this while remaining on the sensible side of what could be called optimization? Theoretically I could use an input element format such as DXGI_FORMAT_R10G10B10A2_UNORM to pack my normal coordinates into a 10-bit fixed-point format, and my vertices (in object space) might even be representable in a 16-bit unsigned normalized fixed-point format. That way I could end up with something like the following struct:
      struct VertexData {
          uint16_t pos[3];        // 3x16 bits
          uint16_t texCoord[2];   // 2x16 bits
          uint32_t packedNormals; // 10+10+10+2 bits
      }; // Total of 14 bytes

      Could I use a vertex structure like this without too much performance loss on the GPU side? If the GPU has to execute some sort of unpacking algorithm in the background, I might as well let it. In the end I have a functioning deferred renderer, but I would like to reduce the memory footprint of the huge number of vertices involved in rendering my landscape.
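      For concreteness, this is how I imagine the CPU-side packing for the 10:10:10:2 normal would look (a sketch; it assumes each component is remapped from [-1,1] to [0,1] before quantizing, with the shader undoing the remap via n * 2 - 1):

      #include <algorithm>
      #include <cstdint>

      // Pack a unit normal into bits compatible with DXGI_FORMAT_R10G10B10A2_UNORM.
      uint32_t PackNormal1010102(float nx, float ny, float nz) {
          auto pack10 = [](float v) -> uint32_t {
              float u = std::clamp(v * 0.5f + 0.5f, 0.0f, 1.0f); // [-1,1] -> [0,1]
              return static_cast<uint32_t>(u * 1023.0f + 0.5f);  // 10-bit UNORM
          };
          return pack10(nx) | (pack10(ny) << 10) | (pack10(nz) << 20); // top 2 bits unused
      }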
      TLDR: I have a lot of vertices that I need to render, and I want to reduce RAM usage without introducing crazy compression/decompression algorithms on the CPU or GPU. I am hoping to find a solution involving fixed-point data types, but I am not exactly sure how that would work.
    • By Paszq
      Group photo of some of the characters and creatures currently living in Arpago