
Showing results for tags '3D'.


Found 970 results

  1. Hi, guys. I am developing a path-tracing baking renderer, based on OpenGL and OpenRL. It can bake scenes such as the following, and I am glad it can bake color bleeding from diffuse surfaces. : ) I store the irradiance directly, like this: albedo * diffuse color has already been folded into the irradiance calculation when baking, both direct and indirect together. After baking, I use the light map directly in the OpenGL fragment shader. I think I have got something wrong, because most game engines don't do it like this. Which kind of data should I store in the light maps? I need diffuse only. Thanks in advance!
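A common convention (not from the post above, just the usual engine practice) is to bake only the incoming irradiance into the light map and multiply by albedo in the fragment shader, rather than baking albedo * irradiance together. A minimal sketch of the difference, with hypothetical names:

```python
# Hypothetical sketch: comparing "albedo baked into the map" (what the post
# describes) against storing pure irradiance and applying albedo at runtime.

def shade_baked_in(albedo, irradiance):
    # Albedo was already folded in at bake time,
    # so the shader uses the lightmap texel directly.
    lightmap_texel = albedo * irradiance   # stored in the map
    return lightmap_texel

def shade_runtime_albedo(albedo, irradiance):
    # Common engine practice: store pure irradiance, apply albedo per-pixel.
    lightmap_texel = irradiance            # stored in the map
    return albedo * lightmap_texel

# Both give the same diffuse result for a constant albedo...
assert shade_baked_in(0.5, 2.0) == shade_runtime_albedo(0.5, 2.0)
# ...but storing irradiance keeps high-frequency albedo detail out of the
# (low-resolution) lightmap, and lets you swap albedo textures without rebaking.
```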
  2. Brandon Sharp

    Machbot 2.0 VS Sweetbot

    This is a project I've been working on for a while now. I'd say, overall, it's been going on around a year. I made the Machbot 2.0 entirely from scratch, including the textures. I spent countless hours trying to figure out how to get the models from Twisted Metal, and I finally figured out how to manually extract the mesh. The only problem was that it was not UV mapped, so I pretty much had to go back in and remap everything. That wasn't hard, but assembling the model itself was a challenge. I did the best I could at placing things where they go; I'm sure there are details that are incorrect. All in all, it was for this one render. I'm not sure if my models can be used as game assets, but I do want to eventually make this into a fighting game, with both vehicles and bots. Let me know what you think, and thank you for checking out my work.
  4. jb-dev

    Let's go to the bank

    From the album: Vaporwave Roguelite

    This is a picture of a yet-to-be-completed bank. The idea is to generate simple geometries to fill the space between the floor and the ceiling. It's done that way because I can set the wall height of the generated level on the fly, so I need something generic and flexible.
  5. I know this is a noob question, but between OpenGL 2.0 and OpenGL ES 2.0, which one performs better on desktop and/or mobile devices? I have read somewhere that OpenGL performance depends on the code, but in some games we can compare the performance of different OpenGL versions, so I don't know. Which of the two uses less CPU and GPU / performs better? Thanks
  6. A 3D mobile (Android) game. Sci-fi setting. Puzzle-platformer. Please, could somebody give some sort of expert assessment of the visual aspect of the game? Just give me your opinion. Thanks.))
  7. Hi, I am Antony Wells. I have been coding in general for 20+ years, and using Unity3D since v2.5. I am looking for team members in the following fields:
    - 3D Artists - skills including modelling, texturing and material creation
    - 2D Artists - skills including 2D textures, concept art and user interface elements
    - Music
    - Sound FX
    - Coders - although I'm a good coder, I think 2 or 3 coders would greatly speed up the project.
    - Website Designer - to develop and maintain our website (I will provide the URL/ISP)
    At the moment the game is looking like being an arena-style shooter, for Windows/Mac/Linux. It will be commercial; profits will most likely be split evenly upon any sales being made. The type of shooter depends on how successful finding a team is. It could be robots (i.e. not much animation) or even characters (i.e. more animations). It will also feature a one-player training mode with bots. If this project sounds interesting to you, please reply to my personal email address: antonyrwells@outlook.com. I will be working full time on the project (around 5 hours a day), but you can be part time, as long as reasonable progress is made. I can send/show you previous examples of my work too.
  8. Mind Bullet - Stupid Zombies. Kill radioactive brain-eating vampires with bouncing bullets! Click here: https://play.google.com/store/apps/details?id=com.bestsoft32.mindbullets&hl=en
  9. Hello everyone, I am a programmer from Baku. I need a 3D modeller for my shooter project in Unity. I have 2 years of Unity experience. The project will be paid when we finish the work. If you are interested, write me at: mr.danilo911@gmail.com
  10. This Week

    Hello everyone! This week was an exciting one. Lots of things were fixed, improved and implemented. I found 2 experienced 3D artists who will be helping me along the way, so I guess I can say I'm part of a team now. It seems that I learn something new every day just by talking to them, which is great!

    New Workflow

    To improve my productivity I decided to start making a TODO list with post-it notes stuck to my wardrobe doors. On one side there are things which need to be done, and on the other are things which I have already done. It is really satisfying to move the notes from one side to another, and it makes it easier for me to remember what things I've done each week, so I can write them in the blog.

    3D Artists

    Recently I've found two 3D artists who agreed to work on the game. They've got some awesome ideas, and in the coming weeks the game should start to look pretty great. Here's what one of the artists has already started working on (a construction yard vehicle). This will be a prop for the construction-yard-themed levels. There will be 4 themes in total: airport, construction yard, prison yard, and parking lot. Each theme/scene will have props, ground textures and everything else related to that specific environment. This is how each cell on the floor will look; it will be animated, and the turrets will rise from beneath these doors. Currently one of the artists is working on ground textures, walls and environments in general, and the other artist should soon begin working on turrets.

    New turret:

    Progress With The Game

    So, as I mentioned before, this week I've done lots of things, which include:
    - Implemented smooth (lerped) camera movement
    - Implemented support for multiple enemy routes (with multiple entry and exit points)
    - Added non-bomber air enemies
    - Improved information displayed on the UI (upgrade menus, bottom panel)
    - Limited deltaTime's maximum value
    - Added a fast forward button
    - Added text telling what the next wave will be

    Multiple routes are shown by the black/brown markings on the floor. The price is now shown in the bottom panel without the need to hover over the turret button, and the text of the turret's type is now highlighted. That is everything for today, see you next week!
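A minimal sketch of two of the items above — smooth (lerped) camera movement and limiting deltaTime's maximum value. The constants and function names are illustrative, not taken from the project's code:

```python
# Hypothetical sketch: lerped camera follow with a clamped deltaTime.
# Clamping dt prevents a single long frame (e.g. after a hitch or alt-tab)
# from teleporting the camera.

def lerp(a, b, t):
    return a + (b - a) * t

MAX_DELTA = 1.0 / 30.0  # assumed cap on deltaTime

def update_camera(cam_pos, target_pos, dt, smoothing=5.0):
    dt = min(dt, MAX_DELTA)       # limit deltaTime's maximum value
    t = min(smoothing * dt, 1.0)  # blend factor for this frame
    return lerp(cam_pos, target_pos, t)

pos = 0.0
for _ in range(100):
    pos = update_camera(pos, 10.0, 1.0 / 60.0)
# pos eases toward the target (10.0) without overshooting
```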
  11. Today was kind of a slow day too. I haven't gotten a lot of sleep lately (thanks, little hamster wheel in my head), but at last I was still able to add (and also fix) some graphical components here and there. In short, I've made the first and last rooms of the level more distinct from every other room. For example, I've added a room flow on these rooms to properly align props and, in the case of the starting room, the spawning rotation. I've also added a little decal-like plane that tells the player what to do (take it as a little tutorial, if you may). The important thing is that this decal is, not unlike my palette shader, dynamic in terms of colours. What I've done is quite simple: I've mapped each channel of a texture to a specific colour. Here's the original texture: After inputting this texture into my shader, it was just a matter of interpolating values and saturating them:

    Shader "Custom/TriColorMaps" {
        Properties {
            _MainTex ("Albedo (RGB)", 2D) = "white" {}
            _Glossiness ("Smoothness", Range(0,1)) = 0.5
            _Metallic ("Metallic", Range(0,1)) = 0.0
            _RedMappedColor ("Mapped color (Red channel)", Color) = (1, 0, 0, 1)
            _GreenMappedColor ("Mapped color (Green channel)", Color) = (0, 1, 0, 1)
            _BlueMappedColor ("Mapped color (Blue channel)", Color) = (0, 0, 1, 1)
        }
        SubShader {
            Tags { "RenderType"="Transparent" }
            LOD 200

            CGPROGRAM
            // Physically based Standard lighting model, and enable shadows on all light types
            #pragma surface surf Standard fullforwardshadows vertex:vert decal:blend
            // Use shader model 3.0 target, to get nicer looking lighting
            #pragma target 3.0

            sampler2D _MainTex;

            struct Input {
                float2 uv_MainTex;
            };

            half _Glossiness;
            half _Metallic;
            fixed4 _RedMappedColor;
            fixed4 _GreenMappedColor;
            fixed4 _BlueMappedColor;

            void vert (inout appdata_full v) {
                v.vertex.y += v.normal.y * 0.0125;
            }

            // Add instancing support for this shader. You need to check 'Enable Instancing' on materials that use the shader.
            // See https://docs.unity3d.com/Manual/GPUInstancing.html for more information about instancing.
            // #pragma instancing_options assumeuniformscaling
            UNITY_INSTANCING_BUFFER_START(Props)
                // put more per-instance properties here
            UNITY_INSTANCING_BUFFER_END(Props)

            void surf (Input IN, inout SurfaceOutputStandard o) {
                // Albedo comes from a texture tinted by color
                fixed4 c = tex2D (_MainTex, IN.uv_MainTex);
                c.rgb = saturate(lerp(fixed4(0, 0, 0, 0), _RedMappedColor, c.r)
                               + lerp(fixed4(0, 0, 0, 0), _GreenMappedColor, c.g)
                               + lerp(fixed4(0, 0, 0, 0), _BlueMappedColor, c.b)).rgb;
                o.Albedo = c.rgb;
                // Metallic and smoothness come from slider variables
                o.Metallic = _Metallic;
                o.Smoothness = _Glossiness;
                o.Alpha = c.a;
            }
            ENDCG
        }
        FallBack "Diffuse"
    }

    Also, note that I've changed the vertices of the model. I needed a way to eliminate the Z-fighting and just thought of offsetting the vertices by their normals. In conclusion, it's nothing really special, but I'm still working hard on this.

    EDIT: After a little bit of searching, I've seen that you can give a Z-buffer offset in those Unity shaders by using the Offset state. So I've tried to change my previous shader a bit to use that functionality rather than just offsetting the vertices:

    SubShader {
        Tags { "RenderType"="Opaque" "Queue"="Geometry+1" "ForceNoShadowCasting"="True" }
        LOD 200
        Offset -1, -1

        CGPROGRAM
        // Physically based Standard lighting model, and enable shadows on all light types
        #pragma surface surf Lambert decal:blend
        // Use shader model 3.0 target, to get nicer looking lighting
        #pragma target 3.0

        sampler2D _MainTex;

        struct Input {
            float2 uv_MainTex;
        };

        fixed4 _RedMappedColor;
        fixed4 _GreenMappedColor;
        fixed4 _BlueMappedColor;

        // Add instancing support for this shader. You need to check 'Enable Instancing' on materials that use the shader.
        // See https://docs.unity3d.com/Manual/GPUInstancing.html for more information about instancing.
        // #pragma instancing_options assumeuniformscaling
        UNITY_INSTANCING_BUFFER_START(Props)
            // put more per-instance properties here
        UNITY_INSTANCING_BUFFER_END(Props)

        void surf (Input IN, inout SurfaceOutput o) {
            // Albedo comes from a texture tinted by color
            fixed4 c = tex2D (_MainTex, IN.uv_MainTex);
            c.rgb = saturate(lerp(fixed4(0, 0, 0, 0), _RedMappedColor, c.r)
                           + lerp(fixed4(0, 0, 0, 0), _GreenMappedColor, c.g)
                           + lerp(fixed4(0, 0, 0, 0), _BlueMappedColor, c.b)).rgb;
            o.Albedo = c.rgb;
            // We keep the alpha: it's supposed to be a decal
            o.Alpha = c.a;
        }
        ENDCG
    }
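For reference, the channel-to-color mapping a shader like the one above performs reduces to a weighted sum of the three mapped colors, clamped to [0,1], since lerp(0, C, w) is just C * w. A small stand-alone sketch of that math (names are illustrative):

```python
# Hypothetical sketch of per-channel color mapping: each texture channel acts
# as a mask weight for its mapped color, and the sum is saturated.

def clamp01(x):
    return max(0.0, min(1.0, x))

def map_channels(texel, red_color, green_color, blue_color):
    r, g, b = texel
    # For each output component, sum the three mapped colors weighted by the
    # corresponding texture channel, then clamp (the shader's saturate()).
    return tuple(clamp01(r * rc + g * gc + b * bc)
                 for rc, gc, bc in zip(red_color, green_color, blue_color))

# A pure-red texel takes on the red-mapped color entirely:
out = map_channels((1.0, 0.0, 0.0),
                   (0.2, 0.4, 0.9),   # mapped color for the red channel
                   (0.0, 1.0, 0.0),   # mapped color for the green channel
                   (0.0, 0.0, 1.0))   # mapped color for the blue channel
assert out == (0.2, 0.4, 0.9)
```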
  12. I have a brilliant game idea: a Grand Theft Auto style game set in the prehistoric era, or a combination of prehistoric, ancient and medieval eras, more appropriately named 'Grand Theft Horse'. You may think that this would be dull, but I have ideas for the 'vehicles' that will be used: not just horses, but also horse-drawn carriages, open cargo carriages or stagecoaches (passenger carriages), chariots, oxen, donkeys, camels, elephants, ostriches (yes, you ride an ostrich), and sea vehicles like canoes, outriggers, boats, triremes, sailboats and pirate ships. You can jack any vessel or vehicle, travel between cities or villages of different cultures, and have weapons and armour of those cultures: swords, spears, bow and arrow. Cultures range from cavemen to medieval. You can commit crimes and get wanted; then swordsmen and bowmen run after you. Cities range from villages with grass-roof huts, to teepees, to medieval walled towns. There will be missions to do, similar to any GTA title. A brilliant idea no one has thought of yet, isn't it?
  13. jb-dev

    A ragdoll on the floor

    From the album: Vaporwave Roguelite

    This ragdoll is about to be shrunken to nothingness soon enough...
  14. During the past days, lots of shaders were updated, and other visual things were too. Firstly, I've added light effects for when the crystals get shattered; there's also a burst of particles emanating from the broken crystal on impact. Also, enemies now leave a ragdoll corpse behind when they die. I love some of the poses those ragdolls make. On another note, I've toyed around with corpse removal and got captivated by the shrinking effect it created. It can sometimes be off-putting, but I'm still captivated. I've also added a nice VHS-like effect by layering two VHS shaders together, namely "more AVdistortion" and "VHS pause effect". I've already ported the former and it's already active; the latter was just a matter of porting GLSL shaders to HLSL. No biggie. I did change the code a bit to make the white noise move through time, and there's nothing like trigonometry to help us with that:

    fixed4 frag (v2f i) : SV_Target {
        fixed4 col = fixed4(0, 0, 0, 0);

        // get position to sample
        fixed2 samplePosition = i.vertex.xy / _ScreenParams.xy;
        float whiteNoise = 9999.0;

        // Jitter each line left and right
        samplePosition.x = samplePosition.x + (((rand(float2(_UnscaledTime, i.vertex.y)) - 0.5) / 64.0) * _EffectStrength);
        // Jitter the whole picture up and down
        samplePosition.y = samplePosition.y + (((rand(float2(_UnscaledTime, _UnscaledTime)) - 0.5) / 32.0) * _EffectStrength);
        // Slightly add color noise to each line
        col += (fixed4(-0.5, -0.5, -0.5, -0.5)
              + fixed4(rand(float2(i.vertex.y, _UnscaledTime)),
                       rand(float2(i.vertex.y, _UnscaledTime + 1.0)),
                       rand(float2(i.vertex.y, _UnscaledTime + 2.0)), 0)) * 0.1;

        // Either sample the texture, or just make the pixel white (to get the static-y bit at the bottom)
        whiteNoise = rand(float2(floor(samplePosition.y * 80.0), floor(samplePosition.x * 50.0)) + float2(_UnscaledTime, 0));
        float t = sin(_UnscaledTime / 2);
        if (whiteNoise > 11.5 - 30.0 * (samplePosition.y + t) || whiteNoise < 1.5 - 5.0 * (samplePosition.y + t)) {
            // Sample the texture.
            col = lerp(tex2D(_MainTex, samplePosition), col + tex2D(_MainTex, samplePosition), _EffectStrength);
        } else {
            // Use white. (I'm adding here so the color noise still applies)
            col = lerp(tex2D(_MainTex, samplePosition), fixed4(1, 1, 1, 1), _EffectStrength);
        }
        return col;
    }

    It's nice to have HLSL code, but a video is better:
  15. jb-dev

    A little sign in front of a mall

    From the album: Vaporwave Roguelite

    This little sign is actually placed randomly and at a random angle. There can only be one though. Much like all other models, it follows a colour palette.
  16. Project Overview: Rogue is a Doom-inspired FPS. An AI called Solace, on board the ESSL (Elemental Space Station Lab), becomes sentient and decides to create creatures to kill all the people and break free, ultimately seeking revenge for the "experiments" they did to her. You wake up on board, grab your gun, and make it your mission to destroy Solace. We will make a Kickstarter once we have a prototype and some concept art. If we get funded, then awesome! If not, then it's good experience!

    GAME DETAILS
    Language/Game Engine: Unity, C#
    Theme/Setting: Futuristic
    Genre: FPS
    Musical Direction: Electronic/Metal (don't want to copy Doom, though)
    Artistic Direction: Realistic-ish 3D

    ROLE DETAILS
    My Role: Game Designer, Writer and Project Lead/Manager
    Looking for:
    +1 Composer
    +2 Sound Effects
    +3 Programmers
    +3 Artists: 3 concept artist(s), 1 2D artist(s), 3 3D artist(s), 3 texture artist(s), 3 animator(s), 3 modeller(s)
    +1 Producer
    +1 Project Manager
    +2 Level Designers
    +1 Community Manager

    If interested, contact me on Discord at Thathuman44#4207 or email me at rioishere14@gmail.com
  17. dillyframe

    New life of minesweeper!

    This is the first entry in this blog, and it's about minesweeper - some people hate it, others love it. Why don't people play minesweeper? Probably because it's boring? The idea of our game is to unite people in solving the logic task of classic minesweeper. It is not a solo experience anymore; it's a cooperative game where you can gather your friends (up to 4) and win together or lose together. And this is not a flat 2D picture; it's a full 3D environment which you can explore (play mini-football, find some hidden places, or just kick chickens). It's not about minesweeper; it's about challenging your friendship and your mind, if you accept the invitation. How did we come to it? It's simple: we like minesweeper, and we like 3D games, and we like third-person games, and we like puzzles. Why not combine it all in one game? Here it is!

    What's new? Bunny Minesweeper has 2 game modes: classic and crazy. Classic mode offers the 3 classic difficulties which you see in traditional minesweeper. Crazy mode offers 3 crazy difficulties - the biggest field is 60x60, that is 3600 cells and 396 bombs - and we really tested it, and it's damn hard to win in a party!!! You can play solo if you want to - there are 3 matchmaking options: solo, random and friends. As the game is only on Steam, friends matchmaking uses your Steam friends.

    Mini-football? And why not? It will help you rest from the main task and have some fun with friends. By the way, you can play football not with the ball but with your friends to get more points - just kick them into the goal!

    Statistics and Leaderboard: at the end of the match you will see some statistics - how many cells each player opened and how many flags they placed. There is also a leaderboard - separate for solo and co-op modes for each difficulty - and the faster you complete the game, the higher your position will be.

    Customization: for now you can change the color of your bunny before joining any game (in the menu). This color serves to identify the flags you place, so at the end of the match, before building another field, everyone can analyze where and who is responsible for the defeat. Future plans for customization are huge - from some parts of clothes (like hats or gloves) to full unique skins. Some of these parts you can see in the main menu - those strange dancing bunnies. And - YES! - you can dance in game by pressing the 4 button.

    Why kicking? The main idea of kicking players is to force them away from a cell they could occupy when AFK, or when they bother you while playing. You can use it any way you like; it's just a possibility.

    Conclusion: so if you love minesweeper you should definitely try this one, and if you hate it - well... give minesweeper a chance to change your mind by playing it in a 3D environment with friends - hard and fun simultaneously!!!
  18. Hi, my name is Carlos Coronado, and I am a gamedev. Recently (April 2018) I released Infernium for Steam, Humble, Switch and PS4 using the Unreal Engine. I've gotten a lot of questions via Twitter asking how I released the game on Switch solo, and I figured it would be cool if I could explain it in a video! Oh, and I also invited Alexander, ex community manager of Epic Games, to help me explain the feedback. Anyway, I hope you find it useful. Cheers, Carlos.
  19. Hodgman

    Imperfect Environment Maps

    In 22 our lighting environment is dominated by sunlight, however there are many small emissive elements everywhere. What we want is for all these bright sunlit metal panels and the many emissive surfaces to be reflected off the vehicles. Being a high speed racing game, we need a technique with minimal performance impacts, and at the same time, we would like to avoid large baked data sets in order to support easy track editing within the game. This week we got around to trying a technique presented 10 years ago for generating large numbers of shadow maps extremely quickly: Imperfect Shadow maps. In 2008, this technique was a bit ahead of its time -- as indicated by the performance data being measured on 640 x 480 image resolutions at 15 frames per second! It is also a technique for generating shadows, for use in conjunction with a different lighting technique -- Virtual Point Lights. In 22, we aren't using Virtual Point Lights or Imperfect Shadow Maps! However, in the paper they mention that ISMs can be used to add shadows to environment map lighting... By staying up too late and misreading this section, you could get the idea that you could use the ISM point-cloud rendering ideas to actually generate large numbers of approximate environment maps at low cost... so that's what we implemented Our gameplay code already had access to a point cloud of the track geometry. This data set was generated by simply extracting the vertex positions from the visual mesh of the track - a portion is visualized below: Next we somehow need to associate lighting values with each of these points... Typically for static environments, you would use a light baking system for this, which can spend a lot of time path-tracing the scene (or similar), before saving the results into the point cloud. To keep everything dynamic, we've instead taken inspiration from screen-space reflections. 
    With SSR, the existing images that you're rendering anyway are re-used to provide data for reflection rays. We are reusing these images to compute lighting values for the points in our point cloud. After the HDR lighting is calculated, the point cloud is frustum culled and each point projected onto the screen (after a small random offset is applied to it). If the projected point is close in depth to the stored Z-buffer value at that screen pixel, then the lighting value at that pixel is transferred to the point cloud using a moving average. The random offsets and moving average allow many different pixels that are near the point to contribute to its color. Over many frames, the point cloud will eventually be colored in. If the lighting conditions change, then the point cloud will update as long as it appears on screen. This works well for a racing game, as the camera is typically looking ahead at sections of track that the car is about to drive into, allowing the point cloud for those sections to be updated with fresh data right before the car drives into those areas. Now, if we take the points that are near a particular vehicle, project them onto a sphere, and then unwrap that sphere to 2D UV coordinates (at the moment we are using a world-space octahedral unwrapping scheme, though spheremaps, hemispheres, etc. are also applicable; using view-space instead of world space could also help hide seams), then we get an image like the one below. Left is the RGB components; right is alpha, which encodes the solid angle that the point would have covered if we'd actually drawn them as discs/spheres instead of as points. Nearby points have bright alpha, while distant points have darker alpha. We can then feed this data through a blurring filter. In the ISM paper they do a push-pull technique using mipmaps, which I've yet to implement. Currently, this is a separable blur weighted by the alpha channel.
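The world-space octahedral unwrap mentioned above can be sketched as follows; this is the standard octahedral mapping construction, not the game's actual code:

```python
# Sketch of octahedral mapping: a unit direction is projected onto the
# octahedron |x|+|y|+|z| = 1, the lower hemisphere is folded outward over the
# diagonals, and the result is remapped to [0,1]^2 so points can be splatted
# into a small 2D environment-map texture.

def octahedral_encode(x, y, z):
    """Map a unit direction to [0,1]^2 via octahedral projection."""
    norm = abs(x) + abs(y) + abs(z)
    px, py = x / norm, y / norm
    if z < 0.0:
        # Fold the lower hemisphere over the diagonals
        px, py = ((1.0 - abs(py)) * (1.0 if px >= 0 else -1.0),
                  (1.0 - abs(px)) * (1.0 if py >= 0 else -1.0))
    return (px * 0.5 + 0.5, py * 0.5 + 0.5)

u, v = octahedral_encode(0.0, 0.0, 1.0)
# the "up" pole lands at the center of the map: (0.5, 0.5)
```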
    After blurring, I wanted to keep track of which pixels initially had valid alpha values, so a sign bit is used for this: pixels that contain data only thanks to blurring store negative alpha values. Below, left is RGB, middle is positive alpha, right is negative alpha:
    Pass 1 - horizontal
    Pass 2 - vertical
    Pass 3 - diagonal
    Pass 4 - other diagonal, and alpha mask generation
    In the final blurring pass, the alpha channel is converted to an actual/traditional alpha value (based on artist-tweakable parameters), which will be used to blend with the regular lighting probes. A typical two-axis separable blur creates distinctive box shapes, but repeating the process with a 45º rotation produces hexagonal patterns instead, which are much closer to circular. The result of this is a very approximate, blobby, kind-of-correct environment map, which can be used for image-based lighting. After this step we calculate a mip-chain using standard IBL practices for roughness-based lookups. The big question is: how much does it cost? On my home PC with an NVidia GTX780 (not a very modern GPU now!), the in-game profiler showed ~45µs per vehicle to create a probe, and ~215µs to copy the screen-space lighting data to the point cloud. And how does it look? When teams capture sections of our tracks, emissive elements show that team's color. Below you can see a before/after comparison, where the green team color is now actually reflected on our vehicles. In those screens you can see the quick artist-tweaking GUI on the right side. I have to give a shout-out to Omar's Dear ImGui project, which we use to very quickly add these kinds of developer GUIs.
    Point Radius - the size of the virtual discs that the points are drawn as (used to compute the pre-blurring alpha value, dictating the blur radius).
    Gather Radius - the random offset added to each point (in meters) before it's projected to the screen to try and collect some lighting information.
    Depth Threshold - how close the projected point needs to be to the current Z-buffer value in order to collect lighting info from that pixel.
    Lerp Speed - a weight for the moving average.
    Alpha Range - after blurring, scales how softly alpha falls off at the edge of the blurred region.
    Max Alpha - a global alpha multiplier for these dynamic probes - e.g. 0.75 means that 25% of the normal lighting probes will always be visible.
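The moving-average transfer controlled by the Lerp Speed parameter can be sketched like this; the function and parameter names are illustrative, not from the engine:

```python
# Hypothetical sketch: when a projected point passes the depth test, the
# screen pixel's HDR lighting is blended into the point's stored color with an
# exponential moving average. "lerp_speed" plays the role of the
# artist-tweakable Lerp Speed weight.

def update_point_color(stored, sampled, lerp_speed=0.1):
    # Old color decays geometrically; fresh on-screen samples accumulate.
    return tuple(s + (x - s) * lerp_speed for s, x in zip(stored, sampled))

color = (0.0, 0.0, 0.0)
for _ in range(50):
    color = update_point_color(color, (1.0, 0.5, 0.25))
# after many frames the point converges toward the sampled lighting, and the
# same mechanism smoothly re-converges when the lighting conditions change
```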
  20. jb-dev

    Ragdoll removal tests

    From the album: Vaporwave Roguelite

    It's just me trying out ways to remove ragdolls from the world.
  21. So the foolproof way to store information about emission would be to dedicate a full RGB data set to the job, but this is seemingly wasteful, and squeezing everything into a single buffer channel is desirable and indeed a common practice. The thing is that there doesn't seem to be one de facto standard technique to achieve this. A commonly suggested solution is to perform a simple glow * albedo multiplication, but it's not difficult to imagine instances where this strict interdependence would become an impenetrable barrier. What are some other ideas?
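A sketch of the commonly suggested glow * albedo scheme and its limitation: a single scalar reconstructs the emissive color exactly only when the emission shares the albedo's hue. Names here are illustrative:

```python
# Hypothetical sketch: pack emission into one G-buffer scalar as an intensity
# relative to albedo, and reconstruct emissive = glow * albedo at shading time.

def pack_emissive(albedo, emissive):
    # One scalar per pixel. Breaks down when the emissive hue differs from
    # the albedo hue (the "strict interdependence" noted above).
    return max(e / a if a > 0 else 0.0 for e, a in zip(emissive, albedo))

def unpack_emissive(albedo, glow):
    return tuple(a * glow for a in albedo)

albedo   = (0.8, 0.4, 0.2)
emissive = (1.6, 0.8, 0.4)          # same hue as albedo: reconstructs exactly
glow     = pack_emissive(albedo, emissive)
assert unpack_emissive(albedo, glow) == emissive
```

A white emission on a red surface, by contrast, cannot be represented: unpacking always tints the emission with the albedo's color.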
  22. I am using the RTS cam script and was wondering whether it is possible to stop the camera from losing sight of an object. This setup is being developed in Unity for a 3D game. The camera is used to rotate, pan, and zoom in and around an object, but at the moment it can be moved until the object is out of sight. Is there a way to block the camera from moving out of sight of the object? I have seen multiple ways to keep an object in a camera's view, but I am unsure which route to take. I have attached the script being used below.

    public class RTSCam : MonoBehaviour {
        private float dist;
        private float orbitSpeedX;
        private float orbitSpeedY;
        private float zoomSpeed;

        public float rotXSpeedModifier = 0.25f;
        public float rotYSpeedModifier = 0.25f;
        public float zoomSpeedModifier = 5;
        public float minRotX = -20;
        public float maxRotX = 70;
        //public float minRotY = 45;
        //public float maxRotY = 315;
        public float minZoom = 3;
        public float maxZoom = 15;
        public float panSpeedModifier = 1f;

        // Use this for initialization
        void Start () {
            dist = transform.localPosition.z;

            DemoSceneUI.SetSceneTitle("RTS camera, the camera will orbit around a pivot point but the rotation in z-axis is locked");

            string instInfo = "";
            instInfo += "- swipe or drag on screen to rotate the camera\n";
            instInfo += "- pinch or use the mouse wheel to zoom in/out\n";
            instInfo += "- swipe or drag on screen with 2 fingers to move around\n";
            instInfo += "- single-finger interaction can be simulated using the left mouse button\n";
            instInfo += "- two-finger interaction can be simulated using the right mouse button";
            DemoSceneUI.SetSceneInstruction(instInfo);
        }

        void OnEnable() {
            IT_Gesture.onDraggingE += OnDragging;
            IT_Gesture.onMFDraggingE += OnMFDragging;
            IT_Gesture.onPinchE += OnPinch;

            orbitSpeedX = 0;
            orbitSpeedY = 0;
            zoomSpeed = 0;
        }

        void OnDisable() {
            IT_Gesture.onDraggingE -= OnDragging;
            IT_Gesture.onMFDraggingE -= OnMFDragging;
            IT_Gesture.onPinchE -= OnPinch;
        }

        // Update is called once per frame
        void Update () {
            //get the current rotation
            float x = transform.parent.rotation.eulerAngles.x;
            float y = transform.parent.rotation.eulerAngles.y;

            //make sure x is between -180 and 180 so we can clamp it properly later
            if (x > 180) x -= 360;

            //calculate the x and y rotation
            //Quaternion rotationY = Quaternion.Euler(0, Mathf.Clamp(y + orbitSpeedY, minRotY, maxRotY), 0);
            Quaternion rotationY = Quaternion.Euler(0, y + orbitSpeedY, 0);
            Quaternion rotationX = Quaternion.Euler(Mathf.Clamp(x + orbitSpeedX, minRotX, maxRotX), 0, 0);

            //apply the rotation
            transform.parent.rotation = rotationY * rotationX;

            //calculate the zoom and apply it
            dist += Time.deltaTime * zoomSpeed * 0.01f;
            dist = Mathf.Clamp(dist, -maxZoom, -minZoom);
            transform.localPosition = new Vector3(0, 0, dist);

            //reduce all the speeds
            orbitSpeedX *= (1 - Time.deltaTime * 12);
            orbitSpeedY *= (1 - Time.deltaTime * 3);
            zoomSpeed *= (1 - Time.deltaTime * 4);

            //use the mouse scroll wheel to simulate pinch, sorry I sort of cheated here
            zoomSpeed += Input.GetAxis("Mouse ScrollWheel") * 500 * zoomSpeedModifier;
        }

        //called when a one-finger drag is detected
        void OnDragging(DragInfo dragInfo) {
            //if the drag is performed using mouse2, treat it as a two-finger drag
            if (dragInfo.isMouse && dragInfo.index == 1) OnMFDragging(dragInfo);
            //else perform normal orbiting
            else {
                //apply the DPI scaling
                dragInfo.delta /= IT_Gesture.GetDPIFactor();

                //vertical movement corresponds to rotation in the x-axis
                orbitSpeedX = -dragInfo.delta.y * rotXSpeedModifier;
                //horizontal movement corresponds to rotation in the y-axis
                orbitSpeedY = dragInfo.delta.x * rotYSpeedModifier;
            }
        }

        //called when a pinch is detected
        void OnPinch(PinchInfo pinfo) {
            zoomSpeed -= pinfo.magnitude * zoomSpeedModifier / IT_Gesture.GetDPIFactor();
        }

        //called when a dual-finger or right-mouse drag is detected
        void OnMFDragging(DragInfo dragInfo) {
            //apply the DPI scaling
            dragInfo.delta /= IT_Gesture.GetDPIFactor();

            //make a new direction, pointing horizontally in the direction of the camera's y-rotation
            Quaternion direction = Quaternion.Euler(0, transform.parent.rotation.eulerAngles.y, 0);

            //calculate forward movement based on vertical input
            Vector3 moveDirZ = transform.parent.InverseTransformDirection(direction * Vector3.forward * -dragInfo.delta.y);
            //calculate sideways movement based on horizontal input
            Vector3 moveDirX = transform.parent.InverseTransformDirection(direction * Vector3.right * -dragInfo.delta.x);

            //move the camera
            transform.parent.Translate(moveDirZ * panSpeedModifier * Time.deltaTime);
            transform.parent.Translate(moveDirX * panSpeedModifier * Time.deltaTime);
        }

        //for the camera to auto-rotate and focus on a predefined position
        /*
        public float targetRotX;
        public float targetRotY;
        public float targetRotZ;
        public Vector3 targetPos;
        public float targetZoom;

        IEnumerator LerpToPoint() {
            Quaternion startRot = transform.parent.rotation;
            Quaternion endRot = Quaternion.Euler(targetRotX, targetRotY, targetRotZ);

            Vector3 startPos = transform.parent.position;
            Vector3 startZoom = transform.localPosition;

            float duration = 0;
            while (duration < 1) {
                transform.parent.rotation = Quaternion.Lerp(startRot, endRot, duration);
                transform.parent.position = Vector3.Lerp(startPos, targetPos, duration);
                transform.localPosition = Vector3.Lerp(startZoom, new Vector3(0, 0, -targetZoom), duration);

                duration += Time.deltaTime;
                yield return null;
            }

            transform.parent.rotation = endRot;
            transform.parent.position = targetPos;
            transform.localPosition = new Vector3(0, 0, -targetZoom);
        }
        */

        /*
        private bool instruction = false;
        void OnGUI() {
            string title = "RTS camera, the camera will orbit around a pivot point but the rotation in z-axis is locked.";
            GUI.Label(new Rect(150, 10, 400, 60), title);

            if (!instruction) {
                if (GUI.Button(new Rect(10, 55, 130, 35), "Instruction On")) {
                    instruction = true;
                }
            }
            else {
                if (GUI.Button(new Rect(10, 55, 130, 35), "Instruction Off")) {
                    instruction = false;
                }

                GUI.Box(new Rect(10, 100, 400, 100), "");

                string instInfo = "";
                instInfo += "- swipe or drag on screen to rotate the camera\n";
                instInfo += "- pinch or use the mouse wheel to zoom in/out\n";
                instInfo += "- swipe or drag on screen with 2 fingers to move around\n";
                instInfo += "- single-finger interaction can be simulated using the left mouse button\n";
                instInfo += "- two-finger interaction can be simulated using the right mouse button";
                GUI.Label(new Rect(15, 105, 390, 90), instInfo);
            }
        }
        */
    }
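    One common answer to the question above is to test the object's viewport position after each pan and undo the move when the object leaves the frustum. The sketch below is not part of the original script and uses only standard Unity API (Camera.WorldToViewportPoint); the `target` field and the `OnMFDraggingClamped` handler name are hypothetical, and it assumes a Camera component sits on the same GameObject as RTSCam.

        // Hypothetical addition to RTSCam -- a sketch, not a drop-in fix.
        public Transform target; // the object that must stay in sight (assign in the Inspector)

        void OnMFDraggingClamped(DragInfo dragInfo) {
            // remember where the pivot was before the pan
            Vector3 previousPos = transform.parent.position;

            // perform the normal pan
            OnMFDragging(dragInfo);

            // a point is on screen when its viewport coords are in (0,1) and it is in front of the camera
            Camera cam = GetComponent<Camera>();
            Vector3 vp = cam.WorldToViewportPoint(target.position);
            bool visible = vp.z > 0f && vp.x > 0f && vp.x < 1f && vp.y > 0f && vp.y < 1f;

            // if the target has left the view, undo the pan
            if (!visible) transform.parent.position = previousPos;
        }

    You would subscribe this handler in OnEnable in place of OnMFDragging. A cheaper alternative with the same effect is to clamp `transform.parent.position` to a fixed radius around the target, so the orbit pivot can never drift far enough for the object to leave the frame.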
  23. jb-dev

    A small tutorial decal

    From the album: Vaporwave Roguelite

    A small, simple yet comprehensive decal telling the player what they need to do to clear the level. I'm not quite sure about the design, though I think the content is straightforward.
  24. Today, I finally finished the following features:
    • Add new solid voxels at the selected position
    • Destroy voxels at the selected position
    • Generate chunks on the fly
    • Remove chunks on the fly
    Here is a video demonstrating those features:
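    The post above does not include code, but the features it lists map naturally onto a sparse chunk store: a dictionary of fixed-size voxel arrays keyed by chunk coordinates, where chunks are allocated when first written and can be dropped wholesale. A rough sketch (all names hypothetical, not the author's implementation):

        using System.Collections.Generic;

        // Hypothetical sketch of a chunked voxel store, not the author's code.
        public class VoxelWorld {
            const int ChunkSize = 16;
            // chunks created on the fly, keyed by integer chunk coordinates
            readonly Dictionary<(int, int, int), byte[,,]> chunks =
                new Dictionary<(int, int, int), byte[,,]>();

            // floor division so negative voxel coords map to the right chunk
            static int ChunkCoord(int v) => v >= 0 ? v / ChunkSize : (v - ChunkSize + 1) / ChunkSize;
            static int LocalCoord(int v) => ((v % ChunkSize) + ChunkSize) % ChunkSize;

            // "Add new solid voxels" / "Destroy voxels" (id 0 = air)
            public void SetVoxel(int x, int y, int z, byte id) {
                var key = (ChunkCoord(x), ChunkCoord(y), ChunkCoord(z));
                if (!chunks.TryGetValue(key, out var chunk)) {
                    // "Generate chunks on the fly" when first touched
                    chunk = new byte[ChunkSize, ChunkSize, ChunkSize];
                    chunks[key] = chunk;
                }
                chunk[LocalCoord(x), LocalCoord(y), LocalCoord(z)] = id;
            }

            // "Remove chunks on the fly", e.g. when far from the player
            public void RemoveChunk(int cx, int cy, int cz) => chunks.Remove((cx, cy, cz));
        }

    The dictionary keeps memory proportional to the chunks actually touched, which is what makes on-the-fly generation and removal cheap.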
  25. jb-dev

    No More Internet

    From the album: Vaporwave Roguelite

    When there's no more internet. In order to appease the Internet gods, I must sacrifice myself to the bottomless Ethernet port.