
C# Simplest C# MVC framework (Unity UNET, 2-player)

Recommended Posts

Hi
I'm having a huge problem trying to implement a simple working framework for a turn-based (each player takes turns) board game using UNET (Unity).
I have an EventManager system I am comfortable with for sending and receiving custom events.
I want to avoid adding any unnecessary complexity, so no state machine, etc.
Just a Model class
A Game Controller
Perhaps a controller for each player?
And a View for the board and game pieces.

I'm not sure whether most of the game logic (calculating scores, wins, etc.) should go in the Game Controller or the Model class.
Will extend this for use in AR.

Can anyone help?
Are there any really useful MVC C# templates that would get me started?
Any help appreciated!
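For reference, one common split for a turn-based board game keeps every rule (legal moves, turn order, win logic) in the model and leaves the controller to relay input and raise events. A minimal sketch in Python (all names are illustrative; this is not an actual UNET implementation, and `event_bus` stands in for an EventManager like the one described above):

```python
class BoardModel:
    """Owns all game state and rules -- the board, whose turn it is,
    and the win logic. Knows nothing about rendering or networking."""

    def __init__(self, size=3):
        self.board = [[None] * size for _ in range(size)]
        self.current_player = 0

    def place(self, player, row, col):
        """Apply a move if it is legal; return True on success."""
        if player != self.current_player or self.board[row][col] is not None:
            return False
        self.board[row][col] = player
        self.current_player = 1 - self.current_player
        return True

    def winner(self):
        """Return the winning player, or None (checks rows only, for brevity)."""
        for row in self.board:
            if row[0] is not None and all(cell == row[0] for cell in row):
                return row[0]
        return None


class GameController:
    """Relays player input to the model and raises events for the view."""

    def __init__(self, model, event_bus):
        self.model = model
        self.events = event_bus

    def try_move(self, player, row, col):
        if self.model.place(player, row, col):
            self.events.emit("move_made", player, row, col)
            winner = self.model.winner()
            if winner is not None:
                self.events.emit("game_won", winner)
```

In this layout the view only listens for events, and a per-player controller would just wrap try_move with that player's id.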


I would want to know why you are trying to use the MVC design pattern for a game in the first place. Game entities tend to want to access some type of global state (i.e. a level) so they can navigate the space. MVC works really well for websites and applications.

3 hours ago, kingius said:

I would want to know why you are trying to use the MVC design pattern for a game in the first place. Game entities tend to want to access some type of global state (i.e. a level) so they can navigate the space. MVC works really well for websites and applications.

As mentioned, it's a turn-based board game, not realtime. I think MVC or MVP makes perfect sense.

After much searching, I came across this example, with a tutorial as well:

https://bitbucket.org/jparham/blog-tic-tac-toe/src

http://theliquidfire.com/2016/05/05/turn-based-multiplayer-part-1/

 

Would appreciate feedback!

Edited by Jim Bachalo


  • Similar Content

    • By Yosef BenSadon
      Hi, I was considering this startup http://adshir.com/ for investment, and I would like a bit of feedback on what the developer community thinks about the technology.
      So far what they have is a demo that runs in real time on a tablet at over 60 FPS; it runs locally on the integrated GPU of an i7. They have a 20,000-triangle dinosaur that looks impressive, better than anything I have seen on a mobile device, with reflections and shadows looking very close to how they would look in the real world. They achieved this thanks to a new algorithm for a rendering technique called path tracing/ray tracing, which is very demanding and so far is used mostly for static images.
      From what I checked around, there is no real option for real-time ray tracing (60 FPS on consumer devices). Imagination Technologies was supposed to release a chip that supports real-time ray tracing, but I did not find that they have a product on the market, or even that the technology is finished, as the last demo I found was on a PC. The other one is OTOY with their Brigade engine, which is still not released and, if I understand correctly, is more a cloud solution than a hardware solution.
      Would there be sizable interest in the developer community in having such a product as a plug-in for existing game engines? How important is ray tracing to the future of high-end real-time graphics?
    • By Alexander Nazarov
      Hello. I'm a newbie in Unity and just starting to learn the basics of the engine. I want to create a game like StackJump (links are below), and I'm wondering which features I should use to create such a game. Should I use the physics engine, or can I move objects by changing their transforms manually in Update()?
      If I should use physics, can you direct me in a few words on how to implement it and what to use? Just general info; no need for a detailed description of the development process.
      Game in PlayMarket
      Video of the game
    • By Manuel Berger
      Hello fellow devs!
      Once again I started working on a 2D adventure game, and right now I'm doing the character movement/animation. I'm not a big math guy, and I was happy with my solution, but soon I realized that it's flawed.
      My player has 5 walking animations, mirrored for the left side: up, up-right, right, down-right, down. With the atan2 function I get the angle between the player and the destination. To get an index from 0 to 4, I divide PI by 5 and see how many times it goes into the player-destination angle.

      In Pseudo-Code:
      angle = atan2(destination.x - player.x, destination.y - player.y) //swapped y and x to get mirrored angle around the y axis
      index = (int) (angle / (PI / 5));
      PlayAnimation(index); //0 = up, 1 = up_right, 2 = right, 3 = down_right, 4 = down

      Besides the fact that when the angle is equal to PI it produces an index of 5, this works like a charm. Or at least I thought so at first. When I tested it, I realized that the up and down animations play more often than the others, which is pretty logical, since they cover double the angle.

      What I'm trying to achieve is something like this, but with equal angles, so that up and down has the same range as all other directions.

      I can't get my head around it. Any suggestions? Is the whole approach doomed?

      Thank you in advance for any input!
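One way to give every direction the same angular range (a hedged sketch in Python; the same arithmetic ports directly to C#): with 8 directions around the full circle, each sector is PI/4 wide. Folding the left side onto the right with abs() handles the mirroring, and offsetting the angle by half a sector (PI/8) centres each sector on its direction:

```python
import math

def direction_index(player, destination):
    """Return 0=up, 1=up_right, 2=right, 3=down_right, 4=down,
    with the left side mirrored onto the right."""
    dx = destination[0] - player[0]
    dy = destination[1] - player[1]
    # x and y swapped as in the original post, so 0 = up, pi = down;
    # abs() folds the left side onto the right (mirrored animations)
    angle = abs(math.atan2(dx, dy))
    # shift by half a sector (pi/8) so each pi/4 sector is centred
    index = int((angle + math.pi / 8) / (math.pi / 4))
    return min(index, 4)  # guards the angle == pi edge case
```

Now "up" covers abs angles below PI/8 and "down" covers those above 7*PI/8, so each of the 8 directions gets exactly a quarter-PI slice of the circle.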
       
    • By Dave Haylett
      Hi all. My project is coming along wonderfully, I'm starting to consider alpha deployment, and I would like your advice.
      My project needs access to 10,000 small PNG image files at runtime, each only a few kilobytes, which during development I loaded directly from a fixed path on my HDD whenever one was needed (obviously not a solution for go-live), using something like this:
      img = new WriteableBitmap(new BitmapImage(new Uri(@screenshotsPath + filename)));
      The image would then be blitted onto a buffer screen, etc. etc. At a time, a few dozen would be being used.
      Now I'm thinking about deployment, and also when I produce an update to my app, there could be more images to add to the folders. So I'm considering the best way of a) deploying the images to the user as part of the project, and b) how to most easily handle updates to the app, whereby more images will be added.
      I have just experimented with adding them all as a Resource (!). This inflated the exe from 10 MB to 100 MB (not a major problem), increased the compile time from 3 secs to 30 secs (annoying), and increased RAM usage from 500 MB to 1.5 GB (not a major problem either), but it solves my fixed-directory issue, distribution issue, and update issue simply by having all the files stuck into the executable. Here's the new code I'm using:
      img = BitmapFactory.FromResource("Shots/" + filename);
      The next thing I was going to try was to mark them as Content > Copy if Newer. This would resolve the executable size and RAM usage (and the directory issue as well); however, it seems I'd need to highlight them all and move them from Resource to Content. As an up-front job this isn't too bad, but as I add new images to the project I'll need to go in and do this every time, which gets annoying, as the VS2015 default is Resource. Also, I'm not sure how this would work in terms of updates. Would something like ClickOnce deployment recognise new PNGs and install them for users?
       
      I also have 3,000 ZIP files (~500kb each) which also need deploying and updating in the same way. These are currently read directly from my HDD until I can find a permanent solution for adding these to the project as well.
      Can anyone think of a better way of doing what I'm trying to achieve?
      Thanks for any help folks.
       
    • By hyperknot
      Hi, first post here. I'm making a simple augmented-reality game based on the well-known 2D puzzle game Slitherlink (also called Loopy). This will be the first time I'm using shaders, so I'm on a bit of a steep learning curve here.
      My concept is that AR will look really nice with self-illuminating objects, instead of normal materials whose shadows would be missing or wrong, which would be quite striking when composited onto the camera feed.

      So I'd like to render the game as "laser beams" levitating above the table, which technically means displaying and illuminating with tube lights. This is where I'm stuck.

      I've implemented smooth 2D line segment rendering by creating rectangles perpendicular to the camera and shading them in the fragment shader.

      I also looked into area lights, but all I could come up with was just "getting the nearest point in a rectangle" concept, which is:
      - looking nice on diffuse as long as it's a uniform color
      - but is totally wrong for Blinn and Phong shading

      My biggest problem is how to get the tube light illumination effect. Instead of the uniform white area on the screenshot below, I'd like to get colored, grid-like illumination on the ground. The number of tube lights can be up to 200.

      My only idea is to render to a buffer from a top orthogonal projection, apply gaussian blur and use it for diffuse lighting on the floor. Does this sound reasonable?
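That render-to-buffer idea can be prototyped on the CPU before committing to a shader pass. A minimal sketch in plain Python (every name and parameter here is illustrative, not from the post): rasterise the tube-light segments into a top-down grid, then approximate the gaussian with repeated box blurs, which is also a cheap way to do it on the GPU:

```python
def bake_lightmap(segments, size=32, radius=2, passes=3):
    """Top-down 'bake' of tube lights: rasterise each light segment
    (endpoints in 0..1 floor coordinates) into a 2D grid, then blur.
    The blurred grid is what the floor would sample as diffuse light."""
    buf = [[0.0] * size for _ in range(size)]
    for (x0, y0), (x1, y1) in segments:
        steps = size * 2
        for i in range(steps + 1):
            t = i / steps
            x = round((x0 + (x1 - x0) * t) * (size - 1))
            y = round((y0 + (y1 - y0) * t) * (size - 1))
            buf[y][x] = 1.0

    def box_blur(grid):
        # separable box blur with clamped edges; a few passes of a box
        # blur converge toward a gaussian (central limit theorem)
        k = 2 * radius + 1
        out = [[sum(grid[y][max(0, min(size - 1, x + d))]
                    for d in range(-radius, radius + 1)) / k
                for x in range(size)] for y in range(size)]
        return [[sum(out[max(0, min(size - 1, y + d))][x]
                     for d in range(-radius, radius + 1)) / k
                 for x in range(size)] for y in range(size)]

    for _ in range(passes):
        buf = box_blur(buf)
    return buf
```

With up to 200 segments this bake is cheap at typical lightmap resolutions, and the buffer only needs re-baking when the level layout changes.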

      Also, does anyone know how to get specular reflections right with an area light? Nothing PBR, just Blinn would be nice. 

      The scene is very simple: floor on 0, all lights in the same height and only the floor needs to be lit.



      My shader (Metal, but pretty much 1:1 GLSL):
      fragment float4 floorFragmentShader(FloorVertexOut in [[stage_in]],
                                          constant Uniforms& uniforms [[buffer(2)]],
                                          texture2d<float> tex2D [[texture(0)]],
                                          sampler sampler2D [[sampler(0)]]) {
          float3 n = normalize(in.normalEye);

          float lightIntensity = 0.05;
          float3 lightColor = float3(0.7, 0.7, 1) * lightIntensity;

          // area light using nearest point
          float limitX = clamp(in.posWorld.x, -0.3, 0.3);
          float limitZ = clamp(in.posWorld.z, -0.2, 0.2);
          float3 lightPosWorld = float3(limitX, 0.05, limitZ);
          float3 lightPosEye = (uniforms.viewMatrix * float4(lightPosWorld, 1)).xyz;

          // diffuse
          float3 s = normalize(lightPosEye - in.posEye);
          float diff = max(dot(s, n), 0.0);
          float3 diffuse = diff * lightColor * 0.2 * 0;

          // specular
          float3 v = normalize(-in.posEye);

          // Blinn
          float3 halfwayDir = normalize(v + s);
          float specB = pow(max(dot(halfwayDir, n), 0.0), 64.0);

          // Phong
          float3 reflectDir = reflect(-s, n);
          float specR = pow(max(dot(reflectDir, v), 0.0), 8.0);

          float3 specular = specB * lightColor;

          // attenuation
          float distance = length(lightPosEye - in.posEye);
          float attenuation = 1.0 / (distance * distance);
          diffuse *= attenuation;
          specular *= attenuation;

          float3 lighting = diffuse + specular;
          float3 color = tex2D.sample(sampler2D, in.texCoords).xyz;
          color *= lighting + 0.1;

          return float4(float3(color), 1);
      }