About undead

  1. Introduction

    It has been three years since I started developing Android apps in my spare time. My history as a developer goes back to when I was a little kid and started playing with MS-DOS and x86 assembly. Like everybody, I have always had to adapt to new technologies and learn new APIs, new tools and new programming languages. Some transitions are easier than others, but in the end I have always been a desktop developer. When approaching Android programming I found myself a little lost at the beginning, and after a while I was... even more lost! The problem is that mobile development works in a totally different way. That's why you can find the same questions asked a million times on different forums. In this post I would like to give some hints about Android programming for people coming from desktop programming. Code is not what matters here; it is the programming model.

    1 There is no executable

    In a desktop environment you usually end up with an installer package that unpacks and installs your application into a folder. You then have one or more executables which are somehow linked to the OS UI (i.e. a start menu entry or a desktop link on Windows). Android is a different beast. You have to provide an APK file, which is more or less a zip file containing your application code and data. Inside this APK file there is one file called the manifest. The manifest declares what's inside, which permissions are required, which hardware features are requested by your app, etc. Obviously this is also used to check device compatibility with that specific APK. Every APK has to be signed (even the debug one, which uses a debug key inside your Android SDK folder) and contains a version number. When updating an APK, the system checks the version number and verifies that the new APK has been signed with the same key as the old one, for obvious security reasons.

    2 An android application...
is not an application

    This is confusing to many people: what is usually referred to as an application is an activity. In Android, an application class is a lower-level class that can be overridden in order to perform specific tasks, but at least at the beginning you probably won't need it. An activity, on the other hand, is exactly what a desktop application is. It starts, it loads its contents, it can handle user input, etc.

    3 One APK, one application, many activities

    Yes, you can have many activities inside a single APK. A game activity could call another activity in its APK that takes care of the player settings. In order to keep your code modular and reusable you might want to have multiple activities.

    4 One activity, many fragments

    As the Android OS evolved, it was soon clear that the many-activities model was a good idea. At the same time, for trivial tasks it was maybe a little too much to create a different activity. So starting with Honeycomb (Android 3.0), fragments were introduced. Fragments are views (with code) inside an activity. You can easily create multiple fragments; there is no need to declare them in the manifest.

    5 Intents, the "main activity" and a very good idea

    In order to start an activity, you need to use a class called Intent. You inform the system of your intention to start an activity. You can pass parameters, and the activity will be started. Even more: an activity can declare in the manifest which kinds of intents it supports. This leads us to two considerations. The first is that the main activity is the one that supports the intent linked to the "launcher", i.e. the intent the system generates when you tap on an app icon. That is your main activity. The second consideration is that you can actually call external activities if you use an intent they support. So if you fire a phone-number intent, the phone will ask you which app should be opened (as long as you haven't selected a default app for that kind of intent).
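The resolution idea described above (activities declare the intents they support; the system finds a match and starts it) can be modelled in plain Java. This is a toy sketch of the concept only, not the real android.content.Intent API; all class and action names here are made up:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Toy model of intent resolution: "activities" register the actions they
// support, and the "system" dispatches an intent to the matching handler.
public class IntentDemo {
    static final Map<String, Consumer<String>> filters = new HashMap<>();
    static String lastStarted = null;

    static void register(String action, Consumer<String> activity) {
        filters.put(action, activity);
    }

    static void startActivity(String action, String extra) {
        Consumer<String> activity = filters.get(action);
        if (activity != null) activity.accept(extra);   // "launch" the matching activity
    }

    public static void main(String[] args) {
        // The launcher intent picks out the main activity, like tapping an icon.
        register("MAIN/LAUNCHER", extra -> lastStarted = "MainActivity");
        register("DIAL", number -> lastStarted = "DialerActivity:" + number);

        startActivity("MAIN/LAUNCHER", null);
        System.out.println(lastStarted);  // MainActivity
        startActivity("DIAL", "555-0100");
        System.out.println(lastStarted);  // DialerActivity:555-0100
    }
}
```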
This is actually a very good idea, because it makes it possible to use the same system (or custom) activity from many different activities. Also, every activity started from another activity is added to an activity stack. Pressing the back button, or closing an activity some other way, brings you back to the previous activity in the stack. Automatically.

    6 Lifecycle and configuration changes

    So now you have an idea of how things work in Android: one APK with a manifest, one application class, one or more activities, at least one of them handling the launcher intent and the others handling no intent or specific intents, and probably some activities with multiple fragments/views. The most important thing to understand is the activity lifecycle. These are all callbacks to your activity class. Inside onCreate you usually load the UI. When a configuration change happens, like switching from portrait to landscape, your activity is destroyed and recreated(!). In this case you can save data into a class called the Bundle, which is passed back to onCreate. The principle behind this choice is that you might want to load different layouts for portrait and landscape modes. Is it annoying? YES. When another event occurs (a phone call, for example) the application is paused. It will be resumed afterwards. Well... kind of. When an application is paused and the system claims more memory, the app can be destroyed. Is it annoying? Again... YES. Especially if that means you lose your OpenGL context and have to reload all textures, meshes, etc. Also note that in Android, clicking the back button until you close the app pauses and destroys the app, while clicking the home button to go back to the main screen only pauses it. The idea is that an app that resides in memory and is paused will come back to the foreground very fast.
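The destroy-and-recreate cycle can be sketched in plain Java. This is a toy model only: the "bundle" is just a Map standing in for android.os.Bundle, and a real activity would override the framework's onSaveInstanceState/onCreate rather than define them itself:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of an activity destroyed and recreated on a configuration
// change, with its state carried across in a saved "bundle".
public class LifecycleDemo {
    static class Activity {
        int score;

        void onCreate(Map<String, Object> savedInstanceState) {
            // Restore saved state if we are being recreated, else start fresh.
            score = savedInstanceState == null ? 0 : (Integer) savedInstanceState.get("score");
        }

        Map<String, Object> onSaveInstanceState() {
            Map<String, Object> bundle = new HashMap<>();
            bundle.put("score", score);
            return bundle;
        }
    }

    // Simulate a rotation: save state, destroy the activity, recreate it.
    static int simulateRotation() {
        Activity a = new Activity();
        a.onCreate(null);                       // fresh launch
        a.score = 42;                           // user plays for a while
        Map<String, Object> bundle = a.onSaveInstanceState();
        a = new Activity();                     // old instance is destroyed(!)
        a.onCreate(bundle);                     // recreated with the saved bundle
        return a.score;
    }

    public static void main(String[] args) {
        System.out.println(simulateRotation()); // 42
    }
}
```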
7 Services, alarm manager, wake locks and broadcast receivers

    In a desktop environment you think of a service as something that runs in the background, or something that provides a set of functions you can call. In Android, a service is a class that does... almost nothing. If you need a thread inside a service, you have to create it yourself. Everybody asks about reading positions or contacting a server once in a while. Their idea is that they create a service, run the service, and then everything will work, even when the phone is in sleep mode. Sorry, it doesn't work like that. If you could do something like that, you would kill the battery in no time. The solution is to schedule an alarm via the AlarmManager so that an event is generated once every X seconds (better: minutes), then intercept the event, acquire a wake lock, perform the task and release the wake lock. When you don't touch your phone, it soon turns the screen off. After a while it goes into sleep mode to save battery. It wakes up when an event occurs (like the alarm from the AlarmManager). At that point, in order to prevent the phone from going back to sleep, you acquire a wake lock, keep the phone "alive", and then inform it that it can go back to sleep. When an alarm fires, it broadcasts an intent you specify. You can define a BroadcastReceiver class that handles that intent. In general this will start a service, which will start a thread, which will perform the operations and then stop. It is complicated. It makes sense, but the first time it can be confusing.

    8 Passing data between activities

    You can set extra parameters on the intent when calling an activity. The problem is that this works only if the data is trivial. This is one of the most asked questions: what is the correct way to pass a lot of data from one activity to another? Well, there is no clear answer. As explained before, an activity should be self-contained and reusable. On one hand it should not rely on complex data being passed.
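The schedule → wake → work → release cycle from section 7 can be sketched with a plain-Java ScheduledExecutorService. This is only an analogy under stated assumptions: AlarmManager, WakeLock and BroadcastReceiver exist only on the Android runtime, so none of the real APIs appear here:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Analogy for the AlarmManager pattern: a periodic "alarm" fires, the
// handler does a short task, and in between the "device" is free to sleep.
public class AlarmDemo {
    static final AtomicInteger fired = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService alarms = Executors.newSingleThreadScheduledExecutor();
        CountDownLatch done = new CountDownLatch(3);

        // Fire an "alarm" every 50 ms (on Android: minutes, via AlarmManager).
        alarms.scheduleAtFixedRate(() -> {
            // acquire wake lock -> perform the short task -> release wake lock
            fired.incrementAndGet();
            done.countDown();
        }, 0, 50, TimeUnit.MILLISECONDS);

        done.await();        // the task ran three times
        alarms.shutdown();   // stop scheduling further alarms
        System.out.println("alarm handler ran " + fired.get() + " times");
    }
}
```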
On the other hand, if you have a specific activity for your specific application, you might want to pass complex data without worrying about allowing other apps to use your activity. As sad as it sounds, the solutions are the following:
    - use an SQLite database
    - use the app preferences
    - use the application class
    - use a SINGLETON (not kidding, actually suggested by people working at Google!)

    9 UI and code flow

    Forget about your main. There is no obvious code flow except for the callbacks described in the activity lifecycle. The activity starts, loads the UI, displays the UI. That's it. In order to do something useful, everything has to use callbacks. The programming model is more or less the following:
    - set the XML file representing the UI
    - get an object from a resource view ID that represents an item in your UI
    - add a listener to that object so that something happens when the user interacts with it

    For game programming there is an exception: the OpenGL renderer is called continuously. Obviously, if you want to separate the game logic from rendering, you will have to create a new thread and handle synchronization of rendering data between the two threads. As for the UI, everything has to be done in an XML file. For an OpenGL game this is quite easy, as you will have a GLSurfaceView covering the entire screen. For normal activities this means placing each element relative to the others, or declaring layouts that are linear, relative or scrolling. Due to the nature of Android and the wide range of resolutions and devices, that is the only way to create a consistent UI. Consider also that many apps use many layouts (not only for landscape/portrait, but also for different screen densities and resolutions). In OpenGL the problem with the UI is that you have to take care of the different aspect ratios.

    10 Threading

    Back in the days of Android Gingerbread (2.3), 1.2 GHz dual-core CPUs were the coolest guys in town.
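The singleton option from the data-passing list in section 8 is easy to sketch in plain Java. Class and key names here are made up for illustration; this is the pattern, not a recommendation:

```java
import java.util.HashMap;
import java.util.Map;

// The much-maligned singleton data holder: "activity A" fills it in,
// "activity B" reads it back, no intent extras involved.
public class DataHolder {
    private static DataHolder instance;
    private final Map<String, Object> data = new HashMap<>();

    private DataHolder() {}

    public static synchronized DataHolder getInstance() {
        if (instance == null) instance = new DataHolder();
        return instance;
    }

    public void put(String key, Object value) { data.put(key, value); }
    public Object get(String key)             { return data.get(key); }

    public static void main(String[] args) {
        // "Activity A" stores a complex object before starting "activity B"...
        DataHolder.getInstance().put("playerSettings", new int[] {1, 2, 3});
        // ...and "activity B" retrieves it in its onCreate.
        int[] settings = (int[]) DataHolder.getInstance().get("playerSettings");
        System.out.println(settings.length);  // 3
    }
}
```

The obvious caveat, given the lifecycle discussion above, is that a static singleton dies with the process, so it cannot be the only copy of anything you can't afford to lose.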
Because of what I explained before, it was considered "normal" to just sit there, wait for the user to press a button and then perform the operations we needed. The programming model is tempting from this point of view. The problem is that people started performing network operations inside the callbacks. Remember it is not a desktop PC; it is a mobile device. You might have poor connectivity. What happened was that many apps looked so slow and crappy that it started affecting the user experience. Try to explain to a user that his brand new octa-core 2.5 GHz smartphone is totally stuck when he presses the refresh button. He will blame the OS, maybe switching to the competitor next time he has to buy a device. That's why, starting from Ice Cream Sandwich, Google decided that no network operation can be performed on the UI thread. So take into account that every network operation has to be done on a new thread. To sum up: an average application has the UI thread (the UI callbacks), a main thread (the activity thread) and probably another short-lived thread (or AsyncTask, or some other threading mechanism) for networking. In a good game you also have the game logic thread sitting beside the rendering thread. If you come from desktop programming, consider that Android programming is ALWAYS multithreaded. Not so shocking to see octa-cores around.

    11 External memory, SD cards and security

    You want to save data. Maybe a lot of data. Android supports SD cards. Just write garbage there and you are fine. Release a small game, then download 1 GB of data from an external website and place all that stuff somewhere on the SD card. There are two problems with that. The first problem is that in Android you only have access to internal memory and external memory. You might think internal memory = the device's internal memory and external memory = the SD card. Obviously it doesn't work like that. Internal memory is a private folder for your application, stored in internal memory.
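The "never touch the network on the UI thread" rule from section 10 can be sketched in plain Java: a worker thread does the slow fetch and hands the result to a queue standing in for the UI thread's message loop. fetch() here is a made-up stand-in for a real network call, not any Android API:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// A worker thread performs the slow "network" call; the "UI thread"
// only ever consumes finished results from its message queue.
public class NetworkThreadDemo {
    static final BlockingQueue<String> uiQueue = new LinkedBlockingQueue<>();

    static String fetch() {                 // stand-in for a slow HTTP request
        try { Thread.sleep(50); } catch (InterruptedException ignored) {}
        return "response";
    }

    public static void main(String[] args) throws InterruptedException {
        new Thread(() -> uiQueue.add(fetch())).start();  // never block the UI thread
        String result = uiQueue.take();                  // "UI thread" picks up the result
        System.out.println(result);                      // response
    }
}
```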
External memory is the rest of your internal memory. What about the SD card? The SD card is mounted in a folder in internal memory, and every manufacturer can change its name, even from model to model or from OS version to OS version. If this is not enough, let's talk about the second problem. If you could find the SD folder, you could write into it, until KitKat (4.4). Now every phone with KitKat or Lollipop does not allow apps without root privileges (and your game should never have or require those privileges) to write to the SD card. You might ask whether people complained about this change and whether many apps stopped working. Well, both answers are yes. People complained, and many apps stopped working or suffered from reduced functionality (file managers). Why is Google enforcing this? There are two good reasons. The first is that Android has no way to tell what you have been writing around. If everybody started writing to the SD card (and many developers did), then once the user uninstalled the app you would still have garbage around. Also, relying on data stored on removable media for an app in internal memory to work is not a good idea in general. You can still write to internal memory, but writing into your private folder is the way to go. This way a user who uninstalls will effectively have a clean phone. The second reason has a lot to do with security. SD cards have FAT32 filesystems. There is no encryption. Allowing apps to write there means that somebody with your removable media in his hands might easily steal precious information.

    Conclusion

    I hope you will find these hints useful. Mobile programming can be confusing at the beginning, especially for game developers used to working on desktops, where 500 W beasts and huge screens are the norm. Adapting to this new environment requires a different mentality.
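The private-folder approach from section 11 can be sketched with plain java.io. getFilesDir() exists only on the Android runtime, so a temporary directory stands in for the app-private folder here; names are illustrative:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Save data in the app's private folder so an uninstall leaves a clean phone.
public class StorageDemo {
    static Path privateDir;   // stand-in for context.getFilesDir()

    static Path save(String name, byte[] data) throws IOException {
        if (privateDir == null) privateDir = Files.createTempDirectory("appdata");
        Path file = privateDir.resolve(name);  // everything stays under one folder
        Files.write(file, data);
        return file;
    }

    public static void main(String[] args) throws IOException {
        Path saved = save("settings.bin", new byte[] {1, 2, 3});
        System.out.println(Files.readAllBytes(saved).length);  // 3
    }
}
```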
  2. I often write getters and setters. Not every time I declare a variable, of course. First, I think we should not associate getters and setters so tightly: a setter is potentially a lot more dangerous than a getter. On the other hand, I assume it might be convenient to implement a getter, especially if you are subclassing. For this reason, in my code a setter is often declared as protected. While it is true that a constant might be enough, and I totally understand the point behind YAGNI, for any non-trivial project I usually have data loaded from an external file. I want to be able to modify the data file and restart, with no recompilation needed. In that case a public getter and a protected setter are a useful combination. The point is, if you have been programming for years, it is likely you have self-contained components which might be reused in different projects, so it makes sense from my point of view to take a component I already have in my toolbox, create new classes according to my needs, tweak the data files and see what happens.
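The public-getter/protected-setter combination described above looks like this in Java (class and field names are made up for illustration): outside code can read the value, but only subclasses, such as the one driven by an external data file, can change it.

```java
// Public getter, protected setter: readable by everyone,
// writable only by subclasses (e.g. a data-file loader).
class Weapon {
    private int damage = 10;

    public int getDamage() { return damage; }

    protected void setDamage(int damage) { this.damage = damage; }
}

// A subclass tweaked from an external data file; Weapon itself never recompiles.
class DataDrivenWeapon extends Weapon {
    DataDrivenWeapon(int damageFromDataFile) {
        setDamage(damageFromDataFile);   // allowed: we are a subclass
    }
}

public class AccessorDemo {
    public static void main(String[] args) {
        Weapon w = new DataDrivenWeapon(25);
        System.out.println(w.getDamage());  // 25
    }
}
```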
  3. Is using a debugger lazy?

    Assuming the professor has some knowledge, and assuming he has met many unskilled people who try to code "randomly"... well, he might have a point. I mean, if you write something and then by default reach for the debugger, because there's surely something wrong you missed in your own code, then maybe it's better if you don't have a debugger and just learn how a while or for loop works. But in the end, debugging functionality, from a broader point of view, is also a printf. Debugging is also mentally parsing your code, trying to figure out what's wrong with your algorithm. But the development of any non-trivial software in 2012 needs good debugging tools. Those who fail to realize it either live 40 years in the past or have never written (and shipped) any non-trivial product.
  4. android memory restrictions

    Just a few additions. Certain apps, like live wallpapers, can have TWO instances running at the same time. This happens when you have your live wallpaper set and the user decides to change its settings. In order to do that, he has to go into the live wallpapers menu and select the wallpaper. When he does, another instance of your app is created as the preview starts, effectively creating a second instance before the user can press "settings". This happens because the "engine generator" in a live wallpaper is unique (technically it is the wallpaper service), so multiple instances are generated sharing the same heap space. In situations like that, if you have a decent screen resolution and you are loading multiple images, you can run into problems (a single 1200x800 32-bit image is 3.5-4 MB). In such cases you might want to give Bitmap.recycle() a try, to quickly free heap space before loading another image...
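The per-image figure quoted above is easy to check: an uncompressed 32-bit bitmap costs width × height × 4 bytes on the heap.

```java
// Heap cost of an uncompressed bitmap: width * height * bytesPerPixel.
public class BitmapCost {
    static double megabytes(int width, int height, int bytesPerPixel) {
        return (double) width * height * bytesPerPixel / (1024 * 1024);
    }

    public static void main(String[] args) {
        // A single 1200x800 32-bit image:
        System.out.printf("%.2f MB%n", megabytes(1200, 800, 4));  // 3.66 MB
    }
}
```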
  5. Hi, not strictly game programming, but I recently released a live wallpaper for Android. The interesting part is that it uses OpenGL ES 2.0 for rendering and has a realtime glass marbles/lens effect. FREE: [url="https://play.google.com/store/apps/details?id=net.drkappa.lwp.madmarbleslite"]https://play.google.....madmarbleslite[/url] PAID: [url="https://play.google.com/store/apps/details?id=net.drkappa.lwp.madmarbles"]https://play.google.....lwp.madmarbles[/url] Comments and feedback are welcome! I'm also interested in hearing about compatibility issues with GLES 2.0...
  6. Lost between Android Activity, Service, Intent...
  7. Indoor rendering

    [quote name='Krypt0n' timestamp='1320249200' post='4879719'] for outdoor, you are usually limited by 1. as you have a big view range, seeing tons of objects, not really important if they have perfect shading, in addition most parts are lit just by the sun, and usually you have a big part of the rendering covered by 'sky'. for indoor rendering, you will rather be limited by 2, you cannot stick in 5000 individual drawcalls into the level in every room, as that would mean you have about 200pixel per object, or 16x12. it would be like creating your level of tiny bricks.. so, while you are right bout those two limitations, it's very context dependent, what you need to optimize for. [/quote] I agree, I was posting that as general optimization rules. [quote] that's how it was before the year 2000, since about the geforce256 (geforce 1), we stopped touching individual polygones, it's just way faster to push a whole 'maybe' hidden objects, than updating thousands of polys on cpu side. per pixel graphics was anyway too slow at that time to do anything fancy (even one fullscreen bumpmapping effect was dropping your framerate <30). [/quote] Exactly, rendering potentially hidden geometry is faster on modern GPUs. [quote] the problem with optimzations for a theoretical situation is that you cannot know what would help or make it worse. 'optimizing' is just exploiting a special context, it's trying to find cheats that nobody will notice, at least not in visual artifacts. while your ideas are valid, they might not change the framerate at all in real world, they might make it faster just like it all might become slower. I think, if I had 200+ drawcalls for an indoor scene, I'd probably not care about drawcalls at all. if I'am drawcall-bound with just 200, there must do something seriously wrong. 
[/quote] Well, the problem isn't that 200 draw calls are limiting; my point is that if, in a simple scenario like that, there's a solution that submits 10% of the draw calls, it just looks like a good solution. [quote] considering this, the situation is way simpler, you might observe, that the geometry is not your problem, neither is the actual surface shading, you will be probably limited by lighting and by other deferred passes (e.g. fog, decals, postprocessing like motion blur). so, it might be smart to go deferred like you said, you dont need a zpass for that, you probably do best with simple portal culling in combination with scissor rects and depthbound checks. now all you want to optimize is to find the perfect area your lights have to touch, to modify as few pixel as possible and you need to solve a problem, (nearly) completely unrelated to BSP/Portals/PVS/etc. you might want to - portal cull lights and/or occlusion culling -depth carving using some light volumes (similar to doom 3's shadow volumes) -fusing deferred + light indexed similar to frostbite 2 -reducing resolution like most console games do, maybe just for special cases e.g. distant lights, particles, motion blur (e.g. check out the UDK wiki), with some smart upsampling -you might want to try scissor and depthbound culling per light -you might want to decrease quality based on frame time (e.g. less samples for your SSAO) -you might want to add special 'classification' for lights, to decide how to render which type of light, under what conditions, with which optimizations, e.g. it might make sense to batch a lot of tiny lights into one drawcall, handling them like a particle system, it might make sense to do depth carving just on near-by lights, as distant lights might be fully limited by that carving and a simple light-objects would do the job already). what I want to basically show is, that your scene handling is nowadays not as big deal as it was 10 years ago. 
you still dont want to waste processing power, of course, but you won't implement a bsp system to get a perfect geometry set, I've even stopped to use portal culling nowadays. it's rather important to have a very stable system, that is flexible and doesn't need much maintaining, while giving good 'pre-processing' for the actual expensive stage nowadays, which is the rendering. (as an example, "resistance" used just some kind of grid with PVS, no portals, bsp etc.) and like you said, there are two points, drawcalls and fillrate, you deal with them mostly after the pre-processing (culling). you have a bunch of drawcalls and you need to organize them in the most optimal way you can, not just sorting or batching, as for indoor you'll spend probably 10% of your time to generate a g-buffer, you'll have 10%-30% creating shadow maps, the majority of the frame time will be spend on lighting and post processing effects. [/quote] Yes, that was exactly my point. In my experience, static (and opaque) geometry is just submitted to the g-buffer (I go deferred) with no spatial structure traversal (I can generate an octree if needed, but just submitting the geometry usually turns out to be faster). Then all optimizations are about dynamic objects and lights/shadows. I also spent some time optimizing shadows with regard to static vs. dynamic geometry, shadow map resolution, distance, etc. And since all my shaders are assembled and generated on the fly according to the effects each material requires, I can also generate simpler shaders if there's not enough horsepower available. The only reason I still use a spatial structure is for scenes making heavy use of transparent static objects. I use a BSP, but that was a very specific scenario, in which I had to use the engine to render a real-world building that was 70% glass, with different colors and opacity levels. In that case I needed a perfect geometry set and perfect sorting, so I went for a BSP. 
Of course it's a performance killer but I couldn't come up with a better solution at that time.
  8. Indoor rendering

    [quote name='AgentC' timestamp='1320238981' post='4879659'] [quote name='undead' timestamp='1320236467' post='4879639'] In which way any acceleration structure can render something faster than that? [/quote] You'll have to profile if the benefit of the reduced draw call count outweighs the penalty from the vertex buffer updates needed by dynamically merging your visible set of objects. My gut feeling is that draw calls are becoming less expensive as CPU speeds go up, while updating large vertex buffers can be costly, so there might be "hiccups" as you for example rotate your camera, and the visible set changes. For reducing overdraw, you can also do something like setting a threshold distance where you render the closest objects front-to-back without state-sorting, then switch to state-sorting for the rest. Octrees can also be used without involving any splitting of the objects, this is commonly accomplished by so-called "loose octrees" where the objects can come out halfway from the octree cell they're based in. [/quote] Well, my idea is to logically divide your level into object types. Let's consider 4 different object types:
- static, unsorted (a static indoor level)
- static, sorted (static translucent objects)
- not static, unsorted (a character)
- not static, sorted (a movable glass)
Of course my approach is intended only for static objects not needing sorting. In that case there's no need to update the vertex buffer, no special work to perform when the camera moves, etc. Yes, it's brute force and inelegant, but I don't see how an octree (loose or standard) can be faster than just merging static geometry. And even if your geometry won't fit into a single vertex buffer, you can still use multiple VBs containing geometry grouped by the same criteria (example: materials 1 to 50 go into the first vertex buffer, materials 51 to 70 into the second, etc.). 
As for transparency: unless you use order-independent transparency, a portal/octree isn't enough to accurately resolve sorting, which in theory should be performed per polygon. BSPs can be useful in this case. My point is that, as far as I can see, the OP has a good portal prototype working on static geometry, which might be more useful for objects than for the level itself. Maybe I am missing something...
  9. Indoor rendering

    This thread is very interesting, but there's something I don't get about acceleration structures in AD 2011. From my (little) experience and (maybe poor) understanding, there are two huge limiting factors when it comes down to rendering speed: 1. batch as much as you can in order to limit the number of draw calls; 2. limit GPU cycles spent on a per-pixel basis. The theory is pretty obvious: by using an acceleration structure you limit your draw calls by rendering only the potentially visible (polygon) set. At the same time, when using occlusion culling, you indirectly limit the GPU cycles by reducing overdraw. My problem is this: there are different ways to solve the same problem without side effects. Problem: you have 100 objects to render, not cloned/instanced, each made up of 2 materials, one shared and one chosen from a set of 8 different materials. Worst-case scenario: select material 1, render, select material 2, render, switch to the next object. 200 draw calls and 400 material (texture/shader) switches. Second solution: select material 1, render each object, select material 2, render each object using that material, then switch to the next material. 200 draw calls and 9 material (texture/shader) switches. Now, if we use an acceleration structure like a BSP or an octree, we are actually splitting objects, introducing more polygons. So if an object gets split, that implies we have two different objects, thus increasing the total object count (and draw calls). On the other hand, some acceleration structures can reduce overdraw, so this might still be a winner. What I ask is: if I merged all the polygons sharing the same material and took advantage of a z-pass (or used a deferred renderer), what kind of performance would I get? Even if I didn't create a super-merged object for the z-pass, I would be able to issue: 9 draw calls for the z-pass, and 9 draw calls for the actual rendering, with 0 overdraw (guaranteed by the z-pass). 
I can submit 18 draw calls compared to 200+, and I'm 100% sure there's no overdraw at all... and as for rendering more polygons, polygon count usually isn't a big problem in 2011... or at least it's not as limiting as shading. In what way can any acceleration structure render something faster than that?
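The counts in the scenario above (100 objects, 2 materials each, one shared plus one out of eight, so 9 distinct materials) can be tallied as a quick check:

```java
import java.util.Arrays;

// Draw-call and state-switch counts for the three strategies discussed:
// naive per-object switching, material-sorted, and merged + z-pass.
public class DrawCallCount {
    static int[] counts() {
        int objects = 100;
        int materialsPerObject = 2;
        int distinctMaterials = 1 + 8;  // one shared + one out of eight

        int naiveDrawCalls = objects * materialsPerObject;     // 200
        int naiveSwitches = objects * materialsPerObject * 2;  // 400 (texture + shader each time)
        int sortedSwitches = distinctMaterials;                // 9, with the same 200 draw calls
        int mergedDrawCalls = distinctMaterials * 2;           // 18: 9 z-pass + 9 shading

        return new int[] { naiveDrawCalls, naiveSwitches, sortedSwitches, mergedDrawCalls };
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(counts()));  // [200, 400, 9, 18]
    }
}
```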
  10. Using tolua++

    When I want to export a class to Lua, I always create a wrapper class. Am I the only one doing this?
  11. MacBook Pro good for game dev?

    [quote name='Serapth' timestamp='1315672214' post='4860036'] I am not avidly anti-Mac, I actually quite like my iMac for what I use it for, at 1000$ ( at the time ) for a single plug solution wasn't a bad deal really, if I figure Mac OS as a value add, which in that case I do. The Mac Mini is an interesting machine, which if they shipped with slightly better video, I would have picked on up as a Media Center solution. Then again, at 400-500$, there isn't much of an Apple Tax on them either. It's once you start talking laptops that the price gap becomes quite so vast. The entry level MacBooks have fairly appalling specs, but the Pro's with more reasonable specs are just outrageously expensive. However, if you want a high end MacOS portable experience, they are the only game in town. If on the other hand, you want a Windows machine, they are an overwhelmingly bad purchase. [/quote] I got a 2011 Mac mini with an ATI card (not a great video card, but OK). And I couldn't find a PC equivalent, because all the small PCs I found had an Atom CPU. I already had a spare non-OEM Win7 64 Professional. Now I have a media center, a portable PC, a second development machine for hobby work or emergencies, and the chance to learn something about iOS programming. I never had a Mac before, because I use Windows, and they cost a lot and give less, performance-wise. And yes, the Apple Tax isn't much on a Mac mini. I agree, in the end it's all about what best fits somebody's needs.
  12. MacBook Pro good for game dev?

    [quote name='3x3is9' timestamp='1315670398' post='4860019'] [quote name='Serapth' timestamp='1315668666' post='4860007'] A MBP booted mostly into Windows is a bad purchase. Period. Mac OS is about the only reason to pay the ( rather horrid ) Apple Tax, so if you are going for mostly Windows, there are many many many many better choices than a Mac. [/quote] Now, I don't really want to bring up an Apple debate, because they're usually pointless. I'm just wondering why you would say a MBP with Windows is a bad idea. Is it a hardware/software incompatibility issue? Or is it just that it's a waste of money to use Windows on a Mac? The reason I am buying this mainly for school, but it's also going to be for personal use. I like Mac OS X better than Windows, so I can switch back when I'm not working. The thing I'm worried about is wasting my money on a laptop that won't accomplish my school work, which is why I'm getting it. If a MBP won't do the job, I'll just have to suck it up and get a PC laptop, but of course I'd rather buy a Mac. [/quote] I don't want to defend Serapth, as he can do that himself, but he said "if you are going for MOSTLY Windows".
  13. How to Log and stay Modular

    It's simple. You have a globally accessible class, let's say an application class, even if that might not be the case; it depends on your design. Question: who is responsible for enabling/disabling/configuring the log? Answer: your application, because different applications might require different logs, and you also might want to change one line to enable/disable logging, compile and run without rebuilding everything. So your application class should configure the logger at startup. Question nr. 2: who can access the log? Answer: potentially every class. So what you do is build globally accessible macros for emitting logs, pointing at a class WRAPPING the logger. Example:

#define my_LOG(sStr,...)    myLogWrapper->EngineLog(my_LOGT_UNKNOWN,__LINE__,__FUNCTION__,__FILE__,sStr,__VA_ARGS__);
#define my_WRNLOG(sStr,...) myLogWrapper->EngineLog(my_LOGT_WARNING,__LINE__,__FUNCTION__,__FILE__,sStr,__VA_ARGS__);
#define my_ERRLOG(sStr,...) myLogWrapper->EngineLog(my_LOGT_ERROR,__LINE__,__FUNCTION__,__FILE__,sStr,__VA_ARGS__);

Which is nothing special: just a call to a global log wrapper class, also passing a log type. The log type identifies the error level. For example, the base module always logs CPU/OS/system info and my rendering module logs the device capabilities; both are INFO logs. The point of having a wrapper is simple. The base module always initializes the wrapper, but not the log. This way, when you want a logger, you actually ADD a logger to the wrapper. So the function EngineLog, in its simplest form, iterates over all available logs in an array of logs. If none is available, no message is emitted; if there are many, you have multiple output logs. 
And what I like is that it's easy to change how the log works. For example, if you wanted to bind specific log types to different log files, all you would need to do is write a new EngineLog method, change the macros and recompile. That's it.
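The wrapper idea above translates to any language. Here is a minimal sketch in Java (the post itself uses C++ macros; the interface and class names below are made up for illustration): the wrapper holds zero or more loggers, and emitting a message simply iterates over whatever loggers the application added at startup.

```java
import java.util.ArrayList;
import java.util.List;

// A log wrapper holding zero or more loggers: no logger added means
// messages are silently dropped; several loggers mean several outputs.
public class LogWrapper {
    public interface Logger { void write(String level, String message); }

    private final List<Logger> loggers = new ArrayList<>();

    public void addLogger(Logger l) { loggers.add(l); }

    public void engineLog(String level, String message) {
        for (Logger l : loggers) l.write(level, message);
    }

    public static void main(String[] args) {
        LogWrapper wrapper = new LogWrapper();
        List<String> captured = new ArrayList<>();

        wrapper.engineLog("INFO", "dropped: no logger configured yet");
        wrapper.addLogger((level, msg) -> captured.add(level + ": " + msg));
        wrapper.engineLog("ERROR", "device lost");

        System.out.println(captured);  // [ERROR: device lost]
    }
}
```

Swapping file, network or xml loggers in and out then only touches the startup configuration, which is exactly the point of the wrapper.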
  14. How to Log and stay Modular

    My solution is a mix of those proposed in some replies. A base module provides logging capabilities. The logger is a class, but log messages are added via macros, with different error levels. There are many logger classes, each supporting a different log format (right now just txt, rtf and xml; xml logs have some extra features like error-level filtered search, etc.). That way I still have global access, but at the same time I can change the log type when I need to. I could easily write a network logger class and use it by modifying 2 lines of code. I can also easily select verbosity based on error levels.
  15. MacBook Pro good for game dev?

    [quote name='Serapth' timestamp='1315668666' post='4860007'] [quote name='3x3is9' timestamp='1315668253' post='4860005'] Thanks for the replies. I'm not sure if I made it clear enough that I'd be installing [b][size="4"][u]Windows 7[/u][/size][/b] on the MacBook Pro. Most of the software that the school provides is Windows-only, and I imagine that game dev is pretty Windows based in general. It's been mentioned that Mac's don't have great support for OpenGL, which if I understand right from a wiki search, is a software thing. Would dual-booting with Windows 7 take care of the OpenGL problems? [/quote] A MBP booted mostly into Windows is a bad purchase. Period. Mac OS is about the only reason to pay the ( rather horrid ) Apple Tax, so if you are going for mostly Windows, there are many many many many better choices than a Mac. [/quote] I agree, with one exception. The Mac Minis.