Anonymous P

Members
  • Content count: 112
  • Joined
  • Last visited

Community Reputation: 429 Neutral

About Anonymous P
  • Rank: Member
  1. You can comfortably stand on the shoulders of giants in this case: [url="http://www.graphviz.org/"]http://www.graphviz.org/[/url]
  2. [quote] Anyway, from what the OP states, he needs Subversion/Merc/Perforce but it needs to silently run in the background, flawlessly merge, and automatically handle any possible conflict with no user intervention? Don't mean to be negative but ... good luck with that. [/quote] No, as I said before, he actually needs [url="http://en.wikipedia.org/wiki/Rsync"]rsync[/url], but for some reason everyone seems to prefer Rube Goldberg software... Efficiently syncing files across networks is such an unusual problem, who'd have thought a solution already existed, right?
  3. [url="http://rsync.samba.org/"]Rsync?[/url] I'd just drop the whole manifest approach (thereby eliminating several possible failure modes) and use rsync. Also gets rid of the need to keep old files around, while still keeping bandwidth costs for a client update proportional to the staleness of their version. Seems like a solved problem to me...
  4. As always, do the simplest thing first, then optimize IF NECESSARY. It is much easier to optimize a correct design than it is to correct a broken 'optimized' design.

     [quote] Do you think a table with stockID, date, dayHighest, dayLowest and dayAverage would be sufficient? [/quote]

     If you defer the optimization decision, you can decide whether to implement it this way or not when you actually KNOW whether it's sufficient or not. I'd start with what you actually need right now, which is a table recording stock price changes.

     [quote] By caching subsets (last week, last month, 3 months, 6 months, last year, 5 years) it shouldn't matter if the history data table gets a bit bigger but I need a good performance for the intraDay price table. [/quote]

     As you familiarize yourself with modern SQL database implementations, you'll learn that the performance of queries on a dataset is basically independent of its size if you can create an appropriate index. If you've created a structure more or less like this:

     [code]
     create table stock (
         id integer primary key,
         name text not null
     );

     create table stock_price (
         stock_id integer references stock (id),
         stamp timestamp not null,
         value integer not null,
         constraint stock_price_pk primary key (stock_id, stamp)
     );

     create index stock_price_stamp_idx on stock_price (stamp);
     [/code]

     your SQL query optimizer (assuming it was programmed after the first world war and is not MS Access) will consider a backward index scan for queries like "select * from stock_price where stamp > $yesterday". This means you can dump millions of rows into stock_price and they won't noticeably affect that query's runtime. SQL implementations put a lot of work into performing optimizations themselves so you can be free to focus on more important stuff.

     Later, if your aggregate calculations are too slow (if you use internal SQL mechanisms like GROUP BY they can be surprisingly quick) you can build additional tables for optimization, and deal with the headaches that safely synchronizing your cache to your raw data usually brings, and the consequent loss of flexibility. But until (or if) that becomes necessary, why bother?
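     To make the "surprisingly quick" point concrete, this is roughly the kind of aggregate query I mean against the stock_price table above (the stock_id value is a placeholder, and the exact date-truncation syntax varies between databases):

     [code]
     -- daily high/low/average for one stock, computed straight from the raw rows
     select date(stamp) as day,
            max(value)  as day_highest,
            min(value)  as day_lowest,
            avg(value)  as day_average
     from stock_price
     where stock_id = 42
     group by date(stamp)
     order by day;
     [/code]

     With the (stock_id, stamp) primary key in place, that stays fast for a long time before a separate summary table earns its keep.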
  5. You can come at this problem from a more formal point of view. You may notice that your containment graph IS NOT NECESSARILY A TREE, or a forest (a set of trees): take the bounding boxes given by

     ( (0, 0), (40, 40) )
     ( (20, 20), (50, 50) )
     ( (20, 20), (40, 40) )

     Your containment relation is transitive and antisymmetric. If you make it reflexive (you assume a shape is contained in itself) you've got yourself a partial order, so you know you'll always be dealing with a directed acyclic graph and can apply other useful knowledge that holds for partial orderings.

     Next you can notice that if you only allow containment tests between the given shapes, there will be cases where building this graph will take quadratic time (all shapes disjoint means you need to test each shape against each other shape, as you can't use containment transitivity to cut down on your search space). So something cleverer needs to be done. My choice would be to build a spatial accelerator (see techniques for building bounding volume hierarchies, quadtrees, spatial hashes, whatever) and use that to cut down on containment tests to build your explicit graph representation (for example, adjacency lists).
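     To make the relation itself concrete, here is a minimal sketch of the naive quadratic construction in Haskell (the Box representation and names are invented for the example; a spatial accelerator would only change which pairs you bother testing):

     [code]
     -- Naive O(n^2) containment DAG as adjacency lists. An accelerated version
     -- would use a BVH / quadtree / spatial hash to prune the candidate pairs first.
     data Box = Box { xmin, ymin, xmax, ymax :: Double } deriving (Show, Eq)

     contains :: Box -> Box -> Bool
     contains a b = xmin a <= xmin b && ymin a <= ymin b
                 && xmax a >= xmax b && ymax a >= ymax b

     -- For each box, the indices of the other boxes it contains (self-edges omitted).
     containmentGraph :: [Box] -> [(Int, [Int])]
     containmentGraph boxes =
       [ (i, [ j | (j, b) <- indexed, i /= j, contains a b ]) | (i, a) <- indexed ]
       where indexed = zip [0 ..] boxes
     [/code]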
  6. Heh, the forum software mysteriously decided to uppercase some variable names in the code section above, breaking stuff.
  7. You're right, I was less than clear. The nudge I expected to give was to look at sweepline algorithms but I wrote scanline instead. Sorry. I don't know F#, but that algorithm looks quadratic (it seems you're applying min operations to the list for every interval). Here's another stab at an explanation with accompanying Haskell code.

     Suppose you have a list of intervals that contains the occupied intervals AND the interval representing the timeframe. You can partition the initial, overlapping, set of intervals into a set of disjoint intervals. If these disjoint intervals keep track of how many parent intervals overlap them, you can easily discard those with a > 1 overlap; the remaining intervals are free. This generalises to other interesting interval operations if you keep track of more than the number of overlapping parents: this is where I was coming from.

     You can accomplish this in O(n log n) time, or O(n) time if you get to use a non-comparison sort like radix sort. HOW you accomplish this is by first decomposing intervals into their endpoints, sorting them, and sweeping over these to generate subintervals and overlap counts. Once you've done this, all that remains is to filter out those intervals with intersection counts > 1. The same approach works for many other interval-related problems.

     [code]
     import Data.List

     data Interval a = In a a Int deriving ( Show )
     data Point a = Pt a Int deriving ( Show )

     decompose = concat . map (\(In a b _) -> [Pt a 1, Pt b (-1)])

     disjoint [] = []
     disjoint (Pt a e:xs) = disjoint_h e a xs

     disjoint_h _ _ [] = []
     disjoint_h accum a (Pt b e:rest) = In a b accum : disjoint_h (accum + e) b rest

     cmp (Pt a _) (Pt b _) = compare a b

     prune_empty_intervals = filter (\(In a b _) -> a /= b)

     build_disjoint :: [Interval Int] -> [Interval Int]
     build_disjoint = prune_empty_intervals . disjoint . (sortBy cmp) . decompose

     to_internal = map (\(a, b) -> In a b 1)
     from_internal = map (\(In a b accum) -> (a, b))

     find_free = from_internal . find_free_h . to_internal
     find_free_h intervals = filter (\(In _ _ accum) -> accum == 1) (build_disjoint intervals)
     [/code]

     So, let's say we have a timeframe of 1 to 10 and a few occupied intervals:

     [code]
     *Main> find_free [(1, 10), (1,2), (3, 6), (7, 9)]
     [(2,3),(6,7),(9,10)]
     *Main>
     [/code]

     edit: removed comments, highlighting makes the code with them harder to read
  8. I'd say this problem is slightly trickier than it looks at first. An elegant way to do this is to define a closed algebra on intervals with the operations union, complement and intersection: you can formulate your problem as intersection(big_timeframe, complement(union(taken_timeframes))). This can be done in an optimal way in a functional language using scanline-type algorithms on lists, or slightly less optimally and more clearly using the scanline idea and an intermediate datastructure that partitions the intervals into disjoint subintervals pointing to their containing parent intervals. I'd opt for the latter.

     You can build these operations easily using the following method. Notation: {A, B} is a tuple representing an interval, A <= B. {} means tuple, [] means list.

     1) Decompose intervals into [{Endpoint, Parent_interval}, ...]. E.g. the interval {A, B} is decomposed into the list [{A, {A, B}}, {B, {A, B}}].
     2) Sort this based on Endpoint.
     3) Use these endpoints to build disjoint subintervals which know their containing parent intervals; your datastructure is [{Subinterval, Parent_intervals_set}, ...]. 2) and 3) are basically a 1-d scanline algorithm.
     4) Find the subintervals that satisfy your predicate (union, intersection, complement). For example, union simply means find subintervals whose Parent_interval_set is nonempty. The filter hof is useful for expressing these.
     5) Merge contiguous subintervals.
     6) ???
     7) Profit.

     This can be optimized by writing union, intersection and complement directly without going through the subintervals indirection, but that's wordier and trickier.
  9. Quote: I wrote a program, that finds unique item from the list and returns the list, problem - it doesn't return the list !!!!

     1) You've written your function to return either #t, #f or nothing at all (return value of set! is unspecified). Why do you expect it to return a list when you've told it not to return one?
     2) If you've correctly pasted in your whole program, you haven't defined extractUniqueAtoms_recurse.
     3) Using destructive assignment here (set!) goes against everything that is good and pure.
  10. Nevermind, I guess the heat has me picking silly fights in forums after a decade of being on the wagon. Carry on :)
  11. Quote:
        Quote: ...condescension...
      That's really not warranted, seeing vi, emacs, etc are already way more advanced tools than notepad.

      You've miscalibrated your condescend-o-meter, and missed my point. Asking what should be done about people who work in a manner you don't approve of is condescending. My post might not be very funny, which was my primary intent, but it's not even in the same ballpark as far as condescension is concerned.

      Quote: Just curious, what made you think of the OSI and COBRA communities?

      I meant committees, hehe. They're both efforts that involved a lot of hypothesizing about what 'ought' to be done, and their proponents developed a particularly irritating variety of smug self-righteous fundamentalism about their One True Approach (or at least that was my impression). When the time came to put those ideas to the practical test, it was found that huge parts of them weren't actually very useful, and people came up with their own more modest and more elegant ways to solve the problems that were supposed to have been solved forevar. They're just the examples I try to bear in mind when I feel the urge to tell people what they should be doing in cases where simple market pressures will do the job organically and without getting people angry (choice of work environment is jihad-worthy for many people).
  12. Quote: Where is this growing insistence in my peer group (ages 18-21) for making work as laborious as possible come from?

      Where do you get 'growing'? As someone gravitating more towards vim + cmake or scons than towards VS/Eclipse/EverythingAndTheKitchenSinkPro I'd say shrinking instead. Not that it keeps me awake at night.

      Quote: Should we do something about it?

      Er, if it's as laborious as you claim they'll never get anything done and you'll blow them away with one click on Build Solution or Execute Awesome Engineering Architecture or whatever they're calling the build button on IDEs now. They'll be busy dislocating their fingers playing keyboard twister on XEmacs or whatever it is those wacky command-line folks do instead of getting shit done. You should be thankful that your competitors are wasting their time. Or, hey, maybe you could theorize about other people's workflow choices on the internet and what should be 'done' about them instead. You might have a bright future in the OSI or CORBA committees; they'd be delighted to have you.
  13. Quote: Yes because everyone who writes software can clearly afford to pay to go to university their entire lives and write software out of the good of their hearts and magical unicorns and bunnies will bring us wonderful feasts to feed our families and pay for our homes. People need X. People make X. People need money. People make X for money. Same paradigm that has existed for all of eternity. Just because a bunch of people who fail economics don't like it, doesn't mean they have the right to actively try to destroy my career.

      They have the right to do whatever they want, as long as it isn't illegal. Writing software for free is legal, distributing software for free is legal, whatever the motivation of those producing it. You may not like that motivation, or agree with it, or understand it, but it certainly results in software being available for free. If you can't compete with free software, that's your problem.

      Obviously, most consumers don't care about why free software gets written. They do care about paying more for something if an equivalent cheaper (or free) alternative will do the job. I doubt most consumers care much about what you'd like to be paid for. Most consumers don't care that children work in sweatshops to make their footwear cheaper, or drown in petroleum waste in Nigeria to make their gas cheaper.

      SOMEONE around here fails at economics. That welcome-to-the-real-world, faux-business tone you chose for whining about the free market is deliciously ironic.
  14. Quote: Original post by daviangel
        Quote: Original post by tufflax
          Quote: Original post by Scourage
          My 2 cents: Honestly for a beginner I would recommend something like python, lua or C# to get started. Learn structures and program flow. I'd then move into C (might even start there) and learn about pointers and memory management. You may want to look into C++ for object oriented programming, THEN I'd look into lisp. Lisp can be very daunting for a first language (i.e. car, cons, cdr). Cheers, Bob
        IMO C and C++ are more daunting than Lisp. Car, cons, cdr, etc. are supposed to be hard? There are lots of harder things in C and C++. I think Lisp is a fine first language. Which dialect and book are you going to use, you think? I guess SICP is one of the better books, and it's beginner friendly. If you haven't already, check out Teach Yourself Programming in Ten Years.
      I guess you've never seen this then: "Lisp's cons is perhaps the greatest fuck up in the history of computer languages." ROFL. Double L.

      I guess you didn't bother to read the reference you linked. If you did and still posted it, you should forget about programming and seek a career in the lower echelons of the fast food business. Xah Lee is a dimwitted, possibly insane, self-aggrandizing putz, best known for increasing the noise in newsgroups by spamming his inane ramblings. comp.lang.lisp is among the unfortunate targets of his 'wisdom'. He's the internet equivalent of the subway weirdo who won't shut up about whatever the hell he's going on about while you're trying to focus on your book. Hey, I guess all his spam paid off. Most people don't care about lisp; here's an invitation to join their ranks instead of posting inane drivel.
  15. Quote: Is there a commonly-used, low-overhead way to accomplish what I'm doing here? I feel like what I'm doing right now is kludgy.

      Well, that's about as heavyweight as it gets. One process per user? Each user performs a SQL query every 250 ms? Gawd. You're planning to scale to several machines... you'll need them REALLY SOON if you use that approach with more than 50 users. ;)

      You're polling frequently for infrequent updates: that's called waste. Each poll is a SQL query, which is really expensive. If your updates are infrequent but you're sensitive to latency in receiving them, you want the load to be proportional to the number of updates, so you must avoid polling for them, as low latency means high poll frequency.

      1) Taking action after a transaction event isn't what SQL databases are built for, and that's what you need. So write events to SQL if you wish, but use another mechanism external to SQL to notify waiting clients that they should check their SQL event queue. The simplest way to accomplish this is to have another central program (in addition to your SQL one) that manages the event stuff: a message generates both a SQL write and a notification to this central program, which is responsible for waking up the needed listener. How this notification should occur leads directly into

      2) Why are you using up a whole process per user? Not that this makes the event notification hard; leave each process listening on some channel through which wakeup notifications from the central notification manager arrive. This can be as simple as a pipe to the notification manager if everything's local to one machine, or a tcp socket, or a udp socket. Of course, if everything's a process they'll be clogging the kernel, but each process calling select() or epoll or blocking on an fd is preferable to each process executing a SQL query every 250 ms. The sensible thing to do would be to thread this, or even better use events or some microthreading mechanism, depending on how far you need to scale. Then all users on one machine can use the same socket or pipe for listening, for example, and you get rid of the huge ram and context switch overhead that process-per-user brings.

      Google for publish/subscribe; your message broker can be a really simple program which supports 3 operations:

      1) register(listenerid, host, port): a listener tells it it's listening and where messages to it should be relayed (i.e. its host machine's ip address and the socket it uses to listen, pipe, whatever).
      2) unregister(listenerid): a listener's stopped listening, so clear it from the routing table.
      3) send(destinationid): tell destinationid that its sql queue needs checking.
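      Just to show how little state that broker needs, here's a minimal sketch of its routing table in Haskell (the names and the (host, port) address representation are made up for the example; the actual relaying over a pipe/tcp/udp socket is left out):

      [code]
      import qualified Data.Map as M

      type ListenerId   = String
      type Address      = (String, Int)   -- stand-in for "where to relay": (host, port)
      type RoutingTable = M.Map ListenerId Address

      -- register: remember where a listener wants its wakeups relayed to.
      register :: ListenerId -> Address -> RoutingTable -> RoutingTable
      register = M.insert

      -- unregister: the listener stopped listening, drop it from the table.
      unregister :: ListenerId -> RoutingTable -> RoutingTable
      unregister = M.delete

      -- send: look up where the "check your sql event queue" nudge should go.
      send :: ListenerId -> RoutingTable -> Maybe Address
      send = M.lookup
      [/code]

      Everything else is transport plumbing.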