Rickert

Members
  • Content count: 38
  • Community Reputation: 109 Neutral

About Rickert
  • Rank: Member
  1. [quote name='Promit' timestamp='1331962593' post='4922756'] A lot of problems with OOP stem from the way it is badly mis-taught to programmers. The recent push for data oriented design is related, but it isn't quite the same. The trouble is that object oriented programming [i]as presented in most educational material[/i] is badly flawed. It does not reflect real world usage of objects, it does not reflect real world problem solving, and it certainly isn't a good way to think about code.[/quote]
Can you please contrast a simple example that "does not reflect real world usage of objects" (such as the Animal, Shape, and Car examples) with a real-world example of how OOP is actually applied?

[quote]We learn in school and in books that object oriented programming is invariably applied to logical, human based objects. I've heard this referred to as "noun-based" design, and frequently imposes constraints that have everything to do with a human abstraction and nothing to do with the technical problem we're dealing with. The problem is further exacerbated by is-a and has-a relationships, UML diagrams, design patterns (GoF), and all that other crap that passes for "object oriented software engineering". Then you wind up with projects like Java, blindly applying the OO principles in the most pointlessly abstract terms imaginable.[/quote]
Underneath the human abstraction there are still the same technical problems to deal with. What's wrong with creating a high-level point of view that is close to how humans think, and solving the technical problems underneath it?

[quote]And to top it off, most of this code never turns out to be "reusable". The biggest promise of OOP simply does not deliver. And the tight coupling headache leads us down a whole new rabbit hole of design patterns, IOC, and stuff that lands us so far in abstracted land that we've lost sight of what it was we were trying to do. I understand why the guys who think we should have stuck to C feel that way, though they are misguided.[/quote]
Can you give an example of how OOP is not reusable? I think anything that can be packaged into a library, independent of any application code, is already reusable. By that definition, reusability exists in every language and every programming paradigm. Or is OOP [i]less[/i] reusable?

Finally, I wonder: if OOP is so bad, why is it so widely used, especially in the game industry (this is what I have read and heard, anyway)?
  2. @Hodgman: Thanks for providing the low-level point of view. It's nice to learn of such a disadvantage of coupling functions and data, which I had never thought of before. However, I don't understand the 3 empty bytes of padding. Shouldn't all variables be contiguous? (See the padding sketch at the end of this post.)

One complaint about OO is that it does not reflect the real world, because not everything in the real world is an object. However, I think people are playing with words here rather than focusing on the idea. Let's not call it an object, but a concept: an object in OOP reflects a real-world concept, not an object as in a noun. For instance, kicking is an action, and it can be modeled by a class named Kick along with attributes that present it as a concept:

[CODE]
class Fighter; // forward declaration: Kick operates on Fighters

class Kick {
private:
    int velocity_;
    int force_;
public:
    Kick(int velocity, int force) : velocity_(velocity), force_(force) {}
    virtual ~Kick() {}
    void high(Fighter& target) { /* implement high kick */ }
    void mid(Fighter& target)  { /* implement mid kick */ }
    void low(Fighter& target)  { /* implement low kick */ }
    int get_velocity() const { return velocity_; }
    int get_force() const { return force_; }
};

class Fighter {
private:
    int health_;
    int ki_;
    Kick kicking_stance_;
public:
    enum AttackPosition { high, mid, low };

    Fighter(int health, int ki, Kick kick_stance)
        : health_(health), ki_(ki), kicking_stance_(kick_stance) {}
    virtual ~Fighter() {}

    void kick(Fighter& target, AttackPosition pos) {
        if (pos == high)
            kicking_stance_.high(target);
        else if (pos == mid)
            kicking_stance_.mid(target);
        else
            kicking_stance_.low(target);
    }
};
[/CODE]

So, what's the problem here? I think one reason people complain is that Kick is not an object in the real world by itself; only living entities like humans and horses can kick. Another example: if I have an Invoice class, should an invoice object invoke actions like invoice.save() and invoice.send()? This is why we have patterns, design, and design principles: people can invent different ways to present the same concept. As a consequence, OO is accused of low reusability, despite its abstraction aiming at concept reusability. In the example above, some people might put the Kick concept inside an abstract Fighter class, while others might define a MartialArtStyle abstract class and put the kick action inside it as an abstract method. This makes reuse harder: in a more complicated object system, a member function of an object may operate on the abstract data type of that object, and inside that abstract data type it operates on other abstract data types as well. This is what I got from the articles; correct me if I'm wrong.

However, I still don't think this is the fault of the paradigm, but rather of the language. Consider another scenario:
[list]
[*]An ancient wizard has a powerful spell which can turn objects (such as a tea cup) into living entities like human beings.
[*]There's a tea cup in front of him.
[*]The old wizard casts the spell on the tea cup to bring it to life.
[/list]
In a language like Java, the problem is that I have to plan ahead for the TeaCup object. If I want a tea cup to behave like a human, I have to:
[list]
[*]Write an interface for shared human behaviors, such as talk() and walk(), and let TeaCup implement it.
[*]Since not every tea cup can talk, and I want to model the real world closely so others can understand it, TeaCup must not always be able to talk.
[*]So I define a TeaCup hierarchy which (keeping it simple) has TalkingTeaCup inherit from TeaCup (the normal one) and implement the shared human-behavior interface.
[/list]
Looking at this way of constructing the concept, we can see that it creates overhead: instead of one tea cup with varied behaviors, we get different versions of TeaCup which vary a little in behavior. And this is just a simple case; imagine applying it to a large and complicated system. There is no way for me to keep the TeaCup object as simple as it is, because a TeaCup is just a TeaCup: it can't act on its own until the wizard gives it life. Thus, TeaCup must not contain any behaviors that make it "lively" (to avoid issues like whether an invoice should save()/send() or not); dynamic behaviors should be added only after the wizard casts the life spell on it. But I don't think we can add methods to TeaCup unless we write them into the class or implement them via an interface. Java can dynamically add classes with varied parameters at runtime using the Dynamic Object Model (which requires extra work): [url="http://dirkriehle.com/computer-science/research/2005/plopd-5.pdf"]http://dirkriehle.com/computer-science/research/2005/plopd-5.pdf[/url]. In C, we can use an array of function pointers, but this approach is not convenient: it operates at a low level, so it is error-prone, and the strong typing makes it less flexible.

With functions as first-class objects, there is a more elegant way to solve the problem. Consider this piece of Lisp code (I'm still learning, so pardon me if it's not elegant) written for the scenario:

[CODE]
;; Collection of wizard spells. In Lisp, a variable is just a list, which
;; might be a single value or a list of symbols/values/functions.
(defparameter *wizard-spells* nil)

;; An object with its attributes: a name and a list of actions.
(defstruct object name action)

;; The wizard spell for giving an object a life. Upon calling this
;; function, the functions talk and walk are pushed onto the object's
;; action list.
(defmethod giving-life ((o object))
  (princ (type-of o))
  (princ " is now as lively as a human!")
  (push #'talk (object-action o))
  (push #'walk (object-action o)))

;; Wizard spell for making an object poisonous, so that anyone who
;; touches it dies a moment later.
(defmethod poison ((o object))
  (princ (type-of o))
  (princ " is covered with poison."))

;; Talk action, a human behavior.
(defun talk () (princ "Hello, I can talk now"))

;; Walk action, a human behavior.
(defun walk () (princ "Hello, I can walk now"))

;; Add the spells to the wizard's spell collection.
(push #'giving-life *wizard-spells*)
(push #'poison *wizard-spells*)

;; Create a tea cup.
(defparameter *tea-cup* (make-object :name "TeaCup" :action nil))

;; funcall executes the functions stored inside the variables.
(funcall (nth 1 *wizard-spells*) *tea-cup*) ; cast giving-life on the tea cup
(funcall (nth 0 (object-action *tea-cup*))) ; the tea cup walks
(funcall (nth 1 (object-action *tea-cup*))) ; the tea cup talks
[/CODE]

In sum, OOP creates a more flexible space for "creativity" (which we call design), where people can create different abstractions for the same concepts; this can be a good or a bad thing. These are ideas that "suddenly" came to me after practicing with Lisp for a while. I'm not sure whether they are on point, so if you can verify them, I would be really thankful.
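Returning to the padding question at the top of this post, here is a minimal sketch (hypothetical Padded/Packed structs, assuming a typical target where char is 1 byte and int is 4 bytes with 4-byte alignment) of why a char followed by an int leaves 3 unused bytes:

[code]
#include <iostream>

// Assumed layout rules: int must start on a 4-byte boundary, so the
// compiler inserts 3 bytes of padding after the lone char.
struct Padded {
    char c;  // 1 byte
             // 3 bytes of padding inserted here
    int  n;  // 4 bytes, 4-byte aligned
};

// Four chars fill the alignment gap themselves, so no padding is needed.
struct Packed {
    char a, b, c, d;
    int  n;
};

int main() {
    std::cout << sizeof(Padded) << "\n"; // typically 8, not 5
    std::cout << sizeof(Packed) << "\n"; // typically 8 as well
}
[/code]

The members are still stored in declaration order; it is the alignment requirement that creates the gaps.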
  3. Recently I have read many lengthy articles on how bad OO is, among them Paul Graham's essays and the series of articles on this site: [url="http://www.geocities.com/tablizer/oopbad.htm"]http://www.geocities.com/tablizer/oopbad.htm[/url] (I haven't read them all, since I don't have much time). Basically, they say everything about OO is bad, and I fail to see it. OO may not be the perfect solution, but it's not as bad as they claim. I read a more objective article, [url="http://www.petesqbsite.com/sections/express/issue17/modularversusoop.html"]Modular Programming Versus Object Oriented Programming (The Good, The Bad and the Ugly)[/url], whose author makes the good point that a given paradigm is more suitable for certain domains. For example, OOP is very suitable for the multimedia and entertainment industries, such as "music software designers, recording studios, game designers, even book publishers, and video production groups". He points out that people in these industries tend to think in terms of objects more.

What's wrong with building an abstraction by encapsulating data and behavior into a class, and providing the class as a package to be used as-is, without worrying about the details? C does this as well: we specify a set of interfaces to the client in a .h file, implement them in a .c file, and if the source is proprietary, we can always provide the interface only (see the sketch at the end of this post). What's wrong with classification? How can you program a game in a procedural or functional way, where modeling real-world objects is counterintuitive? Even Lisp supports OOP. In C, we can define low-level structs, and if we want to transform one struct into another, we have to rely on the memory layout of each struct; we would end up writing OOP features for such transformations anyway. Since we are talking about paradigms, we should not be specific about any one language, so let's not say things like "the object model in Java is dictated by Object" or "C++/Java is too verbose".

Finally, object orientation, or any paradigm, is just a way to organize source code. Instead of millions of lines of code in main, we divide the code into smaller units and store them in different locations (files), and main only uses a nice interface from these modules (usually only one line) to invoke certain functions when needed. The act of dividing and organizing code into logical entities (classes, functions, structs...) and physical entities (files, directories) is both a logical (science) and a creative (art) task. I don't think one paradigm is suitable for every situation. Can anyone, especially from the anti-OO camp, explain this to me?
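To illustrate the C-style encapsulation mentioned above, here is a minimal sketch using a hypothetical Counter module (in practice the first half would live in counter.h and the rest in the implementation file; clients compile against the header only):

[code]
/* counter.h -- all the client ever sees: an opaque type and functions */
struct Counter;
Counter* counter_create(int start);
void     counter_add(Counter* c, int amount);
int      counter_value(const Counter* c);
void     counter_destroy(Counter* c);

/* counter.cpp -- only this file knows the layout, so it can change
   without touching client code */
struct Counter { int value; };
Counter* counter_create(int start)           { return new Counter{start}; }
void     counter_add(Counter* c, int amount) { c->value += amount; }
int      counter_value(const Counter* c)     { return c->value; }
void     counter_destroy(Counter* c)         { delete c; }

/* client code -- works entirely through the interface */
int main() {
    Counter* c = counter_create(40);
    counter_add(c, 2);
    int v = counter_value(c); // 42
    counter_destroy(c);
    return v == 42 ? 0 : 1;
}
[/code]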
  4. [quote name='Antheus' timestamp='1329486283' post='4913903'] [quote]This is what I thought from what I read anyway, please verify and correct it for me.[/quote] Not for virtual or multiple inheritance. There are multiple vtables, which are resolved during run-time, depending on how object is constructed. A * is not sufficiently defined. While in this particular case it might be obvious, it's an exception, not the rule. IIRC, multiple inheritance should be viewed as each class having its completely own vtable, rather than sharing it across hierarchy. There's also a ton of rules on how such classes are constructed and destructed. Consider a diamond:
[code]
  X
 / \
A   B
 \ /
  C
[/code]
Given an instance of C, one can cast it to either A or B, but A and B are completely distinct types. So even though we have C which has both, function that operates on A or B cannot rely on fixed layout. Last time I tried to comprehend it I gave up and decided that virtual multiple inheritance is one of those parts of C++ one doesn't use. It's also the reason why essentially no other language supports it, there's just too many complications. [url="http://www.parashift.com/c++-faq-lite/multiple-inheritance.html"]See here[/url]. [/quote]

I see. It seems the example in the book is so obvious that it hardly makes sense. From what is written in this section of the C++ FAQ, [url="http://www.parashift.com/c++-faq-lite/multiple-inheritance.html#faq-25.9"]http://www.parashift.com/c++-faq-lite/multiple-inheritance.html#faq-25.9[/url], there is no ambiguity in the example, since the virtual keyword eliminates the duplication caused by multiple inheritance, and thus accessing a data member is straightforward. Since you refer to vtables, I will modify the example to make it a bit clearer:

[CODE]
class X {
public:
    int i;
    virtual void func() {}
};

class A : public virtual X {
public:
    int j;
    virtual void func() {}
};

class B : public virtual X {
public:
    double d;
    virtual void func() {}
};

class C : public A, public B {
public:
    int k;
    virtual void func() {}
};

// cannot bind the call to func() at compile time
void foo( A* pa ) { pa->func(); }

int main() {
    foo( new A );
    foo( new C );
    // ...
}
[/CODE]

In this new code, each base subobject of a C object, as well as the complete object itself, has a virtual pointer, and it is not known until runtime which override of the virtual function should be invoked. Based on the answer to this question, [url="http://stackoverflow.com/questions/9033451/pointer-to-base-class-sub-object-which-version-of-virtual-function-is-invoked"]http://stackoverflow.com/questions/9033451/pointer-to-base-class-sub-object-which-version-of-virtual-function-is-invoked[/url], at runtime the virtual pointer of the base class subobject is replaced by that of its derived class. In this case, [b]func()[/b] cannot be determined until the actual object is passed into foo() at runtime, so the compiler cannot assign a fixed function address to the call to [b]func()[/b] in [b]foo()[/b].

So, I think the memory layout of a typical C object is:

[CODE]
1000: int i;         // start of subobject X
1004: __vptr_func_X; // virtual pointer of func() in X; it points to the address of __vptr_func_C
1008: int j;         // start of subobject A
1012: __vptr_func_A; // virtual pointer of func() in A; it points to the address of __vptr_func_C
1016: double d;      // start of subobject B
1020: __vptr_func_B; // virtual pointer of func() in B; it points to the address of __vptr_func_C
1024: __vbcX;        // points to the start of subobject X at address 1000
1028: __vbcA;        // points to the start of subobject A at address 1008
1032: __vbcB;        // points to the start of subobject B at address 1016
1036: int k;
1040: __vptr_func_C; // virtual pointer of func() in C
[/CODE]

About the __vbc virtual base class pointers, I'm not sure whether they are laid out as I wrote them. Or maybe this is compiler specific: assuming I am a compiler maker, could I actually do it that way, or instead place them at the end of the C object, as long as I satisfy the requirement of having a virtual base class pointer in the derived class object?
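Since the exact placement is compiler specific, one way to check is to print where each subobject actually lands. A minimal sketch, reusing the same hypothetical X/A/B/C hierarchy:

[code]
#include <iostream>

struct X { int i; virtual void func() {} };
struct A : virtual X { int j; void func() override {} };
struct B : virtual X { double d; void func() override {} };
struct C : A, B { int k; void func() override {} };

int main() {
    C c;
    // static_cast walks to each subobject; the addresses reveal the
    // layout your particular compiler/ABI chose.
    std::cout << "C object:    " << static_cast<void*>(&c) << "\n";
    std::cout << "A subobject: " << static_cast<void*>(static_cast<A*>(&c)) << "\n";
    std::cout << "B subobject: " << static_cast<void*>(static_cast<B*>(&c)) << "\n";
    std::cout << "X subobject: " << static_cast<void*>(static_cast<X*>(&c)) << "\n";
    std::cout << "sizeof(C):   " << sizeof(C) << "\n";
}
[/code]

On one common ABI the shared X subobject ends up at the end of the complete object, but another compiler is free to place it (and the hidden pointers) differently.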
  5. [quote name='Washu' timestamp='1329472619' post='4913847'] If you happened to assemble it yourself and list the assembly you might get something like...
[CODE]
movq  -16(%rbp), %rdi
callq __Z3fooP1A
//...
movq  %rax, %rdi
callq __Z3fooP1A
[/CODE]
For your calls to foo (after fixing the const issue), foo then looks like:
[code]
__Z3fooP1A:                      ## @_Z3fooP1A
Ltmp2:
    .cfi_startproc
## BB#0:
    pushq   %rbp
Ltmp3:
    .cfi_def_cfa_offset 16
Ltmp4:
    .cfi_offset %rbp, -16
    movq    %rsp, %rbp
Ltmp5:
    .cfi_def_cfa_register %rbp
    movq    %rdi, -8(%rbp)
    movq    -8(%rbp), %rdi
    movq    (%rdi), %rax
    movq    -24(%rax), %rax
    movl    $1024, (%rdi,%rax)   ## imm = 0x400
    popq    %rbp
    ret
Ltmp6:
    .cfi_endproc
Leh_func_end0:
[/code]
Feel free to figure it out, its pretty straightforward. Note that this is NOT optimized code. Which pretty much eliminates most of the code. [/quote]

Thanks for your answer. However, I learned assembly on the Motorola 68K, not x86, and that was a long time ago, so I will figure it out later after learning the x86 instruction set properly. Can you elaborate on the answer from a higher-level point of view?
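For what it's worth, here is the quoted (unoptimized) assembly translated back into C++ as I read it, with the const removed so foo() compiles. This sketch assumes an Itanium-style ABI, where the offset to the virtual base is stored in the vtable at byte offset -24 (the movq -24(%rax) load), so it is illustrative only and deliberately non-portable:

[code]
#include <cstdint>
#include <iostream>

struct X { int i; };
struct A : virtual X { int j; };
struct B : virtual X { double d; };
struct C : A, B { int k; };

// The original function: the compiler must locate X::i at runtime.
void foo(A* pa) { pa->i = 1024; }

// Hand-lowered equivalent of the assembly, step by step:
void foo_lowered(A* pa) {
    // movq (%rdi), %rax       -- load the vptr stored at the start of *pa
    std::intptr_t* vtable = *reinterpret_cast<std::intptr_t**>(pa);
    // movq -24(%rax), %rax    -- fetch the offset from pa to the virtual base X
    std::intptr_t vbase_offset = vtable[-3]; // -3 entries * 8 bytes == -24
    // movl $1024, (%rdi,%rax) -- store 1024 into X::i at that offset
    *reinterpret_cast<int*>(reinterpret_cast<char*>(pa) + vbase_offset) = 1024;
}

int main() {
    C c{};
    foo(&c);                  // the compiler's version writes into X::i
    std::cout << c.i << "\n"; // 1024
    c.i = 0;
    foo_lowered(&c);          // the hand-lowered version hits the same field
    std::cout << c.i << "\n"; // 1024 again, on a matching ABI
}
[/code]

In other words, the generated code cannot hard-wire the offset to X::i, so it loads that offset from the object's vtable at runtime and adds it to the object pointer.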
  6. Consider this code:

[CODE]
class X {
public:
    int i;
};

class A : public virtual X {
public:
    int j;
};

class B : public virtual X {
public:
    double d;
};

class C : public A, public B {
public:
    int k;
};

// cannot resolve location of pa->X::i at compile-time
void foo( const A* pa ) { pa->i = 1024; }

int main() {
    foo( new A );
    foo( new C );
    // ...
}
[/CODE]

The book "[i]Inside the C++ Object Model[/i]" says that the compiler cannot fix the physical offset of X::i accessed through pa within foo(), since the actual type of what pa points to can vary with each invocation of foo(). So the compiler has to generate something like this:

[CODE]
// possible compiler transformation
void foo( const A* pa ) { pa->__vbcX->i = 1024; }
[/CODE]

If the program has a pointer to the virtual base class, why can't it resolve the memory address of that member at compile time? As far as I know, when a derived class object is created, its memory layout consists of:
[list]
[*]all members of the base class,
[*]a virtual pointer (for the virtual destructor),
[*]a pointer to the virtual base class of the derived object,
[*]all of the members of the derived class object.
[/list]
For example, suppose I have objects [CODE]C c_object[/CODE] and [CODE]A a_object[/CODE]. This is what I think the layout of c_object looks like (suppose c_object starts at address 1000):

[CODE]
1000: int i;    // (subobject X)
1004: int j;    // (subobject A)
1008: double d; // (subobject B)
1012: __vbcX;   // which is at address 1000
1016: __vbcA;   // which is at address 1004
1020: __vbcB;   // which is at address 1008
1024: int k;
[/CODE]

This is what I gathered from my reading anyway; please verify and correct it for me. Finding a base class member should then simply be a matter of finding the right offset from the starting address of the derived class object. So why can't it be resolved? (See the sketch after this post for how the offset actually varies.)
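A small sketch of why no single offset works (same hypothetical hierarchy): the X subobject sits at one distance from a standalone A and, typically, at a different distance from the A subobject inside a C, so foo() cannot hard-code any offset:

[code]
#include <cstddef>
#include <iostream>

struct X { int i; };
struct A : virtual X { int j; };
struct B : virtual X { double d; };
struct C : A, B { int k; };

// How far is the X subobject from the A subobject pa points into?
std::ptrdiff_t x_offset(const A* pa) {
    const X* px = pa; // derived-to-virtual-base conversion, resolved at runtime
    return reinterpret_cast<const char*>(px) - reinterpret_cast<const char*>(pa);
}

int main() {
    A a;
    C c;
    std::cout << x_offset(&a) << "\n"; // offset of X inside a standalone A
    std::cout << x_offset(&c) << "\n"; // typically a different offset inside a C
}
[/code]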
  7. [quote name='NoisyApe' timestamp='1328375301' post='4909574'] ..why care about what's smart and what's not? [/quote]
Because many people think what they do is more intellectual than what others do, and they have a tendency to downplay whatever they consider inferior. In fact, it is very hard to judge based on the degrees people hold and the type of work they perform. Also, this is a discussion; that is one of the reasons for creating the topic. I want to hear opinions and experiences on this issue and how people deal with it. As for me, I just ignore it and focus on the job. I don't care. But what about the others?

[quote name='SteveDeFacto' timestamp='1328375531' post='4909577'] Obviously quantifying intelligence based on which programming language a person uses is stupid. Why did you even feel the need to make this post? [/quote]
My scope is more than just programming languages. Please read carefully.
  8. I often see undergraduate CS students (as I once was) treat more difficult languages as a medium for proving they are smarter and more outstanding than others. Similarly, people from other engineering fields (such as Electrical Engineering) think their major is more difficult than CS because their field involves interaction with the physical world (hardware, real time...) while CS deals with, at most, logic (in the form of mathematics), and they tend to be proud of that.

As for me, I think this is a big misconception. Certainly, some fields involve complex problems by their nature, but that doesn't mean every problem solved by people in a particular field will change the world the next morning. Every field has its own set of problems scaled to the thinking level of the problem solver (one such measurement is the simplistic easy/medium/hard/very hard scale). The flaw in the thinking of those EE people is the assumption that interactions with the real world, such as assembling, crafting, and modifying hardware (the interactions specific to their field), are inherently harder. Directly interacting with real-world constraints is not necessarily more difficult than working at a higher level of abstraction.

For the same reason, programming in C/C++ does not imply that the programmer is more competent than a programmer using Java. Look at it this way: Java is easy, so it is more accessible, letting less gifted people handle easy tasks and errands while competent programmers focus on problems suited to their level. A competent programmer can also take advantage of the easiness of Java to greatly improve their productivity compared to a programmer with lesser skills. Likewise, working on bare metal does not mean the work is harder than work at higher levels of abstraction: creating a simple circuit that turns a lightbulb on and off is not more difficult than proving Fermat's last theorem. Math is a field I consider very hard, above all else, simply because it forces the thinker to work at a very high, abstract level, to the point where nothing seems connected to the real world; that makes it hard for the majority of people to grasp advanced math concepts, and hard to map math concepts onto the real world and apply them.

So, what are your opinions on this issue? I work at a telecom company which produces embedded software for telecom devices. Although I work with C/C++ and the terminal primarily, I am tired of people who consider themselves competent programmers just because they program in C/C++, as well as of folks on internet forums who do the same. They might be competent, but it doesn't mean they are more competent than ALL the people using Java/Python (they usually use the word [i]most[/i], but really, in their minds, it means [i]all[/i]). Once again, the complexity of the problem defines the competence of the problem solver. Whether you work in art, business, communication, engineering, or science, if you can solve the complex problems of your field (given that the field is significant), you are smart. Sadly, we don't have a universal metric for the complexity of every problem, and such a metric seems impossible (IQ is used instead, but it does not reflect the true capacity of the mind at all).
  9. [quote name='swiftcoder' timestamp='1327522357' post='4906199'] What's wrong with a good old-fashioned requirements specification, followed by a design document? [sub](as you may guess, I have limited tolerance for enterprise-style 'process for the sake of process')[/sub] [/quote]
Of course those documents are necessary, and writing them is aided by modeling tools, which help retrieve the relevant information to build up the document. The tool provides a way to manage documents throughout the development process (source code is a form of document as well). However, in practices like XP, writing the documents you mention is discouraged, since it costs more time. BPMN is part of the requirements specification when it is applied, and I think BPMN is a good tool for modeling requirements and business processes: instead of pages of documentation, one diagram can summarize things nicely and is easier to read.
  10. Good day everyone. I have just posted this question on Stack Overflow: [url="http://stackoverflow.com/questions/9008656/bpmn-and-use-case"]http://stackoverflow.com/questions/9008656/bpmn-and-use-case[/url]. However, I want to post it here as well, to hear your opinions on the software development process in general and in the game industry in particular. BPMN (Business Process Modeling Notation) is used for modeling business processes visually, making intangible ideas concrete through BPMN diagrams. The question is, [b]how do I tie this modeling into the software development process?[/b] Initially, I thought of these ways to organize use cases and business process diagrams:
[list]
[*][b]One to many:[/b] Map each step (a step here means a node in the BPMN diagram) of the business process diagram to several use cases. Each use case is mapped to several class diagrams/component diagrams (I prefer the latter, since you can encapsulate a set of classes into one component with an input and an output) and, optionally, several sequence diagrams. Once you have class/sequence diagrams, code is written or generated based on the model.
[*][b]Many to one:[/b] Map several steps to one use case. The subsequent steps are the same.
[*][b]Many to many:[/b] For example, one step in the business process can be mapped to two or more use cases, and the same two or more use cases can be mapped to other steps.
[/list]
The above methods can be carried out in a modeling tool; in my case I use Enterprise Architect from Sparx Systems. I discovered it recently and am using the trial, but I will buy it in the future. I can attach many use case diagrams to one step of the BPMN diagram and click through to view the relevant use cases; however, I don't know if it supports the many-to-many case. After working out my own method for organizing BPMN (which is aimed at business people) and use cases (which are more oriented toward software engineers, even though they still take a high-level point of view - the requirements view), I searched the Internet and found two papers, each suggesting a method:
[list]
[*][b]Turn each use case into a step of the BPMN diagram:[/b] This visualizes how use cases, refined from the engineering point of view, participate in the business process. However, this approach requires the use cases to be defined first, which is hard, since the business process should be modeled first so that everyone understands it before it is turned into use cases, at least if the process is complicated enough. Or should this method be performed after both the business process and the use cases are well defined, to further verify and validate the concepts between the two points of view (business and engineering)? The original presentation is here: [url="http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0CCkQFjAA&url=http://csis.pace.edu/%7Eogotel/professional/rev08_lubke.pdf&ei=L0UgT6DCLMe5iQfh0d30DQ&usg=AFQjCNH4xXQteialP302uEu_2zgKNqjgeQ&sig2=h1VzYsxjEtPP2q8tiu0yAQ"]Visualizing Use Case Sets as BPMN Processes[/url]
[/list]
[img]http://i.stack.imgur.com/uxPpf.png[/img]
[list]
[*][b]Each use case is exactly a business process:[/b] Each step in the use case is a step of the business process. The original paper is here: [url="http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=3&ved=0CDwQFjAC&url=http://www.cs.put.poznan.pl/lolek/homepage/Research_files/06-BIS-Nawrocki,%2520Ne%25CC%25A8dza,%2520Ochodek,%2520Olek%2520-%2520ver1.pdf&ei=L0UgT6DCLMe5iQfh0d30DQ&usg=AFQjCNG7dGr4XgDAc7ETTqTtTP5sDBIGDw&sig2=A9ErEuShtPEe8zJ-LkdsKw"]Describing Business Processes with Use Cases[/url]
[/list]
[img]http://i.stack.imgur.com/jpYVW.png[/img]
It seems to me that there is no standardized way of gluing these artifacts (BPMN, use cases, and other diagrams) together. Maybe it's a management problem that relies more on creative usage than on formal steps. [b]What are your opinions on, or experience with, the usage of these diagrams in the software engineering process?[/b]

I know that methodologies like XP specify their own practices for the development process. However, unlike Scrum, which focuses more on management aspects (meaning you can still apply BPMN/UML modeling within your work process), XP prescribes software practices you are required to follow and eliminates modeling processes like BPMN/UML; if its practices are not applied properly, this leads to issues like under-documentation and insufficient software design. I prefer the model-driven way over XP, but I guess it comes down to the preferences of companies and people. One of Agile's goals is to "free developers from documentation work", and methodologies like XP can easily lead to under-documentation. I think the way to achieve that goal is to build tooling that reduces the workload of writing documents, not to write fewer documents: gather information from the existing diagrams and automatically generate reports (in RTF, PDF, or HTML, in the case of Enterprise Architect from Sparx Systems). Another example: people often complain that drawing diagrams consumes their time. In my opinion, the solution is not to stop drawing diagrams, but to use the tool: modeling tools today support round-trip engineering, where you synchronize your code with your diagrams, eliminating the extra effort of manually correcting the diagrams (specifically, the class diagram) whenever the code base changes. [b]What are your opinions/experience on this issue?[/b]
  11. [quote name='DarklyDreaming' timestamp='1322592113' post='4888819'] Bullshit. You're deducing a complete fallacy: that a smart person will get 'smarter' out of going to MIT.[/quote]
Possibly. Why not? It may not be MIT, but it may be another university as well, since you have to think and reason a lot at university.

[quote name='DarklyDreaming' timestamp='1322592113' post='4888819'] Sure, everyone wants to be something. What's that got to do with the thread at hand? [/quote]
[quote name='DarklyDreaming' timestamp='1322592113' post='4888819'] Childhood idols and interests play a major role in what we strive towards and dream of. But again, why are you bringing it up [i]here?[/i] [/quote]
I just want to mention one of the very big motivations, aside from passion, for people to improve themselves. With strong motivation, people perform better at what they do. Getting into MIT may give them a confidence boost (a belief) that they can achieve something significant.

[quote name='DarklyDreaming' timestamp='1322592113' post='4888819'] No, they can however be [i]improved[/i]. [/quote]
How can you improve your past academic record? You can only compensate with extra performance to outweigh a written record that will persist for your entire life. It's similar to the famous dropouts: either outstanding achievement or nothing. But Bill Gates is an extreme case; what I mean is simpler and smaller in scope, like being extra productive compared to your co-workers with nice academic records.

[quote name='DarklyDreaming' timestamp='1322592113' post='4888819'] You played russian roulette, ey? [/quote]
Just like any investment: if you fail, you lose everything, or nearly so.

[quote name='DarklyDreaming' timestamp='1322592113' post='4888819'] Stop using the word "better people". It pisses me off. There are no "better people" - no Über Menschen; there are prodigies and talented people, yes. That's different from what you are implying - that people are on a scale of 'good' to 'bad'. Also, stop using "people" - you don't speak for the vast majority. [/quote]
I have always thought every living creature in the world is equal in nature. Bigger or smaller, weaker or stronger, dumber or smarter, the purpose of every living thing is to live, experience its life, and die; the larger purpose is unknown, and maybe it need not be known. From our society's point of view, however, there definitely are better and worse people; that's why social classes and ranks exist in everyday life. In this context, though, I don't mean better people as a whole, but only within my field of study. Still, there are definitely people who are better than others in particular aspects (again, remember this is based on society's view).

[quote name='DarklyDreaming' timestamp='1322592113' post='4888819'] One MIT grad could go on to achieve no credible work in his entire life while a community college nobody goes on to be the next Steve Jobs - what's your point? That MIT grads and those who attend other high profile Ivy League schools are better than those who go to other, more 'regular', schools? That they generally 'achieve' more? [/quote]
I have heard that big companies like Microsoft or Google favor graduates from big universities (not just in America, but worldwide). So they probably will achieve more, at least in academia.

[quote name='DarklyDreaming' timestamp='1322592113' post='4888819'] You care [i]waaayy [/i]too much about something that should occupy perhaps one trillionth of a picosecond of your life, yes. "Destined"? How the hell would one know what destiny (if there is such a thing as a deterministic universe) one posses? Impossible to change? What is impossible to change? Life? I read defeatism between the lines here. [i]In spades[/i]. [/quote]
By following destiny, what I really mean is following your true self. We are easily affected by our surrounding environment: for example, if you see a sociable guy being favored, at that moment you want to become him. That is probably a good thing, but it may derail you onto the wrong track; peer pressure exists beyond high school, after all. Be truly yourself and follow your true nature without worrying or questioning: talk when you want to, and if you are not the type to fit the social image, simply ignore it. Each person has their own destiny. You cannot choose what you are when you are born; you play the genetic lottery, and if you draw a decent prize, you may potentially have a good life in society. Since much was decided when you were born, you are limited to a few choices for deciding your life within that context. You can make your life better, but that is simply the result of your optimized choices, which may be to live a cheerful life, never give up, think positively, be independent, be good-natured... instead of the opposite.

[quote name='Antheus'] This is not fun. It's not even a game. CowClicker is a fun game. Now focus on understanding why.[/quote]
It might not be the best job I could wish for, but it's still a good job. As a fresh graduate, I cannot be picky; instead I concentrate on getting used to a commercial setting and becoming a professional. The work is beneficial anyway, since it puts my knowledge into practice and deepens my understanding of Linux and computing overall. That's why I consider it my practice environment. In my free time after work, I still study subjects related to game programming, and my goal is to become self-sufficient at designing and implementing games. But it's still a long way to go.
  12. Thanks for replying.

[quote name='Binomine' timestamp='1322568873' post='4888743'] You're making a rookie mistake. It's not the knowledge that's the killer feature of the university system. It's the social aspect. By being in a university, I got to meet the tip top people in their fields. That, and the ability to work on cutting edge research that you wouldn't really have thought about by yourself. As far as large university vs. small, there's less competition in a smaller university, but then there's less opportunity. You might have to work on something that is absolutely not interesting, even if it's important. [/quote]
You are right; I missed the environment aspect, although I can feel how it affected my daily study at university, and the big difference between self-study on your own and being part of a community. The two are complementary, though, and once we graduate we have to rely more on self-study. A working environment is a nice place to learn interactively from people, but in the end we rely on ourselves to improve and get the job done. At least, that's my experience.

[quote name='alnite' timestamp='1322529080' post='4888642'] Echoing what people have been saying, I do think that the students themselves that make the differences. Top-tier schools' students are smart, not because the school made them smart, but because they are already smart to begin with. There are differences in the environment among schools. Of course, though, if it's good or bad for students is up for debate. MIT students, I heard, are very competitive. They would cutthroat each other just to get better grades. Other schools are probably more relaxed, their students are more likely to support each other. [/quote]
I agree. I used to think that it doesn't matter where we study, because the knowledge is the same: 1+1=2 regardless of where it is taught, and everyone gains the same competitive advantage from acquiring the same knowledge. Realistically, it's different. Even with the same knowledge being taught, students in an environment like MIT, as I deduce from the admission process, can definitely do more with the same amount of knowledge, since most of them are smart to begin with, on top of the educational environment.

I think most of us want to be something significant in society, or in the world; that's why we strive to work hard and study more. Even for those with an innate interest in something, aren't childhood idols a big factor? I used to love the stories of famous scientists around the world who dedicated their lives to advancing it. Later, I played games and wanted to create them; that's why I learned programming and software engineering. However, in high school I did not perform well. I was only an average student who loved playing games but wasn't serious about creating them. I was really lazy, and maybe not so smart at the time. By the time I decided to get serious, it was a bit late; an academic record cannot be undone. Having been an average student, I always feared not being able to complete subjects like engineering or science, or any subject requiring strong logical thinking. But since I really liked computers and making games, I bet my life on it. Later, I was amazed at how I could learn the "impossible" subjects, and I discovered that psychology plays a big factor in learning. Once I got serious about my degree, I threw away every negative thought and focused on learning what I was supposed to learn.

For example, many people compete with and are jealous of each other, which affects their learning, derails them, and pushes them into stressful situations. I simply stopped caring about who was better than me; I cared about how well I progressed, and I respected that. The better people, I viewed as a measuring stick for improving myself, not as a reason to be sad. Yet lots of people cannot get past this. My formal education is in application programming, but now I am working on embedded telecom devices, focusing on learning the Linux stack (from the kernel to the utilities). I am pretty satisfied with what I have achieved, even though whenever I think about graduates of the top universities, I suspect they achieve even more. Right now I am still struggling with "The Art of Computer Programming" (maybe I will come back to it when my reasoning skills improve to another level), while I am comfortable reading other technical books. Maybe I care too much about how I fit into society again? Maybe not everyone is destined to be the greatest and to do the greatest things that change the world; maybe if everyone follows what they are destined for, they will be happy, rather than trying to change what is impossible to change.

I really like Linux, and open source and free software in general, since I feel I am contributing, whether trivially or significantly, to the world as a whole rather than to a specific organization, country, or race. I really like developing games and I like the free-software philosophy. I am working on telecom software now, but I consider it part of my plan for maturing my skills before starting to develop serious games for Linux (serious does not mean AAA titles; it may be at a personal or indie level, but it's serious). I would like to create games exclusively for Linux, because also porting the games to Windows feels like telling the public how inferior Linux is compared to Windows, when in fact it's not. A dream is just a dream, but I still follow it. There may be people who dislike my post, but I'm ready for criticism.
  13. I am always impressed by those who get admitted to top universities like MIT, Stanford... to study engineering (only the ones who are modest and nice, not the arrogant jerks, though). I don't actually know what they do at those universities or what they will do afterwards, but I always feel they can perform higher-level tasks with more complexity. I also think they are good at creating and applying mathematical models in real life, and I tend to agree: if you can't apply math, it's your problem, not math's. I am a junior software engineer working on embedded devices, learning more about the Linux kernel and low-level matters. Even so, my will is not yet strong enough to pursue the technical path forever, with the final purpose of creating something significant on my own. Do I have a chance of reaching their level if I keep learning through experience and self-study? In my opinion, math is the must-have requirement, since the programs at those universities seem very math-oriented. Without very strong math skills, how can one perform well in science and research, beyond making regular business products?
  14. The documentation says: "This function is used to asynchronously read data from the stream socket. The function call always returns immediately." I know it is asynchronous, so it returns immediately. But how does async_read_some() differ from the free function read()? When I std::cout the buffer used with async_read_some(), it seems that the function reads many times until the stream is out of data. Does this mean async_read_some() keeps requesting until it has received all the data of, for example, an HTTP GET response? And does the server write a little at a time, sending a little to the client (for async_read_some() to read a little of the whole data), or does it dump all the data to the client at once? Here is the example from [url="http://en.highscore.de/cpp/boost/index.html"]The Boost C++ Libraries - Chapter 7[/url]:

[code]
#include <boost/asio.hpp>
#include <boost/array.hpp>
#include <iostream>
#include <string>

boost::asio::io_service io_service;
boost::asio::ip::tcp::resolver resolver(io_service);
boost::asio::ip::tcp::socket sock(io_service);
boost::array<char, 4096> buffer;

void read_handler(const boost::system::error_code &ec, std::size_t bytes_transferred)
{
    if (!ec)
    {
        std::cout << std::string(buffer.data(), bytes_transferred) << std::endl;
        std::cout << "Bytes transferred: " << bytes_transferred << std::endl;
        sock.async_read_some(boost::asio::buffer(buffer), read_handler);
    }
}

void connect_handler(const boost::system::error_code &ec)
{
    if (!ec)
    {
        boost::asio::write(sock, boost::asio::buffer("GET / HTTP/1.1\r\nHost: google.com\r\n\r\n"));
        sock.async_read_some(boost::asio::buffer(buffer), read_handler);
    }
}

void resolve_handler(const boost::system::error_code &ec, boost::asio::ip::tcp::resolver::iterator it)
{
    if (!ec)
    {
        sock.async_connect(*it, connect_handler);
    }
}

int main()
{
    boost::asio::ip::tcp::resolver::query query("www.google.com", "80");
    resolver.async_resolve(query, resolve_handler);
    io_service.run();
}
[/code]

read_handler is first registered in connect_handler after write(), and then it is repeatedly re-registered inside read_handler() itself. So, does "read_some" mean reading, a little at a time, some of the completely transferred data? And, as opposed to that, does the free function read() actually block and read the whole stream into a buffer?
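If it helps frame the question, here is a minimal sketch contrasting the two calls in their synchronous forms (sock is assumed to be an already-connected tcp::socket; the async variants behave the same way with respect to how much they read):

[code]
#include <boost/asio.hpp>
#include <array>

void compare(boost::asio::ip::tcp::socket& sock) {
    std::array<char, 4096> buf;

    // read_some returns as soon as *any* data is available -- possibly
    // far fewer than 4096 bytes -- so you call it in a loop to drain a
    // stream. That loop is exactly what the read_handler in the example
    // above builds asynchronously by re-registering itself.
    std::size_t n = sock.read_some(boost::asio::buffer(buf));

    // boost::asio::read is a composed operation: it keeps calling
    // read_some internally and only returns once the whole buffer is
    // filled (throwing on error, e.g. when the peer closes early).
    std::size_t m = boost::asio::read(sock, boost::asio::buffer(buf));

    (void)n; (void)m;
}
[/code]

So "read_some" does not mean the function itself loops; each call just hands you whatever has arrived so far, and the server is free to deliver the response in as many TCP segments as it likes.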
  15. [quote name='rip-off' timestamp='1318240565' post='4871025'] You didn't mention the technology you are using. Most language implementations treat static and member functions as practically the same thing - a block of code with a set of parameters. For a member function, the first parameter is some kind of reference to the object itself.[/quote]
Sorry about that; I am concerned with the C/C++ implementation, though I think the principle should be the same in every language.

[quote] I'm not sure what distinction you are making here. At some level, each function is going to be [i]somewhere[/i] in memory. Some implementations will be clever. For example, on MSVC++, if you have two different functions, in different classes, that happen to compile to the same assembly, in Release builds the linker will only emit that assembly once, and use it for both functions. This happens quite often in small functions, like simple accessor/mutator functions. [/quote]
I know that functions must be stored in memory. However, I'm not sure whether each class object has its own copy of each member function, or whether (as I suspect it should be) all class objects refer to the same function stored once in memory. The only thing unique to each class object should be its data members.
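A small sketch consistent with that suspicion, using hypothetical DataOnly/WithMethods structs: adding non-virtual member functions does not grow the object at all, which is easiest to see with sizeof:

[code]
#include <iostream>

struct DataOnly {
    int x, y;
};

struct WithMethods {
    int x, y;
    // Non-virtual member functions are compiled once; every call just
    // passes the object's address as the hidden `this` parameter.
    int sum() const { return x + y; }
    void scale(int f) { x *= f; y *= f; }
};

int main() {
    std::cout << sizeof(DataOnly)    << "\n"; // e.g. 8
    std::cout << sizeof(WithMethods) << "\n"; // the same, e.g. 8
}
[/code]

(A virtual function does change the picture: it adds one vptr per object, but there is still only one copy of the function's code.)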