

Cosmic314

Member Since 25 Jul 2003
Online; Last Active Today, 03:28 PM

#5148968 References or Pointers. Which syntax do you prefer?

Posted by Cosmic314 on 23 April 2014 - 09:20 AM

A common style that I've seen is const references for in-params (for types that should act like pass-by-value but are expensive to copy) and pointers for all out-params.
The rationale is that it makes out-params very obvious at the call site, similar to how in other languages the caller must use an 'out' keyword.
 

It'd be nice if the syntax were a little cleaner in C++, but then it'd also be nice if owning pointers (std::unique_ptr) were a built-in feature like in Rust. C++ is not for people afraid of typing or syntax spew.

That would completely kill C++ as a pay-for-what-you-use / opt-in language. Most embedded systems that I've worked on still use raw pointers, or a smart pointer that acts like a raw one but performs leak detection during development.

 

Is there a valid use of raw pointers for things within a class?  For example, if I construct an object (and correctly release resources if the object fails to construct) and then release them properly in the destructor, would that be a valid use?  I guess that's the RAII paradigm in a nutshell.  Or would you advocate smart pointers even in this scenario?
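
To make the question concrete, here's a minimal sketch of the pattern I mean (Resource and the acquire/release functions are hypothetical stand-ins):

#include <cstdio>

struct Resource { int id; };                                          // hypothetical resource

Resource* acquire_resource( int id ) { return new Resource{ id }; }   // stand-in API
void release_resource( Resource* r ) { delete r; }                    // stand-in API

class Holder
{
public:
    Holder()
        : first_( acquire_resource( 1 ) )
    {
        try
        {
            second_ = acquire_resource( 2 );
        }
        catch( ... )
        {
            release_resource( first_ );   // release what we grabbed if construction fails
            throw;
        }
    }

    ~Holder()                             // the matching release lives in the destructor
    {
        release_resource( second_ );
        release_resource( first_ );
    }

    Holder( const Holder& ) = delete;             // prevent accidental double-release
    Holder& operator=( const Holder& ) = delete;

private:
    Resource* first_;
    Resource* second_;
};

int main()
{
    Holder h;   // resources are released automatically when h goes out of scope
}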

 

I suppose if I need to share a dynamic resource with things outside of the raw-pointer-holding class, then smart pointers become essential.  But even general-purpose libraries intended for mass consumption avoid smart pointers, because there's no hard guarantee that the same smart pointer convention is followed by programs built against different implementations.  How then is the problem of detecting resource leaks handled?  Is there a method to the madness, or are designs/tools up to the task?

 

If there's any doubt, these are genuine questions and not quibbles with the concept of smart pointers.  I just haven't been on a large enough project to see what disasters might befall the unwary.




#5098745 Procedural Game Generation

Posted by Cosmic314 on 04 October 2013 - 08:23 AM

While I have no specific recommendations about how to go about it, I do remember a game I loved:  Adventure Construction Set

 

You could make adventures yourself or let the computer spit out an entire adventure after it spent about an hour doing its thing.  All the more amazing is that it ran on the Commodore 64, a 64 KB machine!  There's a ton of nostalgia for it on the web.  Walk on over to Google for some ideas.




#5098085 about keyword and include

Posted by Cosmic314 on 01 October 2013 - 09:46 AM

If I understand your question correctly, you're asking why the keywords of C++ do not need a header file.  

 

Simply, they are a part of the language.  A compiler / parser is essentially hard-coded with the rules it needs to understand what keywords are, and what it is supposed to do with them.  Several international C++ standards exist which give precise definitions, depending on the version of C++ you're using.  

 

While it might be possible for a tool to construct a language from a set of rules, such as the keywords, that tool would itself need built-in rules to understand those definitions.  At some point a tool must have some set of basic definitions in order to do its job.  And that's partially what keywords represent -- an internal skeleton that guides the parsing / compiling process to choose the appropriate rules when translating your code.
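
A quick way to see the difference (a minimal sketch of my own, not from the thread): keywords compile with no #include at all, while library names must be pulled in from headers:

#include <cstdio>   // needed for std::printf, which is a library function, not a keyword

// int, for, and return are keywords: the compiler knows them with no header at all
int sum_to( int n )
{
    int total = 0;
    for( int i = 1; i <= n; ++i )
        total += i;
    return total;
}

int main()
{
    std::printf( "%d\n", sum_to(10) );   // prints 55
}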




#5095293 Effective C++ (book)

Posted by Cosmic314 on 19 September 2013 - 04:58 PM

Why not make it a trifecta?  Grab 'Effective STL'.  The formatting and cross-referencing are the same.  Each book gives you 50-odd specific guidelines for improving your code, with a copious amount of rationale behind each one.

 

But maybe that's getting ahead of things.  C++ kinda grows on you.  If you pile up on books but don't do much programming, you won't absorb the knowledge or fully understand what Scott Meyers is trying to explain.




#5094562 CS Degree - Is it worth it?

Posted by Cosmic314 on 16 September 2013 - 06:46 PM

As has been referenced earlier, if you want a job you need your resume in the hands of the hiring manager (obvious, right?).  What are the most effective ways to do this?

  1. You have interned for the company.
  2. You know the manager.
  3. You know someone who knows the manager and who vouches for you.
  4. You attend any seminar / conference / job fair that puts you into direct contact with the manager.

As Washu mentioned, HR receives and rejects thousands of resumes before any manager gets a whiff of them.  At larger companies they use "search terms" to score resumes, which leads to resume padding, sometimes to ludicrous levels.  Bypassing HR, if possible, dramatically increases your odds.  Tying the above to a university...

 

One overlooked aspect of a university education is that universities tend to have great connections to industry.  Often they have a resource center where they invite recruiters to interview.  Typically the recruiters are either professionals who work with hiring managers, or the managers themselves.  Put all those big dollars you're spending to good use and attend these interviews!

 

The job resource center will usually host co-ops and internships as well.  As I mentioned earlier, if you spend some time with an employer you get "face" time that no resume can match.  If you're at these jobs, learn as much as you can.  Make as many friends and connections as possible.  Even if you discover that the internship or company isn't exactly what you want to do, there's a good chance one of your connections will know a perfect match for you.  They'll refer you through a friend or have direct contact with a manager (which lets you bypass HR).

 

Also, internships tend to pay money.  I remember mine well.  Instead of pouring concrete with the local construction company for a summer, I was in air-conditioned bliss, making 3x that salary and following my educational goals.

 

Anyways, best of luck!




#5093771 to improve wide or better narrow knowledge ?

Posted by Cosmic314 on 13 September 2013 - 07:18 AM

Here's some back story that may have relevance.  YMMV.
 
I've interviewed probably 50+ candidates over the years.  I'm not a manager, and I don't get to pick who I interview.  As part of the interview team I typically cover basic programming in Perl, C, C++, and ARM assembly.  I've found that many PhD candidates cannot program at all (this is not a sweeping assessment of PhD holders; I personally know plenty of PhDs who are amazing programmers).  We were looking for people who could demonstrate, at a bare minimum, that they could program without much hand-holding.  I'm sure a PhD could eventually figure this stuff out, but our needs at the time were such that they needed to hit the ground running.
 
I spoke with a co-worker about this puzzling trend.  He holds a long list of degrees and I figured he'd be the best person to discuss the matter with.  He put it this way:  "Imagine that all computer programming knowledge (or all knowledge) can be represented by a circle."  He would then draw a much smaller circle on the edge of the bigger circle.  "This smaller circle represents the pimple of knowledge where the PhD is an expert.  They have focused on this piece of research for so long that they've crowded out the rest of the general knowledge.  PhDs typically spend many years in this focused state.  This is why, despite people's general understanding of what a PhD means, the reality is quite different."
 
The next natural question was, "Well, is there a degree program that makes you an expert in general knowledge?"

 

"Yes.  That's called industry."
 
How is this relevant?
 
Well, if your idea of programming involves highly focused, research-based projects, you'll probably have to devote considerable effort to reach any level of competency, and what you sacrifice is time spent becoming proficient in a wide range of activities.  My impression of game programming is that if you're doing it all by yourself, you want to master as many of the fundamentals as possible.  Don't spend too much time in any one area.  If anything, rely on the generous support of the open-source world when specialization is necessary.  You don't have enough time to do it all yourself, but you do have enough time to learn the tools and flows that cover that aspect of programming for you.
 
As Álvaro says, it's a judgment call on your part.  Maybe somewhere along the way you decide you really enjoy something.  Or maybe you have some novel idea that you really want to explore.  It's largely dependent on what your goals are.




#5093369 Strategy pattern.... am I on the right track?

Posted by Cosmic314 on 11 September 2013 - 02:34 PM

Is the allocator parameter of C++ standard library containers an example of the strategy pattern? It customizes the logic based off of the allocator passed in, without changing the functionality of the container itself.

Yeah, I think that's about right.

 

In my listen() example, my creature class might implement listen() as a giant series of if/then/else statements evaluated every single time I call it, even though the state of the creature rarely changes.  Rather than walk that giant if/then/else chain on every call, I could instead pay that cost one time: when the creature goes deaf, I change the listen() strategy to the deaf strategy, which still uses the same interface.  Now I have tidier code, and if I call listen() a zillion times, I save a bundle of branch instructions on that if/then/else tree.  Here's a cleaned-up sketch (assuming the class holds a std::unique_ptr<InterfaceListenStrategy> member named listen_strategy_, and a hypothetical creature_condition enum):

void creature_class::listen()
{
    listen_strategy_->execute();   // dispatch through whichever strategy is installed
    // place this creature's specific code afterwards, if any, that is invariant
}

void creature_class::set_listen_strategy( creature_condition condition )
{
    // pay the selection cost once, instead of branching on every listen() call
    if( condition == DEAF )   listen_strategy_ = std::make_unique<DeafListenConcrete>();
    if( condition == BUFFED ) listen_strategy_ = std::make_unique<BuffedListenConcrete>();
    // etc.
}




#5093332 coin flip problem

Posted by Cosmic314 on 11 September 2013 - 12:18 PM

Your loop continues as long as you don't enter 1 *or* you don't enter 2. To visualize that, here's a table:
 

entered 1? | entered 2? | !=1 | !=2 | Loop?
-------------------------------------------
     Y     |     N      |  N  |  Y  |  Yes
     N     |     Y      |  Y  |  N  |  Yes
     N     |     N      |  Y  |  Y  |  Yes

So in other words, you will always loop forever, no matter what.

 

Quite helpful.

 

If you have trouble testing the negative condition in your head, try this:

 

Determine which condition is easier to state in your mind.  In this case: "I want to exit the loop when choice is 1 or 2."  Then just negate that logic to get the condition for staying in the loop:

"I want to stay in the loop when !(choice == 1 || choice == 2)".

 

You can also apply De Morgan's laws to transform the check.  De Morgan's laws are:

!(A && B) = !A || !B
!(A || B) = !A && !B

Thus the condition can also appear as:

!(choice == 1 || choice == 2) -> !(choice == 1) && !(choice == 2)
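
Putting that together, a minimal sketch of the fixed loop (assuming choice is an int read from the user, as in the original program):

#include <iostream>

int main()
{
    int choice = 0;
    // stay in the loop while choice is neither 1 nor 2
    while( !(choice == 1 || choice == 2) )   // same as: choice != 1 && choice != 2
    {
        std::cout << "Enter 1 for heads or 2 for tails: ";
        std::cin >> choice;
        if( !std::cin ) break;   // bail on non-numeric input rather than loop forever
    }
}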



#5093292 coin flip problem

Posted by Cosmic314 on 11 September 2013 - 09:45 AM

You are using 'randRange' as if it were an 'int' variable.  However, 'randRange' is a function, so in this context its name decays to a function pointer: an address in memory where the code for the function resides.  Essentially, your initial attempt compares the address of code against an integer.  CaptainKraft's solution fixes that particular compiler issue because you are now comparing against the result of calling the function, which is an int, so the comparison is int against int.
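
For illustration (I'm assuming a signature like int randRange(int low, int high); the real one in your program may differ):

int randRange( int low, int high ) { return low; }   // stub body, just for the sketch

void example()
{
    // if( randRange == 1 ) { }       // error: compares a function pointer to an int
    if( randRange(1, 2) == 1 ) { }    // fine: compares the returned int to an int
}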




#5093266 Strategy pattern.... am I on the right track?

Posted by Cosmic314 on 11 September 2013 - 07:42 AM

The high-level view of the strategy pattern is that the interface to a behavior never changes, but the behavior itself does.

 

A simple example might be a convex hull strategy.  There are probably 10 different ways to solve the convex hull problem, and some perform better than others depending on the input data.  If you compute convex hulls frequently and you monitor the input patterns you're receiving, you might decide that a different convex hull algorithm is more appropriate.  Rather than wrapping each algorithm in a series of if/thens, you can simply have your interface point at the best algorithm.

 

I realize the convex hull case is a fairly simple example.  You don't really need much more than an 'execute' method, although if you could customize sub-steps that share a common interface, the benefits would start to show.

 

Let's take your example.  Maybe a certain class of creatures implements methods listen(), assess_opponent(Creature &), look(), attack(Creature &), flee(), etc.  You could have a generalized opponent-interaction function that does something like:

if( listen() == OPPONENT_DETECTED )
{
    Creature& opponent = get_detected_opponent();
    bool we_do_attack = assess_opponent(opponent);
    if( we_do_attack )
    {
        attack(opponent);
    }
    else
    {
        flee();
    }
}

You could have a strategy pattern that provides interfaces to those methods with a possible default behavior, and then provide a way to change the underlying strategies at run time (a crucial difference from the template method pattern, where the behavior is fixed at compile time).  For example, maybe your creature has super ears that can echo-locate creatures, but somewhere in the course of the game it sustains an injury that renders it stone deaf.  Rather than adding a series of if/thens to the code above, you swap in a strategy that implements a 'deaf' version of listen().
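
A minimal, self-contained sketch of that run-time swap (all the names here are mine, invented for illustration):

#include <iostream>
#include <memory>

struct IListenStrategy
{
    virtual ~IListenStrategy() = default;
    virtual void execute() = 0;
};

struct EchoLocateListen : IListenStrategy
{
    void execute() override { std::cout << "echo-locating nearby creatures\n"; }
};

struct DeafListen : IListenStrategy
{
    void execute() override { std::cout << "hears nothing\n"; }
};

class Creature
{
public:
    void set_listen_strategy( std::unique_ptr<IListenStrategy> s ) { listen_ = std::move(s); }
    void listen() { if( listen_ ) listen_->execute(); }   // same call site, swapped behavior
private:
    std::unique_ptr<IListenStrategy> listen_;
};

int main()
{
    Creature c;
    c.set_listen_strategy( std::make_unique<EchoLocateListen>() );
    c.listen();   // super ears
    c.set_listen_strategy( std::make_unique<DeafListen>() );   // injury mid-game
    c.listen();   // now deaf -- the calling code never changed
}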

 

Essentially, what this buys you is code with the same overall logic that still lets you vary behavior dramatically.  It saves you from having to change the code above every time the underlying behavior changes, which is really one of the major benefits of design patterns.




#5092826 Any interest in ARM CPU pipeline / programming article?

Posted by Cosmic314 on 09 September 2013 - 04:11 PM

Thank you for your feedback.  You're correct: covering the entire ARM family is probably too ambitious for one article.  Maybe it would serve the community best if the article focused on getting the best processor performance without giving too much consideration to any specific underlying implementation.  If it's a bust, well, then I've tried; but if it's successful it may spawn interest in looking at more specific implementations.




#5092767 Any interest in ARM CPU pipeline / programming article?

Posted by Cosmic314 on 09 September 2013 - 11:51 AM

I want to write an article, but first I wanted to float the idea to see if there's any interest.  I already read the article about the CPU pipeline, which focuses on the Intel architecture.  Would there be any interest in a similar article about the ARM pipeline and processor?  Mobile devices are heavy users of this architecture, so it has direct relevance to GameDev.net.  I work with ARM processors and know enough to convey the basics.

 

Anyways, I'm just putting some feelers out to see what people think.




#5092504 Seeking Advice on Windows IDEs, Cygwin, Windows versions, et. al.

Posted by Cosmic314 on 08 September 2013 - 11:13 AM

I'm trying to write hobby games for Windows / PC.  Using Visual Studio Express 2010, I'm frustrated by the restrictions on adding IDE customizations.  For example, the code snippets tool doesn't natively support C++ unless I upgrade to Pro, and for the same reason I can't add third-party extensions, like Snip2Code, unless I shell out some cash.  From what I've read (and I can certainly be wrong), if I want Visual Studio with C++11 my best route is to install either Windows 7 or Windows 8 ($100-$200).  But would I also need Pro and shell out another $500?  Does anyone have opinions or experience here?  Like I said, I'm currently only a hobbyist.  Spending $100 for a new OS is non-ideal but doable; spending another $500 for a license on software that might be replaced a year later is asking too much.  If I upgrade to a new tool, is there typically a discount?  Are add-ons difficult to code up?  Are third-party extensions typically expensive?

 

I do enjoy the features of the IDE, especially IntelliSense and pre-compilation error detection.  However, I do not enjoy hunting through GUI menus to add libraries, include paths, etc.  Maybe my experience would be better if I could control all of these build options from a text file and a command prompt instead; I feel I'm giving up some control by letting the IDE handle the build flow.  I see CMake touted in these forums, so I may head that way.  Is PowerShell a worthwhile way to work on the command line?

 

In exploring alternatives I ran across Cygwin.  Since I program on Linux for my day job, this discovery was serendipitous.  I've downloaded the environment, copied source code from one of my Windows projects, created a Makefile, and after a bit of tinkering I got a natively compiled Windows program working without using any Cygwin DLLs.  I use XEmacs / Emacs, and I also uncovered CEDET:  http://cedet.sourceforge.net/.  Maybe I can have the best of both worlds?

 

Is using Cygwin a dangerous path to travel for programming Windows applications?  Is there anything to be wary of?  I'm looking for other user experiences.

 

Thanks!




#5091818 What does the dimensions of the cache size and numbers of the bus speed tell...

Posted by Cosmic314 on 05 September 2013 - 10:40 AM

 

When they talk about cache associativity they are referring to how the data is stored within the cache.  As a gross simplification, when you have a large number of cache bins you need to keep track of which bin contains which data. When you have 128KB of memory kept in 64-byte bins that is a lot of bins. On the one hand you want all your data in cache memory.  On the other hand you don't want to search every bin in order to find which bin actually holds your data. If you have a 16-way set associative cache it means you need to sort through 1/16th of the bins to find the data.

I can understand the desire to simplify things here, but I think you've oversimplified to the point where you've inverted what associativity means. A 16-way associative cache doesn't sort through 1/16th of the bins, it searches 16 bins. An 8-way associative cache would search through 8 bins.

To try again (still simplified): a cache stores the most recently used memory, on the assumption that if you've used some data recently you're likely to use it again in the near future. One way to handle this is that when you access memory that isn't already in the cache, you find the bin that has gone the longest without being accessed and replace its contents with the new memory. Because any bin can be used for any memory address, this is called a fully associative cache: every memory address can be associated with any bin. The downside is that it takes effort to figure out which bin has gone longest without access and which bin currently holds which address, and that effort slows the cache down.

The opposite approach is called direct-mapped (one-way associative): every memory address can only ever be found in one particular bin. Now you no longer need to track ages or hunt down the bin for a given address. On the downside, if you alternate accesses between two memory addresses that map to the same bin, the cache might as well not exist, because every access generates a cache miss.

Between these two extremes are the n-way associative caches. Say you have a two-way associative cache: every memory address has two associated bins. To see whether a given address is cached you only need to check two bins, and it's also much easier to track which of those two bins has gone longer without being accessed. And since two addresses that map to the same set of bins can now be in the cache at the same time, it's harder to produce usage patterns that miss on every access. Replace two with some value n to get your n-way associative caches, like the 8-way and the 16-way. As n gets bigger the circuitry gets more complex; as n gets smaller it gets easier to hit memory access patterns the cache can't handle efficiently.
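
To make the "bins" arithmetic concrete, here's a small sketch (my numbers, not from the discussion above) of how an address maps to a set in an n-way cache; the n ways of that one set are the only bins that ever need to be searched:

#include <cstdint>
#include <cstdio>

int main()
{
    const std::uint32_t cache_bytes = 128 * 1024;  // 128 KB cache
    const std::uint32_t line_bytes  = 64;          // 64-byte bins (cache lines)
    const std::uint32_t ways        = 16;          // 16-way set associative
    const std::uint32_t sets        = cache_bytes / (line_bytes * ways);  // = 128 sets

    std::uint32_t addr   = 0x1234ABCD;             // arbitrary example address
    std::uint32_t offset = addr % line_bytes;            // byte within the line
    std::uint32_t set    = (addr / line_bytes) % sets;   // which set to look in
    std::uint32_t tag    = (addr / line_bytes) / sets;   // identifies the line in that set

    std::printf("offset=%u set=%u tag=0x%X (search %u ways)\n", offset, set, tag, ways);
}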

 

I work in a processor design group for a popular mobile processor family.  I don't do high-level architecture.  My prior duty was to design a fully custom DTLB and assist in adder designs (in prior jobs I worked on Xbox 360 and PS3 custom circuitry).  Now I'm on the opposite end of the spectrum: I run post-silicon characterization.  This job involves writing ARM assembly and C on pretty much bare metal.  There's no OS, and maybe 1 MB of memory if I'm lucky.  Typically the caches are bigger than the available memory, which makes things difficult, because the entire characterization suite can fit in the cache and then always hit.  The idea is to create screens to filter out low-yielding parts (defective, too low a frequency, too high a power requirement).

 

Anyways, there's a separate pre-characterization / modeling group that does things like cache-size and TLB-size analysis.  On server and PC chips this is typically unconstrained: those memory structures can be made as big and powerful as possible, until the design occupies every last piece of silicon.  There's typically no power consideration, because you can plug directly into a wall.  On mobile processors this investigation is much more nuanced and is essentially a constrained optimization problem: performance must be balanced against power consumption.  For the sets/ways of a cache, an important consideration is that the more ways you have, the less dynamic power you consume.  The cache can be broken into equal-sized ways, and since one address can only live in one way, you only need to clock that particular sub-block.

 

The OP asked about bus speeds, and I was hesitant to respond.  They are one thing on PC-based systems, where the bus involves traveling across bridges on the motherboard; on SoCs (systems on chip) it can be a different thing.  For example, the SoC parts I work on typically contain more than 12 different processors (4 application cores, GPUs, DSPs, power-management processors, audio subsystems, peripheral subsystems, embedded DDR, PMIC controllers, and so on).  There are multiple bus systems, some of them private and others more global.  Bus speeds range from 19.2 MHz all the way up to 900 MHz.  The application and GPU subsystems usually have a funnel directly to DDR memory on the high-speed bus.




#5091387 What does the dimensions of the cache size and numbers of the bus speed tell...

Posted by Cosmic314 on 03 September 2013 - 01:12 PM

This is a fairly involved discussion.  Here is the 10,000-foot view.

 

Let's assume we have a CPU connected to a memory, and that the CPU can process data 100x faster than this memory can be read or written.  If the CPU needs the value at some memory location, it spends roughly 100 cycles waiting for the memory to respond.  What can be done to speed up performance?

 

This is where the memory hierarchy enters.  A smaller memory can be interposed between the CPU and the larger memory.  While its capacity is much smaller than the external memory's, it is much quicker.  What good does this do?

 

It wouldn't do an ounce of good unless the cache could repeatedly serve up the same instructions or data over and over again.  If you think about your code, this happens all the time: loops use the same instructions and data over and over, function calls reuse the same code in memory, and routines typically use the same data throughout.  So this smaller but quicker memory is an effective solution precisely because code and data tend to be reused.  This principle is known as instruction and data locality.  Without it, caches would be useless.
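
If you want to see locality in action, here's a small self-contained sketch (my own illustration, not part of the original discussion): it sums the same matrix twice, once walking memory sequentially and once with a large stride, and on typical hardware the strided pass runs several times slower purely because of cache misses:

#include <chrono>
#include <cstddef>
#include <cstdio>
#include <vector>

int main()
{
    const int N = 4096;
    std::vector<int> m(static_cast<std::size_t>(N) * N, 1);
    long long sum = 0;

    auto t0 = std::chrono::steady_clock::now();
    for (int row = 0; row < N; ++row)        // row-major: sequential, cache-friendly
        for (int col = 0; col < N; ++col)
            sum += m[static_cast<std::size_t>(row) * N + col];
    auto t1 = std::chrono::steady_clock::now();
    for (int col = 0; col < N; ++col)        // column-major: strided, cache-hostile
        for (int row = 0; row < N; ++row)
            sum += m[static_cast<std::size_t>(row) * N + col];
    auto t2 = std::chrono::steady_clock::now();

    using ms = std::chrono::milliseconds;
    std::printf("sum=%lld row-major=%lld ms column-major=%lld ms\n", sum,
        static_cast<long long>(std::chrono::duration_cast<ms>(t1 - t0).count()),
        static_cast<long long>(std::chrono::duration_cast<ms>(t2 - t1).count()));
}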

 

So let's give a high-level example.  Assume 1 cycle = 1 CPU operation performed, 10 cycles = a 'hit' on your cache (a hit means the data or instruction you need is present in the cache), and 100 cycles = an access to main memory for data not contained in the cache.

 

Let's compare two machines, one with CPU + main memory and another with CPU + cache + main memory.

 

On the cacheless computer, sequential accesses to instructions always cost 100 cycles per instruction (this is a simplification, by the way; typically CPUs fetch a batch of instructions at a time).  Thus 10 sequential memory accesses cost 1000 cycles.

 

On the computer with a cache, let's assume the same 10 instructions are executed as in the prior example, and further assume that the first 3 instructions form a loop that gets re-executed.  The cost of access is 3x100 cycles for the first three instructions.  Assuming these instructions remain in our cache, we hit on the remaining 7 accesses at a cost of 7x10 cycles.  We spend 370 cycles on the system with the cache.

 

Comparing the two, the same work takes 1000 cycles on the cacheless machine vs. 370 cycles on the machine with a cache: nearly a 3x speed-up.

 

There is much more to this discussion, but that's the general idea of how caching works.  Modern computer systems have a hierarchy of caches, usually designated L0, L1, L2, L3, etc.  Each increasing number indicates the memory's "distance" from the CPU, i.e., the number of intervening memories between it and the CPU; an L3 means the system has 3 caches (L0, L1, L2) between it and the CPU, for example.  Typically you'll find the L0 and L1 buried inside the CPU itself if you look at the implementation.  They are also typically tightly coupled, meaning both must be in operation for the CPU to run, whereas higher-level caches can often be enabled or disabled depending on your needs.  Mobile processing systems usually allow this in order to conserve power.

 

Anyways, there's more to your question and I'll try to answer later when I have more time.  An awesome book to cut your teeth on:  Computer Architecture, A Quantitative Approach by Hennessy & Patterson.  It is considered a classic.





