It depends entirely on the situation -- there's no sure answer -- but in general I'd say that getting acceptable performance out of the GPU takes more effort. The primary reason has more to do with scalability than with computational constraints. For just about any modern CPU you can buy today, the capability gap between a top-tier part and the lower end of 'mainstream' (i.e. not 'budget') won't exceed a factor of 3 or so, and even that's only if your engine can effectively use more than 2-4 'heavy' threads of execution. Most games can't, and the gap shrinks to something under a factor of 2, perhaps a bit less. A top-end part is 4 fast cores with hyper-threading; the lower-end part is 2 nearly-as-fast cores with hyper-threading. You give up 2 cores but only 25-33% of the clock speed, and if you weren't using those extra cores heavily, they weren't buying you much advantage on the higher-end CPU anyway. It's been this way for a while now; CPUs have made only incremental (~10% per generation) improvements for the past 5 years or so.
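To make that arithmetic concrete, here's a toy model of the effect. The core counts and clock speeds are invented for illustration (and hyper-threading is ignored); `effective_throughput` isn't a real benchmark, just a sketch of why the gap collapses when an engine only runs a couple of heavy threads:

```cpp
#include <algorithm>
#include <cstdio>

// Rough model of CPU throughput available to a game. All numbers
// below are hypothetical, chosen only to illustrate the scaling.
double effective_throughput(int cores, double clock_ghz, int heavy_threads) {
    // A game that only runs N heavy threads can't use more than N cores.
    int usable = std::min(cores, heavy_threads);
    return usable * clock_ghz;
}

int main() {
    for (int threads : {2, 4}) {
        double high = effective_throughput(4, 3.6, threads); // top-tier CPU
        double low  = effective_throughput(2, 2.8, threads); // mainstream CPU
        std::printf("%d heavy threads: gap = %.2fx\n", threads, high / low);
    }
    // Prints ~1.29x for 2 threads and ~2.57x for 4: the gap only opens
    // up if the engine actually scales across the extra cores.
}
```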
GPUs tend to span a broader range between the highest- and lowest-end parts -- typically about 6-8x within a single generation -- and the generation-to-generation gain for similarly-priced parts is around 20%. It's also not uncommon to find a brand-new CPU paired with a GPU inherited from a couple of generations back, so realistically you're probably supporting GPUs that span down as far as 1/16th the power of an enthusiast-level GPU at the time of launch. The one break you catch is that it's fairly trivial to scale the workload along with graphics performance -- heck, just halving the resolution along each axis means you can render at the same quality using only 1/4 of the GPU resources. Typically, games target right about the middle of the available GPU power range, because it's just as easy to scale up for users with extra-powerful GPUs: render at higher resolutions and throw in some more eye-candy. You can't do more stuff on the CPU side of things that easily; there's not much extra work you can add without affecting gameplay.
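As a minimal sketch of that idea (the `gpu_fraction` knob and the numbers are hypothetical; real engines use more nuanced heuristics), here's how render resolution alone can absorb a wide spread in GPU power:

```cpp
#include <cmath>
#include <cstdio>

// Toy resolution scaler: given how much weaker the user's GPU is than
// the target GPU (a made-up 'gpu_fraction', e.g. 0.25 = quarter the
// power), pick a render size that keeps per-frame pixel cost roughly
// constant. Pixel cost grows with width * height, so scale each axis
// by sqrt(fraction).
struct RenderSize { int width, height; };

RenderSize scaled_resolution(RenderSize target, double gpu_fraction) {
    double axis_scale = std::sqrt(gpu_fraction);
    return { static_cast<int>(target.width * axis_scale),
             static_cast<int>(target.height * axis_scale) };
}

int main() {
    RenderSize target {1920, 1080};
    for (double fraction : {1.0, 0.25, 1.0 / 16.0}) {
        RenderSize r = scaled_resolution(target, fraction);
        std::printf("GPU at %5.2fx power -> render at %dx%d\n",
                    fraction, r.width, r.height);
    }
    // 0.25x power -> 960x540 (half resolution per axis, 1/4 the pixels);
    // 1/16th power -> 480x270. Same shaders, same content, scaled cost.
}
```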
GPUs also require their own special kind of care to get the most performance out of them, and doing things the wrong way can hose your performance in a hurry. The patterns for getting that performance are well understood; they just take time and care to implement. Combine them with a variety of shaders tuned to each GPU generation and each performance/image-quality level, and it simply takes effort because the work is combinatorial (e.g. Effort = patterns * GPU generations * perf-IQ levels; the product of the factors, not their sum).
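Spelling out that product with some invented counts (none of these numbers come from a real project) shows why the effort piles up so fast:

```cpp
#include <cstdio>

// Illustration of why shader tuning effort is a product, not a sum.
// The counts below are hypothetical, picked only for the example.
int main() {
    const int patterns        = 5; // e.g. lighting, shadows, post-effects...
    const int gpu_generations = 4; // hardware generations you tune for
    const int perf_iq_levels  = 3; // low / medium / high quality tiers

    int effort      = patterns * gpu_generations * perf_iq_levels;
    int if_additive = patterns + gpu_generations + perf_iq_levels;

    std::printf("variants to write and test: %d (vs %d if it were additive)\n",
                effort, if_additive); // 60 vs 12
}
```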