Computers are still getting more powerful every day, though physical limitations have significantly slowed the pace of single-core improvements. It's not uncommon to hear statements along the lines of "one day computers will surpass humans in computational power".
It's important not to take such statements out of context, however. Simply having the computational power of a human brain is not, by itself, useful. It's also a completely unfair comparison: computers and humans do very different things and have their own specialties. When was the last time a human produced a billion digits of pi? Can a computer converse so intelligently that a human won't notice the difference?
First, note that while I firmly believe artificial intelligence (on the level of humans) is a perfectly reasonable goal, we are not there yet and are probably still a long way off. From a physical perspective, the architectures that human thought and computer processing run on are very, very different today, although with massively parallel processors we are getting a little closer. The fact that computers are Turing-complete doesn't really say much about their computational power or their usefulness in various applications; after all, there are plenty of ways to implement a Turing-complete machine (and many more ways to construct Turing tar pits), and we happened to stumble upon, and stick with, the Von Neumann-like architecture, which is great for things like numerical computation but more troublesome for artificial intelligence-related tasks. Despite all this, lots of progress has been made in areas such as facial recognition and natural language processing. It might be tempting to think that with a completely different architecture of computation some of these problems wouldn't be so hard, but I suspect we would gravitate toward the more mathematical way of describing things anyway, and that's unlikely to change. (Can you even imagine what an algorithm designed for a neural network would look like on paper? I doubt most developers would want to tread there.)
That brings us to the second problem we haven't solved: for many AI tasks, we can't even articulate the procedure. Imagine teaching a sufficiently smart alien child how to differentiate -- the child might not know much about the theory behind calculus, but you can at least teach them all the rules, which can be applied mechanistically (and recursively) to obtain the result. Now try teaching such an alien how to recognize and tell apart human faces ... this is essentially what solving an AI problem is like. We don't even have full knowledge of the mechanism of facial recognition (much progress has been made, though!); many areas of cognitive psychology are still quite active today.
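To make the contrast concrete, here's roughly what those mechanistic, recursive differentiation rules look like when written down as code. This is only an illustrative sketch: the tuple-based expression representation and the `diff` function are inventions of mine, not anything standard, but the point is that the whole procedure fits in a handful of rules -- something we simply cannot do for face recognition.

```python
# Toy symbolic differentiator. Expressions are nested tuples, e.g.
# ("+", ("*", 3, "x"), 5) represents 3*x + 5. The representation is
# invented purely for illustration.

def diff(expr, var="x"):
    """Differentiate expr with respect to var by applying rules recursively."""
    if isinstance(expr, (int, float)):  # d/dx of a constant is 0
        return 0
    if expr == var:                     # d/dx of x is 1
        return 1
    op, a, b = expr
    if op == "+":                       # sum rule: (a + b)' = a' + b'
        return ("+", diff(a, var), diff(b, var))
    if op == "*":                       # product rule: (a*b)' = a'*b + a*b'
        return ("+", ("*", diff(a, var), b), ("*", a, diff(b, var)))
    if op == "^":                       # power rule (constant exponent b)
        return ("*", ("*", b, ("^", a, b - 1)), diff(a, var))
    raise ValueError(f"unknown operator: {op}")
```

For example, `diff(("^", "x", 2))` yields `("*", ("*", 2, ("^", "x", 1)), 1)`, i.e. 2x in unsimplified form. Every step is a blind, local rule application -- no understanding of calculus required, which is precisely the property the face-recognition problem lacks.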
So what's the conclusion here? Not much, other than a pessimistic reminder that human-level "AI" is still probably a long way off, but with hardware upgrades (dramatic ones, mind you) and better algorithms, techniques, and architectures, it might not be fantasy one day (I hope!).