The only limitation we're going to see with computing speed is the amount of energy needed to run that kind of processing power.
We have new technologies on the horizon that will blow away what we have now. For example, quantum computers.
We'll see limits no doubt, but those limits will be well beyond what is needed for simulated human intelligence, or even universe simulations.
Whoa Mr. Badass, bring your teacher into this thread and I'll run circles around the both of yas.
If you knew what I've spent most of my time doing for the past 20 years, this certainly wouldn't be a debate. That's what makes this fun.
We're getting there; the hurdle isn't about knowing *how* to do it. The only thing stopping us is a lack of processing power, and getting that power is, again, inevitable.
Well, you clearly didn't look up what I recommended, so I'll go ahead and enlighten you.
Take a transistor (we're going to call him Bob). Bob has a million cousins, and they all live on a tiny little island. Bob takes in electricity, and a portion of that current passes through him and comes out along a different path (maybe, depending on what work Bob is doing). The problem is that not all of the electricity going in comes back out as useful current: some of that energy is converted into heat, courtesy of the 2nd law of thermodynamics, which tells us the amount of usable energy goes down whenever any work is done in a system. Period.
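If you want to see roughly why Bob heats up when you push him harder, here's a back-of-the-envelope Python sketch. It uses the standard first-order dynamic-power relation for CMOS (P is roughly activity x capacitance x voltage squared x frequency); the capacitance, voltage, and activity numbers below are made up purely for illustration, not any real chip's specs.

def dynamic_power(activity, capacitance_farads, voltage_volts, freq_hz):
    """Approximate dynamic (switching) power of a block of transistors, in watts."""
    return activity * capacitance_farads * voltage_volts ** 2 * freq_hz

# Hypothetical block: 1 nF of switched capacitance, 1.2 V supply, 20% activity.
base = dynamic_power(0.2, 1e-9, 1.2, 3e9)   # running at 3 GHz
hot  = dynamic_power(0.2, 1e-9, 1.2, 6e9)   # same block at double the clock

print(f"3 GHz: {base:.2f} W   6 GHz: {hot:.2f} W")
# Power scales linearly with clock and quadratically with voltage,
# and every one of those watts comes back out of Bob as heat.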
So take Bob and his millions of cousins, aunts, uncles, and brothers, squeeze them down onto an even tinier island, and then multiply them. Now they can do more work faster, but unfortunately Bob is starting to feel a little hot.
Squeeze them down more, more, and MORE. Run more and more current through them.
Guess what you're going to get: Bob will be doing so much work alongside his transistor family that he'll fry himself to death in milliseconds (depending on what material he's made out of), and right now we don't know of a material we could make Bob out of that would let him live much longer. That's why, if you've noticed, single-core speed hasn't really gone up significantly since approximately 2004. What manufacturers did instead was add more cores and lean on multi-core scheduling, so that different processes can be managed efficiently and quickly and the machine appears to be "faster". In reality, if you were only running 1 process, a computer from 6 years ago would be just about as fast as a computer today. A classic scheduling algorithm for juggling those processes is the MLFQ:
Multilevel feedback queue - Wikipedia, the free encyclopedia
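If the MLFQ idea is new to you, here's a toy Python sketch of the concept from that article. The specific rules I picked (three levels, time slice doubling per level, periodic priority boost) are illustrative assumptions, not any real kernel's policy.

from collections import deque

class Job:
    def __init__(self, name, work):
        self.name = name
        self.work = work  # remaining CPU "time units" the job still needs

def mlfq(jobs, levels=3, base_quantum=2, boost_every=20):
    queues = [deque() for _ in range(levels)]  # queue 0 = highest priority
    for job in jobs:
        queues[0].append(job)                  # rule: new jobs enter at the top
    tick, last_boost = 0, 0
    while any(queues):
        # Rule: every boost_every ticks, move everything back to the top queue
        # so long-running jobs don't starve.
        if tick - last_boost >= boost_every:
            last_boost = tick
            for q in queues[1:]:
                while q:
                    queues[0].append(q.popleft())
        # Rule: always run the first job in the highest non-empty queue.
        level = next(i for i, q in enumerate(queues) if q)
        job = queues[level].popleft()
        quantum = base_quantum * (2 ** level)  # lower levels get longer slices
        run = min(quantum, job.work)
        job.work -= run
        tick += run
        print(f"t={tick:3d}  ran {job.name} at level {level} for {run}")
        if job.work > 0:
            # Rule: a job that used its whole slice looks CPU-bound, so demote it.
            queues[min(level + 1, levels - 1)].append(job)

mlfq([Job("A", 5), Job("B", 12), Job("C", 3)])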
These are the very basics of the Heat Wall. We're literally running up against fundamental physical laws that prevent us from going any faster without better cooling (which is impractical unless you spend millions of dollars to build a state-of-the-art nitrogen-cooled supercomputer). Try to mass-produce those, bud, I dare ya. There are also other problems with multiple cores, such as Dark Silicon, but STS is no place for that kind of discussion.
Love this shit:
ftp://ftp.cs.utexas.edu/pub/dburger/papers/ISCA11.pdf
Key point:
Since 2005, processor designers have increased core counts to exploit Moore’s Law scaling, rather than focusing on single-core performance. The failure of Dennard scaling, to which the shift to multicore parts is partially a response, may soon limit multicore scaling just as single-core scaling has been curtailed.
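For anyone who hasn't run the numbers, here's the first-order arithmetic behind that quote, sketched in Python. It's the idealized Dennard model (leakage and real process details ignored): while supply voltage scaled down with feature size, power density stayed flat every shrink; once voltage stopped scaling, each shrink makes the chip hotter per unit area.

def power_density_ratio(s, voltage_scales):
    """Power density after one shrink by linear factor s, relative to before."""
    c = 1 / s                           # capacitance shrinks with dimensions
    v = (1 / s) if voltage_scales else 1.0
    f = s                               # shorter gates can switch faster
    area = 1 / s ** 2                   # same transistor now fits in 1/s^2 the area
    return (c * v ** 2 * f) / area      # dynamic power ~ C*V^2*f, per unit area

s = 1.4  # roughly one process generation (~0.7x linear shrink)
print("Dennard era :", power_density_ratio(s, voltage_scales=True))   # ~1.0, stays flat
print("Post-Dennard:", power_density_ratio(s, voltage_scales=False))  # ~s^2, runs hotter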