The Singularity

...It's a definition as perfect as the definition of God... there's no way to prove it wrong. Really fucks with your mind.

and here is why:

"..useful varieties of human knowledge rely upon synthetic a priori judgments, which are, in turn, possible only when the mind determines the conditions of its own experience. Thus, it is we who impose the forms of space and time upon all possible sensation in mathematics, and it is we who render all experience coherent as scientific knowledge governed by traditional notions of substance and causality by applying the pure concepts of the understanding to all possible experience "
written over 200 years ago - Immanuel Kant
 


Been following his blog for a while now. I believe that within 20 years we will reach the point where we can halt and reverse the ageing process, and nanotechnology will advance to allow nanorobots to circulate through the body to destroy tumours, unblock clots, etc. But what is the answer for physical trauma? Reverse-engineering the brain? :)
 
The takeaway from Kurzweil's stuff for me is that the value of human labor is decreasing exponentially and that the only way to stay relevant in the future is to own resources. Bank hard enough now to secure power in post-singularity society.
 
The takeaway from Kurzweil's stuff for me is that the value of human labor is decreasing exponentially and that the only way to stay relevant in the future is to own resources. Bank hard enough now to secure power in post-singularity society.
Pretty much this. We'll either enter a system where there is no possible movement between social classes, or we'll have some sort of communism. The latter would be true if we were to live in a virtual reality (something Ray Kurzweil has predicted), and the former if we don't.

Human labor's value will eventually decrease to nothing. If we are in a virtual reality, the value of resources will also be close to nothing. Combine the two, and humans have pretty much nothing valuable to offer.


But what is the answer for physical trauma? Reverse-engineering the brain? :)
Having us plugged into a near-perfect virtual reality all the time? Then there would be no physical trauma (assuming something outside of the virtual reality doesn't fuck up).
 
how long b4 i can buy a robot girl?

 
As others have said, just because computers get faster doesn't magically make them able to think.

Evolution of machines/software and how they act is based in code, code which requires a huge amount of man-hours to create and test. Take Watson for example. Yes, he beats up on humans in Jeopardy, but they are saying it cost IBM close to $100M and 4 years to get him to that stage. 4 years to get him to play a quiz show. A quiz show which is arguably based on memory (hard-drive space) and the ability to take a question wrapped in the subtleties of English and turn it into an answer. The amount of work required to "sort of" understand English and all its subtleties -- as a computer -- is sort of mind-blowing. As a programmer, I watched the whole thing and was completely impressed, while simultaneously watching a loved one who was completely underwhelmed by what was going on. And sadly, the only reason he really won was that he could press the button faster than the other two.

To get any machine to even mimic a human in terms of understanding, and then beyond that, even grasp creativity, is way beyond the scope of anyone right now, or in the foreseeable future. Machines eventually taking over the world is tied more into code and the creativity that humans wield in code than it is with computer speeds or how hardware evolves. Computers will continue to get faster and faster, but they will mimic something like Star Trek computers more than anything more sci-fi than that. We will teach them things to do, and they will do them. They won't teach themselves creative things to do, and do them. At least not for a very long time.

The entire problem with the theory that machines will eventually usurp us (in terms of being better than us at _everything_) is that machines are completely and undeniably tied to rules that we have created. Code creates and controls computers, and this code is based on these rules. While we can make the rules flexible, the rules will always be firm at their most basic levels. Saying, "Well, a computer can just rewrite its code and make itself smarter" is far more complex than it seems. While it is fun to imagine, it just isn't possible. Eventually machines will be able to compare and contrast -- choosing something more optimal and expanding on that -- but those choices are always tied to code, code which is based on human intelligence. Rules that we create. Will they ever be superhuman? At some things, definitely. But at everything? Naw. Augmented humans becoming superhuman? That is something more in the realm of the believable than a pure silicon ruler.

The ability to adapt, expand and be creative is something tied to being human. Can computers be like that? No -- at least not in our lifetimes, or our grandchildren's lifetimes. I only say that because it will take hundreds of years to create the code that allows computers to have all the subtleties of humans. And even then, they will end up being as flawed as we are, because we taught them to act like us.

Anyway -- just because computer speeds are growing exponentially doesn't mean we are teaching them just as fast, or coding them to do things just as fast. We make computers faster so they can do more things for us at the same time; that is it. Making computers faster doesn't mean we are creating code for them just as fast. While computer speeds grow exponentially, our ability to teach them only grows linearly.
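
To put toy numbers on that (illustrative figures only, not real benchmarks): if hardware doubles every 2 years while our "teaching" grows by a fixed amount per year, the gap explodes.

Code:
# Toy figures only: hardware doubling every 2 years (Moore-ish trend),
# software capability growing by one fixed "unit" of teaching per year.
hardware, software = 1.0, 1.0
for year in range(1, 21):
    hardware *= 2 ** 0.5   # doubles every 2 years
    software += 1          # linear growth
    if year % 5 == 0:
        print(f"year {year:2d}: hardware x{hardware:7.1f}, software x{software:4.1f}")
# After 20 years: hardware is ~x1024, software is ~x21.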

The singularity is a fairy tale, like living forever, and god. Just because humans can imagine it, doesn't make it real.

 

If we design AI the way you are proposing we should... then yes you are right.

True AI will not be the result of us programming a complete and thorough human mind.


That's not how you program artificial intelligence, and it's not how IBM's Watson was programmed. IBM's computer was given huge amounts of data, along with the correct answers to old Jeopardy questions, and was able to teach itself how to play Jeopardy. The coders just programmed a framework that allowed Watson to randomly create rules and figure out which of its randomly created rules worked.

The principles behind this type of machine learning (neural networks) have been heavily studied, and they work. And they can, with sufficient processing power, be applied to human knowledge. There are no issues with us 'not being smart enough' to program artificial intelligence, or even with us making firm or inflexible rules. Programmers aren't writing the rules for artificial intelligence. They're writing a system that can randomly generate rules and learn which rules work and which don't. Through randomness and cutting out things that don't work (which simulates biology), we don't need to worry about the inability of humans to program the human brain. We just need a sufficient amount of data, storage, and processing speed.
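
To make that concrete, here's a toy sketch of the "generate random rules, keep what works" idea (an invented toy problem, nothing to do with Watson's actual code):

Code:
import random

# Toy version of "randomly create rules, keep the ones that work".
# The rules come from randomness; the data plus correct answers do the filtering.
random.seed(0)
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # learn OR from examples

def random_rule():
    w1, w2, b = (random.uniform(-1, 1) for _ in range(3))
    return lambda x: 1 if w1 * x[0] + w2 * x[1] + b > 0 else 0

def accuracy(rule):
    return sum(rule(x) == y for x, y in data) / len(data)

# Generate many random rules and keep whichever scores best on the data.
best = max((random_rule() for _ in range(1000)), key=accuracy)
print(accuracy(best))  # usually 1.0: a working rule was found, not hand-written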

Between the vast amount of information on the web and rapidly increasing computer speeds, it will only be a matter of time before a neural network like Watson is able not only to answer Jeopardy questions but to do anything else a human can do.

Since the 'rules' for AI (just like the rules that govern Watson) are generated randomly and not created by coders, it allows for creativity and adaptation. We don't need to teach them all of the nuances of being human. Through trial and error it will learn on its own.


You are right... there might be a time period (maybe 10-20 years) where computers are fast enough, but software hasn't caught up. But I can't imagine the gap being any longer than that.
 
The principles behind this type of machine learning (neural networks) have been heavily studied, and they work. And they can, with sufficient processing power, be applied to human knowledge. There are no issues with us 'not being smart enough' to program artificial intelligence, or even with us making firm or inflexible rules. Programmers aren't writing the rules for artificial intelligence. They're writing a system that can randomly generate rules and learn which rules work and which don't. Through randomness and cutting out things that don't work (which simulates biology), we don't need to worry about the inability of humans to program the human brain. We just need a sufficient amount of data, storage, and processing speed.

You are basically describing evolution. Like I mentioned, machines may eventually have the ability to compare and contrast, but they will never be able to apply that beyond the set of rules that we have defined.

Machines are at a permanent disadvantage. While we follow our own rules defined by this universe (the laws of physics, etc.), they have to follow rules that we have created for them, rules that sit on top of the ones governing us. The problem is, they will never know if their decision is growth unless a human tells them it is. If you are trying to say that they function outside of the rules we have created for them, then you are fooling yourself. Just like we can't change the rules of physics, they can't change the rules we've created for them.
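
(For what it's worth, this is literally what a "fitness function" is in machine learning: the definition of "growth" is a function a human wrote. A made-up sketch:)

Code:
# Hypothetical sketch: in any learning/evolutionary search, "growth" is
# whatever this human-written function says it is. The machine optimizes
# the number we hand it; it has no independent notion of "better".
def fitness(answer: str, correct: str) -> float:
    return 1.0 if answer == correct else 0.0  # a human chose this metric

print(fitness("42", "42"))  # 1.0: counts as "growth", because we said so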

And even if they don't require humans to decide whether something they do is "growth" or "failure", it will take almost forever for humans to create the code for them to even retain this knowledge, or have an understanding of it.


Between the vast amount of information on the web, and rapidly increasing computer speeds it will only be a matter of time before a neural network like Watson is able to not only answer Jeopardy but do anything else a human can do.
Watson isn't magic. Watson doesn't think. He does what he is told. He fucked up the final question, and the programmers told everyone why: they didn't account for something. That's the true nature of why "thinking" computers will fail -- inevitably there will be a problem with the code, code created by humans. "Neural networks" is a buzzword. These networks aren't magically creating anything. They are bound to a set of rules/algorithms. They can change and be flexible, but the flexibility is not what you are trying to imagine, or even close to what you are pretending it is. They are bound within ranges. We can give randomness to programs, but it all falls within sequences, parameters, and variables that we can predict.
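
That bounded randomness is easy to demonstrate: program "randomness" is pseudo-random, i.e. seeded, ranged, and reproducible. For example:

Code:
import random

# Program "randomness" is pseudo-random: given the same seed, the same
# "random" sequence comes out, and every value stays inside the range we set.
random.seed(42)
first_run = [random.randint(1, 6) for _ in range(5)]
random.seed(42)
second_run = [random.randint(1, 6) for _ in range(5)]
print(first_run == second_run)               # True: reproducible
print(all(1 <= n <= 6 for n in first_run))   # True: bounded by our parameters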

Since the 'rules' for AI (just like the rules that govern Watson) are generated randomly and not created by coders, it allows for creativity, and adapt. We don't need to teach them all of the nuances of being human. Through trial and error it will learn on its own.
Rules are not created randomly; they are created for a specific reason. If we allowed a computer to be completely random, it would do nothing useful. We need to give it direction. The issue is that a computer will never understand reason; it will only understand direction. At least for now.

You are right... there might be a time period (maybe 10-20 years) where computers are fast enough, but software hasn't caught up. But I can't imagine the gap being any longer than that.
You are fooling yourself if you think computers will be even close to real thought within 20 years. I would say hundreds of years. And it will take a huge amount of money and study to get them close to anything resembling a singularity.

I hate saying something is completely false, since we're so imaginative and creative, but this is pure science fiction on so many levels.
 
You are basically describing evolution. Like I mentioned, machines may eventually have the ability to compare and contrast, but they will never be able to apply that beyond the set of rules that we have defined.

Machines are at a permanent disadvantage. While we follow our own rules defined by this universe (the laws of physics, etc.), they have to follow rules that we have created for them, rules that sit on top of the ones governing us. The problem is, they will never know if their decision is growth unless a human tells them it is. If you are trying to say that they function outside of the rules we have created for them, then you are fooling yourself. Just like we can't change the rules of physics, they can't change the rules we've created for them.
How is this any different from human evolution? The rules that made humans who we are were the rules that led to us being more likely to survive, reproduce, and rear our young. We are constrained by those rules just as much as an AI would be.


Watson isn't magic. Watson doesn't think. He does what he is told. He fucked up the final question, and the programmers told everyone why: they didn't account for something. That's the true nature of why "thinking" computers will fail -- inevitably there will be a problem with the code, code created by humans. "Neural networks" is a buzzword. These networks aren't magically creating anything. They are bound to a set of rules/algorithms. They can change and be flexible, but the flexibility is not what you are trying to imagine, or even close to what you are pretending it is. They are bound within ranges. We can give randomness to programs, but it all falls within sequences, parameters, and variables that we can predict.

And even if they don't require humans to decide whether something they do is "growth" or "failure", it will take almost forever for humans to create the code for them to even retain this knowledge, or have an understanding of it.
Again, we aren't writing the rules, just like evolution didn't 'write' the rules for humans. You said what I'm describing is evolution, and you are completely right. We would be applying evolution to computers. We would need to program the framework (the machine equivalent of DNA). We wouldn't be hand-crafting millions of rules to capture the nuances of everything that being human involves. If that were the case, I would agree with you about how long it would take. But it isn't.


Rules are not created randomly; they are created for a specific reason. If we allowed a computer to be completely random, it would do nothing useful. We need to give it direction. The issue is that a computer will never understand reason; it will only understand direction. At least for now.
No, rules are created randomly. If you look at any application of "machine learning" currently in use (including Watson), you will see that huge numbers of random rules are created and then refined. Watson was not the result of hand-crafted rules for each and every possible case. Just as humans were created through random genetic mutations, AI would be created in the same fashion.
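
A minimal sketch of that random-generation-plus-refinement loop (a toy with a made-up target, not how Watson actually works):

Code:
import random, string

# Toy mutate-and-select loop: candidate "rules" (strings here) are varied at
# random; a human-defined score decides which variants survive.
random.seed(1)
TARGET = "machine learning"
ALPHABET = string.ascii_lowercase + " "

def score(s):
    return sum(a == b for a, b in zip(s, TARGET))  # refinement criterion

def mutate(s):
    i = random.randrange(len(s))                   # random variation
    return s[:i] + random.choice(ALPHABET) + s[i + 1:]

best = "".join(random.choice(ALPHABET) for _ in TARGET)
while score(best) < len(TARGET):
    child = mutate(best)
    if score(child) >= score(best):                # keep what works
        best = child
print(best)  # reaches the target by random mutation plus selection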


You are fooling yourself if you think computers will be even close to real thought within 20 years. I would say hundreds of years. And it will take a huge amount of money and study to get them close to anything resembling a singularity.

I hate saying something is completely false, since we're so imaginative and creative, but this is pure science fiction on so many levels.
I meant 20 years from when hardware will be fast enough for AI... so my prediction would be more like 35-45 years. Either way, I don't think we're going to ever agree on this. I guess we'll find out who's right in 40 years.
 
How is this any different from human evolution? The rules that made humans who we are were the rules that led to us being more likely to survive, reproduce, and rear our young. We are constrained by those rules just as much as an AI would be.

It is different because we are evolving in a system that computers can't even conceptualize yet. They aren't even on the same playing field we are. They have no concept of the rules of our universe, because they are only playing with rules defined by us. Only once we can fully define what it is to be human will they finally be able to play the same game we're playing. The problem is, humans can't even define what it is to be human.

Again, we aren't writing the rules, just like evolution didn't 'write' the rules for humans. You said what I'm describing is evolution, and you are completely right. We would be applying evolution to computers. We would need to program the framework (the machine equivalent of DNA). We wouldn't be hand-crafting millions of rules to capture the nuances of everything that being human involves. If that were the case, I would agree with you about how long it would take. But it isn't.
You are correct in one sense -- the only real way for a computer to evolve is if we don't have to define millions of rules for it. The problem is, that doesn't apply to our system of existence, only to their own.

Computers and software exist in a system, a system we have created. Our DNA revolves around a completely different system. Changes in that DNA are like changes to the code of a program. The difference is, our program exists in the real world, while computers exist in a universe we have created. Until we have fully defined ourselves in our own system, we will never be able to create something that can evolve from that, beyond procreating.

I meant 20 years from when hardware will be fast enough for AI... so my prediction would be more like 35-45 years. Either way, I don't think we're going to ever agree on this. I guess we'll find out who's right in 40 years.
You're right, we won't agree, and that's completely fine. In 40 years I'll be 70. If we're ruled by machines then I won't give a fuck either way, as long as I'm getting a sponge bath regularly. But if you want to put a fiver on it to prove your conviction to our silicon rulers, I'm all for it ;)
 
Another thought... this might not be possible because we may lack the intelligence to develop it.

Garbage in, Garbage out.

The thing is, you don't have to do it in one big jump. The first thing we develop might just be a memory enhancement, and it goes from there.
 
And I want to add one last thing to my argument, this TED lecture:

Patricia Kuhl: The linguistic genius of babies | Video on TED.com

An amazing discussion about how human babies learn language. We really have no fucking clue about ourselves if this is something we're only looking at now. We aren't even close to figuring out WHY we can do what we do. Yet people think that we can teach silicon how to be like us? Let's be realistic here. Maybe in a hundred, actually many hundreds of years, we will be able to replicate in silicon what we do in carbon, but in the next 40 years? No. Your singularity is a Mormon's rapture ;)
 
The biggest mistake is people believing Moore's Law is actually a universal "Law". It's just a name given to a trend and, like the price of stocks, it can change. I'm likely not the only one who could borrow clichés from traders, but past performance is not always indicative of future performance.
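
To illustrate: extrapolating a trend is trivial arithmetic, and nothing in the arithmetic guarantees the trend continues (made-up numbers below):

Code:
# Toy extrapolation: fit the observed growth factor and project it forward.
# The projection always computes; whether reality follows it is another matter.
past = [1, 2, 4, 8, 16]             # made-up counts, doubling each step
ratio = past[-1] / past[-2]         # observed growth factor (2.0)
projection = [past[-1] * ratio ** k for k in range(1, 6)]
print(projection)                   # [32.0, 64.0, ...]: a pattern, not a law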

As humans we do like to find patterns and meaning in the inane, and I believe Moore's Law to be no different. I'm not saying that I think the Singularity won't happen in 45 years (I actually have zero idea); I just see people's use of Moore's Law as a crutch as rather weak.