Geordie Rose speaks on aliens, A.I and summoning entities

Elon Musk, Stephen Hawking and many of the greatest minds on the planet disagree.

https://www.google.ca/amp/s/www.van...billion-dollar-crusade-to-stop-ai-space-x/amp
These guys are figureheads of the science community, and I doubt they have actually studied or worked with AI.

The real scary part of the technology world is automation and how it will cause everyone to lose their jobs. That is a far more realistic and potent danger than AI will be, unless some new scientific breakthrough happens in AI, which hasn't happened for a long time. The newest wave of AI (artificial neural networks) is actually a failed concept from the 60s that only recently got revitalized. And it is literally just a simplified, plagiarized version of our brain.
 
I think Musk has a much deeper understanding of AI than you give him credit for. By all accounts he is a self-trained rocket scientist; he's hands-on and no figurehead. The man has a great mind, and when he devotes a good chunk of his very valuable time to thinking about AI, I suspect he can grasp the topic.
 
I'm sure he has an understanding at the top level of what the capabilities could be, but as far as realistic and practical applications go, I believe that is a different story.

In theory, if we can come up with some way to encapsulate our framework for learning into machines and have near-infinite computational and storage capabilities, then yes, it could easily be possible for AI to overtake humans in the things we are currently superior at.

However, then you look at the absolutely gigantic 16,000+ CPU network that Google put up to identify cats in videos (which only has about an 80% accuracy rate) and realize that we are very far from anything that could potentially 'take over the world', both in the capability of sentience and in the hardware requirements.

https://www.wired.com/2012/06/google-x-neural-network/
 
I believe Musk and Hawking are obviously trying to look into the future (something Musk has proven to be very good at) and are anticipating computing power continuing to increase at a rapid rate.
 
How do reptilians fit into all of this?
 
The whole AI thing is a bunch of ignorant doomsaying. Take it from someone who has studied and worked in AI: they are not going to take over the world. Stop letting science fiction movies subvert your common sense.

AI is still in its infancy and nothing in the current day suggests that it will get anywhere close to actual sentient thinking. With clever programming and massive processing power it might look sentient (like a chat bot can look like a human talking), but there is no framework for making something sentient (or even for defining how to get there).
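To show what I mean by 'looking sentient without being sentient', here's a toy chat bot sketch I threw together (everything in it is made up by me, not taken from any real product): it can hold a superficially human-sounding exchange with nothing but hard-coded pattern matching, and there is no learning or understanding anywhere in it.

```python
import re

# A handful of hand-written patterns and canned responses.
# Everything the bot "knows" was typed in by a programmer.
RULES = [
    (r"\bi feel (.+)", "Why do you feel {0}?"),
    (r"\bi think (.+)", "What makes you think {0}?"),
    (r"\b(hello|hi)\b", "Hello. What would you like to talk about?"),
    (r"\bai\b", "Do machines worry you?"),
]

def reply(message: str) -> str:
    """Return a canned response; no state, no learning, no understanding."""
    text = message.lower()
    for pattern, response in RULES:
        match = re.search(pattern, text)
        if match:
            return response.format(*match.groups())
    return "Tell me more."

if __name__ == "__main__":
    print(reply("I feel like AI will take over"))  # Why do you feel like ai will take over?
    print(reply("hello"))                          # Hello. What would you like to talk about?
```

Scale that idea up with enormous amounts of clever engineering and compute and you get something that can fool people in conversation, but the basic situation is the same: it is following rules humans gave it.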

And computation power/storage capacity is reaching its limits. You can only shrink mechanical parts to a certain point, and transistors are already at ~5nm, which is about the limit for how small they can get.

All these people going out there and talking about how dangerous AI is are simply milking it to get views/followers.

I have an AI question. Man programs an AI with a set of specific rules. If that AI creates another AI, does it have to follow the same set of rules, or can it create its own set of rules for the second AI?
Have you read up on that Google AI story from the beginning of the month?
 
The problem is in the AI programming the other AI; that would assume that AIs have the ability to program, which is a high-level intellectual skill. If they are at that point anyway, then they are probably intelligent enough to pose a danger to humanity.

Now consider the step of getting from what we have today (simple problems) to having an AI that can actually program another AI with some sort of logic for some purpose. That would require some high-level thinking and sentience. We don't have a framework for creating sentience, for making things learn to serve some purpose other than the ones we program them to learn.

We could create deadly robots now, with very limited use of AI as it is. Simply make a mobile turret, add some way of recognizing humans (which most smartphones already have), and aim and shoot on recognition of a human. It is surprisingly simple to do (apart from the actual shooting mechanics) and could pose a huge threat. That mobile robot has no way of learning anything; it's simply a dumb thing we humans have programmed to do a task, and no AI is really needed to do it.
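To make the "automation, not intelligence" point concrete, here is a rough sketch of that kind of control loop (all the function names are stand-ins I made up; a real build would plug in an off-the-shelf detector, and I've kept the "response" as a harmless print so the sketch stays generic). Every behaviour is a fixed if/then written by a human, and nothing in the loop learns anything.

```python
import random
import time

def capture_frame():
    # Stand-in for grabbing one frame from a camera.
    return object()

def detect_person(frame) -> bool:
    # Stand-in for an off-the-shelf person detector (the kind most phone
    # cameras already ship with). Here it just flips a biased coin so the
    # sketch runs end to end without real hardware.
    return random.random() < 0.1

def trigger_response():
    # Whatever fixed action a human wired up: sound an alarm, send an
    # alert, etc. The machine does not decide this; a programmer did.
    print("person detected -> executing pre-programmed response")

def control_loop(steps: int = 50):
    # The robot's entire "behaviour" is this hand-written loop.
    # No objective, no memory, no learning: the same input always
    # produces the same programmed output.
    for _ in range(steps):
        if detect_person(capture_frame()):
            trigger_response()
        time.sleep(0.05)

if __name__ == "__main__":
    control_loop()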
 
We should be mass producing Reese's Pieces in preparation to bribe aliens to be nice to us primitive humans.
 
The Google AI I mentioned earlier created another AI, trained it, and it ended up being better than the human-created AI.
 
It's basic stuff that has been happening for ages now. People have been using genetic algorithms to create neural network structures forever. It's not really an AI training another AI; it's an AI determining the structure of the neural network (i.e., how many layers, what type of network, what type of activation function, etc.) based on some fitness function (how good the output is). It's nothing special, and I did something similar back in university.

It is not at all sentient; again, this is all humans just bootstrapping some AI to set up a neural network structure (that a human would otherwise just guess and hope for the best).

Below is a wiki article which essentially shows what they did; you can find examples of it from the 90s.
https://en.wikipedia.org/wiki/Neuroevolution
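For a rough idea of what that looks like in practice, here's a bare-bones toy version I wrote (the fitness function is a made-up stand-in for actually building and training each candidate network, which is the expensive part): a genetic algorithm just mutates a handful of structural hyperparameters and keeps whatever scores best.

```python
import random

# Search space for the network structure: the "genome" is just a few
# hyperparameters, exactly the things a human would otherwise guess.
LAYER_COUNTS = [1, 2, 3, 4]
LAYER_WIDTHS = [8, 16, 32, 64, 128]
ACTIVATIONS = ["relu", "tanh", "sigmoid"]

def random_genome():
    return {
        "layers": random.choice(LAYER_COUNTS),
        "width": random.choice(LAYER_WIDTHS),
        "activation": random.choice(ACTIVATIONS),
    }

def fitness(genome) -> float:
    # Stand-in for the expensive part: build a network with this structure,
    # train it on the task, return validation accuracy. A made-up score is
    # used here so the sketch runs instantly.
    score = 1.0 - abs(genome["layers"] - 3) * 0.2       # pretend 3 layers is best
    score += 1.0 - abs(genome["width"] - 64) / 128.0    # pretend width 64 is best
    score += 0.3 if genome["activation"] == "relu" else 0.0
    return score

def mutate(genome):
    # Copy the genome and randomly change one hyperparameter.
    child = dict(genome)
    key = random.choice(list(child))
    options = {"layers": LAYER_COUNTS, "width": LAYER_WIDTHS,
               "activation": ACTIVATIONS}[key]
    child[key] = random.choice(options)
    return child

def evolve(generations=20, population_size=12):
    population = [random_genome() for _ in range(population_size)]
    for _ in range(generations):
        # Keep the best half, refill the rest with mutated copies.
        population.sort(key=fitness, reverse=True)
        survivors = population[: population_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(population_size - len(survivors))]
    return max(population, key=fitness)

if __name__ == "__main__":
    print("best structure found:", evolve())
```

Notice that the "AI designing an AI" here is just a search loop a human wrote, optimizing a score a human defined.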
 
Very cool, thanks for the info!
 
Elon Musk never worked on AI as far as I know.

If you don't believe me, go and read up, study AI, learn about it yourself, try to implement something. You'll quickly realize these systems are built for solving relatively simple problems that require a lot of computational power. Most of the 'learning' in AI is actually just the programmer figuring out ways to represent what counts as a good or bad solution, and the AI construct re-adjusting itself to suit that outcome. It's not real learning like human learning (which stems from some intrinsic and hard-to-define properties of survival, reproduction, etc.).
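Here's a deliberately tiny example of what I mean (my own toy, not from any of the linked articles): fitting a line to a few points. The only "learning" is two numbers being nudged to reduce an error measure that a human wrote down.

```python
# Data the programmer chose: points roughly on the line y = 2x + 1.
data = [(0.0, 1.0), (1.0, 3.1), (2.0, 4.9), (3.0, 7.2), (4.0, 8.8)]

w, b = 0.0, 0.0          # the "model": just two adjustable numbers
learning_rate = 0.01

def loss(w, b):
    # The programmer's definition of "bad": mean squared error.
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

for step in range(2000):
    # Gradients of the loss with respect to w and b, derived by hand.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    # "Learning" = moving the numbers in whatever direction the
    # programmer's loss says is an improvement. Nothing more.
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"learned w={w:.2f}, b={b:.2f}, loss={loss(w, b):.4f}")
```

A modern neural network has millions of adjustable numbers instead of two, but the structure of the process is the same: a human-defined objective and a mechanical adjustment procedure.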
But what difference would it make if it was truly sentient or just appeared that way? If its objectives that were programmed into it by humans, or an error in its programming, led to a conflict with us (which probably wouldn't end well for us), what difference would it make whether it had real "sentience" or "consciousness" or was simply following its programming? You know DARPA is working on killer robots, right? It doesn't matter if they are conscious or not. It wouldn't matter. They could be put in the wrong hands or catch a virus that turns them against everyone, don't you think?
 
Who do I believe: this dude Scheme here, who has studied and worked on AI and is a fellow sherdogger... or some random guy by the name of Elon Musk?
I vote for a third, non-sherdog or failed cologne name source
 
I wish he'd get to work on a novel solution for the triple-parking situation over at the Palo Alto Tesla HQ...huge pain in the ass, especially when he happens to be on site.

@Scheme I too am curious why sentience factors so strongly in your considerations and explanations. I'd imagine it's a common end goal/theoretical guide for AI research from basic theory to advanced R&D, but I'm not very informed on the topic in general. Why do you factor it so heavily when assessing the risk of possible runaway systems of automated learning/adaptation/implementation at any point on the scale between toaster and sentient?
 

Because for the whole AI singularity to happen, AI would have to be able to reach sentience and make decisions on its own. The thought that AI will reach human-level sentience with our knowledge today and then turn on us because it realizes we are bad is far from happening. And for a human to program something like "kill all humans" into a robot is extremely difficult and vague, and there are many limitations to it.

Automation is more of a realistic issue, as Hunter mentioned. You could easily strap a rotating machine gun onto a mobile turret, make it shoot at anything that moves and deploy it on the streets of NYC. There doesn't have to be any AI involved there; that is simply automation programmed by humans. If that is what people mean when they say 'AI', then sure, it's possible and could be scary. But there isn't any actual AI there, because it is simply a human telling a robot what to do, i.e., what we have already had for more than half a century now. You could even automate a factory to make these killer robots, send them to random cities around the globe and deploy them. There doesn't need to be any AI involved in that process.

The scary thought about AI is that an AI that is simply learning things will eventually learn the complex problem of eliminating all humans and then create a factory of these killer robots. But for that to happen it would have to be able to learn that extremely complex problem of 'killing all humans' and have the tools to create a killer-robot factory, which are gated by advanced sentience (far beyond what a human can program now) and by processing power/storage capabilities respectively.

The issue isn't with AI (which has proven to be a valuable tool for solving relatively simple problems for humans that require a lot of processing power); it's with automation used in bad ways. That's basically what drones are: automated robots that drop bombs. Not really a new issue we have to look out for. At the end of the day, we are far from AIs being able to program themselves to reach a complex goal, both in theory and in practice.
 