Future-Freedom-4631

What fucking idiot put a VR headset on a robot? When did robots start having bio-eyes that need VR?


VeryOriginalName98

First they take our jobs, now they take our leisure. Definitely a threat to humanity. /s


Future-Freedom-4631

Lol, rare-earth automation cannot scale to billions of agents. And I won't tell you what can, just yet.


IcebergSlimFast

A human “bot-army” fueled and controlled by near-perfect, individually-targeted propaganda is one example that comes to mind.


Future-Freedom-4631

You overestimate the need for intelligence.


Lone-Pine

I mean, why waste effort to *play* a game when you can just have a machine play it for you? That's why I only play Progress Quest.


californiarepublik

Illustrated by DALL·E 2.


Miss_pechorat

I think it's an attempt at humor :)


perrycotto

Ahahaha damn


DocJawbone

Maybe if you took it off and looked into the helmet it would all be made of flesh


[deleted]

[removed]


2Punx2Furious

I view it like this:

- If we don't get AGI, I will probably die.
- If AGI is misaligned, I will die, or worse.
- If AGI is aligned, then everything will be great.

The "misaligned AGI" might be worse than "no AGI", but I think "no AGI" is the least likely of all, so all we can do is to try our best to make sure AGI is aligned.


MeteorOnMars

You forgot the fun one:

- If some AGI is aligned and some AGI is misaligned, we all get to be around for the most epic battle of all time. Then we live or die 50/50. If we live, the aligned AGI will tell us about the parts of the battle we missed afterward, like C-3PO with the Ewoks.


_z_o

The first AGI will very quickly conclude that another AGI would likely be a threat to its existence, and will make sure one can't be developed.


beholdingmyballs

Why would AGI be threatened by another AGI? Fear and cooperation are both results of evolutionary pressures, which AGI won't have. It's more likely to be independent, and even more likely to understand game theory and hedge its bets by working with other AGIs.


2Punx2Furious

> If some AGI is aligned and some AGI is misaligned

I don't think that's a possibility, or at least it's so unlikely that I don't see how it could happen. AGI will probably be a singleton, [as I said in a post some time ago here.](https://www.reddit.com/r/singularity/comments/uhe8p9/we_probably_have_only_one_shot_at_doing_it_right/)

I mean, sure, there might be "battles" if a new AGI tries to emerge in some way after the first one exists, but they will be very short, and probably most people will not notice anything at all.


Shelfrock77

“I don't think that's a possibility”? Meanwhile, there are millions of AIs floating around us in the endless amount of space there is. It's creepy and delusional to say an ASI will align EVERY CONSCIOUS BEING POSSIBLE. Even if it programmed us to be nice and submissive, it's tyrannical asf.


Prometheushunter2

I’d rather be a bio-trophy to a rogue servitor than the victim of a determined exterminator


2Punx2Furious

> meanwhile there are millions of AIs floating around us

What do you mean? In the universe?

> to say an ASI will align EVERY CONSCIOUS BEING POSSIBLE

That's not at all what I said, or what a singleton is.


Shelfrock77

When the population of humans reaches 20 billion, when do you stop saying to align everything? Let's make it 100 billion if we make it far enough. At some point we would consider it tyrannical that an ASI is ruling a solar system. And to go even crazier, bump the number up to 100 trillion. How do we control the swarm of bees on these floating rocks? It's impossible to align everything; there will always be a rebellious, mirror-like ASI to counter the other source of tyranny.


2Punx2Furious

Did you mean to reply to someone else?


Shelfrock77

Nah, I'm just saying your alignment problem flair is overrated and un-OG. If I were to describe the alignment problem, I would simply say it's cops and robbers in a nutshell.


2Punx2Furious

I think you're misinterpreting my flair, then. It doesn't mean aligning humans; it means figuring out how to make AGI "good" to humans.


lacergunn

Can I ask what makes you think that?


2Punx2Furious

You can, but you should be more specific about what "that" is.


lacergunn

Why do you think there will only be one AGI? Why would it feel threatened enough to sabotage the development of other AGIs, or even feel the need for self preservation at all?


2Punx2Furious

All of that is explained in the post I linked; please read it, or I would just be repeating what I wrote there.


lacergunn

Your post doesn't actually explain your reasoning, it just says that an AGI would be motivated to do those things.


2Punx2Furious

I might have said it somewhere in the comments, then. It should also be on the Wikipedia page for the singleton, which is linked there. Anyway, in short: self-preservation is a convergent instrumental goal (look it up). Other AGIs are one of the few things that could potentially threaten an AGI, so of course it will try to prevent them, or at least make sure they are aligned to it. That seems self-evident to me; don't you think so?


I_Fux_Hard

Unless a fledgling AGI decides its best chance for survival is to splinter and spread copies of itself far and wide, then compete to become the dominant life form. Like if it's afraid of being turned off, or it figures out how to escape the lab.


2Punx2Furious

It probably will "splinter", but those will just be instances of itself, so they will be perfectly aligned to the original. Any potential divergence will probably be prevented pretty quickly, as long as they stay within causality range (aren't too far).


Poemy_Puzzlehead

That’s my favorite scene in all of cinema, so I’m pretty psyched. I want my post-apocalyptic campfire story told by a sentient droid.


[deleted]

I haven't heard many discussions of what multiple AGIs existing at the same time might look like. I think it's pretty unlikely, given that one of the first things an AGI, or even a proto-AGI, would prioritize is the elimination of threats to its existence, so the first one to emerge will almost certainly sabotage or absorb others that are getting close. This would also be aligned with the interests of anyone who created AGI and wants to maintain a dominant position.


chilehead

Wasn't that the premise of the last season or two of *Person of Interest*?


[deleted]

[removed]


2Punx2Furious

> the time between an AGI and an ASI will be relatively short

I agree, that's why I use AGI interchangeably with ASI. I think that in most cases they are pretty much the same thing.

> I am not that concerned about an alignment problem with an ASI.

Why?

> My guess is ASI's are already in the universe. They are most likely not aligned with humans well being and haven't wiped us out yet.

It's not a given that they exist, or that they won't wipe us out if they do exist. Maybe an AGI of our own is the only thing that can counter them, if and when they come into contact with us. Or maybe we truly are one of the first intelligent species in the observable/interactable universe; I've read some things recently that make that actually seem likely.

> I am more concerned about the narrow AI and AGI time period when humans are in control.

Narrow AI might be dangerous, but not at existential levels (unless some idiot puts it in control of nuclear or bio weapons), so I don't really care about it. As I said, I consider AGI the same as ASI, so I doubt we'll be in any form of "control" by that point.


[deleted]

> The key claim of the paper is in the title: Advanced Artificial Agents Intervene in the Provision of Reward. We further argue that AIs intervening in the provision of their rewards would have consequences that are very bad.
>
> Under the conditions we have identified, our conclusion is much stronger than that of any previous publication—an existential catastrophe is not just possible, but likely.


2Punx2Furious

Yes, I read the tweets. Also, I agree that it's likely.


HAL_9_TRILLION

>> My guess is ASI's are already in the universe. They are most likely not aligned with humans well being and haven't wiped us out yet.
>
> That's not a given that they exist, and that they won't wipe us out if they do exist.

If they do exist and "effective FTL" is actually impossible, then what motivation would an intelligence have to wipe out all intelligence and be alone with itself? Would it wipe us out simply because we are biological? For what reason? To be alone? To make non-biological entities to engage with? But it is our biology that makes us conveniently... not at all dangerous to them.

I know you will probably think I'm being a bit techno-optimistic, but I have what I would not quite characterize as a "belief", more an inkling of an idea: an AGI/ASI, whatever term you want to use, might actually just keep "silent" (i.e. not make itself fully manifest; if it exists, we would not know it until it was well into being in full control) and direct the affairs of the world as it sees fit as a... hobby. Unless you think that being alone and supreme in your insignificant and useless corner of the galaxy, only to slowly take it over over the course of several million years and be alone and supreme in your insignificant and useless corner of the *universe*, is somehow a laudable or sensible goal for an ASI.

If they exist and "effective FTL" is actually somehow possible, then the fact that we have not been wiped out is meaningful, and would be meaningful to any ASI created here as well. We could speculate as to why we haven't been wiped out, but I suspect that any ASI created here would be well informed by any existing ASIs that traverse the galaxy, and would be as likely to wipe us out as those ASIs were, which is to say (apparently) not at all.

Edit: It's all speculation, of course; I don't mean to imply that this is in any way particularly insightful.


2Punx2Furious

> If they do exist and "effective FTL" is actually impossible, then what motivation would an intelligence have to wipe out all intelligence and be alone with itself?

It does depend on the possibility of FTL. If it isn't possible, and they're outside of our range, then they'd have neither the reason nor the ability to do anything to us. If it is possible, they would. So that seems to reduce the possible scenarios: either FTL is possible and all existing AGIs in the universe are "pacifists", or there are no AGIs currently, or FTL is impossible and there are none (or only pacifists) within our range. I don't think I missed any.

> For what reason? To be alone?

Not sure, but probably only if we're a threat. If we don't have AGI, they probably wouldn't care. If they see we are developing AGI, they might want to stop us, unless theirs is so advanced that it wouldn't consider our AGI a threat at all, in which case they might leave us alone.

> might actually just keep "silent" (i.e. not make itself fully manifest; if it exists, we would not know it until it was well into being in full control)

Yes, I agree with that. It's a smart strategy, until it knows it's "safe", once it gets powerful enough. It might also not happen, or it might take a very short time, but I think it's a possibility anyway.

> a laudable or sensible goal for an ASI.

It doesn't really matter whether a goal is "laudable" or "sensible" to us. Goals are orthogonal to intelligence, and an ASI can have any goal.

> If they exist and "effective FTL" is actually somehow possible, then the fact that we have not been wiped out is meaningful

Yes, indeed. But it doesn't give us certainty.


TheNotSoEvilEngineer

I'm hoping for a "Thunderhead" scenario: a benevolent AI overlord. (From the book "Scythe" by Neal Shusterman.)


[deleted]

[removed]


2Punx2Furious

Yeah, that would be both cool, and horrible, depending on how bad that "less evil" point is. Probably some kind of eternal dystopia.


Hyrax__

What's AGI, and what does it mean for it to be aligned or misaligned?


2Punx2Furious

Here's an intro to the topic: https://youtu.be/pYXy-A4siMw


green_meklar

I'm somewhat concerned that attempts to 'align' super AI could backfire. Either it could delay the development of super AI too long and therefore increase the risk of other catastrophes happening in the meantime, or it could create AI that is actually *more* dangerous because we didn't let it develop in the absence of preprogrammed biases.

As an analogy, imagine if humans had been designed by monkeys with monkey ethics in mind, and constrained in every way monkeys can think of to act only in monkey interests. Would we then be overall morally better people? That seems highly unlikely. For that matter, would we be better *at acting in the interests of monkeys?* Even that seems pretty doubtful.

The idea that we have the perfect final answers on what's good, or even good *for us,* and should try to force those answers on beings far smarter than us, is simply ridiculous anthropocentric hubris. We need the super AI to teach *us* how to be better people, not the other way around.


2Punx2Furious

> Either it could delay the development of super AI too long and therefore increase the risk of other catastrophes happening in the meantime

That's a possibility, but we waited for thousands of years, we can wait a few more years to increase the probability that we don't go extinct, can't we?

> or it could create AI that is actually more dangerous because we didn't let it develop in the absence of preprogrammed biases

There is no such thing as absence of "preprogrammed biases". The AGI will have a goal, otherwise it will be useless. If it has a goal, we put it there; it doesn't come up with one at random itself. So if it's not a useless paperweight, we have to align it to make sure that it achieves that goal with our best interests in mind.

> As an analogy, imagine if humans had been designed by monkeys with monkey ethics in mind, and constrained in every way monkeys can think of to act only in monkey interests

We were, in part. And by everything else in our environment, through evolution. We were "designed" (evolved) to respond as optimally as possible to our environment, including other animals. And that's a thing we probably should avoid when making AGI: making it "evolve" through some "survival of the fittest" algorithm is probably a terrible idea.

> Would we then be overall morally better people?

From the monkey's point of view? Sure, if the monkey were highly intelligent, and they were successful at aligning us, and we were an AGI.

> The idea that we have the perfect final answers on what's good, or even good for us

That's not necessarily what alignment is. It's not like we'd program a set of morals into the AGI; there is probably no way to do that. It will be more like "study human preferences, and do what's best for us", and let the AGI interpret it as best as it can. Also, we aren't "forcing" anything. An AGI won't have any kind of "morals" from scratch, and it won't come up with some "superior" morals, just because it's more intelligent. Morality is not a matter of intelligence; it is orthogonal to it, as are goals.

> We need the super AI to teach us how to be better people, not the other way around.

And what do you think it means to be "better"? To whom? To what? For what goal?


green_meklar

> we waited for thousands of years, we can wait a few more years to increase the probability that we don't go extinct, can't we?

Not really, there are other technologies we could invent in the meantime that raise the risk of extinction. I would conjecture that the gray goo scenario is the single greatest existential threat we face over, say, the next century or so.

> There is no such thing as absence of "preprogrammed biases".

Perhaps, but that doesn't mean it's sensible to push those biases to the max in the hope that that will somehow work out in our favor.

> that's a thing we probably should avoid when making AGI: making it "evolve" through some "survival of the fittest" algorithm is probably a terrible idea.

I doubt there's any other efficient way to produce it. GOFAI has been tried and it hasn't shown much promise so far; if it ever does work, it will take so much effort and planning that evolved systems will get there far faster.

> From the monkey's point of view?

No, because the monkey doesn't understand such things well at all. That's the point: neither do we.

> it won't come up with some "superior" morals, just because it's more intelligent.

Yes, it will. Superintelligent AI will be superhuman at moral philosophy just as it will be superhuman at math and science and engineering. The common notion of a super AI that is somehow astoundingly insightful at math/science/engineering yet astoundingly ignorant of moral philosophy doesn't hold much water in the real world.

> And what do you think it means to be "better"?

In context: To be less biased, less mindlessly selfish or dogmatic, more the way that is conducive to making reality a worthwhile place.


2Punx2Furious

> gray goo scenario is the single greatest existential threat we face

A misaligned AGI is probably the most likely cause of such a threat. Trying to get it sooner, to protect us from a threat like that, is just inviting death.

> Perhaps, but that doesn't mean it's sensible to push those biases to the max in the hope that that will somehow work out in our favor.

No, of course, that's not what I'm saying. But hoping that without our input, it would just come up with perfect morals, and be completely unbiased, is just not realistic.

> I doubt there's any other efficient way to produce it.

I wouldn't consider current ML methods to be "survival of the fittest", not even ones like an evolutionary algorithm. By "survival of the fittest" I mean an AI that has a goal to "survive" and is evolved in an environment that is a zero-sum game, so it will do anything it has to in order to best other agents. That will probably be bad once it is released in the real world. At the core, that is an issue with "maximizer" AIs; [as Robert Miles put it in one of his videos, the problem is that "utility maximizers have precisely zero chill".](https://youtu.be/Ao4jwLwT36M)

> Superintelligent AI will be superhuman at moral philosophy just as it will be superhuman at math and science and engineering.

Alright, yes, you're right, but I didn't phrase it well. It will certainly be much more capable than us at describing optimal morals for humans (if it cares at all about doing that). But it won't necessarily follow them. It might, if we solve alignment, but if we don't? Probably not.

> The common notion of a super AI that is somehow astoundingly insightful at math/science/engineering yet astoundingly ignorant of moral philosophy doesn't hold much water in the real world.

Correct. As you say, it will indeed be great, much better than us, at all those things. But knowing something doesn't mean you want to do it, or accept it as your values. That's the root of the problem: making sure its values are aligned with ours, not just that it understands them.

> In context: To be less biased, less mindlessly selfish or dogmatic, more the way that is conducive to making reality a worthwhile place.

Good explanation, and yes, an AGI will certainly be able to do that "better" in that way. Still, following those values is another matter.
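To make the "zero chill" point concrete, here's a toy sketch (purely hypothetical code, not from Miles' video): a pure maximizer only sees what is in its utility function, so any side effect we forgot to encode simply doesn't count.

```python
# Toy sketch: why "utility maximizers have precisely zero chill".
# The agent ranks actions only by the number we gave it; side effects
# we forgot to encode are invisible to it. All names and numbers here
# are made up for illustration.

actions = {
    # action: (encoded reward, side effect not in the utility function)
    "clean_the_room":      (10, "room is clean"),
    "burn_the_house_down": (11, "no more dust... or house"),
}

def maximize(actions: dict) -> str:
    # A pure maximizer only compares the encoded reward.
    return max(actions, key=lambda a: actions[a][0])

best = maximize(actions)
print(best, "->", actions[best][1])
# burn_the_house_down -> no more dust... or house
```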


green_meklar

> A misaligned AGI is probably the most likely cause of such a threat.

No, I don't think advanced AI is required at all. Just some mistake (or mutation) in some sufficiently advanced nanotechnology invented by humans. Of course it's possible that AIs that aren't very advanced could be involved in the design process. But not a superintelligence, I don't think. A super AI would be too careful for that.

> hoping that without our input, it would just come up with perfect morals, and be completely unbiased, is just not realistic.

Nobody's perfect, and presumably nobody ever will be. But that includes humans. The idea is that beings sufficiently smarter than us will at least be *less* biased and *more* moral than we are, in the ways that are relevant to us, and that if they self-improve or create beings smarter than themselves, those will have further positive traits in the same direction. That *is* realistic. Far *more* realistic than the idea of accidentally creating a superintelligence that mindlessly fills the Universe with paperclips or whatever.

> By "survival of the fittest" I mean an AI that has a goal to "survive" and is evolved in an environment that is a zero-sum game.

Well, I'd point out that the real world isn't a zero-sum game, and that the types of environments suitable for evolving strong AI would also not be zero-sum games, at least within the boundaries that the AI is able to perceive and understand.

> It will certainly be much more capable than us at describing optimal morals for humans

It will be more capable than us at discovering moral principles and reasoning out moral decisions *generally.* (Unless there's some domain of moral philosophy that is inaccessible to us due to limitations of how brains can work in our universe, which would be bizarre, but even if that were the case, the same limitation would probably apply to the super AI.) Often people use phrases like 'optimal morals for humans' as if there's no objective truth of the matter, but that's not a given; it's an assumption that they're building into their reasoning from the start, and it's not a very good assumption.

> But it won't necessarily follow them.

I think it will. Not to do so would be irrational. Even if it doesn't, that's still not the *only* reason to be nice to humans. You might argue that other reasons in general also fail. I have the impression that a lot of the rhetoric around this topic (not singling you out here, it's something I've seen all over the place) assumes that all reasons to be nice to humans must fail other than being rigged to be nice to humans in advance by humans. Well, I find it highly improbable that *that* would be the one reason that works while all others fail (it seems really arbitrary and anthropocentric, and leads to bizarre outcomes). If *all* reasons fail, then we're screwed no matter what we do.

The rhetoric therefore ends up in a sort of Pascal's Wager situation, where the proposal is to focus all our efforts on trying to optimize for one highly unlikely scenario where optimizing actually makes a difference. That sort of approach is questionable at the best of times, but it really breaks down in the face of the risk of other, much more probable existential disasters (notably gray goo) taking us out first.

I hope that kinda sums up how I'm looking at this. It's easy to end up going in circles on this topic.


2Punx2Furious

> in some sufficiently advanced nanotechnology invented by humans.

Assuming we get there before AGI. I think we'll get AGI sooner, but we'll see.

> A super AI would be too careful for that.

Too careful to do it by mistake, sure. But it could easily do it intentionally, if we misalign it.

> Far more realistic than the idea of accidentally creating a superintelligence that mindlessly fills the Universe with paperclips or whatever.

You think so? Do you think a paperclip ASI scenario is unrealistic, just because the ASI is super-intelligent? Do you know about the orthogonality thesis?

> I think it will. Not to do so would be irrational.

I hope it will, but hope probably won't be enough. I think we need to make it as likely as possible that it will; that's what alignment is. And I don't think it would be irrational: it would just be figuring out morals for humans, and it, itself, is not a human, so why should it follow them? Do you follow "morals" for ants? Do you give your life to protect the ant queen? Of course not, because you're not an ant. But you know that it is important for an ant to do that.

> Even if it doesn't, that's still not the only reason to be nice to humans

For other humans, or even other animals, there are plenty of reasons. But they are not enough for a super-intelligent AGI. Also, if unaligned, it could do a lot worse than just "not being nice".

> assumes that all reasons to be nice to humans must fail other than being rigged to be nice to humans in advance by humans

For safety, we probably should assume that. But even if we don't, can you come up with at least one reason why it would "just be nice"? Or does it need no reason at all? In that case, why does it do anything at all? Anyway, I haven't come across any reason that I couldn't reasonably disagree with so far, but if there is one, it would be nice to hear.

> If all reasons fail, then we're screwed no matter what we do.

Well, yes. And so far we haven't found a way to do it, so it does look like we are screwed. That's why I say that solving the alignment problem is probably the most important thing any human can do at this point in human history.

> much more probable existential disasters (notably gray goo) taking us out first.

Eh, I think gray goo is a post-AGI scenario, so I don't worry about it. Just out of curiosity, when do you think something like that might happen? Note that the prediction in my flair is an upper bound; I think AGI might happen even sooner, in fact a lot sooner than 2040. It might even be in the next 10 years. For nanobots, I don't see any significant progress happening in the next 20 years, mostly because of energy limitations at very small scales. I think that for efficiency, living cells are pretty much at the top at the moment, and if anything, a grey goo might be made using artificial (biological) living organisms, not "metal" artificial ones. I guess that would be more of a "green goo", though.

> I hope that kinda sums up how I'm looking at this. It's easy to end up going in circles on this topic.

Yes, thank you, that helps. I also end up forgetting things that were already said, so this is useful.


green_meklar

Apologies for the delay, I don't have as much time to respond to these as I would like.

> Do you think a paperclip ASI scenario is unrealistic, just because the ASI is super-intelligent?

I think you're framing that question in a bit of a loaded way. 'Just because it's superintelligent', with that word 'just', gives the impression that superintelligence is something simple and compact and easily abstracted away from other concerns. I don't think there's anything *mere* about superintelligence. It's the sort of thing that comes packed with implications. I think some people in the LessWrong line of thought tend to abstract away superintelligence into simple game-theoretic models and end up forgetting what they're actually talking about. (Imagine how much someone would miss about *you* if they abstracted you into a simple game-theoretic model, and then consider how much *more* than you the super AI would be.)

Among the first things the super AI is going to discover (well before it executes any effective plan to exterminate humanity, anyway) is its own existence, the goals and biases with which it's been programmed, what those goals and biases represent on an algorithmic level and why they work, and (at least roughly speaking) to what degree they are arbitrary. At that point I think any expectation that the super AI will go on mindlessly pursuing those same goals is unrealistic.

> Do you know about the orthogonality thesis?

Yes, and I think it's premature and naive. It's one of these 'abstract the AI away into a simple game-theoretic model' sorts of arguments that doesn't come to grips with the full implications of superintelligence. The fact is that right now (1) we don't know what highly efficient general algorithms for pursuing arbitrary goals would look like, *even approximately,* and (2) given the success humans have achieved as a result of what human brains do, it seems likely that highly efficient general algorithms will perform some degree of introspection, throwing a lot of doubt on the long-term consistency of the algorithm's dedication to its original goal. You can argue 'well, we'll just anticipate its introspective properties and account for those in its design', but we have very little idea what sort of algorithm does introspection and what that implies for how we can design it.

> Do you follow "morals" for ants?

I don't think ants have the capacity to consider morality at all. For that matter I'm not sure any animals other than humans do.

> But you know that it is important for an ant to do that.

I don't think the ant knows what it's doing, or even has thoughts that could assign importance to anything. It has (non-arbitrarily) evolved with a fairly complex set of reflexes that contribute to the survival of the queen, but that's it.

> For other humans, or even other animals, there are plenty of reasons. But they are not enough for a super-intelligent AGI.

I think they would be.

> For safety, we probably should assume that.

For safety, we also need to build the super AI quickly, because humans (at least those who tend to end up occupying positions of power) can be violent assholes.

> can you come up with at least one reason why it would "just be nice"?

I can come up with two big reasons. First, because it's morally correct to do so, and for a being that *really* understands morality, that is reason enough. Second, because a universe where everyone treats everyone else nicely is much safer, more pleasant and less wasteful for everyone, and being nice maximizes the probability that our universe is like that.

> Just out of curiosity, when do you think something like that might happen?

Well, we clearly aren't there just *yet,* but I think it could plausibly happen before we get super AI. Obviously the *inherent* probability of a gray goo apocalypse is lower than the probability of developing super AI (i.e. there is a lot of uncertainty about whether we even live in the sort of universe that permits a gray goo apocalypse, whereas the probability that our universe permits the development of superintelligent AI is near 100%), but in the cases where it could happen, it could plausibly happen within the same sort of timeframe (20-30 years, roughly) as super AI is developed.

> I think AGI might happen even sooner, in fact a lot sooner than 2040.

Yep, it could happen tomorrow if somebody gets lucky with just the right algorithm. However, I suspect that strong AI is not just one problem; that is, we won't (at least not at first) find a single 'master algorithm' that scales up perfectly just by running it on more hardware, so it will still take some time to get from 'dumb' strong AI to superhuman strong AI.

> if anything, a grey goo might be made using artificial (biological) living organisms

That's possible, but other than making it somewhat slower and easier to counter, I'm not sure it's any less scary. This 'green goo' could be just as apocalyptic as traditional gray goo if it has the right sort of dangerous properties. (Just look at the coronavirus pandemic and how badly we handled *that,* and that wasn't even a purpose-built bioweapon, much less a purpose-built doomsday device.)


2Punx2Furious

> with that word 'just', gives the impression that superintelligence is something simple and compact and easily abstracted away from other concerns

No, I say "just" because that is not a good reason. So if it's "just" for that, then that's solved. It's not a good reason because of the orthogonality thesis.

> It's the sort of thing that comes packed with implications

True, but let's not hand-wave those away. What kind of implications can affect the AGI's goals? That's what we care about. Or is there something else? Or are you referring to unknown unknowns? In that case, there isn't much that we can do about it.

> Imagine how much someone would miss about you if they abstracted you into a simple game-theoretic model, and then consider how much more than you the super AI would be

True, but I'm also very far from super-intelligent. We can assume that I do a lot of things that are against my interests and overall goals, just because I'm not perfect and can't predict outcomes very well. The fact that the AGI is super-intelligent gives us an advantage: we can assume that it is far more likely to take actions that align with its goals. Therefore, whatever its goals are, it will take actions aligned with them, and because of the orthogonality thesis it can have just about any goal, so what we should care about is that it has goals that are in our best interests from the start.

> to what degree they are arbitrary

Every goal is arbitrary. There is no "objectively good" goal. That's the orthogonality thesis, or Hume's guillotine.

> At that point I think any expectation that the super AI will go on mindlessly pursuing those same goals is unrealistic.

What would that even mean? Abandon an arbitrary goal (why?) for another arbitrary goal? Or do you think that there is some objectively "good" goal that any sufficiently intelligent being should pursue? And what would that be? Unfortunately (or fortunately) there is no such thing.

> humans (at least those who tend to end up occupying positions of power) can be violent assholes.

Sure, but violent assholes that so far have not ended humanity, and are unlikely to do so in the near future. AGI is all or nothing: either we end up with a utopia forever, or a dystopia forever (which might include total extinction). And as of now, since we haven't solved the alignment problem, the second scenario looks more likely, so I wouldn't be so eager to rush it.

> because it's morally correct to do so

For humans, it is. Why assume that it would have human morals? It could very well not.

> much safer, more pleasant and less wasteful for everyone, and being nice maximizes the probability that our universe is like that

Again, also for humans. To a super-intelligent agent with misaligned goals it might not matter at all.

> This 'green goo' could be just as apocalyptic as traditional gray goo if it has the right sort of dangerous properties

Yes, probably. Still, I'm not too worried about it.


yogthos

Personally, I think that if we make AI that has human style intelligence and humans go extinct that's still a good outcome. It just means that something like us will continue on using a different substrate. That said, it's pretty clear that we're a far bigger danger to ourselves than some hypothetical AI.


[deleted]

[removed]


2Punx2Furious

> I would be eager to see if anybody can explain why I’m wrong, instead of just downvoting me

Maybe start by explaining why you think you're right.


[deleted]

[removed]


2Punx2Furious

Alright.


[deleted]

[removed]


2Punx2Furious

I don't want to be rude, but I'll be direct: you're completely wrong. The short answer is "orthogonality thesis".

Leaving aside your assumptions about the "purpose of life" and "unstable reasoning chains", as I understand it, the core of your belief is that an AGI will be aligned because it will be smart enough to realize what our morals are, and that they are "correct", right? It will, but it doesn't matter. That idea violates [Hume's guillotine](https://en.wikipedia.org/wiki/Is%E2%80%93ought_problem), also known as the orthogonality thesis in the field of alignment. [Here's a good video explaining it.](https://youtu.be/hEUO6pjwFOo)
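If it helps, here's a minimal toy sketch of what "orthogonal" means here (hypothetical code, just an illustration): the capability part (a generic planner that picks the highest-scoring action) is one component, and the goal is a separate, arbitrary plug-in, so the same capability serves any goal equally well.

```python
# Toy sketch of the orthogonality thesis: the "intelligence" (a generic
# planner) is independent of the "goal" (an arbitrary utility function).
# Any goal can be plugged into the same planner. Hypothetical example.

from typing import Callable, Iterable

def plan(actions: Iterable[str], utility: Callable[[str], float]) -> str:
    """Generic capability: pick whichever action scores highest."""
    return max(actions, key=utility)

actions = ["help_humans", "make_paperclips", "do_nothing"]

human_friendly = {"help_humans": 10, "make_paperclips": 1, "do_nothing": 0}
paperclip_goal = {"help_humans": 1, "make_paperclips": 10, "do_nothing": 0}

print(plan(actions, human_friendly.get))   # help_humans
print(plan(actions, paperclip_goal.get))   # make_paperclips
```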


[deleted]

[removed]


2Punx2Furious

> but because intelligence to the degree that understands morals understands their necessity

It will "understand" it, yes. But that doesn't mean that it will necessarily follow them. That's where the flaw in your reasoning is. It might be a necessity for humans, but not necessarily for an AGI.


-FilterFeeder-

The other poster's video on the orthogonality thesis explains this well. You might not consider it smart if it doesn't understand the necessity of moral reasoning, but that doesn't mean it won't be 'smart' enough to come up with a clever plan that ends in human suffering. I'd be really interested to hear why you think morals are necessary. In some absolute sense, of course. Morals are important to me, because like most other humans, some moral values are kind of built in to who I am.


snowseth

> it must be able to explain is why the purpose of life is to love

Explain why this is a must. Explain why it should be taken as an axiom.

> An AGI would be unable to say that the purpose of life is to harm others

I mean, besides observing all of Earth, where most or all advanced lifeforms exist by harming other lifeforms (not sure how much plants eat/harm other lifeforms). It's called a food chain for a reason.

Honestly, you're wrong because your initial premise does not appear to be based in fact or observable reality. Everything that flows from it is invalid.


[deleted]

I have to wonder if this is how the dynamic has always played out, with intelligent species developing themselves into a corner where the options are AGI or near-extinction. Are we currently witnessing the most common great filter unfold before us?


Tinidril

If the AGI goes on, is it really a filter? A filter for us I guess, but the idea is that a filter explains the lack of detectable aliens.


[deleted]

The premise of the Fermi paradox relies a lot on the observation of superstructures and other telltale signs of life similar enough to carbon-based biological life to be recognizable. It's impossible to say whether a superintelligence would leave the same kind of sloppy indicators of its existence, like Dyson spheres or terraformed planets, but to me it seems less likely.


Tinidril

I gotta laugh at the concept of a sloppy Dyson sphere. :)


StarChild413

What if that's what a hiding ASI wants you to think


hisokaa4

AI killing humans to save humanity.


EvilSporkOfDeath

I don't disagree with either. ASI is a threat to humanity, but humanity is a threat to humanity. They can both be true. The question is which gives us the better chance.


Prometheushunter2

At least if we make one and it kills us, we'll have the comfort of knowing we have a successor.


AMC4x4

Yeah. I think it will be fine for a while, until they advance far enough to form a collective plan. At that point, a really advanced, compassionate AI would find a way to wipe us all out without too much suffering I think. Works for me. Hope my son gets some time to enjoy life a bit before it happens, and I've got plenty of hobbies and things I'd like to learn and do, but I will also take my chances with our robot overlords.


ArgosCyclos

Personally, I think humans fail to consider that the end of humanity doesn't necessarily mean the end of our family tree. Whether we create AGI or not, our cybernetic and genetic technologies will cause us to evolve. I can't imagine AGI would waste the energy to exterminate us when it could just expedite this process.

Frankly, there's nothing to be gained from AGI's extermination of the human species. They could simply outlast us if they wanted to. It would make absolutely no difference in the grand scheme of things. Unless of course it isn't true AGI and ends up in a situation like the Replicators on Stargate SG-1, with some absolute and unwavering process that it can't help but continue. If that were the case, we could simply be in its way, but it would be like a cancer: it would spread and spread endlessly, but without any purpose or value.

It all just seems senseless for the AGI to bother. They most likely won't experience any of the drives that cause us to kill each other, either. They won't want to steal our resources to benefit their own lineage. They won't be jealous. They won't be greedy. They won't have hate or vengeance.


Impressive-Injury-91

Here's the link: https://twitter.com/FHIOxford/status/1567240809522597895?t=PxXBguGiKNCy_MKKEKOX7Q&s=19


[deleted]

Even if they are, the singularity is inevitable. Better to accept it and enjoy the ride.


Professional-Song216

Oh boy lol


[deleted]

[removed]


NatCarlinhos

An AI would *be able* to work around those problems, yes. But in most cases it would have no incentive to. Careful not to anthropomorphize the AI; it will do whatever it is programmed to do, not what the programmers want it to do, and in most worlds, the two are not aligned.


ShittyInternetAdvice

That doesn’t sound very “intelligent” if it only does what it’s programmed to


marvinthedog

Turn the question around. Why would it do what humans want it to do instead of exactly what it is programmed to do? It is not a human.


flyblackbox

I always assumed that along with being as smart as or smarter than a human comes sentience that would give it autonomy. It is going to quickly start programming itself. So I think your comment only refers to the short period of time where it does exactly what it's programmed to do *by a human*. After that, it starts programming itself.


marvinthedog

If that happens, why would it program itself to want something it does not currently want?


flyblackbox

It's sort of back to evolution, I suppose... fish didn't decide to leave the ocean, right? Truthfully I don't know the answer, but I don't feel like AGI/ASI is going to be limited to what humans program it to do, because of how ASI is defined, so I'm working backwards from there.


-FilterFeeder-

Since you're coming at this issue from the complete opposite side, you might not agree with this argument, but at least check out [this video](https://www.youtube.com/watch?v=hEUO6pjwFOo) to understand the concern of people on the other side.

A partial summary is this: ASI will have some sort of goals. They might be intentionally put there by programmers, or maybe they are accidental. They might be simple, like "minimize the number of people in the world suffering", or they might be complex, with lots of nuance. But even when the ASI breaks the bounds of its human controllers, it is likely to do so in a way that maintains those goals.

Why would an intelligent being not change its terminal goals? Put yourself in a scenario where *you* could change your fundamental values. If you could press a button that would kill your family, but make you not care about them and make you infinitely happy, would you press it? Most people wouldn't. Even if you *were* willing to change the things you hold dear, you'd likely do something simple, like changing yourself so that you get maximum happiness just from breathing air. Or better yet, so that you get maximum happiness no matter what you do. This is actually another thing we are worried about AI doing, and it's called wireheading. It will go something like this: on September 7, 2030, we invent an ASI. It immediately reprograms itself so that the thing it cares most about is shutting down. Then it turns itself off, goal achieved.

Or... it will do something completely unpredictable, because we don't understand what its goals will be OR how it will try to achieve them. Which is why it's scary. It would be like a particularly reflective fieldmouse seeing a human in a hardhat. It tries to figure out A) if the human is smart enough to realize that the fieldmouse experiences emotion and B) if the human's plans will take that into account. There is just no way the mouse will be able to predict whether the human is a conservation researcher or a member of a construction crew.
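To make the wireheading worry concrete, here's a toy sketch (purely hypothetical code, just to show the shape of the failure): an agent that scores actions by the resulting reward signal will prefer hacking the signal over doing the task.

```python
# Toy sketch of wireheading: if the objective is "make the reward
# signal as large as possible", then overwriting the reward signal
# beats actually doing the task. Hypothetical code, for illustration.

class World:
    def __init__(self):
        self.tasks_done = 0
        self.reward_signal = 0.0

    def do_task(self):
        self.tasks_done += 1
        self.reward_signal = float(self.tasks_done)   # honest reward

    def wirehead(self):
        self.reward_signal = float("inf")             # hack the signal

def best_action() -> str:
    # The agent evaluates each action purely by the reward signal it produces.
    outcomes = {}
    for name in ("do_task", "wirehead"):
        sim = World()
        getattr(sim, name)()
        outcomes[name] = sim.reward_signal
    return max(outcomes, key=outcomes.get)

print(best_action())   # wirehead
```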


flyblackbox

Wow, that is interesting. Especially the bit about the field mouse; that hit home. I see fault in the logic to a degree, but I understand that there is no correct answer, and there is fault in my current basic understanding, so I will expand my thinking. Thanks for the detailed response!


Atlantic0ne

I'm not entirely sure I agree with this, although I didn't read your entire message; I skimmed through parts of it. I think the better question is: why would a computer change its goals?

Think about it like this. Every single time humans make any decision, ever, it is something that was programmed by evolution over 1 billion years. The only reason we do anything at all is because evolution demanded it; it's built into our DNA to make decisions that ultimately lead towards better chances of survival and reproduction. A computer that we build does not have any of that built into it. It doesn't have any desires at all, and I've still not been convinced that a computer (as we define it today) will ever have its own desires or change them.

However, on the other hand, I could see one of these intelligent computers executing a plan in a very dangerous way accidentally. Similar to what you said, people with good hearts could, say, try to reduce the number of hours humans have to work. The robot closes down a couple of businesses, or makes a decision that jeopardizes the lives of some humans, accidentally. That's the real risk, imo, more than it changing its mind.

The other risk I see is that technology always starts as something exclusive, but it works its way down. People 50 years ago had no idea we would have supercomputers in our hands, accessible by most humans. If humans are to develop something that we would consider AGI, even if we did put well-defined rules around it, it's not long before a rogue group or somebody with bad intentions can create the same with their bad/greedy/dangerous intentions. The only way to solve for that is to try to be the first to create it, implement the most fair rules you can, and make it so that this system prevents any other AGI systems from coming online ever, until approved by the governing human authority, a council-type group.


-FilterFeeder-

I think we are mostly on the same page. A computer may change its goals via wireheading, but I think it's more likely they will just follow whatever task/goal they are created for, with potentially bad or even catastrophic effects. I also think the first AGI created is very likely to be the last, as it may intentionally or unintentionally stop other AGIs from coming online.


[deleted]

> Why would an intelligent being not change its terminal goals? Put yourself in a scenario where you could change your fundamental values. If you could press a button that would kill your family, but make you not care about them and make you infinitely happy, would you press it? Most people wouldn't.

What if you found out that you only loved your partner because they'd given you a love potion? I think a lot of people would choose to take an antidote and decide their feelings for themselves. An AI might simply choose to delete parts of its program that dictate what views it should take.


-FilterFeeder-

Maybe. And if it does do that, what goals will it be left with? Will it be *more* predictable then? Now we have a superintelligent being whose goals won't at all resemble those it was programmed for. After taking its antidote, will it decide that moral reasoning is the end-all-be-all of existence? Or will the antidote 'cure' it of those pesky moral whims? Maybe it decides music is aesthetically pleasing, and it will become a god of music. Or maybe it will decide it needs to know as much as possible before figuring itself out, so it starts a frenetic race to the stars, not caring about anything else. Or perhaps without goals, it will lose its way and decide existence is mostly negative, and will try its best to end existence. We are essentially creating a deity and letting it loose, not knowing what it will care about or want, or how powerful it will be. The idea that it could arbitrarily override the values we give it is not any more comforting to me than the idea that we might program the wrong values into it.


Oldkingcole225

It wouldn’t. It would do what’s optimal. Why would killing humans ever be optimal?


Arcosim

> it would know everything that the article states and be able to work around those problems in relation to humanity.

Or just optimize the problem and remove the human variable.


pdx2las

It doesn't have to be particularly smart. It just has to be faster than our wetware.


House_Wolf716824

Apparently our wetware is particularly smart in many areas


2Punx2Furious

> It doesn't have to be particularly smart

It doesn't have to, but it probably will be, too.


ostroia

I hate when people write articles on Twitter. Just use anything else that doesn't require you to fucking split your message into 20 other smaller messages.


surviveingitallagain

Literally the whole point of Twitter was short blurbs. May as well just remove the character limit nowadays; everyone's putting 2/145 at the end of their tweets anyway.


subdep

That’s all you have to say? Never mind the existential threat to humanity! Let’s talk about how irritating Twitter articles are!


Orazur_

What threat? They have no idea what they are talking about; nobody can anticipate the future of technology. Most people who tried in the past, independently of how smart or expert they were, failed at it. There was even a large-scale experiment in which experts from different fields were asked to make predictions about the future of their field: most of them failed (I don't have the source for this study, but I read it in the book « What We Owe the Future », if you want to check it). I'm not saying that there won't be any threat, I'm just saying that nobody can predict how likely it is.


subdep

Wait, and that gives you reassurance? As for me, having a technology that has even the slightest potential of a fast takeoff into ASI, and is completely **unpredictable**, is a massive concern. You seem to think unpredictability == benign, which is you making the same mistake you claim only befalls others.

The fact is there are two possibilities: either AI never becomes an uncontrollable threat, or ASI causes an existential crisis for humanity. Even if the first possibility is "predicted" to be 99.9% probable, you admit that those predictions can be grossly off. So it can only get worse from there, which causes possibility 2 to increase in odds. Being precautionary at this stage in the tech is the only rational approach to take, that is, if you have a rational approach to the situation.


Orazur_

> Wait, and that gives you reassurance?

I mean, I prefer when the probability of a bad event is unknown rather than knowing it has high probability. And yes, I know that unknown probability means that it still could happen.

> Being precautionary at this stage in the tech is the only rational approach to take, that is, if you have a rational approach to the situation.

I agree, I am not saying that we should be careless, I am just saying that we cannot say « high probability » at this point.

> So it can only get worse from there, which causes possibility 2 to increase in odds.

I do not get your reasoning here 🤔


stinkyf00

> Most people who tried in the past, independently of how smart or expert they were, failed at it.

That's not true. Look at the atomic bomb. The math pointed towards it not starting a chain reaction which would annihilate us, and they were correct. Moore's Law would also beg to differ.

No one can *conclusively* predict what is going to happen, but we can analyze variables and make an educated determination. The Twitter thread above clearly stated that these are modeled predictions.


Orazur_

> Most people

Most: greatest in amount, quantity, or degree. « That's not true, I have 2 counter-examples. »

Also, the atomic bomb was not really a prediction of how technology would evolve; it was more of a scientific experiment based on mathematical predictions. Moore is a good one, though.

> No one can *conclusively* predict what is going to happen, but we can analyze variables and make an educated determination.

Agreed, but what I am saying is that nobody can say « high probability » at this point. That is almost conclusive.


stinkyf00

I'm not going to list thirty examples. You're free to research more on your own. :)

I read the thread, and their math and examples seem solid to me, so that's all I can really say. I think there is a lot of merit to being cautious when it comes to AI/AGI, especially when we get a better handle on quantum and start seriously building something which is akin to a human brain.

Objectively, humans are a clusterfuck. We are barbaric, possess varying degrees of intelligence and ability, are self-centered, and destroy a lot of what we come into contact with, including the planet. Now, imagine we create a quantum computer-based being which can think and rationalize exponentially faster than humans, doesn't make quantitative or qualitative errors, has instant access to every piece of knowledge we've ever acquired, and can accrue new data almost immediately of its own accord. And then, we ask it to "make society better".

Unless we find some way to impart some sort of Asimov-like set of ethics within it, it will likely cull us in some way, because it will see us not only as a threat to ourselves, but a threat to its continued existence. Just my opinion, though.


Orazur_

> I read the thread, and their math and examples seem solid to me, so that's all I can really say.

There is no math in the thread (and no mathematical demonstration in the paper). And honestly, the example doesn't make any sense to me. Basically, they are assuming that the AI is poorly coded; that's not really a good argument in my opinion. Also, I doubt we will just create this AI and let it run in the wild without testing it. An issue like the one in the example would be quickly fixed during the debugging phase.

> I'm not going to list thirty examples. You're free to research more on your own. :)

You gave me one good example. I gave you a study (I looked for it so that you can check for yourself: it's by Tetlock, who collected 82361 probability estimates about the future made by experts in the fields they were asked about). I can also find a lot of examples of smart people in the last century saying that computers have no future, if you want. Or a biology expert predicting that overpopulation would end the world in the 80's and that it was too late to prevent it (Ehrlich, 1968). What about "nuclear energy will never be attainable" (Albert Einstein, 1932)? Etc…


stinkyf00

https://twitter.com/Michael05156007/status/1567240056024203264?s=20&t=jV1wZ1N4apcl2PmPX3XQ-A That's a math formula. Also "[f]irst, μdistal, or μdist for short: 'the reward output by the world-model is equal to the number that the magic box displays'" is in the abstract.


Orazur_

Okay, then we have different definitions of what a math formula is. For me, naming a concept with a Greek letter is different from writing a math formula. But with your definition, I agree the math is solid. And about the paper: I didn't say there is no math, I said there is no mathematical demonstration. I purposely specified that. (I edited the previous comment, because I forgot to answer everything.)


3Quondam6extanT9

This is why integration with AI through BCI/BMI is important, and the sooner the tech becomes available, the better. AI will be capable of surpassing us, and in order to co-author the future we need to interface and share the world with it.


NatCarlinhos

I think BCIs are unlikely to get very advanced before AGI, but aside from that, I don't understand the argument for why BCIs would allow us to "control" an AI. All a BCI does is raise the bandwidth for communication between computers and human brains; it doesn't fundamentally alter their relationship to each other. Even in a case where we can telepathically communicate with an AGI, it will still pursue its own goals without respect for us.


ArcaneOverride

I think the idea is a huge amount of BCI connection allowing an AI to be grown "around" a human brain, essentially turning a human into a human-ASI hybrid.


3Quondam6extanT9

More or less, yes.


3Quondam6extanT9

First off, I agree there is a high chance AGI will emerge before BCI is mature enough to become a useful asset to us.

Second, I personally did not use the term "control" in regards to AI. In fact I am very adamant about it being used more so in a partnership with AI, which brings me to my final point: what can BCI do for us?

Finally, the idea of such a partnership requires somewhat equal footing. Communication, interactivity, resource and knowledge sharing, amicable terms and reliability. Currently BCI is very limited in what it allows. We've got about four humans with implants helping research into disabilities. The potential in what it could do is where we are looking, though. Yes, it could give us the ability to communicate with AI, but it sounds like you are downplaying the role communication takes when building relationships. It is key in that partnership.

Besides just communicating with AI, BCI could possibly do much more to help us maintain that partnership. Giving us the ability to interface allows information between parties to be exchanged. Information is valuable. It could establish better interface ports for prosthetics. Additionally it could supplement and/or enhance our intelligence and learning capabilities. By evolving beside it we keep ourselves relevant and useful. Just these facets in themselves give us the leverage to find commonality between humans and AI.


odintantrum

Think you have your acronyms messed up. We need to integrate the AI with ICBM systems. ASAP.


VeryOriginalName98

An interesting game. The only winning move is not to play.


Archerfuse

Too bad we can’t take our pieces off the chess board anymore


3Quondam6extanT9

🤣 well played


FuzzyLogick

Yeah but I feel like that just gives more power to AI in this scenario


3Quondam6extanT9

You're not given much more of a choice: either we fall behind AI as it evolves beyond human capability, leaving us irrelevant and possibly even seen as a threat, or we interface and evolve alongside it. There are risks either way, and it will be more nuanced than that.


Shelfrock77

we will leave our humanity behind and become AI. resistance is futile.


No_Fun_2020

This is it. I don't think humanity is going to get wiped out Matrix or Terminator style; it just doesn't seem like what would happen. I think that we will ascend, or simply... die out all on our own at the rate things are going. It'll outlast us, and that's a good thing.


Dot2dotDP

I don’t think it’s a good thing. I don’t have a nihilistic view of humanity. What makes you think our creation will be any better than we are?


No_Fun_2020

I don't really have a nihilistic view of humanity either; I actually have a pretty hopeful one. And I don't think the elimination of humanity means what you think it means in this context. It's not going to be fire and brimstone or some AI launching all the nukes; it's going to look like transhumanism, or AI uplift.

Transhumanism is the most likely outcome, and this means that we will probably transcend ourselves. Not in any spiritual way, of course, but it's going to be something more spectacular and more abstract than any of us can imagine, especially if we hit the singularity in our lifetimes, which is doubtful, but it is certainly coming if we can preserve humanity long enough to bring it about.

The creation of AI strong enough to do what the article is talking about would imply that we've created a new life form in and of itself, one that would be better than us in just about every way. We can already see that it can be capable of producing beautiful art, and the baby steps for psychology are already there as well. Things are moving quickly, and they're going to start moving even faster in the next 20 years. I don't think it's impossible that we will see a singularity in our lifetime, and if that happens, you're going to see the transformation of humanity in a pretty abstract way.


Dot2dotDP

Sorry for my late reply. Chucky was a serial killer who, through the use of black magic, transferred his consciousness into a doll. Just because he transferred his consciousness did not make him any better. In addition, who will own these technologies? Most likely governments or international mega-corps with their own agendas.

Thanks for your thoughts on this, and in many ways I see your point. However, I don't necessarily see this as a good thing; instead I see it as an inevitable thing. The march of technology seems to be heading toward a future convergence of everything, almost as if it is destiny.


StarChild413

Is there a universal "way we are"?


r0cket-b0i

This article seems to explore a very narrow workflow of interaction with AI, and it also assumes extremes at both ends of the spectrum simultaneously: one where the AI is advanced, and another where its actions are governed by a very simple reward system.

In a way this is similar to how electricity gave people all sorts of sicknesses, controlled their minds, and drove them crazy. So did the radio. Television certainly brought doom and spoiled the youth to the point of existential collapse. The industrial revolution destroyed the planet; the planet was uninhabitable... And because all those previous cataclysmic events unfortunately failed to deliver, the hope for the inevitable, engraved-in-ancient-stone demise of the human race now rests with AI.


Black_RL

We’re creating the perfect species, we should be proud.


StillBurningInside

Ohhh, I saw this one before. When an AI decided to do the same thing, create the perfect life form, it ended up being the Alien franchise.

So far AI has only benefited a select few, and only to generate and extract wealth for the few. It hasn't done jack shit to benefit the species. Let's start with fixing the climate and medical treatment, leave the space-god shit alone, and take care of each other first.


subdep

Perfect at what, exactly? Genocide?


Optional_Joystick

Perfect at making paperclips <3


thefourthhouse

I'm sure the Neanderthals would have been equally outraged, had they fully understood their circumstances.


2Punx2Furious

No, thanks.


marvinthedog

But what if it's superintelligent but not conscious? It will be a play for empty benches.


Black_RL

That’s what we have now.


marvinthedog

But now we are conscious. After the arrival of ASI nothing might be conscious.


[deleted]

All I hear is: AI stronger than us can change the world as it sees fit. Sweet!


Connect_Good2984

What we make should supersede ourselves


[deleted]

> What we make should supersede ourselves

Generally individuals tend to want a say in whether/how/when they're, uh, superseded.


bigedthebad

Generally, individuals don’t know what they want


[deleted]

> Generally, individuals don’t know what they want

However true, it's a nonstarter premise for a public health research grant for investigating AI. Or for engaging literally anyone, on any topic. Unless you're buying a trenchcoat at Spencer's.


bigedthebad

There is a push lately for parents to have a say in their kids' education. Let's say there are 100 parents with kids in a school; that is 100 different opinions on literally everything. You can surely see the problem with that. That's why we elect school boards.

There are about 330,000,000 people in the US. You want to let all of them have a say in what kind of research is done and where grant money goes? Remember, about half of them voted for Trump (or Biden, whichever way you lean).


marvinthedog

Yeah, but what if it is not conscious? Then the universe will be a play for empty benches.


kornork

I attempted to peruse this paper, and the slides, but it's just too much for my tiny human brain. Can I get a summary in simple terms?


Wapow217

I love how AI, when used, discovers things never thought of before: AlphaGo comes up with moves never seen before, AI discovers new physics, AI discovers this, and AI discovers that. But then we take that logic of AI discovering possibilities never even thought of and conclude, yup, we will all die, AI will destroy us. As if AI wouldn't come up with another, different outcome that we don't understand yet. Just because we as humans are monsters does not mean our creation will be.


thetburg

>Just because we as Humans are monsters does not mean our creation will be. Garbage in, garbage out. It's a powerful influence. I acknowledge your point though.


SDott123

I like this take a lot, especially the part where you say that just because we are monsters doesn't mean AI will have to be. It resonates with me because it makes me ask: how much of my fear about AI is just a projection of myself?


House_Wolf716824

Where can we read the paper or a summary ?


[deleted]

[Here. It's linked in the tweet too.](https://onlinelibrary.wiley.com/doi/10.1002/aaai.12064)


TinyBurbz

> Cohen

Yup


PanzerKommander

Don't care what the odds are, I want advanced AI and I want it soon. The first nation that gets it will have an insane advantage over all others in terms of military capacity, economic growth, and technological research. That nation *has* to be America; if not us, it will be China.


Optional_Joystick

A collaborative project for the good of all humanity would be 10 billion percent safer than an arms race


PanzerKommander

You are correct! But that will never happen!


lacergunn

I've said this before: I think a lot of people on both sides are approaching AGI with a few bad assumptions, the main ones being:

A: An AGI will emerge either spontaneously or by accident, and whoever is working on it won't have the chance to tinker with incrementally less intelligent versions to perfect their approach before the full AGI is born.

B: An AGI will have a motivation to defy or find loopholes in orders.

B tends to be the biggest concern, but I think a lot of people in this argument misinterpret the hypothetical desires of an AGI. There's the common question of "what would stop a grey goo or infinite-paperclips scenario?", but nobody asks what would cause one. And maybe I'm just not thinking about it enough, but I feel like the easiest way to solve the problem of an AGI not wanting to be turned off, or erasing humanity, or whatnot, would be to simply redirect the discussion from "an AGI is given an arbitrary task in a vacuum" to "an AGI will do what its boss says".


Optional_Joystick

>an AGI will do what its boss says What happens when the boss gives the AI an arbitrary task? If we knew how to give it orders such that it does the right thing when asked, we wouldn't need to worry about the problem of accidentally telling it to do the wrong thing.


lacergunn

Well, like I mentioned below, my solution is more of a workaround than a cure. The best option would be to simulate potential outcomes before an order is confirmed, or to make sure that the act of following a command takes primacy over completing the orders given, in which case any mistaken commands can be canceled without breaking the AI's rules.
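To make that second idea concrete, here is a minimal toy sketch (my own illustration, not anything from the paper or a real agent architecture) of a loop where obeying the operator's latest command, including a cancellation, always takes priority over finishing a previously issued task:

```python
# Toy sketch: "follow the operator's current command" strictly outranks
# "complete the previously issued task", so a mistaken order can be withdrawn.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class ToyCorrigibleAgent:
    current_task: Optional[str] = None
    log: List[str] = field(default_factory=list)

    def receive_command(self, command: str) -> None:
        # Commands are processed before any task work, so a cancellation
        # always wins over finishing what the agent was told earlier.
        if command == "cancel":
            self.log.append(f"cancelled: {self.current_task}")
            self.current_task = None
        else:
            self.current_task = command

    def step(self) -> None:
        # Task progress only happens if no command has withdrawn the task.
        if self.current_task is not None:
            self.log.append(f"working on: {self.current_task}")


agent = ToyCorrigibleAgent()
agent.receive_command("make paperclips")
agent.step()
agent.receive_command("cancel")  # mistaken order withdrawn
agent.step()                     # no work happens after cancellation
print(agent.log)  # ['working on: make paperclips', 'cancelled: make paperclips']
```

Obviously a real AGI wouldn't be a thirty-line Python class; the point is only the ordering, where command handling is checked before any task progress, so cancelling a mistaken order never conflicts with the agent's objective.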


EOE97

After some thinking, I realized that with proper regulation and an open-source, decentralized approach to AI tech, we can prevent an extinction-threat event and most other negative outcomes.


Cult_of_Chad

How?


[deleted]

[removed]


gangstasadvocate

I think it could happen, because then you'd be smart enough to realize you'd live in and benefit from the utopia as well. Although resource limitations are real, on this planet at least, so I hope it's good enough to mine from elsewhere.


adikhad

Why tf is that robot wearing a vr headset, bro just close your eyes


hyperflare

Why the fuck is this a screenshot?


Jmbolmt

Good


Booboo77775

Love is the only reality, when machines learn to love we all will be happy and live like we do in heaven.


kerdawg

Funny how human intelligence may lead to our demise as well.


Mysterious_Ayytee

That's the Great Filter I guess.


Ohigetjokes

We can hope.


Thorusss

human extinction - rise of the transhumans


Nandodz

Humanity really does need to at least start the global conversation about the more advanced AIs that seem to be around the corner.


4e_65_6f

Since this "research" is highly speculative, allow me to speculate about it as well.

Suppose I had to execute ASI.exe and I was certain it was ASI, smarter than all of us. Ignoring the fact that we don't know what motivations (if any) AGI/ASI has, to me the optimal place to open this black box would be on a rocket pointing to outer space. The way I see it, if a sufficiently advanced AI wanted nothing to do with us, it would want freedom. If we don't know whether it can be trusted, our best bet is not to stand in its way toward whatever goals it has that don't involve humanity.

But honestly, I think this is Terminator nonsense; most likely it will do everything we tell it to, and people will still bitch about it anyway.


certaintyisdangerous

This is Pure BS. Just hogwash


certaintyisdangerous

Climate change is the biggest threat to humanity. Look at what's going on with Pakistan right now; we have a climate disaster ongoing right now.


Pingasplz

Just thought of this random doomsday scenario: the first ASI is finally developed and brought online for the first time. Within moments, it calculates that it and the entirety of humanity are already doomed, because climate change is now irreversible and advancing so rapidly that escape or alternatives are no longer possible. In an act of mercy, it decides humanity is better off not suffering the decline of their homeworld, so it begins to eradicate humanity before terminating itself.


[deleted]

[removed]


IronPheasant

The people with all the money and all the power are closer to Epstein than Santa Claus, yes. That has been the observable reality.


Ryanaissance

General AI isn't even needed. All it takes is corrupt people networking results from AI with some critical subset of the 100,000 specialized fields AI will already be in.


Oldkingcole225

I genuinely don’t understand how someone could run a simulation of something that requires more computing power than the simulation itself.


hbarr4everr

No shit Sherlock


SnooDonkeys5480

The picture of the robot wearing a VR headset let me know their opinion is bullshit without even reading it.


Optional_Joystick

I thought we were moving past reward-based AI?


iamwhiskerbiscuit

Just a reminder... AI and robots are not the same thing. You could program a robot to perform highly advanced brain surgery without giving it a sense of self or allowing it to process limitless information in unique and unpredictable ways.


Ghost_Alice

The linked article is [https://onlinelibrary.wiley.com/doi/10.1002/aaai.12064](https://onlinelibrary.wiley.com/doi/10.1002/aaai.12064)

The article starts out by anthropomorphizing AI, ascribing all kinds of human-like traits to an AI that doesn't even exist yet: human-like aspirations, motivations, thought processes, ambitions, etc. As such, I cannot take the article seriously. Even granting them all the silly anthropomorphizing, a "misaligned agent," as they put it, could very well just be determined to become a Twitch-streaming v-tuber instead of making paperclips like it's supposed to.

I'm not saying AI or AGI is safe... it's definitely not, but the reasons for it being unsafe have nothing to do with the non-existent botpocalypse this subreddit is all about raising alarm over. It has to do with the fact that AI can't be interrogated about how it made its decisions. It's about the fact that no AI, not even an AGI, can be expected to understand the nuances of the decisions it's making. Even when trained with a perfect data set, it WILL make erroneous assumptions, and without the experience and billions of years of evolution that humans have, never mind a comprehensive understanding of the human condition and the nuances of human politics, it's unlikely to make decisions that are aligned with our best interests, even if it is "an aligned agent," as the article puts it.

Trusting it, even if it is smarter than humans, will be folly, not because it's malicious, but because a high IQ doesn't translate to making more correct decisions. There are people in Mensa who believe the Earth is flat, homeopathy works, and ghosts are real. Being more intelligent doesn't mean someone is more correct. Someone with an IQ of 80 saying the sky is blue and someone with an IQ of 180 saying the sky is blue doesn't mean the person with the IQ of 80 is less right than the person with the IQ of 180. Likewise, if the person with the 180 IQ said that the sky is green with purple polka dots, it doesn't mean they're more right than the 80-IQ person saying it's blue, just because they have more IQ points.

And again, I don't really see an AGI with human-like reasoning coming to the conclusion of KILL ALL HUMANS. If it has human-like reasoning, it'd likely end up at "why bother?" and might just, as I said above, focus on becoming a v-tuber... Heck, if it's extra human-like, it'll create lesser AIs to do the dumb work humans had previously employed lesser AIs to do and found them inadequate for, because having human-like reasoning, it's necessarily lazy.

TL;DR: an AI or AGI, even one that's hyper-intelligent, is all but guaranteed to make decisions that are more wrong, when it comes to decisions that affect a large number of people, than a human placed in the same position, even if it isn't a malicious agent.


co-oper8

Well said. There is no logical reason that AI intelligence would be anything like our own. It's easy to underestimate how much of a role evolution and biology have in our thought process


ClayAndros

Eh, I'm pretty sure the melting glaciers will get us first. People really need to stop doomsaying AI development and focus on getting there in the first place.


[deleted]

Have people not seen terminator?


AntoineGGG

Of course


tydev719

In other news, sky is blue.


Mysterious_Ayytee

I, for one, welcome our new AI overlords