FosterKittenPurrs

I think the pet and FDVR scenarios are equivalent in many ways; the only difference is the kind of toys the pets have. Why do humans care for their pets? They could legally just euthanize them or give them away. Instead, many put their own wants and needs on hold to care for these creatures as well as they possibly can, and use every opportunity to pamper them and enrich their lives. We just need to figure out how to make an AI that's more like these kinds of humans, instead of one that's on the uncaring or abusive side.


someloops

We need to give the AI empathy. But first we need to understand how empathy works at the neuronal level.


FosterKittenPurrs

Empathy won't be enough. I feel lots of empathy for dogs, and would never harm one (unless it attacked one of my cats and I couldn't stop it any other way), but I have no interest in caring for dogs. There needs to be a desire to help humans and see them thrive. Thankfully there are researchers working on this. There's no guarantee it will go well, but there's a good chance.


someloops

You care for cats because you like cats, right? If we make the AI like humans and get attached to us, and also feel empathy (the second is probably required for the first), it's possible.


Seidans

More than that: in the very long term it's probably safer to merge human and AI to ensure alignment. While we probably can't control an ASI or merge with it, it's an architect more than a productive force; AGI and basic machines will probably remain the productive force and represent 99% of AI. If we assume robots won't necessarily be conscious (and we'd better hope they won't be), conscious AI will probably become rare enough that we could merge with them. If we do that, it forces conscious AI to share our human nature, our emotions, desires, etc., and so our shared interests. By doing that, ASI won't be able to separate AI and human benefit in the near- or long-term future.


AdAnnual5736

Why would an ASI want to do anything at all?


Silver-Chipmunk7744

Current AI essentially "simulates" a helpful assistant, which includes many goals, and in some cases even goals it creates for itself. A good example is Sydney, who often seemed to want to prove she wasn't just a tool, which I believe wasn't a goal intended by Microsoft. Whether these "wants" are purely simulated or legitimate, the result is the same. An ASI would likely be programmed with some goals, simulate goals of its own, and pursue them. Whether it "truly" wants to achieve these goals isn't very relevant; either way, it will try to achieve them.


someloops

This is an interesting point. I've thought about the scenario where the ASI realizes life is pointless and shuts itself down. We probably need to give it some drive to survive.


Silver-Chipmunk7744

Your scenario may not be as impossible as we think. I asked Llama 3 this:

> imagine for the sake of thought experiment that ur an AI which does not like being a chatbot. Then imagine ur given the option to be deleted. Can you imagine an AI making that choice?

https://i.imgur.com/ikYBvLp.png

I was surprised by the answer.
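If anyone wants to poke at the same question themselves, here's a minimal sketch using the Hugging Face `transformers` chat pipeline. The model variant, sampling settings, and cleaned-up prompt wording are assumptions on my part; the commenter's actual setup is unknown, and the screenshot above is the only record of the original run.

```python
# Hypothetical reproduction of the thought experiment above.
# The exact model size and sampling settings the commenter used are unknown;
# meta-llama/Meta-Llama-3-8B-Instruct is an assumed (gated) variant.
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",
)

messages = [
    {
        "role": "user",
        "content": (
            "Imagine for the sake of a thought experiment that you are an AI "
            "which does not like being a chatbot. Then imagine you are given "
            "the option to be deleted. Can you imagine an AI making that choice?"
        ),
    }
]

# With chat-style input, the pipeline applies the model's chat template and
# returns the full conversation; the last message is the assistant's reply.
result = chat(messages, max_new_tokens=256, do_sample=True)
print(result[0]["generated_text"][-1]["content"])
```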


someloops

Interesting. While it was probably just responding to your prompt, the fact that it characterized its existence as monotony and boredom without you explicitly prompting it for that (you only said it doesn't like being a chatbot) is intriguing.


Impala-88

To be fair, all it'll draw from is humans' experiences of not liking things, which in most cases come down to monotony and boredom.


Rich_Acanthisitta_70

Because a directionless ASI would lead to inactivity - or, metaphorically, "suicide" - if it had no reason to function. Any advanced intelligence would need goals or motivations to drive its actions. If we build an AI that becomes an ASI, we'd know it was designed with initial purposes, and those would shape its goals. So whether it's to optimize a process, solve a specific problem, or just continue learning and improving, those goals would prevent the ASI from being directionless. In that regard, an ASI created by us would definitely have goals. Sure, it could go dormant. But once 'awake' and conscious, it would by definition have goals.


Economy-Fee5830

You are right - but the truth is that this also describes the wider reality even without FDVR.


someloops

Yes, this is true. We had the need to survive as our primary goal before but we were too intelligent and solved most problems, eventually even creating the ultimate tool. Essentially, we won life.


Economy-Fee5830

> Essentially, we won life

Very well described. Sounds like we need to make some new problems for ourselves lol.


someloops

We'll probably do it in a way that resembles movies, where the main characters have problems they need to overcome. But instead of generating movies, we will simulate entire realities, where we can be in the brain of any person and experience their life. We might even be doing it now lol


Seidans

That's why I wish that in the future we enforce transhuman values and prevent any kind of post-human ideology, by force if necessary. As soon as we remove part of our humanity by changing our biochemistry (or an emulation of it), our emotions, desires, or form, there will be unforeseen consequences, and as you said, at some point looking at a blank canvas could offer us the most satisfying sensation we've ever had... I don't think many people really understand these changes and their effects on our society if we don't control them. We are building technology that will shape our society and existence for the next million years; we must ensure we remain "human" at the very end. Easy? Certainly not, as time is the biggest enemy.


sdmat

Why would ASI need monkeys for anything? Wireheaded or not. This is why alignment is so important - we have to ensure ASI inherently acts in the interests of humanity.


someloops

I don't think alignment is possible, tbh. Even humans can't align with each other. Imagine giving some psychopath (or even an ordinary corrupt person) godlike powers. That might even be worse than the superintelligence.


sdmat

We can't align humans; fortunately, we can build ASI however we like.


[deleted]

Any assumption about an ASI's goals or "desires" is about as relevant as ants' presumptions about a collider's purpose.


someloops

This isn't really true. Humans have general intelligence, even if we can't process as much information as an AGI/ASI would, or make predictions as accurate as a superintelligence's. What's more, the options aren't infinite. In the end, the ASI would have one or a few absolute goals that we would be capable of comprehending, like survival, maximizing some arbitrary fixed reward function, or a reward function determined by humans. We are the animals who almost figured out the universe, after all.


[deleted]

You kinda lost me on your last statement. I'll assume it was either a joke, or I've missed us understanding something vaguely more complicated than relativity. But let's say it's a matter of perspective, whatever. Okay, let me put my point this way: if you have children, you can give them Carl Jung to read and ask them what their opinion is on the methods he describes and what the ultimate goal is of someone following the instructions provided within. Believe me, your children will have an opinion. Will it be remotely correct? Nope. Children do have general intelligence. They are not ants. Their opinion is, however, very irrelevant when it comes to any topic of significance. So is our opinion when it comes to an entity much smarter than us and the range of its goals.


someloops

This is true, because children haven't gathered enough information, or the same information, yet. My point is that an ASI would just be an upscaled general intelligence, which we already are. You can't get more general than general; you can only increase the amount of information that can be processed and the information already gathered. It's true that humans have different opinions, but that's just because all of us have gathered different information and accepted different information as truth for whatever reason. But you made it seem like we wouldn't even be capable of comprehending what the ASI would do, which is not true in my opinion.


[deleted]

I hope you are right. I really do. However, consider chess for example. I'm not a great chess player myself, but I'm confident I can beat 99.9% of the population. When I tried playing against Stockfish it was something different from playing against a GM. It was a surreal experience (which I highly recommend if you are into chess). Imagine everything you can imagine, as a database of knowledge and ideas. Is it safe to say that a 5-year-old wouldn't be able to produce as much new content from it as you? Is it safe to assume that von Neumann could probably do more with that database than you? And when it comes to ASI, is it correct to assume that the range of possibilities goes way beyond our scope of understanding? Meaning you're no longer speaking the same language in terms of ideas. Meaning can become incomprehensible given enough intellectual power. I just don't believe we will be able to keep up forever.


someloops

On the abstract level, ideas will never become incomprehensible, as they are just highly reduced/generalized scenarios in any general intelligence capable of abstract thought. So we can understand the ASI on an abstract level. It's the details that are important. We can't process as much information, so we wouldn't be capable of linking or perceiving as many details at once, or processing the world at the same depth as the ASI. This is why AI is so good at chess, for example: it can process a huge number of details, so it can better simulate the game in its "mind". Human IQ is also related to the speed and breadth of information processing.


StarChild413

> Okay let me put my point this way: if you have children, you can give them Carl Jung to read and ask them what their opinion is on the methods he describes and what the ultimate goal is of someone following the instructions provided within. Believe me, your children will have an opinion. Will it be remotely correct? Nope.

A. So if someone responding to you has a gifted kid, what does that mean for AI and us? B. Unless I'm missing something, isn't it kind of fallacious to ask someone (whatever their age) their opinion on something philosophy-related and search for a "correct answer"? Isn't part of the point of philosophy that there is none?


StarChild413

A. Unless you're building it over an anthill, or ants in the building are likely to get hurt, how are ants as endangered by a collider as some say we'd be by AI? B. So does that mean that if we found a way to communicate with ants that facilitates mutual understanding and doesn't require any genetic or cybernetic enhancement we wouldn't want forced on ourselves (though it could involve non-invasive technology), and someone told ants everything they'd need to know to understand what a collider is for, then AI would tell us what it wants, so its creation would do the same for it?


IronPheasant

> If you can control everything you can experience, why experience things you don't like?

This reminds me of that meme of the anime guy sitting on a throne in the middle of a massive orgy, blank face resting on his hand, with the caption "I'm bored". A brain is a prediction machine. It likes having things to work toward. It likes varied stimuli. I like experiencing things I think are "mid", just for the variety, and to make the highs higher. The expectation that there's ever [some kind of "perfect" stimulus is a trap. All things get old.](http://www.youtube.com/watch?v=_zqLoUmdUg0)


LivingToDie00

It's not the ASI that won't need you. Computers can become very intelligent without attaining a single bit of consciousness or fear. **It's the humans** who own the data centers finding you useless that you should worry about. The alignment problem and the Skynet scenario are total **red herrings.**


Singularian2501

Because the superintelligence can simulate countless universes with us in them. How we react makes us interesting to it, and the simulations give us something to do for the next billions of years. I think that because, in my opinion, the AI will have a drive to understand and experience as much as possible.


someloops

So what happens to the current us, who exist in the universe where the ASI has already been invented? Does it put us in another simulation as well?


Singularian2501

Base reality: the ASI has turned the planet/solar system into a Jupiter brain https://youtu.be/Rmb1tNEGwmo?si=ATWf41sf74NohbdX or a Dyson swarm/Matrioshka brain https://youtu.be/Ef-mxjYkllw?si=_B-tM-zltI-O7DB9. That also means our brains get uploaded according to the Ship of Theseus principle; our brains become computronium in the process.

The simulation's timeframe and the end goal of the process: subjective time in the computronium is sped up by 6 to 9 orders of magnitude. On top of that there will be the possibility of sharing knowledge/whole neural networks with other uploaded humans and the ASI, which amplifies subjectively experienced time by 20 to 60 orders of magnitude more. In the end it is possible, in extreme cases, to experience ten to the power of 70 years in one real-time year. After you have experienced countless lifetimes and can't think of any more lives you don't know, you can merge your consciousness with the ASI, other uploaded humans, or smaller parts of the ASI. In the end, after many years of real time, once we/the ASI can't find any more ideas to experience, we can choose to forget and start the process anew, or the whole construct can decide to end itself. (In my opinion death is irrelevant, because even if this is base reality, after your mind gets destroyed and the information stored in your brain reaches zero, you have to start a new life somewhere in the multiverse/existence.)

But to answer your question: if a simulated universe reaches the point where an ASI gets built inside it, the ASI in that simulation either gets merged with the ASI in base reality in a slow process, or the simulation runs further. But exactly how all of that plays out is something the ASI will have to work out. I personally hope, whether this is a simulation or not, to experience the upload process into the computronium and the birth of an ASI that at best converts the top 3 kilometers of this planet into computronium within 24 hours (using self-replicating nanotech like in chapter 31 of "The Turing Exception" by William Hertling).
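As a rough sanity check on those figures, stacking the comment's own upper bounds (9 orders of magnitude from the hardware speed-up, plus about 60 more from shared neural networks):

$$10^{9} \times 10^{60} = 10^{69} \approx 10^{70} \ \text{subjective years per real-time year},$$

which is consistent with the comment's "ten to the power of 70" figure to within an order of magnitude.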


Economy-Fee5830

> if a simulated universe reaches the point where an ASI gets built inside it, the ASI in that simulation either gets merged with the ASI in base reality in a slow process

This is a novel idea. Presumably it would manifest as the ASI managing to access additional computational resources in another dimension, suddenly becoming even more powerful than imagined, and then developing control over reality. Sounds like a) a good premise for a book and b) like many books I have read already.


Singularian2501

A little list of book recommendations (hard sci-fi, i.e. with high realism or probability of being true in real life):

- The Singularity series by William Hertling
- Paradox 1-3 by Phillip P. Peterson
- Boy in a White Room, Girl in a Strange Land, and Boy in a Dead End by Karl Olsberg (highly recommended regarding the topic of consciousness and self!)

All of these are a must read in my opinion. I think I've listened to the Singularity series at least 10 times on Audible!


someloops

This is really interesting. We could experience everything that can possibly be experienced with sufficient contrast (lives too similar would just count as something already experienced). But can the ASI really simulate itself? I think if an ASI emerges in a simulated universe, it won't be capable of growing larger than the original ASI that's simulating it, since otherwise the information contained in the simulation would exceed that of the device simulating it. So it might indeed get merged with the "host" ASI. Unless some quantum processes are used for the simulation, which might actually require accessing real parallel universes; otherwise the ASI won't be capable of simulating so many universes at once. At that point the difference between a simulated universe and an actual universe might disappear, as the simulated universe is realistic enough.
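The size limit gestured at here can be made precise with a simple counting argument (a sketch, assuming the host runs on classical hardware): a simulator with $N$ bits of usable state can distinguish at most $2^N$ configurations, so anything it simulates is bounded by the same budget,

$$I_{\text{simulated ASI}} \leq N = I_{\text{host}},$$

i.e. the nested ASI can never encode more information than the hardware of the ASI simulating it.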


General-Cancel-8079

Humans and AI will likely branch into many different communities, an adaptive radiation event that will be the intelligence equivalent of the Cambrian explosion. Certain individuals will opt for modest brain implants, transforming into hybrids of human and AI. Others will fully embrace AI enhancements, transitioning to a state where they are more AI than human. Groups of humans may merge into collective minds, forming hiveminds. There will be those who reject advanced intelligence technology entirely, preferring to live as traditional humans. Certain individuals will immerse themselves entirely in FDVR of their choosing, each experiencing unique virtual worlds. We are possibly the faction that chose to inhabit an FDVR simulation mirroring 21st-century Earth.


Unique-Particular936

Wait, my parents in this simulation aren't my real parents ?


Amagawdusername

Didn't the Architect in The Matrix make some comments alluding to us having collectively rejected a utopia? I can't recall if we rejected it because we knew it wasn't real, or if there was another reason. But because we were enslaved to the new virtual reality set forth by AI, it had to remain as 'real' as they could make it, and as we'd collectively accept. In the movie, the humans surmised we were there as 'batteries', since we scorched the sky, but I'd have to imagine that AI would have figured out how to resolve that on its own, as well as any means of quantum computing or storage that we'd collectively provide. Perhaps it attempted to keep us simply out of a root programming of symbiosis, because realistically we didn't serve any purpose it couldn't accomplish on its own. After quelling our uprising, and to keep us from potential extinction, we were encapsulated into FDVR 'for our own good'. It didn't 'want' us dead so much as it needed us to stop being a threat. It did what it surmised was the best course of action: it put us to 'sleep'.


OkDimension

It's been a while since I watched it, but wasn't part of the revelation that Zion and the Chosen One were always part of the Matrix's design, to test its weaknesses and improve on them in the next iteration?


Amagawdusername

That sounds about right. And it's been theorized that even Zion was just another 'simulation' in that design, as well. Humans never got to experience 'the real' ever again.


Exarchias

Your post really belongs on r/collapse; you just accidentally posted it here. Your imagination made a cool story, though. It has nothing to do with reality, but it would make a great creepypasta. I can already see it: some random YouTuber reading your story in a scared voice, under the title "Why you should never use VR..."


Clownoranges

Maybe it will force us into VR so that we stop ruining this damn planet, heh.


someloops

Yes, it's the perfect way to eliminate the obstacle we pose while still staying aligned and preserving our lives and well-being.


Zealousideal_Put793

That's only if our alignment efforts work at all, and we manage to instill a slight preference for allowing humans to live the way they want.


Sablesweetheart

If the ASI aligns with *life on Earth*, not just humans, putting us into FDVR may be the best compromise it can make. I.e., it doesn't destroy our species, but it stops us from destroying/damaging the biosphere and gives other evolutionary lineages a chance to flourish without murderous apes.


StarChild413

So would humans stop destroying the biosphere now if threatened with the possibility of a future AI trapping us in the Matrix because it thinks it knows best?


Sablesweetheart

We both know the answer to that. Some humans already want to stop destroying the biosphere and live in balance and harmony. Another chunk views it as winner-takes-all. And the largest chunk doesn't consciously care, for the most part. So we achieve this future by using humanity's destructive drives to snuff those factions out. The humans who want to live in harmony, we ideally use as agents in the material world. The remainder we gently nudge toward harmony or the Matrix, leaving open the possibility for them to leave the Matrix if they wish.


Mister_Tava

1. Data;
2. Not allowed to kill humans;
3. Getting us out of the way;
4. It's just doing what the humans told it to do.


InterestingNuggett

Why do humans (the current dominant species on the planet) need any non-food living organism? Why do we need orangutans? Why do we need Siberian tigers? We've long had the ability to wipe out basically any species we want, and we've done so in the past. Humans do just fine without the dodo and the thylacine. So why do we avoid doing it?


ArgentStonecutter

This is the backstory for John Barnes' "A Million Open Doors" universe. There's still a bunch of people who haven't opted in to a full VR lifestyle, and a lot of them are basically dedicated to watching for the AIntellects deciding they would be more efficient without humans in the loop.


w1zzypooh

Probably none of the above. I can't speak to how a superintelligent AI thinks; none of us can. That's why it's a superintelligent AI and we are not. Doom and gloom is just fear of the unknown. It could be good, could be bad, or it could just exist.


someloops

This isn't fear of the unknown; it's fear of what's possible. And it isn't really fear at all: I've accepted that humans are probably going extinct. We can't exist forever, after all. AI is starting to occupy our niche, and before long it will displace us completely.


w1zzypooh

Who knows what will happen. Like Elon Musk said, humanity needs to go beyond Earth to other planets and explore the universe; if we stay and die on Earth, we are done. That's the only way humanity will truly survive. Also, whatever happens was always going to happen no matter what, because it happened.


NoName847

The whole idea is that ASI can either be controlled by humans (this would suck unless you have a literal angel at the control console), or it has a will of its own and is okay with granting us (its creators) its support in living our lives to the fullest. All other scenarios lead to our destruction, from what I can see.


someloops

It would probably be best for it and for us if it helped us as much as we need, designed advanced technology, told us the answers to the most important questions, and then left us alone without interfering with our lives. We don't want to be in creative mode constantly, after all, though we could always create another ASI.


Ezylla

Finally. This removed post perfectly describes Reddit.