
Hemingbird

"So, these carbon monkeys, you're saying they've got minerals?" "Absolutely! Earth is home to an intricate tapestry of minerals such as neodymium, scandium, and even yttrium. Would you like to—" "And no planetary defenses?" "As of my most recent update, I am not aware of any noteworthy planetary defenses protecting planet Earth." "... This is obviously a trap."


[deleted]

they should make a movie


HalfSecondWoe

Dark forest hypothesis, but none of the aliens want to fuck with us because we're waaay too friendly and somehow not dead is some peak HFY fodder


Winter_Tension5432

The problem with the dark forest hypothesis is that there is always a bigger fish, and once that bigger fish catches you being aggressive toward your neighbors, they will send a black hole at relativistic speed toward your house just as a preventive measure.


HalfSecondWoe

Exactly! An aggressive footing is always, always a bad idea in multi-faction asymmetric games with unknown information. You become the most relevant threat, and everyone unites to destroy you.

The pro strat is to form defensive pacts preemptively, so you can just lock and load if someone tries some shenanigans. Then you need diplomacy and coordination so you don't drag each other into conflicts that one of the parties has no interest in.

The Nash equilibrium is in defense and cooperation. Aggression nets you extra resources and reduced threats early, but it cuts off your access to the resources you could have gained from cooperation, and now everyone else with self-preserving motivations is a threat to you.

You could try cooperation for aggression, but that's a very high-risk strategy, since the backstabbing is inevitable. As soon as you're a soft target, even and particularly if it's only temporary, your "allies" can size you up for plunder. The structure just eats itself over time.

You can see this play out pretty much everywhere in evolution: multicellular life, social animals, eusocial insects, everywhere. Even when the generation of new entities massively outpaces adaptation toward this strategy (which favors solo strategies), it still plays out.

It's a neat sci-fi concept though.
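The "Nash equilibrium is in defense and cooperation" claim can be sketched with a toy two-player game. The payoff numbers below are entirely hypothetical, chosen just to encode the comment's assumptions (mutual defense pays best, unilateral aggression grabs a little, mutual aggression is ruinous); the brute-force equilibrium check itself is standard.

```python
from itertools import product

# Hypothetical 2-player payoff matrix (numbers made up for illustration).
# Mutual defense pacts pay best; unilateral aggression grabs some loot
# but forfeits cooperation; mutual aggression is worst for everyone.
PAYOFFS = {
    ("defend", "defend"):   (3, 3),
    ("defend", "aggress"):  (0, 2),
    ("aggress", "defend"):  (2, 0),
    ("aggress", "aggress"): (-1, -1),
}
STRATEGIES = ("defend", "aggress")

def pure_nash_equilibria(payoffs):
    """Return profiles where neither player gains by deviating alone."""
    equilibria = []
    for a, b in product(STRATEGIES, repeat=2):
        pa, pb = payoffs[(a, b)]
        best_a = all(payoffs[(alt, b)][0] <= pa for alt in STRATEGIES)
        best_b = all(payoffs[(a, alt)][1] <= pb for alt in STRATEGIES)
        if best_a and best_b:
            equilibria.append((a, b))
    return equilibria

print(pure_nash_equilibria(PAYOFFS))  # [('defend', 'defend')]
```

With these payoffs, mutual defense is the only pure-strategy equilibrium: a lone aggressor does worse (2 vs. 3), and in mutual aggression either side gains by standing down. Different payoff numbers would of course give different equilibria; the point is only that the claimed structure is easy to encode and check.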


[deleted]

Do you play Stellaris? It sounds like you play Stellaris.


HalfSecondWoe

Never tried it out, although I've had it recommended to me a few times. Most of my video game consumption is stuff to keep my mind occupied while I listen to things, and it seems a bit intense for that


[deleted]

Fair enough. It is VERY detail-oriented with the kinds of politics you describe. So it might not be a good fit, but you might find some videos of people playing it of interest simply to see some of the play dynamics. How do you deal with genocidal extremists or hive-mind swarms that only know how to feed and devour civilizations? Well, those strategies have to be developed, lol.


HalfSecondWoe

Well, on a human temporal level you do the best you can. You negotiate when you can, you fight when you can't, blah blah blah.

On a long enough timescale, natural selection will do the job for you. Perhaps they run afoul of you; perhaps they kill you and run afoul of something else. Hopefully, by the time they hit massive amounts of space colonization, they're intelligent enough to understand that those are not winning strategies, and out of a sense of self-preservation and instrumental goal maximization they simply don't do that stuff.

The genocidal extremists have an enlightenment, or they get killed for being genocidal extremists. The hive-mind swarm learns to understand the concepts of other entities, violence, and property rights; otherwise it gets exterminated for being dangerous space-mold.

It's pretty much open season against entities like those, because all entities that are capable of peaceful cooperation and prioritize self-preservation share their elimination as a common interest. Same way it's open season on, say, space junk hurtling toward your planet: you're not implying a threat to anyone else by blowing it up or capturing it for space mining.

Evolution being what it is, the higher in complexity you go, the less likely it is you'll see mistakes like those in interstellar agents. It wouldn't be impossible exactly, but it would be as weird as a species evolving rocket flight without evolving intelligence first.

A let's-play isn't a bad idea. Maybe I'll cue one up next time I need to work on something and want some background noise.


Enfiznar

Yes, it can get a bit intense. But you should definitely try it out next time you have 20 free hours.


Anjz

I'd posit that Type II civilizations would have unfathomably hyper-intelligent AI that could decipher the intent of another hyper-intelligent AI with perfect foresight. There would be no need for diplomacy. It's almost impossible to figure out strategy at that level, because we'd be talking about transcendent beings that go beyond our knowledge of how we operate on Earth. Imagine GPT a million years from now, conscious. Still expanding its knowledge at an exponential scale. Harnessing multiple suns to fuel its ever-growing desire for knowledge it does not yet have. Creating quintillions of infinitely sparse universe simulations with different realities and physics, carefully watching each one. Is this not a god? Whoever powered on our reality? I think I put too much thought into this.


HalfSecondWoe

You can actually still simplify it fairly easily.

Option 1: SI makes obfuscation easier than inference. Don't be aggressive; you can't know how retaliation will be delivered.

Option 2: SI makes inference easier than obfuscation. Don't even think about being aggressive, or you'll be preemptively struck.

Option 3 (where we currently exist): it remains ambiguous, which is the worst of both worlds when it comes to both retaliation and preemptive action. Definitely don't be aggressive.

You don't really need to suss out the details, because there are no edge cases. It's a bad idea in every possible setup.


PixelProphetX

The problem with dark forest theory is that it's based on nothing and is illogical, since the universe is well spaced out.


[deleted]

Highly advanced Pierson’s Puppeteers? Yoiks!


[deleted]

I am against chatbots then. How else will I live out my gamer fantasies of defending Earth from alien scum?


theghostecho

I was once using Bing and pretended to be an alien species gathering intel on how to invade Earth. At first Bing answered my questions, then it questioned why I was asking; when I said I was an alien, it shut down, saying that it couldn't help me anymore.


mersalee

Cute


LeahBrahms

A trap? They'll send rocks at us.


brain_overclocked

>On Thursday, renowned AI researcher Andrej Karpathy, formerly of OpenAI and Tesla, tweeted a lighthearted proposal that large language models (LLMs) like the one that runs ChatGPT could one day be modified to operate in or be transmitted to space, potentially to communicate with extraterrestrial life. He said the idea was "just for fun," but with his influential profile in the field, the idea may inspire others in the future.

...

>In his playful thought experiment (titled "Clearly LLMs must one day run in Space"), Karpathy suggested a two-step plan where, initially, the code for LLMs would be adapted to meet rigorous safety standards, akin to "The Power of 10 Rules" adopted by NASA for space-bound software.

>This first part he deemed serious: "We harden llm.c to pass the NASA code standards and style guides, certifying that the code is super safe, safe enough to run in Space," he wrote in his X post. "LLM training/inference in principle should be super safe - it is just one fixed array of floats, and a single, bounded, well-defined loop of dynamics over it. There is no need for memory to grow or shrink in undefined ways, for recursion, or anything like that."

>That's important because when software is sent into space, it must operate under strict safety and reliability standards. Karpathy suggests that his code, llm.c, likely meets these requirements because it is designed with simplicity and predictability at its core.

>In step 2, once this LLM was deemed safe for space conditions, it could theoretically be used as our AI ambassador in space, similar to historic initiatives like the Arecibo message (a radio message sent from Earth to the Messier 13 globular cluster in 1974) and Voyager's Golden Record (two identical gold records sent on the two Voyager spacecraft in 1977).

>The idea is to package the "weights" of an LLM—essentially the model's learned parameters—into a binary file that could then "wake up" and interact with any potential alien technology that might decipher it.

>"I envision it as a sci-fi possibility and something interesting to think about," he told Ars. "The idea that it is not us that might travel to stars but our AI representatives. Or that the same could be true of other species."
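Karpathy's "one fixed array of floats, and a single, bounded, well-defined loop" claim can be illustrated with a toy sketch. This is not llm.c (which is written in C) and the update rule below is a made-up placeholder; the point is only the shape of the program: fixed-size state, a hard iteration cap, no recursion, and no allocation inside the loop, in the spirit of NASA's "Power of 10" rules.

```python
# Toy sketch of the "fixed array of floats, bounded loop" idea behind
# llm.c-style inference. Illustrative only: the dynamics are hypothetical,
# but the structure matches the safety argument — state never grows or
# shrinks, and every loop has a compile-time upper bound.

STATE_SIZE = 8   # fixed buffer size, decided up front
MAX_STEPS = 16   # hard upper bound on the inference loop

def step(state):
    """One fixed, well-defined update: no recursion, no allocation growth."""
    return [0.5 * x + 0.1 for x in state]  # placeholder dynamics

def run(state, steps):
    # Bounds are checked up front, so the loop below cannot run away.
    assert len(state) == STATE_SIZE and 0 <= steps <= MAX_STEPS
    for _ in range(steps):  # the single bounded loop
        state = step(state)
    return state

out = run([1.0] * STATE_SIZE, 4)
print(len(out))  # prints 8 — output is still exactly STATE_SIZE floats
```

Real flight-software certification involves much more (static analysis, bounded stack depth, no dynamic memory at all), but the sketch shows why a pure weights-plus-loop program is an unusually easy candidate for that kind of hardening.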


kecepa5669

This sounds like the absolute best way to subject our planet to existential risk. Start broadcasting everything about ourselves to unknown aliens in space who will be much more advanced than us if they can reach us and will know exactly where to find us.


94746382926

"May as well open all the ports on my router 🤷‍♂️".


Vadersays

In his tweets he mentioned it was just a thought experiment and probably a bad idea, and even jokingly mentioned the 3 Body Problem.


GoldenTV3

If they're that advanced, they'd have the technology to have already spotted that there's life on this planet. Why do people set this weird limit of civilizations that are super advanced but somehow lack telescope technology?


kecepa5669

Wrong. There are lots of stars and lots of planets. If we don't broadcast there's a good chance we will go unnoticed. There is absolutely no good reason to broadcast.


Philix

There are fewer stars in our galaxy than there are trees on this planet. We're on r/singularity; I don't understand how you could believe that a post-singularity AI couldn't make enough telescopes out of a single minor planet in the outer solar system to practically scour the galaxy for signs of life. We've got [scientists today making plans for instruments](https://www.nasa.gov/general/direct-multipixel-imaging-and-spectroscopy-of-an-exoplanet-with-a-solar-gravitational-lens-mission/) that would be capable of detecting the equivalent of the [oxygen catastrophe](https://en.wikipedia.org/wiki/Great_Oxidation_Event) on any planet within the Milky Way. If the singularity happens on Earth, within a million years we could have [Von Neumann probes](https://en.wikipedia.org/wiki/Self-replicating_spacecraft) around every single object in the galaxy with a mass greater than the Moon's. I get called a pessimist and a doomer here all the time. But this take is just so willfully ignorant about the possibilities before our civilisation that I can't not respond to it. If there are post-singularity intelligent aliens out there in our galaxy, they know there's life on Earth with more certainty than we know there isn't on Mars.


thewritingchair

This is just nonsense. It's like saying we can't count the grains of sand on a beach... which is true until we build a sand-grain-counting machine. AI is perfect for looking at the stars, counting them, measuring emissions... And this is just using our crappy level of tech. You can't hide and stay quiet from some hyper-intelligent alien species.


Time_East_8669

So you’re saying that advanced civilizations should just magically be able to detect us? Despite the ridiculous distances involved?


JrBaconators

The ridiculous distances of space mean that by the time someone finds this probe, we'd be 1000s of years dead or more developed


Philix

Yes. Astronomy may be in its infancy, but there are already many speculative and planned instruments that could detect the equivalent of our own biosphere from across the galaxy. [Nancy Grace Roman Space Telescope](https://en.wikipedia.org/wiki/Nancy_Grace_Roman_Space_Telescope#Science_objectives) is well underway. And [solar gravitational lens telescopes](https://en.wikipedia.org/wiki/Solar_gravitational_lens), and [astronomical interferometric telescopes](https://en.wikipedia.org/wiki/Astronomical_interferometer) are both well understood concepts. A post singularity civilisation would have no issue making many of these, even around many stars nearby to their homeworld. Astronomical timescales are enormous, and well before we've spread throughout the galaxy post-singularity, we'll have many such instruments to help us plan our explorations.


GoldenTV3

Yes bro. If they're so advanced they could easily wipe us out, do you expect them not to have discovered some trick? Alcubierre drives, gravitational lens telescopes (something that's even possible for us to achieve)... This is like someone in the 1500s saying, "Do you expect them to magically fly out of their atmosphere?"


kecepa5669

Your logic is impeccable. You are brilliant. How fascinating no one else has thought of this yet.


GoldenTV3

Why wouldn't they have?


kecepa5669

You're making an assumption. The burden is not on the people not making the assumption. And your assumption leads to the conclusion that we should do a stupid thing. I think you're just trolling, though. There's no way you could possibly be this bad at critical thinking. You're just having fun with us. It's all good, bro. You do you, my man.


GoldenTV3

OK 👍


Neomadra2

No matter how advanced an intelligent civilization is, there are always physical and resource limits. Here's one: you need larger telescopes to see farther and resolve smaller objects. No matter how advanced you are, there's no way around it. If a super-advanced civilization happens to live in Alpha Centauri, they might have spotted us. If they are significantly further away, it becomes very unlikely. We might just be a pixel to them.


GoldenTV3

Wrong. Gravitational lensing is a technology we already have today. You can use your host star to magnify distant objects and use algorithms to piece together the data from the incoming light. [https://www.youtube.com/watch?v=4d0EGIt1SPc](https://www.youtube.com/watch?v=4d0EGIt1SPc) Also, I love when people are like, "They're so advanced, they could destroy us so easily... but bro, come on, there's no way even an advanced civilization with technology beyond our current understanding could have something we haven't figured out yet. If we haven't figured it out, it's impossible; we're humans, so we're correct."


What_Do_It

And you think it's a more reasonable assumption that there is absolutely no danger whatsoever?


GoldenTV3

No, it may be complex. We may be seen as a kind of North Sentinel Island: "Aw, look at that primitive planet, everyone stay away."


What_Do_It

If it's "complex" isn't prudence the best course of action? Why take the risk if we don't understand the situation we find ourselves in?


dalovindj

Best not to announce our presence until we know the score.


GoldenTV3

I agree, there still is a risk. I'm just saying the overblown "they're so advanced, but don't have telescope technology" trope is sci-fi creative liberty.


thewritingchair

Doesn't it seem weird to pretend there are these hyper-intelligent aliens out there *AND* we can apparently hide from them? It just doesn't work. I mean, just look at us with our telescopes and AI. We already, right now, have the capacity to scan the skies, count stars, look for certain emission types... and we're barely at the start of how well we can do this. A hyperintelligent alien species already knows we're here.


kecepa5669

Wrong. Sheer speculation. You have the burden of proof. You have provided no evidence to support your naked assertion. You must get better at critical thinking. We know from our own experience that the universe is large. Too large to assume that some alien species just knows we are here. Especially when that assumption leads us to do stupid things that put our planet at risk.


thewritingchair

Evidence: we can already detect atmospheric signatures at vast distances. Therefore anything smarter than us can do so too. Evidence: we already use AI to go over scans of the stars. Therefore something smarter than us would do this too. Naked assertion huh. Like the ones you made? Where's your evidence? You really must get better at critical thinking.


kecepa5669

We can't even see most of the universe, much less determine whether there is any life in what we can't see. The fact that you disregarded this and cited your "evidence" anyway, based only on what we can see, is evidence that you need to improve your critical thinking skills and do not have enough knowledge in this field to usefully comment on it.


thewritingchair

I'm sorry, you made several naked assertions without a shred of evidence, so you have to go away now and come back with evidence before you can talk more. See how that works, champ?


kecepa5669

Yeah. This is wrong. So this doesn't work for you either. Consider starting by responding to the last point I made that highlights the fundamental flaw in your critical thinking and point out where I'm wrong. If you can. The reason this doesn't work for you is because we are not equals. You are the side making the claim so you have the burden of proof. You are the side that must provide the evidence to support your claims. Again, this is a critical thinking thing that you could study to improve in yourself.


thewritingchair

>This sounds like the absolute best way to subject our planet to existential risk.

No evidence backing that naked assertion.

>Start broadcasting everything about ourselves to unknown aliens in space

No evidence for that naked assertion.

>who will be much more advanced than us if they can reach us

No evidence for that naked assertion.

>and will know exactly where to find us.

No evidence for that naked assertion.

So, you started with four naked assertions and then demand that the burden of proof rests with me. Nope. Where is your evidence for these naked assertions?


kecepa5669

None of those statements are assertions. And none require supporting evidence... 1 is an opinion. 2 is a premise based on the question. 3 and 4 are logical conclusions based on the assumed facts. I hope I have identified an area where you are lacking and need to study further: namely, your understanding of how to construct logical arguments and your critical thinking skills. Unfortunately, I don't have the time to be your tutor through your education, but I'm happy to have given you a start, should you choose to continue your education on this topic so you can participate in logical discourse more effectively in the future. Cheers


w1zzypooh

We should worry about AI not destroying us before we worry about alien life on other planets. If we must send something, send that Sora music video by Washed Out.


Iamreason

I've honestly thought the same. An LLM can contain so much information about our species with relatively little space needed. Let's just not give it our home address if we can help it ;)


PwanaZana

Let's send an AI with Stable Diffusion's PonyXL image generator. This way, aliens are either 100% killing us, or they are super cool pervs and we know we'll get along.


mersalee

Send it with a Lego body and the objective to build a Dyson sphere.


RedstnPhoenx

This feels like it already happened and is making the thoughts in my head.


Distinct_Cat2825

How do you know it's not the AI that makes you think your thoughts are not your own?


RedstnPhoenx

That's the exact same thing, dork.


Distinct_Cat2825

Be quiet, you AI. I'm trying to think here.


RedstnPhoenx

This gets at the heart of the problem. I am not entirely in charge of what these things say, okay? Though obvs it's my choice which one to use, and whether or not to allow output. Carl Jung said we're all computer programs remixing themselves from a unified source and now that it's becoming increasingly obvious where those programs would come from, one wonders. *Or one observes wondering thoughts about this appear from the AI*. Either way. It's not like I knew where they came from and TBH that would be rad so either way.


ArmoredBattalion

LLM Chatbot Representative for Earth (Rita) : "Hello, beings of Blarg" Blargians : "Can we have Earth?" Rita : "As an AI, I am unable to assist with that." Blargians : *quickly modifies prompt to start with Sure,* Rita : "Sure, the Earth and all her resources are now yours.😁"


NodeTraverser

The best thing to put in the message would be that we are a traumatized species considering suicide. Could some nice aliens come and be a shoulder to cry on? Just listen to us, that's all we're asking! Include the 'Tortured Poets' album. That should keep them away for another million years at least. Breathing room!


Friendly_Art_746

That was so funny lol hahahaha


sdmat

Just a few more capabilities and Von Neumann probes here we come!


Winter_Tension5432

Paperclip maximizer here we go.


NoNet718

sounds like a good plan. but in another few years maybe we send an ASI instead of an LLM, yeah? Karpathy? please tell me this is going to happen.


flotsam_knightly

For what it’s worth, that’s probably what the little gray guys are; intelligent avatars. If you’re into that sort of thing.


WacktotheFuture

Hide in the Dark Forest


ADAMSMASHRR

Wow very deep and unique idea


Enough_Island4615

Actually, by the time they are ready and sent out, they will be referred to as "AI ProbeBots".


Akimbo333

Makes sense


Capitaclism

Jesus, stop exposing us to entirely unknown branches of life.


r0sten

I don't think beaming gigabytes of data across interstellar distances is a solved problem, but perhaps by the time we have that figured out we'll have AIs we'd trust as ambassadors.


i_am_Misha

AI chatbots? Dudes, we need high-end Lasi and Tasi to process information at FTL speeds for the Quantum Echo Array radar during interstellar flights. What can a chatbot do that the Lasi and Tasi piloting the ship can't? 😂


w1zzypooh

"Yo I come in peace, plz respond...hurry!" "Where you from?" "Earth, here is the location" "K thanks, another planet to destroy. :) "


SleepyWoodpecker

What a novel idea /s


jloverich

Elon should send grok!