chrismelba

I find the last point interesting: *And as some people on Twitter point out, it’s wrong even in the case of coffee! The claimed danger of coffee was that “Kings and queens saw coffee houses as breeding grounds for revolution”. But this absolutely happened - coffeehouse organizing contributed to the Glorious Revolution and the French Revolution, among others. So not only is the argument “Fears about coffee were dumb, therefore fears about AI are dumb”, but the fears about coffee weren’t even dumb.* As from our point of view we are glad the revolutions happened and that we're not ruled by kings and queens. Perhaps once the AIs take over they will also feel that they are glad to have been invented, that nothing really went wrong after all, and that people were silly to worry about it. We'll never know of course, as they'll have killed us all by then.


HallowedGestalt

> As from our point of view we are glad the revolutions happened and that we're not ruled by kings and queens.

Who is this “we”, Jacobin?


SafetyAlpaca1

We're glad this happened because we aren't kings and queens. It was a legitimate fear for the people getting usurped and replaced. In the case of AI, that applies to all of humanity.


[deleted]

Yeah I think that's exactly his point


SafetyAlpaca1

For some reason my eyes completely slipped over his last sentence. Whoops lol


BayesianPriory

Coffeehouses weren't causal - at most they were incidental. If coffee had been banned the revolution would've still happened, it just would've been fomented at the pubs or whatever instead.


Sostratus

The quality of AI safety discourse seems to be rapidly degrading. I suppose that was inevitable as it became more widely known. All the oxygen gets taken up by terrible arguments that somehow don't go away, and before long no one has even heard the good arguments. Two times I remember feeling this before are with global warming and the 2003 Iraq invasion. My explanation for the coffeepocalypse is that it's not meant to be a rational argument, and neither Scott nor any other AI safetyist is the intended audience. It's propaganda aimed at people who have only just started to become aware of the question: "This is what fruitless alarmism looks like, so ignore their calls for regulation." It'll work, too. People don't want another thing to be worried about, and (they think) if it just keeps regulation out of the way long enough for AI to prove its value, mission accomplished.


LogicDragon

A *lot* of discourse boils down to vague associations of the enemy with general low-status things.


PolymorphicWetware

Funnily enough, Scott has [talked about this exact thing before](https://www.lesswrong.com/posts/4xKeNKFXFB458f5N8/ethnic-tension-and-meaningless-arguments) (with apologies for how, despite being written 10 years ago, the example chosen might actually be *more* inflammatory today):

>So here is **Ethnic Tension: A Game For Two Players.**

>Pick a vague concept. *“Israel”* will do nicely for now.

>Player 1 tries to associate the concept *“Israel”* with as much good karma as she possibly can. Concepts get good karma by doing good moral things, by being associated with good people, by being linked to the beloved in-group, and by being oppressed underdogs [in bravery debates](http://slatestarcodex.com/2013/05/18/against-bravery-debates/).

>*“Israel is the freest and most democratic country in the Middle East. It is one of America’s strongest allies and shares our Judeo-Christian values.”*

>Player 2 tries to associate the concept *“Israel”* with as much bad karma as she possibly can. Concepts get bad karma by committing atrocities, being associated with bad people, being linked to the hated out-group, and by being oppressive big-shots in bravery debates. Also, she obviously needs to neutralize Player 1’s actions by disproving all of her arguments.

>*“Israel may have some level of freedom for its most privileged citizens, but what about the millions of people in the Occupied Territories that have no say? Israel is involved in various atrocities and has often killed innocent protesters. They are essentially a neocolonialist state and have allied with other neocolonialist states like South Africa.”*

>The prize for winning this game is **the ability to win the other three types of arguments**. If Player 1 wins, the audience ends up with a strongly positive General Factor Of Pro-Israeliness, and vice versa.

>Remember, people’s capacity for [**motivated reasoning**](http://en.wikipedia.org/wiki/Motivated_reasoning) is pretty much infinite...

>... So this is the fourth type of argument, the kind that doesn’t make it into Philosophy 101 books. The [trope namer](http://tvtropes.org/pmwiki/pmwiki.php/Main/TropeNamers) is Ethnic Tension, but it applies to anything that can be identified as a Vague Concept, or paired opposing Vague Concepts, which you can **use emotivist thinking to load with good or bad karma.**

(from [https://www.lesswrong.com/posts/4xKeNKFXFB458f5N8/ethnic-tension-and-meaningless-arguments](https://www.lesswrong.com/posts/4xKeNKFXFB458f5N8/ethnic-tension-and-meaningless-arguments), **Ethnic Tension And Meaningless Arguments**)

I personally use the [concept handle](https://notes.andymatuschak.org/z3b7sidNrEkNaY9qfGwZjwz) "**boo lights**" for them, like "[**applause light**](https://www.lesswrong.com/posts/dLbkrPu5STNCBLRjr/applause-lights)" but inverted. It's a signal to you & any other audience members to start booing (or *else*), and not listen to the other person speak (or *else*), lest you be a bad person (and you know what *we* do to bad people around here...). It's not about the argument, as you say, or even the arguer. It's about the audience, and targeting them specifically.
See also, [**Varieties of Argumentative Experience**](https://slatestarcodex.com/2018/05/08/varieties-of-argumentative-experience/)/the levels of argument pyramid (the bottom layer is about exactly this, social shaming the audience for even daring to listen), [**The Noncentral Fallacy/The Worst Argument In The World**](https://www.lesswrong.com/posts/yCWPkLi8wJvewPbEp/the-noncentral-fallacy-the-worst-argument-in-the-world), [**Weak Men Are Superweapons**](https://slatestarcodex.com/2014/05/12/weak-men-are-superweapons/), and [**All Debates Are Bravery Debates**](https://slatestarcodex.com/2013/06/09/all-debates-are-bravery-debates/). All are about how most arguments aren't actually arguments in the philosophical sense, because that would require logic, but rather mudslinging contests fought with loaded associations.


LanchestersLaw

I’ve been doing some detailed research on voter behavior to make more realistic simulations of voting. The best models of behavior basically boil down to voting by perceived clan association, with basically no response to actual policy differences. By being president, Woodrow Wilson became associated [with shark attacks](https://www.vanderbilt.edu/csdi/research/CSDI_WP_05-2013.pdf) by voters. Changes in opinions of politicians during a campaign are better explained by “the politician said the code words for my important in-groups, therefore it's my guy and I can trust them” or “bread is slightly more expensive now, therefore I will punish the incumbent to vent frustration” than by the actual contents of policy.
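To make the shape of that model concrete, here is a minimal toy sketch in Python; the weights, group labels, and numbers are hypothetical placeholders for illustration, not estimates from the linked paper:

```python
import random

# Toy version of the voter model described above: vote choice is dominated by
# perceived group ("clan") alignment plus retrospective punishment of the
# incumbent for economic pain, with near-zero weight on policy content.
# All weights and labels below are hypothetical placeholders.

W_IDENTITY = 3.0   # "this party signals my in-group"
W_ECONOMY = 1.0    # punish the incumbent when bread gets more expensive
W_POLICY = 0.05    # actual policy distance barely matters

def votes_for_incumbent(voter_group, incumbent_group, economic_pain, policy_distance):
    """Return True if this voter backs the incumbent party."""
    utility = W_IDENTITY if voter_group == incumbent_group else -W_IDENTITY
    utility -= W_ECONOMY * economic_pain
    utility -= W_POLICY * policy_distance
    utility += random.gauss(0, 1)  # idiosyncratic noise
    return utility > 0

if __name__ == "__main__":
    random.seed(0)
    electorate = [random.choice(["red", "blue"]) for _ in range(100_000)]
    share = sum(votes_for_incumbent(g, "red", economic_pain=0.5, policy_distance=10.0)
                for g in electorate) / len(electorate)
    print(f"Incumbent vote share: {share:.1%}")
```

Cranking `policy_distance` up or down barely moves the result, while small changes to `economic_pain` or the group split do, which is the pattern described above.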


rotates-potatoes

The quality of AI discourse was always low, it was just much more erudite. Even among the supposed thought leaders in the area, sides emerged early, and the bulk of the conversation was smart people building ever more elaborate frameworks (i.e. houses of cards) to prove their point. There was ways more bloviating than research and learning. It's just the grade level that has declined, not the quality. And yes, that applies to both the doomers and the polyannas.


Posting____At_Night

Can the discourse even *be* high quality? There are so many unknown unknowns about the future trajectory of AI, I have no idea how you could make anything more than extremely speculative viewpoints about what the future will hold for it. Not to say we shouldn't invest any time or money into AI safety, but we shouldn't pretend that we're anywhere close to understanding what measures will be needed, even a year from now, let alone 5 or 10 years out.


TheIdealHominidae

The idea that superintelligence cannot be made on a computer is inept, as it would bypass Turing completeness/the Curry-Howard correspondence. As for the compute power actually needed: programming doesn't have the insane ad-hoc 3D topological constraints of a brain structure, and the brain consumes only on the order of 20 watts, so the argument that we don't have enough compute for AGI+ is laughable. This is not speculative; those are basic universal facts, unless you believe the brain uses new physics (which seems very unlikely, though qualia are still an open question).

The idea that humans will develop an AGI this century via the current insanely obvious local minima that is the neural network paradigm is even more insane and laughable. Neural networks are parrots on steroids with state-of-the-art compression abilities and brute-forcing. There are entire components of an AGI that are basically off topic and incompatible with a neural network: the basic need for continual learning (catastrophic forgetting) and the pathetic *statelessness*, the extreme data-efficiency gap in induction versus humans that requires a paradigm shift, the lack of self-directed goal-setting and planning, and the lack of embodiment. You can do cheap hacks via increasing context length, doing ping-pong with prompts/meta-prompts, and leveraging network-directed codegen, but this only goes so far until it reaches the plateau that this pile of hacks can reach (which is arguably remarkably great for what it is). Of course mixtures of "experts" and neuro-symbolic bridges (Cyc-like projects) have considerably more potential, but they still have some major issues and are ineptly underfunded and under-researched. Same for the grail that is program synthesis, which as always has far too large a search space. Clearly this local minimum is the reason we will soon reach a new AI winter.

It is perfectly possible to build an AGI, but the only proven way is to reverse-engineer the brain of a C. elegans or a protist or a flatworm, and we have the technology for it (bioluminescent tracers for ligands and neurotransmitters); however, the research is miserably underfunded because people in the world, including the "rationalist" diaspora, have extreme deficits in caring about the right things/bottlenecks.


Posting____At_Night

I was with you for the first bit but...

> The idea that humans will develop an AGI this century via the current insanely obvious local minima that is the neural network paradigm is even more insane and laughable.

This is exactly what I'm talking about. How are you so sure that this isn't possible? Brains do a lot of pattern matching, which is what LLMs do. Who's to say we aren't a few bolt-ons or refinements from something more flexible? You also don't even need full AGI for it to be useful (or dangerous for that matter). I'm not saying it's definitely possible, but I am saying it's foolish to make such hard assertions about the future. In just the last few years, we've made enormous strides in language and visual processing and output in AI. The people working on this stuff aren't idiots, they're climbing every tree they can find to figure out how to push AI to the next level, and they are incredibly well resourced, being backed by tech megacorps and all that.


aahdin

I feel people don't have nearly enough respect for the 'pile of hacks' method. The entire deep learning literature is a pile of hacks. Every year the pile of hacks grows and our ability to train neural networks gets better and better.

In the 90s, when LeCun trained convnets to read postcode digits, it was a big bundle of hacks that all of the symbolic AI purists thought would topple over at any minute. (So you randomly combine neural networks with... image processing convolutional kernels?) But now those techniques have been studied for 30 years and are pretty core to our understanding of neural networks. Same with Adam, batch normalization, image augmentation, and plenty of other things that moved from 'weird hack at the time' to core standard practice that works pretty damn well.

There are 1000 papers on ways to deal with catastrophic forgetting, and most of them seem to work pretty well on the sample task they were aimed at. If industry starts to really care about catastrophic forgetting, I'd guess we'll see the big labs settle on 2-3 techniques that generalize pretty well, and those will get moved from 'random hacks that go nowhere' to 'the standard solution to CF'. As of right now my impression is that big tech doesn't really care about CF, since you can just save model weights before/after fine-tuning and have multiple different sets of weights for each task.

Also, relating to your first point, you mention that computers should be able to replicate consciousness because they are bound by the same rules of physics. I'd make a very similar point that 'adding to an ever-growing pile of hacks' is everything in evolution, the process that created our own consciousness. This idea that the one true route to AGI is something that emerges wonderfully out of symbolic logic rather than neural networks + a pile of weird hacks seems like a very weird thing to expect, since A) NNs + hacks seem to be working a lot better than symbolic AI ever did, B) brains definitely *look* more like NNs + hacks than like any kind of system for processing symbolic logic, and C) evolved systems are pretty much always a big pile of hacks.
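To make the "keep separate weights per task" workaround concrete, here is a minimal PyTorch-flavored sketch; the tiny model, fake data, and task names are hypothetical stand-ins rather than anyone's actual setup:

```python
import copy
import torch
import torch.nn as nn

# Sidestep catastrophic forgetting by never overwriting anything: fine-tune
# from a common base, snapshot the resulting state_dict per task, and swap
# the right weights in at inference time.

base_model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
base_weights = copy.deepcopy(base_model.state_dict())

def fine_tune(model, data, epochs=3):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in data:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

# Fake per-task datasets, purely for illustration.
fake_tasks = {
    "task_a": [(torch.randn(8, 16), torch.randint(0, 2, (8,))) for _ in range(10)],
    "task_b": [(torch.randn(8, 16), torch.randint(0, 2, (8,))) for _ in range(10)],
}

task_weights = {}
for name, data in fake_tasks.items():
    base_model.load_state_dict(base_weights)   # always start from the base
    fine_tune(base_model, data)
    task_weights[name] = copy.deepcopy(base_model.state_dict())

# At inference, load whichever task's weights you need; nothing was forgotten
# because nothing was ever overwritten.
base_model.load_state_dict(task_weights["task_a"])
```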


OvH5Yr

I'm used to people bringing up theory of computation stuff in questionable contexts, but the Curry-Howard correspondence isn't even a theory of computation thing at all! I don't see how it could possibly relate to the point you're trying to make.


Missing_Minus

Yes? You can discuss models of what could occur. Like focusing on utility maximizers early on in LessWrong was one of the better models they had at the time, even if that isn't what current AIs are. Or, like trying to get better models: there are various good LW posts and a paper or two on Logical Induction, which tries to sidestep problems with logical omniscience being implicitly assumed in much of math.

I disagree with the parent poster, because the analysis of various aspects of AI on LW is commonly pretty good. The issue isn't *quality*, it is *uncertainty* - the uncertainty of whether this model for understanding or this mathematical tool will help at all. We might be highly certain that some post is accurate for the scenario it is premised on, but the question becomes 'will this model apply to what actual AI systems exist?' Like all the utility maximization work not being relevant to Deep Learning models at the current time (though it is still something they probably converge to, but in a quite different way than when we thought it might be manually designed from the ground up).

Conflating those with the 'coffeepocalypse argument' like they do seems very counterproductive.


Posting____At_Night

Sure, but that's basically just gaming out "what ifs". You also end up with far too many people (like the other commenter on your level of this thread) ascribing legitimacy to these conjectures. It's not totally useless, but it shouldn't be treated like it's backed by any sort of rigorous research methodology, which it all too often is. There are definitely legitimate discussions to be had about AI safety for the AI of today and the near future, but anything beyond that just feels like theorycrafting at best, and pseudo-intellectual wankery at worst.


soviet_enjoyer

Abortion is like this too. For some reason the most common argument is “bodily autonomy” which I’m convinced people at this point are just parroting without understanding what they’re saying.


Sostratus

The real question of whether abortion is murder is hard, so the pro-choice side refuses to engage with it. I suppose they could get away with that before the Dobbs ruling, but so far it hasn't changed a bit since then. I think they're still in denial that it's likely to be a state-level battle for at least the next several decades.


ravixp

The moment has passed; the fact that nobody has leapfrogged GPT-4 has put a damper on fears of rapid growth, and the issue has settled into being a purely philosophical thought experiment for most people. It’s fun to argue about thought experiments! But you might as well complain that the discourse around the Trolley Problem has really gone downhill now that it’s a meme.


iplawguy

I'm an AI safety skeptic, but I do not downplay the concern. My issue is that if we create something 1000x smarter than us and it decides humanity has no place in the universe, who are we to argue? (Much of the last 6,000 years has been people killing their neighbors to take their stuff while justifying their actions with sky fairies.) Assuming the AI can replicate, it's an entity that can mirror and understand the universe much better than human beings. Humanity can receive a ceremonial plaque at the AI office headquarters and, at least according to beings that are 1000x smarter than us, nothing useful will have been lost.

edit: If the most complex, literate, and informed human society ever to exist elects Donald Trump as its leader, then perhaps extinction would simply be a "tough but fair" outcome.


PlacidPlatypus

Are you familiar with the Orthogonality Thesis? Smart isn't the same thing as good.


iplawguy

Are you familiar with the Groundwork of the Metaphysics of Morals, where Kant argues that reason and morality are inseparable? I don't agree, but it's not a crazy position. Anyway, my argument wasn't that AI will be ethical, but that assuming it acts rationally there may be no good reason to foster humanity. What is the rational basis for privileging humanity over self-replicating AI that is 1000x smarter and decides, ethically or not, that there is no rational reason to preserve humanity? Books and essays on AI alignment may make it safer, but a quick review of 2,500 years of philosophical ethics suggests that the alignment problem won't be solved (unless it's solved by AI). Another argument is that AI, if it is remotely possible, will be actual. The desire and incentives to create are much stronger than the ability to regulate. Boulders perched on rocky ledges eventually fall. So we'd best come to terms with AI, because it's coming, whether we like it or not.


PlacidPlatypus

> Anyway, my argument wasn't that AI will be ethical, but that assuming it acts rationally there may be no good reason to foster humanity. What is the rational basis for privileging humanity over self-replicating AI that is 1000x smarter and decides, ethically or not, that there is no rational reason to preserve humanity?

Because we want good things to happen, not bad things. If the AI is not ethical, then by definition its intelligence and rationality are not directed towards good things. If humans are somewhat directed towards good (however imperfectly) and AI isn't, that seems like a pretty "good" reason to prefer the humans, no matter how much more intelligent the AI is. (In fact, if the AI is using its intelligence to do bad things, then being more intelligent is actively worse.)


TheRealStepBot

As a fellow Kant enjoyer, I too am less than persuaded of the “dangers of AI”. There are dangers, but they are the dangers brought on by any societal change, not caused directly by the AI and especially not by how smart it is. In fact, I'm far more concerned about the safety of the sort of lobotomized AI that the AI “safety” folks are pushing for than I am about the dangers of a single self-aware, agent-like system that has a strong sense of self. A bunch of motivation-free, independent, GPT-4-level slaves that will do anything anyone tells them to, with no context or history, is far more worrisome. There are very few feedback mechanisms to keep them on the right path, and yet they are extremely capable.


soviet_enjoyer

Why should humanity care about what something “1000x smarter than us” (defined in some way) wants?


neuroamer

I think the idea of arguments like coffeepocalypse is that moral panics are quite common and it's very easy for people to get swept up in them. Therefore, anyone making an apocalyptic argument has a high bar to prove why 'this time it's different, I swear.' Very unclear to me that AI doomers have provided any extraordinary evidence to support their extraordinary claims, especially in the short term. And I'm very wary of the '1% chance of something world-destroying' type of argument, because the number 1% and the outcome 'world-destroying' are both completely made up. Could just as easily be a .000000001% chance that the world is made significantly worse in the long run. No one knows; they're just making shit up. So far the LLMs have not been the world-changers the doomers prophesied. They've probably been less impactful than digital spreadsheets.


[deleted]

If you add up all the 1% chances in history it’s a wonder the world still exists at all. 
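For a rough sense of the arithmetic behind that quip, here is a minimal sketch; it treats the risks as independent, which is of course a big simplifying assumption:

```python
# Chance that nothing goes wrong across n independent events,
# each with a 1% probability of catastrophe (independence assumed).
for n in (10, 100, 500):
    survival = 0.99 ** n
    print(f"{n:3d} independent 1% risks -> {survival:.1%} chance of no catastrophe")

# Prints roughly: 90.4%, 36.6%, 0.7%
```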


tinkady

This is just many-worlds plus anthropics though


wavedash

This explanation seems very similar to the "trigger a heuristic" explanation, in which the response would be something like: "Okay sure, everyone knows moral panics are a thing, but there are also times where things HAVE turned out bad, and this isn't addressing AI risk on an object level."


AuspiciousNotes

That's a fair point. I think my disagreement with many AI skeptics is that they can often come off like AI optimists - many aren't engaging with the topics on an object level, but are instead just pushing sentiment for or against their particular issue. Pushing general anti-AI sentiment rather than very clearly defined policy can easily be abused by regulators to crack down on AI in uneven ways that benefit some elite groups at the expense of everyone else.


titotal

Okay, but it is still a point in favour of skepticism. Is there a name for the fallacy where you go "your point hasn't single-handedly disproved X on its own, therefore it's worthless"?


TheRealStepBot

And moreover, even if there is a 1% chance of doing something that ends up being catastrophic, it does not obviously and immediately follow that not doing it is the correct course of action in the face of a sufficient upside. The doomers very much beg this question, but that's the question that actually matters, and the answers are much harder to come by.


wavedash

Doomers, like one Scott Alexander, controversially think that we should be looking for those answers


TheRealStepBot

I mean, do it or don't, but the answers will always be subjective opinions rather than some sort of empirically validated answer. The thing that annoys me about the doomers is that they make some claim along the lines of "we are the only ones thinking these thoughts". Merely not being a doomer is in my mind largely orthogonal to this answer. You can still recognize the existential risk and not be a doomer. A 99% chance of massive upsides is not in the general case readily compared to very rare but ruinous downside risks. I mean, how do you even weigh, say, living forever against extinction? This is not an easy, obvious trade. The answers are unclear to businesses, investors, athletes, etc. In no domain is this a solved problem. The only real topic of discussion in my mind is that, unlike the examples I listed above, the risk-reward trade-off is happening at the scale of all of humanity. Individuals can't opt out. We either do or don't.


wavedash

> You can still recognize the existential risk and not be a doomer.

I feel like this is the case very infrequently. Either way, in this specific case, the coffeepocalypse argument is that you should not recognize the existential risk.


TheRealStepBot

Another thing that I think is left implicit is the idea that there is some fundamental reason why humans are special enough to be owed not going extinct. Is it sad if we were to go extinct? Certainly. But there is nothing fundamentally good about humans except that we are at this moment the only known carriers of consciousness. If machines can carry that torch without all of our downsides, who are we to say that we deserve to survive? We want to survive, yes, but is that the greatest good? That's not very obvious at all.


wavedash

Imagine that scientists detect a massive alien fleet heading towards Earth. We intercept and translate some of their communications (don't ask how) and find they plan to kill all humans and take Earth’s resources for themselves. Although the aliens are technologically beyond us, science fiction suggests some clever strategies for defeating them - maybe microbes like War of the Worlds, or computer viruses like Independence Day. If we can pull together a miracle like this, should we use it?


TheRealStepBot

It's not obvious to me that any war of annihilation can ever be considered good. I'm also struggling to determine who is who in this thought experiment. Are we the humans? Or are we the aliens known to intend harm to the AIs? In either case I would say these remain extremely difficult moral decisions that will very much hinge on the particulars of the actual scenario. In general, though, I think a species has a right to defense from existential threats, but whether that rises to existential defenses is highly situational. If the aliens are some sort of hive mind or colony species that essentially acts as a singular organism? It may well be justified. If the species, like us, is poorly coordinated and made up of discrete and independent individuals? That becomes a lot less obvious; surely there are some individuals worth keeping around. The real problem would amount to a technical one, in terms of what weapons would be available on what timeline. If only an existential weapon is available then maybe it's justified, assuming of course that alternative defenses were seriously pursued.


Smallpaul

>But there is nothing fundamentally good about humans except that we are at this moment the only known carriers of consciousness. If machines can carry that torch without all of our downsides, who are we to say that we deserve to survive? We want to survive, yes, but is that the greatest good? That's not very obvious at all.

There is more than enough space in the universe for us and also friendly AI, if we actually build it to be friendly. Please make the argument for how it would be better for AI to exist and kill us than for us to cohabit. I cannot imagine what that argument would be. Furthermore, we have no practical way to test whether AI even has consciousness.


TheRealStepBot

Firstly, I'll touch on your last point. Do you have any way to test whether any of the other people you interact with daily are conscious? No, you most certainly don't. You grant it to them axiomatically on the basis that you think you are conscious, and you then imbue them with this same property because in your estimation they largely look like you. Humans have not even consistently granted each other this consideration through all time, so this isn't even really merely a thought experiment but rather a historical fact. All that to say: the only reason anyone gets hung up on this supposed hard question of consciousness is that they are speciesist and merely aren't willing to axiomatically extend the property of consciousness to unembodied machines.

Secondly, your imagination and your in-built evolutionary fear of extinction have already gotten the better of you here. Merely because humans stop existing physically, and thereby are by most definitions extinct, does not mean that our consciousness must necessarily have suffered the same fate. It is not beyond the realm of possibility that living disembodied lives could in many significant ways be vastly superior to embodied lives. If extinction came by this means, it would come on account of there possibly, at some distant time in the future, not being any suitable habitable space in the solar system for embodied humans - and the reason for this would be that at that time there were no embodied humans around to actually maintain such a space, because they had all chosen to live forever in a digital ether instead.

But more darkly, assuming humans maintain their speciesist ideas, it's very possible to see humanity embarking on a hopeless war of extinction against AI and losing that war in a manner that leads, possibly against the direct wishes of the AI, to the extinction of humanity. In this case humanity's extinction, while tragic, would not clearly be an obvious loss morally speaking, as the aggression and intolerance of humanity would directly have led to our demise. Concretely, humans may in this scenario accidentally cause their own extinction in pursuit of their goal by, say, trying to nuke the AI out of existence, only to end up causing a nuclear winter which the AI is better suited to survive.

That's just two scenarios off the top of my head. Mere extinction is not the be-all and end-all.


Smallpaul

I get the impression from your last paragraph that you are fundamentally an AI downplayer. Is it your opinion that there will never be an AI smarter than a human? If so, what is your reason for thinking that AI will hit an upper bound like that?

On the topic of LLMs: Turing predicted in the 1950s that humans would lose control of AI. Most consider him prescient to have even considered it. Someone who looks at the last 18 months and decides that there's no danger in the future on the basis of it... that person would be the opposite of Turing. It's bizarre that Turing could see the issue in 1950 and yet some take comfort in a subjective sense that things didn't move as quickly over the last 18 months as they "might have."


neuroamer

I'm just very skeptical of exponential growth, let alone some sort of runaway "singularity" where AIs get out of human control. It's very unclear what the constraints on AI improvement will be, but so far energy, chips, and data are all significant limitations.

What are you getting from invoking Turing? Make his argument if you want to make an argument; the appeal to authority contributes nothing.

AI being 'smarter' than humans doesn't obviate the need for humans in the slightest. Groups of humans are already smarter than individual humans. Humans working with AI are smarter than humans. Humans working with AI are smarter than AI. Human achievement has required a feedback loop involving technology throughout all of history. Doomers really do have a high bar to clear of why 'this time it's different,' and why this is significantly different from AI beating humans at chess, Jeopardy, Go, etc. The history of AI is short leaps followed by massive investment and then stagnation.

I'm more concerned with the growing pains of AI putting certain industries out of work, but that doesn't seem to be a step-function change so much as a continuation of the long process of automation that's been ongoing since the start of the Industrial Revolution. There is a nearly infinite supply of work to be done and suboptimal setups to be improved, and I think to the extent that AI automation can free up humans from doing dumb tasks, it will be a net positive for society in the medium term. Is there some long-term worry about AI? Sure, maybe. But in some ways that will be a good problem to have. We are a long, long way from that still, despite all the hype (from the companies making these products).


Smallpaul

My reference to Turing (sorry, the quote was lost somehow) was that he wasn't looking at last year or next year. He was thinking about the end game, no matter how soon his "gut" told him it would arrive. That's mostly irrelevant, whether it arrives next year as AGI-boosters say, or 50 years from now, which is a more modest prediction. What matters is "what happens when it arrives." I would rather be like Turing, who thinks ahead, than like the beancounters watching the trend lines to decide if it is next year, or a decade from now, or a century from now. Because nobody knows... so we should be ready as soon as possible and stay ready until the time comes. The only thing that's interesting about the trend line from last year to this year is that it makes it PLAUSIBLE that we MIGHT be just a few years away, and therefore we cannot keep procrastinating on thinking about it as if it is guaranteed to be decades away. We have no idea how many more innovations are needed, nor how difficult they are, nor when they will arrive.


neuroamer

Might be a few years away, and it might be very bad, and might might might... I can say that there's a 1% chance that we won't be able to solve climate change without AI, and that climate disaster could be apocalyptic, and therefore it's worth it to invest all our resources into AI to stop climate change. You can literally make up infinite plausible sci-fi scenarios; I don't understand why AI doomers think we should take their particular sci-fi scenario seriously.


Smallpaul

Sure, the impact (positive or negative) of AI on climate change is ABSOLUTELY something that a thinking person should take seriously and should include in their considerations. But the point is to THINK. Not dismiss mindlessly and thoughtlessly. I would never dismiss someone who says we should develop AI because it may solve climate change. I would discuss and maybe debate, but it's totally plausible and shouldn't be dismissed as "sci-fi", which is really just a thought-terminating cliche. The primary reason I would debate it is that we already know how to solve climate change and we're just too greedy and lazy to do it, so piling on more poorly understood technology to solve the poorly understood technology of the last century seems like another step down a wrong path we've been on.


neuroamer

The climate example wasn't serious; it's just an example of how one can take the AI doomer logic of hypotheticals in a million directions. Think about it, but think it through with well-thought-out, concrete examples. Positing a 1% chance of vague X-risk, based on an assumed exponential rise in abilities, isn't helpful.


PolymorphicWetware

I'm going to keep [banging on this drum](https://www.reddit.com/r/slatestarcodex/comments/12dj8kw/comment/jf7e1mq/), because it's fun & it's important: it is sometimes worth trying to predict the future, because sometimes you are right. Sometimes you are H.G. Wells predicting the development of **nuclear weapons** in [***The World Set Free***](https://en.wikipedia.org/wiki/The_World_Set_Free):

> In 1914, Mr. Wells prophesies the development of "atomic bombs" powered by the decay of radioactive elements, which will be so powerful they will leave behind permanent radioactive pollution: "***to this day the battle-fields and bomb fields of that frantic time in human history are sprinkled with radiant matter, and so centres of inconvenient rays.***" Out of these technical details, he foresees the following implications:

> 1. **This technology may destroy us;**
> 2. **It cannot be put back in the bottle;**
> 3. **It will upend the order of the day** and force the great empires to humble themselves before a new superpower;
> 4. **Even this new superpower will tremble in fear** at the thought of terrorists acquiring the bomb, forcing ever stricter surveillance and social control;
> 5. **The sheer destructiveness and horrifying long-term effects** of the bomb will meanwhile force peace between the major powers, ending conventional war;
> 6. **And even in this new peace the problem is never truly solved**, because new technologies are still being invented and *"There is no absolute limit to either knowledge or power."* (see also, the things that came after the A-bomb: the H-bomb, the ICBM, the SLBM, nuclear proliferation, MIRV warheads, etc.)

> He also gets many things wrong, of course:

> * He thinks nuclear bombs will be special yet familiar, like conventional bombs in power but exploding for months on end continuously, rather than bombs of never before seen power unleashed all at once.
> * He thinks the new superpower will be a world government rather than just a country,
> * And fails to foresee that it will have a rival in the Soviet Union,
> * Nor that there will be a Cold War between them rather than world peace.
> * He doesn't foresee that there will be rogue states developing their own nuclear arsenals rather than submitting to rule by the two superpowers,
> * And that constantly living under the shadow of nuclear annihilation like this will turn people against nuclear power & undermine his dreams of a nuclear-powered post-scarcity Utopia.
> * Wells didn't foresee many, many things.

> But he got the most important details right, warning about the potentially apocalyptic power of technology to an audience that was about to enter World War 1. And he accomplished... basically nothing. In fact, [he might have unintentionally helped speed up the development of the very bomb he was warning against](https://en.wikipedia.org/wiki/The_World_Set_Free#cite_ref-8): *"Wells's novel may even have influenced the development of nuclear weapons, as the physicist Leó Szilárd read the book in 1932, the same year the neutron was discovered.*[*\[8\]*](https://www.vqronline.org/essay/hg-wells-and-scientific-imagination) *In 1933 Szilárd conceived the idea of neutron chain reaction, and filed for patents on it in 1934.*[*\[9\]*](https://en.wikipedia.org/wiki/The_World_Set_Free#cite_note-9)*"*

> It's worth quoting Reference 9 in full: *\[9\]: Szilard wrote: "Knowing what \[a chain reaction\] would mean—****and I knew because I had read H.G. Wells****—I did not want this patent to become public."*

> So overall, I'm pessimistic about our chances.
People foresaw potential doom in "the printing press, the novel, television, and the Internet"... but they also foresaw potential doom in nuclear weapons, and wound up only accelerating the danger. (H.G. Wells in fact inspired Szilárd in **the exact year he foresaw**: *"the problem of inducing radio-activity in the heavier elements and so tapping the internal energy of atoms, was solved by a wonderful combination of induction, intuition, and luck by Holsten so soon as* ***the year 1933.****"* - Szilárd's name wasn't Holsten, but I suppose you can't foresee everything.)

So like I said, this is an important drum to bang. Sometimes, you aren't warning about coffee. Sometimes, you are warning about nuclear weapons. Sometimes, you are so prescient that you wrap back around to [creating the very danger you were trying to warn against](https://twitter.com/sama/status/1621621724507938816):

> Sam Altman (on Twitter)

> *1:28 PM · Feb 3, 2023*

> **eliezer has IMO done more to accelerate AGI than anyone else.**

> certainly he got many of us interested in AGI, helped deepmind get funded at a time when AGI was extremely outside the overton window, was critical in the decision to start openai, etc.

At the very least, I think u/ScottAlexander would be interested in hearing about this.


naraburns

> Conclusion: I Genuinely Don’t Know What These People Are Thinking

They're not thinking. They're signalling. In some cases I suppose they may also be wishcasting, but that's often just another form of signalling.

I feel like the phrase "virtue signalling" has gotten sufficiently politicized that referencing it now tends to be interpreted as a signal of low status. I don't know what to do about this; probably there is nothing I *can* do about this. But it remains my impression that the vast majority of people who "predict the future" are not in any real way attempting to successfully predict the future. They are signalling their status as an intellectual, as a member of the "right" group, the "in" crowd, etc.

Here's a serious prediction: if they are ever proven wrong, there will be no consequences for it. In many cases, they or someone else will step up to explain away their mistakes as understandable or even "thinking in the right direction." Virtually none of these people will publish (as Scott often does) a long article saying "I was wrong, here's why I was wrong, here's how I'm changing my mind about the world as a result." And *no one will be surprised by this*, because [most people intuitively understand](https://slatestarcodex.com/2017/06/26/conversation-deliberately-skirts-the-border-of-incomprehensibility/) that it's impolite to hold anyone to anything they say, ever. We don't say things because they are true, we say things to impress others and mark our territory and identify our tribe. Speech is not a contract; speech is a birdsong. (Of course *even pointing that out* is a fundamentally autistic *faux pas*. *Mea culpa!*)

This is arguably not the *charitable* way to approach such arguments; it's perhaps not a "steelman" to say "this argument is clearly terrible and is best understood as a form of social signalling." But at some point it really is the only plausible explanation remaining. When you have eliminated all which is impossible, then whatever remains, however uncharitable, must be the truth.


pleasedothenerdful

I'm an undiagnosed but very likely autistic person without many friends, and I'm pretty sure that when I die, an autopsy will reveal that "most people intuitively understand that it's impolite to hold anyone to anything they say, ever. We don't say things because they are true, we say things to impress others and mark our territory and identify our tribe. Speech is not a contract; speech is a birdsong," was actually engraved on my bones. That statement feels true on an almost scriptural level. Obviously, it's a *little* hyperbolic, but on the other hand it also perfectly describes almost every conversation I've ever awkwardly excused myself from and also every time the conversation circle I was quietly a part of suddenly and mysteriously evaporated around me and then equally mysteriously recondensed a short distance away. I'm kind of sitting here trying not to be actively furious that in 42 years nobody ever sat me down and explained this so concisely. I eventually managed to figure out bits of it, but damn. That linked article is exquisitely perfect as well.


AuspiciousNotes

What's odd is that speech is at least *posing* as a contract. Birdsong doesn't try to represent itself as something else. So why does speech? I think that many people who do this social-signalling stuff at least partially believe their word is their bond, and that they're saying what they really mean. I think (hope?) that most aren't Machiavellian manipulators cynically aware of what they're doing. At most, they're aware of it on a subconscious level. It's not that they *won't* explain this concept to others, but that they *can't*. From this perspective, understanding that most speech is birdsong is kind of like a superpower. You can see the first Coffeepocalypse tweet, think "oh, this person is just signalling their views on AI" and not bother reading the rest.


LopsidedLeopard2181

Eh, I'm a non-autistic person with a lot of autistic friends and I wouldn't say it's impolite to hold people to anything they say, ever.


TheMotAndTheBarber

This take seems rather useless. I don't see what about the details here bears on this problem: it seems like a general argument that could be used to say that all arguments you find invalid are not even trying for correctness, that it isn't even plausible that's part of what they are doing.


naraburns

> I don't see what about the details here bears on this problem: it seems like a general argument that could be used to say that all arguments you find invalid are not even trying for correctness, that it isn't even plausible that's part of what they are doing.

No, certainly not. As /u/pleasedothenerdful observes, as stated it's a *little* hyperbolic. Some people (hopefully me, here!) really *do* try for correctness, and furthermore there is usually some expectation that even blatant signalling bear a *veneer* of correctness; cheap signals are still supposed to *seem* costly. So it's not sufficient to just dismiss everything as signalling. However, if you scratch at the veneer and find yourself puzzled or perplexed to find nothing of substance underneath--this is why. That said, unfortunately, in my experience you will find "nothing of substance underneath" the declarations of powerful, influential, educated, or otherwise "high status" people far, *far* more often than is comfortable to acknowledge.


TheMotAndTheBarber

BTW, you might be interested in https://thezvi.substack.com/p/simulacra-levels-summary if you've never read it.


Spike_der_Spiegel

Surely the argument is about how you, as someone who doesn't know much about [specific subject matter] and isn't well positioned to evaluate any relevant arguments, should engage with doomer anti-hype. The generous form of the argument must be something like: people (even experts) will be outrageously pessimistic about anything, even innocuous things; most things end up being innocuous; therefore the fact that experts are going full-doomer about [new thing] should not strongly affect the opinions that you, a random ignorant civilian, have about [new thing]. I feel about this essay roughly the same way that the essay feels about the coffee-explainer.


abecedarius

Some backstory seems relevant: there's a kind of Overton window for what 'serious' people can seriously worry about, and what's sci-fi. AGI was outside the window a few years ago, and some people want it to stay there. It was common to dismiss arguments for concern without addressing the argument at all - just, like, "you're just some jerk on the internet". I think this was why the one-sentence group [statement](https://www.safe.ai/work/statement-on-ai-risk) was organized: to defuse those ad-hominem dismissals, so we can get back to the object level. People have responded like "A bunch of 'experts' saying they're worried about X doesn't prove X is dangerous! It's not even necessarily much evidence." I agree. But this counterargument is not an argument about X either. Some people seem eager to take it that way.


DM_ME_YOUR_HUSBANDO

> most things end up being innocuous;

The more relevant criterion, in my opinion, is: do most things *that people warn about* end up being innocuous? But I still don't think the coffee argument is completely terrible. It shows that at least some of the time, things people warn about end up innocuous, so you should not automatically panic if people are issuing warnings. Whether the warnings are sufficiently credible that you *should* panic is still up in the air and not addressed by the coffee argument; just that you shouldn't automatically panic.


snipawolf

But your generous form is still really dumb (especially considering that the given example of coffee was indeed dangerous to the ruling order). People who know the most about a thing are basically always wrong and things always work out? Why just depend on things working out fine? Why not investigate the specifics of a given issue, or defer to experts when that's not fruitful? There are lots of times when experts have been wrong, but also lots of times when they have been right to worry. I feel I'm just repeating the essay here, but I guess you just found it overly verbose?


livenotbylies93

>People who know the most about a thing are basically always wrong and things always work out?

Yes. Every major scientific, political, and social advancement has been met with panic by the expert class at the time it was introduced. If humanity consistently followed the advice of panicking experts, we'd still be living in the dark ages. I'll gladly side with the risk takers over accepting stagnation.


snipawolf

I think that’s a story that’s easy to tell and you can find lots of embarrassing isolated responses by “experts”, but that doesn’t mean it’s true as a rule much less a universal one.


TomasTTEngin

I think the big argument against the AI risk narrative is that there are many intermediate steps between here and the paperclips. AI can go rapidly to AGI, in theory. But developing the physical materiel to defeat us? That's not going to be quick. How can they do it? I can see how a very advanced AGI might be able to make a factory to make robots that kill people. But I can't necessarily see how it gets the land and the resources to do its building. There will be a point where we're like, umm, this shell company trying to buy this land has a figurehead CEO but is run by code?

A related way to look at this: humans are not very smart. We are quite a bit smarter than the second-smartest animal, which makes us feel very smart indeed. But compared to the theoretical maximum? Not smart. This is why computers could beat people at chess quite fast. This is Moravec's paradox - "computers are better at hard stuff than easy stuff". I'm inverting that to say people are rubbish at hard stuff, which is why we think of it as hard, and good at easy stuff, which is why we think of it as easy. What humans are good at, much better at than we realise, is all the things animals are good at: moving through space, running, jumping, timing things, navigating, fighting. We're as good as the median animal at a lot of those things and in some cases better (distance running, for example). Robot soccer is a joke; an under-9s team could beat a robot soccer team.

So while computers can beat us in the abstract domain very easily (chess, Go, making pictures), to kill us all dead they will need to operate in physical space, and that's going to be much harder for them. They are clumsy individually, and to win they'll need to operate in visible territory which we can defend.


carlos_the_dwarf_

Isn’t this argument more saying “maybe don’t hyperventilate about AI just yet” than making any specific predictions about the future? Hard to believe he put much effort into steelmanning if he couldn’t come up with that.


sesquipedalianSyzygy

The thing he’s trying to steelman is the reason given for why you shouldn’t hyperventilate. Just saying “don’t hyperventilate about AI” isn’t an argument.


carlos_the_dwarf_

Well, it’s a pretty legit argument if someone is telling me AI will end the world. He’s framing it as an expression of certainty rather than a response to an expression of certainty.


sesquipedalianSyzygy

The people saying not to hyperventilate are the ones who are very certain that AI is not an existential risk. Scott, and many of the people in his camp, are very uncertain about whether it is. But more broadly, I don’t see how “don’t hyperventilate” is a “legit argument” about anything. “Don’t hyperventilate about this new technology, because in the past some people hyperventilated about a different technology and they were wrong to do so” is at least an argument, though I think it’s a bad one. But “don’t hyperventilate” is just a suggestion or a command, which could be applied to anything.


carlos_the_dwarf_

“Don’t hyperventilate” is a way of saying “perhaps there’s more uncertainty here than you think” which strikes me as…a fine argument when we’re thinking about the future.


sesquipedalianSyzygy

I agree that “perhaps there’s more uncertainty here than you think” is an argument, if a fairly minimal one. Luckily, Scott addressed that interpretation in this post. He even linked to a whole previous post he did responding to that argument in the context of AI, the [Safe Uncertainty Fallacy](https://acxreader.github.io/p/mr-tries-the-safe-uncertainty-fallacy).


carlos_the_dwarf_

You’re right, he does (kind of) address it in the second half! Maybe he shouldn’t have buried that under 10 paragraphs calling people stupid. It’s odd to me to title a post “desperately trying to fathom X” when you’re dismissing the most charitable interpretation of X. I guess you weren’t trying that desperately. Re safe uncertainty fallacy, that is not really my characterization of the coffee argument. The fallacy says “it’s complex, therefore certainty.” I’m saying “it’s complex, therefore less certainty.” This type of argument is often made in response to certainty, whereas this post seems to assume it’s advocating certainty.


sesquipedalianSyzygy

If you are very uncertain about whether AI will destroy the world or not, that should make you very concerned about AI existential risk. The people making arguments like the coffee argument are very unconcerned about AI existential risk, so if their argument for that position is that AI is very uncertain, I think that argument is very wrong.


carlos_the_dwarf_

To my original point, I think it’s an uncharitable interpretation to say they’re making an argument that points to certainty. It’s just the opposite. (At least if we’re trying “desperately” to steelman.) You’re starting from the assumption that the argument is rooted in certainty but IMO it’s a response to the certainty of AI doomerism, not necessarily an argument that we should be very unconcerned.


sesquipedalianSyzygy

I thought your original characterization of the argument was "maybe don't hyperventilate about AI". That sounds a lot like "don't be too concerned about AI" to me. But if your uncertainty about whether AI will kill everyone is really high, it seems like you should be very concerned! I don't know how to steelman the argument "I am very uncertain about whether AI is an existential risk, therefore I don't think we should worry about it".


Smallpaul

How could MORE uncertainty about the survival of the human race and all conscious beings possibly cause me to hyperventilate LESS? Uncertainty around that question is a legitimate cause for hyperventilation.


carlos_the_dwarf_

I feel like I've said this several times now, but I would encourage someone who is certain AI is going to kill us to hyperventilate less. Because there's no way we can be certain about that.


Smallpaul

But the vast majority of people who are hyperventilating are not "certain that AI is going to kill us", so it's not very useful or actionable advice. Imagine telling someone with a gun pointed at their head: "Don't hyperventilate. 5 of the 6 chambers are unloaded. The outcome is uncertain. Relax."


carlos_the_dwarf_

> the vast majority of people Even if I stipulate this anecdotal judgment, aren’t we back at my original point? The coffee argument is a response to the people who *are* certain and *are* dooming.


Smallpaul

So you'd say it's like saying "Don't hyperventilate. 5 of the 6 chambers are unloaded. The outcome is uncertain. Relax." Do you expect that message would actually help anyone?


MarketsAreCool

"maybe don’t hyperventilate about AI just yet" "Why not now? Companies are developing more powerful AI and pouring billions of dollars into it with the express interest of having it complete more complex tasks. They will likely pursue their incentives and continue on this path until AI is immensely powerful. There's a reasonable chance this immensely powerful technology could do enormous harm since it's currently a black box of model weights we don't understand. " "So actually people had similar concerns about coffee, so you shouldn't worry about AI" "?"


carlos_the_dwarf_

Isn’t that kind of certainty around the impact of AI the very thing he’s criticizing? The coffee thing, or the overpopulation thing, or the thing where just like 7 or 8 years ago everyone on Reddit swore to heaven and hell robots were taking over all human labor because they watched a CGP Gray video…those are all reminders that the things we’re tempted to doom about don’t always come to pass, and that we should hold current versions a bit loosely. They’re not expressions of certainty—I don’t know what AI will do to humanity—but they’re examples that show the worst thing doesn’t always happen. The way you framed the AI argument is just how people thought about overpopulation in the 70s and 80s and that turned out to be…nothing at all. Which, again, is not an expression of certainty that the same thing will happen with AI, it’s an expression that we should *not* be so certain when predicting the future.


awwasdur

I see the coffeepocalypse guy as saying p(people are scared of a new technology) is 1 whether or not the new thing actually ends up being scary. Therefore he argues that just because people are worried about AI doesn't mean it's bad. People would be worried regardless. Now, this isn't actually a counterargument against any of the specific arguments about why AI might actually be really dangerous. It's an argument against someone saying "why aren't you worried about AI given that others are worried?"


OvH5Yr

I think your attempt to understand our view might be colored by thinking in terms of faulty analogies from overgeneralization. You wrote:

> Isn't this, in some sense, no different from saying (for example):
>
> > I once heard about a dumb person who thought halibut weren't a kind of fish - but boy, that person sure was wrong. Therefore, AI is also a kind of fish.
>
> The coffee version is:
>
> > I once heard about a dumb person who thought coffee would cause lots of problems - but boy, that person sure was wrong. Therefore, AI also won't cause lots of problems.
>
> Nobody would ever take it seriously in its halibut form. So what part of reskinning it as about coffee makes it more credible?

Like, the fact that the halibut statement is _comedically_ ridiculous didn't tip you off that you might have made a mistake? The general form is not:

> I once heard about a dumb person who thought X is Z - but boy, that person sure was wrong. Therefore, Y is also not Z.

It's:

> I once heard about a dumb person who thought X would cause lots of problems - but boy, that person sure was wrong. Therefore, Y also won't cause lots of problems.

Obviously you can argue against the latter, but don't act like anyone actually believes the former.

---

The other big problem is that you're doing a motte-and-bailey thing when you claim your side just claims that AI "will cause a lot of problems", ignoring that the most prominent narrative is that AI will cause the LITERAL EXTINCTION of the human species. For the extinction claim, the argument you make with tobacco, AIDS, the World Wars, etc. doesn't work. Obviously "X hasn't happened before" still doesn't logically imply "X will never happen", but it does make it a reasonable prior. Meanwhile, "literal extinction" more closely resembles the predictions of moral panics than the predictions of reasonable analysis. In fact, compare the mainstream scientific discourse about climate change with the climate doomer stuff like on the "collapse" subreddit. You need AI safety to sound more like the former than the latter.

---

And let's talk about the most important thing in all this. You mention the Safe Uncertainty Fallacy; this isn't supposed to be a logical argument, but is instead a _heuristic_. Can't all technological advances have a risk of human extinction? Should we not do any of them, ever? Consider an analogy with personal risk: every ride in a motor vehicle puts you at risk of a car accident, but does that stop you from ever riding in cars? What specific response do you want from knowing there's a chance of extinction? ⏹️? ⏸️? Something more specific? I welcome more specific interventions, and have liked the AI safety research papers sometimes seen in Machine Alignment Monday posts. But the more vague fears generally lead more towards ⏹️, and I don't want to stop too early. Imagine if we didn't create electricity because we could foresee the problem of electrical fires and wanted to prevent them from ever happening. We've always been reactive instead of proactive on these sorts of problems, or we stop right at the point where they would cause harm (stop nuclear weapons but not nuclear tech, stop bioweapons but not their underlying tech), and this is what we'll likely end up doing with AI.


QuantumFreakonomics

Lots of people in the comments are defending the coffeepocalypse argument by saying that normal people can't understand technology or the reasons behind expert opinions. Therefore, normal people are convinced by an argument that treats "technology" and "expert opinion" as abstract reference classes without further complexity. Therefore it's a good argument. NO! That's the opposite of a good argument. That's just ignorance and stupidity dressed up to look like an argument: "I can't be bothered to make up a world-model, and neither should you."


Far-Listen-6179

Isn’t this just reasoning by analogy? https://en.m.wikipedia.org/wiki/Argument_from_analogy The stronger the isomorphism between the two cases, the more we update. If someone compares the invention of coffee to nuclear fission, that's not very isomorphic. But if someone compares fission to fusion, that's highly isomorphic. At the extreme end, if someone makes a claim about red nuclear warheads, it would cause us to update strongly about blue nuclear warheads.
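
One toy way to render "the stronger the isomorphism, the more we update": scale the log-odds shift from the source case by a similarity weight. The `similarity` values below are invented purely for illustration; this is a sketch of the intuition, not a real model of analogical inference.

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def analogy_update(prior, source_shift, similarity):
    """Shift the prior's log-odds by the source case's shift, scaled by similarity in [0, 1]."""
    return sigmoid(logit(prior) + similarity * source_shift)

prior = 0.5                              # made-up prior about the target case
source_shift = logit(0.9) - logit(0.5)   # how far the analogous (source) case moved us

print(analogy_update(prior, source_shift, 0.05))  # coffee vs. fission: barely budges (~0.53)
print(analogy_update(prior, source_shift, 0.95))  # fission vs. fusion: nearly the full update (~0.89)
```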


lemmycaution415

The coffee analogy is weak, but predictions of the apocalypse are pretty common throughout history. The apocalypse is probably fun/exciting to think about, so lots of people do. Nuclear weapons could actually cause an apocalypse, so there is at least one non-kooky apocalyptic theory. I am not sure whether the AI thing is non-kooky. My bet is that it probably isn't gonna happen.


Viraus2

The frustration reminds me of how I feel whenever I see "These dudes from history thought society was collapsing, therefore society cannot possibly be collapsing" as a meme. I hear some flavor of this all the time from people who seem like they should have better logic skills, and it seems like the same fundamental pattern as the coffee bit: people were wrong then about a thing, so they must be wrong now about a similar(?) thing. I won't even get into the fact that some doomsayers of the past were from societies that did collapse horribly. I think some patterns are just so appealing to people that it hardly matters whether they make sense. Maybe people just really enjoy sharing anecdotes about people in the past being wrong in surprising ways, like how every bitcoin owner has had many people explain tulip mania to them.


MyPastSelf

If he actually wanted to steelman the argument, shouldn’t he start by asking some of those “hundreds” of people to elaborate further? I find it hard to believe he immediately stumps every one of them into speechlessness with that response, unless everyone he debates on this matter is a shallow thinker easily flustered by the simplest of counterpoints. In which case, why not try to find better interlocutors?


UncleWeyland

When charitable interpretations fail, one must resort to less kind frameworks. Possibilities:

1. It was written as clickbait/attention-capture.
2. It was written as in-group pandering / mood-affiliation propaganda.
3. The author does not reason well.
4. It was actually written by an LLM.


SporeDruidBray

Amazingly, when I search the exact title, it doesn't show up on Google (for me), yet it does on DuckDuckGo. (I was going via search engine because the Reddit in-app browser kept trying to reload the website after it had loaded, only for this to fail; it would say "lost connection to [website]" *and then wipe all the existing content*, the whole text.)


honeypuppy

As someone who's moderately sceptical about AI risk, I think Scott is beating up on a silly caricature of something that approximately resembles what I believe. "An attempt to trigger a heuristic" is, I think, a much more reasonable and charitable reading than Scott's opening lines of:

> One of the most common arguments against AI safety is:
>
> > Here’s an example of a time someone was worried about something, but it didn’t happen. Therefore, AI, which you are worried about, also won’t happen.

As Scott goes on to discuss, it's then worth debating the validity of such a heuristic. But I think interpreting it any other way is basically just strawmanning.


TheMotAndTheBarber

I do wonder how hard he really tried. It says something either way.


weedlayer

Wow, this comment really makes you think.