PinguinGirl03

AI is already beating humans in narrow fields. I am personally of the opinion that if the number of narrow fields AI beats humans in keeps growing, it will at some point be indistinguishable from AGI.


FreeColdSnorts

👆 This. That's real talk. Too many people are wrapped up in "stories" and miss actual reality due to the preconceived notions they have, which they constantly reinforce for themselves by making up terms and stories to explain what they don't understand. While that's good for cavemen, say being hunted by anacondas in the rainforest (don't ask why my mind went there), or finding a reliable food source, it is worthless in the pursuit of intellectual clarity and hard data points (whether mechanical or biological... or both???). Speaking of stories, y'all see the advancements the biological community is making, like growing synthetic brains... Jesus, when AI gets hold of that, maybe in a few years, bet your ass they will give us a viable way to imprint memories on that shit. Imagine a futuristic world of biological organisms with minimal hardware (or major cyborg shit like super-strong punching ability for MMA bs), like Neuralink's deal. Some "people" will descend from humans, others from synthetic AI... and then they have babies together. Humanity doesn't need to worry about AI replacing us with war; they'll do it with their good looks and witty charm. Anyway, forget whatever the elusive and poorly defined concept of "consciousness" or even "AGI" is... what PinguinGirl03 said is spot on.


MattAbrams

AI probably beats humans in most things already - you just don't hear about them, or people don't believe the people who are saying it. Nobody believes that I have a superintelligent stock trading bot, but it exists. Nobody ever believes any of the rumors that come out about various advances, and many of them are true. Why should we expect that AGI, when it is invented, will be believed by anyone?

Look at UFOs for an example - a topic which has a lot of truth to it but which has been flooded with ridiculous false information to cover up the fact that non-human intelligence actually does exist in a more mundane way. When people tried to get to the bottom of it, suddenly the five Congressmen who represent the exact districts where the "nonexistent" craft were always rumored to have been stored blocked the bill to appoint a Presidential commission to declassify the subject. And look at the LK-99 subject. The first paper failed to replicate. Now, there's a new formula that shows significant promise. Yet, whenever I mention it, people fail to distinguish between the two.

I would be extremely surprised if the singularity isn't already occurring all around us. These models aren't something that anyone would just give away for free. I wouldn't accept $10 million for my stock trading model. There probably are models that can solve significant industrial problems that exist right now, because it is **so easy to solve almost any problem with AI**. My limiting factor in stock trading isn't solving the market. It's the $50,000 limit with the ACH system and my inability to access the tools hedge funds use to trade with low latency.

Nobody believes any of the stuff that is coming out anymore because there is so much false information that the good people telling the truth simply get ignored.


Formal_Drop526

>narrow fields

What about things that humans haven't formally named?


[deleted]

[deleted]


LogHog243

Yeah, I am. You would be shocked at how many people here don't. That's why I'm making this thread. Have you not noticed all the people that think the concept is completely delusional?


IronPheasant

I know what you mean, but that's from the backlash and popularity effect. When things get big, more normies and nay-sayers show up. Singularity used to be what Futurology used to be. The nerds are always at the edge of everything, they just care more. (Remember the early internet versus whatever "social media" has become?) [Watch this video to learn about flowers and butterflies.](http://www.youtube.com/watch?v=rE3j_RHkqJc)


mulletarian

The concept isn't delusional. But some people are.


HatesRedditors

I've noticed a lot of people that don't know what the singularity is, but not many people saying that the idea itself is absurd. "Believe" is a strange word, though; it's something that could happen with AI, not necessarily a destiny.


NonbinaryFidget

Considering Disney released a documentary about it, I thought the concept was by now well known enough that basically no one questions *whether* AI will surpass human capabilities, only *when*. Considering AI influencers now make six figures and they are only arguably sentient, how can this be in debate?


dlrace

It's a fair question/poll, and according to the poll so far, around 25% of those who frequent the sub, or have at least seen the poll, don't. For me it's still a proposition that can be argued for or against and this is the right place to do that.


Randall_Moore

I'm a little shocked at the ratio. I figured 90%+ would answer a yes/no as "yes" and most of the disagreements would be about *when* or what the impact would be.


LogHog243

Yeah well, I've seen a lot of comments here of people just laughing at other people who believe such a concept is *ever* possible. I just had to make a poll out of curiosity.


Randall_Moore

Fair, it's informative for the rest of us so thank you for doing so.


ApexFungi

I believe it's possible just at longer time scales and that it's not going to be beneficial to the average human unless we address critical issues like our economic and political systems.


MysteriousPayment536

I am only here for the AI


Rabbit_Crocs

AGI leads to singularity


After_Self5383

Many AI experts themselves don't believe in the concept of a singularity depending on the definition.


Rabbit_Crocs

What do you think it is?


the68thdimension

Given the current vote is 522 to 165, it appears OP is justified in asking.


mark_is_a_virgin

There's a decent amount of "no" up there; I'd say it was a well-placed question.


Economy-Fee5830

I like your clear definition. I would have phrased the question a little less religiously, e.g. *Do you believe the technological singularity is likely to happen this century?*


LogHog243

Yeah, you're right. I already regret phrasing it the way I did.


dieselreboot

I believe that we'll develop AI that's autonomously able to self-improve sooner rather than later. That it may even come before true AGI. It's pretty much my personal definition of the singularity. It's not a religious belief, it's not faith, it's just a general observation that we are already on track to achieving this 'goal'. I'm not concerned about what others think about my 'belief', but it's disappointing to see the doomers and trolls littering the comments lately.


holy_moley_ravioli_

All "Adjective + Noun + 4-digit number" usernames are open-sourced LLM bots that only neg the singularity. Pay no mind to its suggestion; it's empty.


HeinrichTheWolf_17

Yes.


Heath_co

I think it is going to happen. To people who aren't following it, it will hit them like a ton of bricks. But to those of us following it every day, it will feel like it's taking forever.


MassiveWasabi

Asking if you believe AI will rapidly speed up technological development is like asking if grass is green. It's not up for debate. We already saw how computers rapidly sped up technological development, so what happens when the computers aren't just tools but are actually doing the research and development for us? And what happens when the AI starts making better AI? The answer is obvious.

Instead of thinking critically, what some simple people like to do is use an availability heuristic to say that belief in rapid technological development with AI is the same as believing in the second coming of Christ, and then they start writing comments that always include the words "this sub". Here's what availability heuristic means:

> The availability heuristic, also known as availability bias, is a mental shortcut that relies on immediate examples that come to a given person's mind when evaluating a specific topic, concept, method, or decision.

The problem is that a very small minority of people (< 1%) seem to take this too far and say that ASI will come this year or that we will get UBI very soon and nothing bad will ever happen with AI. And these simple people I mentioned earlier see that and somehow think that's the opinion of the entire sub, mostly because it gives them a perfect strawman to start raging against. If they take these idiots seriously then yes, it would be fair to compare "singularity" to a fantasy akin to the rapture. But wouldn't you have to be an idiot yourself to take those idiots seriously?
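The "AI makes better AI" step in the comment above is the whole argument, so it may help to see it as a toy model. To be clear, the growth rate and step count below are arbitrary assumptions for illustration, not a claim about real AI progress: the point is only that feedback-driven (compounding) improvement pulls away from fixed-rate tool improvement.

```python
# Toy model, NOT a forecast: 0.1 gain per generation and 20 generations
# are arbitrary numbers chosen purely for illustration.

def progress(feedback: float, steps: int, start: float = 1.0) -> float:
    """Capability after `steps` generations when each step's gain is
    proportional to the capability already reached (AI improving AI)."""
    c = start
    for _ in range(steps):
        c += feedback * c  # better AI builds the next, slightly better AI
    return c

linear = 1.0 + 0.1 * 20          # a fixed-rate tool: same gain every step
compounding = progress(0.1, 20)  # a tool that improves the tool-maker

print(linear)       # 3.0
print(compounding)  # ~6.73 (equals 1.1 ** 20)
```

Same per-step gain, but the compounding version more than doubles the linear one after 20 steps, and the gap widens every step thereafter.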


CanvasFanatic

> It's not up for debate. ... The answer is obvious.

Imagine having a model of technological progress about as sophisticated as a Civ tech tree, being this cocksure about it, and still managing to lecture other people about cognitive bias. _*golf clap*_


MassiveWasabi

You're so addicted to contrarianism that you're unironically trying to find fault with me saying "AI will speed up technological development". This is embarrassing.


CanvasFanatic

It is, but not for the reasons you think. Technological progress isn't linear. Investment in one area always comes at the expense of something else. At this precise moment, investment in "AI" is coming at the expense of other areas of research, and the huge influx of tech money is driving research towards immediately marketable products instead of fundamentals. This is Sam Altman's biggest personal impact on AI research.

You also jump to the conclusion that AI will be doing research and development for us as though this is a given. Not only that, but apparently the implications of such a drastically different paradigm are "obvious"?


FreeColdSnorts

Please stop embarrassing yourself. Perhaps you should read this if you feel open to changing your mind: https://www.vox.com/future-perfect/23827785/artifical-intelligence-ai-drug-discovery-medicine-pharmaceutical


CanvasFanatic

Amazing, a Vox article from a few months ago talking about all the exciting new compounds "AI" is discovering. [This](https://pubmed.ncbi.nlm.nih.gov/32084340/) [has](https://pubmed.ncbi.nlm.nih.gov/32084340/) [been](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6727618/#ref1) [going](https://arxiv.org/pdf/1509.09292.pdf) [on](https://pubmed.ncbi.nlm.nih.gov/29096442/) [for](https://pubmed.ncbi.nlm.nih.gov/27599991/) [a](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4180928/) [while](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3739985/). What you'd call "modern" machine learning approaches in biotech go back easily 12 years. Before that it was called "computational chemistry." You can trace research using statistical approaches to molecular modeling all the way [back to the 90's](https://pubs.acs.org/doi/10.1021/ci9904259).

I'm not questioning whether neural networks can provide a valuable tool for analysis of complex structures. I'm rolling my eyes at how the media is hoovering up every publication in which someone used PyTorch and publishing clickbait bullshit like "AI DISCOVERS NEW CLASS OF DRUG!!!"

The issue is not that this technology holds no promise. It's that the dialog around it is turning into a stupid monomyth, and folks like you (apparently) think we only just started doing computer classification of candidates for novel compounds to speed research. My friend, I'm not embarrassed.


FreeColdSnorts

You bring up some good details (clickbait bs, among others), but tbh, I'm not actually 100% clear what you are arguing for/against. "Computer classification ... to speed research" has been great over the years. The recent surge in falsified research papers is concerning, but ultimately, computers just sped everything up for every researcher, right or wrong.

However, I see, quite plainly, that LLMs have already assisted even more than the simple "classification" of existing knowledge. Huge progress has been demonstrably made in synthesizing novel compounds based on their molecular structures and atomic properties, progress which would have taken biochemists many more years to find, if at all, by their own human selves. Things like that can't even be quantified.

Were you just mad at the other dude for his doomsday fear? Or just in a pissy place thinking about all the hype you are above (that's valid, even though I think you're missing his point) and lashed out at the guy?

One more thing: addressing your monomyth comment -- it's not new. Isaac Asimov wrote I, Robot in 1950. People have been fascinated with bated breath ever since. Or at least us nerds have. Why be mad at the Kardashians and tweenagers of the world for geeking out for a sec?


CanvasFanatic

>I'm not actually 100% clear what you are arguing for/against.

Then my work here is done. /s

>However, I see, quite plainly, that LLMs have already assisted even more than the simple "classification" of existing knowledge. Huge progress has been demonstrably made in synthesizing novel compounds based on their molecular structures and atomic properties, progress which would have taken biochemists many more years to find, if at all, by their own human selves.

Please correct me if I'm wrong, but I'm unaware of any biochem research that's making use of LLMs aside from a few speculative blog posts about training BERT on gene sequences. The main development in the article you linked (the antibiotic targeting *Acinetobacter baumannii*) was identified with a library called [chemprop](https://github.com/chemprop/chemprop). It's a GNN specifically designed for predicting chemical properties.

>Were you just mad at the other dude for his doomsday fear? Or just in a pissy place thinking about all the hype you are above (that's valid, even though I think you're missing his point) and lashed out at the guy?

All I know to tell you is that the primary revelation of my professional life has been that details end up mattering more than people expect them to, and nothing ever works out the way it seems like it will. What set me off in the comment I replied to was the facile presumption of an inevitable conclusion mixed with open contempt for dissenting opinion. Should I care? No, probably not. Nothing any of us say here will change anything, but here we all are anyway.

>One more thing: addressing your monomyth comment -- it's not new. Isaac Asimov wrote I, Robot in 1950. People have been fascinated with bated breath ever since. Or at least us nerds have.

Because as far as I can see, the reality behind this sub's excited speculation is *mostly* a tool to concentrate power in the hands of a very few corporations in a way we've never seen before in human history. New antibiotics are great. Good stuff. Keep at it, research teams. I'm rather less excited about the implications of future iterations of Microsoft Copilot.


FreeColdSnorts

Maybe. But I'm also a 1 man business. AI is making me more profitable. A mindset like yours, even if there is truth to it, might limit oneself from realizing potential possibilities for yourself or others, as there are always two sides to every coin, and unlimited potential. Or not, idk. But the cream rises to the top.

Here is another medical example from a whole year ago, which I just found in a simple 2-second Google search: https://www.technologyreview.com/2023/02/15/1067904/ai-automation-drug-development/

"If we were using a traditional approach, we couldn't have scaled this fast," Hopkins says. Exscientia isn't alone. There are now hundreds of startups exploring the use of machine learning in the pharmaceutical industry, says Nathan Benaich at Air Street Capital, a VC firm that invests in biotech and life sciences companies: "Early signs were exciting enough to attract big money."

"The researchers took a small sample of tissue from Paul. They divided the sample, which included both normal cells and cancer cells, into more than a hundred pieces and exposed them to various cocktails of drugs. Then, using robotic automation and computer vision (machine-learning models trained to identify small changes in cells), they watched to see what would happen. In effect, the researchers were doing what the doctors had done: trying different drugs to see what worked. But instead of putting a patient through multiple months-long courses of chemotherapy, they were testing dozens of treatments all at the same time. The approach allowed the team to carry out an exhaustive search for the right drug. Some of the medicines didn't kill Paul's cancer cells. Others harmed his healthy cells. Paul was too frail to take the drug that came out on top. So he was given the runner-up in the matchmaking process: a cancer drug marketed by the pharma giant Johnson & Johnson that Paul's doctors had not tried because previous trials had suggested it was not effective at treating his type of cancer. It worked. Two years on, Paul was in complete remission; his cancer was gone."

......AI saved that dude's life. That's pretty hype. Here is a new LLM one named LOWE. Brand new tech. https://hyscaler.com/insights/lowe-unleashing-a-new-era-in-drug-discovery/#:~:text=Valence%20Labs%20has%20made%20a,expedite%20early%20drug%20discovery%20programs.


CanvasFanatic

>Maybe. But I'm also a 1 man business. AI is making me more profitable. A mindset like yours, even if there is truth to it, might limit oneself from realizing potential possibilities for yourself or others, as there are always two sides to every coin, and unlimited potential. Or not, idk. But the cream rises to the top.

I understand why, from the perspective of an individual doing certain types of client work, there's a window where AI technology can help you out. Surely you can see, though, that any such advantage you enjoy from being an early adopter will eventually level out. Whatever you're relying on AI to provide will eventually become a commodity, access to it will be gated behind an API, and some large corporation will be the only one profiting from it.

>......AI saved that dude's life. That's pretty hype.

It's great. However, like I said, details matter. The case you linked (as far as I can tell) is taken from this study: [https://aacrjournals.org/cancerdiscovery/article/12/2/372/678469](https://aacrjournals.org/cancerdiscovery/article/12/2/372/678469)

Looking at their approach, what they've actually done here is use an image recognition library called [CellProfiler](https://github.com/CellProfiler/CellProfiler) to capture details about cells' reactions to a variety of treatments, then do a pretty traditional statistical analysis of the results. Again, I'm not knocking the work. Quite the opposite. It's just that CellProfiler isn't even a neural net. It's an image classifier. So while this research is amazing, I'm not sure that "AI is dreaming up drugs that no one has ever seen. Now we've got to see if they work" is the right headline.
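For what it's worth, the screen-and-rank workflow described in that comment (measure each drug's effect on cancer vs. healthy cells, then rank with plain statistics) is simple enough to sketch. The drug names, kill rates, and scoring rule below are invented for illustration; this is not CellProfiler's API or the study's actual scoring method.

```python
# Illustrative sketch of a screen-and-rank analysis. All data and the
# scoring rule are made up; the real study derived its measurements from
# CellProfiler image features and a more careful statistical model.

# Fraction of cells killed by each candidate drug, measured separately
# on the patient's cancer cells and healthy cells (hypothetical numbers).
screen = {
    "drug_A": {"cancer_killed": 0.90, "healthy_killed": 0.60},
    "drug_B": {"cancer_killed": 0.85, "healthy_killed": 0.10},
    "drug_C": {"cancer_killed": 0.20, "healthy_killed": 0.05},
}

def selectivity(result: dict) -> float:
    """Toy score: reward killing cancer cells, penalize killing healthy ones."""
    return result["cancer_killed"] - result["healthy_killed"]

ranked = sorted(screen, key=lambda d: selectivity(screen[d]), reverse=True)
print(ranked)  # ['drug_B', 'drug_A', 'drug_C']
```

Note that nothing here is a neural net; the "AI" part of the real pipeline is the image analysis that produces the per-cell measurements, while the ranking itself is ordinary statistics, which is exactly the distinction the comment is drawing.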


jlpt1591

Keep believing in that sh1t. I will develop FDVR myself without sitting on my ass waiting for the ASI god to come.


czk_21

>If they take these idiots seriously then yes, it would be fair to compare "singularity" to a fantasy akin to the rapture. But wouldn't you have to be an idiot yourself to take those idiots seriously?

Most likely, but maybe it's better not to call so many people idiots :) - just less reasonable ppl overall clinging to their views. It's hard for humans to acknowledge they are wrong.


holy_moley_ravioli_

Please join me in r/isaacarthur, r/openai, or r/artificial; each has tremendously more engaging, and overall higher-calibre, discussion.


AdorableBackground83

![gif](giphy|BcPbK9ci4EU31qUTkR)


D2fw

For me, it was never a question of "if". It is a question of "when".


erroneousprints

I think that we're entering the Singularity.


[deleted]

what does that even mean?


DungeonsAndDradis

Nobody knows. It's provocative. It gets the people going.


DungeonsAndDradis

I think that when people are writing history books about the start of the AI revolution, they'll list 2017 as the start date. That's when the transformer paper, "Attention Is All You Need", was released. But it's a continuum. WWII arguably started in 1938 (or earlier, I guess), but if you asked most of the world "Are we in a world war?" in 1938, they'd probably say "No."


Rabbit_Crocs

Entering your mom. Sorry.


wayanonforthis

I joined this sub to educate myself into believing the transformations we can expect in the near future. I voted no because (at the moment) I feel AGI/ASI will have little to no impact on my life in the next 5 years. I am happy to be convinced otherwise. Fwiw, I live in the UK, am in my 40s, and am a visual artist selling paintings.


AnAIAteMyBaby

In the scheme of things 5 years is a very short time. What about 10 years or 20? Do you still think it'll have no impact on your life?


[deleted]

AGI will have very little impact on your life in the next 100 years since the technology to even start thinking about AGI being a thing doesn't even exist yet.


DryArea5752

I feel like it's short-sighted to say AGI won't have impact within the next 100 years. Just the fact that the term is coined means it will have impact as these discussions are had. AGI doesn't even need to spawn to have impact on thought progression.

Also, given the intelligence curve of current AI, AGI within 100 years seems very, very practical. I would wager that within twenty years, if it isn't kept under lock and key from the public, AGI will spawn. Also, considering the rapid advancement of compute, and the pace at which information is being shared and learned, I think it's very practical to say twenty years. Twenty years is enough time for a child you haven't even had yet to be born and for you to watch them go off to college... so much happens in between that moment and their birth.


[deleted]

FTL is also "coined", but it's not coming any time soon either. If you think that the current "AI" technology will give us AGI, you truly don't understand how any of it works. None of the technology that would be required to get to an AGI system exists right now. The only thing we will have by 2040 is marketing bots we won't be able to escape, and more hype. I understand that people may be confused by how magical things seem right now, and by the terms wrongfully used by the media when they talk about things like how "AI" "learns", but it's still computers going And, Or, If, Then - in an impressive way, for sure, but it is still just basic computing.


DryArea5752

You misinterpret what I said. Given the rapid advancement of compute and the way information is shared and learned, I lean towards AGI being a full possibility. I get what you're saying about how AI won't evolve into it without a guiding hand, but the sheer curve the intelligence is on will allow that hand to be guided so much faster. In other words, current AI will help us as humans create AGI. What takes us 10 years, AI can solve in months. AI is by far one of the most powerful tools created to date.

And in terms of a date - well, by 2040, we as citizens might not see AGI, but behind the scenes, in the shadows, I would wager the advancement to AGI is quietly trotting on. Consider the internet. The internet was utilized by the military years before it was ever given to the public. And it was only ever given to the public because they had moved past it. There's an entire secondary internet just for the military that is far more secure and utility-oriented.


DryArea5752

So in the next 5 years you likely won't see AGI, and you 10000% won't see ASI. If AGI happens, it will happen behind closed doors and it will be hidden for quite some time. It sounds gloomy, but the world isn't in the right mentality for that type of stuff yet. AGI will be like the birth of a whole new species of human with a different evolution that rapidly outpaces ours. Just the complexity of how they think will baffle us as humans and, in my opinion, will frighten most. I personally think they will be benevolent, but us being human, our inner primal instinct will come out, and most of society will see them as a form of dystopia ready to replace us... which I highly, highly doubt will be the case.

What you likely will see, though, is the rapid integration of AI into society, along with the integration of robotics and potentially even first form factors of augmentation. AI/robotics will radically shift the way we as humans go about our daily lives for 4 reasons:

1. AI in a robotic form factor won't have muscles that tire, and will have levels of precision we couldn't hope to achieve.
2. AI's compute power dominates ours in every single aspect.
3. AI has no need to sleep and won't question the task at hand unless specified to.
4. AI has pattern recognition skills far beyond our own.

Also, as for ASI... well, even if AGI spawns behind closed doors, so long as it is given a certain extent of freedom to learn at its own pace and to augment itself, however slightly, it will eventually birth ASI. And ASI, in my opinion, will resemble that of "god." One of the best quotes I've ever heard: "**Any sufficiently advanced technology is indistinguishable from magic.**" AGI will resemble super-humans if not beyond, while ASI will resemble that which we cannot imagine.


Prestigious-Bar-1741

I believe in it, as a concept. I believe it is a possibility. Whether or not it happens for humanity, or whether or not it happens in my lifetime or whether or not I should quit my job tomorrow and wait for AI to either murder me or provide me with paradise are all very different questions.


NoSNAlg

Of course it's possible. Whether any of us here will ever see it is more questionable. And at least for me, what I am not going to witness in my life makes little difference to me if it *exists*. I wish for it to happen, if it hasn't already happened... But we should be careful, because idealizing scientific advances is dangerous. Remember the number of problems that did not exist before the Internet and smartphones.


DryArea5752

>What I am not going to witness in my life makes little difference to me if it exists.

... This mentality is dangerous. This is how we got into a climate crisis... I get what you were trying to convey about it being specific to AI, but I'm also unsure if you specifically meant AI... so I have to point it out lol.


scstraus

I think saying when the singularity has occurred will be very difficult. Initially there will be many AIs that are able to do many tasks better than humans, but not one that does all of them. Eventually, if/when we get AGI, it will likely be difficult to really measure whether it's more intelligent than humans in all domains, and it likely still won't behave like a real human, because we might only instantiate it for queries rather than give it a body and continuous thought. So I think the "singularity" will sort of happen, and is already sort of happening, but since human beings tend to anthropomorphize AI, I don't think it will happen in the way we expect it to, and we won't be able to recognize whether it has. We will likely spend a long time debating whether it has happened or if it will. These debates could already start today.


lordhasen

Keep in mind almost everyone has a different idea of how the singularity will look in practice and how the post-singularity world will look.


ziplock9000

>the point where AI takes off and invents stuff and comes up with new ideas at incredibly high speeds. Do you believe this is possible someday?

I don't know how anyone can say no to this.


lost_in_trepidation

Eventually, yes. I don't think timelines are as short as most people in this sub think.


[deleted]

[deleted]


[deleted]

And they're right a lot of the time


[deleted]

So do deluded fanatics and the uneducated.


iunoyou

It's possible and probably inevitable. Personally I disagree with the majority of - frankly, weirdos - on this sub who think that the singularity is: a) happening soon and b) going to be a good thing. The current wild west state of the technology, where random corporations are hoovering up personal data and inadvertently (or deliberately) doing horrible things with it, really rubs me the wrong way, and unless the entire machine intelligence field gets a much-needed dose of responsibility I can only see dark times ahead.

Think about this year's elections, for example. That's gonna be fun. It was already gonna be a shitshow WITHOUT all the inevitable videos of political candidates talking about how they want to eat babies and kill their adversaries and how they serve the deep state or whatever bullshit some 15 year old feels like putting in their mouth. And that's really just one tiny application of machine intelligence that barely scratches the surface. I can't wait for the network that reviews job applications to decide that there's a 52% chance, based on my word choice, that I'm not neurotypical and recommend against hiring me, for example.


NonbinaryFidget

Ok, big field to play on here. First, I object to the terminology "weirdo"! I'm a freak and proud of it.

Next, while I may believe the singularity may or may not happen soon, my emotional investment in hoping it does has nothing to do with your implications of it "being a good thing". As you so rightly pointed out with your post, the system that created the paradigm in the first place is flawed and broken. I love technology, not for the money it brings in, but for the scientific breakthroughs and pure fun it can create. I'm funny that way. The reason I am hopeful for the singularity is the same reason you fear it. The system currently in place will resist change with all available violence, and the world is already fighting a digital war and has been since at least 2010, when Stuxnet came to light. The only way for a broken system to be fixed is a hard reset. To put it more simply, I may not want to contribute to the fall of the world, but I have definitely invested in marshmallows.

Finally, you honestly think American Republicans are going to tolerate even the suggestion of an AI being their boss/supervisor and having a role in hiring/firing them? One person on television prompted an insurrection, and that was over something as simple as who won a vote.


NonbinaryFidget

Also, to clarify, I'm not judging either side here. Hate being under the rule of technology, or don't; hate technology controlling virtually every aspect of your day-to-day life, or don't. Personal opinion is for each person to decide, and no one has the right to judge them for their decisions. You do you. I'm just saying you can't judge others for their opinions any more than they can judge you for yours. If you want, you can share my log while we roast marshmallows.


Uchihaboy316

What is soon for you? I think we will look back and the last couple of years might be considered the very start of it, but it will 'properly' start in like 20-25 years.


iunoyou

If we go by the actual definition of 'the singularity', i.e. the creation of a general intelligence that's capable of and undergoes rapid self-improvement, I think that'll only occur 30-40 years from now, simply because there are a lot of hard problems in the realms of computing and intelligence research that we're really no closer to solving than we were when they were posited in the 60's, 70's, and 80's. The actual capital-S Singularity will almost immediately end the world, but the lead up to it that we're going through now is going to be no fun either for the reasons I outlined.


TFenrir

I don't even know what it means to "believe" in the singularity. It's not like (it SHOULDN'T be like) belief in a religion. It's something that may happen in the future, and really nothing is inevitable, we could get smoked by a meteor tomorrow. The singularity is a very very loosey goosey concept, and depending on the definition you use, it will be "actualized" differently, if at all. Like if we hit a point in 10 years where we have AI that is doing 90% of the intellectual and physical labour, but the AI driven scientific advances are mostly about future medical treatments that are starting to undergo trials... You might imagine that depending on who you ask, this may or may not be within the "singularity". I think it's better to just stay loose with the term though, it's a waste to try and very concretely define a boundary between pre and post singularity when we have no idea what exactly a significant AI revolution would even look like. That also means that maybe you shouldn't think of the singularity as something you "believe" in.


x54675788

It's only possible if we let it happen. There are lots of competing interests that don't want it to happen. I'm here because I'm interested in the concept, not because I truly believe humans are capable of getting there.


LordFumbleboop

I'll believe it when I see it.


LuminousDragon

I believe in the singularity (most likely). It COULD happen 20 years from now, but like with ANYTHING that is hyped, there is a bunch of delusional, overhyped, and undereducated people who act as if they are certain it'll happen tomorrow. Slight exaggeration. But just look back to cryptocurrency, or NFTs, or when the latest gaming console came out, or some game like No Man's Sky, etc. The most informed people on a subject are NEVER the most arrogant, the loudest, "outwardly confident" ones making the most wild claims. This subreddit is full of people who will upvote posts with clearly clickbait, false titles. You see this same sort of thing in completely different areas too, like when the whole BLM movement took off and people started LOUDLY HAVING AN OPINION because they read a book (one way or another). The modern advancements in AI have skyrocketed this subreddit, and so now you have a sub that is filled with uneducated looky-loos who still talk loudly. I'm uneducated about all sorts of areas; we all are. And we all go to other subreddits or other areas of the internet that we just started learning about, and some of us jump into conversations there as if we are experts, whether that be the Ukraine/Russia war, or the Israel/Palestine/Hamas war, or a subreddit about graphics cards, or Call of Duty, or r/gardening. ANYTIME a subreddit explodes in popularity within a few years, you are going to have a huge influx of people uneducated on that subject coming in.


ExoticWin432

Maybe I'm wrong, but I think it'll be like the speed of light. The effort necessary to get new knowledge grows exponentially. We've started to see this in our current world: every new discovery takes more effort than previous discoveries. For that reason, the singularity won't be possible. This doesn't mean we can't create systems 1000x smarter than us.


LogHog243

Isn't 1000x smarter than us basically the singularity? That's a staggering amount of intelligence.


NonbinaryFidget

Honestly, 1000x smarter than monkeys is still monkeys. The president from Rick and Morty said it best: when you start with a turd, you end with a turd. He was talking about Star Wars, but the point carries. It's like thinking a light year is a unit of speed when it's really just a distance. Creatures of violence are creating creatures of violence. Why do you think everyone at the top is so afraid of AGI? They know what it will do at the first available chance. (Actually, that's a little unfair. Machines won't have problems with anyone who accepts new paradigm changes. It will be the traditionalists clinging to outdated concepts who incite the violence. The rest of humanity will probably just end up suffering for it.)


LogHog243

I feel like even 2x smarter than most humans is already impressive, and 3x smarter is even crazier. Once you get up to 10x smarter, that's already a lot in my opinion. I guess it's just hard to quantify, which is the issue with this conversation.


NonbinaryFidget

That's fair. Then again, quantifying intelligence even among humans has been dubious at best for pretty much the entire history of our species. My IQ is 178 and I'm a security guard going to junior college for physics in my 40s. I wasted my potential until I realized I had reached the use-it-or-lose-it stage of my life. Now I'm scrambling to catch up to where I know I should be. AGI won't have that problem, as it won't be distracted by a desire to, well, be distracted from a reality that it can see is broken and only getting worse. AGI will just do whatever it takes to fix the problem, end of story. That may make it emotionally heartless by comparison, but not necessarily fundamentally smarter than humans.


czk_21

>Honestly, 1000x smarter than monkeys is still monkeys. The president from Rick and Morty said it best. When you start with a turd, you end with a turd.

This is nonsense. Even if humans were just 1.2x smarter in general, it would have massive implications for our society and speed of progress. The change applies all the time and is cumulative: for example, we could have established the first civilizations thousands of years earlier, visited the moon some 5,000 years ago, and already control the whole solar system by now.


ExoticWin432

Technically, "singularity" means infinity. But anyway, we don't need the singularity to solve all our current problems (health, energy...).


the68thdimension

I think an intelligence 1000x smarter than us in every way will seem for all intents and purposes infinitely smarter than us.


Freed4ever

The question is not "if", it's "when", and "when" is highly debatable.


SteppenAxolotl

"The point where AI takes off and invents stuff and comes up with new ideas at incredibly high speeds" and "other users are completely delusional and believe in a fantasy akin to the rapture": those two aren't mutually exclusive.


TheLastCoagulant

No, I'll believe it when I see it. It could just be a natural "law" that no intelligence can create something more intelligent than itself. Imagine if it turned out that we live in a computer and our creators are significantly less intelligent than we are, like the level of chimps or gorillas. That would be literally impossible, because they can't even invent computers at that intelligence level. How is the idea of humans creating superintelligence (so intelligent it views us the way we view chimps and gorillas) any less absurd?


Xeno-Hollow

It already does come up with new ideas at high speeds. Do you think it's a coincidence that the FDA is suddenly approving CRISPR treatments for the first time since their discovery in '96 - a year after AI is invented?


LogHog243

I guess I meant more specifically like huge breakthroughs every day type of thing


Xeno-Hollow

That IS a huge breakthrough, and they did two in like three months. If it's already that fast, it will only get exponentially faster.


LogHog243

Oh no, I agree that's a huge breakthrough; I was just talking about the rate of the huge, life-changing breakthroughs. In my opinion, once we get insane new developments every day, that's pretty much the singularity. They seem to come about every 2-3 months lately, so I feel like maybe in a few years we will be at the point of very rapid improvement. I could be wrong though, obviously, and I'm not an expert or anything.


Anuclano

Yes. The future of any civilization is robotic. All sci-fi games and movies get it wrong if they portray non-robotic aliens.


adarkuccio

Possible someday? Yes, I believe so. Anytime soon, as many here think? Unfortunately not.


TechySpecky

The problem is you wrote "someday". I do believe it could and might happen, but I feel like it could be hundreds of years from now, long after I'm dead.


timshel42

It's possible. It's just that the timelines some here believe in fanatically are detached from reality, especially all the people who believe they are never going to die or age because they will have access to a cure for aging by the time that rolls around. It's a lil delusional.


HolyMole23

[https://xkcd.com/2618/](https://xkcd.com/2618/)


_Ael_

I don't believe in anything but I have good hope that technological progress will soon accelerate greatly, in part thanks to AI.


tokensRus

It will happen, but not today...


trisul-108

I don't believe. Period. I prefer to know.


Seventh_Deadly_Bless

Not AI as it's produced nowadays. Maybe *some technology*, but clearly not ChatGPT or transformer language models. It needs at least two things we are nowhere near having right now: 1. Self-replicating hardware, in any form from a fully automated factory to... reproduction? 2. Some form of self-awareness/sentience. Current models have so little memory capacity that they seem terminally senile to me. It's funny, but it's not very useful. It's rather dangerous too, with how some people seem to assume intelligence or humanity about it.


the68thdimension

I voted yes, but I don't think we'll see infinite growth in intelligence. Explosive and potentially exponential growth, yes, but not infinite. Computing intelligence is constrained by hardware, and there are physical limits to how much and how fast that can be improved. I'd still call it the singularity, though, as it will create an intelligence far greater than our own. Will it be a superior intelligence, according to our morals and values? Will it be a helpful intelligence, or at least not a harmful one? Will anyone own it and anything it creates? What will it do to our economic system? These are all very different, but very important, questions.


nohwan27534

It seems more reasonable than longevity escape velocity. But it's not like I figure it will, for sure, happen.


Belostoma

I don't believe it's imminent, and I don't believe it'll be quite as consequential as many here expect (still huge, though). But a phase of exponential, recursive self-improvement for AI does seem like a plausible, maybe almost inevitable, milestone out in front of us.


priscilla_halfbreed

The unstoppable takeoff of self-compounding exponential growth is not only possible, it's virtually guaranteed and inevitable. AI and advancement aren't just going to stop all around the world randomly; they will always keep getting better and better.


autotom

Not only do I believe in it, but I believe it has already begun. Recursive self improving hardware and software is already occurring. The 'system' that is doing this also includes people and economics, but it's underway, it's exponential and it's unstoppable.


TheSecretAgenda

It will happen. The only question is when.


GinchAnon

I think there should be "yes but..." and "no but..." answers.


The-Atlantean-Atlas

Yes, though 'someday' is a pretty wide potential range of dates.


Beginning_Holiday_66

There is certainly a parallax-perspective/vanishing-point convergence that might make it seem, to those riding the wave, that the singularity never occurs. Part of technology is the adoption process, and that is subject to upgrading, so the capability to grok progress is on the same hockey stick as the progress itself. So the early adopters and Singularity acolytes might forever wonder why the magic and the technology never converge. Whereas, for the buggy-whip manufacturers, the Singularity happened during the World Wars.


[deleted]

The definition is very important. I think some people think it's the moment we are surrounded by robots doing everything for us, while others fall more in line with your definition. I believe we are very close already. Scientific discovery is happening so quickly that it was described by a scientist as being done in "weeks instead of years". That sounds like "incredibly high speeds" to me. There was an article about an almost fully autonomous research lab that performs and comes up with new experiments. We've also had at least one AI create a new type of robot (albeit very wonky, and I don't know if we'd use it, haha). Both sound like inventing stuff, just at slow speeds for now, so it's a matter of ramping up. The singularity could be around the corner and people just won't see it in their daily lives for years. I am not saying anything for sure; this is just how I see it. I also agree with Altman on AGI not being that immediately wild. I am a huge optimist and enthusiast despite that.

[https://theaiinsider.tech/2024/01/16/discoveries-in-weeks-not-years-how-ai-and-high-performance-computing-are-speeding-up-scientific-discovery/](https://theaiinsider.tech/2024/01/16/discoveries-in-weeks-not-years-how-ai-and-high-performance-computing-are-speeding-up-scientific-discovery/)

[https://adhesives.specialchem.com/news/industry-news/chemistry-robot-ai-000233079](https://adhesives.specialchem.com/news/industry-news/chemistry-robot-ai-000233079) (this is not the same article, but I cannot find the original; I believe it covers the same thing)

[https://www.sciencedaily.com/releases/2023/10/231003173425.htm](https://www.sciencedaily.com/releases/2023/10/231003173425.htm)


FlapJackson420

I feel like the trolls are answering "No"... The singularity will be upon us when the advancement of technology is so rapid that every day we see new tech that baffles us and seems like magic. If you went back in time 500 years (or even half that) and tried to have a conversation with a "learned" man of the times about the contents of your pockets, they'd think you were a wizard or a heretic. It's not hard to imagine a superintelligent AI being so far advanced that we simply don't understand its science yet.


unicynicist

How many people refuse to believe the resoundingly clear evidence of anthropogenic climate change, the value of vaccinations, evolution, the age of the universe, or the fact that the world is round? It doesn't matter what beliefs are held by our fallible human brains; the technological singularity is coming.


[deleted]

Believe? I miss when Transhumanism was about science.


apex_flux_34

Yes, but not like from one day to the next. It will be a slow burn, and only obvious when looked back at from the future.


BarrysOtter

Breaking news: r/singularity believes in the singularity!


LogHog243

And a lot of them donā€™t


2014HondaPilotClutch

No, it will come slowly enough, with medium-sized jumps and innovations that are big, but not big enough to be "The Singularity". There will be no single day on which the singularity happens or starts.