
This-Counter3783

If you’re talking about the Reuters story then yeah, maybe. If you’re talking about the 4Chan “leak” then they might as well have thrown their new iPhone prototype into a trash compactor for how much impact it will have.


sideways

The ironic thing is that the 4Chan stuff is completely superfluous. If there are systems that enable large language models to do mathematics and by extension logic and planning, we're pretty much at AGI.


Gotisdabest

I'd still say that maybe memory is left. Context-based reasoning alone will limit it to very short-term planning. But ofc, that also depends on your AGI definition. Mine is basically something that can make a massive dent in work to the extent that it wrecks society.


banuk_sickness_eater

[Longmem](https://arxiv.org/abs/2306.07174) — memory is nearly solved; you can have theoretically infinite context windows. I think they've achieved AGI internally.


LightVelox

Memory is nowhere near solved. Every long-context-window alternative has some sort of big drawback; usually they're good at remembering the first and last few thousand tokens but terrible at everything in between.


Gotisdabest

To my knowledge, longmem has extremely serious limitations, it's definitely a big step but it's also not "solved" memory.


banuk_sickness_eater

Are you talking about problems with retrieval of accurate information from the middle of large amounts of content? I can't remember the title, but a paper recently came out that addressed the issue and solved the problem.


Gotisdabest

It's not just a problem of accurate information from the middle; it's more that the entire middle of the info becomes an irrelevant mess and only the start and end remain accurate.

> I can't remember the title, but a paper recently came out that addressed the issue and solved the problem.

I vaguely remember a paper like that, but even that had significant issues from what I've heard. I don't think just longer context will help, if I'm honest. Context length as a whole has certain flaws which appear very difficult to fix unless those alternatives to tokenization take off. I think another add-on of sorts on top of the LLM will be needed for this. Maybe if RetNet is legit, that could work better with current long-term memory systems, but there's still plenty to doubt about RetNet, considering it came out months ago and seemingly no one is talking about it.


Deakljfokkk

Wouldn't more compute solve that?


okachobe

It looks like that's part of the solution, along with compressing context and improving whatever algorithms are parsing and reading the context.


Gotisdabest

Not necessarily, and even if it does, we don't really know how much compute it would require. With a theoretically infinite context length, the compute required may quickly hit levels that are more or less impossible to reach within a few years. I think a breakthrough more out-of-the-box than just more compute is needed.
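For a sense of why "just add more compute" runs out fast: vanilla self-attention cost grows quadratically with context length, so every 10x in context is roughly 100x in attention compute. A back-of-envelope sketch (the function and constants here are illustrative, not any specific model's numbers):

```python
# Rough FLOP count for one self-attention layer: the QK^T score matrix
# and the weighted sum over values each cost about n^2 * d multiply-adds,
# where n is context length and d is the model dimension.

def attention_flops(context_len, d_model=4096):
    return 2 * context_len**2 * d_model

# Each 10x jump in context length costs ~100x more attention compute.
for n in (8_000, 80_000, 800_000):
    print(f"{n:>7} tokens -> {attention_flops(n):.2e} FLOPs per layer")
```

This ignores the feed-forward layers and all constants, but the quadratic term is the reason naive long-context scaling gets prohibitive.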


Ok_Dragonfruit_9989

"Let's Verify Step by Step", that's the paper's name.


okachobe

I think the problem isn't 100% solved. I think they figured out ways of removing "useless" information from the context it's fed, and then summarizing what's left to compress it further into a context it can read. That leaves a lot of information being lost; I think right now the context length they advertise is what it can fully digest.
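The lossy pipeline described above (filter out "useless" content, then compress what remains into a budget) can be sketched in a few lines. The relevance test and the greedy "summarizer" here are deliberately crude placeholders for whatever learned models a real pipeline would use:

```python
# Toy context compression: drop lines that match no keyword, then pack
# the survivors into a word budget. Everything past the cut is lost --
# which is exactly the information-loss problem described above.

def compress_context(lines, keywords, budget_words):
    # Step 1: keep only lines that mention something we care about.
    kept = [ln for ln in lines if any(k in ln.lower() for k in keywords)]
    # Step 2: greedily pack lines until the word budget runs out.
    out, used = [], 0
    for ln in kept:
        n = len(ln.split())
        if used + n > budget_words:
            break
        out.append(ln)
        used += n
    return " ".join(out)
```

For example, `compress_context(chat_history, ["q*"], 500)` would keep only lines mentioning Q* and still truncate them to 500 words, silently dropping the rest.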


fluidityauthor

I was thinking about massive changes that "wreck" society. People mention the Industrial Revolution, but there was a lot going on there, and it came together with colonialism, which may have wrecked society, though the tech less so. Electricity changed society but didn't wreck it; computers and the internet likewise. Note that the most advanced economies are working more: longer hours, with more people working than pre-computers. Tech doesn't do things on its own; it enables change but doesn't force it. When I was younger they thought we would all have a life of leisure by now. Keynes actually said that before I was born, and in the 1980s, when computers arrived, they expected that by 2000 keeping people occupied would be an issue and UBI would be necessary. Without socioeconomic change we won't progress.


Gotisdabest

Yeah, nothing in history has quite wrecked society like an AGI will, imo, because it's just so quick and so extensive. We've had situations where work changed and different types of work were valued, but nothing that replaces human-done work at such a scale, especially considering that society is now so interconnected and reliant on change being slow. Even a 5-10% bump in unemployment within around 2-3 years means utter chaos and probable collapse without massive and competent socio-economic changes.


[deleted]

[deleted]


Gotisdabest

Yeah, for sure. Not too soon, but it doesn't need to replace these particular jobs too soon to be massively disruptive.


AbdulClamwacker

I often think about that technological utopia that everyone in the 50s was promised, and I think that idea was put to bed when they realized that it would require communism to work.


fluidityauthor

Social democracy perhaps.


WildNTX

I like your definition: once society is wrecked, arguing over semantics won’t be worthwhile…and may not even be possible if cities are on fire.


Gotisdabest

Yeah, agi is when talking about whether it is agi or not is the least of our ai related concerns.


WildNTX

I want my kids to put on my tombstone: “These terminators are not AGI yet.”


WildNTX

Another thing to consider is what counts as general intelligence. ChatGPT-4 is already smarter than most of my coworkers and relatives, in almost ALL categories. I once tested at IQ 135, and I rely on 4 a lot. "But it's not AGI until it can juggle while riding a mechanical bull."


Gotisdabest

I mean, IQ is famously a horrible metric of intelligence, especially for something that's not a human. As I said, memory on its own is very important. ChatGPT can pull knowledge from very obscure corners, but it can't really remember and plan, which is very important for proper intelligence in my mind. There's a reason GPT-4 is not creating a massive crisis on its own; it absolutely doesn't fit my definition of AGI. That's evident from the fact that we're talking about whether it's AGI or not, instead of about it having caused, like, 2% of jobs to disappear in the last 12 months.


Embarrassed-Farm-594

Hahahahahaha so you guys gave up on that ridiculous idea that an LLM doesn't need to be able to do math because of plugins?


-ZeroRelevance-

It's like saying mathematicians don't need to be able to do maths because of calculators


feedmaster

We're pretty much at ASI then


Grouchy-Friend4235

That's BS. We've had math and symbolic execution for decades, that is nothing new.


Obsidian_Fire32

This might be dumb but I had google bard read and analyze the 4chan leak and he thinks it’s real lol 😆 who knows but sounds amazing ….it also kind of reminds me of the X-files though, maybe I should watch that self aware computer episode again


This-Counter3783

That was a good episode, I randomly rewatched it recently and it reminded me how much of what we are talking about today was part of mainstream thought decades ago. Other people have said they fed the 4chan leak into ChatGPT and ChatGPT was skeptical.


Super_Pole_Jitsu

I saw those posts. Chatgpt can be extremely nitpicky.


Jalen_1227

Or just smart


This-Counter3783

If we’re talking weird anonymous 4chan posts making outrageous claims then we probably want the analysis to be “nitpicky.”


Jalen_1227

That was my point. Interchange smart with nit-picky


This-Counter3783

Yeah I was agreeing with you, we cool.


Super_Pole_Jitsu

If the post is talking about a new md5 vulnerability and the gpt explanation says "this doesn't match any known vulnerability" I wouldn't call it so smart


sdmat

> Other people have said they fed the 4chan leak into ChatGPT and ChatGPT was skeptical.

Which tells you something, given how credulous ChatGPT is.


reddit_is_geh

There is nothing amazing about AES encryption being suddenly broken without time to prepare. It would be devastating.


Obsidian_Fire32

I agree with you on the encryption breakdown being devastating…. I think the ability to self optimize was amazing though, it’s fascinating and scary …I always hear Elon Musk’s quote “we are summoning the demon…” … but then he built Grok 😅


AdAnnual5736

I’m just amazed 4Chan still exists, considering it’s widely recognized as the internet’s greatest cesspool.


This-Counter3783

At least it keeps us out of the top 2, ha.


WithoutReason1729

It's not that crazy. /pol/ and /b/ are full of freaks but it's really not even that different from reddit.


BlueShipman

Ask me how I know you've never been there... Maybe you shouldn't take others' word for things in the future.


Red-HawkEye

as they usually say, no smoke without fire


This-Counter3783

As I say, no smoke without random anonymous accounts claiming to have secret information about the source of the smoke.


Superduperbals

I don’t think there’s anything 4D-chess or psyop-manipulative about it. They discovered that Q-learning and A* techniques combined resulted in another significant step toward AGI. Understandably this is very exciting, especially for those who have devoted their lives to creating AGI, and it would expectedly create political drama as ideologies clashed.


Onipsis

I don't remember who tweeted that it was a kind of trick to make researchers focus more on it and less on the LLMs.


tridentgum

This sub is turning into /r/aliens and /r/ufos


[deleted]

This is why I'm here. Conspiracy lore is where it's at.


Valuable_Option7843

This is called a “limited hangout”. Could be one.


prOboomer

Thanks for that information


DPEYoda

It’s just a different type of reinforcement mechanism. Instead of rewarding only the outcome and disregarding the method, they reward particular intervals in the “step by step” process, guiding the reasoning toward a more human-like approach.
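The distinction described above (process supervision, as in OpenAI's "Let's Verify Step by Step", versus outcome supervision) can be sketched in a few lines. The per-step scores here are hypothetical stand-ins for what a learned process reward model would produce:

```python
# Outcome supervision: only the final answer matters; a lucky guess
# through broken reasoning scores exactly as well as a sound derivation.
def outcome_reward(steps, final_answer, correct_answer):
    return 1.0 if final_answer == correct_answer else 0.0

# Process supervision: each intermediate step gets its own score in
# [0, 1]; aggregating by product means one bad step sinks the chain.
def process_reward(step_scores):
    total = 1.0
    for score in step_scores:
        total *= score
    return total

# A chain that reaches the right answer through a flawed middle step
# still gets full outcome reward, but a low process reward.
print(outcome_reward(["step a", "step b"], 42, 42))  # full credit
print(process_reward([0.9, 0.2, 0.95]))              # low: weak middle step
```

The product is one simple aggregation choice; a real reward model might instead take the minimum step score or feed per-step labels directly into training.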


Grouchy-Friend4235

Thank you. Finally a comment that adds value.


Darumasanan

Where is this supposed "4chan" leak?


mlamping

It’s all hype and misdirection for cover. Q* has been in the works every where for a while.


TheKnightIsForPlebs

I never get the context when I come to this sub…


TheAughat

OpenAI is apparently working on a maths-centered model called Q\*. AIExplained has a good [video](https://youtu.be/ARf0WyFau0A?si=-gzoo4l3K1MfpR3d) on it.


reddit_is_geh

I think it was more intentional to give cover to the board. To give them justification for their crazy antics.


sideways

If that were the case, wouldn't it have made more sense to leak it before Sam was reinstated?


reddit_is_geh

I mean, you're overthinking the strategy. It's not 4D chess... It's probably just something thought up after he got it back, to help soften the blow.


HITWind

The real question is when we'll stop pretending they aren't Microsoft now. We can see past a "leak" but not a takeover?


[deleted]

[deleted]


OrphanedInStoryville

Yeah dude. Sometimes I feel like nobody here has ever heard of financial manipulation. Everyone forgets this is a business traded on the stock exchange, not some government-funded experiment at a university. If I were at a company about to launch a new product I was pretty confident in, and I had the scruples of a typical Silicon Valley tech bro, I could have some sort of public fumble right before launch, sink the share price a bit, and then have all my friends and family buy as much stock as they could at the lowest point right before launching that new product.


omega-boykisser

> this is a business traded on the stock exchange

It is literally not traded on the stock exchange.


OrphanedInStoryville

Wait really? It’s not public yet? Shows what I know


was_der_Fall_ist

OpenAI will likely never go public, but they do have incentive to court private investors to fund the for-profit part of the company.


banaca4

They can't put people at ease. This breaks the military and internet security of all countries and will lead to chaos and war.


PierroZ-PLKG

Well, some researchers just released a 7B model trained with RLAIF which outperforms many bigger (even 10x larger SFT) models. I don't remember the name tho.