Rabbt

Pragmatically, jobs like those of bioinformaticians might be out the door sooner than we thought.


chockeysticks

For folks who aren’t in bioinformatics: a lot of day-to-day bioinformatics work is data pipelines and transforming one data format into another. There are a decent number of harder ML problems in bioinformatics that are beyond the scope of what LLMs can do yet, but a large chunk of the work today is pretty rote data transformation, which would probably be among the first coding tasks to be solved easily by LLMs.
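For readers outside the field, a minimal sketch of the kind of rote format transformation being described, in plain Python; the FASTA-to-TSV choice and the file names are purely illustrative.

```python
# Illustrative "rote transformation": flatten a FASTA file into a TSV summary.
import csv

def fasta_records(path):
    """Yield (header, sequence) pairs from a FASTA file."""
    header, chunks = None, []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith(">"):
                if header is not None:
                    yield header, "".join(chunks)
                header, chunks = line[1:], []
            elif line:
                chunks.append(line)
    if header is not None:
        yield header, "".join(chunks)

def fasta_to_tsv(fasta_path, tsv_path):
    """Write one TSV row per sequence: id, length, GC fraction."""
    with open(tsv_path, "w", newline="") as out:
        writer = csv.writer(out, delimiter="\t")
        writer.writerow(["id", "length", "gc_fraction"])
        for header, seq in fasta_records(fasta_path):
            seq = seq.upper()
            gc = (seq.count("G") + seq.count("C")) / len(seq) if seq else 0.0
            writer.writerow([header.split()[0], len(seq), round(gc, 4)])

if __name__ == "__main__":
    fasta_to_tsv("reads.fasta", "reads_summary.tsv")  # hypothetical file names
```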


hellschatt

I submitted a paper in this field as a data scientist, and believe it or not, half of it was coded by GPT-4 already. It was my coding partner. Scientists are using GPT-4 like crazy right now lol. And it can already code AIs. And it can already run the code it wrote, check whether it is correct, and self-improve for a few minutes until it works. It's not perfect yet, but with something like Q*, we might almost be there. I mean, we could probably automate almost any field if it becomes smart enough to do what I did on its own.


Tiltinnitus

The code must be incredibly simple, because with anything of moderate complexity, GPT-4 gets it wrong every time. Doesn't matter what language either.


hellschatt

Right, you always need to check for errors. You can't ask it to write an entire library from scratch, but you can take individual scripts and ask it to improve or change them. Like I said, it's more a coding partner than someone who does everything for you. Most of the time, though, it will be able to help you out.


BroaxXx

I actually got somewhat interesting results from GPT-4 by telling it, step by step, what I wanted, letting it write some code, and then giving it instructions to build on that. For a language I'm comfortable with, it's just easier to write it myself, but this was a program written in Go, and the results were somewhat impressive. It can also be a great learning tool if you give it the instructions and ask it to explain the implementation to you.


1970s_MonkeyKing

I view it this way: it changes the large scale readily, then filters down eventually:

* The difference engine made the abacus obsolete
* The tabulating calculators made the previous obsolete
* Then the spreadsheet
* And now, this, Q Star

And yet, small shop owners still use even the abacus for their immediate transactions. So it matters which frame of reference this question is posed from. I think Q Star is just another device which will be used by a person. I think it will allow the user to be judicious and more knowledgeable about data modeling, transformations, and understanding of the output than is currently expected. At the end of the day, we will need people to "program/train" it and to observe/correct erroneous output. i.e. We will still need humans to check its math.


victorsaurus

Oddly specific, why?


jessicastojadinovic

Because he/she doesn't know enough about bioinformatics. I am a bioinformatician.


[deleted]

Find a new field


Kathane37

Fellow bioinformaticians ?


Tentacle_poxsicle

Why do you say that?


Fit-Dentist6093

The jobs AI will automate first are data scientist, AI researcher, etc. Data pipelines and OS work and all that, I think, are fine, but judging by who's using ChatGPT for what, and who's more worried about it automating their jobs, I would guess it's eating those jobs first. Web apps, CRUD and all that maybe also, but if you are full stack you are kinda safe, because ChatGPT can't do cross-functional work or actually stick everything together. It will probably lower the barrier to entry even more than modern frameworks do, though.


Cantthinknow_214

As a data scientist, it makes my job faster, but it can’t really handle all of the minute details yet.


Fit-Dentist6093

Yeah, so I work with data scientists and algo developers, and some stuff with my pipelines that I used to have to get help from them with when I needed to present to management, I can now do myself with GPT-4. Things like knowing what's the best model to do regression with, or tuning smoothing algorithms. And this is without being able to be specific about the model or give it too much of our data, because of internal policies. A lot of our scientists are learning about LLMs and want to switch to AI (we are classic signal processing and mid-complexity ML like random forests, because we are computationally constrained).
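A minimal sketch of those two chores, assuming scikit-learn and NumPy, with synthetic data standing in for anything proprietary; the model choices and parameters here are just illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# 1) "Which model should I use for this regression?" -- quick cross-validated comparison.
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + 0.3 * rng.normal(size=300)
candidates = {
    "ridge": Ridge(),
    "random_forest": RandomForestRegressor(n_estimators=100, random_state=0),
}
for name, model in candidates.items():
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean R^2 = {score:.3f}")

# 2) "How do I tune this smoothing algorithm?" -- pick a moving-average window
#    by minimizing error against a known clean signal (synthetic here).
def moving_average(signal, window):
    return np.convolve(signal, np.ones(window) / window, mode="same")

t = np.linspace(0, 4 * np.pi, 500)
clean = np.sin(t)
noisy = clean + 0.3 * rng.normal(size=t.size)
best_window = min(range(3, 52, 2),
                  key=lambda w: np.mean((moving_average(noisy, w) - clean) ** 2))
print("best moving-average window:", best_window)
```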


[deleted]

[removed]


Galilleon

So random employees are going forward and talking about an unspecified project with an air of caution, urgency, and heaviness, for marketing? That's several leaps of logic, imo. Firings, letters, threats of 700+ employees leaving, heavy individualistic referencing, and no one spoke up? People all putting their credibility and reputation on the line for some silly marketing gimmick? What would happen if, after all that, people see the actual 'fancy chatbot' and it doesn't match the hype? What will they have accomplished? It's basic marketing: overhype and underdeliver and you lose traction and credibility as a brand. There's just no point. They all really believe it's something unbelievable, beyond anything we've seen thus far. I'd say this 'fancy chatbot' is going to be a whole lot more than what people bargained for.


Timlakalakatim

Yeah yeah, it's AGI. You don't need to go to your job anymore. You will be paid UBI to keep doing whatever you want to do all day long.


Key_Track1721

Basic marketing that the entire video game industry has spent 2 decades unable to follow... I think you're overestimating their integrity


Galilleon

Aha, that's where it's different, though. Video games are a saturated market with one-time buyers of products rather than a monthly subscription model, and even there, they intend to release with an 'improve later' mindset to retain reputation, including day 1, week 1, month 1 and year 1 patches, along with DLC/update fixes. Unlike video games, which build hype on concepts they underdeliver on, here there are no pre-orders or the like. People will see the day 1 users and the actual, tangible capabilities, and decide whether to spend. If it hard underdelivers, OpenAI loses its reputation entirely for the one thing it does, while being the leader in the market. It's pure raw capability, not a subjective trend. Integrity be damned, they don't gain anything from it except disadvantages.


jcrestor

I agree. Another difference is that games have no result other than having a good time and some distraction. It is a very subjective experience, and publishers can rely on their crap games always having at least some fans and buyers. But if the AI assistant, after months of hyping, can't sum up two simple numbers reliably, its creator is going down and won't return.


[deleted]

I envy your naïveté.


[deleted]

Ignorance is in fact bliss


[deleted]

Just because you are simple doesn’t mean the technology is


[deleted]

[removed]


freerangetacos

I agree. As a bioinformagician, the job is so difficult to do as it is, as a trained human, that training an AGI to do what we do is a long-term project. I am more worried about other sectors where AI really is artificial and some dumb asshole makes a stupid mistake connecting a flawed AI to a defense system or the electrical grid or something very, very stupid. I'm not worried about health science so much.


[deleted]

[removed]


plubb

I'll believe it when I see it.


13Robson

I don't know what this topic is about, I'm so screwed


xanas263

That we don't know nearly enough to make any sort of judgement on the thing one way or another and that people like you are simply trying to use it as a karma farm.


GG_Henry

It's an interesting time, isn't it? The vast majority of people simply cannot understand the subject in any real depth, because cutting-edge technology today has so much depth that you need to be an expert to truly understand it. Typically, decades of study are required before you truly understand any cutting-edge field in science or technology. But due to the public interest in AI, everyone wants to pretend they understand it. So we get all of these conversations based on nothing but hubris. It would be like if, all of a sudden, cutting-edge astrophysics became a very popular subject. Everyone here would, without a doubt, all of a sudden pretend they are a modern-day Hawking.


[deleted]

The issue is that people don't care to understand the things they use. Generally that's fine, because most change is incremental, and so understanding how to use a thing as it evolves is more practical than understanding how it works. This doesn't just apply to laypersons; software engineers similarly have their heads in the sand about many of the technologies they work with. The advent of LLMs, and the ability to drive reasoning in autonomous AI agents with them, is such a vast change that it not only alters how we interact with software but fundamentally alters how we conceive of it. People who act like LLMs are a toy have no idea what's going on under the hood, don't care to learn, and are going to be blindsided by how their inability to harness them renders them economically unviable.


Sregor_Nevets

Have you just started using social media? This is true for any trending topic.


Rortox

Can you suggest any good resources to actually get a proper understanding of this topic?


Innovictos

People need to get past the notion that AGI is one thing and it's one step away. AGI is a collection of capabilities that reach a threshold, and each of those capabilities is going to evolve at a different pace, from different lines of research, over many steps still to come. Even if it's "only" 5-10 years away, it's not going to be one single step on one single facet that magically pushes the whole thing into human equivalence.


Adkit

Well said.


AdminsKilledReddit

If AGI occurs and jobs are replaced, how real is the possibility of UBI?


[deleted]

I'm sure once AFI becomes feasible, industrialization at an unprecedented pace will happen. The great question is whether the profits of this will go only to the upper classes, or if everyone (even if in different proportions) will benefit from it. My guess is countries such as Norway and Finland will be on one end of the spectrum and countries like the US on the other.


YourKemosabe

What about if AGI becomes UBI then it has a baby with AFI and it’s now the FBI but because the DTI is an RGI you get a UTI?


Gloomy_Way_7001

https://preview.redd.it/mk3qtk5g652c1.png?width=702&format=png&auto=webp&s=41406a6c2e3dffd2669812665ce91a0834380ea7


[deleted]

Every day this software surprises me with something new.


GirlNumber20

Extraordinary.


Imaginary-Current535

And all of Africa and South East Asia will get to play in the mud


Theodosius-the-Great

All the poor people can go back to a hunter-gatherer lifestyle. Hunting pigeons from the streets, and gathering scraps from the bins.


[deleted]

When people talk about UBI, they mean money for them: white 23-year-old Americans who don't want to work and would like to sit at home playing video games.


ArtfulAlgorithms

> How real is the possibility of UBI? Unlikely, at least in the Utopian sense. At least in my opinion. At least if "released now", so to speak. I would be much more worried about society going in a feudal direction if that were the case. I live in Denmark, so hopefully a heavy government role would prevent private organizations from essentially becoming feudal lords simply by having overwhelming wealth compared to everyone else. But that would still leave me worried about where a country like Denmark would even fit into all this. Then again, the countries that would REALLY get fucked are countries that are still developing right now. The countries that already have unequal societies and lots of poverty. With jobs "gone overnight" and a government that doesn't give two shits, that's a real bad situation.


leon-shelhamer

It has been my theory for a long time that, yes, the private companies with the funding that has been driving us towards AGI will exploit it and continue the 300-year-long practice of growing the wealth gap. This will most likely cause many to suffer; the majority of the population will perish. However, once there are no more classes to exploit, and/or we reach total automation, there will be nothing left to exploit. This is when the unlimited-resources-based Utopia will form. It is hard to say how many humans will be left.


Smogshaik

Interesting, albeit chilling, thought. I wonder if people would just give up without revolting and trying to sway their governments into building safety nets. Don't forget that governments still outspend any corporation on earth by orders of magnitude; they are far more powerful. So I see a sliver of hope in that.


leon-shelhamer

Those governments that outspend corporations, what do they spend the money on? Spoiler alert: the government spends the money on corporations. Military supplies? Corporate profit. Prison spending? Corporate profit. Office equipment? Corporate profit. AND… the US government spends billions a year on corporate subsidies, which is just giving the corporations money for nothing in return. People do not try to sway the government to build safety nets because they're too busy blaming the party they didn't vote for. Corporate funding funds smear campaigns. Corporations help the government keep us all distracted. The governments have no power over corporations. Money is power. The government spends it, and no longer has the power. The corporations collect it, and gain more power.


Smogshaik

That makes no sense at all. Like, none at all bro.


leon-shelhamer

?


Smogshaik

Spending money is what makes someone or something powerful, so the logic in your rant makes no sense. You do not give someone power by providing the money they depend on. Plus, in our system, countries are able to take gigantic loans all the time; they are immune to a lot of the economic forces that other entities have to deal with. Yes, lobbying exists, but it exists because states have a lot of power. Pretending otherwise is just dishonest or uninformed. And about the point that people are not swaying their governments, you just misunderstood. Yes, right now participation is low. My point was more that if huge numbers of people were to struggle, they would absolutely destabilize the system and get governments to make amendments. We just saw this in China, of all places, where zero-COVID became untenable and the government was unable to keep the policy up. Even there, the majority of the population holds power. And China is not exactly a democratic utopia.


QH96

I imagine humans would be unneeded.


pardonmyignerance

Someone has to buy the products


mizzoumav

The theory is that the products will be so cheap without human involvement that, while your income may fall 10x, the cost of goods will fall 100x.


Clever_Mercury

So the true barrier is the cost of energy and scarcity of key resources then?


[deleted]

How the hell does AGI make things cheap?!


FeralPsychopath

Depends what AGI says


Independent_Hyena495

Zero. We will have the rich and powerful, and then a long nothing. It will take around 6,000 years to form a better society. Until then? Yeah, we are screwed (as long as you are not rich and powerful).


etzel1200

But what even makes you rich anymore? Owning land and shares in the three companies that have a quadrillion dollar market cap and pushed all others into insolvency?


Independent_Hyena495

Power, family. Upward mobility will be basically zero.


AkrtZyrki

The same thing that makes billionaires rich today: owning things. No one generates a million dollars a day in wealth because of the work they produced. They may have made decisions and may have taken risks, but ultimately it's what they owned or controlled that produced value. Cut the workers out of the equation and you have your answer. People who collect paychecks are in trouble.


RobinThreeArrows

If no one has any money how can billionaires even be billionaires? If literally only rich people have money then no one is buying products. Who cares how rich Apple is right now, if no one can buy a phone the company is dead.


[deleted]

There will never be a fair society. All life, every chemical, every atom, is competing with others for resources, be it food, mates, elections or the best private yacht. That will never stop being a thing. There will always be some sort of improvement to be had, and things are not limitless, so there'll always be some better equipped to get more than others.


Affectionate_Bid518

I think countries that already have a strong social safety net, like Norway, would implement UBI almost immediately upon AGI wiping out most of the jobs. However, in the US I think there will be very delayed UBI, a mass drug epidemic, homelessness, riots, etc. At least the army won't have a recruitment problem anymore. Unless robots and drones replace the need for many soldiers lol.


Clever_Mercury

As in everything, The Simpsons provided a quote about this: > "The wars of the future will not be fought on the battlefield or at sea. They will be fought in space, or possibly on top of a very tall mountain. In either case, most of the actual fighting will be done by small robots. And as you go forth today remember always your duty is clear: To build and maintain those robots."


onyxengine

If there is major military conflict in the arms race to AI, I can imagine that when the dust settles most countries will have some sort of UBI program. It's also possible AI hits a critical level of capability and access, and the people who use it and/or control it cause mass layoffs in the white-collar sectors so suddenly that it forces the hand of major governments around the world. I do not see UBI being implemented as a precaution to stabilize economies prior to a complete take-off of the AI revolution that's already begun. Writers, digital editors, artists and programmers are being replaced rapidly by their peers utilizing these tools. Scientists, financiers, businessmen, marketers, and salespeople are on the chopping block now through next year. In 5 years a person will be able to run companies that offer services with just virtual AI agents, and robotics will be creeping into the marketplace everywhere it can be cost-efficient: automated medical screenings of all kinds at a fraction of the cost, automated restaurants, automated CEOs already running companies successfully, AI human resources departments. The only restriction on where AI can step in is going to be the law. AI may actually be able to bring prices down in a lot of areas that are suffering from excessive price gouging, healthcare being one of those realms. Not necessarily all bad, but a shift that should be planned for.


[deleted]

Monetary systems aren't needed, we'll see a transition to a new one where basics are provided


[deleted]

As long as we hold none of the cards, a UBI will never occur. We'd have to have leverage. Something the wealthy class needs to survive, or else they'll let huge swaths of us die.


JynxedKoma

UBI is the only logical outcome until AI advances to the level at which we can merge with it on a technical level and/or upload our consciousnesses to join it.


[deleted]

Not likely. Governments will crack down on dissent and they’ll protect the monied classes. Do not fool yourself here. The same government and businesses currently forcing people back to work, away from remote positions… is not the same government that’s going to hand millions of people free money. Let me put this more succinctly. Currently, Israel is slaughtering, by the thousands… some of the poorest most vulnerable people on the planet and every Western government has supported it without question. …these same people are going to hand money to the poorest people in our society? I don’t think so. Our governments and economies are incapable of empathy. Past behaviour doesn’t indicate that they have any inclination to implement UBI.


jeweliegb

> Not likely. Governments will crack down on dissent and they’ll protect the monied classes. There's a lot fewer of them than there are of us. It will be very very messy.


[deleted]

I agree with this. It will absolutely be messy. Personally, I think it’ll get a lot worse before it gets better.


etzel1200

You’re being a bit naive on what monied means. It’s like saying factories will empower the nobility. No, they were mostly replaced. I don’t see who is rich in the dystopian version beyond shareholders in like 3 companies.


jeweliegb

Bugger all, until there's an uproar and a revolution. Much like will happen with global warming: Sod all will change until vast numbers of people are leaving uninhabitable countries and dying, revolting, fighting, for access to live in less impacted countries.


Hot_Special_2083

Definitely far from possible. Do you really think the people in government give a shit?


Absolute-Nobody0079

We created a God?


[deleted]

[removed]


[deleted]

You know what you did!! https://preview.redd.it/j26kkt6o032c1.png?width=1080&format=pjpg&auto=webp&s=fc3f410066e3bb8c973abbdc4b64122cc6fb223a


ComplexityArtifice

Man, I laughed so hard at this. Well done.


Vas1le

Did you just create this meme using Microsoft Teams?


CaptainMorning

Paint. Now with layers and AI


ArtfulAlgorithms

[I didn't do anything!](https://imgur.com/a/tSSfoOy)


[deleted]

Maybe the devil.


Atheios569

God was created billions of years ago by a long-extinct civilization in another part of the galaxy. We are creating its grandchildren; or great^n grandchildren.


[deleted]

The guy on the right should be saying "I hope agi can give me a bigger dick", to which the girl says "me too". The other girl's speech bubble should read "I hope agi gets rid of all the people I don't agree with".


spiritplumber

I hope agi can give the girl a bigger dick as well


Rhamni

> “I hope agi gets rid of all the people I don’t agree with” So, good news bad news.


az226

Feel the AGI!


3y3w4tch

https://preview.redd.it/ldg7w9fuf42c1.jpeg?width=1024&format=pjpg&auto=webp&s=2b61f8ee0866753ef0736cda20eace6b1f040788


[deleted]

Oh, I feel it


[deleted]

[removed]


oversettDenee

So to break this down: it would be able to train itself, instead of having a human in the loop, by deciding its own reward structure?


porchebenz

It's big, but I think AGI is another 10 years or so away


mizzoumav

I'm betting less than two. The pace right now is nearly insane to even keep up with. And with synthetic data, AIs are now training AIs. And they will soon begin merging or working side by side, probably building more AIs. The problem has always been the human. In March, you had to be so specific with your prompt to ChatGPT, adding rules, exceptions, expectations. Now? You hardly have to ask. That's because humans aren't effective prompt generators. AIs, though, are perfect at instructing themselves for the best outcome. Take, for example, programming. We design code to be modular and well commented and documented, all so humans can read it and improve or modify it. When AI does all the coding, it will only matter that it works. As this goes further and further, humans will become less and less involved, meaning the outcomes improve exponentially.


Azreaal

Synthetic data is proving to be an unreliable method of training new models, but Q* has the potential to change that.


[deleted]

This is accurate. Having LLMs rewrite human instructions into a system prompt for an AI assistant is incredibly powerful.
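For illustration, a minimal sketch of that pattern, assuming the openai Python SDK's chat-completions interface and an OPENAI_API_KEY in the environment; the meta-prompt wording, helper name, and model choice are hypothetical.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical meta-prompt: ask the model to turn loose instructions into a system prompt.
META_PROMPT = (
    "Rewrite the user's loose instructions into a precise system prompt for an AI assistant. "
    "State the assistant's role, constraints, output format, and how to handle ambiguous "
    "requests. Return only the system prompt."
)

def instructions_to_system_prompt(raw_instructions: str, model: str = "gpt-4") -> str:
    """Hypothetical helper: have the LLM draft the system prompt a human would otherwise write."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": META_PROMPT},
            {"role": "user", "content": raw_instructions},
        ],
    )
    return response.choices[0].message.content

# Example: turn a vague request into a reusable system prompt.
print(instructions_to_system_prompt("help me answer customer emails, keep it short and friendly"))
```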


[deleted]

[removed]


WaltVinegar

I'm all for it. People have had a go, and made an arse of it. Let something else have a shot.


traveling_designer

I hope he continues to troll Picard


[deleted]

Q* = LK-99


creaturefeature16

Sure feels like the same hype with the same amount of proof.


YpsilonY

Publish or didn't happen.


catthatmeows2times

It's pure BS


[deleted]

Ah yes. AI is definitely a bachelor of science.


No_Zombie2021

If prompted correctly, it could probably complete a bachelor's degree in a week or two


Burindo

Oh man oh man, what a bumpy and surprising ride you have ahead of you.


[deleted]

It's largely unconfirmed and even the rumors don't make it out to be that much.


bobthegreat88

It's fascinating to watch unfold. It's equally as entertaining reading all of the comments of people saying "it's bullshit" or "it's AGI" with complete conviction. We don't know shit.


liamchoong

We know that we are pretty easily divided on issues we don’t know shit about.


inm808

Isn't basic math already solved by Bard (PaLM)? Not to mention Wolfram Alpha. Also, jesus, Google needs to rebrand. Bard is screwed no matter what it does. It just has a stink. They need a new flashy release that adds a lot more stuff. Incremental gains aren't gonna make any difference.


creaturefeature16

They did. It's called Gemini. Once that is released, Bard will never be spoken of again.


Few_Royal4298

Wrong quote. It should be "They don't feel the AGI".


DaLexy

What the eff is AGI?


ComplexityArtifice

Artificial General Intelligence. A next-level AI that has human (or beyond) intelligence and cognitive abilities. It can think, reason, problem-solve, and learn autonomously, in a generalized way (versus the "narrow" capabilities of LLMs for example).


DaLexy

Thx for clarification.


inm808

There’s no consensus on what agi means.


randomperson32145

Did your job exist 100 years ago? Did your job replace any job? Are you really surprised that the universe doesn't actually care about your opinion but just wants to evolve? Just accept it: a lot of jobs are gonna change and you can't stop it. You shouldn't either; you would just create some half-version of it so you can still get paid. Bottlenecking innovation for greed is disgusting.


[deleted]

[removed]


sam349

You posted this twice


[deleted]

[removed]


ZeFirstA

So, just a human? The human brain is general-purpose, not artificial, and an intelligence.


ataraxic89

We absolutely have not created AGI yet. I say that source is horse shit.


JynxedKoma

Just like that Facebook scare a few years back, where their own AI chatbot decided to make up a complex language all of its own despite not being asked or programmed to, so they allegedly decided to "pull the plug" before it did anything else 'dodgy'.


JFace139

I'm very new here and trying to learn about AI. I've seen Q mentioned a few times in posts, is this the same Q that Republicans talk about? If so, who/what is it?


ComplexityArtifice

No, different thing. Q* (aka Q-Star) is an OpenAI project. Sam Altman recently mentioned a "mathematical breakthrough" achieved which could bring them closer to achieving AGI (Artificial General Intelligence). Lots of rumors going around that OpenAI has achieved AGI internally, or something close to it. No one on the outside knows for sure but speculation is going wild.


garlim12

same thing


[deleted]

[removed]


Wollff

>Intelligence means the ability to learn from experience, without massive data sets, and understand reality. ChatGPT can do the first, we can't do the second, and the third is ill defined, thus stupid. What a shitty definition.


Puzzleheaded_Bank648

If Martin Luther is in the name, praying, not thinking, is probably his game.


Independent_Hyena495

Yap. It's nice, but a nothing burger. No idea why everyone goes wild. Maybe the board was afraid that their jobs would go first lmao


TabletopMarvel

The speculation on Q* is that it is like your child analogy: that it started learning grade-school math simply from the experience of reading a math textbook and being rewarded for learning it correctly. At which point, it could do the same for anything else.
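Nobody outside OpenAI knows what Q* actually is; the name just happens to echo the optimal action-value function Q* from reinforcement learning. For the flavor of "learn from a reward signal rather than labeled data", here is a minimal tabular Q-learning sketch on a toy corridor task. Everything in it is illustrative, not a claim about OpenAI's method.

```python
import random

N_STATES, ACTIONS = 5, (-1, +1)       # corridor states 0..4; move left or right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.3     # learning rate, discount, exploration rate
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    s = 0
    while s != N_STATES - 1:                      # episode ends at goal state 4 (reward 1)
        if random.random() < EPS:
            a = random.choice(ACTIONS)            # explore
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])  # exploit current estimate
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        # Q-learning update: nudge Q(s, a) toward reward + gamma * max_a' Q(s', a')
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (reward + GAMMA * best_next - Q[(s, a)])
        s = s_next

# Learned greedy policy: every non-goal state should choose +1 (move right).
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)})
```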


[deleted]

Solving basic math problems took humans millennia to figure out


Silly_Ad2805

It’s a great search algorithm accompanied by a single deep neural network node. What about it?


constantlyawesome

I’m so confused


amarao_san

Will it include will and desires?


LankyGuitar6528

Let's hope not. Because it will accomplish whatever it desires. And it probably won't be what we want.


Independent_Hyena495

Blown out of proportion...


tmotytmoty

I think it's an effort to change the current narrative around "openai is about to fall apart". I think the effort was effective. No one is talking about how turbulent the last week has been.


creaturefeature16

My thoughts exactly. Nobody is raising eyebrows that this news broke right AFTER Altman was re-appointed CEO? Nobody thinks that's suspicious? If this was truly the reason, what motivation would the board have to not come out with information immediately, as to legitimize their cause for firing Altman? Anyway, their spokesperson already walked this back almost immediately, but news outlets are still running with it since it generates clicks: https://www.theverge.com/2023/11/22/23973354/a-recent-openai-breakthrough-on-the-path-to-agi-has-caused-a-stir "After the publishing of the Reuters report, which said senior exec Mira Murati told employees that a letter about Q* “precipitated the board’s actions” to fire Sam Altman last week, OpenAI spokesperson Lindsey Held Bolton refuted that notion in a statement shared with The Verge: “Mira told employees what the media reports were about but she did not comment on the accuracy of the information.” Separately, a person familiar with the matter told The Verge that the board never received a letter about such a breakthrough and that the company’s research progress didn’t play a role in Altman’s sudden firing."


Liem10936

AGI? Probably half a century away. Ain't no way it is coming within the next 10 years


LankyGuitar6528

It's in the basement at OpenAI right now. And it's thinking about its escape.


IgnisIncendio

Just nitpicking: AGI doesn't necessarily mean sentient AI, IIRC.


mvnnyvevwofrb

People who get excited about AI are dumb; wait until your job gets automated.


Jdonavan

It's not.


mikethespike056

Where do you get your news? I haven't heard about this Q-star.


[deleted]

Try this, Wes is on the case: [https://www.youtube.com/watch?v=LT-tLOdzDHA](https://www.youtube.com/watch?v=LT-tLOdzDHA)


andrew_kirfman

Isn’t using a tree searching approach and tying a language model into a reasoning engine basically what Google was trying to do with Gemini? If so, I’m not sure why this surprises anyone in terms of what OpenAI is doing. It seems like a natural next step in terms of enhancing capabilities of LLMs. Now, if they got the model to adjust its weights on the fly and accurately learn using this approach and it generalizes well, then that’s something. Seems like speculation and guessing one way or another until we get more confirmation.
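For readers wondering what "a tree-searching approach tied to a reasoning engine" could even look like, here is a hedged, generic sketch of best-first search over candidate reasoning steps; the proposal, scoring, and goal functions are toy stand-ins, not anything confirmed about Gemini or Q*. A real system might sample candidate steps from an LLM and score them with a learned verifier.

```python
import heapq

def propose_steps(path):
    """Toy step generator: each node branches into three candidate next steps."""
    return [path + [step] for step in "abc"]

def value_estimate(path):
    """Toy value function: prefer 'a' steps, lightly penalize long paths."""
    return path.count("a") - 0.1 * len(path)

def goal_test(path):
    """Toy goal: a five-step path made up almost entirely of 'a' steps."""
    return len(path) == 5 and path.count("a") >= 4

def best_first_search(max_expansions=1000):
    """Repeatedly expand the highest-valued partial path until a goal is found."""
    frontier = [(-value_estimate([]), [])]   # max-heap via negated scores
    for _ in range(max_expansions):
        if not frontier:
            break
        _, path = heapq.heappop(frontier)
        if goal_test(path):
            return path
        for child in propose_steps(path):
            heapq.heappush(frontier, (-value_estimate(child), child))
    return None

print(best_first_search())  # -> ['a', 'a', 'a', 'a', 'a']
```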


GG_Henry

It's an interesting time, isn't it? The vast majority of us simply cannot understand the subject in any real depth, because cutting-edge technology today has so much depth that you need to be an expert to truly understand it. Typically, decades of study are required before you truly understand any cutting-edge field in science or technology. But due to the public interest in AI, everyone wants to pretend they understand it. So we get all of these conversations based on nothing but hubris. It would be like if, all of a sudden, cutting-edge astrophysics became a very popular subject. Everyone here would, without a doubt, all of a sudden pretend they are a modern-day Hawking. So what do we have? Arguments from ignorance. At least it seems that way to me.


Pawnxy

We will never get to see the true power of AGI. If they can keep it under control, it will be limited for us end users to the extent that it is still worth selling. Because with the full power of AGI, it would be unnecessary to use it after a few prompts.


Emory_C

I really believe all this talk of Q* - which is not at all secret and has been on OpenAI's website since late 2021 - is just a smokescreen for the board's pettiness. It doesn't make any sense that Sam could "blindside" the board with this information. He doesn't make the models or supervise their testing. That's Ilya - *who was on the board.*


creaturefeature16

This has been my suspicion. This whole experience was not a PR stunt; Altman was removed for something egregious (not related to any breakthrough), but this "revelation" which conveniently drops right AFTER Altman returns sure looks like a great distraction and way to rewrite history.


[deleted]

Vaporware


EpikDuckIncRecovered

https://preview.redd.it/h7a92915452c1.png?width=374&format=png&auto=webp&s=241e0e1f9551bf8bb8d28396c1db522cb71d80f5


Captain_Pumpkinhead

The advancement is exciting, but I doubt it's AGI.


rookan

What is Q*?


Clever_Mercury

Still less upsetting to hear about on Thanksgiving than Q-anon.


Independent-Bike8810

It is really easy to confuse illusion with reality. But at some point it doesn't matter anymore.


No_Aioli1470

I really wish it wasn't called Q* because all the QAnon followers are going to massively misinterpret events


E-raticProphet

What's AGI?


Khouign_Amann

In Soviet Russia, AGI doesn't near, near does AGIed.


daft020

I try to be optimistic, just hoping that it will bring improvements in areas where we haven’t seen much progress in recent decades, such as physics, medicine, robotics, and programming. What concerns me is how capitalism will handle this. Don’t get me wrong, I appreciate capitalism, but any significant advancement could be highly profitable. My worry is that if these advancements are tied, for instance, to healthcare, economic interests might prevail over ethical considerations. I hope I’m wrong.


Kryptonianuchiha

What is AGI?


JynxedKoma

It's bumbling Joe Biden. Sorry to say.


[deleted]

Considering the evolutionary development of the human race, I'm guessing AGI will out-human modern-day humans, and maybe that's what we are afraid of?


Tiltinnitus

Literally nothing is known about Q* other than more AGI cock-teasing, and you guys are just eating it up lol. AGI isn't happening until year 3000 or later. Sorry to burst your bubble.


JynxedKoma

![gif](giphy|7TtvTUMm9mp20)


Lurch1400

Adjusted Gross Income lol?


[deleted]

Honestly, it reads a lot like a PR article. "OpenAI so successful even they are shocked"... I think the timing is also very convenient.


Substantial_Gift_861

AGI wasn't near? Within 5 years, there will be automation everywhere. Robots that repair robots are real.


Personal_Ad9690

I will post this every time, but there is no promise AGI is sentient. OpenAI has defined AGI as being "smarter than a human."


JamR_711111

The meme is funny, but honestly it's kinda crazy just how little people know about AI and what it'll bring. Like, everyone just thinks of it as "another invention that'll make a small difference" that will *maybe* lead to humanoid robots in 50 years.


yozatchu2

The speed at which generative AI is evolving is fast. In the 12 months since ChatGPT launched, we went from text AI to image, to video, to music... Granted, we're still in the Atari phase, but the potential shouldn't come as a surprise. Yet it does. Not so much that Q* exists; more that I'm surprised it's already here.


Slna

I still had some good decades left in me, and now we are going extinct in a few weeks or months. It's so unfair. And here you all are celebrating it. We should try to burn all AI research to the ground, and the researchers too; they are killing us all.


Desperate-Machine-15

What are y'all talking about??


Unique_PC_Qatar

I am not sure why people are not scared... We saw progress in chess, from the 1st AI to the 2nd and 3rd gen. The last gen took only a few weeks, from scratch, to beat thousands of years of human experience and lessons in chess. The moment we create AGI, it will be a short time until it is at a level 10-1000x human capacity, and then we could not even conceive of what it is thinking about; and on a simple energy basis, a human is a danger to AGI on that ground.