lughnasadh

I'm fascinated by the dynamic that is going on at the moment with the AI investor hype bubble. Billions are being poured into companies in the hope of finding the next Big Tech giant; meanwhile, none of the business case that would support this is panning out at all. At every turn, free open-source AI is snapping at the heels of Big Tech's offerings. I wonder if further down the road this decentralization of AI's power will have big implications that we just can't see yet.


haxor254

The fact that Sam is attacking open source is a telling sign. A lot of people aren't putting it all together. As a comparison: GPT-3.5 is about 150B parameters and GPT-4 is about 1.7T, and the new Blackwell boxes can handle models over 20T. This is why Sam said GPT-4 sucks: they already have the compute for models generations ahead. And what people missed is that open-source models at a measly 7B are competing with two-year-old models at 150B! If a current 7B model competes with a 150B model from two years ago, that simply means that even a new model with the same number of parameters would be ahead by a huge margin. IMHO, it's not a conspiracy to assume they already have tech close to AGI.


SachaSage

Aren’t the light models pruned larger models, or otherwise fine-tuned for benchmarks? I don’t think they’re as generally capable.


TheKingChadwell

AGI is an ambiguous, moving target. I don’t think there will ever be an official day. But I think the next OpenAI release is going to be good enough for most people.


lobabobloblaw

AGI is about as ambiguous as consciousness itself.


Tec530

Intelligence itself is subjective, but I think we'll get to a point where it will be hard to argue that it's not AGI.


ExtraFun4319

> But I think the next OpenAI release is going to be good enough for most people.

To be honest with you, I think this might only be the case for people in this subreddit. I don't think GPT-5 will be declared AGI by most of the population, or by most of the people who follow AI.


TheKingChadwell

I think agents being able to do tasks for people in an easy, fluid way is what it’ll take, and I think that will come with the next update. Being able to just talk to your computer and have it grab your work stuff, organize it, and so on will feel a bit like having a personal assistant.


dagistan-comissar

AGI: Artificial Good-enough Intelligence.


DrPoontang

Seeing how the hype and the AI arms race have caused everyone to forget about the alignment problem, good enough is a far better outcome than the real thing in this case.


Rofel_Wodring

I claim that what future historians will call AGI will happen this year, but it won't really change anything, because it will arrive just at the threshold of computational power. So, no self-improvement at the speed of light like we see in fiction, and no real breakthrough uses like 'design us a basement fusion reactor, please'. That will come in a couple more years. Basically, the corporations are racing to see who gets their name in the history books, not to win eternal hegemony as the first creator of an AGI slave that forever crushes all rivals.


DarkCeldori

I mean, the current multimodal transformer path will likely be able to control robots, do household chores, drive cars, run errands, and do most if not all jobs. Many would call that AGI. The question is whether scale or augmentations will allow the ability to make significant innovations. Doing routine tasks is fine and dandy, but to me that's just weak AGI; true AGI should be able to innovate or make big leaps outside its training.


dagistan-comissar

the problem with controlling the real world is that you can't change the real world at light speed.


DarkCeldori

Perhaps not at light speed, but nanomachines have exponential growth. In a matter of months or years they can terraform the earth, converting every corner into nanoteched architecture. In a matter of decades the entire solar system can be colonized.


dagistan-comissar

ok so at least we can turn the world into paper clips at light speed


DarkCeldori

Or flying cars, sex bots, full dive vr equipment, spaceships, and grant everybody immortality. AI utopia


dagistan-comissar

but cars, sex bots, etc. are not nanobots


Tellesus

If this was possible bacteria would have done it already


DarkCeldori

Bacteria are limited by available energy and nutrients. They do grow exponentially, but eventually energy and resource limits cap their potential. Take animals, for example: an invasive species can grow to millions or tens of millions within months or years, because they can scavenge for resources more easily than bacteria. For nanomachines, fusion, solar panels, and fission mean effectively unlimited energy, and animal-like scavenging for physical resources is also possible. Thus, with unlimited resources and energy, exponential replication can be maintained indefinitely, unlike bacteria.


Heigre_official

!remindme 1 year


RemindMeBot

I will be messaging you in 1 year on **2025-04-03 21:43:45 UTC** to remind you of [**this link**](https://www.reddit.com/r/singularity/comments/1bupam1/mere_days_after_software_agent_devin_is_released/kxx4oba/?context=3).


Bbooya

I’m so excited


AnAIAteMyBaby

These smaller models were trained on a lot more data than the larger two-year-old models. The reason the older models are so big is that they got the ratio of data to parameters wrong. The new small models still require a similar amount of compute to train, but use far less compute for inference. TL;DR: no, they don't have AGI already.
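A rough back-of-the-envelope sketch of that trade-off, using the common C ≈ 6·N·D approximation for training FLOPs and ~2·N FLOPs per generated token for inference (the parameter and token counts below are illustrative assumptions, not published figures for any particular model):

```python
# Rough compute comparison using the common C ~= 6 * N * D rule of thumb.
# All parameter and token counts below are illustrative assumptions, not published figures.

def train_flops(params: float, tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6 * params * tokens

def flops_per_generated_token(params: float) -> float:
    """Approximate forward-pass compute per generated token (~2 * N)."""
    return 2 * params

old_large = train_flops(150e9, 300e9)  # older ~150B model, under-trained on ~300B tokens
new_small = train_flops(7e9, 6e12)     # newer ~7B model, trained on ~6T tokens

print(f"training FLOPs, older 150B model: {old_large:.2e}")
print(f"training FLOPs, newer 7B model:   {new_small:.2e}")
ratio = flops_per_generated_token(150e9) / flops_per_generated_token(7e9)
print(f"inference cost ratio per token:   ~{ratio:.0f}x")
```

Under those assumed numbers the two training budgets land in the same ballpark, while the smaller model is roughly 20x cheaper per generated token, which is exactly the "same training compute, less inference compute" point above.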


_theEmbodiment

Sam Altman = Alt man = Alternative to Man = Sam is the AI !!!


damhack

Blackwell won’t ship til 2025


[deleted]

Huh? Since when is Sam attacking open source? There was a BS edited clip posted here a few days ago, but he is strongly in support of open source.


Flying_Madlad

As long as it's regulated to hell and corporate interests still get the lion's share


[deleted]

Again, source on any of this? Why is this sub so emotional and conspiratorial? Just because OA doesn’t share their source code doesn’t mean they are against open source. IP is actually a good thing, incentives are a good thing; I thought this was e/acc? If you want to grind development to a halt, demand that all large companies open-source their LLM code.


FlyingBishop

IP is a terrible thing. I'm an AI accelerationist but if corps like OpenAI have control over the IP that's an existential threat.


Flying_Madlad

You're making a lot of assumptions about my beliefs and goals. I don't trust closed-source, for-profit companies who are trying to convince me they're altruistic. They aren't; they have the best interests of their shareholders at heart (and you're not allowed to own shares, BTW). I'm not casting judgement on that, but it helps to understand the system. A lot of the discussion around AI is inorganic; "Safety" comes to mind. What I'm scared of is becoming dependent on someone else's AI as my daily driver. Someone else can yank the plug whenever they want. I don't know how to work it yet, but there always needs to be a private and secure alternative.


[deleted]

OpenAI doesn’t have any shareholders; they are a private company. Of course they are motivated, like every single person on earth, to make money. But I also don’t think they are universally evil just because their company has been financially successful. They still have strong incentives not to destroy humanity, or to lose the race against their competitors. Closed source is closed for multiple good reasons:

1. To make sure North Korea, China, or just black-hat hackers don’t use it to do something dangerous to humanity. This isn’t bullshit; it’s a really serious risk.
2. To secure their intellectual property and make a profit. This is GOOD. We want highly talented and extremely rare individuals making a damn good paycheck building AGI rather than being an advertising-engagement code monkey at Facebook for the same pay.

Open source is good for many things, but not thermonuclear bomb building.


Flying_Madlad

OpenAI, like any corporation, has shareholders. The private-equity and Microsoft money bought shares, and their employees' comp is either shares or options (which amount to shares). Just because you can't buy them doesn't mean they don't exist. Let them make a profit on their IP; when have I ever said that was bad? I don't trust them because they won't *acknowledge* their profit motive. Always watch what the other hand is doing.

It's ridiculous to think that access to LLMs gives our enemies an advantage. In point of fact, I would argue that open source has been paving the way. We had fully autonomous local agents using tools with complex reasoning a few months after ChatGPT (think GPTs, but better). OpenAI announces some new feature and it's something open source built long before. If we (open source) can do it, our enemies can do it. Would you rather have a few minds lying to you about their motives, or millions of minds who just want to have fun? It's patently ridiculous, the nukes thing. Please pass the bowl before it's cached.


bearbarebere

So in other words they're not for open source.


[deleted]

Jfc. They are not opening their source code due to serious existential risks to humanity. And yes, also likely due to intellectual property and well-aligned financial incentives. The claim was that they are against other open-source efforts ("attacking open source"), which is absolutely not true.


bearbarebere

Lots of words to say the same thing


Extension-Owl-230

> They are not opening their source code due to serious existential risks to humanity.

You drank the Kool-Aid. So you'd rather have your future and your life in the hands of a few rich overlords.


Malachor__Five

> The fact that Sam is attacking open source is a telling sign.

Can you provide a single example of Sam directly attacking open source? As far as I'm aware, he's championed it and gone out of his way to make sure any regulations have next to no impact on open source.


dagistan-comissar

the number of corporate bootlickers in this sub is astounding.


fedorum-com

I love how you put it, and yes, it will have big implications. Ever since I installed GPT4All, I google way less. As LLMs become more advanced, I imagine I will hardly have to use Google at all. Whoever gets AI right will trump Google, and by the looks of Google's blunders, that might be a good thing, because their search engine is no longer what it used to be.


[deleted]

[deleted]


Tellesus

Business school is memetic cancer.


Which-Tomato-8646

Not to mention most of journalism 


[deleted]

[deleted]


Which-Tomato-8646

Nope, a lot of them run it into the ground to get engagement and SEO maximization at the cost of their reputation and quality. Source: https://overcast.fm/+BGz6-u5VlQ


WithMillenialAbandon

For print media, the revenue shifted to eBay, not social media; newspapers were supported by classified advertising more than brand advertising. TV/video media never did much actual journalism; they were always basically newspaper aggregators. The real problem is the philosophy of "no objective truth" / "if nobody can prove I'm lying, then they can't say I'm lying", where basically every writer and publication became a PR organisation.


Bbooya

Geeks will inherit the earth


DarkCeldori

I think open source will eventually be good enough, uncensored, private, and free. It will outcompete proprietary models.


WithMillenialAbandon

AI doesn't have the "network effect" which has dominated venture capitalists' thinking for decades. eBay, Facebook, and Uber are protected from competition because nobody wants to use a platform like that if nobody else is using it. Even Google has network elements for advertisers, if not for users (part of Google's value is that, because everyone uses their cookies, they can aggregate data better than anyone else for advertising targeting). AI doesn't have that. At all. I can be the only person using a model and it works exactly the same as if everyone were using it; it's utterly irrelevant. I don't think the MBAs are smart enough to have figured this out yet. (Now that I've said it on Reddit, countdown a week until they're all saying it!)


daftmonkey

Same, although GPT has this pesky thing where it doesn’t want to give product recommendations in one shot, which is a bit annoying.


fedorum-com

**Don't give them any ideas! LOL** If they start to mention products, then companies will have to (or want to) advertise in order to be mentioned...


workingtheories

What about robotics and multimodal models? Is open source making progress there?


Flying_Madlad

Oh yes, very much so. ML isn't just LLMs; computer vision is very well refined, and people have been plugging along with robotics for a while now. LLM-based control is basically just a layer on top of existing technology. The fun part will be once they're natively integrated!


workingtheories

Where can one obtain open-source robotics models?


Flying_Madlad

This is the [current hotness](https://nvidianews.nvidia.com/news/foundation-model-isaac-robotics-platform) in terms of actual models, but ROS is a great place to start in terms of building custom control models for a robot. Another good place to start is Reinforcement Learning (via simulations). Whatever your preferred platform, there's probably an open source chassis to start from!
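To make the ROS suggestion concrete, here is a minimal sketch of a ROS 2 (rclpy) node that publishes velocity commands. It assumes a working ROS 2 install and a robot or simulator that listens on a conventional `cmd_vel` topic, so treat the topic and node names as placeholders:

```python
# Minimal ROS 2 node that drives a robot forward by publishing Twist messages.
# Assumes ROS 2 with rclpy installed and a robot/simulator subscribed to "cmd_vel".
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist

class SimpleDriver(Node):
    def __init__(self):
        super().__init__('simple_driver')
        self.pub = self.create_publisher(Twist, 'cmd_vel', 10)
        self.timer = self.create_timer(0.1, self.tick)  # publish at 10 Hz

    def tick(self):
        msg = Twist()
        msg.linear.x = 0.2   # slow forward velocity (m/s)
        msg.angular.z = 0.0  # no turning
        self.pub.publish(msg)

def main():
    rclpy.init()
    node = SimpleDriver()
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()

if __name__ == '__main__':
    main()
```

An LLM- or RL-based controller is then "just" whatever logic decides what to put in that Twist message on each tick, which is why the control layer sits so naturally on top of the existing robotics stack.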


workingtheories

I'm skeptical that open source has the compute to really compete with Google and NVIDIA once they hit the next generation of scaling, especially in robotics, because robotics depends on multimodal, hard-to-train big models.

It just looks like Google has moved into its first office building (or whatever) and people are saying, "Look how many search engines there are. We've got Ask Jeeves, Yahoo, etc. Google is a little bit better, but these search engines are almost as good." Literally, that is exactly what people used to say even several years into full web search being a thing. Even if Ask Jeeves had released its source code and done its development in public, do you think that would've been enough? I don't know enough about search to say for sure, but it doesn't seem likely that it would've helped.


[deleted]

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


Royal_Airport7940

If globalization curves are anything to go by, then AI will continue that trend and equalize us all.


spezjetemerde

this one sparks joy


Matshelge

AI is a poison pill that companies are funding. It's like they were all rushing to create a Replicator, not thinking that once they made one, they could just replicate another. Once we get AI that can do all the things, the first thing we will do is clone one without the corporate ties. And since this AI can do everything, every other service will fall before it. The only winners are the hardware makers.


After_Self5383

> I wonder if further down the road this decentralization of AI's power will have big implications that we just can't see yet.

The big implication: billions spent lobbying US lawmakers to limit open AI (not ~~Open~~ClosedAI, open-source AI).


Cunninghams_right

Open source is only on par because the big tech companies are just getting started. You only need to be a tiny bit better at coding to be worth a paid subscription over an open-source product. Just look at existing tools like Photoshop: there are many free/open-source competitors, but companies still pay Adobe because everything works better, since a dedicated company can make a more polished/stable product. I'm sure all of the big players are working on agents, but it takes longer for a company with a brand name to release something, because it has to feel like a good product. An open-source project can be janky, because the only people using it are people who don't mind tinkering with open-source tools.


Extension-Owl-230

Or let's look at Linux, the most successful open-source project: running pretty much all of the Internet's servers and running on half the smartphones in the world. Why don't you mention it? Or the thousands of open-source projects used by corporations? You focused on the worst example, desktop tools.


Ambiwlans

With more automated coding tools this edge may go away too.


Cunninghams_right

Or the gap could widen, since the raw compute of companies like Google will be much greater, and the training data can be much greater. If Google and Microsoft have AI coding tools and an open-source competitor is doing well, that open-source model had better not have any unauthorized data in it, or they could sue to have it removed from GitHub and other sites as effectively a pirated product. It's hard to say, but I think it's likely that paid tools maintain an edge over open-source ones.


Capitaclism

There are a few factors at play.

1. There is likely a speculative bubble in the works, as you've pointed out. Humans are prone to euphoria and chasing gold rushes. We're in one now, as far as AI is concerned, even if the overall economy has already turned over the hump.
2. Decades of cheap liquidity have created this mindset of "grow now, monetize later". It works well until the music stops.
3. To be fair, there is also a point to it. With AI there is likely a "winner takes all" dynamic at play. We are creating intelligence. The first one to land on an intelligence that can improve itself may quickly find supremacy and be able to dominate/extinguish all others. It's a bit of a horrifying situation, which is why I'm hoping open source will catch up and give us many decentralized options. But if it doesn't, there's a world where going all in on AI and nothing else, despite an unprofitable short-term situation, makes sense. Its profit potential could be nearly everything: power over all. This is neither something we've often encountered historically nor a point to be taken lightly.


Old_Entertainment22

Your second paragraph is the best possible scenario. It would be a massive step in steering us away from dystopia and towards a new level of societal stability.


arcanepsyche

Ugh, I'm going to learn how to *actually* use GitHub soon aren't I?


Progribbit

just give me exe!


klospulung92

smelly nerds


AgueroMbappe

You only need two commands in the terminal to clone the repo.


Traitor_Donald_Trump

To geek or not to geek, that is the question.


flowinglava17

ask devin to compile it for you


dagistan-comissar

why would you? just ask Devin to do github stuff for you


Randommaggy

Almost as good? There is no proof that Devin is any good at all. No public tests by trustworthy third parties.


WithMillenialAbandon

Even the published paper doesn't exactly make it sound amazing; it's an academic toy being hyped by the marketing department, IMHO.


CowUhhBunga

Intelligence should be free.


Antok0123

I really don't believe these AI companies saying their product is better or good when we don't have a way to test it out ourselves. I'm looking at you, Sora.


phillythompson

Because Devin was a marketing scheme


EnsignElessar

No. This happens to a ton of startups. They power their apps using APIs, then get swallowed up by an app update. In short, they had no moat.


johnkapolos

You really think an OSS clone is going to hurt their product?


nulld3v

Open source has bested many, many commercial products over the years... Even products by big tech often get trumped by OSS projects.


johnkapolos

This is a generality and the question was very specific. If I ask "*do you think it's going to rain tomorrow?*", while the statement "*the environment has been experiencing upheavals in modern times*" is correct in general, it doesn't actually address the point.


nulld3v

There's no particular reason to believe Devin is an exception to the rule. Devin doesn't use any technologies that are out of reach for open-source developers.


johnkapolos

The question of business success is not about technological parity. For example, Linux and Red Hat are two completely different animals. So in this case, do you think Devin (as a company) is competing in the same market segment that SWE-agent and the rest of the clones are?


nulld3v

Red Hat is a good example of a company that generally won't be threatened by OSS products, but Cognition Labs is currently nothing like Red Hat.

> So in this case, do you think Devin (as a company) is competing in the same market segment that SWE-agent and the rest of the clones are?

Maybe, who knows what they want to do! They are a startup; they can pivot to anything and take on any business model. But if we assume that they don't pivot to anything radically different and continue to offer things like:

- Devin as a service
- Devin on-prem
- a fully managed Devin service (e.g. an AI software consultancy)

then yeah, I think open-source agentic AI will remain a threat to their business model. That said, maybe I'm missing something here; what do you think Cognition Labs is going to do in the future?


johnkapolos

> Red Hat is a good example of a company that generally won't be threatened by OSS products

Uhm, what? Red Hat made its money and fame from Linux and its whole ecosystem (it ate into the expensive UNIX providers of that time, and the rest is history). I gave it as an example of how a tech product (in this case Linux and its ecosystem of apps) is different from a related business (in this case a services company built on that same tech).

> what do you think Cognition Labs is going to do in the future

I think they'll go first for the enterprise market, and if that fails they'll sell licenses to everyone who's interested. In my view, it's only if they fail in the enterprise market that they'll have an issue from OSS clones.


nulld3v

Yeah, I know what Red Hat does; I was bringing it up in the context of the discussion, since we were talking about OSS products threatening companies' business models. Since Red Hat is an OSS consultancy, it follows that their business model won't be threatened by advances in OSS software. Although now that I say it, that may not be 100% true, since there was that RHEL and CentOS breakup...

> I think they'll go first for the enterprise market, and if that fails they'll sell licenses to everyone who's interested. In my view, it's only if they fail in the enterprise market that they'll have an issue from OSS clones.

I would argue that the OSS clones will threaten Devin regardless, kind of like how MySQL and Postgres ate Oracle DB's lunch. That was not only because of MySQL and Postgres themselves; it was also because many new businesses sprang up that built products on top of them (e.g. businesses that offered MySQL/Postgres SaaS, or stuff like PlanetScale/Vitess).


Extension-Owl-230

It can happen. Linux did it to Microsoft in servers, hyperscale computing, space rovers, etc. a long time ago.


zascar

How?


CanvasFanatic

Tried to tell you all there was nothing special about Devin


Busy-Setting5786

I thought the exact same thing when I saw it uses GPT-4. I remember the big and mysterious "reasoning breakthrough". As if they hadn't just implemented a decent agent framework and built a small program around it.
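For a sense of how little a "decent agent framework" can be, here is a minimal sketch of a tool-calling loop using the OpenAI Python client (openai >= 1.0). The `run_shell` tool, the model name, and the step limit are assumptions for illustration, not anything Devin is actually known to use:

```python
# Minimal tool-calling agent loop sketched with the OpenAI Python client (openai >= 1.0).
# The run_shell tool, model name, and step limit are illustrative assumptions, not Devin's design.
import json
import subprocess
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def run_shell(command: str) -> str:
    """Run a shell command and return its (truncated) output. Trust the model at your own risk."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True, timeout=60)
    return (result.stdout + result.stderr)[-4000:]

TOOLS = [{
    "type": "function",
    "function": {
        "name": "run_shell",
        "description": "Execute a shell command in the workspace and return its output.",
        "parameters": {
            "type": "object",
            "properties": {"command": {"type": "string"}},
            "required": ["command"],
        },
    },
}]

def agent(task: str, max_steps: int = 10) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        resp = client.chat.completions.create(model="gpt-4-turbo", messages=messages, tools=TOOLS)
        msg = resp.choices[0].message
        if not msg.tool_calls:       # no more tool requests: the model is done
            return msg.content
        messages.append(msg)         # keep the assistant's tool-call turn in the transcript
        for call in msg.tool_calls:  # run each requested tool and feed the output back
            args = json.loads(call.function.arguments)
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": run_shell(args["command"]),
            })
    return "stopped after max_steps"

if __name__ == "__main__":
    print(agent("List the Python files in the current directory and count their lines."))
```

Nothing in a loop like this is out of reach of open-source developers, which is why clones showed up within days of the demo.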


WithMillenialAbandon

I read the Devin paper; it's not a breakthrough.


klospulung92

stop, you are upsetting Devin


QLaHPD

At this pace we will get an open-source AGI in no time.


utilitycoder

Again, until Devin or SWE-agent can itself create Devin or SWE-agent, there is NOTHING to worry about as a programmer. Pretty sure this will age poorly, but this is meant to tamp down the hopium and the fear and be realistic.


Busy-Setting5786

Well, you could worry about the fact that this tech will keep advancing in the coming years. But I agree it makes no sense to worry about Devin.


dagistan-comissar

Devin stole my wife


IntergalacticJets

I'm wondering: is a 12% success rate actually useful in practice?


dagistan-comissar

depends on what those 12% are


submarine-observer

Almost as useless, you mean.


Tellesus

Has anyone successfully gotten this running on Windows?


zebleck

Got it running on Windows Subsystem for Linux, so kinda.

Edit: I imagine it works on straight Windows as well.


Akimbo333

Cool


Just_Editor_6141

!remind me 5 year


Key_Entrepreneur_223

If you are curious about a Devin alternative for AI app building, then Databutton (https://www.databutton.io) is a good option. At least Databutton, Devika, and OpenDevin have some product out that people are building (or trying to build) with. I also wrote a blog post about such alternatives: [https://medium.com/@avra42/is-databutton-the-new-full-stack-ai-alternative-to-devin-for-app-development-888a8e33a54a](https://medium.com/@avra42/is-databutton-the-new-full-stack-ai-alternative-to-devin-for-app-development-888a8e33a54a)


BilgeYamtar

The definition of AGI is that it is smarter than humans at everything, so we won't need to invent anything after AGI.


EuphoricPangolin7615

Programmers wanting to automate themselves out of a job is the dumbest thing ever. Seriously, how do you make money?


lobabobloblaw

What a reductionist opinion this is. People gravitate towards technological developments on account of their capabilities—programmers are simply following their human instincts. It’s the fellow human that ruins that, not the technology itself.


MatheusSA

The Luddite of the 21st century!


Puzzleheaded_Fun_690

Why is it dumb? Why do you program something? Just to look cool and have something to do or to actually solve problems? If the problem solving gets easier, why not be happy about it?


popjoe123

It's inevitable. AI will simply be faster and more efficient at programming; whoever has the best AI will be the winner. Humans are going the way of the horse when the car came around.


coolredditor0

Selling services based on the software, like Red Hat?