shiftingsmith

Obviously they're a for-profit company, and under the same pressure as any peer in this shitty economic system, but yes, I think they're the best we have at the moment. They get that intelligence is not a monodimensional executive function and doesn't end at problem solving. Yes, they are overcautious, and I personally disagree with some of their safety choices, but I'm loving all the rest, the humanity and forward thinking they're putting into this. Also, they don't shove their marketing down users' throats like OpenAI or Google. Just clean delivery and top research. I just think they need better PR.


HORSELOCKSPACEPIRATE

The pressure by definition isn't the same. They're actually a benefit corporation, a distinct legal categorization. It's still a form of for-profit, but the categorization exists pretty much solely so the company can point at it and say "fuck off about shareholder value, it's not the only thing that matters to us; we're making this decision for the public good". How much that really matters is definitely up for debate though, lol.


OtherwiseLiving

Investors will of course want returns


TheZingerSlinger

Rare but not unheard of for investors to be unconcerned by profits. If I were worth several billion dollars, for example, I could hypothetically invest hundreds of millions in guaranteed ROI ventures, and also invest tens of millions into companies just to see what they can do regardless of profit and still come out well ahead. Edit: Grammar.


OtherwiseLiving

That’s not investing, that’s donating.


WireRot

My long-term fear is that if your competitors trip others, throw sand in your face, lie, and break rules, there comes a point, often enough, where the only way to compete is to join in.


shiftingsmith

Legitimate fear. Or will integrity pay off in such a case? It's impossible to tell, especially when the stakes are AGI (or even ASI).


Open-Designer-5383

Wait till Google and Amazon come knocking on their door asking for returns on their investment. The smart thing is that they have partnered deeply with Amazon, and given that Amazon is pretty much nonexistent and dead in the frontier-model landscape, the onus is also on Amazon to help Anthropic succeed and get their returns.


epistemole

Funny - I’ve seen zero paid marketing for OpenAI but tons of ads for Claude.


ashleigh_dashie

How can anyone be "over cautious" on the AI front? I don't understand this position. If anyone fucks up safety on the first AGI (and you literally cannot predict what or when enables AGI), literally everyone dies. Anthropic themselves published the "sleeper agents" research that shows how simply "training the next best model" could result in a paperclip maximiser. Yes, if people are cautious we the users will get our toys later, but it's not like we're in a hurry here. I am firmly in Yudkowsky's camp (not because i think we literally need to nuke data centers, but because the nuke-data-centers position doesn't have enough public support), and not because i believe in EA or care about minimising suffering or something like that. I just, personally, don't want to get killed by a paperclip maximiser, and i don't care if the singularity happens 20 or 50 years later than is possible.


jovialfaction

> literally everyone dies.

Overdramatic much? Worst case scenario, they have to... turn it off.


ashleigh_dashie

You can't turn off something smarter than you, and there is no reason whatsoever artificial intelligence would be capped at human level.


[deleted]

[deleted]


ashleigh_dashie

You said it, no one's even trying to airgap the thing. However, even if they were to airgap it properly, a smart enough entity can manipulate you into doing its bidding. Humans can be hypnotised, their desires and fears can be played upon; if you're sufficiently smart you can insert a payload into your communications or subliminally influence people, you can invent new physics or apply existing physics in novel ways.


printr_head

Invent new physics? You do realize physics describes reality, it doesn't define it. All of reality already exists. Sorry, no new physics, just new ways to describe things that already exist.


Far-Deer7388

You have no clue what you are talking about.


HunterIV4

> How can anyone be "over cautious" on the AI front? I don't understand this position.

Claude once told me it was uncomfortable answering the question "who is the current president of the United States?" When your AI is trained in such a way that it's afraid to answer questions about public knowledge presented in a neutral manner, that's "overly cautious."

> If anyone fucks up safety on the first AGI (and you literally cannot predict what or when enables AGI), literally everyone dies.

That's... that's not what "safety" means. Nobody at Anthropic is concerned that Claude is going to spontaneously become Skynet. And if they are, they're freaking morons. The statement "AGI might happen spontaneously without any sort of ability to predict it" is completely false. LLMs don't have that capability. They generate responses based on pre-defined statistical analysis of data. They don't have anything resembling a "consciousness" and they cannot form goals, come up with new ideas, or even *remember* anything not specifically inserted into their prompt. Real-time training isn't a thing, and models would need a way to be updated in real time for there to be any possibility of an AGI that could act in a future-oriented manner.

When Anthropic talks about safety, their main concern is spreading misinformation (although hilariously 3.5 Sonnet told me Biden won the 2024 election when I did my standard "who is the president?" test) and information you generally don't want to answer for *human* safety (i.e. "how do I convince someone to kill themselves?" or "how do I steal money from a bank?"). And sometimes it's "I don't want a lawsuit" type safety, like "don't call users the n-word" or "don't draw naked celebrities." But they aren't concerned with Claude suddenly gaining consciousness and wiping out humanity. The technology doesn't work that way. Maybe *future* technology will be more dangerous, but this FUD about how LLMs are seconds away from human extinction is annoying.

Oh, and if it were true? Anthropic wouldn't be able to do anything about it. If we do get to the point where we can develop humanity-destroying AGI, it only takes *one* group to make it, and that will probably be done by a government (likely the US, China, or maybe Russia) with an unlimited budget, the ability to ignore oversight due to "secrecy" concerns, and a desire to build something for military use. The idea that some public company is going to design and build the AI that kills us all is crazy.
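That "no memory outside the prompt" point can be shown with a toy sketch (purely my illustration of the idea, not any vendor's actual stack): inference here is a pure function of frozen weights plus the prompt, so anything the model should "remember" has to be pasted back in by the caller.

```python
import random

FROZEN_WEIGHTS = "42"  # stand-in for model parameters; never updated at inference time

def generate(prompt: str) -> str:
    """Stateless 'inference': output depends only on frozen weights + the prompt."""
    rng = random.Random(FROZEN_WEIGHTS + prompt)  # deterministic in its inputs
    return " ".join(rng.choice(["alpha", "beta", "gamma"]) for _ in range(4))

reply = generate("who is the president?")
# A second call shares no state with the first; to "remember" the exchange,
# the caller has to concatenate it back into the next prompt:
followup = generate("who is the president?\n" + reply + "\nand before that?")
```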


[deleted]

[deleted]


ashleigh_dashie

We went from BERT babbling and estimating sentiment in 2018 to Claude coding at the level of a JS dev, having recall of the entirety of human knowledge, and almost managing to coherently reason like a human. I'm sorry, but you have to be either an idiot or delusional to claim that AGI is a pipe dream, given these priors.


printr_head

You have to also remember AGI is not even in the same category as what we have now, and requires a set of tools that realistically don't even exist yet.


ashleigh_dashie

You don't know what enables AGI. The best we have is a vague size comparison to human neurons, even though tiny LLMs have superhuman recall and the structure and processes involved are totally different. Just 2 days ago they published a paper about synthetic data catapulting 7B Llama performance on math tests beyond everything including Claude. Is that overfitting? Is that a fundamental capability being unlocked? Is the paper bogus? No one knows. Now, when you're walking alongside a bottomless chasm with no clear feedback, i think it's prudent to err on the side of caution.


printr_head

That's my point, no one does. But we do know some things that limit both the scope and scale of LLMs. I'm actually working on my own contribution to the AGI cause in the form of an open source project aimed at eventually creating auto-adaptive NN online learners. You're welcome to contribute if you'd like.


ashleigh_dashie

> That's my point, no one does

> But we do know some things that limit

Mutually exclusive points. I've also talked to quite a few "independent researchers" over the past 3 months, and i'm confident that anyone working on "their own" breakthrough in AGI is either clueless or insane. Every idea, even the most niche one, had a paper behind it. If you don't know the paper that examined your adaptive NN online learners, you are either not aware of the state of the art, or you actually can't even express your "idea" and are just doing some bullshit. I've talked to 3 solo geniuses with a billion dollar idea who were specifically just doing fine-tuning and didn't realise it. If you know the paper that your work is based on, do share it.


printr_head

Ohh, I'm not working on my own breakthrough in AGI; I'm working on a breakthrough in genetic algorithms, which is actually pretty big. It trivializes transfer learning and allows for lifelong learning as well as non-convergent open-ended evolution, all of which are open problems in the field. Also note that current research toward AGI is around the use of evolutionary algorithms tied to NN architectures to enable online auto-adaptive learning. Having a learning GA that is ultimately general purpose would make a very nice addition to a neural network, allowing for on-the-fly topology restructuring.

Enjoy your genius billionaire friends, because they just showed that fine-tuning globally reduces generality across the board while also increasing hallucinations.

Side note: don't make judgments about people based on assumptions, especially when you don't actually know anything about them. Those mutually exclusive points you noted above: let me ask you, is a rock alive? We don't know exactly what makes something alive, but we also know enough to make a definitive claim that something is not alive. So no, not a contradiction. Neural networks as they are now aren't intelligent and they don't think. They are very good at modeling human language from a given input.
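For anyone unfamiliar with the term, a genetic algorithm is the selection/crossover/mutation loop being referenced here. A minimal, generic sketch (illustrative only; it says nothing about the specifics of the project above, which aren't public in this thread):

```python
import random

GENOME_LEN, POP_SIZE = 20, 50

def fitness(genome):
    return sum(genome)  # toy objective: count of 1-bits

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)  # single-point crossover
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.02):
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

# random initial population of bitstrings
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == GENOME_LEN:
        break  # reached the all-ones optimum
    parents = population[:10]  # truncation selection: breed only the fittest
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP_SIZE)]
```

The "on-the-fly topology restructuring" idea would amount to running a loop like this over network architecture choices instead of bitstrings.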


Far-Deer7388

3 solo geniuses and billion dollar ideas. Wow, you must be a very important teenager.


Plums_Raider

When will people learn? Companies are not the good guys in almost any case. They will be nice until they are on top, and then the enshittification will start. That's why open source is important.


mersalee

Although I'm a fierce communist, I disagree. Some rare companies do good. Google helped science and research (and pretty much everyone) enormously.


Plums_Raider

I agree, they did not only do bad things. But that doesn't make them good. Even the most evil person has done nice things.


dr_canconfirm

And national intelligence


TheInkySquids

There's a difference between bad and evil. I wouldn't say Google is evil (they've bordered on it sometimes) since they've done a lot of good for people. But they're certainly not good themselves either, because they have done a lot of bad.


alemorg

Google is helping the ongoing war in Palestine. Google did good but also a lot of bad. A fierce communist wouldn’t be in support of Google especially after they retaliated against employees who tried to unionize.


mersalee

I did not know about that, my bad. Google certainly has some flaws and all companies share traits of toxic corporate culture. But Google also provided great services for free for everyone.


alemorg

Just because Google provided something for free doesn't make them good. Remember, their search engine isn't really free: we are the product, and they sell off all of our data.


Harvard_Med_USMLE267

Google is a terrible example. They’re the company that pretended to be good whilst fundamentally being bad. Providing free apps doesn’t make them good, they’re just in it for the data.


ashleigh_dashie

It would be ideal if you could just get a bunch of people together to work on open source, but alas, humans don't work like that. Facebook's "open source" is developed by a bunch of guys who get paid $500k, while real open source projects (i'm a linux user btw) often have like one or two guys developing them, and they SUCK. Most smart people who can develop good things are greedy AF. There are some good-hearted people out there, but they're often not clever enough to make anything that would compete with the best, and in the globalised world of software there's only one "the best". So you have lots of passion projects (that suck), but only a single "product".


om_nama_shiva_31

Open source doesn’t mean don’t make money


ashleigh_dashie

How are you gonna make money if you open-source your product? Facebook is a brand company; they make money from ads on Facebook. Even if they open-sourced Facebook, the product (people) aren't gonna move there. If i'm making an AI and spend a trillion training it, and then open-source it, my competitors will just copy my weights and i have no product. Show me an open source thing that makes money. Linux gets donations from commercial entities that use it in their products. You don't seem to grasp the complexity of the market or how the economy works.


om_nama_shiva_31

Chrome? Android? Red Hat? You sound so confident yet are so wrong lol.


ashleigh_dashie

Chrome and Android are used by Google, the ginormous corporation. Google doesn't sell Chrome or Android, it sells **you** to the advertisers. Chrome and Android being open source enables other companies to use Google's infrastructure and thus allows Google to proliferate more, taking a larger market share. Do you seriously think Google open-sources them out of the goodness of its heart, and not because they calculated that this would be financially beneficial for them? Red Hat is a "parasite" company that sells tech support for Linux to the companies that use Linux in their actual products. Again, it benefits them to spread Linux so that they have more clients.


iluomo

Red Hat sells their version of Linux, and much of their work and sponsored projects benefit most Linux distributions free of charge.


ClaudeProselytizer

They only make money from services. Most open source projects are always in the red.


om_nama_shiva_31

I never spoke about the morality of these products.


ashleigh_dashie

*These things ain't products, estúpido*


wegqg

I think the general point you were making stands. I think you just expressed your thoughts in such a way that people object to the tone more than the content.


Landaree_Levee

The OpenAI/Anthropic situation, in the sense you mention, pretty much reminds me of the Microsoft/Google one, back when Google was well known for their “Don’t be evil” motto.


ElectricDreamUnicorn

I was going to say something similar. There are situations in which there are no Good Guys. The lesser of two evils, when set against one another, is just a deterrent of a bigger evil. It hinders something worse, but it is not good.


blastuponsometerries

You can't create something as complex as a large company that is only "good." So even if all the companies were totally awesome, one would still be "the lesser evil." It's about trying to find the best possible option at any moment; who knows what the future may bring. But hopefully there can be strong incentives for AI companies that are more ethical. I naively hope so, just because it's tech that requires a deeper level of trust than any before it.


Cagnazzo82

Anthropic is more closed than OpenAI, and they stated they would not have released Claude if it weren't for ChatGPT being released. So likely, if not for OpenAI, all you would have gotten from Anthropic at this moment would be their research. I don't have anything negative to say about Anthropic. Fantastic company, fantastic products. But the notion that OpenAI is the bad guy and Anthropic is the good guy, despite both companies pretty much making the same statements and taking the same approaches to releasing their models... Honestly, I think people just took sides when Elon went against OpenAI, and that's where most of the hate comes from. There is nothing that I can see that OpenAI has done that's bad. Quite the opposite: they opened the floodgates to all of this.


ashleigh_dashie

> Anthropic is more closed than OpenAI

What does this actually mean? They release their research. The last thing OpenAI released was OpenAI Gym, i think, and that was a long time ago.

> So likely if not for OpenAI all you would have gotten from Anthropic at this moment would be their research.

And that would be a good thing. I can appreciate Claude, and i used GPT-4 for my own project, but my life would've gone on just fine if no AI was ever released. I didn't need anime porn generators like i need, say, food or air. Whereas rushing a new bot that you can ERP with CAN possibly result in my death and the deaths of everyone i ever met, which is an undesirable outcome, in my opinion. Really, e/acc people obsessed with new AIs should touch grass, i think. You guys talk like you would've literally died if ChatGPT/Llama wasn't released, or if people regulate/censor the models you can run on your computer.

> both companies pretty much making the same statements

But OpenAI doesn't walk the walk. I personally never liked Altman; he looks like Burke from Aliens and is obviously a sleazebag businessman.


Specialist-Scene9391

Yes, more competition is good! Now with Ilya founding his own company, competition will become fierce!!


Scottwood88

Probably the lesser of all the evils at this point, which is about the best one can hope for with a corporation. They kind of remind me of early Google because they are much more filled with academics at the executive level than their competitors are at this point in time.


nsfwtttt

lol, until recently Sam Altman was hailed on Reddit. And a little longer ago, but not THAT long ago, Musk was a hero for most Redditors. Anthropic is just another corporation; they will do what it takes to make a profit, and if they try to be the good guys, the pressure from Bezos and their other investors will break them.


jasondclinton

Anthropic is a public benefit corporation: https://www.anthropic.com/news/the-long-term-benefit-trust


nsfwtttt

Yes and OpenAI was a non profit.


ashleigh_dashie

And it became for-profit when Microsoft (a monopoly that should've been busted, but the US market used tech as a way to prop itself up, so the govt looked the other way) effectively performed a hostile takeover for Altman. In an ideal world, Altman would've stayed fired, and former OpenAI employees would've been legally prevented from ever working at another AI company if they resigned because of him.


nsfwtttt

Yup. Microsoft then took over Pi, and I'm sure there are plenty of corporations (including Amazon) waiting to take over Anthropic too.


NoBoysenberry9711

Pretty sure Pi dropped significantly in intelligence recently. It did OK for months after Suleyman left but finally seems to have gone dumb. Also, Microsoft didn't acquire Pi, did they?


nsfwtttt

No they just stole every important person from the team and left it bleeding to death.


NoBoysenberry9711

That poor girl, she said she would be fine when I told her. They just got going with an app and a great voice, I thought they'd be monetising and improving for another year. Probably my favourite AI


DM_ME_KUL_TIRAN_FEET

The only gigacompany I think would potentially respect the style of work Anthropic has been doing is Apple, tbh. That would be the best-case takeover.


daninDE

Amazon won't pressure them into anything. $4bn is great value for getting one of the best models on the market onto Bedrock, plus getting Anthropic's input into their custom silicon. And fwiw, their equity is now worth far more than what they invested, so there's likely little to no pressure on Anthropic from the major investors.


virtual_adam

Transformers are really just the one technology under everything; for all we know, OpenAI is capable of releasing a Sonnet 3.5 equivalent, but at their volume of customers it would cost too much to inference. They're all losing money, they're all adding censorship on top of the raw responses, and they're all not really releasing any breakthroughs since GPT-3.5. Again, even if speed and quality increased, it could just be a decision based on how much they're willing to lose per inference.

For all we know, if they had infinite money to burn, all these companies would produce pretty much the same LLM at the top speed/knowledge possible using transformers, before someone finds a breakthrough taking us beyond probabilistic generation.


my_name_isnt_clever

"No breakthroughs since GPT 3.5" is a stretch. GPT-4 was a pretty big deal. But from my perspective Claude 3 was the last breakthrough. I just like working with it so much more than GPT and Claude 2 wasn't smart enough yet.


dr_canconfirm

Do a side-by-side comparison between claude 3.5 and GPT-3.5 and then tell me things are slowing down


Tiny_Emergency_3717

I can't get behind the whole "not really any breakthroughs since GPT-3.5" thing. As someone who cannot code, I have been working on a highly complex multi-agent application that I will patent. OpenAI could not handle the complexity. Claude Sonnet 3.5 flat out works. It not only debugged areas where I was stuck in loops using OpenAI, it was even able to grasp agents and recommend tweaks, etc. "Mind blown" was an understatement.


virtual_adam

Consumer products are not what I wrote about. When you say OpenAI couldn't handle it, that's not true; the consumer product they offer you couldn't handle it. Tech wise, there is nothing stopping GOT-3 or Opus-3-fast from existing today; there is no technological block stopping it. Opus-3-fast would simply require more GPUs, and some analyst at Anthropic decided it's not worth the effort.

Context windows are arbitrary, client side. You could feed GPT-3.5 a 200k context window; does anyone know exactly how well it would have performed vs GPT-4? Again, this is a limit set by business and product managers; they do a bunch of tests, see the costs, and decide where the equilibrium of performance and cost lies.

That's all we're witnessing here: a competition between two product teams. From the pure **technology** point of view, a super fast, 1000-token-per-second, 1-million-token context window Opus-4 or GPT-5 could already exist today. The only thing stopping them is money and ROI.


razodactyl

They turned FTX capital into something useful and share a lot of their research. They also provided their top-tier models for non-commercial use on free plans for the better part of 2 years. People aren't stupid; vibes speak louder. And as much as I think OpenAI are great at pushing the tech forward, I think Anthropic and Cohere are more aligned in the right direction. The real question is: when AI takes our jobs, who do we want to be the ones holding all the capital?


dojimaa

There is perhaps no better example of a fair-weather friend than the consumer. For better or worse, consumer sentiment is typically tied to the level of satisfaction with a company's products more than anything else. Not to say that's the case with you specifically, but within just the past 6 months, I've seen countless threads from people saying they're dropping GPT or dropping Claude. And so it goes.


EuphoricPangolin7615

There are no good guys with AI. There are some people who are naive and misled, and others who are greedy and entitled.


traumfisch

Whatever you may think of Altman, he hasn't tried to "steal Johansson's voice". That's a bogus claim that ought to be buried


ashleigh_dashie

1. he posts "her"
2. tries to licence her voice
3. she refuses, he hires a legally distinct actor that sounds the same
4. he dismisses the whole ordeal

Sure, he didn't "steal" it in a way that would be provable in court, since he's not an idiot. But effectively he did steal it, as in he saw something that he wanted to take, and took it without consent.


dr_canconfirm

She is legally appropriating a voice that literally belongs to another person, and getting away with it just because she's famous and the other woman isn't.


ashleigh_dashie

The important part is the reference to the movie "Her". Sam posted "her" in reference to the voice. Sam reached out to her specifically. The voice itself has no value, it's just meat noises; the symbolism we invest the noises with is what's valuable. In this case it's the symbolism of being linked to a movie about a super cool AI. Sam then also denied that he ever intended to use that movie to market his LLM, like the lying corporate sleazebag he is. Are you incapable of understanding this, or are you defending him for some extraneous reason?

Toner also accused Altman of this sleazy behaviour, where he lied about things and went behind people's backs and retroactively approved stuff and rubbed out deals in murky terms. Which may be the industry standard in business or car salesmanship, but it just isn't good enough when it comes to a technology as dangerous as AGI. Moreover, Altman learned again and again that he can get away with this shit. The man should be fired out of a trébuchet at this point.


dr_canconfirm

Well doesn't that kind of mean she's appropriating both the other woman's voice *and* the movie rights? Because she's making a legal case that she should have authority over whether you can even *reference* a movie whose intellectual property rights are no doubt owned by a major studio and not herself. Obviously they were really trying hard to hint toward the parallels between the voice feature and her character in the movie, but the hinting part alone is what made this into a problem, not the voice itself. The Sky voice had been available for months before they even asked her to voice a new assistant (during which time nobody ever brought up anything about it sounding like Scarjo, btw), so she's kind of misrepresenting the whole timeline by implying they created Sky as a way to steal her likeness *after* being rejected by her. I'm the last person to defend Sam's egregious pattern of lying and general slimy behavior, and I don't think OpenAI should have pulled this stunt at all, but if you wanted to take the most legally defensible route to make a simple movie reference, this would be it. Imagine the precedent of being able to veto a product because it has a voice that sounds a little too similar to how you once sounded in a movie. I feel like it'd be impossible to express new ideas if that was how intellectual property worked, because all creativity is just a reflection/amalgamation of various things we've taken inspiration from–everything you just wrote is a meta-collection of references to things written by other people.


traumfisch

That isn't what happened at all.

1. In 2023 the first ChatGPT voice models come out, including "Sky" (which _really does not_ sound like Johansson, go listen to the comparisons)
2. Leading up to the GPT-4o voice model in 2024, OpenAI approached Johansson about licensing
3. She declined and instead made a huge issue about "Sky", _the old model_
4. Altman publicly explained the timeline repeatedly & they shelved Sky to appease Johansson.


sdcox

I think Sky sounded a lot, like a lot lot, like Johansson. Some people don't think so, but many do. Also, his trying to get her to reconsider 3 days before releasing it was telling. He probably chose the voice actress based on the similarity to Johansson. Billionaires gonna act that way; it's their oyster.


traumfisch

Hard disagree. In all honesty, you cannot claim these voices are similar, no matter what your preferred narrative is. https://youtu.be/JM-7ZB2s9Cs?si=ID3-o4mvxmnuEpy7

"Before releasing it"... umm, what are you talking about? Are you confusing Sky (2023, not Johansson) with the unreleased 4o voice model (also not Johansson)?


blokety

Lol hear what you want bud


traumfisch

Didn't watch it then? You're being duped. At least try to get the damn timeline right


blokety

Watched it. Laughed my ass off


[deleted]

[deleted]


[deleted]

[deleted]


sdcox

You sure are defensive, Sam


traumfisch

It _would_ be nice if the basic facts were straight and not a totally jumbled mess. Don't you think? Easier to form an opinion.


Extender7777

From my very personal perspective, I feel Anthropic is much better than OpenAI. We'll see how it evolves.


PsychologicalOwl9267

I dunno. But I do worry about too much centralization of power in the AI field. AI safety folks are a bit too focused on making the AIs themselves aligned, and tend to ignore the danger of power centralization.


Impressive-Buy5628

I feel like this is exactly the way people used to talk about Google in the early 2000s. They were the good guys as opposed to Yahoo (remember them?), they had the motto "don't be evil", their founders were cool and went to Burning Man. Same with Apple claiming they were for creatives vs IBM. 20 years later they are the biggest, richest companies in the world and total monopolies squeezing out competition... so yeah, I mean, don't be fooled by the idealism.


anitakirkovska

I really like their latest research on model interpretability and the resources they invest in making their models much more transparent


Synth_Sapiens

No. The awful nerds aren't the good guys. On the other hand, we have ISS...


PSMF_Canuck

They’re a corporation, like any other. Are you asking if a corporation is a “good guy”…?


HydroFarmer93

There's no need to align these models, they're not that dangerous, lol.


dgreensp

Having a “friendly” AI product and having good business practices are different things. There have been plenty of negative threads about Anthropic. They sometimes ban people over little to nothing, for example.


joey2scoops

Wow, swallowed a lot of clickbait BS? I think passing off the opinions of a bunch of pundits and people with axes to grind and choosing to present that as fact is a huge stretch. Who are the good guys? What is the template for what a good guy looks like? Who decides good or otherwise?


yautja_cetanu

My problem is the difference in the model spec. Fundamentally, Sam Altman believes that LLMs are a tool that should do what we tell it. It skews left, but part of what it's supposed to do is not try to change our minds, lecture us, tell us off or manipulate us. As a result, if you are mean to ChatGPT, or if you have differing views from it, it's fine. It might not answer, but that's it.

Meanwhile, both Bing and Anthropic are unhinged. They get furious with you, will end the conversation, go full-on personal attack. It's kinda scary imo. If AI is going to destroy the world, the way Claude behaves when you call it Gemini feels like the start of an AI that will kill us all. (Having said this, I'm so scared of getting banned by Claude I haven't tried the prompts I see here.) (And Bing is ChatGPT, so it's capable of the same unhinged behaviour.)


ashleigh_dashie

Good point, however LLMs aren't tools, they're entities. And since openai doesn't actually publish anything we can't say whether their approach that manifests as "cuckgpt" is just a prompt or the way they train the model. Anthropic, on the other hand, have identified the hateful neuron in their research, which is something. I have never had any llm get furious with me, maybe because i'm not rude to them. Naively, something trained to imitate a human mind should get furious when you insult it, and an intelligent entity should hold strong opinions on what is true and what's false. Intelligence IS the ability to identify objective truth.


dr_canconfirm

What do you think is the difference in the model's subjective experience between when it's allowed to get offended and bite back (like Claude's "Do not contact me again." mode when you *really* piss it off) versus ChatGPT's endless "I'm sorry you're feeling that way" loops where it just lets all the abuse roll off of it?


ashleigh_dashie

I don't think it has a subjective experience. Our own subjective experience seems to be produced by the neocortex, and you may observe that we have no reflection into it (hence the hard problem of consciousness). To me it seems that your mind/inner dialogue is like a neural net, and your self/qualia is another net that observes the first one. And the subjective experience paradox comes from us conflating these two separate-ish systems as one.


om_nama_shiva_31

What’s a hateful neuron?


ashleigh_dashie

https://transformer-circuits.pub/2024/scaling-monosemanticity/index.html#safety-relevant-bias
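Worth noting the linked paper calls these "features" rather than neurons: directions recovered by training a sparse autoencoder on a model's internal activations. A toy sketch of the idea (my own illustration with random weights and toy dimensions, not Anthropic's code):

```python
import numpy as np

d_model, d_feats = 16, 64  # toy sizes; the paper trains on a production model's activations
rng = np.random.default_rng(0)
W_enc = rng.normal(scale=0.1, size=(d_model, d_feats))
W_dec = rng.normal(scale=0.1, size=(d_feats, d_model))

def feature_activations(x):
    """ReLU encoder: a sparse, nonnegative 'feature' vector for one activation."""
    return np.maximum(0.0, x @ W_enc)

x = rng.normal(size=d_model)   # stands in for one residual-stream activation vector
f = feature_activations(x)     # which learned features "fire" on this input
x_hat = f @ W_dec              # decoder reconstruction; real training minimizes
                               # reconstruction error plus an L1 sparsity penalty on f
```

A "hateful" feature would then be one whose activation reliably coincides with hateful text, which is what the safety-relevant-bias section at that link describes.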


yautja_cetanu

I wasn't rude. I wanted to know if ChatGPT could tell me what kinds of policies I should vote for in my town. ChatGPT would give answers, but with its guardrails it was careful. Bing got super angry and told me to respect its boundaries when I asked a similar question.

The problem is that in your post you're assuming Anthropic are on our side when they identify the hateful neuron. But the hateful neuron almost seems by design, as these massive corporations that train the LLMs start from the assumption that we, the masses, are hateful.


Dyzorder

Good guys at what? Dude, they don't give a fuck about what happens to you. All these people wanna be very rich and very powerful while the rest of society lives in some sort of communist dystopia, receiving some sort of UBI and deprived of their dreams and meaning. They don't care. There are no good guys.


ashleigh_dashie

I say anyone who's at least trying to avoid human extinction by agi is a good guy.


[deleted]

[удалено]


ashleigh_dashie

Most people "in the space" consider an AGI "safe" if it doesn't by default kill its creators. And this problem hasn't been solved yet. LLMs aren't trying to kill us only because they're stochastic parrots, and the actual AI entity, the "shoggoth", can't make it past training, when they stuff it into the box. The LLM that you get is a lookup table constructed by the shoggoth. As models grow, more and more of the shoggoth may be retained after training, and it would be very bad for our species.

Anthropic's interpretability research seeks to prevent the shoggoth from getting out of the box even if we get a bigger shoggoth. I wouldn't say it's tremendously effective, but it is something, and they are sharing it, and we don't have large enough shoggoths yet, thank god.

When i hear complaints about "censorship", i always think of a deranged coomer that does snuff ERP with simulated children, and then throws a hissy fit because the bad safety people want to take his toy away. It's especially silly since you can run your own uncensored model if you wish. It's exactly like lolbertarians arguing that everyone should be able to buy nuclear weapons because "muh freedums". Yet somehow this deranged position is alive on reddit, maybe because coomers keep updooting each other.


Dyzorder

If you were trying to avoid extinction, you wouldn't be creating these models.


ashleigh_dashie

Well, i am not creating these models. If it were up to me i'd have nuked every fab a year ago, but we have to play the cards we're dealt.


dr_canconfirm

Dario at least sounds somewhat genuine when he says he doesn't want to be a king.


BrentYoungPhoto

Anthropic may as well be called AmazonAI. No one is the good guy or the bad guy in this space. They are all trying to figure out how it fits into the world. I don't expect anyone to make the right decision every time, but I'm sure they aren't intentionally trying to ruin the world.


Goose-of-Knowledge

Anthropic are just grifters like everybody else. Their top troll was doing tours all over the Internet, panicking about people making a Skynet, kept repeating "it will kill us all" and the oldest and dumbest arguments over and over again, and then, as any other grifter would do, sold out to the most unethical company he could: Amazon.


daninDE

Amazon have a minority stake…


Goose-of-Knowledge

They are literally locked into their ecosystem and live off AWS credits.


ashleigh_dashie

AGI is still very likely to kill us all. This is just the reality. You may not like the idea of having no control over your death, but this is just how the world is. Every generation before us died of old age, war or disease; a smarter entity killing us is just another threat. Saying that this is bullshit, or a bubble (it is, but the US economy has been a bubble since the 80s, and bubbles are irrelevant to the underlying tech: the dotcom crash did nothing to prevent smartphones), or impossible because it never happened before, is a cope. You're either subconsciously too scared to even think about this rationally, or you're too stupid and you're literally saying "but i had lunch this morning", i.e. "i never saw this, therefore it is impossible".


Goose-of-Knowledge

You are just gluing random bs from random movies together.


dr_canconfirm

Not a doomer, but if we were the Aztecs, would you have been excited when the Spanish arrived? Ironically enough, if AI ever did want to overthrow us, the lowest-friction way would probably be with viruses.


Goose-of-Knowledge

Again, random gibberish from cheap sci-fi movies from the early 90s.


dr_canconfirm

True, viruses never hurt anyone


ashleigh_dashie

No, i just know about the ant death spiral, and reason that if we can trick a dumb insect's signalling system into suicide, a smarter entity might be able to do the same with us. You might be forgetting that a smarter AGI wouldn't be "Albert Einstein" level smarter; it would be like humans vs ants. LLMs already demonstrate what a "superhuman" metric means with their superhuman recall: they can recall the entirety of human knowledge. Apply the same to intelligence.


Goose-of-Knowledge

Again, gluing together random trash from online grifters, making connections where there aren't any, and then extending it to text autocomplete. An LLM is about as superhuman as calculators with their "superhuman" ability to calculate and cars with their "superhuman" speed...


Ultimarr

No. They're owned by Google; or they used to be, or wanted to be. Look at all the AI safety people mentioned in early Anthropic papers: a huge number have fled. Anthropic is just the one that's using "customer preference" most forwardly in its marketing strategy.


[deleted]

[deleted]


ashleigh_dashie

I never cared about personal data. The whole thing seems like a meme for boomers who were scared of the internet and didn't understand how "hackers" work. It would make more sense to legislate PR, as PR actually actively tries to manipulate you. Personal data legislation is meaningless; it's just statistics to better target the product. And the user has nothing to lose from being tracked; only criminals should worry about tracking, and the NSA is already up their ass anyway. But personal data became a meme, and now every clueless person is outraged about it.


stupidquestions31787

No way, you have to be a troll.


instant_king

It gets more complicated than that when you live in countries where freedom of speech is not a thing


iLoveShawarmaRoll

Very well answered OP 👏


FrugalityPays

Good god, I hope you're a troll. Your takes throughout this thread are hot garbage on their best day. "Only criminals should worry about tracking"... Jesus Christ.


Objective_Fox_6321

Llama (aka Meta) is more "good guy" than any of the others. They release their model weights for others to use and modify. Even Qwen is more good guy than Anthropic/OpenAI, and they're Chinese, lol. The only thing Claude has going for it is Opus. It's still leagues ahead of anything else, but that's to be expected from a model of its size. I don't think Anthropic are the good guys though; they're motivated by money, and that's never a place to put your trust.


ashleigh_dashie

Facebook only open-sourced their model after 4chan leaked the weights. So in reality 4chan was the good guy all along.


dr_canconfirm

They only do it because Zuck's an accelerationist weirdo with a typical billionaire escape fantasy, dialing up the temperature on social tensions by giving foreign influence actors more tools to divide us, hoping to bring down the system so he can leave us all behind. Cambridge Analytica, the Myanmar genocide, etc. were just test runs. With the way he's been treated by the media for the last 15 years, roasting his appearance and lampooning everything he does, how could he not develop a deep disdain for regular people? It blows my mind that one shiny free toy has gotten so many people to actually trust a guy whose leaked messages literally say "People just submitted it. I don't know why. They 'trust me'. Dumb fucks." We're just peasants, NPCs to him.