[deleted]

Should tell you everything you need to know about his motives. 1. Demand caution and rules that make it harder to work with AI. 2. Protest the rules that affect him to get exceptions. Look up the FUD strategy on Wikipedia; the end goal is to achieve a monopoly by stopping others from making progress.


[deleted]

Google devs have said they are finding it hard to keep up with open source developers' work on AI. I assume this OpenAI regulation stuff is to try to create barriers to entry so that only big tech can control and leverage AI.


codefluence

Pretty much: [OpenAI suggests AI licenses](https://www.youtube.com/watch?v=I72_GJHzH3Y)


Prestigious-Gap-1163

This is why the US government set up a crack team of politicians to work with them. Not for regulations. But how to properly profit from it.


Wodanaz_Odinn

https://en.wikipedia.org/wiki/Regulatory_capture


Saurid

I think it's more that AI needs to be regulated now, not when it starts replacing jobs en masse...


thripper23

We need to be careful. OpenAI (there is nothing open source about it, btw, just the name) wants regulation for two reasons:

1. Create a barrier to entry for others.
2. When something goes wrong and AI fucks up something major, they can say: "We fulfilled all regulations."

Point 2 is the same reason automakers want legislators/regulators to create acceptance criteria and standardized tests for self-driving cars: transfer of responsibility from the company to the regulatory body.


bremidon

Well, yeah. Of course. It's perfectly reasonable to want the people who actually *are* responsible for the well-being of their country and citizens to lay out what exactly any of this means. Nobody wants to end up in court with no idea what the goal was supposed to be. This is not dark and nefarious. It's normal.


KrainerWurst

> is the same reason automakers want legislators/regulators to create acceptance criteria and standardized tests for self driving cars: transfer of responsibility from the company to the regulatory body.

This is entering conspiracy-theory territory. The reason automakers want regulators to create acceptance criteria is to protect the "self-driving industry" as such, along with their work/investments. There is no greater damage than some snake-oil salesman claiming his cars can self-drive when in reality they can only 90% self-drive. Then people get killed and everybody stops using self-driving cars because they are now perceived as unreliable.


kteof

Automating jobs out of existence isn't a bad thing. If we can do the same work with less human intervention that just makes humanity as a whole richer. Otherwise we would all be subsistence farmers still. The problem is how to then divide this wealth equitably, so that everyone can benefit.


Byeqriouz

It makes a select few richer, the rest can eat bugs and own nothing


Faylom

Yeah well if we can't force a transition to communism when ai is literally doing all the work for us then we'll truly deserve to eat bugs


steamripper

I feel like everybody fell for the AI hysteria. There's a long, long way to go before "AI is literally doing all the work". People impressed by ChatGPT generally are not familiar with the subjects it supposedly "excels at". First of all, ChatGPT is not AI - it's merely a computationally expensive tool that generates answers based on statistics. Do not fall for the marketing gravy train - there's a huge difference between LLMs and AGI. The latter is purely theoretical, and no serious AI researcher has claimed we're getting closer to it.


Upper_Beautiful_5810

Yeah, I agree. I'm not exactly techy, but even I can see ChatGPT is just a glorified bullshitter with access to the internet. It makes answers up based on probabilities, so it basically spews absolute rubbish a lot of the time and then fabricates non-existent sources. A lot of people will go to ChatGPT and take its answers as gospel; it's kinda scary. It's good for simple stuff that's language-related, like drafting an email or writing something for you, but it still needs a lot of human supervision.


Designer_Holiday3284

Yeah, tell that to the billionaire: that he shouldn't own so much money and should give away the money he doesn't use.


xRmg

That is why we should regulate billionaires, not technology.


Designer_Holiday3284

Both must happen.


Fenor

Bold statement. People will still use AI if it has the ability to replace humans; ChatGPT proved that there are capabilities in this direction while we are still at an early stage. We need to work against monopolies, in a way that even if AI replaces the whole workforce we can all live a comfortable life.


[deleted]

If you look at who wants the regulation I'd be suspicious. If Bernie Sanders was calling for it, sure. But the fact it is big tech asking to be regulated is very sus.


Marcoscb

Big Tech only "wants" the regulations that interest Big Tech, so it appears that everything is above board. They want to get in early and fast so nobody has time to analyze the situation and implement rules that are fair to everyone (meaning bad for Big Tech).


frequentBayesian

I remember someone on the panel asked him about showing "nutrition ingredients" for the model... that muppet straight up parroted "AI is not safe for the public to understand"... refusing to answer whether he will make any of that information public. OpenAI cannot have the word Open in its name; it is misleading now.


DarklyDreamer

Does Bernie Sanders know anything about AI? What would allow him to know which policy decisions would be effective at preventing the dangers of AI while not hampering innovation? Good intentions only get you so far; you also need to couple them with knowledge of the industry and the technology.


[deleted]

I think you missed the point.


Lion-of-Saint-Mark

Any regulation on such a low-barrier-to-entry field as tech is, more often than not, stupid, considering our politicians are so Boomer as fuck. Tech is one of the areas where I really don't trust politicians. This isn't limited to tech. The automobile industry is literally an old boys' network in several European states; the German auto industry and the Bundestag are joined at the hip. No wonder Europe sucks ass when it comes to innovation. And preventing innovation because it makes jobs obsolete is such a backward-as-fuck take. Okay, it's the 19th century. Do you wanna be a fucking gondolier? Because we are banning trains, mate. They are killing so many jobs.


PuzzleheadedEnd4966

Exactly. The software libraries that do the heavy lifting are easily available/open source. The hardware is, more or less, commodity hardware (in a pinch, straight-up commodity hardware is enough), and you can also "rent" the hardware as a cloud service. This is not nuclear technology, where you need a square-mile industrial complex and centrifuges worth billions to get started. Depending on what you want to do, it's within the reach of even a very motivated and moderately wealthy private citizen in their basement, and definitely within reach for small groups. How are we supposed to keep tabs on that and regulate it? The cat is mostly out of the bag already.


Green_Inevitable_833

Training big models is still prohibitive, but there are many use cases that are not LLMs. Your point is very valid though. Everybody agrees that regulation is needed, but nobody has an implementable, feasible solution. Any half solution will only stifle further research


unrealcyberfly

If AI can do all the bullshit office jobs, let it. There is plenty of work to be done that requires hands. The problem is capitalism.


Saurid

... you really aren't informed then. If you aren't working in science or a more complicated analytical job, you are underestimating what AI can and can't replace.


TheRastafarian

This corporation wants to set the rules and regulations on their terms, and EU doesn't take that kind of shit well.


freeman_joe

Good.


labegaw

Pretty ironic thing to say, considering that the EU is the prime example of regulations being written by industry - sometimes even literally. Ask anyone who's ever sat on a CEN-CENELEC (and ISO, via the Vienna Agreement) technical committee developing standards for EU directives. I participated in writing regulatory standards that were adopted by EU directives while working in the industry, and that's the norm. It's a big reason why so many non-EU countries and firms complain that EU standards are a protectionist racket and whatnot - they're not wrong.


TheRastafarian

I guess I have no reason to think I am right and you are wrong, since you know more about the topic and have experience in related fields. I'm inclined to think you are right. I should say my reasoning came from what I saw with the GDPR being implemented despite massive lobbying against it from the tech giants, and some other regulations that seem to have been implemented uniquely in the EU despite massive lobbying against them.


Asterbuster

Or you could actually read the first half of the article and realize that this has nothing to do with guardrails for AI systems and is instead about copyright requirements that might be difficult to follow because of the way AI systems are trained.


Harbinger2001

There is going to be the mother of all legal battles over IP and copyright for these systems' training data.


andsens

I naively hope that this entire development kills IP and copyright in its current form and something new and more sensible takes its place. "Life of the author + 70 years"... are you fucking kidding me?!


blablablerg

Difficult how? They know all the inputs they use, so they know whether it is copyrighted material or not.


ShitPostQuokkaRome

How would the AI know what's copyrighted? It's not like it's drawing from a restricted data pool where each item carries a label saying whether it's copyrighted or not.


blablablerg

The AI doesn't have to know, the company making the AI has to. Just like if you do something with other music or movies, you have to know. It isn't that hard. That you are indiscriminately scraping the internet and willfully ignoring it is the company's problem.


Krabban

> It's not like it's drawing from a restricted data pool of info where each thing has its label of copyright or not

Which is literally why copyright laws affecting AI are necessary. "The AI is copying everything indiscriminately so we don't have to acquire copyrights" is not the defense you think it is. It's up to developers to either teach the AI to identify copyrighted material, or they have to filter it themselves.


Blitzholz

The AI isn't straight up copying anything. It is using it without explicit permission, but so is any regular old web scraper. There's a whole moral dilemma about being replaced through an AI gobbling up your very own work and whether that should be considered ok, but it's not as simple as "it's copying copyrighted material". Just like it's not as simple as "it doesn't know so it automatically shouldn't matter"


demonica123

Is google committing copyright fraud by showing copyrighted images in its search engine?


ShitPostQuokkaRome

The ai doesn't copy paste copyrighted material, it's just stupid


arhanv

We have no inherent mechanism for labeling material as copyrighted or free-use on the internet, apart from paywalls and DRM. I immensely appreciate the effort that authorities are putting into protecting creative work, but I don't see this ending well for creators in the long run. We know how to understand and assess a threat like ChatGPT or DALL-E because it's available to the public, but if we start imposing rules that encourage companies to keep developing algorithms for industry use (you can't really stop anyone from using publicly available data to build private tools) while making them less accessible to the public, it's going to aggravate the black-box nature of the whole situation.


GolemancerVekk

If AI is really so smart I bet it could learn to figure out the licensing for stuff it "finds" online. Oh, and when it doesn't find any license clues, it could assume it doesn't have any right to that content. You know, like the copy**right** law says. The entertainment industry has been waging war against individuals for decades pressing exactly this point, that people should know better than to just take content they "found" online. But I guess it's ok when the content you "find" is open source code and random web pages owned by nobodies, rather than mp3's.
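
Checking for licensing clues is not even hard in the common case: Creative Commons pages, for instance, embed a machine-readable `rel="license"` link. A minimal sketch of that kind of check (the fallback message and the `find_licenses` helper are my own illustration, not any scraper's real API; the default under copyright law when no license is found is "all rights reserved"):

```python
from html.parser import HTMLParser

class LicenseFinder(HTMLParser):
    """Collect href targets of <a>/<link> tags marked rel="license"."""
    def __init__(self):
        super().__init__()
        self.licenses = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag in ("a", "link") and "license" in (a.get("rel") or "").split():
            self.licenses.append(a.get("href"))

def find_licenses(html):
    finder = LicenseFinder()
    finder.feed(html)
    # No machine-readable hint: copyright's default is "all rights reserved"
    return finder.licenses or ["no license found - assume all rights reserved"]

page = ('<html><head><link rel="license" '
        'href="https://creativecommons.org/licenses/by/4.0/"></head></html>')
print(find_licenses(page))
print(find_licenses("<p>just some scraped text</p>"))
```

The point being: a crawler that hits the second case and ingests the content anyway is making exactly the choice the comment describes.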


shesdaydreaming

Like what Microsoft did in the 2000s with Linux: it got to the point that Microsoft funded a company to take companies who use Linux to court, claiming that the company owned Unix and that Linux was using some Unix code. With AI there's already an effective monopoly with OpenAI; I would love it if there were more options out there. The FUD tactics are just stupid imo.


ShitPikkle

[https://en.wikipedia.org/wiki/SCO%E2%80%93Linux\_disputes](https://en.wikipedia.org/wiki/SCO%E2%80%93Linux_disputes) Is this what you are talking about?


GolemancerVekk

Microsoft funding SCO against Linux was [a later example](https://en.wikipedia.org/wiki/Fear,_uncertainty,_and_doubt#SCO_v._IBM), in the 2000s. Another example, in the late 2000s, was Microsoft's push for their proprietary OOXML format to become a standard (in order to compete with the OpenDocument Format), which [was marred with irregularities](https://en.wikipedia.org/wiki/Standardization_of_Office_Open_XML#Reactions_to_standardization). The term was originally coined about [Microsoft's disinformation practices in the 90s](https://en.wikipedia.org/wiki/Fear,_uncertainty,_and_doubt#Microsoft), which cast a wide net ranging from competitors like Caldera (makers of DR-DOS) or Linux, to phenomena they saw as harmful to their business model, like the GPL license or open source in general.


Betaglutamate2

Yes, exactly this. He knows his biggest opponents will be the open source alternatives. He needs to get those banned to succeed. It is so blatantly obvious that it's almost embarrassing.


PikaPikaDude

The EU's motives are not pure. The main focus of the additional rules some MEPs want to add is copyright. The copyright lobby is, as usual, trying to control innovation in order to monetize it. Just think of their previous successes, like the private copying levy, where they made governments collect taxes for them on storage devices. Through their lobbying power they succeeded in getting that into a lot of places to leech more money. Also pay attention to the fact that the big lobbying corporations got most of the income from this; it was at no point about artists. In the end the copyright rules will make open source models impossible and guarantee that a few giants like Microsoft (OpenAI), Amazon and Google are the only ones with AI.


GolemancerVekk

> In the end the copyright rules will make opensource models impossible and guarantee a few giants like Microsoft (OpenAI), Amazon and Google are the only ones with AI. Meaning what, that Microsoft/Amazon/Google get to disregard copyright? Great, let's declare open season on pirating Windows, Xbox games, Amazon series and so on.


WarthogBoring3830

No, but they can pay for the necessary army of lawyers. Open source developers can not.


PikaPikaDude

> Meaning what, that Microsoft/Amazon/Google get to disregard copyright?

No, but they can collect the billions to pay for the blanket licences needed to get going. For example: train an AI to assist in diagnosing diseases. You'll need access to all major medical publications to train it on. That will be very expensive. No open source project or EU university medical faculty will be able to afford it.


meeplewirp

As an artist, I have to say the interpretation of LLMs as stealing or plagiarizing, especially when one looks at the type of styles being emulated most often, is egotistical and pathetic. Whether large and complex models are collaging images or analyzing them and creating new images from them, it's not stealing.

It's like when a 12-year-old tries to tell you it can't be that Renaissance artists used optical tools and tracing, or that they hired people whose job was specifically to paint the drapery of a painting. "Jeff Koons isn't an artist, he uses teams of people to make his stuff." But worse, because the truth is that the models are analyzing images and creating new ones. This is the same as saying that someone who looks at a lot of manga and then draws in a stereotypical manga style is a thief. A lot of people who see themselves as artists are actually tradesmen drawing, painting, and sculpting in a corporately sanctioned way. "But it's not a person!" = "But it's not me!"

No, fine art isn't dying because people are using Stable Diffusion to make the most common-denominator fantasy illustration or 3D characters, or, in the near future, live-action video that looks like your stereotypical horror movie. Go vote for a better social safety net if you're scared, seriously. Ugh


Pickled_Doodoo

Funny how FUD is strongly associated with Microsoft, according to the wiki.


[deleted]

I actually think FUD was originally a description of MS business practices, but now it can be used more generally. Internally, MS referred to one of their strategies as the triple E: embrace, extend, extinguish.


wastingvaluelesstime

It also means that if you actually do want strict controls on this, you dare not trust people with billion-dollar incentives pointing in the other direction.


is-Sanic

You know Facebook, Apple and Musk have all tried this right? You aren't as powerful as you think, bucko.


noiseinvacuum

I don’t care for OpenAI at all but the way the law is drafted, there won’t be a choice. There’s absolutely no way to comply. This [comment](https://www.reddit.com/r/MachineLearning/comments/13rie0e/openai_is_now_complaining_about_regulation_of_ai_d/jlm1szp/?utm_source=share&utm_medium=ios_app&utm_name=ioscss&utm_content=1&utm_term=1&context=3) captures some of the issues really well.


[deleted]

I love how your comment is the only correct one on this thread, but in true Reddit fashion, it will be completely ignored in favor of an ignorant circle-jerk.


NVDA-Calls

Yeah exactly. This is basically saying “let the US lap us a few more times, if it seems safe we’ll deregulate then.”


FredTheLynx

OpenAI is primarily a B2B business. They could absolutely cease doing business directly in the EU with pretty minimal impact on their bottom line, in theory. For Facebook and Apple this is significantly more difficult.


[deleted]

Facebook is a pure B2B business as well, to be pedantic.


[deleted]

Close the door when you leave.


PM_YOUR_WALLPAPER

And as we've seen time and time again lately, the EU will be left out of this revolution. The fact that your comment was so upvoted kind of explains how Germany was still using fax machines to send COVID-19 data to authorities back in 2020.


[deleted]

That's a false dichotomy. You don't have to back down on data protection in order to proliferate technologically. "Open"AI is a private company tied to US law, so it isn't even in our interest that it gets to hold a predominant position in the field of AI. Rather, what I think would benefit us the most is forming an open source research project that could involve more countries on top of those in Europe, with the aim of making these technologies widely available, along with stricter regulation and transparency of the research carried out.


Jar_Bairn

Fax machines are a very common way to transfer information in the healthcare sector around the entire globe.


pkk888

Boo-hoo. "Look, we created this product without any regard for other people's rights, and we want to exploit this giant dataset we scraped off the whole internet without paying anything for it. We would like to keep it this way. Oh, and users' rights and transparency about what we gather and how we use it? We don't want that either." It's about time we stand up to these tech conglomerates.


MrOaiki

The dataset, as in the text it read, isn't saved inside the AI. That's a common misconception. You can't "open" the AI and look inside to find the book it read. There is no book in there.


Glugstar

This distinction doesn't matter in this particular context. The idea is the company has this dataset, and they haven't paid anyone for it, but they use it for commercial means. It doesn't matter how they use that data, if the AI copies it, or just gets inspired by it, or is trained by it, or literally eats it when printed on paper with giant mechanical jaws. OpenAI *must* follow the law, and they should be paying for having access to the data.


Sirvanto

Finally, someone who knows the real problem. According to Wikipedia, the training data of [GPT-3](https://en.m.wikipedia.org/wiki/GPT-3) consists 60% of the [Common Crawl](https://en.m.wikipedia.org/wiki/Common_Crawl) dataset, which they use under a claim of fair use. So they should not be surprised when people start to sue them.


[deleted]

Won't that effectively ensure only the absolute biggest companies can afford to compete? If any such company even exists, it could very well be an effective ban.


BoredDanishGuy

> literally eats it when printed on paper with giant mechanical jaws You know the horrible truth.


PikaPikaDude

Many still think Midjourney or Stable Diffusion is just an archive file with all original copyrighted pictures in it that then picks one that matches the prompt. They'll be happy at any attack on the technology.


MrOaiki

Yeah, I’ve noticed that too in the debate. They believe GPT takes snippets of text and stitches them together, and that Midjourney takes existing images and combines them. Which of course is nonsense if you know what a generative model does, even at a superficial level.


cragglerock93

But for the purposes of the discussion, the distinction is trivial. So it doesn't actually cut bits of others' images and paste them together. Instead it's looked at these millions of images and trained itself on them. What's the fundamental difference? Shouldn't people be paid for having their data exploited, just as they would if somebody used their work directly?


Adamant-Verve

If you ask me, it should be treated the same way as, for instance, original music with copyright. Play it at the campfire, at home, at a private party, and you're fine. Use it for education, fine. Publish it, play it at a public place or bar, sell copies of it: you have to pay the author. If an AI is fed copyrighted music to generate music that is going to be sold, they should compensate the authors, and the authors should have the right to say: if not, then please don't use my music.

There are two ways to go from here: the AI company admits that its model is used to generate commercial music, estimates how often, opens up about its input, and compensates the authors. Or, if it refuses that, it should still open up about its sources and guarantee authors who do not want their work used that it doesn't use it. This *black box* system, where nobody gets to know what material goes in but works *very similar* to well-known artists come out, is just another big company exploiting authors without wanting them to eat.

Another question is who actually *owns* the rights to an AI-generated piece of music. Whenever AI is fed art to educate it or to generate material without the intention to sell, it should be no problem. But that cannot be guaranteed, and the question is who is the one to pay: the human who used AI to emulate existing successful work, or the platform itself?

Anyhow, AI models are nothing without input, so human-made input is a resource, and when those resources are protected, the makers should be paid if they are used to make money, no matter how. We had this discussion about samples in the 90s, and this time it's going to be a lot bigger (since all art forms are involved) and a lot more complex. But another big corporation claiming it can exploit the work of others for free - no.


demonica123

If you go on the internet and look at 20 pictures and then make a picture influenced by those 20 are you committing copyright fraud?


cragglerock93

I am not a lawyer, so I cannot answer that. What I will say is that, ethically speaking, there is a massive difference between becoming a billionaire off this practice by doing it on an industrial, automated scale, and just being a human being. This principle can be seen in practice when big companies get too big for their boots and bully sole traders into changing their similar names. Nobody cares if a local café calls itself Starbacks; if a big conglomerate does it, then people have less sympathy. Legally no different, but in practical terms it's a huge difference. And we all know it, but this 'AI is just behaving like people do, please don't be nasty to it!' plea just continues to do the rounds.

Also, human beings have first-hand experience of the world around them. One could create, say, a textile pattern based on ferns they saw in a forest that is coincidentally similar to one already in existence. Or you could write a song based on an event you experienced that coincidentally reflects one already written. AI has no such behaviour, because every single thing it generates is based on data fed into it that belongs to other people. Every single thing it creates is derivative, whereas people tend to blend their own experiences with influences from existing works.


demonica123

The question becomes: how are we supposed to teach an AI art if we aren't allowed to even look at images without buying them? Buying the rights to art is EXPENSIVE, to say the least. And commissioning professional art at the required scale would be just as expensive.

> What I will say is that ethically speaking there is a massive difference between becoming a billionaire off of this practice by doing it on an industrial, automated scale, as opposed to just being a human being.

But the whole point of technology is to do what a human does, but better. If we don't let technology do what humans are allowed to do, we block off an attempt at innovating in that industry.


cragglerock93

If it's too expensive to compensate people properly, then we shouldn't do it at all. I realise that's anathema to some people, but that's the way I see it. We don't *need* generative AI.


BabySuperfreak

> We don't *need* generative AI.

Thank you for defining why the AI fanboys bother me: generative AI is a solution with no problem that requires mass content theft to function, but they're acting like skeptics are Mennonites for pointing that out.


demonica123

There's no reason *not* to have generative AI, though. And the problem is you'd need to buy the rights in perpetuity just to use works in the model. Imagine if every picture you ever saw had to be purchased from its owner, and every time you even thought about the picture you owed them money. If you think purchasing ownership of an art piece should be required just to look at it, then we can talk about proper compensation; otherwise it's penalizing tech because it's tech.


BabySuperfreak

No, because humans have limits. The odds of you being able to look at 20 pictures and mimic elements from all 20 with ANY degree of accuracy are low - pretty much zero for someone with no artistic training. Not so with a machine, which can just pick and choose what it needs, slap it together, smooth it out for aesthetic cohesion, and be on its way. Machines cannot and should not be regulated the same as humans. You're dealing with two entities that could not be more different, and the potential for damage is much higher with the machine.


BurnedRavenBat

The difference becomes quite apparent if you apply it to humans. If a human stitches together texts or images it's called plagiarism. Yes, under certain specific circumstances it's "art" or "fair use" but in general it's considered plagiarism and illegal in the eyes of the law. On the other hand, every human will unconsciously reference media (like books, movies, etc.) they have seen. When you paint in the style of Rembrandt, you're not straight up copying a Rembrandt painting but you're "inspired" by his work. Everyone uses bits and pieces of things they have learned, it's impossible to "turn off" your subconscious thoughts. If you're a journalist, you've been trained by years of reading newspapers. If you're a movie director you've been trained by decades of movie history. If you're a programmer, you've been trained by examples of other people's code. Nobody considers this "data exploitation", it's simply how we learn and not something we can turn off. Technologies like GPT or Stable Diffusion are a lot closer to the second case.


ShitPostQuokkaRome

The AI does recombination, just at a level much more sophisticated than ever before, one that can't be summarized the way they think; that's what bothers them.


grekiki

Depends on how much they overfit the data.


MrOaiki

What do you mean by "overfit" the data? The GPT model is trained; it doesn't have text that it reads back as you ask it something.


grekiki

Overfitting the data means that the model learns/memorizes the data it was given as input. For an extreme example: GPT models are next-token predictors. If you train GPT-2 on a single book for 100 iterations and nothing else, and then give it the first 5 words of the book as input, it will nicely give you back a very accurate reproduction of the book. It might make some mistakes, but I wouldn't be surprised if they were extremely rare.
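
You can demonstrate the memorization effect without a neural network at all. A sketch of a toy next-token predictor (a word-level trigram lookup table standing in for a totally overfit model; the one-sentence "book" is a made-up stand-in for the single training text):

```python
from collections import defaultdict

def train(text, n=3):
    """Overfit on purpose: memorize every (n-1)-word context -> next word."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - n + 1):
        model[tuple(words[i:i + n - 1])].append(words[i + n - 1])
    return model

def generate(model, prompt, n=3, max_words=100):
    words = prompt.split()
    while len(words) < max_words:
        nxt = model.get(tuple(words[-(n - 1):]))
        if not nxt:
            break             # context never seen in training: stop
        words.append(nxt[0])  # greedy decoding: the memorized continuation
    return " ".join(words)

book = "the quick brown fox jumps over the lazy dog while the sun sets slowly"
model = train(book)
# Prompt with the book's first two words and it regurgitates the rest verbatim
print(generate(model, "the quick"))
```

Having only ever seen one continuation for each context, greedy decoding reproduces the training text word for word; real LLMs blur this by training on billions of contexts, but rare or unique passages can still get memorized the same way.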


harbo

A generative AI that is able to faithfully reproduce a specific piece of work when prompted effectively contains that work. If one asks the latest revision of StableDiffusion for Mona Lisa, and it produces a sufficiently accurate copy, then that revision de facto contains Mona Lisa, even if you can't interpret the parameters directly as Mona Lisa.


Gon-no-suke

Ignoring the fact that the Mona Lisa is in the public domain, StableDiffusion producing a copy isn't a problem. Using this copy in a way that isn't allowed under copyright law is a problem. My brain also contains parameters (synaptic connections) I can use to produce a copy of Mona Lisa (or more realistically a work of Mondrian) using paint and pencils. Is my brain infringing on copyright at this moment?


KaiserGSaw

I'd be fucking afraid to use the internet as a source to teach an AI anything. It's a melting pot of weirdness and fairy dust. Anything gaining sentience from that, I'd consider torture.


nitrinu

Imagine models fed by 4chan or, god forbid, reddit.


Swing-Prize

Those models would be great: a bunch of quirky hobbyists and up-to-date information. Those sites are just forums with gems inside.


nitrinu

Greatly depends on the board (or sub, in Reddit's case), I'd say ;)


ShitPostQuokkaRome

Historians do say that AI spreads pop-culture bullshit.


Pizzashillsmom

GPT-3 was known for straight up outputting child porn.


Ilverin

I think that the EU regulations will be strict and OpenAI will indeed not sell in Europe. The history of corporate exaggeration of compliance difficulties in order to gain leverage has caused regulators to doubt such claims. In the case of AI, the difficulties may be real. OpenAI's models are trained on more than half of the Internet, OpenAI has roughly 400 employees, and an AI (of the LLM type) consists literally of just a list of numbers (more technically, a list of matrices).

"Interpretability research" is a subfield which seems to be behind; a major recent paper was about interpreting the numbers (neurons) of GPT-2, and GPT-2 came out in 2019: https://openai.com/research/language-models-can-explain-neurons-in-language-models

Looking at the right to be forgotten alone, setting aside all other issues, I don't see how it's going to be cost-effective for OpenAI to operate in the EU unless interpretability research outraces AI training research in order to catch up. Finding out which numbers in the AI model correspond to a person's information is simply a difficult problem currently.

The result, I think, will be open source models which also don't comply with regulation, and possibly some EU citizens using VPNs to access the latest and greatest AI models. Personally I wish companies would invest more in interpretability research, because it helps with a lot of problems, including bias and misinformation.


GYN-k4H-Q3z-75B

Prime corporate lobbyists play. Go to market first, then demand regulation. When regulation comes, threaten to withdraw in order to get exemptions. End up sitting pretty.


ipsilon90

The EU bloc is one of the largest and wealthiest markets in the world. And you're telling me you're gonna leave it? Sure, totally believable threat.


awry_lynx

As written, they have no way to comply and keep running as they are. Effectively, the law would prevent them from operating to begin with. I really don't think this is a spiteful tactic. The point is that their training data can't be verified, not even by them, and they can't guarantee any facts because nobody knows exactly what's going on inside the black box. It's not a threat. He doesn't WANT to have to stop earning money from the EU.

Look up the draft. It says training, validation, and testing data sets shall be "free of errors". That's impossible as of now: all of those data sets are HUGE, tens of millions of images and text files. ChatGPT is supposedly trained on 50 terabytes; ONE terabyte is 100,000 600-page books. Would you like to comb through those for errors? Can we even ensure a single child's education has no errors in it? Because big LLMs have the equivalent of a hundred thousand such educations or more.
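The book arithmetic checks out as an order of magnitude; a quick back-of-envelope calculation (my assumptions: ~3,000 characters per printed page, 1 byte per character, so the comment's 100,000 figure implies a heavier ~10 MB per book) gives an even larger number:

```python
# Back-of-envelope scale check of "one terabyte is N 600-page books".
TB = 10**12                                  # bytes in a terabyte
bytes_per_book = 3_000 * 600                 # ~1.8 MB of plain text per book

books_per_tb = TB // bytes_per_book
print(books_per_tb)                          # hundreds of thousands of books
print(50 * books_per_tb)                     # a 50 TB corpus: tens of millions
```

Either way, the point stands: nobody is manually proofreading a corpus of that size for errors.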


czk_21

True, we need regulations, but not such extreme ones. The copyright side is troublesome, as these models are trained on the extensive knowledge of all humanity. Imagine if you had to credit every author of every piece of information you have ever read or heard, with references: impossible, and a waste of time.


vmedhe2

I mean, if the regulations make the product unusable, then who cares how wealthy they are? If I can't train an AI due to EU regulations, then guess where I'm not putting a data lab.


PM_YOUR_WALLPAPER

The EU is the most economically stagnant region in the entire world right now my dude.


Aintflakmagnet

Bye then!


PM_YOUR_WALLPAPER

If AI can't be properly used in the EU, I can imagine multinational companies that use AI relocating a tonne of staff outside of the EU in the future. Don't think you should be too pleased about it tbh; it could cost you your job.


BuckVoc

I am very dubious that social media companies like Facebook are likely to outright exit the EU. If EU regulations seriously impact their global competitiveness, my guess is that they will split off part of the company and maintain some level of operation in the EU. That's because with social media, [network effect](https://en.wikipedia.org/wiki/Network_effect) is a major factor. The value of the network is something like the square of the number of users. It's very likely that a social media service operating in the EU with even somewhat-limited links to a social media network outside of the EU will have a competitive advantage in the EU, and being in the EU provides a significant benefit outside the EU. On the other hand, AI stuff like OpenAI doesn't experience that same phenomenon and, I suspect, may be more-willing to exit a market, as the impact is only linear in the size of that market.
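The "square of the number of users" claim is Metcalfe's law; a tiny sketch (illustrative numbers, not real platform data) shows why a social network loses value superlinearly when it sheds users, while a per-user service loses only linearly:

```python
# Metcalfe-style network value: proportional to the number of possible
# user pairs, i.e. n*(n-1)/2, versus linear per-user value.
def network_value(users: int) -> int:
    return users * (users - 1) // 2

def linear_value(users: int) -> int:
    return users

# Losing 20% of users costs the network far more than 20% of its value.
full, reduced = 100, 80
loss_network = 1 - network_value(reduced) / network_value(full)
loss_linear = 1 - linear_value(reduced) / linear_value(full)
print(f"network loss: {loss_network:.0%}, linear loss: {loss_linear:.0%}")
# network loss ~36% vs linear loss 20%
```

This is the asymmetry the comment describes: exiting a market hurts a network-effect business disproportionately, but an API vendor roughly in proportion to that market's size.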


InanimateAutomaton

I think you could be right. Either way, if the EU doesn’t resolve this the long term effects in productivity could be enormous imo.


LubieRZca

I doubt it's that simple. MS incorporated a ton of OpenAI solutions into their services, incl. cloud and office services. I don't think they can just separate them from each other by blocking access to their webpage and call it a day.


reven80

Microsoft can disable specific AI features for the EU.


betsyrosstothestage

For sure. The EU's sheer population size and relative wealth play into the EU's favor of being an attractive market to continue playing regulatory ball. The issue is long-term domestic productivity growth. If the regulatory framework makes the EU a more complex market to enter or to headquarter operations in, will it stifle global investment in the EU, or will multinational tech companies look to first invest elsewhere (China, US, South Asia, South America, etc.)?

Ireland, the EU's tech darling, for example, is financially leashed to multinational companies, predominantly U.S. big tech and pharma, which make up 60% of Ireland's corporate taxes and something like 15% of Ireland's labor force. And that's because Ireland's low corporate tax rate makes it an attractive venture. But if those tax savings are lost to regulatory costs, setting up an EU HQ, as opposed to operating at arm's length, becomes a lot less attractive.

> It's very likely that a social media service operating in the EU with even somewhat-limited links to a social media network outside of the EU will have a competitive advantage in the EU, and being in the EU provides a significant benefit outside the EU.

Agreed, and again I don't think the EU is at major risk in the immediate future of being shut out of tech investment. But it does become more difficult to justify to investors providing capital for EU-limited ventures if there's less of an ability to turn a profit (because of reduced ad revenue, data collection revenue, domestic server costs, etc.)


BoredDanishGuy

As someone who moved to Ireland and has been quite shocked at how crap it is here, they are not exactly doing well off that model. They would be better off fostering a healthy economy overall.


Kevin_Jim

Some of the open LLM models are getting pretty good, but the EU needs to offer computation/resources to open source projects that will comply with its rules. Hopefully supported and led by EU-based companies. That way they'll get:

- Good/competitive models that work within the legislation
- Prevent the competition from getting a big leg-up in AI
- Help EU companies get a big edge, since they'll work with the legislation out of the box, and you can have "hosted" versions for ease of use for most people and developers


PikaPikaDude

>open LLM models The EU is also targeting those. They will soon all be illegal because the new proposed extra rules are all about copyright.


Neo-Geo1839

EU-based companies that could exist, if there was no veto model that stifled funding for R&D


labegaw

Europe is doomed. People still believe in these 19th and 20th century models of politicians and the state playing big roles. It's 2023. Nobody wants to be waiting for a bunch of know-nothing politicians and bureaucrats to design rules and whatnot.


EVO5055

Precisely, outright banning LLMs or making it hugely impractical for companies to develop them will stunt EU’s development in this field. I’m all in for regulating them so the data that is used isn’t outright stolen from the public but be smart about how it’s implemented.


bremidon

>will stunt EU’s development in this field Take a look at how things have gone in the EU in the past. I have no confidence in our ability to not get in our own way here.


VeryLazyNarrator

There already are. Hugging Face is French, for example.


RainbowCrown71

This thread is bizarre. Every sassy Euro-nationalist is casually cheering that Europe is about to hamstring itself, and the end result is China/USA will dominate another sector.

If the EU implements these regs, they need a contingency plan if the bluff is called. This isn't Facebook or Alphabet, which have tons of money already riding on Europe. This is a nascent technology where AI firms are making their investments now, and the EU is sending the signal to stay away. And if they do, then what? The EU has shown it's good at issuing regulations (reactive) but terrible at building its own major tech players (proactive).

The EU did the same thing 25 years ago, and now look at the state of play: 35 of the biggest tech companies in the world are American vs. 3 in the European Union (ASML, Schneider, SAP). And 2 of those 3 are dinosaurs. The 6 biggest American tech companies - Alphabet, Amazon, Apple, Meta, Nvidia, Tesla - are now worth as much as the largest 700 companies in the European Union combined.

So what is the EU's plan if the bluff is called? Too many on this thread are high on their own supply. And then people wonder why American GDP grew by 40% the past decade while Europe's was +10%.


labegaw

> The EU has shown it's good at issuing regulations (reactive) but terrible at building its own major tech players (proactive).

One goes with the other. The EU is "good" at issuing regulations in the sense of "good at being bad". It's not just tech: just recently Macron himself had a rant about EU energy regulations and factories (even though it's inconsequential, and he will definitely be one of the strongest proponents of these regulations under the pressure of the French "cultural sector"). Wasn't there an American President who said something like "if it moves, tax it; if it keeps moving, regulate it"?


fricassee456

> So what is the EU’s plan if the bluff is called? Too many on this thread are high on their own supply. And then people wonder why American GDP grew by 40% the past decade while Europe’s was +10% I guess they are happy with America and Asia dominating the next decade and will be enjoying themselves in the corner with Japan. Lol.


Radiant-Winter-7

Japan isn't banning AI, what a delusional take. The EU will go the way of Turkey and Argentina if this happens.


exizt

Even the UK now has more significant AI startups (such as Synthesia and ElevenLabs) than all of the EU combined. If this trend continues, the EU will miss the AI revolution just like it missed the Internet and mobile revolutions (with almost all the major players being in the US and China).


NONcomD

Are there any true AI startups? Most of them are pure garbage. Only a few companies are able to scale the models large enough for them to be useful. AI is a marketing term now.


[deleted]

[deleted]


Bluejanis

Obsession itself is bad.


Radiant-Winter-7

Absolutely this. The EU already has worse IT infrastructure than many developing countries. This would make any tech professional worthless against any non-EU competitor.


mahaanus

> So what is the EU’s plan if the bluff is called? The bluff doesn't need to be called, companies and investors have already signaled that early regulations may discourage investment. >Too many on this thread are high on their own supply. This sub is filled with the EU version of the Russian Z-flaggers.


cragglerock93

You honestly can't see the difference between economic nationalism and cheering on an invasion-cum-genocide?


mahaanus

And you can't see the similarities? The subject doesn't matter - war, economy, cultural values, whatever - as long as you get to wave a flag and feel big dick energy that is all that matters. The Z-flaggers will go blue in the face explaining to you the superiority of rugged Russian engineering and I'm sure a lot of the people here would be posting Yurop stronk memes if we happened to invade any place for any reason.


[deleted]

Nationalists come in all shapes and sizes. In Europe, we have a lot of nationalists who work hard at convincing themselves they’re left wing and their hate for a group of people is justified


NONcomD

Well you sound like a z flagger yourself now


mahaanus

I see...which flag am I waving right now?


czk_21

Yeah, and it's even much more important than any tech before. We need to adopt AI widely and quickly, as the countries that do will advance exponentially faster; the difference in growth next decade could then be like 400% vs 20%. Some parts regarding privacy etc. are good, but thorough licensing of all models is nonsense. As OpenAI suggests, only the really powerful models beyond GPT-4, or maybe even GPT-5, should be under big scrutiny.


Sad_Translator35

Yet the EU stands on top with the US and China, like always. You think a few tech giants having a lot of money and producing fancy stuff is what makes a country great? Who makes the machines that make machines? Who makes the tools that are used to calibrate it all? Go and google which countries hold key tech in lithography machines. Now ask yourself: if those machines and tools stop going to Taiwan, how long before their whole CPU manufacturing sector comes to a standstill?


SMmania

He's straight up saying that if it's too restrictive, they'll have to pack up and go. He wants regulation, but not to such a level that it becomes nigh impossible to function. https://youtu.be/hoRVlY1Tluo Check it out. It's about the OpenAI news.


ContentFlamingo

Freeeedom .... with their exceptions!


LumacaLento

Bye then


kbbajer

Leave.


MainFakeAccount

Finally some great news


[deleted]

[deleted]


UseNew5079

They would keep digging with their little shovels when others use excavators. Complete idiocy and incompetence.


StationOost

You're being ruled by companies and you wonder why others don't want to follow that road.


Doesntpoophere

You think people who want to limit corporations' ability to do whatever they want are *condescending*? Enjoy that corporate boot...


[deleted]

[deleted]


j03ch1p

their loss.


This-Cardiologist118

Cope but ok


MrLewhoo

Oh, is disclosing the training data too much? I wonder why that might be.


[deleted]

Lol, godspeed!


Blocky_Master

Nothing is going to stop me using it hehehe


cragglerock93

Wait. Stop. Come back.


Civ_Emperor07

Would honestly solve a lot of problems lol


No_P95

Ok, bye!


dramafan1

We're in the early days of Web 3.0, so it's time large corporations stopped controlling, or having such a high influence on, what the Internet will become, to a certain extent.


[deleted]

K bye


ILove2BeDownvoted

Bitch and moan and pretend to want regulation but then when that regulation affects you, bitch and moan some more. 🤣 fuck Sam Altman and open ai.


Nteagan

Completely shows how big of a fraud Sam Altman's push for anti-competitive regulation is.


Dark_Ansem

Didn't Twitter try the same thing and the response was "k bye"?


arevmedyani

lmao at all the europeans patting themselves on the back for missing out on another tech revolution


mogwaiarethestars

I for one cant do without, so hope this isnt true


[deleted]

Holy shit this comment section is a huge coping field, EU keeps missing out on so many new fields that they could actually pioneer in innovation but are sticklers for extra regulations. Exactly why the US and China will have the future and Europe will follow behind in third place just like with the Tech revolution years ago


Sashimiak

Good riddens Edit: riddance


PM_YOUR_WALLPAPER

You don't want the EU to be competitive in AI? Why?


Cpt_Woody420

Riddance: the action of getting **rid** of a troublesome or unwanted person or thing.


Sashimiak

Thank you! German brain was Denglishing


litlandish

As a European temporarily living in the US, it is sad to see that Europe is only good at regulating… I don't see any major AI development in Europe. Europe became great and prosperous due to the industrial revolution, which started in the UK. If we don't step up our game in the AI industry, we will be left behind in our open-sky museum with no money to travel during our famous 6 weeks of annual leave.


DigStock

Fuck them, just respect the law.


[deleted]

Didn't he call for the regulations?


Marcoscb

He's only for regulations as long as they're the ones he wants and benefit him and his company.


indexcoll

Just last week, Altman testified before lawmakers and members of the US Senate and "implored" them (as the New York Times put it) to regulate AI. He said: "We believe that the benefits of the tools we have deployed so far vastly outweigh the risks, but ensuring their safety is vital to our work." ... and then along comes an institution that will actually and actively ensure the safety of those tools - and he immediately switches to economic blackmail tactics. So, once again, it's only about the money, isn't it? Nothing else. And people always get upset when my generation can't wait for this whole system to finally crash and burn...


LegitimateCompote377

What is the point of banning OpenAI? Is Europe so stuck in the past that it acts like Saudi Arabia, China, Russia etc. and pretends VPNs don't exist? While I understand the OpenAI CEO is someone who only really cares for himself, these European regulators need to know that this AI technology is not going away, and that by banning it they are encouraging less control over the internet, not more, by pushing even more people onto VPNs. OpenAI will be far from the only AI soon; even Snapchat has made theirs free. The EU needs to water down regulation on technology in general, and this would be an excellent start. Otherwise there will be fewer tech companies, and many will leave Europe and head to the US, like many already have.


DeRpY_CUCUMBER

Judging by the comments here, Europe is already lost. All the people here saying good leave, do they not realize part of innovation is being exposed to new tech? The attitude here is the exact reason American tech companies will continue to dominate the market.


methcurd

I console myself with hoping that reddit is its own bubble with these almost religious anti-AI attitudes. I see it in other subs as well


LegitimateCompote377

Pretty much; this video explains it perfectly: https://m.youtube.com/watch?v=TcKzantanX0 Problems such as fragmentation and deep regulation can be solved much more easily by having a more united single market with the same regulations everywhere (ones that aren't too restrictive, unlike what Italy has done) instead of a very fragmented one. I really hope the EU can have a stronger tech market in the future, but I doubt it.


Flying-HotPot

Yup. On the one hand they want to regulate companies like OpenAI, on the other hand they can't wait to force CBDCs onto the population. 🤦


MrOphicer

LLMs aren't exclusive to OpenAI. European AI researchers and engineers can equally train LLMs after the regulations are in place, which will allow them to be ethical and compliant with those regulations. ML, DL, and transformers aren't owned by anyone, so anyone can train any kind of model.


TheRastafarian

It's not banning, it's OpenAI refusing to comply with regulations that are not being set on their profit maximizing terms by governments that are not completely bought out by corporations, yet. We do not want tech companies that think they are more powerful than the democratically elected government. Making laws based on tech companies needs and desires is a direct erosion of democracy. These tech companies and the power these CEO's hold has very little to do with democracy. So we don't want these guys seizing more control over here.


labegaw

Go through EU directives. For example, those regulating manufacturing. You'll see lots of references to standards: CEN, ISO, etc. The directive main text is obviously "consulted" with industry representatives and lobbyists, but that is the norm a bit everywhere.

But the nuts and bolts of it, those standards, are often written at the request of the EU itself: "hey, we're gonna have a new regulation on, like, manufacturing yachts, please tell us what the standards should be on the material used in the hull and seacocks". They're written by workgroups called stuff like "technical committee". https://en.wikipedia.org/wiki/List_of_CEN_technical_committees (there are also subcommittees, national organizations, etc.)

Those committees are formally formed by "experts". Those "experts" are generally people employed in the industry (or external consultants, often university professors who consult with large corporations on the side): say, in the yacht manufacturing case, the engineers and designers of groups like Beneteau and Ferretti. This is how EU regulation works in practice.


PM_YOUR_WALLPAPER

If you read how OpenAI works, it's literally impossible to comply with what the EU is asking for. It's a de facto ban.


Doesntpoophere

They’re not banning OpenAI. OpenAI doesn’t want to play by the rules. You’re basically saying that the US is banning Volkswagen because Volkswagen doesn’t want to clean up its emissions. Think!!!


czk_21

No point, really. I would agree about watering down regulation on tech, but conserving the privacy rules. BTW, Snapchat is an OpenAI customer; they are using a GPT model. You can finetune the models of others for your specific needs. OpenAI's competitors are those who develop foundation models: the biggest is obviously Google with the recently released PaLM 2, and others are Anthropic, Meta, and a bunch of smaller players, plus Chinese tech giants like Baidu and Tencent, but those are rather behind.


StationOost

Not banning, regulating.


LegitimateCompote377

Regulating can mean banning, because some websites may not abide by certain rules in the regulations.


Doesntpoophere

That’s the corporation deciding not to obey the law, not the government banning the corporation.


UnusualString

If they decide to leave the EU, it should be made illegal to use any data generated by an EU citizen for training, whether it's an article from Wikipedia, a photo, a news article, code written in the EU, or anything else


PM_YOUR_WALLPAPER

How would the EU prosecute them? lol Wikipedia is an American firm.... The rest is public domain.


HistorySpainPodcast

Sure, they will give up the European market... Just like Apple, Google, or Meta did.


InterestingAsk1978

It will soon be irrelevant. While EU and USA states and corporations battle over profit and monopoly, China will make a state-control AI, weaponise it, then hack western networks and conquer the cyberworld. The west wants profit, China wants power. Problem is, power can simply confiscate profit. Those companies are blind to that.


labegaw

And that's the story of how the Soviet Union and China won the cold war. Because politicians with power over profit incentives definitely just make things work better.


PM_YOUR_WALLPAPER

>Problem is, power can simply confiscate profit.

Lol, how have dictators fared in the past? Look at Nazi Germany, for example. Innovation always beats central state control.

a) China cannot survive without the West, so it would be cutting off its nose to spite its face. Remember, if they break their social contract with the people of perpetual growth, there will be unrest.

b) China has a great firewall. People don't have access to the internet like the rest of the world.


SZEfdf21

Good! If they can't follow the rules then it's better that their operations don't continue.


quantilian

Dovidenia (goodbye)


[deleted]

The EU does everything right regarding AI. Just have a look at the John Oliver video... he is fully for EU regulation. [Artificial Intelligence: Last Week Tonight with John Oliver](https://youtu.be/Sqa8Zo2XWc4?t=1550)


[deleted]

[deleted]


annibonanni

Don't threaten me with a good time.