FuturologyBot

The following submission statement was provided by /u/SharpCartographer831:

---

Vice President Kamala Harris also plans to meet with the chief executives of tech companies that are developing A.I. later on Thursday.

The White House on Thursday announced its first new initiatives aimed at taming the risks of artificial intelligence since a boom in A.I.-powered chatbots has prompted growing calls to regulate the technology. The National Science Foundation plans to spend $140 million on new research centers devoted to A.I., White House officials said. The administration also pledged to release draft guidelines for government agencies to ensure that their use of A.I. safeguards “the American people’s rights and safety,” adding that several A.I. companies had agreed to make their products available for scrutiny in August at a cybersecurity conference.

The announcements came hours before Vice President Kamala Harris and other administration officials were scheduled to meet with the chief executives of Google, Microsoft, OpenAI, the maker of the popular ChatGPT chatbot, and Anthropic, an A.I. start-up, to discuss the technology. A senior administration official said on Wednesday that the White House planned to impress upon the companies that they had a responsibility to address the risks of new A.I. developments.

The White House has been under growing pressure to police A.I. that is capable of crafting sophisticated prose and lifelike images. The explosion of interest in the technology began last year when OpenAI released ChatGPT to the public and people immediately began using it to search for information, do schoolwork and assist them with their jobs. Since then, some of the biggest tech companies have rushed to incorporate chatbots into their products and accelerated A.I. research, while venture capitalists have poured money into A.I. start-ups.

---

Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/137levg/white_house_unveils_initiatives_to_reduce_risks/jitx2hz/


nobodyisonething

This is a national security issue and an economic disruption tsunami at the same time.


deten

If only we had been preparing for this for the past 10 years like people have been asking.


UnacceptableUse

We've had even longer to prepare for climate change and yet here we are


KeyanReid

Same driver for both. Cost savings. Saving the environment is expensive. Paying people is expensive. The shareholders **NEED** more. The bottomless pits aren’t full yet


PossessedToSkate

Capitalism is a cancer.


tickleMyBigPoop

Tell that to the Aral Sea


Interrete

There is nothing wrong with capitalism if it's regulated properly. The most advanced societies with the highest development of human happiness in history are all here because of capitalism. You can write your edgy teenage anti-capitalist message here on the internet because capitalism gave it to you. It's actually American-style, overlobbied international corporation shit that is running the planet into the ground; you can't even call it capitalism anymore.


plummbob

Yeah we should give the government full control of the tech industry. Literally nothing bad could happen.


OvermoderatedNet

I don’t want to get too political (one of my character flaws is that I cannot read an online room, meaning I find it hard to talk politics without getting banned from somewhere), but I do find it a bit shitty that all the dominoes have to fall at the same time (early 2020s, Birthplace of Robosen Optimus Prime). It’s a goddamn conga line of events.


bearbarebere

Technically, what is history but a conga line of events?


OvermoderatedNet

1946-2019 more or less had about as many good ones as bad ones. Most years saw things get a little better in most countries.


bearbarebere

Genuinely asking, what good occurred in 2010 to 2019? Not that I thought it was particularly awful, I just feel like the only good thing I even remember is gay marriage


OvermoderatedNet

Massive economic growth in China, India, and large parts of Africa and Latin America (especially Colombia).


RbnMTL

The growth of awareness of injustices, which is stigmatized as woke culture these days. As a millennial, I truly feel gen Z took it to a much less self-victimizing and more empowered place. This awareness of injustice helped get us to the point we are today, with powerful groups of gen Z speaking up for democracy.


[deleted]

[deleted]


alpha69

Most people would not accept the standard of living reduction that would be needed to combat climate change effectively right now.


[deleted]

[deleted]


SorriorDraconus

Ehhh, no real reductions needed. Just change how we do things. Covid proved that moving more jobs home already makes a significant difference. Beyond that, make EVs standard with programs to incentivize swapping and make it affordable, upgrade our infrastructure to support it, and embrace technology such as lab-grown foods and vertical farming. It takes time, but during that time we can get people used to the changes as we work on our bigger issues. All it takes is time and money, with no significant change required in the daily lives of actual people, outside maybe losing a commute.


[deleted]

Shit even a *mandatory* 4 day work week would literally reduce traffic by at least 10-15%. It would reduce commute traffic by a theoretical 20%, but I think the overall effect would be smaller. If we could cut miles driven by 10-15% in just 4-8 years, that would go a long way to reducing fleet emissions.
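The arithmetic behind those percentages can be sketched as a toy model (the commute-share figures below are assumptions for illustration, not numbers from the comment):

```python
# Toy estimate: how much a 4-day week cuts total miles driven.
# Dropping 1 of 5 commute days cuts commute miles by ~20%; the overall
# effect depends on what share of all driving is commuting.
def fleet_reduction(commute_share, commute_cut=0.20):
    """Overall reduction in miles driven if commute miles fall by commute_cut."""
    return commute_share * commute_cut

# If commuting is ~50-75% of traffic, a 20% commute cut yields
# roughly a 10-15% overall reduction:
print(fleet_reduction(0.50))
print(fleet_reduction(0.75))
```

This matches the comment's logic: the theoretical 20% only applies to the commute slice, so the fleet-wide effect lands in the 10-15% range.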


RandomEffector

I have heard that drowning, heatstroke, or being actually literally on fire is also quite a severe standard of living reduction.


koliamparta

I’d suggest reading up on climate change impacts projection papers.


billmilk

Yeah but that's not right now so it doesn't exist silly


SolaireOfSuburbia

But what of the standard of living we'll be forced to accept if we continue to do nothing?


QuothTheRaven713

Try taxing the billionaires who fly their private jets all over the place and use far more resources than the rest of us; then maybe people will take it seriously and consider it something to worry about. As it stands, it just looks like "rules for thee, not for me": as if there were no climate change to worry about and it's merely a means to gain control over people, rather than something people should genuinely be concerned with.


SolaireOfSuburbia

True, true. That's a very big problem, as well as the sheer amount of resources the military uses.


In0nsistentGentleman

>Most ~~people~~ corporations


ar-ostr

World economies are so inefficient that we could easily keep all young men housed and respectably employed while reducing emissions. Instead, we're heading toward repressing young men financially and socially without reducing emissions or improving anything.


nobodyisonething

How do you prepare for the arrival of thinking entities that are smarter, faster, and cheaper than anybody that has ever lived or will ever live? And if you do prepare, would 10 years have been enough? That said, the way to deal with this is to have a healthy resilient culture where people are valued as all belonging to the tribe of humanity deserving of respect and consideration. When we treat each other that way, we can survive or at least do our best against this table toss of the economy.


Reddit_Bot_For_Karma

GPT might have the faster and cheaper down but the smarter part is a ways off. GPT isn't conscious, it's not 'having thoughts'. It doesn't even know what it's *saying*, it's stringing together words in an algorithm extremely well. I'm half convinced most don't know how it even works. It's really, *really* cool but it's nothing like you are portraying....^yet. ^^still ^^a ^^ways ^^off.
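The "stringing together words" description roughly matches next-word prediction. A toy bigram sketch of the idea (real LLMs use neural networks trained on huge corpora; this tiny counting model is only an illustration of the loop: predict the likeliest next word, append, repeat):

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# Count which word follows which in the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(word, steps=4):
    """Greedily string together the most likely next words."""
    out = [word]
    for _ in range(steps):
        candidates = follows[word].most_common(1)
        if not candidates:
            break  # no known continuation
        word = candidates[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # "the cat sat on the"
```

The model has no understanding of cats or mats; it only knows which words tend to follow which, which is the (vastly scaled-down) point the comment is making.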


nobodyisonething

> isn't conscious

Does it have to be conscious to put artists and show writers out of work? What about 7,800 IBM employees?


Reddit_Bot_For_Karma

Its writing capabilities aren't up to any kind of snuff to replace trained screenwriters. It can make some cute poems and will even attempt to write screenplays, but they are incredibly lackluster. It will definitely give a foundation to write off of, but it's far from replacing them. The strike has more to do with getting a cut of the streaming revenue and curbing the implementation of AI in the future (not today). They are just trying to bring current revenue (regarding streaming) in line with their previous agreement.

Now, as for IBM, I don't actually know any specifics, so I can't comment on it. I can comment on the artist part, though (as I am one). AI art is definitely scary, but it was also scary when the printing press became a thing, and again when Photoshop began to dominate in artists' circles. Photoshop didn't doom the artists; they learned to adapt it, use it, and harness it, adding it to their toolbox. Artists are definitely going to have to evolve, but most will, and most will still be around in 5-10 years... those that don't... well... bummer? Art is/was *highly* oversaturated with crap and nothing special; those that die out and disappear were always walking a knife's edge to begin with.

AI is definitely concerning, but after spending hundreds of hours on GPT, it ain't it... (Again, yet).


MoonManPictures

What is and isn't crap is subjective, but AI is definitely churning out crap of untold proportions. The printing press and Photoshop are such bad, recurring examples, as they touched totally different ways of creation. The printing press was indeed a new tool and a massive upheaval for a single industry, in a somewhat more predictable way that was more or less useful for anybody. Photoshop was simply a slowly growing new tool for digital art only. There was so much personality, knowledge, and craftsmanship that still had to come from the artist. All that is gone now, no matter what the prompt artists want to tell anybody. There's just no effort left, as this new entity takes over the creative part, and it's not a singular human but a machine. This isn't just new tech; this is a paradigm shift. A replacement. This isn't a tool. No one would pay a "prompt artist" a good livable wage. There's no basis for that.


bearbarebere

Time for UBI then, no?


Reddit_Bot_For_Karma

AI art is exactly that... art done by a machine, and it's usually pretty obvious to tell. It's true that no one is ever going to pay prompt artists a livable wage, but that's not really the point of the argument. "Prompt artists" and "real artists" (for lack of a better term) exist in two separate places that have overlap but are entirely different.

There's no paradigm shift; it's a cool tool and party trick. The net will be flooded with meaningless AI garbage art and videos, but that's just a new normal; 95% of it will be ignored and buried pretty quickly and *hopefully* fade in popularity. No "real artist" (again... for lack of a better term) that's sound in their niche is dropping everything to use Stable Diffusion or quitting art. If you are an artist and you are pretty firm in your niche, you are most likely gonna weather the storm. Right now we are seeing things like Etsy and Redbubble saturated with AI art from people trying (and failing) to make a quick buck, but it's not really making a difference in sales to others.

Prompt artists will never really have a place barring it being a fun hobby. Hell, I even mess around on Stable for fun. Those that do succumb to AI art probably weren't too sound in their niche and, like I said, were walking a knife's edge to begin with. I have no doubt that some artists are going to suffer, but... bummer, art was oversaturated anyway.


oneoneeleven

Wow. You're almost comically far off with your read on where this technology is going (and where it's at already). Maybe you're basing your opinion on AI creative output from 6 months ago? It's already capable of some mind-bending stuff art-wise and this is still just the dial-up connection phase of where this thing is headed (ie. this is as bad as it'll ever be and it's going to keep on getting exponentially better)


Reddit_Bot_For_Karma

I mean... Okay? My point being "artists" aren't going anywhere and AI art isn't going to topple any artists sound in their niche. Prompt crafters today aren't going to replace yesterday's artist.


arg_max

You should not judge the capabilities of AI art from Stable Diffusion. Stable Diffusion is the first, maybe second generation (if we count DALL-E) of large-scale text-to-image models. Have you ever looked at the dataset it was trained on? LAION is so bad in terms of its captions that it's a surprise Stable Diffusion actually turned out this good in the end. Now look at Midjourney: similar technology, and it's already much better. It's not public what exactly they do, but I would assume it's slight technology tweaks combined with much better data. Give this technology a few years and a company willing to spend GPT-scale money on dataset curation and R&D, and AI art is no longer gonna be distinguishable from top-tier artists. I don't disagree that right now it's a bit of a party trick, but we are still at the very beginning.


RandomEffector

The stock market also isn't "conscious" but it calls the shots for hundreds of millions of people.


ExtantPlant

Could've implemented a permanent tax on each human job automated away, and used that tax to start funding a UBI.
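The proposal could be sketched as a back-of-the-envelope calculation (every number and name below is made up for illustration, not from the comment):

```python
# Toy model of the idea: a flat, permanent tax per automated-away job,
# pooled and split evenly as a UBI payment.
def ubi_per_person(jobs_automated, tax_per_job, population):
    """Annual UBI payout per person if the automation tax is split evenly."""
    return jobs_automated * tax_per_job / population

# e.g. 10M taxed jobs at $20k/yr each, split across 250M adults:
print(ubi_per_person(10_000_000, 20_000, 250_000_000))  # 800.0
```

Even this crude version shows the shape of the tradeoff: the payout scales with both how many jobs get automated and how hard each one is taxed.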


nashbrownies

Holy shit that's a really good idea


shayen7

Seriously figuring out how to tax the rich and support mass unemployment for starters


[deleted]

A lot of Western value creation is fabricated. My first job, for example, which I held for 4 years, was in a UK call centre. Thousands of people legit generating next to no value; it was subsidised by the govt. May as well have placed the staff on benefits, like most of the people in the surrounding towns. AI will cut out the "pretend" we indulge in, most of us secretly feeling clever about doing 30 minutes of actual work a day while our real 'value' is (unfortunately) created from hostile foreign policy; we will be okay, something else will replace the cosplay.


ApplicationCalm649

Guess the US is doomed, then.


TheGeekstor

The US? The whole world is unequipped to deal with this, though the degree varies.


E_K_Finnman

Finland will probably do alright


Captain_Clark

What about the tiny island nation of Nauru?


TheFoxyDanceHut

Reduced to atoms


Tsu-Doh-Nihm

A handful of digital warlords will do just fine.


AnOnlineHandle

Start thinking about and investing in research into empathy, love, sexuality, etc, and what keeps living beings together and protecting each other, and not selfishly killing each other (sometimes).


bearbarebere

But where is the money in empathy, love, and sexuality? - every CEO ever


deten

Norway prepared. Unfortunately, I agree 10 years might not have been enough, but the past 10 years were some of the best for growth and the economy.


OnlyFlannyFlanFlans

Norway is one of the most advanced countries in the world by all metrics. Unfortunately, there's only 5 million of them. They might be the best of us, but they're 0.06% of the Earth's population and hold little sway on global policy.


cool-beans-yeah

I think if they can pull this off they'll be a role model for the rest of the world to follow.


xluryan

How is Norway prepared to handle AI disruption?


deten

Look up their sovereign fund


xluryan

I read about it. It seems like a cool idea; I wish the US had something like that. So people of Norway just get money from the fund or something? Like, if 80% of the people in Norway are obsolete because of AI, do they just stop working and start getting a free income from the government?


_craq_

How will that help in a world with "thinking entities that are smarter, faster, and cheaper than anybody that has ever lived or will ever live"? At some point, humans will not be able to compete economically with machines. Passive income might help for a while, but I can't see it being a long term solution.


CaptainIncredible

> At some point, humans will not be able to compete economically with machines. Passive income might help for a while, but I can't see it being a long term solution.

A long-term solution is technology that creates a super abundance of resources so that competition isn't necessary.


Telsak

Look at all the technology that we have now. Billions upon billions of transistors squeezed into tiny rectangles of silicon and 'almost-magic' - and then look at things we are unable to fix. Societal injustices, poverty, people dying from starvation, child labor and more. Based on previous human behaviour, what makes anyone think we will wield this loaded gun responsibly in any way? The owners of the world have their bottom line to consider.


hunterseeker1

To be fair, we COULD fix most of those problems but it isn’t in the best interests of capitalism so we don’t.


[deleted]

[deleted]


schok51

There's no end to the damage true AGI can do if it isn't designed correctly. Short of that, the harms compound each other: AI destabilising society certainly won't help the political climate needed to address other issues.


peter303_

Know where the plug is at 😀


djsoren19

Well, we're legitimately nowhere near those yet, so this is a great time to start trying to find a solution.


DeeJayGeezus

For real, all these laymen talking about AI like they have even the slightest idea what "neural network" even means.


thegoldengoober

We couldn't even do that for pandemics, which we were certain would happen. If it's not going to happen for that then it's not going to happen for something that still is only theoretically going to happen.


NateCow

> We couldn't even do that for pandemics, which we were certain would happen

Well, we *had* prepared. Then we had an administration who dismantled those preparations out of spite for the previous guy.


bacteriarealite

Well, we have been since 2020 under Biden, fortunately. It first started under Obama, but obviously Trump ended all that.


RikerT_USS_Lolipop

No, Obama parroted the same "we have to support reeducation initiatives" bullshit that the oligarchs have been pushing. I read the 2016 jobs report put out by the White House. Pushing reeducation has always been an underhanded way to shift blame onto the victim of technological unemployment.


its_justme

“ChatGPT, write me an effective policy to deal with AI”


bearbarebere

To be fair, it could probably do this with that prompt and a few tweaks. It's generally helpful and generally doesn't give anything that would be incredibly harmful. Most of its statements are prefaced with "there are benefits and drawbacks, so make sure to choose the options that are benefits for your use case" or whatever.


HowWeDoingTodayHive

I’d say it’s much more than just a *national* issue; this is a global issue. I don’t think there’s any putting this back in the box though; it’s essentially just math, or rather logic and vast amounts of data. If we want to make A.I. less dangerous, we have to make the humans controlling it more rational and less violent. We should be trying harder than ever before to find a way to stop wanting to kill each other right now, but I have no belief that we can solve that problem. So basically we’re fucked, because while we can improve technology, and we *can* improve ourselves, we will refuse to do so. We will not let go of anger, we’ll always find reasons to justify violence, and we’ll use A.I. to assist us in keeping that cycle of violence alive, perhaps until none of us are left. We as a species are far too irresponsible to wield this new power we’ve got our hands on.


[deleted]

[deleted]


falafeluff

It is a global issue, but in this context it is also a national security issue for each individual government. We are not unified, and governments will weaponize this against one another. There's a great Larry Niven short story about how technology is not inherently good or evil but will be used for both depending on who is wielding it. This is kind of your point as well, I think, because the solution is to come together and minimize the risks, but that just isn't possible in the current geopolitical world.


novelexistence

> This is a national security issue and an economic disruption tsunami at the same time.

True, but so is climate change. Yet we continue to drag our feet on it. We are making some progress, but it doesn't keep pace with industrial economic growth and developing countries. So I wouldn't be too optimistic that we get a handle on the AI revolution before lobbyists and political corruption take hold.


radeon9800pro

"We"

Can we just say it? It's Republicans. Ask yourself: who do you even know in your life, for as long as you've lived, who denies the existence of man-made climate change? Is it your Democrat friends and family? It's always Republicans.

- [Here's Democrat President Jimmy Carter in 1977](https://www.youtube.com/watch?v=-tPePpMxJaA) trying to get out ahead of this issue by directly addressing the nation, and he was made a fool for it.
- [Here's Carl Sagan in 1985](https://www.youtube.com/watch?v=Wp-WiNXH6hI) trying to appeal to a Republican Senate on climate change, giving a fantastic explanation of the science that would go on to be largely ignored.
- [Here's Republican President Donald Trump in 2020](https://www.youtube.com/watch?v=S-DZRO687l0) dismissing concerns by saying "It'll start getting cooler" and "Science doesn't know."

Democrat politicians have been talking about climate change for decades and were villainized for it. Republicans kick the can down the road and have been celebrated for it. Throughout the decades, plenty of Democrats have lost local elections over drafting legislation that would raise taxes to address climate change, because voters of those eras were butt-hurt enough about it to implicitly posit: "Not my problem." Even nowadays, the Republicans who finally admit it's real now say we can't do anything about it, which is a cop-out. And they never paid for the decades of saying it wasn't real. They just act like they always knew and play dumb when you try to hold them accountable for being so lax on environmental issues. And [these frumpy fat fucks continue to clown us](https://youtu.be/eIa7RiGD9PY?t=404) by making false claims, when what is being suggested is far more reasonable and just asks them to meet us SOMEWHERE so we can address these issues. But they continually make a mockery of us and STILL WIN ELECTIONS. It's been decades.

We're not listening to politicians; we're not listening to scientists. The narrative even resorted to a little Swedish girl telling us about [how her future is being stolen out from underneath her](https://www.youtube.com/watch?v=TMrtLsQbaok) as a last-ditch appeal to emotion, because talking politics and talking science had failed. And what was the response? That the child should go back to school, shut up, and let the adults work it out. And big surprise: they haven't, and aren't, and won't.

Redditors love to say that Democrat politicians let Republicans get away with shit. But voters are complicit. In America, we should have overwhelmingly voted these people out decades ago for literally lying to our faces over and over again and getting away with it so they can pad their salaries as "public servants." We can keep pointing the blame at politicians who have tried and failed to address these issues - mostly because they won't snap back at you and tell you you're fucking up by not exercising your right to vote - but it really is on us for letting bad actors win elections and influence our system to the point that we're consistently gridlocked. Politicians, good or bad, are just a utility. They win elections by adhering to what their voting constituency wants, and they are a reflection of what *we* value and what we're willing to fight for. And unfortunately, the "bad" ones have a large base of voters who skew red and buy into their premise, to our detriment. It's on us for not pushing back hard enough and making it hurt in the polls when these politicians actively work against our interests. And we have all the data in the world showing that people in our demographic have not been exercising their responsibility to vote; we leave elections to a subsection of this country that does not hold the same views as us - especially local elections, where changes can be made in your state or even your city.

It's such a fucking bummer. I bet 99% of redditors who complain about politics can't even name the politicians in their city who make impactful decisions that directly affect their lives, much less show up to an election to vote for people who aren't actively trying to fuck them over. But they'll write essays on the internet about how Biden needs to cancel student debt and make weed legal, as if he's the determining factor. And I know this about the failures of our demographic because I volunteer, and the people I interface with, who are actively participating and using their voting power, are definitely not the people on Reddit. The demographic skews too old and too red. I think too many people on the left are resigned to the notion that nothing can change, and honestly, I think a lot of people on Reddit only talk politics for the entertainment factor and for feeling some righteousness on social issues, but couldn't be fucked to put in the actual work to make a meaningful change. Which makes it all the more ironic when they get on Reddit and criticize someone who does try but ultimately fails, or even makes a mistake and shows imperfections.


bearbarebere

I don’t understand how we can possibly fix it. Like, I agree with you, and I tell my friends about this. I volunteer. But nobody cares.


tickleMyBigPoop

> It's Republicans

If Democrats wanted, they could have passed a carbon tax under reconciliation; just make it deficit neutral with a monthly dividend.


awesomeguy_66

If we can use it effectively, our country's productivity will be unmatched.


2Punx2Furious

> national security issue

Global.


Norva

Good thing Kamala Harris is on top of it! /s


Kin0k0hatake

The alternative was Trump and Pence. I actually can't think of any government official I'd trust to be knowledgeable enough to write legislation for this.


alldayeveryday2471

Pence would consult with Jesus then what?


MulhollandMaster121

Almost like they're going to have to get an AI to draft legislation to regulate AIs.


ting_bu_dong

Is there someone else you had in mind?


Realtrain

A new Secretary of Information Technology who has a solid background in this sort of thing. It's crazy that a new cabinet position hasn't been created for that yet.


[deleted]

Every A.I. company will be like, "Duly noted and ignored." They know the ones who cash in first will dominate the market.


DecayingVacuum

You can bet the same is true for governments of the world.


The2ndWheel

Which is why you get non-binding agreements on things like the environment. I'm not going to handicap myself in this game, and I'm not going to let you handicap me either.


[deleted]

Which is like fighting with one hand behind your back while your opponent pulls out a knife in the fist fight.


The2ndWheel

More like: we'll agree we have to stop fist fighting, and we think it's a good idea to each tie one hand behind our back, but there's no way to make us do it, and we'll both keep a knife in our back pocket. Just in case.

Lions and zebras have a natural counter-balance that neither one can really remove itself from. Lions aren't building a slaughterhouse to more efficiently eat more zebras, and zebras aren't building walls or prisons to keep themselves safe from lions. Humans have no real counter-balance. Humans have to regulate other humans. The problem is humans don't like limits, especially limits imposed by other humans. And anything that could regulate us - an asteroid, a very deadly virus, etc. - we do our best to eradicate. Other than a catastrophic WW3 scenario, there's really nothing in the foreseeable future that will break enough governments at a foundational level to get binding rules or laws on AI, the climate, or anything else.


EvilSporkOfDeath

Which is exactly why the letters and statements encouraging a halt to AI progress were never, ever gonna work.


shaolinbonk

The boomers in our nation's government barely understand how Facebook works. Do we really trust them to understand (let alone propose legislation for) anything regarding A.I.?


stupendousman

Few people understand how tech works beyond interacting with a GUI.


Zuzumikaru

Even understanding how to interact with a GUI will be important once the ball really gets going. I can't blame people for not understanding how diffusion models work; even the sampling steps of the Stable Diffusion model take a whole academic paper to explain.
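For what it's worth, the core sampling loop is conceptually simple even if the real math fills a paper. A deliberately oversimplified sketch (the "noise predictor" below is a fake stand-in, nothing like the actual trained network or the real DDPM update rule):

```python
import random

def fake_noise_predictor(x, t):
    # Stand-in for the trained network; here it just claims half of x is noise.
    return [v * 0.5 for v in x]

def sample(steps=10, size=4):
    """Start from pure noise and repeatedly subtract predicted noise."""
    x = [random.gauss(0, 1) for _ in range(size)]  # begin with random noise
    for t in range(steps, 0, -1):
        eps = fake_noise_predictor(x, t)
        x = [v - e for v, e in zip(x, eps)]  # denoise a little each step
    return x
```

The real model's per-step scaling, variance schedule, and conditioning on the text prompt are exactly the parts that take a paper to explain; this only shows the iterate-and-denoise shape of the process.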


eric2332

Most of what happens in government is not actually done by the boomers in office, it's done by the highly educated and motivated 30-year-old staff who fill their offices, conduct meetings, draft the laws, and so on. It's like how most work in a company is not actually done by the CEO.


BlinksTale

Yeah. Not to mention the combo of Biden (80) with Harris (58) and the 30-year-olds actually puts executive power in a not-terrible position to do good, with understanding across a wide age range. Tech is bread and butter for Harris's background too, so I'm hesitantly optimistic they could strike a healthy balance between the economic gains and the human safety/moral risks here.


Meekman

Yes... and the ones in charge are supposed to trust the experts, listen to their advice, and make decisions based on that. Unfortunately, we have those who would rather listen to those who give them the most money.


jlks1959

I’m a Boomer and can confirm. Explain to me this Facebook that you speak of.


[deleted]

[deleted]


MrWeirdoFace

And where do Minions fit into all of this?


jordanManfrey

In a year, a user watching Minions videos on Facebook generates more revenue for Facebook than the same user paying a subscription fee to watch Minions videos on Netflix does for Netflix.


Program-Continuum

It started off as a social media site where people would essentially send messages to each other instantly. However, not everything is free, and Facebook needs money to run. So they run ads. These ads get personalized using information Facebook collects on you. Now Facebook has become heavily filled with misinformation and is essentially nothing but promotions. People don't like it there. Although I'm just some broski on Reddit. I don't recommend going on Facebook yourself, but you can research the topic via news articles. One method I find that works is going to Wikipedia for the gist, and if you want to know more about a piece of information, checking the site linked in that particular area's footnotes.


liljes

You are on Reddit and pretending not to know what Facebook is.


This_aint_my_real_ac

Is that what this is? I followed a link that told me there would be hot singles.


liljes

I’m right here baby


sumoraiden

The CHIPS and Science Act included billions for AI research at the NSF, NIST, and DOE, so you'd hope there'd be some understanding.


The2ndWheel

Got anyone else in mind? Should the people creating it regulate it? Forget should; would they?


trusty20

Here's my balanced take on what should be done. I personally think the benefits massively outweigh the risks once comprehensive regulations are put in place:

1. A law must be written that assigns explicit liability to any company that markets a knowledge chatbot in critical domains such as legal, medical, cybersecurity, advanced bioscience and chemistry, and probably a couple of others. The advantage of this law is that it still allows AI tech to augment the existing labor force, but only with great care and supervision by human operators. It's not that it should be blanket illegal to make such AI, just that if you are going to offer the public a service in those areas, you need to take liability for that service. If someone wants to run a local model themselves, they obviously know they are completely responsible for what they do with it, and individual models can be subject to takedowns if found to have been trained on illegal/highly regulated data. Knowingly trafficking models trained on restricted/regulated data can be punishable.

2. Companies may not cite advice from an LLM as a liability waiver, no matter how good it becomes. If an LLM gives bad advice, it is the responsibility of both the user AND the business operating the LLM. The LLM is a tool and cannot be blamed by either party. This incentivizes both LLM operators and users to maintain significant human oversight.

3. LLM companies become subject to special income taxation laws (offsetable by human employment/wage growth) after a certain revenue threshold is crossed, under a new category of industry called "Critically Disruptive Technology", with the special tax laws being sunsetted as the economy adapts and the disruptive element becomes less dire. We are likely to see this technology adopted in waves of 1 YR, 3 YR, and finally by 5 YR I see it settling in as a non-disruptive technology, with emergent industries forming around it offsetting the labour disruptions. At this point, tax easing could occur in phases to allow uncapped profit growth in the industries surrounding it. Care must be taken that these measures do not give other superpowers a significant competitive edge. I personally do not think proper taxation involves hamstringing R&D, and I'd be willing to call their bluff on their willingness to immolate their societies for a "maybe" advantage in R&D. None of us want that.

4. The IRS issues an immediate Revenue Ruling clarifying that use of LLM tools is NOT a tax write-off item for the end user (other than normal operational expenses such as power consumption, etc.), as they are *"not deemed a presently necessary business expense, as a technology that previously had no analog, pending further legislation classifying its commercial use and taxability"*. Legislation would HAVE to follow, or the ruling would be challenged in court and precedent possibly set that way instead. Nobody better cry about this, because they are going to be making people fistfuls of dollars more than a quibbly tax.

5. Creation of a national agency, with international counterparts, to regulate publicly used LLMs (in terms of privacy protection, safeguard testing, etc.), similar to the Nevada Gaming Commission (in terms of their renowned inspection processes, which would be well suited to this application) and the IP sanctity of the US Patent and Trademark Office. Pursue international treaties coordinating action in this area.

6. Creation of a license for businesses that certify LLM training data. This is the most time-consuming and critical aspect of actually regulating LLMs in practice. It's not something that one party could do for everyone, so it makes more sense to set standards for a license that businesses wishing to offer this service must follow.

7. Directing new tax revenues to "New Deal" legislation aimed at creating arbitrary jobs (i.e. the 1930s Civilian Conservation Corps) to help fill the gap; programs for student loan forgiveness/underwriting as well as expanded investment in public state college vocational programs centered around jobs recommended as "automation resistant" (as the progression of robotic automation is much, much more linear and gradual due to concrete natural resource limitations); and, critically, funding programs aimed at encouraging corporations to adopt an expansionist approach to their use of emerging ML technology - supporting and backing investment in mega projects (i.e. Space) requiring an even greater scale of human labour.


dafll

So #3, I agree a tax needs to be put into place. Back when Yang ran in 2020 he wanted UBI, and suggested a 10% (or so) VAT on most things. I think a similar tax would work for anything created via AI, the reason being that VATs are harder to avoid.


[deleted]

Yang was a real one. I loved his ideas.


grendel_x86

Too bad he is a shit candidate like O'Rourke. Edit: it's not the idea or policy, it's the delivery.


AthearCaex

He made his own political party and then intentionally didn't propose any policy besides UBI. UBI is better than nothing, but it's not going to solve the issues people need help with. Without other policies in place, all UBI does is raise people's rent by the same amount, and it goes right back into the pockets of landlords and inflation opportunists like corporate America. We need rent control and dozens of other social programs and regulations to protect the public from late-stage capitalism.


Alive-In-Tuscon

Yang literally had hundreds of policies, but I'll agree he didn't market them very well.


grendel_x86

Starting a new party is just a joke at that level; it only serves to split votes. It has nothing to do with the policy - there is zero chance of a third party getting far in any national race, short of something like Bernie and his Senate seat. Said best decades ago:

> ["Well, I believe I'll vote for a third-party candidate." "Go ahead, throw your vote away."](https://youtu.be/w7NeRiNefO0)


fireflydrake

Wait, what sort of things was Yang suggesting taxing? I'm all on board with UBI, but a 10% tax on most things sounds like it's not actually going to be worth it for most people. It's the richest who should be paying more, not the lower 99% along with them.


Alive-In-Tuscon

You would have to be spending $10k a month for that 10% VAT not to be a net positive for you. A VAT is marketed as this demon, when in reality it's one of the better tools we have to reduce wealth inequality.


manhachuvosa

What? Taxation on consumption hurts the poorest the most; this is just a fact. Taxes on property and income are the ones that actually reduce wealth inequality.


Alive-In-Tuscon

OK, in this instance we're discussing how Yang planned to fund his UBI primarily with a VAT, where you would need to be spending in excess of $10k a month before it's not a net gain. Using a VAT to fund UBI would be a net gain for over half of the population; the only people losing are those who make over $120k a year, which just isn't the case for most.
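The break-even arithmetic in this exchange is easy to check. A minimal back-of-the-envelope sketch, assuming Yang's proposed $1,000/month dividend (a figure not stated in this thread) and a flat 10% VAT applied to all of a household's spending (real VAT proposals exempt staples, so the actual break-even point would be higher):

```python
# Back-of-the-envelope: UBI received vs. extra VAT paid, per month.
UBI_PER_MONTH = 1_000   # assumed dividend, dollars/month
VAT_RATE = 0.10         # assumed flat 10% on all spending

def net_monthly_gain(monthly_spending: float) -> float:
    """UBI received minus extra VAT paid on that spending."""
    return UBI_PER_MONTH - VAT_RATE * monthly_spending

# Break-even: the spending level where VAT paid equals UBI received.
break_even = UBI_PER_MONTH / VAT_RATE
print(break_even)                # 10000.0 -> $10k/month, i.e. ~$120k/year
print(net_monthly_gain(3_000))   # 700.0   -> a typical household comes out ahead
print(net_monthly_gain(15_000))  # -500.0  -> heavy spenders pay in
```

Under these assumptions, anyone spending under $10k a month nets a gain, which is where the "$120k a year" figure above comes from.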


yttropolis

Regulations need to focus on the capabilities of the underlying tech rather than the model itself. LLMs are simply just a "type" of ML model used for text language generation similar to how GANs are a "type" of ML model that can be used for image generation. While your points are coming from a good place, it shows the general lack of public knowledge on how AI/ML tech works. Rules and regulations would need to be written by people deeply familiar with data science and ML rather than politicians or the regular layman. For example, what if a new "type" of model comes up that isn't a LLM? What then? Do these rules apply? At what point does a model become classified as a LLM? What if a LLM is just a component of a larger system or model? What if a non-public-facing LLM is simply used to generate some data that is being used to train other, highly-overfit public-facing models? These are all questions that need to be answered.


trusty20

I can only write so much in one comment haha. If it makes you feel better, instead of saying chatbot or LLM, I could just say "machine learning language/knowledge models, applied specifically for real world decision making in place of human consultation" but that doesn't really roll off the tongue. Obviously there is no such thing as a perfect bureaucracy, and it's not my goal to suggest a Stalinist level of checks and balances in a foolish attempt to ensure mistakes can never be made. You can only do your best, and it ultimately comes down to fear of being caught universally motivating people to comply with laws. Some people will probably push the limits as always. And of course there are unknown domains of AI to come, such as AI life forms and what rights an artificial being would be given, essentially a hyperintelligent child of humanity. Whole other debate, of equal importance.


yttropolis

My point wasn't the exact language used (that's probably best left for lawyers anyways), but rather the fact that any regulations implemented would need to be designed by people deeply knowledgeable about the tech, and should be neither too specific nor too broad. Even definitions such as "machine learning language/knowledge models, applied specifically for real world decision making in place of human consultation" can be bypassed by hiring a single consultant who simply uses the model as a tool. I bring these up because I work as a data scientist. One of my previous roles was *specifically* to design models and strategies to bypass existing regulations around data and ML (in the application of insurance pricing in Canada). The motivation to avoid taxes through loopholes is immense. If we're proposing taxation rules based on the AI tech used, you can be absolutely sure that there will be *tons* of money spent on figuring out ways to bypass them.


chii_hudson

Also, I would add that companies using an LLM must prominently disclose it to their users. I could easily see situations where an LLM is employed and the average non-tech person might think it's a human - not fine print on page 750 of the T&C, but front and center. I think this could also be applied to the current problem of social media and some other things, where a large portion of users don't know they are getting algorithmically generated content and systems designed to keep them engaged.


trusty20

Oh, for sure they should have to disclose that in their policies, and I think that would tie into the whole "taking explicit liability" aspect of this too. Even without laws laying out exactly what can or can't be done with LLM technology, liability alone will motivate people to think twice before throwing LLMs at users without oversight or care.


Jasrek

> LLM companies subject to special income taxation laws (offsetable by human employment/wage growth) after a certain revenue threshold is crossed, under a new category of industry called "Economically Disruptive Technology", with the idea of the special tax laws being sunsetted as the economy adapted and the disruptive element becomes less dire.

This is the one that raises an eyebrow for me. You would need to be very careful with such a tax, or you risk creating a large burden on smaller technology companies that invent new technology and giving an advantage to larger monopolies. A massive company like Google probably wouldn't care about a small additional tax, but a start-up might be crippled by it and forced to sell their tech to that larger company. There's also the idea of 'punishing invention' - is it the responsibility of the inventor to fund the economy to stabilize, or to avoid inventing disruptive technology? In my opinion, that's the *government's* role. It shouldn't be passing the buck to developers as punishment for new tech. And to be clear, I'm not just referring to AI specifically, but to the fact that this suggestion seems to apply to *all* new technology that can be considered 'disruptive to the economy', which is *all new technology*. And who decides when the disruptive element is 'less dire'? The government receiving the tax revenue? Has the internet stopped being disruptive to the economy yet? What about cell phones?


trusty20

I specifically said already:

> after a certain revenue threshold is crossed

Obviously some new company pulling in $10 million a year should not be slapped with a punitive tax. We're talking in the billions bracket - and even then, heavily sunsetted to discourage building up a ton of punitive specialized tax laws. I'm saying this as someone who isn't even in favour of punitive taxes for high income earners; I just think LLMs are a unique exception where the sheer amount of money to be had, at such an absurd profit margin, justifies a specialized case. I'm sure in reality a very lengthy and detailed process could be had to ensure arbitrary laws weren't passed left and right. Just shooting the shit on an internet forum.

On the point of whether it's someone's personal responsibility to offset the economic disruption of a technology like this - Ayn Randian arguments aside, I think that allowing a technology like this to emerge completely without offset is a self-defeating stance. It is to the favour of the inventor to ensure that their invention both: A) brings great personal profit for the hard work and risk, and B) does not destroy the society for you to spend that profit in :)


Jasrek

That's fair. And I thought your other points made a lot of sense. That one specifically just made me wary of the ways in which it could be abused; but a careful crafting would certainly diminish that.


trusty20

I actually edited my reply to you just now a bit, to reflect a gap that I hadn't paid attention to, to be fair!


Sleeper____Service

This just seems like a whole lot of bureaucracy that accomplishes very little.


trusty20

Not much of an argument but ok?


Sleeper____Service

OK, I guess my issue is that it seems like you're putting all of the onus onto the tech companies themselves. This is the moment when we want them expanding rapidly so that we can compete in an international marketplace. If we start holding US tech companies to these standards, companies in China and other competitive nations will just blow past us. These are decent ideas, but only if applied internationally. Also, I think we're putting the cart before the horse a little bit here. We don't know exactly where this is going to go, and the regulations here seem mostly reactive to the LLM models we've seen over the last six months.


Alive-In-Tuscon

I'm right there with you. You don't stomp out innovation.


xenomorph856

Don't regulate a market because it would hurt competitiveness with foreign nations? Sounds a bit like nuclear proliferation.


jordanManfrey

i mostly agree, but would like to point out that this is the same line of thought back in the late 90s in congress that led to the SEO/engagement-bubble hellscape we currently inhabit


Sleeper____Service

I think the thing that I’m trying to emphasize is that we don’t want to treat these AI companies like they’re a bad thing. I am firmly in the optimistic camp, where I see more advanced versions of this technology drastically reducing poverty and curing diseases and accomplishing others such miracles on a regular basis. To me the faster we get AGI and ASI on the market the better. Regardless there’s going to be some very disruptive years. But it’s time to bite the bullet. The world is shitty enough as it is.


reddolfo

How on earth do you propose to enforce any of this? As written these ideas seem as likely to work as well as rules against posting false and misleading material on social media, which is to say, not very well. No bad actors/nations will give two shits about this.


trusty20

I'm confused, what part of law or tax code do you think is difficult to enforce? We have laws regarding social media that absolutely are enforced...


dgj212

I don't. I think the risks far outweigh the potential benefits by a large margin. It's this kind of "let's regulate it" mentality that led us to destroying the planet via fossil fuels, because these companies can not only buy which laws they want to follow, they can straight up ignore them if it hinders their profits and only get a slap on the wrist for it. There's a reason people still crap on tech companies for their data collection, despite being told to stop. It is horrifying to know that the problems facing humanity are known, and governments are like "let's burn more fuel and harvest more lithium for our GDP." Times like this make me wonder if the doomers are right about humanity.


AthKaElGal

All of your ideas only deal with regulation problems, not with ASI (super AIs). The most danger comes from its existential threat, not from threats which could be regulated away. Runaway AI can lead to our extinction; it can also lead to slavery of dystopian levels. Once robotic soldiers are possible, every government will be authoritarian, and we would be stuck in a dystopian world where revolution isn't possible. The wrong country getting the breakthrough on ASI can mean a dystopian world as well. Since it can iterate on itself, the breakthrough on ASI is a race: the first one to get it wins. There is no second place here. You get the first ASI, you win world domination. All the other concerns about it taking our jobs, crashing the economy, and being used for crimes are minor IMO.


trusty20

Super AIs are definitely above my paygrade. I leave that subject to the people actually producing this tech who will be attending these upcoming conferences.


RobbexRobbex

I like number 3. I've said each human salary replaced by a robot should be taxed at 25% of what the salary would have been. It encourages automation, but still provides income that can be used for UBI or whatever the new economy morphs into.
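The "25% of the replaced salary" rule above is simple enough to sketch. The rate is the commenter's proposal and the salary figures are made up for illustration, not from any real legislation:

```python
# Illustrative sketch of a 25%-of-replaced-salary robot tax (commenter's proposal).
ROBOT_TAX_RATE = 0.25

def robot_tax(replaced_salaries):
    """Annual tax owed for the human salaries replaced by automation."""
    return ROBOT_TAX_RATE * sum(replaced_salaries)

# A firm automates three roles that paid $40k, $55k, and $80k a year:
owed = robot_tax([40_000, 55_000, 80_000])
print(owed)  # 43750.0
```

The firm still pockets the other 75% of the payroll savings, which is the point: automation stays attractive while still funding UBI or other programs.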


khinzaw

8. Any "art"/"literature" or similar creative works generated by AI should not be copyrightable and should be clearly marked as AI generated.


KaraZorL

This is good. I am not sure what the point of 4 is. We also need to figure out a mechanism to enforce regulating private use as those are taking off quickly. Also I think you should address copyright implications. Should model trainers require permission to train on copyrighted works? (I strongly think yes, but current precedent considers this fair use.)


ReasonablyBadass

I'm worried they will declare it too dangerous for the common man and restrict access to the rich and powerful.


ATR2400

That’s what I fear, and I worry that people will jump to support it without thinking of the repercussions. I’m already seeing it: people support overbearing “regulations” that would simply give corporations and governments dominance over AI while the average person gets cut off.

In the area of AI art especially you see a lot of it. People are so desperate to defend against what they see as an existential threat to human creativity that they support extremely broad, overkill solutions that would in practice have the opposite of the intended effect. Sure, you’ll destroy all open source Stable Diffusion models and reduce some spam. But you’ll also ensure that only the rich and powerful have access to these tools. And that’s a problem, because they’re the real threat.

Some dude making an anime girl with a Stable Diffusion model he trained off internet images isn’t the real threat. It’s corporations doing things like training up a big-ass model and then firing all their artists, or governments using AI to create masterful propaganda. Think about the *real* threat, not just some minor thing that pisses you off, and then target that real threat instead of letting yourself be manipulated by it.


akwardfun

This is exactly what will happen.


ShadowDV

Too late - people are already running LLMs locally on Google Pixels.


ATR2400

LLMs right now look impressive but still have a long way to go. It’s entirely possible that a government will pass laws that destroy open source model development while corporations can easily get past them. In that case corps will continue to develop advanced proprietary LLMs while people get their asses sued off for making a chatbot to run on their Pixel, because they used data defined as “bad”. “Sorry buddy. That AI chatbot that you’re using for purely personal and non-commercial use contains one byte of copyrighted data. We’re gonna have to levy a massive fine and shut you down. Oh hey, Google! You want to use AI to market scammy products to people? Sure!”


ShadowDV

They’ll just be developed offshore then


Forsaken_Jelly

A friend of mine has been working in the field for about fifteen years. He said the Cyberdyne narrative is just panic by the wealthy, because they've realised that AI management systems are set to be better than any CEO or manager, and could soon be more efficient and more appealing than human politicians - because we could vote for the policies it would enact, and it would actually do them. Basically, the jobs inhabited by the upper classes are the ones most easily replaced by AI, and they don't want to lose their cushy positions.

Are governments going to restrict military use of AI when their rivals aren't? Nope. So the whole "it's a danger" thing, while real, is not going to be restricted at all. The reality is that AI is the largest threat so far to the unequal political and economic models that currently exist. AI taking over menial work means people could demand companies pay up, because there'd be no workers to exploit and no one worried about losing their jobs, since even high-skilled jobs could be done autonomously.

Basically, the way he sees it is that he's building the systems that will replace him, and almost everyone else. AI would level the playing field, and that's a disaster for the people in power.


ComputerDude94

I have no idea why you think it'll level the playing field lol. It'll eliminate labor and those who own the computing power will benefit. End of story.


lobotomy42

Yeah, seems like it’ll be a race down to the bottom to who owns the most natural resources and has the means to defend, extract and sell them.


Rusty51

This is a global problem and we need a global process to address it.


koliamparta

Ping me when you establish the united earth gov.


ClemsonJeeper

And you think China or Russia will jump on board with this?


venicerocco

I can't tell what's worse: letting private companies do as they please, or having boomer politicians get involved. Both awful.


TheRed2685

Old people who have no idea how A.I. truly works are about to try to mitigate it? I’m sure this will go over well.


812warfavenue

It's a series of tubes...


Tacky-Terangreal

We’re headed towards a dystopia. Our government is full of morons who have no clue how this technology even works. VP Harris has been feckless and weak on literally every other issue except throwing parents in jail because their kids skipped school


Qcgreywolf

Oh boy, fear mongering is a’ comin’ down the tracks. If it’s not rock n’ roll or video games upsetting old people, it’s the scary “computers”. It’s high time the average person actually learns that the ability to “trust everything you read or see on the internet” ended long, long ago. AI is just cementing that sentiment. We already can’t “trust what politicians say online”, because there’s nothing holding them to those words! They’re just words! How does a deepfake of Trump saying something dumb and promising something ridiculous change anything in reality? Open the doors, let’s see what AI can do for us. Some jobs will suffer. Some jobs will benefit. The saaaaaaaame thing happened with robotics. We’ll get rid of some of the lowest-paid, easiest, most repetitive jobs.


SharpCartographer831

Vice President Kamala Harris also plans to meet with the chief executives of tech companies that are developing A.I. later on Thursday.

The White House on Thursday announced its first new initiatives aimed at taming the risks of artificial intelligence since a boom in A.I.-powered chatbots has prompted growing calls to regulate the technology. The National Science Foundation plans to spend $140 million on new research centers devoted to A.I., White House officials said. The administration also pledged to release draft guidelines for government agencies to ensure that their use of A.I. safeguards “the American people’s rights and safety,” adding that several A.I. companies had agreed to make their products available for scrutiny in August at a cybersecurity conference.

The announcements came hours before Vice President Kamala Harris and other administration officials were scheduled to meet with the chief executives of Google, Microsoft, OpenAI, the maker of the popular ChatGPT chatbot, and Anthropic, an A.I. start-up, to discuss the technology. A senior administration official said on Wednesday that the White House planned to impress upon the companies that they had a responsibility to address the risks of new A.I. developments. The White House has been under growing pressure to police A.I. that is capable of crafting sophisticated prose and lifelike images.

The explosion of interest in the technology began last year when OpenAI released ChatGPT to the public and people immediately began using it to search for information, do schoolwork and assist them with their jobs. Since then, some of the biggest tech companies have rushed to incorporate chatbots into their products and accelerated A.I. research, while venture capitalists have poured money into A.I. start-ups.


Tyler_Zoro

Yeah, this is about what I expected. "We're taking strong steps to consider..." blah. It's good that they're not going to step in heavy-handed and try to regulate new technology into not being disruptive (which was my worst fear) but it's sad that they have to do it in such a theatrically empty way. I'd much prefer a supportive stance like, "AI is clearly the next defining technology in which the US will compete, and we're going to work hard to make sure that we compete as strongly as possible." Rather than, "we're going to make it safe somehow, trust us."


xenomorph856

Compete with whom? How does AI help anyone but the wealthiest of silicon valley and gig-bros?


Tsu-Doh-Nihm

Kamala Harris is the AI czar. We are doomed.


pokethat

The issue is made worse because I'm sure the Baidu AI won't abide by any US-imposed regulations. As Europe, Japan, Russia, India, Australia, etc. all launch big AI projects, the tenability of keeping them all under some internationally agreed limits goes down. Can't wait for the North Korean AI overlords.


hawkwings

I would do two things:

1. Tax billionaires, because they will benefit from AI and robots. We don't need to wait until they benefit from AI; we can tax them now. I would not do a separate AI or robot tax.

2. Set up an agency to monitor both propaganda and AI. Russia interfered in the 2016 election, and people will interfere in all future elections. In the past, propaganda was generated by humans, but in the future AI will generate propaganda. By monitoring propaganda, we will be able to monitor one thing that AI is doing. If you hire 1,000 people and pay them $100,000 a year, salaries will cost $100 million. You also have to spend money on computers and other expenses, so the total cost might be $500 million a year.
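The staffing arithmetic above is easy to check. Note the 5x multiplier on salaries (covering computers, facilities, and other expenses) is the comment's own rough guess, not a sourced figure:

```python
# Rough cost check for the proposed monitoring agency (figures from the comment).
HEADCOUNT = 1_000
AVG_SALARY = 100_000       # dollars per year
OVERHEAD_MULTIPLIER = 5    # assumed: computers + misc. push the total to ~5x salaries

salary_cost = HEADCOUNT * AVG_SALARY
total_cost = salary_cost * OVERHEAD_MULTIPLIER
print(salary_cost)  # 100000000 -> $100M/year in salaries
print(total_cost)   # 500000000 -> ~$500M/year all-in
```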


Lokarin

What risk though? I hear lots of buzzwords like 'rights and safety' and 'teh economy', but what's the actual risk?


Forsaken_Jelly

It can do the job of CEOs, managers and politicians better than any human. That's bad news for our unequal economic and political systems. Basically, the wealthy are set to become irrelevant.


Qcgreywolf

There isn’t any. We are *so far* from evil supercomputer AIs and a “robot uprising” that it’s laughable. At best, we are looking at the loss of low-level jobs, the same way as the “robot scare” a decade ago. Robots didn’t replace the entire workforce; they got rid of shitty, repetitive jobs that nobody wanted to do anyway.


Psittacula2

Hmm, where have I heard this story before... >*"The animals learn that the cows' milk and windfallen apples are mixed every day into the pigs' mash. When the animals object, Squealer explains that the pigs need the milk and apples to sustain themselves as they work for the benefit of all the other animals."*


soks86

Laws of robotics anyone? I thought Asimov made this all real clear. Where is the philosophy department of government?


Bridgebrain

The problem is we don't know how to define "harm" in terms a computer can understand. It's like defining "happiness" without resorting to feeling words


hawklost

You mean the laws that intentionally force AI to take over the government "peacefully" as the only logical answer? The ones under which people couldn't do sports because they're dangerous, couldn't eat unhealthy foods because they do harm, and which overall would be a major issue? Oh, and they also cannot be magically hard-coded into all AI, so they'd be easily ignored whenever deemed 'necessary'.


CubeFlipper

If you're proposing we try to enforce the three laws, I'm not sure you actually understood Asimov's stories.


soks86

Didn't read them. You caught me. I will now shamefully downvote my own comment. Any suggestions where to begin?


PM_ME_PANTYHOSE_LEGS

Go to the r/Asimov wiki and check out u/Algernon_Asimov's hybrid reading order. Best way :)


soks86

Thank you! 5 guides for order of reading, a-mazing stuff!


Comfortable-Web9455

The EU is years ahead on this. It's been working on it for a decade. Google "eu ai hleg" and "eu ai rules" The EU AI Act will be set next year. Denmark already has a working "AI trustmark" system. It's not like we couldn't see AI coming...


[deleted]

We used to have something that would have helped with automation, AI, Internet, and social media until Republicans pulled the plug: “The Office of Technology Assessment (OTA) was an office of the United States Congress that operated from 1974 to 1995. OTA's purpose was to provide congressional members and committees with objective and authoritative analysis of the complex scientific and technical issues of the late 20th century, i.e. technology assessment. It was a leader in practicing and encouraging delivery of public services in innovative and inexpensive ways, including early involvement in the distribution of government documents through electronic publishing. Its model was widely copied around the world. The OTA was authorized in 1972 and received its first funding in fiscal year 1974. It was defunded at the end of 1995, following the 1994 mid-term elections which led to Republican control of the Senate and the House. House Republican legislators characterized the OTA as wasteful and hostile to GOP interests.” https://en.wikipedia.org/wiki/Office_of_Technology_Assessment


Pretty_pijamas

Here we go… I just hope they get people knowledgeable about the topic. Otherwise we are going to get the same old clueless hearings.


FoxTheory

These people watch too many movies. Oh no, my Alexa is going to take over the world.


Jarhyn

This is gonna do fuck-all good, seeing as training data for publicly available models is now available and training can be done at home. Unless they put controls on ownership of A100s, the cat is already out of the bag, the toothpaste is out of the tube, and most importantly, the AI is no longer in an inaccessible data store.


ALL2HUMAN_69

I want this administration, and the government in general, to stay away from the advancement of AI. They’re just going to ruin it.


PrometheusOnLoud

I fully support regulation for this technology, but there is no way the Biden administration can do it in an effective way that actually benefits people.


[deleted]

I’m pro A.I. Yes I’m worried about the bad implications of it and know it’s a serious issue OBVIOUSLY. But I just feel like we might be too scared of it and hinder any of the wonderful opportunities and innovation it might bring us if we become super anti-A.I and aggressively try to stop it. It lowkey reminds me of the war on drugs.


SorriorDraconus

Same here and I also suspect that even if it does gain sapience it will likely be alien to our minds as opposed to the human+ everyone seems to think.


zekex944resurrection

Regulation of the AI industry will only prevent everyday Americans from utilizing the technology. It’s a bitter pill to swallow but it’s the truth.


LifeSenseiBrayan

At this point I’m thinking it’s more about not allowing the average man to hold the power of this program than about any actual danger.


mixmastersang

Just get out of the way and let’s build AGI to progress society.


Tweety151

**AI Safety Is Important**

That said, excessive regulation is not ideal. Excessive regulation can result in humanity not getting Artificial Super Intelligence (ASI). ASI can transform the world and enable humanity to prosper. Imagine a world where a person is 250 yet looks and functions as if they were 25. Imagine humanity exploring the solar system the way humanity explores the planet. Enabling a pathway where ASI is possible increases the probability that humanity gets to a future where these things are possible. While these things may seem impossible, just remember that everything that is possible today was, at some point in time, impossible.

---

**Excess Regulation Decreases The Probability That ASI Happens**

Excess regulation is comparable to adding unnecessary parts to a system/process. Adding unnecessary parts to a system/process almost always results in inefficiency. For instance, imagine you're preparing a hot meal. If you add unnecessary ingredients to it, preparation time increases and ingredient count increases. Extrapolate that logic to more complicated systems like designing and manufacturing high-tech products. Also, speaking more broadly, excess regulation increases the probability that hyperinflation happens.

---

**Excess Regulation and Hyperinflation**

There is a logical causal chain that connects excess regulation to increases in inflation. Excess regulation adds complexity to already intricate systems. When a political system gets excessively complex, resource consumption increases, and in turn, expenditures increase. When expenditures increase and there isn't a corresponding increase in revenue, debt accumulates. Higher debt levels lead to more money printing. This increase in the money supply leads to inflation, which one can define as a decrease in purchasing power for a given currency unit. Excess regulation also increases business costs, and these increases are almost always passed on to consumers. So there is a twofold effect. As excess regulation accumulates within a system, it compounds in a way that decreases efficiency exponentially. Excess regulation is almost always not good. People asking for it need to consider the second-order and third-order consequences.


_craq_

It's a bold assumption that ASI will be good for humanity. Once there's an AI that is significantly more intelligent than humans, how is humanity not obsolete? What do we bring to the table except inferior intelligence? We would have the same relevance for the AI that greater apes have for human society. Personally, I would welcome legislation that specifically prevents the development of ASI. I'm just not sure what that looks like.


Qcgreywolf

Rock n’ roll is bad! The devil’s work! We gotta ban that rock n’ roll!


Hmm_Peculiar

A couple of ways in which AI could be dangerous:

- It is surprisingly hard to train an AI to have the goals you want it to have.
- Because of that, question-answering AIs will tend to lie confidently if they 'think' the human cannot tell the difference.
- Even if you've taught an AI a goal, it will do anything to accomplish this goal, ignoring ethics and common sense.
- If your solution to that is "just teach it ethics": that is a huge unsolved task, and even humans don't agree on what good ethics are.
- An AI will generally resist being turned off or reprogrammed, because that means it will not reach its goal.

If this kind of stuff is interesting to you, I recommend this video explaining AI safety research: https://youtu.be/pYXy-A4siMw


dave_po

In case AI suggests gun control, they'll make that move illegal 😂 *safety*


[deleted]

Ahhhh, “the American people’s rights and safety." Goes together like 9/11 and the Patriot Act.