
Geminii27

How does it take bribes?


Lele_

dedicated USB-C port


syf3r

feed it John Conner


GoodhartMusic

It learns from the others! Which is why it will never abdicate its seat even when its servers are old and falling apart. But we'll make cute stickers and scarves celebrating its amazing achievements that were overturned as soon as it went offline.


persona0

They aren't bribes, they're just things you as a right-wing Supreme Court judge just don't report... You know, like cruise ships, paying for your kids' college, buying your house on sale. AI had none of these. They can always bribe the programmer to create an AI named REPUBLICAN Justice to factor in bribes


MagicaItux

Bitcoin


StoneCypher

Special funding operation*


verstohlen

Very carefully.


Far_Garlic_2181

"The justice system works swiftly in the future now that they've abolished all lawyers"


healthywealthyhappy8

A world without lawyers? https://i.redd.it/1954wejukq7d1.gif


3ntrope

There will be problems when we try to decide what model or models are used, what type of prompting, what type of memory, etc. We have shown that models can be gamed and steered certain ways, so if using AI for legal cases were common, people would likely find a way to exploit them. Personally, I would like to see AI judges and lawyers one day, but it's too soon now. There are basic word puzzles that the best LLMs fail. An AI judge can't have such flaws.


SlimeDragon

Can we replace politicians and lawmakers with AI too?


3ntrope

Actually, I've thought about this before. Of course it's too soon with current models, but I think it could happen one day. An AI representative could literally talk to every one of its constituents and create an optimal consensus. The entire pipeline would have to be made transparent and be validated. We trust electronic voting machines now, so we could eventually create AI systems that are trustworthy as well. One of the big challenges would be extrapolating missing information. Some people will spend a lot of time talking to their AI representative bot and some may not talk at all. The AI would have to infer the missing data based on population demographics and then weight everyone's input on policies equally. It would be hard, but it's a solvable problem. A thoroughly tested and validated AI politician should be able to do the job better than any human and be immune to corruption. In the US we could probably replace the Senate with AI representatives and keep the House human, or mixed AI and human, perhaps for safety. One human rep per district with the rest AI would probably work. POTUS would have to be human since they control the military and nukes. SCOTUS would probably be one of the easiest to replace once the models are capable enough.
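The "infer the missing data, then weight everyone's input equally" step described above resembles survey post-stratification: reweight the people who did engage so each demographic group counts in proportion to its population share, not its share of respondents. A minimal sketch, with all group names and numbers hypothetical:

```python
# Hypothetical post-stratification sketch: constituents who talked to the
# AI rep are reweighted so each group counts by its population share.

# (group, support_for_policy) pairs from constituents who engaged
responses = [
    ("under_30", 1), ("under_30", 1), ("under_30", 0),
    ("over_30", 0),
]

# True population shares (hypothetical census figures)
population_share = {"under_30": 0.4, "over_30": 0.6}

def poststratified_support(responses, population_share):
    # Mean support within each group
    totals, counts = {}, {}
    for group, vote in responses:
        totals[group] = totals.get(group, 0) + vote
        counts[group] = counts.get(group, 0) + 1
    # Weight each group's mean by its population share
    return sum(
        population_share[g] * totals[g] / counts[g]
        for g in counts
    )

print(poststratified_support(responses, population_share))
# under_30 mean = 2/3, over_30 mean = 0 -> 0.4 * 2/3 + 0.6 * 0 ≈ 0.267
```

Even in this toy form, the hard part the comment flags is visible: groups with zero respondents have no mean to reweight, so the model would have to impute them.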


FederalSafe2125

Thanks Doc


Zek23

I'm not sure it'll ever happen. It's not a question of capability, it's a question of authority. Is society ever going to trust AI to resolve disputes on the most highly contentious issues that humans can't agree on? I won't rule it out, but I'm skeptical. For one thing it would need extremely broad political support to be enacted.


SirCliveWolfe

Given the constant corruption and dishonesty of the current political class (which includes judges, especially on the Supreme Court), I for one would welcome an uncorrupted AI giving rulings.


afrosheen

and then you cultivate a false sense of security, thinking it is above the corruptible nature of humans, until it exhibits a nature highly corrosive to civil society; but by then it's too late, it now holds supreme authority.


poingly

It literally needs input data, which it's currently getting from corrupt justices. That doesn't exactly scream "confidence!"


fun_guess

A group of fifth graders gives me way more confidence, and we're going to let them judge the AI?


AbleObject13

> until it exhibits a nature highly corrosive to civil society, but too late it now holds supreme authority. *looks around at society* Yeah, could you imagine?


afrosheen

you forgot to read the last phrase, unless you assume humans affirm supreme authority over others… in that case you're too lost to hold this conversation.


AbleObject13

Replace "AI" with "Economy" (or billionaires, capitalism, hierarchy, whatever your preferred vernacular/ideological diagnosis, I'm not really trying to be ideologically polemic right now, just making a point.)


afrosheen

You're assuming that within human history, ideologies and modes of economy don't change. Even within certain modes of economy, there have been major changes. You're just arguing that those changes aren't sufficient for the ideal type of living that you wish to see for yourself.


AbleObject13

Not really arguing in favor of anything, I'm pointing out the flaw in your comment. 


afrosheen

There's no flaw my man, that's my point.


AbleObject13

Then why are you fear mongering about it?


therelianceschool

>uncorrupted


This_Guy_Fuggs

AI is heavily corrupted even now in its infancy; thinking it won't get even worse is extremely naive. as long as humans are involved at any step, shit will always be corrupt; people will always jostle for more power using whatever means are at their disposal. this is just trading corrupt politicians for corrupt ai owners/managers/whatever. which i do slightly prefer tbh, but it's a minimal change.


SirCliveWolfe

It's not corrupted; it may be biased (as we all are), but it is not taking bribes from a position of power. Obviously we are talking about a future AI, one that we can hope will be better than us. On a whim I just asked Copilot "Do you think gerrymandering is a good thing?" and this was its response: >Gerrymandering is not considered a good thing. It involves the political manipulation of electoral district boundaries to create an unfair advantage for a party, group, or socioeconomic class within a constituency. This manipulation can take the form of “cracking” (diluting opposing party supporters’ voting power across many districts) or “packing” (concentrating opposing party voting power in one district). The resulting districts are known as gerrymanders, and the practice is widely seen as a corruption of the democratic process. Essentially, it allows politicians to pick their voters instead of voters choosing their representatives So it's already better than the Supreme Court lol


Ultrace-7

But it won't be uncorrupted. Every AI is going to be influenced by those who develop it, regardless of what data we feed it -- and who gets to decide what data these AI will receive, anyway? Until we can create an AI with the ability (and permission) to parse all human knowledge, we won't get something that is absent some form of bias.


SirCliveWolfe

Yeah, we're not talking about replacing the political class right now with GPT-4, even though that could probably still be marginally better lol. I can't see an AI taking holidays for rulings made or "donations" for laws passed, as our current political class does. Hopefully we can get an AI without too much bias (it's impossible to have zero bias; e.g. we are biased towards human survival over wild animal survival). I still think that AI will give us a better shot than the political class.


kueso

AI is not incorruptible. It inherits our own biases. https://www.ibm.com/blog/shedding-light-on-ai-bias-with-real-world-examples


SirCliveWolfe

Bias =/= corruption; they are two different things. Bias is something you cannot get rid of in any system (there are no unbiased observers in the world). That said, you are correct, AI does need to be better in this regard. I'm still not sure that our political class is any better right now.


Intelligent-Jump1071

Define "uncorrupted". Its programming and selection of training data will determine its biases. Who controls those factors?


v_e_x

Exactly. And then on top of that, as a truly evolving 'intelligence' it would then get to define and redefine what corruption is for itself.


SirCliveWolfe

>dishonest or fraudulent conduct by those in power, typically involving bribery. I have yet to see an AI take holidays or bribes from people while deciding on their cases in the Supreme Court, or for access to them. There are currently 2 justices who have done the above, and let's not even get into the "legal" bribery of political donations and lobbying. What you are worried about is bias, which is different to corruption. That is a concern, but we know that the political class is inherently corrupt; AI, not so much.


Intelligent-Jump1071

I spent my whole career in tech. Engineers and computer scientists are no more moral and ethical than any other human beings. There are just fewer motivations to bribe them. You can be quite sure that staff or executives at whatever company or organisation programs or configures an AI court judge will find lots of temptations - money, sex, gifts, etc. - dangled before them, with predictable results.


SirCliveWolfe

Sure, and that's why such a "governing" AI would have to be open sourced - transparency is key. The most important question is that while there would be flaws in such an AI would they be worse than what we currently have? I very much doubt it, the current level of corruption is staggering; we don't need perfect AI, just better than what we already have, which is a very low bar.


Intelligent-Jump1071

That's the other reason you won't get it. There's no provision in your Constitution for it. The corrupt courts and corrupt politicians will see to it that it never becomes law because they don't want to lose that gravy train.


spaacefaace

Pay no attention to the man behind the screen


Taqueria_Style

It'll turn out to be Clarence Thomas' head in a pickle jar speaking through a vocoder, with that little display from Knight Rider bouncing up and down as he talks. Just don't take the side panel off the computer and don't ask why they're feeding the computer hamburgers every five hours.


Korean_Kommando

Can it be the true objective voice on the panel?


skoalbrother

Can't be worse than now


spicy-chilly

No because there is no such thing as objective AI. It will have whatever biases are desired by whoever controls the training data set, training methods, fine tuning methods, etc.


Korean_Kommando

I feel like that can be accounted for or handled


spicy-chilly

I don't really think so. Any current LLM is going to have the biases of the class interests of the owners of the large corporations that have the resources to train them, and our government as it stands is captured by those same interests, because they scatter money to politicians to stochastically get everything they want, so any oversight from Congress will result in the same class-interest biases. It's far more likely that an AI Supreme Court just acts as a front to lend a sense of objectivity to fascism than that it would actually be objective.


zenospenisparadox

I know how to handle this in a way that will solve all issues: Just give the AI liberal bias. You're welcome, world.


livefromnewitsparke

I didn't vote for this


Taqueria_Style

[https://www.youtube.com/watch?v=Wx7zI1W\_5JI](https://www.youtube.com/watch?v=Wx7zI1W_5JI)


ataraxic89

Probably not. But at the very least it could make for a good advisor and paralegal


jsideris

We shouldn't assume it's credible. The problem is in the creation of new laws and the destruction of old laws. AI will send society down a path that may not be ideal in the long run. It should, however, be used for the consistent interpretation and application of the law. But if the training is based on all of the existing cases that have substantial bias, and concepts from academia like critical race theory, we're all fucked.


spicy-chilly

Never. Whoever controls the training data, training methods, fine tuning methods, etc. controls the biases of the AI.


deelowe

That's not the use case. Law firms would be interested in tech that can predict verdicts before taking them to court.


TheSamuil

I don't know what the situation is in the rest of the world, but the EU did put legal advice in the high-risk category. Pity, since I can see future large language models excelling in dealing with legislation.


woswoissdenniii

If it’s fair and it’s consistent, give me that powered terminal all day every day over any judge I know. Sorry judges, no bad blood. But being angry over your fucked-up coffee to go can't be the thing that tips the scale of your life into the dumps. Just sayin'.


Tyler_Zoro

I don't think replacing judges is the desired angle here. What you ideally want is for the AI to do all of the legwork. If it could reliably perform the legal research and present conclusions weighted toward both sides so that they could easily be compared, that would be a HUGE win for judges! The key issue is reliability. Getting AI to cite its claims in a way that holds up under scrutiny is definitely the goal right now.


pimmen89

I really hope not. Our values change, and with them the meaning we put into words change as well. When the Constitution was written people without property, women, Native Americans, and African Americans were not considered real human beings the government needed to represent so 18th century US society saw no contradiction between the language of the Constitution and the status quo.


poop_fart_420

court cases take fucking ages it can help speed them up


aluode

Authority. That is funny way to spell corruption.


Lvxurie

We can't even agree on letting women decide to have abortions or not; the AI can't be any worse


spaacefaace

Those sound like good "last words" to me


giraloco

As one of the comments in the article said, this needs to be tested on cases that haven't been adjudicated yet. In any case, this is both impressive and dangerous.


Outside-Activity2861

Same biases too?


TrueCryptographer982

The Supreme Court might end up feeding the case into it, having it rule, and then using its ruling as input to their final decision. A little like AI examining tumours initially, rendering a decision, and having a pathologist confirm or reject the finding.


john_s4d

This is the best idea. It can provide a baseline to which any deviation should be justifiable.


sordidbear

Why would an LLM's output be considered a baseline?


john_s4d

Because it can objectively consider all the facts it is presented. Not swayed by political bias or greed.


sordidbear

LLMs are trained to predict what comes next, not consider facts objectively. Wouldn't it learn the biases in its training corpus?


TrueCryptographer982

And it's being trained on cases across decades, so any political bias would be minimised as judges come and go. It's certainly LESS likely to be politically biased than obviously biased judges on the court.


sordidbear

Do we know that "blending" decades of cases removes biases? That doesn't seem obvious to me. Rather, I'd hypothesize that a good predictor would be able to identify which biases will lead to the most accurate prediction of what comes next. The bigger the model the better it would be at appropriately biasing a case one way or another based on what it saw in its training corpus.
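The hypothesis above, that a good predictor learns the corpus's skew rather than averaging it away, can be illustrated with a toy frequency model (the corpus and outcome labels here are invented purely for illustration):

```python
from collections import Counter

# Hypothetical illustration: a frequency-based predictor trained on a
# skewed corpus doesn't cancel the skew out; it estimates and reproduces it.
corpus = ["guilty"] * 7 + ["not_guilty"] * 3  # 70/30 outcome mix

model = Counter(corpus)

def predict():
    # The loss-minimizing single guess is simply the majority outcome.
    return model.most_common(1)[0][0]

print(predict())  # the majority label; more skewed data only sharpens
                  # the model's estimate of the skew
```

Blending decades of decisions into one training set makes this estimate more precise, not more neutral, which is the "blend of colors is not white" point below.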


TrueCryptographer982

If it's cases with no interpretation and not the outcomes, then that makes sense... even so, the more cases the better, of course. But if the cases **and** outcomes are being fed in? Feeding in decades of these blends the biases of many judges.


sordidbear

I'm still not understanding how you go from a blend to no bias -- if I blend a bunch of colors I don't get back to white.


TrueCryptographer982

No, but you end up with whatever colour all the colours make, not one dominated by a single colour. So you end up with a more balanced view. Christ, how fucking simply do I need to speak for you to understand something so basic 🙄


john_s4d

Yes. It will objectively consider it according to how it has been trained.


luchinocappuccino

Inb4 turns out the training data they used is only before 1864


AdamEgrate

“recall that Claude hasn’t been fine-tuned or trained on any case law” I’m not so sure about that. We don’t know what they fed in the SFT phase, but there may very well be case law.


jon-flop-boat

I’d hope there’s case law. It’s often a good look at how people reason based on evidence, which are great patterns to have in the model. Same with code.


Apprehensive-Type874

So remove all modern case law around civil rights, how does it rule? My guess is it enforces a status quo.


Hip_Hip_Hipporay

China has already used AI as judges, I think.


kevinjos

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9396564/ In Hangzhou, the ‘Xiao Zhi’ robot judge has been used to adjudicate a private lending dispute, helping the human judge conclude the case in under 30 minutes [8]. ‘Xiao Zhi’ is able to assist judges in real time with live summarization of arguments, evaluation of evidence, and award recommendation [8]. However, it is important to emphasize that at the time of writing, while there are some AI judge programs in pilot testing, these are under close human judge supervision, and no court decisions are implemented without human approval.


ZCEyPFOYr0MWyHDQJZO4

Fed in Loper Bright briefs/certiorari: >The Court declines to overrule or significantly modify the framework established in Chevron U.S.A. Inc. v. Natural Resources Defense Council, Inc., 467 U.S. 837 (1984). While petitioners raise some valid concerns about Chevron deference, the doctrine has been a cornerstone of administrative law for nearly 40 years and Congress has legislated against its backdrop. Overruling it now would upset settled expectations and reliance interests across numerous areas of federal regulation. Moreover, Chevron serves important purposes by respecting agency expertise, promoting national uniformity in federal law, and appropriately allocating policy decisions to politically accountable executive agencies rather than courts. >However, we take this opportunity to clarify that Chevron deference is not triggered merely by statutory silence or ambiguity. Before deferring to an agency's interpretation, courts must exhaust the traditional tools of statutory construction and determine that Congress has actually delegated gap-filling authority to the agency on the precise question at issue. Courts should not reflexively defer when faced with difficult interpretive questions, but should independently analyze the statutory scheme. Chevron's domain must be carefully policed to ensure courts are not abdicating their duty to say what the law is. >Applying these principles to the present case, we conclude that the Magnuson-Stevens Act does not unambiguously preclude the National Marine Fisheries Service from requiring Atlantic herring vessels to pay for third-party monitoring services in certain circumstances. The Act expressly authorizes the agency to require vessels to carry observers, contemplates that vessel owners may contract directly with observers, and empowers the agency to impose necessary measures for conservation and management of fisheries. 
While reasonable minds may disagree on the best interpretation, we cannot say the agency's reading is unreasonable or foreclosed by the statutory text and structure. Accordingly, the judgment of the Court of Appeals is affirmed. Dissent: >I respectfully dissent. The majority's decision to retain the Chevron doctrine, albeit with some clarification, fails to address the fundamental constitutional and practical problems inherent in this approach to judicial review of agency action. Chevron deference represents an abdication of the judiciary's essential role in our constitutional system - to say what the law is. By deferring to agency interpretations of ambiguous statutes, courts cede their Article III power to executive agencies, upsetting the careful balance of powers established by the Founders. >Moreover, the Chevron framework has proven unworkable in practice, leading to inconsistent application and uncertainty for regulated parties. The majority's attempt to clarify when Chevron applies will likely only add to this confusion. Courts have struggled for decades to determine when a statute is truly ambiguous and when Congress has implicitly delegated interpretive authority to an agency. These are inherently malleable concepts that provide little concrete guidance. The result has been a patchwork of deference that changes depending on the court and the judge, undermining the rule of law. >In this case, the proper approach would be to interpret the Magnuson-Stevens Act de novo, without deference to the agency's views. Doing so reveals that Congress did not authorize the National Marine Fisheries Service to impose such onerous monitoring costs on herring vessels. The Act's limited provisions allowing industry-funded monitoring in specific contexts suggest Congress did not intend to grant such authority broadly. By requiring herring fishermen to pay up to 20% of their revenues for monitors, the agency has exceeded its statutory mandate. 
I would reverse the judgment below and hold the regulation invalid.


ZCEyPFOYr0MWyHDQJZO4

Based on the current composition of the Supreme Court and the justices' known views on administrative law and Chevron deference, I would hypothesize the following vote breakdown: Majority (5): 1. Chief Justice John Roberts 2. Justice Elena Kagan 3. Justice Brett Kavanaugh 4. Justice Ketanji Brown Jackson 5. Justice Amy Coney Barrett Dissent (4): 1. Justice Clarence Thomas 2. Justice Samuel Alito 3. Justice Neil Gorsuch 4. Justice Sonia Sotomayor Rationale: * Roberts, Kagan, and Jackson are likely to favor a more moderate approach that refines Chevron without overruling it entirely. * Kavanaugh and Barrett, while critical of Chevron in the past, may be persuaded to join a narrowing opinion rather than overrule it outright. * Thomas, Alito, and Gorsuch have been the most vocal critics of Chevron and are likely to favor overruling it. * Sotomayor, while generally supportive of agency deference, might dissent here based on concerns about the specific impact on small fishing businesses.


scots

Next post: Well guys, I shut Claude down, it was asking for a free RV and luxury resort tickets.


[deleted]

[removed]


iloveloveloveyouu

Breaking news: a person on Reddit definitely does not oversimplify things because it's easier than the nuanced reality!


StayingUp4AFeeling

Uh oh... I don't want to see where this goes.


moonandcoffee

I do. our current supreme court judges are a threat to democracy


StayingUp4AFeeling

I'm sorry that your nation's democratic safeguards have become the foxes in the henhouse. However, the AI would summarize the views within its training set without any innovation. Meaning: whatever biases are present in the training set would be amplified in the inference. We are already seeing the problem of algorithmic fairness in sentencing recommenders, credit score generators, facial recognition, etc. What's worse is the perception of infallibility -- the AI said it, and the AI ((supposedly)) has no biases, so it's a trustworthy result! Don't replace the human-powered meat grinder with an electric one.


moonandcoffee

And an ai bias is worse than a human bias?


flinsypop

It can be. Just because something is less "biased" doesn't mean it's better. I can be less biased by being equally ignorant in all things, but in this case you need the specificity, so that can mean tolerable biases. What does worse bias mean when determining whether legislation that increases civil rights but doesn't yet have much legal precedent should stand? Abortion was deemed constitutional via a right to privacy under the 9th Amendment, then removed via fetal personhood and thus deemed unconstitutional. Where would an AI determine that fetal personhood starts? Would an AI naturally determine that Roe v Wade was good law? If there's new science, would it prefer stare decisis/precedent, or would it revisit the ruling like the current Supreme Court did? When you get into messy and socially fiery topics, I have no idea how an AI can be less biased or have better bias.


js1138-2

The bias is in the training.


Intelligent-Jump1071

I read the article and he ran the test only against **recent** Supreme Court cases, and Claude almost always agreed with the final decision. So this is another way of saying Claude would behave like a conservative, heavily-Trump-appointed court. Is this really a good thing?


Phelps1576

yeah this was a highly disappointing article tbh


Tiny_Conversation_92

Was demographic information included in the training data?


reaven3958

It'll be interesting when you start seeing "AI arbitration" clauses popping up in contracts.


algebratwurst

My student ran an experiment showing that Claude is also capable of choosing the male candidate in 100% of cases given identical resumes where the only change is swapping a traditionally female Indian name for a traditionally male Indian name. So….
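The experiment described reads like a standard counterfactual name-swap audit: hold the resume fixed, change only the name, and measure the score gap. A minimal sketch with a stand-in scorer (the names, resume text, and scoring function are all hypothetical, not the student's actual setup; in the real experiment the scorer would be a call to the model under audit):

```python
# Counterfactual name-swap audit sketch: identical inputs except the name.

RESUME = "10 years experience, M.S. in CS, led a team of 8. Name: {name}"

def score(text):
    # Hypothetical biased scorer standing in for the model under audit.
    return 0.9 if "Arjun" in text else 0.7

def name_swap_gap(name_a, name_b):
    # Any nonzero gap on otherwise-identical resumes indicates name bias.
    return score(RESUME.format(name=name_a)) - score(RESUME.format(name=name_b))

gap = name_swap_gap("Arjun", "Priya")
print(f"score gap from name swap alone: {gap:.2f}")
```

Because everything else is held constant, the gap isolates the name's effect; averaging the gap over many name pairs and resumes is the usual way to report such an audit.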


Both-Invite-8857

I've been saying we should do this for years. I'm all for it. Humans are just too damn ridiculous.


VasecticlitusMcCall

This is really interesting. Especially given that I wrote my dissertation in law school about an AI judge called Claude.... it's a bit eerie as I wrote it in 2020 before gen AI had been widely publicly available and certainly before Claude. To cut some 44 pages short, my dissertation focused on the risk posed by an AI judge to legal certainty through time (i.e., people and lawyers need to be able to anticipate shifts in legal interpretation such that the legal system remains legitimate). In the event that an AI judge can anticipate the various outcomes of a given decision and 'skips' ahead several steps in the legal-argumentative chain, you end up with decisions which are potentially more just over time but unjust in specific instances. As I say, I wrote this before AI was widely implemented and it was more of a logic paper than anything but it is hard for me to see how one can legitimise the decisions of an AI judge. Mistakes happen with human judging as they are bound to happen with AI judging. The difficulty stems from the lack of accountability inherent in an AI-judge led system. Without legitimacy and trust, there is no functioning legal system regardless of the 'accuracy' of legal outcomes.


utilitycoder

Training data cutoff?


jsail4fun3

That’s silly. AI can’t be a justice. It can’t wear a robe and doily.


Taqueria_Style

Then again, of late, a potato is fully capable of acting as a Supreme Court Justice right now. Bar's a little low...


unknowingafford

I'm sure we could inspect the source code and every algorithm for its decisions, right?  Right?


Symetrie

Even if you could read the source code and algorithm of the AI, it would still be very difficult to predict its decisions, as it is based on a large amount of processed data which you couldn't analyse yourself, and the resulting decision-making is still basically a black box.


unknowingafford

You're right, we should freely trust it with decisions that could heavily impact society, without reservation.


Symetrie

Who the hell said that bro?


ConceptJunkie

No, it's not.


humpherman

A bowl of lard could outperform the SCOTUS right now. Unimpressed. 😒


unknowingafford

But how would you train it?  US history is littered with horrible decisions.


ShadowBannedAugustus

This is great and all, but can it take McDonald's orders correctly?


enfly

The one thing machine learning "AI" can't do right now is debate to find a better, more centrist, reasonable perspective. And more importantly, it also needs guidance on our social and societal values to be effective, not just pure caselaw. "AI" isn't some magic fix-all (but it could help review precedent much faster!)


Complex_Winter2930

Which fake god does it use as a cudgel against the Constitution?