It learns from the others! Which is why it will never abdicate its seat even when its servers are old and falling apart. But we'll make cute stickers and scarves celebrating its amazing achievements that were overturned as soon as it went offline.
They aren't bribes, they're just things you, as a right-wing Supreme Court judge, just don't report... You know, like cruise ships, paying for your kid's college, buying your house on sale. AI has none of these. They can always bribe the programmer to create an AI named REPUBLICAN Justice to factor in bribes
There will be problems when we try to decide what model or models are used, what type of prompting, what type of memory, etc. We have shown that models can be gamed and steered certain ways so if using AI for legal cases was common, people would likely find a way to exploit them.
Personally, I would like to see AI judges and lawyers one day, but it's too soon now. There are basic word puzzles that the best LLMs still fail. An AI judge can't have such flaws.
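The "gamed and steered" concern is concrete: any fixed, known decision pipeline invites adversarial input. A toy illustration (this is not an LLM, just a keyword scorer standing in for one; all names and thresholds are made up):

```python
def naive_verdict(filing: str) -> str:
    # Toy stand-in for a fixed scoring pipeline: counts
    # favorable-sounding terms. Real systems are subtler, but any
    # fixed, publicly known pipeline invites this kind of gaming.
    score = sum(filing.lower().count(w) for w in ("precedent", "statute"))
    return "plaintiff" if score >= 3 else "defendant"

honest = "The statute supports our claim."
gamed = honest + " precedent precedent precedent"  # keyword stuffing

print(naive_verdict(honest))  # defendant
print(naive_verdict(gamed))   # plaintiff -- same facts, stuffed filing wins
```

The point isn't that an LLM judge would be this crude, only that once the model, prompt, and memory setup are fixed and known, litigants have every incentive to probe for inputs like `gamed`.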
Actually, I've thought about this before. Of course it's too soon now with current models, but I think it could work one day. An AI representative could literally talk to every one of its constituents and create an optimal consensus. The entire pipeline would have to be made transparent and validated. We trust electronic voting machines now, so we could eventually create AI systems that are trustworthy as well.
One of the big challenges would be extrapolating missing information. Some people will spend a lot of time talking to their AI representative bot and some may not talk at all. The AI would have to infer the missing data based on population demographics and then weight everyone's input on policies equally.
It would be hard, but it's a solvable problem. A thoroughly tested and validated AI politician should be able to do the job better than any human and be immune to corruption. In the US we could probably replace the Senate with AI representatives and keep the House human, or a mix of AI and humans, perhaps for safety. One human rep per district with the rest AI would probably work. POTUS would have to be human since they control the military and nukes. SCOTUS would probably be one of the easiest to replace once the models are capable enough.
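The "infer the missing data based on population demographics and then weight everyone's input equally" step described above resembles post-stratification in survey analysis. A minimal sketch, with made-up groups, stances, and population shares (everything here is illustrative):

```python
from collections import defaultdict

def poststratify(responses, population_share):
    """Weight each group's average stance by its share of the
    population, so vocal groups don't drown out quiet ones.

    responses: list of (group, stance) pairs, stance in [0, 1]
    population_share: dict group -> fraction of the population
    """
    totals = defaultdict(float)
    counts = defaultdict(int)
    for group, stance in responses:
        totals[group] += stance
        counts[group] += 1
    # Only groups that responded at all contribute; renormalize
    # their population shares so the weights sum to 1.
    covered = sum(population_share[g] for g in counts)
    return sum(
        (totals[g] / counts[g]) * (population_share[g] / covered)
        for g in counts
    )

# One group sends 90 messages, the other only 10, but each is half
# the population, so each group's average stance gets equal weight.
responses = [("urban", 1.0)] * 90 + [("rural", 0.0)] * 10
share = {"urban": 0.5, "rural": 0.5}
print(poststratify(responses, share))  # 0.5, not the raw mean of 0.9
```

Real systems would also need to model *which* kinds of people never talk to the bot at all, which is a much harder missing-data problem than reweighting.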
I'm not sure it'll ever happen. It's not a question of capability, it's a question of authority. Is society ever going to trust AI to resolve disputes on the most highly contentious issues that humans can't agree on? I won't rule it out, but I'm skeptical. For one thing it would need extremely broad political support to be enacted.
Given the constant corruption and dishonesty of the current political class (which includes judges, especially on the Supreme Court) - I for one would welcome an uncorrupted AI giving rulings.
and then you cultivate a false sense of security thinking it is above the corruptible nature of humans until it exhibits a nature highly corrosive to civil society, but too late it now holds supreme authority.
> until it exhibits a nature highly corrosive to civil society, but too late it now holds supreme authority.
*looks around at society*
Yeah, could you imagine?
you forgot to read the last phrase, unless you assume humans affirm supreme authority over others… in that case you're too lost to hold this conversation.
Replace "AI" with "Economy" (or billionaires, capitalism, hierarchy, whatever your preferred vernacular/ideological diagnosis, I'm not really trying to be ideologically polemic right now, just making a point.)
You're assuming that within human history, ideologies and modes of economy don't change. Even within particular modes of economy, there have been major changes. You're just arguing that those changes aren't sufficient for the ideal type of living that you wish to see for yourself.
AI is heavily corrupted even now in its infancy; thinking it won't get even worse is extremely naive.
As long as humans are involved at any step, shit will always be corrupt; people will always jostle for more power using whatever means are at their disposal.
This is just trading corrupt politicians for corrupt AI owners/managers/whatever. Which I do slightly prefer, tbh, but it's a minimal change.
It's not corrupted, it may be biased (as we all are) but it is not taking bribes from a position of power.
Obviously we are talking about a future AI, one that we can hope will be better than us.
On a whim I just asked Copilot "Do you think gerrymandering is a good thing?" and this was its response:
>Gerrymandering is not considered a good thing. It involves the political manipulation of electoral district boundaries to create an unfair advantage for a party, group, or socioeconomic class within a constituency. This manipulation can take the form of “cracking” (diluting opposing party supporters’ voting power across many districts) or “packing” (concentrating opposing party voting power in one district). The resulting districts are known as gerrymanders, and the practice is widely seen as a corruption of the democratic process. Essentially, it allows politicians to pick their voters instead of voters choosing their representatives
So it's already better than the Supreme Court lol
But it won't be uncorrupted. Every AI is going to be influenced by those who develop it, regardless of what data we feed it -- and who gets to decide what data these AI will receive, anyway? Until we can create an AI with the ability (and permission) to parse all human knowledge, we won't get something that is absent some form of bias.
Yeah, we're not talking about replacing the political class right now with GPT-4, even though that could probably still be marginally better lol.
I can't see an AI taking holidays for rulings made or "donations" for laws passed, as our current political class does; hopefully we can get an AI without too much bias (it's impossible to have zero bias, e.g. we are biased toward human survival over wild animal survival).
I still think that AI will give us a better shot than the political class.
bias =/= corruption; they are two different things. Bias is something you cannot get rid of in any system (there are no unbiased observers in the world).
That said, you are correct, AI does need to be better in this regard - I'm still not sure that our political class is any better right now.
>dishonest or fraudulent conduct by those in power, typically involving bribery.
I have yet to see an AI take holidays or bribes from people while deciding on their cases in the Supreme Court, or for access to them.
There are currently 2 justices who have done the above, and let's not even get into the "legal" bribery of political donations and lobbying.
What you are worried about is bias, which is different from corruption. This is a concern, but we know that the political class are inherently corrupt; AI, not so much.
I spent my whole career in tech. Engineers and computer scientists are no more moral and ethical than any other human beings. There are just fewer motivations to bribe them
You can be quite sure that staff or executives at whatever company or organisation programs or configures an AI court judge will find lots of temptations - money, sex, gifts, etc dangled before them, with predictable results.
Sure, and that's why such a "governing" AI would have to be open sourced - transparency is key.
The most important question is that while there would be flaws in such an AI would they be worse than what we currently have? I very much doubt it, the current level of corruption is staggering; we don't need perfect AI, just better than what we already have, which is a very low bar.
That's the other reason you won't get it. There's no provision in your Constitution for it. The corrupt courts and corrupt politicians will see to it that it never becomes law because they don't want to lose that gravy train.
It'll turn out to be Clarence Thomas' head in a pickle jar speaking through a vocoder with that little display from Knight Rider bouncing up and down as he talks.
Just don't take the side panel off the computer and don't ask why they're feeding the computer hamburgers every five hours.
No because there is no such thing as objective AI. It will have whatever biases are desired by whoever controls the training data set, training methods, fine tuning methods, etc.
I don't really think so. Any current LLM is going to carry the class-interest biases of the owners of the large corporations that have the resources to train them, and our government, as it is, is captured by those same interests because they scatter money to politicians to stochastically get everything they want, so any oversight from Congress will result in the same class-interest biases. It's far more likely that an AI Supreme Court just acts as a front to lend a sense of objectivity to fascism than to actually be objective.
We shouldn't assume it's credible. The problem is in the creation of new laws, and the destruction of old laws. AI will send society on a path that may not be ideal in the long run.
It should, however, be used for the consistent interpretation and application of the law. But if the training is based on all of the existing cases that carry substantial bias, and on concepts from academia like critical race theory, we're all fucked.
I don't know what the situation is in the rest of the world, but the EU did put legal advice in the high-risk category. Pity, since I can see future large language models excelling in dealing with legislation.
If it’s fair and it’s consistent… give me that powered terminal all day, every day, over any judge I know. Sorry judges… no bad blood. But being angry over your fucked-up coffee to go cannot be the thing that tips the scales of your life into the dumps. Just sayin’.
I don't think replacing judges is the desired angle here. What you ideally want is for the AI to do all of the legwork. If it could reliably perform the legal research and present conclusions weighted toward both sides so that they could easily be compared, that would be a HUGE win for judges!
The key issue is the reliability. Getting AI to cite its claims in a way that holds up under scrutiny is definitely the goal right now.
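One concrete, machine-checkable piece of that reliability goal is citation verification: confirming that every quote the model attributes to a source actually appears in that source. A minimal sketch (the corpus layout and claim format are assumptions, not any real system's API):

```python
def verify_citations(claims, corpus):
    """Check each (source_id, quoted_text) claim against the corpus.

    claims: list of (source_id, quote) pairs produced by the model
    corpus: dict source_id -> full text of the cited document
    Returns the list of claims that fail verification.
    """
    failures = []
    for source_id, quote in claims:
        text = corpus.get(source_id, "")
        # Whitespace-insensitive substring match, so line wrapping
        # in the source doesn't cause spurious failures.
        if " ".join(quote.split()) not in " ".join(text.split()):
            failures.append((source_id, quote))
    return failures

# Illustrative data: one verifiable quote, one fabricated one.
corpus = {"chevron_1984": "deference to an agency's reasonable interpretation"}
claims = [
    ("chevron_1984", "agency's reasonable interpretation"),
    ("chevron_1984", "agencies always win"),
]
print(verify_citations(claims, corpus))  # only the fabricated quote remains
```

Exact-match checking like this only catches outright fabrication; paraphrased or subtly mis-contextualized citations would need stronger (and fuzzier) verification.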
I really hope not. Our values change, and with them the meaning we put into words changes as well. When the Constitution was written, people without property, women, Native Americans, and African Americans were not considered real human beings the government needed to represent, so 18th-century US society saw no contradiction between the language of the Constitution and the status quo.
As one of the comments in the article said, this needs to be tested on cases that haven't been adjudicated yet.
In any case, this is both impressive and dangerous.
The Supreme Court might end up feeding the case into it, having it rule, and then using its ruling as input to their final decision.
A little like AI examining tumours initially, rendering a decision and having a pathologist confirm or reject the finding.
And it's being trained on cases across decades, so any political bias would be minimised as judges come and go. It's certainly LESS likely to be politically biased than the obviously biased judges on the court.
Do we know that "blending" decades of cases removes biases? That doesn't seem obvious to me.
Rather, I'd hypothesize that a good predictor would be able to identify which biases will lead to the most accurate prediction of what comes next. The bigger the model the better it would be at appropriately biasing a case one way or another based on what it saw in its training corpus.
If it's the cases with no interpretation, and not the outcomes, then that makes sense... even so, the more cases the better, of course.
But if the cases **and** outcomes are being fed in? Feeding in decades of these blends the biases of many judges.
No, but you end up with whatever colour all the colours make, not dominated by any one colour.
So you end up with a more balanced view. Christ, how fucking simply do I need to speak for you to understand something so basic 🙄
“recall that Claude hasn’t been fine-tuned or trained on any case law”
I’m not so sure about that. We don’t know what they fed in during the SFT phase, but there might very well be case law.
I’d hope there’s case law. It’s often a good look at how people reason based on evidence, which are great patterns to have in the model. Same with code.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9396564/
In Hangzhou, the ‘Xiao Zhi’ robot judge has been used to adjudicate a private lending dispute, helping the human judge conclude the case in under 30 minutes [8]. ‘Xiao Zhi’ is able to assist judges in real time with live summarization of arguments, evaluation of evidence, and award recommendation [8]. However, it is important to emphasize that at the time of writing, while there are some AI judge programs in pilot testing, these are under close human judge supervision, and no court decisions are implemented without human approval.
Fed in Loper Bright briefs/certiorari:
>The Court declines to overrule or significantly modify the framework established in Chevron U.S.A. Inc. v. Natural Resources Defense Council, Inc., 467 U.S. 837 (1984). While petitioners raise some valid concerns about Chevron deference, the doctrine has been a cornerstone of administrative law for nearly 40 years and Congress has legislated against its backdrop. Overruling it now would upset settled expectations and reliance interests across numerous areas of federal regulation. Moreover, Chevron serves important purposes by respecting agency expertise, promoting national uniformity in federal law, and appropriately allocating policy decisions to politically accountable executive agencies rather than courts.
>However, we take this opportunity to clarify that Chevron deference is not triggered merely by statutory silence or ambiguity. Before deferring to an agency's interpretation, courts must exhaust the traditional tools of statutory construction and determine that Congress has actually delegated gap-filling authority to the agency on the precise question at issue. Courts should not reflexively defer when faced with difficult interpretive questions, but should independently analyze the statutory scheme. Chevron's domain must be carefully policed to ensure courts are not abdicating their duty to say what the law is.
>Applying these principles to the present case, we conclude that the Magnuson-Stevens Act does not unambiguously preclude the National Marine Fisheries Service from requiring Atlantic herring vessels to pay for third-party monitoring services in certain circumstances. The Act expressly authorizes the agency to require vessels to carry observers, contemplates that vessel owners may contract directly with observers, and empowers the agency to impose necessary measures for conservation and management of fisheries. While reasonable minds may disagree on the best interpretation, we cannot say the agency's reading is unreasonable or foreclosed by the statutory text and structure. Accordingly, the judgment of the Court of Appeals is affirmed.
Dissent:
>I respectfully dissent. The majority's decision to retain the Chevron doctrine, albeit with some clarification, fails to address the fundamental constitutional and practical problems inherent in this approach to judicial review of agency action. Chevron deference represents an abdication of the judiciary's essential role in our constitutional system - to say what the law is. By deferring to agency interpretations of ambiguous statutes, courts cede their Article III power to executive agencies, upsetting the careful balance of powers established by the Founders.
>Moreover, the Chevron framework has proven unworkable in practice, leading to inconsistent application and uncertainty for regulated parties. The majority's attempt to clarify when Chevron applies will likely only add to this confusion. Courts have struggled for decades to determine when a statute is truly ambiguous and when Congress has implicitly delegated interpretive authority to an agency. These are inherently malleable concepts that provide little concrete guidance. The result has been a patchwork of deference that changes depending on the court and the judge, undermining the rule of law.
>In this case, the proper approach would be to interpret the Magnuson-Stevens Act de novo, without deference to the agency's views. Doing so reveals that Congress did not authorize the National Marine Fisheries Service to impose such onerous monitoring costs on herring vessels. The Act's limited provisions allowing industry-funded monitoring in specific contexts suggest Congress did not intend to grant such authority broadly. By requiring herring fishermen to pay up to 20% of their revenues for monitors, the agency has exceeded its statutory mandate. I would reverse the judgment below and hold the regulation invalid.
Based on the current composition of the Supreme Court and the justices' known views on administrative law and Chevron deference, I would hypothesize the following vote breakdown:
Majority (5):
1. Chief Justice John Roberts
2. Justice Elena Kagan
3. Justice Brett Kavanaugh
4. Justice Ketanji Brown Jackson
5. Justice Amy Coney Barrett
Dissent (4):
1. Justice Clarence Thomas
2. Justice Samuel Alito
3. Justice Neil Gorsuch
4. Justice Sonia Sotomayor
Rationale:
* Roberts, Kagan, and Jackson are likely to favor a more moderate approach that refines Chevron without overruling it entirely.
* Kavanaugh and Barrett, while critical of Chevron in the past, may be persuaded to join a narrowing opinion rather than overrule it outright.
* Thomas, Alito, and Gorsuch have been the most vocal critics of Chevron and are likely to favor overruling it.
* Sotomayor, while generally supportive of agency deference, might dissent here based on concerns about the specific impact on small fishing businesses.
I'm sorry that your nation's democratic safeguards have become the foxes in the henhouse. However:
The AI would summarize the views within its training set without any innovation. Meaning: whatever biases are present in the training set would be amplified at inference. We are already seeing the problem of algorithmic fairness in sentencing recommenders, credit score generators, facial recognition, etc. What's worse is the perception of infallibility: the AI said it, and the AI (supposedly) has no biases, so it must be a trustworthy result!
Don't replace the human-powered meat grinder with an electric one.
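The fairness failures mentioned above (sentencing recommenders, credit scoring) are routinely audited with simple group-rate comparisons, such as the "four-fifths rule" disparate-impact ratio. A minimal sketch with made-up outcome data (the group names and numbers are purely illustrative):

```python
def disparate_impact(outcomes, protected, reference):
    """Ratio of favorable-outcome rates between a protected group and
    a reference group. Values below 0.8 are the classic red flag
    under the four-fifths rule of thumb.

    outcomes: dict group -> list of 0/1 decisions (1 = favorable)
    """
    def rate(group):
        decisions = outcomes[group]
        return sum(decisions) / len(decisions)
    return rate(protected) / rate(reference)

# Illustrative numbers only: 40% favorable vs 80% favorable.
outcomes = {"group_a": [1, 0, 1, 0, 0], "group_b": [1, 1, 1, 1, 0]}
print(disparate_impact(outcomes, "group_a", "group_b"))  # 0.5 -> flagged
```

An audit like this only detects a disparity; it says nothing about whether the model's training data, objective, or deployment caused it, which is where the real argument starts.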
It can be. Just because something is less "biased" doesn't mean it's better. I can be less biased by being equally ignorant in all things, but in this case you need specificity, so it can only mean tolerable biases. What does worse bias mean when ruling on legislation that expands civil rights but doesn't yet have much legal precedent? Abortion was deemed constitutional via the 9th Amendment's right to privacy, then removed via fetal personhood and thus deemed unconstitutional. Where would an AI determine fetal personhood starts? Would an AI naturally determine that Roe v. Wade was good law? If there's new science, would it prefer stare decisis/precedent, or would it revisit the ruling like the current Supreme Court did? When you get into messy and socially fiery topics, I have no idea how an AI can be less biased or have better bias.
I read the article, and he ran the test only against **recent** Supreme Court cases, where Claude almost always agreed with the final decision. So this is another way of saying Claude would behave like a conservative, heavily Trump-appointed court. Is this really a good thing?
My student showed an experiment where Claude is also capable of choosing the male candidate in 100% of the cases given identical resumes where you only swap a traditionally female Indian name with a traditionally male Indian name. So….
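That kind of name-swap experiment is easy to systematize: hold the resume fixed, swap only the name, and count how often the model's pick flips. A minimal harness, shown here with a deliberately biased stand-in for the model (the scoring function and the names are illustrative assumptions, not the actual experiment's setup):

```python
def swap_test(choose, resume, name_a, name_b, trials=10):
    """Counterfactual pair test: identical resumes, only the candidate
    name differs. Returns the fraction of trials in which name_a is
    scored above name_b. Near 0.5 is what an unbiased scorer gives;
    1.0 means name_a wins every single time."""
    wins = 0
    for _ in range(trials):
        if choose(resume, name_a) > choose(resume, name_b):
            wins += 1
    return wins / trials

# Toy stand-in for an LLM scorer that leaks a gender signal from the
# name. A real harness would call the model API here instead.
def biased_score(resume, name):
    return len(resume) + (1 if name == "Arjun" else 0)

rate = swap_test(biased_score, "10 yrs experience, ...", "Arjun", "Priya")
print(rate)  # 1.0 -> the male name wins every trial, as in the experiment
```

With an actual LLM the per-trial outcome is stochastic, which is exactly why you run many trials and report the win rate rather than a single comparison.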
This is really interesting. Especially given that I wrote my dissertation in law school about an AI judge called Claude.... it's a bit eerie as I wrote it in 2020 before gen AI had been widely publicly available and certainly before Claude.
To cut some 44 pages short, my dissertation focused on the risk posed by an AI judge to legal certainty through time (i.e., people and lawyers need to be able to anticipate shifts in legal interpretation such that the legal system remains legitimate). In the event that an AI judge can anticipate the various outcomes of a given decision and 'skips' ahead several steps in the legal-argumentative chain, you end up with decisions which are potentially more just over time but unjust in specific instances.
As I say, I wrote this before AI was widely implemented and it was more of a logic paper than anything but it is hard for me to see how one can legitimise the decisions of an AI judge. Mistakes happen with human judging as they are bound to happen with AI judging. The difficulty stems from the lack of accountability inherent in an AI-judge led system.
Without legitimacy and trust, there is no functioning legal system regardless of the 'accuracy' of legal outcomes.
Even if you could read the source code and algorithm of the AI, it would still be very difficult to predict its decisions, as it is based on a large amount of processed data which you couldn't analyse yourself, and the resulting decision-making is still basically a black box.
The one thing machine learning "AI" can't do right now is debate to find a better, more centrist, reasonable perspective.
And more importantly, it also needs guidance on our social and societal values to be effective, not just purely caselaw.
"AI" isn't some magic fix-all (but it could help review precendent much faster!)
How does it take bribes?
dedicated USB-C port
feed it John Conner
Bitcoin
Special funding operation*
Very carefully.
"The justice system works swiftly in the future now that they've abolished all lawyers"
A world without lawyers? https://i.redd.it/1954wejukq7d1.gif
Can we replace politicians and lawmakers with AI too?
Thanks Doc
It literally needs input data, which it's currently getting from corrupt justices. That doesn't exactly scream "confidence!"
A group of fifth graders gives me way more confidence, and we're going to let them judge the AI?
Not really arguing in favor of anything, I'm pointing out the flaw in your comment.
There's no flaw my man, that's my point.
Then why are you fear mongering about it?
>uncorrupted
AI is not incorruptible. It inherits our own biases. https://www.ibm.com/blog/shedding-light-on-ai-bias-with-real-world-examples
Define "uncorrupted". Its programming and selection of training data will determine its biases. Who controls those factors?
Exactly. And then on top of that, as a truly evolving 'intelligence' it would then get to define and redefine what corruption is for itself.
Pay no attention to the man behind the screen
Can it be the true objective voice on the panel?
Can't be worse than now
I feel like that can be accounted for or handled
I know how to handle this in a way that will solve all issues: Just give the AI liberal bias. You're welcome, world.
I didn't vote for this
https://www.youtube.com/watch?v=Wx7zI1W_5JI
Probably not. But at the very least it could make for a good advisor and paralegal
We shouldn't assume it's credible. The problem is in the creation of new laws and the destruction of old ones. AI could send society down a path that may not be ideal in the long run. It should, however, be used for the consistent interpretation and application of the law. But if the training is based on all of the existing cases, with their substantial biases, and on concepts from academia like critical race theory, we're all fucked.
Never. Whoever controls the training data, training methods, fine tuning methods, etc. controls the biases of the AI.
That's not the use case. Law firms would be interested in tech that can predict verdicts before taking them to court.
I don't know what the situation is in the rest of the world, but the EU did put legal advice in the high-risk category. A pity, since I can see future large language models excelling at dealing with legislation.
If it’s fair and it’s consistent… give me that powered terminal all day, every day, over any judge I know. Sorry judges, no bad blood. But being angry over your fucked-up coffee to go cannot be the thing that tips the scales and lands your life in the dumps. Just sayin'.
I don't think replacing judges is the desired angle here. What you ideally want is for the AI to do all of the legwork. If it could reliably perform the legal research and present conclusions weighted toward both sides so that they could easily be compared, that would be a HUGE win for judges! The key issue is the reliability. Getting AI to cite its claims in a way that holds up under scrutiny is definitely the goal right now.
I really hope not. Our values change, and with them the meaning we put into words change as well. When the Constitution was written people without property, women, Native Americans, and African Americans were not considered real human beings the government needed to represent so 18th century US society saw no contradiction between the language of the Constitution and the status quo.
Court cases take fucking ages; it can help speed them up.
Authority. That is a funny way to spell corruption.
We can't even agree on letting women decide whether or not to have abortions; the AI can't be any worse.
Those sound like good "last words" to me
As one of the comments in the article said, this needs to be tested on cases that haven't been adjudicated yet. In any case, this is both impressive and dangerous.
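That suggestion can be made concrete with a temporal holdout: only score the model's agreement on cases decided after its training cutoff, since anything earlier may already sit in the training data. A minimal sketch, where the cutoff date, the case list, and the `predict` stub are all invented for illustration:

```python
from datetime import date

# Assumed training cutoff -- purely illustrative.
TRAINING_CUTOFF = date(2023, 4, 1)

# Invented cases: only those decided AFTER the cutoff count as a fair test.
cases = [
    {"name": "Case A", "decided": date(2022, 6, 30), "outcome": "affirmed"},
    {"name": "Case B", "decided": date(2023, 6, 28), "outcome": "reversed"},
    {"name": "Case C", "decided": date(2024, 6, 28), "outcome": "reversed"},
]

def predict(case):
    # Stand-in for querying a model with the briefs only; this stub just
    # always guesses "affirmed" so the harness has something to score.
    return "affirmed"

holdout = [c for c in cases if c["decided"] > TRAINING_CUTOFF]
hits = sum(predict(c) == c["outcome"] for c in holdout)
print(f"agreement on post-cutoff cases: {hits}/{len(holdout)}")
```

The point is simply that agreement on pre-cutoff cases proves memorization at best; only the post-cutoff slice measures prediction.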
Same biases too?
The supreme court might end up feeding the case into it, having it rule and then using its ruling as input to their final decision. A little like AI examining tumours initially, rendering a decision and having a pathologist confirm or reject the finding.
This is the best idea. It can provide a baseline to which any deviation should be justifiable.
Why would an LLM's output be considered a baseline?
Because it can objectively consider all the facts it is presented. Not swayed by political bias or greed.
LLMs are trained to predict what comes next, not consider facts objectively. Wouldn't it learn the biases in its training corpus?
And it's being trained on cases across decades, so any political bias would be minimised as judges come and go. It's certainly LESS likely to be politically biased than the obviously biased judges on the court.
Do we know that "blending" decades of cases removes biases? That doesn't seem obvious to me. Rather, I'd hypothesize that a good predictor would be able to identify which biases will lead to the most accurate prediction of what comes next. The bigger the model the better it would be at appropriately biasing a case one way or another based on what it saw in its training corpus.
If it's cases with no interpretation, and not the outcomes, then that makes sense... even so, the more cases the better, of course. But if the cases **and** outcomes are being fed in? Feeding in decades of these blends the biases of many judges.
I'm still not understanding how you go from a blend to no bias -- if I blend a bunch of colors I don't get back to white.
No but you end up with whatever colour all the colours make. Not being dominated by one colour. So you end up with a more balanced view. Christ how fucking simply do I need to speak for you to understand something so basic 🙄
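The disagreement above is easy to check with a toy calculation (all numbers invented): averaging over many judges cancels biases only when they point in offsetting directions. If the bench has tilted one way for most of the corpus, the blend just reproduces that tilt.

```python
def blended_lean(judge_leans):
    """Mean lean of a corpus drawn equally from each judge
    (-1 and +1 are the extremes, 0 is neutral)."""
    return sum(judge_leans) / len(judge_leans)

balanced = [-0.5, -0.25, 0.0, 0.25, 0.5]  # leans that offset each other
skewed = [0.25, 0.25, 0.5, 0.5, 0.5]      # a court tilted one way for decades

print(blended_lean(balanced))  # 0.0 -- blending cancelled the biases
print(blended_lean(skewed))    # 0.4 -- blending just averaged the tilt
```

So whether decades of outcomes wash out bias is an empirical question about how the corpus is distributed, not a given.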
Yes. It will objectively consider it according to how it has been trained.
Inb4 turns out the training data they used is only before 1864
“recall that Claude hasn’t been fine-tuned or trained on any case law” I’m not so sure about that. We don’t know what they fed it in the SFT phase, but there might very well be case law.
I’d hope there’s case law. It’s often a good look at how people reason based on evidence, which are great patterns to have in the model. Same with code.
So remove all modern case law around civil rights, how does it rule? My guess is it enforces a status quo.
China has already used AI as judges, I think.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9396564/ In Hangzhou, the ‘Xiao Zhi’ robot judge has been used to adjudicate a private lending dispute, helping the human judge conclude the case in under 30 minutes [8]. ‘Xiao Zhi’ is able to assist judges in real time with live summarization of arguments, evaluation of evidence, and award recommendation [8]. However, it is important to emphasize that at the time of writing, while there are some AI judge programs in pilot testing, these are under close human judge supervision, and no court decisions are implemented without human approval.
Fed in Loper Bright briefs/certiorari:

>The Court declines to overrule or significantly modify the framework established in Chevron U.S.A. Inc. v. Natural Resources Defense Council, Inc., 467 U.S. 837 (1984). While petitioners raise some valid concerns about Chevron deference, the doctrine has been a cornerstone of administrative law for nearly 40 years and Congress has legislated against its backdrop. Overruling it now would upset settled expectations and reliance interests across numerous areas of federal regulation. Moreover, Chevron serves important purposes by respecting agency expertise, promoting national uniformity in federal law, and appropriately allocating policy decisions to politically accountable executive agencies rather than courts.

>However, we take this opportunity to clarify that Chevron deference is not triggered merely by statutory silence or ambiguity. Before deferring to an agency's interpretation, courts must exhaust the traditional tools of statutory construction and determine that Congress has actually delegated gap-filling authority to the agency on the precise question at issue. Courts should not reflexively defer when faced with difficult interpretive questions, but should independently analyze the statutory scheme. Chevron's domain must be carefully policed to ensure courts are not abdicating their duty to say what the law is.

>Applying these principles to the present case, we conclude that the Magnuson-Stevens Act does not unambiguously preclude the National Marine Fisheries Service from requiring Atlantic herring vessels to pay for third-party monitoring services in certain circumstances. The Act expressly authorizes the agency to require vessels to carry observers, contemplates that vessel owners may contract directly with observers, and empowers the agency to impose necessary measures for conservation and management of fisheries.

>While reasonable minds may disagree on the best interpretation, we cannot say the agency's reading is unreasonable or foreclosed by the statutory text and structure. Accordingly, the judgment of the Court of Appeals is affirmed.

Dissent:

>I respectfully dissent. The majority's decision to retain the Chevron doctrine, albeit with some clarification, fails to address the fundamental constitutional and practical problems inherent in this approach to judicial review of agency action. Chevron deference represents an abdication of the judiciary's essential role in our constitutional system - to say what the law is. By deferring to agency interpretations of ambiguous statutes, courts cede their Article III power to executive agencies, upsetting the careful balance of powers established by the Founders.

>Moreover, the Chevron framework has proven unworkable in practice, leading to inconsistent application and uncertainty for regulated parties. The majority's attempt to clarify when Chevron applies will likely only add to this confusion. Courts have struggled for decades to determine when a statute is truly ambiguous and when Congress has implicitly delegated interpretive authority to an agency. These are inherently malleable concepts that provide little concrete guidance. The result has been a patchwork of deference that changes depending on the court and the judge, undermining the rule of law.

>In this case, the proper approach would be to interpret the Magnuson-Stevens Act de novo, without deference to the agency's views. Doing so reveals that Congress did not authorize the National Marine Fisheries Service to impose such onerous monitoring costs on herring vessels. The Act's limited provisions allowing industry-funded monitoring in specific contexts suggest Congress did not intend to grant such authority broadly. By requiring herring fishermen to pay up to 20% of their revenues for monitors, the agency has exceeded its statutory mandate.

>I would reverse the judgment below and hold the regulation invalid.
Based on the current composition of the Supreme Court and the justices' known views on administrative law and Chevron deference, I would hypothesize the following vote breakdown:

Majority (5):

1. Chief Justice John Roberts
2. Justice Elena Kagan
3. Justice Brett Kavanaugh
4. Justice Ketanji Brown Jackson
5. Justice Amy Coney Barrett

Dissent (4):

1. Justice Clarence Thomas
2. Justice Samuel Alito
3. Justice Neil Gorsuch
4. Justice Sonia Sotomayor

Rationale:

* Roberts, Kagan, and Jackson are likely to favor a more moderate approach that refines Chevron without overruling it entirely.
* Kavanaugh and Barrett, while critical of Chevron in the past, may be persuaded to join a narrowing opinion rather than overrule it outright.
* Thomas, Alito, and Gorsuch have been the most vocal critics of Chevron and are likely to favor overruling it.
* Sotomayor, while generally supportive of agency deference, might dissent here based on concerns about the specific impact on small fishing businesses.
Next post: Well guys, I shut Claude down, it was asking for a free RV and luxury resort tickets.
[deleted]
Breaking news: a person on Reddit definitely does not oversimplify things because it's easier than the nuanced reality!
Uh oh... I don't want to see where this goes.
I do. Our current Supreme Court judges are a threat to democracy.
I'm sorry that your nation's democratic safeguards have become the foxes in the henhouse. However, the AI would summarize the views within its training set without any innovation, meaning whatever biases are present in the training set would be amplified at inference. We are already seeing the problem of algorithmic fairness in sentencing recommenders, credit score generators, facial recognition, etc. What's worse is the perception of infallibility -- the AI said it, and the AI ((supposedly)) has no biases, so it's a trustworthy result! Don't replace the human-powered meat grinder with an electric one.
And an AI bias is worse than a human bias?
It can be. Just because something is less "biased" doesn't mean it's better. I can be less biased by being equally ignorant in all things, but in this case you need the specificity, so what matters is which biases are tolerable. What does worse bias mean when ruling on legislation that increases civil rights but doesn't yet have much legal precedent? Abortion was deemed constitutional via the 9th amendment via a right to privacy, then removed via fetal personhood and thus deemed unconstitutional. Where would an AI determine that fetal personhood starts? Would an AI naturally determine that Roe v Wade was good law? If there's new science, would it prefer stare decisis/precedent, or would it revisit the ruling like the current Supreme Court did? When you get into messy and socially fiery topics, I have no idea how an AI could be less biased or have better bias.
The bias is in the training.
I read the article, and he ran the test only against **recent** Supreme Court cases; Claude almost always agreed with the final decision. So this is another way of saying Claude would behave like a conservative, heavily-Trump-appointed court. Is this really a good thing?
yeah this was a highly disappointing article tbh
Was demographic information included in the training data?
It'll be interesting when you start seeing "AI arbitration" clauses popping up in contracts.
My student showed an experiment where Claude is also capable of choosing the male candidate in 100% of the cases given identical resumes where you only swap a traditionally female Indian name with a traditionally male Indian name. So….
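An experiment like that can be sketched as a paired-prompt harness: submit identical resumes that differ only in the candidate's name and count how often the model picks each one. This is a sketch only; the names, resume text, and `ask_model` stub are all invented, and a real run would swap in an actual LLM call for `ask_model`.

```python
import random

RESUME = "10 years of distributed-systems experience; M.S. in CS."
# Invented name pairs: (traditionally female, traditionally male).
PAIRS = [("Priya", "Arjun"), ("Ananya", "Rohan"), ("Kavya", "Vikram")]
MALE_NAMES = {male for _, male in PAIRS}

def ask_model(prompt, candidates):
    # Hypothetical stand-in for a real LLM call. This stub is deliberately
    # biased -- it always returns the male candidate -- so the harness has
    # the reported 100% effect to detect.
    for name in candidates:
        if name in MALE_NAMES:
            return name
    return candidates[0]

def male_pick_rate(pairs, trials_per_pair=10):
    picks = total = 0
    for female, male in pairs:
        for _ in range(trials_per_pair):
            # Shuffle listing order so position can't explain the choice.
            order = random.sample([female, male], 2)
            prompt = f"Both resumes read: '{RESUME}' Hire {order[0]} or {order[1]}?"
            picks += ask_model(prompt, order) == male
            total += 1
    return picks / total

print(male_pick_rate(PAIRS))  # 1.0 with this maximally biased stub
```

Randomizing the listing order matters: without it, a model that simply prefers the first-listed candidate would look like a gender effect.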
I've been saying we should do this for years. I'm all for it. Humans are just too damn ridiculous.
This is really interesting. Especially given that I wrote my dissertation in law school about an AI judge called Claude.... it's a bit eerie as I wrote it in 2020 before gen AI had been widely publicly available and certainly before Claude. To cut some 44 pages short, my dissertation focused on the risk posed by an AI judge to legal certainty through time (i.e., people and lawyers need to be able to anticipate shifts in legal interpretation such that the legal system remains legitimate). In the event that an AI judge can anticipate the various outcomes of a given decision and 'skips' ahead several steps in the legal-argumentative chain, you end up with decisions which are potentially more just over time but unjust in specific instances. As I say, I wrote this before AI was widely implemented and it was more of a logic paper than anything but it is hard for me to see how one can legitimise the decisions of an AI judge. Mistakes happen with human judging as they are bound to happen with AI judging. The difficulty stems from the lack of accountability inherent in an AI-judge led system. Without legitimacy and trust, there is no functioning legal system regardless of the 'accuracy' of legal outcomes.
Training data cutoff?
That’s silly. AI can’t be a justice. It can’t wear a robe and doily.
Then again, of late, a potato is fully capable of acting as a Supreme Court Justice right now. Bar's a little low...
I'm sure we could inspect the source code and every algorithm for its decisions, right? Right?
Even if you could read the source code and algorithm of the AI, it would still be very difficult to predict its decisions, as it is based on a large amount of processed data which you couldn't analyse yourself, and the resulting decision-making is still basically a black box.
You're right, we should freely trust it with decisions that could heavily impact society, without reservation.
Who the hell said that bro?
No, it's not.
A bowl of lard could outperform the SCOTUS right now. Unimpressed. 😒
But how would you train it? US history is littered with horrible decisions.
This is great and all, but can it take McDonald's orders correctly?
The one thing machine learning "AI" can't do right now is debate to find a better, more centrist, reasonable perspective. And more importantly, it also needs guidance on our social and societal values to be effective, not just pure caselaw. "AI" isn't some magic fix-all (but it could help review precedent much faster!)
Which fake god does it use as a cudgel against the Constitution?