
Algorithm Predicts US Supreme Court Decisions 70% of Time

samzenpus posted about 4 months ago | from the telling-the-future dept.

The Courts 177

stephendavion writes: A legal scholar says he and colleagues have developed an algorithm that can predict, with 70 percent accuracy, whether the US Supreme Court will uphold or reverse the lower-court decision before it. "Using only data available prior to the date of decision, our model correctly identifies 69.7 percent of the Court's overall affirm and reverse decisions and correctly forecasts 70.9% of the votes of individual justices across 7,700 cases and more than 68,000 justice votes," Josh Blackman, a South Texas College of Law scholar, wrote on his blog Tuesday.


biased algorith (5, Insightful)

Dthief (1700318) | about 4 months ago | (#47621123)

I (read: anyone) can make an algorithm that fits any previous data (even only using data that precedes the "prediction")......testing future predictability is the only way this means anything.

Re:biased algorith (2)

mwvdlee (775178) | about 4 months ago | (#47621133)

If only he could have made some predictions, travelled to the future to test the predictions, then travelled back and put the results in his blog post.
Sadly, testing future predictability can only be done after the future has passed.

Re:biased algorith (2)

Thanshin (1188877) | about 4 months ago | (#47621161)

But once the future has passed, it's no longer future. So one can only assert to have tested the predictability formerly called future; also known as the Prince test.

Re:biased algorith (4, Insightful)

Euler (31942) | about 4 months ago | (#47621581)

You could train it with 80% of the historical data and see if it predicts the next 20% of historical data.
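A minimal sketch of that chronological 80/20 split in Python (the file name, column names, and classifier below are placeholders, not anything taken from TFA):

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Hypothetical data: one row per case, features already numerically encoded,
# with a binary "reversed" label and a sortable "decision_date" column.
cases = pd.read_csv("cases.csv").sort_values("decision_date")
features = ["lower_court_code", "issue_area_code", "term"]   # made-up feature names

split = int(len(cases) * 0.8)                 # first 80% of cases plays the role of "the past"
train, test = cases.iloc[:split], cases.iloc[split:]

model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(train[features], train["reversed"])
print("accuracy on the held-out 20%:",
      accuracy_score(test["reversed"], model.predict(test[features])))

The last 20% of cases acts as the "future": the model never sees those rows while fitting.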

Re:biased algorith (2, Insightful)

Anonymous Coward | about 4 months ago | (#47621171)

That's why you should always divide your data set into one subset for fitting/training of the algorithm, and another subset to verify its predictive ability.

The algo doesn't know or care whether the data is actually from the future. That is irrelevant as long as it wasn't fitted on it.

Re:biased algorith (0)

Anonymous Coward | about 4 months ago | (#47621293)

It's pretty easy to simulate future prediction. Divide your data into a training set and a validation set. Feed the training set to the algorithm to build the model. Check the accuracy against the validation set.

For bonus marks, repeat this many times with different, random partitions of the data into training/validation.
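A sketch of that repeated random partitioning (sometimes called Monte Carlo cross-validation) using scikit-learn; the feature matrix and labels here are random placeholders, not real case data:

import numpy as np
from sklearn.model_selection import ShuffleSplit
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(7700, 10))       # placeholder features, one row per case
y = rng.integers(0, 2, size=7700)     # placeholder affirm/reverse labels

scores = []
splitter = ShuffleSplit(n_splits=20, test_size=0.2, random_state=0)
for train_idx, val_idx in splitter.split(X):
    model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores.append(accuracy_score(y[val_idx], model.predict(X[val_idx])))

print("mean validation accuracy over 20 random splits:", np.mean(scores))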

Re:biased algorith (1)

Dthief (1700318) | about 4 months ago | (#47621387)

how about make the algorithm.......use it in a predictive manner after making it....THEN REPORT IT.....

What they have done here is take a data set and make algorithms until one of them matched well. If I have a model that predicts traffic patterns or weather patterns in the past, it's only useful if it is then applied after the fact and is still comparably accurate to when it was developed.

Re:biased algorith (3, Informative)

Chatterton (228704) | about 4 months ago | (#47621153)

That's why you train your algorithm on all the available cases except those from the last year. Then you can test it on that last year of cases. For the system, the last year is the "future" on which you do your testing.

Re:biased algorith (5, Informative)

Anonymous Coward | about 4 months ago | (#47621203)

Yes, and then when the algorithm doesn't work you fine-tune it a bit and test again, and suddenly you end up with an algorithm that has effectively been trained on all the data without formally training it against all the data.

One should be very skeptical of future-predicting algorithms. Until they have been released in the wild for a while without the developer tampering with them, it is pretty safe to guess that they are more or less another version of the Turk [wikipedia.org], even if the inventor doesn't realize it.

The same principle can be applied to market research or climate studies. If the algorithm used is tampered with to produce more accurate results, one can assume that it is useless.

Re:biased algorith (1)

Dthief (1700318) | about 4 months ago | (#47621401)

THIS...better worded than my original comment

Re:biased algorith (1)

Daniel Hoffmann (2902427) | about 4 months ago | (#47622139)

I would assume that any person doing professional statistical research knows how to validate to a certain degree of trust.
For example from:
http://en.wikipedia.org/wiki/C... [wikipedia.org]

"Repeated random sub-sampling validation

This method randomly splits the dataset into training and validation data. For each such split, the model is fit to the training data, and predictive accuracy is assessed using the validation data. The results are then averaged over the splits."
So you actually train against all data and validate against all data across several random sub-samplings, then you average the results to get your 70%. This is just one form of cross-validation; there are others better suited to specific problems.

I think what you actually mean is:
http://en.wikipedia.org/wiki/O... [wikipedia.org]
The solution fits too tightly to the current data. There are ways to reduce overfitting (like using some forms of cross-validation); again, if the researcher is competent he can be pretty sure that his solution is not overfitted to the current data. But who reads the actual paper when you can run a headline with big numbers like 70%?
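A small illustration of the overfitting point, assuming nothing about the actual paper: an unconstrained model can look perfect on its own training data and still be worthless on held-out folds.

import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))       # placeholder features
y = rng.integers(0, 2, size=500)     # pure-noise labels: there is nothing real to learn

deep = DecisionTreeClassifier(random_state=0)                 # free to memorize the noise
shallow = DecisionTreeClassifier(max_depth=3, random_state=0)

cv = KFold(n_splits=5, shuffle=True, random_state=0)
for name, model in [("deep tree", deep), ("shallow tree", shallow)]:
    train_acc = model.fit(X, y).score(X, y)                # accuracy on its own training data
    cv_acc = cross_val_score(model, X, y, cv=cv).mean()    # accuracy on held-out folds
    print(f"{name}: train {train_acc:.2f}, cross-validated {cv_acc:.2f}")

The deep tree scores near 1.0 on data it has memorized but only around 0.5 under cross-validation, which is exactly the gap a competent researcher checks for.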

Useless? No. (1)

Anonymous Coward | about 4 months ago | (#47622369)

What One has is a more accurate model, which is how science works: One creates a model, tests it, adjusts to improve its accuracy, lather, rinse, repeat. It's worked pretty well for centuries.

Re:biased algorith (0)

Anonymous Coward | about 4 months ago | (#47621191)

I (read: anyone) can make an algorithm that fits any previous data (even only using data that precedes the "prediction")......testing future predictability is the only way this means anything.

How exactly does one test future predictability of that particular group?

I'm assuming we would at least use the three-ring-circus algorithm to make it somewhat accurate.

Replace them (-1)

Anonymous Coward | about 4 months ago | (#47621261)

I wish they could replace the SCOTUS with computers.

Lawyers, "We want corporations to have the same rights as people."

SCOTUS (in the voice of Nomad from ST:TOS), "Non sequitur. Case dismissed."

Re: Replace them (0)

Anonymous Coward | about 4 months ago | (#47621331)

Lawyers: "We want wealth to equal power."

SCOTUS: "Done. You owe me an upgrade."

Re:Replace them (5, Insightful)

Impy the Impiuos Imp (442658) | about 4 months ago | (#47622237)

Lawyers: We want people to carry their rights with them, even when operating as a group of people Congress defined as a "corporation" because Congress cannot force them to give up their First Amendment rights.

Scotus (in the voice of Nomad): Logic correct. Opposing lawyers are in error. Must sterilize.

Re:biased algorith (1)

jellie (949898) | about 4 months ago | (#47621279)

In this particular case, future predictability doesn't work. The sample size is way too small (as SCOTUS only hears ~80 cases/year), and the cases are not evenly distributed. The last couple years, for example, the court has become very conservative and happens to hear a significantly higher percentage of business-related cases. It's hard to predict anything from that.

It would make more sense to divide the data into training and validation/cross-validation data sets like in a standard machine learning approach.

Algorithm based on bias (2)

ranton (36917) | about 4 months ago | (#47621467)

I wouldn't be surprised if the primary predictive trait used is simply to check the biases of each judge and then assume they will vote along those biases. Assuming conservative judges will vote conservative and liberal judges will vote liberal should give you a pretty good score right off the bat.

Re:Algorithm based on bias (4, Informative)

AthanasiusKircher (1333179) | about 4 months ago | (#47621913)

I wouldn't be surprised if the primary predictive trait used is simply to check the biases of each judge and then assume they will vote along those biases. Assuming conservative judges will vote conservative and liberal judges will vote liberal should give you a pretty good score right off the bat.

Only in a small minority of cases. Contrary to popular belief, most SCOTUS cases aren't highly politicized cases with a clear conservative/liberal divide. Most cases deal with rather technical issues of law which are much less susceptible to this sort of political analysis.

The Roberts Court, for example, has averaged 40-50% unanimous rulings in recent years (last year about 2/3 of rulings were unanimous). So, your idea of "assume conservative vote conservative, liberal vote liberal" would tell you nothing about maybe half of the cases that have come before the court in recent years. (Historically, I believe about 1/3 or so of rulings tend to be unanimous.)

And even with the closely divided cases, you have a problem. Of the 5-4 rulings (which in recent years have been only about 20-30% of the total rulings), about 1/4 to 1/3 of them don't divide up according to supposed "party lines."

In sum, I don't know what factors this model ends up using, but "conservative vs. liberal" is way too simplistic to predict the vast majority of SCOTUS rulings. If you could factor in detailed perspectives on law (which often have little to do with the stereotyped political spectrum), you might have something... but that would require a lot more work, particularly over the 50 years of rulings TFA deals with.

Re:biased algorith (0)

Anonymous Coward | about 4 months ago | (#47621535)

Predicting what people are going to decide via an algorithm is a defunct. I believe you were trying to say unless someone else can foresee the future and predict it with certainty this algorithm/study was dumb.

You can ruffly predict the future of somethings based off history, but even that fails with or without an algorithm. They are good for somethings but not good for predictions but decisions.

Re:biased algorith (0)

Anonymous Coward | about 4 months ago | (#47621585)

I believe you were trying to say "is defunct," "roughly" and "some things."

Re:biased algorith (0)

Anonymous Coward | about 4 months ago | (#47621627)

But, but we can replace the supreme court with Big Data (r) once the prediction rate reaches the significant 80%! The other 20% can be ignored as irrelevant noise in the great nation's harmonious strides toward a politically homogeneous and assuredly peaceful state.

Re:biased algorith (1)

plopez (54068) | about 4 months ago | (#47621787)

That is why you use split data sets. You calibrate on one half, or less, of historical data and then verify against data you did NOT calibrate against.

Re: biased algorith (1)

Bartles (1198017) | about 4 months ago | (#47622135)

And sometimes an algorithm can't predict the future OR correctly duplicate the past. Just see global temperature models.

Not a bad idea (0)

Anonymous Coward | about 4 months ago | (#47621127)

Maybe it'll cut down on the number of frivolous lawsuits once businesses realize their cases are futile.

Or not.

Re:Not a bad idea (0)

Anonymous Coward | about 4 months ago | (#47621163)

Be frivolous and frisky, I always says.

hell (0)

Anonymous Coward | about 4 months ago | (#47621131)

They beat my 50% accurate algorithm...

Trivial (2)

Chrisq (894406) | about 4 months ago | (#47621137)

Just identify the wrong decision and they are bound to pick it ;-)

is it better than random? (1)

Anonymous Coward | about 4 months ago | (#47621141)

If the decisions have 50/50 distribution, then a random guess is right 50% of the time. For any other distribution it's more than that. Soooo 70% is at best a little bit better than random guess, at worst equal to it.

Re:is it better than random? (0)

Anonymous Coward | about 4 months ago | (#47621257)

Actually, a random guess is right 50% of the time even for uneven distributions.

Take a situation where outcome A happens 10% of the time and outcome B happens 90% of the time. Now, flip a coin to choose A or B, and work out how often you will pick right. Outcome A is right in 10% of cases; given that, you flip a coin and have a 50% chance of matching A. Outcome B is right in 90% of cases; given that, you flip a coin and have a 50% chance of matching B.

In other words, a random selection never beats the odds.

Re:is it better than random? (5, Informative)

mrvan (973822) | about 4 months ago | (#47621371)

That is correct, but not what the GP meant. If you can model the distribution (e.g. you 'know' that B is 90%) then you can weight your random guessing such that it is correct in >50% of the cases, even without looking at the case itself (it is still 'random' in that sense).

Extreme case: I can predict whether someone has Ebola, without even looking at them, with >99.99% accuracy by just guessing "no" every time, since the prevalence of Ebola is well under 0.01%.

Suppose the supreme court has 70% chance of overturning (e.g. because they choose to hear cases that have 'merit'), then an algorithm that guesses 'overturn' 100% will have a 70% accuracy. A random guess that follows the marginal of the target distribution (e.g. guess 70% overturn) also scores >50% (58% to be precise).
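The 58% figure is just the probability that two independent 70/30 draws agree; a quick check of both strategies (0.7 is the assumed reversal rate from the example above):

p_reverse = 0.7
always_reverse = p_reverse                              # guess "overturn" every time
marginal_guess = p_reverse**2 + (1 - p_reverse)**2      # guess "overturn" 70% of the time at random
print(always_reverse, marginal_guess)                   # 0.7 vs 0.58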

Re:is it better than random? (0)

Anonymous Coward | about 4 months ago | (#47621781)

There are other ways to do a random guess than flipping a coin. Try a D10.

Full of shit! Partially? (0)

Anonymous Coward | about 4 months ago | (#47621147)

So does that make them 70% full of shit, or 30%?

i know one when i see one (-1)

Anonymous Coward | about 4 months ago | (#47621149)

Gay Nigger gaydar detects niggers with 1000% accuracy.

Is this anything new? (1)

SillyBrit (3401921) | about 4 months ago | (#47621155)

People have been predicting outcomes for years. There was a story a couple of months back about something similar. And here's a link to a group that stated 75% success predicting the outcome prior to oral arguments, back in 2004 http://www.jstor.org/discover/10.2307/4099370?uid=3738032&uid=2&uid=4&sid=21104566455723 [jstor.org] . I can't comment on the relative academic merits of either though.

Missing info (1)

paradigm82 (959074) | about 4 months ago | (#47621157)

It would be useful to know how many of the court's decisions are affirm vs reverse. If 70% are affirm, it's not impressive to be able to correctly predict 70% of decisions since you can just always guess on "affirm". Of course, if it is equally distributed (50% affirm, 50% reverse), getting 70% correct shows the algorithm has some prediction power (assuming it is trained on a different dataset than is used for evaluating it). But it is impossible to determine if this is the case, based on the information in the article.

Re:Missing info (3, Informative)

mwvdlee (775178) | about 4 months ago | (#47621189)

It would be useful to know how many of the court's decisions are affirm vs reverse.

http://www.americanbar.org/con... [americanbar.org]

I did some tallying on table 3 and found the following numbers on total decisions:
Reversed: 58.48%
Vacated: 12.58%
Affirmed: 28.94%

The article doesn't mention whether "vacated" is counted separately or as a reversal.
The graph shows only reversed and affirmed, so I'm assuming vacated counts as a reversal.
If this is the case, reversed and vacated together is 71.06%.
So if you'd guess "Reversed" all the time, you'd be slightly more accurate than the algorithm.

Re:Missing info (0)

Anonymous Coward | about 4 months ago | (#47621285)

I totally lol'd when I saw those stats. This is probably where the guy got the idea. No doubt he 'fine tuned' his algorithm to fit these probabilities.

Re:Missing info (0)

Anonymous Coward | about 4 months ago | (#47621939)

Vacated is neither affirmed, nor reversed.
Affirmed means, "The earlier court got it right."
Reversed means, "The earlier court got it backwards."
Vacated means, "We found an error the earlier court made, so we're discarding the judgement, and sending it back to be tried again with instructions on how to handle the part where the earlier court erred."

Useless (5, Insightful)

Jiro (131519) | about 4 months ago | (#47621169)

According to http://www.scotusblog.com/stat... [scotusblog.com] the Supreme Court recently affirmed 27% of lower court decisions and reversed 73%. This means that if you guess that the Supreme Court reverses the lower court every time, you'll be 73% accurate. 70% accuracy is ridiculously low if you can get 73% accuracy *without* taking into consideration the records of each justice or any other kind of details.
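This "always reverse" rule is exactly the majority-class baseline that scikit-learn ships as DummyClassifier, and it is the bar any claimed accuracy should at least clear. The labels below are fabricated to match the quoted 73/27 split:

import numpy as np
from sklearn.dummy import DummyClassifier

y = np.array([1] * 73 + [0] * 27)                # 1 = reversed, 0 = affirmed
X = np.zeros((len(y), 1))                        # features are irrelevant to this baseline
baseline = DummyClassifier(strategy="most_frequent").fit(X, y)
print("baseline accuracy:", baseline.score(X, y))   # 0.73 -- the number a real model has to beat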

Re:Useless (1)

fph il quozientatore (971015) | about 4 months ago | (#47621185)

This should be +10 insightful.

Re:Useless (2)

sound+vision (884283) | about 4 months ago | (#47621361)

Not really - The "algorithm" the grandparent has come up with can be written out as "The vote will be to reverse the ruling". Sure, you will get approximately 73% accuracy, assuming the distribution of the decisions remains the same. But it has zero utility as a predictive algorithm. Presumably, the algorithm that has been developed in TFA can predict both rulings to uphold and rulings to reverse with 70% accuracy. That's infinitely more useful than an algorithm that predicts rulings to reverse with 100% accuracy and rulings to uphold with 0% accuracy, which is what the GP poster did.

It's similar to

Re:Useless (3, Informative)

fph il quozientatore (971015) | about 4 months ago | (#47621519)

Yeah, that's why we have better error measures [wikipedia.org] than "70% accuracy", and competent people should use them.
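For a skewed 73/27 split, measures like the confusion matrix, balanced accuracy, or Matthews correlation expose a trivial "always reverse" predictor immediately. A sketch with made-up labels matching that split:

import numpy as np
from sklearn.metrics import balanced_accuracy_score, confusion_matrix, matthews_corrcoef

y_true = np.array([1] * 73 + [0] * 27)   # 1 = reversed, 0 = affirmed (illustrative)
y_pred = np.ones_like(y_true)            # the "always reverse" predictor

print(confusion_matrix(y_true, y_pred))          # every affirmed case is misclassified
print(balanced_accuracy_score(y_true, y_pred))   # 0.5 -- no better than chance per class
print(matthews_corrcoef(y_true, y_pred))         # 0.0 -- no correlation with the truth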

Re:Useless (3, Informative)

fph il quozientatore (971015) | about 4 months ago | (#47621551)

To be more fair, I am not bashing the original paper here -- that one reports the full confusion matrix (http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2463244).

Re:Useless (0)

Anonymous Coward | about 4 months ago | (#47621213)

I guess you missed the "using only data available prior to the date of decision" part. Yes, knowing it's 73%, you can go back and guess "reversed" and be 73% correct, but if you already know it's 73% in fact you can find out the outcome in each case and guess correctly 100% of the time, so 73% is pretty bad actually. But yeah, having the posterior information available as a prior does help.

Re:Useless (0)

Anonymous Coward | about 4 months ago | (#47621323)

Yes, knowing it's 73%, you can go back and guess "reversed" and be 73% correct, but if you already know it's 73% in fact you can find out the outcome in each case and guess correctly 100% of the time, so 73% is pretty bad actually.

Only a South Texas College scholar could parse that sentence.

Re:Useless (0)

Anonymous Coward | about 4 months ago | (#47622393)

I did and I'm from nowhere near there.

Re:Useless (1)

Bearhouse (1034238) | about 4 months ago | (#47621263)

It depends - there's a difference between saying 70% "in general" and "this one will be part of the 70%".

Of course, since the percentages seem very close the practical implications would seem to be the same.

Useless (0)

Anonymous Coward | about 4 months ago | (#47621273)

So what they've done is create a system marginally less accurate than "return 'reversed'".

Re:Useless (1)

dkf (304284) | about 4 months ago | (#47621291)

According to http://www.scotusblog.com/stat... [scotusblog.com] the Supreme Court recently affirmed 27% of lower court decisions and reversed 73%. This means that if you guess that the Supreme Court reverses the lower court every time, you'll be 73% accurate. 70% accuracy is ridiculously low if you can get 73% accuracy *without* taking into consideration the records of each justice or any other kind of details.

Of course, the usual reason why the case got to the Supremes in the first place is because there were two cases by different Appeals Circuits which conflicted.

Re:Useless (not if you read the article) (1)

MarkWegman (2553338) | about 4 months ago | (#47621319)

The article talks about predicting decisions going back to 1953. It also says it's easy to come up with good predictors for specific time ranges. Your rejection algorithm works well for the last year or so, but the article you cite is based on the last year's statistics only. The actual article talks about using a whole pile of inputs and learning a good predictor. It sounds like it would have easily learned your strategy, though the article isn't clear. Apparently the algorithm is doing just about as well as humans trying to predict the decisions; the best humans have only a slightly better track record.

Re:Useless (4, Informative)

AthanasiusKircher (1333179) | about 4 months ago | (#47621395)

70% accuracy is ridiculously low if you can get 73% accuracy *without* taking into consideration the records of each justice or any other kind of details.

First, your link only deals with the past court term. TFA deals with predicting cases back to 1953. Is your 73% stat valid for the entire past half century?

And even if it were, the algorithm is much more granular than that, predicting the way individual justices will vote. From TFA:

69.7% of the Court's overall affirm and reverse decisions and correctly forecasts 70.9% of the votes of individual justices across 7,700 cases and more than 68,000 justice votes.

Also, before someone objects, please note that (contrary to popular belief) SCOTUS does not always vote 5-4 according to party lines. For instance, your own link notes that 2/3 of last year's opinions were UNANIMOUS. 5-4 decisions usually account for only about 25% of cases in recent years, and of those, usually 1/3 or so don't divide up according to supposed "party line" votes.

So, I agree with you that simply predicting reverse/affirm at 70% accuracy may be easy, but predicting 68000 individual justice votes with similar accuracy might be a significantly greater challenge.

Re:Useless (1)

AthanasiusKircher (1333179) | about 4 months ago | (#47621415)

Sorry -- accidentally hit submit early. Obviously the main quote from TFA is only one sentence... the rest is my commentary.

Re:Useless (1)

Cyberdyne (104305) | about 4 months ago | (#47621549)

So, I agree with you that simply predicting reverse/affirm at 70% accuracy may be easy, but predicting 68000 individual justice votes with similar accuracy might be a significantly greater challenge.

In fact, it looks like very much the same challenge: with most decisions being unanimous reversals, it seems only a small minority of those individual votes are votes to affirm the lower court decision. So, just as 'return "reverse";' is a 70+% accurate predictor of the overall court ruling in each case, the very same predictor will be somewhere around 70% accurate for each individual justice, for exactly the same reason. (For that matter, if I took a six-sided die and marked two sides "affirm" and the rest "reverse", I'd have a slightly less accurate predictor giving much less obvious predictions: it will correctly predict about two-thirds of the time, with incorrect predictions split between unexpected reversals and unexpected affirmations.)

This is the statistical problem with trying to measure/predict any unlikely (or indeed any very likely) event. I can build a "bomb detector" for screening airline luggage, for example, which is 99.99% accurate in real-world tests. How? Well, much less than 0.01% of actual airline luggage contains a bomb ... so a flashing green LED marked "no bomb present" will in fact be correct in almost every single case. It's also completely worthless, of course! (Sadly, at least two people have put exactly that business model into practice and made a considerable amount of money selling fake bomb detectors for use in places like Iraq - one of them got a seven year jail sentence for it last year in England.)

With blood transfusions, I understand there's now a two stage test used to screen for things like HIV. The first test is quick, easy, and quite often wrong: as I recall, most of the positive readings it gives turn out to be false positives. What matters, though, is that the negative results are very, very unlikely to be false negatives: you can be confident the blood is indeed clean. Then, you can use a more elaborate test to determine which of the few positives were correct - by eliminating the majority of samples, it's much easier to focus on the remainder. Much the way airport security should be done: quickly weed out the 90-99% of people/bags who definitely aren't a threat, then you have far more resources to focus on the much smaller number of possible threats.
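A rough back-of-the-envelope for why most first-stage positives can still be false even with a good test (all three numbers below are assumptions for illustration, not real screening figures):

prevalence = 0.001        # assumed: 0.1% of samples actually carry the infection
sensitivity = 0.999       # assumed: the quick test catches 99.9% of true positives
specificity = 0.98        # assumed: 2% of clean samples still flag as positive

p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
ppv = sensitivity * prevalence / p_positive
print(ppv)                # ~0.05 -- only about 1 in 20 first-stage positives is real

which is why the cheap test is only useful for ruling samples out, not for confirming anything.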

Come to think of it, the very first CPU branch predictors used exactly this technique: they assumed that no conditional branch would ever be taken. Since most conditional branches aren't, that "prediction" was actually right most of the time. (The Pentium 4 is much more sophisticated, storing thousands of records about when branches are taken and not taken - hence "only" gets it wrong about one time in nine.)

Now, I'd like to think the predictor in question is more sophisticated than this - but to know that, we'd need a better statistical test than those quoted, which amount to "it's nearly as accurate as a static predictor based on no information about the case at all"! Did it predict the big controversial decisions more accurately than less significant ones, for example? (Unlikely, of course, otherwise they wouldn't have been so controversial.)

Re:Useless (1)

AthanasiusKircher (1333179) | about 4 months ago | (#47621789)

In fact, it looks like very much the same challenge: with most decisions being unanimous reversals, it seems only a small minority of those individual votes are votes to affirm the lower court decision.

Nope -- you just made the same error the GP did: extrapolating a false inference based on one year of data. It's true that last year had 2/3 unanimous rulings, but that was an outlier -- which I was mainly using to make a point about how the 5-4 rulings that make the news are not as common as we think.

In reality, the Roberts court has averaged maybe 40-50% unanimous rulings, but this is an outlier historically too. Over the 50 years TFA deals with, the unanimous rate is more like 30-40%, I think, maybe less.

So, no -- you can't just assume that 2/3 of votes are for unanimous reversals.

So, just as 'return "reverse";' is a 70+% accurate predictor of the overall court ruling in each case, the very same predictor will be somewhere around 70% accurate for each individual justice, for exactly the same reason.

That logic is completely bogus. For example, imagine a scenario where the 70% reversals are unanimous, and the other 30% are all 5-4 rulings not to reverse. That would mean that 83.3% of votes were to reverse, and only 16.7% were to uphold... quite far from your 70% assumption. Or, to be more extreme, imagine a scenario where the 70% reversals are all 5-4, while the affirmations are all 9-0 unanimous. In that case, while 70% of rulings result in reversals, only 31% of votes were actually to reverse.

In sum, most decisions are NOT unanimous reversals, and the number of individual votes to reverse does NOT necessarily have much to do with the number of decisions to reverse.

For that matter, if I took a six-sided die and marked two sides "affirm" and the rest "reverse", I'd have a slightly less accurate predictor giving much less obvious predictions: it will correctly predict about two-thirds of the time, with incorrect predictions split between unexpected reversals and unexpected affirmations.

Wrong again! Assuming a 70% rate for reversals, if you took a 6-sided die and wrote "reverse" on ALL sides, you'd get a 70% accuracy rate for predictions.

But if you wrote reverse on only four sides, you'd get an accurate prediction from your die only about 56% of the time. Your die actually became much less accurate! (Even if you used a 10-sided die and wrote "affirm" on 3 sides, you'd only get 58% accuracy for predictions, much less than you'd get by writing "reverse" on all sides.)
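The general rule behind those numbers: a guesser that says "reverse" with probability q, against a true reversal rate p, is correct with probability p*q + (1-p)*(1-q). Plugging in the dice from this exchange (p = 0.7 is the assumed reversal rate):

def expected_accuracy(p_reverse, q_guess_reverse):
    """Chance that a random guesser's call matches the actual outcome."""
    return p_reverse * q_guess_reverse + (1 - p_reverse) * (1 - q_guess_reverse)

p = 0.7
print(expected_accuracy(p, 1.0))      # all sides say "reverse"        -> 0.70
print(expected_accuracy(p, 4 / 6))    # four of six sides "reverse"    -> ~0.57
print(expected_accuracy(p, 7 / 10))   # d10 with three "affirm" sides  -> 0.58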

Really -- if you're going to try to critique someone else's models on the basis of flawed stats, take some time to think about your own arguments and understand their numerical results.

Useless (1)

Anonymous Coward | about 4 months ago | (#47621601)

Correct. I did my thesis work in this area (predicting court outcomes). If you can't beat simply predicting reverse every time, you have nothing.
From a common-sense perspective: consider the effort the court goes through in selecting a case, and all the cases that don't get selected. Consider why cases go before the Supreme Court. The ones that the court hears are naturally going to be those that someone thinks are important enough to reverse.

Re:Useless (0)

Anonymous Coward | about 4 months ago | (#47621653)

The 70% algorithm accuracy is just an estimate; further testing actually gives a value of 73%.
I have found the source code of the algorithm on a warez site - here is the complete listing:

        bool supreme_court_decision(bool lower_court_decision) {return ! lower_court_decision;}

Simplified algorithm (3, Insightful)

mwvdlee (775178) | about 4 months ago | (#47621175)

if defendant.bank_balance > plaintiff.bank_balance
      winner = defendant
else
      winner = plaintiff

I'd guess about 90% accurate.

Re:Simplified algorithm (1)

Anonymous Coward | about 4 months ago | (#47621445)

bool IsGuilty(String defendant){
if (defendant.compare("black")==1)
  return true;
else
  return false;
}

-- From outraged black guy!

Re:Simplified algorithm (0)

Anonymous Coward | about 4 months ago | (#47622301)

Sigh... it's so long! After you practice using !strcmp instead of strcmp...==0, the following becomes second nature:
bool isGuilty(String defendant){return defendant.compare("black");}
Some people may say that this code is more implicit, and therefore more challenging to read.
Newbs... once the language is mastered, the shorter version is just as easy, and doesn't take six freakin' lines. Actually, if you can just simply use:
x.compare("black")
instead of:
isGuilty(x)
then the function can be reduced down to zero lines.

Re:Simplified algorithm (2)

dywolf (2673597) | about 4 months ago | (#47621559)

My algorithm is even better, and even more accurate. It's simple: what is the worst possible outcome for the citizenry?

end (1)

WallaceAndGromit (910755) | about 4 months ago | (#47621631)

I mean when will this ever end?

Re:end (1)

Vitriol+Angst (458300) | about 4 months ago | (#47621733)

SCOTUS + ((Mobs + Pitchforks + Torches) / Angry) = Sudden Concern for Public

Re:Simplified algorithm (1)

Anonymous Coward | about 4 months ago | (#47621649)

/* politics is the standard 2-axis model of politics:
  * x = economic freedom
  * y = personal freedom
  * (0,0) = authoritarian
  * (0,1) = liberal ("left")
  * (1,0) = conservative ("right")
  * (1,1) = libertarian
  */
vec2 c = case.politics;
vec2 p = normalize(proj(c, plaintiff.politics));
vec2 d = normalize(proj(c, defendant.politics));
for (Judge judge : scotus)
  {
    vec2 j = normalize(proj(c, judge.politics));
    float pj = dot (p, j);
    float dj = dot (d, j);
 
    if (pj >= 0.5 || dj <= -0.5)
      judge.rulesFor(plaintiff);
    else
      judge.rulesFor(defendant);
  }

This is 99.44% accurate.

Re:Simplified algorithm (1)

Vitriol+Angst (458300) | about 4 months ago | (#47621741)

It would run faster if written in Swift. ;-P

Re:Simplified algorithm (1)

Vitriol+Angst (458300) | about 4 months ago | (#47621747)

By the way, I'm keeping this algorithm -- not for the wisdom, but because it's so much more efficient than a bunch of CASE statements. Wow, vectored decision trees -- seems so much more civilized.

Re:Simplified algorithm (0)

Anonymous Coward | about 4 months ago | (#47622183)


This is 99.44% accurate.

So is Ivory soap.

Re:Simplified algorithm (0)

Anonymous Coward | about 4 months ago | (#47621717)

That's probably at least 92% accurate.

I used to use the algorithm;

Does ruling help people + hurt companies =
    winner Company.

Does ruling help companies + hurt people =
    winner Company

Does ruling do nothing to companies and annoy people =
    Clarence gets his intern to put big words in his decision brief.

Yours with the money calculation is way more straightforward. It accounts for those Company vs. Company rulings as well. Kudos!

That ought to be easy... (0)

Type44Q (1233630) | about 4 months ago | (#47621179)

Just write an algorithm that determines which decision will leave the American peoples' ass stretched open the widest, when they're bent over and fucked.

Sweet! Now we can start the Judge program. (2)

Lumpy (12016) | about 4 months ago | (#47621287)

Install software in the helmet, Set the judges loose on the city....

I AM THE LAW!

There's a more useful algorithm (-1)

Coisiche (2000870) | about 4 months ago | (#47621307)

A more useful algorithm would be one that evaluates if the decision is free from bias and not influenced by money.

Of course, finding the test data for it would be a bit of a problem.

Re:There's a more useful algorithm (0)

Anonymous Coward | about 4 months ago | (#47621349)

Well, there are some things. You can say, for instance, that a court's decision should not depend on the gender of the judge, or the time of day, or on whether the appellant is of the same race as the judge. If you check and find out that these irrelevant things do matter, then you can say that there is injustice even without having any opinion on which way any particular case should go.

But there is very little research of this sort. At the end of the day, the court system is more about convincing people that justice has been served, than about actually serving justice.

There's a more useful algorithm (0)

Anonymous Coward | about 4 months ago | (#47621455)

Here's that algorithm:

#define FREE_FROM_BIAS_AND_FINANCIAL_INFLUENCE 0
#define FREE_FROM_BIAS_BUT_INFLUENCED_BY_MONEY 1
#define FREE_FROM_FINANCIAL_INFLUENCE_BUT_POLITICALLY_BIASED 2
#define POLITICALLY_BIASED_AND_INFLUENCED_BY_MONEY 3

int algorithm() {

      return POLITICALLY_BIASED_AND_INFLUENCED_BY_MONEY;
}

70% successful prediction (1)

Richard Kirk (535523) | about 4 months ago | (#47621327)

A 70% prediction rate is not impressive. In the UK, where the weather seems pretty unpredictable, "it will be pretty much the same as yesterday" is right about 70% of the time. Weather forecasting can now track individual storms, but it took a long time and a lot of research for the overall forecast success rate to get any better than this. The model may be important; the success rate probably isn't.

Re:70% successful prediction (1)

jd (1658) | about 4 months ago | (#47621669)

It's 70% on average. For the Democratic judges, it's much lower. For the Republican judges, you could probably dispense with them and use the code as it stands. Since the algorithm falls short of true AI, this clearly implies a lot about how decisions are made, and on what basis.

Re:70% successful prediction (0)

Anonymous Coward | about 4 months ago | (#47622291)

For the Republican judges....

Having not read the article... is this true, or is it a bias of yours? When dealing with stats, bias is a horrible thing to have. You can accidentally sneak in bad data and not even know it.

But let me show you how you are wrong. Read this please.
http://yro.slashdot.org/comments.pl?sid=5501457&cid=47621169

There are people who actually understand statistics here. Please read them. From my POV I would say there are 2-3 judges who seem to swing the vote back and forth. The rest are dipsticks who vote with their party for some reason. If you take into account that only 2-3 justices swing their votes in any given case, you end up at about 70%. Lo and behold, that is about what it is.

If you want to pretend your party is better than the other, please read up on the history of your party. Even recent history is telling (NSA and IRS scandals).

Re:70% successful prediction (1)

wonkey_monkey (2592601) | about 4 months ago | (#47621769)

A 70% prediction rate is not impressive.

Doesn't that rather depend on what you're predicting, and how good previous algorithms were?

Isn't that a bit like complaining that "10mph is not impressive" while commenting on a story about the world's fastest snail?

In the UK, where the weather seems pretty unpredictable, "it will be pretty much the same as yesterday" is right about 70% of the time.

I can predict with 99.9% accuracy what the weather will be like in five minutes. Does that mean any prediction less than 99.9% accurate is not impressive?

Guess what (0)

Anonymous Coward | about 4 months ago | (#47621419)

If we outlawed private campaign funding it would be correct 100% of the time.

Unimpressive (0)

Anonymous Coward | about 4 months ago | (#47621435)

It's within 1 standard deviation of being no better than a coin toss.

I Have One To! (0)

LifesABeach (234436) | about 4 months ago | (#47621477)

Tyranny equals enough money has been thrown into the court room.

not really that hard, theoretically (1, Flamebait)

argStyopa (232550) | about 4 months ago | (#47621537)

The US Constitution is only about 4 pages, 4400 words (and the bulk of that is structural & procedural minutiae about the US government).
The role of the USSC is simply resolving if a law does or does not conform to the US Constitution.

Given those relatively limited boundaries, it shouldn't be that complex of an issue to predict algorithmically the results of a given judicial ruling, one would think. (The devil's in the details about parsing meaning and context.)

Of course, I believe phrases like "the right to keep and bear arms shall not be infringed" are indisputably clear, and I'm astonished that people can find convoluted ways to try to tear it apart syntactically and semantically.

Re:not really that hard, theoretically (-1, Offtopic)

coinreturn (617535) | about 4 months ago | (#47621599)

Of course, I believe phrases like "the right to keep and bear arms shall not be infringed" are indisputably clear, and I'm astonished that people can find convoluted ways to try to tear it apart syntactically and semantically.

I agree - as long as the bears don't mind you taking their arms.

Re:not really that hard, theoretically (1)

Anonymous Coward | about 4 months ago | (#47621623)

Of course, I believe phrases like "the right to keep and bear arms shall not be infringed" are indisputably clear, and I'm astonished that people can find convoluted ways to try to tear it apart syntactically and semantically.

Except you're missing the context. The full text is: "A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed."

So for 150 years, the 2nd amendment was interpreted to mean that states could raise armed militias, but that there was no individual right to bear arms. Citation: http://www.newyorker.com/news/... [newyorker.com]

Re:not really that hard, theoretically (2)

tomhath (637240) | about 4 months ago | (#47621735)

The original intent was to prevent the government from having too much power by ensuring that citizens could form militias. Having arms available to everyone (not just the government's army) was an essential part of being able to raise a militia.

Re:not really that hard, theoretically (0)

Anonymous Coward | about 4 months ago | (#47621865)

No, the original intent was to ensure that slave owners could form militias to prevent slave revolts. The Founders were well aware of the inability of militias to stand against a professional army as consistently demonstrated in the American Revolution.

Re:not really that hard, theoretically (3, Interesting)

argStyopa (232550) | about 4 months ago | (#47622137)

Nonsense, an editorial screed by the New Yorker is meaningless. And if you want to bring context into it, you'll lose even harder.

Firstly, judicial review wasn't even a principle until Marbury v Madison in 1803. So we're talking about the 19th century only.

In cases in the 19th Century, the Supreme Court ruled pretty much only that the Second Amendment does not bar state regulation of firearms. (For example, in United States v. Cruikshank, 92 U.S. 542, 553 (1875), the Court stated that the Second Amendment "has no other effect than to restrict the powers of the national government," and in Presser v. Illinois, 116 U.S. 252, 265 (1886), the Court reiterated that the Second Amendment "is a limitation only upon the power of Congress and the National government, and not upon that of the States.")

Although most of the rights in the Bill of Rights have been selectively incorporated into the rights guaranteed by the Fourteenth Amendment and thus cannot be impaired by state governments, the Second Amendment has never been so incorporated.

It's only since 1939 United States v Miller, that federal court decisions considering the Second Amendment have largely interpreted it as preserving the authority of the states to maintain militias - not the '150 year history' stated in the deliberately-misleading text of the quoted article.

(much of the above is clipped verbatim from http://www.loc.gov/law/help/se... [loc.gov] )

In fact, it's ONLY in the latter 20th Century that we've even HAD this debate, as all constitutional commentary and understanding previous to that was universal in its understanding of the 2nd Amendment as an individual right, *not* dependent on being in a militia: http://en.wikipedia.org/wiki/S... [wikipedia.org]

Of course, you further disregard that according to the US code, all males from 17-44 *are* by default in the militia. (http://www.law.cornell.edu/uscode/text/10/311)

Re:not really that hard, theoretically (1)

Anonymous Coward | about 4 months ago | (#47622189)

I see the 'collective right' argument all the time, but it makes no sense on the face of its own claim.

From the 1st Amendment:
"Congress shall make no law ... abridging ... the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.

Here we have another "right of the people" that is not to be denied. If "the people" means a collective right, not an individual one, then you are making the claim that *you* don't have the right to peaceably assemble, and to petition the Government for redress of grievances. Do you see the problem yet?

If not, let me ask you a simple question: If the individual does not have a right, how can a group which is *composed* of those individuals have that right?
The answer is simple; it cannot. The rights of a group of people are the same as the rights of the individuals which make up the group.

In the case of the 2nd Amendment, the issue becomes even more clear when you are familiar with what a militia was (and still is). A militia is the body of the people living in a state who are able and willing to take up arms in defense. They are expected to arrive with at least minimally sufficient arms, ammunition, and equipment to take on the immediate task for which they have been called. (The state is expected to provide such supplies, including food and ammunition, as may be needed for extended campaigns.) If an individual does not have the right to keep & bear arms, then the militia *cannot* exist.

Finally, the Constitution does not *grant* rights. It explicitly lays out the boundaries of the powers which were granted to the federal government *by* the people. The 2nd Amendment does not *create* the right to keep & bear arms, it simply acknowledges the existence of that right, and disallows the federal government from infringing that right.

Re:not really that hard, theoretically (1)

jd (1658) | about 4 months ago | (#47621655)

Since you fail on the example you tried to parse, I suggest that although the theory is easy, personal prejudice always takes precedence over what is written.

lol (1)

Charliemopps (1157495) | about 4 months ago | (#47621677)

Per their own data:
They reviewed 7700 cases.
The court reversed 5077 of those cases.
So the court reverses 66% of cases it sees. Which makes sense, that's what the court does.
So I can get damn close to their results with my model which is: "The court will reverse 100% of the time"

I don't see their model in there, and I don't really care to look that hard. But they said they used the same data previous models did. Most of that data are things like:
Which court heard the original case?
Was the decision liberal or conservative?
etc...

It seems to be more a case of the court overturning politically motivated decisions made by lower courts. I.e., a liberal-leaning decision out of California is likely to be overturned... or a conservative-leaning decision out of Texas. But the reverse, a conservative-leaning ruling out of California, is not. So if a court rules against its nature, it's more likely to stand when heard by SCOTUS, which makes intuitive sense.

Better algorithm (0)

fulldecent (598482) | about 4 months ago | (#47621689)

I have a better algorithm... written in one line of perl:

print "Reverse";

Accuracy: 73%

Source http://www.scotusblog.com/stat... [scotusblog.com]

I can predict House votes 100% of the time (1)

Anonymous Coward | about 4 months ago | (#47621707)

If Obama wants it the GOP will say No.

That's not really prediction (1)

eladts (1712916) | about 4 months ago | (#47621739)

You can only predict things that have not yet happened. I'll be much more impressed if they publish their predictions for future decisions and those turn out to be 70% correct.

Predicting the SCOTUS is easy (-1)

plopez (54068) | about 4 months ago | (#47621837)

Just follow 3 simple rules; they vote in favor of corporations, vote in favor of the gov't, and vote against the individual. You will be able to predict the court more often than not.

So (0)

Anonymous Coward | about 4 months ago | (#47621915)

It's slightly more accurate than a coin flip! Nice! Something tells me even a Fox News anchor would perform better.

Predicting the past (0)

Anonymous Coward | about 4 months ago | (#47621929)

Shouldn't this algorithm predict 100% of past court decisions? What we are really interested in is whether it can predict the future - let it run for a few court terms and THEN announce something to the media.

Good news (1)

internerdj (1319281) | about 4 months ago | (#47622005)

Despite all the (partially true) snark, isn't this a good thing? Shouldn't the highest court of the land be producing rulings that are predictably consistent with previous rulings? Unless a case is truly novel, past performance should be a good predictor of future performance here, since case law is cumulative.

Re:Good news (1)

Bob the Super Hamste (1152367) | about 4 months ago | (#47622409)

While I agree that we do need a logically coherent set of laws and rulings, that doesn't seem to be what we get.

It should be trivial to predict it 100% (0)

Anonymous Coward | about 4 months ago | (#47622115)

It should be trivially easy to predict the outcome of SCOTUS cases, as most constitutional issues are black or white.

The constitution does a couple of things. First, it lists a bunch of things the Government CAN'T DO, period, no matter how much it wants. Second, it says that it can only do what the people give it permission to do, and nothing else.

Too often I hear people talking about what the framers "might have meant" or worse, what their "intent might have been." Folks, these are not at issue, because what they meant, and what they intended, are clearly spelled out in their writings of the time. Between the Federalist Papers, the DoI, the Constitution, and the multitude of books these people authored, it is all spelled out in excruciating detail, and by design, without room for creative interpretation.

70% of the time... (0)

Anonymous Coward | about 4 months ago | (#47622191)

... it works everytime! https://www.youtube.com/watch?v=pjvQFtlNQ-M

Odds? (0)

Anonymous Coward | about 4 months ago | (#47622193)

Let the judicial betting begin!
