Interesting ban on analytics of past rulings made by judges. The rationale for such a ban is explained thus:
However, judges in France had not reckoned on NLP and machine learning companies taking the public data and using it to model how certain judges behave in relation to particular types of legal matter or argument, or how they compare to other judges.
In short, they didn’t like how the pattern of their decisions – now relatively easy to model – was potentially open for all to see.
Hubris seems to have a lot to do with such a move. If decisions are more predictable, the whole legal process becomes less of a game of dice, with more contained costs and timetables for those involved in the litigation. Would it not make sense for the legal system to use AI for the first level of decision making and override it when it does not make sense? Even without AI, lawyers and law firms have always used the past performance of judges to plan their strategy. The more adept the lawyer is at predicting how a judge would rule, the more useful they are to the client. It's unclear why throwing some technology at the problem makes it such a grave offense.
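
For illustration only, here is a minimal sketch of the kind of modelling the article alludes to: a toy classifier trained on placeholder ruling summaries, with the judge's identity folded into the features so judge-specific tendencies can surface. Every name, summary, and outcome label below is hypothetical; a real system would ingest the published decisions themselves.

```python
# Hypothetical sketch: predicting how a judge might lean, from public ruling data.
# All data below is placeholder text for illustration, not real case records.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each record: a free-text summary of the matter plus a judge identifier,
# and the outcome that was reached.
summaries = [
    "unfair dismissal claim, small employer, procedural defects",
    "unfair dismissal claim, large employer, documented warnings",
    "commercial lease dispute, unpaid rent, tenant insolvency",
    "commercial lease dispute, landlord failed repair obligations",
]
judges = ["judge_a", "judge_a", "judge_b", "judge_b"]
outcomes = ["claimant", "defendant", "defendant", "claimant"]

# Fold the judge identifier into the text so the model can learn
# judge-specific patterns alongside the facts of the matter.
texts = [f"{judge} {summary}" for judge, summary in zip(judges, summaries)]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, outcomes)

# Ask how judge_a might rule on a new, similar matter.
print(model.predict(["judge_a unfair dismissal claim, no written warnings"]))
```

With enough published decisions, even a simple pipeline like this exposes the kind of pattern the quoted passage says judges would rather keep opaque; it is essentially what experienced lawyers have always done by hand.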