A comment in the Data + Design article that really stuck with me was the notion that “data is around us and always been” and that “only recently have we had the technology to efficiently surface these hidden numbers, leading to greater insight into our human condition.” Given that the human condition includes what is perceived to be an unalterable part of humanity, our tendency toward error and fallibility, it is interesting to imagine instances where we might conceivably quantify such intangible concepts, let alone draw insight from them. This standpoint is especially notable given the humanities' general aversion to the impulse to quantify everything in the world and see it in black and white.
This reminded me of the computational knowledge system Wolfram Alpha, which “takes the world’s facts and data” and “computes it across a range of topics.” I went to their demonstrations website (where people can showcase the projects they have been working on) and found an interesting collection of projects, including a fair number in the legal field. These include the projects linked above. The Appeals Court Paradox, in particular, takes into account the probability that each judge votes correctly, and factors in whether the judges vote independently, to determine the likelihood of a “correct” ruling being delivered.
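The core of that calculation resembles the classic Condorcet jury setup. A minimal sketch in Python, assuming each judge votes independently with the same probability p of being “correct” (the function name and parameters here are my own illustration, not Wolfram's actual model):

```python
from math import comb

def majority_correct(p: float, n: int) -> float:
    """Probability that a strict majority of n independent judges,
    each correct with probability p, delivers the 'correct' ruling."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Three appellate judges, each right 80% of the time:
print(majority_correct(0.8, 3))  # ≈ 0.896, better than any single judge
# A nine-member court does better still:
print(majority_correct(0.8, 9))  # ≈ 0.98
```

This sketch only covers the fully independent case; the paradox the Wolfram demonstration explores is precisely that correlated votes, where judges influence one another or share biases, change these numbers.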
The projects point to a more pressing, overarching issue in legal rulings and procedure: judges’ bias, however reprehensible, is difficult to identify and allege, and seems an inescapable part of the decision-making process. Especially after the Hobby Lobby case and other recent 5-4 decisions, we now understand rulings also as a product of judges’ personal ideology or political affiliation. This has produced a notable drop in public confidence in the objectivity the judicial system is expected to deliver, such that rulings seem more a product of chance than of law.
Setting aside the assumptions made in deriving the numbers for the initial calculation, the Wolfram Alpha project therefore seems capable of reconciling the need for grey areas and in-between spaces (as opposed to black and white) with statistics, by working in probabilities rather than certainties.
Then again, there is something unsettling about basing the present on the past: gathering data from past occurrences and extrapolating it to predict the future. Problems also arise because the data set of choice is conceptually fuzzy. What is the “correct” decision in relation to the law? If the notion of correctness is bound up with our personal beliefs, how might we represent it in an empirical data set?
At present, although data can usefully represent non-contentious information, it remains to be seen whether it can help illuminate controversial topics in the realm of ethics and law, both of which are underpinned by the human condition.