Policy, Technology, and Law: Who Decides?
In his paper on the proposed censorship scheme for internet access in Australia, Bambauer suggests evaluating the proposal against four criteria: openness, transparency, narrowness, and accountability. He applies these criteria to the Australian proposal, identifying areas in which it is strong as well as areas in which it is potentially weak, such as accountability. While these are excellent criteria for evaluating a censorship proposal, they raise a larger question: when political bodies address social issues using new technology, which groups need technical, political, and legal competency?
Earlier this semester, we looked at two relevant selections on different models of policy codification. In one model, policy makers do their best to give technologists a set of tools for revealing the policy issues that new technological developments might raise. In the other, policy and technology are kept separate, and each group works with the other to build a single system. Both models acknowledge the challenge of codifying existing social and legal norms into systems that act blindly.
Turning from Australia for a moment to the United States, it is worth considering another example of communication censorship for contrast. The U.S. Postal Service has a long history of acting as a gatekeeper against the transmission of obscene material. Its postal inspectors enforce “more than 200 federal laws” (https://postalinspectors.uspis.gov/investigations/MailFraud/fraudschemes/ce/CE.aspx), many pertaining to “child exploitation”. Under Bambauer’s analysis, the U.S. Postal Service probably fares worse as a censor than the Australian proposal. However, it does have two advantages. First, rule making and enforcement are done by the same class of entity (humans), and second, when the system fails, there is an established protocol for appealing the failure (the court system). Bambauer’s analysis is still very useful, and the Postal Service would be improved if it were more open, transparent, narrowly restrictive, and accountable, but those two advantages matter a great deal.
A large, unaddressed issue, the condition alluded to earlier and presupposed by Bambauer’s list, is that translating a functioning system that deals with a single class of entities (humans) into a functioning system that involves multiple classes (humans and machines) is tremendously difficult and requires great breadth and depth of expertise. What is not clear is with whom that expertise should lie.
The Australian government drafted legislation in response to public demand (democratically expressed) that would ban some content and restrict other content on the internet for all Australian citizens. In addition to the new legislation, Australia has two content-censoring bodies, the ACMA and the Classification Board, which “implement the statutory classification framework.”2 Also involved in the new censorship scheme are ISPs, filtering software makers, foreign watchdog groups, and Australia’s own citizens. Who should decide which protocols are filtered? Who should decide whether filter performance is important, and what level of performance is acceptable? Who should decide what constitutes an overly broad filter? None of these questions has an obvious answer, and this list is far from exhaustive. More broadly, should legislators consider what is technically feasible when codifying policy in law?
This is not a novel question. It has been raised before, and dismissed with the argument that legislators often have to make law in technical areas in which they are not experts. They rely on expert consultation, lobbying groups, and the regulated industries themselves. This is a valid point, and it is certainly relevant to this case as well. Recall the modes of interaction between policy makers and technologists we encountered in our readings earlier this semester, and it is clear that there can be meaningful collaboration between the two groups. However, there is an important difference. At the end of the legislative process, in most cases, there is a set of laws that will be interpreted by people, enacted by people, and reviewed by people along the way. Lawyers will either help groups comply or challenge the laws when appropriate. In the case of technology and software, the final product of legislation will be law, process, and code. It is this last component that requires not just expertise in a different field, but a different understanding of how the product will be used.
In some introductory programming books, the author tries to expose the novice to the concept of programming, and to how it differs from human interaction, by having the reader imagine telling a computer to make a sandwich. This “simple” task is very difficult to explain to a computer, and the novice learns that computers are good at some things and bad at following poorly expressed directions. Law, treated as code, is bad code. Consequently, somebody else, ACMA or the ISPs or the filter providers or the watchdog groups, has to translate legislative intent into executable code. While Bambauer addresses this in passing as an accountability issue, it is more fundamental than that. The questions that get passed off to third parties are important, policy-laden questions. If you block HTTP but not HTTPS, you have a simple workaround. If you use deep packet inspection (DPI), you take a bigger performance hit or have a system that costs more. If you filter SMTP, you might be overly broad.
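To make this concrete, here is a minimal sketch of the kind of port-based rule an ISP-side filter might use. Everything in it is hypothetical and invented for illustration; it is not drawn from any actual Australian filtering scheme or filter vendor’s product. The point is that even this trivial rule set encodes a policy decision, and its weakness is visible in a single line.

```python
# Hypothetical sketch of a port-based ISP filter, for illustration only.
# Deciding which entries belong in BLOCKED_PORTS is exactly the kind of
# policy-laden question that legislation hands off to implementers.

BLOCKED_PORTS = {80}  # block plain HTTP; port 443 (HTTPS) is left open


def allow_connection(dest_port: int) -> bool:
    """Permit a connection unless its destination port is on the block list."""
    return dest_port not in BLOCKED_PORTS


if __name__ == "__main__":
    assert not allow_connection(80)  # filtered: unencrypted web traffic
    # The one-line workaround: serve the same content over HTTPS instead.
    assert allow_connection(443)
```

Closing that gap means inspecting payloads (the DPI cost mentioned above) or widening the rule to more protocols (the over-breadth risk). Either way, the policy choice is made in code, by whoever happens to write it.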
Governments, and their people, assume that technological issues can be addressed with the same system that has been used to address policy issues since time immemorial. This thinking leads to the addition of RFID chips to passports being treated as a “mere policy change,” and to internet filtering being done like the mail filtering of a hundred years earlier. Multi-class systems, involving people and computers, require different kinds of thinking. Until that issue is addressed, we are left with useful, but ultimately insufficient, criteria, such as those provided by Bambauer.