We’re hoping to compile a list of texts relevant to the intersections of policy, data and civic life. Please add suggested texts below. For the time being, we’re vetting any texts that are added (to eliminate spam) so it may take a day or two to see your suggestion on the site.

Not Just Neoliberalism

Berman, E. P. (2014). Not Just Neoliberalism: Economization in US Science and Technology Policy. Science, Technology & Human Values, 39(3), 397–431.

Recent scholarship in science, technology, and society has emphasized the neoliberal character of science today. This article draws on the history of US science and technology (S&T) policy to argue against thinking of recent changes in science as fundamentally neoliberal, and for thinking of them instead as reflecting a process of “economization.” The policies that changed the organization of science in the United States included some that intervened in markets and others that expanded their reach, and were promoted by some groups who were skeptical of free markets and others who embraced them. In both cases, however, new policies reflected (1) growing political concern with “the economy” and related abstractions (e.g., growth, productivity, balance of trade) and (2) a new understanding of S&T as inputs into a larger economic system that government could manipulate through policy. Understanding trends in US S&T policy as resulting from economization, not just neoliberalism, has implications for thinking about the present and likely future of science and S&T policy.

Experimenting with the Archive: STS-ers As Analysts and Co-constructors of Databases and Other Archival Forms

Waterton, C. (2010). Experimenting with the Archive: STS-ers As Analysts and Co-constructors of Databases and Other Archival Forms. Science, Technology & Human Values, 35(5), 645–676.

This article is about recent attempts by scholars, database practitioners, and curators to experiment in theoretically interesting ways with the conceptual design and the building of databases, archives, and other information systems. This article uses the term “archive” (following Derrida’s Archive Fever 1998/1995 and Bowker’s Memory Practices in the Sciences 2005) as an overarching category to include a diversity of technologies used to inventory objects and knowledge, to commit them to memory and for future use. The category of “archive” might include forms as diverse as the simple spreadsheet, the species inventory, the computerized database, and the museum. Using this protean concept, this study suggests that close convergences are currently occurring between social theory and archive construction. It identifies a “move” toward exposure of the guts of our archives and databases, toward exposing the contingencies, the framing, the reflexivity, and the politics embedded within them. Within this move, the study examines ways in which theories of performance and emergence have begun to influence the way that archives of different kinds are conceived and reflects on the role of Science and Technology Studies (STS) scholars in their construction.

Rethinking empirical social sciences

Ruppert, E. (2013). Rethinking empirical social sciences. Dialogues in Human Geography, 3(3), 268–273.

I consider some arguments of social science and humanities researchers about the challenge that big data presents for social science methods. What they suggest is that social scientists need to engage with big data rather than retreat into internal debates about its meaning and implications. Instead, understanding big data requires and provides an opportunity for the interdisciplinary development of methods that innovatively, critically and reflexively engage with new forms of data. Unlike data and methods that social scientists have typically worked with in the past, big data calls for skills and approaches that cut across disciplines. Drawing on work in science and technology studies and understandings of ‘the social life of methods’, I argue that this is in part due to the fragmentation and redistribution of expertise, knowledge and methods that new data sources engender, including their incipient relations to government and industry and entanglements with social worlds.

Trending: The Promises and the Challenges of Big Social Data

Manovich, L. (2012). Trending: The Promises and the Challenges of Big Social Data. In M. K. Gold (Ed.), Debates in the Digital Humanities. Minneapolis, USA: University of Minnesota Press.

Today the term “big data” is often used in popular media, business, computer science and the computer industry. For instance, in June 2008 Wired magazine opened its special section on “The Petabyte Age” by stating: “Our ability to capture, warehouse, and understand massive amounts of data is changing science, medicine, business, and technology. As our collection of facts and figures grows, so will the opportunity to find answers to fundamental questions” (“The Petabyte Age”). In February 2010, the Economist started its special report “Data, data everywhere” with the phrase “the industrial revolution of data” (coined by computer scientist Joe Hellerstein) and then went on to note that “The effect is being felt everywhere, from business to science, from government to the arts” (“Data, data everywhere”).

Six Provocations for Big Data

boyd, d. and Crawford, K. (2011). Six Provocations for Big Data. A Decade in Internet Time: Symposium on the Dynamics of the Internet and Society, September 2011. Available at Social Science Research Network.

The era of Big Data has begun. Computer scientists, physicists, economists, mathematicians, political scientists, bio-informaticists, sociologists, and many others are clamoring for access to the massive quantities of information produced by and about people, things, and their interactions. Diverse groups argue about the potential benefits and costs of analyzing information from Twitter, Google, Verizon, 23andMe, Facebook, Wikipedia, and every space where large groups of people leave digital traces and deposit data. Significant questions emerge. Will large-scale analysis of DNA help cure diseases? Or will it usher in a new wave of medical inequality? Will data analytics help make people’s access to information more efficient and effective? Or will it be used to track protesters in the streets of major cities? Will it transform how we study human communication and culture, or narrow the palette of research options and alter what ‘research’ means? Some or all of the above?

This essay offers six provocations that we hope can spark conversations about the issues of Big Data. Given the rise of Big Data as both a phenomenon and a methodological persuasion, we believe that it is time to start critically interrogating this phenomenon, its assumptions, and its biases.

Beyond Rational Games: An Analysis of the “Ecology of Values” in Internet Governance Debates

Powell, A. and Nash, V. (2013) Beyond Rational Games: An Analysis of the “Ecology of Values” in Internet Governance Debates. 

The same characteristics that make the Internet so unique as a tool also create concerns that dangerous and illegal content and interactions are more easily available, particularly to children. This article explores these issues by examining the debate between two long-established strands of digital advocacy: child protection and freedom of expression. It suggests the value of a new analytic framework and model of intervention, arguing that a negotiation of values characterizes a policy development ecology. This article describes an “ecology of values” based on the phronetic, rather than epistemic, aspects of the discursive relationships created between members of these two advocacy groups, where core values are negotiated and redefined as part of the policymaking process.

Big Data, Communities and Ethical Resilience: A Framework for Action

Big Data, Communities and Ethical Resilience: A Framework for Action. White paper by 2013 Bellagio/PopTech Fellows Kate Crawford, Gustavo Faleiros, Amy Luers, Patrick Meier, Claudia Perlich and Jer Thorp

“In August 2013, a multidisciplinary group gathered at the Rockefeller Foundation’s Bellagio Center to address the theme of “Community Resilience Through Big Data and Technology.” Creative and critical thinkers were selected from the technology sector, academia, the arts, humanitarian and ecological spheres. Over ten days, we explored how data could be used to help build community resilience in the face of a range of stresses — environmental, political, social and economic. Large data collection and analysis may support communities by providing them with timely feedback loops on their immediate environment. However, the collection and use of data can also create new vulnerabilities and risks, by enabling discrimination against individuals, skewing evidence, and creating dependencies on centralized infrastructure that may increase a system’s vulnerability. After analyzing these risks and opportunities, we developed a framework to help guide the effective use of data for building community-driven resilience. In this framework, we propose six domains: ethics, governance, science, technology, place and sociocultural context. We believe that by considering all six domains together, organizations can safeguard against predictable failures by exposing project weaknesses from the outset rather than in hindsight.”

Data matter(s): legitimacy, coding, and qualifications-of-life

Wilson, M. W. (2011). Data matter(s): legitimacy, coding, and qualifications-of-life. Environment and Planning D: Society and Space, 29(5), 857–872.

Data are central to geographical technologies and provide the pathways in which geographic investigations are forwarded. The mattering of data is therefore important to those engaging in participatory use of these technologies. This paper understands ‘mattering’ both in the material sense, that data are products resulting from specific practices, and in the affective sense, that data are imaginative, generative, and evocative. I examine these senses of mattering, of both presence and significance, in a discussion of a community survey project held in Seattle, USA. During this four-year project, residents in ten neighborhoods were asked to collect data about their community streets using handheld computers. Residents tracked ‘assets’ and ‘deficits’ by locating objects such as damaged sidewalks and graffiti on telephone booths. These data records were then uploaded to a central server administered by a local nonprofit organization. The nonprofit worked with community residents to help link these data about their changing neighborhoods to agencies in the municipal government. Here, I argue that the legitimacy of these data practices is constructed through processes of standardization and objectification and that these processes transduct urban space. I ask, as participatory mapping practices target governing agencies with their data products, what are the implications for the kinds of knowledge produced and for its legitimacy? In other words, how does data come to matter?

The performativity of code: Software and cultures of circulation

Mackenzie, A. (2005). The performativity of code: Software and cultures of circulation. Theory, Culture & Society, 22(1), 71–92.

This article analyses a specific piece of computer code, the Linux operating system kernel, as an example of how technical operationality figures in contemporary culture. The analysis works at two levels. First of all, it attempts to account for the increasing visibility and significance of code or software-related events. Second, it seeks to extend familiar concepts of performativity to include cultural processes in which the creation of meaning is not central, and in which processes of circulation play a primary role. The analysis concentrates on the practices and patterns of circulation of Linux through versions, distributions, clones and reconfigurations. It argues that technical ‘culture-objects’ such as Linux take on a social existence within contemporary technological cultures because of the authorizing contexts in which the reading, writing and execution of code occur. The ‘force’ or performance of certain technical objects, their operationality, can be understood more as the stabilized nexus of diverse social practices, rules and personae than as a formal property of the objects themselves.

“The whole is always smaller than its parts”–a digital test of Gabriel Tardes’ monads

Latour, B., Jensen, P., Venturini, T., Grauwin, S., & Boullier, D. (2012). “The whole is always smaller than its parts”–a digital test of Gabriel Tardes' monads. British Journal of Sociology, 63, 590–615.

In this paper we argue that the new availability of digital data sets allows one to revisit Gabriel Tarde’s (1843–1904) social theory that entirely dispensed with using notions such as individual or society. Our argument is that when it was impossible, cumbersome or simply slow to assemble and to navigate through the masses of information on particular items, it made sense to treat data about social connections by defining two levels: one for the element, the other for the aggregates. But once we have the experience of following individuals through their connections (which is often the case with profiles) it might be more rewarding to begin navigating datasets without making the distinction between the level of individual component and that of aggregated structure. It becomes possible to give some credibility to Tarde’s strange notion of ‘monads’. We claim that it is just this sort of navigational practice that is now made possible by digitally available databases and that such a practice could modify social theory if we could visualize this new type of exploration in a coherent way.

Popular Culture, Digital Archives and the New Social Life of Data

Beer, D., & Burrows, R. (2013). Popular Culture, Digital Archives and the New Social Life of Data. Theory, Culture & Society, 30(4), 47–71.

Digital data inundation has far-reaching implications for: disciplinary jurisdiction; the relationship between the academy, commerce and the state; and the very nature of the sociological imagination. Hitherto much of the discussion about these matters has tended to focus on ‘transactional’ data held within large and complex commercial and government databases. This emphasis has been quite understandable – such transactional data does indeed form a crucial part of the informational infrastructures that are now emerging. However, in recent years new sources of data have become available that possess a rather different character. This is data generated in the cultural sphere, not only as a result of routine transactions with various digital media but also as a result of what some would want to view as a shift towards popular cultural forms dominated by processes of what has been termed prosumption. Our analytic focus here is on contemporary prosumption practices, digital technologies, the public life of data and the playful vitality of many of the ‘glossy topics’ that constitute contemporary popular culture.