#dhiha5
Change & Permanence – a Walk around Science 2.0, by Lucas Frederik Garske
Lucas Frederik Garske is a research associate and member of the “digital humanities and educational media research” cluster at the Georg-Eckert-Institute for International Textbook Research. He blogs on paintitscience.com.
Credits to Tom for cleaning the mess up.
Source: Paul Harris (1996) HYPER-LEX: A Technographical Dictionary.
0. From Web 2.0 to Science 2.0
Talking about Science 2.0 means much more than adopting a fancy and successful concept: what we are faced with is a change in communication in which the web plays an important and decisive role. The internet has always been more telephone than radio, more exchange than information. The change on the web we commonly refer to as Web 2.0 is specifically concerned with interaction and collaboration. As science is embedded in communication processes, it is part of this change. Obviously it wouldn’t make much sense to talk about collaborative or interactive science, as science has always been based on tools and infrastructure that imply interaction and communication (paragraph I). While it seems easy to point to ongoing changes, it is worth reflecting not only on the flux but also on the permanence of current developments (paragraph II). I will argue that instead of claiming a paradigm change based on technology, we should rather demystify our terminology and focus on the continuity we are facing in 2.0 culture (paragraph III). Finally, I propose to describe the most relevant change in science as a shift from invention towards perpetual β-Science, as elaborated in paragraph IV.
I. Changing conditions: tools & infrastructure
Science is an activity that takes place. The underlying metaphor of spatiality was highlighted by Karl Schlögel for the case of history. 10 However, let’s start by reconsidering science in a broader sense, reflecting on the conditions under which science is done.
For some forms of science you don’t need much more than a piece of paper, a pen and some time. For other science you need a particle accelerator or a giant server farm. Different tools imply and allow for different types of science. To trace this conditionality in science, it seems useful to start with the development of tools rather than with abstract thoughts on scientific paradigms or attitudes.
Let’s think of tools as resources that help us do whatever we do when we say we do science. 11 A tool does not have to be a material thing; it can equally be an inspiring discussion, a useful theory or a set of good conditions at your workplace. On the other hand, material things play an important role as well: think of certain IT equipment, a specialist library right on one’s doorstep or simply the money to finance your research. The use of tools was and still is strongly related to spatiality, however, not in the sense of distance but rather in the sense of infrastructure and communication. 12 Infrastructure can be considered a workshop that provides us with the necessary tools to take our research to a different level. 13 Yet, since not everyone has access to every workshop, we must shed some light on the accessibility of tools and infrastructure. Most obviously, access to tools is restricted legally and economically, but it is also influenced by certain research standards and traditions. 14
A tradition of “stable” references has excluded, and is still excluding, a great part of the “dynamic” scientific activity generated on the web. Information and sources there seem much harder to control and archive, and they appear less credible in terms of content and authorship (especially Wikipedia, the nightmare of many teachers and scholars). For a long time, blog entries and online discussions were neglected in “serious” scientific discourse. This is mirrored by the fact that only recently was the citation of electronic sources, especially blogs and podcasts, addressed by well-respected style guides like the MLA Style Manual (2008) or the Chicago Manual of Style (2010). The work of major platforms like ScienceBlogs, Hypotheses or SciLogs has contributed to the reputation of “web science”. On the other hand, certain web phenomena and practices have helped highlight the fragility and instability of traditional research, with users harnessing software and collaborative platforms to detect plagiarism. We can also observe how new projects are working creatively on the lack of control and assessment of information in digital media; Hypothes.is is to me one of the most interesting and ambitious projects in this context.
However, most restrictions imposed by tradition are much less sophisticated and are simply based on a lack of practice. The fact that tools are technically able to make things easier, more efficient or more sustainable does not mean that they will effectively do so. Before adopting and learning how to use new tools, users need to discover and appreciate them first. Without an intrinsic need for change, the discovery of new potential stays with online pioneers (nerds?) willing to take a risk: to invest time. This includes time to get comfortable with live collaboration in documents (Google Docs, Zoho), new correspondence technology (instant messaging, videoconferencing, Wave/Rizzoma) or new techniques of data treatment (data mining, visualization, coding).
II. Has anything changed?
But how profound is the change from Science 1.0 to Science 2.0? I will use the example of hypertext to argue that in most cases the term Web 2.0 puts old wine in new bottles. While this might sound provocatively negative, it can also be interpreted positively, as a rediscovery of old treasures. The notion of innovation is often challenged by the theory that there are no profoundly new ideas, merely translations, remixes and revamps.
I remember one of my project members commenting that one shouldn’t be too optimistic about the acceptance of Web 2.0 features in science. He argued that in the late 90s there was a big fuss about hypertext and everyone predicted that in the future there would only be hypertext, which, in his opinion, turned out to be a rash statement. One can argue that the prediction was in fact wrong, but for a different reason: even though strongly connected to infrastructures like Wikipedia, hypertext can be traced back to the first uses of reference systems like indices, quotations or footnotes centuries ago. So in fact, long before hyperlinks and Wikipedia became part of our vocabulary, science was strongly based on hypertextual structures. 15 In particular, the fact that researchers usually refer to other researchers in the middle of their work contradicts the assertion that scientific texts are organized in a strictly linear and categorical way.
Of course, references and feedback on the web work much faster, and discussions have a structure that allows for flexible responses. What has changed are certain habits we have integrated into the way we communicate: iconic and deictic uses of language on the web have become more popular and common. Obviously, we are dealing with a new technology. However, rather than enabling us to do things differently, new technology makes us look at old practices from a new perspective. As Manuel Lima pointed out in his speech at the Royal Society of Arts (with amazing visual support from Andrew Park), we are currently undergoing a shift from categorical thinking towards thinking in networks. Beyond statements and causes, we are becoming more interested in connections and relations. Network-based scientific communities and scientific blogging may be seen as a practical implementation of how we do science differently, but they are merely a translation of practices we had before (e.g. publishing an article in a journal and discussing the issue at a conference). In the end, the medium is the message and new media do not substitute older ones: a letter is not an email and an email is not a text message; you feel it as you write it. But you speed up communication a lot when the channel is less important and you can focus on content.
The fact that new technologies most likely won’t create anything essentially new doesn’t mean they aren’t worth playing around with. Quite the contrary: by experimenting with new media we continuously update scientific language. This may raise questions that can only be answered with those new tools. That alone, of course, does not imply that the results are generally better or more precise, but they might be more suitable for presenting problems.
III. The Scientist as Songwriter vs. the Scientist as DJ
Let’s come back to the idea that scientists use and produce instruments to do their work, and that those tools are resources that help us do whatever we do when we say we do science. This means that everything and everyone can become a tool for science, and if we’re lucky, we or our work become a tool in someone else’s work. This makes science an ongoing process of working with and working on tools. We can’t help falling back on science done by others in the past; we are standing on the shoulders of giants. But there is a problem with the idiom: while it makes things much easier to think of the history of science as the history of great men and great inventions, we tend to forget the simple fact that things hardly come out of nowhere (at least, as a physicist might note, in most cases) and that the concept of authorship might be reasonable in a practical sense (reference and archiving), but is ultimately implausible if the work is done not only(!) by individuals but in fact by many individuals. Sticking with the image of giants, we would probably agree with the Leviathan depicted on the famous title page of Thomas Hobbes’ book: a giant imagined and made up of many.
The so-called “death of the author” (Barthes, Foucault and, with less theory, Twain) predates current phenomena like the flux of massive data induced by search engines like the Web of Knowledge and Google Scholar, or digitization, but today it becomes more relevant than ever. How are we treating these new circumstances? One metaphorical approach could be to change our understanding of the scientist from songwriter to DJ. While at first sight this might appear as a devaluation of the scientist, it is rather a revaluation of the DJ: the work of the DJ does not only consist in playing the records of others; s_he draws up narratives, samples old bits and pieces and finally turns them into something new (check this speech by DJ Spooky on remix culture). The DJ is creating and inventing, but the relations between his_her work and the work of others are more visible than in the case of the person who claims to be the creator of his_her own ideas. S_he is more node than author, more a conclusion of successful practices than a lone-standing monument. The metaphor enables us to rethink authorship without discrediting creation as the essential part of our scientific work. It also gives way to reconsidering mixing and merging as an eclecticism that doesn’t need to end up in a blended, unrecognizable and indecipherable melange. More than ever, we are able to locate ourselves in the network of knowledge that we are working in.
Authorship played and still plays an important role in our society and in science, culturally and legally, but we can argue that authorship is merely the consequence of cutting off the remixing process in order to finish the creative process. 16 Certainly, by using references we already acknowledge the hypertextual condition of our work, but still our own work is crucial to what we are doing. There is a chance that the way we locate ourselves in the work we do will change in the future. Referring to knowledge could transform from pointing to the work of certain authors into the identification and construction of networks of their ideas. If the humanities take the direction that has already been anticipated by web technology, we will see more collaborative projects and networks of researchers, but also a decrease in the author’s relevance in scientific work.
IV. Summing things up: (Perpetual) β-Science
The production of science can learn from the software release strategies commonly practiced in Web 2.0 environments. One lesson is the shift from a teleological release process towards a cyclical one. In their frequently quoted article “What Is Web 2.0”, Tim O’Reilly and John Battelle predicted the “End of the Software Release Cycle” (traditionally proceeding from pre-alpha through alpha, beta and release candidate to the final release). O’Reilly and Battelle argued that in Web 2.0 not products but operations must become a core competence, and users are treated as co-developers. Instead of proceeding from beta to final, products remain in a perpetual beta process, where they are continuously elaborated. A brief look at current Web 2.0 products shows that perpetual beta is already the most frequent case, even if release terminology still recurs to alpha, beta and final.
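The contrast between the two release models can be sketched as a toy state machine. This is only an illustration of the idea, not any real tool’s API; the stage names follow common convention, and the function and constant names are my own:

```python
# Toy model: the traditional linear release cycle vs. perpetual beta.
# The stage names follow common convention; this sketch is illustrative only.

TRADITIONAL_CYCLE = ["pre-alpha", "alpha", "beta", "release candidate", "final"]

def next_stage(stage, perpetual_beta=False):
    """Advance a product one step through the release cycle.

    In the perpetual-beta model the product never leaves "beta":
    each iteration is another revision of the same open-ended stage.
    """
    if perpetual_beta and stage == "beta":
        return "beta"  # continuously elaborated, never declared "final"
    i = TRADITIONAL_CYCLE.index(stage)
    # "final" is absorbing in the teleological model: the product is done.
    return TRADITIONAL_CYCLE[min(i + 1, len(TRADITIONAL_CYCLE) - 1)]

# Teleological model: four steps take the product to its end state.
stage = "pre-alpha"
for _ in range(4):
    stage = next_stage(stage)
print(stage)  # -> final

# Perpetual beta: however many revisions occur, the product stays in beta.
stage = "pre-alpha"
for _ in range(10):
    stage = next_stage(stage, perpetual_beta=True)
print(stage)  # -> beta
```

The point of the sketch is that perpetual beta does not remove stages so much as make one of them absorbing: revision replaces completion as the normal state of the product.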
Now, learning from changes on the web does not necessarily imply the end of the publication release cycle, but it definitely challenges the idea of the “final release”. Publication traditions still support the idea of the “stable version” as the main medium for performing science. In many disciplines, especially in historiography, the monograph is still a kind of gold standard. And yes, depending on what you work on, there are good arguments for producing grand narratives of science instead of short articles. But β-Science does not mean a retreat from greater to smaller narratives; it means a turn towards more hybrid ones. Indeed, Wikipedia is a good example of hybrid narratives that are equally gigantic. 17 It reads much differently than a book, but it is technically capable of doing the same. 18
We can understand β-Science as a fundamental acceptance of “science in progress”, of hybrid narratives and of the increased use of “bits and pieces” as valuable parts of collaborative productions. β-Science is a habit, and thus part of our socialization. A mundane and certainly not very philosophical illustration of the different forms of socialization is the existence of “Let me google that for you” (you don’t know what that is? Click here or, less infantile, here). Lmgtfy is symptomatic of a peer group where stupid questions actually do exist, namely when you can answer them on your own via search engines/Google. There is a gap between those who constantly use and experiment with digital tools to work on their questions and problems and those who primarily fall back upon non-digital sources of knowledge. As far as I can see, this gap will close as time goes by. Until then, it is worth discussing it critically.
Notes:
Karl Schlögel (2003) Im Raume lesen wir die Zeit. Über Zivilisationsgeschichte und Geopolitik. München/Wien, p. 70. ↩
This means we can call things “good tools” only in retrospect. ↩
We can see this relation between communication and spatial movement in the use of language: e.g. in Polish, the term for transport is synonymous with the term for communication (“komunikacja”). ↩
As more or less every tool is based on other tools (check out this talk by Matt Ridley), infrastructure is equally a set of tools and a tool itself, depending on how you look at it: e.g. a computer can be considered a tool within a greater research infrastructure, but can also be considered infrastructure for certain programs running on the computer. Technologies on the web do not make the distinction between the one and the other obsolete but bring them closer together. ↩
Of course, both dimensions, traditional and economical, are closely related to each other as we see e.g. in the development of third party funded science. ↩
As far as I can see, the term was introduced in the middle of the 20th century. I came across the poem Cent mille milliards de poèmes by Raymond Queneau, published in 1961, as a really interesting example of hypertext you should check out if you didn’t know about it. ↩
In fact, recent scandals like Helene Hegemann’s Axolotl Roadkill or Karl-Theodor zu Guttenberg’s thesis have been so scandalous in their corresponding peer groups because they concealed the remix work that had been done while pretending not to have remixed at all. ↩
There is a live Wikipedia article on the “Longest Wikipedia Article” on the English Wikipedia; currently (05/30/2013) the “List of United States counties and county-equivalents” (617,113 bytes) is the longest one. Most of the longest articles are actually tables, but in some cases they resemble the tables of contents you commonly find in books. ↩
At least like most books. I remember reading a book from the “Choose your own adventure” series in my childhood which was maybe one of the earliest hypertext structures I came into contact with. ↩