The “Edition”

Oxford Scholarly Editions Online (OSEO).
Electronic editions allow us to explore texts in new ways, and OSEO is only one example of how technology can re-animate scholarly work.

Kenneth Price, author of the article “Electronic Scholarly Editions”, states that with new developments in electronic editing we may have the ability to view all versions, or editions, of certain texts side by side (Price, 2012). This would allow scholars to compare and contrast the various editions of a particular text and, in essence, to engage more closely with the text itself.

As a student of Medieval to Renaissance literature, I was drawn to the many available editions of Sir Gowther, a moderately short, anonymous Middle English romance. Often cited as an adaptation of the late twelfth-century French poem Robert le Diable (Robert the Devil), it tells the story of Sir Gowther’s life from birth to death.

The tale of Sir Gowther survives in two manuscripts dating from the late fifteenth century: British Library Royal MS 17.B.43 and National Library of Scotland MS Advocates 19.3.1. In both manuscripts the romance is composed of twelve-line, tail-rhyme stanzas; however, the two versions differ slightly from one another (Laskaya and Salisbury, 1995).

The British Library Royal manuscript, which scholars suggest was perhaps intended for a more sophisticated and refined audience, omits the passage in which the hero, Sir Gowther, prior to his conversion from a barbaric individual into a saint-like figure, commits his most repugnant crime: together with his comrades he sexually assaults the nuns of a convent, loots it, and burns it to the ground. Laskaya and Salisbury note that the manuscript held in the National Library of Scotland tells the tale in “a more vigorous and decidedly more explicit manner” (Laskaya and Salisbury, 1995).

The two manuscripts are not, however, the only available versions of the text; as a Middle English romance, Sir Gowther has been the subject of numerous translations and interpretations. One such translation into modern English, by Professor George W. Tuma of San Francisco State University and the independent scholar Dinah Hazell, is available to read online.

XML, or Extensible Markup Language, the acid-free paper of the digital age, allows editors to indicate which parts of a text are important or of major interest by tagging, or labelling, those passages with markup. Editors can mark not only the structural features of a manuscript, such as line breaks and stanzas, but can also include extra information about the society and culture of the period and, if known, the author (Price, 2012).
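To make this concrete, here is a minimal sketch of the kind of markup Price describes. The element names follow common TEI conventions (a line group for the stanza, individual lines, an editorial note); the verse text is placeholder wording of my own, not a quotation from either manuscript.

```xml
<!-- A hypothetical stanza: <lg> marks the twelve-line stanza as a unit,
     each <l> marks a line, and <note> carries editorial context. -->
<lg type="stanza" n="1">
  <l n="1">First line of the stanza</l>
  <l n="2">Second line of the stanza</l>
  <!-- ... lines 3 to 11 ... -->
  <l n="12">Twelfth line of the stanza</l>
  <note resp="#editor">Contextual information about late
    fifteenth-century society and culture could be recorded here.</note>
</lg>
```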

By creating an electronic edition of each version of the tale of Sir Gowther, scholars would be able to mark up exactly where the differences between the two manuscripts lie, as sketched below. They would also have the ability to include information about the people and culture of the late fifteenth century and perhaps suggest why the manuscripts differ slightly in their retelling of the tale.
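One way such a difference might be encoded is with the TEI critical-apparatus elements <app> and <rdg>, keyed to witness identifiers. This is an invented illustration, not a transcription: the sigla and the wording are mine, and an empty reading is one conventional way of recording that a witness omits a passage.

```xml
<!-- Hypothetical apparatus entry: the Advocates manuscript carries a
     passage that the Royal manuscript omits. -->
<listWit>
  <witness xml:id="Roy">British Library Royal MS 17.B.43</witness>
  <witness xml:id="Adv">National Library of Scotland MS Advocates 19.3.1</witness>
</listWit>
<app>
  <rdg wit="#Adv">
    <l>A line present only in the Advocates copy</l>
  </rdg>
  <!-- The empty reading records the Royal manuscript's omission. -->
  <rdg wit="#Roy"/>
</app>
```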

Creating an electronic edition of each version of the text also makes possible hyperreading, defined by James Sosnoski as “reader-directed, screen-based, computer-assisted reading” (Sosnoski, 1999). While Sosnoski criticises hyperreading for distancing the reader from the text, I, like N. Katherine Hayles, believe that hyperreading allows us to understand a text in greater depth by giving us the ability to focus on the specific terms, keywords and passages that are relevant to an individual’s research. This is particularly useful when various editions of a text are available: by filtering for a specific word or phrase, one can locate the places where the editions differ without having to repeatedly re-read each text closely as a whole.

Bibliography

Laskaya, A., and E. Salisbury, eds. The Middle English Breton Lays. Medieval Institute Publications, 1995. University of Rochester Archive. Web. 19 December 2012.

Price, K. “Electronic Scholarly Editions”. A Companion to Digital Literary Studies. Blackwell, 2012. Web. 20 January 2013.

Sosnoski, J. J. “Hyperreaders and Their Reading Engines”. Passions, Pedagogies and 21st Century Technologies. 1999. 161-177. Web. 24 January 2013.

Digital Resources

How we research now!!!

In an open-ended survey and a subsequent virtual panel discussion, Gibbs and Owens, authors of “Building Better Digital Humanities Tools”, asked historians how digital resources have altered the ways in which they conduct their research. The historians who took part highlighted that ease of access, owing to the extensive range of information available in databases on the web, means their research can be conducted much more quickly than ten or twenty years ago, when repeated trips to the library had to be made. They also reported that search engines such as Google, and resources such as Google Books, allow them to find key terms and distinctive quotations relevant to their research that they would previously have spent days, if not weeks, searching for by hand (Gibbs and Owens, 2012).

Leary, author of the article “Googling the Victorians”, suggests that for years scholarly discoveries consisted of previously unnoticed manuscripts turning up in somebody’s attic or in the drawer of an old desk. The adventures undertaken by today’s scholars, however, are more likely to occur in front of a computer screen. There has been a profound shift in our relationship with texts, which is now “mediated by digital technology”. Like the historians surveyed by Gibbs and Owens, Victorianists are utilising digital resources such as Google and Google Books to find key terms or to locate particular characters within vast amounts of text (Leary, 2005).

Google Books, launched in 2004, has scanned and archived approximately twelve million books across many fields of research, including history, literature and philosophy. Scholars have stated that the digital humanities have benefited, and will continue to benefit, from this resource as humanities researchers and computer scientists join forces, allowing researchers to ask and answer questions that ten years ago would have been deemed inconceivable (Swift, 2010).

However, although there has been major investment in digital humanities tools in recent years in an effort to move beyond Google and Google Books, most of these tools have remained unused. Gibbs and Owens highlight that at the 2005 Summit on Digital Tools at the University of Virginia it was found that only around six percent of scholars use more complex digital tools when conducting their research, while the remaining ninety-four percent continue to rely on more readily available information (Gibbs and Owens, 2012).

Thus, it is apparent that digital humanities tools need to be created with ease of use in mind, and to support more traditional ways of researching information rather than trying to create entirely new ways of exploring data. Gibbs and Owens also state that information needs to be easily accessible and immediately visible, as scholars are unwilling to remain on a site if they have to delve into vast amounts of data and dig deep to find what they are looking for (Gibbs and Owens, 2012).

It is clear that there is an extensive amount of information available online; it is equally clear that digital humanities tools are not being used to their full potential. It is therefore important that scholars are made aware of the existence of these tools and are well informed about how to use them.

Digital resources are changing the digital humanities, allowing scholars to analyse, visualise and think about what they are researching in a variety of new ways. Scholars now have the ability to engage more closely with texts and can accomplish more in a few weeks than previous scholars could have dreamed of achieving in months or even years.

Bibliography

Gibbs, F., and T. Owens. “Building Better Digital Humanities Tools: Toward Broader Audiences and User-Centered Designs”. Digital Humanities Quarterly, 2012. Web. 2 April 2013.

Leary, P. “Googling the Victorians”. Journal of Victorian Culture, 2005. Web. 2 April 2013.

Swift, M. “Google Books May Advance Humanities Research, Scholars Say”. Phys.org, 2010. Web. 1 April 2013.

Do we own our own data?

My personal project is based on social media: the information we provide to social networking sites on a daily basis, and how this information is being used now or may be used in the future. Bill Cheswick’s image above gives a sense of the scale of the networks that social media creates. Just think of the immeasurable amount of information being shared!

In 1993 Oscar Gandy coined the term ‘panoptic sort’ to name the complex apparatus that collects data about groups and individuals and then discriminates between them on the basis of that data (Gandy, 1993). The data is generated from people’s everyday lives as employees, consumers and citizens, and is used to organise and control individuals’ and groups’ access to the goods and services that define our modern capitalist economy (Schermer, 2007). While Gandy confines his account of the ‘panoptic sort’ to the free-market economy, it is quite clear that it can equally be applied to what I will call “website surveillance”.

When you search for a particular item or service on Google, advertising for that item or service will subsequently appear on your screen. The adverts that follow us around the web are linked to the things we search for and the information we submit on a daily basis. Data is continuously being collected, and it is not limited to the searches themselves: websites gather as much personal information as they can about every individual user. According to Lawrence Lessig, over ninety percent of commercial websites collect personal information about their users, which they then categorise and use in various ways (Lessig, 2009).

Jaron Lanier, the American computer scientist and author of You Are Not a Gadget, states that on social networking sites such as Facebook “life is turned into database” (Lanier, 2011). In a BBC interview conducted in 2011, Lanier argued that there are in fact two kinds of data collected by these sites. The data we know about includes all the visible information on our profiles; beyond that, however, an immeasurable amount of data is collected about us that we do not even realise we are providing.

Max Schrems, an Austrian law student, argues that Facebook believes that once information is written and posted to its site, it is Facebook’s to do with as it pleases (Schrems, 2012). After requesting that Facebook send him all the information about him stored in the company’s database, he received a twelve-hundred-page document: essentially everything the company had acquired about him since he first joined the site in 2008. The information, Schrems concluded, was in violation of European privacy laws. The company, however, has yet to change the way it collects its users’ data, and the vast majority of people remain unperturbed, even though many of us acknowledge that search engines such as Google and social networking sites like Facebook can track and spy on their users.

According to IBM (the International Business Machines Corporation), the majority of the information available in the world today has been created in the past couple of years, following the explosive popularity of smartphones and of sites such as Facebook, Twitter and LinkedIn. Websites, as I have explained, gather information about their users; add to this the growth of e-commerce, GPS signal data, and digital images and videos, and it is clear that there is far more data about us than we had previously imagined.

In recent years, however, the notion of “big data” has sparked worry about infringements of privacy. “Big data” is a term used in information technology circles for the vast amount of information stored on the web, possibly indefinitely. The worry lies in the fact that this data is now being accessed by researchers and is becoming an invaluable asset to companies. Put simply, companies can grow their businesses with access to this information: they can learn to understand their customers’ behaviour, contact customers in real time via Twitter to ensure satisfaction, and record customer preferences for future reference. One may question where the line between what is public and what is private should be drawn, and whether the use of “big data” will benefit us or could instead become the next Big Brother.

The video below, by IBM, shows how “big data” could be used to predict demand for a video game.

Jaron Lanier argues that the data deemed valuable is the data the user has no access to and quite often does not know about. It is this data, he states, that companies such as Facebook, Twitter and Google use to sell access to you to third parties, the so-called advertisers (Lanier, 2011). The main problem with this is that users do not in fact own any of this data; their private information essentially becomes public property.

I would argue, however, that the problem with “big data” lies not in the gathering of the information itself but in its potential for abuse and misuse by a person with bad intentions. The notion that what we are exposed to online could be tailored to each individual’s preferences and interests is one I find both strange and unnecessary from the point of view of a consumer, though I can appreciate the value it holds for businesses and other companies. If individuals were given control over their own data, while important, relevant data was still allowed to flow freely, it could be a great innovation. But who would apply these restrictions, and how would they be policed?

All things considered, it is clear that we should always be mindful of our internet footprint and be wary of the way we portray ourselves online. Always remember that what goes online, stays online… FOREVER!

Bibliography

Books

Gandy, Oscar. The Panoptic Sort: A Political Economy of Personal Information. Westview Press, 1993. Print.

Lanier, Jaron. You Are Not a Gadget. Vintage, reprint edition, 2011. Print.

Lessig, Lawrence. Code: Version 2.0. Basic Books, 2009. Print.

Schermer, Bart W. Software Agents, Surveillance, and the Right to Privacy: A Legislative Framework for Agent-Enabled Surveillance. Amsterdam University Press, 2007. Print.

Online Sources: Videos

Lanier, Jaron. “What ‘Bugs’ Me about Facebook.” BBC, 6 December 2011. Web. 19 March 2013.

Schrems, Max. “Max Schrems (AT) – Europe versus Facebook.” 10 March 2012. Web. 24 February 2013.

Websites

http://www.ibm.com

http://www.youtube.com


Electronic Scholarly Editions – Kenneth Price


“Mere digitizing produces information; in contrast, scholarly editing produces knowledge.”

This quotation sums up Price’s entire article. He argues that the mere digitization of a text allows us to see it much as it would be presented to us in print; electronic scholarly editions, however, allow us to engage in closer reading by giving the reader access to much more than just the final product.

Price, in the article “Electronic Scholarly Editions”, states that people are making electronic scholarly editions because of their capaciousness: there is no longer a limit to what a scholar can “fit on a page or afford to produce within the economics of print publishing”.

With sufficient resources, a group of scholars can create an edition of immense quality, with multiple layers of information; an edition that reveals much more than just the final product. The ability to include the material scholars worked from in creating the edition is something I find extremely interesting. The reader is introduced not only to the finished work but to what went into its creation, as scholars can include audio and video clips, high-quality colour reproductions of artworks, and interactive maps. The electronic scholarly edition thus seems to be a gateway into a new world of textual experience.

One can completely immerse oneself in the text, and perhaps even in the thought-process of the author or poet, if multiple editions are available to view side by side. It is not surprising, then, that electronic editions have deepened interest in the nature of textuality itself.

Price states that a great deal of twentieth-century editing, like that of many centuries before, was based on establishing an authoritative text from the author’s “final intentions”. However, there tend to be numerous versions of particular texts, and it is quite astonishing that with new developments in electronic editing we may have the ability to view all versions of certain texts, specifically those deemed valuable. Price also emphasises the ability of authors to take complete ownership of their work: they can edit a piece, record their thought-process, and highlight the changes made and why they were made.

However, one must also question whether the purity of an edition is spoiled if there are no restrictions or limitations on the number of changes that can be made to it. If something can be altered indefinitely, does it eventually lose all of its original quality?

Price highlights that the electronic scholarly edition is an enterprise that relies heavily on the collaboration of many people. Significant digital projects cannot be undertaken alone, and editors, he states, now have to deal with more issues than they previously did with print. Collaboration with technical experts is necessary in this field, as knowledge of technical issues is required to make such projects successful. Electronic editing can thus be a daunting task, but it is also a field of never-ending possibility.

Price also suggests that because software and hardware are constantly changing and advancing, it is important that electronic scholarly editions adhere to international standards. The Text Encoding Initiative (TEI), he explains, will eventually be XML-only. XML, or Extensible Markup Language, is often referred to as the “acid-free paper of the digital age”: because it is platform-independent and not protected by trademark, patent or copyright, it has the ability to meet the demands of long-term electronic preservation.

XML allows editors to indicate which parts of the text are important or of major interest by tagging the relevant passages with markup, and it allows for flexibility. In the case of the Walt Whitman Archive, structural features of the manuscripts are tagged: line breaks, stanzas, and even the places where Whitman himself revised the text.
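As an illustration of what tagging a revision might look like, here is a sketch using the standard TEI elements <del> and <add>. The line and the revision shown are invented for the example; this is not the Whitman Archive’s actual encoding of any manuscript.

```xml
<!-- Hypothetical revised line: <del> marks what the poet struck out,
     <add> marks what he wrote in above the line. -->
<l>I <del rend="strikethrough">sing</del>
   <add place="above">celebrate</add> myself</l>
```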

Thus, the electronic scholarly edition allows one to engage with the text in a way that is unavailable in print. It is no longer only what is on the surface that is deemed important but also what lies beneath the surface.

Price argues that traditional boundaries are blurring before our eyes, as publishers, librarians and scholars increasingly take on overlapping functions. While some may see this as a negative, Price highlights that it allows extensive room for creativity: people from different areas of work can provide their own interpretations of texts, sometimes varying significantly from one another.

The electronic scholarly edition is contributing to the democratisation of learning; anyone in the world with a web browser now has access to an extensive range of material that was once hidden away. We now have an extremely large library available at our fingertips, something I believe we should take advantage of and appreciate immensely.

Katherine Hayles – How We Read: Close, Hyper, Machine

In her article “How We Read: Close, Hyper, Machine”, N. Katherine Hayles highlights that in the last twenty years the reading of print has declined significantly. In the video clip below, the perceived crisis is reflected in the responses of students asked whether they still read books:

The students in the video reflect the fast-paced, digital world we are living in, where people want to gather as much information as possible in as little time as possible; because of this there is a perceived fear that the close reading of texts has been replaced by hyperreading. Hyperreading was defined by James Sosnoski in 1999 as “reader-directed, screen-based, computer-assisted reading” in which, Sosnoski believes, readers move away from and become distanced from the text, filtering key words and skimming through the majority of the content.

In the digital environment, Hayles highlights, hyperreading has become essential: filtering systems and Google searches are now just as much a part of a scholar’s toolkit as hyperreading itself. Nicholas Carr, author of The Shallows: What the Internet Is Doing to Our Brains, fears, however, that hyperreading leads to changes in brain function and consequently makes sustained concentration more challenging.

Reading from a web page differs greatly from reading print, and it results in a rewiring of the brain because concentration is drawn away from the linear flow of the text. Arrays of hyperlinks pull attention away from the article or passage being read; tweets and other short pieces of text encourage distracted forms of reading; and habits such as persistent clicking and navigating from one thing to another develop, all of which increase cognitive load.

In contrast, “with linear reading cognitive load is at a minimum and the transfer to long-term memory happens more efficiently”. The vast amount of material available on the web also encourages skim reading: there is simply too much to get through, and so close attention to any one text for a considerable length of time becomes impossible.

Hayles points out that as environments become more information-intensive, it is no wonder that hyperattention, and with it hyperreading, is increasing while deep attention and close reading are in decline. However, she suggests that in a contemporary environment no single form of reading should be privileged over the others: close reading, hyperreading and machine reading all relate to one another, and it is important that scholars use them together as a toolkit, a repertoire of reading strategies.