While describing Citavi to fellow digital humanists, I’m often met with skepticism regarding some of its more significant limitations: Where’s the cloud storage? Where’s the Mac version? Where’s the app? Legitimate questions – especially since Zotero and Mendeley both tout the cross-platform compatibility, web integration and mobile support that Citavi lacks. How, then, can I extol the virtues of live linkage for a program that does not even link us automatically through the web?

As we begin to negotiate the transition from the personal knowledgebase to the web we must proceed with caution. Too much connectivity too early can limit the efficacy and popularity of a mnemotechnology by alienating some of the scholars integral to its responsible design. To a great extent, digital text has still failed to replace or revolutionize the time-honored technology of the page.  There are similar resistances at play in the move from print to digital texts as there are in the move from personal to collective knowledgebases. I have attempted to keep these at the forefront of this narrative while arguing that the personal knowledgebase is, at once, the technology most faithful to the book and most open to the possibility of a collective mind.

The previous section began with the question: how can we remember anything if we can only keep it in one place? There, it was a question of how metadata enhances our memory by allowing for various instances of the same knowledge object to be stored in multiple places. Here, it is a question of where all of this knowledge can be stored and how it can be shared without sacrificing its freedom of form.

What if, after uploading our knowledgebases to the cloud, we found that we were suddenly unable to view our citations in their entirety because of a mandatory update unilaterally imposed? If the discontinuation of features no longer deemed necessary from a corporate standpoint erodes our creative control over the collective knowledgebase, then thematization will have prevailed once more in an even more insidious form. The microthematic precision of the personal knowledgebase must, therefore, be retained until the web and the laws governing it are supple enough to host the output of each individual mind.

The reality is that knowledge is only as good as its infrastructure and that the kind of infrastructure best-suited for scholarship is not necessarily the best for the Web (qua economic entity). Enhancing our collective scholarly memory means dismantling the macrothematic machinery of our institution with the help of the microthematic tagging that is now possible within the personal knowledgebase. Insofar as it promotes greater precision and understanding, this is a profoundly unprofitable exercise to which advertising is diametrically opposed. The latter succeeds by generalizing discrete user metadata into common trends and funneling this interest toward commodities. Behind the scenes, advertising pioneers mnemotechnologies that might greatly enhance our collective knowledgebase but, at the end of the day, it remains a science of forgetting. The commodity is as valuable as its technology is obscure. In this regard, it is an object of anti-knowledge from which we stand to learn a great deal if we can only reverse engineer its technologies of seduction into technologies of discernment.

While I acknowledge the value of cloud and mobile integration for digital scholarship in years to come, I must maintain that this is something worth doing right – with the proper infrastructure – if we are to do it at all. As I have stressed from the beginning, the point is to keep as much programming power as possible in the hands of the knowledge workers who are actually using it. Even if we are not writing the code ourselves, we can at least articulate and evaluate features that promote deeper metadata and a more robust infrastructure for our collective knowledge. Before we begin automating the conversion of our metadata, we must understand which categories, keywords and links map onto which so that we can participate in this conversion. It is not that remapping idiosyncratic metadata for more general web applications won’t exert any influence over our personal knowledgebases; rather, the separation of personal and web knowledgebases allows us to resist pressures of standardization and shelters us from the questions of intellectual property that would otherwise quash our potential to experiment with the unrestricted power of deep tagging. The practice of translating our own metadata into the languages of the web would also teach us a great deal about the limitations of our thinking and of the technologies through which it is reified.

Of course the idiosyncrasies must, necessarily, be standardized and optimized for dissemination across the web, but we cannot allow this informational bottleneck to fundamentally determine our own infrastructures. Each individual knowledgebase must remain a laboratory in which to experiment with levels of linkage and tagging not yet sustainable across the web. Despite, or rather because of, its lack of cloud and mobile integration, Citavi is better suited for multi-user experimentation on copyrighted material because it has the potential to aggregate users’ knowledgebases across semi-private networks in a peer-to-peer fashion and, thus, circumvent the policing of information on cloud servers – a strategy that continues to confound media conglomerates far more lucrative and powerful than the academic presses that will inevitably try to restrict this flow of information.

Only after experimenting on smaller, semi-private networks will we attain the sheer volume of data necessary to program and troubleshoot a more global information infrastructure capable of resisting the macrothematic pressures of the knowledge industry. While such academic social networks already exist (e.g. academia.edu) they do not interface with each other or with personal knowledgebases. The recent history of social media makes it clear that we cannot trust any major site to maintain privacy and accessibility parameters that are in the best interest of all users. Networks of any size are too valuable to resist the influence of the market and have tended to evolve in a way that best suits the monetization of user metadata (qua advertising) rather than responsible design from an intellectual, infrastructural standpoint. The real strategic value of the personal knowledgebase is that it ensures our total control over and freedom of experimentation with deep metadata and extends this to every digital text we have in our archive.  If we first learn to sync metadata between personal knowledgebases that are immune to the policy changes of social media sites and the jurisdiction of copyright law, then we can be sure our knowledge will remain sovereign and that we have the leverage and savoir-faire to negotiate structural changes in years to come.

Responsible informational design requires that we all ‘own our masters’ so to speak – especially in the early stages of the network when we are most vulnerable to decisions that might inherently divide us based on privacy and access rights dictated by the global information marketplace. Eventually, we might settle upon some generalized conversion protocol that retains the nuances of microthematic tagging. Here, Google may be one of the most promising patrons given its willingness to take on massive projects of dubious legality and its skill in mitigating the inevitable repercussions after the fact (e.g. Google Books, Google Earth). But this is not to say that each of us should simply render our knowledgebases unto Google, only that Google is perhaps the tech giant most capable of fighting and arbitrating the battles over intellectual property that will inevitably ensue when massive amounts of curated content from knowledgebases across the globe begin to aggregate on the web. Even to get a majority of published work hosted on the web in a way that retains a live linkage with networked knowledgebases would be a remarkable achievement – one that will require all the help we can get.

Suffice it to say that the World Wide Web, which might appear to be the most obvious form of live linkage, is, in many ways, the most convoluted and problematic. In order to keep the links between our minds alive, we must not relinquish them to the web without first fortifying them through the trials of responsible design. As I continue to speculate about the possibilities of the Web and the Archive writ large, I will attempt to show how they depend, fundamentally, on some rather practical changes to the technologies of drafting, pedagogy and review.

Drafting

While it’s possible to draft an essay entirely within Citavi, I must confess that I still prefer OneNote for its cloud connectivity, haptic interface and superior aesthetics. Despite its lack of deep metadata and layout control, OneNote smoothly syncs and compiles many pieces of text rather than saving them separately as documents. It currently serves as an archive for all of my professional and personal writing and contains notes and drafts (both typed and scanned) dating back to my undergraduate years. It is also the program in which I am currently writing this very document. It does not yet offer a Citavi plug-in like Word, but a simple code makes it easy to restore the full reference for each citation. Even with the plug-in, the information used in a Word document does not automatically sync with Citavi; the link is broken once we import our knowledge objects into Word, orphaning any changes made afterwards. More significantly, the side-pane display of the add-on is quite cumbersome and prevents a fluid transition between database, draft and layout.
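As an illustration of how such a restoration step might work, here is a minimal sketch in Python. The placeholder syntax, the bibliography dictionary and the restore_references function are all hypothetical stand-ins rather than Citavi’s actual format.

```python
import re

# A minimal sketch, assuming a hypothetical placeholder format such as "{Smith 2010 #42}"
# and a bibliography dict standing in for an export from the reference manager.
bibliography = {
    "42": "Smith, J. (2010). Memory and the Archive. London: Example Press.",
}

PLACEHOLDER = re.compile(r"\{[^{}#]*#(\d+)\}")

def restore_references(text: str) -> str:
    """Replace each citation placeholder with the full reference it points to."""
    def expand(match: re.Match) -> str:
        ref_id = match.group(1)
        return bibliography.get(ref_id, match.group(0))  # leave unknown codes untouched
    return PLACEHOLDER.sub(expand, text)

draft = "As one scholar argues {Smith 2010 #42}, the page persists."
print(restore_references(draft))
```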

What if there were a single, continuous link between each stage of academic work so that the kind of thinking proper to each stage were never lost in the shift between documents and programs? What if OneNote could function as a bridge between the knowledgebase and a more publication-oriented software? What if the changes I made while finalizing this document in Word synced automatically with this current draft in OneNote and also with the relevant citations and annotations in Citavi?

As nice as it would be to have these corporate powers working collectively to enhance the structures of our knowledge, the prospect is, admittedly, far-fetched. The market discourages the open exchange of “trade secrets” necessary for the kind of software suite I’m describing. The tools developed by for-profit companies will likely surpass those of open-source developers in power and ease of use. Citavi, a private company, was first to achieve full PDF integration and deep metadata, not Zotero, its open-source competitor. More important than the individual programs or companies actually uniting, however, are compatibility and standardization. The software suite does not even need to combine Citavi, Word, OneNote or any of the other programs as long as it brings together the most necessary functions of each. Microsoft, Apple, Google and Linux could each have their own academic software suites, as long as there were some reasonably effective means of exchanging metadata without sacrificing its depth.

By fusing together the media of scholarly production I believe that we can greatly enhance our individual memories and see that the fruits of our intellectual labor are registered more accurately and efficiently in the ever-expanding archive of our culture. How much more might we remember if we could transcribe our thinking in the heat of the readerly moment? How might we join the acts of envisioning and revising our work into a more continuous and collaborative process in such a way that the earliest organizational stages (which are too often carried out in isolation) are immanently linked with the collective process of revision? How do we design a workflow in which every vision is already a form of revision? Wouldn’t collaboration be more likely and productive if

  • the links between the origins and end products of our intellectual labor were rendered fluid and transparent?
  • we could communicate with one another across the margins of these texts?
  • we could effectively represent the discrete nodes of our common interest immanently within them?

Under the macrothematic pressures of our current infrastructure, these nodes are often truncated and strewn, elliptically, throughout independent, published documents. With the advent of the personal knowledgebase, however, we no longer need to rely on the macrothematic generalities of publication in order to discover common interest. Communication prior to publication on the level of knowledge organization has never been more feasible. Using citations as primary organizational units, we can now pursue closer, more collective analyses of increasingly singular texts. And if we can communicate microthematically without having to subordinate citations to themes, then why bother doing so at all? While this might read like a rhetorical question for many of us, the best part is that we do not even need to make such a unilateral decision, because the metadata is flexible enough to incorporate traditional and progressive infrastructures alike.

A truly (re)visionary workflow would not only collapse the boundaries between documents, it would fundamentally reconfigure the history of each document. The temporal distinctions between drafts would ultimately give way to an infinite process of drafting in which a living record of every letter typed, erased or otherwise modified could be resurrected. Imagine watching the fits and starts of the draft as it evolves in real time, pausing to reread and reformulate itself within and between other texts. Is such insight really necessary or would it be better to have more intuitive and powerful tools for comparing distinct drafts? Here, too, we needn’t decide just yet, especially since the former technology would likely be an extension of the latter. Drafts may well be the preferred nodal points of the revision process, but this doesn’t mean that deeper ways of visualizing the writing process are not worth exploring in their own right (see idea animation below).

In a post-Snowden age, many will recoil at the idea of preserving our writing process with such fidelity. We shudder collectively at the thought of what might happen if all those text messages we decided not to send somehow reached their would-be recipients. These fears are justifiable but, perhaps, a little misplaced. I remember speaking briefly with N. Katherine Hayles about this after her Wellek lecture on writing and extinction. While discussing the political consequences of digitization, she made a remark about the relative freedom and privacy of pre-digital writing compared with the oppressive visibility of writing in a digital age. I was curious to see whether she really thought that digital text was more oppressive or whether this was a claim being explored in the context of the novel she was reading, but she politely turned the question back upon me – mentioning NSA surveillance as a prime example of the oppression of digital text. If such an eminent historian and theorist of technology fears for the freedom of digital text, then clearly it’s more than popular paranoia.

While I would never deny that the possibilities of surveillance are greater now than they were before digital text, I question whether digital text is unequivocally oppressive. Certainly, we have plenty of reason to suspect that nothing we enter into a networked device falls outside the purview of the international surveillance state. We do not even know for sure whether we have to hit ‘send’ for our writing to appear before unknown eyes. But is this a good enough reason not to explore the depths of visibility (i.e. surveillance) in an academic context? If we were all to revert to pre-digital technologies, would our minds really be liberated or would they be even more oppressed by the knowledge infrastructures that these technologies reify?

Freedom and oppression look quite different when viewed from the mnemotechnical perspective we have been pursuing thus far. Pre-digital text may be relatively free from surveillance, but it is also quite powerless to contest the reality of surveillance. The kind of collective knowledgebase that digital text makes possible, while it might resemble the NSA in the depth of its metadata and power of its search (i.e. surveillance) algorithms, actually has some chance of generating meaningful resistance to the negative influence of surveillance within and beyond the text. A collective knowledgebase is, perhaps, our only real hope of understanding the extent to which our privacy is compromised, coordinating the efforts of those capable of bringing about real political change and enabling the vast majority of us who are baffled by the complexity of these issues to follow these efforts closely enough to overcome our own politico-educational impotence. Ironically, our collective ignorance of what surveillance really means and how it harms us might best be overcome by repurposing these very surveillance technologies for more educational ends – using the knowledgebase we know to decrypt the knowledgebase we don’t. Rather than browsing the web with a vague technophobic paranoia, we could potentially learn what some of the thousands of leaked classified documents actually mean for our freedom and collaborate with those trying to do something about it.

Even with the heightened power of political resistance afforded by such enhanced knowledge infrastructure, restoring the kind of privacy we had in the pre-digital era still seems improbable. It may be that privacy is already, irrevocably and profoundly, lost. My point is that its restoration is still far more probable with digital technology than without it. And even if we can never be absolutely certain that hackers and governmental organizations are not watching everything we do, we still retain substantial control over the extent to which the general public might access our thoughts.

As personal knowledgebases congeal into larger and larger networks, the ethics of visibility will be of utmost concern. The possibility of encountering other minds within a collective text implies access to their private texts and the metadata that joins them. Varying levels of access could be defined by each user in such a way that more conservative scholars could retain privacy up to the point of publication and still take advantage of the collaborative potential afforded by whatever level of metadata and linkage they see fit. Apprehensive users could test the waters first, experimenting with more traditional macrothematic tagging or sharing their draft materials with a close circle of colleagues. As long as we can all get used to working within the same or similar knowledgebases, our individual privacy settings need not limit the overall transparency or effectiveness of the system. What’s essential is that we at least have the option to share the unfolding of our ideas even if not everyone chooses to share in kind. As long as we do not alienate scholars by forcing them to surrender their privacy, the kind of work made possible by those who do should be advertisement enough.
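To make this concrete, the following is a minimal sketch of what such user-defined access levels might look like; the tier names, the can_view check and the sample users are illustrative assumptions, not a description of any existing platform.

```python
from dataclasses import dataclass
from enum import IntEnum

# A minimal sketch of user-defined visibility levels; names and tiers are hypothetical.
class Visibility(IntEnum):
    PRIVATE = 0   # visible only to the owner until publication
    CIRCLE = 1    # shared with a chosen circle of colleagues
    NETWORK = 2   # open to the wider scholarly network

@dataclass
class KnowledgeObject:
    owner: str
    content: str
    visibility: Visibility = Visibility.PRIVATE

def can_view(obj: KnowledgeObject, viewer: str, circle: set) -> bool:
    """Each owner sets their own threshold; no one is forced to surrender privacy."""
    if viewer == obj.owner or obj.visibility >= Visibility.NETWORK:
        return True
    return obj.visibility == Visibility.CIRCLE and viewer in circle

note = KnowledgeObject("scholar_a", "draft annotation", Visibility.CIRCLE)
print(can_view(note, "scholar_b", circle={"scholar_b"}))   # True
print(can_view(note, "scholar_c", circle={"scholar_b"}))   # False
```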

The tensions within and between ideas are often more visible during the drafting process before their rough-hewn edges are smoothed into a publishable argument. If our work were less heavily invested in polishing away these resistances and we had a chance to encounter other minds on a conceptual plane that was not inherently defined by the pressures of publication, would it not be easier to carry out more critical and microthematic discussions of the texts at hand? And, if not, wouldn’t the traces of antipathy between thinkers citing the same texts provide invaluable data for those seeking to design a mnemotechnology that promotes a less thematic form of collaboration? While such encounters would not, in themselves, constitute some “giant leap” for our collective mind, they would, at the very least, provide one of the most powerful diagnostic tools for the mnemotechnical crisis – allowing academics and programmers alike to see, materially, where and how potential connections are being missed. Even if a majority of these connections were hostile, petty and dismissive – bent on preserving the tendencies of our current macrothematic mode of academic production – scholars more open to the (re)visionary workflow would, nevertheless, be able to explore collaborative possibilities that were previously foreclosed by a more univocal regime of publication and copyright.

What new forms of collaboration might be possible if the initial organizational vision together with the embedded history of its continual revision were folded into the “final” publication? Everyone’s text could be woven into what the new media theorist George Landow calls “borderless text” – a web of knowledge that, at last, fulfills the true educational potential of the internet. If every cited text were joined, microthematically, to the text citing it (see the sketch following this list):

  • Any instance of an author’s name could recall all of their available works and all of the works in which these works (or their name) are referenced
  • Any instance of a work could recall all of the works in which this work is referenced (down to the specific phrase being quoted wherever direct quotation is made)
  • Any instance of a word could be linked with its definition in any available dictionary
  • Any key words or phrases could be cross correlated with any other instances of such words and phrases
  • Any translation, version, or draft of a work could be cross-correlated and compared interlinearly or side-by-side
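The first two items amount to a bidirectional index between authors, works and the passages that cite them. A minimal sketch of such an index, with purely hypothetical data structures and sample entries, might look like this:

```python
from collections import defaultdict

# A minimal sketch of the bidirectional citation index imagined above; the data
# structures and sample entries are purely illustrative.
works_by_author = defaultdict(set)       # author -> the works they wrote
citations_of_work = defaultdict(list)    # cited work -> (citing work, quoted phrase)

def register_citation(citing_work, cited_work, cited_author, phrase=None):
    """Record a microthematic link from a citing text back to the work it cites."""
    works_by_author[cited_author].add(cited_work)
    citations_of_work[cited_work].append((citing_work, phrase))

register_citation("Hypothetical Study (2015)", "Don Quixote", "Cervantes",
                  phrase="an illustrative quoted phrase")

# Any instance of an author's name recalls their works and everything citing them,
# down to the specific phrase quoted:
for work in works_by_author["Cervantes"]:
    for citing, phrase in citations_of_work[work]:
        print(f"{work} is cited in {citing}: '{phrase}'")
```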

With the greatest possible interfacing of digital libraries across scholarly communities, we might imagine a readerly workflow in which scholars can literally work within and between the texts of Cervantes’ Don Quixote – all its versions, translations, criticisms, translations of criticisms, criticisms of translations of criticisms, etc. Obviously such a total library would easily become unmanageable without the proper search and filtering criteria, but we should not automatically assume that it would necessarily become a library of Babel. With a powerful enough algorithm, we would be able to filter the critical reception spatiotemporally (by the historical and geographic origins and proliferations of citations), methodologically (by academic/institutional affiliation) or, perhaps, by an adaptive blend of parameters that weighs our recent movements through the library against the entire history of our itinerary.
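As a rough illustration of that adaptive blend, the following sketch scores each citation in a hypothetical critical reception against spatiotemporal and methodological filters and against the reader’s recent itinerary; the fields, weights and sample data are all invented for the example.

```python
from dataclasses import dataclass

# A sketch of the adaptive blend described above; all fields, weights and data are hypothetical.
@dataclass
class Citation:
    work: str
    year: int
    region: str
    school: str   # academic/institutional affiliation

def score(c: Citation, region: str, school: str, year_range: tuple,
          recent_schools: list) -> float:
    s = 0.0
    if c.region == region and year_range[0] <= c.year <= year_range[1]:
        s += 1.0                                # spatiotemporal filter
    if c.school == school:
        s += 1.0                                # methodological filter
    s += 0.25 * recent_schools.count(c.school)  # weight recent movements through the library
    return s

reception = [
    Citation("Hypothetical Commentary A", 1968, "Spain", "philology"),
    Citation("Hypothetical Commentary B", 2012, "US", "new historicism"),
]
ranked = sorted(reception, reverse=True,
                key=lambda c: score(c, "Spain", "philology", (1950, 2000), ["philology"]))
print([c.work for c in ranked])
```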

Such a knowledgebase will not begin on a global scale. It will begin with many micro-insurgencies brought about by personal knowledgebases – thousands of individuals and groups digitizing texts and weaving them autonomously. This weaving cannot properly begin until there is so much “pirated” intellectual property in circulation that it becomes impossible for publishing houses to effectively litigate against insurgents. After all, once a document is digitized it is nearly impossible to control its dissemination – to track those who possess it and bill or fine them accordingly. Once such a critical mass of free text has been reached our institution will need to find some other means of sustaining itself or collapse entirely. If we’re willing to surrender our univocal model of intellectual property, we might begin to profit from something that is virtually unpiratable because of its scale and the rapidity of its growth. We might make a new living off of live subscriptions to global networks of digital scholarship and the vast tributaries of metadata they contain. We may yet construct a world in which knowledge can be liberated without requiring scholars to work for free.

If this is starting to sound too Quixotic or Borgesian, I must insist that even a more modest infrastructure would enable us at least to:

  • See what our peers are saying about the knowledge objects and citations most relevant to us
  • Carry out more productive and nuanced discussions without having to read essay- or book-length arguments in their entirety
  • Trace the emergence of each other’s ideas directly from the citations on which they are grounded and follow them through the various stages of organization to the completed draft
  • Generate and refine citation-specific metadata collectively

Even with this, book-length arguments could be written into the interstices of the texts referenced and cited, thus, allowing us to inscribe our ideas more gravely and materially in the archive than ever before. But for this to happen we must take great care in the way we choose to visualize these interstices – we must insist that the conversations are animated in such a way that they are no longer marginalized.

Pedagogy

While the web has not yet superseded the brick and mortar university, this does not mean that it will not – or should not – especially if we ourselves are the ones negotiating the transition. Breaking down the spatial and temporal boundaries between students and teachers in the classroom would be a vital step towards developing the interfacing and filtering capacities necessary for making this kind of living text feasible on a larger scale. With the kind of infrastructure I’ve been describing, teachers and students could continuously interact from within the course text itself, regardless of whether they were participating in-person or online, within the dedicated term of study or without. Collective annotation, analysis, discussion and review could all be immanently and vitally linked. Rather than writing essays, students could write focused responses to specific citations and then discuss and review them immanently within the text – defining their views in relation to one another, the teacher and the scholarly community at large.

The labor of teaching could be divided much more synergistically. Upper level writing teachers, rather than being torn between the need to address higher level conceptual and structural issues and the compulsion to correct glaring grammatical mistakes, could assign remedial grammar modules in which students would be required to review and apply the rules they were violating. Computer algorithms might alleviate the difficulty of diagnosing and remedying the grammatical weaknesses in a way that could even free up the dedicated grammar teachers and speech therapists for more focused one-on-one time with the students who fail to make progress through the modules (or those who pay extra for it). The final stage of such remedial language training might even require students to proofread other students’ essays with a sufficient degree of accuracy and thoroughness. Such work needn’t remain strictly remedial either. Additional levels of mastery could be pursued for curricular credit or compensation. The ideal, in other words, is that students who initially need to pay extra to remediate themselves can eventually be compensated for remediating others.

The student-teacher boundary may grow increasingly amorphous with such a knowledge infrastructure, with the best students being promoted to more pedagogical roles within their given areas of expertise. As far as grading and evaluation are concerned, this would enable a greater degree of transparency and quality control throughout the review process. We might even develop a reciprocal system of double-blind peer review that pools writing from different institutions offering similar classes and requires students to review their reviewers without knowing whether they are students or teachers. They would just be anonymous voices in the citational cloud surrounding the piece of writing under review. This would not only help with grade norming; it would also help assess the quality of student vs. teacher feedback. This system could, mutatis mutandis, be employed for increasingly higher levels of evaluation and accreditation. It might, however, challenge the fixity and generality of tenure as we know it, replacing it with a more dynamic and democratic process grounded in our manifest competence in more precise areas of expertise.

Applying the core citational infrastructure to the classroom would allow for great advancements in eLearning insofar as each iteration of a class would become the database for the next and each instructor, the curator of this database. Certain sections of the collective text could be animated and set to voice recordings so as to restore a degree of human narrativity to the proliferation of written commentary. This would make lecturing less repetitive and more modular – freeing up time for educators to develop new materials, as they would no longer have to invest so much time in rehashing the old. I do not mean to discount the value of in-person discussion and the ethos of ‘relatability’ I discussed earlier. I’m merely suggesting that some lectures might, over time, be animated in a way that would adequately reproduce the revisionary junctures arrived at in the most critical discussions, condensing them into a format that could be viewed in a fraction of the time it would take to audit all of these classes in real time. This would be a powerful extension of the educational model of the “flipped classroom,” in which lectures are uploaded and assigned before class in order to take advantage of class time for live discussion.

At present, many digital lectures are poorly recorded, one-off videos. Understandably, they fail to create a more dynamic interplay between audio and visual components since this is a skill few academics have mastered. Editing, animating and producing high-quality video content, it might be argued, is an art in itself. But if the future of textual knowledge requires a higher degree of animation, is it not worth reconsidering these deficiencies on a curricular level? In any case, we would not need extensive training in audio-visual design if there were software capable of automating this process for the least technologically inclined among us. Several existing applications can produce modestly impressive results (e.g. PowerPoint, Keynote, Prezi), but they are not yet adequate for the kind of pedagogical work I have in mind.

We need more intuitive software for animating our ideas that sustains a reciprocal linkage between knowledge objects and the living text of ongoing commentary. Live video locks audio and visual elements into a linear temporality that can be quite challenging to splice and reconfigure in an appealing way. With animation, however, the audio and visual elements are inherently divided, which grants us more free play when it comes to representing ideas through text and images. Certainly animation is, in itself, at least as involved as editing video, if not more so, but I believe that the kind of animation most of us would need for our ideas and lessons is far easier to automate, more visually appealing and more modular than what we might be able to achieve with live video.

With enough automation, creating and curating a digital class could almost be as easy as diagramming a lecture on paper or illustrating the resulting conversation on a whiteboard. Both of these activities are rudimentary forms of mind mapping: a visual strategy of knowledge organization that does not necessarily follow the vertical or horizontal sequentiality of a written text or outline. Mind maps tend to take the shape of webs and clouds which, as I’ve mentioned, are shapes that resist the macrothematic pressures of a more conventional outline without necessarily compromising its mnemotechnical functionality because they are able to approximate hierarchical relations through other graphical means (e.g. relative size or centrality of ideas).  The question, then, is how to open our lesson plans and whiteboards to a knowledgebase and transform them into something that no longer gets discarded and erased each term.

Numerous whiteboard animation technologies exist, but they are rather sad approximations of what they might be. They trivialize the power of animation with photorealistic hands that illustrate what we type in a cursive font alongside stock images that are little better than clip art. What we really need is more precise control over the temporality of illustration vs. narration. Balancing the timing of voice and illustration before a live class is no simple task. It often requires extensive abbreviation and a steady rhythm of contributions between the teacher and the students. If we simply record the screen of our computer as we handwrite or type out the text of the lecture while trying to speak it, we’ll soon discover how unnatural it sounds when we try to keep the two processes in sync. If we are trying to condense and compile these live interactions as animations, then we need additional control over the speed at which text appears in relation to what is spoken. The overall aesthetic of the visual content would also benefit from some form of smoothing control so that the gaps between the appearance of typed letters are less jarring and the lineaments of words written with a stylus appear smoother and more calligraphic. Most importantly, we need this process to be automated in such a way that we can simply delete words and elements without having to re-record them – so that, for instance, we can correct words we’ve typed or handwritten in the middle of a line and the animation plays as if we had written it that way in the first place (rather than animating it being corrected after the fact). With even these modest forms of automation, we could begin to explore new modes of textuality that are better able to capture the inherent rhythm of commentary and discussion. Our readings of any given citation could, essentially, be brought to life through an animation directly linked to the source text within the knowledgebase.
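As a sketch of the kind of automation this would require, consider a timeline of recorded keystrokes in which deleting a span removes those events entirely and closes the time gap, so that playback shows the corrected line as if it had been written cleanly the first time. The names and the timing model here are hypothetical.

```python
from dataclasses import dataclass

# A sketch of an editable keystroke timeline: deleting a span removes those events and
# slides later strokes earlier, so no 'correction' is ever shown on playback.
@dataclass
class Stroke:
    t: float     # seconds from the start of the recording
    char: str

events = []

def record(t, char):
    events.append(Stroke(t, char))

def delete_span(start, end):
    """Remove strokes start..end-1 from the animation and close the time gap."""
    removed = events[start:end]
    gap = (removed[-1].t - removed[0].t) if removed else 0.0
    del events[start:end]
    for ev in events[start:]:
        ev.t -= gap

def playback(speed=1.0):
    """Replay text at an adjustable speed relative to the recorded narration."""
    return [(ev.t / speed, ev.char) for ev in events]

for i, ch in enumerate("teh"):       # a mistyped word
    record(i * 0.2, ch)
delete_span(0, 3)                    # the typo vanishes from the timeline entirely
for i, ch in enumerate("the"):       # retyped correctly
    record(i * 0.2, ch)
print("".join(ch for _, ch in playback()))   # plays as if "the" had been typed the first time
```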

While the relatively small file size of text documents makes them particularly well-suited for this kind of exploration, there’s no reason that a similar infrastructure might not allow us, eventually, to begin animating visual studies, music and film scholarship as well. Websites like YouTube, SoundCloud and Genius have already begun to explore the possibilities of microthematic tagging within individual media files. Programs such as Dedoose, NVivo and Scalar have also placed such controls more directly in the hands of scholars working on individual and collective projects. So, while there will, of course, be additional levels of difficulty involved in incorporating increasingly complex media formats, it’s not impossible to foresee a collective knowledgebase that weaves together text, image, sound and video seamlessly. This, however, would require a far more robust system of multimedia document review than we currently have at our disposal.

Review

Microsoft has developed one of the more popular modes of proofing and peer review, but there are still significant improvements to be made if we are to keep the link between the various versions of documents alive. Neither OneDrive nor rival cloud technologies like Google Drive or iCloud have really addressed this problem head on. We need a more effective means of visualizing the review process – one that retains all comments and editorial changes without making them too distracting to the overall reading experience.

This would require a more intelligent means of filtering substantive changes from typographical errors. We still rely on OCR because it allows us to integrate texts currently under copyright into our knowledgebases, even though correcting the errors introduced by this process can be a rather time-consuming endeavor. The problem is that, when it comes to cross-referencing instances of words, phrases or citations across personal knowledgebases, a single misread character can corrupt what might have been a very important linkage, since it’s impossible to guarantee that all users are using the exact same copy of a given text. The interface would, thus, need to be able to index with a certain degree of play – notifying readers of ‘near matches’ which could then be reviewed personally.

It’s possible, however, to distribute this task across the entire community of scholars. For this, we would need a textual analytic program that would query the web and other networked knowledgebases and automatically detect errors based on near-matches in our own version. From here we could inspect these matches in detail or accept them en masse. Our input in this process could then be averaged into the statistical norm for the particular version of the text we are using. In doing so, we would effectively be working collaboratively and algorithmically with both humans and machines to minimize time wasted proofing machine error. If a text were popular enough and the network large enough, all such errors might be resolved before we even had a chance to participate. After uploading the text file to our database we would be prompted with all of the corrections appropriate for our version and, as long as we had a facsimile PDF, we could always cross-check the image against the embedded text for good measure. Such a tool would also allow us to remain “on the same page” despite having different versions of the text and may even reveal errors that the most skilled copyeditors failed to catch before publication.
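A minimal sketch of the underlying near-match detection, using Python’s standard difflib; the similarity threshold and the sample strings are arbitrary.

```python
from difflib import SequenceMatcher

# A minimal sketch of indexing "with a certain degree of play": citations from two
# copies of a text are compared and near-matches (likely OCR slips) are flagged for
# review rather than silently breaking the link. The 0.9 threshold is arbitrary.
def near_matches(my_citations, network_citations, threshold=0.9):
    flagged = []
    for mine in my_citations:
        for theirs in network_citations:
            ratio = SequenceMatcher(None, mine, theirs).ratio()
            if threshold <= ratio < 1.0:
                flagged.append((mine, theirs, round(ratio, 3)))
    return flagged

mine = ["the time-honored technology of the page"]
network = ["the tirne-honored technology of the page"]   # 'm' misread as 'rn' by OCR
for ours, candidate, ratio in near_matches(mine, network):
    print(f"near match ({ratio}): '{ours}' ~ '{candidate}'")
```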

A certain degree of play in the algorithm would also allow the metadata we generate for one citation to connect with every other ‘near match’ while flagging every minor variant as a potential error. Major variations, of the sort that appear across the folios and quartos of Shakespeare or, for that matter, the modern and Elizabethan English versions of his works, would have to be tagged manually. But this is the very kind of work best-suited for a collective knowledgebase since it allows decisions made by a specialized group to become immanently visible to the public and, hence, open to evaluation.

The good news is that we do not need to build these tools from scratch. Millions of websites have been using this kind of crowd-sourced proofing technology for years. reCAPTCHA, which we probably recognize as a security measure used to differentiate human users from bots, serves the dual purpose of correcting suspect phrases in digitized text. Jerome McGann et al. have spent decades designing tools for collating and annotating different versions of drafts in the Blake, Rossetti and NINES archives at UVA, as have Greg Crane et al. in the Perseus Digital Library at Tufts. It may even be possible to create a plugin for McGann’s Juxta program that could work across networks of personal knowledgebases.

Peer review, while it overlaps with proofing in many cases, will also require its own set of tools. How many conversations is it possible to represent within a document? If we’re talking about balloon comments of the sort we find in Word and Google Docs – not many. While comment balloons may suffice for our current institutional practice, they will become increasingly inadequate in an environment in which citations are immanently woven together throughout a document. Even a handful of users leaving detailed comments on the same text begins to crowd the interface. Comment balloons have to stretch further and further above or below their target reference, making it difficult to see, at a glance, what is referring to what. The way they are stacked in a single, marginal column is inherently and mnemotechnically constricting. More problematically still, they only allow for precise linking between the primary text and the comments. There can be no critical conversation between comments because the only way of linking these is as ‘replies’ nested under the parent comment. There is no way for a comment to cite a specific part of another comment in reference to a specific citation from the document under discussion. This prohibits the kind of triangulation between citations necessary for productive critical conversation. Inline comments are even more problematic because they break up the actual flow of the commented text and admit even less room for multiple user entries. While they may be adequate for most proofing tasks, where there is more of a consensus regarding errors or variants, the more substantive commenting needed for peer review renders them impractical. A far better option for proofing and peer review would be the ability to toggle comments between paratextual and intratextual display so that we could expand them from marginal notes to primary texts as needed. This is more or less what Ted Nelson was advocating when he proposed expandable links as an alternative to the unidirectional variety that dominates the web today. Such scalability will be essential as the entire history of review both within and between documents becomes part of a general texture of citation – when drafting becomes a truly (re)visionary process and no longer needs to be reconstructed from independent documents scattered online and on personal hard drives.
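To suggest how such expandable, triangulating comments might be modeled, here is a minimal sketch in which a comment can anchor to a span of the primary text or to a span of another comment and be toggled between marginal and intratextual display; the field names and offsets are illustrative, not any existing tool’s schema.

```python
from dataclasses import dataclass, field

# A sketch of comments that can cite a span of the primary text *or* a span of another
# comment, and be toggled between a collapsed marginal label and full inline display.
@dataclass
class Anchor:
    target_id: str   # "primary" for the document itself, or another comment's id
    start: int       # character offsets of the cited span
    end: int

@dataclass
class Comment:
    id: str
    author: str
    body: str
    anchors: list = field(default_factory=list)
    expanded: bool = False   # collapsed = marginal note; expanded = inline primary text

    def render(self):
        return self.body if self.expanded else f"[{self.author}: {self.body[:40]}…]"

c1 = Comment("c1", "Reviewer A", "This reading overlooks the earlier draft of the passage.",
             anchors=[Anchor("primary", 120, 180)])
# Triangulation: c2 cites the same span of the primary text *and* part of c1.
c2 = Comment("c2", "Reviewer B", "Only the first half of A's objection holds.",
             anchors=[Anchor("primary", 120, 180), Anchor("c1", 0, 12)])
print(c2.render())          # collapsed, marginal view
c2.expanded = True
print(c2.render())          # expanded into the flow of the text
```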

If we could scale up the margins and/or scale down each critical node, we could begin to experiment with far more nuanced, cloud-like representations of scholarly conversation. If every comment could be collapsed to a phrase or abstract or, perhaps, to authors’ names visually coded to represent the basic content of the commentary, then we might be surprised to find how much commentary we can track in the margins of our page and the peripheries of our minds. On the most macrothematic level, we would be able to see the text we’re working on condensed as the central node of this citational cloud. This cloud would also hover in the margins of the particular text, repopulating from page to page based on the metadata embedded there. We could narrow the range of reference further still by highlighting specific passages, phrases or even words so that we might see whether a particular line of a text has ever been cited directly or indirectly by another author. This all sounds rather miraculous, but condensing and clustering citational relationships in this way should be seen as the telos of the human-generated, microthematic tagging I’ve described in the previous section.

Obviously not all texts can be tagged quite so deeply and painstakingly by human knowledge workers. But even those untouched by human minds could still be represented in the cloud using an automated content mining algorithm. This would lack the nuance of human tagging, but would prevent underrepresented texts from falling off our collective mind map. What’s more, they could be flagged accordingly – as areas needing attention. Ideally, the citation cloud would provide the dialectical image of the critical history of a text. It would be capable of directing our energies towards the most relevant and least worked reaches of the archive – in a way that intrinsically stimulates our collective memory by exposing relevant links between established and unknown areas.

Perhaps the greatest potential for collaboration in human and machine intelligence lies in translation. An interface of this sort could also promote relevant connections between languages using a form of hybrid human and machine translation. Machine translation may stumble over many of the technical nuances but, so long as the human metadata were translated with reasonable accuracy, we could still communicate microthematically with scholars around the world. It is important not to forget that the efficacy of machine translation will improve with more human input and correction. The relationship is reciprocal – machine algorithms directing us to citations we know we should read even if, initially, we haven’t the faintest idea how to read them, then learning from our subsequent attempts to translate and understand them. This would easily be one of the most powerful pedagogical tools ever conceived – a web of language in which we teach machines to teach us to teach each other to teach them etc. Imagine being able to peruse every known translation intratextually, paratextually or interlineally with the history of commentary ever unfurling beyond the margins.

The richness and depth of this collective metadata would also be the perfect soil for the neural networks currently striving to transcend the limit between human and machine intelligence once and for all. Neural networks and their kin have already surpassed human champions in chess, Go and Jeopardy!, in part by analyzing vast records of human play. Natural language acquisition is the final frontier. What better place to learn the nuance of language than in a knowledgebase of such microthematic precision?

Even a relatively “soft” artificial intelligence might save us the effort of citing anything at all. We might simply begin typing a phrase and, after reaching a certain statistical threshold of similarity, be prompted with a drop-down, auto-complete menu enabling us to select however much of the passage we’d like to directly quote or indirectly link. The anti-thematic implications of such an augmented intelligence are especially profound when we consider the possibility that we might not even be consciously trying to cite another work when we find ourselves solicited in this way. A vague constellation of key words and phrases might present us with passages we had read, forgotten, and were in the process of re-membering as our own – perhaps even passages that we had never read, but really ought to have read before presuming to (re)invent them independently. This capacity of a machine algorithm to interrupt and challenge the originality and thematic generality of our work is where the mnemotechnology I’ve been describing thus far appears most amenable to de Man’s deconstructive praxis and closest to his unsettling vision of an “implacable” “text machine.” By inhumanly juxtaposing all the text we find original and unique with all the text in which similar tropes turn up, this machine would provide a vital safeguard against our all too human fantasies of inclusivity and exhaustiveness – helping us to remember more by forcing us to perpetually deconstruct everything we thought we knew.
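A minimal sketch of such a similarity-triggered prompt, again using Python’s standard difflib; the corpus entries, threshold and phrasing are invented for illustration.

```python
from difflib import SequenceMatcher

# A sketch of the prompt described above: the phrase being typed is compared against
# indexed passages and, past an (arbitrary) similarity threshold, candidate quotations
# are offered for direct quotation or indirect linkage. The corpus entries are invented.
corpus = {
    "Hypothetical Author A (2001), p. 12": "memory is only as durable as its infrastructure",
    "Hypothetical Author B (2015), p. 87": "the page remains our most durable mnemotechnology",
}

def suggest_citations(typed, threshold=0.6):
    suggestions = []
    for source, passage in corpus.items():
        ratio = SequenceMatcher(None, typed.lower(), passage.lower()).ratio()
        if ratio >= threshold:
            suggestions.append((ratio, source, passage))
    return sorted(suggestions, reverse=True)

for ratio, source, passage in suggest_citations("knowledge is only as good as its infrastructure"):
    print(f"{source} ({ratio:.2f}): …{passage}…")
```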