Codifying the Humanities, Humanizing Code

In a recent post titled “Don’t Circle the Wagons,” Bethany Nowviskie observes that while the humanities tend to have a more theoretical orientation, coders tend to engage in a lot more praxis.  While one could nitpick Nowviskie about how much this observation really accords with reality (coders can spend a good deal of time honing tools before actually using them to produce anything useful), it does point to a semantic issue that lies at the core of the Digital Humanities.  DH, as Kathleen Fitzpatrick has defined it, uses digital tools for humanities work but at the same time uses the frameworks of the humanities to make sense of digital technologies.  Louis Menand, in The Marketplace of Ideas, speaks of this duality too, although in his view it’s not particular to the humanities but is instead a tension that exists more generally in universities that promote both the liberal arts and more utilitarian disciplines:



Liberal education is enormously useful in its anti-utilitarianism. Almost any liberal arts field can be made non-liberal by turning it in the direction of some practical skill with which it is already associated. English departments can become writing programs, even publishing programs; pure mathematics can become applied mathematics, even engineering; sociology shades into social work; biology shades into medicine; political science and social theory lead to law and political administration; and so on. But conversely, and more importantly, any practical field can be made liberal simply by teaching it historically or theoretically. Many economics departments refuse to offer courses in accounting, despite student demand for them. It is felt that accounting is not a liberal art. Maybe not, but one must always remember the immortal dictum: Garbage is garbage, but the history of garbage is scholarship. Accounting is a trade, but the history of accounting is a subject of disinterested inquiry—a liberal art. And the accountant who knows something about the history of accounting will be a better accountant. That knowledge pays off in the marketplace. Similarly, future lawyers benefit from learning about the philosophical aspects of the law, just as literature majors learn more about poetry by writing poems.

In embracing the university as a place that produces but also interprets, maybe one thing we need to do, as Digital Humanists take up the call to learn how to code, is to learn it in a way that embraces the dualities that Fitzpatrick and Menand describe.  Of course one can’t learn to code simply through studying its history.  But maybe, when we teach and learn code, we should spend more time dwelling on its origins.  As I begin to think about how I’m going to teach an introductory course on Web programming next fall, I wonder if there’s room for the following video by Chuck Severance on the history and origins of JavaScript:
My hope is that there’s at least a little bit of room for history in these courses. If there is, we’ll be in a better place to bring interpretive approaches to bear on technical subjects while also bringing technical know-how to more interpretive disciplines.

Unpacking Code, Composition, and Privilege: What Role can Dilbert and the Digital Humanities Play?

I ordinarily wait a little longer between blog posts and try to write with a bit more polish, but I wanted to jot down two questions that emerged after writing last week’s post on Code Versus Composition.  Hopefully I’ll get to these concerns over the next couple of months:
In the Digital Humanities blogosphere and in books like Unlocking the Clubhouse: Women in Computing there is much to learn about the way that coding culture may create and sustain groups of privilege. In turn, the methodologies of class, race and gender studies can help to lend insight into this culture.  Despite the fact that we like to think we live in a post-class, post-gender and post-racial society, those categories aren’t going away yet.  And until they do there’s room for sequels to Unlocking the Clubhouse.  Humanists, and digital humanists in particular, may be in a good position to use these methods of analysis since they’ve honed them in other disciplinary endeavors.


The question I have, however, is whether class, race and gender are the only lenses through which privilege and the distribution of power can be tracked. While they are powerful tools, do their methods generate attention blindness that obscures other forms of privilege? To this point, there are very insightful technological theorists who haven’t placed the triad of class, race and gender at the core of their analysis. For example, Neil Postman’s short address “Five Things We Need to Know About Technological Change,” and the “second idea” in that essay, provides a really useful way of uncovering how privilege (and deprivation) are realized during technological change:

“The advantages and disadvantages of new technologies are never distributed evenly among the population. This means that every new technology benefits some and harms others. . . . Who specifically benefits from the development of a new technology? Which groups, what type of person, what kind of industry will be favored? And, of course, which groups of people will thereby be harmed?”

While Postman’s approach certainly prompts us to think about race, class and gender groups, it isn’t constrained by them.  Other groups can also be considered.  For example, in our current N.E.H. research my colleagues and I are examining how digital technology is shaping and reshaping cognition.  While it’s certainly worthwhile to ask whether these changes privilege particular genders, classes or races, an equally salient question is whether they favor a type of person who is better able to multi-task.  In creating more and more digital distractions, are coders generating the social conditions in which multi-taskers will prevail? And in my open source software advocacy work, one should ask whether a particular form of collaborative coding work privileges groups with a particular political and economic ideology.  The same question applies to the study of growing global networks: are those networks privileging people who harbor sympathies to neo-liberalism and antipathies to more communitarian ideologies?

On a more humorous level, the Dilbert cartoons also illuminate.  But their lens, more often than not, revolves around the tensions between technicians and managers:

Certainly class, race and gender help to inform who is privileged as technicians and managers negotiate technological change.  But maybe technicians and managers can be considered groups as well.  Which of these groups is more favored as coding’s presence grows?

Finally, while coding and what coders produce are certainly subject to the critique of class, race and gender studies, and more largely the critique of Neil Postman, we shouldn’t forget that coding, in creating privilege and division, can also often be a bridging activity that brings together and harmonizes cultures that conventionally are portrayed as at odds.  On our campus the College of Arts and Humanities and the College of Applied Sciences and Technology don’t mix that much.  It’s a division that is reminiscent of the one C.P. Snow popularized 50 years ago.  But coding doesn’t have to be this way, nor is it always this way now.  It can bring different cultures together. Speaking metaphorically, it’s not always Code versus Composition but sometimes very much Code and Composition. That, I think, is at least one hope of the Digital Humanities.  That hope shouldn’t be forgotten even as we engage in class, race and gender critiques, and it raises a concomitant question: in recent years how much has this hope been satisfied, and what more work needs to be done in order to have it fulfilled?

Code Versus Composition

In recent months voices in the media have been encouraging lay people to learn to code.  First, Douglas Rushkoff published a piece on CNN titled “Learn to Code,” which is in many ways just a coda to Program or Be Programmed.  And second, Codecademy was launched, which presents itself as a quick and easy way to learn to code online.

Given how ubiquitous code is becoming in life (to wit: as I write this, code is processing my typing and is also providing the medium through which you read this), it seems plausible to think of code as a possible new basic literacy that gives definition to the ideal of an educated person.  Since I’m about to begin teaching code in the fall, I welcome this interest: it adds to the marketability of my teaching as well as that of my colleagues.  And it’s nice to see it portrayed for what it is: an activity that in addition to being intrinsically fun also leads to exciting remunerative careers.  But is it really a basic literacy?
I’m of two minds about it. On the one hand, it is plausible to think of it as a literacy which everyone should have:
For one thing, code, like the printed word, is everywhere.  In a culture that doesn’t read or write, composition doesn’t have much use.  It isn’t a basic literacy.  But once reading and writing become entrenched in everyday activities, composition does become a basic literacy.  Given how widely code has spread, it would seem like the same logic would apply here too. Code is everywhere, so everyone needs to understand code.
For another thing, like the activity of writing, the activity of coding trains our minds to think in ways that give order to a world that probably could use a little more ordering (pace Max Weber’s fears of the world as an over-rationalized iron cage).  Composition illuminates.  Coding also illuminates.  Ergo, code and composition are (or at least have become) basic literacies.
Finally, code increasingly has become the way we interface with tools.  Why is this important?  More so than any other species, we are our tools. As Winston Churchill once said, “We shape our buildings, and then they shape us.” Similarly, we shape our tools and then they shape us.  But to keep that reshaping a two-way street, and to make sure we don’t just devolve into whatever machines want us to become, we have to shape our tools.  And if you want to be directly part of the shaping, these days you have to know how to code.
On the other hand, in spite of the above rationales, I’m not quite ready to accept coding as a literacy that is as basic as composition:
For one thing, while code is everywhere, it’s embedded and hidden in our machines.  It doesn’t pop up unmediated on a street sign, on a Hallmark card, in an email or in a newspaper editorial.  Even programmers don’t ordinarily use code to navigate through a new town, to write a valentine, or to refine a political position.
For another thing, code is primarily used to communicate with machines.  You don’t use it (without ancillary devices) to connect and bond and lead an initiative with other people.  The CNN piece reports that Mayor Bloomberg has taken up the challenge to code.  Who knows, maybe he actually went through with it. But I doubt his coding skills have brought much more civic order to New York.  Code (to follow an Aristotelian paradigm) is a language which gives order to our material lives.  But (at least until the programmers take over our spiritual and political lives) it isn’t the language we use to sermonize or legislate about political matters.
For a final thing, it may be true that our tools shape our humanity and that, in turn, our code shapes our tools.  But that doesn’t mean we can’t shape the programmers who code our tools.  In effect, we’re not fated to have our destiny controlled by machines just because we don’t personally code.  We can control our destiny and shape our tools by hiring a programmer.
Ok.  So where does that leave us?  If you are Mayor Bloomberg, or Audrey Watters (a technology commentator who has dived into Codecademy), or Miriam Posner, or one of the legion of other people who’ve taken up Rushkoff’s or Codecademy’s call to code: Take heart! It’s fun! And yes, coders are changing the world and our definition of what it means to be human. But that task isn’t the province of coders alone.  Nor, despite their best efforts, is it ever likely to be.

KCPW Radio Interview

Susan Matt (my spouse) and I were interviewed on KCPW today about our course “Are Machines Making Us Stupid?” Here is a link to the podcast on the KCPW site or listen to it here as well:

Audio: http://kcpw.org/files/2012/02/02-21-CityViewsSeg2.mp3

Segment 2: Living the Tech Life

Today’s conventional wisdom may be that a well-rounded life must include Facebook, iPhones and constant connectivity. But do technology and omnipresent media really enrich our relationships, boost our moods and enhance our intellectual capacity? Professors Susan Matt and Luke Fernandez join us to explore the question: Are machines making us stupid?

Guests:
Dr. Susan Matt, Professor and Chair of the History Department, Weber State University
Dr. Luke Fernandez, Manager of Program and Technology Development, Weber State University

William Powers and The Technological Humanities

Last week William Powers visited Weber State University and spoke about his book Hamlet’s Blackberry. Recent articles in The Atlantic and in the New Yorker have cast him as a bit of a grouch about technology. Such portraits don’t do justice to his message. While Powers says it can be beneficial to disconnect (via Walden Zones or via Digital Sabbaths) he’s also quite upbeat about the ways that technology has drawn us closer together. The point in taking an occasional recess from our technologies and from our social connections is that it can complement our more social selves. By moving between these different experiences we can lead richer and more meaningful lives than if we simply seek one of these experiences while excluding the other.

He also isn’t trying to dictate to anyone. Each of us needs to find our own balance between inner directed activities and outer directed ones. The way to find that balance is to examine our personal patterns of technology adoption and to identify the combination that develops this equilibrium in our selves. Diversity is good. If you don’t feel that the “world is too much with us” William Powers (unlike William Wordsworth) isn’t going to hold it against you.

Of course, in defending Powers, I’m not also trying to say that everyone needs to like his book. In fact, a portion of the students in the course I’m co-teaching this semester (titled “Are Machines Making You Stupid?”) took issue with Powers’s claims about digital maximalism. (See footnote below.) That’s fine. The larger point is that Powers’s visit sparked interesting conversations in our local community that complement ones taking place regionally, nationally and globally. Below are two short viral videos whose popularity suggests how salient these issues are in the zeitgeist (Powers showed them during his talk):

I. Disconnect to Connect

II. Girl Falls Into Fountain (sorry, this one I can’t embed)

Finally, if these issues seem present globally it’s also worth noting that they are present historically. As our class is discovering, anxieties about technology are not new. We’ve been wondering for centuries whether our inventions are making us smarter or dumber, shallower or deeper. But just because we’ve been worrying about these questions since the time of Socrates doesn’t mean we can stop worrying about them now. In order to adopt technologies wisely each generation needs to think these questions through anew. That’s the curse (and blessing) of the “technological humanities.”

—————————–

Footnote:

For our first writing assignment we had students respond to the following question:

 In Hamlet’s Blackberry, William Powers asserts that “we’ve effectively been living by a philosophy . . . that (1) connecting via screens is good, and (2) the more you connect, the better. I call it Digital Maximalism, because the goal is maximum screen time. Few of us have decided this is a wise approach to life, but let’s face it, this is how we’ve been living.”


For your first writing assignment, we would like you to respond to this assertion. Do you agree with Powers’s claims here? If so, why? If not, why do you disagree? You might also consider the following questions: is it truly a philosophy (or is it something else)? Do we truly value maximum screen time? Is it truly how we’ve been living? 

A significant portion of the class questioned whether digital maximalism was as pervasive as Powers claims. They did so by referring to examples in their own lives or their families’ lives in which they had been able to spend time away from screens. They were also reluctant to blame technology for any pathology or addiction that might emerge in its presence. To do so, in their view, would constitute an abdication of personal responsibility.

While those criticisms are fine as far as they go, I hope, as the course progresses, to encourage the students to dwell a little more on this issue. In my view, taking personal responsibility and finding blame in technology are not necessarily mutually exclusive or contradictory positions. In fact, oftentimes they complement each other. By uncovering ways in which technology encourages certain behaviors while discouraging others, we’re in a better position to make informed and responsible choices about how to use our tools.

Getting students to speak with nuance about the ways that we shape our tools, and in turn, how tools shape us is a perennial challenge in courses like this. Students tend to think about these things in binary categories: either we’re completely free beings who must take complete responsibility for the way we use our tools or we are “tools of our tools” who therefore can’t have any responsibilities. Few consider whether there may be a spectrum of states in between these poles.

Beyond the conundrum of technological determinism I also hope that we get to explore digital maximalism in terms of Neil Postman’s third idea:

The third idea, then, is that every technology has a philosophy which is given expression in how the technology makes people use their minds, in what it makes us do with our bodies, in how it codifies the world, in which of our senses it amplifies, in which of our emotional and intellectual tendencies it disregards.

If digital maximalism isn’t the “idea” or “philosophy” embedded in recent digital developments, then what philosophy is?

Google’s Doodles and the Waning of Serendipity

I just finished reading The Filter Bubble by Eli Pariser, who is the current president of moveon.org.  In keeping with the interests of that organization, Pariser’s book is an attempt (at least tacitly) to expand the communitarian and civic capacities of the Web.  But he makes his way there by arguing that the Web is confining rather than expanding our cognitive horizons.  Instead of introducing us to a broader and more varied set of people, the Web is increasingly taking us to points of view that are congruent rather than divergent with our own. With personalized search and personalized social networking, the ’net introduces us to places and people we already like and that we’re already interested in.  As searching and matching algorithms improve, we’re increasingly exposed to material that is already relevant to our lives.  This, of course, is good up to a point: we like relevance. The downside is that we’re challenged less and less to consider or visit perspectives that differ from our own.
These trends have been in the works for many years now – Cass Sunstein famously identified them as far back as 2002 in the book Republic.com.  But, as Pariser argues, what makes them more worrisome in 2012 is that they’ve become more insidious.  In the past we narrowed our horizons through conscious acts: we went to nytimes.com instead of foxnews.com (or vice versa) by choice and more or less deliberately.  But as the Web has become personalized, these choices are increasingly made for us behind the scenes, in ways that we’re only vaguely aware of.  When I visit Amazon.com and shop for The Audacity of Hope, Amazon also suggests I buy Bill Clinton’s memoir, but not, say, Bill O’Reilly’s Pinheads and Patriots.  And when I visit Facebook, my friends, more often than not, seem to share similar points of view.  Pariser doesn’t reference Marx, but the filter is the modern generator of false consciousness. In the past we did our own Web filtering. But now our filters are selected behind the scenes.  In the brave new world of the personalized Web our false consciousness is created for us.
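To make concrete what filtering “behind the scenes” can look like, here is a deliberately toy sketch (my own illustration, not anything from Pariser’s book, and far simpler than any real recommender system): a content-based filter that scores candidate items by their similarity to a user’s history will, by construction, keep surfacing more of what the user has already seen.

```python
from collections import Counter
import math

def vectorize(text):
    """Bag-of-words count vector for a short text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def recommend(history, candidates):
    """Rank candidates by similarity to the user's reading history."""
    profile = vectorize(" ".join(history))
    return sorted(candidates,
                  key=lambda c: cosine(profile, vectorize(c)),
                  reverse=True)

history = ["obama hope politics memoir", "clinton memoir politics"]
candidates = ["clinton white house memoir",
              "oreilly pinheads patriots",
              "gardening tips"]
print(recommend(history, candidates)[0])  # → "clinton white house memoir"
```

Nothing in this sketch is malicious; the narrowing is simply a side effect of optimizing for relevance, which is Pariser’s point.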
In Pariser’s closing chapter, he offers up a number of things that individuals, corporations and governments can do to allay the more insidious effects of filtering.  He suggests that as individuals we occasionally erase our tracks so that sites have a more difficult time personalizing their content. (To paraphrase Pariser: “If we don’t erase our [Web] history we are condemned to repeat it.”)  For corporations, he suggests that their personalization algorithms be made more transparent and that a little serendipity be introduced into searches so we’re occasionally exposed to something beyond our current interests and desires.  And for governments he suggests a stronger role in overseeing and regulating personalization.
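Pariser’s corporate suggestion, a little engineered serendipity, could in principle be as simple as the following sketch (the function name, the pool of off-profile items, and the 10% default rate are all my assumptions for illustration, not anything Pariser specifies): with some small probability, swap one out-of-profile item into the personalized results.

```python
import random

def add_serendipity(ranked, pool, epsilon=0.1, rng=random):
    """With probability epsilon, replace the last of the personalized
    results with a random item drawn from outside that ranking."""
    results = list(ranked)
    outside = [item for item in pool if item not in results]
    if outside and rng.random() < epsilon:
        results[-1] = rng.choice(outside)
    return results

ranked = ["lady gaga tour", "lady gaga album", "lady gaga news"]
pool = ranked + ["what is happening in darfur?",
                 "local council election guide"]
print(add_serendipity(ranked, pool, epsilon=1.0))  # forced swap, for demonstration
```

Even this toy version exposes Morozov’s objection, discussed next: someone still has to curate the pool of “serendipitous” items, and that choice is itself a bias.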
There are problems with Pariser’s suggested solutions, and Evgeny Morozov, in his own review of Pariser, brings a very important one to light.  In expanding our civic and communitarian and serendipitous encounters, it would be nice if Google occasionally popped up a link to “What is happening in Darfur?” when we type “Lady Gaga” into Google.  But who exactly is supposed to decide what these serendipitous experiences are to be?  We may want to allay some of the cognitive deficiencies that the current ’net breeds.  But the danger in doing so is that we replace one bias with another.  In looking a little further into this I visited the thousands of doodles (e.g. custom banners) that Google has generated in the past couple of years.  Not surprisingly, I didn’t see much there that’s over-the-top civic or political. But maybe that sin of omission is better than the alternative: I prefer “don’t be evil” (their current motto) to “do good but risk partisanship and bias in the attempt.”
Pariser may not provide convincing fixes, but his description of the problem makes the book a worthy read. One would think that as the information stream accelerates we’d become increasingly subject to distractions and to new ways of seeing the world.  In fact, Clay Shirky touches on this point in “It’s Not Information Overload. It’s Filter Failure”: the filters which the mass media industry imposed on late 20th-century media consumers have been corroded by the advent of the Web.  But the trends that Shirky makes light of may be reversing.  Our cognitive horizons may be contracting rather than expanding in the age of personalization.  And our attention blindness may be increasing rather than decreasing as the filter bubble grows.  In bringing those concerns to light, Pariser has done good work.

The Two Cultures

This blog is sort of an informal companion to a colloquium we hold here at Weber which is also called “I.T. in the University.”  Recently we had the privilege of reading Science Fiction and Computing: Essays on Interlinked Domains, which was edited by Eric Swedin and David Ferro, colleagues of mine here at Weber.  Below is a guest post by them.

Most students of the history of science and the history of technology will remember “the two cultures” from their education.  The phrase comes from the English physicist and novelist C. P. Snow, who argued in The Two Cultures and the Scientific Revolution (1959) that a gulf of understanding existed between scientists and literary intellectuals.  The people within these two cultures understood their own cultures, but scientists often did not appreciate the humanities, and humanities-oriented intellectuals did not understand science.  Snow advocated education to overcome the ignorance on both sides.

The two cultures remain a living reality even today, and the fact that the divide still exists is obvious on many college campuses.  Professors tend to dialogue only with professors in closely related disciplines, and students often find themselves drawn either to science, technology, and engineering, or to the arts and humanities.  They learn different languages and different values about what is important.  A good example is the arts and humanities student who dreads taking a science class because they just see mountains of memorization and a way of thinking that bewilders them.  They think of a math class as an act of cruelty.  Of course, a science or technology student who is sent off to take their general education class in literature looks at the pile of novels they have to read for class as nothing less than torture.  They find the novels boring and the discussions vague and full of opinions unsupported by any sort of methodical thought.

This divide is most unfortunate, and people on both sides need to make the effort to learn about information from the other side of the divide, even if it is material that does not interest them.  The authors of this guest blog entry have straddled this divide through doctorates from the arts and humanities side and considerable experience teaching both computer technology courses and history courses.  One of the areas that drew our interest was science fiction.  It is often thought that people on the science and technology side of the divide have no appreciation of literature, but that is not true.  They just have their own literature.

We will not go down the rabbit hole of defining exactly what science fiction is, because there is no common agreement on an exact definition.  Much of the best science fiction is based on an understanding of science and technology, and readers who do not have that background come away from it frustrated and bewildered.  They cannot fill in the blank spots to evoke the sense of wonder that is often found in science fiction.  An appreciation of science fiction can build bridges between the two cultures as one side learns to appreciate humanities beyond just science fiction and the other side learns enough about science and technology to appreciate science fiction as genuine literature.

As computer experts, one of us in a Computer Science program and the other in an Information Systems program, we often noticed how many of the people we knew in our fields liked science fiction.  This was particularly true of the students and professors in our fields who were the top performers.  We wondered if we could document a connection between computing and science fiction.  This led to some articles and then to a volume of interdisciplinary essays that we edited, Science Fiction and Computing: Essays on Interlinked Domains (McFarland, 2011).  We found strong linkages.

Science fiction has often provided terms, concepts, and a milieu of technological enthusiasm for pioneers in the computer field.  Science fiction also provided ways for computer innovators to talk about where they thought computers were going.  More research needs to be done on the linkages between science fiction and computers, a wonderful opportunity for different academic disciplines to talk to each other, and we hope our book helps that conversation along.  We also hope that our book will encourage academics, educators, and other people to think about how we can bridge the two divides in our intellectual culture.  We need academics and students who are grounded in science and technology to appreciate the contributions made by the arts and humanities, and for the reverse to be true.