Drinking from the Firehose: extended cognition & shitty interfaces


We suck at creating effective interfaces for exploring and utilizing the Internet. In particular, I want to complain about the browser. 

I spend a huge portion of my waking life and cognitive energies working in a single Chrome browser window. This browser is the interface to my work environment, my school environment, and large parts of my social environment —not to mention my general thinking/time wasting/entertainment environment. How is all this diverse and dynamic cognitive activity represented? Via a flat grey window with lots of tabs, pretty much the same way it looked back in 1998.

The Internet and the different applications it supports are obviously powerful cognitive tools, capable of supporting a wide variety of extremely complex human activities. Yet the basic “window” to this world is incredibly simplistic. The browser does not provide a complex, flexible structure to complement the complex, flexible activity of “being online”. This is a damn shame particularly because the Internet and its many environments and applications allow us to do incredibly advanced cognitive work, and yet the burden of organizing and keeping track of this complex cognitive work is left entirely to us and our naked, puny minds.

One of the most interesting ways to use the Internet, in my experience, is to think with the Internet- to conduct research and explore and enhance a train of thought via the hyperlinked rhizome that is the Interwebs. Doing this, my browser window(s) come to look like a very flat visual representation of my stream of consciousness. I would argue that in many ways the browser window is the closest any technology has come to externalizing the flow of human thought. Written language is of course still the paradigm technology for storing and enhancing the flow of thought, but it does not have the dynamic, “wormhole” characteristic that actual stream-of-consciousness thought does. What it lacks in complexity, written language makes up for by helping us to clarify and organize our own messy thoughts, making them easily communicable to others. In contrast, the browser’s defining affordance is allowing the stream of consciousness to expand ever outward into potentially infinitely branching thoughts.

If you do a Google Images search for “too many tabs”, you see MANY instances of this phrase. Which says to me that the browsing experience has an obvious and intuitive correlation with our cognition.

Which is great! But I often find myself truly straining under the cognitive load of this interface, wishing for some or any of the powerful affordances we find in nearly every other interface- the power of written language, the power of the GUI and the computer desktop– heck, even the basic affordances of a physical desk. These would help us begin to approach the problem of organizing the sheer amount of material and references generated in the act of browsing. But much more interesting would be to look at the unique properties of browsing and figure out an organizational and graphical structure that could make the volume and complexity of data generated in the act of browsing truly useful, meaningful, and communicable. That is, to truly enhance our powers of cognition not just by adding breadth and depth, but also by adding complexity, precision, and meta-awareness of browser-thought.

Imagine perhaps if your browser automatically generated a graphical “tree” of your browsing history, showing different paths of thought. Each path could be labelled and perhaps even tagged, creating a visual representation of your train of thought when exploring a particular area. This tree could be stored and shared with others. It would help boost meta-cognition about your research, providing a “big picture” to help organize and structure your browsing. This big picture awareness could help to combat the tendency of hyperlinked browsing to suck you into informational wormholes, eventually losing track of your original train of thought entirely.
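To make the idea concrete, here is a minimal sketch (in Python, with entirely hypothetical names- no real browser exposes an API like this) of how such a browsing tree might be represented: each page visit becomes a node under the page that linked to it, branches can be tagged, and the whole structure renders as a labelled outline.

```python
# Hypothetical sketch of a browsing-history tree: each visited page is a node
# under the page that linked to it, and branches can carry tags/labels.
# None of this corresponds to an actual browser API.

class VisitNode:
    def __init__(self, url, title, parent=None, tags=None):
        self.url = url
        self.title = title
        self.tags = tags or []
        self.children = []
        if parent:
            parent.children.append(self)

    def render(self, depth=0):
        """Return an indented outline of this branch of browsing."""
        tag_str = f"  [{', '.join(self.tags)}]" if self.tags else ""
        lines = ["  " * depth + f"- {self.title}{tag_str}"]
        for child in self.children:
            lines.extend(child.render(depth + 1))
        return lines

# Example session: one research root that branches into two paths.
root = VisitNode("https://en.wikipedia.org/wiki/Extended_mind_thesis",
                 "Extended mind thesis", tags=["research"])
a = VisitNode("https://example.org/clark-chalmers", "Clark & Chalmers",
              parent=root)
b = VisitNode("https://example.org/tabs", "Tab overload essay",
              parent=root, tags=["interfaces"])
print("\n".join(root.render()))
```

A real version would hook into the browser's history and tab events; the point of the sketch is just that the underlying data structure is simple- the hard part is the interface.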

Overall, I think that both the Internet and the computer are obviously some of the most powerful cognitive artifacts we have ever made, and in just a few decades they have radically extended and enhanced our cognition. But we have not yet developed human-cognition-friendly interfaces for exploring these repositories. I would argue that the current state of the browser interface makes our unstructured access to these repositories almost more of a cognitive burden, giving us unlimited and unorganized access to more than a single mind can understand.

The browser is just one pressingly obvious example of a larger, systemic problem that we will need to face in the next few decades. The Internet is a massive, continuously growing area of extended cognition, yet it still exists largely as a massive “data dump”, with very limited capabilities to organize, process, and understand this data– to bring it back down to the scale of human understanding.


On a related note: “Real-Time Space-Efficient Synchronized Tree-Based Web Visualization and Design” http://www.rowan.edu/colleges/csm/departments/computerscience/research/reports/TR2006-1.pdf


Telecommunications Policy Research Conference #42 (#TPRC42): A few reflections

This weekend I attended the Telecommunications Policy Research Conference in Arlington, Virginia. I am not a policy expert nor a computer scientist- but it was very interesting to see some of the “hot spots” of debate and interest in this field. I was most interested to glean a sense of the areas of technological development that are either so new, so pervasive, or both, that they present a major area of interest in terms of policy development.

Below I have outlined 4 areas that seem to fit into this category, some of the interesting takeaways from the conference on these areas, and resources for further explanation/exploration:

1. Wireless grids

Wireless grids are “ad hoc, distributed resource-sharing networks between heterogeneous wireless devices”. As far as I understand it, this technology represents an extension and alteration of the conditions of the traditional Internet as we know it. In particular, these networks provide new affordances in the sense that they can be pulled together ad hoc, require no centralized control in the form of a router, and can be formed between small hand-held devices (such as the nearly ubiquitous smartphone).

This technology has many interesting applications, including emergency response systems, means of creating networks in the face of censorship, and definite relevance for the “Internet of things” (explored more in my next point). Yet this technology is also still in a very early stage, and needs more development in terms of the protocols and “middleware” that would help securely organize these networks arising between radically different, non-traditional devices and interfaces.






2. Internet of things

The Internet of Things is closely related to the technologies of wireless grids and the development of new forms of networking generally. The Internet of Things is a phrase that describes a potential network of not just dedicated computing devices (like computers or smartphones), but nearly any appliance or object we wish. This promises new means of remote sensing and remote action. Early instantiations of this idea include home monitoring devices which report data to your smartphone, or simple tracking devices that can be attached to your personal belongings. The Internet of Things would exponentially expand the world of “big data”, and would of course open up new concerns about the privacy and security of that data.

Although we are seeing some implementations of the Internet of Things on the commercial market, we are still in the very early stages of this technology, with quite a bit of development to go both technologically and (I would argue) in terms of our understanding of the new affordances and possibilities the Internet of Things would allow.






3. Spectrum

Wireless connections are supported by something called “spectrum”- in fact, the same spectrum used by TV and radio broadcasters. Spectrum is a major policy issue because it is a limited natural resource that is allocated and managed by the government- the FCC in particular. As it stands, most of the existing spectrum within the range that is physically usable for wireless broadband is already occupied. Although both the government and wireless providers are searching for more efficient ways to use and share this spectrum, given the incredible rise in demand for wireless via tablets and smartphones, many have raised the question of whether we are on the edge of a “spectrum crisis”. Such a crisis would entail drastically reduced speeds, likely prohibiting things like online video streaming and significantly slowing down everyday browsing.

In order to avoid such a crisis, many are turning their attention to technological solutions, as well as possible policy solutions. However, it seems that at some point we will have to turn our attention to bigger questions such as: Is government regulation helping or hindering the process of spectrum management? Would the free market be better able to solve this problem? What if we simply “run out”- are there viable alternatives to spectrum for wireless connections?

Although this topic is somewhat obscure to the general population, it seems like spectrum may be a technological bottleneck we will encounter as the use of mobile devices – and the Internet of Things! – continues to grow.






4. Algorithms

Friday afternoon featured a fascinating panel titled “Governing the Ungovernable: Algorithms, Bots, and Threats to Our Information Comfort-Zones” (featuring, among others, CCT’s Mike Nelson), exploring the impact of intelligent systems on the world of general consumers. In particular, I was interested in the thread on algorithms, which increasingly determine the types of experiences we have online. These largely invisible technologies have recently gained a bit more of the spotlight via the Facebook and OkCupid experiments, but overall it seems that these algorithms exist in a kind of shady underworld that is little understood by the average platform user. And yet, increasingly, these algorithms use deeply personal information to draw deeply personal conclusions- and use those conclusions to create a particular experience, entirely unbeknownst to users.

The key word of this discussion was “transparency”. It was suggested that private companies could go a long way towards gaining more trust from their users by being more transparent about their various methods of data aggregation and data processing, and about how they are using this information. From my perspective, this seems highly improbable and highly ineffective (after all, just how jazzed do people get about reading Facebook’s privacy policies…?). This is undoubtedly an area that will continue to develop significantly in the coming years, and at some point it will need to be addressed by more formal policy initiatives, as we are just beginning to see happening in parts of Europe.






The Prayer Nut and the Mobile Phone

I recently visited the Rijksmuseum during a trip to Amsterdam, and had the pleasure to experience the “Art is Therapy” exhibition, a kind of “meta” exhibition wherein the curators created large, printed post-it notes with commentary on works of art meant to show how the works could change the viewer, could incite positive change and a kind of spiritual healing.

prayer nut

In particular I was drawn to the commentary laid out next to a “prayer nut”, a miniature carved wooden ball from the Middle Ages meant to act as both spiritual reminder and status symbol for its owner:

“The prayer nut is an aid to the interior life. It is specifically designed to provoke an inner state. 

There are lots of things we care about in theory, but forget about in practice. Religions understand this– and design all sorts of tools (from cathedrals to possibly the smallest of all prompts: the prayer nut) to help us keep important ideas closer to the front of our minds. Religion can be seen as a giant memory-prompting machine, always trying to get us back on track. 

The nut understands our frailties: it doesn’t condemn them, it seems to respond very creatively to them…

Modern technology is very good at catering for what is urgent, but very bad at keeping us in touch with what is important. Smartphone providers have something to learn from the prayer nut.”

“Sickness: I’m always reaching into my pocket to check my phone.”

Notably, the museum’s accompanying text notes that the owner of this nut would also have been very likely to enjoy showing off such a fine work of art as this intricate prayer nut…which cannot help but make me think of the ostentatious pleasure of displaying one’s iPhone.

That said, I think that it is a very good point that unlike the prayer nut, the phone is pulling us towards the “urgent”, not the important. How could we design our machines to put us in a more reflective mode? To keep us more connected with the bigger ideas, the more meaningful narratives that drive our life rather than the vagaries of the current moment? How could a machine push us towards LESS use, less need, less addictive, self-centered, and impulsive activity? Towards mindfulness?


How could we respond creatively to the frailties of humanity? How could we alter our technologies to make us better people, or at least to mediate the bad habits and negative side effects that new technologies seem to give rise to?

Food for thought.


Paradigms of Accumulation & Loss in a Digital World

One phenomenon of the digital world that I think we must increasingly come to terms with is the changed and changing nature of accumulation and loss. In a sense, this new digital world is marked very strongly as “lossless”, as compared to previous forms of media/medium. Especially now that everything we create is backed up automatically in the “Cloud”, it becomes increasingly difficult to lose even the most mundane emails, photos, receipts for tickets, etc.

Unnecessary accumulation is one of the defining features of the (modern? post-post modern?) world we live in. Accumulation of data, of IKEA furniture and other cheap and easily accessible consumer items, accumulation of massive amounts of waste and the pollutants that follow, accumulation of all the various ideas and products and thoughts of the entire history of humanity. To address this era of accumulation, we will need to learn the art of curating, of throwing away, of recycling- and even of preventing additional creation altogether.

Snapchat is just one prominent and still somewhat mysterious case where the standards of accumulation are being rethought. Instead of making accumulation and infinite storage of a sent photo the default, the app is entirely based on the premise of a default of loss. While storage is possible via the screenshot, this involves an explicit and intentional action on the receiver’s part, and has potentially interesting social implications, since the sender is notified that the image has been saved. It is interesting to think about other ways that the paradigm of easy and automatic accumulation could be changed, whether for entertainment purposes as in the case of Snapchat, for environmental purposes, or as a way of ensuring that our digital world does not become quickly overrun with the detritus of everyday life and instead becomes a more curated, meaningful store of our experiences.

A few examples of technologies of accumulation come to mind. An interesting one for me personally is Amazon, and online shopping in general. Especially as a Prime user, all it takes is a fleeting thought and a few clicks for me to add something to my growing collection of worldly possessions. This encourages an accumulation of things like never before. I do not have to hand anyone money, leave my house, or even really have a second thought about an item before I buy it. This obviously shifts the paradigm strongly toward accumulation. In contrast, there is no easy way to discard or recycle or pass on the objects I no longer really want or need. What’s more, I’m sure there are many smart minds in the industry figuring out how to make every THING in the world as easily or more easily acquired. (See: Seamless making your food desire only a few iPhone taps away, Amazon’s new drones bringing those items to you in less than 24 hours…and who knows what the future of 3D printing may bring us). How can we combat this basic compulsion towards accumulation? We need to begin developing technologies of curation and organization- designing behaviors of divestment and restraint.

These technological changes need not always be radical. I currently have about 10 gigabytes of old emails filling various accounts. What if, instead of marking something as “delete”, messages were automatically deleted after a set period of time- 30 days, say, or 6 months? If you wanted to save something, you would have to intentionally, thoughtfully choose to save it. In the world of material things, the default is typically that things stick around unless you decide to get rid of them. In the world of the digital, things can disappear without a trace. To force us to think about accumulation and loss, to change the dominant paradigm from passive accumulation to active conservation, may help us begin to address some of the larger issues that will only become more and more pressing over time.
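As a toy sketch of how un-radical this would be: a retention sweep that keeps only messages explicitly marked as saved, or newer than the window. The 30-day window and the field names here are illustrative assumptions, not any real mail provider's API.

```python
# Sketch of a "default to loss" retention policy for email: messages expire
# after a retention window unless the owner explicitly marked them as saved.
# The window length and dict fields are illustrative assumptions.

from datetime import datetime, timedelta

RETENTION = timedelta(days=30)

def sweep(messages, now=None):
    """Return only the messages that survive: saved ones, or recent ones."""
    now = now or datetime.now()
    return [m for m in messages
            if m["saved"] or now - m["received"] <= RETENTION]

# Example inbox, swept on a fixed date so the result is deterministic.
now = datetime(2014, 6, 1)
inbox = [
    {"subject": "Receipt",         "received": datetime(2014, 1, 5),  "saved": False},
    {"subject": "Letter from Mom", "received": datetime(2014, 1, 5),  "saved": True},
    {"subject": "Meeting notes",   "received": datetime(2014, 5, 20), "saved": False},
]
kept = sweep(inbox, now=now)
```

The old receipt silently expires; the saved letter and the recent notes survive. The design choice is exactly the inversion described above: saving, not deleting, is the intentional act.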

This kind of studied reflection on digital storage and loss may even be life-savingly important. In the wake of the disappearance of Malaysia Airlines Flight 370, I and many others are wondering- in an era where my smartphone tracks my trip to Costco and back, how is it possible that we do not have the data to track an airplane carrying hundreds of passengers across the ocean? The answer seems to be cost: “Although it would be possible to stream data from an aircraft in real time via satellite, implementing such a system across the industry would cost billions of dollars”. (Wired, “How It’s Possible to Lose an Airplane in 2014”). Although undoubtedly still expensive, what if flight 370 had been able to simply send a live stream of GPS data via satellite, if not the full data a black box records? One has to imagine that there are innovative and cost-saving measures that could be taken to preserve this valuable data. Although this is a somewhat dramatic example, I am sure there are many, many cases where a simple questioning of our existing paradigms of digital accumulation would radically transform our quality of life.


Twitter’s IPO: a financial perspective on our emergent social-media world

For the past several decades, the stock market and the financial world seem to have become increasingly abstracted from “the real world.” The epic housing bubble and economic downfall of the 2000s are just a few obvious manifestations of this phenomenon. My rough understanding of how any valuation should work is based on my experience of everyday consumerism: paying for basic products whose cost is based roughly on the cost of production, plus a bit of added expense for the psychological value of an object based on brand and commercial image.

Obviously, the wild world of valuation on the stock market is a much different kind of enterprise. I was struck by this anew when considering Twitter’s IPO last week, and how this flurry of financial activity relates to our understanding or lack thereof about the “value” of social media.

I first became curious when I read that Twitter’s stock price had jumped up 73% from the initial offer price the night before. This seems to be an absurd inflation of price in an incredibly short period of time. A certain amount of inflation based on hype and a flurry of consumer interest makes sense, but 73% seems a little…out of control. Upon further research, it seems that this kind of inflation is not totally unheard of, but its scale does reflect a time of extreme absurdity generally:

“These first day price pops were unusually high during the dot com bubble, when the typical pop was 65% of the offer price, well above the 7-15% range at other times. Twitter’s pop was 73%, reminiscent of the dot com mania days when investor psychology allowed companies yet to show a profit to trade at high prices on unrealistic hopes.” (From Forbes.com)
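For concreteness, the “pop” in that quote is just the percentage gain from the offer price to the first-day close. Using Twitter’s widely reported $26 offer price and roughly $44.90 first-day close (figures from contemporary news coverage, not from the Forbes piece itself):

```python
# First-day "pop": percentage gain over the IPO offer price.
def first_day_pop(offer_price, close_price):
    return (close_price - offer_price) / offer_price

# Twitter, November 2013 (widely reported figures; treat as approximate).
pop = first_day_pop(26.00, 44.90)
print(f"{pop:.0%}")  # → 73%
```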

Which raises the question- in the year 2013, well after the dot-com bubble and well into a world where hot new tech companies have been hitting the market for decades- is it possible that investors can still have the same naiveté about Twitter that investors in the late ’90s might have had? Or is there something else going on here?

To understand this, I think we have to ask the deceptively simple-sounding question: what is the “real” value of Twitter anyway?

As best I can understand, the major “value” of Twitter lies in the following:

  1. The value of their user base; 200M active users (compared to Facebook’s 1.15 billion users).

  2. The value of mobile marketing to this massive user base. Although Facebook has far and away a much larger user base, Twitter is in some ways much more strongly tied into the commercial world; it closely links the consumer with brands in a way that Facebook or other platforms do not.

  3. Twitter is theoretically an innovative software platform with long-term value.

  4. The brand. Twitter as a brand has become a kind of social institution, perhaps independent of the actual technology/platform behind the brand.


In concrete terms, Twitter has not yet had much success turning a profit on this user base. In 2012, they made $317M in sales, but overall reported a loss of $79.4M. And perhaps they simply have yet to push this commercial model as far as it can go- but it seems that there is an obvious chance of diminishing returns, wherein aggressive advertising and sales of user data may begin to drive away users.

If we suppose that they may find an innovative and non-invasive way to make a profit on their user base, then perhaps Twitter’s value is partly in their ability to innovate as a company and create new forms of commercial interaction. Yet “innovativeness” is more of a hypothetical, even symbolic value of a company, rather than a concrete and reliable factor in the long-term bedrock of a valuation.

What can be the long-term value of any simple social media platform, such as a Vine, a Snapchat, even an Instagram or Twitter? Companies like Google or Apple have more concrete products that can be measured and relied upon in the long term– Apple’s truly valuable software and hardware combined with the power of a massive commercial brand, or Google’s innovative and massively complex software combined with a deeply ingrained presence in the very use of the Internet. But for Twitter- whose wild success is arguably based on the complete simplicity of the interface and its constantly changing, updating, “hype machine” capabilities- I would argue that the real, long-term value in the product itself is very much up in the air.

To return to the naïve dot-com bubble investor, it seems that the value of these companies, at least for the moment, still relies primarily in a psychological force- the force of an idea of innovation and potential. This magic of the startup tech world is an idea that still seems to permeate our society. Perhaps we can understand this particularly well in the context of American society, where self-starting companies built on pure human innovation seem to truly embody the “American Dream”.

Yet- we are also supposedly a society that values the individual. Be that as it may, our financial institutions do not seem dreamy-eyed or full of idealism when it comes to the dollars and cents of an individual, as evidenced by the way insurance companies coldly calculate the value of a human life. There seems to be a deep perversion in this system wherein private companies like Twitter can capture the romantic imagination (and, of course, the massive investments) of financial institutions, but human life is valued with crisp realism and efficiency.



#Instalife: How is Instagram altering our practices and understanding of photography?

Instagram is a social, mobile photography app. They have 100 million monthly active users, and 40 million Instagram photos are posted per day. Purchased by the leviathan of social media that is Facebook in 2012, Instagram seems to be some kind of “big deal”.

From a commercial perspective, Instagram is an immensely easy-to-use and popular social platform, and its default setting of making all images open to the public makes it eminently available for market research (for example: I can search the tag “#target” and tap into what people are thinking and how they are reacting to Target and their stores). Instagram is also an excellent medium for brands and celebrities to communicate directly and intimately with their fans. Images are in many ways the language of the commercial world, and Instagram is a platform catered directly to that language.

Above and beyond the “market value” of Instagram, it is also changing the meaning and practice of photography at every level, from high art to the layman’s snapshots of the Eiffel Tower and cute toddlers.

In some ways, this change is simply an amplification of the changes already underway since the invention of cheap film, and then the invention of the digital photo, and then the invention of the Internet. This technological progression has created an increasingly universal and democratic practice of photography. Taking a photo is getting easier and cheaper- and furthermore sharing those photos is becoming easier and cheaper as well. With the invention of the mobile camera, if you are carrying around a phone you are also carrying around a camera. With the development of high-tech devices like the iPhone, if you are carrying around a cell phone you are carrying around an incredibly high-quality camera. And if you have Instagram, you have a way of instantly and easily editing that photo and sharing it with the world (or at least- the entire Internet world).

But these things are, in theory, true also of tools like Flickr or Tumblr or Facebook. What makes Instagram different? How does this specific tool influence our practices and understanding of photography?

Perhaps the most explicit difference between Instagram and other platforms is its unique time-frame. Instagram is not merely a photo-sharing app, but actually contains a camera within the app. The tool emphasizes instantaneity (obviously) and a minimum amount of time between taking and sharing the photo.

In this way, Instagram takes on a quality (similar to the practice of Tweeting) of inherent presentness. It is a token of the “this-now”, a visual status-update of sorts. This quality is emphasized by the common picture tag “#latergram”, used to indicate photos taken sometime in the past (typically more than a day). As was insightfully pointed out on PBS’s Idea Channel, this hashtag is particularly odd if we consider that every photograph is in some sense a “latergram”, removed from the actual moment represented in the photo, and that for most of history the point of photography was to preserve a record or image for later consumption. Yet it is true that Instagramography has a kind of flat temporality that previous forms of photography don’t. The photos tend to be taken and instantly shared, often with the photograph being taken expressly for the purpose of Instagramming, without any period of latency or consideration of the photograph. After being shared, this photograph is consumed fairly quickly, disappearing into a flow of new images within a day.

Another example of the temporality of Instagram is the hashtag #tbt, or “Throw Back Thursday”, a day where many users post heavily nostalgic pictures from years past. In a very literal sense, Instagram has only really been popular for a few years, so Instagram does not typically allow for a kind of nostalgic reminiscing in the same way old photo albums- even ones on Facebook- do. #TBT is the exception that proves the rule in this case.

Apart from these exceptions, Instagram photos tend to occupy two temporalities: the Here and Now, and the Atemporal Abstraction. Interestingly, Instagram actually quite literally addresses the HERE in here and now; beyond the universal timestamping of images, Instagram photos can also carry a location-stamp, which Instagram uses to create “Photo maps”, or maps showing SPECIFICALLY where a photo was posted. Additionally, users may add their own location tag (“MoMA”), specifying which restaurant or business they are at, or utilize a hashtag (#playoffs) to meta-label attendance at an event.

Here and Now at Yankee Stadium. http://instagram.com/_puellaludens/

An event like Hurricane Sandy is an excellent example of the unique practice and meaning of Instagramography. The storm was a huge event that nearly all residents of NYC and the East Coast in some sense “participated” in, and furthermore was the kind of thing your dad would have tried to get some Polaroids of back in 1975, as a kind of memento of the time a hurricane hit New York. In 2012, people turned to their iPhones to capture these bizarre mementos of the flooding of NYC streets and the loss of power in Times Square– and also to communally share and consume these images via social media platforms. Rather than about the preservation of an image, this practice was about the communal participation in and consumption of the Here and Now.

Apart from this kind of Instagram, there is also the Atemporal Abstraction- the photograph that does not really represent a specific time or place- although perhaps an experience, or an image. These images are more purely aesthetic, or “artistic” rather than documentary. The banal version of this might be the manicure-photograph, or the funny picture of a cat- the more artistic version might be the close-up picture of beads of dew on grass, perhaps through one of Instagram’s artsy filters.

These images gain their social capital not by being linked to a time and place, but rather the opposite; they represent something more widely accessible, based purely on aesthetic choice. I would argue that this phenomenon in Instagramography represents a shift in layman photographic practice. This is again partially due to changes in the technology. Whereas in an earlier era it may have made sense to go to the Louvre and take a picture of the Mona Lisa, this no longer makes sense in a world where you can simply Google an image of the Mona Lisa, and access the same kind of “memory”, probably through a much higher-quality photograph. For myself personally, Instagram and iPhone-ography make me feel this sense of photographic nihilism even in my everyday life. How banal is a photograph of a sunset, or the New York City skyline, even if it is incredibly beautiful to me here and now? Instead, I find myself (and I often see others as well) looking for beauty in more unsuspecting places, or appropriating and participating in the image to make it more aesthetically “valuable”.

One genre of Instagramography that I think falls into this category is something I call the “microgram”, or “abstractagram”- close-up photos of the textures and details of everyday life. These images abstract and aestheticize the banal, making them potentially more interesting to a wide world that does not share any memory or experience with you. This kind of photograph, I think, also speaks to the power Instagram has to alter the way we see the world- to make us ask ourselves “Is this beautiful? Is this interesting?” far more often and perhaps more creatively than we otherwise would.



In a world where nearly everything is being documented by someone at any given time, it is easy to fall into a kind of photographic nihilism. I might suggest that part of the power of Instagram is actually that, unlike Facebook, it encourages a kind of scarcity of photography, a more “curated” activity of sharing. Whereas Facebook allows me to upload my entire album of vacation photos all at once, Instagram only allows one upload at a time- and as a mobile platform, it usually simply doesn’t make sense to sit there in the “real world” and upload image by image. Instagram values the singular, and this is something that is incredibly rare and difficult to attain in a world so over-flooded by images of every kind. This is perhaps relatable to the popularity of the app Snapchat, which revolves entirely around an idea of scarcity: you cannot “upload” a photo, it must be taken NOW and sent NOW; the image or video clip only lasts for a maximum of 10 seconds and a minimum of 3; and it must be sent to separately selected individuals, rather than out to a mass, pre-existing network.

Instagram does not emphasize scarcity quite so heavily as Snapchat; it still in some ways indulges the idea of the photo “album”, in that individuals have “profiles” that collect individual posts and act as a kind of sleek visual journal. As Susan Sontag says, “Photographs are really experience captured, and the camera is the ideal arm of consciousness in its acquisitive mood.” There is much to be said here about the relationship between a consumerist culture and photography; iPhoneography seems undeniably to fuel the hunger to consume and hoard the world, to turn even experience into a commodity. But it seems that Instagram is a platform that alters and even in some ways disrupts this “acquisitive mood”. By enforcing a kind of “curation” of content, the consumer is forced to consider the aesthetics and perhaps hidden beauty of his experiences. Furthermore, the particular social sphere of Instagram means that users do not simply consume- they also produce, share, and participate intimately in a community of Instagrammers.

Not to be overly sentimental: this curation of content and the social sphere of Instagram encourage “selfies” of girls in bikinis at the beach just as much as they encourage photos of carefully considered shadows on a sidewalk, or of the aftermath of Hurricane Sandy in a poor neighborhood. All things considered, it is impossible to say that there is any clear overall “effect” of this technology on the entire realm of photography or photographers. But Instagram does seem to offer some new alternatives and encourage new practices of photography, perhaps even slowly altering our understanding of the meaning of photography itself in 2013.


Filed under Anthropology, New Media, Philosophy, psychology

The Politics and Power of Internet Infrastructure, Pt. 3

Please see also Part 1 and Part 2.


In the last section I considered the roles of businesses and governments in protecting “net neutrality”, or the basic neutrality of Internet conduits. Net neutrality is a subtle concept, involving the protection of a particular idea about what Internet access is and should look like. But in a world where the Internet is so very new- and already so very ubiquitous- it is still a matter up for consideration what the fundamentals of digital rights are. The biggest of these questions might be whether Internet access- whether “neutral” or not- is a fundamental human right. As a huge space of international dialogue, free information flows, and democratic action, access to the Internet seems to be a corollary to rights like free speech or education.

Although most people probably wouldn’t say that humans have a fundamental right to Internet access the same way they have a fundamental right to food or water or happiness, I also think that many would see North Korea’s complete prohibition of Internet access to its citizens as deeply fascist and possibly even inhumane.

A poll conducted by the BBC World Service in 2010 suggests that four out of five people (adult Internet users and non-users in 26 countries) felt that Internet access is a fundamental right. This is a philosophical stance, but it leads us to the more concrete question of how this right is to be protected and supported against the powers that be. We understand that the advancement of human rights probably should not be left to the discretion of private businesses, and in cases like North Korea maybe not even to the discretion of individual states. Given this, who is the proper protector and regulator of Internet access?

Professor Susan Crawford, legal scholar and board member of ICANN, suggests that Internet access ought to be treated as a utility, and that as such, the U.S. government is failing its citizens by not regulating the telecommunications companies in order to ensure universal access. She points out that as a nation, we are very good at rhetorically emphasizing the importance of Internet access, but very bad at implementing concrete policy to ensure access for our own citizens. By allowing the non-competitive, almost monopolistic control of Internet infrastructure to exist unimpeded, the U.S. is deepening the “digital divide”- the division between those who can afford Internet access and those who can’t, with huge consequences in our increasingly Internet-run world. Those who don’t have Internet access- or who have only slow or unreliable access- are less able to inform and educate themselves, less able to perform work or do homework, and less able to find jobs and other critical resources like housing. Crawford suggests that a truly “equal” society like ours would treat this essential informational tool as a utility, and regulate it to ensure that at least some form of reliable, inexpensive Internet is available to everyone in the country- in the same way that it ensures that some form of water or heat or electricity is available.

Susan Crawford is one of the many regulators overseeing U.S. policy to try to ensure the right to Internet access in this country. As with other forms of universal human rights, there also exist entire international institutions dedicated to protecting this right. The earliest of these institutions grew out of the need to establish international standards and protocols just to make sure that the international network could technically function.

The Internet Engineering Task Force, or IETF, is an extremely loose organization that emerged in the early days of the Internet we now know, in order to develop rigorously standardized protocols for data flow that allow nodes of the Internet to connect to one another, regardless of variations in hardware, location, and so on. This organization is dedicated to the purely technical task of ensuring that the Internet continues to function as an international network, even as technology develops. The IETF is also interesting because it functions in a way that nearly mimics political ideals about the Internet itself. The business of the IETF is conducted entirely by volunteers, who join open committees to answer “RFCs”, or “requests for comments”, on topics which need resolving. Decisions are made entirely through a process of rough consensus, and members act purely as individuals, even though they may be part of governments, private corporations, or non-profit institutions. Indeed, strictly speaking the IETF does not have official members- it is an activity as much as an organization. It does, however, have several more official organizations that help to oversee and support it, including the Internet Society (ISOC), an international non-profit organization.

The IETF serves as one model of an international institution that protects the fundamental capability to access the Internet. Another institution, ICANN, presents a rather different model serving a similar function.

ICANN, or the Internet Corporation for Assigned Names and Numbers (discussed in the first part of this project), similarly arose in the early days of the Internet to take over tasks of technical oversight and regulation previously conducted by the U.S. government. However, ICANN differs from the IETF in a few major ways. First of all, the IETF is primarily concerned with creating protocols and public documents for other Internet organizations to follow voluntarily. In contrast, ICANN has more direct control over the actual infrastructure of the Internet; in particular, ICANN holds control over the “root zone” of the Domain Name System. That is, it can directly change the mapping of IP addresses onto domains, and also directly modify the centralized public directory which makes these mappings available to all other Internet users. In this sense, ICANN has “teeth”- actual technological power to alter Internet access- that the IETF does not have. These “teeth” are of huge political significance as well. Parts of ICANN are still under the control of the U.S. Department of Commerce. In 2006 ICANN signed a document with the D.O.C. clarifying that the Department retained final, unilateral oversight of some of ICANN’s functions. In contrast to the IETF’s international, multi-stakeholder, distributed and agreement-based process and enforcement, ICANN is a non-profit organization still under partial control of the U.S. government, working on a model more of technical regulation and political coercion than sheer agreement. In particular, the U.S. government’s insistence on retaining some form of (currently purely symbolic) control over an organization with real technological power over an international utility- one that some consider a human right- is actively protested by many other state governments, who see this as an unjust balance of power.
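To make the stakes of root-zone control a little more concrete, here is a toy sketch in Python of hierarchical name resolution. This is not real DNS code- the server names are invented and the address is just an illustrative example- but it shows why every lookup ultimately depends on the top of the hierarchy:

```python
# A toy model of DNS-style hierarchical resolution. The root zone maps
# top-level domains (like "com") to the servers responsible for them;
# each TLD's zone then maps full domain names to addresses. All names
# and servers below are invented for this sketch.

ROOT_ZONE = {"com": "tld-com-server", "org": "tld-org-server"}  # the layer ICANN administers
TLD_ZONES = {
    "tld-com-server": {"example.com": "93.184.216.34"},  # illustrative address
    "tld-org-server": {},
}

def resolve(domain):
    """Walk the hierarchy top-down: consult the root zone first, then the TLD's zone."""
    tld = domain.rsplit(".", 1)[-1]
    tld_server = ROOT_ZONE.get(tld)
    if tld_server is None:
        # Whoever edits the root zone can make an entire TLD unreachable.
        return None
    return TLD_ZONES.get(tld_server, {}).get(domain)

print(resolve("example.com"))  # "93.184.216.34"
print(resolve("example.xyz"))  # None: "xyz" is not delegated in this toy root
```

The point of the sketch is simply that resolution is top-down: removing one entry from the root zone silently severs every name beneath it, which is exactly the kind of “teeth” described above.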
Beyond these institutions of technical regulation, many other large-scale NGOs exist to help set international standards for Internet access and regulation. The U.N., as the most obvious forum for regulating an international system like the Internet, has been central in developing several organizations and meetings surrounding Internet governance. Perhaps most prominently, the International Telecommunication Union (ITU) is a specialized agency within the U.N. Governments join the Union as “Member States”, although “private organizations” like telecommunications companies and research and development organizations may also join as non-voting members. The ITU was responsible for organizing the twin meetings of the World Summit on the Information Society (WSIS) in 2003 and 2005, which in turn founded the Internet Governance Forum (IGF) at the 2005 summit in Tunis. The Internet Governance Forum, along with the WSIS meetings, differs from the ITU in being centered around a “multi-stakeholder” governance model. This model emphasizes participation by all individuals, groups, or organizations that have some kind of “stake” in the matter being discussed. As we have encountered before, the Internet is an institution of deep and personal concern to private businesses and state governments, as well as individual citizens. Given this, it seems that the Internet is the perfect issue around which to develop a strong international system of multi-stakeholder governance. The lack of such a system seemed so glaring that the Working Group on Internet Governance (WGIG), convened between the two phases of the Summit, recommended creating one:
“(t)he WGIG identified a vacuum within the context of existing structures, since there is no global multi-stakeholder forum to address Internet-related public policy issues. It came to the conclusion that there would be merit in creating such a space for dialogue among all stakeholders. This space could address these issues, as well as emerging issues, that are cross-cutting and multidimensional and that either affect more than one institution, are not dealt with by any institution or are not addressed in a coordinated manner”.

In “Networks and States”, Milton Mueller argues that the first WSIS conference “became a mobilizing structure for transnational civil society groups focused on issues in communication and information policy”, and that the IGF “supplied an institutional venue with the potential to prolong and strengthen that network.” (Mueller, 83). In many ways, this organization introduces an entirely new form of governance- a network of networks, much like the Internet itself.  However, Mueller also notes that this highly democratic and emergent form of governance is still developing the formal mechanisms of representation and decision making needed to actually and effectively govern. This combines the age-old problems of how we create maximally democratic government institutions, and the problem of how we make and enforce law on an international scale. Although it seems that the Internet is helping us to make some headway in these areas, we also see older models of state-based hierarchical governance continuing to lead the realm of Internet governance.

As an example of this, in addition to the IGF and the World Summit on the Information Society, the ITU also sponsored the 2012 World Conference on International Telecommunications (WCIT-12). This meeting, dedicated to modifying the International Telecommunications Regulations (last updated in 1988), was restricted to the 193 member states of the ITU. Exemplifying the traditional model of state-based governance- and in this case inter-state-based governance- the conference is rumored to have proposed that the ITU take control of surveillance and filtering of Internet content, as well as the duties of ICANN and the IETF, and would furthermore potentially permit state governments to filter content, and even allow government shut-downs of the Internet if deemed necessary. These are only rumors, however, because the conference- instead of being open to the public- occurred behind closed doors. The U.S. was one of many states that ultimately did not sign the treaty. Although there are likely many motivations for this (including a possible provision removing ICANN from U.S. control), the U.S. claimed that it could not support the treaty because it did not support a multi-stakeholder approach to regulation; indeed, it seems that (in keeping with our earlier description), the U.S. did not want to make provisions regulating the Internet at all.

If the IGF represents a model of governance fitted to the higher potential of the Internet to create a more democratic and open society, able to effectively advance human rights around the world- including the right to Internet access- then the WCIT treaty represents a model of governance fitted to the ultimate power of the Internet to create a more tightly controlled and hierarchical society. It is hard to say which of these models will win out, or how they may eventually come to combine and compromise.

What is clear is that the Internet is a technology that is radically redistributing power, and that big and small businesses, state governments and the U.N., NGOs, individual citizens, and loose organizations of concerned volunteers are all working to control how this power is organized and regulated. Basic Internet infrastructure- the Internet backbone, ISPs, IP addresses and domain names, and Internet protocols- comprises the points of extreme power over the fundamental nature of the Internet. Naturally, these are also the hot-spots of political activity. These are the areas around which we, as an international civil society, must defend net neutrality and the human right to Internet access.

The Internet is an ever-changing, highly unstable force in our current world, but it is foolish to think that this means that it is invulnerable to exploitation and control by extremely powerful forces. The potentially revolutionary and powerfully humanistic nature of the Internet is not inherent, and in order to advance it, we must quickly develop new forms of revolutionary and humanistic governance and regulation- or let governments and private businesses determine the nature of our existence in this new world of communication and information.


Crawford, Susan P. Captive Audience: The Telecom Industry and Monopoly Power in the New Gilded Age. New Haven, Conn.: Yale UP, 2013. Print.
DeNardis, Laura. “The Turn to Infrastructure for Internet Governance.” Web log post. Concurring Opinions. N.p., 26 Apr. 2012. Web. <http://www.concurringopinions.com/archives/2012/04/the-turn-to-infrastructure-for-internet-governance.html>.
Goldsmith, Jack L., and Tim Wu. Who Controls the Internet?: Illusions of a Borderless World. New York: Oxford UP, 2006. Print.
MacKinnon, Rebecca. Consent of the Networked: The World-wide Struggle for Internet Freedom. New York: Basic, 2012. Print.
Mueller, Milton. Networks and States: The Global Politics of Internet Governance. Cambridge, MA: MIT, 2010. Print.
“OpenNet Initiative: Global Internet Filtering App.” ONI Internet Filtering Map. N.p., n.d. Web. 02 May 2013. <http://map.opennet.net/>.


Filed under Anthropology, New Media, Philosophy, Policy, Politics, psychology