The Prayer Nut and the Mobile Phone

I recently visited the Rijksmuseum on a trip to Amsterdam and had the pleasure of experiencing the “Art is Therapy” exhibition, a kind of “meta” exhibition in which the curators placed large, printed post-it notes next to works of art, with commentary meant to show how those works could change the viewer, inciting positive change and a kind of spiritual healing.


In particular I was drawn to the commentary laid out next to a “prayer nut”, a miniature carved wooden ball from the Middle Ages meant to act as both spiritual reminder and status symbol for its owner:

“The prayer nut is an aid to the interior life. It is specifically designed to provoke an inner state. 

There are lots of things we care about in theory, but forget about in practice. Religions understand this, and design all sorts of tools (from cathedrals to possibly the smallest of all prompts: the prayer nut) to help us keep important ideas closer to the front of our minds. Religion can be seen as a giant memory-prompting machine, always trying to get us back on track.

The nut understands our frailties: it doesn’t condemn them, it seems to respond very creatively to them…

Modern technology is very good at catering for what is urgent, but very bad at keeping us in touch with what is important. Smartphone providers have something to learn from the prayer nut.”

“Sickness: I’m always reaching into my pocket to check my phone.”

Notably, the museum’s accompanying text notes that the owner of this nut would also have been very likely to enjoy showing off such a fine work of art as this intricate prayer nut…which cannot help but make me think of the ostentatious pleasure of displaying one’s iPhone.

That said, it is a very good point that unlike the prayer nut, the phone pulls us towards the “urgent”, not the important. How could we design our machines to put us in a more reflective mode? To keep us more connected with the bigger ideas, the more meaningful narratives that drive our lives, rather than the vagaries of the current moment? How could a machine push us towards LESS use, less need, less addictive, self-centered, and impulsive activity? Towards mindfulness?


How could we respond creatively to the frailties of humanity? How could we alter our technologies to make us better people, or at least to mediate the bad habits and negative side effects that new technologies seem to give rise to?

Food for thought.



Paradigms of Accumulation & Loss in a Digital World

One phenomenon of the digital world that I think we must increasingly come to terms with is the changed and changing nature of accumulation and loss. In a sense, this new digital world is marked very strongly as “lossless” compared to previous forms of media. Especially now that everything we create is backed up automatically in the “Cloud”, it becomes increasingly difficult to lose even the most mundane emails, photos, and ticket receipts.

Unnecessary accumulation is one of the defining features of the (modern? post-post-modern?) world we live in. Accumulation of data, of IKEA furniture and other cheap and easily accessible consumer items, accumulation of massive amounts of waste and the pollutants that follow, accumulation of all the various ideas and products and thoughts of the entire history of humanity. To address this era of accumulation, we will need to learn the art of curating, of throwing away, of recycling, and even of preventing additional creation altogether.

Snapchat is just one prominent and still somewhat mysterious case where the standards of accumulation are being rethought. Instead of making accumulation and infinite storage of a sent photo the default, the app is entirely based on the premise of a default of loss. While storage is possible via the screenshot, this involves an explicit and intentional action on the receiver’s part, as well as potentially interesting social implications, since the sender is notified that the image has been saved. It is interesting to think about other ways that the paradigm of easy and automatic accumulation could be changed, whether for entertainment purposes as in the case of Snapchat, for environmental purposes, or as a way of ensuring that our digital world does not become quickly overrun with the detritus of everyday life, but instead becomes a more curated, meaningful store of our experiences.

A few examples of technologies of accumulation come to mind. An interesting one for me personally is Amazon, and online shopping in general. Especially as a Prime user, all it takes is a fleeting thought and a few clicks for me to add something to my growing collection of worldly possessions. This encourages an accumulation of things like never before. I do not have to hand anyone money, leave my house, or even really have a second thought about an item before I buy it. This obviously shifts the paradigm strongly toward accumulation. In contrast, there is no easy way to discard or recycle or pass on the objects I no longer really want or need. What’s more, I’m sure there are many smart minds in the industry figuring out how to make every THING in the world as easily or more easily acquired. (See: Seamless making your food desire only a few iPhone taps away, Amazon’s new drones bringing those items to you in less than 24 hours… and who knows what the future of 3D printing may bring.) How can we combat this basic compulsion towards accumulation? We need to begin developing technologies of curation and organization, and designing behaviors of divestment and restraint.

These technological changes need not always be radical. I currently have about 10 gigabytes of old emails filling various accounts. What if, instead of requiring me to mark something for deletion, email were automatically deleted after a set period of time (30 days? Six months?)? If you wanted to save something, you would have to intentionally, thoughtfully choose to save it. In the world of material things, the default is typically that things stick around unless you decide to get rid of them. In the world of the digital, things can disappear without a trace. Forcing us to think about accumulation and loss, and changing the dominant paradigm from passive accumulation to active conservation, may help us begin to address some of the larger issues that will only become more pressing over time.
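
The inverted default is simple to sketch. Here is a minimal, hypothetical model of such an inbox; the class names and the 30-day window are my own invention for illustration, not any real mail provider’s behavior or API:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

RETENTION = timedelta(days=30)  # hypothetical default retention window

@dataclass
class Message:
    subject: str
    received: datetime
    saved: bool = False  # flipped only by an intentional act of the user

class ExpiringInbox:
    """An inbox where loss is the default and conservation is active."""

    def __init__(self):
        self.messages = []

    def receive(self, msg):
        self.messages.append(msg)

    def save(self, subject):
        # The explicit, thoughtful choice to keep something.
        for m in self.messages:
            if m.subject == subject:
                m.saved = True

    def sweep(self, now):
        # Runs automatically: unsaved messages older than RETENTION vanish.
        self.messages = [m for m in self.messages
                         if m.saved or now - m.received <= RETENTION]

inbox = ExpiringInbox()
t0 = datetime(2014, 1, 1)
inbox.receive(Message("ticket receipt", t0))
inbox.receive(Message("letter from a friend", t0))
inbox.save("letter from a friend")
inbox.sweep(now=t0 + timedelta(days=45))
print([m.subject for m in inbox.messages])  # only the saved message remains
```

The point is just the reversed default: `sweep()` requires no user action at all, while `save()` does, which is exactly the opposite of how email works today.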

This kind of studied reflection on digital storage and loss may even be life-savingly important. In the wake of the disappearance of Malaysia Airlines Flight 370, I and many others are wondering: in an era when my smartphone tracks my trip to Costco and back, how is it possible that we do not have the data to track an airplane carrying hundreds of passengers across the ocean? The answer seems to be cost: “Although it would be possible to stream data from an aircraft in real time via satellite, implementing such a system across the industry would cost billions of dollars” (Wired, “How It’s Possible to Lose an Airplane in 2014”). Although undoubtedly still expensive, what if flight 370 had been able to simply send a live stream of GPS data via satellite, if not the full data a black box records? One has to imagine that there are innovative and cost-saving measures that could be taken to preserve this valuable data. Although this is a somewhat dramatic example, I am sure there are many cases where a simple questioning of our existing paradigms of digital accumulation would radically transform our quality of life.



Twitter’s IPO: a financial perspective on our emergent social-media world

For the past several decades, the stock market and the financial world have seemed increasingly abstracted from “the real world”, as evidenced particularly by the epic housing bubble and economic downfall of the 2000s. My rough understanding of how any valuation should work is based on my everyday consumer experience of paying for objects based approximately on the cost of materials and construction, with a bit of wiggle room for the psychological value of an object based on brand and commercial image.

Obviously, the wild world of valuation on the stock market is a much different kind of enterprise. I was struck by this anew when considering Twitter’s IPO last week, and how this flurry of financial activity related to our understanding or lack thereof about the “value” of this social media company par excellence.

I first became curious when I received the notification that Twitter’s stock price had jumped 73% from its offer price the night before. This seems an absurd inflation in price over an incredibly short period of time. A certain amount of inflation based on hype and a flurry of investor interest makes sense to me, but 73% seems a little… out of control. Upon further research, it seems that this kind of inflation is not unheard of, but its scale does recall a time of extreme absurdity:

“These first day price pops were unusually high during the dot com bubble, when the typical pop was 65% of the offer price, well above the 7-15% range at other times. Twitter’s pop was 73%, reminiscent of the dot com mania days when investor psychology allowed companies yet to show a profit to trade at high prices on unrealistic hopes.”
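
The “pop” itself is simple arithmetic. Using the widely reported figures from Twitter’s November 2013 IPO (a $26 offer price and a roughly $44.90 first-day close; these numbers are my addition, not from the quoted analysis):

```python
# Widely reported figures from Twitter's November 2013 IPO (my assumption,
# not taken from the quoted analysis above):
offer_price = 26.00        # price paid by investors who bought at the IPO
first_day_close = 44.90    # closing price after the first day of trading

# The "pop" is the first-day gain relative to the offer price.
pop = (first_day_close - offer_price) / offer_price
print(f"First-day pop: {pop:.0%}")  # ~73%
```

In other words, anyone allocated shares at the offer price saw a 73% paper gain in a single day, which is precisely the gap between what the underwriters thought the company was “worth” and what the market decided overnight.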

Which raises the question: in the year 2013, well after the dot-com bubble and well into a world where hot new tech companies have been hitting the market for decades, is it possible that investors can still have the same naiveté about Twitter that investors in the late ’90s might have had? Or is there something else going on here?

Which brings me to the essential question behind this whole phenomenon: what is the real value of Twitter?

As best I can understand, the major “value” of Twitter lies in the following:

  1. The value of its user base: 200M active users (compared to Facebook’s 1.15 billion).

  2. The parallel value of “mobile marketing” to this massive user base. Although Facebook has by far the larger user base, Twitter is in some ways much more strongly tied into the commercial world; it closely links the consumer with brands in a way that Facebook and other platforms do not (with the possible exception of a Pinterest).

  3. Twitter is theoretically an innovative software platform with long-term value.

  4. The brand. Twitter as a brand has become a kind of social institution, perhaps independent of the actual technology/platform behind the brand.


In concrete terms, Twitter has not yet had much success turning a profit on this user base. In 2012, it made $317M in sales but reported an overall loss of $79.4M. Perhaps Twitter has simply yet to push this commercial model as far as it can go, but there seems an obvious risk of diminishing returns, wherein aggressive advertising may begin to drive away users.

If we suppose that Twitter may find an innovative and non-invasive way to make a profit on its user base, then perhaps its value lies partly in its ability to innovate as a company and create new forms of commercial interaction. Yet “innovativeness” is a hypothetical value of a company, rather than a concrete, reliable bedrock for long-term valuation.

What can be the long-term value of nearly any simple social media platform, such as a Vine, a Snapchat, even an Instagram or Twitter? Even companies like Google or Apple have more concrete “products” that can be measured and relied upon in the long term: Apple’s truly valuable software and hardware combined with the power of a massive commercial brand, or Google’s truly innovative software combined with a deeply ingrained presence in the very use of the Internet. But for Twitter, whose wild success is arguably based on the complete simplicity of the interface and its constantly changing, updating, “hype machine” capabilities, the question of real, long-term value is very much up in the air.

To return to the notion of the naïve dot-com bubble investor, it seems that the “value” of these companies, at least for the moment, still lies primarily in a psychological force: the force of an idea of innovation and potential. This magic of the startup tech world is an idea that still seems to permeate our society. Perhaps we can understand this particularly well in the context of American society, where self-starting companies built on pure human innovation seem to truly embody the “American Dream”. Yet we are also supposedly a society that values the individual. Be that as it may, our financial institutions do not seem dreamy-eyed or full of idealism when it comes to the dollars and cents of an individual, as strongly evidenced by the way insurance companies coldly calculate the value of a human life. When will the time come for our financial institutions to begin valuing companies like Twitter, with all their social influence and human importance, in the same way they value a human life?



#Instalife: How is Instagram altering our practices and understanding of photography?

Instagram is a social, mobile photography app. It has 100 million monthly active users, and 40 million Instagram photos are posted per day. Purchased by Facebook, the leviathan of social media, in 2012, Instagram seems to be some kind of “big deal”.

From a commercial perspective, Instagram is an immensely easy-to-use and popular social platform, and its default setting of making all images open to the public makes it eminently available for market research (for example: I can search the tag “#target” and tap into what people are thinking and reacting to about Target and their stores). Instagram is also an excellent medium for brands and celebrities to communicate directly and intimately with their fans. Images are in many ways the language of the commercial world, and Instagram is a platform catered directly to that language.

Above and beyond the “market value” of Instagram, it is also changing the meaning and practice of photography at every level, from high art to the layman’s snapshots of the Eiffel Tower and cute toddlers.

In some ways, this change is simply an amplification of the changes already underway since the invention of cheap film, and then the invention of the digital photo, and then the invention of the Internet. This technological progression has created an increasingly universal and democratic practice of photography. Taking a photo is getting easier and cheaper- and furthermore sharing those photos is becoming easier and cheaper as well. With the invention of the mobile camera, if you are carrying around a phone you are also carrying around a camera. With the development of high-tech devices like the iPhone, if you are carrying around a cell phone you are carrying around an incredibly high-quality camera. And if you have Instagram, you have a way of instantly and easily editing that photo and sharing it with the world (or at least- the entire Internet world).

But these things are, in theory, true also of tools like Flickr or Tumblr or Facebook. What makes Instagram different? How does this specific tool influence our practices and understanding of photography?

Perhaps the most explicit difference between Instagram and other platforms is its unique time-frame. Instagram is not merely a photo-sharing app; it actually contains a camera within the app. The tool emphasizes instantaneity (obviously) and a minimum amount of time between taking and sharing the photo.

In this way, Instagram takes on a quality (similar to the practice of Tweeting) of inherent presentness. It is a token of the “this-now”, a visual status-update of sorts. This quality is emphasized by the common picture tag “#latergram”, used to indicate photos taken sometime in the past (typically more than a day). As was insightfully pointed out on PBS’s Idea Channel, this hashtag is particularly odd if we consider that every photograph is in some sense a “latergram”, removed from the actual moment represented in the photo, and that for most of history the point of photography was to preserve a record or image for later consumption. Yet it is true that Instagramography has a kind of flat temporality that previous forms of photography do not. The photos tend to be taken and instantly shared, oftentimes with the photograph being taken expressly for the purpose of Instagramming, without any period of latency or consideration of the photograph. After being shared, the photograph is consumed fairly quickly, disappearing in a flow of new images within a day.

Another example of the temporality of Instagram is the hashtag #tbt, or “Throwback Thursday”, a day when many users post heavily nostalgic pictures from years past. In a very literal sense, Instagram has only been popular for a few years, so it does not typically allow for the kind of nostalgic reminiscing that old photo albums, even ones on Facebook, do. #TBT is the exception that proves the rule in this case.

Apart from these exceptions, Instagram photos tend to occupy two temporalities: the Here and Now, and the Atemporal Abstraction. Interestingly, Instagram quite literally addresses the HERE in here and now; while timestamps on images are universal, Instagram photos also carry a location-stamp, which Instagram uses to create “Photo maps”, or maps showing SPECIFICALLY where a photo was posted. Additionally, users may add their own location tag (“MoMA”), specifying which restaurant or business they are at, or utilize a hashtag (#playoffs) to meta-label attendance at an event.

Here and Now at Yankee Stadium.

An event like Hurricane Sandy is an excellent example of the unique practice and meaning of Instagramography. The storm was a huge event that nearly all residents of NYC and the East Coast in some sense “participated” in, and furthermore was the kind of thing your dad would have tried to get some Polaroids of back in 1975, as a kind of memento of the time a hurricane hit New York. In 2012, people turned to their iPhones to capture these bizarre mementos of the flooding of NYC streets and the loss of power in Times Square, and also to communally share and consume these images via social media platforms. Rather than being about the preservation of an image, this practice was about communal participation in and consumption of the Here and Now.

Apart from this kind of Instagram, there is also the Atemporal Abstraction: the photograph that does not really represent a specific time or place, although perhaps an experience, or an image. These images are more purely aesthetic, or “artistic” rather than documentary. The banal version of this might be the manicure photograph, or the funny picture of a cat; the more artistic version might be the close-up picture of beads of dew on grass, perhaps through one of Instagram’s artsy filters.

These images gain their social capital not by being linked to a time and place, but rather the opposite; they represent something more widely accessible, based purely on aesthetic choice. I would argue that this phenomenon in Instagramography represents a shift in layman photographic practice, again partially due to changes in the technology. Whereas in an earlier era it may have made sense to go to the Louvre and take a picture of the Mona Lisa, this no longer makes sense in a world where you can simply Google an image of the Mona Lisa and access the same kind of “memory”, probably through a much higher-quality photograph. For me personally, Instagram and iPhoneography produce this sense of photographic nihilism even in everyday life. How banal is a photograph of a sunset, or the New York City skyline, even if it is incredibly beautiful to me here and now? Instead, I find myself (and I often see others as well) looking for beauty in more unsuspecting places, or appropriating and participating in the image to make it more aesthetically “valuable”.

One genre of Instagramography that I think falls into this category is something I call the “microgram”, or “abstractagram”: close-up photos of the textures and details of everyday life. These images abstract and aestheticize the banal, making them potentially more interesting to a wide world that does not share any memory or experience with you. This kind of photograph also speaks to the power Instagram has to alter the way we see the world: to make us ask ourselves “Is this beautiful? Is this interesting?” far more often and perhaps more creatively than we otherwise would.



In a world where nearly everything is being documented by someone at any given time, it is easy to fall into a kind of photographic nihilism. I might suggest that part of the power of Instagram is that, unlike Facebook, it encourages a kind of scarcity of photography, a more “curated” activity of sharing. Whereas Facebook allows me to upload my entire album of vacation photos all at once, Instagram only allows one upload at a time, and as a mobile platform, it usually simply doesn’t make sense to sit there in the “real world” and upload image by image. Instagram values the singular, and this is something that is incredibly rare and difficult to attain in a world so over-flooded by images of every kind. This is perhaps relatable to the popularity of the app Snapchat, which revolves entirely around an idea of scarcity: you cannot “upload” a photo, it must be taken NOW and sent NOW; the image or video clip only lasts for a maximum of 10 seconds and a minimum of 3; and it must be sent to separately selected individuals, rather than out to a mass, pre-existing network.

Instagram does not emphasize scarcity quite so heavily as Snapchat; it still in some ways indulges the idea of the photo “album”, in that individuals have “profiles” that collect individual posts and act as a kind of sleek visual journal. As Susan Sontag says, “Photographs are really experience captured, and the camera is the ideal arm of consciousness in its acquisitive mood.” There is much to be said here about the relationship between a consumerist culture and photography; iPhoneography seems undeniably to fuel the hunger to consume and hoard the world, to turn even experience into a commodity. But it seems that Instagram is a platform that alters and even in some ways disrupts this “acquisitive mood”. By enforcing a kind of “curation” of content, the consumer is forced to consider the aesthetics and perhaps hidden beauty of his experiences. Furthermore, the particular social sphere of Instagram means that users do not simply consume, but also produce, share, and participate intimately in a community of Instagrammers.

Not to be overly sentimental, though: this curation of content and social sphere encourages “selfies” of girls in bikinis at the beach just as much as it encourages photos of carefully considered shadows on a sidewalk or of the aftermath of Hurricane Sandy in a poor neighborhood. All things considered, it is impossible to say that there is any clear overall “effect” of this technology on the entire realm of photography or photographers. But Instagram does seem to offer some new alternatives and encourage new practices of photography, perhaps even slowly altering our understanding of the meaning of photography itself in 2013.



The Politics and Power of Internet Infrastructure, Pt. 3

Please see also Part 1 and Part 2.


In the last section I considered the roles of business and government in protecting “net neutrality”, or the basic neutrality of Internet conduits. Net neutrality is a subtle concept, involving the protection of a particular idea about what Internet access is and should look like. But in a world where the Internet is so very new, and already so very ubiquitous, it is still an open question what the fundamentals of digital rights are. The biggest of these questions might be whether Internet access, “neutral” or not, is a fundamental human right. As a huge space of international dialogue, free information flows, and democratic action, access to the Internet seems to be a corollary to rights to free speech or education.

Although most people probably wouldn’t say that humans have a fundamental right to Internet access the same way they have a fundamental right to food or water or happiness, I also think that many would see North Korea’s complete prohibition of Internet access to its citizens as deeply fascist and possibly even inhumane.

A poll conducted by the BBC World Service in 2010 suggests that four out of five people (adult Internet users and non-users in 26 countries) felt that Internet access is a fundamental right. This is a philosophical stance, but it leads us to the more concrete question of how this right is to be protected and supported against the powers that be. We understand that the advancement of human rights probably should not be left to the discretion of private businesses, and in cases like North Korea maybe not even to the discretion of individual states. Given this, who is the proper protector and regulator of Internet access?

Professor Susan Crawford, legal scholar and board member of ICANN, suggests that Internet access ought to be treated as a utility, and that as such, the U.S. government is failing its citizens by not regulating the telecommunications companies to ensure universal access. She points out that as a nation, we are very good at rhetorically emphasizing the importance of Internet access, but very bad at implementing concrete policy to ensure access for our own citizens. By allowing the non-competitive, almost monopolistic control of Internet infrastructure to exist unimpeded, the U.S. is deepening the “digital divide”: the division between those who can afford Internet access and those who can’t, with huge consequences in our increasingly Internet-run world. Those who don’t have Internet access, or have only slow or unreliable access, are less able to inform and educate themselves, to perform work or do homework, and to find jobs and other critical resources like housing. Crawford suggests that a truly “equal” society like ours would treat this essential informational tool as a utility, regulating it to ensure that at least some form of reliable, inexpensive Internet is available to everyone in the country, in the same way that some form of water or heat or electricity is.

Susan Crawford is one of many voices pushing U.S. policy to ensure the right to Internet access in this country. As with other forms of universal human rights, there also exist entire international institutions dedicated to protecting this right. The earliest of these institutions grew out of the need to establish international standards and protocols just to make sure that the international network could technically function.

The Internet Engineering Task Force, or IETF, is an extremely loose organization that emerged in the early days of the Internet we now know, in order to develop rigorously standardized protocols for data flow that allow the nodes of the Internet to connect to one another, regardless of variations in hardware, location, and so on. This organization is dedicated to the purely technical task of ensuring that the Internet continues to function as an international network, even as technology develops. The IETF is also interesting because it functions in a way that nearly mimics political ideals about the Internet itself. The business of the IETF is conducted entirely by volunteers, who join the open committee to answer “RFCs”, or “Requests for Comments”, on topics which need resolving. Decisions are made entirely through a process of rough consensus, and members act purely as individuals, even though they may be part of governments, private corporations, or non-profit institutions. Indeed, strictly speaking the IETF does not have official members; it is as much an activity as an organization. It does, however, have several more official organizations that help to oversee and support it, including the Internet Society (ISOC), an international non-profit organization.

The IETF serves as one model of an international institution that protects the fundamental capability to access the Internet. Another institution, ICANN, presents a rather different model serving a similar function.

ICANN, or the Internet Corporation for Assigned Names and Numbers (discussed in the first part of this project), similarly arose in the early days of the Internet to take over tasks of technical oversight and regulation previously conducted by the U.S. government. However, ICANN differs from the IETF in a few major ways. First of all, the IETF is primarily concerned with creating protocols and public documents for other Internet organizations to voluntarily follow. In contrast, ICANN has more direct control over the actual infrastructure of the Internet; in particular, ICANN holds control over the “root zone” of the Domain Name System. That is, it can directly change the mapping of IP addresses onto domains, and also directly modify the centralized public directory which makes these mappings available to all other Internet users. In this sense, ICANN has “teeth”, or actual technological power to alter Internet access, that the IETF does not have. These “teeth” are of huge political significance as well. Parts of ICANN are still under the control of the U.S. Department of Commerce: in 2006 ICANN signed a document with the D.O.C. clarifying that the Department still retained final, unilateral oversight of some of ICANN’s functions. In contrast to the IETF’s international, multi-stakeholder, distributed and agreement-based process, ICANN is a non-profit organization still under partial control of the U.S. government, working on a model more of technical regulation and political coercion than sheer agreement. In particular, the U.S. government’s insistence on retaining some form of (currently purely symbolic) control over an organization with real technological power over an international utility that some consider a human right is actively protested by many other state governments, who see this as an unjust balance of power.
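
To make the root zone’s power concrete, here is a toy model of the delegation chain a DNS lookup walks through. All server names and records below are invented for illustration; real resolvers cache results and handle far more complexity. The point is that whoever edits the top-level table redirects every lookup beneath it:

```python
# Toy model of DNS delegation (all data invented for illustration).
# The root zone maps top-level domains to the nameservers responsible for them.
ROOT_ZONE = {"com": "a.gtld-servers.example", "org": "ns.org-registry.example"}

# Each TLD server delegates individual domains to authoritative nameservers.
TLD_ZONES = {
    "a.gtld-servers.example": {"example.com": "ns.example.com"},
}

# Authoritative servers hold the final name-to-address records.
AUTHORITATIVE = {
    "ns.example.com": {"example.com": "93.184.216.34"},
}

def resolve(domain):
    """Walk root -> TLD -> authoritative server, as a resolver does."""
    tld = domain.rsplit(".", 1)[-1]
    tld_server = ROOT_ZONE[tld]                  # step 1: ask the root
    auth_server = TLD_ZONES[tld_server][domain]  # step 2: ask the TLD server
    return AUTHORITATIVE[auth_server][domain]    # step 3: ask the authoritative server

print(resolve("example.com"))
```

Because every resolution begins at `ROOT_ZONE`, changing a single entry there silently reroutes an entire top-level domain, which is why control over the real root zone carries such political weight.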
Beyond these institutions of technical regulation, many other large-scale NGOs exist to help set international standards for Internet access and regulation. The U.N., as the most obvious forum for regulating an international system like the Internet, has been central in developing several organizations and meetings surrounding Internet governance. Perhaps most prominently, the International Telecommunication Union (ITU) is a specialized agency within the U.N. Governments join the Union as “Member States”, although “private organizations” like telecommunications companies and research and development organizations may also join as non-voting members. The ITU was responsible for organizing the twin meetings of the World Summit on the Information Society (WSIS), held in Geneva in 2003 and in Tunis in 2005, out of which grew the “Internet Governance Forum” (IGF). The Internet Governance Forum, along with the WSIS meetings, differs from the ITU in being centered around a “multi-stakeholder” governance model. This model emphasizes participation by all individuals, groups, or organizations that have some kind of “stake” in the matter being discussed. As we have encountered before, the Internet is an institution of deep and personal concern to private businesses and state governments, as well as individual citizens. Given this, it seems that the Internet is the perfect issue around which to develop a strong international system of multi-stakeholder governance. The lack of such a system seemed so glaring that the Working Group on Internet Governance (WGIG), convened between the two summits, recommended that such a forum be formally created:
“(t)he WGIG identified a vacuum within the context of existing structures, since there is no global multi-stakeholder forum to address Internet-related public policy issues. It came to the conclusion that there would be merit in creating such a space for dialogue among all stakeholders. This space could address these issues, as well as emerging issues, that are cross-cutting and multidimensional and that either affect more than one institution, are not dealt with by any institution or are not addressed in a coordinated manner”.

In “Networks and States”, Milton Mueller argues that the first WSIS conference “became a mobilizing structure for transnational civil society groups focused on issues in communication and information policy”, and that the IGF “supplied an institutional venue with the potential to prolong and strengthen that network” (Mueller, 83). In many ways, this organization introduces an entirely new form of governance - a network of networks, much like the Internet itself. However, Mueller also notes that this highly democratic and emergent form of governance is still developing the formal mechanisms of representation and decision-making needed to actually and effectively govern. This combines the age-old problem of how to create maximally democratic government institutions with the problem of how to make and enforce law on an international scale. Although it seems that the Internet is helping us make some headway in these areas, we also see older models of state-based hierarchical governance continuing to lead the realm of Internet governance.

As an example of this, in addition to the IGF and the World Summit on the Information Society, the ITU also sponsored the 2012 World Conference on International Telecommunications (WCIT-12). This meeting, dedicated to revising the International Telecommunication Regulations (last updated in 1988), was restricted to the 193 member states of the ITU. Exemplifying the traditional model of state-based (and in this case inter-state) governance, the conference is rumored to have proposed that the ITU take control of surveillance and filtering of Internet content, as well as the duties of ICANN and the IETF, and furthermore to have proposed provisions condoning state filtering of content, and even government shut-downs of the Internet if deemed necessary. These remain rumors, however, because the conference - instead of being open to the public - occurred behind closed doors. The U.S. was one of many states that ultimately did not sign the treaty. Although there were likely many motivations for this (including a possible provision removing ICANN from U.S. control), the U.S. claimed that it could not support the treaty because the treaty did not support a multi-stakeholder approach to regulation; indeed, it seems that (in keeping with our earlier description) the U.S. did not want to make provisions regulating the Internet at all.

If the IGF represents a model of governance fitted to the higher potential of the Internet to create a more democratic and open society, able to effectively advance human rights around the world- including the right to Internet access- then the WCIT treaty represents a model of governance fitted to the ultimate power of the Internet to create a more tightly controlled and hierarchical society. It is hard to say which of these models will win out, or how they may eventually come to combine and compromise.

What is clear is that the Internet is a technology that is radically redistributing power, and that big and small businesses, state governments and the U.N., NGOs, individual citizens, and loose organizations of concerned volunteers are all working to control how this power is organized and regulated. Basic Internet infrastructure - the Internet backbone, ISPs, IP addresses and domain names, and Internet protocols - comprises points of extreme power over the fundamental nature of the Internet. Naturally, these are also the hot-spots of political activity. These are the areas around which we, as an international civil society, must defend net neutrality and the human right to Internet access.

The Internet is an ever-changing, highly unstable force in our current world, but it is foolish to think it therefore invulnerable to exploitation and control by extremely powerful forces. The potentially revolutionary and powerfully humanistic nature of the Internet is not inherent; in order to advance it, we must quickly develop new forms of revolutionary and humanistic governance and regulation - or let governments and private businesses determine the nature of our existence in this new world of communication and information.


Crawford, Susan P. Captive Audience: The Telecom Industry and Monopoly Power in the New Gilded Age. New Haven, Conn.: Yale UP, 2013. Print.
DeNardis, Laura. “The Turn to Infrastructure for Internet Governance.” Web log post. Concurring Opinions. N.p., 26 Apr. 2012. Web.
Goldsmith, Jack L., and Tim Wu. Who Controls the Internet?: Illusions of a Borderless World. New York: Oxford UP, 2006. Print.
MacKinnon, Rebecca. Consent of the Networked: The World-wide Struggle for Internet Freedom. New York: Basic, 2012. Print.
Mueller, Milton. Networks and States: The Global Politics of Internet Governance. Cambridge, MA: MIT, 2010. Print.
“OpenNet Initiative: Global Internet Filtering App.” ONI Internet Filtering Map. N.p., n.d. Web. 02 May 2013.



The Politics and Power of Internet Infrastructure Pt. 2

Please see the first part of this project here.


In my last post I described the centralized control of the Internet’s backbone by only five major companies around the world. This is an example of horizontal integration at the level of Internet service provision. Thus far, this horizontal integration has posed few problems. However, when combined with vertical integration, this centralized control becomes far more problematic in the world of Internet politics.

Roughly speaking, horizontal integration of the Internet backbone represents a higher level of control over the Internet, while vertical integration represents more direct control over the “consumer” end of the Internet. Vertical integration in Internet companies is seen as particularly dangerous because it threatens the basic end-to-end architecture of Internet networks I described earlier. When ISPs gain greater control not just of the infrastructural backbone but also of the network “downstream”, closer to Internet consumers, they must walk a fine line between simply creating and maintaining infrastructure and manipulating that infrastructure in ways that fundamentally change the nature of Internet access. Vertical integration of high-level ISPs is dangerous because it makes it very easy, and very tempting, for companies to modify the network in ways beneficial to themselves.

This issue of manipulating the conduits of the Internet in ways that change the nature of Internet access is at the heart of the idea of “net neutrality”. The phrase, introduced into Internet discourse by Tim Wu, a professor at Columbia Law School, generally refers to the idea that all data traveling on the Internet ought to be treated equally. The notion of net neutrality forms a kind of baseline for our understanding of the appropriate regulation of the Internet - it describes a kind of basic social contract about the Internet. Yet the term has many possible practical manifestations and meanings. In its 2005 “Broadband Policy Statement”, the FCC stated that:

“To encourage broadband deployment and preserve and promote the open and interconnected nature of the public Internet, consumers are entitled to:

access the lawful Internet content of their choice…

run applications and use services of their choice, subject to the needs of law enforcement…

connect their choice of legal devices that do not harm the network…

competition among network providers, application and service providers, and content providers.”

This broad statement about “net neutrality” and the rights of Internet users sets a kind of baseline expectation for Internet infrastructure and services within the United States. Yet not all countries share this basic understanding, and even in the U.S. the FCC is having trouble finding legal grounds to effectively enforce these measures. It is fairly well known that in countries like China, where there is a radically different understanding of permissible free speech and much tighter control of media outlets, the range of access to “lawful Internet content” of the consumer’s choice is much smaller. In order to maintain this control over the Net, the Chinese government does not simply create laws about content and hope that citizens follow them; it maintains a large complex of surveillance, monitoring, and censorship via technological means and via control of private corporations (I will discuss the implications of this in the third section).

This practice of regulating behavior by modifying the conduits of the Internet seems to violate the traditional definition of network neutrality, and likely even the FCC’s definition thereof. However, this is a fairly common model of intervention in many places around the world.

That said, many would also consider the United States to be in violation of “net neutrality”, even by its own definition set out by the FCC. The U.S. government is very weak in terms of its regulation of the Internet, and in particular the Telecommunications Act of 1996 made deregulation of media ownership the norm. This means that there is very little done in the way of ensuring the FCC’s fourth provision entitling consumers to “competition among network providers, application and service providers, and content providers.”

In a recent case, the FCC determined that Comcast, a network provider that also provides video content, should not be allowed to slow down Internet service or otherwise interfere with customers using peer-to-peer software (software often, but not exclusively, used for illegal file sharing). This seems to be a clear case of an owner of Internet conduits modifying those conduits out of self-interest - a violation of net neutrality. However, the decision was overturned by a federal court, which ruled that the FCC lacked the jurisdiction to enforce it. In this case we see that the work of “Internet governance” - deciding the basic functionality of the Internet and the rights of Internet users - is being largely carried out by private businesses. In contrast, in China the government is the primary determinant of network policy and exerts direct control over the network. From these two examples we can see that definitions of “net neutrality”, and the social contracts between citizens and their governments about how media ought to be regulated (or not), vary widely from region to region.

It is interesting that, although there seems to be little widespread concern about violations of net neutrality by private companies, citizens do seem to be against more traditional control over behavior via law enforcement. The widespread SOPA/PIPA protests in January 2012 can be read as evidence that citizens do not find it acceptable for the government to intervene in the functioning of the Internet. The bills were ostensibly meant as measures to protect intellectual property; interestingly, many ISPs have actively taken measures to regulate piracy through various intrusive means, including “throttling”, or slowing down data flows to recalcitrant downloaders. Yet this privately enacted regulation receives very little criticism. There are many other possible variables to consider here, but it seems that in the world of Internet politics, private businesses may be seen as more appropriate regulators than the government. This is extremely important, because it sets a precedent of direct technological control and regulation, rather than behavior-based or legal regulation. I will consider the implications of this precedent in a later section.

The above cases suggest a question: how is “net neutrality” brought about - through government regulation, or through free markets? Do we need governments to prevent the radical horizontal and vertical integration that allows companies to modify Internet access to suit their own interests? Or do we need to prevent governments from surveilling and filtering online content, potentially overstepping the bounds of mere law enforcement and becoming censors?

Based on the extremely limited evidence described above, it seems that U.S. citizens see free markets as the key to net neutrality, and government regulation as potentially destructive towards free, neutral access.

To consider this question further, let’s examine a few cases of government regulation versus free-market scenarios.

In the U.S., we largely have a “free market” scenario. As mentioned, the Telecommunications Act of 1996 deregulated the market in the hopes of encouraging competition and lowering the barrier to entry for new businesses. We also have very few laws regulating content, and virtually none enforcing content regulation through technical means (i.e., requiring private businesses to actively regulate content in accordance with U.S. law). In fact, the U.S. has many laws that protect private businesses from being held liable for hosting illegal content posted by third parties.

In contrast, in much of the E.U., governments enforce or encourage competition between ISPs through a variety of legal measures. As a result, prices for Internet service in the E.U. are much cheaper, and broadband much faster, than in the U.S. This competition eliminates the problem seen in the U.S., where private companies can slow down service to competing content providers or throttle Internet speeds for peer-to-peer users. That said, in the E.U. it is also (slightly) more common to enforce technical filtering of Internet content. Many countries maintain blacklists of domain names that ISPs are required to block; typically these are exclusively directed at child pornography sites, but recently the U.K. ordered ISPs to block The Pirate Bay, a site primarily used for illegal downloading. In “Beyond Denial”, Ronald Deibert and Rafal Rohozinski suggest that even this minimal (and widely supported) government regulation on a technical level sets a dangerous precedent:

“The convenient rubric of terrorism, child pornography, and cyber security has contributed to a growing expectation that states should enforce order in cyberspace, including policing unwanted content. Paradoxically, advanced democratic states within the Organization for Security and Cooperation in Europe (OSCE)—including members of the European Union (EU)—are (perhaps unintentionally) leading the way toward the establishment of a global norm around filtering of political content with the introduction of proposals to censor hate speech and militant Islamic content on the Internet. This follows already existing measures in the UK, Canada, and elsewhere aimed at eliminating access to child pornography.”

This kind of filtering at the level of ISPs can be seen both as a potential violation of net neutrality, in that data sent across the network is searched and discriminately filtered based on content, and as a violation of free speech. Child pornography, of course, should not be protected by the ideals of net neutrality or free speech, but simply deploying this technology opens the door to more serious content filtering and violations of net neutrality.

In these two cases, we see two inverse sets of power relations with regard to Internet infrastructure, with different resulting conditions of “net neutrality”. In the U.S., the balance of power lies heavily with ISPs and telecommunications companies. This limits the risk of government-enforced surveillance and content filtering, but increases the risk of content throttling and the kind of monopolistic control that allows ISPs to modify Internet conduits for their own gain, without allowing consumers a choice of alternative, unmodified Internet access. In the U.K. and around the E.U., the balance of power lies more heavily with governments. This increases competition and makes Internet access less constrained by commercial interests (imagine a world where one did not need to choose between Comcast’s pricey “triple play” bundling, or where having Verizon’s extremely fast FiOS connection did not mean exorbitant prices, loss of telephone service during power outages, and the inability to switch back to copper-based service, since Verizon removes the old lines to prevent this). But it also means that governments are more ready to enforce content filtering of a kind that potentially violates rights to net neutrality as well as freedom of speech.

This relative balance of power between state governments and private companies greatly influences (or possibly reflects?) local understandings and practices of “net neutrality”. As I will explore in the next section, deeper questions about “digital rights” - in particular, whether Internet access is a basic right - are greatly influenced by the power of NGOs relative to both the businesses that own Internet infrastructure and the governments that regulate them.



The Politics and Power of Internet Infrastructure

We traditionally understand the world to be controlled primarily by technologies of violence and destruction. Who controls these technologies, and how they are regulated, are arguably the most important factors determining the global political landscape. Nuclear proliferation almost entirely determined international relations during the Cold War era, and North Korea’s recent nuclear posturing attests to the continued importance of these technologies. Within the U.S., questions of power through weapons still exert a dominating force on domestic politics. While countries like China do not allow their citizens to own firearms, the U.S. Constitution affirms the right of citizens to keep and bear arms. Yet exactly who may keep and bear arms, under what conditions and with what stipulations, is a question being seriously debated in the United States today, one that raises fundamental questions about the power of the government versus the power of the people.

Yet we live in an age where politics and power are driven increasingly by technologies not of violence but of information. Although violent technologies will always remain important, what is becoming increasingly essential to international and national politics and structures of power is the regulation and control of information. In particular the Internet, that behemoth of information and communication quickly absorbing all previous forms of communication (television, radio, print, telephone, mail), is already a technology with political power on the scale of weapons of mass destruction. The following is a consideration of how this new form of information technology is manifesting radically new forms of power, reconfiguring a landscape previously determined largely by technologies of violence.

The Internet is a globalized network composed of various levels of hardware and software, crossing lines of government jurisdiction, and rapidly evolving since its birth only a few decades ago. Because of its relative newness and its complex, international nature, the Internet is still a relatively unregulated place, a kind of global “wild west”. And yet - although regulation in the traditional legal sense remains relatively weak - clear power structures are rapidly emerging and crystallizing around certain aspects of the Internet. In particular, basic Internet infrastructure and what are called “critical Internet resources” are the areas around which these new power structures are emerging and quickly sedimenting. It is at this infrastructural level that the basic nature of the Internet is determined, with huge political implications for state governments, private businesses, and the citizens of the world.


It is worth spending some time considering the basic infrastructure of the Internet in order to understand how certain power structures arise out of this technological base.

First of all, it is important to note that the Internet was built as a highly distributed network with a large degree of decentralization and flexibility. The Internet was partially designed as a structure of communication meant to allow maximum sharing of resources across a wide geographical region with a minimum of failure or error. With this in mind, “ARPANET”, a project funded by the U.S. Department of Defense, was developed as the early predecessor of the modern Internet. The network utilized flexible information flows and redundancy measures to ensure that parts of the network could be cut out without drastically affecting the entire structure, thus creating a communication network potentially able to survive a nuclear attack taking out centralized hubs of communication (an ironic example of how technologies of destruction shape even apparently mundane technologies of information).
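The redundancy idea described above can be sketched in a few lines of code. The toy topology below is entirely hypothetical, but it shows the core point: in a mesh with redundant links, knocking out a single node still leaves the remaining nodes able to reach one another.

```python
from collections import deque

# A hypothetical four-node mesh with redundant links; no node is a
# single point of failure for the others.
network = {
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "D"},
    "D": {"B", "C"},
}

def reachable(graph, start, removed=frozenset()):
    """Return all nodes reachable from 'start', skipping 'removed' nodes."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in seen and neighbor not in removed:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

# Even with node "B" knocked out, A still reaches D by routing through C.
print("D" in reachable(network, "A", removed={"B"}))  # True
```

A centralized hub-and-spoke network would fail this test: remove the hub and every remaining node is isolated, which is precisely the fragility ARPANET's designers wanted to avoid.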


Contributing to the flexibility of the Internet is its “end-to-end architecture”: the network itself is built with almost no regulation (i.e., it is completely “neutral”) and with only as many protocols as are necessary to make the different components of the Internet compatible. The idea behind this is that Internet users have such a wide variety of uses and needs that, rather than limit the network by building in more features - for example, an automatic encryption feature that might be useful for some users but would simply slow things down for users who do not need encryption - additional features could be added at the ends (encryption taking place locally on the computers of those who need it). Because the particularized regulation of the Internet is done at the “ends” of the network, in an ad hoc fashion, overall regulation is much more difficult to implement. Imagine a toll road where, instead of building the road so that users are more or less forced to pass through a tollbooth, tolls had to be collected by sending individual bills to the home of each driver. The system would be incredibly cumbersome, perhaps even to the point of making the tolls cost more to collect than they bring in.
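A toy sketch can make the end-to-end idea concrete. In the hypothetical code below, the “network” function simply forwards opaque bytes, while the optional feature (here a trivial XOR scramble, emphatically not real encryption) lives entirely at the two endpoints:

```python
def toy_scramble(data: bytes, key: int) -> bytes:
    """XOR each byte with a key. A stand-in for endpoint encryption;
    NOT secure, purely illustrative."""
    return bytes(b ^ key for b in data)

def network_forward(packet: bytes) -> bytes:
    """The 'dumb' core network: it neither inspects nor transforms
    the payload, it just moves bytes from one end to the other."""
    return packet

# The sender scrambles, the network blindly forwards, the receiver
# unscrambles. XOR is its own inverse, so the same function undoes it.
key = 42
sent = toy_scramble(b"hello", key)
received = network_forward(sent)
print(toy_scramble(received, key))  # b'hello'
```

The design point is that `network_forward` never needs to change when endpoints invent new features; all the intelligence stays at the edges, which is exactly why imposing regulation inside the network is so awkward.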

This basic state of deregulation and flexibility in the technological base of the Internet has led many people to suggest that the Internet is inherently “free” and “egalitarian” (related to the idea that the Internet is inherently revolutionary and democratic). However, this “free” and non-hierarchical Internet infrastructure is managed and regulated by very powerful intermediaries. On top of this infrastructure exist many layers of organization and software to make the Internet usable as we know it, and two of these layers involve highly centralized and powerful intermediaries.

The first of these layers is that of domain naming and numbering. Domain naming and numbering is the basic process that allows information to flow from one side of the world to the other and reach the correct destination. Each individual node in the network has an IP address which other nodes use to contact it. Furthermore, these IP addresses are typically mapped to domain names, which are what end users use to identify and request contact with another node. This is fairly similar to a phonebook: the user knows the name of someone they want to contact, looks the name up, and gets back an address they can use to reach them. This is of course an over-simplification; on the Internet, in keeping with its typical structure of distributed and flexible networks, these IP addresses change fairly frequently, many domains (like Google, for example) map to large numbers of different IP addresses, and so on.
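In code, this “phonebook lookup” is a short call into the operating system’s resolver. The sketch below uses Python’s standard `socket` module; it queries `localhost`, which resolves without any network access (a real domain would typically return several addresses, as noted above).

```python
import socket

def resolve(domain):
    """Look up a domain name and return its IP address(es),
    like looking up a name in a phonebook to find an address."""
    # getaddrinfo returns one record per known address; large sites
    # commonly return many. info[4][0] is the address string itself.
    results = socket.getaddrinfo(domain, None, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in results})

# "localhost" is defined by the local machine, so no DNS server is needed.
print(resolve("localhost"))  # typically includes '127.0.0.1' (and '::1' with IPv6)
```

Every such lookup ultimately depends on the DNS hierarchy rooted in the “root zone”, which is what makes control of that zone so politically significant.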

This complex system of addresses and names, which allows for the organized flow of information across a highly complex international network, is managed almost entirely by a single organization: ICANN, the Internet Corporation for Assigned Names and Numbers. Although its role may seem largely bureaucratic and organizational, the organization is at heart immensely powerful, and holds huge political significance in the world of Internet regulation. To invoke another metaphor, ICANN’s role is somewhat like that of an international map-maker/land-owner, who not only maps out what territory exists and whom it belongs to, but also has the ability to make some of these territories “invisible” or “inaccessible” to the rest of the world, or to take territories away when it believes the owner has no right to them. As in the real world, these “territories”, or domain names, are in fact incredibly valuable: consider what a major company’s domain name is worth, and the value lost if ICANN misdirected traffic to it even for a few hours. Consider that ICANN can “divest” domain names and IP addresses from sites engaged in copyright infringement, the sale of illegal drug paraphernalia, and other offenses or crimes. In this way, ICANN is in fact a powerful political tool, with the ability to fundamentally shape the content of the Internet and regulate human behavior online. What makes this technical ability all the more politically important is the relationship that ICANN, as a private non-profit corporation based in California, has to the U.S. government. I will explore this particular power relationship in a later section.

The second layer of Internet infrastructure that involves a highly centralized and powerful intermediary is the level of the “Internet backbone”. The Internet is essentially composed of linkages between smaller clusters of networks, and these inter-network linkages are the Internet backbone. Originally built by the National Science Foundation, most of these huge fiber-optic cables are now privately owned by a small number of Internet Service Providers, or ISPs. The ISPs owning portions of the Internet backbone are typically called “Tier 1” ISPs, and there are only about five of them in the entire world.

These Tier 1 companies all agree to share the information flowing through their chunks of the Internet backbone through “peering agreements”. Because each of these Tier 1 ISPs holds an approximately similar market share, and because their interconnection is essential for all parts of the Internet to be connected to all other parts (rather than having five fragmentary “internets”), they exchange this traffic with each other free of charge. Smaller providers, however, must pay a Tier 1 provider for access to the Internet backbone.

Like ICANN, these Tier 1 providers act as a kind of centralized point of power over an otherwise largely flexible, dispersed, and difficult-to-control network. Also like ICANN, although these private companies control the basic functioning of the Internet, there seems little reason to believe they will use these powers in any significantly damaging way. Rather, the current significance of these providers lies in a more subtle form of power. Whereas ICANN establishes a precedent for NGO-government interaction in the regulation of the Internet, Tier 1 arrangements are establishing a precedent for highly centralized control of both the backbone of networks and the consumer market, in terms of both structure and content (horizontal and vertical integration). While horizontal integration is a fundamental characteristic of the Internet (connecting nodes to each other), vertical integration greatly amplifies the power these already very powerful private businesses have over consumer access to information and communication technologies. This vertical integration becomes particularly meaningful in its relationship to state governments, which not only allow radical vertical integration, but take advantage of these centralized points of control in order to control and regulate behavior on the Internet.

NEXT: Part 2: Network Neutrality

