Some Preliminary Thoughts on the “End of Work”

The hypothetical future end of work is a dream that has been around since the earliest days of human civilization, yet the information revolution (paired with an ongoing period of economic recession) has prompted a renewed and newly believable consideration of the idea that someday we may simply run out of jobs.

At the heart of it, there can be no doubt that the technological developments of the past few decades have radically changed the workforce and will continue to change the nature of our daily work. Indeed, I recently found myself asking exactly what it was that people *did* before email, given that so many desk jobs seem to primarily entail reading and answering emails.

While this is partially meant in jest, I think it does indicate that an equal if not larger impact of new technologies has been to reformulate the exact nature of the work we now do in conjunction with computers. The theory of technology replacing work seems to assume a finite amount of work to be done, which of course is not true. The issue seems more likely to be that in the current moment, a sudden influx of efficient machines has eliminated a good number of jobs, and we haven't quite figured out the new roles that can be created: roles that look more like "thought partners" in a technologically supported environment than number-crunching workhorses.

These are just some preliminary thoughts on an extremely complicated subject, inspired by a few interesting recent pieces:

The Atlantic's detailed exploration of the theoretical causes behind the end of work, and of what this new era may look like. While I think its analysis of the changes wrought by technology is simplistic, it is certainly worth reading for its many data points and insights from various fields.

Planet Money has done a series of podcasts on the end of work, but I was particularly captivated by a recent episode which used a science fiction audio drama to explore the idea of a workless future. A very engaging listen, and one that raises provocative questions and hypotheses about what it would be like to really live in a world without work.

Leave a comment

Filed under Uncategorized

“MRW”: Remix Culture and the Reaction Gif

This post originally appeared on the gnovis website.

The remix has been a subject of growing concern and intellectual debate over the past few decades. It has gone from a relatively circumscribed musical practice to an essential element of our entire creative culture, with notable examples ranging from Grumpy Cat to Warhol's screen prints.


As described by Eduardo Navas, “remix culture can be defined as the global activity consisting of the creative and efficient exchange of information made possible by digital technologies that is supported by the practice of cut/copy and paste.” Just as digital technologies have made remix an increasingly accessible creative practice, the Internet has played an essential role in circulating remixed works and helping to support our flourishing remix culture.

But remix culture isn’t all fun and games. It has also garnered a significant amount of attention for being a politically charged and potentially transgressive practice, raising questions about intellectual property and changing notions of creativity and originality. Harvard law professor Lawrence Lessig devoted an entire book to our growing remix culture and its legal and cultural implications.

Despite all the hubbub, I have to question how pervasive this practice actually is, particularly at the level of the standard media consumer. Sometimes it can be hard to see this "remix culture" that supposedly surrounds us when we are up to our ears in professionally produced multi-million dollar mass media products. Even remix itself has been co-opted by the Hollywood industrial complex. Yet there is one particular phenomenon that exemplifies the power and pervasive everydayness of remix: the gif! The gif is a very simple and accessible form of remix that draws directly on the power of mass media, but subverts it for everyday kinds of creativity and expression.

For those of you who don't know, a "gif" refers to a specific file format, like .jpg or .png. This format is notable for its small size and its ability to support looping animation. But the gif is more than just a format. In a blog post announcing a move away from the actual gif format as it nears obsolescence, image hosting site imgur claimed that "GIFs are no longer about .GIFs–the culture of the GIF now trumps the file format." The gif is a very simple technology that has led to a very complex set of media practices and consumption.
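As an aside on the format itself: the "simple technology" really is simple. The sketch below (my own illustration, not from the original post) reads the first 13 bytes of a GIF file, which is enough to verify the signature and recover the image dimensions:

```python
import struct

def read_gif_header(data: bytes):
    """Parse the 13-byte GIF header: signature, version, and dimensions."""
    signature, version = data[:3], data[3:6]
    if signature != b"GIF":
        raise ValueError("not a GIF file")
    # Logical screen descriptor: width and height are little-endian uint16
    width, height = struct.unpack("<HH", data[6:10])
    return {"version": version.decode("ascii"), "width": width, "height": height}

# A hand-built header for a hypothetical 320x240 GIF89a
header = b"GIF89a" + struct.pack("<HH", 320, 240) + b"\x00\x00\x00"
print(read_gif_header(header))  # {'version': '89a', 'width': 320, 'height': 240}
```

The looping behavior that defines the gif aesthetic lives elsewhere in the file, in an application extension block, but even this tiny header shows how little machinery underlies the whole culture.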

In comments sections across the Internet there is an ongoing and heated debate about the proper pronunciation of the word "gif". The original creator of the file format intended it to be pronounced with a soft g, as in "jif", but when did facts ever stop a good Internet argument?

Andy Baio makes the interesting and somewhat tongue-in-cheek argument that while "jif" may be appropriate when talking about the file format, perhaps "gif" is the appropriate name for the actual practices of giffing. As he says, "'JIF' is the format. 'GIF' is the culture." While the pronunciation debate is totally arbitrary in the end, I would argue that these two pronunciations highlight the split between the original technology and the culture it has spawned. This split explains why you might hear someone say "I love gifs!" but not "I love jpgs!". When we say "gifs" we are not referring to a format (not to "jifs"); we are referring to a broad set of cultural practices with their own aesthetics and communities of use.

In the wide world of gifs, I am particularly interested in the reaction gif, which specifically captures emotional and bodily reactions.


MRW my parents found a pack of cigarettes in my 13-year-old brother's backpack, and he responds with "But I only smoke when I'm drunk!"

These gifs are typically used in online conversations to illustrate a specific reaction to a situation or comment. They are particularly interesting because they serve as a form of personal expression. Through the reaction gif, the content of tightly controlled mass media products becomes a tool for everyday interpersonal communication. Remix is known as a strategy for turning the media consumer into the media producer, and with the reaction gif the remixer does not need any real expertise or skills in order to take this content and repurpose it for their own specific ends. As such, this pervasive and incredibly accessible form of remix is an interesting point of entry for understanding the role of remix practices in our culture more generally.

In keeping with Marshall McLuhan, I want to ground my analysis of the reaction gif in the medium itself. Carl Goodman, the director of the Museum of the Moving Image in New York, has said that "The GIF occupies very fertile ground between the still and the moving image." The gif is a kind of phenomenological hybrid of photography and film. A gif is not truly still like a photograph, but its temporal scope is so limited that there is no continuous flow to the gif as there is in a piece of film. This limited temporal scope is just enough to carry a single slice of the dynamic, captivating movement that characterizes film, but the short duration and infinite looping allow this moment to be closely examined and analyzed, as with a photograph.

Hampus Hagman argues that the gif captures the essence of cinema: movement. But its extremely limited temporal scope strips away the narrative which surrounds and contextualizes that movement. Hagman describes the resulting content as "gesture", suggesting that the movement captured by the gif may be stripped of a larger narrative, but still carries a particular kind of meaning.


The gif has a unique ability to capture and isolate bodily gesture. This makes the reaction gif seem like a natural, almost obvious use of the medium. In the context of our woefully disembodied communications environment, the reaction gif becomes a powerful communicative tool, restoring the meanings carried so powerfully and elegantly by bodily actions.

The reaction gif acts as a particular kind of semantic unit ready to be inserted into the flow of any conversation, like a kind of uber-emoji. It is not just a form of entertainment- it is a tool which allows us to enhance and augment our primarily text-based online communications.

Of course, unlike in face-to-face communication, this semantic meaning is not enacted by our own bodies but via the bodies of mass media. Through the reaction gif, Jack Nicholson's face becomes a tool to express how I feel about Monday mornings. There is a particular humor in making these deeply spectacular (in the Benjaminian sense of the word) bodies enact our everyday and mundane lives. The reaction gif (as opposed to the fan-gif or other practices) functions precisely by playing with the distance between its current use and the original narrative context. Big Bird communicates the experiences of a drunk teen, a cat communicates the experience of dropping a tiny screw, an underpaid mall cop communicates the experience of a 20-something girl dealing with drama amongst her friends. This is a quintessential example of the context collapse that characterizes so many new media practices and products. Reaction gifs play with an ironic reuse of mass media, bending, subverting, and distorting the original meaning by putting it in a new context.

This ironic play between different levels of meaning, drawing on both the original source material and the new context into which it is inserted, requires a particular kind of expertise. It requires a particular cultural literacy and an ability to reinterpret and reimagine what is given to you via mass media. It requires expertise in remix. This is not a technical expertise; indeed, one need not actually make anything at all. Reaction gifs exist in abundance across the web, in searchable databases, ready to be plucked and repurposed for a thousand different conversations. Instead, this is a kind of conceptual expertise. While other forms of remix may remain inaccessible because they require extensive technological expertise and/or dedicated creative drive, the reaction gif has almost no barriers to entry. It shows that remix does not just manifest itself through epic remix videos or highly produced and legally questionable mashup albums, but often through much smaller remix-acts that pervade the life of everyday prosumers. And it is indeed an act. The reaction gif is not an objet d'art; it is a tool for communication. In the reaction gif we see that remix not only pervades our culture, but has infiltrated our first and most basic form of media: language.



  1. Baio, Andy. "'JIF' Is the Format. 'GIF' Is the Culture." Medium. Medium, 29 Apr. 2014. Web. 7 Dec. 2014.
  2. Fish, Adam. "Remix Culture Is a Myth." Savage Minds. N.p., 12 Apr. 2010. Web. 7 Dec. 2014.
  3. Hagman, Hampus. "The Digital Gesture: Rediscovering Cinematic Movement Through Gifs." Refractory. University of Melbourne, 29 Dec. 2012. Web. 7 Dec. 2014.
  4. McKay, Sally. "The Affect of Animated GIFs (Tom Moody, Petra Cortright, Lorna Mills)." Art & Education. Art & Education, 14 Sept. 2009. Web. 7 Dec. 2014.
  5. Lessig, Lawrence. Remix: Making Art and Commerce Thrive in the Hybrid Economy. New York: Penguin, 2008. Print.
  6. Lethem, Jonathan. "The Ecstasy of Influence." Harper's Magazine. Harper's Magazine, Feb. 2007. Web. 7 Dec. 2014.
  7. Navas, Eduardo. "Regressive and Reflexive Mashups in Sampling Culture." Mashup Cultures (2010): 157-177.
  8. Uhlin, Graig. "Playing in the Gif(t) Economy." Games and Culture (2014): 1555412014549805.


Leave a comment

Filed under New Media, UI, Uncategorized

iBeacons! Or: How to ruin a perfectly good disruptive technology by using it for spam

"iBeacon" is a new technology currently in the throes of the same questionable phase of adoption and experimentation as products like Google Glass, Nest, and many other technologies in the IoT/context-aware/augmented reality department. The iBeacon is interesting because it has a relatively low barrier to entry, it is a simple enhancement of existing location services technology, and it is already being implemented commercially. It also has enormous potential for radically new and futuristic applications, and equally enormous potential for imminent failure.

iBeacon is a protocol developed by Apple which allows mobile devices to communicate with locally placed "beacons" through Bluetooth Low Energy signals. This technology can be thought of as a new and improved version of Apple's Location Services, which tracks information about a device's current location through GPS and delivers that information to a variety of location-based apps like Yelp, Google Maps, and Tinder. The difference is that locally placed "beacons" make this detection much, much more accurate, allowing apps to track your precise location. For instance, these apps can detect when you enter a store, when you are walking down a particular aisle, or when you head toward the checkout. (For some great further reading on beacon technology, see my list of links at the end of this post.)
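For the technically curious, here is a rough sketch of what a beacon actually broadcasts. The payload values below are my own invention for illustration, but the layout (a deployment UUID plus "major" and "minor" values that an app might map to, say, a store and an aisle) follows the widely documented iBeacon advertisement format:

```python
import struct
import uuid

def parse_ibeacon(mfg_data: bytes):
    """Parse Apple iBeacon manufacturer data from a BLE advertisement.

    Layout: company ID 0x004C (little-endian), type 0x02, length 0x15,
    16-byte proximity UUID, major/minor (big-endian uint16), TX power (int8).
    """
    company, btype, blen = struct.unpack_from("<HBB", mfg_data, 0)
    if (company, btype, blen) != (0x004C, 0x02, 0x15):
        raise ValueError("not an iBeacon advertisement")
    beacon_uuid = uuid.UUID(bytes=mfg_data[4:20])
    major, minor, tx_power = struct.unpack_from(">HHb", mfg_data, 20)
    return beacon_uuid, major, minor, tx_power

# A hypothetical payload: one deployment's UUID, store #1 (major), aisle 7 (minor)
payload = (struct.pack("<HBB", 0x004C, 0x02, 0x15)
           + uuid.UUID("e2c56db5-dffb-48d2-b060-d0f5a71096e0").bytes
           + struct.pack(">HHb", 1, 7, -59))
print(parse_ibeacon(payload))
```

The TX power field is what makes the "precise location" claim work: the phone compares the received signal strength against this calibrated value to estimate its distance from the beacon.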

This technology is definitely "new" in a technical sense, but how can we describe and qualify this newness? Let us begin with that most classic means of describing new technologies: "disruptive". According to the official party line established by Clayton Christensen, something is disruptive when it "allows a whole new population of consumers at the bottom of a market access to a product or service that was historically only accessible to consumers with a lot of money or a lot of skill". The iBeacon definitely doesn't seem like that kind of technology; it is much closer to what Christensen describes as a "sustaining" innovation. Rather than catering to the bottom of the market neglected by incumbent leading companies, sustaining innovation allows the incumbent to continually add complexity to an existing technology and sell it to a higher tier of the market with more cash flow. Beacon technology is a case in point. Developed by the behemoth incumbent Apple, iBeacon technology is only accessible to users with a relatively new smartphone or similar device and a relatively high degree of technological savvy, and it is targeted only at those retailers interested in creating a highly customized and flashy form of interactivity with a certain demographic of their customers. Particularly in the current stage, beacon technology is exemplary of the kinds of sophisticated, expensive, and complicated innovations that Christensen describes as the antithesis of disruptive innovation.

However, I would like to pause a moment to consider how this technology compares to our colloquial use of the term "disruptive". The term has expanded from Christensen's original definition to include any and all new technologies that could potentially change the existing market dynamics of one or more industries. The iBeacon actually does fit this meaning of disruptive, perhaps better than many of the things that get labelled as disruptive on a daily basis. Beacon technology opens up any and all spaces, from your home to the museum to the subway, to becoming augmented, interactive spaces. It is one of very few commercially available manifestations of broader disruptive trends like the IoT, context-aware computing, and augmented reality, and even if this particular early-stage iteration isn't successful, it is a harbinger of even cheaper, simpler, and more accessible technologies with basically the same purpose. Its only real competitors are NFC chips and QR codes; beyond those, place-based interactivity is still dominated by single-use machines (think cash register, ticket reader) and human actors who guide and assist our interactions with a given space. The realm of human-place interaction is ripe for disruption, and in particular ripe for connections to the increasingly universal multi-purpose interfaces we all carry in our pockets.

To describe this particular aspect of beacons' newness in another way, you could call them "revolutionary". Like a new Kuhnian paradigm, iBeacon technology opens up a generative new worldview. Since I've started thinking about iBeacons, everywhere I go I can't help but ask myself: how could an iBeacon change my interaction with this space? In some ways the world suddenly seems more like a videogame, where walking through a doorway or picking up an object triggers a new pathway or layer of interactivity. With that said, although generative and revolutionary, this paradigm doesn't seem to be incommensurable (in Kuhn's sense) with the old paradigm. In some ways, this technology is so new and the accompanying sociotechnical infrastructure so…nonexistent that it doesn't have many points of conflict with the existing paradigm. The only "competitors" are humans and single-purpose machines, but because of the current narrow demographic reach of this technology, it will be a long time before it could begin to replace or significantly alter the current paradigm of user-place interaction. With that said, as the technology slowly "trickles down" and as new and unexpected uses of it are designed, perhaps this "anomaly" will become more of a direct affront that current systems must adapt to. We can imagine that rather than being a kind of glitchy technological gimmick stores can choose to try out, someday microlocation technology will become the default and places without it will be at a relative disadvantage.

As one final dimension of analysis, I want to consider whether beacons are a "radical" innovation. Given that this technology creates a fundamentally new kind of affordance and can even create a different kind of worldview, it definitely seems to have a potential kind of radicalness. However, this radical innovation isn't so much within the technology itself as within the applications we can develop for it. Beacon technology is a protocol: it isn't a fancy new device, but a more sophisticated modification of existing location-services technologies. Yet the very open-endedness of this technology is what gives it its potential to be radically innovative. It simply provides a new affordance which new apps can use to any number of different ends. Unfortunately, it seems that current implementations of this technology are highly circumscribed. Beacons are currently being used primarily for enhancing retail experiences by pushing coupons or ads to users. In this sense, the technology is being used as a fancy new way of pushing spam in users' faces. This particular use is definitely not radical, and I would argue it doesn't offer any compelling reason for users to adapt their practices or learn the new skills required to use this technology (i.e., turning on Bluetooth and usually downloading an app specific to that location or chain of locations). A recent Guardian article even asks whether this spammy use of beacon technology may lead to its eventual failure.


Further Beacon-Reading:

4 Reasons Why Apple’s iBeacon Is About to Disrupt Interaction Design

How do iBeacons work?

BEACONS: What They Are, How They Work, And Why Apple’s iBeacon Technology Is Ahead Of The Pack

First principle of contextual computing: Don't Be Boring (from the blog of Estimote, a startup that makes beacons)

Leave a comment

Filed under HCI, New Media, UI, Uncategorized

Drinking from the Firehose: extended cognition & shitty interfaces


We suck at creating effective interfaces for exploring and utilizing the Internet. In particular, I want to complain about the browser. 

I spend a huge portion of my waking life and cognitive energies working in a single Chrome browser window. This browser is the interface to my work environment, my school environment, and large parts of my social environment —not to mention my general thinking/time wasting/entertainment environment. How is all this diverse and dynamic cognitive activity represented? Via a flat grey window with lots of tabs, pretty much the same way it looked back in 1998.

The Internet and the different applications it supports are obviously powerful cognitive tools, capable of supporting a wide variety of extremely complex human activities. Yet the basic “window” to this world is incredibly simplistic. The browser does not provide a complex, flexible structure to complement the complex, flexible activity of “being online”. This is a damn shame particularly because the Internet and its many environments and applications allow us to do incredibly advanced cognitive work, and yet the burden of organizing and keeping track of this complex cognitive work is left entirely to us and our naked, puny minds.

One of the most interesting ways to use the Internet, in my experience, is to think with the Internet: to conduct research and explore and enhance a train of thought via the hyperlinked rhizome that is the Interwebs. Doing this, my browser window(s) come to look like a very flat visual representation of my stream of consciousness. I would argue that in many ways the browser window is the closest any technology has come to externalizing the flow of human thought. Written language is of course still the paradigm technology for storing and enhancing the flow of thought, but it does not have the dynamic, "wormhole" characteristic that actual stream-of-consciousness thought does. What it lacks in complexity, written language makes up for by helping us clarify and organize our own messy thoughts, making them easily communicable to others. In contrast, the browser's defining affordance is allowing the stream of consciousness to expand ever outward into potentially infinitely branching thoughts.

If you search Google Images for "too many tabs", you will see MANY instances of this phrase, which says to me that the browsing experience has an obvious and intuitive correlation with our cognition.

Which is great! But I often find myself truly straining under the cognitive load of this interface, wishing for some or any of the powerful affordances we find in nearly every other interface: the power of written language, the power of the GUI and the computer desktop, even the basic affordances of a physical desk. This would help us begin to approach the problem of organizing the sheer amount of material and references generated in the act of browsing. But much more interesting would be to look at the unique properties of browsing and figure out an organizational and graphical structure that could make the volume and complexity of data generated in the act of browsing truly useful, meaningful, and communicable. That is, to truly enhance our powers of cognition not just by adding breadth and depth, but also by adding complexity, precision, and meta-awareness of browser-thought.

Imagine perhaps if your browser automatically generated a graphical “tree” of your browsing history, showing different paths of thought. Each path could be labelled and perhaps even tagged, creating a visual representation of your train of thought when exploring a particular area. This tree could be stored and shared with others. It would help boost meta-cognition about your research, providing a “big picture” to help organize and structure your browsing. This big picture awareness could help to combat the tendency of hyperlinked browsing to suck you into informational wormholes, eventually losing track of your original train of thought entirely.
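As a rough sketch of what such a feature might look like under the hood (all URLs, titles, and tags here are invented for illustration), the core is just a tree of visited pages that can be rendered as an outline of your train of thought:

```python
class BrowseNode:
    """One visited page in a tree of browsing history."""
    def __init__(self, url, title, tags=None):
        self.url, self.title = url, title
        self.tags = tags or []
        self.children = []

    def follow(self, url, title, tags=None):
        """Record a link followed from this page; returns the new node."""
        child = BrowseNode(url, title, tags)
        self.children.append(child)
        return child

    def outline(self, depth=0):
        """Render the tree as an indented outline of the train of thought."""
        lines = ["  " * depth + self.title]
        for child in self.children:
            lines.extend(child.outline(depth + 1))
        return lines

# A hypothetical research session: two branches from one starting point
root = BrowseNode("https://example.org", "Extended cognition")
a = root.follow("https://example.org/clark", "Clark & Chalmers")
a.follow("https://example.org/otto", "Otto's notebook", tags=["examples"])
root.follow("https://example.org/ui", "Browser interface design")
print("\n".join(root.outline()))
```

Even this toy version suggests the payoff: the outline is a shareable, taggable "big picture" of a browsing session, rather than a flat row of tabs.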

Overall, I think that both the Internet and the computer are obviously some of the most powerful cognitive artifacts we have ever made, and in just a few decades they have radically extended and enhanced our cognition. But we have not yet developed human-cognition-friendly interfaces for exploring these repositories. I would argue that the current state of the browser interface makes our unstructured access to these repositories almost more of a cognitive burden, giving us unlimited and unorganized access to more than a single mind can understand.

The browser is just one pressingly obvious example of a larger, systemic problem that we will need to face in the next few decades. The Internet is a massive, continuously growing area of extended cognition, yet it still exists largely as a massive “data dump”, with very limited capabilities to organize, process, and understand this data– to bring it back down to the scale of human understanding.


On a related note: “Real-Time Space-Efficient Synchronized Tree-Based Web Visualization and

1 Comment

Filed under Uncategorized

Telecommunications Policy Research Conference #42 (#TPRC42): A few reflections

This weekend I attended the Telecommunications Policy Research Conference in Arlington, Virginia. I am not a policy expert nor a computer scientist, but it was very interesting to see some of the "hot spots" of debate and interest in this field. I was most interested to glean a sense of the areas of technological development that are so new, so pervasive, or both, that they present major areas of interest in terms of policy development.

Below I have outlined four such areas, some of the interesting takeaways from the conference on each, and resources for further explanation/exploration:

1. Wireless grids

Wireless grids are "ad hoc, distributed resource-sharing networks between heterogeneous wireless devices". As far as I understand it, this technology represents an extension and alteration of the traditional Internet as we know it. In particular, these networks provide new affordances in the sense that they can be pulled together ad hoc, require no centralized control in the form of a router, and can be made up of small hand-held devices (such as the nearly ubiquitous smartphone).

This technology has many interesting applications, including emergency response systems, means of creating networks in the face of censorship, and definite relevance for the “Internet of things” (explored more in my next point). Yet this technology is also still in a very early stage, and needs more development in terms of the protocols and “middleware” that would help securely organize these networks arising between radically different, non-traditional devices and interfaces.
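To make the "no centralized control" point concrete, here is a toy model (entirely my own simplification, with invented device names) of a grid in which devices join and leave at will, and the pool of shared resources reconfigures accordingly:

```python
class Device:
    """A grid participant advertising the resources it can share."""
    def __init__(self, name, resources):
        self.name, self.resources = name, set(resources)

class WirelessGrid:
    """Toy ad hoc grid: no central router, just a shifting set of peers."""
    def __init__(self):
        self.devices = {}

    def join(self, device):
        self.devices[device.name] = device

    def leave(self, name):
        self.devices.pop(name, None)

    def providers(self, resource):
        """Which currently joined devices can supply this resource?"""
        return sorted(d.name for d in self.devices.values()
                      if resource in d.resources)

grid = WirelessGrid()
grid.join(Device("phone", {"camera", "gps"}))
grid.join(Device("laptop", {"storage", "cpu"}))
print(grid.providers("camera"))   # ['phone']
grid.leave("phone")               # the grid reconfigures on the fly
print(grid.providers("camera"))   # []
```

The hard problems the real research addresses (secure discovery, heterogeneous radios, the "middleware" mentioned above) are exactly what this toy model waves away.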


2. Internet of things

The Internet of Things is closely related to the technologies of wireless grids and the development of new forms of networking generally. The Internet of Things is a phrase that describes a potential network of not just dedicated computing devices (like computers or smartphones), but nearly any appliance or object we wish. This promises new means of remote sensing and remote action. Early instantiations of this idea include home monitoring devices which report data to your smartphone, or simple tracking devices that can be attached to your personal belongings. The Internet of Things would exponentially expand the world of "big data" and, of course, would open up new concerns about the privacy and security of that data.

Although we are seeing some implementations of the Internet of Things on the commercial market, we are still in the very early stages of this technology, with quite a bit of development to go both technologically and (I would argue) in terms of our understanding of the new affordances and possibilities the Internet of Things would allow.


3. Spectrum

Wireless connections are supported by something called "spectrum": in fact, the same spectrum used by TV and radio broadcasters. Spectrum is a major policy issue because it is a limited natural resource that is allocated and managed by the government, the FCC in particular. As it stands, most of the existing spectrum within the range that is physically usable for wireless broadband is already occupied. Although both the government and wireless providers are searching for more efficient ways to use and share this spectrum, given the incredible rise in demand for wireless via tablets and smartphones, many have raised the question of whether we are on the edge of a "spectrum crisis". Such a crisis would entail drastically reduced speeds, likely prohibiting things like online video streaming and significantly slowing down browsing.
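A back-of-the-envelope illustration of why spectrum is a hard limit: the Shannon-Hartley theorem bounds the data rate of any channel by its bandwidth and signal quality, so once the usable bands are allocated, total capacity can only grow through more efficient sharing. The numbers below are purely illustrative:

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley limit: max bits/s over a channel with the given
    bandwidth and linear signal-to-noise ratio: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A hypothetical 20 MHz channel at an SNR of 100 (20 dB): roughly 133 Mbit/s,
# and that ceiling is shared by every user connected to the same cell.
print(shannon_capacity(20e6, 100) / 1e6)
```

No engineering cleverness gets around this bound, which is why the policy conversation keeps returning to freeing up or sharing more bandwidth rather than simply "building faster networks".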

In order to avoid such a crisis, many are turning their attention to technological solutions, as well as possible policy solutions. However, it seems that at some point we will have to turn our attention to bigger questions such as: Is government regulation helping or hindering the process of spectrum management? Would the free market be better able to solve this problem? What if we simply “run out”- are there viable alternatives to spectrum for wireless connections?

Although this topic is somewhat obscure to the general population, it seems like spectrum may be a technological bottleneck we will encounter as the use of mobile devices – and the Internet of Things! – continues to grow.


4. Algorithms

Friday afternoon featured a fascinating panel titled "Governing the Ungovernable: Algorithms, Bots, and Threats to Our Information Comfort-Zones" (featuring, among others, CCT's Mike Nelson), exploring the impact of intelligent systems on the world of general consumers. In particular, I was interested in the thread on algorithms, which increasingly determine the types of experiences we have online. These largely invisible technologies have recently gained a bit more spotlight via the Facebook and OkCupid experiments, but overall these algorithms exist in a kind of shady underworld that is little understood by the average platform user. Yet increasingly they use deeply personal information to draw deeply personal conclusions, and use those conclusions to create a particular experience, entirely unbeknownst to users.

The key word of this discussion was "transparency". It was suggested that private companies could go a long way toward gaining more trust from their users by being more transparent about their various methods of data aggregation and processing, and about how they are using this information. From my perspective, this seems highly improbable and highly ineffective (after all, just how jazzed do people get about reading Facebook's privacy policies…?). This is undoubtedly an area that will continue to develop significantly in the coming years, and at some point it will need to be addressed by more formal policy initiatives, as we are just beginning to see happen in parts of Europe.


Leave a comment

Filed under Uncategorized

The Prayer Nut and the Mobile Phone

I recently visited the Rijksmuseum during a trip to Amsterdam, and had the pleasure to experience the “Art is Therapy” exhibition, a kind of “meta” exhibition wherein the curators created large, printed post-it notes with commentary on works of art meant to show how the works could change the viewer, could incite positive change and a kind of spiritual healing.

prayer nut

In particular I was drawn to the commentary laid out next to a “prayer nut”, a miniature carved wooden ball from the Middle Ages meant to act as both spiritual reminder and status symbol for its owner:

“The prayer nut is an aid to the interior life. It is specifically designed to provoke an inner state. 

There are lots of things we care about in theory, but forget about in practice. Religions understand this– and design all sorts of tools (from cathedrals to possibly the smallest of all prompts: the prayer nut) to help us keep important ideas closer to the front of our minds. Religion can be seen as a giant memory-prompting machine, always trying to get us back on track. 

The nut understands our frailties: it doesn’t condemn them, it seems to respond very creatively to them…

Modern technology is very good at catering for what is urgent, but very bad at keeping us in touch with what is important. Smartphone providers have something to learn from the prayer nut.”

“Sickness: I’m always reaching into my pocket to check my phone.”

Notably, the museum’s accompanying text notes that the owner of this nut would also have been very likely to enjoy showing off such a fine work of art as this intricate prayer nut…which cannot help but make me think of the ostentatious pleasure of displaying one’s iPhone.

That said, I think it is a very good point that unlike the prayer nut, the phone is pulling us towards the “urgent”, not the important. How could we design our machines to put us in a more reflective mode? To keep us more connected with the bigger ideas, the more meaningful narratives that drive our lives, rather than the vagaries of the current moment? How could a machine push us towards LESS use, less need, less addictive, self-centered, and impulsive activity? Towards mindfulness?


How could we respond creatively to the frailties of humanity? How could we alter our technologies to make us better people, or at least to mediate the bad habits and negative side effects that new technologies seem to give rise to?

Food for thought.


Paradigms of Accumulation & Loss in a Digital World

One phenomenon of the digital world that I think we must increasingly come to terms with is the changed and changing nature of accumulation and loss. In a sense, this new digital world is marked very strongly as “lossless”, as compared to previous media. Especially now that everything we create is backed up automatically in the “Cloud”, it becomes increasingly difficult to lose even the most mundane emails, photos, ticket receipts, etc.

Unnecessary accumulation is one of the defining features of the (modern? post-post modern?) world we live in. Accumulation of data, of IKEA furniture and other cheap and easily accessible consumer items, accumulation of massive amounts of waste and the pollutants that follow, accumulation of all the various ideas and products and thoughts of the entire history of humanity. To address this era of accumulation, we will need to learn the art of curating, of throwing away, of recycling– and even of preventing additional creation altogether.

Snapchat is just one prominent and still somewhat mysterious case where the standards of accumulation are being rethought. Instead of making accumulation and infinite storage of a sent photo the default, the app is entirely based on the premise of a default of loss. While storage is possible via the screenshot, this involves an explicit and intentional action on the receiver’s part, and carries potentially interesting social implications, since the sender is notified that the image has been saved. It is interesting to think about other ways that the paradigm of easy and automatic accumulation could be changed, whether for entertainment purposes as in the case of Snapchat, for environmental purposes, or as a way of ensuring that our digital world does not become quickly overrun with the detritus of everyday life but instead becomes a more curated, meaningful store of our experiences.

A few examples of technologies of accumulation come to mind. An interesting one for me personally is Amazon, and online shopping in general. Especially as a Prime user, all it takes is a fleeting thought and a few clicks for me to add something to my growing collection of worldly possessions. This encourages an accumulation of things like never before. I do not have to hand anyone money, leave my house, or even really have a second thought about an item before I buy it. This obviously shifts the paradigm strongly toward accumulation. In contrast, there is no easy way to discard or recycle or pass on the objects I no longer really want or need. What’s more, I’m sure there are many smart minds in the industry figuring out how to make every THING in the world as easily or even more easily acquired. (See: Seamless making your food desire only a few iPhone taps away, Amazon’s new drones bringing those items to you in less than 24 hours… and who knows what the future of 3D printing may bring us.) How can we combat this basic compulsion towards accumulation? We need to begin developing technologies of curation and organization– designing behaviors of divestment and restraint.

These technological changes need not always be radical. I currently have about 10 gigabytes of old emails filling various accounts. What if, instead of requiring us to mark something as “delete”, email was automatically deleted after a set period of time (30 days? six months?). If you wanted to save something, you would have to intentionally, thoughtfully choose to save it. In the world of material things, the default is typically that things stick around unless you decide to get rid of them. In the world of the digital, things can disappear without a trace. Forcing us to think about accumulation and loss, and changing the dominant paradigm from passive accumulation to active conservation, may help us begin to address some of the larger issues that will only become more and more pressing over time.
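The inverted default imagined here can be sketched in a few lines of code: a retention sweep that keeps a message only if it was deliberately saved or is still within its expiry window. The `Message` type, the 30-day window, and the `sweep` helper are all illustrative assumptions, not any real mail system’s API.

```python
import datetime as dt
from dataclasses import dataclass

@dataclass
class Message:
    subject: str
    received: dt.datetime
    saved: bool = False  # must be set deliberately, like a Snapchat screenshot

DEFAULT_TTL = dt.timedelta(days=30)  # assumed retention window

def sweep(inbox, now=None, ttl=DEFAULT_TTL):
    """Keep only messages that were explicitly saved or are still inside the TTL."""
    now = now or dt.datetime.now()
    return [m for m in inbox if m.saved or now - m.received <= ttl]

# Example: an unsaved three-month-old message expires; a saved one survives.
now = dt.datetime(2024, 6, 1)
inbox = [
    Message("old promo", dt.datetime(2024, 3, 1)),
    Message("tax form", dt.datetime(2024, 3, 1), saved=True),
    Message("fresh note", dt.datetime(2024, 5, 20)),
]
kept = sweep(inbox, now=now)
print([m.subject for m in kept])
```

The point of the sketch is simply that the design burden flips: effort is required to keep, not to discard.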

This kind of studied reflection on digital storage and loss may even be life-savingly important. In the wake of the disappearance of Malaysia Airlines Flight 370, I and many others are wondering: in an era where my smartphone tracks my trip to Costco and back, how is it possible that we do not have the data to track an airplane carrying hundreds of passengers across the ocean? The answer seems to be cost: “Although it would be possible to stream data from an aircraft in real time via satellite, implementing such a system across the industry would cost billions of dollars” (Wired, “How It’s Possible to Lose an Airplane in 2014”). Although undoubtedly still expensive, what if flight 370 had been able to simply send a live stream of GPS data via satellite, if not the full data a black box records? One has to imagine that there are innovative and cost-saving measures that could be taken to preserve this valuable data. Although this is a somewhat dramatic example, I am sure there are many, many cases where a simple questioning of our existing paradigms of digital accumulation would radically transform our quality of life.
