pav.alxndr

Performing Artists, Kill Your Guilty Conscience

My amazing and talented wife Jessica recently did some voice work to help another actress prepare for a film, for which she was paid. She was told today, however, that the film project had been canceled. This, of course, happens sometimes, and it’s not as though Jess was going to be in the thing, so no harm, no foul for her. But then she admitted to me that she felt a twinge of guilt for accepting payment for her work now that the film won’t actually go into production. When she said this to me, I think my eyes bugged out of my head, and I may have dropped whatever I was holding. Had I been sipping a beverage, I probably would have done a spit take over my laptop keyboard, necessitating a puppy-dog-eyed trip to the Genius Bar.

Guilty? For being paid for your work? I made the comparison to someone who might have built an object: If someone had constructed a set piece for the film, and the film was canceled, no one would think that the builder shouldn’t be paid. Work done is work done.

But somehow with artists, I think particularly performing artists, there is a feeling that what we do doesn’t really count as work, and that if we happen to get paid for it, it’s just icing. A happy coincidence.

Part of this is fueled by raw economics. The supply of performers (actors, singers, dancers, etc.) is far, far, far greater than the demand for them, which leads to performers doing ungodly amounts of work for nothing, and in many cases, actually paying to work in order to get “experience,” get “exposure,” and really, get “exploited.” (Say the word “showcase” to an actor and see if you can detect them dying inside.)

There’s also something about the evanescence of performance work, particularly live performance. You do it, and the work then flits off into the ether, perhaps captured in recordings or memory, but now past.

Finally, there’s the trope related to the idea that one must “do what you love,” which can easily be misinterpreted as “since you love doing it, doing it is payment in itself.” Actors and other performers are made to feel that they are privileged just to be allowed to ply their craft at all, and that only a rarefied few should deign to feel entitled to compensation for it. To some, it can feel almost impolite to expect to be paid for performance work.

And I get it. I have been there. As someone who is usually drenched in self-loathing, I know what it is not to value one’s own labor. Adulthood and the oppression of debt and expenses have changed me a great deal, however, plus I’ve been out of the performing arts workforce for several years now. Raw necessity has hardened me somewhat when it comes to expecting fair compensation, even for work that I might do on my own time for nothing anyway. (Music, for example.)

Here’s the key difference: If I choose to do creative work on my own (and on my own terms) for no payment, all for me, that’s my decision. If you want me to do similar work for you, on your terms, you must pay me. The two are not related, but we sensitive artist types are primed to conflate them.

Back to Jess. Her work in this case was not even “performance” per se, but using her talents to help another performer with their vocal work. It was a kind of training. So it’s not even as though she got the chance to spread her creative wings and practice her craft at its fullest for the sheer joy of it. She did contract training work. And yet she still felt bad for accepting her compensation.

It makes me more than a little angry that our culture has been set up this way, so that my brilliantly talented and already overworked wife would feel bad for being paid for her services, done in her extremely scarce spare time. And it happens to all manner of creative professionals, not just performers but writers and designers too. Because it’s “creative,” it doesn’t count as real work.

Get paid. If you also happen to enjoy that work? That’s the icing. And it’s irrelevant. Get paid fairly for your work and treat it like the business transaction it is. Everyone else does.

An Unbearable Ache and an Unexpected Alphabet

I have highly mixed feelings about having my kids in daycare. On the one hand, it's wonderful that they get a full day's worth of attention, stimulation, exercise, education, social acclimation, and genuine care, every single day. It's a great daycare, the kids love it, and we're really lucky to have it available. On the other hand, I can't escape the fact that the majority of my kids' waking hours are spent being taken care of by someone who's not me or their mother. Our roles are reduced to mornings, evenings, and weekends. Like we're sharing custody of our kids with the daycare teachers. When my first kid started being looked after by a nanny when he was a baby, I cried like a baby myself the night before out of the crushing guilt I felt for not being there to raise him all day myself.

It's an economic reality, of course. If the math worked out differently, where one of us not working and staying home with the kids was more or less a wash with both of us working and keeping the kids in daycare, we might do that. Or if it worked out that one of us had a part-time job, and the kids were looked after only part of the week. But with the costs of living being what they are, there's no escaping this arrangement until the kids are old enough to go to public school. And then it's the school that's got shared custody.

And even though I work from home, if you've had small children you know there's no getting anything done with two little kids around who need, well, everything. Having the kids stay home with me is not even close to being an option, at least until they're old enough to more or less look after and amuse themselves with minimal supervision.

And look, even if we did have the kids to ourselves all the time, we couldn't hope to provide the enrichment that this daycare does. They have the expertise, experience, and resources to make the kids' days very fulfilling. We'd do our best, but they'd still be more or less stuck with mom or dad all the time.


So anyway this is what got me thinking about this again: Today, out of nowhere, my 4-year-old boy starts writing down letters of the alphabet. Starts with A, gets down to G where he gets a little confused about which way it goes, manages H and I, and then gets similarly confused and frustrated by J.

But I had no idea he could do this. A few months ago, he couldn't color within the lines or draw, well, anything beyond a mash of scribbles ("it's a storm!"). A few weeks ago, he started bringing home actual pictures of things that he'd drawn: firetrucks, houses, and members of his family. I was amazed by these, simply gushing over them.

And then today, he starts writing the alphabet, neatly, strictly within the lines of a piece of ruled paper. I don't think he had any idea I'd be as blown away and proud as I was.

Would I have gotten him writing his alphabet if he were at home with me all day? I don't know.

But he's doing great. They both are. I miss them when they're at daycare, and I hope my wife and I can get to the place where we don't need to rely on it full time. Until then, my heart will still ache over relinquishing so much of their lives to others, but it won't be an unbearable ache.

HP Unveils An All-Touch, Projector-Controlled PC

HP has announced something truly novel: a desktop PC that is entirely touch-based, not by reaching out in front of you to touch the screen (though you can also do that), but via an overhead projection onto a giant touchpad where a keyboard would normally be. Questions of practicality aside, it looks at first glance to be both a logical step in the direction technology is going and a refreshingly new take on the desktop PC. It's called the Sprout. [youtube https://www.youtube.com/watch?v=IBnf_lHxPdE]

Skepticism is of course warranted. My immediate reservations include what appears from the video to be a serious lack of saturation in the projection. I see this as a problem for two reasons: One, the projection is your primary way of interacting with the computer, so having things look dim or hazy could make it hard to be precise. Two, I'd worry that aside from when you're typing, you'd be looking at the projection more than the primary display, which means most of your time is spent looking at the faint projection instead of the big, pretty monitor in front of you. That seems ergonomically awkward, to say the least.

Perhaps in real life the image of the projection is much better than it seems in the promo videos (but I bet not as good as it looks in the promo photos).

But I suspect this is the right idea for where things are going. Take this concept as a foundation, and imagine that you have a rich, high-resolution projection with an iOS-style interface in front of a Retina iMac. Or instead of a projection, you have an iPad-style surface that adapts to the needs of the task at hand, and can even scan and sense dimension and distance of your hands and other objects. Now you're talking.

The Attack, Four Years Later

Taken a month after the attack, holding my boy on his first birthday, arm in a purple cast, shirt covered in drool.

As you might know, a ways back two thugs beat the shit out of me outside my home Metro station when I lived in DC, and it was really, really bad, and it changed a lot of things for me. Of course, right? Well, last night marked four years since that event, and I thought it might merit some brief reflection here, because it’s impacted so much of my life, my thinking and, of course, my writing here.

To start, here’s my first telling of the event itself, a few weeks after it happened, around my 33rd birthday, wherein I mused at the idea of relative levels of misfortune:

I don’t feel “lucky” as many have said I should. It usually goes something like “You’re lucky you survived” or “You’re lucky they didn’t have a knife” or something like that. I understand the sentiment, but no, I’m not lucky. If I’m “lucky they didn’t have a knife,” that assumes a world in which the zero-point, the point of normalcy, is to be severely beaten by two anonymous thugs and then stabbed. Only then are you “lucky” not to be stabbed. Though I suppose it’s a good thing that my attackers were caught and convicted, I don’t feel triumphant. I know they will likely only come out of prison worse than when they went in. There is little vindication in this.

A few months later, newly transplanted to Maine, I recounted a kind of superstitious milestone in which I was surprised not to be beaten anew.

Speaking of superstitious milestones, as the first anniversary of the event passed, I experienced both anxiety and distraction:

What’s odd is that I had been kind of bracing for the first anniversary of the event, as though there was a sort of rent in the universe where it happened in time, and when the Earth passed through that space once more, as it will with every year’s revolution around the Sun, I would somehow feel it; almost as though it would happen all over again. Of course it didn’t, but even more surprising to me is that on the actual day, it barely registered.

Shortly after, I mused about the futile “what could I have done differently” question, the presumption of some that I should/could have “fought back”:

It’s an absurd question, really, because I know that I could not have. I was snuck up on from behind and hit extremely hard on the back of my head, which knocked me straight to the ground, after which I was pummeled mercilessly by two assailants whose faces I never saw. My neocortex knows there was nothing to be done but survive. My lizard brain, and a small handful of males in my life who I presume are well-meaning, tell me otherwise.

I totally forgot about the two-year mark. Which was kind of good.

After being “discharged” from my first experience with post-attack therapy (I still see a guy), I wrote at some length about the process of working through all of the pain and fear, about how there already existed a roiling undertow of PTSD in my psyche, and how the attack brought it raging to the fore:

Therapy, if you’re doing it right, will get to work on the problem you came in for, yes, but will also address whatever might surround the event in question, other things in my life and mind that gave the attack the meaning that I would come to give it. I think we did it right. The work we did in therapy certainly targeted the assault — heavily — but managed to clean out a lot of other cruft that had built up over the years, over the decades. The attack was an extremely traumatic event, of course, but it had been colored by myriad other events from my past, a sickly array of self-conceptions and assumptions that I had spent a lifetime inculcating myself with, being miseducated about by the world around me. We targeted that stuff, too.

We didn’t fix it all, but we shrunk it. We got me to perceive those things as closer to their actual size, to their actual power. I didn’t lose all my misperceptions about myself or how others see me, but I learned to at least acknowledge that they may not all be true. Guys, I’m telling you, that’s huge.

About a year ago I looked at how PTSD was portrayed in Iron Man 3, and I was mighty surprised how true it felt to me, and how seeing someone else experience the panic, even in a fictional setting, was somewhat triggering:

We see a lot of troubled superheroes. Too often, though, their traumas exhibit themselves in brooding or vendetta. It was extremely refreshing to see a trauma manifest clinically in a superhero character, in a way I as a fellow-sufferer recognized.

And now it's been four years. It's obvious to say "so much has changed," because, well, of course! It's been four freaking years! But so much has changed directly because of the attack. Perhaps Jessica and I would have eventually decided to move to Maine anyway, as it's something we'd been thinking about, but we certainly wouldn't have moved when we did. We knew we wanted a second child, and eventually we did, but of course it would have been a different child than the one we got (who is awesome).

But as I talk about in my piece on therapy, I wouldn't have done the work that I clearly needed to do (and continue to do) had it not been for the attack. I had shit that needed dealing with one way or the other, beating or no beating, and I might never have begun to deal with it if I'd not been pushed so terrifyingly far.

I'm not glad in any sense that it happened. It was a nightmare. But it did happen. And here's where I am now.

Death by a Thousand Emotional Microtransactions

How much do you care what people think of you? How much do you care what people you’ve never met think of you? How much do you care what people you’ve never met think about any individual choice you make or opinion you share? How much do you care what people you’ve never met think of the specifics of the format, timing, wording, tone, or technological means of the opinion you’ve shared?

If you use Twitter, or social media in general, you already kind of know the answer, or at least you’re learning it.

I have been learning some lessons myself about social media; how it can be used either passively or with intention; how it informs our personal identity; and how I have allowed it too much unfettered access to my nervous system, among other things. Clearly, for all its benefits, Twitter is also an enormous source of potential stress, eliciting what I call the Torrent of Feelings. I won’t get into the myriad factors that make this so. Browse this here blog, and you’ll see other ruminations on this subject.

What occurs to me lately is that a lot of the stress that Twitter (et al.) engenders has to do with our perceptions of being judged. The more you present yourself on one of these platforms (and I’ll just use Twitter for now, since it’s my traditional platform and I’m tired of typing provisos indicating “et cetera”), the more you have your sense of self and identity wrapped up in it. And that can make one sensitive to the scrutiny that comes with such exposure.

Freddie deBoer recently put it like this:

“You’re doing it wrong” is the internet’s truest, most genuine expression of itself. For whatever reason, the endless exposure to other people’s minds has made the vague feeling that someone, somewhere, is judging you into the most powerful force in the world.

But what is being judged? The more I think about it, the more I think the answer is “everything.” And not “everything” in the sense of one’s whole self. That is happening, but it’s piecemeal. Very piecemeal, granular in the extreme. Because of course no one can encapsulate their whole selves in a tweet, or even a series of them, so judgment comes in small units. The hyperscrutinization that people experience (I know I do) on Twitter happens tweet by tweet, and on down.

Of course you can be called out for the substance of your opinions and choices, whether deservedly or not. But you can also be derided for your word choice, the timing of your tweet, your grammar, your nuance, your lack of nuance, your hashtag use, your frequency of tweeting or lack thereof, what client you’ve chosen to tweet from, and so on. And in those instances, though they are highly focused, the effect on the recipient is to add it to the collection of judgments about themselves as people. As Boone Gorges puts it, “A life spent on Twitter is a death by a thousand emotional microtransactions.”

And while I strongly advocate using Twitter and social media with great intention, there’s not much you can do about this micro-judgment phenomenon besides not using Twitter. That’s because Twitter is used by humans (usually), and humans, even the ones we really like, also tend toward the shallow and the knee-jerk response in an environment that fosters that kind of thing. Gorges again:

Every tweet I read or write elicits some small (or not so small) emotional reaction: anger, mirth, puzzlement, guilt, anxiety, frustration. I’ve tried to prune my following list so that when I do find myself engaging in a genuine way, it’s with a person I genuinely want to engage with. But there’s a limit to how much pruning can be done, when unfollowing a real-life friend is the online equivalent of punting his puppy across the room. So all day long, I’m in and out of the stream, always reacting to whatever’s coming next.

And there’s a domino effect. Especially during times of collective stress (such as the siege on Ferguson, the death of someone notable, etc.), those on the periphery peek in, see the Torrent of Feelings swirling around them, and judge the validity of it all. Erin Kissane writes:

In the flood of information and emotion from something like Ferguson (or war crimes or an epidemic) … there we all are, gradually drowning. So people get huffy about the volume of emotion that these events arouse—angry that others are angry about the wrong things or too many things or in the wrong register. … (I am properly angry, you are merely “outraged.”)

It should be noted that of the three writers quoted here, all three have left Twitter. DeBoer’s been gone for a while I think, and the other two announced their exit in the quoted posts.

Now, I’m not leaving. I have too much invested socially and professionally in Twitter to forswear it. I will have to make do with diligent pruning, and accept that it will require a degree of fluidity: maybe I mute or unfollow certain people at certain times, and then bring them back to my feed at other times, for example. I will probably screw some of it up.

All of this is to say that Twitter is valuable, but we human beings are so damned vulnerable. The Twitter service does not care at all about this vulnerability, and probably thrives as a result of it. But I think we can do a lot to harness Twitter’s positive value while being highly mindful of its power to kill by a thousand cuts (and this is before we even get to outright abuse, harassment, and threats, which is a related problem at a much higher temperature). I’ll be thinking about these things as I tweet and react, but also as I take in the reactions of others to me. It won’t be easy.

--

Image by Shutterstock.

The Real People Who Serve As the Internet's Depravity Filter

An incredible investigative piece in Wired by Adrian Chen reports on the lives of contract content moderators, folks whose job it is to go through content posted to online platforms (such as Facebook, YouTube, Whisper, etc.) and deal with the content that violates a platform’s policies or the law. And yes, we’re talking about the really bad stuff: not just run-of-the-mill pornography or lewd images, but examples of humanity at its worst, from torture to sexual assault (involving adults, children, and animals) to beheadings. Just reading Chen’s piece is a traumatic experience in and of itself, knowing what material is out there, what unthinkable behavior real people are engaging in, and what the relentless exposure to this content must do to the psyches of these grossly underpaid contract workers, whose lives are slowly being ruined, their well-being slowly poisoned, post by post and video by video. Simply reading this article will probably require some recovery time.

I can’t have a blog about tech, culture, and humanism without at least acknowledging what Chen has brought into daylight. I don’t think I have any novel observations at the outset, having just read it, still somewhat teetering on my heels. But here are some thoughts and questions that it raises for me:

First, the obvious: Are the major tech companies for whom this work is done really aware of what they put these moderators through? From the Bay Area liberal arts grads to the social-media-sweatshop moderators in the Philippines, hundreds of thousands of smart, sensitive human beings (and I think they must be smart and sensitive to have the kind of judgment and empathy required to do this kind of work) are having their minds eaten alive, losing their ability to trust, to love, to feel joy, with disorders that mirror, or explicitly are, post-traumatic stress. Does Mark Zuckerberg or Larry Page or whoever it is that runs Whisper give a damn? (Given how little Twitter has done to deal with abuse and harassment of its users, I think it’s safe to presume for now that they probably don’t.)

Also, now that we know what these folks are exposed to, what can we as users of these services do about it? What will we do about it? (I fear the answer is probably similar to what we all did when we learned about the conditions in factories in China: more or less nothing.)

Here’s what affected me the most about all of this. This report was a reminder of the depths of human depravity. Now, it’s not news that there are horrible people doing horrible things to each other, and likely ever shall it be. But something about the way it’s described in this report amplifies it for me. If these hundreds of thousands of moderators are being overwhelmed, deluged with violence and death and evil in all manner of their cruelly novel variations, how many of our fellow humans are perpetrators? These moderators are only catching the portion of these people who either get caught in the act or purposefully broadcast their actions. What more must be taking place? I can barely stand to ask the question.

Bearing witness to a video of a man doing something I cannot bear to recount here to a young girl, one moderator points us to the insidiousness of all of this, emphasis mine:

The video was more than a half hour long. After watching just over a minute, Maria began to tremble with sadness and rage. Who would do something so cruel to another person? She examined the man on the screen. He was bald and appeared to be of Middle Eastern descent but was otherwise completely unremarkable. The face of evil was someone you might pass by in the mall without a second glance.

Chen writes of how these moderators no longer feel they can trust the people in their day-to-day lives. You can see why.

Finally, I’ll be thinking about the fact that it’s these devices and services that I am so fascinated and often entranced by that are the delivery vessels for this horror. It is tempting to think of the tech revolution as one of liberation and renaissance. But these tools are available to us all, to the best of us and the worst. What then? What now?

Computers, Coats, and Chainsaws: Longevity and Turnover for Technology

Image by Shutterstock.

In an upcoming episode of the iMortal Show (which I promise is coming back soon – a new episode is being recorded this week), we’re going to discuss something that is often missed on websites like my own: products with longevity, objects and devices that are well-made and not intended to be changed over frequently in the way that smartphones are.

I was inspired by a post at Tools and Toys, in which Chris Bowler explains the thinking behind his site, which features high-quality items that he believes are worth their (often substantial) investment, in large part because of what they will mean to you as a human being – and thus, the connection to this blog:

Mindful purchases can lead to a more peaceful existence. Partly because we make less purchases when cognizant of all of the above, but also because quality items do what they’re expected to do, time and again, and you begin to put trust in the item. …

The fact that our culture attempts to identify humans as consumers is a terrible reality. But if we all make conscious choices to buy quality items — ones we will use and ones that will last — and for which the human beings who are involved in the creation process are paid and treated appropriately, we’ll make this world a little better. And if we focus more on our craft than we do on our tools, we’ll do well.

There’s a lot to unpack there, of course, and this post is just to introduce the topic and take a glance at one aspect of it. Naturally thoughts turn not just to the objects Bowler discusses (winter coats and chainsaws among them), but to our gadgets.

I just referred to smartphones as frequently changeable, but there must be degrees within that as well. My iPhone 5S will remain relevant and usable, I would bet, a good deal longer than, say, a Lumia or Galaxy of similar vintage (I can’t prove it, obviously, but that’s my guess). But we’re talking about a difference of months, say, 18 to 36. Not years and years.

It’s a similar story with Macs versus Windows PCs. Anecdotally, it seems to me that Macs not only maintain a high level of usability over time, but they even sport a look and finish that is more timeless than something made by HP or Lenovo. But again, this only covers years, not decades.

This leads me to a post I stumbled upon tonight by Nate Vaughn, who recommends a fairly high rate of turnover for Macs. This may seem like it’s the opposing view to what Bowler is trying to advocate for, but it’s not. What Vaughn does is acknowledge the real-world longevity of a Mac, and recommend a course of action to maximize it:

The typical model of computer ownership by [the] generally less technology friendly is “Buy it and hold it for as long as possible.” This emotional response makes perfect sense, you just spent this large chunk of money on a shiny new thing, it should last. The flaw in this approach is that these items have a built in shelf life of 20–30 months. Not that they are worthless after that, but they are certainly well into middle-age.

Vaughn actually puts together a loose formula showing a net economic benefit to not holding on to your machine for too long: resell it within a reasonable amount of time, while it’s still highly usable and therefore still worth buying, and still economically meaningful to you the seller, so you can upgrade to your next machine with less pain.

The inversion in [“buy it and hold”] thinking is this: you’re not paying for the new machine, you’re paying for the depreciation of value of the machine you already own. And Apple products are like Volvos, they hold their value well.
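Vaughn’s actual formula isn’t reproduced here, but the gist of the comparison can be sketched in a few lines of Python. Every number below (the purchase price, the monthly depreciation rate, the upgrade cycle) is invented for illustration; this is a toy model of the “pay for depreciation, not the machine” framing, not Vaughn’s figures.

```python
def resale_value(price, months, monthly_depreciation=0.02):
    """Estimate resale value using simple exponential depreciation."""
    return price * (1 - monthly_depreciation) ** months

def total_cost(price, cycle_months, horizon_months):
    """Net cash outlay when you buy a new machine every `cycle_months`,
    recouping the old machine's resale value at each upgrade."""
    cycles = horizon_months // cycle_months
    per_cycle = price - resale_value(price, cycle_months)
    return cycles * per_cycle

PRICE = 1500  # assumed price of a new machine

# Upgrade every 24 months vs. hold one machine for the full 72 months.
upgrade = total_cost(PRICE, 24, 72)
hold = PRICE - resale_value(PRICE, 72)

print(f"Upgrade every 24 months: ${upgrade:,.0f}")
print(f"Hold for 72 months:      ${hold:,.0f}")
```

With these made-up numbers, holding wins on raw cash, and that’s the point of running the arithmetic: the upgrade path isn’t free, it buys you always-current hardware for the price of the depreciation you absorb each cycle. A depreciation curve that steepens past the 20–30 month “shelf life” Vaughn describes (rather than the flat 2% per month assumed here) narrows or flips that gap.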

So with both Vaughn and Bowler, we get an appreciation for longevity; a recognition that some objects, though expensive to invest in, are worth it for how much value can be reaped from them over time. Vaughn takes that a step further and says, you know, that longevity eventually yields diminishing returns. As well made as some of them are, computers are not coats or chainsaws.

Mars, Musk, and a Meditation

Ross Andersen’s interview with Elon Musk at Aeon, on Musk’s ambitions for Mars colonization, is a gem. “Interview” doesn’t do it justice; it’s part interview, part examination of the motivations (Musk’s and civilization’s) for a Mars migration, as well as a meditation on the humanity of such an endeavor.

A big takeaway is how Musk sees a Mars trip not simply as a lofty goal of humanistic enrichment, but as the last, best hope for a species tied to the unpredictable fortunes of a single planet and its fragile ecosphere. If we’re to go on as a species, we have to leave, sooner rather than later.

But you know, it’s not even about our species, per se. It’s about what we carry within us: consciousness.

Musk has been pushing this line – Mars colonisation as extinction insurance – for more than a decade now, but not without pushback. ‘It’s funny,’ he told me. ‘Not everyone loves humanity. Either explicitly or implicitly, some people seem to think that humans are a blight on the Earth’s surface. They say things like, “Nature is so wonderful; things are always better in the countryside where there are no people around.” They imply that humanity and civilisation are less good than their absence. But I’m not in that school,’ he said. ‘I think we have a duty to maintain the light of consciousness, to make sure it continues into the future.’

And about those humans. Leave Musk for a moment, and read Andersen’s musing on the hypothetical trip to Mars by the future colonists:

It would be fascinating to experience a deep space mission, to see the Earth receding behind you, to feel that you were afloat between worlds, to walk a strange desert under an alien sky. But one of the stars in that sky would be Earth, and one night, you might look up at it, through a telescope. At first, it might look like a blurry sapphire sphere, but as your eyes adjusted, you might be able to make out its oceans and continents. You might begin to long for its mountains and rivers, its flowers and trees, the astonishing array of life forms that roam its rainforests and seas. You might see a network of light sparkling on its dark side, and realise that its nodes were cities, where millions of lives are coming into collision. You might think of your family and friends, and the billions of other people you left behind, any one of which you could one day come to love.

Do you, or do you not, feel the anxiety of being adrift? Do you not picture that blurry sapphire sphere receding from view as you realize how utterly surrounded and engulfed you are by blackness, pushed with direction and intention, but somehow still lost? My heart is beating faster.

And somehow, it all puts me in mind of Ernie from Sesame Street. I think it’s safe to say that as adventurous as the lad is, he would not be among the passengers on Musk’s one-way trip to Mars.

[youtube www.youtube.com/watch]

And a bit of trivia to tie it all up: Somewhere there exists, perhaps with my dad, or maybe only with my grandmother, a well-produced recording of a 5-year-old me singing this song, accompanied by my dad on guitar. I didn’t really get it then, but I do now.

Perpetual Dislocation and the Angst of Techno-Conservatives

Image by Shutterstock

Nicholas Carr is a thinker I struggle with. (I mean, I struggle with his thoughts as expressed in written form; I don’t struggle with him personally or physically. Just so we’re clear.) Ever since he introduced the rhetorical question “Is Google making us stupid?”, I’ve been skeptical of his, let us say, “conservative” perspective on technology. By this I mean he is among those who have taken it upon themselves to serve as dampening pedals on the otherwise boisterous enthusiasm generally expressed for new technologies. This is an important role, and I don’t mean to diminish it – it’s at times when societies go barreling into uncharted territories with unearned confidence that we need smart people to counsel some moderation. And he’s good at it. I just also happen to disagree with him more often than not.

In reviews of his new book, The Glass Cage, I keep seeing this snippet about the ubiquity of things like GPS and Google Maps:

The more you think about it, the more you realize that to never confront the possibility of getting lost is to live in a state of perpetual dislocation.

I plopped it into my Evernote bucket-o’-things-to-consider-writing-about, and it grates on me every time I see it. It reads like a parody of an Internet alarmist. Hey man, if you don’t get lost, then you are truly lost. Whoa.

I needed more information, because there’s clearly more to whatever Carr’s argument is here. I don’t have the book, and I’ve been wondering if I’d give it a shot, but obviously I haven’t yet. So I went to Amazon’s look-inside-the-book feature to see this quote in fuller context. And as I’m reading, this idea jumps out at me: This reads like a David Brooks column. I know, I’m predisposed to be pro-technology, and the few paragraphs to which I’m exposing myself are not a fair appraisal of Carr or his entire book, but it is nonetheless my reaction. You know what I’m talking about? The way Brooks seems like he’s going out of his way, and twisting his mind in all sorts of weird directions, to ensure that he feels uneasy about something that is pretty much entirely good. It’s tut-tutting progress for the sake of the tuts. That’s what it felt like for those few paragraphs.

But then I went back a little bit, to the run-up to the “dislocation” quote in question, and what I read made my jaw drop. And I saw it immediately after my David Brooks revelation (emphasis mine):

A GPS device, by allowing us to get from point A to point B with the least possible effort and nuisance, can make our lives easier, perhaps imbuing us, as David Brooks suggests, with a numb sort of bliss. But what it steals from us, when we turn to it too often, is the joy and satisfaction of apprehending the world around us – and of making the world a part of us.

I know! Can you believe it! I mean, what are the chances? Again, I want to be fair to Carr, I do. But you have to admit, that’s kind of funny. Especially if you find comparisons to David Brooks funny. And I do. But it’s not that much of a coincidence, I suppose. Brooks and Carr both fulfill in their respective places the role of the conservative. Not in the Tea Party sense, but in the sense of “someone who stands athwart history, yelling Stop, at a time when no one is inclined to do so, or to have much patience with those who so urge it.” In other words, their role is to publicly suffer ulcers over the largely-positive changes and developments in politics and culture (Brooks) or technology and society (Carr). That can be a useful role, even if it does often induce eye rolls.

Fun aside, I want to give genuine credit to Carr, who, also like Brooks, is no slouch in turning a phrase, and poses serious questions. Before his citation of Brooks, Carr writes this, which deserves contemplation:

While we may no longer have much of a cultural stake in the conservation of our navigational prowess, we still have a personal stake in it. We are, after all, creatures of the earth. We are not abstract dots proceeding along thin blue lines on the computer screens. We are real beings in real bodies in real places. Getting to know a place takes effort, but it ends in fulfillment and in knowledge. It provides a sense of personal accomplishment and autonomy, and it also provides a sense of belonging, a feeling of being at home in a place rather than passing through it. … We may grimace when we hear people talk of “finding themselves,” but the figure of speech, however vain and shopworn, acknowledges our deeply held sense that who we are is tangled up in where we are. We can’t extract the self from its surroundings, at least not without leaving something important behind.

I take that seriously, the idea that place – both connection to a place and an awareness of where one is alien – is part of the fabric of who we are. Where I disagree is with the idea that these are crucial aspects of who we are, or that we are somehow undefinable or “less ourselves” without them. Place, like pretty much everything else our minds perceive, is a construction, just like the Internet. We imbue it with whatever value it possesses. That value is not innate. In a future hypothetical time in which place truly has no bearing on our lives, we will still find ways to distinguish ourselves, still find ways to learn and enrich ourselves, and even to become alienated. Place for now is part of our fabric, but it can be replaced by other fibers.

And as someone who has a terrible – and I mean abysmal – sense of direction and orientation, I’m literally liberated by both the ability to navigate to a near-universal extent, and the removal of “place” as a geographical construct brought by cyberspace. For many people (I assume including Carr), I have no doubt that this is jarring; that this really does feel like a kind of perpetual dislocation, and for them I feel sincere sympathy.

But for me, may this dislocation be indeed perpetual. I have plenty of other challenges in my life, and more than enough scenarios in which alienation builds my character. I’ve been lost quite enough for one lifetime, thank you very much.

The Apotheosis of Voltron

Did you already know about this? You probably already know about this.

Look, I had heard of MC Frontalot, but being 36 and out of touch, I had never heard any of his stuff. Little did I know that he is the artist rapping about toilet paper manufacture in Elmo's Potty Time, which, let me tell you, I have seen many, many times. So I'm already impressed.

But lemme back up. Something recently got me thinking, wow, Voltron's lips sure are drawn prominently, aren't they? Yes, I was thinking about Voltron's lips. It amused me so much to realize that, here I was, a grown man thinking about Voltron's lips, that I made a Twitter account for it: @VoltronsLips. The bio?

"And I'll form...the head."

Because that's what Keith, the leader-guy, always says at the end of the forming of Voltron, which, if you haven't figured out, is a gestalt robot thing, a la Devastator, where five mechanical lions, piloted by humans, join together to form a super-robot.

Okay?

Okay, and then as I'm tweeting away as @VoltronsLips, @LenSanook points me to the video below, and it is now my favorite song of all time.

[youtube www.youtube.com/watch

So you probably knew about it, it's been around a little while, but I'm having a religious experience over this.

A Song to Unfollow By

I'm doing some light Twitter culling tonight, as the relentlessness that is the Torrent of Feelings, the constant barrage of snark and attacks and outrage and disgust, is getting to be too much. Tonight I came upon this song by Jonathan Mann (the song-a-day guy who has been kind enough to pal around with me a little on Twitter). It's just right.

[youtube www.youtube.com/watch?v=9YRdaBOZ-q4]

Inexhaustible Novelty and its Discontents

If you follow technology news at all, you’re probably tired of hearing variations on the word “disruption.” You’ve maybe heard the words “revolutionary” or “revolutionize” a few too many times in the past decade, to the point where it sounds kind of silly. I get that. While it’s undeniable that the Age of the iPhone has fundamentally changed so much of our day-to-day lives, the constant acknowledgement of these changes, and the fact that it never seems to be enough, can be a bit much. I get that.

Ian Bogost writes at The Atlantic of "the inescapability of docility" as a result of our domestication-by-tech, and is worn out:

A torpor has descended, the weariness of having lived this change before—or one similar enough, anyway—and all too recently. The future isn’t even here yet, and it’s already exhausted us in advance.

Well, not me, but I think I can sympathize somewhat with the idea that the current pace of technological novelty can be exhausting. There’s a computer in your pocket, and then in your TV, and on your face, and on your wrist, and every time a new embedding of consumer technology plants a flag, the culture scrambles to take part in the Grand Discussion about What It All Means. Including me! But if it’s not your bag, if you’re just trying to work around all of this change and rapid adoption, I can see how the past year or so of consumer tech discussion could induce severe eye-rolling.

I wonder if this is just the way things are going to be from now on, with new paradigms shifting at such a clip as to manufacture a constant need for reanalysis of What It All Means, or if there’s something particular to this span of ten to fifteen years. Technological advancement does seem to be on an ever-steepening upward slope when you think about the span of time between, say, the printing press and the PC versus the PC and the Apple Watch. Perhaps this is just something we have to get used to. (Insert ponderings about the Singularity here.)

But there does seem to be something novel about the developments of the last decade or so. The more-or-less simultaneous advent of things like capacitive touch screens, super-high-resolution displays, tiny desktop-class processors, high-capacity solid-state local storage, near-ubiquitous wireless broadband connectivity, and gargantuan cloud server capacities has conspired to upend so many things about our society so quickly that it’s hard to imagine a convergence of similarly transformative technologies continuing to occur decade after decade as it has. We may have just been riding that rare elevator on the otherwise long rock-climb up the technological mountain.

If that’s correct, and things will soon go back to “normal” (whatever that means), perhaps those who are exhausted by all of this novelty will get a break, and rather than, say, 25-year-olds suddenly feeling old because of those spunky 20-year-olds who are on the cuttingest of edges, we can go back to, say, parents being confused by their kids, with technological generations more closely mirroring biological ones. Today, the phrase “in my day” can mean a couple of years ago, as in, “In my day, tablets only had 3G connectivity!”

The thing is, the pace of recent change has created a demand for its continuous regeneration. Products and services that “change everything” are now expected, even demanded, on a yearly basis. But that’s not a technological demand, really. It’s a cultural and commercial one. Again, I have some sympathy with Bogost, as he laments the spate of “think pieces” and navel-gazing (or rather wrist-gazing) heralded by the Apple Watch and its contemporaries:

I’m less interested in accepting wearables given the right technological conditions as I am prospectively exhausted at the idea of dealing with that future’s existence. Just think about it. All those people staring at their watches in the parking structure, in the elevator. Tapping and stroking them, nearly spilling their coffee as they swivel their hands to spin the watch’s tiny crown control.

The enemy here isn’t the technology itself, it’s the manufacturing of its necessity. I’m on record as being skeptical of the Apple Watch’s reason for existing, a skepticism similar to what I felt when we first met Google Glass (well, maybe not first met…I had to get over how neat and future-y it was before I could begin to arch my eyebrow). While smartphones and tablets, I believe, really did provide a level of utility that made their usefulness almost self-evident, some of the more recent Revolutionary Products do seem to be solutions searching for problems. Smartwatches, Google Glass, Oculus Rift, even curved displays; while smartphones eased into and complemented the flow of our lives, these other things almost need to be forced in and adapted to. So the ennui Bogost experiences here may not be so much with Glass, but with Glassholes. And I sympathize.

I obviously don’t share the sentiment overall, however. Of course our consumer culture will always be hungry for New. Our journalistic culture, that ravenous web content beast, will always demand to be fed more and more New. And much of this New will be thoroughly prodded and examined and consumed, and then summarily discarded.

But just as one deals with frustration with New England weather, wait five minutes. Because when we least expect it, something genuinely novel, useful, and, well, disrupting will emerge, and all those pretentious think pieces and navel-gazings will give way once again to a sincere Grand Discussion about What It All Means. And we’ll actually need it.

And rather than be exhausting, it will be energizing.

Angels in the iPod: Lawrence Krauss on TWiT

Physicist Lawrence Krauss was recently the guest on Triangulation, the interview program on the TWiT network. It's one of those lovely convergences where science and skepto-atheism cross paths with tech media, so I thought it'd be a good thing to post here.

Content-wise, it's fairly introductory stuff. If you follow Krauss's science popularization work, you probably won't get a whole lot new here. But host Leo Laporte is obviously enamored of his interview subject, and the conversation touches on some of what I try to cover here at iMortal, how technology and science are parts of our lives at a cultural level and at the level of personal meaning. They note that you can't appreciate your gadgets unless you accept the science that makes them work, or as Leo puts it, there are no angels in the iPods. "If you reject science," he says, "you reject everything that science has brought us."

[youtube www.youtube.com/watch

Final Fantasy VII's Final Battle Against Sephiroth...Sung A Cappella

Smooth McGroove, he who produces amazing a cappella renditions of music from video games, has created his masterpiece.

I've previously heaped praise on him for his versions of the Final Fantasy VII battle and Mega Man II Dr. Wily stage themes, and my absolute favorite, the DuckTales Moon theme. They all delighted me.

But this, well, this is something else entirely. Here's Smooth McGroove doing "One-Winged Angel," the theme of the final battle in Final Fantasy VII versus Sephiroth -- complete with a Latin-singing choir of multiple Smooth McGrooves. Not only is it musically impressive (an entire orchestral piece performed entirely with his voice), but it's also his best video editing yet.

[youtube www.youtube.com/watch

And those scenes from the battle with Sephiroth, man, that brings back some strong feelings. I tip my hat to you, Mr. McGroove.

When in Doubt, Blame Slender Man

Image by Marijune Alejo

This is entirely predictable, and disappointing. Because of a couple of recent incidents in which young people have committed crimes over the perpetrators’ imagined connection to the entirely fictional being Slender Man, certain corners of the media were sure to be on the lookout for anything that might constitute a “trend” or “epidemic” of Slender Man-related offenses.

And they got one! Last week, a teenage girl in Pasco County, Florida, allegedly set fire to her family home after an argument. Afterward, she apparently texted an apology to her parents. Luckily, no one was harmed, but the girl was charged with arson and attempted murder. You’d think this would be news enough, but then there’s this shocking angle. The Tampa Bay Times reports:

The investigation also revealed that the girl frequents the websites creepypasta.com and souleater.com, which are both associated with Slender Man, the fictional internet character who was said to be the motivation behind two 12-year-old Wisconsin girls stabbing and nearly killing a classmate earlier this year.

That was enough for Huffington Post and Fox News to run with.

The HuffPo headline reads, “ANOTHER Slender Man Attack? Teen Allegedly Burns House Down With Family Inside.” Fox tells us, “Fla. teen with Slender Man obsession sets fire to home with family inside.”

The HuffPo headline follows Betteridge’s Law (“Any headline which ends in a question mark can be answered by the word no”), as you can imagine. Nothing in that piece or in any other reporting I’ve seen indicates that this event had anything to do with Slender Man, and the teen’s interest in sites that feature Slender Man appears to be entirely coincidental.

You know who else thinks that? The investigators. Here’s the reporting from WTSP:

At this time, investigators have no evidence to believe that she set the home on fire because of the violence found on these websites, but authorities remain concerned.

Predictably, and probably just to have something to say, the sheriff issues a general warning about those scary interwebs:

Pasco Sheriff Chris Nocco says that while Slenderman and websites like CreepyPasta and SoulEater may not have directly led to the teen setting the fire, he wants parents to be aware of their existence and the content they contain.

There’s obviously nothing wrong with parents being aware of what content their kids consume online, and obviously there’s no direct line here.

Fox’s headline goes in a slightly different direction, claiming that the teen has a Slender Man “obsession.” No reporting supports that characterization, but it serves to draw a connection where perhaps none exists. There could be a connection, but no one’s reported any evidence as of yet.

From my limited exposure, the kind of material that comes from these websites is clearly inappropriate for certain young people. It’s not for kids, and emotional teens are impressionable, so I don’t want to downplay the influence that violent media could have on a young mind. By no means.

Some of the reporting notes that the teen in question was also consuming plenty of other violent media online, but since those works aren’t known to a general audience, they go largely ignored in the clickbait pieces.

My critique here is of some in the media choosing to infer or imply a nonexistent connection between a hot-button Internet phenomenon and a genuine crisis for a real family. Who knows what effect any of the media she’s consumed may have had? And who knows what else might be going on in this scared, angry girl’s life that could have driven her to this kind of act? Crying “Slender Man” is easy, and exploitative of a family in a terrible time.

Hey, I have a lot of fun with Slender Man. I find it fascinating that a mythical horror-being, birthed online, whose creator has explicitly said it is entirely made up, has had such an impact on imaginations and psyches. It’s obviously fictional, and yet it stirs something in people’s lizard brains that makes them truly scared. So I’ve made funny images with Slender Man, I like to make jokes about him, and I like to tweet this:


If anything bad were to happen to me, or if I were to do something crazy, would that be ANOTHER SLENDER MAN ATTACK?

Probably!

The Less-Than Doomed E-reader

  Image by Shutterstock.

I’ve been seeing more and more writing lately about the allegedly imminent death of standalone e-readers (and really, Kindles, because no one is buying Nooks or Kobos). It seems that sales of the devices year over year have been trending downward, spurring many to wonder if the entire category is in its death throes.

But as I noted in 2012, e-readers aren’t as perishable as the more rapidly-changing category of phones and tablets. I wrote:

Think about your TV set. If you’ve bought one in the past eight years or so, you probably have a perfectly good flat-screen LCD or plasma HDTV set that you have no reason to upgrade, unless you’re dying for a bigger screen than you have. But chances are the change in the performance of the device itself is not something you’re probably even thinking about.

I think this is what it’s like for Kindles and the like. You use your TV to watch video content, and that’s about it. Very little has changed fundamentally in recent years to compel frequent upgrades. Likewise with e-readers: you’re buying one to read books, and that’s, again, about it. … My wife has a Kindle 2 from 2009, and isn’t the least bit interested in upgrading. She loves it.

(I should note, she only this past week upgraded to a Kindle Paperwhite, but she had her original Kindle for five years. Almost no one keeps a smartphone for five years.)

Todd Wasserman at Mashable recently made this same point:

Unlike smartphones or tablets, e-reader models don’t really evolve, so there’s no need to upgrade. A Kindle you bought in 2011 is pretty much the same as the one you’d buy in 2014. [And] if you own a tablet, a single-function e-reader is also a luxury.

Interestingly, a few months ago I sold my own Kindle because I was reading so much on my iPad. And now I regret it, so presuming I can scrape together the scratch, I’ll eventually be in the market for a new one again.

Kevin Roose echoes the idea of the Kindle as an unnecessary luxury, writing, “The death of the standalone e-reader might be good news for consumers, who will have one fewer gadget to buy and lug around.” But I think that’s overthinking it. Kindles are too small and light for the word “lug” to apply to their transport. But there’s no denying that the pressure to buy an additional device that largely mimics the functionality of something you already have (presumably a high-resolution tablet or large, hi-res phone) is something most folks can do without. Especially for those who aren’t voracious readers.

Roose also understands why smartphones and tablets don’t quite cut it for devoted book readers:

[T]here’s no getting around the fact that smartphones aren’t designed for focused, sustained reading. … [They] breed short attention spans. On a phone or a multi-function tablet, e-books have to compete for attention with Facebook, Instagram, Pandora, Angry Birds, and everything else you do. It’s the difference between watching TV intently, and watching TV while folding laundry, talking on the phone, and doing the crossword puzzle.

All of this leads me to think that e-readers are not doomed, but that they’re going to cease to be an explosive category of mass market technology. Instead, I think we’ll see them continue to be honed and improved for a slightly niche market of frequent book consumers. And since they don’t require frequent upgrades on par with phones, I wouldn’t be surprised if we see two general categories of e-reader devices:

  1. Free or nearly-free commodity e-readers that Amazon may just give away to Prime subscribers, for example, because they encourage e-book purchasing, and
  2. High-end “luxury” e-readers (like the Paperwhite) with advanced, ever-more-readable e-ink screens, improved lighting, and premium builds.

And that would be fine! Amazon alone could sustain that kind of market, and even other companies like Kobo could carve out their own corner of the market with their own takes on the luxury e-reader.

So while the adoption of e-readers may be flattening out, I think the device category itself isn’t going anywhere. You just may have to pop your head into coffee shops and libraries to find them in the wild.

The Moore's Law Express Hits the Great Ceiling: A Possible Hitch to Alien Contact

Amid the discussions of the potential for contact with extra-terrestrial civilizations, there’s one big buzzkill I don’t recall ever hearing posited as a possibility for why we haven’t made contact yet: Because it can’t be done.

We are used to the idea that technology advances exponentially, that we are all riding the Moore’s Law Express to the Singularity, and that as long as we don’t destroy ourselves via world war, climate catastrophe, or extermination by the artificial intelligences we’ve created, we will be capable of wonders that we can’t even imagine today, just as our nomadic ancestors of 100,000 years ago could never have imagined a steam engine, library, vaccine, or iPhone.

It follows that any other species on another world that has developed intelligence will get to hitch a ride on the same train. The details will differ, what they figure out first, what they emphasize, and what they’re physically capable of manufacturing will be different, but given a clear path, they too will achieve unimaginably advanced technologies that will, among many other things, allow them to voyage the galaxy and make themselves known to its other inhabitants.

There are lots of reasons to think this won’t happen, or if it does, that we won’t ever be aware of it. In an excellent piece by Tim Urban that I found via John Gruber, several reasons for our ongoing celestial loneliness are offered, all pretty sensible (except the one about the government cover-up, which he also thinks is silly). Some examples:

Super-intelligent life could very well have already visited Earth, but before we were here. In the scheme of things, sentient humans have only been around for about 50,000 years, a little blip of time—if contact happened before then, it might have made some ducks flip out and run into the water and that’s it.

Getting the sole experience of First Contact is so like the ducks, you know?

Another follows the metaphor of ants trying to comprehend a nearby highway (one presumes they cannot):

[I]t’s not that we can’t pick up the signals from Planet X using our technology, it’s that we can’t even comprehend what the beings from Planet X are or what they’re trying to do. It’s so beyond us that even if they really wanted to enlighten us, it would be like trying to teach ants about the internet.

That’s very much in line with the Moore’s Law Express, where it just so happens that the Planet X-ians are so much further down the track that we can’t even see them.

Urban also puts forth the idea of a “Great Filter,” a kind of universal civilizational buffer zone that extraordinarily few species ever cross. Maybe it’s because of planetary or astronomical cataclysms killing off entire biospheres before they can evolve, or maybe it’s a near-inevitability of intelligent species destroying themselves, but either way, there may be some Rubicon that finishes off nearly all civilizations before they can become space-faring, let alone Type II or III.

(A side note about Type III civs, the kind that harvest an entire galaxy’s energy: Urban talks about how there might be a relatively small number of them that can inhabit any one galaxy, and I’m thinking, if they’re defined by their ability to eat up the energy of a whole galaxy, I have to imagine it’s a “there ain’t room for both of us in this one-horse town” kind of thing, where it’s not 1,000 Type IIIs in a given galaxy, but one, ever. But I digress.)

And he posits many other possibilities, and you should read the whole piece, because it’s really good.

But my thinking, which again is a real bummer, is that we need to consider the possibility that we haven’t made contact with alien civilizations because it simply can’t be done. The Moore’s Law Express actually does have a final stop at which technological advancement more or less halts because of the limits of physics, or even just the limits of any intelligence (natural or artificial) to manipulate physics.

It might just be that traversing light years in a span of time that allows for survival, proliferation, or communication is simply impossible. It may be that there is no way to send communications signals of any known kind across the vast stretches of nothing that would allow another intelligence to receive them, let alone understand them.

Maybe there can and will be no warp speed, no folding of space, no teleportation, no subspace communications, no navigation of wormholes, no uploading of consciousness to interstellar servers, no Dyson Spheres, and no Singularity. As opposed to a Great Filter that finishes off civilizations on the way up, there may instead be a Great Ceiling, a lid on reality that says we (meaning we on Earth and any other species in the Universe) can go this far, but no further.

Now look, I know that thinking this way sucks, and it’s no way to get kids excited about science and exploration, or to rally the public to support more investment in scientific research. It is in our interest as a species and a civilization to cheerfully ride the Moore’s Law Express as though it has no terminus. But if the conversation about why we haven’t made contact with aliens is going to be an honest one, I think it has to at least acknowledge this sad possibility: Not that “they” might not be out there, but that they are, and we simply can never know for sure, and neither can they.

Okay, now pretend you never read this.

By the way, one potential way to travel the stars is by way of a Bussard Collector, and I just happen to have written a song about one. See? I have hope.

Skepticism Warranted in the Panic Over Twitter Changes

Image by Shutterstock

The thing that's been giving the online world a collective ulcer is the idea that Twitter is going to fundamentally change the way its service works by bringing Facebook-style curation to its real-time firehose. But is it really? Despite the recent rending of garments by the Twitter faithful, I think skepticism is warranted.

The panic began with the implementation of a system whereby tweets favorited by people you follow might appear out of context in your timeline, and things you favorite might emerge likewise in other people’s feeds. I wrote about how this was an ominous sign of Twitter mucking with what makes it great: real-time access to the zeitgeist, filtered only by whom you choose to follow.

Then the Wall Street Journal reported that Twitter’s CFO Anthony Noto had indicated that changes were coming to the traditional Twitter timeline:

Twitter’s timeline is organized in reverse chronological order, a delivery system that has not changed since the product was created eight years ago and one that some early adopters consider sacred to the core Twitter experience. But this “isn’t the most relevant experience for a user,” Noto said. Timely tweets can get buried at the bottom of the feed if the user doesn’t have the app open, for example. “Putting that content in front of the person at that moment in time is a way to organize that content better.”

Mathew Ingram’s analysis of this at GigaOm is what really had people reaching for their pitchforks and torches, writing that it “sounds like a done deal” and that coming modifications “could change the nature of the service dramatically.”

That sounds really scary to folks who have stuck with Twitter since the beginning and truly value what it provides. Twitter stands in stark contrast to the heavily curated experience of Facebook; its immediacy gives it its power and unique position in the media universe.

But as even Ingram acknowledged in a subsequent post, this change -- an algorithmic approach to the timeline -- was probably meant for the “discover” tab of the site, which is already heavily curated by the service, and within Twitter’s search feature. In fact, that’s what the Journal article even says:

Noto said the company’s new head of product, Daniel Graf, has made improving the service’s search capability one of his top priorities for 2015.

“If you think about our search capabilities we have a great data set of topical information about topical tweets,” Noto said. “The hierarchy within search really has to lend itself to that taxonomy.” With that comes the need for “an algorithm that delivers the depth and breadth of the content we have on a specific topic and then eventually as it relates to people,” he added.

Sure sounds to me like he’s just talking about search, since that’s what he actually says. It didn’t help matters, I suppose, that whoever wrote the Journal’s headline chose to blare, “Twitter Puts the Timeline on Notice.” Because, no, it didn’t. Not here, anyway.

After Ingram’s first piece, in fact, Dick Costolo, Twitter’s CEO (who I presume is in a position to know) tweeted, “[Noto] never said a ‘filtered feed is coming whether you like it or not’. Goodness, what an absurd synthesis of what was said.”

What really settles all of this for me, though, is an interview given by Katie Jacobs Stanton, who is Twitter’s new media chief. What I read from what she tells Re/Code is that Twitter’s current and long-term strategies continue to revolve around the real-time, unfiltered timeline. Here’s Stanton on Twitter as a companion to live TV viewing (emphasis mine):

What’s happened is that every day our users have been able to transform the service into this second screen experience while watching live TV because Twitter has a lot of those characteristics. It’s live, it’s public, it’s conversational, it’s mobile. Television is something really unique, really powerful about Twitter and we’ve obviously made a big investment in that whole experience.

Here she is on the value Twitter provides during breaking news events:

We have a number of unique visitors that come to Twitter to get breaking news, to hear what people have to say. Joan Rivers died [Thursday] and people were grieving on Twitter and talking about her, but they’re also coming to listen to the voices on Twitter as they pay respect to Joan Rivers. This happens all the time. There’s also the broader syndication of tweets. News properties of the world embed tweets and cite tweets. That’s really unique.

Note how she emphasizes that Twitter is not just incidentally serving as this kind of platform, but that this role makes it uniquely crucial to the wider media universe, both for consumers of news and for those producing it.

She later refers to Twitter as “the operating system of the news.” This does not sound to me like a service that is intent on dismantling what its own media boss is touting as its chief virtue.

Twitter will of course go through changes, and its experiments with favorites-from-nowhere are an example of that. It won’t always be exactly as it is. But I’m beginning to believe that the recent panic (including my own) is unwarranted. I suspect that the people at Twitter understand why people use it as religiously and obsessively as they do, that they use it very differently from the way they use Facebook, and that there always needs to be a way for a user to tap into the raw stream.

Maybe, down the road, the initial experience Twitter presents a user with is one that is more curated, more time-shifted, but I suspect that the firehose will always be there for the faithful.

Measuring the Immeasurable Heavens

[youtube video embed]

In this video from Nature, we are introduced to Laniakea, the incomprehensibly vast supercluster of galaxies of which our own Milky Way is an infinitesimal part. Astounding.

"Laniakea," by the way, is Hawaiian for "immeasurable heavens." Well, they measured them.

Learning Not to Be Tormented by the Twitter Torrent

I took a vacation from work last week, but I’m not good at vacations. One way or another, I usually find some way to taint what should be a chance to relax with stress and labor. Sometimes that source of stress can be my own children. Not so much this time. This time, it was Twitter.

At first I had narrowed this epiphany to the bunch of jerks who attacked me when I tweeted in support of Anita Sarkeesian, and after my post on video games’ brutalization of women. And yes, that was stressful, and it’s not my fault that lots of people are jerks and decide to act on their jerkishness. But as the sun set on the last day of my time off, I realized that jerks on Twitter weren’t the sole problem for me, nor even at the core. It’s Twitter.

Two weeks ago, I, like hundreds of thousands of people I suspect, allowed the harrowing and upsetting news from Ferguson, Missouri to eat me alive, night after night. I felt a kind of moral obligation to keep my eyes affixed to Tweetdeck as every outrageous development crossed the zeitgeist in real time. I could be of no help, and I couldn’t change the minds of those who thought the police’s siege was justified, but I wouldn’t allow myself to stop internally churning over every distressing incident. Tweet by tweet. Helplessly watching and tweeting my feelings only served, in the end, to put a dent in my well-being.

It wasn’t a bad idea for me to be informed, or to feel a deep compassion for the peaceful people whose very humanity was being challenged by our system, there represented by a militarized police force. I know there is real value in being well-informed and empathetic. But there is educating yourself, and there is abusing yourself.

In the midst of the blowback over my tweets and posts about Anita Sarkeesian and video games, I found myself wounded with every attack. Sometimes the lashings came from people I sort of knew on Twitter, which stung in a particular way that unleashed all sorts of self-doubting anxieties. But even the stupid and overtly hostile attacks from trolls and other miscellaneous dingbats hurt. These were snarky, mean-spirited attempts at zingers from fools, devoid of sense, and they still upset me.

Put aside why I “let” these things affect me as severely as they do. Folks, this is what it is to be like me. The better question to ask is why I place myself in such a position where I can be affected.

Marco Arment recently discovered something similar after a Twitter-fight with a tech journalist, which apparently really got to him, and he’s not exactly a shrinking violet:

Much of the stress I felt during this is from the amount of access to me that I grant to the public….We allow people access to us 24/7. We’re always in public, constantly checking an anonymous comment box, trying to explain ourselves to everyone, and trying to win unwinnable arguments with strangers who don’t matter in our lives at all.

We allow this access because of what we feel we’re getting in return: all the benefits of the Twitter firehose, every tweet in real time from those we’ve chosen to follow, plus (and this is the big one) a platform to reach as many people who choose to follow us, plus however many people follow them, should they pass our content along. When Twitter’s great, it’s really, really great. “It’s like 51% good and 49% bad,” as Brent Simmons recently put it. “I don’t see it getting any better. Hopefully it can hold the line at just-barely-worth-it.”

And it’s not just about people ganging up on me, it’s about exposing myself to the Great Torrent of Feelings that Twitter can become. As I just talked about in my last post, Twitter and other social media can, at their worst, serve as platforms for one group of people to vociferously agree with each other at the expense of another group of people, who are just, like, “the worst.” Regardless of my orientation to this dynamic, whether I’m in the agreeing group, the dissenting group, or simply watching it take place, it’s dispiriting, frustrating, and if it’s about an issue or group of people I care about, upsetting.

Here’s Frank Chimero, expressing something that rings true for me as well:

My feed (full of people I admire) is mostly just a loud, stupid, sad place. Basically: a mirror to the world we made that I don’t want to look into. The common way to refute my complaint is to say that I’m following the wrong people. I think I’m following the right people, I’m just seeing the worst side of them while they’re stuck in an inhospitable environment. It’s exasperating to be stuck in a stream.

And here’s a kicker: While the groupthink and mob dynamics are Twitter at its worst, I think Ferguson is an example of Twitter at its best: the real-time documentation of an existentially crucial event, with contributions from people who are participants and first-hand witnesses to developments, along with analysis and reaction from people watching from outside. But just because it’s important and useful doesn’t mean it’s healthy for a given individual to drink all of it in night after night.

As I grew wearier this week, I took breaks from Twitter. I didn’t do any kind of cold-turkey abstention or detox. I just put it aside for a while. I took almost one whole day away before finally writing and posting my article on video games, and over the course of the week, I affirmatively decided not to allow Twitter or social media to be a part of my routine or my passive phone-checking. I chose not to put it in front of me when I was playing with my kids. I assembled some toys and organized my daughter’s room with only a podcast playing, and little phone-checking. I rode my bike more than I ever have. I turned off all of my iPad’s notifications and read a book on it. I even checked out some dead-tree books from the library, in large part so that when I was reading them, the Great Torrent of Feelings could not reach me.

As I so often write under this blog’s banner, social media is best used with great intention. I usually mean this in terms of fostering your personal identity or in curating what content you’re exposed to. But it also applies to how much of your time and attention you allow it to claim overall. I have defaulted, I fear, to a stance in which the Twitter Torrent was granted passage through my nervous system as often, and for as long, as anyone else using it wanted. This week, while not the most relaxing and diverting vacation I could have hoped for, has at least taught me to be more specific and, yes, intentional, about my time in the Torrent.

Hat tip to Alan Jacobs, from whom I found a couple of these quotes, and who usually deserves many hat tips.

To Save Us From the Police, It's Cameras All the Way Down

In order to bring more justice into the American criminal justice system, we may all need to point cameras at each other.

That’s where a lot of the conversation is going in the wake of the Michael Brown shooting, which of course led to the days-long crisis in Ferguson, Missouri. The thinking goes that if police officers are all equipped with cameras that record every interaction with civilians, we may actually get better results from both ends of those interactions.

In a report by German Lopez at Vox we get an idea of what the aim of police body cameras would be:

The devices are small cameras that can be attached to a police officer’s uniform or sunglasses or worn as a headset. Such a camera could have fully captured the entire confrontation between Brown, an 18-year-old black man who was unarmed at the time of the shooting, and Ferguson Police Officer Darren Wilson.

But without the cameras, the public is left with conflicting accounts from police and eyewitnesses about what, exactly, happened.

Jay Stanley of the ACLU told Vox why that organization is backing the idea:

[Cameras] have the potential to be a win-win situation. A lot of departments are finding that for every time they’re used to record an abusive officer, there are other times where they save an officer from a false accusation of abuse or unprofessional behavior.

It’s undeniable that this, in the abstract, would be invaluable. Imagine the grief, effort, time, and expense saved if what transpires between police and civilians were reliably recorded.

Communities that have already adopted this kind of thing have reported encouraging results, with fewer complaints filed against police and the police themselves using less force. The New York Times last year quoted one such police chief as saying:

When you put a camera on a police officer, they tend to behave a little better, follow the rules a little better … And if a citizen knows the officer is wearing a camera, chances are the citizen will behave a little better.

Your privacy-concern flags should be going up at this point, and for good reason. The most obvious problem is determining exactly who controls whether that camera is on or off. From my reading, it seems the only solution is making it the rule that cameras must be recording during all interactions, but I’m not sure how you prove that this is done reliably. One might consider controlling the cameras remotely, so that the officer can’t choose when the camera goes off, but what if the officer needs to go to the bathroom, or call their spouse about a private matter? Do they need to request that the camera be turned off? And then how do we know they are requesting it be turned off for legitimate reasons?*

We don’t. And this is where we get to an even more interesting idea, this one from Mike Elgan, writing at Computerworld. He, too, advocates for cameras recording every police-civilian interaction, but he wants the cameras pointed in the other direction. “Shouldn’t recording your own police interrogation be a constitutionally protected right, like the right to an attorney?” he asks.

Elgan doesn’t limit his thesis to interactions with police, also advocating for the free recording of politicians and lobbyists, children and caregivers, and others. Here’s what he has to say about police encounters:

It should be perfectly legal to openly videotape the entire conversation, as well as when we’re questioned or interrogated. They’ve got a dash cam or interrogation room camera pointed at us. We should have one pointed at them, too. (The knowledge that such cameras are allowed might prevent abuse…)

The principle he invokes is a compelling one: as the surveillance state grows (inexorably?) in its breadth and power, we as individuals should claim that power for ourselves as well. Accountability for our actions goes both ways, as those who make the rules and enforce the rules also have to follow them. A civilian surveillance state, a surveillance grassroots if you will, could theoretically create the balance that many worry about when it’s only the police who control the cameras. And this is now especially feasible, since the technology readily exists: you probably have a device within your grasp right now that is more than capable of recording video or audio of a conversation.

But to take a step back, here’s Jonathan Coppage at The American Conservative, worrying about what good cops might lose if they go around “dressed as Google Glassholes”:

One only has to glance in the window of a local patrol car to see the sprawling array of screens, keyboards, and communication devices designed to link the officer to all the information they could need. The problem being, of course, that the most important information the common cop needs still can’t be pulled up within his car: the knowledge gained from building relationships with those in the community he patrols.

That relationship-building is a core component of a police officer’s mission … [and] requires a certain amount of discretion, getting to know a neighborhood’s warts as well as its virtues. The conversations that give an officer an accurate picture of the seedy but not destructive side of his citizens’ lives could very well be more difficult or awkward should the policeman’s sunglasses be rolling film.

It’s clearly dicey. And despite what seems like the utter omnipresence of smartphones, we can’t presume everyone has one easily at hand, or has the wherewithal to start recording extremely stressful and often hostile confrontations. But as a tool, mutual surveillance might still be extremely useful for keeping the peace and encouraging cooler heads in tough spots. After all, the officer may need only suspect he’s being recorded for some pressure to be released.

Update 8/27/14: Missouri's Sen. Claire McCaskill has just said that she believes all police officers, nationwide, should be required to wear cameras, though she has not introduced legislation.

*The public radio program On the Media has a great piece on cameras in interrogation rooms that complements this topic very well, where the question of who gets to turn the camera off is also raised.

The Pixel-Based Employee: In Praise of Remote Working

I work from home, but not because I wanted to stay home with the kids. At the time of the arrangement, I had just moved to Maine; this position opened up, and my employers were willing to give the idea of my staying in Maine and working remotely a shot. It worked out, and I’ve been very glad and grateful for it.

While I know there are a lot of benefits to in-person interaction with one’s coworkers, and that there really is something useful about being able to pop by someone’s desk to check in on something or air an idea, I do think there is something very freeing about remote work, a freedom that binding oneself to an office environment can stifle.

Mandy Brown, who writes the wonderful A Working Library blog, has a great piece on making the best out of remote-working teams, scenarios in which most or all members of a working group are in disparate locations. “Our communication is no less real for its delivery via pixels rather than sound waves,” she says, and I agree. She writes:

Remote working encourages habits of communication and collaboration that can make a team objectively better: redundant communication and a naturally occurring record of conversation enable team members to better understand each other and work productively towards common ends. At the same time, an emphasis on written communication enforces clear thinking, while geography and disparate time zones foster space for that thinking to happen. In that way, remote teams are more than just a more humane way of working: they are simply a better way to work.

I recommend the whole thing, but a central bit of advice she gives really rings true to me: the bit in the above quote about redundant communication. Because so much of remote work is done over email and other written platforms, it’s understandable to think that anything said in those media is now set in stone, as it were, enshrined as part of the Official Record, and generally understood. But as anyone who deals with a lot of email knows, these messages can come through in such volume, often with so much content and insufficient context, and often with tons of gobbledygook embedded in endless email threads, that it’s rather easy for important things to be missed. Decisions might have been made, ideas finalized, plans made, and yet not everyone will be up to speed because the data was lost in the back-and-forth. Better to be repetitive than to lose time and effort to easy misses.

While my job has a main office, there are also branch offices and other folks in different parts of the world, so we are in large part a “remote team,” even if unofficially, meaning that we have to contend with this kind of thing organizationally anyway. As Brown says in her piece, it makes sense for everyone in organizations and teams to get into the headspace and practices of remote working, regardless of whether they themselves come into the office every day:

It’s easy … for the remote team members to end up as second-class citizens, always a step behind their in-office counterparts. Many remote workers I spoke to voiced anxiety about being neglected, simply because their colleagues naturally prioritized the needs of the people they could see face-to-face each day. It’s necessary for everyone on a team to adapt to remote work, even those who continue to commute to a traditional office each day.

I don’t feel this kind of neglect, other than missing out on what I mentioned before: the unavoidable fact that there are benefits to being in the same space as other folks, learning more about them personally from day-to-day experience with each other, which leads to a particular kind of understanding and chemistry. But as an introvert, I find that working remotely removes many of the stresses I might feel from constant human proximity and the social pressures that stem from it. For me, it’s been a great arrangement, and I think I perform better as a result.

Bolstering my good feelings about remote working is a piece at Quartz by Anna Codrea‑Rado on the opposite of remote working: open-plan offices. And it turns out, while kind of faddish in recent years, they’re not all that great for, well, work:

Writing in the Journal of Environment and Behavior, Aoife Brennan, Jasdeep Chugh and Theresa Kline found that such workers reported more stress, less satisfaction with their environment and less productivity. Brennan et al went back to survey the participants six months after the move and found not only that they were still unhappy with their new office, but that their team relations had broken down even further.

Other research she cites notes the problem of distractions from conversations, phones, and other sounds stunting productivity. Additional research shows that workers in open-plan situations were taking more sick days – quite possibly because, well, it was easier to get sick!

And again, this is an introvert’s nightmare, having to do productive, thoughtful work while always feeling the press of impending social stress.

Now, when I worked in media research at the 2008 Hillary Clinton presidential campaign, we were in an open-plan situation (“plan” used loosely), but we were all so unbelievably consumed and busy that we were largely de facto cubicled. There just wasn’t time to look up or over at your coworkers for much. But even then, it added to the stress for me. Which is saying something.

I know there is a middle ground here, an office environment in which people’s personal space is respected, and a good in-person rapport is free to develop (assuming such a rapport between two given individuals is possible). I sometimes see office environments I admire and think about how it might be a cool thing to work in a setup like that. But I wouldn’t want to trade the freedom and, yes, increased productivity that remote working provides me. I usually wear pants, too. Just to make it feel more like work.

But I’d never do remote working with the kids at home all day. Not at their current ages, anyway. I can’t parent rowdy and needy toddlers and at the same time work to save secularism. For now, my kids have a great daycare that really enriches them, and builds them up socially.

They commute.

You Can Be Jailed for Internet Blasphemy Before You've Even Committed It in India

If you use the Internet in a particular state in India, you might be jailed for pre-crime.

I wish I were being overly dramatic, but it really does seem to be the case that a law amended earlier this month presumes authorities in the Indian state of Karnataka to have Minority Report-like precognitive powers, allowing them to arrest someone who they think might at some point violate the state’s Information Technology laws.

Let me back up a bit. What first caught my attention was a bit of news hitting the tech blogosphere that Karnataka police were letting it be known that citizens would be violating the law by the mere act of “liking” something on Facebook that has “an intention of hurting religious sentiments knowingly or unknowingly,” and that folks should report any such activity they see to the police. (Never mind that it doesn’t make sense that something could have “an intention” of doing something “knowingly or unknowingly.”) This is reprehensible on its face, criminalizing not only “blasphemous” content, but even the appearance of approval of said content. It’s a human rights violation of the most obvious sort.

But following the links deeper into the originating reports, I find that this is just part of the problem. It seems that this is a way of enforcing what’s called, amazingly, the Karnataka Prevention of Dangerous Activities of Bootleggers, Drug-offenders, Gamblers, Goondas, Immoral Traffic Offenders, Slum-Grabbers and Video or Audio Pirates Bill, or the “Goondas Act.” (A goonda is a hired thug.) And it’s the “prevention” part of that title that’s key, because it effectively sweeps any offenses under the state’s Information Technology and Copyright acts under its own umbrella, and aims to stop them before they can actually be committed, according to the Bangalore Mirror:

Until now, people with a history of offences like bootlegging, drug offences and immoral trafficking could be taken into preventive custody. But the government, in its enthusiasm, while adding acid attackers and sexual predators to the law, has also added ‘digital offenders’. While it was thought to be against audio and video pirates, Bangalore Mirror has found it could be directed at all those who frequent [Facebook], Twitter and the online world, posting casual comments and reactions to events unfolding around them. [ … ]

Technically, if you are even planning to forward ‘lascivious’ memes and images to a WhatsApp group or forwarding a song or ‘copyrighted’ PDF book, you can be punished under the Goondas Act.

And once arrested, you can be held from 90 days to a full year before even seeing a judge to make your case. It’s horrifying. One section of the act even prohibits the “publishing of information which is obscene in electronic form,” which includes “any material which is lascivious or appeal to the prurient interest.” Sunil Abraham of the Centre for Internet and Society provides the Mirror with a terrifying and yet totally plausible example of what could happen:

If I publish an image of a naked body as part of a scientific article about the human body, is it obscene or not? It will not be obscene and, if I am arrested under the [original Information Technology] Act, I will be produced before the magistrate within 24 hours and can explain it to him. But now, I will be arrested under the Goonda Act and need not be produced before a magistrate for 90 days. It can be extended to one year. So for one year, I will be in jail even if I have not committed any wrong.

So what began for me as more fuel for the fire against blasphemy laws around the world – a battle my employing organization, the Center for Inquiry, has taken on as one of its core missions – revealed itself to be a police and surveillance state run utterly amok, persecuting those who might at some point violate some arbitrary and undefinable religious or moral sensibility.

Twitter is Monkeying with What Makes it Great

Twitter has a lot of problems. It doesn’t seem to have the wherewithal to deal with abuse and harassment on its platform, it’s managed to antagonize the developer community by limiting anyone’s ability to make new apps and interfaces to the service, and, oh yeah, it still doesn’t really know how to make money for itself. But the core service is something truly valuable and truly simple, and in that simplicity it has been – dare I say it? Yes I dare! – revolutionary.

But under the shadow of Facebook’s supermassive user base and Google’s vast resources underpinning so much of what we know of as the Web, Twitter seems willing to at least experiment with making fundamental changes to what makes it so great in the first place.

Anyone using the Twitter web interface might have noticed already that not only are retweets (when one posts someone else’s tweet on their own timeline) appearing in users’ feeds, but so are other people’s favorites (when you click the star on a tweet). Not all favorites from everyone you follow, but those determined by algorithm to be of potential interest to you.

Here’s how Twitter itself explains the new order (with my emphasis):

[W]hen we identify a Tweet, an account to follow, or other content that’s popular or relevant, we may add it to your timeline. This means you will sometimes see Tweets from accounts you don’t follow. We select each Tweet using a variety of signals, including how popular it is and how people in your network are interacting with it. Our goal is to make your home timeline even more relevant and interesting.

That’s right, Twitter is playing with building its own Facebook-like curation brain. Or to put it another way, Twitter is putting kinks in its own firehose.

This is disconcerting to longtime Twitter users for a number of reasons. First is the relinquishing of control being forced on the user: what was once a raw feed of posts from a list of people entirely determined by the user will become one where content is inserted that the user may not even want there. As John Gruber put it, “That your timeline only shows what you’ve asked to be shown is a defining feature of Twitter.” Maybe not for long.

Another issue is that this content can be time-shifted, meaning that the immediacy of dipping into one’s Twitter stream for the second-by-second zeitgeist will become diluted at best, and meaningless at worst.

But also, this one relatively minor change in the grand scheme of things signifies an entirely different concept for what a “favorite” means on Twitter. It’s really never been entirely clear to me what clicking the star on someone’s tweet was supposed to signify, but as with many things on Twitter, folks have made it their own. For some it’s the equivalent of a nod or smile of approval without a verbal response, for others it serves the purpose of a bookmark, so you can return to it later. It’s never been meant as a “signal” to Twitter to provide more content like that tweet. Importantly, it’s always mainly been between the user and the original tweeter (not entirely, as one can click through on a given tweet and see all those who have favorited something), and now that’s completely gone. Now you have to assume that your favorites, along with your retweets, will be broadcast, put in front of people in their timelines.

Dan Frommer says changes like this may be necessary for Twitter’s long-term viability:

The bottom line is that Twitter needs to keep growing. The simple stream of tweets has served it well so far, and preservationists will always argue against change. But if additions like these—or even more significant ones, like auto-following newly popular accounts, resurfacing earlier conversations, or more elaborate features around global events, like this summer’s World Cup—could make Twitter useful to billions of potential users, it will be worth rewriting Twitter’s basic rules.

But with events around the world being as they are, the value of the Twitter firehose hasn’t been this clear since perhaps Iran’s Green Revolution. For Twitter to be monkeying with its fundamentals, the things that make it stand apart from Facebook and other platforms, is frightening. I have to hope that if Twitter does take this too far, another platform will emerge that can be all that was good about Twitter, and also attract a critical mass of users to make it valuable.

Maybe we should have given App.net more of a shot.

As Ye Live, So Shall Ye Google

In American counties considered the easiest in which to live, cameras, iPad apps, and jogging are among the subjects residents are googling. In the hardest counties to live in, it’s diabetes, guns, and the Antichrist. This is according to an analysis by the New York Times, which used its own metrics to determine the easiest and hardest places to live, and then partnered with Google to determine which search terms correlated most strongly. Some of the results, as reported by David Leonhardt at the Times, are surprising to say the least.

But first let’s get one aspect of this straight. What this report does not say is that the Antichrist is the single most-googled term in the hardest places to live, nor that digital cameras are the top search queries on Easy Street. It is saying that these search terms correlated most strongly with the counties in question. Common searches are common searches across the board, so at the moment I am typing this, things like “Little League World Series” and “Rick Perry” are among the terms dominating Google search at the national level, and we can assume that they are doing so even in the hardest and easiest-going counties alike.

There is little that can be inferred definitively from the Times report, but it’s very telling that when life is less stressful, one has more freedom to think about things like digital photography, and when life is a trial, one’s mind turns to weapons and eternal retribution for whatever one plans on doing with those weapons.

All this said, here are some things that struck me:

  • It is not merely that Easy Street dwellers are into digital cameras, but that they are interested in specific models, such as the Canon Elph and other point-and-shoots, which makes me wonder how easy their lives can really be if they’re even considering Elphs and not just a decent smartphone with a decent camera. Have they not heard of these yet?
  • One may be tempted to tie a high rate of searches for the Antichrist and guns together as some sort of wish for the End Times infesting the culture, but consider that “severe itching” is also on that list, which I think makes many of the other searches much more understandable.
  • “Dog Benadryl” is also on the list for the hardest places to live, which, despite the “severe itching,” raises more questions than it answers.
  • Among the search terms for the easiest places to live was the 2001 Ben Stiller film Zoolander, which frankly makes me question the veracity of this entire project. No one, whether their lives be easy or difficult, should be spending any time thinking about that.
  • The greatest tragedy to emerge from all of this? According to Leonhardt, “Searches on some topics, like Oprah Winfrey or the Super Bowl, are popular almost everywhere.”

We have so far to go.