Tuesday, February 14, 2006

Technological inevitability and intervention

I spent the weekend revising my dissertation's discussion of the "inevitability" of pervasive computing.

In my interviews, we discussed the "whys" of locative media, and among the answers were: 'we're doing it because we can' and 'we're doing it because they're doing it, whether we want them to or not'. In both cases, and in articles like this too, technological development and implementation are considered absolutely inevitable and, by extension, natural or normal.

This same belief in inevitability has also figured prominently in my informal discussions with designers over the past few years. Comments typically go something like this: 'ubiquitous computing isn't just coming, it's already here, and our efforts should go towards doing it right'.

In fact, it has been very rare for a discussion to engage the possibility of no technological intervention at all. And trying to discuss how someone's career is heavily invested in maintaining this sense of inevitability around technology is rife with social dangers. (Let me just say it's amazing how quickly a conversation amongst friends can deteriorate into accusations of academic arrogance or irrelevance!)

But where I'm struggling the most in my revisions is around the matter of intervention. In other words, if pervasive computing is inevitable, what's the best we can do? (What are the limits on our agency?) If ubicomp is going to be different or better than what we've made so far, why are we still resorting to utopian/dystopian discourses just like we did in the early days of the internet?

20 Comments:

Anonymous Will Davies said...

These are very important but tricky questions. Whenever technologies are new, anyone who voices a sense that they may be of real but limited desirability is represented as a reactionary and a Luddite. Meanwhile, anyone who suggests that interventions can temper the speed and direction of technological progress is ridiculed and compared to King Canute.

My latest effort to make the argument is in this Prospect article. I'm asking people (especially policy makers) to recognise that the building of multi-lane roads was once seen as inevitable and universally desirable. The M8 motorway in Britain ploughs a ravine through Glasgow city centre, in a way that is widely recognised as appallingly destructive. In the decades since, politics has helped temper this form of modernism, and will ultimately do the same thing, I believe, for digital infrastructure. The question is how long it takes to realise this, and to realise that conservation and conservatism are not the same thing.

04:21  
Anonymous Francois Lachance said...

I wonder if the title of the entry can bear a little revision by reversal and thus orient the discussion towards a bi-level framework...

Begin with Intervention and then ask how the sense of inevitability shapes the possibilities and types of intervention. The sense of the inevitable sometimes leads to action designed to limit negative effects. Very often the sense of a big thing coming leads to a rush and the adoption of a series of quick guerrilla-like interventions. The sense of the inevitable shortens the time span imagined for interventions.

A less monolithic view, where there is a bit of a crack in the monument of inevitability, may also lead to an appreciation of multiple temporalities for various interventions.

05:36  
Blogger Anne said...

Good points Will, thanks, and I'm looking forward to reading your article more carefully later tonight :)

Francois, your suggestion to reverse the order of the words is extremely helpful - thanks! And yes, a less monolithic view is definitely needed. I've always suspected that in practice there is much more flexibility.

12:12  
Anonymous Sanjay Khanna said...

I'm mesmerized by this dialogue and the questions you pose, Anne. It makes me wonder: What is inevitable? What broad and unstoppable forces are influencing our ideas of inevitability and perhaps shifting our attention from forces that we need to be mindful of and strongly influence?

I do see pervasive computing as one of the inevitabilities, along with widespread environmental change and a massive rise in anxiety. But I don't know why, in particular, I see pervasive computing as inevitable except for the reason that businesses, governments and investors are investing widely in it as a way to drive economic activity...and they're in too deep to back out. Widespread environmental change in the form of the direct and indirect effects of global warming is well documented. And the rise of anxiety is the human emotional component, a psychological precursor to the future shock Toffler uncritically described.

I think pervasive computing/interaction design interventions can address the anxiety and enter cracks in the "monument of inevitability," enabling emotional environments to be more conducive to calm thinking and thoughtful responses to cultural and technological acceleration. Now seems like a good time to imagine how. So...how? :-)

16:50  
Anonymous Paul said...

What I always find missed in these discussions is the issue of "complexity" -- not complexity in the "complicated or tangled" sense of the word, but in the "powerful, productive" sense. Complex systems produce, and not knowing what they produce (or are capable of producing) does not slow them down. Likewise, they are not constrained by anyone's good intentions. About all that's assured is that the more complex a system, the more it is capable of doing and the more it will eventually do if allowed to.

The real complexity in computers is not in the individual devices but in the connections between devices. The resulting system is "greater than the sum of its parts." The most complex "devices" on earth are people. When computers enable or facilitate connections between people, the level of complexity in the whole System jumps up. Way up. Some guy named Tim devises a way for computers to help scientists share and cross-reference documents and, voilà!, out pop porn sites and Amazon.com and ...

This is what technologists never want to talk about (or think about). If they crank up the complexity they may get the result they are after, but they will also get a lot of things they never intended. It is very difficult to evaluate and adjust for this unintended complexity. More difficult is getting an employer or investor to fund such evaluations. "We've got the product you've paid for done but now we'd like a few more years to study what else the complexity it creates is capable of ..."

Not likely. Even in an industry where this "what else can it do" research is expected (pharmaceuticals), it is done only to the bare minimum extent possible. Add to this the fact that complexity, by definition, is a bit chaotic in its behavior, and it also becomes clear that one can never fully predict what it will do. This is the big hole in techno-evangelists' blathering. Why is it that guys like Ray Kurzweil always assume that when computers are complex enough to have human-level intelligence, they will be either understandable or predictable?

The other factor to consider when looking at these issues is that in today's world technology advances in a market driven "vehicle" and, as everyone knows, the market drives way over the speed of reason. Market capitalism's great strength is its complete blindness. It just goes wherever, running over whatever. And the market's greatest trick is that no one's to blame -- no one is responsible for the complete lack of moral or ethical behavior of the market. More precisely, everyone's to blame and therefore no one is responsible.

My feeling, Anne, is that the sense of inevitability is justified not so much because technologists just love making new stuff, but because there is that blind, reckless, "full speed ahead and damn the consequences" market system pushing them on, grabbing anything and everything they come up with just to see if it will help the profit margins.

An intervention not directed at the flaws in market capitalism will not work in my opinion, and I have no idea (yet) how one might even begin to do that in any manner that isn't worse than futile. Before there is any beginning, though, there needs to be some awareness -- someone needs to start asking the question, "How certain are we that we really want to live with both the intended and unintended consequences of this technology?"

It seems to me that this is what you're doing. I commend you for that.

10:01  
Anonymous gene said...

Hmm, nice questions Anne ;-)

Which bits of pervasive computing do you see as inevitable, and on what timescales? There are lots of moving parts to this ubicomp thing; some pieces are here today (phones, mp3, gps and similar gadgets, mostly isolated islands and closed networks), some are 10 years away (arphids in widespread consumer uses), some are probably 50+ years out (animated ice cream carton displays that show the life stories of the cows that gave the milk, the migrant workers that picked the cacao beans, etc). Some ubicomp predictions are simply naive or boldly incompetent ideas that will never survive practical implementation or market forces. I would not be too quick to paint a broad brushstroke of inevitability. And where intervention is called for, good news, I think we have some time to respond.

As to your question -- what is the best we can do? -- I'm not quite sure what you mean, but my humble opinion is: 1) Fight constantly and unwaveringly for designs, architectures and technical standards that admit openness, access, transparency, and individual freedom to create. Resist walled gardens, closed architectures, and centralized control of the new ubimedia. 2) Quest for and demonstrate the potential for beauty, inspiration, and creative expression, to elevate our sense of humanity. There will be plenty enough dystopian thinking to go around in a pervasive world; try to fill the world with light and insight. It sounds really loony but I mean this seriously -- this is one of the central design challenges we (should) face as we go about filling our world with the techno-clutter of convenience.

16:02  
Blogger Anne said...

Sanjay, Paul & Gene - thanks so much for the insightful comments!

One thing strikes me immediately - I should have distinguished between perceived and actual inevitability. In either case, the inevitable shouldn't be considered monolithic or static. In fact, part of the problem of inevitability is precisely in the naming of the technological subject.

For example, if we take Sterling's "spimes" or Greenfield's "everyware", it's difficult to get away from totalising (i.e. disambiguating) definitions and both rely on some sort of technological inevitability for their argument (and neologisms) to work, to sway.

But, at the risk of repeating myself, I'm also interested in the types of intervention that would include the possibility of just saying "no" to technology. The vast majority of researchers & designers I've spoken to prefer to focus on how to do their job better, not on how not to do their job at all. Hardly surprising - no one wants to make themselves redundant - but why can't the possibility of non-intervention be included in tech development? (And I don't mean those times when the business case is lacking, so R&D grinds to a halt.)

Why shouldn't we hold researchers and designers accountable for every time they choose to support the status quo of "inevitability", instead of challenging it?

In my mind, as it stands right now, we have terribly inadequate forums for negotiating what we want and what we don't want, let alone infrastructure in place to deal with unintended consequences as they manifest themselves in the coming years.

Ironically, if these technologies are actually inevitable, I would think we'd want to be extra certain to deal with these shortcomings sooner rather than later.

02:19  
Anonymous gene said...

So today the "forums for negotiating what we want and what we don't want" are primarily 1. open markets; 2. institutional regulation (courts, government agencies, NGOs); 3. special interest groups (NRA, Sierra Club, Christian Coalition in the US...); 4. grassroots activism (CASPIAN/SpyChips, anti-globalization movement, etc). And yes these do seem rather inadequate, but like it or not they are the systems that have evolved as the way we have society-wide "discourse" about what we do and do not want.

As Paul pointed out earlier, I don’t know how we make mainstream researchers and designers accountable for challenging market-driven inevitability, as they tend to be embedded in the commercial system and driven by its goals. For all the talk about “user-centered design”, most design work is actually “profit-centric” at its core. Resisting the status quo has historically been the province of artists, activists, and visionary individuals, and these people frequently operate at the fringes, marginalized in their influence by better funded, more powerful and entrenched interests.

Maybe this is a useful angle – could/should researchers and designers align with (or become) artists and activists and visionaries working from a perspective outside the commercial system? Fewer Nielsen/Normans, IDEOs and frogs, and more PAIRs, SRLs and Jeremijenkos? And would such non-commercial motivation result in interventions that are objectively “better” at identifying and coping with the consequences of tech or non-tech choices?

09:27  
Anonymous anne said...

Excellent comments Gene - I'm still thinking :)

06:59  
Blogger adamgreenfield said...

Will, I'm hoping you're being literal when you say "this form of modernism" - i.e. bringing the blame to bear on this *particular* form. I know you know there were and are more than one modernism - some of them quite populist in intention and execution both. I'd hate to see them tarred with the usual reactionary anti-Modernist shibboleths. And I think that's entirely enough mixed metaphors for one paragraph.

W/R/T Sanjay's putative "massive rise in anxiety": sorry, but inasmuch as I'm probably sympathetic to your larger aims, I don't think we further them by making sloppy claims. A "massive rise" on whose part, occasioned by what, and measured how? If you're claiming that pervasive computing necessarily entrains anxiety in humans exposed to it, you're failing to account for the very real sense of security occasioned by things like OnStar guidance and medical-alert instrumentation for the elderly.

I'm no techno-utopian - not in the slightest - but it's simply disrespectful to dismiss the real appeal pervasive and ubiquitous technologies have for "real people," even if the value propositions are rarely particularly well articulated.

I think the most important thing is to note that people will muddle through, whatever comes of ubicomp(s). We always have: we make do. When necessary we resist, we repurpose quite creatively... and every so often we go with the irresistible flow. Making do is what we're best at.

On that level, I'm not worried, and in fact I think certain trends toward the decentralization of technological development hold some good news for us. Some days I think even the limitations on civil liberty I see as being almost inherent in deployed ubiquitous systems will be easily enough circumvented.

It's mostly when I put on my user-experience hat that I get concerned. Very simply, there are wide, wiiiide swathes of my life that I see no reason to remodel along the lines suggested by the recent history of information technology. Maybe this is where the anxiety comes in - but, as usual, knowledge is power, and power is an excellent specific for anxiety. That's why I wrote the book. ; . )

11:03  
Anonymous Paul said...

Adam says: "I think the most important thing is to note that people will muddle through, whatever comes of ubicomp(s). We always have: we make do. When necessary we resist, we repurpose quite creatively... and every so often we go with the irresistible flow. Making do is what we're best at."

Muddle? And this is a good thing? Or an "it will be okay if we can just ..." thing? Life, Liberty and the Pursuit of Muddling. Interesting concept. I'll give you one thing, Adam. You can sure pack a lot of scary stuff into what at first seems an innocuous paragraph.

Along with "if you build it they will muddle," your observation that "we repurpose creatively..." is also troubling in its one-sidedness. You're absolutely right that we can repurpose, and creatively, at that. This was implicit in my earlier comment when I said that complexity was not constrained by anyone's good intentions. Unfortunately, anyone can repurpose, including the ubiscum who will try anything and everything conceivable to turn ubicomp into ubispam, ubisubjugated, ubipest, ...

Whatever happened to ubismart?

15:44  
Anonymous Anonymous said...

Adam - agreed on the question of 'particular modernisms'. After all, 1960s modernism was already beginning to react against the dogma of Corbusian urbanism, precisely because it was so anti-democratic and divorced from empirical understandings of how communities prosper. What interests me is what a digital modernism might look like that was similarly humanist, and similarly progressive in its critique of the status quo. We constantly urge technologists, policy makers, designers etc to put people at the centre of their worldview, not machines, but I'm not sure this is yet articulated as a concerted democratic project.

04:37  
Blogger adamgreenfield said...

Oh, Anonymous, you are singing my tuuuuuuunnnne!

Paul: You'd have to be trying pretty hard to interpret my stance as anything but profoundly skeptical about ubicomp.

I'm not sure if I haven't expressed myself clearly or if you're deliberately misreading me, but you wind up taking a proposition that is straight out of de Certeau (and which is generally understood, by me as well as plenty of other folks, as one of profound faith in the human spirit to resist the hegemony of totalizing systems) and calling it "troubling."

Is muddling through a "good" thing? I dunno. By definition, it's surely not optimal. The word - 'tis true, I'm afraid - obscures many individual moments of sorrow and even horror. But again, it's what people do, even under the worst conditions our viciously creative species has yet been able to devise. Vorkuta, the townships, Treblinka, the Dust Bowl - you name it, people got through it, some of 'em with dignity and decency intact.

Something tells me that, however bad a situation ubicomp may present some of us with, we'll manage somehow.

15:55  
Anonymous Paul said...

Adam, the issue wasn't your skepticism but your resignation. I don't find the idea that "we'll make do," which you see as something hopeful, to be of much comfort. As with AIDS or the inevitable pandemic flu, I have no doubts that the species will survive. I would rather we not have to muddle through, though.

Anne's original post is, in part, asking if we really have a choice when it comes to technology and, if so, should we intervene. I am of the mind that things are spinning out of control -- we're doing more and more damage for fewer and fewer benefits -- and that if it's possible we need to put the brakes on. I'm not at all sure we can do anything now but wait and see what happens.

Like you, I do see benefits in what technology has given us, OnStar and medical-alerts being good, and actually minor, examples. Certainly a very valid question here is whether or not these benefits can be attained without the costs associated with an unconstrained market system. [I know the market has constraints; I'm referring here to moral, ethical and social constraints, which are sorely lacking.] This, too, I think is what Anne was trying to get at.

As I stated earlier, I believe the major problems to be addressed are in market capitalism. A system that measures success in ever increasing profits and growth can have no real regard for ethical choices and moral constraints. It can only pretend to care about the quality of life, and only so long as such pretenses increase profits.

It is not at all a fact that technology has improved the quality of life for humans (although it certainly has for some humans). There are more people living in extreme poverty today than there were people (in total) 200 years ago. This is not a success story. We're not living in "happy ever after." I guarantee you that ubicomp won't get us there (and I'm just as sure that it will be sold to us as "what you need to get there").

Why isn't the overall quality of life better? Is that inevitable? Can technology help? Why hasn't it? I would love to wake up Muddle Class Americans and get them to start addressing these questions. I have no idea how to succeed in that endeavor.

12:14  
Blogger adamgreenfield said...

Paul, you're preaching (a little clumsily, too) to the choir - as you'd know if you were familiar with what I've written on the subject.

It's not that I disagree with you, it's that I just don't believe in the notion of progress, let alone any idea of popular enlightenment stemming from vanguardist action. I don't think there's any "we" that can wake up the slumbering middle classes. People make the choices available to them, that make sense to them locally - you and me every bit as much as the Hummer-driving TV host or the proverbial Pakistani bricklayer. And the structure of these choices, globally, at the moment, pretty clearly says to me that ubiquitous computing(s) is effectively inevitable - and that our energy is now best spent figuring out how to ameliorate the worst of the foreseeable consequences.

Without disputing, for the moment, the idea that there may be deep problems at the heart of market capitalism, the market may turn out to be just the brake on ubiquitous development you're hoping for. Certainly the "digital home" is without traction in the market so far, while the various "convergent devices" available have been relative underperformers.

"Why isn't the overall quality of life better?" is not a ridiculous question to ask, but it's a little broad for this particular time and place, don't you think?

04:44  
Anonymous Paul said...

You're right about the preaching. I seem at times to have a damn soapbox glued to the bottom of one shoe, and I am most likely to be on it when trying to expand my horizons a little. My apologies. (And I am worse than clumsy at it.)

I don't know you from Adam, Adam. I don't know what you've written. I've responded only to what I've seen in your writing here which appears to me to be a resignation. Perhaps I will eventually adopt the same stance, but I'm not ready to yet.

You say: "People make the choices available to them, that make sense to them locally - you and me every bit as much as the Hummer-driving TV host or the proverbial Pakistani bricklayer."

That sounds right but the more I got to thinking about it the less sure I was. Here's where I'm having a problem with it, Adam. It assumes that people know what choices are available and it implies that they actively make some choice.

What do you think of this, instead? People sometimes consciously choose from the alternatives they are aware of but often choose not to choose.

Assuming this is more accurate, it raises some questions. How do people become aware of the choices available to them? How do they make informed choices? Why do they believe inaction is not choosing when it obviously is choosing the default -- the consequence of not choosing anything else?

Are these questions also too broad? If so, we can apply them just to technologists and will then have some of the questions Anne is (was? will be?) heading towards. Some particular technology is more likely to be inevitable if someone is choosing to pursue it. It's also more likely if people are not choosing to intervene. As well, people cannot choose to intervene if they are not aware that this is a choice and they won't choose to if they cannot justify this choice at the moment(s) it needs to be made.

Is any of this making sense? Is my logic holding up?

12:48  
Blogger adamgreenfield said...

Well, I think that's a lot closer to being a usefully accurate statement. FWIW, if you're inclined to continue the conversation, you're more than welcome to pose your question at Studiesboard, where the critical-ubiquity line is practically the site's raison d'être.

05:56  
Anonymous Sanjay Khanna said...

Great dialog between Adam and Paul.

Adam, when I refer to a massive rise in anxiety, I'm not attributing it to ubicomp at all. Research done by the National Institute of Mental Health (part of the National Institutes of Health) in recent years indicates that clinical depression and anxiety are rising dramatically, as are lower-grade versions of anxiety.

What I'm interested in is how the environment of pervasive computing converges with human emotional reality, which is becoming quite fearful because society is changing so rapidly--technologically and in social behavior. I think the convergence of pervasive computing and pervasive anxiety will have a big impact on what eventually transpires. I'd also argue that the positive interventions you hope for (ameliorating the most negative tendencies of *everyware*) will come from innovators and citizens who are calmer and more capable than some of us and are able to see the gaps, and act within them to create positive outcomes in local, but important, ways. But you know that already. :-)

19:37  
Blogger Anne said...

Brilliant conversation and just a couple of observations :-)

What Paul senses as "resignation" in Adam's position reminds me of responses to Adam's presentation at Design Engaged last fall. Because I find much hope in the local and the everyday (Adam also reads de Certeau ;)), doom-and-gloom scenarios do not scare me (or him?) perhaps as much as they are meant to. Adam's DE presentation focussed almost entirely on apocalyptic futures - a.k.a. 'life is suffering' - and it was easy to feel defeated and miss the potential for 'good', for hope.

But I'm also not fond of the 'muddling' metaphor. I suspect it comes from Adam's dedication to eastern philosophies, and I think this language becomes particularly problematic when contrasted with western notions of agency or our ability to act (small-p) politically.

I also appreciate Paul's (and let's not forget Gene's) consistent attempts to connect points back through my original matters-of-concern and, more importantly, to reinforce the possibilities of discussing something that seems impossible instead of entrenching ourselves in what we know.

02:51  
Blogger adamgreenfield said...

Anne, you are precisely correct in locating what you do where you do. ; . )

There are certainly traditions in which ideas of "resignation" and "surrender" don't have the valence they enjoy in the West. (What is it that "Islam" means, again?)

Sanjay, I just don't buy the idea that anxiety is suddenly "objectively" greater than ever - especially not when that idea is propagated by an institution which could arguably be said to have a vested interest in bringing greater numbers of citizens under its purview.

We know how political the act of changing definitions of inner states has been in the past - remember, for example, that it wasn't until 1972 or '73 that homosexuality was removed from the DSM. So instead of relying on some specious "fact," promulgated in a context where careers and endowments and even the course of entire industries hinge on specifying some percentage of the population "with anxiety," can't we just start from a ground of compassion for our fellow sentient beings? Does that ethic really need to be propped up?

05:37  
