Got Topsight?

If you’re doing UX work on intranet and collaboration tools, you may have experienced first-hand how research and design inside a company are quite different from doing them for customers out there on the web. In addition to the usual kit of analytics, you can really get to know the users. Do the job right and you’ll know them by name, learn how they work, and see how they work with and against their tools, rules and processes to get the job done.

How many people do you need to interview? Obviously more than the mythical 5-7 people, but I’ve found that the number depends on how big and diverse the organization is. You’ll usually find similar tasks done differently across locations (geographic or in the building), teams, units, specialties and departments. To succeed you’ll need to look at the work itself, not just the workers.

Happily, Clay Spinuzzi has written a book on how to conduct this kind of research. It’s called Topsight and is available on Kindle and in print. In it, you will find a guide on how to design your field study, how to conduct it and how to interpret the results. It’s very hands-on and includes several tools for mapping the work and making the small, medium and large-scale issues visible. Topsight focuses on the tangible stuff – information, interfaces, tools, processes. You won’t risk your intranet research project turning into an ethnographic study of corporate culture (which is in itself great, just not as valuable for our projects).

I’d love to get your feedback on the book’s design and content. Clay wrote it after teaching my team a half-year course in field studies and analysis methods from activity theory and actor-network theory. We’ve incorporated Topsight into several projects here at BEKK (where I work) and the results have been very positive. I’m interested in seeing how designers can use these methods to uncover how work is actually done and how we can help information flow more appropriately.

Five things I want from a deferred reading client

I follow a lot of interesting people on Twitter. For better or worse, a lot of them have morphed into link-posters, so there’s a lot of interesting stuff coming my way, too much to read right there and then.

In 2009 I described how I get through tweeted links. Since then tools have come and gone and new habits have emerged.

Here’s what I want from a deferred reading client today.

1. Zero-excise defer/remember across devices, browsers and apps.

I know it’s not easy, but I wish it were possible to mark items for deferred reading in the FB app too, etc. I’ve set up an IFTTT job that pushes faved Tweets into Evernote and Read It Later. I fave tweets using Tweetbot’s triple-tap-to-fave (you can set triple tap up to do a lot of things) but really, a button or hit zone for faving directly in the timeline with *one* click would be better.

2. Readable view options that understand the writer’s formatting, now or later.

Readability has nice options and good fonts but sometimes destroys explanatory typography and fudges image loading. Evernote Clearly sort of understands what to do. Read It Later has OK formatting and a few nice options but boring fonts and doesn’t let me make pages more legible right then and there.

3. Painless access across devices with background sync’ed local copies and cloud backup.

Waiting for the various reading clients to sync is incredibly annoying. Having copies of pages in Evernote has been a life saver more than once. For a while I used BrowseBack to save everything I read, but it’s a separate web client that operates in the background, stupidly duplicating your browsing bandwidth and requiring a separate login. A plugin that just saved whatever you were reading in your preferred browser would be a more elegant solution. Still, having a copy and being able to browse through everything I’d read was great, especially when looking for an item whose looks I remembered but whose title I didn’t.

4. Great refindability

This could use text mining, metadata, content popularity, the item’s topics, your tags, others’ tags and time/date/geo tags. When I’m done reading I might want to add a comment, tag it or throw it into a bundle without jumping to yet another tool.
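Most of that wish list boils down to full-text refindability, which even a naive inverted index gives you with zero tagging effort. A toy sketch in Python (the function names are mine; a real tool would add stemming, ranking and the metadata mentioned above):

```python
import re
from collections import defaultdict


def build_index(pages):
    """Tiny inverted index: word -> set of page ids.

    'pages' maps a page id to that page's full text.
    """
    index = defaultdict(set)
    for page_id, text in pages.items():
        for word in re.findall(r"[a-z0-9]+", text.lower()):
            index[word].add(page_id)
    return index


def search(index, query):
    """Return ids of pages containing every word in the query."""
    words = query.lower().split()
    if not words:
        return set()
    hits = set(index.get(words[0], set()))
    for word in words[1:]:
        hits &= index.get(word, set())
    return hits
```

With something like this behind the scenes, “find it again later” works even when I never added a single tag.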

5. Intelligent suggestions for related items, topics and writers – something I haven’t found yet.

Back when Delicious was still alive I would always do a reverse search on a URL to see who had tagged it, see how they had described it and then explore their links. Now that everyone’s tweeting links instead it’s a lot harder to see their link sets. By connecting a deferred reading client to my Twitter account, I could offer my tweeted/shared links to others too.

That pretty much takes care of active reading. But what about passive reading, i.e. vacuuming my Twitterstream for links and following people’s shares and posts in other channels?

The now-defunct ReadTwit did a great job of this, turning my tweeps’ links into a full-content RSS stream that was easy to skim. Instead of mentally parsing 100 characters in a tweet and deciding whether to fave it or not (and thus push it to a deferred reading client), everything goes in and I skim the feed, mark a few for later reading, or read them at length in the original design or in a more readable version and tag them.

The line between a newsreader and a deferred reading client is quite blurred. For me, a blend of Feedly, Read It Later, Readability and the old Delicious would be perfect, but for now I am stuck with a discombobulated mix of the same apps. Read here, save there, tag over there. Bah.

What would work for you?

Does insight entail an obligation to share?

I’m getting quite tired of reading blog posts like “Why you should use social media inside your organization” and “Three easy steps to better collaboration”. But I suppose that’s just the price you pay for being early.

And the downside is: if those who know don’t share, people only read the posts of those who don’t.

In other words: either you share what you know, or you watch people with less expertise than you share, get listened to and steer the boat onto the rocks … because you couldn’t be bothered to contribute.

I’ve spent nearly eight years trying to understand what makes collaboration work, which factors contribute or block, how understanding can be enabled at scale, and countless other things. The more I read, the more experts I get to know and the more methods and tools I learn, the harder I find it to give unambiguous advice and quick answers. Oh, how much fun it was to think “we can just replace Word with wikis, then sharing gets easier, and then …” yeah, right.

This is not a new problem. Bertrand Russell put it well: “One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision”. Doubt and conflicting information aside: does insight entail an obligation to share? The objectivist in me says of course not; the steward in me, though, thinks it matters.

For me, the biggest downside is that we over-focus on the tools and forget the work itself. Don’t ask me why, but we jump straight to the solution without making a diagnosis, without mapping actors, rules, flows, constraints – nearly the whole shebang except the documents and the IT systems. Before we can prescribe solutions, we must understand what people actually do, how, why and where the slips occur. What do they do to find, cut up, distill, rearticulate, restructure, refine, revise and exploit the numbers, facts, information, insight and intuition they come across?

There are plenty of methods, tools, theories and lessons to draw on, but the informed dialogue is missing. How do we get it in place?

Parsing tweeted links, part 2

I come across a lot of interesting links. So many, in fact, that I need to spend my attention wisely. I asked around on Twitter, where Dave Malouf suggested using Instapaper and Bjørn Wang pointed out that your Twitterstream isn’t the only place we need to parse links.

Make a feed with ReadTwit

After posting part 1, Ida Aalen suggested ReadTwit:

I read it through Google Reader, and combine it with the Read It Later-add-on for Firefox. Works like a dream!

ReadTwit combs your twitterstream and turns your contacts’ tweeted links into a newsfeed that you can subscribe to in any feed reader.

Pros: you can ignore links in your twitterstream (nearly effortless)
Cons: since you’re not filtering in a low-information space (the twitterstream), you have to parse the richest RSS feed ever, one post at a time
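The transformation ReadTwit performed is mundane at its core: collect (title, link) pairs and wrap them in RSS 2.0. A minimal Python sketch of just that step (the function name and feed title are mine; the real service obviously did far more, such as expanding short URLs and de-duplicating):

```python
from xml.sax.saxutils import escape


def links_to_rss(items, title="Tweeted links"):
    """Render (title, url) pairs as a minimal RSS 2.0 document string."""
    entries = "".join(
        f"<item><title>{escape(t)}</title><link>{escape(u)}</link></item>"
        for t, u in items
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        f'<rss version="2.0"><channel><title>{escape(title)}</title>'
        f"{entries}</channel></rss>"
    )
```

Subscribe any feed reader to the output of something like this and you get the ReadTwit experience: every tweeted link, one post at a time.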

Read the feed

I’ve installed Fever on my server, which works like a charm. Popular items float to the top, so you at least see those. Plus, there’s the added joy of not telling Google everything, at all times. Here, I can either save items to Fever itself, add them to Read It Later or bookmark them using the tool of my choice. You can do all these things in Google Reader, etc. and you can do some of them in a local newsreader such as NewsFire.
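Fever’s ranking is proprietary, but the visible effect, links mentioned by many of your feeds floating to the top, can be approximated with a simple counter. This is my guess at the idea, not Fever’s actual algorithm:

```python
from collections import Counter


def hot_links(feeds):
    """Rank URLs by how many distinct feeds mention them.

    feeds: iterable of (feed_name, [urls]) pairs. One vote per feed,
    so a single feed repeating a link doesn't inflate its score.
    """
    votes = Counter()
    for _feed_name, urls in feeds:
        for url in set(urls):
            votes[url] += 1
    return [url for url, _count in votes.most_common()]
```

Even this crude version surfaces the items your whole network is pointing at, which is most of what I want from a “popular” view.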

Skip/forget/bookmark/share/discuss … or Read it Later

When you go through your tweetlink feed (yes, I made that term up just now), the low-effort options are skip and forget. The people I follow tweet about networked action in the form of Enterprise 2.0, social (I wish we still called it participatory) media, marketing, journalism and at times design. Most of them are conscientious linkers, so there are always too many great items to look at. Which means that a lot of links are saved for later, in some way or form.

I have lots of links in lots of places:

  • Read It Later: 367 links
  • Twitter: a few hundred, I think
  • My URL shortener: a hundred or so
  • Awesomebar: a few hundred
  • Delicious/Ma.gnolia: about two thousand

If Ma.gnolia, Delicious or someone else makes it easy to create short URLs and view usage stats, I can ditch my URL shortener. By all means, I like it, but I like service simplicity even more.

Next, if I can figure out how to display faved tweets prominently in ReadTwit, I’ve removed another channel.

Last but not least, one-click bookmarking in Delicious/Ma.gnolia would remove Read It Later from the equation. Again, I think their iPhone app is great, but I want to push a Ma.gnolia/Delicious API in their direction, not the opposite.

Managing flows, not stocks

The point here is not to read, summarize, tag and share everything. That would be clever if managing stocks was clever, which it isn’t. The point is to acquire and retain an overview, and easily find stuff again later on, even without tagging or describing the content. I’m still looking for a way of marking content that I pay attention to (read, discuss, share) so that I can find it again later on. Preferably without any conscious effort.

Read It Later improvements

Are you happy with the Read It Later UI in Firefox? As I noted earlier, when my RIL lists get too long, they collapse. I would love a BrowseBack-ish (“Time Machine for Browsing”) overview, but I’d settle for a newsfeed that included the entire story, instead of just the links. Why? Fewer interruptions. Tell me what you think in the comments.

Galloping Gertie and the financial markets

Galloping Gertie collapsing due to aeroelastic flutter


We are currently witnessing high volatility in financial markets around the world. The character of the volatility seems difficult to pin down and is, with a high degree of likelihood, different in nature from preceding financial disruptions.

Because we are not able to understand what is going on, our efforts at remedying the condition may well exacerbate it, without us knowing. Some actions will be right, some actions wrong and the combination of the two impossible to calculate at present.

Heavy debt and complex financial products such as derivatives aside, I wonder if one of the contributing factors in this debacle is the near-frictionless transmission, commenting and retransmission of stories from the financial markets. Some news organizations are keeping cool heads; others seem to be publishing simple, highly affective messages that generate clicks.

With no daily respite from the markets, a higher degree of interconnectivity and some of the more frightening aspects of scale-free networks are playing off each other to create a sudden Galloping Gertie effect.

Galloping Gertie, the original Tacoma Narrows Bridge, used a new and different kind of construction to achieve a lighter, cheaper suspension bridge, and eventually succumbed to aeroelastic flutter. Happily, no lives were lost, but according to the Wikipedia article the failure

[…] also boosted research in the field of bridge aerodynamics/aeroelastics which have themselves influenced the designs of all the world’s great long-span bridges built since 1940.

The collapse showed the engineers that there were hitherto unknown forces at play, forces that were revealed by the collapse itself. In Max Boisot’s world, we might see the bridge as a sensor, one that is finally able to detect data types that have so far been invisible to us.

Othmar Ammann, a member of the Federal Works Agency Commission that investigated the collapse and a leading bridge designer in his own right, later wrote:

[…] the Tacoma Narrows bridge failure has given us invaluable information […] It has shown [that] every new structure which projects into new fields of magnitude involves new problems for the solution of which neither theory nor practical experience furnish an adequate guide. It is then that we must rely largely on judgement and if, as a result, errors or failures occur, we must accept them as a price for human progress (source)

I’m adopting Ammann’s analysis of this situation. We have combined complex financial instruments with frictionless information sharing and scale-free networks and are witnessing the effects. I hope we will discover what kind of aeroelastic flutter is at work this time.

A new view on feeds

For some time now, many of us have followed our friends and interests online. Since people tend to publish in different places, subscribing via RSS at least spares us a trip to each website to watch/view/read their content, but many sources make for an unwieldy list.

Google Reader’s endless (continuously loading) page makes it a bit easier to churn through all those newsfeeds. When you get to the bottom of the page, items are dynamically added to the list so you can keep on scrolling. Usability-wise, this tweak increases the user’s efficiency.

Even if you are using Google Reader or a similar tool, you’re still dealing with multiple silos of content. Chugging down to the bottom of one writer’s pile leads to no other reward than having to start digging through the next pile. In this respect, merging the different piles into one long list gives you a method for ignoring items of equal age so you can pay attention to newer stuff.
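The merge itself is trivial once every pile is a sorted list. A sketch in Python, assuming each feed is already newest-first and each item is a (timestamp, title) pair; names and data are illustrative:

```python
from heapq import merge


def merged_timeline(*feeds):
    """Merge newest-first feeds into one newest-first stream.

    heapq.merge is lazy, so feeds can be arbitrarily long; it only
    requires that each input is already sorted (here: descending).
    """
    return merge(*feeds, key=lambda item: item[0], reverse=True)


blog = [(1325, "year in review"), (1290, "on wikis")]
tweets = [(1330, "tweeted link"), (1300, "another link")]
```

One pass down the merged stream and everything older than where you stop can safely be ignored.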

Reverse chronology doesn’t lead to a “newer is better” way of thinking but it does affect interfaces. We do best what we do most often, so I’m not surprised to see the blogging, flickrer, tubing, twittering crowd of tool-makers bring reverse chronology into the tools and interfaces they make. Indeed, reverse chronology is becoming an interface paradigm in its own right.

If newsfeeds – which offer just one axis of navigation – are the preferred format for chewing through the Daily Me, timelines are a smart way to get an overview.

We didn’t have Twitter or Friendfeed in 2001, but we did have a prototype of Grasshopper. Later known as Rememble, the service lets you share images, comments, text messages and more, placing all of them on a timeline that you can easily navigate. You can then comment, label, edit and share any of those items.

Rememble at zoom level 1

Rememble showing a bird's eye view of posts

Rememble's timeline at medium zoom

Rememble showing individual items at a readable size

Rememble at zoom level 3

Rememble's timeline showing one single item

Rememble didn’t take off like Twitter did. It didn’t have a social network to begin with, but more importantly, I think we the users just weren’t ready for the idea that we should be sharing intentions and experiences with each other (and possibly strangers). Interestingly, though, Rememble offered much of the same functionality as Twitter and Twinkle in 2001 (as did the now-defunct Nokia Lifeblog) but framed itself as a mostly personal memory keeper. But I digress…

Timelines are an interesting way to visualize data, and are useful tools when we make them interactive. The Simile Timeline (see a demo at the original site) lets you explore a timeline on the macro and local level simultaneously. In my view, the dialog in the image below is the micro level, but unfortunately it’s hard to explore two items at the same time. The main benefit of this timeline is that it’s easy to spot what’s a/synchronous and easy to measure the distance between two events.

Simile's timeline exploration of the Kennedy assassination

The Simile Timeline, showing a minute-by-minute account of the Kennedy assassination

Yugop‘s interface for the MoMA exhibit entitled Design and the Elastic Mind combines splendid non-dynamic poster design with an overlay that shows which items have similarities or are somehow related to one another. Although IntenseDebate and other comment-system developers embed pointers to what a debater is saying elsewhere, there are no tools for easily surfacing relationships between entities, whether in a feed or a timeline. You have to dig, search and make your own situational knowledge tools to uncover them.

Dipity is, perhaps, a new version of Rememble that takes a “yes, and” approach to the interface, offering timeline, list, flipbook (akin to a carousel/coverflow slideshow) and map views of items. As with every other social application on the planet, you can import your friends, link to your services, etc., but more interestingly you can follow (newspeak for “subscribe to posts from/on”) topics of interest to you, and view them as you prefer.

I’m still looking for a tool that visualizes relationships between entities. Imagine a timeline, list or any sort of representation that can be transformed to a relational view where a node’s network and the valence/strength/direction of ties, reader volume, link freshness/decay, etc are easy to spot and work with. If you’re aware of a tool that does this (well) please let me know.
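Nothing below is that tool, but the scoring part is easy to prototype. A sketch with made-up names and a made-up decay model: each link between two entities contributes a weight that halves every 30 days, so tie strength mixes volume with freshness.

```python
import math


def tie_strengths(link_events, now, half_life_days=30.0):
    """Score directed ties from timestamped link events.

    link_events: iterable of (source, target, day) tuples, where
    'day' is a simple day number. Older links decay exponentially:
    a link loses half its weight every half_life_days.
    """
    scores = {}
    for source, target, day in link_events:
        weight = 0.5 ** ((now - day) / half_life_days)
        scores[(source, target)] = scores.get((source, target), 0.0) + weight
    return scores


# "me" linked to blogA today, and to blogB twice, long ago:
events = [("me", "blogA", 100), ("me", "blogB", 10), ("me", "blogB", 40)]
```

Feed scores like these into any graph layout and you get at least the valence/strength/freshness part of the view I’m after.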

Near and local

“When you make a thing, a thing that is new, it is so complicated making it that it is bound to be ugly. But those that make it after you, they don’t have to worry about making it. And they can make it pretty, and so everybody can like it when others make it after you.”
- Pablo Picasso, as quoted by Gertrude Stein

So if we use Twitter for spreading “news” among friends, EveryBlock could share news from your area. We certainly thought of group communication when we made it, but we hadn’t thought of indexing and filtering local news for re-distribution in neighborhoods. I have to keep reminding myself that media is changing (not for better, not for worse: changing), but I’ve been programmed to accept certain items as valid and others as chaff, using a set of criteria where I judge value through the perception of quality.

Quality, however, is an attribute of stiffness, of the basics having been worked out, of cartilage having been turned into bone. Perhaps bone and cartilage is a good metaphor for what’s happening: old formats are solid and now far too stiff, while the new formats can barely stand on their feet. They flourish by virtue of the sheer number of people outputting content, but the format is far from complete.

In Reinventing Comics Scott McCloud asks what the comic format of the future might look and feel like. We almost cracked our heads open trying to reinvent comics back in 2001, but the conclusion was that something else, different, something not like comics and perhaps only partly inspired by it might be its descendant.

The offspring, it seems, are nothing like the parents, and perhaps being skilled with the old tools makes you incapable of inventing the new tool. Are mini-documentaries the photojournalism format of tomorrow? Vincent Laforet sees it as part of the broader change in news media, but points out that television has been making documentaries for decades: how can photojournalists compete on TV’s territory?

In a talk last Friday, I told the audience that, media-wise, they could almost pretend that the 20th century never happened when trying to get a grip on social media. By pretending that we’ve gone from 1850 to 2008 in one leap, leaving out the massive storytelling trio of radio, television and newspapers, we can try to observe the communication efforts going on between groups and individuals via social media and participatory tools. We should keep in mind that the idea of a central, “agreed-upon” narrative is a recent phenomenon and was, perhaps, just a passing fancy. The polarization of extremes is, perhaps, the new fancy.

A century ago, before the advent of audio recording and broadcast, the only way to distribute music was through sheet music or through traveling bands, choirs and performers. Every interpretation of a work was local and more or less unique. In many cases, people would gather to sing the new, popular songs, or listen to their friends perform them. The same went for the sensemakers, the commentators, the argumenters, the mavens and the connectors: they were local.

I wonder if we’re seeing the contours of a new near and local era, where things are relevant either because they affect you through proximity or through interest and affiliation.

The publishing/sharing formats are still fuzzy and frequently misapplied or misinterpreted. We’ll sort that out eventually and no, I don’t think it would be a tragedy if most bloggers turned into Twitterers or whichever “I want to share this with my friends” tool is coming next.

We can now participate, inquire and discuss as never before, at low cost and with great ease, but there’s no guarantee that our discussions will be clarifying, build consensus and enable positive change. Much of the space taken up by mass media is being taken over by massively amplified local chatter; I’m not sure it will be any better but it will be different.

If the near and the local are on the rise, what about that which is further away? What about a shared culture, common values and government of, for and by the people? Yes, there is a positive obverse to these worries, but I don’t know what to call it. Peering?

Skill takes time to build, and yet we are faced with a September that refuses to end, where every group has to rise above the maelstrom of newbie questions and continue compiling data and crafting knowledge. I want skill. I want stuff to be poignant, well-written, concise, relevant and free of spelling errors. I want video to have decent resolution, great cinematography, good directing and splendid screenwriting to warrant watching it, rather than having it waste my time. I want, I want, I want.

Michael Wesch, on the other hand, wants to understand what’s going on and therefore becomes a participant observer and takes his students along to create a fascinating study of how people use YouTube. It turns out that my search for skill, quality and all of the other good stuff needs to be aimed somewhere else. YouTube, Flickr and all the other participation machines are at once troves of insight and heaps of garbage, for we are both presenting our very best work and massively sharing the conversations that were once mumbled in private.

Outside the near and the local, my mediated reality offers less context than I’m used to; I never know what to expect. Am I about to see something moving, interesting, irrelevant or boring? Should I turn on my museum mode, or would my TV-ad cognitive filters work better? Am I engaged in a sort of conversation or am I listening to a presentation? Is that item there provocative but well thought-out, or merely a knee-jerk response to some other nonsense? For me, the fact that a blog, a page on Flickr or a video on YouTube is merely a shell means that there’s no way of knowing what the stuff inside the box in the middle of the page will be. Good? Bad? Relevant? Boring? There’s really no way of finding out except by trying for yourself.

For now, I’m going to try to forget that mass media ever existed, even as I consume it by the megabyte every second. I’m going to try to unbundle its components and put them away, so I can actually see what is happening inside the participation games. Wish me luck.