Stickiness versus Priciness?

There’s an interesting entry on the datamining blog about the extent to which free sites are motivated to offer good customer service.  The heart of the argument is that if you’re a freeloader on a site whose bills are paid by a small percentage of subscribers, you’re likely to see your service suffer.

The deeper argument, though, is that sticky sites are likely to be able to get away with treating their freeloading customers worse over time than sites that are not sticky.  For instance, if part of the reason you like to shop at Amazon is the recommendations it gives you, you can’t easily switch to a different store, because the new store won’t be able to provide recommendations that are as good.  There are two reasons for this: first, it won’t have a profile of your tastes, and second, it won’t have as much data about other customers’ behavior.  Because of this stickiness, Amazon should eventually be able to collect higher rents than other online stores.
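
To make the second point concrete, here is a minimal sketch of user-based collaborative filtering (an illustration only, not Amazon’s actual algorithm; the profiles and item names are invented).  A shopper’s recommendations are built from the ratings of similar customers, so the store with the bigger pool of customer data can find better neighbors — and that pool is exactly what a competitor starts without.

```python
# Illustrative sketch of user-based collaborative filtering.
# All data below is made up; this is not any store's real system.
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two {item: rating} profiles."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[i] * b[i] for i in common)
    norm_a = sqrt(sum(r * r for r in a.values()))
    norm_b = sqrt(sum(r * r for r in b.values()))
    return dot / (norm_a * norm_b)

def recommend(target, others, top_n=3):
    """Score items the target hasn't rated, weighted by neighbor similarity."""
    scores, weights = {}, {}
    for profile in others:
        sim = cosine_similarity(target, profile)
        if sim <= 0:
            continue
        for item, rating in profile.items():
            if item in target:
                continue
            scores[item] = scores.get(item, 0.0) + sim * rating
            weights[item] = weights.get(item, 0.0) + sim
    ranked = sorted(((s / weights[i], i) for i, s in scores.items()), reverse=True)
    return [item for _, item in ranked[:top_n]]

me = {"book_a": 5, "book_b": 4}
store_data = [{"book_a": 5, "book_c": 5},
              {"book_b": 4, "book_c": 4, "book_d": 2}]
print(recommend(me, store_data))  # the more profiles in store_data, the better
```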

One interesting solution to both of these problems would be portable profiles.  Consumers could demand that the businesses they buy from accept a profile in a standard format, and export useful information to that standard format.  (Check out the P3P proposal for an example of what such a profile might look like.) Then, customers could easily take their data with them to whatever business they wish to shop at.  For instance, at MovieLens we often get Netflix customers who ask us to import their Netflix profile, so they can use our recommendation engine on their Netflix data.  (We currently don’t support this, because we’re pretty sure doing so would be against Netflix’s terms of use.  We’d love for them to give us permission, though!)
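
For concreteness, here is one guess at what an exported profile might look like, sketched in Python.  The schema name and fields are invented for illustration; neither P3P nor Netflix defines this format.

```python
# Hypothetical example only: a portable ratings profile exported in a
# standard interchange format. The schema and field names are made up.
import json

profile = {
    "schema": "example.org/portable-profile/0.1",   # hypothetical schema id
    "user": {"pseudonym": "moviefan42"},            # no real identity required
    "ratings": [
        {"item": "tt0111161", "scale": "1-5", "value": 5, "rated_at": "2007-10-01"},
        {"item": "tt0068646", "scale": "1-5", "value": 4, "rated_at": "2007-10-03"},
    ],
}

# A customer could hand this file to any site willing to import it, and
# that site could start making recommendations from it immediately.
with open("my_profile.json", "w") as f:
    json.dump(profile, f, indent=2)
```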

There’s also an aside about the risk of news aggregators being in charge of what we see. The idea is that a news aggregator might refuse to broadly disseminate news that would oppose its interests.  This possibility returns us to an interesting recommender systems problem: how can the user of a recommendation system know that the system is making decisions that are in his or her best interests?  Is there a zero knowledge proof that might help?

John 

What Makes a Tech Center?

This morning’s local paper featured an article about Control Data Corporation, a major player in the olden days of mainframe computing.

By the late 1970s, [Control Data] had made the Twin Cities one of five U.S. computer industry centers (a distinction that is now only a memory). By encouraging entrepreneurship among employees, it spawned dozens of local spinoff companies, including the supercomputer firm Cray Research (also now gone). At its peak, CDC had 60,000 employees and about $5 billion in revenue.

This summer, I worked in Silicon Valley to see what a modern-day computer industry center is like.  It was indeed an exciting environment, full of new companies, people with ideas, and support for those ideas.  Contrast that with Minneapolis (a city I very much love), where technology innovation feels limited to a few industries.  And yet, Minneapolis/St. Paul ranks as the #1 metro area for business.  Where are the tech startups?

It feels as though Minneapolis is primed for a computing technology resurgence.  But I’m not sure what the catalyst of that resurgence will be, or when it will happen.

Max

How much does Shilad love presenting his research?

[Photo: Shilad Sen is very excited about his poster. This much!]

[Photo: Max Harper and his poster]
[Photo: Reid Priedhorsky]

Max and Reid choose to show their love of research in much calmer ways.

The Computer Science department hosted its biennial open house last week. The morning program included a poster session featuring current graduate students and their work. Several GroupLens students presented their current research.

Shilad Sen – Better Tagging Systems

Max Harper – Predictors of Answer Quality in Online Q&A Sites

Reid Priedhorsky – Creating, Destroying and Restoring Value in Wikipedia

Nishi Kapoor (not pictured) – TechLens: A Researcher’s Desktop

Sara Drenner (not pictured) – Barriers to Entry in Recommender Systems

today’s xkcd

Yesterday’s xkcd comic:

I found this highly amusing, particularly in light of (a) knowing that applications I use frequently are full of SQL injection bugs, and have been for years despite my complaints, and (b) knowing, as a programmer, how easy it is to skip input sanitization.
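
For anyone who hasn’t seen the bug up close, here is a minimal sketch of the difference between the vulnerable pattern and the sanitized one, using Python’s built-in sqlite3 module (the table and the malicious input are invented for the example).

```python
# Minimal illustration of a SQL injection bug and its fix, using sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT)")

name = "Robert'); DROP TABLE students;--"   # malicious input, in the spirit of the comic

# Vulnerable: the user-supplied string is pasted straight into the SQL.
# Shown here but deliberately not executed; in many database APIs this
# exact pattern lets an attacker run arbitrary SQL.
query = "INSERT INTO students (name) VALUES ('%s')" % name

# Safe: a parameterized query treats the input as data, never as SQL.
conn.execute("INSERT INTO students (name) VALUES (?)", (name,))
conn.commit()

print(conn.execute("SELECT name FROM students").fetchall())
# -> [("Robert'); DROP TABLE students;--",)]  -- stored as a harmless string
```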

Microtrends and collaborative filtering?

I’ve recently been hearing a bit about Mark Penn’s book "Microtrends: The Small Forces Behind Tomorrow’s Big Changes". As this review says, Penn analyzes poll and survey data to identify 75 important microtrends (which appear to correspond to ‘small’ segments of the US population, say at least 3 million people) that, he believes, are interesting and important.

Now, since Mark Penn is the guy who identified the ‘soccer mom’ demographic for Bill Clinton’s 1996 re-election and is now the chief political advisor to Hillary Clinton, when he talks, people listen.

And when I listened, I found what he has to say interesting. However, since I haven’t read the book, I don’t know exactly how he comes up with his microtrends. It seems like the scientific approach would be to apply clustering algorithms or factor analysis or some such technique, which, as far as I can tell from browsing reviews, is not what he did.

I wonder what such an approach would reveal: if you ran a clustering algorithm, say, on a large survey dataset, would the clusters include Penn’s microtrends? Would one even be able to make sense of the clusters?
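
Here is a rough sketch of what that experiment might look like, assuming the survey responses can be coded numerically.  It uses scikit-learn’s k-means purely as an example, with random stand-in data rather than a real survey, so the only point is the shape of the analysis, not the results.

```python
# Sketch of the experiment suggested above: cluster numerically coded
# survey responses and see whether any cluster resembles a "microtrend".
# The data here is random; a real run would load an actual survey dataset.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_respondents, n_questions = 10_000, 40
# Pretend each row is one respondent's coded answers (e.g. 1-5 agreement scales).
responses = rng.integers(1, 6, size=(n_respondents, n_questions)).astype(float)

# Ask for 75 clusters, matching the number of microtrends Penn describes.
kmeans = KMeans(n_clusters=75, n_init=10, random_state=0).fit(responses)

# Cluster sizes: are there segments that, weighted up to the US population,
# would land near the "at least 3 million people" threshold?
sizes = np.bincount(kmeans.labels_)
print(sorted(sizes, reverse=True)[:10])

# To interpret a cluster, look at which questions its centroid answers
# most differently from the overall mean.
deviations = kmeans.cluster_centers_ - responses.mean(axis=0)
print(np.argsort(-np.abs(deviations[0]))[:5])  # most distinctive questions for cluster 0
```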