Collective Intelligence FOO Camp


I just got back from the Collective Intelligence FOO Camp that O’Reilly organized at Google.  The meeting was great, the people were great, and overall the experience was great.

One issue that popped up is what exactly people mean by Collective Intelligence.  At a high level, it was clear that everyone meant basically the same thing:

agent -> work \
agent -> work  \
agent -> work   \                           outcome that would be harder
agent -> work    >-- combining function --> to produce with any individual
agent -> work   /                           agent (some require superlinear)
agent -> work  /

Interestingly, a number of participants were only interested in examples in which the outcome was superlinear in the number of participants.  I’m not sure why this would be.  Several participants were speculating about what a “complexity theory” of collective intelligence would be like: could we identify problems that are demonstrably more difficult for a collective intelligence to solve than other problems?

I’m personally more of a “big tent” CI guy.  As long as the result is intelligent, I’m okay with situations in which the individual agents are providing the real intelligence, and the combining function is simple.  If we want to taxonomize, I can see at least three interesting types of CI:

Types of Collective Intelligence

  1. parallel intelligence: many independent agents (e.g., Wikipedia, reCAPTCHA)
  2. aggregate intelligence: independent agents + combining function that joins the results (e.g., recommender system)
  3. emergent intelligence: the result is intelligent, even if the individuals are not (e.g., ants foraging, leaving scent trails)
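
The “aggregate intelligence” case can be made concrete with a toy example. Here is a minimal sketch, with invented names and data, of independent agents plus a simple combining function (majority vote) producing an outcome more reliable than any one agent:

```python
# Toy "aggregate intelligence": independent noisy agents plus a simple
# combining function (majority vote).  All names and numbers here are
# illustrative, not from any real system.
import random
from collections import Counter

def agent_answer(truth, accuracy=0.7):
    """Each agent independently answers correctly with some probability."""
    return truth if random.random() < accuracy else not truth

def majority_vote(answers):
    """The combining function: pick the most common answer."""
    return Counter(answers).most_common(1)[0][0]

random.seed(42)
truth = True
answers = [agent_answer(truth) for _ in range(101)]
collective = majority_vote(answers)
print(collective)
```

Even though each agent is only 70% accurate, the majority of 101 agents is almost always right, which is the sense in which the combining function, simple as it is, adds intelligence.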

Overall, the experience was fun.  I did find it intriguing that we had no tools for applying collective intelligence to the process of creating an “unconference”.  For that, we used white boards and markers, lots of sticky notes, and pieces of paper to cover up events that were cancelled.  It would seem easy to do better: people could propose ideas, which would show up on people’s laptops.  They could say which ones they would like to attend, which would cause the sessions to be scheduled so most people did not have conflicts, and so they would be in rooms of approximately the right size.  If no one wanted to come, sessions could be cancelled or merged.  It would be fun to see CI in action at a Foo Camp of the future…
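
The scheduler described above could be sketched quite simply. Here is a hypothetical greedy version, with made-up sessions, attendees, and rooms: place popular sessions first, pick the time slot with the fewest voter conflicts, and pick the smallest free room that fits the audience:

```python
# Hypothetical CI unconference scheduler: attendees vote for sessions,
# and a greedy pass assigns each session a slot (minimizing conflicts)
# and a room (roughly matching audience size).  All data is made up.
votes = {                      # session -> set of interested attendees
    "wikis":   {"ann", "bob", "carol", "dave"},
    "tagging": {"ann", "bob", "eve"},
    "ants":    {"carol", "frank"},
    "sensors": {"eve", "frank"},
}
slots = [1, 2]
rooms = sorted([("small", 3), ("big", 10)], key=lambda r: r[1])

def conflict(s1, s2):
    """Number of attendees who want both sessions."""
    return len(votes[s1] & votes[s2])

schedule = {}                  # session -> (slot, room)
# Place popular sessions first; pick the slot with the fewest conflicts.
for session in sorted(votes, key=lambda s: -len(votes[s])):
    best_slot = min(slots, key=lambda t: sum(
        conflict(session, other)
        for other, (t2, _) in schedule.items() if t2 == t))
    # Smallest free room in that slot that still fits the audience.
    used = {room for (t2, room) in schedule.values() if t2 == best_slot}
    room = next((name for name, cap in rooms
                 if name not in used and cap >= len(votes[session])), None)
    if room is None:           # nowhere to put it: cancel or merge
        continue
    schedule[session] = (best_slot, room)

for session, (slot, room) in schedule.items():
    print(session, slot, room)
```

On this toy data the greedy pass schedules all four sessions with no attendee forced to choose between two sessions they voted for; a real system would also need to handle merging and cancellation.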

John


Visual Search


ManagedQ is a very cool search engine interface.  It runs on top of Google, and presents a visual view that aggregates pages according to people, places, and things.  Kind of fun to play with … but the fact that it requires the Shockwave Player means I can’t use it on some of my favorite platforms :(.  I very much like the higher-level view on top of the vanilla Google interface.  Check it out!

John

Tiinker News Site: Human vs. Machine


Read/WriteWeb reports on Tiinker, a news site that uses machine learning to figure out what sort of articles you’re interested in, without messing around with all that social information that reddit, digg, etc. are based on.

I think this is a huge step backwards: machine learning is fine at figuring out topics, but lousy at *quality*.  Further, how can Tiinker know your interest in something new that you haven’t read about yet?

I’d be much more enthusiastic about an approach that would combine the social news of a reddit or digg with machine learning to create smart, social news.  In a system like this, people would read news that they’re interested in, based on what other people like them have been reading, and their own automatically learned profile.
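
One simple way to picture this hybrid is as a blended ranking. The sketch below is entirely hypothetical: the scoring functions, weights, and data are invented for illustration, and a real system would learn them rather than hard-code them:

```python
# Sketch of hybrid "smart, social news": rank each article by blending a
# social signal (what similar readers liked) with a learned personal
# topic profile.  All functions, weights, and data are invented here.

def social_score(article, neighbor_likes):
    """Fraction of similar readers who liked this article."""
    liked = sum(1 for likes in neighbor_likes if article["id"] in likes)
    return liked / max(len(neighbor_likes), 1)

def profile_score(article, topic_weights):
    """Average learned interest weight over the article's topics."""
    weights = [topic_weights.get(t, 0.0) for t in article["topics"]]
    return sum(weights) / max(len(weights), 1)

def rank(articles, neighbor_likes, topic_weights, alpha=0.5):
    """Blend the two signals; alpha trades social vs. personal."""
    score = lambda a: (alpha * social_score(a, neighbor_likes)
                       + (1 - alpha) * profile_score(a, topic_weights))
    return sorted(articles, key=score, reverse=True)

articles = [
    {"id": 1, "topics": ["ml"]},
    {"id": 2, "topics": ["politics"]},
]
neighbor_likes = [{2}, {2}, {1}]        # two neighbors liked article 2
topic_weights = {"ml": 0.9, "politics": 0.1}
ranked = rank(articles, neighbor_likes, topic_weights)
print([a["id"] for a in ranked])
```

Here the personal profile outvotes the social signal for article 1; tuning alpha is exactly the tension between “what people like me read” and “what I’ve been reading.”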

What do you think?
John

Powered by ScribeFire.

Facebook “Application Reputation”


Very interesting article in Read/Write Web about Facebook planning to use the success of an application as a way of choosing how much “spam” (my word, not theirs!) the application can generate.  The idea is that applications that are generally popular with the readers they contact would be allowed to contact more readers in the future.

It would be very interesting to build models to explore how this idea works in practice.  Two dimensions I suspect are particularly rich are:

1) How much throttling of applications is necessary to keep the amount of “application spam” down to a reasonable level, for a given rate of new Facebook applications?
2) If a particular user marks a particular application as valuable (usually implicitly, by using it), how might that translate into a score for that application in the future?  Is there a role for user reputation in this game, as well as application reputation?
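
As a starting point for such a model, here is a toy update rule, invented for illustration and not Facebook’s actual mechanism: each application’s message quota for the next round grows or shrinks with the fraction of recipients who responded positively.

```python
# Toy "application reputation" throttle: an app's message quota scales
# with how well its last batch of messages was received.  The update
# rule and all numbers are invented, not Facebook's actual mechanism.

def next_quota(quota, positive, contacted, floor=10, cap=10_000):
    """Scale quota by the approval rate of the last batch of messages."""
    if contacted == 0:
        return quota
    approval = positive / contacted   # implicit "this app is valuable"
    factor = 0.5 + approval           # 0.5x at 0% approval, 1.5x at 100%
    return max(floor, min(cap, int(quota * factor)))

quota = 100
# A well-liked app (80% approval) earns a growing quota...
quota = next_quota(quota, positive=80, contacted=100)
print(quota)
# ...while a spammy app (5% approval) gets throttled.
print(next_quota(100, positive=5, contacted=100))
```

Running a population of such apps forward over many rounds, with varying approval rates and a stream of new entrants, would be one way to probe the two questions above; adding a per-user weight to `positive` would fold in user reputation as well.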

John


Can we talk about “usable” programming languages?


Very interesting comment posted on this debate about programming languages.  The short form of the argument is that usability testing doesn’t work for products that have a steep learning curve, because by the time you’ve learned how to use the product effectively you’re too biased to comment on its usability fairly.  I think there’s some depth to this argument: it may explain fundamental limits on usability testing for very high dimensional, very complicated products.

However, I also think it misses some of the potential of usability testing for languages.  After all, one of the things that makes a language like Java annoying for the beginner is that there’s simply too much to learn before you can start doing real things with the language.  Usability testing could be a very interesting way to differentiate between languages of similar power, to predict which ones would be best for beginners to learn.

Fun debate!
John     
