Creating a Viral App

Here’s a fun article about creating a viral app, based on a teacher’s experience with a Stanford class that wrote Facebook applications last year.  The article has some interesting ideas about distribution rates through viral marketing.
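
I won’t try to reproduce the article’s numbers, but the usual back-of-the-envelope model of viral distribution is easy to sketch: if each user sends some number of invitations and some fraction are accepted, their product is the viral coefficient K, and the user base compounds only while K exceeds 1.  A rough illustration (my own toy model and numbers, not the article’s):

    def viral_growth(seed_users, invites_per_user, accept_rate, cycles):
        """Toy viral-distribution model: each cycle, every current user
        sends invitations, and a fraction of invitees become new users."""
        k = invites_per_user * accept_rate  # the viral coefficient
        users = float(seed_users)
        for cycle in range(1, cycles + 1):
            users += users * k  # new users recruited this cycle
            print(f"cycle {cycle}: ~{int(users)} users (K = {k:.2f})")
        return users

    # With K > 1 the numbers compound; with K < 1 growth stalls without
    # some other source of new users.
    viral_growth(seed_users=100, invites_per_user=5, accept_rate=0.25, cycles=5)

Run with K = 1.25 as above, a hundred seed users become several thousand in five invitation cycles; drop the acceptance rate below 0.2 and the same code shows growth flattening out.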

John

Return to Basics in Computer Science

Joe Konstan pointed me to this interesting debate around an essay by Dewar and Schonberg about the teaching of computer science.  Many essays of this kind are shallow: the authors have a particular style of programming that they think will solve all problems, and they advocate that the rest of us ought to spend more time teaching and learning their style.  Dewar and Schonberg’s presentation is much deeper, perhaps because of their experience both teaching and practicing computer science.

The three basic arguments in their article are:

1) Computer scientists should learn about many different language paradigms (described in detail in the essay), because that gives them a deep understanding of the approaches that might best solve any particular problem.

2) Computer scientists should learn more about formal models and how they can be used in software construction.  In particular, computer scientists should learn more of the appropriate math.

3) Computer scientists should learn the discipline from the ground up.  The early use of a very high-level language such as Python, or even Java, means that students develop too high-level an understanding of programming, missing details that are sometimes crucial.

I think the authors are absolutely correct on (1), and they do a good job of presenting their arguments.  I’ll say no more on this issue, since their article does such a good job.

I think the issue of formal models is a trickier one.  I certainly agree with their assertion that it would be valuable for all computer scientists to understand how formal models can be part of the solution for achieving very high reliability.  On the other hand, most computer scientists will never work on such systems.  How much of their time should be spent studying such systems?  My bias would be to create a path for those students who find such work interesting, but to require only the basics for all students.

Finally, I think the authors are dead wrong on the idea of teaching students to program from the hardware up.  I understand the temptation: that’s how we learned, and we’re all awfully good at what we do, so it must be the best way to learn.  But this argument misses the most important skill for a computer scientist: effective abstraction.  The current approach of beginning with high-level languages starts students on the path to understanding the really deep issues of our discipline, rather than spending this precious formative time on problems only a few of them will face in practice.  The authors argue that work in high-level languages is much easier to outsource than lower-level thinking.  They have it exactly backwards: the most challenging problems in our discipline today, and the most difficult to outsource, involve mapping user needs to concrete, implementable requirements.  A student who knows what a high-level language can do, along with the power of its attendant libraries, is much better prepared for this sort of work than a student who has spent years learning about machine architecture and machine language.

I agree with the authors that Java is the wrong language for the first course, but for nearly the opposite reason.  Java is a difficult language to learn because it requires that so many details be understood before an interesting program can be built.  In particular, Java suffers from opaque syntax for simple things, like the basic list and dictionary data structures, and from the lack of lambda expressions to make higher-order programming accessible.  Beginning students would be much better off with a language like Python, which gives them the tools to explore both modern imperative programming and functional programming.
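
To make the contrast concrete, here is a small sketch of my own (not from the essay) of the kind of program a beginner can write in a few lines of Python; the names and data are made up for illustration:

    # Literal syntax for the two workhorse data structures.
    scores = {"ada": 91, "alan": 84, "grace": 97}
    names = ["ada", "alan", "grace"]

    # Higher-order programming with a lambda: sort names by their score.
    by_score = sorted(names, key=lambda name: scores[name], reverse=True)
    print(by_score)  # ['grace', 'ada', 'alan']

    # A list comprehension expresses filter-and-transform in one line.
    honors = [name.title() for name in names if scores[name] >= 90]
    print(honors)  # ['Ada', 'Grace']

The Java equivalent, as of this writing, needs class and method scaffolding, explicit generic types, and a Comparator written as an anonymous inner class where the lambda appears, which is exactly the kind of detail that gets between a beginner and the idea.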

John

Diebold and New Hampshire

Lots of folks online are wondering whether Diebold messed up the election results in New Hampshire.  For the record, I think that’s unlikely, but the key issue is that the use of closed-source voting machines creates a dangerous lack of transparency in one of the key processes of democracy.  We should get rid of these machines until we have a more reliable way to guarantee they’re doing what they promise to do.

Meanwhile, this page is one of the more sober analyses of what happened in New Hampshire.  Bottom line: though hand-counted results show Obama doing much better than machine-counted results, demographic differences seem a more likely cause of the gap than cheating.

John

Philosophy of Science if Physics is a Simulation

There is an unusually fun discussion on Slashdot about a not very good paper that suggests that physicists should explore the question of whether the universe is a simulation by a sufficiently advanced civilization.  The argument is an old one: the basic premise is that if a civilization ever advances far enough that they are able to simulate a universe as complicated as ours, they’ll probably simulate it zillions of times, which implies there are more simulations than universes, which implies we’re likely in a simulation.  The argument from probability seems silly to me: starting with a premise that we have no way to analyze probabilistically (“ever advances far enough …”) and ending with a probabilistic argument is an awkward path, but …
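
To spell out the arithmetic the argument leans on: if there is one real universe and its inhabitants eventually run N indistinguishable simulations of it, then a randomly chosen observer lives in a simulation with probability N / (N + 1), which approaches certainty as N grows.  All of the force of the conclusion comes from the unargued assumptions: the prior on N, and on whether such simulations ever happen at all.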

The basic question is interesting from a philosophy of science perspective.  If we are in a simulation, can we ever know?  If so, can we affect the simulation?  There are good reasons to believe we could not learn whether we’re in a simulation, because all of our a posteriori knowledge would be of the simulated universe.  How could we design an experiment that would measure something that is, by definition, outside of the things we can measure?  The paper cited on Slashdot suggests that we might seek information-theoretic limitations of our current universe.  I’m not sure a discovery one way or the other would convince anyone: if there are limitations on the information-processing ability of our universe, that might just be the physics of our universe, rather than evidence of an underlying computer much like our own.  And if we find no such limits, they might simply be very hard to find, or the universe might be running on a very different type of computer, with limits different from those we are used to.

In any case, we’re probably best off continuing to work away at figuring out what the rules of our universe are.  If it is a simulation, it’s a pretty good one, and we probably can’t prevent the cosmic Ctrl-Alt-Del anyway.

Of course, we might be able to induce a Ctrl-Alt-Del if we want to.  Perhaps there’s a bug in the simulation software that we could exploit to cause the blue screen of death on a grand scale.  (Some people have suggested that nuclear weapons are such a bug, on a planetary scale.)

I’m reminded of an old story about an early disk drive.  The drive was large, nearly as tall as a person, and much wider.  The story is that the disk read/write assembly was massive enough that if the software “seeked” from side to side of the drive rhythmically, the drive could be made to tip slightly from side to side, and eventually to “walk” across the floor.  Perhaps a goal of physics should be to see if we can tip a cosmic disk drive over, to see what would happen.

John
