On a recent trip to Walt Disney World, I played the excellent new Kim Possible mobile game at Epcot, in which players are loaned a special cellular phone fitted with various sensors and emitters. The phone plays videos about mysteries unfolding around Epcot, which players solve by visiting sites and waving their phones at props; using the phones’ geolocation and readings from the phones’ RFIDs, the props animate when they sense the device nearby.

It’s a very clever game: not only does it bring some much-needed tween entertainment to Epcot’s World Showcase, but it also does some insanely clever networking stuff, spreading players out by sending them after clues in less-crowded parts of the park based on up-to-the-second load information.

But it got me thinking: why is the phone emitting and the world sensing? Why not build the sensors into the phone and the emitters into the world?

This question sits at the center of any number of thorny policy debates about privacy, surveillance, freedom, and open systems. The last decade has seen an enormous growth of sensors and readers, from the RFID toll-payment system glued to your windshield to the two or even three cameras in your mobile phone to the CCTV your nosy neighbor is using to spy on your backyard pool. The possibilities for emitting and sensing data are genuinely revolutionary, and many of us in the computers, freedom, and privacy crowd have been worrying that privacy’s headed for the guillotine.

The problem is that this stuff is both cheap and cool, and there are a million things you can do with it that make the world seem like magic — the contactless cards that let you gas up, get on the bus, or get into your building by waving your wallet at some reader, for example.

Since sensors are more expensive than emitters, all the early effort went into developing applications that assumed emitters would be stuck all over you, so that the relatively sparse sensors in the world around you could figure out where you were and adjust accordingly. You’re the barcode, and you wave yourself at various checkout points to activate them.

But sensor prices are crashing. My latest phone, a Google/HTC Nexus One, has an extra mic solely for noise cancellation; last year I was carrying around a Nokia phone with two cameras, one outside for snapping the world and one inside for video-conferencing. Magstripe cards can increasingly be swiped in any direction — the readers have two, or even four, reading heads. This year’s CES coverage suggests that 2010 might well be The Year They Put a CCD on Everything (including my dentist’s new X-ray machine, which no longer uses film).

Which presents the potential for a very disruptive future: one in which you are the register and the world is barcoded. That’s what the Semacode people have been working on forever; it’s what drives mobile apps that scan UPCs on store shelves and tell you where to go for cheaper stuff, but that’s just the start of things.
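Apps like these start with a simple, well-specified step: checking that a scanned code is even a plausible UPC before querying a price database. Here is a minimal sketch of that step, showing only the standard UPC-A check-digit rule (the price-lookup service itself, which would vary by app, is omitted):

```python
def upc_a_is_valid(code: str) -> bool:
    """Return True if `code` is a 12-digit UPC-A with a correct check digit."""
    if len(code) != 12 or not code.isdigit():
        return False
    digits = [int(c) for c in code]
    # Odd positions (1st, 3rd, ...) are weighted 3; even positions weighted 1.
    total = 3 * sum(digits[0:11:2]) + sum(digits[1:11:2])
    # The check digit brings the weighted total up to a multiple of 10.
    check = (10 - total % 10) % 10
    return check == digits[11]

print(upc_a_is_valid("036000291452"))  # True: a valid example code
print(upc_a_is_valid("036000291453"))  # False: bad check digit
```

Only after this sanity check would the app send the code off to a database and route you to the cheaper store.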

Thus far, RFIDs in products have been designed with stores, not customers, in mind. It needn’t be so. And even where there’s no UPC or RFID or other identifier, devices with high-resolution cameras and geolocative sensors have lots of options for figuring out more information about their environments: reading and parsing model numbers, part numbers, and street signs with optical character recognition and database lookups.

It all depends on how the system is designed, and why. A networked society that treats people as scanners and keeps their data on their devices or in their encrypted private networked storage is one in which we can navigate the world better. One that treats humans as objects to be scanned, managed, and regimented is one that realizes the worst technophobic nightmares.

The choice is ours.