Here’s an interesting thought. The flip side of “prove you aren’t a robot” is “prove you are human.”

Though it’s no easier to prove, at least it places the onus of proof on the spam bot and not your human guests. It’s subtle, but there’s a philosophical difference between requiring people to do something that is difficult for a machine, versus asking the machine to do what a real human will do naturally.

Damien Katz writes about one way to approach this problem with the use of CSS.

It’s a neat idea: instead of asking the user to prove he’s human, it tricks the spam bot into revealing it’s a bot. It does this with an email field that is hidden from the user by CSS.

When a human user fills out the form, the hidden field will always be blank. But a spam bot doesn’t know the field is supposed to be hidden, so it adds a bogus email address and submits the form. When the back-end code sees an email address in the posting, it knows the form was filled in by a bot and ignores the whole submission.
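The mechanism above can be sketched in a few lines of Python. This is a minimal illustration, not Damien Katz’s actual code; the field name `email_confirm` and the inline `display: none` style are hypothetical choices for the honeypot field.

```python
# Negative captcha ("honeypot field") sketch. The form includes a field
# hidden from humans by CSS; a bot that fills in every field exposes itself.

FORM_HTML = """
<form method="post" action="/comment">
  <input type="text" name="name">
  <textarea name="comment"></textarea>
  <!-- Hidden from humans by CSS; naive bots fill it in anyway. -->
  <input type="text" name="email_confirm" style="display: none">
  <input type="submit" value="Post">
</form>
"""

def is_spam(form_data):
    """A human never sees the honeypot field, so any non-blank value
    there marks the submission as bot-generated."""
    return bool(form_data.get("email_confirm", "").strip())

# A human submission leaves the hidden field empty:
print(is_spam({"name": "Ann", "comment": "Nice post!"}))            # False
# A bot dutifully stuffs an address into every email-looking field:
print(is_spam({"name": "x", "email_confirm": "spam@example.com"}))  # True
```

The back end simply drops any submission where `is_spam` returns true, with no extra work asked of human visitors.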

It’s not perfect—this won’t stop custom-coded spam attacks—but it does kill some of the automated, roving, spider-based comment spam. Working negative-captcha methods into a dynamic, changing-key system (much like current captchas but transparent to your human users) is the obvious next step, and I bet we’ll be seeing (or should I say “not seeing”) stuff like this very soon. – Link.