Gears API: create web apps that work offline

Google released a new JavaScript API today called Gears that makes it possible to write modeless web applications that will function offline. A browser plugin is available for IE and Firefox (OS X, Linux, and Windows), with Safari support planned for the near future. The plugin will need to be installed by users of Gears-enabled applications.

From what I can see, and keep in mind that I haven’t used the API yet, there are three basic services that the API provides:

  • local file resource storage and caching so that you can view files after disconnecting
  • a client-side SQL database that can be used by JavaScript to store and fetch data
  • a worker pool module for running asynchronous background processes
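
According to the developer’s guide, these services are exposed through a factory object that the plugin injects into the page. As a rough sketch of the first service, the LocalServer module can capture a set of URLs into a local store so they remain viewable offline; the store name and file list below are made up for illustration:

```javascript
// Sketch only: assumes the Gears plugin is installed and gears_init.js
// has been loaded, which defines the google.gears.factory object.
var localServer = google.gears.factory.create('beta.localserver');

// A ResourceStore captures individual URLs so they can be served
// from the local cache when the browser is offline.
var store = localServer.createStore('notes-app');
store.capture(['index.html', 'style.css', 'app.js'],
  function (url, success, captureId) {
    // This callback fires once per URL; success is false if a fetch failed.
    if (!success) {
      alert('Failed to cache ' + url);
    }
  });
```

Because everything hangs off `google.gears.factory.create`, the other two services (`'beta.database'` and `'beta.workerpool'`) are obtained the same way.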

The obvious use for this is to make stateful applications that continue to operate when you’re offline, but maybe there are some privacy opportunities here too. Today, applications come in primarily two varieties: apps with user data and software stored locally, and web-based applications that execute and store data on the server. What I’m curious to see is if developers will begin making a third, hybrid category of application, where software release and maintenance is web-based and global data is available for local consumption, but the storage and processing of user-specific data takes place on the client side, safe from unwanted profiling.
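
That third category is easy to picture with the Gears database module, since the SQLite-backed store lives entirely on the client and is scoped to the site’s origin; the table and values in this sketch are invented for illustration:

```javascript
// Sketch only: assumes the Gears plugin and gears_init.js are available.
var db = google.gears.factory.create('beta.database');
db.open('private-notes'); // stored locally, scoped to this site's origin

db.execute('create table if not exists Notes (body text, created int)');
db.execute('insert into Notes values (?, ?)',
           ['my private note', new Date().getTime()]);

// Query locally; nothing in this loop is ever sent to the server.
var rs = db.execute('select body from Notes order by created desc');
while (rs.isValidRow()) {
  console.log(rs.field(0));
  rs.next();
}
rs.close();
```

In the hybrid model described above, the server would ship only the application code and shared data, while tables like this one hold the user-specific state.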

Google Gears API Developer’s Guide – Link

7 thoughts on “Gears API: create web apps that work offline”

  1. says:

    oh my oh my. If I got it right from Pete’s video, it visits all the links on the page to see if they exist and whether the search term is inside.
    Well, I find it a bit crazy, remembering the auto-follow feature of Firefox. It was really abusive. Imagine doing that for each link on Google! What a waste of time.
    Btw, I guess Google already shows you whether the search term is there in the short description.
    Also, I’ve checked Pete’s code: he uses a GET to check whether the result exists or not, not even a HEAD.
    Really disappointing.

    Besides finding him a bit narcissistic, I really liked the split screen idea.

  2. petewarden says:

    Ouch, tell me how you really feel, LiveBookMark.

    I’m doing a GET because one of my biggest time-wasters on obscure searches is pages that don’t have the keywords, either because of deliberate cloaking or just changes since the pages were spidered. The extension checks the returned HTML to make sure it’s got the keywords. No images or other media are pulled.

    Is that abusing the host’s resources? It’s something I’ve been wrestling with, and is a debate I’d like to have. It seems like the only way users are going to avoid having their time abused by black-hat SEO is some sort of mechanism like this. The current informal code of ethics seems to have been written for webmasters’ benefit, not users.

    I’ll go into more detail, and cover the technical options (robots.txt, opt-in/out, agent detection) for dealing with this in a blog post.

  3. jason_striegel says:

    It’s only grabbing pages when you’re doing a search, so we’re not really talking about anything on the order of Firefox’s auto-follow. Typically, you’re scanning through the results anyway, so being able to verify what you’re clicking on before downloading all the images on a page could actually save resources.

    Maybe an on/off toggle would be nice, so that I could turn the functionality off easily without restarting Firefox. Usually I’m happy without it, but when you’re searching for terms that are hard to find, this is a nice way of filtering out irrelevant results.

  4. petewarden says:

    There actually is an option: you can toggle it using Control + / (forward slash). I do need to add a GUI and a better help system for those sorts of preferences, too.

  5. says:

    re: petewarden
    Almost all sane webmasters agree that Firefox’s auto-follow is abusive. You’ve just introduced another abusive tool to Firefox. I agree with jason_striegel’s approach of filtering on the terms.
    Cloaking is ugly enough, but even Yahoo does it. You can’t do much about that.
    I’m not trying to argue with you. I really liked the split screen idea. I guess you should just use a HEAD to find out whether the page still exists and leave it to the split screen to show whether the term is still there.
    When you add robots.txt support, make sure that you adjust the user-agent as well.
    I appreciate your effort in porting the plugin to IE. An Opera port would be appreciated as well.

  6. petewarden says:

    I’ve now completed the IE port, and renamed PeteSearch to the more descriptive GoogleHotKeys.

    In addition to the blog posts covering the port I’ve also put together a public wiki with full documentation on creating your own Internet Explorer extensions, and there’s the full source code for the completed plugin for reference too.

  7. nonlinearly says:

    Hi, I have a C program that runs from the Windows command line. It does caller ID: when the phone rings, it displays the caller’s number. When I run the compiled exe from the command line, the program stays open, waiting for the phone to ring (it uses a separate thread so it doesn’t block other operations while waiting).
    I need to move this functionality into Firefox… any ideas?
