the api.
:[[User:Roc|Roc]] I think the right thing to do here is to make an offline application just a set of pinned pages in the cache. Then you can write a single set of pages that can function online or offline. HTTP caching instructions take care of application updating. We could automatically allow a site to pin N% of your disk space, if your free disk space is >M% of total; if they try to go over that, the user has to approve it.
:Instead of providing a manifest, a more Web-friendly approach might be to crawl the page. Basically, when you select a page for offline viewing (which could be as simple as just bookmarking it), we pin it and also download and pin any of the resources we would if we were doing Save As --- images, stylesheets, scripts, IFRAMEs. In addition we could provide a new 'rel' value for <a> links that says "crawl this link for offline viewing". This would be really easy for Web developers to use and maintain and not too difficult to implement, I suspect. Application crawling and download could be done in the background.
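:For illustration, the 'rel' hint might look something like this (the attribute value is purely hypothetical, just to make the idea concrete):
<pre>
<!-- link marked for offline crawling: followed and pinned when the page is saved for offline use -->
<a href="compose.html" rel="offline">Compose</a>

<!-- unmarked link: not crawled -->
<a href="http://example.org/help">Help</a>
</pre>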
::[[User:Blizzard|Blizzard]] Pinning pages in the cache sounds like a great idea as a way to implement this, but I don't think a heuristic that eventually asks the user to add storage is the right way to go. Users often don't know how big their cache is or how much space they have left on their hard drive. I did have a hook for the actual application to throw the space dialog, but that's a little different than the browser throwing that dialog. That requires the app knowing how much space is allocated to it and how much space it is using. The nice thing about that is the app can avoid ever throwing that dialog by expiring data instead of requiring the user to just add more space.
::I also think that it's important that we keep the manifest separate from the pages themselves for a few reasons:
::<ol><li>It's hard to know where an offline application "starts." That is, if you're on page X, does the app start on page X or on another page entirely? The manifest is the logical starting point for the "bookmark" and can also contain "start" information, i.e., which cached page should be loaded when you are offline and the bookmark is loaded. <li>When do you expire data? If you update your offline app, how do you tell when certain pages are unused? If there's no single location where all of the pages can be found, upgrades get a lot more challenging and there's a good chance that we'll end up with some unpredictable heuristic. <li>Raw crawling misses a lot of data. URLs that are accessed via JavaScript, or built from form data on the fly, won't show up in a crawl. <li>It would be a lot of work to maintain all of the possible links in all of your pages. I think it would be a lot easier to have a simple text-based file that contains a list of resources for all of the pages, and a simple <link> in each page that points to that manifest. Everything is explicit and well understood in that case (a rough sketch follows).</ol>
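::For illustration only, the manifest idea might look something like this (the file name, rel value and format are all made up, not a proposal):
<pre>
<!-- in each page of the application -->
<link rel="offline-manifest" href="/app.manifest">

# /app.manifest -- one resource per line
/mail/index.html
/mail/compose.html
/style/mail.css
/script/mail.js
/images/logo.png
</pre>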
:::[[User:udayan|udayan]] One more set of problems arises with deployment of applications in geographically remote areas. These areas have connectivity on a very limited basis (not 24x7), connectivity is of poor quality (speed and reliability) and connectivity is expensive. The remoteness also means applications deployed in such areas need to be remotely manageable, as physical access is not always feasible. This creates a need for applications to be: (a) remotely deployable and manageable and (b) able to work in offline mode (forms-based data entry, locally cached, submitted on detection of connectivity; upgrades to the application and things like form templates downloaded on detection of connectivity).
:::For this we need the ability in the browser to explicitly tag content as "offline capable", where "offline capable" implies a set of things and not just being able to "pin" content in a cache.
===Storage===
mail.yahoo.com and address.yahoo.com.)
:[[User:Roc|Roc]] I think that trying to provide structured storage on the client is hopeless for the same reasons that trying to provide structured storage in a filesystem is hopeless. Whatever model we choose will never be the right model for the majority of applications. Furthermore, it diverges from today's Web programming models. I think we should just provide a simple filesystem API --- essentially, a persistent hashmap from string keys to string values --- slightly enhanced cookies. Remember that developers will want to at least do all the things they do with cookies today, including obfuscation, encryption, and on our side, quota management. People can build libraries on top of this if they want to. They can build libraries for indexing, LIFO cache management and so on.
::[[User:VladVukicevic|VladVukicevic]] 18:00, 7 Jul 2005 (PDT): That's my thinking as well -- something like "supercookies" instead of fully queryable structured storage. At the very least, they'd be much simpler to start to use. We can also provide helpers for serializing a JS Object to/from the value format, which should take care of most consumers' needs. I wouldn't try to just use the cookies api though, since unlike any local-cookies (brownies?) these would never be transmitted to a remote server as part of a request. I think a simple hash map would fit well with the AJAX model as well.
:::[[User:Roc|Roc]] I agree, so I deleted that part of my comment.
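To make the "supercookies" idea concrete, here is a rough sketch of a string-to-string map with JS Object helpers layered on top. The names and the in-memory stand-in are illustrative only; a real implementation would persist per-site data to disk and never send it with requests.
<pre>
// Stand-in for the proposed per-site store; a plain object fakes it here.
var store = {
  data: {},
  set: function (key, value) { this.data[key] = String(value); },
  get: function (key)        { return this.data[key]; }
};

// Helpers for whole JS Objects on top of the string->string map.
// toSource() is Mozilla-specific; eval() round-trips it back to an Object.
function saveObject(key, obj) { store.set(key, obj.toSource()); }
function loadObject(key)      { return eval("(" + store.get(key) + ")"); }

saveObject("prefs", { theme: "dark", pollInterval: 300 });
var prefs = loadObject("prefs");   // prefs.theme == "dark"
</pre>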
===UI===
users.
:[[User:Roc|Roc]] I really like the bookmarks UI idea. When we're in offline mode, bookmarks for pages that are not available should be disabled with a tooltip explaining why. Although maybe everything that gets bookmarked should be downloaded for offline access anyway.
===APIs===
====The APIs have a few easy rules====
1. Think about the use cases.
will) add those later through their own libraries.
====Known use cases====
1. Storage <-> XML-RPC bridge. It's clear that people will want to
that XML document.
====Deployment APIs====
A few different APIs have been proposed (rough sketches follow the list):
1. A database API based on a classic relational database model. That is, tables and rows.
2. A simple dictionary system. A single-level lookup based on a simple string key.
3. No API. Just the ability to cache pages locally.
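For comparison, here are rough sketches of how the first two options might look to a page author; every name below is made up and none of this is a proposed API:
<pre>
// Option 1: relational-style (hypothetical "db" object)
db.execute("CREATE TABLE drafts (id INTEGER, body TEXT)");
db.execute("INSERT INTO drafts VALUES (42, 'Dear Bob...')");
var rows = db.execute("SELECT body FROM drafts WHERE id = 42");

// Option 2: dictionary-style (hypothetical "store" object, as sketched in the Storage section)
store.set("draft:42", "Dear Bob...");
var body = store.get("draft:42");
</pre>
Option 3 needs no new API at all; the pages just have to be in the cache.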
====Functional Coverage====
Required APIs probably include:
1. A way to access local storage and read information out of the database.
2. Assuming that we want to go with a system which allows a huge amount of storage, a way to access and query (?) that data.
3. A way to handle transitions from one page to another. This would probably be a client-side equivalent to form handling on the server side (a rough sketch follows).
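As a very rough illustration of point 3 (and of the offline data-entry scenario mentioned above), a page could intercept its own form submission, queue the data locally while offline, and do a client-side "transition" to a confirmation page. "store" is the hypothetical string map sketched in the Storage section; the rest is plain DOM:
<pre>
// Queue a form's data locally while offline instead of submitting it.
function queueIfOffline(form) {
  if (navigator.onLine)
    return true;                            // online: let the normal submission happen

  var fields = {};
  for (var i = 0; i < form.elements.length; i++) {
    var el = form.elements[i];
    if (el.name) fields[el.name] = el.value;
  }
  store.set("queued:" + new Date().getTime(),
            ({ action: form.action, fields: fields }).toSource());

  location.href = "saved-offline.html";     // client-side "page transition"
  return false;                             // cancel the real submission
}
// usage: <form action="/submit" onsubmit="return queueIfOffline(this)">
</pre>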