bsmedberg says: Why is the update a separate executable? What we would need to add to xpinstall is
- Binary-patch functionality
- Ability to do the xpinstall at shutdown/startup (not now)
Since we're planning on coding these features anyway, let's do it right! I can't see that it would take a lot more time than creating separate update executables for each update.
darin: There is a strong desire to avoid the complexity of xpinstall. We only need the ability to add, remove, replace, and patch files, and that can be done simply and reliably without xpinstall. Most of the complexity of software update is not addressed by xpinstall. The separate executable idea was intended for three reasons: 1) no need for the user to download the update utility until they need to update their app, 2) the updater would be very small, and 3) we might want to change the updater in the future.
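To make darin's point concrete, here is a minimal sketch of an updater limited to those four operations. The line-based manifest format is invented for illustration, and the binary-patch step shells out to the bspatch command-line tool; none of this is the actual planned implementation.

```python
# Minimal sketch of an updater limited to add/remove/replace/patch.
# Hypothetical manifest format, e.g.:
#   add     chrome/new.jar
#   remove  plugins/old.dll
#   patch   components/libgklayout.so
import os
import shutil
import subprocess

def apply_update(manifest_path, app_dir, staging_dir):
    with open(manifest_path) as manifest:
        for line in manifest:
            action, _, relpath = line.strip().partition(" ")
            relpath = relpath.strip()
            target = os.path.join(app_dir, relpath)
            if action in ("add", "replace"):
                shutil.copy2(os.path.join(staging_dir, relpath), target)
            elif action == "remove":
                os.remove(target)
            elif action == "patch":
                # bspatch <oldfile> <newfile> <patchfile>
                patched = target + ".new"
                subprocess.check_call(
                    ["bspatch", target, patched,
                     os.path.join(staging_dir, relpath + ".diff")])
                os.replace(patched, target)  # atomic within one volume
```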
Silver says: on NT-based OSes, you can at least rename files that are loaded as part of an application. This would allow you to rename the existing, in-use files and put down the new ones, all with Firefox running. Then restart it, and clean up the old ones in the background.
darin: There's no real benefit to this approach since users still need to restart Firefox to get the new version. From the user's point of view, it is easy enough to apply the changes between the time Firefox shuts down and restarts. We can easily make the updater show a progress meter, and that should do the trick.
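A minimal sketch of the swap Silver describes, with hypothetical file names; on NT-based Windows an in-use file can typically be renamed even though it cannot be deleted or overwritten:

```python
# Sketch of the NT rename trick: a DLL or EXE that is currently loaded
# can usually be renamed out of the way (though not deleted), so the
# new file can be dropped in while Firefox is still running.
import os

def swap_in(target, new_file):
    leftover = target + ".old"
    if os.path.exists(leftover):
        os.remove(leftover)      # old image from a previous update run
    os.rename(target, leftover)  # allowed on NT even while loaded
    os.rename(new_file, target)
    # The ".old" file is removed in the background after the restart,
    # once no running process still has it mapped.
```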
Comments from bsmedberg: are we sure that the mozilla mirror network supports byte-range requests properly? Is there some other way to gate bandwidth?
darin: Yes, I have tested this out, and it seems to work. The only problem is that the various web servers do not all compute ETags in the same way, so we cannot issue If-Match requests, but that should be okay since the URLs of the files being downloaded should be sufficient to make the entities unique.
justdave: NO!!! The primary mirrors support it, but there's no way to guarantee that the secondaries do, and the primary mirrors don't have enough available bandwidth to support every Firefox user downloading at once when we have an update. Anything the update service downloads needs to be obtained via our download redirector on download.mozilla.org, so the bandwidth gets distributed to all of the primary and secondary mirrors according to who has bandwidth available.
darin: I think you're worrying about something that we can easily solve. Fetching the data via redirect is not a problem for this system. The idea here is that we will try to fetch the files in small chunks. We need to worry about all of the Firefoxes trying to update themselves, but we can be smart about balancing the load. The download redirector could even return an error if the load is too high; the Firefoxes would then wait until some timeout before trying again. Are you sure that the mirrors do not support byte-range requests? Even very old versions of Apache support them for static files. Are you concerned about using HTTP instead of FTP?
justdave: Yeah, solvable easily enough. :) The problem is that some of the mirrors are actually using non-Apache web servers. HTTP is very much preferred over FTP, though. Whether a server supports byte ranges ought to be testable, so we could have Bouncer's sentry script test the servers to make sure they support it (so ones that don't get removed from the channel the update service will end up using). Bug 292942 has been filed for this.
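For illustration, here is a rough sketch (Python, stdlib only; the URL handling and chunk size are invented) of both halves of this exchange: the kind of probe a sentry script might use to check byte-range support, and the small-chunk, back-off-and-retry download loop darin describes:

```python
# Sketch: probe byte-range support, then download in small chunks with
# exponential back-off on failure. Assumes the mirror honours Range
# requests (that is what the probe below checks for).
import time
import urllib.request

CHUNK = 64 * 1024

def supports_ranges(url):
    """A 206 reply to a tiny Range request means byte ranges work."""
    req = urllib.request.Request(url, headers={"Range": "bytes=0-0"})
    with urllib.request.urlopen(req) as resp:
        return resp.status == 206

def fetch_chunked(url, out_path, retries=5):
    # If-Match is deliberately skipped: the mirrors compute ETags
    # differently (see darin's comment above).
    offset, total, attempt = 0, None, 0
    with open(out_path, "wb") as out:
        while total is None or offset < total:
            rng = "bytes=%d-%d" % (offset, offset + CHUNK - 1)
            req = urllib.request.Request(url, headers={"Range": rng})
            try:
                with urllib.request.urlopen(req) as resp:
                    # "Content-Range: bytes 0-65535/123456" -> total size
                    total = int(resp.headers["Content-Range"].rsplit("/", 1)[1])
                    data = resp.read()
            except OSError:
                attempt += 1
                if attempt > retries:
                    raise
                time.sleep(2 ** attempt)  # wait out a timeout, then retry
                continue
            out.write(data)
            offset += len(data)
```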
Comments from bsmedberg: We should think carefully about how we handle these signatures. I presume we want mozilla updates to be signed *by mozilla.org*, not just signed in general. How do we identify which certs/cert chains are appropriate?
darin: I'm not sure. I suspect that dougt will have some good ideas about this problem.
jmdesp: We just had a discussion about that in news:npm.crypto. It may require a little more work to verify what CA the extension is signed under, but it's not really hard. The main point here is that Mozilla would then act as a CA. I floated the idea that what matters is not the identity of the person who wants the certificate, but that they have a valuable extension to distribute. We could require that the extension first be made publicly available unsigned, with the certificate granted after a positive community review.
One question then is, if we end up issuing many certificates, won't some of them end up misused? One solution is to link each certificate to an extension, not an individual: we could use the extension's GUID as the ID in the cert, so that the certificate can be used only for that one specific extension.
Then at installation time, we could check the extension using the update mechanism, which would prevent us from installing it if it is known to be dangerous or is a version with security holes. Peter Gutmann raised the issue that for ActiveX, the problem is more the exploitation of vulnerabilities in legitimate ActiveX controls than people deliberately signing evil components. But the mechanism above would cover both.
The one problem left is the risk that checking the validity of extensions would end up placing a major load on the servers. If you doubt that could happen, see the story of Class3SoftwarePublishers.crl at VeriSign: http://www.verisign.com/verisign-inc/news-and-events/news-archive/us-news-2004/page_000738.html
Class3SoftwarePublishers.crl was only 7 KB, but they would have needed up to 4 Gbit of bandwidth to keep up that day. They normally minimize the bandwidth by restricting requests for that file to one per client per month, except that this failed that day and they received ten times the usual number of requests.
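One possible shape for an answer to bsmedberg's question, sketched with the third-party Python cryptography package: rather than accepting any chain that merely verifies, pin the expected mozilla.org signing certificate by fingerprint. The fingerprint value, function names, and RSA assumption are placeholders, not a design decision:

```python
# Sketch: pin the expected signing certificate by fingerprint instead
# of trusting any chain that happens to verify.
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

PINNED_SHA256 = bytes.fromhex("00" * 32)  # placeholder; ship the real one with the app

def verify_update(cert_der, signature, payload):
    cert = x509.load_der_x509_certificate(cert_der)
    # "Signed by mozilla.org", not just "signed": compare fingerprints.
    # (jmdesp's variant would instead check an extension GUID in the cert.)
    if cert.fingerprint(hashes.SHA256()) != PINNED_SHA256:
        raise ValueError("not the pinned mozilla.org signing certificate")
    # Raises InvalidSignature if the payload was tampered with (RSA assumed).
    cert.public_key().verify(signature, payload,
                             padding.PKCS1v15(), hashes.SHA256())
```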
Comments from chofmann: capturing some things from a discussion this morning...
- How to deal with situations requiring multiple updates. Options are to apply patches in sequence, possibly running the browser between installations as a sanity check (or not); or maybe just initially force multi-patch users down the full-update path (see the sketch after this list).
- Experience shows we need to have a pretty exhaustive list of OS combinations to test against, so we capture behavioral differences between Win98, Win2k, XP, OS X releases, Linux distros, etc.
- Rollback: details to work out about what to do when a rollback situation is encountered. Maybe try again, or fall back from the patch to a full upgrade.
- Look at options for compression of the packages...
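A hedged sketch of the multi-update and rollback options above; every function name here is hypothetical:

```python
# Hypothetical sketch: apply partial patches in sequence, sanity-check
# after each one, roll back a failed step, and fall back to the full
# update if sequential patching cannot complete.
def update(current_version, latest_version):
    try:
        for patch in patches_between(current_version, latest_version):
            snapshot = take_snapshot()          # whatever rollback requires
            try:
                apply_patch(patch)
                sanity_check()                  # optionally run the browser here
            except Exception:
                restore_snapshot(snapshot)      # undo the failed step...
                raise
    except Exception:
        apply_full_update(latest_version)       # ...and take the full-update path
```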
Comments from jmdesp: Currently the software update uses an SSL connection to get the information. How will this scale if we have several million clients? Bouncer will only redirect them after one initial exchange, and with SSL this exchange involves a non-negligible amount of data. It might be better to switch to a model where security is ensured not through SSL but by sending a signed answer over HTTP, with DNS-level distribution of requests rather than HTTP redirects.
darin: I believe that the current application update system serves an RDF manifest file via HTTPS. The current plan does not change that. Do you believe that the current system places too much load on the HTTPS server providing these manifest files today?
justdave: we're already getting several million clients, it's handling them just fine, and we have plenty of infrastructure to scale now. For example, on May 19, there were 9,873,623 hits on the AUS servers. AUS is using DNS round-robining right now (and will soon have a set of load balancers, so multiple servers will share an IP as well). Bouncer is only used when there's actually an update to retrieve; it's not used for checking for updates. Bouncer also doesn't use SSL (which is why the plan is to have signatures in the RDF file, which *is* retrieved via SSL, and verify them after download).
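To spell out the scheme justdave describes, a small sketch: the manifest (fetched over SSL) carries a digest for each file, the bulk bits travel over plain HTTP via the mirrors, and the client checks the digest after download. The digest algorithm is an assumption for illustration:

```python
# Sketch: verify a mirror-served download against the digest carried
# in the SSL-fetched manifest.
import hashlib

def verify_download(path, expected_digest_hex):
    digest = hashlib.sha1()          # algorithm assumed for illustration
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 16), b""):
            digest.update(block)
    if digest.hexdigest() != expected_digest_hex:
        raise ValueError("downloaded file does not match the manifest digest")
```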
The license of bsdiff/bspatch looks problematic. It is a «BSD Protection license»; there is nothing about it at the OSI: http://www.opensource.org/licenses/ , it's not GPL-compatible, and according to debian-legal - http://lists.debian.org/debian-legal/2003/10/msg00347.html - it isn't a free license anyway. The project is new enough that a license change should be possible, but there are equivalent libs: xdelta seems standard, and jojodiff/jdiff is another alternative.
BTW, there is also the very simple rsync (a synchronization algorithm and protocol, implemented by librsync). This is very different in that it requires very little infrastructure. It can be used to synchronize any version of a set of files (all the files in the app folder that an XPI would replace or patch) with any other version. The XPI files could then be replaced by just their manifest: to get the data, the client synchronizes its application directory with the one on the server. There's still a big question, however - I'm not sure how the CPU load grows with the number of clients.
--Tobu 21:04, 2 Jun 2005 (PDT)
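To make the CPU-load question concrete, here is the heart of the rsync algorithm in a simplified sketch: a weak rolling checksum that slides one byte at a time in O(1), which is what lets a client scan a large old file against the server's block signatures without rehashing every window from scratch.

```python
# Simplified version of rsync's weak rolling checksum. Computing it
# fresh is O(block); sliding the window one byte is O(1).
M = 1 << 16

def weak_checksum(block):
    a = sum(block) % M
    b = sum((len(block) - i) * x for i, x in enumerate(block)) % M
    return a, b

def roll(a, b, out_byte, in_byte, block_len):
    """Checksum of the window shifted right by one byte."""
    a = (a - out_byte + in_byte) % M
    b = (b - block_len * out_byte + a) % M
    return a, b
```

The client-side scan is linear in the file size; the open question above is the server side, which has to compute block signatures per file version, though those could presumably be computed once per release and cached.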
My main grievance with Firefox is being obliged to restart it when adding XPI extensions. I don't know the internals, so I don't know whether there is a good or bad reason for that. But dropping that limitation would be a sure win.
Cognominal 14:20, 3 Jun 2005 (PDT)
Part of the plan worries me, to quote the first paragraph:
"Revise the existing toolkit code which downloads XPI updates. Provide a silent mode that will be used for security updates. Do this only if the user has agreed (via some UI during installation perhaps) and only if the user has write permission to the installation directory. We don't want this update system to get in the way of RPM or MSI based solutions, etc."
Picture the large Microsoft Windows-running company where employees are users, not even "power users", and don't have write permission to the installation directory. The quoted paragraph seems to specifically exclude those clients automatically upgrading. Now, I think the way other programs running under Windows get around the necessary privilege elevation is by running a "service" as a user with rights to the installation directory, e.g. SYSTEM or Administrator, which the client running as a plain user can connect to. Without this, there doesn't seem to be a way of managing Firefox without visiting each and every computer it is installed on.
Automatic Updates vs System Security
"Picture the large microsoft windows running company where employees are users, not even "power users", and don't have write permission to the installation directory. The quoted paragraph seems to specifically exclude the clients automatically upgrading. Now, I think the way other programs running under windows get around the necessary priviledge elevation, is by running a "service" as a user with rights to the installation directory, e.g. system, or administrator, which the client running as user can connect to. Without this, there doesn't seem to be a way of managing firefox without visiting each an every computer it is installed on."
As an administrator of multiple domains and stand-alone networks across many campuses, it is a poor system if upgrading Firefox must be done by an administrator on each machine. If it were a service, or a program that called an admin account to do the upgrade, it would be much less of a headache. I don't like IE; however, automatic updates save me so much time. Certainly, I don't want to give Users/Domain Users RW access to programs to do updates. Fewer permissions for users, fewer security issues.
Most home computers, bought OEM etc., are improperly set up. Stores/OEMs set up the default account as an administrator (this is why so many people have spyware and malware issues). But for those home computers that are set up with limited-user or plain user accounts, the same problem will occur as well. And that's just Windows.
What about GNU/Linux or OS X? Root owns my programs. Will Firefox have to be run as root or a superuser just to update?
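For illustration only, here is one common shape for the privileged-service pattern these comments ask for; this is not anything Mozilla has committed to, and the paths and helper functions are invented. Unprivileged clients drop a request into a spool directory, and a service with write access to the install directory re-verifies the package itself before applying it:

```python
# Illustrative only: a privileged update service polling a spool
# directory that unprivileged clients can write to.
import os
import time

SPOOL = r"C:\ProgramData\AppUpdater\spool"   # invented path

def service_loop():
    while True:
        for name in os.listdir(SPOOL):
            request = os.path.join(SPOOL, name)
            package = read_requested_package(request)  # hypothetical
            # Never trust the unprivileged requester: check the
            # signature again inside the service.
            if verify_signature(package):              # hypothetical
                apply_update(package)                  # hypothetical
            os.remove(request)
        time.sleep(60)
```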
ctempleton3: I work for a large multinational defense corporation. Our division's corporate policies do not allow Firefox to be used because there is no centralized system to push down updates to individual users. Our MIS department believes that a copy of Firefox that remains unpatched after a security flaw is found is more dangerous than a copy of Internet Explorer that is patched. I ask that you as developers look into an open-source system for pushing updates down to users.
Software Updates Revised
My interest is in the complete update process and I think some vital elements are missing.
- To begin with, I read in the discussions that some customised plug-ins, such as imagedownload, cannot cope with updates. This knowledge should be stored explicitly somewhere and used by the update checker.
- I think we're missing out on valuable feedback on failed installations. Why don't we use Talkback? It would be nice to see how many deployments actually fail, and do something about it.
- Older compiled versions cannot be downloaded from the site. I know we want to encourage users to work with the latest version; however, I hear from system managers that they need to test updates on a different system first, before company-wide deployment. The updater does not support this yet, especially because you cannot give it a different location where updates can be found (a sketch follows below).
Feel free to comment on this; these conclusions are based on some research I've been doing in the area of software deployment.
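As a sketch of two of the points above (all preference names and helper functions here are invented for illustration): an update checker that reads its update location from an overridable setting, so deployments can point it at an internal test server, and that consults an explicit incompatibility list before proceeding:

```python
# Invented names throughout: overridable update location plus an
# explicit list of plug-ins known not to cope with updates.
DEFAULT_UPDATE_URL = "https://aus.example.org/update.rdf"

def check_for_updates(prefs, installed_plugins):
    url = prefs.get("app.update.url.override", DEFAULT_UPDATE_URL)
    manifest = fetch_manifest(url)                  # hypothetical
    blockers = set(manifest.get("incompatible-plugins", []))
    conflicts = blockers & set(installed_plugins)
    if conflicts:
        return warn_before_update(conflicts)        # hypothetical
    return offer_update(manifest)                   # hypothetical
```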
Are silent downloads and mandatory upgrades a good thing?
I think that automatic download and installation of upgrades without user consent is problematic.
For example, what if the new version has some bug? What if it is not even a bug, but a functionality change that not all users like? What if this affects only 2% of users? Depending on software popularity, even a small percentage of users can translate into very very large numbers.
I think that to retain user satisfaction, which is critical for product popularity, the users should be allowed to decide for themselves when and how they want to download/install available upgrades.