From MozillaWiki


I am proposing that we adopt a similar eviction strategy to that used by Places, which is to have a soft cap for total cookies, but evict only after cookies reach a minimum age since last use. This would mean, as it does with Places, that heavy users would store significantly more cookies, but persisting based on time rather than an arbitrary cap will better match user expectations and provide a better user experience.


Since the change in the eviction process (evicting by last-used time instead of creation time), people have noticed that less-used cookies are frequently gone by the time they return to the site in question. Banking and other sites that aren't visited daily, but persist some info in cookies, are the prime examples cited.

Especially with the rise of Google Analytics and other user-tracking tools, along with the various ad networks, it is easy to accumulate 1000 cookies in a few days of heavy browsing, which pushes older cookies out of the cookie store. This happens entirely without the user's knowledge, which leads to frustration and the perception of data loss.

As the current limit of 1000 cookies is relatively arbitrary, there is no harm in letting heavy users exceed it, other than an increase in the size of the cookie hashtable.


In order to leave behaviour intact for existing consumers, I propose adding a new value to network.cookie.lifetimePolicy for this new behaviour. I will also add a new pref (network.cookie.minimumEvictionTime) to control the minimum number of days cookies should be retained. When evicting cookies, we will only evict those that have not been accessed within that number of days.
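A hypothetical user.js sketch of what this could look like; only the pref names come from this proposal, and both values below are illustrative assumptions:

```js
// Illustrative values only; the pref names are from this proposal.
user_pref("network.cookie.lifetimePolicy", 4);        // hypothetical new policy value
user_pref("network.cookie.minimumEvictionTime", 90);  // retain cookies at least 90 days
```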

Cookie Eviction

On startup, do a COUNT of cookies where lastAccessed >= minAge

  • If count is greater than the soft cap:
    • Read in only cookies where lastAccessed >= minAge and whose expiry is in the future
    • (async, on a delay) delete all cookies where lastAccessed < minAge or expiry < now
  • If count is less than the soft cap, it's a little trickier!
    • First, switch writes to async, and remove the batching hack which makes lastAccessed not unique.
    • Do a SELECT ordered by lastAccessed, descending, with the soft cap as the LIMIT. SQLite query time is not the bulk of the time we spend reading in cookies, and any added query overhead should be offset by reading in fewer cookies.
    • On reading the last cookie, record the lastAccessed time (oldestCookie).
    • (async on a delay) delete all cookies where lastAccessed < oldestCookie or expiry < now
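A minimal sketch of the startup flow above, in Python with an in-memory SQLite table standing in for the cookie store. The soft cap, the 90-day minimum, and the simplified schema (name, lastAccessed, expiry) are illustrative assumptions, not the real Gecko values:

```python
import sqlite3
import time

SOFT_CAP = 1000          # illustrative; the real soft cap would be a pref
MIN_EVICTION_DAYS = 90   # illustrative default for network.cookie.minimumEvictionTime

def startup_eviction(db, now=None):
    """Read in the working set of cookies; queue a cleanup DELETE for the rest."""
    now = int(time.time()) if now is None else now
    min_age = now - MIN_EVICTION_DAYS * 86400  # cutoff for "recently used"
    cur = db.cursor()

    (count,) = cur.execute(
        "SELECT COUNT(*) FROM cookies WHERE lastAccessed >= ?", (min_age,)
    ).fetchone()

    if count > SOFT_CAP:
        # Enough recently-used cookies: read in only those that are
        # recently used and not yet expired.
        rows = cur.execute(
            "SELECT name, lastAccessed, expiry FROM cookies"
            " WHERE lastAccessed >= ? AND expiry > ?", (min_age, now)
        ).fetchall()
        # In the real implementation this DELETE would run async, on a delay.
        cur.execute(
            "DELETE FROM cookies WHERE lastAccessed < ? OR expiry < ?",
            (min_age, now))
    else:
        # Fewer fresh cookies than the cap: read the soft-cap most
        # recently used cookies instead.
        rows = cur.execute(
            "SELECT name, lastAccessed, expiry FROM cookies"
            " ORDER BY lastAccessed DESC LIMIT ?", (SOFT_CAP,)
        ).fetchall()
        if rows:
            oldest = rows[-1][1]  # lastAccessed of the last cookie read
            # (async, on a delay) drop anything older than what we kept,
            # plus anything already expired.
            cur.execute(
                "DELETE FROM cookies WHERE lastAccessed < ? OR expiry < ?",
                (oldest, now))
    db.commit()
    return rows
```

In the second branch, note that the SELECT is not filtered by expiry, matching the steps above: expired cookies may be read in, but the delayed DELETE removes them from the store.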

We will _not_ do evictions during a session, in the general case, to minimize SQLite overhead (even with async writes, deletes are not free, and could cost 5-10 ms during pageload with the "add a cookie, delete a cookie" eviction method we currently use). However, we will have a "panic number" to prevent a very long-running session from storing a truly massive number of cookies. This number will be somewhere between 5x and 10x the soft cap. We don't expect to hit it often, but the best solution seems to be to repeat the init process, i.e. the startup filtering and delayed-cleanup behaviour.
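The in-session check reduces to a single threshold comparison; a sketch, where the soft cap and the multiplier are illustrative assumptions (the text only pins the multiplier between 5x and 10x):

```python
SOFT_CAP = 1000   # illustrative soft cap
PANIC_FACTOR = 8  # assumed; somewhere between 5x and 10x the soft cap

def should_panic_evict(cookie_count):
    """During a session we normally never evict; only a truly massive
    pile-up triggers a re-run of the startup filtering/cleanup."""
    return cookie_count > PANIC_FACTOR * SOFT_CAP
```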