The first update request happens at a random interval between 0 and 5 minutes after the browser starts. The second update request happens between 15 and 45 minutes later. After that, each update happens once every 30 minutes.
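The schedule above can be sketched as a small helper. This is illustrative only; the function name and structure are not from the actual implementation, which drives updates off a timer rather than computing delays on demand.

```python
import random

def next_update_delay_minutes(update_count):
    """Minutes until the next update request, given how many
    update requests have already been made (a sketch of the
    schedule described above, not the real implementation)."""
    if update_count == 0:
        # First update: a random point within 0-5 minutes of browser start.
        return random.uniform(0, 5)
    if update_count == 1:
        # Second update: 15-45 minutes after the first.
        return random.uniform(15, 45)
    # Every subsequent update: a fixed 30-minute interval.
    return 30
```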
If the client receives an error during an update, it tries again in a minute. If it receives three errors in a row, it skips updates until at least 60 minutes have passed before trying again. If it then receives another (4th) error, it skips updates for the next 180 minutes, and if it receives another (5th) error, it skips updates for the next 360 minutes. It will continue to check once every 360 minutes until the server responds with a success message. The current implementation doesn't change the 30-minute timer interval; it just skips updates until the back-off time has elapsed.
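The error back-off schedule maps a count of consecutive errors to a skip window. The sketch below is an assumption about how that mapping could be expressed; as noted above, the real client keeps its 30-minute timer running and merely declines to fire updates until the window has elapsed.

```python
def update_backoff_minutes(consecutive_errors):
    """Minutes to skip updates after a run of consecutive update
    errors (illustrative sketch of the schedule described above)."""
    if consecutive_errors < 3:
        return 1      # first and second errors: retry in a minute
    if consecutive_errors == 3:
        return 60     # third error: wait at least 60 minutes
    if consecutive_errors == 4:
        return 180    # fourth error: wait 180 minutes
    return 360        # fifth and later: check once every 360 minutes
```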
A lookup request happens on page load if the user has opted into remote checking. If a lookup request fails, we automatically fall back on a local table. If there are 3 lookup failures in a 10 minute period, we skip lookups during the next 10 minutes. Each successive lookup failure increases the wait by (2*last wait + 10 minutes). The maximum wait before trying again is 360 minutes. As mentioned above, if we're not doing lookups, we query the local lists instead.
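The lookup back-off grows geometrically up to the 360-minute cap. A minimal sketch, reading the formula above as new wait = 2 × last wait + 10 minutes (the text is ambiguous about whether that expression is the new wait or an increment; this sketch assumes the former):

```python
def next_lookup_wait_minutes(last_wait):
    """Next lookup back-off window in minutes, capped at 360.

    Assumes the wait sequence starts at 10 minutes, so it grows
    10 -> 30 -> 70 -> 150 -> 310 -> 360 and then stays at 360.
    """
    return min(2 * last_wait + 10, 360)
```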
We solve the encoding problem, but not the canonicalization problem. We repeatedly URL-unescape a URL until it has no more hex-encodings, then we escape it once. Yes, this can map several distinct URLs onto the same string, but these cases are rare, and happen primarily in query params. But taking this approach solves a multitude of other potential problems.
Additionally, we canonicalize the hostname as [[Phishing_Protection:_Server_Spec#Canonical_Hostname_Creation|mentioned in the server spec]]. Enchash lookups involve truncating the hostname at 5 dots. URL and domain table lookups do not do any truncation.
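The unescape-until-stable, then escape-once scheme can be sketched with the standard library's percent-encoding helpers. This is a simplified illustration of the idea, not the client's actual code, and it ignores the hostname canonicalization covered by the server spec.

```python
from urllib.parse import quote, unquote

def canonicalize_encoding(s):
    """Repeatedly percent-decode until the string stops changing,
    then percent-encode once (a sketch of the scheme above)."""
    prev = None
    while s != prev:
        prev = s
        s = unquote(s)
    return quote(s)
```

Note how this maps distinct inputs onto the same string: both `"a%2520b"` (double-encoded space) and `"a%20b"` canonicalize to `"a%20b"`, which is exactly the rare collision the text accepts as a trade-off.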
=== Relationship to Existing Products ===