In my experience, browser security turns out to be mostly a matter of programming-language design and implementation. The number of critical security bugs we've faced over the years in the crypto module or in HTTP code was very small compared to the total bug count.
Rather than attempting a bibliographic summary or history of the state of security research, I am starting with a list of people whose work I know best.
Michael Franz, who is at UC Irvine, spoke at an IBM virtual-machine conference two years ago (as of 3 August 2006), where I spoke on Firefox and Mozilla's VM needs. He was kind enough to stop by Mozilla in early March of this year and speak on his past and current work. See links to his publications.
Michael's focus on virtual machines and compilers points the way toward real browser as well as OS security, transcending the current mode among browser implementors of hacking and patching memory-unsafe C++ code. The most-trusted computing base must not be megalines of code -- it should be the compiler, VM, and security module, at tens or at most hundreds of KSLOCs.
Andrew Myers, my old pal from SGI days, is a professor at Cornell who has done wonderful work in this area, going back to his MIT thesis, JFlow. See links to his publications. His slides from this year's PLDI nicely summarize the problem space we face: Expressing and Enforcing Security with Programming Languages.
Vincent Simonet and the fine folks at INRIA behind OCaml have given the world FlowCaml, an extension of OCaml with an information-flow type system.
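FlowCaml's security labels have no direct analogue in today's JS, but the core idea -- using types to track where values may flow, and making declassification an explicit, auditable act -- can be sketched in TypeScript with "branded" types. All names below are illustrative; this is not FlowCaml's actual syntax, just the flavor of the discipline:

```typescript
// A hypothetical sketch of information-flow labels via branded types.
// Labels exist only at the type level; they erase at runtime.
type Secret<T> = T & { readonly __label: "secret" };
type Public<T> = T & { readonly __label: "public" };

// Attach a label where sensitive data enters the program.
function classify<T>(value: T): Secret<T> {
  return value as Secret<T>;
}

// A sink that accepts only public data: passing a Secret<string>
// here is a compile-time type error -- "no secrets flow out".
function sendOverNetwork(data: Public<string>): void {
  console.log("sending:", data);
}

// Declassification is the one escape hatch, and it is easy to grep for.
function declassify<T>(value: Secret<T>): Public<T> {
  return value as unknown as Public<T>;
}

const password = classify("hunch42");
// sendOverNetwork(password);           // rejected by the type checker
sendOverNetwork(declassify(password));  // allowed, and explicit
```

The point is not that branded types match a real flow analysis (FlowCaml also tracks implicit flows through control dependence, which this sketch does not), but that a type system can turn "where may this value go?" into a question the compiler answers.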
Since JS and the other browser-hosted programming languages are not statically typed, FlowCaml may not seem directly applicable; but with JS2 (ECMAScript Edition 4) we will have type annotations and the option of a static type checker. The JS2 type system won't support Hindley-Milner type inference, but we anticipate using both type annotations and static checking in Mozilla code, and we should aspire to realize both optimization and security wins from the new type system.
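To make the opt-in flavor concrete, here is a sketch of annotation-driven checking layered on otherwise dynamic code. The syntax is TypeScript's, which is close in spirit to what JS2 proposed though not identical, and the function is purely illustrative:

```typescript
// An annotated function: the types let a static checker reject
// clampVolume("11") at compile time instead of silently coercing
// the string at runtime.
function clampVolume(level: number): number {
  return Math.min(Math.max(level, 0), 100);
}

// Unannotated code (implicitly untyped) still runs unchanged:
// checking is opt-in, so existing scripts keep working while
// newly annotated code gains static guarantees.
const safe = clampVolume(150); // clamped to 100
```

The same annotations that let the checker catch type errors also give an implementation grounds for unboxed representations and specialized code paths, which is where the optimization win comes from.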
[Under construction, see links above]