Platform/GFX/2011-September-13-with-Apple-and-Google

From MozillaWiki

When: Tuesday September 13, starting at 2 pm
Where: San Jose Convention Center, in one of the BoF rooms, precise room TBD
URL for this BoF session: http://lanyrd.com/shgcb

Ideas from Matt Delaney:

  • 1) Discuss moz-element and extensions to this general idea of rendering DOM subtrees as images/backgrounds/new DOM elements
  • 2) All the awesomeness seen here: http://www.khronos.org/webgl/wiki/Presentations
  • 3) Off-screen tabs goodness
  • 4) WebGL next steps, in general
  • 5) A plan for rectifying differences between canvas interfaces and implementations among browsers/ports.
  • 6) SVG plans/stuff…?

Ideas from Benoit and Mozilla team:

  • Mac OpenGL bugs
    • The glGenerateMipmap bug
    • The shader compiler bug and ANGLE's SH_EMULATE_BUILTIN_FUNCTIONS. Is this working?
    • Any news on the vertex attrib 0 Mac bug? I.e. if vertex attrib 0 array is enabled but not used by the current program then we crash. We have work-arounds in place but they do slow down WebGL on Mac. See https://bugzilla.mozilla.org/show_bug.cgi?id=631420 (can CC you on it) or WebGL list discussion from February.
    • The AMD/Mac vertex attribute alignment bug
  • WebGL stuff:
    • ARB_robustness stuff to discuss?
    • AA stuff to discuss?
  • Blacklisting stuff:
    • We could exchange some experience. Mozilla's ahead on X11, Google's ahead on Mac, and we could discuss dual GPU detection as Ali's been working on that.
-> (Ken) Avoid GLXPixmaps, prefer PBuffers.
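
The vertex attrib 0 workaround mentioned above can be sketched as a guard run before each draw call. This is a hypothetical sketch, not Mozilla's actual code: the function names are invented, and a real implementation may bind a small dummy buffer instead of disabling the array.

```javascript
// Sketch of the kind of guard a WebGL implementation can apply before each
// draw call to avoid the Mac crash described in bug 631420: the vertex
// attrib 0 array is enabled but not used by the current program.
// All names here are hypothetical.
function attribZeroNeedsGuard(attribZeroArrayEnabled, programUsesAttribZero) {
  // The crash condition: enabled but unused.
  return attribZeroArrayEnabled && !programUsesAttribZero;
}

function guardAttribZero(gl, attribZeroArrayEnabled, programUsesAttribZero) {
  if (attribZeroNeedsGuard(attribZeroArrayEnabled, programUsesAttribZero)) {
    // Cheapest safe fix: disable the array for this draw. (A real
    // implementation may instead bind a dummy buffer to preserve semantics,
    // which is part of why the workaround slows WebGL down on Mac.)
    gl.disableVertexAttribArray(0);
    return true; // workaround applied
  }
  return false;
}
```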

Ideas from Google:

  • 1) Come to some consensus about how much information to expose in the RENDERER and VENDOR strings, and/or come up with an alternative, structured representation. Issues: privacy, avoiding another user-agent string debacle, allowing high-end apps to understand the characteristics of certain graphics cards.

-> (Benoit): Sure, let's discuss this, but since Opera won't be represented we should wait until the Khronos F2F next week to reach any "final" decision. That's OK, though, since we have quite a bit of discussion to do before we arrive at a good solution :) The setting for the "negotiation" is as follows:
  • Mozilla: what about JIT benchmarking? Hasn't been fully explored yet.
  • Google: thinks JIT benchmarking is not a good solution for everybody, and would like other solutions to be offered to web apps. Suggests the RENDERER string.
  • Mozilla: wants to minimize the ratio of UA-ish/private information exposed to usefulness gained. The RENDERER string gives a poor ratio.
  • Mozilla: WebGL is already exposing some structured information, via getParameter(), so if it is needed to expose some more there, we can discuss that. But what? Some ideas:
    • "vendor ID" ?
    • "shading language version supported" ?
    • "number of GPU cores" ? (Round to power of 2 / of 4) (Might be hard to make sense of across vendors. NVIDIA has big cores, etc)
    • "GFlops" ??? (yes, this sounds fishy) (again, round to something coarse)
  • ... or can you propose another solution?
  • (kbr) Here is some feedback from a development team that wants the RENDERER string or some equivalent exposed:
    • Performance issues:
      • Some NVIDIA cards claim to support VTF (vertex texture fetch; the 6 and 7 series?), but it runs in software. ANGLE has worked around the ones they're aware of by disabling it, but the OpenGL path claims support for VTF, and their app becomes crippled because the "fast" VTF path throws them into software rendering. http://code.google.com/p/angleproject/wiki/VTF (Possible solution: disable VTF in the WebGL implementation on these cards.)
      • There are several Intel cards that support WebGL, but their vertex shader support is software, not hardware, accelerated. These invariably tend to be slow, and we'd like to fall back to alternative algorithms on these cards. (There is no way to expose this information to the app in the current WebGL API.)
      • Driver bugs: The new Macs with ATI cards have a driver bug (vertex attribute alignment) that is incompatible with the WebGL spec, but are not blacklisted. In this case we were able to find a workaround that we could apply to all cards that didn't negatively impact performance (although it does use slightly more memory), but I expect to find more cases where we aren't able to change our architecture for all cards to work around driver bugs. (This particular bug would be both very difficult and inefficient to work around in a WebGL implementation, and blacklisting all new Macs is not a viable option)
    • Suggestion from nduca@google: what about reporting "capability levels" as a getInteger query:

  1. PS2 content
  2. iPhone + shader content
  3. X360/PS3 content
  4. Bleeding edge: does everything in the GLES2 spec perfectly, without fail

And perhaps a "DOMString getSpecViolations()" added to the WebGL API???
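
The "round to a coarse value" and "capability level" ideas above could look roughly like this sketch. All thresholds and level boundaries are invented for illustration; nothing here reflects an agreed spec.

```javascript
// Rounding a raw GPU statistic (core count, GFlops) to a coarse power of
// two before exposing it, to limit fingerprinting value.
function roundToPowerOfTwo(n) {
  if (n <= 1) return 1;
  return Math.pow(2, Math.round(Math.log2(n)));
}

// nduca's "capability level" idea as a getInteger-style query. The inputs
// and cutoffs are invented examples, not real proposals.
function capabilityLevel(gflops, hasHardwareVertexShaders) {
  if (!hasHardwareVertexShaders) return 1; // "PS2 content"
  if (gflops < 50) return 2;               // "iPhone + shader content"
  if (gflops < 500) return 3;              // "X360/PS3 content"
  return 4;                                // "bleeding edge"
}
```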


Notes from meeting:


  • Apple stuff:
    • Apple will look into dual GPU switching issue where discrete GPU got locked in even after a context had been destroyed and a new one created (i.e. process must be destroyed). Google says that they have to look more into this and aren't sure if this is actually happening, but according to the documentation and their tests so far, this is what's happening. Apple requested more information from Mozilla and Google.
    • Apple will look into the other dual GPU switching issue, where there is a hole in the screen. Google thinks that they have come up with a repro, but they're not sure, and their repro may not be the same issue as the randomly occurring problem.
    • Mozilla (bjacob) will be the point of communication for OpenGL issues because of the bug tracker (Bugzilla). We will need to put together a better package to sell particular bugs to Apple.
    • Apple says there will be 10.7.2, 10.7.3, etc. (i.e. point releases which can include bug fixes).
    • It was generally agreed that OpenGL is a bottleneck for bug fixes, and we will continue to pressure Apple, mostly focusing on smaller bugs that they are more likely to fix.

Antialiasing:

    • Antialiasing in Chrome is 2x2 MSAA
    • WebGL doesn't support framebuffer antialiasing. There's an extension to OpenGL that supports this, but it isn't available as a WebGL extension.
    • GL_(EXT/ANGLE)_framebuffer_multisample ?
    • We need to file the ATI MSAA bug
  • Blacklisting:
    • Issues with dual GPU: finding out what GPUs they are, and finding out what they can do. Talked with Nvidia and they had no useful information on this.
    • Optimus appears to Windows as a single Intel GPU. The only way to find out whether or not it's in use is to scan for the Optimus DLL; this process finds Optimus (shared user mode), the Nvidia user-mode driver, and the Intel user-mode driver. What is needed is a way to determine which video cards are under the surface, so that blacklisting can be done if there are bugs specific to the running card. Sometimes a driver setup will make Fx very slow, too, which requires disabling features on it.
    • The current method is to look at the registry, but this is unreliable, as changes don't register immediately, drivers don't clean up after themselves when they are removed, etc.
    • Google has no notable progress on this issue.
    • Google doesn't want to use PCI IDs because they don't exist on systems on a chip (i.e. mobile). The suggestion is to use GL renderer/vendor.
    • Google uses 1x1 pbuffers instead of pixmap, which they say is portable. They shut off the compositing path completely if this fails and the driver won't support GL ES.
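
A renderer/vendor-string-keyed blacklist, as suggested above for systems on a chip, might be structured like this sketch. The entries and matching scheme are invented examples, not either browser's actual blacklist format.

```javascript
// Sketch of blacklisting keyed on GL RENDERER/VENDOR strings instead of
// PCI IDs, which don't exist on systems on a chip. Entries are invented
// examples for illustration only.
const BLACKLIST = [
  { vendorMatch: /Intel/i, rendererMatch: /GMA 950/i, feature: "webgl" },
];

function isBlacklisted(vendor, renderer, feature) {
  return BLACKLIST.some(
    (entry) =>
      entry.feature === feature &&
      entry.vendorMatch.test(vendor) &&
      entry.rendererMatch.test(renderer)
  );
}
```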

Accelerated canvas:

  • Discussion about how accelerating canvas isn't always ideal. When do we accelerate it, and when do we not? Accelerating a small area makes it much slower.
    • Generally agreed that the scripts shouldn't be given control over whether or not it should be accelerated.
    • Discussed potentially giving cues to the browser that there's "never going to be a get image data". Potentially can ask a set of questions that can help determine whether or not to accelerate it.
    • Can also do heuristics (like Google does) for things such as automatically not accelerating if the canvas is small.
    • Discussed potentially doing asynchronous get image data calls. This would be done as part of the implementation, separate from the spec. An example implementation is a JS callback. The purpose of this is to eliminate the pipeline stall.
    • Acceleration is slow because of pipeline stall. Doing it asynchronously would resolve this.
    • Another suggestion is to keep sampling the current usage of get image data to see if it should be switched to accelerated/unaccelerated. Ex. 1 call/frame is small but any more is relatively large and should force a switch.
    • Generally agreed that we shouldn't expose a way to allow scripts to request a 2D canvas without hardware acceleration.
    • Switching between software and hardware in the middle of a draw can potentially cause it to look different. The same is true for comparing a software draw to a hardware draw of the same image.
    • People need to think more about this. No real consensus was made; just a set of suggestions.
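
The size and getImageData-frequency heuristics discussed above can be sketched as a simple decision function. The thresholds are invented for illustration; neither browser is known to use exactly these numbers.

```javascript
// Sketch of the acceleration heuristics discussed: don't accelerate small
// canvases, and fall back to software when getImageData is called more than
// once per frame (each readback forces a pipeline stall).
// MIN_ACCELERATED_PIXELS is an invented threshold, not a real browser value.
const MIN_ACCELERATED_PIXELS = 256 * 256;

function shouldAccelerate(width, height, getImageDataCallsPerFrame) {
  // Small canvases are faster in software.
  if (width * height < MIN_ACCELERATED_PIXELS) return false;
  // One readback per frame is tolerable; more than that, and the stalls
  // dominate, so switch to the software path.
  if (getImageDataCallsPerFrame > 1) return false;
  return true;
}
```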
  • Off-screen tabs:
    • How can we implement some unified way of doing this?
    • It's an experimental feature in Chrome where the entire page is rendered off-screen, mouse and keyboard events are sent to it, and the pixels are copied over to the main screen.
    • There is a huge security risk in being able to render an arbitrary DOM. The issue is that we don't want to give a page access to pixels it shouldn't be able to read. If this is allowed, you can see the theme the person is running, file paths (if displayed), a Gmail inbox, etc.
    • Some bug or something to do with alpha channels. One way to get rid of this is to disable the optimization done by the GPU. Tainting doesn't fully protect against the vulnerability here. See the bug for more info (we probably shouldn't write too much about it here anyway). Supposedly, this is only a vulnerability if SVG is implemented (Safari doesn't have this). Mouse clicks can also be spoofed with this.
    • Mixed opinions on whether to continue all features or stop and fix this bug.
    • The way Gecko deals with this is by restricting SVG images already, which eliminates part of the problem.
    • Discussed whether or not to request permission/privileges; this was generally rejected as not solving the problem, since the user will probably accept any prompt they see.
    • We came back to this and discussed it again. We talked about perhaps having some way of doing it in a standard way between browsers.
    • Discussed getting rid of cross-origin (i.e. cross-site) drawing, but this was also rejected since the Chrome extension demo'd at the meeting needs it, as do many other potential use cases.
    • This is a problem in canvas 2D, not WebGL. WebGL does not allow cross-origin images.
    • There are many ways to resolve this, but few of them are trivial and it was mentioned that any that are trivial would likely not deal with problems that arise in the future.
    • canvas 2d cross-origin image attack: http://philip.html5.org/demos/canvas/img-timing-1.html, http://philip.html5.org/demos/canvas/img-timing-3.html
    • Talked about using a declarative 3D language. This would avoid the security problems discussed.
    • This is still a very early problem because the experimental Chrome features need to be refactored/redone, and will take months to be ready.

SVG images:

    • Mozilla locks down external resources completely because they create potential exploits where loading them causes requests to external servers.
  • Renderer string:
    • There's no query right now in WebGL to prevent very slow rendering paths.
    • This is exposed by D3D, but not WebGL.
    • Some ATI cards use software for vertex processing/shading.
    • How do we deal with this? The questions are: should we run this program at all, or should we just disable certain features that are slowing everything down?
    • How practical or impractical is benchmarking?
      • Sounds relatively impractical; the suggestions given were basically that WebGL developers would have to benchmark their programs and do analysis themselves.
      • Another suggestion was to do a very small benchmark that is barely noticeable, and then base decisions from that point onward on that.
      • A problem discussed with this approach is that doing benchmarks via WebGL is difficult. WebGL exposes some timestamp function, but Google mentioned that there are large errors in it (3-4 ms, noisy). It also depends heavily on things such as the OS pre-empting, etc.
      • D3D exposes a function called D3DTimestamp(), but it only works on Windows, and only on Nvidia and ATI cards.
    • A short discussion on using arbitrary number assignments to GPUs which boost/lower quality settings. Generally rejected since it's too arbitrary.
    • It was discussed that the WebGL implementation could expose a performance level. Perhaps this could be uniform across all implementations.
    • Google is talking about removing RENDERER string.
    • A suggestion is to expose the PCI device and vendor ID instead of RENDERER.
    • A concern was raised that this exposes a lot of information. For example, having a good video card means that you're probably not a little girl, so ads can be targeted towards you, etc.
    • Suggestion was made to expose framerate, which would allow applications to scale down their settings to get higher FPS.
    • Another suggestion to mark features as "probably being very slow", like having a slow vertex shader, etc.
    • It was mentioned that this is not an objective measure, and an implementation should be objective. However, it was generally agreed that objectivity is not a top priority.
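
Given the 3-4 ms timer noise mentioned above, a micro-benchmark would need to amortize the cost over many iterations and use a robust statistic such as the median of several runs. A minimal sketch (the API here is invented; a real harness would time actual GL work and account for frame boundaries):

```javascript
// Median of a list of samples: robust to the noisy outliers that a
// 3-4 ms-error timer produces.
function median(xs) {
  const s = [...xs].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

// Time `fn` over many iterations per run, several runs, and report the
// median per-call time in ms. `now` is injectable for testing; defaults
// to Date.now, whose coarse resolution is exactly why we amortize.
function benchmark(fn, { iterations = 100, runs = 9, now = Date.now } = {}) {
  const samples = [];
  for (let r = 0; r < runs; r++) {
    const start = now();
    for (let i = 0; i < iterations; i++) fn();
    samples.push((now() - start) / iterations);
  }
  return median(samples);
}
```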