In a previous post I wrote about how we can use accelerated compositing in the Clutter port of WebKit in order to take advantage of the graphics hardware when compositing the elements of the final web page, which can be very dynamic (read: several updates per second) due to JavaScript usage and CSS3 animations.
Another advantage of having the composition happen on the GPU is that whatever elements we render there (hardware-accelerated video decoding or WebGL) don't need to be brought back to the CPU for composition, which would be very expensive.
For the last few weeks I have been working on a proof of concept of how we could implement the WebGL specification in our port, using Clutter. We cannot reuse the WebGL implementation already in use by the GTK+, Qt and OS X ports, because direct GL access isn't possible with Clutter, so it has taken quite a bit more work than that. Here you can see the code in action: a WebGL canvas being rotated with CSS3:
What I ended up doing was adding a mechanism to Cogl (Clutter's graphics backend, which abstracts OpenGL) so that applications could use raw GL to draw into an offscreen framebuffer without disturbing the rest of the application's graphics state, which is managed by Cogl. The branch that implements this is here.
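The idea can be sketched roughly as follows. Note that the function names below are hypothetical stand-ins for the branch's actual entry points; only the GL calls themselves are standard (here assuming the GL ES 2.0 headers). The key steps are: save whatever GL state the raw drawing will touch, redirect rendering into an application-owned framebuffer object, and restore everything before handing the context back to Cogl.

```c
/* Hedged sketch, not the branch's real API: isolating raw GL drawing
 * into an offscreen framebuffer from the state Cogl manages. */
#include <GLES2/gl2.h>

static GLuint webgl_fbo;          /* offscreen target the WebGL canvas draws into */
static GLint  saved_fbo;          /* framebuffer binding Cogl had in place */
static GLint  saved_viewport[4];  /* viewport Cogl had in place */

/* Redirect rendering into the WebGL framebuffer, saving the state we touch. */
static void
webgl_draw_begin (int width, int height)
{
  glGetIntegerv (GL_FRAMEBUFFER_BINDING, &saved_fbo);
  glGetIntegerv (GL_VIEWPORT, saved_viewport);

  glBindFramebuffer (GL_FRAMEBUFFER, webgl_fbo);
  glViewport (0, 0, width, height);

  /* ...raw GL issued by the WebGL content goes here... */
}

/* Hand the context back to Cogl exactly as we found it. */
static void
webgl_draw_end (void)
{
  glBindFramebuffer (GL_FRAMEBUFFER, (GLuint) saved_fbo);
  glViewport (saved_viewport[0], saved_viewport[1],
              saved_viewport[2], saved_viewport[3]);
}
```

The texture backing the offscreen framebuffer can then be composited by Cogl like any other layer, so the WebGL output never has to leave the GPU.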
Note that this is a proof of concept; in order to upstream this work, we would need to find a way to share more code with the existing WebGL implementation in WebKit and also fix all the hackiness on the Cogl branch.
Further work includes reducing the frequency with which GL contexts are switched, which should help enormously on GPUs such as Imagination Technologies' PowerVR and ARM's Mali.
To finish, I would like to thank my employer Collabora for allowing me to work on this and share the results, and the Clutter guys who patiently answered my questions and showed me the right way, especially Robert Bragg and Neil Roberts.
If your company is interested in this, we'd be pleased to work with you; just drop an email to sales@collabora.com.