Wayland utilizing Android GPU drivers on glibc based systems, Part 1
http://mer-project.blogspot.jp/2013/04/wayland-utilizing-android-gpu-drivers.html
THURSDAY, APRIL 11, 2013
In this blog series, I will be presenting a solution that I've developed that enables the use of Wayland on top of Android hardware adaptations, specifically the GPU drivers, but without actually requiring the OS to be Bionic based. This is part 1.
This work was and is done as part of my job as Chief Research Engineer at Jolla, which develops Sailfish OS, a mobile-optimized operating system that has the flexibility, ubiquity and stability of the Linux core with a cutting-edge user experience built with the renowned Qt platform.
The views and opinions expressed in this blog series are my own and not that of my employer.
At the end of the series, the aim is to have finished cleaning up the proof-of-concept code and published it under an "LGPLv2.1 only" license, for the benefit of many different communities and projects (Sailfish, OpenWebOS, Qt Project, KDE, GNOME, Hawaii, Nemo Mobile, Mer Core based projects, EFL, etc.).
The blog series seeks to explain and document the solution and many aspects about non-Android systems, Wayland and Android GPU drivers that are not widely known.
This work is done in the hope that it will attract more contribution and collaboration, bringing this solution - and Wayland in general - into wider use across the open source ecosystem, and letting projects draw on a large selection of reference device designs for their OSes.
Why am I not releasing code today? Because code alone doesn't foster collaboration. There's more to involving contributors in development - such as explaining why things are the way they are. It's also my own way of making sure I document the code and clean it up, to make it easier for people to get involved.
Now, let's get to it.
The grim situation in mobile hardware adaptation availability
One of the first things somebody with a traditional Linux background realizes when trying to make a mobile device today and meeting with an ODM is that 99% of chipset vendors - and hence ODMs - will only offer Android hardware adaptations to go along with their device designs.
When you ask about X11 support within the GPU drivers, or even Wayland, they'll often look blankly at you and wonder why anybody would want to do anything other than Android-based systems. And when you go into details, they'll either tell you it can't be done - or charge you a massive cost to have it done.
This means that non-Android OSes and devices are unable to take advantage of the huge number of (often low-cost) device designs that are out there, which increases time to market and R&D cost massively.
Libhybris
In August 2012, I published my initial prototype of 'libhybris'. What is libhybris? Libhybris is a solution that allows non-Android systems - typically glibc-based, as most non-Android systems are - to utilize shared objects (libraries) built for Android. In practice this means you can leverage things like OpenGL ES 2.0 and other hardware interfacing provided within Android hardware adaptations.
I initially developed libhybris in my idle hours at home, and the big question you might have is: why did I open source it instead of keeping it to myself and earning from it, as it obviously was the holy grail for non-Android systems?
The simple answer is this: working together on open source code would accelerate the development and testing of libhybris, for everybody's mutual benefit.
I didn't feel entirely good about libhybris at first - it's not a perfect solution to the problem: many around me in the open source community were and are fighting to have chipset vendors provide Wayland or X11 adaptations for mobile chipsets, or even GPU drivers for non-Android systems in the first place.
But I felt that this was the road that had to be taken before non-Android systems turned completely irrelevant in the bigger picture. Once we again have volume of non-Android devices, we can have our own dedicated HW adaptations again.
Open sourcing worked quite well - a small group of people got together, tested it, improved it, and got it running on multiple chipsets - thanks to OpenWebOS, Florian Haenel (heeen), Thomas Perl (thp), Simon Busch (morphis) and others. It turned the project from a late-night hacking project into a viable solution for building device OSes on top of - or even for running Android NDK applications.
Demo video: QML compositor, libhybris, Wayland on top of Qualcomm GPU Android drivers
(Ignore the tearing, old demo video)
Earlier this year, however, I discovered that a well-known company had taken the code, disappeared underground with it for several months, improved upon it, used the capability in their advertisements and demos, and in the end posted the code in their own source control system, detached from the upstream project's history - to the extent that some posters around the web thought libhybris was made by that company itself.
That kind of behavior defeated the reason I open sourced libhybris in the first place, and I was shocked to the point that I contemplated no longer open sourcing my hobby projects by default. It's not cool for companies to do things like this, no matter the commercial reasons. It ruins it for all of us who want to strengthen the open source ecosystem. We could really have used your improvements and patches earlier on, instead of struggling with some of these issues.
But I will say that their behavior has improved - they are now participating in the project, discussing and upstreaming useful patches. And I forgive them, because they've changed their ways and are participating sanely now.
Now for a few words on my story with Wayland.
Wayland
My journey with Wayland started in late 2011. I believed around that time that Wayland was a bit dry and boring - it was just a protocol. I was not fully appreciating the power and simplicity it provided for embedded UIs, or that it was a building block for much more exciting things - as libhybris turned out to be.
Working in embedded Linux and exploring Qt Lighthouse, I had learnt of Qt Scene Graph and a curious new thing called QtCompositor. QtCompositor was what sold me on Wayland - it enabled amazing capabilities on embedded devices and made it easy to develop effects, window management and homescreens. Things that previously would take several man-years to develop for embedded devices were made easy to do. It also allowed stack builders to have the same kind of graphical stack on SDKs and virtual machines used for development as on target devices.
If you don't know QML, it's a declarative UI language for designing beautiful and functional UIs. What QtCompositor did was let you receive events about windows appearing, changing size and so on - and also make each window's graphical buffer just another item in your UI scene graph, like an image or a label would be. It could even turn a client's graphical buffer into a widget inside your traditional UI.
Screenshot from http://blog.qt.digia.com/blog/2011/03/18/multi-process-lighthouse/
This could naturally be expanded to much more curious things, such as 3D Wayland compositors. If you'd like to hear more about QtCompositor, you can also watch the following talk from Qt Developer Days by Andy Nichols. Capable QtCompositor technology is something that is here today. Not something that has to be developed from scratch or roadmapped.
At the time I was the maintainer of the Nokia N900 hardware adaptation for MeeGo, and I wanted to see if I could get Wayland working on it - it had a PowerVR SGX GPU. I reached out to #wayland on irc.freenode.net and was met with open arms, guidance and a lot of help from krh (Wayland's founder), jlind, capisce (QtWayland/QtCompositor), Andrew Baldwin, Mika Bostrom and many other talented people, and I was able to get started very quickly with Wayland.
To get things working with Wayland, what I needed to do was figure out:
- How to render an image with OpenGL ES 2.0 into GPU buffers that I had under my control
- Share that GPU buffer with another process (the compositor)
- Include that GPU buffer as part of an OpenGL scenegraph, as a texture - and display it on the screen (in the compositor)
- And for performance, flip a full screen GPU buffer straight to the screen, bypassing the need to do OpenGL rendering
To be able to render into a specific GPU buffer under your own control, you usually need to get inside the EGL/OpenGL ES implementation. On some chipsets, it's possible to use specific EGL operations that allow shared (across two processes) images to be rendered into - such as on the Raspberry Pi.
In the EGL implementation, you should be able to follow the path of the buffer, its attributes (size, stride, bpp/format), and when the client has requested eglSwapBuffers.
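To make that concrete, here is a minimal sketch of the client-side EGL sequence that such a windowing-system integration sits underneath - not actual plugin code, just an illustration of where the integration hands out the buffer and where eglSwapBuffers gives it control back:

```c
#include <EGL/egl.h>
#include <GLES2/gl2.h>

/* Illustrative sketch only: the windowing-system integration beneath EGL
 * provides the buffer behind 'surface' and is invoked on every
 * eglSwapBuffers(), which is where it can hand the finished buffer on. */
int render_one_frame(EGLNativeWindowType native_window)
{
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(dpy, NULL, NULL);

    const EGLint config_attribs[] = {
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
        EGL_NONE
    };
    EGLConfig config;
    EGLint num_configs;
    eglChooseConfig(dpy, config_attribs, &config, 1, &num_configs);

    /* The windowing-system plugin is asked here for a buffer to render into. */
    EGLSurface surface = eglCreateWindowSurface(dpy, config, native_window, NULL);

    const EGLint ctx_attribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
    EGLContext ctx = eglCreateContext(dpy, config, EGL_NO_CONTEXT, ctx_attribs);
    eglMakeCurrent(dpy, surface, surface, ctx);

    glClearColor(0.0f, 0.0f, 1.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    /* The plugin now knows the frame is finished: it can pass the buffer
     * to the compositor and dequeue the next one. */
    eglSwapBuffers(dpy, surface);
    return 0;
}
```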
On the PowerVR SGX, there was an API called WSEGL for writing windowing-system plugins (X11, framebuffer, QWS, Wayland, ...) that allowed me to do just that.
Sharing that buffer is sometimes a bit more difficult - you effectively need to make the same GPU buffer appear in two processes at once. On SGX, this was simple - you could request a simple integer handle to the buffer and share that value using whatever protocol you wanted. In the target process you then just map in the GPU buffer through a mapping method.
Wanting to stand on the shoulders of giants, I looked at how Mesa had implemented its Wayland protocol for DRM - it too had simple handles and shared these buffers through a custom Wayland protocol.
Even though it was a custom protocol for buffer handling (creation, modification, etc.), the same Wayland operations for handling buffers still applied to it. I didn't need to do anything extra in the compositor or client for these buffers in particular - I could piggyback on existing infrastructure available in the Wayland protocol.
Wayland made it easy to put existing methods for the techniques and needs listed above to use, and made it possible to quickly and easily implement Wayland support for the chipset.
Now, to something a little different, but quite related:
Android and its ANativeWindow
When you use eglCreateWindowSurface - that is, when creating an EGL window surface for later GL rendering with Android drivers - you have to provide a reference to the native window you want to render within. On Android, the native window type is ANativeWindow.
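As a small illustration (a sketch under the assumption that you supply your own window implementation; 'my_wayland_window_create' is a hypothetical helper, not part of any real library), Android EGL stacks treat EGLNativeWindowType as an ANativeWindow*, so the driver will accept a window object of your own making:

```c
#include <EGL/egl.h>
#include <system/window.h>   /* Android's ANativeWindow definition */

/* Hypothetical helper: builds an ANativeWindow whose buffer hooks talk to a
 * Wayland compositor instead of SurfaceFlinger. */
struct ANativeWindow *my_wayland_window_create(void);

EGLSurface create_wayland_backed_surface(EGLDisplay dpy, EGLConfig config)
{
    /* On Android EGL stacks, EGLNativeWindowType is an ANativeWindow*,
     * so the driver accepts our own implementation here. */
    struct ANativeWindow *win = my_wayland_window_create();
    return eglCreateWindowSurface(dpy, config, (EGLNativeWindowType)win, NULL);
}
```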
As you may know, Android's graphics stack is roughly this: an application uses libEGL, which sends GPU buffers to SurfaceFlinger, which either flings a buffer straight to the screen or composites it with OpenGL - again through libEGL.
Why not just include all the functionality and code in the EGL stack which communicates with SurfaceFlinger? The answer is that you need to sometimes target multiple types of outputs - be it video/camera streaming to another process, framebuffer rendering, output to a HW composer or communication with SurfaceFlinger.
One of the good things about the Android graphics architecture is that, through the use of ANativeWindow, the code that does this work is kept outside the EGL drivers - that is, open source and available for customization for each purpose. That means EGL/OpenGL drivers are less tied to the Android version itself (the ANativeWindow API version sometimes changes) and can easily be reused in binary form across upgrades.
ANativeWindow provides handy hooks for a windowing system to manage GPU buffers (queueBuffer - "here's a buffer, I'm done rendering"; dequeueBuffer - "I need a buffer to render into"; cancelBuffer - "oops, I didn't need it after all"; and so on) - and it gives you the methods needed to accomplish the same things I did on the PowerVR SGX.
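As a rough sketch of what such a window can look like (simplified: the real struct lives in Android's system/window.h and its exact fields and hook signatures vary between Android versions), a Wayland-backed implementation essentially fills in hooks along these lines:

```c
#include <system/window.h>   /* struct ANativeWindow, ANativeWindowBuffer */

/* Simplified sketch - not the exact system/window.h signatures, which differ
 * between Android versions. A Wayland-backed window fills in hooks roughly
 * like these. */
struct wayland_native_window {
    struct ANativeWindow base;   /* the vtable Android's EGL driver calls into */
    /* ... buffer pool, Wayland connection state, etc. ... */
};

/* EGL asks for a buffer to render the next frame into. */
static int hook_dequeueBuffer(struct ANativeWindow *win,
                              struct ANativeWindowBuffer **buffer)
{
    /* pick (or allocate via gralloc) a free buffer and return it */
    return 0;
}

/* EGL is done rendering: hand the buffer over to the compositor. */
static int hook_queueBuffer(struct ANativeWindow *win,
                            struct ANativeWindowBuffer *buffer)
{
    /* attach the buffer to a wl_surface, commit, recycle it once released */
    return 0;
}

/* EGL changed its mind: put the buffer back in the free pool. */
static int hook_cancelBuffer(struct ANativeWindow *win,
                             struct ANativeWindowBuffer *buffer)
{
    return 0;
}

static void install_hooks(struct wayland_native_window *w)
{
    w->base.dequeueBuffer = hook_dequeueBuffer;
    w->base.queueBuffer   = hook_queueBuffer;
    w->base.cancelBuffer  = hook_cancelBuffer;
}
```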
This is the entry point used to implement Wayland on top of Android GPU drivers on glibc-based systems. Some fantastic work in this area has already been done by Pekka Paalanen (pq) as part of his work for Collabora Ltd. (Telepathy, GStreamer, WebKit, X11 experts), which proved that this is possible. Parts of the solution I will publish are based on their work - it was groundbreaking in this field and made all this possible.
A note on gralloc and native handles
Graphical buffer allocation in Android is handled by a libhardware module named 'gralloc'. It is pretty straightforward - allocate a buffer and get a buffer handle; deallocate; register (if you got the buffer from a separate process and want to map it in) - but most documentation doesn't talk about buffer_handle_t and what it actually is.
If you do a little bit of detective work, you'll find out that buffer_handle_t is actually defined as a native_handle_t*. And what are native handles?
The structure is practically this: a count of file descriptors and a count of integers, followed by the actual file descriptors and integers. So how do you share a buffer across two processes?
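For reference, the definition in Android's cutils/native_handle.h looks roughly like this:

```c
/* Roughly as defined in Android's cutils/native_handle.h */
typedef struct native_handle {
    int version;   /* sizeof(native_handle_t) */
    int numFds;    /* number of file descriptors at &data[0] */
    int numInts;   /* number of ints at &data[numFds] */
    int data[0];   /* numFds file descriptors followed by numInts ints */
} native_handle_t;

/* ...and elsewhere in the Android headers, roughly: */
typedef const native_handle_t *buffer_handle_t;
```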
You have to employ something as obscure as "file descriptor passing". This page describes it as "socket magic", which it truly is: it takes a file descriptor from one process and makes it available in another.
Android GPU buffers typically consist of buffer metadata (handle, size, bpp, usage hints, GPU buffer handle) plus file descriptors that map GPU memory - or otherwise shared memory - into the process. To make a buffer appear in two processes, you pass the handle information along with the related file descriptors.
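As an illustration of the "socket magic" (a generic sketch of SCM_RIGHTS passing over a Unix domain socket, not the exact code used in this solution), sending a single file descriptor to another process looks roughly like this:

```c
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Generic sketch of SCM_RIGHTS fd passing over an AF_UNIX socket. The
 * receiver does the mirror image with recvmsg() and ends up with a new fd
 * that refers to the same underlying buffer. */
int send_fd(int unix_socket, int fd_to_pass)
{
    char dummy = 'x';                         /* must send at least one byte */
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };

    union {
        struct cmsghdr align;
        char buf[CMSG_SPACE(sizeof(int))];
    } control;
    memset(&control, 0, sizeof(control));

    struct msghdr msg = {
        .msg_iov        = &iov,
        .msg_iovlen     = 1,
        .msg_control    = control.buf,
        .msg_controllen = sizeof(control.buf),
    };

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type  = SCM_RIGHTS;
    cmsg->cmsg_len   = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd_to_pass, sizeof(int));

    return sendmsg(unix_socket, &msg, 0) < 0 ? -1 : 0;
}
```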
The good news, however, is that Wayland already supports file descriptor passing, so you don't have to write that obscure handling code yourself for your custom Wayland compositor.
Conclusion
This concludes the first blog post in this series, giving a bit of background on how Wayland, libhybris and GPU drivers for Android can work together. The next blog post will talk more about the actual server-side implementation of this work. The last blog post will talk about the direction of future work - and what you can do with it today, and how.
If you'd like to use, discuss and participate in the development of this solution, #libhybris on irc.freenode.net is the best place to be. A neutral place for development across different OS efforts.
Comments
You are simply amazing!
Great idea! It's better to adapt to the situation than complain that hw vendors don't support X11 :)
It's a shame that Ubuntu behaves as you describe.
I would like to see Sailfish OS images for cheap HDMI dongles like the MK802 and others based on various Chinese ARM processors (Allwinner, Rockchip). To succeed with your system you need to install it on real hardware and show hw-accelerated multimedia. We live in a 21st century dominated by multimedia and YouTube-like portals :)
Fingers crossed for the success of this project!!!
Awesome stuff. I can't wait to see all this come together. Also... one more reason to buy a phone from Jolla.
Thanks for your amazing efforts!
What do you think about recent developments with open source Tegra drivers backed officially by Nvidia? Do you expect them to set a new trend or provide a breakthrough for glibc Linux on mobile?
What I don't get is: why haven't we seen some rough ports of Sailfish yet to decent ARM SoCs, like we already have for Ubuntu? Sure, it's mostly a gimmick and publicity stunt by Ubuntu at this early stage, but it helps greatly with wider acceptance/interest if this becomes readily doable. And you gain a much wider captive (dev) audience. There should be some Sailfish fw images by now for some of the most popular (top-end) Android handsets!
This is my personal blog; for Sailfish things you'll have to ask @JollaHQ on Twitter. This presents technology I've developed, and it doesn't have anything to do with the technology choices made, or to be made, by Sailfish, Jolla or anybody else.
Do I get this right that Jolla is internally based on Wayland?
I do research; whatever I do may one day come in handy for Jolla, and sometimes it may not :)
While I can see why it would sting a bit that Canonical took your project and ran with it behind closed doors, I understand their point of view:
1. Whenever they've tried to get involved with upstream projects, they've always been met with hostility and suspicion. Contributions get rejected and no reason given.
2. Committee rule is the cornerstone of open-source projects, and it doesn't allow very agile development, as everything must get discussed and/or flamed about. Hacking like crazy with a much smaller team (with the same vision) has worked pretty well for open-source businesses like Canonical and Novell - the latter developed XGL and Compiz this way too. Let's also not forget Apple doing this with KHTML to form WebKit, a much better HTML renderer than KHTML was ever going to be.
Ubuntu has been burnt before, especially by a popular desktop that starts with the letter G and has a foot as its logo. I can't really judge them too harshly for what they did to your project, and at least they've opened the development back up to all again and are working with you.
Yes, maybe -- but that doesn't mean they couldn't at least have tried. Their engineers knew that libhybris happily accepted patches without bullshit. Just traditional open source.
Great work! Looking forward to reading the rest of the series.
Nice work :)
Very interesting read! I want Wayland on my SGS2 :-)
Very cool; we discussed attempting this inside the Qt PSO several times but never actually took any steps to deliver it into the world.
The Samsung Chromebook is a gorgeous bastard piece of hardware, and seeing how fucking hamstrung even Google are when dealing with X11 really drives home how hopelessly fucked we always will be with 1) X11 and 2) hardware-vendor-provided X11 integration.
I look forward to seeing a solid push towards Wayland; good on you for aiding and abetting it. (I was working on the Pi with Qt 5 - I don't know whether it is maintained in a functional form, but it was glorious to see how much you could get out of the hardware when X11 was pared off and throttled/garroted.)