An Introduction to OpenFaaS
Why limit serverless functions to whatever programming languages the provider happens to support?
OpenFaaS is a new open source serverless framework that can run any CLI-driven binary program embedded in a Docker container. “Any command line interface that runs on Linux, we can package it,” boasted Alex Ellis, who created OpenFaaS and by day works as a software engineer for ADP.
In effect, any program that can be run from the command line can be containerized and served up as a function through OpenFaaS. Even complex multimedia Linux apps such as FFmpeg or ImageMagick can be packaged as a function with a Dockerfile of about five lines.
The first deployments of serverless, such as AWS Lambda, were very much proprietary services, meaning that the end user has no control over the back-end platform running the code (indeed, this is where the term “serverless” comes from). Commercial serverless services are also typically restricted to the specific languages they support; in Lambda’s case, that means Node.js (JavaScript), Python, Java, and C#.
OpenFaaS eliminates both of these issues. “OpenFaaS is serverless on your terms,” Ellis said, speaking with us at the Linux Foundation’s Open Source Summit last month.
OpenFaaS began as a side hack for a project that Ellis was working on to make use of his Amazon Alexa voice service. He had a Raspberry Pi running multi-color lights on a Christmas tree, which he wanted to control by way of Alexa.
Ellis found the prep work required to run his code on Lambda tedious, though: every change required repackaging the code, along with a Node.js instance, into a zip file that then had to be uploaded to Amazon.
Instead, Ellis packaged his code in a container that would be managed by Docker Swarm and switched out the endpoint from Lambda to an HTTP call.
Ellis’ work drew lots of attention, both from his write-ups on Hacker News and from Docker, which gave him a spot to show his work at its conference last year as a “Cool Hack.”
OpenFaaS (originally called Docker FaaS) follows the Unix pipeline model, in which multiple components can be strung together through standard input (stdin) and standard output (stdout) streams.
The API gateway, written in Go, acts like a reverse proxy through which users call functions, with the gateway keeping track of the routing. It reads the body and headers of each request, forwards them to the appropriate container, and pipes the results back to the user.
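To make that flow concrete, here is a minimal sketch in Go of the forwarding idea: a toy gateway that resolves a function name from the URL, relays the request body and headers to that function’s container, and streams the response back. The route table, container addresses, and port are hypothetical stand-ins, not the actual OpenFaaS gateway code.

```go
package main

import (
	"io"
	"net/http"
	"strings"
)

// Hypothetical routing table: function name -> address of its container.
// The real gateway resolves functions through the orchestrator instead.
var routes = map[string]string{
	"wordcount": "http://wordcount:8080",
	"resize":    "http://resize:8080",
}

func main() {
	http.HandleFunc("/function/", func(w http.ResponseWriter, r *http.Request) {
		name := strings.TrimPrefix(r.URL.Path, "/function/")
		target, ok := routes[name]
		if !ok {
			http.NotFound(w, r)
			return
		}

		// Forward the request body and headers to the function's container.
		req, err := http.NewRequest(r.Method, target, r.Body)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		req.Header = r.Header

		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadGateway)
			return
		}
		defer resp.Body.Close()

		// Pipe the function's result back to the caller.
		w.WriteHeader(resp.StatusCode)
		io.Copy(w, resp.Body)
	})
	http.ListenAndServe(":8081", nil)
}
```

A caller would then invoke a function with an ordinary HTTP request, for example `curl -d 'some input' http://localhost:8081/function/wordcount`, and get the program’s output back as the response body.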
In effect, Ellis had replaced the Lambda back end with some crafty staging through the Docker Swarm orchestration tool. Kubernetes can also be used, and the codebase is simple enough that others can easily add their own orchestration engine, as someone did for Rancher Cattle, Ellis said.
The container encapsulating the function/program listens on port 8080 for requests. This is done through a copy of Ellis’ Function Watchdog component placed within the container, which shuttles HTTP requests between the user and the function. When a new request comes in, the Watchdog forks the process and pipes the request data in.
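A rough sketch of that watchdog behavior in Go, assuming the wrapped program reads its input from stdin and writes its result to stdout; the `wc -c` command here is just an illustrative stand-in for whatever CLI program the container packages.

```go
package main

import (
	"net/http"
	"os/exec"
)

func main() {
	// Each request forks a fresh copy of the wrapped program, feeds it the
	// request body on stdin, and returns whatever it writes to stdout.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		cmd := exec.Command("wc", "-c") // stand-in for the packaged CLI program
		cmd.Stdin = r.Body              // HTTP request body -> process stdin
		out, err := cmd.Output()        // process stdout -> HTTP response body
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.Write(out)
	})
	// The watchdog inside each function container listens on port 8080.
	http.ListenAndServe(":8080", nil)
}
```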
The Function Watchdog acts a bit like the old Common Gateway Interface (CGI), widely used in the early days of the Web. In fact, to speed performance, Ellis cribbed some techniques from FastCGI, a performance-minded extension of CGI.
Worried about the overhead of firing up a new JVM for every Java invocation, Ellis designed the Watchdog to fork several identical processes ahead of time, rather than just one, in case additional instances were needed. “As requests come in, they would always use one of the pooled processes,” thus trimming the startup time, Ellis said. Rather than the quirky, now all-but-forgotten binary protocol FastCGI employed, Ellis applied the lesson to standard HTTP.
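As an illustration of that pooling idea (a sketch, not Ellis’ actual implementation), a watchdog-style server could pre-start a handful of processes and hand each incoming request one that is already running; the `java -jar handler.jar` command below is a hypothetical placeholder, and the wrapped program is assumed to read one request from stdin and write one reply to stdout.

```go
package main

import (
	"io"
	"net/http"
	"os/exec"
)

// worker wraps one pre-started process whose stdin/stdout we hold open.
type worker struct {
	cmd    *exec.Cmd
	stdin  io.WriteCloser
	stdout io.ReadCloser
}

// startWorker launches one instance of the (hypothetical) handler ahead of time.
func startWorker() (*worker, error) {
	cmd := exec.Command("java", "-jar", "handler.jar") // placeholder command
	stdin, err := cmd.StdinPipe()
	if err != nil {
		return nil, err
	}
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		return nil, err
	}
	if err := cmd.Start(); err != nil {
		return nil, err
	}
	return &worker{cmd: cmd, stdin: stdin, stdout: stdout}, nil
}

func main() {
	// Pre-fork a small pool so requests never pay the startup cost.
	pool := make(chan *worker, 4)
	for i := 0; i < cap(pool); i++ {
		if w, err := startWorker(); err == nil {
			pool <- w
		}
	}

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		wk := <-pool              // take an already-running process
		io.Copy(wk.stdin, r.Body) // request body -> process stdin
		wk.stdin.Close()          // signal end of input
		io.Copy(w, wk.stdout)     // process stdout -> response body
		wk.cmd.Wait()
		if nw, err := startWorker(); err == nil {
			pool <- nw // replenish the pool for the next request
		}
	})
	http.ListenAndServe(":8080", nil)
}
```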
For autoscaling, he used Docker Swarm replicas to create additional instances, which can be triggered into existence via a JSON alert from the Prometheus monitoring tool. Minio, which provides an AWS S3-compatible API, handles scaling out storage.
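One way to picture that trigger, as a hedged sketch rather than the real OpenFaaS wiring: a small webhook receives a simplified JSON alert and scales the corresponding Swarm service with the standard `docker service scale` command. The alert shape and endpoint path below are hypothetical; the actual Prometheus payload carries considerably more fields.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os/exec"
)

// scaleAlert is a simplified, hypothetical shape for the incoming JSON alert.
type scaleAlert struct {
	Function string `json:"function"`
	Replicas int    `json:"replicas"`
}

func main() {
	// Receive a JSON alert and scale the Swarm service behind the function.
	http.HandleFunc("/system/alert", func(w http.ResponseWriter, r *http.Request) {
		var a scaleAlert
		if err := json.NewDecoder(r.Body).Decode(&a); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		// "docker service scale <name>=<replicas>" adds or removes replicas.
		cmd := exec.Command("docker", "service", "scale",
			fmt.Sprintf("%s=%d", a.Function, a.Replicas))
		if out, err := cmd.CombinedOutput(); err != nil {
			http.Error(w, string(out), http.StatusInternalServerError)
			return
		}
		w.WriteHeader(http.StatusOK)
	})
	http.ListenAndServe(":8082", nil)
}
```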