As a backend behind a web server, speaking the uwsgi protocol
<uwsgi id="uwsgibk">
    <stats>127.0.0.1:9090</stats>
    <socket>127.0.0.1:3030</socket>
    <file>./server.py</file>
    <enable-threads/>
    <post-buffering/>
    <memory-report/>
</uwsgi>
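Every config in this post loads `./server.py`, but the file itself is never shown. A minimal WSGI application it might contain could look like this (an assumption for illustration, not the original file; uWSGI looks for a callable named `application` by default):

```python
# Minimal WSGI app assumed to live in ./server.py.
# uWSGI loads the module and calls the "application" callable per request.
def application(environ, start_response):
    body = b"Hello from uWSGI\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    # WSGI apps return an iterable of byte strings
    return [body]
```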
As a backend behind a web server, speaking the HTTP protocol
The http and http-socket options are entirely different beasts. The first spawns an additional process that forwards requests to a pool of workers (think of it as a form of shield, on the same level as Apache or Nginx), while the second makes the workers themselves speak HTTP natively. TL;DR: if you plan to expose uWSGI directly to the public, use --http; if you want to proxy it behind a web server that speaks HTTP to its backends, use --http-socket. See also: Native HTTP support.
<uwsgi id="httpbk">
    <stats>127.0.0.1:9090</stats>
    <http-socket>127.0.0.1:3030</http-socket>
    <file>./server.py</file>
    <enable-threads/>
    <post-buffering/>
    <memory-report/>
</uwsgi>
Exposing uWSGI directly to the public
<uwsgi id="http">
    <stats>127.0.0.1:9090</stats>
    <http>:80</http>
    <file>./server.py</file>
    <enable-threads/>
    <post-buffering/>
    <memory-report/>
</uwsgi>
Nginx with the uWSGI FastRouter
http://uwsgi-docs.readthedocs.org/en/latest/Nginx.html
http://uwsgi-docs.readthedocs.org/en/latest/Fastrouter.html
http://stackoverflow.com/questions/21518533/putting-a-uwsgi-fast-router-in-front-of-uwsgi-servers-running-in-docker-containe
http://stackoverflow.com/questions/26499644/how-to-use-the-uwsgi-fastrouter-whith-nginx
Nginx configuration
location /test {
    include uwsgi_params;
    uwsgi_pass 127.0.0.1:3030;
}
FastRouter configuration
<uwsgi id="fastrouter">
    <fastrouter>127.0.0.1:3030</fastrouter>
    <fastrouter-subscription-server>127.0.0.1:3131</fastrouter-subscription-server>
    <enable-threads/>
    <master/>
    <fastrouter-stats>127.0.0.1:9595</fastrouter-stats>
</uwsgi>
Instance configurations
<uwsgi id="subserver1">
    <stats>127.0.0.1:9393</stats>
    <processes>4</processes>
    <enable-threads/>
    <memory-report/>
    <subscribe-to>127.0.0.1:3131:[server_ip] or [domain]</subscribe-to>
    <socket>127.0.0.1:3232</socket>
    <file>./server.py</file>
    <master/>
    <weight>8</weight>
</uwsgi>
<uwsgi id="subserver2">
    <stats>127.0.0.1:9494</stats>
    <processes>4</processes>
    <enable-threads/>
    <memory-report/>
    <subscribe-to>127.0.0.1:3131:[server_ip] or [domain]</subscribe-to>
    <socket>127.0.0.1:3333</socket>
    <file>./server.py</file>
    <master/>
    <weight>2</weight>
</uwsgi>
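The &lt;weight&gt; values above bias the FastRouter toward subserver1 over subserver2 roughly 8:2. As a rough illustration of what such weights mean, here is a hypothetical weighted round-robin sketch (uWSGI's internal scheduling algorithm may differ):

```python
# Illustration only: how <weight> values bias backend selection.
# The backend names and the round-robin scheme are hypothetical.
import itertools

def weighted_cycle(backends):
    """Yield backend names proportionally to their integer weights."""
    expanded = [name for name, weight in backends for _ in range(weight)]
    return itertools.cycle(expanded)

picks = weighted_cycle([("subserver1", 8), ("subserver2", 2)])
first_ten = [next(picks) for _ in range(10)]
# Of the first 10 picks, subserver1 gets 8 and subserver2 gets 2
```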
If we HTTP-GET [server_ip]/test (or [domain]/test), the request is routed as follows:

Nginx >> FastRouter (port 3030) >> subserver1 (port 3232) or subserver2 (port 3333)

Every hop after Nginx speaks the uwsgi protocol. Note that the subscription server on port 3131 is not in the request path; it is only where the instances register themselves with the FastRouter.
The FastRouter also exposes its own stats server (here on port 9595).
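All of the stats sockets in this post work the same way: connect to them and they push a JSON document describing the instance or router. A quick way to read one, sketched in Python (the helper name is mine, not part of uWSGI):

```python
# Hypothetical helper: read the JSON document that a uWSGI stats server
# (enabled via <stats>, <fastrouter-stats>, or <http-stats>) pushes on connect.
import json
import socket

def read_uwsgi_stats(host, port):
    """Connect to a uWSGI stats socket and parse the JSON it sends."""
    chunks = []
    with socket.create_connection((host, port), timeout=5) as sock:
        while True:
            data = sock.recv(4096)
            if not data:  # server closes the connection after sending
                break
            chunks.append(data)
    return json.loads(b"".join(chunks))
```

For example, `read_uwsgi_stats("127.0.0.1", 9595)` would fetch the FastRouter stats configured above.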
The HTTP router
http://uwsgi-docs.readthedocs.org/en/latest/HTTP.html
Router
<uwsgi id="httprouter">
    <enable-threads/>
    <master/>
    <http>:8080</http>
    <http-stats>127.0.0.1:9090</http-stats>
    <http-to>127.0.0.1:8181</http-to>
    <http-to>127.0.0.1:8282</http-to>
</uwsgi>
sub-server1
<uwsgi id="httpserver1">
    <stats>127.0.0.1:9191</stats>
    <socket>127.0.0.1:8181</socket>
    <memory-report/>
    <file>./server.py</file>
    <enable-threads/>
    <post-buffering/>
</uwsgi>
sub-server2
<uwsgi id="httpserver2">
    <stats>127.0.0.1:9292</stats>
    <memory-report/>
    <file>./server.py</file>
    <socket>127.0.0.1:8282</socket>
    <enable-threads/>
    <post-buffering/>
</uwsgi>
Other router modes

Just like the FastRouter and the HTTP router, the other routers can forward (as with http-to) or route requests to sub-servers; the only difference is the protocol spoken. The available protocols are:

Fast (uwsgi), HTTP(S), raw, and SSL.

All of these routers expose stats in the same style.
The uWSGI Emperor
http://uwsgi-docs.readthedocs.org/en/latest/Emperor.html
Emperor
<uwsgi id="emperor">
    <emperor>./vassals</emperor>
    <emperor-stats-server>127.0.0.1:9090</emperor-stats-server>
</uwsgi>
Vassal1
A config file placed in ./vassals:

<uwsgi id="vassal1">
    <http>:8080</http>
    <stats>127.0.0.1:9191</stats>
    <memory-report/>
    <enable-threads/>
    <post-buffering/>
    <file>./server.py</file>
    <chdir>..</chdir>
</uwsgi>
Vassal2
Another config file placed in ./vassals:

<uwsgi id="vassal2">
    <http>:8181</http>
    <stats>127.0.0.1:9292</stats>
    <memory-report/>
    <enable-threads/>
    <post-buffering/>
    <file>./server.py</file>
    <chdir>..</chdir>
</uwsgi>
The Emperor also exposes its own stats.
Multiple mountpoints
<uwsgi id="vassal1">
    <socket>127.0.0.1:3030</socket>
    <http>:8080</http>
    <stats>127.0.0.1:9191</stats>
    <memory-report/>
    <enable-threads/>
    <post-buffering/>
    <manage-script-name/>
    <chdir>..</chdir>
    <mount>/pic=server.py</mount>
    <mount>/test=fuck.py</mount>
    <workers>2</workers>
</uwsgi>
Note:

http://stackoverflow.com/questions/19475651/how-to-mount-django-app-with-uwsgi

The <manage-script-name/> option is critical, yet the official help never mentions it. F*CK YOU!

With this configuration, uWSGI dispatches requests to different apps according to the request path.
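What <manage-script-name/> effectively does is move the matched mountpoint out of PATH_INFO and into SCRIPT_NAME before the app sees the request, which is what WSGI apps (Django included) expect. A sketch of that rewrite (a hypothetical helper for illustration, not uWSGI's internal code):

```python
# Illustration of the SCRIPT_NAME / PATH_INFO rewrite that
# <manage-script-name/> performs for a matched <mount> point.
def rewrite_paths(environ, mountpoint):
    """Return a copy of the WSGI environ with the mountpoint shifted
    from PATH_INFO into SCRIPT_NAME."""
    environ = dict(environ)
    path = environ.get("PATH_INFO", "")
    if path.startswith(mountpoint):
        environ["SCRIPT_NAME"] = environ.get("SCRIPT_NAME", "") + mountpoint
        environ["PATH_INFO"] = path[len(mountpoint):]
    return environ
```

For the config above, a request for /pic/cat.jpg would reach the app mounted at /pic with SCRIPT_NAME "/pic" and PATH_INFO "/cat.jpg".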
The End
As a backend, uWSGI works better with Nginx than with Apache; I recommend Nginx as the front end.

If you deploy Nginx+uWSGI or Apache+uWSGI over an INET socket on CentOS and clients hit 50x Bad Gateway errors, I strongly suggest trying the same setup on Ubuntu: the identical configuration that fails on CentOS may work fine there.

Update 2014-11-03: the cause of this CentOS problem is that RedHat-family distributions enable SELinux by default, and certain SELinux policies trigger it (whether that counts as a bug, I don't know). Disabling SELinux fixes it; for how to disable it, see http://www.cnblogs.com/lightnear/archive/2012/10/06/2713090.html. Debian-family distributions don't enable SELinux by default, so they don't have this problem. I only stumbled across SELinux last weekend while reading an article comparing Linux distribution performance, and when I tried it today, that was indeed the cause. If you run into other similarly weird problems, disabling SELinux is worth a try.

If, after deploying Apache+uWSGI, clients get a file download instead of a page, it may be the result of Apache compressing the response; turn off Apache's GZIP compression module and it works. Nginx doesn't have this problem. Nicer to use, isn't it?
I can't help ranting

Damn it, uWSGI's documentation is a pile of SHIT. SHIT! There is probably far less material on this in China, so I'm posting it here for the convenience of Chinese users.

For niche tools like this, Chinese-language resources are scarce; there's still a gap compared with abroad. Fellow developers, we need to keep at it.