Greenlets, threads, and processes

It's very common in a program to want to do two things at once: repaginate a document while still responding to user input, or handle requests from two (or 10000) web browsers at the same time. In fact, pretty much any GUI application, network server, game, or simulator needs to do this.

It's possible to write your program to explicitly switch back and forth between different tasks, and there are many higher-level approaches to this, which I've covered in previous posts. But an alternative is to have multiple "threads of control", each doing its own thing independently.

There are three ways to do this: processes, threads, or greenlets. How do you decide between them?

  • Processes are good for running tasks that need to use CPU in parallel and don't need to share state, like applying some complex mathematical calculation to hundreds of inputs.
  • Threads are good for running a small number of I/O-bound tasks, like a program to download hundreds of web pages.
  • Greenlets are good for running a huge number of simple I/O-bound tasks, like a web server.
If your program doesn't fit one of those three, you have to understand the tradeoffs.

Multiprocessing

Traditionally, the way to have separate threads of control was to have entirely independent programs. And often, this is still the best answer. Especially in Python, where you have helpers like multiprocessing.Process, multiprocessing.Pool, and concurrent.futures.ProcessPoolExecutor to wrap up most of the scaffolding for you.
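
For example, here's a minimal sketch of farming CPU-bound work out to a pool of worker processes with concurrent.futures.ProcessPoolExecutor (the function and inputs are made up for illustration):

    import concurrent.futures

    def slow_calculation(n):
        # Stand-in for some complex mathematical calculation.
        return sum(i * i for i in range(n))

    if __name__ == '__main__':  # required on platforms that spawn, like Windows
        inputs = [1000000 + i for i in range(100)]
        # By default, the executor creates one worker process per core.
        with concurrent.futures.ProcessPoolExecutor() as executor:
            results = list(executor.map(slow_calculation, inputs))
        print(results[:3])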

Separate processes have one major advantage: They're completely independent of each other. They can't interfere with each other's global objects by accident. This can make it easier to design your program. It also means that if one program crashes, the others are unaffected.

Separate processes also have a major disadvantage: They're completely independent of each other. They can't share high-level objects. Processes can pass objects around—which is often a better solution. The standard library solutions do this by pickling the objects; this means that any object that can't be pickled (like a socket), or that would be too expensive to pickle and copy around (like a list of a billion numbers) won't work. Processes can also share buffers full of low-level data (like an array of a billion 32-bit C integers). In some cases, you can pass explicit requests and responses instead (e.g., if the background process is only going to need to get or set a few of those billion numbers, you can send get and set messages; the stdlib has Manager classes that do this automatically for simple lists and dicts). But sometimes, there's just no easy way to make this work.
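
As a sketch of the two sharing styles just described (with made-up data): a shared buffer of low-level C integers avoids pickling and copying entirely, while a Manager proxy turns each access into a get or set message to a server process:

    import multiprocessing

    def double_everything(shared):
        for i in range(len(shared)):
            shared[i] *= 2

    if __name__ == '__main__':
        # A shared buffer of C ints ('i', typically 32-bit): no pickling, no copying.
        buf = multiprocessing.Array('i', range(10))
        p = multiprocessing.Process(target=double_everything, args=(buf,))
        p.start()
        p.join()
        print(buf[:])  # [0, 2, 4, ..., 18]

        # A Manager proxy: each access becomes a message to the manager process.
        with multiprocessing.Manager() as manager:
            numbers = manager.list([1, 2, 3])
            numbers[0] = 99  # a "set" message under the covers
            print(list(numbers))  # [99, 2, 3]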

As a more minor disadvantage, on many platforms (especially Windows), starting a new process is a pretty heavy thing to do. We're not talking minutes here, just milliseconds, but still, if you're kicking off jobs that may only take 5ms to finish, and you add 30ms of overhead to each one, that's not exactly an optimization. Usually, using a Pool or Executor is the easy way around this problem, but it's not always appropriate.

Finally, while modern OS's are pretty good at running, say, a couple dozen active processes and a couple hundred dormant ones, if you push things up to hundreds of active processes or thousands of dormant ones, you may end up spending more time in context-switching and scheduling overhead than doing actual work. If you know that your program is going to be using most of the machine's CPU, you generally want to try to use exactly as many processes as there are cores. (Again, using a Pool or Executor makes this easy, especially since they default to creating one process per core.)

Threading

Almost all modern operating systems have threads. These are like separate processes as far as the operating system's scheduler is concerned, but are still part of the same process as far as the memory heap, open file table, and so on are concerned.

The advantage of threads over processes is that everything is shared. If you modify an object in one thread, another thread can see it.

The disadvantage of threads is that everything is shared. If you modify an object in two different threads, you've got a race condition. Even if you only modify it in one thread, it's not deterministic whether another thread sees the old value or the new one—which is especially bad for operations that aren't "atomic", where another thread could see some invalid intermediate value.
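
The classic demonstration (a sketch, with an arbitrary count): counter += 1 is really a read, an add, and a write, so two threads can interleave and lose updates:

    import threading

    counter = 0

    def bump(n):
        global counter
        for _ in range(n):
            counter += 1  # read-modify-write: not atomic

    threads = [threading.Thread(target=bump, args=(100000,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # May print less than 200000; how many updates get lost depends on
    # exactly when the threads happen to switch.
    print(counter)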

One way to solve this problem is to use locks and other synchronization objects. (You can also use low-level "interlocked" primitives, like "atomic compare and swap", to build your own synchronization objects or lock-free objects, but this is very tricky and easy to get wrong.)
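
For example, the broken counter above becomes correct once the read-modify-write is wrapped in a threading.Lock (a sketch; the cost is that the two threads now take turns):

    import threading

    counter = 0
    lock = threading.Lock()

    def bump(n):
        global counter
        for _ in range(n):
            with lock:  # only one thread at a time can be inside this block
                counter += 1

    threads = [threading.Thread(target=bump, args=(100000,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)  # always 200000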

The other way to solve this problem is to pretend you're using separate processes and pass around copies even though you don't have to.

Python adds another disadvantage to threads: Under the covers, the Python interpreter itself has a bunch of globals that it needs. The CPython implementation (the one you're using if you don't know otherwise) protects that global state with a Global Interpreter Lock (GIL), so within a single process, only one thread can be executing Python bytecode at a time. So, if you have 16 processes, your 16-core machine can execute 16 instructions at once, one per process. But if you have 16 threads in one process, you'll only execute one instruction at a time while the other 15 cores sit around idle. C extension modules can work around this by releasing the GIL while they're busy doing non-Python work (NumPy, for example, often does this), but it's still a problem that you have to profile for. Some other implementations (Jython, IronPython, and some non-default-as-of-early-2015 optional builds of PyPy) get by without a GIL, so it may be worth looking at those. But for many Python applications, multithreading means single-core.
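
A rough way to see the GIL for yourself (timings are illustrative, and assume a multi-core machine): the same pure-Python CPU-bound work gets no speedup from a thread pool, but a real one from a process pool:

    import time
    import concurrent.futures

    def burn(n):
        # Pure-Python busy work: holds the GIL the whole time it runs.
        while n:
            n -= 1

    if __name__ == '__main__':
        jobs = [10000000] * 4
        for pool_class in (concurrent.futures.ThreadPoolExecutor,
                           concurrent.futures.ProcessPoolExecutor):
            start = time.perf_counter()
            with pool_class(max_workers=4) as executor:
                list(executor.map(burn, jobs))
            print(pool_class.__name__, time.perf_counter() - start)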

So, why ever use threads? Two reasons.

First, some designs are just much easier to think of in terms of shared-everything threading. (However, keep in mind that many designs look easier this way, until you try to get the synchronization right…)

Second, if your code is mostly I/O-bound (meaning you spend more time waiting on the network, the filesystem, the user, etc. than doing actual work—you can tell this because your CPU usage is nowhere near 100%), threads will usually be simpler and more efficient.
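
For example, a sketch of the download-hundreds-of-pages case (the URLs are placeholders): each thread spends nearly all of its time blocked on the network, so the GIL barely matters:

    import concurrent.futures
    import urllib.request

    URLS = ['http://example.com/page%d' % i for i in range(100)]

    def fetch(url):
        with urllib.request.urlopen(url) as response:
            return url, len(response.read())

    # 20 threads is plenty here; they mostly sit waiting on sockets.
    with concurrent.futures.ThreadPoolExecutor(max_workers=20) as executor:
        for url, size in executor.map(fetch, URLS):
            print(url, size)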

Greenlets

Greenlets—aka cooperative threads, user-level threads, green threads, or fibers—are similar to threads, but the application has to schedule them manually. Unlike a process or a thread, your greenlet function just keeps running until it decides to yield control to someone else.
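
A minimal sketch with the third-party greenlet package shows the manual scheduling: each greenlet runs until it explicitly switches to another:

    from greenlet import greenlet  # pip install greenlet

    def task1():
        print('task1: start')
        gr2.switch()  # explicitly yield control to task2
        print('task1: resumed')

    def task2():
        print('task2: start')
        gr1.switch()  # explicitly yield control back to task1

    gr1 = greenlet(task1)
    gr2 = greenlet(task2)
    gr1.switch()  # prints: task1: start / task2: start / task1: resumed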

Why would you want to use greenlets? Because in some cases, your application can schedule things much more efficiently than the general-purpose scheduler built into your OS kernel. In particular, if you're writing a server that's listening on thousands of sockets, and your greenlets spend most of their time waiting on a socket read, your greenlet can tell the scheduler "Wake me up when I've got something to read", yield to the scheduler, and then do the read once it's woken up. In some cases this can be an order of magnitude more scalable than letting the OS arbitrarily interrupt and awaken threads.

That can get a bit clunky to write, but third-party libraries like gevent and eventlet make it simple: you just call the recv method on a socket, and it automatically turns that into a "wake me up later, yield now, and recv once we're woken up". Then it looks exactly the same as the code you'd write using threads.
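
For instance, a sketch using gevent (the URLs are placeholders): after monkey-patching, ordinary blocking calls like socket reads become "yield to the scheduler until ready", and the rest of the code reads exactly like the threaded version:

    from gevent import monkey
    monkey.patch_all()  # must run before anything else imports socket, etc.

    import gevent
    import urllib.request

    def fetch(url):
        # Looks like a blocking read; actually yields to the gevent scheduler.
        with urllib.request.urlopen(url) as response:
            return url, len(response.read())

    urls = ['http://example.com/page%d' % i for i in range(1000)]
    jobs = [gevent.spawn(fetch, url) for url in urls]
    gevent.joinall(jobs)
    for job in jobs:
        print(job.value)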

Another advantage of greenlets is that you know that your code will never be arbitrarily preempted. Every operation that doesn't yield control is guaranteed to be atomic. This makes certain kinds of race conditions impossible. You still need to think through your synchronization, but often the result is simpler and more efficient.

The big disadvantage is that if you accidentally write some CPU-bound code in a greenlet, it will block the entire program, preventing any other greenlets from running at all, whereas with threads it would just slow the other threads down a bit. (Of course sometimes this is a good thing—it makes it easier to reproduce and recognize the problem…)

It's worth noting that other concurrent designs like coroutines or promises can in many cases look just as simple as greenlets, except that the yields are explicit (e.g., with asyncio coroutines, marked by yield from expressions) instead of implicit (e.g., marked only by magic functions like socket.recv).
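
For comparison, a sketch of a fetch written as an asyncio coroutine (host and path are placeholders): every point where the task may yield control is visible in the source. This post was written against the yield from spelling; modern Python spells the same explicit yield points await:

    import asyncio

    async def fetch(host, path):
        reader, writer = await asyncio.open_connection(host, 80)  # explicit yield
        writer.write(b'GET %s HTTP/1.0\r\nHost: %s\r\n\r\n'
                     % (path.encode(), host.encode()))
        body = await reader.read()  # explicit yield: runs when data arrives
        writer.close()
        await writer.wait_closed()
        return len(body)

    print(asyncio.run(fetch('example.com', '/')))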

Blog: Stupid Python Ideas

An unfocused collection of blog posts about Python, from tutorials for questions that come up over and over on StackOverflow to explorations of the CPython internals. The blog originally started purely to talk about suggestions for improving the language (and still has a lot of that). Because Python is so mature and well designed, most ideas to improve it are bad ideas, hence the name.
