Optimizing concurrent curl usage
The classic curl concurrency workflow goes like this: push all URLs into the multi handle, drive the concurrent transfers, and wait until every request has finished before parsing the data and doing any follow-up processing.
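For comparison, here is a minimal sketch of that classic pattern, assuming simple GET requests; the function name, parameters, and option set below are illustrative and not taken from the original post. All handles are added up front, the transfers are driven to completion, and only afterwards is any response read.

// Classic pattern: run every request to completion, then process all responses.
// Illustrative sketch; $urls and the options are assumptions, not from the original post.
function classicMultiCurl(array $urls) {
    $mh = curl_multi_init();
    $handles = array();
    foreach ($urls as $url) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_TIMEOUT, 3);
        curl_multi_add_handle($mh, $ch);
        $handles[] = $ch;
    }

    // Drive all transfers until none are still running.
    do {
        curl_multi_exec($mh, $active);
        if ($active > 0) {
            curl_multi_select($mh, 0.5); // wait for socket activity instead of busy-looping
        }
    } while ($active);

    // Only now -- after the slowest request has finished -- are responses processed.
    $responses = array();
    foreach ($handles as $ch) {
        $responses[] = curl_multi_getcontent($ch);
        curl_multi_remove_handle($mh, $ch);
        curl_close($ch);
    }
    curl_multi_close($mh);
    return $responses;
}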
In practice, network latency varies, so some URLs return their content well before others. The classic approach, however, cannot start processing until the slowest URL has returned, and that waiting means idle, wasted CPU. If the URL queue is short, the idle time is tolerable; if the queue is long, the waiting and the waste become unacceptable.
The optimization is to process each URL as soon as its request completes, handling finished responses while the remaining requests are still in flight, instead of waiting for the slowest one before doing any work. This keeps the CPU busy rather than idle. The concrete implementation follows:
function multiCurl($url, $log) {
    $queue = curl_multi_init();
    foreach ($log as $info) {
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, $url);
        curl_setopt($ch, CURLOPT_POST, 1);
        curl_setopt($ch, CURLOPT_TIMEOUT, 3);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $info);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_HEADER, 0);
        curl_setopt($ch, CURLOPT_NOSIGNAL, true);
        curl_multi_add_handle($queue, $ch);
    }

    $responses = array();
    do {
        while (($code = curl_multi_exec($queue, $active)) == CURLM_CALL_MULTI_PERFORM);

        if ($code != CURLM_OK) {
            break;
        }

        // a request was just completed -- find out which one
        while ($done = curl_multi_info_read($queue)) {
            // get the info and content returned on the request
            //$info = curl_getinfo($done['handle']);
            //$error = curl_error($done['handle']);
            $results = curl_multi_getcontent($done['handle']);
            //$responses[] = compact('info', 'error', 'results');
            $responses[] = $results;

            // remove the curl handle that just completed
            curl_multi_remove_handle($queue, $done['handle']);
            curl_close($done['handle']);
        }

        // Block for data in / output; error handling is done by curl_multi_exec
        if ($active > 0) {
            curl_multi_select($queue, 0.5);
        }
    } while ($active);

    curl_multi_close($queue);
    return json_encode($responses);
}
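A hypothetical call might look like the following; the endpoint and the POST payloads are placeholders, since the original post does not show how multiCurl is invoked. Note that because completed handles are read via curl_multi_info_read, the responses are collected in completion order rather than in the order of $log.

// Hypothetical usage; the URL and payloads below are placeholders.
$url = 'http://api.example.com/log';
$log = array(
    'type=click&page=home',
    'type=view&page=detail',
    'type=click&page=cart',
);

// Each element of $log is POSTed to $url; the results come back as a JSON-encoded array.
echo multiCurl($url, $log);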