go-net-http 4
"Do not communicate by sharing memory; instead, share memory by communicating." This is the classic Go-community proverb - what does it actually mean?
Back when I was doing performance work in C, I ran into plenty of high-performance architectures that relied on nothing more than a fast MPSC queue (the queue itself typically built on atomics, with offset and count updated via CAS) and never took a lock inside the business logic.
So how should the proverb be read? The answer: minimize shared memory, and lean on message queues.
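The same idea runs through net/http's client. As a minimal sketch of the principle itself (the job type and channel names below are illustrative, not from net/http): instead of guarding shared state with a mutex, hand the work to a single owner goroutine over a channel.

package main

import "fmt"

// job is an illustrative work item: the single consumer goroutine owns the
// running total, so no mutex is needed around it.
type job struct {
	delta  int
	result chan int // the owner goroutine replies on this channel
}

func main() {
	jobs := make(chan job)

	// The only goroutine that ever touches `total`: state is owned, not shared.
	go func() {
		total := 0
		for j := range jobs {
			total += j.delta
			j.result <- total
		}
	}()

	// Producers communicate by sending messages instead of locking shared memory.
	for i := 1; i <= 3; i++ {
		res := make(chan int)
		jobs <- job{delta: i, result: res}
		fmt.Println("total so far:", <-res)
	}
	close(jobs)
}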
RoundTrip is the low-level implementation that sends one HTTP request and receives its response.
Let's look at how it achieves persistent (keep-alive) connections, how it handles concurrency, and where its performance comes from.
Data structures
As the previous posts showed, the core structs are Transport and persistConn.
// Transport is an implementation of RoundTripper that supports HTTP,
// HTTPS, and HTTP proxies (for either HTTP or HTTPS with CONNECT).
//
// By default, Transport caches connections for future re-use.
// This may leave many open connections when accessing many hosts.
// This behavior can be managed using Transport's CloseIdleConnections method
// and the MaxIdleConnsPerHost and DisableKeepAlives fields.
//
// Transports should be reused instead of created as needed.
// Transports are safe for concurrent use by multiple goroutines.
//
// A Transport is a low-level primitive for making HTTP and HTTPS requests.
// For high-level functionality, such as cookies and redirects, see Client.
//
// Transport uses HTTP/1.1 for HTTP URLs and either HTTP/1.1 or HTTP/2
// for HTTPS URLs, depending on whether the server supports HTTP/2,
// and how the Transport is configured. The DefaultTransport supports HTTP/2.
// To explicitly enable HTTP/2 on a transport, use golang.org/x/net/http2
// and call ConfigureTransport. See the package docs for more about HTTP/2.
//
// Responses with status codes in the 1xx range are either handled
// automatically (100 expect-continue) or ignored. The one
// exception is HTTP status code 101 (Switching Protocols), which is
// considered a terminal status and returned by RoundTrip. To see the
// ignored 1xx responses, use the httptrace trace package's
// ClientTrace.Got1xxResponse.
//
// Transport only retries a request upon encountering a network error
// if the connection has already been used successfully and if the
// request is idempotent and either has no body or has its Request.GetBody
// defined. HTTP requests are considered idempotent if they have HTTP methods
// GET, HEAD, OPTIONS, or TRACE; or if their Header map contains an
// "Idempotency-Key" or "X-Idempotency-Key" entry. If the idempotency key
// value is a zero-length slice, the request is treated as idempotent but the
// header is not sent on the wire.
type Transport struct {
idleMu sync.Mutex
closeIdle bool // user has requested to close all idle conns
idleConn map[connectMethodKey][]*persistConn // most recently used at end; idle connections kept per host
// Per-host queue of callers waiting for an idle connection; before a connection is put back
// into the pool, this queue is checked and, if it is non-empty, the connection is handed
// directly to one of the waiters.
idleConnWait map[connectMethodKey]wantConnQueue // waiting getConns
idleLRU connLRU
reqMu sync.Mutex
reqCanceler map[cancelKey]func(error)
altMu sync.Mutex // guards changing altProto only
altProto atomic.Value // of nil or map[string]RoundTripper, key is URI scheme
connsPerHostMu sync.Mutex
connsPerHost map[connectMethodKey]int // current number of connections per host (dialing + active + idle)
connsPerHostWait map[connectMethodKey]wantConnQueue // waiting getConns; per-host queue of callers waiting for a connection slot
// Proxy specifies a function to return a proxy for a given
// Request. If the function returns a non-nil error, the
// request is aborted with the provided error.
//
// The proxy type is determined by the URL scheme. "http",
// "https", and "socks5" are supported. If the scheme is empty,
// "http" is assumed.
//
// If Proxy is nil or returns a nil *URL, no proxy is used.
Proxy func(*Request) (*url.URL, error)
// OnProxyConnectResponse is called when the Transport gets an HTTP response from
// a proxy for a CONNECT request. It's called before the check for a 200 OK response.
// If it returns an error, the request fails with that error.
OnProxyConnectResponse func(ctx context.Context, proxyURL *url.URL, connectReq *Request, connectRes *Response) error
// DialContext specifies the dial function for creating unencrypted TCP connections.
// If DialContext is nil (and the deprecated Dial below is also nil),
// then the transport dials using package net.
//
// DialContext runs concurrently with calls to RoundTrip.
// A RoundTrip call that initiates a dial may end up using
// a connection dialed previously when the earlier connection
// becomes idle before the later DialContext completes.
DialContext func(ctx context.Context, network, addr string) (net.Conn, error)
// Dial specifies the dial function for creating unencrypted TCP connections.
//
// Dial runs concurrently with calls to RoundTrip.
// A RoundTrip call that initiates a dial may end up using
// a connection dialed previously when the earlier connection
// becomes idle before the later Dial completes.
//
// Deprecated: Use DialContext instead, which allows the transport
// to cancel dials as soon as they are no longer needed.
// If both are set, DialContext takes priority.
Dial func(network, addr string) (net.Conn, error)
// DialTLSContext specifies an optional dial function for creating
// TLS connections for non-proxied HTTPS requests.
//
// If DialTLSContext is nil (and the deprecated DialTLS below is also nil),
// DialContext and TLSClientConfig are used.
//
// If DialTLSContext is set, the Dial and DialContext hooks are not used for HTTPS
// requests and the TLSClientConfig and TLSHandshakeTimeout
// are ignored. The returned net.Conn is assumed to already be
// past the TLS handshake.
// The next two fields handle dialing for HTTPS requests.
DialTLSContext func(ctx context.Context, network, addr string) (net.Conn, error)
// DialTLS specifies an optional dial function for creating
// TLS connections for non-proxied HTTPS requests.
//
// Deprecated: Use DialTLSContext instead, which allows the transport
// to cancel dials as soon as they are no longer needed.
// If both are set, DialTLSContext takes priority.
DialTLS func(network, addr string) (net.Conn, error)
// TLSClientConfig specifies the TLS configuration to use with
// tls.Client.
// If nil, the default configuration is used.
// If non-nil, HTTP/2 support may not be enabled by default.
TLSClientConfig *tls.Config
// TLSHandshakeTimeout specifies the maximum amount of time to
// wait for a TLS handshake. Zero means no timeout.
TLSHandshakeTimeout time.Duration
// DisableKeepAlives, if true, disables HTTP keep-alives and
// will only use the connection to the server for a single
// HTTP request.
//
// This is unrelated to the similarly named TCP keep-alives.
DisableKeepAlives bool // if true, connections are not reused (no keep-alive)
// DisableCompression, if true, prevents the Transport from
// requesting compression with an "Accept-Encoding: gzip"
// request header when the Request contains no existing
// Accept-Encoding value. If the Transport requests gzip on
// its own and gets a gzipped response, it's transparently
// decoded in the Response.Body. However, if the user
// explicitly requested gzip it is not automatically
// uncompressed.
DisableCompression bool // if true, the transport does not request gzip compression on its own
// MaxIdleConns controls the maximum number of idle (keep-alive)
// connections across all hosts. Zero means no limit.
MaxIdleConns int // maximum number of idle connections across all hosts
// MaxIdleConnsPerHost, if non-zero, controls the maximum idle
// (keep-alive) connections to keep per-host. If zero,
// DefaultMaxIdleConnsPerHost is used.
MaxIdleConnsPerHost int // maximum number of idle connections per host
// MaxConnsPerHost optionally limits the total number of
// connections per host, including connections in the dialing,
// active, and idle states. On limit violation, dials will block.
//
// Zero means no limit.
MaxConnsPerHost int // maximum number of connections per host; once the limit is reached, new dials block
// IdleConnTimeout is the maximum amount of time an idle
// (keep-alive) connection will remain idle before closing
// itself.
// Zero means no limit.
// Maximum time an idle connection may stay in the pool before being closed.
IdleConnTimeout time.Duration
// ResponseHeaderTimeout, if non-zero, specifies the amount of
// time to wait for a server's response headers after fully
// writing the request (including its body, if any). This
// time does not include the time to read the response body.
// Read timeout: the time from finishing the request write until the response headers arrive.
ResponseHeaderTimeout time.Duration
// ExpectContinueTimeout, if non-zero, specifies the amount of
// time to wait for a server's first response headers after fully
// writing the request headers if the request has an
// "Expect: 100-continue" header. Zero means no timeout and
// causes the body to be sent immediately, without
// waiting for the server to approve.
// This time does not include the time to send the request header.
// Timeout for the 100-continue wait between writing the headers and sending the body.
ExpectContinueTimeout time.Duration
// TLSNextProto specifies how the Transport switches to an
// alternate protocol (such as HTTP/2) after a TLS ALPN
// protocol negotiation. If Transport dials a TLS connection
// with a non-empty protocol name and TLSNextProto contains a
// map entry for that key (such as "h2"), then the func is
// called with the request's authority (such as "example.com"
// or "example.com:1234") and the TLS connection. The function
// must return a RoundTripper that then handles the request.
// If TLSNextProto is not nil, HTTP/2 support is not enabled
// automatically.
TLSNextProto map[string]func(authority string, c *tls.Conn) RoundTripper
// ProxyConnectHeader optionally specifies headers to send to
// proxies during CONNECT requests.
// To set the header dynamically, see GetProxyConnectHeader.
// Extra headers sent to the proxy during CONNECT requests.
ProxyConnectHeader Header
// GetProxyConnectHeader optionally specifies a func to return
// headers to send to proxyURL during a CONNECT request to the
// ip:port target.
// If it returns an error, the Transport's RoundTrip fails with
// that error. It can return (nil, nil) to not add headers.
// If GetProxyConnectHeader is non-nil, ProxyConnectHeader is
// ignored.
GetProxyConnectHeader func(ctx context.Context, proxyURL *url.URL, target string) (Header, error)
// MaxResponseHeaderBytes specifies a limit on how many
// response bytes are allowed in the server's response
// header.
//
// Zero means to use a default limit.
MaxResponseHeaderBytes int64
// WriteBufferSize specifies the size of the write buffer used
// when writing to the transport.
// If zero, a default (currently 4KB) is used.
WriteBufferSize int
// ReadBufferSize specifies the size of the read buffer used
// when reading from the transport.
// If zero, a default (currently 4KB) is used.
ReadBufferSize int
// nextProtoOnce guards initialization of TLSNextProto and
// h2transport (via onceSetNextProtoDefaults)
nextProtoOnce sync.Once
h2transport h2Transport // non-nil if http2 wired up
tlsNextProtoWasNil bool // whether TLSNextProto was nil when the Once fired
// ForceAttemptHTTP2 controls whether HTTP/2 is enabled when a non-zero
// Dial, DialTLS, or DialContext func or TLSClientConfig is provided.
// By default, use of any of those fields conservatively disables HTTP/2.
// To use a custom dialer or TLS config and still attempt HTTP/2
// upgrades, set this to true.
ForceAttemptHTTP2 bool
}
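As the doc comment stresses, a Transport should be built once and reused. A minimal configuration sketch using the pool-related fields above (the numbers are arbitrary illustrations, not recommendations):

package main

import (
	"net/http"
	"time"
)

func newClient() *http.Client {
	t := &http.Transport{
		MaxIdleConns:          100,              // pool-wide cap on idle (keep-alive) connections
		MaxIdleConnsPerHost:   10,               // idle connections kept per host
		MaxConnsPerHost:       50,               // dialing + active + idle per host; 0 means unlimited
		IdleConnTimeout:       90 * time.Second, // close idle connections after this long
		ResponseHeaderTimeout: 5 * time.Second,  // from end of request write until response headers
		ExpectContinueTimeout: 1 * time.Second,  // wait for "100 Continue" before sending the body
		ForceAttemptHTTP2:     true,
	}
	// Reuse the same Transport (and Client) across requests: it is safe for
	// concurrent use and caches connections for keep-alive.
	return &http.Client{Transport: t, Timeout: 30 * time.Second}
}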
// persistConn wraps a connection, usually a persistent one
// (but may be used for non-keep-alive requests as well)
type persistConn struct {
// alt optionally specifies the TLS NextProto RoundTripper.
// This is used for HTTP/2 today and future protocols later.
// If it's non-nil, the rest of the fields are unused.
alt RoundTripper
t *Transport
cacheKey connectMethodKey // proxy + scheme + addr
conn net.Conn
tlsState *tls.ConnectionState
br *bufio.Reader // from conn; the connection's buffered reader
bw *bufio.Writer // to conn; the connection's buffered writer
nwrite int64 // bytes written
reqch chan requestAndChan // written by roundTrip; read by readLoop (syncs persistConn.roundTrip with readLoop)
writech chan writeRequest // written by roundTrip; read by writeLoop (syncs persistConn.roundTrip with writeLoop)
closech chan struct{} // closed when conn closed
isProxy bool
sawEOF bool // whether we've seen EOF from conn; owned by readLoop
readLimit int64 // bytes allowed to be read; owned by readLoop
// writeErrCh passes the request write error (usually nil)
// from the writeLoop goroutine to the readLoop which passes
// it off to the res.Body reader, which then uses it to decide
// whether or not a connection can be reused. Issue 7569.
writeErrCh chan error
writeLoopDone chan struct{} // closed when write loop ends
// Both guarded by Transport.idleMu:
idleAt time.Time // time it last became idle
idleTimer *time.Timer // holds an AfterFunc that closes the connection when the idle timeout fires
mu sync.Mutex // guards following fields
numExpectedResponses int // number of outstanding requests still awaiting a response on this connection
closed error // set non-nil when conn is closed, before closech is closed; records why it was closed
canceledErr error // set non-nil if conn is canceled; records why it was canceled
broken bool // an error has happened on this connection; marked broken so it's not reused.
reused bool // whether conn has had successful request/response and is being reused.
// mutateHeaderFunc is an optional func to modify extra
// headers on each outbound request before it's written. (the
// original Request given to RoundTrip is not modified)
mutateHeaderFunc func(Header)
}
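The reqch / writech / closech fields are exactly the "share memory by communicating" idea from the opening: roundTrip never touches the socket itself, it posts messages that readLoop and writeLoop consume. A stripped-down sketch of that shape (types and names here are illustrative stand-ins, not the real net/http ones):

package main

import (
	"errors"
	"fmt"
)

// Illustrative stand-ins for net/http's writeRequest and responseAndError.
type writeRequestMsg struct{ payload string }
type responseMsg struct {
	body string
	err  error
}

// fakeConn mirrors the shape of persistConn's channel fields.
type fakeConn struct {
	writech chan writeRequestMsg  // roundTrip -> writeLoop
	reqch   chan chan responseMsg // roundTrip -> readLoop (carries the per-request reply channel)
	closech chan struct{}         // closed when the connection dies
}

// roundTrip never touches the "connection" directly: it posts messages and
// waits for whichever happens first, a response or the connection closing.
func (c *fakeConn) roundTrip(payload string) (string, error) {
	resc := make(chan responseMsg, 1)
	c.writech <- writeRequestMsg{payload: payload}
	c.reqch <- resc
	select {
	case r := <-resc:
		return r.body, r.err
	case <-c.closech:
		return "", errors.New("connection closed")
	}
}

func main() {
	c := &fakeConn{
		writech: make(chan writeRequestMsg, 1),
		reqch:   make(chan chan responseMsg, 1),
		closech: make(chan struct{}),
	}
	// Toy writeLoop: just consumes write commands.
	go func() {
		for range c.writech {
			// pretend to write the request bytes to the wire
		}
	}()
	// Toy readLoop: replies to every pending request with an echo.
	go func() {
		for resc := range c.reqch {
			resc <- responseMsg{body: "echo"}
		}
	}()
	body, err := c.roundTrip("GET /")
	fmt.Println(body, err)
}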
The client's core logic is:
func (t *Transport) roundTrip(req *Request) (*Response, error) {
treq := &transportRequest{Request: req, trace: trace}
cm, err := t.connectMethodForRequest(treq) // compute the connect-method key for the request (proxy + scheme + addr)
// Core call: obtain a connection for this request, either from the idle pool or freshly dialed.
pconn, err := t.getConn(treq, cm)
// The actual send and receive then happen on the returned persistConn.
return pconn.roundTrip(treq)
}
/* When the client puts "Connection: close" in the request headers, it is saying that it wants the
connection to the server torn down as soon as this request completes - the signal for a short-lived
(non keep-alive) connection.
When the server receives such a request and responds successfully, it normally honors the client's
wish and closes the connection once the response is done; it also includes "Connection: close" in
the response headers to tell the client that the connection will be closed.
*/
/*
Whether the "Connection: close" header appears in the request or in the response, it means the TCP
connection currently in use will be torn down once this exchange finishes; any later request from
the client has to establish a new TCP connection.
The Connection: close setting lets either the client or the server ask for the underlying connection
to be closed after the current request has been handled.
*/
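On the client side the same effect can be requested per request: setting Request.Close (or DisableKeepAlives on the Transport) makes the transport send "Connection: close" and skip returning the connection to the idle pool. A small sketch (example.com is a placeholder):

package main

import (
	"fmt"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "https://example.com", nil)
	if err != nil {
		panic(err)
	}
	// Equivalent to sending "Connection: close": the transport will not
	// reuse this TCP connection after the response has been read.
	req.Close = true

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status, "Close =", resp.Close)
}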
The writeLoop function
func (pc *persistConn) writeLoop() {
defer close(pc.writeLoopDone)
for {
select {
case wr := <-pc.writech: // a write command arrives from roundTrip
startBytesWritten := pc.nwrite
err := wr.req.Request.write(pc.bw, pc.isProxy, wr.req.extra, pc.waitForContinue(wr.continueCh)) // the main call: actually writes the HTTP request to the connection
if bre, ok := err.(requestBodyReadError); ok {
err = bre.error
wr.req.setError(err)
}
if err == nil {
err = pc.bw.Flush()
}
if err != nil {
wr.req.Request.closeBody()
if pc.nwrite == startBytesWritten {
err = nothingWrittenError{err}
}
}
pc.writeErrCh <- err // notify readLoop, which uses this to decide whether the connection can be reused
wr.ch <- err // notify persistConn.roundTrip so it can arm the timeout for reading the response
if err != nil { // on any error the connection is not reused
pc.close(err)
return
}
// loop around to handle the next request
case <-pc.closech: // the connection was closed
return
}
}
}
The waitForContinue function
This function supports "Expect: 100-continue". It returns a closure that blocks until one of three things happens:
1. readLoop receives the 100-continue response and sends a value on continueCh;
2. the wait for the server's reply times out, in which case the body is sent anyway;
3. the connection is closed.
func (pc *persistConn) waitForContinue(continueCh <-chan struct{}) func() bool {
if continueCh == nil {
return nil
}
return func() bool {
timer := time.NewTimer(pc.t.ExpectContinueTimeout) // arm the timeout
defer timer.Stop()
select {
case _, ok := <-continueCh: // readLoop closes this channel if the server's first response shows it will not accept the body (a non-100 status), making ok false
return ok
case <-timer.C: // timed out waiting for the server; send the body anyway
return true
case <-pc.closech: // the connection was closed
return false
}
}
}
The write function
This function performs the actual work of sending one request; note how Expect: 100-continue is handled.
func (r *Request) write(w io.Writer, usingProxy bool, extraHeaders Header, waitForContinue func() bool) (err error) {
trace := httptrace.ContextClientTrace(r.Context())
if trace != nil && trace.WroteRequest != nil {
defer func() {
trace.WroteRequest(httptrace.WroteRequestInfo{ // callback fired once the request has been written
Err: err,
})
}()
}
// determine the Host header value
host := cleanHost(r.Host)
if host == "" {
if r.URL == nil {
return errMissingHost
}
host = cleanHost(r.URL.Host)
}
host = removeZone(host)
ruri := r.URL.RequestURI()
if usingProxy && r.URL.Scheme != "" && r.URL.Opaque == "" {
ruri = r.URL.Scheme + "://" + host + ruri
} else if r.Method == "CONNECT" && r.URL.Path == "" {
// CONNECT requests normally give just the host and port, not a full URL.
ruri = host
if r.URL.Opaque != "" {
ruri = r.URL.Opaque
}
}
if stringContainsCTLByte(ruri) {
return errors.New("net/http: can't write control character in Request.URL")
}
var bw *bufio.Writer
if _, ok := w.(io.ByteWriter); !ok {
bw = bufio.NewWriter(w)
w = bw
}
// start writing: the request line
_, err = fmt.Fprintf(w, "%s %s HTTP/1.1\r\n", valueOrDefault(r.Method, "GET"), ruri)
if err != nil {
return err
}
// Header lines
_, err = fmt.Fprintf(w, "Host: %s\r\n", host)
if err != nil {
return err
}
if trace != nil && trace.WroteHeaderField != nil {
trace.WroteHeaderField("Host", []string{host})
}
userAgent := defaultUserAgent
if r.Header.has("User-Agent") {
userAgent = r.Header.Get("User-Agent")
}
if userAgent != "" {
_, err = fmt.Fprintf(w, "User-Agent: %s\r\n", userAgent)
if err != nil {
return err
}
if trace != nil && trace.WroteHeaderField != nil {
trace.WroteHeaderField("User-Agent", []string{userAgent})
}
}
// Process Body,ContentLength,Close,Trailer
tw, err := newTransferWriter(r)
if err != nil {
return err
}
err = tw.writeHeader(w, trace)
if err != nil {
return err
}
err = r.Header.writeSubset(w, reqWriteExcludeHeader, trace)
if err != nil {
return err
}
if extraHeaders != nil {
err = extraHeaders.write(w, trace)
if err != nil {
return err
}
}
_, err = io.WriteString(w, "\r\n")
if err != nil {
return err
}
// headers are done
if trace != nil && trace.WroteHeaders != nil {
trace.WroteHeaders()
}
// Flush and wait for 100-continue if expected.
if waitForContinue != nil {
if bw, ok := w.(*bufio.Writer); ok {
err = bw.Flush() // flush so the headers go out before we wait
if err != nil {
return err
}
}
if trace != nil && trace.Wait100Continue != nil {
trace.Wait100Continue()
}
if !waitForContinue() { // block until the server replies or the wait times out; the body is sent only when this returns true (100 received, or timeout)
r.closeBody()
return nil
}
}
if bw, ok := w.(*bufio.Writer); ok && tw.FlushHeaders {
if err := bw.Flush(); err != nil {
return err
}
}
// write the body
err = tw.writeBody(w)
if err != nil {
if tw.bodyReadError == err {
err = requestBodyReadError{err}
}
return err
}
if bw != nil {
return bw.Flush()
}
return nil
}
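All the trace.Wrote* calls above are hooks from net/http/httptrace; attaching a ClientTrace to the request context is an easy way to watch this write path in action. A minimal sketch (the URL is a placeholder):

package main

import (
	"fmt"
	"net/http"
	"net/http/httptrace"
)

func main() {
	req, err := http.NewRequest("GET", "https://example.com", nil)
	if err != nil {
		panic(err)
	}

	// Hooks invoked by Request.write as it emits the request.
	trace := &httptrace.ClientTrace{
		WroteHeaderField: func(key string, value []string) {
			fmt.Println("wrote header:", key, value)
		},
		WroteHeaders: func() {
			fmt.Println("all headers written")
		},
		WroteRequest: func(info httptrace.WroteRequestInfo) {
			fmt.Println("request written, err =", info.Err)
		},
	}
	req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
}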
The readLoop function
func (pc *persistConn) readLoop() {
closeErr := errReadLoopExiting // default value, if not changed below
defer func() {
pc.close(closeErr)
pc.t.removeIdleConn(pc)
}()
tryPutIdleConn := func(trace *httptrace.ClientTrace) bool {
if err := pc.t.tryPutIdleConn(pc); err != nil { // try to return the connection to the idle pool
closeErr = err
if trace != nil && trace.PutIdleConn != nil && err != errKeepAlivesDisabled {
trace.PutIdleConn(err)
}
return false
}
if trace != nil && trace.PutIdleConn != nil {
trace.PutIdleConn(nil)
}
return true
}
// eofc is used to block caller goroutines reading from Response.Body
// at EOF until this goroutines has (potentially) added the connection
// back to the idle pool.
eofc := make(chan struct{})
defer close(eofc) // unblock reader on errors
// Read this once, before loop starts. (to avoid races in tests)
testHookMu.Lock()
testHookReadLoopBeforeNextRead := testHookReadLoopBeforeNextRead
testHookMu.Unlock()
alive := true
for alive {
pc.readLimit = pc.maxHeaderResponseSize()
_, err := pc.br.Peek(1)
pc.mu.Lock()
if pc.numExpectedResponses == 0 { // no response is expected, so readable data (or an error) here means the server closed the idle connection
pc.readLoopPeekFailLocked(err)
pc.mu.Unlock()
return
}
pc.mu.Unlock()
rc := <-pc.reqch // a new request; this synchronizes with persistConn.roundTrip
trace := httptrace.ContextClientTrace(rc.req.Context())
var resp *Response
if err == nil {
resp, err = pc.readResponse(rc, trace) // the actual implementation of reading the response
} else {
err = transportReadFromServerError{err}
closeErr = err
}
if err != nil {
if pc.readLimit <= 0 {
err = fmt.Errorf("net/http: server response headers exceeded %d bytes; aborted", pc.maxHeaderResponseSize())
}
select {
case rc.ch <- responseAndError{err: err}:
case <-rc.callerGone: // the caller abandoned the request; the connection is dropped rather than reused, presumably to avoid an unknown failure affecting the next request
return
}
return
}
pc.readLimit = maxInt64 // effectively no limit for response bodies
pc.mu.Lock()
pc.numExpectedResponses--
pc.mu.Unlock()
bodyWritable := resp.bodyIsWritable() // true when the response is a 101 protocol switch: readResponse has wired the body to the underlying conn so it can also be written
hasBody := rc.req.Method != "HEAD" && resp.ContentLength != 0
if resp.Close || rc.req.Close || resp.StatusCode <= 199 || bodyWritable {
// Don't do keep-alive on error if either party requested a close
// or we get an unexpected informational (1xx) response.
// StatusCode 100 is already handled above.
alive = false
}
if !hasBody || bodyWritable {
pc.t.setReqCanceler(rc.req, nil)
// Put the idle conn back into the pool before we send the response
// so if they process it quickly and make another request, they'll
// get this same conn. But we use the unbuffered channel 'rc'
// to guarantee that persistConn.roundTrip got out of its select
// potentially waiting for this persistConn to close.
// but after
alive = alive &&
!pc.sawEOF &&
pc.wroteRequest() && // two purposes: make sure the request write actually finished (the server may respond before fully reading it), and check whether writeLoop hit an error, which would make the connection unusable
tryPutIdleConn(trace) // return the connection to the idle pool
if bodyWritable {
closeErr = errCallerOwnsConn
}
select {
case rc.ch <- responseAndError{res: resp}: // deliver the response; persistConn.roundTrip receives it from this channel
case <-rc.callerGone:
return
}
continue // handle the next request
}
waitForBodyRead := make(chan bool, 2)
body := &bodyEOFSignal{
body: resp.Body,
earlyCloseFn: func() error {
waitForBodyRead <- false
<-eofc // will be closed by deferred call at the end of the function
return nil
},
fn: func(err error) error {
isEOF := err == io.EOF
waitForBodyRead <- isEOF
if isEOF {
<-eofc // see comment above eofc declaration
} else if err != nil {
if cerr := pc.canceled(); cerr != nil {
return cerr
}
}
return err
},
}
resp.Body = body
if rc.addedGzip && strings.EqualFold(resp.Header.Get("Content-Encoding"), "gzip") {
resp.Body = &gzipReader{body: body}
resp.Header.Del("Content-Encoding")
resp.Header.Del("Content-Length")
resp.ContentLength = -1
resp.Uncompressed = true
}
select {
case rc.ch <- responseAndError{res: resp}: // deliver the response; persistConn.roundTrip receives it from this channel
case <-rc.callerGone:
return
}
// Before looping back to the top of this function and peeking on
// the bufio.Reader, wait for the caller goroutine to finish
// reading the response body. (or for cancellation or death)
select {
case bodyEOF := <-waitForBodyRead:
pc.t.setReqCanceler(rc.req, nil) // before pc might return to idle pool
alive = alive &&
bodyEOF &&
!pc.sawEOF &&
pc.wroteRequest() &&
tryPutIdleConn(trace) // return the connection to the idle pool
if bodyEOF {
eofc <- struct{}{}
}
case <-rc.req.Cancel:
alive = false
pc.t.CancelRequest(rc.req)
case <-rc.req.Context().Done():
alive = false
pc.t.cancelRequest(rc.req, rc.req.Context().Err())
case <-pc.closech: // the connection was closed, possibly evicted by the idle-connection LRU
alive = false
}
testHookReadLoopBeforeNextRead()
}
}
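A practical consequence of this loop: the connection only goes back to the idle pool (tryPutIdleConn) after the caller has drained the response body to EOF without errors. So to actually benefit from keep-alive, read the body to the end and close it. A small sketch (the URL is a placeholder):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func fetch(client *http.Client, url string) error {
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	// Drain to EOF so readLoop sees bodyEOF == true and can return the
	// connection to the idle pool.
	if _, err := io.Copy(io.Discard, resp.Body); err != nil {
		return err
	}
	fmt.Println(url, resp.Status)
	return nil
}

func main() {
	client := &http.Client{}
	// Both requests to the same host can reuse one TCP connection because
	// the first body was fully read and closed.
	for i := 0; i < 2; i++ {
		if err := fetch(client, "https://example.com"); err != nil {
			fmt.Println("error:", err)
		}
	}
}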
The readResponse function
Reads one HTTP response; for a request whose headers carry "Expect: 100-continue", the normal case receives two responses from the server.
"Expect: 100-continue" is an HTTP header a client can include to ask the server, before the whole request body is sent, whether it is willing to accept the request. The mechanism is known as "Expect/Continue".
The workflow is:
- The client sends the request with an "Expect: 100-continue" header.
- If the server supports it and is willing to accept the request, it replies with HTTP status 100 Continue.
- After receiving "100 Continue", the client sends the rest of the request body.
// Client:
POST /example HTTP/1.1
Host: example.com
Content-Length: 1000
Expect: 100-continue
<blank line>
// Server:
HTTP/1.1 100 Continue
<blank line>
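On the Go client side this is driven by the Expect header on the request plus Transport.ExpectContinueTimeout, which is what writeLoop's waitForContinue closure waits on. A hedged sketch (URL and body size are placeholders):

package main

import (
	"net/http"
	"strings"
	"time"
)

func main() {
	t := &http.Transport{
		// How long waitForContinue waits for "100 Continue" before the
		// body is sent anyway.
		ExpectContinueTimeout: 1 * time.Second,
	}
	client := &http.Client{Transport: t}

	body := strings.NewReader(strings.Repeat("x", 1000))
	req, err := http.NewRequest("POST", "https://example.com/example", body)
	if err != nil {
		panic(err)
	}
	// Ask the server to acknowledge the headers before the body is sent.
	req.Header.Set("Expect", "100-continue")

	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
}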
// readResponse reads an HTTP response (or two, in the case of "Expect:
// 100-continue") from the server. It returns the final non-100 one.
// trace is optional.
func (pc *persistConn) readResponse(rc requestAndChan, trace *httptrace.ClientTrace) (resp *Response, err error) {
if trace != nil && trace.GotFirstResponseByte != nil {
if peek, err := pc.br.Peek(1); err == nil && len(peek) == 1 {
trace.GotFirstResponseByte()
}
}
num1xx := 0 // number of informational 1xx headers received
const max1xxResponses = 5 // arbitrary bound on number of informational responses
continueCh := rc.continueCh
for {
resp, err = ReadResponse(pc.br, rc.req) // read the server's response
if err != nil {
return
}
resCode := resp.StatusCode
if continueCh != nil { // the request carried an "Expect: 100-continue" header
if resCode == 100 {
if trace != nil && trace.Got100Continue != nil {
trace.Got100Continue()
}
continueCh <- struct{}{} // notify writeLoop that it may continue and send the request body
continueCh = nil // reset the channel
} else if resCode >= 200 { // the server replied without sending 100; closing the channel tells writeLoop to skip the body
close(continueCh)
continueCh = nil // reset the channel
}
}
is1xx := 100 <= resCode && resCode <= 199
// treat 101 as a terminal status, see issue 26161
is1xxNonTerminal := is1xx && resCode != StatusSwitchingProtocols
if is1xxNonTerminal {
num1xx++
if num1xx > max1xxResponses {
return nil, errors.New("net/http: too many 1xx informational responses")
}
pc.readLimit = pc.maxHeaderResponseSize() // reset the limit
if trace != nil && trace.Got1xxResponse != nil {
if err := trace.Got1xxResponse(resCode, textproto.MIMEHeader(resp.Header)); err != nil {
return nil, err
}
}
continue
}
break
}
if resp.isProtocolSwitch() { // for 101 Switching Protocols, the body becomes a ReadWriteCloser whose writes go straight to pc.conn
resp.Body = newReadWriteCloserBody(pc.br, pc.conn)
}
resp.TLS = pc.tlsState
return
}
func newReadWriteCloserBody(br *bufio.Reader, rwc io.ReadWriteCloser) io.ReadWriteCloser {
body := &readWriteCloserBody{ReadWriteCloser: rwc}
if br.Buffered() != 0 {
body.br = br
}
return body
}
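Because of newReadWriteCloserBody, a 101 response's Body can be type-asserted to io.ReadWriteCloser and used as the raw upgraded connection. A hedged sketch (the upgrade endpoint and protocol name are placeholders; a real server has to agree to switch):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "http://example.com/upgrade", nil)
	if err != nil {
		panic(err)
	}
	// Placeholder protocol name; a real server must agree to switch.
	req.Header.Set("Connection", "Upgrade")
	req.Header.Set("Upgrade", "example-proto")

	resp, err := http.DefaultTransport.RoundTrip(req)
	if err != nil {
		panic(err)
	}
	if resp.StatusCode != http.StatusSwitchingProtocols {
		resp.Body.Close()
		fmt.Println("server did not switch protocols:", resp.Status)
		return
	}
	// After 101, RoundTrip hands back the connection as the body: it is
	// readable and writable (backed by newReadWriteCloserBody).
	conn, ok := resp.Body.(io.ReadWriteCloser)
	if !ok {
		panic("expected a writable body on a 101 response")
	}
	defer conn.Close()
	if _, err := conn.Write([]byte("hello over the upgraded connection")); err != nil {
		panic(err)
	}
}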