Network Service in Chrome

John Abd-El-Malek

March 2016

 

Objective

Create a Mojo Network Service in Chrome, and start converting the code to use it.


Background

As part of moving the Chrome codebase towards a service model, we aim to split low-level parts of the code into separate Mojo services, such as UI, file system, and networking. These services were prototyped in Mandoline, and we have basic (read: incomplete) implementations which were used to get Mandoline to pass basic page cyclers. Work is underway to create a UI service (mus) and file system related services (for which LocalStorage is the first customer) in Chrome, based on the earlier prototypes. This proposal is for migrating the existing network service implementation (see mojo/services/network) and building it out so that it’s production quality.


The expected advantages for this work are:

  • Code simplification: network requests should be made the same way, regardless of which process the request originates from. Similarly, the caller shouldn’t have to change if in the future the implementation moves processes.

  • Performance: the Mojo service would be callable from any thread, reducing thread hops. It would also be directly callable from Blink, using Blink types (strings, etc.), leading to fewer conversions and layers.

  • Longer-term: using mojo interfaces and isolating the network code from chrome code means we can eventually move the network code to a separate process for stability and security improvements. This would also allow us to make network requests from other Mojo apps when Chrome isn’t running, i.e. on ChromeOS with ARC++, or on all platforms for downloads/uploads (potentially from service worker) without the browser running.


Goals & Non Goals

Goals:

  • No performance regression. Of course, we hope that this leads to performance wins.

  • Web platform and browser features (e.g. AppCache, extensions) don’t live in the network service/process.


Non Goals:

  • Using this opportunity to change src/net to improve code health. For now, we treat this work as a refactoring inside src/content, so that we don’t have to worry about consumers of src/net outside the Chromium repository.

Overview/High-level Design

The first stage of the work is to issue network requests through mojo interfaces. The second stage is to rewrite the glue layers of features that hook into networking.

Stage 1: Mojoification

The work to consume network code through mojo interfaces can be split into the following steps:

  1. Migrate the existing network service code and get it linking with Chrome. It’ll be registered as a Mojo service behind a command line flag.

  2. Switch net::URLFetcher users to the Mojo interface. This should be low risk, as both URLFetcher and the network service are thin wrappers around the net module.

  3. Complete Mandoline’s WebURLLoader implementation that uses the network service. Add a command line flag to switch to it.

  4. Migrate browser code that makes network requests to use the mojo interface instead of net::URLFetcher.

  5. Run Finch experiments on the previous step to ensure performance/errors don’t regress.

  6. Move the mojo WebURLLoaderImpl into Blink to remove the extra layer.

 

At this point, we are able to swap different implementations of the networking code. The original code path (ResourceDispatcherHost) will be the default. With a command line option, we can route network requests to go through Mandoline’s networking service which is optionally running in a separate process.

Stage 2: Glue Rewriting

The work at this stage is to rewrite the glue code of all the features that hook into networking. Instead of intercepting and modifying requests once they’ve reached the net/ layer, features should hook in before the request is made, at the call site. The network service can then be a simple service that only knows how to make requests to the network. Requests that are handled by the browser (e.g. service worker, AppCache) should be handled outside of it, and requests that don’t go to the network, such as files, should also be handled by the browser.


To decouple browser features, we can use different URLLoaderFactory implementations to allow interception and handling of requests; the detailed designs of individual features follow this pattern.
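To make the per-scheme factory idea concrete, here is a minimal C++ sketch of dispatching a request to whichever factory is registered for its URL scheme, falling back to the network-backed factory. The real factories are Mojo interfaces with far richer signatures; `LoaderFactory`, `SchemeOf`, and `Dispatch` are hypothetical names used only for illustration.

```cpp
#include <functional>
#include <map>
#include <string>

// Hypothetical stand-in: each "factory" is just a handler that produces a
// response body for a URL. Real factories implement
// network::mojom::URLLoaderFactory over a Mojo pipe.
using LoaderFactory = std::function<std::string(const std::string& url)>;

// Extracts the scheme ("https", "blob", "filesystem", ...) from a URL.
std::string SchemeOf(const std::string& url) {
  auto pos = url.find(':');
  return pos == std::string::npos ? std::string() : url.substr(0, pos);
}

// Routes a request to the factory registered for its scheme; anything
// unregistered (http, https, ...) goes to the plain network factory.
std::string Dispatch(const std::map<std::string, LoaderFactory>& factories,
                     const LoaderFactory& network_factory,
                     const std::string& url) {
  auto it = factories.find(SchemeOf(url));
  if (it != factories.end()) return it->second(url);
  return network_factory(url);
}
```

A feature such as Blob or WebUI support then amounts to registering one more entry in the scheme map, without the network service knowing anything about it.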


Core Concepts

As mentioned above, with the network service there is no more central choke point where all requests go through. Navigations and browser initiated requests (e.g. safe browsing, UMA pings) go to the network process. Subresources, which are everything that the renderer requests, go directly to the network process as well without hopping to the browser process. This is to avoid extra process hops which would add latency.


The URLLoaderFactory interface is used to create a request object. For simple requests in the browser (e.g. non-navigations), the SimpleURLLoader class wraps URLLoaderFactory to provide an easy interface for the common case.
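The wrapping relationship can be sketched in a few lines of C++. This is a heavily simplified stand-in: the real `network::mojom::URLLoaderFactory` and `SimpleURLLoader` take many more parameters (traffic annotations, load options, client pipes), and the `FakeFactory` below exists only so the sketch is self-contained.

```cpp
#include <functional>
#include <string>
#include <utility>

// Hypothetical simplified request; the real network::ResourceRequest has
// dozens of fields.
struct ResourceRequest {
  std::string url;
  std::string method = "GET";
};

// Minimal stand-in for the URLLoaderFactory interface: one way to create
// and start a request.
class URLLoaderFactory {
 public:
  virtual ~URLLoaderFactory() = default;
  using ResponseCallback = std::function<void(int net_error, std::string body)>;
  virtual void CreateLoaderAndStart(const ResourceRequest& request,
                                    ResponseCallback callback) = 0;
};

// SimpleURLLoader-style convenience wrapper for the common browser-process
// case: fetch one body into a string, no streaming.
class SimpleLoader {
 public:
  explicit SimpleLoader(ResourceRequest request)
      : request_(std::move(request)) {}

  void DownloadToString(URLLoaderFactory* factory,
                        std::function<void(std::string)> done) {
    factory->CreateLoaderAndStart(
        request_, [done = std::move(done)](int net_error, std::string body) {
          // On error, hand back an empty body (akin to a null response).
          done(net_error == 0 ? std::move(body) : std::string());
        });
  }

 private:
  ResourceRequest request_;
};

// In-memory factory used only to exercise the sketch; a real implementation
// talks to the network service over a Mojo pipe.
class FakeFactory : public URLLoaderFactory {
 public:
  void CreateLoaderAndStart(const ResourceRequest& request,
                            ResponseCallback callback) override {
    callback(/*net_error=*/0, "body-for:" + request.url);
  }
};
```

Because the caller only sees the factory interface, the same `SimpleLoader` code works whether the factory is backed by an in-process loader or a pipe to a separate network process.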


Features at different layers (e.g. src/content or src/chrome) can modify, redirect, pause, or block a request using the URLLoaderThrottle interface. For example, safe browsing uses this to check each request. Throttles can be added in the browser process through ContentBrowserClient::CreateURLLoaderThrottles or in the renderer process through ContentRendererClient::CreateURLLoaderThrottleProvider.
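The throttle pattern can be sketched as follows. This is an illustrative model only: the real content::URLLoaderThrottle has additional hooks (redirects, responses) and supports asynchronous deferral, and `DenyListThrottle` is a made-up example in the spirit of the safe-browsing check, not its actual implementation.

```cpp
#include <memory>
#include <string>
#include <vector>

// Hypothetical simplified request.
struct Request {
  std::string url;
  std::string method = "GET";
};

// Minimal stand-in for the URLLoaderThrottle interface.
class Throttle {
 public:
  virtual ~Throttle() = default;
  // May modify |request| in place; returns false to cancel it.
  virtual bool WillStartRequest(Request* request) = 0;
};

// Illustrative throttle: cancel any request whose URL contains a marker
// (a crude analogue of a safe-browsing deny-list check).
class DenyListThrottle : public Throttle {
 public:
  explicit DenyListThrottle(std::string needle) : needle_(std::move(needle)) {}
  bool WillStartRequest(Request* request) override {
    return request->url.find(needle_) == std::string::npos;
  }

 private:
  std::string needle_;
};

// Runs every registered throttle before the request reaches the network
// layer; the first throttle that objects cancels the request.
bool RunThrottles(const std::vector<std::unique_ptr<Throttle>>& throttles,
                  Request* request) {
  for (const auto& throttle : throttles) {
    if (!throttle->WillStartRequest(request)) return false;  // cancelled
  }
  return true;  // proceed to the network
}
```

The key property is that throttles compose: the loader runs whatever list the embedder supplied, so content/ and chrome/ can each contribute checks without knowing about one another.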


In rare cases where the state required to modify a request is only available in the browser process, it’s possible to proxy every URLLoaderFactory call through the browser, which then creates URLLoader objects that proxy to the real network ones. This should be used sparingly, because it introduces a process hop for every request’s IPCs, which has a performance penalty. It is currently used by extensions that modify requests with the webRequest API; see ContentBrowserClient::WillCreateURLLoaderFactory and its implementation in src/chrome as an example. There are similar mechanisms to proxy WebSockets (see ContentBrowserClient::CreateWebSocket) and cookie requests from the renderer (see ContentBrowserClient::WillCreateRestrictedCookieManager).
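A sketch of the proxying pattern, under the same simplifying assumptions as above: a factory is modeled as a plain function, and `MakeProxyingFactory` is a hypothetical helper showing how a browser-side interceptor (e.g. webRequest-style rewriting) wraps the real network-backed factory.

```cpp
#include <functional>
#include <map>
#include <string>
#include <utility>

// Hypothetical simplified request with mutable headers.
struct Request {
  std::string url;
  std::map<std::string, std::string> headers;
};

// Stand-in for a URLLoaderFactory: maps a request to a response body.
using Factory = std::function<std::string(const Request&)>;

// Returns a factory that applies |rewrite| to every request and then
// forwards it to |target|. Every request now pays an extra hop through
// the proxy, which is why this pattern should be used sparingly.
Factory MakeProxyingFactory(Factory target,
                            std::function<void(Request*)> rewrite) {
  return [target = std::move(target),
          rewrite = std::move(rewrite)](const Request& request) {
    Request copy = request;
    rewrite(&copy);       // e.g. add/strip headers, redirect, or block
    return target(copy);  // forward to the real factory
  };
}
```

Because the proxied factory has the same shape as the real one, callers are unaware of the interception; only the browser-side wiring decides whether a renderer’s factory pipe points at the network service directly or at the proxy.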


Security

Old codepath

In the non-network-service code path, security checks for requests go through ChildProcessSecurityPolicyImpl::CanRequestURL. For navigations, thanks to PlzNavigate the calls are checked (and rechecked) from the browser. See RenderFrameHostImpl::BeginNavigation, NavigatorImpl::DidStartProvisionalLoad, RenderFrameHostImpl::ValidateDidCommitParams and WebContentsImpl::OnDidFinishLoad, all of which eventually call ChildProcessSecurityPolicyImpl::CanRequestURL. For subresource requests from child processes, ResourceDispatcherHostImpl::OnRequestResourceWithMojo calls ChildProcessSecurityPolicyImpl::CanRequestURL.


Network Service

Navigations

With the network service, the navigation code path remains the same so the security checks above will suffice.

Subresources

For subresources, ResourceDispatcherHost is not used. There is no single point that all requests from child processes go through, since we have many different implementations of URLLoaderFactory (e.g. for WebUI, FileSystem, Extensions, Files, Blobs). The approach we’re taking is to use the Mojo pipe to a particular URLLoaderFactory as a capability. For example, if a renderer is not showing WebUI, we never send it a pipe to the WebUIURLLoaderFactory. Only if a RenderFrameHost commits a page with WebUI bindings do we send it a pipe connected to WebUIURLLoaderFactory. Similar logic is used for files. For Blobs, since the URLs are unguessable, we can always send the pipe to all renderers.
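The capability model can be sketched as a pure decision function: given what the browser knows about a renderer, which scheme factories does it receive? The `RendererGrants` struct and `FactoriesForRenderer` helper are hypothetical names; the real decision is spread across the browser’s factory-bundle plumbing.

```cpp
#include <set>
#include <string>

// Hypothetical per-renderer grants, as known to the browser process.
struct RendererGrants {
  bool has_webui_bindings = false;  // committed a WebUI page
  bool can_request_files = false;   // allowed to load file: URLs
};

// Returns the set of non-network scheme factories whose pipes the browser
// hands to this renderer. A renderer that never receives a pipe for a
// scheme simply cannot issue requests for it.
std::set<std::string> FactoriesForRenderer(const RendererGrants& grants) {
  std::set<std::string> schemes;
  // Blob URLs are unguessable, so the blob factory is always safe to grant.
  schemes.insert("blob");
  if (grants.has_webui_bindings) schemes.insert("chrome");
  if (grants.can_request_files) schemes.insert("file");
  return schemes;
}
```

The security property falls out of construction rather than per-request checks: an ordinary web renderer holds no WebUI pipe at all, so there is nothing for it to abuse.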


Some schemes, like extensions and filesystem, need extra checks, so the URLLoaderFactory implementation in the browser is created with the RenderProcessHost’s ID and calls ChildProcessSecurityPolicyImpl::CanRequestURL every time it gets a request.
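The per-request check can be sketched as follows. `SecurityPolicy` and `CheckedLoaderFactory` are simplified stand-ins; the real check is ChildProcessSecurityPolicyImpl::CanRequestURL, keyed by the child process ID the factory was created with.

```cpp
#include <set>
#include <string>
#include <utility>

// Hypothetical stand-in for ChildProcessSecurityPolicy: records which
// (child process, URL) pairs have been granted.
class SecurityPolicy {
 public:
  void GrantRequestURL(int child_id, const std::string& url) {
    grants_.insert({child_id, url});
  }
  bool CanRequestURL(int child_id, const std::string& url) const {
    return grants_.count({child_id, url}) > 0;
  }

 private:
  std::set<std::pair<int, std::string>> grants_;
};

// Browser-side factory bound to one RenderProcessHost ID; it re-consults
// the policy on every request rather than trusting the renderer.
class CheckedLoaderFactory {
 public:
  CheckedLoaderFactory(const SecurityPolicy* policy, int child_id)
      : policy_(policy), child_id_(child_id) {}

  // Returns whether the request is allowed to proceed to loading.
  bool Load(const std::string& url) const {
    return policy_->CanRequestURL(child_id_, url);
  }

 private:
  const SecurityPolicy* policy_;
  int child_id_;
};
```

Binding the child ID at factory-creation time means a compromised renderer cannot spoof another process’s identity; the browser, not the request, says who is asking.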
