This section describes some best practices for analysis, drawn from the experience of analysts on the Data Mining Team. We list a few things you should do (or at least consider doing) and some pitfalls to avoid. We also provide a list of issues to keep in mind that could affect the quality of your results. Finally, we reference tools and data sets that might help in your analysis.

Analysis Quality

  • Did you spend time thinking about what question you were answering?
  • Did you engage potential users of your analysis to ensure you address the right questions?
  • How much effort did you put into checking the quality of the data?
  • How reproducible is your analysis? If you were to pick up your project 6 months from now, could you reuse anything?
  • Did you review your write-up to your satisfaction?
  • Did you have others review your analysis artifacts (scripts, code, etc.)?
  • Is your write-up something you would be proud to publish?
  • Do you think readers of your analysis summary can understand the key points easily and benefit from them?

Analysis Do's

  • Look at the distribution of your data. Always look at histograms (values and counts) for key fields in your analysis and see what pops out. In most cases, you will find some surprises that need further investigation before you dive into your real analysis (see the first sketch after this list).
  • Skewed Distributions. Most of the data distributions we see in our work are very skewed ("heavy" or "long-tailed"). For example, if you are analyzing queries, there may be a handful of queries that dominate (e.g., "google"). The metrics computed for a particular feature or vertical may be heavily skewed because of those few queries; the first sketch after this list includes a quick check for this.
  • Segmentation. Metrics are more useful when segmented appropriately: not all segments are necessarily useful, but almost always some kind of segmentation provides additional insight. For example, segment by dominant vs. non-dominant query (head vs. tail, or "super-head" vs. the rest); the second sketch after this list shows a simple example. For more on this, see the section on Segmentation. See also a good blog post on segmentation from Web Analytics expert Avinash Kaushik: http://www.kaushik.net/avinash/2010/05/web-analytics-segments-three-category-recommendations.html
  • Deep dive: Always look at some unaggregated data as part of your analysis, especially for results that are surprising (either positively or negatively). One good approach is to use Magic Mirror to pull a few sample sessions and see what users are doing in detail. That will not directly answer the questions you have, but it may raise questions that had not been considered or reveal that some of your assumptions are false.
  • Make sure the data is correct. Talk to the people who generated the data to verify that every field you are using means what you think it means. Don't trust your intuition; always check. For example, when using the DQ field from one of the databases, it is good to verify which verticals are included in the DQ computation. Not all are included, and the list of those that are included differs between the Competitive and Live Metrics databases.
  • Think about baselines. Make sure the comparison you are making is meaningful. Often some subset of the population cannot be meaningfully compared to the population as a whole. For example, it isn't terribly meaningful to compare IE entry point Bing users to the global Bing user population in terms of value, because the global Bing user population will be biased by low-value marketing users, have different demographics, etc. You may simply demonstrate that marketing users are less likely to return than IE and Toolbar users, which is expected and not what you set out to prove at all.
  • Think ahead about possible shortfalls of your methods. Build specific experiments to test whether these shortcomings are real. The beginning of any analysis project should include an active brainstorm of possible reasons the analysis method could be flawed, and the project should specifically build in experiments and data sets to prove or disprove those possible shortcomings. For example, when developing Session Success Rate, we realized there were concerns that success due to answers would not be properly measured, invalidating the metric for answers-related experiments. To shed light on this, we tested on data from a known-good answers ranker flight to check that Session Success Rate did not tell the wrong story in that case.
  • Ensure your metric can find both good and bad. Sometimes your tools will have biases that can be found by testing both good and bad examples. If your metric always says that things are good, it probably isn't useful. This can sometimes be accomplished by having some prior knowledge about good cases and bad cases, and ensuring both are included in your set. For example, imagine that your analysis intends to find the impact of exposure to various Bing features on usage of Bing. The analysis should then include both features like Instant Answers, which we believe are a positive experience for our users, and features like no-results pages, which we believe are not a good experience for our users. If the analysis says that both are really good things, or both are really bad things, then we know it hasn't produced reliable results.
  • Communicate the analysis results. Allocate time and put some effort into communicating the results of your analysis to your customers as well as to anyone who may potentially be interested. Don't wait for them to contact you. Contact them first and ask if they are interested.
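
The following sketch illustrates the distribution and skew checks from the first two bullets above. It is a minimal example assuming a pandas DataFrame with hypothetical query and clicks columns; the field names, the toy data, and the top-1% cutoff are placeholders rather than the actual schema of any of our data sets.

    # Minimal sketch: inspect value/count distributions and check for heavy skew.
    # The column names and data are hypothetical placeholders.
    import pandas as pd

    df = pd.DataFrame({
        "query":  ["google", "google", "google", "facebook", "weather", "rare query 1", "rare query 2"],
        "clicks": [3, 5, 2, 4, 1, 0, 1],
    })

    # Value distribution of a key numeric field (histogram-style bucket counts).
    print(df["clicks"].value_counts().sort_index())

    # Count distribution of a key categorical field: which queries dominate?
    query_counts = df["query"].value_counts()
    print(query_counts)

    # Quick skew check: what share of rows do the most frequent queries account for?
    top_n = max(1, int(0.01 * query_counts.size))
    head_share = query_counts.head(top_n).sum() / query_counts.sum()
    print(f"Top {top_n} queries account for {head_share:.1%} of rows")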
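
And this second sketch shows the head-vs-tail segmentation mentioned in the Segmentation bullet. The column names, the per-impression clicked flag, and the head-size cutoff are illustrative assumptions, not a prescribed definition of the head.

    # Minimal sketch: segment a metric (CTR) by head vs. tail queries.
    import pandas as pd

    df = pd.DataFrame({
        "query":   ["google", "google", "google", "google", "weather", "weather", "tail q 1", "tail q 2"],
        "clicked": [1, 1, 0, 1, 0, 1, 1, 0],
    })

    # Treat the most frequent queries as the "head" (the cutoff here is arbitrary).
    query_freq = df["query"].value_counts()
    head_queries = set(query_freq.head(2).index)
    df["segment"] = df["query"].map(lambda q: "head" if q in head_queries else "tail")

    # The same metric often looks quite different per segment than overall.
    print("Overall CTR:", df["clicked"].mean())
    print(df.groupby("segment")["clicked"].mean())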

Analysis Don'ts

  • Don't go too broad in the analysis. When trying to look at everything, it's very easy to drown in data.
  • Don't use a page view-level quantity to determine a cohort of users without extreme care. This can introduce unexpected biases due to coverage effects, which can influence broad features of the cohort.
  • Don't be afraid to turn away from some analysis method which is proving unproductive. Just because you've written up a plan and scheduled time for a project doesn't mean you should be afraid to fail fast if that's the right thing to do.

Analysis Issues

  • Precision: add error bars (e.g., 95% confidence intervals). This is especially important when working with sampled data (sampled NIF streams or Magic Mirror). For example, if we compare two estimates (e.g., CTR) that are different but the 95% confidence intervals overlap, we can't say that they are different (though we can't say that they're equal either). A sketch of this comparison appears after this list.
  • Accuracy: depending on the "ground truth" and the data set used for the analysis, there may be a bias that needs to be understood to put the analysis results in perspective. For example, when using a particular flight for the analysis, there is a mechanism for selecting users to be in that flight; the users in the flight may not be a true random sample from the population your analysis is interested in, in which case a bias is introduced into the analysis. There can also be temporal bias, e.g. due to seasonal effects: browsing patterns may be different during the weeks before Christmas than, say, in February. Day-of-the-week effects can also be an issue, so it is best to use a multiple of 7 days for the analysis period (e.g., 35 days). Also, unless there is a very good reason for it, don't aggregate over very long periods of time, as the signal will likely change over a long period. This is a trade-off: aggregating over a short term gives less data and larger error, while aggregating over a long term gives more data and better precision but is less sensitive to temporal effects. In general, a four or five week period best balances this trade-off.
  • Weighted aggregation: when computing aggregate values, one can choose to give different weights to different data points. Currently Foray (flight analysis) and LiveMetrics compute aggregate metrics in different ways: LiveMetrics gives each impression equal weight, whereas Foray gives each user equal weight (by first computing aggregates per user and then aggregating those values over all users). As a result, the metric values in LiveMetrics represent heavy users more than light users. The results obtained from these two methods can differ both quantitatively and qualitatively; depending on the analysis, one or the other (or neither) may be most appropriate. The second sketch after this list contrasts the two.
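
The sketch below illustrates the precision point from the first bullet above: put 95% confidence intervals around two CTR estimates before declaring them different. It uses a plain normal approximation for a proportion, and the click and impression counts are invented placeholders.

    # Minimal sketch: 95% confidence intervals for two CTR estimates (normal
    # approximation for a proportion); the click/impression counts are made up.
    import math

    def ctr_with_ci(clicks, impressions, z=1.96):
        p = clicks / impressions
        half_width = z * math.sqrt(p * (1 - p) / impressions)
        return p, p - half_width, p + half_width

    ctr_a = ctr_with_ci(clicks=480, impressions=10000)
    ctr_b = ctr_with_ci(clicks=520, impressions=10000)
    print("A: %.4f [%.4f, %.4f]" % ctr_a)
    print("B: %.4f [%.4f, %.4f]" % ctr_b)

    # Overlapping intervals mean the data does not support calling the two CTRs
    # different (it does not prove they are equal either).
    overlap = ctr_a[1] <= ctr_b[2] and ctr_b[1] <= ctr_a[2]
    print("95% intervals overlap:", overlap)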
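
And this second sketch contrasts the two aggregation styles described in the weighted-aggregation bullet, impression-weighted versus user-weighted, on a toy data set. The column names and numbers are assumptions for illustration only, not the actual Foray or LiveMetrics computations.

    # Minimal sketch: impression-weighted vs. user-weighted aggregation of CTR.
    # The data and column names are made-up placeholders, not the actual
    # Foray or LiveMetrics schemas.
    import pandas as pd

    impressions = pd.DataFrame({
        "user":    ["heavy"] * 8 + ["light_1", "light_2"],
        "clicked": [1, 1, 1, 1, 1, 1, 1, 0, 0, 0],
    })

    # LiveMetrics-style: every impression gets equal weight.
    impression_weighted_ctr = impressions["clicked"].mean()

    # Foray-style: aggregate per user first, then average over users.
    per_user_ctr = impressions.groupby("user")["clicked"].mean()
    user_weighted_ctr = per_user_ctr.mean()

    print("Impression-weighted CTR:", impression_weighted_ctr)  # 0.7, dominated by the heavy user
    print("User-weighted CTR:", user_weighted_ctr)              # ~0.29, each user counts once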