Elasticsearch (27): Optimizing the memory overhead of the cardinality algorithm, and the HLL algorithm
1. cardinality syntax
In es, deduplicated counting is done with the cardinality metric: for each bucket it deduplicates the specified field and returns the deduplicated count, similar to count(distinct).
cardinality, i.e. count(distinct), has roughly a 5% error rate, with performance around 100 ms.
{ "size" : 0, "aggs" : { "months" : { "date_histogram": { "field": "sold_date", "interval": "month" }, "aggs": { "distinct_colors" : { "cardinality" : { "field" : "brand" } } } } } }
{ "took": 70, "timed_out": false, "_shards": { "total": 5, "successful": 5, "failed": 0 }, "hits": { "total": 8, "max_score": 0, "hits": [] }, "aggregations": { "group_by_sold_date": { "buckets": [ { "key_as_string": "2016-05-01T00:00:00.000Z", "key": 1462060800000, "doc_count": 1, "distinct_brand_cnt": { "value": 1 } }, { "key_as_string": "2016-06-01T00:00:00.000Z", "key": 1464739200000, "doc_count": 0, "distinct_brand_cnt": { "value": 0 } }, { "key_as_string": "2016-07-01T00:00:00.000Z", "key": 1467331200000, "doc_count": 1, "distinct_brand_cnt": { "value": 1 } }, { "key_as_string": "2016-08-01T00:00:00.000Z", "key": 1470009600000, "doc_count": 1, "distinct_brand_cnt": { "value": 1 } }, { "key_as_string": "2016-09-01T00:00:00.000Z", "key": 1472688000000, "doc_count": 0, "distinct_brand_cnt": { "value": 0 } }, { "key_as_string": "2016-10-01T00:00:00.000Z", "key": 1475280000000, "doc_count": 1, "distinct_brand_cnt": { "value": 1 } }, { "key_as_string": "2016-11-01T00:00:00.000Z", "key": 1477958400000, "doc_count": 2, "distinct_brand_cnt": { "value": 1 } }, { "key_as_string": "2016-12-01T00:00:00.000Z", "key": 1480550400000, "doc_count": 0, "distinct_brand_cnt": { "value": 0 } }, { "key_as_string": "2017-01-01T00:00:00.000Z", "key": 1483228800000, "doc_count": 1, "distinct_brand_cnt": { "value": 1 } }, { "key_as_string": "2017-02-01T00:00:00.000Z", "key": 1485907200000, "doc_count": 1, "distinct_brand_cnt": { "value": 1 } } ] } } }
2. precision_threshold: tuning accuracy and memory overhead
GET /tvs/sales/_search
{
  "size": 0,
  "aggs": {
    "distinct_brand": {
      "cardinality": {
        "field": "brand",
        "precision_threshold": 100
      }
    }
  }
}
This deduplicates brand. If brand has no more than 100 unique values (Xiaomi, Changhong, Samsung, TCL, HTL, ...), that is, the number of unique values stays within the precision_threshold, cardinality is almost guaranteed to be 100% accurate.
The cardinality algorithm consumes about precision_threshold * 8 bytes of memory: 100 * 8 = 800 bytes.
That is a tiny amount of memory, and as long as the number of unique values really is within the threshold, the result is guaranteed to be 100% accurate.
Even with a threshold of 100 and millions of unique values, the error rate stays within about 5%.
The larger precision_threshold is set, the more memory it consumes, and the more unique values it can handle while still giving a 100% accurate result.
For example, to deduplicate and count a field with about 10,000 unique values, set precision_threshold = 10000:
10000 * 8 = 80,000 bytes,
80,000 / 1024 ≈ 78 KB (roughly 80 KB).
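That rule of thumb is easy to capture in a small helper. A minimal Python sketch, assuming the precision_threshold * 8 bytes figure from the text and (my own addition) that a cardinality aggregation nested under a bucketing aggregation keeps one such sketch per bucket; the function name is hypothetical, not an Elasticsearch API.

def cardinality_memory_bytes(precision_threshold, num_buckets=1):
    # precision_threshold * 8 bytes per HLL sketch, one sketch per bucket (assumption).
    return precision_threshold * 8 * num_buckets

print(cardinality_memory_bytes(100))            # 800 bytes
print(cardinality_memory_bytes(10000) / 1024)   # 78.125, about 78 KB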
3. HyperLogLog++ (HLL) algorithm performance optimization
The algorithm underlying cardinality is HLL (HyperLogLog++). HLL hashes every unique value and approximates the distinct count from those hash values, which is where the small error comes from.
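To make that concrete, here is a minimal, toy HyperLogLog sketch in Python. It is for intuition only and is not Elasticsearch's actual HLL++ implementation; the choice of MD5 as the hash and p = 12 register bits are arbitrary assumptions of mine. The point it illustrates: each value is hashed once, and the estimate is derived purely from bit patterns of the hashes, so duplicates cost nothing and the exact values are never stored.

import hashlib
import math

def hll_estimate(values, p=12):
    # Toy HyperLogLog: the first p bits of the hash pick a register, and each register
    # remembers the longest run of leading zero bits seen in the rest of the hash.
    m = 1 << p
    registers = [0] * m
    for v in values:
        h = int.from_bytes(hashlib.md5(str(v).encode()).digest()[:8], "big")  # 64-bit hash
        idx = h >> (64 - p)                        # first p bits -> register index
        rest = h & ((1 << (64 - p)) - 1)           # remaining 64 - p bits
        rank = (64 - p) - rest.bit_length() + 1    # number of leading zeros + 1
        registers[idx] = max(registers[idx], rank)
    alpha = 0.7213 / (1 + 1.079 / m)               # standard bias-correction constant
    estimate = alpha * m * m / sum(2.0 ** -r for r in registers)
    zeros = registers.count(0)
    if estimate <= 2.5 * m and zeros:              # small-range correction (linear counting)
        estimate = m * math.log(m / zeros)
    return int(estimate)

# Duplicates do not change the registers, so the estimate tracks distinct values.
print(hll_estimate(["xiaomi", "samsung", "xiaomi", "tcl"]))    # about 3
print(hll_estimate(f"user-{i}" for i in range(100000)))        # close to 100000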
By default, when a cardinality request is sent, the hash values of all field values are computed dynamically at query time. This hashing work can instead be moved forward to index time.
When creating the index, give the brand field an additional sub-field that indexes its hash values:
PUT /tvs/
{
  "mappings": {
    "sales": {
      "properties": {
        "brand": {
          "type": "text",
          "fields": {
            "hash": {
              "type": "murmur3"
            }
          }
        }
      }
    }
  }
}
Then run the cardinality metric against the indexed hash values:
GET /tvs/sales/_search
{
  "size": 0,
  "aggs": {
    "distinct_brand": {
      "cardinality": {
        "field": "brand.hash",
        "precision_threshold": 100
      }
    }
  }
}
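A note on the trade-off: with the murmur3 sub-field, hashes are computed and stored for every document at index time, so indexing costs a bit more CPU and disk; in exchange, the cardinality aggregation no longer has to hash (potentially long) string values at query time. This generally pays off on high-cardinality string fields that are aggregated frequently.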