Understanding Complex Event Processing (CEP) / Streaming SQL Operators with WSO2 CEP (Siddhi)
Reposted from: https://iwringer.wordpress.com/2013/08/07/understanding-complex-event-processing-cep-operators-with-wso2-cep-siddhi/
The CEP model has many sensors. A sensor can be a real sensor (e.g., a temperature sensor), an agent, or a system that supports instrumentation. Sensors send events to the CEP engine, and each event has several name-value properties.
We call the events coming from the same sensor a “stream” and give it a name. When an interesting event occurs, the sensor sends that event to the stream.
To use a stream, you first need to define it.
define stream PizzaOrders (id string, price float, ts long, custid string, distance double)
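Later examples also join against a delivery stream. The original post does not define it, but a plausible definition, assuming the fields used below (id, ts, custid), would be:
define stream PizzaDelivery (id string, ts long, custid string)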
The CEP engine listens to one or more streams, and we can write queries telling it to look for certain conditions. Queries are built from the following constructs.
- Filters
- Windows
- Joins
- Patterns and Sequences
- Event tables
- Partitions
Let us see what we can do with each construct.
Filters
A filter checks a condition on the properties of an event. A condition can use =, >, <, etc., and you can build complex queries by combining multiple conditions with and, or, not, etc.
The following query detects pizza orders that are small (say, under 10 dollars) and placed more than 1 km from the store; here we assume the distance attribute is in meters.
from PizzaOrders[price < 10 and distance > 1000]
insert into NBNOrders id, price, distance
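As a further sketch of combining conditions, the following query flags orders that are either very small or unusually large while excluding a test customer. The thresholds, the SuspiciousOrders stream, and the 'test' customer ID are illustrative assumptions, not from the original post.
from PizzaOrders[(price < 5.0 or price > 100.0) and not (custid == 'test')]
insert into SuspiciousOrders id, price, custid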
Windows
An event stream can have an infinite number of events, and windows are a way to select a subset of those events for further processing. You can select events in many ways: events that arrived within a time period, the last N events, etc.
The output of a window is a set of events, which you can use for further processing (e.g., joining event streams) or for calculating aggregate functions like sum and average.
The output can be triggered either when all the events have been collected or whenever a new event is added. We call the first type batch windows and the second sliding windows.
For example, a window can collect all pizza orders placed within the last hour and emit the average order value once every hour. Since the output is triggered only once per hour, this is a batch window.
from PizzaOrders#window.timeBatch(1 hour)
insert into HourlyOrderStats avg(price) as avgPrice
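By contrast, a sliding window re-evaluates on every new event. A minimal sketch using a length window that keeps the last 100 orders (the window size and the RecentOrderStats stream are arbitrary choices for illustration) and emits a running average:
from PizzaOrders#window.length(100)
insert into RecentOrderStats avg(price) as avgPrice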
Joins
The join operator joins two event streams. The idea is to match events coming from the two streams and create a new event stream.
For example, you can use the join operator to join the PizzaDelivery and PizzaOrders streams and calculate the time taken to deliver each order.
from PizzaOrders#window.time(1 hour) as o join PizzaDelivery as d
on o.id == d.id
insert into DeliveryTime o.id as id, d.ts - o.ts as ts
At least one side of the join must have a window. In the example above, we put a one-hour window on PizzaOrders (because the delivery always happens after the order): the join stores the events arriving on PizzaOrders for one hour and matches them against delivery events. If both sides have windows, the join stores the events of each stream and matches them against the events arriving on the other stream.
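For comparison, here is a sketch of the same query with windows on both sides, keeping one hour of events on each stream so that matches are stored and evaluated in both directions:
from PizzaOrders#window.time(1 hour) as o join PizzaDelivery#window.time(1 hour) as d
on o.id == d.id
insert into DeliveryTime o.id as id, d.ts - o.ts as ts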
Patterns and Sequences
Patterns and sequences let us match conditions that happen over time.
For example, we can use patterns to identify returning customers with the following query. Here -> denotes a “followed by” relationship.
from every a1 = PizzaOrders
   -> a2 = PizzaOrders[custid == a1.custid]
insert into ReturningCustomers
   a1.custid as custid, a2.ts as ts
Patterns match even when there are other events in between the two matching conditions. Sequences are similar, but the given event sequence must exactly match the events that happened. For example, the following is the same query implemented using sequences. Note that the second line is there to ignore any non-matching events.
from every a1 = PizzaOrders,
   PizzaOrders[custid != a1.custid]*,
   a2 = PizzaOrders[custid == a1.custid]
insert into ReturningCustomers
   a1.custid as custid, a2.ts as ts
Here, instead of the -> relationship, we use a regular-expression-like notation to define the sequence of conditions.
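Patterns and sequences can also be constrained in time. In later Siddhi versions, a within clause limits how far apart the matched events may be; a sketch in Siddhi 4-style syntax (the 30-minute bound and the QuickReturningCustomers stream are illustrative assumptions):
from every a1 = PizzaOrders -> a2 = PizzaOrders[custid == a1.custid]
within 30 min
select a1.custid as custid, a2.ts as ts
insert into QuickReturningCustomers;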
Partitions (available in upcoming 3.0 release)
Normally, Siddhi evaluates a query by matching all the events in the event streams used by that query. Partitions let us split events into several groups, based on some condition, before evaluating queries.
For example, let's say we need to find the time spent until a pizza leaves the shop and until it is delivered. We can first partition pizza orders by order ID and then evaluate the query, which simplifies the query to a great extent.
define partition orderPartition by PizzaOrders.id, PizzaDone.oid, PizzaDelivered.oid
from PizzaOrders as o -> PizzaDone as p -> PizzaDelivered as d
insert into OrderTimes (p.ts - o.ts) as time2Prepare, (d.ts - p.ts) as time2Delivery
partition by orderPartition
We use partitions for several reasons.
- Evaluating events separately within several partitions can be faster than matching them all together, because each event is matched only against the other events in its own partition.
- Sometimes it makes queries easier to design. For example, in the above query, partitioning lets us write the query without worrying about other orders that overlap with the same order.
- Partitions let the CEP runtime distribute the evaluation across multiple machines, which can help when scaling queries.
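For reference, later Siddhi versions (e.g., Siddhi 4, shipped with WSO2 Stream Processor mentioned in the updates below) express partitions with a partition with ... begin/end block and an explicit select clause. A minimal sketch computing per-customer averages; the PerCustomerStats stream is an illustrative assumption:
partition with (custid of PizzaOrders)
begin
    from PizzaOrders#window.time(1 hour)
    select custid, avg(price) as avgPrice
    insert into PerCustomerStats;
end;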
Event Tables (available in upcoming 3.0 release)
Event tables let us store some events and use them later. You define an event table much like defining a stream.
define table LatePizzaOrdersTable (orderid string, custid string, ts long, price float);
Then you can add events to it, delete events from it, and join those events in the table against incoming events.
For example, let's say we need to store all late deliveries, and if a late delivery happens to the same customer twice, we want to give them a free pizza. First, we store every late delivery in the table.
from LatePizzaDeliveries
insert into LatePizzaOrdersTable;
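Deletes work similarly: an arriving event can remove matching rows from the table. A sketch using the Siddhi 3-style delete syntax, assuming a hypothetical OnTimeDeliveries stream that carries a custid attribute:
from OnTimeDeliveries
delete LatePizzaOrdersTable
on LatePizzaOrdersTable.custid == custid;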
Then we can join the events in the event table with incoming events as follows.
from LatePizzaDeliveries as l join LatePizzaOrdersTable as t
on l.custid == t.custid and l.ts != t.ts
insert into FreePizzaOrders
You could also do the same using an event stream. However, event tables can be persisted to disk, which makes them very useful for long-running use cases. For example, if we implemented the above with an event stream, the stored values would be lost when the server restarts, whereas the values in an event table are preserved on disk.
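For example, in later Siddhi versions (Siddhi 4 / WSO2 Stream Processor) an event table can be backed by an RDBMS via the @store annotation, so its contents survive restarts. A sketch; the JDBC URL, credentials, and backing table name are placeholder assumptions:
@store(type='rdbms', jdbc.url='jdbc:mysql://localhost:3306/pizza', username='user', password='pass', jdbc.driver.name='com.mysql.jdbc.Driver', table.name='LatePizzaOrders')
define table LatePizzaOrdersTable (orderid string, custid string, ts long, price float);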
Update 2017 September: You can try out the above queries with WSO2 Stream Processor, which is freely available under the Apache License 2.0.
Update 2018 January: You can find a detailed discussion of these operators in Stream Processing 101: From SQL to Streaming SQL in 10 Minutes.