Machine Translation: Data-driven Design of Fault Diagnosis and Fault-tolerant Control Systems, Sections 8.2.1.2–8.4
English Original
8.2.1.2 Coprime Factorization Techniques
Coprime factorization of a transfer function (matrix) gives a further system representation form which will be intensively used in our subsequent study. Roughly speaking, a coprime factorization over $$\mathcal{R}\mathcal{H}_{\infty}$$ is to factorize a transfer matrix into two stable and coprime transfer matrices.
Definition 8.1 Two stable transfer matrices $$\hat{M}(z)$$, $$\hat{N}(z)$$ are called left coprime if there exist two stable transfer matrices $$\hat{X}(z)$$ and $$\hat{Y}(z)$$ such that
\begin{bmatrix}
\hat{M}\left( z \right) & \hat{N}\left( z \right) \\
\end{bmatrix}\begin{bmatrix}
\hat{X}\left( z \right) \\
\hat{Y}\left( z \right) \\
\end{bmatrix} = I.
138 8 Introduction, Preliminaries and I/O Data Set Models
Similarly, two stable transfer matrices M(z), N(z) are right coprime if there exist two stable matrices X(z), Y(z) such that
\begin{bmatrix}
X\left( z \right) & Y\left( z \right) \\
\end{bmatrix}\begin{bmatrix}
M\left( z \right) \\
N\left( z \right) \\
\end{bmatrix} = I.
Let G(z) be a proper real-rational transfer matrix. The left coprime
factorization (LCF) of G(z) is a factorization of G(z) into two stable and
coprime matrices which will play a key role in designing the so-called residual
generator. To complete the notation, we also introduce the right coprime
factorization (RCF), which is however only occasionally applied in our study.
Definition 8.2 $$G\left( z \right) = {\hat{M}}^{- 1}\left( z \right)\hat{N}(z)$$ with the left coprime pair $$\left( \hat{M}\left( z \right),\hat{N}\left( z \right) \right)$$ is called an LCF of G(z). Similarly, an RCF of G(z) is defined by $$G\left( z \right) = N\left( z \right)M\left( z \right)^{- 1}$$ with the right coprime pair $$\left( M\left( z \right),N\left( z \right) \right)$$. For gain matrices L and F such that A − LC and A + BF are Schur stable, the factors can be realized in state space form as
\hat{M}\left( z \right) = \left( A - LC, - L,C,I \right),\quad \hat{N}\left( z \right) = \left( A - LC,B - LD,C,D \right)
M\left( z \right) = \left( A + BF,B,F,I \right),\quad N\left( z \right) = \left( A + BF,B,C + DF,D \right)
X\left( z \right) = \left( A - LC, - \left( B - LD \right),F,I \right),\quad Y\left( z \right) = \left( A - LC, - L,F,0 \right)
G\left( z \right) = {\hat{M}}^{- 1}\left( z \right)\hat{N}\left( z \right) = N\left( z \right)M^{- 1}\left( z \right)
\begin{bmatrix}
X\left( z \right) & Y\left( z \right) \\
- \hat{N}\left( z \right) & \hat{M}\left( z \right) \\
\end{bmatrix}\begin{bmatrix}
M\left( z \right) & - \hat{Y}\left( z \right) \\
N\left( z \right) & \hat{X}\left( z \right) \\
\end{bmatrix} = \begin{bmatrix}
I & 0 \\
0 & I \\
\end{bmatrix}
r\left( z \right) = \begin{bmatrix}
- \hat{N}\left( z \right) & \hat{M}\left( z \right) \\
\end{bmatrix}\begin{bmatrix}
u\left( z \right) \\
y\left( z \right) \\
\end{bmatrix}
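The state-space LCF formulas above are easy to check numerically. The following sketch (Python with NumPy; the second-order system, the gain L and the evaluation point are made-up illustrative values) verifies the factorization property $G(z) = \hat{M}^{-1}(z)\hat{N}(z)$.

```python
import numpy as np

def tf(A, B, C, D, z):
    """Evaluate the transfer matrix C (zI - A)^{-1} B + D at a point z."""
    n = A.shape[0]
    return C @ np.linalg.solve(z * np.eye(n) - A, B) + D

# Hypothetical 2nd-order example system (illustrative values only).
A = np.array([[0.5, 0.1], [0.0, 0.8]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
L = np.array([[0.4], [0.3]])          # a gain making A - LC Schur stable

z = 1.5 + 0.2j                         # arbitrary evaluation point
G = tf(A, B, C, D, z)
Mhat = tf(A - L @ C, -L, C, np.eye(1), z)        # \hat M = (A-LC, -L, C, I)
Nhat = tf(A - L @ C, B - L @ D, C, D, z)         # \hat N = (A-LC, B-LD, C, D)

# LCF property: G(z) = \hat M^{-1}(z) \hat N(z)
assert np.allclose(np.linalg.solve(Mhat, Nhat), G)
```

The same pattern can be used to check the RCF with a stabilizing state feedback gain F.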
x\left( k + 1 \right) = Ax\left( k \right) + Bu\left( k \right) + E_{d}d\left( k \right) + \eta\left( k \right)
y\left( k \right) = Cx\left( k \right) + Du\left( k \right) + F_{d}d\left( k \right) + v\left( k \right)
- input–output model
y\left( z \right) = G_{yu}\left( z \right)u\left( z \right) + G_{yd}\left( z \right)d\left( z \right) + G_{y\nu}\left( z \right)\nu\left( z \right)
where $$G_{yd}(z)$$, $$G_{y\nu}(z)$$ are known.
8.2.1.4 Modeling of Faults
There exists a number of ways of modeling faults. Extending model (8.16) to
y\left( z \right) = G_{yu}\left( z \right)u\left( z \right) + G_{yd}\left( z \right)d\left( z \right) + G_{yf}\left( z \right)f\left( z \right)
is a widely adopted one, where $$f \in \mathcal{R}^{k_{f}}$$ is an unknown vector that represents all possible faults and is zero in the fault-free case, and $$G_{yf}(z) \in \mathcal{L}\mathcal{H}_{\infty}$$ is a known transfer matrix. Throughout this book, f is assumed to be a deterministic time function. No further assumption on it is made, provided that the type of the fault is not specified.
Suppose that a minimal state space realization of (8.17) is given by
x\left( k + 1 \right) = Ax\left( k \right) + Bu\left( k \right) + E_{d}d\left( k \right) + E_{f}f\left( k \right)
y\left( k \right) = Cx\left( k \right) + Du\left( k \right) + F_{d}d\left( k \right) + F_{f}f\left( k \right)
with known matrices E_f, F_f. Then we have
G_{yf}\left( z \right) = F_{f} + C\left( zI - A \right)^{- 1}E_{f}.
It becomes evident that E_f, F_f indicate the place where a fault occurs and its influence on the system dynamics. It is the state of the art that faults are divided into three categories:
- sensor faults f_S: these are faults that directly act on the process measurement
- actuator faults f_A: these faults cause changes in the actuator
- process faults f_P: they are used to indicate malfunctions within the process.
A sensor fault is often modeled by setting F_f = I, that is,
y\left( k \right) = Cx\left( k \right) + Du\left( k \right) + F_{d}d\left( k \right) + f_{S}\left( k \right)
while an actuator fault by setting E_f = B, F_f = D, which leads to
x\left( k + 1 \right) = Ax\left( k \right) + B\left( u\left( k \right) + f_{A}\left( k \right) \right) + E_{d}d\left( k \right)
y\left( k \right) = Cx\left( k \right) + D\left( u\left( k \right) + f_{A}\left( k \right) \right) + F_{d}d\left( k \right).
Depending on their type and location, process faults can be modeled by E_f = E_P and F_f = F_P for some E_P, F_P. For a system with sensor, actuator and process faults, we define
f = \begin{bmatrix}
f_{A} \\
f_{P} \\
f_{S} \\
\end{bmatrix},\quad E_{f} = \begin{bmatrix}
B & E_{P} & 0 \\
\end{bmatrix},\quad F_{f} = \begin{bmatrix}
D & F_{P} & I \\
\end{bmatrix}
and apply (8.18), (8.19) to represent the system dynamics.
Due to the way they affect the system dynamics, the faults described by
(8.18), (8.19) are called additive faults. It is very important to note that
the occurrence of an additive fault will not affect the system stability,
independent of the system configuration. Typical additive faults met in
practice are, for instance, an offset in sensors and actuators or a drift in
sensors. The former can be described by a constant, while the latter by a
ramp.
In practice, malfunctions in the process or in the sensors and actuators
often cause changes in the model parameters. They are called multiplicative
faults and generally modeled in terms of parameter changes.
8.2.2 Model-Based Residual Generation Schemes
Next, we introduce some standard model- and observer-based residual
generation schemes.
8.2 Preliminaries and Review of Model-Based FDI Schemes
8.2.2.1 Fault Detection Filter
Fault detection filter (FDF) is the first type of observer-based residual
generators proposed by Beard and Jones in the early 1970s. Their work marked the
beginning of a stormy development of model-based FDI techniques.
Core of an FDF is a full-order state observer
\hat{x}\left( k + 1 \right) = A\hat{x}\left( k \right) + Bu\left( k \right) + L\left( y\left( k \right) - C\hat{x}\left( k \right) - Du\left( k \right) \right)
which is constructed on the basis of the nominal system model $$G_{yu}(z) = C(zI - A)^{-1}B + D$$. Built upon (8.25), the residual is simply defined by
r\left( k \right) = y\left( k \right) - \hat{y}\left( k \right) = y\left( k \right) - C\hat{x}\left( k \right) - Du\left( k \right).
Introducing the variable
e\left( k \right) = x\left( k \right) - \hat{x}\left( k \right)
yields, on the assumption of process model (8.18), (8.19),
e\left( k + 1 \right) = \left( A - LC \right)e\left( k \right) + \left( E_{d} - LF_{d} \right)d\left( k \right) + \left( E_{f} - LF_{f} \right)f\left( k \right)
r\left( k \right) = Ce\left( k \right) + F_{d}d\left( k \right) + F_{f}f\left( k \right).
It is evident that r(k) has the characteristic features of a residual when the observer gain matrix L is chosen so that A − LC is stable.
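The behaviour of an FDF residual can be illustrated with a minimal simulation. In the sketch below the scalar system, the gain and the fault size are made-up values, and an additive sensor fault (F_f = I) is switched on at k = 20.

```python
import numpy as np

# Hypothetical first-order example; all numbers are illustrative only.
A, B, C, D, L = 0.9, 1.0, 1.0, 0.0, 0.5   # scalar system, A - L*C = 0.4 (stable)

x, xh = 0.0, 0.3            # plant state and observer state (wrong initial guess)
r = []
for k in range(40):
    u = np.sin(0.3 * k)
    f = 0.8 if k >= 20 else 0.0          # additive sensor fault from k = 20
    y = C * x + D * u + f                # measurement with F_f = 1
    r.append(y - (C * xh + D * u))       # residual r(k) = y(k) - yhat(k)
    x = A * x + B * u                    # plant update
    xh = A * xh + B * u + L * (y - C * xh - D * u)   # observer update (8.25)

# the residual decays toward 0 before the fault and reacts after it
assert abs(r[19]) < 1e-3 and abs(r[20]) > 0.5
```

The decay before k = 20 reflects the stable error dynamics e(k+1) = (A − LC)e(k); the jump at k = 20 is the direct feedthrough F_f f(k) in the residual.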
The advantages of an FDF lie in its simple construction and, for readers familiar with modern control theory, in its intimate relationship with state observer design and, especially, with the well-established robust control theory, which can be exploited for designing robust residual generators.
We see that the design of an FDF is in fact the determination of the observer gain matrix L. To increase the degree of design freedom, we can introduce a weighting matrix V acting on the output estimation error $$y\left( z \right) - \hat{y}(z)$$, that is,
r\left( z \right) = V\left( y\left( z \right) - \hat{y}\left( z \right) \right).
A disadvantage of the FDF scheme is the online implementation of the full-order state observer, since in many practical cases a reduced-order observer can provide the same or similar performance with less online computation. This is one of the motivations for the development of Luenberger-type residual generators, also called diagnostic observers.
8.2.2.2 Diagnostic Observer Scheme
The diagnostic observer (DO) is, thanks to its flexible structure and its similarity to the Luenberger-type observer, one of the most intensively investigated model-based residual generator forms.
The core of a DO is a Luenberger-type (output) observer described by
z\left( k + 1 \right) = Gz\left( k \right) + Hu\left( k \right) + Ly\left( k \right)
r\left( k \right) = Vy\left( k \right) - Wz\left( k \right) - Qu\left( k \right)
where $$z \in \mathcal{R}^{s}$$, s denotes the observer order and can be
equal to or lower or higher than the system order n. Although most
contributions to the Luenberger type observer are focused on the first case
aiming at getting a reduced order observer, higher order observers will play
an important role in the optimization of FDI systems.
Assume $$G_{yu}(z) = C(zI - A)^{-1}B + D$$; then the matrices G, H, L, Q, V and W, together with a matrix $$T \in \mathcal{R}^{s \times n}$$, have to satisfy the so-called Luenberger conditions
\text{I.}\quad G\ \text{is stable}
\text{II.}\quad TA - GT = LC,\quad H = TB - LD
\text{III.}\quad VC - WT = 0,\quad Q = VD
under which system (8.30), (8.31) delivers a residual vector, that is,
\forall u, x\left( 0 \right),\quad \lim_{k \rightarrow \infty} r\left( k \right) = 0.
Let e(k) = Tx(k) − z(k); it is straightforward to show that the dynamics of the DO is, on the assumption of process model (8.18), (8.19), governed by
e\left( k + 1 \right) = Ge\left( k \right) + \left( TE_{d} - LF_{d} \right)d\left( k \right) + \left( TE_{f} - LF_{f} \right)f\left( k \right)
r\left( k \right) = Ve\left( k \right) + VF_{d}d\left( k \right) + VF_{f}f\left( k \right).
Remember that in the last section it has been claimed that all residual generator design schemes can be formulated as the search for an observer gain matrix and a post-filter. It is therefore of practical and theoretical interest to reveal the relationships between the matrices G, L, T, V and W solving Luenberger equations (8.32)–(8.34) and the observer gain matrix as well as the post-filter.
A comparison with the FDF scheme makes it clear that
- the diagnostic observer scheme may lead to a reduced-order residual generator, which is desirable and useful for online implementation,
- we have more degrees of design freedom but, on the other hand,
- a more involved design.
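As a quick sanity check of the Luenberger conditions I–III, the sketch below takes the special case T = I and V = I, for which the DO reduces to an FDF; all numerical values are illustrative only.

```python
import numpy as np

# With T = I the Luenberger conditions reduce to the FDF choice:
# G = A - LC, H = B - LD, W = VC, Q = VD (illustrative values only).
A = np.array([[0.6, 0.2], [0.0, 0.7]])
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 1.0]])
D = np.array([[0.0]])
L = np.array([[0.3], [0.2]])
T = np.eye(2)
V = np.eye(1)

G = A - L @ C                 # observer system matrix
H = T @ B - L @ D             # input matrix from condition II
W = V @ C                     # solves VC - WT = 0 for T = I
Q = V @ D

# Luenberger conditions I-III
assert np.all(np.abs(np.linalg.eigvals(G)) < 1)     # I:   G stable
assert np.allclose(T @ A - G @ T, L @ C)            # II:  TA - GT = LC
assert np.allclose(V @ C - W @ T, 0)                # III: VC - WT = 0
```

Reduced-order (s < n) or higher-order (s > n) designs solve the same equations with a non-square T.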
8.2.2.3 Kalman Filter Based Residual Generation
Consider (8.14), (8.15). Assume that η(k), ν(k) are white Gaussian processes, independent of the initial state vector x(0) and of u(k), with
\varepsilon\begin{bmatrix}
\eta\left( i \right)\eta^{T}\left( j \right) & \eta\left( i \right)v^{T}\left( j \right) \\
v\left( i \right)\eta^{T}\left( j \right) & v\left( i \right)v^{T}\left( j \right) \\
\end{bmatrix} = \begin{bmatrix}
\Sigma_{\eta} & S_{\eta v} \\
S_{v\eta} & \Sigma_{v} \\
\end{bmatrix}\delta_{ij},\quad \Sigma_{v} > 0,\ \Sigma_{\eta} \geq 0,\ \varepsilon\left\lbrack \eta\left( k \right) \right\rbrack = 0,\ \varepsilon\left\lbrack v\left( k \right) \right\rbrack = 0
\varepsilon\left\lbrack x\left( 0 \right) \right\rbrack = \overline{x},\quad \varepsilon\left\lbrack \left( x\left( 0 \right) - \overline{x} \right)\left( x\left( 0 \right) - \overline{x} \right)^{T} \right\rbrack = P_{0}.
A Kalman filter is, although structured similarly to a full-order observer, a time-varying system given by the following recursions:
recursive computation for the optimal state estimate:
\hat{x}\left( 0 \right) = \overline{x},\quad \hat{x}\left( k + 1 \right) = A\hat{x}\left( k \right) + Bu\left( k \right) + K\left( k \right)\left( y\left( k \right) - C\hat{x}\left( k \right) - Du\left( k \right) \right)
recursive computation for the Kalman filter gain:
P\left( 0 \right) = P_{0},\quad P\left( k + 1 \right) = AP\left( k \right)A^{T} - K\left( k \right)R_{e}\left( k \right)K^{T}\left( k \right) + \Sigma_{\eta}
K\left( k \right) = \left( AP\left( k \right)C^{T} + S_{\eta v} \right)R_{e}^{- 1}\left( k \right),\quad R_{e}\left( k \right) = \Sigma_{v} + CP\left( k \right)C^{T}
where $$\hat{x}(k)$$ denotes the estimate of x(k) and
P\left( k \right) = \varepsilon\left\lbrack \left( x\left( k \right) - \hat{x}\left( k \right) \right)\left( x\left( k \right) - \hat{x}\left( k \right) \right)^{T} \right\rbrack
is the associated estimation error covariance.
The significant characteristics of the Kalman filter are:
- the state estimate is optimal in the sense of
P\left( k \right) = \varepsilon\left\lbrack \left( x\left( k \right) - \hat{x}\left( k \right) \right)\left( x\left( k \right) - \hat{x}\left( k \right) \right)^{T} \right\rbrack \Longrightarrow \min
- the so-called innovation process $$e(k) = y(k) - C\hat{x}(k) - Du(k)$$ is a white Gaussian process with covariance
\varepsilon\left( e\left( k \right)e^{T}\left( k \right) \right) = R_{e}\left( k \right) = \Sigma_{v} + CP\left( k \right)C^{T}.
Below is an algorithm for the online implementation of the Kalman filter given by (8.40)–(8.45).
Algorithm 8.1 Online implementation of the Kalman filter
S0: Set $$\hat{x}(0)$$, P(0) as given in (8.40) and (8.42)
S1: Calculate $$R_e(k)$$, K(k), $$\hat{x}(k)$$ according to (8.45), (8.44) and (8.41)
S2: Increase k and calculate P(k + 1) according to (8.43)
S3: Go to S1.
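Algorithm 8.1 can be sketched as follows (Python; the scalar system and noise levels are made up for illustration). Note that the P(k)- and K(k)-recursions do not depend on the measured data, so P(k) converges, for a stationary process, to the stationary solution of the Riccati equation (8.49).

```python
import numpy as np

# Scalar illustration of Algorithm 8.1; all numerical values are made up.
A, B, C, D = 0.8, 1.0, 1.0, 0.0
Sig_eta, Sig_v, S_etav = 0.1, 0.2, 0.0

rng = np.random.default_rng(0)
x, xh, P = 0.0, 0.0, 1.0             # S0: xhat(0) = xbar, P(0) = P0
P_hist = []
for k in range(300):
    u = np.sin(0.1 * k)
    y = C * x + D * u + rng.normal(0.0, np.sqrt(Sig_v))
    Re = Sig_v + C * P * C                            # (8.45)
    K = (A * P * C + S_etav) / Re                     # (8.44)
    xh = A * xh + B * u + K * (y - C * xh - D * u)    # (8.41), S1
    P = A * P * A - K * Re * K + Sig_eta              # (8.43), S2
    P_hist.append(P)
    x = A * x + B * u + rng.normal(0.0, np.sqrt(Sig_eta))

# P(k) has converged to the stationary Riccati solution (cf. (8.49))
assert abs(P_hist[-1] - P_hist[-2]) < 1e-12
```

In a real application a dedicated DARE solver would replace the raw recursion for the offline computation.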
Suppose the process under consideration is stationary; then
\lim_{k \rightarrow \infty}K\left( k \right) = K = \text{const.}
which is subject to
K = \left( APC^{T} + S_{\eta v} \right)R_{e}^{- 1}
with
P = \lim_{k \rightarrow \infty}P\left( k \right),\quad R_{e} = \Sigma_{v} + CPC^{T}.
It holds
P = APA^{T} - KR_{e}K^{T} + \Sigma_{\eta}.
Equation (8.49) is an algebraic Riccati equation whose solution P is positive definite under certain conditions. It thus becomes evident that, given the system model, the gain matrix K can be calculated offline by solving Riccati equation (8.49). The corresponding residual generator is then given by
\hat{x}\left( k + 1 \right) = A\hat{x}\left( k \right) + Bu\left( k \right) + K\left( y\left( k \right) - C\hat{x}\left( k \right) - Du\left( k \right) \right)
r\left( k \right) = y\left( k \right) - C\hat{x}\left( k \right) - Du\left( k \right).
Note that we now have in fact an observer of the full order.
Remark 8.1 The offline set-up (S0) in the above algorithm is needed only once, but S1–S3 have to be repeated at each time instant. Thus, the online implementation is, compared with the steady-state Kalman filter, computationally consuming. For the FDI purpose, we can generally assume that the system under consideration is operating in its steady state before a fault occurs. Therefore, the use of the steady-state type residual generator (8.50), (8.51) is advantageous. In this case, the most involved computation is finding a solution of Riccati equation (8.49), which, nevertheless, is carried out offline, and for which there exists a number of numerically reliable methods and CAD programs.
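A sketch of the steady-state residual generator (8.50), (8.51): the Riccati equation (8.49) is solved offline here by plain fixed-point iteration (a dedicated DARE solver would be used in practice), and the sample variance of the innovation residual is compared with R_e. The scalar system and noise levels are made-up values with S_{ην} = 0.

```python
import numpy as np

# Steady-state Kalman filter as residual generator; illustrative numbers.
A, B, C, D = 0.8, 1.0, 1.0, 0.0
Sig_eta, Sig_v = 0.1, 0.2

# offline: solve the Riccati equation (8.49) by fixed-point iteration
P = 1.0
for _ in range(1000):
    Re = Sig_v + C * P * C
    K = A * P * C / Re
    P = A * P * A - K * Re * K + Sig_eta
Re = Sig_v + C * P * C
K = A * P * C / Re

# online: run (8.50), (8.51) and inspect the innovation covariance
rng = np.random.default_rng(2)
x, xh, res = 0.0, 0.0, []
for k in range(10000):
    u = 1.0
    y = C * x + D * u + rng.normal(0.0, np.sqrt(Sig_v))
    r = y - C * xh - D * u               # residual = innovation (8.51)
    res.append(r)
    xh = A * xh + B * u + K * r          # (8.50)
    x = A * x + B * u + rng.normal(0.0, np.sqrt(Sig_eta))

# the sample variance of the innovation is close to R_e = Sigma_v + C P C^T
assert abs(np.var(res) - Re) / Re < 0.1
```

The whiteness and known covariance of the innovation are what make this residual directly usable for statistical fault detection tests.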
8.2.2.4 Parity Space Approach
The parity space approach was initiated by Chow and Willsky in their pioneering
work in the early 1980s. Although a state space model is used for the purpose of
residual generation, the so-called parity relation, instead of an observer,
builds the core of this approach. The parity space approach is generally
recognized as one of the important model-based residual generation approaches,
parallel to the observer-based and the parameter estimation schemes.
We consider in the following the state space model (8.18), (8.19) and, without loss of generality, assume that rank(C) = m. For the purpose of constructing a residual generator, we first suppose f(k) = 0, d(k) = 0. Following (8.18), (8.19), y(k − s), s > 0, can be expressed in terms of x(k − s), u(k − s), and y(k − s + 1) in terms of x(k − s), u(k − s + 1), u(k − s), as follows:
y\left( k - s \right) = Cx\left( k - s \right) + Du\left( k - s \right)
y\left( k - s + 1 \right) = Cx\left( k - s + 1 \right) + Du\left( k - s + 1 \right) = CAx\left( k - s \right) + CBu\left( k - s \right) + Du\left( k - s + 1 \right).
Repeating this procedure yields
y\left( k - s + 2 \right) = CA^{2}x\left( k - s \right) + CABu\left( k - s \right) + CBu\left( k - s + 1 \right) + Du\left( k - s + 2 \right),\ldots,
y\left( k \right) = CA^{s}x\left( k - s \right) + CA^{s - 1}Bu\left( k - s \right) + \cdots + CBu\left( k - 1 \right) + Du\left( k \right).
Introducing the notations
y_{s}\left( k \right) = \begin{bmatrix}
y\left( k - s \right) \\
\vdots \\
y\left( k \right) \\
\end{bmatrix},\quad u_{s}\left( k \right) = \begin{bmatrix}
u\left( k - s \right) \\
\vdots \\
u\left( k \right) \\
\end{bmatrix},\quad H_{o,s} = \begin{bmatrix}
C \\
CA \\
\vdots \\
CA^{s} \\
\end{bmatrix},\quad H_{u,s} = \begin{bmatrix}
D & 0 & \cdots & 0 \\
CB & D & \ddots & \vdots \\
\vdots & \ddots & \ddots & 0 \\
CA^{s - 1}B & \cdots & CB & D \\
\end{bmatrix}
leads to the following compact model form
y_{s}\left( k \right) = H_{o,s}x\left( k - s \right) + H_{u,s}u_{s}\left( k \right).
Note that (8.54) describes the input and output relationship in dependence on
the past state vector x(k − s). It is expressed in an explicit form, in
which
- y_s(k) and u_s(k) consist of the temporal and past outputs and inputs, respectively, and are known,
- matrices H_{o,s} and H_{u,s} are composed of the system matrices A, B, C, D and are also known,
- the only unknown variable is x(k − s).
The underlying idea of the parity relation based residual generation lies in the utilization of the fact, known from linear control theory, that for s ≥ n the following rank condition holds:
\text{rank}\left( H_{o,s} \right) \leq n < \text{the row number of } H_{o,s} = \left( s + 1 \right)m.
This ensures that for s ≥ n there exists at least one (row) vector $$v_{s}\left( \neq 0 \right) \in \mathcal{R}^{\left( s + 1 \right)m}$$ such that
v_{s}H_{o,s} = 0.
Hence, a parity relation based residual generator is constructed by
r\left( k \right) = v_{s}\left( y_{s}\left( k \right) - H_{u,s}u_{s}\left( k \right) \right)
whose dynamics is governed by, in the case of f(k) = 0, d(k) = 0,
r\left( k \right) = v_{s}\left( y_{s}\left( k \right) - H_{u,s}u_{s}\left( k \right) \right) = v_{s}H_{o,s}x\left( k - s \right) = 0.
Vectors satisfying (8.55) are called parity vectors, the set of which,
P_{s} = \left\{ v_{s} \mid v_{s}H_{o,s} = 0 \right\},
is called the parity space of the sth order.
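A parity vector can be computed directly from the left null space of H_{o,s}, e.g. via the SVD. The sketch below (illustrative second-order, single-output system with made-up values) also confirms that the parity residual (8.56) vanishes in the fault- and disturbance-free case.

```python
import numpy as np

# Illustrative 2nd-order single-output system (numbers made up for the sketch).
A = np.array([[0.5, 1.0], [0.0, 0.6]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
s = 2                                   # here s = n; m = l = 1

# H_{o,s} = [C; CA; ...; CA^s] and block-Toeplitz H_{u,s}
Ho = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(s + 1)])
Hu = np.zeros((s + 1, s + 1))
for i in range(s + 1):
    Hu[i, i] = D.item()
    for j in range(i):
        Hu[i, j] = (C @ np.linalg.matrix_power(A, i - j - 1) @ B).item()

# a parity vector: left null space of H_{o,s} via SVD
U, _, _ = np.linalg.svd(Ho)
vs = U[:, -1]                 # rank(Ho) <= n < (s + 1)m rows
assert np.allclose(vs @ Ho, 0)

# fault- and disturbance-free simulation: the parity residual vanishes
x = np.array([0.7, -0.4])
ys, us = [], []
for k in range(s + 1):
    u = np.cos(0.5 * k)
    ys.append((C @ x).item() + D.item() * u)
    us.append(u)
    x = A @ x + B[:, 0] * u
r = vs @ (np.array(ys) - Hu @ np.array(us))
assert abs(r) < 1e-12
```

Selecting among the null-space basis vectors (or a linear combination of them) is exactly the design freedom discussed below.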
In order to study the influence of f, d on residual generator (8.56), let f(k) ≠ 0, d(k) ≠ 0. It is straightforward that
y_{s}\left( k \right) = H_{o,s}x\left( k - s \right) + H_{u,s}u_{s}\left( k \right) + H_{f,s}f_{s}\left( k \right) + H_{d,s}d_{s}\left( k \right)
where f_s(k), d_s(k) are built from f(k − s), …, f(k) and d(k − s), …, d(k) in analogy to u_s(k), and H_{f,s}, H_{d,s} have the same block Toeplitz structure as H_{u,s} with (E_f, F_f) and (E_d, F_d) in place of (B, D).
Constructing a residual generator according to (8.56) finally results in
r\left( k \right) = v_{s}\left( H_{f,s}f_{s}\left( k \right) + H_{d,s}d_{s}\left( k \right) \right),\quad v_{s} \in P_{s}.
We see that the design parameter of the parity relation based residual generator is the parity vector, whose selection has a decisive impact on the performance of the residual generator.
Remark 8.2 One of the significant properties of parity relation based residual generators, also widely viewed as the main advantage over the observer-based approaches, is that the design can be carried out in a straightforward manner. In fact, it only deals with solutions of linear equations or linear optimization problems. In contrast, the implementation form (8.56) is surely not ideal for an online realization, since it is presented in an explicit form, and thus not only the temporal but also the past measurement and input data are needed and have to be recorded.
8.2.2.5 Kernel Representation and Parameterization of Residual Generators
In the model-based FD study, the FDF, DO and Kalman filter based residual
gener-ators are called closed-loop configurated, since a feedback of the
residual signal is embedded in the system configuration and the computation is
realized in a recursive form. Differently, the parity space residual generator
is open-loop structured. In fact, it is an FIR (finite impulse response) filter.
Below, we introduce a general form for all types of LTI residual generators,
which is also called parameterization of residual generators.
A fundamental property of the LCF is that in the fault- and noise-free case
\forall u,\quad \begin{bmatrix}
- \hat{N}\left( z \right) & \hat{M}\left( z \right) \\
\end{bmatrix}\begin{bmatrix}
u\left( z \right) \\
y\left( z \right) \\
\end{bmatrix} = 0.
Equation (8.61) is called the kernel representation (KR) of the system under consideration and is useful in parameterizing all residual generators. For our purpose, we introduce below a more general definition of kernel representation.
Definition 8.3 Given system (8.2), (8.3), a stable linear system $$\mathcal{K}$$ driven by u(z), y(z) and satisfying
\forall u\left( z \right),\quad r\left( z \right) = \mathcal{K}\begin{bmatrix}
u\left( z \right) \\
y\left( z \right) \\
\end{bmatrix} = 0
is called a stable kernel representation (SKR) of (8.2), (8.3).
Consider now model (8.18), (8.19) with unknown input vectors. The parameterization forms of all LTI residual generators and their dynamics are described by
r\left( z \right) = R\left( z \right)\begin{bmatrix}
- \hat{N}\left( z \right) & \hat{M}\left( z \right) \\
\end{bmatrix}\begin{bmatrix}
u\left( z \right) \\
y\left( z \right) \\
\end{bmatrix} = R\left( z \right)\left( {\hat{N}}_{d}\left( z \right)d\left( z \right) + {\hat{N}}_{f}\left( z \right)f\left( z \right) \right)
where
{\hat{N}}_{d}\left( z \right) = \left( A - LC,E_{d} - LF_{d},C,F_{d} \right),\quad {\hat{N}}_{f}\left( z \right) = \left( A - LC,E_{f} - LF_{f},C,F_{f} \right)
and R(z)(≠ 0) is a stable parameterization matrix called the post-filter. Moreover, in order to avoid loss of information about the faults to be detected, the condition rank(R(z)) = m is to be satisfied.
It has been demonstrated that all LTI residual generators can be expressed by (8.63), while their dynamics with respect to the unknown inputs and faults are parameterized by (8.64). Moreover, it holds
\begin{bmatrix}
- \hat{N}\left( z \right) & \hat{M}\left( z \right) \\
\end{bmatrix}\begin{bmatrix}
u\left( z \right) \\
y\left( z \right) \\
\end{bmatrix} = y\left( z \right) - \hat{y}\left( z \right).
With $$\hat{y}$$ delivered by a full-order observer as an estimate of y, we can apply an FDF,
\hat{x}\left( k + 1 \right) = A\hat{x}\left( k \right) + Bu\left( k \right) + L\left( y\left( k \right) - \hat{y}\left( k \right) \right),\quad \hat{y}\left( k \right) = C\hat{x}\left( k \right) + Du\left( k \right),
for the online realization of (8.63).
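The parameterization can be illustrated with a constant post-filter R acting on the FDF output estimation error; the first-order example below uses made-up values and shows that the post-filter only reshapes the residual, while the kernel property r → 0 for arbitrary u in the fault-free case is preserved.

```python
import numpy as np

# Sketch of r(z) = R(z)[-Nhat(z) Mhat(z)][u; y] with a constant post-filter R;
# the first-order system and all numbers are illustrative only.
A, B, C, D, L = 0.7, 1.0, 1.0, 0.0, 0.4
R = 2.0                                  # constant post-filter, rank(R) = m = 1

x, xh = 1.0, 0.0                         # plant and observer start differently
r_fdf, r_par = [], []
for k in range(30):
    u = np.sin(0.2 * k)
    y = C * x + D * u                    # fault- and noise-free process
    e = y - (C * xh + D * u)             # output estimation error y - yhat
    r_fdf.append(e)                      # residual for R = I (the FDF case)
    r_par.append(R * e)                  # parameterized residual
    x = A * x + B * u
    xh = A * xh + B * u + L * e

# the SKR property: the residual vanishes asymptotically for any u
assert abs(r_fdf[-1]) < 1e-10 and abs(r_par[-1]) < 1e-10
```

A dynamic R(z) would be realized as a further stable LTI filter acting on e(k); the rank condition on R(z) guarantees that no fault direction is annihilated.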
As a summary, we present a theorem which immediately follows from Definition 8.3 and the residual generator parameterization.
Theorem 8.1 Given process model (8.18), (8.19), an LTI dynamic system is a residual generator if and only if it is an SKR of (8.2), (8.3).
8.3 I/O Data Models
In order to connect analytical models and process data, we now introduce
different I/O data models. They are essential in our subsequent study and
build the link between the model-based FD schemes introduced in the previous
sections and the data-driven design methods to be introduced below. For our
purpose, the following LTI process model is assumed to be the underlying
model form adopted in our study
x\left( k + 1 \right) = Ax\left( k \right) + Bu\left( k \right) + w\left( k \right)
y\left( k \right) = Cx\left( k \right) + Du\left( k \right) + v\left( k \right)
where $$u \in \mathcal{R}^{l}$$, $$y \in \mathcal{R}^{m}$$ and $$x \in \mathcal{R}^{n}$$; $$w \in \mathcal{R}^{n}$$ and $$v \in \mathcal{R}^{m}$$ denote noise sequences that are normally distributed and statistically independent of u and x(0).
Let $$\omega(k) \in \mathcal{R}^{\xi}$$ be a data vector. We introduce the following notations:
\omega_{s}\left( k \right) = \begin{bmatrix}
\omega\left( k - s \right) \\
\vdots \\
\omega\left( k \right) \\
\end{bmatrix} \in \mathcal{R}^{\left( s + 1 \right)\xi},\quad \Omega_{k} = \begin{bmatrix}
\omega\left( k \right) & \cdots & \omega\left( k + N - 1 \right) \\
\end{bmatrix} \in \mathcal{R}^{\xi \times N}
\Omega_{k,s} = \begin{bmatrix}
\omega_{s}\left( k \right) & \cdots & \omega_{s}\left( k + N - 1 \right) \\
\end{bmatrix} = \begin{bmatrix}
\Omega_{k - s} \\
\vdots \\
\Omega_{k} \\
\end{bmatrix} \in \mathcal{R}^{\left( s + 1 \right)\xi \times N}
where s, N are some (large) integers. In our study, ω(k) can be y(k), u(k), x(k), and ξ represents m or l or n given in (8.65), (8.66). The first I/O data model, described by
Y_{k,s} = \Gamma_{s}X_{k - s} + H_{u,s}U_{k,s} + H_{w,s}W_{k,s} + V_{k,s} \in \mathcal{R}^{\left( s + 1 \right)m \times N},
follows directly from the I/O model (8.54) introduced in the study of the parity space scheme, where $$H_{u,s} \in \mathcal{R}^{(s+1)m \times (s+1)l}$$ and $$H_{w,s}W_{k,s} + V_{k,s}$$ represents the influence of the noise vectors on the process output, with $$H_{w,s}$$ having the same structure as $$H_{u,s}$$ and $$W_{k,s}$$, $$V_{k,s}$$ as defined in (8.67), (8.68).
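The stacked data model (8.69) can be checked numerically in the noise-free case (so that H_{w,s}W_{k,s} + V_{k,s} = 0); the system, the horizon s and the data lengths below are made-up illustrative choices.

```python
import numpy as np

# Noise-free check of Y_{k,s} = Gamma_s X_{k-s} + H_{u,s} U_{k,s}
# (illustrative 2nd-order system with w = v = 0, numbers made up).
A = np.array([[0.7, 0.2], [0.0, 0.5]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
s, N, T = 2, 5, 12

rng = np.random.default_rng(1)
U = rng.normal(size=(1, T))
X = np.zeros((2, T + 1))
Y = np.zeros((1, T))
for k in range(T):
    Y[:, k] = C @ X[:, k] + D @ U[:, k]
    X[:, k + 1] = A @ X[:, k] + B @ U[:, k]

def stack(W, k, s, N):
    """Omega_{k,s}: column j is w_s(k + j) = [w(k+j-s); ...; w(k+j)]."""
    return np.vstack([W[:, k - s + i:k - s + i + N] for i in range(s + 1)])

k = s
Yks, Uks = stack(Y, k, s, N), stack(U, k, s, N)
Gam = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(s + 1)])
Hu = np.zeros((s + 1, s + 1))
for i in range(s + 1):
    Hu[i, i] = D.item()
    for j in range(i):
        Hu[i, j] = (C @ np.linalg.matrix_power(A, i - j - 1) @ B).item()
Xks = X[:, k - s:k - s + N]
assert np.allclose(Yks, Gam @ Xks + Hu @ Uks)
```

With noise switched on, the additional terms H_{w,s}W_{k,s} + V_{k,s} appear exactly as in (8.69).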
In the SIM framework, the so-called innovation form, instead of (8.69), is often applied to build an I/O model. The core of the innovation form is a (steady) Kalman filter, which is written as
\hat{x}\left( k + 1 \right) = A\hat{x}\left( k \right) + Bu\left( k \right) + K\left( y\left( k \right) - \hat{y}\left( k \right) \right),\quad \hat{y}\left( k \right) = C\hat{x}\left( k \right) + Du\left( k \right)
with the innovation sequence $$e(k) := y(k) - \hat{y}(k)$$ being a white noise sequence and K the Kalman filter gain matrix. Based on (8.70), the I/O relation of the process can be alternatively written as
\hat{x}\left( k + 1 \right) = A_{K}\hat{x}\left( k \right) + B_{K}u\left( k \right) + Ky\left( k \right)
y\left( k \right) = C\hat{x}\left( k \right) + Du\left( k \right) + e\left( k \right),\quad A_{K} = A - KC,\quad B_{K} = B - KD.
The following two I/O data models follow from (8.71), (8.72):
8.4 Notes and References
In order to apply the MVA technique to solving FD problems in dynamic
processes, numerous methods have been developed in the last two decades.
Among them, dynamic PCA/PLS [1–3], recursive implementation of PCA/PLS [4,
5], fast moving window PCA [6] and multiple-mode PCA [7] are widely used in
the research and applications in recent years.
SIM is a well-established technique and widely applied in process identification [8–11]. The application of the SIM technique to the FDI study is new and was first proposed in [12–15].
In Sect. 8.2, the basics of the model-based FDI framework have been
reviewed. They can be found in the monographs [1, 16–25] and in the survey
papers [26–29]. The first work on FDF has been reported by Beard and Jones
in [30, 31], and Chow and Willsky have proposed the first optimal FDI
solution using the parity space scheme [32]. The reader is referred to
Chaps. 3 and 5 in [25] for a systematic handling of the issues on process
and fault modeling and model-based residual generation schemes.
The concept of SKR will play an important role in our subsequent studies. In fact, residual generator design amounts to finding an SKR, as given in Theorem 8.1. The SKR definition given in Definition 8.3 is similar to the one introduced in [33] for nonlinear systems.
Preliminary Translation
8.2.1.2 Coprime Factorization Techniques
Coprime factorization of a transfer function (matrix) gives a further system representation form, which will be used intensively in our subsequent study. Roughly speaking, a coprime factorization over $$\mathcal{R}\mathcal{H}_{\infty}$$ is to factorize a transfer matrix into two stable and coprime transfer matrices.
Definition 8.1 Two stable transfer matrices $$\hat{M}(z)$$, $$\hat{N}(z)$$ are called left coprime if there exist two stable transfer matrices $$\hat{X}(z)$$ and $$\hat{Y}(z)$$ satisfying (8.5):
\begin{bmatrix}
\hat{M}\left( z \right) & \hat{N}\left( z \right) \\
\end{bmatrix}\begin{bmatrix}
\hat{X}\left( z \right) \\
\hat{Y}\left( z \right) \\
\end{bmatrix} = I
Similarly, two stable transfer matrices M(z), N(z) are right coprime if there exist two stable matrices X(z), Y(z) such that
\begin{bmatrix}
X\left( z \right) & Y\left( z \right) \\
\end{bmatrix}\begin{bmatrix}
M\left( z \right) \\
N\left( z \right) \\
\end{bmatrix} = I.
\hat{M}\left( z \right) = \left( A - LC, - L,C,I \right),\quad \hat{N}\left( z \right) = \left( A - LC,B - LD,C,D \right)
M\left( z \right) = \left( A + BF,B,F,I \right),\quad N\left( z \right) = \left( A + BF,B,C + DF,D \right)
X\left( z \right) = \left( A - LC, - \left( B - LD \right),F,I \right),\quad Y\left( z \right) = \left( A - LC, - L,F,0 \right)
G\left( z \right) = {\hat{M}}^{- 1}\left( z \right)\hat{N}\left( z \right) = N\left( z \right)M^{- 1}\left( z \right)
\begin{bmatrix}
X\left( z \right) & Y\left( z \right) \\
- \hat{N}\left( z \right) & \hat{M}\left( z \right) \\
\end{bmatrix}\begin{bmatrix}
M\left( z \right) & - \hat{Y}\left( z \right) \\
N\left( z \right) & \hat{X}\left( z \right) \\
\end{bmatrix} = \begin{bmatrix}
I & 0 \\
0 & I \\
\end{bmatrix}
r\left( z \right) = \begin{bmatrix}
- \hat{N}\left( z \right) & \hat{M}\left( z \right) \\
\end{bmatrix}\begin{bmatrix}
u\left( z \right) \\
y\left( z \right) \\
\end{bmatrix}
x\left( k + 1 \right) = Ax\left( k \right) + Bu\left( k \right) + E_{d}d\left( k \right) + \eta\left( k \right)
y\left( k \right) = Cx\left( k \right) + Du\left( k \right) + F_{d}d\left( k \right) + v\left( k \right)
y\left( z \right) = G_{yu}\left( z \right)u\left( z \right) + G_{yd}\left( z \right)d\left( z \right) + G_{y\nu}\left( z \right)\nu\left( z \right)
y\left( z \right) = G_{yu}\left( z \right)u\left( z \right) + G_{yd}\left( z \right)d\left( z \right) + G_{yf}\left( z \right)f\left( z \right)
x\left( k + 1 \right) = Ax\left( k \right) + Bu\left( k \right) + E_{d}d\left( k \right) + E_{f}f\left( k \right)
y\left( k \right) = Cx\left( k \right) + Du\left( k \right) + F_{d}d\left( k \right) + F_{f}f\left( k \right)
G_{\text{yf}}\left( z \right) = F_{f} + C\left( zI - A \right)^{- 1}E_{f}
y\left( k \right) = Cx\left( k \right) + Du\left( k \right) + F_{d}d\left( k \right) + f_{S}\left( k \right)
x\left( k + 1 \right) = Ax\left( k \right) + B\left( u\left( k \right) + f_{A} \right) + E_{d}d(k)
y\left( k \right) = Cx\left( k \right) + D\left( u\left( k \right) + f_{A} \right) + F_{d}d\left( k \right)
f = \begin{bmatrix}
f_{A} \\
f_{P} \\
f_{S} \\
\end{bmatrix},\quad E_{f} = \begin{bmatrix}
B & E_{P} & 0 \\
\end{bmatrix},\quad F_{f} = \begin{bmatrix}
D & F_{P} & I \\
\end{bmatrix}
\hat{x}\left( k + 1 \right) = A\hat{x}\left( k \right) + Bu\left( k \right) + L\left( y\left( k \right) - C\hat{x}\left( k \right) - Du\left( k \right) \right)
r\left( k \right) = y\left( k \right) - \hat{y}\left( k \right) = y\left( k \right) - C\hat{x}\left( k \right) - Du\left( k \right)
e\left( k \right) = x\left( k \right) - \hat{x}\left( k \right)
e\left( k + 1 \right) = \left( A - LC \right)e\left( k \right) + \left( E_{d} - LF_{d} \right)d\left( k \right) + \left( E_{f} - LF_{f} \right)f\left( k \right)
r\left( k \right) = Ce\left( k \right) + F_{d}d\left( k \right) + F_{f}f\left( k \right)
r\left( z \right) = V\left( y\left( z \right) - \hat{y}\left( z \right) \right)
z\left( k + 1 \right) = Gz\left( k \right) + Hu\left( k \right) + Ly\left( k \right)
r\left( k \right) = Vy\left( k \right) - Wz\left( k \right) - Qu\left( k \right)
\text{I.}\quad G\ \text{is stable},\qquad \text{II.}\quad TA - GT = LC,\ H = TB - LD,\qquad \text{III.}\quad VC - WT = 0,\ Q = VD
\forall u, x\left( 0 \right),\quad \lim_{k \rightarrow \infty} r\left( k \right) = 0
e\left( k + 1 \right) = Ge\left( k \right) + \left( TE_{d} - LF_{d} \right)d\left( k \right) + \left( TE_{f} - LF_{f} \right)f\left( k \right)
r\left( k \right) = Ve\left( k \right) + VF_{d}d\left( k \right) + VF_{f}f(k)
\varepsilon\begin{bmatrix}
\eta\left( i \right)\eta^{T}\left( j \right) & \eta\left( i \right)v^{T}\left( j \right) \\
v\left( i \right)\eta^{T}\left( j \right) & v\left( i \right)v^{T}\left( j \right) \\
\end{bmatrix} = \begin{bmatrix}
\Sigma_{\eta} & S_{\eta v} \\
S_{v\eta} & \Sigma_{v} \\
\end{bmatrix}\delta_{ij},\quad \delta_{ij} = \begin{cases}
1, & i = j \\
0, & i \neq j \\
\end{cases}
\Sigma_{v} > 0,\Sigma_{\eta} \geq 0,\varepsilon\left\lbrack \eta\left( k \right) \right\rbrack = 0,\varepsilon\left\lbrack v\left( k \right) \right\rbrack = 0
\varepsilon\left\lbrack x\left( 0 \right) \right\rbrack = \overline{x},\varepsilon\left\lbrack \left( x\left( 0 \right) - \overline{x} \right)\left( x\left( 0 \right) - \overline{x} \right)^{T} \right\rbrack = P_{o}
\hat{x}\left( 0 \right) = \overline{x}
\hat{x}\left( k + 1 \right) = A\hat{x}\left( k \right) + Bu\left( k \right) + K\left( k \right)\left( y\left( k \right) - C\hat{x}\left( k \right) - Du\left( k \right) \right)
P\left( 0 \right) = P_{0}
P\left( k + 1 \right) = AP\left( k \right)A^{T} - K\left( k \right)R_{e}\left( k \right)K^{T}\left( k \right) + \Sigma_{\eta}
K\left( k \right) = \left( AP\left( k \right)C^{T} + S_{\eta v} \right)R_{e}^{- 1}\left( k \right)
R_{e}\left( k \right) = \Sigma_{v} + CP\left( k \right)C^{T}
P\left( k \right) = \varepsilon\left\lbrack \left( x\left( k \right) - \hat{x}\left( k \right) \right)\left( x\left( k \right) - \hat{x}\left( k \right) \right)^{T} \right\rbrack
P\left( k \right) = \varepsilon\left\lbrack \left( x\left( k \right) - \hat{x}\left( k \right) \right)\left( x\left( k \right) - \hat{x}\left( k \right) \right)^{T} \right\rbrack \Longrightarrow \min
\varepsilon\left( e\left( k \right)e^{T}\left( k \right) \right) = R_{e}\left( k \right) = \Sigma_{v} + CP\left( k \right)C^{T}
\lim_{k \rightarrow \infty}K\left( k \right) = K = \text{const.}
K = \left( APC^{T} + S_{\eta v} \right)R_{e}^{- 1}
P = \lim_{k \rightarrow \infty}P\left( k \right),\quad R_{e} = \Sigma_{v} + CPC^{T}
P = APA^{T} - KR_{e}K^{T} + \Sigma_{\eta}
\hat{x}\left( k + 1 \right) = A\hat{x}\left( k \right) + Bu\left( k \right) + K\left( y\left( k \right) - C\hat{x}\left( k \right) - Du\left( k \right) \right)
r\left( k \right) = y\left( k \right) - C\hat{x}\left( k \right) - Du\left( k \right)
y\left( k - s \right) = Cx\left( k - s \right) + Du\left( k - s \right)
y\left( k - s + 1 \right) = Cx\left( k - s + 1 \right) + Du\left( k - s + 1 \right) = CAx\left( k - s \right) + CBu\left( k - s \right) + Du\left( k - s + 1 \right)
y\left( k - s + 2 \right) = CA^{2}x\left( k - s \right) + CABu\left( k - s \right) + CBu\left( k - s + 1 \right) + Du\left( k - s + 2 \right),\ldots,
y\left( k \right) = CA^{s}x\left( k - s \right) + CA^{s - 1}Bu\left( k - s \right) + \cdots + CBu\left( k - 1 \right) + Du\left( k \right)
y_{s}\left( k \right) = \begin{bmatrix}
y\left( k - s \right) \\
y\left( k - s + 1 \right) \\
\vdots \\
y\left( k \right) \\
\end{bmatrix},\quad u_{s}\left( k \right) = \begin{bmatrix}
u\left( k - s \right) \\
u\left( k - s + 1 \right) \\
\vdots \\
u\left( k \right) \\
\end{bmatrix}
H_{o,s} = \begin{bmatrix}
C \\
CA \\
\vdots \\
CA^{s} \\
\end{bmatrix},\quad H_{u,s} = \begin{bmatrix}
D & 0 & \cdots & 0 \\
CB & D & \ddots & \vdots \\
\vdots & \ddots & \ddots & 0 \\
CA^{s - 1}B & \cdots & CB & D \\
\end{bmatrix}
y_{s}\left( k \right) = H_{o,s}x\left( k - s \right) + H_{u,s}u_{s}\left( k \right)
\text{rank}\left( H_{o,s} \right) \leq n < \text{the row number of } H_{o,s} = \left( s + 1 \right)m
v_{s}H_{o,s} = 0
r\left( k \right) = v_{s}\left( y_{s}\left( k \right) - H_{u,s}u_{s}\left( k \right) \right)
r\left( k \right) = v_{s}\left( y_{s}\left( k \right) - H_{u,s}u_{s}\left( k \right) \right) = v_{s}H_{o,s}x\left( k - s \right) = 0
P_{s} = \left\{ v_{s} \mid v_{s}H_{o,s} = 0 \right\}
y_{s}\left( k \right) = H_{o,s}x\left( k - s \right) + H_{u,s}u_{s}\left( k \right) + H_{f,s}f_{s}\left( k \right) + H_{d,s}d_{s}\left( k \right)
f_{s}\left( k \right) = \begin{bmatrix}
f\left( k - s \right) \\
f\left( k - s + 1 \right) \\
\vdots \\
f\left( k \right) \\
\end{bmatrix},\quad H_{f,s} = \begin{bmatrix}
F_{f} & 0 & \cdots & 0 \\
CE_{f} & F_{f} & \ddots & \vdots \\
\vdots & \ddots & \ddots & 0 \\
CA^{s - 1}E_{f} & \cdots & CE_{f} & F_{f} \\
\end{bmatrix}
d_{s}\left( k \right) = \begin{bmatrix}
d\left( k - s \right) \\
d\left( k - s + 1 \right) \\
\vdots \\
d\left( k \right) \\
\end{bmatrix},\quad H_{d,s} = \begin{bmatrix}
F_{d} & 0 & \cdots & 0 \\
CE_{d} & F_{d} & \ddots & \vdots \\
\vdots & \ddots & \ddots & 0 \\
CA^{s - 1}E_{d} & \cdots & CE_{d} & F_{d} \\
\end{bmatrix}
r\left( k \right) = v_{s}\left( H_{f,s}f_{s}\left( k \right) + H_{d,s}d_{s}\left( k \right) \right),\quad v_{s} \in P_{s}
\forall u,\quad \begin{bmatrix}
- \hat{N}\left( z \right) & \hat{M}\left( z \right) \\
\end{bmatrix}\begin{bmatrix}
u\left( z \right) \\
y\left( z \right) \\
\end{bmatrix} = 0
\forall u\left( z \right),\quad r\left( z \right) = \mathcal{K}\begin{bmatrix}
u\left( z \right) \\
y\left( z \right) \\
\end{bmatrix} = 0
r\left( z \right) = R\left( z \right)\begin{bmatrix}
- \hat{N}\left( z \right) & \hat{M}\left( z \right) \\
\end{bmatrix}\begin{bmatrix}
u\left( z \right) \\
y\left( z \right) \\
\end{bmatrix} = R\left( z \right)\left( {\hat{N}}_{d}\left( z \right)d\left( z \right) + {\hat{N}}_{f}\left( z \right)f\left( z \right) \right)
{\hat{N}}_{d}\left( z \right) = \left( A - LC,E_{d} - LF_{d},C,F_{d} \right),\quad {\hat{N}}_{f}\left( z \right) = \left( A - LC,E_{f} - LF_{f},C,F_{f} \right)
\begin{bmatrix}
- \hat{N}\left( z \right) & \hat{M}\left( z \right) \\
\end{bmatrix}\begin{bmatrix}
u\left( z \right) \\
y\left( z \right) \\
\end{bmatrix} = y\left( z \right) - \hat{y}\left( z \right)
\hat{x}\left( k + 1 \right) = A\hat{x}\left( k \right) + Bu\left( k \right) + L\left( y\left( k \right) - \hat{y}\left( k \right) \right),\quad \hat{y}\left( k \right) = C\hat{x}\left( k \right) + Du\left( k \right)
x\left( k + 1 \right) = Ax\left( k \right) + Bu\left( k \right) + w\left( k \right)
y\left( k \right) = Cx\left( k \right) + Du\left( k \right) + v\left( k \right)
w_{s}\left( k \right) = \begin{bmatrix}
w\left( k - s \right) \\
\vdots \\
w\left( k \right) \\
\end{bmatrix} \in \mathcal{R}^{\left( s + 1 \right)\xi},\quad \Omega_{k} = \begin{bmatrix}
w\left( k \right) & \cdots & w\left( k + N - 1 \right) \\
\end{bmatrix} \in \mathcal{R}^{\xi \times N}
\Omega_{k,s} = \begin{bmatrix}
w_{s}\left( k \right) & \cdots & w_{s}\left( k + N - 1 \right) \\
\end{bmatrix} = \begin{bmatrix}
\Omega_{k - s} \\
\vdots \\
\Omega_{k} \\
\end{bmatrix} \in \mathcal{R}^{\left( s + 1 \right)\xi \times N}
Y_{k,s} = \Gamma_{s}X_{k - s} + H_{u,s}U_{k,s} + H_{w,s}W_{k,s} + V_{k,s} \in \mathcal{R}^{\left( s + 1 \right)m \times N}
\Gamma_{s} = \begin{bmatrix}
C \\
CA \\
\vdots \\
CA^{s} \\
\end{bmatrix} \in \mathcal{R}^{\left( s + 1 \right)m \times n},\quad H_{u,s} = \begin{bmatrix}
D & 0 & & \\
CB & \ddots & \ddots & \\
\vdots & \ddots & \ddots & 0 \\
CA^{s - 1}B & \cdots & CB & D \\
\end{bmatrix}
\hat{x}\left( k + 1 \right) = A\hat{x}\left( k \right) + Bu\left( k \right) + K\left( y\left( k \right) - \hat{y}\left( k \right) \right),\quad \hat{y}\left( k \right) = C\hat{x}\left( k \right) + Du\left( k \right)
\hat{x}\left( k + 1 \right) = A\hat{x}\left( k \right) + Bu\left( k \right) + Ke\left( k \right) = A_{K}\hat{x}\left( k \right) + B_{K}u\left( k \right) + Ky\left( k \right)
y\left( k \right) = C\hat{x}\left( k \right) + Du\left( k \right) + e\left( k \right),\quad A_{K} = A - KC,\quad B_{K} = B - KD
Y_{k,s} = \Gamma_{s}{\hat{X}}_{k - s} + H_{u,s}U_{k,s} + H_{e,s}E_{k,s},\quad H_{e,s} = \begin{bmatrix}
I & 0 & & \\
CK & \ddots & \ddots & \\
\vdots & \ddots & \ddots & 0 \\
CA^{s - 1}K & \cdots & CK & I \\
\end{bmatrix}
\left( I - H_{y,s}^{K} \right)Y_{k,s} = \Gamma_{s}^{K}{\hat{X}}_{k - s} + H_{u,s}^{K}U_{k,s},\quad \Gamma_{s}^{K} = \begin{bmatrix}
C \\
CA_{K} \\
\vdots \\
CA_{K}^{s} \\
\end{bmatrix}
H_{y,s}^{K} = \begin{bmatrix}
0 & 0 & & \\
CK & \ddots & \ddots & \\
\vdots & \ddots & \ddots & 0 \\
CA_{K}^{s - 1}K & \cdots & CK & 0 \\
\end{bmatrix},\quad H_{u,s}^{K} = \begin{bmatrix}
0 & 0 & & \\
CB_{K} & \ddots & \ddots & \\
\vdots & \ddots & \ddots & 0 \\
CA_{K}^{s - 1}B_{K} & \cdots & CB_{K} & 0 \\
\end{bmatrix}
with the same structure as given in (8.67), (8.68).
8.4 Notes and References
In order to apply the MVA technique to solving FD problems in dynamic processes, numerous methods have been developed over the last two decades. Among them, dynamic PCA/PLS [1–3], recursive implementations of PCA/PLS [4, 5], fast moving-window PCA [6] and multiple-mode PCA [7] have been widely used in research and applications in recent years.
SIM is a well-established technique and widely applied in process identification [8–11]. The application of the SIM technique to the FDI study is new and was first proposed in [12–15].
In Sect. 8.2, the basics of the model-based FDI framework have been reviewed. They can be found in the monographs [1, 16–25] and in the survey papers [26–29]. The first work on FDF was reported by Beard and Jones in [30, 31], and Chow and Willsky proposed the first optimal FDI solution using the parity space scheme [32]. The reader is referred to Chaps. 3 and 5 in [25] for a systematic handling of the issues on process and fault modeling and model-based residual generation schemes.
The concept of SKR will play an important role in our subsequent studies. In fact, residual generator design amounts to finding an SKR, as given in Theorem 8.1. The SKR definition given in Definition 8.3 is similar to the one introduced in [33] for nonlinear systems.