Chapter 1

The study of digital communications starts with the following fundamental problem.

Given a set of \(M\) messages \(\{s_i(t), 0 \leq t < T \}_{i=0}^{M-1}\), where \(T\) is the message duration, the transmitter chooses a message \(s(t)=s_i(t)\) for transmission in each message interval with probability \(p_i = \text{Pr}(s(t) = s_i(t))\), \(i = 0, 1, \ldots, (M-1)\), such that \(\sum_{i=0}^{M-1} p_i = 1\).

The message is transmitted over the additive white Gaussian noise (AWGN) channel.

The received signal is the summation of the transmitted signal and the AWGN, and is given by

\[r(t)=s(t)+n(t), 0\leq t<T. \]
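To make the model concrete, here is a minimal numerical sketch of \(r(t) = s(t) + n(t)\). It assumes \(M = 2\) hypothetical rectangular-pulse messages, a duration \(T = 1\) s, equal priors, and an illustrative noise level; none of these values come from the notes, and the noise is approximated by independent Gaussian samples on a discrete time grid.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1.0                       # message duration
fs = 1000                     # samples per second (discrete approximation)
t = np.arange(0, T, 1 / fs)   # time grid on [0, T)

# Two example messages s_0(t) and s_1(t) (antipodal rectangular pulses).
messages = np.array([np.ones_like(t), -np.ones_like(t)])
p = np.array([0.5, 0.5])      # prior probabilities p_i, summing to 1

# Transmitter: pick message index i with probability p_i.
i = rng.choice(len(messages), p=p)
s = messages[i]

# AWGN channel: add white Gaussian noise samples n(t).
sigma = 0.5
n = sigma * rng.standard_normal(len(t))
r = s + n                     # received waveform r(t) = s(t) + n(t)

print(f"sent message index i = {i}, first received samples: {r[:3]}")
```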

Problem (1): How do we design the receiver to determine which message \(s_i(t)\), \(i = 0, 1, \ldots, (M-1)\), was sent, given the received noisy signal \(r(t)\), \(0 \leq t < T\)?
The generally accepted solution is the receiver that makes this decision with the minimum probability of decision error.

Problem (2): What is the error probability performance of the optimum receiver? This is a very important problem, and will be examined in detail.
In general, the signals can be coded.

Codes are used to improve system performance. The ultimate performance achievable by any code on the AWGN channel is determined by the Shannon capacity of the channel. Shannon derived the capacity result in 1948, and researchers then began searching for codes that could achieve this capacity. In 1993, turbo codes were introduced and shown to perform very close to the capacity limit. Earlier, in the early 1960s, low-density parity-check (LDPC) codes had been proposed by Gallager, but they were shown to be capacity-approaching only in the late 1990s.

The AWGN channel model is the simplest and most basic of the channel models encountered in the study of digital communications. In wireless communications, for instance, the channel is a fading channel rather than a pure AWGN channel.

We will begin with the AWGN model.

The optimum receiver design problem as posed above is cast in terms of waveforms. To solve the problem analytically with the most generality, it is necessary to convert the waveform problem into a geometric problem in which we deal with vectors instead of waveforms. This leads to a solution which is simple and intuitive, and provides the most general setting for discussing the optimum receiver design problem, for both coded and uncoded signals. Therefore, we will begin here with a study of the vector representations of signals and noise, which will enable us to construct a geometric picture of the fundamental problem of digital communications.

Vector Representation

We want to express all signals and noise waveforms as N-dimensional Euclidean vectors as follows:

\[s(t) \leftrightarrow \mathbf{s} =[s_0, s_1, ..., s_{N−1}]^T \]

\[n(t) \leftrightarrow \mathbf{n} =[n_0, n_1, ..., n_{N−1}]^T \]

\[r(t) \leftrightarrow \mathbf{r} =[r_0, r_1, ..., r_{N−1}]^T \]

\[r(t) = s(t)+n(t) \leftrightarrow \mathbf{r}=\mathbf{s} + \mathbf{n} \]

The dimension \(N\) will be determined later.
\(L_2[0, T)\) is the vector space of square-integrable functions on \([0, T)\), i.e.,

\[\int_0^T |x(t)|^2 dt < \infty, \text{ for all } x(t) \in L_2[0,T). \]

If \(x(t)\) is a signal, then

  • \(|x(t)|^2\) is the normalized power and
  • \(\int_0^T |x(t)|^2 dt\) is the energy in \(x(t)\) over \([0, T)\) (a numerical sketch follows this list).

We can think of \(L_2[0, T)\) as the space of all finite-energy signals over the interval \([0, T)\). \(L_2[0, T)\) is a vector space:

  • If \(x(t) \in L_2[0, T)\) and \(y(t) \in L_2[0, T)\), then \(\alpha_1 x(t) + \alpha_2 y(t) \in L_2[0, T)\) for any finite scalars \(\alpha_1\) and \(\alpha_2\).
  • \(0 \in L_2[0, T)\).
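As a minimal sketch of the energy integral, the following approximates \(\int_0^T |x(t)|^2 dt\) by a Riemann sum for a hypothetical signal \(x(t) = \sin(2\pi t)\) on \([0, 1)\); the waveform and sampling rate are illustrative choices, not part of the notes.

```python
import numpy as np

T = 1.0
fs = 10_000
t = np.arange(0, T, 1 / fs)

x = np.sin(2 * np.pi * t)          # example finite-energy signal in L2[0, T)
Ex = np.sum(np.abs(x) ** 2) / fs   # approximates the integral of |x(t)|^2 dt

print(f"E_x ~= {Ex:.4f} (exact value for sin(2*pi*t) on [0,1) is 0.5)")
```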

On \(L_2[0, T)\), we introduce an inner product such that if \(x(t) \in L_2[0, T)\) and \(y(t) \in L_2[0, T)\), then the inner product of \(\mathbf{x}\) and \(\mathbf{y}\) is defined as the correlation between \(x(t)\) and \(y(t)\), i.e.,

\[<\mathbf{x}, \mathbf{y}> =\int_0^T x(t)y^*(t)dt. \]

This inner product leads to the concept of orthogonality: \(\mathbf{x}\) and \(\mathbf{y}\) are orthogonal, written \(\mathbf{x} \perp \mathbf{y}\), if \(<\mathbf{x}, \mathbf{y}> =0\).
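The sketch below approximates the correlation integral \(<\mathbf{x}, \mathbf{y}> = \int_0^T x(t) y^*(t) dt\) numerically, assuming the hypothetical pair \(x(t) = \sin(2\pi t)\) and \(y(t) = \cos(2\pi t)\) on \([0, 1)\), which are orthogonal over a full period; these signals are chosen only for illustration.

```python
import numpy as np

T = 1.0
fs = 10_000
t = np.arange(0, T, 1 / fs)

x = np.sin(2 * np.pi * t)
y = np.cos(2 * np.pi * t)

def inner(a, b, fs=fs):
    """Riemann-sum approximation of the correlation integral over [0, T)."""
    return np.sum(a * np.conj(b)) / fs

print(f"<x, y> ~= {inner(x, y):.2e}  (close to 0, so x and y are orthogonal)")
print(f"<x, x> ~= {inner(x, x):.4f}  (the energy E_x)")
```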
A set of signals \(\{x_i(t), 0\leq t<T\}_{i=0}^{N-1}\) is an orthogonal set if

\[<\mathbf{x}_i, \mathbf{x}_j> = 0, \ i \neq j; \quad <\mathbf{x}_i, \mathbf{x}_j> \neq 0, \ i = j.\]

Given \(x(t) \in L_2[0, T)\), we have

\[\|\mathbf{x}\|^2 = <\mathbf{x}, \mathbf{x}> = \int_0^T |x(t)|^2 dt = E_x,\]

where \(\|\cdot\|\) denotes the norm and \(E_x\) is the energy in \(x(t)\).

We can normalize \(\mathbf{x}\) by dividing \(\mathbf{x}\) by \(\|\mathbf{x}\|\), i.e.,

\[\hat{\mathbf{x}} = \frac{\mathbf{x}}{\|\mathbf{x}\|},\]

such that \(\|\hat{\mathbf{x}}\| = 1\).

The set \(\{x_i(t), 0 \leq t < T\}_{i=0}^{N-1}\) is orthonormal if

\[<\mathbf{x}_i, \mathbf{x}_j> = \delta_{ij} = \begin{cases} 0, & i \neq j, \\ 1, & i = j, \end{cases}\]

where \(<\mathbf{x}_i, \mathbf{x}_i> = \int_0^T |x_i(t)|^2 dt = 1\) is the normalization condition.

More generally, in the complex case, we have

\[<\mathbf{x}, \mathbf{y}> = \int_0^T x(t) y^*(t) dt\]

and

\[<\mathbf{x}, \mathbf{x}> = \int_0^T x(t) x^*(t) dt = \int_0^T |x(t)|^2 dt = \text{energy},\]

where \((\cdot)^*\) denotes the complex conjugate.
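As a minimal sketch of normalization \(\hat{\mathbf{x}} = \mathbf{x}/\|\mathbf{x}\|\) and of the orthonormality condition \(<\mathbf{x}_i, \mathbf{x}_j> = \delta_{ij}\), the following builds a hypothetical two-signal set from sine and cosine on \([0, 1)\); the signals, duration, and sampling rate are illustrative assumptions.

```python
import numpy as np

T = 1.0
fs = 10_000
t = np.arange(0, T, 1 / fs)

def inner(a, b):
    # Riemann-sum approximation of <a, b> over [0, T)
    return np.sum(a * np.conj(b)) / fs

def normalize(x):
    # Divide x by its norm ||x|| = sqrt(<x, x>) so the result has unit energy
    return x / np.sqrt(inner(x, x))

x0 = normalize(np.sin(2 * np.pi * t))
x1 = normalize(np.cos(2 * np.pi * t))

# Matrix of inner products; for an orthonormal set it is (approximately) the identity.
gram = np.array([[inner(a, b) for b in (x0, x1)] for a in (x0, x1)])
print(np.round(gram, 6))
```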
To represent \(s(t)\) as \(\mathbf{s} = [s_0, s_1, \ldots, s_{N-1}]^T\) in some \(N\)-dimensional Euclidean space, we need an orthonormal basis \(\{\varphi_i(t), 0 \leq t < T\}_{i=0}^{N-1}\) such that we have

\[s(t) = \sum_{i=0}^{N-1} s_i \varphi_i(t), \quad 0 \leq t < T.\]

Given a vector space \(S\) of dimension \(N\), a basis for \(S\) is a set of \(N\) linearly independent vectors \(\{\mathbf{v}_i\}_{i=0}^{N-1}\) such that every \(\mathbf{v} \in S\) can be expressed as

\[\mathbf{v} = \sum_{i=0}^{N-1} \alpha_i \mathbf{v}_i,\]

where \(\{\alpha_i\}_{i=0}^{N-1}\) is a set of scalars. The set of vectors \(\{\mathbf{v}_i\}_{i=0}^{N-1}\) is linearly independent if

\[\sum_{i=0}^{N-1} \alpha_i \mathbf{v}_i = 0\]

implies that \(\alpha_i = 0\), for \(i = 0, 1, \ldots, N-1\).

In particular, with the orthonormal basis \(\{\varphi_k(t)\}_{k=0}^{N-1}\), every \(s_i(t) \in S\) can be expressed as

\[s_i(t) = \sum_{k=0}^{N-1} s_{ik} \varphi_k(t), \quad i = 0, 1, \ldots, (M-1),\]

where

\[s_{ik} = <\mathbf{s}_i, \boldsymbol{\varphi}_k> = \int_0^T s_i(t) \varphi_k^*(t) dt.\]
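The following minimal sketch illustrates this representation: it assumes a hypothetical orthonormal basis of two unit-energy sinusoids \(\varphi_0, \varphi_1\) on \([0, 1)\) and two example messages built from them (none of these waveforms come from the notes), recovers the coefficients \(s_{ik} = <\mathbf{s}_i, \boldsymbol{\varphi}_k>\) by projection, and reconstructs the waveforms from the coefficient vectors.

```python
import numpy as np

T = 1.0
fs = 10_000
t = np.arange(0, T, 1 / fs)

def inner(a, b):
    # Riemann-sum approximation of <a, b> over [0, T)
    return np.sum(a * np.conj(b)) / fs

# Orthonormal basis functions phi_0(t), phi_1(t) (unit energy, mutually orthogonal).
phi = np.array([np.sqrt(2) * np.sin(2 * np.pi * t),
                np.sqrt(2) * np.cos(2 * np.pi * t)])

# Example messages s_0(t), s_1(t) as linear combinations of the basis.
true_coeffs = np.array([[1.0, 0.0],
                        [0.5, -0.5]])
s = true_coeffs @ phi

# Recover the vector representation s_i = [s_i0, s_i1]^T by projection onto the basis.
s_vec = np.array([[inner(s_i, phi_k) for phi_k in phi] for s_i in s])
print(np.round(s_vec, 4))          # should match true_coeffs

# Reconstruct the waveforms from the coefficients and check the error.
s_rec = s_vec @ phi
print(np.max(np.abs(s_rec - s)))   # should be ~0
```

Because the basis is orthonormal, projection recovers each coefficient exactly (up to numerical error), which is what allows the waveform problem to be replaced by a problem about \(N\)-dimensional vectors.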
