
III. KERNEL-BASED REGRESSION APPROACH

According to the Moore-Aronszajn theorem [17], there exists a unique reproducing kernel Hilbert space (RKHS) for every positive definite kernel on ℝ^{N_t} × ℝ^{N_t}, and vice versa. We consider a positive definite reproducing kernel k: ℝ^{N_t} × ℝ^{N_t} → ℝ and its corresponding RKHS H with inner product ⟨·, ·⟩_H. Our ultimate goal is to find the relationship, denoted by the model f_d(·) with d ∈ {1, …, D}, between the TOA measurements and the corresponding dth spatial coordinate. Based on the representer theorem [20], [21] and the formulation in [15], the function f_d(·) can be obtained by minimizing the following mean-square-error cost function plus a regularization term (which is included to prevent over-fitting) with respect to f_d(·):


\frac{1}{N}\sum_{i=1}^{N}\bigl(R_{i,d} - f_d(r_i)\bigr)^{2} + \lambda\,\|f_d(\cdot)\|_{\mathcal{H}}^{2}                (1)

 

where R_{i,d} is the dth spatial coordinate of R_i. In the regularization term, λ is a positive parameter and the norm of f_d(·) is the RKHS norm. By the representer theorem, f_d(·) can be expressed as:

f_d(\cdot) = \sum_{j=1}^{N} \alpha_{j,d}\, k(\cdot, r_j)                   (2)

Specifically,
we consider the Gaussian kernel:

k(r_i, r_j) = \exp\!\left(-\frac{\|r_i - r_j\|^{2}}{2\sigma^{2}}\right)       (3)

where σ is the bandwidth of the Gaussian kernel. Here, α_{j,d} is the coefficient, or weight, for the dth spatial coordinate of the jth reference node; N is chosen to be less than N_r so that N reference nodes are used for training to find f_d(·) and the remaining (N_r − N) reference nodes are used for cross-validation. From (3), one can see that a constant bias added to {r_i} has no effect on the localization results, since the kernel depends only on the difference r_i − r_j.
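
For illustration, the Gaussian kernel matrix in (3) can be computed as in the following minimal Python sketch; the function name and the assumed N × N_t layout of the training TOA vectors are illustrative choices, not part of the original formulation.

    import numpy as np

    def gaussian_kernel_matrix(R_train, sigma):
        # R_train: N x Nt array whose rows are the TOA vectors r_1, ..., r_N (assumed layout)
        # Returns the N x N matrix whose (i, j) entry is exp(-||r_i - r_j||^2 / (2 sigma^2)), cf. (3).
        sq_dist = np.sum((R_train[:, None, :] - R_train[None, :, :]) ** 2, axis=-1)
        return np.exp(-sq_dist / (2.0 * sigma ** 2))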

 

In order to solve the minimization problem in (1), we need to use the reproducing property:

k(r_i, r_j) = \langle k(\cdot, r_i),\, k(\cdot, r_j)\rangle_{\mathcal{H}}           (4)

So the second term in equation (1), evaluated at the expansion (2), can be written as

\|f_d(\cdot)\|_{\mathcal{H}}^{2} = \sum_{i=1}^{N}\sum_{j=1}^{N} \alpha_{i,d}\,\alpha_{j,d}\, k(r_i, r_j)           (5)
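
This follows by substituting the expansion (2) into the RKHS norm and applying the reproducing property (4):

\|f_d(\cdot)\|_{\mathcal{H}}^{2}
  = \Bigl\langle \sum_{i=1}^{N}\alpha_{i,d}\, k(\cdot, r_i),\; \sum_{j=1}^{N}\alpha_{j,d}\, k(\cdot, r_j) \Bigr\rangle_{\mathcal{H}}
  = \sum_{i=1}^{N}\sum_{j=1}^{N}\alpha_{i,d}\,\alpha_{j,d}\, \bigl\langle k(\cdot, r_i),\, k(\cdot, r_j)\bigr\rangle_{\mathcal{H}}
  = \sum_{i=1}^{N}\sum_{j=1}^{N}\alpha_{i,d}\,\alpha_{j,d}\, k(r_i, r_j).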

At last, we can rewrite our objective function as its dual optimization problem with respect to a_d in matrix form:

J(a_d) = \frac{1}{N}\,(R_d - K a_d)^{T}(R_d - K a_d) + \lambda\, a_d^{T} K a_d           (6)

where R_d is the N × 1 vector whose ith entry is R_{i,d}, i ∈ {1, …, N}; K is the N × N matrix whose (i, j)th entry is the kernel value k(r_i, r_j), i, j ∈ {1, …, N}; and a_d is the N × 1 vector whose ith entry is α_{i,d}, i ∈ {1, …, N}. The problem thus simplifies to a well-known quadratic regression problem for a finite-dimensional coefficient vector a_d.
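
Once K, R_d, and λ are available, the quadratic objective in (6) can be evaluated directly; the short Python sketch below assumes these quantities are passed as NumPy arrays (argument names are illustrative).

    import numpy as np

    def cost_J(a_d, K, R_d, lam):
        # Objective (6): (1/N) * ||R_d - K a_d||^2 + lam * a_d^T K a_d
        resid = R_d - K @ a_d
        return resid @ resid / len(R_d) + lam * a_d @ (K @ a_d)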

 

Taking the gradient of the cost function in (6) with respect to a_d and setting it equal to zero, we have

-K R_d + K^{2} a_d + \lambda N K a_d = 0          (7)

Since the kernel matrix K is invertible for a strictly positive definite kernel (such as the Gaussian kernel with distinct training points), the common factor K can be cancelled in (7), which gives the solution

a_d = (K + \lambda N I)^{-1} R_d                       (8)
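
A minimal numerical sketch of (8) is given below, assuming K and R_d have been formed as above; solving the linear system is preferred over forming the explicit inverse for numerical stability (function and argument names are illustrative).

    import numpy as np

    def solve_coefficients(K, R_d, lam):
        # Kernel ridge solution (8): a_d = (K + lam * N * I)^{-1} R_d
        N = K.shape[0]
        return np.linalg.solve(K + lam * N * np.eye(N), R_d)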

The above derivation is for the dth spatial dimension. Define \mathbf{f}(\cdot) = [f_1(\cdot)\; f_2(\cdot)\; \cdots\; f_D(\cdot)]^{T} and \hat{a}_j = [\alpha_{j,1}\; \alpha_{j,2}\; \cdots\; \alpha_{j,D}]^{T}. Then the mapping in (2) is extended to all D dimensions:

\mathbf{f}(\cdot) = \sum_{j=1}^{N} \hat{a}_j\, k(\cdot, r_j).           (9)
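
As a final sketch (array names and shapes are assumptions for illustration), the vector-valued prediction (9) for a new TOA measurement vector r can be evaluated from the training TOA vectors and an N × D coefficient array whose jth row is \hat{a}_j; the columns of that array can be obtained by applying (8) once per spatial dimension d.

    import numpy as np

    def predict_position(r, R_train, A, sigma):
        # (9): f(r) = sum_j a_hat_j * k(r, r_j), where A is an N x D array whose
        # j-th row is [alpha_{j,1}, ..., alpha_{j,D}] and R_train is the N x Nt
        # array of training TOA vectors r_1, ..., r_N.
        sq_dist = np.sum((R_train - r) ** 2, axis=1)   # ||r - r_j||^2 for all j
        k_vec = np.exp(-sq_dist / (2.0 * sigma ** 2))  # Gaussian kernel values k(r, r_j)
        return k_vec @ A                                # D-dimensional position estimate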