Computers Math. Applic. Vol. 30, No. 3-6, pp. 155-168, 1995
Pergamon
Copyright © 1995 Elsevier Science Ltd
Printed in Great Britain. All rights reserved
0898-1221/95 $9.50 + 0.00
0898-1221(95)00093-3
Interlacing Properties of the Zeros
of the Orthogonal Polynomials and
Approximation of the Hilbert Transform
G. MASTROIANNI AND D. OCCORSIO
Dipartimento di Matematica, Università della Basilicata
Via N. Sauro 85, 85100 Potenza, Italy
mastroianni@pzvx85.cineca.it
Abstract--In this paper, starting from interlacing properties of the zeros of the orthogonal polynomials, the authors propose a new method to approximate the finite Hilbert transform. For this method they give error estimates in uniform norm.
Keywords--Orthogonal polynomials, Lagrange interpolation, Hilbert transform.
1. INTRODUCTION
The finite Hilbert transform of the function f is defined as

H(f; t) = ∫_{-1}^{1} f(x)/(x - t) dx = lim_{ε→0⁺} ∫_{|x-t|≥ε} f(x)/(x - t) dx,   (1.1)

where the integral is understood in the Cauchy principal value sense.
A sufficient condition for the existence of H(f) is that the function f is continuous in [-1, 1] (f ∈ C⁰([-1, 1])) and ∫_0^1 w(f; u) u⁻¹ du < ∞ (f ∈ DT), where w(f; ·) is the ordinary modulus of continuity.
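As a small numerical illustration (not part of the original paper), the principal value in (1.1) can be evaluated with SciPy's adaptive quadrature, whose weight='cauchy' option integrates f(x)/(x - wvar) in the principal value sense; the function f = exp and the point t = 0 below are arbitrary test choices.

```python
import numpy as np
from scipy.integrate import quad

def hilbert_cpv(f, t):
    # Cauchy principal value of H(f; t) = ∫_{-1}^{1} f(x)/(x - t) dx;
    # weight='cauchy' tells quad to treat the 1/(x - wvar) factor analytically.
    value, _ = quad(f, -1.0, 1.0, weight='cauchy', wvar=t)
    return value

# For f(x) = e^x and t = 0 the even part drops out by symmetry, so
# H(exp; 0) = ∫_{-1}^{1} sinh(x)/x dx = 2*Shi(1) ≈ 2.11450175.
print(hilbert_cpv(np.exp, 0.0))
```

For a constant function the transform is simply H(1; t) = log((1 - t)/(1 + t)), which gives another easy sanity check.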
In numerical analysis and in approximation theory, it is very useful to approximate the finite Hilbert transform, because it appears in several problems of various nature. Furthermore, many physics and engineering problems involve Cauchy integral equations, and sometimes their solutions have zeros or singularities at ±1. For this reason, the solution is expressed as the product of a smooth function and a Jacobi weight v^{α,β}, α, β > -1. Then we are interested in approximating H(v^{α,β}f).
Actually, by global approximation of the function f, two kinds of quadrature rules have been proposed to approximate H(v^{α,β}f): "product rules" and "Gaussian rules." As for the product rules, there exist efficient methods [1,2] founded on recurrence formulae for a large class of weight functions, but the computation of the coefficients requires some effort. The Gaussian rules are simpler than the previous ones, but they present numerical instability and generally diverge whenever the function f is only Hölder continuous.
In this paper, we propose a new method to approximate H(v^{α,β}f). This procedure employs Gaussian rules, using interlacing properties of the zeros of orthogonal polynomials [3] and recent results about Lagrange interpolation [4]. We also prove that this method is numerically stable and convergent.
In the following section, we state the above-mentioned method. Section 3 contains results about the convergence, while Section 4 contains their proofs. Finally, in Section 5, some numerical examples are given.
2. THE METHOD
We assume f ∈ DT. We have

H(v^{α,β}f; t) = ∫_{-1}^{1} f(x) v^{α,β}(x)/(x - t) dx
             = ∫_{-1}^{1} [f(x) - f(t)]/(x - t) v^{α,β}(x) dx + f(t) ∫_{-1}^{1} v^{α,β}(x)/(x - t) dx   (2.1)
             =: F(f; t) + f(t) H(v^{α,β}; t).
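The splitting (2.1) can be verified numerically in the Legendre case α = β = 0, where H(v^{0,0}; t) = log((1 - t)/(1 + t)) is available in closed form. This is a sketch; the choices f = exp and t = 0.3 are arbitrary.

```python
import numpy as np
from scipy.integrate import quad

def F(f, t):
    # regularized part F(f; t) = ∫_{-1}^{1} (f(x) - f(t))/(x - t) dx,
    # whose integrand has only a removable singularity at x = t
    return quad(lambda x: (f(x) - f(t)) / (x - t), -1.0, 1.0)[0]

def H_w(t):
    # H(v^{0,0}; t) = ∫_{-1}^{1} dx/(x - t) = log((1 - t)/(1 + t))
    return np.log((1.0 - t) / (1.0 + t))

f, t = np.exp, 0.3
lhs = quad(f, -1.0, 1.0, weight='cauchy', wvar=t)[0]  # principal value H(f; t)
rhs = F(f, t) + f(t) * H_w(t)                          # right-hand side of (2.1)
print(lhs - rhs)  # the two sides agree to quadrature accuracy
```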
Since it is possible to use an efficient procedure due to Gautschi and Wimp [5] to evaluate H(v^{α,β}; t), we have to compute the function F(f). The idea is to approximate the function F(f) by a Lagrange polynomial

L_M(F(f); t) = Σ_{j=1}^{M} l_j(t) F(f; t_j)

on suitable interpolation knots t_1, …, t_M.
Since we cannot evaluate F(f; t_j) exactly, making use of a Gaussian rule, we replace F(f; t_j) by

F_N(f; t_j) = Σ_{k=1}^{N} λ_{N,k}(v^{α,β}) [f(x_{N,k}(v^{α,β})) - f(t_j)] / [x_{N,k}(v^{α,β}) - t_j],   j = 1, …, M,   (2.2)

where {x_{N,k}(v^{α,β})}_{k=1}^{N} are the zeros of the Jacobi polynomial p_N(v^{α,β}) and {λ_{N,k}(v^{α,β})}_{k=1}^{N} are the Christoffel numbers related to the weight v^{α,β}.
Therefore, we approximate H(v^{α,β}f) by

Ĥ(v^{α,β}f; t) := L_M(F_N(f); t) + f(t) H(v^{α,β}; t).   (2.3)
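A minimal sketch of rule (2.2): scipy.special.roots_jacobi returns exactly the zeros x_{N,k}(v^{α,β}) and the Christoffel numbers λ_{N,k}(v^{α,β}). The choices f(x) = x², t_j = 0.3, and the Legendre weight are illustrative; for this f the integrand (x² - t²)/(x - t) = x + t is linear, so F(f; t) = 2t and an N-point Gaussian rule reproduces it exactly once N ≥ 2.

```python
import numpy as np
from scipy.special import roots_jacobi

def F_N(f, tj, N, alpha, beta):
    # formula (2.2): Gauss-Jacobi nodes x_{N,k}(v^{α,β}) and
    # Christoffel numbers λ_{N,k}(v^{α,β})
    x, lam = roots_jacobi(N, alpha, beta)
    return np.sum(lam * (f(x) - f(tj)) / (x - tj))

# Legendre weight, f(x) = x^2: F(f; t) = ∫_{-1}^{1} (x + t) dx = 2t.
print(F_N(lambda x: x**2, 0.3, 8, 0.0, 0.0))  # 0.6 up to rounding
```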
Now we want to select the knots {t_j}_{j=1}^{M} and N so as to obtain an optimal interpolation process L_M, i.e., such that the Lebesgue constants ‖L_M‖ = sup_{‖f‖=1} ‖L_M(f)‖ (‖·‖ denotes the sup-norm) diverge only as log M. Moreover, to preserve numerical stability, we intend to choose the zeros {x_{N,k}(v^{α,β})}_{k=1}^{N} sufficiently far from the knots {t_j}_{j=1}^{M}. To this aim, we need some results. For any weight u, we denote by {p_k(u)}_{k=0}^{∞} the corresponding sequence of orthonormal polynomials with positive leading coefficient. In the following, we will denote by Lip λ the class of continuous functions in [-1, 1] such that w(f; δ) ≤ K δ^λ, 0 < λ ≤ 1, where K is a positive constant. The following two theorems concern interlacing properties of the zeros of orthogonal polynomials.
THEOREM 2.1 [3]. If w is an arbitrary weight function and w̄(x) = (1 - x²) w(x), then the zeros of p_{n+1}(w) interlace the zeros of p_n(w̄). Moreover, if w(x) = φ(x)(1 - x)^α (1 + x)^β, α, β > -1, φ ∈ Lip λ, 0 < λ ≤ 1, and if we denote by y_{2n+1,i} = cos η_{2n+1,i}, i = 1, …, 2n + 1, the zeros of p_{n+1}(w) p_n(w̄) in increasing order, then

η_{2n+1,i} - η_{2n+1,i+1} ≥ C/n,   i = 1, …, 2n,

with C a positive constant independent of i and n.
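For Jacobi weights, the interlacing part of Theorem 2.1 can be checked directly: with w = v^{α,β} one has w̄ = v^{α+1,β+1}, so the zeros of p_{n+1}(v^{α,β}) and of p_n(v^{α+1,β+1}) must alternate. A sketch; the values of n, α, β are arbitrary test choices.

```python
import numpy as np
from scipy.special import roots_jacobi

def interlaces(n, alpha, beta):
    # zeros of p_{n+1}(w) for w = v^{α,β} ...
    y = np.sort(roots_jacobi(n + 1, alpha, beta)[0])
    # ... and of p_n(w̄) for w̄(x) = (1 - x²) w(x) = v^{α+1,β+1}(x)
    z = np.sort(roots_jacobi(n, alpha + 1, beta + 1)[0])
    # strict interlacing: y_1 < z_1 < y_2 < ... < z_n < y_{n+1}
    return bool(np.all(y[:-1] < z) and np.all(z < y[1:]))

print(interlaces(10, -0.4, 0.2))  # True
```

In the Jacobi case this is no surprise: p_n(v^{α+1,β+1}) is proportional to the derivative of p_{n+1}(v^{α,β}), so the interlacing follows from Rolle's theorem.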
THEOREM 2.2 [3]. If w is an arbitrary weight function and w₁(x) = (1 - x) w(x), w₂(x) = (1 + x) w(x), then the zeros of p_n(w₂) interlace the zeros of p_n(w₁). Moreover, if w(x) = φ(x)(1 - x)^α (1 + x)^β, α, β > -1, φ ∈ Lip λ, 0 < λ ≤ 1, and if we denote by z_{2n,i} = cos σ_{2n,i}, i = 1, …, 2n, the zeros of p_n(w₁) p_n(w₂) in increasing order, then

σ_{2n,i} - σ_{2n,i+1} ≥ C/n,   i = 1, …, 2n - 1,

with C a positive constant independent of i and n.
The next theorem concerns the construction of Lagrange interpolation processes on the zeros of Jacobi polynomials.
THEOREM 2.3 [4]. Let {x_{n,i}(v^{γ,δ})}_{i=1}^{n} be the zeros of the Jacobi polynomial p_n(v^{γ,δ}) and let

x_{n,1-s+k}(v^{γ,δ}) = -1 + k [1 + x_{n,1}(v^{γ,δ})]/s,   k = 0, …, s - 1,
x_{n,n+j}(v^{γ,δ}) = 1 - (r - j) [1 - x_{n,n}(v^{γ,δ})]/r,   j = 1, …, r.

For a given bounded function g, we denote by L_{n,r,s}(v^{γ,δ}; g) the Lagrange polynomial interpolating g on the knots {x_{n,i}(v^{γ,δ})}_{i=1-s}^{n+r}.
If the parameters r, s ∈ ℕ and γ, δ satisfy the conditions

γ/2 + 1/4 ≤ r < γ/2 + 5/4,   δ/2 + 1/4 ≤ s < δ/2 + 5/4,

then

‖L_{n,r,s}(v^{γ,δ})‖ ≤ C log n,

where C is a positive constant independent of n.
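Theorem 2.3 can be illustrated numerically. The sketch below builds the extended knot set (the s extra knots filling [-1, x_{n,1}] and the r extra knots filling [x_{n,n}, 1]) and estimates the Lebesgue constant on a grid; the parameters γ = δ = 1 with r = s = 1, an admissible pair for the stated conditions, are an illustrative choice.

```python
import numpy as np
from scipy.special import roots_jacobi

def extended_knots(n, gamma, delta, r, s):
    # zeros of p_n(v^{γ,δ}) plus the additional knots of Theorem 2.3
    x = np.sort(roots_jacobi(n, gamma, delta)[0])
    left = [-1.0 + k * (1.0 + x[0]) / s for k in range(s)]
    right = [1.0 - (r - j) * (1.0 - x[-1]) / r for j in range(1, r + 1)]
    return np.concatenate([left, x, right])

def lebesgue_constant(knots, grid):
    # Λ = max_x Σ_j |l_j(x)|, with l_j the fundamental Lagrange polynomials
    L = np.ones((grid.size, knots.size))
    for j, xj in enumerate(knots):
        for i, xi in enumerate(knots):
            if i != j:
                L[:, j] *= (grid - xi) / (xj - xi)
    return np.abs(L).sum(axis=1).max()

grid = np.linspace(-1.0, 1.0, 5001)
for n in (10, 20, 40):
    print(n, lebesgue_constant(extended_knots(n, 1.0, 1.0, 1, 1), grid))
# the printed constants stay small, growing only like log n
```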
Making use of the previous theorems, we are able to select the knots t_j, j = 1, …, M. In the following, we assume α, β < 1. Whenever, for example, α ≥ 1, we take (1 - x)^{α-[α]} as the weight ([α] denotes the integer part of α) and f(x)(1 - x)^{[α]} as the function; we proceed similarly whenever β ≥ 1. Now we consider the cases that occur most frequently.
First, we suppose -1 < α, β ≤ 0. In this case, the zeros of p_m(v^{α+1,β+1}) p_{m+1}(v^{α,β}) satisfy the assumptions and the conclusions of Theorem 2.1. Therefore, we choose as Gaussian knots

x_{m+1,i}(v^{α,β}),   i = 1, …, m + 1.

By using Theorem 2.3 with γ = α + 1, δ = β + 1, we choose as interpolation knots {t_j}_{j=1}^{M}

-1 = x_{m,0}(v^{α+1,β+1}), x_{m,1}(v^{α+1,β+1}), …, x_{m,m}(v^{α+1,β+1}), x_{m,m+1}(v^{α+1,β+1}) = 1.
Then, (2.2) becomes

F_{m+1}(f; x_{m,i}(v^{α+1,β+1})) = Σ_{k=1}^{m+1} λ_{m+1,k}(v^{α,β}) [f(x_{m+1,k}(v^{α,β})) - f(x_{m,i}(v^{α+1,β+1}))] / [x_{m+1,k}(v^{α,β}) - x_{m,i}(v^{α+1,β+1})],   i = 0, …, m + 1,

and finally, we obtain

Ĥ(v^{α,β}f; t) = L_{m,1,1}(F_{m+1}(f); t) + f(t) H(v^{α,β}; t).
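A compact sketch of the whole procedure in this first case, specialized to α = β = 0 (so the interpolation knots are ±1 together with the zeros of p_m(v^{1,1}), and H(v^{0,0}; t) = log((1 - t)/(1 + t)) is known in closed form). The test function f = exp and the point t = 0.3 are arbitrary choices; the result is compared against an adaptive principal-value quadrature.

```python
import numpy as np
from scipy.special import roots_jacobi
from scipy.integrate import quad

def hilbert_hat(f, t, m):
    # interpolation knots t_j: -1, the zeros of p_m(v^{1,1}), +1
    tj = np.concatenate(([-1.0], np.sort(roots_jacobi(m, 1.0, 1.0)[0]), [1.0]))
    # Gaussian knots and Christoffel numbers of p_{m+1}(v^{0,0}) (Gauss-Legendre)
    xg, lam = roots_jacobi(m + 1, 0.0, 0.0)
    # F_{m+1}(f; t_j) as in (2.2)
    Fj = [np.sum(lam * (f(xg) - f(c)) / (xg - c)) for c in tj]
    # evaluate the Lagrange polynomial L_{m,1,1}(F_{m+1}(f); t)
    Lt = 0.0
    for j, xj in enumerate(tj):
        lj = np.prod([(t - xi) / (xj - xi) for i, xi in enumerate(tj) if i != j])
        Lt += lj * Fj[j]
    # formula (2.3), with H(v^{0,0}; t) = log((1 - t)/(1 + t))
    return Lt + f(t) * np.log((1.0 - t) / (1.0 + t))

t = 0.3
approx = hilbert_hat(np.exp, t, 16)
exact = quad(np.exp, -1.0, 1.0, weight='cauchy', wvar=t)[0]
print(abs(approx - exact))  # tiny, consistent with the errors reported in Section 5
```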
Now we suppose 0 < α, β < 1/2. By Theorem 2.1, the polynomial p_{m+1}(v^{α-1,β-1}) p_m(v^{α,β}) has zeros sufficiently far apart; therefore, the Gaussian knots are

x_{m,i}(v^{α,β}),   i = 1, …, m.

Moreover, from Theorem 2.3 used with γ = α - 1, δ = β - 1, it follows that the interpolation knots {t_j}_{j=1}^{M} are

x_{m+1,i}(v^{α-1,β-1}),   i = 1, …, m + 1.

Let now 1/2 < α, β < 1. By Theorems 2.1 and 2.3,

x_{m,i}(v^{α,β}),   i = 1, …, m,

are the Gaussian knots, while the interpolation knots {t_j}_{j=1}^{M} are

x_{m+1,i}(v^{α-1,β-1}),   i = 0, …, m + 2.

Up to now, we have assumed that α and β have the same sign. It is useful to consider also the case 0 < α < 1/2 and -1 < β ≤ 0. Theorem 2.2 assures a "good" distance between two consecutive zeros of p_m(v^{α,β}) p_m(v^{α-1,β+1}), while from Theorem 2.3 with γ = α - 1, δ = β + 1 it follows that r = 0, s = 1. Thus

x_{m,i}(v^{α,β}),   i = 1, …, m,

are the Gaussian knots and

x_{m,i}(v^{α-1,β+1}),   i = 0, …, m,

are the interpolation knots {t_j}_{j=1}^{M}. Other cases can be considered, but we omit the details.
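The case analysis above can be collected in a small dispatch routine (a sketch; the function name and the tuple layout are ours). It returns the Jacobi exponents of the weight whose zeros serve as Gaussian knots, those of the weight whose extended zeros serve as interpolation knots, and the numbers r, s of knots added near +1 and -1.

```python
def select_parameters(alpha, beta):
    # returns (gauss, interp, r, s) following the four cases treated in the text
    if -1 < alpha <= 0 and -1 < beta <= 0:
        return (alpha, beta), (alpha + 1, beta + 1), 1, 1
    if 0 < alpha < 0.5 and 0 < beta < 0.5:
        return (alpha, beta), (alpha - 1, beta - 1), 0, 0
    if 0.5 < alpha < 1 and 0.5 < beta < 1:
        return (alpha, beta), (alpha - 1, beta - 1), 1, 1
    if 0 < alpha < 0.5 and -1 < beta <= 0:
        return (alpha, beta), (alpha - 1, beta + 1), 0, 1
    raise NotImplementedError("case not treated in the text")

# Chebyshev case of Example 3 in Section 5: interpolation on v^{1/2,1/2} zeros plus ±1
print(select_parameters(-0.5, -0.5))  # ((-0.5, -0.5), (0.5, 0.5), 1, 1)
```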
As we have previously observed, our method is very simple, since it reduces to the construction of a Gaussian rule and to the evaluation of a Lagrange polynomial, both with respect to Jacobi weights. It is well known that for such weights there exist efficient methods [6,7] to calculate zeros and Christoffel constants. A point of more general interest is that this method can also be applied when we replace the Jacobi weight v^{α,β} in (2.1) by w = φ v^{α,β}, φ ∈ Lip λ, 0 < λ ≤ 1, provided that zeros and Christoffel constants are actually computable, for instance when φ is a rational function (see [8,9]).
Finally, the previous procedure can be used to approximate weakly singular integrals, with singularities inside (-1, 1), or Hadamard integrals.
3. CONVERGENCE
Denoting by Π_n the class of polynomials of degree at most n, for any function g ∈ C⁰([-1, 1]),

E_n(g) = min_{P∈Π_n} ‖g - P‖

is the error of best uniform approximation by algebraic polynomials. The following theorems evaluate the difference between the exact function H(v^{α,β}f) and the approximant Ĥ(v^{α,β}f). Since

Ĥ(v^{α,β}f; t) = L_M(F_N(f); t) + f(t) H(v^{α,β}; t),

the error committed by our method is due to the approximation of

F(f; t) = ∫_{-1}^{1} [f(x) - f(t)]/(x - t) v^{α,β}(x) dx

by L_M(F_N(f); t) (see (2.1) and (2.3)).
Then we have to estimate

Φ_M(f; t) := F(f; t) - L_M(F_N(f); t).   (3.1)

If α, β ≥ 0, f ∈ C⁰([-1, 1]), and ∫_0^1 w(f; u) u⁻¹ du < ∞ (i.e., f ∈ DT), then F(f; t) exists for |t| ≤ 1, and we can state the following theorem.
THEOREM 3.1. Let v^{α,β}, α, β ≥ 0, and let f ∈ DT. Then

‖Φ_M(f)‖ ≤ C E_{M-1}(f) log² M,

where C is a positive constant independent of f and M.
Whenever α, β < 0 and we assume only f ∈ DT, then in general F(f) is not bounded at the endpoints ±1. However, when f is a more regular function, for example f ∈ Lip λ with max(-2α, -2β) < λ ≤ 1, the above method works on [-1, 1] and the error estimate is given by the following theorem.
THEOREM 3.2. Let v^{α,β}, α, β < 0, and assume f^{(k)} ∈ Lip λ, with k ≥ 0 and max{-2α, -2β} < λ ≤ 1. Then

‖Φ_M(f)‖ ≤ C log² M / M^{λ+k},   k ≥ 0,

where C is a positive constant independent of f and M.
As we have previously observed, H(v^{α,β}f) is, in general, not bounded at ±1 for α, β < 0. Then it is natural to give a weighted estimate of the error.
THEOREM 3.3. Let v^{α,β}, α, β < 0, and let f ∈ DT. Then

‖Φ_M(f) v^{-α,-β}‖ ≤ C log² M ∫_0^{1/M} w(f; u) u⁻¹ du;   (3.2)

moreover, if f ∈ C¹([-1, 1]), then

‖Φ_M(f) v^{-α,-β}‖ ≤ C (log² M / M) E_{M-1}(f′),   (3.3)

where C is a positive constant independent of f and M.
REMARK. We suspect that the factor log² M appearing in our estimates can be replaced by log M, as sometimes happens in the estimates for product rules. In any case, this is the price we pay in exchange for the simplicity of the method.
4. THE PROOFS
We need the following lemma.
LEMMA 4.1 [10]. If f ∈ C^{(r)}([-1, 1]), r ≥ 0, then for each n ∈ ℕ with n ≥ 4(r + 1) there exists a polynomial q_n of degree at most n such that

|f^{(k)}(x) - q_n^{(k)}(x)| ≤ C [n⁻¹ √(1 - x²)]^{r-k} w(f^{(r)}; n⁻¹ √(1 - x²)),   0 ≤ k ≤ r,   -1 ≤ x ≤ 1,
|q_n^{(k)}(x)| ≤ C [Δ_n(x)]^{r-k} w(f^{(r)}; Δ_n(x)),   k > r,   -1 ≤ x ≤ 1,

where Δ_n(x) = n⁻¹ √(1 - x²) + n⁻².
Setting F(f; t) - F_N(f; t) =: R_N(f; t), by (3.1) we get

|Φ_M(f; t)| ≤ C {|F(f; t) - L_M(F(f); t)| + |L_M(R_N(f); t)|}.

Let q_M be the polynomial of Lemma 4.1 related to the function f. Setting r_M = f - q_M and taking into account that Φ_M(P) = 0 for every P ∈ Π_M, we have

|Φ_M(f; t)| ≤ C {|F(r_M; t)| + |L_M(F(r_M); t)| + |L_M(R_N(r_M); t)|}.   (4.1)
PROOF OF THEOREM 3.1. Using [11, (3.22) of Lemma 3.6, p. 457], we obtain

|F(r_M; t)| ≤ C ∫_{-1}^{1} [|r_M(x) - r_M(t)| / |x - t|] v^{α,β}(x) dx ≤ C w(f; 1/M) log M,   -1 < t < 1,   α, β ≥ 0.

On the other hand,

|F(r_M; ±1)| ≤ C ‖r_M‖ ≤ C w(f; 1/M),

so that

‖F(r_M)‖ ≤ C w(f; 1/M) log M.   (4.2)

Consequently,

‖L_M(F(r_M))‖ ≤ C ‖L_M‖ ‖F(r_M)‖ ≤ C ‖L_M‖ w(f; 1/M) log M.

Since we have chosen the interpolation knots t_j, j = 1, …, M, in such a way that ‖L_M‖ ≤ C log M, we get

‖L_M(F(r_M))‖ ≤ C w(f; 1/M) log² M.   (4.3)

Now, we recall that, setting x_{N,i}(v^{α,β}) = cos θ_{N,i}(v^{α,β}) and t = cos θ,

|R_N(r_M; t)| ≤ C {|F(r_M; t)| + Σ_{|θ-θ_{N,i}(v^{α,β})|≥C/M} λ_{N,i}(v^{α,β}) |r_M(x_{N,i}(v^{α,β})) - r_M(t)| / |x_{N,i}(v^{α,β}) - t|}.   (4.4)

By using [11, (3.17) in Lemma 3.5, p. 455], we have, uniformly with respect to N,

Σ_{|θ-θ_{N,i}(v^{α,β})|≥C/M} λ_{N,i}(v^{α,β}) |r_M(x_{N,i}(v^{α,β})) - r_M(t)| / |x_{N,i}(v^{α,β}) - t| ≤ C w(f; 1/M) log M.

Combining the last inequality with (4.4) and (4.2), we get

‖R_N(r_M)‖ ≤ C w(f; 1/M) log M,

uniformly with respect to N, and

‖L_M(R_N(r_M))‖ ≤ C w(f; 1/M) ‖L_M‖ log M,

so that

‖L_M(R_N(r_M))‖ ≤ C w(f; 1/M) log² M.

Combining (4.2), (4.3) and the last inequality with (4.1), we obtain

‖Φ_M(f)‖ ≤ C w(f; 1/M) log² M.

Now, for a polynomial P ∈ Π_M, we have

‖Φ_M(f)‖ = ‖Φ_M(f - P)‖ ≤ C w(f - P; 1/M) log² M ≤ 2C ‖f - P‖ log² M.

Taking the infimum over P ∈ Π_M, Theorem 3.1 follows.  ∎
PROOF OF THEOREM 3.2. First we suppose k = 0 and assume f ∈ Lip λ, with max{-2α, -2β} < λ ≤ 1. We start from (4.1). Making use of [11, (3.23) of Lemma 3.6, p. 457], we have

|F(r_M; t)| ≤ C log M / M^λ,   max(-2α, -2β) < λ ≤ 1,   -1 < t < 1.   (4.5)
Now we observe that F(r_M; ±1) makes sense because of the restriction on λ. For t = -1, we have

|F(r_M; -1)| ≤ C ∫_{-1}^{-1+M⁻²} [|r_M(x) - r_M(-1)| / (x + 1)] (1 + x)^β dx + C ∫_{-1+M⁻²}^{1} [|r_M(x) - r_M(-1)| / (x + 1)] v^{α,β}(x) dx =: C₁(t) + C₂(t),   (4.6)

and

C₁(t) ≤ C {∫_{-1}^{-1+M⁻²} |f(x) - f(-1)| (1 + x)^{β-1} dx + ∫_{-1}^{-1+M⁻²} |q_M(x) - q_M(-1)| (1 + x)^β dx}
     ≤ C {∫_{-1}^{-1+M⁻²} (1 + x)^{λ+β-1} dx + ∫_{-1}^{-1+M⁻²} |q_M′(ξ_x)| (1 + x)^β dx},   -1 < ξ_x < -1 + M⁻²,

since (1 - x) > 1. By applying Lemma 4.1,

C₁(t) ≤ C/M^{2λ+2β} + C M^{1-λ} ∫_{-1}^{-1+M⁻²} (1 + x)^{β+(λ-1)/2} dx ≤ C/M^{2λ+2β} + C M^{1-λ} (1/M^{λ+1+2β}).

Taking into account that 2λ + 2β > λ, we have

C₁(t) ≤ C/M^λ.   (4.7)
Now we consider C₂(t). Making use of Lemma 4.1,

C₂(t) ≤ C ∫_{-1+M⁻²}^{1} [|r_M(x)| + |r_M(-1)|] v^{α,β-1}(x) dx
     ≤ C {M^{-λ} ∫_{-1+M⁻²}^{1} (1 + x)^{λ/2+β-1} dx + M^{-2λ} ∫_{-1+M⁻²}^{1} (1 + x)^{β-1} dx} ≤ C/M^λ,   (4.8)

since λ/2 + β > 0 and 2λ + 2β > λ. The same estimate holds for t = 1. Combining (4.5)-(4.8), we can conclude

‖F(r_M)‖ ≤ C log M / M^λ,   max(-2α, -2β) < λ ≤ 1,   (4.9)

and consequently,

‖L_M(F(r_M))‖ ≤ C log² M / M^λ.   (4.10)

Moreover,

|R_N(r_M; t)| ≤ C {‖F(r_M)‖ + Σ_{i=1}^{N} λ_{N,i}(v^{α,β}) |r_M(x_{N,i}(v^{α,β})) - r_M(t)| / |x_{N,i}(v^{α,β}) - t|} =: C {‖F(r_M)‖ + Σ(t)}.   (4.11)
Setting x_{N,i}(v^{α,β}) = cos θ_{N,i}(v^{α,β}) and t = cos θ, by using [11, (3.18) in Lemma 3.5, p. 455], we have

Σ(t) ≤ C Σ_{|θ-θ_{N,i}(v^{α,β})|≥C/M} λ_{N,i}(v^{α,β}) |r_M(x_{N,i}(v^{α,β})) - r_M(t)| / |x_{N,i}(v^{α,β}) - t| ≤ C (log M / M^λ) v^{α+λ/2, β+λ/2}(t),   (4.12)

uniformly with respect to N. We observe that in this case t ∈ {t_1, …, t_M} and θ_{N,k}(v^{α,β}) - θ_j ≥ C/M. Then by (4.11), (4.9), and (4.12), taking into account that α + λ/2 > 0 and β + λ/2 > 0, we get

‖R_N(r_M)‖ ≤ C log M / M^λ,

uniformly with respect to N, and

‖L_M(R_N(r_M))‖ ≤ C ‖L_M‖ log M / M^λ,

so that

‖L_M(R_N(r_M))‖ ≤ C log² M / M^λ.   (4.13)
Combining (4.9), (4.10), (4.13) with (4.1), the theorem is proved for k = 0. To prove the theorem for k > 0, we start from (4.1) again. For |t| < 1, splitting the integral defining F(r_M; t) over [-1, t - (1+t)/M²], [t - (1+t)/M², t + (1-t)/M²], and [t + (1-t)/M², 1], we write

|F(r_M; t)| ≤ B₁(t) + B₂(t) + B₃(t).   (4.14)

Applying Lemma 4.1,

B₁(t) ≤ C {∫_{-1}^{t-(1+t)/M²} |r_M(x)| v^{α,β}(x)/|x - t| dx + |r_M(t)| ∫_{-1}^{t-(1+t)/M²} v^{α,β}(x)/|x - t| dx}.

Taking into account [11, (3.12) in Lemma 3.3, p. 453], we get

B₁(t) ≤ C {(1/M^{λ+k}) ∫_{-1}^{t-(1+t)/M²} (1 + x)^{(λ+k)/2+β} (1 - x)^{(λ+k)/2+α} / |x - t| dx + v^{(λ+k)/2+α, (λ+k)/2+β}(t) (log M / M^{λ+k})}.

Since (k + λ)/2 + β > 0, making use again of [11, (3.12) in Lemma 3.3, p. 453], we get

B₁(t) ≤ C v^{(λ+k)/2+α, (λ+k)/2+β}(t) (log M / M^{λ+k}).   (4.15)

By similar arguments, we obtain

B₃(t) ≤ C v^{(λ+k)/2+α, (λ+k)/2+β}(t) (log M / M^{λ+k}).   (4.16)

Now we estimate B₂(t). Applying Lemma 4.1, we have, for t - (1+t)/M² ≤ ξ_x ≤ t + (1-t)/M²,

B₂(t) ≤ C ∫_{t-(1+t)/M²}^{t+(1-t)/M²} |r_M′(ξ_x)| v^{α,β}(x) dx ≤ (C/M^{λ+k-1}) ∫_{t-(1+t)/M²}^{t+(1-t)/M²} v^{(k-1+λ)/2+α, (k-1+λ)/2+β}(x) dx ≤ C/M^{λ+k+1},   (4.17)

since (k + λ - 1)/2 + α > 0 and (k + λ - 1)/2 + β > 0. Then, combining (4.15)-(4.17) with (4.14), we get

|F(r_M; t)| ≤ C log M / M^{λ+k},   -1 < t < 1.   (4.18)
Assume t = -1. By (4.14), since B₁(-1) = 0, we have

|F(r_M; -1)| ≤ C {∫_{-1}^{-1+M⁻²} [|r_M(x) - r_M(-1)| / (x + 1)] (1 + x)^β dx + ∫_{-1+M⁻²}^{1} [|r_M(x) - r_M(-1)| / (x + 1)] v^{α,β}(x) dx}
           ≤ C {∫_{-1}^{-1+M⁻²} |r_M′(ξ_x)| (1 + x)^β dx + ∫_{-1+M⁻²}^{1} |r_M(x) - r_M(-1)| v^{α,β-1}(x) dx},   -1 < ξ_x < -1 + M⁻².

Then, proceeding as in the case k = 0, we obtain

|F(r_M; -1)| ≤ C/M^{λ+k}.

The same estimate holds for t = 1. Thus,

|F(r_M; ±1)| ≤ C/M^{λ+k}.   (4.19)

Combining (4.18) and (4.19), we get

‖F(r_M)‖ ≤ C log M / M^{λ+k},   (4.20)

and easily we also obtain

‖L_M(F(r_M))‖ ≤ C log² M / M^{λ+k}.   (4.21)

Setting x_{N,i}(v^{α,β}) = cos θ_{N,i}(v^{α,β}) and t = cos θ, since t ∈ {t_1, …, t_M} and θ_{N,k}(v^{α,β}) - θ_j ≥ C/M, we get

|R_N(r_M; t)| ≤ C {|F(r_M; t)| + Σ_{|θ-θ_{N,i}(v^{α,β})|≥C/M} λ_{N,i}(v^{α,β}) |r_M(x_{N,i}(v^{α,β})) - r_M(t)| / |x_{N,i}(v^{α,β}) - t|} =: C {|F(r_M; t)| + Σ(t)}.   (4.22)
Then we have

Σ(t) ≤ Σ_{|θ-θ_{N,i}(v^{α,β})|≥C/M} λ_{N,i}(v^{α,β}) |r_M(x_{N,i}(v^{α,β}))| / |x_{N,i}(v^{α,β}) - t| + |r_M(t)| Σ_{|θ-θ_{N,i}(v^{α,β})|≥C/M} λ_{N,i}(v^{α,β}) / |x_{N,i}(v^{α,β}) - t|.

Making use of [11, (3.16) in Lemma 3.4, p. 454], we get

Σ(t) ≤ C {(1/M^{λ+k}) Σ_{|θ-θ_{N,i}(v^{α,β})|≥C/M} λ_{N,i}(v^{α,β}) v^{(λ+k)/2, (λ+k)/2}(x_{N,i}(v^{α,β})) / |x_{N,i}(v^{α,β}) - t| + v^{(λ+k)/2+α, (λ+k)/2+β}(t) (log M / M^{λ+k})}.

Thus, taking into account that α + (λ + k)/2 > 0 and β + (λ + k)/2 > 0, and making use of [12, Lemmas 5.7 and 5.8, pp. 164-165], we have

Σ(t) ≤ C log M / M^{λ+k},   |t| ≤ 1.   (4.23)
Combining (4.20) and (4.23) with (4.22), we get

‖R_N(r_M)‖ ≤ C log M / M^{λ+k},   (4.24)

and consequently,

‖L_M(R_N(r_M))‖ ≤ C ‖L_M‖ log M / M^{λ+k},

so that

‖L_M(R_N(r_M))‖ ≤ C log² M / M^{λ+k}.   (4.25)

Combining (4.20), (4.21), (4.25) with (4.1), the assertion is completely proved.  ∎
PROOF OF THEOREM 3.3. By (4.1), it follows that

|Φ_M(f; t)| v^{-α,-β}(t) ≤ C {|F(r_M; t)| + |L_M(F(r_M); t)| + |L_M(R_N(r_M); t)|} v^{-α,-β}(t).   (4.26)

Setting I = [-1, 1], I_M = [t - (1 + t)/(2M²), t + (1 - t)/(2M²)], and I′_M = I \ I_M,

|F(r_M; t)| v^{-α,-β}(t) ≤ C v^{-α,-β}(t) {∫_{I_M} [|r_M(x) - r_M(t)| / |x - t|] v^{α,β}(x) dx + ∫_{I′_M} [|r_M(x) - r_M(t)| / |x - t|] v^{α,β}(x) dx} =: A₁(t) + A₂(t).

For x ∈ I_M, we have |r_M(x) - r_M(t)| ≤ |x - t| |r_M′(ξ_x)|, with t - (1 + t)/(2M²) ≤ ξ_x ≤ t + (1 - t)/(2M²).
By Lemma 4.1, we then get

A₁(t) ≤ C ∫_0^{1/M} w(f; u) u⁻¹ du.   (4.27)

To bound A₂(t), we make use of [11, (3.12) in Lemma 3.3, p. 453]:

A₂(t) ≤ C ‖r_M‖ v^{-α,-β}(t) ∫_{I′_M} v^{α,β}(x)/|x - t| dx.

Since [11]

v^{-α,-β}(t) ∫_{I′_M} v^{α,β}(x)/|x - t| dx ≤ C log M,   α, β < 0,   |t| ≤ 1,

we obtain

A₂(t) ≤ C w(f; 1/M) log M.   (4.28)
Combining (4.27) and (4.28), we obtain

‖F(r_M) v^{-α,-β}‖ ≤ C log M ∫_0^{1/M} w(f; u) u⁻¹ du,   (4.29)

and consequently,

‖L_M(F(r_M)) v^{-α,-β}‖ ≤ C log² M ∫_0^{1/M} w(f; u) u⁻¹ du.   (4.30)

Taking into account that

|R_N(r_M; t)| v^{-α,-β}(t) ≤ C {‖F(r_M) v^{-α,-β}‖ + v^{-α,-β}(t) Σ_{i=1}^{N} λ_{N,i}(v^{α,β}) |r_M(x_{N,i}(v^{α,β})) - r_M(t)| / |x_{N,i}(v^{α,β}) - t|} =: C {‖F(r_M) v^{-α,-β}‖ + Σ(t)},   (4.31)
we now set x_{N,i}(v^{α,β}) = cos θ_{N,i}(v^{α,β}) and t = cos θ. Since t ∈ {t_1, …, t_M} and θ_{N,k}(v^{α,β}) - θ_j ≥ C/M, the sum in Σ(t) may be restricted to the indices with |θ - θ_{N,i}(v^{α,β})| ≥ C/M. Since

v^{-α,-β}(t) Σ_{|θ-θ_{N,i}(v^{α,β})|≥C/M} λ_{N,i}(v^{α,β}) / |x_{N,i}(v^{α,β}) - t| ≤ C log M,   α, β < 0,   |t| ≤ 1,

by applying Lemma 4.1, we have

Σ(t) ≤ C w(f; 1/M) log M.
Combining the last inequality with (4.29) and (4.31), we get

‖R_N(r_M) v^{-α,-β}‖ ≤ C {log M ∫_0^{1/M} w(f; u) u⁻¹ du + w(f; 1/M) log M} ≤ C log M ∫_0^{1/M} w(f; u) u⁻¹ du.

Thus, by Theorem 2.3, we get

‖L_M(R_N(r_M)) v^{-α,-β}‖ ≤ C log² M ∫_0^{1/M} w(f; u) u⁻¹ du.

Then (3.2) follows combining the last inequality, (4.29), and (4.30) with (4.26).
Now assume f ∈ C¹([-1, 1]). Then, by (3.2), we have

‖Φ_M(f) v^{-α,-β}‖ ≤ C (log² M / M) ‖f′‖.   (4.32)

Let q_{M-1} be the polynomial of best uniform approximation of f′ and let Q_M ∈ Π_M be such that Q_M′ = q_{M-1}. Then, by (4.32), we get

‖Φ_M(f) v^{-α,-β}‖ = ‖Φ_M(f - Q_M) v^{-α,-β}‖ ≤ C (log² M / M) ‖(f - Q_M)′‖ ≤ C (log² M / M) E_{M-1}(f′).  ∎

5. NUMERICAL EXAMPLES
In this section, we present some numerical results obtained by using our method, showing that the numerical errors agree with the estimates of the previous theorems.
EXAMPLE 1. This example can be found in [1]:

H(f; t) = ∫_{-1}^{1} e^x/(x - t) dx.

In this case, the m + 2 interpolation knots are

{x_{m,i}(v^{1,1})}_{i=1}^{m} ∪ {±1},

while the knots of the Gaussian rule are the zeros of p_{m+1}(v^{0,0}). In the following, we give some tables of the absolute error obtained through comparison with the analytical solution for various values of t.
Table 1. m = 8.
t        |H(v^{α,β}f; t) - Ĥ(v^{α,β}f; t)|
0.01     0.146540E-09
0.05     0.131077E-09
0.1      0.8641595E-10
0.2      0.482977E-10
0.3      0.142488E-09
0.6      0.135784E-09
0.8      0.958530E-10
0.9      0.452853E-10
0.95     0.648729E-10
0.99     0.468135E-10

Table 2. m = 16.
t        |H(v^{α,β}f; t) - Ĥ(v^{α,β}f; t)|
0.01     0.422770E-013
0.05     0.265931E-012
0.1      0.964777E-013
0.2      0.236846E-012
0.3      0.918579E-013
0.6      0.466931E-013
0.8      0.295877E-012
0.9      0.458954E-012
0.95     0.112702E-012
0.99     0.478533E-012
EXAMPLE 2. This example can be found in [13]:

H(f; t) = ∫_{-1}^{1} [1/√((1.1)² - x²)] dx/(x - t).

In this case, we choose the same knots used in the previous example, but, taking into account that the singularities of the function 1/√((1.1)² - x²) are "close" to the interval [-1, 1], a larger number of knots is required.
Table 3. m = 16.
t        |H(v^{α,β}f; t) - Ĥ(v^{α,β}f; t)|
0.25     0.295195E-04
0.99     0.700020E-03

Table 4. m = 32.
t        |H(v^{α,β}f; t) - Ĥ(v^{α,β}f; t)|
0.25     0.348525E-07
0.99     0.340218E-06

Table 5. m = 64.
t        |H(v^{α,β}f; t) - Ĥ(v^{α,β}f; t)|
0.25     0.351559E-09
0.99     0.143051E-08
EXAMPLE 3.

H(f; t) = ∫_{-1}^{1} [x|x|/√(1 - x²)] dx/(x - t).

In this example, α = β = -1/2; hence, we choose {x_{m,i}(v^{1/2,1/2})}_{i=1}^{m} ∪ {±1} as interpolation knots, and the zeros of p_{m+1}(v^{-1/2,-1/2}) as the knots of the Gaussian rule.
Table 6. m = 32.
t          |H(v^{α,β}f; t) - Ĥ(v^{α,β}f; t)|
-0.99      0.572003E-06
0.01       0.257048E-02
0.05       0.418544E-03
0.25       0.135369E-03
0.8        0.655437E-05
0.9        0.282223E-05
0.999999   0.159008E-08

Table 7. m = 64.
t          |H(v^{α,β}f; t) - Ĥ(v^{α,β}f; t)|
-0.99      0.267368E-07
0.01       0.468045E-03
0.05       0.280239E-03
0.25       0.156793E-04
0.8        0.595368E-06
0.9        0.225838E-06
0.999999   0.221591E-09

Table 8. m = 128.
t          |H(v^{α,β}f; t) - Ĥ(v^{α,β}f; t)|
-0.99      0.318389E-06
0.01       0.174744E-04
0.05       0.232971E-04
0.25       0.534911E-06
0.8        0.167861E-06
0.9        0.115249E-06
0.999999   0.331633E-06
REFERENCES
1. G. Monegato, The numerical evaluation of one-dimensional Cauchy principal value integrals, Computing 29, 337-354, (1982).
2. G. Monegato, Convergence of product formulas for the numerical evaluation of certain two-dimensional Cauchy principal value integrals, Numer. Math. 43, 161-173, (1984).
3. G. Criscuolo, G. Mastroianni and D. Occorsio, Convergence of extended Lagrange interpolation, Math. Comp. 55, 197-212, (1990).
4. G. Mastroianni, Uniform convergence of derivatives of Lagrange interpolation, J. Comput. Appl. Math. 43, 1-15, (1992).
5. W. Gautschi and J. Wimp, Computing the Hilbert transform of a Jacobi weight function, BIT 27 (2), 203-215, (1987).
6. P. Baratella, M.T. Garetto and G. Monegato, Una collezione di routines per la valutazione numerica di integrali in una e più dimensioni, I.A.C., Monografie di Software Matematico (18), Roma, (1983).
7. W. Gautschi, Computational aspects of orthogonal polynomials, NATO Adv. Stud. Inst. Series Math. Phys. Sci. 294, (Edited by P. Nevai), (1990).
8. W. Gautschi, Orthogonal Polynomials: Computation and Applications, (preprint), Purdue University.
9. A. Nikiforov and V. Ouvarov, Éléments de la Théorie des Fonctions Spéciales, MIR, Moscow, (1976).
10. P.O. Runck, Bemerkungen zu den Approximationssätzen von Jackson und Jackson-Timan, ISNM 10, 303-308, (1969).
11. G. Criscuolo and G. Mastroianni, On the uniform convergence of Gaussian quadrature rules for Cauchy principal value integrals, Numer. Math. 54, 445-461, (1989).
12. G. Criscuolo and G. Mastroianni, Mean and uniform convergence of quadrature rules for evaluating the finite Hilbert transform, In Progress in Approximation Theory, a special issue of J. Approx. Theory, pp. 141-175, (1991).
13. D.F. Paget and D. Elliott, An algorithm for the numerical evaluation of certain Cauchy principal value integrals, Numer. Math. 19, 373-385, (1972).