An enhanced Fletcher-Reeves-like conjugate gradient method for image restoration

ABSTRACT


INTRODUCTION
The subject of image restoration has been widely researched and applied in numerous fields of science and engineering. It requires the reconstruction of an original scene from a deteriorated observation. For example, air turbulence degrades star pictures viewed by ground-based telescopes. Images are often exposed to noise due to environmental factors, transmission channels, and other related elements during acquisition, compression, and transmission. As a result, image quality is affected, leading to distortion and loss of image information. Noise also impacts later image processing tasks, such as image analysis and tracking, as well as video processing. Thus, image denoising is a crucial aspect of modern image processing systems.
Image denoising aims to restore the original image quality by removing noise from a noise-corrupted image. However, since noise, edges, and texture are all high-frequency components, it is challenging to differentiate between them during denoising. As a result, restored images may lose some important details. Overall, the challenge in image processing systems is to recover relevant information from noisy images during noise removal so as to produce high-quality images.

In certain cases, stellar pictures must be recovered even when they were not observed through the atmosphere. The fundamental goal of this research is to create a class of iterative optimization algorithms applicable to edge-preserving regularization (EPR) objective functions. To reduce impulse noise, a two-phase technique was recently developed in [1]. For salt-and-pepper noise, the adaptive median filter (AMF) is used, while for random-valued noise, the adaptive center-weighted median filter (ACWMF) is used, first improved by applying the variable-window technique to increase its detection capability in severely damaged pictures [2]. Only salt-and-pepper noise is considered in this study. Let X represent the true image and A = {1, 2, 3, ..., M} × {1, 2, 3, ..., N} the index set of X. Let N ⊂ A denote the set of noise-pixel indices detected during the first phase. Also, let P_{i,j} be the set of the four nearest neighbors of the pixel at position (i, j) ∈ A, let y_{i,j} denote the observed pixel value at position (i, j), and let u = [u_{i,j}]_{(i,j)∈N} be a lexicographically ordered column vector of length c, where c is the size of N. The noise pixels are then recovered by minimizing the following function:
F(u) = Σ_{(i,j)∈N} [ |u_{i,j} − y_{i,j}| + (β/2)(2 S¹_{i,j} + S²_{i,j}) ],  (1)

where β is the regularization parameter, S¹_{i,j} = Σ_{(m,n)∈P_{i,j}\N} φ_α(u_{i,j} − y_{m,n}), and S²_{i,j} = Σ_{(m,n)∈P_{i,j}∩N} φ_α(u_{i,j} − u_{m,n}).
Function (1) employs the edge-preserving potential φ_α(x) = √(α + x²), α > 0, which is well suited to modeling impulsive noise in general. Minimizing (1) is at the core of the scheme built on the AMF introduced in [3], a typical method for locating pixels that may be contaminated. In practice, the non-smooth data-fitting term can be dropped, because it is not needed in the second phase, where only the corrupted pixels detected in the first phase are restored. As a result, a number of optimization strategies may be utilized to minimize the following smooth EPR functional (see, e.g., [4]-[7]):

z(u) = Σ_{(i,j)∈N} [ 2 Σ_{(m,n)∈P_{i,j}\N} φ_α(u_{i,j} − y_{m,n}) + Σ_{(m,n)∈P_{i,j}∩N} φ_α(u_{i,j} − u_{m,n}) ].  (3)
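As a rough illustration (not the authors' code), the smooth EPR term over the detected noise set can be sketched in NumPy as follows. The weight 2 on clean neighbors, the 4-neighbor set P_{i,j}, and all function names here are assumptions based on the description above:

```python
import numpy as np

def phi(t, alpha=1e-2):
    # Edge-preserving potential phi_alpha(t) = sqrt(alpha + t^2), alpha > 0
    return np.sqrt(alpha + t**2)

def epr_value(u, y, noise_mask, alpha=1e-2):
    """Smooth EPR functional evaluated over detected noisy pixels.

    u: current estimate (only entries where noise_mask is True are free)
    y: observed image
    noise_mask: boolean array marking the detected noise set N
    """
    M, N = y.shape
    total = 0.0
    for i, j in zip(*np.nonzero(noise_mask)):
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # 4 nearest neighbors
            m, n = i + di, j + dj
            if 0 <= m < M and 0 <= n < N:
                if noise_mask[m, n]:
                    # Neighbor is also a detected noise pixel: compare free variables
                    total += phi(u[i, j] - u[m, n], alpha)
                else:
                    # Neighbor is clean: weight 2, compare against the observed value
                    total += 2.0 * phi(u[i, j] - y[m, n], alpha)
    return total
```

This sketch is a direct, unvectorized transcription of the sum in (3); a practical implementation would vectorize the neighbor differences.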
The conjugate gradient (CG) method is quite effective for image restoration posed as an unconstrained optimization problem of the form (3), owing to its low memory requirements and simplicity of coding [4]-[7]. To solve (3), a new solution vector is computed iteratively via

x_{k+1} = x_k + α_k d_k.  (4)
The step length α_k is traditionally obtained through a one-dimensional line search which, in practice, is usually inexact for reasons of cost and practicality. For quadratic functions, α_k can be computed exactly as α_k = −g_k^T d_k / (d_k^T Q d_k) [8].
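The exact quadratic step can be checked numerically; in the following sketch all matrices and vectors are illustrative test data, not from the paper:

```python
import numpy as np

# Quadratic f(x) = 0.5 x^T Q x + b^T x with a symmetric positive definite Q
Q = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, -2.0])

def grad(x):
    # Gradient of the quadratic: g(x) = Q x + b
    return Q @ x + b

x = np.array([2.0, 1.0])
g = grad(x)
d = -g                               # a descent direction
alpha = -(g @ d) / (d @ Q @ d)       # exact step length for a quadratic

# At the exact minimizer along the line, the directional derivative vanishes
x_new = x + alpha * d
directional_derivative = grad(x_new) @ d
```

The vanishing directional derivative g(x_{k+1})^T d_k = 0 is exactly the exact-line-search property used later in the derivation.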
However, for general functions, α_k is computed so that the obtained search direction is sufficiently downhill, by requiring the strong Wolfe conditions [2]:

f(x_k + α_k d_k) ≤ f(x_k) + δ α_k g_k^T d_k,  (5)
|g(x_k + α_k d_k)^T d_k| ≤ σ |g_k^T d_k|,  (6)
where 0 < δ < σ < 1. CG methods compute search directions by d_0 = −g_0 and

d_{k+1} = −g_{k+1} + β_k d_k,  (8)

where the scalar β_k distinguishes the methods; the classical Fletcher-Reeves (FR) and Polak-Ribiere-Polyak (PRP) choices are β_k^FR = ‖g_{k+1}‖² / ‖g_k‖² and β_k^PRP = g_{k+1}^T (g_{k+1} − g_k) / ‖g_k‖², respectively (7). These two methods have been the focus of many studies, not only because of their historical significance, but also because of their proven global convergence. Many other variants have been examined in an attempt to improve the numerical behavior of CG methods, given their attractive storage requirements; see, for example, [11]-[14]. CG methods can be utilized in solving problems related to machine learning, fluid mechanics, the solution of nonlinear equations and differential equations, and deep learning, in addition to other applications. Another possible area of application is human performance technology (HPT). HPT is largely based on the numerical performance improvement (PI) characteristics of computer systems, which rely on logical judgements enabled by specialized algorithms [15]. PI also helps to widen the scope of instructional design by using a systems perspective to address performance opportunities and obstacles. CG methods have also proven valuable in the context of mobile electronic performance support systems (EPSS), whose adoption improved the job performance and efficiency of mobile users according to a cross-sectional qualitative study [16].
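As an illustration of the FR scheme just described, the following sketch runs the iteration (4) with d_0 = −g_0, the FR coefficient, and the exact quadratic line search on a small convex quadratic (the matrix and vector are illustrative test data, not from the paper):

```python
import numpy as np

# Fletcher-Reeves CG on a convex quadratic f(x) = 0.5 x^T Q x - b^T x,
# whose minimizer solves Q x = b. Exact line search is available here.
Q = np.array([[3.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 5.0]])
b = np.array([1.0, 2.0, 3.0])

x = np.zeros(3)
g = Q @ x - b               # gradient of f at x
d = -g                      # d_0 = -g_0
for k in range(50):
    if np.linalg.norm(g) < 1e-10:
        break
    alpha = -(g @ d) / (d @ Q @ d)        # exact step length
    x = x + alpha * d                     # iterate update (4)
    g_new = Q @ x - b
    beta_fr = (g_new @ g_new) / (g @ g)   # FR coefficient
    d = -g_new + beta_fr * d              # direction update (8)
    g = g_new

residual = np.linalg.norm(Q @ x - b)
```

With exact line search on an n-dimensional quadratic, this iteration reduces to linear CG and terminates in at most n steps in exact arithmetic.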
In order to improve the computational efficiency of the standard CG method, a special class of conjugate gradient methods has recently been extensively investigated [17]-[20]. The approaches in [9], [18] propose a conjugacy condition of the following form.
Unlike traditional CG algorithms, the aforementioned approach has the unique characteristic of consistently constructing better descent directions while satisfying the conjugacy conditions, as evidenced by the reported results. In the following section, a quadratic model is exploited to derive new conjugacy parameters β_k, giving rise to new CG algorithms.

NEW CONJUGATE GRADIENT COEFFICIENTS
The formulation of the new CG method presented here exploits a classical quadratic model, chosen for its simplicity. Iiduka and Narushima [21] propose the following choice for β_k:

β_k = g_{k+1}^T Q d_k / (d_k^T Q d_k),  (10)

where Q is the constant Hessian of some quadratic function. The parameter β_k satisfies a conjugacy condition of the form

d_{k+1}^T Q d_k = 0.  (11)

In our derivation we introduce an appropriate approximation to the quantity d_k^T Q s_k, which is essential to the proposed method. Assume f is a quadratic function of the form

f(x) = (1/2) x^T Q x + b^T x + c.  (12)

This quadratic function's gradient is explicitly given by (13):

g(x) = Q x + b.  (13)
As a result, curvature information may be expressed as

y_k = g_{k+1} − g_k = Q(x_{k+1} − x_k) = Q s_k.  (14)
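Relation (14) can be verified numerically for a synthetic quadratic; all data below are illustrative:

```python
import numpy as np

# For f(x) = 0.5 x^T Q x + b^T x, the gradient is g(x) = Q x + b, so
# y_k = g_{k+1} - g_k = Q (x_{k+1} - x_k) = Q s_k  -- relation (14).
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
Q = A @ A.T + n * np.eye(n)    # symmetric positive definite Hessian
b = rng.standard_normal(n)

x_k = rng.standard_normal(n)
x_k1 = rng.standard_normal(n)
s_k = x_k1 - x_k
y_k = (Q @ x_k1 + b) - (Q @ x_k + b)   # gradient difference g_{k+1} - g_k

# Consequently, d^T Q s_k can be computed from gradients alone as d^T y_k,
# which is the approximation exploited in the derivation of the BL methods.
d = rng.standard_normal(n)
lhs = d @ (Q @ s_k)
rhs = d @ y_k
```

This is why the Hessian never needs to be formed explicitly: every occurrence of Q s_k can be replaced by the gradient difference y_k.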
From (14), we obtain an expression for d_k^T Q s_k in terms of gradient differences. Given that conjugacy holds and that the line search is exact, (16) then reduces to three alternative expressions for β_k. These new expressions are collectively referred to here as the BL algorithms (BL1, BL2, and BL3).
The following result is an outcome of using (22). Using (16) and (21), we obtain (23). From the downhill property of the search direction, it is obvious that d_k^T g_k < 0; the required bound then follows, which completes the proof.

CONVERGENCE ANALYSIS
In order to establish the global convergence of the BL algorithms, the following assumptions are needed:
− Assumption 1. The level set Ω = {x ∈ R^n : f(x) ≤ f(x_1)} is bounded.
− Assumption 2. In some neighborhood 𝒩 of Ω, the gradient g of the objective function is Lipschitz continuous; namely, there exists a constant L > 0 such that ‖g(x) − g(y)‖ ≤ L‖x − y‖ for all x, y ∈ 𝒩 (see [22] for more details).
The theorems in [23] have proven useful for establishing global convergence. We adopt some of them here and prove them for our methods, together with some of the results in [24], [25].
Lemma 1. Suppose that Assumptions 1 and 2 above hold. Then, for any method of the form (4) and (8) in which α_k is obtained by the Wolfe line search, (26) holds:

Σ_{k≥1} (g_k^T d_k)² / ‖d_k‖² < ∞.  (26)
Theorem 2. Suppose that Assumptions 1 and 2 above hold. If the formula for β_k satisfies (20), then

lim inf_{k→∞} ‖g_k‖ = 0.  (27)

Proof: Assume, for contradiction, that (27) does not hold. Formula (8) may be expressed as d_{k+1} + g_{k+1} = β_k d_k. Squaring both sides, we obtain (28). Using (23), the bound (29) holds. Dividing both sides of (29) by (d_{k+1}^T g_{k+1})² gives (30), which yields (31) and hence (32). Now assume that there exists ε_1 > 0 such that ‖g_k‖ > ε_1 for every k. Using this assumption and (33), we obtain (34), which contradicts Lemma 1. We may therefore conclude that lim inf_{k→∞} ‖g_k‖ = 0 holds, completing the proof.

NUMERICAL RESULTS
The performance of the BL1, BL2, and BL3 algorithms is examined in the context of minimizing the salt-and-pepper impulse noise functional (3). The test images are listed in Table 1, which also reports the numerical results comparing the classical Fletcher-Reeves (FR) method with the newly derived ones in terms of the number of iterations, the number of function/gradient evaluations, and the peak signal-to-noise ratio (PSNR). All simulations are run using MATLAB 2015a. It is worth emphasizing that the major focus of the study is how efficiently the minimization problem (3) can be solved. The pixel quality of the restored images is assessed using the PSNR value given by (35):

PSNR = 10 log_10 ( 255² / ( (1/(MN)) Σ_{i,j} (u_{i,j} − u*_{i,j})² ) ),  (35)

where u_{i,j} and u*_{i,j} denote the pixel values of the restored and the original image, respectively. For both procedures, the termination conditions are given by (36).
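A minimal sketch of the PSNR computation in (35), assuming 8-bit images with peak value 255 (the function name is illustrative; the paper's experiments use MATLAB, Python is used here only for illustration):

```python
import numpy as np

def psnr(restored, original, peak=255.0):
    # PSNR = 10 log10( peak^2 / MSE ), where MSE is the mean squared
    # pixel error between the restored and the original image.
    mse = np.mean((restored.astype(float) - original.astype(float)) ** 2)
    return 10.0 * np.log10(peak**2 / mse)
```

Higher PSNR values indicate a restored image closer to the original; the measure is undefined when the two images are identical (zero MSE).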

Table 1. Numerical results of the FR and new algorithms