Basic difference between solving an operator equation and a matrix equation by the conjugate gradient

Tapan Kumar Sarkar, Ercument Arvas

Research output: Chapter in Book/Entry/Poem › Conference contribution

Abstract

In conventional matrix methods, the problem is first discretized by selecting a set of basis functions, and the resulting matrix problem is then solved exactly so that the error is always zero. By applying the conjugate gradient (iterative) method directly to the operator equation, an exact solution to the infinite-dimensional problem is developed in a symbolic fashion. The exact solution is attained in M steps, where M is the number of independent eigenvalues of the operator in the infinite-dimensional space. From a computational point of view, the solutions of the two equations are similar or different depending on how the weighting function is incorporated in the inner product and how the derivative operator is treated; this is a secondary point for the iterative methods. The greatest strength of the iterative solution of the operator equation lies in the fact that, unlike matrix methods, the nature of the expansion functions (i.e., the discretized version of the unknown) can be changed at each iteration depending on the desired degree of accuracy, and the discretization error can be quantified at each iteration, if so desired.
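The M-step convergence property described above can be illustrated in the finite-dimensional (matrix) setting, where the same result holds: conjugate gradient converges, in exact arithmetic, in as many iterations as the matrix has distinct eigenvalues. The sketch below is illustrative only and is not the authors' formulation; the matrix, tolerance, and function names are assumptions chosen for the demonstration.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Plain conjugate gradient for a symmetric positive definite A.

    Returns the solution and the number of iterations taken.
    """
    n = len(b)
    x = np.zeros(n)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs_old = r @ r
    steps = 0
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)   # step length along p
        x += alpha * p
        r -= alpha * Ap
        steps += 1
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p  # A-conjugate direction update
        rs_old = rs_new
    return x, steps

# Build a 6x6 SPD matrix with only 2 distinct eigenvalues (2 and 5):
Q, _ = np.linalg.qr(np.random.default_rng(0).normal(size=(6, 6)))
A = Q @ np.diag([2.0, 2.0, 2.0, 5.0, 5.0, 5.0]) @ Q.T
b = np.ones(6)
x, steps = conjugate_gradient(A, b)
# In exact arithmetic CG terminates after 2 iterations here,
# matching the number of distinct eigenvalues rather than the
# dimension of the matrix.
```

In the infinite-dimensional setting of the abstract, the dimension n plays no role; only the (finite) number M of independent eigenvalues of the operator controls termination, which is why the iterative method can be applied to the operator equation directly.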

Original language: English (US)
Title of host publication: IEEE Antennas and Propagation Society, AP-S International Symposium (Digest)
Publisher: IEEE Computer Society
Pages: 222-224
Number of pages: 3
Volume: 1
State: Published - 1988

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
