Abstract
In conventional matrix methods, the problem is first discretized by selecting a set of basis functions, and the resulting matrix problem is then solved exactly, so that the error in solving the matrix equation is zero while the discretization error remains fixed. By applying the conjugate gradient (iterative) method directly to the operator equation, an exact solution to the infinite-dimensional problem is developed in a symbolic fashion. The exact solution is attained in M steps, where M is the number of independent eigenvalues of the operator in the infinite-dimensional space. From a computational point of view, the solutions of the two equations may be similar or different depending on how the weighting function is incorporated in the inner product and how the derivative operator is treated; for the iterative method this is a secondary point. The greatest strength of the iterative solution of the operator equation is that, unlike matrix methods, the expansion functions (i.e., the discretized representation of the unknown) can be changed at each iteration according to the desired degree of accuracy, and the discretization error can be quantified at each iteration, if so desired.
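As a rough illustration of the finite-step property described above, the sketch below (a generic Python/NumPy conjugate gradient loop, not the authors' operator formulation) solves a symmetric positive-definite system whose operator has only three distinct eigenvalues; in exact arithmetic CG then terminates in three iterations, mirroring the M-step claim. The matrix construction, tolerance, and random right-hand side are illustrative assumptions.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Plain conjugate gradient for A x = b; returns solution and iteration count."""
    n = b.size
    max_iter = max_iter or n
    x = np.zeros(n)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs_old = r @ r
    for k in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            return x, k
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x, max_iter

# Illustrative SPD operator: 50x50 with only 3 distinct eigenvalues (1, 2, 5).
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((50, 50)))
eigvals = np.repeat([1.0, 2.0, 5.0], [20, 20, 10])
A = Q @ np.diag(eigvals) @ Q.T
b = rng.standard_normal(50)

x, iters = conjugate_gradient(A, b)
print(f"converged in {iters} iterations")          # expect 3, up to round-off
print("residual norm:", np.linalg.norm(A @ x - b))
```

In the finite-dimensional analogue shown here, the iteration count is governed by the number of distinct eigenvalues rather than the matrix dimension, which is the property the abstract carries over to the infinite-dimensional operator setting.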
| Original language | English (US) |
| --- | --- |
| Title of host publication | IEEE Antennas and Propagation Society, AP-S International Symposium (Digest) |
| Publisher | IEEE Computer Society |
| Pages | 222-224 |
| Number of pages | 3 |
| Volume | 1 |
| State | Published - 1988 |
ASJC Scopus subject areas
- Electrical and Electronic Engineering