Similar papers
July 14, 2015
In scientific computing, it is time-consuming to calculate an inverse operator ${\mathscr A}^{-1}$ of a differential equation ${\mathscr A}\varphi = f$, especially when ${\mathscr A}$ is a highly nonlinear operator. In this paper, based on the homotopy analysis method (HAM), a new approach, namely the method of directly defining inverse mapping (MDDiM), is proposed to gain analytic approximations of nonlinear differential equations. In other words, one can solve a nonlinear d...
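For context only: the MDDiM builds on the homotopy analysis method cited in this abstract, whose standard zeroth-order deformation equation (a sketch of the HAM background, not of the MDDiM-specific inverse mapping, which the truncated abstract does not spell out) reads

$$ (1-q)\,\mathcal{L}\big[\phi(x;q)-u_0(x)\big] \;=\; q\,c_0\,\mathcal{N}\big[\phi(x;q)\big], \qquad q\in[0,1], $$

where $\mathcal{L}$ is an auxiliary linear operator, $u_0$ an initial guess, $c_0$ the convergence-control parameter, and $\mathcal{N}[\varphi]={\mathscr A}\varphi-f$ the nonlinear operator; as $q$ moves from 0 to 1, $\phi$ deforms from $u_0$ toward a solution of ${\mathscr A}\varphi=f$. As its name suggests, the MDDiM replaces the costly inversion of a linear operator in this framework by a directly defined inverse mapping.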
April 22, 2019
A very simple and efficient local variational iteration method for solving problems of nonlinear science is proposed in this paper. The analytical iteration formula of this method is first derived for a general form of first-order nonlinear differential equations, followed by a straightforward discretization using Chebyshev polynomials and the collocation method. The resulting numerical algorithm is very concise and easy to use, only involving highly sparse matrix operations of a...
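As an illustration of the Chebyshev/collocation ingredient alone (not the paper's variational iteration formula, which the truncated abstract does not give), a minimal Python sketch for the linear test problem u'(t) = -u(t), u(0) = 1 on [0, 1]:

import numpy as np
from numpy.polynomial import chebyshev as C

N = 16                                        # polynomial degree (illustrative choice)
s = np.cos(np.pi * np.arange(N + 1) / N)      # Chebyshev-Lobatto nodes on [-1, 1]
t = 0.5 * (s + 1.0)                           # map nodes to [0, 1]

V = C.chebvander(s, N)                        # T_0..T_N evaluated at the nodes
D = np.zeros((N + 1, N + 1))                  # coefficient-space differentiation matrix
for k in range(N + 1):
    e = np.zeros(N + 1); e[k] = 1.0
    D[:N, k] = C.chebder(e)                   # chebder lowers the degree by one
Vd = 2.0 * (V @ D)                            # chain rule: ds/dt = 2 on [0, 1]

A = Vd + V                                    # collocate u'(t_j) + u(t_j) = 0
A[-1, :] = V[-1, :]                           # replace the row at t = 0 by u(0) = 1
b = np.zeros(N + 1); b[-1] = 1.0
c = np.linalg.solve(A, b)
print(np.max(np.abs(V @ c - np.exp(-t))))     # error near machine precision

The paper's method would instead apply its analytical iteration formula locally, with the Chebyshev/collocation machinery supplying the discretization.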
October 4, 2018
We propose a nonlocal operator method for solving partial differential equations (PDEs). The nonlocal operator is derived from the Taylor series expansion of the unknown field, and can be regarded as the integral form "equivalent" to the differential form in the sense of nonlocal interaction. The variation of a nonlocal operator is similar to the derivative of a shape function in meshless and finite element methods, and thus circumvents the difficulty in the calculation of shape functi...
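One common way to obtain such an operator from a Taylor expansion (stated here as an assumed first-order form, since the truncated abstract does not give the paper's exact definitions): writing $u(\mathbf{x}+\mathbf{r})\approx u(\mathbf{x})+\mathbf{r}\cdot\nabla u(\mathbf{x})$, multiplying by $w(\mathbf{r})\,\mathbf{r}$ and integrating over the support $\mathcal{H}_x$ gives the nonlocal gradient

$$ \nabla u(\mathbf{x}) \;\approx\; \mathbf{K}^{-1}(\mathbf{x})\int_{\mathcal{H}_x} w(\mathbf{r})\,\big(u(\mathbf{x}+\mathbf{r})-u(\mathbf{x})\big)\,\mathbf{r}\,\mathrm{d}V_{\mathbf{r}}, \qquad \mathbf{K}(\mathbf{x})=\int_{\mathcal{H}_x} w(\mathbf{r})\,\mathbf{r}\otimes\mathbf{r}\,\mathrm{d}V_{\mathbf{r}}, $$

so derivatives are expressed through weighted integrals of field differences rather than through shape-function derivatives; here $w$, $\mathcal{H}_x$ and $\mathbf{K}$ are illustrative notation.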
August 23, 2017
The purpose of this research is to propose a new approach named the shifted Bessel Tau (SBT) method for solving higher-order ordinary differential equations (ODEs). The operational matrices of derivative, integral and product of shifted Bessel polynomials on the interval [a, b] are calculated. These matrices, together with the Tau method, are utilized to reduce the solution of the higher-order ODE to the solution of a system of algebraic equations with unknown Bessel coefficient...
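The generic operational-matrix mechanism behind such Tau methods (shown here schematically, not with the shifted Bessel matrices themselves): for a truncated basis vector $\boldsymbol{\Phi}(x)=[\varphi_0,\dots,\varphi_N]^T$ with $\boldsymbol{\Phi}'(x)\approx \mathbf{D}\,\boldsymbol{\Phi}(x)$ and an approximation $u(x)\approx \mathbf{c}^T\boldsymbol{\Phi}(x)$, a linear ODE such as $u''+u=f$ becomes

$$ \mathbf{c}^{T}\big(\mathbf{D}^{2}+\mathbf{I}\big)\boldsymbol{\Phi}(x)=\mathbf{f}^{T}\boldsymbol{\Phi}(x) \;\Longrightarrow\; \big(\mathbf{D}^{2}+\mathbf{I}\big)^{T}\mathbf{c}=\mathbf{f}, $$

with $\mathbf{f}$ the coefficient vector of the expanded right-hand side and the boundary conditions appended as additional rows in the usual Tau fashion.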
November 15, 2021
We consider a class of difference-of-convex (DC) optimization problems where the objective function is the sum of a smooth function and a possibly nonsmooth DC function. The application of proximal DC algorithms to address this problem class is well known. In this paper, we combine a proximal DC algorithm with an inexact proximal Newton-type method to propose an inexact proximal DC Newton-type method. We demonstrate global convergence properties of the proposed method. In add...
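A minimal sketch of a plain proximal DC iteration (not the paper's inexact proximal Newton-type variant), assuming the concrete objective $\tfrac12\|Ax-b\|^2+\lambda(\|x\|_1-\|x\|_2)$, whose concave part $-\lambda\|x\|_2$ is linearized at each step; the regularization weight and iteration count below are illustrative:

import numpy as np

def soft_threshold(v, tau):
    # proximal operator of tau * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_dc(A, b, lam=0.05, iters=2000):
    # minimize 0.5*||Ax-b||^2 + lam*(||x||_1 - ||x||_2):
    # linearize the concave part -lam*||x||_2, then take a proximal gradient step
    step = 1.0 / np.linalg.norm(A, 2) ** 2              # 1/L for the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                        # gradient of the smooth part
        nrm = np.linalg.norm(x)
        xi = lam * x / nrm if nrm > 0 else np.zeros_like(x)  # subgradient of lam*||x||_2
        x = soft_threshold(x - step * (grad - xi), step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
b = A @ x_true
print(np.round(proximal_dc(A, b)[:8], 3))               # first entries should be close to 1.0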
April 1, 2013
For nonlinear reduced-order models, especially those with non-polynomial nonlinearities, the computational complexity still depends on the dimension of the original dynamical system. As a result, the reduced-order model loses its computational efficiency, which, however, is its most significant advantage. Nonlinear dimensional reduction methods, such as the discrete empirical interpolation method, have been widely used to evaluate the nonlinear terms at a low cost. Bu...
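A minimal sketch of the standard DEIM point selection and interpolation mentioned here (assuming a POD basis U of nonlinear-term snapshots is already available; names and the test nonlinearity are illustrative):

import numpy as np

def deim_indices(U):
    # greedy DEIM point selection from an n-by-m POD basis U of the nonlinear term
    n, m = U.shape
    p = [int(np.argmax(np.abs(U[:, 0])))]
    for l in range(1, m):
        c, *_ = np.linalg.lstsq(U[p, :l], U[p, l], rcond=None)  # interpolate column l
        r = U[:, l] - U[:, :l] @ c                              # interpolation residual
        p.append(int(np.argmax(np.abs(r))))
    return np.array(p)

def deim_approx(U, p, f_at_p):
    # reconstruct the full nonlinear term from its values at the DEIM points:
    # f ~ U (P^T U)^{-1} P^T f, where P picks the rows listed in p
    return U @ np.linalg.solve(U[p, :], f_at_p)

x = np.linspace(0, 1, 200)
snaps = np.column_stack([np.exp(-mu * x) * np.sin(np.pi * x) for mu in np.linspace(1, 5, 20)])
U, _, _ = np.linalg.svd(snaps, full_matrices=False)
U = U[:, :6]
p = deim_indices(U)
f = np.exp(-2.7 * x) * np.sin(np.pi * x)
print(np.max(np.abs(deim_approx(U, p, f[p]) - f)))      # small approximation error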
May 20, 2014
We present an accurate and efficient approach for the adaptive discretization of typical model equations employed in numerical weather prediction. A semi-Lagrangian approach is combined with the TR-BDF2 semi-implicit time discretization method and with a spatial discretization based on adaptive discontinuous finite elements. The resulting method has full second-order accuracy in time, can employ polynomial bases of arbitrarily high degree in space, is uncond...
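A minimal sketch of the TR-BDF2 time discretization alone, applied to the scalar linear test equation y' = lam*y so that both implicit stages can be solved in closed form (the semi-Lagrangian and discontinuous finite element ingredients are not shown; the coefficients follow the standard Bank et al. form with gamma = 2 - sqrt(2)):

import numpy as np

def tr_bdf2_linear(lam, y0, dt, nsteps, gamma=2.0 - np.sqrt(2.0)):
    # TR-BDF2 for y' = lam*y: trapezoidal stage to t_n + gamma*dt, then a BDF2 stage
    a = 1.0 / (gamma * (2.0 - gamma))
    b = (1.0 - gamma) ** 2 / (gamma * (2.0 - gamma))
    c = (1.0 - gamma) / (2.0 - gamma)
    y = y0
    for _ in range(nsteps):
        y_g = y * (1.0 + 0.5 * gamma * dt * lam) / (1.0 - 0.5 * gamma * dt * lam)  # TR stage
        y = (a * y_g - b * y) / (1.0 - c * dt * lam)                               # BDF2 stage
    return y

lam, T = -2.0, 1.0
for n in (10, 20, 40, 80):
    print(n, abs(tr_bdf2_linear(lam, 1.0, T / n, n) - np.exp(lam * T)))
# the error should drop by roughly a factor of 4 each time the step is halved (2nd order)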
July 9, 2002
This paper shows that the weighting coefficient matrices of the differential quadrature method (DQM) are centrosymmetric or skew-centrosymmetric whenever the grid spacings are symmetric, irrespective of whether they are equal or unequal. A new skew-centrosymmetric matrix is also discussed. Applying the properties of centrosymmetric and skew-centrosymmetric matrices can reduce the computational effort of the DQM for calculations of the inverse, determinant, eigenvectors and e...
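A minimal numerical check of the stated property, using the standard Lagrange-interpolation (Shu-type) first-order weighting coefficients on a symmetric but unequally spaced grid; J below is the exchange (counter-identity) matrix:

import numpy as np

def dq_first_order_weights(x):
    # first-order DQM weighting coefficients a_ij from Lagrange interpolation
    n = len(x)
    M = np.array([np.prod([x[i] - x[k] for k in range(n) if k != i]) for i in range(n)])
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                A[i, j] = M[i] / ((x[i] - x[j]) * M[j])
    for i in range(n):
        A[i, i] = -A[i].sum()                 # row sums of a derivative matrix are zero
    return A

n = 9
x = -np.cos(np.pi * np.arange(n) / (n - 1))   # symmetric, unequally spaced grid
A = dq_first_order_weights(x)
B = A @ A                                     # second-order weighting coefficients
J = np.fliplr(np.eye(n))                      # exchange matrix
print(np.allclose(J @ A @ J, -A))             # True: A is skew-centrosymmetric
print(np.allclose(J @ B @ J, B))              # True: B is centrosymmetric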
May 27, 2021
The q-Gauss-Newton algorithm is an iterative procedure that solves nonlinear unconstrained optimization problems by minimizing the sum of squared errors of the objective function residuals. The main advantage of the algorithm is that it approximates the matrix of second-order q-derivatives with the first-order q-Jacobian matrix. For that reason, the algorithm is much faster than q-steepest descent algorithms. The convergence of the q-GN method is assured only when the initial guess...
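A minimal sketch under illustrative assumptions: a residual vector r(x), a q-Jacobian built column-wise from the q-derivative D_q f(x) = (f(qx) - f(x)) / ((q - 1)x), and the plain Gauss-Newton update; the safeguards the paper needs for poor initial guesses are not shown, and the test problem below is made up:

import numpy as np

def q_jacobian(res, x, q=0.9):
    # q-Jacobian of the residual vector: column j uses the q-derivative in x_j,
    # D_q f = (f(q*x_j) - f(x_j)) / ((q - 1) * x_j)   (all x_j assumed nonzero)
    r0 = res(x)
    J = np.empty((r0.size, x.size))
    for j in range(x.size):
        xq = x.copy()
        xq[j] = q * x[j]
        J[:, j] = (res(xq) - r0) / ((q - 1.0) * x[j])
    return J

def q_gauss_newton(res, x0, q=0.9, iters=30):
    # replace the matrix of second-order q-derivatives by J^T J (Gauss-Newton idea)
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        r = res(x)
        J = q_jacobian(res, x, q)
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)   # least-squares solve of J*step = -r
        x = x + step
    return x

t = np.linspace(0.1, 1.0, 12)
y = 2.0 * np.exp(-1.5 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y             # fit y = a*exp(b*t)
print(q_gauss_newton(res, [1.0, -0.5]))                 # should approach [2.0, -1.5]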
August 10, 2014
In this research, the Bernoulli polynomials are introduced. The properties of these polynomials are employed to construct the operational matrices of integration, derivative and product. These properties are then utilized to transform the differential equation into a matrix equation, which corresponds to a system of algebraic equations with unknown Bernoulli coefficients. This method can be used for many problems such as differential equations, integral equation...
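A small illustration of one of these operational matrices, the derivative matrix, which follows from the identity B_n'(x) = n B_{n-1}(x); the integration and product matrices are built in the same spirit but are not shown:

import sympy as sp

N = 5
x = sp.symbols('x')
Phi = sp.Matrix([sp.bernoulli(n, x) for n in range(N + 1)])   # [B_0(x), ..., B_N(x)]

M = sp.zeros(N + 1, N + 1)          # derivative operational matrix: Phi'(x) = M * Phi(x)
for n in range(1, N + 1):
    M[n, n - 1] = n                 # from B_n'(x) = n * B_{n-1}(x)

print((Phi.diff(x) - M * Phi).applyfunc(sp.expand).T)   # a zero row vector confirms the identity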