Similar papers 3
November 12, 2021
In this paper we present a short overview of a new Wolfram Mathematica package intended for elementary "in-basis" tensor and differential-geometric calculations. In contrast to the alternatives, our package is designed to be easy to use, short, all-purpose, and hackable. It supports tensor contractions in Einstein notation, transformations between different bases, a tensor derivative operator, expansion in basis vectors and forms, the exterior derivative, and the interior product.
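The Einstein-notation contraction the abstract mentions can be illustrated independently of the package itself; a minimal sketch using NumPy's einsum (the tensor and vector below are made-up examples, not from the paper):

```python
import numpy as np

# Contract T^{ab} v_b -> u^a: the repeated index b is summed,
# as in Einstein's summation convention.
T = np.arange(9.0).reshape(3, 3)   # a rank-2 tensor T^{ab}
v = np.array([1.0, 0.0, 2.0])      # a vector v_b

u = np.einsum('ab,b->a', T, v)
assert np.allclose(u, T @ v)       # same as an ordinary matrix-vector product
```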
February 11, 2008
The long-standing problem of the relations among the scalar invariants of the Riemann tensor is computationally solved for all 6x10^23 objects with up to 12 derivatives of the metric. This covers cases ranging from products of up to 6 undifferentiated Riemann tensors to cases with up to 10 covariant derivatives of a single Riemann tensor. We extend our computer algebra system Invar to produce within seconds a canonical form for any of those objects in terms of a basis. The process i...
November 9, 2002
Symmetry properties of r-times covariant tensors T can be described by certain linear subspaces W of the group ring K[S_r] of a symmetric group S_r. If for a class of tensors T such a W is known, the elements of the orthogonal subspace W^{\bot} of W within the dual space of K[S_r] yield linear identities needed for a treatment of the term combination problem for the coordinates of the T. We give the structure of these W for every situation which appears in symbolic tensor cal...
December 10, 2019
In a recent paper by the author (Chen in JHEP 02:115, 2020), the reduction of Feynman integrals in the parametric representation was considered. Tensor integrals were directly parametrized by using a generator method. The resulting parametric integrals were reduced by constructing and solving parametric integration-by-parts (IBP) identities. In this paper, we furthermore show that polynomial equations for the operators that generate tensor integrals can be derived. Based on t...
August 25, 2014
This paper studies symmetric tensor decompositions. For symmetric tensors, there exist linear relations of recursive patterns among their entries. Such a relation can be represented by a polynomial, which is called a generating polynomial. The homogenization of a generating polynomial belongs to the apolar ideal of the tensor. A symmetric tensor decomposition can be determined by a set of generating polynomials, which can be represented by a matrix. We call it a generating ma...
November 6, 2014
Tensor transpose is a higher-order generalization of matrix transpose. In this paper, we use permutations and the symmetric group to define the tensor transpose. Then we discuss the classification and composition of tensor transposes. Properties of the tensor transpose are studied in relation to tensor multiplication, tensor eigenvalues, tensor decompositions, and tensor rank.
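The idea of a tensor transpose as an index permutation, and of composition of transposes as composition of permutations, can be sketched with NumPy (the shapes and permutations below are arbitrary illustrations, not taken from the paper):

```python
import numpy as np

# A tensor transpose is given by a permutation sigma of the index positions.
T = np.random.rand(2, 3, 4)
sigma = (2, 0, 1)                  # axis 2 comes first, then axes 0 and 1
Ts = np.transpose(T, sigma)
assert Ts.shape == (4, 2, 3)

# Composing two transposes corresponds to composing the permutations:
tau = (1, 2, 0)
lhs = np.transpose(Ts, tau)
composed = tuple(sigma[i] for i in tau)   # sigma applied after tau on axis labels
rhs = np.transpose(T, composed)
assert np.array_equal(lhs, rhs)
```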
January 21, 2003
We discuss the application of computer algebra to problems commonly arising in numerical relativity, such as the derivation of 3+1-splits, manipulation of evolution equations and automatic code generation. Particular emphasis is put on working with abstract index tensor quantities as much as possible.
February 5, 2014
A fundamental process in the implementation of any numerical tensor network algorithm is that of contracting a tensor network. In this process, a network made up of multiple tensors connected by summed indices is reduced to a single tensor or a number by evaluating the index sums. This article presents a MATLAB function ncon(), or "Network CONtractor", which accepts as its input a tensor network and a contraction sequence describing how this network may be reduced to a single...
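A tensor-network contraction of the kind ncon() performs can be sketched in a few lines; this is an illustrative reimplementation of one simple case using einsum, not the authors' MATLAB code. In ncon's convention, positive labels mark indices summed between two tensors and negative labels mark free output indices:

```python
import numpy as np

# Contract the network A_{ij} B_{jk} over the shared (summed) index j,
# i.e. ncon({A, B}, {[-1 1], [1 -2]}) — an ordinary matrix product.
A = np.random.rand(3, 4)
B = np.random.rand(4, 5)
C = np.einsum('ij,jk->ik', A, B)
assert np.allclose(C, A @ B)
```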
November 3, 2017
Kjolstad et al. proposed a tensor algebra compiler. It takes expressions that define a tensor element-wise, such as $f_{ij}(a,b,c,d) = \exp\left[-\sum_{k=0}^4 \left((a_{ik}+b_{jk})^2\, c_{ii} + d_{i+k}^3 \right) \right]$, and generates the corresponding compute kernel code. For machine learning, especially deep learning, it is often necessary to compute the gradient of a loss function $l(a,b,c,d)=l(f(a,b,c,d))$ with respect to parameters $a,b,c,d$. If tensor compilers are ...
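The kind of gradient computation at issue can be illustrated on a much simpler element-wise tensor expression than the one in the abstract; a hedged sketch (the loss below is a made-up example), with the analytic gradient checked against finite differences:

```python
import numpy as np

# Toy element-wise tensor expression: l(a) = sum_i exp(-a_i^2),
# with gradient d l / d a_i = -2 a_i exp(-a_i^2).
a = np.array([0.5, -1.0, 2.0])

def loss(a):
    return np.sum(np.exp(-a**2))

grad_analytic = -2 * a * np.exp(-a**2)

# Central finite differences as an independent check.
eps = 1e-6
grad_fd = np.array([(loss(a + eps*e) - loss(a - eps*e)) / (2*eps)
                    for e in np.eye(3)])
assert np.allclose(grad_analytic, grad_fd, atol=1e-6)
```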
January 31, 2013
Symmetric tensor operations arise in a wide variety of computations. However, the benefits of exploiting symmetry in order to reduce storage and computation are in conflict with a desire to simplify memory access patterns. In this paper, we propose a blocked data structure (Blocked Compact Symmetric Storage) wherein we consider the tensor by blocks and store only the unique blocks of a symmetric tensor. We propose an algorithm-by-blocks, already shown to be of benefit for matrix com...
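The storage saving from keeping only unique entries can be sketched at the scalar (unblocked) level; this is an assumed illustrative layout, not the paper's Blocked Compact Symmetric Storage format. For a symmetric order-3 tensor it suffices to store entries with sorted multi-indices i <= j <= k:

```python
import itertools
import numpy as np

n = 4
# Build a symmetric order-3 tensor by symmetrizing a random one.
full = np.random.rand(n, n, n)
sym = np.zeros_like(full)
for p in itertools.permutations(range(3)):
    sym += np.transpose(full, p)
sym /= 6

# Store only entries with sorted multi-index i <= j <= k.
packed = {idx: sym[idx]
          for idx in itertools.combinations_with_replacement(range(n), 3)}

def lookup(i, j, k):
    # Any permutation of the indices maps to the same stored entry.
    return packed[tuple(sorted((i, j, k)))]

assert np.isclose(lookup(2, 0, 1), sym[0, 1, 2])
# n(n+1)(n+2)/6 unique entries instead of n^3.
assert len(packed) == n * (n + 1) * (n + 2) // 6
```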