TOPS Solver Component

The TOPS Solver Component (TSC) is a Babel/SIDL-based, CCA-compliant HPC software component (henceforth shortened to CCA component). It provides direct access to virtually all of the TOPS (as well as many other) linear and nonlinear algebraic solvers, including geometric and algebraic multigrid (a partial list may be found here).

TOPS solvers may be used in three distinct ways:
  1. as a solver component (TSC),
  2. through a common C, C++, Fortran, and Python language binding (PETSc),
  3. through each package's individual binding.
For all three approaches, first download and install the TOPS software. (These pages ONLY describe using the TOPS Solver Component; consult PETSc or each package directly to see how to use them as traditional software libraries.)

The application developer interacts with the TSC by constructing a CCA component that implements the TOPS.System interface and one or more problem-specific interfaces. This System component defines the algebraic system to be solved.

The TSC and the System component can be combined using a traditional programming language, a component scripting language, or a component GUI such as ccaffeine (see the demo). The two components then collaborate to solve one or more algebraic problems. Complex applications will likely also couple in several additional CCA components; see the CCA tutorials for more information on writing applications with CCA components.

The TOPS component generator can be used to generate the SIDL for your problem and all the boilerplate code needed to use it as a CCA component.

TOPS Solver Component Tutorial

Installation:

If you did not use the TOPSInstaller to build the TSC, then configure PETSc with config/configure.py --with-shared=1 --with-babel-dir=dir --with-ccafe-dir=dir --with-clanguage=c++ (plus any other options you desire). Then run make all test.

To compile and run the first example, cd src/tops/examples/c++/ex1 and run make server-c++ test-cca.

Example 1:

The first example is the classic Bratu problem, discretized with finite differences on a regular grid in two dimensions. The application code consists of the SIDL definition of the System component (in ex1.sidl)

package Ex1 version 0.0.0 {
  class System implements-all TOPS.System.System, TOPS.System.Compute.Residual {}
}

and the code that defines the nonlinear equation (in Ex1_System_Impl.cc)

void Ex1::System_impl::computeResidual ( /* in */ ::sidl::array<double> x, /* in */ ::sidl::array<double> f) throw () {
  // DO-NOT-DELETE splicer.begin(Ex1.System.computeResidual)
  TOPS::Structured::Solver solver = this->solver;
  int xs = f.lower(0);      // first grid point in X and Y directions on this process
  int ys = f.lower(1);
  int xm = f.length(0);     // number of local grid points in X and Y directions on this process
  int ym = f.length(1);
  int i,j;
  int mx = solver.getDimensionX();
  int my = solver.getDimensionY();
  double hx = 1.0/(double)(mx-1);
  double hy = 1.0/(double)(my-1);
  double sc = hx*hy;
  double hxdhy = hx/hy;
  double hydhx = hy/hx;

  /* Compute function over the locally owned part of the grid */
  for (j=ys; j<ys+ym; j++) {
    for (i=xs; i<xs+xm; i++) {
      if (i == 0 || j == 0 || i == mx-1 || j == my-1) {
        f.set(i,j,x.get(i,j));
      } else {
        double u   = x.get(i,j);
        double uxx = (2.0*u - x.get(i-1,j) - x.get(i+1,j))*hydhx;
        double uyy = (2.0*u - x.get(i,j-1) - x.get(i,j+1))*hxdhy;
        f.set(i,j,uxx + uyy - sc*exp(u));
      }
    }
  }
  // DO-NOT-DELETE splicer.end(Ex1.System.computeResidual)
}
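
To see the same discretization outside the component machinery, the following standalone C++ sketch (not part of the TSC example; every name in it is local to the sketch) evaluates the identical five-point Bratu residual on a small serial grid:

// Standalone sketch: evaluate the Bratu residual uxx + uyy - hx*hy*exp(u)
// on an mx-by-my grid with the same scaling used by Ex1 above.
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
  const int mx = 5, my = 5;                          // small serial grid
  const double hx = 1.0/(mx-1), hy = 1.0/(my-1);
  const double sc = hx*hy, hxdhy = hx/hy, hydhx = hy/hx;
  std::vector<double> x(mx*my, 0.0), f(mx*my, 0.0);  // x = current iterate, f = residual
  for (int j = 0; j < my; j++) {
    for (int i = 0; i < mx; i++) {
      if (i == 0 || j == 0 || i == mx-1 || j == my-1) {
        f[j*mx+i] = x[j*mx+i];                       // Dirichlet boundary rows
      } else {
        double u   = x[j*mx+i];
        double uxx = (2.0*u - x[j*mx+i-1] - x[j*mx+i+1])*hydhx;
        double uyy = (2.0*u - x[(j-1)*mx+i] - x[(j+1)*mx+i])*hxdhy;
        f[j*mx+i]  = uxx + uyy - sc*exp(u);
      }
    }
  }
  printf("residual at center: %g\n", f[(my/2)*mx + mx/2]);
  return 0;
}

For the zero iterate the interior residual is simply -hx*hy = -0.0625, which is what the sketch prints at the grid center.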


Example 2:

The next example is a version of the driven cavity; this is a multicomponent PDE, again solved on a structured grid.
First tell the TOPS.Solver that it is solving a multicomponent problem with four components (in Ex2_System_Impl.cc)

void Ex2::System_impl::initializeOnce () throw () {
  // DO-NOT-DELETE splicer.begin(Ex2.System.initializeOnce)
  this->solver.setBlockSize(4);
  // DO-NOT-DELETE splicer.end(Ex2.System.initializeOnce)
}

 and define several PDE parameters (in Ex2_System_Impl.hh)

// DO-NOT-DELETE splicer.begin(Ex2.System._implementation)
TOPS::Structured::Solver solver;
double                   grashof, prandtl, lid;
// DO-NOT-DELETE splicer.end(Ex2.System._implementation)

and initialize them (in Ex2_System_Impl.cc)

void Ex2::System_impl::_ctor() {
  // DO-NOT-DELETE splicer.begin(Ex2.System._ctor)
  this->lid = 0.0; this->prandtl = 1.0; this->grashof = 1.0;
  // DO-NOT-DELETE splicer.end(Ex2.System._ctor)
}

One can also provide a nonzero initial guess to the solver by inheriting from TOPS.System.Compute.InitialGuess in SIDL, such as (in Ex2.sidl)

package Ex2 version 0.0.0 {
  class System implements-all TOPS.System.System, TOPS.System.Compute.Residual, TOPS.System.Compute.InitialGuess {}
}

and providing code like (in Ex2_System_Impl.cc)

void Ex2::System_impl::computeInitialGuess ( /* in */ ::sidl::array<double> x ) throw () {
  // DO-NOT-DELETE splicer.begin(Ex2.System.computeInitialGuess)
  /*
     Compute initial guess over the locally owned part of the grid
     Initial condition is motionless fluid and equilibrium temperature
  */
  TOPS::Structured::Solver solver = this->solver;
  int xs = x.lower(1);      // first grid point in X and Y directions on this process
  int ys = x.lower(2);
  int xm = x.length(1);       // number of local grid points in X and Y directions on this process
  int ym = x.length(2);
  int i,j;
  double dx  = 1.0/(solver.getDimensionX()-1);
  double grashof = this->grashof; 
  for (j=ys; j<ys+ym; j++) {
    for (i=xs; i<xs+xm; i++) {
      x.set(U,i,j,0.0);
      x.set(V,i,j,0.0);
      x.set(OMEGA,i,j,0.0);
      x.set(TEMP,i,j,(grashof>0)*i*dx);
    }
  }
  // DO-NOT-DELETE splicer.end(Ex2.System.computeInitialGuess)
}
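
The field indices U, V, OMEGA, and TEMP are not shown in this excerpt; presumably they are defined with the rest of the private data in Ex2_System_Impl.hh. A minimal sketch of what those definitions might look like (the names match the code above, but the numbering is an assumption):

// Hypothetical field indices for the four unknowns per grid point
// (x-velocity, y-velocity, vorticity, temperature); assumed to be
// declared alongside the other private data in Ex2_System_Impl.hh.
enum Field { U = 0, V = 1, OMEGA = 2, TEMP = 3 };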

Example 3:

The third example is a simple Poisson problem in three dimensions, again on a structured grid, with zero Dirichlet boundary conditions. The SIDL code is (in ex3.sidl)

package Ex3 version 0.0.0 {
  class System implements-all TOPS.System.System, TOPS.System.Compute.Matrix, TOPS.System.Compute.RightHandSide {}
}

The code that defines the matrix is given by (in Ex3_System_Impl.cc)

void Ex3::System_impl::computeMatrix (/* in */ ::TOPS::Matrix J ) throw () {
  // DO-NOT-DELETE splicer.begin(Ex3.System.computeMatrix)
  TOPS::Structured::Matrix B = (TOPS::Structured::Matrix)J;
  TOPS::Structured::Solver solver = this->solver;
  int xs = B.lower(0);      // first grid point in X, Y, and Z directions on this process
  int ys = B.lower(1);
  int zs = B.lower(2);
  int xm = B.length(0);     // number of local grid points in X, Y, and Z directions on this process
  int ym = B.length(1);
  int zm = B.length(2);
  int i,j,k;
  int mx = solver.getDimensionX();
  int my = solver.getDimensionY();
  int mz = solver.getDimensionZ();

  double hx     = 1.0/(double)(mx-1);
  double hy     = 1.0/(double)(my-1);
  double hz     = 1.0/(double)(mz-1);
  double sc     = hx*hy*hz;
  double hxhydhz  = hx*hy/hz;
  double hyhzdhx  = hy*hz/hx;
  double hxhzdhy  = hx*hz/hy;
 
  /*
     Compute part of matrix over the locally owned part of the grid
  */
  double d = 2.0*(hxhydhz + hxhzdhy + hyhzdhx);
  sidl::array<double> dd = sidl::array<double>::create1d(1,&d);

  double r[7];
  r[0] = r[6] = -hxhydhz;
  r[1] = r[5] = -hxhzdhy;
  r[2] = r[4] = -hyhzdhx;
  r[3] = 2.0*(hxhydhz + hxhzdhy + hyhzdhx);
  sidl::array<double> rr = sidl::array<double>::create1d(7,r);

  for (k=zs; k<zs+zm; k++) {
    for (j=ys; j<ys+ym; j++) {
      for (i=xs; i<xs+xm; i++) {
        if (i==0 || j==0 || k==0 || i==mx-1 || j==my-1 || k==mz-1){
          B.set(i,j,k,dd); // diagonal entry
        } else {
          B.set(i,j,k,rr);   // seven point stencil
        }
      }
    }
  }
  // DO-NOT-DELETE splicer.end(Ex3.System.computeMatrix)
}
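
As a quick sanity check on the stencil above, every interior row of a discrete Laplacian should sum to zero, i.e. the off-diagonal weights should cancel the diagonal. The following standalone sketch (not part of the example; the grid dimensions are made up) verifies this for the weights used in computeMatrix:

// Standalone sketch: verify that the seven-point stencil used above has
// zero row sum for interior points (a basic property of the discrete Laplacian).
#include <cstdio>

int main() {
  const int mx = 9, my = 9, mz = 9;                      // example grid dimensions
  const double hx = 1.0/(mx-1), hy = 1.0/(my-1), hz = 1.0/(mz-1);
  const double hxhydhz = hx*hy/hz, hyhzdhx = hy*hz/hx, hxhzdhy = hx*hz/hy;
  double r[7];
  r[0] = r[6] = -hxhydhz;                                // neighbors in Z
  r[1] = r[5] = -hxhzdhy;                                // neighbors in Y
  r[2] = r[4] = -hyhzdhx;                                // neighbors in X
  r[3] = 2.0*(hxhydhz + hxhzdhy + hyhzdhx);              // diagonal
  double sum = 0.0;
  for (int i = 0; i < 7; i++) sum += r[i];
  printf("interior row sum = %g (expected 0)\n", sum);
  return 0;
}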

The code that computes the right hand side is given by (in Ex3_System_Impl.cc)

void Ex3::System_impl::computeRightHandSide (/* in */ ::sidl::array<double> b ) throw () {
  // DO-NOT-DELETE splicer.begin(Ex3.System.computeRightHandSide)
  TOPS::Structured::Solver solver = this->solver;
  int xs = b.lower(0);      // first grid point in X, Y, and Z directions on this process
  int ys = b.lower(1);
  int zs = b.lower(2);
  int xm = b.length(0);     // number of local grid points in X, Y, and Z directions on this process
  int ym = b.length(1);
  int zm = b.length(2);
  int i,j,k;
  int mx = solver.getDimensionX();
  int my = solver.getDimensionY();
  int mz = solver.getDimensionZ();

  double hx     = 1.0/(double)(mx-1);
  double hy     = 1.0/(double)(my-1);
  double hz     = 1.0/(double)(mz-1);
  double sc     = hx*hy*hz;
 
  /*
     Compute right hand side over the locally owned part of the grid
  */
  for (k=zs; k<zs+zm; k++) {
    for (j=ys; j<ys+ym; j++) {
      for (i=xs; i<xs+xm; i++) {
        if (i == 0 || j == 0 || i == mx-1 || j == my-1 || k == 0 || k == mz-1) {
          b.set(i,j,k,0.0);
        } else {
          b.set(i,j,k,sc);
        }
      }
    }
  } 
  // DO-NOT-DELETE splicer.end(Ex3.System.computeRightHandSide)
}

Example 4:

Handling algebraic systems that arise from unstructured grids, or from any unstructured data structure, is generally orders of magnitude more difficult than for structured grids. This is largely due to the need to manage the problem-specific data (for example, the grid) that is used to evaluate functions and matrices; the SciDAC Terascale Simulation Tools and Technologies (TSTT) ISIC is responsible for developing software for this portion of the application, so the interface to the algebraic solvers can remain very small.

Up front, the user indicates the ghost degrees of freedom needed by each process; then (as with the structured grid case) the solver requests local calculations through the TOPS.System interfaces. The needed ghost nodes are computed by the TSTT component and then provided to the TOPS.Solver by, for example (in Ex4_System_Impl.cc)

void Ex4::System_impl::initializeOnce () throw ()
{
  // DO-NOT-DELETE splicer.begin(Ex4.System.initializeOnce)
  this->solver.setLocalSize(this->n);
  int rank; MPI_Comm_rank(MPI_COMM_WORLD,&rank);
  int size; MPI_Comm_size(MPI_COMM_WORLD,&size);
  int start = this->n*rank;
  int cnt = 0,g[2];
  if (rank) g[cnt++] = start-1;                    // left neighbor's last point
  if (rank != size-1) g[cnt++] = start+this->n;    // right neighbor's first point
  this->solver.setGhostPoints(sidl::array<int>::create1d(cnt,g));
  // DO-NOT-DELETE splicer.end(Ex4.System.initializeOnce)
}

For simplicity we are using a one-dimensional grid decomposed into slices, so each process has one or two ghost points (when run on one process there are no ghost points). The problem, again to keep the code trivial, is the one-dimensional Poisson problem.
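
To make the ownership and ghost layout concrete, here is a small standalone sketch (not TOPS code; the local size and process count are made-up values) that prints, for each rank, the owned index range and the ghost indices chosen by the rule used in initializeOnce above:

// Standalone sketch: 1D slice decomposition with n points per process.
// Each rank owns [start, start+n-1]; its ghosts are the neighbors' adjacent points.
#include <cstdio>

int main() {
  const int n = 4, size = 3;                  // made-up local size and process count
  for (int rank = 0; rank < size; rank++) {
    int start = n*rank;
    int cnt = 0, g[2];
    if (rank) g[cnt++] = start - 1;           // left neighbor's last point
    if (rank != size-1) g[cnt++] = start + n; // right neighbor's first point
    printf("rank %d owns [%d,%d], ghosts:", rank, start, start + n - 1);
    for (int k = 0; k < cnt; k++) printf(" %d", g[k]);
    printf("\n");
  }
  return 0;
}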

When the nonlinear residual or right hand side is requested, a "ghosted array" is passed into and out of the application method, so the user does not need to do the MPI communication directly.

void Ex4::System_impl::computeRightHandSide (/* in */ ::sidl::array<double> b ) throw ()
{
  // DO-NOT-DELETE splicer.begin(Ex4.System.computeRightHandSide)
  int i,nlocal = b.length(0);
  int rank; MPI_Comm_rank(MPI_COMM_WORLD,&rank);
  int size; MPI_Comm_size(MPI_COMM_WORLD,&size);
  // For a finite element discretization the local element contributions to the
  // ghost degrees of freedom would also be computed here. Skipped here.
  if (!rank) nlocal--;
  if (rank == size-1) nlocal--;
  for (i=0; i<nlocal; i++) {
    b.set(i,1.0);
  }
  // DO-NOT-DELETE splicer.end(Ex4.System.computeRightHandSide)
}

The matrix values are contributed by block; generally the entire contribution for a single finite element is added in a single call.
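
The TOPS calls for doing this are not shown here; as a generic illustration of what contributing by block means, independent of any TOPS API, the following standalone sketch assembles the 1D Poisson stiffness matrix from linear finite elements, adding each element's entire 2x2 contribution in one step:

// Standalone sketch: element-by-element assembly of the 1D Poisson stiffness
// matrix with linear finite elements; each element contributes a 2x2 block.
#include <cstdio>
#include <vector>

int main() {
  const int nel = 4;                          // number of elements
  const int n = nel + 1;                      // number of nodes
  const double h = 1.0/nel;
  std::vector<double> A(n*n, 0.0);            // dense global matrix, row-major
  const double Ke[2][2] = {{ 1.0/h, -1.0/h},
                           {-1.0/h,  1.0/h}}; // element stiffness matrix
  for (int e = 0; e < nel; e++) {
    int nodes[2] = {e, e+1};                  // global node numbers of this element
    // the whole 2x2 element contribution is added at once
    for (int a = 0; a < 2; a++)
      for (int b = 0; b < 2; b++)
        A[nodes[a]*n + nodes[b]] += Ke[a][b];
  }
  printf("A(1,1) = %g (expected %g)\n", A[1*n+1], 2.0/h);
  return 0;
}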