
Dependency Analysis

The governing equations for the fluid and particle dynamics and the chemical reactions are discretized in both time and space in KIVA-3. The temporal differencing is done in three phases (A, B, and C): Phase A is where most of the source terms are calculated, Phase B is where the diffusion terms are calculated, and Phase C is where the advection is computed. Years of experience in CFD studies have produced a great many numerical schemes and methods for solving fluid equations, and KIVA-3 takes advantage of a variety of them to gain greater flexibility in solving flows in different regimes. All three phases apply to the same time interval, but one at a time, each built on top of the previous one. The best way to summarize the temporal differencing is the following crude formula. Given $Q^n$ at time $t^n$,

solve for $Q^{n+1}$ at $t^{n+1} = t^n + \Delta t$ with increments as

\[ Q^{n+1} = Q^n + \Delta Q_A + \Delta Q_B + \Delta Q_C . \]

KIVA-3 uses a

\[ \phi\, Q^{n+1} + (1 - \phi)\, Q^n \]

type formula mixing both explicit and implicit schemes to adapt the solution to the changing CFL condition. Here, $\phi$ is the implicitness variable and is computed at every time step from the flow conditions. Implicit schemes generally pose difficulties for parallel computing because of the dependencies on the current values of the variable being calculated. However, the implicit solvers in KIVA-3 are iterative ones, and the physical variables on the RHS of the governing equations are already known (predictor), so there are no sequential dependencies between processors. The processors do, however, have to march together and make the necessary values available to each other at each iteration.
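To make the three-phase structure and the implicitness blend concrete, the following minimal C sketch mimics one time step; the field q, the increment routines, and the value of phi are hypothetical stand-ins, not KIVA-3 code.

/* Minimal sketch of a KIVA-3-style three-phase time step, followed by
   the variable-implicitness blend.  The field q, the increment routines,
   and phi are hypothetical stand-ins, not KIVA-3 code. */
#include <stdio.h>

#define NCELLS 8

/* Hypothetical per-phase increments: sources (A), diffusion (B),
   advection (C). */
static double dq_source(double q)    { return  0.10 * q; }
static double dq_diffusion(double q) { return  0.05 * q; }
static double dq_advection(double q) { return -0.02 * q; }

int main(void) {
    double q[NCELLS];
    for (int i = 0; i < NCELLS; i++) q[i] = 1.0 + i;

    /* One step: all three phases act on the same interval [t, t+dt],
       each building on the result of the previous phase. */
    for (int i = 0; i < NCELLS; i++) {
        double qa = q[i] + dq_source(q[i]);    /* Phase A */
        double qb = qa   + dq_diffusion(qa);   /* Phase B */
        q[i]      = qb   + dq_advection(qb);   /* Phase C */
    }

    /* Blend of old and new values with implicitness phi in [0,1],
       recomputed every step from the flow (CFL) conditions. */
    double phi = 0.5, q_old = 1.0, q_new = 1.3;
    printf("blended value: %g\n", phi * q_new + (1.0 - phi) * q_old);
    return 0;
}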

Spatial differences are done via the control-volume approach, integrating the differential term in question over the volume of a regular or momentum cell. Volume integrals of gradient terms are converted into surface-area integrals using the divergence theorem, and the computation of such terms constitutes the heart of the diffusion solvers in the code. Such area integrals over the cell surface become sums over cell faces,

\[ \int_V \nabla \cdot (D\, \nabla Q)\, dV = \oint_S D\, \nabla Q \cdot d\mathbf{A} \approx \sum_{\alpha=1}^{6} D_\alpha\, (\nabla Q)_\alpha \cdot \mathbf{A}_\alpha , \]

where $\alpha$ represents one of the six cell faces. Evaluating the gradient on the surface is done as follows:

\[ (\nabla Q)_\alpha = \mathbf{a}_\alpha\, (Q_r - Q_l) + \mathbf{b}_\alpha\, (Q_t - Q_b) + \mathbf{c}_\alpha\, (Q_f - Q_d) , \]

where r, l, t, b, f, and d indicate right, left, top, bottom, front, and back, respectively. Here, $\mathbf{a}_\alpha$, $\mathbf{b}_\alpha$, and $\mathbf{c}_\alpha$ are geometric factors for face $\alpha$, and the $Q$s are averages computed at mid-points on the surface, as shown in Fig. 4; $Q_x$ is simply the average of the values of $Q$ in the four cells surrounding cell edge $x$.

 
Figure 4: The definition of the gradient of the cell-centered quantity $Q$ on cell face $\alpha$.
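The face-by-face summation lends itself to a simple loop over the six faces of each cell. The following C sketch illustrates the pattern for a single cell; the geometry, face gradients, and diffusion coefficient are made-up placeholders rather than KIVA-3's actual data structures.

/* Sketch of a control-volume diffusion term for one cell: the volume
   integral of div(D grad Q) becomes the sum of D (grad Q).A over the
   six faces.  Geometry, gradients, and D are made-up placeholders. */
#include <stdio.h>

typedef struct { double x, y, z; } Vec3;

static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

int main(void) {
    /* Outward area vectors A_alpha and face gradients (grad Q)_alpha. */
    Vec3 A[6]     = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};
    Vec3 gradQ[6] = {{2,0,0},{ 1,0,0},{0,1,0},{0, 2,0},{0,0,1},{0,0, 1}};
    double D = 0.1;                    /* face diffusion coefficient */

    double flux = 0.0;
    for (int alpha = 0; alpha < 6; alpha++)   /* sum over the six faces */
        flux += D * dot(gradQ[alpha], A[alpha]);

    printf("net diffusive flux into the cell: %g\n", flux);
    return 0;
}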

Volume integrals over momentum cells are also converted to area integrals. Momentum cells are constructed around vertices and involve the 8 regular cells that share a particular vertex. There are a total of 24 faces ($\beta = 1, \ldots, 24$), three of which reside in each of the 8 cells. The area vectors ($\mathbf{A}_\beta$) for the momentum cell faces are usually substituted by the regular cell area vectors ($\mathbf{A}_\alpha$). In the computational loop over the vertices, each vertex (i4) contributes to the momentum via its 3 faces overlapping with the i4th cell. When the loop is finished, the area integrals through all the faces are complete. The vertices on the boundary do not have as many faces as the internal ones, unless the boundary is shared by other processors that hold the remainder of the missing faces. Contributions from other processors have to be gathered for the momentum cells of physically internal but computationally external boundary vertices. The integral over the three momentum cell faces in question is represented by

\[ \sum_{\beta = a, b, c} Q_\beta\, \mathbf{A}_\beta . \]

The three faces of overlap (a, b, c) between momentum cell i4 and regular cell i4 are the negatives of the other three faces of the momentum cell, which coincide with one-fourth of the regular cell faces (d, e, f).
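The vertex loop described above can be sketched as follows in C; the connectivity, area vectors, and face pressures are hypothetical stand-ins, and the final comment marks where the cross-processor gather would complete the sums for boundary vertices.

/* Sketch of the vertex loop accumulating momentum-cell face terms:
   each vertex i4 picks up the integral over the three overlap faces
   (a, b, c) inside its i4-th regular cell.  Connectivity, area
   vectors, and pressures are hypothetical stand-ins. */
#include <stdio.h>

#define NVERTS 4

typedef struct { double x, y, z; } Vec3;

int main(void) {
    /* Per vertex: area vectors of faces a, b, c and the pressure on them. */
    Vec3 Aabc[NVERTS][3] = {
        {{ 0.25, 0, 0}, {0,  0.25, 0}, {0, 0,  0.25}},
        {{-0.25, 0, 0}, {0,  0.25, 0}, {0, 0,  0.25}},
        {{ 0.25, 0, 0}, {0, -0.25, 0}, {0, 0,  0.25}},
        {{ 0.25, 0, 0}, {0,  0.25, 0}, {0, 0, -0.25}},
    };
    double p[NVERTS][3] = {{1,1,1},{1,1,1},{1,1,1},{1,1,1}};

    Vec3 mom[NVERTS] = {{0, 0, 0}};       /* accumulated contributions */
    for (int i4 = 0; i4 < NVERTS; i4++)   /* loop over vertices */
        for (int f = 0; f < 3; f++) {     /* faces a, b, c */
            mom[i4].x += p[i4][f] * Aabc[i4][f].x;
            mom[i4].y += p[i4][f] * Aabc[i4][f].y;
            mom[i4].z += p[i4][f] * Aabc[i4][f].z;
        }

    /* A boundary vertex on a shared processor face is still missing the
       faces owned by the neighbor; those terms must be gathered and
       added before mom[] is complete. */
    printf("vertex 0: (%g, %g, %g)\n", mom[0].x, mom[0].y, mom[0].z);
    return 0;
}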

Cell-face velocities require the pressure evaluation on both sides of the regular cell faces. A cell-face volume is centered around the face in question, with faces cutting through the cell centers on both sides. Again, the computation of cell-face velocities has to collect contributions from adjacent processors that share the face in question.
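As a rough illustration of the pressure dependence, the C fragment below forms a face-velocity increment from the pressures in the two cells sharing a face; the numbers and the simple acceleration formula are illustrative assumptions, with one of the two pressures coming from the adjacent processor when the face lies on a subdomain boundary.

/* Illustrative face-velocity increment from the pressures in the two
   cells sharing a face; on a processor boundary, one of the two
   pressures is received from the adjacent processor.  The numbers and
   the simple acceleration formula are assumptions, not KIVA-3's. */
#include <stdio.h>

int main(void) {
    double p_left = 101000.0, p_right = 100500.0;  /* cell-center pressures */
    double area = 0.01;      /* face area */
    double rho  = 1.2;       /* density in the cell-face volume */
    double vol  = 0.001;     /* cell-face volume around the face */
    double dt   = 1.0e-5;

    /* du = -(dt/rho) * (pressure difference) * A / V */
    double du = -dt * (p_right - p_left) * area / (rho * vol);
    printf("face-velocity increment: %g\n", du);
    return 0;
}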

A block-wise decomposition (whether in one or multiple directions) requires that the processors share a common face. This in no way indicates that the vertices on the common faces are duplicated by the adjacent processors. A vertex gets one-eighth of its mass from each cell that has the vertex in common, and vertices on the processor boundary are only partially represented by each processor. KIVA-3 applies the same rules (physics, numerics, and BCs) to every computational cell, except that some of the external boundary conditions are applied via the F flags. Care needs to be taken to treat physically internal but computationally external boundaries as internal by temporarily changing the flags to 1.0.
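A minimal sketch of the flag adjustment, assuming a hypothetical flag array f[] in place of KIVA-3's F flags: flags of cells that are external only because of the decomposition are temporarily set to the internal value 1.0.

/* Sketch of the flag adjustment: cells that are external only because
   of the block decomposition are temporarily marked internal (1.0) so
   the same physics and numerics apply across processor boundaries.
   The array f[] is a hypothetical stand-in for KIVA-3's F flags. */
#include <stdio.h>

#define NX 5

int main(void) {
    double f[NX] = {0.0, 1.0, 1.0, 1.0, 0.0};  /* 1.0 = internal cell */
    int shared[NX] = {1, 0, 0, 0, 0};  /* left end abuts a neighbor proc */

    for (int i = 0; i < NX; i++)
        if (f[i] == 0.0 && shared[i])
            f[i] = 1.0;     /* treat as internal for this computation */

    for (int i = 0; i < NX; i++) printf("%g ", f[i]);
    printf("\n");
    return 0;
}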

Advection of cell-centered quantities requires computing cell-face quantities, which in turn requires cell-centered quantities and their derivatives on both sides of the face, depending on the upwind conditions. The required information is gathered from other processors and put into ghost cells. Advection of momentum, however, is somewhat different. One has not only to gather derivatives and variable values from vertices in the neighborhood, but also to sum the contributions to account for all the momentum cell faces residing in multiple regular cells.
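The upwind dependence is the reason ghost cells are needed. The following C sketch shows a first-order donor-cell choice of face values in one dimension, with the two end cells standing in for ghost cells that a parallel code would fill by communication; the array layout and values are hypothetical.

/* One-dimensional sketch of donor-cell (first-order upwind) face
   values with ghost cells: q[0] and q[N-1] stand in for ghost cells
   that a parallel code would fill from neighboring processors.
   Layout and values are hypothetical. */
#include <stdio.h>

#define N 6

int main(void) {
    double q[N] = {0.9, 1.0, 1.2, 1.1, 1.3, 1.4};  /* cell-centered Q */
    double u;                                       /* face velocity  */

    for (int f = 0; f < N - 1; f++) {   /* faces between cells f, f+1 */
        u = (f % 2 == 0) ? 0.5 : -0.5;
        /* Upwind: the face value comes from the donor cell. */
        double qface = (u >= 0.0) ? q[f] : q[f + 1];
        printf("face %d: upwind value %g\n", f, qface);
    }
    return 0;
}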

The governing equations for spray dynamics and chemical reactions are discretized over the same mesh system described so far. The fluid flow is affected by the spray injection and chemical ignition processes, and the effects show up in the fluid equations, mostly in the source terms. Spray injection and fuel ignition are both local processes and may not occur in the domains of all the processors, causing a slight load-balancing problem. Most of the spray and chemistry quantities are cell-centered and involve no dependencies between neighboring cells. Spray particles cross cell faces and may end up changing processor domains, so a mechanism for particle destruction and creation across processor boundaries is required. Spray particles contribute to the vertex masses and momentum, and their interaction with the fluid is described through collisions, breakup, and evaporation.
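A particle that leaves the local subdomain must be removed from the local list and handed to the neighbor. The C sketch below shows the destroy-and-queue half of such a mechanism; the Particle type, the 1-D domain bound, and the send queue are hypothetical stand-ins for the actual message passing.

/* Sketch of the destroy-and-queue half of particle migration: a
   particle leaving the local z-extent is removed locally and queued
   for the neighboring processor.  The Particle type, domain bound,
   and send queue are hypothetical stand-ins for the message passing. */
#include <stdio.h>

typedef struct { double z, vz; int alive; } Particle;

#define NP 3

int main(void) {
    double z_hi = 1.0;                 /* top of this subdomain */
    double dt = 1.0;
    Particle p[NP] = {{0.2, 0.1, 1}, {0.95, 0.2, 1}, {0.5, -0.1, 1}};
    Particle sendq[NP];
    int nsend = 0;

    for (int i = 0; i < NP; i++) {
        p[i].z += p[i].vz * dt;        /* advance the particle */
        if (p[i].z > z_hi) {           /* crossed the processor boundary */
            p[i].alive = 0;            /* destroy locally...             */
            sendq[nsend++] = p[i];     /* ...and queue for the neighbor  */
        }
    }
    printf("%d particle(s) queued for the neighboring processor\n", nsend);
    return 0;
}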

The piston motion in the z-direction suggests that the decomposition of blocks be parallel to the z-direction for the best possible load balancing and low communication/computation ratios. During the piston motion, the grid is decomposed, re-organized, and sorted again, as is done at the beginning of the simulation. Processor communication is needed to ensure the removal of the grid at the same x-y planes. The interface between processors is assumed to have the same grid points on both sides to match the communication patterns when needed. A more general approach would eliminate this requirement by gathering into the ghost cells values computed by interpolation or extrapolation.
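A simple way to picture the z-parallel decomposition is an even split of the x-y grid planes among processors, as in the C sketch below; the plane and processor counts are illustrative.

/* Sketch of an even split of the x-y grid planes among processors for
   a z-parallel block decomposition; plane and processor counts are
   illustrative. */
#include <stdio.h>

int main(void) {
    int nz = 100, nprocs = 4;
    for (int rank = 0; rank < nprocs; rank++) {
        int base = nz / nprocs, rem = nz % nprocs;
        int lo   = rank * base + (rank < rem ? rank : rem);
        int cnt  = base + (rank < rem ? 1 : 0);
        printf("proc %d: z-planes %d..%d\n", rank, lo, lo + cnt - 1);
    }
    return 0;
}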


