#### illini


Let's start with the fundamental equations of fluid dynamics, the Navier-Stokes equations. The Navier-Stokes equations are a set of coupled, non-linear partial differential equations that describe time-varying, compressible (with shocks), viscous, turbulent flows. However, no general analytical solution to these equations exists, so theorists must either apply further limiting assumptions to arrive at solutions or solve the equations numerically, in which case the numerical method itself introduces additional assumptions. In fact, the Navier-Stokes equations are themselves the result of several assumptions.
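
For reference, the compressible Navier-Stokes system can be sketched as below (continuity and momentum only; the energy equation and a constitutive relation for the viscous stress tensor are omitted for brevity):

```latex
\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{u}) = 0
\qquad
\frac{\partial (\rho \mathbf{u})}{\partial t} + \nabla \cdot (\rho \mathbf{u} \otimes \mathbf{u})
  = -\nabla p + \nabla \cdot \boldsymbol{\tau} + \rho \mathbf{f}
```

Here \(\rho\) is density, \(\mathbf{u}\) velocity, \(p\) pressure, \(\boldsymbol{\tau}\) the viscous stress tensor, and \(\mathbf{f}\) a body force. The non-linearity lives in the \(\rho \mathbf{u} \otimes \mathbf{u}\) convection term, which is what makes these equations so hard.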

Solving the full Navier-Stokes equations numerically typically requires a finite difference or finite volume method, which breaks the space around or inside the body of interest into small cells and then solves a linearized, algebraic approximation of the equations in each cell. That approximation approaches the true equations in the limit as the cell size and the time step go to zero. Since no computer can handle an infinitely dense grid or infinitesimally small time steps, we accept the error inherent in coarser grids and larger time steps (the time step is itself limited by numerical stability constraints) in exchange for timely solutions. As a CFD professor of mine was fond of saying, you can't catch a small fish with a large net: a coarse grid inevitably means loss of detail and, therefore, a sacrifice in accuracy. Furthermore, the numerical methods themselves have a nasty tendency to distort solutions by adding their own artificial viscosity and waves. Sometimes these artifacts of the numerical algorithms are necessary for numerical stability, but in any case they distort the solution.
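
The artificial-viscosity effect is easy to see in even the simplest discretization. Here is a minimal sketch (all parameters hypothetical) using a first-order upwind finite difference scheme on the 1D linear advection equation; watch how a perfectly sharp pulse gets smeared by numerical diffusion even though the exact equation would transport it unchanged:

```python
import numpy as np

# First-order upwind discretization of 1D linear advection, u_t + a*u_x = 0.
# The exact solution just translates the initial profile; the scheme is
# stable for CFL <= 1 but adds artificial viscosity that smears sharp
# features -- the "large net" losing the small fish.

nx, nt = 100, 50          # number of cells and time steps (hypothetical)
a = 1.0                   # advection speed
dx = 1.0 / nx             # cell size
cfl = 0.8                 # Courant number; stability requires cfl <= 1
dt = cfl * dx / a

u = np.zeros(nx)
u[10:20] = 1.0            # sharp square pulse
u0_max = u.max()

for _ in range(nt):
    # u_i^{n+1} = u_i^n - (a*dt/dx) * (u_i^n - u_{i-1}^n)
    u[1:] = u[1:] - cfl * (u[1:] - u[:-1])

# The pulse has advected downstream, but its peak is now below 1.0 and
# its edges are rounded: pure numerical diffusion, not physics.
print(u.max())
```

A second-order scheme would smear less but tends to add spurious oscillations (waves) near the sharp edges instead, which is exactly the trade-off described above.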

The full Navier-Stokes equations solved numerically on a sufficiently dense grid to catch even the smallest turbulent eddy (the smallest fish) can simulate turbulence directly. Since this is highly computationally intensive, resolving some of the largest eddies while sacrificing some of the smallest eddies and modeling their effects with mathematically artificial turbulence models leads to a large eddy simulation. However these are still computationally intensive. The movie at this url: http://mysite.verizon.net/vze3qt6z/Thesis.qt is from my Ph.D. thesis and required 100 hours of Cray time.

Since resolving even the large turbulent eddies is computationally expensive, most Navier-Stokes solvers model the effects of turbulence using what is known as the eddy viscosity hypothesis: turbulence acts like viscosity on steroids, so we can account for its effects by jacking up the viscosity coefficient where appropriate. A variety of turbulence models (such as k-epsilon) exist for computing the eddy viscosity, but these, of course, are very gross approximations of what really goes on in turbulent flows. So, bottom line: your typical Navier-Stokes solver has replaced actual turbulence with an eddy viscosity, can only resolve whatever the grid and time step allow it to resolve, linearizes the equations locally, and introduces numerical artifacts. As long as you understand how to balance all these factors and the impact of the assumptions, you can use such a flow solver to great effect.
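
To make "viscosity on steroids" concrete, here is a minimal sketch of the relation at the heart of the standard k-epsilon model. The model constant C_mu = 0.09 is the standard textbook value; the flow values plugged in are hypothetical, purely for illustration:

```python
# Eddy-viscosity idea behind the k-epsilon model: compute a turbulent
# ("eddy") viscosity from the turbulent kinetic energy k and its
# dissipation rate epsilon, then add it to the molecular viscosity.
# The solver then uses the inflated total wherever viscosity appears.

C_MU = 0.09  # standard k-epsilon model constant

def eddy_viscosity(k: float, epsilon: float) -> float:
    """Kinematic eddy viscosity: nu_t = C_mu * k^2 / epsilon."""
    return C_MU * k * k / epsilon

# Hypothetical local flow state:
nu_molecular = 1.5e-5   # molecular kinematic viscosity of air, ~m^2/s
k = 0.5                 # turbulent kinetic energy, m^2/s^2
epsilon = 10.0          # dissipation rate, m^2/s^3

nu_t = eddy_viscosity(k, epsilon)
nu_effective = nu_molecular + nu_t   # viscosity "on steroids"
print(nu_t, nu_effective)
```

Even with these modest made-up numbers, the eddy viscosity dwarfs the molecular value, which is typical: in a turbulent region the modeled viscosity, not the real one, dominates the solution.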

Viscosity is not terribly important for many problems, so the viscosity terms are often dropped from the Navier-Stokes equations, giving us what is known as the Euler equations. Dropping viscosity greatly speeds up solutions while still preserving the ability to capture shocks. By dropping viscosity, we lose boundary layer profiles, turbulence (but not necessarily vortex shedding), and the shape of shocks (amongst other things), but we retain compressibility (necessary for high speed and supersonic flight) and the ability to capture form drag. The numerical methods are similar to the Navier-Stokes flow solvers, and have the same limitations.
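
In equation form, dropping the viscous stress term leaves the Euler system (again showing only continuity and momentum, with the energy equation omitted for brevity):

```latex
\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{u}) = 0
\qquad
\frac{\partial (\rho \mathbf{u})}{\partial t} + \nabla \cdot (\rho \mathbf{u} \otimes \mathbf{u})
  = -\nabla p + \rho \mathbf{f}
```

The \(\nabla \cdot \boldsymbol{\tau}\) term is gone, and with it everything viscosity buys you: boundary layers, skin friction, and physical shock structure.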

If we can also drop compressibility, we can reduce the equations further (to be clear, we can drop compressibility while retaining viscosity if we like). Fluid dynamicists use Mach 0.3 as a rule of thumb for determining whether compressibility is important: below roughly that speed, compressibility effects are usually negligible and the equations become simpler. If we can say that a fluid is both incompressible and inviscid, then we call it an ideal fluid (something that doesn't exist... this is an approximation). Numerical methods are similar to the above: spatial grids with time-stepping flow solvers that linearize the equations and introduce their own numerical artifacts.
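
The Mach 0.3 rule of thumb is easy to apply in practice. A minimal sketch, assuming a perfect gas (the standard relation a = sqrt(gamma * R * T) for the speed of sound; the flight speeds below are hypothetical):

```python
import math

# Rule-of-thumb compressibility check: compute the Mach number from the
# flow speed and the local speed of sound, and compare against 0.3.

GAMMA = 1.4          # ratio of specific heats for air
R_AIR = 287.05       # specific gas constant for air, J/(kg*K)

def mach_number(speed_ms: float, temperature_k: float) -> float:
    """Mach number of a flow at the given speed and static temperature."""
    a = math.sqrt(GAMMA * R_AIR * temperature_k)  # speed of sound, m/s
    return speed_ms / a

def compressibility_matters(speed_ms: float,
                            temperature_k: float = 288.15) -> bool:
    """Rule of thumb: compressibility matters above about Mach 0.3."""
    return mach_number(speed_ms, temperature_k) > 0.3

# At sea-level standard temperature (288.15 K), the speed of sound is
# about 340 m/s, so Mach 0.3 corresponds to roughly 102 m/s.
print(mach_number(100.0, 288.15))
print(compressibility_matters(100.0))
print(compressibility_matters(200.0))
```

Note the temperature dependence: the same airspeed that is comfortably incompressible at sea level creeps toward the threshold in the cold upper atmosphere, where the speed of sound is lower.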

If we have an ideal fluid that we can also assume to be irrotational, then we can formulate the potential flow equations and solve them with a panel method. Panel methods can be made to approximate compressibility effects through linear extensions to the equations; this is what AeroCFD does. Panel methods don't require grids; instead, they distribute little vortex panels along the surface of the object of interest. This is much simpler and more straightforward than gridded CFD methods, but still very powerful.
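
The building block of those vortex panels is the 2D point vortex. A minimal sketch (a toy, not a full panel code) of the velocity it induces at a field point; a real panel method sums many such contributions distributed along the surface and solves a linear system for the strengths that make the flow tangent to the body:

```python
import math

# Velocity induced at (x, y) by a 2D point vortex of circulation gamma
# located at (xv, yv). Positive gamma gives counterclockwise swirl; the
# speed is gamma / (2*pi*r), falling off with distance from the vortex.

def vortex_velocity(gamma: float, x: float, y: float,
                    xv: float, yv: float) -> tuple[float, float]:
    """Induced velocity (u, v) of a 2D point vortex (singular at r = 0)."""
    dx, dy = x - xv, y - yv
    r2 = dx * dx + dy * dy
    coeff = gamma / (2.0 * math.pi * r2)
    # Tangential flow: rotate the radius vector 90 degrees counterclockwise.
    return -coeff * dy, coeff * dx

# A vortex of circulation 2*pi at the origin induces unit speed at r = 1:
u, v = vortex_velocity(2.0 * math.pi, 1.0, 0.0, 0.0, 0.0)
print(u, v)
```

Because each panel's influence on each control point is just algebra like this, the whole problem reduces to one dense linear solve over the surface, with no volume grid at all. That is why panel methods are so much cheaper than the gridded solvers described above.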

Believe it or not, this is a very brief synopsis of the levels of CFD out there. Somebody somewhere is bound to step in and tell me what I left out, and they'll be absolutely correct. Several of my old professors would probably crucify me for some of the corners I cut in this description. However, it is not my intent to be exhaustive. Also, believe it or not, my intent is not to pick on AeroCFD and point out that it lives on the lowest rung of the CFD ladder. Not even close. Rather, my intent is for everybody to understand the number of assumptions, limitations, and, yes, capabilities inherent in CFD software. Tools like AeroCFD can be extremely useful, but will be even more so when we understand what they can and cannot do.