Class MultiFidelityPINN<T>

Namespace
AiDotNet.PhysicsInformed.PINNs
Assembly
AiDotNet.dll

Multi-fidelity Physics-Informed Neural Network for combining data of different accuracy levels.

public class MultiFidelityPINN<T> : PhysicsInformedNeuralNetwork<T>, INeuralNetworkModel<T>, INeuralNetwork<T>, IFullModel<T, Tensor<T>, Tensor<T>>, IModel<Tensor<T>, Tensor<T>, ModelMetadata<T>>, IModelSerializer, ICheckpointableModel, IParameterizable<T, Tensor<T>, Tensor<T>>, IFeatureAware, IFeatureImportance<T>, ICloneable<IFullModel<T, Tensor<T>, Tensor<T>>>, IGradientComputable<T, Tensor<T>, Tensor<T>>, IJitCompilable<T>, IInterpretableModel<T>, IInputGradientComputable<T>, IDisposable

Type Parameters

T

The numeric type used for calculations.

Inheritance
PhysicsInformedNeuralNetwork<T>
MultiFidelityPINN<T>
Implements
INeuralNetworkModel<T>
INeuralNetwork<T>
IFullModel<T, Tensor<T>, Tensor<T>>
IModel<Tensor<T>, Tensor<T>, ModelMetadata<T>>
IModelSerializer
ICheckpointableModel
IParameterizable<T, Tensor<T>, Tensor<T>>
IFeatureAware
IFeatureImportance<T>
ICloneable<IFullModel<T, Tensor<T>, Tensor<T>>>
IGradientComputable<T, Tensor<T>, Tensor<T>>
IJitCompilable<T>
IInterpretableModel<T>
IInputGradientComputable<T>
IDisposable

Remarks

For Beginners: Multi-fidelity learning combines data from multiple sources with different accuracy levels:

Low-Fidelity Data (Cheap, Abundant):

  • Coarse simulations
  • Simplified physical models
  • Fast but approximate calculations
  • Example: 2D simulation of a 3D problem

High-Fidelity Data (Expensive, Scarce):

  • Fine-mesh simulations
  • Physical experiments
  • High-accuracy calculations
  • Example: Wind tunnel measurements

The Multi-Fidelity Approach:

  1. Train on abundant low-fidelity data to learn general trends
  2. Use scarce high-fidelity data to correct errors
  3. Learn the correlation between fidelity levels
  4. Enforce physics constraints at all fidelity levels

Mathematical Model: u_HF(x) = rho(x) * u_LF(x) + delta(x)

Where:

  • u_LF(x): Low-fidelity prediction
  • u_HF(x): High-fidelity prediction
  • rho(x): Scaling factor (learned)
  • delta(x): Correction/bias term (learned)

This implementation uses a nonlinear correlation model where a neural network learns the relationship between fidelity levels.
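As a concrete illustration of the composite model above, the following self-contained sketch combines a toy low-fidelity approximation with a hand-picked scaling rho and correction delta. In the actual network, rho and delta are learned; the functions below are purely illustrative stand-ins:

```csharp
using System;

// Toy low-fidelity model: a coarse approximation of the true solution.
Func<double, double> uLF = x => Math.Sin(x);   // cheap, approximate prediction
Func<double, double> rho = x => 0.9;           // scaling factor (learned in practice)
Func<double, double> delta = x => 0.1 * x;     // correction/bias term (learned in practice)

// Composite high-fidelity estimate: u_HF(x) = rho(x) * u_LF(x) + delta(x)
Func<double, double> uHF = x => rho(x) * uLF(x) + delta(x);

double x0 = 1.0;
Console.WriteLine($"u_LF({x0}) = {uLF(x0):F4}, u_HF({x0}) = {uHF(x0):F4}");
```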

References:

  • Meng, X., and Karniadakis, G.E., "A composite neural network that learns from multi-fidelity data: Application to function approximation and inverse PDE problems," Journal of Computational Physics, 2020.

Constructors

MultiFidelityPINN(NeuralNetworkArchitecture<T>, IPDESpecification<T>, IBoundaryCondition<T>[], IInitialCondition<T>?, PhysicsInformedNeuralNetwork<T>?, int, IGradientBasedOptimizer<T, Tensor<T>, Tensor<T>>?, double, double, double, double, double, bool)

Creates a Multi-Fidelity PINN with optional custom low-fidelity network.

public MultiFidelityPINN(NeuralNetworkArchitecture<T> architecture, IPDESpecification<T> pdeSpecification, IBoundaryCondition<T>[] boundaryConditions, IInitialCondition<T>? initialCondition = null, PhysicsInformedNeuralNetwork<T>? lowFidelityNetwork = null, int numCollocationPoints = 10000, IGradientBasedOptimizer<T, Tensor<T>, Tensor<T>>? optimizer = null, double lowFidelityWeight = 1, double highFidelityWeight = 10, double correlationWeight = 1, double pdeWeight = 1, double boundaryWeight = 1, bool freezeLowFidelityAfterPretraining = true)

Parameters

architecture NeuralNetworkArchitecture<T>

Network architecture for the high-fidelity/correlation network.

pdeSpecification IPDESpecification<T>

The PDE specification.

boundaryConditions IBoundaryCondition<T>[]

Boundary conditions.

initialCondition IInitialCondition<T>

Initial condition (optional).

lowFidelityNetwork PhysicsInformedNeuralNetwork<T>

Custom low-fidelity network (null = create default).

numCollocationPoints int

Number of collocation points for PDE residual.

optimizer IGradientBasedOptimizer<T, Tensor<T>, Tensor<T>>

Optimizer (null = use Adam with default settings).

lowFidelityWeight double

Weight for low-fidelity data loss (default: 1.0).

highFidelityWeight double

Weight for high-fidelity data loss (default: 10.0; higher than the low-fidelity weight because high-fidelity data is scarcer).

correlationWeight double

Weight for fidelity correlation loss (default: 1.0).

pdeWeight double

Weight for PDE residual loss (default: 1.0).

boundaryWeight double

Weight for boundary condition loss (default: 1.0).

freezeLowFidelityAfterPretraining bool

Whether to freeze low-fidelity network after pretraining (default: true).

Remarks

For Beginners: The loss weights control the relative importance of each objective:

  • lowFidelityWeight: How much to fit the cheap/abundant data
  • highFidelityWeight: How much to fit the expensive/accurate data (usually higher)
  • correlationWeight: How strongly to enforce the fidelity relationship
  • pdeWeight: How much to enforce the physics equations

Tuning guidelines:

  • If high-fidelity data is plentiful, lower highFidelityWeight
  • If the low-fidelity data is very noisy, lower lowFidelityWeight
  • To enforce the physics more strongly, raise pdeWeight
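To see how the defaults encode these guidelines, the constructor's default weights (taken from the signature above) can be tabulated. Note that highFidelityWeight defaults to 10x the others precisely because high-fidelity data is scarce:

```csharp
using System;
using System.Collections.Generic;

// Default loss weights from the MultiFidelityPINN constructor signature.
var defaultWeights = new Dictionary<string, double>
{
    ["lowFidelityWeight"] = 1.0,   // abundant, cheap data
    ["highFidelityWeight"] = 10.0, // scarce, accurate data gets more pull
    ["correlationWeight"] = 1.0,   // consistency between fidelity levels
    ["pdeWeight"] = 1.0,           // physics (PDE residual) loss
    ["boundaryWeight"] = 1.0,      // boundary condition loss
};

foreach (var kv in defaultWeights)
    Console.WriteLine($"{kv.Key} = {kv.Value}");
```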

Properties

IsLowFidelityFrozen

Gets whether the low-fidelity network is frozen.

public bool IsLowFidelityFrozen { get; }

Property Value

bool

LowFidelityNetwork

Gets the low-fidelity network for external access.

public PhysicsInformedNeuralNetwork<T> LowFidelityNetwork { get; }

Property Value

PhysicsInformedNeuralNetwork<T>

Methods

GetFidelityCorrection(T[])

Gets the correction (difference between fidelity levels) at a point.

public T[] GetFidelityCorrection(T[] point)

Parameters

point T[]

Input coordinates.

Returns

T[]

Fidelity correction values.

GetHighFidelitySolution(T[])

Gets the high-fidelity prediction at a point.

public T[] GetHighFidelitySolution(T[] point)

Parameters

point T[]

Input coordinates.

Returns

T[]

High-fidelity solution estimate.

GetLowFidelitySolution(T[])

Gets the low-fidelity prediction at a point.

public T[] GetLowFidelitySolution(T[] point)

Parameters

point T[]

Input coordinates.

Returns

T[]

Low-fidelity solution estimate.
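The three accessors above are related through the composite model: the value returned by GetFidelityCorrection is the gap between the two predictions. A minimal sketch of that invariant, using hard-coded stand-in values since the real numbers come from the trained networks:

```csharp
using System;

// Stand-ins for a trained model's outputs at one point:
double uLF = 0.80;              // like GetLowFidelitySolution(point)[0]
double uHF = 0.86;              // like GetHighFidelitySolution(point)[0]
double correction = uHF - uLF;  // like GetFidelityCorrection(point)[0]

Console.WriteLine($"correction = {correction:F2}");
```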

SetHighFidelityData(T[,], T[,])

Sets the high-fidelity training data.

public void SetHighFidelityData(T[,] inputs, T[,] outputs)

Parameters

inputs T[,]

Input coordinates [numSamples, inputDim].

outputs T[,]

Solution values [numSamples, outputDim].

SetLowFidelityData(T[,], T[,])

Sets the low-fidelity training data.

public void SetLowFidelityData(T[,] inputs, T[,] outputs)

Parameters

inputs T[,]

Input coordinates [numSamples, inputDim].

outputs T[,]

Solution values [numSamples, outputDim].
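Both Set*Data methods expect rectangular arrays laid out as [numSamples, inputDim] and [numSamples, outputDim]. The sketch below shapes toy data that way (the sampled function is purely illustrative; the actual Set*Data calls are shown in comments because they need a constructed MultiFidelityPINN instance):

```csharp
using System;

int numLF = 200, numHF = 20, inputDim = 1, outputDim = 1;

// Abundant low-fidelity samples (e.g. from a coarse simulation).
var lfInputs = new double[numLF, inputDim];
var lfOutputs = new double[numLF, outputDim];
for (int i = 0; i < numLF; i++)
{
    double x = i / (double)(numLF - 1);
    lfInputs[i, 0] = x;
    lfOutputs[i, 0] = Math.Sin(Math.PI * x);            // approximate solution values
}

// Scarce high-fidelity samples (e.g. experimental measurements).
var hfInputs = new double[numHF, inputDim];
var hfOutputs = new double[numHF, outputDim];
for (int i = 0; i < numHF; i++)
{
    double x = i / (double)(numHF - 1);
    hfInputs[i, 0] = x;
    hfOutputs[i, 0] = Math.Sin(Math.PI * x) + 0.05 * x; // more accurate solution values
}

// With a constructed network `pinn`:
// pinn.SetLowFidelityData(lfInputs, lfOutputs);
// pinn.SetHighFidelityData(hfInputs, hfOutputs);
```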

SetLowFidelityFrozen(bool)

Freezes or unfreezes the low-fidelity network.

public void SetLowFidelityFrozen(bool frozen)

Parameters

frozen bool

Whether to freeze the network.

SolveMultiFidelity(int, int?, double, bool, int)

Solves the PDE using multi-fidelity training.

public MultiFidelityTrainingHistory<T> SolveMultiFidelity(int epochs = 10000, int? pretrainingEpochs = null, double learningRate = 0.001, bool verbose = true, int batchSize = 256)

Parameters

epochs int

Total number of training epochs.

pretrainingEpochs int?

Epochs to pretrain low-fidelity network (default: epochs/4).

learningRate double

Learning rate for optimization.

verbose bool

Whether to print progress.

batchSize int

Batch size for training.

Returns

MultiFidelityTrainingHistory<T>

Multi-fidelity training history with detailed metrics.

Remarks

For Beginners: Multi-fidelity training proceeds in stages:

Stage 1: Pretrain Low-Fidelity Network

  • Train only the low-fidelity network on low-fidelity data
  • Goal: Learn general trends from abundant cheap data

Stage 2: Joint Training

  • Train both networks together
  • High-fidelity network learns to correct low-fidelity predictions
  • Correlation ensures consistency between fidelity levels

Optional: Freeze Low-Fidelity

  • After pretraining, lock the low-fidelity network weights
  • Only train the correction/correlation part
  • Can improve stability
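The staged schedule can be made concrete: per the parameter docs, pretrainingEpochs defaults to epochs / 4 when null, so a 10000-epoch run spends the first quarter pretraining the low-fidelity network and the remainder on joint training. A small sketch of that split:

```csharp
using System;

int epochs = 10000;
int? pretrainingEpochs = null;                  // null => default of epochs / 4

int pretrain = pretrainingEpochs ?? epochs / 4; // Stage 1: low-fidelity only
int joint = epochs - pretrain;                  // Stage 2: joint training

Console.WriteLine($"Stage 1 (pretrain LF): {pretrain} epochs");
Console.WriteLine($"Stage 2 (joint):       {joint} epochs");
```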