Class MultiFidelityPINN<T>
Namespace: AiDotNet.PhysicsInformed.PINNs
Assembly: AiDotNet.dll
Multi-fidelity Physics-Informed Neural Network for combining data of different accuracy levels.
public class MultiFidelityPINN<T> : PhysicsInformedNeuralNetwork<T>, INeuralNetworkModel<T>, INeuralNetwork<T>, IFullModel<T, Tensor<T>, Tensor<T>>, IModel<Tensor<T>, Tensor<T>, ModelMetadata<T>>, IModelSerializer, ICheckpointableModel, IParameterizable<T, Tensor<T>, Tensor<T>>, IFeatureAware, IFeatureImportance<T>, ICloneable<IFullModel<T, Tensor<T>, Tensor<T>>>, IGradientComputable<T, Tensor<T>, Tensor<T>>, IJitCompilable<T>, IInterpretableModel<T>, IInputGradientComputable<T>, IDisposable
Type Parameters
T: The numeric type used for calculations.
- Inheritance
- PhysicsInformedNeuralNetwork<T> → MultiFidelityPINN<T>
Remarks
For Beginners: Multi-fidelity learning combines data from multiple sources with different accuracy levels:
Low-Fidelity Data (Cheap, Abundant):
- Coarse simulations
- Simplified physical models
- Fast but approximate calculations
- Example: 2D simulation of a 3D problem
High-Fidelity Data (Expensive, Scarce):
- Fine-mesh simulations
- Physical experiments
- High-accuracy calculations
- Example: Wind tunnel measurements
The Multi-Fidelity Approach:
- Train on abundant low-fidelity data to learn general trends
- Use scarce high-fidelity data to correct errors
- Learn the correlation between fidelity levels
- Enforce physics constraints at all fidelity levels
Mathematical Model: u_HF(x) = rho(x) * u_LF(x) + delta(x)
Where:
- u_LF(x): Low-fidelity prediction
- u_HF(x): High-fidelity prediction
- rho(x): Scaling factor (learned)
- delta(x): Correction/bias term (learned)
This implementation uses a nonlinear correlation model: a neural network learns the relationship between fidelity levels directly, with the linear form above as a special case.
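To make the composite model concrete, the sketch below evaluates u_HF(x) = rho(x) * u_LF(x) + delta(x) with hand-picked stand-ins for rho and delta. In MultiFidelityPINN both terms are learned by neural networks; this is illustration only.

```csharp
// Illustration only: in MultiFidelityPINN the scaling rho(x) and the
// correction delta(x) are learned, not hand-coded as they are here.
double LowFidelity(double x) => Math.Sin(x);   // cheap approximate model
double Rho(double x) => 1.1;                   // stand-in for the learned scaling
double Delta(double x) => 0.05 * x;            // stand-in for the learned correction

// Composite high-fidelity estimate: u_HF(x) = rho(x) * u_LF(x) + delta(x)
double HighFidelity(double x) => Rho(x) * LowFidelity(x) + Delta(x);

Console.WriteLine(HighFidelity(1.0));
```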
References:
- Meng, X., and Karniadakis, G.E. "A composite neural network that learns from multi-fidelity data: Application to function approximation and inverse PDE problems" Journal of Computational Physics, 2020.
Constructors
MultiFidelityPINN(NeuralNetworkArchitecture<T>, IPDESpecification<T>, IBoundaryCondition<T>[], IInitialCondition<T>?, PhysicsInformedNeuralNetwork<T>?, int, IGradientBasedOptimizer<T, Tensor<T>, Tensor<T>>?, double, double, double, double, double, bool)
Creates a Multi-Fidelity PINN with optional custom low-fidelity network.
public MultiFidelityPINN(NeuralNetworkArchitecture<T> architecture, IPDESpecification<T> pdeSpecification, IBoundaryCondition<T>[] boundaryConditions, IInitialCondition<T>? initialCondition = null, PhysicsInformedNeuralNetwork<T>? lowFidelityNetwork = null, int numCollocationPoints = 10000, IGradientBasedOptimizer<T, Tensor<T>, Tensor<T>>? optimizer = null, double lowFidelityWeight = 1, double highFidelityWeight = 10, double correlationWeight = 1, double pdeWeight = 1, double boundaryWeight = 1, bool freezeLowFidelityAfterPretraining = true)
Parameters
architecture (NeuralNetworkArchitecture<T>): Network architecture for the high-fidelity/correlation network.
pdeSpecification (IPDESpecification<T>): The PDE specification.
boundaryConditions (IBoundaryCondition<T>[]): Boundary conditions.
initialCondition (IInitialCondition<T>?): Initial condition (optional).
lowFidelityNetwork (PhysicsInformedNeuralNetwork<T>?): Custom low-fidelity network (null = create default).
numCollocationPoints (int): Number of collocation points for the PDE residual.
optimizer (IGradientBasedOptimizer<T, Tensor<T>, Tensor<T>>?): Optimizer (null = use Adam with default settings).
lowFidelityWeight (double): Weight for the low-fidelity data loss (default: 1.0).
highFidelityWeight (double): Weight for the high-fidelity data loss (default: 10.0; higher because this data is scarcer).
correlationWeight (double): Weight for the fidelity correlation loss (default: 1.0).
pdeWeight (double): Weight for the PDE residual loss (default: 1.0).
boundaryWeight (double): Weight for the boundary condition loss (default: 1.0).
freezeLowFidelityAfterPretraining (bool): Whether to freeze the low-fidelity network after pretraining (default: true).
Remarks
For Beginners: The loss weights control the relative importance of each objective:
- lowFidelityWeight: How much to fit the cheap/abundant data
- highFidelityWeight: How much to fit the expensive/accurate data (usually higher)
- correlationWeight: How strongly to enforce the fidelity relationship
- pdeWeight: How much to enforce the physics equations
Guidelines for adjusting the weights:
- Abundant high-fidelity data: lower highFidelityWeight
- Very noisy low-fidelity data: lower lowFidelityWeight
- Strong physics constraints desired: higher pdeWeight
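A hypothetical construction call might look like the following. The architecture, PDE specification, and boundary-condition objects are problem-specific placeholders, and the weight arguments follow the guidance above.

```csharp
// Hypothetical setup: `architecture`, `pde`, and `boundaryConditions` are
// placeholders for problem-specific objects built elsewhere.
var pinn = new MultiFidelityPINN<double>(
    architecture: architecture,        // high-fidelity/correlation network
    pdeSpecification: pde,
    boundaryConditions: boundaryConditions,
    lowFidelityNetwork: null,          // null = create a default LF network
    highFidelityWeight: 10.0,          // scarce HF data is weighted up
    pdeWeight: 1.0);
```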
Properties
IsLowFidelityFrozen
Gets whether the low-fidelity network is frozen.
public bool IsLowFidelityFrozen { get; }
Property Value
- bool
LowFidelityNetwork
Gets the low-fidelity network for external access.
public PhysicsInformedNeuralNetwork<T> LowFidelityNetwork { get; }
Property Value
- PhysicsInformedNeuralNetwork<T>
Methods
GetFidelityCorrection(T[])
Gets the correction (difference between fidelity levels) at a point.
public T[] GetFidelityCorrection(T[] point)
Parameters
point (T[]): Input coordinates.
Returns
- T[]
Fidelity correction values (the difference between the high- and low-fidelity predictions at the point).
GetHighFidelitySolution(T[])
Gets the high-fidelity prediction at a point.
public T[] GetHighFidelitySolution(T[] point)
Parameters
point (T[]): Input coordinates.
Returns
- T[]
High-fidelity solution estimate.
GetLowFidelitySolution(T[])
Gets the low-fidelity prediction at a point.
public T[] GetLowFidelitySolution(T[] point)
Parameters
point (T[]): Input coordinates.
Returns
- T[]
Low-fidelity solution estimate.
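Once training has finished, the three query methods can be compared at any point. A brief sketch, assuming a trained pinn instance:

```csharp
// Assumes `pinn` is a trained MultiFidelityPINN<double> instance.
double[] x = { 0.5 };                              // query coordinates
double[] uLF = pinn.GetLowFidelitySolution(x);     // cheap-model estimate
double[] uHF = pinn.GetHighFidelitySolution(x);    // corrected estimate
double[] corr = pinn.GetFidelityCorrection(x);     // difference between fidelity levels
Console.WriteLine($"LF={uLF[0]}, HF={uHF[0]}, correction={corr[0]}");
```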
SetHighFidelityData(T[,], T[,])
Sets the high-fidelity training data.
public void SetHighFidelityData(T[,] inputs, T[,] outputs)
Parameters
inputs (T[,]): Input coordinates [numSamples, inputDim].
outputs (T[,]): Solution values [numSamples, outputDim].
SetLowFidelityData(T[,], T[,])
Sets the low-fidelity training data.
public void SetLowFidelityData(T[,] inputs, T[,] outputs)
Parameters
inputs (T[,]): Input coordinates [numSamples, inputDim].
outputs (T[,]): Solution values [numSamples, outputDim].
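Typical use is to supply abundant low-fidelity samples and a handful of high-fidelity samples before training. The arrays below are small illustrative stand-ins for a 1D problem.

```csharp
// 1D example: inputs are [numSamples, 1], outputs are [numSamples, 1].
double[,] lfInputs  = { { 0.0 }, { 0.25 }, { 0.5 }, { 0.75 }, { 1.0 } };
double[,] lfOutputs = { { 0.02 }, { 0.24 }, { 0.51 }, { 0.73 }, { 0.99 } };
pinn.SetLowFidelityData(lfInputs, lfOutputs);    // abundant, approximate

double[,] hfInputs  = { { 0.0 }, { 1.0 } };
double[,] hfOutputs = { { 0.0 }, { 1.0 } };
pinn.SetHighFidelityData(hfInputs, hfOutputs);   // scarce, accurate
```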
SetLowFidelityFrozen(bool)
Freezes or unfreezes the low-fidelity network.
public void SetLowFidelityFrozen(bool frozen)
Parameters
frozen (bool): Whether to freeze the network.
SolveMultiFidelity(int, int?, double, bool, int)
Solves the PDE using multi-fidelity training.
public MultiFidelityTrainingHistory<T> SolveMultiFidelity(int epochs = 10000, int? pretrainingEpochs = null, double learningRate = 0.001, bool verbose = true, int batchSize = 256)
Parameters
epochs (int): Total number of training epochs.
pretrainingEpochs (int?): Epochs to pretrain the low-fidelity network (default: epochs / 4).
learningRate (double): Learning rate for optimization.
verbose (bool): Whether to print progress.
batchSize (int): Batch size for training.
Returns
- MultiFidelityTrainingHistory<T>
Multi-fidelity training history with detailed metrics.
Remarks
For Beginners: Multi-fidelity training proceeds in stages:
Stage 1: Pretrain Low-Fidelity Network
- Train only the low-fidelity network on low-fidelity data
- Goal: Learn general trends from abundant cheap data
Stage 2: Joint Training
- Train both networks together
- High-fidelity network learns to correct low-fidelity predictions
- Correlation ensures consistency between fidelity levels
Optional: Freeze Low-Fidelity
- After pretraining, lock the low-fidelity network weights
- Only train the correction/correlation part
- Can improve stability
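An end-to-end training call might look like the sketch below, assuming pinn has been constructed and given both data sets as shown earlier. The argument values simply echo the documented defaults.

```csharp
// Stage 1 (low-fidelity pretraining) and Stage 2 (joint training) run
// internally; pretrainingEpochs defaults to epochs / 4 when left null.
var history = pinn.SolveMultiFidelity(
    epochs: 10000,
    pretrainingEpochs: null,   // use the default: epochs / 4
    learningRate: 0.001,
    verbose: true,
    batchSize: 256);
```

The returned MultiFidelityTrainingHistory<T> carries the per-stage loss metrics recorded during both training stages.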