
Class VariationalPINN<T>

Namespace
AiDotNet.PhysicsInformed.PINNs
Assembly
AiDotNet.dll

Implements Variational Physics-Informed Neural Networks (VPINNs).

public class VariationalPINN<T> : NeuralNetworkBase<T>, INeuralNetworkModel<T>, INeuralNetwork<T>, IFullModel<T, Tensor<T>, Tensor<T>>, IModel<Tensor<T>, Tensor<T>, ModelMetadata<T>>, IModelSerializer, ICheckpointableModel, IParameterizable<T, Tensor<T>, Tensor<T>>, IFeatureAware, IFeatureImportance<T>, ICloneable<IFullModel<T, Tensor<T>, Tensor<T>>>, IGradientComputable<T, Tensor<T>, Tensor<T>>, IJitCompilable<T>, IInterpretableModel<T>, IInputGradientComputable<T>, IDisposable

Type Parameters

T

The numeric type used for calculations.

Inheritance
object → NeuralNetworkBase<T> → VariationalPINN<T>
Implements
IFullModel<T, Tensor<T>, Tensor<T>>
IModel<Tensor<T>, Tensor<T>, ModelMetadata<T>>
IParameterizable<T, Tensor<T>, Tensor<T>>
ICloneable<IFullModel<T, Tensor<T>, Tensor<T>>>
IGradientComputable<T, Tensor<T>, Tensor<T>>

Remarks

For Beginners: Variational PINNs (VPINNs) use the weak (variational) formulation of PDEs instead of the strong form. This is similar to finite element methods (FEM).

Strong vs. Weak Formulation:

Strong Form (standard PINN):

  • PDE must hold pointwise: PDE(u) = 0 at every point
  • Example: -∇²u = f everywhere
  • Requires computing second derivatives
  • Solution must be twice differentiable

Weak Form (VPINN):

  • PDE holds "on average" against test functions
  • ∫∇u·∇v dx = ∫fv dx for all test functions v
  • Integration by parts reduces derivative order
  • Solution only needs to be once differentiable
  • More stable numerically

Key Advantages:

  1. Lower derivative requirements (better numerical stability)
  2. Natural incorporation of boundary conditions (through integration by parts)
  3. Can handle discontinuities and rough solutions better
  4. Closer to FEM (well-understood mathematical theory)
  5. Often better convergence properties

How VPINNs Work:

  1. Choose test functions (often neural networks themselves)
  2. Multiply PDE by test function and integrate
  3. Use integration by parts to reduce derivative order
  4. Minimize the residual in the weak sense

Example - Poisson Equation:

  • Strong form: -∇²u = f
  • Weak form: ∫∇u·∇v dx = ∫fv dx (after integration by parts)
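To see where the weak form comes from, multiply the strong form by a test function v that vanishes on the boundary and integrate over the domain Ω:

∫(-∇²u)v dx = ∫fv dx

Integration by parts turns the left-hand side into ∫∇u·∇v dx - ∮(∂u/∂n)v ds. Because v = 0 on the boundary, the boundary term vanishes, leaving ∫∇u·∇v dx = ∫fv dx.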

VPINNs train the network u(x) to satisfy the weak form for all test functions v.

Applications:

  • Same as PINNs, but particularly useful for:
    • Problems with rough solutions
    • Conservation laws
    • Problems where weak solutions are more natural
    • High-order PDEs (where reducing derivative order helps)

Comparison with Standard PINNs:

  • VPINN: More stable, lower derivative requirements, closer to FEM
  • Standard PINN: Simpler to implement, direct enforcement of PDE

The variational formulation often provides better training dynamics and accuracy.

Constructors

VariationalPINN(NeuralNetworkArchitecture<T>, Func<T[], T[], T[,], T[], T[,], T>, int, int)

Initializes a new instance of the Variational PINN.

public VariationalPINN(NeuralNetworkArchitecture<T> architecture, Func<T[], T[], T[,], T[], T[,], T> weakFormResidual, int numQuadraturePoints = 10000, int numTestFunctions = 10)

Parameters

architecture NeuralNetworkArchitecture<T>

The neural network architecture for the solution.

weakFormResidual Func<T[], T[], T[,], T[], T[,], T>

The weak form residual: R(x, u, ∇u, v, ∇v).

numQuadraturePoints int

Number of quadrature points for integration.

numTestFunctions int

Number of test functions to use.

Remarks

For Beginners: The weak form residual should encode the variational formulation of your PDE.

Example - Poisson Equation (-∇²u = f):

Weak form: ∫∇u·∇v dx - ∫fv dx = 0

weakFormResidual = (x, u, grad_u, v, grad_v) =>
{
    T term1 = DotProduct(grad_u, grad_v); // ∇u·∇v
    T term2 = f(x) * v;                   // fv
    return term1 - term2;
};

The method integrates this over the domain using numerical quadrature.
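As a rough illustration (not taken from the library's samples), here is a minimal sketch for the 1D Poisson equation -u'' = f with f ≡ 1, assuming T = double, that architecture is an already-configured 1-input/1-output NeuralNetworkArchitecture<double>, and that the gradient arrays are indexed as [output component, spatial dimension]:

Func<double[], double[], double[,], double[], double[,], double> weakFormResidual =
    (x, u, gradU, v, gradV) =>
    {
        // ∇u·∇v summed over the spatial dimensions (a single dimension here).
        double gradDot = 0.0;
        for (int d = 0; d < x.Length; d++)
            gradDot += gradU[0, d] * gradV[0, d];

        // f(x) * v with f ≡ 1.
        double source = 1.0 * v[0];

        // Integrand of ∫(∇u·∇v - f v) dx; the class integrates it by quadrature.
        return gradDot - source;
    };

var vpinn = new VariationalPINN<double>(
    architecture,
    weakFormResidual,
    numQuadraturePoints: 10000,
    numTestFunctions: 10);

The delegate returns the integrand at a single quadrature point; the class handles the integration over the domain and the loop over test functions.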

Properties

SupportsTraining

Indicates whether this model supports training.

public override bool SupportsTraining { get; }

Property Value

bool

Methods

ComputeWeakResidual(int)

Computes the weak form residual by integrating over the domain.

public T ComputeWeakResidual(int testFunctionIndex)

Parameters

testFunctionIndex int

Index of the test function to use.

Returns

T

The weak residual (should be zero for a perfect solution).

Remarks

For Beginners: This computes ∫R(u, v)dx where:

  • u is the neural network solution
  • v is a test function
  • R is the weak form residual

For a true solution, this integral should be zero for ALL test functions. We approximate "all" by using a finite set of test functions.

Test Function Choices:

  1. Polynomial basis (Legendre, Chebyshev)
  2. Trigonometric functions (Fourier)
  3. Another neural network
  4. Random functions

This implementation uses simple polynomial test functions.
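As a hedged sketch (continuing the constructor example above, with T = double), the residual can be probed against each of the test functions:

double worst = 0.0;
for (int k = 0; k < 10; k++)   // 10 = numTestFunctions passed to the constructor
{
    double residual = vpinn.ComputeWeakResidual(k);   // ≈ ∫ R(u, v_k) dx via quadrature
    worst = Math.Max(worst, Math.Abs(residual));
}
Console.WriteLine($"Largest weak residual magnitude: {worst}");   // near zero for a good solution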

CreateNewInstance()

Creates a new instance with the same configuration.

protected override IFullModel<T, Tensor<T>, Tensor<T>> CreateNewInstance()

Returns

IFullModel<T, Tensor<T>, Tensor<T>>

New VPINN instance.

DeserializeNetworkSpecificData(BinaryReader)

Deserializes VPINN-specific data.

protected override void DeserializeNetworkSpecificData(BinaryReader reader)

Parameters

reader BinaryReader

Binary reader.

Forward(Tensor<T>)

Performs a forward pass through the network.

public Tensor<T> Forward(Tensor<T> input)

Parameters

input Tensor<T>

Input tensor for evaluation.

Returns

Tensor<T>

Network output tensor.

GetModelMetadata()

Gets metadata about the VPINN model.

public override ModelMetadata<T> GetModelMetadata()

Returns

ModelMetadata<T>

Model metadata.

GetSolution(T[])

Gets the solution at a specific point.

public T[] GetSolution(T[] point)

Parameters

point T[]

The point at which to evaluate the learned solution.

Returns

T[]

The solution values at that point.

InitializeLayers()

Initializes the layers of the neural network based on the architecture.

protected override void InitializeLayers()

Remarks

For Beginners: This method sets up all the layers in your neural network according to the architecture you've defined. It's like assembling the parts of your network before you can use it.

Predict(Tensor<T>)

Makes a prediction using the VPINN.

public override Tensor<T> Predict(Tensor<T> input)

Parameters

input Tensor<T>

Input tensor.

Returns

Tensor<T>

Predicted output tensor.

SerializeNetworkSpecificData(BinaryWriter)

Serializes VPINN-specific data.

protected override void SerializeNetworkSpecificData(BinaryWriter writer)

Parameters

writer BinaryWriter

Binary writer.

Solve(int, double, bool, int, double)

Trains the network to minimize the weak residual.

public TrainingHistory<T> Solve(int epochs = 1000, double learningRate = 0.001, bool verbose = true, int batchSize = 256, double derivativeStep = 0.0001)

Parameters

epochs int

Number of training epochs.

learningRate double

Learning rate for optimization.

verbose bool

Whether to print progress.

batchSize int

Number of quadrature points per batch.

derivativeStep double

Finite-difference step size for input derivatives.

Returns

TrainingHistory<T>

The recorded training history for the run.
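A hedged usage sketch, continuing the same example with T = double:

// Train by minimizing the weak residual over the quadrature points.
TrainingHistory<double> history = vpinn.Solve(
    epochs: 2000,
    learningRate: 0.001,
    verbose: true,
    batchSize: 256,
    derivativeStep: 1e-4);

// Evaluate the learned solution u(x) at a point after training.
double[] u = vpinn.GetSolution(new[] { 0.5 });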

Train(Tensor<T>, Tensor<T>)

Performs a basic supervised training step using MSE loss.

public override void Train(Tensor<T> input, Tensor<T> expectedOutput)

Parameters

input Tensor<T>

Training input tensor.

expectedOutput Tensor<T>

Expected output tensor.

UpdateParameters(Vector<T>)

Updates the network parameters from a flattened vector.

public override void UpdateParameters(Vector<T> parameters)

Parameters

parameters Vector<T>

Parameter vector.