Class UniversalDifferentialEquation<T>
- Namespace
- AiDotNet.PhysicsInformed.ScientificML
- Assembly
- AiDotNet.dll
Implements Universal Differential Equations (UDEs) - ODEs with neural network components.
public class UniversalDifferentialEquation<T> : NeuralNetworkBase<T>, INeuralNetworkModel<T>, INeuralNetwork<T>, IFullModel<T, Tensor<T>, Tensor<T>>, IModel<Tensor<T>, Tensor<T>, ModelMetadata<T>>, IModelSerializer, ICheckpointableModel, IParameterizable<T, Tensor<T>, Tensor<T>>, IFeatureAware, IFeatureImportance<T>, ICloneable<IFullModel<T, Tensor<T>, Tensor<T>>>, IGradientComputable<T, Tensor<T>, Tensor<T>>, IJitCompilable<T>, IInterpretableModel<T>, IInputGradientComputable<T>, IDisposable
Type Parameters
T
The numeric type used for calculations.
- Inheritance
- NeuralNetworkBase<T> → UniversalDifferentialEquation<T>
Remarks
For Beginners: Universal Differential Equations combine known physics with machine learning.
Traditional ODEs: dx/dt = f(x, t, θ), where f is a known function with parameters θ. Example: dx/dt = -kx (exponential decay, where k is known).
Pure Neural ODEs: dx/dt = NN(x, t, θ) where NN is a neural network
- Very flexible, can learn any dynamics
- But ignores known physics
- May violate physical laws
Universal Differential Equations (UDEs): dx/dt = f_known(x, t) + NN(x, t, θ)
- Combines known physics (f_known) with learned corrections (NN)
- Best of both worlds!
Key Idea: Use neural networks to model UNKNOWN parts of the physics while keeping KNOWN parts as explicit equations.
Example - Epidemic Model:
- Known: dS/dt = -βSI, dI/dt = βSI - γI (basic SIR model)
- Unknown: how β (the infection rate) varies with temperature, policy, etc.
- UDE: dS/dt = -β(T, P)SI, where β(T, P) = NN(temperature, policy) (a construction sketch follows the applications list below)
Applications:
- Climate modeling (known physics + unknown feedback loops)
- Epidemiology (known disease spread + unknown interventions)
- Chemistry (known reactions + unknown catalysis effects)
- Biology (known population dynamics + unknown environmental factors)
- Engineering (known mechanics + unknown friction/damping)
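To make the idea concrete, the sketch below wires a known exponential-decay term into the model via the constructor documented later on this page. It is only an illustration: the decay constant k is invented, the architecture is supplied by the caller because its configuration API is not covered here, and any additional using directives required for the architecture type are omitted.
using System;
using AiDotNet.PhysicsInformed.ScientificML;

static UniversalDifferentialEquation<double> BuildDecayUde(
    NeuralNetworkArchitecture<double> architecture)
{
    // Known physics: exponential decay dx/dt = -k * x (k is an illustrative constant).
    // The neural network term learns whatever this known model misses.
    Func<double[], double, double[]> knownDynamics = (state, time) =>
    {
        const double k = 0.5;
        return new[] { -k * state[0] };
    };

    return new UniversalDifferentialEquation<double>(
        architecture,
        stateDim: 1,
        knownDynamics: knownDynamics);
}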
Constructors
UniversalDifferentialEquation(NeuralNetworkArchitecture<T>, int, Func<T[], T, T[]>?, IGradientBasedOptimizer<T, Tensor<T>, Tensor<T>>?)
public UniversalDifferentialEquation(NeuralNetworkArchitecture<T> architecture, int stateDim, Func<T[], T, T[]>? knownDynamics = null, IGradientBasedOptimizer<T, Tensor<T>, Tensor<T>>? optimizer = null)
Parameters
architecture NeuralNetworkArchitecture<T>
The architecture of the neural network component.
stateDim int
The dimension of the state vector x.
knownDynamics Func<T[], T, T[]>
Optional known part of the dynamics, f_known(x, t).
optimizer IGradientBasedOptimizer<T, Tensor<T>, Tensor<T>>
Optional gradient-based optimizer used during training.
Properties
SupportsJitCompilation
Gets whether this model currently supports JIT compilation.
public override bool SupportsJitCompilation { get; }
Property Value
- bool
True if the model can be JIT compiled, false otherwise.
Remarks
Some models may not support JIT compilation due to:
- Dynamic graph structure (changes based on input)
- Lack of computation graph representation
- Use of operations not yet supported by the JIT compiler
For Beginners: This tells you whether this specific model can benefit from JIT compilation.
Models return false if they:
- Use layer-based architecture without graph export (e.g., current neural networks)
- Have control flow that changes based on input data
- Use operations the JIT compiler doesn't understand yet
In these cases, the model will still work normally, just without JIT acceleration.
SupportsTraining
Indicates whether this model supports training.
public override bool SupportsTraining { get; }
Property Value
- bool
True if the model supports training; otherwise, false.
Methods
Backward(Tensor<T>)
Performs a backward pass through the network (backpropagation).
public Tensor<T> Backward(Tensor<T> outputGradient)
Parameters
outputGradient Tensor<T>
Gradient of the loss with respect to network output.
Returns
- Tensor<T>
Gradient of the loss with respect to input.
ComputeDerivative(T[], T)
Computes dx/dt = f_known(x, t) + NN(x, t).
public T[] ComputeDerivative(T[] state, T time)
Parameters
state T[]
The current state vector x.
time T
The current time t.
Returns
- T[]
The derivative dx/dt at the given state and time.
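Continuing the sketch from the class remarks (ude is assumed to be the instance built there), a single derivative evaluation looks like this; the result combines the known term with the network's learned correction:
// Evaluate dx/dt at state x = [1.0] and time t = 0.
double[] dxdt = ude.ComputeDerivative(new[] { 1.0 }, 0.0);
Console.WriteLine($"dx/dt = {dxdt[0]}");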
CreateNewInstance()
Creates a new instance with the same configuration.
protected override IFullModel<T, Tensor<T>, Tensor<T>> CreateNewInstance()
Returns
- IFullModel<T, Tensor<T>, Tensor<T>>
New UDE instance.
DeserializeNetworkSpecificData(BinaryReader)
Deserializes UDE-specific data.
protected override void DeserializeNetworkSpecificData(BinaryReader reader)
Parameters
reader BinaryReader
Binary reader.
Forward(Tensor<T>)
Performs a forward pass through the network.
public Tensor<T> Forward(Tensor<T> input)
Parameters
input Tensor<T>
Input tensor for evaluation.
Returns
- Tensor<T>
Network output tensor.
GetModelMetadata()
Gets metadata about the UDE model.
public override ModelMetadata<T> GetModelMetadata()
Returns
- ModelMetadata<T>
Model metadata.
InitializeLayers()
Initializes the layers of the neural network based on the architecture.
protected override void InitializeLayers()
Remarks
For Beginners: This method sets up all the layers in your neural network according to the architecture you've defined. It's like assembling the parts of your network before you can use it.
Predict(Tensor<T>)
Makes a prediction using the UDE model.
public override Tensor<T> Predict(Tensor<T> input)
Parameters
input Tensor<T>
Input tensor with state and time.
Returns
- Tensor<T>
Predicted derivative tensor.
SerializeNetworkSpecificData(BinaryWriter)
Serializes UDE-specific data.
protected override void SerializeNetworkSpecificData(BinaryWriter writer)
Parameters
writer BinaryWriter
Binary writer.
Simulate(T[], T, T, int, OdeIntegrationMethod)
Simulates the UDE forward in time.
public T[,] Simulate(T[] initialState, T tStart, T tEnd, int numSteps, OdeIntegrationMethod method = OdeIntegrationMethod.Euler)
Parameters
initialState T[]
The state at tStart.
tStart T
Start time of the simulation.
tEnd T
End time of the simulation.
numSteps int
Number of integration steps.
method OdeIntegrationMethod
Integration method to use (defaults to Euler).
Returns
- T[,]
The simulated state trajectory over the requested time steps.
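A usage sketch, again assuming the ude instance from the class remarks; treating the result as one row per time step and one column per state component is an assumption, since the layout is not documented on this page:
// Integrate from t = 0 to t = 10 in 100 steps with the default Euler method.
double[,] trajectory = ude.Simulate(
    initialState: new[] { 1.0 },
    tStart: 0.0,
    tEnd: 10.0,
    numSteps: 100);

int steps = trajectory.GetLength(0);       // assumed: number of time steps
int stateDim = trajectory.GetLength(1);    // assumed: state components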
Train(Tensor<T>, Tensor<T>)
Performs a supervised training step against derivative targets.
public override void Train(Tensor<T> input, Tensor<T> expectedOutput)
Parameters
input Tensor<T>
Input tensor with state and time.
expectedOutput Tensor<T>
Expected derivative tensor.
Remarks
Uses standard backpropagation like all other neural networks. The network learns to predict the derivative (dx/dt) at each state.
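A minimal training-loop sketch. Constructing the Tensor<double> inputs (rows of state plus time) and the derivative targets is left to the caller, since the Tensor<T> construction API is not covered on this page:
static void TrainOnDerivatives(
    UniversalDifferentialEquation<double> ude,
    Tensor<double> inputs,
    Tensor<double> targetDerivatives,
    int epochs)
{
    for (int epoch = 0; epoch < epochs; epoch++)
    {
        // Each call is one supervised step: the network is fit so that its
        // predicted derivatives approach the observed dx/dt targets.
        ude.Train(inputs, targetDerivatives);
    }
}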
UpdateParameters(Vector<T>)
Updates the network parameters from a flattened vector.
public override void UpdateParameters(Vector<T> parameters)
Parameters
parameters Vector<T>
Parameter vector.