Class InverseProblemOptions<T>
- Namespace
- AiDotNet.PhysicsInformed.Interfaces
- Assembly
- AiDotNet.dll
Configuration options for inverse problem PINN training.
public class InverseProblemOptions<T>
Type Parameters
T
The numeric type.
- Inheritance
- object ← InverseProblemOptions<T>
Properties
DataWeight
Weight for the observation data loss relative to physics loss.
public T? DataWeight { get; set; }
Property Value
- T
Remarks
- Higher weight: trust the observations more.
- Lower weight: trust the physics more.
- For noisy data, use a lower weight.
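A minimal configuration sketch for `DataWeight`, assuming the generic numeric type is `double`; the value shown is illustrative, and the trainer that consumes these options is not part of this page.

```csharp
using AiDotNet.PhysicsInformed.Interfaces;

// With noisy sensor observations, down-weight the data loss so the
// physics residual dominates the total loss.
var options = new InverseProblemOptions<double>
{
    DataWeight = 0.1 // lower than the physics weight: trust physics more
};
```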
EstimateUncertainty
Whether to estimate parameter uncertainty.
public bool EstimateUncertainty { get; set; }
Property Value
- bool
LogParameterHistory
Whether to log parameter estimates during training.
public bool LogParameterHistory { get; set; }
Property Value
- bool
ParameterLearningRate
Learning rate for the unknown parameters (if separate rates are used).
public double ParameterLearningRate { get; set; }
Property Value
- double
PriorMeans
Prior means for Bayesian regularization.
public T[]? PriorMeans { get; set; }
Property Value
- T[]
PriorStandardDeviations
Prior standard deviations for Bayesian regularization.
public T[]? PriorStandardDeviations { get; set; }
Property Value
- T[]
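A sketch of setting Gaussian priors for Bayesian regularization, assuming `double` as the numeric type and two unknown parameters; the specific values and parameter count are illustrative, not part of the API.

```csharp
using AiDotNet.PhysicsInformed.Interfaces;

// One prior mean and standard deviation per unknown parameter.
// A tighter standard deviation pulls the estimate more strongly
// toward its prior mean.
var options = new InverseProblemOptions<double>
{
    PriorMeans = new[] { 1.0, 0.5 },
    PriorStandardDeviations = new[] { 0.2, 0.1 }
};
```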
Regularization
The type of regularization to apply.
public InverseProblemRegularization Regularization { get; set; }
Property Value
- InverseProblemRegularization
RegularizationStrength
Regularization strength (the λ weighting the regularization term in the loss).
public T? RegularizationStrength { get; set; }
Property Value
- T
Remarks
- Too small: the solution may be unstable (overfitting to noise).
- Too large: parameters are biased toward the prior/regularization.
- Rule of thumb: start with λ ≈ noise_level² / signal_level².
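The rule of thumb above can be sketched as plain arithmetic; the noise and signal levels here are illustrative values you would estimate from your own data.

```csharp
// Rule-of-thumb starting point for the regularization strength λ.
double noiseLevel = 0.05;   // e.g. estimated observation noise std dev
double signalLevel = 1.0;   // e.g. typical observation magnitude
double lambda = (noiseLevel * noiseLevel) / (signalLevel * signalLevel);
// lambda = 0.0025, a reasonable initial RegularizationStrength to tune from
```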
UncertaintySamples
Number of samples for uncertainty estimation (if enabled).
public int UncertaintySamples { get; set; }
Property Value
- int
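A sketch combining the two uncertainty-related options, assuming `double` as the numeric type; the sample count is an illustrative starting value, not a recommendation from the library.

```csharp
using AiDotNet.PhysicsInformed.Interfaces;

// Enable uncertainty estimation; more samples give a smoother
// estimate at higher computational cost.
var options = new InverseProblemOptions<double>
{
    EstimateUncertainty = true,
    UncertaintySamples = 100
};
```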
UseSeparateLearningRates
Whether to use separate learning rates for solution and parameters.
public bool UseSeparateLearningRates { get; set; }
Property Value
- bool
Remarks
The unknown parameters often need a different learning rate than the neural network weights; typically a smaller one, for stability.
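A sketch of pairing this flag with `ParameterLearningRate`, assuming `double` as the numeric type; the rate shown is illustrative, and the network's own learning rate is configured elsewhere (not on this options class).

```csharp
using AiDotNet.PhysicsInformed.Interfaces;

// Give the unknown physical parameters their own, smaller learning
// rate while the network weights use the trainer's default rate.
var options = new InverseProblemOptions<double>
{
    UseSeparateLearningRates = true,
    ParameterLearningRate = 1e-4
};
```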