Class GaussianDifferentialPrivacy<T>
Namespace
- AiDotNet.FederatedLearning.Privacy
Assembly
- AiDotNet.dll
public class GaussianDifferentialPrivacy<T> : PrivacyMechanismBase<Dictionary<string, T[]>, T>, IPrivacyMechanism<Dictionary<string, T[]>>
Type Parameters
T
Inheritance
- object → PrivacyMechanismBase<Dictionary<string, T[]>, T> → GaussianDifferentialPrivacy<T>
Implements
- IPrivacyMechanism<Dictionary<string, T[]>>
Constructors
GaussianDifferentialPrivacy(double, int?)
Initializes a new instance of the GaussianDifferentialPrivacy<T> class.
public GaussianDifferentialPrivacy(double clipNorm = 1, int? randomSeed = null)
Parameters
clipNorm (double): The maximum L2 norm for gradient clipping (sensitivity bound).
randomSeed (int?): Optional random seed for reproducibility.
Remarks
For Beginners: Creates a differential privacy mechanism with a specified gradient clipping threshold.
Gradient clipping (clipNorm) is crucial for DP:
- Bounds the maximum influence any single data point can have
- Makes noise calibration possible
- Common values: 0.1 - 10.0 depending on model and data
Lower clipNorm:
- Stronger privacy guarantee
- More aggressive clipping
- May slow convergence
Higher clipNorm:
- Less clipping
- Faster convergence
- Requires more noise for same privacy
Recommendations:
- Start with clipNorm = 1.0
- Monitor gradient norms during training
- Adjust based on typical gradient magnitudes
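A minimal construction sketch based on the signature above (the double type argument and the seed value are illustrative choices, not requirements):
// Gaussian DP mechanism with the default clipping threshold and a fixed seed
// so the injected noise is reproducible between runs.
var dp = new GaussianDifferentialPrivacy<double>(clipNorm: 1.0, randomSeed: 42);
// The configured threshold can be read back later, e.g. for logging.
double clip = dp.GetClipNorm(); // 1.0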
Methods
ApplyPrivacy(Dictionary<string, T[]>, double, double)
Applies differential privacy to model parameters by adding calibrated Gaussian noise.
public override Dictionary<string, T[]> ApplyPrivacy(Dictionary<string, T[]> model, double epsilon, double delta)
Parameters
model (Dictionary<string, T[]>): The model parameters to add noise to.
epsilon (double): Privacy budget for this operation (smaller = more private).
delta (double): Failure probability (typically 1e-5 or smaller).
Returns
- Dictionary<string, T[]>
The model with differential privacy applied.
Remarks
This method implements the Gaussian mechanism for (ε, δ)-differential privacy.
For Beginners: This adds carefully calculated random noise to protect privacy while maintaining model utility.
Step-by-step process:
- Calculate current L2 norm of model parameters
- If norm > clipNorm, scale down parameters to clipNorm
- Calculate noise scale σ based on ε, δ, and sensitivity
- Add Gaussian noise N(0, σ²) to each parameter
- Update privacy budget consumed
Mathematical details:
- Sensitivity Δ = clipNorm (worst-case parameter change)
- σ = (Δ/ε) × sqrt(2 × ln(1.25/δ))
- Noise ~ N(0, σ²) added to each parameter independently
For example, with ε=1.0, δ=1e-5, clipNorm=1.0:
- σ = (1.0/1.0) × sqrt(2 × ln(125000)) ≈ 4.8
- Each parameter gets noise from N(0, 4.8²)
- Original params: [0.5, -0.3, 0.8]
- Noisy params: [0.52, -0.35, 0.83] (an unusually small noise realization shown for readability; typical draws from N(0, 4.8²) perturb parameters far more)
Privacy accounting:
- Each call consumes ε privacy budget
- Total budget accumulates: ε_total = ε_1 + ε_2 + ... (simplified)
- Advanced: Use Rényi DP for tighter composition bounds
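A sketch of one privatized round using the formula above (the layer name and parameter values are illustrative):
using System.Collections.Generic;
using AiDotNet.FederatedLearning.Privacy;

// Model parameters keyed by layer name.
var model = new Dictionary<string, double[]>
{
    ["layer1"] = new[] { 0.5, -0.3, 0.8 }
};

var dp = new GaussianDifferentialPrivacy<double>(clipNorm: 1.0);

// ε = 1.0, δ = 1e-5 gives σ = (1.0/1.0) × sqrt(2 × ln(1.25/1e-5)) ≈ 4.8,
// so each parameter receives independent N(0, σ²) noise after clipping.
Dictionary<string, double[]> noisy = dp.ApplyPrivacy(model, epsilon: 1.0, delta: 1e-5);

// With basic composition, this call consumed ε = 1.0 of the privacy budget.
double spent = dp.GetPrivacyBudgetConsumed();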
GetClipNorm()
Gets the gradient clipping norm used for sensitivity bounding.
public double GetClipNorm()
Returns
- double
The clipping norm value.
Remarks
For Beginners: Returns the maximum allowed parameter norm. Parameters larger than this are scaled down before adding noise.
GetMechanismName()
Gets the name of the privacy mechanism.
public override string GetMechanismName()
Returns
- string
A string describing the mechanism.
GetPrivacyBudgetConsumed()
Gets the total privacy budget consumed so far.
public override double GetPrivacyBudgetConsumed()
Returns
- double
The total privacy budget (ε) consumed so far.
Remarks
For Beginners: Returns how much privacy budget has been used up. Privacy budget is cumulative - once spent, it's gone.
For example:
- Round 1: ε = 0.5 consumed, total = 0.5
- Round 2: ε = 0.5 consumed, total = 1.0
- Round 3: ε = 0.5 consumed, total = 1.5
If you started with total budget 10.0, you have 8.5 remaining.
Note: This uses basic composition. Advanced composition (Rényi DP) gives tighter bounds and would show less budget consumed.
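A sketch of budget tracking across rounds under basic composition (the per-round ε, δ, and round count are illustrative):
var dp = new GaussianDifferentialPrivacy<double>(clipNorm: 1.0);
var model = new Dictionary<string, double[]> { ["layer1"] = new[] { 0.5, -0.3, 0.8 } };
const double totalBudget = 10.0;

for (int round = 1; round <= 3; round++)
{
    model = dp.ApplyPrivacy(model, epsilon: 0.5, delta: 1e-5);
    double spent = dp.GetPrivacyBudgetConsumed(); // 0.5, then 1.0, then 1.5
    Console.WriteLine($"Round {round}: spent {spent}, remaining {totalBudget - spent}");
}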
ResetPrivacyBudget()
Resets the privacy budget counter.
public void ResetPrivacyBudget()
Remarks
For Beginners: Resets the privacy budget tracker to zero.
WARNING: This should only be used when starting a completely new training run. Do not reset during active training as it would give false privacy accounting.
Use cases:
- Starting new experiment with same mechanism instance
- Testing and debugging
- Separate training phases with independent privacy guarantees
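A sketch of reusing one mechanism instance across independent experiments (the experiment structure is illustrative):
var dp = new GaussianDifferentialPrivacy<double>(clipNorm: 1.0);

// ... experiment 1: several dp.ApplyPrivacy(...) calls consume budget ...

// Before a completely separate training run, reset so accounting starts from zero.
dp.ResetPrivacyBudget();
double spent = dp.GetPrivacyBudgetConsumed(); // 0.0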