Class GaussianDifferentialPrivacyVector<T>

Namespace
AiDotNet.FederatedLearning.Privacy
Assembly
AiDotNet.dll

Implements Gaussian differential privacy for vector-based model updates.

public sealed class GaussianDifferentialPrivacyVector<T> : PrivacyMechanismBase<Vector<T>, T>, IPrivacyMechanism<Vector<T>>

Type Parameters

T

The numeric type for model parameters (e.g., double, float).

Inheritance
PrivacyMechanismBase<Vector<T>, T>
GaussianDifferentialPrivacyVector<T>

Implements
IPrivacyMechanism<Vector<T>>

Remarks

For Beginners: This adds carefully calibrated random noise to a parameter vector so that individual data points cannot be inferred from the update, while the overall signal remains useful.

Constructors

GaussianDifferentialPrivacyVector(double, int?)

public GaussianDifferentialPrivacyVector(double clipNorm = 1, int? randomSeed = null)

Parameters

clipNorm double

The maximum L2 norm an update may have before noise is added; updates with a larger norm are scaled down to this value. Defaults to 1.

randomSeed int?

Optional seed for the noise generator, so the added noise can be reproduced (for example, in tests).
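
A minimal construction sketch, assuming the class is used with double parameters; the printed mechanism name is illustrative rather than the exact string returned.

  using AiDotNet.FederatedLearning.Privacy;

  // Clip each update to an L2 norm of at most 1.0 and fix the seed so the noise is reproducible.
  var dp = new GaussianDifferentialPrivacyVector<double>(clipNorm: 1.0, randomSeed: 42);

  Console.WriteLine(dp.GetMechanismName());  // e.g. a Gaussian-mechanism label
  Console.WriteLine(dp.GetClipNorm());       // 1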

Methods

ApplyPrivacy(Vector<T>, double, double)

Applies privacy-preserving techniques to a model update before sharing it.

public override Vector<T> ApplyPrivacy(Vector<T> model, double epsilon, double delta)

Parameters

model Vector<T>

The model update to apply privacy to.

epsilon double

Privacy budget parameter - smaller values provide stronger privacy.

delta double

Probability of privacy guarantee failure - typically very small (e.g., 1e-5).

Returns

Vector<T>

The model update with privacy mechanisms applied.

Remarks

This method transforms model updates to provide privacy guarantees while maintaining utility.

For Beginners: This is like redacting sensitive parts of a document before sharing it. You remove or obscure information that could identify individuals while keeping the useful content intact.

Common techniques:

  • Differential Privacy: Adds random noise proportional to sensitivity
  • Gradient Clipping: Limits the magnitude of updates to prevent outliers
  • Local DP: Each client adds noise before sending updates
  • Central DP: Server adds noise after aggregation

For example with differential privacy:

  1. Client trains model and computes weight updates
  2. Applies gradient clipping to limit maximum change
  3. Adds calibrated Gaussian noise to each weight
  4. Sends noisy update to server
  5. Even if server is compromised, individual data remains private
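
A minimal usage sketch of ApplyPrivacy follows. How a Vector<double> is constructed from a raw array is an assumption, and the noise-scale formula in the comment is the standard Gaussian-mechanism calibration, not something this page specifies.

  using AiDotNet.FederatedLearning.Privacy;

  var dp = new GaussianDifferentialPrivacyVector<double>(clipNorm: 1.0);

  // A client-side weight update; constructing Vector<double> from an array is assumed here.
  var update = new Vector<double>(new[] { 0.12, -0.05, 0.33 });

  // Clips the update to the configured norm, then adds Gaussian noise calibrated to (epsilon, delta).
  // A standard calibration is sigma = clipNorm * sqrt(2 * ln(1.25 / delta)) / epsilon.
  var noisyUpdate = dp.ApplyPrivacy(update, epsilon: 1.0, delta: 1e-5);

  // Only noisyUpdate is sent to the server; the raw update never leaves the client.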

GetClipNorm()

Gets the gradient clipping norm used for sensitivity bounding.

public double GetClipNorm()

Returns

double

The clipping norm value.

Remarks

For Beginners: Returns the maximum allowed update norm. Updates whose norm exceeds this value are scaled down before noise is added.

GetMechanismName()

Gets the name of the privacy mechanism.

public override string GetMechanismName()

Returns

string

A string describing the privacy mechanism (e.g., "Gaussian Mechanism", "Laplace Mechanism").

Remarks

For Beginners: This identifies which privacy technique is being used, helpful for documentation and comparing different privacy approaches.

GetPrivacyBudgetConsumed()

Gets the current privacy budget consumed by this mechanism.

public override double GetPrivacyBudgetConsumed()

Returns

double

The amount of privacy budget consumed so far.

Remarks

Privacy budget is a finite resource in differential privacy. Each time you share information, you "spend" some privacy budget. Once exhausted, you can no longer provide strong privacy guarantees.

For Beginners: Think of privacy budget like a bank account for privacy. Each time you share data, you withdraw from this account. When the account is empty, you've used up your privacy guarantees and should stop sharing.

For example:

  • Start with privacy budget ε=10
  • Round 1: Share update with ε=1, remaining budget = 9
  • Round 2: Share update with ε=1, remaining budget = 8
  • After 10 rounds, budget is exhausted
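
The sketch below illustrates this accounting pattern. It assumes each ApplyPrivacy call adds its epsilon to the consumed budget, and TrainLocally and SendToServer are hypothetical placeholders for application code.

  var dp = new GaussianDifferentialPrivacyVector<double>(clipNorm: 1.0);
  const double totalBudget = 10.0;
  const double epsilonPerRound = 1.0;

  for (int round = 1; round <= 20; round++)
  {
      // Stop sharing once the next round would exceed the total budget.
      if (dp.GetPrivacyBudgetConsumed() + epsilonPerRound > totalBudget)
          break;

      Vector<double> localUpdate = TrainLocally();               // hypothetical local training step
      var noisy = dp.ApplyPrivacy(localUpdate, epsilonPerRound, 1e-5);
      SendToServer(noisy);                                       // hypothetical transport call
  }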

ResetPrivacyBudget()

Resets the privacy budget counter.

public void ResetPrivacyBudget()

Remarks

For Beginners: Resets the privacy budget tracker to zero.

WARNING: This should only be used when starting a completely new training run. Do not reset during active training as it would give false privacy accounting.
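
A short sketch of the intended usage, assuming one mechanism instance (dp) is reused across separate training runs.

  // Training run 1 finishes; the consumed budget reflects every update shared so far.
  double spent = dp.GetPrivacyBudgetConsumed();

  // Only when starting a completely new training run on a fresh model:
  dp.ResetPrivacyBudget();
  // dp.GetPrivacyBudgetConsumed() now reports 0 for the new run.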