Interface IPrivacyMechanism<TModel>

Namespace: AiDotNet.Interfaces
Assembly: AiDotNet.dll

Defines privacy-preserving mechanisms for federated learning to protect client data.

public interface IPrivacyMechanism<TModel>

Type Parameters

TModel

The type of model to apply privacy mechanisms to.

Remarks

This interface defines the contract for techniques that ensure model updates don't leak sensitive information about individual data points in clients' local datasets.

For Beginners: Privacy mechanisms are like filters that protect sensitive information while still allowing useful knowledge to be shared.

Think of privacy mechanisms as protective measures:

  • Differential Privacy: Adds carefully calibrated noise to make individual data unidentifiable
  • Secure Aggregation: Encrypts updates so the server only sees the combined result
  • Homomorphic Encryption: Allows computation on encrypted data

For example, in a hospital scenario:

  • Without privacy: Model updates might reveal information about specific patients
  • With differential privacy: Random noise is added so you can't identify individual patients
  • The noise is calibrated so the overall patterns remain accurate

Privacy mechanisms provide mathematical guarantees:

  • Epsilon (ε): Privacy budget - lower values mean stronger privacy
  • Delta (δ): Probability that privacy guarantee fails
  • Common setting: ε=1.0, δ=1e-5 - strong privacy with at most a 1-in-100,000 chance that the guarantee fails
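
The following is a minimal sketch of how these parameters flow through the interface from a client's point of view. GaussianPrivacyMechanism and TrainLocally are illustrative names assumed for this example, not confirmed AiDotNet types.

// Hypothetical usage sketch: GaussianPrivacyMechanism and TrainLocally
// are illustrative assumptions, not confirmed parts of AiDotNet.
IPrivacyMechanism<double[]> mechanism = new GaussianPrivacyMechanism();

double[] localUpdate = TrainLocally();  // hypothetical local training step

// epsilon = 1.0, delta = 1e-5: the common setting described above.
double[] privateUpdate = mechanism.ApplyPrivacy(localUpdate, 1.0, 1e-5);

Console.WriteLine(mechanism.GetMechanismName());         // e.g. "Gaussian Mechanism"
Console.WriteLine(mechanism.GetPrivacyBudgetConsumed()); // e.g. 1.0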

Methods

ApplyPrivacy(TModel, double, double)

Applies privacy-preserving techniques to a model update before sharing it.

TModel ApplyPrivacy(TModel model, double epsilon, double delta)

Parameters

model TModel

The model update to apply privacy to.

epsilon double

Privacy budget parameter - smaller values provide stronger privacy.

delta double

Probability of privacy guarantee failure - typically very small (e.g., 1e-5).

Returns

TModel

The model update with privacy mechanisms applied.

Remarks

This method transforms model updates to provide privacy guarantees while maintaining utility.

For Beginners: This is like redacting sensitive parts of a document before sharing it. You remove or obscure information that could identify individuals while keeping the useful content intact.

Common techniques:

  • Differential Privacy: Adds random noise proportional to sensitivity
  • Gradient Clipping: Bounds the magnitude of updates so noise can be calibrated to a known sensitivity
  • Local DP: Each client adds noise before sending updates
  • Central DP: Server adds noise after aggregation

For example with differential privacy (a code sketch follows these steps):

  1. Client trains model and computes weight updates
  2. Applies gradient clipping to limit maximum change
  3. Adds calibrated Gaussian noise to each weight
  4. Sends noisy update to server
  5. Even if server is compromised, individual data remains private
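
To make steps 2-4 concrete, here is a minimal sketch of a Gaussian-mechanism implementation of this interface for TModel = double[]. The class name, the fixed clipping norm C, and the simple additive budget accounting are assumptions for illustration; the noise scale follows the standard Gaussian-mechanism calibration sigma = C * sqrt(2 * ln(1.25/δ)) / ε (valid for ε < 1). This is not AiDotNet's actual implementation.

using System;
using System.Linq;
using AiDotNet.Interfaces;

// Illustrative sketch only; not AiDotNet's built-in mechanism.
public sealed class GaussianPrivacyMechanism : IPrivacyMechanism<double[]>
{
    private const double ClipNorm = 1.0;       // assumed L2 sensitivity bound C
    private readonly Random _rng = new Random();
    private double _budgetConsumed;

    public double[] ApplyPrivacy(double[] model, double epsilon, double delta)
    {
        // Step 2: clip the update so its L2 norm is at most C.
        double norm = Math.Sqrt(model.Sum(w => w * w));
        double scale = Math.Min(1.0, ClipNorm / norm);

        // Step 3: calibrate Gaussian noise to the clipped sensitivity:
        // sigma = C * sqrt(2 * ln(1.25 / delta)) / epsilon
        double sigma = ClipNorm * Math.Sqrt(2.0 * Math.Log(1.25 / delta)) / epsilon;

        var noisy = new double[model.Length];
        for (int i = 0; i < model.Length; i++)
            noisy[i] = model[i] * scale + sigma * NextGaussian();

        _budgetConsumed += epsilon;            // simple additive accounting
        return noisy;                          // step 4: safe to share
    }

    public string GetMechanismName() => "Gaussian Mechanism";

    public double GetPrivacyBudgetConsumed() => _budgetConsumed;

    // Box-Muller transform: one standard-normal sample.
    private double NextGaussian()
    {
        double u1 = 1.0 - _rng.NextDouble();   // in (0, 1], safe for Log
        double u2 = _rng.NextDouble();
        return Math.Sqrt(-2.0 * Math.Log(u1)) * Math.Cos(2.0 * Math.PI * u2);
    }
}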

GetMechanismName()

Gets the name of the privacy mechanism.

string GetMechanismName()

Returns

string

A string describing the privacy mechanism (e.g., "Gaussian Mechanism", "Laplace Mechanism").

Remarks

For Beginners: This identifies which privacy technique is in use, which is helpful for documenting experiments and comparing different privacy approaches.

GetPrivacyBudgetConsumed()

Gets the current privacy budget consumed by this mechanism.

double GetPrivacyBudgetConsumed()

Returns

double

The amount of privacy budget consumed so far.

Remarks

Privacy budget is a finite resource in differential privacy. Each time you share information, you "spend" some privacy budget. Once exhausted, you can no longer provide strong privacy guarantees.

For Beginners: Think of privacy budget like a bank account for privacy. Each time you share data, you withdraw from this account. When the account is empty, you've used up your privacy guarantees and should stop sharing.

For example (see the sketch after this list):

  • Start with privacy budget ε=10
  • Round 1: Share update with ε=1, remaining budget = 9
  • Round 2: Share update with ε=1, remaining budget = 8
  • After 10 such rounds the budget is exhausted (with simple additive composition, the ε values of successive rounds add up)
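
Below is a sketch of the corresponding bookkeeping from the caller's side. The total budget of ε=10, the per-round ε=1, and the helpers TrainLocally and SendToServer are assumptions for illustration; stopping when the running sum reaches the total assumes basic sequential composition.

// Hypothetical budget-tracking loop; TrainLocally and SendToServer
// are illustrative helpers, not AiDotNet APIs.
double totalBudget = 10.0;
double epsilonPerRound = 1.0;
IPrivacyMechanism<double[]> mechanism = new GaussianPrivacyMechanism(); // see sketch above

while (mechanism.GetPrivacyBudgetConsumed() + epsilonPerRound <= totalBudget)
{
    double[] update = TrainLocally();                            // hypothetical
    double[] noisy = mechanism.ApplyPrivacy(update, epsilonPerRound, 1e-5);
    SendToServer(noisy);                                         // hypothetical
}
// The loop exits after 10 rounds: 10 * 1.0 = 10.0, the full budget.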