Interface IAdversarialDefense<T, TInput, TOutput>

Namespace
AiDotNet.Interfaces
Assembly
AiDotNet.dll

Defines the contract for adversarial defense mechanisms that protect models against attacks.

public interface IAdversarialDefense<T, TInput, TOutput> : IModelSerializer

Type Parameters

T

The numeric data type used for calculations (e.g., float, double).

TInput

The input data type for the model (e.g., Vector<T>, string).

TOutput

The output data type for the model (e.g., Vector<T>, int).

Remarks

An adversarial defense is a technique that makes a machine learning model more resistant to adversarial attacks, improving its robustness.

For Beginners: Think of adversarial defenses as "armor" for your AI model. Just like armor protects a knight from attacks, these defenses protect your model from adversarial examples that try to fool it.

Common examples of adversarial defenses include:

  • Adversarial Training: Training the model on adversarial examples to make it robust
  • Input Transformations: Preprocessing inputs to remove adversarial perturbations
  • Ensemble Methods: Using multiple models to make predictions more reliable

Why adversarial defenses matter:

  • They make models safer for real-world deployment
  • They improve model reliability under attack
  • They're critical for security-sensitive applications
  • They help models generalize better to unusual inputs
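Taken together, the methods on this interface cover a model's whole lifecycle: harden at training time, filter at inference time, and measure the result. A minimal usage sketch, assuming a concrete defense implementation and a `Predict` method on `IFullModel` (both are assumptions, not guaranteed by this interface):

```csharp
// Illustrative sketch only: the concrete defense type, the variable
// names, and IFullModel.Predict are assumptions, not part of this
// interface's contract.
IAdversarialDefense<double, Vector<double>, Vector<double>> defense = /* ... */;

// Training time: harden the model against attacks.
var hardened = defense.ApplyDefense(trainingData, labels, model);

// Inference time: strip suspected perturbations before predicting.
var cleaned = defense.PreprocessInput(incomingInput);
var output = hardened.Predict(cleaned);   // Predict is assumed here
```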

Methods

ApplyDefense(TInput[], TOutput[], IFullModel<T, TInput, TOutput>)

Trains or hardens a model to be more resistant to adversarial attacks.

IFullModel<T, TInput, TOutput> ApplyDefense(TInput[] trainingData, TOutput[] labels, IFullModel<T, TInput, TOutput> model)

Parameters

trainingData TInput[]

The training data to use for defensive training.

labels TOutput[]

The labels for the training data.

model IFullModel<T, TInput, TOutput>

The model to harden against attacks.

Returns

IFullModel<T, TInput, TOutput>

The defended/hardened model.

Remarks

This method applies defensive techniques to improve model robustness.

For Beginners: This is like training a model to recognize and resist tricks. The defense mechanism teaches the model to handle adversarial examples correctly, making it harder for attackers to fool it.

The process typically involves:

  1. Generating adversarial examples during training
  2. Training the model on both clean and adversarial data
  3. Optimizing the model to be robust against perturbations
  4. Validating improved robustness on test adversarial examples
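The four steps above can be sketched as a hypothetical `ApplyDefense` body in the adversarial-training style. `GenerateAdversarial` and `TrainOn` are illustrative helpers, not part of the AiDotNet API:

```csharp
// Sketch of an adversarial-training defense. Helper methods and the
// _options field are hypothetical; a real implementation would supply
// its own attack generation and training routines.
public IFullModel<T, TInput, TOutput> ApplyDefense(
    TInput[] trainingData, TOutput[] labels, IFullModel<T, TInput, TOutput> model)
{
    for (int epoch = 0; epoch < _options.Epochs; epoch++)
    {
        // 1. Generate adversarial examples from the current model.
        TInput[] adversarial = GenerateAdversarial(model, trainingData, labels);

        // 2-3. Train on both clean and adversarial data so the model
        // learns to be robust against the perturbations.
        model = TrainOn(model, trainingData, labels);
        model = TrainOn(model, adversarial, labels);
    }

    // 4. Validation on held-out adversarial examples is done separately,
    // e.g., via EvaluateRobustness.
    return model;
}
```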

EvaluateRobustness(IFullModel<T, TInput, TOutput>, TInput[], TOutput[], IAdversarialAttack<T, TInput, TOutput>)

Evaluates the robustness of a defended model against attacks.

RobustnessMetrics<T> EvaluateRobustness(IFullModel<T, TInput, TOutput> model, TInput[] testData, TOutput[] labels, IAdversarialAttack<T, TInput, TOutput> attack)

Parameters

model IFullModel<T, TInput, TOutput>

The defended model to evaluate.

testData TInput[]

Test data to use for evaluation.

labels TOutput[]

The true labels for test data.

attack IAdversarialAttack<T, TInput, TOutput>

The attack to test against.

Returns

RobustnessMetrics<T>

Robustness metrics including clean accuracy and adversarial accuracy.

Remarks

For Beginners: This tests how well your defense works by trying attacks on the defended model and measuring how often it resists them successfully.
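A common pattern is to score the same test set before and after hardening under the same attack; a hedged sketch, where the variable names are illustrative:

```csharp
// Compare an undefended baseline against the hardened model under one
// attack. Variable names are illustrative, not part of the API.
RobustnessMetrics<double> before =
    defense.EvaluateRobustness(baselineModel, testData, testLabels, attack);
RobustnessMetrics<double> after =
    defense.EvaluateRobustness(hardenedModel, testData, testLabels, attack);

// A successful defense keeps clean accuracy high while raising
// adversarial accuracy relative to the baseline.
```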

GetOptions()

Gets the configuration options for the adversarial defense.

AdversarialDefenseOptions<T> GetOptions()

Returns

AdversarialDefenseOptions<T>

The configuration options for the defense.

Remarks

For Beginners: These are the "settings" for the defense mechanism, controlling how aggressively it protects the model and what techniques it uses.

PreprocessInput(TInput)

Preprocesses input data to remove or reduce adversarial perturbations.

TInput PreprocessInput(TInput input)

Parameters

input TInput

The potentially adversarial input.

Returns

TInput

The cleaned/defended input.

Remarks

For Beginners: This is like a "filter" that cleans up suspicious inputs before they reach your model. It tries to detect and remove malicious changes that an attacker might have added.
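As one concrete (hypothetical) instance, a defense over `Vector<double>` inputs might implement this as feature squeezing, i.e., rounding each feature to a coarse grid so that small adversarial perturbations collapse away. The granularity and the `Vector<double>` construction below are illustrative assumptions:

```csharp
// Hypothetical PreprocessInput body: feature squeezing via bit-depth
// reduction. Coarse rounding erases small perturbations while leaving
// clean inputs largely unchanged.
public Vector<double> PreprocessInput(Vector<double> input)
{
    const double levels = 32.0;              // illustrative bit depth
    var squeezed = new double[input.Length];
    for (int i = 0; i < input.Length; i++)
        squeezed[i] = Math.Round(input[i] * levels) / levels;
    return new Vector<double>(squeezed);     // constructor assumed
}
```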

Reset()

Resets the defense state to prepare for a fresh defense application.

void Reset()

Remarks

For Beginners: This clears any saved state left over from previous defense operations, so the defense can be applied fresh to a new model or dataset.