Namespace AiDotNet.AutoML.NAS
Classes
- AttentiveNASConfig<T>
Configuration for an AttentiveNAS sub-network.
- AttentiveNAS<T>
AttentiveNAS: Improving Neural Architecture Search via Attentive Sampling. Uses an attention-based meta-network to guide the sampling of sub-networks, focusing search on promising regions of the architecture space.
Reference: "AttentiveNAS: Improving Neural Architecture Search via Attentive Sampling" (CVPR 2021)
- BigNASConfig
Configuration for a BigNAS sub-network.
- BigNAS<T>
BigNAS: Scaling Up Neural Architecture Search with Big Single-Stage Models. Combines sandwich sampling with in-place knowledge distillation to train very large super-networks that can adapt to various deployment scenarios.
Reference: "BigNAS: Scaling Up Neural Architecture Search with Big Single-Stage Models"
- ENAS<T>
Efficient Neural Architecture Search via Parameter Sharing. ENAS uses a controller RNN to sample architectures and shares weights among child models, achieving a roughly 1000x speedup over standard NAS.
Reference: "Efficient Neural Architecture Search via Parameter Sharing" (ICML 2018)
- FBNet<T>
FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable Neural Architecture Search. Uses Gumbel-Softmax with hardware latency constraints to find efficient architectures optimized for specific target devices.
Reference: "FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable NAS" (CVPR 2019)
- GDAS<T>
Gradient-based search using a Differentiable Architecture Sampler. GDAS uses Gumbel-Softmax sampling to make the architecture search fully differentiable while keeping the operation selection discrete during the forward pass.
Reference: "Searching for A Robust Neural Architecture in Four GPU Hours" (CVPR 2019)
- HardwareConstraints<T>
Hardware constraints for NAS. Defines maximum latency, energy, and memory limits for architecture search.
- HardwareCostModel<T>
Models hardware costs for neural architecture search operations using FLOP-based estimation. Supports latency, energy, and memory cost estimation for different hardware platforms.
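A rough sketch of FLOP-based estimation in the spirit of this description: latency derived from peak throughput and energy from an efficiency figure. The formulas and the utilization factor are illustrative assumptions, not the AiDotNet implementation.

```csharp
public static class FlopCostEstimate
{
    // flops: total floating-point operations of the architecture
    // peakGflops: platform peak throughput; utilization: achievable fraction of peak
    public static double LatencyMs(double flops, double peakGflops, double utilization = 0.3) =>
        flops / (peakGflops * 1e9 * utilization) * 1000.0;

    // gflopsPerWatt: platform energy efficiency
    public static double EnergyMillijoules(double flops, double gflopsPerWatt) =>
        flops / (gflopsPerWatt * 1e9) * 1000.0;
}
```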
- HardwareCost<T>
Represents the hardware cost of an operation or architecture.
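As a rough illustration of how such a cost estimate might be checked against the constraints above; the record and property names are hypothetical, not the actual AiDotNet types.

```csharp
// Illustrative stand-ins for the constraints and cost types.
public record ConstraintsSketch(double MaxLatencyMs, double MaxEnergyMj, double MaxMemoryMb);
public record CostSketch(double LatencyMs, double EnergyMj, double MemoryMb);

public static class Feasibility
{
    // A candidate architecture is kept only if it fits every budget.
    public static bool Satisfies(CostSketch cost, ConstraintsSketch limits) =>
        cost.LatencyMs <= limits.MaxLatencyMs &&
        cost.EnergyMj  <= limits.MaxEnergyMj  &&
        cost.MemoryMb  <= limits.MaxMemoryMb;
}
```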
- NasAutoMLModelBase<T>
Base class for NAS-based AutoML models.
- OnceForAll<T>
Once-for-All (OFA) Networks: Train Once, Specialize for Anything. Trains a single large network that supports diverse architectural configurations, enabling instant specialization to different hardware platforms without retraining.
Reference: "Once for All: Train One Network and Specialize it for Efficient Deployment" (ICLR 2020)
- PCDARTS<T>
Partial Channel Connections for Memory-Efficient Differentiable Architecture Search. PC-DARTS reduces memory consumption by sampling only a subset of channels during the search, making it more scalable to larger search spaces and datasets.
Reference: "PC-DARTS: Partial Channel Connections for Memory-Efficient Architecture Search" (ICLR 2020)
- PlatformCharacteristics
Platform characteristics for hardware cost estimation. Contains performance metrics like GFLOPS, memory bandwidth, and energy efficiency.
- ProxylessNAS<T>
ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware. Uses path binarization and latency-aware loss to search directly on the target device without requiring a proxy task or separate hardware lookup tables.
Reference: "ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware" (ICLR 2019)
- SubNetworkConfig
Configuration for a sub-network sampled from a Once-for-All (OFA) super-network.
Enums
- HardwarePlatform
Supported hardware platforms.