Class MeshCNN<T>
- Namespace: AiDotNet.NeuralNetworks
- Assembly: AiDotNet.dll
Implements the MeshCNN architecture for processing 3D triangle meshes.
public class MeshCNN<T> : NeuralNetworkBase<T>, INeuralNetworkModel<T>, INeuralNetwork<T>, IFullModel<T, Tensor<T>, Tensor<T>>, IModel<Tensor<T>, Tensor<T>, ModelMetadata<T>>, IModelSerializer, ICheckpointableModel, IParameterizable<T, Tensor<T>, Tensor<T>>, IFeatureAware, IFeatureImportance<T>, ICloneable<IFullModel<T, Tensor<T>, Tensor<T>>>, IGradientComputable<T, Tensor<T>, Tensor<T>>, IJitCompilable<T>, IInterpretableModel<T>, IInputGradientComputable<T>, IDisposable
Type Parameters
T: The numeric type used for calculations (typically float or double).
- Inheritance
- NeuralNetworkBase<T> → MeshCNN<T>
Remarks
MeshCNN is a deep learning architecture that operates directly on 3D mesh data by treating edges as the fundamental unit of computation. This enables learning from the mesh structure itself rather than converting to voxels or point clouds.
For Beginners: MeshCNN processes 3D shapes represented as triangle meshes.
Key concepts:
- Mesh: A 3D surface made of connected triangles (vertices + faces)
- Edge: A line segment connecting two vertices, shared by up to 2 faces
- Edge features: Properties like dihedral angle, edge ratios, face angles
How it works:
- Extract edge features from the mesh (5 features per edge by default)
- Apply edge convolutions to learn patterns in edge neighborhoods
- Pool edges by removing less important ones (simplifies the mesh)
- Repeat conv + pool to build hierarchical features
- Aggregate edge features for classification/segmentation
Applications:
- 3D shape classification (e.g., recognize chair vs table)
- Mesh segmentation (label each part of a 3D model)
- Shape retrieval (find similar 3D models)
Reference: "MeshCNN: A Network with an Edge" by Hanocka et al., SIGGRAPH 2019
Constructors
MeshCNN()
Initializes a new instance of the MeshCNN<T> class with default options.
public MeshCNN()
Remarks
Creates a MeshCNN with default configuration suitable for ModelNet40 classification.
MeshCNN(MeshCNNOptions, IGradientBasedOptimizer<T, Tensor<T>, Tensor<T>>?, ILossFunction<T>?)
Initializes a new instance of the MeshCNN<T> class with specified options.
public MeshCNN(MeshCNNOptions options, IGradientBasedOptimizer<T, Tensor<T>, Tensor<T>>? optimizer = null, ILossFunction<T>? lossFunction = null)
Parameters
options (MeshCNNOptions): Configuration options for the MeshCNN.
optimizer (IGradientBasedOptimizer<T, Tensor<T>, Tensor<T>>): The optimizer for training. Defaults to Adam if null.
lossFunction (ILossFunction<T>): The loss function. Defaults based on task type if null.
Remarks
For Beginners: Creates a MeshCNN with custom configuration.
Exceptions
- ArgumentNullException
Thrown when options is null.
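A brief sketch of this constructor; the parameterless MeshCNNOptions constructor is an assumption, and its members should be configured per that type's own documentation.
// Configure the network through a MeshCNNOptions instance.
var options = new MeshCNNOptions();
// ... set option members as needed ...
var network = new MeshCNN<double>(options);
// Passing null keeps the defaults: Adam optimizer and a task-appropriate loss.
var explicitDefaults = new MeshCNN<double>(options, optimizer: null, lossFunction: null);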
MeshCNN(int, int, ILossFunction<T>?)
Initializes a new instance of the MeshCNN<T> class with simple parameters.
public MeshCNN(int numClasses, int inputFeatures = 5, ILossFunction<T>? lossFunction = null)
Parameters
numClasses (int): Number of output classes for classification.
inputFeatures (int): Number of input features per edge. Default is 5.
lossFunction (ILossFunction<T>): The loss function. Defaults based on task type if null.
Remarks
For Beginners: Creates a MeshCNN with default architecture settings.
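For example, using only the parameters documented above:
// 30-way classification with the default 5 features per edge:
var classifier = new MeshCNN<float>(numClasses: 30);
// The same kind of network, but with 8 input features per edge:
var wider = new MeshCNN<float>(numClasses: 30, inputFeatures: 8);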
Properties
ConvChannels
Gets the channel configuration for edge convolution layers.
public int[] ConvChannels { get; }
Property Value
- int[]
InputFeatures
Gets the number of input features per edge.
public int InputFeatures { get; }
Property Value
- int
NumClasses
Gets the number of output classes for classification.
public int NumClasses { get; }
Property Value
- int
PoolTargets
Gets the pooling targets for mesh simplification.
public int[] PoolTargets { get; }
Property Value
- int[]
Methods
Backward(Tensor<T>)
Performs backward pass through the network.
public Tensor<T> Backward(Tensor<T> outputGradient)
Parameters
outputGradient (Tensor<T>): Gradient of the loss with respect to the output.
Returns
- Tensor<T>
Gradient with respect to input.
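A sketch of a manual forward/backward step, continuing the example from the class remarks; the Tensor<float> shape-array constructor is an assumption, and in practice the output gradient comes from your loss function.
network.SetEdgeAdjacency(edgeAdjacency);
Tensor<float> logits = network.Forward(edgeFeatures);
// Gradient of the loss w.r.t. the output, same shape as the logits ([NumClasses]).
var outputGradient = new Tensor<float>(new[] { network.NumClasses });
// ... fill outputGradient with dLoss/dLogits ...
// Backpropagate to obtain the gradient w.r.t. the input edge features.
Tensor<float> inputGradient = network.Backward(outputGradient);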
CreateNewInstance()
Creates a new instance for cloning.
protected override IFullModel<T, Tensor<T>, Tensor<T>> CreateNewInstance()
Returns
- IFullModel<T, Tensor<T>, Tensor<T>>
New MeshCNN instance.
Remarks
For Beginners: This creates a blank version of the same type of neural network.
It's used internally by methods like DeepCopy and Clone to create the right type of network before copying the data into it.
DeserializeNetworkSpecificData(BinaryReader)
Deserializes network-specific data.
protected override void DeserializeNetworkSpecificData(BinaryReader reader)
Parameters
reader (BinaryReader): The binary reader to read network-specific data from.
Remarks
This method is called at the end of the general deserialization process to allow derived classes to read any additional data specific to their implementation.
For Beginners: Continuing the suitcase analogy, this is like unpacking that special compartment. After the main deserialization method has unpacked the common items (layers, parameters), this method allows each specific type of neural network to unpack its own unique items that were stored during serialization.
Forward(Tensor<T>)
Performs a forward pass through the network.
public Tensor<T> Forward(Tensor<T> input)
Parameters
input (Tensor<T>): Edge features tensor with shape [numEdges, InputFeatures].
Returns
- Tensor<T>
Classification logits with shape [NumClasses].
Remarks
Call SetEdgeAdjacency(int[,]) before calling this method.
Exceptions
- InvalidOperationException
Thrown when edge adjacency is not set.
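A sketch of processing several meshes in sequence; meshBatch is a hypothetical collection of feature/adjacency pairs, and skipping SetEdgeAdjacency(int[,]) would raise the InvalidOperationException noted above.
// Each mesh has its own connectivity, so set the adjacency before every forward pass.
foreach (var (features, adjacency) in meshBatch)
{
    network.SetEdgeAdjacency(adjacency);              // [numEdges, NumNeighbors]
    Tensor<float> logits = network.Forward(features); // [numEdges, InputFeatures] -> [NumClasses]
    // ... use the logits ...
}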
GetModelMetadata()
Gets metadata about this model.
public override ModelMetadata<T> GetModelMetadata()
Returns
- ModelMetadata<T>
Model metadata.
InitializeLayers()
Initializes the layers of the MeshCNN network.
protected override void InitializeLayers()
Remarks
If the architecture provides custom layers, those are used. Otherwise, default layers are created using CreateDefaultMeshCNNLayers(NeuralNetworkArchitecture<T>, int, int[]?, int[]?, int[]?, int, bool, double, bool).
Predict(Tensor<T>)
Generates predictions for the given input.
public override Tensor<T> Predict(Tensor<T> input)
Parameters
input (Tensor<T>): Edge features tensor.
Returns
- Tensor<T>
Classification logits.
Remarks
For Beginners: This is the main method you'll use to get results from your trained neural network. You provide some input data (here, the edge features extracted from a mesh), and the network processes it through all its layers to produce an output (such as classification logits).
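A sketch of reading the predicted class from the logits; the flat indexer on Tensor<T> is an assumption, so adapt it to your tensor API.
network.SetEdgeAdjacency(edgeAdjacency);
Tensor<float> logits = network.Predict(edgeFeatures);
// Pick the class with the highest logit.
int bestClass = 0;
for (int c = 1; c < network.NumClasses; c++)
{
    if (logits[c] > logits[bestClass])
        bestClass = c;
}
Console.WriteLine($"Predicted class: {bestClass}");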
SerializeNetworkSpecificData(BinaryWriter)
Serializes network-specific data.
protected override void SerializeNetworkSpecificData(BinaryWriter writer)
Parameters
writer (BinaryWriter): The binary writer to write network-specific data to.
Remarks
This method is called at the end of the general serialization process to allow derived classes to write any additional data specific to their implementation.
For Beginners: Think of this as packing a special compartment in your suitcase. While the main serialization method packs the common items (layers, parameters), this method allows each specific type of neural network to pack its own unique items that other networks might not have.
SetEdgeAdjacency(int[,])
Sets the edge adjacency for the current mesh being processed.
public void SetEdgeAdjacency(int[,] edgeAdjacency)
Parameters
edgeAdjacency (int[,]): A 2D array of shape [numEdges, NumNeighbors] containing neighbor edge indices.
Remarks
For Beginners: Before processing a mesh, you must tell the network how edges are connected. This method sets that connectivity information.
Call this method before each Forward pass with a new mesh.
Exceptions
- ArgumentNullException
Thrown when edgeAdjacency is null.
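A small sketch of building the adjacency array; NumNeighbors = 4 is an assumption matching the MeshCNN paper, where each edge borders two triangles and therefore has four neighboring edges.
int numEdges = 9;
int[,] edgeAdjacency = new int[numEdges, 4];
// Row i lists the indices of the edges that share a triangle with edge i.
edgeAdjacency[0, 0] = 1; // neighbors from the first incident triangle
edgeAdjacency[0, 1] = 2;
edgeAdjacency[0, 2] = 3; // neighbors from the second incident triangle
edgeAdjacency[0, 3] = 4;
// ... fill the remaining rows from your mesh data structure ...
network.SetEdgeAdjacency(edgeAdjacency);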
Train(Tensor<T>, Tensor<T>)
Trains the network on a single batch.
public override void Train(Tensor<T> input, Tensor<T> expectedOutput)
Parameters
input (Tensor<T>): Edge features tensor.
expectedOutput (Tensor<T>): Ground truth labels.
Remarks
This method performs one training step on the neural network using the provided input and expected output. It updates the network's parameters to reduce the error between the network's prediction and the expected output.
For Beginners: This is how your neural network learns. You provide:
- An input (what the network should process)
- The expected output (what the correct answer should be)
The network then:
- Makes a prediction based on the input
- Compares its prediction to the expected output
- Calculates how wrong it was (the loss)
- Adjusts its internal values to do better next time
After training, you can get the loss value using the GetLastLoss() method to see how well the network is learning.
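A sketch of a simple training loop; trainingSet is a hypothetical collection of feature/adjacency/label triples, and GetLastLoss() is the method mentioned above.
foreach (var (features, adjacency, labels) in trainingSet)
{
    network.SetEdgeAdjacency(adjacency); // connectivity for this mesh
    network.Train(features, labels);     // one forward/backward/update step
    Console.WriteLine($"loss = {network.GetLastLoss()}");
}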
UpdateParameters(Vector<T>)
Updates network parameters using a flat parameter vector.
public override void UpdateParameters(Vector<T> parameters)
Parameters
parameters (Vector<T>): Vector containing all parameters.
Remarks
For Beginners: During training, a neural network's internal values (parameters) get adjusted to improve its performance. This method allows you to update all those values at once by providing a complete set of new parameters.
This is typically used by optimization algorithms that calculate better parameter values based on training data.