Class QuantumNeuralNetwork<T>
- Namespace: AiDotNet.NeuralNetworks
- Assembly: AiDotNet.dll
Represents a Quantum Neural Network, which combines quantum computing principles with neural network architecture.
public class QuantumNeuralNetwork<T> : NeuralNetworkBase<T>, INeuralNetworkModel<T>, INeuralNetwork<T>, IFullModel<T, Tensor<T>, Tensor<T>>, IModel<Tensor<T>, Tensor<T>, ModelMetadata<T>>, IModelSerializer, ICheckpointableModel, IParameterizable<T, Tensor<T>, Tensor<T>>, IFeatureAware, IFeatureImportance<T>, ICloneable<IFullModel<T, Tensor<T>, Tensor<T>>>, IGradientComputable<T, Tensor<T>, Tensor<T>>, IJitCompilable<T>, IInterpretableModel<T>, IInputGradientComputable<T>, IDisposable
Type Parameters
- T: The numeric type used for calculations, typically float or double.
- Inheritance
- object → NeuralNetworkBase<T> → QuantumNeuralNetwork<T>
Remarks
A Quantum Neural Network (QNN) is a neural network architecture that leverages quantum computing principles to potentially solve certain problems more efficiently than classical neural networks. It uses quantum bits (qubits) instead of classical bits, allowing it to process information in ways not possible with traditional neural networks.
For Beginners: A Quantum Neural Network combines ideas from quantum computing with neural networks.
Think of it like upgrading from a regular calculator to a special calculator with new abilities:
- Regular neural networks use normal bits (0 or 1)
- Quantum neural networks use quantum bits or "qubits" that can be 0, 1, or both at the same time
- This "both at the same time" property (called superposition) gives quantum networks special abilities
- These networks might solve certain problems much faster than regular neural networks
For example, a quantum neural network might find patterns in complex data or optimize solutions in ways that would be extremely difficult for traditional neural networks.
While the math behind quantum computing is complex, you can think of a quantum neural network as having the potential to explore many possible solutions simultaneously rather than one at a time.
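The contrast between classical bits and qubits can be made concrete with a small, purely illustrative C# sketch. It is not part of the AiDotNet API; it only shows the common way a simulated qubit is represented by two amplitudes whose squared magnitudes give the probabilities of measuring 0 or 1.

using System;

// Illustrative only; this snippet is not part of the AiDotNet API.
// A classical bit holds exactly one value at a time: 0 or 1.
// A simulated qubit is described by two amplitudes (alpha, beta); their squared
// magnitudes are the probabilities of measuring 0 or 1, and both can be non-zero
// at the same time (superposition).
double alpha = 1.0 / Math.Sqrt(2);
double beta = 1.0 / Math.Sqrt(2);
Console.WriteLine($"P(measure 0) = {alpha * alpha:0.##}, P(measure 1) = {beta * beta:0.##}");
// Prints: P(measure 0) = 0.5, P(measure 1) = 0.5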
Constructors
QuantumNeuralNetwork(NeuralNetworkArchitecture<T>, int, INormalizer<T, Tensor<T>, Tensor<T>>?, ILossFunction<T>?)
Initializes a new instance of the QuantumNeuralNetwork<T> class with the specified architecture and number of qubits.
public QuantumNeuralNetwork(NeuralNetworkArchitecture<T> architecture, int numQubits, INormalizer<T, Tensor<T>, Tensor<T>>? normalizer = null, ILossFunction<T>? lossFunction = null)
Parameters
- architecture (NeuralNetworkArchitecture<T>): The neural network architecture to use for the QNN.
- numQubits (int): The number of qubits to use in the quantum neural network.
- normalizer (INormalizer<T, Tensor<T>, Tensor<T>>, optional): The normalizer applied to the network's data; defaults to null.
- lossFunction (ILossFunction<T>, optional): The loss function used during training; defaults to null.
Remarks
This constructor creates a new Quantum Neural Network with the specified architecture and number of qubits. It initializes the network layers based on the architecture, or creates default quantum network layers if no specific layers are provided.
For Beginners: This sets up the Quantum Neural Network with its basic components.
When creating a new QNN:
- architecture: Defines the overall structure of the neural network
- numQubits: Sets how many quantum bits the network will use
The constructor prepares the network by either:
- Using the specific layers provided in the architecture, or
- Creating default layers designed for quantum processing if none are specified
This is like setting up a specialized calculator before you start using it for calculations.
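A minimal construction sketch is shown below. Building the NeuralNetworkArchitecture<T> instance is covered by that type's own documentation, so it is taken as a parameter here rather than configured inline; the qubit count of 4 is an arbitrary illustrative choice.

using AiDotNet.NeuralNetworks;

// Sketch only: the architecture is assumed to be configured elsewhere.
static QuantumNeuralNetwork<double> CreateQnn(NeuralNetworkArchitecture<double> architecture)
{
    // 4 simulated qubits; the optional normalizer and loss function are left at their null defaults.
    return new QuantumNeuralNetwork<double>(architecture, numQubits: 4);
}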
Methods
CreateNewInstance()
Creates a new instance of the quantum neural network with the same configuration.
protected override IFullModel<T, Tensor<T>, Tensor<T>> CreateNewInstance()
Returns
- IFullModel<T, Tensor<T>, Tensor<T>>
A new instance of QuantumNeuralNetwork<T> with the same configuration as the current instance.
Remarks
This method creates a new quantum neural network that has the same configuration as the current instance. It's used for model persistence, cloning, and transferring the model's configuration to new instances. The new instance will have the same architecture, number of qubits, normalizer, and loss function as the original, but will not share parameter values unless they are explicitly copied after creation.
For Beginners: This method makes a fresh copy of the current model with the same settings.
It's like creating a blueprint copy of your quantum neural network that can be used to:
- Save your model's settings
- Create a new identical model
- Transfer your model's configuration to another system
This is useful when you want to:
- Create multiple similar quantum neural networks
- Save a model's configuration for later use
- Reset a model while keeping its quantum-specific settings
Note that while the settings are copied, the learned parameters are not automatically transferred, so the new instance will need training or parameter copying to match the performance of the original.
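Because CreateNewInstance() is protected, user code typically reaches it indirectly through the model's cloning surface. The sketch below assumes the generic ICloneable<IFullModel<T, Tensor<T>, Tensor<T>>> interface exposes a parameterless Clone() method; verify the exact member name against that interface's definition.

// Assumption: Clone() is the cloning member of the generic ICloneable interface.
// Using directives for the relevant AiDotNet namespaces are omitted.
static IFullModel<double, Tensor<double>, Tensor<double>> CopyConfiguration(QuantumNeuralNetwork<double> original)
{
    // The copy keeps the architecture, qubit count, normalizer, and loss function,
    // but not the learned parameter values.
    return original.Clone();
}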
DeserializeNetworkSpecificData(BinaryReader)
Deserializes quantum neural network-specific data from a binary reader.
protected override void DeserializeNetworkSpecificData(BinaryReader reader)
Parameters
- reader (BinaryReader): The BinaryReader to read the data from.
Remarks
This method reads the specific parameters and state of the quantum neural network from a binary stream.
For Beginners: This loads a saved quantum neural network state from a file. It rebuilds the network exactly as it was when you saved it, including all its learned information and quantum-specific settings.
GetModelMetadata()
Retrieves metadata about the quantum neural network model.
public override ModelMetadata<T> GetModelMetadata()
Returns
- ModelMetadata<T>
A ModelMetadata<T> object containing information about the network.
Remarks
This method collects and returns various pieces of information about the quantum neural network's structure and configuration.
For Beginners: This provides a summary of the quantum neural network's setup, including its structure, the number of qubits it uses, and other important details. It's like getting a blueprint of the network's current state.
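A short retrieval sketch follows; which properties the returned ModelMetadata<T> exposes (layer structure, qubit count, and so on) is defined by that type and is not assumed here.

// Sketch: obtain a summary of a constructed network's structure and configuration.
// Using directives for the relevant AiDotNet namespaces are omitted.
static ModelMetadata<double> Describe(QuantumNeuralNetwork<double> qnn)
{
    return qnn.GetModelMetadata();
}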
InitializeLayers()
Initializes the neural network layers based on the provided architecture or default configuration.
protected override void InitializeLayers()
Remarks
This method sets up the neural network layers for the Quantum Neural Network. If the architecture provides specific layers, those are used. Otherwise, a default configuration optimized for quantum processing is created based on the number of qubits specified during initialization.
For Beginners: This method sets up the building blocks of the neural network.
When initializing layers:
- If the user provided specific layers, those are used
- Otherwise, default layers suitable for quantum neural networks are created automatically
- The system checks that any custom layers will work properly with quantum computations
Layers are like the different processing stages in the neural network. For a quantum neural network, these layers are designed to work with quantum principles, allowing the network to take advantage of quantum effects like superposition and entanglement.
Predict(Tensor<T>)
Makes a prediction using the quantum neural network for the given input.
public override Tensor<T> Predict(Tensor<T> input)
Parameters
- input (Tensor<T>): The input tensor to make a prediction for.
Returns
- Tensor<T>
The predicted output tensor.
Remarks
This method performs a forward pass through the quantum neural network, applying quantum operations simulated on classical hardware.
For Beginners: This is where the quantum neural network processes input data and makes a prediction. It simulates quantum operations on a classical computer, giving an approximation of how a true quantum computer might behave.
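A minimal inference sketch, assuming an input tensor shaped to match the network's input layer has already been built (constructing a Tensor<T> is documented on that type).

// `input` is assumed to be a Tensor<double> whose shape matches the network's input layer.
// Using directives for the relevant AiDotNet namespaces are omitted.
static Tensor<double> RunInference(QuantumNeuralNetwork<double> qnn, Tensor<double> input)
{
    // Forward pass with the quantum operations simulated on classical hardware.
    return qnn.Predict(input);
}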
SerializeNetworkSpecificData(BinaryWriter)
Serializes quantum neural network-specific data to a binary writer.
protected override void SerializeNetworkSpecificData(BinaryWriter writer)
Parameters
- writer (BinaryWriter): The BinaryWriter to write the data to.
Remarks
This method writes the specific parameters and state of the quantum neural network to a binary stream.
For Beginners: This saves the current state of the quantum neural network to a file. It records all the important information about the network so you can reload it later exactly as it is now.
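Both serialization hooks are protected, so they are normally exercised through the network's public save/load path rather than called directly. The sketch below shows how a hypothetical derived network could extend the quantum-specific payload with one extra field; the derived class and its custom field are illustrative assumptions, not part of AiDotNet.

using System.IO;
using AiDotNet.NeuralNetworks;

// Hypothetical subclass, shown only to illustrate the two hooks working as a pair.
public class AnnotatedQuantumNetwork<T> : QuantumNeuralNetwork<T>
{
    private int _customSetting;

    public AnnotatedQuantumNetwork(NeuralNetworkArchitecture<T> architecture, int numQubits)
        : base(architecture, numQubits)
    {
    }

    protected override void SerializeNetworkSpecificData(BinaryWriter writer)
    {
        base.SerializeNetworkSpecificData(writer);   // write the QNN's own state first
        writer.Write(_customSetting);                // then append the illustrative extra field
    }

    protected override void DeserializeNetworkSpecificData(BinaryReader reader)
    {
        base.DeserializeNetworkSpecificData(reader); // restore the QNN's own state first
        _customSetting = reader.ReadInt32();         // then read back the extra field
    }
}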
Train(Tensor<T>, Tensor<T>)
Trains the quantum neural network using the provided input and expected output.
public override void Train(Tensor<T> input, Tensor<T> expectedOutput)
Parameters
- input (Tensor<T>): The input tensor for training.
- expectedOutput (Tensor<T>): The expected output tensor for the given input.
Remarks
This method performs one training iteration, including forward pass, loss calculation, backward pass, and parameter update using a quantum-inspired optimization technique.
For Beginners: This is how the quantum neural network learns. It processes an input, compares its prediction to the expected output, and adjusts its internal settings to improve future predictions. The adjustments are made using techniques inspired by quantum computing.
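A minimal training-loop sketch, assuming matching input and expected-output tensors have already been prepared; the epoch count is an arbitrary illustrative choice.

// `input` and `expectedOutput` are assumed to be pre-built, matching Tensor<double> pairs.
// Using directives for the relevant AiDotNet namespaces are omitted.
static void TrainForEpochs(QuantumNeuralNetwork<double> qnn, Tensor<double> input, Tensor<double> expectedOutput)
{
    for (int epoch = 0; epoch < 100; epoch++)   // 100 epochs chosen arbitrarily for the sketch
    {
        // One iteration: forward pass, loss calculation, backward pass, parameter update.
        qnn.Train(input, expectedOutput);
    }
}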
UpdateParameters(Vector<T>)
Updates the parameters of the quantum neural network layers.
public override void UpdateParameters(Vector<T> parameters)
Parameters
- parameters (Vector<T>): The vector of parameter updates to apply.
Remarks
This method updates the parameters of each layer in the quantum neural network based on the provided parameter updates. The parameters vector is divided into segments corresponding to each layer's parameter count, and each segment is applied to its respective layer.
For Beginners: This method updates how the quantum neural network makes decisions based on training.
During training:
- The network learns by adjusting its internal parameters
- This method applies those adjustments
- It takes a vector of parameter updates and distributes them to the correct layers
- Each layer gets the portion of updates meant specifically for it
For a quantum neural network, these parameters might control operations like quantum rotations, entanglement settings, or other quantum-inspired transformations.
This process allows the quantum neural network to improve its performance over time by adjusting how it processes information.
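A brief application sketch follows; how the update vector is produced (for example, by a custom quantum-inspired optimizer) is an assumption for illustration, while the layer-by-layer layout of the vector follows the remarks above.

// `updates` is assumed to be an AiDotNet Vector<double> whose length equals the network's
// total parameter count, laid out layer by layer in order.
// Using directives for the relevant AiDotNet namespaces are omitted.
static void ApplyUpdates(QuantumNeuralNetwork<double> qnn, Vector<double> updates)
{
    // Each layer receives the segment of `updates` that corresponds to its own parameters.
    qnn.UpdateParameters(updates);
}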