Class IRGraph
- Namespace
- AiDotNet.JitCompiler.IR
- Assembly
- AiDotNet.dll
Represents a computation graph in intermediate representation form.
public class IRGraph
- Inheritance
- object → IRGraph
Remarks
An IRGraph is a structured representation of a sequence of tensor operations that have been recorded during autodiff execution. It serves as an intermediate format between the high-level ComputationNode graph and the low-level compiled code.
For Beginners: Think of an IRGraph as a recipe for computations.
Just like a recipe lists ingredients and steps:
- InputIds are the ingredients (input tensors)
- Operations are the cooking steps (add, multiply, etc.)
- OutputIds are the final dishes (output tensors)
- TensorShapes tells us the "size" of each intermediate result
The IR graph makes it easier to optimize the computation (like combining steps) and then compile it to fast executable code.
Example: if your model computes result = ReLU(MatMul(input, weights) + bias), the IR graph would have three operations: MatMul, Add, ReLU. Each operation knows its inputs and produces an output.
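The three-operation example above can be sketched with plain collections and a hypothetical stand-in for IROp (the real type's members may differ; this is a conceptual illustration, not the actual AiDotNet API):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical stand-in for AiDotNet's IROp; the real type's members may differ.
record Op(string Type, int[] Inputs, int Output);

class Program
{
    static void Main()
    {
        // Tensor IDs: 0 = input, 1 = weights, 2 = bias; 3..5 are intermediates/output.
        var inputIds = new List<int> { 0, 1, 2 };
        var operations = new List<Op>
        {
            new("MatMul", new[] { 0, 1 }, 3), // t3 = input x weights
            new("Add",    new[] { 3, 2 }, 4), // t4 = t3 + bias
            new("ReLU",   new[] { 4 },    5), // t5 = max(t4, 0)
        };
        var outputIds = new List<int> { 5 };

        Console.WriteLine(operations.Count); // one entry per computation step
    }
}
```

Note how each operation references earlier tensor IDs, so the list doubles as a record of the graph's connectivity.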
Properties
InputIds
Gets or sets the IDs of input tensors to this graph.
public List<int> InputIds { get; set; }
Property Value
- List<int>
Remarks
Input tensors are provided by the caller and are not computed within the graph. They serve as the starting point for all computations.
For Beginners: These are the "ingredients" that you provide to start the computation.
For a neural network, inputs might be:
- The input data (like an image)
- Model parameters (weights and biases)
The graph will process these inputs through all its operations to produce outputs.
Metadata
Gets or sets optional metadata about the graph.
public Dictionary<string, object> Metadata { get; set; }
Property Value
- Dictionary<string, object>
Operations
Gets or sets the list of operations in this graph, in execution order.
public List<IROp> Operations { get; set; }
Property Value
- List<IROp>
Remarks
Operations are stored in topological order, meaning each operation appears after all operations that produce its inputs. This ensures correct execution order.
For Beginners: This is the ordered list of computation steps.
The order matters! You can't add two numbers before you've computed them. Each operation in the list uses results from earlier operations.
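The ordering rule can be checked mechanically: every operation's inputs must be either graph inputs or outputs of an earlier operation. A self-contained sketch using tuples in place of the real IROp type (an assumption, not the actual API):

```csharp
using System;
using System.Collections.Generic;

class Program
{
    // Checks that every operation's inputs are available before it runs:
    // each input must be a graph input or the output of an earlier operation.
    static bool IsTopologicallyOrdered(
        List<(int[] Inputs, int Output)> ops, HashSet<int> graphInputs)
    {
        var available = new HashSet<int>(graphInputs);
        foreach (var op in ops)
        {
            foreach (var id in op.Inputs)
                if (!available.Contains(id)) return false;
            available.Add(op.Output);
        }
        return true;
    }

    static void Main()
    {
        var inputs = new HashSet<int> { 0, 1, 2 };
        var good = new List<(int[], int)>
        {
            (new[] { 0, 1 }, 3), // MatMul
            (new[] { 3, 2 }, 4), // Add uses MatMul's result: OK
        };
        var bad = new List<(int[], int)>
        {
            (new[] { 3, 2 }, 4), // uses tensor 3 before it is produced
            (new[] { 0, 1 }, 3),
        };
        Console.WriteLine(IsTopologicallyOrdered(good, inputs)); // True
        Console.WriteLine(IsTopologicallyOrdered(bad, inputs));  // False
    }
}
```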
OutputIds
Gets or sets the IDs of output tensors produced by this graph.
public List<int> OutputIds { get; set; }
Property Value
- List<int>
Remarks
Output tensors are the final results of the graph computation and are returned to the caller.
For Beginners: These are the "final dishes" - the results you care about.
For a neural network, outputs might be:
- Predictions (class probabilities)
- Loss value
- Intermediate features (for visualization)
Everything else in the graph is just intermediate calculations to get to these outputs.
TensorShapes
Gets or sets the mapping from tensor IDs to their shapes.
public Dictionary<int, int[]> TensorShapes { get; set; }
Property Value
- Dictionary<int, int[]>
Remarks
Every tensor in the graph (inputs, outputs, and intermediates) has a unique ID and a known shape (represented as int[] matching Tensor<T>.Shape). This dictionary provides that mapping.
For Beginners: This is like a table that tells us the size of each value.
For example:
- Tensor 0 might be [32, 784] (a batch of 32 images, each with 784 pixels)
- Tensor 1 might be [784, 128] (weights connecting 784 inputs to 128 outputs)
- Tensor 2 might be [32, 128] (the result of multiplying tensor 0 and 1)
Knowing shapes helps us:
- Allocate the right amount of memory
- Check that operations are valid (can't multiply incompatible shapes)
- Optimize operations for specific sizes
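The shape-checking benefit can be shown concretely. The sketch below uses a plain Dictionary<int, int[]> mirroring the examples above to verify that a matrix multiply is valid and to derive its result shape (a standalone illustration, not the library's own shape-inference code):

```csharp
using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        // Shapes keyed by tensor ID, mirroring the examples above.
        var shapes = new Dictionary<int, int[]>
        {
            [0] = new[] { 32, 784 },  // batch of 32 images, 784 pixels each
            [1] = new[] { 784, 128 }, // weights: 784 inputs to 128 outputs
        };

        // A MatMul of tensor 0 by tensor 1 is valid only if the inner
        // dimensions agree; the result shape is [rows of 0, cols of 1].
        int[] a = shapes[0], b = shapes[1];
        if (a[1] != b[0])
            throw new InvalidOperationException("Incompatible shapes for MatMul");

        int[] result = { a[0], b[1] };
        Console.WriteLine($"[{result[0]}, {result[1]}]"); // [32, 128]
    }
}
```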
Methods
ComputeStructureHash()
Computes a hash code for this graph structure (ignoring tensor values).
public int ComputeStructureHash()
Returns
- int
Remarks
The hash is based on the graph structure: operation types, shapes, and connectivity. This is used for caching compiled graphs - graphs with the same structure can reuse the same compiled code even if the actual tensor values are different.
For Beginners: This creates a "fingerprint" for the graph structure.
Two graphs with the same fingerprint have the same structure (same operations, same shapes) even if the actual numbers in the tensors are different.
This lets us reuse compiled code:
- First time: Compile the graph (slow)
- Next time with same structure: Reuse compiled code (fast!)
It's like having a pre-cooked recipe that you can use with different ingredients.
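A minimal compile cache keyed by a structure hash shows the reuse pattern this method enables. The Compile step here is a placeholder string, and the cache shape is an assumption for illustration, not AiDotNet's actual caching implementation:

```csharp
using System;
using System.Collections.Generic;

class Program
{
    // Hypothetical cache: structure hash -> compiled artifact (a string here).
    static readonly Dictionary<int, string> Cache = new();

    static string GetOrCompile(int structureHash)
    {
        if (Cache.TryGetValue(structureHash, out var compiled))
            return compiled;                    // fast path: reuse compiled code
        compiled = $"compiled-{structureHash}"; // slow path: compile once
        Cache[structureHash] = compiled;
        return compiled;
    }

    static void Main()
    {
        var first  = GetOrCompile(42);  // compiles
        var second = GetOrCompile(42);  // same structure: cache hit
        Console.WriteLine(ReferenceEquals(first, second)); // True
    }
}
```

In the real workflow, the key would come from calling ComputeStructureHash() on the graph, so graphs with identical structure but different tensor values share one compiled artifact.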
ToString()
Gets a string representation of the graph for debugging and visualization.
public override string ToString()
Returns
- string
Validate()
Validates the graph structure for correctness.
public bool Validate()
Returns
- bool
True if the graph is valid, false otherwise.
Remarks
Validation checks include:
- All input tensor IDs are defined in TensorShapes
- All operation inputs reference valid tensor IDs
- No cycles in the graph (it's a DAG)
- All output IDs are produced by operations or are inputs
For Beginners: This checks that the "recipe" makes sense.
It verifies:
- You're not using an ingredient that doesn't exist
- Steps are in the right order (don't use results before computing them)
- The final outputs are actually produced by the recipe
If validation fails, something is wrong with how the graph was constructed.
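The checks listed above can be sketched with plain collections standing in for the real IRGraph fields (the tuple shape for operations is an assumption; the actual Validate() implementation may differ):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    // A sketch of the kinds of checks Validate() performs, using plain
    // collections rather than the real IRGraph/IROp types.
    static bool Validate(
        List<int> inputIds,
        List<(int[] Inputs, int Output)> ops,
        List<int> outputIds,
        Dictionary<int, int[]> shapes)
    {
        // Every input ID must have a known shape.
        if (inputIds.Any(id => !shapes.ContainsKey(id))) return false;

        // Each op may only read tensors defined earlier (inputs or prior outputs);
        // walking the list in order also rules out cycles.
        var defined = new HashSet<int>(inputIds);
        foreach (var op in ops)
        {
            if (op.Inputs.Any(id => !defined.Contains(id))) return false;
            defined.Add(op.Output);
        }

        // Every declared output must actually be produced (or be an input).
        return outputIds.All(defined.Contains);
    }

    static void Main()
    {
        var shapes = new Dictionary<int, int[]> { [0] = new[] { 2, 2 } };
        var ops = new List<(int[], int)> { (new[] { 0 }, 1) };
        Console.WriteLine(Validate(new() { 0 }, ops, new() { 1 }, shapes)); // True
        Console.WriteLine(Validate(new() { 0 }, ops, new() { 9 }, shapes)); // False: output 9 never produced
    }
}
```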