Interface IKeyDetector<T>
Namespace: AiDotNet.Interfaces
Assembly: AiDotNet.dll
Interface for musical key detection models that identify the key and mode of music.
public interface IKeyDetector<T>
Type Parameters
T: The numeric type used for calculations.
Remarks
Key detection identifies the musical key (e.g., C major, A minor) of a piece of music. The key defines the central note (tonic) and scale (major/minor) that the music is based on.
For Beginners: The musical key is like the "home base" of a song.
What is a key?
- Every song has a central note that feels like "home"
- The key tells you which note that is and whether it's major (happy) or minor (sad)
- "C major" means C is home and it sounds happy
- "A minor" means A is home and it sounds sad/dark
How key detection works:
- Audio is analyzed to find which notes are used most
- This is compared to key profiles (templates of note usage)
- The best-matching key is selected (a minimal sketch of this matching step appears after these notes)
Why it matters:
- DJ mixing (match keys for smooth transitions)
- Music recommendation (similar keys = similar feel)
- Music production (know what key to write melodies in)
- Transposition (shifting a song to a different key)
Related concepts:
- Relative keys: Am is the relative minor of C major (same notes)
- Parallel keys: C major and C minor (same root, different mode)
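The profile-matching step described above can be sketched in a few lines. This is a minimal illustration using the classic Krumhansl-Kessler key profiles; the actual model behind an IKeyDetector<T> implementation may use a different algorithm entirely.

using System;
using System.Linq;

static class KeyProfileSketch
{
    // Krumhansl-Kessler profiles: how strongly each pitch class "belongs" in a key.
    static readonly double[] Major = { 6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88 };
    static readonly double[] Minor = { 6.33, 2.68, 3.52, 5.38, 2.60, 3.53, 2.54, 4.75, 3.98, 2.69, 3.34, 3.17 };
    static readonly string[] Notes = { "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B" };

    // Correlate a 12-bin pitch-class histogram against all 24 rotated profiles
    // and return the best-matching key.
    public static string BestKey(double[] histogram)
    {
        string best = "C major";
        double bestScore = double.NegativeInfinity;
        for (int tonic = 0; tonic < 12; tonic++)
        {
            double maj = Correlate(histogram, Major, tonic);
            double min = Correlate(histogram, Minor, tonic);
            if (maj > bestScore) { bestScore = maj; best = Notes[tonic] + " major"; }
            if (min > bestScore) { bestScore = min; best = Notes[tonic] + " minor"; }
        }
        return best;
    }

    // Pearson correlation, with the profile rotated so 'tonic' is the home note.
    static double Correlate(double[] hist, double[] profile, int tonic)
    {
        double meanH = hist.Average(), meanP = profile.Average();
        double num = 0, varH = 0, varP = 0;
        for (int i = 0; i < 12; i++)
        {
            double h = hist[i] - meanH;
            double p = profile[(i - tonic + 12) % 12] - meanP;
            num += h * p; varH += h * h; varP += p * p;
        }
        return num / Math.Sqrt(varH * varP);
    }
}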
Properties
SampleRate
Gets the expected sample rate for input audio.
int SampleRate { get; }
Property Value
int
SupportedKeys
Gets the list of keys this model can detect.
IReadOnlyList<string> SupportedKeys { get; }
Property Value
IReadOnlyList<string>
Remarks
Typically 24 keys: 12 major + 12 minor.
Methods
Detect(Tensor<T>)
Detects the musical key of audio.
KeyDetectionResult<T> Detect(Tensor<T> audio)
Parameters
audio (Tensor<T>): Audio waveform tensor of shape [samples] or [channels, samples].
Returns
- KeyDetectionResult<T>
Key detection result.
Remarks
For Beginners: This is the main method for finding the key.
- Pass in the audio of a song
- Get back the key (e.g., "C major" or "A minor")
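A hedged usage sketch, assuming only the members shown on this page; KeyDetectionResult<T>'s own members are not documented here, so the result is printed as a whole.

using System;
// plus the AiDotNet usings that provide IKeyDetector<T>, Tensor<T>, and KeyDetectionResult<T>

static class DetectExample
{
    // 'audio' is assumed to already be a waveform tensor at detector.SampleRate.
    public static void PrintKey(IKeyDetector<float> detector, Tensor<float> audio)
    {
        KeyDetectionResult<float> result = detector.Detect(audio);
        Console.WriteLine(result); // e.g., something like "C major"
    }
}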
DetectAsync(Tensor<T>, CancellationToken)
Detects the musical key asynchronously.
Task<KeyDetectionResult<T>> DetectAsync(Tensor<T> audio, CancellationToken cancellationToken = default)
Parameters
audio (Tensor<T>): Audio waveform tensor.
cancellationToken (CancellationToken): Cancellation token for the async operation.
Returns
- Task<KeyDetectionResult<T>>
Key detection result.
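A sketch of cancelling a long-running detection via the cancellation token:

using System;
using System.Threading;
using System.Threading.Tasks;

static class DetectAsyncExample
{
    // Cancel automatically if detection takes longer than 30 seconds.
    public static async Task<KeyDetectionResult<float>> DetectWithTimeoutAsync(
        IKeyDetector<float> detector, Tensor<float> audio)
    {
        using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(30));
        return await detector.DetectAsync(audio, cts.Token);
    }
}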
GetCamelotNotation(string)
Gets the Camelot wheel notation for a key.
string GetCamelotNotation(string key)
Parameters
key (string): The key to convert (e.g., "C major").
Returns
- string
Camelot notation (e.g., "8B").
Remarks
For Beginners: The Camelot wheel is a DJ tool that shows which keys mix well together. Adjacent numbers on the wheel = compatible keys.
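In standard Camelot notation, B marks major keys and A marks minor keys, and relative keys share a number. A usage sketch (the exact strings returned are up to the implementation):

static class CamelotExample
{
    public static void Show(IKeyDetector<float> detector)
    {
        // Relative keys share a wheel number and differ only in letter.
        System.Console.WriteLine(detector.GetCamelotNotation("C major")); // expected "8B"
        System.Console.WriteLine(detector.GetCamelotNotation("A minor")); // expected "8A"
    }
}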
GetCompatibleKeys(string)
Finds compatible keys for mixing.
IReadOnlyList<string> GetCompatibleKeys(string key)
Parameters
key (string): The reference key.
Returns
- IReadOnlyList<string>
List of compatible keys for harmonic mixing.
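A usage sketch. For a key such as C major (8B), harmonic-mixing convention suggests neighbors like 7B (F major), 9B (G major), and 8A (A minor), though the exact list is implementation-defined:

static class CompatibleKeysExample
{
    public static void Show(IKeyDetector<float> detector)
    {
        // Typical harmonic-mixing neighbors of "C major" (8B): 7B, 9B, and 8A.
        foreach (string key in detector.GetCompatibleKeys("C major"))
            System.Console.WriteLine(key);
    }
}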
GetKeyProbabilities(Tensor<T>)
Gets key probabilities for all possible keys.
IReadOnlyDictionary<string, T> GetKeyProbabilities(Tensor<T> audio)
Parameters
audio (Tensor<T>): Audio waveform tensor.
Returns
- IReadOnlyDictionary<string, T>
Dictionary mapping key names to probability scores.
Remarks
For Beginners: Some songs are ambiguous - they might sound like they're in C major or A minor (these share the same notes). This method shows the probability for each possible key.
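A sketch that surfaces this ambiguity by listing the three most likely keys, using float for T so the scores can be sorted (and assuming scores in the 0-1 range for the percentage formatting):

using System;
using System.Linq;

static class ProbabilitiesExample
{
    public static void PrintTopKeys(IKeyDetector<float> detector, Tensor<float> audio)
    {
        var probabilities = detector.GetKeyProbabilities(audio);
        // An ambiguous song shows two close, high scores (e.g., C major vs. A minor).
        foreach (var pair in probabilities.OrderByDescending(p => p.Value).Take(3))
            Console.WriteLine($"{pair.Key}: {pair.Value:P1}");
    }
}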
GetRelativeKey(string)
Gets the relative major/minor key.
string GetRelativeKey(string key)
Parameters
key (string): The key to find the relative key for.
Returns
- string
The relative key.
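Relative keys share a key signature; the relative minor's tonic lies three semitones below the major tonic. A usage sketch (the returned strings are assumed to match the documented key-name format):

static class RelativeKeyExample
{
    public static void Show(IKeyDetector<float> detector)
    {
        // A minor uses the same notes as C major, so each is the other's relative.
        System.Console.WriteLine(detector.GetRelativeKey("C major")); // expected "A minor"
        System.Console.WriteLine(detector.GetRelativeKey("A minor")); // expected "C major"
    }
}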
TrackKeyChanges(Tensor<T>, double)
Tracks key changes over time within a piece.
KeyTrackingResult<T> TrackKeyChanges(Tensor<T> audio, double segmentDuration = 5)
Parameters
audio (Tensor<T>): Audio waveform tensor.
segmentDuration (double): Duration of each analysis segment, in seconds.
Returns
- KeyTrackingResult<T>
Key tracking result containing the detected key for each segment over time.
Remarks
For Beginners: Some pieces change key partway through (this is called modulation). This method tracks those changes over time.
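A usage sketch, assuming only the signature above; KeyTrackingResult<T>'s members are not documented on this page, so the result is printed as a whole:

static class KeyTrackingExample
{
    public static void Track(IKeyDetector<float> detector, Tensor<float> audio)
    {
        // Analyze in 10-second segments instead of the default 5 seconds.
        KeyTrackingResult<float> tracking = detector.TrackKeyChanges(audio, segmentDuration: 10.0);
        System.Console.WriteLine(tracking);
    }
}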