Extended Chord Recognition

Recognize played chords, strumming directions and key from audio

1. Endpoint request format

ℹ️ NOTE: this endpoint uses the same spec as the chord-recognition endpoint.
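As a rough illustration, the snippet below uploads an audio file with Python's requests library. The base URL, file field name, and authentication header are placeholders, not the actual spec; see the chord-recognition endpoint documentation for the exact request format.

import requests

# Placeholder values; substitute the real host, path and credentials
# from the chord-recognition endpoint specification.
API_URL = "https://api.example.com/extended-chord-recognition"
API_KEY = "YOUR_API_KEY"

with open("song.wav", "rb") as audio_file:
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"file": audio_file},
    )

response.raise_for_status()
result = response.json()  # structured as described under "Result format" below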

2. Result format

The extended chord recognition detects the played chords, the strumming directions, and the key of the piece within time intervals of the audio file. The result contains the musical key and two lists: one for the chords and one for the strumming directions.

The possible values for "key" match the following regex: ^[A-G](?:#{1}|b{1})?\s+(?:major|minor)$
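For example, a returned key can be checked against this regex (a minimal Python sketch; apart from "A minor" from the example below, the key strings are illustrative values modeled on the pattern, not actual responses):

import re

KEY_PATTERN = re.compile(r"^[A-G](?:#{1}|b{1})?\s+(?:major|minor)$")

assert KEY_PATTERN.match("A minor")       # key from the example below
assert KEY_PATTERN.match("F# major")      # sharp keys are allowed
assert KEY_PATTERN.match("Bb minor")      # flat keys are allowed
assert not KEY_PATTERN.match("H major")   # "H" is outside the A-G range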

As with the chord recognition endpoint, the list of chords is stored in chords and each entry has the format [<start_time_in_seconds>, <end_time_in_seconds>, <chord_name>] with the types [float, float, string]. The <chord_name> takes its values from the specified vocabulary.

The list of strumming directions is stored in strums and each entry has the format [<time_stamp_in_seconds>, <strum_direction>] with the types [float, string]. The <strum_direction> is one of two values: "U" for up-strokes and "D" for down-strokes.
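Put concretely, both lists can be unpacked entry by entry (a minimal Python sketch using two entries taken from the example below):

# Entries taken from the example response below.
chords = [[0.0, 1.0185185185185184, "N"],
          [1.0185185185185184, 3.2407407407407405, "E:maj"]]
strums = [[0.6461865848302841, "D"],
          [1.0264313435554504, "D"]]

for start, end, chord in chords:          # [float, float, string]
    print(f"{start:.3f}s - {end:.3f}s: {chord}")

for time_stamp, direction in strums:      # [float, string]
    stroke = "up" if direction == "U" else "down"
    print(f"{time_stamp:.3f}s: {stroke}-stroke")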

Example:

{
  "key": "A minor",
  "strums": [
    [
      0.6461865848302841,
      "D"
    ],
    ...
    [
      1.0264313435554504,
      "D"
    ]
  ],
  "chords": [
    [
      0.0,
      1.0185185185185184,
      "N"
    ],
    ...
    [
      1.0185185185185184,
      3.2407407407407405,
      "E:maj"
    ]
  ]
}
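As a usage illustration, the two lists can be combined to look up which chord is sounding at each strum. This is a sketch against the excerpt above; treating "N" as a "no chord" fallback is an assumption based on its use in the example, not something the spec states here.

def chord_at(chords, time_stamp):
    # Return the chord whose [start, end) interval contains the time stamp.
    for start, end, chord in chords:
        if start <= time_stamp < end:
            return chord
    return "N"  # assumption: "N" marks "no chord", as seen in the example

chords = [[0.0, 1.0185185185185184, "N"],
          [1.0185185185185184, 3.2407407407407405, "E:maj"]]
strums = [[0.6461865848302841, "D"],
          [1.0264313435554504, "D"]]

for time_stamp, direction in strums:
    print(f"{time_stamp:.2f}s  {direction}  {chord_at(chords, time_stamp)}")
# 0.65s  D  N
# 1.03s  D  E:maj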