This takes me back. 1990, building Boltzmann machines and perceptrons from arrays of void pointers to "neurons" in plain C. What did we use "AI" for back then? To guess the next note in a MIDI melody, and to recognise the shape of a scored note, minim, crotchet, quaver on a 5 x 9 dot grid. 85% accuracy was "good enough" then.
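For anyone curious what that looked like in practice, here is a rough modern C sketch of a single-layer perceptron over a 5 x 9 dot grid. To be clear, this is a from-memory reconstruction of the general idea, not the original code: the three note classes, the learning rate, the toy training data, and the plain struct (in place of the void-pointer "neuron" arrays) are all illustrative assumptions.

    /* Rough sketch of a single-layer perceptron over a 5 x 9 dot grid,
     * in the spirit of the 1990-era experiment described above.
     * The three classes (minim, crotchet, quaver), learning rate and
     * training loop are illustrative assumptions, not the original code. */
    #include <stdio.h>

    #define GRID    (5 * 9)   /* 45 input dots */
    #define CLASSES 3         /* minim, crotchet, quaver */

    typedef struct {
        double w[CLASSES][GRID];  /* one weight vector per class */
        double b[CLASSES];        /* per-class bias */
    } Perceptron;

    /* Score each class and return the index of the best one. */
    static int predict(const Perceptron *p, const double in[GRID])
    {
        int best = 0;
        double best_score = 0.0;
        for (int c = 0; c < CLASSES; c++) {
            double s = p->b[c];
            for (int i = 0; i < GRID; i++)
                s += p->w[c][i] * in[i];
            if (c == 0 || s > best_score) {
                best = c;
                best_score = s;
            }
        }
        return best;
    }

    /* Classic perceptron update: nudge the wrongly-predicted class down
     * and the true class up whenever the guess is wrong. */
    static void train_one(Perceptron *p, const double in[GRID],
                          int label, double lr)
    {
        int guess = predict(p, in);
        if (guess == label)
            return;
        for (int i = 0; i < GRID; i++) {
            p->w[label][i] += lr * in[i];
            p->w[guess][i] -= lr * in[i];
        }
        p->b[label] += lr;
        p->b[guess] -= lr;
    }

    int main(void)
    {
        Perceptron p = {0};
        /* Toy data: a dot pattern would normally come from the scanned
         * grid; a single made-up pattern stands in for a "crotchet". */
        double example[GRID] = {0};
        example[22] = 1.0;
        example[27] = 1.0;
        example[32] = 1.0;

        for (int epoch = 0; epoch < 10; epoch++)
            train_one(&p, example, 1, 0.1);

        printf("predicted class: %d\n", predict(&p, example));
        return 0;
    }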
> recognise the shape of a scored note, minim, crotchet, quaver on a 5 x 9 dot grid
Reading music off a lined page sounds like a fun project, particularly doing it from scratch, like 3Blue1Brown's digit-recognition neural-network example[1].
Mix it with something like Chuck[2] and you could write a completely client-side application with today's tech.
Thanks for these links. You're right, I think computer-vision "sight reading" is now a fairly done deal. Very impressive progress in the past 30 years.
Did the output sound musical?
For small values of "music"? Really, no. But tbh, neither have the more advanced "AI" composition experiments I've encountered over the years: Markov models, linear predictive coding, genetic/evolutionary algorithms, rule-based systems, and now modern diffusion models and transformers... they all lack the "spirit of jazz" [0]
[0] https://i.pinimg.com/originals/e4/84/79/e484792971cc77ddff8f...
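To make the Markov-model end of that list concrete, a first-order "next note" model really is about this small: count pitch-to-pitch transitions in some melodies, then sample the next note in proportion to those counts. The sketch below is only an illustration in C; the MIDI pitch range, the toy training fragment, and the sampling scheme are my assumptions, not anything from those experiments.

    /* Minimal first-order Markov "next note" sketch, to illustrate the
     * kind of model mentioned above. The pitch range, training melody
     * and sampling scheme are assumptions made for the example. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define PITCHES 128  /* MIDI note numbers 0..127 */

    static unsigned counts[PITCHES][PITCHES];  /* transition counts */

    /* Record every adjacent pair of notes in a training melody. */
    static void train(const int *melody, int len)
    {
        for (int i = 0; i + 1 < len; i++)
            counts[melody[i]][melody[i + 1]]++;
    }

    /* Sample the next note in proportion to observed transition counts;
     * fall back to repeating the current note if it was never seen. */
    static int next_note(int current)
    {
        unsigned total = 0;
        for (int n = 0; n < PITCHES; n++)
            total += counts[current][n];
        if (total == 0)
            return current;

        unsigned r = (unsigned)(rand() % total);
        for (int n = 0; n < PITCHES; n++) {
            if (r < counts[current][n])
                return n;
            r -= counts[current][n];
        }
        return current;  /* not reached: r is always below the total */
    }

    int main(void)
    {
        /* A made-up fragment (MIDI pitches) standing in for real data. */
        int melody[] = {60, 62, 64, 65, 67, 65, 64, 62, 60, 64, 67, 72};
        srand((unsigned)time(NULL));
        train(melody, (int)(sizeof melody / sizeof melody[0]));

        int note = 60;
        printf("generated: ");
        for (int i = 0; i < 16; i++) {
            printf("%d ", note);
            note = next_note(note);
        }
        printf("\n");
        return 0;
    }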