I gave my game engine the ability to host software synthesizers, and those synthesizers can also receive MIDI 2.0 commands. My engine already has a MIDI 1.0 sequencer, but it needs some workarounds.
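
To give an idea of what the synth side consumes, here is a minimal C++ sketch of decoding a MIDI 2.0 note-on from a 64-bit Universal MIDI Packet (UMP). The NoteOn struct and function name are placeholders of my own, not from any particular API:

```cpp
#include <cstdint>

// Layout per the MIDI 2.0 UMP spec: a channel-voice message is two 32-bit
// words; message type 0x4, status nibble 0x9 = note-on, 16-bit velocity.
struct NoteOn {
    uint8_t  group;     // UMP group (0-15)
    uint8_t  channel;   // MIDI channel (0-15)
    uint8_t  note;      // note number (0-127)
    uint16_t velocity;  // 16-bit velocity, vs. 7-bit in MIDI 1.0
};

// Returns true if the two UMP words encode a MIDI 2.0 note-on.
bool decodeNoteOn(uint32_t word0, uint32_t word1, NoteOn& out) {
    const uint8_t messageType = (word0 >> 28) & 0xF; // 0x4 = MIDI 2.0 channel voice
    const uint8_t status      = (word0 >> 20) & 0xF; // 0x9 = note-on
    if (messageType != 0x4 || status != 0x9) return false;
    out.group    = (word0 >> 24) & 0xF;
    out.channel  = (word0 >> 16) & 0xF;
    out.note     = (word0 >> 8)  & 0x7F;
    out.velocity = static_cast<uint16_t>(word1 >> 16); // top 16 bits of word 1
    return true;
}
```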

Also, if I could spec my own format, I would add support for adaptive soundtracks; that was a major driving force behind my decision to write software synths instead of taking the more conventional route of playing back prerecorded audio.

VeeSilverball@kbin.social 2 points 2 years ago

What's out there currently is whatever is on midi.org, which I believe has what you want: the Clip File specification, accompanied by a not-yet-ready Container File specification (estimated release in 2024).

That said, even implementing MIDI 1.0 for sequencing is a huge step in terms of possibilities. The cost of a dynamic soundtrack of the iMUSE/Monkey Island 2 sort, where the entire soundtrack can transition smoothly between pre-written clips, is mostly borne in the asset-creation process: either you program generative sequences, or a composer spends hundreds of hours writing transition cues.
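
The runtime half of that is actually small; the expense really is in the cues. A minimal C++ sketch of a bar-synchronized transition scheduler in that iMUSE style might look like this (every type and method name here is hypothetical, not from a real engine):

```cpp
#include <cstdint>
#include <optional>
#include <string>
#include <utility>

// When a state change is requested, keep playing the current clip until
// the next bar boundary, then play a composer-authored transition cue
// into the target clip.
struct TransitionRequest {
    std::string transitionCue; // short bridge written by the composer
    std::string targetClip;    // clip to land on after the bridge
};

class AdaptiveMusic {
public:
    void requestTransition(TransitionRequest req) { pending_ = std::move(req); }

    // Called by the sequencer once per tick with the current song position.
    void onTick(uint32_t tick, uint32_t ticksPerBar) {
        if (!pending_ || tick % ticksPerBar != 0) return; // wait for a bar line
        playClip(pending_->transitionCue);  // bridge cue first...
        queueClip(pending_->targetClip);    // ...then the destination clip
        pending_.reset();
    }

private:
    void playClip(const std::string&) {}   // stubs for the engine's playback API
    void queueClip(const std::string&) {}
    std::optional<TransitionRequest> pending_;
};
```

The mechanism just defers the switch to a musically sensible boundary; what the composer spends those hundreds of hours on is writing bridges that sound good from every state you can leave.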

What MIDI 2.0 adds for this task is mostly higher-resolution recording of expression, which means authoring higher-fidelity sequence assets and programming higher-fidelity synth patches. So the asset cost may climb even further to actually make use of that.
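
To put numbers on the resolution jump: note velocity goes from 7 bits to 16 bits. Upscaling old 7-bit data is commonly done with a shift plus bit-repeat so that 0 and 127 map to the full-scale ends; treat the exact rule below as my assumption and check the MIDI 2.0 translation guidance:

```cpp
#include <cstdint>

// Expand a 7-bit MIDI 1.0 velocity into the 16-bit MIDI 2.0 range.
// Bit-repeat keeps the mapping monotonic and sends 127 to 0xFFFF,
// which a plain left shift alone would not.
uint16_t upscaleVelocity(uint8_t v7) {
    const uint16_t v = v7 & 0x7F;
    return static_cast<uint16_t>((v << 9) | (v << 2) | (v >> 5)); // 0 -> 0x0000, 127 -> 0xFFFF
}
```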

If I were exploring that space again, as I have in the past, I would aim for one of:

  1. Sequenced sample playback, with a focus on generatively mixing and arranging longer samples and multisampled instruments - a mostly standard approach.
  2. Discarding the MIDI interface and writing to my own synth engine directly, using a format like ABC to compose (a minimal parsing sketch follows this list).
  3. Applying generative AI to create a "part player" that embellishes a MIDI sequence (this has some demonstrations, but isn't doable in real time... yet).
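
On option 2, the ABC-to-events step is small enough to sketch. This C++ fragment handles only a slice of ABC (^/_ accidentals, pitch letters, ' and , octave marks, a trailing digit as a length multiplier), and the NoteEvent type is invented for the example; real ABC with keys, bar lines, and headers is much richer:

```cpp
#include <cctype>
#include <string>
#include <vector>

struct NoteEvent {
    int midiNote;     // e.g. 60 = middle C
    int lengthUnits;  // multiples of the default note length
};

std::vector<NoteEvent> parseAbc(const std::string& tune) {
    static const int kScale[7] = {0, 2, 4, 5, 7, 9, 11}; // C D E F G A B
    std::vector<NoteEvent> out;
    for (size_t i = 0; i < tune.size(); ++i) {
        int accidental = 0;
        if (tune[i] == '^')      { accidental = +1; ++i; } // sharp
        else if (tune[i] == '_') { accidental = -1; ++i; } // flat
        if (i >= tune.size()) break;
        const char c = tune[i];
        const char u = static_cast<char>(std::toupper(static_cast<unsigned char>(c)));
        if (u < 'A' || u > 'G') continue; // only pitch letters in this sketch
        int note = 60 + kScale[(u - 'C' + 7) % 7] + accidental; // C = middle C
        if (std::islower(static_cast<unsigned char>(c))) note += 12; // c = octave up
        while (i + 1 < tune.size() && (tune[i + 1] == '\'' || tune[i + 1] == ',')) {
            note += (tune[i + 1] == '\'') ? 12 : -12; // octave marks
            ++i;
        }
        int len = 1;
        if (i + 1 < tune.size() && std::isdigit(static_cast<unsigned char>(tune[i + 1])))
            len = tune[++i] - '0'; // e.g. "C2" = double length
        out.push_back({note, len});
    }
    return out;
}
// parseAbc("CDEF GABc") yields an ascending C major scale.
```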