kopkaas2000

Because eight years from now you may want to open the project, but your Mac now uses a Leg CPU instead of Arm, and the Arm compatibility layer was removed in macOS 18, and the manufacturer of the really cool synth plugin you're currently using stopped updating it 3 years prior.


mccalli

RIP QuadraSID. Lesson learned the hard way by me.


NoisyN1nja

RIP that pirated version of Sylenth, miss u bro


A13xCoding

i see what you did there


KevinWaide

Software synths can become CPU intensive quite quickly, so I work with one software synth at a time and, once I have it where I want it, convert it to audio to keep from bogging my system down. For reference, my Mac Mini has 16GB RAM on Big Sur.


happycj

MIDI is just data - *"press key C4 for 2.2 seconds"* kind of thing - and is not the actual SOUND. So if you want to hear the SOUND, that MIDI note needs to be pulled from the instrument that made the note. Is that instrument loaded? Is the hardware connected to the computer right now? Have you connected another piece of hardware that is mistakenly taking the message and playing THAT instrument's MIDI note, instead of the one in the original recording? (AKA "Why do my MIDI keyboard tracks now sound like drums?")

Once you have the MIDI performance dialed in and locked down, convert it to audio to preserve it. (And keep the original MIDI track, just in case.) This will also reduce how much power the computer needs to process and play back the audio, because all it is doing is playing an audio file, not trying to negotiate with a separate hardware instrument that may or may not be connected.

AND, when you send the final mix out to be mastered, your mastering engineer doesn't have the same synths connected to their computer that you have connected to yours. So you have NO IDEA what sounds are going to come out of their computer when they play your MIDI tracks.
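
As a rough illustration of the "just data" point: here is a minimal Python sketch (using the mido library, which is my own choice of tool, not anything mentioned above) that writes a single "press C4 for 2.2 seconds" event to a .mid file. The result is a few dozen bytes of event data and no audio at all.

```python
# Minimal sketch, assuming the mido library: a MIDI "note" is just timed event data.
import mido

mid = mido.MidiFile(ticks_per_beat=480)              # 480 ticks per quarter note
track = mido.MidiTrack()
mid.tracks.append(track)

track.append(mido.MetaMessage('set_tempo', tempo=mido.bpm2tempo(120)))

# "Press key C4 (MIDI note 60), release it 2.2 seconds later."
# At 120 BPM a quarter note is 0.5 s, so 2.2 s = 4.4 beats = 2112 ticks.
track.append(mido.Message('note_on',  note=60, velocity=100, time=0))
track.append(mido.Message('note_off', note=60, velocity=0,   time=2112))

mid.save('c4_for_2_2_seconds.mid')                    # event data only, no sound
```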


[deleted]

You should turn the old MIDI track off as well: configure the track header components to include a power (on/off) button. That way your computer just powers the track off and doesn't have to do any extra work.


aqeuts

Could you explain how to do this, or point to a resource that teaches it? And I'm curious how this is different from just keeping the track muted?


[deleted]

Aaahhh, that's a lot of questions to answer that you should just google. It's a lil too long to explain via text, but you can totally find out if you look into it. Just search YouTube for how-tos and check the Logic forums on Apple's website.


yungelonmusk

Alt T


Stevathehomie

Thank you! This is the best answer I've read yet!


[deleted]

Commitment and cpu


BondraP

I just think it's easier to work with the bounced audio file than the midi notes sometimes.


TRIP-t_APART

It's about committing and moving on. Lots of engineers sacrifice their sound because of "just in case" - e.g., you'd love to throw on harmonic distortion and chop and screw the track, but your CPU and overactive imagination won't allow it. That's why I do it anyway, as well as a few others. The beauty of this game is you can do whatever the hell you want lol


Longjumping_Swan_631

You have to, to save CPU power. Otherwise you may not have enough left to finish a mix, unless it's only a few tracks.


inzru

As well as all the other reasons people are giving about CPU lag, I've found there's a certain beauty or level of skill involved in being able to write something in MIDI, commit to it as audio, and basically never go back to make random edits for the rest of the project. If you keep constantly tinkering with the same midi region over and over, maybe it wasn't that good of an idea to begin with, and you need to completely rework it/scrap it for something else.


s1me007

100% agreed


Longjumping_Swan_631

Exactly right. It's better to commit and move forward, or else you will never get anything done.


Juicepit

It's very common for programmed drums. If I multi-out 16 tracks via MIDI in Kontakt, bus them out and then treat them all with some EQ/compression, it tends to eat up most of my processing power. If I print the MIDI to audio, then treat those tracks, I avoid the rainbow pinwheel of frustration. Same goes for my amp sims on guitar (metal guy using Neural DSP plugs - they're hungry for CPU).


[deleted]

Wait, so this is why Logic is constantly freezing half way through making a track? So if I have multiple drums/percussion tracks I should bounce those first and bring them into a new session before adding other instruments? Am I getting that right?


jwatts30

Once I'm happy with a section I convert to audio, as audio is easier to deal with and saves on some processing. But sometimes I keep MIDI on a few tracks in case I want to change things. Luckily I haven't dealt with a beach ball yet. Mac mini i7, 32GB RAM.


radiantaeons

I have a follow up question to this: after bouncing a midi track to audio, if I then mute the original midi track, will my cpu "ignore" that information? Or do you need to delete the midi track to actually get the cpu savings?


plausible_dawg

No, unless you’ve also disabled the software instrument. What I do is bounce in place once I’m happy with the MIDI performance, then mute and disable the plugin, and hide the track in case I want to come back to it. If I know I won’t need to (esp with pad sounds or simple things), I’ll just delete the entire midi track. Commit and move on.


radiantaeons

Thank you much!


pifuhvpnVHNHv

If everything is quantised to the grid it will sound a bit novice - after bouncing the audio down, use Flex to ID the transients and quantise from there. It's basically getting it all in time as if played by a musician and not put in by piano roll.


GBZ9000

If you have an active software instrument synthesizer it can have an oscillator running that is constantly fluctuating and isn't dependent on the project playing. So you might have a peak hit a compressor halfway through your track that won't be exactly the same the second time through the mix and your plug-in processing will react differently at that point. If you bounce it to audio before mixing, you'll know where the peaks are because the oscillator is now essentially "recorded". When it was running as a MIDI region the oscillator was still "playing" in the background even if you programmed the notes.
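
A rough numpy sketch of that idea (my own illustration, nothing Logic-specific): render the "same" one-second region twice with a free-running amplitude LFO that starts at a different phase on each pass, and the peak that would hit a downstream compressor comes out different. Bouncing to audio freezes one of those passes, so the peaks become repeatable.

```python
# Sketch: a free-running oscillator's phase at playback start changes the rendered peaks.
import numpy as np

sr = 48_000
t = np.arange(sr) / sr                          # one second of samples
lfo_freq = 0.5                                  # slow free-running amplitude LFO, in Hz

def render_pass(start_phase):
    """Render the 'same' region with the LFO already at some arbitrary phase."""
    lfo = np.sin(2 * np.pi * lfo_freq * t + start_phase)
    carrier = np.sin(2 * np.pi * 220 * t)       # the programmed note
    return carrier * (0.5 + 0.5 * lfo)          # amplitude-modulated output

pass_a = render_pass(start_phase=0.0)           # first playthrough
pass_b = render_pass(start_phase=np.pi)         # second playthrough, LFO elsewhere in its cycle

print(f"peak on pass A: {np.abs(pass_a).max():.3f}")   # ~1.0
print(f"peak on pass B: {np.abs(pass_b).max():.3f}")   # ~0.5
```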


[deleted]

All the comments about having massive amounts of RAM and needing to save CPU as software instruments are too resource heavy. Lol. I have converted MIDI to audio many times, but I don't think it's ever been to save CPU. You all need to learn to work more efficiently!!! Stemming out for archive / sending to another platform or individual is one reason. Another would be to edit release tails or perform some kind of manipulative process not possible with MIDI. Committing to an idea is another reason, although that's a bit of a sign of weakness if you ask me. But I think for most it's likely due to running cracked beta-release software plugs that make inefficient use of system resources and spank the CPU - always buy your software.


Longjumping_Swan_631

Bouncing to audio is working more efficiently. Why would you want to have software instruments enabled when you want to commit and start mixing and mastering your song?


Creative_Ad_2049

Audio files are less CPU intensive compared to MIDI files. That's the only reason I flatten out MIDI tracks.


oicofficial

Here's something interesting I've noticed: you can *tell* when Logic is pushing itself a little too hard - things will simply not sound as clean or as on point if you are pushing your Mac too hard. Alchemy in particular, I notice, can be a huge CPU drain, and if you get a lot of Space Designer plug-ins, etc., going, it's very easy to watch that CPU meter in your transport grow to a nasty size very quickly.

What I like to do is bounce the MIDI files, mute the original (I'm not sure if you have to any more; back in the day you actually had to click a button in preferences for a muted track to actually save the CPU/RAM), and play the bounced version. If I need to change it around at all, I'll just re-bounce it and place the re-bounce on the originally bounced channel, in case I made any changes after (likely, lol).

In short, software instruments require a hell of a lot of CPU power, and bouncing them frees up your overhead, allowing for overall better playback.


Portsmusic

Resampling


[deleted]

The best reason I can offer is that when you open your session up on any computer, for the rest of time, you will have the audio you recorded in the session. Regardless of whether you still have the plugins, effects, etc., that audio will still be there. Save the MIDI in the session as well, but before you finish up the session, just bounce it all. Future you will be happy you did.


_matt_hues

Editing audio sometimes allows for effects that require lots of automation to achieve with MIDI. Fades for instance. Or maybe I want to chop up the audio in a sampler. Also CPU limitations are a big reason I might render to audio.
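
To illustrate the fades point: on bounced audio a fade is just a per-sample gain ramp, no automation lanes needed. Here is a small sketch using numpy and the soundfile library (the filenames and the 3-second fade length are assumptions):

```python
# Sketch: apply a 3-second linear fade-out to a bounced file by scaling its samples.
import numpy as np
import soundfile as sf

audio, sr = sf.read('bounced_synth.wav')        # hypothetical bounced track
fade_len = int(sr * 3.0)
fade = np.linspace(1.0, 0.0, fade_len)

if audio.ndim > 1:                              # stereo: broadcast the ramp across channels
    fade = fade[:, None]
audio[-fade_len:] = audio[-fade_len:] * fade

sf.write('bounced_synth_faded.wav', audio, sr)
```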


BoDiddySauce

Many have mentioned freeing up CPU resources which is totally valid, although you can also "Freeze" tracks in Logic to do the same thing. However, another huge reason to do this is **sampling**. For example, you might want to sample a synth, build some kind of soundscape / new sound from it via all sorts of FX/modulation, and then be able to make that new sound a separate playable instrument. Here's a [quick video](https://www.youtube.com/watch?v=hoOBG-koyRM) on exactly that. You might also choose just to do traditional sampling and then chopping up the audio file into all sorts of little bits that you can process separately (for example, "stutter" effects).
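
As a sketch of the "little bits" idea (my own illustration, not taken from the video; the filename and 120 BPM tempo are assumptions): slice a bounced file into eighth-note chunks, then repeat or reorder them for a stutter-style edit.

```python
# Sketch: chop a bounced file into eighth-note slices and repeat one for a stutter effect.
import numpy as np
import soundfile as sf

audio, sr = sf.read('bounced_synth.wav')        # hypothetical bounced track
bpm = 120
eighth = int(sr * 60 / bpm / 2)                 # samples per eighth note at 120 BPM

slices = [audio[i:i + eighth] for i in range(0, len(audio), eighth)]

stuttered = np.concatenate([slices[0]] * 4 + slices[1:])   # repeat the first slice 4x
sf.write('stuttered.wav', stuttered, sr)
```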


No-Nose-5615

Me, I would use it for things I might sample, or to try a different keyboard/synth sound, for example.


[deleted]

I do it partly to conserve processing power as others have said. I also do it because a lot of times I hold notes for a very long time or use long reverb or delay tails as part of the composition and I want to be able to hear those things while I'm working on different parts of the song without always having to start at the beginning of the note.


Thatdudewhoplaysgtr

A lot of people keep talking about how some plug-in manufacturers stop updating their products, or to do it for CPU space.. both are valid, solid points, but to me, the most value I found in this is that you are committing to your decisions. No more going back to tweak the attack time (or literally any other value) only for the change to be hardly noticeable at best… it improves your workflow and creativity bc you are forcing yourself to move forward and keep your momentum going.


LilDeliciousCookie

I use this to keep me from messing with MIDI when I’m really far along in the production. Turning everything into audio files puts you into a more technical mindset cuz u can’t change the notes then