Old 09-30-2010, 08:40 PM
pfo
Member
 
Join Date: Oct 2004
Location: Nashville, TN
Posts: 381
Default Elastic Audio in Depth

***I've asked these questions in other threads, but never gotten a response from Avid. However, I feel like the tech support presence here has improved noticeably since the rebranding, so I'm going to assume my questions got lost in the undoubtedly huge volume of thread reply emails these guys must get. As such, I'm starting a new post in hopes that I actually get a response this time***

What are the different Elastic Audio algorithms really doing, and what do they like to see to produce the best results?

The Pro Tools manual really doesn't go into much detail about how EA does its analysis, or give any kind of detailed explanation of the parameters of each plugin. For example, it will tell you that you can move the analysis markers as needed, but it doesn't say where they ought to be in the first place. And telling us that we can adjust the window size isn't very helpful when there's no real explanation of what the window is, or how it affects the processing. I've done plenty of experimentation, but it's very difficult to ascertain exactly what's going on using only trial and error. A real explanation would be very helpful.

With compression, EQ, reverb, etc., it's easy to say "do it til it sounds right," but with the often extremely minute phase and timing changes produced by EA processing, that's really not a reasonable approach. Often, things will appear to be "right," but then later I'll notice a random transient with a phase or timing issue, which leads me to question all the others that seemed right before. If there were a clear, in-depth explanation of the algorithms and processes, I think there would be a lot less guesswork, and as an added bonus, probably less bitching about EA messing up the audio. I've been working heavily with EA, and I've gotten very good results, but I'm still not quite there, and I'm tired of guessing what's working and what isn't. There are a lot of variables and parameters in Elastic Audio, and I would like to have a proper understanding of these tools and how to use them.

On to the questions:

1) I notice that analysis markers are rarely placed at the start of a transient. But then sometimes they are. Where is the true "start" of a transient, and in which part of a transient should the analysis markers sit? It's no good to have warp markers in the middle of a transient, but does the location matter as much for analysis markers? It would seem that at the very least, they should all be in the same place relative to the transients.
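For what it's worth, here's my mental model of why a detected "start" drifts, sketched as a toy energy-based onset detector. This is emphatically not Avid's analysis; the function name, frame size, and threshold are all mine. But it shows how any frame-based detector can only flag a transient at a frame boundary, somewhere near, but rarely exactly on, its first sample:

```python
def onset_positions(x, frame=64, threshold=2.0):
    """Toy energy-flux onset detector: flag a frame whose short-term
    energy jumps to more than `threshold` times the previous frame's.
    The flag lands on a frame boundary, which is one reason a detected
    'start' rarely sits on the exact first sample of a transient."""
    onsets = []
    prev = 1e-9  # previous frame's energy (tiny floor avoids divide-like issues)
    for start in range(0, len(x) - frame + 1, frame):
        energy = sum(s * s for s in x[start:start + frame]) / frame
        if energy > threshold * prev and energy > 1e-6:
            onsets.append(start)
        prev = max(energy, 1e-9)
    return onsets
```

With a frame of 64 samples, a hit whose true onset falls mid-frame still gets flagged at the frame boundary, so the marker's position relative to the waveform depends on where the frame grid happens to fall.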

2) How does the "Window Size" affect the processing, beyond making it sound less crappy or more crappy? What is the window, and what is it really doing? Does the start of the window sit right on the analysis marker, or does the window extend out to either side of the marker?
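My working assumption is that the "window" is the grain size of an overlap-add style stretch: the audio is sliced into windowed grains that get overlap-added at a different hop to change duration. Here's a minimal plain-OLA sketch of that idea (no phase or peak alignment, so far cruder than whatever EA actually does; all names and the hop choice are mine). It at least makes the tradeoff concrete: a big window smears transients across its whole length, while a tiny one chops up low-frequency content:

```python
import math

def ola_stretch(x, window_size, stretch):
    """Naive overlap-add time stretch: slice the input into Hann-windowed
    grains at one hop, then overlap-add them at a scaled hop.
    stretch > 1.0 lengthens the audio, < 1.0 shortens it."""
    hop_in = window_size // 4                  # analysis hop
    hop_out = int(round(hop_in * stretch))     # synthesis hop
    win = [0.5 - 0.5 * math.cos(2 * math.pi * n / window_size)
           for n in range(window_size)]        # Hann window
    out_len = hop_out * ((len(x) - window_size) // hop_in) + window_size
    out = [0.0] * out_len
    norm = [0.0] * out_len                     # window-overlap normalization
    pos_in, pos_out = 0, 0
    while pos_in + window_size <= len(x):
        for n in range(window_size):
            out[pos_out + n] += x[pos_in + n] * win[n]
            norm[pos_out + n] += win[n]
        pos_in += hop_in
        pos_out += hop_out
    return [o / w if w > 1e-9 else 0.0 for o, w in zip(out, norm)]
```

In a scheme like this, every grain that overlaps a transient gets repositioned, so the transient's energy is spread across roughly one window's worth of output, which is my guess at why window size audibly changes how attacks survive the stretch.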

3) The manual tells us that we can turn on the envelope follower, which "simulates the original acoustics of the audio being stretched." I can't imagine ever not wanting the results to sound like the original audio, but this must be optional for a reason. So practically speaking, when would you want this enabled or disabled, and why?
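As I understand the general DSP concept (not claiming this is what EA implements), an envelope follower just tracks the amplitude contour of the signal over time so that contour can be re-imposed after stretching. A generic one-pole attack/release follower looks like this; the time constants are arbitrary values I picked:

```python
import math

def envelope_follower(samples, sample_rate, attack_ms=5.0, release_ms=50.0):
    """One-pole attack/release envelope follower: tracks the amplitude
    contour (the 'acoustic shape') of a signal. Fast attack lets the
    envelope jump up on transients; slower release lets it decay smoothly."""
    atk = math.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    rel = math.exp(-1.0 / (sample_rate * release_ms / 1000.0))
    env = 0.0
    out = []
    for s in samples:
        level = abs(s)
        coeff = atk if level > env else rel   # rising vs. falling
        env = coeff * env + (1.0 - coeff) * level
        out.append(env)
    return out
```

If that's roughly what's happening under the hood, my guess is the option exists because re-imposing the original envelope can fight the stretch on sustained material, but I'd still love an authoritative answer on when to switch it off.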

4) Very often, a snare hit will get a marker in the overheads, but not in the room mics. What's the proper way to approach this situation? It's not realistic to expect anybody to place them by hand in the room mics, as you really can't see the transient with any degree of precision. Additionally, until there's a clear answer for the first question, we'll never know exactly where we would need to place the marker anyway.

5) What about the situation where the snare does get a marker in both pairs of mics, but they're not in the same relative location (i.e., right before the transient in the overheads, and somewhere in the middle of the transient in the room mics)? Which is correct? Does it matter?

6) DigiTechSupt mentioned in another thread that we should copy the analysis markers from one track and paste them onto other tracks. How? The manual seems to make no mention of this, and for the life of me, I can't figure out a way to do this. But it does seem like that could help with some of these issues.

7) Obviously, the offset between overhead and room mics is constant, but the EA analysis markers never are. This seems like a major source of phase issues, so what's the best way to ensure that the markers are consistently offset by the correct amount from one track to the next?
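Until there's an official answer, the best sanity check I can think of is to measure the raw offset between the two tracks myself with cross-correlation, then confirm the markers are separated by that same amount. A brute-force sketch (this is my own hypothetical helper, not a Pro Tools feature):

```python
def best_lag(ref, other, max_lag):
    """Estimate the constant offset between two tracks by brute-force
    cross-correlation. Returns the lag (in samples) at which `other`
    best matches `ref`; a positive result means `other` arrives later,
    as a room mic does relative to an overhead."""
    best, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = 0.0
        for i, r in enumerate(ref):
            j = i + lag
            if 0 <= j < len(other):
                score += r * other[j]
        if score > best_score:
            best, best_score = lag, score
    return best
```

If the measured lag between overheads and room mics is, say, 7 samples, then every marker pair ought to be offset by exactly those 7 samples; any pair that isn't is a candidate for the phase problems I'm describing.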

8) I've started using stereo tracks for any mics that are paired or stereo (kick in/out, snare top/bottom, overheads, etc.), and it definitely seems to help with phase issues within each pair. Should I take this to the logical extreme and start using multichannel tracks? I started doing that once, but ran up against some reason why it didn't seem like the right way to go, though I can't remember now why I abandoned that line of thinking. Maybe the analysis markers meant for cymbals were showing up on the kick and snare tracks and causing problems? I don't know. I guess you could do the analysis and then drag the regions onto multichannel tracks for the warping... but is this even a good path to go down?

9) Take a look at this screenshot. You're seeing the snare top/bottom, overheads, and room mics, all on stereo tracks (I didn't record these, so I can't tell you much about mic placement or anything). When I make a very small timing adjustment to a transient that falls a few beats after this screenshot, the relationship between the snare mics and the overhead mics (highlighted area) changes by about one millisecond, which completely alters the sound of the snare when heard through the close mics and the overheads. Is this avoidable, or just part of the deal with Elastic Audio?


Thank you for your help. I know some of these phase issues can be very minor, but they really do affect the realism and depth of multitrack drums. And beyond that, it's the principle of the thing. We're told that EA preserves the phase relationship of multitrack drums, yet simple experimentation will show that this is not always the case. Well, it either does or it doesn't. If it does, then we're clearly not using it right, so educate us! And if EA just isn't the best tool for drums that are meant to sound natural (i.e., not a slammin' pop tune), just tell us, so we can stop trying to make it work. If it were any other type of process, I'd just keep twiddling with it until it sounded right, but phase relationship is a very sensitive thing and the changes made by EA can be very subtle. Precise tools require precise understanding.

I remember in the earlier days of Beat Detective, even Pro Tools geniuses often described it and its analysis process as "black magic." As a result of this lack of understanding, there are probably thousands of threads here asking how to properly use Beat Detective. I really do think that a lot of the complaining about EA stems from the fact that we're all basically in the dark about how to make the damn thing work properly. The best methods for using it that I've discovered have come either from other users, or from my own experimentation. That doesn't make sense to me. Why should we need to rely on trial and error (our own or that of others) to figure out how to use these tools? That's perfectly reasonable for creative tools like reverb and compression, but for a technical process like EA, where there is a "right" and "wrong" way to use it, the manual shouldn't leave us guessing how to use it correctly. The Pro Tools manual is helpful, but just not deep enough.

I would love to hear from anybody, especially you folks who helped to design Elastic Audio. I know you know the answers to these questions!

Many thanks!