From: Tommy R. <tr...@to...> - 2010-04-05 18:05:15
Jonathan,

The problem with that line of thought is that not everyone runs LMMS on today's hardware; LMMS runs fairly well on my 1998 machine. I hope trade-offs between precision and performance can be made optional, as has been done so far in many ways. LMMS does so much in real time that it is very sensitive to performance, and even with a new system I can see myself taking it to the limit of how many tracks and effects I can pile onto it.

Allowing a CPU/cache hit for greater precision would not be such an issue if LMMS were less real-time sensitive. For example, imagine a feature to render selected tracks to a new audio track (muting the original ones); it would give users a workflow for getting around the CPU wall. Tweaking gets a bit complicated, as one would delete the rendered track, unmute the source tracks, tweak, and re-render to a new audio track, but it would allow users to do huge projects. I remember using this feature often when I used to write with Mackie Tracktion. (Rough sketches of both ideas follow the quoted messages below.)

--Tommy

From: Jonathan Aquilina <eag...@gm...>

> Toby, with the processing power we have nowadays it shouldn't hit
> performance badly, especially since cache sizes and speeds are getting
> quicker.

On Thu, Apr 1, 2010 at 11:40 PM, Tobias Doerffel <tob...@gm...> wrote:

>> ...Changing internal processing sample format to
>> double definitely would be nice if it does not introduce performance
>> regressions (which I fear due to double data rate and thus less CPU cache
>> efficiency)...
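
On the cache question: a double is 8 bytes to a float's 4, so a 64-byte cache line holds 8 doubles instead of 16 floats, which is exactly the efficiency cost Tobias fears. Here is a minimal sketch (not LMMS's actual code) of leaving that trade-off to a compile-time switch; the LMMS_DOUBLE_PRECISION macro and processBuffer() are made-up names for illustration:

#include <cstdio>
#include <cstddef>

/* Hypothetical build option: define LMMS_DOUBLE_PRECISION to trade
 * cache density for precision; the default stays single precision. */
#ifdef LMMS_DOUBLE_PRECISION
typedef double sample_t;	// 8 bytes: 8 samples per 64-byte cache line
#else
typedef float sample_t;	// 4 bytes: 16 samples per 64-byte cache line
#endif

// The same DSP code compiles for either precision.
void processBuffer( sample_t * buf, std::size_t frames, sample_t gain )
{
	for( std::size_t f = 0; f < frames; ++f )
	{
		buf[f] *= gain;
	}
}

int main()
{
	sample_t buf[4] = { 0.5, -0.5, 0.25, -0.25 };
	processBuffer( buf, 4, 0.5 );
	std::printf( "%f\n", (double) buf[0] );
	return 0;
}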
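
And a rough sketch of the freeze workflow itself. Project, Track, renderToFile() and freezeTracks() are all hypothetical names, not an existing LMMS API; this only illustrates the render / mute / tweak / re-render cycle:

#include <cstddef>
#include <string>
#include <vector>

// Hypothetical data model, just enough to show the workflow.
struct Track
{
	std::string name;
	bool muted;
};

struct Project
{
	std::vector<Track> tracks;

	// Stand-in for an offline bounce of the selected tracks to disk.
	std::string renderToFile( const std::vector<std::size_t> & sel )
	{
		(void) sel;
		return "frozen-mix.wav";	// placeholder file name
	}

	void addAudioTrack( const std::string & file )
	{
		tracks.push_back( Track{ "[frozen] " + file, false } );
	}
};

// "Freeze": bounce the selection to one audio track and mute the
// sources, trading a render pass for real-time CPU headroom.
// To tweak later: delete the frozen track, unmute, edit, re-freeze.
void freezeTracks( Project & p, const std::vector<std::size_t> & sel )
{
	const std::string file = p.renderToFile( sel );
	for( std::size_t i : sel )
	{
		p.tracks[i].muted = true;
	}
	p.addAudioTrack( file );
}

int main()
{
	Project p;
	p.tracks = { { "drums", false }, { "bass", false }, { "lead", false } };
	freezeTracks( p, { 0, 1 } );	// freeze drums + bass, lead stays live
	return 0;
}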