audacity-devel Mailing List for Audacity (Page 6)
A free multi-track audio editor and recorder
From: Peter S. <pet...@gm...> - 2021-05-18 15:04:30

On Tue, May 18, 2021 at 3:57 PM Peter Sampson <pet...@gm...> wrote:
> Testing on W10 with Audacity 3.0.3 ea04eef
>
> 1) clear Audacity settings folder
> 2) launch Audacity 3.0.3 ea04eef
> 3) Observe: launches fine
> 4) Exit
> 5) Relaunch Audacity 3.0.3 ea04eef
> 6) Observe: Audacity fails to launch

This remains the case with the latest alpha commit to master, Audacity 3.0.3 b1f05e5:
https://github.com/audacity/audacity/commit/b1f05e5747cae9a3ccca1d4e816c9102ab5f3444

The white rectangle with the Audacity logo and text appears and then disappears, with no launch. It is a regression since the recent alpha that I have, Audacity 3.0.3 dad9823.

As the finder, are you going to log this, Steve?

Peter.

> In fact, at Step 5 none of my 3.x Audacities will launch.
>
> Peter.
>
> On Tue, May 18, 2021 at 3:50 PM Steve Fiddle <ste...@gm...> wrote:
>> First run gives an assert:
>> src/widgets/FileConfig.cpp(98): assert "mDirty == false" failed in ~FileConfig().
>> [...]
>> Second run crashes:
>> [...]
>> Segmentation fault (core dumped)
>>
>> Steve
>>
>> _______________________________________________
>> audacity-devel mailing list
>> aud...@li...
>> https://lists.sourceforge.net/lists/listinfo/audacity-devel
From: Peter S. <pet...@gm...> - 2021-05-18 14:57:43

Testing on W10 with Audacity 3.0.3 ea04eef:

1) clear Audacity settings folder
2) launch Audacity 3.0.3 ea04eef
3) Observe: launches fine
4) Exit
5) Relaunch Audacity 3.0.3 ea04eef
6) Observe: Audacity fails to launch

In fact, at Step 5 none of my 3.x Audacities will launch.

Peter.

On Tue, May 18, 2021 at 3:50 PM Steve Fiddle <ste...@gm...> wrote:
> First run gives an assert:
> src/widgets/FileConfig.cpp(98): assert "mDirty == false" failed in ~FileConfig().
> [...]
> Second run crashes:
> [...]
> Segmentation fault (core dumped)
>
> Steve
From: Steve F. <ste...@gm...> - 2021-05-18 14:49:49

First run gives an assert:
src/widgets/FileConfig.cpp(98): assert "mDirty == false" failed in ~FileConfig().

Backtrace:

[1] FileConfig::~FileConfig()
[2] AudacityFileConfig::~AudacityFileConfig()
[3] AudacityFileConfig::~AudacityFileConfig()
[4] std::default_delete<AudacityFileConfig>::operator()(AudacityFileConfig*) const
[5] std::unique_ptr<AudacityFileConfig, std::default_delete<AudacityFileConfig> >::~unique_ptr()
[6] PluginManager::Load()
[7] PluginManager::Initialize()
[8] AudacityApp::InitPart2()
[9] AudacityApp::OnInit()
[10] wxAppConsoleBase::CallOnInit()
[11] wxEntry(int&, wchar_t**)
[12] wxEntry(int&, char**)
[13] main
[14] __libc_start_main
[15] _start
--------------------------------
Second run crashes:

ALSA lib pcm.c:2642:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rear
ALSA lib pcm.c:2642:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.center_lfe
ALSA lib pcm.c:2642:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.side
ALSA lib pcm_route.c:869:(find_matching_chmap) Found no matching channel map
ALSA lib pcm_route.c:869:(find_matching_chmap) Found no matching channel map
ALSA lib pcm_route.c:869:(find_matching_chmap) Found no matching channel map
ALSA lib pcm_route.c:869:(find_matching_chmap) Found no matching channel map
Cannot connect to server socket err = No such file or directory
Cannot connect to server request channel
jack server is not running or cannot be started
JackShmReadWritePtr::~JackShmReadWritePtr - Init not done for -1, skipping unlock
JackShmReadWritePtr::~JackShmReadWritePtr - Init not done for -1, skipping unlock
Cannot connect to server socket err = No such file or directory
Cannot connect to server request channel
jack server is not running or cannot be started
JackShmReadWritePtr::~JackShmReadWritePtr - Init not done for -1, skipping unlock
JackShmReadWritePtr::~JackShmReadWritePtr - Init not done for -1, skipping unlock
ALSA lib pcm_oss.c:377:(_snd_pcm_oss_open) Unknown field port
ALSA lib pcm_oss.c:377:(_snd_pcm_oss_open) Unknown field port
ALSA lib pcm_usb_stream.c:486:(_snd_pcm_usb_stream_open) Invalid type for card
ALSA lib pcm_usb_stream.c:486:(_snd_pcm_usb_stream_open) Invalid type for card
Cannot connect to server socket err = No such file or directory
Cannot connect to server request channel
jack server is not running or cannot be started
JackShmReadWritePtr::~JackShmReadWritePtr - Init not done for -1, skipping unlock
JackShmReadWritePtr::~JackShmReadWritePtr - Init not done for -1, skipping unlock
15:45:24: Debug: ScreenToClient cannot work when toplevel window is not shown
15:45:24: Debug: ScreenToClient cannot work when toplevel window is not shown
15:45:24: Debug: ScreenToClient cannot work when toplevel window is not shown
15:45:24: Debug: ScreenToClient cannot work when toplevel window is not shown
15:45:24: Debug: ScreenToClient cannot work when toplevel window is not shown
15:45:24: Debug: ScreenToClient cannot work when toplevel window is not shown
15:45:24: Debug: ScreenToClient cannot work when toplevel window is not shown
15:45:24: Debug: ScreenToClient cannot work when toplevel window is not shown
15:45:24: Debug: ScreenToClient cannot work when toplevel window is not shown
Expression 'stream->playback.pcm' failed in '/home/steve/Sourcecode/audacity/lib-src/portaudio-v19/src/hostapi/alsa/pa_linux_alsa.c', line: 4628
Expression 'stream->playback.pcm' failed in '/home/steve/Sourcecode/audacity/lib-src/portaudio-v19/src/hostapi/alsa/pa_linux_alsa.c', line: 4628
Expression 'stream->playback.pcm' failed in '/home/steve/Sourcecode/audacity/lib-src/portaudio-v19/src/hostapi/alsa/pa_linux_alsa.c', line: 4628
Expression 'stream->playback.pcm' failed in '/home/steve/Sourcecode/audacity/lib-src/portaudio-v19/src/hostapi/alsa/pa_linux_alsa.c', line: 4628
Expression 'stream->playback.pcm' failed in '/home/steve/Sourcecode/audacity/lib-src/portaudio-v19/src/hostapi/alsa/pa_linux_alsa.c', line: 4628
Expression 'stream->playback.pcm' failed in '/home/steve/Sourcecode/audacity/lib-src/portaudio-v19/src/hostapi/alsa/pa_linux_alsa.c', line: 4628
Expression 'stream->playback.pcm' failed in '/home/steve/Sourcecode/audacity/lib-src/portaudio-v19/src/hostapi/alsa/pa_linux_alsa.c', line: 4628
Expression 'stream->playback.pcm' failed in '/home/steve/Sourcecode/audacity/lib-src/portaudio-v19/src/hostapi/alsa/pa_linux_alsa.c', line: 4628
15:45:24: Debug: ScreenToClient cannot work when toplevel window is not shown
Segmentation fault (core dumped)
--------------------------------

Steve

On Tue, 18 May 2021 at 14:35, Steve Fiddle <ste...@gm...> wrote:
>
> Tested on Linux with 3.0.3 ea04ee
>
> If I remove:
> audacity.cfg
> pluginsettings.cfg
> pluginregistry.cfg
>
> then Audacity will launch without error.
>
> If I then exit and try launching again, Audacity crashes:
> "Segmentation fault (core dumped)"
>
> Steve
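The assert in the first run says a FileConfig object was destroyed while it still had unsaved changes pending. A minimal Python sketch of that dirty-flag invariant (illustrative, hypothetical names; not Audacity's actual class):

```python
class FileConfig:
    """Toy config-file wrapper that tracks unsaved ("dirty") state.

    Illustrative only: it mirrors the invariant behind the
    'assert "mDirty == false" failed in ~FileConfig()' message,
    not Audacity's real implementation.
    """

    def __init__(self):
        self._values = {}
        self._dirty = False

    def write(self, key, value):
        self._values[key] = value
        self._dirty = True      # unsaved change now pending

    def flush(self):
        # ... persist self._values to disk here ...
        self._dirty = False

    def close(self):
        # The destructor-time check: the object must not be torn
        # down while changes are still unsaved.
        assert not self._dirty, 'mDirty == false failed in ~FileConfig()'
```

The backtrace places the destruction inside PluginManager::Load(), which suggests the config was written to but never flushed before its unique_ptr went out of scope.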
From: Steve F. <ste...@gm...> - 2021-05-18 13:36:19

Tested on Linux with 3.0.3 ea04ee

If I remove:
audacity.cfg
pluginsettings.cfg
pluginregistry.cfg

then Audacity will launch without error.

If I then exit and try launching again, Audacity crashes:
"Segmentation fault (core dumped)"

Steve
From: Steve F. <ste...@gm...> - 2021-05-18 13:32:18

He has sent me the huge file (445 MB as 7z) and I can make it available to any developer who wants to take a look, but he has asked me not to make the project public, so I'll send a link to the file on request.

On my (Linux) computer with Audacity 3.0.2, I can open the file, reverse the first track, Undo, Save, Exit, and the file shrinks to 186 MiB. So it seems that "something" is stopping compaction from working on his (Windows 10) machine.

Conjecture: perhaps this is also related to the extreme slowness that some users are experiencing.

Steve

On Mon, 17 May 2021 at 16:27, Steve Fiddle <ste...@gm...> wrote:
> On Mon, 17 May 2021 at 04:59, Leland <ll...@ho...> wrote:
>> Steve, have him open the project, duplicate a track, reverse the new track, save the project, delete the new track, save again. Does the project reduce in size then?
>> (Basically, trying to force compaction.)
>
> Weird. He tried that, and here's what happened:
> [...]
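Since an .aup3 project is a single SQLite database, one way a ~200 MB project can occupy 1350 MB on disk is free (unused) pages that were never reclaimed. The sketch below uses standard sqlite3 pragmas on a throwaway database (not an actual project file) to show how deleted data leaves the file size unchanged until VACUUM compacts it:

```python
import os
import sqlite3
import tempfile

def free_space_report(db_path):
    """Return (page_count, freelist_count, file_bytes) for a SQLite db."""
    con = sqlite3.connect(db_path)
    try:
        page_count = con.execute("PRAGMA page_count").fetchone()[0]
        freelist = con.execute("PRAGMA freelist_count").fetchone()[0]
    finally:
        con.close()
    return page_count, freelist, os.path.getsize(db_path)

# Demonstration: fill a database, delete everything, and observe that
# the file keeps its pages on the freelist until VACUUM reclaims them.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE blobs (data BLOB)")
con.executemany("INSERT INTO blobs VALUES (?)",
                [(b"x" * 4096,) for _ in range(500)])
con.commit()
con.execute("DELETE FROM blobs")   # data gone, pages stay in the file
con.commit()
con.close()

pages, free, size_before = free_space_report(path)

con = sqlite3.connect(path)
con.execute("VACUUM")              # rewrite the db without free pages
con.close()

_, free_after, size_after = free_space_report(path)
```

If the oversized file is mostly free pages, a ZIP archive of it will compress dramatically (free pages are largely zero-filled), whereas a file genuinely full of audio data will not.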
From: Steve F. <ste...@gm...> - 2021-05-17 15:57:54
|
Can this effect be committed, even if not enabled by default? I would love to see this working with real-time preview, but even without that it works and has many features that are not available in the current compressor. With QA hat on, I think that the current compressor should remain as the default compressor until real-time preview works satisfactorily in the new compressor. It seems unlikely that the issue with real-time preview will ever be resolved unless the effect is at least in the code base. Steve On Mon, 17 May 2021 at 16:43, Max Maisel <mm...@po...> wrote: > > *bump* > > > Hi all, > > > > since Audacity 3.0.2 is released now, I rebased my Compressor effect > > onto the latest master. > > > > Max > > > > On Sunday, 28 March 2021 at 13:46, Max Maisel wrote: > > > Hi James, > > > > > > it's no problem for me to wait for the 3.0.1 release. > > > > > > The latency is mainly caused by various kinds of lookahead. One > > > lookahead is selected directly by the user, another lookahead is > > > implicit in case of the exponential fit envelope detector because > > > the > > > algorithm needs to process the signal backwards in the attack > > > stage. > > > > > > Latencies in the range of minutes are extreme cases if users select > > > high lookahead times or, in case of exponential fit, high attack > > > times. > > > But I don't want to limit the ranges of the lookahead and attack > > > time > > > sliders for offline processing just because of latency in realtime > > > mode. > > > > > > When using the analog simulation envelope detector together with > > > low > > > lookahead (a few milliseconds) like a real analog compressor, there > > > is > > > almost no noticably latency. > > > > > > Max > > > > > > On Saturday, 27 March 2021 at 14:41, James Crook wrote: > > > > Hi Max. > > > > > > > > The previous update I had on this was about the long latency. > > > > Disabling realtime preview is sort of OK, but something seems > > > > wrong. 
> > > > A one minute latency is colossal. That points to something > > > > underlying wrong. > > > > > > > > Also 3.0.1 has now become very very much a maintenance release > > > > for > > > > addressing the 3.0.0 unitary project issues. Paul would love to > > > > get > > > > some structural > > > > changes in that improve independence between pieces of code, and > > > > I > > > > am > > > > saying no. > > > > We'd also like to get portaudio and FFmpeg library updates in, > > > > and > > > > I > > > > am saying no. > > > > > > > > It is pretty clear that the release after 3.0.1 is going to be > > > > quite > > > > soon, so as RM > > > > I'm sorry to have to say that your compressor will have to sit > > > > out > > > > for 3.0.1. > > > > > > > > 3.0.1 is due to actually release on 17th April, so it's a further > > > > delay of about 3 > > > > weeks for you in seeing progress on your compressor getting in to > > > > Audacity. > > > > > > > > Is there a good reason for the very long latency? > > > > > > > > --James. > > > > > > > > > > > > > > > > > > > > On Sat, 27 Mar 2021 at 07:56, Max Maisel <mm...@po...> > > > > wrote: > > > > > Hi all, > > > > > > > > > > I've polished my Dynamic Compressor effect and think it is > > > > > ready > > > > > now > > > > > for inclusion. > > > > > > > > > > Since my last mail (see > > > > > https://sourceforge.net/p/audacity/mailman/message/37109016/ fo > > > > > r > > > > > the > > > > > initial message), I mainly reworked the user interface accoring > > > > > to > > > > > your > > > > > feedback and fixed several bugs. > > > > > > > > > > The effect is basically realtime capable but for now realtime > > > > > preview > > > > > is disabled after some discussion with Steve because there are > > > > > bad user experience due to high processing latency in the > > > > > effect > > > > > at > > > > > some settings. 
> > > > > > > > > > Main probleme here is, that the RealtimeEffectManager gives the > > > > > effect > > > > > a small block, e.g. 256 samples, and expects the same amount of > > > > > samples > > > > > in return. So if an effect like my compressor has a high > > > > > processing > > > > > latency, there can be an initial silence for over one minute > > > > > which > > > > > the > > > > > users will think is a bug. Any changes made to the effect > > > > > settings > > > > > will > > > > > only take effect after this long time as well. > > > > > > > > > > A possible solution would be latency compensation in the > > > > > RealtimeEffectManager so that the effect can request the > > > > > required > > > > > amount of samples in advance at the beginning or when the user > > > > > changes > > > > > settings. The silence at the beginning can then be discarded so > > > > > that > > > > > users do not notice it. But this is out of scope of the > > > > > compressor > > > > > effect changes. > > > > > > > > > > The lastest revision including CI builds can be found in my > > > > > pull > > > > > request at https://github.com/audacity/audacity/pull/676. > > > > > > > > > > The latest documentation for the effect is can be found at > > > > > https://alphamanual.audacityteam.org/man/Dynamic_Compressor. > > > > > > > > > > I'm looking forward to further feedback and integration. > > > > > > > > > > Max > > > > > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > > > > > audacity-devel mailing list > > > > > aud...@li... > > > > > https://lists.sourceforge.net/lists/listinfo/audacity-devel > > > > _______________________________________________ > > > > audacity-devel mailing list > > > > aud...@li... > > > > https://lists.sourceforge.net/lists/listinfo/audacity-devel > > > > > > > > > > > > > _______________________________________________ > > audacity-devel mailing list > > aud...@li... 
> > https://lists.sourceforge.net/lists/listinfo/audacity-devel > > > > > _______________________________________________ > audacity-devel mailing list > aud...@li... > https://lists.sourceforge.net/lists/listinfo/audacity-devel |
From: Max M. <mm...@po...> - 2021-05-17 15:42:33
|
*bump*

> Hi all,
>
> since Audacity 3.0.2 is released now, I rebased my Compressor effect
> onto the latest master.
>
> Max
>
> On Sunday, 28 March 2021 at 13:46, Max Maisel wrote:
> > Hi James,
> >
> > it's no problem for me to wait for the 3.0.1 release.
> >
> > The latency is mainly caused by various kinds of lookahead. One
> > lookahead is selected directly by the user; another lookahead is
> > implicit in the case of the exponential-fit envelope detector,
> > because the algorithm needs to process the signal backwards in the
> > attack stage.
> >
> > Latencies in the range of minutes are extreme cases, if users
> > select high lookahead times or, in the case of exponential fit,
> > high attack times. But I don't want to limit the ranges of the
> > lookahead and attack-time sliders for offline processing just
> > because of latency in realtime mode.
> >
> > When using the analog-simulation envelope detector together with a
> > low lookahead (a few milliseconds), like a real analog compressor,
> > there is almost no noticeable latency.
> >
> > Max
> >
> > On Saturday, 27 March 2021 at 14:41, James Crook wrote:
> > > Hi Max.
> > >
> > > The previous update I had on this was about the long latency.
> > > Disabling realtime preview is sort of OK, but something seems
> > > wrong. A one-minute latency is colossal. That points to something
> > > underlying being wrong.
> > >
> > > Also, 3.0.1 has now become very much a maintenance release for
> > > addressing the 3.0.0 unitary-project issues. Paul would love to
> > > get some structural changes in that improve independence between
> > > pieces of code, and I am saying no. We'd also like to get
> > > portaudio and FFmpeg library updates in, and I am saying no.
> > >
> > > It is pretty clear that the release after 3.0.1 is going to be
> > > quite soon, so as RM I'm sorry to have to say that your
> > > compressor will have to sit out for 3.0.1.
> > >
> > > 3.0.1 is due to actually release on 17th April, so it's a further
> > > delay of about 3 weeks for you in seeing progress on your
> > > compressor getting in to Audacity.
> > >
> > > Is there a good reason for the very long latency?
> > >
> > > --James.
> > >
> > > On Sat, 27 Mar 2021 at 07:56, Max Maisel <mm...@po...> wrote:
> > > > Hi all,
> > > >
> > > > I've polished my Dynamic Compressor effect and think it is now
> > > > ready for inclusion.
> > > >
> > > > Since my last mail (see
> > > > https://sourceforge.net/p/audacity/mailman/message/37109016/
> > > > for the initial message), I mainly reworked the user interface
> > > > according to your feedback and fixed several bugs.
> > > >
> > > > The effect is basically realtime-capable, but for now realtime
> > > > preview is disabled, after some discussion with Steve, because
> > > > there is a bad user experience due to the effect's high
> > > > processing latency at some settings.
> > > >
> > > > The main problem here is that the RealtimeEffectManager gives
> > > > the effect a small block, e.g. 256 samples, and expects the
> > > > same amount of samples in return. So if an effect like my
> > > > compressor has a high processing latency, there can be an
> > > > initial silence of over one minute, which users will think is
> > > > a bug. Any changes made to the effect settings will only take
> > > > effect after this long time as well.
> > > >
> > > > A possible solution would be latency compensation in the
> > > > RealtimeEffectManager, so that the effect can request the
> > > > required amount of samples in advance at the beginning or when
> > > > the user changes settings. The silence at the beginning can
> > > > then be discarded so that users do not notice it. But this is
> > > > out of scope of the compressor effect changes.
> > > >
> > > > The latest revision, including CI builds, can be found in my
> > > > pull request at https://github.com/audacity/audacity/pull/676.
> > > >
> > > > The latest documentation for the effect can be found at
> > > > https://alphamanual.audacityteam.org/man/Dynamic_Compressor.
> > > >
> > > > I'm looking forward to further feedback and integration.
> > > >
> > > > Max
> > > >
> > > > _______________________________________________
> > > > audacity-devel mailing list
> > > > aud...@li...
> > > > https://lists.sourceforge.net/lists/listinfo/audacity-devel
> > > _______________________________________________
> > > audacity-devel mailing list
> > > aud...@li...
> > > https://lists.sourceforge.net/lists/listinfo/audacity-devel
> _______________________________________________
> audacity-devel mailing list
> aud...@li...
> https://lists.sourceforge.net/lists/listinfo/audacity-devel |
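The compensation scheme Max describes can be sketched with a toy model: feed fixed-size blocks through an effect that has latency, then discard the first `delay` output samples so the user never hears the initial silence. Everything below (the pure-delay stand-in effect, the block size, the helper names) is invented for illustration, not Audacity's actual RealtimeEffectManager API:

```python
import numpy as np

def delayed_effect(block, state, delay):
    """Toy stand-in for an effect with `delay` samples of latency:
    a pure delay line that returns as many samples as it was given."""
    buf = np.concatenate([state, block])
    return buf[:len(block)], buf[len(block):]

def run(blocks, delay, compensate):
    """Feed fixed-size blocks through the effect; with compensation,
    drop the first `delay` output samples (the 'initial silence') so
    the output lines up with the input again."""
    state = np.zeros(delay)
    out = []
    for block in blocks:
        y, state = delayed_effect(block, state, delay)
        out.append(y)
    y = np.concatenate(out)
    return y[delay:] if compensate else y

signal = np.arange(1.0, 1025.0)
blocks = np.split(signal, 4)          # 256-sample blocks, as in the mail
y = run(blocks, delay=100, compensate=True)
# Compensated output matches the input, minus the tail still held
# inside the delay line.
assert np.array_equal(y, signal[:len(y)])
```

In a real implementation the manager would additionally pull `delay` extra input samples up front so that output blocks stay full-size; the sketch only shows the discard step.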
From: Steve F. <ste...@gm...> - 2021-05-17 15:27:54
|
On Mon, 17 May 2021 at 04:59, Leland <ll...@ho...> wrote: > > Steve, have him open the project, duplicate a track, reverse the new track, > save the project, delete the new track, save again. Does the project reduce > in size then? > (Basically, trying to force compaction.) Weird. He tried that, and here's what happened: "The file size did not change. Also, when I duplicated the tracks, reversed them, and saved them, the file size also didn't change. To check my sanity, I looked at the property and the modification time is changing." I've asked him if he could make a ZIP archive of the AUP3 and see what size that is. That may give an indication of whether the extra size is due to data in the database, or empty space. It may also reduce the project to a manageable size to send to us. Steve > > -----Original Message----- > From: Steve Fiddle <ste...@gm...> > Sent: Sunday, May 16, 2021 7:34 PM > To: Audacity-Devel list <aud...@li...> > Subject: [Audacity-devel] Huge AUP3 project issue > > We have a report on the forum of a 200MB project (expected size) having a > saved file size of 1350MB. > > On opening the 1.35GB project and saving a backup copy of the project, the > backup project file is under 200 MB, which is about right for the amount of > audio in the project. > > The forum thread is here: > https://forum.audacityteam.org/viewtopic.php?f=46&t=118141 > > > Steve > > > _______________________________________________ > audacity-devel mailing list > aud...@li... > https://lists.sourceforge.net/lists/listinfo/audacity-devel > > > > _______________________________________________ > audacity-devel mailing list > aud...@li... > https://lists.sourceforge.net/lists/listinfo/audacity-devel |
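Since an AUP3 project is a single SQLite database file, one way to tell whether the extra ~1.15 GB is live data or unreclaimed free space (which compaction would release) is to read SQLite's own page counters. A sketch, with a hypothetical helper name and a placeholder path:

```python
import sqlite3

def aup3_space_report(path):
    """Split an .aup3 file's size into live pages and freelist pages.
    A large freelist means the file mostly failed to compact (VACUUM
    would shrink it); a small one means the bytes really are rows."""
    con = sqlite3.connect(path)
    page_size = con.execute("PRAGMA page_size").fetchone()[0]
    page_count = con.execute("PRAGMA page_count").fetchone()[0]
    freelist = con.execute("PRAGMA freelist_count").fetchone()[0]
    con.close()
    return {
        "file_bytes": page_size * page_count,
        "free_bytes": page_size * freelist,
        "live_bytes": page_size * (page_count - freelist),
    }
```

If `free_bytes` accounts for most of the excess, forcing compaction should help; if not, something is actually storing extra rows in the database.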
From: Robert H. <aar...@gm...> - 2021-05-17 12:30:35
|
Small correction: To get the same amplitude as the original, we also have to take the square root at the end. The correct last line is therefore: (s-sqrt (sum (s-square (h1 *track*)) (s-square (h2 *track*)))) Sorry for that. Robert On 17/05/2021, Robert Hänggi <aar...@gm...> wrote: > On 17/05/2021, Steve Fiddle <ste...@gm...> wrote: >> Interesting stuff. >> The final line of Robert's "squaring two signals and adding them >> together" >> assumes that the selected audio is a mono track. >> For a two channel track it needs to be: >> (sum (s-square (h1 (aref *track* 0))) (s-square (h2 (aref *track* 1)))) >> > We will use multichan-expand in the end so that it works with either > track type, unless the stereo track effect has some linked parameter > e.g. feeding L back into R... > Robert > >> Steve >> >> On Mon, 17 May 2021 at 09:10, Robert Hänggi <aar...@gm...> >> wrote: >>> >>> Here is the plug-in: >>> >>> https://www.dropbox.com/t/AK66Xx2uxpvatEOH >>> It works only on stereo tracks. >>> The idea is to rotate the right channel by 90° at each application. >>> So, at the second time, you have basically inverted the right track >>> and at the fourth, it will be back to the original state (although it >>> won't be exactly the original due to the nature of the IIR/HT with the >>> needed one-sample delay). >>> In other words, the left and right channels represent the real and >>> imaginary parts of the signal, especially if the track is dual-mono >>> (same content in both channels). >>> Let's look at the last line: >>> (vector (h1 (aref *track* 0)) (h2 (aref *track* 1))) >>> The signal that comes from the track is always *track*. >>> If it is mono, you can work with it as it is. >>> If it is stereo, however, it is represented as an array with two >>> elements. Aref references the left (=0) and right (=1) channel. >>> The function h1 creates the original signal, the real part.
Of course, >>> this would normally not be necessary, but we need to adapt it for the >>> delay introduced through the IIR (ideally, the delay would be only >>> half a sample for h2). >>> The function vector again creates an array from the two sounds. >>> OK, let's assume that we work only with a mono track. >>> We could, for instance, use the Hilbert transform to return the >>> amplitude envelope of the signal. >>> This is done by squaring the two signals and adding them together. >>> First, create a mono track with e.g. a chirp or some other signal or >>> file. >>> >>> Copy all the text from the plug-in into the Nyquist prompt. >>> Replace the line with '(vector...' with the following: >>> >>> (defun s-square (sig) >>> (mult sig sig)) >>> (sum (s-square (h1 *track*)) (s-square (h2 *track*))) >>> >>> (you can save this as a preset if you go to the manage button) >>> >>> Apply the effect. You should (almost) hear nothing, because the track >>> now represents the amplitude instead of the signal; in other words, >>> just plenty of DC offset. >>> >>> HTH >>> Robert >>> >>> >>> >>> On 17/05/2021, Federico Miyara <fm...@fc...> wrote: >>> > >>> > Petr, >>> > >>> > There is a free program called SPEAR that you can download here: >>> > >>> > http://www.klingbeil.com/spear/downloads/ >>> > >>> > It models a signal by detecting partials and representing their time >>> > evolution by sine waves. >>> > >>> > I'm not sure it is sufficiently accessible, but it allows several >>> > types >>> > of edits, such as the partial displacement you mention. >>> > >>> > Regards, >>> > >>> > Federico Miyara >>> > >>> > >>> > >>> > On 16/05/2021 02:34, Petr Pařízek via audacity-devel wrote: >>> >> Hello to all of you, >>> >> >>> >> to introduce myself a bit, I'm a piano player, a music composer, and >>> >> a music theorist who is very interested in things regarding digital >>> >> audio effects.
Many years ago, I wrote a lot of small programs for >>> >> the >>> >> old QBasic for DOS and currently I'm planning to start learning >>> >> Nyquist in some near future. >>> >> FYI, I'm blind and that's why, when manipulating with the contents of >>> >> a sound file, I often combine listening to the sound and converting >>> >> the sample values to text, if I want to know more about some tiny >>> >> details (where most people would probably zoom in the waveform). >>> >> >>> >> I'm thinking of a possible new effect which might one day be >>> >> implemented in Audacity. Currently, I'm absolutely unsure whether >>> >> this >>> >> kind of effect could be coded in Nyquist at all or whether the only >>> >> way is to write such complex stuff in C or whether there's yet >>> >> another >>> >> way of doing it which I don't happen to know about. But I'd be >>> >> super-happy if I were told that this thing could indeed be coded in >>> >> Nyquist. >>> >> Therefore, I'll do my best to describe the effect, as some say, "in >>> >> prose", and hope my description is understandable for you all. In >>> >> case >>> >> it isn't, I'm definitely open to clarification. I'll be very happy to >>> >> know your opinions about what might be the best way to code this. >>> >> I'd like to stress that I'm not intending this effect for real-time >>> >> performance at all, even though the description of the effect itself >>> >> might make you think I am. I'm not even suggesting something like a >>> >> "Preview" facility because I don't want the processing speed to be of >>> >> any importance here. In every case, I'm willing to sacrifice speed >>> >> over precision, even if the algorithm eventually turned out to be >>> >> super-slow. 
>>> >> Although I'd love to have such a thing working one day, I'm even >>> >> ready >>> >> for the possibility that this effect might never be implemented, if I >>> >> realize it would be too difficult for me to code (honestly, I've >>> >> never >>> >> coded in anything other than QBasic or briefly in Turbo Pascal, which >>> >> would probably require me to learn C all from scratch if C turned out >>> >> to be inevitable). >>> >> >>> >> - The core part of the algorithm is a frequency shifter [1]. Unlike a >>> >> pitch shifter, whose aim is to alter all the frequencies by a >>> >> constant >>> >> ratio, a frequency shifter alters all the frequencies by a constant >>> >> difference. >>> >> - The corresponding dialog box would offer the following parameters: >>> >> 1) the amount by which the frequencies should be shifted, given in >>> >> Hz, >>> >> which could be either positive or negative; >>> >> 2) two volume settings, namely for "dry" and "wet"; >>> >> 3) the amount by which the wet signal is fed back into the input, >>> >> given as a value that is less than 100% and more than -100%. >>> >> 4) the amount by which the feedback is to be delayed, probably given >>> >> in ms, which should always be given as a positive number; this >>> >> parameter has no effect if feedback is set to 0. 
>>> >> >>> >> [1] The actual realization would go like this: >>> >> - A) We store two intermediate copies of our original signal, label >>> >> them "IP" and "Q", and modify them as described in [2], >>> >> - B) Each of the modified intermediate signals is separately >>> >> amplitude-modulated: >>> >> IP is multiplied by a cosine wave of the given frequency, >>> >> Q is multiplied by a sine wave of the same frequency, >>> >> - C) we sum the two products to get the frequency-shifted signal, >>> >> - D) this signal, multiplied by the "Wet" coefficient, is sent to the >>> >> output, together with the original signal multiplied by the "Dry" >>> >> coefficient, >>> >> - E) the same frequency-shifted signal, this time multiplied by the >>> >> "Feedback" coefficient and delayed by "Delay" ms, is sent back to the >>> >> input. >>> >> >>> >> [2] We make a filter that works like an inverted Hilbert transform, >>> >> for which reason I'll call it the IHT. The length of the filter will >>> >> probably be hard-coded and unknown to the user. The longer the >>> >> filter, >>> >> the closer the approximation gets to a proper IHT. >>> >> - For a positive integer l, the filter length should be either l*4 or >>> >> l*4-1 samples. Practically, the two make no difference because every >>> >> other coefficient is equal to zero. >>> >> - Even though the filter is l*4 samples long, our sample position >>> >> indexes, instead of going from 0 to l*4-1, should go from -2*l to >>> >> +2*l-1. Let's call them k. Similarly, for a filter of length 4*l-1, >>> >> the sample position indexes k would go from -(2*l-1) to +2*l-1, i.e. >>> >> from -2*l + 1 to 2*l - 1. >>> >> The actual values of the filter coefficients meet the following rule: >>> >> - For all even numbers k, the coefficient c(k) is equal to zero. >>> >> - For all odd numbers k, the coefficient c(k) is equal to -2/(k*π). >>> >> - Next, we convolve our original signal with this filter and store >>> >> the >>> >> result into an intermediate buffer, which may be called Q (as in >>> >> "quadrature"). >>> >> - Then, depending on whether our filter length is even or odd, we >>> >> delay our original signal either by 2*l or by 2*l-1 samples and store >>> >> this delayed copy into another intermediate buffer, which we may call >>> >> IP (meaning "in phase"). >>> >> >>> >> You may be wondering why I insist on using an IHT instead of a proper >>> >> HT, or on multiplying IP by a cosine wave rather than a sine wave. The >>> >> answers are: >>> >> - If I choose the amount of frequency shifting to be zero and do it >>> >> the way I've described, the supposed frequency-shifted signal will >>> >> only be delayed by "Delay" ms, but in all other aspects it will be >>> >> identical to the original sound -- i.e. there won't be any additional >>> >> phase shifts or delays. In contrast, if IP were multiplied by a sine >>> >> wave and Q were multiplied by a cosine wave, then the supposed >>> >> frequency-shifted signal (with a zero frequency shift) would >>> >> correspond to the original signal not just delayed but also >>> >> Hilbert-transformed. This doesn't seem like an issue if the feedback >>> >> is set to zero. However, once I set the feedback to a non-zero value, >>> >> this thing starts to matter significantly.
>>> >> >>> >> Petr >>> >> >>> >> >>> >> >>> > >>> > >>> > >>> > -- >>> > El software de antivirus Avast ha analizado este correo electrónico en >>> > busca >>> > de virus. >>> > https://www.avast.com/antivirus >>> > >>> >>> >>> _______________________________________________ >>> audacity-devel mailing list >>> aud...@li... >>> https://lists.sourceforge.net/lists/listinfo/audacity-devel >> >> >> _______________________________________________ >> audacity-devel mailing list >> aud...@li... >> https://lists.sourceforge.net/lists/listinfo/audacity-devel >> > |
From: Robert H. <aar...@gm...> - 2021-05-17 11:29:58
|
On 17/05/2021, Steve Fiddle <ste...@gm...> wrote: > Interesting stuff. > The final line of Robert's "squaring two signals and adding them together" > assumes that the selected audio is a mono track. > For a two channel track it needs to be: > (sum (s-square (h1 (aref *track* 0))) (s-square (h2 (aref *track* 1)))) > We will use multichan-expand in the end so that it works with either track type, unless the stereo track effect has some linked parameter e.g. feeding L back in to R... Robert > Steve > > On Mon, 17 May 2021 at 09:10, Robert Hänggi <aar...@gm...> wrote: >> >> Here is the plug-in: >> >> https://www.dropbox.com/t/AK66Xx2uxpvatEOH >> It works only on stereo tracks. >> The idea is to rotate the right channel by 90° at each application. >> So, at the second time, you have basically inverted the right track >> and at the forth, it will be back to the original state (although it >> won't be exactly the original due to the nature of the IIR/HT with the >> needed one sample delay). >> In other words, the left and right channels represent the real and >> imaginary part of the signal, especially if the track is dual-mono >> (same content in both channels). >> Let's look at the last line: >> (vector (h1 (aref *track* 0)) (h2 (aref *track* 1))) >> The signal that comes from the track is always *track*. >> If it is mono, you can work with it as it is. >> If it is stereo however, it is represented as an array with two >> elements. Aref references the left (=0) and right (=1) channel. >> The function h1 creates the original signal, the real part. Of course, >> this would normally not be necessary but we need to adapt it for the >> delay introduced through the IIR (ideally, the delay would be only >> half a sample for h2). >> the function vector creates again an array from the two sounds. >> OK, let's assume that we work only with a mono track. >> We could for instance use the Hilbert transform to return the >> amplitude envelope of the signal. 
>> This is done by squaring the two signals and adding them together. >> First, create a mono track with e.g. a chirp or some other signal or file. >> >> Copy all the text from the plug-in into the Nyquist prompt. >> replace the line with '(vector...' with the following: >> >> (defun s-square (sig) >> (mult sig sig)) >> (sum (s-square (h1 *track*)) (s-square (h2 *track*))) >> >> (you can save this as a preset if you go to the manage button) >> >> Apply the effect. You should (almost) hear nothing because the track >> represents now the amplitude instead of the signal, in other words, >> just plenty of DC offset. >> >> HTH >> Robert >> >> >> >> On 17/05/2021, Federico Miyara <fm...@fc...> wrote: >> > >> > Petr, >> > >> > There is a free program called SPEAR that you can download here: >> > >> > http://www.klingbeil.com/spear/downloads/ >> > <http://www.klingbeil.com/spear/downloads/> >> > >> > It models a signal by detecting partials and representing their time >> > evolution by sine waves >> > >> > I'm not sure it is sufficiently accessible, but it allows several types >> > of edits such as the partial displacemente you mention. >> > >> > Regards, >> > >> > Federico Miyara >> > >> > >> > >> > On 16/05/2021 02:34, Petr Pařízek via audacity-devel wrote: >> >> Hello to all of you, >> >> >> >> to introduce myself a bit, I'm a piano player and a music composer and >> >> a music theorist who is very interested in things regarding digital >> >> audio effects. Many years ago, I wrote a lot of small programs for the >> >> old QBasic for DOS and currently I'm planning to start learning >> >> Nyquist in some near future. >> >> FYI, I'm blind and that's why, when manipulating with the contents of >> >> a sound file, I often combine listening to the sound and converting >> >> the sample values to text, if I want to know more about some tiny >> >> details (where most people would probably zoom in the waveform). 
>> >> >> >> I'm thinking of a possible new effect which might one day be >> >> implemented in Audacity. Currently, I'm absolutely unsure whether this >> >> kind of effect could be coded in Nyquist at all or whether the only >> >> way is to write such complex stuff in C or whether there's yet another >> >> way of doing it which I don't happen to know about. But I'd be >> >> super-happy if I were told that this thing could indeed be coded in >> >> Nyquist. >> >> Therefore, I'll do my best to describe the effect, as some say, "in >> >> prose", and hope my description is understandable for you all. In case >> >> it isn't, I'm definitely open to clarification. I'll be very happy to >> >> know your opinions about what might be the best way to code this. >> >> I'd like to stress that I'm not intending this effect for real-time >> >> performance at all, even though the description of the effect itself >> >> might make you think I am. I'm not even suggesting something like a >> >> "Preview" facility because I don't want the processing speed to be of >> >> any importance here. In every case, I'm willing to sacrifice speed >> >> over precision, even if the algorithm eventually turned out to be >> >> super-slow. >> >> Although I'd love to have such a thing working one day, I'm even ready >> >> for the possibility that this effect might never be implemented, if I >> >> realize it would be too difficult for me to code (honestly, I've never >> >> coded in anything other than QBasic or briefly in Turbo Pascal, which >> >> would probably require me to learn C all from scratch if C turned out >> >> to be inevitable). >> >> >> >> - The core part of the algorithm is a frequency shifter [1]. Unlike a >> >> pitch shifter, whose aim is to alter all the frequencies by a constant >> >> ratio, a frequency shifter alters all the frequencies by a constant >> >> difference. 
>> >> - The corresponding dialog box would offer the following parameters: >> >> 1) the amount by which the frequencies should be shifted, given in Hz, >> >> which could be either positive or negative; >> >> 2) two volume settings, namely for "dry" and "wet"; >> >> 3) the amount by which the wet signal is fed back into the input, >> >> given as a value that is less than 100% and more than -100%. >> >> 4) the amount by which the feedback is to be delayed, probably given >> >> in ms, which should always be given as a positive number; this >> >> parameter has no effect if feedback is set to 0. >> >> >> >> [1] The actual realization would go like this: >> >> - A) We store two intermediate copies of our original signal, label >> >> them "IP" and "Q", and modify them as described in [2], >> >> - B) Each of the modified intermediate signals is separately >> >> amplitude-modulated: >> >> IP is multiplied by a cosine wave of the given frequency, >> >> Q is multiplied by a sine wave of the same frequency, >> >> - C) we sum the two products to get the frequency-shifted signal, >> >> - D) this signal, multiplied by the "Wet" coefficient, is sent to the >> >> output, together with the original signal multiplied by the "Dry" >> >> coefficient, >> >> - E) the same frequency-shifted signal, this time multiplied by the >> >> "Feedback" coefficient and delayed by "Delay" ms, is sent back to the >> >> input. >> >> >> >> [2] We make a filter that works like an inverted Hilbert transform, >> >> for which reason I'll call it the IHT. The length of the filter will >> >> probably be hard-coded and unknown to the user. The longer the filter, >> >> the closer the approximation gets to a proper IHT. >> >> - For a positive integer l, the filter length should be either l*4 or >> >> l*4-1 samples. Practically, the two make no difference because every >> >> other coefficient is equal to zero. 
>> >> - Even though the filter is l*4 samples long, our sample position >> >> indexes, instead of going from 0 to l*4-1, should go from -2*l to >> >> +2*l-1. Let's call them k. Similarly, for a filter of length 4*l-1, >> >> the sample position indexes k would go from -(2*l-1) to +2*l-1, i.e. >> >> from -2*l + 1 to 2*l - 1. >> >> The actual values of the filter coefficients meet the following rule: >> >> - For all even numbers k, the coefficient c(k) is equal to zero. >> >> - For all odd numbers k, the coefficient c(k) is equal to -2/(k*π). >> >> - Next, we convolve our original signal with this filter and store the >> >> result into an intermediate buffer, which may be called Q (as in >> >> "quadrature"). >> >> - Then, depending on whether our filter length is even or odd, we >> >> delay our original signal either by 2*l or by 2*l-1 samples and store >> >> this delayed copy into another intermediate buffer, which we may call >> >> IP (meaning "in phase"). >> >> >> >> You may be wondering why I insist on using an IHT instead of a proper >> >> HT or on multiplying IP by a cosine wave rather than a sine wave. The >> >> answers are: >> >> - If I choose the amount of frequency shifting to be zero and do it >> >> the way I've described, the supposed frequency-shifted signal will >> >> only be delayed by "Delay" ms but in all other aspects it will be >> >> identical to the original sound -- i.e. there won't be any additional >> >> phase shifts or delays. In contrast, if IP were multiplied by a sine >> >> wave and Q were multiplied by a cosine wave, then the supposed >> >> frequency-shifted signal (with a zero frequency shift) would >> >> correspond to the original signal not just delayed but also >> >> Hilbert-transformed. This doesn't seem like an issue if the feedback >> >> is set to zero. However, once I set the feedback to a non-zero value, >> >> this thing starts to matter significantly. >> >> - When I use an IHT, then I can get the desired frequency shift by >> >> adding the two amplitude-modulated signals. In contrast, if I used a >> >> proper HT, adding them would give me the opposite frequency shift and >> >> to get the desired one, I would have to subtract them. >> >> >> >> Okay, that's it. Sorry for such a long post but I didn't want to miss >> >> any important details. >> >> >> >> Thanks for your comments or suggestions. >> >> >> >> Petr >> >> >> >> >> > >> > >> > -- >> > El software de antivirus Avast ha analizado este correo electrónico en >> > busca >> > de virus. >> > https://www.avast.com/antivirus >> > >> >> >> _______________________________________________ >> audacity-devel mailing list >> aud...@li... >> https://lists.sourceforge.net/lists/listinfo/audacity-devel > > > _______________________________________________ > audacity-devel mailing list > aud...@li... > https://lists.sourceforge.net/lists/listinfo/audacity-devel > |
From: Steve F. <ste...@gm...> - 2021-05-17 09:37:39
|
Interesting stuff. The final line of Robert's "squaring two signals and adding them together" assumes that the selected audio is a mono track. For a two channel track it needs to be: (sum (s-square (h1 (aref *track* 0))) (s-square (h2 (aref *track* 1)))) Steve On Mon, 17 May 2021 at 09:10, Robert Hänggi <aar...@gm...> wrote: > > Here is the plug-in: > > https://www.dropbox.com/t/AK66Xx2uxpvatEOH > It works only on stereo tracks. > The idea is to rotate the right channel by 90° at each application. > So, at the second time, you have basically inverted the right track > and at the forth, it will be back to the original state (although it > won't be exactly the original due to the nature of the IIR/HT with the > needed one sample delay). > In other words, the left and right channels represent the real and > imaginary part of the signal, especially if the track is dual-mono > (same content in both channels). > Let's look at the last line: > (vector (h1 (aref *track* 0)) (h2 (aref *track* 1))) > The signal that comes from the track is always *track*. > If it is mono, you can work with it as it is. > If it is stereo however, it is represented as an array with two > elements. Aref references the left (=0) and right (=1) channel. > The function h1 creates the original signal, the real part. Of course, > this would normally not be necessary but we need to adapt it for the > delay introduced through the IIR (ideally, the delay would be only > half a sample for h2). > the function vector creates again an array from the two sounds. > OK, let's assume that we work only with a mono track. > We could for instance use the Hilbert transform to return the > amplitude envelope of the signal. > This is done by squaring the two signals and adding them together. > First, create a mono track with e.g. a chirp or some other signal or file. > > Copy all the text from the plug-in into the Nyquist prompt. > replace the line with '(vector...' 
with the following: > > (defun s-square (sig) > (mult sig sig)) > (sum (s-square (h1 *track*)) (s-square (h2 *track*))) > > (you can save this as a preset if you go to the manage button) > > Apply the effect. You should (almost) hear nothing because the track > represents now the amplitude instead of the signal, in other words, > just plenty of DC offset. > > HTH > Robert > > > > On 17/05/2021, Federico Miyara <fm...@fc...> wrote: > > > > Petr, > > > > There is a free program called SPEAR that you can download here: > > > > http://www.klingbeil.com/spear/downloads/ > > <http://www.klingbeil.com/spear/downloads/> > > > > It models a signal by detecting partials and representing their time > > evolution by sine waves > > > > I'm not sure it is sufficiently accessible, but it allows several types > > of edits such as the partial displacemente you mention. > > > > Regards, > > > > Federico Miyara > > > > > > > > On 16/05/2021 02:34, Petr Pařízek via audacity-devel wrote: > >> Hello to all of you, > >> > >> to introduce myself a bit, I'm a piano player and a music composer and > >> a music theorist who is very interested in things regarding digital > >> audio effects. Many years ago, I wrote a lot of small programs for the > >> old QBasic for DOS and currently I'm planning to start learning > >> Nyquist in some near future. > >> FYI, I'm blind and that's why, when manipulating with the contents of > >> a sound file, I often combine listening to the sound and converting > >> the sample values to text, if I want to know more about some tiny > >> details (where most people would probably zoom in the waveform). > >> > >> I'm thinking of a possible new effect which might one day be > >> implemented in Audacity. Currently, I'm absolutely unsure whether this > >> kind of effect could be coded in Nyquist at all or whether the only > >> way is to write such complex stuff in C or whether there's yet another > >> way of doing it which I don't happen to know about. 
But I'd be > >> super-happy if I were told that this thing could indeed be coded in > >> Nyquist. > >> Therefore, I'll do my best to describe the effect, as some say, "in > >> prose", and hope my description is understandable for you all. In case > >> it isn't, I'm definitely open to clarification. I'll be very happy to > >> know your opinions about what might be the best way to code this. > >> I'd like to stress that I'm not intending this effect for real-time > >> performance at all, even though the description of the effect itself > >> might make you think I am. I'm not even suggesting something like a > >> "Preview" facility because I don't want the processing speed to be of > >> any importance here. In every case, I'm willing to sacrifice speed > >> over precision, even if the algorithm eventually turned out to be > >> super-slow. > >> Although I'd love to have such a thing working one day, I'm even ready > >> for the possibility that this effect might never be implemented, if I > >> realize it would be too difficult for me to code (honestly, I've never > >> coded in anything other than QBasic or briefly in Turbo Pascal, which > >> would probably require me to learn C all from scratch if C turned out > >> to be inevitable). > >> > >> - The core part of the algorithm is a frequency shifter [1]. Unlike a > >> pitch shifter, whose aim is to alter all the frequencies by a constant > >> ratio, a frequency shifter alters all the frequencies by a constant > >> difference. > >> - The corresponding dialog box would offer the following parameters: > >> 1) the amount by which the frequencies should be shifted, given in Hz, > >> which could be either positive or negative; > >> 2) two volume settings, namely for "dry" and "wet"; > >> 3) the amount by which the wet signal is fed back into the input, > >> given as a value that is less than 100% and more than -100%. 
> >> 4) the amount by which the feedback is to be delayed, probably given > >> in ms, which should always be given as a positive number; this > >> parameter has no effect if feedback is set to 0. > >> > >> [1] The actual realization would go like this: > >> - A) We store two intermediate copies of our original signal, label > >> them "IP" and "Q", and modify them as described in [2], > >> - B) Each of the modified intermediate signals is separately > >> amplitude-modulated: > >> IP is multiplied by a cosine wave of the given frequency, > >> Q is multiplied by a sine wave of the same frequency, > >> - C) we sum the two products to get the frequency-shifted signal, > >> - D) this signal, multiplied by the "Wet" coefficient, is sent to the > >> output, together with the original signal multiplied by the "Dry" > >> coefficient, > >> - E) the same frequency-shifted signal, this time multiplied by the > >> "Feedback" coefficient and delayed by "Delay" ms, is sent back to the > >> input. > >> > >> [2] We make a filter that works like an inverted Hilbert transform, > >> for which reason I'll call it the IHT. The length of the filter will > >> probably be hard-coded and unknown to the user. The longer the filter, > >> the closer the approximation gets to a proper IHT. > >> - For a positive integer l, the filter length should be either l*4 or > >> l*4-1 samples. Practically, the two make no difference because every > >> other coefficient is equal to zero. > >> - Even though the filter is l*4 samples long, our sample position > >> indexes, instead of going from 0 to l*4-1, should go from -2*l to > >> +2*l-1. Let's call them k. Similarly, for a filter of length 4*l-1, > >> the sample position indexes k would go from -(2*n-1) to +2*n-1, i.e. > >> from -2*n + 1 to 2*n - 1. > >> The actual values of the filter coefficients meet the following rule: > >> - For all even numbers k, the coefficient c(k) is equal to zero. 
> >> - For all odd numbers k, the coefficient c(k) is equal to -2/(k*π). > >> - Next, we convolve our original signal with this filter and store the > >> result into an intermediate buffer, which may be called Q (as in > >> "quadrature"). > >> - Then, depending on whether our filter length is even or odd, we > >> delay our original signal either by 2*n or by 2*n-1 samples and store > >> this delayed copy into another intermediate buffer, which we may call > >> IP (meaning "in phase"). > >> > >> You may be wondering why I insist on using an IHT instead of a proper > >> HT or on multiplying IP by a cosine wave rather than a sine wave. The > >> answers are: > >> - If I choose the amount of frequency shifting to be zero and do it > >> the way I've described, the supposed frequency-shifted signal will > >> only be delayed by "Delay" ms but in all other aspects it will be > >> identical to the original sound -- i.e. there won't be any additional > >> phase shifts or delays. In contrast, if IP were multiplied by a sine > >> wave and Q were multiplied by a cosine wave, then the supposed > >> frequency-shifted signal (with a zero frequency shift) would > >> correspond to the original signal not just delayed but also > >> Hilbert-transformed. This doesn't seem like an issue if the feedback > >> is set to zero. However, once I set the feedback to a non-zero value, > >> this thing starts to matter significantly. > >> - When I use an IHT, then I can get the desired frequency shift by > >> adding the two amplitude-modulated signals. In contrast, if I used a > >> proper HT, adding them would give me the opposite frequency shift and > >> to get the desired one, I would have to subtract them. > >> > >> Okay, that's it. Sorry for such a long post but I didn't want to miss > >> any important details. > >> > >> Thanks for your comments or suggestions. 
> >> > >> Petr > >> > >> > >> > > > > > > > > -- > > Avast antivirus software has scanned this e-mail for viruses. > > https://www.avast.com/antivirus > > > _______________________________________________ > audacity-devel mailing list > aud...@li... > https://lists.sourceforge.net/lists/listinfo/audacity-devel |
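The quadrature-filter and modulation steps described in the post above can be sketched in a few lines of NumPy. This is only a sketch of the dry path, using the c(k) = -2/(k*π) coefficients and the 2*l group-delay trim from the post; the wet/dry mix and the feedback loop are omitted, and the function names (`quadrature_fir`, `frequency_shift`) are mine:

```python
import numpy as np

def quadrature_fir(l):
    """The post's IHT approximation: c(k) = -2/(k*pi) for odd k,
    0 for even k, with k running from -2*l to 2*l - 1 (length 4*l)."""
    k = np.arange(-2 * l, 2 * l)
    c = np.zeros(len(k))
    odd = k % 2 != 0
    c[odd] = -2.0 / (k[odd] * np.pi)
    return c, 2 * l  # coefficients, and group delay (the index of k = 0)

def frequency_shift(x, shift_hz, sr, l=256):
    """Q = IHT(x); IP = x (re-aligned by trimming the convolution
    instead of delaying IP); out = IP*cos + Q*sin, which should move
    every frequency up by shift_hz (down for negative shift_hz)."""
    c, delay = quadrature_fir(l)
    q = np.convolve(x, c)[delay:delay + len(x)]  # quadrature branch, aligned
    w = 2 * np.pi * shift_hz * np.arange(len(x)) / sr
    return x * np.cos(w) + q * np.sin(w)
```

A quick way to check the direction of the shift is to feed in a pure cosine and look at the FFT peak of the output; with a positive shift the peak should move up by the requested number of Hz, within the accuracy of the truncated filter.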
From: Robert H. <aar...@gm...> - 2021-05-17 08:09:16
|
Here is the plug-in: https://www.dropbox.com/t/AK66Xx2uxpvatEOH It works only on stereo tracks. The idea is to rotate the right channel by 90° at each application. So, after the second application, you have basically inverted the right track, and after the fourth, it will be back to the original state (although it won't be exactly the original due to the nature of the IIR/HT with the needed one-sample delay). In other words, the left and right channels represent the real and imaginary parts of the signal, especially if the track is dual-mono (same content in both channels). Let's look at the last line: (vector (h1 (aref *track* 0)) (h2 (aref *track* 1))) The signal that comes from the track is always *track*. If it is mono, you can work with it as it is. If it is stereo, however, it is represented as an array with two elements; aref references the left (=0) and right (=1) channel. The function h1 creates the original signal, the real part. Of course, this would normally not be necessary, but we need to adapt it for the delay introduced through the IIR (ideally, the delay would be only half a sample for h2). The function vector again creates an array from the two sounds. OK, let's assume that we work only with a mono track. We could, for instance, use the Hilbert transform to return the amplitude envelope of the signal. This is done by squaring the two signals and adding them together. First, create a mono track with e.g. a chirp or some other signal or file. Copy all the text from the plug-in into the Nyquist prompt. Replace the line with '(vector...' with the following: (defun s-square (sig) (mult sig sig)) (sum (s-square (h1 *track*)) (s-square (h2 *track*))) (You can save this as a preset via the Manage button.) Apply the effect. You should (almost) hear nothing, because the track now represents the amplitude instead of the signal, in other words, just plenty of DC offset. 
HTH Robert On 17/05/2021, Federico Miyara <fm...@fc...> wrote: > > Petr, > > There is a free program called SPEAR that you can download here: > > http://www.klingbeil.com/spear/downloads/ > <http://www.klingbeil.com/spear/downloads/> > > It models a signal by detecting partials and representing their time > evolution by sine waves > > I'm not sure it is sufficiently accessible, but it allows several types > of edits such as the partial displacemente you mention. > > Regards, > > Federico Miyara > > > > On 16/05/2021 02:34, Petr Pařízek via audacity-devel wrote: >> Hello to all of you, >> >> to introduce myself a bit, I'm a piano player and a music composer and >> a music theorist who is very interested in things regarding digital >> audio effects. Many years ago, I wrote a lot of small programs for the >> old QBasic for DOS and currently I'm planning to start learning >> Nyquist in some near future. >> FYI, I'm blind and that's why, when manipulating with the contents of >> a sound file, I often combine listening to the sound and converting >> the sample values to text, if I want to know more about some tiny >> details (where most people would probably zoom in the waveform). >> >> I'm thinking of a possible new effect which might one day be >> implemented in Audacity. Currently, I'm absolutely unsure whether this >> kind of effect could be coded in Nyquist at all or whether the only >> way is to write such complex stuff in C or whether there's yet another >> way of doing it which I don't happen to know about. But I'd be >> super-happy if I were told that this thing could indeed be coded in >> Nyquist. >> Therefore, I'll do my best to describe the effect, as some say, "in >> prose", and hope my description is understandable for you all. In case >> it isn't, I'm definitely open to clarification. I'll be very happy to >> know your opinions about what might be the best way to code this. 
>> I'd like to stress that I'm not intending this effect for real-time >> performance at all, even though the description of the effect itself >> might make you think I am. I'm not even suggesting something like a >> "Preview" facility because I don't want the processing speed to be of >> any importance here. In every case, I'm willing to sacrifice speed >> over precision, even if the algorithm eventually turned out to be >> super-slow. >> Although I'd love to have such a thing working one day, I'm even ready >> for the possibility that this effect might never be implemented, if I >> realize it would be too difficult for me to code (honestly, I've never >> coded in anything other than QBasic or briefly in Turbo Pascal, which >> would probably require me to learn C all from scratch if C turned out >> to be inevitable). >> >> - The core part of the algorithm is a frequency shifter [1]. Unlike a >> pitch shifter, whose aim is to alter all the frequencies by a constant >> ratio, a frequency shifter alters all the frequencies by a constant >> difference. >> - The corresponding dialog box would offer the following parameters: >> 1) the amount by which the frequencies should be shifted, given in Hz, >> which could be either positive or negative; >> 2) two volume settings, namely for "dry" and "wet"; >> 3) the amount by which the wet signal is fed back into the input, >> given as a value that is less than 100% and more than -100%. >> 4) the amount by which the feedback is to be delayed, probably given >> in ms, which should always be given as a positive number; this >> parameter has no effect if feedback is set to 0. 
>> >> [1] The actual realization would go like this: >> - A) We store two intermediate copies of our original signal, label >> them "IP" and "Q", and modify them as described in [2], >> - B) Each of the modified intermediate signals is separately >> amplitude-modulated: >> IP is multiplied by a cosine wave of the given frequency, >> Q is multiplied by a sine wave of the same frequency, >> - C) we sum the two products to get the frequency-shifted signal, >> - D) this signal, multiplied by the "Wet" coefficient, is sent to the >> output, together with the original signal multiplied by the "Dry" >> coefficient, >> - E) the same frequency-shifted signal, this time multiplied by the >> "Feedback" coefficient and delayed by "Delay" ms, is sent back to the >> input. >> >> [2] We make a filter that works like an inverted Hilbert transform, >> for which reason I'll call it the IHT. The length of the filter will >> probably be hard-coded and unknown to the user. The longer the filter, >> the closer the approximation gets to a proper IHT. >> - For a positive integer l, the filter length should be either l*4 or >> l*4-1 samples. Practically, the two make no difference because every >> other coefficient is equal to zero. >> - Even though the filter is l*4 samples long, our sample position >> indexes, instead of going from 0 to l*4-1, should go from -2*l to >> +2*l-1. Let's call them k. Similarly, for a filter of length 4*l-1, >> the sample position indexes k would go from -(2*n-1) to +2*n-1, i.e. >> from -2*n + 1 to 2*n - 1. >> The actual values of the filter coefficients meet the following rule: >> - For all even numbers k, the coefficient c(k) is equal to zero. >> - For all odd numbers k, the coefficient c(k) is equal to -2/)k*π). >> - Next, we convolve our original signal with this filter and store the >> result into an intermediate buffer, which may be called Q (as in >> "quadrature"). 
>> - Then, depending on whether our filter length is even or odd, we >> delay our original signal either by 2*n or by 2*n-1 samples and store >> this delayed copy into another intermediate buffer, which we may call >> IP (meaning "in phase"). >> >> You may be wondering why I insist on using an IHT instead of a proper >> HT or on multiplying IP by a cosine wave rather than a sine wave. The >> answers are: >> - If I choose the amount of frequency shifting to be zero and do it >> the way I've described, the supposed frequency-shifted signal will >> only be delayed by "Delay" ms but in all other aspects it will be >> identical to the original sound -- i.e. there won't be any additional >> phase shifts or delays. In contrast, if IP were multiplied by a sine >> wave and Q were multiplied by a cosine wave, then the supposed >> frequency-shifted signal (with a zero frequency shift) would >> correspond to the original signal not just delayed but also >> Hilbert-transformed. This doesn't seem like an issue if the feedback >> is set to zero. However, once I set the feedback to a non-zero value, >> this thing starts to matter significantly. >> - When I use an IHT, then I can get the desired frequency shift by >> adding the two amplitude-modulated signals. In contrast, if I used a >> proper HT, adding them would give me the opposite frequency shift and >> to get the desired one, I would have to subtract them. >> >> Okay, that's it. Sorry for such a long post but I didn't want to miss >> any important details. >> >> Thanks for your comments or suggestions. >> >> Petr >> >> >> > > > > -- > El software de antivirus Avast ha analizado este correo electrónico en busca > de virus. > https://www.avast.com/antivirus > |
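Robert's envelope trick above (square the in-phase and quadrature signals and add them) can also be tried outside Nyquist. The NumPy sketch below builds the quadrature pair with the standard FFT-based analytic-signal method rather than Robert's IIR, so it illustrates the idea, not his plug-in; the names `analytic_pair` and `envelope` are mine:

```python
import numpy as np

def analytic_pair(x):
    """In-phase/quadrature pair via the FFT analytic-signal method:
    zero the negative frequencies, double the positive ones."""
    n = len(x)
    w = np.zeros(n)
    w[0] = 1.0
    w[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        w[n // 2] = 1.0  # Nyquist bin is kept once for even n
    z = np.fft.ifft(np.fft.fft(x) * w)
    return z.real, z.imag  # stand-ins for Robert's h1 and h2

def envelope(x):
    """Square the two signals, sum them, take the root: the
    instantaneous amplitude Robert describes (he keeps the square)."""
    i, q = analytic_pair(x)
    return np.sqrt(i * i + q * q)
```

Applied to an amplitude-modulated sine, `envelope` should return the modulator itself, which is exactly the "plenty of DC offset" result Robert predicts for a steady tone.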
From: Petr P. <pet...@ya...> - 2021-05-17 06:59:38
|
Federico wrote: > It models a signal by detecting partials and representing their time evolution by sine waves Thanks for the information, but here this is not necessary. The frequency-shifting part can be done entirely in the time domain. What is necessary, however, is to have a sufficiently long buffer for the filter that does the Hilbert transform. It's a bit like reverberating a signal that has already been reverberated. Imagine that you run your signal through a reverb effect and feed the result back into the input, delayed by, let's say, a few seconds. So rather than having something like a repeating echo where every repetition sounds softer than the previous one, you have repeating reverberation where each repetition stresses the reverb characteristics more than the previous one. This frequency-shifting stuff is similar, except that the delays can be as small as a fraction of a millisecond. Once we can answer the question of how we could achieve the repeating reverb that I've just described, we should be able to answer the question of how we can do the repeated frequency shifter. Petr -- This e-mail has been checked for viruses by AVG antivirus software. http://www.avg.cz |
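The feedback structure Petr describes can be sketched independently of the effect inside the loop. Assuming the simplest case, where the in-loop effect is the identity (no reverb, no frequency shift), the recursion collapses to a repeating echo; the real effect would process the signal once per trip around the loop. This is a sketch only, with the function name mine:

```python
import numpy as np

def delayed_feedback(x, delay_samples, g):
    """y[n] = x[n] + g * y[n - delay_samples]: the output is fed back
    into the input after a fixed delay, scaled by the feedback
    coefficient g (|g| < 1 for stability). With an identity effect in
    the loop this is a feedback comb filter / repeating echo."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = x[n]
        if n >= delay_samples:
            y[n] += g * y[n - delay_samples]
    return y
```

Feeding in a single impulse makes the behaviour visible: copies appear at multiples of the delay, each scaled by another factor of g, which is the "each repetition stresses the effect more" structure once a real effect sits in the loop.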
From: Federico M. <fm...@fc...> - 2021-05-17 05:28:34
|
Petr, There is a free program called SPEAR that you can download here: http://www.klingbeil.com/spear/downloads/ It models a signal by detecting partials and representing their time evolution by sine waves. I'm not sure it is sufficiently accessible, but it allows several types of edits such as the partial displacement you mention. Regards, Federico Miyara On 16/05/2021 02:34, Petr Pařízek via audacity-devel wrote: > Hello to all of you, > > to introduce myself a bit, I'm a piano player and a music composer and > a music theorist who is very interested in things regarding digital > audio effects. Many years ago, I wrote a lot of small programs for the > old QBasic for DOS and currently I'm planning to start learning > Nyquist in some near future. > FYI, I'm blind and that's why, when manipulating with the contents of > a sound file, I often combine listening to the sound and converting > the sample values to text, if I want to know more about some tiny > details (where most people would probably zoom in the waveform). > > I'm thinking of a possible new effect which might one day be > implemented in Audacity. Currently, I'm absolutely unsure whether this > kind of effect could be coded in Nyquist at all or whether the only > way is to write such complex stuff in C or whether there's yet another > way of doing it which I don't happen to know about. But I'd be > super-happy if I were told that this thing could indeed be coded in > Nyquist. > Therefore, I'll do my best to describe the effect, as some say, "in > prose", and hope my description is understandable for you all. In case > it isn't, I'm definitely open to clarification. I'll be very happy to > know your opinions about what might be the best way to code this. 
I'm not even suggesting something like a > "Preview" facility because I don't want the processing speed to be of > any importance here. In every case, I'm willing to sacrifice speed > over precision, even if the algorithm eventually turned out to be > super-slow. > Although I'd love to have such a thing working one day, I'm even ready > for the possibility that this effect might never be implemented, if I > realize it would be too difficult for me to code (honestly, I've never > coded in anything other than QBasic or briefly in Turbo Pascal, which > would probably require me to learn C all from scratch if C turned out > to be inevitable). > > - The core part of the algorithm is a frequency shifter [1]. Unlike a > pitch shifter, whose aim is to alter all the frequencies by a constant > ratio, a frequency shifter alters all the frequencies by a constant > difference. > - The corresponding dialog box would offer the following parameters: > 1) the amount by which the frequencies should be shifted, given in Hz, > which could be either positive or negative; > 2) two volume settings, namely for "dry" and "wet"; > 3) the amount by which the wet signal is fed back into the input, > given as a value that is less than 100% and more than -100%. > 4) the amount by which the feedback is to be delayed, probably given > in ms, which should always be given as a positive number; this > parameter has no effect if feedback is set to 0. 
> > [1] The actual realization would go like this: > - A) We store two intermediate copies of our original signal, label > them "IP" and "Q", and modify them as described in [2], > - B) Each of the modified intermediate signals is separately > amplitude-modulated: > IP is multiplied by a cosine wave of the given frequency, > Q is multiplied by a sine wave of the same frequency, > - C) we sum the two products to get the frequency-shifted signal, > - D) this signal, multiplied by the "Wet" coefficient, is sent to the > output, together with the original signal multiplied by the "Dry" > coefficient, > - E) the same frequency-shifted signal, this time multiplied by the > "Feedback" coefficient and delayed by "Delay" ms, is sent back to the > input. > > [2] We make a filter that works like an inverted Hilbert transform, > for which reason I'll call it the IHT. The length of the filter will > probably be hard-coded and unknown to the user. The longer the filter, > the closer the approximation gets to a proper IHT. > - For a positive integer l, the filter length should be either l*4 or > l*4-1 samples. Practically, the two make no difference because every > other coefficient is equal to zero. > - Even though the filter is l*4 samples long, our sample position > indexes, instead of going from 0 to l*4-1, should go from -2*l to > +2*l-1. Let's call them k. Similarly, for a filter of length 4*l-1, > the sample position indexes k would go from -(2*n-1) to +2*n-1, i.e. > from -2*n + 1 to 2*n - 1. > The actual values of the filter coefficients meet the following rule: > - For all even numbers k, the coefficient c(k) is equal to zero. > - For all odd numbers k, the coefficient c(k) is equal to -2/)k*π). > - Next, we convolve our original signal with this filter and store the > result into an intermediate buffer, which may be called Q (as in > "quadrature"). 
> - Then, depending on whether our filter length is even or odd, we > delay our original signal either by 2*n or by 2*n-1 samples and store > this delayed copy into another intermediate buffer, which we may call > IP (meaning "in phase"). > > You may be wondering why I insist on using an IHT instead of a proper > HT or on multiplying IP by a cosine wave rather than a sine wave. The > answers are: > - If I choose the amount of frequency shifting to be zero and do it > the way I've described, the supposed frequency-shifted signal will > only be delayed by "Delay" ms but in all other aspects it will be > identical to the original sound -- i.e. there won't be any additional > phase shifts or delays. In contrast, if IP were multiplied by a sine > wave and Q were multiplied by a cosine wave, then the supposed > frequency-shifted signal (with a zero frequency shift) would > correspond to the original signal not just delayed but also > Hilbert-transformed. This doesn't seem like an issue if the feedback > is set to zero. However, once I set the feedback to a non-zero value, > this thing starts to matter significantly. > - When I use an IHT, then I can get the desired frequency shift by > adding the two amplitude-modulated signals. In contrast, if I used a > proper HT, adding them would give me the opposite frequency shift and > to get the desired one, I would have to subtract them. > > Okay, that's it. Sorry for such a long post but I didn't want to miss > any important details. > > Thanks for your comments or suggestions. > > Petr > > > -- El software de antivirus Avast ha analizado este correo electrónico en busca de virus. https://www.avast.com/antivirus |
From: Petr P. <pet...@ya...> - 2021-05-17 04:36:45
|
Robert wrote: > What bothers me is the feedback part. A normal Biquad section would > probably not do because of the limited number of coefficients. Is the limitation constant or does it change depending on something else? I mean, there are programs whose buffer size for a custom filter is hard-coded and there's no way you could alter it. > However, snd-allpoles should do the trick. What is that? > It is a pity that you need IHT instead of HT. I've written an IIR for > the latter. Really? I can imagine how I would do a FIR filter for such a thing but I have no idea how I would get a decent approximation using an IIR filter. Can you elaborate? Also, a proper HT would definitely be useful too, the only difference being that the two amplitude-modulated signals would be subtracted, not added. So yes, it could be done with a proper HT as well. > Besides, I would rather go with Octave than Python to design the > filter or just stay in Nyquist. That's what I'm saying: if it turns out that I can do it in Nyquist, I'm all ears. Petr -- This e-mail has been checked for viruses by AVG antivirus software. http://www.avg.cz |
From: Leland <ll...@ho...> - 2021-05-17 03:58:10
|
Steve, have him open the project, duplicate a track, reverse the new track, save the project, delete the new track, save again. Does the project reduce in size then? (Basically, trying to force compaction.) -----Original Message----- From: Steve Fiddle <ste...@gm...> Sent: Sunday, May 16, 2021 7:34 PM To: Audacity-Devel list <aud...@li...> Subject: [Audacity-devel] Huge AUP3 project issue We have a report on the forum of a 200MB project (expected size) having a saved file size of 1350MB. On opening the 1.35GB project and saving a backup copy of the project, the backup project file is under 200 MB, which is about right for the amount of audio in the project. The forum thread is here: https://forum.audacityteam.org/viewtopic.php?f=46&t=118141 Steve _______________________________________________ audacity-devel mailing list aud...@li... https://lists.sourceforge.net/lists/listinfo/audacity-devel |
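Leland's duplicate-then-delete workaround is a way of forcing compaction: AUP3 projects are SQLite databases, and deleting data in SQLite only marks pages as free inside the file; the file itself shrinks when SQLite rebuilds it with VACUUM. A small stand-alone sketch of that behaviour (plain `sqlite3`, not Audacity):

```python
import os
import sqlite3
import tempfile

# Create a database, fill it with blob data, then delete the rows and
# compare the file size before and after VACUUM.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE blocks (id INTEGER PRIMARY KEY, data BLOB)")
con.executemany("INSERT INTO blocks (data) VALUES (?)",
                [(os.urandom(64 * 1024),) for _ in range(100)])
con.commit()

con.execute("DELETE FROM blocks")          # frees pages inside the file...
con.commit()
size_after_delete = os.path.getsize(path)  # ...but the file does not shrink

con.execute("VACUUM")                      # rebuilds the file, releasing free pages
size_after_vacuum = os.path.getsize(path)
con.close()
```

Whether the forum case is just un-vacuumed free pages or something else in how Audacity writes the project is, of course, exactly what the test on the user's project would tell.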
From: Mike H. <my...@gm...> - 2021-05-17 01:10:46
|
This reminds me all too much of R1Soft CDP and its use of SQLite to store disk block data... After enough operations on the 'disk safe' (what they call the database) it was pretty much useless to try and make the file shrink, and we simply had to start all over, creating a brand new dump of the server data to a brand new database. I hate SQLite, for the record. Mike On Sun, May 16, 2021, 18:35 Steve Fiddle <ste...@gm...> wrote: > We have a report on the forum of a 200MB project (expected size) > having a saved file size of 1350MB. > > On opening the 1.35GB project and saving a backup copy of the project, > the backup project file is under 200 MB, which is about right for the > amount of audio in the project. > > The forum thread is here: > https://forum.audacityteam.org/viewtopic.php?f=46&t=118141 > > > Steve > > > _______________________________________________ > audacity-devel mailing list > aud...@li... > https://lists.sourceforge.net/lists/listinfo/audacity-devel > |
From: Steve F. <ste...@gm...> - 2021-05-17 00:34:32
|
We have a report on the forum of a 200MB project (expected size) having a saved file size of 1350MB. On opening the 1.35GB project and saving a backup copy of the project, the backup project file is under 200 MB, which is about right for the amount of audio in the project. The forum thread is here: https://forum.audacityteam.org/viewtopic.php?f=46&t=118141 Steve |
From: Robert H. <aar...@gm...> - 2021-05-16 19:47:37
|
I don't think that there's FFT involved. Of course, you could but the coefficients given are for a plain convolution. Also, if some sort of Hilbert transformation is involved, you need the full FFT spectrum and Nyquist gives you only positive frequencies. Convolution and multiplication with sine/cosine are more or less trivial. What bothers me is the feedback part. A normal Biquad section would probably not do because of the limited number of coefficients. However, snd-allpoles should do the trick. It is a pity that you need IHT instead of HT. I've written an IIR for the latter. Besides, I would rather go with Octave than Python to design the filter or just stay in Nyquist. I'm also blind and examine the samples in the Nyquist output all the time. Cheers Robert On 16/05/2021, Steve Fiddle <ste...@gm...> wrote: > Nyquist does have FFT and IFFT, but they are quite tricky to use. > The Nyquist manual documentation is here: > http://www.cs.cmu.edu/~rbd/doc/nyquist/part11.html#index912 > but if you intend using these functions, then I would highly recommend > reading the "demos/fft_tutorial.htm", which is part of the standard > Nyquist release. > > Steve > > On Sun, 16 May 2021 at 19:10, freddyjohn via audacity-devel > <aud...@li...> wrote: >> >> I’ve only wrote a few time domain effects and have only recently started >> messing around with the frequency domain so take my word with a grain of >> salt. It’s trivial to start playing with your idea in Python with numpy, >> scipy, and matplotlib. Then when you have working implementation you can >> extract the core logic into a language of your choice. >> >> Numpy can help you generate waveforms >> >> import numpy as np >> t=np.linspace(0,5,5*48000) >> y=np.sin(44*t) >> >> And then turn 5 second 44hz sampled at 48000 sps to 16 bit pcm >> >> pcm=y.astype(np.short).tobytes() >> >> want to go into frequency domain and back again? 
>> >> from scipy.fft import fft, ifft >> frequency_domain = fft(y) >> time_domain = ifft( frequency_domain) >> >> You can visualize what is happening in either domain with matplotlib >> >> from matplotlib import pyplot as plt >> >> plt.plot(y) >> plt.show() >> >> >> >> Sent from ProtonMail for iOS >> >> >> On Sun, May 16, 2021 at 2:05 AM, Petr Pařízek via audacity-devel >> <aud...@li...> wrote: >> >> I wrote: >> >> > - C) we sum the two products to get the frequency-shifted signal, >> >> Oops, my fault. After summing them, we obviously need to skip the first >> n*2 or n*2-1 samples because that's by how much our IP and Q signals >> were delayed. >> (Gosh, I knew I would forget something.) >> >> Petr >> >> >> -- >> Tento e-mail byl zkontrolován na viry programem AVG. >> http://www.avg.cz >> >> >> >> _______________________________________________ >> audacity-devel mailing list >> aud...@li... >> https://lists.sourceforge.net/lists/listinfo/audacity-devel >> >> >> >> _______________________________________________ >> audacity-devel mailing list >> aud...@li... >> https://lists.sourceforge.net/lists/listinfo/audacity-devel > > > _______________________________________________ > audacity-devel mailing list > aud...@li... > https://lists.sourceforge.net/lists/listinfo/audacity-devel > |
From: Steve F. <ste...@gm...> - 2021-05-16 19:24:37
|
Nyquist does have FFT and IFFT, but they are quite tricky to use. The Nyquist manual documentation is here: http://www.cs.cmu.edu/~rbd/doc/nyquist/part11.html#index912 but if you intend using these functions, then I would highly recommend reading the "demos/fft_tutorial.htm", which is part of the standard Nyquist release. Steve On Sun, 16 May 2021 at 19:10, freddyjohn via audacity-devel <aud...@li...> wrote: > > I’ve only wrote a few time domain effects and have only recently started messing around with the frequency domain so take my word with a grain of salt. It’s trivial to start playing with your idea in Python with numpy, scipy, and matplotlib. Then when you have working implementation you can extract the core logic into a language of your choice. > > Numpy can help you generate waveforms > > import numpy as np > t=np.linspace(0,5,5*48000) > y=np.sin(44*t) > > And then turn 5 second 44hz sampled at 48000 sps to 16 bit pcm > > pcm=y.astype(np.short).tobytes() > > want to go into frequency domain and back again? > > from scipy.fft import fft, ifft > frequency_domain = fft(y) > time_domain = ifft( frequency_domain) > > You can visualize what is happening in either domain with matplotlib > > from matplotlib import pyplot as plt > > plt.plot(y) > plt.show() > > > > Sent from ProtonMail for iOS > > > On Sun, May 16, 2021 at 2:05 AM, Petr Pařízek via audacity-devel <aud...@li...> wrote: > > I wrote: > > > - C) we sum the two products to get the frequency-shifted signal, > > Oops, my fault. After summing them, we obviously need to skip the first > n*2 or n*2-1 samples because that's by how much our IP and Q signals > were delayed. > (Gosh, I knew I would forget something.) > > Petr > > > -- > Tento e-mail byl zkontrolován na viry programem AVG. > http://www.avg.cz > > > > _______________________________________________ > audacity-devel mailing list > aud...@li... 
> https://lists.sourceforge.net/lists/listinfo/audacity-devel > > > > _______________________________________________ > audacity-devel mailing list > aud...@li... > https://lists.sourceforge.net/lists/listinfo/audacity-devel |
From: freddyjohn <fre...@pr...> - 2021-05-16 18:09:34
|
I’ve only written a few time-domain effects and have only recently started messing around with the frequency domain, so take my word with a grain of salt. It’s trivial to start playing with your idea in Python with numpy, scipy, and matplotlib. Then, when you have a working implementation, you can extract the core logic into a language of your choice. Numpy can help you generate waveforms import numpy as np t=np.linspace(0, 5, 5*48000, endpoint=False) y=np.sin(2*np.pi*44*t) And then turn a 5-second 44 Hz tone sampled at 48000 sps into 16-bit PCM (scale to the int16 range first, otherwise the cast truncates everything to zero) pcm=(y*32767).astype(np.int16).tobytes() Want to go into the frequency domain and back again? from scipy.fft import fft, ifft frequency_domain = fft(y) time_domain = ifft(frequency_domain) You can visualize what is happening in either domain with matplotlib from matplotlib import pyplot as plt plt.plot(y) plt.show() Sent from ProtonMail for iOS On Sun, May 16, 2021 at 2:05 AM, Petr Pařízek via audacity-devel <aud...@li...> wrote: > I wrote: > >> - C) we sum the two products to get the frequency-shifted signal, > > Oops, my fault. After summing them, we obviously need to skip the first > n*2 or n*2-1 samples because that's by how much our IP and Q signals > were delayed. > (Gosh, I knew I would forget something.) > > Petr > > -- > This e-mail has been checked for viruses by AVG antivirus software. > http://www.avg.cz > > _______________________________________________ > audacity-devel mailing list > aud...@li... > https://lists.sourceforge.net/lists/listinfo/audacity-devel |
From: Petr P. <pet...@ya...> - 2021-05-16 06:06:13
|
I wrote: > - C) we sum the two products to get the frequency-shifted signal, Oops, my fault. After summing them, we obviously need to skip the first l*2 or l*2-1 samples because that's by how much our IP and Q signals were delayed. (Gosh, I knew I would forget something.) Petr -- This e-mail has been checked for viruses by AVG antivirus software. http://www.avg.cz |
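The sample-skipping in this correction is just compensation for the filter's group delay. A toy NumPy check makes the bookkeeping concrete: convolving with a "filter" whose time origin sits at index 2*l delays the signal by exactly that many samples, and trimming them from the front re-aligns it (a sketch, with l chosen arbitrarily):

```python
import numpy as np

l = 4
d = 2 * l                       # group delay of a length-4*l filter centred at k = 0
x = np.arange(20, dtype=float)  # stand-in signal

delta = np.zeros(4 * l)         # a "filter" that is a pure d-sample delay
delta[d] = 1.0

y = np.convolve(x, delta)       # len(x) + 4*l - 1 samples, shifted right by d
aligned = y[d:d + len(x)]       # skip the first 2*l samples, as the correction says
```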
From: Petr P. <pet...@ya...> - 2021-05-16 05:34:27
|
Hello to all of you, to introduce myself a bit, I'm a piano player and a music composer and a music theorist who is very interested in things regarding digital audio effects. Many years ago, I wrote a lot of small programs for the old QBasic for DOS and currently I'm planning to start learning Nyquist in some near future. FYI, I'm blind and that's why, when manipulating with the contents of a sound file, I often combine listening to the sound and converting the sample values to text, if I want to know more about some tiny details (where most people would probably zoom in the waveform). I'm thinking of a possible new effect which might one day be implemented in Audacity. Currently, I'm absolutely unsure whether this kind of effect could be coded in Nyquist at all or whether the only way is to write such complex stuff in C or whether there's yet another way of doing it which I don't happen to know about. But I'd be super-happy if I were told that this thing could indeed be coded in Nyquist. Therefore, I'll do my best to describe the effect, as some say, "in prose", and hope my description is understandable for you all. In case it isn't, I'm definitely open to clarification. I'll be very happy to know your opinions about what might be the best way to code this. I'd like to stress that I'm not intending this effect for real-time performance at all, even though the description of the effect itself might make you think I am. I'm not even suggesting something like a "Preview" facility because I don't want the processing speed to be of any importance here. In every case, I'm willing to sacrifice speed over precision, even if the algorithm eventually turned out to be super-slow. 
Although I'd love to have such a thing working one day, I'm even prepared for the possibility that this effect might never be implemented, if I realize it would be too difficult for me to code (honestly, I've never coded in anything other than QBasic, or briefly in Turbo Pascal, so I would probably have to learn C from scratch if C turned out to be inevitable).

- The core part of the algorithm is a frequency shifter [1]. Unlike a pitch shifter, whose aim is to alter all the frequencies by a constant ratio, a frequency shifter alters all the frequencies by a constant difference.
- The corresponding dialog box would offer the following parameters:
1) the amount by which the frequencies should be shifted, given in Hz, which could be either positive or negative;
2) two volume settings, namely for "dry" and "wet";
3) the amount by which the wet signal is fed back into the input, given as a value that is less than 100% and more than -100%;
4) the amount by which the feedback is to be delayed, probably given in ms, which should always be a positive number; this parameter has no effect if the feedback is set to 0.

[1] The actual realization would go like this:
- A) We store two intermediate copies of our original signal, label them "IP" and "Q", and modify them as described in [2].
- B) Each of the modified intermediate signals is separately amplitude-modulated: IP is multiplied by a cosine wave of the given frequency, Q is multiplied by a sine wave of the same frequency.
- C) We sum the two products to get the frequency-shifted signal.
- D) This signal, multiplied by the "Wet" coefficient, is sent to the output, together with the original signal multiplied by the "Dry" coefficient.
- E) The same frequency-shifted signal, this time multiplied by the "Feedback" coefficient and delayed by "Delay" ms, is sent back to the input.

[2] We make a filter that works like an inverted Hilbert transform, for which reason I'll call it the IHT. 
The length of the filter will probably be hard-coded and unknown to the user. The longer the filter, the closer the approximation gets to a proper IHT.

- For a positive integer l, the filter length should be either l*4 or l*4-1 samples. Practically, the two make no difference, because every other coefficient is equal to zero.
- Even though the filter is l*4 samples long, our sample position indexes, instead of going from 0 to l*4-1, should go from -2*l to +2*l-1. Let's call them k. Similarly, for a filter of length l*4-1, the sample position indexes k would go from -(2*l-1) to +(2*l-1), i.e. from -2*l + 1 to 2*l - 1.

The actual values of the filter coefficients obey the following rule:
- For all even numbers k, the coefficient c(k) is equal to zero.
- For all odd numbers k, the coefficient c(k) is equal to -2/(k*π).

- Next, we convolve our original signal with this filter and store the result in an intermediate buffer, which may be called Q (as in "quadrature").
- Then, depending on whether our filter length is even or odd, we delay our original signal either by 2*l or by 2*l-1 samples and store this delayed copy in another intermediate buffer, which we may call IP (meaning "in phase").

You may be wondering why I insist on using an IHT instead of a proper HT, or on multiplying IP by a cosine wave rather than a sine wave. The answers are:

- If I choose the amount of frequency shifting to be zero and do it the way I've described, the supposedly frequency-shifted signal will only be delayed by "Delay" ms, but in all other respects it will be identical to the original sound -- i.e. there won't be any additional phase shifts or delays. In contrast, if IP were multiplied by a sine wave and Q were multiplied by a cosine wave, then the supposedly frequency-shifted signal (with a zero frequency shift) would correspond to the original signal not just delayed but also Hilbert-transformed. This doesn't seem like an issue if the feedback is set to zero. 
However, once I set the feedback to a non-zero value, this starts to matter significantly.

- When I use an IHT, I can get the desired frequency shift by adding the two amplitude-modulated signals. In contrast, if I used a proper HT, adding them would give me the opposite frequency shift, and to get the desired one I would have to subtract them.

Okay, that's it. Sorry for such a long post, but I didn't want to miss any important details. Thanks for your comments or suggestions.

Petr

--
This e-mail has been checked for viruses by AVG.
http://www.avg.cz |
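The IHT filter from [2] and the mixing steps A-C from [1] can be sketched in pure Python. This is only an illustrative sketch, not a proposed implementation: the half-length l=64 is an arbitrary choice, and the filter is applied centred (non-causally) here, so the explicit 2*l-sample delay of the IP branch is handled implicitly rather than by delaying and trimming.

```python
import math

def iht_coeffs(l):
    """Coefficients c(k) of the 'inverted Hilbert transform' filter from [2]:
    zero for even k, -2/(k*pi) for odd k, with k running from -2*l to 2*l - 1.
    Only the nonzero (odd-k) taps are stored."""
    return {k: -2.0 / (k * math.pi) for k in range(-2 * l, 2 * l) if k % 2}

def frequency_shift(signal, shift_hz, rate, l=64):
    """Steps A-C of [1]: Q is the signal convolved with the IHT filter,
    IP is the signal itself (the filter is centred, so no extra delay),
    and the output is IP*cos + Q*sin at the shift frequency."""
    c = iht_coeffs(l)
    n = len(signal)
    w = 2.0 * math.pi * shift_hz / rate
    out = []
    for i in range(n):
        # Q branch: convolution with the IHT taps (out-of-range inputs = 0).
        q = sum(ck * signal[i - k] for k, ck in c.items() if 0 <= i - k < n)
        # Quadrature mixing: summing (not subtracting) gives the desired
        # shift direction, which is exactly why the IHT is used.
        out.append(signal[i] * math.cos(w * i) + q * math.sin(w * i))
    return out
```

A quick sanity check of the shift direction: feeding in a 1000 Hz cosine at an 8000 Hz sample rate with a +500 Hz shift should yield, away from the edges, a 1500 Hz cosine (within the ripple of the truncated filter).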
From: Steve F. <ste...@gm...> - 2021-05-15 11:02:12
|
From bugzilla issue 2758 - Linux: No support for Jack Audio System.

James wrote:
> Instructions for compiling with jack support need to be added in the wiki.
> It isn't enough just having them in the forum.
>
> Instructions for testing with jack (or a link to them) wouldn't hurt either.
> I wasted a lot of time googling just to get jack started properly.

"Getting jack started properly" is the reason why, on the forum, we rarely recommend using jack. I generally only recommend it when:
1. Jack will bring a significant benefit.
2. The user appears to be sufficiently adept to have a good chance of success.

For some, getting jack running can be as simple as installing QjackCtl and then launching it, but for many it can involve a lot of fiddling around to get it working. The situation has improved over the years, but it can still be a pain in the neck for many.

In my opinion, "getting jack working" is not our responsibility. Jack runs as part of the operating system, and the precise details are likely to differ according to the distribution. Even within the Ubuntu family, there are differences depending on which flavour of Ubuntu you use.

Re. building with Jack support: for me, now that bug 2758 is fixed, it's just a matter of ensuring that libjack-jackd2-dev is installed. That should be the case for all Debian-based distros. For other distros the libjack dependency may have a different name. I think we should list the dependencies in https://github.com/audacity/audacity/blob/master/linux/build.txt

Steve |
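For the Debian/Ubuntu case described above, the dependency step looks like this (package name is correct for Debian-based distros only; the pkg-config check is a generic way to verify the headers landed, not part of Audacity's documented build procedure):

```shell
# Debian/Ubuntu: install the JACK development headers so the build
# can pick up JACK support (the package has other names on other distros).
sudo apt-get install libjack-jackd2-dev

# Verify the development files are visible before configuring the build.
pkg-config --modversion jack
```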
From: Peter S. <pet...@gm...> - 2021-05-14 11:05:09
|
James has logged this as a P1 moonphase bug:
*Bug 2777* <https://bugzilla.audacityteam.org/show_bug.cgi?id=2777> - Windows: Excessive slowness for some Windows users

Peter.

On Fri, May 14, 2021 at 8:27 AM Peter Sampson <pet...@gm...> wrote:
> And this user seems to have got faster again... :-//
>
> https://forum.audacityteam.org/viewtopic.php?f=46&t=118101&p=425721#p425721
>
> Peter
>
> On Thu, May 13, 2021 at 11:13 PM Steve Fiddle <ste...@gm...> wrote:
>
>> On Thu, 13 May 2021 at 22:16, James Crook <jam...@gm...> wrote:
>>
>>> The slow appearance of the problem possibly fits with checkpointing.
>>>
>>> Planned changes to use a more standard max .wal size (i.e. block-size x checkpoint-block-count-threshold) may make the slowdown both smaller and more even, if that is the cause. It may also eliminate it, if it is related to exceeding some in-memory cache size and thus thrashing.
>>>
>>> I would say the fastest way is to get some test builds out that do this, and get feedback from such users.
>>> Things that may also help confirm/contradict this are asking about total RAM and about other programs running - especially browsers with many tabs open. Perhaps closing browsers, and so freeing up RAM, will alleviate it??
>>>
>>> How many are 'worrying numbers'?
>>
>> This latest one makes 5 separate reports that I've noticed in the last couple of weeks.
>> There's also a couple of others that may or may not be the same issue.
>>
>> Actual Audacity bugs make up a small proportion of forum issues, so although it's not a big number, it's a significantly large proportion of actual bugs.
>>
>> Steve
>>
>>> On Thu, 13 May 2021 at 21:23, Cliff <fly...@gm...> wrote:
>>>
>>>> I have not seen that on Mac Mojave. The only thing that at times seems slow is saving after a lot of editing. The initial project is usually 2+ hours and the final project is an hour or 1:15 or so. 
>>>> Compressor is normally quick, but I'm using the old one and not the new one. Basic operations are all at very normal speeds.
>>>>
>>>> Cliff
>>>>
>>>> On May 13, 2021, at 13:51, Steve Fiddle <ste...@gm...> wrote:
>>>>
>>>> We're getting a worrying number of reports of Audacity 3.x being excessively slow for some users. The common factor is that the projects are longer than about 30 minutes.
>>>>
>>>> Most of the reports have been on Windows, but so are most of our users, so that may not be significant.
>>>>
>>>> On Linux, when working with projects of an hour or more, I have also experienced some long waits, especially when closing the project (though I rarely work on projects that are more than a few minutes long).
>>>>
>>>> A peculiarity is that the slowness may not appear immediately. It seems to happen after working on the project for a while, and may be related to using "Undo".
>>>>
>>>> Below is the latest report, which came in a few minutes ago.
>>>>
>>>> Steve
>>>>
>>>> ---------- Forwarded message ---------
>>>> From: [redacted]
>>>> Date: Thu, 13 May 2021 at 19:26
>>>> Subject: Audacity 3.02
>>>>
>>>> I appreciate the work you do and that you make it available free to the user. It has helped me for years.
>>>>
>>>> I do not want to complain, but I am having some issues that I feel you should be made aware of.
>>>>
>>>> I am using a Windows 10 computer. All of my recordings are voice only and range from 30 to 60 minutes. Over the years I have made many recordings using Audacity. I average 2-3 recordings a week and have used Audacity since one of your earliest versions.
>>>>
>>>> A few weeks ago I upgraded to Audacity 3.0. With great expectation, I loaded it, expecting my “wait” time to be reduced. The reverse happened on nearly everything I do. It was so bad, I went back to my older version. Today, I saw 3.02 was out and downloaded it. 
>>>> I recorded a 48-minute audio without issue. However, when it came to editing, it was a different story. Below are a few of the “wait times” I experienced.
>>>>
>>>> · Sound Finder took a long time, but I did not time it.
>>>> · Label Audio Split took over 20 minutes.
>>>> · Compressor took over 40 minutes.
>>>> · Truncate Silence took a long time.
>>>> · Saving the file also took an extremely long time.
>>>> · Closing Audacity took over 21 minutes.
>>>>
>>>> I hope this information helps resolve some issues. If I can be of help in evaluating, I would be happy to do so.
>>>>
>>>> Sincerely,
>>>> [redacted]
>>>> _______________________________________________
>>>> audacity-devel mailing list
>>>> aud...@li...
>>>> https://lists.sourceforge.net/lists/listinfo/audacity-devel
|
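For background on the checkpointing mechanics in James's quoted message above: in SQLite, the .wal file grows until a checkpoint copies its frames back into the main database, and the auto-checkpoint threshold is expressed in pages, so the effective maximum .wal size is roughly page-size x threshold (James's "block-size x checkpoint-block-count-threshold"). A small standalone sketch using Python's built-in sqlite3 module (the thresholds and blob size are illustrative, not Audacity's settings):

```python
import os
import sqlite3
import tempfile

# WAL mode needs an on-disk database, so use a temporary file.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)

conn.execute("PRAGMA journal_mode=WAL")
# Checkpoint automatically once the .wal file reaches ~100 pages, i.e.
# roughly page_size * 100 bytes (SQLite's default threshold is 1000 pages).
conn.execute("PRAGMA wal_autocheckpoint=100")
page_size = conn.execute("PRAGMA page_size").fetchone()[0]

# Generate enough write traffic to exercise the WAL.
conn.execute("CREATE TABLE t (blob BLOB)")
conn.execute("INSERT INTO t VALUES (zeroblob(1000000))")
conn.commit()

# A manual TRUNCATE checkpoint transfers all frames into the database
# and resets the .wal file; busy == 0 indicates it succeeded.
busy, log_frames, ckpt_frames = conn.execute(
    "PRAGMA wal_checkpoint(TRUNCATE)").fetchone()
conn.close()
```

The point of the sketch is only that the .wal growth (and hence checkpoint cost) is tunable in page-count terms, which is what the proposed "more standard max .wal size" change would adjust.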