From: Bill D. <dr...@dm...> - 2009-04-15 23:55:01

Could I ask what command line I would run to install this patch?
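For what it's worth, the usual route is the stock patch(1) utility; a
sketch, assuming the diff below is saved as insteon_retry.diff (the
filename is mine) and that Insteon_Device.pm sits in mh/lib as in a
stock checkout:

    cd /path/to/mh/lib
    patch -p0 --dry-run < insteon_retry.diff   # rehearse first; touches nothing
    patch -p0 < insteon_retry.diff             # apply for real

If the mail client mangled the whitespace in transit, adding -l
(--ignore-whitespace) to both commands may help.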
w20090329.mh wrote:
>
> How about the following, which allows a mh.private.ini setting,
> Insteon_retry_count, that defaults to 2. I too have a frequently
> recurring problem where usually I get "2 hops left" and then randomly
> get lost messages without changing plugged-in devices or the on/off
> state of devices. Still trying to work it out, but I just want it to
> work for now.
>
> Index: Insteon_Device.pm
> ===================================================================
> --- Insteon_Device.pm	(revision 1646)
> +++ Insteon_Device.pm	(working copy)
> @@ -509,7 +509,7 @@
>  	if ($$self{queue_timer}->expired or !($$self{awaiting_ack})) {
>  		my $callback = undef;
>  		if ($$self{queue_timer}->expired) {
> -			if ($$self{_prior_msg} and $$self{_retry_count} < 2) {
> +			if ($$self{_prior_msg} and $$self{_retry_count} < ($::config_parms{'Insteon_retry_count'} || 2)) {
>  				# first check to see if type is an alllink; if so, then don't keep retrying until
>  				# proper handling of alllink cleanup status is implemented in Insteon_PLM
>  				if ($$self{_prior_msg}{type} eq 'alllink' and (!($self->is_plm_controlled))) {
> @@ -521,7 +521,7 @@
>  			}
>  		} else {
>  			&::print_log("[Insteon_Device] WARN: queue timer on " . $self->get_object_name .
> -				" expired. Trying next command if queued.");
> +				" expired after $$self{_retry_count} retries. Trying next command if queued.");
>  			$$self{m_status_request_pending} = 0; # hack--need a better way
>  			if ($self->queue_timer_callback) {
>  				if ($$self{_prior_msg} and ($$self{_prior_msg}{is_synchronous})) {
>
>
> Gregg Liming wrote:
>> Marc MERLIN wrote:
>>> On Tue, Apr 14, 2009 at 09:49:58PM -0400, Gregg Liming wrote:
>>>> Marc MERLIN wrote:
>>>>
>>>> [... large snip ...]
>>>>
>>>>> I suppose that'll wait for the insteon refactor,
>>>> yes. pretty much everything insteon-related must wait
>>> Understood.
>>> My workaround for now is to resubmit the changes several times, rescan
>>> all, delete orphans, and sync all. It's heavyweight, since a full run
>>> of the three takes 30 minutes or so, but it beats configuring 100-ish
>>> link relationships by hand ;)
>>>
>>> If I have more problems, I may just find where your retry code is and
>>> up it from 3 to 6 or something. Most times I got nailed because the
>>> huge amount of traffic my resyncs now generate makes things a bit
>>> unhappy at times, and 2 retries is not always enough. You might want
>>> to consider making the number of retries an option in your new code.
>>
>> I'll definitely add that ability. If it were me, however, I'd focus on
>> solving the environmental problems that obviously exist, since you have
>> a recurring problem with signal delivery. In the meantime, you can find
>> the retry limit by searching for "_retry_count" in Insteon_Device.pm.
>> Look for the comparison: _retry_count < 2. Change 2 to whatever you
>> want. Note that this change is global.
>>
>> Gregg
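With the patch applied, the knob would live in mh.private.ini; a minimal
sketch, using the parameter name from the patch above (4 is purely an
example value; leaving the line out keeps the old default of 2):

    # mh.private.ini
    Insteon_retry_count = 4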
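And for anyone staying on stock code, Gregg's suggestion above amounts
to a one-line edit of Insteon_Device.pm; a sketch of the unpatched
comparison at r1646, with 6 purely as an example value (the change
applies to every Insteon device, not per-device):

    if ($$self{_prior_msg} and $$self{_retry_count} < 6) {  # stock limit is 2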