<div dir="ltr">Rancid isn't PCI compliant, but TAC+ is? </div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Oct 20, 2015 at 6:01 PM, Heasley <span dir="ltr"><<a href="mailto:heas@shrubbery.net" target="_blank">heas@shrubbery.net</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class=""><br>

> On 20.10.2015 at 13:12, Matt Almgren <matta@surveymonkey.com> wrote:
>
> So we moved away from Rancid to something more PCI compliant. So far so good, until very recently, when we started seeing this problem.
>
> I have 26 Juniper devices in a job in Orion NCM. For the last week, the daily backup job has reported that 8-10 devices were "unable to login" or got "connection refused". However, when I switch Orion NCM to use local admin logins on the Junipers instead of the TAC+ accounts, I see no errors. Something in the communication between the network devices and TAC+ isn't playing nice.
>
> I've tried the following:
>
> Increased the SSH timeout setting in Orion to 120 seconds.
> Decreased the number of concurrent connections from the default of 11 to 1.
> Reinstalled the Orion Job Engine, plus other tweaks on the Orion NCM side.
> Tried only Juniper devices, only Arista devices, and 8 devices instead of 27; all had mixed failures.

How many concurrent jobs did you use with rancid?
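
For what it's worth, tac_plus forks one child per incoming connection, so a burst of simultaneous NCM logins shows up as a burst of children on the server. A quick check (just a sketch, assuming a Linux host and the daemon process named tac_plus) is to sample the child count while a backup job runs:

    # log the number of tac_plus processes once a second during the job
    while sleep 1; do echo "$(date +%T) $(pgrep -c tac_plus)"; done

If the count spikes right when devices report "unable to login", the failures likely correlate with connection concurrency rather than with credentials.
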
<div class="HOEnZb"><div class="h5"><br>
> None of the failures are consistent. Job 1 has 8/27 failures. Job 2 has 10/27 failures, with some devices that failed in the first job passing in this one. And so on.
>
> Remember, the local NAS accounts set up in Orion work just fine; TAC+ isn't even contacted when this happens.
>
> Is there any tuning I can do on the TAC+ server to make sure it's able to handle the connections? What debug log level should I be looking at to get the best information? I've tried 24, 60, and even higher ones, but they're too noisy.
>
> --
> Matt Almgren, Sr. Network Engineer
> 101 Lytton Avenue, Palo Alto, CA 94301
> m: 408.499.9669
> www.surveymonkey.com
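
Also, on debug levels: the -d value is a bit mask, not a severity (see tac_plus(8)). 24 is authentication (8) plus passwords (16); 60 adds authorization (4) and accounting (32). For "connection refused" symptoms, the connection-level bits are more useful. A minimal sketch, assuming the stock shrubbery daemon and a config at /etc/tac_plus.conf (the path is an assumption):

    # 2 (fork) + 8 (authentication) + 128 (packet traces) = 138
    tac_plus -C /etc/tac_plus.conf -d 138

If the daemon side looks clean, the device side is worth a look too; on Junos, for example, something along these lines (untested sketch, server address is a placeholder):

    set system tacplus-server 10.0.0.1 timeout 10
    set system tacplus-server 10.0.0.1 single-connection

single-connection multiplexes requests over one TCP session, which can behave differently under NCM's login bursts than a TCP connection per request.
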
_______________________________________________
tac_plus mailing list
tac_plus@shrubbery.net
http://www.shrubbery.net/mailman/listinfo/tac_plus