[tac_plus] TAC+ and Solarwinds Orion NCM don't play well together
Matt Almgren
matta at surveymonkey.com
Fri Oct 23 07:41:11 UTC 2015
Rancid stores passwords in the clear. TACACS does not when you use LDAP/PAM authentication.
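For context, this is roughly what that looks like in tac_plus.conf -- a minimal
sketch only, assuming the daemon was built with PAM support and that PAM is
backed by LDAP; the shared secret and username below are placeholders:

    # /etc/tac_plus.conf (sketch)
    key = "SHARED-SECRET"      # placeholder TACACS+ shared secret

    user = matta {
        login = PAM            # password verified through PAM (e.g. pam_ldap),
                               # so no cleartext password is stored in this file
        service = exec {
            priv-lvl = 15      # full privileges on login
        }
    }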
-- iMatt
> On Oct 21, 2015, at 2:48 PM, Alan McKinnon <alan.mckinnon at gmail.com> wrote:
>
>> On 21/10/2015 19:32, Daniel Schmidt wrote:
>> Rancid isn't PCI compliant, but TAC+ is?
>
>
>
> And in what way is Rancid not PCI compliant?
>
> My reading of PCI is that it has a narrow, well-defined scope, and Rancid
> is not in it, despite what those with agendas claim.
>
>
>
>
>>
>>> On Tue, Oct 20, 2015 at 6:01 PM, Heasley <heas at shrubbery.net> wrote:
>>>
>>>
>>>
>>>> On 20.10.2015 at 13:12, Matt Almgren <matta at surveymonkey.com> wrote:
>>>>
>>>> So we moved away from Rancid to something that is more PCI compliant.
>>>> So far so good, until very recently, when we started seeing this problem.
>>>>
>>>> I have 26 juniper devices in a job in Orion NCM. For some reason, for
>>>> the last week, the daily backup job reports that 8-10 devices were “unable
>>>> to login” or “connection refused”. However, when I switch Orion NCM to use
>>>> local Admin logins on the Junipers versus TAC+ accounts, I see no errors.
>>>> Something in the communication between the network devices and TAC+
>>>> isn’t playing nicely.
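>>>> (For reference, the knobs on the Junos side look roughly like this -- a
>>>> sketch only, with a placeholder server address, source address, and secret.
>>>> Raising timeout from the short default and pinning source-address are the
>>>> usual first tweaks; single-connection only helps if the TAC+ server
>>>> actually supports it:)
>>>>
>>>>     set system tacplus-server 10.0.0.5 secret "SHARED-SECRET"
>>>>     set system tacplus-server 10.0.0.5 timeout 10
>>>>     set system tacplus-server 10.0.0.5 source-address 10.0.0.1
>>>>     set system tacplus-server 10.0.0.5 single-connection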
>>>>
>>>> I’ve tried the following:
>>>>
>>>> Increased the SSH timeout setting in Orion to 120 seconds.
>>>> Decreased the number of concurrent connections from the default of 11 to 1.
>>>> Reinstalled the Orion Job Engine, plus other tweaks on the Orion NCM side.
>>>> Tried only Juniper devices, only Arista devices, and 8 devices instead of
>>>> 27; all had mixed failures.
>>>
>>> How many concurrent jobs did you use with rancid?
>>>
>>>> None of the failures are consistent. Job 1 has 8/27 failures. Job 2
>>>> has 10/27 failures with some that failed in the first job passing in this
>>>> one. Etc…
>>>>
>>>> Remember, local NAS accounts set up in Orion work just fine – TAC+ isn’t
>>>> even talked to when this happens.
>>>>
>>>> Is there any tuning I can do to the TAC+ server to make sure it’s able to
>>>> handle the connections? What debug log level should I be looking at to
>>>> get the best information? I’ve tried 24, 60, and even the higher ones, but
>>>> they’re too noisy.
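>>>> (To illustrate what I mean by the levels -- a rough sketch, assuming the
>>>> debug value is a bitmask with the flag values from the Cisco-derived
>>>> source, e.g. 4 = fork, 8 = authorization, 16 = authentication,
>>>> 32 = passwords; the config path is a placeholder:)
>>>>
>>>>     # sketch; verify the flag values against your build's man page
>>>>     tac_plus -C /etc/tac_plus.conf -d 24     # 8+16: authorization + authentication
>>>>     tac_plus -C /etc/tac_plus.conf -d 60     # 4+8+16+32: adds fork and password tracing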
>>>>
>>>>
>>>> —
>>>> Matt Almgren, Sr. Network Engineer
>>>> 101 Lytton Avenue, Palo Alto, CA 94301
>>>> m: 408.499.9669
>>>> www.surveymonkey.com
>
>
> --
> Alan McKinnon
> alan.mckinnon at gmail.com
>
> _______________________________________________
> tac_plus mailing list
> tac_plus at shrubbery.net
> http://www.shrubbery.net/mailman/listinfo/tac_plus