Hoopster

[6.5.0] Call Traces when assigning IP to Dockers


Well, since it has now happened three times, each time after assigning an IP address to a docker, that is fairly strong evidence that, at least on my system, I cannot assign a separate IP address to any docker.

 

I have tried assigning an IP address to the following dockers:

 

UniFi

OpenVPN-AS

Pi-hole

 

Every time, call traces appeared in the syslog hours after the IP address assignment and continued until I either uninstalled the docker or removed the IP address assignment and let it go back to using the unRAID host IP.
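For context, unRAID's "custom" docker network does this with a macvlan network under the hood. A rough docker-CLI equivalent of the IP assignment is sketched below; the network name, subnet, and addresses are hypothetical placeholders, not taken from my setup, and echo prints the commands instead of running them so they can be previewed without touching a live server:

```shell
# Hypothetical sketch of what unRAID's "custom" docker network amounts to:
# a macvlan network on eth0 plus a fixed container IP. All names and
# addresses below are illustrative placeholders, not from my server.
NET=br0                  # unRAID's custom network name (placeholder)
SUBNET=192.168.1.0/24    # LAN subnet (placeholder)
GATEWAY=192.168.1.1      # LAN router (placeholder)
PIHOLE_IP=192.168.1.2    # static IP assigned to the container (placeholder)

# echo prints the commands rather than executing them, so this previews
# safely on any machine; drop the echo to actually run them.
echo docker network create -d macvlan \
    --subnet="$SUBNET" --gateway="$GATEWAY" \
    -o parent=eth0 "$NET"
echo docker run -d --name pihole --network "$NET" --ip "$PIHOLE_IP" pihole/pihole
```

The key detail is `-o parent=eth0`: every container on that network gets its own MAC on the physical NIC, which is exactly the macvlan path that shows up in the call traces.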

 

The latest round began after installing Pi-hole on the evening of March 25. The call traces started on the 26th, and several have been generated every day since then.

 

I won't do it just yet in case there is something I can test, but I suspect the call traces would disappear if I uninstalled Pi-hole (Pi-hole itself is not the problem; the docker IP address assignment is).

 

Perhaps I will try one of the 6.5.1 RCs and see whether any kernel or other changes affect this.

 

Diagnostics attached.

 

medianas-diagnostics-20180328-0747.zip


I updated my main server (where the call traces are occurring) to 6.5.1 RC2 on the off chance that a kernel change would resolve the call trace issue. Although it took a little longer for them to reappear, the call traces are back and, once again, are related to IP addressing, macvlan, etc.

 

My next step will be to remove Pi-Hole and see if the call traces disappear.  I am fairly confident that they will.

 

Latest diagnostics attached.

 

 

medianas-diagnostics-20180330-0242.zip


@limetech @bonienl  Since this is obviously not a general defect with unRAID/docker networking, I don't expect any resolution from you. However, I will continue to experiment and look for a resolution on my own, in case it is helpful to anyone now or in the future.

 

I updated the server to 6.5.1 RC3 three days ago and, so far, have only seen one call trace. It is still the usual suspect (IP/macvlan), but the call traces appear to have decreased in frequency, or my recent usage patterns simply have not triggered them.

 

Since it appears my experience with call traces when assigning IP addresses to dockers is not shared by other users, perhaps it is a hardware issue unique to my server. The unRAID/docker LAN is currently on eth0, and my motherboard has two NICs. Perhaps I will try assigning the dockers to eth1 and see how that affects the issue. I could try bonding as well. I don't know if any of that will help, but I suppose it is worth a try. I believe the GUI/unRAID is tied to eth0, correct?
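Which physical NIC ended up as eth0 can be checked by MAC address from the shell; a small sketch using standard Linux sysfs (nothing unRAID-specific is assumed):

```shell
# List every network interface with its MAC address. Matching the MACs
# against the motherboard's i210 and i219-LM entries (e.g. in the BIOS
# or on a sticker) shows which physical NIC is mapped to eth0.
for iface in /sys/class/net/*; do
    printf '%s %s\n' "$(basename "$iface")" "$(cat "$iface/address")"
done
```

The loopback interface always appears as `lo` with an all-zero address; the two onboard NICs show up with their real hardware MACs.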

 

Quote

Apr  1 01:16:52 MediaNAS kernel: CPU: 0 PID: 15111 Comm: kworker/0:2 Not tainted 4.14.31-unRAID #1
Apr  1 01:16:52 MediaNAS kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./C236 WSI, BIOS P2.50 12/12/2017
Apr  1 01:16:52 MediaNAS kernel: Workqueue: events macvlan_process_broadcast [macvlan]
Apr  1 01:16:52 MediaNAS kernel: task: ffff880759889d00 task.stack: ffffc90008cc8000
Apr  1 01:16:52 MediaNAS kernel: RIP: 0010:__nf_conntrack_confirm+0x97/0x4d6
Apr  1 01:16:52 MediaNAS kernel: RSP: 0018:ffff88086dc03d30 EFLAGS: 00010202
Apr  1 01:16:52 MediaNAS kernel: RAX: 0000000000000188 RBX: 000000000000c318 RCX: 0000000000000001
Apr  1 01:16:52 MediaNAS kernel: RDX: 0000000000000001 RSI: 0000000000000000 RDI: ffffffff81c09260
Apr  1 01:16:52 MediaNAS kernel: RBP: ffff880721881700 R08: 0000000000000101 R09: ffff8807daca1700
Apr  1 01:16:52 MediaNAS kernel: R10: 0000000000000098 R11: 0000000000000000 R12: ffffffff81c8b080
Apr  1 01:16:52 MediaNAS kernel: R13: 0000000000007caa R14: ffff88037f170a00 R15: ffff88037f170a58
Apr  1 01:16:52 MediaNAS kernel: FS:  0000000000000000(0000) GS:ffff88086dc00000(0000) knlGS:0000000000000000
Apr  1 01:16:52 MediaNAS kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Apr  1 01:16:52 MediaNAS kernel: CR2: 00003bcfb3cd2000 CR3: 0000000001c0a005 CR4: 00000000003606f0
Apr  1 01:16:52 MediaNAS kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Apr  1 01:16:52 MediaNAS kernel: DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Apr  1 01:16:52 MediaNAS kernel: Call Trace:
Apr  1 01:16:52 MediaNAS kernel: <IRQ>
Apr  1 01:16:52 MediaNAS kernel: ipv4_confirm+0xac/0xb4 [nf_conntrack_ipv4]
Apr  1 01:16:52 MediaNAS kernel: nf_hook_slow+0x37/0x96
Apr  1 01:16:52 MediaNAS kernel: ip_local_deliver+0xab/0xd3
Apr  1 01:16:52 MediaNAS kernel: ? inet_del_offload+0x3e/0x3e
Apr  1 01:16:52 MediaNAS kernel: ip_rcv+0x311/0x346
Apr  1 01:16:52 MediaNAS kernel: ? ip_local_deliver_finish+0x1b8/0x1b8
Apr  1 01:16:52 MediaNAS kernel: __netif_receive_skb_core+0x6ba/0x733
Apr  1 01:16:52 MediaNAS kernel: ? enqueue_task_fair+0x94/0x42c
Apr  1 01:16:52 MediaNAS kernel: process_backlog+0x8c/0x12d
Apr  1 01:16:52 MediaNAS kernel: net_rx_action+0xfb/0x24f
Apr  1 01:16:52 MediaNAS kernel: __do_softirq+0xcd/0x1c2
Apr  1 01:16:52 MediaNAS kernel: do_softirq_own_stack+0x2a/0x40
Apr  1 01:16:52 MediaNAS kernel: </IRQ>
Apr  1 01:16:52 MediaNAS kernel: do_softirq+0x46/0x52
Apr  1 01:16:52 MediaNAS kernel: netif_rx_ni+0x21/0x35
Apr  1 01:16:52 MediaNAS kernel: macvlan_broadcast+0x117/0x14f [macvlan]
Apr  1 01:16:52 MediaNAS kernel: ? __switch_to_asm+0x24/0x60
Apr  1 01:16:52 MediaNAS kernel: macvlan_process_broadcast+0xe4/0x114 [macvlan]
Apr  1 01:16:52 MediaNAS kernel: process_one_work+0x14c/0x23f
Apr  1 01:16:52 MediaNAS kernel: ? rescuer_thread+0x258/0x258
Apr  1 01:16:52 MediaNAS kernel: worker_thread+0x1c3/0x292
Apr  1 01:16:52 MediaNAS kernel: kthread+0x111/0x119
Apr  1 01:16:52 MediaNAS kernel: ? kthread_create_on_node+0x3a/0x3a
Apr  1 01:16:52 MediaNAS kernel: ? SyS_exit_group+0xb/0xb
Apr  1 01:16:52 MediaNAS kernel: ret_from_fork+0x35/0x40
Apr  1 01:16:52 MediaNAS kernel: Code: 48 c1 eb 20 89 1c 24 e8 24 f9 ff ff 8b 54 24 04 89 df 89 c6 41 89 c5 e8 a9 fa ff ff 84 c0 75 b9 49 8b 86 80 00 00 00 a8 08 74 02 <0f> 0b 4c 89 f7 e8 03 ff ff ff 49 8b 86 80 00 00 00 0f ba e0 09 
Apr  1 01:16:52 MediaNAS kernel: ---[ end trace f8aa7c492ea55664 ]---
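To track whether the frequency really is dropping, the traces can be counted per day straight from the syslog. A self-contained sketch follows; the log lines in it are fabricated stand-ins for a real /var/log/syslog:

```shell
# Count macvlan call traces per day. The sample file stands in for
# /var/log/syslog; dates and trace IDs below are fabricated.
cat > /tmp/sample-syslog <<'EOF'
Apr  1 01:16:52 MediaNAS kernel: Workqueue: events macvlan_process_broadcast [macvlan]
Apr  1 01:16:52 MediaNAS kernel: ---[ end trace f8aa7c492ea55664 ]---
Apr  2 03:00:10 MediaNAS kernel: Workqueue: events macvlan_process_broadcast [macvlan]
Apr  2 03:00:10 MediaNAS kernel: ---[ end trace a1b2c3d4e5f60708 ]---
Apr  2 09:12:44 MediaNAS kernel: Workqueue: events macvlan_process_broadcast [macvlan]
Apr  2 09:12:44 MediaNAS kernel: ---[ end trace 1122334455667788 ]---
EOF

# Each call trace ends with one "end trace" line, so counting those
# lines grouped by "Month Day" gives traces per day.
awk '/end trace/ { c[$1" "$2]++ } END { for (d in c) print d, c[d] }' /tmp/sample-syslog
```

On the sample data this reports one trace on Apr 1 and two on Apr 2; pointed at the real syslog it gives the per-day trend across reboots.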

 


I am now running 6.5.1 RC5 on both my main and backup servers.

 

At least one other user is experiencing the same call traces with an IP address assigned to the Pi-hole docker. He has completely different hardware from mine (Supermicro server, Xeon E5), so I am now less certain this is a hardware issue unique to my system.

 

I had to disable Pi-hole this morning. Overnight it had completely locked up my unRAID server (perhaps due to the ever-increasing number of call traces being generated). Since Pi-hole was my DNS, the whole network was inaccessible, and I had to hard reset the unRAID server since even the GUI was locked up.

 

There was a Pi-hole update last night, which I applied after rebooting everything, as the previous update was causing many issues for many users; perhaps it was the cause of my problems as well. Still, IP/macvlan call traces were being generated regularly in the syslog. I have now disabled the Pi-hole docker and reset my router DNS back to what it was prior to installing Pi-hole. I am sure the call traces will go away as well. Not the solution I want, but it is the only one available to me now.


On to the next attempt to track down the cause and eliminate these ip/macvlan call traces.  My server has two integrated NICs (Intel i210 and Intel i219-LM).  LAN1 is the i210 and LAN2 is the i219-LM.  I have both enabled in the BIOS but I only had a LAN cable connected to LAN1.  In unRAID Network Settings, LAN1 showed up as eth0 and LAN2 as eth1.  LAN2/eth1 had no settings configured as nothing was connected to it.

 

On the assumption this may be hardware related, I have done the following:

 

Step 1 (as a control step): disable the Pi-hole docker for 24 hours and reset the DNS in the router to not use Pi-hole. Result: NO call traces. This confirms the call traces are tied to Pi-hole with its IP address assignment.

 

Step 2: disable LAN2 and re-enable Pi-hole as DNS in the router. Result: call traces occurred after an hour or two of using Pi-hole (I again disabled Pi-hole for 16 hours and the call traces ceased).

 

Step 3: disable LAN1 in the BIOS and enable LAN2 (now eth0 in unRAID), then re-enable Pi-hole as DNS. Results pending; the server has been running in this configuration for ~1 hour.

 

I also disabled all unused motherboard and security features in the BIOS.

 

UPDATE: Now at 28 hours without a call trace. I am not yet ready to declare victory, but it looks promising; at least more so than anything else I have tried.


After changing the NIC from LAN1 to LAN2, the system ran without IP/macvlan call traces for over 3 1/2 days. That's a new record; however, the old familiar call trace returned just after updating to 6.5.1 RC6. I have since rebooted the server and will monitor for the return of IP/macvlan call traces.


It's been about 2 1/2 days since the last reboot, and the server just experienced another macvlan call trace. It looks like the change of NIC and/or unRAID 6.5.1 RC6 is not a cure, although the frequency of call traces has diminished.



