
AWS Server - Can't get vps token #3737

Open
dersch81 opened this issue Jan 4, 2025 · 23 comments
dersch81 commented Jan 4, 2025

Expected Behavior

A stable connection from the router to the VPS

Current Behavior

The connection, aggregate speed and so on are great with AWS, but it only runs for a very short time. Then this happens:

Router Syslog:

Jan  4 10:23:52 OpenMPTCProuter user.notice post-tracking-020-status: Check API configuration...
Jan  4 10:23:52 OpenMPTCProuter user.notice post-tracking-020-status: Check API configuration... Done
Jan  4 10:24:08 OpenMPTCProuter user.notice OMR-VPS: Can't get vps token, try later (can't ping server vps on 3.79.35.254, no server API answer on 3.79.35.254)
Jan  4 10:24:49 OpenMPTCProuter daemon.notice netifd: wan3 (5456): udhcpc: sending renew to server 100.64.0.1
Jan  4 10:24:49 OpenMPTCProuter daemon.notice netifd: wan3 (5456): udhcpc: lease of 100.117.46.120 obtained from 100.64.0.1, lease time 300
Jan  4 10:25:17 OpenMPTCProuter user.notice OMR-VPS: Can't get vps token, try later (can't ping server vps on 3.79.35.254, no server API answer on 3.79.35.254)
Jan  4 10:26:17 OpenMPTCProuter user.notice OMR-VPS: Can't get vps token, try later (can't ping server vps on 3.79.35.254, no server API answer on 3.79.35.254)
Jan  4 10:27:17 OpenMPTCProuter user.notice OMR-VPS: Can't get vps token, try later (can't ping server vps on 3.79.35.254, no server API answer on 3.79.35.254)
Jan  4 10:27:19 OpenMPTCProuter daemon.notice netifd: wan3 (5456): udhcpc: sending renew to server 100.64.0.1
Jan  4 10:27:19 OpenMPTCProuter daemon.notice netifd: wan3 (5456): udhcpc: lease of 100.117.46.120 obtained from 100.64.0.1, lease time 300
Jan  4 10:28:17 OpenMPTCProuter user.notice OMR-VPS: Can't get vps token, try later (can't ping server vps on 3.79.35.254, no server API answer on 3.79.35.254)
Jan  4 10:29:17 OpenMPTCProuter user.notice OMR-VPS: Can't get vps token, try later (can't ping server vps on 3.79.35.254, no server API answer on 3.79.35.254)
Jan  4 10:29:49 OpenMPTCProuter daemon.notice netifd: wan3 (5456): udhcpc: sending renew to server 100.64.0.1
Jan  4 10:29:49 OpenMPTCProuter daemon.notice netifd: wan3 (5456): udhcpc: lease of 100.117.46.120 obtained from 100.64.0.1, lease time 300
Jan  4 10:30:17 OpenMPTCProuter user.notice OMR-VPS: Can't get vps token, try later (can't ping server vps on 3.79.35.254, no server API answer on 3.79.35.254)
Jan  4 10:31:17 OpenMPTCProuter user.notice OMR-VPS: Can't get vps token, try later (can't ping server vps on 3.79.35.254, no server API answer on 3.79.35.254)
Jan  4 10:32:17 OpenMPTCProuter user.notice OMR-VPS: Can't get vps token, try later (can't ping server vps on 3.79.35.254, no server API answer on 3.79.35.254)
Jan  4 10:32:19 OpenMPTCProuter daemon.notice netifd: wan3 (5456): udhcpc: sending renew to server 100.64.0.1
Jan  4 10:32:19 OpenMPTCProuter daemon.notice netifd: wan3 (5456): udhcpc: lease of 100.117.46.120 obtained from 100.64.0.1, lease time 300
Jan  4 10:33:17 OpenMPTCProuter user.notice OMR-VPS: Can't get vps token, try later (can't ping server vps on 3.79.35.254, no server API answer on 3.79.35.254)
Jan  4 10:34:17 OpenMPTCProuter user.notice OMR-VPS: Can't get vps token, try later (can't ping server vps on 3.79.35.254, no server API answer on 3.79.35.254)
Jan  4 10:34:49 OpenMPTCProuter daemon.notice netifd: wan3 (5456): udhcpc: sending renew to server 100.64.0.1
Jan  4 10:34:49 OpenMPTCProuter daemon.notice netifd: wan3 (5456): udhcpc: lease of 100.117.46.120 obtained from 100.64.0.1, lease time 300
Jan  4 10:35:17 OpenMPTCProuter user.notice OMR-VPS: Can't get vps token, try later (can't ping server vps on 3.79.35.254, no server API answer on 3.79.35.254)
Jan  4 10:36:17 OpenMPTCProuter user.notice OMR-VPS: Can't get vps token, try later (can't ping server vps on 3.79.35.254, no server API answer on 3.79.35.254)
Jan  4 10:37:17 OpenMPTCProuter user.notice OMR-VPS: Can't get vps token, try later (can't ping server vps on 3.79.35.254, no server API answer on 3.79.35.254)
Jan  4 10:37:19 OpenMPTCProuter daemon.notice netifd: wan3 (5456): udhcpc: sending renew to server 100.64.0.1
Jan  4 10:37:19 OpenMPTCProuter daemon.notice netifd: wan3 (5456): udhcpc: lease of 100.117.46.120 obtained from 100.64.0.1, lease time 300
Jan  4 10:38:17 OpenMPTCProuter user.notice OMR-VPS: Can't get vps token, try later (can't ping server vps on 3.79.35.254, no server API answer on 3.79.35.254)
Jan  4 10:39:17 OpenMPTCProuter user.notice OMR-VPS: Can't get vps token, try later (can't ping server vps on 3.79.35.254, no server API answer on 3.79.35.254)
Jan  4 10:39:49 OpenMPTCProuter daemon.notice netifd: wan3 (5456): udhcpc: sending renew to server 100.64.0.1
Jan  4 10:39:49 OpenMPTCProuter daemon.notice netifd: wan3 (5456): udhcpc: lease of 100.117.46.120 obtained from 100.64.0.1, lease time 300

VPS journalctl -u omr-admin

-- Boot b069c6c21bb545c388f9415702271044 --
Jan 04 02:34:37 ip-172-31-45-95 systemd[1]: Started omr-admin.service - OMR-Admin.
Jan 04 02:35:06 ip-172-31-45-95 omr-admin.py[1232]: Device "eth0" does not exist.
Jan 04 02:35:06 ip-172-31-45-95 omr-admin.py[1236]: Device "eth0" does not exist.
Jan 04 02:35:08 ip-172-31-45-95 omr-admin.py[1289]: Device "eth0" does not exist.
Jan 04 02:35:08 ip-172-31-45-95 omr-admin.py[1293]: Device "eth0" does not exist.
Jan 04 02:35:08 ip-172-31-45-95 omr-admin.py[1339]: Device "eth0" does not exist.
Jan 04 02:35:08 ip-172-31-45-95 omr-admin.py[1343]: Device "eth0" does not exist.
Jan 04 02:35:09 ip-172-31-45-95 omr-admin.py[1392]: Device "eth0" does not exist.
Jan 04 02:35:09 ip-172-31-45-95 omr-admin.py[1396]: Device "eth0" does not exist.
Jan 04 02:35:09 ip-172-31-45-95 omr-admin.py[1445]: Device "eth0" does not exist.
Jan 04 02:35:09 ip-172-31-45-95 omr-admin.py[1449]: Device "eth0" does not exist.
Jan 04 02:35:09 ip-172-31-45-95 omr-admin.py[1495]: Device "eth0" does not exist.
Jan 04 02:35:09 ip-172-31-45-95 omr-admin.py[1499]: Device "eth0" does not exist.
Jan 04 02:39:41 ip-172-31-45-95 omr-admin.py[3069]: Device "eth0" does not exist.
Jan 04 02:39:41 ip-172-31-45-95 omr-admin.py[3068]: Device "eth0" does not exist.
Jan 04 02:39:41 ip-172-31-45-95 omr-admin.py[3079]: Device "eth0" does not exist.
Jan 04 02:39:41 ip-172-31-45-95 omr-admin.py[3075]: Device "eth0" does not exist.
-- Boot 7162a003bb634ff5ba41e05a6bfd746c --
Jan 04 10:46:42 ip-172-31-45-95 systemd[1]: Started omr-admin.service - OMR-Admin.
Jan 04 10:47:06 ip-172-31-45-95 omr-admin.py[1874]: Device "eth0" does not exist.
Jan 04 10:47:06 ip-172-31-45-95 omr-admin.py[1878]: Device "eth0" does not exist.
Jan 04 10:47:06 ip-172-31-45-95 omr-admin.py[1903]: Device "eth0" does not exist.
Jan 04 10:47:08 ip-172-31-45-95 omr-admin.py[1931]: Device "eth0" does not exist.
Jan 04 10:47:08 ip-172-31-45-95 omr-admin.py[1935]: Device "eth0" does not exist.
Jan 04 10:47:09 ip-172-31-45-95 omr-admin.py[2028]: Device "eth0" does not exist.
Jan 04 10:47:09 ip-172-31-45-95 omr-admin.py[2032]: Device "eth0" does not exist.
Jan 04 10:47:09 ip-172-31-45-95 omr-admin.py[2078]: Device "eth0" does not exist.
Jan 04 10:47:09 ip-172-31-45-95 omr-admin.py[2082]: Device "eth0" does not exist.
Jan 04 10:47:09 ip-172-31-45-95 omr-admin.py[2128]: Device "eth0" does not exist.
Jan 04 10:47:09 ip-172-31-45-95 omr-admin.py[2132]: Device "eth0" does not exist.
Jan 04 10:47:10 ip-172-31-45-95 omr-admin.py[2163]: Device "eth0" does not exist.
Jan 04 10:47:10 ip-172-31-45-95 omr-admin.py[2167]: Device "eth0" does not exist.

The AWS interface is ens5.

It never recovers; only rebooting the AWS instance fixes it for a while.
The instance is then not reachable anymore. The issue starts right after benchmarking it with speed tests, or just after it has been connected for a while.

I already found #3733; the issue is not the same, but I installed the latest dev branch you mentioned. It did not fix it.

Besides, I have this on the router side all the time:

root@OpenMPTCProuter:~# omr-iperf
Segmentation fault
root@OpenMPTCProuter:~# omr-iperf vps -R
Segmentation fault

Possible Solution

No idea

Steps to Reproduce the Problem

  1. Try an AWS instance with Debian 12 (t3.micro)
  2. Benchmark it with speed tests from your private network
  3. It seems to work more stably with Shadowsocks than with XRay VLESS

Context (Environment)

I am trying to get a stable MPTCP setup. I had a lot of instability with HT-Hosting.de, so I decided to switch to AWS. It is much better when it works: more download and upload bandwidth in aggregation mode. But right now it is not usable at all.

Specifications

  • OpenMPTCProuter version: 6.6.36-x64v4-xanmod1 0.1031
  • OpenMPTCProuter VPS version: server-test/debian-x86_64.sh | KERNEL="6.6"
  • OpenMPTCProuter VPS provider: AWS
  • OpenMPTCProuter platform: x86_64
  • Country: Germany
dersch81 added the bug label on Jan 4, 2025

Ysurac commented Jan 4, 2025

Check in /etc/shorewall/params.net whether the correct interface is set. If not, correct it and restart Shorewall: systemctl restart shorewall
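That check can be scripted. A minimal sketch, assuming the standard file layout; the fix_net_iface helper name is mine, and the file path is a parameter so the same function works for both the IPv4 and IPv6 Shorewall configs:

```shell
#!/bin/sh
# fix_net_iface FILE IFACE
# Rewrite the NET_IFACE= line in a Shorewall params file if it does not
# already name IFACE. Returns 0 when the file was changed, 1 otherwise.
fix_net_iface() {
    file=$1
    iface=$2
    current=$(sed -n 's/^NET_IFACE=//p' "$file")
    if [ "$current" = "$iface" ]; then
        return 1   # already correct, nothing to do
    fi
    echo "updating NET_IFACE: '$current' -> '$iface'"
    sed -i "s/^NET_IFACE=.*/NET_IFACE=$iface/" "$file"
    return 0
}
```

Usage would then be, for example, `fix_net_iface /etc/shorewall/params.net ens5 && systemctl restart shorewall` (and likewise for /etc/shorewall6/params.net with shorewall6).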

dersch81 commented Jan 4, 2025

It is correct:

cat /etc/shorewall/params.net
NET_IFACE=ens5

No outage happened today, though. But the omr-admin journal on the VPS still shows all those entries, so that seems to be a separate issue, not related to mine.

Jan 04 17:37:31 ip-172-31-45-95 omr-admin.py[1152834]: Device "eth0" does not exist.
Jan 04 17:37:31 ip-172-31-45-95 omr-admin.py[1152838]: Device "eth0" does not exist.
Jan 04 17:37:32 ip-172-31-45-95 omr-admin.py[1152868]: Device "eth0" does not exist.
Jan 04 17:37:32 ip-172-31-45-95 omr-admin.py[1152872]: Device "eth0" does not exist.
Jan 04 17:37:32 ip-172-31-45-95 omr-admin.py[1152902]: Device "eth0" does not exist.
Jan 04 17:37:32 ip-172-31-45-95 omr-admin.py[1152906]: Device "eth0" does not exist.
Jan 04 17:37:32 ip-172-31-45-95 omr-admin.py[1152936]: Device "eth0" does not exist.
Jan 04 17:37:32 ip-172-31-45-95 omr-admin.py[1152940]: Device "eth0" does not exist.
Jan 04 17:37:33 ip-172-31-45-95 omr-admin.py[1152970]: Device "eth0" does not exist.
Jan 04 17:37:33 ip-172-31-45-95 omr-admin.py[1152974]: Device "eth0" does not exist.
Jan 04 18:40:48 ip-172-31-45-95 omr-admin.py[1330517]: Device "eth0" does not exist.
Jan 04 18:40:48 ip-172-31-45-95 omr-admin.py[1330521]: Device "eth0" does not exist.
Jan 04 18:40:49 ip-172-31-45-95 omr-admin.py[1330545]: Device "eth0" does not exist.
Jan 04 18:40:51 ip-172-31-45-95 omr-admin.py[1330555]: Device "eth0" does not exist.
Jan 04 18:40:51 ip-172-31-45-95 omr-admin.py[1330559]: Device "eth0" does not exist.
Jan 04 18:40:51 ip-172-31-45-95 omr-admin.py[1330589]: Device "eth0" does not exist.
Jan 04 18:40:51 ip-172-31-45-95 omr-admin.py[1330593]: Device "eth0" does not exist.
Jan 04 18:40:51 ip-172-31-45-95 omr-admin.py[1330623]: Device "eth0" does not exist.
Jan 04 18:40:51 ip-172-31-45-95 omr-admin.py[1330627]: Device "eth0" does not exist.
Jan 04 18:40:52 ip-172-31-45-95 omr-admin.py[1330657]: Device "eth0" does not exist.
Jan 04 18:40:52 ip-172-31-45-95 omr-admin.py[1330661]: Device "eth0" does not exist.
Jan 04 18:40:52 ip-172-31-45-95 omr-admin.py[1330691]: Device "eth0" does not exist.
Jan 04 18:40:52 ip-172-31-45-95 omr-admin.py[1330695]: Device "eth0" does not exist.

Ysurac commented Jan 4, 2025

Also check /etc/shorewall6/params.net.
If it's OK, can you try the snapshot script? wget -O - https://www.openmptcprouter.com/server-test/debian-x86_64.sh | KERNEL="6.6" sh
Otherwise I will try it on AWS.

dersch81 commented Jan 5, 2025

Ahhh there it is:

cat /etc/shorewall6/params.net
NET_IFACE=eth0

I will keep watching now. But your testing feedback on AWS would also be very welcome; maybe I am overlooking something specific there.

dersch81 commented Jan 5, 2025

I had no outage for 1.5 days, but I did a speed test just now and the issue happened again. The server no longer responds. I guess it is something on the AWS side, some restriction I don't know about. It would be great to find the cause there.

dersch81 commented Jan 5, 2025

I have now tested the same setup with the VPS on Hetzner.de. No issues so far: slightly lower aggregate speed, but stable. No speed test results in a VPS outage. I assume it is something specific to AWS.

By the way, after the VPS changes on the OMR side I can't use iperf anymore:

root@OpenMPTCProuter:~# omr-iperf
Segmentation fault

I understand I shouldn't install iperf from the OpenWrt repo. But how can I reinstall it? And what does "Segmentation fault" mean?

And my PPPoE uplink is always marked as down, but it is not:
[screenshot]

root@OpenMPTCProuter:~# omr-test-speed pppoe-wan2
Select best test server...
host: scaleway.testdebit.info - ping: 14
host: bordeaux.testdebit.info - ping:
host: aix-marseille.testdebit.info - ping: 28
host: lyon.testdebit.info - ping:
host: lille.testdebit.info - ping:
host: paris.testdebit.info - ping:
host: appliwave.testdebit.info - ping: 19
host: speedtest.frankfurt.linode.com - ping: 5
host: speedtest.tokyo2.linode.com - ping: 256
host: speedtest.singapore.linode.com - ping: 320
host: speedtest.newark.linode.com - ping: 92
host: speedtest.atlanta.linode.com - ping: 108
host: speedtest.dallas.linode.com - ping: 129
host: speedtest.fremont.linode.com - ping: 162
host: ipv4.bouygues.testdebit.info - ping:
host: par.download.datapacket.com - ping: 13
host: nyc.download.datapacket.com - ping: 85
host: ams.download.datapacket.com - ping: 10
host: fra.download.datapacket.com - ping: 5
host: lon.download.datapacket.com - ping: 16
host: mad.download.datapacket.com - ping: 28
host: prg.download.datapacket.com - ping: 10
host: sto.download.datapacket.com - ping: 35
host: vie.download.datapacket.com - ping: 16
host: war.download.datapacket.com - ping: 27
host: atl.download.datapacket.com - ping: 101
host: chi.download.datapacket.com - ping: 106
host: lax.download.datapacket.com - ping: 152
host: mia.download.datapacket.com - ping: 120
host: nyc.download.datapacket.com - ping: 85
host: speedtest.milkywan.fr - ping: 13
Best server is http://speedtest.frankfurt.linode.com/garbage.php?ckSize=10000, running test:
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  378M    0  378M    0     0  17.3M      0 --:--:--  0:00:21 --:--:-- 21.8M

Ysurac commented Jan 5, 2025

iperf3 is already available; the omr-iperf script only adds the key needed for the VPS iperf server.
For pppoe-wan2 you should check in Status->System log why it's detected as down. It can be due to ICMP/ping being blocked, for example (this can be set per interface with custom settings in Services->OMR-Tracker Manager).

@dersch81
Copy link
Author

dersch81 commented Jan 5, 2025

iperf3 is already available; the omr-iperf script only adds the key needed for the VPS iperf server. For pppoe-wan2 you should check in Status->System log why it's detected as down. It can be due to ICMP/ping being blocked, for example (this can be set per interface with custom settings in Services->OMR-Tracker Manager).

But I can't use the omr-iperf script because of the "Segmentation fault". How can I fix that?

The PPPoE output looks good, but it is still marked as down.

Jan  5 16:48:50 OpenMPTCProuter daemon.info pppd[27178]: Using interface pppoe-wan2
Jan  5 16:48:50 OpenMPTCProuter daemon.notice pppd[27178]: Connect: pppoe-wan2 <--> eth3
Jan  5 16:48:50 OpenMPTCProuter user.notice NET: hotplug (iface): action='add' interface='pppoe-wan2'
Jan  5 16:48:50 OpenMPTCProuter daemon.info ModemManager[22202]: hotplug: add network interface pppoe-wan2: event processed
Jan  5 16:48:50 OpenMPTCProuter daemon.info pppd[27178]: Remote message: SRU=39968#SRD=192414#
Jan  5 16:48:50 OpenMPTCProuter daemon.notice pppd[27178]: PAP authentication succeeded
Jan  5 16:48:50 OpenMPTCProuter daemon.notice pppd[27178]: peer from calling number EC:13:DB:13:40:29 authorized
Jan  5 16:48:50 OpenMPTCProuter daemon.notice ttyd[29043]: [2025/01/05 17:48:50:7649] N: rops_handle_POLLIN_netlink: DELADDR
Jan  5 16:48:50 OpenMPTCProuter daemon.notice pppd[27178]: local  IP address 79.206.202.17
Jan  5 16:48:50 OpenMPTCProuter daemon.notice pppd[27178]: remote IP address 62.155.246.179
Jan  5 16:48:50 OpenMPTCProuter daemon.notice pppd[27178]: primary   DNS address 217.237.150.51
Jan  5 16:48:50 OpenMPTCProuter daemon.notice pppd[27178]: secondary DNS address 217.237.148.22
Jan  5 16:48:50 OpenMPTCProuter daemon.notice netifd: Network device 'pppoe-wan2' link is up
Jan  5 16:48:50 OpenMPTCProuter daemon.notice pppd[27178]: local  LL address fe80::9d33:bfaf:8fa1:0e1e
Jan  5 16:48:50 OpenMPTCProuter daemon.notice pppd[27178]: remote LL address fe80::ee13:dbff:fe13:4029
Jan  5 16:48:50 OpenMPTCProuter daemon.notice netifd: Interface 'wan2' is now up
Jan  5 16:48:50 OpenMPTCProuter user.notice firewall: Reloading firewall due to ifup of wan2 (pppoe-wan2)
Jan  5 16:48:51 OpenMPTCProuter user.notice firewall.omr-server: Firewall reload, set server part firewall reloading
Jan  5 16:48:52 OpenMPTCProuter user.notice MPTCP: Set pppoe-wan2 to on from pppoe-wan2 is deactivated
Jan  5 16:48:55 OpenMPTCProuter daemon.info vnstatd[14547]: Info: Interface "pppoe-wan2" enabled.
Jan  5 16:49:00 OpenMPTCProuter user.notice omr-schedule-010-services: Set firewall on server vps

@sieade245

Might be of some help: I've been using an AWS Lightsail VPS for the past 2-3 weeks and have not had any issues like this at all. I'm not using any snapshots though, just the stable version of both server and router.

I also get the segmentation fault when trying to use iperf. I messed about with it for a bit but gave up. It didn't crash my VPS though.

I've now moved back to my main VPS on IONOS. The reason I moved to AWS is that I had a similar, but not the same, issue on IONOS: one day the VPS just stopped working, and I hadn't made any changes. My backup VPS also stopped working; AWS was the only one that kept working. A week later the IONOS server started working again, still with no changes on my side. I mention this because, as you've highlighted yourself, you shouldn't rule out a VPS provider issue.

Ysurac commented Jan 6, 2025

@dersch81 iperf may crash due to a wrong key, I don't know. You can restore the keys via System->OpenMPTCProuter, Wizard tab, with the "advanced settings" checkbox and then the "Force retrieve settings" checkbox. Otherwise you can use iperf3 directly with another server.

For the PPPoE device I don't see any OMR-Tracker log entries, which seems strange. Can you restart it via /etc/init.d/omr-tracker restart and check the log again?

dersch81 commented Jan 6, 2025

omr-iperf is working now. I did the force retrieve again (which I had done several times before).

But it is still not correct:

root@OpenMPTCProuter:~# omr-iperf
Connecting to host 116.203.76.66, port 65400
[  5] local 130.180.51.242 port 48045 connected to 116.203.76.66 port 65400
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  3.75 MBytes  31.4 Mbits/sec   18    133 KBytes
[  5]   1.00-2.00   sec  5.12 MBytes  43.0 Mbits/sec    9    132 KBytes
[  5]   2.00-3.00   sec  5.25 MBytes  44.1 Mbits/sec   32    109 KBytes
[  5]   3.00-4.00   sec  5.75 MBytes  48.2 Mbits/sec   12    205 KBytes
^C[  5]   4.00-4.08   sec   512 KBytes  50.2 Mbits/sec    0    185 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-4.08   sec  20.4 MBytes  41.9 Mbits/sec   71             sender
[  5]   0.00-4.08   sec  0.00 Bytes  0.00 bits/sec                  receiver
iperf3: interrupt - the client has terminated

root@OpenMPTCProuter:~# omr-iperf vps -R
iperf3: error - unable to receive control message - port may not be available, the other side may have stopped running, etc.: Connection reset by peer
root@OpenMPTCProuter:~#

After restarting the omr-tracker service I just see this:

Jan  6 13:57:15 OpenMPTCProuter user.notice omr-tracker: Launching...
Jan  6 13:57:32 OpenMPTCProuter user.notice omr-tracker: Launched
Jan  6 13:57:33 OpenMPTCProuter daemon.info omr-tracker-xray: xray is up (can contact via http 212.27.48.10)

The tracker itself is working:

[screenshot]

As you can see, another VDSL line is also connected via DHCP on the same POP (DTAG-DIAL24). Only my local PPPoE is not tracked, but it is working in aggregation mode.

Ysurac commented Jan 6, 2025

I would need the output of uci show network.wan2, uci show openmptcprouter.wan2 and ifstatus wan2.

dersch81 commented Jan 6, 2025

root@OpenMPTCProuter:~# uci show network.wan2
network.wan2=interface
network.wan2.device='eth3'
network.wan2.proto='pppoe'
network.wan2.ip4table='wan'
network.wan2.multipath='master'
network.wan2.defaultroute='0'
network.wan2.addlatency='0'
network.wan2.peerdns='0'
network.wan2.ipv6='1'
network.wan2.label='TelekomVDSL'
network.wan2.pppd_options='persist maxfail 0'
network.wan2.auth='both'
network.wan2.username='xxxxxx'
network.wan2.password='xxxxxx'
network.wan2.ip6assign='56'
network.wan2.metric='7'
root@OpenMPTCProuter:~# uci show openmptcprouter.wan2
openmptcprouter.wan2=interface
openmptcprouter.wan2.multipath='master'
openmptcprouter.wan2.multipathvpn='0'
openmptcprouter.wan2.testspeed='1'
openmptcprouter.wan2.state='down'
openmptcprouter.wan2.local_ipv4='192.168.33.27'  <== This is the IP of another WAN interface
openmptcprouter.wan2.manufacturer='huawei'
openmptcprouter.wan2.latency='5'
openmptcprouter.wan2.latency_previous='6'
openmptcprouter.wan2.asn='DTAG-DIAL24'
openmptcprouter.wan2.metric='7'
root@OpenMPTCProuter:~# ifstatus wan2
{
        "up": true,
        "pending": false,
        "available": true,
        "autostart": true,
        "dynamic": false,
        "uptime": 2548,
        "l3_device": "pppoe-wan2",
        "proto": "pppoe",
        "device": "eth3",
        "updated": [
                "addresses",
                "routes"
        ],
        "metric": 7,
        "dns_metric": 0,
        "delegation": true,
        "ipv4-address": [
                {
                        "address": "84.155.xxx.xxx",
                        "mask": 32,
                        "ptpaddress": "62.155.xxx.xxx"
                }
        ],
        "ipv6-address": [
                {
                        "address": "fe80::cdf1:1d51:a1e6:3d5c",
                        "mask": 128
                }
        ],
        "ipv6-prefix": [

        ],
        "ipv6-prefix-assignment": [
                {
                        "address": "fdde:43da:e68c::",
                        "mask": 56,
                        "local-address": {
                                "address": "fdde:43da:e68c::1",
                                "mask": 56
                        }
                }
        ],
        "route": [

        ],
        "dns-server": [

        ],
        "dns-search": [

        ],
        "neighbors": [

        ],
        "inactive": {
                "ipv4-address": [

                ],
                "ipv6-address": [

                ],
                "route": [
                        {
                                "target": "0.0.0.0",
                                "mask": 0,
                                "nexthop": "62.155.246.179",
                                "source": "0.0.0.0/0"
                        }
                ],
                "dns-server": [
                        "217.237.150.51",
                        "217.237.148.22"
                ],
                "dns-search": [

                ],
                "neighbors": [

                ]
        },
        "data": {

        }
}

It is referencing the wrong WAN:
[screenshot]

Ysurac commented Jan 6, 2025

Strange. I would need ip a to check why it's not using the right interface.
PPPoE is not really tested; I don't have any PPPoE connections...

dersch81 commented Jan 6, 2025

root@OpenMPTCProuter:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host proto kernel_lo
       valid_lft forever preferred_lft forever
2: ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop state DOWN group default qlen 1000
    link/tunnel6 :: brd :: permaddr fe77:20be:1c2f::
3: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/sit 0.0.0.0 brd 0.0.0.0
4: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN group default qlen 1000
    link/gre 0.0.0.0 brd 0.0.0.0
5: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
6: erspan0@NONE: <BROADCAST,MULTICAST> mtu 1450 qdisc noop state DOWN group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
7: ip6gre0@NONE: <NOARP> mtu 1448 qdisc noop state DOWN group default qlen 1000
    link/gre6 :: brd :: permaddr 8668:1ca0:d03f::
8: teql0: <NOARP> mtu 1500 qdisc noop state DOWN group default qlen 100
    link/void
9: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:d0:b4:03:30:f0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.1/29 brd 192.168.2.7 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::2d0:b4ff:fe03:30f0/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever
10: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc cake state UP group default qlen 1000
    link/ether 00:d0:b4:03:30:f1 brd ff:ff:ff:ff:ff:ff
    inet 100.117.46.120/10 brd 100.127.255.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::2d0:b4ff:fe03:30f1/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever
11: eth2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 00:d0:b4:03:30:f2 brd ff:ff:ff:ff:ff:ff
    inet 130.180.51.242/30 brd 130.180.51.243 scope global eth2
       valid_lft forever preferred_lft forever
12: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc cake state UP group default qlen 1000
    link/ether 00:d0:b4:03:30:f3 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::2d0:b4ff:fe03:30f3/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever
144: ifb4eth3: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc cake state UNKNOWN group default qlen 32
    link/ether 82:74:be:18:e9:e7 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::8074:beff:fe18:e9e7/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever
147: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 100
    link/none
    inet 10.255.255.2 peer 10.255.255.1/32 scope global tun0
       valid_lft forever preferred_lft forever
    inet6 fe80::ffcc:bb57:717e:8584/64 scope link stable-privacy proto kernel_ll
       valid_lft forever preferred_lft forever
154: eth1.111@eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:d0:b4:03:30:f1 brd ff:ff:ff:ff:ff:ff
    inet 10.10.11.30/24 brd 10.10.11.255 scope global eth1.111
       valid_lft forever preferred_lft forever
    inet6 fe80::2d0:b4ff:fe03:30f1/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever
155: eth1.176@eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:d0:b4:03:30:f1 brd ff:ff:ff:ff:ff:ff
    inet 192.168.176.102/24 brd 192.168.176.255 scope global eth1.176
       valid_lft forever preferred_lft forever
    inet6 fe80::2d0:b4ff:fe03:30f1/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever
156: eth1.222@eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:d0:b4:03:30:f1 brd ff:ff:ff:ff:ff:ff
    inet 192.168.33.27/24 brd 192.168.33.255 scope global eth1.222
       valid_lft forever preferred_lft forever
    inet6 fe80::2d0:b4ff:fe03:30f1/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever
163: ifb4eth2: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 32
    link/ether 8e:24:26:d3:26:b0 brd ff:ff:ff:ff:ff:ff
176: pppoe-wan2: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1492 qdisc fq_codel state UNKNOWN group default qlen 1000
    link/ppp
    inet 79.206.197.209 peer 62.155.246.179/32 scope global pppoe-wan2
       valid_lft forever preferred_lft forever
    inet6 fdde:43da:e68c::1/56 scope global noprefixroute
       valid_lft forever preferred_lft forever
    inet6 fe80::c56e:24f4:39ec:e7de peer fe80::ee13:dbff:fe13:4029/128 scope link
       valid_lft forever preferred_lft forever
187: ifb4eth1: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc cake state UNKNOWN group default qlen 32
    link/ether 66:de:88:f6:a7:bb brd ff:ff:ff:ff:ff:ff
    inet6 fe80::64de:88ff:fef6:a7bb/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever

Ysurac commented Jan 6, 2025

A new snapshot image is compiling, I hope this will fix the issue...

dersch81 commented Jan 9, 2025

Hi @Ysurac is the snapshot done?

Ysurac commented Jan 9, 2025

Yes, it is available at https://snapshots.openmptcprouter.com/6.6/

dersch81 commented Jan 9, 2025

OK, I installed the v0.62 snapshot from January 8th.

The symptom has changed but is not resolved:

[screenshot]

That's correct now:

root@OpenMPTCProuter:~# uci show openmptcprouter.wan2
openmptcprouter.wan2=interface
openmptcprouter.wan2.multipath='master'
openmptcprouter.wan2.multipathvpn='0'
openmptcprouter.wan2.testspeed='1'
openmptcprouter.wan2.state='up'
openmptcprouter.wan2.local_ipv4='84.155.xxx.xxx'
openmptcprouter.wan2.manufacturer='huawei'
openmptcprouter.wan2.latency='7'
openmptcprouter.wan2.latency_previous='5'
openmptcprouter.wan2.asn='DTAG-DIAL24'
openmptcprouter.wan2.metric='7'
openmptcprouter.wan2.lc='1736428525'
openmptcprouter.wan2.local_ipv6='fdde:43da:e68c::1'
openmptcprouter.wan2.mptcp_status='MPTCP enabled'
openmptcprouter.wan2.mplc='1736428526'
openmptcprouter.wan2.testspeed_lc='1736428673'

I tried to restart the tracker manually:

root@OpenMPTCProuter:~# /etc/init.d/omr-tracker restart
/etc/rc.common: line 105: service_data: not found


Jan  9 13:20:50 OpenMPTCProuter user.notice omr-tracker: Launching...
Jan  9 13:21:05 OpenMPTCProuter user.notice omr-tracker: Launched
Jan  9 13:21:05 OpenMPTCProuter user.notice post-tracking-002-error: wan1 (eth2) switched off because link down
Jan  9 13:21:05 OpenMPTCProuter user.notice post-tracking-002-error: Delete default route to 116.203.76.66 dev eth2
Jan  9 13:21:05 OpenMPTCProuter daemon.info omr-tracker-xray: xray is up (can contact via http 212.27.48.10)

(eth2 really is down, and it is another WAN)

Ysurac commented Jan 9, 2025

The service_data error is due to an OpenWrt change; not a big problem.
For wan1 being down, that happens when ifstatus wan1 returns up=false, or when ip link show eth2 reports the link as down.
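Those two checks can be reproduced by hand. A minimal sketch; the is_up helper name is mine, and the JSON here is a hard-coded sample so the snippet is self-contained — on the router you would pipe the real `ifstatus wan1` output into it and also look at `ip link show eth2`:

```shell
#!/bin/sh
# is_up: read ifstatus-style JSON on stdin and succeed when it contains
# "up": true. A crude grep-based parse for illustration; on OpenWrt,
# jsonfilter would be the proper tool for this.
is_up() {
    grep -q '"up":[[:space:]]*true'
}

# Sample resembling `ifstatus wan1` output; replace with the real call.
sample='{ "up": false, "pending": false, "available": true }'

if printf '%s' "$sample" | is_up; then
    echo "wan1 is up"
else
    echo "wan1 is down, so omr-tracker switches it off"
fi
```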

dersch81 commented Jan 9, 2025

Yes, that's OK. But the PPPoE tracker issue is still there; it just changed from red to orange. So close to green :D

Ysurac commented Jan 9, 2025

Do you have a message with the orange status? Orange is only a warning.
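For reference, the tri-state colouring can be thought of as a function of how many of the per-interface connectivity tests pass. This is only an illustrative sketch, not OMR's actual code; the link_status name and the thresholds are assumptions:

```shell
#!/bin/sh
# link_status PASSED TOTAL
# Map test results to a status colour: all tests pass = up (green),
# some pass = warning (orange, "some connectivity tests failed"),
# none pass = down (red). Illustrative logic only.
link_status() {
    passed=$1
    total=$2
    if [ "$passed" -eq "$total" ]; then
        echo up
    elif [ "$passed" -gt 0 ]; then
        echo warning
    else
        echo down
    fi
}
```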

dersch81 commented Jan 9, 2025

Just "some connectivity tests failed", but I have no clue what kind of test.

[screenshot]
