
Support IPv6 underlay. #27

Open

wants to merge 1 commit into base: main
Conversation

numansiddique
Contributor

This patch provides a config option - IPV6_UNDERLAY. If it is
set to "yes", then this patch configures the ovsdb-servers to
listen on IPv6 addresses, and all the ovn-controllers and ovn-northd
talk to the DB servers over IPv6.

The link-local address of interface eth1 is used for this purpose.
I don't see a reason to assign a global IPv6 address to each of
the fake node containers.

Signed-off-by: Numan Siddique <[email protected]>

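The kernel derives the eth1 link-local address from the interface MAC via the modified EUI-64 rule, so no explicit address assignment is needed. A minimal sketch of that derivation (a hypothetical helper for illustration, not part of this patch):

```shell
#!/usr/bin/env bash
# Hypothetical helper (not from this patch): compute the IPv6
# link-local address the kernel assigns for a given MAC, using
# modified EUI-64: flip the universal/local bit of the first
# octet and insert ff:fe between the two MAC halves.
mac_to_lla() {
    local o1 o2 o3 o4 o5 o6
    IFS=: read -r o1 o2 o3 o4 o5 o6 <<< "$1"
    o1=$(printf '%02x' $(( 0x$o1 ^ 0x02 )))   # flip the U/L bit
    printf 'fe80::%x:%x:%x:%x\n' \
        "0x$o1$o2" "0x${o3}ff" "0xfe$o4" "0x$o5$o6"
}

# The fake VM MAC that appears in the logs below:
mac_to_lla 50:54:00:00:00:03   # → fe80::5254:ff:fe00:3
```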
@flavio-fernandes
Contributor

hm... I am hitting an issue when I try to start the cluster from the vagrant image. Will look into it a little deeper:

[root@ovnhostvm vagrant]# export ENABLE_SSL=no
[root@ovnhostvm vagrant]# export IPV6_UNDERLAY=yes
[root@ovnhostvm vagrant]# bash -x ./ovn_cluster.sh start

...
Creating a fake VM in ovn-chassis-1 for logical port - sw0-port1
+ docker exec ovn-chassis-1 bash /data/create_fake_vm.sh sw0-port1 sw0p1 50:54:00:00:00:03 10.0.0.3 24 10.0.0.1 1000::3/64 1000::a
ovs-vsctl: no bridge named br-int
Cannot find device "sw0p1"
Cannot find device "sw0p1"
Cannot find device "sw0p1"
Cannot find device "sw0p1"
Cannot find device "sw0p1"
Cannot find device "sw0p1"
Cannot find device "sw0p1"
[root@ovnhostvm vagrant]# docker exec ovn-chassis-1 bash /data/create_fake_vm.sh sw0-port1 sw0p1 50:54:00:00:00:03 10.0.0.3 24 10.0.0.1 1000::3/64 1000::a
Cannot create namespace file "/var/run/netns/sw0p1": File exists
ovs-vsctl: no bridge named br-int
Cannot find device "sw0p1"

Contributor

@flavio-fernandes left a comment

checking issue when using Vagrant...

@flavio-fernandes
Copy link
Contributor

flavio-fernandes commented May 9, 2020

Hmm.... looks like no ipv6 addresses are being configured on eth1.
Maybe we need a newer/better version of the ovs-docker script that adds ipv6 addrs?

Also, we will likely need to enable ipv6 as part of installing docker in
provisioning/install_docker.sh

Ref: https://michael.stapelberg.ch/posts/2018-12-12-docker-ipv6/
file: /etc/docker/daemon.json

{
  "ipv6": true,
  "fixed-cidr-v6": "2001:db8:13b:330:ffff::/80"
}

Let me know if you need help with any of these... ;)

@numansiddique
Contributor Author

That's strange. I tested with Fedora 32 + podman and it worked for me.
Basically this patch uses the IPv6 link-local addresses. Each interface should get the LLA configured, provided IPv6 is enabled.

I also tested Fedora 31 + docker and it worked.

Are you sure IPv6 is enabled in your environment?

@flavio-fernandes
Contributor

> That's strange. I tested with Fedora 32 + podman and it worked for me.
> Basically this patch uses the IPv6 link-local addresses. Each interface should get the LLA configured, provided IPv6 is enabled.
>
> I also tested Fedora 31 + docker and it worked.
>
> Are you sure IPv6 is enabled in your environment?

Strange indeed. I used vagrant to bring up the containers. Will look into it a little bit more now.

@flavio-fernandes
Contributor

flavio-fernandes commented May 10, 2020

> That's strange. I tested with Fedora 32 + podman and it worked for me.
> Basically this patch uses the IPv6 link-local addresses. Each interface should get the LLA configured, provided IPv6 is enabled.
>
> I also tested Fedora 31 + docker and it worked.
>
> Are you sure IPv6 is enabled in your environment?

@numansiddique : Okay, I think I figured out the issue I was having.
The containers were coming up with ipv6 disabled. Please take a look & rebase this PR on
top of #28 .
Using Vagrant is an easy way to reproduce the issue.
All that is needed is to start the containers and run the following command:

docker exec -uroot -it ovn-chassis-1 \
sysctl net.ipv6.conf.all.disable_ipv6 

If you see '1', then you know there is trouble for ipv6 ;)
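One way to rule this out is to enable IPv6 explicitly, either in a running container or at container start time. A sketch (commands assume root inside the container; they are illustrative, not part of this PR or of #28):

```shell
# Flip the sysctl back on inside a running container:
docker exec -uroot ovn-chassis-1 \
    sysctl -w net.ipv6.conf.all.disable_ipv6=0

# Or set it at start time, so eth1 gets its LLA immediately:
docker run --sysctl net.ipv6.conf.all.disable_ipv6=0 \
    --name ovn-chassis-1 -d <image>   # <image> is a placeholder
```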

@numansiddique
Contributor Author

> That's strange. I tested with Fedora 32 + podman and it worked for me.
> Basically this patch uses the IPv6 link-local addresses. Each interface should get the LLA configured, provided IPv6 is enabled.
> I also tested Fedora 31 + docker and it worked.
> Are you sure IPv6 is enabled in your environment?

> @numansiddique : Okay, I think I figured out the issue I was having.
> The containers were coming up with ipv6 disabled. Please take a look & rebase this PR on
> top of #28 .
> Using Vagrant is an easy way to reproduce the issue.
> All that is needed is to start the containers and run the following command:
>
> docker exec -uroot -it ovn-chassis-1 \
> sysctl net.ipv6.conf.all.disable_ipv6
>
> If you see '1', then you know there is trouble for ipv6 ;)

The ovs-docker script from here is used - https://github.com/ovn-org/ovn-fake-multinode/blob/master/ovs-docker

Contributor

@flavio-fernandes left a comment

The changes look good. And I am able to try it out using Vagrant when I use #28. Nice!

Contributor

@dceara left a comment

This takes care of the DB traffic but I think that IPV6_UNDERLAY=yes should also make ovn-fake-multinode use IPv6 for tunneled traffic between hypervisors (i.e., use IPv6 for ovn-encap-ip).
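The tunnel endpoint is configured through the Open_vSwitch table's external-ids, so an IPv6 underlay for encap traffic would look roughly like this (a sketch with an illustrative documentation-prefix address, not code from this PR):

```shell
# Illustrative only: point the chassis tunnel endpoint at an
# IPv6 address instead of an IPv4 one. 2001:db8::11 is a
# placeholder from the documentation prefix.
ovs-vsctl set open . external-ids:ovn-encap-type=geneve
ovs-vsctl set open . external-ids:ovn-encap-ip=2001:db8::11
```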

@flavio-fernandes
Contributor

oh sad panda!!!!

==> default: Running provisioner: start_ovn_cluster (shell)...
    default: Running: inline script
    default: Starting OVN cluster
    default: Waiting for containers to be up..
    default: Adding ovs-ports
    default: Error response from daemon: client version 1.40 is too new. Maximum supported API version is 1.39
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.

docker is too new?!?!

@numansiddique
Contributor Author

> The SSH command responded with a non-zero exit status.

Ack. I will address this. But I'm a bit backlogged because of PTO.

Thanks

@flavio-fernandes
Contributor

@numansiddique @dceara shall we re-visit this PR? I forget what is missing here....

@numansiddique
Contributor Author

@flavio-fernandes I think @mohammadheib is looking into this.

3 participants