SD-EWAN debug

SD-WAN (software-defined WAN) is the trend.

Even the Pentagon is adopting it: Aruba Bumps Cisco From the Pentagon

The Pentagon is removing its Cisco switching gear and using Aruba's Edge Services Platform (ESP) to automate network deployment (including SD-WAN).

Aruba Networks swept the last remaining vestiges of Cisco’s switching gear from the Pentagon building as part of a Department of Defense modernization effort announced today.

The deal will see Aruba deploy its Edge Services Platform (ESP) architecture to automate the network, eliminating time consuming and often tedious processes such as port mapping and initial switch configuration. Aruba ESP is targeted at campuses, data centers, branches, and remote workers, but it only supports Aruba’s access points (APs), switches, and SD-WAN gateways.

The vendor’s Access Switches will replace existing Cisco switching gear that has reached end of life. The deployment will include more than 150,000 wired ports, distributed across the 6.5 million-square-foot facility. Alongside wired infrastructure, the DoD plans to deploy 3,000 additional Aruba APs to further extend wireless access throughout the campus.

While Aruba ESP can be operated in the cloud, on-site, or as a managed service through one of Aruba’s partners, the DoD will be using Aruba’s ClearPass Policy Manager to orchestrate the networking overhaul. Initially the platform will be used to manage Aruba’s switches to secure access controls across the network. However, Aruba notes that the DoD could extend these controls to its APs, unifying both wired and wireless networks.

SD-EWAN  

SD-EWAN (GitHub) stands for Software-Defined Edge WAN; it is used to handle the networking between multiple edge clusters, or between an edge and the internet.

This is a project developed mainly by the Intel Cloud Shanghai team, based on OpenWrt.

The OpenNESS project is integrating it. Many problems came up during the integration and needed debugging, so I record them all here.

This is an e2e test-connection script from the sdewan source code:

https://github.com/akraino-edge-stack/icn-sdwan/tree/master/platform/test/e2e-test

The introduction to the e2e scenario via a hub is here:

https://wiki.akraino.org/display/AK/SD-EWAN+Scenarios

Debug

sdewan depends on ovn4nfv

Please follow this guide to set up the OVN CNI: https://github.com/opnfv/ovn4nfv-k8s-plugin/blob/master/doc/how-to-use.md#testing-with-cni-proxy
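
A quick sanity check after the setup, assuming the default ovn4nfv component names used later on this page:

# all ovn4nfv pods should be Running
kubectl get pods -A -o wide | grep -E 'ovn-control-plane|ovn-controller|nfn-operator|nfn-agent|ovn4nfv-cni'
# the ovn4nfv CRDs should be registered
kubectl get crd | grep k8s.plugin.opnfv.org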

Utility 

# getpodns: find the first pod matching pattern $1 on node $2 (or $PODNODE), then set NS and POD
function getpodns()
{
    [[ ! -z  ${2:-$PODNODE} ]] && echo "set: PODNODE=${2:-$PODNODE}" || kubectl get nodes
    echo -e "usage:\n  getpodns \$POD_PATTERN \$PODNODE"
    echo -e "  Search pod pattern: $1 on node: ${2:-$PODNODE} \n"

    PODINFO=`kubectl get pod -A -o wide --field-selector spec.nodeName=${2:-$PODNODE} |grep $1 | head -n 1`
    NS="$(cut -d' ' -f1 <<< $PODINFO)"
    POD=$(awk -F" " '{print $2}' <<< $PODINFO)
    echo "POD:  $POD"
    echo "kubectl -n $NS"
}
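
For example, to locate the nfn-operator pod on a given node (icn-nuc8i7hvk is just the node name from the outputs below; getpodns leaves NS and POD set for the follow-up commands):

PODNODE=icn-nuc8i7hvk
getpodns nfn-operator
kubectl -n $NS logs --tail=50 $POD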

check the cluster info

kubectl get node
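
For a bit more detail (node IPs, roles, container runtime), these standard commands also help:

kubectl get nodes -o wide
kubectl cluster-info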

check CNI config

Usually we use Multus to support multiple interfaces; use this command to check its config:

cat /etc/cni/net.d/00-multus.conf | jq .
# same command
cat /etc/cni/net.d/00-multus.conf | python -m json.tool

Check the underlying CNI plugin's config with one of these commands:

# for calico
cat  /etc/cni/net.d/*calico.conflist* 
# for flannel
cat /etc/cni/net.d/*flannel.conflist
# for other
cat /etc/cni/net.d/*network*

check sdewan info:

There are two kinds of deployments for sdewan: one is the sdewan controller manager, the other is the sdewan CNF.

check that the CNF runs correctly

References: Split string by delimiter and get N-th element; Kubernetes API - get Pods on specific nodes; Get all worker nodes.

get by label

kubectl get pods -l 'sdewanPurpose,sdewanPurpose notin ()' -o wide

result

NAME                            READY   STATUS    RESTARTS   AGE    IP              NODE            NOMINATED NODE   READINESS GATES
sdewan-edge-a-5984669d8-r9cc8   1/1     Running   0          162m   10.233.82.217   icn-nuc8i7hvk   <none>           <none>

get CNF details

kubectl get deployments  -l 'sdewanPurpose,sdewanPurpose notin ()'
CNFDEP=`kubectl get deployments  -l 'sdewanPurpose,sdewanPurpose notin ()' -o name`
# get the last master
MASTER=`kubectl get node --selector='node-role.kubernetes.io/master' -o=custom-columns=NAME:.metadata.name |tail -1`
# get the last worker; if there is no worker it will be "NAME"
WORKER=`kubectl get node --selector='!node-role.kubernetes.io/master' -o=custom-columns=NAME:.metadata.name |tail -1`
PODNODE=$MASTER
SDEWANCNF=`kubectl get pod -A -o wide --field-selector spec.nodeName=$PODNODE|grep ${CNFDEP##*/} | head -n 1`
NS="$(cut -d' ' -f1 <<< $SDEWANCNF)"
POD=$(awk -F" " '{print $2}' <<< $SDEWANCNF)
# check interfaces
kubectl -n $NS exec -it $POD -- ifconfig
# check the pod
kubectl -n $NS describe pod $POD
# get interface annotations
kubectl -n $NS get pod $POD -o=jsonpath='{.metadata.annotations}'

get sdewan controller

kubectl get pods -l control-plane=controller-manager -A -o wide

result

NAMESPACE       NAME                                         READY   STATUS    RESTARTS   AGE     IP              NODE            NOMINATED NODE   READINESS GATES
sdewan-system   sdewan-controller-manager-69f8c6c5bf-rjrn5   2/2     Running   0          3h42m   10.233.82.201   icn-nuc8i7hvk   <none>           <none>

 

OVN check

There are 4 components in ovn4nfv: ovn-control-plane, ovn-controller, nfn-operator, and nfn-agent.

ovn-control-plane and nfn-operator run on the master; ovn-controller and nfn-agent run on the worker nodes (they are DaemonSets, so they run on every node).

The default ovn4nfv log is /var/log/openvswitch/ovn4k8s.log; the OVN and Open vSwitch logs can be found in /var/log/openvswitch and /var/log/ovn.

grep error /var/log/openvswitch/ovn4k8s.log

# on centos
grep error /var/log/messages |grep sdewan| grep multus
# if there are too many logs, you can use tail
tail -n 500  /var/log/messages |grep error


# on ubuntu 
grep error /var/log/syslog |grep sdewan| grep multus
# if there are too many logs, you can use tail
tail -n 500  /var/log/syslog |grep error

check that nfn-agent can connect to ovsdb

PODNODE=$MASTER
PODPREFIX=nfn-agent

kubectl get DaemonSet -A |grep $PODPREFIX
NS=`kubectl get DaemonSet -A |grep $PODPREFIX | cut -d " " -f1`
kubectl -n $NS get DaemonSet $PODPREFIX 
PODINFO=`kubectl get pod -A -o wide --field-selector spec.nodeName=$PODNODE|grep $PODPREFIX | head -n 1` 
NS="$(cut -d' ' -f1 <<< $PODINFO)"
POD=$(awk -F" " '{print $2}' <<< $PODINFO)
kubectl -n $NS exec -it $POD -- ovs-vsctl get open_vswitch . external_ids
CONNECTION=`kubectl -n $NS exec -it $POD -- ovs-vsctl get open_vswitch . external_ids | grep -o -P 'tcp:.*?(?=")'`
IP="$(cut -d':' -f2 <<< $CONNECTION)"
PORT=$(awk -F":" '{print $3}' <<< $CONNECTION)
kubectl -n $NS exec -it $POD -- nc -v $IP $PORT

Or we can install OVS on the host:

sudo yum install -y epel-release 
sudo yum install -y centos-release-openstack-train 
sudo yum install openvswitch 
sudo ovs-vsctl get open_vswitch . external_ids 

check nfn-agent log (log options):

# get the latest 10 minutes logs
kubectl -n $NS logs --since=10m $POD

# get the most recent 500 lines logs
kubectl -n $NS logs --tail=500 $POD

check ovn-controller log

PODPREFIX=ovn-controller
kubectl get DaemonSet -A |grep $PODPREFIX
NS=`kubectl get DaemonSet -A |grep $PODPREFIX | cut -d " " -f1`
kubectl -n $NS get DaemonSet $PODPREFIX 
PODINFO=`kubectl get pod -A -o wide --field-selector spec.nodeName=$PODNODE|grep $PODPREFIX | head -n 1` 
NS="$(cut -d' ' -f1 <<< $PODINFO)"
POD=$(awk -F" " '{print $2}' <<< $PODINFO)
# get the latest 10 minutes logs
kubectl -n $NS logs --since=10m $POD
# get the most recent 500 lines logs
kubectl -n $NS logs --tail=500 $POD

check nfn-operator log

PATTERN=nfn-operator
getpodns $PATTERN
# get the latest 10 minutes logs
kubectl -n $NS logs --since=10m $POD

# get the most recent 500 lines logs
kubectl -n $NS logs --tail=500 $POD


# get pod
kubectl -n $NS describe pod $POD
kubectl -n $NS get pod $POD

# get deployment 
kubectl -n $NS get deploy $PATTERN

check ovn-control-plane log

PATTERN=ovn-control-plane
getpodns $PATTERN
# get the latest 10 minutes logs
kubectl -n $NS logs --since=10m $POD

# get the most recent 500 lines logs
kubectl -n $NS logs --tail=500 $POD


# get pod
kubectl -n $NS describe pod $POD
kubectl -n $NS get pod $POD

# get deployment 
kubectl -n $NS get deploy $PATTERN

check ovn4nfv-cni log

PATTERN=ovn4nfv-cni
getpodns $PATTERN
# get the latest 10 minutes logs
kubectl -n $NS logs --since=10m $POD

# get the most recent 500 lines logs
kubectl -n $NS logs --tail=500 $POD

# get pod
kubectl -n $NS describe pod $POD
kubectl -n $NS get pod $POD

# get daemonset 
kubectl -n $NS get daemonset $PATTERN

ovn4nfv CR

kubectl api-resources  --api-group=k8s.plugin.opnfv.org -o wide

result

NAME               SHORTNAMES   APIGROUP               NAMESPACED   KIND              VERBS
networkchainings                k8s.plugin.opnfv.org   true         NetworkChaining   [delete deletecollection get list patch create update watch]
networks                        k8s.plugin.opnfv.org   true         Network           [delete deletecollection get list patch create update watch]
providernetworks                k8s.plugin.opnfv.org   true         ProviderNetwork   [delete deletecollection get list patch create update watch]

get network info   

GROUP=k8s.plugin.opnfv.org 
for i in `kubectl api-resources --api-group=$GROUP -o name`
do
    echo -e "\n********** $i **********"
    kubectl get $i
    kubectl describe $i
done

result 

********** networkchainings.k8s.plugin.opnfv.org **********
No resources found in default namespace.

********** networks.k8s.plugin.opnfv.org **********
NAME          AGE
ovn-network   3h24m
Name:         ovn-network
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"k8s.plugin.opnfv.org/v1alpha1","kind":"Network","metadata":{"annotations":{},"name":"ovn-network","namespace":"default"},"s...
API Version:  k8s.plugin.opnfv.org/v1alpha1
Kind:         Network
Metadata:
  Creation Timestamp:  2020-11-09T02:56:35Z
  Finalizers:
    nfnCleanUpNetwork
  Generation:        2
  Resource Version:  1700
  Self Link:         /apis/k8s.plugin.opnfv.org/v1alpha1/namespaces/default/networks/ovn-network
  UID:               93913e73-f0fb-4f0a-8b55-c0a991b890bf
Spec:
  Cni Type:  ovn4nfv
  Dns:
  ipv4Subnets:
    Exclude Ips:  172.16.30.2..172.16.30.9
    Gateway:      172.16.30.1/24
    Name:         subnet1
    Subnet:       172.16.30.1/24
Status:
  State:  Created
Events:   <none>

********** providernetworks.k8s.plugin.opnfv.org **********
NAME       AGE
pnetwork   3h24m
Name:         pnetwork
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"k8s.plugin.opnfv.org/v1alpha1","kind":"ProviderNetwork","metadata":{"annotations":{},"name":"pnetwork","namespace":"default...
API Version:  k8s.plugin.opnfv.org/v1alpha1
Kind:         ProviderNetwork
Metadata:
  Creation Timestamp:  2020-11-09T02:56:35Z
  Finalizers:
    nfnCleanUpProviderNetwork
  Generation:        2
  Resource Version:  1697
  Self Link:         /apis/k8s.plugin.opnfv.org/v1alpha1/namespaces/default/providernetworks/pnetwork
  UID:               12339ccf-68e5-4b5f-943a-efa21f99a824
Spec:
  Cni Type:  ovn4nfv
  Direct:
    Direct Node Selector:     all
    Provider Interface Name:  eth1
  Dns:
  ipv4Subnets:
    Exclude Ips:      10.10.10.2..10.10.10.9
    Gateway:          10.10.10.1/24
    Name:             subnet
    Subnet:           10.10.10.1/24
  Provider Net Type:  DIRECT
  Vlan:
    Provider Interface Name:
    Vlan Id:
    Vlan Node Selector:
Status:
  State:  Created
Events:   <none>

SDEWAN CR

kubectl api-resources  --api-group=batch.sdewan.akraino.org -o wide

result

NAME                  SHORTNAMES   APIGROUP                   NAMESPACED   KIND                 VERBS
firewalldnats                      batch.sdewan.akraino.org   true         FirewallDNAT         [delete deletecollection get list patch create update watch]
firewallforwardings                batch.sdewan.akraino.org   true         FirewallForwarding   [delete deletecollection get list patch create update watch]
firewallrules                      batch.sdewan.akraino.org   true         FirewallRule         [delete deletecollection get list patch create update watch]
firewallsnats                      batch.sdewan.akraino.org   true         FirewallSNAT         [delete deletecollection get list patch create update watch]
firewallzones                      batch.sdewan.akraino.org   true         FirewallZone         [delete deletecollection get list patch create update watch]
ipsechosts                         batch.sdewan.akraino.org   true         IpsecHost            [delete deletecollection get list patch create update watch]
ipsecproposals                     batch.sdewan.akraino.org   true         IpsecProposal        [delete deletecollection get list patch create update watch]
ipsecsites                         batch.sdewan.akraino.org   true         IpsecSite            [delete deletecollection get list patch create update watch]
mwan3policies                      batch.sdewan.akraino.org   true         Mwan3Policy          [delete deletecollection get list patch create update watch]
mwan3rules                         batch.sdewan.akraino.org   true         Mwan3Rule            [delete deletecollection get list patch create update watch]

get sdewan cr info   

GROUP=batch.sdewan.akraino.org
for i in `kubectl api-resources --api-group=$GROUP -o name`
do
    echo -e "\n********** $i **********"  
    kubectl get $i
    kubectl describe $i
done

result

********** firewalldnats.batch.sdewan.akraino.org **********

********** firewallforwardings.batch.sdewan.akraino.org **********

********** firewallrules.batch.sdewan.akraino.org **********

********** firewallsnats.batch.sdewan.akraino.org **********

********** firewallzones.batch.sdewan.akraino.org **********
NAME         AGE
ovnnetwork   141m
pnetwork     141m
Name:         ovnnetwork
Namespace:    default
Labels:       sdewanPurpose=sdewan-edge-a
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"batch.sdewan.akraino.org/v1alpha1","kind":"FirewallZone","metadata":{"annotations":{},"labels":{"sdewanPurpose":"sdewan-edg...
API Version:  batch.sdewan.akraino.org/v1alpha1
Kind:         FirewallZone
Metadata:
  Creation Timestamp:  2020-11-09T04:02:56Z
  Finalizers:
    rule.finalizers.sdewan.akraino.org
  Generation:        1
  Resource Version:  13047
  Self Link:         /apis/batch.sdewan.akraino.org/v1alpha1/namespaces/default/firewallzones/ovnnetwork
  UID:               333e44de-6f14-44ee-ba71-00052ce3ab15
Spec:
  Forward:  ACCEPT
  Input:    ACCEPT
  Network:
    ovn-network
  Output:  ACCEPT
Status:
  Applied Generation:  1
  Applied Time:        2020-11-09T04:02:56Z
  State:               In Sync
Events:                <none>


Name:         pnetwork
Namespace:    default
Labels:       sdewanPurpose=sdewan-edge-a
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"batch.sdewan.akraino.org/v1alpha1","kind":"FirewallZone","metadata":{"annotations":{},"labels":{"sdewanPurpose":"sdewan-edg...
API Version:  batch.sdewan.akraino.org/v1alpha1
Kind:         FirewallZone
Metadata:
  Creation Timestamp:  2020-11-09T04:02:56Z
  Finalizers:
    rule.finalizers.sdewan.akraino.org
  Generation:        1
  Resource Version:  13050
  Self Link:         /apis/batch.sdewan.akraino.org/v1alpha1/namespaces/default/firewallzones/pnetwork
  UID:               92cc6efc-4bbd-43bb-afc5-8d33a3e4568e
Spec:
  Forward:  ACCEPT
  Input:    ACCEPT
  Masq:     0
  mtu_fix:  1
  Network:
    pnetwork
  Output:  ACCEPT
Status:
  Applied Generation:  1
  Applied Time:        2020-11-09T04:02:56Z
  State:               In Sync
Events:                <none>

********** ipsechosts.batch.sdewan.akraino.org **********
NAME        AGE
ipsechost   141m
Name:         ipsechost
Namespace:    default
Labels:       sdewanPurpose=sdewan-edge-a
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"batch.sdewan.akraino.org/v1alpha1","kind":"IpsecHost","metadata":{"annotations":{},"labels":{"sdewanPurpose":"sdewan-edge-a...
API Version:  batch.sdewan.akraino.org/v1alpha1
Kind:         IpsecHost
Metadata:
  Creation Timestamp:  2020-11-09T04:02:46Z
  Finalizers:
    ipsec.host.finalizers.sdewan.akraino.org
  Generation:        1
  Resource Version:  13027
  Self Link:         /apis/batch.sdewan.akraino.org/v1alpha1/namespaces/default/ipsechosts/ipsechost
  UID:               050135d3-a97f-4133-88f8-c77952747b91
Spec:
  authentication_method:  psk
  Connections:
    conn_type:  tunnel
    crypto_proposal:
      ipsecproposal
    local_sourceip:  %config
    Mode:            start
    Name:            connA
    remote_subnet:   192.168.1.1/24,10.10.10.35/32
  crypto_proposal:
    ipsecproposal
  force_crypto_proposal:  0
  local_identifier:       10.10.10.15
  Name:                   edgeA
  pre_shared_key:         test_key
  Remote:                 10.10.10.35
Status:
  Applied Generation:  1
  Applied Time:        2020-11-09T04:02:51Z
  State:               In Sync
Events:                <none>

********** ipsecproposals.batch.sdewan.akraino.org **********
NAME            AGE
ipsecproposal   141m
Name:         ipsecproposal
Namespace:    default
Labels:       sdewanPurpose=sdewan-edge-a
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"batch.sdewan.akraino.org/v1alpha1","kind":"IpsecProposal","metadata":{"annotations":{},"labels":{"sdewanPurpose":"sdewan-ed...
API Version:  batch.sdewan.akraino.org/v1alpha1
Kind:         IpsecProposal
Metadata:
  Creation Timestamp:  2020-11-09T04:02:45Z
  Finalizers:
    proposal.finalizers.sdewan.akraino.org
  Generation:        2
  Resource Version:  13011
  Self Link:         /apis/batch.sdewan.akraino.org/v1alpha1/namespaces/default/ipsecproposals/ipsecproposal
  UID:               402001c1-cbeb-4755-94da-fd2279e962ab
Spec:
  dh_group:              modp3072
  encryption_algorithm:  aes128
  hash_algorithm:        sha256
  Name:                  ipsecproposal
Status:
  Applied Generation:  2
  Applied Time:        2020-11-09T04:02:46Z
  State:               In Sync
Events:                <none>

********** ipsecsites.batch.sdewan.akraino.org **********

********** mwan3policies.batch.sdewan.akraino.org **********

********** mwan3rules.batch.sdewan.akraino.org **********

oek (OpenNESS Experience Kit)

only reinstall sdewan

kubectl get namespace sdewan-system -o json > logging.json
grep -n3 '"finalizers":' logging.json

# Remove kubernetes from the finalizers array:
sed -i -e '/"finalizers":/{n;d}' logging.json
grep -n3 '"finalizers":' logging.json

kubectl replace --raw "/api/v1/namespaces/sdewan-system/finalize" -f ./logging.json 
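
Before running the helm install below, the old release (if any) can be removed first; sdewan-ctrl is the release name assumed from the install command that follows:

helm uninstall sdewan-ctrl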


API_SERVER_IP=`ip route get 1 | awk '{match($0, /.+src\s([.0-9]+)/, a);print a[1];exit}'`

helm install sdewan-ctrl /opt/openness-helm-charts/sdewan-crd-ctrl \
  --set spec.sdewan.image.registryIpAddress=$API_SERVER_IP \
  --set spec.sdewan.image.registryPort=5000 \
  --set spec.sdewan.image.name=integratedcloudnative/sdewan-controller \
  --set spec.sdewan.image.tag=0.3.0 \
  --set spec.proxy.image.registryIpAddress=$API_SERVER_IP \
  --set spec.proxy.image.registryPort=5000 \
  --set spec.proxy.image.name=gcr.io/kubebuilder/kube-rbac-proxy \
  --set spec.proxy.image.tag=v0.4.1 
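
After the reinstall, a quick check that the controller manager comes back up:

kubectl -n sdewan-system get deploy
kubectl -n sdewan-system get pods -o wide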

 

 install ovs-tcpdump 

curl -O -L https://raw.githubusercontent.com/openvswitch/ovs/master/utilities/ovs-tcpdump.in

sudo yum install -y epel-release centos-release-openstack-train
sudo yum install openvswitch libibverbs

sudo yum install python-openvswitch

IFC=
python ovs-tcpdump.in --db-sock /var/run/openvswitch/db.sock -i $IFC -nv
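
To find a candidate interface name for $IFC, list the bridges and their ports first (br-int is the usual ovn4nfv integration bridge):

sudo ovs-vsctl list-br
sudo ovs-vsctl list-ports br-int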

openwrt 

opkg update
opkg install wget
opkg install ca-certificates

ovn4nfv: install ovs-tcpdump

Tried to install ovs-tcpdump in the nfn-agent pod; it failed.

NFNA=`kubectl get pod -A -o name |grep nfn-agent`
NM=kube-system 
H_PROXY=http://proxy-mu.intel.com:911


kubectl -n $NM exec -it ${NFNA} -- bash -c "echo $'nameserver 10.248.2.1\nnameserver 163.33.253.68\nnameserver 10.216.46.196' >> /etc/resolv.conf"
kubectl -n $NM exec -it ${NFNA} -- bash -c "http_proxy=$H_PROXY yum install ca-certificates"
kubectl  -n $NM exec -it ${NFNA} -- bash -c "http_proxy=$H_PROXY yum install -y epel-release centos-release-openstack-train"
kubectl  -n $NM exec -it ${NFNA} -- bash -c "http_proxy=$H_PROXY yum search openvswitch"
kubectl -n $NM exec -it ${NFNA} -- bash -c "http_proxy=$H_PROXY dnf install -y python3-pip" 
kubectl -n $NM exec -it ${NFNA} -- bash -c "http_proxy=$H_PROXY yum -y install python3-devel"
kubectl -n $NM exec -it ${NFNA} -- bash -c "http_proxy=$H_PROXY yum -y groupinstall 'development tools'"

# kubectl -n $NM exec -it ${NFNA} -- bash -c "pip3 --proxy $H_PROXY search openvswitch" 
# kubectl -n $NM exec -it ${NFNA} -- bash -c "http_proxy=$H_PROXY yum install -y python-openvswitch"

kubectl  -n $NM exec -it ${NFNA} -- bash -c "pip3 --proxy $H_PROXY install setuptools wheel"
kubectl -n $NM exec -it ${NFNA} -- bash -c "pip3 --proxy $H_PROXY install -U setuptools"
kubectl -n $NM exec -it ${NFNA} -- bash -c "pip3 --proxy $H_PROXY install -U wheel"
kubectl -n $NM exec -it ${NFNA} -- bash -c "pip3 --proxy $H_PROXY install -U pip"
kubectl -n $NM exec -it ${NFNA} -- bash -c "pip3 --proxy $H_PROXY install ovs" 
kubectl -n $NM exec -it ${NFNA} -- bash -c "python3 -m pip3 --proxy $H_PROXY install -U pip3"


kubectl -n $NM exec -it ${NFNA} -- bash -c "https_proxy=$H_PROXY http_proxy=$H_PROXY wget https://raw.githubusercontent.com/openvswitch/ovs/master/utilities/ovs-tcpdump.in -O ovs-tcpdump.in" 
kubectl -n $NM exec -it ${NFNA} -- bash -c "http_proxy=$H_PROXY yum install -y tcpdump"
kubectl -n $NM exec -it ${NFNA} -- bash -c "python3 ovs-tcpdump.in"

Error:

WARNING: Running pip install with root privileges is generally not a good idea. Try pip3 install --user instead.
Collecting ovs
Downloading https://files.pythonhosted.org/packages/93/85/7f783d6872c41c1e95495c5a6ff3e20f7fd276e9fb2394c77d49a07ab6e6/ovs-2.13.0.tar.gz (100kB)
100% |████████████████████████████████| 102kB 627kB/s
Complete output from command python setup.py egg_info:

/----------------------------------------

Command "python setup.py egg_info" failed with error code -9 in /tmp/pip-build-fll8tk_y/ovs/

Then tried to install it in the ovn-controller pod, successfully.

 

NFNA=`kubectl get pod -A -o name |grep nfn-agent`
NFNA=`kubectl get pod -A -o name |grep ovn-controller`
NM=kube-system 
H_PROXY=http://proxy-mu.intel.com:911


# kubectl -n $NM exec -it ${NFNA} -- bash -c "echo $'nameserver 10.248.2.1\nnameserver 163.33.253.68\nnameserver 10.216.46.196' >> /etc/resolv.conf" H_PROXY=http://proxy-mu.intel.com:911

# NOTE, only ovn-controller can works, but nfn-agent can not install python ovs package. 
kubectl -n $NM exec -it ${NFNA} -- bash -c "http_proxy=$H_PROXY dnf install -y python3-pip" 
kubectl -n $NM exec -it ${NFNA} -- bash -c "pip3 --proxy $H_PROXY install ovs" 

kubectl -n $NM exec -it ${NFNA} -- bash -c "https_proxy=$H_PROXY http_proxy=$H_PROXY wget https://raw.githubusercontent.com/openvswitch/ovs/master/utilities/ovs-tcpdump.in -O ovs-tcpdump.in" 

kubectl -n $NM exec -it ${NFNA} -- bash -c "http_proxy=$H_PROXY yum install -y tcpdump"

kubectl -n $NM exec -it ${NFNA} -- ovs-vsctl list-br
kubectl -n $NM exec -it ${NFNA} -- ovs-vsctl list-ports br-int

IFC=ovn4nfv0-85000f
kubectl -n $NM exec -it ${NFNA} -- bash -c "python3 ovs-tcpdump.in  -i $IFC --db-sock unix:/var/run/openvswitch/db.sock"

 

 

 

check ipsecsites status 

kubectl get ipsecsites.batch.sdewan.akraino.org  -o=custom-columns='NAME:metadata.name,MESSAGE:status.message,STATUS:status.state'
NAME        MESSAGE                                                                           STATUS
ipsecsite   Post http://10.243.72.215/cgi-bin/luci/: dial tcp 10.243.72.215:80: i/o timeout   Trying to apply
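
When the status is stuck in "Trying to apply" with an i/o timeout like the one above, a first check is whether the CNF's LuCI API is reachable at all from the node (the IP and path are the ones reported in the message):

ping -c 3 10.243.72.215
curl -m 5 http://10.243.72.215/cgi-bin/luci/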

 check providernetworks status 

kubectl get providernetworks -o=custom-columns='NAME:metadata.name,NETTYPE:.spec.providerNetType,STATUS:status.state'
NAME        NETTYPE   STATUS
pnetwork1   DIRECT    Created
pnetwork2   DIRECT    Created

 

tunnel  

First, get inside the sdewan CNF and run:

ipsec statusall

This will show the status of the IPsec tunnels.
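
For example, without opening an interactive shell (reusing the $NS/$POD variables resolved for the CNF pod earlier on this page):

kubectl -n $NS exec -it $POD -- ipsec statusall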

You can also use the commands Huifeng provided, but you need to install the full ip tool (with xfrm support) first:

run ‘opkg install ip-full’

Second, check the interfaces; there should be an extra interface attached under the interface net2.

Third, ping the virtual IP of the remote edge CNF to check the IPsec tunnel's connectivity.
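
A hypothetical example (REMOTE_VIP is a placeholder; replace it with the virtual IP assigned to the remote edge CNF):

REMOTE_VIP=192.168.1.5
kubectl -n $NS exec -it $POD -- ping -c 3 $REMOTE_VIP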

---------------

You may use the commands below to check IPsec data in the CNF:

ip xfrm policy list - show all xfrm policies for IPsec

ip xfrm m - monitor xfrm (IPsec) events

REF:

How to Install Pip on CentOS 8      

How To Install Pip On CentOS 8  

How to Install Pip on CentOS 8  

None of them worked.

ovs-tcpdump  

Fedora, RHEL 7.x Packaging for Open vSwitch (OVS official site)

Information for build openvswitch-2.9.0-56.el7

OVS-DPDK END TO END TROUBLESHOOTING GUIDE   

 

How to Install and use ovs-tcpdump on Redhat

 

ovs-tcpdump

Regular Linux utilities can’t monitor everything that goes on inside OpenVswitch. The standard tcpdump utility can dump from devices that it knows about but not what’s going on inside the OpenVswitch. You can install ovs-tcpdump and use it similar to the regular utility.

We are doing this as part of the project to install Redhat Openstack with DPDK and VXLAN. We need to monitor how an instance communicates with the Controllers and each other.

Your servers need to be registered with Redhat as Openstack, so that you can access the Redhat Repositories.
SO run:
$ subscription-manager attach --pool=YOURPOOL-OPENSTACK_ID

Ovs-tcpdump comes as part of the package openvswitch-test but this requires python-openvswitch
These are not stored in the same repository, so you need to enable both of the following:

$ yum-config-manager --enable rhel-7-server-openstack-13-devtools-rpms
$ yum-config-manager --enable rhel-7-server-openstack-13-rpms
$ yum makecache
$ yum install python-openvswitch
 $ yum install openvswitch-test

You can now run
$ ovs-tcpdump -i <interface name> <other parameters>
It accepts other parameters like tcpdump, but it doesn't accept "any" for the interface name, since you can only dump from one interface at a time.
You can see the interface name via the command:
$ ovs-vsctl show


 

Set SNAT

function set_cnf_snat(){
  if [[ -z $1 ]] ; then
   echo "Please specify the namespace and pod name, get all pods name by:"
   echo "  kubectl get pods -A"
   echo "Usage: "
   echo "  set_cnf_snat [\$namespace/]\$pods [-t]"
   return 1
  fi
  test=false
  # https://stackoverflow.com/questions/21542054/how-to-intercept-and-remove-a-command-line-argument-in-bash
  for arg do
    shift
    [ "$arg" = "-t" ] && test=true && continue
    set -- "$@" "$arg"
  done

  NS=${NS:-cnf}
  # NOTE maybe more than one pods
  pods=$(kubectl get pods -n $NS -o NAME)
  netip=$(kubectl exec -n $NS $pods -it -- bash -c "sudo cat /etc/config/ipsec | grep local_identifier|cut -d ' ' -f 3")
  regex=${netip//\'/}
  regex=${regex//$'\r'/}
  route=$(kubectl exec -n $NS $pods -it -- bash -c "sudo ip -4 -o r")
  # NOTE maybe more than one WAN interface
  dev=$(grep $regex <<< "$route" | awk '{match($0, /.+dev\s([^ /]*)/, a);print a[1];exit}')
  # NOTE maybe more than one tunnel on a WAN interface
  tunnel=$(grep "^[0-9. ]*dev $dev"<<< "$route" | awk '{print $1}')
  echo "The overlay ip of CNF is: $tunnel"

  # https://stackoverflow.com/questions/229551/how-to-check-if-a-string-contains-a-substring-in-bash
  ns=${1%/*}
  search=${1##*/}
  if [ -z "${1##*/*}" ] ; then
     ips="$(kubectl get pods -n $ns -o wide |grep $search |awk '{print $6}')"   
  else
     ips="$(kubectl get pods -A -o wide |grep $search |awk '{print $7}')"
  fi

  re='^(0*(1?[0-9]{1,2}|2([0-4][0-9]|5[0-5]))\.){3}'
  re+='0*(1?[0-9]{1,2}|2([0-4][0-9]|5[0-5]))$'
  for ip in $ips;
  do
    if [[ "$ip" =~ $re ]]; then
      echo "Set SNAT for $ip to $tunnel"
      if [ "$test" = true ]; then
        echo "This is just a test mode, please run the command manually:"
        echo "  kubectl exec -it -n ${NS} $pods -- bash -c \"sudo iptables -t nat -A POSTROUTING -s $ip -j SNAT --to-source $tunnel\""
        return 0
      fi
      kubectl exec -it -n ${NS} $pods -- bash -c "sudo iptables -t nat -A POSTROUTING -s $ip -j SNAT --to-source $tunnel"
      kubectl exec -it -n ${NS} $pods -- bash -c 'sudo iptables --line -nv -t nat -L POSTROUTING'
    else
      echo "Not a valid IP: $ip"
    fi
  done
}
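
A usage sketch (default/my-app is a hypothetical namespace/pod pattern; -t only prints the iptables command instead of applying it):

# dry run first
set_cnf_snat default/my-app -t
# then apply for real
set_cnf_snat default/my-app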

Remove SNAT

function rm_cnf_snat(){
  if [[ -z $1 ]] ; then
   echo "Please specify the NAT source IP address or Rule Number:"
   NS=${NS:-cnf}
   pods=$(kubectl get pods -n $NS -o NAME)
   echo "  kubectl exec -it -n cnf $pods -- bash -c 'sudo iptables --line -nv -t nat -L POSTROUTING'"
   kubectl exec -it -n cnf $pods -- bash -c 'sudo iptables --line -nv -t nat -L POSTROUTING'
   echo "Usage: "
   echo "  rm_cnf_snat [\$ip][\$num] [-t]"
   return 1
  fi
  test=false
  # https://stackoverflow.com/questions/21542054/how-to-intercept-and-remove-a-command-line-argument-in-bash
  for arg do
    shift
    [ "$arg" = "-t" ] && test=true && continue
    set -- "$@" "$arg"
  done
  NS=${NS:-cnf}
  # NOTE maybe more than one pod
  pods=$(kubectl get pods -n $NS -o NAME)
  re='^(0*(1?[0-9]{1,2}|2([0-4][0-9]|5[0-5]))\.){3}'
  re+='0*(1?[0-9]{1,2}|2([0-4][0-9]|5[0-5]))$'
  isip=false 
  if [[ "$1" =~ $re ]]; then
    isip=true
  fi

  if [ "$isip" = true ]; then
    rules=$(kubectl exec -it -n ${NS} $pods -- bash -c 'sudo iptables --line -nv -t nat -L POSTROUTING |grep '"$1"'')
    rule=$(tail -n 1 <<< "$rules")
    tunnel=${rule##*:}
    # Note these 2 commands can print "\r"
    # $(echo $tunnel)  # $(echo "$tunnel")
    tunnel=${tunnel//$'\r'/}
    num=$(tail -n 1 <<< "$rules" | awk '{print $1}')
    if [ "$test" = true ]; then
      echo "This is just a test mode, please run the command manually:"
      echo "  kubectl exec -it -n ${NS} $pods -- bash -c \"sudo iptables -t nat -D POSTROUTING -s $1 -j SNAT --to-source $tunnel\""
      return 0
    fi
    kubectl exec -it -n ${NS} $pods -- bash -c "sudo iptables -t nat -D POSTROUTING -s $1 -j SNAT --to-source $tunnel" 
  else
    num=$1
    if [ "$test" = true ]; then
      echo "This is just a test mode, please run the command manually:"
      echo "  kubectl exec -it -n ${NS} $pods -- bash -c \"sudo iptables -t nat -D POSTROUTING $1\""
      return 0
    fi
    kubectl exec -it -n ${NS} $pods -- bash -c "sudo iptables -t nat -D POSTROUTING $1"
  fi
  kubectl exec -it -n ${NS} $pods -- bash -c 'sudo iptables --line -nv -t nat -L POSTROUTING'
}

Set DNAT

function set_cnf_dnat(){
  if [[ -z $1 ]] ; then
   echo "Please specify the namespace and pod name, get all pods name by:"
   echo "  kubectl get pods -A"
   echo "Usage: "
   echo "  set_cnf_dnat [\$namespace/]\$pods [-t]"
   return 1
  fi
  test=false
  # https://stackoverflow.com/questions/21542054/how-to-intercept-and-remove-a-command-line-argument-in-bash
  for arg do
    shift
    [ "$arg" = "-t" ] && test=true && continue
    set -- "$@" "$arg"
  done

  NS=${NS:-cnf}
  # NOTE maybe more than one pods
  pods=$(kubectl get pods -n $NS -o NAME)
  netip=$(kubectl exec -n $NS $pods -it -- bash -c "sudo cat /etc/config/ipsec | grep local_identifier|cut -d ' ' -f 3")
  regex=${netip//\'/}
  regex=${regex//$'\r'/}
  route=$(kubectl exec -n $NS $pods -it -- bash -c "sudo ip -4 -o r")
  # NOTE maybe more than one WAN interface
  dev=$(grep $regex <<< "$route" | awk '{match($0, /.+dev\s([^ /]*)/, a);print a[1];exit}')
  # NOTE maybe more than one tunnel on a WAN interface
  tunnel=$(grep "^[0-9. ]*dev $dev"<<< "$route" | awk '{print $1}')
  echo "The overlay ip of CNF is: $tunnel"

  # https://stackoverflow.com/questions/229551/how-to-check-if-a-string-contains-a-substring-in-bash
  ns=${1%/*}
  search=${1##*/}
  if [ -z "${1##*/*}" ] ; then
     ips="$(kubectl get pods -n $ns -o wide |grep $search |awk '{print $6}')"   
  else
     ips="$(kubectl get pods -A -o wide |grep $search |awk '{print $7}')"
  fi

  re='^(0*(1?[0-9]{1,2}|2([0-4][0-9]|5[0-5]))\.){3}'
  re+='0*(1?[0-9]{1,2}|2([0-4][0-9]|5[0-5]))$'
  for ip in $ips;
  do
    if [[ "$ip" =~ $re ]]; then
      echo "Set DNAT for $ip from $tunnel"
      if [ "$test" = true ]; then
        echo "This is just a test mode, please run the command manually:"
        echo "  kubectl exec -it -n ${NS} $pods -- bash -c \"sudo iptables -t nat -A PREROUTING -d $tunnel -j DNAT --to-destination $ip\""
        return 0
      fi
      kubectl exec -it -n ${NS} $pods -- bash -c "sudo iptables -t nat -A PREROUTING -d $tunnel -j DNAT --to-destination $ip"
      kubectl exec -it -n ${NS} $pods -- bash -c 'sudo iptables --line -nv -t nat -L PREROUTING'
    else
      echo "Not a valid IP: $ip"
    fi
  done
}
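
Usage mirrors set_cnf_snat (default/my-app again being a hypothetical namespace/pod pattern):

set_cnf_dnat default/my-app -t
set_cnf_dnat default/my-app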

Remove DNAT

function rm_cnf_dnat(){
  if [[ -z $1 ]] ; then
   echo "Please specify the NAT destination IP address or Rule Number:"
   NS=${NS:-cnf}
   pods=$(kubectl get pods -n $NS -o NAME)
   echo "  kubectl exec -it -n cnf $pods -- bash -c 'sudo iptables --line -nv -t nat -L PREROUTING'"
   kubectl exec -it -n cnf $pods -- bash -c 'sudo iptables --line -nv -t nat -L PREROUTING'
   echo "Usage: "
   echo "  rm_cnf_dnat [\$ip][\$num] [-t]"
   return 1
  fi
  test=false
  # https://stackoverflow.com/questions/21542054/how-to-intercept-and-remove-a-command-line-argument-in-bash
  for arg do
    shift
    [ "$arg" = "-t" ] && test=true && continue
    set -- "$@" "$arg"
  done
  NS=${NS:-cnf}
  pods=$(kubectl get pods -n $NS -o NAME)
  re='^(0*(1?[0-9]{1,2}|2([0-4][0-9]|5[0-5]))\.){3}'
  re+='0*(1?[0-9]{1,2}|2([0-4][0-9]|5[0-5]))$'
  isip=false 
  if [[ "$1" =~ $re ]]; then
    isip=true
  fi

  if [ "$isip" = true ]; then
    rules=$(kubectl exec -it -n ${NS} $pods -- bash -c 'sudo iptables --line -nv -t nat -L PREROUTING |grep '"$1"'')
    rule=$(tail -n 1 <<< "$rules")
    # the DNAT target ("to:<ip>") is after the last colon
    ip=${rule##*:}
    # Note these 2 commands can print "\r"
    # $(echo $tunnel)  # $(echo "$tunnel")
    ip=${ip//$'\r'/}
    # field 10 of the PREROUTING listing is the matched destination ($tunnel)
    tunnel=$(tail -n 1 <<< "$rules" | awk '{print $10}')
    num=$(tail -n 1 <<< "$rules" | awk '{print $1}')
    if [ "$test" = true ]; then
      echo "This is just a test mode, please run the command manually:"
      echo "  kubectl exec -it -n ${NS} $pods -- bash -c \"sudo iptables -t nat -D PREROUTING -d $tunnel -j DNAT --to-destination $ip\""
      return 0
    fi
    kubectl exec -it -n ${NS} $pods -- bash -c "sudo iptables -t nat -D PREROUTING -d $tunnel -j DNAT --to-destination $ip" 
  else
    num=$1
    if [ "$test" = true ]; then
      echo "This is just a test mode, please run the command manually:"
      echo "  kubectl exec -it -n ${NS} $pods -- bash -c \"sudo iptables -t nat -D PREROUTING $1\""
      return 0
    fi
    kubectl exec -it -n ${NS} $pods -- bash -c "sudo iptables -t nat -D PREROUTING $1"
  fi
  kubectl exec -it -n ${NS} $pods -- bash -c 'sudo iptables --line -nv -t nat -L PREROUTING'
}

 

 
