          Tomcat Servlet: getting request parameters from the request, and several commonly used methods

          This article is shared from the Huawei Cloud community post 《淺談Tomcat之Servlet-request獲取請求參數及常用方法》, by QGS.

          //Get all keys in the parameter map
          Enumeration<String>   getParameterNames();
          //Get the whole parameter map
          Map<String, String[]>   getParameterMap();
          //Get the value array for a given key   (commonly used**)
          String[]   getParameterValues(String name);
          //Get the first element of the value array   (commonly used**)
          String    getParameter(String name);
          The browser submits everything to the server as String data.
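
          Because everything arrives as a String, numeric parameters have to be converted explicitly. A minimal sketch (the parameter name "age" is only an example, not from the original form):

          String ageStr = request.getParameter("age");   // e.g. "18", or null if the parameter is absent
          int age = (ageStr == null || ageStr.isEmpty()) ? 0 : Integer.parseInt(ageStr);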


          //getParameterNames() returns all keys
          Enumeration<String> keys = request.getParameterNames();
          while (keys.hasMoreElements()){
              String key = keys.nextElement();
              System.out.print("key: "+key +" ");
              //getParameterValues(key) returns the values bound to this key
              String[] values = request.getParameterValues(key);
              if (values.length>1){
                  for (String value : values) {
                      System.out.print("value:"+value+" ");
                  }
              }else {
                  System.out.print(values[0]);
              }
              System.out.println();
          }

          Getting the value array by the name attribute of a form field

          getParameterNames() returns all keys

          If the data in the HTML page has been changed, clear the browser cache before running again.

          //Get the value arrays by the name attribute of each form field
          String[] usernames = request.getParameterValues("username");
          String[] pwds = request.getParameterValues("pwd");
          String[] hobbies = request.getParameterValues("hobby");
          for (String username : usernames) {
              System.out.print(username);
          }
          System.out.println();
          for (String pwd : pwds) {
              System.out.print(pwd);
          }
          System.out.println();
          for (String hobby : hobbies) {
              if (hobby.isEmpty()){
                  System.out.print("null");
              } else {
                  System.out.print(hobby);
              }
          }
          System.out.println();
          
          //Get the first element of each value array
          String username = request.getParameter("username");
          String pwd = request.getParameter("pwd");
          String hobby = request.getParameter("hobby");
          
          System.out.println("getParameter :"+username+" "+pwd+" "+hobby);

          getParameter() returns the first element of the value array

          //Get the first element of each value array
          String username = request.getParameter("username");
          String pwd = request.getParameter("pwd");
          String hobby = request.getParameter("hobby");

          Request scope object

          The request object is also called the "request scope".
          There is also the application-scope object ServletContext (the Servlet context).
          When shared data is rarely modified and small in volume, using ServletContext can improve performance: binding data to the application scope works like putting it in a cache, so requests read it directly instead of repeating I/O and similar operations.
          Methods on the application-scope object ServletContext (they work much like a Map):
          //Bind data to the scope
          setAttribute(String name , Object obj)
          //Get data from the scope by name (key)
          Object getAttribute(String name)
          //Remove data by name (key)
          removeAttribute(String name)
          Request scope object
          The request scope is smaller than the application scope: it uses fewer resources, has a shorter life cycle, and is only valid within a single request.
          Methods on the request-scope object (HttpServletRequest) - the same Map-like operations:
          //Bind data to the scope
          setAttribute(String name , Object obj)
          //Get data from the scope by name (key)
          Object getAttribute(String name)
          //Remove data by name (key)
          removeAttribute(String name)
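
          As an illustration of the "application scope as cache" idea above, a minimal sketch (the attribute name "cityList" and the helper loadCityListFromDb() are hypothetical):

          // Inside a servlet's doGet/doPost; assumes java.util.List is imported
          ServletContext application = this.getServletContext();
          if (application.getAttribute("cityList") == null) {
              // Load once, then reuse from the application scope on later requests
              application.setAttribute("cityList", loadCityListFromDb()); // hypothetical DAO helper
          }
          List<String> cities = (List<String>) application.getAttribute("cityList");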

          Example

          //Get the current system time
          Date nowTime = new Date();
          //Bind the data to the request scope
          request.setAttribute("NowTime", nowTime);
          //Get the data back from the request scope
          Object obj = request.getAttribute("NowTime");
          response.setContentType("text/html;charset=utf-8");
          response.setCharacterEncoding("utf-8");
          PrintWriter out = response.getWriter();
          SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd  HH:mm:ss");
          String timeStr = sdf.format((Date) obj);
          out.print("Current time: " + timeStr);

          Servlet forward mechanism

          Forwarding to another servlet class

          public class ServletA extends HttpServlet {
              @Override
              protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
                  //Servlet forward mechanism: after ServletA runs, obtain a RequestDispatcher and forward, handing request and response over to another HttpServlet subclass (ServletB)
                  request.getRequestDispatcher("/servletB").forward(request,response);
              }
          }
          public class ServletB extends HttpServlet {
              @Override
              protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
                  //Get the current system time
                  Date nowTime = new Date();
                  //Bind the data to the request scope
                  request.setAttribute("NowTime", nowTime);
                  //Get the data back from the request scope
                  Object obj = request.getAttribute("NowTime");
                  response.setContentType("text/html;charset=utf-8");
                  response.setCharacterEncoding("utf-8");
                  PrintWriter out = response.getWriter();
                  SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd  HH:mm:ss");
                  String timeStr = sdf.format((Date) obj);
                  out.print("Current time: " + timeStr);
              }
          }
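
          For the forward above to resolve, both servlets need URL mappings. A minimal sketch using @WebServlet annotations (the mappings are assumed here; the original project may declare them in web.xml instead):

          import javax.servlet.annotation.WebServlet;

          // ServletA.java
          @WebServlet("/servletA")
          public class ServletA extends HttpServlet { /* doPost as above */ }

          // ServletB.java
          @WebServlet("/servletB")
          public class ServletB extends HttpServlet { /* doPost as above */ }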

          Forwarding to an HTML page

          //You can forward to a servlet class as well as to an HTML page (any valid resource inside the web container can be a forward target)
          request.getRequestDispatcher("/share.html").forward(request,response);

          Commonly used methods

          //Get the client's IP address
          String remoteAddr = request.getRemoteAddr();
          //Get the remote (authenticated) user, if any
          String remoteUser = request.getRemoteUser();
          //Get the remote host name (falls back to the IP)
          String remoteHost = request.getRemoteHost();
          //Get the remote port
          int remotePort = request.getRemotePort();
          //Get the server host name
          String serverName = request.getServerName();
          //Get the servlet path (the part of the URL that maps to this servlet)
          String servletPath = request.getServletPath();
          //Get the server port
          int serverPort = request.getServerPort();
          //Get the ServletContext (equivalent to this.getServletContext())
          ServletContext servletContext = request.getServletContext();
          //Set the character encoding (avoids garbled text between character sets)
          response.setCharacterEncoding("utf-8");


          While learning Kubernetes you will often run into a Service that cannot be accessed. This article summarizes the situations that can cause it and should help you locate the problem.

          Content

          To follow this walkthrough, first deploy an application:

          # kubectl create deployment web --image=nginx --replicas=3
          deployment.apps/web created
          # kubectl expose deployment web --port=8082 --type=NodePort
          service/web exposed
          

          Make sure the Pods are running:

          #  kubectl get pods,svc
          NAME                      READY   STATUS    RESTARTS   AGE
          pod/dnsutils              1/1     Running   25         25h
          pod/mysql-5ws56           1/1     Running   0          20h
          pod/mysql-fwpgc           1/1     Running   0          25h
          pod/mysql-smggm           1/1     Running   0          20h
          pod/myweb-8dc2n           1/1     Running   0          25h
          pod/myweb-mfbpd           1/1     Running   0          25h
          pod/myweb-zn8z2           1/1     Running   0          25h
          pod/web-96d5df5c8-8fwsb   1/1     Running   0          69s
          pod/web-96d5df5c8-g6hgp   1/1     Running   0          69s
          pod/web-96d5df5c8-t7xzv   1/1     Running   0          69s
          
          NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
          service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP          25h
          service/mysql        ClusterIP   10.99.230.190    <none>        3306/TCP         25h
          service/myweb        NodePort    10.105.77.88     <none>        8080:31330/TCP   25h
          service/web          NodePort    10.103.246.193   <none>        8082:31303/TCP   17s
          

          Problem 1: the Service cannot be accessed by name

          If you are accessing the Service by name, first make sure CoreDNS is deployed:

          # kubectl get pods -n kube-system
          NAME                                 READY   STATUS    RESTARTS   AGE
          coredns-74ff55c5b-8q44c              1/1     Running   0          26h
          coredns-74ff55c5b-f7j5g              1/1     Running   0          26h
          etcd-k8s-master                      1/1     Running   2          26h
          kube-apiserver-k8s-master            1/1     Running   2          26h
          kube-controller-manager-k8s-master   1/1     Running   0          26h
          kube-flannel-ds-f5tn6                1/1     Running   0          21h
          kube-flannel-ds-ftfgf                1/1     Running   0          26h
          kube-proxy-hnp7c                     1/1     Running   0          26h
          kube-proxy-njw8l                     1/1     Running   0          21h
          kube-scheduler-k8s-master            1/1     Running   0          26h
          
          

          Confirm that CoreDNS is deployed. If its status is not Running, check the container logs to dig into the problem.
          Use a dnsutils Pod to test name resolution.
          dnsutils.yaml:

          apiVersion: v1
          kind: Pod
          metadata:
            name: dnsutils
          spec:
            containers:
            - name: dnsutils
              image: mydlqclub/dnsutils:1.3
              imagePullPolicy: IfNotPresent
              command: ["sleep","3600"]
          

          Create the Pod and open a shell in the container:

          # kubectl create -f dnsutils.yaml
          
          # kubectl exec -it dnsutils sh
          kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
          
          / # nslookup web
          Server:     10.96.0.10
          Address:    10.96.0.10#53
          
          Name:   web.default.svc.cluster.local
          Address: 10.103.246.193

          If resolution fails, try qualifying the name with the namespace:

          / # nslookup web.default
          Server:     10.96.0.10
          Address:    10.96.0.10#53
          
          Name:   web.default.svc.cluster.local
          Address: 10.103.246.193

          If this resolves, adjust the application to use the namespace-qualified name when accessing a Service in another namespace.

          If it still fails, try the fully qualified name:

          / # nslookup web.default.svc.cluster.local
          Server:     10.96.0.10
          Address:    10.96.0.10#53
          
          Name:   web.default.svc.cluster.local
          Address: 10.103.246.193
          
          

          Note: "default" is the namespace being used, "svc" marks it as a Service, and "cluster.local" is the cluster domain.
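
          The same pattern works for Services in other namespaces; for example, resolving the cluster DNS Service itself (shown only as an illustration):

          / # nslookup kube-dns.kube-system
          / # nslookup kube-dns.kube-system.svc.cluster.local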

          On a node in the cluster, try the same lookup against the DNS IP in use there (yours may differ; the cluster DNS IP can be checked with kubectl get svc -n kube-system):

          #  nslookup web.default.svc.cluster.local
          Server:     103.224.222.222
          Address:    103.224.222.222#53
          
          ** server can't find web.default.svc.cluster.local: REFUSED
          
          

          The lookup fails. Check whether /etc/resolv.conf is correct, and add the CoreDNS IP and search paths.
          Add:

          nameserver 10.96.0.10
          search default.svc.cluster.local svc.cluster.local cluster.local
          options ndots:5
          

          After editing (vim /etc/resolv.conf), the file becomes:

          # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
          #     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
          nameserver 103.224.222.222
          nameserver 103.224.222.223
          nameserver 8.8.8.8
          nameserver 10.96.0.10
          search default.svc.cluster.local svc.cluster.local cluster.local
          options ndots:5
          

          Notes:

          nameserver: this line must point to the CoreDNS Service IP; it is configured automatically through the kubelet's --cluster-dns flag.

          search: this line must contain an appropriate suffix so that Service names can be found. In this example it looks for Services in the local namespace (default.svc.cluster.local), in all namespaces (svc.cluster.local), and in the cluster (cluster.local).

          options: the ndots value must be set high enough that the DNS client library prefers the search paths. Kubernetes sets it to 5 by default.

          Problem 2: the Service cannot be accessed by IP

          Assuming the Service can be accessed by name (so CoreDNS works), the next thing to test is whether the Service itself works. From a node in the cluster, access the Service IP:

          # curl -I 10.103.246.193
          HTTP/1.1 200 OK
          Server: Tengine
          Date: Sun, 22 Aug 2021 13:04:15 GMT
          Content-Type: text/html
          Content-Length: 1326
          Last-Modified: Wed, 26 Apr 2017 08:03:47 GMT
          Connection: keep-alive
          Vary: Accept-Encoding
          ETag: "59005463-52e"
          Accept-Ranges: bytes
          
          

          In this cluster, however, the request fails with a connection timeout:

          # curl -I 10.103.246.193
          curl: (7) Failed to connect to 10.103.246.193 port 8082: Connection timed out
          
          

          Check 1: is the Service port configured correctly?

          Check that the Service configuration and the ports it uses are correct:

          # kubectl get svc web -o yaml
          apiVersion: v1
          kind: Service
          metadata:
            creationTimestamp: "2021-08-22T04:04:11Z"
            labels:
              app: web
            managedFields:
            - apiVersion: v1
              fieldsType: FieldsV1
              fieldsV1:
                f:metadata:
                  f:labels:
                    .: {}
                    f:app: {}
                f:spec:
                  f:externalTrafficPolicy: {}
                  f:ports:
                    .: {}
                    k:{"port":8082,"protocol":"TCP"}:
                      .: {}
                      f:port: {}
                      f:protocol: {}
                      f:targetPort: {}
                  f:selector:
                    .: {}
                    f:app: {}
                  f:sessionAffinity: {}
                  f:type: {}
              manager: kubectl-expose
              operation: Update
              time: "2021-08-22T04:04:11Z"
            name: web
            namespace: default
            resourceVersion: "118039"
            uid: fa5bbc6b-7a79-45a4-b6ba-e015340d2bab
          spec:
            clusterIP: 10.103.246.193
            clusterIPs:
            - 10.103.246.193
            externalTrafficPolicy: Cluster
            ports:
            - nodePort: 31303
              port: 8082
              protocol: TCP
              targetPort: 8082
            selector:
              app: web
            sessionAffinity: None
            type: NodePort
          status:
            loadBalancer: {}

          Notes:

          • spec.ports[].port: the port exposed on the ClusterIP, 8082
          • targetPort: the port the container's service actually listens on, 8082
          • nodePort: the port for access from outside the cluster, http://NodeIP:31303 (each of these is probed in the sketch below)
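
          A quick way to tell the three ports apart is to probe each one; the addresses below come from the outputs above, and the node IP is a placeholder:

          # ClusterIP + Service port (reachable from nodes and Pods inside the cluster)
          curl -I 10.103.246.193:8082
          # Node IP + nodePort (reachable from outside the cluster as well)
          curl -I <NodeIP>:31303
          # Pod IP + targetPort (bypasses the Service entirely)
          curl -I 10.244.1.4:8082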

          Check 2: is the Service correctly associated with the Pods?

          Check that the Pods the Service selects are the right ones:

          # kubectl get pods  -o wide -l app=web
          NAME                  READY   STATUS    RESTARTS   AGE    IP           NODE        NOMINATED NODE   READINESS GATES
          web-96d5df5c8-8fwsb   1/1     Running   0          4h9m   10.244.1.5   k8s-node2   <none>           <none>
          web-96d5df5c8-g6hgp   1/1     Running   0          4h9m   10.244.1.6   k8s-node2   <none>           <none>
          web-96d5df5c8-t7xzv   1/1     Running   0          4h9m   10.244.1.4   k8s-node2   <none>           <none>

          The -l app=web argument is a label selector.


          From k8s-node2, however, the Pod is reachable:

          root@k8s-node2:/data/k8s# curl -I 10.244.1.4
          HTTP/1.1 200 OK
          Server: nginx/1.21.1
          Date: Sun, 22 Aug 2021 08:16:16 GMT
          Content-Type: text/html
          Content-Length: 612
          Last-Modified: Tue, 06 Jul 2021 14:59:17 GMT
          Connection: keep-alive
          ETag: "60e46fc5-264"
          Accept-Ranges: bytes
          

          All three Pods are scheduled on k8s-node2, not on the k8s-master node where the query was made.
          The two nodes in this cluster behave differently, so flannel is the most likely culprit.
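
          A quick way to check flannel at this point is to confirm its DaemonSet Pods are Running on every node and scan their logs for errors (the Pod names are taken from the earlier kube-system listing):

          kubectl get pods -n kube-system -o wide | grep flannel
          kubectl logs -n kube-system kube-flannel-ds-f5tn6
          kubectl logs -n kube-system kube-flannel-ds-ftfgf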

          Inside Kubernetes there is a control loop that evaluates each Service's selector and saves the result into an Endpoints object.

          root@k8s-master:/data/k8s# kubectl get endpoints web
          NAME   ENDPOINTS                                         AGE
          web    10.244.1.4:8082,10.244.1.5:8082,10.244.1.6:8082   4h14m
          

          As the output shows, the endpoints controller has found Pods for the Service. That alone does not prove the association is correct; also confirm that the Service's spec.selector matches the labels (metadata.labels) defined in the Deployment.

          root@k8s-master:/data/k8s# kubectl get svc web -o yaml
          ...
            selector:
              app: web
          ...

          Get the Deployment's information:

          root@k8s-master:/data/k8s# kubectl get deployment web -o yaml
          
          ...
            selector:
              matchLabels:
                app: web
          ...

          Check 3: are the Pods themselves working?

          To check whether the Pods work, bypass the Service and access a Pod IP directly:

          root@k8s-master:/data/k8s# kubectl get pods -o wide
          NAME                  READY   STATUS    RESTARTS   AGE     IP           NODE         NOMINATED NODE   READINESS GATES
          dnsutils              1/1     Running   29         29h     10.244.0.4   k8s-master   <none>           <none>
          mysql-5ws56           1/1     Running   0          24h     10.244.1.3   k8s-node2    <none>           <none>
          mysql-fwpgc           1/1     Running   0          29h     10.244.0.5   k8s-master   <none>           <none>
          mysql-smggm           1/1     Running   0          24h     10.244.1.2   k8s-node2    <none>           <none>
          myweb-8dc2n           1/1     Running   0          29h     10.244.0.7   k8s-master   <none>           <none>
          myweb-mfbpd           1/1     Running   0          29h     10.244.0.6   k8s-master   <none>           <none>
          myweb-zn8z2           1/1     Running   0          29h     10.244.0.8   k8s-master   <none>           <none>
          web-96d5df5c8-8fwsb   1/1     Running   0          4h21m   10.244.1.5   k8s-node2    <none>           <none>
          web-96d5df5c8-g6hgp   1/1     Running   0          4h21m   10.244.1.6   k8s-node2    <none>           <none>
          web-96d5df5c8-t7xzv   1/1     Running   0          4h21m   10.244.1.4   k8s-node2    <none>           <none>

          Pods deployed on the other node cannot be reached:

          root@k8s-master:/data/k8s# curl -I 10.244.1.3:3306
          curl: (7) Failed to connect to 10.244.1.4 port 3306: Connection timed out

          Pods deployed on the local node can be reached:

          root@k8s-master:/data/k8s# curl -I 10.244.0.5:3306
          5.7.35=H9A_)c??b.>,q#99~/~mysql_native_password!?#08S01Got packets out of order

          So the problem points to Pods on the two nodes being unable to communicate with each other.

          Note: the Pod port (3306) is used here, not the Service port (also 3306).

          If a Pod does not respond properly, the service inside the container has a problem; use kubectl logs to look at its logs, or kubectl exec straight into the Pod to inspect the service.
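
          For example, with one of the Pod names from the listing above (a sketch; the nginx image may not ship extra debugging tools):

          kubectl logs web-96d5df5c8-8fwsb
          kubectl exec -it web-96d5df5c8-8fwsb -- sh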

          Besides the service itself, the CNI network plugin may be at fault. The symptom: out of 10 curl attempts only two or three succeed, and they succeed exactly when the chosen Pod is on the current node, so no cross-node traffic is involved.
          If you see this, check the network plugin's status and container logs:

          root@k8s-master:/data/k8s# kubectl get pods -n kube-system
          NAME                                 READY   STATUS    RESTARTS   AGE
          coredns-74ff55c5b-8q44c              1/1     Running   0          29h
          coredns-74ff55c5b-f7j5g              1/1     Running   0          29h
          etcd-k8s-master                      1/1     Running   2          29h
          kube-apiserver-k8s-master            1/1     Running   2          29h
          kube-controller-manager-k8s-master   1/1     Running   0          29h
          kube-flannel-ds-f5tn6                1/1     Running   0          24h
          kube-flannel-ds-ftfgf                1/1     Running   0          29h
          kube-proxy-hnp7c                     1/1     Running   0          29h
          kube-proxy-njw8l                     1/1     Running   0          24h
          kube-scheduler-k8s-master            1/1     Running   0          29h
          

          Check 4: is kube-proxy working properly?

          If you have reached this point, the Service is running, it has Endpoints, and the Pods are serving.
          The next thing to check is kube-proxy, the component responsible for implementing Services.
          Confirm that kube-proxy is running:

          root@k8s-master:/data/k8s# ps -ef |grep kube-proxy
          root      8494  8469  0 Aug21 ?        00:00:15 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=k8s-master
          root     24323 25972  0 16:34 pts/1    00:00:00 grep kube-proxy
          

          If the process exists, the next step is to confirm it is not hitting errors while running, for example failing to connect to the master node.
          To do that, look at its logs. How you view them depends on how Kubernetes was deployed; for a kubeadm deployment:
          Check the logs on k8s-master:

          root@k8s-master:/data/k8s# kubectl logs kube-proxy-hnp7c  -n kube-system
          I0821 02:41:24.705408       1 node.go:172] Successfully retrieved node IP: 192.168.0.3
          I0821 02:41:24.705709       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.0.3), assume IPv4 operation
          W0821 02:41:24.740886       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
          I0821 02:41:24.740975       1 server_others.go:185] Using iptables Proxier.
          I0821 02:41:24.742224       1 server.go:650] Version: v1.20.5
          I0821 02:41:24.742656       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
          I0821 02:41:24.742680       1 conntrack.go:52] Setting nf_conntrack_max to 131072
          I0821 02:41:24.742931       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
          I0821 02:41:24.742990       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
          I0821 02:41:24.747556       1 config.go:315] Starting service config controller
          I0821 02:41:24.748858       1 shared_informer.go:240] Waiting for caches to sync for service config
          I0821 02:41:24.748901       1 config.go:224] Starting endpoint slice config controller
          I0821 02:41:24.748927       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
          I0821 02:41:24.849006       1 shared_informer.go:247] Caches are synced for endpoint slice config 
          I0821 02:41:24.849071       1 shared_informer.go:247] Caches are synced for service config 
          

          Check the logs on k8s-node2:

          root@k8s-master:/data/k8s# kubectl logs kube-proxy-njw8l  -n kube-system
          I0821 07:43:39.092419       1 node.go:172] Successfully retrieved node IP: 192.168.0.5
          I0821 07:43:39.092475       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.0.5), assume IPv4 operation
          W0821 07:43:39.108196       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
          I0821 07:43:39.108294       1 server_others.go:185] Using iptables Proxier.
          I0821 07:43:39.108521       1 server.go:650] Version: v1.20.5
          I0821 07:43:39.108814       1 conntrack.go:52] Setting nf_conntrack_max to 131072
          I0821 07:43:39.109295       1 config.go:315] Starting service config controller
          I0821 07:43:39.109304       1 shared_informer.go:240] Waiting for caches to sync for service config
          I0821 07:43:39.109323       1 config.go:224] Starting endpoint slice config controller
          I0821 07:43:39.109327       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
          I0821 07:43:39.209418       1 shared_informer.go:247] Caches are synced for endpoint slice config 
          I0821 07:43:39.209418       1 shared_informer.go:247] Caches are synced for service config 
          
          

          One line stands out: Unknown proxy mode "", assuming iptables proxy - kube-proxy is running in iptables mode.

          If the cluster was deployed from binaries:

          journalctl -u kube-proxy
          

          Check 5: is kube-proxy writing iptables rules?

          kube-proxy's main job is to generate the load-balancing rules for Services, implemented with iptables by default. Check whether those rules have actually been written.
          Check the iptables rules on k8s-master:

          root@k8s-master:/data/k8s# iptables-save |grep web
          -A KUBE-NODEPORTS -p tcp -m comment --comment "default/myweb" -m tcp --dport 31330 -j KUBE-MARK-MASQ
          -A KUBE-NODEPORTS -p tcp -m comment --comment "default/myweb" -m tcp --dport 31330 -j KUBE-SVC-FCM76ICS4D7Y4C5Y
          -A KUBE-NODEPORTS -p tcp -m comment --comment "default/web" -m tcp --dport 31303 -j KUBE-MARK-MASQ
          -A KUBE-NODEPORTS -p tcp -m comment --comment "default/web" -m tcp --dport 31303 -j KUBE-SVC-LOLE4ISW44XBNF3G
          -A KUBE-SEP-KYOPKKRUSGN4EPOL -s 10.244.0.8/32 -m comment --comment "default/myweb" -j KUBE-MARK-MASQ
          -A KUBE-SEP-KYOPKKRUSGN4EPOL -p tcp -m comment --comment "default/myweb" -m tcp -j DNAT --to-destination 10.244.0.8:8080
          -A KUBE-SEP-MOKUSSRWIVOFT5Y7 -s 10.244.0.7/32 -m comment --comment "default/myweb" -j KUBE-MARK-MASQ
          -A KUBE-SEP-MOKUSSRWIVOFT5Y7 -p tcp -m comment --comment "default/myweb" -m tcp -j DNAT --to-destination 10.244.0.7:8080
          -A KUBE-SEP-V6Q53FEPJ64J3EJW -s 10.244.1.6/32 -m comment --comment "default/web" -j KUBE-MARK-MASQ
          -A KUBE-SEP-V6Q53FEPJ64J3EJW -p tcp -m comment --comment "default/web" -m tcp -j DNAT --to-destination 10.244.1.6:8082
          -A KUBE-SEP-YCBVNDXW4SG5UDC3 -s 10.244.1.5/32 -m comment --comment "default/web" -j KUBE-MARK-MASQ
          -A KUBE-SEP-YCBVNDXW4SG5UDC3 -p tcp -m comment --comment "default/web" -m tcp -j DNAT --to-destination 10.244.1.5:8082
          -A KUBE-SEP-YQ4MLBG6JI5O2LTN -s 10.244.0.6/32 -m comment --comment "default/myweb" -j KUBE-MARK-MASQ
          -A KUBE-SEP-YQ4MLBG6JI5O2LTN -p tcp -m comment --comment "default/myweb" -m tcp -j DNAT --to-destination 10.244.0.6:8080
          -A KUBE-SEP-ZNATZ23XMS7WU546 -s 10.244.1.4/32 -m comment --comment "default/web" -j KUBE-MARK-MASQ
          -A KUBE-SEP-ZNATZ23XMS7WU546 -p tcp -m comment --comment "default/web" -m tcp -j DNAT --to-destination 10.244.1.4:8082
          -A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.105.77.88/32 -p tcp -m comment --comment "default/myweb cluster IP" -m tcp --dport 8080 -j KUBE-MARK-MASQ
          -A KUBE-SERVICES -d 10.105.77.88/32 -p tcp -m comment --comment "default/myweb cluster IP" -m tcp --dport 8080 -j KUBE-SVC-FCM76ICS4D7Y4C5Y
          -A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.103.246.193/32 -p tcp -m comment --comment "default/web cluster IP" -m tcp --dport 8082 -j KUBE-MARK-MASQ
          -A KUBE-SERVICES -d 10.103.246.193/32 -p tcp -m comment --comment "default/web cluster IP" -m tcp --dport 8082 -j KUBE-SVC-LOLE4ISW44XBNF3G
          -A KUBE-SVC-FCM76ICS4D7Y4C5Y -m comment --comment "default/myweb" -m statistic --mode random --probability 0.33333333349 -j KUBE-SEP-YQ4MLBG6JI5O2LTN
          -A KUBE-SVC-FCM76ICS4D7Y4C5Y -m comment --comment "default/myweb" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-MOKUSSRWIVOFT5Y7
          -A KUBE-SVC-FCM76ICS4D7Y4C5Y -m comment --comment "default/myweb" -j KUBE-SEP-KYOPKKRUSGN4EPOL
          -A KUBE-SVC-LOLE4ISW44XBNF3G -m comment --comment "default/web" -m statistic --mode random --probability 0.33333333349 -j KUBE-SEP-ZNATZ23XMS7WU546
          -A KUBE-SVC-LOLE4ISW44XBNF3G -m comment --comment "default/web" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-YCBVNDXW4SG5UDC3
          -A KUBE-SVC-LOLE4ISW44XBNF3G -m comment --comment "default/web" -j KUBE-SEP-V6Q53FEPJ64J3EJW
          
          

          Check the iptables rules on k8s-node2:

          root@k8s-node2:/data/k8s# iptables-save |grep web
          -A KUBE-NODEPORTS -p tcp -m comment --comment "default/myweb" -m tcp --dport 31330 -j KUBE-MARK-MASQ
          -A KUBE-NODEPORTS -p tcp -m comment --comment "default/myweb" -m tcp --dport 31330 -j KUBE-SVC-FCM76ICS4D7Y4C5Y
          -A KUBE-NODEPORTS -p tcp -m comment --comment "default/web" -m tcp --dport 31303 -j KUBE-MARK-MASQ
          -A KUBE-NODEPORTS -p tcp -m comment --comment "default/web" -m tcp --dport 31303 -j KUBE-SVC-LOLE4ISW44XBNF3G
          -A KUBE-SEP-KYOPKKRUSGN4EPOL -s 10.244.0.8/32 -m comment --comment "default/myweb" -j KUBE-MARK-MASQ
          -A KUBE-SEP-KYOPKKRUSGN4EPOL -p tcp -m comment --comment "default/myweb" -m tcp -j DNAT --to-destination 10.244.0.8:8080
          -A KUBE-SEP-MOKUSSRWIVOFT5Y7 -s 10.244.0.7/32 -m comment --comment "default/myweb" -j KUBE-MARK-MASQ
          -A KUBE-SEP-MOKUSSRWIVOFT5Y7 -p tcp -m comment --comment "default/myweb" -m tcp -j DNAT --to-destination 10.244.0.7:8080
          -A KUBE-SEP-V6Q53FEPJ64J3EJW -s 10.244.1.6/32 -m comment --comment "default/web" -j KUBE-MARK-MASQ
          -A KUBE-SEP-V6Q53FEPJ64J3EJW -p tcp -m comment --comment "default/web" -m tcp -j DNAT --to-destination 10.244.1.6:8082
          -A KUBE-SEP-YCBVNDXW4SG5UDC3 -s 10.244.1.5/32 -m comment --comment "default/web" -j KUBE-MARK-MASQ
          -A KUBE-SEP-YCBVNDXW4SG5UDC3 -p tcp -m comment --comment "default/web" -m tcp -j DNAT --to-destination 10.244.1.5:8082
          -A KUBE-SEP-YQ4MLBG6JI5O2LTN -s 10.244.0.6/32 -m comment --comment "default/myweb" -j KUBE-MARK-MASQ
          -A KUBE-SEP-YQ4MLBG6JI5O2LTN -p tcp -m comment --comment "default/myweb" -m tcp -j DNAT --to-destination 10.244.0.6:8080
          -A KUBE-SEP-ZNATZ23XMS7WU546 -s 10.244.1.4/32 -m comment --comment "default/web" -j KUBE-MARK-MASQ
          -A KUBE-SEP-ZNATZ23XMS7WU546 -p tcp -m comment --comment "default/web" -m tcp -j DNAT --to-destination 10.244.1.4:8082
          -A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.105.77.88/32 -p tcp -m comment --comment "default/myweb cluster IP" -m tcp --dport 8080 -j KUBE-MARK-MASQ
          -A KUBE-SERVICES -d 10.105.77.88/32 -p tcp -m comment --comment "default/myweb cluster IP" -m tcp --dport 8080 -j KUBE-SVC-FCM76ICS4D7Y4C5Y
          -A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.103.246.193/32 -p tcp -m comment --comment "default/web cluster IP" -m tcp --dport 8082 -j KUBE-MARK-MASQ
          -A KUBE-SERVICES -d 10.103.246.193/32 -p tcp -m comment --comment "default/web cluster IP" -m tcp --dport 8082 -j KUBE-SVC-LOLE4ISW44XBNF3G
          -A KUBE-SVC-FCM76ICS4D7Y4C5Y -m comment --comment "default/myweb" -m statistic --mode random --probability 0.33333333349 -j KUBE-SEP-YQ4MLBG6JI5O2LTN
          -A KUBE-SVC-FCM76ICS4D7Y4C5Y -m comment --comment "default/myweb" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-MOKUSSRWIVOFT5Y7
          -A KUBE-SVC-FCM76ICS4D7Y4C5Y -m comment --comment "default/myweb" -j KUBE-SEP-KYOPKKRUSGN4EPOL
          -A KUBE-SVC-LOLE4ISW44XBNF3G -m comment --comment "default/web" -m statistic --mode random --probability 0.33333333349 -j KUBE-SEP-ZNATZ23XMS7WU546
          -A KUBE-SVC-LOLE4ISW44XBNF3G -m comment --comment "default/web" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-YCBVNDXW4SG5UDC3
          -A KUBE-SVC-LOLE4ISW44XBNF3G -m comment --comment "default/web" -j KUBE-SEP-V6Q53FEPJ64J3EJW
          

          If you have already switched the proxy mode to IPVS, you can inspect the rules this way. A healthy result looks like:

          [root@k8s-node1 ~]# ipvsadm -ln
          Prot LocalAddress:Port Scheduler Flags
            -> RemoteAddress:Port Forward Weight ActiveConn InActConn
          ...
          TCP 10.104.0.64:80 rr
            -> 10.244.169.135:80 Masq 1 0 0
            -> 10.244.36.73:80 Masq 1 0 0
            -> 10.244.169.136:80 Masq 1 0 0...
          

          Use ipvsadm to view the IPVS rules; if the command is missing, install it directly with the system package manager (apt-get on this cluster):

          apt-get  install -y ipvsadm
          

          Right now k8s-master looks like this:

          root@k8s-master:/data/k8s# ipvsadm -ln
          IP Virtual Server version 1.2.1 (size=4096)
          Prot LocalAddress:Port Scheduler Flags
            -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
          
          

          Normally you would get output like the example above. If the rules are missing, kube-proxy is not working, or it is incompatible with the current operating system and failed to generate them.

          Appendix: Service traffic flow diagram (schematic only; the IP addresses shown are not the real ones).



          Solving problem 2: the Service cannot be accessed by IP

          Nothing abnormal shows up in the iptables-save output - or perhaps I am just not familiar enough with the iptables mode. Let's try switching kube-proxy from iptables to IPVS.

          Perform the following steps on both nodes, k8s-master and k8s-node2.

          Load the kernel modules

          Check whether the IPVS kernel modules are loaded:

          # lsmod|grep ip_vs
          ip_vs_sh               16384  0
          ip_vs_wrr              16384  0
          ip_vs_rr               16384  0
          ip_vs                 147456  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
          nf_conntrack          106496  7 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4
          libcrc32c              16384  2 raid456,ip_vs
          

          If they are not loaded, load the IPVS-related modules with:

          modprobe -- ip_vs
          modprobe -- ip_vs_rr
          modprobe -- ip_vs_wrr
          modprobe -- ip_vs_sh
          modprobe -- nf_conntrack_ipv4
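
          To have these modules load again after a reboot, they can be listed in a modules-load.d file (a sketch; assumes a systemd-based distribution, and nf_conntrack_ipv4 matches the older kernel seen in the lsmod output above):

          printf '%s\n' ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4 > /etc/modules-load.d/ipvs.conf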
          

          Change the kube-proxy configuration

          # kubectl edit configmap kube-proxy -n kube-system
          

          Find the following section:

              ipvs:
                minSyncPeriod: 0s
                scheduler: ""
                syncPeriod: 30s
              kind: KubeProxyConfiguration
              metricsBindAddress: ""
              mode: "ipvs"
              nodePortAddresses: null

          mode was originally empty, which defaults to iptables; change it to ipvs.
          scheduler is also empty by default, which means the round-robin (rr) load-balancing algorithm.
          When you are done editing, save and exit.

          Delete all of the kube-proxy Pods (they are managed by a DaemonSet, so they are recreated with the new configuration):

          # kubectl get pods -n kube-system |grep kube-proxy
          kube-proxy-hnp7c                     1/1     Running   0          30h
          kube-proxy-njw8l                     1/1     Running   0          25h
          
          root@k8s-node2:/data/k8s# kubectl delete pod   kube-proxy-hnp7c  -n kube-system
          pod "kube-proxy-hnp7c" deleted
          root@k8s-node2:/data/k8s# kubectl delete pod   kube-proxy-njw8l  -n kube-system 
          pod "kube-proxy-njw8l" deleted
          
          root@k8s-node2:/data/k8s#  kubectl get pods -n kube-system |grep kube-proxy
          kube-proxy-4sv2c                     1/1     Running   0          36s
          kube-proxy-w7kpm                     1/1     Running   0          16s
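
          The same restart can be done in a single command with a label selector (assuming the standard kubeadm label k8s-app=kube-proxy):

          kubectl delete pod -n kube-system -l k8s-app=kube-proxy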
          
          # kubectl logs kube-proxy-4sv2c  -n kube-system
          
          root@k8s-node2:/data/k8s# kubectl logs kube-proxy-4sv2c  -n kube-system
          I0822 09:36:38.757662       1 node.go:172] Successfully retrieved node IP: 192.168.0.3
          I0822 09:36:38.757707       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.0.3), assume IPv4 operation
          I0822 09:36:38.772798       1 server_others.go:258] Using ipvs Proxier.
          W0822 09:36:38.774131       1 proxier.go:445] IPVS scheduler not specified, use rr by default
          I0822 09:36:38.774388       1 server.go:650] Version: v1.20.5
          I0822 09:36:38.774742       1 conntrack.go:52] Setting nf_conntrack_max to 131072
          I0822 09:36:38.775051       1 config.go:224] Starting endpoint slice config controller
          I0822 09:36:38.775127       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
          I0822 09:36:38.775245       1 config.go:315] Starting service config controller
          I0822 09:36:38.775290       1 shared_informer.go:240] Waiting for caches to sync for service config
          I0822 09:36:38.875365       1 shared_informer.go:247] Caches are synced for endpoint slice config 
          I0822 09:36:38.875616       1 shared_informer.go:247] Caches are synced for service config 
          
          

          As long as the log contains "Using ipvs Proxier.", the switch has taken effect.

          Run ipvsadm

          Use ipvsadm to view the IPVS rules; if the command is missing, install it with apt-get.

          root@k8s-master:/data/k8s# ipvsadm -ln
          IP Virtual Server version 1.2.1 (size=4096)
          Prot LocalAddress:Port Scheduler Flags
            -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
          TCP  172.17.0.1:31330 rr
            -> 10.244.0.6:8080              Masq    1      0          0         
            -> 10.244.0.7:8080              Masq    1      0          0         
            -> 10.244.0.8:8080              Masq    1      0          0         
          TCP  192.168.0.3:31303 rr
            -> 10.244.1.4:8082              Masq    1      0          0         
            -> 10.244.1.5:8082              Masq    1      0          0         
            -> 10.244.1.6:8082              Masq    1      0          0         
          TCP  192.168.0.3:31330 rr
            -> 10.244.0.6:8080              Masq    1      0          0         
            -> 10.244.0.7:8080              Masq    1      0          0         
            -> 10.244.0.8:8080              Masq    1      0          0         
          TCP  10.96.0.1:443 rr
            -> 192.168.0.3:6443             Masq    1      0          0         
          TCP  10.96.0.10:53 rr
            -> 10.244.0.2:53                Masq    1      0          0         
            -> 10.244.0.3:53                Masq    1      0          0         
          TCP  10.96.0.10:9153 rr
            -> 10.244.0.2:9153              Masq    1      0          0         
            -> 10.244.0.3:9153              Masq    1      0          0         
          TCP  10.99.230.190:3306 rr
            -> 10.244.0.5:3306              Masq    1      0          0         
            -> 10.244.1.2:3306              Masq    1      0          0         
            -> 10.244.1.3:3306              Masq    1      0          0         
          TCP  10.103.246.193:8082 rr
            -> 10.244.1.4:8082              Masq    1      0          0         
            -> 10.244.1.5:8082              Masq    1      0          0         
            -> 10.244.1.6:8082              Masq    1      0          0         
          TCP  10.105.77.88:8080 rr
            -> 10.244.0.6:8080              Masq    1      0          0         
            -> 10.244.0.7:8080              Masq    1      0          0         
            -> 10.244.0.8:8080              Masq    1      0          0         
          TCP  10.244.0.0:31303 rr
            -> 10.244.1.4:8082              Masq    1      0          0         
            -> 10.244.1.5:8082              Masq    1      0          0         
            -> 10.244.1.6:8082              Masq    1      0          0         
          TCP  10.244.0.0:31330 rr
            -> 10.244.0.6:8080              Masq    1      0          0         
            -> 10.244.0.7:8080              Masq    1      0          0         
            -> 10.244.0.8:8080              Masq    1      0          0         
          TCP  10.244.0.1:31303 rr
            -> 10.244.1.4:8082              Masq    1      0          0         
            -> 10.244.1.5:8082              Masq    1      0          0         
            -> 10.244.1.6:8082              Masq    1      0          0         
          TCP  10.244.0.1:31330 rr
            -> 10.244.0.6:8080              Masq    1      0          0         
            -> 10.244.0.7:8080              Masq    1      0          0         
            -> 10.244.0.8:8080              Masq    1      0          0         
          TCP  127.0.0.1:31303 rr
            -> 10.244.1.4:8082              Masq    1      0          0         
            -> 10.244.1.5:8082              Masq    1      0          0         
            -> 10.244.1.6:8082              Masq    1      0          0         
          TCP  127.0.0.1:31330 rr
            -> 10.244.0.6:8080              Masq    1      0          0         
            -> 10.244.0.7:8080              Masq    1      0          0         
            -> 10.244.0.8:8080              Masq    1      0          0         
          TCP  172.17.0.1:31303 rr
            -> 10.244.1.4:8082              Masq    1      0          0         
            -> 10.244.1.5:8082              Masq    1      0          0         
            -> 10.244.1.6:8082              Masq    1      0          0         
          UDP  10.96.0.10:53 rr
            -> 10.244.0.2:53                Masq    1      0          564       
            -> 10.244.0.3:53                Masq    1      0          563
          root@k8s-master:/data/k8s# curl -I 10.103.246.193:8082
          ^C
          root@k8s-master:/data/k8s# curl -I 114.67.107.240:8082
          ^C
          
          

          Still not solved.

          The underlying iptables settings

          A Baidu search turned up an article on fixing cross-host Pod and container connectivity under flannel; following it, the configuration below was applied on k8s-master and k8s-node2.

          # iptables -P INPUT ACCEPT
          # iptables -P FORWARD ACCEPT
          # iptables -F
          
          # iptables -L -n
          
          root@k8s-master:/data/k8s#  iptables -L -n
          Chain INPUT (policy ACCEPT)
          target     prot opt source               destination         
          JDCLOUDHIDS_IN_LIVE  all  --  0.0.0.0/0            0.0.0.0/0           
          JDCLOUDHIDS_IN  all  --  0.0.0.0/0            0.0.0.0/0           
          
          Chain FORWARD (policy ACCEPT)
          target     prot opt source               destination         
          KUBE-FORWARD  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding rules */
          ACCEPT     all  --  10.244.0.0/16        0.0.0.0/0           
          ACCEPT     all  --  0.0.0.0/0            10.244.0.0/16       
          
          Chain OUTPUT (policy ACCEPT)
          target     prot opt source               destination         
          JDCLOUDHIDS_OUT_LIVE  all  --  0.0.0.0/0            0.0.0.0/0           
          JDCLOUDHIDS_OUT  all  --  0.0.0.0/0            0.0.0.0/0           
          
          Chain DOCKER-USER (0 references)
          target     prot opt source               destination         
          
          Chain JDCLOUDHIDS_IN (1 references)
          target     prot opt source               destination         
          
          Chain JDCLOUDHIDS_IN_LIVE (1 references)
          target     prot opt source               destination         
          
          Chain JDCLOUDHIDS_OUT (1 references)
          target     prot opt source               destination         
          
          Chain JDCLOUDHIDS_OUT_LIVE (1 references)
          target     prot opt source               destination         
          
          Chain KUBE-EXTERNAL-SERVICES (0 references)
          target     prot opt source               destination         
          
          Chain KUBE-FIREWALL (0 references)
          target     prot opt source               destination         
          
          Chain KUBE-FORWARD (1 references)
          target     prot opt source               destination         
          ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding rules */ mark match 0x4000/0x4000
          ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding conntrack pod source rule */ ctstate RELATED,ESTABLISHED
          ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED
          
          Chain KUBE-KUBELET-CANARY (0 references)
          target     prot opt source               destination         
          
          Chain KUBE-PROXY-CANARY (0 references)
          target     prot opt source               destination         
          
          Chain KUBE-SERVICES (0 references)
          target     prot opt source               destination
          

          After repeating the tests, the node's services can now be reached directly, but port 8082 still cannot be accessed, and cross-node pings still fail.

          root@k8s-master:/data/k8s# curl -I 10.103.246.193:8082
          ^C
          root@k8s-master:/data/k8s# curl -I 114.67.107.240:8082
          ^C
          
          root@k8s-master:/data/k8s# ping 10.244.1.3
          PING 10.244.1.3 (10.244.1.3) 56(84) bytes of data.
          ^C
          --- 10.244.1.3 ping statistics ---
          12 packets transmitted, 0 received, 100% packet loss, time 10999ms
          
          root@k8s-master:/data/k8s# ping 10.244.0.5
          PING 10.244.0.5 (10.244.0.5) 56(84) bytes of data.
          64 bytes from 10.244.0.5: icmp_seq=1 ttl=64 time=0.089 ms
          64 bytes from 10.244.0.5: icmp_seq=2 ttl=64 time=0.082 ms
          ^C
          --- 10.244.0.5 ping statistics ---
          2 packets transmitted, 2 received, 0% packet loss, time 999ms
          rtt min/avg/max/mdev = 0.082/0.085/0.089/0.009 ms
          
          
          # curl -I 10.103.246.193
          HTTP/1.1 200 OK
          Server: Tengine
          Date: Sun, 22 Aug 2021 13:10:02 GMT
          Content-Type: text/html
          Content-Length: 1326
          Last-Modified: Wed, 26 Apr 2017 08:03:47 GMT
          Connection: keep-alive
          Vary: Accept-Encoding
          ETag: "59005463-52e"
          Accept-Ranges: bytes
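
          Since cross-node Pod traffic still fails, one remaining thing to check (not verified in this article) is whether the traffic flannel needs between the two nodes is allowed; with the default VXLAN backend that is UDP port 8472:

          # Confirm the backend type (ConfigMap name assumed from the standard flannel manifest)
          kubectl -n kube-system get cm kube-flannel-cfg -o yaml | grep -A2 Backend
          # Make sure UDP 8472 is allowed between 192.168.0.3 and 192.168.0.5 in any host
          # or cloud security-group/firewall rules, then re-test cross-node connectivity:
          ping 10.244.1.3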
          

          References

          (1) Common Kubernetes problems: how to troubleshoot a Service that cannot be accessed - https://mp.weixin.qq.com/s/oCRWkBquUnRLC36CPwoZ1Q

          (2) Switching kube-proxy from iptables to IPVS
          https://www.shangmayuan.com/a/8fae7d6c18764194a8adce91.html

          During development you often find that a port is already in use. When that happens you need to find the program occupying the port and stop it. This article shows how to find which process is using a port on Windows.

          1. Open a command window (run as administrator)

          Start -> Run -> cmd, or press Win+R, to bring up the command window.

          2. List all ports in use

          Enter the command:

          netstat -ano

          This command lists the usage of every port.

          Look through the list for the occupied port - here, 8081 - and find it.

          3. Find the PID using the occupied port

          Enter the command:

          netstat -aon|findstr "8081"

          Press Enter to run it; the last number on the matching line is the PID - here it is 9088.

          4. Look up the process for that PID

          Then enter:

          tasklist|findstr "9088"

          Press Enter to run it.

          This shows which process or program is occupying port 8081; here the result is node.exe.

          Kill the process

          Forcefully (/F) kill the process with PID 9088 and all of its child processes (/T):

          taskkill /T /F /PID 9088 
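
          Steps 3 and 4 can also be combined into one line at the prompt (a sketch; "tokens=5" assumes TCP rows, where the PID is the fifth column, and ":8081" keeps findstr from matching those digits elsewhere on a line):

          rem At the interactive prompt; inside a .bat file write %%a instead of %a
          for /f "tokens=5" %a in ('netstat -aon ^| findstr ":8081"') do @tasklist /fi "PID eq %a"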

          Alternatively, open Task Manager, switch to the Processes tab, and look in the PID column to see which process has PID 9088 (if the PID column is not shown, enable it in the column settings).

          Then end that process, which frees the port for use again.

          Source: https://www.runoob.com/w3cnote/windows-finds-port-usage.html

