
          Modifying the KindEditor editor: replacing the original embed tag with an HTML5 video tag

          KindEditor is a decent WYSIWYG editor, but it seems to have gone without updates for the last few years. Now that HTML5 is mainstream, and one of my projects needed the editor to support MP4 video playback, the capabilities of the HTML5 video tag made the choice easy: replace the embed markup in the existing system with video. The steps are as follows:

          1. At line 296, find:

          embed : ['id', 'class', 'src', 'width', 'height', 'type', 'loop', 'autostart', 'quality', '.width', '.height', 'align', 'allowscriptaccess'],

          and add the following line below it:

          video : ['id', 'class', 'src', 'width', 'height', 'type', 'loop', 'autostart', 'quality', '.width', '.height', 'align', 'allowscriptaccess','controls'],

          (Screenshot of the modified code omitted.)

          2. Find the code block at lines 893-895:

          if (/\.(swf|flv)(\?|$)/i.test(src)) {
              return 'application/x-shockwave-flash';
          }

          and add the following below it:

          if (/\.(mp4|mp5)(\?|$)/i.test(src)) {
              return 'video/mp4';
          }

          3. Then find the code at lines 901-903:

          if (/flash/i.test(type)) {
              return 'ke-flash';
          }

          and add below it:

          if (/video/i.test(type)) {
              return 'ke-video';
          }

          (Screenshot of the modified code omitted.)

          4. At line 917, find the line function _mediaImg(blankPath, attrs) {

          and add the following code above it:

          function _mediaVideo(attrs) {
              var html = '<video ';
              _each(attrs, function(key, val) {
                  html += key + '="' + val + '" ';
              });
              // video is not a void element, so close the tag explicitly
              html += 'controls="controls"></video>';
              return html;
          }
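          For a sense of what this produces, a call like the following (hypothetical URL and dimensions, for illustration only) renders a playable HTML5 video element:

          // hypothetical usage of the helper added above
          _mediaVideo({ src : 'http://example.com/upload/demo.mp4', width : '550', height : '400' });
          // => '<video src="http://example.com/upload/demo.mp4" width="550" height="400" controls="controls"></video>'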

          5. Below line 955, K.mediaEmbed = _mediaEmbed;, add:

          K.mediaVideo = _mediaVideo;

          Done. Now when we upload a video, it is referenced with a video tag instead of the old embed tag. One problem remains, though: after uploading, the editor area shows nothing (the upload actually succeeded, and switching to source mode shows the markup). Debugging in Chrome revealed the culprit is the styling. Comparing the two cases made the difference clear:

          Before: how the embed tag rendered the video (screenshot omitted).

          After: how the video tag renders it (screenshot omitted).

          So we just need to add a style rule. Find the code at line 3528: 'img.ke-media {',

          and change it to 'img.ke-media,img.ke-video {',

          This simply gives the ke-video class the same styling as ke-media. With the file fixed, clear the browser cache (or press Ctrl+F5) and upload a video again — problem solved!

          A few notes up front

          • I pulled the wrong power cable by accident; the VMs were forcibly powered off, and after booting the cluster was dead
          • This post records the resolution
          • The power loss destroyed the etcd snapshot data, and there was no backup, so there is essentially no way to recover it
          • You could bring in a professional DBA to see whether the data can be salvaged
          • The workaround in this post was to delete some of the files in the etcd data directory
          • The cluster can then start, but all deployed workload data is lost, including the CNI; even the cluster's built-in DNS component is gone
          • Where my understanding falls short, corrections are welcome
          • Production or test, always back up your k8s cluster's etcd. Back up etcd. Back up etcd. Important things get said three times (a minimal backup sketch follows below).
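          As a reference only, here is a minimal snapshot/restore sketch with etcdctl; the endpoint and certificate paths are the kubeadm defaults taken from the etcd.yaml shown later in this post, and the backup path is just an example — adjust for your cluster:

          # take a snapshot (run this regularly, e.g. from cron)
          ETCDCTL_API=3 etcdctl \
            --endpoints=https://127.0.0.1:2379 \
            --cacert=/etc/kubernetes/pki/etcd/ca.crt \
            --cert=/etc/kubernetes/pki/etcd/server.crt \
            --key=/etc/kubernetes/pki/etcd/server.key \
            snapshot save /backup/etcd-snapshot-$(date +%F).db

          # restore into a fresh data directory when disaster strikes
          ETCDCTL_API=3 etcdctl snapshot restore /backup/etcd-snapshot-2023-01-19.db \
            --data-dir=/var/lib/etcd-restore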

          All I ever wanted was to try to live the life that was struggling to come out of me. Why was that so very hard? — Hermann Hesse, Demian


          Current state of the cluster

          ┌──[root@vms81.liruilongs.github.io]-[~]
          └─$kubectl get nodes
          The connection to the server 192.168.26.81:6443 was refused - did you specify the right host or port?
          

          Restart docker and kubelet and try to bring things up

          ┌──[root@vms81.liruilongs.github.io]-[~]
          └─$systemctl restart docker
          ┌──[root@vms81.liruilongs.github.io]-[~]
          └─$systemctl restart kubelet.service
          

          Still no luck. Check the kubelet logs on the master node

          ┌──[root@vms81.liruilongs.github.io]-[~]
          └─$journalctl  -u kubelet.service -f
          1月 19 09:32:06 vms81.liruilongs.github.io kubelet[11344]: E0119 09:32:06.703418   11344 kubelet.go:2407] "Error getting node" err="node \"vms81.liruilongs.github.io\" not found"
          1月 19 09:32:06 vms81.liruilongs.github.io kubelet[11344]: E0119 09:32:06.804201   11344 kubelet.go:2407] "Error getting node" err="node \"vms81.liruilongs.github.io\" not found"
          1月 19 09:32:06 vms81.liruilongs.github.io kubelet[11344]: E0119 09:32:06.905156   11344 kubelet.go:2407] "Error getting node" err="node \"vms81.liruilongs.github.io\" not found"
          1月 19 09:32:07 vms81.liruilongs.github.io kubelet[11344]: E0119 09:32:07.005487   11344 kubelet.go:2407] "Error getting node" err="node \"vms81.liruilongs.github.io\" not found"
          1月 19 09:32:07 vms81.liruilongs.github.io kubelet[11344]: E0119 09:32:07.105648   11344 kubelet.go:2407] "Error getting node" err="node \"vms81.liruilongs.github.io\" not found"
          1月 19 09:32:07 vms81.liruilongs.github.io kubelet[11344]: E0119 09:32:07.186066   11344 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://192.168.26.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/vms81.liruilongs.github.io?timeout=10s": dial tcp 192.168.26.81:6443: connect: connection refused
          1月 19 09:32:07 vms81.liruilongs.github.io kubelet[11344]: E0119 09:32:07.205785   11344 kubelet.go:2407] "Error getting node" err="node \"vms81.liruilongs.github.io\" not found"
          

          Use docker to inspect the pods that currently exist

          ┌──[root@vms81.liruilongs.github.io]-[~]
          └─$docker ps
          CONTAINER ID   IMAGE                                               COMMAND                  CREATED          STATUS              PORTS     NAMES
          d9d6471ce936   b51ddc1014b0                                        "kube-scheduler --au…"   17 minutes ago   Up 17 minutes                 k8s_kube-scheduler_kube-scheduler-vms81.liruilongs.github.io_kube-system_e1b874bfdef201d69db10b200b8f47d5_14
          010c1b8c30c6   5425bcbd23c5                                        "kube-controller-man…"   17 minutes ago   Up 17 minutes                 k8s_kube-controller-manager_kube-controller-manager-vms81.liruilongs.github.io_kube-system_49b7654103f80170bfe29d034f806256_15
          7e215924a1dd   registry.aliyuncs.com/google_containers/pause:3.5   "/pause"                 18 minutes ago   Up About a minute             k8s_POD_etcd-vms81.liruilongs.github.io_kube-system_1502584f9ab841720212d4341d723ba2_7
          f557435d150e   registry.aliyuncs.com/google_containers/pause:3.5   "/pause"                 18 minutes ago   Up 18 minutes                 k8s_POD_kube-scheduler-vms81.liruilongs.github.io_kube-system_e1b874bfdef201d69db10b200b8f47d5_7
          5deaffbc555a   registry.aliyuncs.com/google_containers/pause:3.5   "/pause"                 18 minutes ago   Up 18 minutes                 k8s_POD_kube-controller-manager-vms81.liruilongs.github.io_kube-system_49b7654103f80170bfe29d034f806256_7
          a418c2ce33f2   registry.aliyuncs.com/google_containers/pause:3.5   "/pause"                 18 minutes ago   Up 18 minutes                 k8s_POD_kube-apiserver-vms81.liruilongs.github.io_kube-system_a35cb37b6c90c72f607936b33161eefe_6
          

          Neither etcd nor the apiserver has started.

          ┌──[root@vms81.liruilongs.github.io]-[~]
          └─$docker ps -a | grep etcd
          b5e18722315b   004811815584                                        "etcd --advertise-cl…"   5 minutes ago    Exited (2) About a minute ago             k8s_etcd_etcd-vms81.liruilongs.github.io_kube-system_1502584f9ab841720212d4341d723ba2_19
          7e215924a1dd   registry.aliyuncs.com/google_containers/pause:3.5   "/pause"                 21 minutes ago   Up 4 minutes                              k8s_POD_etcd-vms81.liruilongs.github.io_kube-system_1502584f9ab841720212d4341d723ba2_7
          

          Try restarting etcd

          ┌──[root@vms81.liruilongs.github.io]-[~]
          └─$docker restart b5e18722315b
          b5e18722315b
          

          Check whether it came up

          ┌──[root@vms81.liruilongs.github.io]-[~]
          └─$docker ps -a | grep etcd
          b5e18722315b   004811815584                                        "etcd --advertise-cl…"   5 minutes ago    Exited (2) About a minute ago             k8s_etcd_etcd-vms81.liruilongs.github.io_kube-system_1502584f9ab841720212d4341d723ba2_19
          7e215924a1dd   registry.aliyuncs.com/google_containers/pause:3.5   "/pause"                 21 minutes ago   Up 4 minutes                              k8s_POD_etcd-vms81.liruilongs.github.io_kube-system_1502584f9ab841720212d4341d723ba2_7
          ┌──[root@vms81.liruilongs.github.io]-[~]
          └─$docker logs b5e18722315b
          

          Take a look at the corresponding etcd logs

          ┌──[root@vms81.liruilongs.github.io]-[~]
          └─$docker logs 8a53cbc545e4
          ..................................................
          {"level":"info","ts":"2023-01-19T01:34:24.332Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/etcd/member/snap/db","took":"5.557212ms"}
          {"level":"warn","ts":"2023-01-19T01:34:24.332Z","caller":"wal/util.go:90","msg":"ignored file in WAL directory","path":"0000000000000014-0000000000185aba.wal.broken"}
          {"level":"info","ts":"2023-01-19T01:34:24.770Z","caller":"etcdserver/server.go:508","msg":"recovered v2 store from snapshot","snapshot-index":26912747,"snapshot-size":"42 kB"}
          {"level":"warn","ts":"2023-01-19T01:34:24.771Z","caller":"snap/db.go:88","msg":"failed to find [SNAPSHOT-INDEX].snap.db","snapshot-index":26912747,"snapshot-file-path":"/var/lib/etcd/member/snap/00000000019aa7eb.snap.db","error":"snap: snapshot file doesn't exist"}
          {"level":"panic","ts":"2023-01-19T01:43:31.738Z","caller":"etcdserver/server.go:515","msg":"failed to recover v3 backend from snapshot","error":"failed to find database snapshot file (snap: snapshot file doesn't exist)","stacktrace":"go.etcd.io/etcd/server/v3/etcdserver.NewServer\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/server/etcdserver/server.go:515\ngo.etcd.io/etcd/server/v3/embed.StartEtcd\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/server/embed/etcd.go:244\ngo.etcd.io/etcd/server/v3/etcdmain.startEtcd\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/server/etcdmain/etcd.go:227\ngo.etcd.io/etcd/server/v3/etcdmain.startEtcdOrProxyV2\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/server/etcdmain/etcd.go:122\ngo.etcd.io/etcd/server/v3/etcdmain.Main\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/server/etcdmain/main.go:40\nmain.main\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/server/main.go:32\nruntime.main\n\t/home/remote/sbatsche/.gvm/gos/go1.16.3/src/runtime/proc.go:225"}
          panic: failed to recover v3 backend from snapshot
          
          goroutine 1 [running]:
          go.uber.org/zap/zapcore.(*CheckedEntry).Write(0xc000114600, 0xc000588240, 0x1, 0x1)
                  /home/remote/sbatsche/.gvm/pkgsets/go1.16.3/global/pkg/mod/go.uber.org/zap@v1.17.0/zapcore/entry.go:234 +0x58d
          go.uber.org/zap.(*Logger).Panic(0xc000080960, 0x122e2fc, 0x2a, 0xc000588240, 0x1, 0x1)
                  /home/remote/sbatsche/.gvm/pkgsets/go1.16.3/global/pkg/mod/go.uber.org/zap@v1.17.0/logger.go:227 +0x85
          go.etcd.io/etcd/server/v3/etcdserver.NewServer(0x7ffe54af1e25, 0x1a, 0x0, 0x0, 0x0, 0x0, 0xc0004cf830, 0x1, 0x1, 0xc0004cfa70, ...)
                  /tmp/etcd-release-3.5.0/etcd/release/etcd/server/etcdserver/server.go:515 +0x1656
          go.etcd.io/etcd/server/v3/embed.StartEtcd(0xc0000ee000, 0xc0000ee600, 0x0, 0x0)
                  /tmp/etcd-release-3.5.0/etcd/release/etcd/server/embed/etcd.go:244 +0xef8
          go.etcd.io/etcd/server/v3/etcdmain.startEtcd(0xc0000ee000, 0x1202a6f, 0x6, 0xc000428401, 0x2)
                  /tmp/etcd-release-3.5.0/etcd/release/etcd/server/etcdmain/etcd.go:227 +0x32
          go.etcd.io/etcd/server/v3/etcdmain.startEtcdOrProxyV2(0xc00003a120, 0x12, 0x12)
                  /tmp/etcd-release-3.5.0/etcd/release/etcd/server/etcdmain/etcd.go:122 +0x257a
          go.etcd.io/etcd/server/v3/etcdmain.Main(0xc00003a120, 0x12, 0x12)
                  /tmp/etcd-release-3.5.0/etcd/release/etcd/server/etcdmain/main.go:40 +0x11f
          main.main()
                  /tmp/etcd-release-3.5.0/etcd/release/etcd/server/main.go:32 +0x45
          

          "msg":"failed to recover v3 backend from snapshot","error":"failed to find database snapshot file (snap: snapshot file doesn't exist)","

          "msg": "從快照恢復v3后臺失敗", "error": "未能找到數(shù)據(jù)庫快照文件(snap: 快照文件不存在)","

          The power loss corrupted the data files; etcd wants to recover from its snapshot, but the snapshot file is gone.

          Well, there is no backup here, so there is essentially no way to repair this. The only option left is to rebuild the cluster with kubeadm.

          Some remedial measures

          If you want to bring the cluster up by some other means in order to recover some of its current configuration, you can try the approach below. But be warned: on my cluster this method lost all pod data, and in the end I had to reset the cluster anyway.

          If you do try the approach below, make sure you back up the etcd data files before deleting anything.

          etcd on the master runs as a static pod, so let's check its yaml to see where the data directory is configured.

          ┌──[root@vms81.liruilongs.github.io]-[~]
          └─$cd /etc/kubernetes/manifests/
          ┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes/manifests]
          └─$ls
          etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
          

          - --data-dir=/var/lib/etcd

          ┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes/manifests]
          └─$cat etcd.yaml | grep -e "--"
              - --advertise-client-urls=https://192.168.26.81:2379
              - --cert-file=/etc/kubernetes/pki/etcd/server.crt
              - --client-cert-auth=true
              - --data-dir=/var/lib/etcd
              - --initial-advertise-peer-urls=https://192.168.26.81:2380
              - --initial-cluster=vms81.liruilongs.github.io=https://192.168.26.81:2380
              - --key-file=/etc/kubernetes/pki/etcd/server.key
              - --listen-client-urls=https://127.0.0.1:2379,https://192.168.26.81:2379
              - --listen-metrics-urls=http://127.0.0.1:2381
              - --listen-peer-urls=https://192.168.26.81:2380
              - --name=vms81.liruilongs.github.io
              - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
              - --peer-client-cert-auth=true
              - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
              - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
              - --snapshot-count=10000
              - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
          

          These are the corresponding data files. You can try to repair them, or, if you just want the cluster to start quickly, back them up and then delete the snapshot and WAL files, as follows.

          ┌──[root@vms81.liruilongs.github.io]-[/var/lib/etcd/member]
          └─$tree
          .
          ├── snap
          │   ├── 0000000000000058-00000000019a0ba7.snap
          │   ├── 0000000000000058-00000000019a32b8.snap
          │   ├── 0000000000000058-00000000019a59c9.snap
          │   ├── 0000000000000058-00000000019a80da.snap
          │   ├── 0000000000000058-00000000019aa7eb.snap
          │   └── db
          └── wal
              ├── 0000000000000014-0000000000185aba.wal.broken
              ├── 0000000000000142-0000000001963c0e.wal
              ├── 0000000000000143-0000000001977bbe.wal
              ├── 0000000000000144-0000000001986aa6.wal
              ├── 0000000000000145-0000000001995ef6.wal
              ├── 0000000000000146-00000000019a544d.wal
              └── 1.tmp
          
          2 directories, 13 files
          

          Back up the data files first

          ┌──[root@vms81.liruilongs.github.io]-[/var/lib/etcd]
          └─$ls
          member
          ┌──[root@vms81.liruilongs.github.io]-[/var/lib/etcd]
          └─$tar -cvf member.tar member/
          member/
          member/snap/
          member/snap/db
          member/snap/0000000000000058-00000000019a0ba7.snap
          member/snap/0000000000000058-00000000019a32b8.snap
          member/snap/0000000000000058-00000000019a59c9.snap
          member/snap/0000000000000058-00000000019a80da.snap
          member/snap/0000000000000058-00000000019aa7eb.snap
          member/wal/
          member/wal/0000000000000142-0000000001963c0e.wal
          member/wal/0000000000000144-0000000001986aa6.wal
          member/wal/0000000000000014-0000000000185aba.wal.broken
          member/wal/0000000000000145-0000000001995ef6.wal
          member/wal/0000000000000146-00000000019a544d.wal
          member/wal/1.tmp
          member/wal/0000000000000143-0000000001977bbe.wal
          ┌──[root@vms81.liruilongs.github.io]-[/var/lib/etcd]
          └─$ls
          member  member.tar
          ┌──[root@vms81.liruilongs.github.io]-[/var/lib/etcd]
          └─$mv member.tar  /tmp/
          ┌──[root@vms81.liruilongs.github.io]-[/var/lib/etcd]
          └─$
          
          ┌──[root@vms81.liruilongs.github.io]-[/var/lib/etcd]
          └─$rm -rf  member/snap/*.snap
          ┌──[root@vms81.liruilongs.github.io]-[/var/lib/etcd]
          └─$rm -rf  member/wal/*.wal
          ┌──[root@vms81.liruilongs.github.io]-[/var/lib/etcd]
          └─$
          

          Restart the corresponding docker container, or restart the kubelet.

          ┌──[root@vms81.liruilongs.github.io]-[/var/lib/etcd]
          └─$docker ps -a | grep etcd
          a3b97cb34d9b   004811815584                                        "etcd --advertise-cl…"   2 minutes ago   Exited (2) 2 minutes ago              k8s_etcd_etcd-vms81.liruilongs.github.io_kube-system_1502584f9ab841720212d4341d723ba2_45
          7e215924a1dd   registry.aliyuncs.com/google_containers/pause:3.5   "/pause"                 3 hours ago     Up 2 hours                            k8s_POD_etcd-vms81.liruilongs.github.io_kube-system_1502584f9ab841720212d4341d723ba2_7
          ┌──[root@vms81.liruilongs.github.io]-[/var/lib/etcd]
          └─$docker start a3b97cb34d9b
          a3b97cb34d9b
          ┌──[root@vms81.liruilongs.github.io]-[/var/lib/etcd]
          └─$docker ps -a | grep etcd
          e1fc068247af   004811815584                                        "etcd --advertise-cl…"   3 seconds ago   Up 2 seconds                          k8s_etcd_etcd-vms81.liruilongs.github.io_kube-system_1502584f9ab841720212d4341d723ba2_46
          a3b97cb34d9b   004811815584                                        "etcd --advertise-cl…"   3 minutes ago   Exited (2) 3 seconds ago              k8s_etcd_etcd-vms81.liruilongs.github.io_kube-system_1502584f9ab841720212d4341d723ba2_45
          7e215924a1dd   registry.aliyuncs.com/google_containers/pause:3.5   "/pause"                 3 hours ago     Up 2 hours                            k8s_POD_etcd-vms81.liruilongs.github.io_kube-system_1502584f9ab841720212d4341d723ba2_7
          ┌──[root@vms81.liruilongs.github.io]-[/var/lib/etcd]
          └─$
          

          Check the node status

          ┌──[root@vms81.liruilongs.github.io]-[/var/lib/etcd]
          └─$kubectl get nodes
          NAME                          STATUS   ROLES    AGE   VERSION
          vms155.liruilongs.github.io   Ready    <none>   76s   v1.22.2
          vms81.liruilongs.github.io    Ready    <none>   76s   v1.22.2
          vms82.liruilongs.github.io    Ready    <none>   76s   v1.22.2
          vms83.liruilongs.github.io    Ready    <none>   76s   v1.22.2
          ┌──[root@vms81.liruilongs.github.io]-[/var/lib/etcd]
          └─$
          

          List all the pods currently in the cluster.

          ┌──[root@vms81.liruilongs.github.io]-[~/ansible/kubevirt]
          └─$kubectl get pods -A
          NAME                                                 READY   STATUS    RESTARTS         AGE
          etcd-vms81.liruilongs.github.io                      1/1     Running   48 (3h35m ago)   3h53m
          kube-apiserver-vms81.liruilongs.github.io            1/1     Running   48 (3h35m ago)   3h51m
          kube-controller-manager-vms81.liruilongs.github.io   1/1     Running   17 (3h35m ago)   3h51m
          kube-scheduler-vms81.liruilongs.github.io            1/1     Running   16 (3h35m ago)   3h52m
          

          網(wǎng)絡相關的 pod 都不在了,而且 k8s 的 dns 組件也沒有起來, 這里需要 重新配置網(wǎng)絡,有點麻煩,正常情況下如果, 網(wǎng)絡相關的組件沒有起來, 所有節(jié)點應該都是未就緒狀態(tài)。感覺有點妖。。。時間關系,我需要集群來做實驗,所以通過 kubeadm重置了

          ┌──[root@vms81.liruilongs.github.io]-[~/ansible]
          └─$kubectl apply -f calico.yaml
          

          References


          https://github.com/etcd-io/etcd/issues/11949

          1. Introduction

          XXL-JOB is an excellent Chinese open-source distributed task-scheduling platform. It comes with its own scheduling registry and offers rich scheduling and blocking strategies, all managed through a visual console, which makes it very convenient to use.

          Being home-grown, it is quick to pick up, and its source code is also excellent. As a scheduling platform it makes heavy use of threads, which makes it especially worth studying.

          XXL-JOB consists of two modules: the scheduling center and the executor. For the details, let's quote the introduction from the official site:

          • 調(diào)度模塊(調(diào)度中心):
            負責管理調(diào)度信息,按照調(diào)度配置發(fā)出調(diào)度請求,自身不承擔業(yè)務代碼。調(diào)度系統(tǒng)與任務解耦,提高了系統(tǒng)可用性和穩(wěn)定性,同時調(diào)度系統(tǒng)性能不再受限于任務模塊;
            支持可視化、簡單且動態(tài)的管理調(diào)度信息,包括任務新建,更新,刪除,GLUE開發(fā)和任務報警等,所有上述操作都會實時生效,同時支持監(jiān)控調(diào)度結果以及執(zhí)行日志,支持執(zhí)行器Failover。
          • 執(zhí)行模塊(執(zhí)行器):
            負責接收調(diào)度請求并執(zhí)行任務邏輯。任務模塊專注于任務的執(zhí)行等操作,開發(fā)和維護更加簡單和高效;
            接收“調(diào)度中心”的執(zhí)行請求、終止請求和日志請求等。

          (Figure: XXL-JOB architecture diagram, omitted.)

          In XXL-JOB the scheduling module and the task module are completely decoupled. When the scheduler triggers a job, it resolves the job's parameters and issues a remote call to the matching executor service. The model resembles RPC: the scheduling center plays the role of the call proxy, and the executor provides the remote service.

          Let's see how it is used in a Spring Boot environment, starting with the executor configuration:

              @Bean
              public XxlJobSpringExecutor xxlJobExecutor() {
                  logger.info(">>>>>>>>>>> xxl-job config init.");
                  XxlJobSpringExecutor xxlJobSpringExecutor = new XxlJobSpringExecutor();
                  // scheduling center address
                  xxlJobSpringExecutor.setAdminAddresses(adminAddresses);
                  // executor AppName
                  xxlJobSpringExecutor.setAppname(appname);
                  // executor registration address; leave empty for the default
                  xxlJobSpringExecutor.setAddress(address);
                  // executor IP (optional); empty means auto-detect
                  xxlJobSpringExecutor.setIp(ip);
                  // executor port
                  xxlJobSpringExecutor.setPort(port);
                  // access token for executor communication
                  xxlJobSpringExecutor.setAccessToken(accessToken);
                  // disk path for the executor's log files
                  xxlJobSpringExecutor.setLogPath(logPath);
                  // log file retention, in days
                  xxlJobSpringExecutor.setLogRetentionDays(logRetentionDays);

                  return xxlJobSpringExecutor;
              }
          
          

          XXL-JOB offers several ways to execute tasks; today we look at the simplest one, bean mode:

              /**
               * 1. Simple task example (bean mode)
               */
              @XxlJob("demoJobHandler")
              public void demoJobHandler() throws Exception {
                  XxlJobHelper.log("XXL-JOB, Hello World.");

                  for (int i = 0; i < 5; i++) {
                      XxlJobHelper.log("beat at:" + i);
                      TimeUnit.SECONDS.sleep(2);
                  }
                  // default success
              }
          
          

          現(xiàn)在在調(diào)度中心稍做配置,我們這段代碼就可以按照一定的策略進行調(diào)度執(zhí)行,是不是很神奇?我們先看下官網(wǎng)上的解釋:

          Principle: each bean-mode task is a Spring bean instance maintained in the executor project's Spring container. The task class must carry the @JobHandler(value="name") annotation, because the executor uses that annotation to identify task beans in the container. The task class implements the common IJobHandler interface and puts its task logic in the execute method: when the executor receives a scheduling request from the scheduling center, it calls IJobHandler's execute method to run the task.

          Knowledge from paper always feels shallow; real understanding takes doing. So today's task is to follow that paragraph through the source and get a broad view of how it is implemented.

          2. XxlJobSpringExecutor

          As the name suggests, XxlJobSpringExecutor is the template class XXL-JOB provides for Spring applications. First, its class structure:

          (Figure: XxlJobSpringExecutor class hierarchy, omitted.)

          XxlJobSpringExecutor extends XxlJobExecutor and, because it targets a Spring environment, also implements several built-in Spring interfaces to wire the executor module together. I won't detail each interface here; they are all easy to look up.

          Let's look at the initialization method, afterSingletonsInstantiated:

              // start
              @Override
              public void afterSingletonsInstantiated() {

                  // register every job handler
                  initJobHandlerMethodRepository(applicationContext);

                  // refresh GlueFactory
                  GlueFactory.refreshInstance(1);

                  // super start
                  try {
                      super.start();
                  } catch (Exception e) {
                      throw new RuntimeException(e);
                  }
              }
          
          

          The main flow is simple: first register each JobHandler, then run the startup logic. GlueFactory.refreshInstance(1) serves the other (Glue) execution mode, which relies on groovy and is outside the scope of this analysis, so we skip it. Let's see how JobHandlers get registered.

           private void initJobHandlerMethodRepository(ApplicationContext applicationContext) {
                  if (applicationContext == null) {
                      return;
                  }
                  // iterate over all beans, collecting every method annotated with @XxlJob
                  String[] beanDefinitionNames = applicationContext.getBeanNamesForType(Object.class, false, true);
                  for (String beanDefinitionName : beanDefinitionNames) {
                      Object bean = applicationContext.getBean(beanDefinitionName);
          
                      Map<Method, XxlJob> annotatedMethods = null;   // referred to :org.springframework.context.event.EventListenerMethodProcessor.processBean
                      try {
                          annotatedMethods = MethodIntrospector.selectMethods(bean.getClass(),
                                  new MethodIntrospector.MetadataLookup<XxlJob>() {
                                      @Override
                                      public XxlJob inspect(Method method) {
                                          return AnnotatedElementUtils.findMergedAnnotation(method, XxlJob.class);
                                      }
                                  });
                      } catch (Throwable ex) {
                          logger.error("xxl-job method-jobhandler resolve error for bean[" + beanDefinitionName + "].", ex);
                      }
                      if (annotatedMethods==null || annotatedMethods.isEmpty()) {
                          continue;
                      }
                  // for each @XxlJob method, take its executeMethod plus the initMethod/destroyMethod named in the annotation, and register them
                      for (Map.Entry<Method, XxlJob> methodXxlJobEntry : annotatedMethods.entrySet()) {
                          Method executeMethod = methodXxlJobEntry.getKey();
                          XxlJob xxlJob = methodXxlJobEntry.getValue();
                          if (xxlJob == null) {
                              continue;
                          }
          
                          String name = xxlJob.value();
                          if (name.trim().length() == 0) {
                              throw new RuntimeException("xxl-job method-jobhandler name invalid, for[" + bean.getClass() + "#" + executeMethod.getName() + "] .");
                          }
                          if (loadJobHandler(name) != null) {
                              throw new RuntimeException("xxl-job jobhandler[" + name + "] naming conflicts.");
                          }
          
                          executeMethod.setAccessible(true);
          
                        // init and destroy
                          Method initMethod = null;
                          Method destroyMethod = null;
          
                          if (xxlJob.init().trim().length() > 0) {
                              try {
                                  initMethod = bean.getClass().getDeclaredMethod(xxlJob.init());
                                  initMethod.setAccessible(true);
                              } catch (NoSuchMethodException e) {
                                  throw new RuntimeException("xxl-job method-jobhandler initMethod invalid, for[" + bean.getClass() + "#" + executeMethod.getName() + "] .");
                              }
                          }
                          if (xxlJob.destroy().trim().length() > 0) {
                              try {
                                  destroyMethod = bean.getClass().getDeclaredMethod(xxlJob.destroy());
                                  destroyMethod.setAccessible(true);
                              } catch (NoSuchMethodException e) {
                                  throw new RuntimeException("xxl-job method-jobhandler destroyMethod invalid, for[" + bean.getClass() + "#" + executeMethod.getName() + "] .");
                              }
                          }
          
                        // register the jobhandler
                          registJobHandler(name, new MethodJobHandler(bean, executeMethod, initMethod, destroyMethod));
                      }
                  }
          
              }
          
          

          Because XxlJobSpringExecutor implements ApplicationContextAware, it can fetch every bean instance in the container through applicationContext. MethodIntrospector then filters out all methods carrying the @XxlJob annotation, and each executeMethod — together with the initMethod and destroyMethod named in the annotation — is registered into jobHandlerRepository, a thread-safe ConcurrentMap. MethodJobHandler is a template class implementing the IJobHandler interface whose sole job is to invoke the target method via reflection (listing below, followed by a sketch of the repository bookkeeping). With that, the earlier statement — the task class needs the @JobHandler(value="name") annotation because the executor uses it to identify tasks in the Spring container — now makes sense.

          public class MethodJobHandler extends IJobHandler {
              ....
              public MethodJobHandler(Object target, Method method, Method initMethod, Method destroyMethod) {
                  this.target = target;
                  this.method = method;
          
                  this.initMethod = initMethod;
                  this.destroyMethod = destroyMethod;
              }
          
              @Override
              public void execute() throws Exception {
                  Class<?>[] paramTypes = method.getParameterTypes();
                  if (paramTypes.length > 0) {
                      method.invoke(target, new Object[paramTypes.length]);       // method-param can not be primitive-types
                  } else {
                      method.invoke(target);
                  }
              }
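
          For reference, the registry bookkeeping inside XxlJobExecutor amounts to roughly the following (a simplified sketch, not a verbatim copy of the source):

              // sketch: the thread-safe handler registry kept by XxlJobExecutor
              private static ConcurrentMap<String, IJobHandler> jobHandlerRepository =
                      new ConcurrentHashMap<String, IJobHandler>();

              public static IJobHandler registJobHandler(String name, IJobHandler jobHandler) {
                  return jobHandlerRepository.put(name, jobHandler);   // keyed by the @XxlJob name
              }

              public static IJobHandler loadJobHandler(String name) {
                  return jobHandlerRepository.get(name);               // null if nothing registered
              }

          This is also why initJobHandlerMethodRepository calls loadJobHandler(name) before registering: a non-null result means a naming conflict.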
          
          

          3. The embedded server: initEmbedServer

          With JobHandler registration covered, the next step is starting the executor module itself. Here is the start method:

              public void start() throws Exception {

                  // initialize the log path
                  XxlJobFileAppender.initLogPath(logPath);

                  // build the admin (scheduling center) client list
                  initAdminBizList(adminAddresses, accessToken);

                  // start the log-cleanup thread
                  JobLogFileCleanThread.getInstance().start(logRetentionDays);

                  // start the callback thread that reports execution results back to the scheduling center
                  TriggerCallbackThread.getInstance().start();

                  // start the embedded server
                  initEmbedServer(address, ip, port, appname, accessToken);
              }
          
          

          We won't dig into the first few steps — read them yourself if interested. Let's go straight into initEmbedServer to see how the embedded server starts and how it registers with the scheduling center.

              private void initEmbedServer(String address, String ip, int port, String appname, String accessToken) throws Exception {
                  ...
                  // start
                  embedServer = new EmbedServer();
                  embedServer.start(address, port, appname, accessToken);
              }


              public void start(final String address, final int port, final String appname, final String accessToken) {
                  ...
                  // boot the netty server
                  ServerBootstrap bootstrap = new ServerBootstrap();
                  bootstrap.group(bossGroup, workerGroup)
                          .channel(NioServerSocketChannel.class)
                          .childHandler(new ChannelInitializer<SocketChannel>() {
                              @Override
                              public void initChannel(SocketChannel channel) throws Exception {
                                  channel.pipeline()
                                          .addLast(new IdleStateHandler(0, 0, 30 * 3, TimeUnit.SECONDS))  // beat 3N, close if idle
                                          .addLast(new HttpServerCodec())
                                          .addLast(new HttpObjectAggregator(5 * 1024 * 1024))  // merge request & response to FULL
                                          .addLast(new EmbedHttpServerHandler(executorBiz, accessToken, bizThreadPool));
                              }
                          })
                          .childOption(ChannelOption.SO_KEEPALIVE, true);

                  // bind
                  ChannelFuture future = bootstrap.bind(port).sync();

                  logger.info(">>>>>>>>>>> xxl-job remoting server start success, nettype = {}, port = {}", EmbedServer.class, port);

                  // register with the scheduling center
                  startRegistry(appname, address);
                  ...
              }
          
          

          The executor module needs a communication channel of its own — otherwise the scheduling center could not call it — so it embeds a netty server. Once the server is up, the executor formally sends its registration request to the scheduling center. Here is the registration code:

              RegistryParam registryParam = new RegistryParam(RegistryConfig.RegistType.EXECUTOR.name(), appname, address);
              for (AdminBiz adminBiz: XxlJobExecutor.getAdminBizList()) {
                  try {
                      // send the registration request
                      ReturnT<String> registryResult = adminBiz.registry(registryParam);
                      if (registryResult!=null && ReturnT.SUCCESS_CODE == registryResult.getCode()) {
                          registryResult = ReturnT.SUCCESS;
                          logger.debug(">>>>>>>>>>> xxl-job registry success, registryParam:{}, registryResult:{}", new Object[]{registryParam, registryResult});
                          break;
                      } else {
                          logger.info(">>>>>>>>>>> xxl-job registry fail, registryParam:{}, registryResult:{}", new Object[]{registryParam, registryResult});
                      }
                  } catch (Exception e) {
                      logger.info(">>>>>>>>>>> xxl-job registry error, registryParam:{}", registryParam, e);
                  }
              }
          
          
              @Override
              public ReturnT<String> registry(RegistryParam registryParam) {
                  return XxlJobRemotingUtil.postBody(addressUrl + "api/registry", accessToken, timeout, registryParam, String.class);
              }
          
          

          XxlJobRemotingUtil.postBody is just a RESTful HTTP helper following XXL-JOB's conventions; besides registration it also carries deregistration requests, callback requests, and so on — too many to walk through here. When the scheduling center receives such a request, it performs the corresponding DB handling:

                  // services mapping
                  if ("callback".equals(uri)) {
                      List<HandleCallbackParam> callbackParamList = GsonTool.fromJson(data, List.class, HandleCallbackParam.class);
                      return adminBiz.callback(callbackParamList);
                  } else if ("registry".equals(uri)) {
                      RegistryParam registryParam = GsonTool.fromJson(data, RegistryParam.class);
                      return adminBiz.registry(registryParam);
                  } else if ("registryRemove".equals(uri)) {
                      RegistryParam registryParam = GsonTool.fromJson(data, RegistryParam.class);
                      return adminBiz.registryRemove(registryParam);
                  } else {
                      return new ReturnT<String>(ReturnT.FAIL_CODE, "invalid request, uri-mapping("+ uri +") not found.");
                  }
          
          

          At this point we have a rough picture of the entire registration flow. Likewise, when the scheduling center sends a request to our executor — say, a job-trigger request — it is the same style of HTTP request, sent to the embedded netty server analyzed above. Only the invoking method is shown here:

              @Override
              public ReturnT<String> run(TriggerParam triggerParam) {
                  return XxlJobRemotingUtil.postBody(addressUrl + "run", accessToken, timeout, triggerParam, String.class);
              }
          
          

          So when the executor module receives the request, it executes the matching method through the jobHandler registered earlier: the executor puts the request into an asynchronous execution queue, immediately acknowledges the scheduling center, and runs the target method asynchronously. And with that, a full registration-and-execution cycle is complete. (A toy sketch of that queue pattern follows.)
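          To make the "enqueue, ack, execute asynchronously" idea concrete, here is a toy model only — not XXL-JOB's actual JobThread implementation; the class and method names are invented for illustration:

              import java.util.concurrent.LinkedBlockingQueue;

              // Toy model: the server thread enqueues a trigger and acks at once,
              // while this worker thread drains the queue and runs the job logic.
              public class ToyJobThread extends Thread {
                  private final LinkedBlockingQueue<String> triggerQueue = new LinkedBlockingQueue<>();

                  // called from the embedded server's handler thread
                  public String pushTrigger(String triggerParam) {
                      triggerQueue.add(triggerParam);
                      return "SUCCESS";   // immediate ack back to the scheduling center
                  }

                  @Override
                  public void run() {
                      while (!Thread.currentThread().isInterrupted()) {
                          try {
                              String param = triggerQueue.take();   // blocks until a trigger arrives
                              // the real executor would call the registered IJobHandler's execute() here
                              System.out.println("executing job with param: " + param);
                          } catch (InterruptedException e) {
                              Thread.currentThread().interrupt();
                          }
                      }
                  }
              }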

          4. Wrapping up

          Of course, XXL-JOB has many more rich features than I can do justice to here. This post is only meant to prime the pump by introducing the most fundamental pieces; if you are interested, do read the related code yourself. All in all, it is an excellent home-grown open-source project and well worth the praise — here's hoping more and more fine open-source frameworks come out of China.

          If you found this post helpful, a share and a follow would be appreciated.

