Using ByConity as the Storage Engine

Created: 2024-09-27  Last modified: 2025-02-06

# 1. Introduction

ByConity is ByteDance's fork of ClickHouse (currently synced with ClickHouse v23.3) that supports separation of storage and compute.

Since version 6.6, DeepFlow lets you choose between ClickHouse and ByConity via deployment parameters. ClickHouse is used by default and can be switched to ByConity.

Tip

ByConity runs 17 Pods in total: 9 of them have a Request and Limit of 1.1C / 1280M, 1 Pod has 1C / 1G, and 1 Pod has 1C / 512M. For the byconity-server, vw-default, and vw-write components, the local Disk Cache size can be changed via the lru_max_size setting, and the log data storage limit via the size and count settings. Resource requirements:

  • CPU: it is recommended that the Kubernetes cluster has at least 12C of allocatable resources left; actual consumption can be higher.
  • Memory: it is recommended that the Kubernetes cluster has at least 14G of allocatable resources left; actual consumption can be higher.
  • Disk: it is recommended that each data node has more than 180G of disk capacity, of which the local Disk Cache of byconity-server, vw-default, and vw-write takes 40G each, and the log data of byconity-server, vw-default, and vw-write takes 20G each.
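
Before deploying, you can check whether the cluster still has enough allocatable headroom. A minimal sketch using kubectl (assuming read access to node objects; output formats may vary by Kubernetes version):

```bash
# Per-node allocatable CPU and memory
kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.allocatable.cpu,MEM:.status.allocatable.memory

# Resources already requested on each node, to estimate the remaining headroom
kubectl describe nodes | grep -A 8 "Allocated resources"
```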

# 2. Deployment Parameters

ByConity connects to object storage by default. Modify values-custom.yaml, and be sure to change endpoint, region, bucket, path, ak_id, and ak_secret to the correct parameters of your object storage. It is recommended to adjust the replica counts of byconity-server, vw-default, and vw-write to match the number of deepflow-server replicas or the number of nodes. Before modifying the vw-default and vw-write parameters, first copy the defaultWorker and virtualWarehouses content below (i.e., the contents of the values-example-byconity-vw.yaml file) into values-custom.yaml and make your modifications on top of it:

```yaml
global:
  storageEngine: byconity
deepflow:
  clickhouse:
    enabled: false
  byconity:
    enabled: true
    byconity:
      configOverwrite:
        storage_configuration:
          disks:
            server_s3_disk_0:
              endpoint: https://oss-cn-beijing-internal.aliyuncs.com
              region: cn-beijing
              bucket: FIX_ME_BUCKET
              path: byconity0
              ak_id: FIX_ME_ACCESS_KEY
              ak_secret: FIX_ME_ACCESS_SECRET
      server:
        replicas: 1 # Number of replicas of pod byconity-server
        storage:
          localDisk:
            pvcSpec:
              storageClassName: openebs-hostpath # replace with your storageClassName
          log:
            pvcSpec:
              storageClassName: openebs-hostpath # replace with your storageClassName
        configOverwrite:
          logger:
            level: trace
            size: 2000M # Log file size limit
            count: 10 # Limitation of the number of log files
          disk_cache_strategies:
            simple:
              lru_max_size: 42949672960 # 40Gi = 40 x 1024 x 1024 x 1024 bytes, maximum disk cache space
      tso:
        storage:
          localDisk:
            pvcSpec:
              storageClassName: openebs-hostpath # replace with your storageClassName
          log:
            pvcSpec:
              storageClassName: openebs-hostpath # replace with your storageClassName
      defaultWorker: &defaultWorker
        hostNetwork: false
        livenessProbe:
          exec:
            command: [ "/opt/byconity/scripts/lifecycle/liveness" ]
          failureThreshold: 6
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 20
        readinessProbe:
          exec:
            command: [ "/opt/byconity/scripts/lifecycle/readiness" ]
          failureThreshold: 5
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 10
        storage:
          localDisk:
            pvcSpec:
              accessModes:
                - ReadWriteOnce
              resources:
                requests:
                  storage: 50Gi
              storageClassName: openebs-hostpath # replace with your storageClassName
          log:
            pvcSpec:
              accessModes:
                - ReadWriteOnce
              resources:
                requests:
                  storage: 10Gi
              storageClassName: openebs-hostpath # replace with your storageClassName
        configOverwrite:
          logger:
            level: trace
            size: 2000M # Log file size limit
            count: 10 # Limitation of the number of log files
          disk_cache_strategies:
            simple:
              lru_max_object_num: 4000000 # Limit the total number of files
              lru_max_size: 42949672960 # 40Gi
          # timezone: Etc/UTC

      virtualWarehouses:
        - <<: *defaultWorker
          name: vw_default
          replicas: 1 # Number of replicas of pod vw-default
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: tsdb
                    operator: In
                    values:
                    - enable
            podAntiAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchExpressions:
                  - key: byconity-vw
                    operator: In
                    values:
                    - "vw_default"
                  - key: byconity-role
                    operator: In
                    values:
                    - "worker" 
                topologyKey: kubernetes.io/hostname
        - <<: *defaultWorker
          name: vw_write
          replicas: 1 # Number of replicas of pod vw-write
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: tsdb
                    operator: In
                    values:
                    - enable
            podAntiAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchExpressions:
                  - key: byconity-vw
                    operator: In
                    values:
                    - "vw_write"
                  - key: byconity-role
                    operator: In
                    values:
                    - "worker" 
                topologyKey: kubernetes.io/hostname
    fdb:
      clusterSpec:
        processes:
          general:
            volumeClaimTemplate:
              spec:
                storageClassName: openebs-hostpath # replace with your storageClassName
```
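The nodeAffinity rules above schedule the vw_default and vw_write workers only onto nodes labeled tsdb=enable, and the podAntiAffinity spreads workers of the same virtual warehouse across nodes. If your data nodes do not carry the label yet, add it first (the node name is a placeholder):

```bash
# Label each node that should run ByConity workers
kubectl label node <your-node-name> tsdb=enable

# Verify which nodes carry the label
kubectl get nodes -l tsdb=enable
```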

# 3. Pulling the Images

The images used by ByConity are not included in the ISO, so download the image package first and upload it to your image registry. For a self-hosted registry, you can refer to the following commands:

```bash
tar -xzf registry-byconity-latest.tar.gz
/bin/cp -rf registry-byconity/registry/*  /usr/local/deepflow/registry/
```
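If you want to confirm that the self-hosted registry picked up the copied repositories, you can query the standard Docker Registry v2 API (a sketch; the address localhost:5000 is an assumption, substitute your registry's actual address and port):

```bash
# List the repositories the registry now serves; the ByConity images should appear
curl http://localhost:5000/v2/_catalog
```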

# 3.1 Download via Jenkins

Download the image package for the corresponding version from Jenkins.

# 3.2 Download via OSS

Download the Latest image package from ByConity-Latest. For other versions, use the OSS browser to download the corresponding image package from this path: oss://publicshare-unsafe/byconity/
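
If you prefer a CLI over the OSS browser, Alibaba Cloud's ossutil tool can also read this path (a sketch, assuming ossutil is installed and configured via ossutil config; the package file name is a placeholder):

```bash
# List the image packages available under the path
ossutil ls oss://publicshare-unsafe/byconity/

# Download the package for a specific version (placeholder name)
ossutil cp oss://publicshare-unsafe/byconity/<version>.tar.gz .
```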

# 4. Redeploy DeepFlow

```bash
/usr/local/deepflow/bin/helm delete deepflow -n deepflow
/usr/local/deepflow/bin/deepflow-deploy -uo deepflow
```
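After redeployment, verify that the ByConity components start up; all Pods should eventually reach Running (a basic check, exact pod names will vary):

```bash
# ByConity pod names are prefixed with "byconity-"
kubectl get pods -n deepflow | grep byconity
```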

# 5. Notes

  • ByConity only supports the AMD64 architecture.
  • If some byconity-fdb-storage Pods fail to start, adjust the kernel parameters below (a sketch for persisting them across reboots follows this list):
    ```bash
    sudo sysctl -w fs.inotify.max_user_watches=2099999999
    sudo sysctl -w fs.inotify.max_user_instances=2099999999
    sudo sysctl -w fs.inotify.max_queued_events=2099999999
    ```
    
  • If ClickHouse contains custom 1h or 1d aggregated flow_metrics materialized views, they must be rebuilt after switching to ByConity, i.e., recreate the aggregated data tables on the System - Data Nodes - Data Storage configuration page of DeepFlow Enterprise Edition.
  • ByConity depends on a FoundationDB cluster (FDB for short), which stores ByConity's metadata. Deleting or rebuilding the FDB cluster will cause the loss of FDB data and, in turn, the loss of ByConity data. For this reason, uninstalling ByConity does not remove the FDB components. If you really need to remove them, run the corresponding delete command:
    ```bash
    deepflow-deploy --erase-fdb
    ```
    
  • If using a private registry causes some FDB components to fail to pull images, the following commands resolve it (a sketch for creating the referenced myregistrykey Secret follows this list):
    ```bash
    kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}' -n deepflow
    kubectl delete pod -n deepflow -l foundationdb.org/fdb-cluster-name=deepflow-byconity-fdb
    ```
    
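
As noted in the byconity-fdb-storage item above, sysctl -w changes do not survive a reboot. A minimal sketch for persisting the inotify limits (the file name 99-byconity.conf is an arbitrary choice):

```bash
# Persist the inotify limits across reboots
cat <<EOF | sudo tee /etc/sysctl.d/99-byconity.conf
fs.inotify.max_user_watches=2099999999
fs.inotify.max_user_instances=2099999999
fs.inotify.max_queued_events=2099999999
EOF

# Apply all sysctl configuration files immediately
sudo sysctl --system
```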
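
As noted in the private-registry item above, the serviceaccount patch assumes an image pull Secret named myregistrykey already exists in the deepflow namespace. If it does not, it can be created as follows (the registry address and credentials are placeholders):

```bash
# Create the image pull Secret referenced by the serviceaccount patch
kubectl create secret docker-registry myregistrykey \
  --docker-server=<your-registry-address> \
  --docker-username=<username> \
  --docker-password=<password> \
  -n deepflow
```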