INIT
@@ -0,0 +1,202 @@
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [2013-2021] [Alibaba Group Holding Limited]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.

@@ -0,0 +1,57 @@
# Introduction to PolarDB-X Operator

---

PolarDB-X Operator is a Kubernetes-based management system for PolarDB-X clusters that aims to provide full lifecycle management capabilities on Kubernetes. It can be installed on private or public Kubernetes clusters and used to deploy PolarDB-X clusters.

## Limitations and Notes

### Operating Systems and CPU Architectures

PolarDB-X Operator can be deployed on Kubernetes clusters in any environment, and supports deploying both its own components and PolarDB-X database clusters on heterogeneous Kubernetes.

PolarDB-X Operator and PolarDB-X clusters currently support the following operating systems and architectures:

| OS | CPU Architecture | Recommended Spec |
| :------: | :-------------: | :-------------------: |
| Linux | x86_64 (amd64) | 32C128G, >= 500G disk |
| Linux | aarch64 (arm64) | 32C128G, >= 500G disk |

Note: there is no prebuilt image for arm64 yet; it has to be compiled separately.

### Disk

For disk-performance reasons, PolarDB-X Operator uses a path on each host's local disk to store system scripts and storage node data; the default is `/data`. The operator manages the scripts and data stored there automatically; do not delete or change them, or the system and the PolarDB-X clusters may break.

To use a different path, change the configuration when installing the operator, as described in [[PolarDB-X Installation: Operator Deployment]](./deployment/README.md).

## Installation

Before deploying a PolarDB-X cluster, the PolarDB-X Operator system must be installed on Kubernetes. With helm, the Kubernetes package manager, the system can be deployed quickly; follow [[PolarDB-X Installation: Quick Start]](./deployment/README.md) to install PolarDB-X Operator on a local or existing Kubernetes cluster and deploy a PolarDB-X test cluster.

The Helm package predefines many configuration values; to change them, see [[PolarDB-X Installation: Operator Deployment]](./deployment/README.md) and adjust the options so that the operator makes better use of your Kubernetes resources.

> Note: to allow local testing, the cluster in the quick start uses few resources. For performance testing, follow the operations guide and the PolarDBXCluster API documentation for a more standard deployment.

## API

To make PolarDB-X recognizable and manageable by Kubernetes, we model the PolarDB-X cluster and its operations as several [custom resources](https://kubernetes.io/zh/docs/concepts/extend-kubernetes/api-extension/custom-resources/):

+ PolarDBXCluster, which defines and describes the topology, specification, configuration, and operations of a PolarDB-X cluster
+ XStore, which defines and describes the topology, specification, configuration, and operations of the data nodes (DN) of a PolarDB-X cluster

You can view these resources in the Kubernetes cluster with:

```bash
kubectl get polardbxcluster,xstore
```
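
PolarDBXCluster also appears to register the short name `pxc` (the FAQ pages in this repository use it, e.g. `kubectl get pxc`), so the abbreviated form should work as well:

```bash
kubectl get pxc
```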

Refer to [[PolarDB-X CRD API](./api/README.md)] for all currently supported resources and their details.

## Operations

Like PolarDB-X clusters on the public cloud, PolarDB-X Operator supports most operational actions, including deployment, deletion, upgrade, specification changes, scaling, and dynamic configuration; see the [[Operations Guide](./ops/README.md)] for all supported operations and how to use them.

## FAQ

You may run into problems while operating a PolarDB-X cluster; the [[FAQ]](./faq/README.md) collects common issues and how to handle them.
@@ -0,0 +1,9 @@

# PolarDB-X CRD API

---

## polardbx.aliyun.com/v1

Resource types:

+ [PolarDBXCluster](./polardbxcluster.md)
@@ -0,0 +1,290 @@

# polardbx.aliyun.com/v1 PolarDBXCluster

With PolarDBXCluster you can freely define the topology, specification, and configuration of a cluster, supporting very large deployments and different disaster-recovery levels.

The configurable fields and their meanings are listed below:

```yaml
apiVersion: polardbx.aliyun.com/v1
kind: PolarDBXCluster
metadata:
  name: full
spec:
  # **Optional**
  #
  # Whether to use DN-0 as a shared GMS to save resources, default false.
  #
  # Not recommended for production clusters.
  shareGMS: false

  # **Optional**
  #
  # MySQL protocol version supported by the PolarDB-X cluster, default 5.7.
  # Valid values: 5.7, 8.0
  protocolVersion: 5.7

  # **Optional**
  #
  # Name of the Service the PolarDB-X cluster exposes inside Kubernetes,
  # defaults to .metadata.name
  serviceName: full

  # **Optional**
  #
  # Type of the Service the PolarDB-X cluster exposes inside Kubernetes,
  # default ClusterIP.
  # See the Service types for the valid values.
  #
  # Note: on cloud-hosted Kubernetes clusters, LoadBalancer can be used to bind an LB.
  serviceType: LoadBalancer

  # **Optional**
  #
  # Whether this PolarDB-X cluster is a read-only instance, default false.
  readonly: false

  # **Optional**
  #
  # Name of the primary cluster this read-only instance belongs to, default empty.
  # Ignored when this instance is not read-only.
  primaryCluster: pxc-master

  # **Optional**
  #
  # Read-only instances attached to this primary instance; only effective
  # when this instance is not read-only.
  # When this instance is created, read-only instances with the same
  # specification and parameters are created from the information below.
  # This field cannot be modified and only takes effect at creation time.
  initReadonly:
  - # Number of CNs of the read-only instance
    cnReplicas: 1
    # **Optional**
    #
    # Suffix of the read-only instance name; a random suffix is generated if unset.
    name: readonly
    # **Optional**
    #
    # Parameters of the read-only instance
    extraParams:
      AttendHtap: "true"

  # **Optional**
  #
  # Security configuration of the PolarDB-X cluster
  security:
    # **Optional**
    #
    # TLS configuration, not effective yet
    tls:
      secretName: tls-secret
    # **Optional**
    #
    # Key used to encode internal passwords, referencing a key of the given Secret
    encodeKey:
      name: ek-secret
      key: key

  # **Optional**
  #
  # Initial accounts of PolarDB-X
  privileges:
  - username: admin
    password: "123456"
    type: SUPER

  # PolarDB-X cluster configuration
  config:
    # CN configuration
    cn:
      # Static configuration; changes cause the CN cluster to be rebuilt
      static:
        # Enable coroutines; not supported by OpenJDK, requires dragonwell
        EnableCoroutine: false
        # Enable consistent reads on replicas
        EnableReplicaRead: false
        # Enable JVM remote debugging
        EnableJvmRemoteDebug: false
        # Custom static CN configuration, key-value structure
        ServerProperties:
          processors: 8
        # Whether to enable MPP on the CNs of this (read-only) instance;
        # enabled by default on the CNs of a primary instance.
        # When enabled, the instance takes part in multi-node parallel
        # processing (MPP) and shares the primary instance's read traffic;
        # otherwise it does not.
        AttendHtap: false
      # Dynamic configuration, key-value structure; changes are pushed
      # automatically by the operator on apply
      dynamic:
        CONN_POOL_IDLE_TIMEOUT: 30
    # DN configuration
    dn:
      # DN my.cnf settings, overriding the template
      mycnfOverwrite: |-
        loose_binlog_checksum: crc32
      # DN log purge interval
      logPurgeInterval: 5m
      # Store logs and data separately
      logDataSeparation: false

  # PolarDB-X cluster topology
  topology:
    # Image version (tag) used by the cluster, default empty
    # (decided by the operator)
    version: v1.0

    # Cluster deployment rules
    rules:
      # Predefined node selectors
      selectors:
      - name: zone-a
        nodeSelector:
          nodeSelectorTerms:
          - matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values:
              - cn-hangzhou-a
      - name: zone-b
        nodeSelector:
          nodeSelectorTerms:
          - matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values:
              - cn-hangzhou-b
      - name: zone-c
        nodeSelector:
          nodeSelectorTerms:
          - matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values:
              - cn-hangzhou-c
      components:
        # **Optional**
        #
        # GMS deployment rules, defaults to the same as DN
        gms:
          # Rolling (stacked) deployment: on the nodes matched by the
          # selector, the operator tries to stack the sub-nodes of each
          # storage node for higher resource utilization. For testing only.
          rolling:
            replicas: 3
            selector:
              reference: zone-a
          # Node-set deployment: a node group and node selector can be
          # specified for the sub-nodes of each DN, enabling cross-zone
          # and cross-region high-availability layouts.
          nodeSets:
          - name: cand-zone-a
            role: Candidate
            replicas: 1
            selector:
              reference: zone-a
          - name: cand-zone-b
            role: Candidate
            replicas: 1
            selector:
              reference: zone-b
          - name: log-zone-c
            role: Voter
            replicas: 1
            selector:
              reference: zone-c

        # **Optional**
        #
        # DN deployment rules; default is 3 nodes, deployable on any node
        dn:
          nodeSets:
          - name: cands
            role: Candidate
            replicas: 2
          - name: log
            role: Voter
            replicas: 1

        # **Optional**
        #
        # CN deployment rules, likewise dividing the CNs into groups
        cn:
        - name: zone-a
          # Valid values: an integer, a percentage, or a fraction in (0, 1];
          # if unset, takes the remaining replicas (only one entry may be unset).
          # The sum must not exceed .topology.nodes.cn.replicas
          replicas: 1
          selector:
            reference: zone-a
        - name: zone-b
          replicas: 1 / 3
          selector:
            reference: zone-b
        - name: zone-c
          replicas: 34%
          selector:
            reference: zone-c

        # **Optional**
        #
        # CDC deployment rules, same as CN
        cdc:
        - name: half
          replicas: 50%
          selector:
            reference: zone-a
        - name: half
          # a trailing + means round up
          replicas: 50%+
          selector:
            reference: zone-b

    nodes:
      # **Optional**
      #
      # GMS specification, defaults to the same as DN
      gms:
        template:
          # Storage node engine, default galaxy
          engine: galaxy
          # Storage node image, default decided by the operator
          image: polardbx-engine:latest
          # Storage node Service type, default ClusterIP
          serviceType: ClusterIP
          # Whether storage node Pods use the host network, default true
          hostNetwork: true
          # Storage node disk quota; no limit if unset (soft limit)
          diskQuota: 10Gi
          # Resources used by each storage sub-node, default 4c8g
          resources:
            limits:
              cpu: 4
              memory: 8Gi

      # **Optional**
      #
      # DN specification
      dn:
        # Number of DNs, default 2
        replicas: 2
        template:
          resources:
            limits:
              cpu: 4
              memory: 8Gi
            # IO limits, supporting BPS and IOPS limits
            limits.io:
              iops: 1000
              bps: 10Mi

      # CN specification; fields as in DN
      cn:
        replicas: 3
        template:
          image: polardbx-sql:latest
          hostNetwork: false
          resources:
            limits:
              cpu: 4
              memory: 8Gi

      # CDC specification; fields as in CN. Omit it to run without CDC.
      cdc:
        replicas: 2
        template:
          image: polardbx-cdc:latest
          hostNetwork: false
          resources:
            limits:
              cpu: 4
              memory: 8Gi
```
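
A spec like the one above is managed with ordinary kubectl commands; a minimal sketch (the file name `polardbx-cluster.yaml` is illustrative):

```bash
# Create or update the cluster from the manifest
kubectl apply -f polardbx-cluster.yaml
# Inspect the resource, including defaulted fields and current status
kubectl get polardbxcluster full -o yaml
```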

@@ -0,0 +1,302 @@
# Quick Start

This guide walks through creating a simple Kubernetes cluster, deploying PolarDB-X Operator, and using the operator to deploy a complete PolarDB-X cluster.

> Note: the deployment instructions here are for testing only; do not use them directly in production.

This guide covers:

1. [Create a Kubernetes test cluster](#create-a-kubernetes-test-cluster)
2. [Deploy PolarDB-X Operator](#deploy-polardb-x-operator)
3. [Deploy a PolarDB-X cluster](#deploy-a-polardb-x-cluster)
4. [Connect to the PolarDB-X cluster](#connect-to-the-polardb-x-cluster)
5. [Destroy the PolarDB-X cluster](#destroy-the-polardb-x-cluster)
6. [Uninstall PolarDB-X Operator](#uninstall-polardb-x-operator)

# Create a Kubernetes Test Cluster

This section shows how to create a Kubernetes test cluster with [minikube](https://minikube.sigs.k8s.io/docs/start/). You can also create a Kubernetes cluster with Alibaba Cloud's [Container Service for Kubernetes (ACK)](https://www.aliyun.com/product/kubernetes) and follow this tutorial to deploy PolarDB-X Operator and a PolarDB-X cluster.

## Create a Kubernetes Cluster with minikube

[minikube](https://minikube.sigs.k8s.io/docs/start/) is a community-maintained tool for quickly creating Kubernetes test clusters, suitable for testing and learning Kubernetes. A Kubernetes cluster created with minikube runs in a container or a virtual machine; this section takes creating Kubernetes on CentOS 8.2 as the example.

> Note: some steps may differ slightly when deploying minikube on other operating systems such as macOS or Windows.

Before deploying, make sure minikube and Docker are installed and the following requirements are met:

+ a machine spec of at least 4c8g
+ minikube >= 1.18.0
+ docker >= 1.19.3

minikube must be run as a non-root user; if you access the machine as root, create a new user first.

```bash
$ useradd -ms /bin/bash polardbx
$ usermod -aG docker polardbx
```

If you use a different account, add it to the docker group in the same way so that it can access docker directly.

Switch to the `polardbx` user with su:

```bash
$ su polardbx
```

Run the following command to start minikube:

```bash
minikube start --cpus 4 --memory 7960 --image-mirror-country cn --registry-mirror=https://docker.mirrors.sjtug.sjtu.edu.cn
```

> Note: the Aliyun minikube image source and the docker registry mirror provided by SJTU are used here to speed up image pulls.

If everything works, you will see output like the following:

```bash
😄  minikube v1.23.2 on Centos 8.2.2004 (amd64)
✨  Using the docker driver based on existing profile
❗  Your cgroup does not allow setting memory.
    ▪ More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
❗  Your cgroup does not allow setting memory.
    ▪ More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🤷  docker "minikube" container is missing, will recreate.
🔥  Creating docker container (CPUs=4, Memory=7960MB) ...
    > kubeadm.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s
    > kubelet.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s
    > kubectl.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s
    > kubeadm: 43.71 MiB / 43.71 MiB [---------------] 100.00% 1.01 MiB p/s 44s
    > kubectl: 44.73 MiB / 44.73 MiB [-------------] 100.00% 910.41 KiB p/s 51s
    > kubelet: 146.25 MiB / 146.25 MiB [-------------] 100.00% 2.71 MiB p/s 54s

    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
    ▪ Using image registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5 (global image repository)
🌟  Enabled addons: storage-provisioner, default-storageclass
💡  kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
```

minikube is now up and running. It configures kubectl automatically, so if kubectl is already installed you can use it to access the cluster:

```bash
$ kubectl cluster-info
Kubernetes control plane is running at https://192.168.49.2:8443
CoreDNS is running at https://192.168.49.2:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```

If kubectl is not installed, minikube provides a subcommand for it:

```bash
$ minikube kubectl -- cluster-info
Kubernetes control plane is running at https://192.168.49.2:8443
CoreDNS is running at https://192.168.49.2:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```

> Note: the minikube kubectl subcommand needs "--" before kubectl's arguments; with a bash shell you can set alias kubectl="minikube kubectl -- " as a shortcut. The rest of this guide uses the kubectl command.

Now we can start deploying PolarDB-X Operator!

> After testing, run minikube delete to destroy the cluster.

# Deploy PolarDB-X Operator

Before you start, make sure the following prerequisites are met:

+ A running Kubernetes cluster, with
  + cluster version >= 1.18.0
  + at least 2 allocatable CPUs
  + at least 4 GB of allocatable memory
  + at least 30 GB of disk space
+ kubectl installed and able to access the Kubernetes cluster
+ [Helm 3](https://helm.sh/docs/intro/install/) installed


Install PolarDB-X Operator with the following command.

```bash
$ helm install --namespace polardbx-operator-system --create-namespace polardbx-operator https://github.com/polardb/polardbx-operator/releases/download/v1.2.1/polardbx-operator-1.2.1.tgz
```

You can also install it from the PolarDB-X Helm Chart repository:

```bash
helm repo add polardbx https://polardbx-charts.oss-cn-beijing.aliyuncs.com
helm install --namespace polardbx-operator-system --create-namespace polardbx-operator polardbx/polardbx-operator
```

Expected output:

```bash
NAME: polardbx-operator
LAST DEPLOYED: Sun Oct 17 15:17:29 2021
NAMESPACE: polardbx-operator-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
polardbx-operator is installed. Please check the status of components:

    kubectl get pods --namespace polardbx-operator-system

Now have fun with your first PolarDB-X cluster.

Here's the manifest for quick start:
```

```yaml
apiVersion: polardbx.aliyun.com/v1
kind: PolarDBXCluster
metadata:
  name: quick-start
  annotations:
    polardbx/topology-mode-guide: quick-start
```

Check the PolarDB-X Operator components and wait until they are all Running:

```bash
$ kubectl get pods --namespace polardbx-operator-system
NAME                                           READY   STATUS    RESTARTS   AGE
polardbx-controller-manager-6c858fc5b9-zrhx9   1/1     Running   0          66s
polardbx-hpfs-d44zd                            1/1     Running   0          66s
polardbx-tools-updater-459lc                   1/1     Running   0          66s
```

Congratulations! PolarDB-X Operator is installed, and you can now deploy a PolarDB-X cluster!

# Deploy a PolarDB-X Cluster

Let's quickly deploy a PolarDB-X cluster with 1 GMS node, 1 CN node, 1 DN node, and 1 CDC node. Run the following command to create it:

```bash
echo "apiVersion: polardbx.aliyun.com/v1
kind: PolarDBXCluster
metadata:
  name: quick-start
  annotations:
    polardbx/topology-mode-guide: quick-start" | kubectl apply -f -
```

You will see the following output:

```bash
polardbxcluster.polardbx.aliyun.com/quick-start created
```

Check the creation status with:

```bash
$ kubectl get polardbxcluster -w
NAME          GMS   CN    DN    CDC   PHASE      DISK      AGE
quick-start   0/1   0/1   0/1   0/1   Creating             35s
quick-start   1/1   0/1   1/1   0/1   Creating             93s
quick-start   1/1   0/1   1/1   1/1   Creating             4m43s
quick-start   1/1   1/1   1/1   1/1   Running    2.4 GiB   4m44s
```

When PHASE shows Running, the PolarDB-X cluster is fully deployed. Congratulations! You can now connect to and explore the PolarDB-X distributed database!

# Connect to the PolarDB-X Cluster

PolarDB-X speaks the MySQL wire protocol and supports the vast majority of its syntax, so you can use the mysql command-line tool to connect to PolarDB-X and operate the database.

Before you begin, make sure the mysql command-line tool is installed.

## Forward the PolarDB-X Access Port

When a PolarDB-X cluster is created, PolarDB-X Operator also creates a Service for accessing it, of type ClusterIP by default. View it with:

```bash
$ kubectl get svc quick-start
```

Expected output:

```bash
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
quick-start   ClusterIP   10.110.214.223   <none>        3306/TCP,8081/TCP   5m25s
```

Use kubectl's port-forward command to forward the Service's port 3306 to localhost, keeping the forwarding process alive:

```bash
$ kubectl port-forward svc/quick-start 3306
```
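
If local port 3306 is already in use, port-forward accepts a local:remote pair; 3307 below is just an example:

```bash
$ kubectl port-forward svc/quick-start 3307:3306
```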

## Connect to the PolarDB-X Cluster

The operator creates a default account, polardbx_root, for the PolarDB-X cluster and stores its password in a Secret.

View the polardbx_root password with:

```bash
$ kubectl get secret quick-start -o jsonpath="{.data['polardbx_root']}" | base64 -d - | xargs echo "Password: "
Password: bvp9wjxx
```

Keep port-forward running, open a new terminal, and connect to the cluster:

```bash
$ mysql -h127.0.0.1 -P3306 -upolardbx_root -pbvp9wjxx
```

Expected output:

```bash
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 6
Server version: 5.6.29 Tddl Server (ALIBABA)

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>
```

Congratulations! You have deployed and connected to a PolarDB-X distributed database cluster; now you can explore the capabilities of a distributed database!

# Destroy the PolarDB-X Cluster

After testing, destroy the PolarDB-X cluster with:

```bash
$ kubectl delete polardbxcluster quick-start
```

Check again to confirm that the deletion is complete:

```bash
$ kubectl get polardbxcluster quick-start
```

# Uninstall PolarDB-X Operator

Uninstall PolarDB-X Operator with:

```bash
$ helm uninstall --namespace polardbx-operator-system polardbx-operator
```

Helm uninstall does not remove the corresponding custom resource definitions (CRDs); list and delete them with:

```bash
$ kubectl get crds | grep polardbx.aliyun.com
polardbxclusters.polardbx.aliyun.com   2021-10-17T07:17:27Z
xstores.polardbx.aliyun.com            2021-10-17T07:17:27Z

$ kubectl delete crds polardbxclusters.polardbx.aliyun.com xstores.polardbx.aliyun.com
```
@@ -0,0 +1,29 @@

Changing the Data Directories
========
Use the following command to specify, at install time, the host directories for:

- data: `/polardbx/data` (default /data)
- logs: `/polardbx/log` (default /data-log)
- file streaming: `/polardbx/filestream` (default /filestream)

```bash
helm install --namespace polardbx-operator-system --set node.volumes.data=/polardbx/data polardbx-operator polardbx/polardbx-operator --create-namespace
```

Alternatively, prepare a values.yaml file and pass it with:

```bash
helm install --namespace polardbx-operator-system -f values.yaml polardbx-operator polardbx/polardbx-operator --create-namespace
```

where values.yaml contains:

```yaml
node:
  volumes:
    data: /polardbx/data
    log: /polardbx/log
    filestream: /polardbx/filestream
```
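
To double-check which values an existing release actually uses, helm can print them back (same release name and namespace as above):

```bash
# Values supplied by the user at install/upgrade time
helm get values --namespace polardbx-operator-system polardbx-operator
# All computed values, including chart defaults
helm get values --all --namespace polardbx-operator-system polardbx-operator
```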

> Besides the directories above, the container runtime's data directory (usually /var/lib/docker by default) and the kubelet root directory (usually /var/lib/kubelet by default) should be mounted on suitable volumes when installing docker or Kubernetes, to avoid filling up the disk.
@@ -0,0 +1,7 @@

Changing the Default Image Repository
========
Change the default image repository to `registry:5000`:

```bash
helm install --namespace polardbx-operator-system --set imageRepo=registry:5000 polardbx-operator polardbx/polardbx-operator --create-namespace
```
@@ -0,0 +1,34 @@

Changing the Default Images
========
## System Components
1. Set the system component image tag to v1.0.1:

```bash
helm install --namespace polardbx-operator-system --set imageTag=v1.0.1 polardbx-operator polardbx/polardbx-operator --create-namespace
```

2. Set the pull policy to `Always`:

```bash
helm install --namespace polardbx-operator-system --set imagePullPolicy=Always polardbx-operator polardbx/polardbx-operator --create-namespace
```

## Database Clusters

1. Set the default tag for all components to `v1`:

```bash
helm install --namespace polardbx-operator-system --set clusterDefaults.version=v1 polardbx-operator polardbx/polardbx-operator --create-namespace
```

2. Override a component's default tag, e.g. set the CN image tag to `v2` (other components keep the `clusterDefaults.version` setting):

```bash
helm install --namespace polardbx-operator-system --set clusterDefaults.galaxysql=polardbx-sql:v2 polardbx-operator polardbx/polardbx-operator --create-namespace
```

3. Override a component's default repo, e.g. set the CN image repo to `registry:5000` (other components keep the `imageRepo` setting):

```bash
helm install --namespace polardbx-operator-system --set clusterDefaults.galaxysql=registry:5000/polardbx-sql polardbx-operator polardbx/polardbx-operator --create-namespace
```
@@ -0,0 +1,153 @@

## Prerequisites
Before you start, make sure the following prerequisites are met:

+ A running Kubernetes cluster, with
  + cluster version >= 1.18.0
  + at least 2 allocatable CPUs
  + at least 4 GB of allocatable memory
  + at least 30 GB of disk space
+ kubectl installed and able to access the Kubernetes cluster
+ [Helm 3](https://helm.sh/docs/intro/install/) installed


Install PolarDB-X Operator with the following command.

```bash
$ helm install --namespace polardbx-operator-system --create-namespace polardbx-operator https://github.com/polardb/polardbx-operator/releases/download/v1.4.0/polardbx-operator-1.4.0.tgz
```

You can also install it from the PolarDB-X Helm Chart repository:

```bash
helm repo add polardbx https://polardbx-charts.oss-cn-beijing.aliyuncs.com
helm install --namespace polardbx-operator-system --create-namespace polardbx-operator polardbx/polardbx-operator
```

Expected output:

```bash
NAME: polardbx-operator
LAST DEPLOYED: Sun Oct 17 15:17:29 2021
NAMESPACE: polardbx-operator-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
polardbx-operator is installed. Please check the status of components:

    kubectl get pods --namespace polardbx-operator-system

Now have fun with your first PolarDB-X cluster.

Here's the manifest for quick start:
```

```yaml
apiVersion: polardbx.aliyun.com/v1
kind: PolarDBXCluster
metadata:
  name: quick-start
  annotations:
    polardbx/topology-mode-guide: quick-start
```

## Installation Options
A Helm install can set configuration values that override the installation defaults. Some common options and installation patterns:

- node.volumes.data, which sets the host data directory, see [Installation: Changing the Data Directories](./1-installation-data-dir.md);
- images, imageTag, useLatestImage, and clusterDefaults, which change the default image sets of the system components and database clusters, see [Installation: Changing the Default Images](./1-installation-default-image.md);
- imageRepo, which changes the default image repository, see [Installation: Changing the Default Image Repository](./1-installation-default-image-repo.md);

All installation options can be listed with:

```shell
helm show values --namespace polardbx-operator-system https://github.com/polardb/polardbx-operator/releases/download/v1.4.0/polardbx-operator-1.4.0.tgz
```

## System Check
### Runtime Status

Check the system components with:

```bash
kubectl -n polardbx-operator-system get pods
```

You will usually see the following Pods:

```bash
NAME                                           READY   STATUS    RESTARTS   AGE
polardbx-controller-manager-6c858fc5b9-zrhx9   1/1     Running   0          66s
polardbx-hpfs-d44zd                            1/1     Running   0          66s
polardbx-tools-updater-459lc                   1/1     Running   0          66s
```

where:

- polardbx-controller-manager is the Pod hosting the operator and the webhook, created by a Deployment
- polardbx-hpfs is the Pod of the host remote-file service, created by a DaemonSet, so there is one per node
- polardbx-tools-updater updates shared tool scripts on the hosts, likewise created by a DaemonSet

### Dynamic Configuration
The operator/webhook loads a set of dynamic configuration at runtime, stored in a ConfigMap in Kubernetes.

View its contents with:

```bash
kubectl -n polardbx-operator-system get configmap polardbx-controller-manager-config -o yaml
```

You will usually see content like this:

```yaml
apiVersion: v1
data:
  config.yaml: |-
    images:
      repo: polardbx
      common:
        prober: probe-proxy:v1.2.0
        exporter: polardbx-exporter:v1.2.0
      compute:
        init: polardbx-init:v1.2.0
        engine: registry.cn-zhangjiakou.aliyuncs.com/drds_pre/polardbx-sql:20220330-2
      cdc:
        engine: registry.cn-zhangjiakou.aliyuncs.com/drds_pre/polardbx-cdc:20220408
      store:
        galaxy:
          engine: polardbx-engine@sha256:a1cf4aabf3e0230d6a63dd9afa125e58baa2a925462a59968ac3b918422bf521
          exporter: prom/mysqld-exporter:master
    scheduler:
      enable_master: true
    cluster:
      enable_exporters: true
      enable_aliyun_ack_resource_controller: true
      enable_debug_mode_for_compute_nodes: false
      enable_privileged_container: false
    store:
      enable_privileged_container: false
      host_paths:
        tools: /data/cache/tools/xstore
        volume_data: /data/xstore
      hpfs_endpoint: polardbx-hpfs:6543
  webhook.yaml: |-
    validator:

    default:
      protocol_version: 8
      storage_engine: galaxy
      service_type: ClusterIP
      upgrade_strategy: RollingUpgrade
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: polardbx-operator
    meta.helm.sh/release-namespace: polardbx-operator-system
  creationTimestamp: "2022-04-01T08:09:55Z"
  labels:
    app.kubernetes.io/managed-by: Helm
  name: polardbx-controller-manager-config
  namespace: polardbx-operator-system
  resourceVersion: "2475601453"
  uid: 585844ea-4b87-4407-98f2-520d02d8cffd
```
@@ -0,0 +1,26 @@

Uninstalling PolarDB-X Operator
========
## Uninstall the System Components
Uninstall with:

```bash
helm uninstall --namespace polardbx-operator-system polardbx-operator
```

Delete the namespace with:

```bash
kubectl delete namespace polardbx-operator-system
```

Notes:

- After uninstalling, custom resources such as `PolarDBXCluster` and `XStore` can no longer be maintained automatically
- For resource-protection purposes, database component Pods usually carry finalizers. With the system components uninstalled, finalizers cannot be removed and resources (such as host disks) cannot be reclaimed automatically on deletion; this must be done by hand, so proceed with caution

## Uninstall the Custom Resource Definitions (CRDs)
Helm uninstall does not remove the bundled CRDs; remove them manually for a complete uninstall:

```bash
kubectl get crds | grep -E "polardbx.aliyun.com" | cut -d ' ' -f 1 | xargs kubectl delete crds
```
@@ -0,0 +1,28 @@

Upgrading PolarDB-X Operator
========

Because Helm does not update CRDs, upgrading PolarDB-X Operator takes two steps:
1. Update the CRDs
2. Upgrade the operator


### Update the CRDs

1. Fetch the [CRD files](https://github.com/polardb/polardbx-operator/tree/main/charts/polardbx-operator/crds) matching the target version. You can pull them straight from the source tree, or download the corresponding PolarDB-X Operator [release package](https://github.com/polardb/polardbx-operator/releases) and extract it.
2. Apply the updated CRDs:
```shell
kubectl apply -f polardbx-operator/crds
```


### Upgrade the Operator

```bash
helm upgrade --namespace polardbx-operator-system polardbx-operator polardbx/polardbx-operator
```

A values.yaml can be supplied at the same time:

```bash
helm upgrade --namespace polardbx-operator-system -f values.yaml polardbx-operator polardbx/polardbx-operator
```
@@ -0,0 +1,10 @@

PolarDB-X Operator Installation and Deployment
=========================

1. [Quick Start](./0-quickstart.md)
2. [Installation](./1-installation.md)
   1. [Changing the Data Directories](./1-installation-data-dir.md)
   2. [Changing the Default Images](./1-installation-default-image.md)
   3. [Changing the Default Image Repository](./1-installation-default-image-repo.md)
3. [Uninstallation](./2-uninstallation.md)
4. [Upgrade](./3-upgrade.md)
@@ -0,0 +1,26 @@

## polardbx-operator

Run the following command to find the Pod where polardbx-operator runs:

```bash
kubectl -n polardbx-operator-system get pods -l app.kubernetes.io/component=controller-manager
NAME                                          READY   STATUS    RESTARTS   AGE
polardbx-controller-manager-597685578-kj4rj   1/1     Running   0          10d
```

Use `kubectl logs` to view its logs:

```bash
kubectl -n polardbx-operator-system logs polardbx-controller-manager-597685578-kj4rj
...
2022-05-24T02:44:52.140Z        INFO    controller.xstore       control/context.go:155  Executing command       {"namespace": "default", "xstore": "pxc-demo-q2gq-gms", "engine": "galaxy", "phase": "Running", "stage": "", "trace": "e8ec86c3-f1f8-4baf-b3df-59cf8197ebcb", "action": "ReconcileConsensusRoleLabels", "step": 2, "pod": "default", "container": "engine", "command": ["/tools/xstore/current/venv/bin/python3", "/tools/xstore/current/cli.py", "consensus", "role", "--report-leader"], "timeout": "10s"}
2022-05-24T02:44:52.405Z        INFO    controller.xstore       instance/consensus.go:62        Be aware of pod's role and current leader.      {"namespace": "default", "xstore": "pxc-demo-q2gq-gms", "engine": "galaxy", "phase": "Running", "stage": "", "trace": "e8ec86c3-f1f8-4baf-b3df-59cf8197ebcb", "action": "ReconcileConsensusRoleLabels", "step": 2, "pod": "pxc-demo-q2gq-gms-cand-1", "role": "leader", "leader-pod": "pxc-demo-q2gq-gms-cand-1"}
2022-05-24T02:44:52.405Z        INFO    controller.xstore       instance/consensus.go:218       Leader not changed.     {"namespace": "default", "xstore": "pxc-demo-q2gq-gms", "engine": "galaxy", "phase": "Running", "stage": "", "trace": "e8ec86c3-f1f8-4baf-b3df-59cf8197ebcb", "action": "ReconcileConsensusRoleLabels", "step": 2, "leader-pod": "pxc-demo-q2gq-gms-cand-1"}
2022-05-24T02:44:52.405Z        INFO    controller.xstore       instance/volumes.go:193 Not time to update sizes, skip. {"namespace": "default", "xstore": "pxc-demo-q2gq-gms", "engine": "galaxy", "phase": "Running", "stage": "", "trace": "e8ec86c3-f1f8-4baf-b3df-59cf8197ebcb", "action": "UpdateHostPathVolumeSizesPer1m0s", "step": 6}
2022-05-24T02:44:52.405Z        INFO    controller.xstore       instance/common.go:117  Update observed generation.     {"namespace": "default", "xstore": "pxc-demo-q2gq-gms", "engine": "galaxy", "phase": "Running", "stage": "", "trace": "e8ec86c3-f1f8-4baf-b3df-59cf8197ebcb", "action": "UpdateObservedGeneration", "step": 10, "previous-generation": 2, "current-generation": 2}
2022-05-24T02:44:52.405Z        INFO    controller.xstore       instance/common.go:108  Update observed topology and config.    {"namespace": "default", "xstore": "pxc-demo-q2gq-gms", "engine": "galaxy", "phase": "Running", "stage": "", "trace": "e8ec86c3-f1f8-4baf-b3df-59cf8197ebcb", "action": "UpdateObservedTopologyAndConfig", "step": 11, "current-generation": 2}
2022-05-24T02:44:52.405Z        INFO    controller.xstore       control/common.go:62    Loop while running      {"namespace": "default", "xstore": "pxc-demo-q2gq-gms", "engine": "galaxy", "phase": "Running", "stage": "", "trace": "e8ec86c3-f1f8-4baf-b3df-59cf8197ebcb", "action": "RetryAfter10s", "step": 12}
2022-05-24T02:44:52.405Z        INFO    controller.xstore       instance/status.go:158  Display status updated! {"namespace": "default", "xstore": "pxc-demo-q2gq-gms", "engine": "galaxy", "phase": "Running", "stage": "", "trace": "e8ec86c3-f1f8-4baf-b3df-59cf8197ebcb", "defer_exec": true, "action": "UpdateDisplayStatus", "step": 13}
2022-05-24T02:44:52.405Z        INFO    controller.xstore       instance/status.go:52   Object not changed.     {"namespace": "default", "xstore": "pxc-demo-q2gq-gms", "engine": "galaxy", "phase": "Running", "stage": "", "trace": "e8ec86c3-f1f8-4baf-b3df-59cf8197ebcb", "defer_exec": true, "action": "PersistentXStore", "step": 14}
2022-05-24T02:44:52.405Z        INFO    controller.xstore       instance/status.go:41   Status not changed.     {"namespace": "default", "xstore": "pxc-demo-q2gq-gms", "engine": "galaxy", "phase": "Running", "stage": "", "trace": "e8ec86c3-f1f8-4baf-b3df-59cf8197ebcb", "defer_exec": true, "action": "PersistentStatus", "step": 15}
```
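
To follow the log live or narrow it down to errors, the standard kubectl options apply (the Deployment name matches the Pod listed above):

```bash
# Stream the operator log as it is written
kubectl -n polardbx-operator-system logs -f deployment/polardbx-controller-manager
# Show only ERROR-level entries, useful when a cluster is stuck
kubectl -n polardbx-operator-system logs deployment/polardbx-controller-manager | grep ERROR
```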

@@ -0,0 +1,26 @@

1. Run `perf` to check whether it is installed; if not, install it (CentOS): `sudo yum install perf`
2. Watch hot functions live: `perf top --call-graph dwarf -p {PID}` and check whether mysqld's call stacks are visible; problems like AHI are usually easy to spot this way
3. Draw a flame graph

```shell
# If mysqld runs in a container, copy the mysqld binary to the same
# path on the host ({ContainerId} and {mysqld-path} are placeholders)
docker cp {ContainerId}:{mysqld-path} {mysqld-path}

# Find the mysqld process id
ps -ef | grep mysqld

# Sample for 40s
perf record -F 99 -p {pid} -g --call-graph dwarf -- sleep 40

# Convert the binary perf.data into text form
perf script > out.perf

# Draw the flame graph
# (see the end of this page for the flame graph tool download)
./FlameGraph-master/stackcollapse-perf.pl out.perf > out.folded
./FlameGraph-master/flamegraph.pl out.folded > mysqld.svg
```

Flame graph tool: [FlameGraph-master.zip](./FlameGraph-master.zip)
@@ -0,0 +1,10 @@

1. Run the following command and check whether the image actually being pulled exists
```shell
kubectl describe pod {failing-pod}
```
2. Check whether you have pull permission for the image repository. PolarDB-X's default repository needs no credentials; if you use an internal repository, configure authentication, see [Pull an Image from a Private Registry](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/)
3. Once step 2 is confirmed, delete the failing pod and let it be recreated.
@@ -0,0 +1,5 @@

Three cases:

1. PID 1 itself was killed
1. the killed process caused PID 1 to exit
1. after the kill, the `liveness` probe failed consecutively more times than the threshold allows
@@ -0,0 +1,10 @@

1. Kubernetes does not yet offer a way to copy files out of a stopped/completed Pod, [see here](https://github.com/kubernetes/kubectl/issues/454). You can, however, fetch the previous Pod's logs with:

```shell
kubectl logs <podname> -n <namespace> --previous
```
2. If the logs show that the Pod cannot reach the Running state because liveness probes fail, you can disable probing for the Pod with the command below; once it reaches Running, exec into it to inspect or copy files.

```shell
kubectl annotate pod {pod-name} runmode=debug
```
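
Once done debugging, the annotation can be removed with kubectl's trailing-dash syntax; whether the operator re-enables the probes automatically afterwards depends on its reconcile logic, so treat this as a sketch:

```shell
kubectl annotate pod {pod-name} runmode-
```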

@@ -0,0 +1,51 @@

## CN
Pull the PolarDB-X SQL source code and run docker_build.sh:
[https://github.com/polardb/polardbx-sql/blob/main/docker_build.sh](https://github.com/polardb/polardbx-sql/blob/main/docker_build.sh)

## DN

```dockerfile
FROM centos:7

# Install essential utils
RUN yum update -y && \
    yum install sudo hostname telnet net-tools vim tree less libaio numactl-libs python3 -y && \
    yum clean all && rm -rf /var/cache/yum && rm -rf /var/tmp/yum-*

# Remove localtime to make mount possible.
RUN rm -f /etc/localtime

# Create user "mysql" and add it into sudo group
RUN useradd -ms /bin/bash mysql && \
    echo "mysql:mysql" | chpasswd && \
    echo "mysql ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

# Install polardbx engine's rpm, use URL to reduce the final image size.
ARG POLARDBX_ENGINE_RPM_URL=<url-to-dn-rpm-package>.rpm

RUN yum install -y ${POLARDBX_ENGINE_RPM_URL} && \
    yum clean all && rm -rf /var/cache/yum && rm -rf /var/tmp/yum-* # && \
    # mv /u01/xcluster80_current/* /opt/galaxy_engine/ && rm -rf /u01

# Target to polardbx engine home.
WORKDIR /opt/galaxy_engine

# Setup environment variables.
ENV POLARDBX_ENGINE_HOME=/opt/galaxy_engine
ENV PATH=$POLARDBX_ENGINE_HOME/bin:$PATH

ENTRYPOINT mysqld
```

1. Save the Dockerfile above
2. Build the rpm (documentation to be updated)
3. Run

```bash
docker build --build-arg POLARDBX_ENGINE_RPM_URL=${POLARDBX_ENGINE_RPM_URL} -t polardbx-engine .
```


## CDC
Pull the repository code and run build.sh.
See: [https://github.com/polardb/polardbx-cdc/blob/main/docker/build.sh](https://github.com/polardb/polardbx-cdc/blob/main/docker/build.sh)
@@ -0,0 +1,39 @@

For how to change database parameters, see [Creating a Database Parameter Object](../ops/configuration/1-cn-variable-load-at-runtime-create-db.md)
## Disabling the Private Protocol
Change the following parameters via pxcknobs:

```shell
CONN_POOL_XPROTO_STORAGE_DB_PORT: -1 // private protocol port for the DNs; -1 disables it, 0 fetches the configuration automatically
CONN_POOL_XPROTO_META_DB_PORT: -1 // private protocol switch for the meta db; -1 disables it, 0 fetches the configuration automatically
```

```yaml
apiVersion: polardbx.aliyun.com/v1
kind: PolarDBXClusterKnobs
metadata:
  name: polardbx-xcluster
  namespace: development
spec:
  ## instance name of the PolarDB-X cluster
  clusterName: "polardbx-xcluster"
  knobs:
    CONN_POOL_XPROTO_STORAGE_DB_PORT: -1
    CONN_POOL_XPROTO_META_DB_PORT: -1
```

## Enabling the Private Protocol
Configure the pxcknobs parameters as follows:

```yaml
apiVersion: polardbx.aliyun.com/v1
kind: PolarDBXClusterKnobs
metadata:
  name: polardbx-xcluster
  namespace: development
spec:
  ## instance name of the PolarDB-X cluster
  clusterName: "polardbx-xcluster"
  knobs:
    CONN_POOL_XPROTO_STORAGE_DB_PORT: 0
    CONN_POOL_XPROTO_META_DB_PORT: 0
```
@@ -0,0 +1,24 @@

The transaction policy is a dynamic CN parameter; see [Creating a Database Parameter Object](../ops/configuration/1-cn-variable-load-at-runtime-create-db.md) for how to change it

## Steps

1. Configure knobs.yaml as follows (see the apply sketch after these steps):

```yaml
apiVersion: polardbx.aliyun.com/v1
kind: PolarDBXClusterKnobs
metadata:
  name: kunan-oss
spec:
  clusterName: "tunan-oss"
  knobs:
    TRANSACTION_POLICY: XA
```

Add the TRANSACTION_POLICY parameter with the desired transaction policy; XA|TSO|TSO_READONLY are supported.

2. Log in to a CN and run the following SQL to check that the setting took effect:

```mysql
begin; show variables like "drds_transaction_policy"; rollback;
```
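
A minimal sketch of applying the object from step 1 (the file name knobs.yaml is illustrative):

```bash
kubectl apply -f knobs.yaml
```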

@@ -0,0 +1,55 @@

Add a .spec.topology.rules.components section to the yaml and configure the nodeSets of gms and dn, as shown below:

```yaml
apiVersion: polardbx.aliyun.com/v1
kind: PolarDBXCluster
metadata:
  name: pxc-demo
spec:
  topology:
    rules:
      components:
        gms:
          ## just configure nodeSets
          nodeSets:
          - name: cands
            role: Candidate
            replicas: 1
        dn:
          ## just configure nodeSets
          nodeSets:
          - name: cands
            role: Candidate
            replicas: 1
    nodes:
      gms:
        template:
          resources:
            limits:
              cpu: 2
              memory: 4Gi
      cn:
        replicas: 1
        template:
          resources:
            requests:
              cpu: 1
              memory: 4Gi
            limits:
              cpu: 2
              memory: 4Gi
      dn:
        replicas: 1
        template:
          resources:
            limits:
              cpu: 2
              memory: 4Gi
      cdc:
        replicas: 1
        template:
          resources:
            limits:
              cpu: 2
              memory: 4Gi
```
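
For reference, a sketch of creating this instance and watching it come up (the file name pxc-demo.yaml is illustrative):

```bash
kubectl apply -f pxc-demo.yaml
kubectl get polardbxcluster pxc-demo -w
```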

@@ -0,0 +1,4 @@

PolarDB-X Operator supports both container networking and host networking. When a PolarDB-X instance is created with host networking, its Pods use random ports on the host directly, which may conflict with the ports of other processes on the host and prevent the service from starting.

1. If the process occupying the port can be stopped or moved to another port, do so and let the Pod be pulled up again.
1. If the port cannot be changed, recreate the PolarDB-X instance; new random ports are generated, which most likely avoids the conflict.
@@ -0,0 +1,22 @@

There are several possible reasons for a cluster to get stuck in the Creating phase:

- a component Pod never becomes ready; possible statuses include ImagePullBackOff, Pending, CrashLoopBackOff, etc.
- the metadata in GMS's metadb cannot be prepared
- the version cannot be fetched from a CN
- ...

There are two main lines of investigation:

1. Check the cluster's Pods for any in an abnormal state
1. [Check the polardbx-operator logs](./1-log.md) for ERROR entries about the cluster

```bash
kubectl get pods -l polardbx/name={cluster-name}
```

| Pod status | Possible causes | Diagnosis & fix |
|------------------------------------------------------------------------|----------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <ul><li>READY STATUS</li><li>0/3 ImagePullBackOff</li></ul> | Image pull failure <ul><li>wrong image name</li><li>private repository without credentials</li></ul> | Confirm with `kubectl describe` <ul><li>wrong image name: update the PolarDBXCluster spec</li><li>private repository: [add pull credentials](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/)</li></ul> |
| <ul><li>READY STATUS</li><li>0/3 Pending</li></ul> | Insufficient resources | Confirm with `kubectl describe` <ul><li>add nodes</li><li>free up resources</li></ul> |
| <ul><li>READY STATUS</li><li>2/3 CrashLoopBackOff</li></ul> | <ul><li>a container keeps crashing</li><li>the cn process died</li></ul> | Confirm with `kubectl describe` <ul><li>analyze case by case</li><li>if describe shows no error, [disable the liveness probes](../ops/component/cn/2-liveness.md) to let the pod start first, then enter the pod and check its logs.</li></ul> |
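
The `kubectl describe` step from the table, together with the event list, usually pins down the cause ({pod-name} is a placeholder, as above):

```bash
# The Events section at the bottom usually names the root cause
kubectl describe pod {pod-name}
# Recent events across the namespace, newest last
kubectl get events --sort-by=.lastTimestamp
```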

@@ -0,0 +1,23 @@

Being stuck in the Deleting phase always means an unhandled finalizer remains; first check whether the PolarDBXCluster carries one like this:

```bash
kubectl get pxc {polardbx-name} -o jsonpath='{.metadata.finalizers}'
["polardbx/finalizer"]
```

Usually `polardbx/finalizer` is the only one, and polardbx-operator should handle it. If it remains unhandled for a long time:

- check whether the operator is still alive
- check the operator logs to determine the cause

If other finalizers exist, determine whether a corresponding component will handle them:

- if so, that component needs to investigate the cause
- otherwise, remove the finalizer by hand with `kubectl edit`

## Batch Operations
With many xstores or cns, the following command can operate in bulk (read the command carefully before running it, and select the objects to delete by filtering on labels):

```shell
for i in $(kubectl get xstore -o jsonpath='{.items[*].metadata.name}'); do echo $i; kubectl get xstore $i -o json | jq '.metadata.finalizers = null' | kubectl apply -f -; done
```
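
For a single stuck resource, a `kubectl patch` achieves the same without jq; note that clearing finalizers skips whatever cleanup they guard, so use it with the same caution:

```shell
kubectl patch xstore {xstore-name} --type=merge -p '{"metadata":{"finalizers":null}}'
```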

@@ -0,0 +1,28 @@

Use the following command to find the leader node of a storage node / metadata node:

```bash
kubectl get xstore wuzhe-test2-shmr-dn-3
NAME                    LEADER                          READY   PHASE     DISK      VERSION                                        AGE
wuzhe-test2-shmr-dn-3   wuzhe-test2-shmr-dn-3-cands-0   1/1     Running   1.1 GiB   5.7.14-AliSQL-X-Cluster-1.6.1.1-20220520-log   3m11s
```

The `LEADER` column shows the Pod where the leader lives.

If the column is empty, no leader has been discovered, and you need to work out which case you are in:

1. [Check whether the storage node's Pods are not running](../ops/component/dn/1-dn-node-state-inspect.md)
1. [Check the logs inside the Pods](../ops/component/dn/3-dn-log.md)
1. [Check the operator logs](./1-log.md)

The storage node's leader is discovered with the following in-container command:

```bash
kubectl exec -it wuzhe-test2-shmr-dn-3-cands-0 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Defaulted container "engine" out of: engine, exporter, prober

[root@iZ8vb9igdh4szqgoyfjt03Z /]
#xsl consensus role --report-leader
leader
wuzhe-test2-shmr-dn-3-cands-0
```
@@ -0,0 +1,12 @@

A Pod's restart counter increases whenever any of its containers restarts, so first determine which container it was: use `kubectl describe pod {name}` to see which container restarted recently.

The reason usually has to be dug out of the logs; common causes are:

1. the container's liveness probe failed more times than the threshold (usually 3); check whether the process is alive and inspect the related logs to rule out problems
1. the container's PID 1 exited unexpectedly, e.g. because `killall` was run inside the container

Log references:

1. [CN logs](../ops/component/cn/4-cn-log.md)
1. [GMS/DN logs](../ops/component/dn/3-dn-log.md)
1. [CDC logs](../ops/component/cdc/2-cdc-node-login.md)
@@ -0,0 +1 @@

How to handle some of these cases: [Cluster stuck in the Creating phase](./2-block-in-creating.md)
@@ -0,0 +1,25 @@

#### Log in to the compute node

```bash
kubectl exec -it <pod-name> -- bash
```

#### Dump the memory

```bash
# Find the process id with jps
jps | grep TddlLauncher

# Dump the heap
jmap -dump:live,format=b,file=heap.bin <pid>
```

#### Copy the file to your machine

```bash
# Exit the compute node Pod
exit

# Copy the heap dump file
kubectl cp <pod-name>:<dump-file-path-and-name> <local-file-name>
```
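
When a full heap dump is too heavy, a live-object histogram from the same JDK tooling often gives a quick first impression (same `<pid>` as above):

```bash
# Top 30 classes by live instance count and size
jmap -histo:live <pid> | head -n 30
```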

@@ -0,0 +1 @@

See the [document on fetching log files](../ops/component/dn/3-dn-log.md); the only difference is that the location is the `/data/mysql/data` directory inside the container.
@@ -0,0 +1,42 @@

The current images do not ship the profiling tool, so the tool package has to be uploaded and run by hand (see the attachment). Taking the tddl process in a CN as an example:

```bash
# Upload the profiler tarball to the pod's /tmp directory
$ kubectl cp ~/Downloads/async-profiler.tar.gz pxc-yexi-test-cn-c6498459c-hgwn7:/tmp/ -c server

# Open a shell in the pod
$ kubectl exec -it pxc-yexi-test-cn-c6498459c-hgwn7 -c server -- bash

# Extract the tarball into /home/admin/tools
$ cd /home/admin/tools && tar xzvf /tmp/async-profiler.tar.gz

# Find the Tddl process
$ jps
193 TddlLauncher
467 DrdsWorker
499432 Jps

# Set the kernel parameter; two cases:
# 1. the container is privileged: set it directly
# 2. the container is not privileged: set it on the Pod's host
$ echo 1 >/proc/sys/kernel/perf_event_paranoid

# Check that the kernel parameter is correct
$ cat /proc/sys/kernel/perf_event_paranoid
1

# Start profiling
$ ./profiler.sh -d 80 -f /tmp/profiler-drds.svg 193

# In a new local shell, copy the svg flame graph out of the pod
$ kubectl cp pxc-yexi-test-cn-c6498459c-hgwn7:/tmp/profiler-drds.svg /tmp/profiler-drds.svg -c server

# Open the flame graph
$ open /tmp/profiler-drds.svg
```

![]()

## Attachment

[async-profiler.tar.gz](./async-profiler.tar.gz)
@ -0,0 +1,20 @@
|
|||
常见问题
|
||||
==========
|
||||
1. [如何获取系统组件日志](./1-log.md)
|
||||
2. [集群创建卡在 Creating 状态](./2-block-in-creating.md)
|
||||
3. [集群删除卡在 Deleting 状态](./3-block-in-deleting.md)
|
||||
4. [存储节点未发现 Leader 节点](./4-dn-no-leader.md)
|
||||
5. [Pod 意外重启](./5-pod-restart-inccident.md)
|
||||
6. [Pod 始终不能 Running](./6-pod-not-in-running-state.md)
|
||||
7. [计算节点 dump 内存](./7-cn-memory-dump.md)
|
||||
8. [存储节点拷贝 core 文件](./8-dn-core-file.md)
|
||||
9. [计算节点获取火焰图](./9-cn-flame-graph.md)
|
||||
10. [存储节点获取火焰图](./10-dn-flame-graph.md)
|
||||
11. [Pod 始终处于 ImagePullBackOff](./11-block-in-imagepullbackoff.md)
|
||||
12. [容器内 Kill 进程后 Pod 重启](./12-kill-process-in-pod.md)
|
||||
13. [Pod 不在运行如何获取文件](./13-get-logs-from-a-terminated-pod.md)
|
||||
14. [如何构建镜像](./14-docker-image-build.md)
|
||||
15. [如何关闭/开启私有协议](./15-private-rpc-on-off.md)
|
||||
16. [如何调整事务策略](./16-transaction-strategy.md)
|
||||
17. [如何创建单副本实例](./17-one-replica-cluster.md)
|
||||
18. [宿主机网络端口冲突](./18-host-network-port-conflict.md)
|
|
@ -0,0 +1,91 @@
|
|||
# PolarDB-X 运维指南
|
||||
|
||||
PolarDB-X 集群由 4 个部分组成:元数据服务(GMS)、计算节点(CN)、存储节点(DN)和日志节点(CDC)。每个部分都包含一个或多个计算资源,在 Kubernetes 中以 Pod 的形式呈现。基于 PolarDB-X Operator,我们可以定制集群的每一个部分,比如创建 100 个计算节点,或是将 100 个节点分散在 A 和 B 两个可用区来保证高可用等等。
|
||||
|
||||
## 标签 (Labels)
|
||||
|
||||
在组成 PolarDB-X 集群时,operator 为每个组件赋予了不同的标签,下表展示了一些常用的标签。
|
||||
|
||||
| 标签 | 含义 | 可选值 | 示例 |
|
||||
| :--- | :--- | :--- | :--- |
|
||||
| polardbx/name | 资源所属的 PolarDBXCluster 资源的名字 | | quick-start |
|
||||
| polardbx/role | 资源的角色 | gms,cn,dn,cdc | cn |
|
||||
|
||||
组合这些标签可以选择不同的资源,例如列举 quick-start 集群下的所有 Pod:
|
||||
|
||||
```bash
|
||||
$ kubectl get pods -l polardbx/name=quick-start
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
quick-start-ml92-cdc-default-77979c6699-5dfgg 2/2 Running 0 10m
|
||||
quick-start-ml92-cn-default-6d5956d4f4-jdzr4 3/3 Running 1 (7m9s ago) 10m
|
||||
quick-start-ml92-dn-0-single-0 3/3 Running 0 10m
|
||||
quick-start-ml92-gms-single-0 3/3 Running 0 10m
|
||||
```
|
||||
|
||||
或是列举所有的 CN:
|
||||
|
||||
```bash
|
||||
$ kubectl get pods -l polardbx/name=quick-start,polardbx/role=cn
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
quick-start-ml92-cn-default-6d5956d4f4-jdzr4 3/3 Running 1 (9m1s ago) 12m
|
||||
```
|
||||
|
||||
## 部署 -- 集群拓扑
|
||||
|
||||
为了方便本机测试,[[快速上手](../deployment/0-quickstart.md)] 中展示的集群预先定义了集群的规格和拓扑,将整体资源压缩在 4c8g 以下。
|
||||
|
||||
如果想要部署更适合生产使用的模式,需要自定义集群的拓扑和规格。[[PolarDBXCluster API](../api/polardbxcluster.md)] 中详细解释了 PolarDBXCluster 中可配置字段的含义和可选值,你可以参考它进行配置。当然,配置项比较多且复杂,这里给出几个简单的例子以供参考:
|
||||
|
||||
+ 经典集群 -- 16c64g (2 CN + 2 DN)
|
||||
|
||||
```yaml
|
||||
apiVersion: polardbx.aliyun.com/v1
|
||||
kind: PolarDBXCluster
|
||||
metadata:
|
||||
name: classic
|
||||
spec:
|
||||
topology:
|
||||
nodes:
|
||||
cn:
|
||||
replicas: 2
|
||||
template:
|
||||
resources:
|
||||
limits:
|
||||
cpu: 16
|
||||
memory: 64Gi
|
||||
dn:
|
||||
replicas: 2
|
||||
template:
|
||||
resources:
|
||||
limits:
|
||||
cpu: 16
|
||||
memory: 64Gi
|
||||
```
|
||||
|
||||
通常建议只设置 resources 的 limits 而不设置 requests:此时 Kubernetes 会将 requests 默认设为与 limits 相同,使 Pod 尽可能独享计算资源。你可以参考 [Kubernetes 的文档](https://kubernetes.io/zh/docs/tasks/configure-pod-container/quality-service-pod/) 来了解 Pod 的服务质量(QoS)的概念。Operator 默认配置中没有为每个容器都指定资源,如需要确保 Pod 是 Guaranteed 的服务质量,需要打开 EnforceQoSGuaranteed 的特性门控,可以参考 [[PolarDB-X 安装部署](../deployment/README.md)] 进行配置。
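可以用下面的命令查看某个 Pod 实际的服务质量等级(`qosClass` 为 Kubernetes 标准状态字段):

```bash
kubectl get pod {pod 名} -o jsonpath='{.status.qosClass}'
# 输出为 Guaranteed、Burstable 或 BestEffort
```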
|
||||
|
||||
在 Kubernetes 集群资源允许的前提下,可以配置规格更大、节点更多的 PolarDB-X 集群。
|
||||
|
||||
## 集群生命周期管理
|
||||
|
||||
参考 [[生命周期管理]](./lifecycle/README.md) 对 PolarDB-X 集群全生命周期进行管理,包括创建、升级、扩缩容、删除等。
|
||||
|
||||
## 组件管理
|
||||
|
||||
参考 [[组件管理]](./component/README.md) 对 PolarDB-X CN、DN 和 CDC 组件进行管理。
|
||||
|
||||
## 访问
|
||||
|
||||
参考 [[连接 PolarDB-X 数据库]](./connection/README.md) 选择合适的访问方式。
|
||||
|
||||
## 配置
|
||||
|
||||
参考 [[数据库参数设置]](./configuration/README.md) 来设置和修改配置。
|
||||
|
||||
## 监控
|
||||
|
||||
参考 [[监控]](./monitor/README.md) 为 PolarDB-X 集群开启监控功能。
|
||||
|
||||
## 日志采集
|
||||
|
||||
参考 [[日志采集]](./logcollector/README.md) 为 PolarDB-X 集群开启日志采集功能。
|
|
@ -0,0 +1,88 @@
|
|||
备份存储方式配置
|
||||
==========
|
||||
|
||||
PolarDB-X Operator 从 1.3.0 版本开始支持全量备份恢复功能。在开启集群的备份恢复之前,需要对备份集的存储方式进行配置。
|
||||
|
||||
您可以通过如下方式完成备份存储方式的配置。
|
||||
|
||||
## 配置备份存储
|
||||
|
||||
### 支持的存储方式
|
||||
|
||||
目前支持的存储方式如下所示:
|
||||
|
||||
* SFTP
|
||||
* Aliyun OSS
|
||||
|
||||
更多存储方式会在后续支持。
|
||||
|
||||
### 配置 SFTP 为备份集存储
|
||||
|
||||
1. 执行如下命令修改 ConfigMap:
|
||||
```shell
|
||||
kubectl -n polardbx-operator-system edit configmap polardbx-hpfs-config
|
||||
```
|
||||
在 sinks 数组中添加自己的 sftp 配置,如下所示:
|
||||
```yaml
|
||||
data:
|
||||
config.yaml: |-
|
||||
sinks:
|
||||
- name: default
|
||||
type: sftp
|
||||
host: 127.0.0.1
|
||||
port: 22
|
||||
user: admin
|
||||
password: admin
|
||||
rootPath: /backup
|
||||
```
|
||||
2. 保存之后执行以下命令使配置生效:
|
||||
```shell
|
||||
kubectl -n polardbx-operator-system rollout restart daemonsets polardbx-hpfs
|
||||
```
|
||||
|
||||
配置项解释:
|
||||
- name: 配置项名称,多个 sftp 配置通过 name 区分
|
||||
- type: 配置项类型(具体参照[支持的存储方式](#支持的存储方式)), 取值范围:sftp, oss
|
||||
- host: 备份机器ip
|
||||
- port: 备份机器端口
|
||||
- user: 备份机器账户名
|
||||
- password: 备份机器密码
|
||||
- rootPath: 备份集存放的根目录
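配置完成后,可以先在能访问备份机器的主机上验证 SFTP 连通性(示意,host、port、user 以实际配置为准):

```bash
sftp -P 22 admin@127.0.0.1
# 登录成功后,确认 rootPath(如 /backup)存在且可写
```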
|
||||
|
||||
### 配置阿里云 OSS 为备份集存储
|
||||
|
||||
1. 执行如下命令修改 ConfigMap:
|
||||
```shell
|
||||
kubectl -n polardbx-operator-system edit configmap polardbx-hpfs-config
|
||||
```
|
||||
在 sinks 数组中添加自己的 oss 配置,如下所示:
|
||||
```yaml
|
||||
data:
|
||||
config.yaml: |-
|
||||
sinks:
|
||||
- name: default
|
||||
type: oss
|
||||
endpoint: endpoint
|
||||
accessKey: ak
|
||||
accessSecret: sk
|
||||
bucket: bucket
|
||||
```
|
||||
2. 保存之后执行以下命令使配置生效:
|
||||
```shell
|
||||
kubectl -n polardbx-operator-system rollout restart daemonsets polardbx-hpfs
|
||||
```
|
||||
|
||||
配置项解释:
|
||||
- name: 配置项名称,多个 oss 配置通过 name 区分
|
||||
- type: 配置项类型(具体参照[支持的存储方式](#支持的存储方式)), 取值范围:sftp, oss
|
||||
- endpoint: oss访问域名
|
||||
- accessKey: oss访问id
|
||||
- accessSecret: oss访问密钥
|
||||
- bucket: oss存储空间
|
||||
> 具体介绍可参考:[OSS产品文档](https://help.aliyun.com/document_detail/31827.html)
|
||||
|
||||
|
||||
## 注意事项
|
||||
|
||||
- sinks可以配置多种存储类型,不同类型的配置的name允许重复;每种存储类型也支持多组存储配置,但同一类型下的name不允许重复。
|
||||
- operator 可以在未配置存储的情况下正常运行,但在需要使用备份恢复功能时,须先添加对应的存储配置。
|
|
@ -0,0 +1,77 @@
|
|||
集群备份
|
||||
======
|
||||
|
||||
PolarDB-X Operator 从 1.3.0 版本开始支持全量备份恢复功能。本文介绍如何对 PolarDB-X 进行全量备份。
|
||||
|
||||
## 前置条件
|
||||
1. PolarDB-X Operator 升级到 1.3.0 及以上版本
|
||||
2. 完成备份存储方式配置,参见文档:[备份存储配置](./1-backup-storage-configure.md)
|
||||
|
||||
|
||||
## 发起全量备份
|
||||
|
||||
下面介绍如何通过 PolarDBXBackup 对象为 PolarDB-X 进行全量备份。
|
||||
|
||||
### 创建 PolarDBXBackup 对象
|
||||
|
||||
1. 参照如下示例编写 pxc-backup.yaml 文件:
|
||||
```yaml
|
||||
apiVersion: polardbx.aliyun.com/v1
|
||||
kind: PolarDBXBackup
|
||||
metadata:
|
||||
name: pxcbackup-test
|
||||
spec:
|
||||
cluster:
|
||||
name: polardbx-test
|
||||
retentionTime: 240h
|
||||
storageProvider:
|
||||
storageName: sftp
|
||||
sink: default
|
||||
preferredBackupRole: follower
|
||||
```
|
||||
|
||||
参数说明:
|
||||
* cluster.name: 待备份的目标 PolarDB-X 集群名称
|
||||
* retentionTime: 备份集保留时间,单位小时
|
||||
* storageProvider.storageName: 备份集存储方式,支持 sftp 和 oss
|
||||
* storageProvider.sink: 备份集存储配置的名称,对应[备份存储配置](./1-backup-storage-configure.md)中的 name 字段
|
||||
* preferredBackupRole(该参数仅适用于 1.4.0 及后续版本): 进行备份的节点角色,可选择 `follower` 和 `leader`,默认为 `follower`;**若使用 `leader` 发起备份,可能会对业务造成影响,请谨慎配置**
|
||||
|
||||
2. 使用下面的命令创建 PolarDBXBackup 对象,触发全量备份:
|
||||
```bash
|
||||
kubectl create -f pxc-backup.yaml
|
||||
```
|
||||
|
||||
### 查看全量备份进度
|
||||
|
||||
您可以使用以下指令查看全量备份的进度:
|
||||
```bash
|
||||
kubectl get pxb
|
||||
```
|
||||
|
||||
当进度中的 `PHASE` 变为 `Finished` 后,即表示全量备份完成。
|
||||
```bash
|
||||
NAME CLUSTER START END RESTORE_TIME PHASE AGE
|
||||
pxcbackup-test polardbx-test 2022-10-21T04:56:38Z 2022-10-21T04:58:21Z 2022-10-21T04:57:23Z Finished 4m15s
|
||||
```
|
||||
其中,进度里的`RESTORE_TIME`字段表示该备份集可以恢复到的最新的时间点。
|
||||
|
||||
### 注意事项
|
||||
|
||||
- PolarDBXBackup 对象的 metadata.name 字段表示备份集的名称,多次构建备份集需要修改该字段
|
||||
|
||||
## 备份集查阅
|
||||
|
||||
全量备份完成后,备份集存放在如下路径,您可以在 SFTP 配置的主机或者 OSS bucket 中查看对应的备份集文件。
|
||||
|
||||
```
|
||||
{root_path}/polardbx-backup/{pxc_name}/{pxc_backup_name}-{timestamp}
|
||||
```
|
||||
|
||||
- root_path取决于存储配置
|
||||
- 若采用sftp作为存储,则该值为sink.rootPath
|
||||
- 若采用oss作为存储,则该值为sink.bucket
|
||||
- polardbx-backup为固定字段
|
||||
- pxc_name是待备份的集群的名字
|
||||
- pxc_backup_name是备份集的名字
|
||||
- timestamp是备份开始的时间戳(UTC+0)
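例如,使用本文的 SFTP 示例配置(rootPath 为 /backup)备份集群 polardbx-test 时,备份集路径形如(时间戳仅为示意):

```
/backup/polardbx-backup/polardbx-test/pxcbackup-test-20221021045638
```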
|
|
@ -0,0 +1,96 @@
|
|||
集群恢复
|
||||
======
|
||||
PolarDB-X Operator 从 1.3.0 版本开始支持全量备份恢复功能。本文介绍如何通过已有的备份集恢复出 PolarDB-X 集群。
|
||||
|
||||
## 恢复 PolarDB-X 集群
|
||||
|
||||
PolarDB-X 备份集恢复支持两种方式:
|
||||
|
||||
* 指定备份集对象进行恢复
|
||||
* 指定备份集文件进行恢复
|
||||
|
||||
### 指定备份集对象进行恢复
|
||||
|
||||
这种方式必须确保备份集对应的 `PolarDBXBackup` 对象仍然留存在 K8S 集群中,并且远程存储中仍然保存着备份文件。
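可以先用下面的命令确认备份对象仍然存在:

```bash
kubectl get pxb pxcbackup-test
```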
|
||||
|
||||
```yaml
|
||||
apiVersion: polardbx.aliyun.com/v1
|
||||
kind: PolarDBXCluster
|
||||
metadata:
|
||||
name: pxc-restore
|
||||
spec:
|
||||
topology:
|
||||
nodes:
|
||||
cn:
|
||||
template:
|
||||
image: polardbx/polardbx-sql:latest
|
||||
dn:
|
||||
template:
|
||||
image: polardbx/polardbx-engine:latest
|
||||
restore:
|
||||
backupset: pxcbackup-test
|
||||
syncSpecWithOriginalCluster: false
|
||||
```
|
||||
|
||||
参数说明
|
||||
* topology: 实例规格,可参照[实例创建](../lifecycle/1-create.md)
|
||||
* restore.backupset: 备份集(备份对象)名称
|
||||
* restore.syncSpecWithOriginalCluster(该参数仅适用于 1.4.0 及后续版本): 是否保持实例规格和原实例一致,默认取值为 `false`,即不保持一致;**目前不支持集群异构恢复,这意味着数据节点数会强制与原实例保持一致**
|
||||
|
||||
|
||||
### 指定备份集文件进行恢复
|
||||
|
||||
这种恢复方式仅支持 1.4.0 及以后版本产出的备份集,只须保证远程存储中仍然保存着备份文件即可。
|
||||
|
||||
```yaml
|
||||
apiVersion: polardbx.aliyun.com/v1
|
||||
kind: PolarDBXCluster
|
||||
metadata:
|
||||
name: pxc-restore
|
||||
spec:
|
||||
topology:
|
||||
nodes:
|
||||
cn:
|
||||
template:
|
||||
image: polardbx/polardbx-sql:latest
|
||||
dn:
|
||||
template:
|
||||
image: polardbx/polardbx-engine:latest
|
||||
restore:
|
||||
from:
|
||||
backupSetPath: /polardbx/backup/pxcbackup-test
|
||||
storageProvider:
|
||||
storageName: sftp
|
||||
sink: default
|
||||
syncSpecWithOriginalCluster: false
|
||||
```
|
||||
|
||||
参数说明
|
||||
* topology: 实例规格,可参照[实例创建](../lifecycle/1-create.md)
|
||||
* restore.from.backupSetPath: 备份集的远程存储路径
|
||||
* restore.storageProvider: 备份使用的存储配置,可参照[集群备份](./2-cluster-backup.md)
|
||||
* restore.syncSpecWithOriginalCluster(该参数仅适用于 1.4.0 及后续版本): 是否保持实例规格和原实例一致,默认取值为 `false`,即不保持一致;**目前不支持集群异构恢复,这意味着数据节点数会强制与原实例保持一致**
|
||||
|
||||
参照上述示例编写恢复用的yaml文件,这里需要注意指定创建方式是`restore`,通过以下命令进行恢复:
|
||||
|
||||
```bash
|
||||
kubectl apply -f pxc-restore.yaml
|
||||
```
|
||||
|
||||
可通过以下命令观察恢复进度:
|
||||
|
||||
```bash
|
||||
kubectl get pxc
|
||||
```
|
||||
|
||||
当状态中的 `PHASE` 变为 `Running` 后,整个恢复流程就完成了:
|
||||
|
||||
```bash
|
||||
NAME GMS CN DN CDC PHASE DISK AGE
|
||||
pxc-restore 1/1 1/1 2/2 1/1 Running 20.3 GiB 22m
|
||||
```
|
||||
|
||||
## 注意事项
|
||||
|
||||
- 快速的恢复操作只需在 yaml 文件中指定希望使用的镜像即可,不指定时将使用默认镜像;更多的规格配置可以参考[集群创建](../lifecycle/1-create.md)
|
||||
- 目前的恢复功能只支持同构恢复,暂不支持节点数量的变更
|
|
@ -0,0 +1,68 @@
|
|||
备份调度
|
||||
==========
|
||||
PolarDB-X Operator 从 1.4.0 版本开始支持全量备份调度功能。本文介绍如何为集群配置备份调度。
|
||||
|
||||
## 注意事项
|
||||
|
||||
- 若到达调度时间,对应的PolarDB-X 集群有其他备份正在进行中,则此次备份任务会等进行中的备份任务结束后再开始
|
||||
- 若同时发起多个调度,请合理制定调度规则,避免同一时间触发多个备份
|
||||
|
||||
## 创建备份调度
|
||||
|
||||
PolarDBXBackupSchedule 对象的示例如下所示:
|
||||
|
||||
```yaml
|
||||
apiVersion: polardbx.aliyun.com/v1
|
||||
kind: PolarDBXBackupSchedule
|
||||
metadata:
|
||||
name: pxc-schedule
|
||||
spec:
|
||||
schedule: "*/20 * * * *"
|
||||
maxBackupCount: 5
|
||||
suspend: false
|
||||
backupSpec:
|
||||
cluster:
|
||||
name: polardbx-test
|
||||
retentionTime: 240h
|
||||
storageProvider:
|
||||
storageName: sftp
|
||||
sink: default
|
||||
preferredBackupRole: follower
|
||||
```
|
||||
|
||||
参数说明:
|
||||
* schedule: 调度规则,即定期发起备份的时间点,须使用合规的cron表达式指定
|
||||
* maxBackupCount: 保存的备份集数量上限,当备份集数超过上限,会从最旧的备份集开始清理,默认值为0,表示不做清理
|
||||
* suspend: 调度是否暂停,默认为 `false`,表示不暂停
|
||||
* backupSpec: 备份配置,可参考[集群备份](./2-cluster-backup.md)
|
||||
|
||||
参照上述示例编写 pxc-schedule.yaml 文件,通过以下命令创建备份调度:
|
||||
|
||||
```bash
|
||||
kubectl apply -f pxc-schedule.yaml
|
||||
```
|
||||
|
||||
可通过以下命令观察调度状态:
|
||||
|
||||
```bash
|
||||
kubectl get pbs
|
||||
```
|
||||
|
||||
可以从状态中获取到如下信息:
|
||||
|
||||
```bash
|
||||
NAME SCHEDULE LAST_BACKUP_TIME NEXT_BACKUP_TIME LAST_BACKUP
|
||||
pxc-schedule */20 * * * * 2023-03-16T08:00:00Z 2023-03-16T08:20:00Z polardbx-test-backup-202303160800
|
||||
```
|
||||
|
||||
## 调度规则示例
|
||||
|
||||
PolarDBXBackupSchedule 对象的 `spec.schedule` 字段表示调度规则,遵循标准cron表达式的格式要求,下表是一些调度规则的示例:
|
||||
|
||||
| 调度规则 | 规则含义 |
|
||||
| ----- | ------ |
|
||||
| */20 * * * * | 每20分钟发起备份 |
|
||||
| 0 * * * * | 每小时发起备份 |
|
||||
| 0 0 * * 1 | 每周一的0点发起备份 |
|
||||
| 0 2 * * 1,4 | 周一和周四的2点发起备份 |
|
||||
| 0 2 */2 * * | 每两天的2点发起备份 |
|
|
@ -0,0 +1,10 @@
|
|||
备份恢复
|
||||
===
|
||||
|
||||
> PolarDB-X Operator 从1.3.0版本开始支持备份恢复功能
|
||||
|
||||
1. [备份集存储方式配置](./1-backup-storage-configure.md)
|
||||
2. [集群备份](./2-cluster-backup.md)
|
||||
3. [增量日志备份](./2-binlog-backup.md)
|
||||
4. [集群恢复](./3-cluster-restore.md)
|
||||
5. [指定时间点恢复](./pitr)
|
|
@ -0,0 +1,84 @@
|
|||
增量日志备份
|
||||
======
|
||||
|
||||
PolarDB-X Operator 从 1.4.0 版本开始支持增量日志备份功能。本文介绍如何对 PolarDB-X 进行增量日志备份。
|
||||
> 此处的增量日志指 DN 节点上生成的一致性日志(类似 MySQL 的 binlog),默认存放在 DN 容器内的 /data/mysql/log 目录中
|
||||
|
||||
## 前置条件
|
||||
1. PolarDB-X Operator 升级到 1.4.0 及以上版本
|
||||
2. 完成备份存储方式配置,参见文档:[备份存储配置](./1-backup-storage-configure.md)
|
||||
|
||||
|
||||
## 发起增量日志备份
|
||||
|
||||
下面介绍如何通过 PolarDBXBackupBinlog 对象为 PolarDB-X 进行增量日志备份。
|
||||
|
||||
### 创建 PolarDBXBackupBinlog 对象
|
||||
|
||||
1. 参照如下示例编写 pxc-backup-binlog.yaml 文件:
|
||||
|
||||
```yaml
|
||||
apiVersion: polardbx.aliyun.com/v1 # API 组/版本
|
||||
kind: PolarDBXBackupBinlog # API 名称
|
||||
metadata:
|
||||
name: backupbinlogforpolardb-x #增量日志备份任务名称
|
||||
spec:
|
||||
pxcName: polardb-x # 待备份的目标 PolarDB-X 集群名称
|
||||
pxcUid: 8f634de1-5a4e-4e1c-b2dc-e8763384d83a # 待备份的目标 PolarDB-X 集群的 UID
|
||||
remoteExpireLogHours: 168 # 在远程端(OSS或者SFTP)上的保存小时数
|
||||
localExpireLogHours: 7 # 在数据节点本地的保存小时数
|
||||
maxLocalBinlogCount: 60 # 在数据节点本地保存的增量日志文件保存个数
|
||||
pointInTimeRecover: true # 是否支持指定时间点恢复
|
||||
binlogChecksum: CRC32 # 增量日志校验码
|
||||
storageProvider:
|
||||
storageName: oss # 存储方式,支持 sftp 和 oss
|
||||
sink: osssink # 存储配置项的名称
|
||||
```
|
||||
|
||||
参数说明:
|
||||
* pxcName: 待备份的目标 PolarDB-X 集群名称, 必填字段
|
||||
* pxcUid: 待备份的目标 PolarDB-X 集群的 UID,可选字段,一般不填
|
||||
* remoteExpireLogHours: 在远程端(OSS或者SFTP)上的保存小时数,可选字段,默认值为 168
|
||||
* localExpireLogHours: 在数据节点本地的保存小时数,可选字段,默认值为 7
|
||||
* maxLocalBinlogCount: 在数据节点本地保存的增量日志文件保存个数,可选字段,默认值为 60
|
||||
* pointInTimeRecover: 是否支持指定时间点恢复,可选字段,默认值为 true
|
||||
* binlogChecksum: 增量日志校验码,可选字段,默认值为 CRC32
|
||||
* storageProvider.storageName: 备份集存储方式,支持 sftp 和 oss,必填字段
|
||||
* storageProvider.sink: 备份集存储配置的名称,对应[备份存储配置](./1-backup-storage-configure.md)中的 name 字段,必填字段
|
||||
|
||||
2. 使用下面的命令创建 PolarDBXBackupBinlog 对象,开启增量日志备份:
|
||||
```bash
|
||||
kubectl create -f pxc-backup-binlog.yaml
|
||||
```
|
||||
3. 查看增量日志备份运行阶段是否为 `running`:
|
||||
```bash
|
||||
kubectl get pxcblog
|
||||
```
|
||||
|
||||
## 增量日志备份查阅
|
||||
|
||||
增量日志备份文件存放在如下路径,您可以在 SFTP 配置的主机或者 OSS bucket 中查看对应的文件。
|
||||
|
||||
增量日志的元数据文件
|
||||
```
|
||||
{root_path}/polardbx-binlogbackup/{namespace}/{pxc_name}/{pxc_uid}/{xstore_name}/{xstore_uid}/{pod_name}/{version}/{batch_name}/binlog-meta/mysql_bin.{number}.txt
|
||||
```
|
||||
增量日志文件
|
||||
```
|
||||
{root_path}/polardbx-binlogbackup/{namespace}/{pxc_name}/{pxc_uid}/{xstore_name}/{xstore_uid}/{pod_name}/{version}/{batch_name}/binlog-file/mysql_bin.{number}
|
||||
```
|
||||
|
||||
- root_path取决于存储配置
|
||||
- 若采用sftp作为存储,则该值为sink.rootPath
|
||||
- 若采用oss作为存储,则该值为sink.bucket
|
||||
- polardbx-binlogbackup为固定字段
|
||||
- namespace 是目标 PolarDB-X Cluster 所在的 namespace
|
||||
- pxc_name是目标PolarDB-X Cluster的名称
|
||||
- pxc_uid是目标PolarDB-X Cluster的UID
|
||||
- xstore_name是备份文件所属的xstore名称
|
||||
- xstore_uid是备份文件所属的xstore的uid
|
||||
- pod_name是备份文件所属的pod的名称
|
||||
- version是备份文件所属的pod的版本号
|
||||
- batch_name是批目录名称,1000个文件为一批
|
||||
- binlog-file和binlog-meta为固定字段
|
||||
- number为增量日志文件的序号
|
|
@ -0,0 +1,51 @@
|
|||
指定时间点恢复
|
||||
======
|
||||
|
||||
PolarDB-X Operator 从 1.4.0 版本开始支持指定时间点恢复。本文介绍如何对 PolarDB-X 进行指定时间点恢复。
|
||||
|
||||
## 前置条件
|
||||
|
||||
1. PolarDB-X Operator 升级到 1.4.0 及以上版本
|
||||
2. 已经配置增量日志备份,并配置支持指定时间点恢复(默认为支持)
|
||||
3. 恢复的时间点之前存在一个全量备份集
|
||||
4. 全量备份集中的恢复时间到指定的恢复时间之间有连续的增量日志文件
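可以用下面的命令快速检查前置条件 2 和 3:

```bash
# 确认增量日志备份任务存在且处于 running 阶段(前置条件 2)
kubectl get pxcblog

# 确认存在早于恢复时间点的全量备份集(前置条件 3)
kubectl get pxb
```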
|
||||
|
||||
## 恢复 PolarDB-X 集群
|
||||
|
||||
```yaml
|
||||
apiVersion: polardbx.aliyun.com/v1
|
||||
kind: PolarDBXCluster
|
||||
metadata:
|
||||
name: pxc-pitr-restore # 恢复出的集群名字
|
||||
spec:
|
||||
topology: # 集群规格
|
||||
nodes:
|
||||
cn:
|
||||
template:
|
||||
image: polardbx/galaxysql:latest
|
||||
dn:
|
||||
template:
|
||||
image: polardbx/galaxyengine:latest
|
||||
restore: # 指定集群的创建方式是恢复
|
||||
from:
|
||||
clusterName: polardb-x-2 # 源PolarDB-X 集群名称
|
||||
time: "2023-03-20T02:06:46Z" # 恢复的时间点
|
||||
```
|
||||
|
||||
参照上述示例编写恢复用的yaml文件,这里需要注意指定创建方式是`restore`,通过以下命令进行恢复:
|
||||
|
||||
```bash
|
||||
kubectl apply -f pxc-pitr-restore.yaml
|
||||
```
|
||||
|
||||
> 注意:
|
||||
> * 如果全量备份集的时间点到指定的恢复时间之间,存在数据节点的增删操作,会导致恢复失败;
|
||||
> * 如果全量备份集的时间点到指定的恢复时间之间,数据节点发生过备库重搭等情况,导致增量日志没有连续产生,会导致恢复失败;
|
||||
> * 指定的恢复时间点附近如果有DDL操作,会有元数据不一致的问题
|
||||
|
||||
> 建议:
|
||||
> * 定期做全量备份
|
||||
> * 在发生数据节点的增删后,发起一次全量备份任务
|
||||
|
||||
其余操作步骤,可参考 [集群恢复](./3-cluster-restore.md)
|
||||
|
|
@ -0,0 +1,25 @@
|
|||
组件管理
|
||||
=======
|
||||
|
||||
### 计算节点运维
|
||||
|
||||
1. [检查节点状态](./cn/1-node-state-inspect.md)
|
||||
2. [配置存活性、可用性探测](./cn/2-liveness.md)
|
||||
3. [登录计算节点容器](./cn/3-cn-pod-login.md)
|
||||
4. [获取计算节点日志](./cn/4-cn-log.md)
|
||||
5. [删除/重建计算节点](./cn/5-node-delete.md)
|
||||
|
||||
### 存储节点运维
|
||||
|
||||
1. [检查节点状态](./dn/1-dn-node-state-inspect.md)
|
||||
2. [登录内部节点](./dn/2-dn-node-login.md)
|
||||
3. [获取内部节点日志](./dn/3-dn-log.md)
|
||||
4. [获取存储节点连接信息](./dn/4-dn-connection.md)
|
||||
5. [获取存储节点任务信息](./dn/5-dn-task-info.md)
|
||||
6. [删除/重建内部节点](./dn/6-dn-delete.md)
|
||||
|
||||
### 日志节点运维
|
||||
|
||||
1. [检查节点状态](./cdc/1-cdc-state-inspect.md)
|
||||
2. [登录日志节点容器](./cdc/2-cdc-node-login.md)
|
||||
3. [重建日志节点](./cdc/3-cdc-delete.md)
|
|
@ -0,0 +1,16 @@
|
|||
检查日志节点状态
|
||||
=====
|
||||
执行如下命令获取 cdc 的 pod 列表:
|
||||
|
||||
```shell
|
||||
kubectl get pods -l polardbx/role=cdc
|
||||
|
||||
```
|
||||
期望得到如下结果,通过 READY、STATUS 字段判断 cdc pod 是否正常。
|
||||
|
||||
```shell
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
tunan-oss-drsg-cdc-default-57d97f5bc8-8z4mz 2/2 Running 0 14d
|
||||
tunan-oss-drsg-cdc-default-57d97f5bc8-qhvnq 2/2 Running 0 14d
|
||||
```
|
||||
|
|
@ -0,0 +1,11 @@
|
|||
登录日志节点
|
||||
======
|
||||
执行如下命令登录日志节点容器:
|
||||
|
||||
```shell
|
||||
kubectl exec -it {pod 名} -- bash
|
||||
```
|
||||
|
||||
CDC 的日志在 /home/admin/logs/ 目录下。
|
||||
|
||||
CDC 的容器内会有三个 Java 进程:daemon、dumper、final,日志分别在对应的目录下。
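登录容器后,可以用如下命令确认这三个进程以及各自的日志目录(示意):

```bash
# 查看三个 Java 进程
ps -ef | grep java

# 查看各进程对应的日志目录
ls /home/admin/logs/
```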
|
|
@ -0,0 +1,7 @@
|
|||
重建日志节点
|
||||
======
|
||||
执行如下命令 delete CDC 的 pod 即可触发重建:
|
||||
|
||||
```shell
|
||||
kubectl delete pod {pod 名}
|
||||
```
|
|
@ -0,0 +1,29 @@
|
|||
检查计算节点状态
|
||||
=======
|
||||
执行如下命令查看 PolarDB-X 集群CN的总体状态:
|
||||
|
||||
```shell
|
||||
kubectl get pxc
|
||||
```
|
||||
|
||||
期望得到如下输出,可以查看 CN 的总数以及当前 ready 的数目:
|
||||
|
||||
```shell
|
||||
NAME GMS CN DN CDC PHASE DISK AGE
|
||||
classic 1/1 2/2 3/3 1/1 Running 40.3 GiB 4d2h
|
||||
```
|
||||
|
||||
|
||||
执行如下命令获取 cn 的 pod 列表:
|
||||
|
||||
```shell
|
||||
kubectl get pods -l polardbx/role=cn
|
||||
```
|
||||
|
||||
期望得到如下结果,通过 READY、STATUS 字段判断 cn pod 是否正常。
|
||||
|
||||
```shell
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
classic-4gss-cn-default-5488d667fd-74lz2 3/3 Running 0 4d2h
|
||||
classic-4gss-cn-default-5488d667fd-hn7fj 3/3 Running 1 4d2h
|
||||
```
|
|
@ -0,0 +1,20 @@
|
|||
存活性、可用性配置
|
||||
=======
|
||||
为了保证服务的高可用,如果发现某个组件(CN,DN,GMS,CDC)探活失败,K8s 会自动重启对应的pod以恢复服务。
|
||||
但在部分场景下,我们需要重启 CN 进程(例如修改了某个参数)或排查进程退出的原因,此时不希望 K8s 因为探活失败而重启 CN 的 pod,可以执行如下命令关闭所有 CN 节点的探活:
|
||||
|
||||
```bash
|
||||
kubectl annotate pod -l polardbx/role=cn runmode=debug
|
||||
```
|
||||
|
||||
或者只关闭某个cn pod的探活:
|
||||
|
||||
```bash
|
||||
kubectl annotate pod {pod 名} runmode=debug
|
||||
```
|
||||
|
||||
重新打开 cn 的探活:
|
||||
|
||||
```bash
|
||||
kubectl annotate --overwrite pod {pod 名} runmode-
|
||||
```
|
|
@ -0,0 +1,14 @@
|
|||
登录计算节点
|
||||
=======
|
||||
## 登录 Pod
|
||||
如果 CN 处于 ready 状态,执行如下命令即可登录 CN Pod:
|
||||
|
||||
```shell
|
||||
kubectl exec -it {pod 名} -- bash
|
||||
```
|
||||
|
||||
如果CN pod 因为探活失败处于 Crash 状态,可以通过如下命令关闭探活,让pod 处于 ready 状态后再执行上述命令登录 pod。
|
||||
|
||||
```shell
|
||||
kubectl annotate pod {pod 名} runmode=debug
|
||||
```
|
|
@ -0,0 +1,10 @@
|
|||
查看计算节点日志
|
||||
========
|
||||
1. 参考 《[3. 登录计算节点容器](./3-cn-pod-login.md) 》进入CN 的容器
|
||||
2. 进入 /home/admin/drds-server/logs 目录下查看需要的日志
|
||||
3. 如果需要拷贝日志文件到本地,可以通过如下命令:
|
||||
|
||||
```shell
|
||||
kubectl cp {pod 名}:{pod 内的日志文件} {本地目录}
|
||||
```
|
||||
|
|
@ -0,0 +1,13 @@
|
|||
重建计算节点
|
||||
========
|
||||
重建所有的 cn 节点,执行如下命令:
|
||||
|
||||
```shell
|
||||
kubectl delete pod -l polardbx/name={实例名},polardbx/role=cn
|
||||
```
|
||||
|
||||
重建单个 cn 节点,执行如下命令:
|
||||
|
||||
```shell
|
||||
kubectl delete pod {pod 名}
|
||||
```
|
|
@ -0,0 +1,68 @@
|
|||
## 查询 XStore 列表
|
||||
执行如下命令查询所有DN 的列表:
|
||||
|
||||
```shell
|
||||
kubectl get xstore -l polardbx/name={实例名}
|
||||
```
|
||||
|
||||
得到如下输出:
|
||||
|
||||
```shell
|
||||
NAME LEADER READY PHASE DISK VERSION AGE
|
||||
tunan-oss-drsg-dn-0 tunan-oss-drsg-dn-0-cand-1 3/3 Running 11.7 GiB 8.0.18 20d
|
||||
tunan-oss-drsg-dn-1 tunan-oss-drsg-dn-1-cand-1 3/3 Running 11.0 GiB 8.0.18 20d
|
||||
```
|
||||
|
||||
PHASE 显示的是 每个 DN 的状态,LEADER 显示的是当前 DN 的 Leader pod。
|
||||
|
||||
## 查看 DN Pod
|
||||
如果想查询PolarDB-X DN 的所有 pod,执行如下命令:
|
||||
|
||||
```shell
|
||||
kubectl get pod -l polardbx/name={实例名},polardbx/role=dn
|
||||
```
|
||||
|
||||
得到如下结果:
|
||||
|
||||
```shell
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
tunan-oss-drsg-dn-0-cand-0 3/3 Running 0 20d
|
||||
tunan-oss-drsg-dn-0-cand-1 3/3 Running 0 20d
|
||||
tunan-oss-drsg-dn-0-log-0 3/3 Running 0 20d
|
||||
tunan-oss-drsg-dn-1-cand-0 3/3 Running 0 20d
|
||||
tunan-oss-drsg-dn-1-cand-1 3/3 Running 0 20d
|
||||
tunan-oss-drsg-dn-1-log-0 3/3 Running 0 20d
|
||||
```
|
||||
|
||||
如果想查看每个dn pod的角色,执行如下命令:
|
||||
|
||||
```shell
|
||||
kubectl get pod -l polardbx/name={实例名},polardbx/role=dn --show-labels
|
||||
```
|
||||
|
||||
得到如下输出,其中 xstore/role=follower 表示的就是 pod 的角色。
|
||||
> 注:如果 xstore/role 标签没有值,说明 DN 正在进行选主,或者选主出现了问题
|
||||
|
||||
```shell
|
||||
NAME READY STATUS RESTARTS AGE LABELS
|
||||
tunan-oss-drsg-dn-0-cand-0 3/3 Running 0 20d polardbx/dn-index=0,polardbx/name=tunan-oss,polardbx/rand=drsg,polardbx/role=dn,xstore/generation=2,xstore/name=tunan-oss-drsg-dn-0,xstore/node-role=candidate,xstore/node-set=cand,xstore/pod=tunan-oss-drsg-dn-0-cand-0,xstore/port-lock=16148,xstore/role=follower
|
||||
```
|
||||
|
||||
## 查看特定角色的 DN pod
|
||||
查看所有的 leader pod:
|
||||
|
||||
```shell
|
||||
kubectl get pod -l polardbx/name={实例名},polardbx/role=dn,xstore/role=leader
|
||||
```
|
||||
|
||||
查看所有的 follower pod:
|
||||
|
||||
```shell
|
||||
kubectl get pod -l polardbx/name={实例名},polardbx/role=dn,xstore/role=follower
|
||||
```
|
||||
|
||||
查看所有的 logger pod:
|
||||
|
||||
```shell
|
||||
kubectl get pod -l polardbx/name={实例名},polardbx/role=dn,xstore/role=logger
|
||||
```
|
|
@ -0,0 +1,15 @@
|
|||
## 登录 Pod
|
||||
找到需要登录的 DN 的 pod,如果pod 处于 3/3 ready的状态,执行如下命令即可:
|
||||
|
||||
```shell
|
||||
kubectl exec -it {pod 名} -- bash
|
||||
```
|
||||
|
||||
如果 DN 的 pod 因为探活失败被不停地重启,执行如下命令关闭探活后再登录:
|
||||
|
||||
```shell
|
||||
kubectl annotate pod -l polardbx/role=dn runmode=debug
|
||||
```
|
||||
|
||||
## 进入 MySQL 命令行
|
||||
执行 `myc` 命令即可
|
|
@ -0,0 +1,41 @@
|
|||
## 容器可以登录
|
||||
如果 DN 的 pod 能够正常(或者关闭探活后)登录,则登录后进入 /data/mysql/log/目录,重点查看 alert.log 即可。
|
||||
|
||||
## 容器无法登录
|
||||
通过如下命令查看 dn engine 的启动日志:
|
||||
|
||||
```shell
|
||||
kubectl logs {pod 名} engine
|
||||
```
|
||||
|
||||
如果发现日志中有报错,显示 mysql 初始化失败,则需要通过如下的方式前往 dn pod 在主机上的目录查看 alert.log
|
||||
|
||||
1. 加上 -o wide 参数,找到dn pod 所在的机器:
|
||||
|
||||
```shell
|
||||
kubectl get pod {pod 名} -o wide
|
||||
```
|
||||
|
||||
得到如下输出:
|
||||
|
||||
```shell
|
||||
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
|
||||
tunan-oss-drsg-dn-0-cand-1 3/3 Running 0 20d 172.16.0.129 cn-zhangjiakou.172.16.0.129 <none> <none>
|
||||
```
|
||||
|
||||
其中 NODE 即该 pod 被调度到的机器。
|
||||
|
||||
2. 执行如下命令获取 dn pod 在宿主机上的实际目录:
|
||||
|
||||
```shell
|
||||
kubectl get pod {pod 名} -o json | grep "/data/xstore/default"
|
||||
```
|
||||
|
||||
期望得到如下输出:
|
||||
|
||||
```shell
|
||||
"path": "/data/xstore/default/tunan-oss-drsg-dn-0-cand-1"
|
||||
```
|
||||
|
||||
3. 前往该机器的上述目录即可查看日志。
|
||||
|
|
@ -0,0 +1,29 @@
|
|||
## 获取用户名密码
|
||||
用户名默认是:admin
|
||||
密码通过如下命令获取:
|
||||
|
||||
```shell
|
||||
kubectl get secret {dn 名} -o jsonpath={.data.admin} | base64 -d - | xargs echo "Password"
|
||||
```
|
||||
|
||||
## 获取连接串
|
||||
执行如下命令获取 clusterIp:
|
||||
|
||||
```shell
|
||||
kubectl get svc {dn 名}
|
||||
```
|
||||
|
||||
期望得到如下输出:
|
||||
|
||||
```shell
|
||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
tunan-oss-drsg-dn-0 ClusterIP 192.168.192.73 <none> 3306/TCP,31306/TCP 21d
|
||||
```
|
||||
|
||||
如果在 K8s 集群内部直接通过 cluster-ip + 3306 端口即可访问。
|
||||
|
||||
如果在 K8s 外部,执行如下命令将端口转发到本地访问:
|
||||
|
||||
```shell
|
||||
kubectl port-forward svc/{dn 名} 3306:3306
|
||||
```
|
|
@ -0,0 +1,39 @@
|
|||
注:该任务信息只有在升级的时候有用。
|
||||
|
||||
使用下面的命令查看任务信息:
|
||||
|
||||
```bash
|
||||
kubectl get cm {xstore}-task -o yaml
|
||||
```
|
||||
|
||||
结构为:
|
||||
|
||||
```go
|
||||
type ExecutionContext struct {
|
||||
// Topologies in uses.
|
||||
Topologies map[int64]*xstore.Topology `json:"topologies,omitempty"`
|
||||
|
||||
// Generation.
|
||||
Generation int64 `json:"generation,omitempty"`
|
||||
|
||||
// Current running nodes.
|
||||
Running map[string]model.PaxosNodeStatus `json:"running,omitempty"`
|
||||
|
||||
// Tracking nodes. This is the tracking set of the paxos node configuration.
|
||||
Tracking map[string]model.PaxosNodeStatus `json:"tracking,omitempty"`
|
||||
|
||||
// Expected nodes.
|
||||
Expected map[string]model.PaxosNode `json:"expected,omitempty"`
|
||||
|
||||
// Current usable volumes.
|
||||
Volumes map[string]model.PaxosVolume `json:"volumes,omitempty"`
|
||||
|
||||
// Plan.
|
||||
Plan *plan.Plan `json:"plan,omitempty"`
|
||||
|
||||
// StepIndex of the plan.
|
||||
StepIndex int `json:"step_index,omitempty"`
|
||||
|
||||
PodFactory factory.ExtraPodFactory `json:"-"`
|
||||
}
|
||||
```
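该结构体对应 ConfigMap data 中的 JSON 内容,可以结合 jq 做初步查看(示意,具体的 data 键名以实际 ConfigMap 为准):

```bash
kubectl get cm {xstore}-task -o json | jq '.data'
```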
|
|
@ -0,0 +1,29 @@
|
|||
## Delete pod
|
||||
执行如下命令直接删除 对应的 dn pod,k8s 会自动重建该 pod:
|
||||
|
||||
```shell
|
||||
kubectl delete pod {dn pod名}
|
||||
```
|
||||
|
||||
|
||||
## Graceful Shutdown
|
||||
部分场景下我们需要对 DN 进行graceful的 shutdown然后重启,可先执行如下命令关闭dn的探活:
|
||||
|
||||
```shell
|
||||
kubectl annotate pod {dn pod 名} runmode=debug
|
||||
```
|
||||
|
||||
然后直接登录对应的 dn pod,输入myc 命令进入 MySQL 命令行,执行如下命令:
|
||||
|
||||
```shell
|
||||
mysql> shutdown;
|
||||
```
|
||||
|
||||
即可重启 DN 的进程,此时 dn 的pod 不会重建。
|
||||
|
||||
完成后记得执行如下命令重新打开探活:
|
||||
|
||||
```shell
|
||||
kubectl annotate --overwrite pod {dn pod 名} runmode-
|
||||
```
|
||||
|
|
@ -0,0 +1,22 @@
|
|||
直接编辑 PolarDBXCluster 的 yaml。这部分参数的修改需要重启才能生效,因此会自动触发 cn pod 重建。通过如下命令修改:
|
||||
|
||||
```shell
|
||||
kubectl edit pxc {pxc 名称}
|
||||
```
|
||||
|
||||
修改:.spec.config.cn.static,添加需要修改的参数,如下所示:
|
||||
|
||||
```yaml
|
||||
# 静态配置,修改会导致 CN 集群重建
|
||||
static:
|
||||
# 启用协程, OpenJDK 暂不支持,需使用 dragonwell
|
||||
EnableCoroutine: false
|
||||
# 启用备库一致读
|
||||
EnableReplicaRead: false
|
||||
# 启用 JVM 的远程调试
|
||||
EnableJvmRemoteDebug: false
|
||||
# 自定义 CN 静态配置,key-value 结构
|
||||
# value 的值类型为 int 或 string,因此 bool 类型需要手动写为 string,例如 "true"、"false"
|
||||
ServerProperties:
|
||||
processors: 8
|
||||
```
|
|
@ -0,0 +1,43 @@
|
|||
CN 的动态参数支持直接在 PolarDBXCluster 对象的 yaml 中修改,详见 .spec.config.cn.dynamic。不过这种配置方式也存在一些问题:
|
||||
|
||||
- 集群配置项过多,集群定义过长掩盖其他细节
|
||||
- PolarDBXCluster 不仅需要负责集群(容器)的维护,还需要负责配置项的维护,逻辑复杂且容易出错
|
||||
- 单向同步导致其他途径(比如 set global)设置的参数失效
|
||||
|
||||
因此开源版本中,支持了通过 Knobs 对象修改 CN 的动态参数。
|
||||
|
||||
Knobs 对象的 yaml 定义如下:
|
||||
|
||||
```yaml
|
||||
apiVersion: polardbx.aliyun.com/v1
|
||||
kind: PolarDBXClusterKnobs
|
||||
metadata:
|
||||
name: polardbx-xcluster
|
||||
namespace: development
|
||||
spec:
|
||||
## PolarDB-X 的实例名
|
||||
clusterName: "polardbx-xcluster"
|
||||
# 创建时不需要指定
|
||||
knobs:
|
||||
## 参数列表
|
||||
CONN_POOL_MAX_POOL_SIZE: 100
|
||||
RECORD_SQL: "true"
|
||||
|
||||
```
|
||||
|
||||
>注:CN 的动态参数列表详见:[https://help.aliyun.com/document_detail/316576.html](https://help.aliyun.com/document_detail/316576.html)
|
||||
>
|
||||
>注:布尔参数值需要用字符串传入。
|
||||
|
||||
编辑好上述的 yaml 文件后,执行如下命令即可:
|
||||
|
||||
```shell
|
||||
kubectl apply -f {knobs yaml 文件}
|
||||
```
|
||||
|
||||
执行如下命令查看 knobs 的列表:
|
||||
|
||||
```shell
|
||||
kubectl get pxcknobs
|
||||
```
|
||||
|
|
@ -0,0 +1,5 @@
|
|||
执行如下命令删除 pxcknobs 对象即可,pxcknobs 之前配置的参数不会被重置。
|
||||
|
||||
```shell
|
||||
kubectl delete pxcknobs {knobs 名}
|
||||
```
|
|
@ -0,0 +1,7 @@
|
|||
如果 PolarDB-X Cluster 存在 Knobs 对象的话,直接 edit 该对象,添加/修改/删除对应的参数即可,执行如下命令:
|
||||
|
||||
```shell
|
||||
kubectl edit pxcknobs {pxcknobs 名称}
|
||||
```
|
||||
|
||||
如果 knobs 不存在,直接创建一个新的 knobs 对象,参考:[创建数据库参数操作对象](./1-cn-variable-load-at-runtime-create-db.md)
|
|
@ -0,0 +1,26 @@
|
|||
## 修改 PXC YAML
|
||||
直接修改 pxc yaml 的 .spec.config.dn 即可,添加相关的 mysql 参数,如下所示:
|
||||
|
||||
```yaml
|
||||
# DN 相关配置
|
||||
dn:
|
||||
# DN my.cnf 配置,覆盖模板部分
|
||||
mycnfOverwrite: |-
|
||||
loose_binlog_checksum: crc32
|
||||
logPurgeInterval: 5m
|
||||
logDataSeparation: false
|
||||
```
|
||||
|
||||
注意:如果部分my.cnf 参数需要重启后才能生效,需要手动重启 DN 的 mysql 进程。
|
||||
|
||||
非 my.cnf 参数目前支持设置:
|
||||
- binlog的清理时间,修改 .spec.config.dn.logPurgeInterval 即可。
|
||||
- 日志与数据是否分离存储,修改 .spec.config.dn.logDataSeparation 即可
|
||||
|
||||
## Set Global 指令
|
||||
除了修改yaml 外,也可以通过 CN 的 set global 指令修改 DN 参数,登录CN,执行如下SQL:
|
||||
|
||||
```shell
|
||||
set ENABLE_SET_GLOBAL = true; -- 开启 set global 功能
|
||||
set global {dn 参数};            -- 设置需要修改的 DN 参数
|
||||
```
|
|
@ -0,0 +1,98 @@
|
|||
## 参数模板
|
||||
PolarDB-X Operator从1.3.0版本开始支持参数模板功能
|
||||
|
||||
在实例初始化时,可以指定参数模板文件,对CN和DN指定一系列需要的模板参数。
|
||||
|
||||
参数模板需要通过yaml文件的形式进行配置。
|
||||
|
||||
```shell
|
||||
kubectl apply -f {参数模板文件名称}.yaml
|
||||
```
|
||||
|
||||
### 参数模板说明
|
||||
|
||||
注:在参数列表中,每个参数需要指定7个不同的属性,包括:
|
||||
- name(名称)
|
||||
- 参数名称
|
||||
- defaultValue(默认值)
|
||||
- 参数的默认值,格式为字符串
|
||||
- mode(修改模式)
|
||||
- 参数的模式,包含 read-only 和 read-write
|
||||
- restart(是否重启)
|
||||
- 参数修改后是否需要重启实例
|
||||
- unit(单位)
|
||||
- 参数的单位,包含 INT, DOUBLE, STRING, TZ(Time Zone), HOUR_RANGE
|
||||
- divisibilityFactor(整除因子)
|
||||
- 单位为INT的参数需要设置整除因子,其他单位默认为0
|
||||
- optional(取值范围)
|
||||
- 单位为INT, DOUBLE, HOUR_RANGE的参数,取值范围是一段范围,如:"[1000-60000]"
|
||||
- 单位为STRING或TZ的参数,取值范围是一些可选项,如:"[ON|OFF]"
|
||||
|
||||
参数模板的样例如下:
|
||||
|
||||
```yaml
|
||||
## 参数模板示例
|
||||
apiVersion: polardbx.aliyun.com/v1
|
||||
kind: PolarDBXParameterTemplate
|
||||
metadata:
|
||||
name: parameter-template
|
||||
spec:
|
||||
nodeType:
|
||||
cn:
|
||||
# 参数列表
|
||||
paramList:
|
||||
- name: CONN_POOL_BLOCK_TIMEOUT
|
||||
defaultValue: "5000"
|
||||
mode: read-write
|
||||
restart: false
|
||||
unit: INT
|
||||
divisibilityFactor: 1
|
||||
optional: "[1000-60000]"
|
||||
- ...
|
||||
dn:
|
||||
name: dnTemplate
|
||||
paramList:
|
||||
- name: innodb_use_native_aio
|
||||
defaultValue: "OFF"
|
||||
mode: read-only
|
||||
restart: false
|
||||
unit: STRING
|
||||
divisibilityFactor: 0
|
||||
optional: "[ON|OFF]"
|
||||
- ...
|
||||
gms: ...
|
||||
```
|
||||
|
||||
### 查看参数模板
|
||||
|
||||
实例默认会在 default namespace 中应用[8.0 版本的参数模板](./3-parameter-template8.0.yaml),如果想在其他 namespace 创建实例,需要在相应的 namespace 中创建参数模板对象。
|
||||
|
||||
可以通过如下命令查看已配置的所有参数模板。
|
||||
|
||||
```shell
|
||||
kubectl get PolarDBXParameterTemplate
|
||||
# 或者可用简称
|
||||
kubectl get pxpt
|
||||
```
|
||||
|
||||
### PolarDBXCluster配置
|
||||
|
||||
配置好参数模板后,就可以在启动 PolarDBXCluster 的 yaml 文件中指定需要的参数模板:
|
||||
|
||||
```yaml
|
||||
# 添加参数模板
|
||||
apiVersion: polardbx.aliyun.com/v1
|
||||
kind: PolarDBXCluster
|
||||
metadata:
|
||||
name: pxc
|
||||
spec:
|
||||
...
|
||||
...
|
||||
# 需要配置的参数模板
|
||||
parameterTemplate:
|
||||
name: product
|
||||
```
|
||||
|
||||
注:
|
||||
- 实例在应用参数模板后,会对CN或DN中默认的参数根据参数模板中的默认值进行调整。此外,configmap中my.cnf.overwrite字段的参数具有更高优先级,不会被参数模板修改。
|
||||
- 目前不支持对配置了参数模板的实例修改参数模板,也不支持对运行的实例添加参数模板。若想修改运行中实例的参数,需使用动态参数功能。
|
|
@ -0,0 +1,66 @@
|
|||
## 动态参数
|
||||
PolarDB-X Operator从1.3.0版本开始支持动态参数功能
|
||||
|
||||
在实例运行时,可以通过指定动态参数文件来修改 CN 和 DN 的参数。
|
||||
|
||||
动态参数需要通过yaml文件的形式进行配置。
|
||||
|
||||
```shell
|
||||
kubectl apply -f {动态参数文件名称}.yaml
|
||||
```
|
||||
|
||||
### 动态参数说明
|
||||
|
||||
动态参数在应用时需要指定基础的参数模板和实例的名称,当名称不存在时,会验证失败。
|
||||
此外,动态参数需要通过参数模板中属性的校验,否则也会验证失败。
|
||||
|
||||
注:由于部分参数在修改后需要重启实例,所以需要指定重启方式,包括直接重启(restart)和滚动重启(rollingRestart)两种,目前DN只支持滚动重启。
|
||||
|
||||
在参数列表中,每个参数需要指定2个属性,包括:
|
||||
- name(名称)
|
||||
- 参数名称
|
||||
- value(取值)
|
||||
- 参数的取值,格式为字符串
|
||||
|
||||
动态参数的样例如下:
|
||||
|
||||
```yaml
|
||||
# 添加动态参数
|
||||
apiVersion: polardbx.aliyun.com/v1
|
||||
kind: PolarDBXParameter
|
||||
metadata:
|
||||
name: test-param
|
||||
labels:
|
||||
parameter: dynamic
|
||||
spec:
|
||||
# 实例名称
|
||||
clusterName: pxc
|
||||
# 参数模板名称
|
||||
templateName: product
|
||||
nodeType:
|
||||
cn:
|
||||
name: cn-parameter
|
||||
# 重启方式
|
||||
restartType: rollingRestart
|
||||
# 参数列表
|
||||
paramList:
|
||||
- name: CONN_POOL_MAX_POOL_SIZE
|
||||
value: "1000"
|
||||
dn:
|
||||
name: dn-parameter
|
||||
restartType: rollingRestart
|
||||
paramList:
|
||||
- name: autocommit
|
||||
value: "OFF"
|
||||
- ...
|
||||
```
|
||||
|
||||
### 查看动态参数
|
||||
|
||||
可以通过如下命令查看已配置的所有动态参数。
|
||||
|
||||
```shell
|
||||
kubectl get PolarDBXParameter
|
||||
# 或者可用简称
|
||||
kubectl get pxp
|
||||
```
|
|
@ -0,0 +1,23 @@
|
|||
参数设置
|
||||
===========
|
||||
|
||||
### CN 静态参数
|
||||
|
||||
[CN 静态参数](./1-cn-variable-at-startup.md)
|
||||
|
||||
### CN 动态参数
|
||||
1. [创建数据库参数操作对象](./1-cn-variable-load-at-runtime-create-db.md)
|
||||
|
||||
2. [修改数据库参数](./1-cn-variable-load-at-runtime-update-db.md)
|
||||
|
||||
3. [删除数据库参数操作对象](./1-cn-variable-load-at-runtime-delete-db.md)
|
||||
|
||||
### DN 参数
|
||||
|
||||
[DN 参数](./2-dn-variable.md)
|
||||
|
||||
### 参数模板
|
||||
[参数模板](./3-parameter-template.md)
|
||||
|
||||
### 动态参数
|
||||
[动态参数](./4-dynamic-parameter.md)
|
|
@ -0,0 +1,13 @@
|
|||
PolarDB-X 默认的 root 账号都是: polardbx_root,您在登录后可以通过[权限管理语句](https://help.aliyun.com/document_detail/313296.html) 修改密码或者创建新的账号供业务访问。
|
||||
|
||||
polardbx_root 账号的密码随机生成,执行下面的命令获取 PolarDB-X root 账号的密码:
|
||||
|
||||
```shell
|
||||
kubectl get secret {PolarDB-X 集群名} -o jsonpath="{.data['polardbx_root']}" | base64 -d - | xargs echo "Password: "
|
||||
```
|
||||
|
||||
期望输出:
|
||||
|
||||
```shell
|
||||
Password: *******
|
||||
```
|
|
@ -0,0 +1,29 @@
|
|||
## 非 CN Pod 访问
|
||||
如果你在 K8s 集群内的pod上访问 PolarDB-X,可以直接通过 cluster-ip 访问。 创建 PolarDB-X 集群时,PolarDB-X Operator 同时会为集群创建用于访问的服务,默认是 ClusterIP 类型。使用下面的命令查看用于访问的服务:
|
||||
|
||||
```shell
|
||||
$ kubectl get svc {PolarDB-X 集群名}
|
||||
```
|
||||
|
||||
期望输出:
|
||||
|
||||
```shell
|
||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
quick-start ClusterIP 10.110.214.223 <none> 3306/TCP,8081/TCP 5m25s
|
||||
```
|
||||
|
||||
如果您是在 K8s 集群内进行访问,直接使用上面输出的 Cluster-IP 即可。PolarDB-X 服务默认的端口是 3306。
|
||||
> ClusterIP 是通过 K8s 集群的内部 IP 暴露服务,选择该访问方式时,只能在集群内部访问
|
||||
|
||||
执行如下命令,输入上面获取的密码后,即可连接 PolarDB-X:
|
||||
|
||||
```shell
|
||||
mysql -h10.110.214.223 -P3306 -upolardbx_root -p
|
||||
```
|
||||
|
||||
> **说明:**
|
||||
> - 此处**-P**为大写字母,默认端口为3306。
|
||||
> - 为保障密码安全,**-p**后请不要填写密码,会在执行整行命令后提示您输入密码,输入后按回车即可登录。
|
||||
|
||||
## CN Pod 内访问
|
||||
直接在cn pod 内输入 myc 命令即可登录
|
|
@ -0,0 +1,36 @@
|
|||
### 通过 port-forward 转发到本地访问
|
||||
如果您在 K8s 集群外想访问 PolarDB-X 数据库,但是没有配置 LoadBalancer, 可以通过如下命令将服务的 3306 端口转发到本地,并且保持转发进程存活。
|
||||
|
||||
```shell
|
||||
kubectl port-forward svc/{PolarDB-X 集群名} 3306
|
||||
```
|
||||
|
||||
> 如果您机器的3306端口被占用,可以通过如下命令将服务转发到指定的端口上:kubectl port-forward svc/{PolarDB-X 集群名} {新端口}:3306
|
||||
|
||||
新开一个终端,执行如下命令即可连接 PolarDB-X:
|
||||
|
||||
```shell
|
||||
mysql -h127.0.0.1 -P{转发端口} -upolardbx_root -p
|
||||
```
|
||||
|
||||
> **说明:**
|
||||
> - 此处**-P**为大写字母,默认端口为3306。
|
||||
> - 为保障密码安全,**-p**后请不要填写密码,会在执行整行命令后提示您输入密码,输入后按回车即可登录。
|
||||
|
||||
### 通过 NodePort 访问
|
||||
如果创建 PolarDB-X 集群的时候指定了 [serviceType: LoadBalancer](../../api/polardbxcluster.md) ,也可以直接通过 NodePort的方式进行访问。
|
||||
|
||||
通过如下命令获取所有的 nodePort:
|
||||
|
||||
```shell
|
||||
kubectl get svc -l polardbx/name={集群名},polardbx/cn-type=rw -o jsonpath="{.items[0].spec.ports[0].nodePort}" | xargs echo "NodePort:"
|
||||
```
|
||||
|
||||
通过如下命令获取 IP 列表:
|
||||
|
||||
```shell
|
||||
kubectl get pods -l polardbx/name={集群名},polardbx/role=cn -o jsonpath="{range .items[*]}{.status.hostIP}{'\n'}{end}"
|
||||
```
|
||||
|
||||
通过上述结果中的任意 IP + NodePort 即可访问 PolarDB-X:
|
||||

|
|
@ -0,0 +1,25 @@
|
|||
### 通过 LoadBalancer 访问
|
||||
若运行在有 LoadBalancer 的环境,比如阿里云平台,建议使用云平台的 LoadBalancer 特性。在创建 PolarDB-X 集群时指定 `.spec.serviceType` 为 LoadBalancer,operator 将会自动创建类型为 LoadBalancer 的服务(Service),此时当云平台支持时 Kubernetes 会自动为该服务配置,如下所示:
|
||||
|
||||
```bash
|
||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
xxxxxxxxx LoadBalancer 192.168.247.39 8.209.29.16 3306:30612/TCP,8081:30370/TCP 28h
|
||||
```
|
||||
|
||||
此时可使用 EXTERNAL-IP 所示的 IP 进行访问:
|
||||
|
||||
```bash
|
||||
mysql -h8.209.29.16 -P3306 -upolardbx_root -p
|
||||
```
|
||||
### 通过机器的公网 IP 访问
|
||||
如果您为 K8s 集群内的部分机器开启了公网 IP 地址,可以通过 port-forward 的方式将实例的访问端口映射到有公网 IP 的机器上。
|
||||
|
||||
在有公网ip的机器上执行如下命令进行端口转发:
|
||||
|
||||
```shell
|
||||
kubectl port-forward svc/{PolarDB-X 集群名} 3306 --address=0.0.0.0
|
||||
```
|
||||
|
||||
注意:需要加上 --address=0.0.0.0,以允许外部 IP 访问。
|
||||
|
||||
配置机器对应的安全组或者防火墙,允许3306 端口被外部机器访问。之后通过机器的公网IP + 3306 端口访问即可。
|
|
@ -0,0 +1,10 @@
|
|||
连接 PolarDB-X 数据库
|
||||
===================
|
||||
|
||||
[获取用户名密码](./1-account.md)
|
||||
|
||||
[K8S 集群内连接](./2-connect-in-cluster.md)
|
||||
|
||||
[K8S 集群外连接](./3-connect-outside-cluster.md)
|
||||
|
||||
[公网连接](./4-connect-from-internet.md)
|
|
@ -0,0 +1,109 @@
|
|||
# CDC节点创建
|
||||
PolarDB-X CDC 组件内置于 PolarDB-X 实例中,想要体验 PolarDB-X CDC 的功能,需要拉起一个 PolarDB-X 集群。
|
||||
## 全局Binlog
|
||||
* 通过 PXD 部署:参考 [通过PXD部署集群](https://doc.polardbx.com/quickstart/topics/quickstart-pxd-cluster.html),可以在拓扑文件中编辑 CDC 相关的标签值指定 CDC 集群的配置。
|
||||
* `image`:CDC 节点的镜像
|
||||
* `replica`:CDC 节点的个数
|
||||
* `nodes`:每个 CDC 节点的具体配置
|
||||
* `resources`:分配给 CDC 节点的内存等资源
|
||||
* 通过 K8S 部署:参考 [通过K8S部署](https://doc.polardbx.com/quickstart/topics/quickstart-k8s.html),默认会创建一个 CDC 节点,负责全局 Binlog 的生成。
|
||||
## Binlog多流
|
||||
Binlog 多流目前只支持使用 K8S 进行部署,在进行部署之前需要准备好`minikube`和`PolarDB-X Operator`环境,环境配置方法参考 [准备工作](https://doc.polardbx.com/operator/deployment/1-installation.html) 。
|
||||
|
||||
接下来,我们需要准备一个描述 PolarDB-X 集群的 YAML 文件,示例如下:
|
||||
```yaml
|
||||
apiVersion: polardbx.aliyun.com/v1
|
||||
kind: PolarDBXCluster
|
||||
metadata:
|
||||
name: polardbx-test
|
||||
spec:
|
||||
config:
|
||||
cdc:
|
||||
envs:
|
||||
binlogx_stream_group_name: "group1"
|
||||
binlogx_stream_count: "3"
|
||||
binlogx_transmit_hash_level: "RECORD"
|
||||
topology:
|
||||
nodes:
|
||||
cdc:
|
||||
replicas: 2
|
||||
xReplicas: 2
|
||||
template:
|
||||
resources:
|
||||
limits:
|
||||
cpu: "1"
|
||||
memory: 1Gi
|
||||
requests:
|
||||
cpu: 500m
|
||||
memory: 500Mi
|
||||
image: polardbx/polardbx-cdc:latest
|
||||
cn:
|
||||
replicas: 1
|
||||
template:
|
||||
resources:
|
||||
limits:
|
||||
cpu: "2"
|
||||
memory: 4Gi
|
||||
requests:
|
||||
cpu: 500m
|
||||
memory: 1Gi
|
||||
image: polardbx/polardbx-sql:latest
|
||||
dn:
|
||||
replicas: 2
|
||||
template:
|
||||
engine: galaxy
|
||||
resources:
|
||||
limits:
|
||||
cpu: "2"
|
||||
memory: 8Gi
|
||||
requests:
|
||||
cpu: 500m
|
||||
memory: 500Mi
|
||||
image: polardbx/polardbx-engine:latest
|
||||
gms:
|
||||
template:
|
||||
engine: galaxy
|
||||
resources:
|
||||
limits:
|
||||
cpu: "1"
|
||||
memory: 1Gi
|
||||
requests:
|
||||
cpu: 500m
|
||||
memory: 500Mi
|
||||
image: polardbx/polardbx-engine:latest
|
||||
```
|
||||
注:目前 `PolarDB-X Operator` 仅支持拉起单个多流 group,并且需要同时拉起全局 Binlog。
|
||||
|
||||
其中多流相关的配置如下:
|
||||
* `xReplicas`: 多流节点个数
|
||||
* `binlogx_stream_group_name`:多流流组名称
|
||||
* `binlogx_stream_count`:流的个数
|
||||
* `binlogx_transmit_hash_level`:多流数据分发的哈希规则,目前支持三种规则:
|
||||
* `RECORD`:按行哈希
|
||||
* `TABLE`:按表哈希
|
||||
* `DATABASE`:按库哈希
|
||||
|
||||
使用下面的命令创建 PolarDB-X Cluster 对象:
|
||||
```shell
|
||||
kubectl create -f polardbx-test.yaml
|
||||
```
|
||||
使用下面的命令观察 PolarDB-X Cluster 对象的状态:
|
||||
```shell
|
||||
kubectl get pxc polardbx-test
|
||||
```
|
||||
```text
|
||||
NAME GMS CN DN CDC PHASE DISK AGE
|
||||
polardbx-test 0/1 0/2 0/2 0/3 Creating 5s
|
||||
```
|
||||
当状态中 PHASE 为 Running 时,PolarDB-X 集群就创建完成了。
|
||||
```shell
|
||||
kubectl get pxc polardbx-test
|
||||
```
|
||||
```text
|
||||
NAME GMS CN DN CDC PHASE DISK AGE
|
||||
polardbx-test 1/1 2/2 2/2 3/3 Running 6.2Gi 63s
|
||||
```
|
||||
使用下面的命令获得所有Binlog多流Pod的名称:
|
||||
```shell
|
||||
kubectl get pods -l polardbx/group=g-1
|
||||
```
|
|
@ -0,0 +1,205 @@
|
|||
## 同城三机房
|
||||
|
||||
```yaml
|
||||
spec:
|
||||
topology:
|
||||
rules:
|
||||
selectors:
|
||||
- name: zone-a
|
||||
...
|
||||
- name: zone-b
|
||||
...
|
||||
- name: zone-c
|
||||
...
|
||||
components:
|
||||
cn:
|
||||
- name: zone-a
|
||||
replicas: 1 / 3
|
||||
selector:
|
||||
reference: zone-a
|
||||
- name: zone-b
|
||||
replicas: 1 / 3
|
||||
selector:
|
||||
reference: zone-b
|
||||
- name: zone-c
|
||||
replicas: 1 / 3
|
||||
selector:
|
||||
reference: zone-c
|
||||
cdc:
|
||||
- name: zone-a
|
||||
replicas: 1 / 3
|
||||
selector:
|
||||
reference: zone-a
|
||||
- name: zone-b
|
||||
replicas: 1 / 3
|
||||
selector:
|
||||
reference: zone-b
|
||||
- name: zone-c
|
||||
replicas: 1 / 3
|
||||
selector:
|
||||
reference: zone-c
|
||||
dn:
|
||||
nodeSets:
|
||||
- name: cand-zone-a
|
||||
role: Candidate
|
||||
replicas: 1
|
||||
selector:
|
||||
reference: zone-a
|
||||
- name: cand-zone-b
|
||||
role: Candidate
|
||||
replicas: 1
|
||||
selector:
|
||||
reference: zone-b
|
||||
- name: log-zone-c
|
||||
role: Voter
|
||||
replicas: 1
|
||||
selector:
|
||||
reference: zone-c
|
||||
```
|
||||
## 两地三中心
|
||||
|
||||
```yaml
|
||||
spec:
|
||||
topology:
|
||||
rules:
|
||||
selectors:
|
||||
- name: region-1-zone-a
|
||||
...
|
||||
- name: region-1-zone-b
|
||||
...
|
||||
- name: region-2-zone-c
|
||||
...
|
||||
components:
|
||||
cn:
|
||||
- name: region-1-zone-a
|
||||
replicas: 1 / 3
|
||||
selector:
|
||||
reference: region-1-zone-a
|
||||
- name: region-1-zone-b
|
||||
replicas: 1 / 3
|
||||
selector:
|
||||
reference: region-1-zone-b
|
||||
- name: region-2-zone-c
|
||||
replicas: 1 / 3
|
||||
selector:
|
||||
reference: region-2-zone-c
|
||||
cdc:
|
||||
- name: region-1-zone-a
|
||||
replicas: 1 / 3
|
||||
selector:
|
||||
reference: region-1-zone-a
|
||||
- name: region-1-zone-b
|
||||
replicas: 1 / 3
|
||||
selector:
|
||||
reference: region-1-zone-b
|
||||
- name: region-2-zone-c
|
||||
replicas: 1 / 3
|
||||
selector:
|
||||
reference: region-2-zone-c
|
||||
dn:
|
||||
nodeSets:
|
||||
- name: cand-region-1-zone-a
|
||||
role: Candidate
|
||||
replicas: 1
|
||||
selector:
|
||||
reference: region-1-zone-a
|
||||
- name: cand-region-2-zone-c
|
||||
role: Candidate
|
||||
replicas: 1
|
||||
selector:
|
||||
reference: region-2-zone-c
|
||||
- name: region-1-zone-b
|
||||
role: Voter
|
||||
replicas: 1
|
||||
selector:
|
||||
reference: region-1-zone-b
|
||||
```
|
||||
|
||||
## 三地五中心
|
||||
|
||||
```yaml
|
||||
spec:
|
||||
topology:
|
||||
rules:
|
||||
selectors:
|
||||
- name: region-1-zone-a
|
||||
...
|
||||
- name: region-1-zone-b
|
||||
...
|
||||
- name: region-2-zone-c
|
||||
...
|
||||
- name: region-2-zone-d
|
||||
...
|
||||
- name: region-3-zone-e
|
||||
...
|
||||
components:
|
||||
cn:
|
||||
- name: region-1-zone-a
|
||||
replicas: 1 / 5
|
||||
selector:
|
||||
reference: region-1-zone-a
|
||||
- name: region-1-zone-b
|
||||
replicas: 1 / 5
|
||||
selector:
|
||||
reference: region-1-zone-b
|
||||
- name: region-2-zone-c
|
||||
replicas: 1 / 5
|
||||
selector:
|
||||
reference: region-2-zone-c
|
||||
- name: region-2-zone-d
|
||||
replicas: 1 / 5
|
||||
selector:
|
||||
reference: region-2-zone-d
|
||||
- name: region-3-zone-e
|
||||
replicas: 1 / 5
|
||||
selector:
|
||||
reference: region-3-zone-e
|
||||
cdc:
|
||||
- name: region-1-zone-a
|
||||
replicas: 1 / 5
|
||||
selector:
|
||||
reference: region-1-zone-a
|
||||
- name: region-1-zone-b
|
||||
replicas: 1 / 5
|
||||
selector:
|
||||
reference: region-1-zone-b
|
||||
- name: region-2-zone-c
|
||||
replicas: 1 / 5
|
||||
selector:
|
||||
reference: region-2-zone-c
|
||||
- name: region-2-zone-d
|
||||
replicas: 1 / 5
|
||||
selector:
|
||||
reference: region-2-zone-d
|
||||
- name: region-3-zone-e
|
||||
replicas: 1 / 5
|
||||
selector:
|
||||
reference: region-3-zone-e
|
||||
dn:
|
||||
nodeSets:
|
||||
- name: cand-region-1-zone-a
|
||||
role: Candidate
|
||||
replicas: 1
|
||||
selector:
|
||||
reference: region-1-zone-a
|
||||
- name: cand-region-1-zone-b
|
||||
role: Candidate
|
||||
replicas: 1
|
||||
selector:
|
||||
reference: region-1-zone-b
|
||||
- name: cand-region-2-zone-c
|
||||
role: Candidate
|
||||
replicas: 1
|
||||
selector:
|
||||
reference: region-2-zone-c
|
||||
- name: cand-region-2-zone-d
|
||||
role: Candidate
|
||||
replicas: 1
|
||||
selector:
|
||||
reference: region-2-zone-d
|
||||
- name: region-3-zone-e
|
||||
role: Voter
|
||||
replicas: 1
|
||||
selector:
|
||||
reference: region-3-zone-e
|
||||
```
|
|
@ -0,0 +1,14 @@
|
|||
|
||||
Kubernetes 中容器默认是在 Kubernetes 的容器网络中的,此时网络通信会增加一定的代价和延迟。Kubernetes 支持将容器放到宿主机的网络空间中,这种方式在 Pod 上表现为 `hostNetwork` 为 true。
|
||||
|
||||
PolarDBXCluster 也支持将节点的容器放到宿主机网络中,但有几个限制:
|
||||
|
||||
- 节点需要监听的端口是随机生成的,不保证不冲突
|
||||
- 节点升级中可能会遇到端口冲突起不来的情况,需要手动处理
|
||||
|
||||
每个组件的`hostNetwork`都在对应的 `template`字段中,可以分别指定
|
||||
|
||||
- `spec.topology.nodes.gms.template.hostNetwork`
|
||||
- `spec.topology.nodes.cn.template.hostNetwork`
|
||||
- `spec.topology.nodes.dn.template.hostNetwork`
|
||||
- `spec.topology.nodes.cdc.template.hostNetwork`
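例如,通过 `kubectl patch` 为 CN 开启宿主机网络(示意,实际字段路径以所用 API 版本为准):

```bash
kubectl patch pxc polardbx-test -p '{"spec": {"topology": {"nodes": {"cn": {"template": {"hostNetwork": true}}}}}}'
```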
|
|
@ -0,0 +1,34 @@
|
|||
在 `topology.rules.nodeSelectors`中,你可以定义一组预置的节点选择器,然后在后面的 `topology.rules.components`中引用它们。关于节点选择器的定义方法和含义,参考[官方文档](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/) 。
|
||||
|
||||
```yaml
|
||||
spec:
|
||||
topology:
|
||||
rules:
|
||||
selectors:
|
||||
- name: zone-a
|
||||
nodeSelector:
|
||||
nodeSelectorTerms:
|
||||
- matchExpressions:
|
||||
- key: topology.kubernetes.io/zone
|
||||
operator: In
|
||||
values:
|
||||
- cn-hangzhou-a
|
||||
- name: zone-b
|
||||
nodeSelector:
|
||||
nodeSelectorTerms:
|
||||
- matchExpressions:
|
||||
- key: topology.kubernetes.io/zone
|
||||
operator: In
|
||||
values:
|
||||
- cn-hangzhou-b
|
||||
- name: zone-c
|
||||
nodeSelector:
|
||||
nodeSelectorTerms:
|
||||
- matchExpressions:
|
||||
- key: topology.kubernetes.io/zone
|
||||
operator: In
|
||||
values:
|
||||
- cn-hangzhou-c
|
||||
```
|
||||
|
||||
节点选择器可以帮助我们控制部署实例的拓扑,例如两地三中心,三地五中心等,具体的使用可以参考:[容灾部署示例](./1-create-ha-example.md) 。
|
|
@ -0,0 +1,82 @@
|
|||
## 只读实例创建
|
||||
对于PolarDB-X Operator 1.3.0及以上的版本,您可以创建只读实例,并指定其所属的 PolarDB-X 主实例。
|
||||
|
||||
只读实例的存储节点通过增加 Learner 副本的方式来保证物理资源隔离,提供了读写分离的特性,同时能够基于全局时钟来确保只读查询的强一致性。
|
||||
|
||||
您可以通过以下两种方法创建只读实例:
|
||||
|
||||
### 1. 为已有主实例添加只读实例
|
||||
创建独立的 PolarDBXCluster yaml 配置文件,令`spec.readonly`为`true`,并指定`spec.primaryCluster`为其所属的主实例名,示例如下:
|
||||
``` yaml
|
||||
# readonly.yaml
|
||||
apiVersion: polardbx.aliyun.com/v1
|
||||
kind: PolarDBXCluster
|
||||
metadata:
|
||||
name: pxc-readonly
|
||||
spec:
|
||||
readonly: true
|
||||
primaryCluster: pxc-master # 主实例名
|
||||
topology:
|
||||
nodes:
|
||||
cn:
|
||||
replicas: 1
|
||||
template:
|
||||
resources:
|
||||
limits:
|
||||
cpu: 2
|
||||
memory: 4Gi
|
||||
image: polardbx/polardbx-sql:latest
|
||||
imagePullPolicy: Always
|
||||
dn:
|
||||
# DN replicas 会自动与主实例的 DN replicas 保持同步,无需显式指定
|
||||
template:
|
||||
resources:
|
||||
limits:
|
||||
cpu: 2
|
||||
memory: 4Gi
|
||||
image: polardbx/polardbx-engine:latest
|
||||
imagePullPolicy: IfNotPresent
|
||||
config:
|
||||
cn:
|
||||
static:
|
||||
AttendHtap: true # 是否支持 HTAP
|
||||
```
|
||||
### 2. 同时创建主实例与只读实例
|
||||
在创建主实例时,在主实例 PolarDBXCluster yaml 配置文件的`spec.initReadonly`字段中添加附属只读实例的信息。这种方法创建出的只读实例规格和参数与主实例相同,示例如下:
|
||||
``` yaml
|
||||
# pxc-with-readonly.yaml
|
||||
apiVersion: polardbx.aliyun.com/v1
|
||||
kind: PolarDBXCluster
|
||||
metadata:
|
||||
name: pxc
|
||||
spec:
|
||||
initReadonly:
|
||||
- cnReplicas: 1 # 只读实例 CN 数
|
||||
name: readonly # 只读实例后缀名,本例中将生成名为 "pxc-readonly" 的只读实例,不填则会生成随机后缀
|
||||
extraParams:
|
||||
AttendHtap: "true" # 是否支持 HTAP
|
||||
topology:
|
||||
nodes:
|
||||
cn:
|
||||
replicas: 1
|
||||
template:
|
||||
resources:
|
||||
limits:
|
||||
cpu: 2
|
||||
memory: 4Gi
|
||||
image: polardbx/polardbx-sql:latest
|
||||
imagePullPolicy: Always
|
||||
dn:
|
||||
replicas: 1
|
||||
template:
|
||||
resources:
|
||||
limits:
|
||||
cpu: 2
|
||||
memory: 4Gi
|
||||
image: polardbx/polardbx-engine:latest
|
||||
imagePullPolicy: IfNotPresent
|
||||
```
|
||||
|
||||
### 3. 连接只读实例
|
||||
|
||||
您可以直接连接只读实例,连接方法同主实例,见[连接 PolarDB-X 数据库](../connection/README.md),对应的集群实例名称填入只读实例名即可。
|
|
@ -0,0 +1,3 @@
|
|||
PolarDBXCluster 支持将 GMS 的功能合并到第一个 DN(DN-0)中来减少整体使用的资源,这种情况适合进行测试部署。
|
||||
|
||||
想要指定这种部署模式,将 `spec.shareGMS`设置为 true 即可。需要注意的是,极简模式和普通模式不能来回切换。
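一个极简模式的创建示意(假设其余字段使用默认值,实际使用时可按[创建](./1-create.md)中的示例补充拓扑与规格):

```bash
kubectl apply -f - <<EOF
apiVersion: polardbx.aliyun.com/v1
kind: PolarDBXCluster
metadata:
  name: quick-start
spec:
  shareGMS: true
EOF
```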
|
|
@ -0,0 +1,39 @@
|
|||
有状态节点规则是针对元数据、存储节点的内部节点,有两种形式:
|
||||
|
||||
- nodeSet,每个 GMS、DN 都遵从 nodeSet 的规则来部署内部节点
|
||||
- rolling,只针对 DN,会将内部节点按照堆叠的方式部署在 Kubernetes 集群内的所有可用的节点之上(用于测试),从而最大化资源利用
|
||||
|
||||
```yaml
|
||||
spec:
|
||||
topology:
|
||||
rules:
|
||||
components:
|
||||
# **Optional**
|
||||
#
|
||||
# GMS 部署规则,默认和 DN 一致
|
||||
gms:
|
||||
# 堆叠部署结构,operator 尝试在节点选择器指定的节点中,堆叠部署
|
||||
# 每个存储节点的子节点以达到较高资源利用率的方式,仅供测试使用
|
||||
rolling:
|
||||
replicas: 3
|
||||
selector:
|
||||
reference: zone-a
|
||||
# 节点组部署结构,可以指定每个 DN 的子节点的节点组和节点选择器,
|
||||
# 从而达成跨区、跨城等高可用部署结构
|
||||
nodeSets:
|
||||
- name: cand-zone-a
|
||||
role: Candidate
|
||||
replicas: 1
|
||||
selector:
|
||||
reference: zone-a
|
||||
- name: cand-zone-b
|
||||
role: Candidate
|
||||
replicas: 1
|
||||
selector:
|
||||
reference: zone-b
|
||||
- name: log-zone-c
|
||||
role: Voter
|
||||
replicas: 1
|
||||
selector:
|
||||
reference: zone-c
|
||||
```
|
|
@ -0,0 +1,26 @@
|
|||
示例:
|
||||
|
||||
```yaml
|
||||
spec:
|
||||
topology:
|
||||
rules:
|
||||
components:
|
||||
# **Optional**
|
||||
#
|
||||
# CN 部署规则,同样按组划分 CN 节点
|
||||
cn:
|
||||
- name: zone-a
|
||||
# 合法值:数字、百分比、(0, 1] 分数,不填写为剩余 replica(只能有一个不填写)
|
||||
# 总和不能超过 .topology.nodes.cn.replicas
|
||||
replicas: 1
|
||||
selector:
|
||||
reference: zone-a
|
||||
- name: zone-b
|
||||
replicas: 1 / 3
|
||||
selector:
|
||||
reference: zone-b
|
||||
- name: zone-c
|
||||
replicas: 34%
|
||||
selector:
|
||||
reference: zone-c
|
||||
```
|
|
@ -0,0 +1,62 @@
|
|||
前言:完整的 PolarDBXCluster 定义参考[这里](../../api/polardbxcluster.md) 。
|
||||
|
||||
首先准备一个描述 PolarDBXCluster 的 yaml 文件:
|
||||
|
||||
```yaml
|
||||
apiVersion: polardbx.aliyun.com/v1 # API 组 / 版本
|
||||
kind: PolarDBXCluster # API 名称
|
||||
metadata: # 对象元数据
|
||||
name: polardbx-test # 对象名字
|
||||
namespace: default # 所在命名空间
|
||||
labels: # 对象标签集合
|
||||
kind: test
|
||||
spec: # Spec
|
||||
topology: # 拓扑定义
|
||||
nodes: # 节点规格和数量
|
||||
cn:
|
||||
replicas: 2
|
||||
template:
|
||||
image: polardbx/polardbx-sql:latest
|
||||
resources:
|
||||
limits:
|
||||
cpu: 4
|
||||
memory: 16Gi
|
||||
dn:
|
||||
replicas: 2
|
||||
template:
|
||||
image: polardbx/polardbx-engine:latest
|
||||
resources:
|
||||
limits:
|
||||
cpu: 4
|
||||
memory: 16Gi
|
||||
cdc:
|
||||
replicas: 2
|
||||
template:
|
||||
image: polardbx/polardbx-cdc:latest
|
||||
resources:
|
||||
limits:
|
||||
cpu: 4
|
||||
memory: 16Gi
|
||||
```
|
||||
|
||||
使用下面的命令创建 PolarDBXCluster 对象:
|
||||
|
||||
```bash
|
||||
kubectl create -f polardbx-test.yaml
|
||||
```
|
||||
|
||||
使用下面的命令观察 PolarDBXCluster 对象的状态:
|
||||
|
||||
```bash
|
||||
kubectl get pxc polardbx-test
|
||||
NAME GMS CN DN CDC PHASE DISK AGE
|
||||
polardbx-test 0/1 0/2 0/2 0/2 Creating 5s
|
||||
```
|
||||
|
||||
当状态中 `PHASE` 为 `Running` 时,PolarDB-X 集群就创建完成了。
|
||||
|
||||
```bash
|
||||
kubectl get pxc polardbx-test
|
||||
NAME GMS CN DN CDC PHASE DISK AGE
|
||||
polardbx-test 1/1 2/2 2/2 2/2 Running 6.2Gi 63s
|
||||
```
|
|
@ -0,0 +1,22 @@
|
|||
使用下面的命令删除 PolarDBXCluster 集群(对象),其中 `polardbx-test` 是 PolarDBXCluster 对象名
|
||||
|
||||
```bash
|
||||
kubectl delete pxc polardbx-test
|
||||
```
|
||||
|
||||
此时查看对象状态,可能会看到 `PHASE` 处于 `Deleting`:
|
||||
|
||||
```bash
|
||||
kubectl get pxc polardbx-test
|
||||
NAME GMS CN DN CDC PHASE DISK AGE
|
||||
polardbx-test 1/1 2/2 2/2 2/2 Deleting 6.2Gi 2m1s
|
||||
```
|
||||
|
||||
或者报错对象已经不存在
|
||||
|
||||
```bash
|
||||
kubectl get pxc polardbx-test
|
||||
Error from server (NotFound): polardbxclusters.polardbx.aliyun.com "polardbx-test" not found
|
||||
```
|
||||
|
||||
当 PolarDBXCluster 主实例被删除时,其附属的只读实例也会随之删除
|
|
@ -0,0 +1,17 @@
|
|||
注:本文升级指修改某个或某几个组件的镜像,实际操作中你可以同时进行升级、升配、扩缩容动作。
|
||||
|
||||
以前文[《1. 创建》](./1-create.md) 中的 yaml 为例,假设我们想要更新 CN 的镜像为 `polardbx/polardbx-sql:v2.0`,那么可以使用 `kubectl edit` 或是 `kubectl patch` 的方式修改 `.spec` 下的镜像字段,这里演示 `kubectl patch`的方式:
|
||||
|
||||
```bash
|
||||
kubectl patch pxc polardbx-test -p '{"spec": {"topology": {"nodes": {"cn": {"template": {"image": "polardbx/polardbx-sql:v2.0"}}}}}}'
|
||||
```
|
||||
|
||||
稍后观察集群状态,`PHASE`会进入 `Upgrading` 状态,表明正在升级中:
|
||||
|
||||
```bash
|
||||
kubectl get pxc polardbx-test
|
||||
NAME GMS CN DN CDC PHASE DISK AGE
|
||||
polardbx-test 1/1 1/2 2/2 2/2 Upgrading 6.2Gi 93s
|
||||
```
|
||||
|
||||
当 `PHASE`重新变为 `Running`时,升级完成。
|
|
@ -0,0 +1,7 @@
|
|||
除了是修改资源配置以外,其余同[《3. 升级》](./3-update.md) 。
|
||||
|
||||
```bash
|
||||
kubectl patch pxc polardbx-test -p '{"spec": {"topology": {"nodes": {"cn": {"template": {"resources": {"limits": {"cpu": 4, "memory": "16Gi"}}}}}}}}'
|
||||
```
|
||||
|
||||
同样 `PHASE`从 `Running`进入 `Upgrading`,然后再回到 `Running`。
|
|
@ -0,0 +1,23 @@
|
|||
除了是增加节点以外,其余同[《3. 升级》](./3-update.md) 。
|
||||
|
||||
```bash
|
||||
kubectl patch pxc polardbx-test -p '{"spec": {"topology": {"nodes": {"dn": {"replicas": 3}}}}}'
|
||||
```
|
||||
|
||||
同样 `PHASE`从 `Running`进入 `Upgrading`,然后再回到 `Running`。
|
||||
|
||||
```bash
|
||||
kubectl get pxc polardbx-test
|
||||
NAME GMS CN DN CDC PHASE DISK AGE
|
||||
polardbx-test 1/1 1/2 2/3 2/2 Upgrading 6.2Gi 93s
|
||||
```
|
||||
|
||||
但你会看到 DN 的数量先变为 `2/3`,最终变为 `3/3`。同时,operator 会自动进行数据的均衡,因此还可能涉及数据的搬迁。你可以通过如下方式查看数据搬迁进度:
|
||||
|
||||
```bash
|
||||
kubectl get pxc polardbx-test -o wide
|
||||
NAME PROTOCOL GMS CN DN CDC PHASE DISK STAGE REBALANCE VERSION AGE
|
||||
polardbx-test 8.0 1/1 2/2 3/3 2/2 Upgrading 22.6 GiB RebalanceWatch 50% 8.0.3-PXC-5.4.13-20220418/8.0.18 35d
|
||||
```
|
||||
|
||||
|
|
@ -0,0 +1,5 @@
|
|||
同[《5. 扩容》](./5-scale-out.md) 一样,除了是缩减节点。同样,数据会自动进行搬迁。
|
||||
|
||||
```bash
|
||||
kubectl patch pxc polardbx-test -p '{"spec": {"topology": {"nodes": {"dn": {"replicas": 1}}}}}'
|
||||
```
|
|
@ -0,0 +1,5 @@
|
|||
几种情况下,operator 无法响应新的操作:
|
||||
|
||||
1. `PHASE` 在 `Deleting`状态,意味着在删除中
|
||||
2. `PHASE` 在 `Locked`状态,意味着在锁定中
|
||||
3. `PHASE` 在 `Upgrading`状态,且 `STAGE` 在 `RebalanceStart`、`RebalanceWatch`和 `Clean` 状态时,无法中断,意味着此时在进行数据的迁移工作
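发起变更前,可以用下面的命令确认当前的 `PHASE` 与 `STAGE`(`-o wide` 输出中包含 STAGE 列):

```bash
kubectl get pxc polardbx-test -o wide
```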
|
|
@ -0,0 +1,15 @@
|
|||
除了[几个特殊情况](./7-rollback-exception.md) ,在任意一个变更过程中,你都可以再次变更对象的 `.spec` 字段来触发新的操作,operator 会及时响应以达到预期效果。
|
||||
|
||||
因此,中断/回滚上一次操作的方式是再进行一次操作,将 `.spec`改回之前的状态,例如:
|
||||
|
||||
```bash
|
||||
kubectl patch pxc polardbx-test -p '{"spec": {"topology": {"nodes": {"dn": {"replicas": 3}}}}}'
|
||||
```
|
||||
|
||||
之后立刻将 `replicas`改回 2
|
||||
|
||||
```bash
|
||||
kubectl patch pxc polardbx-test -p '{"spec": {"topology": {"nodes": {"dn": {"replicas": 2}}}}}'
|
||||
```
|
||||
|
||||
那么 PolarDB-X 集群将继续稳定运行。
|
|
@ -0,0 +1,50 @@
|
|||
复杂操作指的是并非单纯的升级、升配或扩缩容,而是混合了部分或所有意图的操作。Kubernetes 的声明式 API 使我们能够高效地表达这样的操作,operator 也对此提供了支持。
|
||||
|
||||
举个例子,我们将同时
|
||||
|
||||
1. 修改 CN 的镜像为 polardbx/polardbx-sql:v2.0
|
||||
2. 修改 CDC 的配置为 8C32G
|
||||
3. 增加 DN 的节点,到 3 个
|
||||
|
||||

As before, this can be done with either `kubectl edit` or `kubectl patch`; here we demonstrate `kubectl patch`.

First, prepare a patch file:

```yaml
spec:
  topology:
    nodes:
      cn:
        template:
          image: polardbx/polardbx-sql:v2.0
      dn:
        replicas: 3
      cdc:
        template:
          resources:
            limits:
              cpu: 8
              memory: 32Gi
```

Run the following command to apply the changes described above:

```bash
kubectl patch pxc polardbx-test --patch-file patch.yaml
```
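
Equivalently, the same three changes can be collapsed into a single inline merge patch; this is just the patch file above written on one line:

```bash
kubectl patch pxc polardbx-test -p '{"spec":{"topology":{"nodes":{"cn":{"template":{"image":"polardbx/polardbx-sql:v2.0"}},"dn":{"replicas":3},"cdc":{"template":{"resources":{"limits":{"cpu":8,"memory":"32Gi"}}}}}}}}'
```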

We will then observe the CN, DN, and CDC changes together with the data migration:

```bash
kubectl get pxc polardbx-test -o wide
NAME            PROTOCOL   GMS   CN    DN    CDC   PHASE       DISK       STAGE   REBALANCE   VERSION                            AGE
polardbx-test   8.0        1/1   1/2   2/3   1/2   Upgrading   22.6 GiB                       8.0.3-PXC-5.4.13-20220418/8.0.18   35d
```

```bash
kubectl get pxc polardbx-test -o wide
NAME            PROTOCOL   GMS   CN    DN    CDC   PHASE       DISK       STAGE            REBALANCE   VERSION                            AGE
polardbx-test   8.0        1/1   2/2   3/3   2/2   Upgrading   22.6 GiB   RebalanceWatch   50%         8.0.3-PXC-5.4.13-20220418/8.0.18   35d
```

Note: as described in [Uninterruptible cases](./7-rollback-exception.md), data migration cannot be interrupted once it has started.
@ -0,0 +1,19 @@

Lifecycle Management
====================

1. [Create](./1-create.md)
    1. [Cluster topology rules: node selector (NodeSelector)](./1-create-node-selector.md)
    2. [Cluster topology rules: stateless node rules (compute and log nodes)](./1-create-stateless-node-rule.md)
    3. [Cluster topology rules: stateful node rules (GMS and storage nodes)](./1-create-state-node-rule.md)
    4. [Cluster topology rules: disaster recovery deployment example](./1-create-ha-example.md)
    5. [Host network mode](./1-create-host-network-mode.md)
    6. [Minimal deployment mode (ShareGMS)](./1-create-simple-mode.md)
    7. [Creating a read-only instance](./1-create-readonly-pxc.md)
2. [Delete](./2-delete.md)
3. [Upgrade](./3-update.md)
4. [Change specifications](./4-upgrade.md)
5. [Scale out](./5-scale-out.md)
6. [Scale in](./6-scale-in.md)
7. [Interrupting/rolling back operations](./7-rollback.md)
    1. [Uninterruptible cases](./7-rollback-exception.md)
8. [Complex operations](./8-complex-ops.md)

@ -0,0 +1,204 @@

# Log Collection

This article describes how to enable log collection for a PolarDB-X database in a Kubernetes cluster.

## What Is Collected

### Compute node logs

| Log | Path inside the Pod | Parsed |
| --- | --- | --- |
| SQL log | /home/admin/drds-server/logs/*/sql.log | Yes |
| Slow log | /home/admin/drds-server/logs/*/slow.log | Yes |
| Error log | /home/admin/drds-server/logs/*/tddl.log | No |

> The * in the container path stands for an arbitrary directory name.

## Installing PolarDB-X LogCollector

PolarDB-X collects logs with Filebeat, which ships the raw logs to Logstash for parsing before they are delivered to the final storage backend.

### Prerequisites

1. A running Kubernetes cluster, version >= 1.18.0
2. [Helm 3](https://helm.sh/docs/intro/install/) installed
3. PolarDB-X Operator 1.2.2 or later installed

### Installing the Helm package

First, create a namespace named polardbx-logcollector:

```
kubectl create namespace polardbx-logcollector
```

Run the following command to install PolarDB-X LogCollector:

```
helm install --namespace polardbx-logcollector polardbx-logcollector https://github.com/polardb/polardbx-operator/releases/download/v1.3.0/polardbx-logcollector-1.3.0.tgz
```

You can also install it from the PolarDB-X Helm Chart repository:

```bash
helm repo add polardbx https://polardbx-charts.oss-cn-beijing.aliyuncs.com
helm install --namespace polardbx-logcollector polardbx-logcollector polardbx/polardbx-logcollector
```

> Note: under the default configuration, Filebeat is installed on the cluster's machines as a DaemonSet, and each Filebeat Pod takes 500MB of memory and 1 CPU core by default; one Logstash Pod is deployed by default, taking 1.5GB of memory and 2 CPU cores. See [values.yaml](https://github.com/polardb/polardbx-operator/blob/main/charts/polardbx-logcollector/values.yaml) for the defaults.

You should see output like the following:

```
polardbx-operator logcollector plugin is installed. Please check the status of components:

kubectl get pods --namespace {{ .Release.Namespace }}

Now start to collect logs of your polardbx cluster.
```

## Viewing Logs

### Enabling log collection

Log collection is disabled by default for PolarDB-X clusters. You can turn it on and off with the following commands.

Enable log collection on the CN nodes of a PolarDB-X instance:

```
kubectl patch pxc {pxc name} --patch '{"spec":{"config":{"cn":{"enableAuditLog":true}}}}' --type merge
```

Disable log collection on the CN nodes of a PolarDB-X instance:

```
kubectl patch pxc {pxc name} --patch '{"spec":{"config":{"cn":{"enableAuditLog":false}}}}' --type merge
```
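
To check which state an instance is currently in, reading the field back is enough; `{pxc name}` is the same placeholder as above:

```bash
# Prints "true" when audit log collection is enabled on the CN nodes.
kubectl get pxc {pxc name} -o jsonpath='{.spec.config.cn.enableAuditLog}'
```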

### Viewing logs on Logstash's standard output

PolarDB-X uses Logstash as the log parsing and reporting component. By default it writes logs to standard output, which makes it easy to verify that the collection and parsing pipeline is working. View the collected logs with:

```shell
kubectl logs -f {logstash pod name} -n polardbx-logcollector
```

## Delivering Logs to Other Systems

Logstash supports a wide range of [output plugins](https://www.elastic.co/guide/en/logstash/current/output-plugins.html), and you can also [develop your own output plugin](https://www.elastic.co/guide/en/logstash/current/output-new-plugin.html) to deliver PolarDB-X logs to other systems for further analysis.

The Logstash output plugin configuration is stored in the ConfigMap named logstash-pipeline in the polardbx-logcollector namespace. You can modify the output configuration with the following command.

```shell
kubectl edit configmap logstash-pipeline -n polardbx-logcollector
```

The output configuration of logstash-pipeline is shown below:

![logstash-pipieline-cm.png](./images/logstash-pipieline-cm.png)

Below, this article uses ElasticSearch as an example to show how to configure Logstash so that PolarDB-X logs are delivered to an ElasticSearch cluster.

### Delivering logs to ElasticSearch

If your environment already has an ES cluster, you can skip "Creating ElasticSearch".

#### Creating ElasticSearch

Refer to the following documents to quickly deploy a test ES cluster in Kubernetes:

1. [Deploy the ElasticSearch Operator](https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-eck.html#k8s-deploy-eck)
2. [Deploy an ElasticSearch Cluster](https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html); in this step you need to obtain the ES cluster's endpoint, username, password, and certificate.
   The cluster's access certificate can be fetched with:
   ```shell
   kubectl get secret quickstart-es-http-certs-public -o=jsonpath='{.data.ca\.crt}'
   ```
3. [Deploy Kibana](https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-kibana.html)

> Note: the ES cluster above is for testing only; for production, set up an ES cluster yourself.

#### Updating the certificate Secret

If the ES cluster is accessed over HTTP, you can skip this step.

If the ES cluster is accessed over HTTPS, a certificate must be configured. The certificate file (/usr/share/logstash/config/certs/ca.crt) is mounted into the Logstash Pods from the Secret elastic-certs-public in the polardbx-logcollector namespace; update the Secret with:

```shell
kubectl edit secret elastic-certs-public -n polardbx-logcollector
```
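
Instead of editing the base64-encoded Secret by hand, you can regenerate it from a certificate file. A sketch using standard kubectl idioms, assuming the test cluster from the previous section (`quickstart-es-http-certs-public`):

```bash
# Extract the CA certificate of the test ES cluster into a local file ...
kubectl get secret quickstart-es-http-certs-public -o=jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
# ... then rebuild the Secret that Logstash mounts, keeping the expected file name ca.crt.
kubectl create secret generic elastic-certs-public -n polardbx-logcollector \
  --from-file=ca.crt --dry-run=client -o yaml | kubectl apply -f -
```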

#### Configuring the Logstash output

Prerequisites:

- an ES cluster address reachable from inside the Kubernetes cluster;
- automatic index creation enabled on the ES cluster;
- an API key or username/password created on the ES cluster;
- if HTTPS is used, the ES cluster certificate: write its content into the certificate Secret, namely elastic-certs-public in the polardbx-logcollector namespace, under the file name ca.crt.

Update the Logstash output configuration with the following command:

```shell
kubectl edit configmap logstash-pipeline -n polardbx-logcollector
```

For example, here is a sample configuration for an ES cluster:

```
output {
  elasticsearch {
    hosts => ["https://quickstart-es-http.default:9200"]
    user => elastic
    password => sTF9B37N0jAF45Kn2Jwt874N
    ssl => true
    cacert => "/usr/share/logstash/config/certs/ca.crt"
    index => "%{[@metadata][target_index]}"
  }
}
```

- For more configuration options, see [Elastic Search Output Plugins Options](https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html#plugins-outputs-elasticsearch-options).

![logstash-es-config.png](./images/logstash-es-config.png)

After enabling the elasticsearch output plugin, remember to **comment out the stdout output configuration**.

#### Accessing Kibana

Following [Deploy Kibana](https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-kibana.html#k8s-deploy-kibana), log in to Kibana and create three Index Patterns for querying the logs:

| Log type | Index Pattern |
| --- | --- |
| SQL log | cn_sql_log-* |
| Slow log | cn_slow_log-* |
| Error log | cn_tddl_log-* |

Creating an Index Pattern in Kibana looks like this:

![kibana-create-index-pattern.png](./images/kibana-create-index-pattern.png)

#### Sample screenshots

SQL log

![sql-log-kibana.png](./images/sql-log-kibana.png)

Error log

![tddl-log-kibana.png](./images/tddl-log-kibana.png)

Slow log

![slow-log-kibana.png](./images/slow-log-kibana.png)

### Other

- [Available output plugins](https://www.elastic.co/guide/en/logstash/current/output-plugins.html)
- [Developing a new output plugin](https://www.elastic.co/guide/en/logstash/current/output-new-plugin.html)

# About values.yaml

You can tailor the polardbx-logcollector installation to your environment. The values.yaml file is located at charts/polardbx-logcollector/values.yaml and contains detailed comments on every configuration item.
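
To apply such customizations, pass your own values file when installing or upgrading the chart; `my-values.yaml` is a hypothetical file containing only your overrides:

```bash
helm upgrade --install --namespace polardbx-logcollector polardbx-logcollector \
  polardbx/polardbx-logcollector -f my-values.yaml
```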

# Log Fields

See the [log field reference](2-logfield.md).

# Resource Configuration and Performance Tuning Suggestions

## Resources

| Logstash, per core | Filebeat, per core |
| --- | --- |
| 5000 events/s | 12000 events/s |

To make full use of the available cores without triggering OOM kills, memory resources, concurrency, and buffer sizes need to be configured sensibly.

## Parameters Worth Tuning per Scenario

### Filebeat's filebeat.yml configuration file

The ConfigMap is named filebeat-config.
Parameters:

- harvester_buffer_size in the SQL log input section
- the queue.mem settings

Reference: [Filebeat configuration](https://www.elastic.co/guide/en/beats/filebeat/current/configuring-howto-filebeat.html)

### Logstash's jvm.options configuration file

The ConfigMap is named logstash-config. Parameters:

- -Xms and -Xmx

### Logstash's logstash.yml configuration file

The ConfigMap is named logstash-config. Parameters:

- pipeline.batch.size
- pipeline.workers

Reference: [Logstash configuration](https://www.elastic.co/guide/en/logstash/current/config-setting-files.html)
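
Both of the files above live in the logstash-config ConfigMap, so tuning them comes down to editing that ConfigMap (the Logstash Pods may need a restart to pick up the changes):

```bash
kubectl edit configmap logstash-config -n polardbx-logcollector
```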

@ -0,0 +1,206 @@

# Log Field Reference

This article describes the fields that appear in the log entries reported by Logstash.

## SQL Log

### Fields

| **Field group** | **Field** | **Description** |
| --- | --- | --- |
| fields | instance_id | Instance name. |
| | node_name | Name of the node hosting the compute node pod. |
| | log_type | Log type. |
| | pod_name | Compute node pod name. |
| message | log_time | Timestamp when the log entry was printed. |
| | physical_affected_rows | Number of rows affected at the physical layer. |
| | total_physical_get_connection_time_cost | Total time spent acquiring physical connections, in ns. |
| | sql | The executed SQL statement. |
| | fetched_rows | Number of rows fetched from storage. |
| | total_physical_time_cost | Total physical execution time, covering physical SQL execution and physical result set consumption, in ns. |
| | total_physical_sql_execution_time_cost | Sum of physical SQL execution times, in ns. |
| | schema | Database. |
| | logical_time_cost | Logical-layer execution time (CPU time spent in the DRDS layer), in ns. |
| | affected_rows | For DML, the number of affected rows; for queries, the number of rows returned. |
| | logical_optimizer_time_cost | Time from receiving the SQL to producing the plan, i.e. the total time spent in the optimizer, in ns. |
| | logical_executor_time_cost | Total logical time for executing the whole plan (total physical time already excluded), in ns. |
| | workload_type | Workload type of the SQL execution: TP (transactional workload) or AP (analytical workload). |
| | response_time | Response time, in microseconds. |
| | template_id | Hash of the SQL template. |
| | user | Username that executed the SQL. |
| | trace_id | TRACE ID of the SQL execution. |
| | extra_info | Extra information, including client address (ipport), prepared statement id (stmt_id), transaction policy (trx), workload type (wt), and kernel version (ver). |

### Example

```json
{
  "_index": "cn_sql_log-2022.11.16",
  "_type": "_doc",
  "_id": "oz1rf4QB-sddlgYFeymE",
  "_version": 1,
  "_score": null,
  "_source": {
    "@version": "1",
    "@timestamp": "2022-11-16T07:50:58.507Z",
    "host": {
      "name": "filebeat-vtggm"
    },
    "fields": {
      "pod_name": "busu-pxchostnet-ql8p-cn-default-84cdc67d84-4w8ww",
      "log_type": "cn_sql_log",
      "instance_id": "busu-pxchostnet",
      "node_name": "cn-beijing.192.168.1.250"
    },
    "message": {
      "total_physical_sql_execution_time_cost": 680097,
      "total_physical_get_connection_time_cost": 4544,
      "fetched_rows": 0,
      "trace_id": "153a4ba63d007000",
      "extra_info": "ipport=192.168.0.3:58888 wt=TP ver=5.4.13-20220621",
      "affected_rows": 0,
      "template_id": "3e4e0512",
      "logical_time_cost": -3315586127,
      "user": "polardbx_root",
      "response_time": 1203,
      "physical_affected_rows": 0,
      "schema": "polardbx",
      "logical_optimizer_time_cost": 27655,
      "log_time": "2022-11-16 15:50:58.509",
      "sql": "SELECT engine, external_endpoint, file_uri, access_key_id, access_key_secret FROM metadb.file_storage_info",
      "total_physical_time_cost": 686243,
      "logical_executor_time_cost": -3315613782
    },
    "tags": [
      "beats_input_codec_plain_applied"
    ]
  },
  "fields": {
    "@timestamp": [
      "2022-11-16T07:50:58.507Z"
    ]
  },
  "sort": [
    1668585058507
  ]
}
```

## Slow Log

### Fields

| **Field group** | **Field** | **Description** |
| --- | --- | --- |
| fields | instance_id | Instance name. |
| | log_type | Log type. |
| | node_name | Name of the node hosting the compute node pod. |
| | pod_name | Compute node pod name. |
| message | log_time | Time when the log entry was printed. |
| | time | Execution time, in ms. |
| | host | Client IP. |
| | port | Client port. |
| | sql | The executed SQL statement. |
| | affected_rows | Number of affected rows. |
| | trace_id | Trace ID. |
| | server_version | Kernel version. |
| | user | Username. |
| | schema | Database name. |

### Example

```json
{
  "_index": "cn_slow_log-2022.08.09",
  "_type": "_doc",
  "_id": "CxdugYIB4-sIO7p8dvv-",
  "_version": 1,
  "_score": null,
  "_source": {
    "fields": {
      "instance_id": "busu-pxchostnet",
      "node_name": "cn-beijing.192.168.0.207",
      "log_type": "cn_slow_log",
      "pod_name": "busu-pxchostnet-ldcw-cn-default-c754df994-xqhhj"
    },
    "@version": "1",
    "message": {
      "log_time": "2022-08-09 15:07:55.720",
      "time": "2001",
      "host": "127.0.0.1",
      "port": "35812",
      "sql": "select sleep(2)",
      "affected_rows": "1",
      "trace_id": "14bacc6508402000",
      "server_version": "5.4.13-16534775",
      "user": "polardbx_root",
      "schema": "busudb"
    },
    "host": {
      "name": "filebeat-wg47m"
    },
    "@timestamp": "2022-08-09T07:07:55.720Z",
    "tags": [
      "beats_input_codec_plain_applied"
    ]
  },
  "fields": {
    "@timestamp": [
      "2022-08-09T07:07:55.720Z"
    ]
  },
  "highlight": {
    "message.schema": [
      "@kibana-highlighted-field@busudb@/kibana-highlighted-field@"
    ]
  },
  "sort": [
    1660028875720
  ]
}
```

## Error Log

### Fields

| **Field group** | **Field** | **Description** |
| --- | --- | --- |
| fields | instance_id | Instance name. |
| | log_type | Log type. |
| | node_name | Name of the node hosting the compute node pod. |
| | pod_name | Compute node pod name. |
| / | logger | Logger name. |
| | loglevel | Log level. |
| | message | Error content. |
| | thread | Thread name. |

### Example

```json
{
  "_index": "cn_tddl_log-2022.08.09",
  "_type": "_doc",
  "_id": "3oWGgYIBMBS_DyGstwZ2",
  "_version": 1,
  "_score": null,
  "_source": {
    "loglevel": "WARN",
    "thread": "ManagerExecutor-14-thread-160",
    "logger": " com.alibaba.polardbx.manager.ManagerConnection",
    "host": {
      "name": "filebeat-wg47m"
    },
    "message": "[user=polardbx_root,host=127.0.0.1,port=37150,schema=null] Index: 17, Size: 17\njava.lang.IndexOutOfBoundsException: Index: 17, Size: 17\n\tat java.util.ArrayList.rangeCheck(ArrayList.java:659)\n\tat java.util.ArrayList.get(ArrayList.java:435)\n\tat com.alibaba.polardbx.net.packet.RowDataPacket.getPacketLength(RowDataPacket.java:111)\n\tat com.alibaba.polardbx.net.packet.RowDataPacket.write(RowDataPacket.java:85)\n\tat com.alibaba.polardbx.manager.response.ShowHtc.execute(ShowHtc.java:130)\n\tat com.alibaba.polardbx.manager.handler.ShowHandler.handle(ShowHandler.java:93)\n\tat com.alibaba.polardbx.manager.ManagerQueryHandler.query(ManagerQueryHandler.java:68)\n\tat com.alibaba.polardbx.net.handler.QueryHandler.queryRaw(QueryHandler.java:29)\n\tat com.alibaba.polardbx.net.FrontendConnection.query(FrontendConnection.java:474)\n\tat com.alibaba.polardbx.net.handler.FrontendCommandHandler.handle(FrontendCommandHandler.java:65)\n\tat com.alibaba.polardbx.manager.ManagerConnection.lambda$handleData$0(ManagerConnection.java:62)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat java.lang.Thread.run(Thread.java:855)\n\tat com.alibaba.wisp.engine.WispTask.runOutsideWisp(WispTask.java:299)\n\tat com.alibaba.wisp.engine.WispTask.runCommand(WispTask.java:274)\n\tat com.alibaba.wisp.engine.WispTask.access$100(WispTask.java:53)\n\tat com.alibaba.wisp.engine.WispTask$CacheableCoroutine.run(WispTask.java:241)\n\tat java.dyn.CoroutineBase.startInternal(CoroutineBase.java:62)",
    "fields": {
      "instance_id": "busu-pxchostnet",
      "node_name": "cn-beijing.192.168.0.207",
      "log_type": "cn_tddl_log",
      "pod_name": "busu-pxchostnet-ldcw-cn-default-c754df994-xqhhj"
    },
    "@version": "1",
    "tags": [
      "beats_input_codec_plain_applied"
    ],
    "@timestamp": "2022-08-09T07:34:16.157Z"
  },
  "fields": {
    "@timestamp": [
      "2022-08-09T07:34:16.157Z"
    ]
  },
  "sort": [
    1660030456157
  ]
}
```

@ -0,0 +1,6 @@

Log Collection
==============

[Log collection](./1-logcollector.md)

[Log fields](./2-logfield.md)