COarse-grained LOck-stepping Virtual Machines for Non-stop Service
----------------------------------------
Copyright (c) 2016 Intel Corporation
Copyright (c) 2016 HUAWEI TECHNOLOGIES CO., LTD.
Copyright (c) 2016 Fujitsu, Corp.

This work is licensed under the terms of the GNU GPL, version 2 or later.
See the COPYING file in the top-level directory.

This document gives an overview of COLO's design and how to use it.

== Background ==
Virtual machine (VM) replication is a well-known technique for providing
application-agnostic, software-implemented hardware fault tolerance,
also known as "non-stop service".

COLO (COarse-grained LOck-stepping) is a high availability solution.
Both the primary VM (PVM) and the secondary VM (SVM) run in parallel. They
receive the same requests from the client and generate responses in parallel
too. If the response packets from the PVM and the SVM are identical, they are
released immediately. Otherwise, an on-demand VM checkpoint is performed.

== Architecture ==

The architecture of COLO is shown in the diagram below.
It consists of a pair of networked physical nodes:
the primary node running the PVM, and the secondary node running the SVM
to maintain a valid replica of the PVM.
The PVM and the SVM execute in parallel and generate output response packets
for client requests according to the application semantics.

The incoming packets from the client or external network are received by the
primary node, and then forwarded to the secondary node, so that both the PVM
and the SVM are stimulated with the same requests.

COLO receives the outbound packets from both the PVM and SVM and compares them
before allowing the output to be sent to clients.

The SVM is qualified as a valid replica of the PVM as long as it generates
identical responses to all client requests. Once differences in the outputs
are detected between the PVM and SVM, COLO withholds transmission of the
outbound packets until it has successfully synchronized the PVM state to the SVM.

                         Primary Node                          Secondary Node
 +------------+  +-----------------------+       +------------------------+  +------------+
 |            |  |       HeartBeat       |<----->|       HeartBeat        |  |            |
 | Primary VM |  +-----------|-----------+       +-----------|------------+  |Secondary VM|
 |            |              |                               |               |            |
 |            |  +-----------|-----------+       +-----------|------------+  |            |
 |            |  |QEMU   +---v----+      |       |QEMU  +----v---+        |  |            |
 |            |  |       |Failover|      |       |      |Failover|        |  |            |
 |            |  |       +--------+      |       |      +--------+        |  |            |
 |            |  |   +---------------+   |       |   +---------------+    |  |            |
 |            |  |   | VM Checkpoint |-------------->| VM Checkpoint |    |  |            |
 |            |  |   +---------------+   |       |   +---------------+    |  |            |
 |            |  |                       |       |                        |  |            |
 |Requests<---------------------------^------------------------------------------>Requests|
 |Responses----------------------\ /--|--------------\  /------------------------Responses|
 |            |  |               | |  |  |       |   |  |                 |  |            |
 |            |  | +-----------+ | |  |  |       |   |  | +------------+ |  |            |
 |            |  | | COLO disk | | |  |  |       |   |  | | COLO disk  | |  |            |
 |            |  | |  Manager  |-|-|--|--------------|--|->| Manager    | |  |            |
 |            |  | +|----------+ | |  |  |       |   |  | +-----------|+ |  |            |
 |            |  |  |            | |  |  |       |   |  |              |  |  |            |
 +------------+  +--|------------|-|--|--+       +---|--|--------------|--+  +------------+
                    |            | |  |              |  |              |
+-------------+     |  +----------v-v--|--+       +---|--v-----------+  |    +-------------+
| VM Monitor  |     |  |    COLO Proxy    |       |    COLO Proxy    |  |    | VM Monitor  |
|             |     |  |(compare packet)  |       | (adjust sequence)|  |    |             |
+-------------+     |  +----------|----^--+       +------------------+  |    +-------------+
                    |            |    |                                |
 +------------------|------------|----|--+       +---------------------|------------------+
 | Kernel           |            |    |  |       | Kernel              |                  |
 +------------------|------------|----|--+       +---------------------|------------------+
                    |            |    |                                |
     +--------------v+  +--------v----|--+       +------------------+ +v-------------+
     |    Storage    |  |External Network|       | External Network | |   Storage    |
     +---------------+  +----------------+       +------------------+ +--------------+

== Components introduction ==

There are several components in the architecture diagram above.
Their functions are described below.

HeartBeat:
Runs on both the primary and secondary nodes, to periodically check platform
availability. When the primary node suffers a hardware fail-stop failure,
the heartbeat stops responding, and the secondary node triggers a failover
as soon as it detects the absence.

COLO disk Manager:
When the primary VM writes data to its image, the COLO disk manager captures
this data and sends it to the secondary VM, which makes sure the content of
the secondary VM's image is consistent with the content of the primary VM's
image. For more details, please refer to docs/block-replication.txt.

Checkpoint/Failover Controller:
Modifies the save/restore flow to realize continuous migration, making sure
the state of the VM on the secondary side is always consistent with the VM
on the primary side.

COLO Proxy:
Delivers packets to the primary and the secondary VMs, then compares the
responses from both sides and decides whether to start a checkpoint
according to its comparison rules. Please refer to docs/colo-proxy.txt for
more information.

Note:
HeartBeat has not been implemented yet, so you need to trigger the failover
process manually by using the 'x-colo-lost-heartbeat' command.

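For example, a minimal manual trigger on the surviving node's QMP monitor
(the same command appears in the failover test at the end of this document):

{ 'execute': 'x-colo-lost-heartbeat' }
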
== Test procedure ==
1. Start qemu
Primary:
# qemu-kvm -enable-kvm -m 2048 -smp 2 -qmp stdio -vnc :7 -name primary \
  -device piix3-usb-uhci \
  -device usb-tablet -netdev tap,id=hn0,vhost=off \
  -device virtio-net-pci,id=net-pci0,netdev=hn0 \
  -drive if=virtio,id=primary-disk0,driver=quorum,read-pattern=fifo,vote-threshold=1,\
         children.0.file.filename=1.raw,\
         children.0.driver=raw -S
Secondary:
# qemu-kvm -enable-kvm -m 2048 -smp 2 -qmp stdio -vnc :7 -name secondary \
  -device piix3-usb-uhci \
  -device usb-tablet -netdev tap,id=hn0,vhost=off \
  -device virtio-net-pci,id=net-pci0,netdev=hn0 \
  -drive if=none,id=secondary-disk0,file.filename=1.raw,driver=raw,node-name=node0 \
  -drive if=virtio,id=active-disk0,driver=replication,mode=secondary,\
         file.driver=qcow2,top-id=active-disk0,\
         file.file.filename=/mnt/ramfs/active_disk.img,\
         file.backing.driver=qcow2,\
         file.backing.file.filename=/mnt/ramfs/hidden_disk.img,\
         file.backing.backing=secondary-disk0 \
  -incoming tcp:0:8888

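Before starting the secondary QEMU, the active and hidden disk images
referenced above must exist. A minimal preparation sketch, assuming a ramdisk
mounted at /mnt/ramfs and a 10G guest image (both the size and the mount
point are illustrative assumptions; the images only need to match the length
of the NBD target, see the notes in step 2):

# mount -t tmpfs -o size=2G tmpfs /mnt/ramfs
# qemu-img create -f qcow2 /mnt/ramfs/active_disk.img 10G
# qemu-img create -f qcow2 /mnt/ramfs/hidden_disk.img 10G
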
2. On the Secondary VM's QEMU monitor, issue the commands:
{'execute':'qmp_capabilities'}
{ 'execute': 'nbd-server-start',
  'arguments': {'addr': {'type': 'inet', 'data': {'host': 'xx.xx.xx.xx', 'port': '8889'} } }
}
{'execute': 'nbd-server-add', 'arguments': {'device': 'secondary-disk0', 'writable': true } }

Note:
a. The QMP commands nbd-server-start and nbd-server-add must be run
   before running the QMP command migrate on the primary QEMU.
b. The active disk, hidden disk and NBD target should all have the
   same length.
c. It is better to put the active disk and hidden disk in a ramdisk.

3. On the Primary VM's QEMU monitor, issue the commands:
{'execute':'qmp_capabilities'}
{ 'execute': 'human-monitor-command',
  'arguments': {'command-line': 'drive_add -n buddy driver=replication,mode=primary,file.driver=nbd,file.host=xx.xx.xx.xx,file.port=8889,file.export=secondary-disk0,node-name=nbd_client0'}}
{ 'execute': 'x-blockdev-change', 'arguments': {'parent': 'primary-disk0', 'node': 'nbd_client0' } }
{ 'execute': 'migrate-set-capabilities',
  'arguments': {'capabilities': [ {'capability': 'x-colo', 'state': true } ] } }
{ 'execute': 'migrate', 'arguments': {'uri': 'tcp:xx.xx.xx.xx:8888' } }

Note:
a. There should be only one NBD client for each primary disk.
b. xx.xx.xx.xx is the secondary physical machine's hostname or IP.
c. These QMP commands must be run after the QMP commands in step 2
   have been run on the secondary QEMU.

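To check that COLO has actually entered its running state, you can query the
migration status on the primary (a sketch; the exact fields shown depend on
the QEMU version, but the status is expected to report 'colo'):

{ 'execute': 'query-migrate' }
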
4. After the above steps, you will see that whenever you make changes to the
PVM, the SVM is kept in sync. You can issue the command
{ 'execute': 'migrate-set-parameters', 'arguments': {'x-checkpoint-delay': 2000} }
to change the checkpoint period time (in milliseconds).

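If you want to verify the new value, the current migration parameters can be
read back (a sketch; 'x-checkpoint-delay' is an experimental parameter and
may be renamed in later QEMU versions):

{ 'execute': 'query-migrate-parameters' }
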
5. Failover test
You can kill the Primary VM and run 'x_colo_lost_heartbeat' in the Secondary
VM's monitor at the same time; then the SVM will failover and the client will
not detect this change.

Before issuing the '{ "execute": "x-colo-lost-heartbeat" }' command, we have
to issue block-related commands to stop block replication.
Primary:
  Remove the NBD child from the quorum:
  { 'execute': 'x-blockdev-change', 'arguments': {'parent': 'primary-disk0', 'child': 'children.1'}}
  { 'execute': 'human-monitor-command', 'arguments': {'command-line': 'drive_del nbd_client0'}}
  Note: there is currently no QMP command to remove the blockdev.

Secondary:
  The primary host is down, so we only need to stop the NBD server:
  { 'execute': 'nbd-server-stop' }

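Putting the failover steps together on the secondary side, the full sequence
is the block cleanup followed by the failover command itself:

{ 'execute': 'nbd-server-stop' }
{ 'execute': 'x-colo-lost-heartbeat' }
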
== TODO ==
1. Support continuous VM replication.
2. Support shared storage.
3. Develop the heartbeat part.
4. Reduce the VM's downtime during a checkpoint.