COarse-grained LOck-stepping Virtual Machines for Non-stop Service
----------------------------------------
Copyright (c) 2016 Intel Corporation
Copyright (c) 2016 HUAWEI TECHNOLOGIES CO., LTD.
Copyright (c) 2016 Fujitsu, Corp.

This work is licensed under the terms of the GNU GPL, version 2 or later.
See the COPYING file in the top-level directory.

This document gives an overview of COLO's design and how to use it.

== Background ==
Virtual machine (VM) replication is a well-known technique for providing
application-agnostic, software-implemented hardware fault tolerance,
also known as "non-stop service".

COLO (COarse-grained LOck-stepping) is a high availability solution.
Both the primary VM (PVM) and the secondary VM (SVM) run in parallel. They
receive the same requests from the client and generate responses in parallel
too. If the response packets from the PVM and the SVM are identical, they
are released immediately. Otherwise, an on-demand VM checkpoint is conducted.

== Architecture ==

The architecture of COLO is shown in the diagram below.
It consists of a pair of networked physical nodes:
the primary node running the PVM, and the secondary node running the SVM
to maintain a valid replica of the PVM.
The PVM and the SVM execute in parallel and generate output of response
packets for client requests according to the application semantics.

The incoming packets from the client or external network are received by the
primary node, and then forwarded to the secondary node, so that both the PVM
and the SVM are stimulated with the same requests.

COLO receives the outbound packets from both the PVM and SVM and compares them
before allowing the output to be sent to clients.

The SVM is qualified as a valid replica of the PVM as long as it generates
identical responses to all client requests. Once a difference in the outputs
is detected between the PVM and SVM, COLO withholds transmission of the
outbound packets until it has successfully synchronized the PVM state to the SVM.

 Primary Node                                                            Secondary Node
+------------+  +-----------------------+       +------------------------+  +------------+
|            |  |       HeartBeat       +<----->+       HeartBeat        |  |            |
| Primary VM |  +-----------+-----------+       +-----------+------------+  |Secondary VM|
|            |              |                               |               |            |
|            |  +-----------|-----------+       +-----------|------------+  |            |
|            |  |QEMU   +---v----+      |       |QEMU  +----v---+        |  |            |
|            |  |       |Failover|      |       |      |Failover|        |  |            |
|            |  |       +--------+      |       |      +--------+        |  |            |
|            |  |   +---------------+   |       |  +---------------+     |  |            |
|            |  |   | VM Checkpoint +------------->+ VM Checkpoint |     |  |            |
|            |  |   +---------------+   |       |  +---------------+     |  |            |
|Requests<--------------------------\ /-----------------\ /--------------------->Requests|
|            |  |                   ^ ^ |       |        | |             |  |            |
|Responses+---------------------\ /-|-|------------\ /-------------------------+Responses|
|            |  |               | | | | |       |  | |   | |             |  |            |
|            |  | +-----------+ | | | | |       |  | |   | | +----------+|  |            |
|            |  | | COLO disk | | | | | |       |  | |   | | | COLO disk||  |            |
|            |  | |   Manager +----------------------------->| Manager  ||  |            |
|            |  | ++----------+ v v | | |       |  | v   v | +---------++|  |            |
|            |  |  |+-----------+-+-+-++|       | ++-+---+-+--------+  | |  |            |
|            |  |  ||   COLO Proxy     ||       | |   COLO Proxy    |  | |  |            |
|            |  |  || (compare packet  ||       | |(adjust sequence |  | |  |            |
|            |  |  ||and mirror packet)||       | |    and ACK)     |  | |  |            |
|            |  |  |+------------+---+-+|       | +-----------------+  | |  |            |
+------------+  +-----------------------+       +------------------------+  +------------+
+------------+     |             |   |                                 |    +------------+
| VM Monitor |     |             |   |                                 |    | VM Monitor |
+------------+     |             |   |                                 |    +------------+
+---------------------------------------+       +----------------------------------------+
|   Kernel          |             |   | |       |   Kernel              |                 |
+---------------------------------------+       +----------------------------------------+
                   |             |   |                                 |
    +--------------v+  +---------v---+--+       +------------------+  +v-------------+
    |   Storage     |  |External Network|       | External Network |  |   Storage    |
    +---------------+  +----------------+       +------------------+  +--------------+

== Components introduction ==

You can see several components in COLO's architecture diagram above.
Their functions are described below.

HeartBeat:
Runs on both the primary and secondary nodes to periodically check platform
availability. When the primary node suffers a hardware fail-stop failure,
the heartbeat stops responding and the secondary node triggers a failover
as soon as it detects the absence.

COLO disk Manager:
When the primary VM writes data into its image, the COLO disk manager captures
this data and sends it to the secondary VM, which makes sure the content of
the secondary VM's image is consistent with the content of the primary VM's
image. For more details, please refer to docs/block-replication.txt.

Checkpoint/Failover Controller:
Modifies the save/restore flow to realize continuous migration, making sure
the state of the VM on the secondary side is always consistent with the VM
on the primary side.

COLO Proxy:
Delivers packets to the Primary and Secondary, compares the responses from
both sides, and then decides whether to start a checkpoint according to some
rules. Please refer to docs/colo-proxy.txt for more information.
Note:
HeartBeat has not been implemented yet, so you need to trigger the failover
process manually by using the 'x-colo-lost-heartbeat' command.

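For example, the failover trigger issued on the QMP monitor of the surviving
instance looks like this (shown in context in the failover sections below):

{'execute': 'x-colo-lost-heartbeat'}
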
== COLO operation status ==

+-----------------+
|                 |
|   Start COLO    |
|                 |
+--------+--------+
         |
         |  Main qmp command:
         |  migrate-set-capabilities with x-colo
         |  migrate
         |
         v
+--------+--------+
|                 |
|  COLO running   |
|                 |
+--------+--------+
         |
         |  Main qmp command:
         |  x-colo-lost-heartbeat
         |  or
         |  some error happened
         v
+--------+--------+
|                 |    send qmp event:
|  COLO failover  |    COLO_EXIT
|                 |
+-----------------+

COLO uses QMP commands to switch and report operation status.
The diagram above shows only the main QMP commands; you can find the details
in the test procedure below.

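For reference, when failover finishes, QEMU emits the COLO_EXIT event on the
QMP monitor. It looks roughly like this (the timestamp and field values are
illustrative; the exact fields are defined in QEMU's QAPI schema):

{'timestamp': {'seconds': 1543821843, 'microseconds': 166085},
 'event': 'COLO_EXIT', 'data': {'mode': 'secondary', 'reason': 'request'}}
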
== Test procedure ==
Note: Here we are running both instances on the same host for testing;
change the IP addresses if you want to run it on two hosts. Initially,
127.0.0.1 is the Primary Host and 127.0.0.2 is the Secondary Host.

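The tap netdevs used below are attached to a host bridge by
qemu-bridge-helper. As a setup sketch (the config path and bridge name are
assumptions that vary by distribution), the helper attaches the tap device
to the bridge br0 by default and requires an ACL entry allowing it:

# cat /etc/qemu/bridge.conf
allow br0
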
== Startup qemu ==
1. Primary:
Note: Initially, $imagefolder/primary.qcow2 needs to be copied to all hosts.
You don't need to change any IPs here, because 0.0.0.0 listens on any
interface. The chardevs with 127.0.0.1 IPs loop back to the local QEMU
instance.

# imagefolder="/mnt/vms/colo-test-primary"

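If you don't have a guest image yet, an empty one can be created first and an
OS installed into it; a minimal sketch (the 10G size is only an example, but
it must match the secondary images created in step 2):

# qemu-img create -f qcow2 $imagefolder/primary.qcow2 10G
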
# qemu-system-x86_64 -enable-kvm -cpu qemu64,+kvmclock -m 512 -smp 1 -qmp stdio \
   -device piix3-usb-uhci -device usb-tablet -name primary \
   -netdev tap,id=hn0,vhost=off,helper=/usr/lib/qemu/qemu-bridge-helper \
   -device rtl8139,id=e0,netdev=hn0 \
   -chardev socket,id=mirror0,host=0.0.0.0,port=9003,server,nowait \
   -chardev socket,id=compare1,host=0.0.0.0,port=9004,server,wait \
   -chardev socket,id=compare0,host=127.0.0.1,port=9001,server,nowait \
   -chardev socket,id=compare0-0,host=127.0.0.1,port=9001 \
   -chardev socket,id=compare_out,host=127.0.0.1,port=9005,server,nowait \
   -chardev socket,id=compare_out0,host=127.0.0.1,port=9005 \
   -object filter-mirror,id=m0,netdev=hn0,queue=tx,outdev=mirror0 \
   -object filter-redirector,netdev=hn0,id=redire0,queue=rx,indev=compare_out \
   -object filter-redirector,netdev=hn0,id=redire1,queue=rx,outdev=compare0 \
   -object iothread,id=iothread1 \
   -object colo-compare,id=comp0,primary_in=compare0-0,secondary_in=compare1,\
outdev=compare_out0,iothread=iothread1 \
   -drive if=ide,id=colo-disk0,driver=quorum,read-pattern=fifo,vote-threshold=1,\
children.0.file.filename=$imagefolder/primary.qcow2,children.0.driver=qcow2 -S

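Since both instances are started with -qmp stdio, each prints a greeting like
the following on startup (the version data will differ) and then expects
qmp_capabilities before any other command, as done in steps 3 and 4 below:

{'QMP': {'version': {'qemu': {'micro': 0, 'minor': 12, 'major': 2}, 'package': ''}, 'capabilities': []}}
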
2. Secondary:
Note: Active and hidden images need to be created only once and their
size should be the same as primary.qcow2. Again, you don't need to change
any IPs here, except for the $primary_ip variable.

# imagefolder="/mnt/vms/colo-test-secondary"
# primary_ip=127.0.0.1

# qemu-img create -f qcow2 $imagefolder/secondary-active.qcow2 10G

# qemu-img create -f qcow2 $imagefolder/secondary-hidden.qcow2 10G

# qemu-system-x86_64 -enable-kvm -cpu qemu64,+kvmclock -m 512 -smp 1 -qmp stdio \
   -device piix3-usb-uhci -device usb-tablet -name secondary \
   -netdev tap,id=hn0,vhost=off,helper=/usr/lib/qemu/qemu-bridge-helper \
   -device rtl8139,id=e0,netdev=hn0 \
   -chardev socket,id=red0,host=$primary_ip,port=9003,reconnect=1 \
   -chardev socket,id=red1,host=$primary_ip,port=9004,reconnect=1 \
   -object filter-redirector,id=f1,netdev=hn0,queue=tx,indev=red0 \
   -object filter-redirector,id=f2,netdev=hn0,queue=rx,outdev=red1 \
   -object filter-rewriter,id=rew0,netdev=hn0,queue=all \
   -drive if=none,id=parent0,file.filename=$imagefolder/primary.qcow2,driver=qcow2 \
   -drive if=none,id=childs0,driver=replication,mode=secondary,file.driver=qcow2,\
top-id=colo-disk0,file.file.filename=$imagefolder/secondary-active.qcow2,\
file.backing.driver=qcow2,file.backing.file.filename=$imagefolder/secondary-hidden.qcow2,\
file.backing.backing=parent0 \
   -drive if=ide,id=colo-disk0,driver=quorum,read-pattern=fifo,vote-threshold=1,\
children.0=childs0 \
   -incoming tcp:0.0.0.0:9998

3. On Secondary VM's QEMU monitor, issue the commands:
{'execute': 'qmp_capabilities'}
{'execute': 'nbd-server-start', 'arguments': {'addr': {'type': 'inet', 'data': {'host': '0.0.0.0', 'port': '9999'} } } }
{'execute': 'nbd-server-add', 'arguments': {'device': 'parent0', 'writable': true } }

Note:
a. The qmp commands nbd-server-start and nbd-server-add must be run
before running the qmp command migrate on the primary QEMU.
b. The active disk, hidden disk and NBD target should all have the
same length.
c. It is better to put the active disk and hidden disk in a ramdisk; they
will be merged into the parent disk on failover.

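To verify note b, qemu-img can report the sizes, whose "virtual size" lines
should all agree (an illustrative check, run on the respective hosts):

# qemu-img info $imagefolder/primary.qcow2
# qemu-img info $imagefolder/secondary-active.qcow2
# qemu-img info $imagefolder/secondary-hidden.qcow2
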
4. On Primary VM's QEMU monitor, issue the commands:
{'execute': 'qmp_capabilities'}
{'execute': 'human-monitor-command', 'arguments': {'command-line': 'drive_add -n buddy driver=replication,mode=primary,file.driver=nbd,file.host=127.0.0.2,file.port=9999,file.export=parent0,node-name=replication0'}}
{'execute': 'x-blockdev-change', 'arguments':{'parent': 'colo-disk0', 'node': 'replication0' } }
{'execute': 'migrate-set-capabilities', 'arguments': {'capabilities': [ {'capability': 'x-colo', 'state': true } ] } }
{'execute': 'migrate', 'arguments': {'uri': 'tcp:127.0.0.2:9998' } }

Note:
a. There should be only one NBD Client for each primary disk.
b. These qmp commands must be run after running the qmp commands on the
secondary QEMU.

5. After the above steps, you will see that whenever you make changes to the
PVM, the SVM will be synced. You can issue the command
{'execute': 'migrate-set-parameters', 'arguments': {'x-checkpoint-delay': 2000 } }
to change the idle checkpoint period time.

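Depending on your QEMU version, the current COLO state can also be inspected
on either side with query-colo-status (an assumption: this command is only
available in newer QEMU releases):

{'execute': 'query-colo-status'}
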
6. Failover test
You can kill one of the VMs and fail over on the surviving VM:

If you killed the Secondary, then follow "Primary Failover". After that,
if you want to resume the replication, follow "Primary resume replication".

If you killed the Primary, then follow "Secondary Failover". After that,
if you want to resume the replication, follow "Secondary resume replication".

== Primary Failover ==
The Secondary died; resume on the Primary.

{'execute': 'x-blockdev-change', 'arguments':{ 'parent': 'colo-disk0', 'child': 'children.1'} }
{'execute': 'human-monitor-command', 'arguments':{ 'command-line': 'drive_del replication0' } }
{'execute': 'object-del', 'arguments':{ 'id': 'comp0' } }
{'execute': 'object-del', 'arguments':{ 'id': 'iothread1' } }
{'execute': 'object-del', 'arguments':{ 'id': 'm0' } }
{'execute': 'object-del', 'arguments':{ 'id': 'redire0' } }
{'execute': 'object-del', 'arguments':{ 'id': 'redire1' } }
{'execute': 'x-colo-lost-heartbeat' }

== Secondary Failover ==
The Primary died; resume on the Secondary and prepare to become the new Primary.

{'execute': 'nbd-server-stop'}
{'execute': 'x-colo-lost-heartbeat'}

{'execute': 'object-del', 'arguments':{ 'id': 'f2' } }
{'execute': 'object-del', 'arguments':{ 'id': 'f1' } }
{'execute': 'chardev-remove', 'arguments':{ 'id': 'red1' } }
{'execute': 'chardev-remove', 'arguments':{ 'id': 'red0' } }

{'execute': 'chardev-add', 'arguments':{ 'id': 'mirror0', 'backend': {'type': 'socket', 'data': {'addr': { 'type': 'inet', 'data': { 'host': '0.0.0.0', 'port': '9003' } }, 'server': true } } } }
{'execute': 'chardev-add', 'arguments':{ 'id': 'compare1', 'backend': {'type': 'socket', 'data': {'addr': { 'type': 'inet', 'data': { 'host': '0.0.0.0', 'port': '9004' } }, 'server': true } } } }
{'execute': 'chardev-add', 'arguments':{ 'id': 'compare0', 'backend': {'type': 'socket', 'data': {'addr': { 'type': 'inet', 'data': { 'host': '127.0.0.1', 'port': '9001' } }, 'server': true } } } }
{'execute': 'chardev-add', 'arguments':{ 'id': 'compare0-0', 'backend': {'type': 'socket', 'data': {'addr': { 'type': 'inet', 'data': { 'host': '127.0.0.1', 'port': '9001' } }, 'server': false } } } }
{'execute': 'chardev-add', 'arguments':{ 'id': 'compare_out', 'backend': {'type': 'socket', 'data': {'addr': { 'type': 'inet', 'data': { 'host': '127.0.0.1', 'port': '9005' } }, 'server': true } } } }
{'execute': 'chardev-add', 'arguments':{ 'id': 'compare_out0', 'backend': {'type': 'socket', 'data': {'addr': { 'type': 'inet', 'data': { 'host': '127.0.0.1', 'port': '9005' } }, 'server': false } } } }

== Primary resume replication ==
Resume replication after the new Secondary is up.

Start the new Secondary (Steps 2 and 3 above), then on the Primary:
{'execute': 'drive-mirror', 'arguments':{ 'device': 'colo-disk0', 'job-id': 'resync', 'target': 'nbd://127.0.0.2:9999/parent0', 'mode': 'existing', 'format': 'raw', 'sync': 'full'} }

Wait until the disk is synced, then:
{'execute': 'stop'}
{'execute': 'block-job-cancel', 'arguments':{ 'device': 'resync'} }

{'execute': 'human-monitor-command', 'arguments':{ 'command-line': 'drive_add -n buddy driver=replication,mode=primary,file.driver=nbd,file.host=127.0.0.2,file.port=9999,file.export=parent0,node-name=replication0'}}
{'execute': 'x-blockdev-change', 'arguments':{ 'parent': 'colo-disk0', 'node': 'replication0' } }

{'execute': 'object-add', 'arguments':{ 'qom-type': 'filter-mirror', 'id': 'm0', 'props': { 'netdev': 'hn0', 'queue': 'tx', 'outdev': 'mirror0' } } }
{'execute': 'object-add', 'arguments':{ 'qom-type': 'filter-redirector', 'id': 'redire0', 'props': { 'netdev': 'hn0', 'queue': 'rx', 'indev': 'compare_out' } } }
{'execute': 'object-add', 'arguments':{ 'qom-type': 'filter-redirector', 'id': 'redire1', 'props': { 'netdev': 'hn0', 'queue': 'rx', 'outdev': 'compare0' } } }
{'execute': 'object-add', 'arguments':{ 'qom-type': 'iothread', 'id': 'iothread1' } }
{'execute': 'object-add', 'arguments':{ 'qom-type': 'colo-compare', 'id': 'comp0', 'props': { 'primary_in': 'compare0-0', 'secondary_in': 'compare1', 'outdev': 'compare_out0', 'iothread': 'iothread1' } } }

{'execute': 'migrate-set-capabilities', 'arguments':{ 'capabilities': [ {'capability': 'x-colo', 'state': true } ] } }
{'execute': 'migrate', 'arguments':{ 'uri': 'tcp:127.0.0.2:9998' } }

Note:
If this Primary was previously a Secondary, then we need to insert the
filters before the filter-rewriter by using the
"'insert': 'before', 'position': 'id=rew0'" options. See below.

== Secondary resume replication ==
Become Primary and resume replication after the new Secondary is up. Note
that now 127.0.0.1 is the Secondary and 127.0.0.2 is the Primary.

Start the new Secondary (Steps 2 and 3 above, but with primary_ip=127.0.0.2),
then on the old Secondary:
{'execute': 'drive-mirror', 'arguments':{ 'device': 'colo-disk0', 'job-id': 'resync', 'target': 'nbd://127.0.0.1:9999/parent0', 'mode': 'existing', 'format': 'raw', 'sync': 'full'} }

Wait until the disk is synced, then:
{'execute': 'stop'}
{'execute': 'block-job-cancel', 'arguments':{ 'device': 'resync' } }

{'execute': 'human-monitor-command', 'arguments':{ 'command-line': 'drive_add -n buddy driver=replication,mode=primary,file.driver=nbd,file.host=127.0.0.1,file.port=9999,file.export=parent0,node-name=replication0'}}
{'execute': 'x-blockdev-change', 'arguments':{ 'parent': 'colo-disk0', 'node': 'replication0' } }

{'execute': 'object-add', 'arguments':{ 'qom-type': 'filter-mirror', 'id': 'm0', 'props': { 'insert': 'before', 'position': 'id=rew0', 'netdev': 'hn0', 'queue': 'tx', 'outdev': 'mirror0' } } }
{'execute': 'object-add', 'arguments':{ 'qom-type': 'filter-redirector', 'id': 'redire0', 'props': { 'insert': 'before', 'position': 'id=rew0', 'netdev': 'hn0', 'queue': 'rx', 'indev': 'compare_out' } } }
{'execute': 'object-add', 'arguments':{ 'qom-type': 'filter-redirector', 'id': 'redire1', 'props': { 'insert': 'before', 'position': 'id=rew0', 'netdev': 'hn0', 'queue': 'rx', 'outdev': 'compare0' } } }
{'execute': 'object-add', 'arguments':{ 'qom-type': 'iothread', 'id': 'iothread1' } }
{'execute': 'object-add', 'arguments':{ 'qom-type': 'colo-compare', 'id': 'comp0', 'props': { 'primary_in': 'compare0-0', 'secondary_in': 'compare1', 'outdev': 'compare_out0', 'iothread': 'iothread1' } } }

{'execute': 'migrate-set-capabilities', 'arguments':{ 'capabilities': [ {'capability': 'x-colo', 'state': true } ] } }
{'execute': 'migrate', 'arguments':{ 'uri': 'tcp:127.0.0.1:9998' } }

== TODO ==
1. Support shared storage.
2. Develop the heartbeat part.
3. Reduce the VM's downtime while doing a checkpoint.