= Migration =

QEMU has code to load/save the state of the guest that it is running.
These are two complementary operations.  Saving the state just does
that: it saves the state of each device that the guest is running.
Restoring a guest is just the opposite operation: we need to load the
state of each device.

For this to work, QEMU has to be launched with the same arguments both
times.  I.e. it can only restore the state into a guest that has the
same devices as the one whose state was saved (this last requirement
can be relaxed a bit, but for now we can consider that the
configuration has to be exactly the same).

Once we are able to save/restore a guest, a new functionality is
requested: migration.  This means that QEMU is able to start on one
machine and be "migrated" to another machine, i.e. be moved to
another machine.

Next came the "live migration" functionality.  This is important
because some guests run with a lot of state (especially RAM), and it
can take a while to move all that state from one machine to another.
Live migration allows the guest to continue running while the state is
transferred; the guest only has to be stopped while the last part of
the state is transferred.  Typically the time that the guest is
unresponsive during live migration is in the low hundreds of
milliseconds (notice that this depends on a lot of things).

=== Types of migration ===

Now that we have talked about live migration, there are several ways
to do migration (illustrative monitor commands are shown after the list):

- tcp migration: do the migration using tcp sockets
- unix migration: do the migration using unix sockets
- exec migration: do the migration using stdin/stdout through a process
- fd migration: do the migration using a file descriptor that is
  passed to QEMU.  QEMU doesn't care how this file descriptor is opened.

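For illustration only (the host name, port, paths and fd name below are
invented, and the destination has to be started with a matching -incoming
option), the monitor commands on the source look like:

  (qemu) migrate tcp:dest-host:4444
  (qemu) migrate unix:/tmp/migrate.sock
  (qemu) migrate "exec:gzip -c > /tmp/vm-state.gz"
  (qemu) migrate fd:migfd

For the fd case, "migfd" names a descriptor previously handed to QEMU
with the getfd monitor command.
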
All these four migration protocols use the same infrastructure to
save/restore the state of devices.  This infrastructure is shared with
the savevm/loadvm functionality.

=== State Live Migration ===

This is used for RAM and block devices.  It is not yet ported to vmstate.
<Fill more information here>

=== What is the common infrastructure ===

QEMU uses a QEMUFile abstraction to be able to do migration.  Any type
of migration that wants to use the QEMU infrastructure has to create a
QEMUFile with:

QEMUFile *qemu_fopen_ops(void *opaque,
                         QEMUFilePutBufferFunc *put_buffer,
                         QEMUFileGetBufferFunc *get_buffer,
                         QEMUFileCloseFunc *close);

The functions have the following functionality:

This function writes a chunk of data to a file at the given position.
The pos argument can be ignored if the file is only used for
streaming.  The handler should try to write all of the data it can.

typedef int (QEMUFilePutBufferFunc)(void *opaque, const uint8_t *buf,
                                    int64_t pos, int size);

Read a chunk of data from a file at the given position.  The pos
argument can be ignored if the file is only used for streaming.  The
number of bytes actually read should be returned.

typedef int (QEMUFileGetBufferFunc)(void *opaque, uint8_t *buf,
                                    int64_t pos, int size);

Close a file and return an error code.

typedef int (QEMUFileCloseFunc)(void *opaque);

You can keep any internal state that you need in the opaque void *
pointer that is passed to all of these functions.

The important functions for us are put_buffer()/get_buffer(), which
allow us to write/read a buffer to/from the QEMUFile.

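As a hedged sketch (this is not code from the QEMU tree: the struct and
function names below are invented, and only qemu_fopen_ops() and the
callback typedefs above are assumed), a backend that streams over a plain
POSIX file descriptor could look like this:

#include <stdint.h>
#include <unistd.h>

typedef struct FDMigrationState {
    int fd;                              /* descriptor we stream over */
} FDMigrationState;

static int fd_put_buffer(void *opaque, const uint8_t *buf,
                         int64_t pos, int size)
{
    FDMigrationState *s = opaque;

    /* pos is ignored: this backend is a pure stream, not a seekable file */
    return write(s->fd, buf, size);
}

static int fd_get_buffer(void *opaque, uint8_t *buf, int64_t pos, int size)
{
    FDMigrationState *s = opaque;

    return read(s->fd, buf, size);       /* bytes actually read */
}

static int fd_close(void *opaque)
{
    FDMigrationState *s = opaque;

    return close(s->fd);
}

/* Outgoing side: only put_buffer and close are needed. */
static QEMUFile *fd_open_outgoing(FDMigrationState *s)
{
    return qemu_fopen_ops(s, fd_put_buffer, NULL, fd_close);
}

/* Incoming side: only get_buffer and close are needed. */
static QEMUFile *fd_open_incoming(FDMigrationState *s)
{
    return qemu_fopen_ops(s, NULL, fd_get_buffer, fd_close);
}
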
=== How to save the state of one device ===

The state of a device is saved using intermediate buffers.  There are
some helper functions to assist this saving.

There is a new concept that we have to explain here: device state
version.  When we migrate a device, we save/load the state as a series
of fields.  Sometimes, due to bugs or new functionality, we need to
change the state to store more/different information.  We use the
version to identify each time that we make such a change.  Each version
is associated with a series of fields saved.  save_state always saves
the state at the newest version.  But load_state sometimes is able to
load state from an older version.

=== Legacy way ===

This way is going to disappear as soon as all current users are ported to VMSTATE.

Each device has to register two functions, one to save the state and
another to load the state back.

int register_savevm(DeviceState *dev,
                    const char *idstr,
                    int instance_id,
                    int version_id,
                    SaveStateHandler *save_state,
                    LoadStateHandler *load_state,
                    void *opaque);

typedef void SaveStateHandler(QEMUFile *f, void *opaque);
typedef int LoadStateHandler(QEMUFile *f, void *opaque, int version_id);

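A hedged sketch of how a device might use this interface (the FooState
device, its fields and the version numbers are invented for illustration;
qemu_put_be32()/qemu_get_be32() are the usual QEMUFile accessors):

typedef struct FooState {
    uint32_t irq_level;
    uint32_t counter;                /* added in version 2 of the format */
} FooState;

static void foo_save(QEMUFile *f, void *opaque)
{
    FooState *s = opaque;

    /* always writes the latest format (version 2 here) */
    qemu_put_be32(f, s->irq_level);
    qemu_put_be32(f, s->counter);
}

static int foo_load(QEMUFile *f, void *opaque, int version_id)
{
    FooState *s = opaque;

    if (version_id < 1 || version_id > 2) {
        return -EINVAL;              /* a format we don't understand */
    }
    s->irq_level = qemu_get_be32(f);
    if (version_id >= 2) {
        s->counter = qemu_get_be32(f);
    }
    return 0;
}

At device init time it would then call:

register_savevm(NULL, "foo-device", 0 /* instance_id */, 2 /* version_id */,
                foo_save, foo_load, s);
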
The important functions for the device state format are save_state
and load_state.  Notice that load_state receives a version_id
parameter to know what state format it is receiving.  save_state doesn't
have a version_id parameter because it always uses the latest version.

=== VMState ===

The legacy way of saving/loading the state of a device had the problem
that we had to keep two functions in sync.  If we made a change in one
of them and not in the other, we would get a failed migration.

VMState changed the way that state is saved/loaded.  Instead of using
one function to save the state and another to load it, it moved to a
declarative description of what the state consists of.  VMState is then
able to interpret that definition in order to load/save the state.  As
the state is declared only once, it can't go out of sync between the
save and load paths.

An example (from hw/input/pckbd.c)

static const VMStateDescription vmstate_kbd = {
    .name = "pckbd",
    .version_id = 3,
    .minimum_version_id = 3,
    .fields = (VMStateField[]) {
        VMSTATE_UINT8(write_cmd, KBDState),
        VMSTATE_UINT8(status, KBDState),
        VMSTATE_UINT8(mode, KBDState),
        VMSTATE_UINT8(pending, KBDState),
        VMSTATE_END_OF_LIST()
    }
};

We are declaring the state with name "pckbd".
The version_id is 3, and the fields are 4 uint8_t in a KBDState structure.
We register this with:

vmstate_register(NULL, 0, &vmstate_kbd, s);

Note: talk about how vmstate <-> qdev interact, and what the instance ids mean.

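For qdev devices, the description is more commonly hooked up through the
DeviceClass rather than an explicit vmstate_register() call.  A hedged
sketch (the class_init function and vmstate_foo below are invented;
DeviceClass::vmsd is the real hook):

static void foo_class_init(ObjectClass *klass, void *data)
{
    DeviceClass *dc = DEVICE_CLASS(klass);

    /* the core registers this description when the device is realized */
    dc->vmsd = &vmstate_foo;             /* a hypothetical description */
}
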
You can search for VMSTATE_* macros for lots of types used in QEMU in
include/hw/hw.h.

=== More about versions ===

You can see that there are several version fields:

- version_id: the maximum version_id supported by VMState for that device.
- minimum_version_id: the minimum version_id that VMState is able to understand
  for that device.
- minimum_version_id_old: for devices that could not be ported to vmstate, we can
  assign a function that knows how to read the old state.  This field is
  ignored if there is no load_state_old handler.

So, VMState is able to read versions from minimum_version_id to
version_id.  And the function load_state_old() (if present) is able to
load state from minimum_version_id_old to minimum_version_id.  This
function is deprecated and will be removed when no more users are left.

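As a hedged illustration of how versions show up in a description (the
FooState device is the invented example from the legacy section, and
VMSTATE_UINT32_V is the versioned variant of the field macro), a field
added in version 2 could be declared like this:

static const VMStateDescription vmstate_foo = {
    .name = "foo-device",
    .version_id = 2,            /* current format */
    .minimum_version_id = 1,    /* still able to load version 1 streams */
    .fields = (VMStateField[]) {
        VMSTATE_UINT32(irq_level, FooState),
        /* 'counter' is only present in streams of version 2 or newer */
        VMSTATE_UINT32_V(counter, FooState, 2),
        VMSTATE_END_OF_LIST()
    }
};
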
=== Massaging functions ===

Sometimes it is not enough to be able to save the state directly from
one structure; we need to fill in the correct values there first.  One
example is when we are using kvm.  Before saving the cpu state, we
need to ask kvm to copy to QEMU the state that it is using.  And the
opposite when we are loading the state: we need a way to tell kvm to
load the state for the cpu that we have just loaded from the QEMUFile.

The functions to do that are inside a vmstate definition, and are called:

- int (*pre_load)(void *opaque);

  This function is called before we load the state of one device.

- int (*post_load)(void *opaque, int version_id);

  This function is called after we load the state of one device.

- void (*pre_save)(void *opaque);

  This function is called before we save the state of one device.

Example: You can look at hpet.c, which uses these three functions to
massage the state that is transferred.

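A hedged sketch of what such callbacks look like in practice (the FooState
device and its helpers are invented; only the callback signatures above are
assumed):

static void foo_pre_save(void *opaque)
{
    FooState *s = opaque;

    /* e.g. pull the current value out of the accelerator/hardware
     * before the fields are written to the stream */
    s->counter = foo_read_hw_counter(s);    /* hypothetical helper */
}

static int foo_post_load(void *opaque, int version_id)
{
    FooState *s = opaque;

    /* recreate anything that is derived from the fields just loaded */
    foo_update_irq(s);                      /* hypothetical helper */
    return 0;
}

They are then referenced from the VMStateDescription:

    .pre_save = foo_pre_save,
    .post_load = foo_post_load,
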
If you use memory API functions that update memory layout outside
initialization (i.e., in response to a guest action), this is a strong
indication that you need to call these functions in a post_load callback.
Examples of such memory API functions are:

  - memory_region_add_subregion()
  - memory_region_del_subregion()
  - memory_region_set_readonly()
  - memory_region_set_enabled()
  - memory_region_set_address()
  - memory_region_set_alias_offset()

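For example (a hedged sketch; the device, its fields and the region are
invented), a device whose MMIO window is guest-programmable would re-apply
the migrated values after load:

typedef struct BarState {
    MemoryRegion mmio;          /* the guest-visible MMIO window */
    uint64_t mmio_base;         /* migrated, e.g. via VMSTATE_UINT64 */
    bool mmio_enabled;          /* migrated, e.g. via VMSTATE_BOOL */
} BarState;

static int bar_post_load(void *opaque, int version_id)
{
    BarState *s = opaque;

    /* the guest may have moved or disabled the region before migration;
     * re-apply the migrated configuration to the memory layout */
    memory_region_set_address(&s->mmio, s->mmio_base);
    memory_region_set_enabled(&s->mmio, s->mmio_enabled);
    return 0;
}
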
=== Subsections ===

The use of version_id allows us to migrate from older versions to
newer versions of a device.  But not the other way around.  This makes
it very complicated to fix bugs in stable branches.  If we need to add
anything to the state to fix a bug, we have to disable migration to
older versions that don't have that bug-fix (i.e. a new field).

But sometimes that bug-fix is only needed in certain cases, not always.
For instance, only if the device is in the middle of a DMA operation,
or is using a specific functionality, ....

It is impossible to create a way to make migration from any version to
any other version work.  But we can do better than only allowing
migration from older versions to newer ones.  For those fields that
are only needed sometimes, we add the idea of subsections.  A
subsection is "like" a device vmstate, but with a particularity: it
has a Boolean function that tells whether its values need to be sent
or not.  If this function returns false, the subsection is not sent.

On the receiving side, if we find a subsection for a device that we
don't understand, we just fail the migration.  If we understand all
the subsections, then we load the state successfully.

One important note is that the post_load() function is called "after"
loading all subsections, because a newer subsection could change the
same value that it uses.

Example:

static bool ide_drive_pio_state_needed(void *opaque)
{
    IDEState *s = opaque;

    return ((s->status & DRQ_STAT) != 0)
        || (s->bus->error_status & BM_STATUS_PIO_RETRY);
}

const VMStateDescription vmstate_ide_drive_pio_state = {
    .name = "ide_drive/pio_state",
    .version_id = 1,
    .minimum_version_id = 1,
    .pre_save = ide_drive_pio_pre_save,
    .post_load = ide_drive_pio_post_load,
    .needed = ide_drive_pio_state_needed,
    .fields = (VMStateField[]) {
        VMSTATE_INT32(req_nb_sectors, IDEState),
        VMSTATE_VARRAY_INT32(io_buffer, IDEState, io_buffer_total_len, 1,
                             vmstate_info_uint8, uint8_t),
        VMSTATE_INT32(cur_io_buffer_offset, IDEState),
        VMSTATE_INT32(cur_io_buffer_len, IDEState),
        VMSTATE_UINT8(end_transfer_fn_idx, IDEState),
        VMSTATE_INT32(elementary_transfer_size, IDEState),
        VMSTATE_INT32(packet_transfer_size, IDEState),
        VMSTATE_END_OF_LIST()
    }
};

const VMStateDescription vmstate_ide_drive = {
    .name = "ide_drive",
    .version_id = 3,
    .minimum_version_id = 0,
    .post_load = ide_drive_post_load,
    .fields = (VMStateField[]) {
        .... several fields ....
        VMSTATE_END_OF_LIST()
    },
    .subsections = (const VMStateDescription*[]) {
        &vmstate_ide_drive_pio_state,
        NULL
    }
};

Here we have a subsection for the pio state.  We only need to
save/send this state when we are in the middle of a pio operation
(that is what ide_drive_pio_state_needed() checks).  If DRQ_STAT is
not enabled, the values in those fields are garbage and don't need to
be sent.

= Return path =

In most migration scenarios there is only a single data path that runs
from the source VM to the destination, typically along a single fd (although
possibly with another fd or similar for some fast way of throwing pages across).

However, some uses need two way communication; in particular the Postcopy
destination needs to be able to request pages on demand from the source.

For these scenarios there is a 'return path' from the destination to the source;
qemu_file_get_return_path(QEMUFile* fwdpath) gives the QEMUFile* for the return
path.

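A hedged usage sketch (the variable names are invented; only
qemu_file_get_return_path() is assumed from the text above):

/* 'incoming' is the QEMUFile the destination reads the forward
 * migration stream from */
QEMUFile *return_path = qemu_file_get_return_path(incoming);
if (!return_path) {
    /* the transport is unidirectional, so features that need the
     * return path, such as postcopy, cannot be used */
}
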
  Source side
     Forward path - written by migration thread
     Return path  - opened by main thread, read by return-path thread

  Destination side
     Forward path - read by main thread
     Return path  - opened by main thread, written by main thread AND postcopy
                    thread (protected by rp_mutex)

= Postcopy =

'Postcopy' migration is a way to deal with migrations that refuse to converge
(or take too long to converge).  Its plus side is that there is an upper bound
on the amount of migration traffic and the time it takes; the down side is that
during the postcopy phase, a failure of *either* side or of the network
connection causes the guest to be lost.

In postcopy the destination CPUs are started before all the memory has been
transferred, and accesses to pages that are yet to be transferred cause
a fault that's translated by QEMU into a request to the source QEMU.

Postcopy can be combined with precopy (i.e. normal migration) so that if precopy
doesn't finish in a given time the switch is made to postcopy.

=== Enabling postcopy ===

To enable postcopy, issue this command on the monitor prior to the
start of migration:

migrate_set_capability x-postcopy-ram on

The normal commands are then used to start a migration, which is still
started in precopy mode.  Issuing:

migrate_start_postcopy

will now cause the transition from precopy to postcopy.
It can be issued immediately after migration is started or any
time later on.  Issuing it after the end of a migration is harmless.

Note: During the postcopy phase, the bandwidth limits set using
migrate_set_speed are ignored (to avoid delaying requested pages that
the destination is waiting for).

=== Postcopy device transfer ===

Loading of device data may cause the device emulation to access guest RAM,
which may trigger faults that have to be resolved by the source, so the
migration stream has to be able to respond with page data *during* the
device load.  Hence the device data has to be read from the stream
completely before the device load begins, to free the stream up.  This is
achieved by 'packaging' the device data into a blob that's read in one go.

Source behaviour

Until postcopy is entered the migration stream is identical to normal
precopy, except for the addition of a 'postcopy advise' command at
the beginning, to tell the destination that postcopy might happen.
When postcopy starts the source sends the page discard data and then
forms the 'package' containing:

   Command: 'postcopy listen'
   The device state
      A series of sections, identical to the precopy stream's device state
      stream, containing everything except postcopiable devices (i.e. RAM)
   Command: 'postcopy run'

The 'package' is sent as the data part of a Command: 'CMD_PACKAGED', and the
contents are formatted in the same way as the main migration stream.

During postcopy the source scans the list of dirty pages and sends them
to the destination without being requested (in much the same way as precopy);
however, when a page request is received from the destination, the dirty page
scanning restarts from the requested location.  This causes requested pages
to be sent quickly, and also causes pages directly after the requested page
to be sent quickly in the hope that those pages are likely to be used
by the destination soon.

Destination behaviour

Initially the destination looks the same as precopy, with a single thread
reading the migration stream; the 'postcopy advise' and 'discard' commands
are processed to change the way RAM is managed, but don't affect the stream
processing.

------------------------------------------------------------------------------
                        1      2   3     4 5                      6   7
main -----DISCARD-CMD_PACKAGED ( LISTEN  DEVICE    DEVICE  DEVICE RUN )
thread                             |       |
                                   |     (page request)
                                   |        \___
                                   v            \
listen thread:                     --- page -- page -- page -- page -- page --

                                   a   b        c
------------------------------------------------------------------------------

On receipt of CMD_PACKAGED (1)
   All the data associated with the package - the ( ... ) section in the
diagram - is read into memory (into a QEMUSizedBuffer), and the main thread
recurses into qemu_loadvm_state_main to process the contents of the package (2)
which contains commands (3,6) and devices (4...)

On receipt of 'postcopy listen' - 3 - (i.e. the 1st command in the package)
a new thread (a) is started that takes over servicing the migration stream,
while the main thread carries on loading the package.  It loads normal
background page data (b), but if during a device load a fault happens (5) the
returned page (c) is loaded by the listen thread, allowing the main thread's
device load to carry on.

The last thing in the CMD_PACKAGED is a 'RUN' command (6) letting the
destination CPUs start running.
At the end of the CMD_PACKAGED (7) the main thread returns to its normal
running behaviour and is no longer used by migration, while the listen
thread carries on servicing page data until the end of migration.

=== Postcopy states ===

Postcopy moves through a series of states (see postcopy_state) from
ADVISE->DISCARD->LISTEN->RUNNING->END

  Advise: Set at the start of migration if postcopy is enabled, even
     if it hasn't had the start command; here the destination
     checks that its OS has the support needed for postcopy, and performs
     setup to ensure the RAM mappings are suitable for later postcopy.
     The destination will fail early in migration at this point if the
     required OS support is not present.
     (Triggered by reception of the POSTCOPY_ADVISE command)

  Discard: Entered on receipt of the first 'discard' command; prior to
     the first Discard being performed, hugepages are switched off
     (using madvise) to ensure that no new huge pages are created
     during the postcopy phase, and to cause any huge pages that
     have discards on them to be broken.

  Listen: The first command in the package, POSTCOPY_LISTEN, switches
     the destination state to Listen, and starts a new thread
     (the 'listen thread') which takes over the job of receiving
     pages off the migration stream, while the main thread carries
     on processing the blob.  With this thread able to process page
     reception, the destination now 'sensitises' the RAM to detect
     any access to missing pages (on Linux using the 'userfault'
     system).

  Running: POSTCOPY_RUN causes the destination to synchronise all
     state and start the CPUs and IO devices running.  The main
     thread now finishes processing the migration package and
     then carries on as it would for normal precopy migration
     (although it can't do the cleanup it would do as it
     finishes a normal migration).

  End: The listen thread can now quit, and perform the cleanup of
     migration state; the migration is now complete.

=== Source side page maps ===

The source side keeps two bitmaps during postcopy: the 'migration bitmap'
and the 'unsent map'.  The 'migration bitmap' is basically the same as in
the precopy case, and holds a bit to indicate that a page is 'dirty' -
i.e. needs sending.  During the precopy phase this is updated as the CPU
dirties pages; however during postcopy the CPUs are stopped and nothing
should dirty anything any more.

The 'unsent map' is used for the transition to postcopy.  It is a bitmap
that has a bit cleared whenever a page is sent to the destination; however,
during the transition to postcopy mode it is combined with the migration
bitmap to form a set of pages that:

   a) Have been sent but then redirtied (which must be discarded)
   b) Have not yet been sent - which also must be discarded to cause any
      transparent huge pages built during precopy to be broken.

Note that the contents of the unsentmap are sacrificed during the calculation
of the discard set and thus aren't valid once in postcopy.  The dirtymap
is still valid and is used to ensure that no page is sent more than once.  Any
request for a page that has already been sent is ignored.  Duplicate requests
such as this can happen as a page is sent at about the same time the
destination accesses it.