- * - to make easier to know what to free at the end of migration
+ * - to make it easier to know what to free at the end of migration
*
* This way we always know who is the owner of each "pages" struct,
- * and we don't need any loocking. It belongs to the migration thread
+ * and we don't need any locking. It belongs to the migration thread
* or to the channel thread. Switching is safe because the migration
* thread is using the channel mutex when changing it, and the channel
- * have to had finish with its own, otherwise pending_job can't be
+ * has to have finished with its own, otherwise pending_job can't be
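
As a rough illustration of the hand-off described in the comment above, here is a minimal, self-contained sketch (hypothetical names, plain pthreads instead of QEMU's threading helpers): the "pages" stand-in is only ever touched by the thread that currently owns it, and ownership flips under the channel mutex via a pending_job flag.

    #include <pthread.h>
    #include <stdbool.h>

    typedef struct {
        pthread_mutex_t mutex;
        pthread_cond_t cond;
        bool pending_job;       /* true: the channel thread owns 'pages' */
        int pages[128];         /* stand-in for the "pages" struct       */
    } Channel;

    /* Migration thread side: only touches 'pages' while pending_job is false. */
    static void migration_post_job(Channel *c)
    {
        pthread_mutex_lock(&c->mutex);
        while (c->pending_job) {         /* wait until we own 'pages' again */
            pthread_cond_wait(&c->cond, &c->mutex);
        }
        /* Safe without a 'pages' lock: the channel thread has finished,
         * so nobody else touches 'pages' while pending_job is false. */
        c->pages[0] = 42;
        c->pending_job = true;           /* ownership passes to the channel */
        pthread_cond_signal(&c->cond);
        pthread_mutex_unlock(&c->mutex);
    }

    /* Channel thread side: only touches 'pages' while pending_job is true. */
    static void *channel_thread(void *opaque)
    {
        Channel *c = opaque;

        pthread_mutex_lock(&c->mutex);
        while (!c->pending_job) {
            pthread_cond_wait(&c->cond, &c->mutex);
        }
        pthread_mutex_unlock(&c->mutex);

        /* No lock needed here: this thread owns 'pages' until it clears
         * pending_job again. */
        c->pages[0] += 1;

        pthread_mutex_lock(&c->mutex);
        c->pending_job = false;          /* ownership returns to migration */
        pthread_cond_signal(&c->cond);
        pthread_mutex_unlock(&c->mutex);
        return NULL;
    }

    int main(void)
    {
        Channel c = { .pending_job = false };
        pthread_t tid;

        pthread_mutex_init(&c.mutex, NULL);
        pthread_cond_init(&c.cond, NULL);
        pthread_create(&tid, NULL, channel_thread, &c);
        migration_post_job(&c);
        pthread_join(tid, NULL);
        return 0;
    }
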
/**
* migration_bitmap_find_dirty: find the next dirty page from start
*
- * Returns the byte offset within memory region of the start of a dirty page
+ * Returns the page offset within the memory region of the start of a dirty page
*
* @rs: current RAM state
* @rb: RAMBlock where to search for dirty pages
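
To make the byte-vs-page distinction concrete, here is a small, self-contained sketch (a hypothetical helper, not the QEMU implementation) of a dirty-bitmap scan that yields a page offset; a caller would multiply by the page size to obtain an address.

    #include <limits.h>

    #define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

    /* Return the offset (in pages, not bytes) of the first dirty page at or
     * after 'start', or 'nr_pages' if no dirty page remains. */
    static unsigned long find_dirty(const unsigned long *bitmap,
                                    unsigned long nr_pages,
                                    unsigned long start)
    {
        for (unsigned long page = start; page < nr_pages; page++) {
            if (bitmap[page / BITS_PER_LONG] &
                (1UL << (page % BITS_PER_LONG))) {
                return page;        /* page offset within the region */
            }
        }
        return nr_pages;
    }
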
* find_dirty_block: find the next dirty page and update any state
* associated with the search process.
*
- * Returns if a page is found
+ * Returns true if a page is found
*
* @rs: current RAM state
* @pss: data about the state of the current dirty page scan
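
A hedged sketch of why a boolean return is convenient here (hypothetical types, not the real page-search state): the search state carries the cursor, so the caller simply loops until the function reports that nothing was found.

    #include <stdbool.h>

    typedef struct {
        unsigned long page;        /* next page to examine                 */
        bool complete_round;       /* set once the scan has wrapped around */
    } SearchState;

    /* Returns true if a dirty page was found; ss->page then points at it. */
    static bool find_next_dirty(SearchState *ss, const bool *dirty,
                                unsigned long nr_pages)
    {
        while (ss->page < nr_pages) {
            if (dirty[ss->page]) {
                return true;
            }
            ss->page++;
        }
        ss->complete_round = true; /* caller may reset ss->page and retry */
        return false;
    }
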
*
* Skips pages that are already sent (!dirty)
*
- * Returns if a queued page is found
+ * Returns true if a queued page is found
*
* @rs: current RAM state
* @pss: data about the state of the current dirty page scan
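
As a rough sketch of the skip rule above (hypothetical names, simplified queue): a queued request is only returned if the page is still dirty; a page that was already sent, and is therefore no longer dirty, is passed over.

    #include <stdbool.h>

    /* Returns true and stores the page offset in *out_page if some queued
     * page is still dirty; queued pages that were already sent (!dirty)
     * are skipped. */
    static bool get_queued(const unsigned long *queue, unsigned long queued,
                           const bool *dirty, unsigned long *out_page)
    {
        for (unsigned long i = 0; i < queued; i++) {
            unsigned long page = queue[i];

            if (dirty[page]) {
                *out_page = page;
                return true;
            }
            /* !dirty: already sent since it was queued, skip it */
        }
        return false;
    }
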
/* we want to check in the 1st loop, just in case it was the 1st time
and we had to sync the dirty bitmap.
- qemu_get_clock_ns() is a bit expensive, so we only check each some
+ qemu_clock_get_ns() is a bit expensive, so we only check it every few
iterations
*/
if ((i & 63) == 0) {
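
The same pattern in isolation, as a hedged sketch with clock_gettime() standing in for qemu_clock_get_ns(): the clock is read on the very first iteration and then only once every 64 iterations, so the hot loop is not dominated by the cost of reading the time.

    #include <stdint.h>
    #include <time.h>

    static int64_t now_ns(void)
    {
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec;
    }

    /* Do work until 'budget_ns' has elapsed, checking the (relatively
     * expensive) clock only every 64 iterations, including the first one. */
    static void work_loop(int64_t budget_ns)
    {
        int64_t start = now_ns();

        for (unsigned i = 0; ; i++) {
            /* ... do one unit of work ... */
            if ((i & 63) == 0) {
                if (now_ns() - start > budget_ns) {
                    break;
                }
            }
        }
    }
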