The memory API
==============

The memory API models the memory and I/O buses and controllers of a QEMU
machine. It attempts to allow modelling of:

 - ordinary RAM
 - memory-mapped I/O (MMIO)
 - memory controllers that can dynamically reroute physical memory regions
   to different destinations

The memory model provides support for

 - tracking RAM changes by the guest
 - setting up coalesced memory for kvm
 - setting up ioeventfd regions for kvm

Memory is modelled as an acyclic graph of MemoryRegion objects. Sinks
(leaves) are RAM and MMIO regions, while other nodes represent
buses, memory controllers, and memory regions that have been rerouted.

In addition to MemoryRegion objects, the memory API provides AddressSpace
objects for every root and possibly for intermediate MemoryRegions too.
These represent memory as seen from the CPU or a device's viewpoint.

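As a brief orientation, a root container region and the AddressSpace that
views it might be set up roughly as follows. This is a hedged sketch, not
code taken from QEMU itself; the names are made up, and a NULL owner means
the machine object becomes the owner (see "Region lifecycle" below).

    #include "exec/memory.h"

    /* Sketch: a root container covering the whole address range, and an
     * AddressSpace through which a CPU or device would view it. */
    static void my_board_init_memory(MemoryRegion *root, AddressSpace *as)
    {
        memory_region_init(root, NULL, "my-root", UINT64_MAX);
        address_space_init(as, root, "my-address-space");
    }
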
Types of regions
----------------

There are multiple types of memory regions (all represented by a single C
type MemoryRegion; a construction sketch follows this list):

- RAM: a RAM region is simply a range of host memory that can be made
  available to the guest.
  You typically initialize these with memory_region_init_ram(). Some special
  purposes require the variants memory_region_init_resizeable_ram(),
  memory_region_init_ram_from_file(), or memory_region_init_ram_ptr().

- MMIO: a range of guest memory that is implemented by host callbacks;
  each read or write causes a callback to be called on the host.
  You initialize these with memory_region_init_io(), passing it a
  MemoryRegionOps structure describing the callbacks.

- ROM: a ROM memory region works like RAM for reads (directly accessing
  a region of host memory), and forbids writes. You initialize these with
  memory_region_init_rom().

- ROM device: a ROM device memory region works like RAM for reads
  (directly accessing a region of host memory), but like MMIO for
  writes (invoking a callback). You initialize these with
  memory_region_init_rom_device().

- IOMMU region: an IOMMU region translates addresses of accesses made to it
  and forwards them to some other target memory region. As the name suggests,
  these are only needed for modelling an IOMMU, not for simple devices.
  You initialize these with memory_region_init_iommu().

- container: a container simply includes other memory regions, each at
  a different offset. Containers are useful for grouping several regions
  into one unit. For example, a PCI BAR may be composed of a RAM region
  and an MMIO region.

  A container's subregions are usually non-overlapping. In some cases it is
  useful to have overlapping regions; for example a memory controller that
  can overlay a subregion of RAM with MMIO or ROM, or a PCI controller
  that does not prevent cards from claiming overlapping BARs.

  You initialize a pure container with memory_region_init().

- alias: a subsection of another region. Aliases allow a region to be
  split apart into discontiguous regions. Examples of uses are memory banks
  used when the guest address space is smaller than the amount of RAM
  addressed, or a memory controller that splits main memory to expose a "PCI
  hole". Aliases may point to any type of region, including other aliases,
  but an alias may not point back to itself, directly or indirectly.
  You initialize these with memory_region_init_alias().

- reservation region: a reservation region is primarily for debugging.
  It claims I/O space that is not supposed to be handled by QEMU itself.
  The typical use is to track parts of the address space which will be
  handled by the host kernel when KVM is enabled.
  You initialize these with memory_region_init_reservation(), or by
  passing a NULL callback parameter to memory_region_init_io().

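As a rough illustration of the constructors named above, here is a hedged
sketch for a hypothetical device. MyDevState, the field names, the sizes
and the (empty) register callbacks are all invented; only the call
signatures are the point.

    static uint64_t mydev_mmio_read(void *opaque, hwaddr addr, unsigned size)
    {
        return 0;   /* register reads would be decoded here */
    }

    static void mydev_mmio_write(void *opaque, hwaddr addr,
                                 uint64_t val, unsigned size)
    {
        /* register writes would be decoded here */
    }

    static const MemoryRegionOps mydev_mmio_ops = {
        .read = mydev_mmio_read,
        .write = mydev_mmio_write,
        .endianness = DEVICE_NATIVE_ENDIAN,
    };

    static void mydev_init_regions(MyDevState *s)
    {
        /* RAM, MMIO and ROM regions owned by the device */
        memory_region_init_ram(&s->ram, OBJECT(s), "mydev.ram",
                               0x10000, &error_fatal);
        memory_region_init_io(&s->mmio, OBJECT(s), &mydev_mmio_ops, s,
                              "mydev.mmio", 0x1000);
        memory_region_init_rom(&s->rom, OBJECT(s), "mydev.rom",
                               0x8000, &error_fatal);
        /* An alias exposing only the second half of the RAM region */
        memory_region_init_alias(&s->ram_hi, OBJECT(s), "mydev.ram-hi",
                                 &s->ram, 0x8000, 0x8000);
    }
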
It is valid to add subregions to a region which is not a pure container
(that is, to an MMIO, RAM or ROM region). This means that the region
will act like a container, except that any addresses within the container's
region which are not claimed by any subregion are handled by the
container itself (ie by its MMIO callbacks or RAM backing). However
it is generally possible to achieve the same effect with a pure container
one of whose subregions is a low priority "background" region covering
the whole address range; this is often clearer and is preferred.
Subregions cannot be added to an alias region.

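The preferred pattern looks roughly like this (a sketch; it assumes the
background and window regions have already been initialized, and the
names, sizes and offsets are made up):

    /* A pure container whose whole range is covered by a low priority
     * "background" region, with a higher priority subregion on top. */
    memory_region_init(&s->container, OBJECT(s), "ctrl.container", 0x10000);
    memory_region_add_subregion_overlap(&s->container, 0,
                                        &s->background, -1);
    memory_region_add_subregion(&s->container, 0x4000, &s->window);
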
Region names
------------

Regions are assigned names by the constructor. For most regions these are
only used for debugging purposes, but RAM regions also use the name to identify
live migration sections. This means that RAM region names need to have ABI
stability.

Region lifecycle
----------------

A region is created by one of the memory_region_init*() functions and
attached to an object, which acts as its owner or parent. QEMU ensures
that the owner object remains alive as long as the region is visible to
the guest, or as long as the region is in use by a virtual CPU or another
device. For example, the owner object will not die between an
address_space_map operation and the corresponding address_space_unmap.

After creation, a region can be added to an address space or a
container with memory_region_add_subregion(), and removed using
memory_region_del_subregion().

Various region attributes (read-only, dirty logging, coalesced mmio,
ioeventfd) can be changed during the region lifecycle. They take effect
as soon as the region is made visible. This can be immediately, later,
or never.

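For example (a sketch; "sysmem", "mr" and the offset are hypothetical, and
each attribute call only makes sense for the appropriate kind of region):

    /* Map a region, tweak a few attributes, and unmap it again. */
    memory_region_add_subregion(sysmem, 0x10000000, mr);

    memory_region_set_readonly(mr, true);               /* read-only      */
    memory_region_set_log(mr, true, DIRTY_MEMORY_VGA);  /* dirty logging  */
    memory_region_set_coalescing(mr);                   /* coalesced mmio */

    memory_region_del_subregion(sysmem, mr);
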
Destruction of a memory region happens automatically when the owner
object dies.

If however the memory region is part of a dynamically allocated data
structure, you should call object_unparent() to destroy the memory region
before the data structure is freed. For an example see VFIOMSIXInfo
and VFIOQuirk in hw/vfio/pci.c.

You must not destroy a memory region as long as it may be in use by a
device or CPU. To ensure this, as a general rule do not create or
destroy memory regions dynamically during a device's lifetime, and only
call object_unparent() in the memory region owner's instance_finalize
callback. The dynamically allocated data structure that contains the
memory region should then be freed in the instance_finalize callback
as well.

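In outline (a hedged sketch; MyDevState, MYDEV() and the "quirk" field are
invented, loosely mirroring the VFIO example mentioned above):

    /* Unparent a dynamically created region in instance_finalize, then
     * free the structure that contains it. */
    static void mydev_instance_finalize(Object *obj)
    {
        MyDevState *s = MYDEV(obj);

        object_unparent(OBJECT(&s->quirk->mmio));
        g_free(s->quirk);
    }
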
If you break this rule, the following situation can happen:

- the memory region's owner had a reference taken via memory_region_ref
  (for example by address_space_map)

- the region is unparented, and has no owner anymore

- when address_space_unmap is called, the reference to the memory region's
  owner is leaked.

There is an exception to the above rule: it is okay to call
object_unparent at any time for an alias or a container region. It is
therefore also okay to create or destroy alias and container regions
dynamically during a device's lifetime.

This exceptional usage is valid because aliases and containers only help
QEMU build the guest's memory map; they are never accessed directly.
memory_region_ref and memory_region_unref are never called on aliases
or containers, so the above situation cannot happen. Exploiting
this exception is rarely necessary, and therefore it is discouraged,
but nevertheless it is used in a few places.

For regions that "have no owner" (NULL is passed at creation time), the
machine object is actually used as the owner. Since instance_finalize is
never called for the machine object, you must never call object_unparent
on regions that have no owner, unless they are aliases or containers.

Overlapping regions and priority
--------------------------------

Usually, regions may not overlap each other; a memory address decodes into
exactly one target. In some cases it is useful to allow regions to overlap,
and sometimes to control which of the overlapping regions is visible to the
guest. This is done with memory_region_add_subregion_overlap(), which
allows the region to overlap any other region in the same container, and
specifies a priority that allows the core to decide which of two regions at
the same address is visible (highest wins).
Priority values are signed, and the default value is zero. This means that
you can use memory_region_add_subregion_overlap() both to specify a region
that must sit 'above' any others (with a positive priority) and also a
background region that sits 'below' others (with a negative priority).

If the higher priority region in an overlap is a container or alias, then
the lower priority region will appear in any "holes" that the higher priority
region has left by not mapping subregions to that area of its address range.
(This applies recursively -- if the subregions are themselves containers or
aliases that leave holes then the lower priority region will appear in these
holes too.)

For example, suppose we have a container A of size 0x8000 with two subregions
B and C. B is a container mapped at 0x2000, size 0x4000, priority 2; C is
an MMIO region mapped at 0x0, size 0x6000, priority 1. B currently has two
of its own subregions: D of size 0x1000 at offset 0 and E of size 0x1000 at
offset 0x2000. As a diagram:

        0      1000   2000   3000   4000   5000   6000   7000   8000
        |------|------|------|------|------|------|------|------|
  A:    [                                                       ]
  C:    [CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC]
  B:                  [                          ]
  D:                  [DDDDD]
  E:                                [EEEEE]

The regions that will be seen within this address range then are:
        [CCCCCCCCCCCC][DDDDD][CCCCC][EEEEE][CCCCC]

Since B has higher priority than C, its subregions appear in the flat map
even where they overlap with C. In ranges where B has not mapped anything
C's region appears.

If B had provided its own MMIO operations (ie it was not a pure container)
then these would be used for any addresses in its range not handled by
D or E, and the result would be:
        [CCCCCCCCCCCC][DDDDD][BBBBB][EEEEE][BBBBB]

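In code, the example layout could be built roughly as follows (a sketch; it
assumes A, B, C, D and E are MemoryRegion variables that have already been
initialized with the appropriate constructors and sizes):

    /* B and C overlap inside A, so they are added with explicit
     * priorities; D and E do not overlap, so the default priority of
     * zero is fine. */
    memory_region_add_subregion_overlap(&A, 0x2000, &B, 2);
    memory_region_add_subregion_overlap(&A, 0x0000, &C, 1);
    memory_region_add_subregion(&B, 0x0000, &D);
    memory_region_add_subregion(&B, 0x2000, &E);
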
Priority values are local to a container, because the priorities of two
regions are only compared when they are both children of the same container.
This means that the device in charge of the container (typically modelling
a bus or a memory controller) can use them to manage the interaction of
its child regions without any side effects on other parts of the system.
In the example above, the priorities of D and E are unimportant because
they do not overlap each other. It is the relative priority of B and C
that causes D and E to appear on top of C: D and E's priorities are never
compared against the priority of C.

Visibility
----------

The memory core uses the following rules to select a memory region when the
guest accesses an address (a simplified model of the lookup follows the
list):

- all direct subregions of the root region are matched against the address, in
  descending priority order
  - if the address lies outside the region offset/size, the subregion is
    discarded
  - if the subregion is a leaf (RAM or MMIO), the search terminates, returning
    this leaf region
  - if the subregion is a container, the same algorithm is used within the
    subregion (after the address is adjusted by the subregion offset)
  - if the subregion is an alias, the search is continued at the alias target
    (after the address is adjusted by the subregion offset and alias offset)
  - if a recursive search within a container or alias subregion does not
    find a match (because of a "hole" in the container's coverage of its
    address range), then if this is a container with its own MMIO or RAM
    backing the search terminates, returning the container itself. Otherwise
    we continue with the next subregion in priority order
- if none of the subregions match the address then the search terminates
  with no match found

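The following is a much simplified model of these rules, purely as an
illustration; it is not QEMU's implementation (QEMU flattens the region
tree rather than walking it on every access), and the Region structure and
helper are invented.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    typedef enum { REG_RAM, REG_MMIO, REG_CONTAINER, REG_ALIAS } RegionType;

    typedef struct Region Region;
    struct Region {
        RegionType type;
        uint64_t offset;        /* offset within the parent             */
        uint64_t size;
        bool has_backing;       /* container with its own RAM/MMIO      */
        Region *alias_target;
        uint64_t alias_offset;
        Region **children;      /* sorted by descending priority        */
        int num_children;
    };

    /* Resolve "addr", relative to the start of "r", to a terminal region,
     * or NULL if nothing matches. */
    static Region *resolve(Region *r, uint64_t addr)
    {
        if (r->type == REG_RAM || r->type == REG_MMIO) {
            return r;                       /* leaf: search terminates   */
        }
        if (r->type == REG_ALIAS) {
            /* continue at the alias target, adjusted by the alias offset */
            return resolve(r->alias_target, addr + r->alias_offset);
        }
        /* container: try subregions in descending priority order */
        for (int i = 0; i < r->num_children; i++) {
            Region *sub = r->children[i];
            if (addr < sub->offset || addr - sub->offset >= sub->size) {
                continue;                   /* outside: discard          */
            }
            Region *hit = resolve(sub, addr - sub->offset);
            if (hit) {
                return hit;
            }
            /* a hole in sub: fall through to the next subregion */
        }
        /* no subregion matched; a container with its own backing handles
         * the access itself */
        return r->has_backing ? r : NULL;
    }
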
Example memory map
------------------

system_memory: container@0-2^48-1
 |
 +---- lomem: alias@0-0xdfffffff ---> #ram (0-0xdfffffff)
 |
 +---- himem: alias@0x100000000-0x11fffffff ---> #ram (0xe0000000-0xffffffff)
 |
 +---- vga-window: alias@0xa0000-0xbffff ---> #pci (0xa0000-0xbffff)
 |      (prio 1)
 |
 +---- pci-hole: alias@0xe0000000-0xffffffff ---> #pci (0xe0000000-0xffffffff)

pci (0-2^32-1)
 |
 +--- vga-area: container@0xa0000-0xbffff
 |      |
 |      +--- alias@0x00000-0x7fff ---> #vram (0x010000-0x017fff)
 |      |
 |      +--- alias@0x08000-0xffff ---> #vram (0x020000-0x027fff)
 |
 +---- vram: ram@0xe1000000-0xe1ffffff
 |
 +---- vga-mmio: mmio@0xe2000000-0xe200ffff

ram: ram@0x00000000-0xffffffff

This is a (simplified) PC memory map. The 4GB RAM block is mapped into the
system address space via two aliases: "lomem" is a 1:1 mapping of the first
3.5GB; "himem" maps the last 0.5GB at address 4GB. This leaves 0.5GB for the
so-called PCI hole, which allows a 32-bit PCI bus to exist in a system with
4GB of memory.

The memory controller diverts addresses in the range 640K-768K to the PCI
address space. This is modelled using the "vga-window" alias, mapped at a
higher priority so it obscures the RAM at the same addresses. The vga window
can be removed by programming the memory controller; this is modelled by
removing the alias and exposing the RAM underneath.

The pci address space is not a direct child of the system address space, since
we only want parts of it to be visible (we accomplish this using aliases).
It has three subregions: vga-area models the legacy vga window and is occupied
by two 32K memory banks pointing at two sections of the framebuffer; vram is
the framebuffer itself, mapped as a BAR at address 0xe1000000; and vga-mmio is
an additional BAR containing MMIO registers, mapped after it.

Note that if the guest maps a BAR outside the PCI hole, it will not be
visible, because the pci-hole alias only exposes a 0.5GB window of the PCI
address space.

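The aliases in this map could be created along the following lines (a
sketch only; real code for this lives in the PC machine and PCI host
bridge models, and the MemoryRegion variables here are assumed to have
been allocated already):

    /* 4GB of RAM, exposed through two aliases around the PCI hole. */
    memory_region_init_ram(ram, NULL, "ram", 0x100000000ULL, &error_fatal);

    memory_region_init_alias(lomem, NULL, "lomem", ram, 0, 0xe0000000);
    memory_region_add_subregion(system_memory, 0, lomem);

    memory_region_init_alias(himem, NULL, "himem", ram,
                             0xe0000000, 0x20000000);
    memory_region_add_subregion(system_memory, 0x100000000ULL, himem);

    /* The VGA window obscures the RAM underneath it (priority 1). */
    memory_region_init_alias(vga_window, NULL, "vga-window", pci,
                             0xa0000, 0x20000);
    memory_region_add_subregion_overlap(system_memory, 0xa0000,
                                        vga_window, 1);

    memory_region_init_alias(pci_hole, NULL, "pci-hole", pci,
                             0xe0000000, 0x20000000);
    memory_region_add_subregion(system_memory, 0xe0000000, pci_hole);
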
MMIO Operations
---------------

MMIO regions are provided with ->read() and ->write() callbacks; in addition
various constraints can be supplied to control how these callbacks are called
(an example ops structure follows this list):

- .valid.min_access_size, .valid.max_access_size define the access sizes
  (in bytes) which the device accepts; accesses outside this range will
  have device and bus specific behaviour (ignored, or machine check)
- .valid.unaligned specifies that the *device being modelled* supports
  unaligned accesses; if false, unaligned accesses will invoke the
  appropriate bus or CPU specific behaviour.
- .impl.min_access_size, .impl.max_access_size define the access sizes
  (in bytes) supported by the *implementation*; other access sizes will be
  emulated using the ones available. For example a 4-byte write will be
  emulated using four 1-byte writes, if .impl.max_access_size = 1.
- .impl.unaligned specifies that the *implementation* supports unaligned
  accesses; if false, unaligned accesses will be emulated by two aligned
  accesses.
- .old_mmio eases the porting of code that was formerly using
  cpu_register_io_memory(). It should not be used in new code.