Dynamic DMA mapping using the generic device
============================================

James E.J. Bottomley <[email protected]>

This document describes the DMA API. For a more gentle introduction
to the API (and actual examples), see Documentation/DMA-API-HOWTO.txt.

This API is split into two pieces. Part I describes the basic API.
Part II describes extensions for supporting non-consistent memory
machines. Unless you know that your driver absolutely has to support
non-consistent platforms (this is usually only legacy platforms) you
should only use the API described in part I.

Part I - dma_ API
-------------------------------------

To get the dma_ API, you must #include <linux/dma-mapping.h>. This
provides dma_addr_t and the interfaces described below.

A dma_addr_t can hold any valid DMA or bus address for the platform. It
can be given to a device to use as a DMA source or target. A CPU cannot
reference a dma_addr_t directly because there may be translation between
its physical address space and the bus address space.

Part Ia - Using large DMA-coherent buffers
------------------------------------------

void *
dma_alloc_coherent(struct device *dev, size_t size,
		   dma_addr_t *dma_handle, gfp_t flag)

Consistent memory is memory for which a write by either the device or
the processor can immediately be read by the processor or device
without having to worry about caching effects. (You may however need
to make sure to flush the processor's write buffers before telling
devices to read that memory.)

This routine allocates a region of <size> bytes of consistent memory.

It returns a pointer to the allocated region (in the processor's virtual
address space) or NULL if the allocation failed.

It also returns a <dma_handle> which may be cast to an unsigned integer the
same width as the bus and given to the device as the bus address base of
the region.

Note: consistent memory can be expensive on some platforms, and the
minimum allocation length may be as big as a page, so you should
consolidate your requests for consistent memory as much as possible.
The simplest way to do that is to use the dma_pool calls (see below).

The flag parameter (dma_alloc_coherent() only) allows the caller to
specify the GFP_ flags (see kmalloc()) for the allocation (the
implementation may choose to ignore flags that affect the location of
the returned memory, like GFP_DMA).

void *
dma_zalloc_coherent(struct device *dev, size_t size,
		    dma_addr_t *dma_handle, gfp_t flag)

Wraps dma_alloc_coherent() and also zeroes the returned memory if the
allocation attempt succeeded.

void
dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
		  dma_addr_t dma_handle)

Free a region of consistent memory you previously allocated. dev,
size and dma_handle must all be the same as those passed into
dma_alloc_coherent(). cpu_addr must be the virtual address returned by
dma_alloc_coherent().

Note that unlike their sibling allocation calls, these routines
may only be called with IRQs enabled.

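For illustration, a driver might allocate and later free a block of
consistent memory along the lines of the following sketch (RING_BYTES
and the error handling policy are made up for the example):

	void *cpu_addr;
	dma_addr_t dma_handle;

	/* RING_BYTES is a hypothetical, driver-chosen size */
	cpu_addr = dma_alloc_coherent(dev, RING_BYTES, &dma_handle, GFP_KERNEL);
	if (!cpu_addr)
		return -ENOMEM;

	/* program the device with dma_handle; access the region via cpu_addr */
	....

	dma_free_coherent(dev, RING_BYTES, cpu_addr, dma_handle);
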
Part Ib - Using small DMA-coherent buffers
------------------------------------------

To get this part of the dma_ API, you must #include <linux/dmapool.h>

Many drivers need lots of small DMA-coherent memory regions for DMA
descriptors or I/O buffers. Rather than allocating in units of a page
or more using dma_alloc_coherent(), you can use DMA pools. These work
much like a struct kmem_cache, except that they use the DMA-coherent allocator,
not __get_free_pages(). Also, they understand common hardware constraints
for alignment, like queue heads needing to be aligned on N-byte boundaries.


struct dma_pool *
dma_pool_create(const char *name, struct device *dev,
		size_t size, size_t align, size_t alloc);

dma_pool_create() initializes a pool of DMA-coherent buffers
for use with a given device. It must be called in a context which
can sleep.

The "name" is for diagnostics (like a struct kmem_cache name); dev and size
are like what you'd pass to dma_alloc_coherent(). The device's hardware
alignment requirement for this type of data is "align" (which is expressed
in bytes, and must be a power of two). If your device has no boundary
crossing restrictions, pass 0 for alloc; passing 4096 says memory allocated
from this pool must not cross 4KByte boundaries.


void *dma_pool_alloc(struct dma_pool *pool, gfp_t gfp_flags,
		     dma_addr_t *dma_handle);

This allocates memory from the pool; the returned memory will meet the
size and alignment requirements specified at creation time. Pass
GFP_ATOMIC to prevent blocking, or if it's permitted (not
in_interrupt, not holding SMP locks), pass GFP_KERNEL to allow
blocking. Like dma_alloc_coherent(), this returns two values: an
address usable by the CPU, and the DMA address usable by the pool's
device.


void dma_pool_free(struct dma_pool *pool, void *vaddr,
		   dma_addr_t addr);

This puts memory back into the pool. The pool is what was passed to
dma_pool_alloc(); the CPU (vaddr) and DMA addresses are what
were returned when that routine allocated the memory being freed.


void dma_pool_destroy(struct dma_pool *pool);

dma_pool_destroy() frees the resources of the pool. It must be
called in a context which can sleep. Make sure you've freed all allocated
memory back to the pool before you destroy it.

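Putting the pool calls together, a driver might do something like the
following sketch (the pool name, the 32-byte descriptor size and the
error handling are invented for the example):

	struct dma_pool *pool;
	void *desc;
	dma_addr_t desc_dma;

	/* 32-byte descriptors, 32-byte aligned, no boundary restriction */
	pool = dma_pool_create("mydev-desc", dev, 32, 32, 0);
	if (!pool)
		return -ENOMEM;

	desc = dma_pool_alloc(pool, GFP_KERNEL, &desc_dma);
	if (!desc) {
		dma_pool_destroy(pool);
		return -ENOMEM;
	}

	/* hand desc_dma to the device, fill in the descriptor via desc */
	....

	dma_pool_free(pool, desc, desc_dma);
	dma_pool_destroy(pool);
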
Part Ic - DMA addressing limitations
------------------------------------

int
dma_supported(struct device *dev, u64 mask)

Checks to see if the device can support DMA to the memory described by
mask.

Returns: 1 if it can and 0 if it can't.

Notes: This routine merely tests to see if the mask is possible. It
won't change the current mask settings. It is more intended as an
internal API for use by the platform than an external API for use by
driver writers.

int
dma_set_mask_and_coherent(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
streaming and coherent DMA mask parameters if it is.

Returns: 0 if successful and a negative error if not.

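As a sketch (not taken from any particular driver), a probe() routine
might try a 64-bit mask first and fall back to 32 bits:

	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)) &&
	    dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
		dev_warn(dev, "no suitable DMA addressing available\n");
		return -ENODEV;
	}
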
int
dma_set_mask(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
parameters if it is.

Returns: 0 if successful and a negative error if not.

int
dma_set_coherent_mask(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
parameters if it is.

Returns: 0 if successful and a negative error if not.

u64
dma_get_required_mask(struct device *dev)

This API returns the mask that the platform requires to
operate efficiently. Usually this means the returned mask
is the minimum required to cover all of memory. Examining the
required mask gives drivers with variable descriptor sizes the
opportunity to use smaller descriptors as necessary.

Requesting the required mask does not alter the current mask. If you
wish to take advantage of it, you should issue a dma_set_mask()
call to set the mask to the value returned.

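For example (a sketch only), a driver with two descriptor formats might
only switch to the larger one when the platform actually needs it:

	/* use_large_descriptors is a hypothetical driver flag */
	use_large_descriptors = false;
	if (dma_get_required_mask(dev) > DMA_BIT_MASK(32) &&
	    !dma_set_mask(dev, DMA_BIT_MASK(64)))
		use_large_descriptors = true;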

Part Id - Streaming DMA mappings
--------------------------------

dma_addr_t
dma_map_single(struct device *dev, void *cpu_addr, size_t size,
	       enum dma_data_direction direction)

Maps a piece of processor virtual memory so it can be accessed by the
device and returns the bus address of the memory.

The direction for both APIs may be converted freely by casting.
However the dma_ API uses a strongly typed enumerator for its
direction:

DMA_NONE		no direction (used for debugging)
DMA_TO_DEVICE		data is going from the memory to the device
DMA_FROM_DEVICE		data is coming from the device to the memory
DMA_BIDIRECTIONAL	direction isn't known

Notes: Not all memory regions in a machine can be mapped by this API.
Further, contiguous kernel virtual space may not be contiguous as
physical memory. Since this API does not provide any scatter/gather
capability, it will fail if the user tries to map a non-physically
contiguous piece of memory. For this reason, memory to be mapped by
this API should be obtained from sources which guarantee it to be
physically contiguous (like kmalloc).

Further, the bus address of the memory must be within the
dma_mask of the device (the dma_mask is a bit mask of the
addressable region for the device, i.e., if the bus address of
the memory ANDed with the dma_mask is still equal to the bus
address, then the device can perform DMA to the memory). To
ensure that the memory allocated by kmalloc is within the dma_mask,
the driver may specify various platform-dependent flags to restrict
the bus address range of the allocation (e.g., on x86, GFP_DMA
guarantees to be within the first 16MB of available bus addresses,
as required by ISA devices).

Note also that the above constraints on physical contiguity and
dma_mask may not apply if the platform has an IOMMU (a device which
maps an I/O bus address to a physical memory address). However, to be
portable, device driver writers may *not* assume that such an IOMMU
exists.

Warnings: Memory coherency operates at a granularity called the cache
line width. In order for memory mapped by this API to operate
correctly, the mapped region must begin exactly on a cache line
boundary and end exactly on one (to prevent two separately mapped
regions from sharing a single cache line). Since the cache line size
may not be known at compile time, the API will not enforce this
requirement. Therefore, it is recommended that driver writers who
don't take special care to determine the cache line size at run time
only map virtual regions that begin and end on page boundaries (which
are guaranteed also to be cache line boundaries).

DMA_TO_DEVICE synchronisation must be done after the last modification
of the memory region by the software and before it is handed off to
the device. Once this primitive is used, memory covered by this
primitive should be treated as read-only by the device. If the device
may write to it at any point, it should be DMA_BIDIRECTIONAL (see
below).

DMA_FROM_DEVICE synchronisation must be done before the driver
accesses data that may be changed by the device. This memory should
be treated as read-only by the driver. If the driver needs to write
to it at any point, it should be DMA_BIDIRECTIONAL (see below).

DMA_BIDIRECTIONAL requires special handling: it means that the driver
isn't sure if the memory was modified before being handed off to the
device and also isn't sure if the device will also modify it. Thus,
you must always sync bidirectional memory twice: once before the
memory is handed off to the device (to make sure all memory changes
are flushed from the processor) and once before the data may be
accessed after being used by the device (to make sure any processor
cache lines are updated with data that the device may have changed).

void
dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
		 enum dma_data_direction direction)

Unmaps the region previously mapped. All the parameters passed in
must be identical to those passed in (and returned) by the mapping
API.

dma_addr_t
dma_map_page(struct device *dev, struct page *page,
	     unsigned long offset, size_t size,
	     enum dma_data_direction direction)
void
dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
	       enum dma_data_direction direction)

API for mapping and unmapping for pages. All the notes and warnings
for the other mapping APIs apply here. Also, although the <offset>
and <size> parameters are provided to do partial page mapping, it is
recommended that you never use these unless you really know what the
cache width is.

int
dma_mapping_error(struct device *dev, dma_addr_t dma_addr)

In some circumstances dma_map_single() and dma_map_page() will fail to create
a mapping. A driver can check for these errors by testing the returned
DMA address with dma_mapping_error(). A non-zero return value means the mapping
could not be created and the driver should take appropriate action (e.g.
reduce current DMA mapping usage or delay and try again later).

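Putting these pieces together, a typical streaming mapping of a
kmalloc()ed buffer might look like the sketch below (buf and len are
assumed to come from the driver):

	dma_addr_t dma_addr;

	dma_addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, dma_addr)) {
		/* reduce DMA usage, defer the request, or fail it */
		return -ENOMEM;
	}

	/* tell the device to read len bytes starting at dma_addr */
	....

	dma_unmap_single(dev, dma_addr, len, DMA_TO_DEVICE);
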
int
dma_map_sg(struct device *dev, struct scatterlist *sg,
	   int nents, enum dma_data_direction direction)

Returns: the number of bus address segments mapped (this may be shorter
than <nents> passed in if some elements of the scatter/gather list are
physically or virtually adjacent and an IOMMU maps them with a single
entry).

Please note that the sg cannot be mapped again if it has been mapped once.
The mapping process is allowed to destroy information in the sg.

As with the other mapping interfaces, dma_map_sg() can fail. When it
does, 0 is returned and a driver must take appropriate action. It is
critical that the driver do something; in the case of a block driver,
aborting the request or even oopsing is better than doing nothing and
corrupting the filesystem.

With scatterlists, you use the resulting mapping like this:

	int i, count = dma_map_sg(dev, sglist, nents, direction);
	struct scatterlist *sg;

	for_each_sg(sglist, sg, count, i) {
		hw_address[i] = sg_dma_address(sg);
		hw_len[i] = sg_dma_len(sg);
	}

where nents is the number of entries in the sglist.

The implementation is free to merge several consecutive sglist entries
into one (e.g. with an IOMMU, or if several pages just happen to be
physically contiguous) and returns the actual number of sg entries it
mapped them to. On failure 0 is returned.

Then you should loop count times (note: this can be less than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.

void
dma_unmap_sg(struct device *dev, struct scatterlist *sg,
	     int nhwentries, enum dma_data_direction direction)

Unmap the previously mapped scatter/gather list. All the parameters
must be the same as those passed in to the scatter/gather mapping
API.

Note: <nents> must be the number you passed in, *not* the number of
bus address entries returned.

void
dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, size_t size,
			enum dma_data_direction direction)
void
dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle, size_t size,
			   enum dma_data_direction direction)
void
dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nelems,
		    enum dma_data_direction direction)
void
dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, int nelems,
		       enum dma_data_direction direction)

Synchronise a single contiguous or scatter/gather mapping for the CPU
and device. With the sync_sg API, all the parameters must be the same
as those passed into the sg mapping API. With the sync_single API,
you can use dma_handle and size parameters that aren't identical to
those passed into the single mapping API to do a partial sync.

Notes: You must do this:

- Before reading values that have been written by DMA from the device
  (use the DMA_FROM_DEVICE direction)
- After writing values that will be written to the device using DMA
  (use the DMA_TO_DEVICE direction)
- Before *and* after handing memory to the device if the memory is
  DMA_BIDIRECTIONAL

See also dma_map_single().

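For example, a driver that lets the CPU examine a streaming
DMA_FROM_DEVICE buffer between device transfers might do the following
(a sketch only; the mapping is assumed to stay in place the whole time):

	/* the device has finished writing; give the buffer to the CPU */
	dma_sync_single_for_cpu(dev, dma_handle, size, DMA_FROM_DEVICE);

	/* ... the CPU reads the received data ... */

	/* hand the buffer back to the device for the next transfer */
	dma_sync_single_for_device(dev, dma_handle, size, DMA_FROM_DEVICE);
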
dma_addr_t
dma_map_single_attrs(struct device *dev, void *cpu_addr, size_t size,
		     enum dma_data_direction dir,
		     struct dma_attrs *attrs)

void
dma_unmap_single_attrs(struct device *dev, dma_addr_t dma_addr,
		       size_t size, enum dma_data_direction dir,
		       struct dma_attrs *attrs)

int
dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
		 int nents, enum dma_data_direction dir,
		 struct dma_attrs *attrs)

void
dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sgl,
		   int nents, enum dma_data_direction dir,
		   struct dma_attrs *attrs)

The four functions above are just like the counterpart functions
without the _attrs suffixes, except that they pass an optional
struct dma_attrs*.

struct dma_attrs encapsulates a set of "DMA attributes". For the
definition of struct dma_attrs see linux/dma-attrs.h.

The interpretation of DMA attributes is architecture-specific, and
each attribute should be documented in Documentation/DMA-attributes.txt.

If struct dma_attrs* is NULL, the semantics of each of these
functions is identical to those of the corresponding function
without the _attrs suffix. As a result dma_map_single_attrs()
can generally replace dma_map_single(), etc.

As an example of the use of the *_attrs functions, here's how
you could pass an attribute DMA_ATTR_FOO when mapping memory
for DMA:

#include <linux/dma-attrs.h>
/* DMA_ATTR_FOO should be defined in linux/dma-attrs.h and
 * documented in Documentation/DMA-attributes.txt */
...

	DEFINE_DMA_ATTRS(attrs);
	dma_set_attr(DMA_ATTR_FOO, &attrs);
	....
	n = dma_map_sg_attrs(dev, sg, nents, DMA_TO_DEVICE, &attrs);
	....

Architectures that care about DMA_ATTR_FOO would check for its
presence in their implementations of the mapping and unmapping
routines, e.g.:

void whizco_dma_map_sg_attrs(struct device *dev, dma_addr_t dma_addr,
			     size_t size, enum dma_data_direction dir,
			     struct dma_attrs *attrs)
{
	....
	int foo = dma_get_attr(DMA_ATTR_FOO, attrs);
	....
	if (foo)
		/* twizzle the frobnozzle */
	....
}


Part II - Advanced dma_ usage
-----------------------------

Warning: These pieces of the DMA API should not be used in the
majority of cases, since they cater for unlikely corner cases that
don't belong in usual drivers.

If you don't understand how cache line coherency works between a
processor and an I/O device, you should not be using this part of the
API at all.

void *
dma_alloc_noncoherent(struct device *dev, size_t size,
		      dma_addr_t *dma_handle, gfp_t flag)

Identical to dma_alloc_coherent() except that the platform will
choose to return either consistent or non-consistent memory as it sees
fit. By using this API, you are guaranteeing to the platform that you
have all the correct and necessary sync points for this memory in the
driver should it choose to return non-consistent memory.

Note: where the platform can return consistent memory, it will
guarantee that the sync points become nops.

Warning: Handling non-consistent memory is a real pain. You should
only use this API if you positively know your driver will be
required to work on one of the rare (usually non-PCI) architectures
that simply cannot make consistent memory.

void
dma_free_noncoherent(struct device *dev, size_t size, void *cpu_addr,
		     dma_addr_t dma_handle)

Free memory allocated by the nonconsistent API. All parameters must
be identical to those passed in (and returned by
dma_alloc_noncoherent()).

int
dma_get_cache_alignment(void)

Returns the processor cache alignment. This is the absolute minimum
alignment *and* width that you must observe when either mapping
memory or doing partial flushes.

Notes: This API may return a number *larger* than the actual cache
line, but it will guarantee that one or more cache lines fit exactly
into the width returned by this call. It will also always be a power
of two for easy alignment.

void
dma_cache_sync(struct device *dev, void *vaddr, size_t size,
	       enum dma_data_direction direction)

Do a partial sync of memory that was allocated by
dma_alloc_noncoherent(), starting at virtual address vaddr and
continuing on for size. Again, you *must* observe the cache line
boundaries when doing this.

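As a rough sketch of the extra work involved, a driver using the
non-consistent API has to insert its own sync points, e.g. (size and
the error handling are assumed to come from the driver):

	void *vaddr;
	dma_addr_t dma_handle;

	vaddr = dma_alloc_noncoherent(dev, size, &dma_handle, GFP_KERNEL);
	if (!vaddr)
		return -ENOMEM;

	/* the CPU fills in the buffer, then makes it visible to the device */
	....
	dma_cache_sync(dev, vaddr, size, DMA_TO_DEVICE);

	/* ... the device reads from dma_handle ... */

	dma_free_noncoherent(dev, size, vaddr, dma_handle);
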
int
dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
			    dma_addr_t device_addr, size_t size, int flags)

Declare region of memory to be handed out by dma_alloc_coherent() when
it's asked for coherent memory for this device.

phys_addr is the CPU physical address to which the memory is currently
assigned (this will be ioremapped so the CPU can access the region).

device_addr is the bus address the device needs to be programmed
with to actually address this memory (this will be handed out as the
dma_addr_t in dma_alloc_coherent()).

size is the size of the area (must be a multiple of PAGE_SIZE).

flags can be ORed together and are:

DMA_MEMORY_MAP - request that the memory returned from
dma_alloc_coherent() be directly writable.

DMA_MEMORY_IO - request that the memory returned from
dma_alloc_coherent() be addressable using read()/write()/memcpy_toio() etc.

One or both of these flags must be present.

DMA_MEMORY_INCLUDES_CHILDREN - make the declared memory be allocated by
dma_alloc_coherent of any child devices of this one (for memory residing
on a bridge).

DMA_MEMORY_EXCLUSIVE - only allocate memory from the declared regions.
Do not allow dma_alloc_coherent() to fall back to system memory when
it's out of memory in the declared region.

The return value will be either DMA_MEMORY_MAP or DMA_MEMORY_IO and
must correspond to a passed in flag (i.e. no returning DMA_MEMORY_IO
if only DMA_MEMORY_MAP were passed in) for success or zero for
failure.

Note, for DMA_MEMORY_IO returns, all subsequent memory returned by
dma_alloc_coherent() may no longer be accessed directly, but instead
must be accessed using the correct bus functions. If your driver
isn't prepared to handle this contingency, it should not specify
DMA_MEMORY_IO in the input flags.

As a simplification for the platforms, only *one* such region of
memory may be declared per device.

For reasons of efficiency, most platforms choose to track the declared
region only at the granularity of a page. For smaller allocations,
you should use the dma_pool() API.

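As a sketch, a driver for a device with a dedicated 1MB memory region
might declare it at probe time like this (phys_addr, device_addr and
the size are made up for the example):

	int ret;

	ret = dma_declare_coherent_memory(dev, phys_addr, device_addr,
					  1024 * 1024,
					  DMA_MEMORY_MAP | DMA_MEMORY_EXCLUSIVE);
	if (!ret)
		dev_err(dev, "failed to declare coherent memory\n");

	/* dma_alloc_coherent() for this device now allocates from that region */
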
void
dma_release_declared_memory(struct device *dev)

Remove the memory region previously declared from the system. This
API performs *no* in-use checking for this region and will return
unconditionally having removed all the required structures. It is the
driver's job to ensure that no parts of this memory region are
currently in use.

void *
dma_mark_declared_memory_occupied(struct device *dev,
				  dma_addr_t device_addr, size_t size)

This is used to occupy specific regions of the declared space
(dma_alloc_coherent() will hand out the first free region it finds).

device_addr is the *device* address of the region requested.

size is the size (and should be a page-sized multiple).

The return value will be either a pointer to the processor virtual
address of the memory, or an error (via PTR_ERR()) if any part of the
region is occupied.

Part III - Debugging drivers' use of the DMA-API
------------------------------------------------

The DMA-API as described above has some constraints. For example, DMA
addresses must be released with the corresponding function and with the
same size. With the advent of hardware IOMMUs it becomes more and more
important that drivers do not violate those constraints. In the worst
case such a violation can result in data corruption up to destroyed
filesystems.

To debug drivers and find bugs in the usage of the DMA-API, checking
code can be compiled into the kernel which will tell the developer
about those violations. If your architecture supports it you can select
the "Enable debugging of DMA-API usage" option in your kernel
configuration. Enabling this option has a performance impact. Do not
enable it in production kernels.

If you boot the resulting kernel, it will contain code which does some
bookkeeping about what DMA memory was allocated for which device. If
this code detects an error it prints a warning message with some
details into your kernel log. An example warning message may look like
this:

------------[ cut here ]------------
WARNING: at /data2/repos/linux-2.6-iommu/lib/dma-debug.c:448
	check_unmap+0x203/0x490()
Hardware name:
forcedeth 0000:00:08.0: DMA-API: device driver frees DMA memory with wrong
	function [device address=0x00000000640444be] [size=66 bytes] [mapped as
	single] [unmapped as page]
Modules linked in: nfsd exportfs bridge stp llc r8169
Pid: 0, comm: swapper Tainted: G W 2.6.28-dmatest-09289-g8bb99c0 #1
Call Trace:
<IRQ> [<ffffffff80240b22>] warn_slowpath+0xf2/0x130
[<ffffffff80647b70>] _spin_unlock+0x10/0x30
[<ffffffff80537e75>] usb_hcd_link_urb_to_ep+0x75/0xc0
[<ffffffff80647c22>] _spin_unlock_irqrestore+0x12/0x40
[<ffffffff8055347f>] ohci_urb_enqueue+0x19f/0x7c0
[<ffffffff80252f96>] queue_work+0x56/0x60
[<ffffffff80237e10>] enqueue_task_fair+0x20/0x50
[<ffffffff80539279>] usb_hcd_submit_urb+0x379/0xbc0
[<ffffffff803b78c3>] cpumask_next_and+0x23/0x40
[<ffffffff80235177>] find_busiest_group+0x207/0x8a0
[<ffffffff8064784f>] _spin_lock_irqsave+0x1f/0x50
[<ffffffff803c7ea3>] check_unmap+0x203/0x490
[<ffffffff803c8259>] debug_dma_unmap_page+0x49/0x50
[<ffffffff80485f26>] nv_tx_done_optimized+0xc6/0x2c0
[<ffffffff80486c13>] nv_nic_irq_optimized+0x73/0x2b0
[<ffffffff8026df84>] handle_IRQ_event+0x34/0x70
[<ffffffff8026ffe9>] handle_edge_irq+0xc9/0x150
[<ffffffff8020e3ab>] do_IRQ+0xcb/0x1c0
[<ffffffff8020c093>] ret_from_intr+0x0/0xa
<EOI> <4>---[ end trace f6435a98e2a38c0e ]---

The driver developer can find the driver and the device including a stacktrace
of the DMA-API call which caused this warning.

By default only the first error will result in a warning message. All other
errors will only be silently counted. This limitation exists to prevent the
code from flooding your kernel log. To support debugging a device driver this
can be disabled via debugfs. See the debugfs interface documentation below
for details.

The debugfs directory for the DMA-API debugging code is called dma-api/. In
this directory the following files can currently be found:

	dma-api/all_errors	This file contains a numeric value. If this
				value is not equal to zero the debugging code
				will print a warning for every error it finds
				into the kernel log. Be careful with this
				option, as it can easily flood your logs.

	dma-api/disabled	This read-only file contains the character 'Y'
				if the debugging code is disabled. This can
				happen when it runs out of memory or if it was
				disabled at boot time.

	dma-api/error_count	This file is read-only and shows the total
				number of errors found.

	dma-api/num_errors	The number in this file shows how many
				warnings will be printed to the kernel log
				before it stops. This number is initialized to
				one at system boot and can be set by writing
				into this file.

	dma-api/min_free_entries
				This read-only file can be read to get the
				minimum number of free dma_debug_entries the
				allocator has ever seen. If this value goes
				down to zero the code will disable itself
				because it is no longer reliable.

	dma-api/num_free_entries
				The current number of free dma_debug_entries
				in the allocator.

	dma-api/driver-filter
				You can write a name of a driver into this file
				to limit the debug output to requests from that
				particular driver. Write an empty string to
				that file to disable the filter and see
				all errors again.

If you have this code compiled into your kernel it will be enabled by default.
If you want to boot without the bookkeeping anyway you can provide
'dma_debug=off' as a boot parameter. This will disable DMA-API debugging.
Notice that you cannot enable it again at runtime. You have to reboot to do
so.

If you want to see debug messages only for a specific device driver you can
specify the dma_debug_driver=<drivername> parameter. This will enable the
driver filter at boot time. The debug code will only print errors for that
driver afterwards. This filter can be disabled or changed later using debugfs.

When the code disables itself at runtime this is most likely because it ran
out of dma_debug_entries. These entries are preallocated at boot. The number
of preallocated entries is defined per architecture. If it is too low for you,
boot with 'dma_debug_entries=<your_desired_number>' to override the
architectural default.

void debug_dma_mapping_error(struct device *dev, dma_addr_t dma_addr);

dma-debug interface debug_dma_mapping_error() is used to debug drivers
that fail to check DMA mapping errors on addresses returned by the
dma_map_single() and dma_map_page() interfaces. This interface clears a
flag set by debug_dma_map_page() to indicate that dma_mapping_error()
has been called by the driver. When the driver does the unmap,
debug_dma_unmap() checks the flag and, if it is still set, prints a
warning message that includes the call trace leading up to the unmap.
This interface can be called from dma_mapping_error() routines to
enable DMA mapping error check debugging.