1               Dynamic DMA mapping using the generic device
2               ============================================
3
4        James E.J. Bottomley <James.Bottomley@HansenPartnership.com>
5
This document describes the DMA API.  For a more gentle introduction
to the API (and actual examples), see Documentation/DMA-API-HOWTO.txt.
8
9This API is split into two pieces.  Part I describes the basic API.
10Part II describes extensions for supporting non-consistent memory
11machines.  Unless you know that your driver absolutely has to support
12non-consistent platforms (this is usually only legacy platforms) you
13should only use the API described in part I.
14
15Part I - dma_ API
16-------------------------------------
17
18To get the dma_ API, you must #include <linux/dma-mapping.h>.  This
19provides dma_addr_t and the interfaces described below.
20
21A dma_addr_t can hold any valid DMA address for the platform.  It can be
22given to a device to use as a DMA source or target.  A CPU cannot reference
23a dma_addr_t directly because there may be translation between its physical
24address space and the DMA address space.
25
26Part Ia - Using large DMA-coherent buffers
27------------------------------------------
28
29void *
30dma_alloc_coherent(struct device *dev, size_t size,
31			     dma_addr_t *dma_handle, gfp_t flag)
32
33Consistent memory is memory for which a write by either the device or
34the processor can immediately be read by the processor or device
35without having to worry about caching effects.  (You may however need
36to make sure to flush the processor's write buffers before telling
37devices to read that memory.)
38
39This routine allocates a region of <size> bytes of consistent memory.
40
41It returns a pointer to the allocated region (in the processor's virtual
42address space) or NULL if the allocation failed.
43
44It also returns a <dma_handle> which may be cast to an unsigned integer the
45same width as the bus and given to the device as the DMA address base of
46the region.
47
48Note: consistent memory can be expensive on some platforms, and the
49minimum allocation length may be as big as a page, so you should
50consolidate your requests for consistent memory as much as possible.
51The simplest way to do that is to use the dma_pool calls (see below).
52
The flag parameter allows the caller to specify the GFP_ flags (see
kmalloc()) for the allocation (the implementation may choose to ignore
flags that affect the location of the returned memory, like GFP_DMA).
57
58void *
59dma_zalloc_coherent(struct device *dev, size_t size,
60			     dma_addr_t *dma_handle, gfp_t flag)
61
62Wraps dma_alloc_coherent() and also zeroes the returned memory if the
63allocation attempt succeeded.
64
65void
66dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
67			   dma_addr_t dma_handle)
68
69Free a region of consistent memory you previously allocated.  dev,
70size and dma_handle must all be the same as those passed into
71dma_alloc_coherent().  cpu_addr must be the virtual address returned by
72the dma_alloc_coherent().
73
74Note that unlike their sibling allocation calls, these routines
75may only be called with IRQs enabled.
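
For illustration, here is a minimal sketch of allocating and later
freeing a coherent descriptor ring.  The ring size and the dev pointer
are assumptions made up for the example:

	#define FOO_RING_BYTES	4096	/* hypothetical ring size */

	void *ring;
	dma_addr_t ring_dma;

	/* allocate zeroed coherent memory; ring_dma is the address
	 * the device is programmed with */
	ring = dma_zalloc_coherent(dev, FOO_RING_BYTES, &ring_dma,
				   GFP_KERNEL);
	if (!ring)
		return -ENOMEM;
	...
	/* tear down: same dev, size and dma_handle as the allocation */
	dma_free_coherent(dev, FOO_RING_BYTES, ring, ring_dma);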
76
77
78Part Ib - Using small DMA-coherent buffers
79------------------------------------------
80
81To get this part of the dma_ API, you must #include <linux/dmapool.h>
82
83Many drivers need lots of small DMA-coherent memory regions for DMA
84descriptors or I/O buffers.  Rather than allocating in units of a page
85or more using dma_alloc_coherent(), you can use DMA pools.  These work
86much like a struct kmem_cache, except that they use the DMA-coherent allocator,
87not __get_free_pages().  Also, they understand common hardware constraints
88for alignment, like queue heads needing to be aligned on N-byte boundaries.
89
90
91	struct dma_pool *
92	dma_pool_create(const char *name, struct device *dev,
93			size_t size, size_t align, size_t alloc);
94
95dma_pool_create() initializes a pool of DMA-coherent buffers
96for use with a given device.  It must be called in a context which
97can sleep.
98
99The "name" is for diagnostics (like a struct kmem_cache name); dev and size
100are like what you'd pass to dma_alloc_coherent().  The device's hardware
101alignment requirement for this type of data is "align" (which is expressed
102in bytes, and must be a power of two).  If your device has no boundary
103crossing restrictions, pass 0 for alloc; passing 4096 says memory allocated
104from this pool must not cross 4KByte boundaries.
105
106
107	void *dma_pool_alloc(struct dma_pool *pool, gfp_t gfp_flags,
108			dma_addr_t *dma_handle);
109
110This allocates memory from the pool; the returned memory will meet the
111size and alignment requirements specified at creation time.  Pass
112GFP_ATOMIC to prevent blocking, or if it's permitted (not
113in_interrupt, not holding SMP locks), pass GFP_KERNEL to allow
114blocking.  Like dma_alloc_coherent(), this returns two values:  an
115address usable by the CPU, and the DMA address usable by the pool's
116device.
117
118
119	void dma_pool_free(struct dma_pool *pool, void *vaddr,
120			dma_addr_t addr);
121
122This puts memory back into the pool.  The pool is what was passed to
123dma_pool_alloc(); the CPU (vaddr) and DMA addresses are what
124were returned when that routine allocated the memory being freed.
125
126
127	void dma_pool_destroy(struct dma_pool *pool);
128
129dma_pool_destroy() frees the resources of the pool.  It must be
130called in a context which can sleep.  Make sure you've freed all allocated
131memory back to the pool before you destroy it.
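
Putting the four calls together, here is a hedged sketch of a pool of
64-byte descriptors aligned to 64-byte boundaries (the pool name and
sizes are made up for the example):

	struct dma_pool *pool;
	void *desc;
	dma_addr_t desc_dma;

	/* no boundary-crossing restriction, so pass 0 for alloc */
	pool = dma_pool_create("foo-desc", dev, 64, 64, 0);
	if (!pool)
		return -ENOMEM;

	desc = dma_pool_alloc(pool, GFP_KERNEL, &desc_dma);
	if (!desc) {
		dma_pool_destroy(pool);
		return -ENOMEM;
	}
	...
	/* return every allocation before destroying the pool */
	dma_pool_free(pool, desc, desc_dma);
	dma_pool_destroy(pool);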
132
133
134Part Ic - DMA addressing limitations
135------------------------------------
136
137int
138dma_supported(struct device *dev, u64 mask)
139
140Checks to see if the device can support DMA to the memory described by
141mask.
142
143Returns: 1 if it can and 0 if it can't.
144
Notes: This routine merely tests to see if the mask is possible.  It
won't change the current mask settings.  It is intended as an internal
API for use by the platform rather than an external API for use by
driver writers.
149
150int
151dma_set_mask_and_coherent(struct device *dev, u64 mask)
152
153Checks to see if the mask is possible and updates the device
154streaming and coherent DMA mask parameters if it is.
155
156Returns: 0 if successful and a negative error if not.
157
158int
159dma_set_mask(struct device *dev, u64 mask)
160
161Checks to see if the mask is possible and updates the device
162parameters if it is.
163
164Returns: 0 if successful and a negative error if not.
165
166int
167dma_set_coherent_mask(struct device *dev, u64 mask)
168
169Checks to see if the mask is possible and updates the device
170parameters if it is.
171
172Returns: 0 if successful and a negative error if not.
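
As an illustration, a common probe-time pattern (a sketch; the error
handling details depend on the driver) is to try a large mask first
and fall back to a smaller one:

	int err;

	/* prefer 64-bit DMA, fall back to 32-bit if unsupported */
	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64))) {
		err = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
		if (err) {
			dev_warn(dev, "no suitable DMA mask available\n");
			return err;
		}
	}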
173
174u64
175dma_get_required_mask(struct device *dev)
176
177This API returns the mask that the platform requires to
178operate efficiently.  Usually this means the returned mask
179is the minimum required to cover all of memory.  Examining the
180required mask gives drivers with variable descriptor sizes the
181opportunity to use smaller descriptors as necessary.
182
183Requesting the required mask does not alter the current mask.  If you
184wish to take advantage of it, you should issue a dma_set_mask()
185call to set the mask to the value returned.
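
For example (a sketch; foo and its use_64bit_descriptors flag are a
hypothetical driver-private structure), a driver with two descriptor
formats might do:

	/* use compact 32-bit descriptors unless the platform
	 * actually needs more than 32 bits to cover its memory */
	if (dma_get_required_mask(dev) > DMA_BIT_MASK(32) &&
	    !dma_set_mask(dev, DMA_BIT_MASK(64)))
		foo->use_64bit_descriptors = 1;
	else if (!dma_set_mask(dev, DMA_BIT_MASK(32)))
		foo->use_64bit_descriptors = 0;
	else
		return -EIO;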
186
187
188Part Id - Streaming DMA mappings
189--------------------------------
190
191dma_addr_t
192dma_map_single(struct device *dev, void *cpu_addr, size_t size,
193		      enum dma_data_direction direction)
194
195Maps a piece of processor virtual memory so it can be accessed by the
196device and returns the DMA address of the memory.
197
The dma_ API uses a strongly typed enumerator for its direction:
201
202DMA_NONE		no direction (used for debugging)
203DMA_TO_DEVICE		data is going from the memory to the device
204DMA_FROM_DEVICE		data is coming from the device to the memory
205DMA_BIDIRECTIONAL	direction isn't known
206
207Notes:  Not all memory regions in a machine can be mapped by this API.
208Further, contiguous kernel virtual space may not be contiguous as
209physical memory.  Since this API does not provide any scatter/gather
210capability, it will fail if the user tries to map a non-physically
211contiguous piece of memory.  For this reason, memory to be mapped by
212this API should be obtained from sources which guarantee it to be
213physically contiguous (like kmalloc).
214
215Further, the DMA address of the memory must be within the
216dma_mask of the device (the dma_mask is a bit mask of the
217addressable region for the device, i.e., if the DMA address of
218the memory ANDed with the dma_mask is still equal to the DMA
219address, then the device can perform DMA to the memory).  To
220ensure that the memory allocated by kmalloc is within the dma_mask,
221the driver may specify various platform-dependent flags to restrict
222the DMA address range of the allocation (e.g., on x86, GFP_DMA
223guarantees to be within the first 16MB of available DMA addresses,
224as required by ISA devices).
225
226Note also that the above constraints on physical contiguity and
227dma_mask may not apply if the platform has an IOMMU (a device which
228maps an I/O DMA address to a physical memory address).  However, to be
229portable, device driver writers may *not* assume that such an IOMMU
230exists.
231
232Warnings:  Memory coherency operates at a granularity called the cache
233line width.  In order for memory mapped by this API to operate
234correctly, the mapped region must begin exactly on a cache line
235boundary and end exactly on one (to prevent two separately mapped
236regions from sharing a single cache line).  Since the cache line size
237may not be known at compile time, the API will not enforce this
238requirement.  Therefore, it is recommended that driver writers who
239don't take special care to determine the cache line size at run time
240only map virtual regions that begin and end on page boundaries (which
241are guaranteed also to be cache line boundaries).
242
DMA_TO_DEVICE synchronisation must be done after the last modification
of the memory region by the software and before it is handed off to
the device.  Once this primitive is used, memory covered by this
246primitive should be treated as read-only by the device.  If the device
247may write to it at any point, it should be DMA_BIDIRECTIONAL (see
248below).
249
250DMA_FROM_DEVICE synchronisation must be done before the driver
251accesses data that may be changed by the device.  This memory should
252be treated as read-only by the driver.  If the driver needs to write
253to it at any point, it should be DMA_BIDIRECTIONAL (see below).
254
255DMA_BIDIRECTIONAL requires special handling: it means that the driver
256isn't sure if the memory was modified before being handed off to the
257device and also isn't sure if the device will also modify it.  Thus,
258you must always sync bidirectional memory twice: once before the
259memory is handed off to the device (to make sure all memory changes
260are flushed from the processor) and once before the data may be
261accessed after being used by the device (to make sure any processor
262cache lines are updated with data that the device may have changed).
263
264void
265dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
266		 enum dma_data_direction direction)
267
Unmaps the region previously mapped.  All the parameters must be
identical to those passed in (and returned by) the mapping API.
271
272dma_addr_t
273dma_map_page(struct device *dev, struct page *page,
274		    unsigned long offset, size_t size,
275		    enum dma_data_direction direction)
276void
277dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
278	       enum dma_data_direction direction)
279
API for mapping and unmapping pages.  All the notes and warnings
281for the other mapping APIs apply here.  Also, although the <offset>
282and <size> parameters are provided to do partial page mapping, it is
283recommended that you never use these unless you really know what the
284cache width is.
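
A hedged sketch of the page interface (page, dev and the direction are
assumptions; a whole page is mapped to stay clear of cache line
issues):

	dma_addr_t dma;

	dma = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_TO_DEVICE);
	/* check dma with dma_mapping_error(), described next */
	...
	dma_unmap_page(dev, dma, PAGE_SIZE, DMA_TO_DEVICE);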
285
286int
287dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
288
289In some circumstances dma_map_single() and dma_map_page() will fail to create
290a mapping. A driver can check for these errors by testing the returned
291DMA address with dma_mapping_error(). A non-zero return value means the mapping
292could not be created and the driver should take appropriate action (e.g.
293reduce current DMA mapping usage or delay and try again later).
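
For example (a sketch; buf, len and the error label are made up):

	dma_addr_t dma;

	dma = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, dma)) {
		/* reduce DMA mapping usage, or defer and retry later */
		goto map_error_handling;
	}
	...
	dma_unmap_single(dev, dma, len, DMA_TO_DEVICE);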
294
295	int
296	dma_map_sg(struct device *dev, struct scatterlist *sg,
297		int nents, enum dma_data_direction direction)
298
299Returns: the number of DMA address segments mapped (this may be shorter
300than <nents> passed in if some elements of the scatter/gather list are
301physically or virtually adjacent and an IOMMU maps them with a single
302entry).
303
304Please note that the sg cannot be mapped again if it has been mapped once.
305The mapping process is allowed to destroy information in the sg.
306
As with the other mapping interfaces, dma_map_sg() can fail. When it
does, 0 is returned and a driver must take appropriate action. It is
critical that the driver do something: in the case of a block driver,
aborting the request or even oopsing is better than doing nothing and
corrupting the filesystem.
312
313With scatterlists, you use the resulting mapping like this:
314
315	int i, count = dma_map_sg(dev, sglist, nents, direction);
316	struct scatterlist *sg;
317
318	for_each_sg(sglist, sg, count, i) {
319		hw_address[i] = sg_dma_address(sg);
320		hw_len[i] = sg_dma_len(sg);
321	}
322
323where nents is the number of entries in the sglist.
324
The implementation is free to merge several consecutive sglist entries
into one (e.g. with an IOMMU, or if several pages just happen to be
physically contiguous) and returns the actual number of sg entries it
mapped them to.  On failure, 0 is returned.
329
330Then you should loop count times (note: this can be less than nents times)
331and use sg_dma_address() and sg_dma_len() macros where you previously
332accessed sg->address and sg->length as shown above.
333
334	void
335	dma_unmap_sg(struct device *dev, struct scatterlist *sg,
336		int nhwentries, enum dma_data_direction direction)
337
Unmap the previously mapped scatter/gather list.  All the parameters
must be the same as those passed in to the scatter/gather mapping
API.
341
342Note: <nents> must be the number you passed in, *not* the number of
343DMA address entries returned.
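
Continuing the scatterlist example above, the unmap pairs with the
original call like this (a sketch):

	count = dma_map_sg(dev, sglist, nents, direction);
	...
	/* nents, not count, goes back in here */
	dma_unmap_sg(dev, sglist, nents, direction);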
344
345void
346dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, size_t size,
347			enum dma_data_direction direction)
348void
349dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle, size_t size,
350			   enum dma_data_direction direction)
351void
352dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nelems,
353		    enum dma_data_direction direction)
354void
355dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, int nelems,
356		       enum dma_data_direction direction)
357
358Synchronise a single contiguous or scatter/gather mapping for the CPU
359and device. With the sync_sg API, all the parameters must be the same
360as those passed into the single mapping API. With the sync_single API,
361you can use dma_handle and size parameters that aren't identical to
362those passed into the single mapping API to do a partial sync.
363
364Notes:  You must do this:
365
- Before reading values that have been written by DMA from the device
  (use the DMA_FROM_DEVICE direction)
- After writing values that will be written to the device using DMA
  (use the DMA_TO_DEVICE direction)
- Before *and* after handing memory to the device if the memory is
  DMA_BIDIRECTIONAL
372
373See also dma_map_single().
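
A hedged sketch of reusing one streaming mapping across several device
transfers (buf, len and process_data() are hypothetical):

	dma_addr_t dma;

	dma = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
	...
	/* device wrote into buf; make the data visible to the CPU */
	dma_sync_single_for_cpu(dev, dma, len, DMA_FROM_DEVICE);
	process_data(buf, len);
	/* hand the buffer back to the device for the next transfer */
	dma_sync_single_for_device(dev, dma, len, DMA_FROM_DEVICE);
	...
	dma_unmap_single(dev, dma, len, DMA_FROM_DEVICE);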
374
375dma_addr_t
376dma_map_single_attrs(struct device *dev, void *cpu_addr, size_t size,
377		     enum dma_data_direction dir,
378		     struct dma_attrs *attrs)
379
380void
381dma_unmap_single_attrs(struct device *dev, dma_addr_t dma_addr,
382		       size_t size, enum dma_data_direction dir,
383		       struct dma_attrs *attrs)
384
385int
386dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
387		 int nents, enum dma_data_direction dir,
388		 struct dma_attrs *attrs)
389
390void
391dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sgl,
392		   int nents, enum dma_data_direction dir,
393		   struct dma_attrs *attrs)
394
395The four functions above are just like the counterpart functions
396without the _attrs suffixes, except that they pass an optional
397struct dma_attrs*.
398
399struct dma_attrs encapsulates a set of "DMA attributes". For the
400definition of struct dma_attrs see linux/dma-attrs.h.
401
402The interpretation of DMA attributes is architecture-specific, and
403each attribute should be documented in Documentation/DMA-attributes.txt.
404
405If struct dma_attrs* is NULL, the semantics of each of these
406functions is identical to those of the corresponding function
407without the _attrs suffix. As a result dma_map_single_attrs()
408can generally replace dma_map_single(), etc.
409
410As an example of the use of the *_attrs functions, here's how
411you could pass an attribute DMA_ATTR_FOO when mapping memory
412for DMA:
413
414#include <linux/dma-attrs.h>
415/* DMA_ATTR_FOO should be defined in linux/dma-attrs.h and
416 * documented in Documentation/DMA-attributes.txt */
417...
418
419	DEFINE_DMA_ATTRS(attrs);
420	dma_set_attr(DMA_ATTR_FOO, &attrs);
421	....
	n = dma_map_sg_attrs(dev, sg, nents, DMA_TO_DEVICE, &attrs);
423	....
424
425Architectures that care about DMA_ATTR_FOO would check for its
426presence in their implementations of the mapping and unmapping
427routines, e.g.:
428
void whizco_dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
			     int nents, enum dma_data_direction dir,
			     struct dma_attrs *attrs)
{
	....
	int foo = dma_get_attr(DMA_ATTR_FOO, attrs);
	....
	if (foo)
		/* twizzle the frobnozzle */
	....
}
439
440
441Part II - Advanced dma_ usage
442-----------------------------
443
444Warning: These pieces of the DMA API should not be used in the
445majority of cases, since they cater for unlikely corner cases that
446don't belong in usual drivers.
447
448If you don't understand how cache line coherency works between a
449processor and an I/O device, you should not be using this part of the
450API at all.
451
452void *
453dma_alloc_noncoherent(struct device *dev, size_t size,
454			       dma_addr_t *dma_handle, gfp_t flag)
455
456Identical to dma_alloc_coherent() except that the platform will
457choose to return either consistent or non-consistent memory as it sees
458fit.  By using this API, you are guaranteeing to the platform that you
459have all the correct and necessary sync points for this memory in the
460driver should it choose to return non-consistent memory.
461
462Note: where the platform can return consistent memory, it will
463guarantee that the sync points become nops.
464
465Warning:  Handling non-consistent memory is a real pain.  You should
466only use this API if you positively know your driver will be
467required to work on one of the rare (usually non-PCI) architectures
468that simply cannot make consistent memory.
469
470void
471dma_free_noncoherent(struct device *dev, size_t size, void *cpu_addr,
472			      dma_addr_t dma_handle)
473
Free memory allocated by the nonconsistent API.  All parameters must
be identical to those passed into (and returned by)
dma_alloc_noncoherent().
477
478int
479dma_get_cache_alignment(void)
480
481Returns the processor cache alignment.  This is the absolute minimum
482alignment *and* width that you must observe when either mapping
483memory or doing partial flushes.
484
485Notes: This API may return a number *larger* than the actual cache
486line, but it will guarantee that one or more cache lines fit exactly
487into the width returned by this call.  It will also always be a power
488of two for easy alignment.
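
For instance, a driver doing partial flushes might round its sync
length up like this (a sketch; len is an assumption):

	int align = dma_get_cache_alignment();
	size_t sync_len = ALIGN(len, align);	/* ALIGN() from kernel.h */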
489
490void
491dma_cache_sync(struct device *dev, void *vaddr, size_t size,
492	       enum dma_data_direction direction)
493
494Do a partial sync of memory that was allocated by
495dma_alloc_noncoherent(), starting at virtual address vaddr and
496continuing on for size.  Again, you *must* observe the cache line
497boundaries when doing this.
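
A minimal sketch tying the noncoherent calls together (size, data and
the memcpy are assumptions made up for the example):

	void *buf;
	dma_addr_t dma;

	buf = dma_alloc_noncoherent(dev, size, &dma, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	memcpy(buf, data, size);
	/* flush the CPU's view out to the device before the DMA */
	dma_cache_sync(dev, buf, size, DMA_TO_DEVICE);
	...
	dma_free_noncoherent(dev, size, buf, dma);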
498
int
dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
			    dma_addr_t device_addr, size_t size,
			    int flags)
503
Declare a region of memory to be handed out by dma_alloc_coherent()
when it's asked for coherent memory for this device.
506
507phys_addr is the CPU physical address to which the memory is currently
508assigned (this will be ioremapped so the CPU can access the region).
509
510device_addr is the DMA address the device needs to be programmed
511with to actually address this memory (this will be handed out as the
512dma_addr_t in dma_alloc_coherent()).
513
514size is the size of the area (must be multiples of PAGE_SIZE).
515
516flags can be ORed together and are:
517
518DMA_MEMORY_MAP - request that the memory returned from
519dma_alloc_coherent() be directly writable.
520
521DMA_MEMORY_IO - request that the memory returned from
522dma_alloc_coherent() be addressable using read()/write()/memcpy_toio() etc.
523
524One or both of these flags must be present.
525
526DMA_MEMORY_INCLUDES_CHILDREN - make the declared memory be allocated by
527dma_alloc_coherent of any child devices of this one (for memory residing
528on a bridge).
529
530DMA_MEMORY_EXCLUSIVE - only allocate memory from the declared regions. 
531Do not allow dma_alloc_coherent() to fall back to system memory when
532it's out of memory in the declared region.
533
534The return value will be either DMA_MEMORY_MAP or DMA_MEMORY_IO and
535must correspond to a passed in flag (i.e. no returning DMA_MEMORY_IO
536if only DMA_MEMORY_MAP were passed in) for success or zero for
537failure.
538
539Note, for DMA_MEMORY_IO returns, all subsequent memory returned by
540dma_alloc_coherent() may no longer be accessed directly, but instead
541must be accessed using the correct bus functions.  If your driver
542isn't prepared to handle this contingency, it should not specify
543DMA_MEMORY_IO in the input flags.
544
545As a simplification for the platforms, only *one* such region of
546memory may be declared per device.
547
548For reasons of efficiency, most platforms choose to track the declared
549region only at the granularity of a page.  For smaller allocations,
550you should use the dma_pool() API.
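
As an illustration (the addresses and size are made-up values for a
hypothetical device-local SRAM window):

	/* 1MB of device SRAM at bus address 0x88000000, which the
	 * CPU also sees at physical address 0x88000000 */
	if (!dma_declare_coherent_memory(dev, 0x88000000, 0x88000000,
					 0x100000, DMA_MEMORY_MAP))
		return -ENXIO;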
551
552void
553dma_release_declared_memory(struct device *dev)
554
555Remove the memory region previously declared from the system.  This
556API performs *no* in-use checking for this region and will return
557unconditionally having removed all the required structures.  It is the
558driver's job to ensure that no parts of this memory region are
559currently in use.
560
561void *
562dma_mark_declared_memory_occupied(struct device *dev,
563				  dma_addr_t device_addr, size_t size)
564
565This is used to occupy specific regions of the declared space
566(dma_alloc_coherent() will hand out the first free region it finds).
567
568device_addr is the *device* address of the region requested.
569
570size is the size (and should be a page-sized multiple).
571
572The return value will be either a pointer to the processor virtual
573address of the memory, or an error (via PTR_ERR()) if any part of the
574region is occupied.
575
Part III - Debugging driver use of the DMA-API
577-------------------------------------------
578
The DMA-API as described above has some constraints; for example, DMA
addresses must be released with the corresponding function and with the
same size. With the advent of hardware IOMMUs it becomes more and more
important that drivers do not violate those constraints. In the worst
case such a violation can result in data corruption, up to and
including destroyed filesystems.
584
To debug drivers and find bugs in the usage of the DMA-API, checking
code can be compiled into the kernel which will tell the developer about
those violations. If your architecture supports it you can select the
"Enable debugging of DMA-API usage" option in your kernel configuration.
Enabling this option has a performance impact. Do not enable it in
production kernels.
590
If you boot the resulting kernel it will contain code which does some
bookkeeping about what DMA memory was allocated for which device. If
this code detects an error it prints a warning message with some details
into your kernel log. An example warning message may look like this:
595
596------------[ cut here ]------------
597WARNING: at /data2/repos/linux-2.6-iommu/lib/dma-debug.c:448
598	check_unmap+0x203/0x490()
599Hardware name:
600forcedeth 0000:00:08.0: DMA-API: device driver frees DMA memory with wrong
601	function [device address=0x00000000640444be] [size=66 bytes] [mapped as
602single] [unmapped as page]
603Modules linked in: nfsd exportfs bridge stp llc r8169
604Pid: 0, comm: swapper Tainted: G        W  2.6.28-dmatest-09289-g8bb99c0 #1
605Call Trace:
606 <IRQ>  [<ffffffff80240b22>] warn_slowpath+0xf2/0x130
607 [<ffffffff80647b70>] _spin_unlock+0x10/0x30
608 [<ffffffff80537e75>] usb_hcd_link_urb_to_ep+0x75/0xc0
609 [<ffffffff80647c22>] _spin_unlock_irqrestore+0x12/0x40
610 [<ffffffff8055347f>] ohci_urb_enqueue+0x19f/0x7c0
611 [<ffffffff80252f96>] queue_work+0x56/0x60
612 [<ffffffff80237e10>] enqueue_task_fair+0x20/0x50
613 [<ffffffff80539279>] usb_hcd_submit_urb+0x379/0xbc0
614 [<ffffffff803b78c3>] cpumask_next_and+0x23/0x40
615 [<ffffffff80235177>] find_busiest_group+0x207/0x8a0
616 [<ffffffff8064784f>] _spin_lock_irqsave+0x1f/0x50
617 [<ffffffff803c7ea3>] check_unmap+0x203/0x490
618 [<ffffffff803c8259>] debug_dma_unmap_page+0x49/0x50
619 [<ffffffff80485f26>] nv_tx_done_optimized+0xc6/0x2c0
620 [<ffffffff80486c13>] nv_nic_irq_optimized+0x73/0x2b0
621 [<ffffffff8026df84>] handle_IRQ_event+0x34/0x70
622 [<ffffffff8026ffe9>] handle_edge_irq+0xc9/0x150
623 [<ffffffff8020e3ab>] do_IRQ+0xcb/0x1c0
624 [<ffffffff8020c093>] ret_from_intr+0x0/0xa
625 <EOI> <4>---[ end trace f6435a98e2a38c0e ]---
626
627The driver developer can find the driver and the device including a stacktrace
628of the DMA-API call which caused this warning.
629
By default only the first error will result in a warning message; all
other errors will only be silently counted. This limitation exists to
prevent the code from flooding your kernel log. To support debugging a
device driver, this can be disabled via debugfs. See the debugfs
interface documentation below for details.
635
636The debugfs directory for the DMA-API debugging code is called dma-api/. In
637this directory the following files can currently be found:
638
639	dma-api/all_errors	This file contains a numeric value. If this
640				value is not equal to zero the debugging code
641				will print a warning for every error it finds
642				into the kernel log. Be careful with this
643				option, as it can easily flood your logs.
644
	dma-api/disabled	This read-only file contains the character 'Y'
				if the debugging code is disabled. This can
				happen when it runs out of memory or if it was
				disabled at boot time.
649
	dma-api/error_count	This file is read-only and shows the total
				number of errors found.
652
	dma-api/num_errors	The number in this file shows how many
				warnings will be printed to the kernel log
				before it stops. This number is initialized to
				one at system boot and can be set by writing
				into this file.
658
659	dma-api/min_free_entries
660				This read-only file can be read to get the
661				minimum number of free dma_debug_entries the
662				allocator has ever seen. If this value goes
663				down to zero the code will disable itself
				because it is no longer reliable.
665
666	dma-api/num_free_entries
667				The current number of free dma_debug_entries
668				in the allocator.
669
670	dma-api/driver-filter
671				You can write a name of a driver into this file
672				to limit the debug output to requests from that
673				particular driver. Write an empty string to
674				that file to disable the filter and see
675				all errors again.
676
677If you have this code compiled into your kernel it will be enabled by default.
678If you want to boot without the bookkeeping anyway you can provide
679'dma_debug=off' as a boot parameter. This will disable DMA-API debugging.
Notice that you cannot enable it again at runtime. You have to reboot to do
681so.
682
683If you want to see debug messages only for a special device driver you can
684specify the dma_debug_driver=<drivername> parameter. This will enable the
685driver filter at boot time. The debug code will only print errors for that
686driver afterwards. This filter can be disabled or changed later using debugfs.
687
688When the code disables itself at runtime this is most likely because it ran
689out of dma_debug_entries. These entries are preallocated at boot. The number
of preallocated entries is defined per architecture. If it is too low
for you, boot with 'dma_debug_entries=<your_desired_number>' to
overwrite the
692architectural default.
693
void debug_dma_mapping_error(struct device *dev, dma_addr_t dma_addr);
695
debug_dma_mapping_error() is a dma-debug interface for debugging drivers
that fail to check for DMA mapping errors on addresses returned by
dma_map_single() and dma_map_page(). This interface clears a flag set by
debug_dma_map_page() to indicate that dma_mapping_error() has been
called by the driver. When the driver does the unmap, debug_dma_unmap()
checks the flag and, if the flag is still set, prints a warning message
that includes the call trace leading up to the unmap. This interface can
be called from dma_mapping_error() routines to enable DMA mapping error
check debugging.
704
705