Lines Matching refs:accesses
41 - Locks vs memory accesses.
42 - Locks vs I/O accesses.
121 The set of accesses as seen by the memory system in the middle can be arranged
194 (*) On any given CPU, dependent memory accesses will be issued in order, with
251 (*) It _must_ be assumed that overlapping memory accesses may be merged or
458 an ACQUIRE on a given variable, all memory accesses preceding any prior
460 words, within a given variable's critical section, all accesses of all
484 (*) There is no guarantee that any of the memory accesses specified before a
487 access queue that accesses of the appropriate type may not cross.
492 of the first CPU's accesses occur, but see the next point:
495 from a second CPU's accesses, even _if_ the second CPU uses a memory
500 hardware[*] will not reorder the memory accesses. CPU cache coherency
1304 on the combined order of CPU 1's and CPU 2's accesses.
1328 compiler from moving the memory accesses either side of it to the other side:
1334 for barrier() that affects only the specific accesses flagged by the
1339 (*) Prevents the compiler from reordering accesses following the
1340 barrier() to precede any accesses preceding the barrier().
1365 In short, ACCESS_ONCE() provides cache coherence for accesses from
1474 (*) The compiler is within its rights to reorder memory accesses unless
1518 by something that also accesses 'flag' and 'msg', for example,
1569 multiple smaller accesses. For example, given an architecture having
1649 and will order overlapping accesses correctly with respect to itself.
1657 used to control MMIO effects on accesses through relaxed memory I/O windows.
1750 See the subsection "Locks vs I/O accesses" for more information.
1824 the two accesses can themselves then cross:
1912 anything at all - especially with respect to I/O accesses - unless combined
2078 separate data accesses. Thus the above sleeper ought to do:
2126 Then there is no guarantee as to what order CPU 3 will see the accesses to *A
2171 Without smp_mb__after_unlock_lock(), the accesses are not guaranteed
2178 Under certain circumstances (especially involving NUMA), I/O accesses within
2350 In this case, the barrier makes a guarantee that all memory accesses before the
2351 barrier will appear to happen before all the memory accesses after the barrier
2353 the memory accesses before the barrier will be complete by the time the barrier
2459 make the right memory accesses in exactly the right order.
2462 in that the carefully sequenced accesses in the driver code won't reach the
2464 efficient to reorder, combine or merge accesses - something that would cause
2468 routines - such as inb() or writel() - which know how to make such accesses
2517 If ordering rules are relaxed, it must be assumed that accesses done inside an
2519 accesses performed in an interrupt - and vice versa - unless implicit or
2522 Normally this won't be a problem because the I/O accesses done inside such
2592 respect to normal memory accesses (e.g. DMA buffers) nor do they guarantee
2594 required, an mmiowb() barrier can be used. Note that relaxed accesses to
2679 accesses to be performed. The core may place these in the queue in any order
2684 accesses cross from the CPU side of things to the memory side of things, and
2691 [!] MMIO or other device accesses may bypass the cache system. This depends on
2826 cachelets for normal memory accesses. The semantics of the Alpha removes the
2858 Amongst these properties is usually the fact that such accesses bypass the
2859 caching entirely and go directly to the device buses. This means MMIO accesses
2860 may, in effect, overtake accesses to cached memory that were emitted earlier.
2900 (*) the order of the memory accesses may be rearranged to promote better use
2904 memory or I/O hardware that can do batched accesses of adjacent locations,
2922 _own_ accesses appear to be correctly ordered, without the need for a memory
2941 accesses:
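
Several of the matched lines above describe the core barrier pairing in
memory-barriers.txt: one CPU writes a data item and then a flag with a write
barrier between them, while another CPU reads the flag and then the data with
a read barrier between them. As a rough, self-contained userspace sketch of
that pattern - assuming C11 atomic fences as a stand-in for the kernel's
smp_wmb()/smp_rmb() and ACCESS_ONCE() primitives, with the variable names
'data' and 'flag' purely illustrative - it might look like this:

	/*
	 * Hedged userspace analogue of the producer/consumer barrier
	 * pairing described in memory-barriers.txt.  C11 fences stand in
	 * for the kernel primitives; this is a sketch, not kernel code.
	 */
	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdio.h>

	static int data;		/* payload, written before the flag  */
	static atomic_int flag;		/* signals that the payload is ready */

	static void *producer(void *arg)
	{
		data = 42;					/* A: write payload      */
		atomic_thread_fence(memory_order_release);	/* ~ smp_wmb(): A before B */
		atomic_store_explicit(&flag, 1,
				      memory_order_relaxed);	/* B: publish the flag   */
		return NULL;
	}

	static void *consumer(void *arg)
	{
		/* ~ ACCESS_ONCE() polling loop: wait for the flag */
		while (!atomic_load_explicit(&flag, memory_order_relaxed))
			;
		atomic_thread_fence(memory_order_acquire);	/* ~ smp_rmb(): flag before data */
		printf("data = %d\n", data);			/* guaranteed to print 42 */
		return NULL;
	}

	int main(void)
	{
		pthread_t p, c;

		pthread_create(&c, NULL, consumer, NULL);
		pthread_create(&p, NULL, producer, NULL);
		pthread_join(p, NULL);
		pthread_join(c, NULL);
		return 0;
	}

Built with -pthread, the consumer is guaranteed to print data = 42; without
the two fences, the read of 'data' could observe its initial value on a
weakly ordered CPU even after the flag has been seen set.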