Lines Matching refs:request
59 2.3 Changes in the request structure
63 3.2.1 Traversing segments and completion units in a request
117 a per-queue level (e.g. maximum request size, maximum number of segments in
134 Sets two variables that limit the size of the request.
136 - The request queue's max_sectors, which is a soft size in
140 - The request queue's max_hw_sectors, which is a hard limit
141 and reflects the maximum size request a driver can handle
148 Maximum physical segments you can handle in a request. 128
152 Maximum dma segments the hardware can handle in a request. 128
176 setting the queue bounce limit for the request queue for the device
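
A rough init-time sketch of the settings above (the blk_queue_* helpers
follow the 2.6-era block API; my_setup_limits is a made-up name):

	#include <linux/blkdev.h>

	static void my_setup_limits(request_queue_t *q)
	{
		blk_queue_max_sectors(q, 128);		/* request size cap, 512b sectors */
		blk_queue_max_phys_segments(q, 128);	/* post-merge sg entries */
		blk_queue_max_hw_segments(q, 128);	/* sg entries hardware can take */
		blk_queue_bounce_limit(q, BLK_BOUNCE_HIGH); /* bounce highmem pages */
	}
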
230 add request, extract request, which makes it possible to abstract specific
257 from above, e.g. indicating that an i/o is just a readahead request, or priority
265 Arjan's proposed request priority scheme allows higher levels some broad
266 control (high/med/low) over the priority of an i/o request vs other pending
293 the blk_do_rq routine can be used to place the request on the queue and
295 invoke a lower level driver specific interface with the request as a
298 If the request is a means for passing on special information associated with
299 the command, then such information is associated with the request->special
300 field (rather than misuse the request->buffer field which is meant for the
301 request data buffer's virtual mapping).
303 For passing request data, the caller must build up a bio descriptor
305 bio segments or uses the block layer end*request* functions for i/o
306 completion. Alternatively one could directly use the request->buffer field to
308 addresses passed in this way and ignores bio entries for the request type
310 request->buffer, request->sector and request->nr_sectors or
311 request->current_nr_sectors fields itself rather than using the block layer
313 (See 2.3 or Documentation/block/request.txt for a brief explanation of
314 the request structure fields)
324 <SUP: What I meant here was that if the request doesn't have a bio, then
326 and hence can't be used for advancing request state settings on the
330 and always returns 0 if there are none associated with the request.
336 A request can be created with a pre-built custom command to be sent directly
337 to the device. The cmd block in the request structure has room for filling
339 command pre-building, and the type of the request is now indicated
342 The request structure flags can be set up to indicate the type of request
344 packet command issued via blk_do_rq, REQ_SPECIAL: special request).
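
As a sketch, issuing such a pre-built command could look like this (the
exact blk_do_rq prototype moved around during 2.5, and cdb, cdb_len and
my_cookie are invented for illustration):

	struct request *rq = blk_get_request(q, READ, __GFP_WAIT);

	rq->flags = REQ_PC;		/* packet command, not a fs request */
	memcpy(rq->cmd, cdb, cdb_len);	/* the pre-built device command */
	rq->special = my_cookie;	/* driver-private info, not i/o data */
	blk_do_rq(q, rq);		/* place on queue, wait for completion */
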
347 Drivers can now specify a request prepare function (q->prep_rq_fn) that the
348 block layer would invoke to pre-build device commands for a given request,
349 or perform other preparatory processing for the request. This routine is
350 called by elv_next_request(), i.e. typically just before servicing a request.
356 request on the queue, rather than construct the command on the fly in the
357 driver while servicing the request queue when it may affect latencies in
359 pre-building would be to do it whenever we fail to merge on a request.
360 Now REQ_NOMERGE is set in the request flags to skip this one in the future,
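
A minimal prepare-hook sketch, registered at init time with
blk_queue_prep_rq(q, my_prep_rq) (my_build_command is hypothetical):

	static int my_prep_rq(request_queue_t *q, struct request *rq)
	{
		if (my_build_command(rq) < 0)
			return BLKPREP_KILL;	/* fail the request outright */
		rq->flags |= REQ_DONTPREP;	/* prepared once, skip next time */
		return BLKPREP_OK;
	}
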
370 layer, and the low level request structure was associated with a chain of
371 buffer heads for a contiguous i/o request. This led to certain inefficiencies
432 struct bio *bi_next; /* request queue link */
456 - Splitting of an i/o request across multiple devices (as in the case of
462 by using rq_for_each_segment. This handles the fact that a request
470 (*) unrelated merges -- a request ends up containing two or more bios that
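
For example, a completion-unit walk might look like this
(rq_for_each_segment passed the bio_vec as a pointer in the 2.6.24-era
API; my_xfer_piece is invented):

	struct req_iterator iter;
	struct bio_vec *bvec;

	rq_for_each_segment(bvec, rq, iter)
		/* one (page, offset, len) piece; bio boundaries stay hidden */
		my_xfer_piece(bvec->bv_page, bvec->bv_offset, bvec->bv_len);
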
494 The request structure is the structure that gets passed down to low level
495 drivers. The block layer make_request function builds up a request structure,
497 use of block layer helper routine elv_next_request to pull the next request
500 request structure.
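
The canonical shape of that driver loop, roughly (my_handle_rq is
hypothetical; the request function is entered with the queue lock held):

	static void my_request_fn(request_queue_t *q)
	{
		struct request *rq;

		while ((rq = elv_next_request(q)) != NULL)
			my_handle_rq(rq);
	}
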
505 Refer to Documentation/block/request.txt for details about all the request
509 struct request {
546 int tag; /* command tag associated with request */
564 request that remain to be transferred (no change). The purpose of the
566 over the request to the driver. These values are updated by block on
568 transfer and invokes block end*request helpers to mark this. The
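
Roughly, a partial-completion path might look like this (the
end_that_request_* prototypes drifted over 2.5/2.6, so treat this as a
sketch; nsect is whatever the hardware just finished transferring):

	if (!end_that_request_first(rq, uptodate, nsect)) {
		/* nothing left: pull it off the queue and finish up */
		blkdev_dequeue_request(rq);
		end_that_request_last(rq);
	}
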
579 Code that sets up its own request structures and passes them down to
642 3.2.1 Traversing segments and completion units in a request
645 in the request list (drivers should avoid directly trying to do it
661 gather lists from a request, so a driver need not do it on its own.
666 to modify the internals of request to scatterlist conversion down the line
681 hw data segments in a request (i.e. the maximum number of address/length
685 of physical data segments in a request (i.e. the largest sized scatter list
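
The scatter list itself is typically built with blk_rq_map_sg, along
these lines (MY_MAX_SEGMENTS is an invented bound that should match the
queue's segment limits):

	struct scatterlist my_sg[MY_MAX_SEGMENTS];
	int nseg;

	nseg = blk_rq_map_sg(q, rq, my_sg);
	/* my_sg[0..nseg-1] now describes the request's data for dma setup */
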
693 request can be kicked off) as before. With the introduction of multi-page
708 is crossed on completion of a transfer. (The end*request* functions should
709 be used only if the request has come down from the block/bio path, not for
712 3.2.5 Generic request command tagging
734 blk_queue_start_tag(struct request_queue *q, struct request *rq)
736 Start tagged operation for this request. A free tag number between
737 0 and 'depth' is assigned to the request (rq->tag holds this number),
742 blk_queue_end_tag(struct request_queue *q, struct request *rq)
744 End tagged operation on this request. 'rq' is removed from the internal
747 To minimize struct request and queue overhead, the tag helpers utilize some
748 of the same request members that are used for normal request queue management.
749 This means that a request cannot both be an active tag and be on the queue
750 list at the same time. blk_queue_start_tag() will remove the request, but
752 completion of the request to the block layer. This means ending tag
757 queue. For instance, on IDE any tagged request error needs to clear both
764 to the request queue. The driver will receive them again on the
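
That bulk requeue is a single call, e.g.:

	blk_queue_invalidate_tags(q);	/* requeue all in-flight tagged requests */
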
771 tag number to the associated request. These are, in no particular order:
779 Returns a pointer to the request associated with tag 'tag'.
792 Returns 1 if the request 'rq' is tagged.
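
Pulling the tag helpers together, a driver's tagged i/o path might look
roughly like this (MY_DEPTH and my_issue_tagged are invented; locking
and error handling omitted):

	/* init time */
	blk_queue_init_tags(q, MY_DEPTH, NULL);

	/* request function: tag and issue */
	if (blk_queue_start_tag(q, rq) == 0)	/* rq->tag assigned, rq dequeued */
		my_issue_tagged(rq);

	/* completion: release the tag before ending the request */
	if (blk_rq_tagged(rq))
		blk_queue_end_tag(q, rq);
	end_that_request_last(rq);
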
799 struct request **tag_index; /* array of pointers to rq */
807 a bit of explaining. Normally we don't care too much about request ordering,
879 of its request processing, since that would make it hard for the higher layer
881 all such transient state should either be maintained in the request structure,
915 being merged, the request is gone.
917 elevator_merged_fn called when a request in the scheduler has been
919 scheduler for example, to reposition the request
924 request safely. The io scheduler may still
940 elevator_add_req_fn* called to add a new request into the scheduler
943 elevator_latter_req_fn These return the request before or after the
947 elevator_completed_req_fn called when a request is completed.
950 current context to queue a new request even if
956 specific storage for a request.
958 elevator_activate_req_fn Called when device driver first sees a request.
960 determine when actual execution of a request
963 a request by requeueing it.
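
A skeletal ops table wiring up the hooks named above might look like
this (field spellings follow the early-2.6 elevator_ops; the my_*
handlers are placeholders):

	static struct elevator_ops my_elv_ops = {
		.elevator_merge_fn		= my_merge,
		.elevator_add_req_fn		= my_add_request,
		.elevator_next_req_fn		= my_next_request,
		.elevator_completed_req_fn	= my_completed_request,
		.elevator_set_req_fn		= my_set_request,  /* per-rq storage */
	};
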
984 optimal disk scan and request servicing performance (based on generic
997 request in sort order to prevent binary tree lookups.
1003 AS and deadline use a hash table indexed by the last sector of a request. This
1007 "Front merges", a new request being merged at the front of an existing request,
1017 queue is empty when a request comes in, then it plugs the request queue
1037 a big request from the broken up pieces coming by.
1052 granular locking. The request queue structure has a pointer to the
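
For instance, a driver hands the queue its own lock at init time
(2.6-era API; my_request_fn as sketched earlier):

	static spinlock_t my_lock = SPIN_LOCK_UNLOCKED;

	/* at init time */
	q = blk_init_queue(my_request_fn, &my_lock);
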
1103 supposed to handle looping directly over the request list.
1104 (struct request->queue has been removed)
1107 It used to handle always just the first buffer_head in a request, now
1124 where a driver received a request like this before:
1152 - elevator support for kiobuf request merging (axboe)