Lines Matching refs:and
9 and, based on user options, switch IO policies in the background.
15 on devices. This policy is implemented in the generic block layer and can be
31 - Compile and boot into the kernel and mount the IO controller (blkio); see
41 - Set weights of groups test1 and test2
45 - Create two files of the same size (say 512MB each) on the same disk (file1, file2) and
60 on looking (with the help of a script) at the blkio.disk_time and
61 blkio.disk_sectors files of both the test1 and test2 groups. This will tell how
62 much disk time (in milliseconds) each group got and how many sectors each group dispatched to the disk.
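
A combined sketch of the test described above, assuming a cgroup v1 blkio mount at /sys/fs/cgroup/blkio, a CFQ-scheduled disk, and two pre-created files /mnt/file1 and /mnt/file2; the paths are illustrative assumptions, and the statistics file names simply follow the text above:

        mount -t cgroup -o blkio none /sys/fs/cgroup/blkio
        mkdir -p /sys/fs/cgroup/blkio/test1 /sys/fs/cgroup/blkio/test2

        # Give test1 twice the weight of test2.
        echo 1000 > /sys/fs/cgroup/blkio/test1/blkio.weight
        echo 500  > /sys/fs/cgroup/blkio/test2/blkio.weight

        # Drop the page cache so both readers actually hit the disk.
        sync
        echo 3 > /proc/sys/vm/drop_caches

        # One reader per group; move each dd into its group right after launching it.
        dd if=/mnt/file1 of=/dev/null &
        echo $! > /sys/fs/cgroup/blkio/test1/tasks
        dd if=/mnt/file2 of=/dev/null &
        echo $! > /sys/fs/cgroup/blkio/test2/tasks

        # Poll the per-group disk time (ms) and sectors while the readers run.
        cat /sys/fs/cgroup/blkio/test1/blkio.disk_time /sys/fs/cgroup/blkio/test2/blkio.disk_time
        cat /sys/fs/cgroup/blkio/test1/blkio.disk_sectors /sys/fs/cgroup/blkio/test2/blkio.disk_sectors
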
85 - Run dd to read a file and see if the rate is throttled to 1MB/s or not.
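
A minimal sketch of such a check, assuming the root blkio group, a device with major:minor 8:16, and a test file /mnt/file1 (all illustrative):

        # Limit reads from device 8:16 to 1MB/s for tasks in the root group.
        echo "8:16 1048576" > /sys/fs/cgroup/blkio/blkio.throttle.read_bps_device

        # dd should report a transfer rate of roughly 1.0 MB/s.
        dd if=/mnt/file1 of=/dev/null iflag=direct bs=1M count=16
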
98 Both CFQ and throttling implement hierarchy support; however,
100 enabled from the cgroup side, which currently is a development option and
111 CFQ by default and throttling with "sane_behavior" will handle the
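
A hedged sketch of trying the hierarchical behaviour; the exact spelling of the development mount option (__DEVEL__sane_behavior below) is an assumption about the cgroup core of that era, not something stated in this listing:

        umount /sys/fs/cgroup/blkio
        mount -t cgroup -o blkio,__DEVEL__sane_behavior none /sys/fs/cgroup/blkio

        # With a nested hierarchy, throttling limits set on the parent are then
        # also meant to constrain the child groups.
        mkdir -p /sys/fs/cgroup/blkio/parent/child
        echo "8:16 1048576" > /sys/fs/cgroup/blkio/parent/blkio.throttle.read_bps_device
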
147 on all the devices unless overridden by a per-device rule.
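
For example, a group can keep a default weight and override it for one device via blkio.weight_device; 8:16 as the major:minor of the target disk is an assumption:

        # Default weight, applied to every device...
        echo 500 > /sys/fs/cgroup/blkio/test1/blkio.weight
        # ...overridden for device 8:16 only.
        echo "8:16 300" > /sys/fs/cgroup/blkio/test1/blkio.weight_device
        cat /sys/fs/cgroup/blkio/test1/blkio.weight_device
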
186 two fields specify the major and minor number of the device and
192 two fields specify the major and minor number of the device and
199 or async. First two fields specify the major and minor number of the
200 device, third field specifies the operation type and the fourth field
206 or async. First two fields specify the major and minor number of the
207 device, third field specifies the operation type and the fourth field
211 - Total amount of time between request dispatch and request completion
220 specify the major and minor number of the device, third field
221 specifies the operation type and the fourth field specifies the
235 read or write, sync or async. First two fields specify the major and
237 and the fourth field specifies the io_wait_time in ns.
264 got a timeslice and will not include the current delta.
273 time it had a pending request and will not include the current delta.
281 idle_time accumulated till the last idle period and will not include
288 and minor number of the device and third field specifies the number
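
A small sketch of reading one of these per-device statistics files; the sample output lines are made up solely to illustrate the "major:minor operation value" layout described above:

        cat /sys/fs/cgroup/blkio/test1/blkio.io_service_bytes
        # 8:16 Read 122880
        # 8:16 Write 0
        # 8:16 Sync 122880
        # 8:16 Async 0
        # 8:16 Total 122880
        # Total 122880
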
326 Note: If both BW and IOPS rules are specified for a device, then IO is subjected to both the constraints.
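
For instance (a hedged sketch; 8:16 is an assumed device), reads in the root group would then have to stay under both 2MB/s and 100 IOPS:

        echo "8:16 2097152" > /sys/fs/cgroup/blkio/blkio.throttle.read_bps_device
        echo "8:16 100"     > /sys/fs/cgroup/blkio/blkio.throttle.read_iops_device
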
332 or async. First two fields specify the major and minor number of the
333 device, third field specifies the operation type and the fourth field
339 or async. First two fields specify the major and minor number of the
340 device, third field specifies the operation type and the fourth field
354 This happens because CFQ idles on a single queue and a single queue might not
356 one can try setting slice_idle=0 and that would switch CFQ to IOPS
359 That means CFQ will not idle between cfq queues of a cfq group and hence be
360 able to drive a higher queue depth and achieve better throughput. That also
361 means that cfq provides fairness among groups in terms of IOPS and not in
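
A sketch of switching CFQ to IOPS mode on one device (sdb is an assumed device name):

        echo 0 > /sys/block/sdb/queue/iosched/slice_idle
        cat /sys/block/sdb/queue/iosched/slice_idle
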
366 If one disables idling on individual cfq queues and cfq service trees by
370 By default group_idle is the same as slice_idle and does not do anything if
374 groups and put applications which are not driving enough IO to keep the
375 disk busy into those groups. In that case set group_idle=0, and CFQ will not
376 idle on individual groups and throughput should improve.
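
Correspondingly, group idling can be disabled on the same (assumed) device when the groups themselves do not keep the disk busy:

        echo 0 > /sys/block/sdb/queue/iosched/group_idle
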
381 Page cache is dirtied through buffered writes and shared mmaps and
383 mechanism. Writeback sits between the memory and IO domains and
384 regulates the proportion of dirty memory by balancing dirtying and
389 to operate accounting for cgroup resource restrictions and all
392 If both the blkio and memory controllers are used on the v2 hierarchy
393 and the filesystem supports cgroup writeback, writeback operations
394 correctly follow the resource restrictions imposed by both memory and blkio controllers.
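
A hedged sketch of such a v2 setup; the mount point, device 8:16, and the limit values are illustrative, it assumes the memory and io controllers are not already bound to a v1 hierarchy, and "io" is the v2 name of the blkio controller:

        mount -t cgroup2 none /sys/fs/cgroup/unified
        echo "+memory +io" > /sys/fs/cgroup/unified/cgroup.subtree_control
        mkdir /sys/fs/cgroup/unified/wbtest

        # Writeback generated by tasks in wbtest is then subject to both the
        # memory limit and the per-device write-bandwidth limit.
        echo "8:16 wbps=1048576" > /sys/fs/cgroup/unified/wbtest/io.max
        echo 67108864 > /sys/fs/cgroup/unified/wbtest/memory.high
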
397 Writeback examines both system-wide and per-cgroup dirty memory status
398 and enforces the more restrictive of the two. Also, writeback control
399 parameters which are absolute values - vm.dirty_bytes and
404 granularity between the memory controller and writeback. While memory
413 complicated and inefficient. The only use case which suffers from
415 regions of the same inode, which is an unlikely use case and was decided
417 ownership on the first use and doesn't update it until the page is
432 Should be called for each bio carrying writeback data and associates
434 between bio allocation and submission.
440 writeback session, it's easiest and most natural to call it as
450 the configuration, the bio may be executed at a lower priority and if