Lines Matching refs:by
111 CFQ by default and throttling with "sane_behavior" will handle the
115 directly generated by tasks in that cgroup.
147 on all the devices unless overridden by a per-device rule.
154 by blkio.weight.
191 - Number of sectors transferred to/from disk by the group. First
193 third field specifies the number of sectors transferred by the
197 - Number of bytes transferred to/from the disk by the group. These
198 are further divided by the type of operation - read or write, sync
204 - Number of IOs completed to/from the disk by the group. These
205 are further divided by the type of operation - read or write, sync
212 for the IOs done by this cgroup. This is in nanoseconds to make it
218 io_service_time > actual time elapsed. This time is further divided by
232 (there might be a time lag here due to re-ordering of requests by the
234 devices too. This time is further divided by the type of operation -
241 cgroup. This is further divided by the type of operation - read or
246 cgroup. This is further divided by the type of operation - read or
260 cumulative total of the amount of time spent by each IO in that cgroup
277 This is the amount of time spent by the IO scheduler idling for a
330 - Number of IOs (bio) completed to/from the disk by the group (as
331 seen by throttling policy). These are further divided by the type
336 blkio.io_serviced does accounting as seen by CFQ and counts are in
339 of bios as seen by throttling policy. These bios can later be
340 merged by the elevator and the total number of requests completed can be
344 - Number of bytes transferred to/from the disk by the group. These
345 are further divided by the type of operation - read or write, sync
351 updated by CFQ. The difference between the two is that
378 If one disables idling on individual cfq queues and cfq service trees by
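The matched lines above repeatedly describe blkio statistics files whose counters are broken down per device and per operation type (read or write, sync or async), and they contrast request-level accounting as seen by CFQ (blkio.io_serviced) with bio-level accounting as seen by the throttling policy (blkio.throttle.io_serviced). As a rough illustration only, here is a minimal Python sketch that reads and compares those files; the /sys/fs/cgroup/blkio mount point and the "major:minor operation value" line layout are assumptions based on the cgroup v1 blkio interface, not something spelled out in the matched lines themselves.

    import os
    from collections import defaultdict

    # Assumed cgroup v1 blkio mount point; may differ on a given system.
    CGROUP_DIR = "/sys/fs/cgroup/blkio"

    def parse_blkio_stat(path):
        """Parse a per-device blkio stat file (e.g. blkio.io_serviced or
        blkio.throttle.io_serviced).

        Each line is expected to look like "8:16 Read 1234": the device's
        major:minor numbers, the operation type (Read/Write/Sync/Async/Total),
        and a counter.  A final "Total <value>" line sums across devices.
        """
        stats = defaultdict(dict)
        with open(path) as f:
            for line in f:
                fields = line.split()
                if len(fields) == 3:
                    dev, op, value = fields
                    stats[dev][op] = int(value)
                elif len(fields) == 2 and fields[0] == "Total":
                    stats["Total"]["Total"] = int(fields[1])
        return dict(stats)

    if __name__ == "__main__":
        # Compare request-level counts (as seen by CFQ) with bio-level
        # counts (as seen by the throttling policy) for this cgroup.
        for name in ("blkio.io_serviced", "blkio.throttle.io_serviced"):
            path = os.path.join(CGROUP_DIR, name)
            if os.path.exists(path):
                print(name, parse_blkio_stat(path))

Returning a dict keyed by device keeps the per-device, per-operation breakdown that these files expose, so the two accountings can be compared device by device rather than only via the final Total line.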