= Userfaultfd =

== Objective ==

Userfaults allow the implementation of on-demand paging from userland
and more generally they allow userland to take control of various
memory page faults, something otherwise only the kernel code could do.

For example userfaults allow a proper and more optimal implementation
of the PROT_NONE+SIGSEGV trick.

== Design ==

Userfaults are delivered and resolved through the userfaultfd syscall.

The userfaultfd (aside from registering and unregistering virtual
memory ranges) provides two primary functionalities:

1) read/POLLIN protocol to notify a userland thread of the faults
   happening

2) various UFFDIO_* ioctls that can manage the virtual memory regions
   registered in the userfaultfd, allowing userland to efficiently
   resolve the userfaults it receives via 1) or to manage the virtual
   memory in the background

The real advantage of userfaults, compared to regular virtual memory
management with mremap/mprotect, is that none of their operations ever
involves heavyweight structures like vmas (in fact the userfaultfd
runtime load never takes the mmap_sem for writing).

Vmas are not suitable for page- (or hugepage-) granular fault tracking
when dealing with virtual address spaces that could span Terabytes.
Too many vmas would be needed for that.

The userfaultfd, once opened by invoking the syscall, can also be
passed using unix domain sockets to a manager process, so the same
manager process could handle the userfaults of a multitude of
different processes without them being aware of what is going on
(well of course unless they later try to use the userfaultfd
themselves on the same region the manager is already tracking, which
is a corner case that would currently return -EBUSY).

== API ==

When first opened, the userfaultfd must be enabled by invoking the
UFFDIO_API ioctl with a uffdio_api.api value set to UFFD_API (or a
later API version), which specifies the read/POLLIN protocol userland
intends to speak on the UFFD, and the uffdio_api.features userland
requires. The UFFDIO_API ioctl, if successful (i.e. if the requested
uffdio_api.api is spoken also by the running kernel and the requested
features are going to be enabled), will return in uffdio_api.features
and uffdio_api.ioctls two 64-bit bitmasks of, respectively, all the
available features of the read(2) protocol and the generic ioctls
available.

Once the userfaultfd has been enabled, the UFFDIO_REGISTER ioctl
should be invoked (if present in the returned uffdio_api.ioctls
bitmask) to register a memory range in the userfaultfd by setting the
uffdio_register structure accordingly. The uffdio_register.mode
bitmask specifies to the kernel which kind of faults to track for the
range (UFFDIO_REGISTER_MODE_MISSING would track missing pages). The
UFFDIO_REGISTER ioctl will return the uffdio_register.ioctls bitmask
of ioctls that are suitable to resolve userfaults on the registered
range. Not all ioctls will necessarily be supported for all memory
types, depending on the underlying virtual memory backend (anonymous
memory vs tmpfs vs real filebacked mappings).

Userland can use the uffdio_register.ioctls to manage the virtual
address space in the background (to add or potentially also remove
memory from the userfaultfd registered range). This means a userfault
could be triggered just before userland maps the user-faulted page in
the background.
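As a concrete illustration of the registration flow above, here is a
minimal userland sketch (error handling kept to a bare minimum; the
mapping size and the O_CLOEXEC|O_NONBLOCK flags are arbitrary choices
for the example) that opens a userfaultfd, enables it with UFFDIO_API
and registers an anonymous mapping for missing-page tracking:

#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
    size_t len = 16 * 4096;     /* arbitrary size for the example */

    /* userfaultfd has no dedicated libc wrapper, so use syscall(2). */
    int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
    if (uffd < 0) {
        perror("userfaultfd");
        return 1;
    }

    /* Enable the API: handshake on protocol version and features. */
    struct uffdio_api api = { .api = UFFD_API, .features = 0 };
    if (ioctl(uffd, UFFDIO_API, &api) < 0) {
        perror("UFFDIO_API");
        return 1;
    }

    void *area = mmap(NULL, len, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (area == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Register the range so missing-page faults are tracked on it. */
    struct uffdio_register reg = {
        .range = { .start = (unsigned long) area, .len = len },
        .mode  = UFFDIO_REGISTER_MODE_MISSING,
    };
    if (ioctl(uffd, UFFDIO_REGISTER, &reg) < 0) {
        perror("UFFDIO_REGISTER");
        return 1;
    }

    /* reg.ioctls now reports which ioctls (e.g. UFFDIO_COPY) can be
       used to resolve userfaults on this range. */
    printf("available ioctls bitmask: 0x%llx\n",
           (unsigned long long) reg.ioctls);
    return 0;
}

At this point the process (or a manager process the fd was passed to)
would hand uffd to a thread implementing the read/POLLIN protocol
described below.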
The primary ioctl to resolve userfaults is UFFDIO_COPY. It atomically
copies a page into the userfault-registered range and wakes up the
blocked userfaults (unless uffdio_copy.mode & UFFDIO_COPY_MODE_DONTWAKE
is set). The other ioctls work similarly to UFFDIO_COPY. They're
atomic in the sense that nothing can ever see a half-copied page,
since any access will keep userfaulting until the copy has finished.

== QEMU/KVM ==

QEMU/KVM uses the userfaultfd syscall to implement postcopy live
migration. Postcopy live migration is one form of memory
externalization, consisting of a virtual machine running with part or
all of its memory residing on a different node in the cloud. The
userfaultfd abstraction is generic enough that not a single line of
KVM kernel code had to be modified in order to add postcopy live
migration to QEMU.

Guest async page faults, FOLL_NOWAIT and all other GUP features work
just fine in combination with userfaults. Userfaults trigger async
page faults in the guest scheduler, so those guest processes that
aren't waiting for userfaults (i.e. network bound) can keep running in
the guest vcpus.

It is generally beneficial to run one pass of precopy live migration
just before starting postcopy live migration, in order to avoid
generating userfaults for readonly guest regions.

The implementation of postcopy live migration currently uses one
single bidirectional socket, but in the future two different sockets
will be used (to reduce the latency of the userfaults to the minimum
possible without having to decrease /proc/sys/net/ipv4/tcp_wmem).

The QEMU in the source node writes into the socket all pages that it
knows are missing in the destination node, and the migration thread of
the QEMU running in the destination node runs UFFDIO_COPY|ZEROPAGE
ioctls on the userfaultfd in order to map the received pages into the
guest (UFFDIO_ZEROPAGE is used if the source page was a zero page).

A different postcopy thread in the destination node listens with
poll() on the userfaultfd in parallel. When a POLLIN event is
generated after a userfault triggers, the postcopy thread read()s from
the userfaultfd and receives the fault address (or -EAGAIN in case the
userfault was already resolved and woken by a UFFDIO_COPY|ZEROPAGE run
by the parallel QEMU migration thread).

After the QEMU postcopy thread (running in the destination node) gets
the userfault address, it writes the information about the missing
page into the socket. The QEMU in the source node receives the
information, roughly "seeks" to that page address and continues
sending all remaining missing pages from that new page offset. Soon
after that (just the time to flush the tcp_wmem queue through the
network) the migration thread in the QEMU running in the destination
node will receive the page that triggered the userfault and it'll map
it as usual with UFFDIO_COPY|ZEROPAGE (without actually knowing if it
was spontaneously sent by the source or if it was an urgent page
requested through a userfault).
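The destination-side flow just described (poll() for POLLIN, read()
the fault address, resolve it with UFFDIO_COPY) can be sketched
roughly as follows. This is a simplified single-threaded handler, not
the actual QEMU code; page_src stands for an illustrative buffer
assumed to already hold the page content received from the source:

#include <linux/userfaultfd.h>
#include <poll.h>
#include <stddef.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* Simplified fault-handling loop: wait for userfault notifications on
   uffd and resolve each one by atomically copying page_size bytes from
   page_src into the faulting page. page_src is an illustrative buffer,
   not something QEMU uses under this name. */
void handle_userfaults(int uffd, const void *page_src, size_t page_size)
{
    struct pollfd pollfd = { .fd = uffd, .events = POLLIN };

    for (;;) {
        struct uffd_msg msg;

        if (poll(&pollfd, 1, -1) < 0)
            break;

        /* With O_NONBLOCK, read() fails with EAGAIN if the userfault
           was already resolved (and woken) by another thread that ran
           UFFDIO_COPY in parallel. */
        if (read(uffd, &msg, sizeof(msg)) != sizeof(msg))
            continue;
        if (msg.event != UFFD_EVENT_PAGEFAULT)
            continue;

        struct uffdio_copy copy = {
            .dst = msg.arg.pagefault.address &
                   ~((unsigned long long) page_size - 1),
            .src = (unsigned long) page_src,
            .len = page_size,
            .mode = 0,  /* 0 = wake up the blocked faulting threads */
        };
        ioctl(uffd, UFFDIO_COPY, &copy);
    }
}

In QEMU the read() happens in the postcopy thread while the
UFFDIO_COPY|ZEROPAGE runs in the migration thread, as described above;
this sketch only collapses the two into a single loop.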
By the time the userfaults start, the QEMU in the destination node
doesn't need to keep any per-page state bitmap for the live migration
around; only a single per-page bitmap has to be maintained in the QEMU
running in the source node, to know which pages are still missing in
the destination node. The bitmap in the source node is checked to find
which missing pages to send in round robin, and we seek over it when
receiving incoming userfaults. After each page is sent, the bitmap is
of course updated accordingly. This also avoids sending the same page
twice (in case the userfault is read by the postcopy thread just
before UFFDIO_COPY|ZEROPAGE runs in the migration thread).
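For illustration only, the source-side logic just described
(round-robin scan of the missing-pages bitmap, seeking to the page the
destination just faulted on, and clearing each bit once sent so no
page goes out twice) could look roughly like this;
next_urgent_request() and send_page() are hypothetical stand-ins, not
QEMU functions:

#include <stdio.h>

/* Illustrative stand-ins, not QEMU code: next_urgent_request() would
   return the page index of a pending userfault reported by the
   destination (or -1 if none), send_page() would stream that page
   over the migration socket. */
static long next_urgent_request(void) { return -1; }
static void send_page(unsigned long page) { printf("sent page %lu\n", page); }

/* Round-robin scan of the single per-page "missing" bitmap kept in the
   source node. Incoming userfaults make the cursor seek to the faulting
   page; each bit is cleared as soon as the page is sent, so the same
   page is never sent twice. */
static void send_missing_pages(unsigned char *missing, unsigned long nr_pages)
{
    unsigned long cursor = 0;

    for (;;) {
        long urgent = next_urgent_request();
        if (urgent >= 0)
            cursor = urgent;    /* seek to the requested page */

        unsigned long scanned;
        for (scanned = 0; scanned < nr_pages; scanned++) {
            unsigned long page = (cursor + scanned) % nr_pages;
            if (missing[page]) {
                send_page(page);
                missing[page] = 0;
                cursor = page + 1;
                break;
            }
        }
        if (scanned == nr_pages)
            break;              /* no missing pages left */
    }
}

int main(void)
{
    unsigned char missing[8] = { 1, 0, 1, 1, 0, 0, 1, 0 };

    send_missing_pages(missing, 8);
    return 0;
}

The real QEMU implementation obviously operates on the migration
bitmap and socket described above; this toy version is only meant to
show the seek-and-clear behaviour.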