
Commit 43e027e

Baolin Wang authored and akpm00 committed
mm: memory: extend finish_fault() to support large folio
Patch series "add mTHP support for anonymous shmem", v5.

Anonymous pages already support multi-size THP (mTHP) allocation, added by commit 19eaf44, which allows THP to be configured through the sysfs interface located at '/sys/kernel/mm/transparent_hugepage/hugepage-XXkb/enabled'. However, anonymous shmem ignores the anonymous mTHP rule configured through that sysfs interface and can only use PMD-mapped THP, which is not reasonable. Many applications implement anonymous page sharing through mmap(MAP_SHARED | MAP_ANONYMOUS), especially in database usage scenarios; therefore, users expect a unified mTHP strategy for anonymous pages that also covers anonymous shared pages, in order to enjoy the benefits of mTHP: lower latency than PMD-mapped THP, less memory bloat than PMD-mapped THP, contiguous PTEs on the ARM architecture to reduce TLB misses, etc.

As discussed in the bi-weekly MM meeting [1], the mTHP controls should cover all of shmem, not only anonymous shmem, but support will be added iteratively. Therefore, this patch set starts with support for anonymous shmem.

The primary strategy is similar to the one used for anonymous mTHP: introduce a new interface '/mm/transparent_hugepage/hugepage-XXkb/shmem_enabled', which accepts almost the same values as the top-level '/sys/kernel/mm/transparent_hugepage/shmem_enabled', adding a new "inherit" option and dropping the testing options 'force' and 'deny'. By default all sizes are set to "never" except the PMD size, which is set to "inherit". This keeps backward compatibility with the top-level anonymous shmem control while also allowing independent control of each mTHP size for anonymous shmem.

Performance of 1G anonymous shmem with 32 threads, measured with the page fault latency tool on an ARM64 machine (32 cores, 125G memory):

base: mm-unstable
  user-time    sys_time    faults_per_sec_per_cpu    faults_per_sec
  0.04s        3.10s       83516.416                 2669684.890

mm-unstable + patchset, anon shmem mTHP disabled
  user-time    sys_time    faults_per_sec_per_cpu    faults_per_sec
  0.02s        3.14s       82936.359                 2630746.027

mm-unstable + patchset, anon shmem 64K mTHP enabled
  user-time    sys_time    faults_per_sec_per_cpu    faults_per_sec
  0.08s        0.31s       678630.231                17082522.495

From the data above, the patchset has minimal impact when mTHP is not enabled (some fluctuation was observed during testing). With 64K mTHP enabled, page fault latency improves significantly.

[1] https://lore.kernel.org/all/[email protected]/

This patch (of 6):

Add large folio mapping establishment support to finish_fault() as a preparation, to support multi-size THP allocation of anonymous shmem pages in the following patches. Keep the same behavior (per-page fault) for non-anon shmem to avoid inflating the RSS unintentionally; what size of mapping to build can be discussed when extending mTHP to control non-anon shmem in the future.
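For illustration, a minimal userspace sketch of how the pieces fit together, not part of the patch: it assumes the 64K knob lands at 'hugepage-64kB' following the 'hugepage-XXkb' pattern above (writing it requires root), enables 64K mTHP for anonymous shmem, then faults in a 1G MAP_SHARED | MAP_ANONYMOUS region of the kind the benchmark exercises.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	/* Assumed path, per the hugepage-XXkb pattern; base page size 4K. */
	const char *knob =
		"/sys/kernel/mm/transparent_hugepage/hugepage-64kB/shmem_enabled";
	size_t len = 1UL << 30;	/* 1G, as in the benchmark */
	FILE *f = fopen(knob, "w");
	char *buf;

	if (f) {
		/* Other accepted values: inherit, within_size, advise, never. */
		fputs("always\n", f);
		fclose(f);
	}

	buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
		   MAP_SHARED | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memset(buf, 0, len);	/* touch every page to drive the shmem fault path */
	munmap(buf, len);
	return 0;
}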
[[email protected]: avoid going beyond the PMD pagetable size]
Link: https://lkml.kernel.org/r/[email protected]
[[email protected]: use 'PTRS_PER_PTE' instead of 'PTRS_PER_PTE - 1']
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/3a190892355989d42f59cf9f2f98b94694b0d24d.1718090413.git.baolin.wang@linux.alibaba.com
Signed-off-by: Baolin Wang <[email protected]>
Reviewed-by: Zi Yan <[email protected]>
Reviewed-by: Kefeng Wang <[email protected]>
Cc: Daniel Gomez <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: "Huang, Ying" <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Lance Yang <[email protected]>
Cc: Pankaj Raghav <[email protected]>
Cc: Ryan Roberts <[email protected]>
Cc: Yang Shi <[email protected]>
Cc: Barry Song <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
1 parent 29e9412 commit 43e027e

File tree

1 file changed: +51 -10 lines changed


mm/memory.c

Lines changed: 51 additions & 10 deletions
@@ -4826,9 +4826,12 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
 {
 	struct vm_area_struct *vma = vmf->vma;
 	struct page *page;
+	struct folio *folio;
 	vm_fault_t ret;
 	bool is_cow = (vmf->flags & FAULT_FLAG_WRITE) &&
 		      !(vma->vm_flags & VM_SHARED);
+	int type, nr_pages;
+	unsigned long addr = vmf->address;
 
 	/* Did we COW the page? */
 	if (is_cow)
@@ -4859,24 +4862,62 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
 			return VM_FAULT_OOM;
 	}
 
+	folio = page_folio(page);
+	nr_pages = folio_nr_pages(folio);
+
+	/*
+	 * Using per-page fault to maintain the uffd semantics, and same
+	 * approach also applies to non-anonymous-shmem faults to avoid
+	 * inflating the RSS of the process.
+	 */
+	if (!vma_is_anon_shmem(vma) || unlikely(userfaultfd_armed(vma))) {
+		nr_pages = 1;
+	} else if (nr_pages > 1) {
+		pgoff_t idx = folio_page_idx(folio, page);
+		/* The page offset of vmf->address within the VMA. */
+		pgoff_t vma_off = vmf->pgoff - vmf->vma->vm_pgoff;
+		/* The index of the entry in the pagetable for fault page. */
+		pgoff_t pte_off = pte_index(vmf->address);
+
+		/*
+		 * Fallback to per-page fault in case the folio size in page
+		 * cache beyond the VMA limits and PMD pagetable limits.
+		 */
+		if (unlikely(vma_off < idx ||
+			     vma_off + (nr_pages - idx) > vma_pages(vma) ||
+			     pte_off < idx ||
+			     pte_off + (nr_pages - idx) > PTRS_PER_PTE)) {
+			nr_pages = 1;
+		} else {
+			/* Now we can set mappings for the whole large folio. */
+			addr = vmf->address - idx * PAGE_SIZE;
+			page = &folio->page;
+		}
+	}
+
 	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
-				       vmf->address, &vmf->ptl);
+				       addr, &vmf->ptl);
 	if (!vmf->pte)
 		return VM_FAULT_NOPAGE;
 
 	/* Re-check under ptl */
-	if (likely(!vmf_pte_changed(vmf))) {
-		struct folio *folio = page_folio(page);
-		int type = is_cow ? MM_ANONPAGES : mm_counter_file(folio);
-
-		set_pte_range(vmf, folio, page, 1, vmf->address);
-		add_mm_counter(vma->vm_mm, type, 1);
-		ret = 0;
-	} else {
-		update_mmu_tlb(vma, vmf->address, vmf->pte);
+	if (nr_pages == 1 && unlikely(vmf_pte_changed(vmf))) {
+		update_mmu_tlb(vma, addr, vmf->pte);
+		ret = VM_FAULT_NOPAGE;
+		goto unlock;
+	} else if (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages)) {
+		update_mmu_tlb_range(vma, addr, vmf->pte, nr_pages);
 		ret = VM_FAULT_NOPAGE;
+		goto unlock;
 	}
 
+	folio_ref_add(folio, nr_pages - 1);
+	set_pte_range(vmf, folio, page, nr_pages, addr);
+	type = is_cow ? MM_ANONPAGES : mm_counter_file(folio);
+	add_mm_counter(vma->vm_mm, type, nr_pages);
+	ret = 0;
+
+unlock:
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 	return ret;
 }
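To make the fallback condition concrete: mapping the whole folio in one go needs VMA page offsets [vma_off - idx, vma_off - idx + nr_pages) and PTE slots [pte_off - idx, pte_off - idx + nr_pages), and both ranges must stay inside the VMA and inside a single PMD page table. Below is a small standalone sketch replaying the check with assumed sample values (editorial illustration, not kernel code; PTRS_PER_PTE of 512 corresponds to 4K page tables).

#include <stdbool.h>
#include <stdio.h>

#define PTRS_PER_PTE 512	/* PTE slots per PMD page table with 4K pages */

/* Mirror of the fallback condition in finish_fault() above. */
static bool must_fall_back(long vma_off, long pte_off, long idx,
			   long nr_pages, long vma_npages)
{
	return vma_off < idx ||				  /* folio head before VMA start */
	       vma_off + (nr_pages - idx) > vma_npages || /* folio tail past VMA end */
	       pte_off < idx ||				  /* head before this page table */
	       pte_off + (nr_pages - idx) > PTRS_PER_PTE; /* tail past this page table */
}

int main(void)
{
	/* 64K folio (16 pages of 4K), faulting page is index 3 in the folio. */

	/* Fits: fault at VMA page 35, PTE slot 35 -> map all 16 pages. */
	printf("%d\n", must_fall_back(35, 35, 3, 16, 256));	/* 0 */

	/* PTE slot 510: slots 507..522 would cross the table -> per-page. */
	printf("%d\n", must_fall_back(35, 510, 3, 16, 1024));	/* 1 */
	return 0;
}

When the check fails, only the faulting page is mapped (nr_pages = 1); otherwise folio_ref_add(folio, nr_pages - 1) takes one extra reference for each additional page mapped by set_pte_range().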
