
Conversation

@atishp04 atishp04 commented May 22, 2025

Summary by Sourcery

Support the SBI v3 PMU event info extension with extended raw event widths and integrate it into KVM with proper guest memory validation

New Features:

  • Add SBI v3 PMU event info support via pmu_sbi_check_event_info and riscv_pmu_get_event_info to query firmware for available events
  • Extend raw PMU event encoding to 56 bits for SBI v3 with new masks (RISCV_PMU_RAW_EVENT_V2_MASK) and index (RISCV_PMU_RAW_EVENT_V2_IDX)
  • Introduce KVM SBI v3 PMU event info handler (kvm_riscv_vcpu_pmu_event_info) and expose SBI_EXT_PMU_EVENT_GET_INFO in KVM

Enhancements:

  • Add sbi_v3_available flag and conditionally use v3-specific event probing
  • Simplify pmu_sbi_event_map by delegating to riscv_pmu_get_event_info
  • Add kvm_vcpu_validate_gpa_range helper and replace direct GPA-to-HVA checks in KVM PMU/SBI handlers
  • Update KVM SBI version to 3.0 and add new SBI PMU extension ID for event info

Summary by CodeRabbit

  • New Features

    • Added support for SBI PMU version 3.0, including extended event configuration and new event types.
    • Introduced the ability to query information for multiple PMU events in bulk.
    • Enhanced PMU event mapping and validation for improved compatibility and accuracy.
  • Bug Fixes

    • Improved validation of guest physical address ranges and permissions for memory operations.
  • Refactor

    • Simplified and centralized event info retrieval and address validation logic for better maintainability.

atishp04 added 4 commits May 21, 2025 17:18
The SBI v3.0 specification [1] adds two improvements to the PMU chapter.
The specification is frozen and in the public review phase as per the
RISC-V International guidelines.

1. Added a get_event_info function to query event availability in bulk
instead of making an individual SBI call for each event. This helps
improve boot time.

2. The raw event width allowed by the platform is widened to 56 bits
with raw event v2, per a new clarification in the privileged ISA [2].

Apart from implementing these new features, this series improves the GPA
range check in KVM and updates the KVM SBI implementation to SBI v3.0.
A brief sketch of the new definitions is included after the references below.

The OpenSBI patches have been merged. This series can be found at [3].

[1] https://github.com/riscv-non-isa/riscv-sbi-doc/releases/download/v3.0-rc7/riscv-sbi.pdf 
[2] riscv/riscv-isa-manual#1578
[3] https://github.com/atishp04/linux/tree/b4/pmu_event_info_v3
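
For orientation, a minimal sketch of the definitions this series adds; the
field layout is taken from the reviewer's guide below, while the numeric mask
and index values are assumptions inferred from the 56-bit width described
above, not quoted from the patch:

    /* Hedged sketch of the SBI v3.0 PMU additions to arch/riscv/include/asm/sbi.h.
     * Numeric values below are assumptions, not copied from the diff.
     */
    #define RISCV_PMU_RAW_EVENT_V2_MASK      GENMASK_ULL(55, 0) /* raw event v2: bits 0-55 */
    #define RISCV_PMU_RAW_EVENT_V2_IDX       0x30000            /* assumed type field 0x3 */
    #define RISCV_PMU_EVENT_INFO_OUTPUT_MASK 0x01               /* bit 0: event available */

    /* One shared-memory entry per event queried via SBI_EXT_PMU_EVENT_GET_INFO */
    struct riscv_pmu_event_info {
        u32 event_idx;  /* SBI event index to probe */
        u32 output;     /* set by the SBI implementation when the event exists */
        u64 event_data; /* auxiliary data, e.g. the raw event encoding */
    };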

To: Anup Patel <[email protected]>
To: Will Deacon <[email protected]>
To: Mark Rutland <[email protected]>
To: Paul Walmsley <[email protected]>
To: Palmer Dabbelt <[email protected]>
To: Mayuresh Chitale <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: Palmer Dabbelt <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Atish Patra <[email protected]>

---
Changes in v3:
- Rebased on top of v6.15-rc7 
- Link to v2: https://lore.kernel.org/r/[email protected]

Changes in v2:
- Dropped PATCH 2 to be taken during rcX.
- Improved gpa range check validation by introducing a helper function
  and checking the entire range.
- Link to v1: https://lore.kernel.org/r/[email protected]

--- b4-submit-tracking ---
# This section is used internally by b4 prep for tracking purposes.
{
  "series": {
    "revision": 3,
    "change-id": "20241018-pmu_event_info-986e21ce6bd3",
    "prefixes": [],
    "history": {
      "v1": [
        "[email protected]"
      ],
      "v2": [
        "[email protected]"
      ]
    },
    "prerequisites": [
      "message-id: <[email protected]>",
      "base-commit: v6.13-rc2"
    ]
  }
}
There are new PMU-related features introduced in SBI v3.0.
1. Raw event v2, which allows the mhpmeventX value to be 56 bits wide.
2. A get_event_info function to do a bulk query in one shot.

Signed-off-by: Atish Patra <[email protected]>
SBI v3.0 introduced a new raw event type that allows a wider
mhpmeventX value to be programmed via CFG_MATCH.

Use raw event v2 if SBI v3.0 is available.

Signed-off-by: Atish Patra <[email protected]>
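
As an illustration, the selection might look like the following hedged
sketch (a paraphrase, not the literal diff; sbi_v3_available is the flag
this series introduces, and RISCV_PMU_RAW_EVENT_MASK is the pre-existing
48-bit mask):

    /* Sketch: choose the raw event mask based on the firmware version */
    if (sbi_v3_available)
        config = event->attr.config & RISCV_PMU_RAW_EVENT_V2_MASK;
    else
        config = event->attr.config & RISCV_PMU_RAW_EVENT_MASK;
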
SBI v3.0 introduced a new raw event type v2 for wider mhpmeventX
programming. Add support for it in KVM.

Signed-off-by: Atish Patra <[email protected]>
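
One plausible shape of the KVM-side handling, per the review notes further
down (the shared case bodies and the v3.0-width mask are mentioned there;
this is a hedged paraphrase, not the literal diff):

    /* Both raw encodings map to PERF_TYPE_RAW in kvm_pmu_get_perf_event_type()... */
    case SBI_PMU_EVENT_TYPE_RAW:
    case SBI_PMU_EVENT_TYPE_RAW_V2:
        type = PERF_TYPE_RAW;
        break;

    /* ...while kvm_pmu_get_perf_event_config() masks by encoding version */
    case SBI_PMU_EVENT_TYPE_RAW:
        config = evt_data & RISCV_PMU_RAW_EVENT_MASK;
        break;
    case SBI_PMU_EVENT_TYPE_RAW_V2:
        config = evt_data & RISCV_PMU_RAW_EVENT_V2_MASK;
        break;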

sourcery-ai bot commented May 22, 2025

Reviewer's Guide

This PR upgrades the RISC-V PMU implementation to support the new SBI v3 event-info extension, extends raw event width, refactors event mapping into a generic helper, and adds corresponding KVM dispatch and validation for guest event-info operations.

Class Diagram: PMU Event Info SBIv3 Extension Components

classDiagram
    direction LR
    class riscv_pmu_event_info {
      +u32 event_idx
      +u32 output
      +u64 event_data
    }

    class sbi_ext_pmu_fid_enum {
      << (E,orchid) Enumeration >>
      +SBI_EXT_PMU_EVENT_GET_INFO
      +...
    }

    class sbi_pmu_event_type_enum {
      << (E,orchid) Enumeration >>
      +SBI_PMU_EVENT_TYPE_RAW_V2
      +...
    }

    class PMU_Driver_Logic {
      +bool sbi_v3_available
      +pmu_sbi_check_event_info() int
      +riscv_pmu_get_event_info(u32 type, u64 config, u64* econfig) int
      +pmu_sbi_check_std_events() void
      +pmu_sbi_event_map(perf_event event, u64* econfig) int
    }
    PMU_Driver_Logic ..> riscv_pmu_event_info : Uses (New struct)
    PMU_Driver_Logic ..> sbi_ext_pmu_fid_enum : Uses (New FID for SBI call)
    PMU_Driver_Logic ..> sbi_pmu_event_type_enum : Uses (New RAWv2 type)

    class KVM_PMU_Logic {
      +kvm_riscv_vcpu_pmu_event_info(kvm_vcpu vcpu, ...) int
      +kvm_pmu_get_perf_event_type(ulong eidx) u32
      +kvm_pmu_get_perf_event_config(ulong eidx, u64 evt_data) u64
    }
    KVM_PMU_Logic ..> riscv_pmu_event_info : Uses (New struct for guest interaction)
    KVM_PMU_Logic ..> PMU_Driver_Logic : Calls riscv_pmu_get_event_info (Refactored helper)
    KVM_PMU_Logic ..> sbi_pmu_event_type_enum : Uses (Handles new RAWv2 type)

    class KVM_Utils {
        +kvm_vcpu_validate_gpa_range(kvm_vcpu vcpu, gpa_t gpa, ulong len, bool write_access) int
    }
    KVM_PMU_Logic ..> KVM_Utils : Calls (New GPA validation helper)

    class KVM_SBI_Dispatcher {
        +kvm_sbi_ext_pmu_handler(kvm_vcpu vcpu, ...) int
    }
    KVM_SBI_Dispatcher ..> sbi_ext_pmu_fid_enum : Handles SBI_EXT_PMU_EVENT_GET_INFO
    KVM_SBI_Dispatcher ..> KVM_PMU_Logic : Calls kvm_riscv_vcpu_pmu_event_info

    class Global_Constants {
        << (C,lightgrey) Constants >>
        +RISCV_PMU_RAW_EVENT_V2_MASK
        +RISCV_PMU_RAW_EVENT_V2_IDX
        +RISCV_PMU_EVENT_INFO_OUTPUT_MASK
    }
    PMU_Driver_Logic ..> Global_Constants : Uses
    KVM_PMU_Logic ..> Global_Constants : Uses (via PMU_Driver_Logic or directly)

File-Level Changes

Change Details Files
Support SBI v3 PMU event info in perf driver
  • Extended event format bits from 0–47 to 0–55
  • Added sbi_v3_available flag and detection for SBI v3
  • Implemented pmu_sbi_check_event_info to bulk-query supported events via SBI
  • Hooked new check into pmu_sbi_check_std_events when v3 is available
drivers/perf/riscv_pmu_sbi.c
Refactor event mapping using generic riscv_pmu_get_event_info
  • Introduced riscv_pmu_get_event_info to unify type/config handling
  • Updated pmu_sbi_event_map to call the new helper
  • Exported riscv_pmu_get_event_info and added prototype
drivers/perf/riscv_pmu_sbi.c
include/linux/perf/riscv_pmu.h
Extend SBI PMU extension definitions
  • Added SBI_EXT_PMU_EVENT_GET_INFO FID
  • Defined riscv_pmu_event_info struct and output mask
  • Introduced extended raw event mask (56-bit) and V2 index
arch/riscv/include/asm/sbi.h
Add KVM GPA validation and integrate PMU event info SBI call
  • Added kvm_vcpu_validate_gpa_range helper for shmem checks
  • Implemented kvm_riscv_vcpu_pmu_event_info to handle guest event-info SBI calls
  • Dispatched new SBI_EXT_PMU_EVENT_GET_INFO in vcpu_sbi_pmu.c
  • Extended RAW_V2 support in vcpu_pmu.c and validation in vcpu_sbi_sta.c
virt/kvm/kvm_main.c
include/linux/kvm_host.h
arch/riscv/kvm/vcpu_pmu.c
arch/riscv/kvm/vcpu_sbi_pmu.c
arch/riscv/kvm/vcpu_sbi_sta.c
arch/riscv/include/asm/kvm_vcpu_pmu.h
Bump KVM SBI version to 3.0
  • Updated KVM_SBI_VERSION_MAJOR from 2 to 3
arch/riscv/include/asm/kvm_vcpu_sbi.h


coderabbitai bot commented May 22, 2025

Walkthrough

This set of changes introduces support for RISC-V SBI PMU version 3.0, expands the PMU event interface to handle wider raw events, and adds mechanisms for querying event information in bulk. It refactors and centralizes event validation logic, updates version macros, and introduces new helper functions for validating guest physical address ranges.

Changes

File(s) Change Summary
arch/riscv/include/asm/kvm_vcpu_pmu.h, arch/riscv/kvm/vcpu_pmu.c Added the function kvm_riscv_vcpu_pmu_event_info for PMU event info queries and its declaration.
arch/riscv/include/asm/kvm_vcpu_sbi.h Updated KVM_SBI_VERSION_MAJOR macro from 2 to 3.
arch/riscv/include/asm/sbi.h Added new SBI PMU event info function ID, event info structure, macros, and support for extended raw event types.
arch/riscv/kvm/vcpu_sbi_pmu.c Added handling for SBI_EXT_PMU_EVENT_GET_INFO in the PMU SBI handler.
arch/riscv/kvm/vcpu_sbi_sta.c Simplified GPA range validation using kvm_vcpu_validate_gpa_range for steal time shared memory.
drivers/perf/riscv_pmu_sbi.c Added SBI v3.0 support, extended event config width, introduced riscv_pmu_get_event_info, optimized event info queries, and refactored event mapping logic.
include/linux/kvm_host.h, virt/kvm/kvm_main.c Added and implemented kvm_vcpu_validate_gpa_range for validating GPA ranges.
include/linux/perf/riscv_pmu.h Declared riscv_pmu_get_event_info function prototype.

Sequence Diagram(s)

sequenceDiagram
    participant Guest
    participant KVM
    participant PMU
    participant PerfDriver

    Guest->>KVM: SBI_EXT_PMU_EVENT_GET_INFO (params)
    KVM->>KVM: kvm_riscv_vcpu_pmu_event_info(...)
    KVM->>PMU: riscv_pmu_get_event_info(type, config, &econfig)
    PMU->>PerfDriver: Query event info
    PerfDriver-->>PMU: Return event info
    PMU-->>KVM: Return event info
    KVM->>Guest: Write event info to guest memory
    KVM-->>Guest: Return SBI error code/status

Poem

In the warren of code, new features appear,
PMU grows wiser, with version three near.
Events now wider, info queried in bulk,
Validation’s sharper—no bugs to sulk!
Hopping through memory, addresses align,
This bunny approves: the code’s looking fine!
🐇✨


Warning

Review ran into problems

🔥 Problems

Git: Failed to clone repository. Please run the @coderabbitai full review command to re-trigger a full review. If the issue persists, set path_filters to include or exclude specific files.


@atishp04 atishp04 requested a review from Copilot May 22, 2025 18:22

@Copilot Copilot AI left a comment


Pull Request Overview

This PR adds support for SBI v3.0 PMU event information in KVM/RISC-V, introduces a helper to validate guest physical address ranges, and updates related headers and handlers.

  • Add kvm_vcpu_validate_gpa_range() for checking guest GPA access
  • Implement new SBI PMU event‐info calls (SBI_EXT_PMU_EVENT_GET_INFO) and expose riscv_pmu_get_event_info()
  • Bump KVM SBI version to 3.0 and update riscv PMU headers

Reviewed Changes

Copilot reviewed 10 out of 10 changed files in this pull request and generated 5 comments.

Show a summary per file
File Description
virt/kvm/kvm_main.c New GPA‐range validation helper and export
include/linux/kvm_host.h Prototype for GPA‐range validator
include/linux/perf/riscv_pmu.h Declaration of riscv_pmu_get_event_info()
drivers/perf/riscv_pmu_sbi.c SBI v3 PMU event‐info logic, new helper, symbol export
arch/riscv/kvm/vcpu_sbi_sta.c Use GPA‐range validator instead of direct prot checks
arch/riscv/kvm/vcpu_sbi_pmu.c Handle new SBI_EXT event and use GPA validator
arch/riscv/include/asm/sbi.h New SBI PMU event structs, enums, and masks
arch/riscv/include/asm/kvm_vcpu_sbi.h Bump KVM SBI version major to 3
arch/riscv/include/asm/kvm_vcpu_pmu.h Prototype for new PMU event‐info vCPU handler
Comments suppressed due to low confidence (2)

virt/kvm/kvm_main.c:3308

  • [nitpick] The variable name 'seg' is ambiguous; consider renaming it to 'segment_len' or similar for clarity.
int seg;

virt/kvm/kvm_main.c:3304

  • Consider adding unit tests for 'kvm_vcpu_validate_gpa_range()' covering boundary cases and write vs read access.
int kvm_vcpu_validate_gpa_range(struct kvm_vcpu *vcpu, gpa_t gpa, unsigned long len,


    while ((seg = next_segment(len, offset)) != 0) {
        hva = kvm_vcpu_gfn_to_hva_prot(vcpu, gfn, &writable);
        if (kvm_is_error_hva(hva) || (writable ^ write_access))
Copilot AI May 22, 2025

[nitpick] Using bitwise XOR (^) on bools is less clear than 'writable != write_access'; consider replacing for readability.

Suggested change:
-        if (kvm_is_error_hva(hva) || (writable ^ write_access))
+        if (kvm_is_error_hva(hva) || (writable != write_access))


#ifdef CONFIG_RISCV_PMU_SBI
int riscv_pmu_get_hpm_info(u32 *hw_ctr_width, u32 *num_hw_ctr);
#endif

Copilot AI May 22, 2025

Add a brief comment describing the purpose, return values, and expected side‐effects of 'riscv_pmu_get_event_info'.

Suggested change
/**
* riscv_pmu_get_event_info - Retrieve event information for a given type and configuration.
* @type: The type of the event to query.
* @config: The configuration value associated with the event.
* @econfig: Pointer to store the expanded configuration for the event.
*
* This function retrieves detailed information about a specific event type and
* configuration. It populates the provided pointer with the expanded configuration
* value if the event is supported.
*
* Return: 0 on success, or a negative error code if the event is unsupported or
* an error occurs.
*/



    return ret;
}
EXPORT_SYMBOL(riscv_pmu_get_event_info);
Copilot AI May 22, 2025

For consistency with other KVM symbols exported under GPL, consider using EXPORT_SYMBOL_GPL instead of EXPORT_SYMBOL.

Suggested change:
-EXPORT_SYMBOL(riscv_pmu_get_event_info);
+EXPORT_SYMBOL_GPL(riscv_pmu_get_event_info);


Comment on lines 320 to 323
    for (int j = 0; j < ARRAY_SIZE(pmu_cache_event_map[i]); j++) {
        for (int k = 0; k < ARRAY_SIZE(pmu_cache_event_map[i][j]); k++)
            event_info_shmem[count++].event_idx =
                pmu_cache_event_map[i][j][k].event_idx;
Copilot AI May 22, 2025

[nitpick] This inner 'j' shadows the outer 'j' declared earlier; consider renaming one of them to avoid confusion.

Suggested change:
-    for (int j = 0; j < ARRAY_SIZE(pmu_cache_event_map[i]); j++) {
-        for (int k = 0; k < ARRAY_SIZE(pmu_cache_event_map[i][j]); k++)
-            event_info_shmem[count++].event_idx =
-                pmu_cache_event_map[i][j][k].event_idx;
+    for (int cache_level = 0; cache_level < ARRAY_SIZE(pmu_cache_event_map[i]); cache_level++) {
+        for (int k = 0; k < ARRAY_SIZE(pmu_cache_event_map[i][cache_level]); k++)
+            event_info_shmem[count++].event_idx =
+                pmu_cache_event_map[i][cache_level][k].event_idx;


    else
        return SBI_ERR_INVALID_ADDRESS;
}

Copilot AI May 22, 2025

Although the comment mentions 64-byte alignment, there's no explicit check for alignment here; add '(shmem & 63) == 0' validation before range checking.

Suggested change:
+    /* Ensure the shmem is 64-byte aligned as required by the spec. */
+    if (shmem & (SZ_64 - 1))
+        return SBI_ERR_INVALID_PARAM;



@sourcery-ai sourcery-ai bot left a comment


Hey @atishp04 - I've reviewed your changes - here's some feedback:

  • In kvm_riscv_vcpu_pmu_event_info(), on kzalloc failure you return -ENOMEM without setting retdata->err_val; map alloc errors to an SBI_ERR_* code and always set retdata->err_val before returning.
  • The event_info handler only enforces 16-byte alignment for the shmem pointer, whereas other handlers require 64-byte alignment—verify this matches the PMU spec or unify alignment checks across handlers.
  • Consider consolidating the repeated guest‐memory validation and translation logic (in snapshot_set_shmem and event_info) into a shared helper to ensure consistency across all SBI PMU handlers.
Here's what I looked at during the review
  • 🟡 General issues: 1 issue found
  • 🟢 Security: all looks good
  • 🟢 Review instructions: all looks good
  • 🟢 Testing: all looks good
  • 🟢 Documentation: all looks good


@atishp04 atishp04 force-pushed the b4/pmu_event_info_v3 branch from 102d85d to f5b1c76 on May 22, 2025 18:41
@atishp04 atishp04 requested a review from Copilot May 22, 2025 18:56

@Copilot Copilot AI left a comment


Pull Request Overview

Adds support for SBI v3 PMU event info, extends raw event encoding width, and integrates these into KVM with proper guest‐memory validation.

  • Introduce kvm_vcpu_validate_gpa_range to centralize GPA range and access checks.
  • Add riscv_pmu_get_event_info and wire it through the SBI PMU driver and KVM handlers.
  • Expose a new SBI PMU extension (EVENT_GET_INFO) in KVM and implement the in‐guest event‐info handler.

Reviewed Changes

Copilot reviewed 10 out of 10 changed files in this pull request and generated 2 comments.

Show a summary per file
File Description
virt/kvm/kvm_main.c Added kvm_vcpu_validate_gpa_range for GPA range checks.
include/linux/perf/riscv_pmu.h Declared riscv_pmu_get_event_info.
include/linux/kvm_host.h Declared kvm_vcpu_validate_gpa_range.
drivers/perf/riscv_pmu_sbi.c Implemented SBI v3 event info probing and riscv_pmu_get_event_info.
arch/riscv/kvm/vcpu_sbi_sta.c Replaced direct HVA checks with kvm_vcpu_validate_gpa_range.
arch/riscv/kvm/vcpu_sbi_pmu.c Added SBI_EXT_PMU_EVENT_GET_INFO dispatch and handler.
arch/riscv/kvm/vcpu_pmu.c Added support for SBI_PMU_EVENT_TYPE_RAW_V2.
arch/riscv/include/asm/sbi.h Defined riscv_pmu_event_info struct and v2 masks/indices.
arch/riscv/include/asm/kvm_vcpu_sbi.h Bumped SBI major version to 3.
arch/riscv/include/asm/kvm_vcpu_pmu.h Declared kvm_riscv_vcpu_pmu_event_info.
Comments suppressed due to low confidence (2)

arch/riscv/kvm/vcpu_sbi_pmu.c:508

  • The return value of kvm_vcpu_write_guest is not captured and ret from the previous loop is reused to check for errors. Assign the return code to ret (or a new variable) before the if and then handle failures accordingly.
kvm_vcpu_write_guest(vcpu, shmem, einfo, shmem_size);

virt/kvm/kvm_main.c:3304

  • New validation logic in kvm_vcpu_validate_gpa_range lacks unit tests. Consider adding tests covering edge cases (cross-page, full-page, read vs. write) to prevent regressions.
int kvm_vcpu_validate_gpa_range(struct kvm_vcpu *vcpu, gpa_t gpa, unsigned long len,

Copilot AI May 22, 2025

Although you note the 64-byte alignment requirement, there is no check before calling kvm_vcpu_validate_gpa_range. Add an explicit alignment check, for example: if (shmem & 63) return SBI_ERR_INVALID_ADDRESS;.

Suggested change:
     /* The spec requires the shmem to be 64-byte aligned. */
+    if (shmem & (SZ_64 - 1))
+        return SBI_ERR_INVALID_ADDRESS;


Comment on lines 321 to 323
Copilot AI May 22, 2025

[nitpick] The inner loop declares int k, shadowing the outer k variable. Consider renaming the inner loop variable (e.g., k2) to avoid confusion.

Suggested change:
-        for (int k = 0; k < ARRAY_SIZE(pmu_cache_event_map[i][j]); k++)
-            event_info_shmem[count++].event_idx =
-                pmu_cache_event_map[i][j][k].event_idx;
+        for (int k2 = 0; k2 < ARRAY_SIZE(pmu_cache_event_map[i][j]); k2++)
+            event_info_shmem[count++].event_idx =
+                pmu_cache_event_map[i][j][k2].event_idx;


atishp04 added 6 commits May 22, 2025 12:02
With the new SBI PMU event info function, we can query the availability
of all standard SBI PMU events at boot time with a single ecall. This
improves boot time by avoiding an SBI call for each standard PMU event.
Since this function is defined only in SBI v3.0, invoke it only if the
underlying SBI implementation is v3.0 or higher.

Signed-off-by: Atish Patra <[email protected]>
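
A hedged sketch of what such a bulk probe could look like; the helper name
is hypothetical, and the ecall argument order (shmem lo/hi, count, flags)
follows the pattern of other PMU shared-memory functions and is an
assumption, with drivers/perf/riscv_pmu_sbi.c being authoritative:

    /* Hypothetical sketch: probe num_events standard events in one ecall */
    static int probe_events_bulk(struct riscv_pmu_event_info *info,
                                 unsigned long num_events)
    {
        phys_addr_t base = virt_to_phys(info);
        struct sbiret ret;

        ret = sbi_ecall(SBI_EXT_PMU, SBI_EXT_PMU_EVENT_GET_INFO,
                        lower_32_bits(base), upper_32_bits(base),
                        num_events, 0, 0, 0);
        if (ret.error)
            return sbi_err_map_linux_errno(ret.error);

        /* Firmware has now filled info[i].output for every entry */
        return 0;
    }
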
The event mapping function can also be used by the get_event_info
implementation to find the corresponding SBI PMU event encoding.
Refactor and export it so that it can be invoked from KVM as well as
the internal driver.

Signed-off-by: Atish Patra <[email protected]>
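
A brief usage sketch of the exported helper; the return-value semantics are
inferred from the review discussion further down (negative on error, the
mapped encoding otherwise), and econfig may apparently be NULL when only
availability matters, as the KVM handler below does:

    u64 econfig;
    int eidx;

    /* Map a perf (type, config) pair to its SBI PMU event encoding */
    eidx = riscv_pmu_get_event_info(PERF_TYPE_HARDWARE,
                                    PERF_COUNT_HW_CPU_CYCLES, &econfig);
    if (eidx < 0)
        pr_debug("event not supported by this SBI implementation\n");
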
The arch-specific code may need to validate a GPA range when it is shared
memory between the host and the guest. Currently, there are a few places
where this is done in the RISC-V implementation. Given the nature of the
function, it may be useful for other architectures as well; hence, add a
common helper function.

Signed-off-by: Atish Patra <[email protected]>
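
A reconstruction of the helper from the fragments quoted in the reviews
below (virt/kvm/kvm_main.c is authoritative; the -EPERM error code is an
assumption):

    int kvm_vcpu_validate_gpa_range(struct kvm_vcpu *vcpu, gpa_t gpa,
                                    unsigned long len, bool write_access)
    {
        gfn_t gfn = gpa >> PAGE_SHIFT;
        int offset = offset_in_page(gpa);
        unsigned long hva;
        bool writable;
        int seg;

        /* Walk the range page by page, checking mapping and access rights */
        while ((seg = next_segment(len, offset)) != 0) {
            hva = kvm_vcpu_gfn_to_hva_prot(vcpu, gfn, &writable);
            if (kvm_is_error_hva(hva) || (writable ^ write_access))
                return -EPERM;
            offset = 0;
            len -= seg;
            gfn++;
        }

        return 0;
    }
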
Remove the duplicate code and use the new helper function to validate
the shared-memory GPA range.

Signed-off-by: Atish Patra <[email protected]>
The new get_event_info function allows the guest to query the presence
of multiple events with a single SBI call. Currently, the perf driver
in a Linux guest invokes it for all the standard SBI PMU events. Support
this SBI function in KVM as well.

Signed-off-by: Atish Patra <[email protected]>
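
The dispatch in kvm_sbi_ext_pmu_handler() presumably follows the pattern of
the existing cases; the reviews below say the CPU context registers are
passed through, so the a0-a3 mapping here is an assumption:

    case SBI_EXT_PMU_EVENT_GET_INFO:
        ret = kvm_riscv_vcpu_pmu_event_info(vcpu, cp->a0, cp->a1,
                                            cp->a2, cp->a3, retdata);
        break;
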
Upgrade the SBI version to v3.0 so that the corresponding features
can be enabled in the guest.

Signed-off-by: Atish Patra <[email protected]>
@atishp04 atishp04 force-pushed the b4/pmu_event_info_v3 branch from f5b1c76 to 166ee60 on May 22, 2025 19:02
@atishp04 atishp04 requested a review from Copilot May 22, 2025 19:02

@Copilot Copilot AI left a comment


Pull Request Overview

Adds support for the SBI v3 PMU event info extension, extends raw event widths to 56 bits, and integrates the new interface into KVM with consolidated guest-memory validation.

  • Introduce kvm_vcpu_validate_gpa_range for unified GPA range checks and update SBI handlers to use it
  • Extend raw PMU event encoding to 56 bits and implement pmu_sbi_check_event_info / riscv_pmu_get_event_info in the perf driver
  • Expose the new SBI_EXT_PMU_EVENT_GET_INFO call in KVM, bump SBI version to 3.0, and wire up the new handler

Reviewed Changes

Copilot reviewed 10 out of 10 changed files in this pull request and generated 2 comments.

Show a summary per file
File Description
virt/kvm/kvm_main.c Add kvm_vcpu_validate_gpa_range helper and export it
include/linux/perf/riscv_pmu.h Declare riscv_pmu_get_event_info API
include/linux/kvm_host.h Prototype kvm_vcpu_validate_gpa_range
drivers/perf/riscv_pmu_sbi.c Implement SBI v3 event info probing and extend raw-event mask
arch/riscv/kvm/vcpu_sbi_sta.c Replace GFN-to-HVA prot calls with GPA-range validation
arch/riscv/kvm/vcpu_sbi_pmu.c Add SBI_EXT_PMU_EVENT_GET_INFO handler and kvm_riscv_vcpu_pmu_event_info
arch/riscv/kvm/vcpu_pmu.c Support SBI_PMU_EVENT_TYPE_RAW_V2 and updated config masking
arch/riscv/include/asm/sbi.h Define riscv_pmu_event_info struct and new RAW_V2 constants
arch/riscv/include/asm/kvm_vcpu_sbi.h Bump KVM_SBI_VERSION_MAJOR to 3
arch/riscv/include/asm/kvm_vcpu_pmu.h Declare kvm_riscv_vcpu_pmu_event_info
Comments suppressed due to low confidence (3)

arch/riscv/kvm/vcpu_sbi_pmu.c:508

  • The return value of kvm_vcpu_write_guest is not captured; using the existing 'ret' from the event loop may misinterpret success/failure. Capture and check write_guest's return separately before proceeding.
kvm_vcpu_write_guest(vcpu, shmem, einfo, shmem_size);

arch/riscv/kvm/vcpu_sbi_pmu.c:467

  • The alignment check uses (SZ_16 - 1) which enforces 16-byte alignment; verify whether the SBI v3 PMU event info spec requires 64-byte alignment instead and update the mask accordingly.
if (flags != 0 || (saddr_low & (SZ_16 - 1))) {

virt/kvm/kvm_main.c:3308

  • [nitpick] The variable name 'seg' is ambiguous; consider renaming it to 'chunk_len' or 'segment_len' to clarify its purpose in the GPA-range iteration.
int seg;

        }
    }

    base_addr = __pa(event_info_shmem);
Copilot AI May 22, 2025

Using __pa() on slab-allocated memory can be unsafe; consider using virt_to_phys() or dma_map_kernel() to obtain a correct physical address for the event_info_shmem buffer.

Suggested change:
-    base_addr = __pa(event_info_shmem);
+    base_addr = virt_to_phys(event_info_shmem);


Comment on lines +94 to +95
int riscv_pmu_get_event_info(u32 type, u64 config, u64 *econfig);

Copilot AI May 22, 2025

[nitpick] The new declaration for riscv_pmu_get_event_info should be inside the CONFIG_RISCV_PMU guard to avoid exposing it when PMU support is disabled.

Suggested change:
-int riscv_pmu_get_event_info(u32 type, u64 config, u64 *econfig);
+#ifdef CONFIG_RISCV_PMU
+int riscv_pmu_get_event_info(u32 type, u64 config, u64 *econfig);



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (4)
include/linux/perf/riscv_pmu.h (1)

94-94: Add documentation for the new PMU event info function.

The function declaration lacks documentation explaining its purpose, parameters, and return values, which would improve API usability.

Add documentation for the function:

+/**
+ * riscv_pmu_get_event_info - Retrieve event information for a given type and configuration.
+ * @type: The type of the event to query.
+ * @config: The configuration value associated with the event.
+ * @econfig: Pointer to store the expanded configuration for the event.
+ *
+ * This function retrieves detailed information about a specific event type and
+ * configuration. It populates the provided pointer with the expanded configuration
+ * value if the event is supported.
+ *
+ * Return: 0 on success, or a negative error code if the event is unsupported or
+ *         an error occurs.
+ */
 int riscv_pmu_get_event_info(u32 type, u64 config, u64 *econfig);
arch/riscv/kvm/vcpu_sbi_sta.c (1)

112-114: 🛠️ Refactor suggestion

Add explicit 64-byte alignment check for shmem

While there's an alignment check for shmem_phys_lo at line 100, there should be an explicit check for the 64-byte alignment of the final shmem value before calling kvm_vcpu_validate_gpa_range. This is especially important since the value of shmem is constructed from both shmem_phys_lo and potentially shmem_phys_hi.

/* The spec requires the shmem to be 64-byte aligned. */
+if (shmem & (SZ_64 - 1))
+	return SBI_ERR_INVALID_ADDRESS;
if (kvm_vcpu_validate_gpa_range(vcpu, shmem, 64, true))
	return SBI_ERR_INVALID_ADDRESS;
virt/kvm/kvm_main.c (1)

3315-3315: Consider using direct comparison for better readability.

Using bitwise XOR (^) on booleans is less intuitive than a direct comparison.

-  if (kvm_is_error_hva(hva) || (writable ^ write_access))
+  if (kvm_is_error_hva(hva) || (writable != write_access))
drivers/perf/riscv_pmu_sbi.c (1)

303-361: Use virt_to_phys() instead of __pa() on slab memory

Previous feedback already highlighted that __pa() is only guaranteed for direct-mapped memory. While kcalloc() currently returns physically contiguous memory, using virt_to_phys() (or switching to alloc_pages/get_free_page) avoids architecture-specific corner-cases and keeps the code consistent with the rest of the driver.

🧹 Nitpick comments (4)
arch/riscv/include/asm/sbi.h (2)

162-169: Nit: add a kernel-doc style comment for the new struct & flag.

A one-liner describing what output == 1 actually means (e.g. “event supported on this HW”) helps future readers and tools that generate docs.


172-176: Macro naming consistency

RISCV_PMU_RAW_EVENT_IDX & RISCV_PMU_RAW_EVENT_V2_IDX now encode two raw formats. Consider adding an explicit comment near the two defines (/* type field = 0x2 */ and /* type field = 0x3 */) so that grepping for “raw event” shows both quickly.

drivers/perf/riscv_pmu_sbi.c (2)

382-386: Graceful fallback if v3 query fails

Good idea to try the bulk EVENT_GET_INFO path first. One minor suggestion: log ret value on failure (pr_debug()?) so that diagnosing firmware issues is simpler.


415-478: Edge-case in RAW event validation

Inside PERF_TYPE_RAW, case 0 rejects configs with any of bits 56-63 set, but also rejects values where bits 48-55 are set when running on pre-v3 firmware. That is correct, yet the error path silently returns -ENOENT, which upper layers translate to “event unsupported”. Consider returning -EINVAL for malformed encodings so that userspace can distinguish “bad encoding” from “unsupported event”.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4a95bc1 and 166ee60.

📒 Files selected for processing (10)
  • arch/riscv/include/asm/kvm_vcpu_pmu.h (1 hunks)
  • arch/riscv/include/asm/kvm_vcpu_sbi.h (1 hunks)
  • arch/riscv/include/asm/sbi.h (3 hunks)
  • arch/riscv/kvm/vcpu_pmu.c (4 hunks)
  • arch/riscv/kvm/vcpu_sbi_pmu.c (1 hunks)
  • arch/riscv/kvm/vcpu_sbi_sta.c (1 hunks)
  • drivers/perf/riscv_pmu_sbi.c (7 hunks)
  • include/linux/kvm_host.h (1 hunks)
  • include/linux/perf/riscv_pmu.h (1 hunks)
  • virt/kvm/kvm_main.c (1 hunks)
🔇 Additional comments (11)
arch/riscv/include/asm/kvm_vcpu_sbi.h (1)

14-14: Version increment appropriately signals API changes.

The version bump from 2 to 3 indicates a significant update to the KVM SBI interface, aligning with the introduction of new SBI PMU event querying capabilities and extended event encoding formats.

arch/riscv/kvm/vcpu_sbi_pmu.c (1)

76-78: PMU event info support properly implemented.

The addition of the SBI_EXT_PMU_EVENT_GET_INFO case follows the existing pattern of the switch statement, correctly passing the CPU context registers to the new handler function.

include/linux/kvm_host.h (1)

1386-1387: New GPA range validation function provides better abstraction.

This function declaration centralizes guest physical address validation logic, providing a consistent interface for validating address ranges with write permissions. The function signature is clear and well-structured.

arch/riscv/include/asm/kvm_vcpu_pmu.h (1)

101-103: Function declaration looks good

The new function declaration for kvm_riscv_vcpu_pmu_event_info follows the pattern of other similar functions in this file and provides all necessary parameters for the PMU event information functionality.

virt/kvm/kvm_main.c (1)

3304-3323: New utility function for validating guest physical address ranges.

This function iterates over a GPA range and verifies that each page segment is accessible with the required permissions. It's a useful addition that centralizes permission checking logic for guest memory ranges.

arch/riscv/include/asm/sbi.h (1)

138-139: Confirm ABI alignment of new FID SBI_EXT_PMU_EVENT_GET_INFO.

SBI_EXT_PMU_COUNTER_FW_READ_HI previously had the numeric value 7 (after implicit auto-increment).
By inserting SBI_EXT_PMU_EVENT_GET_INFO immediately afterwards the new FID will be 8.
That matches the upstream v3.0 PMU spec, so nothing to fix – just double-check any userspace that may still treat the snapshot-set‐shmem call as the final ID.

arch/riscv/kvm/vcpu_pmu.c (3)

63-65: Good inclusion of new RAW-V2 type.


132-134: Mask matches v3.0 width – looks correct.


433-436: Nice clean-up with kvm_vcpu_validate_gpa_range().

drivers/perf/riscv_pmu_sbi.c (2)

62-63: Updated format string matches extended width.
Remember that users of perf-tooling may need a corresponding update; otherwise looks fine.


1542-1544: sbi_v3_available initialisation looks correct.

You correctly gate v3 functionality on major >= 3. No issues spotted here.

Comment on lines +456 to +520
int kvm_riscv_vcpu_pmu_event_info(struct kvm_vcpu *vcpu, unsigned long saddr_low,
                                  unsigned long saddr_high, unsigned long num_events,
                                  unsigned long flags, struct kvm_vcpu_sbi_return *retdata)
{
    struct riscv_pmu_event_info *einfo;
    int shmem_size = num_events * sizeof(*einfo);
    gpa_t shmem;
    u32 eidx, etype;
    u64 econfig;
    int ret;

    if (flags != 0 || (saddr_low & (SZ_16 - 1))) {
        ret = SBI_ERR_INVALID_PARAM;
        goto out;
    }

    shmem = saddr_low;
    if (saddr_high != 0) {
        if (IS_ENABLED(CONFIG_32BIT)) {
            shmem |= ((gpa_t)saddr_high << 32);
        } else {
            ret = SBI_ERR_INVALID_ADDRESS;
            goto out;
        }
    }

    if (kvm_vcpu_validate_gpa_range(vcpu, shmem, shmem_size, true)) {
        ret = SBI_ERR_INVALID_ADDRESS;
        goto out;
    }

    einfo = kzalloc(shmem_size, GFP_KERNEL);
    if (!einfo)
        return -ENOMEM;

    ret = kvm_vcpu_read_guest(vcpu, shmem, einfo, shmem_size);
    if (ret) {
        ret = SBI_ERR_FAILURE;
        goto free_mem;
    }

    for (int i = 0; i < num_events; i++) {
        eidx = einfo[i].event_idx;
        etype = kvm_pmu_get_perf_event_type(eidx);
        econfig = kvm_pmu_get_perf_event_config(eidx, einfo[i].event_data);
        ret = riscv_pmu_get_event_info(etype, econfig, NULL);
        if (ret > 0)
            einfo[i].output = 1;
        else
            einfo[i].output = 0;
    }

    kvm_vcpu_write_guest(vcpu, shmem, einfo, shmem_size);
    if (ret) {
        ret = SBI_ERR_FAILURE;
        goto free_mem;
    }

free_mem:
    kfree(einfo);
out:
    retdata->err_val = ret;

    return 0;
}

⚠️ Potential issue

Multiple functional bugs in kvm_riscv_vcpu_pmu_event_info()

  1. ret is uninitialised on the success path – we may return a random value to the guest.
  2. The return value of kvm_vcpu_write_guest() is ignored, so write failures go unnoticed.
  3. Heap allocation uses GFP_KERNEL; this path executes in the vCPU exit handler and may run in atomic context. Existing code in this file allocates with GFP_ATOMIC.
  4. An earlier failure in the loop is overwritten by the next iteration; we should report any error, not just the last one.

Proposed fix:

@@
-	struct riscv_pmu_event_info *einfo;
-	int shmem_size = num_events * sizeof(*einfo);
+	struct riscv_pmu_event_info *einfo;
+	const int shmem_size = num_events * sizeof(*einfo);
 	gpa_t shmem;
 	u32 eidx, etype;
 	u64 econfig;
-	int ret;
+	int ret = 0;
+	int err_seen = 0;
@@
-	einfo = kzalloc(shmem_size, GFP_KERNEL);
+	einfo = kzalloc(shmem_size, GFP_ATOMIC);
@@
-		ret = riscv_pmu_get_event_info(etype, econfig, NULL);
-		if (ret > 0)
+		ret = riscv_pmu_get_event_info(etype, econfig, NULL);
+		if (ret > 0)
 			einfo[i].output = 1;
 		else
 			einfo[i].output = 0;
-	}
-
-	kvm_vcpu_write_guest(vcpu, shmem, einfo, shmem_size);
-	if (ret) {
-		ret = SBI_ERR_FAILURE;
-		goto free_mem;
+		if (ret < 0)
+			err_seen = 1;
+	}
+
+	ret = kvm_vcpu_write_guest(vcpu, shmem, einfo, shmem_size);
+	if (ret) {
+		ret = SBI_ERR_FAILURE;
+		goto free_mem;
 	}
-
-free_mem:
-	kfree(einfo);
-out:
-	retdata->err_val = ret;
-
-	return 0;
+free_mem:
+	kfree(einfo);
+out:
+	if (err_seen && !ret)
+		ret = SBI_ERR_FAILURE;
+	retdata->err_val = ret;
+
+	return 0;
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change (the unmodified function is quoted in full above; this is the proposed fixed version):

int kvm_riscv_vcpu_pmu_event_info(struct kvm_vcpu *vcpu, unsigned long saddr_low,
                                  unsigned long saddr_high, unsigned long num_events,
                                  unsigned long flags, struct kvm_vcpu_sbi_return *retdata)
{
    struct riscv_pmu_event_info *einfo;
    const int shmem_size = num_events * sizeof(*einfo);
    gpa_t shmem;
    u32 eidx, etype;
    u64 econfig;
    int ret = 0;
    int err_seen = 0;

    if (flags != 0 || (saddr_low & (SZ_16 - 1))) {
        ret = SBI_ERR_INVALID_PARAM;
        goto out;
    }

    shmem = saddr_low;
    if (saddr_high != 0) {
        if (IS_ENABLED(CONFIG_32BIT)) {
            shmem |= ((gpa_t)saddr_high << 32);
        } else {
            ret = SBI_ERR_INVALID_ADDRESS;
            goto out;
        }
    }

    if (kvm_vcpu_validate_gpa_range(vcpu, shmem, shmem_size, true)) {
        ret = SBI_ERR_INVALID_ADDRESS;
        goto out;
    }

    einfo = kzalloc(shmem_size, GFP_ATOMIC);
    if (!einfo)
        return -ENOMEM;

    ret = kvm_vcpu_read_guest(vcpu, shmem, einfo, shmem_size);
    if (ret) {
        ret = SBI_ERR_FAILURE;
        goto free_mem;
    }

    for (int i = 0; i < num_events; i++) {
        eidx = einfo[i].event_idx;
        etype = kvm_pmu_get_perf_event_type(eidx);
        econfig = kvm_pmu_get_perf_event_config(eidx, einfo[i].event_data);
        ret = riscv_pmu_get_event_info(etype, econfig, NULL);
        if (ret > 0)
            einfo[i].output = 1;
        else
            einfo[i].output = 0;
        if (ret < 0)
            err_seen = 1;
    }

    ret = kvm_vcpu_write_guest(vcpu, shmem, einfo, shmem_size);
    if (ret) {
        ret = SBI_ERR_FAILURE;
        goto free_mem;
    }

free_mem:
    kfree(einfo);
out:
    if (err_seen && !ret)
        ret = SBI_ERR_FAILURE;
    retdata->err_val = ret;

    return 0;
}
🤖 Prompt for AI Agents
In arch/riscv/kvm/vcpu_pmu.c from lines 456 to 520, initialize the variable ret
to 0 at the start to avoid returning uninitialized values on success. Check and
handle the return value of kvm_vcpu_write_guest to detect write failures and set
ret accordingly. Replace GFP_KERNEL with GFP_ATOMIC in the kzalloc call to
ensure safe allocation in atomic context. Modify the loop to preserve any error
returned by riscv_pmu_get_event_info instead of overwriting ret on each
iteration, so that any error is reported properly.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (3)
include/linux/perf/riscv_pmu.h (1)

94-94: Add documentation for the riscv_pmu_get_event_info function.

The function declaration looks good, but it would benefit from documentation explaining its purpose, parameters, and return values.


/**
 * riscv_pmu_get_event_info - Retrieve event information for a given type and configuration.
 * @type: The type of the event to query.
 * @config: The configuration value associated with the event.
 * @econfig: Pointer to store the expanded configuration for the event.
 *
 * This function retrieves detailed information about a specific event type and
 * configuration. It populates the provided pointer with the expanded configuration
 * value if the event is supported.
 *
 * Return: 0 on success, or a negative error code if the event is unsupported or
 *         an error occurs.
 */
virt/kvm/kvm_main.c (1)

3315-3315: Consider using an explicit comparison for boolean values.

Using bitwise XOR (^) on booleans works but is less readable than a direct comparison.

-	if (kvm_is_error_hva(hva) || (writable ^ write_access))
+	if (kvm_is_error_hva(hva) || (writable != write_access))
drivers/perf/riscv_pmu_sbi.c (1)

327-333: __pa() on slab memory – same concern as before

Calling __pa() on a slab-allocated (kcalloc) buffer is unsafe on systems
with highmem/VMEMMAP because the linear mapping is not guaranteed. Use
virt_to_phys() (or, even better, page_to_phys() on an allocated page) as
suggested in the previous review.

🧹 Nitpick comments (3)
arch/riscv/kvm/vcpu_pmu.c (2)

62-66: Mark the intentional fall-through to silence -Wimplicit-fallthrough

The back-to-back case SBI_PMU_EVENT_TYPE_RAW: and
case SBI_PMU_EVENT_TYPE_RAW_V2: share the same body, but the compiler may
emit -Wimplicit-fallthrough warnings when building with CONFIG_WERROR.

-case SBI_PMU_EVENT_TYPE_RAW:
+case SBI_PMU_EVENT_TYPE_RAW:		/* fall through */

(or use the fallthrough; pseudo-keyword when compiling under C23 / GCC).


130-135: Missing fall-through annotation here as well

Same remark as above for kvm_pmu_get_perf_event_config().

drivers/perf/riscv_pmu_sbi.c (1)

415-478: Return code semantics in riscv_pmu_get_event_info()

For hardware/raw events the function may legitimately return 0
(event-idx 0). Call-sites such as the new KVM helper treat ret > 0
as success and anything else as failure, which mistakenly flags a valid
index 0 as “not supported”. Consider changing the API to:

return ret >= 0 ? 0 : ret; /* or use a dedicated bool output flag */

and let callers rely on the separate output bit.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4a95bc1 and 166ee60.

📒 Files selected for processing (10)
  • arch/riscv/include/asm/kvm_vcpu_pmu.h (1 hunks)
  • arch/riscv/include/asm/kvm_vcpu_sbi.h (1 hunks)
  • arch/riscv/include/asm/sbi.h (3 hunks)
  • arch/riscv/kvm/vcpu_pmu.c (4 hunks)
  • arch/riscv/kvm/vcpu_sbi_pmu.c (1 hunks)
  • arch/riscv/kvm/vcpu_sbi_sta.c (1 hunks)
  • drivers/perf/riscv_pmu_sbi.c (7 hunks)
  • include/linux/kvm_host.h (1 hunks)
  • include/linux/perf/riscv_pmu.h (1 hunks)
  • virt/kvm/kvm_main.c (1 hunks)
🔇 Additional comments (7)
arch/riscv/kvm/vcpu_sbi_pmu.c (1)

76-78: Implementation of SBI PMU event information interface looks good.

This case properly adds support for the new SBI_EXT_PMU_EVENT_GET_INFO function ID, passing the guest parameters from the CPU context to the implementation function.

arch/riscv/include/asm/kvm_vcpu_sbi.h (1)

14-14: Version bump correctly reflects SBI PMU v3 support.

Updating the major version from 2 to 3 is appropriate for the introduction of the new SBI PMU v3.0 interface support.

include/linux/kvm_host.h (1)

1386-1387: New GPA validation helper function looks good.

This new utility function centralizes the guest physical address range validation logic, which will help maintain consistency across the codebase and avoid duplication. The function signature includes all necessary parameters for the validation: the vcpu, starting GPA, length, and write access permission.

arch/riscv/include/asm/kvm_vcpu_pmu.h (1)

101-103: Function declaration looks good!

The new function declaration for kvm_riscv_vcpu_pmu_event_info follows the established pattern of other PMU functions in this file. Based on the AI summary, this function will support SBI PMU version 3.0 by providing an expanded PMU event information query mechanism.

arch/riscv/kvm/vcpu_sbi_sta.c (1)

112-114: Simplified GPA validation approach

The refactoring improves code maintainability by using the new kvm_vcpu_validate_gpa_range function to directly validate the guest physical address range, replacing the previous approach that required manual GFN-to-HVA conversion and write permission checking.

Note that 64-byte alignment is already checked at line 100 for shmem_phys_lo, which is assigned to shmem at line 103.

virt/kvm/kvm_main.c (1)

3304-3323: Good addition of a central GPA range validation function.

This new function provides a clear and reusable way to validate guest physical address ranges, ensuring the entire range is accessible with the specified write access permission. This will help prevent code duplication across the codebase.

arch/riscv/include/asm/sbi.h (1)

162-169: Addition looks good – matches the v3.0 spec

The new riscv_pmu_event_info structure and RISCV_PMU_EVENT_INFO_OUTPUT_MASK constant are semantically correct and align with the upstream draft for SBI v3.0. No functional issues spotted.

Comment on lines +456 to +520 (the same kvm_riscv_vcpu_pmu_event_info() listing quoted in full above)

🛠️ Refactor suggestion

⚠️ Potential issue

Several correctness issues in kvm_riscv_vcpu_pmu_event_info()

  1. Integer-overflow & sign issues
    int shmem_size = num_events * sizeof(*einfo);
    num_events comes from the guest, may be 64-bit and very large.
    • Multiplication is done in unsigned long then truncated to signed int.
    • This can overflow or wrap negative, leading to over-allocation or OOB
    when validating the GPA range.

  2. Use-after-success semantics
    The same ret variable is reused for
    – the result of kvm_vcpu_read_guest(),
    – every iteration of riscv_pmu_get_event_info(),
    – later used after kvm_vcpu_write_guest().
    A successful, positive return value from the last iteration turns into
    an error path because the final if (ret) check interprets any non-zero
    as failure.

  3. Event-0 handled as “not present”
    output is set only when ret > 0. If the PMU maps an event to index 0,
    output will be cleared even though the event is valid.

  4. Memory allocation style
    kzalloc(shmem_size, GFP_KERNEL) does not protect against the overflow
    mentioned in (1). Prefer kvcalloc() / kvmalloc_array() which has a
    built-in overflow check.

A robust rewrite could look like:

-    int shmem_size = num_events * sizeof(*einfo);
+    size_t shmem_size;
+
+    if (!num_events ||
+        check_mul_overflow(num_events, sizeof(*einfo), &shmem_size)) {
+        ret = SBI_ERR_INVALID_PARAM;
+        goto out;
+    }

-    einfo = kzalloc(shmem_size, GFP_KERNEL);
+    einfo = kvmalloc_array(num_events, sizeof(*einfo), GFP_KERNEL | __GFP_ZERO);

@@
-    ret = kvm_vcpu_read_guest(vcpu, shmem, einfo, shmem_size);
-    if (ret) {
+    if (kvm_vcpu_read_guest(vcpu, shmem, einfo, shmem_size)) {
         ret = SBI_ERR_FAILURE;
         goto free_mem;
     }
@@
-        ret = riscv_pmu_get_event_info(etype, econfig, NULL);
-        if (ret > 0)
+        int rc = riscv_pmu_get_event_info(etype, econfig, NULL);
+        if (rc >= 0)
             einfo[i].output = 1;
         else
             einfo[i].output = 0;
     }
@@
-    kvm_vcpu_write_guest(vcpu, shmem, einfo, shmem_size);
-    if (ret) {
+    if (kvm_vcpu_write_guest(vcpu, shmem, einfo, shmem_size)) {
         ret = SBI_ERR_FAILURE;
         goto free_mem;
     }

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In arch/riscv/kvm/vcpu_pmu.c around lines 456 to 520, fix integer overflow by
changing shmem_size to an unsigned long or size_t and use a safe allocation
function like kvcalloc or kvmalloc_array to allocate einfo, which checks for
overflow. Separate the ret variable usage by introducing a new variable for the
loop over riscv_pmu_get_event_info() results to avoid overwriting the error
status from kvm_vcpu_read_guest() and kvm_vcpu_write_guest(). Adjust the logic
that sets einfo[i].output to treat ret >= 0 as valid (not just ret > 0) so event
index 0 is correctly handled as present. Finally, ensure GPA range validation
uses the correct size type matching shmem_size to prevent out-of-bounds errors.
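For context, check_mul_overflow() from <linux/overflow.h> writes the product to its third argument and returns true if the multiplication overflowed. A minimal, self-contained sketch of guarding a guest-sized allocation with it (names are illustrative, not from the PR):

#include <linux/overflow.h>
#include <linux/slab.h>

/* Allocate num_events records of rec_size bytes each, refusing sizes
 * that cannot be represented in a size_t. Returns NULL on failure. */
static void *alloc_event_buf(unsigned long num_events, size_t rec_size)
{
	size_t bytes;

	if (!num_events || check_mul_overflow(num_events, rec_size, &bytes))
		return NULL;

	return kvzalloc(bytes, GFP_KERNEL);
}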

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

♻️ Duplicate comments (3)
include/linux/perf/riscv_pmu.h (1)

94-94: Added function to retrieve PMU event information.

This function declaration provides an interface to query detailed PMU event information, which is a key part of the SBI v3 PMU event info extension support. It appears to follow the design pattern of other PMU-related functions in this file.

However, the function lacks documentation describing its purpose, parameters, and return values.

Consider adding a documentation comment above the function declaration:

+/**
+ * riscv_pmu_get_event_info - Retrieve event information for a given type and configuration.
+ * @type: The type of the event to query.
+ * @config: The configuration value associated with the event.
+ * @econfig: Pointer to store the expanded configuration for the event.
+ *
+ * This function retrieves detailed information about a specific event type and
+ * configuration. It populates the provided pointer with the expanded configuration
+ * value if the event is supported.
+ *
+ * Return: the mapped event index on success, or a negative error code if the
+ *         event is unsupported or an error occurs.
+ */
 int riscv_pmu_get_event_info(u32 type, u64 config, u64 *econfig);
virt/kvm/kvm_main.c (1)

3304-3322: Good implementation of GPA range validation, but consider improving readability

This new helper function centralizes guest physical address range validation for KVM, which is a good design choice for maintainability. The implementation iterates over the GPA range in page-sized segments and properly checks both address validity and access permissions.

On line 3315, the use of XOR (^) operator for boolean comparison writable ^ write_access is functionally correct but less readable than a direct comparison. Consider replacing it with:

-    if (kvm_is_error_hva(hva) || (writable ^ write_access))
+    if (kvm_is_error_hva(hva) || (writable != write_access))

This makes the intent clearer without changing the functionality.
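For readers who have not seen the helper, a minimal sketch of the loop described above, built on the existing kvm_vcpu_gfn_to_hva_prot() and kvm_is_error_hva() primitives (the actual implementation in the PR may differ in detail):

int kvm_vcpu_validate_gpa_range(struct kvm_vcpu *vcpu, gpa_t gpa,
				unsigned long len, bool write_access)
{
	unsigned long hva;
	bool writable;

	while (len) {
		/* Never step past the end of the current guest page. */
		unsigned long chunk = min(len, PAGE_SIZE - offset_in_page(gpa));

		hva = kvm_vcpu_gfn_to_hva_prot(vcpu, gpa_to_gfn(gpa), &writable);
		if (kvm_is_error_hva(hva) || writable != write_access)
			return -EPERM;
		gpa += chunk;
		len -= chunk;
	}

	return 0;
}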

drivers/perf/riscv_pmu_sbi.c (1)

303-334: Avoid __pa() on slab memory (repeat).

This block still converts a slab/kmalloc() address with __pa(). On some architectures that is undefined once high-mem comes into play; virt_to_phys() is the safe helper.

-	base_addr = __pa(event_info_shmem);
+	base_addr = virt_to_phys(event_info_shmem);
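For illustration, the surrounding call might then take this shape (a sketch only: the argument order mirrors the (addr_lo, addr_hi, num_events, flags) signature of the KVM handler below, and the real driver invocation may differ):

	phys_addr_t pa = virt_to_phys(event_info_shmem);
	struct sbiret sret;

	/* Hand the shared buffer's physical address to firmware. */
	sret = sbi_ecall(SBI_EXT_PMU, SBI_EXT_PMU_EVENT_GET_INFO,
			 lower_32_bits(pa), upper_32_bits(pa), num_events, 0, 0, 0);
	if (sret.error)
		return sbi_err_map_linux_errno(sret.error);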
🧹 Nitpick comments (6)
arch/riscv/kvm/vcpu_sbi_sta.c (1)

112-113: Comment only needs to describe the alignment check once

Line 100 already enforces the 64-byte alignment, so repeating the requirement here as a comment is fine but could be misleading if the check ever moves.
Consider moving the comment beside the actual check (line 100) or updating the sentence to “see check above”.

arch/riscv/include/asm/kvm_vcpu_pmu.h (1)

101-103: Add kerneldoc for the new public symbol

kvm_riscv_vcpu_pmu_event_info() is now part of the public PMU KVM API but lacks a brief kerneldoc block describing parameters, expected return codes, and error semantics. Adding it will help future callers and automated docs.

arch/riscv/include/asm/sbi.h (1)

172-176: Mask name/purpose mismatch could confuse future readers.

RISCV_PMU_RAW_EVENT_V2_MASK is documented as “extended hpmeventX width value”, yet the mask stops at bit 55, leaving bits 56–61 unused. That is correct for the v3.0 encoding (bits 63-62 differentiate classes, 56-61 reserved) but the comment does not convey this.

Consider tightening the comment to something like:

-/* SBI v3.0 allows extended hpmeventX width value */
+/* SBI v3.0 raw hardware event: valid bits [55:0] – upper 8 bits are class/reserved */
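If the comment is reworded, expressing the constant with GENMASK_ULL would also make the valid bit range visible at a glance (a sketch; the PR may already define it differently):

#include <linux/bits.h>

/* Bits [55:0] carry the raw event value; [61:56] are reserved and
 * [63:62] select the event class, so they stay outside the mask. */
#define RISCV_PMU_RAW_EVENT_V2_MASK	GENMASK_ULL(55, 0)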
drivers/perf/riscv_pmu_sbi.c (3)

62-63: config:0-55 format string needs an accompanying comment.

The earlier format stopped at bit 47. Given that bits 56-61 are now reserved, it would help tooling & users if we document that they must stay zero for raw hardware events.

No code change required—just a short comment above the attribute.


65-67: Make feature flags read-only after boot.

sbi_v2_available and the newly-added sbi_v3_available are set once during early init and never change.
Marking them __ro_after_init prevents accidental writes and allows the compiler to fold branches:

-static bool sbi_v3_available;
+static bool sbi_v3_available __ro_after_init;

(Do the same for sbi_v2_available while you are here.)


429-454: Minor logic tidy-up for raw-event validation.

Within the v3 path you return -ENOENT when bits 56-63 are set even though the error is “invalid parameter”. Returning -EINVAL makes diagnostics clearer and aligns with the sbi_err_map_linux_errno() mapping.

-				if (!(config & ~RISCV_PMU_RAW_EVENT_V2_MASK)) {
+				if (config & ~RISCV_PMU_RAW_EVENT_V2_MASK) {
+					return -EINVAL;
+				} else {
 					if (econfig)
 						*econfig = config & RISCV_PMU_RAW_EVENT_V2_MASK;
 					ret = RISCV_PMU_RAW_EVENT_V2_IDX;
 				}

(This is optional – functional impact is small.)

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4a95bc1 and 166ee60.

📒 Files selected for processing (10)
  • arch/riscv/include/asm/kvm_vcpu_pmu.h (1 hunks)
  • arch/riscv/include/asm/kvm_vcpu_sbi.h (1 hunks)
  • arch/riscv/include/asm/sbi.h (3 hunks)
  • arch/riscv/kvm/vcpu_pmu.c (4 hunks)
  • arch/riscv/kvm/vcpu_sbi_pmu.c (1 hunks)
  • arch/riscv/kvm/vcpu_sbi_sta.c (1 hunks)
  • drivers/perf/riscv_pmu_sbi.c (7 hunks)
  • include/linux/kvm_host.h (1 hunks)
  • include/linux/perf/riscv_pmu.h (1 hunks)
  • virt/kvm/kvm_main.c (1 hunks)
🔇 Additional comments (5)
arch/riscv/include/asm/kvm_vcpu_sbi.h (1)

14-14: KVM SBI version has been updated to support new PMU features.

This version bump from 2.0 to 3.0 aligns with the introduction of SBI v3.0 PMU event info extension support in KVM. The major version increment properly signals a feature enhancement in the KVM SBI interface.

arch/riscv/kvm/vcpu_sbi_pmu.c (1)

76-78: Added handler for the new PMU event info extension.

The implementation adds support for the SBI_EXT_PMU_EVENT_GET_INFO extension function by routing it to the new kvm_riscv_vcpu_pmu_event_info handler. This follows the same pattern as the other PMU extension functions.

#!/bin/bash
# Check if the kvm_riscv_vcpu_pmu_event_info function is defined elsewhere
rg "kvm_riscv_vcpu_pmu_event_info" -A 2 -B 2
include/linux/kvm_host.h (1)

1386-1387: Added helper for validating guest physical address ranges.

This new function centralizes guest physical address range validation, which improves code maintainability by eliminating duplicate GPA-to-HVA translation and permission checking across multiple SBI handlers.

#!/bin/bash
# Look for the implementation of this function
rg "kvm_vcpu_validate_gpa_range" --type c
arch/riscv/kvm/vcpu_pmu.c (2)

63-66: RAW _V2 routed to PERF_TYPE_RAW looks good

Mapping the new SBI_PMU_EVENT_TYPE_RAW_V2 to PERF_TYPE_RAW is consistent with RAW v1 handling and perf-core expectations.


132-134: Correct mask for RAW _V2

Using the dedicated RISCV_PMU_RAW_EVENT_V2_MASK guarantees we do not truncate the extended 56-bit field.

Comment on lines +433 to 436
	if (kvm_vcpu_validate_gpa_range(vcpu, saddr, PAGE_SIZE, true)) {
		sbiret = SBI_ERR_INVALID_ADDRESS;
		goto out;
	}

🛠️ Refactor suggestion

Potential memory leak on repeated snapshot-set calls

kvpmu->sdata is (re)allocated unconditionally, but the old buffer is only freed when the caller passes SBI_SHMEM_DISABLE.
A guest can therefore leak memory by calling SNAPSHOT_SET_SHMEM twice with different addresses.

Diff suggestion:

-    kvpmu->sdata = kzalloc(snapshot_area_size, GFP_ATOMIC);
+    kfree(kvpmu->sdata);
+    kvpmu->sdata = kzalloc(snapshot_area_size, GFP_ATOMIC);

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In arch/riscv/kvm/vcpu_pmu.c around lines 433 to 436, the code reallocates
kvpmu->sdata without freeing the old buffer unless SBI_SHMEM_DISABLE is passed,
causing a memory leak on repeated SNAPSHOT_SET_SHMEM calls. To fix this, before
reallocating kvpmu->sdata, check if it is already allocated and free the
existing buffer to prevent leaks when the guest calls SNAPSHOT_SET_SHMEM
multiple times with different addresses.

Comment on lines +456 to +520
int kvm_riscv_vcpu_pmu_event_info(struct kvm_vcpu *vcpu, unsigned long saddr_low,
				  unsigned long saddr_high, unsigned long num_events,
				  unsigned long flags, struct kvm_vcpu_sbi_return *retdata)
{
	struct riscv_pmu_event_info *einfo;
	int shmem_size = num_events * sizeof(*einfo);
	gpa_t shmem;
	u32 eidx, etype;
	u64 econfig;
	int ret;

	if (flags != 0 || (saddr_low & (SZ_16 - 1))) {
		ret = SBI_ERR_INVALID_PARAM;
		goto out;
	}

	shmem = saddr_low;
	if (saddr_high != 0) {
		if (IS_ENABLED(CONFIG_32BIT)) {
			shmem |= ((gpa_t)saddr_high << 32);
		} else {
			ret = SBI_ERR_INVALID_ADDRESS;
			goto out;
		}
	}

	if (kvm_vcpu_validate_gpa_range(vcpu, shmem, shmem_size, true)) {
		ret = SBI_ERR_INVALID_ADDRESS;
		goto out;
	}

	einfo = kzalloc(shmem_size, GFP_KERNEL);
	if (!einfo)
		return -ENOMEM;

	ret = kvm_vcpu_read_guest(vcpu, shmem, einfo, shmem_size);
	if (ret) {
		ret = SBI_ERR_FAILURE;
		goto free_mem;
	}

	for (int i = 0; i < num_events; i++) {
		eidx = einfo[i].event_idx;
		etype = kvm_pmu_get_perf_event_type(eidx);
		econfig = kvm_pmu_get_perf_event_config(eidx, einfo[i].event_data);
		ret = riscv_pmu_get_event_info(etype, econfig, NULL);
		if (ret > 0)
			einfo[i].output = 1;
		else
			einfo[i].output = 0;
	}

	kvm_vcpu_write_guest(vcpu, shmem, einfo, shmem_size);
	if (ret) {
		ret = SBI_ERR_FAILURE;
		goto free_mem;
	}

free_mem:
	kfree(einfo);
out:
	retdata->err_val = ret;

	return 0;
}

⚠️ Potential issue

Uninitialised ret, ignored error paths, and GFP context issues in kvm_riscv_vcpu_pmu_event_info

Several correctness problems can surface here:

  1. ret is not initialised on the success path. If num_events == 0, ret remains uninitialised and we propagate stack garbage to the guest via retdata->err_val.
  2. Return value of kvm_vcpu_write_guest() is ignored; failures after populating einfo are silently lost.
  3. ret is reused for two unrelated purposes (value from riscv_pmu_get_event_info() and from the final HVA write) which can mask an earlier guest-write failure.
  4. kzalloc(shmem_size, GFP_KERNEL) may sleep inside the vcpu exit path; the snapshot helper above correctly uses GFP_ATOMIC.
  5. No sanity cap on num_events; an attacker can request an unbounded allocation. RISCV_KVM_MAX_COUNTERS or a PAGE-based limit should be applied.
  6. No overflow check: num_events * sizeof(*einfo) can overflow the int shmem_size.

Proposed minimal fix:

@@
-    struct riscv_pmu_event_info *einfo;
-    int shmem_size = num_events * sizeof(*einfo);
+    struct riscv_pmu_event_info *einfo;
+    size_t shmem_size;
+    int ret = 0;          /* default: success */
+
+    if (num_events == 0 || num_events > RISCV_KVM_MAX_COUNTERS)
+        return retdata->err_val = SBI_ERR_INVALID_PARAM, 0;
+
+    if (check_mul_overflow(num_events, sizeof(*einfo), &shmem_size))
+        return retdata->err_val = SBI_ERR_INVALID_PARAM, 0;
@@
-    einfo = kzalloc(shmem_size, GFP_KERNEL);
+    einfo = kzalloc(shmem_size, GFP_ATOMIC);
@@
-        ret = riscv_pmu_get_event_info(etype, econfig, NULL);
-        if (ret > 0)
+        int rc = riscv_pmu_get_event_info(etype, econfig, NULL);
+        if (rc > 0)
             einfo[i].output = 1;
         else
             einfo[i].output = 0;
@@
-    kvm_vcpu_write_guest(vcpu, shmem, einfo, shmem_size);
-    if (ret) {
-        ret = SBI_ERR_FAILURE;
+    if (kvm_vcpu_write_guest(vcpu, shmem, einfo, shmem_size)) {
+        ret = SBI_ERR_FAILURE;
         goto free_mem;
     }

(The full patch will need appropriate headers for check_mul_overflow().)

These issues can cause guest-visible mis-reports and, in the worst case, host memory exhaustion.

🤖 Prompt for AI Agents
In arch/riscv/kvm/vcpu_pmu.c around lines 456 to 520, fix uninitialized ret by
initializing it to 0 at the start to avoid propagating garbage on success; add a
sanity check to cap num_events using RISCV_KVM_MAX_COUNTERS or a PAGE-based
limit and use check_mul_overflow() to prevent integer overflow in shmem_size
calculation; replace kzalloc with GFP_ATOMIC to avoid sleeping in vcpu exit
path; check and handle the return value of kvm_vcpu_write_guest to detect and
propagate errors properly; use separate variables for return values from
riscv_pmu_get_event_info and kvm_vcpu_write_guest to avoid masking errors;
ensure all error paths set ret appropriately before assigning retdata->err_val.
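Taking both reviews together, a consolidated sketch of the handler could look as follows. Hedged assumptions: RISCV_KVM_MAX_COUNTERS as the sanity cap, check_mul_overflow() from <linux/overflow.h>, and kvmalloc_array() with GFP_KERNEL (the two reviews disagree on GFP flags; kvmalloc cannot be used atomically, so this follows the first suggestion). This is a sketch, not the patch author's code:

int kvm_riscv_vcpu_pmu_event_info(struct kvm_vcpu *vcpu, unsigned long saddr_low,
				  unsigned long saddr_high, unsigned long num_events,
				  unsigned long flags, struct kvm_vcpu_sbi_return *retdata)
{
	struct riscv_pmu_event_info *einfo = NULL;
	size_t shmem_size;
	gpa_t shmem;
	int ret = 0;	/* default: success, so an early goto out is well-defined */

	if (flags != 0 || (saddr_low & (SZ_16 - 1)) ||
	    !num_events || num_events > RISCV_KVM_MAX_COUNTERS ||
	    check_mul_overflow(num_events, sizeof(*einfo), &shmem_size)) {
		ret = SBI_ERR_INVALID_PARAM;
		goto out;
	}

	shmem = saddr_low;
	if (saddr_high != 0) {
		if (IS_ENABLED(CONFIG_32BIT)) {
			shmem |= ((gpa_t)saddr_high << 32);
		} else {
			ret = SBI_ERR_INVALID_ADDRESS;
			goto out;
		}
	}

	if (kvm_vcpu_validate_gpa_range(vcpu, shmem, shmem_size, true)) {
		ret = SBI_ERR_INVALID_ADDRESS;
		goto out;
	}

	einfo = kvmalloc_array(num_events, sizeof(*einfo), GFP_KERNEL | __GFP_ZERO);
	if (!einfo)
		return -ENOMEM;

	if (kvm_vcpu_read_guest(vcpu, shmem, einfo, shmem_size)) {
		ret = SBI_ERR_FAILURE;
		goto free_mem;
	}

	for (unsigned long i = 0; i < num_events; i++) {
		u32 eidx = einfo[i].event_idx;
		u32 etype = kvm_pmu_get_perf_event_type(eidx);
		u64 econfig = kvm_pmu_get_perf_event_config(eidx, einfo[i].event_data);
		/* Dedicated variable so per-event probing never clobbers ret. */
		int rc = riscv_pmu_get_event_info(etype, econfig, NULL);

		einfo[i].output = rc > 0 ? 1 : 0;
	}

	if (kvm_vcpu_write_guest(vcpu, shmem, einfo, shmem_size)) {
		ret = SBI_ERR_FAILURE;
		goto free_mem;
	}

free_mem:
	kvfree(einfo);
out:
	retdata->err_val = ret;

	return 0;
}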

Comment on lines +162 to +169
struct riscv_pmu_event_info {
	u32 event_idx;
	u32 output;
	u64 event_data;
};

#define RISCV_PMU_EVENT_INFO_OUTPUT_MASK 0x01


⚠️ Potential issue

Guarantee ABI-stable layout for struct riscv_pmu_event_info.

This structure is exchanged with firmware, so its in-memory layout must be identical on 32- and 64-bit kernels and across compilers.
The two u32 members fill the first 8 bytes, so the u64 lands at offset 8 and the structure is 16 bytes on any ABI that 8-byte-aligns u64; on an ABI that only 4-byte-aligns u64 the layout would differ, and nothing in the declaration pins it down. Any mismatch will corrupt the buffer consumed by SBI.

Please make the layout explicit:

 struct riscv_pmu_event_info {
-	u32 event_idx;
-	u32 output;
-	u64 event_data;
+	u32 event_idx;
+	u32 output;
+	u64 event_data;
 } __packed;               /* or __aligned(4) */
+
+static_assert(sizeof(struct riscv_pmu_event_info) == 16, "SBI ABI break!");

Alternatively, use BUILD_BUG_ON() in a C file if you would rather avoid <linux/build_bug.h> in the UAPI header.

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In arch/riscv/include/asm/sbi.h around lines 162 to 169, the struct
riscv_pmu_event_info has an ABI stability issue due to potential padding
differences between 32- and 64-bit kernels. To fix this, explicitly control the
struct layout by adding explicit padding fields or using compiler attributes
like __packed__ to ensure the struct size and alignment are consistent across
architectures. Alternatively, add a BUILD_BUG_ON() check in a C file to verify
the struct size at compile time and prevent ABI mismatches.
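As a sketch of the BUILD_BUG_ON() route (the dedicated initcall is purely illustrative; an existing init function in a C file that includes <asm/sbi.h> would be the natural home):

#include <linux/build_bug.h>
#include <linux/init.h>
#include <asm/sbi.h>

static int __init riscv_pmu_event_info_abi_check(void)
{
	/* Fails the build if padding changes the firmware-facing layout. */
	BUILD_BUG_ON(sizeof(struct riscv_pmu_event_info) != 16);
	return 0;
}
early_initcall(riscv_pmu_event_info_abi_check);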

Comment on lines +340 to +353
		if (!(event_info_shmem[i].output & RISCV_PMU_EVENT_INFO_OUTPUT_MASK))
			pmu_hw_event_map[i].event_idx = -ENOENT;
	}

	count = ARRAY_SIZE(pmu_hw_event_map);

	for (i = 0; i < ARRAY_SIZE(pmu_cache_event_map); i++) {
		for (j = 0; j < ARRAY_SIZE(pmu_cache_event_map[i]); j++) {
			for (k = 0; k < ARRAY_SIZE(pmu_cache_event_map[i][j]); k++) {
				if (!(event_info_shmem[count].output &
				      RISCV_PMU_EVENT_INFO_OUTPUT_MASK))
					pmu_cache_event_map[i][j][k].event_idx = -ENOENT;
				count++;
			}

⚠️ Potential issue

Storing negative errno in an unsigned field is fragile.

event_idx is a u32, yet the code writes -ENOENT into it.
Later you read the field into an int and rely on sign-extension to recover -ENOENT. While that currently works, it is implementation-defined and breaks the moment event_idx becomes u64.

-			pmu_hw_event_map[i].event_idx = -ENOENT;
+			pmu_hw_event_map[i].event_idx = (u32)SBI_PMU_EVENT_IDX_INVALID;

(or change event_idx to s32 everywhere).

Also consider defining a dedicated constant (e.g. SBI_PMU_EVENT_IDX_INVALID) to make the sentinel value self-describing.

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In drivers/perf/riscv_pmu_sbi.c around lines 340 to 353, the code assigns
-ENOENT to the unsigned u32 field event_idx, which is fragile and relies on
implementation-defined sign extension. To fix this, change the type of event_idx
from u32 to s32 consistently throughout the codebase or alternatively define a
dedicated constant like SBI_PMU_EVENT_IDX_INVALID with an appropriate unsigned
sentinel value to represent invalid entries. Update all assignments and checks
of event_idx to use this new constant or signed type to ensure clarity and
correctness.
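A sketch of the sentinel route (the constant's name and value are suggestions, not part of the PR; the cast keeps today's sign-extension round trip through int working):

#define SBI_PMU_EVENT_IDX_INVALID	((u32)-ENOENT)

	if (!(event_info_shmem[i].output & RISCV_PMU_EVENT_INFO_OUTPUT_MASK))
		pmu_hw_event_map[i].event_idx = SBI_PMU_EVENT_IDX_INVALID;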
