Matthew Auld [Wed, 28 Aug 2024 09:22:58 +0000 (10:22 +0100)]
drm/xe/pat: sanity check compression and coh_mode
There is an implicit assumption in the driver that compression and
coh_1way+ are mutually exclusive. If this is ever not true then userptr
and imported dma-buf from external device will have uncleared ccs state.
Add a build bug for this so we don't forget.
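A minimal sketch of the kind of compile-time check meant here, using hypothetical parameter names (the actual macro and table layout in xe_pat.c may differ):

  #include <linux/build_bug.h>

  /* Reject, at compile time, any PAT entry that claims both compression
   * and 1-way+ coherency; the parameter names are illustrative only.
   */
  #define XE_PAT_SANITY_CHECK(comp_en, coh_mode) \
          BUILD_BUG_ON_MSG((comp_en) && (coh_mode) >= 1, \
                           "compression and coh_1way+ must be mutually exclusive")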
Matthew Auld [Wed, 28 Aug 2024 10:43:42 +0000 (11:43 +0100)]
drm/xe: prevent potential UAF in pf_provision_vf_ggtt()
The node pointer can reference already freed memory if we hit the path with
an already allocated node. We later dereference that pointer with:
xe_gt_assert(gt, !xe_ggtt_node_allocated(node));
which is a potential UAF. Fix this by not stashing the node pointer.
Since it is also a bad idea to leave config->ggtt_region pointing to a
stale pointer, set that to NULL by calling
pf_release_vf_config_ggtt() instead of pf_release_ggtt().
Some VF accessible registers (like GuC scratch registers) must be
explicitly reset during the FLR. While this is today done by the GuC
firmware, according to the design, this should be responsibility of
the PF driver, as future platforms may require more registers to be
reset. Like the GuC, the PF can access VF registers by adding a
platform-specific offset to the original register address.
drm/xe: Use xe_pm_runtime_get in xe_bo_move() if reclaim-safe.
xe_bo_move() might be called in the TTM swapout path from validation
by another TTM device. If so, we are not likely to have an RPM
reference. So iff xe_pm_runtime_get() is safe to call from reclaim,
use it instead of xe_pm_runtime_get_noresume().
Strictly this is currently needed only if handle_system_ccs is true,
but use xe_pm_runtime_get() if possible anyway to increase test
coverage.
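A sketch of the intended pattern; the reclaim-safety helper name below is an assumption for illustration, not necessarily the exact symbol:

  /* Prefer a resuming get when it is safe to do so from reclaim context. */
  if (xe_rpm_reclaim_safe(xe))
          xe_pm_runtime_get(xe);
  else
          xe_pm_runtime_get_noresume(xe);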
At the same time warn if handle_system_ccs is true and we can't
call xe_pm_runtime_get() from reclaim context. This will likely trip
if someone tries to enable SRIOV on LNL, without fixing Xe SRIOV
runtime resume / suspend.
Rodrigo Vivi [Fri, 30 Aug 2024 18:35:07 +0000 (14:35 -0400)]
drm/xe/display: Avoid encoder_suspend at runtime suspend
Fix circular locking dependency on runtime suspend.
<4> [74.952215] ======================================================
<4> [74.952217] WARNING: possible circular locking dependency detected
<4> [74.952219] 6.10.0-rc7-xe #1 Not tainted
<4> [74.952221] ------------------------------------------------------
<4> [74.952223] kworker/7:1/82 is trying to acquire lock:
<4> [74.952226] ffff888120548488 (&dev->mode_config.mutex){+.+.}-{3:3}, at: drm_modeset_lock_all+0x40/0x1e0 [drm]
<4> [74.952260]
but task is already holding lock:
<4> [74.952262] ffffffffa0ae59c0 (xe_pm_runtime_lockdep_map){+.+.}-{0:0}, at: xe_pm_runtime_suspend+0x2f/0x340 [xe]
<4> [74.952322]
which lock already depends on the new lock.
The commit 'b1d90a86 ("drm/xe: Use the encoder suspend helper also used
by the i915 driver")' didn't do anything wrong. It actually fixed a
critical bug, because the encoder_suspend was never getting actually
called because it was returning if (has_display(xe)) instead of
if (!has_display(xe)). However, this ended up introducing the encoder
suspend calls in the runtime routines as well, causing the circular
locking dependency.
drm/xe: Remove redundant [drm] tag from xe_assert() message
Since commit 178c0a33c421 ("drm/print: Add generic drm dev printk
function") the output from drm_WARN() includes previously missing
the [drm] tag, so now xe_assert() is printing it twice:
Michal Wajdeczko [Fri, 30 Aug 2024 13:20:59 +0000 (15:20 +0200)]
drm/xe/pf: Add thresholds to the VF KLV config
We push threshold KLVs to the GuC immediately during
threshold provisioning, but those configs will be lost during a
GT reset. Include the threshold KLVs while encoding the full VF config
buffer to make sure the GuC receives all of the config KLVs.
Matt Roper [Thu, 29 Aug 2024 22:06:22 +0000 (15:06 -0700)]
drm/xe/hwmon: Treat hwmon as a per-device concept
There's only one instance of hwmon per device, and MMIO access to it is
always done through the root tile. The code has been passing around a
pointer to the root tile's primary GT, which is confusing since this
isn't really a GT-level concept. Replace that pointer with an xe_device
pointer and use xe_root_mmio_gt(xe) to get a pointer when we need to do
register MMIO. This makes things easier to follow, and also cleans up
the code in preparation for a much larger MMIO register access overhaul
that's coming soon.
Matt Roper [Thu, 29 Aug 2024 22:06:21 +0000 (15:06 -0700)]
drm/xe/pcode: Treat pcode as per-tile rather than per-GT
There's only one instance of the pcode per tile, and for GT-related
accesses both the primary and media GT share the same register
interface. Since Xe was using per-GT locking, the pcode mutex wasn't
actually protecting everything that it should since concurrent accesses
related to a tile's primary GT and media GT were possible.
Michal Wajdeczko [Wed, 28 Aug 2024 21:08:09 +0000 (23:08 +0200)]
drm/xe/pf: Improve VF control
Our initial VF control implementation was focused on providing
very minimal support for the VF_STATE_NOTIFY events, just enough to
meet GuC requirements, without tracking VF state or performing any
expected actions (like cleanup in case of the FLR notification).
Try to improve this by defining a set of VF state machines, each
responsible for processing one activity (PAUSE, RESUME, STOP or
FLR). All required steps defined by the VF state machine are then
executed by the PF worker from the dedicated workqueue.
Any external requests or notifications simply try to transition
between the states to trigger a work and then wait for that work
to finish. Some predefined default timeouts are used to avoid
changing existing API calls, but it should be easy to extend the
control API to also accept specific timeout values.
Michal Wajdeczko [Wed, 28 Aug 2024 21:08:08 +0000 (23:08 +0200)]
drm/xe/pf: Drop GuC notifications for non-existing VF
It is unlikely that the GuC will ever send a G2H notification with an
invalid VFID, and it is currently harmless if that actually happens.
But in upcoming patches we will start using that VFID as an index,
and we must be sure it is valid to avoid a crash due to buggy
firmware or a corrupted G2H message.
Michal Wajdeczko [Wed, 28 Aug 2024 21:08:06 +0000 (23:08 +0200)]
drm/xe/pf: Add function to sanitize VF resources
On current platforms it is a PF driver responsibility to clear
some of the VF's resources during a VF FLR. Add a simple function
that clears configured VF resources (GGTT, LMEM). We will
start using this function soon.
drm/xe/gsc: Wedge the device if the GSCCS reset fails
Due to the special handling of the GSCCS in HW, we can't escalate to GT
reset when we receive the reset failure interrupt; the specs indicate
that we should trigger an FLR instead, but we do not have support for
that at the moment, so the HW will stay permanently in a broken state.
We should therefore mark the device as wedged, the same as if the GT
reset had failed.
This is useful for debug, in case something goes wrong with the GSC. The
info includes the version information and the current value of the HECI1
status registers.
drm/xe/gsc: Track the platform in the compatibility version
The GSC compatibility version number is reset for each new platform. To
indicate this, the version includes a number that identifies the
platform (102 = MTL, 104 = LNL); this matches what happens for the
release version, where the major number also identifies a platform.
To make it clearer in our logs that the compatibility version is
specific to the platform, it is useful to include this platform number.
However, given that our binary names already include the platform, it is
not necessary to add this extra number there.
drm/xe/gsc: Fix FW status if the firmware is already loaded
We set the FW status to "TRANSFERRED" after the load completes and to
"RUNNING"once we're done with proxy init, so do the same if we're trying
to re-load the FW and it is already loaded.
Note that there is no difference in driver behavior between the 2
states, but it's useful to be accurate when we dump the status for
debug.
drm/xe/gsc: Do not attempt to load the GSC multiple times
The GSC HW is only reset by driver FLR or D3cold entry. We don't support
the former at runtime, while the latter is only supported on DGFX, for
which we don't support GSC. Therefore, if GSC failed to load previously
there is no need to try again because the HW is stuck in the error state.
An assert has been added so that if we ever add DGFX support we'll know
we need to handle the D3 case.
v2: use "< 0" instead of "!= 0" in the FW state error check (Julia).
Karthik Poosa [Tue, 27 Aug 2024 15:53:01 +0000 (21:23 +0530)]
drm/xe/hwmon: Fix WRITE_I1 param from u32 to u16
WRITE_I1 sub-command of the POWER_SETUP pcode command accepts a u16
parameter instead of u32. This change prevents potential illegal
sub-command errors.
v2: Mask uval instead of changing the prototype. (Badal)
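Illustrative-only sketch of the v2 approach; the mask macro is an assumption:

  /* The POWER_SETUP WRITE_I1 sub-command only consumes 16 bits, so keep
   * the u32 prototype but forward only the low bits.
   */
  uval &= POWER_SETUP_I1_DATA_MASK;   /* assumed to be GENMASK(15, 0) */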
The Battlemage platform is sufficiently tested and found stable. CI is also
pretty stable. Remove the force_probe requirement to enable the platform
support by default.
Make sure that ggtt_node_remove() is invoked only if both node and ggtt
are not null. Move the null checks to the caller function
xe_ggtt_node_remove().
Thomas Hellström [Mon, 26 Aug 2024 14:34:50 +0000 (16:34 +0200)]
drm/xe: Use separate rpm lockdep map for non-d3cold-capable devices
For non-d3cold-capable devices we'd like to be able to wake up the
device from reclaim. In particular, for Lunar Lake we'd like to be
able to blit CCS metadata to system memory at shrink time; at least from
kswapd, where it's reasonably OK to wait for rpm resume and a
preceding rpm suspend.
Therefore use a separate lockdep map for such devices and prime it
reclaim-tainted.
v2:
- Rename lockmap acquire- and release functions. (Rodrigo Vivi).
- Reinstate the old xe_pm_runtime_lockdep_prime() function and
rename it to xe_rpm_might_enter_cb(). (Matthew Auld).
- Introduce a separate xe_pm_runtime_lockdep_prime function
called from module init for known required locking orders.
v3:
- Actually hook up the prime function at module init.
v4:
- Rebase.
v5:
- Don't use reclaim-safe RPM with sriov.
Nirmoy Das [Wed, 28 Aug 2024 08:36:34 +0000 (10:36 +0200)]
Revert "drm/xe/lnl: Offload system clear page activity to GPU"
This optimization relied on having to clear CCS on allocations.
If there is no need to clear CCS on allocations then this would mostly
help in reducing CPU utilization.
Revert this patch for now because of:
1. Currently Xe can't do clear-on-free and is using an invalid ttm flag,
   TTM_TT_FLAG_CLEARED_ON_FREE, which could poison the global ttm pool on
   a multi-device setup.
2. For LNL, CPU:WB doesn't require clearing CCS, as such a BO will
   not be allowed to bind with a compression PTE. A subsequent patch will
   disable clearing CCS for CPU:WB BOs on LNL.
drm/xe: Support 'nomodeset' kernel command-line option
Setting 'nomodeset' on the kernel command line disables all graphics
drivers with modesetting capabilities, leaving only firmware drivers,
such as simpledrm or efifb.
Most DRM drivers automatically support 'nomodeset' via DRM's module
helper macros. In xe, which uses regular module_init(), manually call
drm_firmware_drivers_only() to test for 'nomodeset'. Do not register
the driver if set.
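A minimal sketch of that check, assuming a simplified module init path rather than xe's actual init table:

  static int __init xe_init(void)
  {
          /* Honor 'nomodeset': leave the display to firmware drivers only. */
          if (drm_firmware_drivers_only())
                  return -ENODEV;

          return pci_register_driver(&xe_pci_driver);
  }
  module_init(xe_init);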
v2:
- use xe's init table (Lucas)
- do NULL test for init/exit functions
drm/xe: Fix total initialization in xe_ggtt_print_holes()
Clang warns (or errors with CONFIG_DRM_WERROR or CONFIG_WERROR):
drivers/gpu/drm/xe/xe_ggtt.c:810:3: error: variable 'total' is uninitialized when used here [-Werror,-Wuninitialized]
810 | total += hole_size;
| ^~~~~
drivers/gpu/drm/xe/xe_ggtt.c:798:11: note: initialize the variable 'total' to silence this warning
798 | u64 total;
| ^
| = 0
1 error generated.
Move the zero initialization of total from
xe_gt_sriov_pf_config_print_available_ggtt() to xe_ggtt_print_holes() to
resolve the warning.
drm/xe/display: handle HPD polling in display runtime suspend/resume
In XE, display runtime suspend / resume routines are called only
if d3cold is allowed. This makes the driver unable to detect any
HPDs once the device goes into runtime suspend state in platforms
like LNL. Update the display runtime suspend / resume routines
to include HPD polling regardless of d3cold status.
While xe_display_pm_suspend/resume() performs steps during runtime
suspend/resume that shouldn't happen, like suspending MST, and is
missing other steps, like enabling DC9, this patchset is meant
to keep the current behavior wrt. these, leaving the corresponding
updates for a follow-up.
v2: have a separate function for display runtime s/r (Rodrigo)
v3: better streamlining of system s/r and runtime s/r calls (Imre)
Matthew Brost [Tue, 20 Aug 2024 17:29:54 +0000 (10:29 -0700)]
drm/xe: Set firmware state to loadable before registering guc_fini_hw
The registered guc_fini_hw callback calls __xe_uc_fw_status(), which is only
expected to be called after initializing the fw state. Move the firmware
state initialization before registering guc_fini_hw.
Matthew Brost [Tue, 20 Aug 2024 17:29:53 +0000 (10:29 -0700)]
drm/xe: Move ggtt_fini to devm managed
ggtt->scratch is destroyed via devm, ggtt_fini sets ggtt->scratch to
NULL, and ggtt->scratch is used in GGTT clears, so ensure ggtt->scratch is
set to NULL before the BO is destroyed by making ggtt_fini devm managed as well.
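A hedged sketch of the devm move, assuming ggtt_fini() has the void * signature devm actions require:

  /* Registered after the scratch BO's own devm release, so on teardown it
   * runs first (devm actions run in reverse order) and NULLs ggtt->scratch
   * before the BO is destroyed.
   */
  return devm_add_action_or_reset(xe->drm.dev, ggtt_fini, ggtt);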
Rodrigo Vivi [Wed, 21 Aug 2024 19:38:42 +0000 (15:38 -0400)]
drm/xe: Fix missing runtime outer protection for ggtt_remove_node
Defer the ggtt node removal to a thread if runtime_pm is not active.
The ggtt node removal can be called from multiple places, including
places where we cannot protect with outer callers and places we are
within other locks. So, try to grab the runtime reference if the
device is already active, otherwise defer the removal to a separate
thread from where we are sure we can wake the device up.
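A hedged sketch of that pattern; the helper and field names are assumptions, not the driver's exact symbols:

  if (xe_pm_runtime_get_if_active(xe)) {
          /* Device already awake: remove the node synchronously. */
          ggtt_node_remove(node);
          xe_pm_runtime_put(xe);
  } else {
          /* Defer to a worker that can safely wake the device up. */
          queue_work(ggtt->wq, &node->delayed_removal_work);
  }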
v2: - use xe wq instead of system wq (Matt and CI)
- Avoid GFP_KERNEL to be future proof since this removal can
be called from outside our drivers and we don't want to block
if atomic is needed. (Brost)
v3: amend forgot chunk declaring xe_device.
v4: Use an xe_ggtt_region to encapsulate the node and removal info,
without the need for any memory allocation at runtime.
v5: Actually fill the delayed_removal.invalidate (Brost)
v6: - Ensure that ggtt_region is not freed before work finishes (Auld)
- Own wq to ensure that the queued works are flushed before
ggtt_fini (Brost)
v7: also free ggtt_region on early !bound return (Auld)
v8: Address the null deref (CI)
v9: Based on the new xe_ggtt_node for the proper care of the lifetime
of the object.
v10: Redo the lost v5 change. (Brost)
v11: Simplify the invalidate_on_remove (Lucas)
Rodrigo Vivi [Wed, 21 Aug 2024 19:38:41 +0000 (15:38 -0400)]
drm/xe: Make xe_ggtt_node struct independent
In some rare cases, the drm_mm node cannot be removed synchronously
due to runtime PM conditions. In this situation, the node removal will
be delegated to a workqueue that will be able to wake up the device
before removing the node.
However, in this situation, the lifetime of the xe_ggtt_node cannot
be restricted to the lifetime of the parent object. So, this patch
introduces the infrastructure so the xe_ggtt_node struct can be
allocated in advance and freed when needed.
By having the ggtt backpointer, it also ensures that the init function
is always called before any attempt to insert or reserve the node
in the GGTT.
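A sketch of the intended usage, based on the helpers named in this series (exact signatures assumed):

  struct xe_ggtt_node *node;
  int err;

  node = xe_ggtt_node_init(ggtt);         /* allocated up front, ggtt backpointer set */
  if (IS_ERR(node))
          return PTR_ERR(node);

  err = xe_ggtt_node_insert(node, size, align);
  if (err) {
          xe_ggtt_node_fini(node);        /* never inserted: free immediately */
          return err;
  }
  /* ... later, removal may be deferred internally if the device is asleep ... */
  xe_ggtt_node_remove(node, false);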
v2: s/xe_ggtt_node_force_fini/xe_ggtt_node_fini and use it
internally (Brost)
v3: - Use GF_NOFS for node allocation (CI)
- Avoid ggtt argument, now that we have it inside the node (Lucas)
- Fix some missed fini cases (CI)
v4: - Fix SRIOV critical case where config->ggtt_region was
lost (Michal)
- Avoid ggtt argument also on removal (missed case on v3) (Michal)
- Remove useless checks (Michal)
- Return 0 instead of negative errno on a u32 addr. (Michal)
- s/xe_ggtt_assign/xe_ggtt_node_assign for coherence, while we
are touching it (Michal)
v5: - Fix VFs' ggtt_balloon
The xe_ggtt component uses drm_mm to manage the GGTT.
The drm_mm_node is just a node inside drm_mm, but in Xe we use that
only in the GGTT context. So, this patch encapsulates the drm_mm_node
into a new xe_ggtt-owned struct.
This is the first step towards limiting all the drm_mm access
through xe_ggtt. The ultimate goal is to have a better control of
the node insertion and removal, so the removal can be delegated
to a delayed workqueue.
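A minimal sketch of the encapsulation; the real struct in xe_ggtt_types.h carries more state for the deferred removal:

  struct xe_ggtt_node {
          struct xe_ggtt *ggtt;           /* backpointer used on removal */
          struct drm_mm_node base;        /* the wrapped drm_mm node */
          bool invalidate_on_remove;
  };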
Rodrigo Vivi [Wed, 21 Aug 2024 19:38:31 +0000 (15:38 -0400)]
drm/xe: Removed unused xe_ggtt_printk
Apparently this was only useful when enabling ggtt support
for the very first time and never used again.
It is also not useful now that we have the ggtt_dump available
through debugfs.
Matthew Auld [Wed, 21 Aug 2024 17:19:18 +0000 (18:19 +0100)]
drm/xe: fixup xe_alloc_pf_queue
kzalloc expects a number of bytes, therefore we should convert the number
of dwords into bytes; otherwise we are likely just accessing beyond the
array, causing all kinds of carnage. Also fix up the error handling while
we are here.
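The essence of the fix, sketched with assumed local names:

  /* num_dw counts u32 entries, so size the allocation in bytes. */
  pf_queue->data = kcalloc(num_dw, sizeof(u32), GFP_KERNEL);
  if (!pf_queue->data)
          return -ENOMEM;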
Matthew Brost [Thu, 15 Aug 2024 19:35:22 +0000 (12:35 -0700)]
drm/xe: Drop HW fence pointer to HW fence ctx
The HW fence ctx objects are not ref counted but rather tied to the life of
an LRC object. HW fences reference the HW fence ctx, and HW fences can
outlive LRCs, thus resulting in UAF. Drop the HW fence pointer to the HW
fence ctx and instead just store what is needed directly in the HW fence.
v2:
- Fix typo in commit (Ashutosh)
- Use snprintf (Ashutosh)
Stuart Summers [Sat, 17 Aug 2024 02:47:32 +0000 (02:47 +0000)]
drm/xe/guc: Bump the G2H queue size to account for page faults
With the increase in the size of the recoverable page fault
queue, we want to ensure the initial messages from GuC in
the G2H buffer have space while we transfer those out to the
actual pf_queue. Bump the G2H queue size to account for this
increase in the pf_queue size.
Stuart Summers [Sat, 17 Aug 2024 02:47:31 +0000 (02:47 +0000)]
drm/xe: Use topology to determine page fault queue size
Currently the page fault queue size is hard coded. However
the hardware supports faulting for each EU and each CS.
For some applications running on hardware with a large
number of EUs and CSs, this can result in an overflow of
the page fault queue.
Add a small calculation to determine the page fault queue
size based on the number of EUs and CSs in the platform as
determined by fuses.
Nirmoy Das [Fri, 16 Aug 2024 13:51:54 +0000 (15:51 +0200)]
drm/xe/lnl: Offload system clear page activity to GPU
On LNL, because of flat CCS, the driver creates migration jobs to clear
CCS metadata. Extend that to also clear system pages using the GPU.
Inform TTM to allocate pages without __GFP_ZERO, to avoid double page
clearing, by clearing out the TTM_TT_FLAG_ZERO_ALLOC flag, and set
TTM_TT_FLAG_CLEARED_ON_FREE while freeing to skip the ttm pool's clear
on free, as Xe now takes care of clearing pages. If a BO is in system
placement, such as a BO created with DRM_XE_GEM_CREATE_FLAG_DEFER_BACKING,
and there is a CPU map, then the GPU clear will be avoided for such a BO,
as there is no dma mapping for it at that moment to create migration
jobs.
Tested this patch with api_overhead_benchmark_l0 from
https://github.com/intel/compute-benchmarks
Without the patch:
api_overhead_benchmark_l0 --testFilter=UsmMemoryAllocation:
UsmMemoryAllocation(api=l0 type=Host size=4KB) 84.206 us
UsmMemoryAllocation(api=l0 type=Host size=1GB) 105775.56 us
Perf tool top 5 entries:
71.44% api_overhead_be [kernel.kallsyms] [k] clear_page_erms
6.34% api_overhead_be [kernel.kallsyms] [k] __pageblock_pfn_to_page
2.24% api_overhead_be [kernel.kallsyms] [k] cpa_flush
2.15% api_overhead_be [kernel.kallsyms] [k] pages_are_mergeable
1.94% api_overhead_be [kernel.kallsyms] [k] find_next_iomem_res
With the patch:
api_overhead_benchmark_l0 --testFilter=UsmMemoryAllocation:
UsmMemoryAllocation(api=l0 type=Host size=4KB) 79.439 us
UsmMemoryAllocation(api=l0 type=Host size=1GB) 98677.75 us
Perf tool top 5 entries:
11.16% api_overhead_be [kernel.kallsyms] [k] __pageblock_pfn_to_page
7.85% api_overhead_be [kernel.kallsyms] [k] cpa_flush
7.59% api_overhead_be [kernel.kallsyms] [k] find_next_iomem_res
7.24% api_overhead_be [kernel.kallsyms] [k] pages_are_mergeable
5.53% api_overhead_be [kernel.kallsyms] [k] lookup_address_in_pgd_attr
Without this patch clear_page_erms() dominates execution time which is
also not pipelined with migration jobs. With this patch page clearing
will get pipelined with migration job and will free CPU for more work.
v2: Handle regression on dgfx (Himal)
Update commit message as no ttm API changes are needed.
v3: Fix Kunit test.
v4: Handle data leak on cpu mmap (Thomas)
v5: s/gpu_page_clear/gpu_page_clear_sys and move setting
it to xe_ttm_sys_mgr_init() and other nits (Matt Auld)
v6: Disable it when init_on_alloc and/or init_on_free is active (Matt)
Use compute-benchmarks, as the reporter used it to report this
allocation latency issue and it is also a more proper test application than mine.
In v5, the test showed a significant reduction in alloc latency, but
that is not the case any more; I think this was mostly because the
previous test was done on an IFWI which had low memory bandwidth from the CPU.
drm/xe/display: Make display suspend/resume work on discrete
We should unpin before evicting all memory, and repin after GT resume.
This way, we preserve the contents of the framebuffers, and won't hang
on resume due to the migration engine not being restored yet.
drm/xe/display: Match i915 driver suspend/resume sequences better
Suspend fbdev sooner, and disable user access before suspending to
prevent some races. I've noticed this when comparing xe suspend to
i915's.
Matches the following commits from i915:
24b412b1bfeb ("drm/i915: Disable intel HPD poll after DRM poll init/enable")
1ef28d86bea9 ("drm/i915: Suspend the framebuffer console earlier during system suspend")
bd738d859e71 ("drm/i915: Prevent modesets during driver init/shutdown")
Thanks to Imre for pointing me to those commits.
Driver shutdown is currently missing, but I have some idea how to
implement it next.
Matthew Auld [Wed, 14 Aug 2024 11:01:30 +0000 (12:01 +0100)]
drm/xe: prevent UAF around preempt fence
The fence lock is part of the queue, therefore in the current design
anything locking the fence should then also hold a ref to the queue to
prevent the queue from being freed.
However, currently it looks like we signal the fence and then drop the
queue ref, but if something is waiting on the fence, the waiter is
kicked to wake up at some later point, where upon waking up it first
grabs the lock before checking the fence state. But if we have already
dropped the queue ref, then the lock might already be freed as part of
the queue, leading to a UAF.
To prevent this, move the fence lock into the fence itself so we don't
run into lifetime issues. Alternative might be to have device level
lock, or only release the queue in the fence release callback, however
that might require pushing to another worker to avoid locking issues.
Those counters were used to keep track of the number of VMs in fault mode
and in non-fault mode, to determine if the whole device was in fault mode
or not. This is no longer needed, so remove those variables and their
usages.
Francois Dugast [Fri, 9 Aug 2024 15:51:35 +0000 (17:51 +0200)]
drm/xe/vm: Remove restriction that all VMs must be faulting if one is
With this restriction, all VMs on the device must be faulting VMs if there
is already one faulting VM, in which case the device is considered in
fault mode. This prevents for example an application from running 3D jobs
for the compositor while submitting a SVM compute job on the same device.
Now that mutual exclusion of faulting LR jobs and dma fence jobs is
ensured on the hw engine group, remove this restriction to allow running
faulting and non-faulting VMs on the same device.
Francois Dugast [Fri, 9 Aug 2024 15:51:34 +0000 (17:51 +0200)]
drm/xe/exec: Switch hw engine group execution mode upon job submission
If the job about to be submitted is a dma-fence job, update the current
execution mode of the hw engine group. This triggers an immediate suspend
of the exec queues running faulting long-running jobs.
If the job about to be submitted is a long-running job, kick a new worker
used to resume the exec queues running faulting long-running jobs once
the dma-fence jobs have completed.
v2: Kick the resume worker from exec IOCTL, switch to unordered workqueue,
destroy it after use (Matt Brost)
v3: Do not resume if no exec queue was suspended (Matt Brost)
v4: Squash commits (Matt Brost)
v5: Do not kick the worker when xe_vm_in_preempt_fence_mode (Matt Brost)
Francois Dugast [Fri, 9 Aug 2024 15:51:33 +0000 (17:51 +0200)]
drm/xe/hw_engine_group: Ensure safe transition between execution modes
Provide a way to safely transition execution modes of the hw engine group
ahead of the actual execution. When necessary, either wait for running
jobs to complete or preempt them, thus ensuring mutual exclusion between
execution modes.
Unlike a mutex, the rw_semaphore used in this context allows multiple
submissions in the same mode.
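A hedged sketch of how the rw_semaphore is used here; the field names are assumptions:

  /* Concurrent submissions in the current mode take the semaphore for read. */
  down_read(&group->mode_sem);
  /* ... submit a job compatible with group->cur_mode ... */
  up_read(&group->mode_sem);

  /* A mode transition takes it for write, excluding all submissions while
   * running jobs are waited for or preempted.
   */
  down_write(&group->mode_sem);
  /* ... wait for or preempt running jobs, then update group->cur_mode ... */
  up_write(&group->mode_sem);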
v2: Use lockdep_assert_held_write, add annotations (Matt Brost)
Francois Dugast [Fri, 9 Aug 2024 15:51:31 +0000 (17:51 +0200)]
drm/xe/exec_queue: Prepare last fence for hw engine group resume context
Ensure we can safely take a ref of the exec queue's last fence from the
context of resuming jobs from the hw engine group. The locking requirements
differ from the general case, hence the introduction of this new function.
v2: Add kernel doc, rework the code to prevent code duplication
Francois Dugast [Fri, 9 Aug 2024 15:51:30 +0000 (17:51 +0200)]
drm/xe/exec_queue: Remove duplicated code
This code section is the same as the body of
xe_exec_queue_last_fence_put_unlocked() so call the function instead and
remove duplicated code to make maintenance easier.
Add helpers to safely add and delete the exec queues attached to a hw
engine group, and make use of them at the time of creation and destruction of
the exec queues. Keeping track of them is required to control the
execution mode of the hw engine group.
v2: Improve error handling and robustness, suspend exec queues created in
fault mode if group in dma-fence mode, init queue link (Matt Brost)
v3: Delete queue from hw engine group when it is destroyed by the user,
also clean up at the time of closing the file in case the user did
not destroy the queue
v4: Use correct list when checking if empty, do not add the queue if VM
is in xe_vm_in_preempt_fence_mode (Matt Brost)
v5: Remove unrelated newline, add checks and asserts for group, unwind on
suspend failure (Matt Brost)
Francois Dugast [Fri, 9 Aug 2024 15:51:27 +0000 (17:51 +0200)]
drm/xe/guc_submit: Make suspend_wait interruptible
Rely on wait_event_interruptible_timeout() to put the process to sleep
with TASK_INTERRUPTIBLE. This allows using the function in an interruptible
context.
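A sketch of the resulting pattern; the wait queue and condition are placeholders, while the return-value handling follows wait_event_interruptible_timeout() semantics:

  long ret;

  ret = wait_event_interruptible_timeout(q->guc->suspend_wait,
                                         exec_queue_suspended(q),
                                         timeout_jiffies);
  if (ret < 0)
          return ret;             /* interrupted by a signal (-ERESTARTSYS) */
  if (!ret)
          return -ETIMEDOUT;      /* condition not met within the timeout */
  return 0;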
v2: Propagate error on wait_event_interruptible_timeout (Matt Brost)
A xe_hw_engine_group is a group of hw engines. Two hw engines belong to
the same xe_hw_engine_group if one hw engine cannot make progress while
the other is stuck on a page fault.
Typically, hw engines of the same group share some resources such as EUs,
but this really depends on the hardware configuration of the platforms.
The simple engines partitioning proposed here might be too conservative
but is intended to work for existing platforms. It can be optimized later
if more sets of independent engines are identified.
The hw engine groups are intended to be used in the context of faulting
long-running jobs submissions.
v2: Move to own files, improve error handling (Matt Brost)
Matthew Brost [Fri, 16 Aug 2024 03:40:33 +0000 (20:40 -0700)]
drm/xe: Use reserved copy engine for user binds on faulting devices
User binds map to engines which can fault, and faults depend on user bind
completion, thus we can deadlock. Avoid this by using a reserved copy
engine for user binds on faulting devices.
While we are here, normalize bind queue creation with a helper.
v2:
- Pass in extensions to bind queue creation (CI)
v3:
- s/resevered/reserved (Lucas)
- Fix NULL hwe check (Jonathan)
Matt Roper [Thu, 15 Aug 2024 17:26:04 +0000 (10:26 -0700)]
drm/xe/mcr: Try to derive dss_per_grp from hwconfig attributes
When steering MCR register ranges of type "DSS," the group_id and
instance_id values are calculated by dividing the DSS pool according to
the size of a gslice or cslice, depending on the platform. These values
haven't changed much on past platforms, so we've been able to hardcode
the proper divisor so far. However the layout may not be so fixed on
future platforms so the proper, future-proof way to determine this is by
using some of the attributes from the GuC's hwconfig table. The
hwconfig has two attributes reflecting the architectural maximum slice
and subslice counts (i.e., before any fusing is considered) that can be
used for the purposes of calculating MCR steering targets.
If the hwconfig is lacking the necessary values (which should only be
possible on older platforms before these attributes were added), we can
still fall back to the old hardcoded values. Going forward the hwconfig
is expected to always provide the information we need on newer
platforms, and any failure to do so will be considered a bug in the
firmware that will prevent us from switching to the buggy firmware
release.
It's worth noting that over time GuC's hwconfig has provided a couple
different keys with similar-sounding descriptions. For our purposes
here, we only trust the newer key "70" which has supplanted the
similarly-named key "2" that existed on older platforms.
Matt Roper [Thu, 15 Aug 2024 17:26:03 +0000 (10:26 -0700)]
drm/xe: Add debugfs to dump GuC's hwconfig
Although the query uapi is the official way to get at the GuC's hwconfig
table contents, it's still useful to have a quick debugfs interface to
dump the table in a human-readable format while debugging the driver.
Lucas De Marchi [Fri, 16 Aug 2024 17:05:54 +0000 (10:05 -0700)]
Merge drm/drm-next into drm-xe-next
Get drm-xe-next on v6.11-rc2 and synchronized with drm-intel-next for
the display side. This resolves the current conflict for the
enable_display module parameter and allows further pending refactors.
Drmm actions are not the right way to clean up BOs; we should use
devm instead. However, we can instead just allocate the objects
using the managed_bo function, which will internally register the
correct cleanup call and therefore allows us to simplify the code.
While at it, switch to drmm_kzalloc for the GSC proxy allocation to
further simplify the cleanup.
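A hedged sketch of the managed-BO allocation route; the helper name and flags are assumptions taken from xe_bo.h, not necessarily the exact call used here:

  /* Allocating via the managed helper registers the matching cleanup
   * internally, so no explicit devm/drmm action is needed.
   */
  bo = xe_managed_bo_create_pin_map(xe, tile, size,
                                    XE_BO_FLAG_SYSTEM | XE_BO_FLAG_GGTT);
  if (IS_ERR(bo))
          return PTR_ERR(bo);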
Matthew Brost [Fri, 9 Aug 2024 23:28:30 +0000 (16:28 -0700)]
drm/xe: Fix tile fini sequence
Only set tile->mmio.regs to NULL if not the root tile in tile_fini. The
root tile's mmio regs are set up earlier in MMIO init, thus they should be
set to NULL in mmio_fini.