UAPI Changes:
- Expose per tile media freq factor in sysfs (Ashutosh Dixit, Dale B Stimson)
- Document memory residency and Flat-CCS capability of obj (Ramalingam C)
- Disable GETPARAM lookups of I915_PARAM_[SUB]SLICE_MASK on Xe_HP+ (Matt Roper)
Cross-subsystem Changes:
- Rename intel-gtt symbols (Lucas De Marchi)
Core Changes:
Driver Changes:
- Support programming the EU priority in the GuC descriptor (DG2) (Matthew Brost)
- DG2 HuC loading support (Daniele Ceraolo Spurio)
- Fix build error without CONFIG_PM (YueHaibing)
- Enable THP on Icelake and beyond (Tvrtko Ursulin)
- Only set up private tmpfs mount when needed and fix logging (Tvrtko Ursulin)
- Make __guc_reset_context aware of guilty engines (Umesh Nerlige Ramappa)
- DG2 small bar memory probing fixes (Nirmoy Das)
- Remove unnecessary GuC err capture noise (Alan Previn)
- Fix i915_gem_object_ggtt_pin_ww regression on old platforms (Maarten Lankhorst)
- Fix undefined behavior in GuC backend due to shift overflowing the constant (Borislav Petkov)
- New DG2 workarounds (Swathi Dhanavanthri, Anshuman Gupta)
- Report no hwconfig support on ADL-N (Balasubramani Vivekanandan)
- Fix error_state_read ptr + offset use (Alan Previn)
- Expose per tile media freq factor in sysfs (Ashutosh Dixit, Dale B Stimson)
- Fix memory leaks in per-gt sysfs (Ashutosh Dixit)
- Fix dma_resv fence handling in multi-batch execbuf (Nirmoy Das)
- Add extra registers to GPU error dump on Gen11+ (Stuart Summers)
- More PVC+DG2 workarounds (Matt Roper)
- Improve user experience and driver robustness under SIGINT or similar (Tvrtko Ursulin)
- Don't show engine classes not present (Tvrtko Ursulin)
- Improve suspend / resume time with VT-d enabled (Thomas Hellström)
- Add missing else (katrinzhou)
- Don't leak lmem mapping in vma_evict (Juha-Pekka Heikkila)
- Add smem fallback allocation for dpt (Juha-Pekka Heikkila)
- Tweak the ordering in cpu_write_needs_clflush (Matthew Auld)
- Do not access rq->engine without a reference (Niranjana Vishwanathapura)
- Revert "drm/i915: Hold reference to intel_context over life of i915_request" (Niranjana Vishwanathapura)
- Don't update engine busyness stats too frequently (Alan Previn)
- Add additional steps for Wa_22011802037 for execlist backend (Umesh Nerlige Ramappa)
- Fix a lockdep warning at error capture (Nirmoy Das)
- Ponte Vecchio prep work and new blitter engines (Matt Roper, John Harrison, Lucas De Marchi)
- Read correct RP_STATE_CAP register (PVC) (Matt Roper)
- Define MOCS table for PVC (Ayaz A Siddiqui)
- Driver refactor and Ponte Vecchio forcewake handling support (Matt Roper)
- Remove additional 3D flags from PIPE_CONTROL (Ponte Vecchio) (Stuart Summers)
- XEHPSDV and PVC do not use HuC (Daniele Ceraolo Spurio)
- Extract stepping information from PCI revid (Ponte Vecchio) (Matt Roper)
- Add initial PVC workarounds (Stuart Summers)
- SSEU handling driver refactor and Ponte Vecchio support (Matt Roper)
- GuC depriv applies to PVC (Matt Roper)
- Add register steering (Ponte Vecchio) (Matt Roper)
- Add recommended MMIO setting (Ponte Vecchio) (Matt Roper)
- Move multicast register handling to a dedicated file (Matt Roper)
- Cleanup interface for MCR operations (Matt Roper)
- Extend i915_vma_pin_iomap() (CQ Tang)
- Re-do the intel-gtt split (Lucas De Marchi)
- Correct duplicated/misplaced GT register definitions (Matt Roper)
- Prefer "XEHP_" prefix for registers (Matt Roper)
- Don't use DRM_DEBUG_WARN_ON for unexpected l3bank/mslice config (Tvrtko Ursulin)
- Don't use DRM_DEBUG_WARN_ON for ring unexpectedly not idle (Tvrtko Ursulin)
- Make drop_pages() return bool (Lucas De Marchi)
- Fix CFI violation with show_dynamic_id() (Nathan Chancellor)
- Use i915_probe_error instead of drm_error in GuC code (Vinay Belgaumkar)
- Fix use of static in macro mismatch (Andi Shyti)
- Update tiled blits selftest (Bommu Krishnaiah)
- Future-proof platform checks (Matt Roper)
- Only include what's needed (Jani Nikula)
- Remove accidental static from a local variable (Jani Nikula)
- Add global forcewake request to drpc (Vinay Belgaumkar)
- Fix spelling typo in comment (pengfuyuan)
- Increase timeout for live_parallel_switch selftest (Akeem G Abodunrin)
- Use non-blocking H2G for waitboost (Vinay Belgaumkar)
Signed-off-by: Dave Airlie <[email protected]>
From: Tvrtko Ursulin <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/YrwtLM081SQUG1Dc@tursulin-desk
gt/intel_gt_debugfs.o \
gt/intel_gt_engines_debugfs.o \
gt/intel_gt_irq.o \
+ gt/intel_gt_mcr.o \
gt/intel_gt_pm.o \
gt/intel_gt_pm_debugfs.o \
gt/intel_gt_pm_irq.o \
gt/shmem_utils.o \
gt/sysfs_engines.o
# x86 intel-gtt module support
- gt-$(CONFIG_X86) += gt/intel_gt_gmch.o
+ gt-$(CONFIG_X86) += gt/intel_ggtt_gmch.o
# autogenerated null render state
gt-y += \
gt/gen6_renderstate.o \
display/intel_combo_phy.o \
display/intel_connector.o \
display/intel_crtc.o \
+ display/intel_crtc_state_dump.o \
display/intel_cursor.o \
display/intel_display.o \
display/intel_display_power.o \
display/intel_hdcp.o \
display/intel_hotplug.o \
display/intel_lpe_audio.o \
+ display/intel_modeset_verify.o \
+ display/intel_modeset_setup.o \
display/intel_overlay.o \
display/intel_pch_display.o \
display/intel_pch_refclk.o \
const struct drm_i915_gem_pwrite *arg)
{
struct address_space *mapping = obj->base.filp->f_mapping;
+ const struct address_space_operations *aops = mapping->a_ops;
char __user *user_data = u64_to_user_ptr(arg->data_ptr);
u64 remain, offset;
unsigned int pg;
if (err)
return err;
- err = pagecache_write_begin(obj->base.filp, mapping,
- offset, len, 0,
- &page, &data);
+ err = aops->write_begin(obj->base.filp, mapping, offset, len,
+ &page, &data);
if (err < 0)
return err;
len);
kunmap_atomic(vaddr);
- err = pagecache_write_end(obj->base.filp, mapping,
- offset, len, len - unwritten,
- page, data);
+ err = aops->write_end(obj->base.filp, mapping, offset, len,
+ len - unwritten, page, data);
if (err < 0)
return err;
{
struct drm_i915_gem_object *obj;
struct file *file;
+ const struct address_space_operations *aops;
resource_size_t offset;
int err;
GEM_BUG_ON(obj->write_domain != I915_GEM_DOMAIN_CPU);
file = obj->base.filp;
+ aops = file->f_mapping->a_ops;
offset = 0;
do {
unsigned int len = min_t(typeof(size), size, PAGE_SIZE);
struct page *page;
void *pgdata, *vaddr;
- err = pagecache_write_begin(file, file->f_mapping,
- offset, len, 0,
- &page, &pgdata);
+ err = aops->write_begin(file, file->f_mapping, offset, len,
+ &page, &pgdata);
if (err < 0)
goto fail;
memcpy(vaddr, data, len);
kunmap(page);
- err = pagecache_write_end(file, file->f_mapping,
- offset, len, len,
- page, pgdata);
+ err = aops->write_end(file, file->f_mapping, offset, len, len,
+ page, pgdata);
if (err < 0)
goto fail;
static int init_shmem(struct intel_memory_region *mem)
{
- int err;
-
- err = i915_gemfs_init(mem->i915);
- if (err) {
- DRM_NOTE("Unable to create a private tmpfs mount, hugepage support will be disabled(%d).\n",
- err);
- }
-
+ i915_gemfs_init(mem->i915);
intel_memory_region_set_name(mem, "system");
- return 0; /* Don't error, we can simply fallback to the kernel mnt */
+ return 0; /* We have fallback to the kernel mnt if gemfs init failed. */
}
static int release_shmem(struct intel_memory_region *mem)
if (unlikely(intel_context_is_closed(ce) &&
!intel_engine_has_heartbeat(engine)))
- intel_context_set_banned(ce);
+ intel_context_set_exiting(ce);
- if (unlikely(intel_context_is_banned(ce) || bad_request(rq)))
+ if (unlikely(!intel_context_is_schedulable(ce) || bad_request(rq)))
reset_active(rq, engine);
if (IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM))
i915_request_put(rq);
}
+ static u32 map_i915_prio_to_lrc_desc_prio(int prio)
+ {
+ if (prio > I915_PRIORITY_NORMAL)
+ return GEN12_CTX_PRIORITY_HIGH;
+ else if (prio < I915_PRIORITY_NORMAL)
+ return GEN12_CTX_PRIORITY_LOW;
+ else
+ return GEN12_CTX_PRIORITY_NORMAL;
+ }
+
static u64 execlists_update_context(struct i915_request *rq)
{
struct intel_context *ce = rq->context;
desc = ce->lrc.desc;
if (rq->engine->flags & I915_ENGINE_HAS_EU_PRIORITY)
- desc |= lrc_desc_priority(rq_prio(rq));
+ desc |= map_i915_prio_to_lrc_desc_prio(rq_prio(rq));
/*
* WaIdleLiteRestore:bdw,skl
/* Force a fast reset for terminated contexts (ignoring sysfs!) */
if (unlikely(intel_context_is_banned(rq->context) || bad_request(rq)))
- return 1;
+ return INTEL_CONTEXT_BANNED_PREEMPT_TIMEOUT_MS;
return READ_ONCE(engine->props.preempt_timeout_ms);
}
* submission. If we don't cancel the timer now,
* we will see that the timer has expired and
* reschedule the tasklet; continually until the
- * next context switch or other preeemption event.
+ * next context switch or other preemption event.
*
* Since we have decided to reschedule based on
* consumption of this timeslice, if we submit the
ring_set_paused(engine, 1);
intel_engine_stop_cs(engine);
+ /*
+ * Wa_22011802037:gen11/gen12: In addition to stopping the cs, we need
+ * to wait for any pending mi force wakeups
+ */
+ if (IS_GRAPHICS_VER(engine->i915, 11, 12))
+ intel_engine_wait_for_pending_mi_fw(engine);
+
engine->execlists.reset_ccid = active_ccid(engine);
}
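(Not part of the diff.) The map_i915_prio_to_lrc_desc_prio() helper added above buckets the driver's signed request priority into the three EU-priority levels carried in the GEN12 context descriptor, and execlists_update_context() only ORs the result in on engines flagged with I915_ENGINE_HAS_EU_PRIORITY. A minimal standalone sketch of that three-way bucketing follows; the bucket values and the I915_PRIORITY_NORMAL midpoint are placeholders, not the real descriptor encodings.

#include <stdio.h>

/* Placeholder values; the real GEN12_CTX_PRIORITY_* encodings differ. */
enum lrc_prio { LRC_PRIO_LOW, LRC_PRIO_NORMAL, LRC_PRIO_HIGH };
#define I915_PRIORITY_NORMAL 0 /* assumed midpoint of the i915 priority range */

/* Same shape as map_i915_prio_to_lrc_desc_prio() in the hunk above. */
static enum lrc_prio map_prio(int prio)
{
	if (prio > I915_PRIORITY_NORMAL)
		return LRC_PRIO_HIGH;
	else if (prio < I915_PRIORITY_NORMAL)
		return LRC_PRIO_LOW;
	else
		return LRC_PRIO_NORMAL;
}

int main(void)
{
	for (int p = -2; p <= 2; p++)
		printf("i915 prio %d -> lrc bucket %d\n", p, map_prio(p));
	return 0;
}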
#include <drm/drm_connector.h>
#include <drm/ttm/ttm_device.h>
-#include "display/intel_bios.h"
#include "display/intel_cdclk.h"
#include "display/intel_display.h"
#include "display/intel_display_power.h"
#define I915_COLOR_UNEVICTABLE (-1) /* a non-vma sharing the address space */
-enum drrs_type {
- DRRS_TYPE_NONE,
- DRRS_TYPE_STATIC,
- DRRS_TYPE_SEAMLESS,
-};
-
#define QUIRK_LVDS_SSC_DISABLE (1<<1)
#define QUIRK_INVERT_BRIGHTNESS (1<<2)
#define QUIRK_BACKLIGHT_PRESENT (1<<3)
/* bdb version */
u16 version;
- struct drm_display_mode *lfp_lvds_vbt_mode; /* if any */
- struct drm_display_mode *sdvo_lvds_vbt_mode; /* if any */
-
/* Feature bits */
unsigned int int_tv_support:1;
- unsigned int lvds_dither:1;
unsigned int int_crt_support:1;
unsigned int lvds_use_ssc:1;
unsigned int int_lvds_support:1;
unsigned int display_clock_mode:1;
unsigned int fdi_rx_polarity_inverted:1;
- unsigned int panel_type:4;
int lvds_ssc_freq;
- unsigned int bios_lvds_val; /* initial [PCH_]LVDS reg val in VBIOS */
enum drm_panel_orientation orientation;
bool override_afc_startup;
u8 override_afc_startup_val;
- u8 seamless_drrs_min_refresh_rate;
- enum drrs_type drrs_type;
-
- struct {
- int rate;
- int lanes;
- int preemphasis;
- int vswing;
- int bpp;
- struct edp_power_seq pps;
- u8 drrs_msa_timing_delay;
- bool low_vswing;
- bool initialized;
- bool hobl;
- } edp;
-
- struct {
- bool enable;
- bool full_link;
- bool require_aux_wakeup;
- int idle_frames;
- int tp1_wakeup_time_us;
- int tp2_tp3_wakeup_time_us;
- int psr2_tp2_tp3_wakeup_time_us;
- } psr;
-
- struct {
- u16 pwm_freq_hz;
- u16 brightness_precision_bits;
- bool present;
- bool active_low_pwm;
- u8 min_brightness; /* min_brightness/255 of max */
- u8 controller; /* brightness controller number */
- enum intel_backlight_type type;
- } backlight;
-
- /* MIPI DSI */
- struct {
- u16 panel_id;
- struct mipi_config *config;
- struct mipi_pps_data *pps;
- u16 bl_ports;
- u16 cabc_ports;
- u8 seq_version;
- u32 size;
- u8 *data;
- const u8 *sequence[MIPI_SEQ_MAX];
- u8 *deassert_seq; /* Used by fixup_mipi_sequences() */
- enum drm_panel_orientation orientation;
- } dsi;
-
int crt_ddc_pin;
struct list_head display_devices;
#define INTEL_DISPLAY_STEP(__i915) (RUNTIME_INFO(__i915)->step.display_step)
#define INTEL_GRAPHICS_STEP(__i915) (RUNTIME_INFO(__i915)->step.graphics_step)
#define INTEL_MEDIA_STEP(__i915) (RUNTIME_INFO(__i915)->step.media_step)
+ #define INTEL_BASEDIE_STEP(__i915) (RUNTIME_INFO(__i915)->step.basedie_step)
#define IS_DISPLAY_STEP(__i915, since, until) \
(drm_WARN_ON(&(__i915)->drm, INTEL_DISPLAY_STEP(__i915) == STEP_NONE), \
(drm_WARN_ON(&(__i915)->drm, INTEL_MEDIA_STEP(__i915) == STEP_NONE), \
INTEL_MEDIA_STEP(__i915) >= (since) && INTEL_MEDIA_STEP(__i915) < (until))
+ #define IS_BASEDIE_STEP(__i915, since, until) \
+ (drm_WARN_ON(&(__i915)->drm, INTEL_BASEDIE_STEP(__i915) == STEP_NONE), \
+ INTEL_BASEDIE_STEP(__i915) >= (since) && INTEL_BASEDIE_STEP(__i915) < (until))
+
static __always_inline unsigned int
__platform_mask_index(const struct intel_runtime_info *info,
enum intel_platform p)
(IS_DG2(__i915) && \
IS_DISPLAY_STEP(__i915, since, until))
+ #define IS_PVC_BD_STEP(__i915, since, until) \
+ (IS_PONTEVECCHIO(__i915) && \
+ IS_BASEDIE_STEP(__i915, since, until))
+
+ #define IS_PVC_CT_STEP(__i915, since, until) \
+ (IS_PONTEVECCHIO(__i915) && \
+ IS_GRAPHICS_STEP(__i915, since, until))
+
#define IS_LP(dev_priv) (INTEL_INFO(dev_priv)->is_lp)
#define IS_GEN9_LP(dev_priv) (GRAPHICS_VER(dev_priv) == 9 && IS_LP(dev_priv))
#define IS_GEN9_BC(dev_priv) (GRAPHICS_VER(dev_priv) == 9 && !IS_LP(dev_priv))
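(Not part of the diff.) The new IS_BASEDIE_STEP(), IS_PVC_BD_STEP() and IS_PVC_CT_STEP() macros above follow the same half-open [since, until) convention as the existing stepping checks; the pvc_init_clock_gating() hunk later in this diff uses IS_PVC_BD_STEP(dev_priv, STEP_A0, STEP_B0) to gate Wa_14012385139 and Wa_22010954014 on A-step base dies only. A standalone sketch of that range test, with placeholder STEP_* values (the kernel macros additionally warn on STEP_NONE rather than just treating it as out of range):

#include <stdbool.h>
#include <stdio.h>

/* Placeholder stepping enum; only the ordering matters for the range test. */
enum step { STEP_A0, STEP_A1, STEP_B0, STEP_B1, STEP_NONE };

/* Half-open range check: matches "since" up to but not including "until". */
static bool step_in_range(enum step s, enum step since, enum step until)
{
	return s != STEP_NONE && s >= since && s < until;
}

int main(void)
{
	/* e.g. a workaround gated on base-die steppings A0 .. pre-B0 */
	printf("A0 in [A0, B0): %d\n", step_in_range(STEP_A0, STEP_A0, STEP_B0)); /* 1 */
	printf("A1 in [A0, B0): %d\n", step_in_range(STEP_A1, STEP_A0, STEP_B0)); /* 1 */
	printf("B0 in [A0, B0): %d\n", step_in_range(STEP_B0, STEP_A0, STEP_B0)); /* 0 */
	return 0;
}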
})
#define RCS_MASK(gt) \
ENGINE_INSTANCES_MASK(gt, RCS0, I915_MAX_RCS)
+ #define BCS_MASK(gt) \
+ ENGINE_INSTANCES_MASK(gt, BCS0, I915_MAX_BCS)
#define VDBOX_MASK(gt) \
ENGINE_INSTANCES_MASK(gt, VCS0, I915_MAX_VCS)
#define VEBOX_MASK(gt) \
#define HAS_RUNTIME_PM(dev_priv) (INTEL_INFO(dev_priv)->has_runtime_pm)
#define HAS_64BIT_RELOC(dev_priv) (INTEL_INFO(dev_priv)->has_64bit_reloc)
- #define HAS_MSLICES(dev_priv) \
- (INTEL_INFO(dev_priv)->has_mslices)
-
/*
* Set this flag, when platform requires 64K GTT page sizes or larger for
* device local memory access.
#define HAS_LSPCON(dev_priv) (IS_DISPLAY_VER(dev_priv, 9, 10))
+ #define HAS_L3_CCS_READ(i915) (INTEL_INFO(i915)->has_l3_ccs_read)
+
/* DPF == dynamic parity feature */
#define HAS_L3_DPF(dev_priv) (INTEL_INFO(dev_priv)->has_l3_dpf)
#define NUM_L3_SLICES(dev_priv) (IS_HSW_GT3(dev_priv) ? \
/* Only valid when HAS_DISPLAY() is true */
#define INTEL_DISPLAY_ENABLED(dev_priv) \
- (drm_WARN_ON(&(dev_priv)->drm, !HAS_DISPLAY(dev_priv)), !(dev_priv)->params.disable_display)
+ (drm_WARN_ON(&(dev_priv)->drm, !HAS_DISPLAY(dev_priv)), \
+ !(dev_priv)->params.disable_display && \
+ !intel_opregion_headless_sku(dev_priv))
#define HAS_GUC_DEPRIVILEGE(dev_priv) \
(INTEL_INFO(dev_priv)->has_guc_deprivilege)
#define HAS_MBUS_JOINING(i915) (IS_ALDERLAKE_P(i915))
+ #define HAS_3D_PIPELINE(i915) (INTEL_INFO(i915)->has_3d_pipeline)
+
+ #define HAS_ONE_EU_PER_FUSE_BIT(i915) (INTEL_INFO(i915)->has_one_eu_per_fuse_bit)
+
/* i915_gem.c */
void i915_gem_init_early(struct drm_i915_private *dev_priv);
void i915_gem_cleanup_early(struct drm_i915_private *dev_priv);
#define GEN12_COMPUTE2_RING_BASE 0x1e000
#define GEN12_COMPUTE3_RING_BASE 0x26000
#define BLT_RING_BASE 0x22000
+ #define XEHPC_BCS1_RING_BASE 0x3e0000
+ #define XEHPC_BCS2_RING_BASE 0x3e2000
+ #define XEHPC_BCS3_RING_BASE 0x3e4000
+ #define XEHPC_BCS4_RING_BASE 0x3e6000
+ #define XEHPC_BCS5_RING_BASE 0x3e8000
+ #define XEHPC_BCS6_RING_BASE 0x3ea000
+ #define XEHPC_BCS7_RING_BASE 0x3ec000
+ #define XEHPC_BCS8_RING_BASE 0x3ee000
#define DG1_GSC_HECI1_BASE 0x00258000
#define DG1_GSC_HECI2_BASE 0x00259000
#define DG2_GSC_HECI1_BASE 0x00373000
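(Not part of the diff.) The eight XEHPC_BCS*_RING_BASE defines added above are spaced 0x2000 apart starting at 0x3e0000, matching the additional blitter engines from the Ponte Vecchio prep work noted in the changelog. A quick standalone check of that spacing; the stride constant below is derived from the defines themselves, not from any documented register layout:

#include <stdio.h>

#define XEHPC_BCS1_RING_BASE  0x3e0000
#define XEHPC_BCS_RING_STRIDE 0x2000 /* derived from the defines above */

/* Compute the ring base for XEHPC BCS engine n (n = 1..8). */
static unsigned int xehpc_bcs_ring_base(int n)
{
	return XEHPC_BCS1_RING_BASE + (n - 1) * XEHPC_BCS_RING_STRIDE;
}

int main(void)
{
	for (int n = 1; n <= 8; n++)
		printf("XEHPC_BCS%d_RING_BASE = 0x%x\n", n, xehpc_bcs_ring_base(n));
	return 0;
}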
#define BXT_RP_STATE_CAP _MMIO(0x138170)
#define GEN9_RP_STATE_LIMITS _MMIO(0x138148)
#define XEHPSDV_RP_STATE_CAP _MMIO(0x250014)
+ #define PVC_RP_STATE_CAP _MMIO(0x281014)
#define GT0_PERF_LIMIT_REASONS _MMIO(0x1381a8)
#define GT0_PERF_LIMIT_REASONS_MASK 0xde3
#define DG1_UNCORE_GET_INIT_STATUS 0x0
#define DG1_UNCORE_INIT_STATUS_COMPLETE 0x1
#define GEN12_PCODE_READ_SAGV_BLOCK_TIME_US 0x23
+ #define XEHP_PCODE_FREQUENCY_CONFIG 0x6e /* xehpsdv, pvc */
+ /* XEHP_PCODE_FREQUENCY_CONFIG sub-commands (param1) */
+ #define PCODE_MBOX_FC_SC_READ_FUSED_P0 0x0
+ #define PCODE_MBOX_FC_SC_READ_FUSED_PN 0x1
+ /* PCODE_MBOX_DOMAIN_* - mailbox domain IDs */
+ /* XEHP_PCODE_FREQUENCY_CONFIG param2 */
+ #define PCODE_MBOX_DOMAIN_NONE 0x0
+ #define PCODE_MBOX_DOMAIN_MEDIAFF 0x3
#define GEN6_PCODE_DATA _MMIO(0x138128)
#define GEN6_PCODE_FREQ_IA_RATIO_SHIFT 8
#define GEN6_PCODE_FREQ_RING_RATIO_SHIFT 16
(((reg) & GEN7_L3CDERRST1_SUBBANK_MASK) >> 8)
#define GEN7_L3CDERRST1_ENABLE (1 << 7)
-/* Audio */
-#define G4X_AUD_VID_DID _MMIO(DISPLAY_MMIO_BASE(dev_priv) + 0x62020)
-#define INTEL_AUDIO_DEVCL 0x808629FB
-#define INTEL_AUDIO_DEVBLC 0x80862801
-#define INTEL_AUDIO_DEVCTG 0x80862802
-
-#define G4X_AUD_CNTL_ST _MMIO(0x620B4)
-#define G4X_ELDV_DEVCL_DEVBLC (1 << 13)
-#define G4X_ELDV_DEVCTG (1 << 14)
-#define G4X_ELD_ADDR_MASK (0xf << 5)
-#define G4X_ELD_ACK (1 << 4)
-#define G4X_HDMIW_HDMIEDID _MMIO(0x6210C)
-
-#define _IBX_HDMIW_HDMIEDID_A 0xE2050
-#define _IBX_HDMIW_HDMIEDID_B 0xE2150
-#define IBX_HDMIW_HDMIEDID(pipe) _MMIO_PIPE(pipe, _IBX_HDMIW_HDMIEDID_A, \
- _IBX_HDMIW_HDMIEDID_B)
-#define _IBX_AUD_CNTL_ST_A 0xE20B4
-#define _IBX_AUD_CNTL_ST_B 0xE21B4
-#define IBX_AUD_CNTL_ST(pipe) _MMIO_PIPE(pipe, _IBX_AUD_CNTL_ST_A, \
- _IBX_AUD_CNTL_ST_B)
-#define IBX_ELD_BUFFER_SIZE_MASK (0x1f << 10)
-#define IBX_ELD_ADDRESS_MASK (0x1f << 5)
-#define IBX_ELD_ACK (1 << 4)
-#define IBX_AUD_CNTL_ST2 _MMIO(0xE20C0)
-#define IBX_CP_READY(port) ((1 << 1) << (((port) - 1) * 4))
-#define IBX_ELD_VALID(port) ((1 << 0) << (((port) - 1) * 4))
-
-#define _CPT_HDMIW_HDMIEDID_A 0xE5050
-#define _CPT_HDMIW_HDMIEDID_B 0xE5150
-#define CPT_HDMIW_HDMIEDID(pipe) _MMIO_PIPE(pipe, _CPT_HDMIW_HDMIEDID_A, _CPT_HDMIW_HDMIEDID_B)
-#define _CPT_AUD_CNTL_ST_A 0xE50B4
-#define _CPT_AUD_CNTL_ST_B 0xE51B4
-#define CPT_AUD_CNTL_ST(pipe) _MMIO_PIPE(pipe, _CPT_AUD_CNTL_ST_A, _CPT_AUD_CNTL_ST_B)
-#define CPT_AUD_CNTRL_ST2 _MMIO(0xE50C0)
-
-#define _VLV_HDMIW_HDMIEDID_A (VLV_DISPLAY_BASE + 0x62050)
-#define _VLV_HDMIW_HDMIEDID_B (VLV_DISPLAY_BASE + 0x62150)
-#define VLV_HDMIW_HDMIEDID(pipe) _MMIO_PIPE(pipe, _VLV_HDMIW_HDMIEDID_A, _VLV_HDMIW_HDMIEDID_B)
-#define _VLV_AUD_CNTL_ST_A (VLV_DISPLAY_BASE + 0x620B4)
-#define _VLV_AUD_CNTL_ST_B (VLV_DISPLAY_BASE + 0x621B4)
-#define VLV_AUD_CNTL_ST(pipe) _MMIO_PIPE(pipe, _VLV_AUD_CNTL_ST_A, _VLV_AUD_CNTL_ST_B)
-#define VLV_AUD_CNTL_ST2 _MMIO(VLV_DISPLAY_BASE + 0x620C0)
-
/* These are the 4 32-bit write offset registers for each stream
* output buffer. It determines the offset from the
* 3DSTATE_SO_BUFFERs that the next streamed vertex output goes to.
*/
#define GEN7_SO_WRITE_OFFSET(n) _MMIO(0x5280 + (n) * 4)
-#define _IBX_AUD_CONFIG_A 0xe2000
-#define _IBX_AUD_CONFIG_B 0xe2100
-#define IBX_AUD_CFG(pipe) _MMIO_PIPE(pipe, _IBX_AUD_CONFIG_A, _IBX_AUD_CONFIG_B)
-#define _CPT_AUD_CONFIG_A 0xe5000
-#define _CPT_AUD_CONFIG_B 0xe5100
-#define CPT_AUD_CFG(pipe) _MMIO_PIPE(pipe, _CPT_AUD_CONFIG_A, _CPT_AUD_CONFIG_B)
-#define _VLV_AUD_CONFIG_A (VLV_DISPLAY_BASE + 0x62000)
-#define _VLV_AUD_CONFIG_B (VLV_DISPLAY_BASE + 0x62100)
-#define VLV_AUD_CFG(pipe) _MMIO_PIPE(pipe, _VLV_AUD_CONFIG_A, _VLV_AUD_CONFIG_B)
-
-#define AUD_CONFIG_N_VALUE_INDEX (1 << 29)
-#define AUD_CONFIG_N_PROG_ENABLE (1 << 28)
-#define AUD_CONFIG_UPPER_N_SHIFT 20
-#define AUD_CONFIG_UPPER_N_MASK (0xff << 20)
-#define AUD_CONFIG_LOWER_N_SHIFT 4
-#define AUD_CONFIG_LOWER_N_MASK (0xfff << 4)
-#define AUD_CONFIG_N_MASK (AUD_CONFIG_UPPER_N_MASK | AUD_CONFIG_LOWER_N_MASK)
-#define AUD_CONFIG_N(n) \
- (((((n) >> 12) & 0xff) << AUD_CONFIG_UPPER_N_SHIFT) | \
- (((n) & 0xfff) << AUD_CONFIG_LOWER_N_SHIFT))
-#define AUD_CONFIG_PIXEL_CLOCK_HDMI_SHIFT 16
-#define AUD_CONFIG_PIXEL_CLOCK_HDMI_MASK (0xf << 16)
-#define AUD_CONFIG_PIXEL_CLOCK_HDMI_25175 (0 << 16)
-#define AUD_CONFIG_PIXEL_CLOCK_HDMI_25200 (1 << 16)
-#define AUD_CONFIG_PIXEL_CLOCK_HDMI_27000 (2 << 16)
-#define AUD_CONFIG_PIXEL_CLOCK_HDMI_27027 (3 << 16)
-#define AUD_CONFIG_PIXEL_CLOCK_HDMI_54000 (4 << 16)
-#define AUD_CONFIG_PIXEL_CLOCK_HDMI_54054 (5 << 16)
-#define AUD_CONFIG_PIXEL_CLOCK_HDMI_74176 (6 << 16)
-#define AUD_CONFIG_PIXEL_CLOCK_HDMI_74250 (7 << 16)
-#define AUD_CONFIG_PIXEL_CLOCK_HDMI_148352 (8 << 16)
-#define AUD_CONFIG_PIXEL_CLOCK_HDMI_148500 (9 << 16)
-#define AUD_CONFIG_PIXEL_CLOCK_HDMI_296703 (10 << 16)
-#define AUD_CONFIG_PIXEL_CLOCK_HDMI_297000 (11 << 16)
-#define AUD_CONFIG_PIXEL_CLOCK_HDMI_593407 (12 << 16)
-#define AUD_CONFIG_PIXEL_CLOCK_HDMI_594000 (13 << 16)
-#define AUD_CONFIG_DISABLE_NCTS (1 << 3)
-
-/* HSW Audio */
-#define _HSW_AUD_CONFIG_A 0x65000
-#define _HSW_AUD_CONFIG_B 0x65100
-#define HSW_AUD_CFG(trans) _MMIO_TRANS(trans, _HSW_AUD_CONFIG_A, _HSW_AUD_CONFIG_B)
-
-#define _HSW_AUD_MISC_CTRL_A 0x65010
-#define _HSW_AUD_MISC_CTRL_B 0x65110
-#define HSW_AUD_MISC_CTRL(trans) _MMIO_TRANS(trans, _HSW_AUD_MISC_CTRL_A, _HSW_AUD_MISC_CTRL_B)
-
-#define _HSW_AUD_M_CTS_ENABLE_A 0x65028
-#define _HSW_AUD_M_CTS_ENABLE_B 0x65128
-#define HSW_AUD_M_CTS_ENABLE(trans) _MMIO_TRANS(trans, _HSW_AUD_M_CTS_ENABLE_A, _HSW_AUD_M_CTS_ENABLE_B)
-#define AUD_M_CTS_M_VALUE_INDEX (1 << 21)
-#define AUD_M_CTS_M_PROG_ENABLE (1 << 20)
-#define AUD_CONFIG_M_MASK 0xfffff
-
-#define _HSW_AUD_DIP_ELD_CTRL_ST_A 0x650b4
-#define _HSW_AUD_DIP_ELD_CTRL_ST_B 0x651b4
-#define HSW_AUD_DIP_ELD_CTRL(trans) _MMIO_TRANS(trans, _HSW_AUD_DIP_ELD_CTRL_ST_A, _HSW_AUD_DIP_ELD_CTRL_ST_B)
-
-/* Audio Digital Converter */
-#define _HSW_AUD_DIG_CNVT_1 0x65080
-#define _HSW_AUD_DIG_CNVT_2 0x65180
-#define AUD_DIG_CNVT(trans) _MMIO_TRANS(trans, _HSW_AUD_DIG_CNVT_1, _HSW_AUD_DIG_CNVT_2)
-#define DIP_PORT_SEL_MASK 0x3
-
-#define _HSW_AUD_EDID_DATA_A 0x65050
-#define _HSW_AUD_EDID_DATA_B 0x65150
-#define HSW_AUD_EDID_DATA(trans) _MMIO_TRANS(trans, _HSW_AUD_EDID_DATA_A, _HSW_AUD_EDID_DATA_B)
-
-#define HSW_AUD_PIPE_CONV_CFG _MMIO(0x6507c)
-#define HSW_AUD_PIN_ELD_CP_VLD _MMIO(0x650c0)
-#define AUDIO_INACTIVE(trans) ((1 << 3) << ((trans) * 4))
-#define AUDIO_OUTPUT_ENABLE(trans) ((1 << 2) << ((trans) * 4))
-#define AUDIO_CP_READY(trans) ((1 << 1) << ((trans) * 4))
-#define AUDIO_ELD_VALID(trans) ((1 << 0) << ((trans) * 4))
-
-#define _AUD_TCA_DP_2DOT0_CTRL 0x650bc
-#define _AUD_TCB_DP_2DOT0_CTRL 0x651bc
-#define AUD_DP_2DOT0_CTRL(trans) _MMIO_TRANS(trans, _AUD_TCA_DP_2DOT0_CTRL, _AUD_TCB_DP_2DOT0_CTRL)
-#define AUD_ENABLE_SDP_SPLIT REG_BIT(31)
-
-#define HSW_AUD_CHICKENBIT _MMIO(0x65f10)
-#define SKL_AUD_CODEC_WAKE_SIGNAL (1 << 15)
-
-#define AUD_FREQ_CNTRL _MMIO(0x65900)
-#define AUD_PIN_BUF_CTL _MMIO(0x48414)
-#define AUD_PIN_BUF_ENABLE REG_BIT(31)
-
-#define AUD_TS_CDCLK_M _MMIO(0x65ea0)
-#define AUD_TS_CDCLK_M_EN REG_BIT(31)
-#define AUD_TS_CDCLK_N _MMIO(0x65ea4)
-
-/* Display Audio Config Reg */
-#define AUD_CONFIG_BE _MMIO(0x65ef0)
-#define HBLANK_EARLY_ENABLE_ICL(pipe) (0x1 << (20 - (pipe)))
-#define HBLANK_EARLY_ENABLE_TGL(pipe) (0x1 << (24 + (pipe)))
-#define HBLANK_START_COUNT_MASK(pipe) (0x7 << (3 + ((pipe) * 6)))
-#define HBLANK_START_COUNT(pipe, val) (((val) & 0x7) << (3 + ((pipe)) * 6))
-#define NUMBER_SAMPLES_PER_LINE_MASK(pipe) (0x3 << ((pipe) * 6))
-#define NUMBER_SAMPLES_PER_LINE(pipe, val) (((val) & 0x3) << ((pipe) * 6))
-
-#define HBLANK_START_COUNT_8 0
-#define HBLANK_START_COUNT_16 1
-#define HBLANK_START_COUNT_32 2
-#define HBLANK_START_COUNT_64 3
-#define HBLANK_START_COUNT_96 4
-#define HBLANK_START_COUNT_128 5
-
/*
* HSW - ICL power wells
*
#define SGGI_DIS REG_BIT(15)
#define SGR_DIS REG_BIT(13)
- #define XEHPSDV_TILE0_ADDR_RANGE _MMIO(0x4900)
- #define XEHPSDV_TILE_LMEM_RANGE_SHIFT 8
-
- #define XEHPSDV_FLAT_CCS_BASE_ADDR _MMIO(0x4910)
- #define XEHPSDV_CCS_BASE_SHIFT 8
-
- /* gamt regs */
- #define GEN8_L3_LRA_1_GPGPU _MMIO(0x4dd4)
- #define GEN8_L3_LRA_1_GPGPU_DEFAULT_VALUE_BDW 0x67F1427F /* max/min for LRA1/2 */
- #define GEN8_L3_LRA_1_GPGPU_DEFAULT_VALUE_CHV 0x5FF101FF /* max/min for LRA1/2 */
- #define GEN9_L3_LRA_1_GPGPU_DEFAULT_VALUE_SKL 0x67F1427F /* " " */
- #define GEN9_L3_LRA_1_GPGPU_DEFAULT_VALUE_BXT 0x5FF101FF /* " " */
-
- #define MMCD_MISC_CTRL _MMIO(0x4ddc) /* skl+ */
- #define MMCD_PCLA (1 << 31)
- #define MMCD_HOTSPOT_EN (1 << 27)
-
#define _ICL_PHY_MISC_A 0x64C00
#define _ICL_PHY_MISC_B 0x64C04
#define _DG2_PHY_MISC_TC1 0x64C14 /* TC1="PHY E" but offset as if "PHY F" */
#include <linux/pm_runtime.h>
#include <drm/drm_atomic_helper.h>
+#include <drm/drm_blend.h>
#include <drm/drm_fourcc.h>
#include <drm/drm_plane_helper.h>
}
static void intel_read_wm_latency(struct drm_i915_private *dev_priv,
- u16 wm[8])
+ u16 wm[])
{
struct intel_uncore *uncore = &dev_priv->uncore;
skl_ddb_entry_init_from_hw(ddb_y, val);
}
-void skl_pipe_ddb_get_hw_state(struct intel_crtc *crtc,
- struct skl_ddb_entry *ddb,
- struct skl_ddb_entry *ddb_y)
+static void skl_pipe_ddb_get_hw_state(struct intel_crtc *crtc,
+ struct skl_ddb_entry *ddb,
+ struct skl_ddb_entry *ddb_y)
{
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
enum intel_display_power_domain power_domain;
return data_rate;
}
-const struct skl_wm_level *
+static const struct skl_wm_level *
skl_plane_wm_level(const struct skl_pipe_wm *pipe_wm,
enum plane_id plane_id,
int level)
return &wm->wm[level];
}
-const struct skl_wm_level *
+static const struct skl_wm_level *
skl_plane_trans_wm(const struct skl_pipe_wm *pipe_wm,
enum plane_id plane_id)
{
skl_ddb_entry_write(dev_priv, CUR_BUF_CFG(pipe), ddb);
}
-bool skl_wm_level_equals(const struct skl_wm_level *l1,
- const struct skl_wm_level *l2)
+static bool skl_wm_level_equals(const struct skl_wm_level *l1,
+ const struct skl_wm_level *l2)
{
return l1->enable == l2->enable &&
l1->ignore_lines == l2->ignore_lines &&
level->lines = REG_FIELD_GET(PLANE_WM_LINES_MASK, val);
}
-void skl_pipe_wm_get_hw_state(struct intel_crtc *crtc,
- struct skl_pipe_wm *out)
+static void skl_pipe_wm_get_hw_state(struct intel_crtc *crtc,
+ struct skl_pipe_wm *out)
{
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
enum pipe pipe = crtc->pipe;
!(intel_uncore_read(&dev_priv->uncore, DISP_ARB_CTL) & DISP_FBC_WM_DIS);
}
+void intel_wm_state_verify(struct intel_crtc *crtc,
+ struct intel_crtc_state *new_crtc_state)
+{
+ struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
+ struct skl_hw_state {
+ struct skl_ddb_entry ddb[I915_MAX_PLANES];
+ struct skl_ddb_entry ddb_y[I915_MAX_PLANES];
+ struct skl_pipe_wm wm;
+ } *hw;
+ const struct skl_pipe_wm *sw_wm = &new_crtc_state->wm.skl.optimal;
+ int level, max_level = ilk_wm_max_level(dev_priv);
+ struct intel_plane *plane;
+ u8 hw_enabled_slices;
+
+ if (DISPLAY_VER(dev_priv) < 9 || !new_crtc_state->hw.active)
+ return;
+
+ hw = kzalloc(sizeof(*hw), GFP_KERNEL);
+ if (!hw)
+ return;
+
+ skl_pipe_wm_get_hw_state(crtc, &hw->wm);
+
+ skl_pipe_ddb_get_hw_state(crtc, hw->ddb, hw->ddb_y);
+
+ hw_enabled_slices = intel_enabled_dbuf_slices_mask(dev_priv);
+
+ if (DISPLAY_VER(dev_priv) >= 11 &&
+ hw_enabled_slices != dev_priv->dbuf.enabled_slices)
+ drm_err(&dev_priv->drm,
+ "mismatch in DBUF Slices (expected 0x%x, got 0x%x)\n",
+ dev_priv->dbuf.enabled_slices,
+ hw_enabled_slices);
+
+ for_each_intel_plane_on_crtc(&dev_priv->drm, crtc, plane) {
+ const struct skl_ddb_entry *hw_ddb_entry, *sw_ddb_entry;
+ const struct skl_wm_level *hw_wm_level, *sw_wm_level;
+
+ /* Watermarks */
+ for (level = 0; level <= max_level; level++) {
+ hw_wm_level = &hw->wm.planes[plane->id].wm[level];
+ sw_wm_level = skl_plane_wm_level(sw_wm, plane->id, level);
+
+ if (skl_wm_level_equals(hw_wm_level, sw_wm_level))
+ continue;
+
+ drm_err(&dev_priv->drm,
+ "[PLANE:%d:%s] mismatch in WM%d (expected e=%d b=%u l=%u, got e=%d b=%u l=%u)\n",
+ plane->base.base.id, plane->base.name, level,
+ sw_wm_level->enable,
+ sw_wm_level->blocks,
+ sw_wm_level->lines,
+ hw_wm_level->enable,
+ hw_wm_level->blocks,
+ hw_wm_level->lines);
+ }
+
+ hw_wm_level = &hw->wm.planes[plane->id].trans_wm;
+ sw_wm_level = skl_plane_trans_wm(sw_wm, plane->id);
+
+ if (!skl_wm_level_equals(hw_wm_level, sw_wm_level)) {
+ drm_err(&dev_priv->drm,
+ "[PLANE:%d:%s] mismatch in trans WM (expected e=%d b=%u l=%u, got e=%d b=%u l=%u)\n",
+ plane->base.base.id, plane->base.name,
+ sw_wm_level->enable,
+ sw_wm_level->blocks,
+ sw_wm_level->lines,
+ hw_wm_level->enable,
+ hw_wm_level->blocks,
+ hw_wm_level->lines);
+ }
+
+ hw_wm_level = &hw->wm.planes[plane->id].sagv.wm0;
+ sw_wm_level = &sw_wm->planes[plane->id].sagv.wm0;
+
+ if (HAS_HW_SAGV_WM(dev_priv) &&
+ !skl_wm_level_equals(hw_wm_level, sw_wm_level)) {
+ drm_err(&dev_priv->drm,
+ "[PLANE:%d:%s] mismatch in SAGV WM (expected e=%d b=%u l=%u, got e=%d b=%u l=%u)\n",
+ plane->base.base.id, plane->base.name,
+ sw_wm_level->enable,
+ sw_wm_level->blocks,
+ sw_wm_level->lines,
+ hw_wm_level->enable,
+ hw_wm_level->blocks,
+ hw_wm_level->lines);
+ }
+
+ hw_wm_level = &hw->wm.planes[plane->id].sagv.trans_wm;
+ sw_wm_level = &sw_wm->planes[plane->id].sagv.trans_wm;
+
+ if (HAS_HW_SAGV_WM(dev_priv) &&
+ !skl_wm_level_equals(hw_wm_level, sw_wm_level)) {
+ drm_err(&dev_priv->drm,
+ "[PLANE:%d:%s] mismatch in SAGV trans WM (expected e=%d b=%u l=%u, got e=%d b=%u l=%u)\n",
+ plane->base.base.id, plane->base.name,
+ sw_wm_level->enable,
+ sw_wm_level->blocks,
+ sw_wm_level->lines,
+ hw_wm_level->enable,
+ hw_wm_level->blocks,
+ hw_wm_level->lines);
+ }
+
+ /* DDB */
+ hw_ddb_entry = &hw->ddb[PLANE_CURSOR];
+ sw_ddb_entry = &new_crtc_state->wm.skl.plane_ddb[PLANE_CURSOR];
+
+ if (!skl_ddb_entry_equal(hw_ddb_entry, sw_ddb_entry)) {
+ drm_err(&dev_priv->drm,
+ "[PLANE:%d:%s] mismatch in DDB (expected (%u,%u), found (%u,%u))\n",
+ plane->base.base.id, plane->base.name,
+ sw_ddb_entry->start, sw_ddb_entry->end,
+ hw_ddb_entry->start, hw_ddb_entry->end);
+ }
+ }
+
+ kfree(hw);
+}
+
void intel_enable_ipc(struct drm_i915_private *dev_priv)
{
u32 val;
static void dg2_init_clock_gating(struct drm_i915_private *i915)
{
- /* Wa_22010954014:dg2_g10 */
- if (IS_DG2_G10(i915))
- intel_uncore_rmw(&i915->uncore, XEHP_CLOCK_GATE_DIS, 0,
- SGSI_SIDECLK_DIS);
+ /* Wa_22010954014:dg2 */
+ intel_uncore_rmw(&i915->uncore, XEHP_CLOCK_GATE_DIS, 0,
+ SGSI_SIDECLK_DIS);
/*
* Wa_14010733611:dg2_g10
SGR_DIS | SGGI_DIS);
}
+ static void pvc_init_clock_gating(struct drm_i915_private *dev_priv)
+ {
+ /* Wa_14012385139:pvc */
+ if (IS_PVC_BD_STEP(dev_priv, STEP_A0, STEP_B0))
+ intel_uncore_rmw(&dev_priv->uncore, XEHP_CLOCK_GATE_DIS, 0, SGR_DIS);
+
+ /* Wa_22010954014:pvc */
+ if (IS_PVC_BD_STEP(dev_priv, STEP_A0, STEP_B0))
+ intel_uncore_rmw(&dev_priv->uncore, XEHP_CLOCK_GATE_DIS, 0, SGSI_SIDECLK_DIS);
+ }
+
static void cnp_init_clock_gating(struct drm_i915_private *dev_priv)
{
if (!HAS_PCH_CNP(dev_priv))
.init_clock_gating = platform##_init_clock_gating, \
}
+ CG_FUNCS(pvc);
CG_FUNCS(dg2);
CG_FUNCS(xehpsdv);
CG_FUNCS(adlp);
*/
void intel_init_clock_gating_hooks(struct drm_i915_private *dev_priv)
{
- if (IS_DG2(dev_priv))
+ if (IS_PONTEVECCHIO(dev_priv))
+ dev_priv->clock_gating_funcs = &pvc_clock_gating_funcs;
+ else if (IS_DG2(dev_priv))
dev_priv->clock_gating_funcs = &dg2_clock_gating_funcs;
else if (IS_XEHPSDV(dev_priv))
dev_priv->clock_gating_funcs = &xehpsdv_clock_gating_funcs;