Xiao Guangrong [Mon, 15 Jun 2015 08:55:31 +0000 (16:55 +0800)]
KVM: MTRR: sort variable MTRRs
Sort all valid variable MTRRs based on their base addresses; this will help us
check whether a range is fully contained in the variable MTRRs
Signed-off-by: Xiao Guangrong <[email protected]>
[Fix list insertion sort, simplify var_mtrr_range_is_valid to just
test the V bit. - Paolo] Signed-off-by: Paolo Bonzini <[email protected]>
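As a hedged sketch of such a sorted insertion (hypothetical structure and function names, not the actual KVM code), keeping a list ordered by ascending base address could look like this:

    #include <linux/list.h>
    #include <linux/types.h>

    /* Illustrative only: hypothetical types, not the real KVM structures. */
    struct mtrr_range_sketch {
        u64 base;
        u64 mask;
        struct list_head node;
    };

    /* Insert 'new' so the list stays sorted by ascending base address. */
    static void mtrr_list_insert_sorted(struct list_head *head,
                                        struct mtrr_range_sketch *new)
    {
        struct mtrr_range_sketch *pos;

        list_for_each_entry(pos, head, node) {
            if (pos->base > new->base) {
                /* list_add_tail() links 'new' right before 'pos'. */
                list_add_tail(&new->node, &pos->node);
                return;
            }
        }
        list_add_tail(&new->node, head);
    }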
Xiao Guangrong [Mon, 15 Jun 2015 08:55:29 +0000 (16:55 +0800)]
KVM: MTRR: introduce fixed_mtrr_segment table
This table summarizes the information about fixed MTRRs and introduces some APIs
to abstract their handling, which helps clean up the code and will be
used in later patches
Signed-off-by: Xiao Guangrong <[email protected]>
[Change range_size to range_shift, in order to avoid udivdi3 errors.
- Paolo] Signed-off-by: Paolo Bonzini <[email protected]>
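A rough sketch of the idea, with field names guessed from the changelog rather than taken from the actual arch/x86/kvm/mtrr.c layout:

    #include <linux/types.h>

    /* Sketch only; the real table lives in arch/x86/kvm/mtrr.c. */
    struct fixed_mtrr_segment_sketch {
        u64 start;          /* first physical address covered by the segment */
        u64 end;            /* end of the covered range (exclusive) */
        int range_shift;    /* log2 of the unit size; a shift avoids udivdi3 */
        int range_start;    /* index of the segment's first unit in the table */
    };

    /* Map a physical address inside a segment to its fixed-range unit index. */
    static int fixed_mtrr_unit_sketch(struct fixed_mtrr_segment_sketch *seg,
                                      u64 addr)
    {
        return seg->range_start + (int)((addr - seg->start) >> seg->range_shift);
    }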
Xiao Guangrong [Mon, 15 Jun 2015 08:55:22 +0000 (16:55 +0800)]
KVM: x86: move MTRR related code to a separate file
The MTRR code currently lives in x86.c and mmu.c; move it to a separate file to
make the organization clearer. This file will also be the place where we fully
implement vMTRR
Bandan Das [Thu, 11 Jun 2015 06:05:33 +0000 (02:05 -0400)]
KVM: nSVM: Check for NRIPS support before updating control field
If hardware doesn't support DecodeAssist (a feature that provides
more information about the intercept in the VMCB), KVM decodes the
instruction and then updates the next_rip VMCB control field.
However, NRIP support itself depends on cpuid Fn8000_000A_EDX[NRIPS].
Since skip_emulated_instruction() doesn't verify nrip support
before accepting control.next_rip as valid, avoid writing this
field if support isn't present.
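The guard pattern described above, as a hedged sketch that relies on KVM's internal SVM structures (the function name is hypothetical):

    /*
     * Sketch only: relies on KVM's internal struct vcpu_svm from
     * arch/x86/kvm/svm.c; the function name is hypothetical.
     */
    static void svm_write_next_rip_sketch(struct vcpu_svm *svm, u64 rip)
    {
        /* Without NRIPS the CPU never provides next_rip, so don't write it. */
        if (!boot_cpu_has(X86_FEATURE_NRIPS))
            return;

        svm->vmcb->control.next_rip = rip;
    }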
This patch fixes this issue by breaking up the allocation of
the table and its entries into individual kzalloc calls.
These could all be satisfied with order-0 allocations, which
are less likely to fail.
The downside of this change is the lower performance, because
of more calls to kzalloc. But given how often kvm_set_irq_routing
is called in the lifetime of a guest, it doesn't really
matter much.
Signed-off-by: Joerg Roedel <[email protected]>
[Avoid sparse warning through rcu_access_pointer. - Paolo] Signed-off-by: Paolo Bonzini <[email protected]>
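As a hedged, generic illustration of the allocation pattern (hypothetical types; the point is one order-0 kzalloc per entry instead of a single high-order allocation):

    #include <linux/slab.h>

    /* Hypothetical types; the point is order-0 allocations per entry. */
    struct entry_sketch {
        int data;
    };

    struct table_sketch {
        unsigned int nr;
        struct entry_sketch *entries[];
    };

    static struct table_sketch *table_alloc_sketch(unsigned int nr)
    {
        struct table_sketch *t;
        unsigned int i;

        /* The table itself holds only pointers, so it stays small... */
        t = kzalloc(sizeof(*t) + nr * sizeof(t->entries[0]), GFP_KERNEL);
        if (!t)
            return NULL;

        t->nr = nr;
        for (i = 0; i < nr; i++) {
            /* ...and each entry is a separate order-0 allocation. */
            t->entries[i] = kzalloc(sizeof(*t->entries[i]), GFP_KERNEL);
            if (!t->entries[i]) {
                while (i--)
                    kfree(t->entries[i]);
                kfree(t);
                return NULL;
            }
        }
        return t;
    }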
Arnaud Ebalard [Thu, 18 Jun 2015 13:46:29 +0000 (15:46 +0200)]
crypto: marvell/cesa - add support for Kirkwood and Dove SoCs
Add the Kirkwood and Dove SoC descriptions, and control the allhwsupport
module parameter to avoid probing the CESA IP when the old CESA driver is
enabled (unless it is explicitly requested to do so).
Boris BREZILLON [Thu, 18 Jun 2015 13:46:28 +0000 (15:46 +0200)]
crypto: marvell/cesa - add support for Orion SoCs
Add the Orion SoC description, and select this implementation by default
to support non-DT probing: Orion is the only platform where non-DT boards
declare the CESA block.
Control the allhwsupport module parameter to avoid probing the CESA IP when
the old CESA driver is enabled (unless it is explicitly requested to do
so).
The old and new Marvell CESA drivers both support Orion and Kirkwood SoCs.
Add a module parameter to choose whether these SoCs should be attached to
the new or the old driver.
The default policy is to keep attaching those IPs to the old driver if it
is enabled, until we decide the new CESA driver is stable/secure enough.
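A hedged sketch of such a module-parameter gate (the parameter name comes from the changelog; the probe logic and flag are simplified placeholders):

    #include <linux/errno.h>
    #include <linux/module.h>
    #include <linux/platform_device.h>

    /*
     * Default: leave SoCs covered by the old mv_cesa driver to that driver
     * when it is built in; allhwsupport=1 overrides this at load time.
     */
    static int allhwsupport = !IS_ENABLED(CONFIG_CRYPTO_DEV_MV_CESA);
    module_param_named(allhwsupport, allhwsupport, int, 0444);
    MODULE_PARM_DESC(allhwsupport,
                     "Use the new driver even on SoCs also handled by the old mv_cesa driver");

    static int cesa_probe_sketch(struct platform_device *pdev)
    {
        /* Hypothetical per-SoC flag standing in for the real capability data. */
        bool soc_handled_by_old_driver = true;

        if (soc_handled_by_old_driver && !allhwsupport)
            return -ENOTSUPP;

        return 0;
    }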
Boris BREZILLON [Thu, 18 Jun 2015 13:46:21 +0000 (15:46 +0200)]
crypto: marvell/cesa - add TDMA support
The CESA IP supports CPU offload through a dedicated DMA engine (TDMA)
which can control the crypto block.
When you use this mode, all the required data (operation metadata and
payload data) are transferred using DMA, and the results are retrieved
through DMA when possible (hash results are not retrieved through DMA yet),
thus reducing CPU involvement and providing better performance
in most cases (for small requests, the cost of DMA preparation might
exceed the performance gain).
Note that some CESA IPs do not embed this dedicated DMA engine, hence the
feature is enabled on a per-platform basis.
Boris BREZILLON [Thu, 18 Jun 2015 13:46:20 +0000 (15:46 +0200)]
crypto: marvell/cesa - add a new driver for Marvell's CESA
The existing mv_cesa driver supports some features of the CESA IP but is
quite limited, and reworking it to support new features (like involving the
TDMA engine to offload the CPU) is almost impossible.
This driver has been rewritten from scratch to take those new features into
account.
This commit introduces the base infrastructure, which will allow us to add
support for DMA optimization.
It also includes support for one hash (SHA1) and one cipher (AES)
algorithm, and enables those features on the Armada 370 SoC.
Other algorithms and platforms will be added later on.
Boris BREZILLON [Thu, 18 Jun 2015 13:46:19 +0000 (15:46 +0200)]
crypto: mv_cesa - explicitly define kirkwood and dove compatible strings
We are about to add a new driver to support new features like using the
TDMA engine to offload the CPU.
Orion, Dove and Kirkwood platforms are already using the mv_cesa driver,
but Orion SoCs do not embed the TDMA engine, which means we will have to
differentiate them if we want to get TDMA support on Dove and Kirkwood.
On the other hand, migrating from the old driver to the new one is not
something everyone is willing to do without first auditing the new
driver.
Hence we have to support the new compatible strings in the mv_cesa driver so
that platforms with updated DTs can still attach their crypto engine device
to this driver.
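A hedged illustration of advertising the additional compatibles in the driver's of_device_id table (the exact compatible strings are assumptions based on the SoC names, not quoted from the patch):

    #include <linux/mod_devicetable.h>
    #include <linux/module.h>
    #include <linux/of.h>

    /* Per-SoC match entries; the strings are assumed from the SoC names. */
    static const struct of_device_id mv_cesa_of_match_sketch[] = {
        { .compatible = "marvell,orion-crypto" },
        { .compatible = "marvell,kirkwood-crypto" },
        { .compatible = "marvell,dove-crypto" },
        { /* sentinel */ },
    };
    MODULE_DEVICE_TABLE(of, mv_cesa_of_match_sketch);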
Boris BREZILLON [Thu, 18 Jun 2015 13:46:18 +0000 (15:46 +0200)]
crypto: mv_cesa - use gen_pool to reserve the SRAM memory region
The mv_cesa driver currently expects the SRAM memory region to be passed
as a platform device resource.
This approach implies two drawbacks:
- the DT representation is wrong
- only the crypto engine can access the SRAM
The last point is particularly annoying in some cases: for example, on
Armada 370 a small region of the crypto SRAM is used to implement
cpuidle, which means you would not be able to enable both cpuidle and the
CESA driver.
To address that problem, we explicitly define the SRAM device in the DT
and then reference the sram node from the crypto engine node.
Also note that the old way of retrieving the SRAM memory region is still
supported, or in other words, backward compatibility is preserved.
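A hedged sketch of the gen_pool-based reservation (the phandle property name, the size, and the use of of_gen_pool_get() are assumptions; the legacy platform-resource fallback is omitted):

    #include <linux/device.h>
    #include <linux/errno.h>
    #include <linux/genalloc.h>
    #include <linux/of.h>

    #define CESA_SRAM_SIZE_SKETCH 2048   /* assumed per-engine SRAM slice */

    static int cesa_get_sram_sketch(struct device *dev, struct device_node *np)
    {
        struct gen_pool *pool;
        unsigned long sram;

        /* "marvell,crypto-srams" is the assumed phandle property name. */
        pool = of_gen_pool_get(np, "marvell,crypto-srams", 0);
        if (!pool)
            return -ENOMEM;

        sram = gen_pool_alloc(pool, CESA_SRAM_SIZE_SKETCH);
        if (!sram)
            return -ENOMEM;

        dev_info(dev, "reserved %d bytes of crypto SRAM from the shared pool\n",
                 CESA_SRAM_SIZE_SKETCH);
        return 0;
    }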
Herbert Xu [Fri, 19 Jun 2015 14:07:07 +0000 (22:07 +0800)]
Merge branch 'mvebu/drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Merge the mvebu/drivers branch of the arm-soc tree which contains
just a single patch bfa1ce5f38938cc9e6c7f2d1011f88eba2b9e2b2 ("bus:
mvebu-mbus: add mv_mbus_dram_info_nooverlap()") that happens to be
a prerequisite of the new marvell/cesa crypto driver.
Will Deacon [Fri, 19 Jun 2015 12:56:33 +0000 (13:56 +0100)]
arm64: vdso: work-around broken ELF toolchains in Makefile
When building the kernel with a bare-metal (ELF) toolchain, the -shared
option may not be passed down to collect2, resulting in silent corruption
of the vDSO image (in particular, the DYNAMIC section is omitted).
The effect of this corruption is that the dynamic linker fails to find
the vDSO symbols and libc is instead used for the syscalls that we
intended to optimise (e.g. gettimeofday). Functionally, there is no
issue as the sigreturn trampoline is still intact and located by the
kernel.
This patch fixes the problem by explicitly passing -shared to the linker
when building the vDSO.
Thomas Gleixner [Tue, 26 May 2015 22:50:35 +0000 (22:50 +0000)]
timer: Minimize nohz off overhead
If nohz is disabled on the kernel command line, the [hr]timer code
still calls wake_up_nohz_cpu() and tick_nohz_full_cpu(), a pretty
pointless exercise. Cache nohz_active in [hr]timer per cpu bases and
avoid the overhead.
We probably should have a cached value for nohz full in the per cpu
bases as well to avoid the cpumask check. The base cache line is hot
already, the cpumask not necessarily.
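The caching pattern being described, as a rough sketch with illustrative structure and helper names (not the actual timer code):

    #include <linux/sched.h>
    #include <linux/spinlock.h>

    /* Illustrative base; the real [hr]timer bases carry more state. */
    struct timer_base_sketch {
        spinlock_t lock;
        bool nohz_active;   /* cached copy of the global nohz state */
    };

    static void timer_kick_remote_sketch(struct timer_base_sketch *base, int cpu)
    {
        /*
         * With nohz=off the cached flag never becomes true, so the common
         * path skips wake_up_nohz_cpu()/tick_nohz_full_cpu() entirely.
         */
        if (base->nohz_active)
            wake_up_nohz_cpu(cpu);
    }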
Thomas Gleixner [Tue, 26 May 2015 22:50:33 +0000 (22:50 +0000)]
timer: Reduce timer migration overhead if disabled
Eric reported that the timer_migration sysctl is not really nice
performance-wise, as every timer insertion needs to check whether the
feature is enabled or not. Further, the check does not live in the
timer code, so we have an extra function call which touches an extra
cache line just to figure out that the feature is disabled.
We can do better and store that information in the per cpu (hr)timer
bases. I pondered using a static key, but that's a nightmare to
update from the nohz code and the timer base cache line is hot anyway
when we select a timer base.
The old logic enabled the timer migration unconditionally if
CONFIG_NO_HZ was set even if nohz was disabled on the kernel command
line.
With this modification, we start off with migration disabled. The
user-visible sysctl is still set to enabled. If the kernel switches to
NOHZ mode, migration is enabled, provided the user did not disable it
via the sysctl prior to the switch. If nohz=off is on the kernel
command line, migration stays disabled no matter what.
Thomas Gleixner [Tue, 26 May 2015 22:50:31 +0000 (22:50 +0000)]
timer: Stats: Simplify the flags handling
Simplify the handling of the flag storage for the timer statistics. No
intermediate storage anymore. Just hand over the flags field.
I left the printout of 'deferrable' for now because changing this
would be an ABI update and I have no idea how strong people feel about
that. OTOH, I wonder whether we should kill the whole timer stats
stuff because all of that information can be retrieved via ftrace/perf
as well.
Thomas Gleixner [Tue, 26 May 2015 22:50:29 +0000 (22:50 +0000)]
timer: Replace timer base by a cpu index
Instead of storing a pointer to the per cpu tvec_base we can simply
cache a CPU index in the timer_list and use that to get hold of the
correct per cpu tvec_base. This is only used in lock_timer_base() and
the slightly larger code is peanuts versus the spinlock operation and
the d-cache footprint of the timer wheel.
Aside from that, this allows us to get rid of the following nuisances:
- boot_tvec_base
That statically allocated 4k bss data is just kept around so the
timer has a home when it gets statically initialized. It serves no
other purpose.
With the CPU index we assign the timer to CPU0 at static
initialization time and can therefore avoid the whole boot_tvec_base
dance. That also simplifies the init code, which can just use the
per cpu base.
Before:
   text    data   bss     dec    hex  filename
  17491    9201  4160   30852   7884  ../build/kernel/time/timer.o
After:
   text    data   bss     dec    hex  filename
  17440    9193     0   26633   6809  ../build/kernel/time/timer.o
- Overloading the base pointer with various flags
The CPU index has enough space to hold the flags (deferrable,
irqsafe) so we can get rid of the extra masking and bit fiddling
with the base pointer.
As a benefit we reduce the size of struct timer_list on 64 bit
machines by 4 to 8 bytes, a size reduction of up to 15% per struct
timer_list, which is a real win as we have tons of them embedded in
other structs.
This also changes the newly added deferrable printout of the timer
start tracepoint to capture and print all of timer->flags, which allows
us to decode the target cpu of the timer as well.
We might have used bitfields for this, but that would change the
static initializers and the init function for no real value, just to
accommodate big endian bitfields.
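A hedged sketch of how one flags word can carry both the CPU index and the timer flags (mask names and values are illustrative, prefixed SKETCH_ to make that clear):

    #include <linux/percpu.h>
    #include <linux/spinlock.h>
    #include <linux/types.h>

    /* Illustrative masks: low bits select the CPU, high bits hold flags. */
    #define SKETCH_TIMER_CPUMASK    0x0003ffff
    #define SKETCH_TIMER_DEFERRABLE 0x00080000
    #define SKETCH_TIMER_IRQSAFE    0x00100000

    struct tvec_base_sketch {
        spinlock_t lock;
        /* ... timer wheel arrays ... */
    };

    static DEFINE_PER_CPU(struct tvec_base_sketch, tvec_bases_sketch);

    /* No pointer masking any more: just index the per-CPU bases. */
    static struct tvec_base_sketch *get_timer_base_sketch(u32 flags)
    {
        return per_cpu_ptr(&tvec_bases_sketch, flags & SKETCH_TIMER_CPUMASK);
    }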
Thomas Gleixner [Tue, 26 May 2015 22:50:26 +0000 (22:50 +0000)]
timer: Remove FIFO "guarantee"
The FIFO guarantee is only there if two timers are queued into the
same bucket at the same jiffie on the same cpu:
- The slack value depends on the delta between expiry and enqueue
time, so the resulting expiry time can be different for timers
which are queued in different jiffies.
- Timers which are queued into the secondary array end up after a
later queued timer which was queued into the primary array due to
cascading.
- Timers can end up on different cpus due to the NOHZ target moving
around. Obviously there is no guarantee of expiry ordering between
cpus.
So anything which relies on FIFO behaviour of the timer wheel is
broken already.
This is a preparatory patch for converting the timer wheel to hlist,
which reduces the memory footprint of the wheel by 50%.
It's a separate patch so any (unlikely) regression caused by
this can be identified clearly.
Thomas Gleixner [Tue, 26 May 2015 22:50:24 +0000 (22:50 +0000)]
timers: Sanitize catchup_timer_jiffies() usage
catchup_timer_jiffies() has been applied blindly to several functions
without looking for possible better ways to do it.
1) internal_add_timer()
Move the update to base->all_timers before we actually insert the
timer into the wheel.
2) detach_if_pending()
Again, the update to base->all_timers allows us to explicitly do
the timer_jiffies update in place, if this was the last timer which
got removed.
3) __run_timers()
We only check on entry, which is silly, because base->timer_jiffies
can be behind - especially on NOHZ kernels - and if there is a
single deferrable timer somewhere between base->timer_jiffies and
jiffies we expire it and then loop until base->timer_jiffies ==
jiffies.
Boris Brezillon [Fri, 27 Mar 2015 22:53:15 +0000 (23:53 +0100)]
clk: at91: pll: fix input range validity check
The PLL imposes a certain input range to work correctly, but it appears that
this input range does not apply to the input (parent) clock itself but to the
input clock after it has passed through the PLL divisor.
Fix the implementation accordingly.
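A hedged sketch of the corrected check: the range is validated against the rate seen by the PLL after the divisor, not against the raw parent rate (names are illustrative):

    #include <linux/types.h>

    /* The input constraint applies to parent_rate / div, not to parent_rate. */
    static bool pll_input_ok_sketch(unsigned long parent_rate, unsigned int div,
                                    unsigned long input_min,
                                    unsigned long input_max)
    {
        unsigned long pll_input = parent_rate / div;

        return pll_input >= input_min && pll_input <= input_max;
    }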
Wanpeng Li [Wed, 13 May 2015 06:01:06 +0000 (14:01 +0800)]
sched: Remove superfluous resetting of the p->dl_throttled flag
Resetting the p->dl_throttled flag in rt_mutex_setprio() (for a task that is going
to be boosted) is superfluous, as the natural place to do so is in
replenish_dl_entity().
If the task was on the runqueue and is boosted by a DL task, it will be
enqueued back with the ENQUEUE_REPLENISH flag set, which guarantees that
dl_throttled is reset in replenish_dl_entity().
This patch drops the resetting of the throttled status in rt_mutex_setprio().
Wanpeng Li [Wed, 13 May 2015 06:01:03 +0000 (14:01 +0800)]
sched/deadline: Reduce rq lock contention by eliminating locking of non-feasible target
This patch adds a check that prevents futile attempts to move DL tasks
to a CPU with active tasks of equal or earlier deadline. This is the same
behavior as commit 80e3d87b2c55 ("sched/rt: Reduce rq lock contention
by eliminating locking of non-feasible target") introduced for the rt class.
Wanpeng Li [Wed, 13 May 2015 06:01:01 +0000 (14:01 +0800)]
sched/deadline: Optimize pull_dl_task()
pull_dl_task() uses pick_next_earliest_dl_task() to select a migration
candidate; this is sub-optimal since the next earliest task -- as per
the regular runqueue -- might not be migratable at all. This could
result in iterating the entire runqueue looking for a task.
Instead, iterate the pushable queue -- this queue only contains tasks
that have at least 2 cpus set in their cpus_allowed mask.
sched/preempt: Fix preempt notifiers documentation about hlist_del() within unsafe iteration
preempt_notifier_unregister() documents:
"This is safe to call from within a preemption notifier."
However, both fire_sched_in_preempt_notifiers() and
fire_sched_out_preempt_notifiers() are using hlist_for_each_entry(),
which is not safe against entry removal during iteration.
Inspection of the KVM code does not reveal any use of
preempt_notifier_unregister() within the preempt notifiers.
Peter Zijlstra [Fri, 5 Jun 2015 15:30:23 +0000 (17:30 +0200)]
sched/stop_machine: Fix deadlock between multiple stop_two_cpus()
Jiri reported a machine stuck in multi_cpu_stop() with
migrate_swap_stop() as the stop function and with the following src,dst cpu
pairs: {11, 4} {13, 11} { 4, 13}
4 11 13
cpuM: queue(4 ,13)
*Ma
cpuN: queue(13,11)
*N Na
*M Mb
cpuO: queue(11, 4)
*O Oa
*Nb
*Ob
Where *X denotes the cpu running the queueing of cpu-X and X[ab] denotes
the first/second queued work.
You'll observe that the work at the top of the queue for CPUs 4, 11 and 13
comes from CPUs M, O and N respectively. IOW, deadlock.
Do away with the queueing trickery and introduce lg_double_lock() to
lock both CPUs and fully serialize the stop_two_cpus() callers instead
of the partial (and buggy) serialization we have now.
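The serialization idea, reduced to a generic hedged sketch; this is not the lglock-based code the patch introduces, it only illustrates why taking the two per-CPU locks in a fixed order avoids the ABBA deadlock (lock initialization and unlocking are omitted):

    #include <linux/kernel.h>
    #include <linux/percpu.h>
    #include <linux/spinlock.h>

    static DEFINE_PER_CPU(spinlock_t, stopper_lock_sketch);

    /*
     * Always take the lower-numbered CPU's lock first so two concurrent
     * callers can never hold the locks in opposite order (no ABBA).
     */
    static void double_lock_sketch(int cpu1, int cpu2)
    {
        if (cpu2 < cpu1)
            swap(cpu1, cpu2);

        spin_lock(per_cpu_ptr(&stopper_lock_sketch, cpu1));
        spin_lock_nested(per_cpu_ptr(&stopper_lock_sketch, cpu2),
                         SINGLE_DEPTH_NESTING);
    }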
sched/debug: Add sum_sleep_runtime to /proc/<pid>/sched
When CONFIG_SCHEDSTATS is enabled, /proc/<pid>/sched prints almost all
sched statistics except sum_sleep_runtime. Since sum_sleep_runtime is
useful information to collect, add it to /proc/<pid>/sched.
George Beshers [Thu, 18 Jun 2015 15:25:13 +0000 (10:25 -0500)]
locking/lockdep: Remove hard coded array size dependency
An apparent oversight left a hardcoded '4' in place when
LOCKSTAT_POINTS was introduced.
The contention_point[] and contending_point[] arrays in the
structs lock_class and lock_class_stats need to be the same
size for the loops in lock_stats() to be correct.
This patch allows LOCKSTAT_POINTS to be changed without
affecting the correctness of the code.
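The relationship the fix restores, sketched in simplified form (not the real lockdep structures):

    /*
     * Simplified: both structs must use the same bound so the accumulation
     * loops in lock_stats() never run past either array.
     */
    #define LOCKSTAT_POINTS_SKETCH 4

    struct lock_class_sketch {
        unsigned long contention_point[LOCKSTAT_POINTS_SKETCH];
        unsigned long contending_point[LOCKSTAT_POINTS_SKETCH];
    };

    struct lock_class_stats_sketch {
        unsigned long contention_point[LOCKSTAT_POINTS_SKETCH];  /* was a hardcoded [4] */
        unsigned long contending_point[LOCKSTAT_POINTS_SKETCH];
    };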
Waiman Long [Tue, 9 Jun 2015 15:19:13 +0000 (11:19 -0400)]
locking/qrwlock: Don't contend with readers when setting _QW_WAITING
The current cmpxchg() loop that sets the _QW_WAITING flag for writers
in queue_write_lock_slowpath() will contend with incoming readers,
possibly causing extra, wasteful cmpxchg() operations. This
patch changes the code to do a byte cmpxchg() to eliminate contention
with new readers.
A multithreaded microbenchmark running a 5M read_lock/write_lock loop
on an 8-socket 80-core Westmere-EX machine running a 4.0 based kernel
with the qspinlock patch has the following execution times (in ms)
with and without the patch:
With small number of contending threads, this patch can improve
locking performance by up to 10%. With more contending threads,
however, the gain diminishes.
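A hedged sketch of the byte-wide update (simplified overlay, illustrative names and value; the real code lives in queue_write_lock_slowpath() and depends on the architecture supporting byte-wide cmpxchg):

    #include <linux/atomic.h>
    #include <linux/compiler.h>
    #include <linux/types.h>
    #include <asm/processor.h>

    /* Simplified overlay of the lock word (little-endian layout assumed). */
    struct qrwlock_sketch {
        union {
            atomic_t cnts;
            struct {
                u8 wmode;     /* writer wait/lock byte */
                u8 rcnts[3];  /* reader counts */
            };
        };
    };

    #define SKETCH_QW_WAITING 1

    /*
     * cmpxchg() only the writer byte: reader-count updates in the other
     * bytes of the word can no longer make the cmpxchg fail.
     */
    static void set_qw_waiting_sketch(struct qrwlock_sketch *lock)
    {
        while (READ_ONCE(lock->wmode) ||
               cmpxchg(&lock->wmode, 0, SKETCH_QW_WAITING) != 0)
            cpu_relax();
    }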
Palik, Imre [Mon, 8 Jun 2015 12:46:49 +0000 (14:46 +0200)]
perf/x86: Honor the architectural performance monitoring version
Architectural performance monitoring, version 1, doesn't support fixed counters.
Currently, even if a hypervisor advertises support for architectural
performance monitoring version 1, perf may still try to use the fixed
counters, as the constraints are set up based on the CPU model.
This patch ensures that perf honors the architectural performance monitoring
version returned by CPUID, and it only uses the fixed counters for version 2
and above.
(Some of the ideas in this patch came from Peter Zijlstra.)
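The gist of the check as a hedged sketch, reduced to the version test (the real change lives in the x86 perf init code):

    #include <linux/printk.h>

    /* Only use fixed counters when CPUID reports arch perfmon version 2+. */
    static void init_fixed_counters_sketch(int version, int num_fixed_from_cpuid)
    {
        int num_counters_fixed = 0;

        if (version > 1)
            num_counters_fixed = num_fixed_from_cpuid;

        pr_info("arch perfmon v%d: using %d fixed counters\n",
                version, num_counters_fixed);
    }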
Intel PT is a separate PMU and it is not using any of the x86_pmu
code paths, which means in particular that the active_events counter
remains intact when new PT events are created.
However, PT uses the generic x86_pmu PMI handler for its PMI handling needs.
The problem here is that the latter checks active_events and, in case it
is zero, exits without calling the actual x86_pmu.handle_irq(), which
results in unknown NMI errors and massive data loss for PT.
The effect is not visible if there are other perf events in the system
at the same time that keep the active_events counter non-zero, for instance
if the NMI watchdog is running, so one needs to disable it to reproduce
the problem.
At the same time, besides doing what its name suggests, the active_events
counter also implicitly serves as a PMC hardware and DS area reference
counter.
This patch adds a separate reference counter for the PMC hardware, leaving
active_events to actually count events, and makes sure it also counts
PT and BTS events.
perf/x86/intel/bts: Fix DS area sharing with x86_pmu events
Currently, the intel_bts driver relies on the DS area allocated by the x86_pmu
code in its event_init() path, which is a bug: creating a BTS event while
no x86_pmu events are present results in a NULL pointer dereference.
The same DS area is also used by PEBS sampling, which makes it quite a bit
trickier to have a separate one for intel_bts' purposes.
This patch makes the intel_bts driver use the same DS allocation and
reference counting code as x86_pmu to make sure it is always present
when either intel_bts or x86_pmu needs it.
Andi Kleen [Thu, 11 Jun 2015 20:52:22 +0000 (13:52 -0700)]
perf/x86: Add more Broadwell model numbers
This patch adds additional Broadwell model numbers to perf:
support for Broadwell with Iris Pro (Intel Core i7-57xxC)
and for Broadwell Server Xeon.
Michael Ellerman [Fri, 19 Jun 2015 07:23:48 +0000 (17:23 +1000)]
Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/scottwood/linux into next
Freescale updates from Scott:
"Highlights include more 8xx optimizations, an e6500 hugetlb optimization,
QMan device tree nodes, t1024/t1023 support, and various fixes and
cleanup."
Michael Neuling [Tue, 16 Jun 2015 06:45:44 +0000 (16:45 +1000)]
cxl: Add CXL_KERNEL_API config option
Add the CXL_KERNEL_API config option so that drivers which depend on this new
functionality won't be enabled until it is available.
This is useful for merging the cxlflash driver which comes in via the SCSI
tree. The cxlflash driver can depend on CXL_KERNEL_API, hence it won't be
enabled in the SCSI tree until this new config option is merged via the powerpc
tree. Hence all trees will be bisectable at all times.
powerpc/powernv: Fix wrong IOMMU table in pnv_ioda_setup_bus_dma()
When pnv_pci_ioda_fixup() is called during PHB fixup time, each PE in
the sorted list of PEs (phb::pe_dma_list) is iterated to setup the PE's
DMA32 space by pnv_ioda_setup_bus_dma() if the PE's DMA32 weight is bigger
than zero. The function also assigns the PE's DMA32 IOMMU table to all
subordinate PCI devices of the PE's primary bus. This causes the PCI
devices in the child PEs, which don't have a DMA weight, to receive the
wrong IOMMU table and thus the wrong IOMMU group.
The patch fixes the above issue by checking the PE's coverage more carefully
and not assigning the IOMMU table to PCI devices which belong to the child PEs.
The problem was found on Firestone platform initially.
The current swap encoding in the PTE can't support large PFNs
above 4TB. Change the swap encoding such that we put
the swap type in the PTE bits. Also add build checks
to make sure we don't overlap with HPTEFLAGS.
Sam bobroff [Fri, 12 Jun 2015 01:06:32 +0000 (11:06 +1000)]
powerpc/tm: Abort syscalls in active transactions
This patch changes the syscall handler to doom (tabort) active
transactions when a syscall is made and to return very early, without
performing the syscall and keeping side effects to a minimum (no CPU
accounting or system call tracing is performed). Also included is a
new HWCAP2 bit, PPC_FEATURE2_HTM_NOSC, to indicate this
behaviour to userspace.
Currently, the system call instruction automatically suspends an
active transaction which causes side effects to persist when an active
transaction fails.
This does change the kernel's behaviour, but in a way that was
documented as unsupported. It doesn't reduce functionality as
syscalls will still be performed after tsuspend; it just requires that
the transaction be explicitly suspended. It also provides a
consistent interface and makes the behaviour of user code
substantially the same across powerpc and platforms that do not
support suspended transactions (e.g. x86 and s390).
Performance measurements using
http://ozlabs.org/~anton/junkcode/null_syscall.c indicate the cost of
a normal (non-aborted) system call increases by about 0.25%.
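How userspace can detect the new behaviour, as a hedged example; the bit name comes from the changelog, its value from the powerpc uapi headers, and glibc 2.18+ is assumed for AT_HWCAP2:

    /* Userspace example: detect the new HWCAP2 bit on powerpc. */
    #include <stdio.h>
    #include <sys/auxv.h>
    #include <asm/cputable.h>   /* PPC_FEATURE2_* bits */

    int main(void)
    {
        unsigned long hwcap2 = getauxval(AT_HWCAP2);

        if (hwcap2 & PPC_FEATURE2_HTM_NOSC)
            printf("syscalls abort active HTM transactions\n");
        else
            printf("kernel predates PPC_FEATURE2_HTM_NOSC\n");

        return 0;
    }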
Update the "IBM Power in-Nest Crypto Acceleration" and
"IBM Power 842 compression accelerator" sections to specify the correct
files.
The "IBM Power in-Nest Crypto Acceleration" was originally the only
NX driver, and so its section listed all drivers/crypto/nx/ files,
but now there is also the 842 driver which has its own section. This
lists explicitly what files are owned by the Crypto driver and which
files are owned by the 842 compression driver.
Dan Streetman [Thu, 18 Jun 2015 16:05:30 +0000 (12:05 -0400)]
crypto: nx - add LE support to pSeries platform driver
Add support to the nx-842-pseries.c driver for running in little endian
mode.
The pSeries platform NX 842 driver currently only works in big endian mode.
This adds cpu_to_be*() and be*_to_cpu() calls in the appropriate places so it
works in LE mode as well.
Herbert Xu [Thu, 18 Jun 2015 06:25:55 +0000 (14:25 +0800)]
crypto: caam - Reintroduce DESC_MAX_USED_BYTES
I incorrectly removed DESC_MAX_USED_BYTES when enlarging the size
of the shared descriptor buffers, thus making it four times larger
than what is necessary. This patch restores the division by four
calculation.
Herbert Xu [Thu, 18 Jun 2015 06:00:49 +0000 (14:00 +0800)]
crypto: aead - Fix aead_instance struct size
The struct aead_instance is meant to extend struct crypto_instance
by incorporating the extra members of struct aead_alg. However,
the current layout, which is copied from shash/ahash, does not specify
the struct fully; in particular, only aead_alg is present.
For shash/ahash this works because users there add extra headroom
to sizeof(struct crypto_instance) when allocating the instance.
Unfortunately for aead, this bit was lost when the new aead_instance
was added.
Rather than fixing it like shash/ahash, this patch simply expands
struct aead_instance to contain what is supposed to be there, i.e.,
it adds struct crypto_instance.
In order to not break existing AEAD users, this is done through an
anonymous union.
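A hedged sketch of the anonymous-union shape being described (simplified; it leans on the crypto headers for struct aead_alg and struct crypto_instance and is not the exact upstream definition):

    #include <linux/stddef.h>
    #include <crypto/aead.h>     /* struct aead_alg */
    #include <crypto/algapi.h>   /* struct crypto_instance */

    /*
     * Sketch: an aead_instance can be viewed either as the aead_alg it wraps
     * or as a plain crypto_instance, with 'base' padded so it overlays
     * aead_alg's own embedded crypto_alg.
     */
    struct aead_instance_sketch {
        union {
            struct {
                char head[offsetof(struct aead_alg, base)];
                struct crypto_instance base;
            } s;
            struct aead_alg alg;
        };
    };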
Herbert Xu [Thu, 18 Jun 2015 06:00:48 +0000 (14:00 +0800)]
crypto: api - Add CRYPTO_MINALIGN_ATTR to struct crypto_alg
The struct crypto_alg is embedded into various type-specific structs
such as aead_alg. This is then used as part of instances such as
struct aead_instance. It is also embedded into the generic struct
crypto_instance. In order to ensure that struct aead_instance can
be converted to struct crypto_instance when necessary, we need to
ensure that crypto_alg is aligned properly.
This patch adds an alignment attribute to struct crypto_alg to
ensure this.
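The shape of the change, sketched with a trimmed-down struct (the real definition is in include/linux/crypto.h):

    #include <linux/crypto.h>   /* CRYPTO_MINALIGN_ATTR */
    #include <linux/types.h>

    /*
     * Sketch with most members elided: forcing the minimum crypto alignment
     * on crypto_alg itself keeps every struct that embeds it (aead_alg,
     * crypto_instance, ...) convertible between the container types.
     */
    struct crypto_alg_sketch {
        u32 cra_flags;
        unsigned int cra_blocksize;
        /* ... remaining members elided ... */
    } CRYPTO_MINALIGN_ATTR;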
Linus Torvalds [Fri, 19 Jun 2015 03:02:27 +0000 (17:02 -1000)]
Merge branch 'i2c/for-current' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux
Pull i2c documentation fix from Wolfram Sang:
"Here is a small documentation fix for I2C.
We already had a user who unsuccessfully tried to get the new slave
framework running with the currently broken example. So, before this
happens again, I'd like to have this how-to-use section fixed for 4.1
already. So that no more hacking time is wasted"
* 'i2c/for-current' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux:
i2c: slave: fix the example how to instantiate from userspace
Dave Airlie [Fri, 19 Jun 2015 01:58:39 +0000 (11:58 +1000)]
Merge tag 'drm-intel-fixes-2015-06-18' of git://anongit.freedesktop.org/drm-intel into drm-fixes
one fix, one revert
* tag 'drm-intel-fixes-2015-06-18' of git://anongit.freedesktop.org/drm-intel:
Revert "drm/i915: Don't skip request retirement if the active list is empty"
drm/i915: Always reset vma->ggtt_view.pages cache on unbinding
Dave Airlie [Fri, 19 Jun 2015 01:55:29 +0000 (11:55 +1000)]
Merge branch 'drm-fixes-4.1' of git://people.freedesktop.org/~deathsimple/linux into drm-fixes
two radeon fixes
one MST fix,
one query addition, destined for stable, and to fix a regression
* 'drm-fixes-4.1' of git://people.freedesktop.org/~deathsimple/linux:
drm/radeon: don't probe MST on hw we don't support it on
drm/radeon: Add RADEON_INFO_VA_UNMAP_WORKING query
Merge branches 'pm-clk', 'pm-domains' and 'powercap'
* pm-clk:
PM / clk: Print acquired clock name in addition to con_id
PM / clk: Fix clock error check in __pm_clk_add()
drivers: sh: remove boilerplate code and use USE_PM_CLK_RUNTIME_OPS
arm: davinci: remove boilerplate code and use USE_PM_CLK_RUNTIME_OPS
arm: omap1: remove boilerplate code and use USE_PM_CLK_RUNTIME_OPS
arm: keystone: remove boilerplate code and use USE_PM_CLK_RUNTIME_OPS
PM / clock_ops: Provide default runtime ops to users
* pm-domains:
PM / Domains: Skip timings during syscore suspend/resume
* powercap:
powercap / RAPL: Support Knights Landing
powercap / RAPL: Floor frequency setting in Atom SoC
* pm-sleep:
PM / sleep: trace_device_pm_callback coverage in dpm_prepare/complete
PM / wakeup: add a dummy wakeup_source to record statistics
PM / sleep: Make suspend-to-idle-specific code depend on CONFIG_SUSPEND
PM / sleep: Return -EBUSY from suspend_enter() on wakeup detection
PM / tick: Add tracepoints for suspend-to-idle diagnostics
PM / sleep: Fix symbol name in a comment in kernel/power/main.c
leds / PM: fix hibernation on arm when gpio-led used with CPU led trigger
ARM: omap-device: use SET_NOIRQ_SYSTEM_SLEEP_PM_OPS
bus: omap_l3_noc: add missed callbacks for suspend-to-disk
PM / sleep: Add macro to define common noirq system PM callbacks
PM / sleep: Refine diagnostic messages in enter_state()
PM / wakeup: validate wakeup source before activating it.
* pm-runtime:
PM / Runtime: Update last_busy in rpm_resume
PM / runtime: add note about re-calling in during device probe()
* pm-cpufreq: (37 commits)
cpufreq: dt: allow driver to boot automatically
intel_pstate: Fix overflow in busy_scaled due to long delay
cpufreq: qoriq: optimize the CPU frequency switching time
cpufreq: gx-suspmod: Fix two typos in two comments
cpufreq: nforce2: Fix typo in comment to function nforce2_init()
cpufreq: governor: Serialize governor callbacks
cpufreq: governor: split cpufreq_governor_dbs()
cpufreq: governor: register notifier from cs_init()
cpufreq: Remove cpufreq_update_policy()
cpufreq: Restart governor as soon as possible
cpufreq: Call cpufreq_policy_put_kobj() from cpufreq_policy_free()
cpufreq: Initialize policy->kobj while allocating policy
cpufreq: Stop migrating sysfs files on hotplug
cpufreq: Don't allow updating inactive policies from sysfs
intel_pstate: Force setting target pstate when required
intel_pstate: change some inconsistent debug information
cpufreq: Track cpu managing sysfs kobjects separately
cpufreq: Fix for typos in two comments
cpufreq: Mark policy->governor = NULL for inactive policies
cpufreq: Manage governor usage history with 'policy->last_governor'
...
* pm-cpuidle:
cpuidle: Do not use CPUIDLE_DRIVER_STATE_START in cpuidle.c
cpuidle: Select a different state on tick_broadcast_enter() failures
sched / idle: Call default_idle_call() from cpuidle_enter_state()
sched / idle: Call idle_set_state() from cpuidle_enter_state()
cpuidle: Fix the kerneldoc comment for cpuidle_enter_state()
sched / idle: Eliminate the "reflect" check from cpuidle_idle_call()
cpuidle: Check the sign of index in cpuidle_reflect()
sched / idle: Move the default idle call code to a separate function
* acpi-video: (38 commits)
ACPI / video: Make acpi_video_unregister_backlight() private
acpi-video-detect: Remove old API
toshiba-acpi: Port to new backlight interface selection API
thinkpad-acpi: Port to new backlight interface selection API
sony-laptop: Port to new backlight interface selection API
samsung-laptop: Port to new backlight interface selection API
msi-wmi: Port to new backlight interface selection API
msi-laptop: Port to new backlight interface selection API
intel-oaktrail: Port to new backlight interface selection API
ideapad-laptop: Port to new backlight interface selection API
fujitsu-laptop: Port to new backlight interface selection API
eeepc-laptop: Port to new backlight interface selection API
dell-wmi: Port to new backlight interface selection API
dell-laptop: Port to new backlight interface selection API
compal-laptop: Port to new backlight interface selection API
asus-wmi: Port to new backlight interface selection API
asus-laptop: Port to new backlight interface selection API
apple-gmux: Port to new backlight interface selection API
acer-wmi: Port to new backlight interface selection API
ACPI / video: Fix acpi_video _register vs _unregister_backlight race
...