Jon Paul Maloy [Thu, 5 Feb 2015 13:36:42 +0000 (08:36 -0500)]
tipc: simplify connection abort notifications when links break
The new input message queue in struct tipc_link can be used for
delivering connection abort messages to subscribing sockets. This
makes it possible to simplify the code for such cases.
This commit removes the temporary list in tipc_node_unlock()
used for transforming abort subscriptions to messages. Instead, the
abort messages are now created at the moment of lost contact, and
then added to the last failed link's generic input queue for delivery
to the sockets concerned.
Jon Paul Maloy [Thu, 5 Feb 2015 13:36:41 +0000 (08:36 -0500)]
tipc: resolve race problem at unicast message reception
TIPC handles message cardinality and sequencing at the link layer,
before passing messages upwards to the destination sockets. During the
upcall from link to socket no locks are held. It is therefore possible,
and we see it happen occasionally, that messages arriving in different
threads and delivered in sequence still bypass each other before they
reach the destination socket. This must not happen, since it violates
the sequentiality guarantee.
We solve this by adding a new input buffer queue to the link structure.
Arriving messages are added safely to the tail of that queue by the
link, while the head of the queue is consumed, also safely, by the
receiving socket. Sequentiality is secured per socket by only allowing
buffers to be dequeued inside the socket lock. Since there may be multiple
simultaneous readers of the queue, we use a 'filter' parameter to reduce
the risk that they peek the same buffer from the queue, hence also
reducing the risk of contention on the receiving socket locks.
This solves the sequentiality problem, and seems to cause no measurable
performance degradation.
A nice side effect of this change is that lock handling in the functions
tipc_rcv() and tipc_bcast_rcv() now becomes uniform, something that
will enable future simplifications of those functions.
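A minimal sketch of the pattern described above (not the actual TIPC code; the helper names are illustrative and the 'filter' logic is omitted): the link appends under the queue's own lock, and the socket drains only while holding the socket lock, which preserves per-socket ordering.

#include <linux/skbuff.h>
#include <net/sock.h>

/* Link side: append arriving buffers to the shared input queue.
 * skb_queue_tail() takes the queue spinlock, so concurrent producers
 * are safe. */
static void example_link_input(struct sk_buff_head *inputq,
                               struct sk_buff *skb)
{
    skb_queue_tail(inputq, skb);
}

/* Socket side: dequeue only inside lock_sock(), so buffers reach the
 * receive queue strictly in the order the link delivered them. */
static void example_sock_drain(struct sock *sk, struct sk_buff_head *inputq)
{
    struct sk_buff *skb;

    lock_sock(sk);
    while ((skb = skb_dequeue(inputq)) != NULL)
        skb_queue_tail(&sk->sk_receive_queue, skb); /* stand-in for delivery */
    release_sock(sk);
}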
Jon Paul Maloy [Thu, 5 Feb 2015 13:36:40 +0000 (08:36 -0500)]
tipc: use existing sk_write_queue for outgoing packet chain
The list for outgoing traffic buffers from a socket is currently
allocated on the stack. This forces us to initialize the queue for
each sent message, something costing extra CPU cycles in the most
critical data path. Later in this series we will introduce a new
safe input buffer queue, something that would force us to initialize
even the spinlock of the outgoing queue. A closer analysis reveals
that the queue is always filled and emptied within the same lock_sock()
session. It is therefore safe to use a queue aggregated in the socket
itself for this purpose. Since there already exists a queue for this
in struct sock, sk_write_queue, we introduce use of that queue in
this commit.
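An illustrative sketch of the idea (not TIPC's actual send path): the chain is built on sk->sk_write_queue, which is initialized once with the socket, and both fill and drain happen inside one lock_sock() section.

#include <linux/skbuff.h>
#include <net/sock.h>

static void example_send(struct sock *sk, struct sk_buff *pkt_chain)
{
    struct sk_buff *skb;

    lock_sock(sk);
    /* Build the outgoing chain; no per-message queue init needed. */
    skb_queue_tail(&sk->sk_write_queue, pkt_chain);

    /* Drain it within the same lock_sock() session, so the unlocked
     * __skb_dequeue() variant is sufficient. */
    while ((skb = __skb_dequeue(&sk->sk_write_queue)) != NULL)
        kfree_skb(skb); /* stand-in for the real link transmit */
    release_sock(sk);
}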
Jon Paul Maloy [Thu, 5 Feb 2015 13:36:39 +0000 (08:36 -0500)]
tipc: split up function tipc_msg_eval()
The function tipc_msg_eval() is in reality doing two related, but
different tasks. First it tries to find a new destination for named
messages, in case there was no first lookup, or if the first lookup
failed. Second, it does what its name suggests, evaluating the validity
of the message and its destination, and returning an appropriate error
code depending on the result.
This is confusing, and in this commit we choose to break it up into two
functions. A new function, tipc_msg_lookup_dest(), first attempts to find
a new destination, if the message is of the right type. If this lookup
fails, or if the message should not be subject to a second lookup, the
already existing tipc_msg_reverse() is called. This function prepares
the message for rejection, if applicable.
Jon Paul Maloy [Thu, 5 Feb 2015 13:36:38 +0000 (08:36 -0500)]
tipc: enqueue arrived buffers in socket in separate function
The code for enqueuing arriving buffers in the function tipc_sk_rcv()
contains long code lines and currently goes to two indentation levels.
As a cosmetic preparation for the next commits, we break it out into
a separate function.
Jon Paul Maloy [Thu, 5 Feb 2015 13:36:37 +0000 (08:36 -0500)]
tipc: simplify message forwarding and rejection in socket layer
Despite recent improvements, the handling of error codes and return
values at reception of messages in the socket layer is still confusing.
In this commit, we try to make it more comprehensible. First, we
distinguish between the return values coming from the functions called
by tipc_sk_rcv() (those are TIPC-specific error codes) and the
return values returned by tipc_sk_rcv() itself. Second, we don't
use the returned TIPC error code as an indication of whether a buffer
should be forwarded/rejected or not; instead we use the buffer pointer
passed along with filter_msg(). This separation is necessary because
we sometimes want to forward messages even when there is no error
(i.e., protocol messages and data messages for which a secondary
destination lookup succeeded).
Jon Paul Maloy [Thu, 5 Feb 2015 13:36:36 +0000 (08:36 -0500)]
tipc: reduce usage of context info in socket and link
The most common usage of namespace information is when we fetch the
own node address from the net structure. This leads to a lot of
passing around of a parameter of type 'struct net *' between
functions just to make them able to obtain this address.
However, in many cases this is unnecessary. The own node address
is readily available as a member of both struct tipc_sock and
tipc_link, and can be fetched from there instead.
The fact that the vast majority of functions in socket.c and link.c
anyway are maintaining a pointer to their respective base structures
makes this option even more compelling.
In this commit, we introduce the inline functions tsk_own_node()
and link_own_node() to make it easy for functions to fetch the node
address from those structs instead of having to pass along and
dereference the namespace struct.
In particular, we make calls to the msg_xx() functions in msg.{h,c}
context independent by directly passing them the own node address
as parameter when needed. Those functions should be regarded as
leaves in the code dependency tree, and it is hence desirable to
keep them namespace unaware.
Apart from a potential positive effect on cache behavior, these
changes make it easier to introduce the changes that will follow
later in this series.
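Illustrative shapes of such helpers (the real TIPC struct layouts differ; the field names here are assumptions): the node address is cached in the base structure and returned directly, with no struct net dereference.

#include <linux/types.h>

struct example_tipc_sock {
    u32 own_node;
    /* ... */
};

struct example_tipc_link {
    u32 own_node;
    /* ... */
};

static inline u32 tsk_own_node(struct example_tipc_sock *tsk)
{
    return tsk->own_node;
}

static inline u32 link_own_node(struct example_tipc_link *l)
{
    return l->own_node;
}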
Takashi Iwai [Thu, 5 Feb 2015 10:15:24 +0000 (11:15 +0100)]
hso: Use static attribute groups for sysfs entry
Pass the static attribute groups and the driver data via
tty_port_register_device_attr() instead of manual device_create_file()
and device_remove_file() calls.
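A hedged sketch of the pattern (the attribute shown is illustrative, not necessarily the driver's real sysfs file): declare static attributes and groups, then hand the groups to tty_port_register_device_attr() at registration time.

#include <linux/device.h>
#include <linux/kernel.h>
#include <linux/sysfs.h>
#include <linux/tty.h>

static ssize_t hsotype_show(struct device *dev,
                            struct device_attribute *attr, char *buf)
{
    return sprintf(buf, "%s\n", "example");
}
static DEVICE_ATTR_RO(hsotype);

static struct attribute *hso_serial_dev_attrs[] = {
    &dev_attr_hsotype.attr,
    NULL
};
ATTRIBUTE_GROUPS(hso_serial_dev);

static struct device *example_register_port(struct tty_port *port,
                                            struct tty_driver *drv,
                                            unsigned int minor,
                                            struct device *parent,
                                            void *drvdata)
{
    /* Groups and driver data are registered with the device itself,
     * replacing manual device_create_file()/device_remove_file() calls. */
    return tty_port_register_device_attr(port, drv, minor, parent,
                                         drvdata, hso_serial_dev_groups);
}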
Sabrina Dubroca [Wed, 4 Feb 2015 22:08:50 +0000 (23:08 +0100)]
pktgen: fix UDP checksum computation
This patch fixes two issues in UDP checksum computation in pktgen.
First, the pseudo-header must use the source and destination IP
addresses; currently, the ports are used instead for IPv4.
Second, the UDP checksum covers both header and data. So we need to
generate the data earlier (move pktgen_finalize_skb up), and compute
the checksum for UDP header + data.
Fixes: c26bf4a51308c ("pktgen: Add UDPCSUM flag to support UDP checksums")
Signed-off-by: Sabrina Dubroca <[email protected]>
Acked-by: Thomas Graf <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
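An illustrative helper showing the corrected computation (not pktgen's literal code): checksum the UDP header plus the already-generated payload, then fold in the pseudo-header built from the IP addresses rather than the ports.

#include <linux/in.h>
#include <linux/ip.h>
#include <linux/skbuff.h>
#include <linux/udp.h>
#include <net/checksum.h>

static void example_udp4_checksum(struct sk_buff *skb, struct iphdr *iph,
                                  struct udphdr *udph, int udplen)
{
    /* Covers UDP header + data, so the payload must exist already. */
    __wsum csum = skb_checksum(skb, skb_transport_offset(skb), udplen, 0);

    /* Pseudo-header: source/destination IP addresses, length, protocol. */
    udph->check = csum_tcpudp_magic(iph->saddr, iph->daddr,
                                    udplen, IPPROTO_UDP, csum);
    if (udph->check == 0)
        udph->check = CSUM_MANGLED_0;
}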
RFC 4429 ("Optimistic DAD") states that optimistic addresses
should be treated as deprecated addresses. From section 2.1:
Unless noted otherwise, components of the IPv6 protocol stack
should treat addresses in the Optimistic state equivalently to
those in the Deprecated state, indicating that the address is
available for use but should not be used if another suitable
address is available.
Optimistic addresses are indeed avoided when other addresses are
available (i.e. at source address selection time), but they have
not heretofore been available for things like explicit bind() and
sendmsg() with struct in6_pktinfo, etc.
This change makes optimistic addresses treated more like
deprecated addresses than tentative ones.
Eric Sandeen [Thu, 5 Feb 2015 22:53:02 +0000 (09:53 +1100)]
xfs: report proper f_files in statfs if we overshoot imaxpct
Normally, a statfs syscall reports m_maxicount as f_files
(total file nodes in file system) because it is supposed
to be the upper limit for dynamically-allocated inodes.
It's possible, however, to overshoot imaxpct / m_maxicount.
If this happens, we should report the actual number of allocated
inodes, which is contained in sb_icount. Add one more adjustment
to the statfs code to make this happen.
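A hedged sketch of the adjustment (field names follow the usual statfs/xfs naming, but this is not the literal patch): f_files is normally capped at m_maxicount, and the new step raises it back to the real allocation if that cap was overshot.

#include <linux/kernel.h>
#include <linux/statfs.h>

static void example_adjust_f_files(struct kstatfs *statp, u64 maxicount,
                                   u64 icount)
{
    /* Usual behaviour: report the dynamic-inode upper limit. */
    if (maxicount)
        statp->f_files = min_t(u64, statp->f_files, maxicount);

    /* New adjustment: never report fewer files than are allocated. */
    statp->f_files = max_t(u64, statp->f_files, icount);
}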
flowcache: Fix kernel panic in flow_cache_flush_task
flow_cache_flush_task references a structure member flow_cache_gc_work
where it should reference flow_cache_flush_work instead.
Kernel panic occurs on kernels using IPsec during XFRM garbage
collection. The garbage collection interval can be shortened using the
following sysctl settings:
The net/core/dev.c conflict was the overlap of one commit marking an
existing function static whilst another was adding a new function.
In the include/linux/if_vlan.h case, the type used for a local
variable was changed in 'net', whereas the function got rewritten
to fix a stacked vlan bug in 'net-next'.
In drivers/vhost/net.c, Al Viro's iov_iter conversions in 'net-next'
overlapped with an endianness fix for VHOST 1.0 in 'net'.
In drivers/net/vxlan.c, vxlan_find_vni() added a 'flags' parameter
in 'net-next' whereas in 'net' there was a bug fix to pass in the
correct network namespace pointer in calls to this function.
Joonsoo Kim [Thu, 5 Feb 2015 20:25:23 +0000 (12:25 -0800)]
mm/debug_pagealloc: fix build failure on ppc and some other archs
Kim Phillips reported the following build failure.
LD init/built-in.o
mm/built-in.o: In function `free_pages_prepare':
mm/page_alloc.c:770: undefined reference to `.kernel_map_pages'
mm/built-in.o: In function `prep_new_page':
mm/page_alloc.c:933: undefined reference to `.kernel_map_pages'
mm/built-in.o: In function `map_pages':
mm/compaction.c:61: undefined reference to `.kernel_map_pages'
make: *** [vmlinux] Error 1
The reason for this problem is that commit 031bc5743f15
("mm/debug-pagealloc: make debug-pagealloc boottime configurable")
forgot to remove the old declaration of kernel_map_pages() for some
architectures. This patch removes them to fix build failure.
Ryusuke Konishi [Thu, 5 Feb 2015 20:25:20 +0000 (12:25 -0800)]
nilfs2: fix deadlock of segment constructor over I_SYNC flag
Nilfs2 eventually hangs in a stress test with the fsstress program. This
issue was caused by the following deadlock over I_SYNC flag between
nilfs_segctor_thread() and writeback_sb_inodes():
nilfs_segctor_thread()
  nilfs_segctor_thread_construct()
    nilfs_segctor_unlock()
      nilfs_dispose_list()
        iput()
          iput_final()
            evict()
              inode_wait_for_writeback()  * wait for I_SYNC flag

writeback_sb_inodes()
    * set I_SYNC flag on inode->i_state
  __writeback_single_inode()
    do_writepages()
      nilfs_writepages()
        nilfs_construct_dsync_segment()
          nilfs_segctor_sync()
            * wait for completion of segment constructor
  inode_sync_complete()
    * clear I_SYNC flag after __writeback_single_inode() completed
writeback_sb_inodes() calls do_writepages() for dirty inodes after
setting I_SYNC flag on inode->i_state. do_writepages() in turn calls
nilfs_writepages(), which can run segment constructor and wait for its
completion. On the other hand, segment constructor calls iput(), which
can call evict() and wait for the I_SYNC flag on
inode_wait_for_writeback().
Since segment constructor doesn't know when I_SYNC will be set, it
cannot know whether iput() will block or not unless inode->i_nlink has a
non-zero count. We can prevent evict() from being called in iput() by
implementing sop->drop_inode(), but it's not preferable to leave inodes
with i_nlink == 0 for long periods because it even defers file
truncation and inode deallocation. So, this instead resolves the
deadlock by calling iput() asynchronously with a workqueue for inodes
with i_nlink == 0.
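A minimal sketch of deferring the final iput() to a workqueue (struct and helper names are illustrative, not nilfs2's): the segment constructor queues the work instead of calling iput() directly, so any wait on I_SYNC happens in worker context.

#include <linux/fs.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct deferred_iput {
    struct inode *inode;
    struct work_struct work;
};

static void deferred_iput_fn(struct work_struct *work)
{
    struct deferred_iput *di = container_of(work, struct deferred_iput, work);

    iput(di->inode);   /* may run evict() and wait on I_SYNC safely here */
    kfree(di);
}

static void example_iput_async(struct inode *inode, struct workqueue_struct *wq)
{
    struct deferred_iput *di = kmalloc(sizeof(*di), GFP_NOFS);

    if (!di) {
        iput(inode);   /* fall back to a synchronous iput */
        return;
    }
    di->inode = inode;
    INIT_WORK(&di->work, deferred_iput_fn);
    queue_work(wq, &di->work);
}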
A BUG_ON can be triggered in mem_cgroup_migrate when shmem wants to replace a swap cache page
because of shmem_should_replace_page (the page is allocated from an
inappropriate zone). shmem_replace_page expects that the oldpage is not
on LRU list and calls mem_cgroup_migrate without lrucare. This is
obviously incorrect because swapcache pages might be on the LRU list
(e.g. swapin readahead page).
Fix this by enabling lrucare for the migration in shmem_replace_page.
Also clarify that lrucare should be used even if one of the pages might
be on LRU list.
The BUG_ON will trigger only when CONFIG_DEBUG_VM is enabled, but even
without that the migration code might leave the old page on an
inappropriate memcg's LRU. This is not that critical, because the page
would get removed with its last reference, but it is still confusing.
Arnd Bergmann [Thu, 5 Feb 2015 20:25:12 +0000 (12:25 -0800)]
mm: export "high_memory" symbol on !MMU
The symbol 'high_memory' is provided on both MMU- and NOMMU-kernels, but
only one of them is exported, which leads to module build errors in
drivers that work fine built-in:
Shiraz Hashim [Thu, 5 Feb 2015 20:25:06 +0000 (12:25 -0800)]
mm: pagewalk: call pte_hole() for VM_PFNMAP during walk_page_range
walk_page_range() silently skips a vma that has VM_PFNMAP set, which leads
to undesirable behaviour at the client end (whoever called walk_page_range).
Userspace applications get wrong data, so the effect ranges from merely
confusing users (if the applications just display the data) to sometimes
killing processes (if the applications act on virtual addresses they
misinterpret because of the wrong data).
For example, for pagemap_read, when no callbacks are called against a
VM_PFNMAP vma, pagemap_read may prepare pagemap data for the next virtual
address range at the wrong index.
Eventually userspace may get wrong pagemap data for a task.
For a VM_PFNMAP-marked vma region, the kernel may report mappings from
subsequent vma regions, and user space in turn may account more pages to
the task than really exist.
In my case I was using procmem and procrank (Android utilities), which use
the pagemap interface to account RSS pages of a task. Due to this bug they
were giving a wrong picture for vmas with VM_PFNMAP set.
Takashi Iwai [Thu, 5 Feb 2015 20:31:19 +0000 (21:31 +0100)]
Merge tag 'asoc-fix-ac97-v3.19-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound into for-linus
ASoC: AC'97 fixes
These are rather too large for this late in the release cycle but
they're clear, well understood and have been tested to fix a regression
which was introduced for v3.19. The details are all in Lars' changelog
and they've been cooking in -next for a while, to a large extent out
of conservatism about the size.
Takashi Iwai [Thu, 5 Feb 2015 20:30:43 +0000 (21:30 +0100)]
Merge tag 'asoc-fix-intel-v3.19-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound into for-linus
ASoC: Fix for Intel firmware name
Another one liner that arrived after the earlier pull request. There's
a trivial conflict with my -next branch, I'll send a pull request with
that resolution and some other new stuff before Monday.
Takashi Iwai [Thu, 5 Feb 2015 20:29:33 +0000 (21:29 +0100)]
Merge tag 'asoc-fix-v3.19-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound into for-linus
ASoC: Fixes for v3.19
A few last minute fixes for v3.19, all driver specific. None of them
stand out particularly - it's all the standard people who are affected
will care stuff.
The Samsung fix is a DT-only fix for the audio controller; it's being
merged via the ASoC tree due to process mess-ups (the submitter sent it
at the end of a tangentially related series rather than separately to the
ARM folks) in order to make sure that it gets to people sooner.
Kevin Strasser [Thu, 5 Feb 2015 20:12:07 +0000 (12:12 -0800)]
ASoC: Intel: fix sst firmware path for cht-bsw-rt5672
All sst firmware is provided under the intel directory of the linux-firmware
tree. By default this directory structure is kept when installing on a target
system. Change the path to expect a default linux-firmware installation.
1) Stretch ACKs can kill performance with Reno and CUBIC congestion
control, largely due to LRO and GRO. Fix from Neal Cardwell.
2) Fix userland breakage because we accidentally emit zero length netlink
messages from the bridging code. From Roopa Prabhu.
3) Carry handling in generic csum_tcpudp_nofold is broken, fix from
Karl Beldan.
4) Remove bogus dev_set_net() calls from CAIF driver, from Nicolas
Dichtel.
5) Make sure PPP deflation never returns a length greater then the
output buffer, otherwise we overflow and trigger skb_over_panic().
Fix from Florian Westphal.
6) COSA driver needs VIRT_TO_BUS Kconfig dependencies, from Arnd
Bergmann.
7) Don't increase route cached MTU on datagram too big ICMPs. From Li
Wei.
8) Fix error path leaks in nf_tables, from Pablo Neira Ayuso.
9) Fix bitmask handling regression in netlink that broke things like
acpi userland tools. From Pablo Neira Ayuso.
10) Wrong header pointer passed to param_type2af() in SCTP code, from
Saran Maruti Ramanara.
11) Stacked vlans not handled correctly by vlan_get_protocol(), from
Toshiaki Makita.
12) Add missing DMA memory barrier to xgene driver, from Iyappan
Subramanian.
13) Fix crash in rate estimators, from Eric Dumazet.
14) We've been adding various workarounds, one after another, for the
change which added the per-net tcp_sock. It was meant to reduce
socket contention but added lots of problems.
Reduce this instead to a proper per-cpu socket and that rids us of
all the daemons.
From Eric Dumazet.
15) Fix memory corruption and OOPS in mlx4 driver, from Jack
Morgenstein.
16) When we disabled UFO in the virtio_net device, it introduced some
serious performance regressions. The original problem was IPv6
fragment ID generation, so fix that properly instead. From Vlad
Yasevich.
17) sr9700 driver build breaks on xtensa because it defines macros with
the same name as those used by the arch code. Use more unique
names. From Chen Gang.
18) Fix endianness in the new virtio 1.0 mode of the vhost net driver, from
Michael S Tsirkin.
19) Several sysctls were setting the maxlen attribute incorrectly, from
Sasha Levin.
20) Don't accept an FQ scheduler quantum of zero, that leads to crashes.
From Kenneth Klette Jonassen.
21) Fix dumping of non-existing actions in the packet scheduler
classifier. From Ignacy Gawędzki.
22) Return the right work_done value when doing TX work in the qlcnic
driver.
23) ip6gre_err accesses the info field with the wrong endianness, from
Sabrina Dubroca.
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (54 commits)
sit: fix some __be16/u16 mismatches
ipv6: fix sparse errors in ip6_make_flowlabel()
net: remove some sparse warnings
flow_keys: n_proto type should be __be16
ip6_gre: fix endianness errors in ip6gre_err
qlcnic: Fix NAPI poll routine for Tx completion
amd-xgbe: Set RSS enablement based on hardware features
amd-xgbe: Adjust for zero-based traffic class count
cls_api.c: Fix dumping of non-existing actions' stats.
pkt_sched: fq: avoid hang when quantum 0
net: rds: use correct size for max unacked packets and bytes
vhost/net: fix up num_buffers endian-ness
gianfar: correct the bad expression while writing bit-pattern
net: usb: sr9700: Use 'SR_' prefix for the common register macros
Revert "drivers/net: Disable UFO through virtio"
Revert "drivers/net, ipv6: Select IPv6 fragment idents for virtio UFO packets"
ipv6: Select fragment id during UFO segmentation if not set.
xen-netback: stop the guest rx thread after a fatal error
net/mlx4_core: Fix kernel Oops (mem corruption) when working with more than 80 VFs
isdn: off by one in connect_res()
...
Linus Torvalds [Thu, 5 Feb 2015 19:17:15 +0000 (11:17 -0800)]
Merge tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi
Pull SCSI fixes from James Bottomley:
"This patch set is fixing two serious problems which have turned up
late in the release cycle.
The first fixes a problem with 4k sector disks where the transfer
length (amount of data sent to the disk) was getting increased every
time the disk was revalidated leading to potential for overflows.
The other is a regression oops fix for some of our last merge window
code"
* tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi:
sd: Fix max transfer length for 4k disks
scsi: fix device handler detach oops
Linus Torvalds [Thu, 5 Feb 2015 19:11:44 +0000 (11:11 -0800)]
Merge branch 'drm-fixes' of git://people.freedesktop.org/~airlied/linux
Pull drm fixes from Dave Airlie:
"Radeon and amdkfd fixes.
Radeon ones mostly for oopses in some test/benchmark functions since the
fencing changes, and one regression fix for old GPUs.
There is one cirrus regression fix: the 32bpp mode broke userspace, so this
hides it behind a module option for the few users who care.
I'm off for a few days, so this is probably the final pull I have, if
I see fixes from Intel I'll forward the pull as I should have email"
* 'drm-fixes' of git://people.freedesktop.org/~airlied/linux:
drm/cirrus: Limit modes depending on bpp option
drm/radeon: fix the crash in test functions
drm/radeon: fix the crash in benchmark functions
drm/radeon: properly set vm fragment size for TN/RL
drm/radeon: don't init gpuvm if accel is disabled (v3)
drm/radeon: fix PLLs on RS880 and older v2
drm/amdkfd: Don't create BUG due to incorrect user parameter
drm/amdkfd: max num of queues can't be 0
drm/amdkfd: Fix bug in accounting of queues
Linus Torvalds [Thu, 5 Feb 2015 19:07:25 +0000 (11:07 -0800)]
Merge tag 'spi-v3.19-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi
Pull spi fixes from Mark Brown:
"A couple of driver specific fixes:
- Disable DMA mode for i.MX6DL chips due to a hardware bug.
- Don't use devm_kzalloc() outside of bind/unbind paths in the
fsl-dspi driver, fixing memory leaks"
* tag 'spi-v3.19-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi:
spi: imx: use pio mode for i.mx6dl
spi: spi-fsl-dspi: Remove usage of devm_kzalloc
Linus Torvalds [Thu, 5 Feb 2015 18:57:29 +0000 (10:57 -0800)]
Merge tag 'pm+acpi-3.19-fin' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull ACPI power management fix from Rafael Wysocki:
"This is a revert of an ACPI Low-power Subsystem (LPSS) driver change
that was supposed to improve power management of the LPSS DMA
controller, but introduced more serious problems.
Since fixing them turns out to be non-trivial, it is better to revert
the commit in question at this point and try to fix the original issue
differently in the next cycle"
* tag 'pm+acpi-3.19-fin' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
Revert "ACPI / LPSS: introduce a 'proxy' device to power on LPSS for DMA"
Kevin Strasser [Wed, 4 Feb 2015 19:35:07 +0000 (11:35 -0800)]
ASoC: Intel: fix sst firmware path
All sst firmware is provided under the intel directory of the linux-firmware
tree. By default this directory structure is kept when installing on a target
system. Change the path to expect a default linux-firmware installation.
Jie Yang [Thu, 5 Feb 2015 14:56:48 +0000 (22:56 +0800)]
ASoC: Intel: add a status for runtime suspend/resume
Runtime suspend/resume is somewhat different from system suspend/resume,
e.g. the codec power supply won't be switched off, codec jack detection
keeps working (to wake up the system on a jack event), snd_soc_suspend/resume
won't be called, etc.
So here we add a platform PM state, HSW_PM_STATE_RTD3, to make the
state handling clearer: when idle, the driver enters this state; to
transfer from HSW_PM_STATE_RTD3 to HSW_PM_STATE_D3 we do those
extra jobs, and vice versa for resuming.
The return type of wait_for_completion_timeout is unsigned long, not int. This
patch uses the return value of wait_for_completion_timeout in the condition
directly rather than adding an additional, appropriately typed variable.
The return type of wait_for_completion_timeout is unsigned long, not int. This
patch uses the return value of wait_for_completion_timeout in the condition
directly rather than assigning it to a variable of an incorrect type.
This patch adds a new regulator driver to support the Max77843
MFD (Multi-Function Device) chip's regulators.
The Max77843 has two voltage regulators for USB safeout.
xenbus: Add proper handling of XS_ERROR from Xenbus for transactions.
If Xenstore sends back a XS_ERROR for TRANSACTION_END, the driver BUGs
because it cannot find the matching transaction in the list. For
TRANSACTION_START, it leaks memory.
Check the message as returned from xenbus_dev_request_and_reply(), and
clean up for TRANSACTION_START or discard the error for
TRANSACTION_END.
Lv Zheng [Thu, 5 Feb 2015 08:27:35 +0000 (16:27 +0800)]
ACPI / EC: Update revision due to raw handler mode.
The bug fixes around GPE races have been done to the EC driver by the
previous commits. This patch increases the revision to 3 to indicate the
behavior differences between the old and the new drivers. The
copyright/authorship notices are also updated.
Lv Zheng [Thu, 5 Feb 2015 08:27:29 +0000 (16:27 +0800)]
ACPI / EC: Reduce ec_poll() by referencing the last register access timestamp.
The timeout in ec_poll() doesn't refer to the time of the last register
access. It can thus win the competition against acpi_ec_gpe_handler() if a
transaction takes longer than 1ms but individual register accesses take less
than 1ms. In some cases, this can make the following silicon bug easier to
trigger:
GPE EN is not wired to the GPE trigger line, so when GPE STS is already
set when 1 is written to GPE EN, no GPE can be triggered.
This patch adds register access timestamp reference support for ec_poll()
to reduce the number of ec_poll() invocations.
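A hedged sketch of the idea, not the driver's actual code: every EC register access refreshes a timestamp, and the poll loop times out relative to that stamp instead of the transaction start.

#include <linux/io.h>
#include <linux/jiffies.h>
#include <linux/types.h>

struct example_ec {
    unsigned long timestamp;   /* jiffies of the last register access */
};

static u8 example_ec_read_status(struct example_ec *ec)
{
    ec->timestamp = jiffies;
    return inb(0x66);          /* typical EC command/status port */
}

static bool example_ec_poll_expired(struct example_ec *ec)
{
    /* Expire only if more than 1ms passed since the last access. */
    return time_after(jiffies, ec->timestamp + msecs_to_jiffies(1));
}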
Lv Zheng [Thu, 5 Feb 2015 08:27:22 +0000 (16:27 +0800)]
ACPI / EC: Fix several GPE handling issues by deploying ACPI_GPE_DISPATCH_RAW_HANDLER mode.
This patch switches EC driver into ACPI_GPE_DISPATCH_RAW_HANDLER mode where
the GPE lock is not held for acpi_ec_gpe_handler() and the ACPICA internal
GPE enabling/disabling/clearing operations are bypassed so that further
improvements are possible with the GPE APIs.
There are 2 strong reasons for deploying raw GPE handler mode in the EC
driver:
1. Some hardware logics can control their interrupts via their own
registers, so their interrupts can be disabled/enabled/acknowledged
without using the functions provided by the super IRQ controller, while there
is no means (EC commands) for the EC driver to achieve this.
2. During suspending, the EC driver is still working for a while to
complete the platform-firmware-provided functionalities using ec_poll()
after all GPEs are disabled (see acpi_ec_block_transactions()), which
means the EC driver will drive the EC GPE out of the GPE core's control.
Without deploying the raw GPE handler mode, we can see many races between
the EC driver and the GPE core due to the above restrictions:
1. There is a race condition due to ACPICA internal GPE
disabling/clearing/enabling logics in acpi_ev_gpe_dispatch():
Originally the EC GPE is disabled (EN=0), cleared (STS=0) before invoking a
GPE handler and re-enabled (EN=1) after invoking a GPE handler in
acpi_ev_gpe_dispatch(). When re-enabling appears, GPE may be flagged
(STS=1).
=================================================================
(event pending A)
=================================================================
acpi_ev_gpe_dispatch() ec_poll()
EN=0
STS=0
acpi_ec_gpe_handler()
*****************************************************************
(event handling A)
Lock(EC)
advance_transaction()
EC_SC read
=================================================================
(event pending B)
=================================================================
EC_SC handled
Unlock(EC)
*****************************************************************
*****************************************************************
(event handling B)
Lock(EC)
advance_transaction()
EC_SC read
=================================================================
(event pending C)
=================================================================
EC_SC handled
Unlock(EC)
*****************************************************************
EN=1
This race condition is the root cause of different issues on different
silicon variations.
A. Silicon variation A:
On some platforms, GPE will be triggered due to "writing 1 to EN when
STS=1". This is because both EN and STS lines are wired to the GPE
trigger line.
1. Issue 1:
We can see no-op acpi_ec_gpe_handler() invoked on such platforms.
This is because:
a. event pending B: An event can arrive after ACPICA's GPE
clearing performed in acpi_ev_gpe_dispatch(), this event may
fail to be detected by EC_SC read that is performed before its
arrival;
b. event handling B: The event can be handled in ec_poll() because
EC lock is released after acpi_ec_gpe_handler() invocation;
c. There is no code in ec_poll() to clear STS but the GPE can
still be triggered by the EN=1 write performed in
acpi_ev_finish_gpe(), this leads to a no-op EC GPE handler
invocation;
d. As no-op GPE handler invocations are counted by the EC driver
to trigger the command storming conditions, the wrong no-op
GPE handler invocations thus can easily trigger wrong command
storming conditions.
Note 1:
If we removed GPE disabling/enabling code from
acpi_ev_gpe_dispatch(), we could still see no-op GPE handlers
triggered by the event arriving after the GPE clearing and before
the GPE handling on both silicon variation A and B. This can only
occur if the CPU is very slow (timing slice between STS=0 write
and EC_SC read should be short enough before hardware sets another
GPE indication). Thus this is very rare and is not what we need to
fix.
B. Silicon variation B:
On other platforms, GPE may not be triggered due to "writing 1 to EN
when STS=1". This is because only STS line is wired to the GPE
trigger line.
2. Issue 2:
We can see GPE loss on such platforms. This is because:
a. event pending B vs. event handling A: An event can arrive after
ACPICA's GPE handling performed in acpi_ev_gpe_dispatch(), or
event pending C vs. event handling B: An event can arrive after
Linux's GPE handling performed in ec_poll(),
these events may fail to be detected by EC_SC read that is
performed before their arrival;
b. The GPE cannot be triggered by EN=1 write performed in
acpi_ev_finish_gpe();
c. If no polling mechanism is implemented in the driver for the
pending event (for example, SCI_EVT), this event is lost due to
no GPE being triggered.
Note 2:
On most platforms, there might be another rule that GPE may not be
triggered due to "writing 1 to STS when STS=1 and EN=1".
Then on silicon variation B, an even worse case is if the issue 2
event loss happens, further events may never trigger GPE again on
such platforms due to being blocked by the current STS=1. Unless
someone clears STS, all events have to be polled.
2. There is a race condition due to lacking in GPE status checking in EC
driver:
Originally, GPE status is checked in ACPICA core but not checked in
the GPE handler. Thus since the status checking and handling is not
locked, it can be interrupted by another handling path.
=================================================================
(event pending A)
=================================================================
acpi_ev_gpe_detect() ec_poll()
if (EN==1 && STS==1)
*****************************************************************
(event handling A)
Lock(EC)
advance_transaction()
EC_SC read
EC_SC handled
Unlock(EC)
*****************************************************************
acpi_ev_gpe_dispatch()
EN=0
STS=0
acpi_ec_gpe_handler()
*****************************************************************
(event handling B)
Lock(EC)
advance_transaction()
EC_SC read
Unlock(EC)
*****************************************************************
3. Issue 3:
We can see no-op acpi_ec_gpe_handler() invoked on both silicon
variation A and B. This is because:
a. event pending A: An event can arrive to trigger an EC GPE and
ACPICA checks it and is about to invoke the EC GPE handler;
b. event handling A: The event can be handled in ec_poll() because
EC lock is not held after the GPE status checking;
c. event handling B: Then when the EC GPE handler is invoked, it
becomes a no-op GPE handler invocation.
d. As no-op GPE handler invocations are counted by the EC driver
to trigger the command storming conditions, the wrong no-op
GPE handler invocations thus can easily trigger wrong command
storming conditions.
Note 3:
This no-op GPE handler invocation is rare because the time between
the IRQ arrival and the acpi_ec_gpe_handler() invocation is less than
the timeout value waited in ec_poll(). So most of the no-op GPE
handler invocations are caused by the reason described in issue 1.
3. There is a race condition due to ACPICA internal GPE clearing logic in
acpi_enable_gpe():
During runtime, acpi_enable_gpe() can be invoked by the EC storming
prevention code. When it is invoked, GPE may be flagged (STS=1).
=================================================================
(event pending A)
=================================================================
acpi_ev_gpe_dispatch() acpi_ec_transaction()
EN=0
STS=0
acpi_ec_gpe_handler()
*****************************************************************
(event handling A)
Lock(EC)
advance_transaction()
EC_SC read
EC_SC handled
Unlock(EC)
*****************************************************************
EN=1 ?
Lock(EC)
Unlock(EC)
=================================================================
(event pending B)
=================================================================
acpi_enable_gpe()
STS=0
EN=1
4. Issue 4:
We can see GPE loss on both silicon variation A and B platforms.
This is because:
a. event pending B: An event can arrive right before ACPICA's GPE
clearing performed in acpi_enable_gpe();
b. If the GPE is cleared when GPE is disabled, then EN=1 write in
acpi_enable_gpe() cannot trigger this GPE;
c. If no polling mechanism is implemented in the driver for this
event (for example, SCI_EVT), this event is lost due to no GPE
being triggered.
Note 4:
Currently we don't have this issue, but after we switch the EC
driver into ACPI_GPE_DISPATCH_RAW_HANDLER mode, we need to take care
of handling this because the EN=1 write in acpi_ev_gpe_dispatch()
will be abandoned.
There might be more race issues for the current GPE handler usages. This is
because the EC IRQ's enabling/disabling/checking/clearing/handling
operations should be locked by a single lock that is under the EC driver's
control to achieve the serialization. Which means we need to invoke GPE
APIs with EC driver's lock held and all ACPICA internal GPE operations
related to the GPE handler should be abandoned. Invoking GPE APIs inside of
the EC driver lock and bypassing ACPICA internal GPE operations requires
the ACPI_GPE_DISPATCH_RAW_HANDLER mode, where the same lock used by the APIs
is released prior to invoking the handlers. Otherwise, we can see
deadlocks due to circular locking dependencies (see Reference below).
This patch then switches the EC driver into the
ACPI_GPE_DISPATCH_RAW_HANDLER mode so that it can perform correct GPE
operations using the GPE APIs:
1. Bypasses EN modifications performed in acpi_ev_gpe_dispatch() by
using acpi_install_gpe_raw_handler() and invoking all GPE APIs with EC
spin lock held. This can fix issue 1, as it creates an environment where
GPE enabling/disabling is infrequent.
2. Bypasses STS clearing performed in acpi_enable_gpe() by replacing
acpi_enable_gpe()/acpi_disable_gpe() with acpi_set_gpe(). This can fix
issue 4. And this can also help to fix issue 1, as it creates an environment
with no sudden GPE clearing when the GPE is frequently enabled/disabled.
3. Ensures STS acknowledged before handling by invoking acpi_clear_gpe()
in advance_transaction(). This can finally fix issue 1 even in a
frequent GPE enabling/disabling environment. And this can also finally
fix issue 3 when issue 2 is fixed.
Note 3:
GPE clearing is edge triggered W1C, which means we can clear it right
before handling it. Since all EC GPE indications are handled in
advance_transaction() by previous commits, we can now move GPE clearing
into it to implement the correct GPE clearing.
Note 4:
We can use acpi_set_gpe(), which is not safe for shared GPEs, instead of
acpi_enable_gpe()/acpi_disable_gpe(), because the EC GPE is not shared by
other hardware, as mentioned in the ACPI specification 5.0, 12.6
Interrupt Model: "OSPM driver treats this as an edge event (the EC SCI
cannot be shared)". So we can stop using the shared-GPE-safe APIs
acpi_enable_gpe()/acpi_disable_gpe() in the EC driver. Otherwise,
cleanups would need to be made in acpi_ev_enable_gpe() to bypass the GPE
clearing logic before keeping acpi_enable_gpe().
This patch also invokes advance_transaction() when GPE is re-enabled in the
task context which:
1. Ensures EN=1 can trigger GPE by checking and handling EC status register
right after EN=1 writes. This can fix issue 2.
After applying this patch, without frequent GPE enablings considered:
=================================================================
(event pending A)
=================================================================
acpi_ec_gpe_handler() ec_poll()
*****************************************************************
(event handling A)
Lock(EC)
advance_transaction()
if STS==1
STS=0
EC_SC read
=================================================================
(event pending B)
=================================================================
EC_SC handled
Unlock(EC)
*****************************************************************
*****************************************************************
(event handling B)
Lock(EC)
advance_transaction()
if STS==1
STS=0
EC_SC read
=================================================================
(event pending C)
=================================================================
EC_SC handled
Unlock(EC)
*****************************************************************
The event pending for issue 1 (event pending B) can arrive as a next GPE
due to the previous IRQ context STS=0 write. And if it is handled by
ec_poll() (event handling B), as it is also acknowledged by ec_poll(), the
event pending for issue 2 (event pending C) can properly arrive as a next
GPE after the task context STS=0 write. So no GPE will be lost and never
triggered due to GPE clearing performed in the wrong position. And since
all GPE handling is performed after a locked GPE status checking, we can
hardly see no-op GPE handler invocations due to issue 1 and 3. We may still
see no-op GPE handler invocations due to "Note 1", but as it is inevitable,
it needn't be fixed.
After applying this patch, with frequent GPE enablings considered:
=================================================================
(event pending A)
=================================================================
acpi_ec_gpe_handler() acpi_ec_transaction()
*****************************************************************
(event handling A)
Lock(EC)
advance_transaction()
if STS==1
STS=0
EC_SC read
=================================================================
(event pending B)
=================================================================
EC_SC handled
Unlock(EC)
*****************************************************************
*****************************************************************
(event handling B)
Lock(EC)
EN=1
if STS==1
advance_transaction()
if STS==1
STS=0
EC_SC read
=================================================================
(event pending C)
=================================================================
EC_SC handled
Unlock(EC)
*****************************************************************
The event pending for issue 2 can be manually handled by
advance_transaction(). And after the STS=0 write performed in the manual
triggered advance_transaction(), GPE can always arrive. So no GPE will be
lost due to frequent GPE disabling/enabling performed in the driver like
issue 4.
Note 5:
Ideally, when the EN=1 write occurs, an IRQ thread should be woken up to
handle the GPE if the GPE is raised. But this requires the IRQ thread to
contain the poller code for all EC GPE indications, while currently some of
the indications are handled in user tasks. It is then very hard for the
code to determine whether a user task should be invoked or the poller work
item should be scheduled. So we have to invoke advance_transaction()
directly for now, which leaves us with a restriction on GPE re-enabling:
it must be performed in the task context to avoid starving the GPEs.
As a conclusion: we can see the EC GPE is always handled in serial after
deploying the raw GPE handler mode:
Lock(EC)
if (STS==1)
STS=0
EC_SC read
EC_SC handled
Unlock(EC)
The EC driver specific lock is responsible to make the EC GPE handling
processes serialized so that EC can handle its GPE from both IRQ and task
contexts and the next IRQ can be ensured to arrive after this process.
Note 6:
We have many EC_FLAGS_MSI quirk users in the current driver. They all seem
to be suffering from an unexpectedly lost GPE triggering source, and they
were falsely root-caused to a timing issue. Since the EC communication
protocol already has flow control defined, timing shouldn't be the root
cause, while this fix might be fixing the root cause of those old bugs.
For acpi_set_gpe()/acpi_enable_gpe(), our target is to purify them to be APIs
that can be used for various GPE handling models, so we need them to be
pure GPE enabling APIs. GPE enabling/disabling has 2 use cases:
1. Driver may permanently enable/disable GPEs according to the usage
counts.
1. When upper layers (the users of the driver) submit requests to the
driver, it means they care about the underlying hardware. GPE need
to be enabled for the first request submission and disabled for the
last request completion.
2. When the GPE is shared between 2+ silicon logics. GPE need to be
enabled for either silicon logic's driver and disabled when all of
the drivers are not started.
For these cases, acpi_enable_gpe()/acpi_disable_gpe() should be used. When
the usage count is increased from 0 to 1, the GPE is enabled and it is
disabled when the usage count is decreased from 1 to 0.
2. A driver may temporarily disable the GPE to enter a GPE polling mode and
want to re-enable it later.
1. Prevent GPE storming: when a driver cannot fully solve the condition
that triggered the GPE in the GPE context, in order not to trigger
GPE storm, the driver has to disable the GPE to switch into the polling mode
and re-enable it in the non-interrupt context after the storming
condition is cleared.
2. Meet throughput requirements: some IO drivers need to poll the hardware
again and again until nothing is indicated, instead of handling just once
per interrupt. This needs to be done in the polling mode, or the IO
flood may prevent the GPE handler from returning.
3. Meet realtime requirement: in order not to block CPU to handle higher
realtime prioritized GPEs, lower priority GPEs can be handled in the
polling mode.
For these cases, acpi_set_gpe() should be used to switch to/from the
polling mode.
This patch adds unconditional GPE enabling support into acpi_set_gpe() so
that this API can be used by the drivers to switch back from the GPE
polling mode unconditionally.
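A hedged usage sketch (not the EC driver's actual code) of switching into and out of polling mode with acpi_set_gpe(), which bypasses the reference counting done by acpi_enable_gpe()/acpi_disable_gpe():

#include <linux/acpi.h>

static void example_enter_polling(acpi_handle gpe_device, u32 gpe_number)
{
    acpi_set_gpe(gpe_device, gpe_number, ACPI_GPE_DISABLE);
}

static void example_leave_polling(acpi_handle gpe_device, u32 gpe_number)
{
    /* With this patch the enable is unconditional: no STS clearing,
     * no usage-count bookkeeping. */
    acpi_set_gpe(gpe_device, gpe_number, ACPI_GPE_ENABLE);
}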
Originally this function includes GPE clearing logic in it.
First, the GPE clearing is typically used in the GPE handling code to:
1. Acknowledge the GPE when we know there is an edge triggered GPE raised
and is about to handle it, otherwise the unexpected clearing may lead to
a GPE loss;
2. Issue actions after we have handled a level triggered GPE, otherwise
the unexpected clearing may trigger unwanted OSPM actions to the
hardware (for example, clocking in out-dated write FIFO data).
Thus the GPE clearing is not suitable to be used in the GPE enabling APIs.
Second, the combination of acknowledging and enabling may also not be
expected by the hardware drivers. For GPE clearing, we have a separate API,
acpi_clear_gpe(). There are cases where drivers do want the 2 operations to be
split. So splitting these 2 operations gives drivers the
maximum flexibility to achieve success. For a combined one, we already
have acpi_finish_gpe() ready to be invoked.
Given the fact that drivers should complete all outstanding requests before
putting themselves into the sleep states, as this API is executed for
outstanding requests, it should also have nothing to do with the
"RUN"/"WAKE" distinguishing. That's why the acpi_set_gpe(ACPI_GPE_ENABLE)
should not be implemented by acpi_hw_low_set_gpe(ACPI_GPE_CONDITIONAL_ENABLE).
This patch thus converts acpi_set_gpe(ACPI_GPE_ENABLE) into
acpi_hw_low_set_gpe(ACPI_GPE_ENABLE) to achieve a separate GPE enabling API.
Drivers then are encouraged to use this API when they need to switch
to/from the GPE polling mode.
Note that the acpi_set_gpe()/acpi_finish_gpe() should be first introduced to
Linux using a divergence reduction patch before sending a linuxized version
of this patch. Lv Zheng.
Lv Zheng [Thu, 5 Feb 2015 08:27:09 +0000 (16:27 +0800)]
ACPICA: Events: Introduce acpi_set_gpe()/acpi_finish_gpe() to reduce divergences
This can help to reduce source code differences between Linux and ACPICA
upstream. Further driver cleanups also require these APIs to eliminate GPE
storms.
1. acpi_set_gpe(): An API that driver should invoke in the case it wants
to disable/enable IRQ without honoring the reference
count implemented in the acpi_disable_gpe() and
acpi_enable_gpe(). Note that this API should only be
invoked inside a acpi_enable_gpe()/acpi_disable_gpe()
pair.
2. acpi_finish_gpe(): Drivers that return from the GPE handler with
ACPI_REENABLE_GPE unset should use this API instead of
invoking acpi_set_gpe()/acpi_enable_gpe() to
re-enable the GPE.
The GPE APIs can be invoked inside of a GPE handler or in the task context
with a driver provided lock held. This driver provided lock is safe to be
held in the GPE handler by the driver.
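A hedged sketch of how a driver is expected to combine the two APIs; everything except the ACPI calls and constants is a placeholder:

#include <linux/acpi.h>

static u32 example_gpe_handler(acpi_handle gpe_device, u32 gpe_number,
                               void *context)
{
    /* Handle or queue the event under the driver's own lock ... */

    /* Returning without ACPI_REENABLE_GPE: ACPICA will not re-enable
     * the GPE on our behalf. */
    return ACPI_INTERRUPT_HANDLED;
}

static void example_event_done(acpi_handle gpe_device, u32 gpe_number)
{
    /* Task context: clear and re-enable once the state machine allows it. */
    acpi_finish_gpe(gpe_device, gpe_number);
}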
Since whether the GPE should be disabled/enabled/cleared should only be
determined by the GPE driver's state machine:
1. GPE should be disabled if the driver wants to switch to the GPE polling
mode when a GPE storm condition is indicated and should be enabled if
the driver wants to switch back to the GPE interrupt mode when all of
the storm conditions are cleared. The conditions should be protected by
the driver's specific lock.
2. GPE should be enabled if the driver has accepted more than one request
and should be disabled if the driver has completed all of the requests.
The request count should be protected by the driver's specific lock.
3. GPE should be cleared either when the driver is about to handle an edge
triggered GPE or when the driver has completed handling a level
triggered GPE. The handling code should be protected by the driver's
specific lock.
Thus the GPE enabling/disabling/clearing operations are likely to be
performed with the driver's specific lock held while we currently cannot do
this. This is because:
1. We have the acpi_gbl_gpe_lock held before invoking the GPE driver's
handler. Driver's specific lock is likely to be held inside of the
handler, thus we can see some deadlock issues due to the reversed
locking order or recursive locking. In order to solve such deadlock
issues, we need to unlock the acpi_gbl_gpe_lock before invoking the
handler. BZ 1100.
2. Since GPE disabling/enabling/clearing should be determined by the GPE
driver's state machine, we shouldn't perform such operations inside of
ACPICA for a GPE handler to mess up the driver's state machine. BZ 1101.
Originally this patch includes a logic to flush GPE handlers, it is dropped
due to the following reasons:
1. This is a different issue;
2. Linux OSL has fixed this by flushing SCI in acpi_os_wait_events_complete().
We will pick up this topic when the Linux OSL fix turns out to be not
sufficient.
Note that currently the internal operations and the acpi_gbl_gpe_lock are
also used by ACPI_GPE_DISPATCH_METHOD and ACPI_GPE_DISPATCH_NOTIFY. In
order not to introduce regressions, we add one
ACPI_GPE_DISPATCH_RAW_HANDLER type to be distinguished from
ACPI_GPE_DISPATCH_HANDLER, for which the acpi_gbl_gpe_lock is unlocked before
invoking the GPE handler and the internal enabling/disabling operations are
bypassed to allow drivers to perform them at a proper position using the
GPE APIs and ACPI_GPE_DISPATCH_RAW_HANDLER users should invoke acpi_set_gpe()
instead of acpi_enable_gpe()/acpi_disable_gpe() to bypass the internal GPE
clearing code in acpi_enable_gpe(). Lv Zheng.
Known issues:
1. Edge-triggered GPE lost for frequent enablings
On some buggy silicon platforms, GPE enable line may not be directly
wired to the GPE trigger line. In that case, when GPE enabling is
frequently performed for edge-triggered GPEs, GPE status may stay set
without being triggered.
This patch may magnify this problem as it allows GPE enabling to be
parallel performed during the process the GPEs are handled.
This is an existing issue, because:
1. For task context:
Current ACPI_GPE_DISPATCH_METHOD practices have proven that this
isn't a real issue - we can re-enable edge-triggered GPE in a work
queue where the GPE status bit might already be set.
2. For IRQ context:
This can even happen when the GPE enabling occurs before returning
from the GPE handler and after unlocking the GPE lock.
Thus currently no code is included to protect this.
In function acpi_hw_low_set_gpe(), cast enable_mask to u8 before
storing. The mask was read from a 32 bit register but is an 8 bit
value. Fixes Visual Studio compiler warning.
There is an issue in acpi_install_gpe_handler() and acpi_remove_gpe_handler().
The code to obtain the GPE dispatcher type from the Handler->original_flags
is wrong:
if (((Handler->original_flags & ACPI_GPE_DISPATCH_METHOD) ||
(Handler->original_flags & ACPI_GPE_DISPATCH_NOTIFY)) &&
ACPI_GPE_DISPATCH_NOTIFY is 0x03 and ACPI_GPE_DISPATCH_METHOD is 0x02, thus
this statement is TRUE for the following dispatcher types:
0x01 (ACPI_GPE_DISPATCH_HANDLER): not expected
0x02 (ACPI_GPE_DISPATCH_METHOD): expected
0x03 (ACPI_GPE_DISPATCH_NOTIFY): expected
There is no functional issue due to this because Handler->original_flags is
only set in acpi_install_gpe_handler(), and an earlier checker has excluded
the ACPI_GPE_DISPATCH_HANDLER:
if ((gpe_event_info->Flags & ACPI_GPE_DISPATCH_MASK) ==
ACPI_GPE_DISPATCH_HANDLER)
{
Status = AE_ALREADY_EXISTS;
goto free_and_exit;
}
...
Handler->original_flags = (u8) (gpe_event_info->Flags &
(ACPI_GPE_XRUPT_TYPE_MASK | ACPI_GPE_DISPATCH_MASK));
We need to clean this up before modifying the GPE dispatcher type values.
In order to prevent such issue from happening in the future, this patch
introduces ACPI_GPE_DISPATCH_TYPE() macro to be used to obtain the GPE
dispatcher types. Lv Zheng.
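A sketch of such a helper, mirroring the code fragments quoted above (the actual ACPICA definition may differ in detail): extract only the dispatch-type field and compare for equality instead of testing overlapping bit patterns with '&'.

#define EXAMPLE_GPE_DISPATCH_TYPE(flags) \
        ((u8) ((flags) & ACPI_GPE_DISPATCH_MASK))

/* The problematic test then becomes: */
if ((EXAMPLE_GPE_DISPATCH_TYPE(Handler->original_flags) ==
     ACPI_GPE_DISPATCH_METHOD) ||
    (EXAMPLE_GPE_DISPATCH_TYPE(Handler->original_flags) ==
     ACPI_GPE_DISPATCH_NOTIFY)) {
        /* ... */
}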
This patch follows acpi_ev_fixed_event_detect(), which invokes
acpi_gbl_global_event_handler instead of invoking it in
acpi_ev_fixed_event_dispatch(), and moves the acpi_gbl_global_event_handler
invocation from acpi_ev_gpe_dispatch() to acpi_ev_gpe_detect(). This makes further cleanups
around acpi_ev_gpe_dispatch() simpler. Lv Zheng.
This patch prevents acpi_remove_gpe_handler() from leaking the stale
gpe_event_info->Dispatch.Handler to the caller to avoid possible NULL pointer
references. Lv Zheng.
Currently we rely on the logic that GPE blocks will never be deleted,
otherwise we can be broken by the race between acpi_ev_create_gpe_block(),
acpi_ev_delete_gpe_block() and acpi_ev_gpe_detect().
On the other hand, if we want to protect GPE block creation/deletion, we
need to use a different synchronization facility to protect the period
between acpi_ev_gpe_dispatch() and acpi_ev_asynch_enable_gpe(), which leaves us
no choice but to abandon the ACPI_MTX_EVENTS used during this period.
This patch removes ACPI_MTX_EVENTS used during this period and the
acpi_ev_valid_gpe_event() to reflect current restriction. Lv Zheng.
This patch deletes a sanity check from acpi_ev_enable_gpe().
This kind of check is already done in
acpi_enable_gpe()/acpi_remove_gpe_handler()/acpi_update_all_gpes() before invoking
acpi_ev_enable_gpe():
1. acpi_enable_gpe(): same check (skip if DISPATCH_NONE) is now implemented.
2. acpi_remove_gpe_handler(): a more strict check (skip if !DISPATCH_HANDLER)
is implemented.
3. acpi_update_all_gpes(): a more strict check (skip if DISPATCH_NONE ||
DISPATCH_HANDLER || CAN_WAKE)
4. acpi_set_gpe(): since it is invoked by the OSPM driver where the GPE
handler is known to be available, such check isn't needed.
So we can simply remove this duplicated check from acpi_ev_enable_gpe().
Lv Zheng.
Lv Zheng [Thu, 5 Feb 2015 07:19:48 +0000 (15:19 +0800)]
ACPICA: Events: Back port "ACPICA: Save current masks of enabled GPEs after enable register writes"
This is a back port result of the Linux commit:
Commit c50f13c672df758b59e026c15b9118f3ed46edc4
Subject: ACPICA: Save current masks of enabled GPEs after enable register writes
Apart from the indentation divergences, only a missing prototype was added,
due to the ACPICA internal coding style.
Yinghai Lu [Thu, 5 Feb 2015 05:44:48 +0000 (13:44 +0800)]
ACPI: Add interfaces to parse IOAPIC ID for IOAPIC hotplug
We need to parse APIC ID for IOAPIC registration for IOAPIC hotplug.
ACPI _MAT method and MADT table are used to figure out IOAPIC ID, just
like parsing CPU APIC ID for CPU hotplug.
Jiang Liu [Thu, 5 Feb 2015 05:44:47 +0000 (13:44 +0800)]
x86/PCI: Refine the way to release PCI IRQ resources
Some PCI device drivers assume that pci_dev->irq won't change after
calling pci_disable_device() and pci_enable_device() during suspend and
resume.
Commit c03b3b0738a5 ("x86, irq, mpparse: Release IOAPIC pin when
PCI device is disabled") frees PCI IRQ resources when pci_disable_device()
is called and reallocate IRQ resources when pci_enable_device() is
called again. This breaks the above assumption. So commit 3eec595235c1
("x86, irq, PCI: Keep IRQ assignment for PCI devices during
suspend/hibernation") and 9eabc99a635a ("x86, irq, PCI: Keep IRQ
assignment for runtime power management") fix the issue by avoiding
freeing/reallocating IRQ resources during PCI device suspend/resume.
They achieve this by checking dev.power.is_prepared and
dev.power.runtime_status. The PM maintainer, Rafael, then pointed out that
it's really an ugly fix which leaks PM internal state information to
the IRQ subsystem.
Recently David Vrabel <[email protected]> also reported a
regression in the pciback driver caused by commit cffe0a2b5a34 ("x86, irq:
Keep balance of IOAPIC pin reference count"). Please refer to:
http://lkml.org/lkml/2015/1/14/546
So this patch refines the way PCI IRQ resources are released. Instead of
releasing PCI IRQ resources in pci_disable_device()/
pcibios_disable_device(), we now release them at the driver unbinding
notification BUS_NOTIFY_UNBOUND_DRIVER. In other words, we only release
PCI IRQ resources when there's no driver bound to the PCI device, which
keeps the assumption that pci_dev->irq won't change through multiple
invocations of pci_enable_device()/pci_disable_device().
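A hedged sketch of the mechanism, not the actual x86 patch: a bus notifier releases the device's IRQ resources at BUS_NOTIFY_UNBOUND_DRIVER time; example_release_pci_irq() stands in for the real release routine.

#include <linux/device.h>
#include <linux/init.h>
#include <linux/notifier.h>
#include <linux/pci.h>

static void example_release_pci_irq(struct pci_dev *pdev)
{
    /* Free the IOAPIC pin / IRQ resources held for pdev here. */
}

static int example_irq_notifier(struct notifier_block *nb,
                                unsigned long action, void *data)
{
    struct device *dev = data;

    if (action == BUS_NOTIFY_UNBOUND_DRIVER)
        example_release_pci_irq(to_pci_dev(dev));

    return NOTIFY_OK;
}

static struct notifier_block example_irq_nb = {
    .notifier_call = example_irq_notifier,
};

static int __init example_irq_notifier_init(void)
{
    return bus_register_notifier(&pci_bus_type, &example_irq_nb);
}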
Jiang Liu [Thu, 5 Feb 2015 05:44:43 +0000 (13:44 +0800)]
resources: Move struct resource_list_entry from ACPI into resource core
Currently ACPI, PCI and pnp all implement the same resource list
management with different data structure. We need to transfer from
one data structure into another when passing resources from one
subsystem into another subsystem. So move struct resource_list_entry
from ACPI into resource core and rename it as resource_entry,
then it could be reused by different subsystems and avoid the data
structure conversion.
Introduce dedicated header file resource_ext.h instead of embedding
it into ioport.h to avoid header file inclusion order issues.
James Hogan [Mon, 19 Jan 2015 15:38:24 +0000 (15:38 +0000)]
IRQCHIP: mips-gic: Avoid rerouting timer IRQs for smp-cmp
Commit e9de688dac65 ("irqchip: mips-gic: Support local interrupts")
changed the GIC irqchip driver so that all local interrupts were routed
to the same CPU pin used for external interrupts. Unfortunately this
causes a regression when smp-cmp is used. The CPUs are started by the
bootloader and put in a timer based waiting poll loop, but when their
timer interrupts are rerouted to a different IRQ pin which is not
unmasked they never wake up.
Since smp-cmp support is deprecated and everybody who was using it
should be switching to smp-cps which brings up the secondary CPUs
without bootloader assistance, I've gone for the simple fix which can be
easily removed once smp-cmp is removed, rather than a fully generic fix.
In __gic_init() the local GIC_VPE_TIMER_MAP register is read to find the
boot-time routing of the local timer interrupt, and a chained handler is
added to that CPU pin as well as the normal one.
Eric Dumazet [Wed, 4 Feb 2015 23:12:04 +0000 (15:12 -0800)]
sit: fix some __be16/u16 mismatches
Fixes following sparse warnings :
net/ipv6/sit.c:1509:32: warning: incorrect type in assignment (different base types)
net/ipv6/sit.c:1509:32: expected restricted __be16 [usertype] sport
net/ipv6/sit.c:1509:32: got unsigned short
net/ipv6/sit.c:1514:32: warning: incorrect type in assignment (different base types)
net/ipv6/sit.c:1514:32: expected restricted __be16 [usertype] dport
net/ipv6/sit.c:1514:32: got unsigned short
net/ipv6/sit.c:1711:38: warning: incorrect type in argument 3 (different base types)
net/ipv6/sit.c:1711:38: expected unsigned short [unsigned] [usertype] value
net/ipv6/sit.c:1711:38: got restricted __be16 [usertype] sport
net/ipv6/sit.c:1713:38: warning: incorrect type in argument 3 (different base types)
net/ipv6/sit.c:1713:38: expected unsigned short [unsigned] [usertype] value
net/ipv6/sit.c:1713:38: got restricted __be16 [usertype] dport
Eric Dumazet [Wed, 4 Feb 2015 23:03:25 +0000 (15:03 -0800)]
ipv6: fix sparse errors in ip6_make_flowlabel()
include/net/ipv6.h:713:22: warning: incorrect type in assignment (different base types)
include/net/ipv6.h:713:22: expected restricted __be32 [usertype] hash
include/net/ipv6.h:713:22: got unsigned int
include/net/ipv6.h:719:25: warning: restricted __be32 degrades to integer
include/net/ipv6.h:719:22: warning: invalid assignment: ^=
include/net/ipv6.h:719:22: left side has type restricted __be32
include/net/ipv6.h:719:22: right side has type unsigned int
Eric Dumazet [Wed, 4 Feb 2015 21:31:54 +0000 (13:31 -0800)]
flow_keys: n_proto type should be __be16
(struct flow_keys)->n_proto is in network order, use
proper type for this.
Fixes following sparse errors :
net/core/flow_dissector.c:139:39: warning: incorrect type in assignment (different base types)
net/core/flow_dissector.c:139:39: expected unsigned short [unsigned] [usertype] n_proto
net/core/flow_dissector.c:139:39: got restricted __be16 [assigned] [usertype] proto
net/core/flow_dissector.c:237:23: warning: incorrect type in assignment (different base types)
net/core/flow_dissector.c:237:23: expected unsigned short [unsigned] [usertype] n_proto
net/core/flow_dissector.c:237:23: got restricted __be16 [assigned] [usertype] proto
Signed-off-by: Eric Dumazet <[email protected]>
Fixes: e0f31d849867 ("flow_keys: Record IP layer protocol in skb_flow_dissect()")
Signed-off-by: David S. Miller <[email protected]>
Sabrina Dubroca [Wed, 4 Feb 2015 14:25:09 +0000 (15:25 +0100)]
ip6_gre: fix endianness errors in ip6gre_err
info is in network byte order, change it back to host byte order
before use. In particular, the current code sets the MTU of the tunnel
to a wrong (too big) value.
Takashi Iwai [Wed, 4 Feb 2015 13:38:55 +0000 (14:38 +0100)]
xen-netfront: Use static attribute groups for sysfs entries
Instead of manual calls of device_create_file() and
device_remove_file(), assign the static attribute groups to the netdev
groups array. This simplifies the code and avoids the possible
races.
Takashi Iwai [Wed, 4 Feb 2015 13:37:34 +0000 (14:37 +0100)]
tun: Use static attribute groups for sysfs entries
Instead of manual calls of device_create_file() and
device_remove_file(), assign the static attribute groups to the netdev
groups array. This simplifies the code and avoids the possible
races.
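Both of the preceding commits follow the same pattern; a hedged sketch (attribute and group names are placeholders, and the sysfs_groups field is assumed to be the "netdev groups array" mentioned above):

#include <linux/device.h>
#include <linux/kernel.h>
#include <linux/netdevice.h>
#include <linux/sysfs.h>

static ssize_t example_show(struct device *dev,
                            struct device_attribute *attr, char *buf)
{
    return sprintf(buf, "%d\n", 0);
}
static DEVICE_ATTR_RO(example);

static struct attribute *example_attrs[] = {
    &dev_attr_example.attr,
    NULL
};
static const struct attribute_group example_group = {
    .attrs = example_attrs,
};

static void example_setup(struct net_device *dev)
{
    /* Published together with the device at register_netdev() time,
     * instead of creating and removing the files by hand. */
    dev->sysfs_groups[0] = &example_group;
}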
This is a pull request of 2 patches for net-next/master.
Nicholas Mc Guire contributes a patch for the janz-ican3 driver to fix
a mismatch in an assignment. Ahmed S. Darwish contributes a patch for
the kvaser_usb driver, to make the driver more robust during the
bus-off handling.
Shahed Shaikh [Wed, 4 Feb 2015 10:41:25 +0000 (05:41 -0500)]
qlcnic: Fix NAPI poll routine for Tx completion
After d75b1ade567f ("net: less interrupt masking in NAPI"), the
driver's NAPI poll routine is expected to return the
exact budget value if it wants to be re-called.
Signed-off-by: Shahed Shaikh <[email protected]>
Fixes: d75b1ade567f ("net: less interrupt masking in NAPI")
Signed-off-by: David S. Miller <[email protected]>
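A hedged sketch of the NAPI contract the fix restores (the ring-cleaning helpers are placeholders, not qlcnic functions): return the full budget when work remains, and only complete NAPI when everything is drained.

#include <linux/netdevice.h>

static int example_clean_tx(struct napi_struct *napi) { return 0; }
static int example_clean_rx(struct napi_struct *napi, int budget) { return 0; }

static int example_napi_poll(struct napi_struct *napi, int budget)
{
    int tx_done = example_clean_tx(napi);
    int rx_done = example_clean_rx(napi, budget);

    /* Tx (or Rx) work left: ask to be re-polled by returning the
     * full budget; do not call napi_complete() in that case. */
    if (tx_done || rx_done >= budget)
        return budget;

    napi_complete(napi);
    /* Re-enable device interrupts here. */
    return rx_done;
}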