commit b632d710149f1a638ffa6e999cfbd9ee9fbdada6
Author: Greg Kroah-Hartman
Date: Wed Dec 20 10:07:34 2017 +0100

Linux 4.9.71

commit ed70a2212526bb26b5850f538b5a53793fdd4abf
Author: Miaoqing Pan
Date: Wed Sep 27 09:13:34 2017 +0800

ath9k: fix tx99 potential info leak

[ Upstream commit ee0a47186e2fa9aa1c56cadcea470ca0ba8c8692 ]

When the user sets count to zero the string buffer would remain completely
uninitialized, which causes the kernel to parse its own stack data,
potentially leading to an info leak. In addition to that, the string might
not be terminated properly when the user data does not contain a
0-terminator.

Signed-off-by: Miaoqing Pan
Reviewed-by: Christoph Böhmwalder
Signed-off-by: Kalle Valo
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman

commit 8f23eb16afd8f230f15ee020159469275adabd15
Author: Matteo Croce
Date: Thu Oct 12 16:12:37 2017 +0200

icmp: don't fail on fragment reassembly time exceeded

[ Upstream commit 258bbb1b0e594ad5f5652cb526b3c63e6a7fad3d ]

The ICMP implementation currently replies to an ICMP time exceeded message
(type 11) with an ICMP host unreachable message (type 3, code 1).

However, time exceeded messages can either represent "time to live exceeded
in transit" (code 0) or "fragment reassembly time exceeded" (code 1).

Unconditionally replying to "fragment reassembly time exceeded" with host
unreachable messages might cause unjustified connection resets which are now
easily triggered as UFO has been removed, because, in turn, sending large
buffers triggers IP fragmentation.

The issue can be easily reproduced by running a lot of UDP streams which is
likely to trigger IP fragmentation:

  # start netserver in the test namespace
  ip netns add test
  ip netns exec test netserver

  # create a VETH pair
  ip link add name veth0 type veth peer name veth0 netns test
  ip link set veth0 up
  ip -n test link set veth0 up

  for i in $(seq 20 29); do
      # assign addresses to both ends
      ip addr add dev veth0 192.168.$i.1/24
      ip -n test addr add dev veth0 192.168.$i.2/24

      # start the traffic
      netperf -L 192.168.$i.1 -H 192.168.$i.2 -t UDP_STREAM -l 0 &
  done

  # wait
  send_data: data send error: No route to host (errno 113)
  netperf: send_omni: send_data failed: No route to host

We need to differentiate instead: if fragment reassembly time exceeded is
reported, we need to silently drop the packet; if time to live exceeded is
reported, maintain the current behaviour. In both cases increment the related
error count "icmpInTimeExcds".

While at it, fix a typo in a comment, and convert the if statement into a
switch to make it more readable.

Signed-off-by: Matteo Croce
Signed-off-by: David S. Miller
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman

commit 2eb165b9fbb722416252c054ac2ef2c3eb777460
Author: Alex Vesker
Date: Tue Oct 10 10:36:41 2017 +0300

IB/ipoib: Grab rtnl lock on heavy flush when calling ndo_open/stop

[ Upstream commit b4b678b06f6eef18bff44a338c01870234db0bc9 ]

When ndo_open and ndo_stop are called, the RTNL lock should be held.
In this specific case ipoib_ib_dev_open calls the offloaded ndo_open,
which re-sets the number of TX queues assuming the RTNL lock is held.
Since the RTNL lock is not held, the RTNL assert will fail.

Signed-off-by: Alex Vesker
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman
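A minimal sketch of the locking rule described in the IB/ipoib entry above, using only the generic netdev API (the helper below is illustrative, not the ipoib code): any path that ends up in ndo_open/ndo_stop, such as a heavy-flush worker, must take the RTNL lock itself, because those callbacks may resize TX queues and assert that RTNL is held.

#include <linux/netdevice.h>
#include <linux/rtnetlink.h>

/* Illustrative helper: re-open a device from a context (e.g. a
 * workqueue) that does not already hold RTNL.  dev_close()/dev_open()
 * invoke ndo_stop/ndo_open, which expect the lock to be held. */
static int heavy_flush_reopen(struct net_device *dev)
{
	int err;

	rtnl_lock();
	if (netif_running(dev))
		dev_close(dev);		/* ndo_stop runs under RTNL */
	err = dev_open(dev);		/* ndo_open runs under RTNL */
	rtnl_unlock();

	return err;
}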
commit 0c70b35bf1583fa5fc9134b87d32cf0bc27d5023
Author: Bart Van Assche
Date: Wed Oct 11 10:48:45 2017 -0700

RDMA/cma: Avoid triggering undefined behavior

[ Upstream commit c0b64f58e8d49570aa9ee55d880f92c20ff0166b ]

According to the C standard the behavior of computations with integer
operands is as follows:
* A computation involving unsigned operands can never overflow, because a
  result that cannot be represented by the resulting unsigned integer type
  is reduced modulo the number that is one greater than the largest value
  that can be represented by the resulting type.
* The behavior for signed integer underflow and overflow is undefined.

Hence only use unsigned integers when checking for integer overflow.

This patch is what I came up with after having analyzed the following smatch
warnings:

  drivers/infiniband/core/cma.c:3448: cma_resolve_ib_udp() warn: signed overflow undefined. 'offset + conn_param->private_data_len < conn_param->private_data_len'
  drivers/infiniband/core/cma.c:3505: cma_connect_ib() warn: signed overflow undefined. 'offset + conn_param->private_data_len < conn_param->private_data_len'

Signed-off-by: Bart Van Assche
Acked-by: Sean Hefty
Signed-off-by: Doug Ledford
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman

commit 31eb4108e107aefc9cd5d662b3624533327848fd
Author: Alexander Duyck
Date: Fri Oct 13 13:40:24 2017 -0700

macvlan: Only deliver one copy of the frame to the macvlan interface

[ Upstream commit dd6b9c2c332b40f142740d1b11fb77c653ff98ea ]

This patch introduces a slight adjustment for macvlan to address the fact
that in source mode I was seeing two copies of any packet addressed to the
macvlan interface being delivered where there should have been only one.

The issue appears to be that one copy was delivered based on the source MAC
address and then the second copy was being delivered based on the
destination MAC address.

To fix it I am just treating a unicast address match as though it is not a
match, since source based macvlan isn't supposed to be matching based on the
destination MAC anyway.

Fixes: 79cf79abce71 ("macvlan: add source mode")
Signed-off-by: Alexander Duyck
Signed-off-by: David S. Miller
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman

commit b64ab3ca9d31162cbdbd8acc2d5b68cf37302527
Author: Jan Kara
Date: Mon Oct 16 11:38:11 2017 +0200

udf: Avoid overflow when session starts at large offset

[ Upstream commit abdc0eb06964fe1d2fea6dd1391b734d0590365d ]

When a session starts beyond offset 2^31, the arithmetic in udf_check_vsd()
would overflow. Make sure the computation is done in a large enough type.

Reported-by: Cezary Sliwa
Signed-off-by: Jan Kara
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman

commit 91e0cf85caea7f977361f662941100821370d74e
Author: Dan Carpenter
Date: Wed Oct 4 10:50:37 2017 +0300

scsi: bfa: integer overflow in debugfs

[ Upstream commit 3e351275655d3c84dc28abf170def9786db5176d ]

We could allocate less memory than intended because we do:

  bfad->regdata = kzalloc(len << 2, GFP_KERNEL);

The shift can overflow, leading to a crash. This is debugfs code so the
impact is very small. I fixed the network version of this in March with
commit 13e2d5187f6b ("bna: integer overflow bug in debugfs").

Fixes: ab2a9ba189e8 ("[SCSI] bfa: add debugfs support")
Signed-off-by: Dan Carpenter
Signed-off-by: Martin K. Petersen
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman
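The RDMA/cma and scsi: bfa entries above come down to the same rule: do the range check in unsigned arithmetic before the value is used. A standalone sketch of both checks (hypothetical helper names, not the drivers' code):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Unsigned addition wraps modulo 2^N (defined behaviour), so
 * "offset + len < len" is a valid overflow test.  With signed
 * operands the same expression is undefined and the compiler may
 * delete it. */
static bool private_data_fits(size_t offset, size_t len)
{
	return offset + len >= len;
}

/* Guard a shift that feeds an allocation size, so "len << 2" cannot
 * wrap to a small value and yield an undersized buffer. */
static bool shifted_len_fits(uint32_t len)
{
	return len <= UINT32_MAX >> 2;
}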
commit 64da4e8d00f16bc7db25f6d819c4beac2656a90f
Author: weiping zhang
Date: Thu Oct 12 14:56:44 2017 +0800

scsi: sd: change allow_restart to bool in sysfs interface

[ Upstream commit 658e9a6dc1126f21fa417cd213e1cdbff8be0ba2 ]

/sys/class/scsi_disk/0:2:0:0/allow_restart can be changed to 0 unexpectedly
by writing an invalid string such as the following:

  echo asdf > /sys/class/scsi_disk/0:2:0:0/allow_restart

Signed-off-by: weiping zhang
Signed-off-by: Martin K. Petersen
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman

commit 1cafdac8919136bd2ee6e0f607c49aacb9bb854b
Author: weiping zhang
Date: Thu Oct 12 14:57:06 2017 +0800

scsi: sd: change manage_start_stop to bool in sysfs interface

[ Upstream commit 623401ee33e42cee64d333877892be8db02951eb ]

/sys/class/scsi_disk/0:2:0:0/manage_start_stop can be changed to 0
unexpectedly by writing an invalid string.

Signed-off-by: weiping zhang
Signed-off-by: Martin K. Petersen
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman

commit 8315bcf841ae26d266dbaecd8a47956632478799
Author: Jia-Ju Bai
Date: Sun Oct 8 19:54:07 2017 +0800

rtl8188eu: Fix a possible sleep-in-atomic bug in rtw_disassoc_cmd

[ Upstream commit 08880f8e08cbd814e870e9d3ab9530abc1bce226 ]

The driver may sleep under a spinlock, and the function call path is:

  rtw_set_802_11_bssid (acquire the spinlock)
    rtw_disassoc_cmd
      kzalloc(GFP_KERNEL) --> may sleep

To fix it, GFP_KERNEL is replaced with GFP_ATOMIC.
This bug is found by my static analysis tool and my code review.

Signed-off-by: Jia-Ju Bai
Signed-off-by: Greg Kroah-Hartman
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman

commit 6641d3e307f5b42237a99f2aa961051a71c50393
Author: Jia-Ju Bai
Date: Sun Oct 8 19:54:45 2017 +0800

rtl8188eu: Fix a possible sleep-in-atomic bug in rtw_createbss_cmd

[ Upstream commit 2bf9806d4228f7a6195f8e03eda0479d2a93b411 ]

The driver may sleep under a spinlock, and the function call path is:

  rtw_surveydone_event_callback (acquire the spinlock)
    rtw_createbss_cmd
      kzalloc(GFP_KERNEL) --> may sleep

To fix it, GFP_KERNEL is replaced with GFP_ATOMIC.
This bug is found by my static analysis tool and my code review.

Signed-off-by: Jia-Ju Bai
Signed-off-by: Greg Kroah-Hartman
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman

commit 28e006e14ff9b2fe55078c02bb92212b09664517
Author: Jia-Ju Bai
Date: Mon Oct 9 16:45:55 2017 +0800

vt6655: Fix a possible sleep-in-atomic bug in vt6655_suspend

[ Upstream commit 42c8eb3f6e15367981b274cb79ee4657e2c6949d ]

The driver may sleep under a spinlock, and the function call path is:

  vt6655_suspend (acquire the spinlock)
    pci_set_power_state
      __pci_start_power_transition (drivers/pci/pci.c)
        msleep --> may sleep

To fix it, pci_set_power_state is called without holding the spinlock.
This bug is found by my static analysis tool and my code review.

Signed-off-by: Jia-Ju Bai
Signed-off-by: Greg Kroah-Hartman
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman

commit 04d5a2d5d2d079ce5966d7e67324c9c4f648c951
Author: Parav Pandit
Date: Mon Oct 16 08:45:16 2017 +0300

IB/core: Fix calculation of maximum RoCE MTU

[ Upstream commit 99260132fde7bddc6e0132ce53da94d1c9ccabcb ]

The original code only took into consideration the largest header possible
after the IB_BTH_BYTES. This was incorrect, as the largest possible header
size is the largest possible combination of headers we might run into. The
new code accounts for all possible headers in the largest possible
combination and subtracts that from the MTU to make sure that all packets
will fit on the wire.

Link: https://www.spinics.net/lists/linux-rdma/msg54558.html
Fixes: 3c86aa70bf67 ("RDMA/cm: Add RDMA CM support for IBoE devices")
Signed-off-by: Parav Pandit
Reviewed-by: Daniel Jurgens
Reported-by: Roland Dreier
Signed-off-by: Leon Romanovsky
Signed-off-by: Doug Ledford
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman
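Referring back to the two scsi: sd entries above: a minimal sketch, under the assumption that the fix switches the attributes to strict boolean parsing (names below are illustrative, not the sd driver's), of a sysfs store handler that rejects input such as "asdf" instead of silently storing 0:

#include <linux/device.h>
#include <linux/kernel.h>

struct disk_flags_example {
	bool allow_restart;	/* stand-in for the real per-device flag */
};

static ssize_t allow_restart_store(struct device *dev,
				   struct device_attribute *attr,
				   const char *buf, size_t count)
{
	struct disk_flags_example *d = dev_get_drvdata(dev);
	bool v;

	if (kstrtobool(buf, &v))	/* "asdf" -> -EINVAL, flag untouched */
		return -EINVAL;

	d->allow_restart = v;
	return count;
}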
commit c744ecec01ae6a92541c381d3548f8753aa50ba2
Author: Kurt Garloff
Date: Tue Oct 17 09:10:45 2017 +0200

scsi: scsi_devinfo: Add REPORTLUN2 to EMC SYMMETRIX blacklist entry

[ Upstream commit 909cf3e16a5274fe2127cf3cea5c8dba77b2c412 ]

All EMC SYMMETRIX support REPORT_LUNS, even if configured to report SCSI-2
for whatever reason.

Signed-off-by: Kurt Garloff
Signed-off-by: Hannes Reinecke
Signed-off-by: Martin K. Petersen
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman

commit f39486bd37ee3edd8658aff1226337c47e4ba32e
Author: NeilBrown
Date: Tue Oct 17 16:18:36 2017 +1100

raid5: Set R5_Expanded on parity devices as well as data.

[ Upstream commit 235b6003fb28f0dd8e7ed8fbdb088bb548291766 ]

When reshaping a fully degraded raid5/raid6 to a larger number of devices,
the new device(s) are not in-sync and so that can make the newly grown
stripe appear to be "failed".

To avoid this, we set the R5_Expanded flag to say "Even though this device
is not fully in-sync, this block is safe so don't treat the device as failed
for this stripe". This flag is set for data devices, but not for parity
devices.

Consequently, if you have a RAID6 with two devices that are partly recovered
and a spare, and start a reshape to include the spare, then when the reshape
gets past the point where the recovery was up to, it will think the stripes
are failed and will get into an infinite loop, failing to make progress.

So when constructing parity on an EXPAND_READY stripe, set R5_Expanded.

Reported-by: Curt
Signed-off-by: NeilBrown
Signed-off-by: Shaohua Li
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman

commit 4fdb10391bcafe698f06dd105b0d829744eb1926
Author: Linus Walleij
Date: Wed Oct 11 11:57:15 2017 +0200

pinctrl: adi2: Fix Kconfig build problem

[ Upstream commit 1c363531dd814dc4fe10865722bf6b0f72ce4673 ]

The build robot is complaining on Blackfin:

  drivers/pinctrl/pinctrl-adi2.c: In function 'port_setup':
  >> drivers/pinctrl/pinctrl-adi2.c:221:21: error: dereferencing pointer to incomplete type 'struct gpio_port_t'
       writew(readw(&regs->port_fer) & ~BIT(offset),
                       ^~
  drivers/pinctrl/pinctrl-adi2.c: In function 'adi_gpio_ack_irq':
  >> drivers/pinctrl/pinctrl-adi2.c:266:18: error: dereferencing pointer to incomplete type 'struct bfin_pint_regs'
        if (readl(&regs->invert_set) & pintbit)
                      ^~

It seems the driver needs to include additional headers to compile.

The Blackfin architecture was re-defining the Kconfig PINCTRL symbol, which
is not OK, so replaced this with PINCTRL_BLACKFIN_ADI2 which selects PINCTRL
and PINCTRL_ADI2 just like most arches do.

Further, the old GPIO driver symbol GPIO_ADI was possible to select at the
same time as selecting PINCTRL. This was not working because the arch-local
header contains an explicit #ifndef PINCTRL clause making compilation break
if you combine them. The same is true for DEBUG_MMRS.

Make sure the ADI2 pinctrl driver is not selected at the same time as the
old GPIO implementation. (This should be converted to use gpiolib or
pincontrol and move to drivers/...) Also make sure the old GPIO_ADI driver
or DEBUG_MMRS is not selected at the same time as the new PINCTRL
implementation, and only make PINCTRL_ADI2 selectable for the Blackfin
families that actually have it.

This way it is still possible to add e.g. I2C-based pin control expanders
on the Blackfin.

Cc: Steven Miao
Cc: Huanhuan Feng
Signed-off-by: Linus Walleij
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman
commit afeeff4d6156b4965af9cf30ba05b05f04b9c7b0
Author: Bin Liu
Date: Tue Dec 5 08:45:30 2017 -0600

usb: musb: da8xx: fix babble condition handling

commit bd3486ded7a0c313a6575343e6c2b21d14476645 upstream.

When a babble condition happens, the musb controller might automatically
turn off VBUS. On the DA8xx platform, the controller generates a drvvbus
interrupt for turning off VBUS along with the babble interrupt.

In this case, we should handle the babble interrupt first and recover from
the babble condition.

This change ignores the drvvbus interrupt if the babble interrupt is also
generated at the same time, so the babble recovery routine works properly.

Signed-off-by: Bin Liu
Signed-off-by: Greg Kroah-Hartman

commit 92ad6c13e17ebea6175b978dc0f5b485e7e68cf1
Author: nixiaoming
Date: Fri Sep 15 17:45:56 2017 +0800

tty fix oops when rmmod 8250

[ Upstream commit c79dde629d2027ca80329c62854a7635e623d527 ]

After rmmod 8250.ko, tty_kref_put starts a kwork (release_one_tty) to
release the proc interface, which oopses when accessing
driver->driver_name in proc_tty_unregister_driver.

Using jprobe, driver->driver_name was found to point into 8250.ko:

  static struct uart_driver serial8250_reg
  .driver_name = serial,

Use the name in proc_dir_entry instead of driver->driver_name to fix the
oops. Tested on linux 4.1.12:

BUG: unable to handle kernel paging request at ffffffffa01979de
IP: [] strchr+0x0/0x30
PGD 1a0d067 PUD 1a0e063 PMD 851c1f067 PTE 0
Oops: 0000 [#1] PREEMPT SMP
Modules linked in: ... ... [last unloaded: 8250]
CPU: 7 PID: 116 Comm: kworker/7:1 Tainted: G O 4.1.12 #1
Hardware name: Insyde RiverForest/Type2 - Board Product Name1, BIOS NE5KV904 12/21/2015
Workqueue: events release_one_tty
task: ffff88085b684960 ti: ffff880852884000 task.ti: ffff880852884000
RIP: 0010:[] [] strchr+0x0/0x30
RSP: 0018:ffff880852887c90 EFLAGS: 00010282
RAX: ffffffff81a5eca0 RBX: ffffffffa01979de RCX: 0000000000000004
RDX: ffff880852887d10 RSI: 000000000000002f RDI: ffffffffa01979de
RBP: ffff880852887cd8 R08: 0000000000000000 R09: ffff88085f5d94d0
R10: 0000000000000195 R11: 0000000000000000 R12: ffffffffa01979de
R13: ffff880852887d00 R14: ffffffffa01979de R15: ffff88085f02e840
FS: 0000000000000000(0000) GS:ffff88085f5c0000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: ffffffffa01979de CR3: 0000000001a0c000 CR4: 00000000001406e0
Stack:
 ffffffff812349b1 ffff880852887cb8 ffff880852887d10 ffff88085f5cd6c2
 ffff880852800a80 ffffffffa01979de ffff880852800a84 0000000000000010
 ffff88085bb28bd8 ffff880852887d38 ffffffff812354f0 ffff880852887d08
Call Trace:
 [] ? __xlate_proc_name+0x71/0xd0
 [] remove_proc_entry+0x40/0x180
 [] ? _raw_spin_lock_irqsave+0x41/0x60
 [] ? destruct_tty_driver+0x60/0xe0
 [] proc_tty_unregister_driver+0x28/0x40
 [] destruct_tty_driver+0x88/0xe0
 [] tty_driver_kref_put+0x1d/0x20
 [] release_one_tty+0x5a/0xd0
 [] process_one_work+0x139/0x420
 [] worker_thread+0x121/0x450
 [] ? process_scheduled_works+0x40/0x40
 [] kthread+0xec/0x110
 [] ? tg_rt_schedulable+0x210/0x220
 [] ? kthread_freezable_should_stop+0x80/0x80
 [] ret_from_fork+0x42/0x70
 [] ? kthread_freezable_should_stop+0x80/0x80

Signed-off-by: nixiaoming
Signed-off-by: Greg Kroah-Hartman
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman
commit 421910e924b63f2ccd4d433c77c5cd88063546ba
Author: Matthias Brugger
Date: Sat Oct 21 10:17:47 2017 +0200

soc: mediatek: pwrap: fix compiler errors

[ Upstream commit fb2c1934f30577756e55e24e8870b45c78da3bc2 ]

When compiling using sparse, we got the following error:

  drivers/soc/mediatek/mtk-pmic-wrap.c:686:25: error: dubious one-bit signed bitfield

Changing the data type to unsigned fixes this.

Signed-off-by: Matthias Brugger
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman

commit 7745382fe86ce5354dc459948c086d3b5778fb9d
Author: Michael Ellerman
Date: Mon Oct 9 21:52:44 2017 +1100

powerpc/perf/hv-24x7: Fix incorrect comparison in memord

[ Upstream commit 05c14c03138532a3cb2aa29c2960445c8753343b ]

In the hv-24x7 code there is a function memord() which tries to implement a
sort function returning -1, 0, 1. However one of the conditions is
incorrect, such that it can never be true, because we will have already
returned.

I don't believe there is a bug in practice though, because the comparisons
are an optimisation prior to calling memcmp().

Fix it by swapping the second comparison, so it can be true.

Reported-by: David Binderman
Signed-off-by: Michael Ellerman
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman

commit ab9d2571149322120efcf3d6ce9c24003c946ca1
Author: Martin Wilck
Date: Fri Oct 20 16:51:08 2017 -0500

scsi: hpsa: destroy sas transport properties before scsi_host

[ Upstream commit dfb2e6f46b3074eb85203d8f0888b71ec1c2e37a ]

This patch cleans up a lot of warnings when unloading the driver. A current
example of the stack trace starts with:

  [ 142.570715] sysfs group 'power' not found for kobject 'port-5:0'

There can be hundreds of these messages during a driver unload.

I am resubmitting this patch on behalf of Martin Wilck with his permission.
His original patch can be found here:
https://www.spinics.net/lists/linux-scsi/msg102085.html

This patch did not help until Hannes's commit 9441284fbc39
("scsi-fixup-kernel-warning-during-rmmod") was applied to the kernel.

---------------------------
Original patch description:
---------------------------

Unloading the hpsa driver causes warnings

  [ 1063.793652] WARNING: CPU: 1 PID: 4850 at ../fs/sysfs/group.c:237 device_del+0x54/0x240()
  [ 1063.793659] sysfs group ffffffff81cf21a0 not found for kobject 'port-2:0'

with two different stacks:

1)
  [ 1063.793774] [] device_del+0x54/0x240
  [ 1063.793780] [] transport_remove_classdev+0x4a/0x60
  [ 1063.793784] [] attribute_container_device_trigger+0xa6/0xb0
  [ 1063.793802] [] sas_port_delete+0x126/0x160 [scsi_transport_sas]
  [ 1063.793819] [] hpsa_free_sas_port+0x3c/0x70 [hpsa]

2)
  [ 1063.797103] [] device_del+0x54/0x240
  [ 1063.797118] [] sas_port_delete+0x12e/0x160 [scsi_transport_sas]
  [ 1063.797134] [] hpsa_free_sas_port+0x3c/0x70 [hpsa]

This is caused by the fact that the host device hostX is deleted before the
SAS transport devices hostX/port-a:b.

This patch fixes this by reverting the order of device deletions.

Tested-by: Don Brace
Reviewed-by: Don Brace
Signed-off-by: Martin Wilck
Signed-off-by: Don Brace
Signed-off-by: Martin K. Petersen
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman
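A small sketch of the comparison bug class described in the powerpc/perf/hv-24x7 entry above (illustrative code, not the hv-24x7 function): in a three-way ordering helper, the second test must be the opposite comparison of the first, otherwise it can never be true because the first test has already returned.

#include <stddef.h>
#include <string.h>

/* Corrected form: compare lengths first, then fall back to memcmp(). */
static int memord_example(const void *d1, size_t s1, const void *d2, size_t s2)
{
	if (s1 < s2)
		return 1;
	if (s1 > s2)	/* the buggy version repeated the first test here */
		return -1;

	return memcmp(d1, d2, s1);
}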
commit 1723d6668df5107c6026cbbf7afd2c615d74e006
Author: Martin Wilck
Date: Fri Oct 20 16:51:14 2017 -0500

scsi: hpsa: cleanup sas_phy structures in sysfs when unloading

[ Upstream commit 55ca38b4255bb336c2d35990bdb2b368e19b435a ]

I am resubmitting this patch on behalf of Martin Wilck with his permission.
The original patch can be found here:
https://www.spinics.net/lists/linux-scsi/msg102083.html

This patch did not help until Hannes's commit 9441284fbc39
("scsi-fixup-kernel-warning-during-rmmod") was applied to the kernel.

--------------------------------------
Original patch description from Martin:
--------------------------------------

When the hpsa module is unloaded using rmmod, dangling symlinks remain under
/sys/class/sas_phy. Fix this by calling sas_phy_delete() rather than
sas_phy_free() (which, according to comments, should not be called for PHYs
that have been set up successfully, anyway).

Tested-by: Don Brace
Reviewed-by: Don Brace
Signed-off-by: Martin Wilck
Signed-off-by: Don Brace
Signed-off-by: Martin K. Petersen
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman

commit 237e053346f1b049ca9f571040c51c61f0740766
Author: Alex Williamson
Date: Wed Oct 11 15:35:56 2017 -0600

PCI: Detach driver before procfs & sysfs teardown on device remove

[ Upstream commit 16b6c8bb687cc3bec914de09061fcb8411951fda ]

When removing a device, for example a VF being removed due to SR-IOV
teardown, a "soft" hot-unplug via 'echo 1 > remove' in sysfs, or an actual
hot-unplug, we first remove the procfs and sysfs attributes for the device
before attempting to release the device from any driver bound to it.
Unbinding the driver from the device can take time. The device might need to
write out data or it might be actively in use. If it's in use by userspace
through a vfio driver, the unbind might block until the user releases the
device. This leads to a potentially non-trivial amount of time where the
device exists, but we've torn down the interfaces that userspace uses to
examine devices; for instance, lspci might generate this sort of error:

  pcilib: Cannot open /sys/bus/pci/devices/0000:01:0a.3/config
  lspci: Unable to read the standard configuration space header of device 0000:01:0a.3

We don't seem to have any dependence on this teardown ordering in the
kernel, so let's unbind the driver first, which is also more symmetric with
the instantiation of the device in pci_bus_add_device().

Signed-off-by: Alex Williamson
Signed-off-by: Bjorn Helgaas
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman

commit 8f84f861f99c44f03f5ed57b40e3bfe951cbae3a
Author: Leon Romanovsky
Date: Wed Oct 25 07:41:11 2017 +0300

RDMA/cxgb4: Declare stag as __be32

[ Upstream commit 35fb2a88ed4b77356fa679a8525c869a3594e287 ]

The scqe.stag is actually __be32, fix it.

  drivers/infiniband/hw/cxgb4/cq.c:754:52: warning: cast to restricted __be32

Cc: Steve Wise
Signed-off-by: Leon Romanovsky
Reviewed-by: Steve Wise
Signed-off-by: Doug Ledford
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman

commit 769bca9339f0775aa2985d4cae9be0e0d72092af
Author: Christoph Hellwig
Date: Tue Oct 17 14:16:19 2017 -0700

xfs: fix incorrect extent state in xfs_bmap_add_extent_unwritten_real

[ Upstream commit 5e422f5e4fd71d18bc6b851eeb3864477b3d842e ]

There was one spot in xfs_bmap_add_extent_unwritten_real that didn't use the
passed in new extent state but always converted to normal, leading to wrong
behavior when converting from normal to unwritten.

Only found by code inspection; it seems like this code path to move a
partial extent from written to unwritten while merging it with the next
extent is rarely exercised.

Signed-off-by: Christoph Hellwig
Reviewed-by: Brian Foster
Reviewed-by: Darrick J. Wong
Signed-off-by: Darrick J. Wong
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman
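A brief sketch of the annotation pattern behind the RDMA/cxgb4 entry above (a generic example, not the cxgb4 structure): declaring an on-the-wire field as __be32 lets sparse flag missing byte-order conversions, and reads must go through be32_to_cpu().

#include <linux/types.h>
#include <asm/byteorder.h>

struct cqe_example {
	__be32 stag;			/* big-endian as seen on the wire */
};

static u32 read_stag(const struct cqe_example *cqe)
{
	return be32_to_cpu(cqe->stag);	/* cpu_to_be32() on the write side */
}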
commit c82209949bba864177300a681e0c736f982c87ab
Author: Brian Foster
Date: Thu Oct 26 09:31:16 2017 -0700

xfs: fix log block underflow during recovery cycle verification

[ Upstream commit 9f2a4505800607e537e9dd9dea4f55c4b0c30c7a ]

It is possible for mkfs to format very small filesystems with too small of
an internal log with respect to the various minimum size and block count
requirements. If this occurs when the log happens to be smaller than the
scan window used for cycle verification and the scan wraps the end of the
log, the start_blk calculation in xlog_find_head() underflows and leads to
an attempt to scan an invalid range of log blocks. This results in log
recovery failure and a failed mount.

Since there may be filesystems out in the wild with this kind of geometry,
we cannot simply refuse to mount. Instead, cap the scan window for cycle
verification to the size of the physical log. This ensures that the cycle
verification proceeds as expected when the scan wraps the end of the log.

Reported-by: Zorro Lang
Signed-off-by: Brian Foster
Reviewed-by: Darrick J. Wong
Signed-off-by: Darrick J. Wong
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman

commit fc4177eacfa6e95a243402790b362736879f1cfd
Author: Jiri Slaby
Date: Wed Oct 25 15:57:55 2017 +0200

l2tp: cleanup l2tp_tunnel_delete calls

[ Upstream commit 4dc12ffeaeac939097a3f55c881d3dc3523dff0c ]

l2tp_tunnel_delete does not return anything since commit 62b982eeb458
("l2tp: fix race condition in l2tp_tunnel_delete"). But call sites of
l2tp_tunnel_delete still do casts to void to avoid unused return value
warnings.

Kill these now useless casts.

Signed-off-by: Jiri Slaby
Cc: Sabrina Dubroca
Cc: Guillaume Nault
Cc: David S. Miller
Cc: netdev@vger.kernel.org
Acked-by: Guillaume Nault
Signed-off-by: David S. Miller
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman

commit 6a559523ee6626d191be1da9031845c368e4a6a9
Author: Christoph Hellwig
Date: Wed Oct 18 13:20:01 2017 +0200

nvme: use kref_get_unless_zero in nvme_find_get_ns

[ Upstream commit 2dd4122854f697afc777582d18548dded03ce5dd ]

For kref_get_unless_zero to protect against lookup vs free races we need to
use it in all places where we aren't guaranteed to already hold a reference.
There is no such guarantee in nvme_find_get_ns, so switch to
kref_get_unless_zero in this function.

Signed-off-by: Christoph Hellwig
Reviewed-by: Sagi Grimberg
Reviewed-by: Hannes Reinecke
Reviewed-by: Johannes Thumshirn
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman

commit e2fce5a5578d175cd61822473228ab454782bec7
Author: Osama Khan
Date: Sat Oct 21 10:42:21 2017 +0000

platform/x86: hp_accel: Add quirk for HP ProBook 440 G4

[ Upstream commit 163ca80013aafb6dc9cb295de3db7aeab9ab43f8 ]

Added support for HP ProBook 440 G4 laptops by including the accelerometer
orientation quirk for that device. Testing was performed based on the axis
orientation guidelines here:
https://www.kernel.org/doc/Documentation/misc-devices/lis3lv02d
which state "If the left side is elevated, X increases (becomes positive)".

When tested, on lifting the left edge, x values became increasingly
negative, thus indicating an inverted x-axis on the installed lis3lv02d
chip. This was compensated for by adding an entry for this device in
hp_accel.c specifying the quirk as x_inverted.

The patch was tested on a ProBook 440 G4 device and the x-axis as well as
the y- and z-axis values are now generated as per spec.

Signed-off-by: Osama Khan
Signed-off-by: Andy Shevchenko
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman
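A sketch of the lookup-versus-free pattern named in the nvme entry above, with illustrative types rather than the driver's own: when an object on a list can be freed concurrently with the lookup, take the reference with kref_get_unless_zero() so a kref that has already dropped to zero is never resurrected.

#include <linux/kref.h>
#include <linux/list.h>
#include <linux/spinlock.h>

struct ns_example {
	struct list_head entry;
	unsigned int nsid;
	struct kref ref;
};

static struct ns_example *find_get_ns(struct list_head *head,
				      spinlock_t *lock, unsigned int nsid)
{
	struct ns_example *ns, *found = NULL;

	spin_lock(lock);
	list_for_each_entry(ns, head, entry) {
		/* Only succeed if the refcount is still non-zero. */
		if (ns->nsid == nsid && kref_get_unless_zero(&ns->ref)) {
			found = ns;
			break;
		}
	}
	spin_unlock(lock);

	return found;
}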
commit 7fab68e1f7308b2b7e3ebdd2483eb7a9572ccec5
Author: Christophe JAILLET
Date: Sun Sep 10 13:19:38 2017 +0200

btrfs: tests: Fix a memory leak in error handling path in 'run_test()'

[ Upstream commit 9ca2e97fa3c3216200afe35a3b111ec51cc796d2 ]

If 'btrfs_alloc_path()' fails, we must free the resources already allocated,
as done in the other error handling paths in this function.

Signed-off-by: Christophe JAILLET
Reviewed-by: Qu Wenruo
Signed-off-by: David Sterba
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman

commit b7ada2c0ea291e7bf22b51745afd9f1ccfc91292
Author: Nick Desaulniers
Date: Fri Oct 27 09:33:41 2017 -0700

arm64: prevent regressions in compressed kernel image size when upgrading to binutils 2.27

[ Upstream commit fd9dde6abcb9bfe6c6bee48834e157999f113971 ]

Upon upgrading to binutils 2.27, we found that our lz4 and gzip compressed
kernel images were significantly larger, resulting in 10ms boot time
regressions.

As noted by Rahul:

"aarch64 binaries uses RELA relocations, where each relocation entry
includes an addend value. This is similar to x86_64. On x86_64, the addend
values are also stored at the relocation offset for relative relocations.
This is an optimization: in the case where code does not need to be
relocated, the loader can simply skip processing relative relocations. In
binutils-2.25, both bfd and gold linkers did this for x86_64, but only the
gold linker did this for aarch64.

The kernel build here is using the bfd linker, which stored zeroes at the
relocation offsets for relative relocations. Since a set of zeroes
compresses better than a set of non-zero addend values, this behavior was
resulting in much better lz4 compression.

The bfd linker in binutils-2.27 is now storing the actual addend values at
the relocation offsets. The behavior is now consistent with what it does for
x86_64 and what gold linker does for both architectures. The change happened
in this upstream commit:
https://sourceware.org/git/?p=binutils-gdb.git;a=commit;h=1f56df9d0d5ad89806c24e71f296576d82344613

Since a bunch of zeroes got replaced by non-zero addend values, we see the
side effect of lz4 compressed image being a bit bigger.

To get the old behavior from the bfd linker, "--no-apply-dynamic-relocs"
flag can be used:

  $ LDFLAGS="--no-apply-dynamic-relocs" make

With this flag, the compressed image size is back to what it was with
binutils-2.25.

If the kernel is using ASLR, there aren't additional runtime costs to
--no-apply-dynamic-relocs, as the relocations will need to be applied again
anyway after the kernel is relocated to a random address.

If the kernel is not using ASLR, then presumably the current default
behavior of the linker is better. Since the static linker performed the
dynamic relocs, and the kernel is not moved to a different address at load
time, it can skip applying the relocations all over again."

Some measurements:

$ ld -v
GNU ld (binutils-2.25-f3d35cf6) 2.25.51.20141117
            ^
$ ls -l vmlinux
-rwxr-x--- 1 ndesaulniers eng 300652760 Oct 26 11:57 vmlinux
$ ls -l Image.lz4-dtb
-rw-r----- 1 ndesaulniers eng 16932627 Oct 26 11:57 Image.lz4-dtb

$ ld -v
GNU ld (binutils-2.27-53dd00a1) 2.27.0.20170315
            ^
pre patch:
$ ls -l vmlinux
-rwxr-x--- 1 ndesaulniers eng 300376208 Oct 26 11:43 vmlinux
$ ls -l Image.lz4-dtb
-rw-r----- 1 ndesaulniers eng 18159474 Oct 26 11:43 Image.lz4-dtb

post patch:
$ ls -l vmlinux
-rwxr-x--- 1 ndesaulniers eng 300376208 Oct 26 12:06 vmlinux
$ ls -l Image.lz4-dtb
-rw-r----- 1 ndesaulniers eng 16932466 Oct 26 12:06 Image.lz4-dtb

By Siqi's measurement w/ gzip:

binutils 2.27 with this patch (with --no-apply-dynamic-relocs):
Image 41535488
Image.gz 13404067

binutils 2.27 without this patch (without --no-apply-dynamic-relocs):
Image 41535488
Image.gz 14125516

Any compression scheme should be able to get better results from the longer
runs of zeros, not just GZIP and LZ4.

10ms boot time savings isn't anything to get excited about, but users of
arm64+compression+bfd-2.27 should not have to pay a penalty for no runtime
improvement.

Reported-by: Gopinath Elanchezhian
Reported-by: Sindhuri Pentyala
Reported-by: Wei Wang
Suggested-by: Ard Biesheuvel
Suggested-by: Rahul Chaudhry
Suggested-by: Siqi Lin
Suggested-by: Stephen Hines
Signed-off-by: Nick Desaulniers
Reviewed-by: Ard Biesheuvel
[will: added comment to Makefile]
Signed-off-by: Will Deacon
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman
commit 52aaa748a948a1bd50fa2ef0d2b9ffad11b0e672
Author: Patel Jay P
Date: Mon Oct 23 06:05:53 2017 -0700

Ib/hfi1: Return actual operational VLs in port info query

[ Upstream commit 00f9203119dd2774564407c7a67b17d81916298b ]

__subn_get_opa_portinfo stores the value returned by hfi1_get_ib_cfg() as
operational vls. hfi1_get_ib_cfg() returns the vls_operational field in
hfi1_pportdata. The problem with this is that the value is always equal to
the vls_supported field in hfi1_pportdata.

The logic to calculate operational_vls is to set the value passed by the FM
(in the __subn_set_opa_portinfo routine). If no value is passed then the
default value is stored in operational_vls.

Field actual_vls_operational is calculated on the basis of the buffer
control table. Hence, modify hfi1_get_ib_cfg() to return
actual_operational_vls when used with the HFI1_IB_CFG_OP_VLS parameter.

Reviewed-by: Mike Marciniszyn
Reviewed-by: Dennis Dalessandro
Signed-off-by: Patel Jay P
Signed-off-by: Dennis Dalessandro
Signed-off-by: Doug Ledford
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman

commit 9102ed6a5f6a53434f69264ac8457994fde14a88
Author: tang.junhui
Date: Mon Oct 30 14:46:34 2017 -0700

bcache: fix wrong cache_misses statistics

[ Upstream commit c157313791a999646901b3e3c6888514ebc36d62 ]

Currently, cache missed IOs are identified by s->cache_miss, but actually,
there are many situations in which missed IOs are not assigned a value for
s->cache_miss in cached_dev_cache_miss(), for example, a bypassed IO
(s->iop.bypass = 1), or when the cache_bio allocation failed. In these
situations, it will go to out_put or out_submit, and s->cache_miss is null,
which leads bch_mark_cache_accounting() to treat this IO as a hit IO.

[ML: applied by 3-way merge]

Signed-off-by: tang.junhui
Reviewed-by: Michael Lyle
Reviewed-by: Coly Li
Signed-off-by: Jens Axboe
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman
commit c2a0531f59c363fd5b50be46b97248091b13484b
Author: Liang Chen
Date: Mon Oct 30 14:46:35 2017 -0700

bcache: explicitly destroy mutex while exiting

[ Upstream commit 330a4db89d39a6b43f36da16824eaa7a7509d34d ]

mutex_destroy does nothing most of the time, but it's better to call it to
make the code future proof and it also has some meaning for mutex debugging.

As Coly pointed out in a previous review, bcache_exit() may not be able to
handle all the references properly if userspace registers cache and backing
devices right before bch_debug_init runs and bch_debug_init fails later. So
do not expose the userspace interface until everything is ready, to avoid
that issue.

Signed-off-by: Liang Chen
Reviewed-by: Michael Lyle
Reviewed-by: Coly Li
Reviewed-by: Eric Wheeler
Signed-off-by: Jens Axboe
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman

commit 75f66eeae657688f1e600214ae297422834d3c43
Author: Bob Peterson
Date: Wed Sep 20 08:30:04 2017 -0500

GFS2: Take inode off order_write list when setting jdata flag

[ Upstream commit cc555b09d8c3817aeebda43a14ab67049a5653f7 ]

This patch fixes a deadlock caused when the jdata flag is set for inodes
that are already on the ordered write list. Since it is on the ordered write
list, log_flush calls gfs2_ordered_write which calls filemap_fdatawrite. But
since the inode had the jdata flag set, that calls gfs2_jdata_writepages,
which tries to start a new transaction. A new transaction cannot be started
because it tries to acquire the log_flush rwsem which is already locked by
the log flush operation.

The bottom line is: We cannot switch an inode from ordered to jdata until we
eliminate any ordered data pages (via log flush) or any log_flush operation
afterward will create the circular dependency above. So we need to flush the
log before setting the diskflags to switch the file mode, then we need to
remove the inode from the ordered writes list.

Before this patch, the log flush was done for jdata->ordered, but that's
wrong. If we're going from jdata to ordered, we don't need to call
gfs2_log_flush because the call to filemap_fdatawrite will do it for us:

  filemap_fdatawrite() -> __filemap_fdatawrite_range()
  __filemap_fdatawrite_range() -> do_writepages()
  do_writepages() -> gfs2_jdata_writepages()
  gfs2_jdata_writepages() -> gfs2_log_flush()

This patch modifies function do_gfs2_set_flags so that if a file has its
jdata flag set, and it's already on the ordered write list, the log will be
flushed and it will be removed from the list before setting the flag.

Signed-off-by: Bob Peterson
Signed-off-by: Andreas Gruenbacher
Acked-by: Abhijith Das
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman

commit 026ffaf65879b3ecc12ffa9c45dfa5894370bd60
Author: Douglas Gilbert
Date: Sun Oct 29 10:47:19 2017 -0400

scsi: scsi_debug: write_same: fix error report

[ Upstream commit e33d7c56450b0a5c7290cbf9e1581fab5174f552 ]

The scsi_debug driver incorrectly suggests there is an error with the SCSI
WRITE SAME command when the number_of_logical_blocks is greater than 1. It
will also suggest there is an error when NDOB (no data-out buffer) is set
and the number_of_logical_blocks is greater than 0. Both are valid; fix.

Signed-off-by: Douglas Gilbert
Signed-off-by: Martin K. Petersen
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman
commit d8914530f247900885850af7cccf81a2b33907b9
Author: Daniel Lezcano
Date: Thu Oct 19 19:05:58 2017 +0200

thermal/drivers/step_wise: Fix temperature regulation misbehavior

[ Upstream commit 07209fcf33542c1ff1e29df2dbdf8f29cdaacb10 ]

There is a particular situation when the cooling device is cpufreq and the
heat dissipation is not efficient enough where the temperature increases
little by little until reaching the critical threshold and leading to a SoC
reset.

The behavior is reproducible on a hikey6220 with bad heat dissipation (eg.
stacked with other boards).

Running a simple C program doing while(1); for each CPU of the SoC makes the
temperature reach the passive regulation trip point and end up at the
maximum allowed temperature followed by a reset.

This issue has been also reported by running the libhugetlbfs test suite.

What is observed is a ping pong between two cpu frequencies, 1.2GHz and
900MHz, while the temperature continues to grow.

It appears the step wise governor calls get_target_state() the first time
with the throttle set to true and the trend to 'raising'. The code selects
logically the next state, so the cpu frequency decreases from 1.2GHz to
900MHz, so far so good. The temperature decreases immediately but still
stays greater than the trip point, then get_target_state() is called again,
this time with the throttle set to true *and* the trend to 'dropping'. From
there the algorithm assumes we have to step down the state and the cpu
frequency jumps back to 1.2GHz. But the temperature is still higher than the
trip point, so get_target_state() is called with throttle=1 and
trend='raising' again, we jump to 900MHz, then get_target_state() is called
with throttle=1 and trend='dropping', we jump to 1.2GHz, etc ... but the
temperature does not stabilize and continues to increase.

[ 237.922654] thermal thermal_zone0: Trip0[type=1,temp=65000]:trend=1,throttle=1
[ 237.922678] thermal thermal_zone0: Trip1[type=1,temp=75000]:trend=1,throttle=1
[ 237.922690] thermal cooling_device0: cur_state=0
[ 237.922701] thermal cooling_device0: old_target=0, target=1
[ 238.026656] thermal thermal_zone0: Trip0[type=1,temp=65000]:trend=2,throttle=1
[ 238.026680] thermal thermal_zone0: Trip1[type=1,temp=75000]:trend=2,throttle=1
[ 238.026694] thermal cooling_device0: cur_state=1
[ 238.026707] thermal cooling_device0: old_target=1, target=0
[ 238.134647] thermal thermal_zone0: Trip0[type=1,temp=65000]:trend=1,throttle=1
[ 238.134667] thermal thermal_zone0: Trip1[type=1,temp=75000]:trend=1,throttle=1
[ 238.134679] thermal cooling_device0: cur_state=0
[ 238.134690] thermal cooling_device0: old_target=0, target=1

In this situation the temperature continues to increase while the trend is
oscillating between 'dropping' and 'raising'. We need to keep the current
state untouched if the throttle is set, so the temperature can decrease or a
higher state could be selected, thus preventing this oscillation.

Keeping the next_target untouched when 'throttle' is true at 'dropping' time
fixes the issue.

The following traces show the governor does not change the next state if
trend==2 (dropping) and throttle==1.

[ 2306.127987] thermal thermal_zone0: Trip0[type=1,temp=65000]:trend=1,throttle=1
[ 2306.128009] thermal thermal_zone0: Trip1[type=1,temp=75000]:trend=1,throttle=1
[ 2306.128021] thermal cooling_device0: cur_state=0
[ 2306.128031] thermal cooling_device0: old_target=0, target=1
[ 2306.231991] thermal thermal_zone0: Trip0[type=1,temp=65000]:trend=2,throttle=1
[ 2306.232016] thermal thermal_zone0: Trip1[type=1,temp=75000]:trend=2,throttle=1
[ 2306.232030] thermal cooling_device0: cur_state=1
[ 2306.232042] thermal cooling_device0: old_target=1, target=1
[ 2306.335982] thermal thermal_zone0: Trip0[type=1,temp=65000]:trend=0,throttle=1
[ 2306.336006] thermal thermal_zone0: Trip1[type=1,temp=75000]:trend=0,throttle=1
[ 2306.336021] thermal cooling_device0: cur_state=1
[ 2306.336034] thermal cooling_device0: old_target=1, target=1
[ 2306.439984] thermal thermal_zone0: Trip0[type=1,temp=65000]:trend=2,throttle=1
[ 2306.440008] thermal thermal_zone0: Trip1[type=1,temp=75000]:trend=2,throttle=0
[ 2306.440022] thermal cooling_device0: cur_state=1
[ 2306.440034] thermal cooling_device0: old_target=1, target=0

[ ... ]

After a while, if the temperature continues to increase, the next state
becomes 2 which is 720MHz on the hikey. That results in the temperature
stabilizing around the trip point.

[ 2455.831982] thermal thermal_zone0: Trip0[type=1,temp=65000]:trend=1,throttle=1
[ 2455.832006] thermal thermal_zone0: Trip1[type=1,temp=75000]:trend=1,throttle=0
[ 2455.832019] thermal cooling_device0: cur_state=1
[ 2455.832032] thermal cooling_device0: old_target=1, target=1
[ 2455.935985] thermal thermal_zone0: Trip0[type=1,temp=65000]:trend=0,throttle=1
[ 2455.936013] thermal thermal_zone0: Trip1[type=1,temp=75000]:trend=0,throttle=0
[ 2455.936027] thermal cooling_device0: cur_state=1
[ 2455.936040] thermal cooling_device0: old_target=1, target=1
[ 2456.043984] thermal thermal_zone0: Trip0[type=1,temp=65000]:trend=0,throttle=1
[ 2456.044009] thermal thermal_zone0: Trip1[type=1,temp=75000]:trend=0,throttle=0
[ 2456.044023] thermal cooling_device0: cur_state=1
[ 2456.044036] thermal cooling_device0: old_target=1, target=1
[ 2456.148001] thermal thermal_zone0: Trip0[type=1,temp=65000]:trend=1,throttle=1
[ 2456.148028] thermal thermal_zone0: Trip1[type=1,temp=75000]:trend=1,throttle=1
[ 2456.148042] thermal cooling_device0: cur_state=1
[ 2456.148055] thermal cooling_device0: old_target=1, target=2
[ 2456.252009] thermal thermal_zone0: Trip0[type=1,temp=65000]:trend=2,throttle=1
[ 2456.252041] thermal thermal_zone0: Trip1[type=1,temp=75000]:trend=2,throttle=0
[ 2456.252058] thermal cooling_device0: cur_state=2
[ 2456.252075] thermal cooling_device0: old_target=2, target=1

IOW, this change is needed to keep the state for a cooling device if the
temperature trend is oscillating while the temperature increases slightly.

Without this change, the situation above leads to a catastrophic crash by a
hardware reset on hikey. This issue has been reported to happen on an OMAP
dra7xx also.

Signed-off-by: Daniel Lezcano
Cc: Keerthy
Cc: John Stultz
Cc: Leo Yan
Tested-by: Keerthy
Reviewed-by: Keerthy
Signed-off-by: Eduardo Valentin
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman

commit 019433db872f9032cabd2a4a0fbd3df2bb14f539
Author: Kuninori Morimoto
Date: Wed Nov 1 07:16:58 2017 +0000

ASoC: rsnd: rsnd_ssi_run_mods() needs to care ssi_parent_mod

[ Upstream commit 21781e87881f9c420871b1d1f3f29d4cd7bffb10 ]

The SSI parent mod might be NULL. ssi_parent_mod() needs to care about it.
Otherwise, it uses a negative shift. This patch fixes it.

Signed-off-by: Kuninori Morimoto
Signed-off-by: Mark Brown
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman
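A compact sketch of the decision rule the step_wise entry above describes (simplified, not the governor's actual code): while the zone is still throttled, a 'dropping' trend must not step the cooling state back down, otherwise the state oscillates while the temperature keeps climbing.

enum trend_example { TREND_STABLE, TREND_RAISING, TREND_DROPPING };

static unsigned long next_state_example(unsigned long cur,
					enum trend_example trend,
					int throttle)
{
	switch (trend) {
	case TREND_RAISING:
		return throttle ? cur + 1 : cur;
	case TREND_DROPPING:
		/* Keep the state while throttled; only unwind once the
		 * temperature has actually fallen below the trip point. */
		return throttle ? cur : (cur ? cur - 1 : 0);
	default:
		return cur;
	}
}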
commit cf16dac8bd9868cf22203ac498e8e997ad7b0ca1
Author: Gao Feng
Date: Tue Oct 31 18:25:37 2017 +0800

ppp: Destroy the mutex when cleanup

[ Upstream commit f02b2320b27c16b644691267ee3b5c110846f49e ]

mutex_destroy only makes sense when mutex debugging is enabled. For
readability, it's better to invoke it in the exit function when the init
function invokes mutex_init.

Signed-off-by: Gao Feng
Acked-by: Guillaume Nault
Signed-off-by: David S. Miller
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman

commit 27f5597c98590cdd01f2c9bebbf36cb4bce28253
Author: Michał Mirosław
Date: Tue Sep 19 04:48:10 2017 +0200

clk: tegra: Fix cclk_lp divisor register

[ Upstream commit 54eff2264d3e9fd7e3987de1d7eba1d3581c631e ]

According to comments in code and common sense, cclk_lp uses its own
divisor, not cclk_g's.

Fixes: b08e8c0ecc42 ("clk: tegra: add clock support for Tegra30")
Signed-off-by: Michał Mirosław
Acked-By: Peter De Schrijver
Signed-off-by: Thierry Reding
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman

commit 54809e38a6298fde41c0b72c2423f5d7c9e847fe
Author: Leo Yan
Date: Fri Sep 1 08:47:14 2017 +0800

clk: hi6220: mark clock cs_atb_syspll as critical

[ Upstream commit d2a3671ebe6479483a12f94fcca63c058d95ad64 ]

Clock cs_atb_syspll is the pll used for the coresight trace bus; when clock
cs_atb_syspll is disabled, operating its child clock node cs_atb results in
a system hang. So mark clock cs_atb_syspll as critical to keep it enabled.

Cc: Guodong Xu
Cc: Zhangfei Gao
Cc: Haojian Zhuang
Signed-off-by: Leo Yan
Signed-off-by: Michael Turquette
Link: lkml.kernel.org/r/1504226835-2115-2-git-send-email-leo.yan@linaro.org
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman

commit 47b63ea40ee158fbd032047935cf8b4d388ccd24
Author: Sébastien Szymanski
Date: Tue Aug 1 12:40:07 2017 +0200

clk: imx6: refine hdmi_isfr's parent to make HDMI work on i.MX6 SoCs w/o VPU

[ Upstream commit c68ee58d9ee7b856ac722f18f4f26579c8fbd2b4 ]

On i.MX6 SoCs without VPU (in my case MCIMX6D4AVT10AC), the hdmi driver
fails to probe:

  [    2.540030] dwhdmi-imx 120000.hdmi: Unsupported HDMI controller (0000:00:00)
  [    2.548199] imx-drm display-subsystem: failed to bind 120000.hdmi (ops dw_hdmi_imx_ops): -19
  [    2.557403] imx-drm display-subsystem: master bind failed: -19

That's because hdmi_isfr's parent, video_27m, is not correctly ungated. As
explained in commit 5ccc248cc537 ("ARM: imx6q: clk: Add support for
mipi_core_cfg clock as a shared clock gate"), video_27m is gated by
CCM_CCGR3[CG8]. On i.MX6 SoCs with VPU, the hdmi is working thanks to the
CCM_CMEOR[mod_en_ov_vpu] bit which makes video_27m ungated whatever is in
CCM_CCGR3[CG8]. The issue can be reproduced by setting
CCMEOR[mod_en_ov_vpu] to 0.

Make the HDMI work in every case by setting hdmi_isfr's parent to
mipi_core_cfg.

Signed-off-by: Sébastien Szymanski
Reviewed-by: Fabio Estevam
Signed-off-by: Stephen Boyd
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman

commit d6b6302c36b5459f3f2fa8d5e978b3e25f7e2723
Author: Chen Zhong
Date: Thu Oct 5 11:50:23 2017 +0800

clk: mediatek: add the option for determining PLL source clock

[ Upstream commit c955bf3998efa3355790a4d8c82874582f1bc727 ]

Since the previous setup always configures the PLL from the 26MHz crystal,
and this doesn't hold in every MediaTek platform, the patch adds the
flexibility of assigning an extra member for determining the PLL source
clock.

Signed-off-by: Chen Zhong
Signed-off-by: Sean Wang
Signed-off-by: Stephen Boyd
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman
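Illustrating the init/exit symmetry mentioned in the ppp and bcache entries above (a generic module skeleton, not the ppp code): mutex_destroy() is a no-op unless mutex debugging is enabled, but pairing it with mutex_init() keeps teardown symmetric and lets CONFIG_DEBUG_MUTEXES catch late users.

#include <linux/module.h>
#include <linux/mutex.h>

static struct mutex example_mutex;

static int __init example_init(void)
{
	mutex_init(&example_mutex);
	return 0;
}

static void __exit example_exit(void)
{
	mutex_destroy(&example_mutex);	/* pairs with mutex_init() above */
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");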
commit 2850c3ec0d25a4f729bd7f2212e178c9303e697a
Author: Jan Kara
Date: Fri Nov 3 12:21:21 2017 +0100

mm: Handle 0 flags in _calc_vm_trans() macro

[ Upstream commit 592e254502041f953e84d091eae2c68cba04c10b ]

_calc_vm_trans() does not handle the situation when some of the passed flags
are 0 (which can happen if these VM flags do not make sense for the
architecture). Improve the _calc_vm_trans() macro to return 0 in such a
situation. Since all passed flags are constant, this does not add any
runtime overhead.

Signed-off-by: Jan Kara
Signed-off-by: Dan Williams
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman

commit 18498f1c7089035faba2bc9ba64c5c96542ae0c4
Author: Robert Baronescu
Date: Tue Oct 10 13:22:00 2017 +0300

crypto: tcrypt - fix buffer lengths in test_aead_speed()

[ Upstream commit 7aacbfcb331ceff3ac43096d563a1f93ed46e35e ]

Fix the way the length of the buffers used for encryption / decryption is
computed. For example, in the case of encryption, the input buffer does not
contain an authentication tag.

Signed-off-by: Robert Baronescu
Signed-off-by: Herbert Xu
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman

commit 2ed46cbf23fc48ea63e3a3eaa820de5d54534cb6
Author: Suzuki K Poulose
Date: Fri Nov 3 11:45:18 2017 +0000

arm-ccn: perf: Prevent module unload while PMU is in use

[ Upstream commit c7f5828bf77dcbd61d51f4736c1d5aa35663fbb4 ]

When the PMU driver is built as a module, perf expects pmu->module to be
valid, so that the driver is prevented from being unloaded while it is in
use. Fix the CCN pmu driver to fill in this field.

Fixes: a33b0daab73a0 ("bus: ARM CCN PMU driver")
Cc: Pawel Moll
Cc: Will Deacon
Acked-by: Mark Rutland
Signed-off-by: Suzuki K Poulose
Signed-off-by: Will Deacon
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman

commit c843e9f8f97ff0d7c3a8232a314b20c1328ba697
Author: Eryu Guan
Date: Wed Nov 1 21:43:50 2017 -0700

xfs: truncate pagecache before writeback in xfs_setattr_size()

[ Upstream commit 350976ae21873b0d36584ea005076356431b8f79 ]

On truncate down, if the new size is not block size aligned, we zero the
rest of the block to avoid exposing stale data to userspace, and
iomap_truncate_page() skips zeroing if the range is already in unwritten
state or a hole. Then we write back from the on-disk i_size to the new size
if this range hasn't been written to disk yet, and truncate the page cache
beyond the new EOF and set the in-core i_size.

The problem is that we could write data between di_size and newsize before
removing the page cache beyond newsize, as the extents may still be in
unwritten state right after a buffer write. As such, the page of data that
newsize lies in has not been zeroed by page cache invalidation before it is
written, and xfs_do_writepage() hasn't triggered its "zero data beyond EOF"
case because we haven't updated the in-core i_size yet. Then a subsequent
mmap read could see non-zeros past EOF.

I occasionally see this in fsx runs in fstests generic/112; a simplified fsx
operation sequence is like (assuming 4k block size xfs):

  fallocate 0x0 0x1000 0x0 keep_size
  write 0x0 0x1000 0x0
  truncate 0x0 0x800 0x1000
  punch_hole 0x0 0x800 0x800
  mapread 0x0 0x800 0x800

where fallocate allocates an unwritten extent but doesn't update i_size,
buffer write populates the page cache while the extent is still unwritten,
truncate skips zeroing the page past new EOF and writes the page to disk,
punch_hole invalidates the page cache, and at last mapread reads the block
back and sees non-zero beyond EOF.

Fix it by moving truncate_setsize() to before writeback so the page cache
invalidation zeros the partial page at the new EOF. This also triggers "zero
data beyond EOF" in xfs_do_writepage() at writeback time, because newsize
has been set and the page straddles the newsize.

Also fixed the wrong 'end' param of the filemap_write_and_wait_range() call
while we're at it; the 'end' is inclusive and should be 'newsize - 1'.

Suggested-by: Dave Chinner
Signed-off-by: Eryu Guan
Acked-by: Dave Chinner
Reviewed-by: Brian Foster
Reviewed-by: Darrick J. Wong
Signed-off-by: Darrick J. Wong
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman
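A userspace sketch of the guarded translation macro described in the mm entry above (the macro below is a stand-in, not necessarily the exact upstream definition): flag bit1 in x is scaled to bit2, and the result is 0 when either bit is 0, which also avoids the division by zero the unguarded form would hit.

#include <stdio.h>

#define CALC_VM_TRANS(x, bit1, bit2) \
	((!(bit1) || !(bit2)) ? 0 : \
	 ((bit1) <= (bit2) ? ((x) & (bit1)) * ((bit2) / (bit1)) : \
			     ((x) & (bit1)) / ((bit1) / (bit2))))

int main(void)
{
	/* 0x4 set and translated up to 0x10 */
	printf("%lx\n", CALC_VM_TRANS(0x4UL, 0x4UL, 0x10UL));
	/* bit1 == 0: flag unsupported on this arch, result is simply 0 */
	printf("%lx\n", CALC_VM_TRANS(0x4UL, 0x0UL, 0x10UL));
	return 0;
}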
commit 03bfadfb0d7990bf558dabd6b26e4f06bc32c45b
Author: Gary R Hook
Date: Fri Nov 3 10:50:34 2017 -0600

iommu/amd: Limit the IOVA page range to the specified addresses

[ Upstream commit b92b4fb5c14257c0e7eae291ecc1f7b1962e1699 ]

The extent of pages specified when applying a reserved region should include
up to the last page of the range, but not the page following the range.

Signed-off-by: Gary R Hook
Fixes: 8d54d6c8b8f3 ('iommu/amd: Implement apply_dm_region call-back')
Signed-off-by: Alex Williamson
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman

commit cb0acb37010c216e4781224ad4ab4c12ca9cb5fe
Author: Liu Bo
Date: Fri Nov 3 11:24:44 2017 -0600

badblocks: fix wrong return value in badblocks_set if badblocks are disabled

[ Upstream commit 39b4954c0a1556f8f7f1fdcf59a227117fcd8a0b ]

MD's rdev_set_badblocks() expects that badblocks_set() returns 1 if
badblocks are disabled; otherwise, rdev_set_badblocks() will record
superblock changes and return success in that case and md will fail to
report an IO error which it should.

This bug has existed since badblocks were introduced in commit 9e0e252a048b
("badblocks: Add core badblock management code").

Signed-off-by: Liu Bo
Acked-by: Guoqing Jiang
Signed-off-by: Shaohua Li
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman

commit dcdca123814c5303f0e4d0fba67756d35bb0fd1f
Author: Jiang Yi
Date: Fri Aug 11 11:29:44 2017 +0800

target/file: Do not return error for UNMAP if length is zero

[ Upstream commit 594e25e73440863981032d76c9b1e33409ceff6e ]

The function fd_execute_unmap() in target_core_file.c calls

  ret = file->f_op->fallocate(file, mode, pos, len);

Some filesystems implement fallocate() to return an error if the length is
zero (e.g. btrfs), but according to the SCSI Block Commands spec, UNMAP
should return success for zero length.

Signed-off-by: Jiang Yi
Signed-off-by: Nicholas Bellinger
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman

commit 998201fdc5c9144a2dfaec8811a722c3c5e731d3
Author: tangwenji
Date: Thu Aug 24 19:59:37 2017 +0800

target: fix condition return in core_pr_dump_initiator_port()

[ Upstream commit 24528f089d0a444070aa4f715ace537e8d6bf168 ]

When pr_reg->isid_present_at_reg is false, this function should return.

This fixes a regression originally introduced by:

  commit d2843c173ee53cf4c12e7dfedc069a5bc76f0ac5
  Author: Andy Grover
  Date: Thu May 16 10:40:55 2013 -0700

  target: Alter core_pr_dump_initiator_port for ease of use

Signed-off-by: tangwenji
Signed-off-by: Nicholas Bellinger
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman
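A sketch of the return-value contract stated in the badblocks entry above (illustrative structure, not the block layer's): when badblocks are disabled for a device, the set operation must report that as 1 so callers such as rdev_set_badblocks() propagate an I/O error instead of pretending the block was recorded.

#include <stdbool.h>

struct badblocks_example {
	bool disabled;
	/* ... recorded ranges ... */
};

/* Returns 0 if the range was recorded, 1 if it could not be. */
static int badblocks_set_example(struct badblocks_example *bb)
{
	if (bb->disabled)
		return 1;	/* not 0: the caller must see a failure */

	/* record the bad range here */
	return 0;
}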
commit a4f54ec403da9f5eafaba60d67ae8b2613a1bae5
Author: tangwenji
Date: Fri Sep 15 16:03:13 2017 +0800

iscsi-target: fix memory leak in lio_target_tiqn_addtpg()

[ Upstream commit 12d5a43b2dffb6cd28062b4e19024f7982393288 ]

tpg must be freed when the call to core_tpg_register() fails.

Signed-off-by: tangwenji
Signed-off-by: Nicholas Bellinger
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman

commit e086a82a926ace86fc14da0442437c2024d68001
Author: Bart Van Assche
Date: Tue Oct 31 11:03:17 2017 -0700

target/iscsi: Fix a race condition in iscsit_add_reject_from_cmd()

[ Upstream commit cfe2b621bb18d86e93271febf8c6e37622da2d14 ]

Avoid that cmd->se_cmd.se_tfo is read after a command has already been
freed.

Signed-off-by: Bart Van Assche
Cc: Christoph Hellwig
Cc: Mike Christie
Reviewed-by: Hannes Reinecke
Signed-off-by: Nicholas Bellinger
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman

commit abc4b4420af8d2d31fe38dddf998855f5cf9d8dc
Author: Kuppuswamy Sathyanarayanan
Date: Sun Oct 29 02:49:54 2017 -0700

platform/x86: intel_punit_ipc: Fix resource ioremap warning

[ Upstream commit 6cc8cbbc8868033f279b63e98b26b75eaa0006ab ]

For the PUNIT device, the ISPDRIVER_IPC and GTDDRIVER_IPC resources are not
mandatory. So when the PMC IPC driver creates a PUNIT device, if these
resources are not available then it creates dummy resource entries for the
missing resources. But during PUNIT device probe, doing ioremap on these
dummy resources generates the following warning messages:

  intel_punit_ipc: can't request region for resource [mem 0x00000000]
  intel_punit_ipc: can't request region for resource [mem 0x00000000]
  intel_punit_ipc: can't request region for resource [mem 0x00000000]
  intel_punit_ipc: can't request region for resource [mem 0x00000000]

This patch fixes this issue by adding an extra check for the resource size
before performing the ioremap operation.

Signed-off-by: Kuppuswamy Sathyanarayanan
Signed-off-by: Andy Shevchenko
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman

commit 6e5a846d5172c5225bda764797a07c828b5b2650
Author: Christophe Leroy
Date: Wed Oct 18 11:16:47 2017 +0200

powerpc/ipic: Fix status get and status clear

[ Upstream commit 6b148a7ce72a7f87c81cbcde48af014abc0516a9 ]

IPIC status is provided by register IPIC_SERSR and not by IPIC_SERMR, which
is the mask register.

Signed-off-by: Christophe Leroy
Signed-off-by: Michael Ellerman
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman

commit d7e7c431d62101505a6a0321986dba498252fd7f
Author: William A. Kennington III
Date: Fri Sep 22 16:58:00 2017 -0700

powerpc/opal: Fix EBUSY bug in acquiring tokens

[ Upstream commit 71e24d7731a2903b1ae2bba2b2971c654d9c2aa6 ]

The current code checks the completion map to look for the first token that
is complete. In some cases, a completion can come in but the token can still
be on lease to the caller processing the completion. If this completed but
unreleased token is the first token found in the bitmap by another task
trying to acquire a token, then the __test_and_set_bit call will fail since
the token will still be on lease. The acquisition will then fail with an
EBUSY.

This patch reorganizes the acquisition code to look at the
opal_async_token_map for an unleased token. If the token has no lease it
must have no outstanding completions, so we should never see an EBUSY,
unless we have leased out too many tokens. Since
opal_async_get_token_interruptible is protected by a semaphore, we will
practically never see EBUSY anymore.

Fixes: 8d7248232208 ("powerpc/powernv: Infrastructure to support OPAL async completion")
Signed-off-by: William A. Kennington III
Signed-off-by: Michael Ellerman
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman
This patch reorganizes the acquisition code to look at the opal_async_token_map for an unleased token. If the token has no lease it must have no outstanding completions, so we should never see an EBUSY, unless we have leased out too many tokens. Since opal_async_get_token_interruptible is protected by a semaphore, we will practically never see EBUSY anymore. Fixes: 8d7248232208 ("powerpc/powernv: Infrastructure to support OPAL async completion") Signed-off-by: William A. Kennington III Signed-off-by: Michael Ellerman Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit a463f9c5dfd1bb62f128e6883103117af6f3db1f Author: KUWAZAWA Takuya Date: Sun Oct 15 20:54:10 2017 +0900 netfilter: ipvs: Fix inappropriate output of procfs [ Upstream commit c5504f724c86ee925e7ffb80aa342cfd57959b13 ] Information about ipvs in a different network namespace can be seen via procfs. How to reproduce: # ip netns add ns01 # ip netns add ns02 # ip netns exec ns01 ip a add dev lo 127.0.0.1/8 # ip netns exec ns02 ip a add dev lo 127.0.0.1/8 # ip netns exec ns01 ipvsadm -A -t 10.1.1.1:80 # ip netns exec ns02 ipvsadm -A -t 10.1.1.2:80 The ipvsadm displays information about its own network namespace only. # ip netns exec ns01 ipvsadm -Ln IP Virtual Server version 1.2.1 (size=4096) Prot LocalAddress:Port Scheduler Flags -> RemoteAddress:Port Forward Weight ActiveConn InActConn TCP 10.1.1.1:80 wlc # ip netns exec ns02 ipvsadm -Ln IP Virtual Server version 1.2.1 (size=4096) Prot LocalAddress:Port Scheduler Flags -> RemoteAddress:Port Forward Weight ActiveConn InActConn TCP 10.1.1.2:80 wlc But I can see information about the other network namespace via procfs. # ip netns exec ns01 cat /proc/net/ip_vs IP Virtual Server version 1.2.1 (size=4096) Prot LocalAddress:Port Scheduler Flags -> RemoteAddress:Port Forward Weight ActiveConn InActConn TCP 0A010101:0050 wlc TCP 0A010102:0050 wlc # ip netns exec ns02 cat /proc/net/ip_vs IP Virtual Server version 1.2.1 (size=4096) Prot LocalAddress:Port Scheduler Flags -> RemoteAddress:Port Forward Weight ActiveConn InActConn TCP 0A010102:0050 wlc Signed-off-by: KUWAZAWA Takuya Acked-by: Julian Anastasov Signed-off-by: Pablo Neira Ayuso Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit b3b6d1eea0de7115353a3706e5b0c3ff56be0663 Author: Matthias Brugger Date: Mon Oct 30 12:37:55 2017 +0100 iommu/mediatek: Fix driver name [ Upstream commit 395df08d2e1de238a9c8c33fdcd0e2160efd63a9 ] There exist two Mediatek iommu drivers for the two different generations of the device. But both drivers have the same name "mtk-iommu". This breaks the registration of the second driver: Error: Driver 'mtk-iommu' is already registered, aborting... Fix this by changing the name for the first generation to "mtk-iommu-v1". Fixes: b17336c55d89 ("iommu/mediatek: add support for mtk iommu generation one HW") Signed-off-by: Matthias Brugger Signed-off-by: Alex Williamson Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit 9a4bf05126f42c2632729ab0da503021d74ed454 Author: Mika Westerberg Date: Fri Oct 13 21:35:43 2017 +0300 PCI: Do not allocate more buses than available in parent [ Upstream commit a20c7f36bd3d20d245616ae223bb9d05dfb6f050 ] One can ask more buses to be reserved for hotplug bridges by passing pci=hpbussize=N in the kernel command line. If the parent bus does not have enough bus space available we incorrectly create a child bus with the requested number of subordinate buses.
In the example below hpbussize is set to one more than we have available buses in the root port: pci 0000:07:00.0: [8086:1578] type 01 class 0x060400 pci 0000:07:00.0: scanning [bus 00-00] behind bridge, pass 0 pci 0000:07:00.0: bridge configuration invalid ([bus 00-00]), reconfiguring pci 0000:07:00.0: scanning [bus 00-00] behind bridge, pass 1 pci_bus 0000:08: busn_res: can not insert [bus 08-ff] under [bus 07-3f] (conflicts with (null) [bus 07-3f]) pci_bus 0000:08: scanning bus ... pci_bus 0000:0a: bus scan returning with max=40 pci_bus 0000:0a: busn_res: [bus 0a-ff] end is updated to 40 pci_bus 0000:0a: [bus 0a-40] partially hidden behind bridge 0000:07 [bus 07-3f] pci_bus 0000:08: bus scan returning with max=40 pci_bus 0000:08: busn_res: [bus 08-ff] end is updated to 40 Instead of allowing this, limit the subordinate number to be less than or equal to the maximum subordinate number allocated for the parent bus (if it has any). Signed-off-by: Mika Westerberg [bhelgaas: remove irrelevant dmesg messages] Signed-off-by: Bjorn Helgaas Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit 29a404be7b30237e0894057422407722d3f8bf45 Author: Shriya Date: Fri Oct 13 10:06:41 2017 +0530 powerpc/powernv/cpufreq: Fix the frequency read by /proc/cpuinfo [ Upstream commit cd77b5ce208c153260ed7882d8910f2395bfaabd ] The call to /proc/cpuinfo in turn calls cpufreq_quick_get() which returns the last frequency requested by the kernel, but may not reflect the actual frequency the processor is running at. This patch makes a call to cpufreq_get() instead, which returns the current frequency reported by the hardware. Fixes: fb5153d05a7d ("powerpc: powernv: Implement ppc_md.get_proc_freq()") Signed-off-by: Shriya Signed-off-by: Michael Ellerman Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit f44d28e0348dbb081784ab02be57954d75efa13e Author: Qiang Date: Thu Sep 28 11:54:34 2017 +0800 PCI/PME: Handle invalid data when reading Root Status [ Upstream commit 3ad3f8ce50914288731a3018b27ee44ab803e170 ] PCIe PME and native hotplug share the same interrupt number, so hotplug interrupts are also processed by PME. In some cases, e.g., a Link Down interrupt, a device may be present but unreachable, so when we try to read its Root Status register, the read fails and we get all ones data (0xffffffff). Previously, we interpreted that data as PCI_EXP_RTSTA_PME being set, i.e., "some device has asserted PME," so we scheduled pcie_pme_work_fn(). This caused an infinite loop because pcie_pme_work_fn() tried to handle PME requests until PCI_EXP_RTSTA_PME is cleared, but with the link down, PCI_EXP_RTSTA_PME can't be cleared. Check for the invalid 0xffffffff data everywhere we read the Root Status register. 1469d17dd341 ("PCI: pciehp: Handle invalid data when reading from non-existent devices") added similar checks in the hotplug driver. Signed-off-by: Qiang Zheng [bhelgaas: changelog, also check in pcie_pme_work_fn(), use "~0" to follow other similar checks] Signed-off-by: Bjorn Helgaas Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit 5a7192bc389ea5ac194d046d6a218b7431680a31 Author: Peter Ujfalusi Date: Wed Nov 8 12:02:25 2017 +0200 dmaengine: ti-dma-crossbar: Correct am335x/am43xx mux value type [ Upstream commit 288e7560e4d3e259aa28f8f58a8dfe63627a1bf6 ] The 0x1f mask used is only valid for the am335x family of SoCs; a different family using this type of crossbar might have a different number of selectable events. In the case of the am43xx family, for example, a 0x3f mask should have been used.
Instead of trying to handle each family's mask, just use the u8 type to store the mux value, since the event offsets are aligned to a byte offset. Fixes: 42dbdcc6bf965 ("dmaengine: ti-dma-crossbar: Add support for crossbar on AM33xx/AM43xx") Signed-off-by: Peter Ujfalusi Signed-off-by: Vinod Koul Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit 03a48dc96505f5b900e44d50e69ea3ee3686c279 Author: Pankaj Bharadiya Date: Tue Nov 7 16:16:19 2017 +0530 ASoC: Intel: Skylake: Fix uuid_module memory leak in failure case [ Upstream commit f8e066521192c7debe59127d90abbe2773577e25 ] In the loop that adds the uuid_module to the uuid_list list, allocated memory is not properly freed in the error path; free uuid_list whenever any of the memory allocations in the loop fails, to avoid a memory leak. Signed-off-by: Pankaj Bharadiya Signed-off-by: Guneshwor Singh Acked-By: Vinod Koul Signed-off-by: Mark Brown Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit 9146b10f8cd64c2e467c8f98ccb19dcb15812e83 Author: Philipp Zabel Date: Tue Nov 7 13:12:17 2017 +0100 rtc: pcf8563: fix output clock rate [ Upstream commit a3350f9c57ffad569c40f7320b89da1f3061c5bb ] The pcf8563_clkout_recalc_rate function erroneously ignores the frequency index read from the CLKO register and always returns 32768 Hz. Fixes: a39a6405d5f9 ("rtc: pcf8563: add CLKOUT to common clock framework") Signed-off-by: Philipp Zabel Signed-off-by: Alexandre Belloni Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit cf53526f3312f972043d0c6fa745cfb9f777a0c9 Author: Christophe JAILLET Date: Thu Nov 9 18:09:28 2017 +0100 video: fbdev: au1200fb: Return an error code if a memory allocation fails [ Upstream commit 8cae353e6b01ac3f18097f631cdbceb5ff28c7f3 ] 'ret' is known to be 0 at this point. In case of a memory allocation error in 'framebuffer_alloc()', return -ENOMEM instead. Signed-off-by: Christophe JAILLET Cc: Tejun Heo Signed-off-by: Bartlomiej Zolnierkiewicz Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit 90e2591f6f3fd15ac766ffbaed119f43f01fef33 Author: Christophe JAILLET Date: Thu Nov 9 18:09:28 2017 +0100 video: fbdev: au1200fb: Release some resources if a memory allocation fails [ Upstream commit 451f130602619a17c8883dd0b71b11624faffd51 ] We should go through the error handling code instead of returning -ENOMEM directly. Signed-off-by: Christophe JAILLET Cc: Tejun Heo Signed-off-by: Bartlomiej Zolnierkiewicz Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit 92c3c7db8336f6678b31ee0d08ecf3bab81a80ff Author: Ladislav Michl Date: Thu Nov 9 18:09:30 2017 +0100 video: udlfb: Fix read EDID timeout [ Upstream commit c98769475575c8a585f5b3952f4b5f90266f699b ] While the usb_control_msg() function expects a timeout in milliseconds, a value of HZ is used. Replace it with USB_CTRL_GET_TIMEOUT and also fix the error message, which looks like: udlfb: Read EDID byte 78 failed err ffffff92 As the error is either a negative errno or the number of bytes transferred, use the %d format specifier. The returned EDID is in the second byte, so return an error when fewer than two bytes are received.
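As an illustration of the pattern the commit describes, here is a hedged sketch of such an EDID-byte read; the helper name and the control-request values are assumptions for the example, not the driver's exact code:

/*
 * Sketch only: read one EDID byte with a millisecond timeout
 * (USB_CTRL_GET_TIMEOUT) instead of HZ, log the negative errno or short
 * read with %d, and require at least two bytes because the EDID data is
 * in the second byte of the reply.  Request/requesttype/index values are
 * placeholders.
 */
static int example_read_edid_byte(struct usb_device *udev, int i, u8 *out)
{
	u8 *rbuf;
	int ret;

	rbuf = kmalloc(2, GFP_KERNEL);	/* DMA-able buffer for the control transfer */
	if (!rbuf)
		return -ENOMEM;

	ret = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
			      0x02, USB_DIR_IN | USB_TYPE_VENDOR,
			      i << 8, 0xA1, rbuf, 2, USB_CTRL_GET_TIMEOUT);
	if (ret < 2) {
		pr_err("Read EDID byte %d failed: %d\n", i, ret);
		kfree(rbuf);
		return ret < 0 ? ret : -EIO;
	}

	*out = rbuf[1];			/* EDID data is in the second byte */
	kfree(rbuf);
	return 0;
}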
Fixes: 18dffdf8913a ("staging: udlfb: enhance EDID and mode handling support") Signed-off-by: Ladislav Michl Cc: Bernie Thompson Signed-off-by: Bartlomiej Zolnierkiewicz Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit aecce5fc047a99c057931f08977ba6042bb931c4 Author: Geert Uytterhoeven Date: Thu Nov 9 18:09:33 2017 +0100 fbdev: controlfb: Add missing modes to fix out of bounds access [ Upstream commit ac831a379d34109451b3c41a44a20ee10ecb615f ] Dan's static analysis says: drivers/video/fbdev/controlfb.c:560 control_setup() error: buffer overflow 'control_mac_modes' 20 <= 21 Indeed, control_mac_modes[] has only 20 elements, while VMODE_MAX is 22, which may lead to an out of bounds read when parsing vmode commandline options. The bug was introduced in v2.4.5.6, when 2 new modes were added to macmodes.h, but control_mac_modes[] wasn't updated: https://kernel.opensuse.org/cgit/kernel/diff/include/video/macmodes.h?h=v2.5.2&id=29f279c764808560eaceb88fef36cbc35c529aad Augment control_mac_modes[] with the two new video modes to fix this. Reported-by: Dan Carpenter Signed-off-by: Geert Uytterhoeven Cc: Dan Carpenter Cc: Benjamin Herrenschmidt Signed-off-by: Bartlomiej Zolnierkiewicz Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit 0312ab0f0e437cadb9eabef72a1e26a676d2dd64 Author: Robert Stonehouse Date: Tue Nov 7 17:30:30 2017 +0000 sfc: don't warn on successful change of MAC [ Upstream commit cbad52e92ad7f01f0be4ca58bde59462dc1afe3a ] Fixes: 535a61777f44e ("sfc: suppress handled MCDI failures when changing the MAC address") Signed-off-by: Bert Kenward Signed-off-by: David S. Miller Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit da73389e8aa7975a46b9e85234fcbd7885d0eabd Author: Sébastien Szymanski Date: Fri Nov 10 10:01:43 2017 +0100 HID: cp2112: fix broken gpio_direction_input callback [ Upstream commit 7da85fbf1c87d4f73621e0e7666a3387497075a9 ] When everything goes smoothly, ret is set to 0, which makes the function return an EIO error. Fixes: 8e9faa15469e ("HID: cp2112: fix gpio-callback error handling") Signed-off-by: Sébastien Szymanski Reviewed-by: Benjamin Tissoires Signed-off-by: Jiri Kosina Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit e15628b293a7dad278d5bbf7d56a539858b0be8f Author: Dou Liyang Date: Fri Mar 3 16:02:23 2017 +0800 Revert "x86/acpi: Set persistent cpuid <-> nodeid mapping when booting" [ Upstream commit c962cff17dfa11f4a8227ac16de2b28aea3312e4 ] Revert: dc6db24d2476 ("x86/acpi: Set persistent cpuid <-> nodeid mapping when booting") The mapping of "cpuid <-> nodeid" is established at boot time via ACPI tables to keep associations of workqueues and other node-related items consistent across cpu hotplug. But, ACPI tables are unreliable and failures with that boot time mapping have been reported on machines where the ACPI table and the physical information which is retrieved at actual hotplug is inconsistent. Revert the mapping implementation so it can be replaced with a less error-prone approach.
Signed-off-by: Dou Liyang Tested-by: Xiaolong Ye Cc: rjw@rjwysocki.net Cc: linux-acpi@vger.kernel.org Cc: guzheng1@huawei.com Cc: izumi.taku@jp.fujitsu.com Cc: lenb@kernel.org Link: http://lkml.kernel.org/r/1488528147-2279-2-git-send-email-douly.fnst@cn.fujitsu.com Signed-off-by: Thomas Gleixner Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit 25b0b3f2373d31d40151b0fbdf35c0035d332a61 Author: Mike Christie Date: Thu Mar 2 04:59:50 2017 -0600 target: fix race during implicit transition work flushes [ Upstream commit 760bf578edf8122f2503a3a6a3f4b0de3b6ce0bb ] This fixes the following races: 1. core_alua_do_transition_tg_pt could have read tg_pt_gp_alua_access_state and gone into this if chunk: if (!explicit && atomic_read(&tg_pt_gp->tg_pt_gp_alua_access_state) == ALUA_ACCESS_STATE_TRANSITION) { and then core_alua_do_transition_tg_pt_work could update the state. core_alua_do_transition_tg_pt would then only set tg_pt_gp_alua_pending_state and the tg_pt_gp_alua_access_state would not get updated with the second call's state. 2. core_alua_do_transition_tg_pt could be setting tg_pt_gp_transition_complete while the tg_pt_gp_transition_work is already completing. core_alua_do_transition_tg_pt then waits on the completion that will never be called. To handle these issues, we just call flush_work which will return when core_alua_do_transition_tg_pt_work has completed so there is no need to do the complete/wait. And, if core_alua_do_transition_tg_pt_work was running, instead of trying to sneak in the state change, we just schedule up another core_alua_do_transition_tg_pt_work call. Note that this does not handle a possible race where multiple threads call core_alua_do_transition_tg_pt at the same time. I think we need a mutex in target_tg_pt_gp_alua_access_state_store. Signed-off-by: Mike Christie Signed-off-by: Nicholas Bellinger Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit 892e4f9bc2e218b0126bb4ae7191be366fb4b421 Author: Mike Christie Date: Thu Mar 2 04:59:48 2017 -0600 target: fix ALUA transition timeout handling [ Upstream commit d7175373f2745ed4abe5b388d5aabd06304f801e ] The implicit transition time tells initiators the min time to wait before timing out a transition. We currently schedule the transition to occur in tg_pt_gp_implicit_trans_secs seconds, so there is no room for delays. If core_alua_do_transition_tg_pt_work->core_alua_update_tpg_primary_metadata needs to write out info to a remote file, then the initiator can easily time out the operation. Signed-off-by: Mike Christie Signed-off-by: Nicholas Bellinger Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit 0d34f4770ea11c92af2c6a15c73aaebc75ea94ed Author: Mike Christie Date: Wed Mar 1 23:13:26 2017 -0600 target: Use system workqueue for ALUA transitions [ Upstream commit 207ee84133c00a8a2a5bdec94df4a5b37d78881c ] If tcmu-runner is processing a STPG and needs to change the kernel's ALUA state then we cannot use the same work queue for task management requests and ALUA transitions, because we could deadlock. The problem occurs when a STPG times out before tcmu-runner is able to call into target_tg_pt_gp_alua_access_state_store-> core_alua_do_port_transition -> core_alua_do_transition_tg_pt -> queue_work. In this case, the tmr is on the work queue waiting for the STPG to complete, but the STPG transition is now queued behind the waiting tmr.
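A minimal sketch of the workqueue choice that avoids the deadlock just described; the function name here is illustrative, not the target core's actual symbol:

/*
 * Illustrative only: queue the ALUA transition work on the system
 * workqueue rather than on the same private workqueue that runs TMR
 * handling.  That way the state-change work cannot end up queued behind
 * a TMR item that is itself blocked waiting for the STPG to finish.
 */
static void example_queue_alua_transition(struct work_struct *transition_work)
{
	schedule_work(transition_work);	/* system_wq has its own worker pool */
}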
Note: This bug will also be fixed by this patch: http://www.spinics.net/lists/target-devel/msg14560.html which switches the tmr code to use the system workqueues. For both, I am not sure if we need a dedicated workqueue since it is not a performance path and I do not think we need WQ_MEM_RECLAIM to make forward progress to free up memory like the block layer does. Signed-off-by: Mike Christie Signed-off-by: Nicholas Bellinger Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit 8f60ef94477cb0dd3023b337640ea634886ac1d1 Author: Zygo Blaxell Date: Fri Mar 10 16:45:44 2017 -0500 btrfs: add missing memset while reading compressed inline extents [ Upstream commit e1699d2d7bf6e6cce3e1baff19f9dd4595a58664 ] This is a story about 4 distinct (and very old) btrfs bugs. Commit c8b978188c ("Btrfs: Add zlib compression support") added three data corruption bugs for inline extents (bugs #1-3). Commit 93c82d5750 ("Btrfs: zero page past end of inline file items") fixed bug #1: uncompressed inline extents followed by a hole and more extents could get non-zero data in the hole as they were read. The fix was to add a memset in btrfs_get_extent to zero out the hole. Commit 166ae5a418 ("btrfs: fix inline compressed read err corruption") fixed bug #2: compressed inline extents which contained non-zero bytes might be replaced with zero bytes in some cases. This patch removed an unhelpful memset from uncompress_inline, but the case where memset is required was missed. There is also a memset in the decompression code, but this only covers decompressed data that is shorter than the ram_bytes from the extent ref record. This memset doesn't cover the region between the end of the decompressed data and the end of the page. It has also moved around a few times over the years, so there's no single patch to refer to. This patch fixes bug #3: compressed inline extents followed by a hole and more extents could get non-zero data in the hole as they were read (i.e. bug #3 is the same as bug #1, but s/uncompressed/compressed/). The fix is the same: zero out the hole in the compressed case too, by putting a memset back in uncompress_inline, but this time with correct parameters. The last and oldest bug, bug #0, is the cause of the offending inline extent/hole/extent pattern. Bug #0 is a subtle and mostly-harmless quirk of behavior somewhere in the btrfs write code. In a few special cases, an inline extent and hole are allowed to persist where they normally would be combined with later extents in the file. A fast reproducer for bug #0 is presented below. A few offending extents are also created in the wild during large rsync transfers with the -S flag. A Linux kernel build (git checkout; make allyesconfig; make -j8) will produce a handful of offending files as well. Once an offending file is created, it can present different content to userspace each time it is read. Bug #0 is at least 4 and possibly 8 years old. I verified every vX.Y kernel back to v3.5 has this behavior. There are fossil records of this bug's effects in commits all the way back to v2.6.32. I have no reason to believe bug #0 wasn't present at the beginning of btrfs compression support in v2.6.29, but I can't easily test kernels that old to be sure. It is not clear whether bug #0 is worth fixing. A fix would likely require injecting extra reads into currently write-only paths, and most of the exceptional cases caused by bug #0 are already handled now. 
Whether we like them or not, bug #0's inline extents followed by holes are part of the btrfs de-facto disk format now, and we need to be able to read them without data corruption or an infoleak. So enough about bug #0, let's get back to bug #3 (this patch). An example of on-disk structure leading to data corruption found in the wild: item 61 key (606890 INODE_ITEM 0) itemoff 9662 itemsize 160 inode generation 50 transid 50 size 47424 nbytes 49141 block group 0 mode 100644 links 1 uid 0 gid 0 rdev 0 flags 0x0(none) item 62 key (606890 INODE_REF 603050) itemoff 9642 itemsize 20 inode ref index 3 namelen 10 name: DB_File.so item 63 key (606890 EXTENT_DATA 0) itemoff 8280 itemsize 1362 inline extent data size 1341 ram 4085 compress(zlib) item 64 key (606890 EXTENT_DATA 4096) itemoff 8227 itemsize 53 extent data disk byte 5367308288 nr 20480 extent data offset 0 nr 45056 ram 45056 extent compression(zlib) Different data appears in userspace during each read of the 11 bytes between 4085 and 4096. The extent in item 63 is not long enough to fill the first page of the file, so a memset is required to fill the space between item 63 (ending at 4085) and item 64 (beginning at 4096) with zero. Here is a reproducer from Liu Bo, which demonstrates another method of creating the same inline extent and hole pattern: Using the 'page_poison=on' kernel command line (or enabling CONFIG_PAGE_POISONING), run the following: # touch foo # chattr +c foo # xfs_io -f -c "pwrite -W 0 1000" foo # xfs_io -f -c "falloc 4 8188" foo # od -x foo # echo 3 >/proc/sys/vm/drop_caches # od -x foo This produces the following on my box: Correct output: file contains 1000 data bytes followed by zeros: 0000000 cdcd cdcd cdcd cdcd cdcd cdcd cdcd cdcd * 0001740 cdcd cdcd cdcd cdcd 0000 0000 0000 0000 0001760 0000 0000 0000 0000 0000 0000 0000 0000 * 0020000 Actual output: the data after the first 1000 bytes will be different each run: 0000000 cdcd cdcd cdcd cdcd cdcd cdcd cdcd cdcd * 0001740 cdcd cdcd cdcd cdcd 6c63 7400 635f 006d 0001760 5f74 6f43 7400 435f 0053 5f74 7363 7400 0002000 435f 0056 5f74 6164 7400 645f 0062 5f74 (...) Signed-off-by: Zygo Blaxell Reviewed-by: Liu Bo Reviewed-by: Chris Mason Signed-off-by: Chris Mason Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit 5d460d359abefe59e1cac738d37296f3cd4bc1f8 Author: Olga Kornievskaia Date: Wed Mar 8 14:39:15 2017 -0500 NFSv4.1 respect server's max size in CREATE_SESSION [ Upstream commit 033853325fe3bdc70819a8b97915bd3bca41d3af ] Currently the client doesn't respect the max sizes the server returns in CREATE_SESSION. nfs4_session_set_rwsize() gets called and server->rsize and server->wsize are 0, so they never get set to the sizes returned by the server. Signed-off-by: Olga Kornievskaia Signed-off-by: Anna Schumaker Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit 88af4e34771c88a09ed722bbf4346a2ddfee6dbe Author: Daniel Drake Date: Tue Feb 7 13:08:23 2017 -0600 efi/esrt: Cleanup bad memory map log messages [ Upstream commit 822f5845f710e57d7e2df1fd1ee00d6e19d334fe ] The Intel Compute Stick STCK1A8LFC and Weibu F3C platforms both log 2 error messages during boot: efi: requested map not found. esrt: ESRT header is not in the memory map. Searching the web, this seems to affect many other platforms too. Since these messages are logged as errors, they appear on-screen during the boot process even when using the "quiet" boot parameter used by distros.
Demote the ESRT error to a warning so that it does not appear on-screen, and delete the error logging from efi_mem_desc_lookup; both callsites of that function log more specific messages upon failure. Out of curiosity I looked closer at the Weibu F3C. There is no entry in the UEFI-provided memory map which corresponds to the ESRT pointer, but hacking the code to map it anyway, the ESRT does appear to be valid with 2 entries. Signed-off-by: Daniel Drake Cc: Matt Fleming Acked-by: Peter Jones Signed-off-by: Ard Biesheuvel Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit e30b840d466898a02df6986dbb633b967f0ea594 Author: Daniel Borkmann Date: Wed Mar 15 22:53:37 2017 +0100 perf symbols: Fix symbols__fixup_end heuristic for corner cases [ Upstream commit e7ede72a6d40cb3a30c087142d79381ca8a31dab ] The current symbols__fixup_end() heuristic for the last entry in the rb tree is suboptimal as it leads to not being able to recognize the symbol in the call graph in a couple of corner cases, for example: i) If the symbol has a start address (f.e. exposed via kallsyms) that is at a page boundary, then the roundup(curr->start, 4096) for the last entry will result in curr->start == curr->end with a symbol length of zero. ii) If the symbol has a start address that is shortly before a page boundary, then also here, curr->end - curr->start will just be a very few bytes, which is too little to realistically match against. Instead, change the heuristic to roundup(curr->start, 4096) + 4096, so that we can catch such corner cases and have a better chance to find that specific symbol. It's still just best effort as the real end of the symbol is unknown to us (and could even be at a larger offset than the current range), but better than the current situation. Alexei reported that he recently ran into case i) with a JITed eBPF program (these are all page aligned) as the last symbol which wasn't properly shown in the call graph (while other eBPF program symbols in the rb tree were displayed correctly). Since this is a generic issue, let's try to improve the heuristic a bit. Reported-and-Tested-by: Alexei Starovoitov Signed-off-by: Daniel Borkmann Fixes: 2e538c4a1847 ("perf tools: Improve kernel/modules symbol lookup") Link: http://lkml.kernel.org/r/bb5c80d27743be6f12afc68405f1956a330e1bc9.1489614365.git.daniel@iogearbox.net Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit 2a47e7de0822ec4794c6a44d8a43c66c49f4d355 Author: Dmitry Vyukov Date: Sat Mar 4 13:46:12 2017 +0100 tty: fix data race in tty_ldisc_ref_wait() [ Upstream commit a4a3e061149f09c075f108b6f1cf04d9739a6bc2 ] tty_ldisc_ref_wait() checks tty->ldisc under tty->ldisc_sem. But if ldisc==NULL it releases the sem and reloads tty->ldisc without holding it. This is wrong and can lead to returning a non-NULL ldisc without protection. Don't reload tty->ldisc a second time. Signed-off-by: Dmitry Vyukov Cc: syzkaller@googlegroups.com Cc: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman Cc: Jiri Slaby Cc: Peter Hurley Cc: One Thousand Gnomes Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit 70f450fc8610464d4ac2274b403b8b15797c27af Author: Dmitry Vyukov Date: Sat Mar 4 14:55:19 2017 +0100 tty: don't panic on OOM in tty_set_ldisc() [ Upstream commit 5362544bebe85071188dd9e479b5a5040841c895 ] If tty_ldisc_open() fails in tty_set_ldisc(), it tries to go back to the old discipline or N_TTY.
But that can fail as well, and in such a case it panics. This is not a graceful way to handle OOM. Leave ldisc==NULL if all attempts fail instead. Also use the existing tty_ldisc_reinit() helper function instead of tty_ldisc_restore(). Also don't WARN/BUG in tty_ldisc_reinit() if N_TTY fails, which would have the same net effect of bringing the kernel down on OOM. Instead, print a single-line message about what has happened. Signed-off-by: Dmitry Vyukov Cc: syzkaller@googlegroups.com Cc: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman Cc: Jiri Slaby Cc: Peter Hurley Cc: One Thousand Gnomes Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit 3d57ec51d204d220d0eff1e903fb58af86a889de Author: David Howells Date: Thu Mar 16 16:27:10 2017 +0000 rxrpc: Ignore BUSY packets on old calls [ Upstream commit 4d4a6ac73e7466c2085c307fac41f74ce4568a45 ] If we receive a BUSY packet for a call we think we've just completed, the packet is handed off to the connection processor to deal with - but the connection processor doesn't expect a BUSY packet and so flags a protocol error. Fix this by simply ignoring the BUSY packet for the moment. The symptom of this may appear as a system call failing with EPROTO. This may be triggered by pressing ctrl-C under some circumstances. This comes about because we abort calls due to interruption by a signal (which we shouldn't do, but that's going to be a large fix and mostly in fs/afs/). What happens is that we abort the call and may also abort follow-up calls too (this needs offloading somehow). So we see a transmission of something like the following sequence of packets: DATA for call N ABORT call N DATA for call N+1 ABORT call N+1 in very quick succession on the same channel. However, the peer may have deferred the processing of the ABORT from the call N to a background thread and thus sees the DATA message from the call N+1 coming in before it has cleared the channel. Thus it sends a BUSY packet[*]. [*] Note that some implementations (OpenAFS, for example) mark the BUSY packet with one plus the callNumber of the call prior to call N. Ordinarily, this would be call N, but there's no requirement for the calls on a channel to be numbered strictly sequentially (the number is required to increase). This is wrong and means that the callNumber in the BUSY packet should be ignored (it really ought to be N+1 since that's what it's in response to). Signed-off-by: David Howells Signed-off-by: David S. Miller Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit 42b6d6e824d349eef8a3d1925a2667564a31b8b8 Author: David Ahern Date: Mon Mar 13 16:49:10 2017 -0700 net: mpls: Fix nexthop alive tracking on down events [ Upstream commit 61733c91c454a61be0ffc93fe46a5d5f2f048c1c ] Alive tracking of nexthops can account for a link twice if the carrier goes down followed by an admin down of the same link, rendering multipath routes useless. This is similar to 79099aab38c8 for UNREGISTER events and DOWN events. Fix by tracking the number of alive nexthops in mpls_ifdown, similar to the logic in mpls_ifup. Checking the flags per nexthop once after all events have been processed is simpler than trying to maintain a running count through all event combinations. Also, WRITE_ONCE is used instead of ACCESS_ONCE to set rt_nhn_alive per a comment from checkpatch: WARNING: Prefer WRITE_ONCE(, ) over ACCESS_ONCE() = Fixes: c89359a42e2a4 ("mpls: support for dead routes") Signed-off-by: David Ahern Acked-by: Robert Shearman Signed-off-by: David S.
Miller Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit fd27dbcae9371d0f15b3f0fcb69a544e41d819ba Author: Jack Morgenstein Date: Mon Mar 13 19:29:08 2017 +0200 net/mlx4_core: Avoid delays during VF driver device shutdown [ Upstream commit 4cbe4dac82e423ecc9a0ba46af24a860853259f4 ] Some Hypervisors detach VFs from VMs by instantly causing an FLR event to be generated for a VF. In the mlx4 case, this will cause that VF's comm channel to be disabled before the VM has an opportunity to invoke the VF device's "shutdown" method. For such Hypervisors, there is a race condition between the VF's shutdown method and its internal-error detection/reset thread. The internal-error detection/reset thread (which runs every 5 seconds) also detects a disabled comm channel. If the internal-error detection/reset flow wins the race, we still get delays (while that flow tries repeatedly to detect comm-channel recovery). The cited commit fixed the command timeout problem when the internal-error detection/reset flow loses the race. This commit avoids the unneeded delays when the internal-error detection/reset flow wins. Fixes: d585df1c5ccf ("net/mlx4_core: Avoid command timeouts during VF driver device shutdown") Signed-off-by: Jack Morgenstein Reported-by: Simon Xiao Signed-off-by: Tariq Toukan Signed-off-by: David S. Miller Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit 65bfe003dcebd1144edacedb43c2f96d813bab79 Author: Sagi Grimberg Date: Thu Mar 9 13:45:52 2017 +0200 nvmet-rdma: Fix a possible uninitialized variable dereference [ Upstream commit b25634e2a051bef4b2524b11adddfbfa6448f6cd ] When handling a new recv command, we grab a new rsp resource and check for the queue state being live. In case the queue is not in live state, we simply restore the rsp back to the free list. However, in this flow we didn't set rsp->queue yet, so we cannot dereference it. Instead, make sure to initialize rsp->queue (and other rsp members) as soon as possible so we won't reference uninitialized variables. Reported-by: Yi Zhang Reported-by: Raju Rangoju Reviewed-by: Christoph Hellwig Tested-by: Raju Rangoju Signed-off-by: Sagi Grimberg Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit 571e47760de55281834f28a54f75d51285e7787d Author: Sagi Grimberg Date: Mon Mar 6 18:46:20 2017 +0200 nvmet: confirm sq percpu has scheduled and switched to atomic [ Upstream commit d11ea004a458b982e19b188c386e25a9b66ec446 ] percpu_ref_kill is not enough to prevent subsequent percpu_ref_tryget_live from failing. Hence call percpu_ref_kill_confirm to make it safe. Reviewed-by: Christoph Hellwig Signed-off-by: Sagi Grimberg Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit af0cee086b0920612de5d6ad2a8f406792b5aa9d Author: Sagi Grimberg Date: Mon Feb 27 18:44:45 2017 +0200 nvme-loop: fix a possible use-after-free when destroying the admin queue [ Upstream commit e4c5d3762e2d6d274bd1cc948c47063becfa2103 ] We need to destroy the nvmet sq and let it finish gracefully before continuing to clean up the queue. Reviewed-by: Christoph Hellwig Signed-off-by: Sagi Grimberg Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit a8939aac82b0802fa66ca6455ed8bc88913556cf Author: David Howells Date: Thu Mar 16 16:27:49 2017 +0000 afs: Fix abort on signal while waiting for call completion [ Upstream commit 954cd6dc02a65065aecb7150962c0870c5b0e322 ] Fix the way in which a call that's in progress and being waited for is aborted in the case that EINTR is detected.
We should be sending RX_USER_ABORT rather than RX_CALL_DEAD as the abort code. Note that since the only two ways out of the loop are if the call completes or if a signal happens, the kill-the-call clause after the loop has finished can only happen in the case of EINTR. This means that we only have one abort case to deal with, not two, and the "KWC" case can never happen and so can be deleted. Note further that simply aborting the call isn't necessarily the best thing here since at this point: the request has been entirely sent and it's likely the server will do the operation anyway - whether we abort it or not. In future, we should punt the handling of the remainder of the call off to a background thread. Reported-by: Marc Dionne Signed-off-by: David Howells Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit d43dda072544ecf7890eab6a920fb602c9aafa63 Author: David Howells Date: Thu Mar 16 16:27:48 2017 +0000 afs: Fix afs_kill_pages() [ Upstream commit 7286a35e893176169b09715096a4aca557e2ccd2 ] Fix afs_kill_pages() in two ways: (1) If a writeback has been partially flushed, then if we try and kill the pages it contains, some of them may no longer be undergoing writeback and end_page_writeback() will assert. Fix this by checking to see whether the page in question is actually undergoing writeback before ending that writeback. (2) The loop that scans for pages to kill doesn't increase the first page index, and so the loop may not terminate, but it will try to process the same pages over and over again. Fix this by increasing the first page index to one after the last page we processed. Signed-off-by: David Howells Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit 856bb4b609ee1cac2a5c74a30c40ad82e782dabf Author: David Howells Date: Thu Mar 16 16:27:48 2017 +0000 afs: Fix page leak in afs_write_begin() [ Upstream commit 6d06b0d25209c80e99c1e89700f1e09694a3766b ] afs_write_begin() leaks a ref and a lock on a page if afs_fill_page() fails. Fix the leak by unlocking and releasing the page in the error path. Signed-off-by: David Howells Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit 833acb3e09dba37c277ed16bc46792c310066264 Author: Marc Dionne Date: Thu Mar 16 16:27:47 2017 +0000 afs: Populate and use client modification time [ Upstream commit ab94f5d0dd6fd82e7eeca5e7c8096eaea0a0261f ] The inode timestamps should be set from the client time in the status received from the server, rather than the server time which is meant for internal server use. Set AFS_SET_MTIME and populate the mtime for operations that take an input status, such as file/dir creation and StoreData. If an input time is not provided the server will set the vnode times based on the current server time. In a situation where the server has some skew with the client, this could lead to the client seeing a timestamp in the future for a file that it just created or wrote. Signed-off-by: Marc Dionne Signed-off-by: David Howells Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit a3e7a29abf0bf205292ee63bd5d1de54aaa6d18f Author: David Howells Date: Thu Mar 16 16:27:47 2017 +0000 afs: Better abort and net error handling [ Upstream commit 70af0e3bd65142f9e674961c975451638a7ce1d5 ] If we receive a network error, a remote abort or a protocol error whilst we're still transmitting data, make sure we return an appropriate error to the caller rather than ESHUTDOWN or ECONNABORTED.
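Purely as an illustration of that error-selection idea (the helper and its inputs are assumptions for the example, not afs's actual code):

/*
 * Sketch only: if a specific error (a mapped abort code or a network
 * error) was recorded for the call while data was still being
 * transmitted, prefer it over the generic result the transmit path
 * would otherwise return, e.g. -ESHUTDOWN or -ECONNABORTED.
 */
static int example_pick_call_error(int transmit_ret, int recorded_error)
{
	return recorded_error ? recorded_error : transmit_ret;
}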
Signed-off-by: David Howells Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit ab239061161929a3aa048cb514b0183b8742b04c Author: David Howells Date: Thu Mar 16 16:27:47 2017 +0000 afs: Invalid op ID should abort with RXGEN_OPCODE [ Upstream commit 1157f153f37a8586765034470e4f00a4a6c4ce6f ] When we are given an invalid operation ID, we should abort that with RXGEN_OPCODE rather than RX_INVALID_OPERATION. Also map RXGEN_OPCODE to -ENOTSUPP. Signed-off-by: David Howells Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit 972e7b7cbf5c82e9b0b963c9443a30dd3354687b Author: David Howells Date: Thu Mar 16 16:27:47 2017 +0000 afs: Fix the maths in afs_fs_store_data() [ Upstream commit 146a1192783697810b63a1e41c4d59fc93387340 ] afs_fs_store_data() works out the size of the write it's going to make, but it uses 32-bit unsigned subtraction in one place that gets automatically cast to loff_t. However, if to < offset, then the number goes negative, but as the result isn't signed, this doesn't get sign-extended to 64 bits when placed in a loff_t. Fix by casting the operands to loff_t. Signed-off-by: David Howells Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit 9329ae4cb10e2563b8ec15a3d6a2a15dfe16fd3d Author: Tina Ruchandani Date: Thu Mar 16 16:27:46 2017 +0000 afs: Prevent callback expiry timer overflow [ Upstream commit 56e714312e7dbd6bb83b2f78d3ec19a404c7649f ] get_seconds() returns real wall-clock seconds. On 32-bit systems this value will overflow in year 2038 and beyond. This patch changes the afs_vnode record to use ktime_get_real_seconds() instead, for the fields cb_expires and cb_expires_at. Signed-off-by: Tina Ruchandani Signed-off-by: David Howells Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit 7da1b85a75d4dd9b090a242ea162f27d2aea7518 Author: Tina Ruchandani Date: Thu Mar 16 16:27:46 2017 +0000 afs: Migrate vlocation fields to 64-bit [ Upstream commit 8a79790bf0b7da216627ffb85f52cfb4adbf1e4e ] get_seconds() returns real wall-clock seconds. On 32-bit systems this value will overflow in year 2038 and beyond. This patch changes afs's vlocation record to use ktime_get_real_seconds() instead, for the fields time_of_death and update_at. Signed-off-by: Tina Ruchandani Signed-off-by: David Howells Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit 7286fad1570444b924568b13994843ab28b55dd4 Author: David Howells Date: Thu Mar 16 16:27:45 2017 +0000 afs: Flush outstanding writes when an fd is closed [ Upstream commit 58fed94dfb17e89556b5705f20f90e5b2971b6a1 ] Flush outstanding writes in afs when an fd is closed. This is what NFS and CIFS do. Reported-by: Marc Dionne Signed-off-by: David Howells Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit eaaad7646d3de73ca30653e635261827242be124 Author: Marc Dionne Date: Thu Mar 16 16:27:44 2017 +0000 afs: Deal with an empty callback array [ Upstream commit bcd89270d93b7edebb5de5e5e7dca1a77a33496e ] Servers may send a callback array that is the same size as the FID array, or an empty array. If the callback count is 0, the code would attempt to read (fid_count * 12) bytes of data, which would fail and result in an unmarshalling error. This would lead to stale data for remotely modified files or directories. Store the callback array size in the internal afs_call structure and use that to determine the amount of data to read.
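For illustration, a minimal sketch of the sizing logic that last sentence describes; the structure and helper below are assumptions for the example, not afs's actual data layout:

/*
 * Sketch only: size the callback payload from the callback count the
 * server actually returned (which may legitimately be zero), instead of
 * assuming one callback entry per FID.
 */
struct example_bulk_reply {
	unsigned int fid_count;
	unsigned int cb_count;	/* stored when the callback array header is parsed */
};

static size_t example_cb_payload_size(const struct example_bulk_reply *reply)
{
	return (size_t)reply->cb_count * 12;	/* each callback triple is 12 bytes on the wire */
}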
Signed-off-by: Marc Dionne Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit 900048089cc135fea39452ec32aa44415e1c8dba Author: Marc Dionne Date: Thu Mar 16 16:27:44 2017 +0000 afs: Adjust mode bits processing [ Upstream commit 627f46943ff90bcc32ddeb675d881c043c6fa2ae ] Mode bits for an afs file should not be enforced in the usual way. For files, the absence of user bits can restrict file access with respect to what is granted by the server. These bits apply regardless of the owner or the current uid; the rest of the mode bits (group, other) are ignored. Signed-off-by: Marc Dionne Signed-off-by: David Howells Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit ba47c159748068a3371fdc0d77c7c1dcef2b7271 Author: Marc Dionne Date: Thu Mar 16 16:27:43 2017 +0000 afs: Populate group ID from vnode status [ Upstream commit 6186f0788b31f44affceeedc7b48eb10faea120d ] The group was hard-coded to GLOBAL_ROOT_GID; use the group ID that was received from the server. Signed-off-by: Marc Dionne Signed-off-by: David Howells Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit c250fae9ad4bb1d629b8451c2e07b300e0fb8bb8 Author: David Howells Date: Thu Mar 16 16:27:43 2017 +0000 afs: Fix missing put_page() [ Upstream commit 29c8bbbd6e21daa0997d1c3ee886b897ee7ad652 ] In afs_writepages_region(), inside the loop where we find dirty pages to deal with, one of the if-statements is missing a put_page(). Signed-off-by: David Howells Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit b29c7b7c62d5aa8ba3748c6dfebeee19c279fc14 Author: Alex Deucher Date: Wed Mar 15 21:11:46 2017 -0400 drm/radeon: reinstate oland workaround for sclk [ Upstream commit 66822d815ae61ecb2d9dba9031517e8a8476969d ] Higher sclks seem to be unstable on some boards. bug: https://bugs.freedesktop.org/show_bug.cgi?id=100222 Signed-off-by: Alex Deucher Cc: stable@vger.kernel.org Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit 2a84fce9b0396b62c6e3020b5fb9f5378697b6ae Author: yong mao Date: Sat Mar 4 15:10:03 2017 +0800 mmc: mediatek: Fixed bug where clock frequency could be set wrong [ Upstream commit 40ceda09c8c84694c2ca6b00bcc6dc71e8e62d96 ] This patch can fix two issues: Issue 1: In the previous code, div may overflow when setting the clock frequency to f_min. We can use DIV_ROUND_UP to fix this boundary-related issue. Issue 2: In the previous code, we cannot set the correct clock frequency when div equals 0xff. Signed-off-by: Yong Mao Signed-off-by: Chaotian Jing Reviewed-by: Daniel Kurtz Signed-off-by: Ulf Hansson Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit 28714e962a719d1898b6cff5d51352bac2135534 Author: Steven Rostedt (VMware) Date: Thu Mar 2 15:10:59 2017 +0100 sched/deadline: Use deadline instead of period when calculating overflow [ Upstream commit 2317d5f1c34913bac5971d93d69fb6c31bb74670 ] I was testing Daniel's changes with his test case, and tweaked it a little. Instead of having the runtime equal to the deadline, I increased the deadline tenfold. Daniel's test case had: attr.sched_runtime = 2 * 1000 * 1000; /* 2 ms */ attr.sched_deadline = 2 * 1000 * 1000; /* 2 ms */ attr.sched_period = 2 * 1000 * 1000 * 1000; /* 2 s */ To make it more interesting, I changed it to: attr.sched_runtime = 2 * 1000 * 1000; /* 2 ms */ attr.sched_deadline = 20 * 1000 * 1000; /* 20 ms */ attr.sched_period = 2 * 1000 * 1000 * 1000; /* 2 s */ The results were rather surprising. The behavior that Daniel's patch was fixing came back.
The task started using much more than .1% of the CPU. More like 20%. Looking into this, I found that it was due to dl_entity_overflow() constantly returning true. That's because it uses the relative period against relative runtime vs the absolute deadline against absolute runtime. runtime / (deadline - t) > dl_runtime / dl_period There's even a comment mentioning this, and saying that when relative deadline equals relative period, that the equation is the same as using deadline instead of period. That comment is backwards! What we really want is: runtime / (deadline - t) > dl_runtime / dl_deadline We care about whether the runtime can make its deadline, not its period. And then we can say "when the deadline equals the period, the equation is the same as using dl_period instead of dl_deadline". After correcting this, now when the task gets enqueued, it can throttle correctly, and Daniel's fix to the throttling of sleeping deadline tasks works even when the runtime and deadline are not the same. Signed-off-by: Steven Rostedt (VMware) Signed-off-by: Peter Zijlstra (Intel) Reviewed-by: Daniel Bristot de Oliveira Cc: Juri Lelli Cc: Linus Torvalds Cc: Luca Abeni Cc: Mike Galbraith Cc: Peter Zijlstra Cc: Romulo Silva de Oliveira Cc: Steven Rostedt Cc: Thomas Gleixner Cc: Tommaso Cucinotta Link: http://lkml.kernel.org/r/02135a27f1ae3fe5fd032568a5a2f370e190e8d7.1488392936.git.bristot@redhat.com Signed-off-by: Ingo Molnar Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit a2e29113f1abe5299c0af6ab841cbdddfd427fcf Author: Daniel Bristot de Oliveira Date: Thu Mar 2 15:10:58 2017 +0100 sched/deadline: Throttle a constrained deadline task activated after the deadline [ Upstream commit df8eac8cafce7d086be3bd5cf5a838fa37594dfb ] During the activation, CBS checks if it can reuse the current task's runtime and period. If the deadline of the task is in the past, CBS cannot use the runtime, and so it replenishes the task. This rule works fine for implicit deadline tasks (deadline == period), and the CBS was designed for implicit deadline tasks. However, a task with constrained deadline (deadline < period) might be awakened after the deadline, but before the next period. In this case, replenishing the task would allow it to run for runtime / deadline. As in this case deadline < period, CBS enables a task to run for more than the runtime / period. In a very loaded system, this can cause a domino effect, making other tasks miss their deadlines. To avoid this problem, in the activation of a constrained deadline task after the deadline but before the next period, throttle the task and set the replenishing timer to the beginning of the next period, unless it is boosted. Reproducer: --------------- %< --------------- int main (int argc, char **argv) { int ret; int flags = 0; unsigned long l = 0; struct timespec ts; struct sched_attr attr; memset(&attr, 0, sizeof(attr)); attr.size = sizeof(attr); attr.sched_policy = SCHED_DEADLINE; attr.sched_runtime = 2 * 1000 * 1000; /* 2 ms */ attr.sched_deadline = 2 * 1000 * 1000; /* 2 ms */ attr.sched_period = 2 * 1000 * 1000 * 1000; /* 2 s */ ts.tv_sec = 0; ts.tv_nsec = 2000 * 1000; /* 2 ms */ ret = sched_setattr(0, &attr, flags); if (ret < 0) { perror("sched_setattr"); exit(-1); } for(;;) { /* XXX: you may need to adjust the loop */ for (l = 0; l < 150000; l++); /* * The idea is to go to sleep right before the deadline * and then wake up before the next period to receive * a new replenishment.
*/ nanosleep(&ts, NULL); } exit(0); } --------------- >% --------------- On my box, this reproducer uses almost 50% of the CPU time, which is obviously wrong for a task with 2/2000 reservation. Signed-off-by: Daniel Bristot de Oliveira Signed-off-by: Peter Zijlstra (Intel) Cc: Juri Lelli Cc: Linus Torvalds Cc: Luca Abeni Cc: Mike Galbraith Cc: Peter Zijlstra Cc: Romulo Silva de Oliveira Cc: Steven Rostedt Cc: Thomas Gleixner Cc: Tommaso Cucinotta Link: http://lkml.kernel.org/r/edf58354e01db46bf42df8d2dd32418833f68c89.1488392936.git.bristot@redhat.com Signed-off-by: Ingo Molnar Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit 9cc56a00eab7604f2c99ce27665dbb4fa9fa3280 Author: Daniel Bristot de Oliveira Date: Thu Mar 2 15:10:57 2017 +0100 sched/deadline: Make sure the replenishment timer fires in the next period [ Upstream commit 5ac69d37784b237707a7b15d199cdb6c6fdb6780 ] Currently, the replenishment timer is set to fire at the deadline of a task. Although that works for implicit deadline tasks because the deadline is equal to the beginning of the next period, that is not correct for constrained deadline tasks (deadline < period). For instance: f.c: --------------- %< --------------- int main (void) { for(;;); } --------------- >% --------------- # gcc -o f f.c # trace-cmd record -e sched:sched_switch \ -e syscalls:sys_exit_sched_setattr \ chrt -d --sched-runtime 490000000 \ --sched-deadline 500000000 \ --sched-period 1000000000 0 ./f # trace-cmd report | grep "{pid of ./f}" After setting parameters, the task is replenished and continues running until being throttled: f-11295 [003] 13322.113776: sys_exit_sched_setattr: 0x0 The task is throttled after running 492318 us, as expected: f-11295 [003] 13322.606094: sched_switch: f:11295 [-1] R ==> watchdog/3:32 [0] But then, the task is replenished 500719 us after the first replenishment: -0 [003] 13322.614495: sched_switch: swapper/3:0 [120] R ==> f:11295 [-1] Running for 490277 us: f-11295 [003] 13323.104772: sched_switch: f:11295 [-1] R ==> swapper/3:0 [120] Hence, in the first period, the task runs 2 * runtime, and that is a bug. During the first replenishment, the next deadline is set one period away. So the runtime / period starts to be respected. However, as the second replenishment took place in the wrong instant, the next replenishment will also be held in a wrong instant of time. Rather than occurring in the nth period away from the first activation, it is taking place in the (nth period - relative deadline).
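For illustration, a hedged sketch of the timer target the commit describes, using the relative/absolute deadline terminology from the text; this is just the arithmetic, not the scheduler's actual helper:

/*
 * Sketch only: for a constrained-deadline task the replenishment timer
 * must fire at the start of the next period, not at the absolute
 * deadline.  The next period begins one full period after
 * (absolute deadline - relative deadline).
 */
static u64 example_next_period_start(u64 abs_deadline, u64 rel_deadline, u64 rel_period)
{
	return abs_deadline - rel_deadline + rel_period;
}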
Signed-off-by: Daniel Bristot de Oliveira Signed-off-by: Peter Zijlstra (Intel) Reviewed-by: Luca Abeni Reviewed-by: Steven Rostedt (VMware) Reviewed-by: Juri Lelli Cc: Linus Torvalds Cc: Mike Galbraith Cc: Peter Zijlstra Cc: Romulo Silva de Oliveira Cc: Steven Rostedt Cc: Thomas Gleixner Cc: Tommaso Cucinotta Link: http://lkml.kernel.org/r/ac50d89887c25285b47465638354b63362f8adff.1488392936.git.bristot@redhat.com Signed-off-by: Ingo Molnar Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit 0a4d4dac5e342889331a88e70ee24f29c7b64873 Author: Wanpeng Li Date: Mon Mar 6 21:51:28 2017 -0800 sched/deadline: Add missing update_rq_clock() in dl_task_timer() [ Upstream commit dcc3b5ffe1b32771c9a22e2c916fb94c4fcf5b79 ] The following warning can be triggered by hot-unplugging the CPU on which an active SCHED_DEADLINE task is running: ------------[ cut here ]------------ WARNING: CPU: 7 PID: 0 at kernel/sched/sched.h:833 replenish_dl_entity+0x71e/0xc40 rq->clock_update_flags < RQCF_ACT_SKIP CPU: 7 PID: 0 Comm: swapper/7 Tainted: G B 4.11.0-rc1+ #24 Hardware name: LENOVO ThinkCentre M8500t-N000/SHARKBAY, BIOS FBKTC1AUS 02/16/2016 Call Trace: dump_stack+0x85/0xc4 __warn+0x172/0x1b0 warn_slowpath_fmt+0xb4/0xf0 ? __warn+0x1b0/0x1b0 ? debug_check_no_locks_freed+0x2c0/0x2c0 ? cpudl_set+0x3d/0x2b0 replenish_dl_entity+0x71e/0xc40 enqueue_task_dl+0x2ea/0x12e0 ? dl_task_timer+0x777/0x990 ? __hrtimer_run_queues+0x270/0xa50 dl_task_timer+0x316/0x990 ? enqueue_task_dl+0x12e0/0x12e0 ? enqueue_task_dl+0x12e0/0x12e0 __hrtimer_run_queues+0x270/0xa50 ? hrtimer_cancel+0x20/0x20 ? hrtimer_interrupt+0x119/0x600 hrtimer_interrupt+0x19c/0x600 ? trace_hardirqs_off+0xd/0x10 local_apic_timer_interrupt+0x74/0xe0 smp_apic_timer_interrupt+0x76/0xa0 apic_timer_interrupt+0x93/0xa0 The DL task will be migrated to a suitable later deadline rq once the DL timer fires and the current rq is offline. The rq clock of the new rq should be updated. This patch fixes it by updating the rq clock after holding the new rq's rq lock. Signed-off-by: Wanpeng Li Signed-off-by: Peter Zijlstra (Intel) Reviewed-by: Matt Fleming Cc: Juri Lelli Cc: Linus Torvalds Cc: Mike Galbraith Cc: Peter Zijlstra Cc: Thomas Gleixner Link: http://lkml.kernel.org/r/1488865888-15894-1-git-send-email-wanpeng.li@hotmail.com Signed-off-by: Ingo Molnar Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit 8d3a318194ec6a3c52384cbc63894e8dfcc15c9b Author: Sara Sharon Date: Tue Mar 14 09:50:35 2017 +0200 iwlwifi: mvm: cleanup pending frames in DQA mode [ Upstream commit 9a3fcf912ef7f5c6e18f9af6875dd13f7311f7aa ] When a station is asleep, the fw will set it as "asleep". All queues that are used only by one station will be stopped by the fw. In pre-DQA mode this was relevant for aggregation queues. However, in DQA mode a queue is owned by one station only, so all queues will be stopped. As a result, we don't expect to get filtered frames back to mac80211 and don't have to maintain the entire pending_frames state logic, the same way as we do in aggregations. The correct behavior is to align DQA behavior with the aggregation queue behaviour pre-DQA: - Don't count pending frames. - Let mac80211 know we have frames in these queues so that it can properly handle trigger frames. When a trigger frame is received, mac80211 tells the driver to send frames from the queues using release_buffered_frames. The driver will tell the fw to let frames out even if the station is asleep. This is done by iwl_mvm_sta_modify_sleep_tx_count.
Reported-and-tested-by: Jens Axboe Reported-by: Linus Torvalds Signed-off-by: Sara Sharon Signed-off-by: Luca Coelho Signed-off-by: Kalle Valo Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit a524bb57dd367c299b6c61ff39df6328ae49ed08 Author: Vitaly Kuznetsov Date: Sat Mar 4 18:13:59 2017 -0700 Drivers: hv: util: move waiting for release to hv_utils_transport itself [ Upstream commit e9c18ae6eb2b312f16c63e34b43ea23926daa398 ] Waiting for release_event in all three drivers introduced issues on release, as the on_reset() hook is not always called. E.g. if the device was never opened we will never get the completion. Move the waiting code to hvutil_transport_destroy() and make sure it is only called when the device is open. hvt->lock serialization should guarantee the absence of races. Fixes: 5a66fecbf6aa ("Drivers: hv: util: kvp: Fix a rescind processing issue") Fixes: 20951c7535b5 ("Drivers: hv: util: Fcopy: Fix a rescind processing issue") Fixes: d77044d142e9 ("Drivers: hv: util: Backup: Fix a rescind processing issue") Reported-by: Dexuan Cui Tested-by: Dexuan Cui Signed-off-by: Vitaly Kuznetsov Signed-off-by: K. Y. Srinivasan Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit da626b13ce329cb2629897251226ca8be54ab919 Author: Alex Deucher Date: Tue Mar 14 14:42:03 2017 -0400 drm/radeon/si: add dpm quirk for Oland [ Upstream commit 0f424de1fd9bc4ab24bd1fe5430ab5618e803e31 ] OLAND 0x1002:0x6604 0x1028:0x066F 0x00 seems to have problems with higher sclks. Acked-by: Christian König Signed-off-by: Alex Deucher Cc: stable@vger.kernel.org Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit 1867eb805091fbcd654161dbb15acfe3b92f1599 Author: Taku Izumi Date: Wed Mar 15 13:47:50 2017 +0900 fjes: Fix wrong netdevice feature flags [ Upstream commit fe8daf5fa715f7214952f06a387e4b7de818c5be ] This patch fixes netdev->features for the Extended Socket network device. Currently the Extended Socket network device's netdev->features claims NETIF_F_HW_CSUM; however, this is completely wrong. There is no checksum offloading feature. That causes invalid TCP/UDP checksums and packet rejection when IP forwarding from the Extended Socket network device to another network device. NETIF_F_HW_CSUM should be omitted. Signed-off-by: Taku Izumi Signed-off-by: David S. Miller Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit 91510a623baa8c3be2033e42634ebc837e819635 Author: Don Brace Date: Fri Mar 10 14:35:23 2017 -0600 scsi: hpsa: do not timeout reset operations [ Upstream commit 2ef2884980873081a4edae92f9d88dd580c85f6e ] Resets can take longer than DEFAULT_TIMEOUT. Reviewed-by: Scott Benesh Reviewed-by: Scott Teel Reviewed-by: Tomas Henzl Signed-off-by: Don Brace Signed-off-by: Martin K. Petersen Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman
commit 0f07e7611184cd78a26dd1fbf13513264896cdac Author: Don Brace Date: Fri Mar 10 14:35:17 2017 -0600 scsi: hpsa: limit outstanding rescans [ Upstream commit 87b9e6aa87d9411f1059aa245c0c79976bc557ac ] Avoid rescan storms. No need to queue another if one is pending. Reviewed-by: Scott Benesh Reviewed-by: Scott Teel Reviewed-by: Tomas Henzl Signed-off-by: Don Brace Signed-off-by: Martin K.
Petersen Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit c81410a4353d45766564ba6af9a067d33d1525d7 Author: Don Brace Date: Fri Mar 10 14:35:11 2017 -0600 scsi: hpsa: update check for logical volume status [ Upstream commit 85b29008d8af6d94a0723aaa8d93cfb6e041158b ] - Add in a new case for volume offline. Resolves internal testing bug for multilun array management. - Return correct status for failed TURs. Reviewed-by: Scott Benesh Reviewed-by: Scott Teel Signed-off-by: Don Brace Signed-off-by: Martin K. Petersen Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit 8652baa5a31d9f65f93464d27d9a330e3dee6191 Author: Kuninori Morimoto Date: Tue Mar 14 09:34:49 2017 +0900 ASoC: rcar: clear DE bit only in PDMACHCR when it stops [ Upstream commit 62a10498afb27370ec6018e9d802b74850fd8d9a ] R-Car datasheet indicates "Clear DE in PDMACHCR" for transfer stop, but current code clears all bits in PDMACHCR. Because of this, the DE bit might never be cleared, and it causes CMD overflow. This patch fixes this issue. Signed-off-by: Kuninori Morimoto Tested-by: Hiroyuki Yokoyama Signed-off-by: Mark Brown Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit fd2530a4ee62c17bc8e8715cc6f449c1a1f7f212 Author: Stafford Horne Date: Mon Mar 13 07:44:45 2017 +0900 openrisc: fix issue handling 8 byte get_user calls [ Upstream commit 154e67cd8e8f964809d0e75e44bb121b169c75b3 ] Was getting the following error with allmodconfig: ERROR: "__get_user_bad" [lib/test_user_copy.ko] undefined! This was simply a missing break statement, causing an unwanted fall through. Signed-off-by: Stafford Horne Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit 18b39b61b2c66556396f8584f0769f221d3383f6 Author: Alexander Shishkin Date: Thu Jun 30 16:10:51 2016 +0300 intel_th: pci: Add Gemini Lake support [ Upstream commit 340837f985c2cb87ca0868d4aa9ce42b0fab3a21 ] This adds Intel(R) Trace Hub PCI ID for Gemini Lake SOC. Signed-off-by: Alexander Shishkin Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit 3544f57578a6110f14e837c2a002a8eadad510cf Author: Arnd Bergmann Date: Tue Mar 14 22:27:11 2017 +0100 drm: amd: remove broken include path [ Upstream commit 655d9ca9ac075da1ef2a45012ba48a39f6eb1f58 ] The AMD ACP driver adds "-I../acp -I../acp/include" to the gcc command line, which makes no sense, since these are evaluated relative to the build directory. When we build with "make W=1", they instead cause a warning: cc1: error: ../acp/: No such file or directory [-Werror=missing-include-dirs] cc1: error: ../acp/include: No such file or directory [-Werror=missing-include-dirs] cc1: all warnings being treated as errors ../scripts/Makefile.build:289: recipe for target 'drivers/gpu/drm/amd/amdgpu/amdgpu_drv.o' failed ../scripts/Makefile.build:289: recipe for target 'drivers/gpu/drm/amd/amdgpu/amdgpu_device.o' failed ../scripts/Makefile.build:289: recipe for target 'drivers/gpu/drm/amd/amdgpu/amdgpu_kms.o' failed This removes the subdir-ccflags variable that evidently did not serve any purpose here. Signed-off-by: Arnd Bergmann Signed-off-by: Alex Deucher Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit 4c9c0971271dfcc5f0e74232446f938e2bed717b Author: Ram Amrani Date: Tue Mar 14 15:26:02 2017 +0200 qed: Fix interrupt flags on Rx LL2 [ Upstream commit 1df2adedcce17ad4a39fba74f0e2b611f797fe10 ] Before iterating over the LL2 Rx ring, the ring's spinlock is taken via spin_lock_irqsave().
The actual processing of the packet [including handling by the protocol driver] is done without said lock, so qed releases the spinlock and re-claims it afterwards. Problem is that the final spin_lock_irqrestore() at the end of the iteration uses the original flags saved from the initial irqsave() instead of the flags from the most recent irqsave(). So it's possible that the interrupt status would be incorrect at the end of the processing. Fixes: 0a7fb11c23c0 ("qed: Add Light L2 support"); CC: Ram Amrani Signed-off-by: Yuval Mintz Signed-off-by: David S. Miller Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit ac04ab9624b5337628a8a5b96eee81a6fee704e5 Author: Mintz, Yuval Date: Tue Mar 14 15:26:00 2017 +0200 qed: Fix mapping leak on LL2 rx flow [ Upstream commit 752ecb2da11124a948567076b60767dc8034cfa5 ] When receiving an Rx LL2 packet, qed fails to unmap the previous buffer. Fixes: 0a7fb11c23c0 ("qed: Add Light L2 support"); Signed-off-by: Yuval Mintz Signed-off-by: David S. Miller Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit 8de6d7b28d2f135838275258b715a52db27fdb1c Author: Ram Amrani Date: Tue Mar 14 15:25:58 2017 +0200 qed: Align CIDs according to DORQ requirement [ Upstream commit f3e48119b97f56fb09310c95d49da122a27003d7 ] The Doorbell HW block can be configured at a granularity of 16 x CIDs, so we need to make sure that the actual number of CIDs configured would be a multiplication of 16. Today, when RoCE is enabled - given that the number is unaligned, doorbelling the higher CIDs would fail to reach the firmware and would eventually timeout. Fixes: dbb799c39717 ("qed: Initialize hardware for new protocols") Signed-off-by: Ram Amrani Signed-off-by: Yuval Mintz Signed-off-by: David S. Miller Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit fddc3df7647e796c8947215a281db95109d17ac1 Author: Jiri Pirko Date: Tue Mar 14 14:00:01 2017 +0100 mlxsw: reg: Fix SPVMLR max record count [ Upstream commit e9093b1183bbac462d2caef3eac165778c0b1bf1 ] The num_rec field is 8 bit, so the maximal count number is 255. This fixes vlans learning not being enabled for wider ranges than 255. Fixes: a4feea74cd7a ("mlxsw: reg: Add Switch Port VLAN MAC Learning register definition") Signed-off-by: Jiri Pirko Reviewed-by: Ido Schimmel Signed-off-by: David S. Miller Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit 4c8b4e60b5755b65c39db2adcb3cc5ba293e118e Author: Jiri Pirko Date: Tue Mar 14 14:00:00 2017 +0100 mlxsw: reg: Fix SPVM max record count [ Upstream commit f004ec065b4879d6bc9ba0211af2169b3ce3097f ] The num_rec field is 8 bit, so the maximal count number is 255. This fixes vlans not being enabled for wider ranges than 255. Fixes: b2e345f9a454 ("mlxsw: reg: Add Switch Port VID and Switch Port VLAN Membership registers definitions") Signed-off-by: Jiri Pirko Reviewed-by: Ido Schimmel Signed-off-by: David S. Miller Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit 6c548e90a0bcfc8beb29ccb0de5d7d40efb41da9 Author: Vlad Yasevich Date: Tue Mar 14 08:58:08 2017 -0400 net: Resend IGMP memberships upon peer notification. [ Upstream commit 37c343b4f4e70e9dc328ab04903c0ec8d154c1a4 ] When we notify peers of potential changes, it's also good to update IGMP memberships. For example, during VM migration, updating IGMP memberships will redirect existing multicast streams to the VM at the new location. Signed-off-by: Vladislav Yasevich Acked-by: Michael S. Tsirkin Signed-off-by: David S. 
Miller Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit 889163d75fe5e0cb4ead674851e08c4e706d645c Author: Arnd Bergmann Date: Tue Mar 14 13:54:12 2017 +0100 irqchip/mvebu-odmi: Select GENERIC_MSI_IRQ_DOMAIN [ Upstream commit fa23b9d1b89fdc34f296f02e496a20aeff5736be ] This driver uses the MSI domain but has no strict dependency on PCI_MSI, so we may run into a build failure when CONFIG_GENERIC_MSI_IRQ_DOMAIN is disabled: drivers/irqchip/irq-mvebu-odmi.c:152:15: error: variable 'odmi_msi_ops' has initializer but incomplete type static struct msi_domain_ops odmi_msi_ops = { ^~~~~~~~~~~~~~ drivers/irqchip/irq-mvebu-odmi.c:155:15: error: variable 'odmi_msi_domain_info' has initializer but incomplete type static struct msi_domain_info odmi_msi_domain_info = { ^~~~~~~~~~~~~~~ drivers/irqchip/irq-mvebu-odmi.c:156:3: error: 'struct msi_domain_info' has no member named 'flags' .flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS), ^~~~~ drivers/irqchip/irq-mvebu-odmi.c:156:12: error: 'MSI_FLAG_USE_DEF_DOM_OPS' undeclared here (not in a function) .flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS), ^~~~~~~~~~~~~~~~~~~~~~~~ drivers/irqchip/irq-mvebu-odmi.c:156:39: error: 'MSI_FLAG_USE_DEF_CHIP_OPS' undeclared here (not in a function); did you mean 'MSI_FLAG_USE_DEF_DOM_OPS'? Selecting the option from this driver seems to solve this nicely, though I could not find any other instance of this in irqchip drivers. Signed-off-by: Arnd Bergmann Acked-by: Thomas Petazzoni Signed-off-by: Marc Zyngier Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit e30ccb5f1c7e00ee189a0991e633b3034801899c Author: Matthias Kaehlcke Date: Mon Mar 13 14:30:29 2017 -0700 dmaengine: Fix array index out of bounds warning in __get_unmap_pool() [ Upstream commit 23f963e91fd81f44f6b316b1c24db563354c6be8 ] This fixes the following warning when building with clang and CONFIG_DMA_ENGINE_RAID=n : drivers/dma/dmaengine.c:1102:11: error: array index 2 is past the end of the array (which contains 1 element) [-Werror,-Warray-bounds] return &unmap_pool[2]; ^ ~ drivers/dma/dmaengine.c:1083:1: note: array 'unmap_pool' declared here static struct dmaengine_unmap_pool unmap_pool[] = { ^ drivers/dma/dmaengine.c:1104:11: error: array index 3 is past the end of the array (which contains 1 element) [-Werror,-Warray-bounds] return &unmap_pool[3]; ^ ~ drivers/dma/dmaengine.c:1083:1: note: array 'unmap_pool' declared here static struct dmaengine_unmap_pool unmap_pool[] = { Signed-off-by: Matthias Kaehlcke Reviewed-by: Dan Williams Signed-off-by: Vinod Koul Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit 46cbe3f51c511b648de0220b14ab4f5832f00020 Author: Johan Hovold Date: Mon Mar 13 13:42:03 2017 +0100 net: wimax/i2400m: fix NULL-deref at probe [ Upstream commit 6e526fdff7be4f13b24f929a04c0e9ae6761291e ] Make sure to check the number of endpoints to avoid dereferencing a NULL-pointer or accessing memory beyond the endpoint array should a malicious device lack the expected endpoints. The endpoints are specifically dereferenced in the i2400m_bootrom_init path during probe (e.g. in i2400mu_tx_bulk_out). Fixes: f398e4240fce ("i2400m/USB: probe/disconnect, dev init/shutdown and reset backends") Cc: Inaky Perez-Gonzalez Signed-off-by: Johan Hovold Signed-off-by: David S. 
Miller Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit 2e70c4d5de8751df5d14a578459a58ef0471e1ef Author: Tahsin Erdogan Date: Fri Mar 10 12:09:49 2017 -0800 writeback: fix memory leak in wb_queue_work() [ Upstream commit 4a3a485b1ed0e109718cc8c9d094fa0f552de9b2 ] When WB_registered flag is not set, wb_queue_work() skips queuing the work, but does not perform the necessary clean up. In particular, if work->auto_free is true, it should free the memory. The leak condition can be reproduced by following these steps: mount /dev/sdb /mnt/sdb /* In qemu console: device_del sdb */ umount /dev/sdb Above will result in a wb_queue_work() call on an unregistered wb and thus leak memory. Reported-by: John Sperbeck Signed-off-by: Tahsin Erdogan Reviewed-by: Jan Kara Signed-off-by: Jens Axboe Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit d28046fb8c435fc63f51dca30cdeb423d8ab4479 Author: Sagi Grimberg Date: Mon Mar 13 16:10:11 2017 +0200 blk-mq: Fix tagset reinit in the presence of cpu hot-unplug [ Upstream commit 0067d4b020ea07a58540acb2c5fcd3364bf326e0 ] In case cpu was unplugged, we need to make sure not to assume that the tags for that cpu are still allocated, so check for null tags when reinitializing a tagset. Reported-by: Yi Zhang Signed-off-by: Sagi Grimberg Signed-off-by: Jens Axboe Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit 143d13d1e6c0a9ea7e6e86a0279907b174a42b4d Author: Hiroyuki Yokoyama Date: Wed Mar 1 03:51:00 2017 +0000 ASoC: rsnd: fix sound route path when using SRC6/SRC9 [ Upstream commit a1c2ff53726907aff5feb37e4cfd45c1ff626431 ] This patch fixes the problem that the missing value of the route path setting table and incorrect values are set in the CMD_ROUTE_SELECT register. Signed-off-by: Hiroyuki Yokoyama [Kuninori: shared data on MIX and non-MIX case] Signed-off-by: Kuninori Morimoto Signed-off-by: Mark Brown Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit 97b75dad9dd1bc352e199d15516978edb06792fd Author: Florian Westphal Date: Thu Mar 9 23:22:30 2017 +0100 netfilter: bridge: honor frag_max_size when refragmenting [ Upstream commit 4ca60d08cbe65f501baad64af50fceba79c19fbb ] Consider a bridge with mtu 9000, but end host sending smaller packets to another host with mtu < 9000. In this case, after reassembly, bridge+defrag would refragment, and then attempt to send the reassembled packet as long as it was below 9k. Instead we have to cap by the largest fragment size seen. Signed-off-by: Florian Westphal Signed-off-by: Pablo Neira Ayuso Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit 38780b9ae45a40f0f39f0279c381397fb910b072 Author: Tomi Valkeinen Date: Tue Feb 28 10:11:45 2017 +0200 drm/omap: fix dmabuf mmap for dma_alloc'ed buffers [ Upstream commit 9fa1d7537242bd580ffa99c4725a0407096aad26 ] omap_gem_dmabuf_mmap() returns an error (with a WARN) when called for a buffer which is allocated with dma_alloc_*(). This prevents dmabuf mmap from working on SoCs without DMM, e.g. AM4 and OMAP3. I could not find any reason for omap_gem_dmabuf_mmap() rejecting such buffers, and just removing the if() fixes the limitation.
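A small self-contained sketch of the ownership rule behind the "writeback: fix memory leak in wb_queue_work()" entry above: when a queueing helper decides to skip queueing, it must still free items it was handed with auto_free semantics. All types and names below are mock-ups for illustration, not the kernel's.

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Mock work item; auto_free mirrors the semantics described above. */
struct work_item {
    bool auto_free;        /* caller expects the queue to free this item */
    const char *name;
};

struct queue {
    bool registered;       /* mirrors the WB_registered check */
    struct work_item *pending;
};

/* Skipping the enqueue must not leak an auto_free item. */
static void queue_work_item(struct queue *q, struct work_item *w)
{
    if (!q->registered) {
        printf("queue not registered, dropping %s\n", w->name);
        if (w->auto_free)
            free(w);       /* the clean-up the pre-fix code missed */
        return;
    }
    q->pending = w;        /* simplified: single-slot queue */
}

int main(void)
{
    struct queue q = { .registered = false, .pending = NULL };
    struct work_item *w = malloc(sizeof(*w));

    if (!w)
        return 1;
    w->auto_free = true;
    w->name = "example-work";
    queue_work_item(&q, w);    /* freed inside the helper, not leaked */
    return 0;
}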
Signed-off-by: Tomi Valkeinen Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit 8fb782bbd21214a211e8e05dd0230ddd9b255e5e Author: Dmitry Torokhov Date: Tue Feb 28 17:14:41 2017 -0800 Input: i8042 - add TUXEDO BU1406 (N24_25BU) to the nomux list [ Upstream commit a4c2a13129f7c5bcf81704c06851601593303fd5 ] TUXEDO BU1406 does not implement active multiplexing mode properly, and takes around 550 ms in i8042_set_mux_mode(). Given that the device does not have external AUX port, there is no downside in disabling the MUX mode. Reported-by: Paul Menzel Suggested-by: Vojtech Pavlik Reviewed-by: Marcos Paulo de Souza Signed-off-by: Dmitry Torokhov Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit 817f60ccf72cf7d98c4a96d408660bbbcb495489 Author: NeilBrown Date: Fri Mar 10 11:36:39 2017 +1100 NFSD: fix nfsd_reset_versions for NFSv4. [ Upstream commit 800a938f0bf9130c8256116649c0cc5806bfb2fd ] If you write "-2 -3 -4" to the "versions" file, it will notice that no versions are enabled, and nfsd_reset_versions() is called. This enables all major versions, not no minor versions. So we lose the invariant that NFSv4 is only advertised when at least one minor is enabled. Fix the code to explicitly enable minor versions for v4, change it to use nfsd_vers() to test and set, and simplify the code. Signed-off-by: NeilBrown Signed-off-by: J. Bruce Fields Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit 0154269f9c122440b841e06c2c16cde10c26bf7b Author: NeilBrown Date: Fri Mar 10 11:36:39 2017 +1100 NFSD: fix nfsd_minorversion(.., NFSD_AVAIL) [ Upstream commit 928c6fb3a9bfd6c5b287aa3465226add551c13c0 ] Current code will return 1 if the version is supported, and -1 if it isn't. This is confusing and inconsistent with the one place where this is used. So change to return 1 if it is supported, and zero if not. i.e. an error is never returned. Signed-off-by: NeilBrown Signed-off-by: J. Bruce Fields Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit 063c753ef78bee13265bf02df185704e8fdff1f9 Author: Dave Airlie Date: Fri Mar 10 12:13:04 2017 +1000 drm/amdgpu: fix parser init error path to avoid crash in parser fini [ Upstream commit 607523d19c9d67ba4cf7bdaced644f11ed04992c ] If we don't reset the chunk info in the error path, the subsequent fini path will double free. Reviewed-by: Christian König Signed-off-by: Dave Airlie Signed-off-by: Alex Deucher Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit 3d40364d333e3de4cc2e49c80ad8ddb242c46315 Author: Oleksandr Tyshchenko Date: Mon Feb 27 14:30:26 2017 +0200 iommu/io-pgtable-arm-v7s: Check for leaf entry before dereferencing it [ Upstream commit a03849e7210277fa212779b7cd9c30e1ab6194b2 ] Do a check for already installed leaf entry at the current level before dereferencing it in order to avoid walking the page table down with wrong pointer to the next level. Signed-off-by: Oleksandr Tyshchenko CC: Will Deacon CC: Robin Murphy Signed-off-by: Will Deacon Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit 721c136ac2780e519cd44940ba47f08cdeafec9d Author: Daniel Jurgens Date: Fri Mar 10 14:33:02 2017 +0200 net/mlx5: Don't save PCI state when PCI error is detected [ Upstream commit 5d47f6c89d568ab61712d8c40676fbb020b68752 ] When a PCI error is detected the PCI state could be corrupt, don't save it in that flow. Save the state after initialization. After restoring the PCI state during slot reset save it again, restoring the state destroys the previously saved state info. 
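A toy illustration of the error-path discipline described in the "drm/amdgpu: fix parser init error path to avoid crash in parser fini" entry above: if an init error path releases resources, it must also reset the bookkeeping so a later fini pass does not free them again. The structure and helpers are invented for the example.

#include <stdlib.h>

struct parser {
    void **chunks;
    int nchunks;
};

static int parser_init(struct parser *p, int n)
{
    p->chunks = calloc(n, sizeof(*p->chunks));
    if (!p->chunks)
        return -1;
    p->nchunks = n;

    /* Pretend a later initialization step fails here ... */
    free(p->chunks);
    /* Resetting the fields is what prevents a double free in fini. */
    p->chunks = NULL;
    p->nchunks = 0;
    return -1;
}

static void parser_fini(struct parser *p)
{
    free(p->chunks);       /* free(NULL) is a safe no-op */
    p->chunks = NULL;
}

int main(void)
{
    struct parser p = { 0 };

    if (parser_init(&p, 8))
        parser_fini(&p);   /* callers often still run fini on error; it must be safe */
    return 0;
}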
Fixes: 05ac2c0b7438 ('net/mlx5: Fix race between PCI error handlers and health work') Signed-off-by: Daniel Jurgens Signed-off-by: Saeed Mahameed Signed-off-by: David S. Miller Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit 248cbd97be813adfa675d48e1e702f9d5e7ffca8 Author: Paul Blakey Date: Fri Mar 10 14:33:01 2017 +0200 net/mlx5: Fix create autogroup prev initializer [ Upstream commit af36370569eb37420e1e78a2e60c277b781fcd00 ] The autogroups list is a list of non overlapping group boundaries sorted by their start index. If the autogroups list wasn't empty and an empty group slot was found at the start of the list, the new group was added to the end of the list instead of the beginning, as the prev initializer was incorrect. When this was repeated, it caused multiple groups to have overlapping boundaries. Fixed that by correctly initializing the prev pointer to the start of the list. Fixes: eccec8da3b4e ('net/mlx5: Keep autogroups list ordered') Signed-off-by: Paul Blakey Reviewed-by: Mark Bloch Signed-off-by: Saeed Mahameed Signed-off-by: David S. Miller Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit 515d78dc0a89a14c10ce3b3f007c99508aa65e61 Author: David Howells Date: Fri Mar 10 07:48:49 2017 +0000 rxrpc: Wake up the transmitter if Rx window size increases on the peer [ Upstream commit 702f2ac87a9a8da23bf8506466bc70175fc970b2 ] The RxRPC ACK packet may contain an extension that includes the peer's current Rx window size for this call. We adjust the local Tx window size to match. However, the transmitter can stall if the receive window is reduced to 0 by the peer and then reopened. This is because the normal way that the transmitter is re-energised is by dropping something out of our Tx queue and thus making space. When a single gap is made, the transmitter is woken up. However, because there's nothing in the Tx queue at this point, this doesn't happen. To fix this, perform a wake_up() any time we see the peer's Rx window size increasing. The observable symptom is that calls start failing on ETIMEDOUT and the following: kAFS: SERVER DEAD state=-62 appears in dmesg. Signed-off-by: David Howells Signed-off-by: David S. Miller Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit e85b9bc29b04bbde927c2c556e0832327bc62279 Author: Doug Berger Date: Thu Mar 9 16:58:48 2017 -0800 net: bcmgenet: Power up the internal PHY before probing the MII [ Upstream commit 6be371b053dc86f11465cc1abce2e99bda0a0574 ] When using the internal PHY it must be powered up when the MII is probed or the PHY will not be detected. Since the PHY is powered up at reset this has not been a problem. However, when the kernel is restarted with kexec the PHY will likely be powered down when the kernel starts so it will not be detected and the Ethernet link will not be established. This commit explicitly powers up the internal PHY when the GENET driver is probed to correct this behavior. Fixes: 1c1008c793fa ("net: bcmgenet: add main driver file") Signed-off-by: Doug Berger Reviewed-by: Florian Fainelli Signed-off-by: David S. Miller Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit f9ac24794f2e688336c64686129df6d2133c9c0b Author: Doug Berger Date: Thu Mar 9 16:58:47 2017 -0800 net: bcmgenet: synchronize irq0 status between the isr and task [ Upstream commit 07c52d6a0b955a8a28834f9354793cfc4b81d0e9 ] Add a spinlock to ensure that irq0_stat is not unintentionally altered as the result of preemption. 
Also removed unserviced irq0 interrupts and removed irq1_stat since there is no bottom half service for those interrupts. Fixes: 1c1008c793fa ("net: bcmgenet: add main driver file") Signed-off-by: Doug Berger Reviewed-by: Florian Fainelli Signed-off-by: David S. Miller Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit 4c3727f6add57b03197b25af01dd1aa26090ed00 Author: Doug Berger Date: Thu Mar 9 16:58:46 2017 -0800 net: bcmgenet: power down internal phy if open or resume fails [ Upstream commit 7627409cc4970e8c8b9de6945ad86a575290a94e ] Since the internal PHY is powered up during the open and resume functions it should be powered back down if the functions fail. Fixes: 1c1008c793fa ("net: bcmgenet: add main driver file") Signed-off-by: Doug Berger Reviewed-by: Florian Fainelli Signed-off-by: David S. Miller Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit 66e522ab02cc7ae47cb8f1247e7b74efdc16072e Author: Doug Berger Date: Thu Mar 9 16:58:45 2017 -0800 net: bcmgenet: reserved phy revisions must be checked first [ Upstream commit eca4bad73409aedc6ff22f823c18b67a4f08c851 ] The reserved gphy_rev value of 0x01ff must be tested before the old or new scheme for GPHY major versioning are tested, otherwise it will be treated as 0xff00 according to the old scheme. Fixes: b04a2f5b9ff5 ("net: bcmgenet: add support for new GENET PHY revision scheme") Signed-off-by: Doug Berger Reviewed-by: Florian Fainelli Signed-off-by: David S. Miller Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit dc8d63c43af042d5740c29f23906788ffbb6eaf4 Author: Doug Berger Date: Thu Mar 9 16:58:44 2017 -0800 net: bcmgenet: correct MIB access of UniMAC RUNT counters [ Upstream commit 1ad3d225e5a40ca6c586989b4baaca710544c15a ] The gap between the Tx status counters and the Rx RUNT counters is now being added to allow correct reporting of the registers. Fixes: 1c1008c793fa ("net: bcmgenet: add main driver file") Signed-off-by: Doug Berger Reviewed-by: Florian Fainelli Signed-off-by: David S. Miller Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit bb5c42a5b1d2b484939c1a1d18306853db8fcba9 Author: Doug Berger Date: Thu Mar 9 16:58:43 2017 -0800 net: bcmgenet: correct the RBUF_OVFL_CNT and RBUF_ERR_CNT MIB values [ Upstream commit ffff71328a3c321f7c14cc1edd33577717037744 ] The location of the RBUF overflow and error counters has moved between different version of the GENET MAC. This commit corrects the driver to read from the correct locations depending on the version of the GENET MAC. Fixes: 1c1008c793fa ("net: bcmgenet: add main driver file") Signed-off-by: Doug Berger Reviewed-by: Florian Fainelli Signed-off-by: David S. Miller Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit 72cd0c3f66165cce2c91012382a57800f54e3bff Author: Michael Chan Date: Wed Mar 8 18:44:35 2017 -0500 bnxt_en: Ignore 0 value in autoneg supported speed from firmware. [ Upstream commit 520ad89a54edea84496695d528f73ddcf4a52ea4 ] In some situations, the firmware will return 0 for autoneg supported speed. This may happen if the firmware detects no SFP module, for example. The driver should ignore this so that we don't end up with an invalid autoneg setting with nothing advertised. When SFP module is inserted, we'll get the updated settings from firmware at that time. Signed-off-by: Michael Chan Signed-off-by: David S. 
Miller Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit ae0ebdba96674c278674d2b12eedde0681e42f91 Author: Alexander Potapenko Date: Wed Mar 8 18:08:16 2017 +0100 net: initialize msg.msg_flags in recvfrom [ Upstream commit 9f138fa609c47403374a862a08a41394be53d461 ] KMSAN reports a use of uninitialized memory in put_cmsg() because msg.msg_flags in recvfrom haven't been initialized properly. The flag values don't affect the result on this path, but it's still a good idea to initialize them explicitly. Signed-off-by: Alexander Potapenko Signed-off-by: David S. Miller Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit 6783015096dc66dee0035baeec410f16b6ee2d84 Author: Andrea Arcangeli Date: Thu Mar 9 16:17:14 2017 -0800 userfaultfd: selftest: vm: allow to build in vm/ directory [ Upstream commit 46aa6a302b53f543f8e8b8e1714dc5e449ad36a6 ] linux/tools/testing/selftests/vm $ make gcc -Wall -I ../../../../usr/include compaction_test.c -lrt -o /compaction_test /usr/lib/gcc/x86_64-pc-linux-gnu/4.9.4/../../../../x86_64-pc-linux-gnu/bin/ld: cannot open output file /compaction_test: Permission denied collect2: error: ld returned 1 exit status make: *** [../lib.mk:54: /compaction_test] Error 1 Since commit a8ba798bc8ec ("selftests: enable O and KBUILD_OUTPUT") selftests/vm build fails if run from the "selftests/vm" directory, but it works in the selftests/ directory. It's quicker to be able to do a local vm-only build after a tree wipe and this patch allows for it again. Link: http://lkml.kernel.org/r/20170302173738.18994-4-aarcange@redhat.com Signed-off-by: Andrea Arcangeli Cc: Mike Rapoport Cc: "Dr. David Alan Gilbert" Cc: Mike Kravetz Cc: Pavel Emelyanov Cc: Hillf Danton Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit 275314e90c5e84c972be2da278cee60279e07fc7 Author: Andrea Arcangeli Date: Thu Mar 9 16:16:28 2017 -0800 userfaultfd: shmem: __do_fault requires VM_FAULT_NOPAGE [ Upstream commit 6bbc4a4144b1a69743022ac68dfaf6e7d993abb9 ] __do_fault assumes vmf->page has been initialized and is valid if VM_FAULT_NOPAGE is not returned by vma->vm_ops->fault(vma, vmf). handle_userfault() in turn should return VM_FAULT_NOPAGE if it doesn't return VM_FAULT_SIGBUS or VM_FAULT_RETRY (the other two possibilities). This VM_FAULT_NOPAGE case is only invoked when signal are pending and it didn't matter for anonymous memory before. It only started to matter since shmem was introduced. hugetlbfs also takes a different path and doesn't exercise __do_fault. Link: http://lkml.kernel.org/r/20170228154201.GH5816@redhat.com Signed-off-by: Andrea Arcangeli Reported-by: Dmitry Vyukov Cc: "Kirill A. Shutemov" Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit 9bcd15bdfb6151e0418fa9321d4dce2b2eb0cb16 Author: Guoqing Jiang Date: Fri Feb 24 11:15:12 2017 +0800 md-cluster: free md_cluster_info if node leave cluster [ Upstream commit 9c8043f337f14d1743006dfc59c03e80a42e3884 ] To avoid memory leak, we need to free the cinfo which is allocated when node join cluster. 
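A userspace-side illustration of the initialization hygiene behind the "net: initialize msg.msg_flags in recvfrom" entry above: the kernel fix zeroes its own internal struct msghdr, and callers of recvmsg(2) benefit from the same habit, since msg_flags is an output field. This is only a sketch; the loopback port number is an arbitrary assumption.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int rx = socket(AF_INET, SOCK_DGRAM, 0);
    int tx = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(40000),                  /* assumed-free port */
        .sin_addr.s_addr = htonl(INADDR_LOOPBACK),
    };
    char payload[] = "ping", buf[64];
    struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
    struct msghdr msg;

    if (rx < 0 || tx < 0 || bind(rx, (struct sockaddr *)&addr, sizeof(addr)))
        return 1;
    sendto(tx, payload, sizeof(payload), 0,
           (struct sockaddr *)&addr, sizeof(addr));

    /* Zero the whole header so msg_flags and the optional name/control
     * fields never hold stack garbage before the call. */
    memset(&msg, 0, sizeof(msg));
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    if (recvmsg(rx, &msg, 0) < 0)
        return 1;
    /* msg_flags is meaningful here only because msg was initialized. */
    printf("received, MSG_TRUNC=%d\n", !!(msg.msg_flags & MSG_TRUNC));
    close(rx);
    close(tx);
    return 0;
}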
Reviewed-by: NeilBrown Signed-off-by: Guoqing Jiang Signed-off-by: Shaohua Li Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit 9841d7b08ff6285123663acddd0ec7aa292402b7 Author: Chunfeng Yun Date: Thu Mar 9 15:39:34 2017 +0200 usb: xhci-mtk: check hcc_params after adding primary hcd [ Upstream commit 94a631d91ad341b3b4bdac72d1104d9f090e0ca9 ] hcc_params is set in xhci_gen_setup() called from usb_add_hcd(), so check the Maximum Primary Stream Array Size in the hcc_params register after adding primary hcd. Signed-off-by: Chunfeng Yun Signed-off-by: Mathias Nyman Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit 215df1f35542081485cff549830ff01af98c5279 Author: Radim Krčmář Date: Tue Mar 7 17:51:49 2017 +0100 KVM: nVMX: do not warn when MSR bitmap address is not backed [ Upstream commit 05d8d34611139f8435af90ac54b65eb31e82e388 ] Before trying to do nested_get_page() in nested_vmx_merge_msr_bitmap(), we have already checked that the MSR bitmap address is valid (4k aligned and within physical limits). SDM doesn't specify what happens if there is no memory mapped at the valid address, but Intel CPUs treat the situation as if the bitmap was configured to trap all MSRs. KVM already does that by returning false and a correct handling doesn't need the guest-triggerable warning that was reported by syzkaller: (The warning was originally there to catch some possible bugs in nVMX.) ------------[ cut here ]------------ WARNING: CPU: 0 PID: 7832 at arch/x86/kvm/vmx.c:9709 nested_vmx_merge_msr_bitmap arch/x86/kvm/vmx.c:9709 [inline] WARNING: CPU: 0 PID: 7832 at arch/x86/kvm/vmx.c:9709 nested_get_vmcs12_pages+0xfb6/0x15c0 arch/x86/kvm/vmx.c:9640 Kernel panic - not syncing: panic_on_warn set ... CPU: 0 PID: 7832 Comm: syz-executor1 Not tainted 4.10.0+ #229 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011 Call Trace: __dump_stack lib/dump_stack.c:15 [inline] dump_stack+0x2ee/0x3ef lib/dump_stack.c:51 panic+0x1fb/0x412 kernel/panic.c:179 __warn+0x1c4/0x1e0 kernel/panic.c:540 warn_slowpath_null+0x2c/0x40 kernel/panic.c:583 nested_vmx_merge_msr_bitmap arch/x86/kvm/vmx.c:9709 [inline] nested_get_vmcs12_pages+0xfb6/0x15c0 arch/x86/kvm/vmx.c:9640 enter_vmx_non_root_mode arch/x86/kvm/vmx.c:10471 [inline] nested_vmx_run+0x6186/0xaab0 arch/x86/kvm/vmx.c:10561 handle_vmlaunch+0x1a/0x20 arch/x86/kvm/vmx.c:7312 vmx_handle_exit+0xfc0/0x3f00 arch/x86/kvm/vmx.c:8526 vcpu_enter_guest arch/x86/kvm/x86.c:6982 [inline] vcpu_run arch/x86/kvm/x86.c:7044 [inline] kvm_arch_vcpu_ioctl_run+0x1418/0x4840 arch/x86/kvm/x86.c:7205 kvm_vcpu_ioctl+0x673/0x1120 arch/x86/kvm/../../../virt/kvm/kvm_main.c:2570 Reported-by: Dmitry Vyukov Reviewed-by: Jim Mattson [Jim Mattson explained the bare metal behavior: "I believe this behavior would be documented in the chipset data sheet rather than the SDM, since the chipset returns all 1s for an unclaimed read."] Signed-off-by: Radim Krčmář Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit 50fc2d4152fb03bfbdec186403ce5c49fd0ff77e Author: Javier Martinez Canillas Date: Wed Feb 22 15:23:22 2017 -0300 usb: phy: isp1301: Add OF device ID table [ Upstream commit fd567653bdb908009b650f079bfd4b63169e2ac4 ] The driver doesn't have a struct of_device_id table but supported devices are registered via Device Trees. This is working on the assumption that an I2C device registered via OF will always match a legacy I2C device ID and that the MODALIAS reported will always be of the form i2c:.
But this could change in the future so the correct approach is to have an OF device ID table if the devices are registered via OF. Signed-off-by: Javier Martinez Canillas Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit bf864220a59c51a676717cbee7f346a400c5ab47 Author: Ilan peer Date: Mon Dec 26 18:17:36 2016 +0200 mac80211: Fix addition of mesh configuration element commit 57629915d568c522ac1422df7bba4bee5b5c7a7c upstream. The code was setting the capabilities byte to zero, after it was already properly set previously. Fix it. The bug was found while debugging hwsim mesh tests failures that happened since the commit mentioned below. Fixes: 76f43b4c0a93 ("mac80211: Remove invalid flag operations in mesh TSF synchronization") Signed-off-by: Ilan Peer Reviewed-by: Masashi Honma Signed-off-by: Johannes Berg Cc: Richard Schütz Cc: Mathias Kretschmer Signed-off-by: Greg Kroah-Hartman commit 32e2ae03283b4047f5af9fd1178fe24f38e96601 Author: Chandan Rajendra Date: Mon Dec 11 15:00:57 2017 -0500 ext4: fix crash when a directory's i_size is too small commit 9d5afec6b8bd46d6ed821aa1579634437f58ef1f upstream. On a ppc64 machine, when mounting a fuzzed ext2 image (generated by fsfuzzer) the following call trace is seen, VFS: brelse: Trying to free free buffer WARNING: CPU: 1 PID: 6913 at /root/repos/linux/fs/buffer.c:1165 .__brelse.part.6+0x24/0x40 .__brelse.part.6+0x20/0x40 (unreliable) .ext4_find_entry+0x384/0x4f0 .ext4_lookup+0x84/0x250 .lookup_slow+0xdc/0x230 .walk_component+0x268/0x400 .path_lookupat+0xec/0x2d0 .filename_lookup+0x9c/0x1d0 .vfs_statx+0x98/0x140 .SyS_newfstatat+0x48/0x80 system_call+0x58/0x6c This happens because the directory that ext4_find_entry() looks up has inode->i_size that is less than the block size of the filesystem. This causes 'nblocks' to have a value of zero. ext4_bread_batch() ends up not reading any of the directory file's blocks. This renders the entries in bh_use[] array to continue to have garbage data. buffer_uptodate() on bh_use[0] can then return a zero value upon which brelse() function is invoked. This commit fixes the bug by returning -ENOENT when the directory file has no associated blocks. Reported-by: Abdul Haleem Signed-off-by: Chandan Rajendra Signed-off-by: Greg Kroah-Hartman commit 6a851bb99e5c6569e51eb798d6a38b759fdb3302 Author: Eryu Guan Date: Sun Dec 3 22:52:51 2017 -0500 ext4: fix fdatasync(2) after fallocate(2) operation commit c894aa97577e47d3066b27b32499ecf899bfa8b0 upstream. Currently, fallocate(2) with KEEP_SIZE followed by a fdatasync(2) then crash, we'll see wrong allocated block number (stat -c %b), the blocks allocated beyond EOF are all lost. fstests generic/468 exposes this bug. Commit 67a7d5f561f4 ("ext4: fix fdatasync(2) after extent manipulation operations") fixed all the other extent manipulation operation paths such as hole punch, zero range, collapse range etc., but forgot the fallocate case. So similarly, fix it by recording the correct journal tid in ext4 inode in fallocate(2) path, so that ext4_sync_file() will wait for the right tid to be committed on fdatasync(2). This addresses the test failure in xfstests test generic/468. Signed-off-by: Eryu Guan Signed-off-by: Theodore Ts'o Signed-off-by: Greg Kroah-Hartman commit 679dbeac0b6bb551e1f3b95673695b22b2ac953d Author: Adam Wallis Date: Mon Nov 27 10:45:01 2017 -0500 dmaengine: dmatest: move callback wait queue to thread context commit 6f6a23a213be51728502b88741ba6a10cda2441d upstream. 
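The fallocate(2)-then-fdatasync(2) sequence that the "ext4: fix fdatasync(2) after fallocate(2) operation" entry above describes can be exercised from userspace roughly as follows. This is a sketch of the scenario covered by xfstests generic/468, not the test itself; the file name and sizes are arbitrary.

#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/falloc.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *path = "testfile";   /* any file on the ext4 fs under test */
    int fd = open(path, O_CREAT | O_RDWR, 0644);
    struct stat st;

    if (fd < 0)
        return 1;
    /* Write a little data, then allocate blocks past EOF with KEEP_SIZE
     * so i_size stays unchanged while new extents are added. */
    if (write(fd, "data", 4) != 4 ||
        fallocate(fd, FALLOC_FL_KEEP_SIZE, 4096, 1 << 20) ||
        fdatasync(fd))               /* must persist the new extents */
        return 1;
    if (fstat(fd, &st) == 0)
        printf("size=%lld blocks=%lld\n",
               (long long)st.st_size, (long long)st.st_blocks);
    /* After a crash at this point, a pre-fix kernel could lose the
     * blocks allocated beyond EOF. */
    close(fd);
    return 0;
}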
Commit adfa543e7314 ("dmatest: don't use set_freezable_with_signal()") introduced a bug (that is in fact documented by the patch commit text) that leaves behind a dangling pointer. Since the done_wait structure is allocated on the stack, future invocations to the DMATEST can produce undesirable results (e.g., corrupted spinlocks). Commit a9df21e34b42 ("dmaengine: dmatest: warn user when dma test times out") attempted to WARN the user that the stack was likely corrupted but did not fix the actual issue. This patch fixes the issue by pushing the wait queue and callback structs into the thread structure. If a failure occurs due to time, dmaengine_terminate_all will force the callback to safely call wake_up_all() without possibility of using a freed pointer. Bug: https://bugzilla.kernel.org/show_bug.cgi?id=197605 Fixes: adfa543e7314 ("dmatest: don't use set_freezable_with_signal()") Reviewed-by: Sinan Kaya Suggested-by: Shunyong Yang Signed-off-by: Adam Wallis Signed-off-by: Vinod Koul Signed-off-by: Greg Kroah-Hartman commit 744cb5ab337218e7a258d2a82c6f5d7afc72cd16 Author: David Lechner Date: Sun Dec 3 19:54:41 2017 -0600 eeprom: at24: change nvmem stride to 1 commit 7f6d2ecd3d7acaf205ea7b3e96f9ffc55b92298b upstream. Trying to read the MAC address from an eeprom that has an offset that is not a multiple of 4 causes an error currently. Fix it by changing the nvmem stride to 1. Signed-off-by: David Lechner [Bartosz: tweaked the commit message] Signed-off-by: Bartosz Golaszewski Signed-off-by: Greg Kroah-Hartman commit d266817f5028fef8ff521a77777ef1c4b8de890e Author: Steven Rostedt Date: Sat Dec 2 13:04:54 2017 -0500 sched/rt: Do not pull from current CPU if only one CPU to pull commit f73c52a5bcd1710994e53fbccc378c42b97a06b6 upstream. Daniel Wagner reported a crash on the BeagleBone Black SoC. This is a single CPU architecture, and does not have a functional arch_send_call_function_single_ipi() implementation which can crash the kernel if that is called. As it only has one CPU, it shouldn't be called, but if the kernel is compiled for SMP, the push/pull RT scheduling logic now calls it for irq_work; if the one CPU is overloaded, it can use that function to call itself and crash the kernel. Ideally, we should disable the SCHED_FEAT(RT_PUSH_IPI) if the system only has a single CPU. But SCHED_FEAT is a constant if sched debugging is turned off. Another fix can also be used, and this should also help with normal SMP machines. That is, do not initiate the pull code if there's only one RT overloaded CPU, and that CPU happens to be the current CPU that is scheduling in a lower priority task. Even on a system with many CPUs, if there's many RT tasks waiting to run on a single CPU, and that CPU schedules in another RT task of lower priority, it will initiate the PULL logic in case there's a higher priority RT task on another CPU that is waiting to run. But if there is no other CPU with waiting RT tasks, it will initiate the RT pull logic on itself (as it still has RT tasks waiting to run). This is a wasted effort. Not only does this help with SMP code where the current CPU is the only one with RT overloaded tasks, it should also solve the issue that Daniel encountered, because it will prevent the PULL logic from executing, as there's only one CPU on the system, and the check added here will cause it to exit the RT pull code.
Reported-by: Daniel Wagner Signed-off-by: Steven Rostedt (VMware) Acked-by: Peter Zijlstra Cc: Linus Torvalds Cc: Sebastian Andrzej Siewior Cc: Thomas Gleixner Cc: linux-rt-users Fixes: 4bdced5c9 ("sched/rt: Simplify the IPI based RT balancing logic") Link: http://lkml.kernel.org/r/20171202130454.4cbbfe8d@vmware.local.home Signed-off-by: Ingo Molnar Signed-off-by: Greg Kroah-Hartman commit 9c537f06d61ada88ccc8080be79213ffce1ebf15 Author: Scott Mayhew Date: Fri Dec 8 16:00:12 2017 -0500 nfs: don't wait on commit in nfs_commit_inode() if there were no commit requests commit dc4fd9ab01ab379ae5af522b3efd4187a7c30a31 upstream. If there were no commit requests, then nfs_commit_inode() should not wait on the commit or mark the inode dirty, otherwise the following BUG_ON can be triggered: [ 1917.130762] kernel BUG at fs/inode.c:578! [ 1917.130766] Oops: Exception in kernel mode, sig: 5 [#1] [ 1917.130768] SMP NR_CPUS=2048 NUMA pSeries [ 1917.130772] Modules linked in: iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi blocklayoutdriver rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache sunrpc sg nx_crypto pseries_rng ip_tables xfs libcrc32c sd_mod crc_t10dif crct10dif_generic crct10dif_common ibmvscsi scsi_transport_srp ibmveth scsi_tgt dm_mirror dm_region_hash dm_log dm_mod [ 1917.130805] CPU: 2 PID: 14923 Comm: umount.nfs4 Tainted: G ------------ T 3.10.0-768.el7.ppc64 #1 [ 1917.130810] task: c0000005ecd88040 ti: c00000004cea0000 task.ti: c00000004cea0000 [ 1917.130813] NIP: c000000000354178 LR: c000000000354160 CTR: c00000000012db80 [ 1917.130816] REGS: c00000004cea3720 TRAP: 0700 Tainted: G ------------ T (3.10.0-768.el7.ppc64) [ 1917.130820] MSR: 8000000100029032 CR: 22002822 XER: 20000000 [ 1917.130828] CFAR: c00000000011f594 SOFTE: 1 GPR00: c000000000354160 c00000004cea39a0 c0000000014c4700 c0000000018cc750 GPR04: 000000000000c750 80c0000000000000 0600000000000000 04eeb76bea749a03 GPR08: 0000000000000034 c0000000018cc758 0000000000000001 d000000005e619e8 GPR12: c00000000012db80 c000000007b31200 0000000000000000 0000000000000000 GPR16: 0000000000000000 0000000000000000 0000000000000000 0000000000000000 GPR20: 0000000000000000 0000000000000000 0000000000000000 0000000000000000 GPR24: 0000000000000000 c000000000dfc3ec 0000000000000000 c0000005eefc02c0 GPR28: d0000000079dbd50 c0000005b94a02c0 c0000005b94a0250 c0000005b94a01c8 [ 1917.130867] NIP [c000000000354178] .evict+0x1c8/0x350 [ 1917.130871] LR [c000000000354160] .evict+0x1b0/0x350 [ 1917.130873] Call Trace: [ 1917.130876] [c00000004cea39a0] [c000000000354160] .evict+0x1b0/0x350 (unreliable) [ 1917.130880] [c00000004cea3a30] [c0000000003558cc] .evict_inodes+0x13c/0x270 [ 1917.130884] [c00000004cea3af0] [c000000000327d20] .kill_anon_super+0x70/0x1e0 [ 1917.130896] [c00000004cea3b80] [d000000005e43e30] .nfs_kill_super+0x20/0x60 [nfs] [ 1917.130900] [c00000004cea3c00] [c000000000328a20] .deactivate_locked_super+0xa0/0x1b0 [ 1917.130903] [c00000004cea3c80] [c00000000035ba54] .cleanup_mnt+0xd4/0x180 [ 1917.130907] [c00000004cea3d10] [c000000000119034] .task_work_run+0x114/0x150 [ 1917.130912] [c00000004cea3db0] [c00000000001ba6c] .do_notify_resume+0xcc/0x100 [ 1917.130916] [c00000004cea3e30] [c00000000000a7b0] .ret_from_except_lite+0x5c/0x60 [ 1917.130919] Instruction dump: [ 1917.130921] 7fc3f378 486734b5 60000000 387f00a0 38800003 4bdcb365 60000000 e95f00a0 [ 1917.130927] 694a0060 7d4a0074 794ad182 694a0001 <0b0a0000> 892d02a4 2f890000 40de0134 Signed-off-by: Scott Mayhew Signed-off-by: Anna Schumaker Signed-off-by: Greg 
Kroah-Hartman commit 3bdb508d686e4943bfb761d78dc915dd825f811f Author: Mathias Nyman Date: Fri Dec 8 18:10:05 2017 +0200 xhci: Don't add a virt_dev to the devs array before it's fully allocated commit 5d9b70f7d52eb14bb37861c663bae44de9521c35 upstream. Avoid null pointer dereference if some function is walking through the devs array accessing members of a new virt_dev that is mid allocation. Add the virt_dev to xhci->devs[i] _after_ the virt_device and all its members are properly allocated. Issue found by KASAN: null-ptr-deref in xhci_find_slot_id_by_port "Quick analysis suggests that xhci_alloc_virt_device() is not mutex protected. If so, there is a time frame where xhci->devs[slot_id] is set but not fully initialized. Specifically, xhci->devs[i]->udev can be NULL." Signed-off-by: Mathias Nyman Signed-off-by: Greg Kroah-Hartman commit 7336f5481f6cf913a2d29d98c6e11f4bbe19d3b2 Author: Sukumar Ghorai Date: Wed Aug 16 14:46:55 2017 -0700 Bluetooth: btusb: driver to enable the usb-wakeup feature commit a0085f2510e8976614ad8f766b209448b385492f upstream. The BT controller is connected as a platform non-root-hub device, and the USB driver initializes such devices with wakeup disabled, see usb_new_device(). At present the wakeup capability gets enabled by the hid-input device from the USB function driver (e.g. a BT HID device) at runtime. However, some function drivers do not set the usb-wakeup capability (e.g. LE HID devices implemented as HID-over-GATT) and cannot wake up the host over USB. Most device operations (such as mass storage) are initiated from the host (except HID), and USB wakeup is aligned with the host resume procedure. For BT devices, the usb-wakeup capability needs to be enabled from the btusb driver as a generic solution for multiple profile use cases, and it is required for USB remote wakeup (in-bus wakeup) while the host is suspended. The usb-wakeup feature also needs to be enabled/disabled as the HCI interface goes up and down. Signed-off-by: Sukumar Ghorai Signed-off-by: Amit K Bag Acked-by: Oliver Neukum Signed-off-by: Marcel Holtmann Cc: Matthias Kaehlcke Signed-off-by: Greg Kroah-Hartman commit cdfe4c0091a8fa5e0a25d82fa0d4684382ac880d Author: Chunfeng Yun Date: Fri Dec 8 18:10:06 2017 +0200 usb: xhci: fix TDS for MTK xHCI1.1 commit 72b663a99c074a8d073e7ecdae446cfb024ef551 upstream. For MTK's xHCI 1.0 or later, TD size is the number of max packet sized packets remaining in the TD, not including this TRB (following spec). For MTK's xHCI 0.96 and older, TD size is the number of max packet sized packets remaining in the TD, including this TRB (not following spec). Signed-off-by: Chunfeng Yun Signed-off-by: Mathias Nyman Signed-off-by: Greg Kroah-Hartman commit e081bd0d70bddbf912f6f7f0f82e3e1ac2c52ae6 Author: Yan, Zheng Date: Thu Nov 30 11:59:22 2017 +0800 ceph: drop negative child dentries before try pruning inode's alias commit 040d786032bf59002d374b86d75b04d97624005c upstream. Negative child dentry holds reference on inode's alias, it makes d_prune_aliases() do nothing. Signed-off-by: "Yan, Zheng" Reviewed-by: Jeff Layton Signed-off-by: Ilya Dryomov Signed-off-by: Greg Kroah-Hartman commit 14513e49c43cd3149a03ff9e1c223c3d5803ad09 Author: Shuah Khan Date: Thu Dec 7 14:16:50 2017 -0700 usbip: fix stub_send_ret_submit() vulnerability to null transfer_buffer commit be6123df1ea8f01ee2f896a16c2b7be3e4557a5a upstream. stub_send_ret_submit() handles urb with a potential null transfer_buffer, when it replays a packet with potential malicious data that could contain a null buffer. Add a check for the condition when actual_length > 0 and transfer_buffer is null.
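A minimal standalone illustration of the check described in the usbip entry directly above: a reply that claims actual_length > 0 but carries a NULL transfer_buffer must be rejected rather than dereferenced. The function and types here are invented for the example and are not the driver's.

#include <errno.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Reject a reply that claims data was transferred but carries no buffer. */
static int copy_reply_data(void *dst, size_t dst_len,
                           const void *transfer_buffer, int actual_length)
{
    if (actual_length > 0 && transfer_buffer == NULL)
        return -EINVAL;                     /* malicious or corrupt reply */
    if (actual_length < 0 || (size_t)actual_length > dst_len)
        return -EINVAL;
    if (actual_length > 0)
        memcpy(dst, transfer_buffer, (size_t)actual_length);
    return actual_length;
}

int main(void)
{
    char out[16];

    /* Well-formed reply */
    printf("ok: %d\n", copy_reply_data(out, sizeof(out), "abcd", 4));
    /* Claims 4 bytes but provides no buffer: rejected instead of crashing */
    printf("bad: %d\n", copy_reply_data(out, sizeof(out), NULL, 4));
    return 0;
}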
Reported-by: Secunia Research Signed-off-by: Shuah Khan Signed-off-by: Greg Kroah-Hartman commit f3e957266ae56c200fb13a42309c50f84576c64a Author: Shuah Khan Date: Thu Dec 7 14:16:48 2017 -0700 usbip: fix stub_rx: harden CMD_SUBMIT path to handle malicious input commit c6688ef9f29762e65bce325ef4acd6c675806366 upstream. Harden CMD_SUBMIT path to handle malicious input that could trigger large memory allocations. Add checks to validate transfer_buffer_length and number_of_packets to protect against bad input requesting for unbounded memory allocations. Validate early in get_pipe() and return failure. Reported-by: Secunia Research Signed-off-by: Shuah Khan Signed-off-by: Greg Kroah-Hartman commit b6dbace92ed744a80af75036e93615324cd27aec Author: Felipe Balbi Date: Mon Sep 26 10:51:18 2016 +0300 usb: add helper to extract bits 12:11 of wMaxPacketSize commit 541b6fe63023f3059cf85d47ff2767a3e42a8e44 upstream. According to USB Specification 2.0 table 9-4, wMaxPacketSize is a bitfield. Endpoint's maxpacket is laid out in bits 10:0. For high-speed, high-bandwidth isochronous endpoints, bits 12:11 contain a multiplier to tell us how many transactions we want to try per uframe. This means that if we want an isochronous endpoint to issue 3 transfers of 1024 bytes per uframe, wMaxPacketSize should contain the value: 1024 | (2 << 11) or 5120 (0x1400). In order to make Host and Peripheral controller drivers' life easier, we're adding a helper which returns bits 12:11. Note that no care is made WRT to checking endpoint type and gadget's speed. That's left for drivers to handle. Signed-off-by: Felipe Balbi Signed-off-by: Greg Kroah-Hartman commit 20e825cdf7a1627f92371d45a5322ccecdebcb3b Author: Shuah Khan Date: Thu Dec 7 14:16:47 2017 -0700 usbip: fix stub_rx: get_pipe() to validate endpoint number commit 635f545a7e8be7596b9b2b6a43cab6bbd5a88e43 upstream. get_pipe() routine doesn't validate the input endpoint number and uses to reference ep_in and ep_out arrays. Invalid endpoint number can trigger BUG(). Range check the epnum and returning error instead of calling BUG(). Change caller stub_recv_cmd_submit() to handle the get_pipe() error return. Reported-by: Secunia Research Signed-off-by: Shuah Khan Signed-off-by: Greg Kroah-Hartman commit 99542e468b76ae180675566692e0528c4c712661 Author: Alan Stern Date: Tue Dec 12 14:25:13 2017 -0500 USB: core: prevent malicious bNumInterfaces overflow commit 48a4ff1c7bb5a32d2e396b03132d20d552c0eca7 upstream. A malicious USB device with crafted descriptors can cause the kernel to access unallocated memory by setting the bNumInterfaces value too high in a configuration descriptor. Although the value is adjusted during parsing, this adjustment is skipped in one of the error return paths. This patch prevents the problem by setting bNumInterfaces to 0 initially. The existing code already sets it to the proper value after parsing is complete. Signed-off-by: Alan Stern Reported-by: Andrey Konovalov Signed-off-by: Greg Kroah-Hartman commit 0d29ae4f5033453ea86d60a736bed58fa4c15007 Author: David Kozub Date: Tue Dec 5 22:40:04 2017 +0100 USB: uas and storage: Add US_FL_BROKEN_FUA for another JMicron JMS567 ID commit 62354454625741f0569c2cbe45b2d192f8fd258e upstream. There is another JMS567-based USB3 UAS enclosure (152d:0578) that fails with the following error: [sda] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE [sda] tag#0 Sense Key : Illegal Request [current] [sda] tag#0 Add. 
Sense: Invalid field in cdb The issue occurs both with UAS (occasionally) and mass storage (immediately after mounting a FS on a disk in the enclosure). Enabling US_FL_BROKEN_FUA quirk solves this issue. This patch adds an UNUSUAL_DEV with US_FL_BROKEN_FUA for the enclosure for both UAS and mass storage. Signed-off-by: David Kozub Acked-by: Alan Stern Signed-off-by: Greg Kroah-Hartman commit d760f903419509d933b27d387854eb1e35f88cdf Author: Changbin Du Date: Thu Nov 30 11:39:43 2017 +0800 tracing: Allocate mask_str buffer dynamically commit 90e406f96f630c07d631a021fd4af10aac913e77 upstream. The default NR_CPUS can be very large, but actual possible nr_cpu_ids usually is very small. For my x86 distribution, the NR_CPUS is 8192 and nr_cpu_ids is 4. About 2 pages are wasted. Most machines don't have so many CPUs, so defining an array with NR_CPUS just wastes memory. So let's allocate the buffer dynamically when needed. With this change, the mutex tracing_cpumask_update_lock can also be removed now, which was used to protect mask_str. Link: http://lkml.kernel.org/r/1512013183-19107-1-git-send-email-changbin.du@intel.com Fixes: 36dfe9252bd4c ("ftrace: make use of tracing_cpumask") Signed-off-by: Changbin Du Signed-off-by: Steven Rostedt (VMware) Signed-off-by: Greg Kroah-Hartman commit d1175423ce67bd1903c4f876ba6774e3a48abdc1 Author: NeilBrown Date: Thu Dec 14 15:32:38 2017 -0800 autofs: fix careless error in recent commit commit 302ec300ef8a545a7fc7f667e5fd743b091c2eeb upstream. Commit ecc0c469f277 ("autofs: don't fail mount for transient error") was meant to replace an 'if' with a 'switch', but instead added the 'switch' leaving the case in place. Link: http://lkml.kernel.org/r/87zi6wstmw.fsf@notabene.neil.brown.name Fixes: ecc0c469f277 ("autofs: don't fail mount for transient error") Reported-by: Ben Hutchings Signed-off-by: NeilBrown Cc: Ian Kent Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds Signed-off-by: Greg Kroah-Hartman commit c32e053a11f231376f0899ef906fd43f8fc8dbd0 Author: Eric Biggers Date: Tue Nov 28 20:56:59 2017 -0800 crypto: salsa20 - fix blkcipher_walk API usage commit ecaaab5649781c5a0effdaf298a925063020500e upstream. When asked to encrypt or decrypt 0 bytes, both the generic and x86 implementations of Salsa20 crash in blkcipher_walk_done(), either when doing 'kfree(walk->buffer)' or 'free_page((unsigned long)walk->page)', because walk->buffer and walk->page have not been initialized. The bug is that Salsa20 is calling blkcipher_walk_done() even when nothing is in 'walk.nbytes'. But blkcipher_walk_done() is only meant to be called when a nonzero number of bytes have been provided. The broken code is part of an optimization that tries to make only one call to salsa20_encrypt_bytes() to process inputs that are not evenly divisible by 64 bytes. To fix the bug, just remove this "optimization" and use the blkcipher_walk API the same way all the other users do.
Reproducer: #include #include #include int main() { int algfd, reqfd; struct sockaddr_alg addr = { .salg_type = "skcipher", .salg_name = "salsa20", }; char key[16] = { 0 }; algfd = socket(AF_ALG, SOCK_SEQPACKET, 0); bind(algfd, (void *)&addr, sizeof(addr)); reqfd = accept(algfd, 0, 0); setsockopt(algfd, SOL_ALG, ALG_SET_KEY, key, sizeof(key)); read(reqfd, key, sizeof(key)); } Reported-by: syzbot Fixes: eb6f13eb9f81 ("[CRYPTO] salsa20_generic: Fix multi-page processing") Signed-off-by: Eric Biggers Signed-off-by: Herbert Xu Signed-off-by: Greg Kroah-Hartman commit 43259d07fceb8cc1f5ba7e8003ae19023e0620f5 Author: Eric Biggers Date: Tue Nov 28 18:01:38 2017 -0800 crypto: hmac - require that the underlying hash algorithm is unkeyed commit af3ff8045bbf3e32f1a448542e73abb4c8ceb6f1 upstream. Because the HMAC template didn't check that its underlying hash algorithm is unkeyed, trying to use "hmac(hmac(sha3-512-generic))" through AF_ALG or through KEYCTL_DH_COMPUTE resulted in the inner HMAC being used without having been keyed, resulting in sha3_update() being called without sha3_init(), causing a stack buffer overflow. This is a very old bug, but it seems to have only started causing real problems when SHA-3 support was added (requires CONFIG_CRYPTO_SHA3) because the innermost hash's state is ->import()ed from a zeroed buffer, and it just so happens that other hash algorithms are fine with that, but SHA-3 is not. However, there could be arch or hardware-dependent hash algorithms also affected; I couldn't test everything. Fix the bug by introducing a function crypto_shash_alg_has_setkey() which tests whether a shash algorithm is keyed. Then update the HMAC template to require that its underlying hash algorithm is unkeyed. Here is a reproducer: #include #include int main() { int algfd; struct sockaddr_alg addr = { .salg_type = "hash", .salg_name = "hmac(hmac(sha3-512-generic))", }; char key[4096] = { 0 }; algfd = socket(AF_ALG, SOCK_SEQPACKET, 0); bind(algfd, (const struct sockaddr *)&addr, sizeof(addr)); setsockopt(algfd, SOL_ALG, ALG_SET_KEY, key, sizeof(key)); } Here was the KASAN report from syzbot: BUG: KASAN: stack-out-of-bounds in memcpy include/linux/string.h:341 [inline] BUG: KASAN: stack-out-of-bounds in sha3_update+0xdf/0x2e0 crypto/sha3_generic.c:161 Write of size 4096 at addr ffff8801cca07c40 by task syzkaller076574/3044 CPU: 1 PID: 3044 Comm: syzkaller076574 Not tainted 4.14.0-mm1+ #25 Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011 Call Trace: __dump_stack lib/dump_stack.c:17 [inline] dump_stack+0x194/0x257 lib/dump_stack.c:53 print_address_description+0x73/0x250 mm/kasan/report.c:252 kasan_report_error mm/kasan/report.c:351 [inline] kasan_report+0x25b/0x340 mm/kasan/report.c:409 check_memory_region_inline mm/kasan/kasan.c:260 [inline] check_memory_region+0x137/0x190 mm/kasan/kasan.c:267 memcpy+0x37/0x50 mm/kasan/kasan.c:303 memcpy include/linux/string.h:341 [inline] sha3_update+0xdf/0x2e0 crypto/sha3_generic.c:161 crypto_shash_update+0xcb/0x220 crypto/shash.c:109 shash_finup_unaligned+0x2a/0x60 crypto/shash.c:151 crypto_shash_finup+0xc4/0x120 crypto/shash.c:165 hmac_finup+0x182/0x330 crypto/hmac.c:152 crypto_shash_finup+0xc4/0x120 crypto/shash.c:165 shash_digest_unaligned+0x9e/0xd0 crypto/shash.c:172 crypto_shash_digest+0xc4/0x120 crypto/shash.c:186 hmac_setkey+0x36a/0x690 crypto/hmac.c:66 crypto_shash_setkey+0xad/0x190 crypto/shash.c:64 shash_async_setkey+0x47/0x60 crypto/shash.c:207 crypto_ahash_setkey+0xaf/0x180 crypto/ahash.c:200 
hash_setkey+0x40/0x90 crypto/algif_hash.c:446 alg_setkey crypto/af_alg.c:221 [inline] alg_setsockopt+0x2a1/0x350 crypto/af_alg.c:254 SYSC_setsockopt net/socket.c:1851 [inline] SyS_setsockopt+0x189/0x360 net/socket.c:1830 entry_SYSCALL_64_fastpath+0x1f/0x96 Reported-by: syzbot Signed-off-by: Eric Biggers Signed-off-by: Herbert Xu Signed-off-by: Greg Kroah-Hartman commit cd9b59861f9cad2f1b36eace975060cfc3bc46e4 Author: Eric Biggers Date: Sun Nov 26 23:16:49 2017 -0800 crypto: rsa - fix buffer overread when stripping leading zeroes commit d2890c3778b164fde587bc16583f3a1c87233ec5 upstream. In rsa_get_n(), if the buffer contained all 0's and "FIPS mode" is enabled, we would read one byte past the end of the buffer while scanning the leading zeroes. Fix it by checking 'n_sz' before '!*ptr'. This bug was reachable by adding a specially crafted key of type "asymmetric" (requires CONFIG_RSA and CONFIG_X509_CERTIFICATE_PARSER). KASAN report: BUG: KASAN: slab-out-of-bounds in rsa_get_n+0x19e/0x1d0 crypto/rsa_helper.c:33 Read of size 1 at addr ffff88003501a708 by task keyctl/196 CPU: 1 PID: 196 Comm: keyctl Not tainted 4.14.0-09238-g1d3b78bbc6e9 #26 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.11.0-20171110_100015-anatol 04/01/2014 Call Trace: rsa_get_n+0x19e/0x1d0 crypto/rsa_helper.c:33 asn1_ber_decoder+0x82a/0x1fd0 lib/asn1_decoder.c:328 rsa_set_pub_key+0xd3/0x320 crypto/rsa.c:278 crypto_akcipher_set_pub_key ./include/crypto/akcipher.h:364 [inline] pkcs1pad_set_pub_key+0xae/0x200 crypto/rsa-pkcs1pad.c:117 crypto_akcipher_set_pub_key ./include/crypto/akcipher.h:364 [inline] public_key_verify_signature+0x270/0x9d0 crypto/asymmetric_keys/public_key.c:106 x509_check_for_self_signed+0x2ea/0x480 crypto/asymmetric_keys/x509_public_key.c:141 x509_cert_parse+0x46a/0x620 crypto/asymmetric_keys/x509_cert_parser.c:129 x509_key_preparse+0x61/0x750 crypto/asymmetric_keys/x509_public_key.c:174 asymmetric_key_preparse+0xa4/0x150 crypto/asymmetric_keys/asymmetric_type.c:388 key_create_or_update+0x4d4/0x10a0 security/keys/key.c:850 SYSC_add_key security/keys/keyctl.c:122 [inline] SyS_add_key+0xe8/0x290 security/keys/keyctl.c:62 entry_SYSCALL_64_fastpath+0x1f/0x96 Allocated by task 196: __do_kmalloc mm/slab.c:3711 [inline] __kmalloc_track_caller+0x118/0x2e0 mm/slab.c:3726 kmemdup+0x17/0x40 mm/util.c:118 kmemdup ./include/linux/string.h:414 [inline] x509_cert_parse+0x2cb/0x620 crypto/asymmetric_keys/x509_cert_parser.c:106 x509_key_preparse+0x61/0x750 crypto/asymmetric_keys/x509_public_key.c:174 asymmetric_key_preparse+0xa4/0x150 crypto/asymmetric_keys/asymmetric_type.c:388 key_create_or_update+0x4d4/0x10a0 security/keys/key.c:850 SYSC_add_key security/keys/keyctl.c:122 [inline] SyS_add_key+0xe8/0x290 security/keys/keyctl.c:62 entry_SYSCALL_64_fastpath+0x1f/0x96 Fixes: 5a7de97309f5 ("crypto: rsa - return raw integers for the ASN.1 parser") Cc: Tudor Ambarus Signed-off-by: Eric Biggers Reviewed-by: James Morris Reviewed-by: David Howells Signed-off-by: Herbert Xu Signed-off-by: Greg Kroah-Hartman commit 1fb73eae9624e2d3583581cd80bf408cc9b3916d Author: Martin Kaiser Date: Tue Oct 17 22:53:08 2017 +0200 mfd: fsl-imx25: Clean up irq settings during removal commit 18f77393796848e68909e65d692c1d1436f06e06 upstream. When fsl-imx25-tsadc is compiled as a module, loading, unloading and reloading the module will lead to a crash. 
Unable to handle kernel paging request at virtual address bf005430 [] (irq_find_matching_fwspec) from [] (of_irq_get+0x58/0x74) [] (of_irq_get) from [] (platform_get_irq+0x48/0xc8) [] (platform_get_irq) from [] (mx25_tsadc_probe+0x220/0x2f4 [fsl_imx25_tsadc]) irq_find_matching_fwspec() loops over all registered irq domains. The irq domain is still registered from last time the module was loaded but the pointer to its operations is invalid after the module was unloaded. Add a removal function which clears the irq handler and removes the irq domain. With this cleanup in place, it's possible to unload and reload the module. Signed-off-by: Martin Kaiser Reviewed-by: Lucas Stach Signed-off-by: Lee Jones Signed-off-by: Greg Kroah-Hartman
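For reference, the salsa20 reproducer quoted (flattened) in the "crypto: salsa20 - fix blkcipher_walk API usage" entry above, reflowed into compilable form. The #include targets were lost when the log was flattened; the three headers below are an assumption about what the original used.

#include <linux/if_alg.h>
#include <sys/socket.h>
#include <unistd.h>

int main()
{
    int algfd, reqfd;
    struct sockaddr_alg addr = {
        .salg_type = "skcipher",
        .salg_name = "salsa20",
    };
    char key[16] = { 0 };

    algfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
    bind(algfd, (void *)&addr, sizeof(addr));
    reqfd = accept(algfd, 0, 0);
    setsockopt(algfd, SOL_ALG, ALG_SET_KEY, key, sizeof(key));
    read(reqfd, key, sizeof(key));
}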