commit bc69c961f59512012af64efd4ff20b3cb67c99ce
Author: Greg Kroah-Hartman
Date:   Sat Nov 16 10:27:52 2019 +0100

    Linux 4.4.202

commit 6534fc5ddf887ca38ee2e04a23b062b4503b3d20
Author: Vineela Tummalapalli
Date:   Mon Nov 4 12:22:01 2019 +0100

    x86/bugs: Add ITLB_MULTIHIT bug infrastructure

    commit db4d30fbb71b47e4ecb11c4efa5d8aad4b03dfae upstream.

    Some processors may incur a machine check error possibly resulting in
    an unrecoverable CPU lockup when an instruction fetch encounters a TLB
    multi-hit in the instruction TLB. This can occur when the page size is
    changed along with either the physical address or cache type. The
    relevant erratum can be found here:

      https://bugzilla.kernel.org/show_bug.cgi?id=205195

    There are other processors affected for which the erratum does not
    fully disclose the impact.

    This issue affects both bare-metal x86 page tables and EPT.

    It can be mitigated by either eliminating the use of large pages or by
    using careful TLB invalidations when changing the page size in the page
    tables.

    Just like Spectre, Meltdown, L1TF and MDS, a new bit has been allocated
    in MSR_IA32_ARCH_CAPABILITIES (PSCHANGE_MC_NO) and will be set on CPUs
    which are mitigated against this issue.

    Signed-off-by: Vineela Tummalapalli
    Co-developed-by: Pawan Gupta
    Signed-off-by: Pawan Gupta
    Signed-off-by: Paolo Bonzini
    Signed-off-by: Thomas Gleixner
    [bwh: Backported to 4.4:
     - No support for X86_VENDOR_HYGON, ATOM_AIRMONT_NP
     - Adjust context, indentation]
    Signed-off-by: Ben Hutchings
    Signed-off-by: Greg Kroah-Hartman

commit b081576edd675fe37690080eb9093c1779422657
Author: Josh Poimboeuf
Date:   Wed Nov 6 20:26:46 2019 -0600

    x86/speculation/taa: Fix printing of TAA_MSG_SMT on IBRS_ALL CPUs

    commit 012206a822a8b6ac09125bfaa210a95b9eb8f1c1 upstream.

    For new IBRS_ALL CPUs, the Enhanced IBRS check at the beginning of
    cpu_bugs_smt_update() causes the function to return early,
    unintentionally skipping the MDS and TAA logic.

    This is not a problem for MDS, because there appears to be no overlap
    between IBRS_ALL and MDS-affected CPUs. So the MDS mitigation would be
    disabled and nothing would need to be done in this function anyway. But
    for TAA, the TAA_MSG_SMT string will never get printed on Cascade Lake
    and newer.

    The check is superfluous anyway: when 'spectre_v2_enabled' is
    SPECTRE_V2_IBRS_ENHANCED, 'spectre_v2_user' is always
    SPECTRE_V2_USER_NONE, and so the 'spectre_v2_user' switch statement
    handles it appropriately by doing nothing. So just remove the check.

    Fixes: 1b42f017415b ("x86/speculation/taa: Add mitigation for TSX Async Abort")
    Signed-off-by: Josh Poimboeuf
    Signed-off-by: Thomas Gleixner
    Reviewed-by: Tyler Hicks
    Reviewed-by: Borislav Petkov
    Signed-off-by: Ben Hutchings
    Signed-off-by: Greg Kroah-Hartman

commit faaa0750f5c426102e095b9343b50b868a3a45f0
Author: Michal Hocko
Date:   Wed Oct 23 12:35:50 2019 +0200

    x86/tsx: Add config options to set tsx=on|off|auto

    commit db616173d787395787ecc93eef075fa975227b10 upstream.

    There is a general consensus that TSX usage is not widespread, while
    history shows there is a non-trivial space for possible side channel
    attacks. Therefore TSX is disabled by default even on platforms that
    might have a safe implementation of TSX according to the current
    knowledge. This is a fair trade-off to make.

    There are, however, workloads that really do benefit from using TSX,
    and updating to a newer kernel with TSX disabled might introduce a
    noticeable regression. This would be especially a problem for Linux
    distributions which will provide TAA mitigations.
    Introduce config options X86_INTEL_TSX_MODE_OFF, X86_INTEL_TSX_MODE_ON
    and X86_INTEL_TSX_MODE_AUTO to control the TSX feature. The config
    setting can be overridden by the tsx cmdline options.

    [ bp: Text cleanups from Josh. ]

    Suggested-by: Borislav Petkov
    Signed-off-by: Michal Hocko
    Signed-off-by: Pawan Gupta
    Signed-off-by: Borislav Petkov
    Signed-off-by: Thomas Gleixner
    Reviewed-by: Josh Poimboeuf
    [bwh: Backported to 4.4: adjust doc filename]
    Signed-off-by: Ben Hutchings
    Signed-off-by: Greg Kroah-Hartman

commit d4ce17c877b104cec728b80f66d880d3dc0d5a59
Author: Pawan Gupta
Date:   Wed Oct 23 12:32:55 2019 +0200

    x86/speculation/taa: Add documentation for TSX Async Abort

    commit a7a248c593e4fd7a67c50b5f5318fe42a0db335e upstream.

    Add the documentation for TSX Async Abort. Include the description of
    the issue, how to check the mitigation state, how to control the
    mitigation, and guidance for system administrators.

    [ bp: Add proper SPDX tags, touch ups by Josh and me. ]

    Co-developed-by: Antonio Gomez Iglesias
    Signed-off-by: Pawan Gupta
    Signed-off-by: Antonio Gomez Iglesias
    Signed-off-by: Borislav Petkov
    Signed-off-by: Thomas Gleixner
    Reviewed-by: Mark Gross
    Reviewed-by: Tony Luck
    Reviewed-by: Josh Poimboeuf
    [bwh: Backported to 4.4:
     - Drop changes to ReST index files
     - Drop "nosmt" documentation
     - Adjust filenames, context]
    Signed-off-by: Ben Hutchings
    Signed-off-by: Greg Kroah-Hartman

commit f6eabafe0fa2a007219af36340414f961bedde9d
Author: Pawan Gupta
Date:   Wed Oct 23 12:28:57 2019 +0200

    x86/tsx: Add "auto" option to the tsx= cmdline parameter

    commit 7531a3596e3272d1f6841e0d601a614555dc6b65 upstream.

    Platforms which are not affected by X86_BUG_TAA may want the TSX
    feature enabled. Add an "auto" option to the tsx= cmdline parameter:
    with tsx=auto, TSX is disabled when X86_BUG_TAA is present and enabled
    otherwise.

    More details on X86_BUG_TAA can be found here:
    https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html

    [ bp: Extend the arg buffer to accommodate "auto\0". ]

    Signed-off-by: Pawan Gupta
    Signed-off-by: Borislav Petkov
    Signed-off-by: Thomas Gleixner
    Reviewed-by: Tony Luck
    Reviewed-by: Josh Poimboeuf
    [bwh: Backported to 4.4: adjust filename]
    Signed-off-by: Ben Hutchings
    Signed-off-by: Greg Kroah-Hartman

commit 5b3b71385e07830aab9ee1c6f03bf140736e1fdc
Author: Pawan Gupta
Date:   Wed Oct 23 12:23:33 2019 +0200

    kvm/x86: Export MDS_NO=0 to guests when TSX is enabled

    commit e1d38b63acd843cfdd4222bf19a26700fd5c699e upstream.

    Export the IA32_ARCH_CAPABILITIES MSR bit MDS_NO=0 to guests on TSX
    Async Abort (TAA) affected hosts that have TSX enabled and updated
    microcode. This is required so that the guests don't complain,

      "Vulnerable: Clear CPU buffers attempted, no microcode"

    when the host has the updated microcode to clear CPU buffers.

    The microcode update also adds support for MSR_IA32_TSX_CTRL, which is
    enumerated by the ARCH_CAP_TSX_CTRL bit in the IA32_ARCH_CAPABILITIES
    MSR. Guests can't do this check themselves when the ARCH_CAP_TSX_CTRL
    bit is not exported to the guests. In this case export MDS_NO=0 to the
    guests. When guests have CPUID.MD_CLEAR=1, they deploy the MDS
    mitigation, which also mitigates TAA.
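As a rough illustration of the logic described in the entry above (not the literal 4.4 backport), the guest-visible ARCH_CAPABILITIES value can be derived along these lines. The flag and bit names (X86_BUG_TAA, X86_FEATURE_RTM, ARCH_CAP_TSX_CTRL_MSR, ARCH_CAP_MDS_NO) follow the upstream kernel, x86_read_arch_cap_msr() is the helper added later in this series, and the function name itself is hypothetical:

static u64 example_kvm_guest_arch_capabilities(void)
{
        /* Start from the host's IA32_ARCH_CAPABILITIES value. */
        u64 data = x86_read_arch_cap_msr();

        /*
         * On a TAA-affected host that keeps TSX (RTM) enabled and has the
         * TSX_CTRL microcode, the guest is not shown TSX_CTRL and so
         * cannot tell that VERW now clears the CPU buffers.  Clearing
         * MDS_NO makes the guest deploy its MDS (VERW) mitigation, which
         * also covers TAA.
         */
        if (boot_cpu_has_bug(X86_BUG_TAA) &&
            boot_cpu_has(X86_FEATURE_RTM) &&
            (data & ARCH_CAP_TSX_CTRL_MSR))
                data &= ~ARCH_CAP_MDS_NO;

        return data;
}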
    Signed-off-by: Pawan Gupta
    Signed-off-by: Borislav Petkov
    Signed-off-by: Thomas Gleixner
    Tested-by: Neelima Krishnan
    Reviewed-by: Tony Luck
    Reviewed-by: Josh Poimboeuf
    Signed-off-by: Ben Hutchings
    Signed-off-by: Greg Kroah-Hartman

commit d601096be42c88a20240dddd426c6273899f2468
Author: Pawan Gupta
Date:   Wed Oct 23 12:19:51 2019 +0200

    x86/speculation/taa: Add sysfs reporting for TSX Async Abort

    commit 6608b45ac5ecb56f9e171252229c39580cc85f0f upstream.

    Add the sysfs reporting file for TSX Async Abort. It exposes the
    vulnerability and mitigation state similar to the existing files for
    the other hardware vulnerabilities.

    The sysfs file path is:
    /sys/devices/system/cpu/vulnerabilities/tsx_async_abort

    Signed-off-by: Pawan Gupta
    Signed-off-by: Borislav Petkov
    Signed-off-by: Thomas Gleixner
    Tested-by: Neelima Krishnan
    Reviewed-by: Mark Gross
    Reviewed-by: Tony Luck
    Reviewed-by: Greg Kroah-Hartman
    Reviewed-by: Josh Poimboeuf
    Signed-off-by: Ben Hutchings
    Signed-off-by: Greg Kroah-Hartman

commit 1cdc174d7996a5fb413cd9c1a2b66581a0e80a99
Author: Pawan Gupta
Date:   Wed Oct 23 11:30:45 2019 +0200

    x86/speculation/taa: Add mitigation for TSX Async Abort

    commit 1b42f017415b46c317e71d41c34ec088417a1883 upstream.

    TSX Async Abort (TAA) is a side channel vulnerability to the internal
    buffers in some Intel processors similar to Microarchitectural Data
    Sampling (MDS). In this case, certain loads may speculatively pass
    invalid data to dependent operations when an asynchronous abort
    condition is pending in a TSX transaction. This includes loads with no
    fault or assist condition. Such loads may speculatively expose stale
    data from the uarch data structures as in MDS. The scope of exposure is
    both same-thread and cross-thread. This issue affects all current
    processors that support TSX but do not have ARCH_CAP_TAA_NO (bit 8)
    set in MSR_IA32_ARCH_CAPABILITIES.

    On CPUs which have their IA32_ARCH_CAPABILITIES MSR bit MDS_NO=0,
    CPUID.MD_CLEAR=1 and the MDS mitigation is clearing the CPU buffers
    using VERW or L1D_FLUSH, there is no additional mitigation needed for
    TAA.

    On affected CPUs with MDS_NO=1 this issue can be mitigated by disabling
    the Transactional Synchronization Extensions (TSX) feature. A new MSR
    IA32_TSX_CTRL, available in future processors and in current processors
    after a microcode update, can be used to control the TSX feature. There
    are two bits in that MSR:

    * TSX_CTRL_RTM_DISABLE disables the TSX sub-feature Restricted
      Transactional Memory (RTM).

    * TSX_CTRL_CPUID_CLEAR clears the RTM enumeration in CPUID. The other
      TSX sub-feature, Hardware Lock Elision (HLE), is unconditionally
      disabled with updated microcode but still enumerated as present by
      CPUID(EAX=7).EBX{bit4}.

    The second mitigation approach is similar to MDS, which is clearing the
    affected CPU buffers on return to user space and when entering a guest.
    The relevant microcode update is required for the mitigation to work.
    More details on this approach can be found here:
    https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html

    The TSX feature can be controlled by the "tsx" command line parameter.
    If it is force-enabled then "Clear CPU buffers" (MDS mitigation) is
    deployed. The effective mitigation state can be read from sysfs.

    [ bp:
      - massage + comments cleanup
      - s/TAA_MITIGATION_TSX_DISABLE/TAA_MITIGATION_TSX_DISABLED/g - Josh.
      - remove partial TAA mitigation in update_mds_branch_idle() - Josh.
      - s/tsx_async_abort_cmdline/tsx_async_abort_parse_cmdline/g ]

    Signed-off-by: Pawan Gupta
    Signed-off-by: Borislav Petkov
    Signed-off-by: Thomas Gleixner
    Reviewed-by: Josh Poimboeuf
    [bwh: Backported to 4.4:
     - Add #include "cpu.h" in bugs.c
     - Drop __ro_after_init attribute
     - Drop "nosmt" support
     - Adjust context, indentation]
    Signed-off-by: Ben Hutchings
    Signed-off-by: Greg Kroah-Hartman

commit ab7b39b99f030a15f4f398ea6c2fbe9600e4818c
Author: Pawan Gupta
Date:   Wed Oct 23 11:01:53 2019 +0200

    x86/cpu: Add a "tsx=" cmdline option with TSX disabled by default

    commit 95c5824f75f3ba4c9e8e5a4b1a623c95390ac266 upstream.

    Add a kernel cmdline parameter "tsx" to control the Transactional
    Synchronization Extensions (TSX) feature. On CPUs that support TSX
    control, use "tsx=on|off" to enable or disable TSX. Not specifying this
    option is equivalent to "tsx=off". This is because on certain
    processors TSX may be used as a part of a speculative side channel
    attack.

    Carve out the TSX controlling functionality into a separate compilation
    unit because TSX is a CPU feature while the TSX async abort control
    machinery will go to cpu/bugs.c.

    [ bp:
      - Massage, shorten and clear the arg buffer.
      - Clarifications of the tsx= possible options - Josh.
      - Expand on TSX_CTRL availability - Pawan. ]

    Signed-off-by: Pawan Gupta
    Signed-off-by: Borislav Petkov
    Signed-off-by: Thomas Gleixner
    Reviewed-by: Josh Poimboeuf
    [bwh: Backported to 4.4:
     - Drop __ro_after_init attribute
     - Adjust filenames, context]
    Signed-off-by: Ben Hutchings
    Signed-off-by: Greg Kroah-Hartman

commit 725afc0d60cf8b9e3bf023920c881f4a4261dc49
Author: Pawan Gupta
Date:   Wed Oct 23 10:52:35 2019 +0200

    x86/cpu: Add a helper function x86_read_arch_cap_msr()

    commit 286836a70433fb64131d2590f4bf512097c255e1 upstream.

    Add a helper function to read the IA32_ARCH_CAPABILITIES MSR.

    Signed-off-by: Pawan Gupta
    Signed-off-by: Borislav Petkov
    Signed-off-by: Thomas Gleixner
    Tested-by: Neelima Krishnan
    Reviewed-by: Mark Gross
    Reviewed-by: Tony Luck
    Reviewed-by: Josh Poimboeuf
    Signed-off-by: Ben Hutchings
    Signed-off-by: Greg Kroah-Hartman

commit 124635392ef394772850172bd5370e62cfe781b4
Author: Pawan Gupta
Date:   Wed Oct 23 10:45:50 2019 +0200

    x86/msr: Add the IA32_TSX_CTRL MSR

    commit c2955f270a84762343000f103e0640d29c7a96f3 upstream.

    Transactional Synchronization Extensions (TSX) may be used on certain
    processors as part of a speculative side channel attack. A microcode
    update for existing processors that are vulnerable to this attack will
    add a new MSR - IA32_TSX_CTRL - to allow the system administrator the
    option to disable TSX as one of the possible mitigations.

    The CPUs which get this new MSR after a microcode upgrade are the ones
    which do not set MSR_IA32_ARCH_CAPABILITIES.MDS_NO (bit 5) because
    those CPUs have CPUID.MD_CLEAR, i.e., the VERW implementation which
    clears all CPU buffers takes care of the TAA case as well.

    [ Note that future processors that are not vulnerable will also support
      the IA32_TSX_CTRL MSR. ]

    Add defines for the new IA32_TSX_CTRL MSR and its bits.

    TSX has two sub-features:

    1. Restricted Transactional Memory (RTM) is an explicitly-used feature
       where new instructions begin and end TSX transactions.
    2. Hardware Lock Elision (HLE) is implicitly used when certain kinds of
       "old" style locks are used by software.

    Bit 7 of the IA32_ARCH_CAPABILITIES indicates the presence of the
    IA32_TSX_CTRL MSR.
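For illustration only, that presence check reduces to reading the capabilities MSR and testing bit 7. A minimal sketch, assuming the ARCH_CAP_TSX_CTRL_MSR define for that bit and the x86_read_arch_cap_msr() helper from the previous entry (the function name here is hypothetical):

static bool example_tsx_ctrl_is_supported(void)
{
        u64 ia32_cap = x86_read_arch_cap_msr();

        /* ARCH_CAP_TSX_CTRL_MSR corresponds to bit 7 of IA32_ARCH_CAPABILITIES. */
        return !!(ia32_cap & ARCH_CAP_TSX_CTRL_MSR);
}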
    There are two control bits in the IA32_TSX_CTRL MSR:

      Bit 0: When set, it disables the Restricted Transactional Memory
             (RTM) sub-feature of TSX (will force all transactions to
             abort on the XBEGIN instruction).

      Bit 1: When set, it disables the enumeration of the RTM and HLE
             features (i.e. it will make CPUID(EAX=7).EBX{bit4} and
             CPUID(EAX=7).EBX{bit11} read as 0).

    The other TSX sub-feature, Hardware Lock Elision (HLE), is
    unconditionally disabled by the new microcode but still enumerated as
    present by CPUID(EAX=7).EBX{bit4}, unless disabled by
    IA32_TSX_CTRL_MSR[1] - TSX_CTRL_CPUID_CLEAR.

    Signed-off-by: Pawan Gupta
    Signed-off-by: Borislav Petkov
    Signed-off-by: Thomas Gleixner
    Tested-by: Neelima Krishnan
    Reviewed-by: Mark Gross
    Reviewed-by: Tony Luck
    Reviewed-by: Josh Poimboeuf
    [bwh: Backported to 4.4: adjust context]
    Signed-off-by: Ben Hutchings
    Signed-off-by: Greg Kroah-Hartman

commit b6b8d3bde6be69dce236117062949575d5fbaa73
Author: Paolo Bonzini
Date:   Mon Aug 19 17:24:07 2019 +0200

    KVM: x86: use Intel speculation bugs and features as derived in generic x86 code

    commit 0c54914d0c52a15db9954a76ce80fee32cf318f4 upstream.

    Similar to the AMD bits, set the Intel bits from the vendor-independent
    feature and bug flags, because KVM_GET_SUPPORTED_CPUID does not care
    about the vendor and they should be set on AMD processors as well.

    Suggested-by: Jim Mattson
    Reviewed-by: Jim Mattson
    Signed-off-by: Paolo Bonzini
    Signed-off-by: Thomas Gleixner
    [bwh: Backported to 4.4: adjust context]
    Signed-off-by: Ben Hutchings
    Signed-off-by: Greg Kroah-Hartman

commit f5850490b49c0bdde44c873fac6221d38ff252d0
Author: Jim Mattson
Date:   Wed May 9 14:29:35 2018 -0700

    kvm: x86: IA32_ARCH_CAPABILITIES is always supported

    commit 1eaafe91a0df4157521b6417b3dd8430bf5f52f0 upstream.

    If there is a possibility that a VM may migrate to a Skylake host, then
    the hypervisor should report IA32_ARCH_CAPABILITIES.RSBA[bit 2] as
    being set (future work, of course). This implies that
    CPUID.(EAX=7,ECX=0):EDX.ARCH_CAPABILITIES[bit 29] should be set.
    Therefore, kvm should report this CPUID bit as being supported whether
    or not the host supports it. Userspace is still free to clear the bit
    if it chooses.

    For more information on RSBA, see Intel's white paper, "Retpoline: A
    Branch Target Injection Mitigation" (Document Number 337131-001),
    currently available at:

      https://bugzilla.kernel.org/show_bug.cgi?id=199511

    Since the IA32_ARCH_CAPABILITIES MSR is emulated in kvm, there is no
    dependency on hardware support for this feature.

    Signed-off-by: Jim Mattson
    Reviewed-by: Konrad Rzeszutek Wilk
    Fixes: 28c1c9fabf48 ("KVM/VMX: Emulate MSR_IA32_ARCH_CAPABILITIES")
    Signed-off-by: Radim Krčmář
    Signed-off-by: Greg Kroah-Hartman
    Signed-off-by: Ben Hutchings
    Signed-off-by: Greg Kroah-Hartman

commit eb5a31b4b374ed29aedf597d75394bee4505e30b
Author: Sean Christopherson
Date:   Thu Mar 7 15:43:02 2019 -0800

    KVM: x86: Emulate MSR_IA32_ARCH_CAPABILITIES on AMD hosts

    commit 0cf9135b773bf32fba9dd8e6699c1b331ee4b749 upstream.

    The CPUID flag ARCH_CAPABILITIES is unconditionally exposed to host
    userspace for all x86 hosts, i.e. KVM advertises ARCH_CAPABILITIES
    regardless of hardware support under the pretense that KVM fully
    emulates MSR_IA32_ARCH_CAPABILITIES. Unfortunately, only VMX hosts
    handle accesses to MSR_IA32_ARCH_CAPABILITIES (despite KVM_GET_MSRS
    also reporting MSR_IA32_ARCH_CAPABILITIES for all hosts).

    Move the MSR_IA32_ARCH_CAPABILITIES handling to common x86 code so that
    it's emulated on AMD hosts.
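The move described above amounts to answering MSR_IA32_ARCH_CAPABILITIES reads from the vendor-independent MSR path instead of vmx.c. A rough, hedged sketch of that idea follows; the function and helper names (example_get_arch_capabilities, guest_has_arch_capabilities) are illustrative, not the literal 4.4 code:

static int example_get_arch_capabilities(struct kvm_vcpu *vcpu,
                                         struct msr_data *msr_info)
{
        /*
         * Only expose the MSR to guests whose CPUID enumerates
         * ARCH_CAPABILITIES, unless the access is host-initiated
         * (KVM_GET_MSRS reports this MSR for all hosts).
         */
        if (!msr_info->host_initiated &&
            !guest_has_arch_capabilities(vcpu))  /* hypothetical helper */
                return 1;  /* inject #GP into the guest */

        msr_info->data = vcpu->arch.arch_capabilities;
        return 0;
}

Because this lives in common code rather than in the VMX-specific handler, the same emulation works on AMD hosts as well.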
    Fixes: 1eaafe91a0df4 ("kvm: x86: IA32_ARCH_CAPABILITIES is always supported")
    Cc: stable@vger.kernel.org
    Reported-by: Xiaoyao Li
    Cc: Jim Mattson
    Signed-off-by: Sean Christopherson
    Signed-off-by: Paolo Bonzini
    Signed-off-by: Greg Kroah-Hartman
    [bwh: Backported to 4.4: adjust context]
    Signed-off-by: Ben Hutchings
    Signed-off-by: Greg Kroah-Hartman

commit 873e2f25ccb7112f8818d7ddb77456e91e188691
Author: Ben Hutchings
Date:   Tue Oct 8 23:35:30 2019 +0100

    KVM: Introduce kvm_get_arch_capabilities()

    Extracted from commit 5b76a3cff011 "KVM: VMX: Tell the nested
    hypervisor to skip L1D flush on vmentry". We will need this to let a
    nested hypervisor know that we have applied the mitigation for TAA.

    Signed-off-by: Ben Hutchings
    Signed-off-by: Greg Kroah-Hartman

commit 0b42886a355c83e26fcf0a379451d91e8ab7f7af
Author: Nicholas Piggin
Date:   Mon Nov 28 12:42:26 2016 +1100

    powerpc/boot: Request no dynamic linker for boot wrapper

    commit ff45000fcb56b5b0f1a14a865d3541746d838a0a upstream.

    The boot wrapper performs its own relocations and does not require a
    PT_INTERP segment. However, currently we don't tell the linker that.

    Prior to binutils 2.28 that works OK. But since binutils commit
    1a9ccd70f9a7 ("Fix the linker so that it will not silently generate
    ELF binaries with invalid program headers. Fix readelf to report such
    invalid binaries.") binutils tries to create a program header segment
    due to PT_INTERP, and the link fails because there is no space for it:

      ld: arch/powerpc/boot/zImage.pseries: Not enough room for program headers, try linking with -N
      ld: final link failed: Bad value

    So tell the linker not to do that, by passing --no-dynamic-linker.

    Cc: stable@vger.kernel.org
    Reported-by: Anton Blanchard
    Signed-off-by: Nicholas Piggin
    [mpe: Drop dependency on ld-version.sh and massage change log]
    Signed-off-by: Michael Ellerman
    [ajd: backport to v4.4 (resolve conflict with a comment line)]
    Signed-off-by: Andrew Donnellan
    Signed-off-by: Greg Kroah-Hartman

commit c311fe2c7457c6cd150fcf0465e34f79fc7267d7
Author: Nicholas Piggin
Date:   Sun Nov 27 13:46:20 2016 +1100

    powerpc: Fix compiling a BE kernel with a powerpc64le toolchain

    commit 4dc831aa88132f835cefe876aa0206977c4d7710 upstream.

    GCC can compile with either endian, but the default ABI version is set
    based on the default endianness of the toolchain. Alan Modra says:

      you need both -mbig and -mabi=elfv1 to make a powerpc64le gcc
      generate powerpc64 code

    The opposite is true for powerpc64: when generating -mlittle it
    requires -mabi=elfv2 to generate the v2 ABI, which we were already
    doing.

    This change adds ABI annotations together with endianness for all
    cases, LE and BE. This fixes the case of building a BE kernel with a
    toolchain that is LE by default.

    Signed-off-by: Nicholas Piggin
    Tested-by: Naveen N. Rao
    Signed-off-by: Michael Ellerman
    Cc: Andrew Donnellan
    Signed-off-by: Greg Kroah-Hartman

commit bd52aa88e296f9cdfe30b4e0bd482ddd959d8cc2
Author: Michael Ellerman
Date:   Tue Aug 9 22:43:46 2016 +1000

    powerpc/Makefile: Use cflags-y/aflags-y for setting endian options

    commit 164af597ce945751e2dcd53d0a86e84203a6d117 upstream.

    When we introduced the little endian support, we added the endian flags
    to CC directly using override. I don't know the history of why we did
    that; I suspect no one does.

    Although this mostly works, it has one bug, which is that CROSS32CC
    doesn't get -mbig-endian. That means when the compiler is little endian
    by default and the user is building big endian, vdso32 is incorrectly
    compiled as little endian and the kernel fails to build.
    Instead we can add the endian flags to cflags-y/aflags-y, and then
    append those to KBUILD_CFLAGS/KBUILD_AFLAGS.

    This has the advantage of being 1) less ugly, 2) the documented way of
    adding flags in the arch Makefile, and 3) it fixes building vdso32 with
    an LE toolchain.

    Signed-off-by: Michael Ellerman
    Cc: Andrew Donnellan
    Signed-off-by: Greg Kroah-Hartman

commit 615ae6716945d7263d424a7a735022fd3ef17099
Author: Jonas Gorski
Date:   Mon Dec 10 12:40:38 2018 +0100

    MIPS: BCM63XX: fix switch core reset on BCM6368

    commit 8a38dacf87180738d42b058334c951eba15d2d47 upstream.

    The Ethernet Switch core mask was set to 0, causing the switch core not
    to be reset on BCM6368 on boot. Provide the proper mask so the switch
    core gets reset to a known good state.

    Fixes: 799faa626c71 ("MIPS: BCM63XX: add core reset helper")
    Signed-off-by: Jonas Gorski
    Signed-off-by: Paul Burton
    Cc: linux-mips@vger.kernel.org
    Cc: Ralf Baechle
    Cc: James Hogan
    Cc: Florian Fainelli
    Signed-off-by: Amit Pundir
    Signed-off-by: Greg Kroah-Hartman

commit 411b1be44169c62006c31261d2a9ac1385e348cd
Author: Junaid Shahid
Date:   Mon Nov 11 16:17:05 2019 -0800

    kvm: mmu: Don't read PDPTEs when paging is not enabled

    [ Upstream commit d35b34a9a70edae7ef923f100e51b8b5ae9fe899 ]

    kvm should not attempt to read guest PDPTEs when CR0.PG = 0 and
    CR4.PAE = 1.

    Signed-off-by: Junaid Shahid
    Signed-off-by: Paolo Bonzini
    Signed-off-by: Sean Christopherson
    Signed-off-by: Sasha Levin
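As a hedged illustration of the fix just described (not the literal backported diff): guest PDPTEs only exist under 32-bit PAE paging, so a PDPTE load should be guarded on CR0.PG being set in addition to CR4.PAE. Using the existing KVM helpers is_long_mode(), is_pae(), is_paging() and load_pdptrs(), the guard might look like this; the wrapper function name is hypothetical:

static void example_maybe_load_pdptrs(struct kvm_vcpu *vcpu)
{
        /*
         * PDPTEs are only meaningful when the guest is running 32-bit PAE
         * paging: CR4.PAE = 1 *and* CR0.PG = 1, and not in long mode.
         * Real callers also use the load_pdptrs() return value to decide
         * whether an MMU reset is needed.
         */
        if (!is_long_mode(vcpu) && is_pae(vcpu) && is_paging(vcpu))
                load_pdptrs(vcpu, vcpu->arch.walk_mmu, kvm_read_cr3(vcpu));
}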