Source-Changes-HG archive
[src/trunk]: src/sys/arch/arm/conf PR kern/54486
details: https://anonhg.NetBSD.org/src/rev/492e5441eaa0
branches: trunk
changeset: 965005:492e5441eaa0
user: rin <rin%NetBSD.org@localhost>
date: Mon Aug 26 17:18:42 2019 +0000
description:
PR kern/54486
Workaround for alignment faults on ARMv6+, which occur at least with
the axe(4) and athn(4) drivers.
On ARMv6+, unaligned access is enabled by default. However, it cannot
be used on non-cacheable memory, which is what DMA buffers are
allocated from. This results in the alignment faults mentioned above.
The real fix is to use cacheable memory for DMA buffers, but that
breaks some drivers, awge(4) and vchiq(4) at least.
Until we figure out those problems and fix them, we choose a fail-safe
workaround here: forbid unaligned memory access for the whole kernel.
The effect on performance is negligibly small as far as we can see.
XXX
pullup netbsd-9
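[A minimal sketch of the access pattern at issue, not from the tree; the
struct and function names below are hypothetical. On ARMv6+ the compiler
defaults to -munaligned-access and may emit a single unaligned halfword
load for a misaligned field. That is tolerated on Normal (cacheable)
memory, but on the Device/non-cacheable mappings used for DMA buffers it
raises an alignment fault; -mno-unaligned-access makes the compiler fall
back to byte loads instead.]

/* Hypothetical illustration only -- not from the NetBSD tree. */
#include <stdint.h>

struct rx_hdr {
	uint8_t		flags;		/* offset 0 */
	uint16_t	pkt_len;	/* offset 1: misaligned */
} __attribute__((__packed__));

uint16_t
rx_pkt_len(const void *dma_buf)		/* imagine non-cacheable DMA memory */
{
	const struct rx_hdr *h = dma_buf;

	/*
	 * With the ARMv6+ default (-munaligned-access), GCC may emit a
	 * single ldrh here, which faults on Device memory.  With
	 * -mno-unaligned-access it emits byte loads, which never fault.
	 */
	return h->pkt_len;
}

Compiling such a function to assembly with and without
-mno-unaligned-access shows the difference in generated code.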
diffstat:
sys/arch/arm/conf/Makefile.arm | 22 +++++++++++++++++++++-
1 files changed, 21 insertions(+), 1 deletions(-)
diffs (36 lines):
diff -r 25021aa77bcc -r 492e5441eaa0 sys/arch/arm/conf/Makefile.arm
--- a/sys/arch/arm/conf/Makefile.arm Mon Aug 26 15:35:14 2019 +0000
+++ b/sys/arch/arm/conf/Makefile.arm Mon Aug 26 17:18:42 2019 +0000
@@ -1,4 +1,4 @@
-# $NetBSD: Makefile.arm,v 1.49 2018/09/22 12:24:01 rin Exp $
+# $NetBSD: Makefile.arm,v 1.50 2019/08/26 17:18:42 rin Exp $
# Makefile for NetBSD
#
@@ -53,6 +53,26 @@
CPPFLAGS.cpufunc_asm_arm11.S+= -mcpu=arm1136j-s
CPPFLAGS.cpufunc_asm_xscale.S+= -mcpu=xscale
+.if !empty(MACHINE_ARCH:Mearmv6*) || !empty(MACHINE_ARCH:Mearmv7*)
+# XXX
+#
+# Workaround for alignment faults on ARMv6+, which occur at least
+# with the axe(4) and athn(4) drivers.
+#
+# On ARMv6+, unaligned access is enabled by default. However, it
+# cannot be used on non-cacheable memory, which is what DMA buffers
+# are allocated from. This results in the alignment faults above.
+# The real fix is to use cacheable memory for DMA buffers, but that
+# breaks some drivers, awge(4) and vchiq(4) at least.
+#
+# Until we figure out those problems and fix them, we choose a
+# fail-safe workaround here: forbid unaligned memory access for the
+# whole kernel. The effect on performance is negligibly small.
+#
+# See PR kern/54486 for more details.
+CFLAGS+= -mno-unaligned-access
+.endif
+
##
## (3) libkern and compat
##
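[A related aside, beyond the scope of this change: drivers can also avoid
faulting accesses regardless of the compiler flag by decoding multi-byte
fields byte by byte; NetBSD's <sys/endian.h> provides le16dec()/be16dec()
and friends in this spirit. A minimal sketch of the idea, with a
hypothetical name:]

#include <stdint.h>

/*
 * Hypothetical helper: byte loads are always aligned, so this is safe
 * on non-cacheable DMA memory with or without -mno-unaligned-access.
 */
static inline uint16_t
my_le16dec(const void *buf)
{
	const uint8_t *p = buf;

	return (uint16_t)(p[0] | (p[1] << 8));
}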