Source-Changes-HG archive
[src/netbsd-9]: src/sys/external/bsd/compiler_rt/dist/lib/builtins Pull up following revision(s)
details: https://anonhg.NetBSD.org/src/rev/96716152f66e
branches: netbsd-9
changeset: 963862:96716152f66e
user: martin <martin%NetBSD.org@localhost>
date: Tue May 05 18:32:16 2020 +0000
description:
Pull up following revision(s) (requested by jmcneill in ticket #889):
sys/external/bsd/compiler_rt/dist/lib/builtins/clear_cache.c: revision 1.4
Align addresses to cache lines in __clear_cache for aarch64.
This corrects an issue where, if the start and end addresses fall in
different cache lines and the end address is not cache-line aligned, the
last line is not invalidated properly.
Patch from compiler-rt upstream: https://reviews.llvm.org/rCRT323315
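To illustrate the bug the patch fixes: stepping from an unaligned start
address in line-size increments can overshoot the line containing the
last byte of the range, so that line is never passed to dc/ic. Rounding
the start down to a line boundary guarantees every line in the range is
visited. Below is a minimal, self-contained C sketch of that behavior;
the walk() helper, the 64-byte line size, and the example address range
are hypothetical stand-ins, not part of the patch:

	#include <inttypes.h>
	#include <stdint.h>
	#include <stdio.h>

	/*
	 * Walk [start, end) in line-size steps and print which cache
	 * lines get touched.  With an unaligned start, the naive walk
	 * can step over the line containing the last byte; rounding
	 * start down to a line boundary (as the patch does) visits
	 * every line in the range.
	 */
	static void
	walk(uintptr_t start, uintptr_t end, uintptr_t line_size,
	    int aligned)
	{
		uintptr_t addr = aligned ?
		    (start & ~(line_size - 1)) : start;

		for (; addr < end; addr += line_size)
			printf("  line 0x%" PRIxPTR "\n",
			    addr & ~(line_size - 1));
	}

	int
	main(void)
	{
		const uintptr_t line = 64;	/* assumed line size */

		/* [0x1004, 0x1041) spans lines 0x1000 and 0x1040. */
		printf("naive walk (misses line 0x1040):\n");
		walk(0x1004, 0x1041, line, 0);
		printf("aligned walk (visits both lines):\n");
		walk(0x1004, 0x1041, line, 1);
		return 0;
	}

With the naive walk, the only iteration is addr = 0x1004; the next step
lands at 0x1044, past end, so line 0x1040 is skipped even though byte
0x1040 is inside the range.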
diffstat:
sys/external/bsd/compiler_rt/dist/lib/builtins/clear_cache.c | 6 ++++--
1 files changed, 4 insertions(+), 2 deletions(-)
diffs (20 lines):
diff -r 668043d02622 -r 96716152f66e sys/external/bsd/compiler_rt/dist/lib/builtins/clear_cache.c
--- a/sys/external/bsd/compiler_rt/dist/lib/builtins/clear_cache.c Mon May 04 14:04:11 2020 +0000
+++ b/sys/external/bsd/compiler_rt/dist/lib/builtins/clear_cache.c Tue May 05 18:32:16 2020 +0000
@@ -143,12 +143,14 @@
* uintptr_t in case this runs in an IPL32 environment.
*/
const size_t dcache_line_size = 4 << ((ctr_el0 >> 16) & 15);
- for (addr = xstart; addr < xend; addr += dcache_line_size)
+ for (addr = xstart & ~(dcache_line_size - 1); addr < xend;
+ addr += dcache_line_size)
__asm __volatile("dc cvau, %0" :: "r"(addr));
__asm __volatile("dsb ish");
const size_t icache_line_size = 4 << ((ctr_el0 >> 0) & 15);
- for (addr = xstart; addr < xend; addr += icache_line_size)
+ for (addr = xstart & ~(icache_line_size - 1); addr < xend;
+ addr += icache_line_size)
__asm __volatile("ic ivau, %0" :: "r"(addr));
__asm __volatile("isb sy");
#elif defined(__sparc__)
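As background on the shift expressions in the diff: on aarch64,
CTR_EL0.DminLine (bits 19:16) and CTR_EL0.IminLine (bits 3:0) encode the
log2 of the smallest data/instruction cache line size in 4-byte words,
so the byte size is 4 << field. A stand-alone sketch of that decoding,
using an assumed example register value rather than a real mrs read:

	#include <stdint.h>
	#include <stdio.h>

	/*
	 * CTR_EL0.DminLine (bits 19:16) and CTR_EL0.IminLine (bits 3:0)
	 * hold log2 of the smallest line size in 4-byte words, hence
	 * 4 << field yields bytes; the masks match the diff above.
	 */
	static size_t
	dcache_line_bytes(uint64_t ctr_el0)
	{
		return (size_t)4 << ((ctr_el0 >> 16) & 15);
	}

	static size_t
	icache_line_bytes(uint64_t ctr_el0)
	{
		return (size_t)4 << ((ctr_el0 >> 0) & 15);
	}

	int
	main(void)
	{
		/* Example value only: both fields = 4 -> 64B lines. */
		const uint64_t ctr = 0x00040004;

		printf("dcache line: %zu bytes\n", dcache_line_bytes(ctr));
		printf("icache line: %zu bytes\n", icache_line_bytes(ctr));
		return 0;
	}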