Subject: kern/28951: broken NQNFS client support in 2.0
To: None <kern-bug-people@netbsd.org, gnats-admin@netbsd.org>
From: Matthew Mondor <mm_lists@pulsar-zone.net>
List: netbsd-bugs
Date: 01/13/2005 14:06:00
>Number: 28951
>Category: kern
>Synopsis: broken NQNFS client support in 2.0
>Confidential: no
>Severity: serious
>Priority: high
>Responsible: kern-bug-people
>State: open
>Class: sw-bug
>Submitter-Id: net
>Arrival-Date: Thu Jan 13 14:06:00 +0000 2005
>Originator: Matthew Mondor
>Release: NetBSD 2.0
>Organization:
>Environment:
System: NetBSD hal.xisop 2.0 NetBSD 2.0 (HAL) #0: Sat Jan 1 15:00:45 EST 2005 root@hal.xisop:/usr/src/sys/arch/i386/compile/HAL i386
Architecture: i386
Machine: i386
>Description:
I am getting occasional crashes which I can reliably reproduce by
creating a lot of file I/O load, using either locate.updatedb or a cvs
update of a rather large tree, while at the same time causing lstat(2)
calls to an nfs server via a remotely mounted filesystem which uses the
NQNFS cache consistency extensions. The crash no longer occurs with this
feature disabled. The server also runs NetBSD 2.0 on an i686.
Thanks to Chuck Silvers, who helped determine that the NQNFS extensions
code was the cause; there is no fix at this time yet.
VERSION
=======
# uname -a
NetBSD hal.xisop 2.0 NetBSD 2.0 (HAL) #0: Sat Jan 1 15:00:45 EST 2005 root@hal.xisop:/usr/src/sys/arch/i386/compile/HAL i386
DMESG
=====
# dmesg -M /var/crash/netbsd.3.core
[...]
uvm_fault(0xc71d0378, 0, 0, 2) -> 0xe
fatal page fault in supervisor mode
trap type 6 code 2 eip c0173231 cs 8 eflags 10213 cr2 10c ilevel 0
panic: trap
syncing disks... 9 6 5 4 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 giving up
Printing vnodes for busy buffers
tag 1 type VBLK, usecount 14573, writecount 0, refcount 2958,
tag VT_UFS, ino 3566716, on dev 0, 0 flags 0x0, effnlink 1, nlink 1
mode 060640, owner 0, group 5, size 0 not locked
giving up
[...]
BACKTRACE
=========
# gdb /usr/src/sys/arch/i386/compile/HAL/netbsd.gdb
(gdb) target kcore /var/crash/netbsd.3.core
panic: trap
#0 0xc0481000 in ?? ()
(gdb) bt
#0 0xc0481000 in ?? ()
#1 0xc0285e97 in cpu_reboot (howto=256, bootstr=0x0)
at ../../../../arch/i386/i386/machdep.c:745
#2 0xc021a149 in panic (fmt=0xc0344f02 "trap")
at ../../../../kern/subr_prf.c:242
#3 0xc028f62e in trap (frame=0xc8f3dda4)
at ../../../../arch/i386/i386/trap.c:296
#4 0xc0102dd9 in calltrap ()
#5 0xc024b31a in VOP_INACTIVE (vp=0xc6cb56a0, p=0xc71ce660)
at ../../../../kern/vnode_if.c:1024
#6 0xc02411ee in vput (vp=0xc6cb56a0) at ../../../../kern/vfs_subr.c:1322
#7 0xc0247289 in sys___lstat13 (l=0xc6f47c64, v=0xc8f3df64, retval=0xc8f3df5c)
at ../../../../kern/vfs_syscalls.c:2210
#8 0xc028ef25 in syscall_plain (frame=0xc8f3dfa8)
at ../../../../arch/i386/i386/syscall.c:156
VNODE
=====
(gdb) print *(struct vnode *)0xc6cb56a0
$1 = {v_uobj = {vmobjlock = {lock_data = 0}, pgops = 0xc0382450, memq = {
tqh_first = 0x0, tqh_last = 0xc6cb56a8}, uo_npages = 0, uo_refs = 0},
v_size = 7846, v_flag = 0, v_numoutput = 0, v_writecount = 0, v_holdcnt = 0,
v_mount = 0xc0a01a00, v_op = 0xc08d5b00, v_freelist = {tqe_next = 0x0,
tqe_prev = 0xc76c8c34}, v_mntvnodes = {le_next = 0xc87c79d0,
le_prev = 0xc0a01a14}, v_cleanblkhd = {lh_first = 0x0}, v_dirtyblkhd = {
lh_first = 0x0}, v_synclist = {le_next = 0x0, le_prev = 0x0}, v_dnclist = {
lh_first = 0x0}, v_nclist = {lh_first = 0xc880b408}, v_un = {
vu_mountedhere = 0x0, vu_socket = 0x0, vu_specinfo = 0x0,
vu_fifoinfo = 0x0}, v_lease = 0x0, v_type = VREG, v_tag = VT_NFS,
v_lock = {lk_interlock = {lock_data = 0}, lk_flags = 0, lk_sharecount = 0,
lk_exclusivecount = 0, lk_recurselevel = 0, lk_waitcount = 0,
lk_wmesg = 0xc0342140 "vnlock", lk_un = {lk_un_sleep = {
lk_sleep_lockholder = -1, lk_sleep_locklwp = 0, lk_sleep_prio = 20,
lk_sleep_timo = 0}, lk_un_spin = {lk_spin_cpu = 4294967295}}},
v_vnlock = 0xc6cb5710, v_data = 0xc6f363e8, v_klist = {slh_first = 0x0}}
PROCESS
=======
(gdb) print *(struct proc *)0xc71ce660
$2 = {p_list = {le_next = 0xc71ceb28, le_prev = 0xc03c0d28},
p_cred = 0xc6a99510, p_fd = 0xc7177d28, p_cwdi = 0xc6a9b1e0,
p_stats = 0xc8f3f00c, p_limit = 0xc6b323a8, p_vmspace = 0xc71d0378,
p_sigacts = 0xc8f3e0a0, p_ksems = 0x0, p_exitsig = 20, p_flag = 16386,
p_stat = 2 '\002', p_pad1 = "¾Þ", p_pid = 1496, p_pglist = {
le_next = 0xc71ce000, le_prev = 0xc71ce368}, p_pptr = 0xc71ce330,
p_sibling = {le_next = 0x0, le_prev = 0xc71ce37c}, p_children = {
lh_first = 0x0}, p_lock = {lock_data = -559038737}, p_lwps = {
lh_first = 0xc6f47c64}, p_raslist = {lh_first = 0x0}, p_nlwps = 1,
p_nrlwps = 1, p_nzlwps = 0, p_nlwpid = 1, p_nstopchild = 0, p_sa = 0x0,
p_estcpu = 14, p_cpticks = 43, p_pctcpu = 29, p_opptr = 0x0, p_timers = 0x0,
p_rtime = {tv_sec = 0, tv_usec = 95536}, p_uticks = 0, p_sticks = 73,
p_iticks = 0, p_traceflag = 0, p_tracep = 0x0, p_systrace = 0x0,
p_textvp = 0xc730385c, p_emul = 0xc031f680, p_emuldata = 0x0, p_userret = 0,
p_userret_arg = 0x0, p_execsw = 0xc031f490, p_klist = {slh_first = 0x0},
p_sigctx = {ps_siglist = {__bits = {0, 0, 0, 0}}, ps_sigcheck = 0 '\0',
ps_sigwaited = 0x0, ps_sigwait = 0x0, ps_silock = {lock_data = 0},
ps_siginfo = {cqh_first = 0xc71ce750, cqh_last = 0xc71ce750}, ps_sigstk = {
ss_sp = 0x0, ss_size = 0, ss_flags = 4}, ps_oldmask = {__bits = {524288,
0, 0, 0}}, ps_flags = 0, ps_signo = 0, ps_code = 0, ps_lwp = 0,
ps_sigcode = 0x0, ps_sigmask = {__bits = {0, 0, 0, 0}}, ps_sigignore = {
__bits = {2554888192, 0, 0, 0}}, ps_sigcatch = {__bits = {0, 0, 0, 0}}},
p_nice = 20 '\024', p_comm = "ls\0gs\0-4.0.2\0\0\0\0", p_pgrp = 0xc6ad5240,
p_psstr = 0xbfbffff0, p_psargv = 0, p_psnargv = 4, p_psenv = 8,
p_psnenv = 12, p_xstat = 48879, p_acflag = 0, p_ru = 0xdeadbeef, p_md = {
md_flags = 0, md_syscall = 0xc028eea0 <syscall_plain>, md_astpending = 1}}
# ps ax -kpgid -o pid,paddr,ucomm -M /var/crash/netbsd.3.core | grep c71ce660
1496 c71ce660 ls
Note that "p_vmspace = 0xc71d0378" refers to the same address as the one
uvm_fault() displayed in the dmesg output above.
>How-To-Repeat:
Mount a remote NFS filesystem using the -q flag to mount_nfs, or
via the options field in /etc/fstab. Generate a lot of local hard
disk filesystem activity, such as a cvs update on the netbsd
sources or pkgsrc, locate.updatedb, etc. Then access the nfs
filesystem and execute commands requiring the lstat(2) syscall,
such as ls(1) with -l, and watch the system crash after some
tries. I had to enable DIAGNOSTIC in the kernel, build it with
debug symbols for gdb (-g), and disable ddb to finally be able to
get a coredump saved (the box often froze instead, despite ddb
being disabled).
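For reference, the reproduction steps above can be sketched as shell
commands (the server name, export path, and mount point are
placeholders; any source of heavy local disk I/O will do):

```shell
# Mount the remote filesystem with NQNFS lease extensions enabled (-q):
mount_nfs -q server:/export /mnt/nfs

# In one shell, generate heavy local filesystem I/O, e.g.:
/usr/libexec/locate.updatedb
# (or a cvs update over /usr/src or /usr/pkgsrc)

# Meanwhile, repeatedly force lstat(2) calls against the NQNFS mount:
while true; do ls -l /mnt/nfs >/dev/null; done

# Workaround: remount without -q to disable the NQNFS extensions.
umount /mnt/nfs
mount_nfs server:/export /mnt/nfs
```

With the -q flag omitted, the same load does not trigger the panic.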
>Fix:
Unknown. If this is not fixed for 2.1, the feature should be marked
as experimental or broken in mount_nfs(8).
>Unformatted: