From nobody@FreeBSD.org Sun Jun 18 09:10:22 2006
Return-Path:
Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 1E96D16A479 for ; Sun, 18 Jun 2006 09:10:22 +0000 (UTC) (envelope-from nobody@FreeBSD.org)
Received: from www.freebsd.org (www.freebsd.org [216.136.204.117]) by mx1.FreeBSD.org (Postfix) with ESMTP id C395A43D53 for ; Sun, 18 Jun 2006 09:10:21 +0000 (GMT) (envelope-from nobody@FreeBSD.org)
Received: from www.freebsd.org (localhost [127.0.0.1]) by www.freebsd.org (8.13.1/8.13.1) with ESMTP id k5I9ALLg007721 for ; Sun, 18 Jun 2006 09:10:21 GMT (envelope-from nobody@www.freebsd.org)
Received: (from nobody@localhost) by www.freebsd.org (8.13.1/8.13.1/Submit) id k5I9ALBG007719; Sun, 18 Jun 2006 09:10:21 GMT (envelope-from nobody)
Message-Id: <200606180910.k5I9ALBG007719@www.freebsd.org>
Date: Sun, 18 Jun 2006 09:10:21 GMT
From: Eirik Oeverby
To: freebsd-gnats-submit@FreeBSD.org
Subject: panic: sleeping thread (Sleeping thread ... owns a non-sleepable lock)
X-Send-Pr-Version: www-2.3

>Number: 99094
>Category: kern
>Synopsis: [linprocfs] panic: sleeping thread (Sleeping thread ... owns a non-sleepable lock)
>Confidential: no
>Severity: serious
>Priority: medium
>Responsible: freebsd-bugs
>State: closed
>Quarter:
>Keywords:
>Date-Required:
>Class: sw-bug
>Submitter-Id: current-users
>Arrival-Date: Sun Jun 18 09:20:19 GMT 2006
>Closed-Date: Tue Dec 22 17:10:16 UTC 2009
>Last-Modified: Tue Dec 22 17:10:16 UTC 2009
>Originator: Eirik Oeverby
>Release: 6.1-STABLE
>Organization:
>Environment:
FreeBSD anduin.net 6.1-STABLE FreeBSD 6.1-STABLE #2: Wed May 31 20:13:06 CEST 2006 root@anduin.net:/usr/obj/usr/src/sys/ANDUIN amd64
>Description:
Since 6.1-RELEASE (possibly earlier, not sure) I've been seeing panics on a semi-regular basis, up to 2-3 times per week.
The panic goes:

Sleeping thread (tid 100082, pid 84236) owns a non-sleepable lock
panic: sleeping thread
cpuid = 0
KDB: enter: panic
[thread pid 84235 tid 100474 ]
Stopped at kdb_enter+0x2f: nop

..where pid 84236 is a sh instance (in this particular case). I cannot reproduce this on demand, but I usually only have to wait a few days (it'll happen, at the latest, whenever I think it'll survive another evening and go out for a beer...). Calling boot() or reset from db> just causes the box to hang; I have to power cycle it at this point. I am not using WITNESS or INVARIANTS (I am compiling a kernel with those enabled now).

kernel config:

include GENERIC
ident ANDUIN
makeoptions DEBUG=-g

# Generic
options HZ=1000
options SMP

# IPFW support
options IPFIREWALL
options IPDIVERT
options DUMMYNET

# PF support
#options PFIL_HOOKS
device pf
device pflog
device pfsync

# ALTQ support
options ALTQ
options ALTQ_CBQ
options ALTQ_HFSC
options ALTQ_PRIQ
options ALTQ_RED
options ALTQ_NOPCC

# IPSEC
options IPSEC
options IPSEC_ESP
options IPSEC_FILTERGIF

# Debugging
options KDB
options DDB
options BREAK_TO_DEBUGGER
options KDB_STOP_NMI

dmesg:

Copyright (c) 1992-2006 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
        The Regents of the University of California. All rights reserved.
FreeBSD 6.1-STABLE #2: Wed May 31 20:13:06 CEST 2006 root@anduin.net:/usr/obj/usr/src/sys/ANDUIN
WARNING: debug.mpsafenet forced to 0 as ipsec requires Giant
WARNING: MPSAFE network stack disabled, expect reduced performance.
ACPI APIC Table: Timecounter "i8254" frequency 1193182 Hz quality 0 CPU: AMD Opteron(tm) Processor 242 (1595.14-MHz K8-class CPU) Origin = "AuthenticAMD" Id = 0xf5a Stepping = 10 Features=0x78bfbff AMD Features=0xe0500800 real memory = 2147418112 (2047 MB) avail memory = 2061500416 (1966 MB) FreeBSD/SMP: Multiprocessor System Detected: 2 CPUs cpu0 (BSP): APIC ID: 0 cpu1 (AP): APIC ID: 1 MADT: Forcing active-low polarity and level trigger for SCI ioapic0 irqs 0-23 on motherboard ioapic1 irqs 24-27 on motherboard ioapic2 irqs 28-31 on motherboard kbd1 at kbdmux0 acpi0: on motherboard acpi0: Power Button (fixed) Timecounter "ACPI-fast" frequency 3579545 Hz quality 1000 acpi_timer0: <24-bit timer at 3.579545MHz> port 0x5008-0x500b on acpi0 cpu0: on acpi0 acpi_throttle0: on cpu0 cpu1: on acpi0 pcib0: port 0xcf8-0xcff on acpi0 pci0: on pcib0 pcib1: at device 6.0 on pci0 pci3: on pcib1 ohci0: mem 0xfeafc000-0xfeafcfff irq 19 at device 0.0 on pci3 ohci0: [GIANT-LOCKED] usb0: OHCI version 1.0, legacy support usb0: on ohci0 usb0: USB revision 1.0 uhub0: AMD OHCI root hub, class 9/0, rev 1.00/1.00, addr 1 uhub0: 3 ports with 3 removable, self powered ohci1: mem 0xfeafd000-0xfeafdfff irq 19 at device 0.1 on pci3 ohci1: [GIANT-LOCKED] usb1: OHCI version 1.0, legacy support usb1: on ohci1 usb1: USB revision 1.0 uhub1: AMD OHCI root hub, class 9/0, rev 1.00/1.00, addr 1 uhub1: 3 ports with 3 removable, self powered atapci0: port 0xb400-0xb407,0xb000-0xb003,0xac00-0xac07,0xa800-0xa803,0xa400-0xa40f mem 0xfeafec00-0xfeafefff irq 19 at device 5.0 on pci3 ata2: on atapci0 ata3: on atapci0 ata4: on atapci0 ata5: on atapci0 pci3: at device 6.0 (no driver attached) fxp0: port 0xbc00-0xbc3f mem 0xfeafb000-0xfeafbfff,0xfeaa0000-0xfeabffff irq 18 at device 8.0 on pci3 miibus0: on fxp0 inphy0: on miibus0 inphy0: 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, auto fxp0: Ethernet address: 00:e0:81:2a:11:64 fxp0: [GIANT-LOCKED] isab0: at device 7.0 on pci0 isa0: on isab0 atapci1: port 
0x1f0-0x1f7,0x3f6,0x170-0x177,0x376,0xffa0-0xffaf at device 7.1 on pci0 ata0: on atapci1 ata1: on atapci1 pci0: at device 7.2 (no driver attached) pci0: at device 7.3 (no driver attached) pcib2: at device 10.0 on pci0 pci2: on pcib2 ahd0: port 0x9000-0x90ff,0x9c00-0x9cff mem 0xfc8fc000-0xfc8fdfff irq 24 at device 6.0 on pci2 ahd0: [GIANT-LOCKED] aic7902: Ultra320 Wide Channel A, SCSI Id=7, PCI-X 67-100Mhz, 512 SCBs ahd1: port 0x9800-0x98ff,0x9400-0x94ff mem 0xfc8fe000-0xfc8fffff irq 25 at device 6.1 on pci2 ahd1: [GIANT-LOCKED] aic7902: Ultra320 Wide Channel B, SCSI Id=7, PCI-X 67-100Mhz, 512 SCBs bge0: mem 0xfc8b0000-0xfc8bffff,0xfc8a0000-0xfc8affff irq 24 at device 9.0 on pci2 miibus1: on bge0 brgphy0: on miibus1 brgphy0: 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, 1000baseTX, 1000baseTX-FDX, auto bge0: Ethernet address: 00:e0:81:2a:59:8c bge0: [GIANT-LOCKED] bge1: mem 0xfc8e0000-0xfc8effff,0xfc8d0000-0xfc8dffff irq 25 at device 9.1 on pci2 miibus2: on bge1 brgphy1: on miibus2 brgphy1: 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, 1000baseTX, 1000baseTX-FDX, auto bge1: Ethernet address: 00:e0:81:2a:59:8d bge1: [GIANT-LOCKED] pci0: at device 10.1 (no driver attached) pcib3: at device 11.0 on pci0 pci1: on pcib3 pci0: at device 11.1 (no driver attached) acpi_button0: on acpi0 atkbdc0: port 0x60,0x64 irq 1 on acpi0 atkbd0: flags 0x1 irq 1 on atkbdc0 kbd0 at atkbd0 atkbd0: [GIANT-LOCKED] sio0: <16550A-compatible COM port> port 0x3f8-0x3ff irq 4 flags 0x10 on acpi0 sio0: type 16550A, console sio1: <16550A-compatible COM port> port 0x2f8-0x2ff irq 3 on acpi0 sio1: type 16550A fdc0: port 0x3f0-0x3f5,0x3f7 irq 6 drq 2 on acpi0 fdc0: [FAST] fd0: <1440-KB 3.5" drive> on fdc0 drive 0 ppc0: port 0x378-0x37f irq 7 on acpi0 ppc0: Generic chipset (NIBBLE-only) in COMPATIBLE mode ppbus0: on ppc0 plip0: on ppbus0 lpt0: on ppbus0 lpt0: Interrupt-driven port ppi0: on ppbus0 orm0: at iomem 0xc0000-0xc7fff,0xd1000-0xd57ff on isa0 sc0: at flags 0x100 on isa0 sc0: VGA <16 virtual 
consoles, flags=0x300> vga0: at port 0x3c0-0x3df iomem 0xa0000-0xbffff on isa0 Timecounters tick every 1.000 msec module_register_init: MOD_LOAD (amr_linux, 0xffffffff806b3ba0, 0) error 6 IPsec: Initialized Security Association Processing. ipfw2 (+ipv6) initialized, divert loadable, rule-based forwarding disabled, default to deny, logging disabled Waiting 5 seconds for SCSI devices to settle acd0: CDROM at ata1-slave UDMA33 ad4: 152627MB at ata2-master SATA150 ad6: 152627MB at ata3-master SATA150 sa0 at ahd0 bus 0 target 6 lun 0 sa0: Removable Sequential Access SCSI-2 device sa0: 10.000MB/s transfers (10.000MHz, offset 32) da0 at ahd0 bus 0 target 0 lun 0 da0: Fixed Direct Access SCSI-3 device da0: 40.000MB/s transfers (20.000MHz, offset 63, 16bit), Tagged Queueing Enabled da0: 35003MB (71687340 512 byte sectors: 255H 63S/T 4462C) da1 at ahd0 bus 0 target 9 lun 0 da1: Fixed Direct Access SCSI-3 device da1: 40.000MB/s transfers (20.000MHz, offset 63, 16bit), Tagged Queueing Enabled da1: 35003MB (71687340 512 byte sectors: 255H 63S/T 4462C) ar0: 152625MB status: READY ar0: disk0 READY (master) using ad4 at ata2-master ar0: disk1 READY (mirror) using ad6 at ata3-master SMP: AP CPU #1 Launched! GEOM_MIRROR: Device gm0s1 created (id=3433009533). GEOM_MIRROR: Device gm0s1: provider da0s1 detected. GEOM_MIRROR: Device gm0s1: provider da1s1 detected. GEOM_MIRROR: Device gm0s1: provider da1s1 activated. GEOM_MIRROR: Device gm0s1: provider mirror/gm0s1 launched. GEOM_MIRROR: Device gm0s1: rebuilding provider da0s1. Trying to mount root from ufs:/dev/mirror/gm0s1a WARNING: / was not properly dismounted GEOM_MIRROR: Device gm0s1: rebuilding provider da0s1 finished. GEOM_MIRROR: Device gm0s1: provider da0s1 activated. 
bge0: link state changed to UP
pid 877 (clamd), uid 106 inumber 8751 on /var: filesystem full

Backtrace and ps output:

db> bt
Tracing pid 84235 tid 100474 td 0xffffff006f759000
kdb_enter() at kdb_enter+0x2f
panic() at panic+0x291
propagate_priority() at propagate_priority+0x1bd
turnstile_wait() at turnstile_wait+0x266
_mtx_lock_sleep() at _mtx_lock_sleep+0xa6
vm_map_pmap_enter() at vm_map_pmap_enter+0x316
vm_map_insert() at vm_map_insert+0x207
elf64_map_insert() at elf64_map_insert+0x209
elf64_load_section() at elf64_load_section+0xbe
elf64_load_file() at elf64_load_file+0x324
exec_elf64_imgact() at exec_elf64_imgact+0x759
kern_execve() at kern_execve+0x51c
execve() at execve+0x5d
syscall() at syscall+0x642
Xfast_syscall() at Xfast_syscall+0xa8
--- syscall (59, FreeBSD ELF64, execve), rip = 0x80091db4c, rsp = 0x7fffffffe598, rbp = 0xffffffff ---

db> ps
pid proc uid ppid pgrp flag stat wmesg wchan cmd 84236 ffffff005f51f680 65534 84234 84233 4000000 [SLPQ user map 0xffffff003124b730][SLP] sh 84235 ffffff003bd9c340 65534 84234 84233 4000000 [LOCK vm page queue mutex ffffff005f8c3d00] sh 84234 ffffff006ba02340 65534 84233 84233 0000000 [SLPQ wait 0xffffff006ba02340][SLP] sh 84233 ffffff00768849c0 65534 83853 84233 0004000 [SLPQ piperd 0xffffff00152ca600][SLP] sh 83853 ffffff0049db2340 0 866 866 0000000 [SLPQ piperd 0xffffff0037541000][SLP] perl 83852 ffffff004e118680 10005 83851 83845 0000001 [SLPQ sbwait 0xffffff00651e5ad8][SLP] perl 83851 ffffff003739b000 10005 83849 83845 0004000 [SLPQ accept 0xffffff006c3f7796][SLP] perl 83849 ffffff0022dda340 10005 83845 83845 0004000 [SLPQ wait 0xffffff0022dda340][SLP] sh 83845 ffffff002a29f680 10005 83843 83845 0004000 [SLPQ wait 0xffffff002a29f680][SLP] sh 83843 ffffff0067d129c0 0 1148 1148 0000000 [SLPQ piperd 0xffffff005ca38300][SLP] cron 83814 ffffff00140c4000 80 3341 3341 0000100 [SLPQ select 0xffffffff80957550][SLP] httpd 83813 ffffff005a81a9c0 80 3341 3341 0000100 [SLPQ accept 0xffffff00402362c6][SLP] httpd 83810 
ffffff005b243340 1000 1068 1068 0004000 [SLPQ select 0xffffffff80957550][SLP] imapd 83809 ffffff0010ca2340 80 3341 3341 0000100 [SLPQ accept 0xffffff00402362c6][SLP] httpd 83807 ffffff004d2979c0 80 3341 3341 0000100 [SLPQ accept 0xffffff00402362c6][SLP] httpd 83805 ffffff0046605000 26 83804 1040 0000101 [SLPQ connec 0xffffff0020b9cc66][SLP] exim-4.54-0 83804 ffffff001188a340 0 82788 1040 0000101 [SLPQ select 0xffffffff80957550][SLP] exim-4.54-0 83791 ffffff005e199340 80 3341 3341 0000100 [SLPQ accept 0xffffff00402362c6][SLP] httpd 83761 ffffff003bfd9340 80 3341 3341 0000100 [SLPQ accept 0xffffff00402362c6][SLP] httpd 83719 ffffff0010471680 80 3341 3341 0000100 [SLPQ accept 0xffffff00402362c6][SLP] httpd 83682 ffffff0020ca5340 1026 83681 83682 0004002 [SLPQ ttyin 0xffffff0066f1a410][SLP] tcsh 83681 ffffff0021727680 1026 83678 83678 0000100 [SLPQ select 0xffffffff80957550][SLP] sshd 83678 ffffff0021727000 0 1127 83678 0004100 [SLPQ sbwait 0xffffff002255c870][SLP] sshd 83536 ffffff0020ca5680 1026 1068 1068 0004000 [SLPQ select 0xffffffff80957550][SLP] imapd 83027 ffffff001be36000 26 83026 1040 0000101 [SLPQ connec 0xffffff0026b8a2c6][SLP] exim-4.54-0 83026 ffffff0021727340 0 75209 1040 0000101 [SLPQ select 0xffffffff80957550][SLP] exim-4.54-0 82788 ffffff007296a340 0 1040 1040 0004100 [SLPQ wait 0xffffff007296a340][SLP] exim-4.54-0 82715 ffffff00755b79c0 0 70194 70194 0000000 [SLPQ select 0xffffffff80957550][SLP] perl5.8.7 82695 ffffff003bd9c680 26 82694 82694 0000101 [SLPQ select 0xffffffff80957550][SLP] exim-4.54-0 82694 ffffff004b7fc680 0 1 82694 0004101 [SLPQ select 0xffffffff80957550][SLP] exim-4.54-0 82083 ffffff0024815000 1000 1068 1068 0004000 [SLPQ select 0xffffffff80957550][SLP] imapd 80236 ffffff003ad87000 1000 1068 1068 0004000 [SLPQ select 0xffffffff80957550][SLP] imapd 79938 ffffff001188a000 1000 1068 1068 0004000 [SLPQ select 0xffffffff80957550][SLP] imapd 79480 ffffff006912b340 125 2824 2824 0004100 [SLPQ select 0xffffffff80957550][SLP] pickup 78846 
ffffff005ab59340 0 1040 1040 0004100 [SLPQ piperd 0xffffff0062bf6000][SLP] exim-4.54-0 77667 ffffff006605f000 1011 1068 1068 0004000 [SLPQ select 0xffffffff80957550][SLP] imapd 77111 ffffff0021976000 90 1068 1068 0004000 [SLPQ select 0xffffffff80957550][SLP] imapd 75209 ffffff0067dd89c0 0 1040 1040 0004100 [SLPQ wait 0xffffff0067dd89c0][SLP] exim-4.54-0 73684 ffffff00385ed680 0 70194 70194 0000000 [SLPQ select 0xffffffff80957550][SLP] perl5.8.7 72257 ffffff0064357340 80 696 696 0000100 [SLPQ accept 0xffffff005b61d2c6][SLP] httpd 72256 ffffff004d2f6340 80 696 696 0000100 [SLPQ accept 0xffffff005b61d2c6][SLP] httpd 72160 ffffff0021555680 80 696 696 0000100 [SLPQ accept 0xffffff005b61d2c6][SLP] httpd 63793 ffffff0046605340 80 696 696 0000100 [SLPQ accept 0xffffff005b61d2c6][SLP] httpd 63133 ffffff0046a67000 80 696 696 0000100 [SLPQ accept 0xffffff005b61d2c6][SLP] httpd 11692 ffffff0079047340 80 696 696 0000100 [SLPQ accept 0xffffff005b61d2c6][SLP] httpd 10895 ffffff007ab40340 80 696 696 0000100 [SLPQ accept 0xffffff005b61d2c6][SLP] httpd 89488 ffffff00104719c0 80 696 696 0000100 [SLPQ accept 0xffffff005b61d2c6][SLP] httpd 46958 ffffff004d467340 80 696 696 0000100 [SLPQ accept 0xffffff005b61d2c6][SLP] httpd 42485 ffffff0007ef39c0 80 696 696 0000100 [SLPQ accept 0xffffff005b61d2c6][SLP] httpd 89761 ffffff006605f680 6681 89758 89761 0004002 [SLPQ select 0xffffffff80957550][SLP] ssh 89758 ffffff005a4d2340 6681 89757 89758 0004002 [SLPQ wait 0xffffff005a4d2340][SLP] bash 89757 ffffff0032c4c9c0 6681 89744 89744 0000100 [SLPQ select 0xffffffff80957550][SLP] sshd 89744 ffffff00385ed9c0 0 1127 89744 0004100 [SLPQ sbwait 0xffffff003ea4c608][SLP] sshd 79563 ffffff006aee1000 10002 1 79563 0000000 [SLPQ select 0xffffffff80957550][SLP] ezb 79555 ffffff00599c09c0 10002 79554 79555 0004002 [SLPQ ttyin 0xffffff0072b48410][SLP] bash 79554 ffffff003739b9c0 10002 79460 79460 0000100 [SLPQ select 0xffffffff80957550][SLP] sshd 79460 ffffff007329b680 0 1127 79460 0004100 [SLPQ sbwait 
0xffffff005134e870][SLP] sshd 70194 ffffff0067688340 0 1 70194 0000000 [SLPQ select 0xffffffff80957550][SLP] perl5.8.7 53955 ffffff004782a680 6681 53954 53955 0004002 [SLPQ select 0xffffffff80957550][SLP] irssi 53954 ffffff002a29f340 6681 53953 53954 0004002 [SLPQ wait 0xffffff002a29f340][SLP] bash 53953 ffffff0067e22000 6681 53950 53950 0000100 [SLPQ select 0xffffffff80957550][SLP] sshd 53950 ffffff00399c39c0 0 1127 53950 0004100 [SLPQ sbwait 0xffffff0059f6fd40][SLP] sshd 4298 ffffff003bd9c000 1026 1068 1068 0004000 [SLPQ select 0xffffffff80957550][SLP] imapd 51752 ffffff00215949c0 80 3114 3114 0000100 [SLPQ accept 0xffffff0045b0f2c6][SLP] httpd 51751 ffffff00248159c0 80 3114 3114 0000100 [SLPQ accept 0xffffff0045b0f2c6][SLP] httpd 51750 ffffff0040d0e000 80 3114 3114 0000100 [SLPQ accept 0xffffff0045b0f2c6][SLP] httpd 51732 ffffff0046a67340 80 3114 3114 0000100 [SLPQ accept 0xffffff0045b0f2c6][SLP] httpd 51592 ffffff00755b7680 80 3114 3114 0000100 [SLPQ accept 0xffffff0045b0f2c6][SLP] httpd 51591 ffffff004e1189c0 80 3114 3114 0000100 [SLPQ accept 0xffffff0045b0f2c6][SLP] httpd 51590 ffffff0007ef3340 80 3114 3114 0000100 [SLPQ accept 0xffffff0045b0f2c6][SLP] httpd 51588 ffffff00140c4680 80 3114 3114 0000100 [SLPQ accept 0xffffff0045b0f2c6][SLP] httpd 51587 ffffff0043309680 80 3114 3114 0000100 [SLPQ accept 0xffffff0045b0f2c6][SLP] httpd 3057 ffffff00772c6340 88 3032 3031 000c082 (threaded) mysqld thread 0xffffff006be44980 ksegrp 0xffffff00052dad80 [SLPQ kserel 0xffffff00052dadd8][SLP] thread 0xffffff0034f33980 ksegrp 0xffffff00052dad80 [SLPQ kserel 0xffffff00052dadd8][SLP] thread 0xffffff00135a94c0 ksegrp 0xffffff00052dad80 [SLPQ select 0xffffffff80957550][SLP] thread 0xffffff004cbad980 ksegrp 0xffffff003ed40360 [SLPQ kserel 0xffffff003ed403b8][SLP] thread 0xffffff0041dcc980 ksegrp 0xffffff0012a68360 [SLPQ sbwait 0xffffff0040536608][SLP] thread 0xffffff006be24be0 ksegrp 0xffffff000cce86c0 [SLPQ sigwait 0xffffffffb5266a38][SLP] thread 0xffffff0068b1b720 ksegrp 
0xffffff0012a68e10 [SLPQ ksesigwait 0xffffff00772c6528][SLP] 3032 ffffff0043309340 88 1 3031 0004002 [SLPQ wait 0xffffff0043309340][SLP] sh 63730 ffffff00022e8000 80 2786 2786 0000100 [SLPQ accept 0xffffff0061d94c66][SLP] httpd 58997 ffffff0041c49680 80 2786 2786 0000100 [SLPQ accept 0xffffff0061d94c66][SLP] httpd 39585 ffffff0009ab2340 2003 39584 39585 0004102 [SLPQ pause 0xffffff0009ab23a8][SLP] screen 39584 ffffff0040d0e340 2003 39583 39584 0004002 [SLPQ wait 0xffffff0040d0e340][SLP][SWAP] bash 39583 ffffff001188a680 2003 39580 39580 0000100 [SLPQ select 0xffffffff80957550][SLP] sshd 39580 ffffff004ff21000 0 1127 39580 0004100 [SLPQ sbwait 0xffffff002e0a8ad8][SLP][SWAP] sshd 9988 ffffff0067e229c0 2003 9878 9988 0004003 [SLPQ select 0xffffffff80957550][SLP] irssi 9878 ffffff0020e26680 2003 9876 9878 0004002 [SLPQ wait 0xffffff0020e26680][SLP][SWAP] bash 9876 ffffff0021594340 2003 1 9876 0000100 [SLPQ select 0xffffffff80957550][SLP] screen 8673 ffffff0021594680 194 1 8673 0000000 [SLPQ kqread 0xffffff006767c800][SLP] ircd 5181 ffffff00679e3340 6694 5179 5181 0004002 [SLPQ select 0xffffffff80957550][SLP] irssi 5179 ffffff00622fe680 6694 1 5179 0000100 [SLPQ select 0xffffffff80957550][SLP] screen 3633 ffffff006912b000 0 2547 3633 0004100 [SLPQ select 0xffffffff80957550][SLP] fam 3554 ffffff003be1a340 0 1 3554 0000000 [SLPQ nanslp 0xffffffff8094a560][SLP] cron 3538 ffffff00436919c0 0 1 3538 0000000 [SLPQ select 0xffffffff80957550][SLP] inetd 3518 ffffff005c488680 25 1 3518 0000100 [SLPQ pause 0xffffff005c4886e8][SLP] sendmail 3508 ffffff003bfd9680 0 1 3508 0000100 [SLPQ pause 0xffffff003bfd96e8][SLP] sendmail 3456 ffffff0042cc6000 0 1 3456 0000100 [SLPQ select 0xffffffff80957550][SLP] sshd 3408 ffffff0049436680 0 1 3408 0000000 [SLPQ nanslp 0xffffffff8094a560][SLP] cron 3398 ffffff005c488000 0 1 3398 0000000 [SLPQ select 0xffffffff80957550][SLP] inetd 3375 ffffff0042cc6340 25 1 3375 0000100 [SLPQ pause 0xffffff0042cc63a8][SLP] sendmail 3364 ffffff007aa919c0 0 1 3364 
0000100 [SLPQ select 0xffffffff80957550][SLP] sendmail 3353 ffffff00448c9680 0 1 3353 0000100 [SLPQ select 0xffffffff80957550][SLP] sshd 3341 ffffff0064357680 0 1 3341 0000000 [SLPQ select 0xffffffff80957550][SLP] httpd 3296 ffffff005b243680 0 1 3296 0000000 [SLPQ nanslp 0xffffffff8094a560][SLP] cron 3282 ffffff0046a67680 0 1 3282 0000000 [SLPQ select 0xffffffff80957550][SLP] inetd 3247 ffffff005ab599c0 25 1 3247 0000100 [SLPQ pause 0xffffff005ab59a28][SLP] sendmail 3244 ffffff004a7319c0 0 1 3244 0000000 [SLPQ nanslp 0xffffffff8094a560][SLP] cron 3231 ffffff004782a000 0 1 3231 0000000 [SLPQ select 0xffffffff80957550][SLP] inetd 3228 ffffff0063ad6000 0 1 3228 0000100 [SLPQ select 0xffffffff80957550][SLP] sendmail 3209 ffffff00448c99c0 25 1 3209 0000100 [SLPQ pause 0xffffff00448c9a28][SLP] sendmail 3159 ffffff0045a8e340 0 1 3159 0000100 [SLPQ select 0xffffffff80957550][SLP] sshd 3156 ffffff0049eff9c0 88 3067 3062 000c080 (threaded) mysqld thread 0xffffff003bc7b4c0 ksegrp 0xffffff00796b5360 [SLPQ kserel 0xffffff00796b53b8][SLP] thread 0xffffff003bc7b720 ksegrp 0xffffff00796b5360 [SLPQ kserel 0xffffff00796b53b8][SLP] thread 0xffffff003bc7bbe0 ksegrp 0xffffff00796b5360 [SLPQ select 0xffffffff80957550][SLP] thread 0xffffff0064858720 ksegrp 0xffffff0044708d80 [SLPQ sigwait 0xffffffffb4fe6a38][SLP] thread 0xffffff00583ed260 ksegrp 0xffffff0044708e10 [SLPQ ksesigwait 0xffffff0049effba8][SLP] 3136 ffffff006633c000 0 1 3136 0000100 [SLPQ select 0xffffffff80957550][SLP] sshd 3114 ffffff00418cb340 0 1 3114 0000000 [SLPQ select 0xffffffff80957550][SLP] httpd 3096 ffffff004782a340 0 1 3096 0000000 [SLPQ nanslp 0xffffffff8094a560][SLP] cron 3083 ffffff00416a9340 0 1 3083 0000000 [SLPQ select 0xffffffff80957550][SLP] inetd 3067 ffffff00643579c0 88 1 3062 0004000 [SLPQ wait 0xffffff00643579c0][SLP][SWAP] sh 3054 ffffff0063214000 25 1 3054 0000100 [SLPQ pause 0xffffff0063214068][SLP] sendmail 3037 ffffff0043691340 0 1 3037 0000100 [SLPQ select 0xffffffff80957550][SLP] sendmail 3014 
ffffff006633c9c0 80 2786 2786 0000100 [SLPQ accept 0xffffff0061d94c66][SLP] httpd 3013 ffffff00448c9000 80 2786 2786 0000100 [SLPQ accept 0xffffff0061d94c66][SLP] httpd 3012 ffffff0063214340 80 2786 2786 0000100 [SLPQ accept 0xffffff0061d94c66][SLP] httpd 3011 ffffff0049db2000 80 2786 2786 0000100 [SLPQ accept 0xffffff0061d94c66][SLP] httpd 3010 ffffff0049eff680 80 2786 2786 0000100 [SLPQ accept 0xffffff0061d94c66][SLP] httpd 2997 ffffff005f2db9c0 1001 1 2997 0000000 [SLPQ accept 0xffffff0061d942c6][SLP][SWAP] svnserve 2973 ffffff006605f340 80 2939 2938 0004000 [SLPQ accept 0xffffff0061d9405e][SLP][SWAP] perl 2971 ffffff006633c680 80 2939 2938 0004000 [SLPQ accept 0xffffff00627ae9fe][SLP][SWAP] perl 2966 ffffff0065966340 80 2939 2938 0004000 [SLPQ accept 0xffffff0051606c66][SLP][SWAP] perl 2950 ffffff004b71d340 80 2939 2938 0004000 [SLPQ accept 0xffffff005134e2c6][SLP][SWAP] perl 2939 ffffff0049eff000 80 1 2938 0000100 [SLPQ kqread 0xffffff0066033800][SLP] lighttpd 2840 ffffff0063ad6340 0 1 2840 0000100 [SLPQ select 0xffffffff80957550][SLP] sshd 2829 ffffff005eca4340 125 2824 2824 0004100 [SLPQ select 0xffffffff80957550][SLP] qmgr 2824 ffffff004a731340 0 1 2824 0004100 [SLPQ select 0xffffffff80957550][SLP] master 2786 ffffff004a731680 0 1 2786 0000000 [SLPQ nanslp 0xffffffff8094a560][SLP] httpd 2771 ffffff006605f9c0 88 2706 2522 000c080 (threaded) mysqld thread 0xffffff0030e10000 ksegrp 0xffffff005af1a990 [SLPQ kserel 0xffffff005af1a9e8][SLP] thread 0xffffff0072518000 ksegrp 0xffffff005af1a990 [SLPQ select 0xffffffff80957550][SLP] thread 0xffffff002b8544c0 ksegrp 0xffffff005af1a990 [SLPQ kserel 0xffffff005af1a9e8][SLP] thread 0xffffff0042f6f260 ksegrp 0xffffff00423ca1b0 [SLPQ sigwait 0xffffffffb50aea38][SLP] thread 0xffffff0064619980 ksegrp 0xffffff0065b27120 [SLPQ ksesigwait 0xffffff006605fba8][SLP] 2710 ffffff0065966000 0 1 2710 0000000 [SLPQ select 0xffffffff80957550][SLP] syslogd 2706 ffffff006633c340 88 1 2522 0004000 [SLPQ wait 0xffffff006633c340][SLP][SWAP] 
sh 2674 ffffff004b71d000 0 1 2674 0004002 [SLPQ ttyin 0xffffff0078568810][SLP][SWAP] getty 2673 ffffff004a0909c0 0 1 2673 0004002 [SLPQ ttyin 0xffffff007ad24410][SLP][SWAP] getty 2672 ffffff0049436000 0 1 2672 0004002 [SLPQ ttyin 0xffffff007ad24c10][SLP][SWAP] getty 2671 ffffff0064357000 0 1 2671 0004002 [SLPQ ttyin 0xffffff00796d5c10][SLP][SWAP] getty 2670 ffffff0049436340 0 1 2670 0004002 [SLPQ ttyin 0xffffff007ac83010][SLP][SWAP] getty 2669 ffffff00638c99c0 0 1 2669 0004002 [SLPQ ttyin 0xffffff007ad46410][SLP][SWAP] getty 2668 ffffff004b71d9c0 0 1 2668 0004002 [SLPQ ttyin 0xffffff00796d5010][SLP][SWAP] getty 2667 ffffff0061eda340 0 1 2667 0004002 [SLPQ ttyin 0xffffff007ad45010][SLP][SWAP] getty 2666 ffffff0063214680 0 1 2666 0004002 [SLPQ ttyin 0xffffff007ad45810][SLP][SWAP] getty 2547 ffffff00638c9340 0 1 2547 0000000 [SLPQ select 0xffffffff80957550][SLP] inetd 2528 ffffff0059455340 88 2455 2336 000c082 (threaded) mysqld thread 0xffffff0064c7a000 ksegrp 0xffffff006207dab0 [SLPQ kserel 0xffffff006207db08][SLP] thread 0xffffff0049e93980 ksegrp 0xffffff006207dab0 [SLPQ kserel 0xffffff006207db08][SLP] thread 0xffffff0058b19be0 ksegrp 0xffffff006207dab0 [SLPQ select 0xffffffff80957550][SLP] thread 0xffffff0065665be0 ksegrp 0xffffff0065b271b0 [SLPQ sigwait 0xffffffffb4fd2a38][SLP] thread 0xffffff0064858be0 ksegrp 0xffffff0065b27240 [SLPQ ksesigwait 0xffffff0059455528][SLP] 2500 ffffff0063035680 0 1 50 0000002 [SLPQ select 0xffffffff80957550][SLP] rxstack 2467 ffffff005a459340 0 1 2467 0000000 [SLPQ select 0xffffffff80957550][SLP] syslogd 2455 ffffff00659669c0 88 1 2336 0004002 [SLPQ wait 0xffffff00659669c0][SLP][SWAP] sh 2449 ffffff004a090340 0 1 2449 0000000 [SLPQ nanslp 0xffffffff8094a560][SLP] cron 2438 ffffff0063035000 0 1 2438 0000000 [SLPQ select 0xffffffff80957550][SLP] inetd 2427 ffffff00630359c0 25 1 2427 0000100 [SLPQ pause 0xffffff0063035a28][SLP] sendmail 2420 ffffff004a090000 0 1 2420 0000100 [SLPQ select 0xffffffff80957550][SLP] sendmail 2360 
ffffff00638c9680 0 1 2360 0000000 [SLPQ nanslp 0xffffffff8094a560][SLP] cron 2347 ffffff0063035340 0 1 2347 0000000 [SLPQ select 0xffffffff80957550][SLP] inetd 2313 ffffff00638c9000 25 1 2313 0000100 [SLPQ pause 0xffffff00638c9068][SLP] sendmail 2298 ffffff004b71e680 0 1 2298 0000100 [SLPQ select 0xffffffff80957550][SLP] sendmail 2292 ffffff004b7fc000 0 1 2292 0000100 [SLPQ select 0xffffffff80957550][SLP] sshd 2257 ffffff005f2db340 0 1 2257 0000100 [SLPQ select 0xffffffff80957550][SLP] sshd 2026 ffffff0049eff340 0 1 2026 0000000 [SLPQ select 0xffffffff80957550][SLP] syslogd 2018 ffffff004b71e9c0 0 1 2018 0000000 [SLPQ select 0xffffffff80957550][SLP] syslogd 1796 ffffff005a81a680 0 1 1796 0000000 [SLPQ select 0xffffffff80957550][SLP] syslogd 1695 ffffff005f7be9c0 0 1 1695 0000000 [SLPQ select 0xffffffff80957550][SLP] syslogd 1650 ffffff006225f9c0 0 1 1650 0000000 [SLPQ select 0xffffffff80957550][SLP] syslogd 1204 ffffff004b71e000 0 1201 1201 0000000 [SLPQ select 0xffffffff80957550][SLP] fsavd 1201 ffffff005a4599c0 0 1 1201 0000000 [SLPQ select 0xffffffff80957550][SLP] fsavd 1148 ffffff007ab409c0 0 1 1148 0000000 [SLPQ nanslp 0xffffffff8094a560][SLP] cron 1137 ffffff005e8e5000 389 1 1137 0008180 (threaded) slapd thread 0xffffff00662fc720 ksegrp 0xffffff007ba045a0 [SLPQ kserel 0xffffff007ba045f8][SLP] thread 0xffffff0048c53260 ksegrp 0xffffff007ba045a0 [SLPQ kserel 0xffffff007ba045f8][SLP] thread 0xffffff003e50a000 ksegrp 0xffffff007ba045a0 [SLPQ select 0xffffffff80957550][SLP] thread 0xffffff004c4614c0 ksegrp 0xffffff005a8db870 [SLPQ ksesigwait 0xffffff005e8e51e8][SLP] 1127 ffffff005f7be340 0 1 1127 0000100 [SLPQ select 0xffffffff80957550][SLP] sshd 1110 ffffff005b484680 70 1109 1106 0000000 [SLPQ select 0xffffffff80957550][SLP] postgres 1109 ffffff005a4d2680 70 1106 1106 0000000 [SLPQ select 0xffffffff80957550][SLP] postgres 1108 ffffff007aa91680 70 1106 1106 0000000 [SLPQ select 0xffffffff80957550][SLP] postgres 1106 ffffff0061eda9c0 70 1 1106 0000000 [SLPQ select 
0xffffffff80957550][SLP] postgres 1094 ffffff005b2439c0 0 1035 1034 0000002 [SLPQ select 0xffffffff80957550][SLP] authdaemond 1093 ffffff005ab59000 0 1035 1034 0000002 [SLPQ select 0xffffffff80957550][SLP] authdaemond 1092 ffffff007ab40680 0 1035 1034 0000002 [SLPQ select 0xffffffff80957550][SLP] authdaemond 1091 ffffff005b405340 0 1035 1034 0000002 [SLPQ select 0xffffffff80957550][SLP] authdaemond 1090 ffffff007aa91340 0 1035 1034 0000002 [SLPQ select 0xffffffff80957550][SLP] authdaemond 1079 ffffff005e8e5680 0 1078 1079 0004002 [SLPQ select 0xffffffff80957550][SLP] couriertcpd 1078 ffffff005a4d2000 0 1 1078 0000003 [SLPQ piperd 0xffffff005dbef000][SLP] courierlogger 1068 ffffff005b405680 0 1067 1068 0004002 [SLPQ select 0xffffffff80957550][SLP] couriertcpd 1067 ffffff005ab59680 0 1 1067 0000003 [SLPQ piperd 0xffffff0061bb9c00][SLP] courierlogger 1053 ffffff005ed38340 0 1052 1053 0004002 [SLPQ select 0xffffffff80957550][SLP] couriertcpd 1052 ffffff005eca4680 0 1 1052 0000003 [SLPQ piperd 0xffffff005e0d0300][SLP] courierlogger 1040 ffffff005a4d29c0 26 1 1040 0000100 [SLPQ select 0xffffffff80957550][SLP] exim-4.54-0 1035 ffffff005f51f9c0 0 1034 1034 0004002 [SLPQ select 0xffffffff80957550][SLP] authdaemond 1034 ffffff005f51f340 0 1 1034 0000003 [SLPQ piperd 0xffffff005c0e5000][SLP] courierlogger 1027 ffffff005b405000 106 1 1027 0000100 [SLPQ pause 0xffffff005b405068][SLP] freshclam 866 ffffff005a81a340 0 1 866 0004000 [SLPQ select 0xffffffff80957550][SLP] perl 851 ffffff005b484340 0 1 850 0000000 [SLPQ select 0xffffffff80957550][SLP] snmpd 849 ffffff005b484000 88 821 820 000c082 (threaded) mysqld thread 0xffffff0032bc0be0 ksegrp 0xffffff005af1a360 [SLPQ kserel 0xffffff005af1a3b8][SLP] thread 0xffffff005e7e6980 ksegrp 0xffffff005af1a360 [SLPQ select 0xffffffff80957550][SLP] thread 0xffffff006be444c0 ksegrp 0xffffff005af1a360 [SLPQ kserel 0xffffff005af1a3b8][SLP] thread 0xffffff00588ee4c0 ksegrp 0xffffff005a8db900 [SLPQ sigwait 0xffffffffb4e2ba38][SLP] thread 
0xffffff0058af2be0 ksegrp 0xffffff005a8db990 [SLPQ ksesigwait 0xffffff005b4841e8][SLP] 821 ffffff005e8e59c0 88 1 820 0004002 [SLPQ wait 0xffffff005e8e59c0][SLP][SWAP] sh 803 ffffff005ed38000 522 1 803 0000000 [SLPQ select 0xffffffff80957550][SLP] python2.4 802 ffffff00622fe9c0 522 774 802 0004000 [SLPQ select 0xffffffff80957550][SLP] ssl_esock 774 ffffff005b243000 522 1 764 000c082 (threaded) beam thread 0xffffff004eec2be0 ksegrp 0xffffff005af1a5a0 [SLPQ kserel 0xffffff005af1a5f8][SLP] thread 0xffffff00135a9980 ksegrp 0xffffff005af1a5a0 [SLPQ select 0xffffffff80957550][SLP] thread 0xffffff0061aed720 ksegrp 0xffffff005af1a5a0 [SLPQ kserel 0xffffff005af1a5f8][SLP] thread 0xffffff00588ee980 ksegrp 0xffffff00796b5c60 [SLPQ kserel 0xffffff00796b5cb8][SLP] thread 0xffffff0058af2980 ksegrp 0xffffff00796b5cf0 [SLPQ ksesigwait 0xffffff005b2431e8][SLP] 750 ffffff005a459000 522 1 749 0000000 [SLPQ select 0xffffffff80957550][SLP] epmd 719 ffffff0061eda000 0 1 719 0000000 [SLPQ select 0xffffffff80957550][SLP] ntpd 696 ffffff005a459680 0 1 696 0000000 [SLPQ select 0xffffffff80957550][SLP] httpd 635 ffffff005eca49c0 0 631 631 0000000 [SLPQ - 0xffffff005b73b200][SLP][SWAP] nfsd 634 ffffff006225f680 0 631 631 0000000 [SLPQ - 0xffffff005b73b400][SLP][SWAP] nfsd 633 ffffff005f7be000 0 631 631 0000000 [SLPQ - 0xffffff005e9c7a00][SLP][SWAP] nfsd 632 ffffff005ed389c0 0 631 631 0000000 [SLPQ - 0xffffff005f2a6400][SLP] nfsd 631 ffffff005e8e5340 0 1 631 0000000 [SLPQ select 0xffffffff80957550][SLP] nfsd 629 ffffff005f7be680 0 1 629 0000000 [SLPQ select 0xffffffff80957550][SLP] mountd 604 ffffff0061eda680 0 1 604 0000000 [SLPQ select 0xffffffff80957550][SLP] rpcbind 544 ffffff00622fe340 53 1 544 0000100 [SLPQ select 0xffffffff80957550][SLP] named 465 ffffff007aa91000 0 1 465 0000000 [SLPQ select 0xffffffff80957550][SLP] syslogd 425 ffffff005eca4000 0 1 425 0000000 [SLPQ select 0xffffffff80957550][SLP] devd 244 ffffff005f2db680 0 0 0 0000204 [SLPQ mdwait 0xffffff005f40e800][SLP] md0 226 
ffffff006225f340 0 1 226 0000000 [SLPQ pause 0xffffff006225f3a8][SLP][SWAP] adjkerntz 49 ffffff00796d4000 0 0 0 0000204 [SLPQ m:w1 0xffffff0000f7e000][SLP] g_mirror gm0s1 48 ffffff00796d4340 0 0 0 0000204 [SLPQ - 0xffffffffb2950be4][SLP] schedcpu 47 ffffff00796d4680 0 0 0 0000204 [SLPQ - 0xffffffff8096d558][SLP] nfsiod 3 46 ffffff00796d49c0 0 0 0 0000204 [SLPQ - 0xffffffff8096d550][SLP] nfsiod 2 45 ffffff007acba9c0 0 0 0 0000204 [SLPQ - 0xffffffff8096d548][SLP] nfsiod 1 44 ffffff007a87c000 0 0 0 0000204 [SLPQ - 0xffffffff8096d540][SLP] nfsiod 0 43 ffffff007a87c340 0 0 0 0000204 [SLPQ sdflush 0xffffffff80974060][SLP] softdepflush 42 ffffff007a87c680 0 0 0 0000204 [SLPQ syncer 0xffffffff8094a140][SLP] syncer 41 ffffff007a87c9c0 0 0 0 0000204 [SLPQ vlruwt 0xffffff007a87c9c0][SLP] vnlru 40 ffffff007aaff000 0 0 0 0000204 [SLPQ psleep 0xffffffff80957e18][SLP] bufdaemon 39 ffffff007aaff340 0 0 0 000020c [SLPQ pgzero 0xffffffff809759e0][SLP] pagezero 38 ffffff007aaff680 0 0 0 0000204 [SLPQ psleep 0xffffffff809750ac][SLP] vmdaemon 37 ffffff007aaff9c0 0 0 0 0000204 [SLPQ psleep 0xffffffff8097505c][SLP] pagedaemon 36 ffffff007ab40000 0 0 0 0000204 [IWAIT] irq7: ppc0 35 ffffff007ba01680 0 0 0 0000204 [SLPQ - 0xffffff007a82f848][SLP] fdc0 34 ffffff007ba019c0 0 0 0 0000204 [IWAIT] swi0: sio 33 ffffff007aa77000 0 0 0 0000204 [IWAIT] irq1: atkbd0 32 ffffff007aa77340 0 0 0 0000204 [SLPQ idle 0xffffffff86aa5000][SLP] aic_recovery1 31 ffffff007aa77680 0 0 0 0000204 [IWAIT] irq25: bge1 ahd1 30 ffffff007aa779c0 0 0 0 0000204 [SLPQ idle 0xffffffff86aa1000][SLP] aic_recovery0 29 ffffff007acba000 0 0 0 0000204 [IWAIT] irq24: bge0 ahd0 28 ffffff007acba340 0 0 0 0000204 [IWAIT] irq15: ata1 27 ffffff007acba680 0 0 0 0000204 [IWAIT] irq14: ata0 26 ffffff007ba2a680 0 0 0 0000204 [IWAIT] irq18: fxp0 25 ffffff007ba2a9c0 0 0 0 0000204 [SLPQ usbevt 0xffffffff86a9d420][SLP] usb1 24 ffffff007ba5d000 0 0 0 0000204 [SLPQ usbtsk 0xffffffff80945310][SLP] usbtask 23 ffffff007ba5d340 0 0 0 0000204 [SLPQ 
usbevt 0xffffffff86a9b420][SLP] usb0 22 ffffff007ba5d680 0 0 0 0000204 [IWAIT] irq19: ohci0 ohci+ 21 ffffff007ba5d9c0 0 0 0 0000204 [IWAIT] irq9: acpi0 20 ffffff007ba01000 0 0 0 0000204 [IWAIT] swi6: + 9 ffffff007ba01340 0 0 0 0000204 [SLPQ - 0xffffff0000ceb500][SLP] thread taskq 19 ffffff007ba039c0 0 0 0 0000204 [IWAIT] swi5: + 8 ffffff007ba06000 0 0 0 0000204 [SLPQ - 0xffffff0000ceb900][SLP] acpi_task2 7 ffffff007ba06340 0 0 0 0000204 [SLPQ - 0xffffff0000ceb900][SLP] acpi_task1 6 ffffff007ba06680 0 0 0 0000204 [SLPQ - 0xffffff0000ceb900][SLP] acpi_task0 18 ffffff007ba069c0 0 0 0 0000204 [IWAIT] swi2: cambio 5 ffffff007ba2a000 0 0 0 0000204 [SLPQ - 0xffffff0000cebc00][SLP] kqueue taskq 17 ffffff007ba2a340 0 0 0 0000204 [IWAIT] swi6: task queue 16 ffffff007ba1a340 0 0 0 0000204 [SLPQ - 0xffffffff80942ee0][SLP] yarrow 4 ffffff007ba1a680 0 0 0 0000204 [SLPQ - 0xffffffff80945c08][SLP] g_down 3 ffffff007ba1a9c0 0 0 0 0000204 [SLPQ - 0xffffffff80945c00][SLP] g_up 2 ffffff007ba03000 0 0 0 0000204 [SLPQ - 0xffffffff80945bf0][SLP] g_event 15 ffffff007ba03340 0 0 0 0000204 [IWAIT] swi3: vm 14 ffffff007ba03680 0 0 0 000020c [IWAIT] swi4: clock sio 13 ffffff007ba33000 0 0 0 0000204 [IWAIT] swi1: net 12 ffffff007ba33340 0 0 0 000020c [Can run] idle: cpu0 11 ffffff007ba33680 0 0 0 000020c [CPU 1] idle: cpu1 1 ffffff007ba339c0 0 0 1 0004200 [SLPQ wait 0xffffff007ba339c0][SLP] init 10 ffffff007ba1a000 0 0 0 0000204 [SLPQ ktrace 0xffffffff80946d00][SLP] ktrace 0 ffffffff80945d60 0 0 0 0000200 [IWAIT] swapper db> >How-To-Repeat: Cannot be triggered. >Fix: No known fix or workaround. >Release-Note: >Audit-Trail: From: John Baldwin To: bug-followup@freebsd.org, ltning-freebsd@anduin.net Cc: Subject: Re: kern/99094: panic: sleeping thread (Sleeping thread ... owns a non-sleepable lock) Date: Thu, 29 Jun 2006 07:26:17 -0400 You need to get a stack trace of the thread mentioned in the message that actually misbehaved. 
It actually should have been printed on the console when it panic'd since you have DDB in the kernel.

-- John Baldwin

From: Eirik Øverby
To: John Baldwin
Cc: bug-followup@freebsd.org
Subject: Re: kern/99094: panic: sleeping thread (Sleeping thread ... owns a non-sleepable lock)
Date: Fri, 30 Jun 2006 16:56:14 +0200

Hm, I thought I had that in my report?

I have to find a way to automate this. I've just moved the installation to a newly partitioned array, to make sure I have room for crash dumps, and I have the following in rc.conf:

dumpdev="AUTO"
dumpdir="/usr/crash"

From my reading, that should be enough.

In addition I added KDB_UNATTENDED to the kernel config, as I cannot risk that the box is down for hours before I have a chance to get to the debugger console every time. The question is: Will it actually do an automatic dump before rebooting, or will a dump *always* require manual intervention? And will a dump contain all necessary information?

Thanks,
/Eirik

On Jun 29, 2006, at 1:26 PM, John Baldwin wrote:
> You need to get a stack trace of the thread mentioned in the message
> that actually misbehaved. It actually should have been printed on
> the console when it panic'd since you have DDB in the kernel.
>
> --
> John Baldwin

From: John Baldwin
To: Eirik Øverby
Cc: bug-followup@freebsd.org
Subject: Re: kern/99094: panic: sleeping thread (Sleeping thread ... owns a non-sleepable lock)
Date: Fri, 30 Jun 2006 11:06:47 -0400

On Friday 30 June 2006 10:56, Eirik Øverby wrote:
> Hm, I thought I had that in my report?

From the messages:

Sleeping thread (tid 100082, pid 84236) owns a non-sleepable lock
panic: sleeping thread
cpuid = 0
KDB: enter: panic
[thread pid 84235 tid 100474 ]

This means pid 84236 misbehaved, and pid 84235 found that it had misbehaved. The sole stack trace is of pid 84235:

db> bt
Tracing pid 84235 tid 100474 td 0xffffff006f759000
            ^^^^^ :)

> I have to find a way to automate this. I've just moved the
> installation to a newly partitioned array, to make sure I have room
> for crash dumps, and I have the following in rc.conf:
>
> dumpdev="AUTO"
> dumpdir="/usr/crash"
>
> From my reading, that should be enough.
>
> In addition I added KDB_UNATTENDED to the kernel config, as I cannot
> risk that the box is down for hours before I have a chance to get to
> the debugger console every time. The question is: Will it actually do
> an automatic dump before rebooting, or will a dump *always* require
> manual intervention? And will a dump contain all necessary information?

You can get a stack trace from the dump using kgdb. You'll have to use 'info threads' in gdb to find the thread with the corresponding pid, then use the gdb 'thread' command to switch to that thread (using the gdb thread number, not the tid or pid) and then do a 'bt' or 'where' to get the trace.

-- John Baldwin

From: Eirik Øverby
To: John Baldwin
Cc: bug-followup@freebsd.org
Subject: Re: kern/99094: panic: sleeping thread (Sleeping thread ... owns a non-sleepable lock)
Date: Sat, 1 Jul 2006 14:04:08 +0200

Hi again,

I now have WITNESS and INVARIANTS in the kernel, and today it hung again. It looks somewhat different than before, but I am fairly certain it's the same error. I've tried to include everything asked for, except that I was silly enough to do a "call boot()" before dumping, so it only started dumping after the boot() call failed. Nevertheless, I do have a dump now, if it's of any use.

Below you'll find the panic message, a bt, a ps, and then the output of a "c", which is exactly the same as the first message except it's not chopped off due to terminal size, and finally the panic resulting from the boot() call.
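Spelled out as a kgdb session, the procedure jhb describes looks roughly like this; the kernel and vmcore paths are assumptions based on the ANDUIN kernel config and the rc.conf settings above, and the thread number is hypothetical:

    # kgdb /usr/obj/usr/src/sys/ANDUIN/kernel.debug /usr/crash/vmcore.0
    (kgdb) info threads        <- find the entry whose PID matches the sleeping thread
    (kgdb) thread 42           <- switch by gdb thread number, not by tid or pid
    (kgdb) bt                  <- stack trace of the misbehaving thread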
/Eirik

Expensive timeout(9) function: 0xffffffff802944a0(0xffffffff86ca7000) 0.016565232 s Expensive timeout(9) function: 0xffffffff804f3b00(0) 0.021799841 s Expensive timeout(9) function: 0xffffffff804455e0(0xffffff007ad7b000) 0.047916735 s malloc(M_WAITOK) of "1024", forcing M_NOWAIT with the following non-sleepable locks held: exclusive sleep mutex vm object (standard object) r = 0 (0xffffff0018f3fe00) locked @ /usr/src/sys/compat/linprocfs/lin9 KDB: enter: witness_warn [thread pid 77487 tid 100323 ] Stopped at kdb_enter+0x2f: nop db> db> bt Tracing pid 77487 tid 100323 td 0xffffff00531794c0 kdb_enter() at kdb_enter+0x2f witness_warn() at witness_warn+0x2e0 uma_zalloc_arg() at uma_zalloc_arg+0x1ee malloc() at malloc+0xab vn_fullpath() at vn_fullpath+0x56 linprocfs_doprocmaps() at linprocfs_doprocmaps+0x31e pfs_read() at pfs_read+0x2a7 VOP_READ_APV() at VOP_READ_APV+0x74 vn_read() at vn_read+0x232 dofileread() at dofileread+0x94 kern_readv() at kern_readv+0x60 read() at read+0x4a ia32_syscall() at ia32_syscall+0x167 Xint0x80_syscall() at Xint0x80_syscall+0x5d db> ps pid proc uid ppid pgrp flag stat wmesg wchan cmd 77510 ffffff0027630340 2 77485 77462 0004000 [RUNQ] mv 77509 ffffff0036d2a000 2 77466 77441 0006000 [RUNQ] dd 77502 ffffff00342d0340 0 77457 77457 0004000 [RUNQ] perl 77492 ffffff0072508680 90 77454 77454 0004000 [SLPQ sbwait 0xffffff000d7513a0][SLP] fetchmail 77489 ffffff004d91c9c0 91 77470 77470 0004000 [RUNQ] python2.4 77488 ffffff006a4e0680 91 77469 77469 0004000 [RUNQ] python2.4 77487 ffffff0034673680 0 77451 77451 0004000 [CPU 0] bdc 77485 ffffff0050a969c0 2 77462 77462 0004000 [SLPQ wait 0xffffff0050a969c0][SLP] sh 77480 ffffff004d2b7000 0 77452 77452 0004000 [CPU 1] perl5.8.8 77479 ffffff006a4e0000 0 77453 77453 0004000 [SLPQ vnread 0xffffffff9fe13b08][SLP] antivir 77470 ffffff00506f0680 91 77465 77470 0004000 [SLPQ wait 0xffffff00506f0680][SLP] sh 77469 ffffff0072508340 91 77459 77469 0004000 [SLPQ wait 0xffffff0072508340][SLP] sh 77466
ffffff0061b3c680 2 77441 77441 0004000 [SLPQ wait 0xffffff0061b3c680][SLP] sh 77465 ffffff00215f3000 0 1026 1026 0000000 [SLPQ piperd 0xffffff00209e4000][SLP] cron 77462 ffffff0014911680 2 77456 77462 0004000 [SLPQ wait 0xffffff0014911680][SLP] sh 77459 ffffff0036d2a680 0 1026 1026 0000000 [SLPQ piperd 0xffffff003c30c900][SLP] cron 77457 ffffff0036028000 0 77449 77457 0004000 [SLPQ wait 0xffffff0036028000][SLP] sh 77456 ffffff0037a4a9c0 0 1026 1026 0000000 [SLPQ piperd 0xffffff0036b82c00][SLP] cron 77454 ffffff004a2c2680 90 77448 77454 0004000 [SLPQ wait 0xffffff004a2c2680][SLP] sh 77453 ffffff0038ef2000 0 77446 77453 0004000 [SLPQ wait 0xffffff0038ef2000][SLP] sh 77452 ffffff0037a4a000 0 77447 77452 0004000 [SLPQ wait 0xffffff0037a4a000][SLP] sh 77451 ffffff004d91c680 0 77445 77451 0004000 [SLPQ wait 0xffffff004d91c680][SLP] sh 77449 ffffff00274119c0 0 1026 1026 0000000 [SLPQ piperd 0xffffff0060f41900][SLP] cron 77448 ffffff002f74f000 0 1026 1026 0000000 [SLPQ piperd 0xffffff0011eb3600][SLP] cron 77447 ffffff0049c40000 0 1026 1026 0000000 [SLPQ piperd 0xffffff0001a8f300][SLP] cron 77446 ffffff004b9c29c0 0 1026 1026 0000000 [SLPQ piperd 0xffffff0008eedc00][SLP] cron 77445 ffffff0036028340 0 1026 1026 0000000 [SLPQ piperd 0xffffff002c885300][SLP] cron 77441 ffffff0038ef2340 2 77438 77441 0004000 [SLPQ wait 0xffffff0038ef2340][SLP] sh 77438 ffffff005c5f0680 0 2842 2842 0000000 [SLPQ piperd 0xffffff0008547000][SLP] cron 76840 ffffff0059fc79c0 80 2553 2553 0000100 [RUNQ] httpd 76832 ffffff0035d0b9c0 80 2553 2553 0000100 [RUNQ] httpd 76831 ffffff00346739c0 80 2553 2553 0000100 [SLPQ select 0xffffffff809c6310][SLP] httpd 76830 ffffff0038ef29c0 80 2553 2553 0000100 [SLPQ select 0xffffffff809c6310][SLP] httpd 76829 ffffff005d0ce680 80 2553 2553 0000100 [SLPQ accept 0xffffff004c6b6c66][SLP] httpd 76828 ffffff006053e680 80 2553 2553 0000100 [SLPQ accept 0xffffff004c6b6c66][SLP] httpd 76827 ffffff00739e9340 80 2553 2553 0000100 [SLPQ accept 0xffffff004c6b6c66][SLP] httpd 
76823 ffffff0032f879c0 80 2553 2553 0000100 [SLPQ accept 0xffffff004c6b6c66][SLP] httpd 76821 ffffff0061fd8000 80 2553 2553 0000100 [RUNQ] httpd 76810 ffffff00342d09c0 80 2553 2553 0000100 [SLPQ accept 0xffffff004c6b6c66][SLP] httpd 76798 ffffff005342b340 80 2553 2553 0000100 [SLPQ accept 0xffffff004c6b6c66][SLP] httpd 76782 ffffff0032f87000 1000 979 979 0004000 [SLPQ select 0xffffffff809c6310][SLP] imapd 76304 ffffff005115b340 125 2598 2598 0004100 [SLPQ select 0xffffffff809c6310][SLP] pickup 76163 ffffff0032f87680 0 812 812 0000000 [SLPQ select 0xffffffff809c6310][SLP] perl5.8.8 73976 ffffff005ff7d680 1000 979 979 0004000 [SLPQ select 0xffffffff809c6310][SLP] imapd 72957 ffffff00342d0680 0 812 812 0000000 [SLPQ select 0xffffffff809c6310][SLP] perl5.8.8 72944 ffffff005d0ce000 1001 979 979 0004000 [SLPQ select 0xffffffff809c6310][SLP] imapd 72008 ffffff00360289c0 80 742 742 0000100 [SLPQ accept 0xffffff005d10c9fe][SLP] httpd 72007 ffffff0074950680 80 742 742 0000100 [SLPQ accept 0xffffff005d10c9fe][SLP] httpd 71893 ffffff006c2ce000 6676 71892 71893 0004002 [SLPQ ttyin 0xffffff0060795810][SLP] bash 71892 ffffff002456d680 6676 71890 71890 0000100 [SLPQ select 0xffffffff809c6310][SLP] sshd 71890 ffffff004b361000 0 1021 71890 0004100 [SLPQ sbwait 0xffffff002c38c870][SLP] sshd 71889 ffffff004a2c2340 90 979 979 0004000 [SLPQ select 0xffffffff809c6310][SLP] imapd 66142 ffffff0074950000 1000 66141 66142 0004102 [SLPQ pause 0xffffff0074950068][SLP] screen 66141 ffffff0035ae9680 1000 66140 66141 0004002 [SLPQ wait 0xffffff0035ae9680][SLP] bash 66140 ffffff004db7a340 1000 66135 66135 0000100 [SLPQ select 0xffffffff809c6310][SLP] sshd 66135 ffffff002c60b680 0 1021 66135 0004100 [SLPQ sbwait 0xffffff0049d3d3a0][SLP] sshd 58735 ffffff0051e7d9c0 10002 1 58735 0000000 [SLPQ select 0xffffffff809c6310][SLP] ezb 56610 ffffff0051cfc9c0 80 742 742 0000100 [SLPQ accept 0xffffff005d10c9fe][SLP] httpd 35196 ffffff005fe7f000 80 742 742 0000100 [SLPQ accept 0xffffff005d10c9fe][SLP] httpd 
35192 ffffff006a4e0340 80 742 742 0000100 [SLPQ accept 0xffffff005d10c9fe][SLP] httpd 35191 ffffff0061b3c000 80 742 742 0000100 [SLPQ accept 0xffffff005d10c9fe][SLP] httpd 35190 ffffff00506f09c0 80 742 742 0000100 [SLPQ accept 0xffffff005d10c9fe][SLP] httpd 35189 ffffff004e009000 80 742 742 0000100 [SLPQ accept 0xffffff005d10c9fe][SLP] httpd 35188 ffffff002f74f340 80 742 742 0000100 [SLPQ accept 0xffffff005d10c9fe][SLP] httpd 35185 ffffff002c60b340 80 742 742 0000100 [SLPQ accept 0xffffff005d10c9fe][SLP] httpd 14960 ffffff0035ae99c0 6694 14958 14960 0004002 [SLPQ select 0xffffffff809c6310][SLP] irssi 14958 ffffff0034673000 6694 1 14958 0000100 [SLPQ select 0xffffffff809c6310][SLP] screen 12267 ffffff0074950340 2003 979 979 0004000 [SLPQ select 0xffffffff809c6310][SLP] imapd 11765 ffffff00739e9000 2003 979 979 0004000 [SLPQ select 0xffffffff809c6310][SLP] imapd 11500 ffffff006143e340 80 2556 2556 0000100 [SLPQ accept 0xffffff00451b59fe][SLP] httpd 10483 ffffff0047119340 80 2353 2353 0000100 [SLPQ accept 0xffffff004c4d52c6][SLP] httpd 10481 ffffff003be99680 80 2353 2353 0000100 [SLPQ accept 0xffffff004c4d52c6][SLP] httpd 3244 ffffff0035b71000 1017 979 979 0004000 [SLPQ select 0xffffffff809c6310][SLP] imapd 3231 ffffff0047f369c0 90 979 979 0004000 [SLPQ select 0xffffffff809c6310][SLP] imapd 3228 ffffff005ba74000 1006 979 979 0004000 [SLPQ select 0xffffffff809c6310][SLP] imapd 3227 ffffff0061b3c340 90 979 979 0004000 [SLPQ select 0xffffffff809c6310][SLP] imapd 3100 ffffff006143e000 80 2353 2353 0000100 [SLPQ accept 0xffffff004c4d52c6][SLP] httpd 3029 ffffff003bec4340 1026 979 979 0004000 [SLPQ select 0xffffffff809c6310][SLP] imapd 3007 ffffff004b9c2680 1017 979 979 0004000 [SLPQ select 0xffffffff809c6310][SLP] imapd 2962 ffffff0051cfc000 0 1497 2962 0004100 [SLPQ select 0xffffffff809c6310][SLP] fam 2945 ffffff004db7a9c0 0 2944 2945 0004002 [SLPQ ttyin 0xffffff0066152410][SLP] bash 2944 ffffff0032f87340 0 2941 2944 0004102 [SLPQ wait 0xffffff0032f87340][SLP] su 2941 
ffffff0050f1d000 1000 2939 2941 0004002 [SLPQ wait 0xffffff0050f1d000][SLP] bash 2939 ffffff004b864340 1000 1 2939 0000100 [SLPQ select 0xffffffff809c6310][SLP] screen 2911 ffffff004a2c29c0 0 1 2911 0000000 [SLPQ nanslp 0xffffffff8093c780][SLP] cron 2901 ffffff0050a96000 0 1 2901 0000000 [SLPQ select 0xffffffff809c6310][SLP] inetd 2885 ffffff003ceda680 25 1 2885 0000100 [SLPQ pause 0xffffff003ceda6e8][SLP] sendmail 2879 ffffff0051264680 0 1 2879 0000000 [SLPQ nanslp 0xffffffff8093c780][SLP] cron 2868 ffffff0052df3340 0 1 2868 0000000 [SLPQ select 0xffffffff809c6310][SLP] inetd 2842 ffffff003be99000 0 1 2842 0000000 [SLPQ nanslp 0xffffffff8093c780][SLP] cron 2838 ffffff004d91c340 25 1 2838 0000100 [SLPQ pause 0xffffff004d91c3a8][SLP] sendmail 2834 ffffff0048118340 0 1 2834 0000100 [SLPQ select 0xffffffff809c6310][SLP] sendmail 2824 ffffff0049c40680 0 1 2824 0000000 [SLPQ select 0xffffffff809c6310][SLP] inetd 2818 ffffff003c4f8000 0 1 2818 0000100 [SLPQ select 0xffffffff809c6310][SLP] sshd 2805 ffffff005366a340 0 1 2805 0000100 [SLPQ select 0xffffffff809c6310][SLP] sshd 2801 ffffff0050f31340 25 1 2801 0000100 [SLPQ pause 0xffffff0050f313a8][SLP] sendmail 2797 ffffff003c4f8680 0 1 2797 0000100 [SLPQ select 0xffffffff809c6310][SLP] sendmail 2790 ffffff003be999c0 0 1 2790 0000100 [SLPQ select 0xffffffff809c6310][SLP] sshd 2784 ffffff003ca17340 0 1 2784 0000000 [SLPQ nanslp 0xffffffff8093c780][SLP] cron 2772 ffffff0050f319c0 0 1 2772 0000000 [SLPQ select 0xffffffff809c6310][SLP] inetd 2768 ffffff003ca179c0 88 2708 2707 000c080 (threaded) mysqld thread 0xffffff004809c720 ksegrp 0xffffff004de38240 [SLPQ kserel 0xffffff004de38298][SLP] thread 0xffffff000ac4d000 ksegrp 0xffffff004de38240 [SLPQ kserel 0xffffff004de38298][SLP] thread 0xffffff00380434c0 ksegrp 0xffffff004de38240 [SLPQ select 0xffffffff809c6310][SLP] thread 0xffffff00397e0720 ksegrp 0xffffff004ccb9480 [SLPQ sigwait 0xffffffffb52c5a38][SLP] thread 0xffffff00397e0980 ksegrp 0xffffff003b203360 [SLPQ ksesigwait 
0xffffff003ca17ba8][SLP] 2715 ffffff003ceda340 25 1 2715 0000100 [SLPQ pause 0xffffff003ceda3a8][SLP] sendmail 2711 ffffff003ceda9c0 0 1 2711 0000100 [SLPQ pause 0xffffff003cedaa28][SLP] sendmail 2708 ffffff003bec4680 88 1 2707 0004000 [SLPQ wait 0xffffff003bec4680][SLP] sh 2689 ffffff004cf9b9c0 125 2598 2598 0004100 [SLPQ select 0xffffffff809c6310][SLP] qmgr 2674 ffffff0050817680 0 1 2674 0000100 [SLPQ select 0xffffffff809c6310][SLP] sshd 2664 ffffff005ff7d340 1001 1 2664 0000000 [SLPQ accept 0xffffff0048155c66][SLP] svnserve 2662 ffffff00512649c0 88 2614 2603 000c080 (threaded) mysqld thread 0xffffff0046cbd4c0 ksegrp 0xffffff0050c43870 [SLPQ kserel 0xffffff0050c438c8][SLP] thread 0xffffff0038043be0 ksegrp 0xffffff0050c43870 [SLPQ select 0xffffffff809c6310][SLP] thread 0xffffff004a031720 ksegrp 0xffffff0050c43870 [SLPQ kserel 0xffffff0050c438c8][SLP] thread 0xffffff0061935720 ksegrp 0xffffff003b203900 [SLPQ kserel 0xffffff003b203958][SLP] thread 0xffffff00474c5260 ksegrp 0xffffff004ccb9090 [SLPQ kserel 0xffffff004ccb90e8][SLP] thread 0xffffff003c17bbe0 ksegrp 0xffffff004ccb9120 [SLPQ sbwait 0xffffff003d154d40][SLP] thread 0xffffff003c121260 ksegrp 0xffffff004ccb91b0 [SLPQ sigwait 0xffffffffb52d9a38][SLP] thread 0xffffff004cf7e260 ksegrp 0xffffff004ccb9240 [SLPQ ksesigwait 0xffffff0051264ba8][SLP] 2631 ffffff0047119000 80 2556 2556 0000100 [SLPQ accept 0xffffff00451b59fe][SLP] httpd 2630 ffffff0050f31000 80 2556 2556 0000100 [SLPQ accept 0xffffff00451b59fe][SLP] httpd 2629 ffffff004d2b7680 80 2556 2556 0000100 [SLPQ accept 0xffffff00451b59fe][SLP] httpd 2628 ffffff0047f36000 80 2556 2556 0000100 [SLPQ accept 0xffffff00451b59fe][SLP] httpd 2627 ffffff00507159c0 80 2556 2556 0000100 [SLPQ accept 0xffffff00451b59fe][SLP] httpd 2614 ffffff0051264340 88 1 2603 0004000 [SLPQ wait 0xffffff0051264340][SLP] sh 2607 ffffff004a2c2000 80 1 2606 0000100 [SLPQ kqread 0xffffff004c3d5600][SLP] lighttpd 2598 ffffff005fe7f9c0 0 1 2598 0004100 [SLPQ select 0xffffffff809c6310][SLP] 
master 2556 ffffff0050817340 0 1 2556 0000000 [SLPQ nanslp 0xffffffff8093c780][SLP] httpd 2553 ffffff0050a96340 0 1 2553 0000000 [SLPQ select 0xffffffff809c6310][SLP] httpd 2552 ffffff004d2b79c0 80 2353 2353 0000100 [SLPQ accept 0xffffff004c4d52c6][SLP] httpd 2548 ffffff006253c340 80 2353 2353 0000100 [SLPQ accept 0xffffff004c4d52c6][SLP] httpd 2513 ffffff0051cfc680 88 2417 2330 000c080 (threaded) mysqld thread 0xffffff001455d260 ksegrp 0xffffff005bfb5480 [SLPQ kserel 0xffffff005bfb54d8][SLP] thread 0xffffff00563734c0 ksegrp 0xffffff002d4d4480 [SLPQ kserel 0xffffff002d4d44d8][SLP] thread 0xffffff00160c0720 ksegrp 0xffffff005bfb5480 [SLPQ kserel 0xffffff005bfb54d8][SLP] thread 0xffffff0013e554c0 ksegrp 0xffffff005bfb5480 [SLPQ select 0xffffffff809c6310][SLP] thread 0xffffff004cf7e4c0 ksegrp 0xffffff004de38480 [SLPQ sigwait 0xffffffffb51b2a38][SLP] thread 0xffffff00474c5000 ksegrp 0xffffff004ccb92d0 [SLPQ ksesigwait 0xffffff0051cfc868][SLP] 2507 ffffff0048118000 0 1 2507 0000000 [SLPQ nanslp 0xffffffff8093c780][SLP] cron 2492 ffffff004b9c2340 0 1 2492 0000000 [SLPQ select 0xffffffff809c6310][SLP] inetd 2460 ffffff004b8649c0 194 1 2460 0000000 [SLPQ kqread 0xffffff0050da5a00][SLP] ircd 2457 ffffff0052df3680 0 1 2457 0000000 [SLPQ nanslp 0xffffffff8093c780][SLP] cron 2452 ffffff004db7a680 25 1 2452 0000100 [SLPQ pause 0xffffff004db7a6e8][SLP] sendmail 2437 ffffff0047e57000 0 1 2437 0000000 [SLPQ select 0xffffffff809c6310][SLP] inetd 2431 ffffff0047f36340 0 1 2431 0000100 [SLPQ select 0xffffffff809c6310][SLP] sendmail 2417 ffffff004b361680 88 1 2330 0004000 [SLPQ wait 0xffffff004b361680][SLP] sh 2396 ffffff0049c409c0 25 1 2396 0000100 [SLPQ pause 0xffffff0049c40a28][SLP] sendmail 2365 ffffff004cf9b000 0 1 2365 0000100 [SLPQ select 0xffffffff809c6310][SLP] sendmail 2353 ffffff0062199340 0 1 2353 0000000 [SLPQ select 0xffffffff809c6310][SLP] httpd 2338 ffffff005d0ce9c0 0 1 2338 0000100 [SLPQ select 0xffffffff809c6310][SLP] sshd 2301 ffffff004b9c2000 0 1 2301 0000100 [SLPQ 
select 0xffffffff809c6310][SLP] sshd 2296 ffffff004b864000 0 1 2296 0000000 [SLPQ nanslp 0xffffffff8093c780][SLP] cron 2275 ffffff0050f31680 0 1 2275 0000000 [SLPQ select 0xffffffff809c6310][SLP] inetd 2241 ffffff004cf9b680 25 1 2241 0000100 [SLPQ pause 0xffffff004cf9b6e8][SLP] sendmail 2233 ffffff005ccef340 0 1 2233 0000100 [SLPQ select 0xffffffff809c6310][SLP] sendmail 2208 ffffff0050f1d340 88 2113 1989 000c080 (threaded) mysqld thread 0xffffff00214ec260 ksegrp 0xffffff00625be480 [SLPQ kserel 0xffffff00625be4d8][SLP] thread 0xffffff001afcabe0 ksegrp 0xffffff00625be480 [SLPQ kserel 0xffffff00625be4d8][SLP] thread 0xffffff000f979000 ksegrp 0xffffff00625be480 [SLPQ select 0xffffffff809c6310][SLP] thread 0xffffff0057bd9720 ksegrp 0xffffff004ccb93f0 [SLPQ sigwait 0xffffffffb4fe6a38][SLP] thread 0xffffff005c10b720 ksegrp 0xffffff004de38510 [SLPQ ksesigwait 0xffffff0050f1d528][SLP] 2113 ffffff004d2b7340 88 1 1989 0004000 [SLPQ wait 0xffffff004d2b7340][SLP] sh 2093 ffffff005366a680 0 1 2093 0000100 [SLPQ select 0xffffffff809c6310][SLP] sshd 1991 ffffff005115b680 0 1 1991 0000000 [SLPQ select 0xffffffff809c6310][SLP] syslogd 1921 ffffff004db7a000 0 1 1921 0000000 [SLPQ select 0xffffffff809c6310][SLP] syslogd 1874 ffffff005ba749c0 0 1 1874 0000000 [SLPQ select 0xffffffff809c6310][SLP] syslogd 1821 ffffff006053e9c0 0 1 1821 0000000 [SLPQ select 0xffffffff809c6310][SLP] syslogd 1677 ffffff0050715000 0 1 1677 0000000 [SLPQ select 0xffffffff809c6310][SLP] syslogd 1669 ffffff005d0ce340 0 1 1669 0000000 [SLPQ select 0xffffffff809c6310][SLP] syslogd 1583 ffffff005115b9c0 0 1 1583 0000000 [SLPQ select 0xffffffff809c6310][SLP] syslogd 1527 ffffff0052df39c0 0 1 1527 0004002 [SLPQ ttyin 0xffffff0079a09810][SLP] getty 1526 ffffff0050817000 0 1 1526 0004002 [SLPQ ttyin 0xffffff0079368410][SLP] getty 1525 ffffff0061834680 0 1 1525 0004002 [SLPQ ttyin 0xffffff0079368c10][SLP] getty 1524 ffffff0050f1d9c0 0 1 1524 0004002 [SLPQ ttyin 0xffffff0079e74c10][SLP] getty 1523 ffffff0050f1d680 0 1 
1523 0004002 [SLPQ ttyin 0xffffff0078447010][SLP] getty 1522 ffffff006257d9c0 0 1 1522 0004002 [SLPQ ttyin 0xffffff0079e6a410][SLP] getty 1521 ffffff0050715340 0 1 1521 0004002 [SLPQ ttyin 0xffffff0079e74010][SLP] getty 1520 ffffff0059fc7680 0 1 1520 0004002 [SLPQ ttyin 0xffffff0079a49010][SLP] getty 1519 ffffff005b1b3340 0 1 1519 0004002 [SLPQ ttyin 0xffffff0079a49810][SLP] getty 1497 ffffff0051cfc340 0 1 1497 0000000 [SLPQ select 0xffffffff809c6310][SLP] inetd 1464 ffffff0051264000 0 1 46 0000002 [SLPQ select 0xffffffff809c6310][SLP] rxstack 1064 ffffff007ac01680 0 1063 1063 0000000 [SLPQ select 0xffffffff809c6310][SLP] fsavd 1063 ffffff005b8ca9c0 0 1 1063 0000000 [SLPQ select 0xffffffff809c6310][SLP] fsavd 1026 ffffff0059fc7000 0 1 1026 0000000 [SLPQ nanslp 0xffffffff8093c780][SLP] cron 1021 ffffff0052ea9680 0 1 1021 0000100 [SLPQ select 0xffffffff809c6310][SLP] sshd 1008 ffffff005c5f09c0 70 1007 1004 0000000 [SLPQ select 0xffffffff809c6310][SLP] postgres 1007 ffffff0052ea9000 70 1004 1004 0000000 [SLPQ select 0xffffffff809c6310][SLP] postgres 1006 ffffff0052ea99c0 70 1004 1004 0000000 [SLPQ select 0xffffffff809c6310][SLP] postgres 1004 ffffff005fe7f340 70 1 1004 0000000 [SLPQ select 0xffffffff809c6310][SLP] postgres 989 ffffff005342b9c0 0 988 989 0004002 [SLPQ select 0xffffffff809c6310][SLP] couriertcpd 988 ffffff005342b680 0 1 988 0000003 [SLPQ piperd 0xffffff00613a9600][SLP] courierlogger 979 ffffff005ff7d9c0 0 978 979 0004002 [SLPQ select 0xffffffff809c6310][SLP] couriertcpd 978 ffffff005ccef9c0 0 1 978 0000003 [SLPQ piperd 0xffffff005a1e8900][SLP] courierlogger 967 ffffff005ba74680 0 966 967 0004002 [SLPQ select 0xffffffff809c6310][SLP] couriertcpd 966 ffffff005b1b39c0 0 1 966 0000003 [SLPQ piperd 0xffffff005a1e8600][SLP] courierlogger 936 ffffff005366a9c0 389 1 936 0008181 (threaded) slapd thread 0xffffff003b892260 ksegrp 0xffffff0061c57d80 [SLPQ kserel 0xffffff0061c57dd8][SLP] thread 0xffffff00372904c0 ksegrp 0xffffff0061c57d80 [SLPQ select 
0xffffffff809c6310][SLP] thread 0xffffff0054338720 ksegrp 0xffffff0061c57d80 [SLPQ kserel 0xffffff0061c57dd8][SLP] thread 0xffffff0056a8d720 ksegrp 0xffffff00625be3f0 [SLPQ ksesigwait 0xffffff005366aba8][SLP] 861 ffffff00600fa000 0 1 860 0000000 [SLPQ select 0xffffffff809c6310][SLP] snmpd 817 ffffff0061fd8680 26 1 817 0000100 [SLPQ select 0xffffffff809c6310][SLP] exim-4.62-0 812 ffffff005ba74340 0 1 812 0000000 [SLPQ select 0xffffffff809c6310][SLP] perl5.8.8 809 ffffff005c5f0340 0 793 789 0000002 [SLPQ select 0xffffffff809c6310][SLP] authdaemond 808 ffffff005b1b3000 0 793 789 0000002 [SLPQ select 0xffffffff809c6310][SLP] authdaemond 807 ffffff006053e340 0 793 789 0000002 [SLPQ select 0xffffffff809c6310][SLP] authdaemond 806 ffffff006257d000 0 793 789 0000002 [SLPQ select 0xffffffff809c6310][SLP] authdaemond 805 ffffff006257d340 0 793 789 0000002 [SLPQ select 0xffffffff809c6310][SLP] authdaemond 804 ffffff005ccef000 88 774 773 000c082 (threaded) mysqld thread 0xffffff002e921980 ksegrp 0xffffff007ba04bd0 [SLPQ kserel 0xffffff007ba04c28][SLP] thread 0xffffff006419e260 ksegrp 0xffffff007ba04bd0 [SLPQ kserel 0xffffff007ba04c28][SLP] thread 0xffffff000c50c720 ksegrp 0xffffff007ba04bd0 [SLPQ select 0xffffffff809c6310][SLP] thread 0xffffff0057bd9000 ksegrp 0xffffff005bfb56c0 [SLPQ sigwait 0xffffffffb4fd7a38][SLP] thread 0xffffff005c10bbe0 ksegrp 0xffffff005bfb5750 [SLPQ ksesigwait 0xffffff005ccef1e8][SLP] 793 ffffff0062199680 0 789 789 0004002 [SLPQ select 0xffffffff809c6310][SLP] authdaemond 789 ffffff005fe7f680 0 1 789 0000003 [SLPQ piperd 0xffffff0060f41600][SLP] courierlogger 774 ffffff005c5f0000 88 1 773 0004002 [SLPQ wait 0xffffff005c5f0000][SLP] sh 765 ffffff005ccef680 106 1 765 0000100 [SLPQ pause 0xffffff005ccef6e8][SLP] freshclam 760 ffffff006257d680 106 1 760 0008180 (threaded) clamd thread 0xffffff004a031000 ksegrp 0xffffff007ba18090 [SLPQ kserel 0xffffff007ba180e8][SLP] thread 0xffffff005153a000 ksegrp 0xffffff007ba18090 [SLPQ kserel 0xffffff007ba180e8][SLP] 
thread 0xffffff00292da4c0 ksegrp 0xffffff007ba18090 [SLPQ accept 0xffffff005c77d796][SLP] thread 0xffffff0057bd94c0 ksegrp 0xffffff005bfb55a0 [SLPQ ksesigwait 0xffffff006257d868][SLP] 742 ffffff005b1b3680 0 1 742 0000000 [SLPQ select 0xffffffff809c6310][SLP] httpd 720 ffffff005b8ca680 0 1 720 0000000 [SLPQ select 0xffffffff809c6310][SLP] ntpd 643 ffffff00621999c0 0 638 638 0000000 [SLPQ - 0xffffff005d435400][SLP] nfsd 642 ffffff0061834340 0 638 638 0000000 [SLPQ - 0xffffff005fc5e200][SLP] nfsd 640 ffffff005ff7d000 0 638 638 0000000 [SLPQ - 0xffffff005fc5e400][SLP] nfsd 639 ffffff0061834000 0 638 638 0000000 [SLPQ - 0xffffff005f83f600][SLP] nfsd 638 ffffff0062199000 0 1 638 0000000 [SLPQ select 0xffffffff809c6310][SLP] nfsd 636 ffffff0061fd8340 0 1 636 0000000 [SLPQ select 0xffffffff809c6310][SLP] mountd 633 ffffff00600fa9c0 0 1 633 0000000 [SLPQ select 0xffffffff809c6310][SLP] amd 567 ffffff006253c9c0 0 1 567 0000000 [SLPQ select 0xffffffff809c6310][SLP] rpcbind 532 ffffff007ac019c0 53 1 532 0000100 [SLPQ select 0xffffffff809c6310][SLP] named 465 ffffff00600fa340 0 1 465 0000000 [SLPQ select 0xffffffff809c6310][SLP] syslogd 411 ffffff007ac01340 0 1 411 0000000 [SLPQ select 0xffffffff809c6310][SLP] devd 230 ffffff006253c000 0 0 0 0000204 [SLPQ mdwait 0xffffff00619fc000][SLP] md0 212 ffffff0061fd89c0 0 1 212 0000000 [SLPQ pause 0xffffff0061fd8a28][SLP] adjkerntz 45 ffffff007acf99c0 0 0 0 0000204 [SLPQ m:w1 0xffffff000109ec00][SLP] g_mirror boot0s1 44 ffffff007aafd000 0 0 0 0000204 [SLPQ - 0xffffffffb2adebe4][SLP] schedcpu 43 ffffff007aafd340 0 0 0 0000204 [SLPQ sdflush 0xffffffff809e2e00][SLP] softdepflush 42 ffffff007aafd680 0 0 0 0000204 [SLPQ vlruwt 0xffffff007aafd680][SLP] vnlru 41 ffffff007aafd9c0 0 0 0 0000204 [SLPQ syncer 0xffffffff8093c360][SLP] syncer 40 ffffff007ac00000 0 0 0 0000204 [SLPQ psleep 0xffffffff809c6bd8][SLP] bufdaemon 39 ffffff007ac00340 0 0 0 000020c [SLPQ pgzero 0xffffffff809e4780][SLP] pagezero 38 ffffff007ac00680 0 0 0 0000204 [SLPQ psleep 
0xffffffff809e3e4c][SLP] vmdaemon 37 ffffff007ac009c0 0 0 0 0000204 [SLPQ psleep 0xffffffff809e3dfc][SLP] pagedaemon 36 ffffff007ac01000 0 0 0 0000204 [IWAIT] irq7: ppc0 35 ffffff007b9e1680 0 0 0 0000204 [SLPQ - 0xffffff007ab0e848][SLP] fdc0 34 ffffff007b9e19c0 0 0 0 0000204 [IWAIT] swi0: sio 33 ffffff007ab56000 0 0 0 0000204 [IWAIT] irq1: atkbd0 32 ffffff007ab56340 0 0 0 0000204 [SLPQ idle 0xffffffff86ca3000][SLP] aic_recovery1 31 ffffff007ab56680 0 0 0 0000204 [IWAIT] irq25: bge1 ahd1 30 ffffff007ab569c0 0 0 0 0000204 [SLPQ idle 0xffffffff86c9f000][SLP] aic_recovery0 29 ffffff007acf9000 0 0 0 0000204 [LOCK Giant ffffff000123fb00] irq24: bge0 ahd0 28 ffffff007acf9340 0 0 0 0000204 [IWAIT] irq15: ata1 27 ffffff007acf9680 0 0 0 0000204 [IWAIT] irq14: ata0 26 ffffff007ba0a680 0 0 0 0000204 [IWAIT] irq18: fxp0 25 ffffff007ba0a9c0 0 0 0 0000204 [SLPQ usbevt 0xffffffff86c9b420][SLP] usb1 24 ffffff007ba5d000 0 0 0 0000204 [SLPQ usbtsk 0xffffffff80937510][SLP] usbtask 23 ffffff007ba5d340 0 0 0 0000204 [SLPQ usbevt 0xffffffff86c99420][SLP] usb0 22 ffffff007ba5d680 0 0 0 0000204 [IWAIT] irq19: ohci0 ohci+ 21 ffffff007ba5d9c0 0 0 0 0000204 [IWAIT] irq9: acpi0 9 ffffff007b9e1000 0 0 0 0000204 [SLPQ - 0xffffff0000e85500][SLP] kqueue taskq 8 ffffff007b9e1340 0 0 0 0000204 [SLPQ - 0xffffff0000d71400][SLP] acpi_task2 7 ffffff007ba039c0 0 0 0 0000204 [SLPQ - 0xffffff0000d71400][SLP] acpi_task1 6 ffffff007ba28000 0 0 0 0000204 [SLPQ - 0xffffff0000d71400][SLP] acpi_task0 20 ffffff007ba28340 0 0 0 0000204 [IWAIT] swi6: task queue 19 ffffff007ba28680 0 0 0 0000204 [IWAIT] swi6: + 5 ffffff007ba289c0 0 0 0 0000204 [SLPQ - 0xffffff0000d71900][SLP] thread taskq 18 ffffff007ba0a000 0 0 0 0000204 [IWAIT] swi5: + 17 ffffff007ba0a340 0 0 0 0000204 [IWAIT] swi2: cambio 16 ffffff007ba5a340 0 0 0 0000204 [SLPQ - 0xffffffff809350c0][SLP] yarrow 4 ffffff007ba5a680 0 0 0 0000204 [SLPQ - 0xffffffff80937e08][SLP] g_down 3 ffffff007ba5a9c0 0 0 0 0000204 [SLPQ - 0xffffffff80937e00][SLP] g_up 2 
ffffff007ba03000 0 0 0 0000204 [SLPQ - 0xffffffff80937df0][SLP] g_event 15 ffffff007ba03340 0 0 0 0000204 [IWAIT] swi3: vm 14 ffffff007ba03680 0 0 0 000020c [LOCK Giant ffffff000123fb00] swi4: clock sio 13 ffffff007ba33000 0 0 0 0000204 [IWAIT] swi1: net 12 ffffff007ba33340 0 0 0 000020c [Can run] idle: cpu0 11 ffffff007ba33680 0 0 0 000020c [Can run] idle: cpu1 1 ffffff007ba339c0 0 0 1 0004200 [SLPQ wait 0xffffff007ba339c0][SLP] init 10 ffffff007ba5a000 0 0 0 0000204 [SLPQ ktrace 0xffffffff80938f00][SLP] ktrace 0 ffffffff80937f60 0 0 0 0000200 [IWAIT] swapper 11503 ffffff0061b3c9c0 80 2607 2606 0006000 zomb[INACTIVE] perl 2663 ffffff004b864680 80 2607 2606 0006000 zomb[INACTIVE] perl 2632 ffffff0049c40340 80 2607 2606 0006000 zomb[INACTIVE] perl db> c malloc(M_WAITOK) of "1024", forcing M_NOWAIT with the following non-sleepable locks held: exclusive sleep mutex vm object (standard object) r = 0 (0xffffff0018f3fe00) locked @ /usr/src/sys/compat/linprocfs/linprocfs.c:879 KDB: enter: witness_warn [thread pid 77487 tid 100323 ] Stopped at kdb_enter+0x2f: nop db> call boot() panic: blockable sleep lock (sleep mutex) eventhandler @ /usr/src/sys/kern/subr_eventhandler.c:212 cpuid = 0 KDB: stack backtrace: panic() at panic+0x253 witness_checkorder() at witness_checkorder+0x5c3 _mtx_lock_flags() at _mtx_lock_flags+0x4a eventhandler_find_list() at eventhandler_find_list+0x32 boot() at boot+0x96 db_fncall() at db_fncall+0xb1 db_command_loop() at db_command_loop+0x3b5 db_trap() at db_trap+0x63 kdb_trap() at kdb_trap+0xa9 trap() at trap+0x1b4 calltrap() at calltrap+0x5 --- trap 0x3, rip = 0xffffffff8043d68f, rsp = 0xffffffffb5374390, rbp = 0xffffffffb53743a0 --- kdb_enter() at kdb_enter+0x2f witness_warn() at witness_warn+0x2e0 uma_zalloc_arg() at uma_zalloc_arg+0x1ee malloc() at malloc+0xab vn_fullpath() at vn_fullpath+0x56 linprocfs_doprocmaps() at linprocfs_doprocmaps+0x31e pfs_read() at pfs_read+0x2a7 VOP_READ_APV() at VOP_READ_APV+0x74 vn_read() at vn_read+0x232
dofileread() at dofileread+0x94 kern_readv() at kern_readv+0x60 read() at read+0x4a ia32_syscall() at ia32_syscall+0x167 Xint0x80_syscall() at Xint0x80_syscall+0x5d Uptime: 17h2m19s Dumping 2047 MB (2 chunks) chunk 0: 1MB (156 pages) ... ok

From: John Baldwin
To: Eirik Øverby
Cc: bug-followup@freebsd.org, des@freebsd.org
Subject: Re: kern/99094: panic: sleeping thread (Sleeping thread ... owns a non-sleepable lock)
Date: Wed, 5 Jul 2006 14:25:41 -0400

Well, the problem is in linprocfs. It is trying to do some very expensive things while holding a mutex. Here's the code excerpt:

        if (lobj) {
                vp = lobj->handle;
                VM_OBJECT_LOCK(lobj);
                off = IDX_TO_OFF(lobj->size);
                if (lobj->type == OBJT_VNODE && lobj->handle) {
                        vn_fullpath(td, vp, &name, &freename);
                        VOP_GETATTR(vp, &vat, td->td_ucred, td);
                        ino = vat.va_fileid;
                }
                flags = obj->flags;
                ref_count = obj->ref_count;
                shadow_count = obj->shadow_count;
                VM_OBJECT_UNLOCK(lobj);

The VM_OBJECT_LOCK() is a mutex, and it can't really hold a mutex while calling things like vn_fullpath() and VOP_GETATTR(), as those can block. It probably needs to be reordered to grab copies of the object fields under the object lock, take a ref on the vnode (via vref), then do the vn_fullpath() and VOP_GETATTR() after dropping the vm object lock, and finally do a vrele() to drop the vnode reference. I'm cc'ing des@ as he's the linprocfs maintainer and should be able to help with this further.

-- John Baldwin

From: "Eirik Oeverby"
To: "John Baldwin"
Cc: Eirik Øverby, bug-followup@freebsd.org, des@freebsd.org
Subject: Re: kern/99094: panic: sleeping thread (Sleeping thread ... owns a non-sleepable lock)
Date: Thu, 6 Jul 2006 01:32:24 +0200 (CEST)

Brilliant! Thanks! I have already disabled linprocfs, so I should expect the system to be stable now, then.

PS: This was introduced between some of the RCs for 6.1, as far as I can remember.
/Eirik

State-Changed-From-To: open->feedback
State-Changed-By: kmacy
State-Changed-When: Sun Nov 18 08:40:01 UTC 2007
State-Changed-Why: So this is really a linprocfs bug and you're not using linprocfs any more?
http://www.freebsd.org/cgi/query-pr.cgi?pr=99094

From: Eirik Øverby
To: bug-followup@FreeBSD.org, ltning-freebsd@anduin.net
Cc:
Subject: Re: kern/99094: panic: sleeping thread (Sleeping thread ... owns a non-sleepable lock)
Date: Sun, 18 Nov 2007 11:23:00 +0100

Yes, it seems to have been a linprocfs bug. I no longer require linprocfs on any of the relevant systems, and after disabling it I haven't seen the problem again.

Thanks,
/Eirik

State-Changed-From-To: feedback->open
State-Changed-By: linimon
State-Changed-When: Mon Feb 25 12:30:49 UTC 2008
State-Changed-Why: Feedback received. The problem seems to be in linprocfs.
http://www.freebsd.org/cgi/query-pr.cgi?pr=99094

State-Changed-From-To: open->closed
State-Changed-By: jh
State-Changed-When: Tue Dec 22 17:10:14 UTC 2009
State-Changed-Why: Fixed by r161094.
http://www.freebsd.org/cgi/query-pr.cgi?pr=99094

>Unformatted:
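For reference, the reordering jhb suggests in his followup — snapshot the object fields and take a vnode reference while the VM object lock is held, drop the lock, then do the blocking vn_fullpath()/VOP_GETATTR() calls, and vrele() afterwards — can be sketched like this. This is an illustrative sketch against the 6.x kernel interfaces, with surrounding declarations as in linprocfs_doprocmaps(); it is not the actual r161094 commit:

        if (lobj) {
                struct vnode *vp = NULL;

                VM_OBJECT_LOCK(lobj);
                off = IDX_TO_OFF(lobj->size);
                if (lobj->type == OBJT_VNODE && lobj->handle != NULL) {
                        vp = lobj->handle;
                        vref(vp);       /* hold the vnode across the unlock */
                }
                flags = obj->flags;
                ref_count = obj->ref_count;
                shadow_count = obj->shadow_count;
                VM_OBJECT_UNLOCK(lobj);
                if (vp != NULL) {
                        /* No mutex held any more, so it is safe to block. */
                        vn_fullpath(td, vp, &name, &freename);
                        VOP_GETATTR(vp, &vat, td->td_ucred, td);
                        ino = vat.va_fileid;
                        vrele(vp);      /* drop the reference taken above */
                }
        }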