From: Thomas M. <th...@m3...> - 2013-11-06 20:31:42
On Wednesday, 2013-11-06 at 20:59 +0100, Richard Weinberger wrote:
> On 06.11.2013 20:52, Thomas Meyer wrote:
> > On Wednesday, 2013-11-06 at 13:40 +0100, Richard Weinberger wrote:
> >> On Tue, Nov 5, 2013 at 9:21 PM, Thomas Meyer <th...@m3...> wrote:
> >>> Hi,
> >>>
> >>> I'm running Fedora 20 inside a 3.12 UML kernel, and the "yum upgrade -y"
> >>> command seems to get stuck after a few minutes.
> >>>
> >>> Any ideas what's going on here? How to debug this?
> >>>
> >>> It looks like the process running yum is in state ptrace stopped, but
> >>> doesn't continue.
> >>
> >> Did only yum get stuck, or the whole UML kernel?
> >
> > How to tell? It feels like the whole kernel got stuck.
>
> Login on another shell... :)
Oops, yes, of course! I tried that, and the getty was able to receive my
username, but after pressing enter nothing happened any more; a
timeout message appeared.
I logged in via mconsole, did an emergency sync and a halt command,
and restarted UML. Now I'm receiving funny warnings/bad pages:
systemd-udevd[599]: starting version 208
EXT4-fs (ubda): re-mounted. Opts: (null)
systemd-journald[338]: Received request to flush runtime journal from
PID 1
systemd-journald[338]:
File /var/log/journal/8e4cbfea404512ae70096c6202c9a3bf/system.journal
corrupted or uncleanly shut down, renaming and replacing.
------------[ cut here ]------------
WARNING: CPU: 0 PID: 0 at lib/timerqueue.c:74 timerqueue_del+0x57/0x60()
Modules linked in:
CPU: 0 PID: 0 Comm: swapper Not tainted 3.12.0-00048-gbe408cd #22
6036d968 60370947 00000000 00000000 00000009 60324f52 0000004a 6036d9b0
602972e5 6036da00 60030bc1 6036d9d0 90b153c0 6036da10 603816c0
6037e4e8
00000000 00000001 8dccbcf0 6036da10 60030d15 6036da30 601cf007
603816c0
Call Trace:
6036d9a8: [<602972e5>] dump_stack+0x17/0x19
6036d9b8: [<60030bc1>] warn_slowpath_common+0x71/0x90
6036da08: [<60030d15>] warn_slowpath_null+0x15/0x20
6036da18: [<601cf007>] timerqueue_del+0x57/0x60
6036da38: [<6004dfe6>] __remove_hrtimer+0x46/0xa0
6036da78: [<6004e548>] __hrtimer_start_range_ns+0xd8/0x1e0
6036dad8: [<6004e683>] hrtimer_start+0x13/0x20
6036dae8: [<60067e12>] __tick_nohz_idle_enter.constprop.28+0x2d2/0x360
6036db58: [<60068329>] tick_nohz_irq_exit+0x19/0x20
6036db68: [<60034f45>] irq_exit+0x85/0xa0
6036db78: [<6005dac8>] generic_handle_irq+0x28/0x30
6036db88: [<60014fa8>] do_IRQ+0x28/0x40
6036dba8: [<60016af0>] timer_handler+0x20/0x30
6036dbc8: [<6002834e>] real_alarm_handler+0x3e/0x50
6036dd38: [<60028b53>] os_nsecs+0x13/0x30
6036dd58: [<60028b53>] os_nsecs+0x13/0x30
6036dd78: [<60028a98>] timer_one_shot+0x68/0x80
6036dda8: [<60016acc>] itimer_next_event+0xc/0x10
6036ddb8: [<6006704a>] clockevents_program_event+0x6a/0xf0
6036dde8: [<6006780a>] tick_program_event+0x1a/0x20
6036ddf8: [<6004df9b>] hrtimer_force_reprogram+0x6b/0x70
6036de08: [<6004e03c>] __remove_hrtimer+0x9c/0xa0
6036de28: [<601cef40>] timerqueue_add+0x60/0xb0
6036de48: [<6004e514>] __hrtimer_start_range_ns+0xa4/0x1e0
6036dea8: [<6004e683>] hrtimer_start+0x13/0x20
6036deb8: [<6007c298>] rcu_sched_qs+0x78/0x90
6036dee8: [<60028224>] unblock_signals+0x64/0x80
6036df08: [<60015b0a>] arch_cpu_idle+0x3a/0x50
6036df18: [<6007c3cf>] rcu_idle_enter+0x6f/0xb0
6036df38: [<6005da4e>] cpu_startup_entry+0x8e/0xe0
6036df48: [<602991b9>] schedule_preempt_disabled+0x9/0x10
6036df58: [<60294228>] rest_init+0x68/0x70
6036df68: [<600027f5>] check_bugs+0xe/0x19
6036df78: [<60001635>] start_kernel+0x27f/0x286
6036df80: [<600011c5>] unknown_bootoption+0x0/0x185
6036dfb8: [<6000289a>] start_kernel_proc+0x31/0x35
6036dfd8: [<6001561a>] new_thread_handler+0x7a/0xa0
---[ end trace 41ecadffe5cf650c ]---
BUG: Bad page state in process systemd-journal pfn:2d25c
page:0000000062c97420 count:0 mapcount:1 mapping: (null)
index:0x2
page flags: 0x80008(uptodate|swapbacked)
Modules linked in:
CPU: 0 PID: 338 Comm: systemd-journal Tainted: G W
3.12.0-00048-gbe408cd #22
8dc759b8 60370947 60309985 62c97420 00000000 62c97458 60383ac0 8dc75a00
602972e5 8dc75a30 60082ce0 ffffffffff0a0210 8dc75adc 8dc75a50
62c97420 8dc75a60
60082df0 60309982 62c97420 00080008 00000000 8dc75aa0 60084561
00000000
Call Trace:
8dc759f8: [<602972e5>] dump_stack+0x17/0x19
8dc75a08: [<60082ce0>] bad_page+0xb0/0x100
8dc75a38: [<60082df0>] free_pages_prepare+0xc0/0xd0
8dc75a68: [<60084561>] free_hot_cold_page+0x21/0x130
8dc75aa8: [<60084ace>] free_hot_cold_page_list+0x3e/0x60
8dc75ad8: [<60087958>] release_pages+0x158/0x1a0
8dc75b38: [<60087a62>] pagevec_lru_move_fn+0xc2/0xe0
8dc75b50: [<60087120>] __pagevec_lru_add_fn+0x0/0xc0
8dc75b98: [<60087e91>] lru_add_drain_cpu+0x71/0x80
8dc75bb8: [<60087f6b>] lru_add_drain+0xb/0x10
8dc75bc8: [<600a0136>] exit_mmap+0x46/0x170
8dc75bf0: [<60018a50>] copy_chunk_to_user+0x0/0x30
8dc75c28: [<6002e2a7>] mmput.part.62+0x27/0xc0
8dc75c48: [<6002e359>] mmput+0x19/0x20
8dc75c58: [<6003117a>] exit_mm+0x10a/0x150
8dc75ca8: [<60031de1>] do_exit+0x331/0x4a0
8dc75ce8: [<600330de>] do_group_exit+0x3e/0xd0
8dc75d18: [<6003de7b>] get_signal_to_deliver+0x1bb/0x4d0
8dc75d48: [<600535cb>] wake_up_state+0xb/0x10
8dc75d88: [<60016757>] kern_do_signal+0x57/0x150
8dc75e08: [<60028460>] set_signals+0x30/0x40
8dc75e28: [<6003d175>] force_sig_info+0xb5/0xd0
8dc75e68: [<60016871>] do_signal+0x21/0x30
8dc75e88: [<60017d14>] fatal_sigsegv+0x24/0x30
8dc75ea8: [<6002b2d3>] userspace+0x2c3/0x4d0
8dc75f78: [<600273d7>] save_registers+0x17/0x30
8dc75f88: [<6002df20>] arch_prctl+0x150/0x180
8dc75fd8: [<600156a9>] fork_handler+0x69/0x70
Disabling lock debugging due to kernel taint
BUG: Bad page state in process systemd-journal pfn:2cc8c
page:0000000062c82ea0 count:0 mapcount:1 mapping: (null)
index:0x3
page flags: 0x80008(uptodate|swapbacked)
Modules linked in:
CPU: 0 PID: 338 Comm: systemd-journal Tainted: G B W
3.12.0-00048-gbe408cd #22
8dc759b8 60370947 60309985 62c82ea0 00000000 62c82ed8 60383ac0 8dc75a00
602972e5 8dc75a30 60082ce0 ffffffffff0a0210 8dc75adc 8dc75a50
62c82ea0 8dc75a60
60082df0 60309982 62c82ea0 00080008 00000000 8dc75aa0 60084561
00000000
Call Trace:
8dc759f8: [<602972e5>] dump_stack+0x17/0x19
8dc75a08: [<60082ce0>] bad_page+0xb0/0x100
8dc75a38: [<60082df0>] free_pages_prepare+0xc0/0xd0
8dc75a68: [<60084561>] free_hot_cold_page+0x21/0x130
8dc75aa8: [<60084ace>] free_hot_cold_page_list+0x3e/0x60
8dc75ad8: [<60087958>] release_pages+0x158/0x1a0
8dc75b38: [<60087a62>] pagevec_lru_move_fn+0xc2/0xe0
8dc75b50: [<60087120>] __pagevec_lru_add_fn+0x0/0xc0
8dc75b98: [<60087e91>] lru_add_drain_cpu+0x71/0x80
8dc75bb8: [<60087f6b>] lru_add_drain+0xb/0x10
8dc75bc8: [<600a0136>] exit_mmap+0x46/0x170
8dc75bf0: [<60018a50>] copy_chunk_to_user+0x0/0x30
8dc75c28: [<6002e2a7>] mmput.part.62+0x27/0xc0
8dc75c48: [<6002e359>] mmput+0x19/0x20
8dc75c58: [<6003117a>] exit_mm+0x10a/0x150
8dc75ca8: [<60031de1>] do_exit+0x331/0x4a0
8dc75ce8: [<600330de>] do_group_exit+0x3e/0xd0
8dc75d18: [<6003de7b>] get_signal_to_deliver+0x1bb/0x4d0
8dc75d48: [<600535cb>] wake_up_state+0xb/0x10
8dc75d88: [<60016757>] kern_do_signal+0x57/0x150
8dc75e08: [<60028460>] set_signals+0x30/0x40
8dc75e28: [<6003d175>] force_sig_info+0xb5/0xd0
8dc75e68: [<60016871>] do_signal+0x21/0x30
8dc75e88: [<60017d14>] fatal_sigsegv+0x24/0x30
8dc75ea8: [<6002b2d3>] userspace+0x2c3/0x4d0
8dc75f78: [<600273d7>] save_registers+0x17/0x30
8dc75f88: [<6002df20>] arch_prctl+0x150/0x180
8dc75fd8: [<600156a9>] fork_handler+0x69/0x70
BUG: failure at mm/slab.c:1813/kmem_freepages()!
Kernel panic - not syncing: BUG!
CPU: 0 PID: 338 Comm: systemd-journal Tainted: G B W
3.12.0-00048-gbe408cd #22
8dc75988 60370947 00000000 602ffc6e 8de9a000 9080cdf0 00000000 8dc759d0
602972e5 8dc75ad0 60294a4a 00000000 3000000008 8dc75ae0 8dc75a00
8dc75a30
60423e00 00000007 00000006 8dc75720 000000f7 00000715 602a4944
ffffffffffffffff
Call Trace:
8dc759c8: [<602972e5>] dump_stack+0x17/0x19
8dc759d8: [<60294a4a>] panic+0xf4/0x1e2
8dc75a68: [<60084619>] free_hot_cold_page+0xd9/0x130
8dc75aa8: [<60087362>] __put_single_page+0x22/0x30
8dc75ad8: [<600ae4a5>] kmem_freepages.isra.69+0x135/0x140
8dc75af8: [<600aedb9>] slab_destroy+0x29/0x60
8dc75b18: [<600aef2c>] free_block+0x13c/0x150
8dc75b48: [<60296210>] cache_flusharray+0x60/0x84
8dc75b78: [<600aed87>] kmem_cache_free+0xa7/0xb0
8dc75ba8: [<6009d1f5>] remove_vma+0x45/0x50
8dc75bc8: [<600a01b4>] exit_mmap+0xc4/0x170
8dc75c28: [<6002e2a7>] mmput.part.62+0x27/0xc0
8dc75c48: [<6002e359>] mmput+0x19/0x20
8dc75c58: [<6003117a>] exit_mm+0x10a/0x150
8dc75ca8: [<60031de1>] do_exit+0x331/0x4a0
8dc75ce8: [<600330de>] do_group_exit+0x3e/0xd0
8dc75d18: [<6003de7b>] get_signal_to_deliver+0x1bb/0x4d0
8dc75d48: [<600535cb>] wake_up_state+0xb/0x10
8dc75d88: [<60016757>] kern_do_signal+0x57/0x150
8dc75e08: [<60028460>] set_signals+0x30/0x40
8dc75e28: [<6003d175>] force_sig_info+0xb5/0xd0
8dc75e68: [<60016871>] do_signal+0x21/0x30
8dc75e88: [<60017d14>] fatal_sigsegv+0x24/0x30
8dc75ea8: [<6002b2d3>] userspace+0x2c3/0x4d0
8dc75f78: [<600273d7>] save_registers+0x17/0x30
8dc75f88: [<6002df20>] arch_prctl+0x150/0x180
8dc75fd8: [<600156a9>] fork_handler+0x69/0x70
Modules linked in:
Pid: 338, comm: systemd-journal Tainted: G B W
3.12.0-00048-gbe408cd
RIP: 0033:[<0000000041621463>]
RSP: 0000007fbfe99b28 EFLAGS: 00000246
RAX: 0000000000000001 RBX: 0000007fbfe99b40 RCX: ffffffffffffffff
RDX: 0000000000000001 RSI: 0000007fbfe99b30 RDI: 0000000000000007
RBP: 00000000ffffffff R08: 0000000000000001 R09: 000000552ace5943
R10: 00000000ffffffff R11: 0000000000000246 R12: 0000007fbfe99b30
R13: 00000000000003e8 R14: 0004ea87f4e0a28c R15: 0000000000000000
Call Trace:
8dc75968: [<6001837b>] panic_exit+0x2b/0x50
8dc75978: [<60016990>] show_stack+0x40/0xe0
8dc75988: [<6004fb2c>] notifier_call_chain+0x4c/0x70
8dc759c8: [<6004fb71>] atomic_notifier_call_chain+0x11/0x20
8dc759d8: [<60294a5b>] panic+0x105/0x1e2
8dc75a68: [<60084619>] free_hot_cold_page+0xd9/0x130
8dc75aa8: [<60087362>] __put_single_page+0x22/0x30
8dc75ad8: [<600ae4a5>] kmem_freepages.isra.69+0x135/0x140
8dc75af8: [<600aedb9>] slab_destroy+0x29/0x60
8dc75b18: [<600aef2c>] free_block+0x13c/0x150
8dc75b48: [<60296210>] cache_flusharray+0x60/0x84
8dc75b78: [<600aed87>] kmem_cache_free+0xa7/0xb0
8dc75ba8: [<6009d1f5>] remove_vma+0x45/0x50
8dc75bc8: [<600a01b4>] exit_mmap+0xc4/0x170
8dc75c28: [<6002e2a7>] mmput.part.62+0x27/0xc0
8dc75c48: [<6002e359>] mmput+0x19/0x20
8dc75c58: [<6003117a>] exit_mm+0x10a/0x150
8dc75ca8: [<60031de1>] do_exit+0x331/0x4a0
8dc75ce8: [<600330de>] do_group_exit+0x3e/0xd0
8dc75d18: [<6003de7b>] get_signal_to_deliver+0x1bb/0x4d0
8dc75d48: [<600535cb>] wake_up_state+0xb/0x10
8dc75d88: [<60016757>] kern_do_signal+0x57/0x150
8dc75e08: [<60028460>] set_signals+0x30/0x40
8dc75e28: [<6003d175>] force_sig_info+0xb5/0xd0
8dc75e68: [<60016871>] do_signal+0x21/0x30
8dc75e88: [<60017d14>] fatal_sigsegv+0x24/0x30
8dc75ea8: [<6002b2d3>] userspace+0x2c3/0x4d0
8dc75f78: [<600273d7>] save_registers+0x17/0x30
8dc75f88: [<6002df20>] arch_prctl+0x150/0x180
8dc75fd8: [<600156a9>] fork_handler+0x69/0x70
------------[ cut here ]------------
WARNING: CPU: 0 PID: 618 at lib/timerqueue.c:74 timerqueue_del+0x57/0x60()
Modules linked in:
CPU: 0 PID: 618 Comm: plymouthd Not tainted 3.12.0-00048-gbe408cd #22
6036f5b8 60370947 6004fc41 00000000 00000009 60324f52 0000004a 6036f600
602972e5 6036f650 60030bc1 6036f830 6036f630 00000000 603816c0
6037e4e8
00000002 00000000 603816c0 6036f660 60030d15 6036f680 601cf007
603816c0
Call Trace:
6036f5c8: [<6004fc41>] raw_notifier_call_chain+0x11/0x20
6036f5f8: [<602972e5>] dump_stack+0x17/0x19
6036f608: [<60030bc1>] warn_slowpath_common+0x71/0x90
6036f658: [<60030d15>] warn_slowpath_null+0x15/0x20
6036f668: [<601cf007>] timerqueue_del+0x57/0x60
6036f688: [<6004dfe6>] __remove_hrtimer+0x46/0xa0
6036f6c8: [<6004e2fd>] __run_hrtimer.isra.35+0x2d/0xf0
6036f6e8: [<6004e7c7>] hrtimer_interrupt+0xb7/0x210
6036f738: [<60016a3f>] um_timer+0xf/0x20
6036f748: [<6005e0f1>] handle_irq_event_percpu+0x31/0x130
6036f798: [<6005e213>] handle_irq_event+0x23/0x40
6036f7b8: [<60060287>] handle_edge_irq+0x67/0x120
6036f7d8: [<6005dac8>] generic_handle_irq+0x28/0x30
6036f7e8: [<60014fa3>] do_IRQ+0x23/0x40
6036f808: [<60016af0>] timer_handler+0x20/0x30
6036f828: [<6002834e>] real_alarm_handler+0x3e/0x50
6036f838: [<6001a400>] winch_thread+0x0/0x1b0
6036f8b0: [<6003f612>] sigsuspend+0x22/0x90
6036fb48: [<6002840a>] alarm_handler+0x3a/0x50
6036fb68: [<60027f7d>] hard_handler+0x7d/0xc0
6036fba0: [<6001a400>] winch_thread+0x0/0x1b0
6036fc18: [<6001a400>] winch_thread+0x0/0x1b0
6036fc68: [<6003f612>] sigsuspend+0x22/0x90
---[ end trace 41ecadffe5cf650c ]---
>
> >> Does yum do a ptrace() within UML, or did you observe that from the outside?
> >
> > I saw it from the outside, in the process listing below.
>
> Okay. All UML children do ptrace(), as UML uses ptrace() for system call emulation.
>
> >>
> >>> The process tree looks also strange:
> >>>
> >>> 20330 pts/3 S+ 1:18 | \_ ./linux ubd0=ext3fs.img mem=768M systemd.unit=multi-user.target umid=fedora20
> >>> 20337 pts/3 S+ 0:00 | \_ ./linux ubd0=ext3fs.img mem=768M systemd.unit=multi-user.target umid=fedora20
> >>> 20338 pts/3 S+ 0:03 | \_ ./linux ubd0=ext3fs.img mem=768M systemd.unit=multi-user.target umid=fedora20
> >>> 20339 pts/3 S+ 0:00 | \_ ./linux ubd0=ext3fs.img mem=768M systemd.unit=multi-user.target umid=fedora20
> >>> 20347 pts/3 t+ 0:00 | \_ ./linux ubd0=ext3fs.img mem=768M systemd.unit=multi-user.target umid=fedora20
> >>> 20405 pts/3 t+ 0:00 | \_ ./linux ubd0=ext3fs.img mem=768M systemd.unit=multi-user.target umid=fedora20
> >>> 20469 pts/3 t+ 0:00 | \_ ./linux ubd0=ext3fs.img mem=768M systemd.unit=multi-user.target umid=fedora20
> >>> 20615 pts/3 S+ 0:00 | \_ xterm -T Virtual Console #1 (fedora20) -e port-helper -uml-socket /tmp/xterm-pipeiW6d5k
> >>> 20625 ? Ss 0:00 | | \_ port-helper -uml-socket /tmp/xterm-pipeiW6d5k
> >>> 20626 ? Zs 0:00 | \_ [linux] <defunct>
> >>> 20630 pts/3 t+ 0:00 | \_ ./linux ubd0=ext3fs.img mem=768M systemd.unit=multi-user.target umid=fedora20
> >>> 20642 pts/3 t+ 0:00 | \_ ./linux ubd0=ext3fs.img mem=768M systemd.unit=multi-user.target umid=fedora20
> >>> 20650 pts/3 t+ 0:00 | \_ ./linux ubd0=ext3fs.img mem=768M systemd.unit=multi-user.target umid=fedora20
> >>> 20651 pts/3 t+ 0:00 | \_ ./linux ubd0=ext3fs.img mem=768M systemd.unit=multi-user.target umid=fedora20
> >>> 20663 pts/3 t+ 0:00 | \_ ./linux ubd0=ext3fs.img mem=768M systemd.unit=multi-user.target umid=fedora20
> >>> 20681 pts/3 t+ 0:00 | \_ ./linux ubd0=ext3fs.img mem=768M systemd.unit=multi-user.target umid=fedora20
> >>> 20684 pts/3 t+ 0:00 | \_ ./linux ubd0=ext3fs.img mem=768M systemd.unit=multi-user.target umid=fedora20
> >>> 20690 pts/3 t+ 0:00 | \_ ./linux ubd0=ext3fs.img mem=768M systemd.unit=multi-user.target umid=fedora20
> >>> 20691 pts/3 t+ 0:00 | \_ ./linux ubd0=ext3fs.img mem=768M systemd.unit=multi-user.target umid=fedora20
> >>> 20699 pts/3 t+ 0:00 | \_ ./linux ubd0=ext3fs.img mem=768M systemd.unit=multi-user.target umid=fedora20
> >>> 20709 pts/3 t+ 0:00 | \_ ./linux ubd0=ext3fs.img mem=768M systemd.unit=multi-user.target umid=fedora20
> >>> 20722 pts/3 t+ 0:00 | \_ ./linux ubd0=ext3fs.img mem=768M systemd.unit=multi-user.target umid=fedora20
> >>> 20754 pts/3 S+ 0:00 | \_ xterm -T Virtual Console #2 (fedora20) -e port-helper -uml-socket /tmp/xterm-pipetxRIbS
> >>> 20757 ? Ss 0:00 | | \_ port-helper -uml-socket /tmp/xterm-pipetxRIbS
> >>> 20755 pts/3 S+ 0:00 | \_ xterm -T Virtual Console #6 (fedora20) -e port-helper -uml-socket /tmp/xterm-pipedhXmGp
> >>> 20762 ? Ss 0:00 | | \_ port-helper -uml-socket /tmp/xterm-pipedhXmGp
> >>> 20758 ? Zs 0:00 | \_ [linux] <defunct>
> >>> 20760 pts/3 t+ 0:00 | \_ ./linux ubd0=ext3fs.img mem=768M systemd.unit=multi-user.target umid=fedora20
> >>> 20763 ? Zs 0:00 | \_ [linux] <defunct>
> >>> 20797 pts/3 S+ 0:00 | \_ xterm -T Virtual Console #3 (fedora20) -e port-helper -uml-socket /tmp/xterm-pipeULItXd
> >>> 20812 ? Ss 0:00 | | \_ port-helper -uml-socket /tmp/xterm-pipeULItXd
> >>> 20813 ? Zs 0:00 | \_ [linux] <defunct>
> >>> 20815 pts/3 S+ 0:00 | \_ xterm -T Virtual Console #5 (fedora20) -e port-helper -uml-socket /tmp/xterm-pipeaKUbD3
> >>> 20876 ? Ss 0:00 | | \_ port-helper -uml-socket /tmp/xterm-pipeaKUbD3
> >>> 20877 ? Zs 0:00 | \_ [linux] <defunct>
> >>> 20896 pts/3 t+ 0:00 | \_ ./linux ubd0=ext3fs.img mem=768M systemd.unit=multi-user.target umid=fedora20
> >>> 20909 pts/3 t+ 0:00 | \_ ./linux ubd0=ext3fs.img mem=768M systemd.unit=multi-user.target umid=fedora20
> >>> 21005 pts/3 t+ 0:00 | \_ ./linux ubd0=ext3fs.img mem=768M systemd.unit=multi-user.target umid=fedora20
> >>> 21007 pts/3 Z+ 0:00 | \_ [uml_net] <defunct>
> >>> 21019 pts/3 t+ 0:00 | \_ ./linux ubd0=ext3fs.img mem=768M systemd.unit=multi-user.target umid=fedora20
> >>> 21112 pts/3 t+ 0:00 | \_ ./linux ubd0=ext3fs.img mem=768M systemd.unit=multi-user.target umid=fedora20
> >>> 21125 pts/3 Z+ 0:00 | \_ [uml_net] <defunct>
> >>> 22164 pts/3 t+ 0:00 | \_ ./linux ubd0=ext3fs.img mem=768M systemd.unit=multi-user.target umid=fedora20
> >>> 22211 pts/3 t+ 0:00 | \_ ./linux ubd0=ext3fs.img mem=768M systemd.unit=multi-user.target umid=fedora20
> >>> 22224 pts/3 t+ 0:00 | \_ ./linux ubd0=ext3fs.img mem=768M systemd.unit=multi-user.target umid=fedora20
> >>> 22380 pts/3 t+ 0:51 | \_ ./linux ubd0=ext3fs.img mem=768M systemd.unit=multi-user.target umid=fedora20
> >>> 21965 pts/3 t+ 0:00 | \_ ./linux ubd0=ext3fs.img mem=768M systemd.unit=multi-user.target umid=fedora20
> >>> 21968 pts/3 t+ 0:00 | \_ ./linux ubd0=ext3fs.img mem=768M systemd.unit=multi-user.target umid=fedora20
> >>> 21983 pts/3 t+ 0:00 | \_ ./linux ubd0=ext3fs.img mem=768M systemd.unit=multi-user.target umid=fedora20
> >>> 22053 pts/3 t+ 0:00 | \_ ./linux ubd0=ext3fs.img mem=768M systemd.unit=multi-user.target umid=fedora20
> >>> 22058 pts/3 t+ 0:00 | \_ ./linux ubd0=ext3fs.img mem=768M systemd.unit=multi-user.target umid=fedora20
> >>> 22887 pts/3 t+ 0:00 | \_ ./linux ubd0=ext3fs.img mem=768M systemd.unit=multi-user.target umid=fedora20
> >>
> >> Do the tasks remain in state Z, or are they flipping around?
> >
> > All processes remain in their state; nothing seems to happen any more.
>
> Ok. Then it would be nice to find out what the UML main thread does.
> gdb can tell you.
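For reference, a minimal way to do that from the host would be something like
the following (a sketch; 20330 is just the example PID of the first ./linux
process from the ps listing quoted below):

```shell
# Attach gdb to the UML main thread (the first ./linux in the host
# process tree), dump a backtrace, then detach so the VM keeps its state.
# 20330 is the hypothetical PID taken from the ps listing in this thread.
gdb -p 20330 -batch -ex 'bt' -ex 'detach'
```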
>
> >> Maybe the UML userspace creates many threads and on the host side UML
> >> didn't call wait() yet...
> >
> > I don't think so. yum is probably not a big thread user, I guess.
>
> Isn't it a huge chunk of python? ;-)
>
> Thanks,
> //richard