
#2375 oom panic: out of memory when using fsp indprovider

Category: Function
Status: pending-fixed
Group: sfcb (1090)
Priority: 8
Updated: 2012-04-06
Created: 2012-03-06
Private: No

This defect is from LTC #78376

This issue was noticed during an FSP indication provider test run on a Power system.
The system runs out of memory because objects created by the provider are never released.
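
For reference, below is a minimal sketch of the suspected leak pattern, assuming a CMPI indication provider along the lines of the FSP provider: each delivered indication allocates a CMPIObjectPath and a CMPIInstance, and if neither is released the broker process grows until the kernel's oom-killer fires, as in the trace further down. The names used here (deliver_fsp_event, fsp_event_t, the root/ibmsd namespace, the FSP_AlertIndication class) are placeholders, not the actual provider code; only the CMPI calls themselves are real API.

/*
 * Minimal sketch of the suspected leak pattern, assuming a CMPI indication
 * provider along the lines of the FSP provider.  Names such as
 * deliver_fsp_event, fsp_event_t, the root/ibmsd namespace and the
 * FSP_AlertIndication class are placeholders; only the CMPI calls are real.
 */
#include <cmpidt.h>
#include <cmpift.h>
#include <cmpimacs.h>

static const CMPIBroker *_BROKER;     /* normally set up by the provider stub */

typedef struct { const char *msg; } fsp_event_t;   /* placeholder event type */

static void deliver_fsp_event(const CMPIContext *ctx, const fsp_event_t *ev)
{
    CMPIStatus rc = { CMPI_RC_OK, NULL };

    /* One object path and one instance are allocated per indication. */
    CMPIObjectPath *op = CMNewObjectPath(_BROKER, "root/ibmsd",
                                         "FSP_AlertIndication", &rc);
    if (op == NULL)
        return;

    CMPIInstance *ind = CMNewInstance(_BROKER, op, &rc);
    if (ind == NULL) {
        CMRelease(op);
        return;
    }

    CMSetProperty(ind, "Message", (CMPIValue *)ev->msg, CMPI_chars);
    CBDeliverIndication(_BROKER, ctx, "root/ibmsd", ind);

    /*
     * The indication thread runs for the life of the provider, so these
     * objects are not reclaimed for us; without the two CMRelease calls
     * every delivered event leaks a little memory, which on a small FSP
     * eventually ends in the oom-killer panic shown in the log below.
     */
    CMRelease(ind);
    CMRelease(op);
}

The actual fix is in the attached patch (3497209-78376-patch, see the discussion below); the sketch only illustrates why unreleased objects on a 190 MB FSP eventually exhaust memory.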

<4>oom-killer: gfp_mask=0x201d2, order=0
<4>Call Trace:
<4>[C4807C80] [C000B1BC] show_stack+0x34/0x194 (unreliable)
<4>[C4807CB0] [C0046014] out_of_memory+0x174/0x1bc
<4>[C4807CF0] [C0048150] __alloc_pages+0x3d8/0x430
<4>[C4807D60] [C0049B18] __do_page_cache_readahead+0xf0/0x280
<4>[C4807DF0] [C0045084] filemap_nopage+0x10c/0x3f8
<4>[C4807E20] [C004F6F8] __handle_mm_fault+0x170/0x9f4
<4>[C4807E80] [C000C430] do_page_fault+0x218/0x4e4
<4>[C4807F40] [C0003334] handle_page_fault+0xc/0x80
<4>Mem-info:
<4>DMA per-cpu:
<4>cpu 0 hot: high 90, batch 15 used:89
<4>cpu 0 cold: high 30, batch 7 used:27
<4>DMA32 per-cpu: empty
<4>Normal per-cpu: empty
<4>HighMem per-cpu: empty
<4>Free pages: 1756kB (0kB HighMem)
<4>Active:33372 inactive:308 dirty:0 writeback:0 unstable:0 free:439 slab:5601
mapped:33404 pagetables:1055
<4>DMA free:1756kB min:1764kB low:2204kB high:2644kB active:133488kB
inactive:1232kB present:194560kB pages_scanned:319509 all_unreclaimable? yes
<4>lowmem_reserve[]: 0 0 0 0
<4>DMA32 free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB
pages_scanned:0 all_unreclaimable? no
<4>lowmem_reserve[]: 0 0 0 0
<4>Normal free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB
pages_scanned:0 all_unreclaimable? no
<4>lowmem_reserve[]: 0 0 0 0
<4>HighMem free:0kB min:128kB low:128kB high:128kB active:0kB inactive:0kB
present:0kB pages_scanned:0 all_unreclaimable? no
<4>lowmem_reserve[]: 0 0 0 0
<4>DMA: 1*4kB 1*8kB 3*16kB 1*32kB 0*64kB 1*128kB 0*256kB 1*512kB 1*1024kB
0*2048kB 0*4096kB = 1756kB
<4>DMA32: empty
<4>Normal: empty
<4>HighMem: empty
<4>Free swap: 0kB
<4>48640 pages of RAM
<4>0 pages of HIGHMEM
<4>2230 free pages
<4>5316 reserved pages
<4>1020 pages shared
<4>0 pages swap cached
<4>Out of Memory Error, possible bad process is xinetd, pid 758, total size of
vm = 886
<0>Kernel panic - not syncing: Out of memory...
<4>
<4> mcore: started on CPU0
<4>mcore: starting normal mcore dump
<4>copying PRA to mcore header at c0300030
<4>mcore: marked 4712 pages SND; won't be in MCORE
<4>mcore: marked 6267 vital pages
<4>mcore: 48640 pages -> dest=949 kernel=13561 user=33389 pool=10 skip_top=162
skip_bot=0
<4>mcore: saving 6267 vital pages
<4>crash_store_page_raw: memory low!
<4>crash_alloc_dest_page: memory low, 10 pages left
<4>crash_alloc_dest_page: memory low, 9 pages left
<4>crash_alloc_dest_page: memory low, 8 pages left
<4>crash: freed 33389 pages (136761344 bytes) with bit 30
<4>mcore: saved 6267 vital pages (7766 msecs)
<4>mcore: saving other kernel pages
<4>mcore: saved 2592 other kernel pages (10238 msecs)
<4>mcore: not saving user pages (not reserved)
<4>8859 in pages, 2829 out pages, 1 pagelist
<4>
<4>mcore: src pages skipped = 35078 src pages saved = 8859
<4>mcore: data_pages = 2829 map_pages = 9
<4>mcore: completed, crash_dump_header = 0xc0300000
<4>mcore: took 10363 msecs
<4>restarting system.

KERNEL:
/esw/fips740/Builds/b0106a_1204.740/obj/ppc/boot/vmlinux-2.6.16.27-0-fsp1
DEBUGINFO:
/esw/fips740/Builds/b0106a_1204.740/obj/ppc/boot//vmlinux-2.6.16.27-0-fsp1.debug
DUMPFILE: ./entry_8/BDMPP_MCORE_DATA
CPUS: 1
DATE: Sat Mar 6 15:34:20 2004
UPTIME: 01:11:43
LOAD AVERAGE: 27.01, 18.67, 10.25
TASKS: 293
NODENAME: nicklausfp2
RELEASE: 2.6.16.27-0-fsp1
VERSION: #1 Fri Oct 7 16:39:49 UTC 2011
MACHINE: ppc (unknown Mhz)
MEMORY: 190 MB
PANIC: "Out of memory..."
PID: 23933
COMMAND: "rmgrorb"
TASK: cb9cb810 [THREAD_INFO: c4806000]
CPU: 0
STATE: TASK_RUNNING (PANIC)

Discussion

  • Narasimha Sharoff

    3497209-78376-patch

     
  • Narasimha Sharoff

    patch applied to cvs head

     
  • Chris Buccella - 2012-04-06

    committed to git Apr 4

     
  • Chris Buccella - 2012-04-06
    • status: open --> pending-fixed
     
