[ MPCreate: 173] Can't allocate memory for mempool! #309

Open · rohitjo opened this issue Oct 8, 2020 · 5 comments

@rohitjo commented Oct 8, 2020

[DEBUG] Initializing mtcp...
---------------------------------------------------------------------------------
Loading mtcp configuration from : client.conf
Loading interface setting
EAL: Detected 16 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Auto-detected process type: PRIMARY
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Some devices want iova as va but pa will be used because.. EAL: IOMMU does not support IOVA as VA
EAL: No free hugepages reported in hugepages-1048576kB
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:00:1f.6 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:15bb net_e1000_em
EAL: PCI device 0000:02:00.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:1533 net_e1000_igb
EAL:   using IOMMU type 1 (Type 1)
EAL:   Ignore mapping IO port bar(2)
EAL: Auto-detected process type: PRIMARY
Configurations:
Number of CPU cores available: 1
Number of CPU cores to use: 1
Maximum number of concurrency per core: 10000
Maximum number of preallocated buffers per core: 10000
Receive buffer size: 6291456
Send buffer size: 4194304
TCP timeout seconds: 30
TCP timewait seconds: 0
NICs to print statistics:
---------------------------------------------------------------------------------
Interfaces:
Number of NIC queues: 1
---------------------------------------------------------------------------------
Loading routing configurations from : config/route.conf
fopen: No such file or directory
Skip loading static routing table
Routes:
(blank)
---------------------------------------------------------------------------------
Loading ARP table from : config/arp.conf
fopen: No such file or directory
Skip loading static ARP table
ARP Table:
(blank)
---------------------------------------------------------------------------------

Checking link statusdone
Configuration updated by mtcp_setconf().
[DEBUG] Creating thread context...
[ MPCreate: 173] Can't allocate memory for mempool!

@rohitjo (Author) commented Oct 8, 2020

cat /proc/meminfo | grep Huge
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
FileHugePages:         0 kB
HugePages_Total:    8192
HugePages_Free:     8180
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
Hugetlb:        16777216 kB
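
For a rough sanity check: the configuration dump above asks for 10000 preallocated buffers per core with a 6291456-byte receive buffer and a 4194304-byte send buffer. Assuming (and this is only my assumption about how the pools are sized, not something taken from the mtcp source) that one receive and one send buffer are preallocated per buffer slot, the demand dwarfs the 16 GB of 2 MB hugepages shown above:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Values taken from the mtcp configuration dump and /proc/meminfo above. */
    uint64_t prealloc   = 10000;                  /* preallocated buffers per core */
    uint64_t rcvbuf     = 6291456;                /* receive buffer size (bytes)   */
    uint64_t sndbuf     = 4194304;                /* send buffer size (bytes)      */
    uint64_t huge_total = 8192ULL * 2048 * 1024;  /* 8192 hugepages x 2048 kB      */

    /* Assumed demand: one receive + one send buffer per preallocated slot. */
    uint64_t demand = prealloc * (rcvbuf + sndbuf);

    printf("estimated mempool demand: %llu MB\n", (unsigned long long)(demand >> 20));
    printf("hugepage memory total   : %llu MB\n", (unsigned long long)(huge_total >> 20));
    return 0;
}
```

That works out to roughly 100 GB of demand against about 16 GB of hugepages, so under this assumption the failure in the log above would be expected even though HugePages_Free looks healthy.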

@MrBean818 commented Dec 23, 2020

I met the same problem today. Running ./client wait 10.0.0.1 1234 100 exits with the same error.
It fails in tcp_send_buffer.c:42:
sbm->mp = (mem_pool_t)MPCreate(pool_name, chunk_size, (uint64_t)chunk_size * cnum);
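
For context, here is roughly what I assume that MPCreate() call boils down to on the DPDK side (a sketch based on the parameters in the line above, not the actual mtcp source): chunk_size becomes the element size of an rte_mempool, so every element of the send-buffer pool is a multi-megabyte object carved out of hugepage memory.

```c
#include <stdint.h>
#include <rte_mempool.h>
#include <rte_lcore.h>

/* Sketch of the assumed MPCreate(pool_name, chunk_size, pool_size) mapping. */
static struct rte_mempool *
mpcreate_sketch(const char *name, unsigned int chunk_size, uint64_t pool_size)
{
    unsigned int n_elems = pool_size / chunk_size;   /* = cnum in the caller above */

    return rte_mempool_create(name,
                              n_elems,          /* number of elements       */
                              chunk_size,       /* element size in bytes    */
                              0,                /* no per-lcore cache       */
                              0,                /* no private data          */
                              NULL, NULL,       /* no mempool constructor   */
                              NULL, NULL,       /* no per-object init       */
                              rte_socket_id(),  /* allocate on local socket */
                              0);               /* default flags            */
}
```

If DPDK cannot populate a pool of n_elems objects of chunk_size bytes each from the available hugepage segments, the create call returns NULL and mtcp presumably prints the MPCreate error above.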

root@ckun-MS-1:~/github/mtcp/apps/perf# ./client wait 10.0.0.1 1234 100
[DEBUG] Wait mode
Configuration updated by mtcp_setconf().
[DEBUG] Initializing mtcp...
Loading mtcp configuration from : client.conf
Loading interface setting
EAL: Detected 16 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Auto-detected process type: PRIMARY
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:01:00.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:158b net_i40e
PMD: Global register is changed during support QinQ parser
PMD: Global register is changed during configure hash input set
PMD: Global register is changed during configure fdir mask
PMD: Global register is changed during configure hash mask
PMD: Global register is changed during support QinQ cloud filter
PMD: Global register is changed during disable FDIR flexible payload
EAL: PCI device 0000:01:00.1 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:158b net_i40e
EAL: Auto-detected process type: PRIMARY
Configurations:
Number of CPU cores available: 1
Number of CPU cores to use: 1
Maximum number of concurrency per core: 2
Maximum number of preallocated buffers per core: 2
Receive buffer size: 6291456
Send buffer size: 4194304
TCP timeout seconds: 30
TCP timewait seconds: 0
NICs to print statistics:
"---------------------------------------------------------------------------------
Interfaces:
Number of NIC queues: 1
---------------------------------------------------------------------------------
Loading routing configurations from : config/route.conf
fopen: No such file or directory
Skip loading static routing table
Routes:
(blank)
---------------------------------------------------------------------------------
Loading ARP table from : config/arp.conf
fopen: No such file or directory
Skip loading static ARP table
ARP Table:
(blank)
---------------------------------------------------------------------------------

Checking link statusdone
Configuration updated by mtcp_setconf().
[DEBUG] Creating thread context...
[  MPCreate: 173] Can't allocate memory for mempool!
root@ckun-MS-1:~/github/mtcp/apps/perf# 
root@ckun-MS-1:~/github/mtcp/apps/perf# cat /proc/meminfo | grep Huge
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
HugePages_Total:   16384
HugePages_Free:    16370
HugePages_Rsvd:
HugePages_Surp:        0

@MrBean818 commented:

I located the failure in rte_mempool.c:1066:
if (rte_mempool_populate_default(mp) < 0) goto fail;
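
One thing that may help narrow it down (my own suggestion, not verified against the mtcp tree): in the copy of the function you made, wrap the populate call and print rte_strerror(rte_errno), which tells you whether the populate step failed with ENOMEM, EINVAL, etc.:

```c
#include <stdio.h>
#include <rte_errno.h>
#include <rte_mempool.h>

/* Debugging aid: report why the populate step failed. */
static int populate_or_report(struct rte_mempool *mp)
{
    int ret = rte_mempool_populate_default(mp);

    if (ret < 0)
        fprintf(stderr, "rte_mempool_populate_default failed: %s (rte_errno=%d)\n",
                rte_strerror(rte_errno), rte_errno);
    return ret;
}
```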

@MrBean818 commented:

Narrowed it down further: it fails in memzone_reserve_aligned_thread_unsafe with rte_errno == EINVAL.

So far I haven't found the root cause; maybe some configuration is needed. Can anyone help?

I don't know how to step through the DPDK functions with gdb from apps/perf/client, so I copied the functions into mtcp/src/memory_mgt.c (renamed to xx_rte_mempool_populate_default etc.) and recompiled, so that I can get more information.
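
Since the EINVAL comes out of the memzone layer, a quick probe (again just a sketch, meant to be run after rte_eal_init() in a throwaway DPDK program) is to ask EAL directly for a memzone of the size the mempool needs and see whether that already fails:

```c
#include <stdio.h>
#include <rte_memzone.h>
#include <rte_errno.h>
#include <rte_lcore.h>

/* Probe whether EAL can reserve a single memzone of `len` bytes. */
static void probe_memzone(size_t len)
{
    const struct rte_memzone *mz =
        rte_memzone_reserve("mz_probe", len, rte_socket_id(), 0);

    if (mz == NULL)
        fprintf(stderr, "reserve of %zu bytes failed: %s\n",
                len, rte_strerror(rte_errno));
    else
        rte_memzone_free(mz);
}
```

If this fails for the same size, the limit is in the EAL/hugepage setup rather than in mtcp's mempool code.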

@lyd19997 commented:

"[ MPCreate: 173] Can't allocate memory for mempool!".
located the failing in
if (rte_mempool_populate_default(mp) < 0) goto fail;

I solved the problem by changing the client.conf. Like this:

Receive buffer size of sockets

#rcvbuf = 6291456
rcvbuf = 16384

Send buffer size of sockets

sndbuf = 2048
#sndbuf = 4194304
#sndbuf = 41943040
#sndbuf = 146000
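
For what it's worth, with rcvbuf = 16384 and sndbuf = 2048 the assumed per-buffer footprint drops from roughly 10 MB to about 18 kB, so even 10000 preallocated buffers per core would need only around 180 MB of hugepage memory, which fits comfortably in the hugepage pools shown earlier (this uses the same rough per-buffer accounting sketched above, which is my assumption rather than a confirmed detail of mtcp).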
