# Segmentation faults when using Queue (non full push path) with netmap pool #83
Comments
Following the full-push idea, you would have to implement a push `BandwidthRatedUnqueue`, maybe with internals similar to a `Pipeliner`. However, pull should still work. All of these problems probably happen because of a leak. For the "no more netmap buffers", using `NetmapInfo(EXTRA_BUFFER 65536)` will confirm it: if they all disappear, then they are lost somewhere. Also, if you use only the `Queue` but no rate limiters, does it work?
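The buffer-exhaustion check suggested above could be written as a small diagnostic configuration (a sketch; the veth device names and the trivial forwarding path are placeholders for the reporter's actual setup):

```click
// Reserve extra netmap buffers; if these also run out, buffers are
// being leaked somewhere rather than merely under-provisioned.
NetmapInfo(EXTRA_BUFFER 65536);

// Minimal forwarding path for the test (device names are examples).
FromNetmapDevice(netmap:veth0) -> ToNetmapDevice(netmap:veth1);
```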
Segfaults also happen without rate limiters (i.e. when using […]). I also noticed that usage of […]. As for multithreaded scenarios and […].
I'm not sure who's using netmap anymore, so I'm sadly applying wontfix to this...
I emulate a virtual network by connecting several Click processes with netmap native (patched) veth pairs and `From/ToNetmapDevice`. Now I want to rate limit the `ToNetmapDevice` interfaces.

My first attempt was to use a `BandwidthRatedSplitter` in order to maintain a full push path. However, TCP rate control algorithms go crazy with that: when I limit an interface to 1 Gbps, the rate of a TCP flow varies between 0 and 1.5 Gbps, and iperf reports an average of 200 Mbps.

So I decided to use `Queue -> BandwidthRatedUnqueue`. This introduced a problem of empty runs and failed pulls, because Click was repeatedly scheduling `run_task`. Using `QuickNoteQueue` instead of `Queue` reduced this problem to an acceptable level. More importantly, the TCP rate was smooth and equal to the limit, the same as if I had limited the interface bandwidth with the `tc` command.

This was working fine as long as all Click processes were connected serially. When I emulated any non-linear network, I started to observe segfaults. They happen on packet allocation or freeing. Here are some examples:
Most of them happen in `KernelTun`, but they are not specific to this element: when I replace it with `From/ToDevice`, the same happens inside it. These are the elements that mostly allocate/deallocate packets (since `From/ToNetmapDevice` only forward them). I use them for connecting with routing daemons.

With `Pipeliner` instead of `Queue -> BandwidthRatedUnqueue`, segfaults do not happen, but of course I cannot rate limit it.

All of the above applies to single-thread runs (`click -j 1`). With multiple threads, the same segfaults happen. In addition, Click crashes even in a linear network after forwarding some number of packets, with the messages "No more netmap buffers" and "netmap_ring_reinit". This also happens with `Pipeliner`. So the only configurations that work with multiple threads are full push paths without `Queue` and `Pipeliner`.

So there are at least two problems:

1. Segfaults when `Queue` is used and processes are connected non-serially (happens with both single and multiple threads)
2. Netmap buffer exhaustion when `Queue` or `Pipeliner` is used, no matter how routers are connected (happens only with multiple threads)

Full push paths work fine, with both single and multiple threads and any network topology.
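For reference, the rate-limited pull path described above looks roughly like this (a sketch; the device names, queue capacity, and 1 Gbps rate are illustrative):

```click
// Pull path: packets are queued, then drained at a bounded rate.
FromNetmapDevice(netmap:veth0)
    -> QuickNoteQueue(1000)          // notifier queue; reduces empty run_task runs
    -> BandwidthRatedUnqueue(1Gbps)  // pull from the queue at the rate limit
    -> ToNetmapDevice(netmap:veth1);
```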
So are these bugs? Or are `Queue` elements not meant to be used in netmap pool mode? If so, how can rate limiting be achieved? Should it be implemented in `Pipeliner`? But that would still only work with a single thread.
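For comparison, the full-push alternative mentioned at the start (the one that upset TCP) keeps everything push by splitting off over-rate packets instead of queueing them (a sketch; names and rate are illustrative, and output 1 of the splitter is assumed to carry the non-conforming packets):

```click
// Full push path: no Queue, so the netmap-pool crashes are avoided,
// but TCP behaves badly with this drop-based limiting.
FromNetmapDevice(netmap:veth0)
    -> rs :: BandwidthRatedSplitter(1Gbps)
    -> ToNetmapDevice(netmap:veth1);
rs[1] -> Discard;  // packets beyond the configured rate are dropped
```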