
perl segfaults in some cases on FreeBSD #11

Open
christianlavoie opened this issue Jan 22, 2012 · 0 comments

What follows is an email discussion with Daisuke Maki, which he asked me to convert into a GitHub issue:

Short story: the patch at the end keeps your ZeroMQ module from
segfaulting in some cases on FreeBSD.

Long story: I tried your ZeroMQ 0.20 module on FreeBSD 9.0, perl 5.14.1,
zmq 2.1.10.

Characteristics of this binary (from libperl): 
  Compile-time options: DEBUGGING MULTIPLICITY PERL_DONT_CREATE_GVSV
                        PERL_IMPLICIT_CONTEXT PERL_MALLOC_WRAP PERL_POISON
                        PERL_PRESERVE_IVUV PERL_TRACK_MEMPOOL USE_64_BIT_ALL
                        USE_64_BIT_INT USE_ITHREADS USE_LARGE_FILES
                        USE_PERLIO USE_PERL_ATOF USE_REENTRANT_API
                        USE_SITECUSTOMIZE
  Built under freebsd
  Compiled at Jan 20 2012 17:44:49

The 104_ipc and rt64944 tests fail because the forked child segfaults
in PerlZMQ_free_string's call to Safefree:

Reading symbols from /usr/local/bin/perl...done.
(gdb) set args t/104_ipc.t
(gdb) run
Starting program: /usr/local/bin/perl t/104_ipc.t
[New LWP 101242]
1..3
[New Thread 801c07400 (LWP 101242)]
ok 1 - use ZeroMQ;
[New Thread 801c09000 (LWP 101466)]

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 801c09000 (LWP 101466)]
0x000000080099b3da in Perl_safesysfree (where=0x80232a4e0) at util.c:256
256         DEBUG_m( PerlIO_printf(Perl_debug_log, "0x%"UVxf": (%05ld) free\n",PTR2UV(where),(long)PL_an++));
(gdb) bt
#0  0x000000080099b3da in Perl_safesysfree (where=0x80232a4e0) at util.c:256
#1  0x0000000802c4d0c2 in PerlZMQ_free_string (data=0x80232a4e0, hint=0x0) at xs/perl_zeromq.xs:190
#2  0x0000000802e940d1 in zmq_msg_close () from /usr/local/lib/libzmq.so.1
#3  0x0000000802e79765 in zmq::encoder_t::message_ready() () from /usr/local/lib/libzmq.so.1
#4  0x0000000802e955c1 in zmq::zmq_engine_t::out_event() () from /usr/local/lib/libzmq.so.1
#5  0x0000000802e7bc2b in zmq::kqueue_t::loop() () from /usr/local/lib/libzmq.so.1
#6  0x0000000802e8f5f7 in thread_routine () from /usr/local/lib/libzmq.so.1
#7  0x00000008014d7274 in ?? () from /lib/libthr.so.3
#8  0x0000000000000000 in ?? ()

That GDB transcript is actually from a doctored 104_ipc.t that swaps the
child and parent roles in the fork call, so that the segfaulting process
is the parent being run under gdb:

 # cat t/104_ipc.t
use strict;
use Test::More tests => 3;
use Test::SharedFork;
use File::Temp;

BEGIN {
   use_ok "ZeroMQ", qw(ZMQ_REP ZMQ_REQ);
}

my $path = File::Temp->new(UNLINK => 0);
my $pid = Test::SharedFork->fork();
if ($pid == 0) {
   my $ctxt = ZeroMQ::Context->new();
   my $parent_sock = $ctxt->socket(ZMQ_REP);
   $parent_sock->bind( "ipc://$path" );
   my $msg = $parent_sock->recv;
   is $msg->data, "Hello from $pid", "message is the expected message";
   waitpid $pid, 0;
} elsif ($pid) {
   sleep 1; # hmmm, not a good way to do this...
   my $ctxt = ZeroMQ::Context->new();
   my $child = $ctxt->socket( ZMQ_REQ );
   $child->connect( "ipc://$path" );
   $child->send( "Hello from $$" );
   pass "Send successful";
} else {
   die "Could not fork: $!";
}

unlink $path;

The following patch fixes things for me, and valgrind confirms it doesn't
introduce any new memory leaks. I think ZeroMQ 2.1 doesn't free the message's
memory by the end of the PerlZMQ_Raw_zmq_msg_init_data and
PerlZMQ_Raw_zmq_send calls, but a little after. The patch isn't perfect,
since it no longer maps one-to-one onto the 0MQ API (in particular, the
Perl-level zmq_msg_init_data no longer calls the C zmq_msg_init_data). I don't
understand the Perl internals well enough to be sure, but my guess is that
some interaction between the lifetime of SVs in Perl and zmq's asynchronous
cleanup of the message buffer causes the segfault on my machine without the
following:

--- ZeroMQ-0.20/xs/perl_zeromq.xs 2012-01-11 20:59:06.000000000 -0500
+++ ZeroMQ-0.21/xs/perl_zeromq.xs 2012-01-20 19:04:02.000000000 -0500
@@ -182,16 +182,10 @@

    croak("ZeroMQ::Socket: Invalid ZeroMQ::Socket object was passed to mg_find");
    return NULL; /* not reached */
 }

-STATIC_INLINE void
-PerlZMQ_free_string(void *data, void *hint) {
-    PERL_UNUSED_VAR(hint);
-    Safefree( (char *) data );
-}
-
 #include "mg-xs.inc"

 MODULE = ZeroMQ    PACKAGE = ZeroMQ   PREFIX = PerlZMQ_

 PROTOTYPES: DISABLED
@@ -328,11 +322,12 @@
            x_data_len = size;
        }
        Newxz( RETVAL, 1, PerlZMQ_Raw_Message );
        Newxz( x_data, x_data_len, char );
        Copy( sv_data, x_data, x_data_len, char );
-        rc = zmq_msg_init_data(RETVAL, x_data, x_data_len, PerlZMQ_free_string, NULL);
+        rc = zmq_msg_init_size(RETVAL, x_data_len);
+        memcpy(zmq_msg_data(RETVAL), x_data, x_data_len);
        if ( rc != 0 ) {
            SET_BANG;
            zmq_msg_close( RETVAL );
            RETVAL = NULL;
        }
@@ -534,13 +529,14 @@
            char *data = SvPV(message, data_len);
            zmq_msg_t msg;

            Newxz(x_data, data_len, char);
            Copy(data, x_data, data_len, char);
-            zmq_msg_init_data(&msg, x_data, data_len, PerlZMQ_free_string, NULL);
+            zmq_msg_init_size(&msg, data_len);
+            memcpy(zmq_msg_data(&msg), data, data_len);
            RETVAL = zmq_send(socket->socket, &msg, flags);
-            zmq_msg_close( &msg );
+            zmq_msg_close(&msg);
        }
    OUTPUT:
        RETVAL

 SV *