U 20.04 for binary build & release #232

Merged (1 commit) · Mar 29, 2023

Conversation

n-connect
Contributor

Build and GitHub release to 20.04

rustdesk merged commit 7bbed69 into rustdesk:master on Mar 29, 2023
@n-connect
Contributor Author

n-connect commented Mar 29, 2023

👍
@rustdesk, would you manually push out a release so we can check it out? 1.1.7-3, for example?

@rustdesk
Owner

does this fix anything?

@n-connect
Contributor Author

I hope so.
The auto-build of hbbs/hbbr for FreeBSD core dumped. My manual build (with the same command) now works on multiple boxes - I shared it in another temporary repo until this works.

The only difference between the auto-build and my manual build was the Rust version:

  • the working ones were 1.67.1 and 1.68
  • the auto-build used 1.62 (a fixed toolchain version, changed in a previous PR)

@rustdesk
Owner

👍

@rustdesk
Owner

1.1.7-3 added

@madpilot78
Contributor

@n-connect

Hi, I'm trying to use this on FreeBSD, also creating a port. I'm building in poudriere, but the resulting binaries core dump.

I'm building and running on 13.1-RELEASE-p7.

The rust compiler I'm using is version 1.68.2 as provided by FreeBSD ports.

Any suggestions on how to solve this or help diagnose?

@n-connect
Contributor Author

There's an interesting issue on FreeBSD: if you run the binaries from a shell other than bash, they core dump (started with csh, then tested with tcsh and sh too). I've shared my previous build in a temporary repo here - these just run for me, and the service script can start/stop/restart - but my user's shell is set to bash.
We have not figured it out yet; I just asked @mickeyreg for his .cshrc/.shrc files.
You can go to this direct comment from PR #209 where @mickeyreg modified my rc.d scripts, adding screen so it can run as a service.

Please check whether you get the same result, i.e. the hbbs|r binaries run if you do:

  • start bash by simply typing bash, enter; then run the hbbs|r binaries
  • 2nd: change your user's shell to bash, re-login and try running them / using the rc.d scripts - first the "originals" from the main repo (or from the tar.gz I linked above). Then check, by rebooting the box, whether they also start at init/boot time. If not, you can use the screen-modified rc.d scripts

@madpilot78
Contributor

I discovered PR #209 while you were replying. I see he has also patched the software with a very simple change, which I'm going to test.

I'll also test with bash or screen (or maybe tmux) if necessary.

Thanks for the suggestions.

@n-connect
Contributor Author

n-connect commented Mar 30, 2023

Okay, before you use the patch, make sure you have the same backtrace output from the core dump - because I don't. I only have a stack overflow and more generic code references; I'll post it here if you need it.
So modifying the logging-to-file capability from async to direct simply did not solve the core dump in my case.

Core dump check:

  • lldb --core ./hbbr.core ./hbbr
  • at the lldb prompt: bt or bt all

@n-connect
Contributor Author

n-connect commented Mar 30, 2023

Made a fresh build; it still works here with bash (the errors are because the previous build is running as a service):

[root@srv /tmp/rustdesk-server/target/x86_64-unknown-freebsd/release]# ./hbbs
[2023-03-30 13:06:07.242919 +02:00] INFO [src/common.rs:143] Private/public key written to id_ed25519/id_ed25519.pub
[2023-03-30 13:06:07.242994 +02:00] INFO [src/peer.rs:84] DB_URL=./db_v2.sqlite3
[2023-03-30 13:06:07.294102 +02:00] INFO [src/rendezvous_server.rs:98] serial=0
[2023-03-30 13:06:07.294125 +02:00] INFO [src/common.rs:46] rendezvous-servers=[]
[2023-03-30 13:06:07.294135 +02:00] INFO [src/rendezvous_server.rs:100] Listening on tcp/udp :21116
[2023-03-30 13:06:07.294136 +02:00] INFO [src/rendezvous_server.rs:101] Listening on tcp :21115, extra port for NAT test
[2023-03-30 13:06:07.294138 +02:00] INFO [src/rendezvous_server.rs:102] Listening on websocket :21118
[2023-03-30 13:06:07.294160 +02:00] INFO [libs/hbb_common/src/udp.rs:35] Receive buf size of udp [::]:21116: Ok(42080)
[2023-03-30 13:06:07.294239 +02:00] INFO [libs/hbb_common/src/udp.rs:35] Receive buf size of udp 0.0.0.0:21116: Ok(42080)
Error: Address already in use (os error 48)
[root@srv /tmp/rustdesk-server/target/x86_64-unknown-freebsd/release]# ./hbbr
[2023-03-30 13:06:11.349169 +02:00] INFO [src/relay_server.rs:60] #blacklist(blacklist.txt): 0
[2023-03-30 13:06:11.349194 +02:00] INFO [src/relay_server.rs:75] #blocklist(blocklist.txt): 0
[2023-03-30 13:06:11.349198 +02:00] INFO [src/relay_server.rs:81] Listening on tcp :21117
[2023-03-30 13:06:11.349199 +02:00] INFO [src/relay_server.rs:83] Listening on websocket :21119
[2023-03-30 13:06:11.349220 +02:00] INFO [src/relay_server.rs:86] Start
Error: Address already in use (os error 48)
[root@srv /tmp/rustdesk-server/target/x86_64-unknown-freebsd/release]# ls -la
total 23231
drwxr-xr-x    7 root  wheel        21 Mar 30 13:06 .
drwxr-xr-x    3 root  wheel         4 Mar 30 12:49 ..
-rw-r--r--    1 root  wheel         0 Mar 30 12:49 .cargo-lock
drwxr-xr-x  251 root  wheel       251 Mar 30 12:49 .fingerprint
drwxr-xr-x   42 root  wheel        42 Mar 30 12:49 build
-rw-r--r--    1 root  wheel     24576 Mar 30 13:06 db_v2.sqlite3
-rw-r--r--    1 root  wheel     32768 Mar 30 13:06 db_v2.sqlite3-shm
-rw-r--r--    1 root  wheel     41232 Mar 30 13:06 db_v2.sqlite3-wal
drwxr-xr-x    2 root  wheel       626 Mar 30 13:02 deps
drwxr-xr-x    2 root  wheel         2 Mar 30 12:49 examples
-rwxr-xr-x    2 root  wheel   5524464 Mar 30 13:02 hbbr
-rw-r--r--    1 root  wheel      2111 Mar 30 13:02 hbbr.d
-rwxr-xr-x    2 root  wheel  10006456 Mar 30 13:01 hbbs
-rw-r--r--    1 root  wheel      2057 Mar 30 13:02 hbbs.d
-rw-r--r--    1 root  wheel        88 Mar 30 13:06 id_ed25519
-rw-r--r--    1 root  wheel        44 Mar 30 13:06 id_ed25519.pub
drwxr-xr-x    2 root  wheel         2 Mar 30 12:49 incremental
-rw-r--r--    1 root  wheel      2019 Mar 30 13:02 libhbbs.d
-rw-r--r--    2 root  wheel   6886660 Mar 30 13:01 libhbbs.rlib
-rwxr-xr-x    2 root  wheel    738488 Mar 30 13:01 rustdesk-utils
-rw-r--r--    1 root  wheel      2068 Mar 30 13:02 rustdesk-utils.d

@madpilot78 can you share your core dump output, so we can compare them?

@madpilot78
Contributor

madpilot78 commented Mar 30, 2023

@n-connect

with a debug build I get the backtrace at the tail of this comment.

An interesting line is:

frame #7: 0x0000000001359fc8 hbbr`hbbr::main::hf854450b5689821e at hbbr.rs:20:19

that file also has a WriteMode::Async logger, so I guess both hbbr.rs and main.rs need to be patched. Going to try that.

(In fact, I misunderstood the patch from #209 on the first try.)

* thread #1, name = 'hbbr', stop reason = signal SIGSEGV
  * frame #0: 0x0000000801c1139c libc.so.7`___lldb_unnamed_symbol5309 + 684
    frame #1: 0x0000000801c51f75 libc.so.7`___lldb_unnamed_symbol5929 + 21
    frame #2: 0x0000000801c0445c libc.so.7`___lldb_unnamed_symbol5275 + 572
    frame #3: 0x0000000001942c0e hbbr`std::thread::local::os::Key$LT$T$GT$::get::h22821e11e8d35e83 + 126
    frame #4: 0x0000000001941b32 hbbr`std::sys::unix::stack_overflow::imp::signal_handler::h315b13bc04c662f6 (.llvm.1203363881124934408) + 34
    frame #5: 0x0000000801a6058e libthr.so.3`___lldb_unnamed_symbol672 + 222
    frame #6: 0x0000000801a5fb3f libthr.so.3`___lldb_unnamed_symbol653 + 319
    frame #7: 0x00007ffffffff8a3 [vdso]
    frame #8: 0x0000000801c332f9 libc.so.7`___lldb_unnamed_symbol5725 + 57
    frame #9: 0x0000000801c1005d libc.so.7`___lldb_unnamed_symbol5299 + 141
    frame #10: 0x0000000801c39c2d libc.so.7`___lldb_unnamed_symbol5781 + 381
    frame #11: 0x0000000801c524f3 libc.so.7`___lldb_unnamed_symbol5934 + 179
    frame #12: 0x0000000801c5242a libc.so.7`___lldb_unnamed_symbol5933 + 42
    frame #13: 0x0000000801c5537a libc.so.7`___lldb_unnamed_symbol5954 + 506
    frame #14: 0x0000000801c0434b libc.so.7`___lldb_unnamed_symbol5275 + 299
    frame #15: 0x0000000801c56e81 libc.so.7`strdup + 33
    frame #16: 0x0000000801a579a1 libthr.so.3`pthread_setname_np + 33
    frame #17: 0x000000000147e96f hbbr`std::thread::Builder::spawn_unchecked_::_$u7b$$u7b$closure$u7d$$u7d$::h2127289236bd20ad at mod.rs:546:17
    frame #18: 0x000000000144d49f hbbr`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hfaa00249cf9a3263((null)=0x0000000802219100, (null)=<unavailable>) at function.rs:250:5
    frame #19: 0x000000000193f153 hbbr`std::sys::unix::thread::Thread::new::thread_start::ha074350b6869db1b + 35
    frame #20: 0x0000000801a5683a libthr.so.3`___lldb_unnamed_symbol556 + 314
  thread #2, name = 'hbbr', stop reason = signal SIGSEGV
    frame #0: 0x000000000190208a hbbr`clap::app::validator::Validator::validate_arg_values::hf196ee8292556eec(self=0x00007fffffffbfa8, arg=0x000000080224d1e8, ma=0x0000000802248198, matcher=0x00007fffffffc1c8) at validator.rs:145:33
    frame #1: 0x00000000019053b3 hbbr`clap::app::validator::Validator::validate_matched_args::he86955417516a4f5(self=0x00007fffffffbfa8, matcher=0x00007fffffffc1c8) at validator.rs:293:17
    frame #2: 0x00000000019005b2 hbbr`clap::app::validator::Validator::validate::h9284596abcf1a23e(self=0x00007fffffffbfa8, needs_val_of=ParseResult @ 0x00007fffffffbfb0, subcmd_name=Option<alloc::string::String> @ 0x00007fffffffbfd0, matcher=0x00007fffffffc1c8) at validator.rs:80:9
    frame #3: 0x00000000018bfced hbbr`clap::app::parser::Parser::get_matches_with::h225e5fc1d5949979(self=0x00007fffffffc7f0, matcher=0x00007fffffffc1c8, it=0x00007fffffffc328) at parser.rs:1241:9
    frame #4: 0x00000000018cef8c hbbr`clap::app::App::get_matches_from_safe_borrow::h50578f4e83fe4cf5(self=0x00007fffffffc7f0, itr=0x00007fffffffca48) at mod.rs:1642:25
    frame #5: 0x00000000018ce47c hbbr`clap::app::App::get_matches_from::ha7c3754388dd20b1(self=App @ 0x00007fffffffc7f0, itr=0x00007fffffffca48) at mod.rs:1522:9
    frame #6: 0x00000000018ce3d6 hbbr`clap::app::App::get_matches::h99f633f6fe8febc0(self=<unavailable>) at mod.rs:1463:9
    frame #7: 0x0000000001359fc8 hbbr`hbbr::main::hf854450b5689821e at hbbr.rs:20:19
    frame #8: 0x000000000133421b hbbr`core::ops::function::FnOnce::call_once::h7b45e4305266c56c((null)=(hbbr`hbbr::main::hf854450b5689821e at hbbr.rs:9), (null)=<unavailable>) at function.rs:250:5
    frame #9: 0x0000000001387e2e hbbr`std::sys_common::backtrace::__rust_begin_short_backtrace::h28fb42aa52691659(f=(hbbr`hbbr::main::hf854450b5689821e at hbbr.rs:9)) at backtrace.rs:121:18
    frame #10: 0x000000000132ae61 hbbr`std::rt::lang_start::_$u7b$$u7b$closure$u7d$$u7d$::h75ca4ff683de82a6 at rt.rs:166:18
    frame #11: 0x000000000194f4d4 hbbr`std::rt::lang_start_internal::hdf4d11d23112f297 + 36
    frame #12: 0x000000000132ae3a hbbr`std::rt::lang_start::h09757c5d75092bf8(main=(hbbr`hbbr::main::hf854450b5689821e at hbbr.rs:9), argc=3, argv=0x00007fffffffeb20, sigpipe='\0') at rt.rs:165:17
    frame #13: 0x000000000135a75e hbbr`main + 30
    frame #14: 0x000000000132222d hbbr`_start(ap=<unavailable>, cleanup=<unavailable>) at crt1_c.c:75:7
  thread #3, name = 'flexi_logger-async_', stop reason = signal SIGSEGV
    frame #0: 0x0000000801bc55ba libc.so.7`__sys__umtx_op + 10
    frame #1: 0x000000000196c6c3 hbbr`std::sys::unix::futex::futex_wait::hc46d1432feaa0397 + 147
    frame #2: 0x00000000019636f0 hbbr`std::thread::park::hafb32f112bd40e56 + 64
    frame #3: 0x00000000014773bc hbbr`crossbeam_channel::context::Context::wait_until::h1b2944a96ede45ba(self=0x00007fffdfffd5b8, deadline=Option<std::time::Instant> @ 0x00007fffdfffd3c8) at context.rs:177:17
    frame #4: 0x0000000001430cd6 hbbr`crossbeam_channel::flavors::list::Channel$LT$T$GT$::recv::_$u7b$$u7b$closure$u7d$$u7d$::h8a5fbbbf1e021a28(cx=0x00007fffdfffd5b8) at list.rs:479:27
    frame #5: 0x0000000001477d3e hbbr`crossbeam_channel::context::Context::with::_$u7b$$u7b$closure$u7d$$u7d$::h4c7a246884d9f35d(cx=0x00007fffdfffd5b8) at context.rs:52:13
    frame #6: 0x0000000001478073 hbbr`crossbeam_channel::context::Context::with::_$u7b$$u7b$closure$u7d$$u7d$::h814c6bbbdebe5591(cell=0x0000000802a00028) at context.rs:60:31
    frame #7: 0x000000000144bffd hbbr`std::thread::local::LocalKey$LT$T$GT$::try_with::ha1196f65079c801b(self=0x00000000019984f0, f={closure_env#1}<crossbeam_channel::flavors::list::{impl#3}::recv::{closure_env#1}<alloc::vec::Vec<u8, alloc::alloc::Global>>, ()> @ 0x00007fffdfffd650) at local.rs:446:16
    frame #8: 0x0000000001477532 hbbr`crossbeam_channel::context::Context::with::h5334a797a545ae53(f=<unavailable>) at context.rs:55:9
    frame #9: 0x0000000001430bda hbbr`crossbeam_channel::flavors::list::Channel$LT$T$GT$::recv::hb69f83ba77a8cc74(self=0x0000000802227000, deadline=Option<std::time::Instant> @ 0x00007fffdfffd758) at list.rs:469:13
    frame #10: 0x000000000142eb9e hbbr`crossbeam_channel::channel::Receiver$LT$T$GT$::recv::h97378b1d3a0ccf2b(self=0x00007fffdfffda38) at channel.rs:815:43
    frame #11: 0x0000000001457850 hbbr`flexi_logger::threads::start_async_stdwriter::_$u7b$$u7b$closure$u7d$$u7d$::h213bf756d644f905 at threads.rs:147:27
    frame #12: 0x0000000001449210 hbbr`std::sys_common::backtrace::__rust_begin_short_backtrace::h96ca771e57e57299(f=<unavailable>) at backtrace.rs:121:18
    frame #13: 0x0000000001480361 hbbr`std::thread::Builder::spawn_unchecked_::_$u7b$$u7b$closure$u7d$$u7d$::_$u7b$$u7b$closure$u7d$$u7d$::hdb31f2aa48b57853 at mod.rs:558:17
    frame #14: 0x0000000001447661 hbbr`_$LT$core..panic..unwind_safe..AssertUnwindSafe$LT$F$GT$$u20$as$u20$core..ops..function..FnOnce$LT$$LP$$RP$$GT$$GT$::call_once::h7163747587cd49a4(self=<unavailable>, _args=<unavailable>) at unwind_safe.rs:271:9
    frame #15: 0x000000000145de85 hbbr`std::panicking::try::do_call::hbdccab12e2b21bfb(data="\U00000001") at panicking.rs:483:40
    frame #16: 0x000000000146180b hbbr`__rust_try + 27
    frame #17: 0x000000000145d3d0 hbbr`std::panicking::try::h0a25885607fe16e0(f=<unavailable>) at panicking.rs:447:19
    frame #18: 0x000000000145d2e9 hbbr`std::panic::catch_unwind::hc7c13c4db9e69301(f=<unavailable>) at panic.rs:140:14
    frame #19: 0x000000000147ef5e hbbr`std::thread::Builder::spawn_unchecked_::_$u7b$$u7b$closure$u7d$$u7d$::h6868fc6df7f9e2da at mod.rs:557:30
    frame #20: 0x000000000144d44f hbbr`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::h90d6fde0cfa59ab4((null)=0x000000080221f000, (null)=<unavailable>) at function.rs:250:5
    frame #21: 0x000000000193f153 hbbr`std::sys::unix::thread::Thread::new::thread_start::ha074350b6869db1b + 35
    frame #22: 0x0000000801a5683a libthr.so.3`___lldb_unnamed_symbol556 + 314

@madpilot78
Contributor

I confirm that changing .write_mode(WriteMode::Async) to .write_mode(WriteMode::Direct) in both src/main.rs and src/hbbr.rs fixes the issue for me.

Should I send it as a pull request?
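
For reference, a minimal sketch of the change being discussed - not rustdesk-server's exact logger setup, just the flexi_logger 0.22 builder pattern with the write mode swapped (the real calls live in src/main.rs and src/hbbr.rs):

use flexi_logger::{Logger, WriteMode};

fn main() {
    // Hedged illustration: a bare flexi_logger initialization.
    // WriteMode::Async spawns the background "flexi_logger-async_" writer
    // thread seen in the backtraces above; WriteMode::Direct writes
    // synchronously and avoids that extra thread entirely.
    let _logger = Logger::try_with_str("info")
        .expect("invalid log spec")
        .write_mode(WriteMode::Direct) // was WriteMode::Async
        .start()
        .expect("failed to start logger");

    log::info!("logger running with Direct write mode");
    // keep _logger alive for the lifetime of the program
}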

@mickeyreg

IMHO: Hard to say :(

WriteMode::Async works ok on Linux and also works ok in bash or screen on FreeBSD.

WriteMode::Direct works fine in any shell on FreeBSD; I didn't test it on Linux or other systems.

I don't know what the main difference between WriteMode::Async and WriteMode::Direct is.

Maybe something should be changed in the flexi_logger library (https://docs.rs/flexi_logger/latest/flexi_logger/), not in rustdesk-server?

I think that if you are creating a port for FreeBSD you can add the patch to change this line before compiling. It is rather a workaround than a solution.

@madpilot78
Contributor

@mickeyreg I'm already testing this as a local patch in the port. I find it easier to compile and install software in FreeBSD leveraging ports anyway, even for personal use, than building them by hand.

I have no development experience in Rust; I actually don't know the language. I can barely understand some of it because it is very similar to C/C++, but I would suggest using an #ifdef __FreeBSD__ equivalent.
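
In Rust the usual analogue of that #ifdef is a compile-time cfg check on target_os. A hedged sketch of the idea (an illustration of the approach only, not the actual change in #233):

// Select the flexi_logger write mode per platform at compile time.
// Assumes the builder setup sketched earlier in this thread, with the
// "async" feature enabled as in the project's Cargo.toml.
fn preferred_write_mode() -> flexi_logger::WriteMode {
    if cfg!(target_os = "freebsd") {
        // Direct writing avoids the extra logger thread that crashes here.
        flexi_logger::WriteMode::Direct
    } else {
        // Other platforms keep the async background writer.
        flexi_logger::WriteMode::Async
    }
}

fn main() {
    // Just demonstrate that the selection compiles and runs.
    let _mode = preferred_write_mode();
    println!("write mode selected for this platform");
}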

@n-connect
Contributor Author

@madpilot78, @mickeyreg
Okay, so on your FreeBSD machines you have a consistent core backtrace, with 'hbbr.rs' listed by 'bt'. It is missing from mine completely. I have 13.1-p3. Would you try my fresh build and send back the backtrace from that?

Here's a fresh execution with /bin/tcsh as the user's shell:

root@srv:/tmp/rustdesk-server/target/x86_64-unknown-freebsd/release # ./hbbr
Segmentation fault (core dumped)
root@srv:/tmp/rustdesk-server/target/x86_64-unknown-freebsd/release # ls -la
total 25671
drwxr-xr-x    7 root  wheel        22 Mar 30 17:29 .
drwxr-xr-x    3 root  wheel         4 Mar 30 12:49 ..
-rw-r--r--    1 root  wheel         0 Mar 30 12:49 .cargo-lock
drwxr-xr-x  251 root  wheel       251 Mar 30 12:49 .fingerprint
drwxr-xr-x   42 root  wheel        42 Mar 30 12:49 build
-rw-r--r--    1 root  wheel     24576 Mar 30 13:06 db_v2.sqlite3
-rw-r--r--    1 root  wheel     32768 Mar 30 13:06 db_v2.sqlite3-shm
-rw-r--r--    1 root  wheel     41232 Mar 30 13:06 db_v2.sqlite3-wal
drwxr-xr-x    2 root  wheel       626 Mar 30 13:02 deps
drwxr-xr-x    2 root  wheel         2 Mar 30 12:49 examples
-rwxr-xr-x    2 root  wheel   5524464 Mar 30 13:02 hbbr
-rw-------    1 root  wheel  20246528 Mar 30 17:29 hbbr.core
-rw-r--r--    1 root  wheel      2111 Mar 30 13:02 hbbr.d
-rwxr-xr-x    2 root  wheel  10006456 Mar 30 13:01 hbbs
-rw-r--r--    1 root  wheel      2057 Mar 30 13:02 hbbs.d
-rw-r--r--    1 root  wheel        88 Mar 30 13:06 id_ed25519
-rw-r--r--    1 root  wheel        44 Mar 30 13:06 id_ed25519.pub
drwxr-xr-x    2 root  wheel         2 Mar 30 12:49 incremental
-rw-r--r--    1 root  wheel      2019 Mar 30 13:02 libhbbs.d
-rw-r--r--    2 root  wheel   6886660 Mar 30 13:01 libhbbs.rlib
-rwxr-xr-x    2 root  wheel    738488 Mar 30 13:01 rustdesk-utils
-rw-r--r--    1 root  wheel      2068 Mar 30 13:02 rustdesk-utils.d
root@srv:/tmp/rustdesk-server/target/x86_64-unknown-freebsd/release # lldb --core ./hbbr.core ./hbbr
(lldb) target create "./hbbr" --core "./hbbr.core"
Core file '/tmp/rustdesk-server/target/x86_64-unknown-freebsd/release/hbbr.core' (x86_64) was loaded.
(lldb) bt all
* thread #1, name = 'hbbr', stop reason = signal SIGSEGV
  * frame #0: 0x00000008015f139c libc.so.7`___lldb_unnamed_symbol5309 + 684
    frame #1: 0x0000000801631f75 libc.so.7`___lldb_unnamed_symbol5929 + 21
    frame #2: 0x00000008015e445c libc.so.7`___lldb_unnamed_symbol5275 + 572
    frame #3: 0x000000000136977e hbbr`std::thread::local::os::Key$LT$T$GT$::get::h62ef9e223911d2f3 + 126
    frame #4: 0x0000000001364e82 hbbr`std::sys::unix::stack_overflow::imp::signal_handler::h651a8d1403926099 (.llvm.9484877940066933952) + 34
    frame #5: 0x000000080144058e libthr.so.3`___lldb_unnamed_symbol672 + 222
    frame #6: 0x000000080143fb3f libthr.so.3`___lldb_unnamed_symbol653 + 319
    frame #7: 0x00007ffffffff2d3 [vdso]
    frame #8: 0x000000080143f634 libthr.so.3`___lldb_unnamed_symbol649 + 36
    frame #9: 0x000000080143c400 libthr.so.3`___lldb_unnamed_symbol615 + 512
    frame #10: 0x00000008016136d1 libc.so.7`___lldb_unnamed_symbol5726 + 977
    frame #11: 0x00000008016132f9 libc.so.7`___lldb_unnamed_symbol5725 + 57
    frame #12: 0x00000008015f005d libc.so.7`___lldb_unnamed_symbol5299 + 141
    frame #13: 0x0000000801619c2d libc.so.7`___lldb_unnamed_symbol5781 + 381
    frame #14: 0x00000008016324f3 libc.so.7`___lldb_unnamed_symbol5934 + 179
    frame #15: 0x000000080163242a libc.so.7`___lldb_unnamed_symbol5933 + 42
    frame #16: 0x000000080163537a libc.so.7`___lldb_unnamed_symbol5954 + 506
    frame #17: 0x00000008015e434b libc.so.7`___lldb_unnamed_symbol5275 + 299
    frame #18: 0x0000000801636e81 libc.so.7`strdup + 33
    frame #19: 0x00000008014379a1 libthr.so.3`pthread_setname_np + 33
    frame #20: 0x0000000001227896 hbbr`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hfb0e44d528ab3719 + 54
    frame #21: 0x0000000001368853 hbbr`std::sys::unix::thread::Thread::new::thread_start::hb1a6085c22a021af + 35
    frame #22: 0x000000080143683a libthr.so.3`___lldb_unnamed_symbol556 + 314
  thread #2, name = 'hbbr', stop reason = signal SIGSEGV
    frame #0: 0x00000008015a637a libc.so.7`__sys_fstat + 10
    frame #1: 0x000000080159b9bb libc.so.7`___lldb_unnamed_symbol4943 + 315
    frame #2: 0x000000080159a925 libc.so.7`___lldb_unnamed_symbol4936 + 117
    frame #3: 0x000000080159af23 libc.so.7`localtime_r + 51
    frame #4: 0x000000000125363b hbbr`chrono::sys::inner::time_to_local_tm::h2f038c03063ecdf8 + 59
    frame #5: 0x0000000001253948 hbbr`chrono::offset::local::Local::now::h6ac9f9d328a549ee + 88
    frame #6: 0x0000000001213e46 hbbr`std::sys_common::once::futex::Once::call::hc6e30c8ca881413f + 470
    frame #7: 0x0000000001232c82 hbbr`flexi_logger::logger::Logger::build::h29bbb585b5b28a05 + 2450
    frame #8: 0x00000000012321f1 hbbr`flexi_logger::logger::Logger::start::haf9785373284d4ce + 49
    frame #9: 0x00000000011c8fea hbbr`hbbr::main::h9a752489a163aaf3 + 586
    frame #10: 0x00000000011b4e63 hbbr`std::sys_common::backtrace::__rust_begin_short_backtrace::h63efb0e523e9e16b + 3
    frame #11: 0x00000000011d83cd hbbr`std::rt::lang_start::_$u7b$$u7b$closure$u7d$$u7d$::h38d472ad09e2f28b (.llvm.2447033222116085739) + 13
    frame #12: 0x0000000001373224 hbbr`std::rt::lang_start_internal::h1ca1ec0ef460ba41 + 36
    frame #13: 0x00000000011c9905 hbbr`main + 37
    frame #14: 0x0000000001193dad hbbr`_start(ap=<unavailable>, cleanup=<unavailable>) at crt1_c.c:75:7
  thread #3, name = 'flexi_logger-async_', stop reason = signal SIGSEGV
    frame #0: 0x00000008015a55ba libc.so.7`__sys__umtx_op + 10
    frame #1: 0x000000000138efd3 hbbr`std::sys::unix::futex::futex_wait::h3326056c5a53a58e + 147
    frame #2: 0x00000000013841a0 hbbr`std::thread::park::h92a472e3eccac287 + 64
    frame #3: 0x0000000001229f47 hbbr`crossbeam_channel::context::Context::with::_$u7b$$u7b$closure$u7d$$u7d$::hea161fcaa8ff0dc7 + 1399
    frame #4: 0x000000000122a834 hbbr`crossbeam_channel::flavors::list::Channel$LT$T$GT$::recv::h190542a220c067ef + 740
    frame #5: 0x000000000121d674 hbbr`crossbeam_channel::channel::Receiver$LT$T$GT$::recv::h9cc7734b17d489a0 + 84
    frame #6: 0x0000000001221612 hbbr`std::sys_common::backtrace::__rust_begin_short_backtrace::hb6a7bb30795a9ad3 + 98
    frame #7: 0x0000000001226f7e hbbr`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::h2b00da5ae4caf004 + 190
    frame #8: 0x0000000001368853 hbbr`std::sys::unix::thread::Thread::new::thread_start::hb1a6085c22a021af + 35
    frame #9: 0x000000080143683a libthr.so.3`___lldb_unnamed_symbol556 + 314
(lldb)

@mickeyreg
Copy link

@n-connect : You have the same error in the backtrace.

    frame #9: 0x00000000011c8fea hbbr`hbbr::main::h9a752489a163aaf3 + 586

and

thread #3, name = 'flexi_logger-async_', stop reason = signal SIGSEGV

Make a debug build (cargo build --target=x86_64-unknown-freebsd) and you will see the name of the source file.

@rustdesk
Owner

rustdesk commented Mar 30, 2023

If flexi_logger is the problem, you can disable flexi_logger for bsd.

@n-connect
Contributor Author

That's the GitHub Actions auto-build on my side. Yep, it has it at the beginning: thread #3, name = 'flexi_logger-async_', stop reason = signal SIGSEGV - I stand corrected, thanks. I only did release builds from this year, when the local-ip-address package started to support FreeBSD.

I'm still interested in why it is happening. Another Rust project also using flexi_logger, notify_push, has none of these issues and needs no workarounds. It does not use the .write_mode(WriteMode::Async) statement, as it logs directly to syslog, already handles PIDs, and spawns a child process as the expected service user. Let's say it's "completely" daemonized - a good direction for rustdesk-server too. :)
Also, from the flexi_logger docs it is clear the log level setting is not independent if you are running multiple Rust binaries with flexi_logger on the same box. That could be connected too.

@rustdesk it is not about completely removing flexi_logger. @mickeyreg & @madpilot78 changed and tested it: if we change the code from async to direct write, it works.
Before we do a PR for that, it would be good to know whether it will also work for the Linux builds, OR to have a kind of exception handling and set it only for the [Free]BSD OS family - a high-level "is it FreeBSD or not" selection would help. The which_freebsd detection is cross-compilation unfriendly.

root@srv:/tmp/amd64fb_1.1.7-3 # lldb --core ./hbbr.core ./hbbr
(lldb) target create "./hbbr" --core "./hbbr.core"
Core file '/tmp/amd64fb_1.1.7-3/hbbr.core' (x86_64) was loaded.
(lldb) bt all
* thread #1, name = 'hbbr', stop reason = signal SIGSEGV
  * frame #0: 0x000000080177b39c libc.so.7`___lldb_unnamed_symbol5309 + 684
    frame #1: 0x00000008017bbf75 libc.so.7`___lldb_unnamed_symbol5929 + 21
    frame #2: 0x000000080176e45c libc.so.7`___lldb_unnamed_symbol5275 + 572
    frame #3: 0x0000000001259044 hbbr`std::sys::unix::stack_overflow::imp::signal_handler::hfe93d583076f5b5d [inlined] alloc::alloc::alloc::h779b246fd6cc825d at alloc.rs:95:14
    frame #4: 0x0000000001259034 hbbr`std::sys::unix::stack_overflow::imp::signal_handler::hfe93d583076f5b5d [inlined] alloc::alloc::Global::alloc_impl::hb72b34f6fbd74e74 at alloc.rs:177:73
    frame #5: 0x0000000001259034 hbbr`std::sys::unix::stack_overflow::imp::signal_handler::hfe93d583076f5b5d [inlined] _$LT$alloc..alloc..Global$u20$as$u20$core..alloc..Allocator$GT$::allocate::h944ffa281a4cb960 at alloc.rs:237:9
    frame #6: 0x0000000001259034 hbbr`std::sys::unix::stack_overflow::imp::signal_handler::hfe93d583076f5b5d [inlined] alloc::alloc::exchange_malloc::habba3743a1671068 at alloc.rs:326:11
    frame #7: 0x0000000001259034 hbbr`std::sys::unix::stack_overflow::imp::signal_handler::hfe93d583076f5b5d [inlined] std::thread::local::os::Key$LT$T$GT$::try_initialize::h18c3cf33a7d299e6 at local.rs:1111:42
    frame #8: 0x000000000125900c hbbr`std::sys::unix::stack_overflow::imp::signal_handler::hfe93d583076f5b5d [inlined] std::thread::local::os::Key$LT$T$GT$::get::h3058fe02d9cf9f69 at local.rs:1093:22
    frame #9: 0x0000000001258fc4 hbbr`std::sys::unix::stack_overflow::imp::signal_handler::hfe93d583076f5b5d [inlined] std::sys_common::thread_info::THREAD_INFO::__getit::hdd3449f3518cff0b at local.rs:271:21
    frame #10: 0x0000000001258fc4 hbbr`std::sys::unix::stack_overflow::imp::signal_handler::hfe93d583076f5b5d [inlined] std::thread::local::LocalKey$LT$T$GT$::try_with::hc8caba423455aecd at local.rs:445:32
    frame #11: 0x0000000001258fc4 hbbr`std::sys::unix::stack_overflow::imp::signal_handler::hfe93d583076f5b5d [inlined] std::sys_common::thread_info::ThreadInfo::with::hd1f47f6ac3fe41bb at thread_info.rs:20:9
    frame #12: 0x0000000001258fc4 hbbr`std::sys::unix::stack_overflow::imp::signal_handler::hfe93d583076f5b5d [inlined] std::sys_common::thread_info::stack_guard::h11f7bb16119a7249 at thread_info.rs:38:5
    frame #13: 0x0000000001258fc4 hbbr`std::sys::unix::stack_overflow::imp::signal_handler::hfe93d583076f5b5d at stack_overflow.rs:83:21
    frame #14: 0x00000008015ca58e libthr.so.3`___lldb_unnamed_symbol672 + 222
    frame #15: 0x00000008015c9b3f libthr.so.3`___lldb_unnamed_symbol653 + 319
    frame #16: 0x00007ffffffff2d3 [vdso]
    frame #17: 0x00000008015c9634 libthr.so.3`___lldb_unnamed_symbol649 + 36
    frame #18: 0x00000008015c6400 libthr.so.3`___lldb_unnamed_symbol615 + 512
    frame #19: 0x000000080179d6d1 libc.so.7`___lldb_unnamed_symbol5726 + 977
    frame #20: 0x000000080179d2f9 libc.so.7`___lldb_unnamed_symbol5725 + 57
    frame #21: 0x000000080177a05d libc.so.7`___lldb_unnamed_symbol5299 + 141
    frame #22: 0x00000008017a3c2d libc.so.7`___lldb_unnamed_symbol5781 + 381
    frame #23: 0x00000008017bc4f3 libc.so.7`___lldb_unnamed_symbol5934 + 179
    frame #24: 0x00000008017bc42a libc.so.7`___lldb_unnamed_symbol5933 + 42
    frame #25: 0x00000008017bf37a libc.so.7`___lldb_unnamed_symbol5954 + 506
    frame #26: 0x000000080176e34b libc.so.7`___lldb_unnamed_symbol5275 + 299
    frame #27: 0x00000008017c0e81 libc.so.7`strdup + 33
    frame #28: 0x00000008015c19a1 libthr.so.3`pthread_setname_np + 33
    frame #29: 0x00000000010fe256 hbbr`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::h07a4b40807aee2c3 + 54
    frame #30: 0x0000000001259853 hbbr`std::sys::unix::thread::Thread::new::thread_start::he4c5a4fee59a14e3 [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::hc76ab82cd4abedae at boxed.rs:2000:9
    frame #31: 0x000000000125984d hbbr`std::sys::unix::thread::Thread::new::thread_start::he4c5a4fee59a14e3 [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::haeb7136c505f48b1 at boxed.rs:2000:9
    frame #32: 0x0000000001259846 hbbr`std::sys::unix::thread::Thread::new::thread_start::he4c5a4fee59a14e3 at thread.rs:108:17
    frame #33: 0x00000008015c083a libthr.so.3`___lldb_unnamed_symbol556 + 314
  thread #2, name = 'hbbr', stop reason = signal SIGSEGV
    frame #0: 0x000000080172596c libc.so.7`___lldb_unnamed_symbol4943 + 236
    frame #1: 0x000000080172698c libc.so.7`___lldb_unnamed_symbol4945 + 108
    frame #2: 0x000000080172634a libc.so.7`___lldb_unnamed_symbol4943 + 2762
    frame #3: 0x0000000801724925 libc.so.7`___lldb_unnamed_symbol4936 + 117
    frame #4: 0x0000000801724f23 libc.so.7`localtime_r + 51
    frame #5: 0x0000000001137bdb hbbr`chrono::sys::inner::time_to_local_tm::h59180c88741aa1dd + 59
    frame #6: 0x0000000001137ee8 hbbr`chrono::offset::local::Local::now::h05217ccf1416f389 + 88
    frame #7: 0x000000000106ea76 hbbr`std::sys_common::once::futex::Once::call::ha2a4a00c15cdeef0 + 470
    frame #8: 0x0000000001109fe2 hbbr`flexi_logger::logger::Logger::build::h9a21288c93bf8a8b + 2450
    frame #9: 0x0000000001109551 hbbr`flexi_logger::logger::Logger::start::hf86046e155947b62 + 49
    frame #10: 0x00000000010b0d9a hbbr`hbbr::main::h0806a2dc36dac0e9 + 586
    frame #11: 0x000000000109ccb3 hbbr`std::sys_common::backtrace::__rust_begin_short_backtrace::hfbfa0b63fe68f4fe + 3
    frame #12: 0x00000000010bfedd hbbr`std::rt::lang_start::_$u7b$$u7b$closure$u7d$$u7d$::h9d8d1bc9e8f17c6c (.llvm.8970259907653779150) + 13
    frame #13: 0x000000000124a09f hbbr`std::rt::lang_start_internal::hdc807b46bedf0645 [inlined] core::ops::function::impls::_$LT$impl$u20$core..ops..function..FnOnce$LT$A$GT$$u20$for$u20$$RF$F$GT$::call_once::hac8490524262e996 at function.rs:606:13
    frame #14: 0x000000000124a09c hbbr`std::rt::lang_start_internal::hdc807b46bedf0645 [inlined] std::panicking::try::do_call::h497984f8dc6a2d6c at panicking.rs:483:40
    frame #15: 0x000000000124a09c hbbr`std::rt::lang_start_internal::hdc807b46bedf0645 [inlined] std::panicking::try::h73117b7b0393afe9 at panicking.rs:447:19
    frame #16: 0x000000000124a09c hbbr`std::rt::lang_start_internal::hdc807b46bedf0645 [inlined] std::panic::catch_unwind::h03680c139fa6c59e at panic.rs:137:14
    frame #17: 0x000000000124a09c hbbr`std::rt::lang_start_internal::hdc807b46bedf0645 [inlined] std::rt::lang_start_internal::_$u7b$$u7b$closure$u7d$$u7d$::h1e00342f2d8f5662 at rt.rs:148:48
    frame #18: 0x000000000124a09c hbbr`std::rt::lang_start_internal::hdc807b46bedf0645 [inlined] std::panicking::try::do_call::h1612eace755b9ff5 at panicking.rs:483:40
    frame #19: 0x000000000124a09c hbbr`std::rt::lang_start_internal::hdc807b46bedf0645 [inlined] std::panicking::try::hc62aa9261e6b2054 at panicking.rs:447:19
    frame #20: 0x000000000124a09c hbbr`std::rt::lang_start_internal::hdc807b46bedf0645 [inlined] std::panic::catch_unwind::hb800dc004171c43d at panic.rs:137:14
    frame #21: 0x000000000124a09c hbbr`std::rt::lang_start_internal::hdc807b46bedf0645 at rt.rs:148:20
    frame #22: 0x00000000010b16e5 hbbr`main + 37
    frame #23: 0x000000000107e162 hbbr`_start(ap=<unavailable>, cleanup=<unavailable>) at crt1.c:76:7
  thread #3, name = 'flexi_logger-async_', stop reason = signal SIGSEGV
    frame #0: 0x000000080172f5ba libc.so.7`__sys__umtx_op + 10
    frame #1: 0x000000000124ab88 hbbr`std::thread::park::h5b98aa38d5bbed74 [inlined] std::sys::unix::futex::futex_wait::hfc697dc16a6320d6 at futex.rs:52:21
    frame #2: 0x000000000124ab6c hbbr`std::thread::park::h5b98aa38d5bbed74 [inlined] std::sys_common::thread_parker::futex::Parker::park::h132c22bf2b9a3b3d at futex.rs:52:13
    frame #3: 0x000000000124ab60 hbbr`std::thread::park::h5b98aa38d5bbed74 at mod.rs:999:9
    frame #4: 0x00000000011012a7 hbbr`crossbeam_channel::context::Context::with::_$u7b$$u7b$closure$u7d$$u7d$::h701ba1b755403589 + 1399
    frame #5: 0x0000000001101b94 hbbr`crossbeam_channel::flavors::list::Channel$LT$T$GT$::recv::h906528f07e5e1ceb + 740
    frame #6: 0x00000000010f4c44 hbbr`crossbeam_channel::channel::Receiver$LT$T$GT$::recv::heddf1fb6dce4accd + 84
    frame #7: 0x00000000010f83f2 hbbr`std::sys_common::backtrace::__rust_begin_short_backtrace::h325a4252c18dee9b + 98
    frame #8: 0x00000000010fe83e hbbr`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::h1b501c6e9ac935b2 + 190
    frame #9: 0x0000000001259853 hbbr`std::sys::unix::thread::Thread::new::thread_start::he4c5a4fee59a14e3 [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::hc76ab82cd4abedae at boxed.rs:2000:9
    frame #10: 0x000000000125984d hbbr`std::sys::unix::thread::Thread::new::thread_start::he4c5a4fee59a14e3 [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::haeb7136c505f48b1 at boxed.rs:2000:9
    frame #11: 0x0000000001259846 hbbr`std::sys::unix::thread::Thread::new::thread_start::he4c5a4fee59a14e3 at thread.rs:108:17
    frame #12: 0x00000008015c083a libthr.so.3`___lldb_unnamed_symbol556 + 314
(lldb)

@paspo
Contributor

paspo commented Mar 30, 2023

It does not use the .write_mode(WriteMode::Async) statement, as it logs directly to syslog, already handles PIDs, and spawns a child process as the expected service user. Let's say it's "completely" daemonized - a good direction for rustdesk-server too. :)

IMHO, it'll be a good idea to directly log to syslog, at least on linux.

freebsd-git pushed a commit to freebsd/freebsd-ports that referenced this pull request Mar 30, 2023
RustDesk-server is a self hosted server for the RustDesk remote
desktop software.

WWW: https://rustdesk.com/server/

Patches obtained from/discussed in:

rustdesk/rustdesk-server#232
rustdesk/rustdesk-server#209
@madpilot78
Contributor

madpilot78 commented Mar 30, 2023

In the meanwhile, I have added rustdesk-server to the FreeBSD ports collection:

https://www.freshports.org/net/rustdesk-server/

I have included the patches discussed here for now and made slight modifications to the startup scripts. I configured it to log via syslog, and the port also installs syslog and newsyslog configuration files (users can obviously customize them later). Feel free to take what you like.

I also regenerated the lock file. The one in the repo looks outdated, and was confusing the ports cargo helpers.

Nothing is set in stone though, so if better patches are submitted here or to the port I will adopt them and submit them here also.

@n-connect
Contributor Author

@madpilot78 great, thx for taking care of that world; I kinda never started to use the ports - it would take too much time.

Check #233 - I made the change @mickeyreg found, along with a simple check to do it for FreeBSD only, so no other platforms are affected -> those will remain Async. I also took your rc.d script variable optimizations.

@rustdesk the FreeBSD core dump issue is fixed.

@MikaelUrankar

FWIW I have an old FreeBSD port (v1.1.5) that uses only this patch:

--- Cargo.toml.orig     2022-05-30 12:01:22 UTC
+++ Cargo.toml
@@ -42,7 +42,7 @@ http = "0.2"
 regex = "1.4"
 tower-http = { version = "0.2", features = ["fs", "trace", "cors"] }
 http = "0.2"
-flexi_logger = { version = "0.22", features = ["async", "use_chrono_for_offset"] }
+flexi_logger = { version = "0.22", features = ["async", "use_chrono_for_offset", "dont_minimize_extra_stacks"] }
 
 [build-dependencies]
 hbb_common = { path = "libs/hbb_common" }

@n-connect
Contributor Author

@MikaelUrankar

Thanks - do you know where the "dont_minimize_extra_stacks" option should be applied, or where/how the port system would put it? Or can you share the sources of the v1.1.5 port? I've made a workaround in #233, but it would be good to know if there's a better approach, or whether that actually pins down the root cause.
Thx!

(The 2nd thing I am curious about is how that v1.1.5 port worked on FreeBSD with the old local-ip-address dependency from 2022-05-22; the release that officially supported FreeBSD, v0.5.0, came on 2022-12-30 - but that's another story :) )

@MikaelUrankar

I've pushed my (very old) port here: https://github.com/MikaelUrankar/rustdesk-server-ports

I don't know what the problem is with the local-ip-address.

@madpilot78
Contributor

I don't know what the problem is with the local-ip-address.

@MikaelUrankar Looks like the lock file was not updated to account for the local-ip-address update. In my port I "fixed" it by regenerating the lock file and adding that as a local patch. I don't know how that worked with previous versions.

Regarding dont_minimize_extra_stacks, I don't know what that is. I will be busy with other matters now that I have committed the port (which at present is working fine), but I will follow this thread and if anything comes of it I can update the port.

@MikaelUrankar

dont_minimize_extra_stacks is explained here https://github.com/emabee/flexi_logger#dont_minimize_extra_stacks

@n-connect
Contributor Author

Thx for the pieces. The "dont_minimize_extra_stacks" feature seems connected to our core dump, based on this thread: 'flexi_logger-flusher' has overflowed its stack #95. The poster had issues with WriteMode::BufferAndFlush and moved on to Async.

I will try the "dont_minimize_extra_stacks" option here in Cargo.toml, to see if it makes a difference.
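
For context on why that feature could matter here (my reading of the flexi_logger docs linked above): without it, flexi_logger requests reduced stack sizes for its extra writer/flusher threads, and the backtraces above die in the stack-overflow signal handler right after pthread_setname_np on a freshly spawned thread. A standalone illustration of the mechanism with plain std - not flexi_logger's actual code - spawning a worker with a deliberately small stack:

use std::thread;

fn main() {
    // Spawn a named worker thread with a small explicit stack, the kind of
    // "minimized extra stack" a logging crate might request for its
    // background threads; dont_minimize_extra_stacks keeps the default size.
    let handle = thread::Builder::new()
        .name("tiny-stack-worker".into())
        .stack_size(128 * 1024) // 128 KiB instead of the platform default
        .spawn(|| {
            // Deep recursion or large stack frames here can overflow
            // a stack this small and trigger SIGSEGV.
            println!("worker running on a reduced stack");
        })
        .expect("failed to spawn worker thread");

    handle.join().expect("worker panicked");
}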

@n-connect
Contributor Author

I can confirm the "dont_minimize_extra_stacks" option in Cargo.toml does the trick: no code change to main.rs/hbbr.rs for "WriteMode::Direct" is necessary. The binaries run under csh without a core dump.

@mickeyreg @madpilot78 @paspo please do test it. If it works for you too, it makes my latest PR #233 partly unnecessary.

Cargo.toml from: flexi_logger = { version = "0.22", features = ["async", "use_chrono_for_offset"] }
to: flexi_logger = { version = "0.22", features = ["async", "use_chrono_for_offset", "dont_minimize_extra_stacks"] }

My test run with that:

root@srv:~ # echo $SHELL
/bin/csh
root@srv:~ # cd /tmp/rustdesk-server/target/x86_64-unknown-freebsd/release/
root@srv:/tmp/rustdesk-server/target/x86_64-unknown-freebsd/release # ls -la
total 23136
drwxr-xr-x    7 root  wheel        16 Mar 31 09:47 .
drwxr-xr-x    3 root  wheel         4 Mar 31 09:34 ..
-rw-r--r--    1 root  wheel         0 Mar 31 09:34 .cargo-lock
drwxr-xr-x  251 root  wheel       251 Mar 31 09:34 .fingerprint
drwxr-xr-x   42 root  wheel        42 Mar 31 09:34 build
drwxr-xr-x    2 root  wheel       626 Mar 31 09:47 deps
drwxr-xr-x    2 root  wheel         2 Mar 31 09:34 examples
-rwxr-xr-x    2 root  wheel   5524480 Mar 31 09:47 hbbr
-rw-r--r--    1 root  wheel      2111 Mar 31 09:47 hbbr.d
-rwxr-xr-x    2 root  wheel  10007192 Mar 31 09:47 hbbs
-rw-r--r--    1 root  wheel      2057 Mar 31 09:47 hbbs.d
drwxr-xr-x    2 root  wheel         2 Mar 31 09:34 incremental
-rw-r--r--    1 root  wheel      2019 Mar 31 09:47 libhbbs.d
-rw-r--r--    2 root  wheel   6888894 Mar 31 09:46 libhbbs.rlib
-rwxr-xr-x    2 root  wheel    738504 Mar 31 09:47 rustdesk-utils
-rw-r--r--    1 root  wheel      2068 Mar 31 09:47 rustdesk-utils.d
root@srv:/tmp/rustdesk-server/target/x86_64-unknown-freebsd/release # ./hbbs
[2023-03-31 09:49:05.671704 +02:00] INFO [src/common.rs:143] Private/public key written to id_ed25519/id_ed25519.pub
[2023-03-31 09:49:05.671763 +02:00] INFO [src/peer.rs:84] DB_URL=./db_v2.sqlite3
[2023-03-31 09:49:05.770861 +02:00] INFO [src/rendezvous_server.rs:98] serial=0
[2023-03-31 09:49:05.770887 +02:00] INFO [src/common.rs:46] rendezvous-servers=[]
[2023-03-31 09:49:05.770891 +02:00] INFO [src/rendezvous_server.rs:100] Listening on tcp/udp :21116
[2023-03-31 09:49:05.770893 +02:00] INFO [src/rendezvous_server.rs:101] Listening on tcp :21115, extra port for NAT test
[2023-03-31 09:49:05.770895 +02:00] INFO [src/rendezvous_server.rs:102] Listening on websocket :21118
[2023-03-31 09:49:05.770917 +02:00] INFO [libs/hbb_common/src/udp.rs:35] Receive buf size of udp [::]:21116: Ok(42080)
[2023-03-31 09:49:05.770982 +02:00] INFO [libs/hbb_common/src/udp.rs:35] Receive buf size of udp 0.0.0.0:21116: Ok(42080)
Error: Address already in use (os error 48)
root@srv:/tmp/rustdesk-server/target/x86_64-unknown-freebsd/release # ./hbbr
[2023-03-31 09:49:14.866610 +02:00] INFO [src/relay_server.rs:60] #blacklist(blacklist.txt): 0
[2023-03-31 09:49:14.866638 +02:00] INFO [src/relay_server.rs:75] #blocklist(blocklist.txt): 0
[2023-03-31 09:49:14.866642 +02:00] INFO [src/relay_server.rs:81] Listening on tcp :21117
[2023-03-31 09:49:14.866644 +02:00] INFO [src/relay_server.rs:83] Listening on websocket :21119
[2023-03-31 09:49:14.866647 +02:00] INFO [src/relay_server.rs:86] Start
Error: Address already in use (os error 48)  

@madpilot78
Contributor

@n-connect Yes, using "dont_minimize_extra_stacks" seems to work. It's starting correctly, without crashing.

I have had no time to perform more thorough testing.

@n-connect
Contributor Author

@madpilot78 thx, you are on amd64 13.1-p7 if I remember right, correct?

@madpilot78
Contributor

@n-connect exactly.

@n-connect
Contributor Author

n-connect commented Mar 31, 2023

@mickeyreg @paspo can you test it too? I've uploaded an updated tar.gz with my build using dont_minimize_extra_stacks.

@n-connect
Contributor Author

While that's happening, the next interesting thing is finding out whether the GitHub Actions build will also work without a core dump. That build still produces a binary for FreeBSD 12.3, although the toolchain version was raised from 1.62 to 1.67.1 - the highest available in the U20.04 builder/runner VM.

@madpilot78
Contributor

Please note that FreeBSD 12.3 is reaching EoL today; the minimum supported version is 12.4, which will reach EoL at the end of the year.

@mickeyreg

@n-connect , copied from @madpilot78:

Yes, using "dont_minimize_extra_stacks" seems to work. It's starting correctly, without crashing.

I have had no time to perform more thorough testing.

;)

@n-connect
Contributor Author

n-connect commented Mar 31, 2023

Please note that FreeBSD 12.3 is reaching EoL today; the minimum supported version is 12.4, which will reach EoL at the end of the year.

Yep, you're right - and that's why I wrote this very PR. It's the repo owner who needs some convincing on EoL stuff, but he/she(?) finally agreed :). I want to know why the Rust binaries built by GitHub Actions are "compiled for" v12.3. This very PR #232 raised the Ubuntu VM version to be able to use a higher Rust and toolchain (U20.04 vs U22.04 is Rust 1.67.1 vs Rust 1.68 - see below, my builds with 1.67.1 are looking good).

Okay, here's the road so far (Supernatural series, anyone?):

  • local-ip-address got updated at the end of last year; I happened to hear about it -> raised PRs to change Cargo.toml so it can be used on FreeBSD
  • raised a PR to get it [auto]built here in the repo
  • everything seemed fine, except the autobuild core dumped -> raised a PR to update the GH Actions VM and raise the rust/toolchain level (this PR)
  • @mickeyreg raised the issue that his own builds core dump too
  • made a temp repo with my builds and did tests with mickey (and his build too) back and forth -> something with the different shells; screen/bash solves it, but the FreeBSD service start from boot/init fails
  • mickey found the Async vs Direct workaround
  • you, @madpilot78, joined the efforts & made the freshports version overnight (cool) with the logging changed globally to "Direct" -> which is perfectly fine at freshports, as that is BSD-only
  • raised another PR with a simple OS-detection-based selection for that piece of logging code, changing it for FreeBSD only so other platforms stay untouched (I guess we won't need it in the end) -> still open, plus I've asked @rustdesk to wait
  • because finally @MikaelUrankar brought the key info to our attention

Thanks everyone again.

Now we (I) want a good autobuild here on GitHub, as the original code is hosted here.
(Plus I want to modify the CI/CD code to make a tar.gz with the binaries and the rc.d scripts for FreeBSD too, maybe a pre/post-install script for easier application, but that's another story.)

So the GH Actions autobuild - after this very PR was merged - still gives this kind of binary:

[root@srv /tmp/amd64fb_1.1.7-3]# ls -la
total 28619
drwxr-xr-x   2 root  wheel         6 Mar 30 17:49 .
drwxr-x---  25 root  wheel        57 Mar 31 09:34 ..
-rwxr-xr-x   1 root  wheel   8794592 Mar 29 19:28 hbbr
-rw-------   1 root  wheel  20262912 Mar 30 17:49 hbbr.core
-rwxr-xr-x   1 root  wheel  12997240 Mar 29 19:28 hbbs
-rwxr-xr-x   1 root  wheel   4402800 Mar 29 19:28 rustdesk-utils
[root@srv /tmp/amd64fb_1.1.7-3]# file ./hbbs
./hbbs: ELF 64-bit LSB pie executable, x86-64, version 1 (FreeBSD), dynamically linked, interpreter /libexec/ld-elf.so.1, for FreeBSD 12.3, FreeBSD-style, with debug_info, not stripped
[root@srv /tmp/amd64fb_1.1.7-3]#

My manual builds with rust 1.67.1 give this:

[root@srv /tmp/rustdesk-server/target/x86_64-unknown-freebsd/release]# ls -la
total 23247
drwxr-xr-x    7 root  wheel        21 Mar 31 09:49 .
drwxr-xr-x    3 root  wheel         4 Mar 31 09:34 ..
-rw-r--r--    1 root  wheel         0 Mar 31 09:34 .cargo-lock
drwxr-xr-x  251 root  wheel       251 Mar 31 09:34 .fingerprint
drwxr-xr-x   42 root  wheel        42 Mar 31 09:34 build
-rw-r--r--    1 root  wheel     24576 Mar 31 09:49 db_v2.sqlite3
-rw-r--r--    1 root  wheel     32768 Mar 31 09:49 db_v2.sqlite3-shm
-rw-r--r--    1 root  wheel     41232 Mar 31 09:49 db_v2.sqlite3-wal
drwxr-xr-x    2 root  wheel       626 Mar 31 09:47 deps
drwxr-xr-x    2 root  wheel         2 Mar 31 09:34 examples
-rwxr-xr-x    2 root  wheel   5524480 Mar 31 09:47 hbbr
-rw-r--r--    1 root  wheel      2111 Mar 31 09:47 hbbr.d
-rwxr-xr-x    2 root  wheel  10007192 Mar 31 09:47 hbbs
-rw-r--r--    1 root  wheel      2057 Mar 31 09:47 hbbs.d
-rw-r--r--    1 root  wheel        88 Mar 31 09:49 id_ed25519
-rw-r--r--    1 root  wheel        44 Mar 31 09:49 id_ed25519.pub
drwxr-xr-x    2 root  wheel         2 Mar 31 09:34 incremental
-rw-r--r--    1 root  wheel      2019 Mar 31 09:47 libhbbs.d
-rw-r--r--    2 root  wheel   6888894 Mar 31 09:46 libhbbs.rlib
-rwxr-xr-x    2 root  wheel    738504 Mar 31 09:47 rustdesk-utils
-rw-r--r--    1 root  wheel      2068 Mar 31 09:47 rustdesk-utils.d
[root@srv /tmp/rustdesk-server/target/x86_64-unknown-freebsd/release]# file ./hbbs
./hbbs: ELF 64-bit LSB pie executable, x86-64, version 1 (FreeBSD), dynamically linked, interpreter /libexec/ld-elf.so.1, FreeBSD-style, with debug_info, not stripped

freebsd-git pushed a commit to freebsd/freebsd-ports that referenced this pull request Mar 31, 2023
Modify patch, needed to avoid crashes on startup, to improved one,
discussed in upstream PR.

Adding the "dont_minimize_extra_stacks" option to the flexi_logger
avoids the stack overflow causing the startup crashes.

Obtained from:	rustdesk/rustdesk-server#232 (comment)
@madpilot78
Contributor

I have actually updated the FreeBSD port to use @MikaelUrankar's patch, since it works fine.

@madpilot78
Contributor

@n-connect I admit that making the port took me some time. I had been working on it for a few weeks, but due to other things I was putting little time/effort into it; I just sprinted on it these days, also because I needed to use it myself.

And it's working like a charm, I must say.

I also plan to work on a port for the desktop client, but I suspect that will not happen anytime soon. It looks a little more complex.

n-connect added a commit to n-connect/rustdesk-server that referenced this pull request Apr 1, 2023
Flexi_logger options for async writemode rustdesk#232 (comment)
@n-connect
Contributor Author

I have actually updated the FreeBSD port to use @MikaelUrankar's patch, since it works fine.

@madpilot78 👍

In the meantime:

  • the actual build commands differ from cargo build --release --target=x86_64-unknown-freebsd. They are cross build --release --all-features --target=x86_64-unknown-freebsd, and the same cross crate is used even for the same arch as the runner: cross build --release --all-features --target=x86_64-unknown-linux-musl. Idk if they should be identical, cargo vs rustc build commands - it will turn out
  • previous open PR closed (too messy)
  • new PR merged, pushing the changes through here in the main repo

ericbsd pushed a commit to ghostbsd/ghostbsd-ports that referenced this pull request Apr 2, 2023
RustDesk-server is a self hosted server for the RustDesk remote
desktop software.

WWW: https://rustdesk.com/server/

Patches obtained from/discussed in:

rustdesk/rustdesk-server#232
rustdesk/rustdesk-server#209
ericbsd pushed a commit to ghostbsd/ghostbsd-ports that referenced this pull request Apr 2, 2023
Modify patch, needed to avoid crashes on startup, to improved one,
discussed in upstream PR.

Adding the "dont_minimize_extra_stacks" option to the flexi_logger
avoids the stack overflow causing the startup crashes.

Obtained from:	rustdesk/rustdesk-server#232 (comment)