
Segmentation fault on ls -l in Alpine Linux #153

Open
blasterspike opened this issue Mar 25, 2018 · 8 comments

@blasterspike

Hi,
I'm trying to build a Docker image based on Alpine Linux to use hubicfuse.
This is how I'm building the image:

FROM alpine:latest

RUN apk update && \
    apk upgrade && \
    apk add -t build-dependencies \
            git \
            g++ \
            make \
            curl \
            fuse-dev \
            pkgconfig \
            curl-dev \
            libxml2-dev \
            openssl-dev \
            json-c-dev \
            file-dev && \
    cd /tmp && \
    git clone https://github.com/TurboGit/hubicfuse.git && \
    cd /tmp/hubicfuse && \
    ./configure && \
    make && \
    make install && \
    mkdir /mnt/hubic

Then I run the Docker container with

docker run -ti --rm \
  --name hubicfuse \
  --cap-add SYS_ADMIN \
  --device /dev/fuse \
  -v $(pwd)/hubicfuse:/root/.hubicfuse \
  my-hubicfuse:latest \
  /bin/sh

and inside it I mount a directory with hubicfuse as explained in the README:

/ # hubicfuse -d /mnt/hubic -o noauto_cache,sync_read,allow_other
settings_filename = /root/.hubicfuse
debug_level = 1
get_extended_metadata = 1
curl_progress_state = 1
enable_chmod = 1
enable_chown = 1
==DBG 0 [2018-03-25 17:09:40.]:7==Authenticating... (client_id = '***')
==DBG 0 [2018-03-25 17:09:42.]:7==HUBIC TOKEN_URL result: '{"expires_in":21600,"access_token":"***","token_type":"Bearer"}'

==DBG 0 [2018-03-25 17:09:42.]:7==HUBIC Access token: ***

==DBG 0 [2018-03-25 17:09:42.]:7==HUBIC Token type  : Bearer

==DBG 0 [2018-03-25 17:09:42.]:7==HUBIC Expire in   : 21600

==DBG 1 [2018-03-25 17:09:42.]:7==add_header(Authorization:Bearer ***)
==DBG 0 [2018-03-25 17:09:42.]:7==CRED_URL result: '{"token":"***","endpoint":"https://lb1040.hubic.ovh.net/v1/AUTH_***","expires":"2018-03-26T19:05:28+02:00"}'

FUSE library version: 2.9.7
nullpath_ok: 0
nopath: 0
utime_omit_ok: 0
unique: 1, opcode: INIT (26), nodeid: 0, insize: 56, pid: 0
INIT: 7.26
flags=0x001ffffb
max_readahead=0x00020000
   INIT: 7.19
   flags=0x00000010
   max_readahead=0x00020000
   max_write=0x00020000
   max_background=0
   congestion_threshold=0
   unique: 1, success, outsize: 40

When I run

/ # ls /mnt/hubic
default

everything works fine:

unique: 2, opcode: GETATTR (3), nodeid: 1, insize: 56, pid: 21162
getattr /
==DBG 0 [2018-03-25 17:10:38.]:9==cfs_getattr(/)
==DBG 0 [2018-03-25 17:10:38.]:9==exit 0: cfs_getattr(/)
   unique: 2, success, outsize: 120
unique: 3, opcode: OPENDIR (27), nodeid: 1, insize: 48, pid: 21162
   unique: 3, success, outsize: 32
unique: 4, opcode: READDIR (28), nodeid: 1, insize: 80, pid: 21162
readdir[0] from 0
==DBG 0 [2018-03-25 17:10:38.]:9==cfs_readdir(/)
==DBG 1 [2018-03-25 17:10:38.]:9==caching_list_directory(/)
==DBG 1 [2018-03-25 17:10:38.]:9==cloudfs_list_directory()
==DBG 1 [2018-03-25 17:10:38.]:9==send_request_size(GET) (/?format=xml)
==DBG 1 [2018-03-25 17:10:38.]:9==add_header(X-Auth-Token:***)
==DBG 1 [2018-03-25 17:10:38.]:9==check_path_info()
==DBG 1 [2018-03-25 17:10:38.]:9==check_caching_list_directory()
==DBG 1 [2018-03-25 17:10:38.]:9==exit 1: check_caching_list_directory() [CACHE-DIR-MISS]
==DBG 1 [2018-03-25 17:10:38.]:9==exit 0: check_path_info() [CACHE-MISS]
==DBG 1 [2018-03-25 17:10:38.]:9==send_request_size: GET XML (/?format=xml)
==DBG 1 [2018-03-25 17:10:38.]:9==status: send_request_size(/?format=xml) started HTTP REQ:https://lb1040.hubic.ovh.net/v1/AUTH_***/?format=xml
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0*   Trying 147.135.143.143...
* TCP_NODELAY set
  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0* Connected to lb1040.hubic.ovh.net (147.135.143.143) port 443 (#0)
* ALPN, offering http/1.1
* successfully set certificate verify locations:
  CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: none
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server did not agree to a protocol
* Server certificate:
*  subject: OU=Domain Control Validated; OU=PositiveSSL Wildcard; CN=*.hubic.ovh.net
*  start date: Jul  3 00:00:00 2017 GMT
*  expire date: Jul  2 23:59:59 2020 GMT
*  subjectAltName: host "lb1040.hubic.ovh.net" matched cert's "*.hubic.ovh.net"
*  issuer: C=GB; ST=Greater Manchester; L=Salford; O=COMODO CA Limited; CN=COMODO RSA Domain Validation Secure Server CA
*  SSL certificate verify ok.
> GET /v1/AUTH_***/?format=xml HTTP/1.1
Host: lb1040.hubic.ovh.net
User-Agent: CloudFuse
Accept: */*
X-Auth-Token: ***

< HTTP/1.1 200 OK
< Content-Length: 237
< X-Account-Storage-Policy-Policy-1-Bytes-Used: 0
< X-Account-Storage-Policy-Policy-1-Object-Count: 3
< X-Account-Object-Count: 3
< X-Account-Meta-Quota: 26843545600
< X-Timestamp: 1520681424.56409
< X-Account-Meta-Temp-Url-Key: XzRpA2ksJ5BD
< X-Account-Storage-Policy-Policy-1-Container-Count: 1
< X-Account-Bytes-Used: 0
< X-Account-Container-Count: 1
< Content-Type: application/xml; charset=utf-8
< Accept-Ranges: bytes
< X-Trans-Id: ***
< X-Openstack-Request-Id: ***
< Date: Sun, 25 Mar 2018 17:10:39 GMT
< X-IPLB-Instance: 13554
< 
100   237  100   237    0     0    142      0  0:00:01  0:00:01 --:--:--   142
* Connection #0 to host lb1040.hubic.ovh.net left intact
==DBG 1 [2018-03-25 17:10:40.]:9==status: send_request_size(/?format=xml) completed HTTP REQ:https://lb1040.hubic.ovh.net/v1/AUTH_***/?format=xml total_time=1.7 seconds
==DBG 0 [2018-03-25 17:10:40.]:9==exit 0: send_request_size(/?format=xml) speed=1.7 sec (GET) [HTTP OK]
==DBG 0 [2018-03-25 17:10:40.]:9==get_time_as_string: input time length too long, 4691732962903619 > max=2147483647, trimming!
==DBG 1 [2018-03-25 17:10:40.]:9==new dir_entry /default size=0 application/directory dir=1 lnk=0 mod=[1970-01-01 00:00:00.0]
==DBG 1 [2018-03-25 17:10:40.]:9==exit: cloudfs_list_directory()
==DBG 1 [2018-03-25 17:10:40.]:9==caching_list_directory: new_cache() [CACHE-CREATE]
==DBG 0 [2018-03-25 17:10:40.]:9==new_cache()
==DBG 1 [2018-03-25 17:10:40.]:9==exit: new_cache()
==DBG 1 [2018-03-25 17:10:40.]:9==exit 2: caching_list_directory()
==DBG 0 [2018-03-25 17:10:40.]:9==exit 1: cfs_readdir(/)
   unique: 4, success, outsize: 112
unique: 5, opcode: LOOKUP (1), nodeid: 1, insize: 48, pid: 21162
LOOKUP /default
getattr /default
==DBG 0 [2018-03-25 17:10:40.]:10==cfs_getattr(/default)
==DBG 1 [2018-03-25 17:10:40.]:10==path_info(/default)
==DBG 1 [2018-03-25 17:10:40.]:10==caching_list_directory()
==DBG 1 [2018-03-25 17:10:40.]:10==caching_list_directory() [CACHE-DIR-HIT]
==DBG 1 [2018-03-25 17:10:40.]:10==exit 2: caching_list_directory()
==DBG 1 [2018-03-25 17:10:40.]:10==path_info() [CACHE-DIR-HIT]
==DBG 1 [2018-03-25 17:10:40.]:10==exit 1: path_info(/default) [CACHE-FILE-HIT]
==DBG 1 [2018-03-25 17:10:40.]:10==get_file_metadata(/default)
==DBG 1 [2018-03-25 17:10:40.]:10==send_request_size(GET) (%2Fdefault)
==DBG 1 [2018-03-25 17:10:40.]:10==add_header(X-Auth-Token:***)
==DBG 1 [2018-03-25 17:10:40.]:10==send_request_size: GET HEADERS only((null))
==DBG 1 [2018-03-25 17:10:40.]:10==status: send_request_size(/default) started HTTP REQ:https://lb1040.hubic.ovh.net/v1/AUTH_***/default
* Found bundle for host lb1040.hubic.ovh.net: 0x56016a2d8f60 [can pipeline]
* Re-using existing connection! (#0) with host lb1040.hubic.ovh.net
* Connected to lb1040.hubic.ovh.net (147.135.143.143) port 443 (#0)
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0> HEAD /v1/AUTH_***/default HTTP/1.1
Host: lb1040.hubic.ovh.net
User-Agent: CloudFuse
Accept: */*
X-Auth-Token: ***

< HTTP/1.1 204 No Content
< Content-Length: 0
< X-Container-Object-Count: 3
< Accept-Ranges: bytes
< X-Storage-Policy: Policy-1
< Last-Modified: Sat, 10 Mar 2018 11:30:26 GMT
< X-Container-Bytes-Used: 0
< X-Timestamp: 1520681425.51075
< Content-Type: text/plain; charset=utf-8
< X-Trans-Id: ***
< X-Openstack-Request-Id: ***
< Date: Sun, 25 Mar 2018 17:10:39 GMT
< X-IPLB-Instance: 13554
< 
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* Connection #0 to host lb1040.hubic.ovh.net left intact
==DBG 1 [2018-03-25 17:10:40.]:10==status: send_request_size(/default) completed HTTP REQ:https://lb1040.hubic.ovh.net/v1/AUTH_***/default total_time=0.0 seconds
==DBG 0 [2018-03-25 17:10:40.]:10==exit 0: send_request_size(/default) speed=0.0 sec (GET) [HTTP OK]
==DBG 1 [2018-03-25 17:10:40.]:10==exit: get_file_metadata(/default)
==DBG 0 [2018-03-25 17:10:40.]:10==get_time_as_string: input time length too long, 4691732962903619 > max=2147483647, trimming!
==DBG 1 [2018-03-25 17:10:40.]:10==cfs_getattr: atime=[1970-01-01 00:00:00.0]
==DBG 0 [2018-03-25 17:10:40.]:10==get_time_as_string: input time length too long, 4691732962903619 > max=2147483647, trimming!
==DBG 1 [2018-03-25 17:10:40.]:10==cfs_getattr: mtime=[1970-01-01 00:00:00.0]
==DBG 0 [2018-03-25 17:10:40.]:10==get_time_as_string: input time length too long, 4691732962903619 > max=2147483647, trimming!
==DBG 1 [2018-03-25 17:10:40.]:10==cfs_getattr: ctime=[1970-01-01 00:00:00.0]
==DBG 0 [2018-03-25 17:10:40.]:10==exit 2: cfs_getattr(/default)
   NODEID: 2
   unique: 5, success, outsize: 144
unique: 6, opcode: READDIR (28), nodeid: 1, insize: 80, pid: 21162
   unique: 6, success, outsize: 16
unique: 7, opcode: RELEASEDIR (29), nodeid: 1, insize: 64, pid: 0
   unique: 7, success, outsize: 16

But when I type ls -l, I get a segmentation fault:

/ # ls -l /mnt/hubic
total 0
Segmentation fault
unique: 8, opcode: GETATTR (3), nodeid: 1, insize: 56, pid: 21165
getattr /
==DBG 0 [2018-03-25 17:11:22.]:10==cfs_getattr(/)
==DBG 0 [2018-03-25 17:11:22.]:10==exit 0: cfs_getattr(/)
   unique: 8, success, outsize: 120
unique: 9, opcode: OPENDIR (27), nodeid: 1, insize: 48, pid: 21165
   unique: 9, success, outsize: 32
unique: 10, opcode: READDIR (28), nodeid: 1, insize: 80, pid: 21165
readdir[0] from 0
==DBG 0 [2018-03-25 17:11:22.]:19==cfs_readdir(/)
==DBG 1 [2018-03-25 17:11:22.]:19==caching_list_directory(/)
==DBG 1 [2018-03-25 17:11:22.]:19==caching_list_directory() [CACHE-DIR-HIT]
==DBG 1 [2018-03-25 17:11:22.]:19==exit 2: caching_list_directory()
==DBG 0 [2018-03-25 17:11:22.]:19==exit 1: cfs_readdir(/)
   unique: 10, success, outsize: 112
unique: 11, opcode: LOOKUP (1), nodeid: 1, insize: 48, pid: 21165
LOOKUP /default
getattr /default
==DBG 0 [2018-03-25 17:11:22.]:10==cfs_getattr(/default)
==DBG 1 [2018-03-25 17:11:22.]:10==path_info(/default)
==DBG 1 [2018-03-25 17:11:22.]:10==caching_list_directory()
==DBG 1 [2018-03-25 17:11:22.]:10==caching_list_directory() [CACHE-DIR-HIT]
==DBG 1 [2018-03-25 17:11:22.]:10==exit 2: caching_list_directory()
==DBG 1 [2018-03-25 17:11:22.]:10==path_info() [CACHE-DIR-HIT]
==DBG 1 [2018-03-25 17:11:22.]:10==exit 1: path_info(/default) [CACHE-FILE-HIT]
==DBG 0 [2018-03-25 17:11:22.]:10==get_time_as_string: input time length too long, 4691732962903619 > max=2147483647, trimming!
==DBG 1 [2018-03-25 17:11:22.]:10==cfs_getattr: atime=[1970-01-01 00:00:00.0]
==DBG 0 [2018-03-25 17:11:22.]:10==get_time_as_string: input time length too long, 4691732962903619 > max=2147483647, trimming!
==DBG 1 [2018-03-25 17:11:22.]:10==cfs_getattr: mtime=[1970-01-01 00:00:00.0]
==DBG 0 [2018-03-25 17:11:22.]:10==get_time_as_string: input time length too long, 4691732962903619 > max=2147483647, trimming!
==DBG 1 [2018-03-25 17:11:22.]:10==cfs_getattr: ctime=[1970-01-01 00:00:00.0]
==DBG 0 [2018-03-25 17:11:22.]:10==exit 2: cfs_getattr(/default)
   NODEID: 2
   unique: 11, success, outsize: 144
unique: 12, opcode: READDIR (28), nodeid: 1, insize: 80, pid: 21165
   unique: 12, success, outsize: 16
unique: 13, opcode: RELEASEDIR (29), nodeid: 1, insize: 64, pid: 0
   unique: 13, success, outsize: 16

This is the content of my $HOME/.hubicfuse file:

client_id=***
client_secret=***
refresh_token=***
get_extended_metadata=false
curl_verbose=true
curl_progress_state=true
debug_level=1
enable_chmod=true
enable_chown=true

What could be the problem?

Thanks

Massimo

@TurboGit
Owner

I'm using only client_id, client_secret and refresh_token, so maybe one of the other options is the culprit. Try without:

get_extended_metadata=false
curl_progress_state=true
enable_chmod=true
enable_chown=true

@blasterspike
Author

Thanks for your reply.
In my $HOME/.hubicfuse I have kept only

client_id=***
client_secret=***
refresh_token=***

Unfortunately, I'm getting the same error:

/ # ls -l /mnt/hubic
total 0
Segmentation fault
unique: 2, opcode: GETATTR (3), nodeid: 1, insize: 56, pid: 21298
getattr /
==DBG 0 [2018-03-25 18:45:56.]:12==cfs_getattr(/)
==DBG 0 [2018-03-25 18:45:56.]:12==exit 0: cfs_getattr(/)
   unique: 2, success, outsize: 120
unique: 3, opcode: OPENDIR (27), nodeid: 1, insize: 48, pid: 21298
   unique: 3, success, outsize: 32
unique: 4, opcode: READDIR (28), nodeid: 1, insize: 80, pid: 21298
readdir[0] from 0
==DBG 0 [2018-03-25 18:45:56.]:12==cfs_readdir(/)
==DBG 0 [2018-03-25 18:45:58.]:12==exit 0: send_request_size(/?format=xml) speed=2.0 sec (GET) [HTTP OK]
==DBG 0 [2018-03-25 18:45:58.]:12==get_time_as_string: input time length too long, 3100162535172419 > max=2147483647, trimming!
==DBG 0 [2018-03-25 18:45:58.]:12==new_cache()
==DBG 0 [2018-03-25 18:45:58.]:12==exit 1: cfs_readdir(/)
   unique: 4, success, outsize: 112
unique: 5, opcode: LOOKUP (1), nodeid: 1, insize: 48, pid: 21298
LOOKUP /default
getattr /default
==DBG 0 [2018-03-25 18:45:58.]:13==cfs_getattr(/default)
==DBG 0 [2018-03-25 18:45:58.]:13==get_time_as_string: input time length too long, 3100162535172419 > max=2147483647, trimming!
==DBG 0 [2018-03-25 18:45:58.]:13==get_time_as_string: input time length too long, 3100162535172419 > max=2147483647, trimming!
==DBG 0 [2018-03-25 18:45:58.]:13==get_time_as_string: input time length too long, 3100162535172419 > max=2147483647, trimming!
==DBG 0 [2018-03-25 18:45:58.]:13==exit 2: cfs_getattr(/default)
   NODEID: 2
   unique: 5, success, outsize: 144
unique: 6, opcode: READDIR (28), nodeid: 1, insize: 80, pid: 21298
   unique: 6, success, outsize: 16
unique: 7, opcode: RELEASEDIR (29), nodeid: 1, insize: 64, pid: 0
   unique: 7, success, outsize: 16

@TurboGit
Owner

And I suppose this is only from a Docker image? It works fine on the host?

@blasterspike
Author

I don't have an Alpine Linux host at the moment; I can set up a VM to test it if needed.
I'm running this Docker container on my MacBook Pro.
I have tested this other Docker image,
https://github.com/francklemoine/hubicfuse
which is based on debian:jessie, and it works, so I suspect the problem is specific to Alpine Linux. Is there anything else I can do to debug the issue?

@TurboGit
Owner

At this stage you'll need to build in debug mode and run under GDB to see where the crash happens.

@blasterspike
Author

I have no idea where to start with that; I'll read up.
In the meantime, I have noticed in dmesg that I get the following with ls -l:

[177620.447341] traps: ls[23891] general protection ip:7fb250017fcb sp:7ffd915af650 error:0
[177620.448212]  in ld-musl-x86_64.so.1[7fb24ffc4000+89000]

@TurboGit
Owner

Indeed, I don't know how to run a FUSE application under GDB either. The solution would be to compile in debug mode without optimization:

$ CFLAGS=-g ./configure
$ make
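Note that -g alone only adds debug symbols; if the configure script defaults to -O2, the optimizer can still reorder or inline code and make the GDB backtrace harder to follow. A sketch of a fully unoptimized debug build (assuming the project's standard autoconf-style configure):

```shell
# build with debug symbols and optimization disabled
CFLAGS="-g -O0" ./configure
make clean && make
```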

You'll need to add more debug output to the code itself to trace where the crash happens.

I have no better solution. Given the error above, it sounds like the issue is in musl (a libc implementation), which is probably what the Docker environment is using.

@blasterspike
Author

I have compiled in debug mode using

CFLAGS=-g ./configure

as you suggested.
To be able to use gdb inside the Docker container, I had to run it with these additional parameters:

--cap-add=SYS_PTRACE --security-opt seccomp=unconfined
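With those flags, the full invocation would look something like this (a sketch adapting the docker run command from earlier in the thread; the image name my-hubicfuse:latest is the one used above):

```shell
docker run -ti --rm \
  --name hubicfuse \
  --cap-add SYS_ADMIN \
  --cap-add SYS_PTRACE \
  --security-opt seccomp=unconfined \
  --device /dev/fuse \
  -v $(pwd)/hubicfuse:/root/.hubicfuse \
  my-hubicfuse:latest \
  /bin/sh
```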

When I run

gdb --args ls -l /mnt/hubic

I get

(gdb) run
Starting program: /bin/ls -l /mnt/hubic
total 0

Program received signal SIGSEGV, Segmentation fault.
0x00007ffff7dc5fcb in __asctime () from /lib/ld-musl-x86_64.so.1

and running hubicfuse

gdb --args hubicfuse -d /mnt/hubic -o noauto_cache,sync_read,allow_other
[...]
(gdb) run
Starting program: /usr/local/bin/hubicfuse -d /mnt/hubic -o noauto_cache,sync_read,allow_other
[...]
FUSE library version: 2.9.7
nullpath_ok: 0
nopath: 0
utime_omit_ok: 0
[New LWP 15]
[New LWP 16]
unique: 1, opcode: INIT (26), nodeid: 0, insize: 56, pid: 0
INIT: 7.26
flags=0x001ffffb
max_readahead=0x00020000
   INIT: 7.19
   flags=0x00000010
   max_readahead=0x00020000
   max_write=0x00020000
   max_background=0
   congestion_threshold=0
   unique: 1, success, outsize: 40
unique: 2, opcode: GETATTR (3), nodeid: 1, insize: 56, pid: 13563
getattr /
==DBG 0 [2018-04-26 18:03:52.]:15==cfs_getattr(/)
==DBG 0 [2018-04-26 18:03:52.]:15==exit 0: cfs_getattr(/)
   unique: 2, success, outsize: 120
unique: 3, opcode: OPENDIR (27), nodeid: 1, insize: 48, pid: 13563
   unique: 3, success, outsize: 32
unique: 4, opcode: READDIR (28), nodeid: 1, insize: 80, pid: 13563
readdir[0] from 0
==DBG 0 [2018-04-26 18:03:52.]:15==cfs_readdir(/)
==DBG 1 [2018-04-26 18:03:52.]:15==caching_list_directory(/)
==DBG 1 [2018-04-26 18:03:52.]:15==cloudfs_list_directory()
==DBG 1 [2018-04-26 18:03:52.]:15==send_request_size(GET) (/?format=xml)
==DBG 1 [2018-04-26 18:03:52.]:15==add_header(X-Auth-Token:******)
==DBG 1 [2018-04-26 18:03:52.]:15==check_path_info()
==DBG 1 [2018-04-26 18:03:52.]:15==check_caching_list_directory()
==DBG 1 [2018-04-26 18:03:52.]:15==exit 1: check_caching_list_directory() [CACHE-DIR-MISS]
==DBG 1 [2018-04-26 18:03:52.]:15==exit 0: check_path_info() [CACHE-MISS]
==DBG 1 [2018-04-26 18:03:52.]:15==send_request_size: GET XML (/?format=xml)
==DBG 1 [2018-04-26 18:03:52.]:15==status: send_request_size(/?format=xml) started HTTP REQ:https://lb1040.hubic.ovh.net/v1/AUTH_******/?format=xml
[New LWP 22]
[LWP 22 exited]
==DBG 1 [2018-04-26 18:03:53.]:15==status: send_request_size(/?format=xml) completed HTTP REQ:https://lb1040.hubic.ovh.net/v1/AUTH_******/?format=xml total_time=0.7 seconds
==DBG 0 [2018-04-26 18:03:53.]:15==exit 0: send_request_size(/?format=xml) speed=0.7 sec (GET) [HTTP OK]
==DBG 0 [2018-04-26 18:03:53.]:15==get_time_as_string: input time length too long, 3780814486500419 > max=2147483647, trimming!
==DBG 1 [2018-04-26 18:03:53.]:15==new dir_entry /default size=0 application/directory dir=1 lnk=0 mod=[1970-01-01 00:00:00.0]
==DBG 1 [2018-04-26 18:03:53.]:15==exit: cloudfs_list_directory()
==DBG 1 [2018-04-26 18:03:53.]:15==caching_list_directory: new_cache() [CACHE-CREATE]
==DBG 0 [2018-04-26 18:03:53.]:15==new_cache()
==DBG 1 [2018-04-26 18:03:53.]:15==exit: new_cache()
==DBG 1 [2018-04-26 18:03:53.]:15==exit 2: caching_list_directory()
==DBG 0 [2018-04-26 18:03:53.]:15==exit 1: cfs_readdir(/)
   unique: 4, success, outsize: 112
unique: 5, opcode: LOOKUP (1), nodeid: 1, insize: 48, pid: 13563
LOOKUP /default
getattr /default
==DBG 0 [2018-04-26 18:03:53.]:16==cfs_getattr(/default)
==DBG 1 [2018-04-26 18:03:53.]:16==path_info(/default)
==DBG 1 [2018-04-26 18:03:53.]:16==caching_list_directory()
==DBG 1 [2018-04-26 18:03:53.]:16==caching_list_directory() [CACHE-DIR-HIT]
==DBG 1 [2018-04-26 18:03:53.]:16==exit 2: caching_list_directory()
==DBG 1 [2018-04-26 18:03:53.]:16==path_info() [CACHE-DIR-HIT]
==DBG 1 [2018-04-26 18:03:53.]:16==exit 1: path_info(/default) [CACHE-FILE-HIT]
==DBG 0 [2018-04-26 18:03:53.]:16==get_time_as_string: input time length too long, 3780814486500419 > max=2147483647, trimming!
==DBG 1 [2018-04-26 18:03:53.]:16==cfs_getattr: atime=[1970-01-01 00:00:00.0]
==DBG 0 [2018-04-26 18:03:53.]:16==get_time_as_string: input time length too long, 3780814486500419 > max=2147483647, trimming!
==DBG 1 [2018-04-26 18:03:53.]:16==cfs_getattr: mtime=[1970-01-01 00:00:00.0]
==DBG 0 [2018-04-26 18:03:53.]:16==get_time_as_string: input time length too long, 3780814486500419 > max=2147483647, trimming!
==DBG 1 [2018-04-26 18:03:53.]:16==cfs_getattr: ctime=[1970-01-01 00:00:00.0]
==DBG 0 [2018-04-26 18:03:53.]:16==exit 2: cfs_getattr(/default)
   NODEID: 2
   unique: 5, success, outsize: 144
unique: 6, opcode: READDIR (28), nodeid: 1, insize: 80, pid: 13563
   unique: 6, success, outsize: 16
unique: 7, opcode: RELEASEDIR (29), nodeid: 1, insize: 64, pid: 0
   unique: 7, success, outsize: 16

In dmesg I still see the error

[29686.937925] traps: ld-musl-x86_64.[11897] general protection ip:7f706226efcb sp:7ffc6dba4520 error:0
[29686.939584]  in ld-musl-x86_64.so.1[7f706221b000+89000]

Another thing worth mentioning: I get two warnings while compiling, though I don't know if they are relevant.

/usr/lib/gcc/x86_64-alpine-linux-musl/6.4.0/../../../../x86_64-alpine-linux-musl/bin/ld: warning: libssl.so.44, needed by /usr/lib/gcc/x86_64-alpine-linux-musl/6.4.0/../../../../lib/libcurl.so, may conflict with libssl.so.1.0.0
/usr/lib/gcc/x86_64-alpine-linux-musl/6.4.0/../../../../x86_64-alpine-linux-musl/bin/ld: warning: libcrypto.so.42, needed by /usr/lib/gcc/x86_64-alpine-linux-musl/6.4.0/../../../../lib/libcurl.so, may conflict with libcrypto.so.1.0.0
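Those sonames look significant: libssl.so.44 and libcrypto.so.42 appear to be LibreSSL (which Alpine's libcurl linked against at the time), while libssl.so.1.0.0 is OpenSSL, so the linker is warning that two different TLS libraries may end up in the same process, which can itself cause crashes. A quick way to check what actually gets loaded (commands assumed available in the Alpine image; library paths may differ):

```shell
# list the SSL/crypto libraries the binary resolves at load time
ldd /usr/local/bin/hubicfuse | grep -Ei 'ssl|crypto'
# see which package owns one of the conflicting libraries
apk info --who-owns /usr/lib/libssl.so.44
```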

Is there anything else I can do to debug further?
