
IPNS names do not resolve when published with different key than self #6360

Closed
marcinczenko opened this issue May 21, 2019 · 21 comments
Labels: kind/bug A bug in existing code (including security flaws)

@marcinczenko

Version information:

ipfs --version --all
go-ipfs version: 0.4.20-
Repo version: 7
System version: arm/linux
Golang version: go1.12.4

As requested by @aschmahmann, this is a continuation of our conversation from ipfs-shipyard/integration-mini-projects#4.

I was initially testing publishing and resolving of IPNS names together with the pubsub service, but it looks like the problem is more fundamental.

ipfs resolve <name> seems to fail when the name has been published using the --key option and the key is not self.

I included a fairly in-depth description of the problem in my comments in ipfs-shipyard/integration-mini-projects#4, especially ipfs-shipyard/integration-mini-projects#4 (comment) and ipfs-shipyard/integration-mini-projects#4 (comment).

The testing was done between two different nodes, both permanently connected and online: one on AWS and the other running on a Raspberry Pi. In both cases they use port 4001, and we keep it open.

@aschmahmann please let me know if there is anything I can do to help improve this. IPNS resolution is crucial for me.

@aschmahmann (Contributor)

@marcinczenko it looks like you're having DHT issues. Here are a couple of things I've run on my machine with 0.4.20 that seem to work.

Shell 1:
ipfs daemon
ipfs key gen ipnstest // the key must exist before publishing with --key
ipfs name publish --key=ipnstest QmSomeData
// after a minute or two (it took 2 min 10 s on my machine) this results in:
// Published to QmIPNSKey: /ipfs/QmSomeData

Shell 2:
ipfs daemon
ipfs name resolve QmIPNSKey

// after a minute it should return either:
// Error: could not resolve name
// or: /ipfs/QmSomeData

If it can't resolve the name initially, in my experience running the resolve a second time almost always succeeds. Btw, I think the main difference between "self" and other keys is that during the DHT search for "self" the publishing peer is itself the best possible source of truth for the data. This is because the DHT works by trying to find the peers whose IDs are closest to the key, and in this case key = self.
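
To make "closest" concrete: Kademlia-style DHTs measure distance as the XOR of hashed IDs, and a lookup walks toward peers whose IDs share the longest prefix with the key. A minimal sketch of just that metric (hypothetical IDs, standard library only - not the actual go-libp2p-kad-dht code):

package main

import (
	"crypto/sha256"
	"fmt"
	"math/bits"
)

// xorDistance is the Kademlia distance between two hashed IDs.
func xorDistance(a, b []byte) []byte {
	d := make([]byte, len(a))
	for i := range a {
		d[i] = a[i] ^ b[i]
	}
	return d
}

// sharedPrefixBits counts leading zero bits of the distance:
// more shared bits means the IDs are closer in the keyspace.
func sharedPrefixBits(d []byte) int {
	n := 0
	for _, x := range d {
		if x == 0 {
			n += 8
			continue
		}
		n += bits.LeadingZeros8(x)
		break
	}
	return n
}

func main() {
	self := sha256.Sum256([]byte("QmMyPeerID"))         // hypothetical peer ID
	record := sha256.Sum256([]byte("/ipns/QmMyPeerID")) // hypothetical record key
	fmt.Println("shared prefix bits:", sharedPrefixBits(xorDistance(self[:], record[:])))
}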

@Stebalien does it look like I'm missing anything here?

I'm currently doing some work on IPNS improvements that should help with this by adding an option that is less reliant on the DHT, and any help is always appreciated. Give me 1-2 days to figure out what kind of chunks I can carve out and I'll post back.

@marcinczenko (Author) commented May 22, 2019

@aschmahmann Thanks for checking it.

When you say Shell 1 and Shell 2, do you mean different nodes (physically located on different networks, so not on the same LAN)? When I try to resolve from the same machine where I published, the name always resolves regardless of the key I am using. But when I am on a different machine, on a different network (in a different country, to be precise), I get the problems described above.

I will perform more cross-checks between different nodes; for now I have focused on two of them.

So far, though, my content seems to resolve well, although today I saw something strange: when I run ipfs dag get zdu... on the node, it seems to hang. But when I then go to ipfs.io and request the same CID, it resolves instantly, and at that very moment the request on my node also completes immediately. Maybe this is an indication of a problem with the DHT.

After fetching the content from the public gateway once, IPNS also seems to resolve rather better (it still fails on the first request, but then it does indeed resolve on subsequent attempts). On the other hand, ipfs dht findpeer always resolves to the correct node... On my Raspberry Pi node I only keep port 4001 open (tcp), whereas on the AWS node I see that I also opened port 4002 (also tcp). Do I need to open more? What else can I do about potential DHT problems?

Aside from the above: isn't it already a problem that the name does not resolve on the first attempt in your case? Why does that happen? Could we at least provide a better diagnostic message? For the moment it looks like the strategy would be to always try at least twice and otherwise assume that resolution failed. Not very elegant, but I would be fine with it if it were consistent - I could even try 10 times :).
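
Something like this retry workaround is what I have in mind - a sketch that just shells out to the CLI (the name and the attempt limit are placeholders):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	name := "QmIPNSKey" // placeholder IPNS name
	for attempt := 1; attempt <= 10; attempt++ {
		// ipfs name resolve exits non-zero when resolution fails
		out, err := exec.Command("ipfs", "name", "resolve", name).Output()
		if err == nil {
			fmt.Printf("resolved on attempt %d: %s", attempt, out)
			return
		}
		fmt.Printf("attempt %d failed: %v\n", attempt, err)
	}
	fmt.Println("giving up: resolution failed")
}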

I do not yet understand the implementation details, so I do not really follow your explanation of the difference between self and other keys. Do you have any hints on where I should start to get a better grasp of it?

Thanks in advance for any help.

@Stebalien added the kind/bug label (A bug in existing code, including security flaws) on May 22, 2019
@marcinczenko (Author) commented May 22, 2019

@aschmahmann I aligned the ports and did some more tests.

I am not sure whether 4002 changes anything, but from my more recent tests across three different nodes it looks like you may be right - the key does indeed seem to have some influence on resolving.

With some keys I really cannot get the name resolved at all, while with other keys it sometimes resolves even on the first attempt and sometimes only succeeds 10 attempts later. Very unpredictable. On the other hand, self always resolves - full predictability, from every node I tried.

I am still puzzled why going to ipfs.io and resolving a CID that is linked to an IPNS name seems to fix things up, at least in some cases (it looks like ipfs.io sometimes triggers something that speeds up discovery in the given region).

So I am really puzzled... Is this level of uncertainty something you would currently describe as normal?

How can we make this better? It feels impossible to build any commercial solution that depends on IPNS in its current shape, and because our solution requires IPNS resolution that is at the very least deterministic, it would be nice to hear your opinion about the future of IPNS.

@aschmahmann (Contributor) commented May 23, 2019

@marcinczenko so the options for what's going on here are:

  1. There is a bug in publishing with --key, or
  2. It's just DHT issues manifesting in this example.

There are some libp2p folks investigating DHT improvements whom I can connect you to if that's up your alley. Additionally, if you'd like to do some debugging to verify that the --key issues are really in the DHT, that would definitely be helpful.

There are also a couple of things I'm working on now that should help with IPNS resolution speed, and help with either of them would be great:

  1. Make IPNS over PubSub work independently of the DHT
  2. Make it so we can use go-libp2p-rendezvous (the code is here) as a peer discovery mechanism

If you're interested in the PubSub persistence work I can explain in more detail how to break it up, but I would recommend taking a stab at the rendezvous work, since once we have that it will be possible to have a fork of go-ipfs that is quite fast.

In case you're wondering, rendezvous isn't magic - it's a form of centralization. However, unlike a traditionally centralized naming system (e.g. DNS), we're only centralizing the pointers to interested peers rather than the records themselves. Also, the plan is to create an open or federated system around rendezvous to decrease the centralization going forward.
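
To illustrate the distinction: a rendezvous point only maps a topic to the peers interested in it; the records still travel peer-to-peer. A toy sketch of that registration model (not the go-libp2p-rendezvous API):

package main

import "fmt"

// RendezvousPoint maps topics to interested peer IDs. It never stores
// the records themselves - only pointers to who has them.
type RendezvousPoint struct {
	peers map[string][]string
}

func NewRendezvousPoint() *RendezvousPoint {
	return &RendezvousPoint{peers: make(map[string][]string)}
}

// Register notes that a peer is interested in a topic.
func (r *RendezvousPoint) Register(topic, peerID string) {
	r.peers[topic] = append(r.peers[topic], peerID)
}

// Discover returns the peers registered under a topic; fetching the
// actual records then happens directly between peers.
func (r *RendezvousPoint) Discover(topic string) []string {
	return r.peers[topic]
}

func main() {
	rv := NewRendezvousPoint()
	rv.Register("/ipns/QmIPNSKey", "QmPeerA") // hypothetical IDs
	fmt.Println(rv.Discover("/ipns/QmIPNSKey"))
}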

Let me know if you have any questions or if you're interested in one of these topics. I'm available here, or on the #ipfs IRC or Matrix channels.

@marcinczenko (Author) commented May 23, 2019

Ok. Please give me a couple of days to process this - I am traveling at the moment but I will be back on Saturday.

I would like to help out as much as I can. I may sometimes struggle a bit with available time, but it is on my roadmap to learn more about the internals of IPFS and the whole ecosystem and potentially contribute. Let's then start with what you suggest. I do like the idea of getting more insight into IPNS and PubSub, as I would love to have IPNS that resolves faster and more reliably. We can always reach out to the DHT people later if you find it helpful.

I will contact you on IRC or Matrix. In the meantime I will look at the descriptions of the issues you mention. Do you perhaps also use Slack?

@marcinczenko (Author)

The links do not seem to be valid. Do you mean libp2p/go-libp2p-pubsub#171 for "attempt 1" and libp2p/go-libp2p-pubsub#175 for "some background info"?

@aschmahmann (Contributor)

Yep, sorry about the broken links - fixed them.

@marcinczenko (Author)

@aschmahmann Are you still using Riot? I am trying to reach you there, but I am not sure whether you still check it...

@aschmahmann (Contributor)

@marcinczenko yep, although I've been travelling for the past week and have a bit more ahead of me. Will try and take a look tomorrow or later this week.

@marcinczenko (Author)

@aschmahmann Thanks - take your time!

@DougAnderson444

@marcinczenko Thank you for opening this issue. I have been having a tough time resolving both 'self' and other keys on other nodes, even with pubsub enabled everywhere. Have you made any progress or discovered a workaround at all?

@marcinczenko (Author)

@DougAnderson444 I did not manage to get IPNS name resolution to work. But because I need some sort of name resolution sooner or later, I started experimenting with the IPFS PubSub service to see to what extent I can build something on top of it - basically, I seem to be building my own special-purpose IPNS ;). I have only experimented with two nodes so far, so nothing conclusive - but if I cannot make a stable system with two directly connected nodes, how am I going to make it work in more complex cases?

It is really nothing yet, but if you would like to take a look, it is part of an open source project: https://github.com/identity-box/identity-box. To see what kind of strange and naive things I am currently doing, you may want to look at https://idbox.online/developers/idbox-raspberry-pi and then https://github.com/identity-box/identity-box/tree/master/workspaces/nameservice and maybe https://github.com/identity-box/identity-box/tree/master/workspaces/identity-service. Some context may help you judge whether any of this applies to you: https://idbox.online. The code is still changing very aggressively, so please forgive me if I am imprecise about something. What we have deployed at the moment is stable and works reliably - so at least the basics seem to be working. Scaling up comes next, and soon I hope.

Because IPNS name resolution is not the only issue I am trying to solve, I am not really under strong pressure to get it working immediately - that's why I can afford not to rush too much (or, in other words, I seem to have bigger problems - like funding :)). However, the longer I look at IPNS/IPFS, the more I am inclined to build something myself... Constraints make people creative, I suppose ;).

@aschmahmann (Contributor)

@DougAnderson444 it seems from your comments on discuss that you are using js-ipfs nodes with IPNS over PubSub. go-ipfs 0.5+ has some pretty significant changes to IPNS over PubSub that are not yet implemented in js-ipfs. If you're still having trouble getting IPNS over PubSub to function using go-ipfs 0.5+ peers, feel free to open an issue on discuss and tag me.

@marcinczenko sorry you're still having issues. If you haven't yet, I'd check out the post-0.5 IPNS over PubSub and see if it works well for you. If you're looking for more high-level building blocks for your identity service, you may want to look into Textile threads/buckets, as they may take care of some of the application-layer concerns for you.

@DougAnderson444

Thanks @aschmahmann, I did tag you in there.

Pubsub seems to work fine between js-ipfs and go-ipfs for the /record/ topic, but go-ipfs 0.6.0 is not adding the /ipns/ name to name pubsub subs when name resolve <peerId> fails. I think this is why go-ipfs is not picking up the message when it's published - what do you think?
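
For reference, this is the sequence I am checking, as a sketch that shells out to the CLI (the peer ID is a placeholder):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	name := "QmSomePeerID" // placeholder: a key imported in js-ipfs only
	// This resolve is expected to fail for the imported key...
	out, err := exec.Command("ipfs", "name", "resolve", name).CombinedOutput()
	fmt.Printf("resolve: %s (err: %v)\n", out, err)
	// ...but afterwards the /ipns/ topic should still show up here - and it doesn't.
	subs, _ := exec.Command("ipfs", "name", "pubsub", "subs").Output()
	fmt.Printf("subscribed names:\n%s", subs)
}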

@aschmahmann (Contributor)

@DougAnderson444 I think you're correct there. I'll investigate what's going on. I suspect it didn't previously matter, since IPNS over PubSub didn't really work unless you could first resolve the data via the DHT; but since go-ipfs 0.5 these subsystems are independent, which has surfaced some issues.

@DougAnderson444 commented Jul 13, 2020

@aschmahmann I think I solved one issue here, see the pull request. Thanks for your help & insight on this one!

@aschmahmann (Contributor)

Thanks @DougAnderson444, I'll take a look when I can, although I suspect I'll be a bit busy this week. By the way, you prompted me to poke around in the IPNS code base a bit, and I found the bug/issue where IPNS over PubSub subscriptions fail if the DHT lookup fails and/or takes too long.

It's on my PR todo list, and I think it should close out this issue once done. The problem stems from this function call happening before the IPNS over PubSub router gets touched, which leads to a few issues, including:

https://github.com/ipfs/go-ipfs/blob/b3e5ffc41ae4ef46402ff38be21c66912b59bc42/namesys/routing.go#L77

  1. It is basically a waste of time, since we've been embedding public keys in IPNS records since (I think) 0.4.14, and enough of the network should have upgraded by now that we don't need it.
  2. We don't search the DHT for the record or subscribe to pubsub if we fail to find the public key in the DHT.

If we remove the public key search, this problem probably goes away.
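
Roughly what I mean, as a sketch rather than the actual go-ipfs code (the record type and helper are stand-ins; only the libp2p crypto calls are real):

package main

import (
	"crypto/rand"
	"errors"
	"fmt"

	crypto "github.com/libp2p/go-libp2p-core/crypto"
)

// ipnsRecord stands in for the real IPNS entry protobuf; the embedded
// public key bytes are what records have carried since ~0.4.14.
type ipnsRecord struct {
	PubKey []byte
}

// publicKeyFor prefers the key embedded in the record and only falls
// back to a routing lookup when the record doesn't carry one.
func publicKeyFor(rec *ipnsRecord, dhtLookup func() (crypto.PubKey, error)) (crypto.PubKey, error) {
	if len(rec.PubKey) > 0 {
		return crypto.UnmarshalPublicKey(rec.PubKey) // no DHT round trip needed
	}
	if dhtLookup == nil {
		return nil, errors.New("no embedded key and no routing fallback")
	}
	return dhtLookup()
}

func main() {
	_, pub, err := crypto.GenerateEd25519Key(rand.Reader)
	if err != nil {
		panic(err)
	}
	raw, _ := crypto.MarshalPublicKey(pub)
	got, err := publicKeyFor(&ipnsRecord{PubKey: raw}, nil)
	fmt.Println(got.Equals(pub), err) // true <nil>
}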

@DougAnderson444 commented Jul 15, 2020

I was mistaken: I cannot get keys other than 'self' to work either. When using imported keys, I get this error:

failed to dial QmXeEpvmAJGi4BUhrbyWdqpGqjwREBQHB9fL4noKSmqFb9: no addresses

I think go-ipfs is looking for a peer ID that doesn't exist, because the key was imported onto the keychain rather than being a dialable peer ID?

@aschmahmann (Contributor) commented Jul 15, 2020

I cannot get keys other than 'self' to work either

Do you mean "I cannot get keys other than self to work if there is pubsub, but no DHT support"? If so then yes, that's my issue above. Otherwise, can you explain in more detail? I can successfully publish non-self keys and resolve them on public gateways (e.g. http://dweb.link/ipns/QmPJFHSPVJEGNrVt9JSKSJvFd8iYqPLeJ2itZgdyzAGuSE).

failed to dial QmXeEpvmAJGi4BUhrbyWdqpGqjwREBQHB9fL4noKSmqFb9: no addresses

If that's your IPNS key, then it's probably just part of how GetPublicKey works: https://github.com/libp2p/go-libp2p-kad-dht/blob/330b9beabaacd7e44aa8c80c19435b1bf1081212/records.go#L20. It simultaneously tries to "dial" the peer (with a background DHT query to find it if you're not already connected) and ask it directly for the public key, while also searching the DHT for the key.
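
Conceptually that dual lookup is a race between two fetch paths, where the first success wins and the loser is cancelled. A sketch with hypothetical fetch functions (not the actual go-libp2p-kad-dht code):

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// firstKey races the lookup paths and returns whichever public key
// arrives first, cancelling the rest.
func firstKey(ctx context.Context, lookups ...func(context.Context) ([]byte, error)) ([]byte, error) {
	ctx, cancel := context.WithCancel(ctx)
	defer cancel()

	type result struct {
		key []byte
		err error
	}
	results := make(chan result, len(lookups))
	for _, fn := range lookups {
		fn := fn
		go func() {
			k, err := fn(ctx)
			results <- result{k, err}
		}()
	}

	var lastErr error
	for range lookups {
		r := <-results
		if r.err == nil {
			return r.key, nil
		}
		lastErr = r.err
	}
	return nil, lastErr
}

func main() {
	askPeerDirectly := func(ctx context.Context) ([]byte, error) {
		return nil, errors.New("failed to dial: no addresses") // e.g. the error above
	}
	searchDHT := func(ctx context.Context) ([]byte, error) {
		time.Sleep(50 * time.Millisecond) // pretend DHT latency
		return []byte("public-key-bytes"), nil
	}
	key, err := firstKey(context.Background(), askPeerDirectly, searchDHT)
	fmt.Println(string(key), err)
}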

@DougAnderson444

Hi Adin, a question: are you importing that "key other than self" into go-ipfs via the CLI before you resolve it in go-ipfs?

If so, can you try name resolve using a key that has not been imported, to see whether the /record/ and /ipns/ topics are created and go-ipfs is subscribed to them?

Why would anyone want to do this, you may ask? Well, this is exactly what happens when the key is imported in js-ipfs: it doesn't get imported into go-ipfs, yet we are trying to use go-ipfs to resolve it. This is what the current tests are doing: resolve first (to save the records in go-ipfs), then publish in js-ipfs once go-ipfs is subscribed.

So perhaps this is a different issue, since we do have DHT support?

@Stebalien (Member)

Fixed in #7549.
