Using the s3 plugin with DNF #72

Open
chandanadesilva opened this issue Oct 15, 2018 · 9 comments
@chandanadesilva

I am able to successfully use the plugin on a CentOS 7 based ec2 instance.
But I am struggling to use it on my Fedora 28 laptop.

The current RPM installs the plugin and config in yum related directories.

Am I correct in assuming that the config should go into /etc/dnf/plugins/ and the plugin itself into /usr/lib/python2.7/site-packages/dnf-plugins/?

On my Fedora 28 laptop, the current plugins are all Python 3. Would the s3 plugin work in Python 3, or only in 2.7?
Thanks in advance.

@andrewegel

The gap between the underlying Yum and DNF plugin interfaces is rather large, and porting is more than just "migrate to Python 3 syntax", so I don't expect the code in its current state to work.

I would like to see this feature though (may even pick up implementing it if I find the time). Debian/Ubuntu have apt-transport-s3 in all of their supported major versions, so closing this gap on DNF systems would bring EL8 (and the Fedoras) up to parity in terms of using private S3 buckets as a repo.

@cedws

cedws commented Dec 3, 2022

I had a look at writing a dnf plugin. The API has a lot of similarities with yum's. You can set request headers for a repository, but unfortunately that's not enough.

https://dnf.readthedocs.io/en/latest/api_repos.html?highlight=repo#dnf.repo.Repo.set_http_headers

To make a GET request to S3 you have to do some complicated work to calculate a signature, and one of the inputs is the request path. There's no way of knowing which path dnf needs to grab unless you can intercept each request and calculate the headers every time.

https://docs.aws.amazon.com/AmazonS3/latest/userguide/RESTAuthentication.html
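To make that concrete, here is a hedged illustration using botocore (outside of any dnf plugin) of why the exact request path has to be known before the Authorization header can be built; the bucket, region and object key below are made up, and a real S3 request would need a couple of extra headers on top of this:

import botocore.session
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

session = botocore.session.get_session()
credentials = session.get_credentials()

# hypothetical object dnf would ask for; the path is part of what gets signed
url = "https://s3.us-east-1.amazonaws.com/my-bucket/baseos/repodata/repomd.xml"

request = AWSRequest(method="GET", url=url)
SigV4Auth(credentials, "s3", "us-east-1").add_auth(request)

# add_auth() fills in X-Amz-Date and Authorization; a different path would
# produce a different signature, which is exactly the problem described above
print(dict(request.headers))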

Yum's YumRepository class exposes a method that returns a URLGrabber interface. So if you write your own class with YumRepository as its base and override the grab method, you can use your own URLGrabber and intercept requests. That's what this plugin does.

https://github.com/seporaitis/yum-s3-iam/blob/master/s3iam.py#L181-L196
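Roughly, the pattern looks like this (a hedged sketch against yum's Python 2 API; the names below are illustrative, see the linked s3iam.py for the real implementation):

from yum.yumRepo import YumRepository

class S3Grabber(object):
    """Stand-in for a URLGrabber-compatible object that signs each request."""
    def __init__(self, repo):
        self.repo = repo

    def urlgrab(self, url, filename=None, **kwargs):
        # compute the S3 auth headers for this specific url, then download it
        raise NotImplementedError

class S3Repository(YumRepository):
    @property
    def grab(self):
        # yum asks the repository for its grabber; hand back our own so every
        # fetch goes through S3Grabber.urlgrab() instead of the default one
        if getattr(self, "_s3_grab", None) is None:
            self._s3_grab = S3Grabber(self)
        return self._s3_grab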

DNF's API has nothing like this as far as I can see. In the changelogs I spotted that they've explicitly dropped URLGrabber.

https://dnf.readthedocs.io/en/latest/api_repos.html?highlight=repo#dnf.repo.Repo

I think the only way to do this with DNF would be to write an HTTP proxy that either talks to the metadata API on the local machine, or gets the IAM credentials passed to it via headers injected into the request by a DNF plugin.
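For the first option, the lookup against the local metadata API is the standard EC2 instance-metadata dance; a hedged sketch using the IMDSv2 endpoints as documented by AWS:

import json
import urllib.request

IMDS = "http://169.254.169.254"

def imds_credentials():
    # 1. get a short-lived IMDSv2 session token
    token_req = urllib.request.Request(
        IMDS + "/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    token = urllib.request.urlopen(token_req, timeout=2).read().decode()
    headers = {"X-aws-ec2-metadata-token": token}

    # 2. discover the instance role, then fetch its temporary credentials
    base = IMDS + "/latest/meta-data/iam/security-credentials/"
    role = urllib.request.urlopen(
        urllib.request.Request(base, headers=headers), timeout=2).read().decode()
    creds = urllib.request.urlopen(
        urllib.request.Request(base + role, headers=headers), timeout=2).read()
    return json.loads(creds)  # AccessKeyId, SecretAccessKey, Token, Expiration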

Here's some scaffolding for a DNF plugin if anybody wants to give it a go:

#!/usr/bin/env python

import dnf


class BucketIAMPlugin(dnf.Plugin):
    name = "bucketiam"

    def __init__(self, base, cli):
        super(BucketIAMPlugin, self).__init__(base, cli)

    def config(self):
        # read this plugin's own config file, if any
        conf = self.read_config(self.base.conf)

        for repo in self.base.repos.all():
            # set_http_headers() takes "Header: value" strings; the catch is
            # that a valid S3 signature depends on each request's path, which
            # isn't known at this point.
            repo.set_http_headers(("Authorization: ",))
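Assuming the usual DNF plugin layout (hedged; the exact path varies by Python version), the file above would sit next to the stock plugins, and read_config() would look for a matching config file:

/usr/lib/python3.X/site-packages/dnf-plugins/bucketiam.py
/etc/dnf/plugins/bucketiam.conf, containing something like:

[main]
enabled = 1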

@cedws

cedws commented Dec 5, 2022

One more thing. There's a urlopen method in the dnf.Base class. I tried monkey patching it but couldn't get it to be called, and even if it had worked, it would intercept all requests, which wouldn't be ideal. It would also conflict with other plugins that might do the same thing.

@lorengordon

Possibly related, I was poking around the dnf repos and noticed this issue, looking to replace librepo with powerloader:

And powerloader claims native S3 support...

@cedws

cedws commented Dec 8, 2022

I've found an existing proxy, seemingly built by Amazon themselves, that can create SigV4 headers. I haven't tried it, but I see no reason why this shouldn't work.

https://github.com/awslabs/aws-sigv4-proxy

@cedws

cedws commented Dec 9, 2022

Proxy works. Leaving notes here for anybody who might want them.

1. Start it up:

$ docker run --rm -d -p 8080:8080 public.ecr.aws/aws-observability/aws-sigv4-proxy

2. Create a new .repo file in /etc/yum.repos.d:

[<REPO NAME>]
name = <REPO NAME>
baseurl = http://s3.<BUCKET REGION>.amazonaws.com/<BUCKET NAME>/<REPO NAME>
proxy = http://localhost:8080

where <REPO NAME> would be the name of a DNF repository, like baseos or appstream; it should be a directory in your bucket. Note that although the baseurl is an HTTP URL, the proxy makes an HTTPS connection to S3.

3. Enjoy.

4. Consider enabling GPG checking, for example:

[<REPO NAME>]
...
+ gpgcheck = 1
+ gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Rocky-9
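If everything is wired up, a quick check (substitute your actual repo id) is to pull the repo metadata through the proxy:

$ sudo dnf --disablerepo='*' --enablerepo='<REPO NAME>' makecache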

@grzegorz-gn

> Proxy works. Leaving notes here for anybody who might want them. […]

Hi,
I tried to use this approach but I'm receiving a 502 error like:

Errors during downloading metadata for repository 'test':

@pawank94

pawank94 commented Jun 3, 2024

I am also getting a 502 like grzegorz-gn. Any resolutions?

@pawank94

pawank94 commented Jun 4, 2024

For anybody who stumbles into the 502 problem, we were able to solve it with the following steps:

1. Change the docker command as follows:
docker run --rm -d -e 'AWS_ACCESS_KEY_ID=<AWS KEYS>' \
-e 'AWS_SECRET_ACCESS_KEY=<AWS ACCESS KEY>' \
-e 'AWS_SESSION_TOKEN=<AWS SESSION TOKEN>' \
-p 8921:8080 public.ecr.aws/aws-observability/aws-sigv4-proxy \
--verbose \
--log-failed-requests \
--log-signing-process \
--no-verify-ssl \
--name s3 \
--host s3.amazonaws.com \
--region us-east-1 \
--sign-host s3.amazonaws.com

As per the aws-sigv4-proxy documentation, the Host header has to be passed along with the request; the 502 error comes from the proxy's default behavior of not passing it. We had to add the --host and --sign-host options (alongside the docker parameters) when starting the container to get it to work.

2. Change the .repo file as follows:
[s3-noarch]
name=S3 DNF repo
baseurl=http://localhost:8921/<BUCKET_NAME>/<PATH TO YUM REPO>
enabled=1
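A hedged way to sanity-check the proxy itself before involving dnf (substitute your real bucket and path) is to request the repo metadata directly and expect a 200:

$ curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8921/<BUCKET_NAME>/<PATH TO YUM REPO>/repodata/repomd.xml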

Hope it helps.
