Issues with fileAdapter when integrating Amazon S3 #5658

Open
paulo-rossy opened this issue Dec 23, 2024 · 2 comments

@paulo-rossy (Contributor) commented Dec 23, 2024


I decided to integrate Amazon AWS S3 into my project using the open-condo codebase, which already includes S3 upload logic (for SberCloud OBS, whose API is S3-compatible). In theory, this should work with Amazon S3 as well. However, I noticed that the S3 logic is named SberCloudFileAdapter. Wouldn't it be better to rename it to something more generic, like S3CloudFileAdapter?

I created an Amazon S3 bucket, granted the necessary permissions, and set the following environment variable in my .env file:

SBERCLOUD_OBS_CONFIG={"bucket": "<bucketName>", "s3Options": {"server": "https://s3.eu-central-1.amazonaws.com", "access_key_id": "<access_key_id>", "secret_access_key": "<secret_access_key>"}}

After launching the project, I encountered the following error:

The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.

According to [this Stack Overflow post](https://stackoverflow.com/questions/26533245/the-authorization-mechanism-you-have-provided-is-not-supported-please-use-aws4), you need to specify signature version 4, but it seems the esdk-obs-nodejs library doesn't expose such an option.
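For context, with the official aws-sdk (not esdk-obs-nodejs) the signature version can be forced on the client; this only illustrates the option the error message asks for, it is not something the current adapter supports:

const AWS = require('aws-sdk')

// Force AWS Signature Version 4; newer regions such as eu-central-1
// reject the legacy SigV2 scheme, which matches the error above.
const s3 = new AWS.S3({
  signatureVersion: 'v4',
  region: 'eu-central-1',
  accessKeyId: '<access_key_id>',
  secretAccessKey: '<secret_access_key>',
})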

I managed to work around the issue by looking at the [official example](https://support.huaweicloud.com/intl/en-us/sdk-nodejs-devg-obs/obs_29_0109.html), which explains that you can provide a security_token. To make it work, I renamed secret_access_key to security_token and commented out the required parameter in packages/keystone/fileAdapter/fileAdapter.js:

createSbercloudFileApapter() {
  const config = this.getEnvConfig('SBERCLOUD_OBS_CONFIG', [
    'bucket',
    's3Options.server',
    's3Options.access_key_id',
    //'s3Options.secret_access_key',
    's3Options.security_token', // added
  ])
  if (!config) {
    return null
  }
  return new SberCloudFileAdapter({ ...config, folder: this.folder, isPublic: this.isPublic, saveFileName: this.saveFileName })
}
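The corresponding .env entry then carries the same credentials, just under the renamed key:

SBERCLOUD_OBS_CONFIG={"bucket": "<bucketName>", "s3Options": {"server": "https://s3.eu-central-1.amazonaws.com", "access_key_id": "<access_key_id>", "security_token": "<secret_access_key>"}}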

This allowed me to upload empty text files, but any file containing data triggered the following error:

A header you provided implies functionality that is not implemented

A similar issue is described in another Stack Overflow post. S3 appears to return this error when the request body is a stream of unknown length and the SDK falls back to chunked transfer encoding, so it seems related to how the stream is handled in packages/keystone/fileAdapter/sberCloudFileAdapter.js:

this.s3.putObject({
  Body: stream,
  ContentType: mimetype,
  Bucket: this.bucket,
  Key: key,
  ...uploadParams,
})

I attempted to fix it by modifying the code as follows:

const { Readable } = require('stream')
const { statSync } = require('node:fs')

const saveFile = (resolve, reject) => {
  const uploadParams = this.uploadParams({ ...fileData, meta })
  const readable = Readable.from(stream) // added
  const stat = statSync(stream.path)     // added
  const contentLength = stat.size        // added

  this.s3.putObject({
    Body: readable,           // replaced
    ContentType: mimetype,
    Bucket: this.bucket,
    Key: key,
    ContentLength: contentLength, // added
    ...uploadParams,
  })
}

After that, a new error appeared when uploading documents via the UI. Presumably the upload stream produced by the GraphQL layer is not file-backed, so stream.path is undefined and statSync(stream.path) throws:

"extensions": {
  "code": "INTERNAL_SERVER_ERROR",
  "messageForDeveloper": "The \"path\" argument must be of type string or an instance of Buffer or URL. Received undefined\n\nGraphQL request:2:3\n1 | mutation createDocuments($data: [DocumentsCreateInput]) {\n2 |   objs: createDocuments(data: $data) {\n  |   ^\n3 |     organization {"
}
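A defensive variant of the stat call from my modified snippet avoids the crash, though it still leaves ContentLength undefined for non-file streams (a sketch only):

// Only stat file-backed streams; streams from UI uploads have no .path
const contentLength = stream.path ? statSync(stream.path).size : undefined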

However, using the following script, I can successfully upload files; here the stream comes from fs.createReadStream, which is file-backed and does have a path property:

require('dotenv').config()

const fs = require('fs')
const path = require('path')
const { v4: uuidv4 } = require('uuid')
const FileAdapter = require('./fileAdapter')

const Adapter = new FileAdapter('test', false)

const saveTestImage = async () => {
  try {
    const filePath = path.join(__dirname, 'sample.jpg')

    if (!fs.existsSync(filePath)) {
      throw new Error(`No file: ${filePath}`)
    }

    const stream = fs.createReadStream(filePath)
    const id = uuidv4()
    const filename = path.basename(filePath)
    const mimetype = 'image/jpeg'
    const encoding = undefined
    const meta = { description: 'Test' }

    console.log('filename, id, mimetype, encoding, meta:', filename, id, mimetype, encoding, meta)
    const result = await Adapter.save({ stream, filename, id, mimetype, encoding, meta })

    console.log('Done', result)
  } catch (error) {
    console.error('Error', error)
  }
}

saveTestImage()

I also tried converting the stream to a buffer, but in that case the uploaded file has zero size.
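For reference, a buffering helper that resolves only on the stream's end event would look like the sketch below; if the buffer is assembled before end fires, a zero-size upload like the one above is exactly the symptom you'd expect:

// Drain the upload stream into a single Buffer before calling putObject,
// so both the body and its ContentLength are known up front.
const streamToBuffer = (stream) => new Promise((resolve, reject) => {
  const chunks = []
  stream.on('data', (chunk) => chunks.push(chunk))
  stream.on('error', reject)
  stream.on('end', () => resolve(Buffer.concat(chunks)))
})

// inside an async save() in this sketch
const buffer = await streamToBuffer(stream)
this.s3.putObject({
  Body: buffer,
  ContentLength: buffer.length, // known up front, so no chunked encoding
  ContentType: mimetype,
  Bucket: this.bucket,
  Key: key,
  ...uploadParams,
})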

Is there a recommended approach or any additional steps I should take to properly configure AWS S3 integration?

Alternative Test Script with AWS SDK

For testing, I tried another approach using the official AWS SDK:

// Import the AWS SDK library.
const AWS = require('aws-sdk')

// Configure AWS SDK with credentials and region.
AWS.config.update({
  accessKeyId: '<YOUR_ACCESS_KEY_ID>',
  secretAccessKey: '<YOUR_SECRET_ACCESS_KEY>',
  region: 'eu-central-1', // Replace with your S3 bucket region.
})

// Create an S3 client instance.
const s3 = new AWS.S3()

async function putObject() {
  try {
    const params = {
      Bucket: '<bucket name>',     // Specify the bucket name.
      Key: 'example/objectname',   // Specify the object key.
      Body: 'Hello S3',            // Specify the object content.
    }

    // Upload the object.
    const result = await s3.putObject(params).promise()

    console.log('Put object(%s) under the bucket(%s) successful!!', params.Key, params.Bucket)
    console.log('ETag: %s', result.ETag)
  } catch (error) {
    console.error('An error occurred while putting the object to S3:', error)
  }
}

putObject()

In addition, I set the following permissions in AWS:

Cross-origin resource sharing (CORS):

[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["HEAD", "GET", "PUT", "POST", "DELETE"],
    "AllowedOrigins": ["*"],
    "ExposeHeaders": []
  }
]

Bucket policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::<bucket name>/*"
    },
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::<bucket name>/*"
    },
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:PutObjectAcl",
      "Resource": "arn:aws:s3:::<bucket name>/*"
    }
  ]
}

Are there any additional steps or configurations required to properly connect and use Amazon AWS S3?

@dkoviazin (Contributor)

SBERCLOUD_OBS_CONFIG uses the Huawei Cloud SDK API (esdk-obs-nodejs), not the AWS one.

If you want to use AWS S3, you can add one more fileAdapter:

https://www.npmjs.com/package/keystone-storage-adapter-s3
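For reference, wiring that package up in a classic Keystone project looks something like this (a sketch; option names follow that package's README, and the env variable names are illustrative, not part of open-condo):

const keystone = require('keystone')
const S3Adapter = require('keystone-storage-adapter-s3')

const storage = new keystone.Storage({
  adapter: S3Adapter,
  s3: {
    key: process.env.S3_KEY,       // required
    secret: process.env.S3_SECRET, // required
    bucket: process.env.S3_BUCKET, // required
    region: process.env.S3_REGION, // optional
    path: '/uploads',
  },
  schema: {
    bucket: true, // optional; store the bucket with each file record
    etag: true,   // optional; store the S3 ETag
    path: true,   // optional; store the path within the bucket
    url: true,    // optional; store the absolute URL
  },
})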

@pahaz (Member) commented Jan 14, 2025

You can check the KSv5 docs: https://v5.keystonejs.com/keystonejs/file-adapters/#s3fileadapter
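From those docs, a minimal S3Adapter setup looks roughly like this (placeholder credentials; everything under s3Options is passed straight to the AWS.S3 constructor):

const { S3Adapter } = require('@keystonejs/file-adapters')

const fileAdapter = new S3Adapter({
  bucket: '<bucket name>',
  folder: 'uploads',
  s3Options: {
    // forwarded to new AWS.S3(...), so signatureVersion etc. could go here too
    accessKeyId: '<access_key_id>',
    secretAccessKey: '<secret_access_key>',
    region: 'eu-central-1',
  },
})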
