diff --git a/LICENSE b/License.md similarity index 87% rename from LICENSE rename to License.md index f1102159..f4e96f70 100644 --- a/LICENSE +++ b/License.md @@ -1,6 +1,6 @@ -The MIT License +The MIT License (MIT) -Copyright (c) 2011-2012 C2FO +Copyright (c) 2019 C2FO, Inc Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal @@ -9,13 +9,13 @@ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: -The above copyright notice and this permission notice shall be included in -all copies or substantial portions of the Software. +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN -THE SOFTWARE. \ No newline at end of file +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. diff --git a/README.md b/README.md index a7acc507..fbf2b723 100644 --- a/README.md +++ b/README.md @@ -1,105 +1,126 @@ -# vfs - Virtual File System -> Go library to generalize commands and behavior when interacting with various file systems. +# vfs -The vfs library includes interfaces which allow you to interact with files and locations on various file systems in a generic way. 
Currently supported file systems: -* Local fs (Windows, OS X, Linux) -* Amazon S3 -* GCS +-- -These interfaces are composed of standard Go library interfaces, allowing for simple file manipulation within, and between the supported file systems. +Package vfs provides a platform-independent, generalized set of filesystem +functionality across a number of filesystem types such as os, S3, and GCS. -At C2FO we have created a factory system that is integrated with our app configuration that allows for simply initializing the various locations we tend to do file work in. You can build your own similar system directly on top of the various file system implementations and the provided generic interfaces, or you can use the simple interface included in the vfs package. -The usage examples below will detail this simple interface. We will eventually be providing a version of our factory as an example of how this library can be used in a more complex project. -A couple notes on configuration for this interface (vfssimple.NewFile and vfssimple.NewLocation): -* Before calling either function you must initialize any file systems you expect to be using. -* Local: The local file system requires no configuration. Simply call vfssimple.InitializeLocalFileSystem so the internals are prepared to expect "file:///" URIs. -* S3: The vfssimple.InitializeS3FileSystem() method requires authentication parameters for the user, see godoc for this function. -* GCS: In addition to calling vfssimple.InitializeGSFileSystem, you are expected to have authenticated with GCS using the Google Cloud Shell for the user running the app. We will be looking into more flexible forms of authentication (similar to the S3 library) in the future, but this was an ideal use case for us to start with, and therefore, all that is currently provided. 
+### Philosophy

-## Installation

+When building our platform, initially we wrote a library that was something to
+the effect of

-OS X, Linux, and Windows:

+    if config.DISK == "S3" {
+        // do some s3 filesystem operation
+    } else if config.DISK == "mock" {
+        // fake something
+    } else {
+        // do some native os.xxx operation
+    }

-```sh
-glide install github.com/c2fo/vfs
-```

+Not only was this ugly, but the behaviors of each "filesystem" were different,
+and we had to constantly alter file locations and pass a bucket string around
+(even if the fs didn't know what a bucket was).

-## Usage example

+We found a handful of third-party libraries that were interesting, but none of
+them had everything we needed/wanted. Of particular inspiration was
+https://github.com/spf13/afero in its composition of the super-powerful stdlib
+[io.*](https://godoc.org/io) interfaces. Unfortunately, it didn't support
+Google Cloud Storage, and there was still a lot of passing around of strings
+and structs. Few, if any, of the vfs-like libraries provided interfaces to
+easily and confidently create new filesystem backends.
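The interface-driven alternative that the rest of this README builds toward can be sketched in a few lines. This is a hypothetical, pared-down illustration — the `File` interface and `memFile` type below are stand-ins, not the actual vfs API:

```go
package main

import "fmt"

// File is a made-up, minimal stand-in for the kind of interface vfs exposes;
// the real vfs.File interface also embeds io.Reader, io.Writer, etc.
type File interface {
	Name() string
	Exists() (bool, error)
}

// memFile is a hypothetical in-memory implementation used only for illustration.
type memFile struct{ name string }

func (f memFile) Name() string          { return f.name }
func (f memFile) Exists() (bool, error) { return true, nil }

// report works purely against the interface, so it never has to branch on
// config.DISK or any other backend marker.
func report(f File) string {
	ok, _ := f.Exists()
	return fmt.Sprintf("%s exists=%v", f.Name(), ok)
}

func main() {
	fmt.Println(report(memFile{name: "file.txt"}))
}
```

Once callers depend only on the interface, swapping S3 for a mock or the local disk is a matter of handing them a different implementation.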
-```go
-import "github.com/c2fo/vfs/vfssimple"

+###### What we needed/wanted was the following (and more):

-// The following functions tell vfssimple we expect to handle a particular file system in subsequent calls to
-// vfssimple.NewFile() and vfssimple.NewLocation
-// Local files, ie: "file:///"
-vfssimple.InitializeLocalFileSystem()

+* self-contained set of structs that could be passed around like a file/dir handle
+* the struct would represent an existing or nonexistent file/dir
+* provide common (and only common) functionality across all filesystems so that, after initialization, we don't care
+  what the underlying filesystem is and can therefore write our code agnostically/portably
+* use [io.*](https://godoc.org/io) interfaces such as [io.Reader](https://godoc.org/io#Reader) and [io.Writer](https://godoc.org/io#Writer) without needing to call a separate function
+* extensibility to easily add other needed filesystems like Microsoft Azure Cloud File Storage or SFTP
+* prefer native atomic functions when possible (ie S3-to-S3 moves would use the native move api call rather than
+  copy-delete)
+* a uniform way of addressing files regardless of filesystem.
This is why we use complete URIs in [vfssimple](docs/vfssimple.md)
+* [fmt.Stringer](https://godoc.org/fmt#Stringer) interface so that the file struct passed to a log message (or other [Stringer](https://godoc.org/fmt#Stringer) use) would show the URI
+* mockable filesystem
+* pluggability so that third-party implementations of our interfaces could be used

-// Google Cloud Storage, ie: "gs://"
-vfs.InitializeGSFileSystem()
-// Amazon S3, ie: "s3://"
-vfssimple.InitializeS3FileSystem(accessKeyId, secreteAccessKey, token)

+### Install

-// alternative to above for S3, if you've already initialized a client of interface s3iface.S3API
-vfssimple.SetS3Client(client)
-```

+Go install:

-You can then use those file systems to initialize locations which you'll be referencing frequently, or initialize files directly

+    go get -u github.com/c2fo/vfs/...

-```go
-osFile, err := vfssimple.NewFile("file:///path/to/file.txt")
-s3File, err := vfssimple.NewFile("s3://bucket/prefix/file.txt")

+Glide installation:

-osLocation, err := vfssimple.NewLocation("file:///tmp")
-s3Location, err := vfssimple.NewLocation("s3://bucket")

+    glide install github.com/c2fo/vfs

-osTmpFile, err := osLocation.NewFile("anotherFile.txt") // file at /tmp/anotherFile.txt
-```

-With a number of files and locations between s3 and the local file system you can perform a number of actions without any consideration for the system's api or implementation details.

+### Usage

-```go
-osFileExists, err := osFile.Exists() // true, nil
-s3FileExists, err := s3File.Exists() // false, nil
-err = osFile.CopyToFile(s3File) // nil
-s3FileExists, err = s3File.Exists() // true, nil

+We provide [vfssimple](docs/vfssimple.md) as a basic way of initializing filesystem backends (see each
+implementation's docs about authentication). [vfssimple](docs/vfssimple.md) pulls in every c2fo/vfs
+backend.
If you need to reduce the backend requirements (and app memory +footprint) or add a third party backend, you'll need to implement your own +"factory". See [backend](docs/backend.md) doc for more info. -movedOsFile, err := osFile.MoveToLocation(osLocation) -osFileExists, err = osFile.Exists() // false, nil (move actions delete the original file) -movedOsFileExists, err := movedOsFile.Exists() // true, nil +You can then use those file systems to initialize locations which you'll be +referencing frequently, or initialize files directly -s3FileUri := s3File.URI() // s3://bucket/prefix/file.txt -s3FileName := s3File.Name() // file.txt -s3FilePath := s3File.Path() // /prefix/file.txt + osFile, err := vfssimple.NewFile("file:///path/to/file.txt") + s3File, err := vfssimple.NewFile("s3://bucket/prefix/file.txt") -// vfs.File and vfs.Location implement fmt.Stringer, returning x.URI() -fmt.Sprintf("Working on file: %s", s3File) // "Working on file: s3://bucket/prefix/file.txt" -``` + osLocation, err := vfssimple.NewLocation("file:///tmp") + s3Location, err := vfssimple.NewLocation("s3://bucket") -## Development setup + osTmpFile, err := osLocation.NewFile("anotherFile.txt") // file at /tmp/anotherFile.txt -Fork the project and clone it locally, then in the cloned directory... +With a number of files and locations between s3 and the local file system you +can perform a number of actions without any consideration for the system's api +or implementation details. 
-```sh
-glide install
-go test $(glide novendor)
-```

+    osFileExists, err := osFile.Exists()  // true, nil
+    s3FileExists, err := s3File.Exists()  // false, nil
+    err = osFile.CopyToFile(s3File)       // nil
+    s3FileExists, err = s3File.Exists()   // true, nil
+
+    movedOsFile, err := osFile.MoveToLocation(osLocation)
+    osFileExists, err = osFile.Exists()            // false, nil (move actions delete the original file)
+    movedOsFileExists, err := movedOsFile.Exists() // true, nil

-## Release History

+    s3FileUri := s3File.URI()   // s3://bucket/prefix/file.txt
+    s3FileName := s3File.Name() // file.txt
+    s3FilePath := s3File.Path() // /prefix/file.txt

-* 0.1.0
-    * The first release
-    * Support for local file system, s3, and gcs
-    * Initial README.md
-* 1.0.0
-    * Apply last of bugfixes from old repo
-* 1.1.0
-    * Enable server-side encryption on S3 (matching GCS) as a more sane, secure default for files is at rest
-* 1.2.0
-    * For the S3 implementation of the File interface, ensure the file exists in S3 after it is written before continuing.

+### Third-party Backends

-## Meta

+  * none so far
+
+Feel free to send a pull request if you want to add your backend to the list.
+
+### See also:
+* [vfscp](docs/vfscp.md)
+* [vfssimple](docs/vfssimple.md)
+* [backend](docs/backend.md)
+  * [os backend](docs/os.md)
+  * [gs backend](docs/gs.md)
+  * [s3 backend](docs/s3.md)
+* [utils](docs/utils.md)
+
+
+### Ideas
+
+Things to add:
+
+* Add SFTP backend
+* Add Azure storage backend
+* Add in-memory backend
+* Provide better List() functionality with more abstracted filtering and paging (iterator?) Return File structs vs URIs?
+* Add better/any context.Context() support for deadline and cancellation
+
+### Contributors

Brought to you by the Enterprise Pipeline team at C2FO:

@@ -109,14 +130,165 @@ Jason Coble - [@jasonkcoble](https://twitter.com/jasonkcoble) - jason@c2fo.com

Chris Roush – chris.roush@c2fo.com

-Distributed under the MIT license. See ``LICENSE`` for more information.
-
-[https://github.com/c2fo/](https://github.com/c2fo/)
+https://github.com/c2fo/

-## Contributing
+### Contributing

 1. Fork it ()
-2. Create your feature branch (`git checkout -b feature/fooBar`)
-3. Commit your changes (`git commit -am 'Add some fooBar'`)
-4. Push to the branch (`git push origin feature/fooBar`)
-5. Create a new Pull Request
\ No newline at end of file
+1. Create your feature branch (`git checkout -b feature/fooBar`)
+1. Commit your changes (`git commit -am 'Add some fooBar'`)
+1. Push to the branch (`git push origin feature/fooBar`)
+1. Create a new Pull Request
+
+
+### License
+
+Distributed under the MIT license. See `http://github.com/c2fo/vfs/License.md`
+for more information.
+
+## Interfaces
+
+#### type File
+
+```go
+type File interface {
+	io.Closer
+	io.Reader
+	io.Seeker
+	io.Writer
+	fmt.Stringer
+
+	// Exists returns a boolean of whether the file exists on the file system. Also returns an error, if any.
+	Exists() (bool, error)
+
+	// Location returns the vfs.Location for the File.
+	Location() Location
+
+	// CopyToLocation will copy the current file to the provided location. If the file already exists at the location,
+	// the contents will be overwritten with the current file's contents. In the case of an error, nil is returned
+	// for the file.
+	CopyToLocation(location Location) (File, error)
+
+	// CopyToFile will copy the current file to the provided file instance. If the file already exists,
+	// the contents will be overwritten with the current file's contents.
+	CopyToFile(File) error
+
+	// MoveToLocation will move the current file to the provided location. If the file already exists at the location,
+	// the contents will be overwritten with the current file's contents. In the case of an error, nil is returned
+	// for the file.
+	MoveToLocation(location Location) (File, error)
+
+	// MoveToFile will move the current file to the provided file instance.
If a file with the current file's name already exists, + // the contents will be overwritten with the current file's contents. The current instance of the file will be removed. + MoveToFile(File) error + + // Delete unlinks the File on the filesystem. + Delete() error + + // LastModified returns the timestamp the file was last modified (as *time.Time). + LastModified() (*time.Time, error) + + // Size returns the size of the file in bytes. + Size() (uint64, error) + + // Path returns absolute path (with leading slash) including filename, ie /some/path/to/file.txt + Path() string + + // Name returns the base name of the file path. For file:///some/path/to/file.txt, it would return file.txt + Name() string + + // URI returns the fully qualified URI for the File. IE, s3://bucket/some/path/to/file.txt + URI() string +} +``` + +File represents a file on a filesystem. A File may or may not actually exist on +the filesystem. + +#### type FileSystem + +```go +type FileSystem interface { + // NewFile initializes a File on the specified volume at path 'name'. On error, nil is returned + // for the file. + NewFile(volume string, name string) (File, error) + + // NewLocation initializes a Location on the specified volume with the given path. On error, nil is returned + // for the location. + NewLocation(volume string, path string) (Location, error) + + // Name returns the name of the FileSystem ie: s3, disk, gcs, etc... + Name() string + + // Scheme, related to Name, is the uri scheme used by the FileSystem: s3, file, gs, etc... + Scheme() string +} +``` + +FileSystem represents a filesystem with any authentication accounted for. + +#### type Location + +```go +type Location interface { + fmt.Stringer + + // List returns a slice of strings representing the base names of the files found at the Location. All implementations + // are expected to return ([]string{}, nil) in the case of a non-existent directory/prefix/location. 
If the user
+	// cares about the distinction between an empty location and a non-existent one, Location.Exists() should be checked
+	// first.
+	List() ([]string, error)
+
+	// ListByPrefix returns a slice of strings representing the base names of the files found in Location whose
+	// filenames match the given prefix. An empty slice will be returned even for locations that don't exist.
+	ListByPrefix(prefix string) ([]string, error)
+
+	// ListByRegex returns a slice of strings representing the base names of the files found in Location that
+	// matched the given regular expression. An empty slice will be returned even for locations that don't exist.
+	ListByRegex(regex *regexp.Regexp) ([]string, error)
+
+	// Volume returns the volume as a string. Some filesystems may not have a volume and will return "". In URI parlance,
+	// volume equates to authority. For example s3://mybucket/path/to/file.txt, Volume would return "mybucket".
+	Volume() string
+
+	// Path returns the absolute path to the Location with leading and trailing slashes, ie /some/path/to/
+	Path() string
+
+	// Exists returns a boolean of whether the location exists on the file system. Also returns an error, if any.
+	Exists() (bool, error)
+
+	// NewLocation is an initializer for a new Location relative to the existing one. For instance, for location:
+	// file://some/path/to/, calling NewLocation("../../") would return a new vfs.Location representing file://some/.
+	// The new location instance should be on the same file system volume as the location it originated from.
+	NewLocation(relativePath string) (Location, error)
+
+	// ChangeDir updates the existing Location's path to the provided relative path. For instance, for location:
+	// file://some/path/to/, calling ChangeDir("../../") updates the location instance to file://some/.
+	ChangeDir(relativePath string) error
+
+	// FileSystem returns the underlying vfs.FileSystem for the Location.
+	FileSystem() FileSystem
+
+	// NewFile will instantiate a vfs.File instance at the current location's path. In the case of an error,
+	// nil will be returned.
+	NewFile(fileName string) (File, error)
+
+	// DeleteFile deletes the file of the given name at the location. This is meant to be a shortcut for
+	// instantiating a new file and calling Delete on it, sparing the caller that error-handling overhead.
+	DeleteFile(fileName string) error
+
+	// URI returns the fully qualified URI for the Location. IE, file://bucket/some/path/
+	URI() string
+}
+```
+
+Location represents a filesystem path which serves as a starting point for
+directory-like functionality. A location may or may not actually exist on the
+filesystem.
+
+#### type Options
+
+```go
+type Options interface{}
+```
diff --git a/backend/all/all.go b/backend/all/all.go
new file mode 100644
index 00000000..242e49d6
--- /dev/null
+++ b/backend/all/all.go
@@ -0,0 +1,8 @@
+// Package all imports all VFS implementations.
+package all
+
+import (
+	_ "github.com/c2fo/vfs/backend/gs"
+	_ "github.com/c2fo/vfs/backend/os"
+	_ "github.com/c2fo/vfs/backend/s3"
+)
diff --git a/backend/backend.go b/backend/backend.go
new file mode 100644
index 00000000..e029f4bd
--- /dev/null
+++ b/backend/backend.go
@@ -0,0 +1,56 @@
+package backend
+
+import (
+	"sort"
+	"sync"
+
+	"github.com/c2fo/vfs"
+)
+
+var mmu sync.RWMutex
+var m map[string]vfs.FileSystem
+
+// Register registers a new filesystem in the backend map
+func Register(name string, v vfs.FileSystem) {
+	mmu.Lock()
+	m[name] = v
+	mmu.Unlock()
+}
+
+// Unregister unregisters a filesystem from the backend map
+func Unregister(name string) {
+	mmu.Lock()
+	delete(m, name)
+	mmu.Unlock()
+}
+
+// UnregisterAll unregisters all filesystems from the backend map
+func UnregisterAll() {
+	// mainly for tests
+	mmu.Lock()
+	m = make(map[string]vfs.FileSystem)
+	mmu.Unlock()
+}
+
+// Backend returns the backend filesystem by name
+func Backend(name string) vfs.FileSystem {
+	mmu.RLock()
	defer mmu.RUnlock()
+	return m[name]
+}
+
+// RegisteredBackends returns an array of registered backend names
+func RegisteredBackends() []string {
+	var f []string
+	mmu.RLock()
+	for k := range m {
+		f = append(f, k)
+	}
+	mmu.RUnlock()
+	sort.Strings(f)
+	return f
+}
+
+func init() {
+	m = make(map[string]vfs.FileSystem)
+}
diff --git a/backend/doc.go b/backend/doc.go
new file mode 100644
index 00000000..06bd0cf5
--- /dev/null
+++ b/backend/doc.go
@@ -0,0 +1,78 @@
+/*
+Package backend provides a means of allowing backend filesystems to self-register on load via an init() call to
+backend.Register("some name", vfs.FileSystem)
+
+In this way, a caller of vfs backends can simply load the backend filesystems it needs (and ONLY those) and begin using them:
+
+	package main
+
+	// import backend and each backend you intend to use
+	import(
+	    "github.com/c2fo/vfs"
+	    "github.com/c2fo/vfs/backend"
+	    "github.com/c2fo/vfs/backend/os"
+	    "github.com/c2fo/vfs/backend/s3"
+	)
+
+	func main() {
+	    var err error
+	    var osfile, s3file vfs.File
+
+	    // THEN begin using the filesystems
+	    osfile, err = backend.Backend(os.Scheme).NewFile("", "/path/to/file.txt")
+	    if err != nil {
+	        panic(err)
+	    }
+
+	    s3file, err = backend.Backend(s3.Scheme).NewFile("mybucket", "/some/file.txt")
+	    if err != nil {
+	        panic(err)
+	    }
+
+	    err = osfile.CopyToFile(s3file)
+	    if err != nil {
+	        panic(err)
+	    }
+	}
+
+Development
+
+To create your own backend, you must create a package that implements the interfaces: vfs.FileSystem, vfs.Location, and vfs.File.
+Then ensure it registers itself on load:
+
+	package myexoticfilesystem
+
+	import(
+	    ...
+	    "github.com/c2fo/vfs"
+	    "github.com/c2fo/vfs/backend"
+	)
+
+	// IMPLEMENT vfs interfaces
+	...
+
+	// register backend
+	func init() {
+	    backend.Register(
+	        "My Exotic Filesystem",
+	        &MyExoticFilesystem{},
+	    )
+	}
+
+Then, to use it in some other package, do
+
+	package main
+
+	import(
+	    "github.com/c2fo/vfs/backend"
+	    "github.com/acme/myexoticfilesystem"
+	)
+
+	...
+
+	func useNewBackend() error {
+	    myExoticFs := backend.Backend(myexoticfilesystem.Scheme)
+	    ...
+	}
+
+That's it. Simple.
+*/
+package backend
diff --git a/backend/gs/doc.go b/backend/gs/doc.go
new file mode 100644
index 00000000..90563f3a
--- /dev/null
+++ b/backend/gs/doc.go
@@ -0,0 +1,75 @@
+/*
+Package gs provides a Google Cloud Storage VFS implementation.
+
+Usage
+
+Rely on github.com/c2fo/vfs/backend:
+
+	import(
+	    "github.com/c2fo/vfs/backend"
+	    _ "github.com/c2fo/vfs/backend/gs"
+	)
+
+	func UseFs() error {
+	    fs := backend.Backend("gs")
+	    ...
+	}
+
+Or call directly:
+
+	import "github.com/c2fo/vfs/backend/gs"
+
+	func DoSomething() {
+	    fs := gs.NewFileSystem()
+	    ...
+	}
+
+gs can be augmented with the following implementation-specific methods. Backend returns the vfs.FileSystem interface, so it
+would have to be cast as *gs.FileSystem to use the following:
+
+	func DoSomething() {
+
+	    ...
+
+	    // cast if fs was created using backend.Backend(). Not necessary if created directly from gs.NewFileSystem().
+	    fs = fs.(*gs.FileSystem)
+
+	    // to use your own "context"
+	    ctx := context.Background()
+	    fs = fs.WithContext(ctx)
+
+	    // to pass in client options
+	    fs = fs.WithOptions(
+	        gs.Options{
+	            CredentialFile: "/root/.gcloud/account.json",
+	            Scopes:         []string{"ScopeReadOnly"},
+	            //default scope is "ScopeFullControl"
+	        },
+	    )
+
+	    // to pass a specific client, for instance a no-auth client
+	    ctx := context.Background()
+	    client, _ := storage.NewClient(ctx, option.WithoutAuthentication())
+	    fs = fs.WithClient(client)
+	}
+
+Authentication
+
+Authentication, by default, occurs automatically when Client() is called. It looks for credentials in the following places,
+preferring the first location found:
+
+	1. A JSON file whose path is specified by the GOOGLE_APPLICATION_CREDENTIALS environment variable
+	2. A JSON file in a location known to the gcloud command-line tool.
+	   On Windows, this is %APPDATA%/gcloud/application_default_credentials.json.
	   On other systems, $HOME/.config/gcloud/application_default_credentials.json.
+	3. On Google App Engine it uses the appengine.AccessToken function.
+	4. On Google Compute Engine and Google App Engine Managed VMs, it fetches credentials from the metadata server.
+
+See https://cloud.google.com/docs/authentication/production for more auth info.
+
+See Also
+
+See: https://github.com/googleapis/google-cloud-go/tree/master/storage
+
+*/
+package gs
diff --git a/gs/file.go b/backend/gs/file.go
similarity index 83%
rename from gs/file.go
rename to backend/gs/file.go
index b4878f3c..d85108ac 100644
--- a/gs/file.go
+++ b/backend/gs/file.go
@@ -2,6 +2,7 @@ package gs

 import (
 	"bytes"
+	"context"
 	"errors"
 	"fmt"
 	"io"
@@ -13,35 +14,31 @@ import (
 	"cloud.google.com/go/storage"

 	"github.com/c2fo/vfs"
+	"github.com/c2fo/vfs/utils"
 )

 const (
 	doesNotExistError = "storage: object doesn't exist"
 )

+// File implements the vfs.File interface for the GS fs.
 type File struct {
-	fileSystem   *FileSystem
-	bucket       string
-	key          string
-	tempFile     *os.File
-	writeBuffer  *bytes.Buffer
-	bucketHandle *storage.BucketHandle
-	objectHandle *storage.ObjectHandle
+	fileSystem  *FileSystem
+	bucket      string
+	key         string
+	tempFile    *os.File
+	writeBuffer *bytes.Buffer
 }

 // Close cleans up underlying mechanisms for reading from and writing to the file. Closes and removes the
 // local temp file, and triggers a write to GCS of anything in the f.writeBuffer if it has been created.
-func (f *File) Close() (rerr error) { - //setup multi error return using named error rerr - errs := vfs.NewMutliErr() - defer func() { rerr = errs.OrNil() }() - +func (f *File) Close() error { if f.tempFile != nil { - defer errs.DeferFunc(f.tempFile.Close) + defer f.tempFile.Close() err := os.Remove(f.tempFile.Name()) if err != nil && !os.IsNotExist(err) { - return errs.Append(err) + return err } f.tempFile = nil @@ -49,13 +46,19 @@ func (f *File) Close() (rerr error) { if f.writeBuffer != nil { - w := f.getObjectHandle().NewWriter(f.fileSystem.ctx) + handle, err := f.getObjectHandle() + if err != nil { + return err + } + + ctx, cancel := context.WithCancel(f.fileSystem.ctx) + w := handle.NewWriter(ctx) if _, err := io.Copy(w, f.writeBuffer); err != nil { - //CloseWithError always returns nil - _ = w.CloseWithError(err) - return errs.Append(err) + //cancel context (replaces CloseWithError) + cancel() + return err } - defer errs.DeferFunc(w.Close) + defer w.Close() } f.writeBuffer = nil @@ -122,7 +125,7 @@ func (f *File) Exists() (bool, error) { func (f *File) Location() vfs.Location { return vfs.Location(&Location{ fileSystem: f.fileSystem, - prefix: vfs.EnsureTrailingSlash(vfs.CleanPrefix(path.Dir(f.key))), + prefix: utils.EnsureTrailingSlash(utils.CleanPrefix(path.Dir(f.key))), bucket: f.bucket, }) } @@ -170,7 +173,7 @@ func (f *File) CopyToFile(targetFile vfs.File) error { return f.copyWithinGCSToFile(tf) } - if err := vfs.TouchCopy(targetFile, f); err != nil { + if err := utils.TouchCopy(targetFile, f); err != nil { return err } //Close target to flush and ensure that cursor isn't at the end of the file when the caller reopens for read @@ -215,7 +218,11 @@ func (f *File) Delete() error { if err := f.Close(); err != nil { return err } - return f.getObjectHandle().Delete(f.fileSystem.ctx) + handle, err := f.getObjectHandle() + if err != nil { + return err + } + return handle.Delete(f.fileSystem.ctx) } // LastModified returns the 'Updated' property from the GCS 
attributes. @@ -227,7 +234,7 @@ func (f *File) LastModified() (*time.Time, error) { return &attr.Updated, nil } -// LastModified returns the 'Size' property from the GCS attributes. +// Size returns the 'Size' property from the GCS attributes. func (f *File) Size() (uint64, error) { attr, err := f.getObjectAttrs() if err != nil { @@ -248,7 +255,7 @@ func (f *File) Name() string { // URI returns a full GCS URI string of the file. func (f *File) URI() string { - return vfs.GetFileURI(vfs.File(f)) + return utils.GetFileURI(vfs.File(f)) } func (f *File) checkTempFile() error { @@ -268,7 +275,12 @@ func (f *File) copyToLocalTempReader() (*os.File, error) { return nil, err } - outputReader, err := f.getObjectHandle().NewReader(f.fileSystem.ctx) + handle, err := f.getObjectHandle() + if err != nil { + return nil, err + } + + outputReader, err := handle.NewReader(f.fileSystem.ctx) if err != nil { return nil, err } @@ -293,32 +305,35 @@ func (f *File) copyToLocalTempReader() (*os.File, error) { return tmpFile, nil } -// getBucketHandle returns cached Bucket struct for file -func (f *File) getBucketHandle() *storage.BucketHandle { - if f.bucketHandle != nil { - return f.bucketHandle - } - f.bucketHandle = f.fileSystem.client.Bucket(f.bucket) - return f.bucketHandle -} - // getObjectHandle returns cached Object struct for file -func (f *File) getObjectHandle() *storage.ObjectHandle { - if f.objectHandle != nil { - return f.objectHandle +func (f *File) getObjectHandle() (*storage.ObjectHandle, error) { + client, err := f.fileSystem.Client() + if err != nil { + return nil, err } - f.objectHandle = f.getBucketHandle().Object(f.key) - return f.objectHandle + return client.Bucket(f.bucket).Object(f.key), nil } // getObjectAttrs returns the file's attributes func (f *File) getObjectAttrs() (*storage.ObjectAttrs, error) { - return f.getObjectHandle().Attrs(f.fileSystem.ctx) + handle, err := f.getObjectHandle() + if err != nil { + return nil, err + } + return 
handle.Attrs(f.fileSystem.ctx)
 }

 func (f *File) copyWithinGCSToFile(targetFile *File) error {
+	tHandle, err := targetFile.getObjectHandle()
+	if err != nil {
+		return err
+	}
+	fHandle, err := f.getObjectHandle()
+	if err != nil {
+		return err
+	}
 	// Copy content and modify metadata.
-	copier := targetFile.getObjectHandle().CopierFrom(f.getObjectHandle())
+	copier := tHandle.CopierFrom(fHandle)
 	attrs, gerr := f.getObjectAttrs()
 	if gerr != nil {
 		return gerr
@@ -330,7 +345,7 @@ func (f *File) copyWithinGCSToFile(targetFile *File) error {
 	}

 	// Just copy content.
-	_, err := targetFile.getObjectHandle().CopierFrom(f.getObjectHandle()).Run(f.fileSystem.ctx)
+	_, err = tHandle.CopierFrom(fHandle).Run(f.fileSystem.ctx)
 	return err
 }

@@ -344,7 +359,7 @@ func newFile(fs *FileSystem, bucket, key string) (*File, error) {
 	if bucket == "" || key == "" {
 		return nil, errors.New("non-empty strings for Bucket and Key are required")
 	}
-	key = vfs.CleanPrefix(key)
+	key = utils.CleanPrefix(key)
 	return &File{
 		fileSystem: fs,
 		bucket:     bucket,
diff --git a/backend/gs/fileSystem.go b/backend/gs/fileSystem.go
new file mode 100644
index 00000000..6d19f1ae
--- /dev/null
+++ b/backend/gs/fileSystem.go
@@ -0,0 +1,94 @@
+package gs
+
+import (
+	"cloud.google.com/go/storage"
+	"golang.org/x/net/context"
+
+	"github.com/c2fo/vfs"
+	"github.com/c2fo/vfs/backend"
+	"github.com/c2fo/vfs/utils"
+)
+
+// Scheme defines the filesystem type.
+const Scheme = "gs"
+const name = "Google Cloud Storage"
+
+// FileSystem implements vfs.FileSystem for the GCS filesystem.
+type FileSystem struct {
+	client  *storage.Client
+	ctx     context.Context
+	options vfs.Options
+}
+
+// NewFile function returns the gcs implementation of vfs.File.
+func (fs *FileSystem) NewFile(volume string, name string) (vfs.File, error) {
+	return newFile(fs, volume, name)
+}
+
+// NewLocation function returns the gcs implementation of vfs.Location.
+func (fs *FileSystem) NewLocation(volume string, path string) (loc vfs.Location, err error) {
+	loc = &Location{
+		fileSystem: fs,
+		bucket:     volume,
+		prefix:     utils.EnsureTrailingSlash(path),
+	}
+	return
+}
+
+// Name returns "Google Cloud Storage"
+func (fs *FileSystem) Name() string {
+	return name
+}
+
+// Scheme returns "gs" as the initial part of a file URI ie: gs://
+func (fs *FileSystem) Scheme() string {
+	return Scheme
+}
+
+// Client returns the underlying google storage client, creating it, if necessary.
+// See Overview for authentication resolution.
+func (fs *FileSystem) Client() (*storage.Client, error) {
+	if fs.client == nil {
+		gsClientOpts := parseClientOptions(fs.options)
+		client, err := storage.NewClient(fs.ctx, gsClientOpts...)
+		if err != nil {
+			return nil, err
+		}
+		fs.client = client
+	}
+	return fs.client, nil
+}
+
+// WithOptions sets options for the client and returns the filesystem (chainable)
+func (fs *FileSystem) WithOptions(opts vfs.Options) *FileSystem {
+	fs.options = opts
+	//we set client to nil to ensure that a new client is created using the new options when Client() is called
+	fs.client = nil
+	return fs
+}
+
+// WithContext passes in a user context and returns the filesystem (chainable)
+func (fs *FileSystem) WithContext(ctx context.Context) *FileSystem {
+	fs.ctx = ctx
+	//we set client to nil to ensure that a new client is created using the new context when Client() is called
+	fs.client = nil
+	return fs
+}
+
+// WithClient passes in a google storage client and returns the filesystem (chainable)
+func (fs *FileSystem) WithClient(client *storage.Client) *FileSystem {
+	fs.client = client
+	return fs
+}
+
+// NewFileSystem is an initializer for the FileSystem struct. It returns a FileSystem with a default background context.
+func NewFileSystem() *FileSystem { + fs := &FileSystem{} + fs = fs.WithContext(context.Background()) + return fs +} + +func init() { + //registers a default Filesystem + backend.Register(Scheme, NewFileSystem()) +} diff --git a/gs/location.go b/backend/gs/location.go similarity index 78% rename from gs/location.go rename to backend/gs/location.go index 3c511e2c..1261bda7 100644 --- a/gs/location.go +++ b/backend/gs/location.go @@ -1,6 +1,3 @@ -// Google Cloud Storage VFS implementation. -// -// See: https://github.com/GoogleCloudPlatform/google-cloud-go. package gs import ( @@ -12,9 +9,10 @@ import ( "google.golang.org/api/iterator" "github.com/c2fo/vfs" + "github.com/c2fo/vfs/utils" ) -// Implements vfs.Location +// Location implements vfs.Location for gs fs. type Location struct { fileSystem *FileSystem prefix string @@ -22,7 +20,7 @@ type Location struct { bucketHandle *storage.BucketHandle } -// String returns the full URI of the file. +// String returns the full URI of the location. func (l *Location) String() string { return l.URI() } @@ -43,7 +41,12 @@ func (l *Location) ListByPrefix(filenamePrefix string) ([]string, error) { Versions: false, } - it := l.getBucketHandle().Objects(l.fileSystem.ctx, q) + handle, err := l.getBucketHandle() + if err != nil { + return nil, err + } + + it := handle.Objects(l.fileSystem.ctx, q) var fileNames []string for { @@ -62,14 +65,14 @@ func (l *Location) ListByPrefix(filenamePrefix string) ([]string, error) { return fileNames, nil } -// ListByPrefix returns a list of file names at the location which match the provided regular expression. +// ListByRegex returns a list of file names at the location which match the provided regular expression. 
func (l *Location) ListByRegex(regex *regexp.Regexp) ([]string, error) { keys, err := l.List() if err != nil { return []string{}, err } - filteredKeys := []string{} + var filteredKeys []string for _, key := range keys { if regex.MatchString(key) { filteredKeys = append(filteredKeys, key) @@ -85,7 +88,7 @@ func (l *Location) Volume() string { // Path returns the path of the file at the current location, starting with a leading '/' func (l *Location) Path() string { - return "/" + vfs.EnsureTrailingSlash(l.prefix) + return "/" + utils.EnsureTrailingSlash(l.prefix) } // Exists returns whether the location exists or not. In the case of an error, false is returned. @@ -113,7 +116,7 @@ func (l *Location) NewLocation(relativePath string) (vfs.Location, error) { // ChangeDir changes the current location's path to the new, relative path. func (l *Location) ChangeDir(relativePath string) error { newPrefix := path.Join(l.prefix, relativePath) - l.prefix = vfs.EnsureTrailingSlash(vfs.CleanPrefix(newPrefix)) + l.prefix = utils.EnsureTrailingSlash(utils.CleanPrefix(newPrefix)) return nil } @@ -137,21 +140,30 @@ func (l *Location) DeleteFile(fileName string) error { return file.Delete() } -// URI returns a URI string for the GCS file. +// URI returns a URI string for the GCS location. 
func (l *Location) URI() string { - return vfs.GetLocationURI(l) + return utils.GetLocationURI(l) } // getBucketHandle returns cached Bucket struct for file -func (l *Location) getBucketHandle() *storage.BucketHandle { +func (l *Location) getBucketHandle() (*storage.BucketHandle, error) { if l.bucketHandle != nil { - return l.bucketHandle + return l.bucketHandle, nil } - l.bucketHandle = l.fileSystem.client.Bucket(l.bucket) - return l.bucketHandle + + client, err := l.fileSystem.Client() + if err != nil { + return nil, err + } + l.bucketHandle = client.Bucket(l.bucket) + return l.bucketHandle, nil } // getObjectAttrs returns the file's attributes func (l *Location) getBucketAttrs() (*storage.BucketAttrs, error) { - return l.getBucketHandle().Attrs(l.fileSystem.ctx) + handle, err := l.getBucketHandle() + if err != nil { + return nil, err + } + return handle.Attrs(l.fileSystem.ctx) } diff --git a/backend/gs/options.go b/backend/gs/options.go new file mode 100644 index 00000000..3f64e03c --- /dev/null +++ b/backend/gs/options.go @@ -0,0 +1,35 @@ +package gs + +import ( + "google.golang.org/api/option" + + "github.com/c2fo/vfs" +) + +// Options holds Google Cloud Storage -specific options. Currently only client options are used. 
+type Options struct {
+	APIKey         string   `json:"apiKey,omitempty"`
+	CredentialFile string   `json:"credentialFilePath,omitempty"`
+	Endpoint       string   `json:"endpoint,omitempty"`
+	Scopes         []string `json:"scopes,omitempty"`
+}
+
+func parseClientOptions(opts vfs.Options) []option.ClientOption {
+	googleClientOpts := []option.ClientOption{}
+
+	// we only care about 'gs.Options' types, skip anything else
+	if opts, ok := opts.(Options); ok {
+		switch {
+		case opts.APIKey != "":
+			googleClientOpts = append(googleClientOpts, option.WithAPIKey(opts.APIKey))
+		case opts.CredentialFile != "":
+			//TODO: this is Deprecated: Use WithCredentialsFile instead (once we update google cloud sdk)
+			googleClientOpts = append(googleClientOpts, option.WithServiceAccountFile(opts.CredentialFile))
+		case opts.Endpoint != "":
+			googleClientOpts = append(googleClientOpts, option.WithEndpoint(opts.Endpoint))
+		case len(opts.Scopes) > 0:
+			googleClientOpts = append(googleClientOpts, option.WithScopes(opts.Scopes...))
+		}
+	}
+	return googleClientOpts
+}
diff --git a/backend/os/doc.go b/backend/os/doc.go
new file mode 100644
index 00000000..2dc51056
--- /dev/null
+++ b/backend/os/doc.go
@@ -0,0 +1,31 @@
+/*
+Package os built-in os lib VFS implementation.
+
+Usage
+
+Rely on github.com/c2fo/vfs/backend
+
+	import(
+		"github.com/c2fo/vfs/backend"
+		_ "github.com/c2fo/vfs/backend/os"
+	)
+
+	func UseFs() error {
+		fs, err := backend.Backend("file")
+		...
+	}
+
+Or call directly:
+
+	import _os "github.com/c2fo/vfs/backend/os"
+
+	func DoSomething() {
+		fs := &_os.FileSystem{}
+		...
+ } + +See Also + +See: https://golang.org/pkg/os/ +*/ +package os diff --git a/os/file.go b/backend/os/file.go similarity index 94% rename from os/file.go rename to backend/os/file.go index b1d0be3a..bfe49343 100644 --- a/os/file.go +++ b/backend/os/file.go @@ -1,7 +1,6 @@ package os import ( - "errors" "fmt" "os" "path" @@ -9,6 +8,7 @@ import ( "time" "github.com/c2fo/vfs" + "github.com/c2fo/vfs/utils" ) //File implements vfs.File interface for S3 fs. @@ -28,9 +28,9 @@ func newFile(name string) (*File, error) { fullPath = filepath.Dir(fullPath) - fullPath = vfs.AddTrailingSlash(fullPath) + fullPath = utils.AddTrailingSlash(fullPath) - location := Location{fileSystem: vfs.FileSystem(new(FileSystem)), name: fullPath} + location := Location{fileSystem: &FileSystem{}, name: fullPath} return &File{name: fileName, location: &location}, nil } @@ -93,7 +93,7 @@ func (f *File) Read(p []byte) (int, error) { if exists, err := f.Exists(); err != nil { return 0, err } else if !exists { - return 0, errors.New(fmt.Sprintf("Failed to read. File does not exist at %s", f)) + return 0, fmt.Errorf("failed to read. File does not exist at %s", f) } file, err := f.openFile() @@ -186,7 +186,7 @@ func (f *File) CopyToLocation(location vfs.Location) (vfs.File, error) { // URI returns the File's URI as a string. func (f *File) URI() string { - return vfs.GetFileURI(f) + return utils.GetFileURI(f) } // String implement fmt.Stringer, returning the file's URI as the default string. 
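Editor's note on the os backend's `newFile` above: it derives a file's Location by splitting the full path into a directory (normalized with a trailing slash, the job `utils.AddTrailingSlash` does here) and a bare file name. A minimal stdlib-only sketch of that split, assuming a Unix-style path; `splitPath` is an illustrative name, not part of the vfs API:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// splitPath mimics the os backend's newFile logic: a file path is split
// into its location (directory, always with a trailing slash) and a
// file name. Illustrative helper only, not the library's code.
func splitPath(name string) (location, fileName string) {
	fileName = filepath.Base(name)
	location = filepath.Dir(name)
	// ensure exactly one trailing slash, as utils.AddTrailingSlash does
	if location[len(location)-1] != '/' {
		location += "/"
	}
	return location, fileName
}

func main() {
	loc, file := splitPath("/some/path/to/file.txt")
	fmt.Println(loc, file) // /some/path/to/ file.txt
}
```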
@@ -200,7 +200,7 @@ func (f *File) copyWithName(name string, location vfs.Location) (vfs.File, error return nil, err } - if err := vfs.TouchCopy(newFile, f); err != nil { + if err := utils.TouchCopy(newFile, f); err != nil { return nil, err } fCloseErr := f.Close() diff --git a/backend/os/fileSystem.go b/backend/os/fileSystem.go new file mode 100644 index 00000000..d0c6c284 --- /dev/null +++ b/backend/os/fileSystem.go @@ -0,0 +1,42 @@ +package os + +import ( + "github.com/c2fo/vfs" + "github.com/c2fo/vfs/backend" + "github.com/c2fo/vfs/utils" +) + +//Scheme defines the filesystem type. +const Scheme = "file" +const name = "os" + +// FileSystem implements vfs.Filesystem for the OS filesystem. +type FileSystem struct{} + +// NewFile function returns the os implementation of vfs.File. +func (fs *FileSystem) NewFile(volume string, name string) (vfs.File, error) { + file, err := newFile(name) + return file, err +} + +// NewLocation function returns the os implementation of vfs.Location. +func (fs *FileSystem) NewLocation(volume string, name string) (vfs.Location, error) { + return &Location{ + fileSystem: fs, + name: utils.AddTrailingSlash(name), + }, nil +} + +// Name returns "os" +func (fs *FileSystem) Name() string { + return name +} + +// Scheme return "file" as the initial part of a file URI ie: file:// +func (fs *FileSystem) Scheme() string { + return Scheme +} + +func init() { + backend.Register(Scheme, &FileSystem{}) +} diff --git a/os/file_test.go b/backend/os/file_test.go similarity index 87% rename from os/file_test.go rename to backend/os/file_test.go index 5158cd2a..a58f653a 100644 --- a/os/file_test.go +++ b/backend/os/file_test.go @@ -10,7 +10,6 @@ import ( "github.com/stretchr/testify/mock" "github.com/stretchr/testify/suite" - "fmt" "github.com/c2fo/vfs" "github.com/c2fo/vfs/mocks" ) @@ -22,11 +21,19 @@ import ( type osFileTest struct { suite.Suite testFile vfs.File - fileSystem FileSystem + fileSystem *FileSystem +} + +func (s *osFileTest) SetupSuite() 
{ + setupTestFiles() +} + +func (s *osFileTest) TearDownSuite() { + teardownTestFiles() } func (s *osFileTest) SetupTest() { - fs := FileSystem{} + fs := &FileSystem{} file, err := fs.NewFile("", "test_files/test.txt") if err != nil { @@ -103,7 +110,7 @@ func (s *osFileTest) TestCopyToLocation() { func (s *osFileTest) TestCopyToFile() { expectedText := "hello world" - otherFs := new(mocks.FileSystem) + otherFs := &mocks.FileSystem{} otherFile := new(mocks.File) location := Location{"/some/path", otherFs} @@ -361,10 +368,69 @@ func (s *osFileTest) TestURI() { func (s *osFileTest) TestStringer() { file, _ := s.fileSystem.NewFile("", "/some/file/test.txt") - s.Equal("file:///some/file/test.txt", fmt.Sprintf("%s", file)) + s.Equal("file:///some/file/test.txt", file.String()) } func TestOSFile(t *testing.T) { suite.Run(t, new(osFileTest)) _ = os.Remove("test_files/new.txt") } + +/* + Setup TEST FILES +*/ +func setupTestFiles() { + + // setup "test_files" dir + createDir("test_files") + + // setup "test_files/test.txt" + writeStringFile("test_files/empty.txt", ``) + + // setup "test_files/test.txt" + writeStringFile("test_files/prefix-file.txt", `hello, Dave`) + + // setup "test_files/test.txt" + writeStringFile("test_files/test.txt", `hello world`) + + // setup "test_files/subdir" dir + createDir("test_files/subdir") + + // setup "test_files/subdir/test.txt" + writeStringFile("test_files/subdir/test.txt", `hello world too`) +} + +func teardownTestFiles() { + err := os.RemoveAll("test_files") + if err != nil { + panic(err) + } +} + +func createDir(dirname string) { + + perm := os.FileMode(0755) + err := os.Mkdir(dirname, perm) + if err != nil { + teardownTestFiles() + panic(err) + } +} + +func writeStringFile(filename, data string) { + f, err := os.Create(filename) + if err != nil { + teardownTestFiles() + panic(err) + } + _, err = f.WriteString(data) + if err != nil { + teardownTestFiles() + panic(err) + } + err = f.Close() + if err != nil { + teardownTestFiles() + 
panic(err) + } +} diff --git a/os/location.go b/backend/os/location.go similarity index 93% rename from os/location.go rename to backend/os/location.go index 248150f4..dc3b6bdc 100644 --- a/os/location.go +++ b/backend/os/location.go @@ -8,6 +8,7 @@ import ( "strings" "github.com/c2fo/vfs" + "github.com/c2fo/vfs/utils" ) //Location implements the vfs.Location interface specific to OS fs. @@ -42,7 +43,7 @@ func (l *Location) List() ([]string, error) { // ListByPrefix returns a slice of all files starting with "prefix" in the top directory of of the location. func (l *Location) ListByPrefix(prefix string) ([]string, error) { - if err := vfs.ValidateFilePrefix(prefix); err != nil { + if err := utils.ValidateFilePrefix(prefix); err != nil { return nil, err } return l.fileList(func(name string) bool { @@ -83,7 +84,7 @@ func (l *Location) fileList(testEval fileTest) ([]string, error) { return files, nil } -// Volume returns if any of of the location. Given "C:\foo\bar" it returns "C:" on Windows. On other platforms it returns "". +// Volume returns the volume, if any, of the location. Given "C:\foo\bar" it returns "C:" on Windows. On other platforms it returns "". func (l *Location) Volume() string { return filepath.VolumeName(l.name) } @@ -109,7 +110,7 @@ func (l *Location) Exists() (bool, error) { // URI returns the Location's URI as a string. func (l *Location) URI() string { - return vfs.GetLocationURI(l) + return utils.GetLocationURI(l) } // String implement fmt.Stringer, returning the location's URI as the default string. 
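A note on the os `Location.NewLocation` hunk above: a relative path is resolved against the location's current name and then re-normalized with a trailing slash. A hedged stdlib-only sketch of that prefix handling, under the assumption of Unix-style paths (`resolveLocation` is an illustrative name, not a vfs function):

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// resolveLocation sketches the prefix handling in NewLocation/ChangeDir:
// join the current location path with the relative path, then normalize
// so the result ends with exactly one trailing slash. Illustrative only.
func resolveLocation(current, relativePath string) string {
	newPath := path.Join(current, relativePath)
	return strings.TrimSuffix(newPath, "/") + "/"
}

func main() {
	fmt.Println(resolveLocation("/home/user/", "docs/../pics")) // /home/user/pics/
}
```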
@@ -126,7 +127,7 @@ func (l *Location) NewLocation(relativePath string) (vfs.Location, error) { return nil, err } - fullPath = vfs.AddTrailingSlash(fullPath) + fullPath = utils.AddTrailingSlash(fullPath) return &Location{ name: fullPath, fileSystem: l.fileSystem, diff --git a/os/location_test.go b/backend/os/location_test.go similarity index 92% rename from os/location_test.go rename to backend/os/location_test.go index 279e10f2..3b70030f 100644 --- a/os/location_test.go +++ b/backend/os/location_test.go @@ -1,7 +1,6 @@ package os import ( - "fmt" "io/ioutil" "os" "path/filepath" @@ -12,6 +11,7 @@ import ( "github.com/stretchr/testify/suite" "github.com/c2fo/vfs" + "github.com/c2fo/vfs/utils" ) /********************************** @@ -21,11 +21,19 @@ import ( type osLocationTest struct { suite.Suite testFile vfs.File - fileSystem FileSystem + fileSystem *FileSystem +} + +func (s *osLocationTest) SetupSuite() { + setupTestFiles() +} + +func (s *osLocationTest) TearDownSuite() { + teardownTestFiles() } func (s *osLocationTest) SetupTest() { - fs := FileSystem{} + fs := &FileSystem{} file, err := fs.NewFile("", "test_files/test.txt") if err != nil { @@ -71,7 +79,7 @@ func (s *osLocationTest) TestListByPrefix() { s.Equal(expected, actual) _, err := s.testFile.Location().ListByPrefix("bad/prefix") - s.EqualError(err, vfs.BadFilePrefix, "got expected error") + s.EqualError(err, utils.BadFilePrefix, "got expected error") } func (s *osLocationTest) TestListByRegex() { @@ -140,15 +148,15 @@ func (s *osLocationTest) TestURI() { func (s *osLocationTest) TestStringer() { file, _ := s.fileSystem.NewFile("", "/some/file/test.txt") location := file.Location() - s.Equal("file:///some/file/", fmt.Sprintf("%s", location)) + s.Equal("file:///some/file/", location.String()) } func (s *osLocationTest) TestDeleteFile() { dir, err := ioutil.TempDir("test_files", "example") s.NoError(err, "Setup not expected to fail.") defer func() { - err := os.RemoveAll(dir) - s.NoError(err, "Cleanup 
shouldn't fail.")
+		derr := os.RemoveAll(dir)
+		s.NoError(derr, "Cleanup shouldn't fail.")
 	}()
 
 	expectedText := "file to delete"
diff --git a/backend/s3/doc.go b/backend/s3/doc.go
new file mode 100644
index 00000000..250579aa
--- /dev/null
+++ b/backend/s3/doc.go
@@ -0,0 +1,84 @@
+/*
+Package s3 AWS S3 VFS implementation.
+
+Usage
+
+Rely on github.com/c2fo/vfs/backend
+
+	import(
+		"github.com/c2fo/vfs/backend"
+		_ "github.com/c2fo/vfs/backend/s3"
+	)
+
+	func UseFs() error {
+		fs, err := backend.Backend("s3")
+		...
+	}
+
+Or call directly:
+
+	import "github.com/c2fo/vfs/backend/s3"
+
+	func DoSomething() {
+		fs := s3.NewFileSystem()
+		...
+	}
+
+s3 can be augmented with the following implementation-specific methods. Backend returns the vfs.FileSystem interface, so it
+would have to be cast as *s3.FileSystem to use the following:
+
+	func DoSomething() {
+
+		...
+
+		// cast if fs was created using backend.Backend(). Not necessary if created directly from s3.NewFileSystem().
+		fs = fs.(*s3.FileSystem)
+
+		// to pass in client options
+		fs = fs.WithOptions(
+			s3.Options{
+				AccessKeyID:     "AKIAIOSFODNN7EXAMPLE",
+				SecretAccessKey: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
+				Region:          "us-west-2",
+			},
+		)
+
+		// to pass specific client, for instance a mock client
+		s3apiMock := &mocks.S3API{}
+		s3apiMock.On("GetObject", mock.AnythingOfType("*s3.GetObjectInput")).
+			Return(&s3.GetObjectOutput{
+				Body: nopCloser{bytes.NewBufferString("Hello world!")},
+			}, nil)
+		fs = fs.WithClient(s3apiMock)
+	}
+
+Authentication
+
+Authentication, by default, occurs automatically when Client() is called. It looks for credentials in the following places,
+preferring the first location found:
+
+	1. StaticProvider - set of credentials which are set programmatically, and will never expire.
+	2. EnvProvider - credentials from the environment variables of the
+	   running process. Environment credentials never expire.
+ Environment variables used: + + * Access Key ID: AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY + * Secret Access Key: AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY + + 3. SharedCredentialsProvider - looks for "AWS_SHARED_CREDENTIALS_FILE" env variable. If the + env value is empty will default to current user's home directory. + + * Linux/OSX: "$HOME/.aws/credentials" + * Windows: "%USERPROFILE%\.aws\credentials" + + 4. RemoteCredProvider - default remote endpoints such as EC2 or ECS IAM Roles + 5. EC2RoleProvider - credentials from the EC2 service, and keeps track if those credentials are expired + +See the following for more auth info: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html +and https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html + +See Also + +See: https://github.com/aws/aws-sdk-go/tree/master/service/s3 +*/ +package s3 diff --git a/s3/file.go b/backend/s3/file.go similarity index 90% rename from s3/file.go rename to backend/s3/file.go index af78c046..bfdb7d1a 100644 --- a/s3/file.go +++ b/backend/s3/file.go @@ -16,6 +16,7 @@ import ( "github.com/c2fo/vfs" "github.com/c2fo/vfs/mocks" + "github.com/c2fo/vfs/utils" ) //File implements vfs.File interface for S3 fs. 
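The "first provider found wins" resolution described in the doc comment above can be sketched without the aws-sdk as a simple ordered chain. This is a hedged, stdlib-only illustration; `provider` and `firstCredentials` are invented names standing in for the aws-sdk's `credentials.Provider` chain idea, not real vfs or SDK APIs:

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

// provider is a stand-in for the aws-sdk credentials.Provider concept:
// each provider either yields a credential value or reports an error.
type provider func() (string, error)

var errNoCreds = errors.New("no valid providers in chain")

// firstCredentials mimics the chain resolution described above:
// providers are tried in order and the first success wins.
func firstCredentials(chain []provider) (string, error) {
	for _, p := range chain {
		if v, err := p(); err == nil {
			return v, nil
		}
	}
	return "", errNoCreds
}

func main() {
	// static creds were not configured, so this provider fails...
	static := func() (string, error) { return "", errNoCreds }
	// ...and the env provider is consulted next.
	env := func() (string, error) {
		if v := os.Getenv("AWS_ACCESS_KEY_ID"); v != "" {
			return v, nil
		}
		return "", errNoCreds
	}

	os.Setenv("AWS_ACCESS_KEY_ID", "AKIAIOSFODNN7EXAMPLE")
	id, _ := firstCredentials([]provider{static, env})
	fmt.Println(id) // AKIAIOSFODNN7EXAMPLE
}
```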
@@ -35,7 +36,7 @@ func newFile(fs *FileSystem, bucket, key string) (*File, error) { if bucket == "" || key == "" { return nil, errors.New("non-empty strings for bucket and key are required") } - key = vfs.CleanPrefix(key) + key = utils.CleanPrefix(key) return &File{ fileSystem: fs, bucket: bucket, @@ -109,7 +110,7 @@ func (f *File) CopyToFile(targetFile vfs.File) error { return f.copyWithinS3ToFile(tf) } - if err := vfs.TouchCopy(targetFile, f); err != nil { + if err := utils.TouchCopy(targetFile, f); err != nil { return err } //Close target to flush and ensure that cursor isn't at the end of the file when the caller reopens for read @@ -185,7 +186,12 @@ func (f *File) Delete() error { return err } - _, err := f.fileSystem.Client.DeleteObject(&s3.DeleteObjectInput{ + client, err := f.fileSystem.Client() + if err != nil { + return err + } + + _, err = client.DeleteObject(&s3.DeleteObjectInput{ Key: &f.key, Bucket: &f.bucket, }) @@ -194,29 +200,32 @@ func (f *File) Delete() error { // Close cleans up underlying mechanisms for reading from and writing to the file. Closes and removes the // local temp file, and triggers a write to s3 of anything in the f.writeBuffer if it has been created. 
-func (f *File) Close() (rerr error) { - //setup multi error return using named error - errs := vfs.NewMutliErr() - defer func() { rerr = errs.OrNil() }() +func (f *File) Close() error { if f.tempFile != nil { - defer errs.DeferFunc(f.tempFile.Close) + defer f.tempFile.Close() err := os.Remove(f.tempFile.Name()) if err != nil && !os.IsNotExist(err) { - return errs.Append(err) + return err } f.tempFile = nil } if f.writeBuffer != nil { - uploader := s3manager.NewUploaderWithClient(f.fileSystem.Client) + client, err := f.fileSystem.Client() + if err != nil { + return err + } + + uploader := s3manager.NewUploaderWithClient(client) uploadInput := f.uploadInput() uploadInput.Body = f.writeBuffer - _, err := uploader.Upload(uploadInput) + + _, err = uploader.Upload(uploadInput) if err != nil { - return errs.Append(err) + return err } } @@ -268,7 +277,7 @@ func (f *File) Write(data []byte) (res int, err error) { // URI returns the File's URI as a string. func (f *File) URI() string { - return vfs.GetFileURI(f) + return utils.GetFileURI(f) } // String implement fmt.Stringer, returning the file's URI as the default string. 
@@ -281,19 +290,32 @@ func (f *File) String() string { */ func (f *File) getHeadObject() (*s3.HeadObjectOutput, error) { headObjectInput := new(s3.HeadObjectInput).SetKey(f.key).SetBucket(f.bucket) - return f.fileSystem.Client.HeadObject(headObjectInput) + client, err := f.fileSystem.Client() + if err != nil { + return nil, err + } + return client.HeadObject(headObjectInput) } func (f *File) copyWithinS3ToFile(targetFile *File) error { copyInput := new(s3.CopyObjectInput).SetKey(targetFile.key).SetBucket(targetFile.bucket).SetCopySource(path.Join(f.bucket, f.key)) - _, err := f.fileSystem.Client.CopyObject(copyInput) + client, err := f.fileSystem.Client() + if err != nil { + return err + } + _, err = client.CopyObject(copyInput) return err } func (f *File) copyWithinS3ToLocation(location vfs.Location) (vfs.File, error) { copyInput := new(s3.CopyObjectInput).SetKey(path.Join(location.Path(), f.Name())).SetBucket(location.Volume()).SetCopySource(path.Join(f.bucket, f.key)) - _, err := f.fileSystem.Client.CopyObject(copyInput) + + client, err := f.fileSystem.Client() + if err != nil { + return nil, err + } + _, err = client.CopyObject(copyInput) if err != nil { return nil, err } @@ -337,16 +359,6 @@ func (f *File) copyToLocalTempReader() (*os.File, error) { return tmpFile, nil } -func (f *File) putObjectInput() *s3.PutObjectInput { - return new(s3.PutObjectInput).SetBucket(f.bucket).SetKey(f.key) -} - -func (f *File) putObject(reader io.ReadSeeker) error { - _, err := f.fileSystem.Client.PutObject(f.putObjectInput().SetBody(reader)) - - return err -} - //TODO: need to provide an implementation-agnostic container for providing config options such as SSE func (f *File) uploadInput() *s3manager.UploadInput { sseType := "AES256" @@ -362,7 +374,11 @@ func (f *File) getObjectInput() *s3.GetObjectInput { } func (f *File) getObject() (io.ReadCloser, error) { - getOutput, err := f.fileSystem.Client.GetObject(f.getObjectInput()) + client, err := f.fileSystem.Client() + if err 
!= nil { + return nil, err + } + getOutput, err := client.GetObject(f.getObjectInput()) if err != nil { return nil, err } @@ -388,7 +404,7 @@ func waitUntilFileExists(file vfs.File, retries int) error { var retryCount = 0 for { if retryCount == retries { - return errors.New(fmt.Sprintf("Failed to find file %s after %d", file, retries)) + return fmt.Errorf("failed to find file %s after %d", file, retries) } //check for existing file diff --git a/backend/s3/fileSystem.go b/backend/s3/fileSystem.go new file mode 100644 index 00000000..f044b29d --- /dev/null +++ b/backend/s3/fileSystem.go @@ -0,0 +1,99 @@ +package s3 + +import ( + "fmt" + "github.com/aws/aws-sdk-go/service/s3" + "github.com/aws/aws-sdk-go/service/s3/s3iface" + + "github.com/c2fo/vfs" + "github.com/c2fo/vfs/backend" + "github.com/c2fo/vfs/utils" +) + +// Scheme defines the filesystem type. +const Scheme = "s3" +const name = "AWS S3" + +// FileSystem implements vfs.Filesystem for the S3 filesystem. +type FileSystem struct { + client s3iface.S3API + options vfs.Options +} + +// NewFile function returns the s3 implementation of vfs.File. +func (fs *FileSystem) NewFile(volume string, name string) (vfs.File, error) { + return newFile(fs, volume, name) +} + +// NewLocation function returns the s3 implementation of vfs.Location. 
+func (fs *FileSystem) NewLocation(volume string, name string) (vfs.Location, error) {
+	name = utils.CleanPrefix(name)
+	return &Location{
+		fileSystem: fs,
+		prefix:     name,
+		bucket:     volume,
+	}, nil
+}
+
+// Name returns "AWS S3"
+func (fs *FileSystem) Name() string {
+	return name
+}
+
+// Scheme returns "s3" as the initial part of a file URI ie: s3://
+func (fs *FileSystem) Scheme() string {
+	return Scheme
+}
+
+// Client returns the underlying aws s3 client, creating it if necessary.
+// See Overview for authentication resolution.
+func (fs *FileSystem) Client() (s3iface.S3API, error) {
+	if fs.client == nil {
+		if fs.options == nil {
+			fs.options = Options{}
+		}
+
+		if opts, ok := fs.options.(Options); ok {
+			var err error
+			fs.client, err = getClient(opts)
+			if err != nil {
+				return nil, err
+			}
+		} else {
+			return nil, fmt.Errorf("unable to create client, vfs.Options must be an s3.Options")
+		}
+	}
+	return fs.client, nil
+}
+
+// WithOptions sets options for client and returns the filesystem (chainable)
+func (fs *FileSystem) WithOptions(opts vfs.Options) *FileSystem {
+
+	// only set options if vfs.Options is an s3.Options
+	if opts, ok := opts.(Options); ok {
+		fs.options = opts
+		//we set client to nil to ensure that a new client is created using the new options when Client() is called
+		fs.client = nil
+	}
+	return fs
+}
+
+// WithClient passes in an s3 client and returns the filesystem (chainable)
+func (fs *FileSystem) WithClient(client interface{}) *FileSystem {
+	switch client.(type) {
+	case s3iface.S3API, *s3.S3:
+		fs.client = client.(s3iface.S3API)
+		fs.options = nil
+	}
+	return fs
+}
+
+// NewFileSystem initializer for FileSystem struct; returns a new, empty FileSystem. Use WithClient or WithOptions to configure it.
+func NewFileSystem() *FileSystem { + return &FileSystem{} +} + +func init() { + //registers a default Filesystem + backend.Register(Scheme, NewFileSystem()) +} diff --git a/s3/fileSystem_test.go b/backend/s3/fileSystem_test.go similarity index 80% rename from s3/fileSystem_test.go rename to backend/s3/fileSystem_test.go index b1048d17..d758683b 100644 --- a/s3/fileSystem_test.go +++ b/backend/s3/fileSystem_test.go @@ -17,17 +17,12 @@ var ( ) func (ts *fileSystemTestSuite) SetupTest() { - var err error s3apiMock = &mocks.S3API{} - s3fs, err = NewFileSystem(s3apiMock) - if err != nil { - ts.Fail("Shouldn't return an error creating NewFileSystem.") - } + s3fs = &FileSystem{} } func (ts *fileSystemTestSuite) TestNewFileSystem() { - newFS, err := NewFileSystem(s3apiMock) - ts.Nil(err, "s3.NewFileSystem() shouldn't return an error") + newFS := NewFileSystem().WithClient(s3apiMock) ts.NotNil(newFS, "Should return a new fileSystem for s3") } diff --git a/s3/file_test.go b/backend/s3/file_test.go similarity index 98% rename from s3/file_test.go rename to backend/s3/file_test.go index 1820b480..f76c7d92 100644 --- a/s3/file_test.go +++ b/backend/s3/file_test.go @@ -13,7 +13,6 @@ import ( "github.com/stretchr/testify/mock" "github.com/stretchr/testify/suite" - "fmt" "github.com/c2fo/vfs" "github.com/c2fo/vfs/mocks" ) @@ -31,7 +30,7 @@ var ( func (ts *fileTestSuite) SetupTest() { var err error s3apiMock = &mocks.S3API{} - fs = FileSystem{Client: s3apiMock} + fs = FileSystem{client: s3apiMock} testFile, err = fs.NewFile("bucket", "some/path/to/file.txt") if err != nil { ts.Fail("Shouldn't return error creating test s3.File instance.") @@ -68,7 +67,6 @@ func (ts *fileTestSuite) TestRead() { } // TODO: Write on Close() (actual s3 calls wait until file is closed to be made.) 
- func (ts *fileTestSuite) TestWrite() { file, err := fs.NewFile("bucket", "hello.txt") if err != nil { @@ -158,7 +156,7 @@ func (ts *fileTestSuite) TestNotExists() { func (ts *fileTestSuite) TestCopyToFile() { targetFile := &File{ fileSystem: &FileSystem{ - Client: s3apiMock, + client: s3apiMock, }, bucket: "TestBucket", key: "testKey.txt", @@ -190,7 +188,7 @@ func (ts *fileTestSuite) TestEmptyCopyToFile() { func (ts *fileTestSuite) TestMoveToFile() { targetFile := &File{ fileSystem: &FileSystem{ - Client: s3apiMock, + client: s3apiMock, }, bucket: "TestBucket", key: "testKey.txt", @@ -208,7 +206,7 @@ func (ts *fileTestSuite) TestMoveToFile() { func (ts *fileTestSuite) TestMoveToFile_CopyError() { targetFile := &File{ fileSystem: &FileSystem{ - Client: s3apiMock, + client: s3apiMock, }, bucket: "TestBucket", key: "testKey.txt", @@ -399,16 +397,16 @@ func (ts *fileTestSuite) TestPath() { func (ts *fileTestSuite) TestURI() { s3apiMock = &mocks.S3API{} - fs = FileSystem{Client: s3apiMock} + fs = FileSystem{client: s3apiMock} file, _ := fs.NewFile("mybucket", "/some/file/test.txt") expected := "s3://mybucket/some/file/test.txt" ts.Equal(expected, file.URI(), "%s does not match %s", file.URI(), expected) } func (ts *fileTestSuite) TestStringer() { - fs = FileSystem{Client: &mocks.S3API{}} + fs = FileSystem{client: &mocks.S3API{}} file, _ := fs.NewFile("mybucket", "/some/file/test.txt") - ts.Equal("s3://mybucket/some/file/test.txt", fmt.Sprintf("%s", file)) + ts.Equal("s3://mybucket/some/file/test.txt", file.String()) } func TestFile(t *testing.T) { diff --git a/s3/location.go b/backend/s3/location.go similarity index 87% rename from s3/location.go rename to backend/s3/location.go index 91ff23c7..9fa121b2 100644 --- a/s3/location.go +++ b/backend/s3/location.go @@ -9,6 +9,7 @@ import ( "github.com/aws/aws-sdk-go/service/s3" "github.com/c2fo/vfs" + "github.com/c2fo/vfs/utils" ) //Location implements the vfs.Location interface specific to S3 fs. 
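Context for the s3 `Location` listing code in this file: listed object keys come back bucket-absolute, and `getNamesFromObjectSlice` returns them relative to the location's prefix, skipping the prefix placeholder itself. A hedged stdlib-only sketch of that trimming; `namesFromKeys` is an illustrative name, not the library function:

```go
package main

import (
	"fmt"
	"strings"
)

// namesFromKeys sketches what getNamesFromObjectSlice does for the s3
// Location: strip the location's prefix from each listed key, and skip
// the key that equals the prefix (the "directory" placeholder object).
// Illustrative helper only.
func namesFromKeys(keys []string, locationPrefix string) []string {
	var names []string
	for _, key := range keys {
		if key != locationPrefix {
			names = append(names, strings.TrimPrefix(key, locationPrefix))
		}
	}
	return names
}

func main() {
	keys := []string{"dir1/", "dir1/file1.txt", "dir1/file2.txt"}
	fmt.Println(namesFromKeys(keys, "dir1/")) // [file1.txt file2.txt]
}
```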
@@ -22,14 +23,14 @@ type Location struct { // set to the location's path. This will make a call to the s3 API for every 1000 keys to return. // If you have many thousands of keys at the given location, this could become quite expensive. func (l *Location) List() ([]string, error) { - listObjectsInput := l.getListObjectsInput().SetPrefix(vfs.EnsureTrailingSlash(l.prefix)) + listObjectsInput := l.getListObjectsInput().SetPrefix(utils.EnsureTrailingSlash(l.prefix)) return l.fullLocationList(listObjectsInput) } // ListByPrefix calls the s3 API with the location's prefix modified relatively by the prefix arg passed to the // function. The resource considerations of List() apply to this function as well. func (l *Location) ListByPrefix(prefix string) ([]string, error) { - if err := vfs.ValidateFilePrefix(prefix); err != nil { + if err := utils.ValidateFilePrefix(prefix); err != nil { return nil, err } searchPrefix := path.Join(l.prefix, prefix) @@ -45,7 +46,7 @@ func (l *Location) ListByRegex(regex *regexp.Regexp) ([]string, error) { return []string{}, err } - filteredKeys := []string{} + var filteredKeys []string for _, key := range keys { if regex.MatchString(key) { filteredKeys = append(filteredKeys, key) @@ -61,7 +62,7 @@ func (l *Location) Volume() string { // Path returns the prefix the location references in most s3 calls. func (l *Location) Path() string { - return "/" + vfs.EnsureTrailingSlash(l.prefix) + return "/" + utils.EnsureTrailingSlash(l.prefix) } // Exists returns true if the bucket exists, and the user in the underlying s3.fileSystem.Client has the appropriate @@ -69,7 +70,11 @@ func (l *Location) Path() string { // false and any errors passed back from the API. 
func (l *Location) Exists() (bool, error) { headBucketInput := new(s3.HeadBucketInput).SetBucket(l.bucket) - _, err := l.fileSystem.Client.HeadBucket(headBucketInput) + client, err := l.fileSystem.Client() + if err != nil { + return false, err + } + _, err = client.HeadBucket(headBucketInput) if err == nil { return true, nil } @@ -98,7 +103,7 @@ func (l *Location) NewLocation(relativePath string) (vfs.Location, error) { // so the only return is any error. For this implementation there are no errors. func (l *Location) ChangeDir(relativePath string) error { newPrefix := path.Join(l.prefix, relativePath) - l.prefix = vfs.CleanPrefix(newPrefix) + l.prefix = utils.CleanPrefix(newPrefix) return nil } @@ -108,7 +113,7 @@ func (l *Location) NewFile(filePath string) (vfs.File, error) { newFile := &File{ fileSystem: l.fileSystem, bucket: l.bucket, - key: vfs.CleanPrefix(path.Join(l.prefix, filePath)), + key: utils.CleanPrefix(path.Join(l.prefix, filePath)), } return newFile, nil } @@ -130,7 +135,7 @@ func (l *Location) FileSystem() vfs.FileSystem { // URI returns the Location's URI as a string. func (l *Location) URI() string { - return vfs.GetLocationURI(l) + return utils.GetLocationURI(l) } // String implement fmt.Stringer, returning the location's URI as the default string. @@ -143,13 +148,17 @@ func (l *Location) String() string { */ func (l *Location) fullLocationList(input *s3.ListObjectsInput) ([]string, error) { - keys := []string{} + var keys []string + client, err := l.fileSystem.Client() + if err != nil { + return keys, err + } for { - listObjectsOutput, err := l.fileSystem.Client.ListObjects(input) + listObjectsOutput, err := client.ListObjects(input) if err != nil { return []string{}, err } - newKeys := getNamesFromObjectSlice(listObjectsOutput.Contents, vfs.EnsureTrailingSlash(l.prefix)) + newKeys := getNamesFromObjectSlice(listObjectsOutput.Contents, utils.EnsureTrailingSlash(l.prefix)) keys = append(keys, newKeys...) 
// if s3 response "IsTruncated" we need to call List again with @@ -169,7 +178,7 @@ func (l *Location) getListObjectsInput() *s3.ListObjectsInput { } func getNamesFromObjectSlice(objects []*s3.Object, locationPrefix string) []string { - keys := []string{} + var keys []string for _, object := range objects { if *object.Key != locationPrefix { keys = append(keys, strings.TrimPrefix(*object.Key, locationPrefix)) diff --git a/s3/location_test.go b/backend/s3/location_test.go similarity index 97% rename from s3/location_test.go rename to backend/s3/location_test.go index 6e79fe8d..de1ad9d5 100644 --- a/s3/location_test.go +++ b/backend/s3/location_test.go @@ -11,8 +11,8 @@ import ( "github.com/stretchr/testify/mock" "github.com/stretchr/testify/suite" - "github.com/c2fo/vfs" "github.com/c2fo/vfs/mocks" + "github.com/c2fo/vfs/utils" ) type locationTestSuite struct { @@ -23,7 +23,7 @@ type locationTestSuite struct { func (lt *locationTestSuite) SetupTest() { lt.s3apiMock = &mocks.S3API{} - lt.fs = &FileSystem{lt.s3apiMock} + lt.fs = &FileSystem{client: lt.s3apiMock} } func (lt *locationTestSuite) TestList() { @@ -103,7 +103,7 @@ func (lt *locationTestSuite) TestListByPrefix() { bucket := "bucket" locPath := "dir1/" prefix := "fil" - apiCallPrefix := vfs.EnsureTrailingSlash(path.Join(locPath, prefix)) + apiCallPrefix := utils.EnsureTrailingSlash(path.Join(locPath, prefix)) delimiter := "/" isTruncated := false lt.s3apiMock.On("ListObjects", &s3.ListObjectsInput{ @@ -248,6 +248,7 @@ func (lt *locationTestSuite) TestNewLocation() { } func (lt *locationTestSuite) TestDeleteFile() { + lt.s3apiMock.On("HeadObject", mock.AnythingOfType("*s3.HeadObjectInput")).Return(&s3.HeadObjectOutput{}, nil) lt.s3apiMock.On("DeleteObject", mock.AnythingOfType("*s3.DeleteObjectInput")).Return(&s3.DeleteObjectOutput{}, nil) loc := &Location{lt.fs, "old", "bucket"} diff --git a/backend/s3/options.go b/backend/s3/options.go new file mode 100644 index 00000000..22eba7dc --- /dev/null +++ 
b/backend/s3/options.go @@ -0,0 +1,94 @@ +package s3 + +import ( + "net/http" + "os" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/credentials" + "github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds" + "github.com/aws/aws-sdk-go/aws/defaults" + "github.com/aws/aws-sdk-go/aws/ec2metadata" + "github.com/aws/aws-sdk-go/aws/session" + "github.com/aws/aws-sdk-go/service/s3" + "github.com/aws/aws-sdk-go/service/s3/s3iface" +) + +// Options holds s3-specific options. Currently only client options are used. +type Options struct { + AccessKeyID string `json:"accessKeyId,omitempty"` + SecretAccessKey string `json:"secretAccessKey,omitempty"` + SessionToken string `json:"sessionToken,omitempty"` + Region string `json:"region,omitempty"` + Endpoint string `json:"endpoint,omitempty"` +} + +func getClient(opt Options) (s3iface.S3API, error) { + + p := make([]credentials.Provider, 0) + + if opt.AccessKeyID != "" && opt.SecretAccessKey != "" { + // Make the auth + v := credentials.Value{ + AccessKeyID: opt.AccessKeyID, + SecretAccessKey: opt.SecretAccessKey, + SessionToken: opt.SessionToken, + } + // A StaticProvider is a set of credentials which are set programmatically, + // and will never expire. + p = append(p, &credentials.StaticProvider{Value: v}) + + } + + // A EnvProvider retrieves credentials from the environment variables of the + // running process. Environment credentials never expire. + // + // Environment variables used: + // + // * Access Key ID: AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY + // + // * Secret Access Key: AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY + p = append(p, &credentials.EnvProvider{}) + + // Path to the shared credentials file. + // + // SharedCredentialsProvider will look for "AWS_SHARED_CREDENTIALS_FILE" env variable. If the + // env value is empty will default to current user's home directory. 
+ // Linux/OSX: "$HOME/.aws/credentials" + // Windows: "%USERPROFILE%\.aws\credentials" + p = append(p, &credentials.SharedCredentialsProvider{}) + + lowTimeoutClient := &http.Client{Timeout: 1 * time.Second} // low timeout to ec2 metadata service + + // RemoteCredProvider for default remote endpoints such as EC2 or ECS IAM Roles + def := defaults.Get() + def.Config.HTTPClient = lowTimeoutClient + p = append(p, defaults.RemoteCredProvider(*def.Config, def.Handlers)) + + // EC2RoleProvider retrieves credentials from the EC2 service, and keeps track if those credentials are expired + p = append(p, &ec2rolecreds.EC2RoleProvider{ + Client: ec2metadata.New(session.New(), &aws.Config{ + HTTPClient: lowTimeoutClient, + }), + ExpiryWindow: 3, + }) + + awsConfig := aws.Config{Logger: aws.NewDefaultLogger()} + if opt.Region != "" { + awsConfig = *awsConfig.WithRegion(opt.Region) + } else if val, ok := os.LookupEnv("AWS_DEFAULT_REGION"); ok { + awsConfig = *awsConfig.WithRegion(val) + } + awsConfig = *awsConfig.WithEndpoint(opt.Endpoint) + + awsConfig = *awsConfig.WithCredentials(credentials.NewChainCredentials(p)) + + s, err := session.NewSessionWithOptions(session.Options{ + Config: awsConfig, + }) + if err != nil { + return nil, err + } + return s3.New(s), nil +} diff --git a/doc.go b/doc.go new file mode 100644 index 00000000..f030f899 --- /dev/null +++ b/doc.go @@ -0,0 +1,121 @@ +/* +Package vfs provides a platform-independent, generalized set of filesystem functionality across a number of +filesystem types such as os, S3, and GCS. 
+ +Philosophy + +When building our platform, initially we wrote a library that was something to the effect of + if config.DISK == "S3" { + // do some s3 filesystem operation + } else if config.DISK == "mock" { + // fake something + } else { + // do some native os.xxx operation + } + +Not only was this ugly, but the behaviors of each "filesystem" were different, and we had to constantly alter +file locations and pass around a bucket string (even if the fs didn't know what a bucket was). + +We found a handful of third-party libraries that were interesting, but none of them had everything we needed/wanted. Of +particular inspiration was https://github.com/spf13/afero in its composition of the super-powerful stdlib io.* interfaces. +Unfortunately, it didn't support Google Cloud Storage, and there was still a lot of passing around of strings and structs. +Few, if any, of the vfs-like libraries provided interfaces to easily and confidently create new filesystem backends. + +What we needed/wanted was the following (and more): + * a self-contained set of structs that could be passed around like a file/dir handle + * a struct that could represent an existing or nonexistent file/dir + * common (and only common) functionality across all filesystems so that, after initialization, we don't care + what the underlying filesystem is and can therefore write our code agnostically/portably + * use of io.* interfaces such as io.Reader and io.Writer without needing to call a separate function + * extensibility to easily add other needed filesystems like Microsoft Azure Cloud File Storage or SFTP + * preference for native atomic functions when possible (e.g., an S3-to-S3 move would use the native move API call rather than + copy-delete) + * a uniform way of addressing files regardless of filesystem. This is why we use complete URIs in vfssimple + * a fmt.Stringer interface so that a file struct passed to a log message (or other Stringer use) would show the URI + * a mockable filesystem + * pluggability so that third-party implementations of our interfaces could be used + +Install + +Go install: + go get -u github.com/c2fo/vfs/... + +Glide installation: + glide install github.com/c2fo/vfs + +Usage + +We provide vfssimple as a basic way of initializing filesystem backends (see each implementation's docs about authentication). +vfssimple pulls in every c2fo/vfs backend. If you need to reduce the backend requirements (and app memory footprint) or +add a third-party backend, you'll need to implement your own "factory". See the backend doc for more info. + +You can then use those file systems to initialize locations which you'll be referencing frequently, or initialize files directly + + + osFile, err := vfssimple.NewFile("file:///path/to/file.txt") + s3File, err := vfssimple.NewFile("s3://bucket/prefix/file.txt") + + osLocation, err := vfssimple.NewLocation("file:///tmp") + s3Location, err := vfssimple.NewLocation("s3://bucket") + + osTmpFile, err := osLocation.NewFile("anotherFile.txt") // file at /tmp/anotherFile.txt + + +With a number of files and locations between s3 and the local file system, you can perform a number of actions without any consideration for the system's API or +implementation details.
+ + osFileExists, err := osFile.Exists() // true, nil + s3FileExists, err := s3File.Exists() // false, nil + err = osFile.CopyToFile(s3File) // nil + s3FileExists, err = s3File.Exists() // true, nil + + movedOsFile, err := osFile.MoveToLocation(osLocation) + osFileExists, err = osFile.Exists() // false, nil (move actions delete the original file) + movedOsFileExists, err := movedOsFile.Exists() // true, nil + + s3FileUri := s3File.URI() // s3://bucket/prefix/file.txt + s3FileName := s3File.Name() // file.txt + s3FilePath := s3File.Path() // /prefix/file.txt + +Third-party Backends + + * none so far + +Feel free to send a pull request if you want to add your backend to the list. + +Ideas + +Things to add: + * Add SFTP backend + * Add Azure storage backend + * Add in-memory backend + * Provide better List() functionality with more abstracted filtering and paging (iterator?). Return File structs vs URIs? + * Add better/any context.Context() support + * Update s3 and google sdk libs + * Provide for go mod and/or dep installs + +Contributors + +Brought to you by the Enterprise Pipeline team at C2FO: + +John Judd - john.judd@c2fo.com + +Jason Coble - [@jasonkcoble](https://twitter.com/jasonkcoble) - jason@c2fo.com + +Chris Roush - chris.roush@c2fo.com + +https://github.com/c2fo/ + +Contributing + + 1. Fork it () + 2. Create your feature branch (`git checkout -b feature/fooBar`) + 3. Commit your changes (`git commit -am 'Add some fooBar'`) + 4. Push to the branch (`git push origin feature/fooBar`) + 5. Create a new Pull Request + +License + +Distributed under the MIT license. See http://github.com/c2fo/vfs/License.md for more information.
+*/ +package vfs diff --git a/docs/backend.md b/docs/backend.md new file mode 100644 index 00000000..ed1130c1 --- /dev/null +++ b/docs/backend.md @@ -0,0 +1,122 @@ +# backend + +-- + + +Package backend provides a means of allowing backend filesystems to +self-register on load via an init() call to backend.Register("some scheme", +vfs.FileSystem) + +In this way, a caller of vfs backends can simply load the backend filesystems +(and ONLY those needed) and begin using them: + + package main + + // import backend and each backend you intend to use + import( + "github.com/c2fo/vfs" + "github.com/c2fo/vfs/backend" + "github.com/c2fo/vfs/backend/os" + "github.com/c2fo/vfs/backend/s3" + ) + + func main() { + var err error + var osfile, s3file vfs.File + + // THEN begin using the filesystems + osfile, err = backend.Backend(os.Scheme).NewFile("", "/path/to/file.txt") + if err != nil { + panic(err) + } + + s3file, err = backend.Backend(s3.Scheme).NewFile("", "/some/file.txt") + if err != nil { + panic(err) + } + + err = osfile.CopyToFile(s3file) + if err != nil { + panic(err) + } + } + + +### Development + +To create your own backend, you must create a package that implements the interfaces: +[vfs.FileSystem](../README.md#type-filesystem), [vfs.Location](../README.md#type-location), and +[vfs.File](../README.md#type-file). Then ensure it registers itself on load: + + package myexoticfilesystem + + import( + ... + "github.com/c2fo/vfs" + "github.com/c2fo/vfs/backend" + ) + + // IMPLEMENT vfs interfaces + ... + + // register backend + func init() { + backend.Register( + "My Exotic Filesystem", + &MyExoticFilesystem{}, + ) + } + +Then to use it in some other package: + + package main + + import( + "github.com/c2fo/vfs/backend" + "github.com/acme/myexoticfilesystem" + ) + + ... + + func useNewBackend() error { + myExoticFs := backend.Backend(myexoticfilesystem.Scheme) + ... + } + +That's it. Simple.
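The registry that backend.Register and backend.Backend describe above boils down to a guarded map from name to filesystem. The following is a simplified, self-contained sketch of that pattern (hypothetical stand-in types, stdlib only — not the real backend package):

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// FileSystem is a minimal stand-in for the vfs.FileSystem interface.
type FileSystem interface {
	Scheme() string
}

// registry maps a name (typically a scheme) to a registered filesystem.
var (
	mu       sync.RWMutex
	registry = map[string]FileSystem{}
)

// Register adds a filesystem under a name; backends call this from init().
func Register(name string, fs FileSystem) {
	mu.Lock()
	defer mu.Unlock()
	registry[name] = fs
}

// Backend returns the registered filesystem by name, or nil if unknown.
func Backend(name string) FileSystem {
	mu.RLock()
	defer mu.RUnlock()
	return registry[name]
}

// RegisteredBackends returns the sorted names of all registered filesystems.
func RegisteredBackends() []string {
	mu.RLock()
	defer mu.RUnlock()
	names := make([]string, 0, len(registry))
	for name := range registry {
		names = append(names, name)
	}
	sort.Strings(names)
	return names
}

// osFS is a dummy backend used only to exercise the registry.
type osFS struct{}

func (osFS) Scheme() string { return "file" }

func main() {
	// In the real package this Register call happens in each backend's init().
	Register("file", osFS{})
	fmt.Println(RegisteredBackends())
	fmt.Println(Backend("file").Scheme())
}
```

Blank (`_`) imports of backend packages work precisely because registration happens as an init() side effect, so simply linking a backend in makes it available by name.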
+ +## Usage + +#### func Backend + +```go +func Backend(name string) vfs.FileSystem +``` +Backend returns the backend filesystem by name + +#### func Register + +```go +func Register(name string, v vfs.FileSystem) +``` +Register a new filesystem in the backend map + +#### func RegisteredBackends + +```go +func RegisteredBackends() []string +``` +RegisteredBackends returns a slice of registered backend names + +#### func Unregister + +```go +func Unregister(name string) +``` +Unregister unregisters a filesystem from the backend map + +#### func UnregisterAll + +```go +func UnregisterAll() +``` +UnregisterAll unregisters all filesystems from the backend map diff --git a/docs/gs.md b/docs/gs.md new file mode 100644 index 00000000..d4556e65 --- /dev/null +++ b/docs/gs.md @@ -0,0 +1,432 @@ +# gs + +-- + + +Package gs Google Cloud Storage VFS implementation. + + +### Usage + +Rely on [github.com/c2fo/vfs/backend](backend.md) + + import( + "github.com/c2fo/vfs/backend" + _ "github.com/c2fo/vfs/backend/gs" + ) + + func UseFs() error { + fs, err := backend.Backend("Google Cloud Storage") + ... + } + +Or call directly: + + import "github.com/c2fo/vfs/backend/gs" + + func DoSomething() { + fs := gs.NewFileSystem() + ... + } + +gs can be augmented with the following implementation-specific methods. [Backend](backend.md) +returns the [vfs.FileSystem](../README.md#type-filesystem) interface, so it would have to be cast as *gs.FileSystem to +use the following: + + func DoSomething() { + + ... + + // cast if fs was created using backend.Backend(). Not necessary if created directly from gs.NewFileSystem().
+ fs = fs.(*gs.FileSystem) + + // to use your own "context" + ctx := context.Background() + fs = fs.WithContext(ctx) + + // to pass in client options + fs = fs.WithOptions( + gs.Options{ + CredentialFile: "/root/.gcloud/account.json", + Scopes: []string{"ScopeReadOnly"}, + //default scope is "ScopeFullControl" + }, + ) + + // to pass specific client, for instance no-auth client + client, _ := storage.NewClient(ctx, option.WithoutAuthentication()) + fs = fs.WithClient(client) + } + + +### Authentication + +Authentication, by default, occurs automatically when [Client()](#func-filesystem-client) is called. It +looks for credentials in the following places, preferring the first location +found: + +1. A JSON file whose path is specified by the GOOGLE_APPLICATION_CREDENTIALS environment variable +1. A JSON file in a location known to the gcloud command-line tool. + * On Windows, this is %APPDATA%/gcloud/application_default_credentials.json. + * On other systems, $HOME/.config/gcloud/application_default_credentials.json. +1. On Google App Engine it uses the appengine.AccessToken function. +1. On Google Compute Engine and Google App Engine Managed VMs, it fetches credentials from the metadata server. + +See https://cloud.google.com/docs/authentication/production for more auth info + + +### See Also + +See: https://github.com/googleapis/google-cloud-go/tree/master/storage + +## Usage + +```go +const Scheme = "gs" +``` +Scheme defines the filesystem type. + +#### type File + +```go +type File struct { +} +``` + +File implements the [vfs.File](../README.md#type-file) interface for the GS fs. + +#### func (*File) Close + +```go +func (f *File) Close() error +``` +Close cleans up underlying mechanisms for reading from and writing to the file. +Closes and removes the local temp file, and triggers a write to GCS of anything +in the f.writeBuffer if it has been created.
+ +#### func (*File) CopyToFile + +```go +func (f *File) CopyToFile(targetFile vfs.File) error +``` +CopyToFile puts the contents of File into the targetFile passed. Uses the GCS +CopierFrom method if the target file is also on GCS, otherwise uses [io.Copy](https://godoc.org/io#Copy). + +#### func (*File) CopyToLocation + +```go +func (f *File) CopyToLocation(location vfs.Location) (vfs.File, error) +``` +CopyToLocation creates a copy of *File, using the file's current name as the new +file's name at the given location. If the given location is also GCS, the GCS +API for copying files will be utilized, otherwise, standard [io.Copy](https://godoc.org/io#Copy) will be done +to the new file. + +#### func (*File) Delete + +```go +func (f *File) Delete() error +``` +Delete clears any local temp file, or write buffer from read/writes to the file, +then makes a delete call to GCS for the object. Returns any error returned by +the API. + +#### func (*File) Exists + +```go +func (f *File) Exists() (bool, error) +``` +Exists returns a boolean of whether or not the object exists in GCS. + +#### func (*File) LastModified + +```go +func (f *File) LastModified() (*time.Time, error) +``` +LastModified returns the 'Updated' property from the GCS attributes. + +#### func (*File) Location + +```go +func (f *File) Location() vfs.Location +``` +Location returns a Location instance for the file's current location. + +TODO: should this include a trailing slash? + +#### func (*File) MoveToFile + +```go +func (f *File) MoveToFile(targetFile vfs.File) error +``` +MoveToFile puts the contents of File into the targetFile passed using +File.CopyToFile. If the copy succeeds, the source file is deleted. Any errors +from the copy or delete are returned.
+ +#### func (*File) MoveToLocation + +```go +func (f *File) MoveToLocation(location vfs.Location) (vfs.File, error) +``` +MoveToLocation works by first calling File.CopyToLocation(vfs.Location) then, if +that succeeds, it deletes the original file, returning the new file. If the copy +process fails, the error is returned, and Delete isn't called. If the call to +Delete fails, the error and the file generated by the copy are both returned. + +#### func (*File) Name + +```go +func (f *File) Name() string +``` +Name returns the file name. + +#### func (*File) Path + +```go +func (f *File) Path() string +``` +Path returns the full path, with leading slash, of the GCS file key. + +#### func (*File) Read + +```go +func (f *File) Read(p []byte) (n int, err error) +``` +Read implements the standard for [io.Reader](https://godoc.org/io#Reader). For this to work with a GCS file, a +temporary local copy of the file is created, and reads work on that. This file +is closed and removed upon calling f.Close(). + +#### func (*File) Seek + +```go +func (f *File) Seek(offset int64, whence int) (int64, error) +``` +Seek implements the standard for io.Seeker. A temporary local copy of the GCS +file is created (the same one used for Reads) which Seek() acts on. This file is +closed and removed upon calling f.Close(). + +#### func (*File) Size + +```go +func (f *File) Size() (uint64, error) +``` +Size returns the 'Size' property from the GCS attributes. + +#### func (*File) String + +```go +func (f *File) String() string +``` +String returns the file URI string. + +#### func (*File) URI + +```go +func (f *File) URI() string +``` +URI returns a full GCS URI string of the file. + +#### func (*File) Write + +```go +func (f *File) Write(data []byte) (n int, err error) +``` +Write implements the standard for [io.Writer](https://godoc.org/io#Writer). A buffer is added to with each +subsequent write. Calling [Close()](#func-file-close) will write the contents back to GCS.
+ +#### type FileSystem + +```go +type FileSystem struct { +} +``` + +FileSystem implements [vfs.FileSystem](../README.md#type-filesystem) for the GCS filesystem. + +#### func NewFileSystem + +```go +func NewFileSystem() *FileSystem +``` +NewFileSystem is the initializer for the [FileSystem](#type-filesystem) struct; it returns a new +FileSystem for Google Cloud Storage. + +#### func (*FileSystem) Client + +```go +func (fs *FileSystem) Client() (*storage.Client, error) +``` +Client returns the underlying google storage client, creating it if +necessary. See the [Authentication](#authentication) section for authentication resolution. + +#### func (*FileSystem) Name + +```go +func (fs *FileSystem) Name() string +``` +Name returns "Google Cloud Storage" + +#### func (*FileSystem) NewFile + +```go +func (fs *FileSystem) NewFile(volume string, name string) (vfs.File, error) +``` +NewFile function returns the gcs implementation of [vfs.File](../README.md#type-file). + +#### func (*FileSystem) NewLocation + +```go +func (fs *FileSystem) NewLocation(volume string, path string) (loc vfs.Location, err error) +``` +NewLocation function returns the gcs implementation of [vfs.Location](../README.md#type-location).
+ +#### func (*FileSystem) Scheme + +```go +func (fs *FileSystem) Scheme() string +``` +Scheme returns "gs" as the initial part of a file URI, i.e.: gs:// + +#### func (*FileSystem) WithClient + +```go +func (fs *FileSystem) WithClient(client *storage.Client) *FileSystem +``` +WithClient passes in a google storage client and returns the filesystem +(chainable). + +#### func (*FileSystem) WithContext + +```go +func (fs *FileSystem) WithContext(ctx context.Context) *FileSystem +``` +WithContext passes in user context and returns the filesystem (chainable). + +#### func (*FileSystem) WithOptions + +```go +func (fs *FileSystem) WithOptions(opts vfs.Options) *FileSystem +``` +WithOptions sets options for the client and returns the filesystem (chainable). + +#### type Location + +```go +type Location struct { +} +``` + +Location implements [vfs.Location](../README.md#type-location) for the gs fs. + +#### func (*Location) ChangeDir + +```go +func (l *Location) ChangeDir(relativePath string) error +``` +ChangeDir changes the current location's path to the new, relative path. + +#### func (*Location) DeleteFile + +```go +func (l *Location) DeleteFile(fileName string) error +``` +DeleteFile deletes the file at the given path, relative to the current location. + +#### func (*Location) Exists + +```go +func (l *Location) Exists() (bool, error) +``` +Exists returns whether the location exists or not. In the case of an error, +false is returned. + +#### func (*Location) FileSystem + +```go +func (l *Location) FileSystem() vfs.FileSystem +``` +FileSystem returns the GCS file system instance. + +#### func (*Location) List + +```go +func (l *Location) List() ([]string, error) +``` +List returns a list of file name strings for the current location.
+ +#### func (*Location) ListByPrefix + +```go +func (l *Location) ListByPrefix(filenamePrefix string) ([]string, error) +``` +ListByPrefix returns a slice of file base names and an error, if any. The prefix +is a filename prefix and therefore should not contain a slash. Like [List](#func-location-list), this +function returns only files, as basenames. + +#### func (*Location) ListByRegex + +```go +func (l *Location) ListByRegex(regex *regexp.Regexp) ([]string, error) +``` +ListByRegex returns a list of file names at the location which match the +provided regular expression. + +#### func (*Location) NewFile + +```go +func (l *Location) NewFile(filePath string) (vfs.File, error) +``` +NewFile returns a new file instance at the given path, relative to the current +location. + +#### func (*Location) NewLocation + +```go +func (l *Location) NewLocation(relativePath string) (vfs.Location, error) +``` +NewLocation creates a new location instance relative to the current location's +path. + +#### func (*Location) Path + +```go +func (l *Location) Path() string +``` +Path returns the path of the file at the current location, starting with a +leading '/'. + +#### func (*Location) String + +```go +func (l *Location) String() string +``` +String returns the full URI of the location. + +#### func (*Location) URI + +```go +func (l *Location) URI() string +``` +URI returns a URI string for the GCS location. + +#### func (*Location) Volume + +```go +func (l *Location) Volume() string +``` +Volume returns the GCS bucket name. + +#### type Options + +```go +type Options struct { + APIKey string `json:"apiKey,omitempty"` + CredentialFile string `json:"credentialFilePath,omitempty"` + Endpoint string `json:"endpoint,omitempty"` + Scopes []string `json:"WithoutAuthentication,omitempty"` +} +``` + +Options holds Google Cloud Storage-specific options. Currently only client +options are used.
diff --git a/docs/os.md b/docs/os.md new file mode 100644 index 00000000..3c8f4f2e --- /dev/null +++ b/docs/os.md @@ -0,0 +1,338 @@ +# os + +-- + +Package os built-in os lib VFS implementation. + + +### Usage + +Rely on github.com/c2fo/vfs/backend + + import( + "github.com/c2fo/vfs/backend" + _ "github.com/c2fo/vfs/backend/os" + ) + + func UseFs() error { + fs, err := backend.Backend("os") + ... + } + +Or call directly: + + import _os "github.com/c2fo/vfs/backend/os" + + func DoSomething() { + fs := &_os.FileSystem{} + ... + } + + +### See Also + +See: https://golang.org/pkg/os/ + +## Usage + +```go +const Scheme = "file" +``` +Scheme defines the filesystem type. + +#### type File + +```go +type File struct { +} +``` + +File implements the [vfs.File](../README.md#type-file) interface for the os fs. + +#### func (*File) Close + +```go +func (f *File) Close() error +``` +Close implements the [io.Closer](https://godoc.org/io#Closer) interface, closing the underlying *os.File. It +returns an error, if any. + +#### func (*File) CopyToFile + +```go +func (f *File) CopyToFile(target vfs.File) error +``` +CopyToFile copies the file to a new File. It accepts a [vfs.File](../README.md#type-file) and returns an +error, if any. + +#### func (*File) CopyToLocation + +```go +func (f *File) CopyToLocation(location vfs.Location) (vfs.File, error) +``` +CopyToLocation copies the existing File to a new Location with the same name. It +accepts a [vfs.Location](../README.md#type-location) and returns a [vfs.File](../README.md#type-file) and error, if any. + +#### func (*File) Delete + +```go +func (f *File) Delete() error +``` +Delete unlinks the file, returning any error or nil. + +#### func (*File) Exists + +```go +func (f *File) Exists() (bool, error) +``` +Exists returns true if the file exists on the filesystem, otherwise false, and an error, +if any.
+ +#### func (*File) LastModified + +```go +func (f *File) LastModified() (*time.Time, error) +``` +LastModified returns the timestamp of the file's mtime or error, if any. + +#### func (*File) Location + +```go +func (f *File) Location() vfs.Location +``` +Location returns the underlying [os.Location](#type-location). + +#### func (*File) MoveToFile + +```go +func (f *File) MoveToFile(target vfs.File) error +``` +MoveToFile moves a file. It accepts a target vfs.File and returns an error, if +any. + +__TODO:__ we might consider using os.Rename() for efficiency when +target.Location().FileSystem().Scheme() equals f.Location().FileSystem().Scheme() + +#### func (*File) MoveToLocation + +```go +func (f *File) MoveToLocation(location vfs.Location) (vfs.File, error) +``` +MoveToLocation moves a file to a new Location. It accepts a target vfs.Location +and returns a vfs.File and an error, if any. + +__TODO:__ we might consider using os.Rename() for efficiency when location.FileSystem().Scheme() equals +f.Location().FileSystem().Scheme() + +#### func (*File) Name + +```go +func (f *File) Name() string +``` +Name returns the full name of the File relative to [Location.Name()](#func-filesystem-name). + +#### func (*File) Path + +```go +func (f *File) Path() string +``` +Path returns the path of the File relative to [Location.Name()](#func-filesystem-name). + +#### func (*File) Read + +```go +func (f *File) Read(p []byte) (int, error) +``` +Read implements the [io.Reader](https://godoc.org/io#Reader) interface. It returns the number of bytes read and an error, +if any. + +#### func (*File) Seek + +```go +func (f *File) Seek(offset int64, whence int) (int64, error) +``` +Seek implements the io.Seeker interface. It accepts an offset and "whence", where +0 means relative to the origin of the file, 1 means relative to the current +offset, and 2 means relative to the end. It returns the new offset and an error, +if any.
+ +#### func (*File) Size + +```go +func (f *File) Size() (uint64, error) +``` +Size returns the size (in bytes) of the [File](#type-file) or any error. + +#### func (*File) String + +```go +func (f *File) String() string +``` +String implements [fmt.Stringer](https://godoc.org/fmt#Stringer), returning the file's URI as the default string. + +#### func (*File) URI + +```go +func (f *File) URI() string +``` +URI returns the [File](#type-file)'s URI as a string. + +#### func (*File) Write + +```go +func (f *File) Write(p []byte) (n int, err error) +``` +Write implements the [io.Writer](https://godoc.org/io#Writer) interface. It accepts a slice of bytes and +returns the number of bytes written and an error, if any. + +#### type FileSystem + +```go +type FileSystem struct{} +``` + +FileSystem implements [vfs.FileSystem](../README.md#type-filesystem) for the OS filesystem. + +#### func (*FileSystem) Name + +```go +func (fs *FileSystem) Name() string +``` +Name returns "os" + +#### func (*FileSystem) NewFile + +```go +func (fs *FileSystem) NewFile(volume string, name string) (vfs.File, error) +``` +NewFile function returns the os implementation of [vfs.File](../README.md#type-file). + +#### func (*FileSystem) NewLocation + +```go +func (fs *FileSystem) NewLocation(volume string, name string) (vfs.Location, error) +``` +NewLocation function returns the os implementation of [vfs.Location](../README.md#type-location). + +#### func (*FileSystem) Scheme + +```go +func (fs *FileSystem) Scheme() string +``` +Scheme returns "file" as the initial part of a file URI, i.e.: file:// + +#### type Location + +```go +type Location struct { +} +``` + +Location implements the [vfs.Location](../README.md#type-location) interface specific to the OS fs. + +#### func (*Location) ChangeDir + +```go +func (l *Location) ChangeDir(relativePath string) error +``` +ChangeDir takes a relative path, and modifies the underlying [Location](#type-location)'s path.
+The caller is modified by this, so the only return is any error. For this +implementation there are no errors. + +#### func (*Location) DeleteFile + +```go +func (l *Location) DeleteFile(fileName string) error +``` +DeleteFile deletes the file of the given name at the location. This is meant to +be a shortcut for instantiating a new file and calling delete on it, with all +the necessary error handling overhead. + +#### func (*Location) Exists + +```go +func (l *Location) Exists() (bool, error) +``` +Exists returns true if the location exists, and the calling user has the +appropriate permissions. Will receive false without an error if the location +simply doesn't exist. Otherwise could receive false and any errors passed back +from the OS. + +#### func (*Location) FileSystem + +```go +func (l *Location) FileSystem() vfs.FileSystem +``` +FileSystem returns a [vfs.FileSystem](../README.md#type-filesystem) interface of the location's underlying +fileSystem. + +#### func (*Location) List + +```go +func (l *Location) List() ([]string, error) +``` +List returns a slice of all files in the top directory of the location. + +#### func (*Location) ListByPrefix + +```go +func (l *Location) ListByPrefix(prefix string) ([]string, error) +``` +ListByPrefix returns a slice of all files starting with "prefix" in the top +directory of the location. + +#### func (*Location) ListByRegex + +```go +func (l *Location) ListByRegex(regex *regexp.Regexp) ([]string, error) +``` +ListByRegex returns a slice of all files matching the regex in the top directory +of the location. + +#### func (*Location) NewFile + +```go +func (l *Location) NewFile(fileName string) (vfs.File, error) +``` +NewFile uses the properties of the calling location to generate a [vfs.File](../README.md#type-file) +(backed by an [os.File](#type-file)). A string argument is expected to be a relative path to +the location's current path.
+ +#### func (*Location) NewLocation + +```go +func (l *Location) NewLocation(relativePath string) (vfs.Location, error) +``` +NewLocation makes a copy of the underlying [Location](#type-location), then modifies its path by +calling ChangeDir with the relativePath argument, returning the resulting +location. The only possible errors come from the call to ChangeDir. + +#### func (*Location) Path + +```go +func (l *Location) Path() string +``` +Path returns the location path. + +#### func (*Location) String + +```go +func (l *Location) String() string +``` +String implements [fmt.Stringer](https://godoc.org/fmt#Stringer), returning the location's URI as the default +string. + +#### func (*Location) URI + +```go +func (l *Location) URI() string +``` +URI returns the [Location](#type-location)'s URI as a string. + +#### func (*Location) Volume + +```go +func (l *Location) Volume() string +``` +Volume returns the volume, if any, of the location. Given "C:\foo\bar" it returns "C:" on +Windows. On other platforms it returns "". diff --git a/docs/s3.md b/docs/s3.md new file mode 100644 index 00000000..926d9d9d --- /dev/null +++ b/docs/s3.md @@ -0,0 +1,453 @@ +# s3 + +-- + +Package s3 AWS S3 VFS implementation. + + +### Usage + +Rely on github.com/c2fo/vfs/backend + + import( + "github.com/c2fo/vfs/backend" + _ "github.com/c2fo/vfs/backend/s3" + ) + + func UseFs() error { + fs, err := backend.Backend("AWS S3") + ... + } + +Or call directly: + + import "github.com/c2fo/vfs/backend/s3" + + func DoSomething() { + fs := s3.NewFileSystem() + ... + } + +s3 can be augmented with the following implementation-specific methods. Backend +returns the [vfs.FileSystem](../README.md#type-filesystem) interface, so it would have to be cast as [*s3.FileSystem](#type-filesystem) to +use the following: + + func DoSomething() { + + ... + + // cast if fs was created using backend.Backend(). Not necessary if created directly from s3.NewFileSystem().
+        fs = fs.(*s3.FileSystem)
+
+        // to pass in client options
+        fs = fs.WithOptions(
+            s3.Options{
+                AccessKeyID:     "AKIAIOSFODNN7EXAMPLE",
+                SecretAccessKey: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
+                Region:          "us-west-2",
+            },
+        )
+
+        // to pass a specific client, for instance a mock client
+        s3apiMock := &mocks.S3API{}
+        s3apiMock.On("GetObject", mock.AnythingOfType("*s3.GetObjectInput")).
+            Return(&s3.GetObjectOutput{
+                Body: nopCloser{bytes.NewBufferString("Hello world!")},
+            }, nil)
+        fs = fs.WithClient(s3apiMock)
+    }
+
+
+### Authentication
+
+Authentication, by default, occurs automatically when [Client()](#func-filesystem-client) is called. It
+looks for credentials in the following places, preferring the first location
+found:
+
+1. StaticProvider - set of credentials which are set programmatically and will
+   never expire.
+1. EnvProvider - credentials from the environment variables of the
+   running process. Environment credentials never expire.
+   Environment variables used:
+
+   * Access Key ID: AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY
+   * Secret Access Key: AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY
+
+1. SharedCredentialsProvider - looks for the "AWS_SHARED_CREDENTIALS_FILE" env
+   variable. If the env value is empty, it will default to the current user's
+   home directory.
+
+   * Linux/OSX: "$HOME/.aws/credentials"
+   * Windows: "%USERPROFILE%\.aws\credentials"
+
+1. RemoteCredProvider - default remote endpoints such as EC2 or ECS IAM Roles
+1. EC2RoleProvider - credentials from the EC2 service, keeping track of whether
+   those credentials have expired
+
+See the following for more auth info:
+https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html and
+https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html
+
+
+### See Also
+
+See: https://github.com/aws/aws-sdk-go/tree/master/service/s3
+
+## Usage
+
+```go
+const Scheme = "s3"
+```
+Scheme defines the filesystem type. 
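The first-match-wins order of that chain can be sketched generically. The `provider` type and values below are hypothetical — the real resolution is handled by the aws-sdk's credential chain:

```go
package main

import (
	"errors"
	"fmt"
)

// provider stands in for one credential source in the chain above.
// An empty key means that source has no credentials to offer.
type provider struct {
	name string
	key  string
}

// resolve walks the chain in order and returns the first provider that
// actually has credentials, mirroring the "preferring the first location
// found" behavior described above.
func resolve(chain []provider) (string, error) {
	for _, p := range chain {
		if p.key != "" {
			return p.name, nil
		}
	}
	return "", errors.New("no valid providers in chain")
}

func main() {
	chain := []provider{
		{name: "StaticProvider"},                   // nothing set programmatically
		{name: "EnvProvider", key: "AKIA-example"}, // found in the environment
		{name: "SharedCredentialsProvider", key: "AKIA-example"},
	}
	winner, _ := resolve(chain)
	fmt.Println(winner) // EnvProvider
}
```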
+ +#### type File + +```go +type File struct { +} +``` + +File implements [vfs.File](../README.md#type-file) interface for S3 fs. + +#### func (*File) Close + +```go +func (f *File) Close() error +``` +Close cleans up underlying mechanisms for reading from and writing to the file. +Closes and removes the local temp file, and triggers a write to s3 of anything +in the f.writeBuffer if it has been created. + +#### func (*File) CopyToFile + +```go +func (f *File) CopyToFile(targetFile vfs.File) error +``` +CopyToFile puts the contents of File into the targetFile passed. Uses the S3 +CopyObject method if the target file is also on S3, otherwise uses [io.Copy](https://godoc.org/io#Copy). + +#### func (*File) CopyToLocation + +```go +func (f *File) CopyToLocation(location vfs.Location) (vfs.File, error) +``` +CopyToLocation creates a copy of [*File](#type-file), using the file's current name as the new +file's name at the given location. If the given location is also s3, the AWS API +for copying files will be utilized, otherwise, standard [io.Copy](https://godoc.org/io#Copy) will be done to +the new file. + +#### func (*File) Delete + +```go +func (f *File) Delete() error +``` +Delete clears any local temp file, or write buffer from read/writes to the file, +then makes a DeleteObject call to s3 for the file. Returns any error returned by +the API. + +#### func (*File) Exists + +```go +func (f *File) Exists() (bool, error) +``` +Exists returns a boolean of whether or not the object exists on s3, based on a +call for the object's HEAD through the s3 API. + +#### func (*File) LastModified + +```go +func (f *File) LastModified() (*time.Time, error) +``` +LastModified returns the LastModified property of a HEAD request to the s3 +object. + +#### func (*File) Location + +```go +func (f *File) Location() vfs.Location +``` +Location returns a [vfs.Location](../README.md#type-location) at the location of the object. 
IE: if file is at
+s3://bucket/here/is/the/file.txt, the location points to s3://bucket/here/is/the/
+
+#### func (*File) MoveToFile
+
+```go
+func (f *File) MoveToFile(targetFile vfs.File) error
+```
+MoveToFile puts the contents of File into the targetFile passed, using
+[File.CopyToFile](#func-file-copytofile). If the copy succeeds, the source file is deleted. Any errors
+from the copy or delete are returned.
+
+#### func (*File) MoveToLocation
+
+```go
+func (f *File) MoveToLocation(location vfs.Location) (vfs.File, error)
+```
+MoveToLocation works by first calling [File.CopyToLocation](#func-file-copytolocation)([vfs.Location](../README.md#type-location)) then, if
+that succeeds, it deletes the original file, returning the new file. If the copy
+process fails, the error is returned and [Delete](#func-file-delete) isn't called. If the call to
+[Delete](#func-file-delete) fails, the error and the file generated by the copy are both returned.
+
+#### func (*File) Name
+
+```go
+func (f *File) Name() string
+```
+Name returns the name portion of the file's _key_ property. IE: "file.txt" of
+"s3://some/path/to/file.txt".
+
+#### func (*File) Path
+
+```go
+func (f *File) Path() string
+```
+Path returns the directory portion of the file's _key_. IE: "path/to" of
+"s3://some/path/to/file.txt".
+
+#### func (*File) Read
+
+```go
+func (f *File) Read(p []byte) (n int, err error)
+```
+Read implements the standard for [io.Reader](https://godoc.org/io#Reader). For this to work with an s3 file, a
+temporary local copy of the file is created, and reads work on that. This file
+is closed and removed upon calling [f.Close()](#func-file-close).
+
+#### func (*File) Seek
+
+```go
+func (f *File) Seek(offset int64, whence int) (int64, error)
+```
+Seek implements the standard for io.Seeker. A temporary local copy of the s3
+file is created (the same one used for Reads), which Seek() acts on. 
This file is
+closed and removed upon calling f.Close().
+
+#### func (*File) Size
+
+```go
+func (f *File) Size() (uint64, error)
+```
+Size returns the ContentLength value from an s3 HEAD request on the file's
+object.
+
+#### func (*File) String
+
+```go
+func (f *File) String() string
+```
+String implements [fmt.Stringer](https://godoc.org/fmt#Stringer), returning the file's URI as the default string.
+
+#### func (*File) URI
+
+```go
+func (f *File) URI() string
+```
+URI returns the File's URI as a string.
+
+#### func (*File) Write
+
+```go
+func (f *File) Write(data []byte) (res int, err error)
+```
+Write implements the standard for [io.Writer](https://godoc.org/io#Writer). Each subsequent write appends to
+an internal buffer. When [f.Close()](#func-file-close) is called, the contents of the buffer are
+used to initiate the PutObject call to s3. The underlying implementation uses
+s3manager, which will determine whether it is appropriate to call PutObject or
+initiate a multi-part upload.
+
+#### type FileSystem
+
+```go
+type FileSystem struct {
+}
+```
+
+FileSystem implements vfs.FileSystem for the S3 filesystem.
+
+#### func NewFileSystem
+
+```go
+func NewFileSystem() *FileSystem
+```
+NewFileSystem is the initializer for the FileSystem struct. It takes no
+arguments; the underlying aws-sdk s3iface.S3API client is created lazily by
+[Client()](#func-filesystem-client).
+
+#### func (*FileSystem) Client
+
+```go
+func (fs *FileSystem) Client() (s3iface.S3API, error)
+```
+Client returns the underlying aws s3 client, creating it if necessary.
+See [Authentication](#authentication) for authentication resolution.
+
+#### func (*FileSystem) Name
+
+```go
+func (fs *FileSystem) Name() string
+```
+Name returns "AWS S3".
+
+#### func (*FileSystem) NewFile
+
+```go
+func (fs *FileSystem) NewFile(volume string, name string) (vfs.File, error)
+```
+NewFile function returns the s3 implementation of [vfs.File](../README.md#type-file). 
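The buffer-then-flush-on-Close behavior that Write and Close describe can be sketched with a small stand-in type. This is illustrative only — the real implementation defers the upload to s3manager:

```go
package main

import (
	"bytes"
	"fmt"
)

// bufferedUpload sketches the Write/Close behavior described above: each
// Write appends to an in-memory buffer, and Close "uploads" the buffer's
// contents in one shot (here, just recording them).
type bufferedUpload struct {
	buf      bytes.Buffer
	uploaded []byte
}

func (b *bufferedUpload) Write(p []byte) (int, error) {
	return b.buf.Write(p)
}

func (b *bufferedUpload) Close() error {
	// On Close, the accumulated buffer is sent as a single object body.
	b.uploaded = b.buf.Bytes()
	return nil
}

func main() {
	u := &bufferedUpload{}
	u.Write([]byte("Hello, "))
	u.Write([]byte("world!"))
	u.Close()
	fmt.Println(string(u.uploaded)) // Hello, world!
}
```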
+
+#### func (*FileSystem) NewLocation
+
+```go
+func (fs *FileSystem) NewLocation(volume string, name string) (vfs.Location, error)
+```
+NewLocation function returns the s3 implementation of [vfs.Location](../README.md#type-location).
+
+#### func (*FileSystem) Scheme
+
+```go
+func (fs *FileSystem) Scheme() string
+```
+Scheme returns "s3" as the initial part of a file URI, ie: s3://
+
+#### func (*FileSystem) WithClient
+
+```go
+func (fs *FileSystem) WithClient(client interface{}) *FileSystem
+```
+WithClient passes in an s3 client and returns the filesystem (chainable).
+
+#### func (*FileSystem) WithOptions
+
+```go
+func (fs *FileSystem) WithOptions(opts vfs.Options) *FileSystem
+```
+WithOptions sets options for the client and returns the filesystem (chainable).
+
+#### type Location
+
+```go
+type Location struct {
+}
+```
+
+Location implements the [vfs.Location](../README.md#type-location) interface specific to the S3 fs.
+
+#### func (*Location) ChangeDir
+
+```go
+func (l *Location) ChangeDir(relativePath string) error
+```
+ChangeDir takes a relative path and modifies the underlying [Location](#type-location)'s path.
+The caller is modified by this, so the only return value is any error. For this
+implementation there are no errors.
+
+#### func (*Location) DeleteFile
+
+```go
+func (l *Location) DeleteFile(fileName string) error
+```
+DeleteFile removes the file at the fileName path.
+
+#### func (*Location) Exists
+
+```go
+func (l *Location) Exists() (bool, error)
+```
+Exists returns true if the bucket exists and the user in the underlying
+[s3.fileSystem.Client()](#func-filesystem-client) has the appropriate permissions. It returns false without
+an error if the bucket simply doesn't exist. Otherwise it may return false along
+with any error passed back from the API. 
+
+#### func (*Location) FileSystem
+
+```go
+func (l *Location) FileSystem() vfs.FileSystem
+```
+FileSystem returns a [vfs.FileSystem](../README.md#type-filesystem) interface of the location's underlying
+fileSystem.
+
+#### func (*Location) List
+
+```go
+func (l *Location) List() ([]string, error)
+```
+List calls the s3 API to list all objects in the location's bucket, with a
+prefix automatically set to the location's path. This will make a call to the s3
+API for every 1000 keys returned. If you have many thousands of keys at the
+given location, this could become quite expensive.
+
+#### func (*Location) ListByPrefix
+
+```go
+func (l *Location) ListByPrefix(prefix string) ([]string, error)
+```
+ListByPrefix calls the s3 API with the location's prefix modified relatively by
+the prefix arg passed to the function. The resource considerations of [List()](#func-location-list)
+apply to this function as well.
+
+#### func (*Location) ListByRegex
+
+```go
+func (l *Location) ListByRegex(regex *regexp.Regexp) ([]string, error)
+```
+ListByRegex retrieves the keys of all the files at the location's current path,
+then filters out all those that don't match the given regex. The resource
+considerations of [List()](#func-location-list) apply here as well.
+
+#### func (*Location) NewFile
+
+```go
+func (l *Location) NewFile(filePath string) (vfs.File, error)
+```
+NewFile uses the properties of the calling location to generate a vfs.File
+(backed by an [s3.File](#type-file)). The filePath argument is expected to be a relative path
+to the location's current path.
+
+#### func (*Location) NewLocation
+
+```go
+func (l *Location) NewLocation(relativePath string) (vfs.Location, error)
+```
+NewLocation makes a copy of the underlying Location, then modifies its path by
+calling [ChangeDir](#func-location-changedir) with the relativePath argument, returning the resulting
+location. 
The only possible errors come from the call to [ChangeDir](#func-location-changedir), which, for
+the s3 implementation, never results in an error.
+
+#### func (*Location) Path
+
+```go
+func (l *Location) Path() string
+```
+Path returns the prefix the location references in most s3 calls.
+
+#### func (*Location) String
+
+```go
+func (l *Location) String() string
+```
+String implements [fmt.Stringer](https://godoc.org/fmt#Stringer), returning the location's URI as the default
+string.
+
+#### func (*Location) URI
+
+```go
+func (l *Location) URI() string
+```
+URI returns the Location's URI as a string.
+
+#### func (*Location) Volume
+
+```go
+func (l *Location) Volume() string
+```
+Volume returns the bucket the location is contained in.
+
+#### type Options
+
+```go
+type Options struct {
+	AccessKeyID     string `json:"accessKeyId,omitempty"`
+	SecretAccessKey string `json:"secretAccessKey,omitempty"`
+	SessionToken    string `json:"sessionToken,omitempty"`
+	Region          string `json:"region,omitempty"`
+	Endpoint        string `json:"endpoint,omitempty"`
+}
+```
+
+Options holds s3-specific options. Currently only client options are used.
diff --git a/docs/utils.md b/docs/utils.md
new file mode 100644
index 00000000..74733fb8
--- /dev/null
+++ b/docs/utils.md
@@ -0,0 +1,70 @@
+# utils
+
+--
+
+## Usage
+
+```go
+const (
+	// Windows constant represents a target operating system running a version of Microsoft Windows
+	Windows = "windows"
+	// BadFilePrefix constant is returned when path has leading slash or backslash
+	BadFilePrefix = "expecting only a filename prefix, which may not include slashes or backslashes"
+)
+```
+
+#### func AddTrailingSlash
+
+```go
+func AddTrailingSlash(path string) string
+```
+AddTrailingSlash is a helper function that accepts a path string and returns the
+path string with a trailing slash if there wasn't one. 
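The behavior AddTrailingSlash describes can be sketched as follows. This is an illustrative re-implementation, not the library's code — the real helper may also account for Windows backslashes:

```go
package main

import (
	"fmt"
	"strings"
)

// addTrailingSlash sketches the helper described above: append a trailing
// slash only when one isn't already present.
func addTrailingSlash(p string) string {
	if strings.HasSuffix(p, "/") {
		return p
	}
	return p + "/"
}

func main() {
	fmt.Println(addTrailingSlash("/tmp/foo"))  // /tmp/foo/
	fmt.Println(addTrailingSlash("/tmp/foo/")) // unchanged: /tmp/foo/
}
```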
+
+#### func CleanPrefix
+
+```go
+func CleanPrefix(prefix string) string
+```
+CleanPrefix resolves relative dot pathing, removing any leading . or / and any
+trailing /.
+
+#### func EnsureTrailingSlash
+
+```go
+func EnsureTrailingSlash(dir string) string
+```
+EnsureTrailingSlash is like AddTrailingSlash but will only ever use /, since it
+is used for web URIs, never a Windows OS path.
+
+#### func GetFileURI
+
+```go
+func GetFileURI(f vfs.File) string
+```
+GetFileURI returns a File URI.
+
+#### func GetLocationURI
+
+```go
+func GetLocationURI(l vfs.Location) string
+```
+GetLocationURI returns a Location URI.
+
+#### func TouchCopy
+
+```go
+func TouchCopy(writer, reader vfs.File) error
+```
+TouchCopy is a wrapper around [io.Copy](https://godoc.org/io#Copy) which ensures that even empty source files
+(reader) will get written as an empty file. It guarantees a Write() call on the
+target file.
+
+#### func ValidateFilePrefix
+
+```go
+func ValidateFilePrefix(filenamePrefix string) error
+```
+ValidateFilePrefix performs a validation check on a prefix. The prefix should
+not include "/" or "\\" characters. An error is returned if either of those
+conditions is true.
diff --git a/docs/vfscp.md b/docs/vfscp.md
new file mode 100644
index 00000000..3a63790a
--- /dev/null
+++ b/docs/vfscp.md
@@ -0,0 +1,30 @@
+# vfscp
+
+--
+
+vfscp copies a file from one place to another, even between supported remote
+systems. A complete URI (scheme://authority/path) is required except for the
+local filesystem. See the github.com/c2fo/vfs docs for authentication. 
+
+
+### Usage
+
+vfscp's usage is extremely simple:
+
+    vfscp
+      -help prints help message
+
+
+### Examples
+
+Local OS URIs can be expressed without a scheme:
+
+    vfscp /some/local/file.txt s3://mybucket/path/to/myfile.txt
+
+But they may also use the full scheme URI:
+
+    vfscp file:///some/local/file.txt s3://mybucket/path/to/myfile.txt
+
+Copy a file from Google Cloud Storage to Amazon S3:
+
+    vfscp gs://googlebucket/some/path/photo.jpg s3://awsS3bucket/path/to/photo.jpg
diff --git a/docs/vfssimple.md b/docs/vfssimple.md
new file mode 100644
index 00000000..ab864652
--- /dev/null
+++ b/docs/vfssimple.md
@@ -0,0 +1,75 @@
+# vfssimple
+
+--
+
+Package vfssimple provides a basic, easy-to-use set of functions for any
+supported backend filesystem by using full URIs:
+
+* Local OS: file:///some/path/to/file.txt
+* Amazon S3: s3://mybucket/path/to/file.txt
+* Google Cloud Storage: gs://mybucket/path/to/file.txt
+
+
+### Usage
+
+Just import vfssimple.
+
+    package main
+
+    import (
+        "github.com/c2fo/vfs/vfssimple"
+    )
+
+    ...
+
+    func DoSomething() error {
+        myLocalDir, err := vfssimple.NewLocation("file:///tmp/")
+        if err != nil {
+            return err
+        }
+
+        myS3File, err := vfssimple.NewFile("s3://mybucket/some/path/to/key.txt")
+        if err != nil {
+            return err
+        }
+
+        localFile, err := myS3File.MoveToLocation(myLocalDir)
+        if err != nil {
+            return err
+        }
+
+        // do something with localFile
+        _ = localFile
+
+        return nil
+    }
+
+
+### Authentication and Options
+
+vfssimple is largely an example of how to initialize a set of backend
+filesystems. It only provides a default initialization of the individual file
+systems. See the backend docs for specific authentication info for each backend,
+but generally speaking, most backends can use environment variables to set
+credentials or client options.
+
+To do more, especially if you need to pass in specific [vfs.Options](../README.md#type-options) via
+WithOptions() or perhaps a mock client for testing via WithClient() or something
+else, you'd need to implement your own factory. 
See [backend](backend.md)
+for more information.
+
+## Functions
+
+#### func NewFile
+
+```go
+func NewFile(uri string) (vfs.File, error)
+```
+NewFile is a convenience function that allows for instantiating a file based on
+a uri string. Any backend filesystem is supported, though some may require prior
+configuration. See the docs for specific requirements of each.
+
+#### func NewLocation
+
+```go
+func NewLocation(uri string) (vfs.Location, error)
+```
+NewLocation is a convenience function that allows for instantiating a location
+based on a uri string. Any backend filesystem is supported, though some may
+require prior configuration. See the docs for specific requirements of each.
diff --git a/errors.go b/errors.go
deleted file mode 100644
index 0174bc76..00000000
--- a/errors.go
+++ /dev/null
@@ -1,52 +0,0 @@
-// This file contains unified error handling for vfs as well as a technique for handling
-// multiple errors in the event of deferred method calls such as file.Close()
-package vfs
-
-import (
-	"errors"
-	"fmt"
-)
-
-// MultiErr provides a set of functions to handle the scenario where, because of errors in defers,
-// we have a way to handle the potenetial of multiple errors. For instance, if you do a open a file,
-// defer it's close, then fail to Seek. The seek fauilure has one error but then the Close fails as
-// well. This ensure neither are ignored.
-type MultiErr struct {
-	errs []error
-}
-
-// Constructor for generating a zero-value MultiErr reference.
-func NewMutliErr() *MultiErr {
-	return &MultiErr{}
-}
-
-// Returns the error message string.
-func (me *MultiErr) Error() string {
-	var errorString string
-	for _, err := range me.errs {
-		errorString = fmt.Sprintf("%s%s\n", errorString, err.Error())
-	}
-	return errorString
-}
-
-// Appends the provided errors to the errs slice for future message reporting.
-func (me *MultiErr) Append(errs ...error) error {
-	me.errs = append(me.errs, errs...) 
- return errors.New("return value for multiErr must be set in the first deferred function") -} - -// If there are no errors in the MultErr instance, then return nil, otherwise return the full MultiErr instance. -func (me *MultiErr) OrNil() error { - if len(me.errs) > 0 { - return me - } - return nil -} - -type singleErrReturn func() error - -func (me *MultiErr) DeferFunc(f singleErrReturn) { - if err := f(); err != nil { - _ = me.Append(err) - } -} diff --git a/errors_example_test.go b/errors_example_test.go deleted file mode 100644 index d7dc87fc..00000000 --- a/errors_example_test.go +++ /dev/null @@ -1,27 +0,0 @@ -package vfs - -func ExampleMultiErr_DeferFunc() { - // NOTE: We use a named error in the function since our first defer will set it based on any appended errors - _ = func(f File) (rerr error) { - //THESE LINES REQUIRED - errs := NewMutliErr() - defer func() { rerr = errs.OrNil() }() - - _, err := f.Read(nil) - if err != nil { - //for REGULAR ERROR RETURNS we just return the Appended errors - return errs.Append(err) - } - - // for defers, use DeferFunc and pass it the func name - defer errs.DeferFunc(f.Close) - - _, err = f.Seek(0, 0) - if err != nil { - //for REGULAR ERROR RETURNS we just return the Appended errors - return errs.Append(err) - } - - return nil - } -} diff --git a/glide.lock b/glide.lock index 923f23fe..b3936507 100644 --- a/glide.lock +++ b/glide.lock @@ -1,5 +1,5 @@ -hash: 2f63863ba5da2ced393ce0c2a7ba24a66d7abb10a52156af19fb5cdbcb3cb389 -updated: 2017-08-10T16:03:45.974114809-05:00 +hash: 5b478d9da67f4436af2290281ccbee12e60231dd4c19ec150fd134a7afb261bf +updated: 2019-01-15T16:47:22.267376-06:00 imports: - name: cloud.google.com/go version: 085c05ca074a8de9107005f9baa6308eae7eaf41 @@ -11,7 +11,7 @@ imports: - internal/version - storage - name: github.com/aws/aws-sdk-go - version: d05c000e0b41647375a4093373eb1301e02c8a4e + version: d2d8f8c33f49af99cdd889f6897ffd525c520407 subpackages: - aws - aws/awserr @@ -22,15 +22,24 @@ imports: - 
aws/credentials - aws/credentials/ec2rolecreds - aws/credentials/endpointcreds + - aws/credentials/processcreds - aws/credentials/stscreds + - aws/csm - aws/defaults - aws/ec2metadata - aws/endpoints - aws/request - aws/session - aws/signer/v4 + - internal/ini + - internal/s3err + - internal/sdkio + - internal/sdkrand + - internal/sdkuri - internal/shareddefaults - private/protocol + - private/protocol/eventstream + - private/protocol/eventstream/eventstreamapi - private/protocol/query - private/protocol/query/queryutil - private/protocol/rest @@ -44,8 +53,8 @@ imports: version: 6d212800a42e8ab5c146b8ace3490ee17e5225f9 subpackages: - spew -- name: github.com/go-ini/ini - version: 887c8d36f8411bededfd2281daa3907f5f36552e +- name: github.com/fatih/color + version: 5b77d2a35fb0ede96d138fc9a99f5c9b6aef11b4 - name: github.com/golang/protobuf version: 18c9bb3261723cd5401db4d0c9fbc5c3b6c70fe8 subpackages: @@ -56,6 +65,10 @@ imports: version: 9af46dd5a1713e8b5cd71106287eba3cefdde50b - name: github.com/jmespath/go-jmespath version: c01cf91b011868172fdcd9f41838e80c9d716264 +- name: github.com/mattn/go-colorable + version: efa589957cd060542a26d2dd7832fd6a6c6c3ade +- name: github.com/mattn/go-isatty + version: 6ca4dbf54d38eea1a992b3c722a76a5d1c4cb25c - name: github.com/pmezard/go-difflib version: d8ed2627bdf02c080bf22230dbb337003b7aba2d subpackages: @@ -63,14 +76,12 @@ imports: - name: github.com/stretchr/objx version: cbeaeb16a013161a98496fad62933b1d21786672 - name: github.com/stretchr/testify - version: 69483b4bd14f5845b5a1e55bca19e954e827f1d0 + version: ffdc059bfe9ce6a4e144ba849dbedead332c6053 subpackages: - assert - mock - require - suite -- name: github.com/urfave/cli - version: 0bdeddeeb0f650497d603c4ad7b20cfe685682f6 - name: golang.org/x/net version: ffcf1bedda3b04ebb15a168a59800a73d6dc0f4d subpackages: @@ -89,6 +100,10 @@ imports: - internal - jws - jwt +- name: golang.org/x/sys + version: c11f84a56e43e20a78cee75a7c034031ecf57d1f + subpackages: + - unix - name: 
golang.org/x/text version: f4b4367115ec2de254587813edaa901bc1c723a8 subpackages: diff --git a/glide.yaml b/glide.yaml index 93afc9da..90f1804c 100644 --- a/glide.yaml +++ b/glide.yaml @@ -5,15 +5,23 @@ import: subpackages: - storage - package: github.com/aws/aws-sdk-go - version: ~1.10.18 + version: ^1.16.19 subpackages: + - aws - aws/awserr + - aws/credentials + - aws/credentials/ec2rolecreds + - aws/defaults + - aws/ec2metadata - aws/request + - aws/session - service/s3 - service/s3/s3iface - service/s3/s3manager +- package: github.com/fatih/color + version: ^1.7.0 - package: github.com/stretchr/testify - version: ~1.1.4 + version: ^1.3.0 subpackages: - mock - package: golang.org/x/net @@ -23,5 +31,4 @@ import: version: 48e49d1645e228d1c50c3d54fb476b2224477303 subpackages: - iterator -- package: github.com/urfave/cli - version: ^1.19.1 + - option diff --git a/go.mod b/go.mod new file mode 100644 index 00000000..68424da6 --- /dev/null +++ b/go.mod @@ -0,0 +1,19 @@ +module github.com/c2fo/vfs + +require ( + cloud.google.com/go v0.34.0 + github.com/aws/aws-sdk-go v1.16.19 + github.com/davecgh/go-spew v1.1.1 // indirect + github.com/fatih/color v1.7.0 + github.com/google/martian v2.1.0+incompatible // indirect + github.com/googleapis/gax-go v2.0.2+incompatible // indirect + github.com/mattn/go-colorable v0.0.9 // indirect + github.com/mattn/go-isatty v0.0.4 // indirect + github.com/stretchr/objx v0.1.1 // indirect + github.com/stretchr/testify v1.3.0 + golang.org/x/net v0.0.0-20190110200230-915654e7eabc + golang.org/x/oauth2 v0.0.0-20190111185915-36a7019397c4 // indirect + golang.org/x/sys v0.0.0-20190114130336-2be517255631 // indirect + google.golang.org/api v0.1.0 + google.golang.org/genproto v0.0.0-20190111180523-db91494dd46c // indirect +) diff --git a/go.sum b/go.sum new file mode 100644 index 00000000..c67e1e82 --- /dev/null +++ b/go.sum @@ -0,0 +1,92 @@ +cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= +cloud.google.com/go 
v0.34.0 h1:eOI3/cP2VTU6uZLDYAoic+eyzzB9YyGmJ7eIjl8rOPg= +cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= +git.apache.org/thrift.git v0.0.0-20180902110319-2566ecd5d999/go.mod h1:fPE2ZNJGynbRyZ4dJvy6G277gSllfV2HJqblrnkyeyg= +github.com/aws/aws-sdk-go v1.16.19 h1:eQypou1JciH0C87wYbj9uii0YVG3hS0S4UY78oWmUvM= +github.com/aws/aws-sdk-go v1.16.19/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo= +github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q= +github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw= +github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= +github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/fatih/color v1.7.0 h1:DkWD4oS2D8LGGgTQ6IvwJJXSL5Vp2ffcQg58nFV38Ys= +github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4= +github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04= +github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b h1:VKtxabqXZkF25pY9ekfRL6a582T4P37/31XEstQ5p58= +github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q= +github.com/golang/lint v0.0.0-20180702182130-06c8688daad7/go.mod h1:tluoj9z5200jBnyusfRPU2LqT6J+DAorxEvtC7LHB+E= +github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A= +github.com/golang/protobuf v1.2.0 h1:P3YflyNX/ehuJFLhxviNdFxQPkGK5cDcApsge1SqnvM= +github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= +github.com/google/go-cmp v0.2.0 h1:+dTQ8DZQJz0Mb/HjFlkptS1FeQ4cWSnN941F8aEG4SQ= +github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= +github.com/google/martian v2.1.0+incompatible 
h1:/CP5g8u/VJHijgedC/Legn3BAbAaWPgecwXBIDzw5no= +github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs= +github.com/googleapis/gax-go v2.0.2+incompatible h1:silFMLAnr330+NRuag/VjIGF7TLp/LBrV2CJKFLWEww= +github.com/googleapis/gax-go v2.0.2+incompatible/go.mod h1:SFVmujtThgffbyetf+mdk2eWhX2bMyUtNHzFKcPA9HY= +github.com/grpc-ecosystem/grpc-gateway v1.5.0/go.mod h1:RSKVYQBd5MCa4OVpNdGskqpgL2+G+NZTnrVHpWWfpdw= +github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af h1:pmfjZENx5imkbgOkpRUYLnmbU7UEFbjtDA2hxJ1ichM= +github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k= +github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck= +github.com/mattn/go-colorable v0.0.9 h1:UVL0vNpWh04HeJXV0KLcaT7r06gOH2l4OW6ddYRUIY4= +github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU= +github.com/mattn/go-isatty v0.0.4 h1:bnP0vzxcAdeI1zdubAl5PjU6zsERjGZb7raWodagDYs= +github.com/mattn/go-isatty v0.0.4/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4= +github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0= +github.com/openzipkin/zipkin-go v0.1.1/go.mod h1:NtoC/o8u3JlF1lSlyPNswIbeQH9bJTmOf0Erfk+hxe8= +github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= +github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +github.com/prometheus/client_golang v0.8.0/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw= +github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo= +github.com/prometheus/common v0.0.0-20180801064454-c7de2306084e/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro= +github.com/prometheus/procfs v0.0.0-20180725123919-05ee40e3a273/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk= 
+github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= +github.com/stretchr/objx v0.1.1 h1:2vfRuCMp5sSVIDSqO8oNnWJq7mPa6KVP3iPIwFBuy8A= +github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= +github.com/stretchr/testify v1.3.0 h1:TivCn/peBQ7UY8ooIcPgZFpTNSz0Q2U6UrFlUfqbe0Q= +github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= +go.opencensus.io v0.18.0 h1:Mk5rgZcggtbvtAun5aJzAtjKKN/t0R3jJPlWILlv938= +go.opencensus.io v0.18.0/go.mod h1:vKdFvxhtzZ9onBp9VKHK8z/sRpBMnKAsufL7wlDrCOA= +golang.org/x/lint v0.0.0-20180702182130-06c8688daad7/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE= +golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE= +golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= +golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= +golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= +golang.org/x/net v0.0.0-20181106065722-10aee1819953/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= +golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= +golang.org/x/net v0.0.0-20190110200230-915654e7eabc h1:Yx9JGxI1SBhVLFjpAkWMaO1TF+xyqtHLjZpvQboJGiM= +golang.org/x/net v0.0.0-20190110200230-915654e7eabc/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= +golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= +golang.org/x/oauth2 v0.0.0-20181203162652-d668ce993890/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= +golang.org/x/oauth2 v0.0.0-20190111185915-36a7019397c4 h1:Xi5aaGtyrfSB/gXS4Kal2NNpB7uzffL3yzWi2kByI18= +golang.org/x/oauth2 v0.0.0-20190111185915-36a7019397c4/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= 
+golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4 h1:YUO/7uOKsKeq9UokNS62b8FYywz3ker1l1vDZRCRefw= +golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/sys v0.0.0-20190114130336-2be517255631 h1:g/5trXm6f9Tm+ochb21RlFNnF63lt+elB9hVBqtPu5Y= +golang.org/x/sys v0.0.0-20190114130336-2be517255631/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg= +golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= +golang.org/x/tools v0.0.0-20180828015842-6cd1fcedba52/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= +google.golang.org/api v0.0.0-20180910000450-7ca32eb868bf/go.mod h1:4mhQ8q/RsB7i+udVvVy5NUi08OU8ZlA0gRVgrF7VFY0= +google.golang.org/api v0.1.0 h1:K6z2u68e86TPdSdefXdzvXgR1zEMa+459vBSfWYAZkI= +google.golang.org/api v0.1.0/go.mod h1:UGEZY7KEX120AnNLIHFMKIo4obdJhkp2tPbaPlQx13Y= +google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= +google.golang.org/appengine v1.3.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= +google.golang.org/appengine v1.4.0 h1:/wp5JvzpHIxhs/dumFmF7BXTf3Z+dd4uXta4kVyO508= +google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= +google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= +google.golang.org/genproto v0.0.0-20180831171423-11092d34479b/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= 
+google.golang.org/genproto v0.0.0-20181202183823-bd91e49a0898/go.mod h1:7Ep/1NZk928CDR8SjdVbjWNpdIf6nzjE3BTgJDr2Atg=
+google.golang.org/genproto v0.0.0-20190111180523-db91494dd46c h1:LZllHYjdJnynBfmwysp+s4yhMzfc+3BzhdqzAMvwjoc=
+google.golang.org/genproto v0.0.0-20190111180523-db91494dd46c/go.mod h1:7Ep/1NZk928CDR8SjdVbjWNpdIf6nzjE3BTgJDr2Atg=
+google.golang.org/grpc v1.14.0/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw=
+google.golang.org/grpc v1.16.0/go.mod h1:0JHn/cJsOMiMfNA9+DeHDlAU7KAAB5GDlYFpa9MZMio=
+google.golang.org/grpc v1.17.0 h1:TRJYBgMclJvGYn2rIMjj+h9KtMt5r1Ij7ODVRIZkwhk=
+google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
+gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
+honnef.co/go/tools v0.0.0-20180728063816-88497007e858/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
diff --git a/gs/fileSystem.go b/gs/fileSystem.go
deleted file mode 100644
index a7550738..00000000
--- a/gs/fileSystem.go
+++ /dev/null
@@ -1,51 +0,0 @@
-package gs
-
-import (
-	"cloud.google.com/go/storage"
-	"golang.org/x/net/context"
-
-	"github.com/c2fo/vfs"
-)
-
-//Scheme defines the filesystem type.
-const Scheme = "gs"
-
-// FileSystem implements vfs.Filesystem for the GCS filesystem.
-type FileSystem struct {
-	client *storage.Client
-	ctx    context.Context
-}
-
-// NewFile function returns the gcs implementation of vfs.File.
-func (fs *FileSystem) NewFile(volume string, name string) (vfs.File, error) {
-	file, err := newFile(fs, volume, name)
-	return vfs.File(file), err
-}
-
-// NewLocation function returns the s3 implementation of vfs.Location.
-func (fs *FileSystem) NewLocation(volume string, path string) (loc vfs.Location, err error) {
-	loc = &Location{
-		fileSystem: fs,
-		bucket:     volume,
-		prefix:     vfs.EnsureTrailingSlash(path),
-	}
-	return
-}
-
-// Name returns "Google Cloud Storage"
-func (fs *FileSystem) Name() string {
-	return "Google Cloud Storage"
-}
-
-// Scheme return "gs" as the initial part of a file URI ie: gs://
-func (fs *FileSystem) Scheme() string {
-	return Scheme
-}
-
-// NewFileSystem intializer for FileSystem struct accepts google cloud storage client and returns Filesystem or error.
-func NewFileSystem(ctx context.Context, client *storage.Client) *FileSystem {
-	return &FileSystem{
-		client: client,
-		ctx:    ctx,
-	}
-}
diff --git a/mocks/S3API.go b/mocks/S3API.go
index 7bf8aefc..4d333708 100644
--- a/mocks/S3API.go
+++ b/mocks/S3API.go
@@ -1,10 +1,11 @@
+// Code generated by mockery v1.0.0. DO NOT EDIT.
+
 package mocks
 
 import aws "github.com/aws/aws-sdk-go/aws"
 import mock "github.com/stretchr/testify/mock"
 import request "github.com/aws/aws-sdk-go/aws/request"
 import s3 "github.com/aws/aws-sdk-go/service/s3"
-import s3iface "github.com/aws/aws-sdk-go/service/s3/s3iface"
 
 // S3API is an autogenerated mock type for the S3API type
 type S3API struct {
@@ -61,7 +62,14 @@ func (_m *S3API) AbortMultipartUploadRequest(_a0 *s3.AbortMultipartUploadInput) 
 
 // AbortMultipartUploadWithContext provides a mock function with given fields: _a0, _a1, _a2
 func (_m *S3API) AbortMultipartUploadWithContext(_a0 aws.Context, _a1 *s3.AbortMultipartUploadInput, _a2 ...request.Option) (*s3.AbortMultipartUploadOutput, error) {
-	ret := _m.Called(_a0, _a1, _a2)
+	_va := make([]interface{}, len(_a2))
+	for _i := range _a2 {
+		_va[_i] = _a2[_i]
+	}
+	var _ca []interface{}
+	_ca = append(_ca, _a0, _a1)
+	_ca = append(_ca, _va...)
+	ret := _m.Called(_ca...)
 
 	var r0 *s3.AbortMultipartUploadOutput
 	if rf, ok := ret.Get(0).(func(aws.Context, *s3.AbortMultipartUploadInput, ...request.Option) *s3.AbortMultipartUploadOutput); ok {
@@ -132,7 +140,14 @@ func (_m *S3API) CompleteMultipartUploadRequest(_a0 *s3.CompleteMultipartUploadI
 
 // CompleteMultipartUploadWithContext provides a mock function with given fields: _a0, _a1, _a2
 func (_m *S3API) CompleteMultipartUploadWithContext(_a0 aws.Context, _a1 *s3.CompleteMultipartUploadInput, _a2 ...request.Option) (*s3.CompleteMultipartUploadOutput, error) {
-	ret := _m.Called(_a0, _a1, _a2)
+	_va := make([]interface{}, len(_a2))
+	for _i := range _a2 {
+		_va[_i] = _a2[_i]
+	}
+	var _ca []interface{}
+	_ca = append(_ca, _a0, _a1)
+	_ca = append(_ca, _va...)
+	ret := _m.Called(_ca...)
 
 	var r0 *s3.CompleteMultipartUploadOutput
 	if rf, ok := ret.Get(0).(func(aws.Context, *s3.CompleteMultipartUploadInput, ...request.Option) *s3.CompleteMultipartUploadOutput); ok {
@@ -203,7 +218,14 @@ func (_m *S3API) CopyObjectRequest(_a0 *s3.CopyObjectInput) (*request.Request, *
 
 // CopyObjectWithContext provides a mock function with given fields: _a0, _a1, _a2
 func (_m *S3API) CopyObjectWithContext(_a0 aws.Context, _a1 *s3.CopyObjectInput, _a2 ...request.Option) (*s3.CopyObjectOutput, error) {
-	ret := _m.Called(_a0, _a1, _a2)
+	_va := make([]interface{}, len(_a2))
+	for _i := range _a2 {
+		_va[_i] = _a2[_i]
+	}
+	var _ca []interface{}
+	_ca = append(_ca, _a0, _a1)
+	_ca = append(_ca, _va...)
+	ret := _m.Called(_ca...)
 
 	var r0 *s3.CopyObjectOutput
 	if rf, ok := ret.Get(0).(func(aws.Context, *s3.CopyObjectInput, ...request.Option) *s3.CopyObjectOutput); ok {
@@ -274,7 +296,14 @@ func (_m *S3API) CreateBucketRequest(_a0 *s3.CreateBucketInput) (*request.Reques
 
 // CreateBucketWithContext provides a mock function with given fields: _a0, _a1, _a2
 func (_m *S3API) CreateBucketWithContext(_a0 aws.Context, _a1 *s3.CreateBucketInput, _a2 ...request.Option) (*s3.CreateBucketOutput, error) {
-	ret := _m.Called(_a0, _a1, _a2)
+	_va := make([]interface{}, len(_a2))
+	for _i := range _a2 {
+		_va[_i] = _a2[_i]
+	}
+	var _ca []interface{}
+	_ca = append(_ca, _a0, _a1)
+	_ca = append(_ca, _va...)
+	ret := _m.Called(_ca...)
 
 	var r0 *s3.CreateBucketOutput
 	if rf, ok := ret.Get(0).(func(aws.Context, *s3.CreateBucketInput, ...request.Option) *s3.CreateBucketOutput); ok {
@@ -345,7 +374,14 @@ func (_m *S3API) CreateMultipartUploadRequest(_a0 *s3.CreateMultipartUploadInput
 
 // CreateMultipartUploadWithContext provides a mock function with given fields: _a0, _a1, _a2
 func (_m *S3API) CreateMultipartUploadWithContext(_a0 aws.Context, _a1 *s3.CreateMultipartUploadInput, _a2 ...request.Option) (*s3.CreateMultipartUploadOutput, error) {
-	ret := _m.Called(_a0, _a1, _a2)
+	_va := make([]interface{}, len(_a2))
+	for _i := range _a2 {
+		_va[_i] = _a2[_i]
+	}
+	var _ca []interface{}
+	_ca = append(_ca, _a0, _a1)
+	_ca = append(_ca, _va...)
+	ret := _m.Called(_ca...)
 
 	var r0 *s3.CreateMultipartUploadOutput
 	if rf, ok := ret.Get(0).(func(aws.Context, *s3.CreateMultipartUploadInput, ...request.Option) *s3.CreateMultipartUploadOutput); ok {
@@ -439,7 +475,14 @@ func (_m *S3API) DeleteBucketAnalyticsConfigurationRequest(_a0 *s3.DeleteBucketA
 
 // DeleteBucketAnalyticsConfigurationWithContext provides a mock function with given fields: _a0, _a1, _a2
 func (_m *S3API) DeleteBucketAnalyticsConfigurationWithContext(_a0 aws.Context, _a1 *s3.DeleteBucketAnalyticsConfigurationInput, _a2 ...request.Option) (*s3.DeleteBucketAnalyticsConfigurationOutput, error) {
-	ret := _m.Called(_a0, _a1, _a2)
+	_va := make([]interface{}, len(_a2))
+	for _i := range _a2 {
+		_va[_i] = _a2[_i]
+	}
+	var _ca []interface{}
+	_ca = append(_ca, _a0, _a1)
+	_ca = append(_ca, _va...)
+	ret := _m.Called(_ca...)
 
 	var r0 *s3.DeleteBucketAnalyticsConfigurationOutput
 	if rf, ok := ret.Get(0).(func(aws.Context, *s3.DeleteBucketAnalyticsConfigurationInput, ...request.Option) *s3.DeleteBucketAnalyticsConfigurationOutput); ok {
@@ -510,7 +553,14 @@ func (_m *S3API) DeleteBucketCorsRequest(_a0 *s3.DeleteBucketCorsInput) (*reques
 
 // DeleteBucketCorsWithContext provides a mock function with given fields: _a0, _a1, _a2
 func (_m *S3API) DeleteBucketCorsWithContext(_a0 aws.Context, _a1 *s3.DeleteBucketCorsInput, _a2 ...request.Option) (*s3.DeleteBucketCorsOutput, error) {
-	ret := _m.Called(_a0, _a1, _a2)
+	_va := make([]interface{}, len(_a2))
+	for _i := range _a2 {
+		_va[_i] = _a2[_i]
+	}
+	var _ca []interface{}
+	_ca = append(_ca, _a0, _a1)
+	_ca = append(_ca, _va...)
+	ret := _m.Called(_ca...)
 
 	var r0 *s3.DeleteBucketCorsOutput
 	if rf, ok := ret.Get(0).(func(aws.Context, *s3.DeleteBucketCorsInput, ...request.Option) *s3.DeleteBucketCorsOutput); ok {
@@ -531,6 +581,84 @@ func (_m *S3API) DeleteBucketCorsWithContext(_a0 aws.Context, _a1 *s3.DeleteBuck
 	return r0, r1
 }
 
+// DeleteBucketEncryption provides a mock function with given fields: _a0
+func (_m *S3API) DeleteBucketEncryption(_a0 *s3.DeleteBucketEncryptionInput) (*s3.DeleteBucketEncryptionOutput, error) {
+	ret := _m.Called(_a0)
+
+	var r0 *s3.DeleteBucketEncryptionOutput
+	if rf, ok := ret.Get(0).(func(*s3.DeleteBucketEncryptionInput) *s3.DeleteBucketEncryptionOutput); ok {
+		r0 = rf(_a0)
+	} else {
+		if ret.Get(0) != nil {
+			r0 = ret.Get(0).(*s3.DeleteBucketEncryptionOutput)
+		}
+	}
+
+	var r1 error
+	if rf, ok := ret.Get(1).(func(*s3.DeleteBucketEncryptionInput) error); ok {
+		r1 = rf(_a0)
+	} else {
+		r1 = ret.Error(1)
+	}
+
+	return r0, r1
+}
+
+// DeleteBucketEncryptionRequest provides a mock function with given fields: _a0
+func (_m *S3API) DeleteBucketEncryptionRequest(_a0 *s3.DeleteBucketEncryptionInput) (*request.Request, *s3.DeleteBucketEncryptionOutput) {
+	ret := _m.Called(_a0)
+
+	var r0 *request.Request
+	if rf, ok := ret.Get(0).(func(*s3.DeleteBucketEncryptionInput) *request.Request); ok {
+		r0 = rf(_a0)
+	} else {
+		if ret.Get(0) != nil {
+			r0 = ret.Get(0).(*request.Request)
+		}
+	}
+
+	var r1 *s3.DeleteBucketEncryptionOutput
+	if rf, ok := ret.Get(1).(func(*s3.DeleteBucketEncryptionInput) *s3.DeleteBucketEncryptionOutput); ok {
+		r1 = rf(_a0)
+	} else {
+		if ret.Get(1) != nil {
+			r1 = ret.Get(1).(*s3.DeleteBucketEncryptionOutput)
+		}
+	}
+
+	return r0, r1
+}
+
+// DeleteBucketEncryptionWithContext provides a mock function with given fields: _a0, _a1, _a2
+func (_m *S3API) DeleteBucketEncryptionWithContext(_a0 aws.Context, _a1 *s3.DeleteBucketEncryptionInput, _a2 ...request.Option) (*s3.DeleteBucketEncryptionOutput, error) {
+	_va := make([]interface{}, len(_a2))
+	for _i := range _a2 {
+		_va[_i] = _a2[_i]
+	}
+	var _ca []interface{}
+	_ca = append(_ca, _a0, _a1)
+	_ca = append(_ca, _va...)
+	ret := _m.Called(_ca...)
+
+	var r0 *s3.DeleteBucketEncryptionOutput
+	if rf, ok := ret.Get(0).(func(aws.Context, *s3.DeleteBucketEncryptionInput, ...request.Option) *s3.DeleteBucketEncryptionOutput); ok {
+		r0 = rf(_a0, _a1, _a2...)
+	} else {
+		if ret.Get(0) != nil {
+			r0 = ret.Get(0).(*s3.DeleteBucketEncryptionOutput)
+		}
+	}
+
+	var r1 error
+	if rf, ok := ret.Get(1).(func(aws.Context, *s3.DeleteBucketEncryptionInput, ...request.Option) error); ok {
+		r1 = rf(_a0, _a1, _a2...)
+	} else {
+		r1 = ret.Error(1)
+	}
+
+	return r0, r1
+}
+
 // DeleteBucketInventoryConfiguration provides a mock function with given fields: _a0
 func (_m *S3API) DeleteBucketInventoryConfiguration(_a0 *s3.DeleteBucketInventoryConfigurationInput) (*s3.DeleteBucketInventoryConfigurationOutput, error) {
 	ret := _m.Called(_a0)
@@ -581,7 +709,14 @@ func (_m *S3API) DeleteBucketInventoryConfigurationRequest(_a0 *s3.DeleteBucketI
 
 // DeleteBucketInventoryConfigurationWithContext provides a mock function with given fields: _a0, _a1, _a2
 func (_m *S3API) DeleteBucketInventoryConfigurationWithContext(_a0 aws.Context, _a1 *s3.DeleteBucketInventoryConfigurationInput, _a2 ...request.Option) (*s3.DeleteBucketInventoryConfigurationOutput, error) {
-	ret := _m.Called(_a0, _a1, _a2)
+	_va := make([]interface{}, len(_a2))
+	for _i := range _a2 {
+		_va[_i] = _a2[_i]
+	}
+	var _ca []interface{}
+	_ca = append(_ca, _a0, _a1)
+	_ca = append(_ca, _va...)
+	ret := _m.Called(_ca...)
 
 	var r0 *s3.DeleteBucketInventoryConfigurationOutput
 	if rf, ok := ret.Get(0).(func(aws.Context, *s3.DeleteBucketInventoryConfigurationInput, ...request.Option) *s3.DeleteBucketInventoryConfigurationOutput); ok {
@@ -652,7 +787,14 @@ func (_m *S3API) DeleteBucketLifecycleRequest(_a0 *s3.DeleteBucketLifecycleInput
 
 // DeleteBucketLifecycleWithContext provides a mock function with given fields: _a0, _a1, _a2
 func (_m *S3API) DeleteBucketLifecycleWithContext(_a0 aws.Context, _a1 *s3.DeleteBucketLifecycleInput, _a2 ...request.Option) (*s3.DeleteBucketLifecycleOutput, error) {
-	ret := _m.Called(_a0, _a1, _a2)
+	_va := make([]interface{}, len(_a2))
+	for _i := range _a2 {
+		_va[_i] = _a2[_i]
+	}
+	var _ca []interface{}
+	_ca = append(_ca, _a0, _a1)
+	_ca = append(_ca, _va...)
+	ret := _m.Called(_ca...)
 
 	var r0 *s3.DeleteBucketLifecycleOutput
 	if rf, ok := ret.Get(0).(func(aws.Context, *s3.DeleteBucketLifecycleInput, ...request.Option) *s3.DeleteBucketLifecycleOutput); ok {
@@ -723,7 +865,14 @@ func (_m *S3API) DeleteBucketMetricsConfigurationRequest(_a0 *s3.DeleteBucketMet
 
 // DeleteBucketMetricsConfigurationWithContext provides a mock function with given fields: _a0, _a1, _a2
 func (_m *S3API) DeleteBucketMetricsConfigurationWithContext(_a0 aws.Context, _a1 *s3.DeleteBucketMetricsConfigurationInput, _a2 ...request.Option) (*s3.DeleteBucketMetricsConfigurationOutput, error) {
-	ret := _m.Called(_a0, _a1, _a2)
+	_va := make([]interface{}, len(_a2))
+	for _i := range _a2 {
+		_va[_i] = _a2[_i]
+	}
+	var _ca []interface{}
+	_ca = append(_ca, _a0, _a1)
+	_ca = append(_ca, _va...)
+	ret := _m.Called(_ca...)
 
 	var r0 *s3.DeleteBucketMetricsConfigurationOutput
 	if rf, ok := ret.Get(0).(func(aws.Context, *s3.DeleteBucketMetricsConfigurationInput, ...request.Option) *s3.DeleteBucketMetricsConfigurationOutput); ok {
@@ -794,7 +943,14 @@ func (_m *S3API) DeleteBucketPolicyRequest(_a0 *s3.DeleteBucketPolicyInput) (*re
 
 // DeleteBucketPolicyWithContext provides a mock function with given fields: _a0, _a1, _a2
 func (_m *S3API) DeleteBucketPolicyWithContext(_a0 aws.Context, _a1 *s3.DeleteBucketPolicyInput, _a2 ...request.Option) (*s3.DeleteBucketPolicyOutput, error) {
-	ret := _m.Called(_a0, _a1, _a2)
+	_va := make([]interface{}, len(_a2))
+	for _i := range _a2 {
+		_va[_i] = _a2[_i]
+	}
+	var _ca []interface{}
+	_ca = append(_ca, _a0, _a1)
+	_ca = append(_ca, _va...)
+	ret := _m.Called(_ca...)
 
 	var r0 *s3.DeleteBucketPolicyOutput
 	if rf, ok := ret.Get(0).(func(aws.Context, *s3.DeleteBucketPolicyInput, ...request.Option) *s3.DeleteBucketPolicyOutput); ok {
@@ -865,7 +1021,14 @@ func (_m *S3API) DeleteBucketReplicationRequest(_a0 *s3.DeleteBucketReplicationI
 
 // DeleteBucketReplicationWithContext provides a mock function with given fields: _a0, _a1, _a2
 func (_m *S3API) DeleteBucketReplicationWithContext(_a0 aws.Context, _a1 *s3.DeleteBucketReplicationInput, _a2 ...request.Option) (*s3.DeleteBucketReplicationOutput, error) {
-	ret := _m.Called(_a0, _a1, _a2)
+	_va := make([]interface{}, len(_a2))
+	for _i := range _a2 {
+		_va[_i] = _a2[_i]
+	}
+	var _ca []interface{}
+	_ca = append(_ca, _a0, _a1)
+	_ca = append(_ca, _va...)
+	ret := _m.Called(_ca...)
 
 	var r0 *s3.DeleteBucketReplicationOutput
 	if rf, ok := ret.Get(0).(func(aws.Context, *s3.DeleteBucketReplicationInput, ...request.Option) *s3.DeleteBucketReplicationOutput); ok {
@@ -961,7 +1124,14 @@ func (_m *S3API) DeleteBucketTaggingRequest(_a0 *s3.DeleteBucketTaggingInput) (*
 
 // DeleteBucketTaggingWithContext provides a mock function with given fields: _a0, _a1, _a2
 func (_m *S3API) DeleteBucketTaggingWithContext(_a0 aws.Context, _a1 *s3.DeleteBucketTaggingInput, _a2 ...request.Option) (*s3.DeleteBucketTaggingOutput, error) {
-	ret := _m.Called(_a0, _a1, _a2)
+	_va := make([]interface{}, len(_a2))
+	for _i := range _a2 {
+		_va[_i] = _a2[_i]
+	}
+	var _ca []interface{}
+	_ca = append(_ca, _a0, _a1)
+	_ca = append(_ca, _va...)
+	ret := _m.Called(_ca...)
 
 	var r0 *s3.DeleteBucketTaggingOutput
 	if rf, ok := ret.Get(0).(func(aws.Context, *s3.DeleteBucketTaggingInput, ...request.Option) *s3.DeleteBucketTaggingOutput); ok {
@@ -1032,7 +1202,14 @@ func (_m *S3API) DeleteBucketWebsiteRequest(_a0 *s3.DeleteBucketWebsiteInput) (*
 
 // DeleteBucketWebsiteWithContext provides a mock function with given fields: _a0, _a1, _a2
 func (_m *S3API) DeleteBucketWebsiteWithContext(_a0 aws.Context, _a1 *s3.DeleteBucketWebsiteInput, _a2 ...request.Option) (*s3.DeleteBucketWebsiteOutput, error) {
-	ret := _m.Called(_a0, _a1, _a2)
+	_va := make([]interface{}, len(_a2))
+	for _i := range _a2 {
+		_va[_i] = _a2[_i]
+	}
+	var _ca []interface{}
+	_ca = append(_ca, _a0, _a1)
+	_ca = append(_ca, _va...)
+	ret := _m.Called(_ca...)
 
 	var r0 *s3.DeleteBucketWebsiteOutput
 	if rf, ok := ret.Get(0).(func(aws.Context, *s3.DeleteBucketWebsiteInput, ...request.Option) *s3.DeleteBucketWebsiteOutput); ok {
@@ -1055,7 +1232,14 @@ func (_m *S3API) DeleteBucketWebsiteWithContext(_a0 aws.Context, _a1 *s3.DeleteB
 
 // DeleteBucketWithContext provides a mock function with given fields: _a0, _a1, _a2
 func (_m *S3API) DeleteBucketWithContext(_a0 aws.Context, _a1 *s3.DeleteBucketInput, _a2 ...request.Option) (*s3.DeleteBucketOutput, error) {
-	ret := _m.Called(_a0, _a1, _a2)
+	_va := make([]interface{}, len(_a2))
+	for _i := range _a2 {
+		_va[_i] = _a2[_i]
+	}
+	var _ca []interface{}
+	_ca = append(_ca, _a0, _a1)
+	_ca = append(_ca, _va...)
+	ret := _m.Called(_ca...)
 
 	var r0 *s3.DeleteBucketOutput
 	if rf, ok := ret.Get(0).(func(aws.Context, *s3.DeleteBucketInput, ...request.Option) *s3.DeleteBucketOutput); ok {
@@ -1174,7 +1358,14 @@ func (_m *S3API) DeleteObjectTaggingRequest(_a0 *s3.DeleteObjectTaggingInput) (*
 
 // DeleteObjectTaggingWithContext provides a mock function with given fields: _a0, _a1, _a2
 func (_m *S3API) DeleteObjectTaggingWithContext(_a0 aws.Context, _a1 *s3.DeleteObjectTaggingInput, _a2 ...request.Option) (*s3.DeleteObjectTaggingOutput, error) {
-	ret := _m.Called(_a0, _a1, _a2)
+	_va := make([]interface{}, len(_a2))
+	for _i := range _a2 {
+		_va[_i] = _a2[_i]
+	}
+	var _ca []interface{}
+	_ca = append(_ca, _a0, _a1)
+	_ca = append(_ca, _va...)
+	ret := _m.Called(_ca...)
 
 	var r0 *s3.DeleteObjectTaggingOutput
 	if rf, ok := ret.Get(0).(func(aws.Context, *s3.DeleteObjectTaggingInput, ...request.Option) *s3.DeleteObjectTaggingOutput); ok {
@@ -1197,7 +1388,14 @@ func (_m *S3API) DeleteObjectTaggingWithContext(_a0 aws.Context, _a1 *s3.DeleteO
 
 // DeleteObjectWithContext provides a mock function with given fields: _a0, _a1, _a2
 func (_m *S3API) DeleteObjectWithContext(_a0 aws.Context, _a1 *s3.DeleteObjectInput, _a2 ...request.Option) (*s3.DeleteObjectOutput, error) {
-	ret := _m.Called(_a0, _a1, _a2)
+	_va := make([]interface{}, len(_a2))
+	for _i := range _a2 {
+		_va[_i] = _a2[_i]
+	}
+	var _ca []interface{}
+	_ca = append(_ca, _a0, _a1)
+	_ca = append(_ca, _va...)
+	ret := _m.Called(_ca...)
 
 	var r0 *s3.DeleteObjectOutput
 	if rf, ok := ret.Get(0).(func(aws.Context, *s3.DeleteObjectInput, ...request.Option) *s3.DeleteObjectOutput); ok {
@@ -1268,7 +1466,14 @@ func (_m *S3API) DeleteObjectsRequest(_a0 *s3.DeleteObjectsInput) (*request.Requ
 
 // DeleteObjectsWithContext provides a mock function with given fields: _a0, _a1, _a2
 func (_m *S3API) DeleteObjectsWithContext(_a0 aws.Context, _a1 *s3.DeleteObjectsInput, _a2 ...request.Option) (*s3.DeleteObjectsOutput, error) {
-	ret := _m.Called(_a0, _a1, _a2)
+	_va := make([]interface{}, len(_a2))
+	for _i := range _a2 {
+		_va[_i] = _a2[_i]
+	}
+	var _ca []interface{}
+	_ca = append(_ca, _a0, _a1)
+	_ca = append(_ca, _va...)
+	ret := _m.Called(_ca...)
 
 	var r0 *s3.DeleteObjectsOutput
 	if rf, ok := ret.Get(0).(func(aws.Context, *s3.DeleteObjectsInput, ...request.Option) *s3.DeleteObjectsOutput); ok {
@@ -1289,6 +1494,84 @@ func (_m *S3API) DeleteObjectsWithContext(_a0 aws.Context, _a1 *s3.DeleteObjects
 	return r0, r1
 }
 
+// DeletePublicAccessBlock provides a mock function with given fields: _a0
+func (_m *S3API) DeletePublicAccessBlock(_a0 *s3.DeletePublicAccessBlockInput) (*s3.DeletePublicAccessBlockOutput, error) {
+	ret := _m.Called(_a0)
+
+	var r0 *s3.DeletePublicAccessBlockOutput
+	if rf, ok := ret.Get(0).(func(*s3.DeletePublicAccessBlockInput) *s3.DeletePublicAccessBlockOutput); ok {
+		r0 = rf(_a0)
+	} else {
+		if ret.Get(0) != nil {
+			r0 = ret.Get(0).(*s3.DeletePublicAccessBlockOutput)
+		}
+	}
+
+	var r1 error
+	if rf, ok := ret.Get(1).(func(*s3.DeletePublicAccessBlockInput) error); ok {
+		r1 = rf(_a0)
+	} else {
+		r1 = ret.Error(1)
+	}
+
+	return r0, r1
+}
+
+// DeletePublicAccessBlockRequest provides a mock function with given fields: _a0
+func (_m *S3API) DeletePublicAccessBlockRequest(_a0 *s3.DeletePublicAccessBlockInput) (*request.Request, *s3.DeletePublicAccessBlockOutput) {
+	ret := _m.Called(_a0)
+
+	var r0 *request.Request
+	if rf, ok := ret.Get(0).(func(*s3.DeletePublicAccessBlockInput) *request.Request); ok {
+		r0 = rf(_a0)
+	} else {
+		if ret.Get(0) != nil {
+			r0 = ret.Get(0).(*request.Request)
+		}
+	}
+
+	var r1 *s3.DeletePublicAccessBlockOutput
+	if rf, ok := ret.Get(1).(func(*s3.DeletePublicAccessBlockInput) *s3.DeletePublicAccessBlockOutput); ok {
+		r1 = rf(_a0)
+	} else {
+		if ret.Get(1) != nil {
+			r1 = ret.Get(1).(*s3.DeletePublicAccessBlockOutput)
+		}
+	}
+
+	return r0, r1
+}
+
+// DeletePublicAccessBlockWithContext provides a mock function with given fields: _a0, _a1, _a2
+func (_m *S3API) DeletePublicAccessBlockWithContext(_a0 aws.Context, _a1 *s3.DeletePublicAccessBlockInput, _a2 ...request.Option) (*s3.DeletePublicAccessBlockOutput, error) {
+	_va := make([]interface{}, len(_a2))
+	for _i := range _a2 {
+		_va[_i] = _a2[_i]
+	}
+	var _ca []interface{}
+	_ca = append(_ca, _a0, _a1)
+	_ca = append(_ca, _va...)
+	ret := _m.Called(_ca...)
+
+	var r0 *s3.DeletePublicAccessBlockOutput
+	if rf, ok := ret.Get(0).(func(aws.Context, *s3.DeletePublicAccessBlockInput, ...request.Option) *s3.DeletePublicAccessBlockOutput); ok {
+		r0 = rf(_a0, _a1, _a2...)
+	} else {
+		if ret.Get(0) != nil {
+			r0 = ret.Get(0).(*s3.DeletePublicAccessBlockOutput)
+		}
+	}
+
+	var r1 error
+	if rf, ok := ret.Get(1).(func(aws.Context, *s3.DeletePublicAccessBlockInput, ...request.Option) error); ok {
+		r1 = rf(_a0, _a1, _a2...)
+	} else {
+		r1 = ret.Error(1)
+	}
+
+	return r0, r1
+}
+
 // GetBucketAccelerateConfiguration provides a mock function with given fields: _a0
 func (_m *S3API) GetBucketAccelerateConfiguration(_a0 *s3.GetBucketAccelerateConfigurationInput) (*s3.GetBucketAccelerateConfigurationOutput, error) {
 	ret := _m.Called(_a0)
@@ -1339,7 +1622,14 @@ func (_m *S3API) GetBucketAccelerateConfigurationRequest(_a0 *s3.GetBucketAccele
 
 // GetBucketAccelerateConfigurationWithContext provides a mock function with given fields: _a0, _a1, _a2
 func (_m *S3API) GetBucketAccelerateConfigurationWithContext(_a0 aws.Context, _a1 *s3.GetBucketAccelerateConfigurationInput, _a2 ...request.Option) (*s3.GetBucketAccelerateConfigurationOutput, error) {
-	ret := _m.Called(_a0, _a1, _a2)
+	_va := make([]interface{}, len(_a2))
+	for _i := range _a2 {
+		_va[_i] = _a2[_i]
+	}
+	var _ca []interface{}
+	_ca = append(_ca, _a0, _a1)
+	_ca = append(_ca, _va...)
+	ret := _m.Called(_ca...)
 
 	var r0 *s3.GetBucketAccelerateConfigurationOutput
 	if rf, ok := ret.Get(0).(func(aws.Context, *s3.GetBucketAccelerateConfigurationInput, ...request.Option) *s3.GetBucketAccelerateConfigurationOutput); ok {
@@ -1410,7 +1700,14 @@ func (_m *S3API) GetBucketAclRequest(_a0 *s3.GetBucketAclInput) (*request.Reques
 
 // GetBucketAclWithContext provides a mock function with given fields: _a0, _a1, _a2
 func (_m *S3API) GetBucketAclWithContext(_a0 aws.Context, _a1 *s3.GetBucketAclInput, _a2 ...request.Option) (*s3.GetBucketAclOutput, error) {
-	ret := _m.Called(_a0, _a1, _a2)
+	_va := make([]interface{}, len(_a2))
+	for _i := range _a2 {
+		_va[_i] = _a2[_i]
+	}
+	var _ca []interface{}
+	_ca = append(_ca, _a0, _a1)
+	_ca = append(_ca, _va...)
+	ret := _m.Called(_ca...)
 
 	var r0 *s3.GetBucketAclOutput
 	if rf, ok := ret.Get(0).(func(aws.Context, *s3.GetBucketAclInput, ...request.Option) *s3.GetBucketAclOutput); ok {
@@ -1481,7 +1778,14 @@ func (_m *S3API) GetBucketAnalyticsConfigurationRequest(_a0 *s3.GetBucketAnalyti
 
 // GetBucketAnalyticsConfigurationWithContext provides a mock function with given fields: _a0, _a1, _a2
 func (_m *S3API) GetBucketAnalyticsConfigurationWithContext(_a0 aws.Context, _a1 *s3.GetBucketAnalyticsConfigurationInput, _a2 ...request.Option) (*s3.GetBucketAnalyticsConfigurationOutput, error) {
-	ret := _m.Called(_a0, _a1, _a2)
+	_va := make([]interface{}, len(_a2))
+	for _i := range _a2 {
+		_va[_i] = _a2[_i]
+	}
+	var _ca []interface{}
+	_ca = append(_ca, _a0, _a1)
+	_ca = append(_ca, _va...)
+	ret := _m.Called(_ca...)
 
 	var r0 *s3.GetBucketAnalyticsConfigurationOutput
 	if rf, ok := ret.Get(0).(func(aws.Context, *s3.GetBucketAnalyticsConfigurationInput, ...request.Option) *s3.GetBucketAnalyticsConfigurationOutput); ok {
@@ -1552,7 +1856,14 @@ func (_m *S3API) GetBucketCorsRequest(_a0 *s3.GetBucketCorsInput) (*request.Requ
 
 // GetBucketCorsWithContext provides a mock function with given fields: _a0, _a1, _a2
 func (_m *S3API) GetBucketCorsWithContext(_a0 aws.Context, _a1 *s3.GetBucketCorsInput, _a2 ...request.Option) (*s3.GetBucketCorsOutput, error) {
-	ret := _m.Called(_a0, _a1, _a2)
+	_va := make([]interface{}, len(_a2))
+	for _i := range _a2 {
+		_va[_i] = _a2[_i]
+	}
+	var _ca []interface{}
+	_ca = append(_ca, _a0, _a1)
+	_ca = append(_ca, _va...)
+	ret := _m.Called(_ca...)
 
 	var r0 *s3.GetBucketCorsOutput
 	if rf, ok := ret.Get(0).(func(aws.Context, *s3.GetBucketCorsInput, ...request.Option) *s3.GetBucketCorsOutput); ok {
@@ -1573,6 +1884,84 @@ func (_m *S3API) GetBucketCorsWithContext(_a0 aws.Context, _a1 *s3.GetBucketCors
 	return r0, r1
 }
 
+// GetBucketEncryption provides a mock function with given fields: _a0
+func (_m *S3API) GetBucketEncryption(_a0 *s3.GetBucketEncryptionInput) (*s3.GetBucketEncryptionOutput, error) {
+	ret := _m.Called(_a0)
+
+	var r0 *s3.GetBucketEncryptionOutput
+	if rf, ok := ret.Get(0).(func(*s3.GetBucketEncryptionInput) *s3.GetBucketEncryptionOutput); ok {
+		r0 = rf(_a0)
+	} else {
+		if ret.Get(0) != nil {
+			r0 = ret.Get(0).(*s3.GetBucketEncryptionOutput)
+		}
+	}
+
+	var r1 error
+	if rf, ok := ret.Get(1).(func(*s3.GetBucketEncryptionInput) error); ok {
+		r1 = rf(_a0)
+	} else {
+		r1 = ret.Error(1)
+	}
+
+	return r0, r1
+}
+
+// GetBucketEncryptionRequest provides a mock function with given fields: _a0
+func (_m *S3API) GetBucketEncryptionRequest(_a0 *s3.GetBucketEncryptionInput) (*request.Request, *s3.GetBucketEncryptionOutput) {
+	ret := _m.Called(_a0)
+
+	var r0 *request.Request
+	if rf, ok := ret.Get(0).(func(*s3.GetBucketEncryptionInput) *request.Request); ok {
+		r0 = rf(_a0)
+	} else {
+		if ret.Get(0) != nil {
+			r0 = ret.Get(0).(*request.Request)
+		}
+	}
+
+	var r1 *s3.GetBucketEncryptionOutput
+	if rf, ok := ret.Get(1).(func(*s3.GetBucketEncryptionInput) *s3.GetBucketEncryptionOutput); ok {
+		r1 = rf(_a0)
+	} else {
+		if ret.Get(1) != nil {
+			r1 = ret.Get(1).(*s3.GetBucketEncryptionOutput)
+		}
+	}
+
+	return r0, r1
+}
+
+// GetBucketEncryptionWithContext provides a mock function with given fields: _a0, _a1, _a2
+func (_m *S3API) GetBucketEncryptionWithContext(_a0 aws.Context, _a1 *s3.GetBucketEncryptionInput, _a2 ...request.Option) (*s3.GetBucketEncryptionOutput, error) {
+	_va := make([]interface{}, len(_a2))
+	for _i := range _a2 {
+		_va[_i] = _a2[_i]
+	}
+	var _ca []interface{}
+	_ca = append(_ca, _a0, _a1)
+	_ca = append(_ca, _va...)
+	ret := _m.Called(_ca...)
+
+	var r0 *s3.GetBucketEncryptionOutput
+	if rf, ok := ret.Get(0).(func(aws.Context, *s3.GetBucketEncryptionInput, ...request.Option) *s3.GetBucketEncryptionOutput); ok {
+		r0 = rf(_a0, _a1, _a2...)
+	} else {
+		if ret.Get(0) != nil {
+			r0 = ret.Get(0).(*s3.GetBucketEncryptionOutput)
+		}
+	}
+
+	var r1 error
+	if rf, ok := ret.Get(1).(func(aws.Context, *s3.GetBucketEncryptionInput, ...request.Option) error); ok {
+		r1 = rf(_a0, _a1, _a2...)
+	} else {
+		r1 = ret.Error(1)
+	}
+
+	return r0, r1
+}
+
 // GetBucketInventoryConfiguration provides a mock function with given fields: _a0
 func (_m *S3API) GetBucketInventoryConfiguration(_a0 *s3.GetBucketInventoryConfigurationInput) (*s3.GetBucketInventoryConfigurationOutput, error) {
 	ret := _m.Called(_a0)
@@ -1623,7 +2012,14 @@ func (_m *S3API) GetBucketInventoryConfigurationRequest(_a0 *s3.GetBucketInvento
 
 // GetBucketInventoryConfigurationWithContext provides a mock function with given fields: _a0, _a1, _a2
 func (_m *S3API) GetBucketInventoryConfigurationWithContext(_a0 aws.Context, _a1 *s3.GetBucketInventoryConfigurationInput, _a2 ...request.Option) (*s3.GetBucketInventoryConfigurationOutput, error) {
-	ret := _m.Called(_a0, _a1, _a2)
+	_va := make([]interface{}, len(_a2))
+	for _i := range _a2 {
+		_va[_i] = _a2[_i]
+	}
+	var _ca []interface{}
+	_ca = append(_ca, _a0, _a1)
+	_ca = append(_ca, _va...)
+	ret := _m.Called(_ca...)
 
 	var r0 *s3.GetBucketInventoryConfigurationOutput
 	if rf, ok := ret.Get(0).(func(aws.Context, *s3.GetBucketInventoryConfigurationInput, ...request.Option) *s3.GetBucketInventoryConfigurationOutput); ok {
@@ -1717,7 +2113,14 @@ func (_m *S3API) GetBucketLifecycleConfigurationRequest(_a0 *s3.GetBucketLifecyc
 
 // GetBucketLifecycleConfigurationWithContext provides a mock function with given fields: _a0, _a1, _a2
 func (_m *S3API) GetBucketLifecycleConfigurationWithContext(_a0 aws.Context, _a1 *s3.GetBucketLifecycleConfigurationInput, _a2 ...request.Option) (*s3.GetBucketLifecycleConfigurationOutput, error) {
-	ret := _m.Called(_a0, _a1, _a2)
+	_va := make([]interface{}, len(_a2))
+	for _i := range _a2 {
+		_va[_i] = _a2[_i]
+	}
+	var _ca []interface{}
+	_ca = append(_ca, _a0, _a1)
+	_ca = append(_ca, _va...)
+	ret := _m.Called(_ca...)
 
 	var r0 *s3.GetBucketLifecycleConfigurationOutput
 	if rf, ok := ret.Get(0).(func(aws.Context, *s3.GetBucketLifecycleConfigurationInput, ...request.Option) *s3.GetBucketLifecycleConfigurationOutput); ok {
@@ -1765,7 +2168,14 @@ func (_m *S3API) GetBucketLifecycleRequest(_a0 *s3.GetBucketLifecycleInput) (*re
 
 // GetBucketLifecycleWithContext provides a mock function with given fields: _a0, _a1, _a2
 func (_m *S3API) GetBucketLifecycleWithContext(_a0 aws.Context, _a1 *s3.GetBucketLifecycleInput, _a2 ...request.Option) (*s3.GetBucketLifecycleOutput, error) {
-	ret := _m.Called(_a0, _a1, _a2)
+	_va := make([]interface{}, len(_a2))
+	for _i := range _a2 {
+		_va[_i] = _a2[_i]
+	}
+	var _ca []interface{}
+	_ca = append(_ca, _a0, _a1)
+	_ca = append(_ca, _va...)
+	ret := _m.Called(_ca...)
 
 	var r0 *s3.GetBucketLifecycleOutput
 	if rf, ok := ret.Get(0).(func(aws.Context, *s3.GetBucketLifecycleInput, ...request.Option) *s3.GetBucketLifecycleOutput); ok {
@@ -1836,7 +2246,14 @@ func (_m *S3API) GetBucketLocationRequest(_a0 *s3.GetBucketLocationInput) (*requ
 
 // GetBucketLocationWithContext provides a mock function with given fields: _a0, _a1, _a2
 func (_m *S3API) GetBucketLocationWithContext(_a0 aws.Context, _a1 *s3.GetBucketLocationInput, _a2 ...request.Option) (*s3.GetBucketLocationOutput, error) {
-	ret := _m.Called(_a0, _a1, _a2)
+	_va := make([]interface{}, len(_a2))
+	for _i := range _a2 {
+		_va[_i] = _a2[_i]
+	}
+	var _ca []interface{}
+	_ca = append(_ca, _a0, _a1)
+	_ca = append(_ca, _va...)
+	ret := _m.Called(_ca...)
 
 	var r0 *s3.GetBucketLocationOutput
 	if rf, ok := ret.Get(0).(func(aws.Context, *s3.GetBucketLocationInput, ...request.Option) *s3.GetBucketLocationOutput); ok {
@@ -1907,7 +2324,14 @@ func (_m *S3API) GetBucketLoggingRequest(_a0 *s3.GetBucketLoggingInput) (*reques
 
 // GetBucketLoggingWithContext provides a mock function with given fields: _a0, _a1, _a2
 func (_m *S3API) GetBucketLoggingWithContext(_a0 aws.Context, _a1 *s3.GetBucketLoggingInput, _a2 ...request.Option) (*s3.GetBucketLoggingOutput, error) {
-	ret := _m.Called(_a0, _a1, _a2)
+	_va := make([]interface{}, len(_a2))
+	for _i := range _a2 {
+		_va[_i] = _a2[_i]
+	}
+	var _ca []interface{}
+	_ca = append(_ca, _a0, _a1)
+	_ca = append(_ca, _va...)
+	ret := _m.Called(_ca...)
 
 	var r0 *s3.GetBucketLoggingOutput
 	if rf, ok := ret.Get(0).(func(aws.Context, *s3.GetBucketLoggingInput, ...request.Option) *s3.GetBucketLoggingOutput); ok {
@@ -1978,7 +2402,14 @@ func (_m *S3API) GetBucketMetricsConfigurationRequest(_a0 *s3.GetBucketMetricsCo
 
 // GetBucketMetricsConfigurationWithContext provides a mock function with given fields: _a0, _a1, _a2
 func (_m *S3API) GetBucketMetricsConfigurationWithContext(_a0 aws.Context, _a1 *s3.GetBucketMetricsConfigurationInput, _a2 ...request.Option) (*s3.GetBucketMetricsConfigurationOutput, error) {
-	ret := _m.Called(_a0, _a1, _a2)
+	_va := make([]interface{}, len(_a2))
+	for _i := range _a2 {
+		_va[_i] = _a2[_i]
+	}
+	var _ca []interface{}
+	_ca = append(_ca, _a0, _a1)
+	_ca = append(_ca, _va...)
+	ret := _m.Called(_ca...)
var r0 *s3.GetBucketMetricsConfigurationOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.GetBucketMetricsConfigurationInput, ...request.Option) *s3.GetBucketMetricsConfigurationOutput); ok { @@ -2072,7 +2503,14 @@ func (_m *S3API) GetBucketNotificationConfigurationRequest(_a0 *s3.GetBucketNoti // GetBucketNotificationConfigurationWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) GetBucketNotificationConfigurationWithContext(_a0 aws.Context, _a1 *s3.GetBucketNotificationConfigurationRequest, _a2 ...request.Option) (*s3.NotificationConfiguration, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) var r0 *s3.NotificationConfiguration if rf, ok := ret.Get(0).(func(aws.Context, *s3.GetBucketNotificationConfigurationRequest, ...request.Option) *s3.NotificationConfiguration); ok { @@ -2120,7 +2558,14 @@ func (_m *S3API) GetBucketNotificationRequest(_a0 *s3.GetBucketNotificationConfi // GetBucketNotificationWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) GetBucketNotificationWithContext(_a0 aws.Context, _a1 *s3.GetBucketNotificationConfigurationRequest, _a2 ...request.Option) (*s3.NotificationConfigurationDeprecated, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) var r0 *s3.NotificationConfigurationDeprecated if rf, ok := ret.Get(0).(func(aws.Context, *s3.GetBucketNotificationConfigurationRequest, ...request.Option) *s3.NotificationConfigurationDeprecated); ok { @@ -2189,9 +2634,94 @@ func (_m *S3API) GetBucketPolicyRequest(_a0 *s3.GetBucketPolicyInput) (*request. 
return r0, r1 } +// GetBucketPolicyStatus provides a mock function with given fields: _a0 +func (_m *S3API) GetBucketPolicyStatus(_a0 *s3.GetBucketPolicyStatusInput) (*s3.GetBucketPolicyStatusOutput, error) { + ret := _m.Called(_a0) + + var r0 *s3.GetBucketPolicyStatusOutput + if rf, ok := ret.Get(0).(func(*s3.GetBucketPolicyStatusInput) *s3.GetBucketPolicyStatusOutput); ok { + r0 = rf(_a0) + } else { + if ret.Get(0) != nil { + r0 = ret.Get(0).(*s3.GetBucketPolicyStatusOutput) + } + } + + var r1 error + if rf, ok := ret.Get(1).(func(*s3.GetBucketPolicyStatusInput) error); ok { + r1 = rf(_a0) + } else { + r1 = ret.Error(1) + } + + return r0, r1 +} + +// GetBucketPolicyStatusRequest provides a mock function with given fields: _a0 +func (_m *S3API) GetBucketPolicyStatusRequest(_a0 *s3.GetBucketPolicyStatusInput) (*request.Request, *s3.GetBucketPolicyStatusOutput) { + ret := _m.Called(_a0) + + var r0 *request.Request + if rf, ok := ret.Get(0).(func(*s3.GetBucketPolicyStatusInput) *request.Request); ok { + r0 = rf(_a0) + } else { + if ret.Get(0) != nil { + r0 = ret.Get(0).(*request.Request) + } + } + + var r1 *s3.GetBucketPolicyStatusOutput + if rf, ok := ret.Get(1).(func(*s3.GetBucketPolicyStatusInput) *s3.GetBucketPolicyStatusOutput); ok { + r1 = rf(_a0) + } else { + if ret.Get(1) != nil { + r1 = ret.Get(1).(*s3.GetBucketPolicyStatusOutput) + } + } + + return r0, r1 +} + +// GetBucketPolicyStatusWithContext provides a mock function with given fields: _a0, _a1, _a2 +func (_m *S3API) GetBucketPolicyStatusWithContext(_a0 aws.Context, _a1 *s3.GetBucketPolicyStatusInput, _a2 ...request.Option) (*s3.GetBucketPolicyStatusOutput, error) { + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) 
+ + var r0 *s3.GetBucketPolicyStatusOutput + if rf, ok := ret.Get(0).(func(aws.Context, *s3.GetBucketPolicyStatusInput, ...request.Option) *s3.GetBucketPolicyStatusOutput); ok { + r0 = rf(_a0, _a1, _a2...) + } else { + if ret.Get(0) != nil { + r0 = ret.Get(0).(*s3.GetBucketPolicyStatusOutput) + } + } + + var r1 error + if rf, ok := ret.Get(1).(func(aws.Context, *s3.GetBucketPolicyStatusInput, ...request.Option) error); ok { + r1 = rf(_a0, _a1, _a2...) + } else { + r1 = ret.Error(1) + } + + return r0, r1 +} + // GetBucketPolicyWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) GetBucketPolicyWithContext(_a0 aws.Context, _a1 *s3.GetBucketPolicyInput, _a2 ...request.Option) (*s3.GetBucketPolicyOutput, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) var r0 *s3.GetBucketPolicyOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.GetBucketPolicyInput, ...request.Option) *s3.GetBucketPolicyOutput); ok { @@ -2262,7 +2792,14 @@ func (_m *S3API) GetBucketReplicationRequest(_a0 *s3.GetBucketReplicationInput) // GetBucketReplicationWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) GetBucketReplicationWithContext(_a0 aws.Context, _a1 *s3.GetBucketReplicationInput, _a2 ...request.Option) (*s3.GetBucketReplicationOutput, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) 
var r0 *s3.GetBucketReplicationOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.GetBucketReplicationInput, ...request.Option) *s3.GetBucketReplicationOutput); ok { @@ -2333,7 +2870,14 @@ func (_m *S3API) GetBucketRequestPaymentRequest(_a0 *s3.GetBucketRequestPaymentI // GetBucketRequestPaymentWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) GetBucketRequestPaymentWithContext(_a0 aws.Context, _a1 *s3.GetBucketRequestPaymentInput, _a2 ...request.Option) (*s3.GetBucketRequestPaymentOutput, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) var r0 *s3.GetBucketRequestPaymentOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.GetBucketRequestPaymentInput, ...request.Option) *s3.GetBucketRequestPaymentOutput); ok { @@ -2404,7 +2948,14 @@ func (_m *S3API) GetBucketTaggingRequest(_a0 *s3.GetBucketTaggingInput) (*reques // GetBucketTaggingWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) GetBucketTaggingWithContext(_a0 aws.Context, _a1 *s3.GetBucketTaggingInput, _a2 ...request.Option) (*s3.GetBucketTaggingOutput, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) 
var r0 *s3.GetBucketTaggingOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.GetBucketTaggingInput, ...request.Option) *s3.GetBucketTaggingOutput); ok { @@ -2475,7 +3026,14 @@ func (_m *S3API) GetBucketVersioningRequest(_a0 *s3.GetBucketVersioningInput) (* // GetBucketVersioningWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) GetBucketVersioningWithContext(_a0 aws.Context, _a1 *s3.GetBucketVersioningInput, _a2 ...request.Option) (*s3.GetBucketVersioningOutput, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) var r0 *s3.GetBucketVersioningOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.GetBucketVersioningInput, ...request.Option) *s3.GetBucketVersioningOutput); ok { @@ -2546,7 +3104,14 @@ func (_m *S3API) GetBucketWebsiteRequest(_a0 *s3.GetBucketWebsiteInput) (*reques // GetBucketWebsiteWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) GetBucketWebsiteWithContext(_a0 aws.Context, _a1 *s3.GetBucketWebsiteInput, _a2 ...request.Option) (*s3.GetBucketWebsiteOutput, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) 
var r0 *s3.GetBucketWebsiteOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.GetBucketWebsiteInput, ...request.Option) *s3.GetBucketWebsiteOutput); ok { @@ -2640,7 +3205,14 @@ func (_m *S3API) GetObjectAclRequest(_a0 *s3.GetObjectAclInput) (*request.Reques // GetObjectAclWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) GetObjectAclWithContext(_a0 aws.Context, _a1 *s3.GetObjectAclInput, _a2 ...request.Option) (*s3.GetObjectAclOutput, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) var r0 *s3.GetObjectAclOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.GetObjectAclInput, ...request.Option) *s3.GetObjectAclOutput); ok { @@ -2661,20 +3233,176 @@ func (_m *S3API) GetObjectAclWithContext(_a0 aws.Context, _a1 *s3.GetObjectAclIn return r0, r1 } -// GetObjectRequest provides a mock function with given fields: _a0 -func (_m *S3API) GetObjectRequest(_a0 *s3.GetObjectInput) (*request.Request, *s3.GetObjectOutput) { +// GetObjectLegalHold provides a mock function with given fields: _a0 +func (_m *S3API) GetObjectLegalHold(_a0 *s3.GetObjectLegalHoldInput) (*s3.GetObjectLegalHoldOutput, error) { ret := _m.Called(_a0) - var r0 *request.Request - if rf, ok := ret.Get(0).(func(*s3.GetObjectInput) *request.Request); ok { + var r0 *s3.GetObjectLegalHoldOutput + if rf, ok := ret.Get(0).(func(*s3.GetObjectLegalHoldInput) *s3.GetObjectLegalHoldOutput); ok { r0 = rf(_a0) } else { if ret.Get(0) != nil { - r0 = ret.Get(0).(*request.Request) + r0 = ret.Get(0).(*s3.GetObjectLegalHoldOutput) } } - var r1 *s3.GetObjectOutput + var r1 error + if rf, ok := ret.Get(1).(func(*s3.GetObjectLegalHoldInput) error); ok { + r1 = rf(_a0) + } else { + r1 = ret.Error(1) + } + + return r0, r1 +} + +// GetObjectLegalHoldRequest provides a mock function with given fields: _a0 
+func (_m *S3API) GetObjectLegalHoldRequest(_a0 *s3.GetObjectLegalHoldInput) (*request.Request, *s3.GetObjectLegalHoldOutput) { + ret := _m.Called(_a0) + + var r0 *request.Request + if rf, ok := ret.Get(0).(func(*s3.GetObjectLegalHoldInput) *request.Request); ok { + r0 = rf(_a0) + } else { + if ret.Get(0) != nil { + r0 = ret.Get(0).(*request.Request) + } + } + + var r1 *s3.GetObjectLegalHoldOutput + if rf, ok := ret.Get(1).(func(*s3.GetObjectLegalHoldInput) *s3.GetObjectLegalHoldOutput); ok { + r1 = rf(_a0) + } else { + if ret.Get(1) != nil { + r1 = ret.Get(1).(*s3.GetObjectLegalHoldOutput) + } + } + + return r0, r1 +} + +// GetObjectLegalHoldWithContext provides a mock function with given fields: _a0, _a1, _a2 +func (_m *S3API) GetObjectLegalHoldWithContext(_a0 aws.Context, _a1 *s3.GetObjectLegalHoldInput, _a2 ...request.Option) (*s3.GetObjectLegalHoldOutput, error) { + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) + + var r0 *s3.GetObjectLegalHoldOutput + if rf, ok := ret.Get(0).(func(aws.Context, *s3.GetObjectLegalHoldInput, ...request.Option) *s3.GetObjectLegalHoldOutput); ok { + r0 = rf(_a0, _a1, _a2...) + } else { + if ret.Get(0) != nil { + r0 = ret.Get(0).(*s3.GetObjectLegalHoldOutput) + } + } + + var r1 error + if rf, ok := ret.Get(1).(func(aws.Context, *s3.GetObjectLegalHoldInput, ...request.Option) error); ok { + r1 = rf(_a0, _a1, _a2...) 
+ } else { + r1 = ret.Error(1) + } + + return r0, r1 +} + +// GetObjectLockConfiguration provides a mock function with given fields: _a0 +func (_m *S3API) GetObjectLockConfiguration(_a0 *s3.GetObjectLockConfigurationInput) (*s3.GetObjectLockConfigurationOutput, error) { + ret := _m.Called(_a0) + + var r0 *s3.GetObjectLockConfigurationOutput + if rf, ok := ret.Get(0).(func(*s3.GetObjectLockConfigurationInput) *s3.GetObjectLockConfigurationOutput); ok { + r0 = rf(_a0) + } else { + if ret.Get(0) != nil { + r0 = ret.Get(0).(*s3.GetObjectLockConfigurationOutput) + } + } + + var r1 error + if rf, ok := ret.Get(1).(func(*s3.GetObjectLockConfigurationInput) error); ok { + r1 = rf(_a0) + } else { + r1 = ret.Error(1) + } + + return r0, r1 +} + +// GetObjectLockConfigurationRequest provides a mock function with given fields: _a0 +func (_m *S3API) GetObjectLockConfigurationRequest(_a0 *s3.GetObjectLockConfigurationInput) (*request.Request, *s3.GetObjectLockConfigurationOutput) { + ret := _m.Called(_a0) + + var r0 *request.Request + if rf, ok := ret.Get(0).(func(*s3.GetObjectLockConfigurationInput) *request.Request); ok { + r0 = rf(_a0) + } else { + if ret.Get(0) != nil { + r0 = ret.Get(0).(*request.Request) + } + } + + var r1 *s3.GetObjectLockConfigurationOutput + if rf, ok := ret.Get(1).(func(*s3.GetObjectLockConfigurationInput) *s3.GetObjectLockConfigurationOutput); ok { + r1 = rf(_a0) + } else { + if ret.Get(1) != nil { + r1 = ret.Get(1).(*s3.GetObjectLockConfigurationOutput) + } + } + + return r0, r1 +} + +// GetObjectLockConfigurationWithContext provides a mock function with given fields: _a0, _a1, _a2 +func (_m *S3API) GetObjectLockConfigurationWithContext(_a0 aws.Context, _a1 *s3.GetObjectLockConfigurationInput, _a2 ...request.Option) (*s3.GetObjectLockConfigurationOutput, error) { + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) 
+ ret := _m.Called(_ca...) + + var r0 *s3.GetObjectLockConfigurationOutput + if rf, ok := ret.Get(0).(func(aws.Context, *s3.GetObjectLockConfigurationInput, ...request.Option) *s3.GetObjectLockConfigurationOutput); ok { + r0 = rf(_a0, _a1, _a2...) + } else { + if ret.Get(0) != nil { + r0 = ret.Get(0).(*s3.GetObjectLockConfigurationOutput) + } + } + + var r1 error + if rf, ok := ret.Get(1).(func(aws.Context, *s3.GetObjectLockConfigurationInput, ...request.Option) error); ok { + r1 = rf(_a0, _a1, _a2...) + } else { + r1 = ret.Error(1) + } + + return r0, r1 +} + +// GetObjectRequest provides a mock function with given fields: _a0 +func (_m *S3API) GetObjectRequest(_a0 *s3.GetObjectInput) (*request.Request, *s3.GetObjectOutput) { + ret := _m.Called(_a0) + + var r0 *request.Request + if rf, ok := ret.Get(0).(func(*s3.GetObjectInput) *request.Request); ok { + r0 = rf(_a0) + } else { + if ret.Get(0) != nil { + r0 = ret.Get(0).(*request.Request) + } + } + + var r1 *s3.GetObjectOutput if rf, ok := ret.Get(1).(func(*s3.GetObjectInput) *s3.GetObjectOutput); ok { r1 = rf(_a0) } else { @@ -2686,6 +3414,84 @@ func (_m *S3API) GetObjectRequest(_a0 *s3.GetObjectInput) (*request.Request, *s3 return r0, r1 } +// GetObjectRetention provides a mock function with given fields: _a0 +func (_m *S3API) GetObjectRetention(_a0 *s3.GetObjectRetentionInput) (*s3.GetObjectRetentionOutput, error) { + ret := _m.Called(_a0) + + var r0 *s3.GetObjectRetentionOutput + if rf, ok := ret.Get(0).(func(*s3.GetObjectRetentionInput) *s3.GetObjectRetentionOutput); ok { + r0 = rf(_a0) + } else { + if ret.Get(0) != nil { + r0 = ret.Get(0).(*s3.GetObjectRetentionOutput) + } + } + + var r1 error + if rf, ok := ret.Get(1).(func(*s3.GetObjectRetentionInput) error); ok { + r1 = rf(_a0) + } else { + r1 = ret.Error(1) + } + + return r0, r1 +} + +// GetObjectRetentionRequest provides a mock function with given fields: _a0 +func (_m *S3API) GetObjectRetentionRequest(_a0 *s3.GetObjectRetentionInput) (*request.Request, 
*s3.GetObjectRetentionOutput) { + ret := _m.Called(_a0) + + var r0 *request.Request + if rf, ok := ret.Get(0).(func(*s3.GetObjectRetentionInput) *request.Request); ok { + r0 = rf(_a0) + } else { + if ret.Get(0) != nil { + r0 = ret.Get(0).(*request.Request) + } + } + + var r1 *s3.GetObjectRetentionOutput + if rf, ok := ret.Get(1).(func(*s3.GetObjectRetentionInput) *s3.GetObjectRetentionOutput); ok { + r1 = rf(_a0) + } else { + if ret.Get(1) != nil { + r1 = ret.Get(1).(*s3.GetObjectRetentionOutput) + } + } + + return r0, r1 +} + +// GetObjectRetentionWithContext provides a mock function with given fields: _a0, _a1, _a2 +func (_m *S3API) GetObjectRetentionWithContext(_a0 aws.Context, _a1 *s3.GetObjectRetentionInput, _a2 ...request.Option) (*s3.GetObjectRetentionOutput, error) { + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) + + var r0 *s3.GetObjectRetentionOutput + if rf, ok := ret.Get(0).(func(aws.Context, *s3.GetObjectRetentionInput, ...request.Option) *s3.GetObjectRetentionOutput); ok { + r0 = rf(_a0, _a1, _a2...) + } else { + if ret.Get(0) != nil { + r0 = ret.Get(0).(*s3.GetObjectRetentionOutput) + } + } + + var r1 error + if rf, ok := ret.Get(1).(func(aws.Context, *s3.GetObjectRetentionInput, ...request.Option) error); ok { + r1 = rf(_a0, _a1, _a2...) 
+ } else { + r1 = ret.Error(1) + } + + return r0, r1 +} + // GetObjectTagging provides a mock function with given fields: _a0 func (_m *S3API) GetObjectTagging(_a0 *s3.GetObjectTaggingInput) (*s3.GetObjectTaggingOutput, error) { ret := _m.Called(_a0) @@ -2736,7 +3542,14 @@ func (_m *S3API) GetObjectTaggingRequest(_a0 *s3.GetObjectTaggingInput) (*reques // GetObjectTaggingWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) GetObjectTaggingWithContext(_a0 aws.Context, _a1 *s3.GetObjectTaggingInput, _a2 ...request.Option) (*s3.GetObjectTaggingOutput, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) var r0 *s3.GetObjectTaggingOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.GetObjectTaggingInput, ...request.Option) *s3.GetObjectTaggingOutput); ok { @@ -2807,7 +3620,14 @@ func (_m *S3API) GetObjectTorrentRequest(_a0 *s3.GetObjectTorrentInput) (*reques // GetObjectTorrentWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) GetObjectTorrentWithContext(_a0 aws.Context, _a1 *s3.GetObjectTorrentInput, _a2 ...request.Option) (*s3.GetObjectTorrentOutput, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) 
var r0 *s3.GetObjectTorrentOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.GetObjectTorrentInput, ...request.Option) *s3.GetObjectTorrentOutput); ok { @@ -2830,7 +3650,14 @@ func (_m *S3API) GetObjectTorrentWithContext(_a0 aws.Context, _a1 *s3.GetObjectT // GetObjectWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) GetObjectWithContext(_a0 aws.Context, _a1 *s3.GetObjectInput, _a2 ...request.Option) (*s3.GetObjectOutput, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) var r0 *s3.GetObjectOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.GetObjectInput, ...request.Option) *s3.GetObjectOutput); ok { @@ -2851,6 +3678,84 @@ func (_m *S3API) GetObjectWithContext(_a0 aws.Context, _a1 *s3.GetObjectInput, _ return r0, r1 } +// GetPublicAccessBlock provides a mock function with given fields: _a0 +func (_m *S3API) GetPublicAccessBlock(_a0 *s3.GetPublicAccessBlockInput) (*s3.GetPublicAccessBlockOutput, error) { + ret := _m.Called(_a0) + + var r0 *s3.GetPublicAccessBlockOutput + if rf, ok := ret.Get(0).(func(*s3.GetPublicAccessBlockInput) *s3.GetPublicAccessBlockOutput); ok { + r0 = rf(_a0) + } else { + if ret.Get(0) != nil { + r0 = ret.Get(0).(*s3.GetPublicAccessBlockOutput) + } + } + + var r1 error + if rf, ok := ret.Get(1).(func(*s3.GetPublicAccessBlockInput) error); ok { + r1 = rf(_a0) + } else { + r1 = ret.Error(1) + } + + return r0, r1 +} + +// GetPublicAccessBlockRequest provides a mock function with given fields: _a0 +func (_m *S3API) GetPublicAccessBlockRequest(_a0 *s3.GetPublicAccessBlockInput) (*request.Request, *s3.GetPublicAccessBlockOutput) { + ret := _m.Called(_a0) + + var r0 *request.Request + if rf, ok := ret.Get(0).(func(*s3.GetPublicAccessBlockInput) *request.Request); ok { + r0 = rf(_a0) + } else { + if ret.Get(0) != nil { + 
r0 = ret.Get(0).(*request.Request) + } + } + + var r1 *s3.GetPublicAccessBlockOutput + if rf, ok := ret.Get(1).(func(*s3.GetPublicAccessBlockInput) *s3.GetPublicAccessBlockOutput); ok { + r1 = rf(_a0) + } else { + if ret.Get(1) != nil { + r1 = ret.Get(1).(*s3.GetPublicAccessBlockOutput) + } + } + + return r0, r1 +} + +// GetPublicAccessBlockWithContext provides a mock function with given fields: _a0, _a1, _a2 +func (_m *S3API) GetPublicAccessBlockWithContext(_a0 aws.Context, _a1 *s3.GetPublicAccessBlockInput, _a2 ...request.Option) (*s3.GetPublicAccessBlockOutput, error) { + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) + + var r0 *s3.GetPublicAccessBlockOutput + if rf, ok := ret.Get(0).(func(aws.Context, *s3.GetPublicAccessBlockInput, ...request.Option) *s3.GetPublicAccessBlockOutput); ok { + r0 = rf(_a0, _a1, _a2...) + } else { + if ret.Get(0) != nil { + r0 = ret.Get(0).(*s3.GetPublicAccessBlockOutput) + } + } + + var r1 error + if rf, ok := ret.Get(1).(func(aws.Context, *s3.GetPublicAccessBlockInput, ...request.Option) error); ok { + r1 = rf(_a0, _a1, _a2...) + } else { + r1 = ret.Error(1) + } + + return r0, r1 +} + // HeadBucket provides a mock function with given fields: _a0 func (_m *S3API) HeadBucket(_a0 *s3.HeadBucketInput) (*s3.HeadBucketOutput, error) { ret := _m.Called(_a0) @@ -2901,7 +3806,14 @@ func (_m *S3API) HeadBucketRequest(_a0 *s3.HeadBucketInput) (*request.Request, * // HeadBucketWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) HeadBucketWithContext(_a0 aws.Context, _a1 *s3.HeadBucketInput, _a2 ...request.Option) (*s3.HeadBucketOutput, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) 
+ ret := _m.Called(_ca...) var r0 *s3.HeadBucketOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.HeadBucketInput, ...request.Option) *s3.HeadBucketOutput); ok { @@ -2972,7 +3884,14 @@ func (_m *S3API) HeadObjectRequest(_a0 *s3.HeadObjectInput) (*request.Request, * // HeadObjectWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) HeadObjectWithContext(_a0 aws.Context, _a1 *s3.HeadObjectInput, _a2 ...request.Option) (*s3.HeadObjectOutput, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) var r0 *s3.HeadObjectOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.HeadObjectInput, ...request.Option) *s3.HeadObjectOutput); ok { @@ -3043,7 +3962,14 @@ func (_m *S3API) ListBucketAnalyticsConfigurationsRequest(_a0 *s3.ListBucketAnal // ListBucketAnalyticsConfigurationsWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) ListBucketAnalyticsConfigurationsWithContext(_a0 aws.Context, _a1 *s3.ListBucketAnalyticsConfigurationsInput, _a2 ...request.Option) (*s3.ListBucketAnalyticsConfigurationsOutput, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) 
var r0 *s3.ListBucketAnalyticsConfigurationsOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.ListBucketAnalyticsConfigurationsInput, ...request.Option) *s3.ListBucketAnalyticsConfigurationsOutput); ok { @@ -3114,7 +4040,14 @@ func (_m *S3API) ListBucketInventoryConfigurationsRequest(_a0 *s3.ListBucketInve // ListBucketInventoryConfigurationsWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) ListBucketInventoryConfigurationsWithContext(_a0 aws.Context, _a1 *s3.ListBucketInventoryConfigurationsInput, _a2 ...request.Option) (*s3.ListBucketInventoryConfigurationsOutput, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) var r0 *s3.ListBucketInventoryConfigurationsOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.ListBucketInventoryConfigurationsInput, ...request.Option) *s3.ListBucketInventoryConfigurationsOutput); ok { @@ -3185,7 +4118,14 @@ func (_m *S3API) ListBucketMetricsConfigurationsRequest(_a0 *s3.ListBucketMetric // ListBucketMetricsConfigurationsWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) ListBucketMetricsConfigurationsWithContext(_a0 aws.Context, _a1 *s3.ListBucketMetricsConfigurationsInput, _a2 ...request.Option) (*s3.ListBucketMetricsConfigurationsOutput, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) 
var r0 *s3.ListBucketMetricsConfigurationsOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.ListBucketMetricsConfigurationsInput, ...request.Option) *s3.ListBucketMetricsConfigurationsOutput); ok { @@ -3256,7 +4196,14 @@ func (_m *S3API) ListBucketsRequest(_a0 *s3.ListBucketsInput) (*request.Request, // ListBucketsWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) ListBucketsWithContext(_a0 aws.Context, _a1 *s3.ListBucketsInput, _a2 ...request.Option) (*s3.ListBucketsOutput, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) var r0 *s3.ListBucketsOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.ListBucketsInput, ...request.Option) *s3.ListBucketsOutput); ok { @@ -3316,7 +4263,14 @@ func (_m *S3API) ListMultipartUploadsPages(_a0 *s3.ListMultipartUploadsInput, _a // ListMultipartUploadsPagesWithContext provides a mock function with given fields: _a0, _a1, _a2, _a3 func (_m *S3API) ListMultipartUploadsPagesWithContext(_a0 aws.Context, _a1 *s3.ListMultipartUploadsInput, _a2 func(*s3.ListMultipartUploadsOutput, bool) bool, _a3 ...request.Option) error { - ret := _m.Called(_a0, _a1, _a2, _a3) + _va := make([]interface{}, len(_a3)) + for _i := range _a3 { + _va[_i] = _a3[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1, _a2) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) 
var r0 error if rf, ok := ret.Get(0).(func(aws.Context, *s3.ListMultipartUploadsInput, func(*s3.ListMultipartUploadsOutput, bool) bool, ...request.Option) error); ok { @@ -3355,7 +4309,14 @@ func (_m *S3API) ListMultipartUploadsRequest(_a0 *s3.ListMultipartUploadsInput) // ListMultipartUploadsWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) ListMultipartUploadsWithContext(_a0 aws.Context, _a1 *s3.ListMultipartUploadsInput, _a2 ...request.Option) (*s3.ListMultipartUploadsOutput, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) var r0 *s3.ListMultipartUploadsOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.ListMultipartUploadsInput, ...request.Option) *s3.ListMultipartUploadsOutput); ok { @@ -3415,7 +4376,14 @@ func (_m *S3API) ListObjectVersionsPages(_a0 *s3.ListObjectVersionsInput, _a1 fu // ListObjectVersionsPagesWithContext provides a mock function with given fields: _a0, _a1, _a2, _a3 func (_m *S3API) ListObjectVersionsPagesWithContext(_a0 aws.Context, _a1 *s3.ListObjectVersionsInput, _a2 func(*s3.ListObjectVersionsOutput, bool) bool, _a3 ...request.Option) error { - ret := _m.Called(_a0, _a1, _a2, _a3) + _va := make([]interface{}, len(_a3)) + for _i := range _a3 { + _va[_i] = _a3[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1, _a2) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) 
var r0 error if rf, ok := ret.Get(0).(func(aws.Context, *s3.ListObjectVersionsInput, func(*s3.ListObjectVersionsOutput, bool) bool, ...request.Option) error); ok { @@ -3454,7 +4422,14 @@ func (_m *S3API) ListObjectVersionsRequest(_a0 *s3.ListObjectVersionsInput) (*re // ListObjectVersionsWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) ListObjectVersionsWithContext(_a0 aws.Context, _a1 *s3.ListObjectVersionsInput, _a2 ...request.Option) (*s3.ListObjectVersionsOutput, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) var r0 *s3.ListObjectVersionsOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.ListObjectVersionsInput, ...request.Option) *s3.ListObjectVersionsOutput); ok { @@ -3514,7 +4489,14 @@ func (_m *S3API) ListObjectsPages(_a0 *s3.ListObjectsInput, _a1 func(*s3.ListObj // ListObjectsPagesWithContext provides a mock function with given fields: _a0, _a1, _a2, _a3 func (_m *S3API) ListObjectsPagesWithContext(_a0 aws.Context, _a1 *s3.ListObjectsInput, _a2 func(*s3.ListObjectsOutput, bool) bool, _a3 ...request.Option) error { - ret := _m.Called(_a0, _a1, _a2, _a3) + _va := make([]interface{}, len(_a3)) + for _i := range _a3 { + _va[_i] = _a3[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1, _a2) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) 
var r0 error if rf, ok := ret.Get(0).(func(aws.Context, *s3.ListObjectsInput, func(*s3.ListObjectsOutput, bool) bool, ...request.Option) error); ok { @@ -3590,7 +4572,14 @@ func (_m *S3API) ListObjectsV2Pages(_a0 *s3.ListObjectsV2Input, _a1 func(*s3.Lis // ListObjectsV2PagesWithContext provides a mock function with given fields: _a0, _a1, _a2, _a3 func (_m *S3API) ListObjectsV2PagesWithContext(_a0 aws.Context, _a1 *s3.ListObjectsV2Input, _a2 func(*s3.ListObjectsV2Output, bool) bool, _a3 ...request.Option) error { - ret := _m.Called(_a0, _a1, _a2, _a3) + _va := make([]interface{}, len(_a3)) + for _i := range _a3 { + _va[_i] = _a3[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1, _a2) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) var r0 error if rf, ok := ret.Get(0).(func(aws.Context, *s3.ListObjectsV2Input, func(*s3.ListObjectsV2Output, bool) bool, ...request.Option) error); ok { @@ -3629,7 +4618,14 @@ func (_m *S3API) ListObjectsV2Request(_a0 *s3.ListObjectsV2Input) (*request.Requ // ListObjectsV2WithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) ListObjectsV2WithContext(_a0 aws.Context, _a1 *s3.ListObjectsV2Input, _a2 ...request.Option) (*s3.ListObjectsV2Output, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) 
var r0 *s3.ListObjectsV2Output if rf, ok := ret.Get(0).(func(aws.Context, *s3.ListObjectsV2Input, ...request.Option) *s3.ListObjectsV2Output); ok { @@ -3652,7 +4648,14 @@ func (_m *S3API) ListObjectsV2WithContext(_a0 aws.Context, _a1 *s3.ListObjectsV2 // ListObjectsWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) ListObjectsWithContext(_a0 aws.Context, _a1 *s3.ListObjectsInput, _a2 ...request.Option) (*s3.ListObjectsOutput, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) var r0 *s3.ListObjectsOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.ListObjectsInput, ...request.Option) *s3.ListObjectsOutput); ok { @@ -3712,7 +4715,14 @@ func (_m *S3API) ListPartsPages(_a0 *s3.ListPartsInput, _a1 func(*s3.ListPartsOu // ListPartsPagesWithContext provides a mock function with given fields: _a0, _a1, _a2, _a3 func (_m *S3API) ListPartsPagesWithContext(_a0 aws.Context, _a1 *s3.ListPartsInput, _a2 func(*s3.ListPartsOutput, bool) bool, _a3 ...request.Option) error { - ret := _m.Called(_a0, _a1, _a2, _a3) + _va := make([]interface{}, len(_a3)) + for _i := range _a3 { + _va[_i] = _a3[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1, _a2) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) 
var r0 error if rf, ok := ret.Get(0).(func(aws.Context, *s3.ListPartsInput, func(*s3.ListPartsOutput, bool) bool, ...request.Option) error); ok { @@ -3751,7 +4761,14 @@ func (_m *S3API) ListPartsRequest(_a0 *s3.ListPartsInput) (*request.Request, *s3 // ListPartsWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) ListPartsWithContext(_a0 aws.Context, _a1 *s3.ListPartsInput, _a2 ...request.Option) (*s3.ListPartsOutput, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) var r0 *s3.ListPartsOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.ListPartsInput, ...request.Option) *s3.ListPartsOutput); ok { @@ -3822,7 +4839,14 @@ func (_m *S3API) PutBucketAccelerateConfigurationRequest(_a0 *s3.PutBucketAccele // PutBucketAccelerateConfigurationWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) PutBucketAccelerateConfigurationWithContext(_a0 aws.Context, _a1 *s3.PutBucketAccelerateConfigurationInput, _a2 ...request.Option) (*s3.PutBucketAccelerateConfigurationOutput, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) 
var r0 *s3.PutBucketAccelerateConfigurationOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.PutBucketAccelerateConfigurationInput, ...request.Option) *s3.PutBucketAccelerateConfigurationOutput); ok { @@ -3893,7 +4917,14 @@ func (_m *S3API) PutBucketAclRequest(_a0 *s3.PutBucketAclInput) (*request.Reques // PutBucketAclWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) PutBucketAclWithContext(_a0 aws.Context, _a1 *s3.PutBucketAclInput, _a2 ...request.Option) (*s3.PutBucketAclOutput, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) var r0 *s3.PutBucketAclOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.PutBucketAclInput, ...request.Option) *s3.PutBucketAclOutput); ok { @@ -3964,7 +4995,14 @@ func (_m *S3API) PutBucketAnalyticsConfigurationRequest(_a0 *s3.PutBucketAnalyti // PutBucketAnalyticsConfigurationWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) PutBucketAnalyticsConfigurationWithContext(_a0 aws.Context, _a1 *s3.PutBucketAnalyticsConfigurationInput, _a2 ...request.Option) (*s3.PutBucketAnalyticsConfigurationOutput, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) 
var r0 *s3.PutBucketAnalyticsConfigurationOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.PutBucketAnalyticsConfigurationInput, ...request.Option) *s3.PutBucketAnalyticsConfigurationOutput); ok { @@ -4035,7 +5073,14 @@ func (_m *S3API) PutBucketCorsRequest(_a0 *s3.PutBucketCorsInput) (*request.Requ // PutBucketCorsWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) PutBucketCorsWithContext(_a0 aws.Context, _a1 *s3.PutBucketCorsInput, _a2 ...request.Option) (*s3.PutBucketCorsOutput, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) var r0 *s3.PutBucketCorsOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.PutBucketCorsInput, ...request.Option) *s3.PutBucketCorsOutput); ok { @@ -4056,21 +5101,21 @@ func (_m *S3API) PutBucketCorsWithContext(_a0 aws.Context, _a1 *s3.PutBucketCors return r0, r1 } -// PutBucketInventoryConfiguration provides a mock function with given fields: _a0 -func (_m *S3API) PutBucketInventoryConfiguration(_a0 *s3.PutBucketInventoryConfigurationInput) (*s3.PutBucketInventoryConfigurationOutput, error) { +// PutBucketEncryption provides a mock function with given fields: _a0 +func (_m *S3API) PutBucketEncryption(_a0 *s3.PutBucketEncryptionInput) (*s3.PutBucketEncryptionOutput, error) { ret := _m.Called(_a0) - var r0 *s3.PutBucketInventoryConfigurationOutput - if rf, ok := ret.Get(0).(func(*s3.PutBucketInventoryConfigurationInput) *s3.PutBucketInventoryConfigurationOutput); ok { + var r0 *s3.PutBucketEncryptionOutput + if rf, ok := ret.Get(0).(func(*s3.PutBucketEncryptionInput) *s3.PutBucketEncryptionOutput); ok { r0 = rf(_a0) } else { if ret.Get(0) != nil { - r0 = ret.Get(0).(*s3.PutBucketInventoryConfigurationOutput) + r0 = ret.Get(0).(*s3.PutBucketEncryptionOutput) } } var r1 error - if rf, ok := 
ret.Get(1).(func(*s3.PutBucketInventoryConfigurationInput) error); ok { + if rf, ok := ret.Get(1).(func(*s3.PutBucketEncryptionInput) error); ok { r1 = rf(_a0) } else { r1 = ret.Error(1) @@ -4079,12 +5124,12 @@ func (_m *S3API) PutBucketInventoryConfiguration(_a0 *s3.PutBucketInventoryConfi return r0, r1 } -// PutBucketInventoryConfigurationRequest provides a mock function with given fields: _a0 -func (_m *S3API) PutBucketInventoryConfigurationRequest(_a0 *s3.PutBucketInventoryConfigurationInput) (*request.Request, *s3.PutBucketInventoryConfigurationOutput) { +// PutBucketEncryptionRequest provides a mock function with given fields: _a0 +func (_m *S3API) PutBucketEncryptionRequest(_a0 *s3.PutBucketEncryptionInput) (*request.Request, *s3.PutBucketEncryptionOutput) { ret := _m.Called(_a0) var r0 *request.Request - if rf, ok := ret.Get(0).(func(*s3.PutBucketInventoryConfigurationInput) *request.Request); ok { + if rf, ok := ret.Get(0).(func(*s3.PutBucketEncryptionInput) *request.Request); ok { r0 = rf(_a0) } else { if ret.Get(0) != nil { @@ -4092,21 +5137,106 @@ func (_m *S3API) PutBucketInventoryConfigurationRequest(_a0 *s3.PutBucketInvento } } - var r1 *s3.PutBucketInventoryConfigurationOutput - if rf, ok := ret.Get(1).(func(*s3.PutBucketInventoryConfigurationInput) *s3.PutBucketInventoryConfigurationOutput); ok { + var r1 *s3.PutBucketEncryptionOutput + if rf, ok := ret.Get(1).(func(*s3.PutBucketEncryptionInput) *s3.PutBucketEncryptionOutput); ok { r1 = rf(_a0) } else { if ret.Get(1) != nil { - r1 = ret.Get(1).(*s3.PutBucketInventoryConfigurationOutput) + r1 = ret.Get(1).(*s3.PutBucketEncryptionOutput) } } return r0, r1 } -// PutBucketInventoryConfigurationWithContext provides a mock function with given fields: _a0, _a1, _a2 -func (_m *S3API) PutBucketInventoryConfigurationWithContext(_a0 aws.Context, _a1 *s3.PutBucketInventoryConfigurationInput, _a2 ...request.Option) (*s3.PutBucketInventoryConfigurationOutput, error) { - ret := _m.Called(_a0, _a1, _a2) +// 
PutBucketEncryptionWithContext provides a mock function with given fields: _a0, _a1, _a2 +func (_m *S3API) PutBucketEncryptionWithContext(_a0 aws.Context, _a1 *s3.PutBucketEncryptionInput, _a2 ...request.Option) (*s3.PutBucketEncryptionOutput, error) { + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) + + var r0 *s3.PutBucketEncryptionOutput + if rf, ok := ret.Get(0).(func(aws.Context, *s3.PutBucketEncryptionInput, ...request.Option) *s3.PutBucketEncryptionOutput); ok { + r0 = rf(_a0, _a1, _a2...) + } else { + if ret.Get(0) != nil { + r0 = ret.Get(0).(*s3.PutBucketEncryptionOutput) + } + } + + var r1 error + if rf, ok := ret.Get(1).(func(aws.Context, *s3.PutBucketEncryptionInput, ...request.Option) error); ok { + r1 = rf(_a0, _a1, _a2...) + } else { + r1 = ret.Error(1) + } + + return r0, r1 +} + +// PutBucketInventoryConfiguration provides a mock function with given fields: _a0 +func (_m *S3API) PutBucketInventoryConfiguration(_a0 *s3.PutBucketInventoryConfigurationInput) (*s3.PutBucketInventoryConfigurationOutput, error) { + ret := _m.Called(_a0) + + var r0 *s3.PutBucketInventoryConfigurationOutput + if rf, ok := ret.Get(0).(func(*s3.PutBucketInventoryConfigurationInput) *s3.PutBucketInventoryConfigurationOutput); ok { + r0 = rf(_a0) + } else { + if ret.Get(0) != nil { + r0 = ret.Get(0).(*s3.PutBucketInventoryConfigurationOutput) + } + } + + var r1 error + if rf, ok := ret.Get(1).(func(*s3.PutBucketInventoryConfigurationInput) error); ok { + r1 = rf(_a0) + } else { + r1 = ret.Error(1) + } + + return r0, r1 +} + +// PutBucketInventoryConfigurationRequest provides a mock function with given fields: _a0 +func (_m *S3API) PutBucketInventoryConfigurationRequest(_a0 *s3.PutBucketInventoryConfigurationInput) (*request.Request, *s3.PutBucketInventoryConfigurationOutput) { + ret := _m.Called(_a0) + + var r0 *request.Request + if 
rf, ok := ret.Get(0).(func(*s3.PutBucketInventoryConfigurationInput) *request.Request); ok { + r0 = rf(_a0) + } else { + if ret.Get(0) != nil { + r0 = ret.Get(0).(*request.Request) + } + } + + var r1 *s3.PutBucketInventoryConfigurationOutput + if rf, ok := ret.Get(1).(func(*s3.PutBucketInventoryConfigurationInput) *s3.PutBucketInventoryConfigurationOutput); ok { + r1 = rf(_a0) + } else { + if ret.Get(1) != nil { + r1 = ret.Get(1).(*s3.PutBucketInventoryConfigurationOutput) + } + } + + return r0, r1 +} + +// PutBucketInventoryConfigurationWithContext provides a mock function with given fields: _a0, _a1, _a2 +func (_m *S3API) PutBucketInventoryConfigurationWithContext(_a0 aws.Context, _a1 *s3.PutBucketInventoryConfigurationInput, _a2 ...request.Option) (*s3.PutBucketInventoryConfigurationOutput, error) { + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) var r0 *s3.PutBucketInventoryConfigurationOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.PutBucketInventoryConfigurationInput, ...request.Option) *s3.PutBucketInventoryConfigurationOutput); ok { @@ -4200,7 +5330,14 @@ func (_m *S3API) PutBucketLifecycleConfigurationRequest(_a0 *s3.PutBucketLifecyc // PutBucketLifecycleConfigurationWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) PutBucketLifecycleConfigurationWithContext(_a0 aws.Context, _a1 *s3.PutBucketLifecycleConfigurationInput, _a2 ...request.Option) (*s3.PutBucketLifecycleConfigurationOutput, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) 
var r0 *s3.PutBucketLifecycleConfigurationOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.PutBucketLifecycleConfigurationInput, ...request.Option) *s3.PutBucketLifecycleConfigurationOutput); ok { @@ -4248,7 +5385,14 @@ func (_m *S3API) PutBucketLifecycleRequest(_a0 *s3.PutBucketLifecycleInput) (*re // PutBucketLifecycleWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) PutBucketLifecycleWithContext(_a0 aws.Context, _a1 *s3.PutBucketLifecycleInput, _a2 ...request.Option) (*s3.PutBucketLifecycleOutput, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) var r0 *s3.PutBucketLifecycleOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.PutBucketLifecycleInput, ...request.Option) *s3.PutBucketLifecycleOutput); ok { @@ -4319,7 +5463,14 @@ func (_m *S3API) PutBucketLoggingRequest(_a0 *s3.PutBucketLoggingInput) (*reques // PutBucketLoggingWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) PutBucketLoggingWithContext(_a0 aws.Context, _a1 *s3.PutBucketLoggingInput, _a2 ...request.Option) (*s3.PutBucketLoggingOutput, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) 
var r0 *s3.PutBucketLoggingOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.PutBucketLoggingInput, ...request.Option) *s3.PutBucketLoggingOutput); ok { @@ -4390,7 +5541,14 @@ func (_m *S3API) PutBucketMetricsConfigurationRequest(_a0 *s3.PutBucketMetricsCo // PutBucketMetricsConfigurationWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) PutBucketMetricsConfigurationWithContext(_a0 aws.Context, _a1 *s3.PutBucketMetricsConfigurationInput, _a2 ...request.Option) (*s3.PutBucketMetricsConfigurationOutput, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) var r0 *s3.PutBucketMetricsConfigurationOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.PutBucketMetricsConfigurationInput, ...request.Option) *s3.PutBucketMetricsConfigurationOutput); ok { @@ -4484,7 +5642,14 @@ func (_m *S3API) PutBucketNotificationConfigurationRequest(_a0 *s3.PutBucketNoti // PutBucketNotificationConfigurationWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) PutBucketNotificationConfigurationWithContext(_a0 aws.Context, _a1 *s3.PutBucketNotificationConfigurationInput, _a2 ...request.Option) (*s3.PutBucketNotificationConfigurationOutput, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) 
var r0 *s3.PutBucketNotificationConfigurationOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.PutBucketNotificationConfigurationInput, ...request.Option) *s3.PutBucketNotificationConfigurationOutput); ok { @@ -4532,7 +5697,14 @@ func (_m *S3API) PutBucketNotificationRequest(_a0 *s3.PutBucketNotificationInput // PutBucketNotificationWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) PutBucketNotificationWithContext(_a0 aws.Context, _a1 *s3.PutBucketNotificationInput, _a2 ...request.Option) (*s3.PutBucketNotificationOutput, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) var r0 *s3.PutBucketNotificationOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.PutBucketNotificationInput, ...request.Option) *s3.PutBucketNotificationOutput); ok { @@ -4603,7 +5775,14 @@ func (_m *S3API) PutBucketPolicyRequest(_a0 *s3.PutBucketPolicyInput) (*request. // PutBucketPolicyWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) PutBucketPolicyWithContext(_a0 aws.Context, _a1 *s3.PutBucketPolicyInput, _a2 ...request.Option) (*s3.PutBucketPolicyOutput, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) 
var r0 *s3.PutBucketPolicyOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.PutBucketPolicyInput, ...request.Option) *s3.PutBucketPolicyOutput); ok { @@ -4674,7 +5853,14 @@ func (_m *S3API) PutBucketReplicationRequest(_a0 *s3.PutBucketReplicationInput) // PutBucketReplicationWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) PutBucketReplicationWithContext(_a0 aws.Context, _a1 *s3.PutBucketReplicationInput, _a2 ...request.Option) (*s3.PutBucketReplicationOutput, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) var r0 *s3.PutBucketReplicationOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.PutBucketReplicationInput, ...request.Option) *s3.PutBucketReplicationOutput); ok { @@ -4745,7 +5931,14 @@ func (_m *S3API) PutBucketRequestPaymentRequest(_a0 *s3.PutBucketRequestPaymentI // PutBucketRequestPaymentWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) PutBucketRequestPaymentWithContext(_a0 aws.Context, _a1 *s3.PutBucketRequestPaymentInput, _a2 ...request.Option) (*s3.PutBucketRequestPaymentOutput, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) 
var r0 *s3.PutBucketRequestPaymentOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.PutBucketRequestPaymentInput, ...request.Option) *s3.PutBucketRequestPaymentOutput); ok { @@ -4816,7 +6009,14 @@ func (_m *S3API) PutBucketTaggingRequest(_a0 *s3.PutBucketTaggingInput) (*reques // PutBucketTaggingWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) PutBucketTaggingWithContext(_a0 aws.Context, _a1 *s3.PutBucketTaggingInput, _a2 ...request.Option) (*s3.PutBucketTaggingOutput, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) var r0 *s3.PutBucketTaggingOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.PutBucketTaggingInput, ...request.Option) *s3.PutBucketTaggingOutput); ok { @@ -4887,7 +6087,14 @@ func (_m *S3API) PutBucketVersioningRequest(_a0 *s3.PutBucketVersioningInput) (* // PutBucketVersioningWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) PutBucketVersioningWithContext(_a0 aws.Context, _a1 *s3.PutBucketVersioningInput, _a2 ...request.Option) (*s3.PutBucketVersioningOutput, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) 
var r0 *s3.PutBucketVersioningOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.PutBucketVersioningInput, ...request.Option) *s3.PutBucketVersioningOutput); ok { @@ -4958,7 +6165,14 @@ func (_m *S3API) PutBucketWebsiteRequest(_a0 *s3.PutBucketWebsiteInput) (*reques // PutBucketWebsiteWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) PutBucketWebsiteWithContext(_a0 aws.Context, _a1 *s3.PutBucketWebsiteInput, _a2 ...request.Option) (*s3.PutBucketWebsiteOutput, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) var r0 *s3.PutBucketWebsiteOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.PutBucketWebsiteInput, ...request.Option) *s3.PutBucketWebsiteOutput); ok { @@ -5052,7 +6266,14 @@ func (_m *S3API) PutObjectAclRequest(_a0 *s3.PutObjectAclInput) (*request.Reques // PutObjectAclWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) PutObjectAclWithContext(_a0 aws.Context, _a1 *s3.PutObjectAclInput, _a2 ...request.Option) (*s3.PutObjectAclOutput, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) 
var r0 *s3.PutObjectAclOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.PutObjectAclInput, ...request.Option) *s3.PutObjectAclOutput); ok { @@ -5073,6 +6294,162 @@ func (_m *S3API) PutObjectAclWithContext(_a0 aws.Context, _a1 *s3.PutObjectAclIn return r0, r1 } +// PutObjectLegalHold provides a mock function with given fields: _a0 +func (_m *S3API) PutObjectLegalHold(_a0 *s3.PutObjectLegalHoldInput) (*s3.PutObjectLegalHoldOutput, error) { + ret := _m.Called(_a0) + + var r0 *s3.PutObjectLegalHoldOutput + if rf, ok := ret.Get(0).(func(*s3.PutObjectLegalHoldInput) *s3.PutObjectLegalHoldOutput); ok { + r0 = rf(_a0) + } else { + if ret.Get(0) != nil { + r0 = ret.Get(0).(*s3.PutObjectLegalHoldOutput) + } + } + + var r1 error + if rf, ok := ret.Get(1).(func(*s3.PutObjectLegalHoldInput) error); ok { + r1 = rf(_a0) + } else { + r1 = ret.Error(1) + } + + return r0, r1 +} + +// PutObjectLegalHoldRequest provides a mock function with given fields: _a0 +func (_m *S3API) PutObjectLegalHoldRequest(_a0 *s3.PutObjectLegalHoldInput) (*request.Request, *s3.PutObjectLegalHoldOutput) { + ret := _m.Called(_a0) + + var r0 *request.Request + if rf, ok := ret.Get(0).(func(*s3.PutObjectLegalHoldInput) *request.Request); ok { + r0 = rf(_a0) + } else { + if ret.Get(0) != nil { + r0 = ret.Get(0).(*request.Request) + } + } + + var r1 *s3.PutObjectLegalHoldOutput + if rf, ok := ret.Get(1).(func(*s3.PutObjectLegalHoldInput) *s3.PutObjectLegalHoldOutput); ok { + r1 = rf(_a0) + } else { + if ret.Get(1) != nil { + r1 = ret.Get(1).(*s3.PutObjectLegalHoldOutput) + } + } + + return r0, r1 +} + +// PutObjectLegalHoldWithContext provides a mock function with given fields: _a0, _a1, _a2 +func (_m *S3API) PutObjectLegalHoldWithContext(_a0 aws.Context, _a1 *s3.PutObjectLegalHoldInput, _a2 ...request.Option) (*s3.PutObjectLegalHoldOutput, error) { + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = 
append(_ca, _va...) + ret := _m.Called(_ca...) + + var r0 *s3.PutObjectLegalHoldOutput + if rf, ok := ret.Get(0).(func(aws.Context, *s3.PutObjectLegalHoldInput, ...request.Option) *s3.PutObjectLegalHoldOutput); ok { + r0 = rf(_a0, _a1, _a2...) + } else { + if ret.Get(0) != nil { + r0 = ret.Get(0).(*s3.PutObjectLegalHoldOutput) + } + } + + var r1 error + if rf, ok := ret.Get(1).(func(aws.Context, *s3.PutObjectLegalHoldInput, ...request.Option) error); ok { + r1 = rf(_a0, _a1, _a2...) + } else { + r1 = ret.Error(1) + } + + return r0, r1 +} + +// PutObjectLockConfiguration provides a mock function with given fields: _a0 +func (_m *S3API) PutObjectLockConfiguration(_a0 *s3.PutObjectLockConfigurationInput) (*s3.PutObjectLockConfigurationOutput, error) { + ret := _m.Called(_a0) + + var r0 *s3.PutObjectLockConfigurationOutput + if rf, ok := ret.Get(0).(func(*s3.PutObjectLockConfigurationInput) *s3.PutObjectLockConfigurationOutput); ok { + r0 = rf(_a0) + } else { + if ret.Get(0) != nil { + r0 = ret.Get(0).(*s3.PutObjectLockConfigurationOutput) + } + } + + var r1 error + if rf, ok := ret.Get(1).(func(*s3.PutObjectLockConfigurationInput) error); ok { + r1 = rf(_a0) + } else { + r1 = ret.Error(1) + } + + return r0, r1 +} + +// PutObjectLockConfigurationRequest provides a mock function with given fields: _a0 +func (_m *S3API) PutObjectLockConfigurationRequest(_a0 *s3.PutObjectLockConfigurationInput) (*request.Request, *s3.PutObjectLockConfigurationOutput) { + ret := _m.Called(_a0) + + var r0 *request.Request + if rf, ok := ret.Get(0).(func(*s3.PutObjectLockConfigurationInput) *request.Request); ok { + r0 = rf(_a0) + } else { + if ret.Get(0) != nil { + r0 = ret.Get(0).(*request.Request) + } + } + + var r1 *s3.PutObjectLockConfigurationOutput + if rf, ok := ret.Get(1).(func(*s3.PutObjectLockConfigurationInput) *s3.PutObjectLockConfigurationOutput); ok { + r1 = rf(_a0) + } else { + if ret.Get(1) != nil { + r1 = ret.Get(1).(*s3.PutObjectLockConfigurationOutput) + } + } + + return 
r0, r1 +} + +// PutObjectLockConfigurationWithContext provides a mock function with given fields: _a0, _a1, _a2 +func (_m *S3API) PutObjectLockConfigurationWithContext(_a0 aws.Context, _a1 *s3.PutObjectLockConfigurationInput, _a2 ...request.Option) (*s3.PutObjectLockConfigurationOutput, error) { + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) + + var r0 *s3.PutObjectLockConfigurationOutput + if rf, ok := ret.Get(0).(func(aws.Context, *s3.PutObjectLockConfigurationInput, ...request.Option) *s3.PutObjectLockConfigurationOutput); ok { + r0 = rf(_a0, _a1, _a2...) + } else { + if ret.Get(0) != nil { + r0 = ret.Get(0).(*s3.PutObjectLockConfigurationOutput) + } + } + + var r1 error + if rf, ok := ret.Get(1).(func(aws.Context, *s3.PutObjectLockConfigurationInput, ...request.Option) error); ok { + r1 = rf(_a0, _a1, _a2...) + } else { + r1 = ret.Error(1) + } + + return r0, r1 +} + // PutObjectRequest provides a mock function with given fields: _a0 func (_m *S3API) PutObjectRequest(_a0 *s3.PutObjectInput) (*request.Request, *s3.PutObjectOutput) { ret := _m.Called(_a0) @@ -5098,6 +6475,84 @@ func (_m *S3API) PutObjectRequest(_a0 *s3.PutObjectInput) (*request.Request, *s3 return r0, r1 } +// PutObjectRetention provides a mock function with given fields: _a0 +func (_m *S3API) PutObjectRetention(_a0 *s3.PutObjectRetentionInput) (*s3.PutObjectRetentionOutput, error) { + ret := _m.Called(_a0) + + var r0 *s3.PutObjectRetentionOutput + if rf, ok := ret.Get(0).(func(*s3.PutObjectRetentionInput) *s3.PutObjectRetentionOutput); ok { + r0 = rf(_a0) + } else { + if ret.Get(0) != nil { + r0 = ret.Get(0).(*s3.PutObjectRetentionOutput) + } + } + + var r1 error + if rf, ok := ret.Get(1).(func(*s3.PutObjectRetentionInput) error); ok { + r1 = rf(_a0) + } else { + r1 = ret.Error(1) + } + + return r0, r1 +} + +// PutObjectRetentionRequest provides a 
mock function with given fields: _a0 +func (_m *S3API) PutObjectRetentionRequest(_a0 *s3.PutObjectRetentionInput) (*request.Request, *s3.PutObjectRetentionOutput) { + ret := _m.Called(_a0) + + var r0 *request.Request + if rf, ok := ret.Get(0).(func(*s3.PutObjectRetentionInput) *request.Request); ok { + r0 = rf(_a0) + } else { + if ret.Get(0) != nil { + r0 = ret.Get(0).(*request.Request) + } + } + + var r1 *s3.PutObjectRetentionOutput + if rf, ok := ret.Get(1).(func(*s3.PutObjectRetentionInput) *s3.PutObjectRetentionOutput); ok { + r1 = rf(_a0) + } else { + if ret.Get(1) != nil { + r1 = ret.Get(1).(*s3.PutObjectRetentionOutput) + } + } + + return r0, r1 +} + +// PutObjectRetentionWithContext provides a mock function with given fields: _a0, _a1, _a2 +func (_m *S3API) PutObjectRetentionWithContext(_a0 aws.Context, _a1 *s3.PutObjectRetentionInput, _a2 ...request.Option) (*s3.PutObjectRetentionOutput, error) { + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) + + var r0 *s3.PutObjectRetentionOutput + if rf, ok := ret.Get(0).(func(aws.Context, *s3.PutObjectRetentionInput, ...request.Option) *s3.PutObjectRetentionOutput); ok { + r0 = rf(_a0, _a1, _a2...) + } else { + if ret.Get(0) != nil { + r0 = ret.Get(0).(*s3.PutObjectRetentionOutput) + } + } + + var r1 error + if rf, ok := ret.Get(1).(func(aws.Context, *s3.PutObjectRetentionInput, ...request.Option) error); ok { + r1 = rf(_a0, _a1, _a2...) 
+ } else { + r1 = ret.Error(1) + } + + return r0, r1 +} + // PutObjectTagging provides a mock function with given fields: _a0 func (_m *S3API) PutObjectTagging(_a0 *s3.PutObjectTaggingInput) (*s3.PutObjectTaggingOutput, error) { ret := _m.Called(_a0) @@ -5148,7 +6603,14 @@ func (_m *S3API) PutObjectTaggingRequest(_a0 *s3.PutObjectTaggingInput) (*reques // PutObjectTaggingWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) PutObjectTaggingWithContext(_a0 aws.Context, _a1 *s3.PutObjectTaggingInput, _a2 ...request.Option) (*s3.PutObjectTaggingOutput, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) var r0 *s3.PutObjectTaggingOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.PutObjectTaggingInput, ...request.Option) *s3.PutObjectTaggingOutput); ok { @@ -5171,7 +6633,14 @@ func (_m *S3API) PutObjectTaggingWithContext(_a0 aws.Context, _a1 *s3.PutObjectT // PutObjectWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) PutObjectWithContext(_a0 aws.Context, _a1 *s3.PutObjectInput, _a2 ...request.Option) (*s3.PutObjectOutput, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) 
var r0 *s3.PutObjectOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.PutObjectInput, ...request.Option) *s3.PutObjectOutput); ok { @@ -5192,6 +6661,84 @@ func (_m *S3API) PutObjectWithContext(_a0 aws.Context, _a1 *s3.PutObjectInput, _ return r0, r1 } +// PutPublicAccessBlock provides a mock function with given fields: _a0 +func (_m *S3API) PutPublicAccessBlock(_a0 *s3.PutPublicAccessBlockInput) (*s3.PutPublicAccessBlockOutput, error) { + ret := _m.Called(_a0) + + var r0 *s3.PutPublicAccessBlockOutput + if rf, ok := ret.Get(0).(func(*s3.PutPublicAccessBlockInput) *s3.PutPublicAccessBlockOutput); ok { + r0 = rf(_a0) + } else { + if ret.Get(0) != nil { + r0 = ret.Get(0).(*s3.PutPublicAccessBlockOutput) + } + } + + var r1 error + if rf, ok := ret.Get(1).(func(*s3.PutPublicAccessBlockInput) error); ok { + r1 = rf(_a0) + } else { + r1 = ret.Error(1) + } + + return r0, r1 +} + +// PutPublicAccessBlockRequest provides a mock function with given fields: _a0 +func (_m *S3API) PutPublicAccessBlockRequest(_a0 *s3.PutPublicAccessBlockInput) (*request.Request, *s3.PutPublicAccessBlockOutput) { + ret := _m.Called(_a0) + + var r0 *request.Request + if rf, ok := ret.Get(0).(func(*s3.PutPublicAccessBlockInput) *request.Request); ok { + r0 = rf(_a0) + } else { + if ret.Get(0) != nil { + r0 = ret.Get(0).(*request.Request) + } + } + + var r1 *s3.PutPublicAccessBlockOutput + if rf, ok := ret.Get(1).(func(*s3.PutPublicAccessBlockInput) *s3.PutPublicAccessBlockOutput); ok { + r1 = rf(_a0) + } else { + if ret.Get(1) != nil { + r1 = ret.Get(1).(*s3.PutPublicAccessBlockOutput) + } + } + + return r0, r1 +} + +// PutPublicAccessBlockWithContext provides a mock function with given fields: _a0, _a1, _a2 +func (_m *S3API) PutPublicAccessBlockWithContext(_a0 aws.Context, _a1 *s3.PutPublicAccessBlockInput, _a2 ...request.Option) (*s3.PutPublicAccessBlockOutput, error) { + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = 
append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) + + var r0 *s3.PutPublicAccessBlockOutput + if rf, ok := ret.Get(0).(func(aws.Context, *s3.PutPublicAccessBlockInput, ...request.Option) *s3.PutPublicAccessBlockOutput); ok { + r0 = rf(_a0, _a1, _a2...) + } else { + if ret.Get(0) != nil { + r0 = ret.Get(0).(*s3.PutPublicAccessBlockOutput) + } + } + + var r1 error + if rf, ok := ret.Get(1).(func(aws.Context, *s3.PutPublicAccessBlockInput, ...request.Option) error); ok { + r1 = rf(_a0, _a1, _a2...) + } else { + r1 = ret.Error(1) + } + + return r0, r1 +} + // RestoreObject provides a mock function with given fields: _a0 func (_m *S3API) RestoreObject(_a0 *s3.RestoreObjectInput) (*s3.RestoreObjectOutput, error) { ret := _m.Called(_a0) @@ -5242,7 +6789,14 @@ func (_m *S3API) RestoreObjectRequest(_a0 *s3.RestoreObjectInput) (*request.Requ // RestoreObjectWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) RestoreObjectWithContext(_a0 aws.Context, _a1 *s3.RestoreObjectInput, _a2 ...request.Option) (*s3.RestoreObjectOutput, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) 
var r0 *s3.RestoreObjectOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.RestoreObjectInput, ...request.Option) *s3.RestoreObjectOutput); ok { @@ -5263,6 +6817,84 @@ func (_m *S3API) RestoreObjectWithContext(_a0 aws.Context, _a1 *s3.RestoreObject return r0, r1 } +// SelectObjectContent provides a mock function with given fields: _a0 +func (_m *S3API) SelectObjectContent(_a0 *s3.SelectObjectContentInput) (*s3.SelectObjectContentOutput, error) { + ret := _m.Called(_a0) + + var r0 *s3.SelectObjectContentOutput + if rf, ok := ret.Get(0).(func(*s3.SelectObjectContentInput) *s3.SelectObjectContentOutput); ok { + r0 = rf(_a0) + } else { + if ret.Get(0) != nil { + r0 = ret.Get(0).(*s3.SelectObjectContentOutput) + } + } + + var r1 error + if rf, ok := ret.Get(1).(func(*s3.SelectObjectContentInput) error); ok { + r1 = rf(_a0) + } else { + r1 = ret.Error(1) + } + + return r0, r1 +} + +// SelectObjectContentRequest provides a mock function with given fields: _a0 +func (_m *S3API) SelectObjectContentRequest(_a0 *s3.SelectObjectContentInput) (*request.Request, *s3.SelectObjectContentOutput) { + ret := _m.Called(_a0) + + var r0 *request.Request + if rf, ok := ret.Get(0).(func(*s3.SelectObjectContentInput) *request.Request); ok { + r0 = rf(_a0) + } else { + if ret.Get(0) != nil { + r0 = ret.Get(0).(*request.Request) + } + } + + var r1 *s3.SelectObjectContentOutput + if rf, ok := ret.Get(1).(func(*s3.SelectObjectContentInput) *s3.SelectObjectContentOutput); ok { + r1 = rf(_a0) + } else { + if ret.Get(1) != nil { + r1 = ret.Get(1).(*s3.SelectObjectContentOutput) + } + } + + return r0, r1 +} + +// SelectObjectContentWithContext provides a mock function with given fields: _a0, _a1, _a2 +func (_m *S3API) SelectObjectContentWithContext(_a0 aws.Context, _a1 *s3.SelectObjectContentInput, _a2 ...request.Option) (*s3.SelectObjectContentOutput, error) { + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, 
_a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) + + var r0 *s3.SelectObjectContentOutput + if rf, ok := ret.Get(0).(func(aws.Context, *s3.SelectObjectContentInput, ...request.Option) *s3.SelectObjectContentOutput); ok { + r0 = rf(_a0, _a1, _a2...) + } else { + if ret.Get(0) != nil { + r0 = ret.Get(0).(*s3.SelectObjectContentOutput) + } + } + + var r1 error + if rf, ok := ret.Get(1).(func(aws.Context, *s3.SelectObjectContentInput, ...request.Option) error); ok { + r1 = rf(_a0, _a1, _a2...) + } else { + r1 = ret.Error(1) + } + + return r0, r1 +} + // UploadPart provides a mock function with given fields: _a0 func (_m *S3API) UploadPart(_a0 *s3.UploadPartInput) (*s3.UploadPartOutput, error) { ret := _m.Called(_a0) @@ -5336,7 +6968,14 @@ func (_m *S3API) UploadPartCopyRequest(_a0 *s3.UploadPartCopyInput) (*request.Re // UploadPartCopyWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) UploadPartCopyWithContext(_a0 aws.Context, _a1 *s3.UploadPartCopyInput, _a2 ...request.Option) (*s3.UploadPartCopyOutput, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) var r0 *s3.UploadPartCopyOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.UploadPartCopyInput, ...request.Option) *s3.UploadPartCopyOutput); ok { @@ -5384,7 +7023,14 @@ func (_m *S3API) UploadPartRequest(_a0 *s3.UploadPartInput) (*request.Request, * // UploadPartWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) UploadPartWithContext(_a0 aws.Context, _a1 *s3.UploadPartInput, _a2 ...request.Option) (*s3.UploadPartOutput, error) { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) 
+ ret := _m.Called(_ca...) var r0 *s3.UploadPartOutput if rf, ok := ret.Get(0).(func(aws.Context, *s3.UploadPartInput, ...request.Option) *s3.UploadPartOutput); ok { @@ -5421,7 +7067,14 @@ func (_m *S3API) WaitUntilBucketExists(_a0 *s3.HeadBucketInput) error { // WaitUntilBucketExistsWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) WaitUntilBucketExistsWithContext(_a0 aws.Context, _a1 *s3.HeadBucketInput, _a2 ...request.WaiterOption) error { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) var r0 error if rf, ok := ret.Get(0).(func(aws.Context, *s3.HeadBucketInput, ...request.WaiterOption) error); ok { @@ -5449,7 +7102,14 @@ func (_m *S3API) WaitUntilBucketNotExists(_a0 *s3.HeadBucketInput) error { // WaitUntilBucketNotExistsWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) WaitUntilBucketNotExistsWithContext(_a0 aws.Context, _a1 *s3.HeadBucketInput, _a2 ...request.WaiterOption) error { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) 
var r0 error if rf, ok := ret.Get(0).(func(aws.Context, *s3.HeadBucketInput, ...request.WaiterOption) error); ok { @@ -5477,7 +7137,14 @@ func (_m *S3API) WaitUntilObjectExists(_a0 *s3.HeadObjectInput) error { // WaitUntilObjectExistsWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) WaitUntilObjectExistsWithContext(_a0 aws.Context, _a1 *s3.HeadObjectInput, _a2 ...request.WaiterOption) error { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) var r0 error if rf, ok := ret.Get(0).(func(aws.Context, *s3.HeadObjectInput, ...request.WaiterOption) error); ok { @@ -5505,7 +7172,14 @@ func (_m *S3API) WaitUntilObjectNotExists(_a0 *s3.HeadObjectInput) error { // WaitUntilObjectNotExistsWithContext provides a mock function with given fields: _a0, _a1, _a2 func (_m *S3API) WaitUntilObjectNotExistsWithContext(_a0 aws.Context, _a1 *s3.HeadObjectInput, _a2 ...request.WaiterOption) error { - ret := _m.Called(_a0, _a1, _a2) + _va := make([]interface{}, len(_a2)) + for _i := range _a2 { + _va[_i] = _a2[_i] + } + var _ca []interface{} + _ca = append(_ca, _a0, _a1) + _ca = append(_ca, _va...) + ret := _m.Called(_ca...) 
var r0 error if rf, ok := ret.Get(0).(func(aws.Context, *s3.HeadObjectInput, ...request.WaiterOption) error); ok { @@ -5516,5 +7190,3 @@ func (_m *S3API) WaitUntilObjectNotExistsWithContext(_a0 aws.Context, _a1 *s3.He return r0 } - -var _ s3iface.S3API = (*S3API)(nil) diff --git a/mocks/StringFile.go b/mocks/StringFile.go index f6dc75b3..ba83f1ae 100644 --- a/mocks/StringFile.go +++ b/mocks/StringFile.go @@ -67,12 +67,18 @@ type ReadWriteFile struct { func (f *ReadWriteFile) Read(p []byte) (n int, err error) { // Deal with mocks for potential assertions - f.File.Read(p) + n, err = f.File.Read(p) + if err != nil { + return + } return f.Reader.Read(p) } func (f *ReadWriteFile) Write(p []byte) (n int, err error) { - f.File.Write(p) + n, err = f.File.Write(p) + if err != nil { + return + } return f.Writer.Write(p) } func (f *ReadWriteFile) Content() string { diff --git a/os/fileSystem.go b/os/fileSystem.go deleted file mode 100644 index b46d710b..00000000 --- a/os/fileSystem.go +++ /dev/null @@ -1,37 +0,0 @@ -package os - -import ( - "github.com/c2fo/vfs" -) - -//Scheme defines the filesystem type. -const ( - Scheme = "file" -) - -// FileSystem implements vfs.Filesystem for the OS filesystem. -type FileSystem struct{} - -// NewFile function returns the os implementation of vfs.File. -func (fs FileSystem) NewFile(volume string, name string) (vfs.File, error) { - file, err := newFile(name) - return vfs.File(file), err -} - -// NewLocation function returns the os implementation of vfs.Location. 
-func (fs FileSystem) NewLocation(volume string, name string) (vfs.Location, error) { - return &Location{ - fileSystem: vfs.FileSystem(fs), - name: vfs.AddTrailingSlash(name), - }, nil -} - -// Name returns "os" -func (fs FileSystem) Name() string { - return "os" -} - -// Scheme return "file" as the initial part of a file URI ie: file:// -func (fs FileSystem) Scheme() string { - return Scheme -} diff --git a/os/test_files/empty.txt b/os/test_files/empty.txt deleted file mode 100644 index e69de29b..00000000 diff --git a/os/test_files/prefix-file.txt b/os/test_files/prefix-file.txt deleted file mode 100644 index 5ba6ab4e..00000000 --- a/os/test_files/prefix-file.txt +++ /dev/null @@ -1 +0,0 @@ -hello, Dave \ No newline at end of file diff --git a/os/test_files/subdir/test.txt b/os/test_files/subdir/test.txt deleted file mode 100644 index debc1baf..00000000 --- a/os/test_files/subdir/test.txt +++ /dev/null @@ -1 +0,0 @@ -hello world too \ No newline at end of file diff --git a/os/test_files/test.txt b/os/test_files/test.txt deleted file mode 100644 index 95d09f2b..00000000 --- a/os/test_files/test.txt +++ /dev/null @@ -1 +0,0 @@ -hello world \ No newline at end of file diff --git a/s3/fileSystem.go b/s3/fileSystem.go deleted file mode 100644 index 710600de..00000000 --- a/s3/fileSystem.go +++ /dev/null @@ -1,50 +0,0 @@ -package s3 - -import ( - "github.com/aws/aws-sdk-go/service/s3/s3iface" - - "github.com/c2fo/vfs" -) - -// Scheme defines the filesystem type. -const Scheme = "s3" - -// FileSystem implements vfs.Filesystem for the S3 filesystem. -type FileSystem struct { - Client s3iface.S3API -} - -// NewFile function returns the s3 implementation of vfs.File. -func (fs FileSystem) NewFile(volume string, name string) (vfs.File, error) { - file, err := newFile(&fs, volume, name) - if err != nil { - return nil, err - } - return vfs.File(file), nil -} - -// NewLocation function returns the s3 implementation of vfs.Location. 
-func (fs FileSystem) NewLocation(volume string, name string) (vfs.Location, error) { - name = vfs.CleanPrefix(name) - return &Location{ - fileSystem: &fs, - prefix: name, - bucket: volume, - }, nil -} - -// Name returns "AWS S3" -func (fs FileSystem) Name() string { - return "AWS S3" -} - -// Scheme return "s3" as the initial part of a file URI ie: s3:// -func (fs FileSystem) Scheme() string { - return Scheme -} - -// NewFileSystem intializer for fileSystem struct accepts aws-sdk s3iface.S3API client and returns Filesystem or error. -func NewFileSystem(client s3iface.S3API) (*FileSystem, error) { - fs := &FileSystem{client} - return fs, nil -} diff --git a/utils.go b/utils/utils.go similarity index 78% rename from utils.go rename to utils/utils.go index 2ce663df..4badb4a9 100644 --- a/utils.go +++ b/utils/utils.go @@ -1,4 +1,4 @@ -package vfs +package utils import ( "errors" @@ -8,10 +8,14 @@ import ( "regexp" "runtime" "strings" + + "github.com/c2fo/vfs" ) const ( - Windows = "windows" + // Windows constant represents a target operating system running a version of Microsoft Windows + Windows = "windows" + // BadFilePrefix constant is returned when path has leading slash or backslash BadFilePrefix = "expecting only a filename prefix, which may not include slashes or backslashes" ) @@ -43,13 +47,13 @@ func AddTrailingSlash(path string) string { return path } -// GetFile returns a File URI -func GetFileURI(f File) string { +// GetFileURI returns a File URI +func GetFileURI(f vfs.File) string { return fmt.Sprintf("%s://%s%s", f.Location().FileSystem().Scheme(), f.Location().Volume(), f.Path()) } -// GetFile returns a Location URI -func GetLocationURI(l Location) string { +// GetLocationURI returns a Location URI +func GetLocationURI(l vfs.Location) string { return fmt.Sprintf("%s://%s%s", l.FileSystem().Scheme(), l.Volume(), l.Path()) } @@ -67,7 +71,7 @@ func CleanPrefix(prefix string) string { return prefixCleanRegex.ReplaceAllString(prefix, "") } -// Performs a 
validation check on a prefix. The prefix should not include "/" or "\\" characters. An +// ValidateFilePrefix performs a validation check on a prefix. The prefix should not include "/" or "\\" characters. An // error is returned if either of those conditions are true. func ValidateFilePrefix(filenamePrefix string) error { if strings.Contains(filenamePrefix, "/") || strings.Contains(filenamePrefix, "\\") { @@ -76,23 +80,16 @@ func ValidateFilePrefix(filenamePrefix string) error { return nil } -// Methods to ensure consistency between implementations - -func StandardizePath(path string) string { - if prefixSlashRegex.MatchString(path) { - return path - } else { - return "/" + path - } -} - // TouchCopy is a wrapper around io.Copy which ensures that even empty source files (reader) will get written as an // empty file. It guarantees a Write() call on the target file. -func TouchCopy(writer File, reader File) error { +func TouchCopy(writer, reader vfs.File) error { if size, err := reader.Size(); err != nil { return err } else if size == 0 { - writer.Write([]byte{}) + _, err = writer.Write([]byte{}) + if err != nil { + return err + } } else { if _, err := io.Copy(writer, reader); err != nil { return err diff --git a/utils_test.go b/utils/utils_test.go similarity index 75% rename from utils_test.go rename to utils/utils_test.go index f5e3658e..9f233538 100644 --- a/utils_test.go +++ b/utils/utils_test.go @@ -1,4 +1,4 @@ -package vfs_test +package utils_test import ( "testing" @@ -6,8 +6,8 @@ import ( "github.com/stretchr/testify/mock" "github.com/stretchr/testify/suite" - . "github.com/c2fo/vfs" // mocks also imports vfs resulting in circular dependency. 
See: https://github.com/golang/go/wiki/CodeReviewComments#import-dot "github.com/c2fo/vfs/mocks" + "github.com/c2fo/vfs/utils" ) /********************************** @@ -49,7 +49,7 @@ func (s *utilsTest) TestAddTrailingSlash() { } for _, slashtest := range tests { - s.Equal(slashtest.expected, AddTrailingSlash(slashtest.path), slashtest.message) + s.Equal(slashtest.expected, utils.AddTrailingSlash(slashtest.path), slashtest.message) } } @@ -81,12 +81,12 @@ func (s *utilsTest) TestGetURI() { mockFile2.On("Location").Return(mockLoc2) //GetFileURI - s.Equal("file:///some/path/to/file.txt", GetFileURI(mockFile1), "os file uri matches ") - s.Equal("s3://mybucket/this/path/to/file.txt", GetFileURI(mockFile2), "s3 file uri matches ") + s.Equal("file:///some/path/to/file.txt", utils.GetFileURI(mockFile1), "os file uri matches ") + s.Equal("s3://mybucket/this/path/to/file.txt", utils.GetFileURI(mockFile2), "s3 file uri matches ") //GetLocationURI - s.Equal("file:///some/path/to/", GetLocationURI(mockLoc1), "os location uri matches ") - s.Equal("s3://mybucket/this/path/to/", GetLocationURI(mockLoc2), "s3 location uri matches ") + s.Equal("file:///some/path/to/", utils.GetLocationURI(mockLoc1), "os location uri matches ") + s.Equal("s3://mybucket/this/path/to/", utils.GetLocationURI(mockLoc2), "s3 location uri matches ") } func TestUtils(t *testing.T) { diff --git a/vfs.go b/vfs.go index 6ff6837d..77137440 100644 --- a/vfs.go +++ b/vfs.go @@ -1,5 +1,3 @@ -// Package vfs provides a platform-independent interface to generalized set of filesystem -// functionality across a number of filesystem types such as os, S3, and GCS. package vfs import ( @@ -130,3 +128,5 @@ type File interface { // URI returns the fully qualified URI for the File. 
IE, s3://bucket/some/path/to/file.txt URI() string } + +type Options interface{} diff --git a/vfscp/doc.go b/vfscp/doc.go new file mode 100644 index 00000000..c453f0f5 --- /dev/null +++ b/vfscp/doc.go @@ -0,0 +1,23 @@ +/* +vfscp copies a file from one place to another, even between supported remote systems. +Complete URI (scheme://authority/path) required except for local filesystem. +See github.com/c2fo/vfs docs for authentication. + + +Usage + +vfscp's usage is extremely simple: + + vfscp + -help prints help message + +Examples + +Local OS URIs can be expressed without a scheme: + vfscp /some/local/file.txt s3://mybucket/path/to/myfile.txt +But they may also use the full scheme URI: + vfscp file:///some/local/file.txt s3://mybucket/path/to/myfile.txt +Copy a file from Google Cloud Storage to Amazon S3: + vfscp gs://googlebucket/some/path/photo.jpg s3://awsS3bucket/path/to/photo.jpg +*/ +package main diff --git a/vfscp/vfscp.go b/vfscp/vfscp.go index ab8d530a..4df07193 100644 --- a/vfscp/vfscp.go +++ b/vfscp/vfscp.go @@ -1,102 +1,117 @@ package main import ( - "errors" + "flag" "fmt" "net/url" "os" "path/filepath" + "github.com/fatih/color" + "github.com/c2fo/vfs/vfssimple" - "github.com/urfave/cli" ) +const usageTemplate = ` +%[1]s copies a file from one place to another, even between supported remote systems. +Complete URI (scheme://authority/path) required except for local filesystem. +See github.com/c2fo/vfs docs for authentication.
+ +Usage: %[1]s + + ie, %[1]s /some/local/file.txt s3://mybucket/path/to/myfile.txt + same as %[1]s file:///some/local/file.txt s3://mybucket/path/to/myfile.txt + gcs to s3 %[1]s gs://googlebucket/some/path/photo.jpg s3://awsS3bucket/path/to/photo.jpg + + -help + prints this message + +` + func main() { - app := cli.NewApp() - app.Name = "vfscp" - app.Usage = "Copies a file from one place to another, even between supported remote systems" - app.Flags = []cli.Flag{ - cli.StringFlag{ - Name: "awsKeyId", - Usage: "aws access key id for user", - EnvVar: "AWS_ACCESS_KEY_ID", - }, - cli.StringFlag{ - Name: "awsSecretKey", - Usage: "aws secret key for user", - EnvVar: "AWS_ACCESS_KEY", - }, - cli.StringFlag{ - Name: "awsSessionToken", - Usage: "aws session token", - EnvVar: "AWS_SESSION_TOKEN", - }, - cli.StringFlag{ - Name: "awsRegion", - Usage: "aws region", - EnvVar: "AWS_REGION", - }, + flag.Usage = func() { + fmt.Fprintf(os.Stdout, usageTemplate, os.Args[0]) } - app.Action = func(c *cli.Context) error { - err := checkArgs(c.Args().Get(0), c.Args().Get(1)) - if err != nil { - return err - } - srcFileURI, targetFileURI, err := normalizeArgs(c) - // TODO: if file is empty, create an empty file at targetFile. This should probably be done by vfs by default - // TODO: add support for S3 URIs. All relative paths or otherwise incomplete URIs should be interpreted as local paths. 
- fmt.Println(fmt.Sprintf("Copying %s to %s", srcFileURI, targetFileURI)) - srcFile, _ := vfssimple.NewFile(srcFileURI) - targetFile, _ := vfssimple.NewFile(targetFileURI) - return srcFile.CopyToFile(targetFile) + var help bool + flag.BoolVar(&help, "help", false, "prints this message") + flag.Parse() + + if help { + flag.Usage() + os.Exit(0) + } + + if len(flag.Args()) != 2 { + flag.Usage() + os.Exit(1) + } + + fmt.Println("") + + srcFileURI, err := normalizeArgs(flag.Arg(0)) + if err != nil { + panic(err) + } + targetFileURI, err := normalizeArgs(flag.Arg(1)) + if err != nil { + panic(err) } - app.Run(os.Args) + copyFiles(srcFileURI, targetFileURI) } -func checkArgs(a1, a2 string) error { - if a1 == "" || a2 == "" { - return errors.New("vfscp requires 2 non-empty arguments") +func copyFiles(srcFileURI, targetFileURI string) { + green := color.New(color.FgHiGreen).Add(color.Bold) + + copyMessage(srcFileURI, targetFileURI) + + srcFile, err := vfssimple.NewFile(srcFileURI) + if err != nil { + failMessage(err) } - return nil + targetFile, err := vfssimple.NewFile(targetFileURI) + if err != nil { + failMessage(err) + } + err = srcFile.CopyToFile(targetFile) + if err != nil { + failMessage(err) + } + + fmt.Print(green.Sprint("done\n\n")) + } -func normalizeArgs(c *cli.Context) (string, string, error) { - a1 := c.Args().Get(0) - a2 := c.Args().Get(1) - normalizedArgs := make([]string, 2) - for i, a := range []string{a1, a2} { - u, err := url.Parse(a) +func normalizeArgs(str string) (string, error) { + var normalizedArg string + u, err := url.Parse(str) + if err != nil { + return "", err + } + if u.IsAbs() { + normalizedArg = str + } else { + absPath, err := filepath.Abs(str) if err != nil { - return "", "", err - } - if u.IsAbs() { - normalizedArgs[i] = a - if err := initializeFS(u.Scheme, c); err != nil { - return "", "", err - } - } else { - absPath, err := filepath.Abs(a) - if err != nil { - return "", "", err - } - normalizedArgs[i] = "file://" + absPath - if err := 
initializeFS("file", c); err != nil { - return "", "", err - } + return "", err } + normalizedArg = "file://" + absPath } - return normalizedArgs[0], normalizedArgs[1], nil + return normalizedArg, err } -func initializeFS(scheme string, c *cli.Context) error { - switch scheme { - case "gs": - return vfssimple.InitializeGSFileSystem() - case "s3": - return vfssimple.InitializeS3FileSystem(c.String("awsKeyId"), c.String("awsSecretKey"), c.String("awsRegion"), c.String("awsSessionToken")) - case "file": - vfssimple.InitializeLocalFileSystem() - } - return nil +func failMessage(err error) { + red := color.New(color.FgHiRed).Add(color.Bold) + fmt.Printf(red.Sprint("failed\n\n")+"\n%s\n\n", err.Error()) + os.Exit(1) +} + +func copyMessage(src, dest string) { + white := color.New(color.FgHiWhite).Add(color.Bold) + blue := color.New(color.FgHiBlue).Add(color.Bold) + fmt.Print(white.Sprint("Copying ") + + blue.Sprint(src) + + white.Sprint(" to ") + + blue.Sprint(dest) + + white.Sprint(" ... ")) } diff --git a/vfssimple/doc.go b/vfssimple/doc.go new file mode 100644 index 00000000..b08aae94 --- /dev/null +++ b/vfssimple/doc.go @@ -0,0 +1,46 @@ +/* +Package vfssimple provides a basic, easy-to-use set of functions for any supported backend filesystem by using full URIs: + * Local OS: file:///some/path/to/file.txt + * Amazon S3: s3://mybucket/path/to/file.txt + * Google Cloud Storage: gs://mybucket/path/to/file.txt + +Usage + +Just import vfssimple. + + package main + + import ( + "github.com/c2fo/vfs/vfssimple" + ) + + ... + + func DoSomething() error { + myLocalDir, err := vfssimple.NewLocation("file:///tmp/") + if err != nil { + return err + } + + myS3File, err := vfssimple.NewFile("s3://mybucket/some/path/to/key.txt") + if err != nil { + return err + } + + localFile, err := myS3File.MoveToLocation(myLocalDir) + if err != nil { + return err + } + + } + +Authentication and Options + +vfssimple is largely an example of how to initialize a set of backend filesystems.
It only provides a default +initialization of the individual file systems. See the backend docs for specific authentication info for each backend, but +generally speaking, most backends can use environment variables to set credentials or client options. + +To do more, especially if you need to pass in specific vfs.Options via WithOption() or perhaps a mock client for testing via +WithClient() or something else, you'd need to implement your own factory. See github.com/c2fo/vfs/backend for more information. +*/ +package vfssimple diff --git a/vfssimple/vfssimple.go b/vfssimple/vfssimple.go index 72088d57..0c3824d8 100644 --- a/vfssimple/vfssimple.go +++ b/vfssimple/vfssimple.go @@ -1,139 +1,59 @@ package vfssimple import ( - "context" "errors" "fmt" "net/url" - "cloud.google.com/go/storage" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/aws/credentials" - "github.com/aws/aws-sdk-go/aws/session" - "github.com/aws/aws-sdk-go/service/s3" - "github.com/aws/aws-sdk-go/service/s3/s3iface" "github.com/c2fo/vfs" - "github.com/c2fo/vfs/gs" - "github.com/c2fo/vfs/os" - _s3 "github.com/c2fo/vfs/s3" + "github.com/c2fo/vfs/backend" + _ "github.com/c2fo/vfs/backend/all" ) -var ( - fileSystems map[string]vfs.FileSystem -) - -func init() { - fileSystems = map[string]vfs.FileSystem{} -} - -func InitializeLocalFileSystem() { - if _, ok := fileSystems[os.Scheme]; ok { - return - } - fileSystems[os.Scheme] = vfs.FileSystem(os.FileSystem{}) - return -} - -func InitializeGSFileSystem() error { - if _, ok := fileSystems[gs.Scheme]; ok { - return nil - } - ctx := context.Background() - client, err := storage.NewClient(ctx) - if err != nil { - return err - } - fileSystems[gs.Scheme] = vfs.FileSystem(gs.NewFileSystem(ctx, client)) - return nil -} - -// InitializeS3FileSystem will handle the bare minimum requirements for setting up an s3.FileSystem -// by setting up an s3 client with the accessKeyId and secreteAccessKey (both required), and an optional -// session token.
It is required before making calls to vfs.NewLocation or vfs.NewFile with s3 URIs -// to have set this up ahead of time. If you require more in depth configuration of the s3 Client you -// may set one up yourself and pass the resulting s3iface.S3API to vfs.SetS3Client which will also -// fulfil this requirement. -func InitializeS3FileSystem(accessKeyId, secretAccessKey, region, token string) error { - if _, ok := fileSystems[_s3.Scheme]; ok { - return nil - } - if accessKeyId == "" { - return errors.New("accessKeyId argument of InitializeS3FileSystem cannot be an empty string.") - } - if secretAccessKey == "" { - return errors.New("secretAccessKey argument of InitializeS3FileSystem cannot be an empty string.") - } - if region == "" { - region = "us-west-2" - } - auth := credentials.NewStaticCredentials(accessKeyId, secretAccessKey, token) - awsConfig := aws.NewConfig().WithCredentials(auth).WithRegion(region) - awsSession, err := session.NewSession(awsConfig) - if err != nil { - return err - } - - SetS3Client(s3.New(awsSession)) - return nil -} - -// SetS3Client configures an s3.FileSystem with the client passed to it. This will be used by vfs when -// calling vfs.NewLocation or vfs.NewFile with an s3 URI. If you don't want to bother configuring the -// client manually vfs.InitializeS3FileSystem() will handle the client set up with the minimum -// required arguments (an access key id and secret access key.) -func SetS3Client(client s3iface.S3API) { - fileSystems[_s3.Scheme] = vfs.FileSystem(_s3.FileSystem{client}) -} - -// NewLocation is a convenience function that allows for instantiating a location based on a uri string. -// "file://", "s3://", and "gs://" are supported, assuming they have been configured ahead of time. +// NewLocation is a convenience function that allows for instantiating a location based on a uri string.Any +// backend filesystem is supported, though some may require prior configuration. 
See the docs for +// specific requirements of each func NewLocation(uri string) (vfs.Location, error) { - u, err := parseSupportedURI(uri) + fs, host, path, err := parseSupportedURI(uri) if err != nil { return nil, err } - return fileSystems[u.Scheme].NewLocation(u.Host, u.Path) + return fs.NewLocation(host, path) } // NewFile is a convenience function that allows for instantiating a file based on a uri string. Any -// supported file system is supported, though some may require prior configuration. See the docs for +// backend filesystem is supported, though some may require prior configuration. See the docs for // specific requirements of each. func NewFile(uri string) (vfs.File, error) { - u, err := parseSupportedURI(uri) + fs, host, path, err := parseSupportedURI(uri) if err != nil { return nil, err } - return fileSystems[u.Scheme].NewFile(u.Host, u.Path) + return fs.NewFile(host, path) } -func parseSupportedURI(uri string) (*url.URL, error) { - u, err := url.Parse(uri) +func parseSupportedURI(uri string) (vfs.FileSystem, string, string, error) { + var err error + var u *url.URL + u, err = url.Parse(uri) if err != nil { - return nil, err + return nil, "", "", err } - switch u.Scheme { - case gs.Scheme: - if _, ok := fileSystems[gs.Scheme]; ok { - return u, nil - } else { - return nil, fmt.Errorf("gs is a supported scheme but must be initialized. Call vfs.InitializeGSFileSystem() first.") - } - return u, nil - case os.Scheme: - if _, ok := fileSystems[os.Scheme]; ok { - return u, nil - } else { - return nil, fmt.Errorf("file is a supported scheme but must be initialized. Call vfs.InitializeLocalFileSystem() first.") - } - case _s3.Scheme: - if _, ok := fileSystems[_s3.Scheme]; ok { - return u, nil - } else { - return nil, fmt.Errorf("s3 is a supported scheme but must be intialized. 
Call vfs.InitializeS3FileSystem(accessKeyId, secretAccessKey, token string), or vfs.SetS3Client(client s3iface.S3API) first.") + host := u.Host + path := u.Path + + var fs vfs.FileSystem + for _, backendScheme := range backend.RegisteredBackends() { + if u.Scheme == backendScheme { + fs = backend.Backend(backendScheme) } - default: - return nil, fmt.Errorf("scheme [%s] is not supported.", u.Scheme) } + + if fs == nil { + err = errors.New(fmt.Sprintf("%s is an unsupported uri scheme", u.Scheme)) + } + + return fs, host, path, err }
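The change repeated across the regenerated mocks above, replacing `ret := _m.Called(_a0, _a1, _a2)` with a loop that flattens the variadic `_a2` into `_ca` before calling `_m.Called(_ca...)`, matters because testify's mock records and matches expectations one argument at a time. Here is a minimal, stdlib-only sketch of the difference; `called` and `flatten` are illustrative stand-ins (not testify or vfs APIs), with `called` playing the role of `mock.Mock.Called`:

```go
package main

import "fmt"

// called stands in for testify's mock.Mock.Called: it records one
// interface{} per argument the mocked method was invoked with.
func called(args ...interface{}) int {
	return len(args)
}

// flatten mirrors the generated mocks' _ca construction: the fixed
// arguments first, then each variadic option as its own element.
func flatten(fixed []interface{}, opts []string) []interface{} {
	out := make([]interface{}, 0, len(fixed)+len(opts))
	out = append(out, fixed...)
	for _, o := range opts {
		out = append(out, o)
	}
	return out
}

func main() {
	opts := []string{"withRetry", "withTimeout"}

	// Old generated code: the options slice is passed as a single
	// interface{} value, so it is recorded as ONE argument and
	// per-option expectations can never match.
	fmt.Println(called("ctx", "input", opts)) // 3

	// New generated code: ctx, input, and each option are recorded
	// individually, matching how the caller actually invoked the method.
	ca := flatten([]interface{}{"ctx", "input"}, opts)
	fmt.Println(called(ca...)) // 4
}
```

This is why every `...request.Option` and `...request.WaiterOption` method in the mock gets the same mechanical rewrite: without the flattening, an expectation set up as `m.On("PutObjectWithContext", ctx, input, opt)` would never match a call recorded as `(ctx, input, []request.Option{opt})`.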