
Support of stream operations #86

Open
iychoi opened this issue Jan 26, 2023 · 3 comments

iychoi commented Jan 26, 2023

As mentioned in the README, there's a limitation in supporting stream operations. However, could you reconsider implementing it, for the following reasons?

Libraries written in languages other than C/C++ rely on stream operations. For example, python-irodsclient and go-irodsclient implement parallel data upload and download on top of stream operations. Many tools built on these libraries will therefore inherit the same limitation with respect to logical quotas.
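For illustration, parallel transfer in such clients typically splits a file into byte ranges and streams each range over its own connection. The sketch below is generic and uses hypothetical names (`RemoteObject`, `parallel_upload`); it is not the python-irodsclient or go-irodsclient API, and a real client would open multiple server connections instead of an in-memory buffer.

```python
# Generic sketch of parallel upload built on stream operations.
# RemoteObject is a hypothetical in-memory stand-in for a server-side
# data object that supports offset-based stream writes.
import threading
from concurrent.futures import ThreadPoolExecutor

class RemoteObject:
    """In-memory stand-in for a remote data object."""
    def __init__(self, size):
        self.data = bytearray(size)
        self.lock = threading.Lock()

    def write_at(self, offset, chunk):
        # Each stream writes its own disjoint byte range; the lock only
        # guards the shared bytearray in this toy example.
        with self.lock:
            self.data[offset:offset + len(chunk)] = chunk

def parallel_upload(payload, remote, num_streams=4):
    """Split payload into ranges and stream each range concurrently."""
    size = len(payload)
    step = -(-size // num_streams)  # ceiling division

    def upload_range(offset):
        remote.write_at(offset, payload[offset:offset + step])

    with ThreadPoolExecutor(max_workers=num_streams) as pool:
        # map submits all ranges; exiting the block waits for completion.
        list(pool.map(upload_range, range(0, size, step)))
```

Because each stream only knows its own byte range, no single writer can tell ahead of time how much total data the transfer will add, which is exactly what makes up-front quota enforcement hard.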


trel commented Jan 27, 2023

From the README:

These changes have the following effects:

  • The plugin allows stream-based writes to violate the maximum bytes quota once.
  • Subsequent stream-based creates and writes will be denied until the quotas are out of violation.

The limitation you're talking about is the first bullet -- that any stream operation can go past the quota once?

We don't have a good implementation option available to track and prevent that initial quota violation. We can discuss more, but the options we considered at the time were all messy, hard, and slow.


iychoi commented Jan 27, 2023

Yes, I am referring to the first bullet.

Would it be hard to raise an error, when closing a file, if the write violated the maximum bytes quota? I agree that checking for the violation on every write would be slow and inefficient.


trel commented Jan 28, 2023

Well, we could definitely do the math and throw an error... but then... do what with the object? or the data? It's already been written.

I think we worked out our options were...
a) we prevent it ahead of time, which is hard/impossible because we don't know how much more is coming, or
b) we track every byte, which is very slow and hard to coordinate with other possible writers, or
c) we throw error at the end (like you suggest), or
d) we let one stream go past the quota and update the stats (the current implementation).
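As a rough illustration of option (c), the quota check can be deferred until close: writes stay cheap, and the violation is only detected after the data has already landed. All names here (`QuotaTracker`, `StreamHandle`) are hypothetical; this is not the plugin's actual implementation, which is a C++ iRODS plugin.

```python
# Hypothetical sketch of option (c): defer the quota check to close().
class QuotaViolation(Exception):
    pass

class QuotaTracker:
    """Shared byte-count state for one collection's quota."""
    def __init__(self, max_bytes):
        self.max_bytes = max_bytes
        self.used_bytes = 0

class StreamHandle:
    def __init__(self, tracker):
        self.tracker = tracker
        self.bytes_written = 0

    def write(self, data):
        # No per-write quota check: writes stay fast, at the cost of
        # allowing this stream to exceed the quota before close.
        self.bytes_written += len(data)

    def close(self):
        # Update the shared stats once, then check the quota.
        self.tracker.used_bytes += self.bytes_written
        if self.tracker.used_bytes > self.tracker.max_bytes:
            # The data has already been written; this error only
            # reports the violation after the fact.
            raise QuotaViolation(
                f"{self.tracker.used_bytes} > {self.tracker.max_bytes} bytes")
```

This makes the open question above concrete: the error fires after the bytes are on disk, so the caller (or the plugin) still has to decide whether to keep, truncate, or unlink the object.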
