forked from netty/netty
update #1
Open
dawn2zhang wants to merge 1,892 commits into dawn2zhang:4.1 from netty:4.1
Conversation
normanmaurer force-pushed the 4.1 branch 2 times, most recently from f9023b9 to 54a2c6d on February 8, 2022 09:13
normanmaurer force-pushed the 4.1 branch 3 times, most recently from f3b47e5 to 58f75f6 on October 10, 2023 13:30
Quarkus disables JNDI by default, which breaks the (default) DNS server lookup in `DirContextUtils`, so DNS lookups default to the public Google DNS servers. This means that looking up k8s service names, or any internal names, isn't possible without either enabling JNDI, which is not a great option, or specifying DNS servers manually, which is awkward given that `/etc/resolv.conf` is available. This change prefers reading the nameservers from `/etc/resolv.conf` over looking them up via JNDI on Linux and macOS. This is effectively a no-op change when JNDI is enabled, because the nameservers available via JNDI are read from `/etc/resolv.conf` anyway. Behavior for Windows (still requires JNDI) and Android (neither JNDI nor `/etc/resolv.conf`) is unchanged. Fixes #13883
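As a rough illustration of the preference described above, here is a minimal sketch that collects `nameserver` entries from `/etc/resolv.conf`; the helper name and the port-53 default are assumptions for illustration, not Netty's actual resolver code:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

final class ResolvConfSketch {
    // Hypothetical helper: return the nameservers listed in /etc/resolv.conf,
    // so callers can fall back to JNDI only when this list is empty.
    static List<InetSocketAddress> parseNameservers() throws IOException {
        List<InetSocketAddress> servers = new ArrayList<>();
        for (String line : Files.readAllLines(Paths.get("/etc/resolv.conf"))) {
            String trimmed = line.trim();
            if (trimmed.startsWith("nameserver ")) {
                String address = trimmed.substring("nameserver ".length()).trim();
                servers.add(new InetSocketAddress(address, 53)); // standard DNS port
            }
        }
        return servers;
    }
}
```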
#13898) …ten.
Motivation: ByteToMessageDecoder.channelReadComplete(...) should only call read() once if channelRead(...) did not produce any message.
Modifications:
- Correctly reset the variable
- Add a unit test
Result: Flow control is not broken.
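A simplified sketch of the read-once pattern this fix restores; the flag name is illustrative and this is not Netty's actual `ByteToMessageDecoder` code:

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

// Sketch: request exactly one more read when a read cycle produced no message
// and auto-read is off, and reset the flag so later cycles behave the same.
class ReadOnceSketch extends ChannelInboundHandlerAdapter {
    private boolean firedChannelRead;

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        firedChannelRead = true; // a decoded message reached the pipeline
        ctx.fireChannelRead(msg);
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) {
        boolean fired = firedChannelRead;
        firedChannelRead = false; // the reset the commit fixes
        if (!fired && !ctx.channel().config().isAutoRead()) {
            ctx.read(); // keep data flowing, but only once per read cycle
        }
        ctx.fireChannelReadComplete();
    }
}
```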
Motivation: According to the specification for parsing application/x-www-form-urlencoded content at https://url.spec.whatwg.org/#application/x-www-form-urlencoded, a key without an = should be parsed and given an empty value. The current implementation of HttpPostStandardRequestDecoder fails to parse these no-value keys when they are the last value in the sequence.
Modifications: HttpPostStandardRequestDecoder is modified to include a key with no value at the end of the undecoded chunk in the existing "special empty FIELD" code path, which currently only handles such fields when they are followed by a '&' character. Additional tests thoroughly exercise variations of content bodies with such empty fields.
Result: Keys with no value that appear at the end of an x-www-form-urlencoded sequence are parsed according to the spec.
…13908)
Motivation: According to the specification for parsing application/x-www-form-urlencoded content at https://url.spec.whatwg.org/#application/x-www-form-urlencoded, a key without an = should be parsed and given an empty value. The current implementation of HttpPostStandardRequestDecoder fails to parse these no-value keys when they are the last value in the sequence.
Modifications: HttpPostStandardRequestDecoder is modified to include a key with no value at the end of the undecoded chunk in the existing "special empty FIELD" code path, which currently only handles such fields when they are followed by a '&' character. Additional tests thoroughly exercise variations of content bodies with such empty fields. A test has also been added to verify that the change works with an empty last chunk, as suggested in the original PR #13904.
Result: Keys with no value that appear at the end of an x-www-form-urlencoded sequence are parsed according to the spec.
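To make the expected behaviour concrete, here is a small standalone sketch (URL-escaping omitted) of how a trailing valueless key decodes under the WHATWG rules; this is illustrative, not the decoder's implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

final class FormDecodeSketch {
    // Per the WHATWG spec, "one=1&two" yields two -> "" (empty value),
    // including when the valueless key is the last token in the sequence.
    static Map<String, String> decode(String body) {
        Map<String, String> fields = new LinkedHashMap<>();
        for (String pair : body.split("&")) {
            if (pair.isEmpty()) {
                continue;
            }
            int eq = pair.indexOf('=');
            String name = eq < 0 ? pair : pair.substring(0, eq);
            String value = eq < 0 ? "" : pair.substring(eq + 1);
            fields.put(name, value);
        }
        return fields;
    }

    public static void main(String[] args) {
        System.out.println(decode("one=1&two")); // {one=1, two=}
    }
}
```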
Motivation: The implementation of `PoolArena#numPinnedBytes()` uses a lock to compute the result of the method. The only purpose of this lock is to provide an accurate computation over the list of chunk metrics, yet `chunkListMetrics` is final and an unmodifiable list. Note that the returned value is already an estimate, since the computation also sums `activeBytesHuge` outside of the lock. In addition, this method is used by the allocator, where the result is already an estimate because it sums over the list of arenas, which can change during iteration.
Modifications: Remove the lock around the `chunkListMetrics` iteration in `PoolArena#numPinnedBytes()`.
Result: Less locking.
…13897)
A lenient approach regarding end-of-line handling can result in a parser differential. Example of such attacks: https://sec-consult.com/blog/detail/smtp-smuggling-spoofing-e-mails-worldwide/ As recommended in a private advisory, we should document this and a possible mitigation in LineBasedFrameDecoder.
Motivation: Avoid new implementers of line-based protocols writing vulnerable applications.
Modification: Simple JavaDoc modification.
---------
Co-authored-by: Norman Maurer <[email protected]>
Motivation: Many browsers support cookie values separated by ';' instead of '; ', even though this violates the HTTP spec. HttpConversionUtil.toHttp2Headers would, however, silently drop a character of a second cookie in this case ('one=foo;two=bar' -> 'one=foo' and 'wo=bar').
Modification: Add a verification pass that checks that the cookie value can be split. If there is a semicolon without a space, or a character that `AsciiString` cannot handle, keep the cookie value as-is without splitting.
Result: An invalid cookie is transmitted as-is instead.
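A hedged sketch of such a verification pass, checking that every ';' is followed by a space before splitting; this is illustrative only, not `HttpConversionUtil`'s actual code:

```java
final class CookieSplitSketch {
    // Returns true only when the value is safe to split on "; "; a bare ';'
    // (the lenient browser form) means the value should be passed through as-is.
    static boolean canSplit(CharSequence cookieHeader) {
        for (int i = 0; i < cookieHeader.length(); i++) {
            if (cookieHeader.charAt(i) == ';'
                    && (i + 1 >= cookieHeader.length() || cookieHeader.charAt(i + 1) != ' ')) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(canSplit("one=foo; two=bar")); // true: split as usual
        System.out.println(canSplit("one=foo;two=bar"));  // false: keep as-is
    }
}
```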
#13876) …ine.getHandshakeStatus() returns the correct value all the time
Motivation: We should call SSL_do_handshake(...) after we run the delegated task so SSLEngine.getHandshakeStatus() is able to see data that we produce as part of finishing the handshake and return NEED_WRAP.
Modifications:
- Add SSL_do_handshake(...) call
Result: Correctly reflect the HandshakeStatus after running delegated tasks.
Motivation: Zstandard (https://facebook.github.io/zstd/) is a high-performance, high-compression-ratio compression algorithm. This PR adds Netty support for the Zstandard algorithm. The implementation relies on zstd-jni (https://github.com/luben/zstd-jni), an open-source third-party library; Apache Kafka also uses this library for message compression. This is a copy of #10422.
Modification: Add ZstdDecoder and test case.
Result: Netty supports ZSTD with ZstdDecoder.
---------
Signed-off-by: xingrufei <[email protected]>
Co-authored-by: xingrufei <[email protected]>
Co-authored-by: Norman Maurer <[email protected]>
Co-authored-by: Chris Vest <[email protected]>
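For context, a minimal round trip with the zstd-jni library the decoder builds on; this is plain zstd-jni usage, independent of Netty's codec classes:

```java
import com.github.luben.zstd.Zstd;
import java.nio.charset.StandardCharsets;

public final class ZstdRoundTrip {
    public static void main(String[] args) {
        byte[] original = "hello zstd".getBytes(StandardCharsets.UTF_8);
        // Compress, then decompress back to the original size.
        byte[] compressed = Zstd.compress(original);
        byte[] restored = Zstd.decompress(compressed, original.length);
        System.out.println(new String(restored, StandardCharsets.UTF_8)); // hello zstd
    }
}
```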
Motivation: Now that `ZstdDecoder` has been added, we should consider supporting zstd content decompression.
Modification: Add support for zstd HTTP and HTTP/2 content decompression.
Result: Netty supports zstd HTTP and HTTP/2 content decompression.
Signed-off-by: xingrufei <[email protected]>
This reverts commit 57c579f.
Motivation: Java 22 has been released. Modification: Add a JDK 22 profile to the `pom.xml` file, and add JDK 22 to the PR build matrix. Result: We now build PRs on JDK 22, in addition to the existing Java versions.
* The InterfaceHttpPostRequestDecoder form implementations do not provide hard limits for the number of fields a form can have or the number of accumulated bytes. The former can be exploited by sending a large number of fields that fill the bodyListHttpData list; the latter can be exploited by sending a very large field that fills the undecodedChunk buffer, since the decoder implementation buffers the field before handling it. This provides hard limits for both: maxFields defines the maximum number of fields the form can have, and maxBufferedBytes defines the maximum number of bytes a field can accumulate. When a limit is reached, a decoder exception is thrown, letting the decoder controller take care of it (see the sketch after this list).
* Set default limits for maxFields/maxBufferedBytes (breaking change)
* Update codec-http/src/main/java/io/netty/handler/codec/http/multipart/HttpPostRequestDecoder.java
* Update codec-http/src/main/java/io/netty/handler/codec/http/multipart/HttpPostRequestDecoder.java
---------
Co-authored-by: Julien Viet <[email protected]>
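A minimal sketch of the two limits as a standalone guard; the parameter names follow the commit, but the class itself is hypothetical and not the decoder's internals:

```java
// Illustrative guard for the two hard limits described above:
// maxFields caps how many fields accumulate, maxBufferedBytes caps buffered bytes.
final class FormLimits {
    private final int maxFields;
    private final int maxBufferedBytes;
    private int fields;
    private int bufferedBytes;

    FormLimits(int maxFields, int maxBufferedBytes) {
        this.maxFields = maxFields;
        this.maxBufferedBytes = maxBufferedBytes;
    }

    void onField() {
        if (++fields > maxFields) {
            throw new IllegalStateException("too many form fields: " + fields);
        }
    }

    void onBufferedBytes(int n) {
        bufferedBytes += n;
        if (bufferedBytes > maxBufferedBytes) {
            throw new IllegalStateException("form field too large: " + bufferedBytes + " bytes");
        }
    }
}
```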
Motivation: Fix the javadoc rendering.
Modification: Add `{@code }` to the code block.
Result: The javadoc renders correctly (screenshot: https://github.com/user-attachments/assets/46d6d1f7-fbf1-4ea9-a48c-515ef96cf8b8).
Motivation: 914ba08 made the methods public but did not change the naming of the methods. Usually we don't use the get* prefix in netty. Modifications: Remove get prefix from methods Result: Cleanup
…4472)
Motivation: Corretto does not use "links" and so we need to ensure we compute it without these.
Modifications: Manually compose the classifier to use.
Result: Always use the correct classifier no matter which Linux system we compile on.
Motivation: PcapWriteHandler.Builder has a boolean writePcapGlobalHeader so that the handler can support functionality like appending to an existing PCAP file. This boolean is currently not checked when writing the global header, preventing appending from working.
Modifications: PcapWriter now checks that PcapWriteHandler.writePcapGlobalHeader() returned true. Added unit test cases for when writePcapGlobalHeader is set to false.
Result: PcapWriteHandler only writes the global PCAP header on initialization if writePcapGlobalHeader is true and sharedOutputStream is false.
…ThreadLocalThread.willCleanupFastThreadLocals()` returns false (#14486)
Motivation: The method `AdaptiveByteBufAllocator.usedHeapMemory()` does not return a correct value when `FastThreadLocalThread.willCleanupFastThreadLocals(Thread.currentThread()) == false`, e.g. when a customized `FastThreadLocalThread` is used.
Modification: If `FastThreadLocalThread.willCleanupFastThreadLocals(Thread.currentThread()) == false`, do NOT use the thread-local magazine.
Result: Fixes #14483.
---------
Co-authored-by: laosijikaichele <laosijikaichele>
…oQueue(...) (#14495)
Motivation: There is a possible race condition between the methods `AdaptivePoolingAllocator.offerToQueue(...)` and `AdaptivePoolingAllocator.free()`: assume 'thread-1' enters `offerToQueue(...)` and first checks that the `freed` flag is `false`, then 'thread-1' sleeps for 10 seconds. 'thread-2' then calls `AdaptivePoolingAllocator.free()` and completes the call. When 'thread-1' wakes up and continues executing `offerToQueue(...)`, it successfully offers the chunk buffer to the `centralQueue`, which will then not be freed by `finalize()`, causing a leak.
Modification: Check the `freed` flag again after offering the chunk buffer to the `centralQueue`, and help free the `centralQueue` if `freed == true`.
Result: Avoid the possible race condition which may cause a memory leak.
Co-authored-by: laosijikaichele <laosijikaichele>
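The shape of the fix, as a generic double-check sketch; the field names mirror the commit, but this class is an illustration, not the allocator's code:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Consumer;

// Sketch of the double-check: re-test 'freed' after the offer, because free()
// may have drained the queue between the first check and the offer.
final class OfferRaceSketch<C> {
    private final Queue<C> centralQueue = new ConcurrentLinkedQueue<>();
    private final AtomicBoolean freed = new AtomicBoolean();
    private final Consumer<C> release;

    OfferRaceSketch(Consumer<C> release) {
        this.release = release;
    }

    void offerToQueue(C chunk) {
        if (freed.get()) {
            release.accept(chunk);
            return;
        }
        centralQueue.offer(chunk);
        if (freed.get()) { // free() may have run in between: drain again
            drain();
        }
    }

    void free() {
        freed.set(true);
        drain();
    }

    private void drain() {
        C chunk;
        while ((chunk = centralQueue.poll()) != null) {
            release.accept(chunk);
        }
    }
}
```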
…nd MAGAZINE_BUFFER_QUEUE_CAPACITY (#14493)
Motivation: The customizable configurations `AdaptivePoolingAllocator.CENTRAL_QUEUE_CAPACITY` and `AdaptivePoolingAllocator.MAGAZINE_BUFFER_QUEUE_CAPACITY` MUST NOT be less than 2.
Modification: Add a range check in the static block, to fail fast at class-load time with clearer error messages.
Result: Fixes #14489.
Co-authored-by: laosijikaichele <laosijikaichele>
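The fail-fast pattern in a sketch; the system property name here is made up for illustration:

```java
final class CapacitySketch {
    static final int CENTRAL_QUEUE_CAPACITY;

    static {
        // Validate at class-load time so a bad setting fails immediately
        // with a clear message rather than misbehaving later.
        int capacity = Integer.getInteger("example.centralQueueCapacity", 1024);
        if (capacity < 2) {
            throw new ExceptionInInitializerError(
                    "centralQueueCapacity: " + capacity + " (expected: >= 2)");
        }
        CENTRAL_QUEUE_CAPACITY = capacity;
    }
}
```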
Motivation: AsciiString objects can be windows onto byte array slices, and need not start at the beginning of the array. The header validation code assumed that AsciiString objects always started at index zero in the underlying byte array.
Modification: Fix the end-index computation for the token validation loop, and add tests.
Result: Header value token validation will no longer skip the end of values in carefully crafted AsciiString objects. Fixes #14482
---------
Co-authored-by: Norman Maurer <[email protected]>
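The bug class in miniature: a sketch of an offset-aware scan over an `AsciiString` window (the control-character predicate is illustrative, not the actual validation rule):

```java
import io.netty.util.AsciiString;

final class OffsetScanSketch {
    // An AsciiString can be a window onto a slice, so loop bounds must
    // incorporate arrayOffset(); scanning 0..length() would hit the wrong bytes.
    static boolean containsControlChar(AsciiString value) {
        byte[] array = value.array();
        int start = value.arrayOffset();      // not necessarily 0
        int end = start + value.length();     // the end-index computation the fix corrects
        for (int i = start; i < end; i++) {
            if ((array[i] & 0xFF) < 0x20) {
                return true;
            }
        }
        return false;
    }
}
```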
)
Motivation: The sentinel object `Magazine.MAGAZINE_FREED` should not be replaced.
Modification: Check `NEXT_IN_LINE` to make sure `Magazine.MAGAZINE_FREED` is not replaced.
Result: Fixes #14498.
Co-authored-by: laosijikaichele <laosijikaichele>
#14494)
Motivation: It's better to narrow the lock scope and avoid nested locks when possible. The scope of `magazineExpandLock` in `AdaptivePoolingAllocator.tryExpandMagazines(...)` can be narrowed. The `magazineExpandLock` guards the current (newest) version of `magazines`, but there is no need to guard the old version of `magazines`. We can get out of the `magazineExpandLock` immediately once the newest `magazines` is assigned, which means we can move the 'free-old-magazines' operation out of the `magazineExpandLock` scope. Another reason for doing this is that the 'free-old-magazines' operation requires another lock (`Magazine.allocationLock`), which makes it a nested lock inside `magazineExpandLock`; it's better to avoid nested locks.
Modification: Moved the 'free-old-magazines' operation out of the `magazineExpandLock` scope.
Result: Narrowed the `magazineExpandLock` scope and avoided nesting the `Magazine.allocationLock`.
Co-authored-by: laosijikaichele <laosijikaichele>
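The shape of the change as a generic sketch; field and method names loosely mirror the commit, and this is not the allocator's code:

```java
import java.util.concurrent.locks.ReentrantLock;

final class ExpandSketch {
    private final ReentrantLock magazineExpandLock = new ReentrantLock();
    private volatile Object[] magazines = new Object[4];

    void tryExpandMagazines() {
        Object[] old;
        magazineExpandLock.lock();
        try {
            old = magazines;
            Object[] grown = new Object[old.length * 2];
            System.arraycopy(old, 0, grown, 0, old.length);
            magazines = grown; // publish the newest version under the lock
        } finally {
            magazineExpandLock.unlock();
        }
        // 'free-old-magazines' now runs outside magazineExpandLock, so any
        // per-magazine lock it takes is no longer nested inside it.
        freeOld(old);
    }

    private void freeOld(Object[] old) {
        // release each old magazine's resources here
    }
}
```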
…#14507 (#14509)
Motivations:
* It's not the responsibility of MqttPublishMessage, which is a ByteBufHolder, to ensure the accessibility of the ByteBuf. The ByteBuf checks that itself.
* The current design is broken and there's no way to check whether the content is accessible, as refCnt crashes with IllegalReferenceCountException instead of returning 0.
Modification: Remove ByteBufUtil.ensureAccessible from MqttPublishMessage#content. ByteBuf's methods will perform the check themselves.
Result: Fixes #14507. It's now possible to check whether the payload of a MqttPublishMessage is accessible.
…ed to central queue (#14508)
Motivation: For `AdaptivePoolingAllocator`, `magazine.usedMemory` should be decreased when the chunk is deallocated or offered to the `centralQueue`.
Modification: Decrease `magazine.usedMemory` when the chunk is deallocated or offered to the `centralQueue`.
Result: `magazine.usedMemory` becomes more accurate.
---------
Co-authored-by: laosijikaichele <laosijikaichele>
Co-authored-by: Norman Maurer <[email protected]>
…ompressor` (#14466)
Motivation: When HttpContentCompressor is created using the default empty constructor or `HttpContentCompressor(int...)`, some compression algorithms, like Brotli or Zstd, will never work regardless of their presence on the classpath, because `factories` will never be initialized.
Modification: Replaced the default empty constructor parameters with `StandardCompressionOptions` and the `HttpContentCompressor(int, int, int)` constructor.
Result: Enabled the usage of Brotli and Zstd even when using the default empty constructor.
---------
Co-authored-by: Norman Maurer <[email protected]>
Motivation: We should only use Zstd and Brotli by default if we can load the native lib.
Modifications: Correctly detect whether we can use the native libs or not.
Result: No issues when the native libs can't be loaded.
#14524) …… (#14516) …l cases
Motivation: The allocator uses different strategies when it comes to reusing previously allocated Chunks. In some cases we did not always correctly increment/decrement the used memory counters.
Modifications:
- Correctly update counters in all cases
- Null out the attached Magazine before adding a Chunk to the central queue
Result: Fixes #14513
…kflows (#14501)
Bumps dawidd6/action-download-artifact (https://github.com/dawidd6/action-download-artifact) from 3.0.0 to 6.
Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Motivation: A BoundedInputStream was recently introduced to protect against attacks where a large file is placed on the server. Unfortunately, it has an issue preventing it from reading the last bytes of a file when the buffer size would take the total bytes over the set bound, even though reading just the remaining bytes in the file fits under the bound.
Modification: Adjusted the behaviour of BoundedInputStream to, where appropriate, read one more byte than the set bound to see if the stream terminates at that point, in which case there is no need to throw the exception.
Result: Fixes #14479.
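A simplified take on the adjusted bound check (not Netty's `BoundedInputStream`): the stream only fails once more bytes than the bound have actually been read, so a file ending exactly at the bound still succeeds:

```java
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch: count bytes as they are read and fail only after the bound is
// actually exceeded, rather than when a read *request* would cross it.
// (Single-byte read() override omitted for brevity.)
final class BoundedInputStreamSketch extends FilterInputStream {
    private final long maxBytes;
    private long readSoFar;

    BoundedInputStreamSketch(InputStream in, long maxBytes) {
        super(in);
        this.maxBytes = maxBytes;
    }

    @Override
    public int read(byte[] b, int off, int len) throws IOException {
        int n = in.read(b, off, len);
        if (n > 0) {
            readSoFar += n;
            if (readSoFar > maxBytes) {
                throw new IOException("stream exceeded bound of " + maxBytes + " bytes");
            }
        }
        return n; // -1 at EOF passes through untouched
    }
}
```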
Motivation: BlockHound version 1.0.10.RELEASE comes with a newer byte-buddy dependency.
Modification:
- Bump the BlockHound version
- Enable BlockHound tests on Java 18 and above, as the byte-buddy dependency is updated
Result: BlockHound tests are enabled on Java 18 and above.
Motivation: We should always add some details on why an exception was thrown, to make things easier to debug.
Modifications: Add details where they were missing.
Result: Fixes #14559
Motivation: To make our codec more flexible we should add support for unknown frames in SPDY. Modifications: - Add SpdyFrameDecoderExtendedDelegate that supports unknown frames - Add new constructor and protected method to SpdyFrameCodec that makes it easy to support it Result: More flexible SPDY implementation. --------- Co-authored-by: 虎鸣 <[email protected]>
Motivation: Support customization of the **unknown** frame.
Modification: Change the return type of `newSpdyUnknownFrame` from `SpdyUnknownFrame` to `SpdyFrame`.
Result: Users can override it to fire a `CustomFrame` instead of the `SpdyUnknownFrame`.
Motivation: Make the code style consistent with the `try`/`finally` release pattern when encoding SpdyDataFrame.
Modifications: Use the same coding style.
Result: Same code style.
Motivation: `AdaptivePoolingAllocator` internally uses `CopyOnWriteArraySet`. This causes the errors below when `BlockHound` is enabled.
```
reactor.blockhound.BlockingOperationError: Blocking call! sun.misc.Unsafe#park
    at sun.misc.Unsafe.park(Unsafe.java)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
    at java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:209)
    at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
    at java.util.concurrent.CopyOnWriteArrayList.addIfAbsent(CopyOnWriteArrayList.java:624)
    at java.util.concurrent.CopyOnWriteArrayList.addIfAbsent(CopyOnWriteArrayList.java:615)
    at java.util.concurrent.CopyOnWriteArraySet.add(CopyOnWriteArraySet.java:261)
    at io.netty.buffer.AdaptivePoolingAllocator$1.initialValue(AdaptivePoolingAllocator.java:164)
    at io.netty.util.concurrent.FastThreadLocal.initialize(FastThreadLocal.java:177)
    at io.netty.util.concurrent.FastThreadLocal.get(FastThreadLocal.java:142)
    at io.netty.buffer.AdaptivePoolingAllocator.allocate(AdaptivePoolingAllocator.java:223)
    at io.netty.buffer.AdaptivePoolingAllocator.allocate(AdaptivePoolingAllocator.java:215)
    at io.netty.buffer.AdaptiveByteBufAllocator.newDirectBuffer(AdaptiveByteBufAllocator.java:78)
    at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:188)
    at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:179)
    at io.netty.buffer.AbstractByteBufAllocator.buffer(AbstractByteBufAllocator.java:116)
```
```
reactor.blockhound.BlockingOperationError: Blocking call! sun.misc.Unsafe#park
    at sun.misc.Unsafe.park(Unsafe.java)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
    at java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:209)
    at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
    at java.util.concurrent.CopyOnWriteArrayList.remove(CopyOnWriteArrayList.java:537)
    at java.util.concurrent.CopyOnWriteArrayList.remove(CopyOnWriteArrayList.java:528)
    at java.util.concurrent.CopyOnWriteArraySet.remove(CopyOnWriteArraySet.java:245)
    at io.netty.buffer.AdaptivePoolingAllocator$1.onRemoval(AdaptivePoolingAllocator.java:163)
    at io.netty.util.concurrent.FastThreadLocal.remove(FastThreadLocal.java:259)
    at io.netty.util.concurrent.FastThreadLocal.removeAll(FastThreadLocal.java:67)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:1050)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:750)
```
Modification: Allow blocking calls in some methods in `AdaptivePoolingAllocator`.
Result: No `BlockHound` exceptions. Fixes #14560
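An illustration of the allow-list approach using BlockHound's builder API; the call sites listed here follow the stack traces above, so verify the class and method names against the Netty version in use:

```java
import reactor.blockhound.BlockHound;

public final class BlockHoundSetup {
    public static void main(String[] args) {
        // Permit the allocator's FastThreadLocal hooks as known-blocking call
        // sites; everything else stays under BlockHound's scrutiny.
        BlockHound.builder()
                .allowBlockingCallsInside("io.netty.buffer.AdaptivePoolingAllocator$1", "initialValue")
                .allowBlockingCallsInside("io.netty.buffer.AdaptivePoolingAllocator$1", "onRemoval")
                .install();
    }
}
```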
Motivation: We can only add a Chunk to the central queue if it is completely unused, as otherwise it is still exclusively used by a Magazine.
Modifications: Ensure we only use the queue once the Chunk is not used anymore.
Result: Correctly reuse Chunks.
Motivation: We can use readRetainedSlice to reduce copying of bytes in `SpdyFrameDecoder`.
Modification: Use `readRetainedSlice` instead of allocating a new ByteBuf and copying data into it.
Result: Less copying.
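The copy-avoidance pattern in a sketch; the helper is illustrative, not the decoder's code:

```java
import io.netty.buffer.ByteBuf;

final class SliceSketch {
    // Before: allocate a fresh buffer and copy 'length' bytes into it.
    // After: share the underlying memory via a retained slice, which bumps
    // the reference count instead of copying.
    static ByteBuf readPayload(ByteBuf in, int length) {
        return in.readRetainedSlice(length);
    }
}
```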
#14586) … are Magazine local cached Chunks (#14581)
Motivation: Due to a bug, we did not reuse centrally queued Chunks if a Magazine already had a locally cached Chunk that was too small to fulfill the allocation. This could lead to completely filling the queue.
Modifications: Correctly try to reuse centrally queued Chunks in all cases.
Result: Fixes #14553
Motivation: PcapWriteHandler should be able to support large files, but is currently limited to 2GB due to the use of int without wrapping.
Modifications: segmentNumber and ackNumber are now long values to support uint32 values. segmentNumber and ackNumber wrap around when the uint32 max is reached. Ported a unit test from apple/swift-nio-extras#85.
Result: Fixes #11543
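To make the wrap-around concrete, a tiny sketch of uint32 arithmetic on a long (illustrative, not the handler's code):

```java
final class Uint32Sketch {
    // Keep only the low 32 bits so the value wraps exactly like an unsigned int.
    static long addUint32(long value, long delta) {
        return (value + delta) & 0xFFFFFFFFL;
    }

    public static void main(String[] args) {
        System.out.println(addUint32(0xFFFFFFFFL, 1)); // 0 (wrapped around)
    }
}
```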
Motivation:
Explain the context here, and why you're making this change.
What is the problem you're trying to solve?
Modification:
Describe the modifications you've done.
Result:
Fixes #.
If there is no issue then describe the changes introduced by this PR.