
[BUG] Azure Blob sdk v12 preview 3 doesn't include ETag in BlobDownloadResponse #5623

Closed
3 tasks
andreaturli opened this issue Oct 1, 2019 · 25 comments
Assignees
Labels
blocking-release Blocks release bug This issue requires a change to an existing behavior in the product in order to be resolved. Client This issue points to a problem in the data-plane of the library. customer-reported Issues that are reported by GitHub users external to the Azure organization. Storage Storage Service (Queues, Blobs, Files)

Comments

@andreaturli

Describe the bug

BlobAsyncClient doesn't parse the ETag header properly: BlobDownloadHeaders#ETag uses the wrong value in its annotation, @JsonProperty(value = "ETag") instead of @JsonProperty(value = "etag")
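For illustration, the mismatch can be reproduced outside the SDK with a plain map lookup: the real service spells the header name `ETag`, while Azurite (per the logs later in this thread) spells it `etag`, so any exact-match lookup keyed on one spelling misses the other. This is a minimal stdlib sketch, not the SDK's actual Jackson-based deserialization:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

public class EtagLookupDemo {
    // Case-sensitive lookup, analogous to matching @JsonProperty("ETag") exactly.
    static String caseSensitiveLookup(Map<String, String> headers, String name) {
        return headers.get(name);
    }

    // Case-insensitive lookup, the behavior HTTP header matching actually requires.
    static String caseInsensitiveLookup(Map<String, String> headers, String name) {
        Map<String, String> ci = new TreeMap<>(String.CASE_INSENSITIVE_ORDER);
        ci.putAll(headers);
        return ci.get(name);
    }

    public static void main(String[] args) {
        // Azurite spells the header in lowercase, as seen in the debug logs below.
        Map<String, String> azuriteHeaders = new HashMap<>();
        azuriteHeaders.put("etag", "\"d-N6JVml9IEp3EKCuoZ5At4yiVgEg\"");

        // Exact match on "ETag" returns null -> info.eTag fails the null check.
        System.out.println(caseSensitiveLookup(azuriteHeaders, "ETag"));
        // Case-insensitive match finds the value either way.
        System.out.println(caseInsensitiveLookup(azuriteHeaders, "ETag"));
    }
}
```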

Exception or Stack Trace
see Azure/Azurite#217 (comment)

To Reproduce
Azure/Azurite#217 (comment)

Code Snippet
Add the code snippet that causes the issue.

Expected behavior
A clear and concise description of what you expected to happen.

Screenshots
If applicable, add screenshots to help explain your problem.

Setup (please complete the following information):

  • OS: [e.g. iOS]
  • IDE : IntelliJ
  • Version of the Library used
    v12.0.0-preview3

Additional context
Add any other context about the problem here.

Information Checklist
Kindly make sure that you have added all the following information above and checked off the required fields; otherwise we will treat the issue as an incomplete report

  • Bug Description Added
  • Repro Steps Added
  • Setup information Added
@joshfree joshfree added bug This issue requires a change to an existing behavior in the product in order to be resolved. Client This issue points to a problem in the data-plane of the library. Storage Storage Service (Queues, Blobs, Files) labels Oct 2, 2019
@joshfree joshfree added the customer-reported Issues that are reported by GitHub users external to the Azure organization. label Oct 2, 2019
@joshfree
Member

joshfree commented Oct 2, 2019

@andreaturli thank you for reporting this issue in the September preview of Azure Storage. @sima-zhu could you please investigate?

/cc @amishra-dev

@amishra-dev

Thanks @joshfree
@rickle-msft fyi

@andreaturli
Author

thanks @joshfree for taking the time to have a look. Let me know if you need more info to investigate it

@sima-zhu
Contributor

sima-zhu commented Oct 3, 2019

Taking a look

@samvaity samvaity removed their assignment Oct 3, 2019
@sima-zhu
Contributor

sima-zhu commented Oct 3, 2019

@andreaturli Thanks for reporting this issue!

Can I get more details or a code snippet on the issue?

Based on the REST API doc from the service, it is expected behavior to receive ETag as-is: https://docs.microsoft.com/en-us/rest/api/storageservices/get-blob#sample-response

ETag: "0x8CB171DBEAD6A6B"

We have several download tests and I did not run into an issue with ETag.

It would be great if you could provide a simple code snippet to test.

@sima-zhu
Contributor

sima-zhu commented Oct 7, 2019

Hi @andreaturli Do you still have the issue? If this is still a question, please leave your comments. Otherwise, I will close the issue tomorrow. Thanks!

@andreaturli
Author

hi @sima-zhu, I'll test it again shortly and report back here

@sima-zhu
Contributor

sima-zhu commented Oct 8, 2019

Thanks

@andreaturli
Author

andreaturli commented Oct 8, 2019

I still see the issue:

Request try:'1', request duration:'411' ms, operation duration:'411' ms
GET: http://127.0.0.1:10000/devstoreaccount1/123456789/test
Authorization: REDACTED
Content-Length: 0
x-ms-version: 2018-11-09
x-ms-date: Tue, 08 Oct 2019 16:22:59 GMT
host: 127.0.0.1
connection: keep-alive
x-ms-client-request-id: 11c9e080-eb54-4e1d-908b-fcdb7dabd6fd
User-Agent:  Azure-Storage/11.0.0 (JavaJRE 1.8.0_191; MacOSX 10.14.6)

Exception in thread "main" java.lang.IllegalArgumentException: The argument must not be null or an empty string. Argument name: info.eTag.
	at com.microsoft.azure.storage.blob.Utility.assertNotNull(Utility.java:77)
	at com.microsoft.azure.storage.blob.DownloadResponse.<init>(DownloadResponse.java:56)
	at com.microsoft.azure.storage.blob.BlobURL.lambda$0(BlobURL.java:369)
	at io.reactivex.internal.operators.single.SingleMap$MapSingleObserver.onSuccess(SingleMap.java:57)
	at io.reactivex.internal.operators.single.SingleMap$MapSingleObserver.onSuccess(SingleMap.java:64)
	at io.reactivex.internal.operators.single.SingleResumeNext$ResumeMainSingleObserver.onSuccess(SingleResumeNext.java:65)
	at io.reactivex.internal.operators.single.SingleFlatMap$SingleFlatMapCallback$FlatMapSingleObserver.onSuccess(SingleFlatMap.java:111)
	at io.reactivex.internal.operators.maybe.MaybeToSingle$ToSingleMaybeSubscriber.onSuccess(MaybeToSingle.java:83)
	at io.reactivex.internal.operators.maybe.MaybeMap$MapMaybeObserver.onSuccess(MaybeMap.java:89)
	at io.reactivex.internal.operators.maybe.MaybeJust.subscribeActual(MaybeJust.java:36)
	at io.reactivex.Maybe.subscribe(Maybe.java:4156)
	at io.reactivex.internal.operators.maybe.MaybeMap.subscribeActual(MaybeMap.java:40)
	at io.reactivex.Maybe.subscribe(Maybe.java:4156)
	at io.reactivex.internal.operators.maybe.MaybeToSingle.subscribeActual(MaybeToSingle.java:46)
	at io.reactivex.Single.subscribe(Single.java:3394)
	at io.reactivex.internal.operators.single.SingleFlatMap$SingleFlatMapCallback.onSuccess(SingleFlatMap.java:84)
	at io.reactivex.internal.operators.single.SingleFlatMap$SingleFlatMapCallback$FlatMapSingleObserver.onSuccess(SingleFlatMap.java:111)
	at io.reactivex.internal.operators.single.SingleJust.subscribeActual(SingleJust.java:30)
	at io.reactivex.Single.subscribe(Single.java:3394)
	at io.reactivex.internal.operators.single.SingleFlatMap$SingleFlatMapCallback.onSuccess(SingleFlatMap.java:84)
	at io.reactivex.internal.operators.single.SingleResumeNext$ResumeMainSingleObserver.onSuccess(SingleResumeNext.java:65)
	at io.reactivex.internal.operators.single.SingleFlatMap$SingleFlatMapCallback$FlatMapSingleObserver.onSuccess(SingleFlatMap.java:111)
	at io.reactivex.internal.operators.single.SingleJust.subscribeActual(SingleJust.java:30)
	at io.reactivex.Single.subscribe(Single.java:3394)
	at io.reactivex.internal.operators.single.SingleFlatMap$SingleFlatMapCallback.onSuccess(SingleFlatMap.java:84)
	at io.reactivex.internal.observers.ResumeSingleObserver.onSuccess(ResumeSingleObserver.java:46)
	at io.reactivex.internal.operators.single.SingleTimeout$TimeoutMainObserver.onSuccess(SingleTimeout.java:133)
	at io.reactivex.internal.operators.single.SingleDoOnSuccess$DoOnSuccess.onSuccess(SingleDoOnSuccess.java:59)
	at io.reactivex.internal.operators.single.SingleMap$MapSingleObserver.onSuccess(SingleMap.java:64)
	at io.reactivex.internal.operators.single.SingleFlatMap$SingleFlatMapCallback$FlatMapSingleObserver.onSuccess(SingleFlatMap.java:111)
	at io.reactivex.internal.operators.single.SingleJust.subscribeActual(SingleJust.java:30)
	at io.reactivex.Single.subscribe(Single.java:3394)
	at io.reactivex.internal.operators.single.SingleFlatMap$SingleFlatMapCallback.onSuccess(SingleFlatMap.java:84)
	at io.reactivex.internal.operators.single.SingleDoOnSuccess$DoOnSuccess.onSuccess(SingleDoOnSuccess.java:59)
	at io.reactivex.internal.operators.single.SingleDoOnError$DoOnError.onSuccess(SingleDoOnError.java:52)
	at io.reactivex.internal.operators.single.SingleCreate$Emitter.onSuccess(SingleCreate.java:68)
	at com.microsoft.rest.v2.http.NettyClient$HttpClientInboundHandler.channelRead(NettyClient.java:918)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:438)
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:297)
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:413)
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265)
	at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:646)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:581)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:460)
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.lang.Thread.run(Thread.java:748)

and those are the logs from Azurite container

$ docker run -p 10000:10000 -p 10001:10001 mcr.microsoft.com/azure-storage/azurite:3.2.0-preview
Azurite Blob service is starting on 0.0.0.0:10000
Azurite Blob service successfully listens on 0.0.0.0:10000
Azurite Queue service is starting on 0.0.0.0:10001
Azurite Queue service successfully listens on 0.0.0.0:10001
172.17.0.1 - - [08/Oct/2019:16:15:39 +0000] "GET http://127.0.0.1:10000/devstoreaccount1/123456789/123456789?maxresults=1&include=&restype=container&comp=list HTTP/1.1" 404 -
172.17.0.1 - - [08/Oct/2019:16:17:12 +0000] "PUT /devstoreaccount1/123456789?restype=container HTTP/1.1" 201 -
172.17.0.1 - - [08/Oct/2019:16:22:36 +0000] "PUT /devstoreaccount1/123456789/test HTTP/1.1" 201 -
172.17.0.1 - - [08/Oct/2019:16:22:59 +0000] "GET http://127.0.0.1:10000/devstoreaccount1/123456789/test HTTP/1.1" 200 138793

so the blob is there, but the Java client can't parse the response correctly

@sima-zhu
Contributor

sima-zhu commented Oct 8, 2019

Can I test it on my end?

@sima-zhu
Contributor

sima-zhu commented Oct 8, 2019

@andreaturli BlobURL seems to use the previous version of the SDK (prior to v12).

@andreaturli
Author

Indeed I'm using v11

@sima-zhu
Contributor

sima-zhu commented Oct 8, 2019

Can you switch to v12 to test? Or we can move to a new issue regarding v11.

@andreaturli
Author

andreaturli commented Oct 9, 2019

Using v12 (both preview2 and preview4) gives me the same problem

Using az cli I can do the following

az storage container create --name '123456789' --connection-string 'DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;'

az storage blob upload -f /Users/andrea/Desktop/test.json -c '123456789' -n test --connection-string 'DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;'

az storage blob download -c '123456789' --connection-string 'DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;' -n test -f /tmp/test

and this is the log from Azurite

$ docker run -p 10000:10000 -p 10001:10001 mcr.microsoft.com/azure-storage/azurite:3.2.0-preview
Azurite Blob service is starting on 0.0.0.0:10000
Azurite Blob service successfully listens on 0.0.0.0:10000
Azurite Queue service is starting on 0.0.0.0:10001
Azurite Queue service successfully listens on 0.0.0.0:10001
172.17.0.1 - - [09/Oct/2019:08:14:40 +0000] "PUT /devstoreaccount1/123456789?restype=container HTTP/1.1" 201 -
172.17.0.1 - - [09/Oct/2019:08:14:47 +0000] "PUT /devstoreaccount1/123456789/test HTTP/1.1" 201 -
172.17.0.1 - - [09/Oct/2019:08:14:54 +0000] "GET /devstoreaccount1/123456789/test HTTP/1.1" 206 138793

From Java SDK v12 I can see the following stack trace:

GET /devstoreaccount1/123456789/test HTTP/1.1
host: 127.0.0.1:10000
accept: */*
Date: Wed, 09 Oct 2019 08:15:11 GMT
x-ms-version: 2019-02-02
Authorization: SharedKey devstoreaccount1:HnKxQX6xWbWl+7bj0wgpWnGpSXRQPqS0DfjU161mFJ0=
x-ms-client-request-id: 136c2873-34e2-4dfe-90db-16e056c40a23
User-Agent: azsdk-java-azure-storage-blob/12.0.0-preview.3 1.8.0_191; Mac OS X 10.14.6
content-length: 0
10:15:12.203 [reactor-http-nio-4] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxCapacityPerThread: 4096
10:15:12.203 [reactor-http-nio-4] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxSharedCapacityFactor: 2
10:15:12.203 [reactor-http-nio-4] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.linkCapacity: 16
10:15:12.203 [reactor-http-nio-4] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.ratio: 8
10:15:12.212 [reactor-http-nio-4] DEBUG reactor.netty.resources.PooledConnectionProvider - [id: 0x3772d1f7, L:/127.0.0.1:58110 - R:/127.0.0.1:10000] onStateChange(GET{uri=/devstoreaccount1/123456789/test, connection=PooledConnection{channel=[id: 0x3772d1f7, L:/127.0.0.1:58110 - R:/127.0.0.1:10000]}}, [request_sent])
10:15:12.212 [reactor-http-nio-4] DEBUG reactor.netty.resources.PooledConnectionProvider - [id: 0x3772d1f7, L:/127.0.0.1:58110 - R:/127.0.0.1:10000] Channel connected, now 1 active connections and 0 inactive connections
10:15:12.224 [reactor-http-nio-4] DEBUG reactor.netty.http.client.HttpClientOperations - [id: 0x3772d1f7, L:/127.0.0.1:58110 - R:/127.0.0.1:10000] Received response (auto-read:false) : [Server=Azurite-Blob/3.2.0-preview, last-modified=Wed, 09 Oct 2019 08:14:47 GMT, content-length=138793, content-type=application/json, etag="d-N6JVml9IEp3EKCuoZ5At4yiVgEg", content-md5=j/tFvnspCanOfT00Ua5Wvg==, x-ms-blob-type=BlockBlob, x-ms-lease-state=available, x-ms-lease-status=unlocked, x-ms-request-id=c63d111f-5128-43ff-83f0-5b63a2cdce76, x-ms-version=2018-03-28, date=Wed, 09 Oct 2019 08:15:12 GMT, x-ms-server-encrypted=true, x-ms-blob-content-md5=j/tFvnspCanOfT00Ua5Wvg==, Connection=keep-alive]
10:15:12.224 [reactor-http-nio-4] DEBUG reactor.netty.resources.PooledConnectionProvider - [id: 0x3772d1f7, L:/127.0.0.1:58110 - R:/127.0.0.1:10000] onStateChange(GET{uri=/devstoreaccount1/123456789/test, connection=PooledConnection{channel=[id: 0x3772d1f7, L:/127.0.0.1:58110 - R:/127.0.0.1:10000]}}, [response_received])
Exception in thread "main" java.lang.IllegalArgumentException: The argument must not be null or an empty string. Argument name: info.eTag.
	at com.azure.storage.common.Utility.assertNotNull(Utility.java:284)
	at com.azure.storage.blob.DownloadAsyncResponse.<init>(DownloadAsyncResponse.java:47)
	at com.azure.storage.blob.BlobAsyncClient.lambda$18(BlobAsyncClient.java:494)
	at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:100)
	at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:114)
	at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:73)
	at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1515)
	at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:241)
	at reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber.onNext(FluxSwitchIfEmpty.java:67)
	at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onNext(FluxMapFuseable.java:121)
	at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2071)
	at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.request(FluxMapFuseable.java:162)
	at reactor.core.publisher.Operators$MultiSubscriptionSubscriber.set(Operators.java:1879)
	at reactor.core.publisher.Operators$MultiSubscriptionSubscriber.onSubscribe(Operators.java:1753)
	at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onSubscribe(FluxMapFuseable.java:90)
	at reactor.core.publisher.MonoJust.subscribe(MonoJust.java:54)
	at reactor.core.publisher.MonoMapFuseable.subscribe(MonoMapFuseable.java:59)
	at reactor.core.publisher.MonoSwitchIfEmpty.subscribe(MonoSwitchIfEmpty.java:44)
	at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:150)
	at reactor.core.publisher.FluxContextStart$ContextStartSubscriber.onNext(FluxContextStart.java:103)
	at reactor.core.publisher.FluxDoOnEach$DoOnEachSubscriber.onNext(FluxDoOnEach.java:159)
	at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1515)
	at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:144)
	at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:114)
	at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:73)
	at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1515)
	at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:144)
	at reactor.core.publisher.FluxDelaySubscription$DelaySubscriptionMainSubscriber.onNext(FluxDelaySubscription.java:179)
	at reactor.core.publisher.SerializedSubscriber.onNext(SerializedSubscriber.java:89)
	at reactor.core.publisher.FluxTimeout$TimeoutMainSubscriber.onNext(FluxTimeout.java:173)
	at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onNext(FluxMapFuseable.java:121)
	at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onNext(FluxMapFuseable.java:121)
	at reactor.core.publisher.FluxPeekFuseable$PeekFuseableSubscriber.onNext(FluxPeekFuseable.java:204)
	at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1515)
	at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:144)
	at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1515)
	at reactor.core.publisher.MonoIgnoreThen$ThenAcceptInner.onNext(MonoIgnoreThen.java:296)
	at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1515)
	at reactor.core.publisher.MonoSingle$SingleSubscriber.onComplete(MonoSingle.java:171)
	at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2073)
	at reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain.onSubscribeInner(MonoFlatMapMany.java:140)
	at reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain.onNext(MonoFlatMapMany.java:178)
	at reactor.core.publisher.FluxRetryPredicate$RetryPredicateSubscriber.onNext(FluxRetryPredicate.java:81)
	at reactor.core.publisher.MonoCreate$DefaultMonoSink.success(MonoCreate.java:156)
	at reactor.netty.http.client.HttpClientConnect$HttpObserver.onStateChange(HttpClientConnect.java:383)
	at reactor.netty.resources.PooledConnectionProvider$DisposableAcquire.onStateChange(PooledConnectionProvider.java:501)
	at reactor.netty.resources.PooledConnectionProvider$PooledConnection.onStateChange(PooledConnectionProvider.java:443)
	at reactor.netty.http.client.HttpClientOperations.onInboundNext(HttpClientOperations.java:483)
	at reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:141)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:438)
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323)
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:426)
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:278)
	at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:648)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:583)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:500)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:462)
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:905)
	at java.lang.Thread.run(Thread.java:748)
	Suppressed: java.lang.Exception: #block terminated with an error
		at reactor.core.publisher.BlockingSingleSubscriber.blockingGet(BlockingSingleSubscriber.java:93)
		at reactor.core.publisher.Mono.block(Mono.java:1494)
		at com.azure.storage.common.Utility.blockWithOptionalTimeout(Utility.java:241)
		at com.azure.storage.blob.BlobClient.downloadWithResponse(BlobClient.java:388)
		at com.azure.storage.blob.BlobClient.download(BlobClient.java:349)
		at co.elastic.cloud.snapshot.estimator.storage.AzureClient.readObject(AzureClient.java:70)
		at co.elastic.cloud.snapshot.estimator.storage.AzureClient.main(AzureClient.java:111)

although the Azurite server gives me the following

172.17.0.1 - - [09/Oct/2019:08:15:12 +0000] "GET /devstoreaccount1/123456789/test HTTP/1.1" 200 138793

Note that I've tried with:

        BlockBlobClient blockBlobClient = containerClient.getBlobClient(blobName).asBlockBlobClient();
        ByteArrayOutputStream result = new ByteArrayOutputStream();
        blockBlobClient.download(result);

@sima-zhu
Contributor

sima-zhu commented Oct 9, 2019

Looking

@sima-zhu sima-zhu added the blocking-release Blocks release label Oct 9, 2019
@sima-zhu
Contributor

sima-zhu commented Oct 9, 2019

I am trying to reproduce the issue but have had no luck failing with the same error.
However, since HTTP headers are case-insensitive, the better route is to deserialize them in a case-insensitive way.

I'll propose a change to the team and will post here if there are any updates.

@sima-zhu
Contributor

sima-zhu commented Oct 9, 2019

What the service returns to me uses ETag, based on your code snippet above.

DefaultHttpResponse(decodeResult: success, version: HTTP/1.1)
HTTP/1.1 201 Created
Content-Length: 0
Last-Modified: Wed, 09 Oct 2019 22:50:56 GMT
ETag: "0x8D74D0B2582BEC9"
Server: Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
x-ms-request-id: 552cc7ba-001e-00b5-19f4-7e3d89000000
x-ms-client-request-id: e20533e5-bfd8-4b33-91c7-64b599d830f5
x-ms-version: 2019-02-02
Date: Wed, 09 Oct 2019 22:50:55 GMT

@andreaturli
Author

Thanks

Could you meanwhile share the Java snippet you are using to test on your end? Thanks!

@sima-zhu
Contributor

The same code snippet you attached above.

BlockBlobClient blockBlobClient = containerClient.getBlobClient(blobName).asBlockBlobClient();
ByteArrayOutputStream result = new ByteArrayOutputStream();
blockBlobClient.download(result);

@sima-zhu
Contributor

Added a new JSON feature property to the model class in PR #5841:

JsonFormat.Feature.ACCEPT_CASE_INSENSITIVE_PROPERTIES

This lets us handle response header names in a case-insensitive way.
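As an illustration of the goal (not the SDK's actual Jackson path), a header binder that matches names case-insensitively picks up Azurite's lowercase `etag` just as well as the service's `ETag`. The model and binder names below are placeholders, not the SDK's classes:

```java
import java.util.Map;
import java.util.TreeMap;

public class HeaderBindingDemo {
    // Placeholder model, loosely mirroring the fields the real BlobDownloadHeaders carries.
    static class BlobDownloadHeadersSketch {
        String eTag;
        String lastModified;
    }

    // Bind raw wire headers to the model via a case-insensitive view of the map.
    static BlobDownloadHeadersSketch bind(Map<String, String> rawHeaders) {
        Map<String, String> ci = new TreeMap<>(String.CASE_INSENSITIVE_ORDER);
        ci.putAll(rawHeaders);
        BlobDownloadHeadersSketch h = new BlobDownloadHeadersSketch();
        h.eTag = ci.get("ETag");               // matches "etag", "ETAG", ...
        h.lastModified = ci.get("Last-Modified");
        return h;
    }

    public static void main(String[] args) {
        // Header names as Azurite spells them in the logs above.
        Map<String, String> azurite = Map.of(
            "etag", "\"d-N6JVml9IEp3EKCuoZ5At4yiVgEg\"",
            "last-modified", "Wed, 09 Oct 2019 08:14:47 GMT");
        System.out.println(bind(azurite).eTag);
    }
}
```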

@sima-zhu
Contributor

The annotation does not work. Will go with another approach.

@anuchandy
Member

@jianghaolu as we discussed, maybe the safe approach is to introduce SerializerAdapter::deserializeHeaders(headers, type)? That way azure-core keeps working with older versions of the SDKs. Due to the code freeze, a change in azure-core requires special approval today; I just went through the process :)
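A rough sketch of such a hook, with the case-insensitive wrapping done once at the adapter boundary so models that expect "ETag" still bind against "etag". The interface and helper here are assumptions for illustration only, not azure-core's actual SerializerAdapter API:

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.function.Function;

public class SerializerAdapterSketch {
    // Hypothetical hook in the spirit of the suggestion above.
    interface HeaderDeserializer<T> {
        T deserializeHeaders(Map<String, String> headers);
    }

    // Wrap any header-to-model binder so lookups are case-insensitive.
    static <T> HeaderDeserializer<T> caseInsensitive(Function<Map<String, String>, T> binder) {
        return headers -> {
            Map<String, String> ci = new TreeMap<>(String.CASE_INSENSITIVE_ORDER);
            ci.putAll(headers);
            return binder.apply(ci);
        };
    }

    public static void main(String[] args) {
        // A trivial binder that only extracts the ETag value.
        HeaderDeserializer<String> etagOnly = caseInsensitive(h -> h.get("ETag"));
        // Lowercase wire spelling, as Azurite sends it, still resolves.
        System.out.println(etagOnly.deserializeHeaders(Map.of("etag", "\"0x8D7\"")));
    }
}
```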

@sima-zhu
Contributor

Submitted PR #5844 with the suggested change above.

@sima-zhu
Contributor

The fix 3e03bf2 has been merged in.
Please feel free to reopen if the issue still exists.
Thanks for reporting this issue!

@ghost

ghost commented Oct 15, 2019

Thanks for working with Microsoft on GitHub! Tell us how you feel about your experience using the reactions on this comment.
