
Problem: the official Azure Blob samples only upload files to a blob; none of them appends to an already created blob
博客园 2023-05-09 22:26:09
Problem description: In the official Azure Blob samples, everything is about uploading a file to a blob; there is no example of appending to a blob that already exists. How can a single file be appended to multiple times, passing only the new content on each write?
Azure Storage Blob has three types: Block Blob, Append Blob and Page Blob. Only the Append Blob type supports the Append operation, and a blob's type is fixed at creation time; it cannot be changed afterwards.
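If it is unclear whether an existing blob was created as an Append Blob, its type can be checked at runtime before attempting to append. A minimal sketch, assuming blobClient is an existing com.azure.storage.blob.BlobClient that points at a blob which already exists (otherwise getProperties() throws a 404 BlobStorageException):

import com.azure.storage.blob.BlobClient;
import com.azure.storage.blob.models.BlobType;

class BlobTypeCheck {
    // Returns true only when the existing blob is an Append Blob.
    static boolean isAppendBlob(BlobClient blobClient) {
        BlobType type = blobClient.getProperties().getBlobType(); // BLOCK_BLOB, APPEND_BLOB or PAGE_BLOB
        return type == BlobType.APPEND_BLOB;
    }
}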
After reviewing the Java Storage SDK, this can be done with AppendBlobClient. BlobClient exposes a factory method for it:
/**
 * Creates a new {@link AppendBlobClient} associated with this blob.
 *
 * @return A {@link AppendBlobClient} associated with this blob.
 */
public AppendBlobClient getAppendBlobClient() {
    return new SpecializedBlobClientBuilder()
        .blobClient(this)
        .buildAppendBlobClient();
}
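In practice this means an AppendBlobClient can be obtained either from an existing BlobClient or built directly with SpecializedBlobClientBuilder. A hedged sketch; the connection string, container name and blob name below are placeholders, and blobClient is assumed to be an existing BlobClient:

import com.azure.storage.blob.specialized.AppendBlobClient;
import com.azure.storage.blob.specialized.SpecializedBlobClientBuilder;

// Route 1: from an existing BlobClient (the method shown in the SDK source above).
AppendBlobClient viaBlobClient = blobClient.getAppendBlobClient();

// Route 2: build it directly; the values below are placeholders.
AppendBlobClient viaBuilder = new SpecializedBlobClientBuilder()
    .connectionString("<storage-connection-string>")
    .containerName("appendblob")
    .blobName("test.txt")
    .buildAppendBlobClient();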
The AppendBlobClient class provides several methods for appending, such as appendBlock and appendBlockWithResponse. Their definitions in the SDK source are:
/**
 * Commits a new block of data to the end of the existing append blob.
 *
 * Note that the data passed must be replayable if retries are enabled (the default). In other words, the
 * {@code Flux} must produce the same data each time it is subscribed to.
 *
 * Code Samples
 *
 * {@codesnippet com.azure.storage.blob.specialized.AppendBlobClient.appendBlock#InputStream-long}
 *
 * @param data The data to write to the blob. The data must be markable. This is in order to support retries. If
 * the data is not markable, consider using {@link #getBlobOutputStream()} and writing to the returned OutputStream.
 * Alternatively, consider wrapping your data source in a {@link java.io.BufferedInputStream} to add mark support.
 * @param length The exact length of the data. It is important that this value match precisely the length of the
 * data emitted by the {@code Flux}.
 * @return The information of the append blob operation.
 */
@ServiceMethod(returns = ReturnType.SINGLE)
public AppendBlobItem appendBlock(InputStream data, long length) {
    return appendBlockWithResponse(data, length, null, null, null, Context.NONE).getValue();
}

/**
 * Commits a new block of data to the end of the existing append blob.
 *
 * Note that the data passed must be replayable if retries are enabled (the default). In other words, the
 * {@code Flux} must produce the same data each time it is subscribed to.
 *
 * Code Samples
 *
 * {@codesnippet com.azure.storage.blob.specialized.AppendBlobClient.appendBlockWithResponse#InputStream-long-byte-AppendBlobRequestConditions-Duration-Context}
 *
 * @param data The data to write to the blob. The data must be markable. This is in order to support retries. If
 * the data is not markable, consider using {@link #getBlobOutputStream()} and writing to the returned OutputStream.
 * Alternatively, consider wrapping your data source in a {@link java.io.BufferedInputStream} to add mark support.
 * @param length The exact length of the data. It is important that this value match precisely the length of the
 * data emitted by the {@code Flux}.
 * @param contentMd5 An MD5 hash of the block content. This hash is used to verify the integrity of the block during
 * transport. When this header is specified, the storage service compares the hash of the content that has arrived
 * with this header value. Note that this MD5 hash is not stored with the blob. If the two hashes do not match, the
 * operation will fail.
 * @param appendBlobRequestConditions {@link AppendBlobRequestConditions}
 * @param timeout An optional timeout value beyond which a {@link RuntimeException} will be raised.
 * @param context Additional context that is passed through the Http pipeline during the service call.
 * @return A {@link Response} whose {@link Response#getValue() value} contains the append blob operation.
 * @throws UnexpectedLengthException when the length of data does not match the input {@code length}.
 * @throws NullPointerException if the input data is null.
 */
@ServiceMethod(returns = ReturnType.SINGLE)
public Response<AppendBlobItem> appendBlockWithResponse(InputStream data, long length, byte[] contentMd5,
    AppendBlobRequestConditions appendBlobRequestConditions, Duration timeout, Context context) {
    Objects.requireNonNull(data, "'data' cannot be null.");
    Flux<ByteBuffer> fbb = Utility.convertStreamToByteBuffer(data, length, MAX_APPEND_BLOCK_BYTES, true);
    Mono<Response<AppendBlobItem>> response = appendBlobAsyncClient.appendBlockWithResponse(
        fbb.subscribeOn(Schedulers.elastic()), length, contentMd5, appendBlobRequestConditions, context);
    return StorageImplUtils.blockWithOptionalTimeout(response, timeout);
}
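As the Javadoc above points out, when the source stream is not markable an alternative is to write through the OutputStream returned by getBlobOutputStream(). A minimal sketch, assuming appendBlobClient already refers to an existing append blob:

import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

// Append by streaming: data is committed to the append blob as the internal
// buffer fills and when the stream is closed. Assumes the append blob exists.
try (OutputStream os = appendBlobClient.getBlobOutputStream()) {
    os.write("appended via BlobOutputStream\n".getBytes(StandardCharsets.UTF_8));
} catch (IOException e) {
    throw new RuntimeException("Failed to append via OutputStream", e);
}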
Code implementation

Step 1: Add the Azure Storage Blob dependency to the Java project's pom.xml
<dependency>
    <groupId>com.azure</groupId>
    <artifactId>azure-storage-blob</artifactId>
    <version>12.13.0</version>
</dependency>
Step 2: Import the required Storage classes
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.URISyntaxException;
import java.nio.charset.StandardCharsets;
import java.security.InvalidKeyException;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.time.LocalTime;

import com.azure.core.http.rest.Response;
import com.azure.storage.blob.BlobContainerClient;
import com.azure.storage.blob.BlobServiceClient;
import com.azure.storage.blob.BlobServiceClientBuilder;
import com.azure.storage.blob.models.AppendBlobItem;
import com.azure.storage.blob.models.AppendBlobRequestConditions;
import com.azure.storage.blob.specialized.AppendBlobClient;
Step 3: Create an AppendBlobClient object, using a BlobServiceClient and the connection string (Connection String)
String storageConnectionString = "DefaultEndpointsProtocol=https;AccountName=*****;AccountKey=*******;EndpointSuffix=core.chinacloudapi.cn";
String containerName = "appendblob";
String fileName = "test.txt";

// Create a BlobServiceClient object which will be used to create a container
System.out.println("\nCreate a BlobServiceClient Object to Connect Storage Account");
BlobServiceClient blobServiceClient = new BlobServiceClientBuilder()
    .connectionString(storageConnectionString)
    .buildClient();

BlobContainerClient containerClient = blobServiceClient.getBlobContainerClient(containerName);
if (!containerClient.exists())
    containerClient.create();

// Get a reference to a blob
AppendBlobClient appendBlobClient = containerClient.getBlobClient(fileName).getAppendBlobClient();
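A side note on the connection string: hardcoding the account key is fine for a quick test, but in real code it is usually read from configuration. A small sketch; the environment variable name AZURE_STORAGE_CONNECTION_STRING is only an assumption for illustration, not something required by the SDK:

// Read the connection string from an environment variable instead of hardcoding it.
String storageConnectionString = System.getenv("AZURE_STORAGE_CONNECTION_STRING");
if (storageConnectionString == null || storageConnectionString.isEmpty()) {
    throw new IllegalStateException("AZURE_STORAGE_CONNECTION_STRING is not set");
}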
Step 4: Call appendBlockWithResponse to append the content, and use the returned status code to decide whether the append succeeded
boolean overwrite = true; // Default value
if (!appendBlobClient.exists())
    System.out.printf("Created AppendBlob at %s%n", appendBlobClient.create(overwrite).getLastModified());

String data = "Test to append new content into exists blob! by blogs lu bian liang zhan deng @"
    + LocalTime.now().toString() + "\n";
InputStream inputStream = new ByteArrayInputStream(data.getBytes(StandardCharsets.UTF_8));
byte[] md5 = MessageDigest.getInstance("MD5").digest(data.getBytes(StandardCharsets.UTF_8));
AppendBlobRequestConditions requestConditions = new AppendBlobRequestConditions();
// Context context = new Context("key", "value");
long length = data.getBytes(StandardCharsets.UTF_8).length;

Response<AppendBlobItem> rsp = appendBlobClient.appendBlockWithResponse(inputStream, length, md5, requestConditions, null, null);
if (rsp.getStatusCode() == 201) {
    System.out.println("append content successful........");
}
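One thing the sample does not show: a single appendBlock call is limited in size (4 MiB per block on older service versions; check the service documentation for the exact limit that applies to your account), so larger payloads have to be split across several appends. A hedged sketch of such a loop, assuming a byte[] payload and the appendBlobClient from step 3:

import java.io.ByteArrayInputStream;
import java.util.Arrays;
import com.azure.storage.blob.specialized.AppendBlobClient;

// Split a payload into chunks no larger than chunkSize and append them in order.
// 4 * 1024 * 1024 is used below as a conservative per-block limit.
static void appendInChunks(AppendBlobClient appendBlobClient, byte[] payload, int chunkSize) {
    for (int offset = 0; offset < payload.length; offset += chunkSize) {
        int end = Math.min(offset + chunkSize, payload.length);
        byte[] chunk = Arrays.copyOfRange(payload, offset, end);
        appendBlobClient.appendBlock(new ByteArrayInputStream(chunk), chunk.length);
    }
}

// Example call: appendInChunks(appendBlobClient, bigPayload, 4 * 1024 * 1024);

If desired, the result can be verified afterwards by reading the blob back, for example with containerClient.getBlobClient(fileName).downloadContent() in recent SDK versions.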
However, if the target blob is not an Append Blob, the call fails with a Status code 409 error: "The blob type is invalid for this operation" (a defensive way to handle it is sketched after the stack trace below).
Exception in thread "main" com.azure.storage.blob.models.BlobStorageException: Status code 409
    at java.base/jdk.internal.reflect.DirectConstructorHandleAccessor.newInstance(DirectConstructorHandleAccessor.java:67)
    at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:500)
    at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:484)
    at com.azure.core.http.rest.RestProxy.instantiateUnexpectedException(RestProxy.java:343)
    at com.azure.core.http.rest.RestProxy.lambda$ensureExpectedStatus$5(RestProxy.java:382)
    at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:125)
    at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1815)
    at reactor.core.publisher.MonoCacheTime$CoordinatorSubscriber.signalCached(MonoCacheTime.java:337)
    at reactor.core.publisher.MonoCacheTime$CoordinatorSubscriber.onNext(MonoCacheTime.java:354)
    at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2397)
    at reactor.core.publisher.MonoCacheTime$CoordinatorSubscriber.onSubscribe(MonoCacheTime.java:293)
    at reactor.core.publisher.FluxFlatMap.trySubscribeScalarMap(FluxFlatMap.java:192)
    at reactor.core.publisher.MonoFlatMap.subscribeOrReturn(MonoFlatMap.java:53)
    at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:57)
    at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52)
    at reactor.core.publisher.MonoCacheTime.subscribeOrReturn(MonoCacheTime.java:143)
    at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:57)
    at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:157)
    at reactor.core.publisher.FluxDoFinally$DoFinallySubscriber.onNext(FluxDoFinally.java:130)
    at reactor.core.publisher.FluxHandle$HandleSubscriber.onNext(FluxHandle.java:118)
    at reactor.core.publisher.FluxMap$MapConditionalSubscriber.onNext(FluxMap.java:220)
    at reactor.core.publisher.FluxDoFinally$DoFinallySubscriber.onNext(FluxDoFinally.java:130)
    at reactor.core.publisher.FluxHandleFuseable$HandleFuseableSubscriber.onNext(FluxHandleFuseable.java:184)
    at reactor.core.publisher.FluxContextWrite$ContextWriteSubscriber.onNext(FluxContextWrite.java:107)
    at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1815)
    at reactor.core.publisher.MonoCollectList$MonoCollectListSubscriber.onComplete(MonoCollectList.java:128)
    at reactor.core.publisher.FluxPeek$PeekSubscriber.onComplete(FluxPeek.java:259)
    at reactor.core.publisher.FluxMap$MapSubscriber.onComplete(FluxMap.java:142)
    at reactor.netty.channel.FluxReceive.onInboundComplete(FluxReceive.java:401)
    at reactor.netty.channel.ChannelOperations.onInboundComplete(ChannelOperations.java:416)
    at reactor.netty.channel.ChannelOperations.terminate(ChannelOperations.java:470)
    at reactor.netty.http.client.HttpClientOperations.onInboundNext(HttpClientOperations.java:685)
    at reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:94)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.base/java.lang.Thread.run(Thread.java:1589)
    Suppressed: java.lang.Exception: #block terminated with an error
        at reactor.core.publisher.BlockingSingleSubscriber.blockingGet(BlockingSingleSubscriber.java:99)
        at reactor.core.publisher.Mono.block(Mono.java:1703)
        at com.azure.storage.common.implementation.StorageImplUtils.blockWithOptionalTimeout(StorageImplUtils.java:128)
        at com.azure.storage.blob.specialized.AppendBlobClient.appendBlockWithResponse(AppendBlobClient.java:259)
        at test.App.AppendBlobContent(App.java:68)
        at test.App.main(App.java:31)

InvalidBlobType
The blob type is invalid for this operation.
RequestId:501ee0b9-301e-0003-4f7b-829ca6000000
Time:2023-05-09T13:37:17.7509942Z
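Rather than letting the call terminate with the stack trace above, the exception can be caught and its error code inspected. A minimal sketch, using the variables from step 4; the recovery action is only a placeholder:

import com.azure.storage.blob.models.BlobErrorCode;
import com.azure.storage.blob.models.BlobStorageException;

try {
    appendBlobClient.appendBlockWithResponse(inputStream, length, md5, requestConditions, null, null);
} catch (BlobStorageException e) {
    if (BlobErrorCode.INVALID_BLOB_TYPE.equals(e.getErrorCode())) {
        // The existing blob is a Block Blob or Page Blob, so appending is impossible.
        // Recover by writing to a new append blob instead (placeholder action).
        System.err.println("Target blob is not an Append Blob: " + e.getServiceMessage());
    } else {
        throw e;
    }
}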
References

appendBlockWithResponse: https://learn.microsoft.com/en-us/java/api/com.azure.storage.blob.specialized.appendblobclient?view=azure-java-stable#com-azure-storage-blob-specialized-appendblobclient-appendblockwithresponse(java-io-inputstream-long-byte()-com-azure-storage-blob-models-appendblobrequestconditions-java-time-duration-com-azure-core-util-context)
Introduction to Blob (object) storage: https://docs.azure.cn/zh-cn/storage/blobs/storage-blobs-introduction