Link to original content: http://github.com/microsoft/kafka-connect-cosmosdb/issues/513

[BUG] Bulk Execution fails in Sink Connector for partition key that is not "id" #513

Open

Description

Bulk execution fails when the target container is partitioned by any property other than "id".

Error Message:
[2023-04-27 07:47:27,333] ERROR Could not upload record to CosmosDb, but tolerance is set to all. Error message: Unable to write record to CosmosDB: {null}, value schema {null}, exception {{'ClassName':'BulkOperationFailedException','userAgent':'azsdk-java-cosmos/4.42.0 Linux/3.10.0-1160.88.1.el7.x86_64 JRE/11.0.8','statusCode':400,'resourceAddress':null,'innerErrorMessage':'Request failed with effectiveStatusCode: {400}, effectiveSubStatusCode: {0}, kafkaOffset: {10}, kafkaPartition: {1}, topic: {test_topic}','causeInfo':null,'responseHeaders':'{x-ms-substatus=0}'}}
(com.azure.cosmos.kafka.connect.sink.CosmosDBSinkTask)

Expected Behavior

Bulk execution should work for any partition key selection.

Reproduce

Send messages through the sink connector to a Cosmos DB container that is partitioned by a property other than "id" (see the illustrative record below).
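
For illustration only, a record such as the following reproduces the failure when written to a container whose partition key path is something other than /id; the /category path and the field values here are hypothetical:

{
  "id": "42",
  "category": "electronics",
  "name": "widget"
}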

Additional Context

Workaround: disabling bulk execution when creating the connector in the Kafka Connect environment avoids the issue:

"connect.cosmos.sink.bulk.enabled": false

