I am seeing pretty much the same behavior as this issue, which I can't comment on because I don't have enough reputation. I have an API backed by Cosmos DB, and a daily batch job that needs to write into that same Cosmos DB instance. When the job kicks off, it severely degrades API performance. Scaling up to a very high RU count solves the problem, but it solves it whether or not the "Write throughput budget" setting is set.

So let's assume the API is consuming 2500 RUs. If I then scale the container to 5000 RUs and run the Data Flow with a 2500 RU write throughput budget, API performance should be unaffected, since there are still 2500 RUs available that the Data Flow is not touching. In practice, though, that is not the case: the API takes a direct performance hit as soon as the Data Flow job starts copying into Cosmos DB.

I guess my question is: what exactly is "Write throughput budget", and how do you use it effectively?
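For reference, here is roughly what my sink looks like in the data flow script. This is a sketch from memory, not my exact script; in particular, I'm assuming `throughputBudget` is the script property that backs the "Write throughput budget" box in the sink UI:

```
sink(allowSchemaDrift: true,
     validateSchema: false,
     format: 'document',
     insertable: true,
     updateable: false,
     deletable: false,
     upsertable: false,
     throughputBudget: 2500) ~> CosmosSink
```

My expectation was that with this set, the sink would throttle itself to at most 2500 RU/s, leaving the remaining provisioned throughput for the API.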
Azure Data Factory Dataflow CosmosDB Sink "Write throughput budget" doing nothing
Author: Brandon Watts