How to compute Row Size in Complex Spark DataFrame?

There may well be a better way of doing this. I ran into an error where a data load from the lake into Cosmos DB kept failing because a record exceeded Cosmos DB's 2 MB document size limit. Converting the rows to JSON, grouped by primary key, and computing the size of the resulting JSON seemed like an appropriate and easy way to find the offending records. Comment below if this can be done better.
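
A minimal sketch of that approach in Scala might look like the following. The lake path, the `id` primary-key column, and the object name are all assumptions for illustration; `to_json`, `struct`, `collect_list`, and the `octet_length` SQL function are standard Spark building blocks.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, collect_list, expr, struct, to_json}

object RowSizeEstimate {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("RowSizeEstimate").getOrCreate()

    // Hypothetical source: the lake path and the "id" primary-key column
    // are placeholders for illustration.
    val df = spark.read.parquet("/mnt/lake/records")

    // Collect all rows sharing a primary key into one array of structs and
    // serialize that array to a single JSON string -- roughly the shape of
    // the document that would be written to Cosmos DB.
    val sized = df
      .groupBy(col("id"))
      .agg(collect_list(struct(df.columns.map(col): _*)).as("records"))
      .withColumn("json", to_json(col("records")))
      // octet_length counts UTF-8 bytes, which is how the Cosmos DB
      // 2 MB item limit is metered.
      .withColumn("docSizeBytes", expr("octet_length(json)"))

    // Surface any grouped document that exceeds the 2 MB limit.
    sized
      .filter(col("docSizeBytes") > 2 * 1024 * 1024)
      .select("id", "docSizeBytes")
      .orderBy(col("docSizeBytes").desc)
      .show(truncate = false)

    spark.stop()
  }
}
```

`octet_length` is invoked through `expr` because it was only exposed in the Scala `functions` object in recent Spark releases; on older versions, `length(col("json"))` also works as a rough character-count proxy, though it undercounts multi-byte UTF-8 data.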
