How to control the final file size using Kyuubi Spark engine #2342
Replies: 4 comments 4 replies
-
Hello @18580725343,
-
When inserting the results into the target table, the data files under the target table's partitions are too large.
-
I converted this to a discussion because it's not a bug but a question.
-
I guess you think that the final output file size is controlled by …
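For context, a minimal sketch of what the AQE settings from the reported configuration actually govern. These are shuffle-partition sizing knobs, so they influence how many reduce tasks run, not directly the size of the files written to the target table. Config names follow the Spark documentation; the Spark 3.x name is included only as a possible migration note, not something confirmed by this thread:

```sql
-- These tune *shuffle* partition sizing under adaptive query execution.
-- They do not directly cap the size of files written to the target table.
SET spark.sql.adaptive.enabled=true;
-- Legacy (Spark 2.x) name, as used in the reported configuration: 256 MB target
SET spark.sql.adaptive.shuffle.targetPostShuffleInputSize=268435456;
-- Rough Spark 3.x equivalent (verify against your Spark version):
SET spark.sql.adaptive.advisoryPartitionSizeInBytes=256m;
```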
-
Code of Conduct
Search before asking
Describe the bug
Executing spark-sql through Kyuubi with the relevant parameters set, the resulting number of output files is too large.
Affects Version(s)
1.3.0
Kyuubi Server Log Output
No response
Kyuubi Engine Log Output
No response
Kyuubi Server Configurations
No response
Kyuubi Engine Configurations
;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi;?#spark.yarn.queue=batch;spark.driver.memory=10g;spark.executor.cores=2;spark.num-executors=8;spark.executor.memory=10g;spark.executor-memory=10g;spark.sql.shuffle.partitions=2;spark.sql.adaptive.enabled=true;spark.sql.adaptive.shuffle.targetPostShuffleInputSize=268435456;spark.sql.storeAssignmentPolicy=LEGACY
Additional context
No response
Are you willing to submit PR?