【Spark SQL】Viewing All Spark SQL Parameters
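
Running the SQL command `SET -v` returns a DataFrame with every Spark SQL configuration key, its current value in this session, and its description. The `show(1000, false)` call prints up to 1000 rows and disables column truncation so the full descriptions are visible.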

scala> spark.sql("set -v").show(1000,false)
+--------------------------------------------------------+---------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|key                                                     |value                                  |meaning                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
+--------------------------------------------------------+---------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|spark.sql.adaptive.enabled                              |false                                  |When true, enable adaptive query execution.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              |
|spark.sql.adaptive.shuffle.targetPostShuffleInputSize   |67108864b                              |The target post-shuffle input size in bytes of a task.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   |
|spark.sql.autoBroadcastJoinThreshold                    |10485760                               |Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join.  By setting this value to -1 broadcasting can be disabled. Note that currently statistics are only supported for Hive Metastore tables where the command <code>ANALYZE TABLE &lt;tableName&gt; COMPUTE STATISTICS noscan</code> has been run, and file-based data source tables where the statistics are computed directly on the files of data.                                                                                                                                                                                                                                                                                                                                                     |
|spark.sql.avro.compression.codec                        |snappy                                 |Compression codec used in writing of AVRO files. Supported codecs: uncompressed, deflate, snappy, bzip2 and xz. Default codec is snappy.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 |
|spark.sql.avro.deflate.level                            |-1                                     |Compression level for the deflate codec used in writing of AVRO files. Valid value must be in the range of from 1 to 9 inclusive or -1. The default value is -1 which corresponds to 6 level in the current implementation.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              |
|spark.sql.broadcastTimeout                              |300000ms                               |Timeout in seconds for the broadcast wait time in broadcast joins.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       |
|spark.sql.cbo.enabled                                   |false                                  |Enables CBO for estimation of plan statistics when set true.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             |
|spark.sql.cbo.joinReorder.dp.star.filter                |false                                  |Applies star-join filter heuristics to cost based join enumeration.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      |
|spark.sql.cbo.joinReorder.dp.threshold                  |12                                     |The maximum number of joined nodes allowed in the dynamic programming algorithm.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         |
|spark.sql.cbo.joinReorder.enabled                       |false                                  |Enables join reorder in CBO.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             |
|spark.sql.cbo.starSchemaDetection                       |false                                  |When true, it enables join reordering based on star schema detection.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    |
|spark.sql.columnNameOfCorruptRecord                     |_corrupt_record                        |The name of internal column for storing raw/un-parsed JSON and CSV records that fail to parse.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           |
|spark.sql.crossJoin.enabled                             |false                                  |When false, we will throw an error if a query contains a cartesian product without explicit CROSS JOIN syntax.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           |
|spark.sql.execution.arrow.enabled                       |false                                  |When true, make use of Apache Arrow for columnar data transfers. Currently available for use with pyspark.sql.DataFrame.toPandas, and pyspark.sql.SparkSession.createDataFrame when its input is a Pandas DataFrame. The following data types are unsupported: BinaryType, MapType, ArrayType of TimestampType, and nested StructType.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   |
|spark.sql.execution.arrow.fallback.enabled              |true                                   |When true, optimizations enabled by 'spark.sql.execution.arrow.enabled' will fallback automatically to non-optimized implementations if an error occurs.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 |
|spark.sql.execution.arrow.maxRecordsPerBatch            |10000                                  |When using Apache Arrow, limit the maximum number of records that can be written to a single ArrowRecordBatch in memory. If set to zero or negative there is no limit.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   |
|spark.sql.extensions                                    |<undefined>                            |Name of the class used to configure Spark Session extensions. The class should implement Function1[SparkSessionExtension, Unit], and must have a no-args constructor.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    |
|spark.sql.files.ignoreCorruptFiles                      |false                                  |Whether to ignore corrupt files. If true, the Spark jobs will continue to run when encountering corrupted files and the contents that have been read will still be returned.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             |
|spark.sql.files.ignoreMissingFiles                      |false                                  |Whether to ignore missing files. If true, the Spark jobs will continue to run when encountering missing files and the contents that have been read will still be returned.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               |
|spark.sql.files.maxPartitionBytes                       |134217728                              |The maximum number of bytes to pack into a single partition when reading files.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          |
|spark.sql.files.maxRecordsPerFile                       |0                                      |Maximum number of records to write out to a single file. If this value is zero or negative, there is no limit.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           |
|spark.sql.function.concatBinaryAsString                 |false                                  |When this option is set to false and all inputs are binary, `functions.concat` returns an output as binary. Otherwise, it returns as a string.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           |
|spark.sql.function.eltOutputAsString                    |false                                  |When this option is set to false and all inputs are binary, `elt` returns an output as binary. Otherwise, it returns as a string.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        |
|spark.sql.groupByAliases                                |true                                   |When true, aliases in a select list can be used in group by clauses. When false, an analysis exception is thrown in the case.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            |
|spark.sql.groupByOrdinal                                |true                                   |When true, the ordinal numbers in group by clauses are treated as the position in the select list. When false, the ordinal numbers are ignored.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          |
|spark.sql.hive.caseSensitiveInferenceMode               |INFER_AND_SAVE                         |Sets the action to take when a case-sensitive schema cannot be read from a Hive table's properties. Although Spark SQL itself is not case-sensitive, Hive compatible file formats such as Parquet are. Spark SQL must use a case-preserving schema when querying any table backed by files containing case-sensitive field names or queries may not return accurate results. Valid options include INFER_AND_SAVE (the default mode-- infer the case-sensitive schema from the underlying data files and write it back to the table properties), INFER_ONLY (infer the schema but don't attempt to write it to the table properties) and NEVER_INFER (fallback to using the case-insensitive metastore schema instead of inferring).                                                                                     |
|spark.sql.hive.filesourcePartitionFileCacheSize         |262144000                              |When nonzero, enable caching of partition file metadata in memory. All tables share a cache that can use up to specified num bytes for file metadata. This conf only has an effect when hive filesource partition management is enabled.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 |
|spark.sql.hive.manageFilesourcePartitions               |true                                   |When true, enable metastore partition management for file source tables as well. This includes both datasource and converted Hive tables. When partition management is enabled, datasource tables store partition in the Hive metastore, and use the metastore to prune partitions during query planning.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                |
|spark.sql.hive.metastorePartitionPruning                |true                                   |When true, some predicates will be pushed down into the Hive metastore so that unmatching partitions can be eliminated earlier. This only affects Hive tables not converted to filesource relations (see HiveUtils.CONVERT_METASTORE_PARQUET and HiveUtils.CONVERT_METASTORE_ORC for more information).                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
|spark.sql.hive.thriftServer.singleSession               |false                                  |When set to true, Hive Thrift server is running in a single session mode. All the JDBC/ODBC connections share the temporary views, function registries, SQL configuration and the current database.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      |
|spark.sql.hive.verifyPartitionPath                      |false                                  |When true, check all the partition paths under the table's root directory when reading data stored in HDFS. This configuration will be deprecated in the future releases and replaced by spark.files.ignoreMissingFiles.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 |
|spark.sql.inMemoryColumnarStorage.batchSize             |10000                                  |Controls the size of batches for columnar caching.  Larger batch sizes can improve memory utilization and compression, but risk OOMs when caching data.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
|spark.sql.inMemoryColumnarStorage.compressed            |true                                   |When set to true Spark SQL will automatically select a compression codec for each column based on statistics of the data.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                |
|spark.sql.inMemoryColumnarStorage.enableVectorizedReader|true                                   |Enables vectorized reader for columnar caching.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          |
|spark.sql.legacy.replaceDatabricksSparkAvro.enabled     |true                                   |If it is set to true, the data source provider com.databricks.spark.avro is mapped to the built-in but external Avro data source module for backward compatibility.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      |
|spark.sql.legacy.sizeOfNull                             |true                                   |If it is set to true, size of null returns -1. This behavior was inherited from Hive. The size function returns null for null input if the flag is disabled.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             |
|spark.sql.optimizer.excludedRules                       |<undefined>                            |Configures a list of rules to be disabled in the optimizer, in which the rules are specified by their rule names and separated by comma. It is not guaranteed that all the rules in this configuration will eventually be excluded, as some rules are necessary for correctness. The optimizer will log the rules that have indeed been excluded.                                                                                                                                                                                                                                                                                                                                                                                                                                                                        |
|spark.sql.orc.columnarReaderBatchSize                   |4096                                   |The number of rows to include in a orc vectorized reader batch. The number should be carefully chosen to minimize overhead and avoid OOMs in reading data.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               |
|spark.sql.orc.compression.codec                         |snappy                                 |Sets the compression codec used when writing ORC files. If either `compression` or `orc.compress` is specified in the table-specific options/properties, the precedence would be `compression`, `orc.compress`, `spark.sql.orc.compression.codec`.Acceptable values include: none, uncompressed, snappy, zlib, lzo.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      |
|spark.sql.orc.enableVectorizedReader                    |true                                   |Enables vectorized orc decoding.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         |
|spark.sql.orc.filterPushdown                            |true                                   |When true, enable filter pushdown for ORC files.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         |
|spark.sql.orderByOrdinal                                |true                                   |When true, the ordinal numbers are treated as the position in the select list. When false, the ordinal numbers in order/sort by clause are ignored.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      |
|spark.sql.parquet.binaryAsString                        |false                                  |Some other Parquet-producing systems, in particular Impala and older versions of Spark SQL, do not differentiate between binary data and strings when writing out the Parquet schema. This flag tells Spark SQL to interpret binary data as a string to provide compatibility with these systems.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        |
|spark.sql.parquet.columnarReaderBatchSize               |4096                                   |The number of rows to include in a parquet vectorized reader batch. The number should be carefully chosen to minimize overhead and avoid OOMs in reading data.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           |
|spark.sql.parquet.compression.codec                     |snappy                                 |Sets the compression codec used when writing Parquet files. If either `compression` or `parquet.compression` is specified in the table-specific options/properties, the precedence would be `compression`, `parquet.compression`, `spark.sql.parquet.compression.codec`. Acceptable values include: none, uncompressed, snappy, gzip, lzo, brotli, lz4, zstd.                                                                                                                                                                                                                                                                                                                                                                                                                                                            |
|spark.sql.parquet.enableVectorizedReader                |true                                   |Enables vectorized parquet decoding.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
|spark.sql.parquet.filterPushdown                        |true                                   |Enables Parquet filter push-down optimization when set to true.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          |
|spark.sql.parquet.int64AsTimestampMillis                |false                                  |(Deprecated since Spark 2.3, please set spark.sql.parquet.outputTimestampType.) When true, timestamp values will be stored as INT64 with TIMESTAMP_MILLIS as the extended type. In this mode, the microsecond portion of the timestamp value will betruncated.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           |
|spark.sql.parquet.int96AsTimestamp                      |true                                   |Some Parquet-producing systems, in particular Impala, store Timestamp into INT96. Spark would also store Timestamp as INT96 because we need to avoid precision lost of the nanoseconds field. This flag tells Spark SQL to interpret INT96 data as a timestamp to provide compatibility with these systems.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              |
|spark.sql.parquet.int96TimestampConversion              |false                                  |This controls whether timestamp adjustments should be applied to INT96 data when converting to timestamps, for data written by Impala.  This is necessary because Impala stores INT96 data with a different timezone offset than Hive & Spark.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           |
|spark.sql.parquet.mergeSchema                           |false                                  |When true, the Parquet data source merges schemas collected from all data files, otherwise the schema is picked from the summary file or a random data file if no summary file is available.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             |
|spark.sql.parquet.outputTimestampType                   |INT96                                  |Sets which Parquet timestamp type to use when Spark writes data to Parquet files. INT96 is a non-standard but commonly used timestamp type in Parquet. TIMESTAMP_MICROS is a standard timestamp type in Parquet, which stores number of microseconds from the Unix epoch. TIMESTAMP_MILLIS is also standard, but with millisecond precision, which means Spark has to truncate the microsecond portion of its timestamp value.                                                                                                                                                                                                                                                                                                                                                                                           |
|spark.sql.parquet.recordLevelFilter.enabled             |false                                  |If true, enables Parquet's native record-level filtering using the pushed down filters. This configuration only has an effect when 'spark.sql.parquet.filterPushdown' is enabled and the vectorized reader is not used. You can ensure the vectorized reader is not used by setting 'spark.sql.parquet.enableVectorizedReader' to false.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 |
|spark.sql.parquet.respectSummaryFiles                   |false                                  |When true, we make assumption that all part-files of Parquet are consistent with summary files and we will ignore them when merging schema. Otherwise, if this is false, which is the default, we will merge all part-files. This should be considered as expert-only option, and shouldn't be enabled before knowing what it means exactly.                                                                                                                                                                                                                                                                                                                                                                                                                                                                             |
|spark.sql.parquet.writeLegacyFormat                     |false                                  |If true, data will be written in a way of Spark 1.4 and earlier. For example, decimal values will be written in Apache Parquet's fixed-length byte array format, which other systems such as Apache Hive and Apache Impala use. If false, the newer format in Parquet will be used. For example, decimals will be written in int-based format. If Parquet output is intended for use with systems that do not support this newer format, set to true.                                                                                                                                                                                                                                                                                                                                                                    |
|spark.sql.parser.quotedRegexColumnNames                 |false                                  |When true, quoted Identifiers (using backticks) in SELECT statement are interpreted as regular expressions.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              |
|spark.sql.pivotMaxValues                                |10000                                  |When doing a pivot without specifying values for the pivot column this is the maximum number of (distinct) values that will be collected without error.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
|spark.sql.queryExecutionListeners                       |<undefined>                            |List of class names implementing QueryExecutionListener that will be automatically added to newly created sessions. The classes should have either a no-arg constructor, or a constructor that expects a SparkConf argument.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             |
|spark.sql.redaction.options.regex                       |(?i)url                                |Regex to decide which keys in a Spark SQL command's options map contain sensitive information. The values of options whose names that match this regex will be redacted in the explain output. This redaction is applied on top of the global redaction configuration defined by spark.redaction.regex.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
|spark.sql.redaction.string.regex                        |<value of spark.redaction.string.regex>|Regex to decide which parts of strings produced by Spark contain sensitive information. When this regex matches a string part, that string part is replaced by a dummy value. This is currently used to redact the output of SQL explain commands. When this conf is not set, the value from `spark.redaction.string.regex` is used.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
|spark.sql.repl.eagerEval.enabled                        |false                                  |Enables eager evaluation or not. When true, the top K rows of Dataset will be displayed if and only if the REPL supports the eager evaluation. Currently, the eager evaluation is only supported in PySpark. For the notebooks like Jupyter, the HTML table (generated by _repr_html_) will be returned. For plain Python REPL, the returned outputs are formatted like dataframe.show().                                                                                                                                                                                                                                                                                                                                                                                                                                |
|spark.sql.repl.eagerEval.maxNumRows                     |20                                     |The max number of rows that are returned by eager evaluation. This only takes effect when spark.sql.repl.eagerEval.enabled is set to true. The valid range of this config is from 0 to (Int.MaxValue - 1), so the invalid config like negative and greater than (Int.MaxValue - 1) will be normalized to 0 and (Int.MaxValue - 1).                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       |
|spark.sql.repl.eagerEval.truncate                       |20                                     |The max number of characters for each cell that is returned by eager evaluation. This only takes effect when spark.sql.repl.eagerEval.enabled is set to true.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            |
|spark.sql.session.timeZone                              |Asia/Shanghai                          |The ID of session local timezone, e.g. "GMT", "America/Los_Angeles", etc.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                |
|spark.sql.shuffle.partitions                            |200                                    |The default number of partitions to use when shuffling data for joins or aggregations.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   |
|spark.sql.sources.bucketing.enabled                     |true                                   |When false, we will treat bucketed table as normal table                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 |
|spark.sql.sources.bucketing.maxBuckets                  |100000                                 |The maximum number of buckets allowed. Defaults to 100000                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                |
|spark.sql.sources.default                               |parquet                                |The default data source to use in input/output.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          |
|spark.sql.sources.parallelPartitionDiscovery.threshold  |32                                     |The maximum number of paths allowed for listing files at driver side. If the number of detected paths exceeds this value during partition discovery, it tries to list the files with another Spark distributed job. This applies to Parquet, ORC, CSV, JSON and LibSVM data sources.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
|spark.sql.sources.partitionColumnTypeInference.enabled  |true                                   |When true, automatically infer the data types for partitioned columns.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   |
|spark.sql.sources.partitionOverwriteMode                |STATIC                                 |When INSERT OVERWRITE a partitioned data source table, we currently support 2 modes: static and dynamic. In static mode, Spark deletes all the partitions that match the partition specification(e.g. PARTITION(a=1,b)) in the INSERT statement, before overwriting. In dynamic mode, Spark doesn't delete partitions ahead, and only overwrite those partitions that have data written into it at runtime. By default we use static mode to keep the same behavior of Spark prior to 2.3. Note that this config doesn't affect Hive serde tables, as they are always overwritten with dynamic mode. This can also be set as an output option for a data source using key partitionOverwriteMode (which takes precedence over this setting), e.g. dataframe.write.option("partitionOverwriteMode", "dynamic").save(path).|
|spark.sql.statistics.fallBackToHdfs                     |false                                  |If the table statistics are not available from table metadata enable fall back to hdfs. This is useful in determining if a table is small enough to use auto broadcast joins.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            |
|spark.sql.statistics.histogram.enabled                  |false                                  |Generates histograms when computing column statistics if enabled. Histograms can provide better estimation accuracy. Currently, Spark only supports equi-height histogram. Note that collecting histograms takes extra cost. For example, collecting column statistics usually takes only one table scan, but generating equi-height histogram will cause an extra table scan.                                                                                                                                                                                                                                                                                                                                                                                                                                           |
|spark.sql.statistics.size.autoUpdate.enabled            |false                                  |Enables automatic update for table size once table's data is changed. Note that if the total number of files of the table is very large, this can be expensive and slow down data change commands.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       |
|spark.sql.streaming.checkpointLocation                  |<undefined>                            |The default location for storing checkpoint data for streaming queries.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
|spark.sql.streaming.metricsEnabled                      |false                                  |Whether Dropwizard/Codahale metrics will be reported for active streaming queries.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       |
|spark.sql.streaming.multipleWatermarkPolicy             |min                                    |Policy to calculate the global watermark value when there are multiple watermark operators in a streaming query. The default value is 'min' which chooses the minimum watermark reported across multiple operators. Other alternative value is'max' which chooses the maximum across multiple operators.Note: This configuration cannot be changed between query restarts from the same checkpoint location.                                                                                                                                                                                                                                                                                                                                                                                                             |
|spark.sql.streaming.noDataMicroBatches.enabled          |true                                   |Whether streaming micro-batch engine will execute batches without data for eager state management for stateful streaming queries.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        |
|spark.sql.streaming.numRecentProgressUpdates            |100                                    |The number of progress updates to retain for a streaming query                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           |
|spark.sql.streaming.streamingQueryListeners             |<undefined>                            |List of class names implementing StreamingQueryListener that will be automatically added to newly created sessions. The classes should have either a no-arg constructor, or a constructor that expects a SparkConf argument.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             |
|spark.sql.thriftserver.scheduler.pool                   |<undefined>                            |Set a Fair Scheduler pool for a JDBC client session.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
|spark.sql.thriftserver.ui.retainedSessions              |200                                    |The number of SQL client sessions kept in the JDBC/ODBC web UI history.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
|spark.sql.thriftserver.ui.retainedStatements            |200                                    |The number of SQL statements kept in the JDBC/ODBC web UI history.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       |
|spark.sql.ui.retainedExecutions                         |1000                                   |Number of executions to retain in the Spark UI.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          |
|spark.sql.variable.substitute                           |true                                   |This enables substitution using syntax like ${var} ${system:var} and ${env:var}.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         |
|spark.sql.warehouse.dir                                 |file:/C:/Users/EDY/spark-warehouse     |The default location for managed databases and tables.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   |
+--------------------------------------------------------+---------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
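
For reference, any parameter in the list above can be read or overridden for the current session through the spark.conf handle, or with a SQL SET statement. A minimal sketch follows; the parameter name is taken from the table above, and the value 400 is an arbitrary illustration, not a recommendation:

scala> spark.conf.get("spark.sql.shuffle.partitions")              // read a single parameter for this session
scala> spark.conf.set("spark.sql.shuffle.partitions", "400")       // override it for this session only
scala> spark.sql("set spark.sql.shuffle.partitions").show(false)   // the SQL SET command shows the effective value

Note that a few of these entries (for example spark.sql.warehouse.dir) are static and have to be supplied when the SparkSession is built, e.g. via SparkSession.builder.config(...), rather than changed afterwards.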

