
v24.10 Changelog for Cloud

Relevant changes for ClickHouse Cloud services based on the v24.10 release.

Backward Incompatible Change

  • Allow writing SETTINGS before FORMAT in a chain of queries with UNION when subqueries are inside parentheses. This closes #39712. This changes the behavior when a query has the SETTINGS clause specified twice in a sequence: the closest SETTINGS clause now takes precedence for the corresponding subquery. In previous versions, the outermost SETTINGS clause could take precedence over the inner one. #60197 #68614 (Alexey Milovidov).
  • Reimplement the Dynamic type. Now when the limit of dynamic data types is reached, new types are no longer cast to String but are stored in a special data structure in binary format with a binary-encoded data type. Any type ever inserted into a Dynamic column can now be read from it as a subcolumn. #68132 (Pavel Kruglov).
  • Expressions like a[b].c are supported for named tuples, as well as named subscripts from arbitrary expressions, e.g., expr().name. This is useful for processing JSON. This closes #54965. In previous versions, an expression of form expr().name was parsed as tupleElement(expr(), name), and the query analyzer was searching for a column name rather than for the corresponding tuple element; while in the new version, it is changed to tupleElement(expr(), 'name'). In most cases, the previous version was not working, but it is possible to imagine a very unusual scenario when this change could lead to incompatibility: if you stored names of tuple elements in a column or an alias, that was named differently than the tuple element's name: SELECT 'b' AS a, CAST([tuple(123)] AS 'Array(Tuple(b UInt8))') AS t, t[1].a. It is very unlikely that you used such queries, but we still have to mark this change as potentially backward incompatible. #68435 (Alexey Milovidov).
  • When the setting print_pretty_type_names is enabled, it will print Tuple data type in a pretty form in SHOW CREATE TABLE statements, formatQuery function, and in the interactive mode in clickhouse-client and clickhouse-local. In previous versions, this setting was only applied to DESCRIBE queries and toTypeName. This closes #65753. #68492 (Alexey Milovidov).
  • Reordering of filter conditions from the [PRE]WHERE clause is now allowed by default. It can be disabled by setting allow_reorder_prewhere_conditions to false. #70657 (Nikita Taranov).
  • Fix optimize_functions_to_subcolumns optimization (previously could lead to Invalid column type for ColumnUnique::insertRangeFrom. Expected String, got LowCardinality(String) error), by preserving LowCardinality type in mapKeys/mapValues. #70716 (Azat Khuzhin).
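To illustrate the first change above, a minimal sketch (behavior as described in the entry; the setting and values here are chosen only for illustration): with subqueries in parentheses, the SETTINGS clause closest to a subquery now takes precedence for that subquery, and SETTINGS may be written before FORMAT.

```sql
-- In v24.10, the inner SETTINGS applies to the first subquery;
-- previously the outer SETTINGS could override it.
(SELECT 1 SETTINGS max_threads = 1)
UNION ALL
(SELECT 2)
SETTINGS max_threads = 4
FORMAT TSV;
```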

New Feature

  • Function toStartOfInterval() now has a new overload which emulates TimescaleDB's time_bucket() function and PostgreSQL's date_bin() function (#55619). It allows aligning date or timestamp values to multiples of a given interval from an arbitrary origin (instead of the fixed origin 0000-01-01 00:00:00.000). For example, SELECT toStartOfInterval(toDateTime('2023-01-01 14:45:00'), INTERVAL 1 MINUTE, toDateTime('2023-01-01 14:35:30')); returns 2023-01-01 14:44:30, which is a multiple of 1-minute intervals counted from the origin 2023-01-01 14:35:30. #56738 (Yarik Briukhovetskyi).
  • MongoDB integration refactored: migrated to the new mongocxx driver from the deprecated Poco::MongoDB, removed support for the deprecated old protocol, added support for connection by URI and for all MongoDB types, added support for evaluating WHERE and ORDER BY on the MongoDB side, and restricted expressions unsupported by MongoDB. #63279 (Kirill Nikiforov).
  • A new --progress-table option in clickhouse-client prints a table with metrics changing during query execution; a new --enable-progress-table-toggle is associated with the --progress-table option, and toggles the rendering of the progress table by pressing the control key (Space). #63689 (Maria Khristenko).
  • Allow granting access to wildcard prefixes: GRANT SELECT ON db.table_prefix_* TO user. #65311 (pufit).
  • Introduced JSONCompactWithProgress format where ClickHouse outputs each row as a newline-delimited JSON object, including metadata, data, progress, totals, and statistics. #66205 (Alexey Korepanov).
  • Add system.query_metric_log which contains history of memory and metric values from table system.events for individual queries, periodically flushed to disk. #66532 (Pablo Marcos).
  • Add the input_format_json_empty_as_default setting which, when enabled, treats empty fields in JSON inputs as default values. Closes #59339. #66782 (Alexis Arnaud).
  • Added functions overlay and overlayUTF8 which replace parts of a string by another string. Example: SELECT overlay('Hello New York', 'Jersey', 11) returns Hello New Jersey. #66933 (李扬).
  • Add a new command for lightweight delete in partition: DELETE FROM [db.]table [ON CLUSTER cluster] [IN PARTITION partition_expr] WHERE expr. #67805 (sunny).
  • Implemented comparison for Interval data type values; they are now converted to the least supertype before comparison. #68057 (Yarik Briukhovetskyi).
  • Add create_if_not_exists setting to default to IF NOT EXISTS behavior during CREATE statements. #68164 (Peter Nguyen).
  • Make it possible to read Iceberg tables in Azure and locally. #68210 (Daniil Ivanik).
  • Add aggregate functions distinctDynamicTypes/distinctJSONPaths/distinctJSONPathsAndTypes for better introspection of JSON column type content. #68463 (Pavel Kruglov).
  • Query cache entries can now be dropped by tag. For example, the query cache entry created by SELECT 1 SETTINGS use_query_cache = true, query_cache_tag = 'abc' can now be dropped by SYSTEM DROP QUERY CACHE TAG 'abc' (or of course just: SYSTEM DROP QUERY CACHE which will clear the entire query cache). #68477 (Michał Tabaszewski).
  • A simple SELECT query can be written with implicit SELECT to enable calculator-style expressions, e.g., ch "1 + 2". This is controlled by a new setting, implicit_select. #68502 (Alexey Milovidov).
  • Support --copy mode for clickhouse local as a shortcut for format conversion #68503. #68583 (Denis Hananein).
  • Added ripeMD160 function, which computes the RIPEMD-160 cryptographic hash of a string. Example: SELECT hex(ripeMD160('The quick brown fox jumps over the lazy dog')) returns 37F332F68DB77BD9D7EDD4969571AD671CF9DD3B. #68639 (Dergousov Maxim).
  • Add virtual column _headers for url table engine. Closes #65026. #68867 (flynn).
  • Add a system.projections table to track available projections. #68901 (Jordi Villar).
  • Add support for arrayUnion function. #68989 (Peter Nguyen).
  • Add a new function arrayZipUnaligned for Spark compatibility (arrays_zip), which allows unaligned arrays, based on the original arrayZip. Example: SELECT arrayZipUnaligned([1], [1, 2, 3]). #69030 (李扬).
  • Support aggregate function quantileExactWeightedInterpolated, an interpolated version of quantileExactWeighted. Some people may wonder why we need a new quantileExactWeightedInterpolated when we already have quantileExactInterpolatedWeighted: the new one is more accurate. It also exists for Spark compatibility in Apache Gluten. #69619 (李扬).
  • Support function arrayElementOrNull. It returns NULL if the array index is out of range or the map key is not found. #69646 (李扬).
  • Support Dynamic type in most functions by executing them on internal types inside Dynamic. #69691 (Pavel Kruglov).
  • Add argument scale (default: true) to function arrayAUC, which allows skipping the normalization step (issue #69609). #69717 (gabrielmcg44).
  • Re-added RIPEMD160 function, which computes the RIPEMD-160 cryptographic hash of a string. Example: SELECT HEX(RIPEMD160('The quick brown fox jumps over the lazy dog')) returns 37F332F68DB77BD9D7EDD4969571AD671CF9DD3B. #70087 (Dergousov Maxim).
  • Allow caching read files for object storage table engines and data lakes, using a hash of the ETag + file path as the cache key. #70135 (Kseniia Sumarokova).
  • Support reading Iceberg tables on HDFS. #70268 (flynn).
  • Allow reading/writing the JSON type as a binary string in the RowBinary format under settings input_format_binary_read_json_as_string/output_format_binary_write_json_as_string. #70288 (Pavel Kruglov).
  • Allow to serialize/deserialize JSON column as single String column in Native format. For output use setting output_format_native_write_json_as_string. For input, use serialization version 1 before the column data. #70312 (Pavel Kruglov).
  • Support standard CTE syntax with INSERT (WITH ... INSERT ...); previously only INSERT ... WITH ... was supported. #70593 (Shichao Jin).
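
As a worked example of the new toStartOfInterval() origin overload described above (using the values from the entry itself): the origin 2023-01-01 14:35:30 plus 9 whole 1-minute intervals gives 14:44:30, the latest bucket boundary not after 14:45:00.

```sql
-- Align a timestamp to 1-minute buckets counted from a custom origin,
-- in the style of TimescaleDB's time_bucket() / PostgreSQL's date_bin().
SELECT toStartOfInterval(
    toDateTime('2023-01-01 14:45:00'),
    INTERVAL 1 MINUTE,
    toDateTime('2023-01-01 14:35:30'));
-- returns 2023-01-01 14:44:30 (origin + 9 whole minutes)
```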