Updates and Deletes in ClickHouse®

AltinityDB
5 min read · Nov 12, 2018


It’s been two years already since the ClickHouse development team published an excellent blog post, “How to update data in ClickHouse”. In those old times ClickHouse supported only monthly partitions, and for mutable data they suggested using pretty exotic data structures. We were all waiting for a more convenient approach, and finally it is here: ClickHouse now supports updates and deletes! In this article, we will see how it works.

Test data

Let’s load a test table with some data:

:) select count(*) from system.columns where table='test_update';

┌─count()─┐
│     332 │
└─────────┘

:) select count(*) from test_update;

┌──count()─┐
│ 17925050 │
└──────────┘

We will be trying updates, since they are more interesting; deletes work pretty much the same way.

How it works

The syntax for updates and deletes is non-standard SQL. The ClickHouse team wanted to express the difference from traditional SQL: new updates and deletes are batch operations, performed asynchronously. They are even called ‘mutations’. The custom syntax highlights the difference.

ALTER TABLE <table_name> DELETE WHERE <filter>;

and

ALTER TABLE <table_name> UPDATE col1 = expr1, ... WHERE <filter>;
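
Since the rest of the article exercises updates, here is a hypothetical delete against our test table for comparison; it would remove the rows that we instead fix with an update below:

ALTER TABLE test_update DELETE WHERE event_status_key = 22;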

In our test table, there is a column event_status_key.

:) select event_status_key, count(*) from test_update where event_status_key in (0, 22) group by event_status_key;

┌─event_status_key─┬──count()─┐
│                0 │ 17824710 │
│               22 │     1701 │
└──────────────────┴──────────┘

Let’s consider that status 22 was a mistake and we want to fix it. This is just 0.01% of the data, but without DELETE or UPDATE, we would have to reload the table.

:) ALTER TABLE test_update UPDATE event_status_key=0 WHERE event_status_key=22;

0 rows in set. Elapsed: 0.067 sec.

It returns immediately, but the update is asynchronous, so we do not know whether the data has been updated yet. Let’s check:

:) select event_status_key, count(*) from test_update where event_status_key in (0, 22) group by event_status_key;

┌─event_status_key─┬──count()─┐
│                0 │ 17826411 │
└──────────────────┴──────────┘

Seems to be working. The status of the update operation can be looked up in the system.mutations table:

:) select * from system.mutations where table='test_update';

Row 1:
──────
database:                   test
table:                      test_update
mutation_id:                mutation_162.txt
command:                    UPDATE event_status_key = 0 WHERE event_status_key = 22
create_time:                2018-10-12 12:39:32
block_numbers.partition_id: ['']
block_numbers.number:       [162]
parts_to_do:                0
is_done:                    1
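
Because mutations are asynchronous, a simple way to wait for completion is to poll system.mutations until nothing is pending; a minimal sketch using the columns shown above:

:) select mutation_id, parts_to_do, is_done from system.mutations where table='test_update' and is_done = 0;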

The system.parts table gives another interesting insight:

:) select name, active, rows, bytes_on_disk, modification_time from system.parts where table='test_update' order by modification_time;

┌─name──────────────┬─active─┬────rows─┬─bytes_on_disk─┬───modification_time─┐
│ all_1_36_2        │      0 │ 3841126 │     637611245 │ 2018-10-12 12:16:24 │
│ all_37_75_2       │      0 │ 4358144 │     598548358 │ 2018-10-12 12:16:47 │
│ all_112_117_1     │      0 │  638976 │     167899233 │ 2018-10-12 12:17:00 │
│ all_151_155_1     │      0 │  778240 │      27388052 │ 2018-10-12 12:17:29 │
│ all_76_111_2      │      0 │ 3833856 │     989762502 │ 2018-10-12 12:17:30 │
│ all_156_161_1     │      0 │  837460 │      27490891 │ 2018-10-12 12:17:43 │
│ all_118_150_2     │      0 │ 3637248 │     859673147 │ 2018-10-12 12:17:52 │
│ all_1_36_2_162    │      1 │ 3841126 │     637611232 │ 2018-10-12 12:39:32 │
│ all_37_75_2_162   │      1 │ 4358144 │     598548352 │ 2018-10-12 12:39:32 │
│ all_76_111_2_162  │      1 │ 3833856 │     989762502 │ 2018-10-12 12:39:32 │
│ all_112_117_1_162 │      1 │  638976 │     167899233 │ 2018-10-12 12:39:32 │
│ all_118_150_2_162 │      1 │ 3637248 │     859673147 │ 2018-10-12 12:39:32 │
│ all_151_155_1_162 │      1 │  778240 │      27388052 │ 2018-10-12 12:39:32 │
│ all_156_161_1_162 │      1 │  837460 │      27490891 │ 2018-10-12 12:39:32 │
└───────────────────┴────────┴─────────┴───────────────┴─────────────────────┘

It shows that every part has been touched by the update. Still, it was pretty fast for our small test dataset.

Now let’s try to do something more difficult. There is an array column in our table that keeps integer segment IDs.

:) select count(*) from test_update where has(dmp_audience_ids, 31694239);

┌─count()─┐
│  228706 │
└─────────┘

Consider that we want to add an additional segment to all users with segment 31694239.

:) alter table test_update update dmp_audience_ids = arrayPushBack(dmp_audience_ids, 1234567) where has(dmp_audience_ids, 31694239);

Instant response again; let’s check whether the data has been updated correctly.

:) select count(*) from test_update where has(dmp_audience_ids, 1234567);

┌─count()─┐
│  228706 │
└─────────┘

:) select dmp_audience_ids from test_update where has(dmp_audience_ids, 1234567) and length(dmp_audience_ids)<5 limit 1;

┌─dmp_audience_ids─────────────────────┐
│ [31694239,31694422,31694635,1234567] │
└──────────────────────────────────────┘

It works perfectly and very fast!
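
The reverse operation can be sketched the same way. A hypothetical example that would remove the segment we just added, using arrayFilter:

:) alter table test_update update dmp_audience_ids = arrayFilter(x -> x != 1234567, dmp_audience_ids) where has(dmp_audience_ids, 1234567);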

What does not work

The new functionality is very helpful but has several limitations.

Updating the index column

Index (primary key) columns are not updatable.

:) ALTER TABLE test_update UPDATE event_key = 41 WHERE event_key = 40;

Received exception from server (version 18.12.17):
Code: 420. DB::Exception: Received from localhost:9000, ::1. DB::Exception: Cannot UPDATE key column `event_key`.
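
Since key columns cannot be updated in place, the usual workaround is to insert corrected copies of the rows and then delete the originals. A minimal hypothetical sketch, showing only two of the table’s columns for brevity:

-- Hypothetical: insert corrected rows first, then delete the old ones.
-- Only two columns are shown; the real INSERT must list all of them.
INSERT INTO test_update (event_key, event_status_key)
SELECT 41, event_status_key FROM test_update WHERE event_key = 40;

ALTER TABLE test_update DELETE WHERE event_key = 40;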

Distributed table updates

Updates to Distributed tables do not work. If you try to update a Distributed table, you may get something like:

Received exception from server (version 18.12.17):
Code: 48. DB::Exception: Received from localhost:9000, ::1. DB::Exception: Mutations are not supported by storage Distributed.

At the time of writing, ALTER TABLE UPDATE/DELETE did not support distributed DDL (ON CLUSTER), so more manual work is required in order to update or delete data in a distributed environment.
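
The manual approach boils down to running the same mutation on every shard against the underlying local table. A minimal sketch, assuming the Distributed table is built on top of a local table hypothetically named test_update_local:

-- Run this on every shard (the local table name is an assumption):
ALTER TABLE test_update_local UPDATE event_status_key = 0 WHERE event_status_key = 22;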

Frequent and point updates

The primary use case for the new functionality is rare bulk operations, since mutations require quite a lot of resource-consuming background work. It is not a good idea to use them for point (single-row) updates as in OLTP databases. For small datasets it may work, but as soon as part sizes become significant, updates may stall.
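
If a mutation does stall, its progress can be tracked in system.mutations as shown earlier. Newer ClickHouse versions also allow cancelling a stuck mutation with KILL MUTATION; a sketch, reusing the mutation_id from our earlier example:

KILL MUTATION WHERE database = 'test' AND table = 'test_update' AND mutation_id = 'mutation_162.txt';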

Implementation details

The limitations above follow from the implementation. The ClickHouse team named the new functionality ‘mutations’ for a reason: the data significantly changes (mutates) after every update or delete. ClickHouse stores data in physically sorted blocks called parts. A table may have one or several parts. ClickHouse tries to keep the number of parts small for fast queries with its magical merge process. Inside a part, every column is stored separately, sorted by the index and compressed. This is very efficient at query time, but not for updates and deletes. When table data needs to be updated or deleted, ClickHouse has to do the following:

  • Locate the parts that need to be modified using the WHERE condition
  • Rebuild every such part, removing the deleted rows or updating the changed values
  • Replace the old parts with the new, rebuilt ones

The rebuild operation may be very resource-intensive for big tables with huge parts (the default maximum part size is 150GB). It is more expensive for deletes, since an update only has to rewrite one or several columns, while a delete touches all of them (thanks again to the column store). Another important note: a mutation is atomic only within a single part; if multiple parts are affected, the changes are rolled over sequentially.

A few other details:

  • Mutations work for both replicated and non-replicated tables
  • Mutations are ordered; for replicated tables the ordering is synchronized via ZooKeeper
  • Several mutations on a single table can be merged into one operation to be more efficient
  • Merge operations put mutations on hold until the merge is complete

So it is pretty much designed for bulk operations.

Conclusion

ClickHouse finally supports UPDATE/DELETE functionality, though in its own custom way. There are some limitations, but it does the job when used for infrequent operations. In particular, it helps make ClickHouse GDPR compliant, since data can now be deleted at any time.

This is an important milestone and a huge usability improvement that extends ClickHouse capabilities. But the ClickHouse development team is not going to stop here: they have plans to add full SQL support for UPDATE/DELETE in 2019. Stay tuned!

For more updates check our blog.


Written by AltinityDB

Run ClickHouse® anywhere with Altinity: You control the environment, cost, data ownership, security. We support you every step of the way. Slack: bit.ly/34vnPLs