There is a new feature in analytical databases that seems to be all the rage, particularly in cloud data warehouses: autoscaling. Autoscaling’s promise is that if you have a particularly demanding analytical workload, the database will spin up new storage and compute to get the job done. If you have a heavy workload, new resources spin up to meet the level of service you need, then automatically scale back down when they are no longer needed.
In the cloud databases I’ve evaluated, autoscaling seems to be implemented instead of tuning. Since the database scales automatically, vendors will tell you that tuning (or maintenance) becomes less critical. The tried-and-true way of handling increased or chaotic workloads is to have an enterprise architect look at the workload and design an optimal strategy for delivering it. This involves designing the right schemas and tuning individual queries with features like indexes, materialized views, and projections. The architect might leverage a system that can detect long-running queries and assign them different resources (memory, CPU, etc.) so that they don’t notably impact daily analytics. The newer autoscaling databases don’t offer much in the way of tuning. After all, you can just autoscale when things get slow.
Continue reading it here:
https://my.vertica.com/blog/auto-scalin ... t-magical/