The top 7 Google Cloud announcements of 2023
My take on the highlights Google Cloud released in 2023
2023 has been a massive year for cloud, security and AI. There have been so many announcements, an absolutely enormous Cloud Next conference, and the explosion of LLMs everywhere, from data centers to phones! With so much that came out, I thought I’d pen some of my release highlights. I’ve probably missed some big ones, but these were the ones that stuck out for me. Let’s get into my top seven! Here’s the emoji preview: 🪵🔐💿🤖🧱🔧🧠
Log Analytics in Cloud Logging went GA 🪵
Kicking off the year in January, this was a highlight for me. Powered by BigQuery, Log Analytics is a capability that:
allows you to search, aggregate and transform all log data types including application, network and audit log data at no additional cost for existing Cloud Logging customers
Other bonuses that came with the general availability of this feature included multi-region support for Log Analytics buckets, an updated query experience (such as saving and sharing queries), and custom log retention of up to 10 years!
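To give a feel for the kind of SQL aggregation this unlocks, here’s a rough local stand-in: an in-memory SQLite table with made-up log rows, since the real thing runs over your log buckets via BigQuery. The table name, columns and data are all hypothetical.

```python
import sqlite3

# Stand-in for a log bucket: a few made-up log entries in SQLite.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (severity TEXT, message TEXT)")
conn.executemany(
    "INSERT INTO logs VALUES (?, ?)",
    [
        ("ERROR", "db timeout"),
        ("INFO", "request ok"),
        ("ERROR", "db timeout"),
        ("WARNING", "slow query"),
    ],
)

# The kind of aggregation Log Analytics lets you run over real log data:
# count entries per severity, busiest first.
rows = conn.execute(
    "SELECT severity, COUNT(*) AS n FROM logs "
    "GROUP BY severity ORDER BY n DESC, severity"
).fetchall()
print(rows)  # [('ERROR', 2), ('INFO', 1), ('WARNING', 1)]
```

In the real product you’d point the same style of query at your log bucket’s view instead of a local table.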
Security Command Center now has a project-level PAYG option 🔐
This was probably my favourite announcement: SCC now has a pay as you go (PAYG) option, and it can be scoped down to a project level rather than an organisational level.
Security Command Center is a really great tool, and now you can pay just for what you use. Previously, the premium tier’s organisation-level pricing was prohibitively expensive for indie projects, which are much smaller in nature; now those projects can take advantage of Google’s SCC too. Huge W.
Pricing info here: https://cloud.google.com/security-command-center/pricing#project-level-activations
Run AlloyDB anywhere - DC, laptop or any cloud! 💿
For those getting onto high-performance Postgres with AlloyDB, Google Cloud created AlloyDB Omni: a downloadable edition of AlloyDB designed to run anywhere.
More info is here: https://cloud.google.com/alloydb/omni. It has a free developer edition too, so now you can develop against the same database interface you’d be using in the cloud or in production to get those integration tests running locally.
Yes please!
DuetAI for GCP announced. An AI-powered collaborator 🤖
I’m almost certain this doesn’t need an introduction, but just in case, DuetAI was announced at Next’23 this year, and it is being pushed HARD by Google. Especially in the past few months with the Gemini announcements. The short version is that it’s an AI assistant that will help you get stuff done much faster as it has context about what you’re trying to do.
For example, you could ask: “help me upgrade the nodes in my cluster to a different machine spec”, and DuetAI will look at your project’s resources, draw on the GCP documentation and commands, and help you write a command to change the machine type. It could even ask clarifying questions, like which cluster you’re referring to if you have multiple, or which machine spec you want.
I did a post on this recently where I used the in-editor assistance to get it to help me write a YAML file for a cloudbuild job. You can read that post below:
Infrastructure Manager. Provision GCP resources with Terraform 🧱
This one came out of left field for me, but was a very welcome one nonetheless! Infrastructure Manager is a managed way to get started with IaC on GCP, using Terraform resources to declaratively define your infrastructure.
The key points from the announcement are:
Infrastructure management (obviously). Makes use of the Terraform plan and apply workflows and sets up the Terraform automation for you.
Streamlines IaC. Quickly deploy stuff using pre-packaged modules and Jump Start Solutions.
Integrates with the GCP ecosystem. Things like IAM and logging work with Infrastructure Manager out of the box.
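For a sense of what Infrastructure Manager actually applies for you, here’s a minimal sketch of a Terraform config it could deploy. The project ID and bucket name are placeholders, not anything from the announcement.

```hcl
# Minimal Terraform config of the sort Infrastructure Manager applies.
# Project ID and bucket name below are hypothetical placeholders.
terraform {
  required_providers {
    google = {
      source = "hashicorp/google"
    }
  }
}

provider "google" {
  project = "my-project-id" # placeholder
}

resource "google_storage_bucket" "example" {
  name     = "my-example-bucket" # bucket names must be globally unique
  location = "US"
}
```

The point of the managed service is that you hand it config like this and it runs the plan/apply workflow for you, rather than you wiring up your own Terraform automation.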
Performance improvements from Cloud Spanner 🔧
I love me a good bit of cost savings! This announcement from Google on the performance improvements of Spanner, at no extra cost, was very welcome! Looked at another way, you can now do the same as you were doing before, but for less. Yay marketing and words!
But seriously.
we announced significant price-performance improvements for Cloud Spanner, now providing up to 50% increase in throughput and 2.5 times the storage per node than before, with no change in price.
They also go on to mention:
Spanner’s high throughput, virtually unlimited scale, single-digit millisecond latency, five 9s availability SLA, and strong external-consistency semantics are now available at half the cost of Amazon DynamoDB for most workloads.
Now don’t get me wrong, this is all great stuff, but I’m not sure why they wanted to compare Spanner to DynamoDB when they’re two fairly different products: one is a globally distributed, horizontally scaling SQL database, and the other is a serverless key/value NoSQL DB. Perhaps they just wanted to highlight the availability, scalability and latency features?
Anyway, Spanner’s price/performance improvements now enable using Spanner for both relational and key/value workloads, with each node able to support 10TB of storage (up from 4TB previously).
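To put the storage bump in perspective, here’s a quick back-of-the-envelope calculation. The per-node limits (4TB before, 10TB after) come from the announcement; the 100TB dataset size is a made-up example figure.

```python
import math

# Per-node storage limits from the announcement: 4 TB before, 10 TB after.
# The 100 TB dataset is a hypothetical example, not a real workload.

def nodes_needed(dataset_tb: float, tb_per_node: float) -> int:
    """Minimum node count to hold the dataset, ignoring CPU/throughput needs."""
    return math.ceil(dataset_tb / tb_per_node)

dataset_tb = 100  # hypothetical storage-bound workload

before = nodes_needed(dataset_tb, 4)   # old limit: 4 TB/node
after = nodes_needed(dataset_tb, 10)   # new limit: 10 TB/node

print(before, after)  # 25 10
```

So a purely storage-bound workload of that size would need less than half the nodes it did before, which is where the cost saving shows up.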
Memorystore for Redis Cluster went GA 🧠
Last, and definitely not least, is the general availability of Redis Cluster, which “provides up to 60 times more throughput and microseconds of latency” according to the blog post heading. A number of organisations are already making use of Redis Cluster, but it was not previously offered by GCP as a managed service. Welp, that changes with this release. Yay! Fewer things for me to have to care about.
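The throughput gain comes from sharding: Redis Cluster splits the keyspace across 16,384 hash slots (CRC16 of the key, modulo 16384), so reads and writes spread across the nodes that own those slots. Here’s a minimal sketch of that slot calculation, using the CRC16-XModem variant Redis uses:

```python
# Sketch of Redis Cluster's key -> hash slot mapping (CRC16-XModem mod 16384).
# This mirrors how the cluster decides which shard owns a given key.

def crc16_xmodem(data: bytes) -> int:
    """CRC16-XModem (polynomial 0x1021), the variant Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Hash slot (0-16383) that a key maps to in Redis Cluster."""
    return crc16_xmodem(key.encode()) % 16384

print(key_slot("user:1000"))
```

(The real implementation also honours `{hash tags}` within keys so related keys can land on the same slot; that’s omitted here for brevity.)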
So, what did you think? This year was pretty big, with a number of game-changing releases such as DuetAI, plus security and performance improvements as well as cost reductions. Google also listed their biggest news of the year over on their blog, which you can find more articles on here.