Google Artifact Registry is replacing Container Registry

You may have seen the warnings in your Google Cloud Console recently: the Google Container Registry (GCR) product will be deprecated in May 2023 in favor of Google Artifact Registry (GAR).

Whenever I see these kinds of warnings as an engineer on a DevOps, Infrastructure, or Platform team (how many ways can we be classified?!), my heart tends to drop into my stomach a bit because it means a vendor has tossed another bundle of work onto what’s likely an already overloaded plate. Since we’re all trying to think more positively these days (I hope), let’s discuss why this can be a Good Thing (tm) for all of us, especially when considering DevSecOps and Cybersecurity principles within a Google Cloud environment.

What’s good?

  • Transition away from bucket ACLs for access and move to IAM at the repository level (rather than a single container registry per project with ACL wrangling). This layout gives us better principle-of-least-privilege (PoLP) controls and lets us centralize a single project, or a small set of projects, as the place to hold all of our Org’s artifacts. Previously, I’d see every GCP project with its own Container Registry: foo-staging, foo-prod, web-staging, and web-prod would each have their own Container Registry resource with its own uniquely governed access policies. Spreading these across projects makes it hard to get a clear picture of the entire system, especially if you’re using third-party container vulnerability scanning and auditing tools. Access management also becomes a decentralized, tangled mess as time passes, teams re-org, etc.
  • We can stream images to GKE and serverless workloads to reduce startup times and network costs at scale. I love this, particularly for large (multi-GiB) swiss-army container images: pre-made CI/CD images with all the baked-in goodies used by an enterprise engineering team, where each job pulls in only what it needs, when it needs it. If you’re using smaller nodes like n2d-standard-8 and scaling from zero to hundreds or thousands of pods at any given time, this starts to make a big difference in wait times for CI jobs to complete (engineering $/hr) and in network costs ($/GiB) for data pulled from GAR into GKE.
  • You can start enforcing Org policies related to location constraints for where your artifacts are stored. For example, you can now explicitly decide where these Artifact Registries can reside and prevent them from being created outside those geo-political regions in high-security or compliance-regulated environments.
  • You can start enforcing encryption standards for your registries. For example, you can now create policies for your Organization that prevent the creation of any registry that isn’t CMEK-protected.
  • Previously, container security scans only looked for OS-level vulnerabilities; e.g., a vulnerability in glibc shipped with the base Ubuntu image would have been picked up and reported. Artifact Analysis now adds Go and Java vulnerability scanning to help you find issues in your packaged services as well as in those base OS image layers. From what I have seen, support for additional languages is in the works.
  • Two new types of registries: Virtual and Remote. These are currently in Preview status (Terraform users, you might need to wait a bit before these appear in GA GCP providers). Virtual repositories let us put a “single” repository in front of multiple repositories across projects and regions, which comes in handy if you have a cluster of repositories for a single “stack” and want to easily delegate access to the teams or services that pull them in for debugging or runtime purposes. Remote repositories are incredibly useful as they act as a pull-through cache for upstream Docker Hub or Maven Central artifacts. From an artifact governance perspective, this gives us much better visibility into what artifacts enter our environments and eliminates Docker Hub/Maven Central rate-limiting or outage concerns when scaling out in production environments.
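To make the repository-level IAM point concrete, here’s a minimal sketch using gcloud. The project, repository, and group names (acme-artifacts, app-images, web-team@acme.example) are hypothetical placeholders, not anything from a real environment:

```shell
# Create a Docker-format repository in a central "artifacts" project.
gcloud artifacts repositories create app-images \
  --project=acme-artifacts \
  --repository-format=docker \
  --location=us-central1 \
  --description="Central container images for the foo and web stacks"

# Grant a team read access to just this one repository,
# instead of ACLs on a project-wide GCS bucket.
gcloud artifacts repositories add-iam-policy-binding app-images \
  --project=acme-artifacts \
  --location=us-central1 \
  --member="group:web-team@acme.example" \
  --role="roles/artifactregistry.reader"
```

Because the binding is scoped to a single repository, each team’s access stays narrow even when many teams share one central artifacts project.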
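The location and CMEK enforcement bullets can be sketched with org policies; the org ID, KMS key path, and value group below are placeholders, and you should verify the constraint names against current documentation:

```shell
# Create a repository protected by a customer-managed encryption key (CMEK).
# The key path is a hypothetical example.
gcloud artifacts repositories create secure-images \
  --repository-format=docker \
  --location=europe-west1 \
  --kms-key=projects/acme-kms/locations/europe-west1/keyRings/artifacts/cryptoKeys/registry-key

# Deny non-CMEK usage of Artifact Registry across the Org.
cat > restrict-non-cmek.yaml <<'EOF'
constraint: constraints/gcp.restrictNonCmekServices
listPolicy:
  deniedValues:
  - artifactregistry.googleapis.com
EOF
gcloud resource-manager org-policies set-policy restrict-non-cmek.yaml \
  --organization=123456789012

# Restrict where resources (including registries) may be created.
cat > resource-locations.yaml <<'EOF'
constraint: constraints/gcp.resourceLocations
listPolicy:
  allowedValues:
  - in:eu-locations
EOF
gcloud resource-manager org-policies set-policy resource-locations.yaml \
  --organization=123456789012
```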
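And for the remote-repository pull-through cache, a hedged sketch; since this feature is in Preview, the flags (and the exact pull path format) may change, so treat this as illustrative:

```shell
# Remote repository acting as a pull-through cache for Docker Hub.
gcloud artifacts repositories create dockerhub-cache \
  --project=acme-artifacts \
  --repository-format=docker \
  --mode=remote-repository \
  --remote-docker-repo=DOCKER-HUB \
  --location=us-central1

# Pulls then route through GAR instead of hitting Docker Hub directly
# (the "library/" prefix for official images is my assumption here).
docker pull us-central1-docker.pkg.dev/acme-artifacts/dockerhub-cache/library/nginx:latest
```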

Google isn’t just going to cut you off, either. They’ve made a respectable upgrade path out of GCR that keeps the existing gcr.io domains working (you’ve probably got them baked into a ton of stuff!).
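One piece of that upgrade path is per-project redirection of gcr.io traffic to Artifact Registry; as a rough sketch (the project name is a placeholder, and you should confirm the command against the current migration docs before running it):

```shell
# After copying images into Artifact Registry, serve existing gcr.io
# pull/push requests for this project from GAR instead of GCR.
gcloud artifacts settings enable-upgrade-redirection --project=acme-prod
```

This is what lets your existing Kubernetes manifests and CI configs keep referencing gcr.io paths while the backing storage moves to GAR.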

Another nice thing about GAR is that it doesn’t just support container images. Google has been adding support for other artifact ecosystems such as Java, Node.js, and Python. There’s also a preview for Go module registries if your projects have been enrolled in the pre-GA Preview group for Artifact Registry (ask your GCP account rep for the enrollment form). The support for libraries/modules as artifacts in GAR is another win for those of us in the DevSecOps camp, as we can start migrating our engineering teams to these intermediary registries rather than pulling directly from public sources. This design gives us better governance thanks to a single path for how these artifacts enter our networks, improved build/pull times due to caching within GCP’s network, and resiliency when public registries are down for maintenance (planned or otherwise!) or even under DDoS attack.
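As a taste of the non-container side, here’s what a Python repository looks like in practice; the project and repository names (acme-artifacts, py-packages) are hypothetical:

```shell
# Create a Python-format repository.
gcloud artifacts repositories create py-packages \
  --project=acme-artifacts \
  --repository-format=python \
  --location=us-central1

# Print ready-to-paste pip / twine configuration for this repository.
gcloud artifacts print-settings python \
  --project=acme-artifacts \
  --repository=py-packages \
  --location=us-central1

# Consumers then install through the registry instead of PyPI directly:
pip install \
  --index-url https://us-central1-python.pkg.dev/acme-artifacts/py-packages/simple/ \
  requests
```

Point your engineers’ pip.conf (or npm/Maven equivalents) at the registry once, and every subsequent pull flows through that single governed path.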

Hopefully, this GCR to GAR transition seems less scary after I’ve outlined all the lovely new benefits of Google Artifact Registry! If you’d like to learn more about transitioning out of GCR or how to use GAR to build an effective DevSecOps strategy for your artifacts and container images, I’d be more than happy to assist!

- Kevin