Stop & Start GCP SQL Instances at Night

One of the great things about Google Cloud SQL instances is that they let me shut them down at night. The data sticks around; the instance simply can't be reached while it's stopped, which is exactly what I want. Since my backend servers are already tucked in for the night (explained in another article), keeping the database running is just burning money for no reason. So, let's put it to sleep too.

That related article explains how I shut down the GCP instance groups at night. The two are complementary: once my Cloud Function is working, I configure it to stop and start as many GCP services as needed (where the service supports it).

Workflow

The plan is simple:

  1. Identify the target databases: Not all databases need this treatment. Some should be awake 24/7, while others can take a nap. To keep things organized, I add a label stop-and-start with the value yes to the instances that should shut down at night (a sketch of setting that label through the API follows this list). This makes it easy to differentiate between production and development environments.
  2. Schedule the stops and starts: I set up a scheduler that publishes a message with the desired action every night to stop the instances and every morning to start them again. No need for manual intervention; automation is the way to go.
  3. Create a Cloud Function: This function will do the actual stopping and starting of the instances. It finds all the affected databases and takes care of them one by one.
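
For the first step, the label can be set from the console or with gcloud; here is a minimal sketch that does it through the same Cloud SQL Admin API the rest of this article uses. The helper name add_stop_and_start_label, the project ID and the instance name are my own placeholders, and since patch semantics may replace the whole userLabels map, any existing labels should be repeated in the body.

from googleapiclient import discovery


def add_stop_and_start_label(project, instance):
    # Patch only the user labels; other instance settings are left untouched.
    sql = discovery.build("sqladmin", "v1")
    body = {"settings": {"userLabels": {"stop-and-start": "yes"}}}
    sql.instances().patch(project=project, instance=instance, body=body).execute()


# Hypothetical example: label the development database.
add_stop_and_start_label("my-dev-project", "dev-db")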

Cloud Function

The Cloud Function follows a straightforward process. It first looks for all the SQL instances labeled with stop-and-start: yes.

from googleapiclient import discovery


def find_sql_instances(project):
    # Build a client for the Cloud SQL Admin API.
    sql = discovery.build("sqladmin", "v1")
    all_instances = sql.instances().list(project=project).execute()
    sql_instances = []
    # "items" is absent when the project has no Cloud SQL instances at all.
    for instance in all_instances.get("items", []):
        labels = instance["settings"].get("userLabels", {})
        # Keep only the instances explicitly labeled stop-and-start: yes.
        if labels.get("stop-and-start") == "yes":
            sql_instances.append(instance["name"])
    return sql_instances
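
To sanity-check the filter before wiring everything together, the function can be called locally, assuming Application Default Credentials are configured and the Cloud SQL Admin API is enabled on the project; the project ID below is a placeholder.

# Hypothetical local check; prints something like ['dev-db', 'staging-db']
print(find_sql_instances("my-dev-project"))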

Then, it shuts them down one by one (or starts them up, depending on the time of day).

def start_or_stop_sql_instances(project, instances, start):
    sql = discovery.build("sqladmin", "v1")
    # ALWAYS keeps the instance running, NEVER shuts it down; patch only touches this setting.
    database_instance_body = {"settings": {"activationPolicy": "ALWAYS" if start else "NEVER"}}
    for instance in instances:
        if start:
            print(f"Starting SQL instance {instance}...")
        else:
            print(f"Stopping SQL instance {instance}...")
        sql.instances().patch(project=project, instance=instance, body=database_instance_body).execute()
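
One detail worth knowing: instances().patch() returns a long-running operation, so the function above fires the request and moves on without waiting for the instance to actually reach the stopped or running state. If you want to block until the change completes, here is a minimal sketch of a helper I would add, polling the operations endpoint of the same API.

import time

from googleapiclient import discovery


def wait_for_operation(project, operation_name):
    # Poll the Cloud SQL Admin API until the long-running operation reports DONE.
    sql = discovery.build("sqladmin", "v1")
    while True:
        operation = sql.operations().get(project=project, operation=operation_name).execute()
        if operation.get("status") == "DONE":
            return operation
        time.sleep(10)


# Hypothetical usage inside the loop above:
# operation = sql.instances().patch(...).execute()
# wait_for_operation(project, operation["name"])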

Here is the Cloud Function entry point.

import base64
import json
import os

import functions_framework


@functions_framework.cloud_event
def stop_and_start(cloud_event):
    # The message published by the scheduler arrives base64-encoded.
    encoded_data = cloud_event.data["message"]["data"]
    decoded_data = base64.b64decode(encoded_data).decode("utf-8")
    message = json.loads(decoded_data)

    action = message.get("action")
    if action == "stop":
        start = False
    elif action == "start":
        start = True
    else:
        print("No action specified")
        return

    project = os.getenv("PROJECT_ID")

    sql_instances = find_sql_instances(project)
    start_or_stop_sql_instances(project, sql_instances, start)
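
To try the entry point locally before deploying, one option is to build a CloudEvent by hand with the cloudevents package (which functions_framework relies on) and call the function directly. This is only a sketch: the project ID, topic name and PROJECT_ID value are placeholders, and it assumes it runs in the same module that defines stop_and_start.

import base64
import json
import os

from cloudevents.http import CloudEvent

# Placeholder project ID so the function can resolve it locally.
os.environ.setdefault("PROJECT_ID", "my-dev-project")

# Simulate the message the scheduler would publish.
payload = base64.b64encode(json.dumps({"action": "stop"}).encode("utf-8")).decode("utf-8")
attributes = {
    "type": "google.cloud.pubsub.topic.v1.messagePublished",
    "source": "//pubsub.googleapis.com/projects/my-dev-project/topics/stop-and-start",
}
event = CloudEvent(attributes, {"message": {"data": payload}})

stop_and_start(event)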

Conclusion

For small architectures, this may not seem like a big deal. But when you’re dealing with cloud environments that cost thousands of euros per month, every optimization counts. By automatically stopping and starting SQL instances, you avoid unnecessary expenses, because, let’s be honest, Google doesn’t need more of your money than necessary.

