Hi.
In this video, I am going to walk you through how to work with cloudsnaps.
Cloudsnaps are point-in-time snapshots created from existing PVCs and stored in object storage
that is compatible with the S3 API.
In this case, we're going to use Google Cloud Storage, with a PostgreSQL database running
on top of a Portworx cluster.
So let's get started.
So before we go any further, let's take a look at the current environment.
So I have exactly one pod running as part of a deployment, and this pod is running a stateful
Postgres database.
And this is obviously backed by a PVC created from Portworx storage class.
So let's take a look at the data that's within this pod.
And this will help us figure out whether the database backup and restore process has been
successful or not.
So now we are within the pod and I am going to invoke the Postgres shell.
So let's take a look at the tables.
So we have a bunch of tables, and let's also do a count of the rows from one of the
tables.
So this table has about five million rows.
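To make those verification steps concrete, here's a rough sketch of what I'm running; the pod name, database user, and table name are placeholders for whatever your deployment actually created.

```
# Open a shell inside the Postgres pod (pod name is illustrative)
kubectl exec -it postgres-5f7d8c9b4-xk2lp -- bash

# Inside the pod, open the Postgres shell
psql -U postgres

-- In psql: list the tables, then count rows in one of them
\dt
SELECT count(*) FROM pgbench_accounts;
```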
Alright.
So let's quit the Postgres shell and also get out of the pod, and get started with the
cloudsnap process.
So cloudsnap, as I mentioned, is going to deal with the object storage service of the
specific cloud provider.
Because Portworx will have to talk to the API exposed by the cloud storage,
we need to ensure it has the right credentials.
So we need to populate the credentials with the right access keys and secret keys,
and in the case of Google Cloud, we basically need to create a service account
and a key that's going to be associated with the object storage API.
So for that, we are going to first go to Google Cloud console and then create a key based
on the service account.
So let's do that.
So here we are in the credentials section of the GCP console.
So click on Create credentials and choose Service Account Key.
And the Service Account Key is going to associate a key with one of the service accounts.
So we are going to choose a new service account and then assign a role.
This role deals explicitly with storage, so we'll set it to Storage Admin,
which gives full control of Google Cloud Storage resources.
So make sure you select this, give it a name, something like px-snap, and then click
Create and download the JSON file to your local development machine.
To save time, I have already gone ahead and created that and it's available here, so the
JSON file is already downloaded.
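If you prefer the command line over the console, the equivalent gcloud steps would look roughly like this; the service account name and MY_PROJECT are placeholders for your own values.

```
# Create a service account for Portworx cloudsnaps
gcloud iam service-accounts create px-snap --display-name "px-snap"

# Grant it Storage Admin on the project
gcloud projects add-iam-policy-binding MY_PROJECT \
  --member "serviceAccount:px-snap@MY_PROJECT.iam.gserviceaccount.com" \
  --role "roles/storage.admin"

# Download a JSON key for the service account
gcloud iam service-accounts keys create px-snap-key.json \
  --iam-account px-snap@MY_PROJECT.iam.gserviceaccount.com
```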
With that in place, we need to create a credential for Portworx to talk to Google Cloud Storage.
So before we go any further, we need to make sure that we are copying this
JSON file that we are seeing here onto one of the nodes of the Kubernetes cluster, because
we need to invoke the pxctl binary from one of the nodes.
So first, let's copy the JSON file that we just downloaded from the GCP console.
And once this is done, let's SSH into one of the nodes.
So here, we have three nodes in the cluster, so I'm going to SSH into one of them.
It doesn't matter which node, but you need to gain access to one of the nodes.
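Assuming plain SSH access to the nodes, copying the key over and logging in looks something like this; the user, hostname, and file paths are placeholders.

```
# Copy the downloaded service account key to one of the cluster nodes
scp px-snap-key.json user@node-1:/tmp/px-snap-key.json

# SSH into that node
ssh user@node-1
```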
So once we are on the node, we can gain access to the pxctl command.
So let's first take a look at the credentials.
So when we execute pxctl credentials list, it's going to show us all the available credentials.
In this case, we don't have any, so the output is blank.
Now I'm going to create a credential to talk to Google Cloud Storage.
So we are going to pass parameters for the provider, which is Google, the project ID,
which is very specific to your GCP project, and then the actual JSON key that you downloaded
from the GCP console.
So this command will result in Portworx creating the credentials required to talk to GCS, which
is Google Cloud Storage.
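As a sketch, the pxctl invocations look like this, assuming pxctl lives at its usual /opt/pwx/bin location on the node and that the flag names match your Portworx version; the project ID and key file path are placeholders.

```
# List existing credentials (empty at this point)
/opt/pwx/bin/pxctl credentials list

# Create a credential for Google Cloud Storage
/opt/pwx/bin/pxctl credentials create \
  --provider google \
  --google-project-id MY_PROJECT \
  --google-json-key-file /tmp/px-snap-key.json

# Verify the credential now shows up
/opt/pwx/bin/pxctl credentials list
```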
Now, when we run pxctl credentials list, it's going to show us a set of credentials:
we have the Google credentials and the project ID, and we can see it is not encrypted.
Verifying this is important before we create a cloudsnap.
Behind the scenes, cloudsnap will use these credentials to talk to the object storage
backend, which in this case is GCS.
Excellent.
So let's come out of this shell, since we don't need to be on the node anymore, and we'll
go ahead with the rest of the steps.
If you are familiar with creating local snapshots with Portworx, the process
is not very different.
The only additional prerequisite is that you need to create the credentials required for
the cloud provider.
And one important thing to note when you are creating credentials within Portworx is to
make sure that you are actually using Kubernetes as the secret store.
So when you are generating the Portworx spec, make sure that you choose the secret store
type as Kubernetes.
This makes it seamless to create credentials and access them from PVCs and
so on.
I have already gone ahead and integrated this with the spec, so we're already using the
Kubernetes secrets for Portworx.
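If you want to double-check that your install was generated that way, one quick test is to look for the -secret_type k8s argument on the Portworx containers; this assumes a DaemonSet-based install in the default kube-system namespace.

```
# Look for "-secret_type k8s" among the Portworx container arguments
kubectl -n kube-system get daemonset portworx -o yaml | grep secret_type
```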
Perfect.
Alright.
So with all of that in place, we can move on to storage ops where we deal with cloudsnaps
for backing up and restoring the database.
So let's take a look at the PVCs that we have, because we need a PVC as the base to create
a cloudsnap.
And if you've been following the series, you know that there is one and only one PVC backing
our deployment and the PostgreSQL pod.
So we're going to use this to essentially create the cloudsnap.
So what does the spec look like?
Well, it's not very different, except that within the annotations we are going to mention
cloud.
And this is the key difference between a local snapshot spec and a cloudsnap spec.
So in the cloudsnap spec, we'll add an annotation that indicates to Portworx that we're actually
using cloud.
And because there is only one set of credentials that we created, it's automatically going
to use that to talk to the object storage backend.
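A minimal sketch of that cloudsnap spec, assuming the STORK-managed VolumeSnapshot CRD; the snapshot and PVC names here are placeholders.

```
# The portworx/snapshot-type annotation set to "cloud" is what turns
# this into a cloudsnap rather than a local snapshot
cat <<EOF | kubectl apply -f -
apiVersion: volumesnapshot.external-storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: px-postgres-snapshot
  annotations:
    portworx/snapshot-type: cloud
spec:
  persistentVolumeClaimName: px-postgres-pvc
EOF
```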
So let's create the snapshot from this PVC.
We can verify it with a couple of commands: let's check the volume snapshot creation,
and then we're going to look at the data.
And here, we may have to wait for a couple of minutes because, at this point, Portworx
is negotiating with the cloud storage provider to create a bucket
and copy all the relevant files to that bucket.
And there we go.
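To check on the snapshot from the Kubernetes side and on the upload from the Portworx side, something like the following should work; note that pxctl cloudsnap status has to be run from one of the nodes.

```
# Check the snapshot objects created by STORK
kubectl get volumesnapshot
kubectl get volumesnapshotdatas

# From a node: check the status of the cloudsnap upload
/opt/pwx/bin/pxctl cloudsnap status
```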
Now, it's a good idea to check our object storage browser by accessing the console.
So here we have the object storage browser within the GCP console.
Currently, we have no buckets, but when we do a refresh, we're going to see
a new bucket.
And this was created by the cloudsnap through the credentials that we already supplied.
So this bucket holds the files related to the snapshot,
and it is a good indicator that the process has been successful and smooth.
Alright.
So with all of that in place, we are now ready to create a new PVC from the snapshot.
But let's check the storage classes first.
Again, if you're not familiar with Portworx snapshots, whether local or cloudsnap,
you need to understand that the magic behind snapshots is made possible by the STORK snapshot
storage class.
This is essential for taking a snapshot, restoring it, and managing the
entire life cycle.
So if you are using the default settings to install Portworx on your Kubernetes cluster,
by default, this is going to be installed, so you don't need to do anything specific.
But I wanted to call out the specific storage class which is responsible for the life cycle
of snapshots.
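You can confirm it is present with a quick check; the class is typically named stork-snapshot-sc, though that's worth verifying against your own install.

```
# List storage classes and inspect the STORK snapshot class
kubectl get storageclass
kubectl describe storageclass stork-snapshot-sc
```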
Okay.
Now, we're going to create a new PVC from the snapshot that we created.
So here is the PVC definition: everything remains the same, except that we are adding
an annotation indicating that this PVC is not created from an existing PV, but is coming
from a snapshot, and the annotation basically refers to that snapshot.
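Here's a sketch of that clone PVC, assuming the stork-snapshot-sc storage class; the snapshot name and requested size are placeholders.

```
# The annotation points at the VolumeSnapshot we created earlier,
# and stork-snapshot-sc takes care of restoring from it
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: px-postgres-snap-clone
  annotations:
    snapshot.alpha.kubernetes.io/snapshot: px-postgres-snapshot
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: stork-snapshot-sc
  resources:
    requests:
      storage: 2Gi
EOF
```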
Now we're going to create this PVC. So when we do 'kubectl get pvc', it's going to take
a few seconds, and after that it's going to be bound.
So let's put this on watch, and there we go.
Now, this is bound, which means we are now ready to use this with our deployments, pods
and other elements.
Perfect.
So now it's time for us to create a pod.
So let's check out the number of pods.
We still have one pod, the original Postgres, which is based on the PVC from which we
took the snapshot.
So let's leave this as is, and then create a new pod.
And this pod is not different from a standard deployment definition or a replica set or
a pod, except that we are pointing it to an existing claim called px-postgres-snap-clone,
and this PVC is based on the snapshot that we originally created.
So it's basically a cascading set of inter-dependent objects, from the
pod to the PVC to the snapshot.
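The only interesting part of the new deployment is its volumes section; a minimal excerpt, with everything else elided and the volume name as a placeholder, would look like this.

```
# Excerpt of the restored deployment's pod template: it simply
# mounts the clone PVC that was restored from the cloudsnap
volumes:
  - name: postgres-data
    persistentVolumeClaim:
      claimName: px-postgres-snap-clone
```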
Alright, so let's go ahead and create the new Postgres instance from the snapshot.
Okay, now when we do 'kubectl get pods',
we'll see there are two pods: the one which was created originally and contains the sample
data, and the new one, which is restored from the snapshot via the PVC
pointing to the cloudsnap.
So it's time for us to make sure that the data is still there.
So, what we'll do now is again grab the name of the pod, do an exec, access
the psql shell, and list the tables.
Perfect.
Looks like everything is intact, and we'll finally check if the count matches.
There should be five million rows within this table, and there we go.
We have exactly five million rows.
That's it.
Now we can quit the psql shell, and then get out of the pod.
So, that was the entire walkthrough: creating credentials with pxctl, pointing the snapshot
to those credentials, creating a snapshot, creating a PVC from that snapshot, and finally
creating a deployment that restores the existing snapshot for use within a pod or a deployment.
I hope you found this useful.
Thanks for watching.