GKE Filestore

Creating Filestore instance #

Here I am going to create a GCP Filestore instance and mount it inside a GKE pod.

Terraform code for filestore #

resource "google_filestore_instance" "prageesha_filestore_instance" {
  name = "prageesha-filestore-instance"
  zone = "us-central1-a"
  tier = "STANDARD"
  project = var.project_id

  file_shares {
    capacity_gb = 1024 # STANDARD tier minimum is 1 TB (1024 GB)
    name        = "dp_vol"
  }

  networks {
    network = var.vpc_name
    modes   = ["MODE_IPV4"]
    reserved_ip_range = "172.16.1.0/29"
  }
}
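Since the IP address assigned from the reserved range is needed later as the NFS remote target, a hedged sketch: the provider exports it via the `ip_addresses` attribute, so a Terraform output can surface it (the output name `filestore_ip` is my own choice):

```terraform
# Expose the Filestore instance's reserved IP so the NFS remote
# target (<ip>:/dp_vol) can be read from `terraform output`.
output "filestore_ip" {
  value = google_filestore_instance.prageesha_filestore_instance.networks[0].ip_addresses[0]
}
```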

The remote target I got: #

172.16.1.2:/dp_vol

Create PV and PVC #

apiVersion: v1
kind: PersistentVolume
metadata:
  name: fileserver
spec:
  capacity:
    storage: 1T
  accessModes:
  - ReadWriteMany
  nfs:
    # This subdirectory must already exist on the share, or mounting will fail
    path: /dp_vol/db-data
    server: 172.16.1.2

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fileserver-claim
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1T
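As written, the claim will bind to any available PV that satisfies it. If you want to pin it to this specific PV, an optional variant with `volumeName` (sketch):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fileserver-claim
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""
  volumeName: fileserver   # bind explicitly to the PV defined above
  resources:
    requests:
      storage: 1T
```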

Deployment #

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-busybox
spec:
  replicas: 1
  selector:
    matchLabels:
      name: nfs-busybox
  template:
    metadata:
      labels:
        name: nfs-busybox
    spec:
      containers:
      - image: praqma/network-multitool:latest
        imagePullPolicy: Always
        name: busybox
        volumeMounts:
          # name must match the volume name below
          - name: my-pvc-nfs
            mountPath: "/mnt"
      volumes:
      - name: my-pvc-nfs
        persistentVolumeClaim:
          claimName: fileserver-claim
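Once the deployment is up, a quick way to confirm the share is actually mounted inside the pod (command sketch; assumes `kubectl` access to the cluster):

```shell
# Check that /mnt is backed by the Filestore NFS share
kubectl exec deploy/nfs-busybox -- df -h /mnt

# Write and read back a file to verify the mount is writable
kubectl exec deploy/nfs-busybox -- sh -c 'echo hello > /mnt/test.txt && cat /mnt/test.txt'
```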

How can you test? #

You can create a VM in the same GCP VPC network and try to mount the volume there:

https://cloud.google.com/filestore/docs/mounting-fileshares#install-nfs-yum

Terraform code for a VM to mount the filestore volume #

resource "google_compute_instance" "vm_bastion_prod" {
  project      = var.project_id
  count        = 1
  name         = "prageesha-vm-bastion-us-${count.index}"
  machine_type = "n1-standard-1"
  zone         = "us-central1-a"
  tags         = ["bastion"]

  boot_disk {
    initialize_params {
      image = "centos-cloud/centos-7"
      size  = "20"
      type  = "pd-standard"
    }
  }

  network_interface {
    network = var.vpc_name
    subnetwork = "subnet-dev-vms"
    access_config {
      // Ephemeral IP - leaving this block empty will generate a new external IP and assign it to the machine
    }
  }
}

resource "google_compute_firewall" "bastion-ssh" {
  project = var.project_id
  name    = "prageesha-us-bastion-ssh"
  network = var.vpc_name

  allow {
    protocol = "tcp"
    ports    = ["22", "80", "9090", "443", "8443", "8080"]
  }

  # Open to the internet for convenience; restrict this range in production
  source_ranges = [
    "0.0.0.0/0",
  ]

  target_tags = ["bastion"]
}
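With the firewall rule in place, you can SSH to the bastion via gcloud (sketch; the instance name assumes `count.index` is 0, and `PROJECT_ID` is a placeholder for your project):

```shell
# SSH into the bastion VM created above
gcloud compute ssh prageesha-vm-bastion-us-0 --zone us-central1-a --project "$PROJECT_ID"
```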

Log in to your VM and run the commands below to mount the volume:

sudo yum -y update
sudo yum -y install nfs-utils

sudo mkdir -p my-dir

# Generic form: sudo mount <ip-address>:/<fileshare>/<sub-dir> <mount-point>
sudo mount 172.16.1.2:/dp_vol my-dir
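To make the mount survive reboots, you can add an /etc/fstab entry (config sketch; the absolute mount point /mnt/my-dir is an assumption):

```
172.16.1.2:/dp_vol  /mnt/my-dir  nfs  defaults,_netdev  0  0
```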

Dynamically Provision PVs #

Create an instance of NFS-Client Provisioner connected to the Cloud Filestore instance you created earlier via its IP address (${FSADDR}). The NFS-Client Provisioner creates a new storage class: nfs-client. Persistent volume claims against that storage class will be fulfilled by creating persistent volumes backed by directories under the /volumes directory on the Cloud Filestore instance’s managed storage.

Source: https://cloud.google.com/community/tutorials/gke-filestore-dynamic-provisioning

# Helm 2 syntax; with Helm 3 the release name is a positional argument instead of --name
helm install stable/nfs-client-provisioner --name nfs-cp --set nfs.server=${FSADDR} --set nfs.path=/volumes
watch kubectl get po -l app=nfs-client-provisioner

Make a persistent volume claim #

helm install --name postgresql --set persistence.storageClass=nfs-client stable/postgresql
watch kubectl get po -l app=postgresql
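Beyond the postgresql chart, any claim against the new storage class is fulfilled by a dynamically provisioned PV backed by a directory under /volumes. A minimal sketch (the claim name test-claim is made up here):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: nfs-client   # class created by the NFS-Client Provisioner
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```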