I have created a local Minikube cluster and a Deployment for a hello-world example; a Service is created at the same time from the same manifest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  selector:
    matchLabels:
      run: hello-world-example
  replicas: 2
  template:
    metadata:
      labels:
        run: hello-world-example
    spec:
      containers:
      - name: hello-world
        image: gcr.io/google-samples/hello-app:2.0 # Works
        ports:
        - containerPort: 8080
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: hello-app-service
spec:
  # type: NodePort
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: hello-world
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      # nodePort: 31579
I apply the Deployment and Service with kubectl apply -f hello-app.yaml
I end up with the following services:
| NAMESPACE   | NAME              | TARGET PORT  | URL             |
|-------------|-------------------|--------------|-----------------|
| default     | hello-app-service | 8080         | http://IP:31813 |
| default     | kubernetes        | No node port |                 |
| kube-system | kube-dns          | No node port |                 |
Note: "IP" here is a placeholder for my actual node IP.
When I curl the URL for the hello-app-service, I end up with this:
curl: (7) Failed to connect to IP port 31813 after 0 ms: Connection refused
However, when I expose the deployment service manually in CLI with
kubectl expose deployment hello-world --type=LoadBalancer --port=8080
I get the following result:
| NAMESPACE   | NAME              | TARGET PORT  | URL             |
|-------------|-------------------|--------------|-----------------|
| default     | hello-app-service | 8080         | http://IP:31813 |
| default     | hello-world       | 8080         | http://IP:32168 |
| default     | kubernetes        | No node port |                 |
| kube-system | kube-dns          | No node port |                 |
And when I curl the URL for the new service "hello-world", I end up with the proper result:
Hello, world!
Version: 2.0.0
Hostname: hello-world-5bb7fff796-fmwl8
Can somebody please explain what I am doing wrong with the service? Why does the CLI-created service work while the YAML-defined service does not, despite using what looks like the same configuration?
I have tested the manifest both with the exact same service settings as the CLI command (using the LoadBalancer type) and with NodePort and an explicit nodePort.
Versions:
OS: Ubuntu 22.04.2 LTS
Docker version: 24.0.2
Kubectl version:
Client Version: v1.27.2
Kustomize Version: v5.0.1
Server Version: v1.26.3
Minikube version: v1.30.1
2 Answers
I’d change the selector on your Service so it actually maps to your Deployment’s Pods. Also, you can manually assign an external IP in your manifest:
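A minimal sketch of what that could look like, reusing the run: hello-world-example label from your Pod template (the externalIPs value below is hypothetical; substitute your node’s IP):

apiVersion: v1
kind: Service
metadata:
  name: hello-app-service
spec:
  type: LoadBalancer
  selector:
    run: hello-world-example   # must match the Pod labels, not the Deployment name
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
  externalIPs:
    - 192.168.49.2   # hypothetical address; use your node's IP

Your YAML selector (app.kubernetes.io/name: hello-world) matches no Pods, so the Service has no endpoints, which is why the connection is refused; kubectl expose copies the Deployment's labels automatically, which is why the CLI version works.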
EDIT: see Kubernetes service external ip pending
You could always change your Service to a NodePort or put an Ingress controller in front of it, but since you’re running minikube there’s a "magic command":
https://minikube.sigs.k8s.io/docs/handbook/accessing/
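From that page, the two commonly used options, assuming your service names from above: minikube service, which opens a tunnel to a single service and prints its URL, and minikube tunnel, which gives LoadBalancer services a reachable external IP.

# Print a reachable URL for (and tunnel to) the YAML-defined service
minikube service hello-app-service --url

# Or, in a separate terminal, assign external IPs to LoadBalancer services
minikube tunnel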
Cloud Run

Cloud Run is a managed serverless compute platform that lets you run containers directly on top of Google’s scalable infrastructure. Scaling depends on the factors below:

- The CPU utilization of existing instances over a one-minute window while they are processing requests or events, targeting 60% CPU utilization per scheduled instance.
- The current request concurrency, compared to the maximum concurrency, over a one-minute window.
- The maximum number of instances setting.
- The minimum number of instances setting.
You can use the commands below to set maximum and minimum instances and request concurrency:

gcloud run services update SERVICE --concurrency CONCURRENCY
gcloud run services update SERVICE --min-instances MIN-VALUE --max-instances MAX-VALUE
Cloud Run processes requests in parallel, not one at a time per second. So when you set maximum instances to 2 and concurrency to 20, the first instance handles 20 concurrent requests at a time, and the next 20 simultaneous requests are processed by the second instance.
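For example, to apply exactly those limits to a hypothetical service named my-service:

gcloud run services update my-service --concurrency 20 --max-instances 2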
Pricing: pay-per-use, with an always-free tier; billed time is rounded up to the nearest 100 milliseconds. Total cost is the sum of used CPU, memory, requests, and networking.
|             | CPU                            | MEMORY                        | REQUESTS                     |
|-------------|--------------------------------|-------------------------------|------------------------------|
| Price       | $0.00002400 per vCPU-second    | $0.00000250 per GiB-second    | $0.40 per million requests   |
| Always free | 180,000 vCPU-seconds per month | 360,000 GiB-seconds per month | 2 million requests per month |
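As a rough worked illustration of that table (made-up request, ignoring networking and the free tier): a request served by a 1 vCPU / 0.5 GiB instance for 140 ms is billed as 200 ms, so it costs about 0.2 s × $0.000024 = $0.0000048 for CPU, 0.2 s × 0.5 GiB × $0.0000025 = $0.00000025 for memory, and $0.40 / 1,000,000 = $0.0000004 for the request itself.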
Firestore

Get multiple documents from a collection:

import { collection, query, where, getDocs } from "firebase/firestore";
const q = query(collection(db, "cities"), where("capital", "==", true));
const querySnapshot = await getDocs(q);
querySnapshot.forEach((doc) => {
  // doc.data() is never undefined for query doc snapshots
  console.log(doc.id, " => ", doc.data());
});
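For contrast, fetching every document in a collection without a filter just passes the collection reference straight to getDocs; a minimal sketch, assuming the same db handle as above:

import { collection, getDocs } from "firebase/firestore";

const snapshot = await getDocs(collection(db, "cities"));
snapshot.forEach((doc) => {
  console.log(doc.id, " => ", doc.data());
});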
Docs:

| Parameter | Type   | Description                        |
|-----------|--------|------------------------------------|
| getDocs   | method | Get documents from the given query |

~~Deprecated: Do not use the Firebase version 8 SDK~~

Note: Always use the Firestore modular SDK.
https://firebase.google.com/docs/firestore
Cloud Functions:
Google Cloud Functions is a serverless execution environment for building and connecting cloud services. With Cloud Functions you write simple, single-purpose functions that are attached to events emitted from your cloud infrastructure and services.
A lean, single-purpose function means following the single-responsibility principle (SRP): your function does one thing only.

What types of Cloud Functions exist? There are HTTP functions, which you invoke via standard HTTP requests, and event-driven (background) functions, which are triggered by events from your cloud infrastructure.

Let’s check how to create a simple Hello World function through the console. You need an index.js file containing the simple Hello World code, and a package.json file containing only a name attribute and a version attribute:
file which contains only a name attribute and a version attribute.exports.helloWorld = (req, res) => {
let message = req.query.message || req.body.message || ‘Hello World!’;
res.status(200).send(message);
};
{
  "name": "sample-http",
  "version": "0.0.1"
}
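If you’d rather deploy that same function from the CLI instead of the console, a sketch like this should work (the runtime value is an assumption; pick any currently supported Node.js runtime):

gcloud functions deploy helloWorld --runtime nodejs18 --trigger-http --allow-unauthenticated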
Reference: Google Cloud Functions
Cloud Storage
Cloud Storage is a service for storing your objects in Google Cloud. An object is an immutable piece of data consisting of a file of any format. You store objects in containers called buckets. All buckets are associated with a project, and you can group your projects under an organization. Each project, bucket, and object in Google Cloud is a resource in Google Cloud, as are things such as Compute Engine instances.
Create a new bucket using command line
In your development environment, run the gcloud storage buckets create command:
gcloud storage buckets create gs://BUCKET_NAME
Create a new bucket using client libraries
from google.cloud import storage

def create_bucket_class_location(bucket_name):
    # Create a new bucket in the "us" location with the Coldline storage class.
    # bucket_name = "your-new-bucket-name"
    storage_client = storage.Client()
    bucket = storage_client.bucket(bucket_name)
    bucket.storage_class = "COLDLINE"
    new_bucket = storage_client.create_bucket(bucket, location="us")
    print(
        "Created bucket {} in {} with storage class {}".format(
            new_bucket.name, new_bucket.location, new_bucket.storage_class
        )
    )
    return new_bucket
When you create a bucket, you can specify a default storage class for it. When you add objects to the bucket, they inherit this storage class unless explicitly set otherwise. If you don’t specify a default storage class when you create a bucket, that bucket’s default storage class is set to Standard storage. The available classes are:
1. Standard
2. Nearline
3. Coldline
4. Archive
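To pick one of these at creation time from the command line, the bucket create command accepts a default storage class flag (a sketch; bucket name, class, and location are placeholders):

gcloud storage buckets create gs://BUCKET_NAME --default-storage-class=NEARLINE --location=US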
For more information on storage classes, refer to this document.