Experience is what you get soon after you need it.




My Cloud Certifications:

Certified Cloud Security Professional (ISC2)

CyberSecurity Certified Professional (ISC2)

AWS Certified Solutions Architect Associate

Azure Certified Architect Expert

Azure Certified Architect

Azure Certified Administrator

Oracle Cloud Infrastructure 2018 Certified Architect Associate

Oracle Cloud Infrastructure Classic 2018 Certified Architect Associate

Oracle Database Cloud Administrator Certified Professional

Oracle Database Cloud Service Operations Certified Associate


Monday, November 11, 2024

Use Ingress to route web traffic to your Pods via Services

 Basics:

The Purpose of a Service in Kubernetes and How It Differs from Ingress

In Kubernetes, a Service is an abstraction layer (essentially an internal load balancer) that sits in front of a set of Pods and provides a stable endpoint for accessing them. The primary purpose of a Service is to enable communication between different components within a Kubernetes cluster. A Service allows other Pods or external applications to reach the backend Pods regardless of the Pods' individual IP addresses, which can change over time. By defining a Service, you ensure that users or other services can consistently access that set of Pods, with load balancing and service discovery built in.

A Service in Kubernetes usually takes one of several forms, including ClusterIP (for internal access within the cluster), NodePort (exposes the service on a specific port on each node), and LoadBalancer (exposes the service externally via a load balancer). Services are essentially network resources that allow you to abstract the complexity of individual pods and ensure reliable communication.
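As a quick illustration (the Deployment name onlinecart and the web-app namespace are just assumptions for this sketch), a ClusterIP Service can be created and inspected imperatively with kubectl:

# expose an existing Deployment as a ClusterIP Service on port 8080
kubectl expose deployment onlinecart --name=onlinecart-service --type=ClusterIP --port=8080 --target-port=8080 -n web-app

# the Service gets a stable virtual IP; the Pod IPs behind it can change freely
kubectl get svc onlinecart-service -n web-app
kubectl get endpoints onlinecart-service -n web-app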


An Ingress, on the other hand, is a higher-level abstraction for managing external access to Services in the cluster. While Services direct traffic within the cluster, an Ingress defines how external HTTP(S) traffic should reach the right Service based on rules you define. For example, an Ingress can route traffic based on the URL path or host, such as sending `shaiksameer.com/onlinecart` to one Service and `shaiksameer.com/video` to another, making it a powerful tool for routing external traffic to internal services.

In short: Services route internal/node traffic to Pods, while an Ingress routes external HTTP/HTTPS traffic to Services.



Now, imagine we have two services that are each pointing to two different running pods. For instance, one service could point to an online shopping cart application running in one pod, while another service points to a video streaming application running in another pod. These services make it possible for traffic to be sent to the right application, with each service forwarding requests to its respective pods. However, the challenge comes when we want to expose these services to the outside world and manage how traffic is directed to them based on specific paths in the URL.

controlplane ~ ➜  k get svc -n web-app 
NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
default-http-backend   ClusterIP   10.109.20.37     <none>        80/TCP     20m
video-service          ClusterIP   10.107.56.221    <none>        8080/TCP   20m
onlinecart-service     ClusterIP   10.102.149.138   <none>        8080/TCP   20m


In this scenario, we can use an Ingress Controller to manage external traffic routing. Instead of directly exposing the services via NodePorts or LoadBalancers, we can configure an Ingress resource to route traffic based on URL paths. For example, requests coming to `shaiksameer.com/onlinecart` could be routed to the service that handles the online shopping cart pod, while requests to `shaiksameer.com/video` would be directed to the service serving the video pod.


controlplane ~ ➜  k describe svc onlinecart-service -n web-app 
Name:                     onlinecart-service
Namespace:                web-app
Labels:                   <none>
Annotations:              <none>
Selector:                 app=webapp-onlinecart
Type:                     ClusterIP
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.102.149.138
IPs:                      10.102.149.138
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
Endpoints:                10.244.0.4:8080
Session Affinity:         None
Internal Traffic Policy:  Cluster
Events:                   <none>

To achieve this, we’ll first create an Ingress Controller, which will act as the entry point for incoming HTTP(S) traffic. The Ingress Controller will inspect the incoming request and, based on predefined rules, forward it to the correct service. These rules will use the URL path to make decisions, ensuring that traffic for `/onlinecart` goes to one service and traffic for `/video` goes to another. Once the traffic reaches the correct service, that service will then forward the request to the appropriate pod, ensuring the right application handles the request.
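Note that an Ingress resource does nothing by itself; an Ingress Controller must be running in the cluster to act on it. As a rough sketch, one common option is the NGINX ingress controller installed via Helm (assuming Helm is available; a default install registers the ingress class name nginx, so the ingressClassName in your Ingress must match whatever class your controller actually uses):

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace

# confirm the controller is up and note the class it registered
kubectl get pods -n ingress-nginx
kubectl get ingressclass

With a controller in place, the Ingress resource below defines the path-based routing rules: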


apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx-example
  rules:
  - http:
      paths:
      - path: /onlinecart
        pathType: Prefix
        backend:
          service:
            name: onlinecart-service
            port:
              number: 8080
      - path: /video
        pathType: Prefix
        backend:
          service:
            name: video-service
            port:
              number: 8080

By using an Ingress, you can manage multiple services with a single external endpoint, making it easier to scale and maintain your Kubernetes applications. This setup also allows you to implement advanced routing logic, such as load balancing, SSL termination, and path-based routing, all of which are essential for managing complex, distributed applications efficiently in Kubernetes.
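For the SSL termination piece, a common pattern (sketched here with assumed file and secret names) is to store the certificate and key in a TLS secret:

# create a TLS secret from an existing certificate and key (file names are placeholders)
kubectl create secret tls shaiksameer-tls --cert=tls.crt --key=tls.key -n web-app

The secret is then referenced under spec.tls in the Ingress (hosts plus secretName), and the controller terminates HTTPS before forwarding plain HTTP to the backend Services.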


controlplane ~  k replace -f ingress.yml -n web-app 
ingress.networking.k8s.io/test-ingress replaced
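To sanity-check the routing, confirm the Ingress rules and then hit both paths. This is only a rough sketch; it assumes shaiksameer.com resolves to the ingress controller's external address (for a lab cluster, a temporary /etc/hosts entry is enough):

kubectl get ingress test-ingress -n web-app
kubectl describe ingress test-ingress -n web-app

# each path should land on its matching backend Service
curl http://shaiksameer.com/onlinecart
curl http://shaiksameer.com/video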

Sunday, September 22, 2024

Create a static Pod in Kubernetes


cat my-static-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: static-busybox
  name: static-busybox
spec:
  containers:
  - command:
    - sleep
    - "1000"
    image: busybox
    name: static-busybox
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
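A static Pod is created and managed directly by the kubelet on a node rather than by the scheduler, so the manifest has to be placed in the kubelet's staticPodPath. A minimal sketch, assuming a kubeadm-style node where that path is the default /etc/kubernetes/manifests:

# generate the manifest shown above (or write it by hand)
kubectl run static-busybox --image=busybox --dry-run=client -o yaml --command -- sleep 1000 > my-static-pod.yaml

# place it in the directory the kubelet watches; the kubelet starts the Pod on its own
cp my-static-pod.yaml /etc/kubernetes/manifests/

# the Pod shows up in the API as a mirror Pod with the node name appended, e.g. static-busybox-controlplane
kubectl get pods -A | grep static-busybox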

Wednesday, September 11, 2024

Set up your local k8s environment with kubectl auto-completion

 # set alias and make permanent

echo 'alias k=kubectl' >> ~/.bashrc

# add bash completion for kubectl
apt update && apt install -y bash-completion
echo 'source <(kubectl completion bash)' >> ~/.bashrc

# source the bash completion script 
echo 'source /usr/share/bash-completion/bash_completion' >> ~/.bashrc

# setup completion for kubectl
echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc

# source our bashrc to use within the bash shell
source ~/.bashrc

Reference:
https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#enable-kubectl-autocompletion

Thursday, August 29, 2024

Get ports in use on Windows

 To get a list of bound ports in use, try this PowerShell command line:


Get-NetTCPConnection -State Bound | ForEach-Object {
    $p = Get-Process -Id $_.OwningProcess
    New-Object -TypeName psobject -Property @{
        LocalPort   = $_.LocalPort
        PID         = $p.Id
        ProcessName = $p.Name
    }
} | Format-Table -AutoSize -Property PID, ProcessName, LocalPort
Alternatively, the command "netstat -anob" also shows all in-use ports along with the owning process; note that the -b switch requires an elevated (administrator) prompt.

Sunday, February 11, 2024

Cloud Security Lifecycle

 

By integrating the six components described below (Identify, Protect, Detect, Respond, Recover, and Govern) into their security practices, organizations can establish a robust and holistic Azure Cloud Security Lifecycle. This approach helps safeguard cloud assets, respond effectively to security incidents, and continuously improve the overall security posture within the dynamic and evolving Azure cloud environment.

Identify:

In the identification phase, organizations establish a comprehensive understanding of their cloud environment, including assets, users, and potential risks. This involves defining roles and responsibilities and mapping out the cloud infrastructure. Azure provides tools for identity and access management, such as Azure Active Directory (AD), to centralize and manage user identities securely. Utilizing features like Azure Resource Graph and Azure Policy assists in gaining visibility into resources and enforcing compliance.
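As a small, hedged example of the visibility piece, an Azure Resource Graph query from the Azure CLI can give a quick inventory view (this assumes the Azure CLI is installed and the resource-graph extension has been added):

# one-time: add the Resource Graph extension to the Azure CLI
az extension add --name resource-graph

# count resources by type across the subscriptions you can see
az graph query -q "Resources | summarize count() by type | order by count_ desc"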

 

Protect:

Protection is centered around implementing safeguards and security measures to minimize vulnerabilities and potential threats. Azure offers a range of security controls, including Network Security Groups (NSGs), Azure Firewall, and Azure DDoS Protection, to safeguard against unauthorized access and network-based attacks. Utilizing Azure Security Center helps organizations implement and manage security policies, monitor the security state, and respond to potential security threats.
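As a rough illustration of these protection controls, an NSG with a single inbound rule could look like the following with the Azure CLI (the resource group, NSG name, and rule values are placeholders):

# create a network security group
az network nsg create --resource-group my-rg --name web-nsg --location eastus

# allow inbound HTTPS; other inbound traffic from the internet stays blocked by the default rules
az network nsg rule create --resource-group my-rg --nsg-name web-nsg --name allow-https \
  --priority 100 --direction Inbound --access Allow --protocol Tcp --destination-port-ranges 443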


Detect:

Detection involves continuous monitoring to identify and respond promptly to security incidents. Azure Security Center, Azure Monitor, and Azure Sentinel are instrumental in providing real-time insights into the security posture of the cloud environment. These tools enable the detection of unusual activities, potential threats, and security vulnerabilities. Employing Azure Security Center's threat detection capabilities and leveraging Azure Monitor for logging and analytics contribute to a proactive detection strategy.


Respond:

When a security incident is detected, the response phase involves taking immediate and effective actions to mitigate the impact. Azure Security Center's automated responses, such as playbooks and alerts, facilitate a swift response to security incidents. Azure Sentinel, a cloud-native SIEM (Security Information and Event Management) solution, aids in orchestrating and automating responses to security events, enhancing the efficiency of incident response teams.


Recover:

The recovery phase focuses on restoring normal operations after a security incident. Azure Backup and Azure Site Recovery offer solutions for data backup, disaster recovery, and business continuity. By regularly backing up data and creating recovery plans, organizations can ensure minimal downtime and rapid restoration of services in the event of a security incident. Azure's recovery services contribute to a robust recovery strategy.

Govern:

Governance involves establishing policies, procedures, and controls to ensure ongoing compliance and adherence to security best practices. Azure Policy allows organizations to define, enforce, and audit compliance with policies across their Azure environment. Azure Blueprints enables the creation of repeatable, standardized environments that comply with organizational standards. Azure Governance and Management Groups assist in implementing consistent governance across subscriptions.
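As a hedged sketch of that governance workflow, an existing built-in or custom policy definition can be assigned at a chosen scope with the Azure CLI; the assignment name, policy identifier, and scope below are placeholders:

# assign a policy definition at subscription scope
az policy assignment create \
  --name enforce-required-tags \
  --display-name "Enforce required tags on resources" \
  --policy "<policy-definition-name-or-id>" \
  --scope "/subscriptions/<subscription-id>"

# later, review compliance results for that assignment
az policy state summarize --policy-assignment enforce-required-tags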