Get into an AWS ECS Fargate container (ECS Exec)

aws ecs execute-command \
    --region eu-west-1 \
    --cluster default \
    --task arn:aws:ecs:eu-west-1:00123456789:task/default/9773f658cd134c3c934dd80b5227ae5f \
    --container nginx-poc \
    --interactive \
    --command "/bin/sh"
	
aws ecs describe-tasks --cluster default --tasks 9773f658cd134c3c934dd80b5227ae5f --region eu-west-1 | grep enableExecuteCommand

aws ecs update-service --service nginx-poc-svc2 --cluster default --region eu-west-1 \
  --enable-execute-command \
  --force-new-deployment
  
 
 An error occurred (InvalidParameterException) when calling the UpdateService operation: The service couldn't be updated because a valid taskRoleArn is not being used. Specify a valid task role in your task definition and try again.
  • Fix: add a task role (e.g. ecsTaskExecutionRole) in the task definition, with the following inline policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ecs:ExecuteCommand",
                "ssmmessages:CreateControlChannel",
                "ssmmessages:CreateDataChannel",
                "ssmmessages:OpenControlChannel",
                "ssmmessages:OpenDataChannel"
            ],
            "Resource": "*"
        }
    ]
}
  • Also attach the AmazonECSTaskExecutionRolePolicy managed policy to the role (a boto3 sketch of both steps is below)
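Rough boto3 sketch of both role changes, assuming the task role is named ecsTaskExecutionRole (use whatever task role your task definition actually references):

import json
import boto3

iam = boto3.client('iam')

# Role name is an assumption -- use the task role referenced by your task definition
role_name = 'ecsTaskExecutionRole'

# Inline policy allowing ECS Exec / SSM messages (same document as above)
exec_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ecs:ExecuteCommand",
                "ssmmessages:CreateControlChannel",
                "ssmmessages:CreateDataChannel",
                "ssmmessages:OpenControlChannel",
                "ssmmessages:OpenDataChannel",
            ],
            "Resource": "*",
        }
    ],
}

iam.put_role_policy(
    RoleName=role_name,
    PolicyName='ecs-exec-ssm-messages',
    PolicyDocument=json.dumps(exec_policy),
)

# Attach the managed execution-role policy as well
iam.attach_role_policy(
    RoleName=role_name,
    PolicyArn='arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy',
)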

AWS Lambda to stop EC2 instances with cron

  • Create an AWS Lambda function (Python) with the handler below
  • Attach an IAM role that allows ec2:StopInstances (plus CloudWatch Logs for the function's own logging)
import boto3

region = 'ap-south-1'
instances = ['i-0e4e6863cd3da57b5']
ec2 = boto3.client('ec2', region_name=region)

def lambda_handler(event, context):
    # Stop the listed instances on every invocation
    ec2.stop_instances(InstanceIds=instances)
    print('stopped your instances: ' + str(instances))
  • Create a CloudWatch Events (EventBridge) rule with a cron schedule and attach it to the Lambda function as a trigger (see the sketch below)
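Rough boto3 sketch of wiring the cron rule to the function; the function name stop-ec2-instances and the 18:30 UTC schedule are assumptions, adjust as needed:

import boto3

region = 'ap-south-1'
function_name = 'stop-ec2-instances'   # assumed Lambda function name
rule_name = 'stop-ec2-nightly'

events = boto3.client('events', region_name=region)
lambda_client = boto3.client('lambda', region_name=region)

# Cron rule: 18:30 UTC every day (EventBridge cron uses 6 fields)
rule = events.put_rule(
    Name=rule_name,
    ScheduleExpression='cron(30 18 * * ? *)',
    State='ENABLED',
)

# Allow EventBridge to invoke the function
lambda_client.add_permission(
    FunctionName=function_name,
    StatementId='allow-eventbridge-stop-ec2',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule['RuleArn'],
)

# Point the rule at the function
function_arn = lambda_client.get_function(FunctionName=function_name)['Configuration']['FunctionArn']
events.put_targets(
    Rule=rule_name,
    Targets=[{'Id': 'stop-ec2-lambda', 'Arn': function_arn}],
)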

Kong SMTP email configuration with AWS SES

Update /etc/kong/kong.conf:

smtp_mock=off
smtp_host=email-smtp.eu-west-1.amazonaws.com
smtp_port=465
smtp_username=${KONG_SMTP_USER}
smtp_password=${KONG_SMTP_PASSWORD}
smtp_ssl=on
smtp_domain=example.com
[email protected]
admin_emails_from=Team1 <[email protected]>
portal_invite_email=Team1 <[email protected]>
portal_access_request_email=Team1 <[email protected]>
portal_approved_email=on
portal_emails_from=Team1 <[email protected]>
portal_emails_reply_to=Team1 <[email protected]>

Note: [email protected] must be verified in AWS SES, and the from address should be in the format below:

Team1 <[email protected]>
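Optional boto3 sketch to verify the from address in SES (region matches the smtp_host above; the address is a placeholder):

import boto3

# Region and address are assumptions -- use your SES region and the real from address
ses = boto3.client('ses', region_name='eu-west-1')

ses.verify_email_identity(EmailAddress='[email protected]')

# SES mails a confirmation link; check status after it has been clicked
attrs = ses.get_identity_verification_attributes(Identities=['[email protected]'])
print(attrs['VerificationAttributes'])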

Send email using AWS SES SMTP with Python

import smtplib

email_user = '<aws-ses-user>'          # SES SMTP username (from the SES SMTP settings page, not an IAM access key)
email_password = '<aws-ses-password>'  # SES SMTP password

sent_from = '"test" <[email protected]>' #This should be verified
to = ['[email protected]']
subject = 'test'
body = 'test'

email_text = """\
From: %s
To: %s
Subject: %s

%s
""" % (sent_from, ", ".join(to), subject, body)

try:
    smtp_server = smtplib.SMTP_SSL('email-smtp.ap-south-1.amazonaws.com', 465)
    smtp_server.ehlo()
    smtp_server.login(email_user, email_password)
    smtp_server.sendmail(sent_from, to, email_text)
    smtp_server.close()
    print ("Email sent successfully!")
except Exception as ex:
    print ("Something went wrong….",ex)


Prometheus service discovery – AWS EC2 instances with a tag

  • Create an IAM role (prometheus-ec2) with the AmazonEC2ReadOnlyAccess policy
  • Attach the role to the EC2 instance running Prometheus
  • Tag the target EC2 instances (see the tagging sketch below)
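Small boto3 sketch of the tagging step, assuming the tag key is app with a value matching the 'pro.*' regex used below (instance ID is a placeholder):

import boto3

ec2 = boto3.client('ec2', region_name='ap-south-1')

# Tag key/value chosen to match __meta_ec2_tag_app and the 'pro.*' keep regex
ec2.create_tags(
    Resources=['i-0123456789abcdef0'],   # placeholder instance ID
    Tags=[{'Key': 'app', 'Value': 'prod'}],
)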

prometheus.yml (as a Kubernetes ConfigMap):

kind: ConfigMap
apiVersion: v1
metadata:
  name: prometheus-conf
data:
  prometheus.yml: |
    global:
      scrape_interval:     10s
      evaluation_interval: 10s
    scrape_configs:
      - job_name: 'ec2-node'
        ec2_sd_configs:
          - region: ap-south-1
            port: 9100
        relabel_configs:
          - source_labels: [__meta_ec2_tag_app]
            action: keep
            regex: 'pro.*'
          - source_labels: [__meta_ec2_private_ip]
            action: replace
            target_label: ec2_private_ip

AWS EKS – get a Kubernetes token for kubectl

Note: if the EKS cluster is created from the console (UI), it is created by a different IAM user/role, so kubectl get pod returns an unauthorized error unless you authenticate as that same identity (or assume its role).

aws eks get-token  --cluster-name eks1
aws eks update-kubeconfig --name eks1
aws sts get-caller-identity
aws sts assume-role --role-arn "arn:aws:iam::1111111111:role/role-name" --role-session-name "tests3"
aws --profile=default eks update-kubeconfig --name eks1
aws eks create-cluster \
   --region ap-south-1 \
   --name eks1 \
   --kubernetes-version 1.20 \
   --role-arn arn:aws:iam::account_number:role/eks1-clst \
   --resources-vpc-config subnetIds=subnet-093a2ddfcb7bc30b1,subnet-0475d9e26dfdc9d00,subnet-0274975b4af3513ee
aws eks describe-cluster \
    --region ap-south-1 \
    --name eks1 \
    --query "cluster.status"

https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html

Always check the minimum AWS CLI version required for EKS.

https://stackoverflow.com/questions/50791303/kubectl-error-you-must-be-logged-in-to-the-server-unauthorized-when-accessing

https://aws.amazon.com/premiumsupport/knowledge-center/eks-api-server-unauthorized-error/