How to access Load Balancer logs

AWS provides access logs for Elastic Load Balancers (ELB), allowing you to monitor and analyze traffic patterns. Below are general steps to access and view ELB access logs:

Amazon ELB Access Logs:

  1. Navigate to the EC2 Console:
    • Open the AWS EC2 Console.
  2. Select Load Balancers:
    • In the left navigation pane, choose “Load Balancers” under the “Load Balancing” section.
  3. Choose Your Load Balancer:
    • Click on the name of the load balancer for which you want to access logs.
  4. View Access Logs:
    • In the “Description” tab, look for the “Attributes” section.
    • Check if the “Access logs” attribute is set to “Enabled.”
  5. Access Logs Location:
    • If access logs are enabled, you can find them in an S3 bucket specified in the “S3 bucket” field.
  6. Navigate to S3 Bucket:
    • Open the AWS S3 Management Console.
    • Access the S3 bucket mentioned in the “S3 bucket” field.
  7. Access Log Files:
    • Inside the S3 bucket, the log files follow a naming convention like <prefix>/AWSLogs/<account-id>/elasticloadbalancing/<region>/<YYYY>/<MM>/<DD>/... (for Application Load Balancers the files are gzip-compressed).
    • Download or view the log files to analyze the access logs.
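If access logs are not yet enabled (step 4 above), you can also turn them on programmatically for an Application or Network Load Balancer. Below is a minimal sketch using the AWS SDK for JavaScript v3; the load balancer ARN and bucket name are placeholders, and the bucket must also have a bucket policy that allows the load balancer to deliver logs (see the AWS documentation for the exact policy):

import {
  ElasticLoadBalancingV2Client,
  ModifyLoadBalancerAttributesCommand,
} from "@aws-sdk/client-elastic-load-balancing-v2";

const elbClient = new ElasticLoadBalancingV2Client({ region: "ap-southeast-2" });

// Enable access logging and point it at an S3 bucket.
// Both values below are placeholders.
export const enableAccessLogs = async () => {
  await elbClient.send(
    new ModifyLoadBalancerAttributesCommand({
      LoadBalancerArn: "<your-load-balancer-arn>",
      Attributes: [
        { Key: "access_logs.s3.enabled", Value: "true" },
        { Key: "access_logs.s3.bucket", Value: "<your-s3-bucket-name>" },
      ],
    })
  );
};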

AWS CLI:

You can also use the AWS Command-Line Interface (CLI) to access ELB access logs:

# Replace <your-load-balancer-name> and <your-s3-bucket-name> with your actual values

aws s3 cp s3://<your-s3-bucket-name>/<your-load-balancer-name>/<path-to-log-file> .

This command downloads the specified log file to the current directory.
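If you are not sure of the exact log file path, you can list the bucket contents first:

aws s3 ls s3://<your-s3-bucket-name>/ --recursive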

Analyzing Access Logs:

Access logs typically include information such as client IP addresses, request timestamps, response status codes, and more. You can use tools like Amazon Athena, which can query the log files directly in S3, or ship the logs into other systems (for example, CloudWatch Logs) to query and visualize them.

Remember to adjust the steps based on the specific type of load balancer you are using (Application Load Balancer, Network Load Balancer, or Classic Load Balancer). Always refer to the official AWS documentation for the most accurate and up-to-date information.

These logs are especially helpful when you need to work out which instance a particular request was routed to.
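As a rough illustration, here is a TypeScript sketch that pulls the client address, the target (instance) address, and the status code out of an Application Load Balancer log line. It assumes the documented ALB access log layout, in which the first twelve fields are space-separated and unquoted; a real parser would also need to handle the quoted fields (such as the request line) later in the entry:

// Extract a few fields from an ALB access log entry. Field positions follow
// the documented ALB format:
// type time elb client:port target:port ... elb_status_code target_status_code ...
interface AlbLogEntry {
  clientAddress: string;
  targetAddress: string;
  elbStatusCode: string;
}

export const parseAlbLogLine = (line: string): AlbLogEntry => {
  const fields = line.split(" ");
  return {
    clientAddress: fields[3], // client:port
    targetAddress: fields[4], // target:port, i.e. the instance that served the request
    elbStatusCode: fields[8], // status code returned by the load balancer
  };
};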

Send Email with Attachment File from Lambda (Nodejs)

You can create a Lambda function (Node.js) with the following code (written in TypeScript) to send an email with a file attachment to a user.

In the example below, we first get the file content from an S3 bucket and then send it as a CSV attachment to a user (an SES-verified address).

import {
  S3Client,
  GetObjectCommand,
} from "@aws-sdk/client-s3";
import { SESClient, SendRawEmailCommand } from "@aws-sdk/client-ses";
import { Readable } from "stream";

const s3Client = new S3Client({ region: "ap-southeast-2" });
const sesClient = new SESClient({ region: "ap-southeast-2" });

export const sendEmail = async () => {
  const senderEmail = process.env.SENDER_EMAIL_ADDRESS;
  const recipientEmail = process.env.RECEIVER_EMAIL_ADDRESS as string;
  const subject = "SUBJECT here";
  const bodyText = "Hello,\r\n\r\nPlease see the attached csv file\r\n\r\nThanks";

  // Fetch the attachment content from S3
  const getObjectCommand = new GetObjectCommand({
    Bucket: process.env.BUCKET,
    Key: process.env.BUCKET_KEY,
  });

  const attachmentData = await s3Client.send(getObjectCommand);
  const attachmentBuffer = await streamToBuffer(
    attachmentData.Body as Readable
  );

  const attachmentBase64 = attachmentBuffer.toString("base64");

  // Build the raw MIME message: a plain-text part and a base64-encoded
  // attachment, separated by the multipart boundary. Each part's headers
  // must be followed by a blank line (\r\n\r\n) before its body.
  const emailData =
    `From: ${senderEmail}\r\n` +
    `To: ${recipientEmail}\r\n` +
    `Subject: ${subject}\r\n` +
    `MIME-Version: 1.0\r\n` +
    `Content-Type: multipart/mixed; boundary="boundary"\r\n\r\n` +
    `--boundary\r\n` +
    `Content-Type: text/plain; charset=utf-8\r\n\r\n` +
    `${bodyText}\r\n\r\n` +
    `--boundary\r\n` +
    `Content-Type: application/octet-stream\r\n` +
    `Content-Disposition: attachment; filename="file.csv"\r\n` +
    `Content-Transfer-Encoding: base64\r\n\r\n` +
    `${attachmentBase64}\r\n\r\n` +
    `--boundary--`;

  const sendRawEmailCommand = new SendRawEmailCommand({
    RawMessage: {
      Data: Buffer.from(emailData),
    },
    Source: senderEmail,
    Destinations: [recipientEmail],
  });

  const result = await sesClient.send(sendRawEmailCommand);
  return result.MessageId;
};

// Collect a readable stream into a single Buffer
async function streamToBuffer(stream: Readable): Promise<Buffer> {
  return new Promise((resolve, reject) => {
    const chunks: Uint8Array[] = [];
    stream.on("data", (chunk) => chunks.push(chunk));
    stream.on("end", () => resolve(Buffer.concat(chunks)));
    stream.on("error", reject);
  });
}
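To wire this into a Lambda entry point, you can export a handler that calls sendEmail. A minimal sketch, assuming your deployment points its handler at this module:

export const handler = async () => {
  const messageId = await sendEmail();
  return {
    statusCode: 200,
    body: JSON.stringify({ messageId }),
  };
};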

How to create pipeline in Jenkins

Creating a Jenkins pipeline involves defining a script that specifies the entire build process, including stages, steps, and conditions. Jenkins Pipeline can be created using either Declarative or Scripted syntax. Below, I’ll provide a simple example using both syntaxes.

Declarative Pipeline:

pipeline {
    agent any
    
    stages {
        stage('Build') {
            steps {
                echo 'Building the project'
                // Your build steps go here
            }
        }
        
        stage('Test') {
            steps {
                echo 'Running tests'
                // Your test steps go here
            }
        }
        
        stage('Deploy') {
            steps {
                echo 'Deploying the application'
                // Your deployment steps go here
            }
        }
    }
}

In this example:

  • agent any specifies that the pipeline can run on any available agent.
  • stages define the different phases of the pipeline.
  • Inside each stage, you have steps where you define the tasks to be executed.

Scripted Pipeline:

Scripted pipelines use a more programmatic approach with a Groovy-based DSL. Here’s an example:

node {
    // Define the build stage
    stage('Build') {
        echo 'Building the project'
        // Your build steps go here
    }

    // Define the test stage
    stage('Test') {
        echo 'Running tests'
        // Your test steps go here
    }

    // Define the deploy stage
    stage('Deploy') {
        echo 'Deploying the application'
        // Your deployment steps go here
    }
}

In this example:

  • node specifies that the entire pipeline will run on a single agent.
  • Inside each stage, you have the code for the corresponding tasks.

Pipeline Setup in Jenkins

  1. Install the Required Plugins:
    • Navigate to “Manage Jenkins” > “Manage Plugins” in the Jenkins Classic UI.
    • Switch to the “Available” tab, search for the plugins your pipeline needs (for example, “Docker Pipeline” if your builds run inside Docker containers), and check the box next to each.
    • Click “Install without restart.”
  2. Restart Jenkins:
    • After installing plugins, restart Jenkins to ensure they are ready for use.
  3. Create a Jenkinsfile in Your Repository:
    • Copy the above script (declarative or scripted) to a file named ‘Jenkinsfile’.
  4. Create a New Multibranch Pipeline in Jenkins:
    • In the Jenkins Classic UI, click on “New Item” in the left column.
    • Provide a name for your new item (e.g., My-Pipeline).
    • Select “Multibranch Pipeline” as the project type.
    • Click “OK.”
  5. Configure Repository Source:
    • Click the “Add Source” button.
    • Choose the type of repository you want to use (e.g., Git, GitHub, Bitbucket) and fill in the required details (repository URL, credentials, etc.).
  6. Save and Run Your Pipeline:
    • Click the “Save” button.
    • Jenkins will automatically detect branches in your repository and start running the pipeline.

This is a very basic example. Depending on your project, you may need to add more advanced features, such as parallel execution, input prompts, error handling, and integration with external tools.
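For example, a declarative pipeline can run stages in parallel and report failures in a post block. The following is a minimal sketch; the stage names are illustrative:

pipeline {
    agent any

    stages {
        stage('Tests') {
            parallel {
                stage('Unit Tests') {
                    steps {
                        echo 'Running unit tests'
                        // Your unit test steps go here
                    }
                }
                stage('Integration Tests') {
                    steps {
                        echo 'Running integration tests'
                        // Your integration test steps go here
                    }
                }
            }
        }
    }

    post {
        failure {
            echo 'Pipeline failed'
            // Notification steps (email, Slack, etc.) go here
        }
    }
}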

Make sure to refer to the official Jenkins Pipeline documentation for more in-depth information and advanced features.

How to search from an XML column in SQL

Suppose a table has an XML column (here named [xml]) that stores documents like the following:

<ArticlePage>
  <publishDate><![CDATA[201612151611499007]]></publishDate>
  <category><![CDATA[1000004]]></category>
</ArticlePage>
You can extract and filter on a value inside the XML using the value() method (the CAST is only needed if the column is not already of the xml data type):

SELECT *
  FROM [Table_Name]
  WHERE CAST([xml] AS XML).value('(/ArticlePage/category)[1]', 'varchar(max)') = '1000004'
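If you only need to filter rows rather than return the extracted value, SQL Server's exist() method is a common alternative. A sketch, assuming the same table and column:

SELECT *
  FROM [Table_Name]
  WHERE CAST([xml] AS XML).exist('/ArticlePage/category[. = "1000004"]') = 1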

How to connect to SQL database from Lambda using Nodejs

To connect to a SQL database from Lambda, I am using the npm package ‘mssql’.

For this, I have created a serverless application using Node.js 16.x.

In your serverless application, install mssql as below:

npm install mssql

Usage

I am using TypeScript, and the DBConfig used below is just an interface (a minimal definition is included in the snippet).
Also replace the config values with your own database credentials; in this example they are read from environment variables.

import sql from "mssql";

// Minimal interface describing the connection config
interface DBConfig {
  user: string;
  password: string;
  server: string;
  database: string;
  options: {
    trustServerCertificate: boolean;
  };
}

const config: DBConfig = {
  user: process.env.DB_USER as string,
  password: process.env.DB_PASSWORD as string,
  server: process.env.DB_SERVER as string,
  database: process.env.DATABASE as string,
  options: {
    trustServerCertificate: true,
  },
};

export const run = async () => {
  try {
    await sql.connect(config);
    const result = await sql.query`SELECT * FROM [TableName]`;
    return {
      statusCode: 200,
      body: JSON.stringify(result.recordset),
    };
  } catch (err) {
    console.error("Error:", err);
    return {
      statusCode: 500,
      body: JSON.stringify({
        message: "Error accessing the database.",
        error: err,
      }),
    };
  }
};
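Because Lambda execution environments are reused across warm invocations, it is also common to create the connection pool once, outside the handler, and reuse it. A minimal sketch, using the same config object as above (runPooled is an illustrative name):

// Created once per container; warm invocations reuse the same pool.
const poolPromise = new sql.ConnectionPool(config).connect();

export const runPooled = async () => {
  const pool = await poolPromise;
  const result = await pool.request().query("SELECT * FROM [TableName]");
  return result.recordset;
};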

Please note that the RDS instance and the Lambda function should be in the same VPC. If you get a timeout error when running the Lambda, verify the VPC settings for both RDS and Lambda. If they are in different VPCs, further configuration (such as VPC peering) is needed; please refer to the AWS documentation for that.
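To quickly check which VPC and subnets a Lambda function is attached to, you can inspect its configuration from the CLI (the function name is a placeholder):

aws lambda get-function-configuration --function-name <your-function-name> --query "VpcConfig"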

Also make sure the IAM role attached to the Lambda has the permissions it needs to reach RDS; for a VPC-attached function this includes the network-interface permissions provided by the AWSLambdaVPCAccessExecutionRole managed policy.

Refer to the “Send Email with Attachment File from Lambda” section above if you are looking to send an email with an attachment from Lambda.