Azure Private Cluster Secure Connection

See the flow!!
What is Azure Private Cluster?
An Azure private cluster is a deployment option of Azure Kubernetes Service (AKS) that places the cluster's API server inside your Azure virtual network. With an AKS private cluster, you can deploy and manage Kubernetes clusters in a private network that is isolated from the public internet. This provides additional security for your applications and data, as well as greater control over network access.
What is Azure Bastion Host?
Azure Bastion is a fully managed Platform-as-a-Service (PaaS) that provides secure and seamless remote access to Azure virtual machines (VMs) over Secure Sockets Layer (SSL). It eliminates the need for a VPN connection or a public IP address on the virtual machine.
What are VNet peering and Private DNS zones?
VNET peering allows you to connect two virtual networks (VNets) within Azure, so that they can communicate with each other as if they were on the same network. With VNET peering, you can connect VNets in the same region or in different regions, and you can also create hub-and-spoke architectures to simplify network management.
Private DNS zones allow you to configure custom DNS names for resources in your Azure VNets. With Private DNS zones, you can create a private, fully-qualified domain name system (DNS) zone that is accessible only within your Azure VNets. This can help you simplify DNS management and provide name resolution for your resources without exposing them to the public internet.
What is Azure Private Link?
Azure Private Link is a service provided by Microsoft Azure that allows you to access Azure PaaS services privately and securely from your virtual network. With Private Link, you connect to Azure PaaS services over a private endpoint in your virtual network, which eliminates the need for public IP addresses or internet connectivity. For a private AKS cluster, the API server is exposed to your VNet through such a private endpoint, and name resolution for it is handled by a virtual network link on the cluster's Private DNS zone.
Let's start the flow!!
1: Create a Kubernetes cluster by simply going to the Azure portal and searching for Kubernetes services. Make sure you enable the private cluster option along with the Calico network policy, so that the cluster is secured. Use kubenet as the default network plugin; the VNet will be set up by AKS itself.
What is Calico Network Policy?
Calico is a popular open-source network policy and security solution for Kubernetes. It provides fine-grained network policy controls to secure your cluster and offers features such as network policy enforcement, network isolation, and encrypted communication. Calico is designed to integrate with Kubernetes and leverages Kubernetes' network policy APIs to enforce policies across the cluster.
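Putting step 1 together on the CLI: a rough sketch only, where the resource group, cluster name and region are placeholders rather than values from this post:

# Sketch: private AKS cluster with kubenet networking and Calico network policy (placeholder names).
az group create --name my-rg --location eastus
az aks create \
  --resource-group my-rg \
  --name my-private-aks \
  --enable-private-cluster \
  --network-plugin kubenet \
  --network-policy calico \
  --node-count 2 \
  --generate-ssh-keys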
2. After creating the cluster, create a separate virtual network so that you can launch a VM into that VNet and connect to the private cluster using VNet peering, in order to access the cluster through the virtual machine.
3. After creating the virtual machine, create a Bastion host, because it eliminates the need for a VPN connection or a public IP address on the virtual machine.
4. Now it's time to create the VNet peering. Go to the VNet of your AKS private cluster; on the left side you will see the Peerings option. Open it, click the Add button, give the peering link a name and a name for the remote virtual network's peering, and select the target virtual network (here, your VM's VNet). All set; you can now cross-check the Peerings option on both VNets.
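A hedged CLI sketch of step 4; the VNet names and resource group are placeholders, and the peering must be created in both directions:

# Sketch: peer the AKS VNet and the VM VNet in both directions (placeholder names).
az network vnet peering create \
  --resource-group my-rg \
  --vnet-name aks-vnet \
  --name aks-to-vm \
  --remote-vnet vm-vnet \
  --allow-vnet-access
az network vnet peering create \
  --resource-group my-rg \
  --vnet-name vm-vnet \
  --name vm-to-aks \
  --remote-vnet aks-vnet \
  --allow-vnet-access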
5. Now the last step is to add the virtual network link. Go to the Private DNS zone service; you will find the zone that was already created by Kubernetes while creating the private cluster. Open that DNS zone, and on the left side you will see Virtual network links; there is already one created for the cluster's VNet, and now you have to create one for the VNet you created for the virtual machine. Click Add, type any suitable name, and select the VNet of your virtual machine.
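Step 5 can also be scripted; a sketch only, assuming the private DNS zone that AKS created lives in the managed node resource group and using placeholder names:

# Sketch: link the VM VNet to the private DNS zone created for the private cluster (placeholders).
az network private-dns link vnet create \
  --resource-group MC_my-rg_my-private-aks_eastus \
  --zone-name <guid>.privatelink.eastus.azmk8s.io \
  --name vm-vnet-link \
  --virtual-network vm-vnet \
  --registration-enabled false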
6. All set. Now just go to the VM, connect to it through Bastion, and don't forget to install the azure-cli and kubectl tools. Do "az login", then "az aks get-credentials --resource-group <your resource group name> --name <cluster name>", then do "kubectl get nodes". That's all, enjoy your connection!
Thank you!!!
 For any issues contact: [email protected]
CI/CD for SQL Server using Azure DevOps
This is the workflow.
Prerequisites:
1. Download and install SSMS.
2. Download and install Visual Studio 2019.
What is SSMS?
SSMS stands for SQL Server Management Studio. It is a software application used to manage and administer SQL Server databases. SSMS allows users to create, modify, and execute SQL queries and scripts, manage database objects such as tables, views, stored procedures, and functions, and configure and monitor database instances. It is a powerful tool for database administrators, developers, and analysts working with SQL Server databases. SSMS is developed and maintained by Microsoft and is available as a free download for use with SQL Server.
What is Visual studio ?
Visual Studio 2019 is an integrated development environment (IDE) developed by Microsoft for creating software applications for Windows, web, cloud, and mobile platforms. It is the latest version of the Visual Studio product line, which was first released in 1997.
What is Azure DevOps?
Azure DevOps is a set of cloud-based services provided by Microsoft that helps software development teams to plan, build, test, and deploy software applications with increased efficiency and speed. It provides a unified and integrated platform that allows teams to collaborate and manage their projects, workflows, and development pipelines from a single location.
Let's start.
Step 1: Create a connection in SQL Server Management Studio by entering the credentials of the Azure SQL server, and create it along with one database.
Step 2: Now it's time to create the .dacpac file. Create some tables before creating the file, then right-click the database name, go to Tasks, choose Extract Data-tier Application, pick a location, and create the file.
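The same extraction can be scripted with the SqlPackage tool; a sketch only, with the server, database and credentials as placeholders:

# Sketch: extract a .dacpac from the Azure SQL database (placeholder server, database and credentials).
sqlpackage /Action:Extract \
  /SourceServerName:myserver.database.windows.net \
  /SourceDatabaseName:mydb \
  /SourceUser:sqladmin /SourcePassword:'<password>' \
  /TargetFile:./mydb.dacpac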
Step 3: Now it's time to import the data from the .dacpac file into Visual Studio so that we can create the .sln file that will be used for the build. Launch Visual Studio, create a project using the SQL Server Database Project template, and in the project settings make sure you select "Microsoft Azure SQL Database" as the target platform.
Step 4: Right-click the database name and import the .dacpac file; after that you will see the tables that were created during the SSMS step. Push the repository to Azure Repos; this repo contains the .sln file used for the build.
Step 5: Go to the build pipeline section, use the classic editor option, choose the ".NET Desktop" template, and then remove all the tasks we don't need.
Step 6: Choose the pipeline agent. For this I used a self-hosted agent; you can configure one by visiting the official docs: https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/agents?view=azure-devops&tabs=browser#install
Step 7: Run the build; after it completes you will see the published artifacts.
Step 8: Now go to the release pipeline, link the artifacts, use an empty job in the stage, add the "Azure SQL Database deployment" task, fill in the details of the target SQL server, and create a release. After a successful deployment you will see the tables in the target database; this works whether the target is on the same server or a different one.
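Under the hood this kind of deployment is a dacpac publish; a rough local equivalent, with the server and database names as placeholders:

# Sketch: publish the .dacpac to the target Azure SQL database (placeholder names and credentials).
sqlpackage /Action:Publish \
  /SourceFile:./mydb.dacpac \
  /TargetServerName:target-server.database.windows.net \
  /TargetDatabaseName:mydb \
  /TargetUser:sqladmin /TargetPassword:'<password>'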
Thanks!!!!
For any queries mail to: [email protected]
CI/CD of Azure Synapse with Azure DevOps
Azure Synapse
Azure Synapse Analytics is a cloud-based analytics platform from Microsoft that brings together data integration, enterprise data warehousing, and big data analytics. It provides tools for data preparation, exploration with SQL and Spark, model development, and deployment.
Azure DevOps
Azure DevOps is a suite of cloud-based services provided by Microsoft that supports the entire software development lifecycle. It provides tools for source control management, continuous integration and delivery, testing, and deployment automation. Azure DevOps can be used to manage projects of any size, from small teams to large enterprises.
Steps to configure Azure Synapse with Azure DevOps
1: First of all, create an organization in Azure DevOps and then create a project. In the organization settings, browse the Azure Marketplace, download the extension named "Synapse workspace deployment", configure it, and you're done.
2: Create an Azure Synapse workspace separately for each stage, one for dev, one for UAT and one for production, and along with it create a Spark pool.
NOTE: If you are planning to deploy your Spark jobs or machine learning models to a production environment, then creating a Spark pool in Azure Synapse is useful.
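A hedged CLI sketch of creating such a Spark pool (the workspace, resource group, version and sizing are placeholders):

# Sketch: create a small Spark pool in the dev Synapse workspace (placeholder names and sizing).
az synapse spark pool create \
  --name devsparkpool \
  --workspace-name synapse-for-dev \
  --resource-group my-rg \
  --spark-version 3.3 \
  --node-count 3 \
  --node-size Small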
3: Go to "synapse-for-dev" (I am taking the development environment as an example) and configure the Git repository so that it is connected to the Azure DevOps repo. Click on the Synapse workspace and specify the repo type; in our case it is an Azure DevOps repo.
4: Create a notebook from the Develop section, select the Spark pool, and commit it to the branch. Then you can check that the notebook is visible in the Azure DevOps portal under Repos.
5: Create a pipeline under the Integrate section in the Azure Synapse portal, commit it, and check that it appears in the repository.
6: Create a pull request for every successful commit and then publish it as an artifact (publishing creates an ARM template which is used for deployment, and it will be visible in the repository).
7: Then add some variable groups for the release pipeline for simplicity. Add the artifact in the release pipeline section, and add one stage for UAT and one for production.
8: Click on the empty job and in the add-task section search for "Synapse workspace deployment".
Likewise, you have to do the same for the production stage.
Note: make sure of the following things.
1: Make sure the Azure DevOps service principal has the Synapse Artifact Publisher role assigned in the target workspace (a CLI sketch follows after these notes).
2: Integrate only the development workspace with the Git repository.
3: Prepare your Spark pool before you migrate your artifacts into your target workspace.
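For the first note, a hedged CLI sketch of the role assignment (the workspace name and service principal ID are placeholders):

# Sketch: grant the Azure DevOps service principal publish rights on the target workspace (placeholders).
az synapse role assignment create \
  --workspace-name synapse-for-prod \
  --role "Synapse Artifact Publisher" \
  --assignee <service-principal-object-id>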
Thanks!!!!
CI/CD of Azure Databricks with Azure DevOps
What is Azure Databricks?
Azure Databricks is a cloud-based big data processing and analytics platform that is built on top of Apache Spark. It provides a collaborative and interactive workspace for data engineers, data scientists, and analysts to process and analyze large datasets, build machine learning models, and implement advanced analytics solutions.
What is Azure DevOps?
Azure DevOps is a suite of cloud-based services provided by Microsoft that supports the entire software development lifecycle. It provides tools for source control management, continuous integration and delivery, testing, and deployment automation. Azure DevOps can be used to manage projects of any size, from small teams to large enterprises.
Steps to configure Azure Databricks with Azure DevOps
1: Create the Key Vault. For this, go to the Azure portal, search for Key Vault, and create it.
2: Now create the Azure Databricks workspace and fill in the required information.
3: Create the project in Azure DevOps and initialize the repository. Along with it, go to the resource group and launch the workspace under the Azure Databricks section.
4: In the workspace portal, navigate to User Settings and connect to the repo using the Git integration options; select "Azure DevOps Services" under the Git provider section.
5: Create a notebook using your preferred language, then go to "Revision history" to add the link to the Azure DevOps repository created earlier, and create a new feature branch for it.
6: Create a pull request and merge the feature branch into the master or main branch; you will then see the code in the main branch as well. Then go to User Settings in Azure Databricks and generate an access token. Make sure you copy it, otherwise it will be gone, and add it to the Key Vault.
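Storing the token can also be done from the CLI; a sketch with the vault and secret names as placeholders:

# Sketch: store the Databricks personal access token in Key Vault (placeholder names).
az keyvault secret set \
  --vault-name my-keyvault \
  --name databricks-token \
  --value "<personal-access-token>"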
7: Go to Library in Azure DevOps, add a variable group, and turn on the option to link secrets from Azure Key Vault.
8: Now create a pipeline with the classic editor option, use the Azure repository, and under the build pipeline section search for the "Publish build artifacts" task; save it and run it.
9: Go to the release pipeline, link the variable group, then select an empty job and use a PowerShell task for testing; write the inline PowerShell script, which you can get from the docs.
10: Add the production server stage to deploy, and run the pipeline for testing.
Thanks!!!!
Deploying a Python app on Azure Function App using Azure DevOps
Python app:
To deploy Python applications we must build them and publish the build to artifacts, so that the release pipeline of Azure DevOps can easily access that artifact and deploy it to the running Function App.
So let's start with the steps for generating the .zip file and publishing it to artifacts.
Continuous Integration part
Step 1: Go to the Pipelines section of Azure DevOps and use the classic editor option, as it is an easy way to generate the pipeline. In the classic editor you will get an option "Azure Functions for Python"; click on that and a configuration window will open.
Here, fill in the configuration from build to publish, and don't forget to select the agent where you want the pipeline to run. I am using a self-hosted agent, so I selected my own agent.
Step 2: Configure the build section to run the builds; you can also check the YAML form of the same by using the "View YAML" option.
Step 3: Select the install-dependencies option.
Step 4: After that, configure the Archive files option; here you can select the archive type. I want .zip, so I am selecting that.
Step 5: Now it's time to publish to artifacts so that we can attach it as a source for the release pipeline (it also acts as a shared drop location).
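Roughly, these build steps boil down to the following; a sketch only, where the resource group, Function App name and paths are placeholders, and the last command is a rough stand-in for what the release pipeline does:

# Sketch: install dependencies into the package, zip it, and push the zip to the Function App (placeholders).
pip install --target=".python_packages/lib/site-packages" -r requirements.txt
zip -r functionapp.zip . -x "*.git*"
az functionapp deployment source config-zip \
  --resource-group my-rg \
  --name my-python-func-app \
  --src functionapp.zip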
Continuous Deployment part
This part is similar to the .NET deployment blog; kindly have a look at the given link: https://at.tumblr.com/lakshya01/deploying-net-app-on-azure-function-app-using/3k6uyl98i1fd
Thanks...
Deploying .NET app on Azure Function App using Azure DevOps.
What is .NET?
.NET is a free, cross-platform, open source developer platform for building many different types of applications.
With .NET, you can use multiple languages, editors, and libraries to build for web, mobile, desktop, games, IoT, and more.
To deploy .NET applications we must build them and publish the build to artifacts, so that the release pipeline of Azure DevOps can easily access that artifact and deploy it to the running Function App.
So let's start with the steps for generating the .zip file and publishing it to artifacts.
Continuous Integration part
Step 1: Go to the Pipelines section of Azure DevOps and use the classic editor option, as it is an easy way to generate the pipeline. In the classic editor you will get an option "Azure Functions for .NET"; click on that and a configuration window will open.
Here, fill in the configuration from build to publish, and don't forget to select the agent where you want the pipeline to run. I am using a self-hosted agent, so I selected my own agent.
Step 2: Configure the build section to run the builds; you can also check the YAML form of the same by using the "View YAML" option.
Step 3: After that, configure the Archive files option; here you can select the archive type. I want .zip, so I am selecting that.
Step 4: Now it's time to publish to artifacts so that we can attach it as a source for the release pipeline (it also acts as a shared drop location).
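A rough shell equivalent of these build and publish steps; a sketch, with the project, resource group and Function App names as placeholders, and the last command as a rough stand-in for what the release pipeline does:

# Sketch: build, publish and zip the .NET function app, then push the zip (placeholder names).
dotnet build MyFunctionApp.csproj --configuration Release
dotnet publish MyFunctionApp.csproj --configuration Release --output ./publish_output
(cd publish_output && zip -r ../functionapp.zip .)
az functionapp deployment source config-zip \
  --resource-group my-rg \
  --name my-dotnet-func-app \
  --src functionapp.zip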
Now it's time to release it to the Function App.
Continuous Deployment part
Step 5: Select the release pipeline option in Azure DevOps for the deployment.
Here, point to the path where you published the .NET artifact.
Step 6: Now configure the stage and add the agent; in my case I have a self-hosted agent, so I selected that.
Note: Enable continuous deployment so that a release is created whenever a new build is available.
To learn how to host a self-hosted agent, go to the link shown here:
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/agents?view=azure-devops&tabs=browser#install
Thanks..
VPN Gateway P2S connection using Azure Active Directory
VPN Gateway
A VPN gateway is a type of networking device that connects two or more devices or networks together in a VPN infrastructure. It is designed to bridge the connection or communication between two or more remote sites, networks or devices and/or to connect multiple VPNs together.
P2S connections:
A Point-to-Site (P2S) VPN gateway connection lets you create a secure connection to your virtual network from an individual client computer. A P2S connection is established by starting it from the client computer.
What is Azure Active Directory?
Azure Active Directory is Microsoft’s multi-tenant, cloud-based directory and identity management service. For an organization, Azure AD helps employees sign up to multiple services and access them anywhere over the cloud with a single set of login credentials. Please check out for further information - https://learn.microsoft.com/en-us/azure/active-directory/fundamentals/active-directory-whatis
Chart of P2S connections using Azure Active Directory
Configure authentication for the gateway
Step 1:  Locate the tenant ID of the directory that you want to use for authentication. Check this : https://learn.microsoft.com/en-us/azure/active-directory/fundamentals/active-directory-how-to-find-tenant
Step 2:  Configure the point-to-site VPN. 
Address pool: client address pool
Tunnel type: OpenVPN (SSL)
Authentication type: Azure Active Directory
Audience ID (the same public value for the Azure VPN Client): 41b23e61-6c1e-4545-b367-cd054e0ed4b4
Note: for the Issuer, include a trailing slash at the end of the value; otherwise, the connection may fail.
If you want to find your tenant ID, refer to: https://learn.microsoft.com/en-us/azure/active-directory/fundamentals/active-directory-how-to-find-tenant
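A CLI sketch of the same Azure AD configuration, assuming the az network vnet-gateway aad assign command is available; the gateway name, resource group and tenant ID are placeholders, and the audience value is the public one quoted above:

# Sketch: enable Azure AD authentication on the P2S gateway (placeholder gateway, group and tenant ID).
az network vnet-gateway aad assign \
  --resource-group my-rg \
  --gateway-name my-vpn-gateway \
  --tenant "https://login.microsoftonline.com/<tenant-id>/" \
  --audience 41b23e61-6c1e-4545-b367-cd054e0ed4b4 \
  --issuer "https://sts.windows.net/<tenant-id>/"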
Step 3: Download the Azure VPN Client profile configuration package; refer to the link: https://learn.microsoft.com/en-us/azure/vpn-gateway/openvpn-azure-ad-tenant#download-the-azure-vpn-client-profile-configuration-package
Step 4: To connect to your virtual network, you must configure the Azure VPN Client on your client computers: https://learn.microsoft.com/en-us/azure/vpn-gateway/openvpn-azure-ad-client
Thanks!!!!
Point-to-Site Connections Using Azure VPN Gateways
What are Point-to-Site connections?
A Point-to-Site (P2S) VPN gateway connection lets you create a secure connection to your virtual network from a remote location such as home. P2S VPN is also a useful solution to use instead of S2S VPN when you have only a few clients that need to connect to a virtual network.
Go to https://learn.microsoft.com/en-us/azure/vpn-gateway/point-to-site-about for further information in detail.
What is a VPN gateway?
A VPN gateway is a specific type of virtual network gateway that is used to send encrypted traffic between an Azure virtual network and an on-premises location, such as a home or office, over the public internet.
Conceptual diagram of P2S
Steps for the P2S connection.
1. Create a resource group in the Azure portal.
2. Create a virtual network.
3. Create subnets, and along with them create a gateway subnet; the gateway will use IP addresses assigned from this subnet.
4. After completing the above steps, create the virtual network gateway.
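Steps 2 to 4 can also be scripted; a sketch with placeholder names and address ranges (creating the gateway itself can take 30 minutes or more):

# Sketch: VNet with a GatewaySubnet, a public IP, and a route-based VPN gateway (placeholders).
az network vnet create --resource-group my-rg --name p2s-vnet \
  --address-prefixes 10.1.0.0/16 --subnet-name default --subnet-prefixes 10.1.0.0/24
az network vnet subnet create --resource-group my-rg --vnet-name p2s-vnet \
  --name GatewaySubnet --address-prefixes 10.1.255.0/27
az network public-ip create --resource-group my-rg --name p2s-gw-pip --sku Standard
az network vnet-gateway create --resource-group my-rg --name p2s-vpn-gateway \
  --vnet p2s-vnet --public-ip-address p2s-gw-pip \
  --gateway-type Vpn --vpn-type RouteBased --sku VpnGw1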
5. Now it's time to create the self-signed root and client certificates. Please go through the provided link to understand the procedure.
https://learn.microsoft.com/en-us/azure/vpn-gateway/vpn-gateway-certificates-point-to-site
6. The next step is to configure the Point-to-Site connection. Here we will also define the client IP address pool, which is for VPN clients. Kindly have a look at https://learn.microsoft.com/en-us/azure/vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal .
7. Now it is time to test the VPN connection. The configuration is finished, so as the next step we need to test the connection. To do that, log in to the same PC where you generated the certificates; if you are going to use a different PC, you first need to import the root and client certificates we exported.
Log in to the Azure portal from that machine and go to the VPN gateway configuration page.
On that page, click on Point-to-site configuration.
After that, click on Download VPN client.
8. After that, unzip the VPN client setup and install the connection appropriate for the PC; you will then see a new connection under the Windows VPN page. Connect it, then open the command prompt and type ipconfig to verify the IP allocation from the VPN address pool. You can also cross-check it in the Azure portal.
Thanks!!!
TASKS
TASK 1: Find the count of log statements in the attached file "access.log" (attached as access.log.zip) with a successful response (status code 200).
SOL: I used two different approaches for this. I have recently been learning Splunk, so the first was a Splunk search that counts the matching events.
The second method uses standard Linux tools such as grep:
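A hedged example, assuming a combined access-log format where the status code is the ninth field:

# Count entries with HTTP status 200 (assumes the combined log format, status in field 9).
awk '$9 == 200' access.log | wc -l
# Or, more loosely, with grep:
grep -c '" 200 ' access.log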
TASK 2:  Write a command to find the files which contain words DEBUG, ERROR, and INFO in any directory of the filesystem.
SOL: Here I made a directory structure using mkdir: task3 was the top directory, inside task3 there was task2, inside task2 there were five directories v1 to v5, and each directory contained five files with some text written in them.
In the first three files (f1, f2, f3) DEBUG was written just as a check, and f4 and f5 had ERROR and INFO respectively, so with a single command we can fetch the file names as well as the directory names.
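A hedged example of such a command (the search root is a placeholder):

# List files, with their paths, that contain any of the three words, searching recursively.
grep -rlE 'DEBUG|ERROR|INFO' /path/to/task3
# Files that contain all three words:
grep -rlZ 'DEBUG' /path/to/task3 | xargs -0 grep -lZ 'ERROR' | xargs -0 grep -l 'INFO'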
TASK 3:  What would be the sed command to convert the string input "Ab1Cd2Ef3Gh4Ij5….." to "a-bc-de-fg-hi-j…."?
SOL: sed was new for me, so I first read up on it, and it took a while to match the expected output.
First I put the text into a file and started applying commands until I arrived at the final result.
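One possible GNU sed approach (an illustrative sketch, not necessarily the exact command used here) lowercases each letter pair, joins the pair with a dash, and drops the trailing digit:

# "Ab1Cd2Ef3Gh4Ij5" -> "a-bc-de-fg-hi-j"  (GNU sed: \l lowercases the next character).
echo 'Ab1Cd2Ef3Gh4Ij5' | sed -E 's/([A-Z])([a-z])[0-9]?/\l\1-\2/g'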
Task 5:  Write steps to create and publish a docker image to the docker repository. e.g., Run tomcat image with a customized landing page, and should be accessible at: http://localhost:8080/
SOL: For this I used an Amazon EC2 Ubuntu instance, configured Docker, and built an image for Jenkins, doing the same as in the given example.
I will show it in these steps:
Step 1: First, install Docker on Ubuntu; I used my repository: https://github.com/lakshya0102/docker-installation-on-ubuntu-aws-.git
Step 2: After configuring Docker, I wrote the Dockerfile.
Step 3: After writing the Dockerfile, I ran the command "docker build -t myjenkins ." to build the image; the "." means the current directory.
Step 4: After the image was created, it was time to expose the port. In the AWS security group I allowed all traffic so that I could reach the Jenkins container running on top of the Ubuntu instance.
Step 5: Jenkins is accessible on port 8080, and now it is time to publish the image.
Step 6: First create a repository on Docker Hub and tag the image for it with "docker build -t <dockerhub-username>/<repository-name> ."; after the build you get the tagged Docker image.
Step 7: Run "docker login", enter the Docker Hub credentials, and the last command to publish is "docker push <dockerhub-username>/<repository-name>". Our Docker image is now available on Docker Hub; you can see it by visiting your repository.
TASK 4: Write a script/program to return true if the opening and closing braces are complete, otherwise false. e.g.:
a. Input string: List { information {{about the FILEs (the current directory by default). Sort entries alphabetically if} none of -cftuvSUX }nor --sort } is specified. Output: True
b. Input string: Performs { the specified action on the files. { Valid actions are view, cat (uses only "copious output" rules { and sends output to STDOUT) , compose, com‐posetyped, edit and print. If no action is specified, the action will be determined by how the program was called.} Output: False
SOL: I was less sure about this task, sorry for that, but I tried:
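As a hedged sketch of one way to solve it (not necessarily the script from the original attempt): walk the string with a counter, increment on every opening brace, decrement on every closing brace, and report False if the counter ever goes negative or does not end at zero.

#!/usr/bin/env bash
# Sketch: print True if the curly braces in the first argument are balanced, otherwise False.
s="$1"
count=0
for (( i = 0; i < ${#s}; i++ )); do
  c="${s:i:1}"
  if [[ "$c" == "{" ]]; then
    count=$((count + 1))
  elif [[ "$c" == "}" ]]; then
    count=$((count - 1))
    # A closing brace appeared before any matching opening brace: unbalanced.
    if (( count < 0 )); then
      echo "False"
      exit 0
    fi
  fi
done
if (( count == 0 )); then echo "True"; else echo "False"; fi

Run as ./braces.sh '<input string>'; it prints True for example (a) and False for example (b).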
Thank you sir, I enjoyed the tasks, and thank you for your time.
Ingestion of Amazon CloudTrail data in Splunk
Amazon CloudTrail :  AWS CloudTrail is an AWS service that helps you enable governance, compliance, and operational and risk auditing of your AWS account. Actions taken by a user, role, or an AWS service are recorded as events in CloudTrail. Events include actions taken in the AWS Management Console, AWS Command Line Interface, and AWS SDKs and APIs.
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html
Amazon S3 :  Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance. Customers of all sizes and industries can store and protect any amount of data for virtually any use case, such as data lakes, cloud-native applications, and mobile apps. With cost-effective storage classes and easy-to-use management features, you can optimize costs, organize data, and configure fine-tuned access controls to meet specific business, organizational, and compliance requirements.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/GetStartedWithS3.html
Amazon SNS :  Amazon Simple Notification Service (Amazon SNS) is a web service that enables applications, end-users, and devices to instantly send and receive notifications from the cloud.
Amazon SQS :  Amazon Simple Queue Service (Amazon SQS) is a fully managed message queuing service that makes it easy to decouple and scale microservices, distributed systems, and serverless applications. Amazon SQS moves data between distributed application components and helps you decouple these components.
Splunk:  Splunk is a software platform to search, analyze and visualize the machine-generated data gathered from the websites, applications, sensors, devices etc. which make up your IT infrastructure and business.
Now, the steps of ingestion.
Step 1: First we create an IAM user for SQS and S3 access, so we have to create a policy for that. Search for "Configure CloudTrail inputs for the Splunk Add-on for AWS", and under "Configure AWS permissions for the CloudTrail input" copy the permissions available there.
Step 2: Create a policy and paste those permissions into the JSON option.
After creating the policy, attach it to the user by clicking on add permissions. Don't forget to copy the access key and secret key.
Step 3: Now go to CloudTrail, create a trail from the dashboard section, and create the S3 bucket.
Step 4: Now go to SQS and create the queue; after that, when you go back to CloudTrail you will see the queue that you created, and you can select it.
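A hedged CLI sketch of the queue setup (the queue name and the CloudTrail SNS topic ARN are placeholders); the Splunk add-on then polls this queue:

# Sketch: create the SQS queue and subscribe it to the CloudTrail SNS topic (placeholders).
aws sqs create-queue --queue-name splunk-cloudtrail-queue
aws sns subscribe \
  --topic-arn arn:aws:sns:us-east-1:123456789012:cloudtrail-topic \
  --protocol sqs \
  --notification-endpoint arn:aws:sqs:us-east-1:123456789012:splunk-cloudtrail-queue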
Step 5: Now it's time to configure Splunk. Go to the Splunk dashboard and install the AWS add-on from Apps.
After the installation you will see the add-on listed under Apps.
Click on the AWS add-on, and in the Configuration section enter the access key ID and secret key of the IAM user that you created.
Step 6: Now go to Inputs and fill in the details.
Type a name for the input and select the AWS account; when you select the account it will automatically connect to the SQS queue. For the index you can select main. Once done, save it.
Step 7: Now go to Search & Reporting.
In the search bar type index=main, and that's it; you will see the logs coming from CloudTrail via SQS.
For the further information mail : [email protected]
Integrating AWS Security Hub with Splunk via Amazon Event Bridge.
Amazon Security Hub
Amazon Security Hub gives you a comprehensive view of your security alerts and security posture across your Amazon Web Services accounts. There are a range of powerful security tools at your disposal, from firewalls and endpoint protection to vulnerability and compliance scanners.
https://docs.aws.amazon.com/securityhub/?id=docs_gateway
Amazon Event Bridge
Amazon EventBridge is a serverless event bus that makes it easier to build event-driven applications at scale using events generated from your applications, integrated Software-as-a-Service (SaaS) applications, and AWS services. EventBridge delivers a stream of real-time data from event sources such as Zendesk or Shopify to targets like AWS Lambda and other SaaS applications. You can set up routing rules to determine where to send your data to build application architectures that react in real-time to your data sources with event publisher and consumer completely decoupled.
https://docs.aws.amazon.com/eventbridge/index.html
CloudWatch Log Groups
CloudWatch Logs enables you to centralize the logs from all of your systems, applications, and AWS services that you use, in a single, highly scalable service. You can then easily view them, search them for specific error codes or patterns, filter them based on specific fields, or archive them securely for future analysis. CloudWatch Logs enables you to see all of your logs, regardless of their source, as a single and consistent flow of events ordered by time, and you can query them and sort them based on other dimensions, group them by specific fields, create custom computations with a powerful query language, and visualize log data in dashboards.
Splunk Enterprise
Splunk Enterprise Security (ES) provides security information and event management (SIEM) for machine data generated from security technologies such as network, endpoint, access, malware, vulnerability and identity information. It is a premium application that is licensed independently.
Now, the steps for the integration.
Step 1: First of all, we have to enable Security Hub.
Step 2: Then, in the Findings section of Security Hub, we can see the findings (logs); if you don't have any, you can use any of the services that feed Security Hub. Just for simplicity I used GuardDuty (https://aws.amazon.com/guardduty/) and generated sample findings.
Step 3: Now it's time to set up Amazon EventBridge to route from the event source to the target destination (here, the event source is Security Hub and the target is a CloudWatch log group).
Step 4: In Amazon EventBridge, create a rule under Rules, select Security Hub as the event source,
and for the target select the CloudWatch log group,
and click on create.
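A CLI sketch of the same rule (the rule name, log group ARN and region are placeholders; the event pattern assumes the standard Security Hub "findings imported" detail type):

# Sketch: route Security Hub findings to a CloudWatch log group via EventBridge (placeholders).
aws events put-rule \
  --name securityhub-to-cloudwatch \
  --event-pattern '{"source":["aws.securityhub"],"detail-type":["Security Hub Findings - Imported"]}'
aws events put-targets \
  --rule securityhub-to-cloudwatch \
  --targets 'Id=1,Arn=arn:aws:logs:us-east-1:123456789012:log-group:/aws/events/securityhub'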
Step 5: Now it's time to create a user with the CloudWatchLogsReadOnlyAccess permission.
NOTE: Don't forget to copy the access key ID and secret key of the user you created.
After that it's time to set up Splunk, so first of all go to Splunk and install the AWS add-on from the Apps section of the Splunk dashboard.
Step 6: In Apps there is an option called "Find More Apps"; click on that, type AWS in the search bar, and you will find the add-on.
Install it, and then in the Configuration section add the access key ID and secret key that you generated for the user with the CloudWatchLogsReadOnlyAccess permission; leave the region as Global.
Step 7: After that, add an index in Splunk; in Settings you will find the Indexes option.
Click on New Index and type a name for the index.
After that, come back to the AWS add-on, click on Inputs, and create the input. In Create New Input there are different parameters; click on Custom Data Types.
Step 8: Here you get the CloudWatch Logs option; click on it,
enter the details, save, and done.
Step 9: Now your configuration is done. Go to Search & Reporting and search by typing index=<the name that you gave the index>; you will then see the log stream being fetched via the Amazon EventBridge target.
For the further information mail : [email protected]
Docker tasks: 1. Docker installed. I used Docker Desktop for the installation; I installed it on Windows and accessed it through the command line.
2. Downloaded images (php, mysql, nginx).
AWS Transit Gateway
What is Transit Gateway:
A transit gateway is a network hub that connects VPCs together. It keeps your private data private; traffic never travels over public networks.
There are three concepts involved in setting up transit gateway connections:
1. Attachment: the connection from an Amazon VPC or VPN to the TGW.
2. Association: the route table used to route packets coming from an attachment (Amazon VPC or VPN).
3. Propagation: the route table where the attachment's routes are installed.
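A hedged CLI sketch of creating a TGW and attaching a VPC to it (all IDs are placeholders):

# Sketch: create a transit gateway and attach a VPC to it (placeholder IDs).
aws ec2 create-transit-gateway --description "demo TGW"
aws ec2 create-transit-gateway-vpc-attachment \
  --transit-gateway-id tgw-0123456789abcdef0 \
  --vpc-id vpc-0123456789abcdef0 \
  --subnet-ids subnet-0123456789abcdef0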
BENEFITS
Simplified connectivity – AWS resources in geographically dispersed VPCs need access to a wide variety of on-prem or remote infrastructure. Now,  you can connect all of your VPCs across thousands of AWS accounts and merge everything into a centrally-managed gateway.
Simplified visibility and network control – For large enterprises, VPCs are located in different AWS regions based on their business use cases. Complex network-routing is required to implement a hybrid network architecture. With centralized monitoring and controls you can easily manage all of your Amazon VPCs and edge connections in a single console. Developers and SREs can quickly identify issues and react to events on your network. AWS Transit Gateway provides statistics and logs that are then used by services such as Amazon CloudWatch and Amazon VPC Flow Logs to capture information on the IP traffic routed through the AWS Transit Gateway. You can use Amazon CloudWatch to get bandwidth usage between Amazon VPCs and a VPN connection, packet flow count, and packet drop count.
On-demand bandwidth – You can expand your network quickly to get the bandwidth requirements in order to transfer large amounts of data for your applications, to scale edge devices, or to enable your migration to the cloud.
To learn more about and setup TGW(Transit Gateway) follow the link:
https://docs.aws.amazon.com/vpc/latest/tgw/what-is-transit-gateway.html
Serverless Application Model
Earlier I had created a DynamoDB table and a Lambda function for it. Then I created the API. After that I added a POST method to the API resource and wired it to the Lambda function that I had created for the DynamoDB table. After that I deployed the API, and I tested it by creating one item through the API with the primary key that I had defined for the DynamoDB table (in my case it was employeeid). After that I checked the flow and the item was created. When I deployed the API I got the invoke URL, which I pasted into my Postman client, and as I posted more items the table was updated automatically.
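A sketch of testing such an endpoint (the invoke URL, stage, resource path and item fields are placeholders, not values from this post):

# Sketch: POST an item to the deployed API, which the Lambda writes to DynamoDB (placeholders).
curl -X POST \
  "https://<api-id>.execute-api.<region>.amazonaws.com/prod/employees" \
  -H "Content-Type: application/json" \
  -d '{"employeeid": "101", "name": "Test User"}'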
DevSecOps: Implementing security scans in the CI and CD pipeline
Let's start by securing the GitHub repository first:
In the Jenkins pipeline, to add a security scan for the GitHub repository we use the TruffleHog tool, which scans the GitHub repository and checks for leaked credentials.
1. TruffleHog runs behind the scenes to scan your environment for secrets like private keys and credentials, so you can protect your data before a breach occurs. Secrets can be found anywhere, so TruffleHog scans more than just code repositories, including SaaS and internally hosted software. With support for custom integrations and new integrations added all the time, you can secure your secrets across your entire environment.
Note: TruffleHog is a regex (regular expression) based scanner for GitHub secrets.
2. Jenkins pipeline syntax for implementing TruffleHog:
stage('check_git_repository_secrets') {
    steps {
        sh 'docker pull gesellix/trufflehog'
        sh 'docker run -t gesellix/trufflehog --json <your github url> > trufflehog'
    }
}
Here we use Docker Hub to pull the TruffleHog image; to install Docker, go to the link: https://docs.docker.com/engine/install/. The '--json' flag gives the output in JSON format.
Tomcat
Tomcat 403 Access Denied error for the Manager app, solved:
Step 1: Go to the directory where Tomcat is installed.
Step 2: Go to the webapps directory, then inside it go to the manager directory, then into META-INF, and open context.xml. Comment out the RemoteAddrValve element (XML uses <!-- --> for comments) so that access to the Manager app is no longer restricted to localhost:
<Context antiResourceLocking="false" privileged="true" >
  <!--
  <Valve className="org.apache.catalina.valves.RemoteAddrValve"
         allow="127\.\d+\.\d+\.\d+|::1|0:0:0:0:0:0:0:1" />
  -->
  <Manager sessionAttributeValueClassNameFilter="java\.lang\.(?:Boolean|Integer|Long|Number|String)|org\.apache\.catalina\.filters\.CsrfPreventionFilter\$LruCache(?:\$1)?|java\.util\.(?:Linked)?HashMap"/>
</Context>
Step 3: Next, go inside the conf directory, which is located in the same place as the webapps directory, open tomcat-users.xml, and add the following lines inside the tomcat-users tag:
<role rolename="manager-gui"/>
<role rolename="manager-script" />
<user username="tomcat" password="tomcat" roles="manager-gui,manager-script”/>
</tomcat-users>
Python terminal in Docker: task 7.2