Anti-DDoS Prevention for Your Server

Distributed Denial of Service Attack (DDoS) is, unfortunately, an increasingly common form of premeditated attack against an organization’s web infrastructure.

Typically, it involves using multiple external systems to flood the target system with requests with the intention of overwhelming the system with network traffic. These attacks work because an unprotected system may find it difficult to differentiate between genuine traffic and DDoS traffic.

This article will help you understand which open source software you can use to prevent DDoS attacks.

DDoS Deflate

DDoS Deflate is a lightweight open-source shell script that you can easily implement on your server and configure to mitigate most DDoS attacks.

Here are some of the features of DDoS Deflate:

  • It can automatically detect rules within iptables or an Advanced Policy Firewall (APF).
  • Ability to block IP addresses temporarily (the default setting is 30 mins).
  • Whitelist and blacklist features for blocking or allowing connections to the server.
  • Management features to notify administrator of actions taken.
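The core check DDoS Deflate performs is simple enough to sketch in a few lines of shell: count connections per source IP and flag any IP over a threshold. In the real script the IP list comes from netstat; here we use hard-coded sample data, and the threshold of 3 is purely illustrative (DDoS Deflate's actual default is configurable):

```shell
#!/bin/sh
# Count connections per source IP and flag offenders over a threshold --
# a minimal sketch of the check DDoS Deflate runs on live netstat output.
THRESHOLD=3
connections="1.2.3.4
1.2.3.4
1.2.3.4
1.2.3.4
5.6.7.8"

echo "$connections" | sort | uniq -c | while read count ip; do
  if [ "$count" -gt "$THRESHOLD" ]; then
    # DDoS Deflate would add a firewall rule here instead, e.g.:
    # iptables -I INPUT -s "$ip" -j DROP
    echo "block $ip ($count connections)"
  fi
done
# prints: block 1.2.3.4 (4 connections)
```

On a live server, the sample data would be replaced with the output of a netstat/ss invocation, and the echo with an iptables or APF call.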


Fail2ban

Fail2ban works in a similar way to DDoS Deflate, as it also bans traffic based on malicious IP address profiling.

It’s a good performer and some of the main features are as follows:

  • Easy to configure with some automation features included.
  • Compatible with existing firewalls, e.g. iptables.
  • Customizable blacklisting and whitelisting features.
  • Ability to block automated brute force attacks.
  • Time-based IP blocking.

Fail2ban is a good option for any web server that runs SSH and a few other services.
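Fail2ban is driven by "jail" definitions. A minimal jail.local sketch protecting SSH might look like the following; the directive names are standard Fail2ban options, but the values are illustrative:

```ini
[sshd]
enabled  = true
port     = ssh
maxretry = 5
findtime = 600
bantime  = 1800
```

With this in place, an IP that fails authentication five times within ten minutes is banned for thirty minutes.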

Apache mod_evasive module

The mod_evasive module is suited to protecting Apache web servers against DDoS attacks. It also includes notification features via email and SYSLOG.

This module is a strong performer and has the added benefit of adapting to real-time conditions, creating rules on the fly when any of the following patterns are detected:

  • Requesting access to the same page too many times per second.
  • Making 50 concurrent connections to the same child process per second.
  • Making other requests from blacklisted IP addresses.

Some of the features which are available to prevent DDoS attacks are as follows:

  • The server administrator can limit access to certain pages based on the number of requests one particular IP can make (DOSPageCount option).
  • Access to an entire website can be limited based on how many connections one particular IP makes using the DOSSiteCount option.
  • The DOSHashTable feature can monitor who is accessing what in the web server based on their previous visits and can make a decision whether to allow or block the connection.

The administrator can be notified via email of what action Apache mod_evasive is taking.
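Taken together, a typical mod_evasive configuration exercising the options above might look like this; the directive names are mod_evasive's own, while the values shown are illustrative:

```apache
<IfModule mod_evasive20.c>
    DOSHashTableSize    3097
    DOSPageCount        5
    DOSSiteCount        100
    DOSPageInterval     1
    DOSSiteInterval     1
    DOSBlockingPeriod   60
    DOSEmailNotify      admin@example.com
</IfModule>
```

Here, a client requesting the same page more than 5 times per second, or making more than 100 requests per second site-wide, is blocked for 60 seconds, and the administrator is notified by email.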

Mod_evasive is relatively easy to use, and because it is an open source Apache module, it's free to use.


FastNetMon

FastNetMon is another high-performance DDoS mitigation tool, built on a packet analyzer engine that supports PF_RING, sFlow, NetFlow, and PCAP.

Below are some of the main features of FastNetMon:


  • Handles both incoming and outgoing traffic.
  • Support for triggering a block script when a host exceeds a network load threshold in packets per second or bytes per second.
  • It can untag VLANs, so it can separate traffic from different networks.
  • Capable of handling the networks used in high-performance telecommunications.
  • It can decode encapsulated protocols to investigate suspicious packets.
  • It can reroute DDoS traffic to a 'black hole'.
  • Works well in mirrored networks.
  • Can run on a server or a software (virtual) router.
  • High performance – can detect DoS/DDoS in 1–2 seconds.
  • High compatibility – works with Ubuntu, FreeBSD, and macOS, and has been tested up to 10GE with 5–6 Mpps on an Intel i7 2600 with an Intel NIC 82599.
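The thresholds and the ban action above are controlled from /etc/fastnetmon.conf. A sketch of the relevant options (the option names follow FastNetMon's configuration file; the values and script path are illustrative):

```ini
# Ban hosts exceeding these per-host thresholds
enable_ban = on
threshold_pps = 20000
threshold_mbps = 1000
ban_time = 1900
# Script to run when a host is banned or unbanned
# (e.g. to announce a blackhole route)
notify_script_path = /usr/local/bin/notify_about_attack.sh
```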


HAProxy

HAProxy is an excellent open-source load balancing tool that is also effective against DDoS attacks on a cloud server.

It has the following features:

  • It can block traffic based on bandwidth usage.
  • Contains blacklist and whitelist tables of IPs, which it builds into its configuration based on the ruleset.
  • Ability to block IPs that might be performing DDoS attacks.
  • HAProxy can identify bots, which is why it's effective against DDoS attacks.
  • Can prevent SYN flood attacks and offers capabilities such as connection limiting.
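The connection limiting mentioned above can be expressed with a stick table in the frontend. The directives below are real HAProxy configuration keywords, but the rate, table size, and backend name are illustrative:

```haproxy
frontend web
    bind :80
    # Track per-source connection rate over a 3-second window
    stick-table type ip size 100k expire 30s store conn_rate(3s)
    tcp-request connection track-sc0 src
    # Reject clients opening connections faster than 20 per 3 seconds
    tcp-request connection reject if { src_conn_rate gt 20 }
    default_backend servers
```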


DDOSMON

Another low-level DDoS monitoring and mitigation tool is DDOSMON. It monitors traffic for possible attacks and reacts by alerting and triggering user-defined actions based on the type of attack.

It is capable of successfully detecting most common attack types: when it identifies an attack, it sends an email notification to the administrator and takes corrective actions.


NGINX

As well as being a popular load balancing tool that can sit in front of Apache, NGINX also has powerful built-in DDoS attack mitigation capabilities.

Some of the DDoS features of NGINX are:

  • Rate limiting and identification of concurrent connections, to limit access based on client IP addresses.
  • Ability to block clients based on their geo-location using the ngx_http_geo_module. Using this feature, whole countries can be blocked if required.
  • It can identify suspicious agents by checking their Flash and JavaScript capabilities.
  • Can be combined with HaProxy for additional protection against DDoS.
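The rate limiting mentioned above is configured through the ngx_http_limit_req_module. The directives below are NGINX's own; the zone name and the rate and burst values are illustrative:

```nginx
# Inside the http {} block: allow each client IP 10 requests/second,
# keyed on the client address, in a 10 MB shared zone
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    listen 80;
    location / {
        # Permit short bursts of up to 20 requests before rejecting
        limit_req zone=perip burst=20 nodelay;
    }
}
```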


These are some of the most popular, easy to use, but also very effective DDoS protection tools for safeguarding your cloud server. Between them, they should offer most server administrators the ability to protect their server against the risk of DDoS attacks.

Installation and Configuration of the mod_pagespeed Module with Apache | Ubuntu 18.04

Mod_pagespeed is an Apache module that can be used to improve the speed of the Apache web server on Linux. It provides several filters that automatically optimize web pages for better performance, and it supports several operating systems, including Fedora, RHEL, Debian, Ubuntu, and CentOS. The mod_pagespeed module does not require modifications to existing content, which means all internal optimizations and changes to files are made on the server side.
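Individual filters are switched on in the module's Apache configuration. The directives and filter names below are real mod_pagespeed options, though the particular selection is illustrative:

```apache
ModPagespeed on
ModPagespeedEnableFilters collapse_whitespace,remove_comments
```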


Prerequisites

  • A server running Ubuntu 18.04.
  • A static IP address configured on your server.
  • A root password set up on your server.

Getting Started

Before starting, you will need to update your system with the latest version. You can do this by running the following command:

apt-get update -y
apt-get upgrade -y

Once your server is updated, restart your server to apply the changes.

Install Apache Web Server

First, you will need to install the Apache web server on your system. You can install it by running the following command:

apt-get install apache2 -y

Once the installation has been completed, start Apache service and enable it to start on boot with the following command:

systemctl start apache2
systemctl enable apache2

Once you have finished, you can proceed to the next step.

Install Mod_pagespeed Module

First, you will need to download the latest version of mod_pagespeed from its official website. You can do so with the following command:

wget https://dl-ssl.google.com/dl/linux/direct/mod-pagespeed-stable_current_amd64.deb
Once the download is completed, install it by running the following command:

dpkg -i mod-pagespeed-stable_current_amd64.deb

Once the installation has been completed successfully, restart Apache service to apply all the changes:

systemctl restart apache2

You can now verify the Mod_pagespeed module with the following command:

curl -D- localhost | head

You should see the following output:

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0HTTP/1.1 200 OK
Date: Sat, 11 May 2019 04:58:26 GMT
Server: Apache/2.4.29 (Ubuntu)
Accept-Ranges: bytes
Vary: Accept-Encoding
Cache-Control: max-age=0, no-cache, s-maxage=10
Content-Length: 10089
Content-Type: text/html; charset=UTF-8

100 10089  100 10089    0     0   182k      0 --:--:-- --:--:-- --:--:--  185k

Configure Mod_pagespeed Web Interface

The mod_pagespeed module provides a simple, user-friendly web interface for viewing server state. You can enable the web interface by editing the /etc/apache2/mods-available/pagespeed.conf file:

nano /etc/apache2/mods-available/pagespeed.conf

Add the following lines:

<Location /pagespeed_admin>
    Order allow,deny
    Allow from localhost
    Allow from all
    SetHandler pagespeed_admin
</Location>

<Location /pagespeed_global_admin>
    Order allow,deny
    Allow from localhost
    Allow from all
    SetHandler pagespeed_global_admin
</Location>

Save and close the file when you are finished. Then restart the Apache service to apply the changes:

systemctl restart apache2

Once done, you can proceed to access the mod_pagespeed web interface.

Access Mod_pagespeed Web Interface

Now, open your web browser and visit http://your-server-ip/pagespeed_admin. You will be taken to the mod_pagespeed web interface.

The web interface provides separate pages for statistics, configuration, histograms, the console, message history, and graphs.

Congratulations! You have successfully installed mod_pagespeed with Apache on an Ubuntu 18.04 server.

How to Install Jenkins on Ubuntu 18.04

Jenkins is an open-source automation server that offers an easy way to set up a continuous integration and continuous delivery (CI/CD) pipeline.

Continuous integration (CI) is a DevOps practice in which team members regularly commit their code changes to the version control repository, after which automated builds and tests are run. Continuous delivery (CD) is a series of practices where code changes are automatically built, tested and deployed to production.

In this tutorial, we will show you how to install Jenkins on an Ubuntu 18.04 machine using the Jenkins Debian package repository.

Installing Jenkins

To install Jenkins on your Ubuntu system, follow these steps:

  1. Install Java. Since Jenkins is a Java application, the first step is to install Java. Update the package index and install the Java 8 OpenJDK package with the following commands:
    sudo apt update
    sudo apt install openjdk-8-jdk

    The current version of Jenkins does not support Java 10 (and Java 11) yet. If you have multiple versions of Java installed on your machine make sure Java 8 is the default Java version.

  2. Add the Jenkins Debian repository. Import the GPG key of the Jenkins repository using the following wget command:
    wget -q -O - https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo apt-key add -

    The command above should output OK which means that the key has been successfully imported and packages from this repository will be considered trusted.

    Next, add the Jenkins repository to the system with:

    sudo sh -c 'echo deb https://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list'
  3. Install Jenkins. Once the Jenkins repository is enabled, update the apt package list and install the latest version of Jenkins by typing:
    sudo apt update
    sudo apt install jenkins

    Jenkins service will automatically start after the installation process is complete. You can verify it by printing the service status:

    systemctl status jenkins

    You should see something similar to this:

    ● jenkins.service - LSB: Start Jenkins at boot time
    Loaded: loaded (/etc/init.d/jenkins; generated)
    Active: active (exited) since Wed 2018-08-22 13:03:08 PDT; 2min 16s ago
        Docs: man:systemd-sysv-generator(8)
        Tasks: 0 (limit: 2319)
    CGroup: /system.slice/jenkins.service

Adjusting Firewall

If you are installing Jenkins on a remote Ubuntu server that is protected by a firewall you’ll need to open port 8080. Assuming you are using UFW to manage your firewall, you can open the port with the following command:

sudo ufw allow 8080

Verify the change with:

sudo ufw status
Status: active

To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere
8080                       ALLOW       Anywhere
OpenSSH (v6)               ALLOW       Anywhere (v6)
8080 (v6)                  ALLOW       Anywhere (v6)

Setting Up Jenkins

To set up your new Jenkins installation, open your browser and type your domain or IP address followed by port 8080 (http://your_ip_or_domain:8080). A screen similar to the following will be displayed:

During the installation, the Jenkins installer creates an initial 32-character long alphanumeric password. Use the following command to print the password on your terminal:

sudo cat /var/lib/jenkins/secrets/initialAdminPassword

Copy the password from your terminal, paste it into the Administrator password field and click Continue.

On the next screen, the setup wizard will ask you whether you want to install suggested plugins or you want to select specific plugins. Click on the Install suggested plugins box, and the installation process will start immediately.

Once the plugins are installed, you will be prompted to set up the first admin user. Fill out all required information and click Save and Continue.

The next page will ask you to set the URL for your Jenkins instance. The field will be populated with an automatically generated URL.

Confirm the URL by clicking on the Save and Finish button and the setup process will be completed.

Click on the Start using Jenkins button and you will be redirected to the Jenkins dashboard logged in as the admin user you have created in one of the previous steps.

At this point, you’ve successfully installed Jenkins on your system.


In this tutorial, you have learned how to install and perform the initial configuration of Jenkins. You can now start exploring Jenkins features by visiting the official Jenkins documentation page.

How to Secure Your SSH Connection in Ubuntu 18.04

Creating a New Sudo User

It is always best practice to disallow root authentication over SSH since this is the username people will try to hack into the most. Thus, the first thing we want to do to secure our server is create a new sudo user for SSH. To do so, enter the following command, replacing username with the username of your choice:

# adduser username

Follow the prompt to set a password and provide any other information you wish; only the password is required. Now, we want to give our new user sudo privileges so that we can become root and run commands that need administrative privileges. We can do this by entering the following command:

# usermod -aG sudo username

Last, we want to enable our new user to authenticate using the SSH public key we have already provided to the root user. We can use a simple rsync command to copy the public key over to our new user’s authorized_keys file.

# rsync --archive --chown=username:username ~/.ssh /home/username

Before proceeding to the next step, log out and make sure that you are able to authenticate to the server as the new user using SSH. If you are unable to login as your new user, you will still be able to log in as root; confirm all of the commands have been entered correctly and try to log in as your new user again.

Changing the SSH Daemon Configuration

Since we are using SSH keys and a new user to authenticate to our server, we do not ever want anyone to authenticate using a password or the root username. To accomplish this, we first want to navigate to the configuration file for the OpenSSH daemon. To do so, open the file in a text editor of your choice using the following command:

$ sudo vi /etc/ssh/sshd_config

There are three changes we want to make to this file.  First, we want to change the port on which OpenSSH listens for requests.

Warning: If you have any active firewalls, you will need to allow traffic through the port you choose or you will lock yourself out of your server. If you do lock yourself out of your server, you can regain access through IPMI or KVM.

At the top of the file, you will see a section that looks like this by default:

#Port 22
#AddressFamily any
#ListenAddress ::

Uncomment the “Port” section and choose any valid port number like in the following example. In our example, we use port 12345.

Port 12345
#AddressFamily any
#ListenAddress ::

Next scroll down to the # Authentication: portion of the file. You will see five options that will appear as follows by default:

#LoginGraceTime 2m
PermitRootLogin yes
#MaxAuthTries 6
#MaxSessions 10

Here we want to change the “yes” next to “PermitRootLogin” to “no.” It will appear as follows:

#LoginGraceTime 2m
PermitRootLogin no
#MaxAuthTries 6
#MaxSessions 10

Now we want to scroll down the sshd_config file a little further to make our final change – disabling password authentication. You will see a section that looks like this by default:

# To disable tunneled clear text passwords, change to no here!
PasswordAuthentication yes
#PermitEmptyPasswords no

We want to change the “yes” next to “PasswordAuthentication” to a “no.” It will appear as follows:

# To disable tunneled clear text passwords, change to no here!
PasswordAuthentication no
#PermitEmptyPasswords no

Save and exit the file. Finally, we need to restart OpenSSH for the changes to take effect. Do this by entering the following command:

sudo systemctl restart sshd.service

Let’s take a second to review what we did here. We changed the port number that we use to listen for SSH requests. Then, we disabled SSH access for the root user or any user trying to authenticate with a password. If we have done this correctly, the following command will no longer work to log in to the server.

$ ssh root@your_server_ip

To log in now, we are going to have to specify the port number we are using to listen for SSH requests. That means from now on we will need to use the following command, replacing the number next to “-p” with the port number we chose earlier:

$ ssh -p 12345 username@your_server_ip

Make sure that this command works and that the previous one does not. If it does, you are all set to access your server securely through SSH.


With so many bad actors out there using the internet, it has never been more important to secure any potential entry points to your server. By following this guide, you have made the most common entry point on Linux servers much more secure.

Difference Between a Dockerfile and a Docker Compose File


So you need to deploy containers. Where do you begin? You could certainly do this from the command line, deploying each container via a long string of command options, every time. Or you could make use of a tool that allows you to carefully construct the deployment within a configuration file, and then deploy the container with a simple command.

But which configuration file do you use? Which method of deployment do you use? You can go with docker-compose or docker. Your choice could all hinge on the complexity of the application/service you plan on deploying. And that all boils down to using either a Dockerfile or a docker-compose.yml or both.

You see? It gets complicated because you can use docker-compose.yml in such a way that it will call upon a Dockerfile to allow you to create even more complex container rollouts.

Let’s see how these two are used in conjunction.


The Dockerfile

So let's say we want to create a Dockerfile that will use the latest NGINX image but also install php-fpm. The file is named Dockerfile, and we'll house it in a new directory called dockerbuild. Create that new directory with the command:

mkdir ~/dockerbuild

Within ~/dockerbuild, create the Dockerfile with the command:

nano Dockerfile

In that file, paste the following:

FROM nginx:latest

LABEL maintainer="NAME <EMAIL>"

RUN apt-get -y update && apt-get -y upgrade && apt-get install -y php-fpm

Where NAME is the name to be used as the maintainer and EMAIL is the maintainer's email address.

This Dockerfile will pull the latest version of the official NGINX image, build a new image based on it, upgrade the platform, and install the php-fpm package and its dependencies (which include PHP). It's an incredibly simple example, but one that's easy to follow.

You could then run the docker build command with that Dockerfile, like so:

docker build -t "webdev:Dockerfile" .

But we want to integrate that file into docker-compose.yml.

The docker-compose.yml file

Now let’s craft a docker-compose.yml file which uses that Dockerfile, but also adds a database to the stack. This docker-compose.yml file might look like:

version: '3'
services:
  web:
    build: .
    ports:
      - "8080:80"
  db:
    image: mysql
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=password
      - MYSQL_USER=user
      - MYSQL_PASSWORD=password
      - MYSQL_DATABASE=demodb

The important section here is web:. It is in the web section that we instruct the docker-compose command to use the Dockerfile in the same directory (the . indicates that the build should run in the current working directory). If we wanted to house our Dockerfile in a completely separate directory, it would be declared here. Say, for example, the docker-compose.yml file is in ~/dockerbuild and the Dockerfile is in ~/nginxbuild; you could declare that with the line:

build: ~/nginxbuild
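Compose also accepts a longer build form with explicit context and dockerfile keys, which is useful when the Dockerfile has a non-default name or location (the paths here are illustrative):

```yaml
  web:
    build:
      context: ~/nginxbuild
      dockerfile: Dockerfile
```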

Save and close the file. You could then deploy the new container with the command (run from within the directory housing the docker-compose.yml file):

docker-compose up

The command would first build the NGINX container from the Dockerfile and then deploy the db container as defined in the db: section.

Of course, you could also define everything within the docker-compose.yml file, but making use of both Dockerfile and docker-compose.yml makes for a much more flexible and efficient system. Why? Say you've defined a very complex Dockerfile for an NGINX container and you want to reuse that in a container deployment within a complete stack. Why go through all the trouble of re-defining the NGINX container within docker-compose.yml when you can simply repurpose the Dockerfile?

Write once, use often

With this system, you can write once and use often. So craft a Dockerfile for a part of the stack and re-use it for multiple stacks, by way of docker-compose.yml. Remember, docker-compose.yml files are used for defining and running multi-container Docker applications, whereas Dockerfiles are simple text files that contain the commands to assemble an image that will be used to deploy containers.

So the workflow looks like this:

  1. Create Dockerfiles to build images.
  2. Define complex stacks (comprising individual containers) based on those Dockerfile images from within docker-compose.yml.
  3. Deploy the entire stack with the docker-compose command.

And that is the fundamental difference between Dockerfile and docker-compose.yml files.

All About the CI/CD Pipeline

A CI/CD (Continuous Integration/Continuous Deployment) pipeline is the backbone of the modern DevOps environment. It bridges the gap between development and operations teams by automating the building, testing, and deployment of applications. In this blog, we will learn what a CI/CD pipeline is and how it works.

Before moving onto the CI/CD pipeline, let’s start by understanding DevOps.

DevOps is a software development approach that involves continuous development, continuous testing, continuous integration, continuous deployment, and continuous monitoring of the software throughout its development lifecycle. This is the process adopted by all the top companies to develop high-quality software and shorter development lifecycles, resulting in greater customer satisfaction, something that every company wants.

Your understanding of DevOps is incomplete without learning about its lifecycle. Let us now look at the DevOps lifecycle and explore how it is related to the software development stages.

CI stands for Continuous Integration and CD stands for Continuous Delivery/Continuous Deployment. You can think of it as a process similar to a software development lifecycle.
Let us see how it works.

A CI/CD pipeline is a logical demonstration of how software moves along the various stages of this lifecycle before it is delivered to the customer or goes live in production.

Let's take a scenario of a CI/CD pipeline. Imagine you're going to build a web application that will be deployed on live web servers. A set of developers is responsible for writing the code and building the web application. The team commits this code to a version control system (such as Git or SVN). From there it enters the build phase, the first phase of the pipeline, after which the code returns to the version control system with a proper version tag.

Suppose we have Java code that needs to be compiled before execution. From the version control phase it goes to the build phase, where it is compiled: the code from the various branches of the repository is merged, and a compiler is then used to compile it. This whole process is called the build phase.

Once the build phase is over, you move on to the testing phase. In this phase, we have various kinds of testing. One of them is the unit test, where you test a chunk/unit of the software for its sanity.

When the test is completed, you move on to the deploy phase, where you deploy it into a staging or a test server. Here, you can view the code or you can view the app in a simulator.

Once the code is deployed successfully, you can run another sanity test. If everything is accepted, then it can be deployed to production.

Meanwhile, in every step, if there is an error, you can shoot an email back to the development team so that they can fix it. Then they will push it into the version control system and it goes back into the pipeline.

Once again, if there is any error reported during testing, the feedback goes to the dev team again, where they fix it and the process reiterates if required.

This lifecycle continues until we get a product that can be deployed to the production server, where we measure and validate the code.
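The loop described above can be sketched as a tiny shell script. Every function here is a stand-in for the real work of its stage (compiling with a build tool, running the test suite, deploying to a staging server), and the failure branch is where the email notification to the dev team would be triggered:

```shell
#!/bin/sh
# Toy sketch of the build -> test -> deploy loop; each function is a
# placeholder stage that succeeds and reports what it would do.
build()     { echo "build: compiling sources"; }
unit_test() { echo "test: running unit tests"; }
deploy()    { echo "deploy: pushing to staging"; }

if build && unit_test && deploy; then
  echo "pipeline succeeded"
else
  # A real pipeline would email the dev team here and wait for a fixed commit.
  echo "pipeline failed"
fi
# last line printed: pipeline succeeded
```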

We now understand the CI/CD pipeline and how it works; next, we will look at what Jenkins is and how we can deploy the demonstrated code using Jenkins, automating the entire process.

The Ultimate CI Tool and Its Importance in the CI/CD Pipeline

Our task is to automate the entire process, from the time the development team gives us the code and commits it to the time we get it into production. We will automate the pipeline in order to make the entire software development lifecycle in DevOps/automated mode. For this, we will need automation tools.

Jenkins provides us with various interfaces and tools in order to automate the entire process.

We have a Git repository where the development team commits the code. From there, Jenkins, a front-end tool where you can define your entire job or task, takes over. Our job is to ensure the continuous integration and delivery process for that particular tool or application.

From Git, Jenkins pulls the code and moves it into the commit phase, where the code is committed from every branch. The build phase is where we compile the code. If it is Java code, we use tools like Maven in Jenkins to compile it, after which it can be deployed to run a series of tests. These test cases are overseen by Jenkins again.

Then, it moves on to the staging server to deploy it using Docker. After a series of unit tests or sanity tests, it moves on to production.
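This tutorial chains freestyle jobs through post-build actions, but the same build, test, and deploy flow can also be written as a declarative Jenkinsfile checked into the repository. A sketch, in which the stage names, image name, and shell steps are all illustrative:

```groovy
pipeline {
    agent any
    stages {
        stage('Build')  { steps { sh 'mvn package' } }
        stage('Test')   { steps { sh 'mvn test' } }
        stage('Deploy') {
            steps {
                sh 'docker build -t myapp .'
                sh 'docker run -d -p 8180:8080 myapp'
            }
        }
    }
}
```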

Docker is just like a virtual environment in which we can create a server. It takes a few seconds to create an entire server and deploy the artifacts we want to test. But here the question arises:

Why do we use Docker?

As we said earlier, you can spin up the entire cluster in a few seconds. Docker also provides a registry for storing images: you build your image once, store it, and can reuse it anytime in any environment, where it can replicate itself.

Hands-On: Building a CI/CD Pipeline Using Docker and Jenkins

Step 1: Open your terminal in your VM. Start Jenkins and Docker using these commands:

systemctl start jenkins

systemctl enable jenkins

systemctl start docker

Note: Use sudo before the commands if they display a "privileges" error.

Step 2: Open Jenkins on your specified port. Click on New Item to create a Job.

Step 3: Select a freestyle project and provide the item name (here I have given Job1) and click OK.

Step 4: Select Source Code Management and provide the Git repository. Click on Apply and Save button.

Step 5: Then click on Build->Select Execute Shell

Step 6: Provide the shell commands. Here, it will build the archive file to get a war file. After that, it will get the code which is already pulled and then it uses maven to install the package. It simply installs the dependencies and compiles the application.

Step 7: Create the new Job by clicking on New Item.

Step 8: Select freestyle project and provide the item name (here I have given Job2) and click on OK.

Step 9: Select Source Code Management and provide the Git repository. Click on Apply and Save button.

Step 10: Then click on Build->Select Execute Shell

Step 11: Provide the shell commands. Here it will start the integration phase and build the Docker Container.

Step 12: Create the new Job by clicking on New Item.

Step 13: Select freestyle project and provide the item name (here I have given Job3) and click on OK.

Step 14: Select Source Code Management and provide the Git repository. Click on Apply and Save button.

Step 15: Then click on Build->Select Execute Shell

Step 16: Provide the shell commands. Here it will check for the Docker Container file and then deploy it on port number 8180. Click on Save button.

Step 17: Now click on Job1 -> Configure.

Step 18: Click on Post-build Actions -> Build other projects.

Step 19: Provide the project name to build after Job1 (here is Job2) and then click on Save.

Step 20: Now click on Job2 -> Configure.

Step 21: Click on Post-build Actions -> Build other projects.

Step 22: Provide the project name to build after Job2 (here is Job3) and then click on Save.

Step 23: Now we will be creating a Pipeline view. Click on the “+” sign.

Step 24: Select Build Pipeline View and provide the view name (here I have provided CI CD Pipeline).

Step 25: Select the initialJob (here I have provided Job1) and click on OK.

Step 26: Click on Run button to start the CI/CD process.

Step 27: After a successful build, open localhost:8180/sample.text. It will run the application.

So far, we have learned how to create a CI/CD Pipeline using Docker and Jenkins. The intention of DevOps is to create better-quality software more quickly and with more reliability while inviting greater communication and collaboration between teams.
