
Jenkins - DevOps

What is Jenkins?

Jenkins is an easy-to-use, open-source, Java-based CI/CD tool. It has huge community support and a vast ecosystem of plugins that integrate with many open-source and enterprise tools to make your life easier.

The following diagram shows the overall architecture of Jenkins and the connectivity workflow.

The following are the key components in Jenkins:

  1. Jenkins Master Node
  2. Jenkins Agent Nodes/Clouds
  3. Jenkins Web Interface

Jenkins Server (Formerly Master)

Jenkins’s server, or master node, holds all the key configurations. It acts as a control server that orchestrates all the workflow defined in the pipelines, for example scheduling jobs and monitoring their execution.


Let’s have a look at the key Jenkins master components.

Jenkins Jobs

A job is a collection of steps that you can use to build your source code, test your code, run a shell script, run an Ansible playbook on a remote host, execute a Terraform plan, etc. We normally call it a Jenkins pipeline.

Jenkins JOB: Clone from GitHub > Compile Code > Run Unit test cases.

If you translate the above steps to a Jenkins pipeline job, it looks like the following.
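As a sketch, those three steps can be expressed as a declarative Jenkinsfile along the following lines (the repository URL and the Maven commands are placeholder assumptions; your project may use a different build tool):

```groovy
// Jenkinsfile - declarative pipeline sketch
// The repo URL and Maven commands below are placeholders.
pipeline {
    agent any
    stages {
        stage('Clone from GitHub') {
            steps {
                git url: 'https://github.com/example/app.git', branch: 'main'
            }
        }
        stage('Compile Code') {
            steps {
                sh 'mvn -B clean compile'
            }
        }
        stage('Run Unit Tests') {
            steps {
                sh 'mvn -B test'
            }
        }
    }
}
```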

Jenkins Plugins

Plugins are official and community-developed modules that you can install on your Jenkins server. They add functionality that is not natively available in Jenkins.

For example, if you want to upload a file to an S3 bucket from Jenkins, you can install an AWS Jenkins plugin and use the abstracted plugin functionality to upload the file rather than writing your own logic with the AWS CLI. The plugin takes care of error and exception handling.

Here is an example of the S3 file upload functionality provided by the AWS Steps plugin.
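A minimal sketch, assuming the Pipeline: AWS Steps plugin is installed; the region, credential ID, bucket, and file names are placeholders:

```groovy
// Pipeline stage sketch using the AWS Steps plugin.
// 'aws-creds', the bucket, and file paths are placeholder values.
pipeline {
    agent any
    stages {
        stage('Upload to S3') {
            steps {
                withAWS(region: 'us-east-1', credentials: 'aws-creds') {
                    // s3Upload is provided by the AWS Steps plugin
                    s3Upload(file: 'build/app.jar',
                             bucket: 'my-artifact-bucket',
                             path: 'releases/app.jar')
                }
            }
        }
    }
}
```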


You can install and upgrade all the available plugins from the Jenkins dashboard itself. On a corporate network, you will have to set up proxy details to connect to the plugin repository.

You can also download the plugin file and install it by copying it to the plugins directory under the /var/lib/jenkins folder.

You can also develop your own custom plugins. Check out all available plugins in the Jenkins Plugin Index.

Jenkins Global Security

  • Jenkins’s own user database:- A set of users maintained by Jenkins’s own database. When we say database, it is all flat configuration files (XML files).
  • LDAP integration:- Jenkins authentication using your corporate LDAP configuration.
  • SAML Single Sign-On (SSO):- Supports single sign-on using providers like Okta, Azure AD, Auth0, etc.

Jenkins Credentials

In Jenkins, you can save different types of secrets as credentials.
  1. Secret text
  2. Username & password
  3. SSH keys
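In a pipeline, you reference a credential by its ID rather than hard-coding the secret. A sketch assuming the Credentials Binding plugin; the credential ID 'docker-registry' and the registry URL are placeholders:

```groovy
// Pipeline stage sketch: bind a stored username/password credential
// to environment variables. 'docker-registry' is a placeholder ID.
pipeline {
    agent any
    stages {
        stage('Push Image') {
            steps {
                withCredentials([usernamePassword(credentialsId: 'docker-registry',
                                                  usernameVariable: 'REG_USER',
                                                  passwordVariable: 'REG_PASS')]) {
                    // The secret values never appear in the Jenkinsfile itself
                    sh 'docker login -u "$REG_USER" -p "$REG_PASS" registry.example.com'
                }
            }
        }
    }
}
```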

Jenkins Agent

Jenkins agents are the worker nodes that actually execute all the steps mentioned in a Job. When you create a Jenkins job, you have to assign an agent to it. Every agent has a label as a unique identifier.

When you trigger a Jenkins job from the master, the actual execution happens on the agent node that is configured in the job.
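In a declarative pipeline, you pin a job to an agent by its label. A small sketch; 'linux-docker' is a placeholder label that would be assigned to the agent in its node configuration:

```groovy
// This pipeline runs only on agents carrying the 'linux-docker' label.
pipeline {
    agent { label 'linux-docker' }
    stages {
        stage('Build') {
            steps {
                sh 'make build'
            }
        }
    }
}
```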

Jenkins Server-Agent Connectivity

You can connect a Jenkins master and agent in two ways:

Using the SSH method:

Uses the SSH protocol to connect to the agent. The connection is initiated from the Jenkins master, so there should be connectivity over port 22 between the master and the agent.

Using the JNLP method: 

Uses the Java Network Launch Protocol (JNLP). In this method, a Java agent process is started on the agent node with the Jenkins master's details. For this, the master node's firewall should allow connectivity on the specified JNLP port. Typically the port assigned is 50000, but this value is configurable.

There are two types of Jenkins agents:

Agent Nodes: 

These are servers (Windows/Linux) configured as static agents. They are up and running all the time and stay connected to the Jenkins server. Organizations use custom scripts to shut down and restart the agents when they are not in use, typically during nights and weekends.

Agent Clouds:

A Jenkins cloud agent is a concept of dynamic agents: whenever you trigger a job, an agent is deployed as a VM or container on demand and deleted once the job completes. This method saves infrastructure cost when you have a large Jenkins ecosystem with continuous builds.
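For example, with the Kubernetes plugin configured as a cloud, each build can spin up its own pod and tear it down afterwards. A sketch under that assumption; the container image is a placeholder:

```groovy
// Pipeline sketch: a dynamic pod agent via the Kubernetes plugin.
// The pod exists only for the duration of this build.
pipeline {
    agent {
        kubernetes {
            yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: maven
    image: maven:3.9-eclipse-temurin-17
    command: ['sleep']
    args: ['infinity']
'''
        }
    }
    stages {
        stage('Build') {
            steps {
                container('maven') {
                    sh 'mvn -B clean verify'
                }
            }
        }
    }
}
```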


https://devopscube.com/jenkins-pipeline-as-code/
