on Sunday 9 December 2012

I've only played with virtual machines for the past six months. I always looked at VMs as a server-side technology and never fully appreciated their use. Quite ignorant!

One of the challenges my organisation faces is ensuring all the developers are working with the same versions of software as production. Sometimes the production machines get a patch but the developers continue to work on older versions - either because they don't know how to upgrade, are too busy, or are just plain lazy. Either way, when their code goes in for release and breaks, we always get the "works on my machine" story.

So, what is a virtual appliance?

"Virtual appliances are ready-to-run virtual machines packaged with an operating system and software application. These self-contained appliances make it simpler to acquire, deploy and manage applications by eliminating underlying hardware and operating system dependencies." (VMware)

At my office we work predominantly with Oracle software, so we have created one appliance with Database, Weblogic, and Forms & Reports, and another with Database, Weblogic and BPM. Now, when a patch comes along we update the VM and share it amongst the developers. Job done!



Something that really irks me is my company's decision to clone production environments and distribute these clones to create new environments. Let's say a new project starts up and wants a replica of one of our databases: a request is made to the DBA, who takes a snapshot of production and puts that snapshot on the new target host. They then do a series of search-and-replaces for environment-specific variables and hand over the new environment.

You may be asking what's my problem? Well, here they are:

  • When you clone, you take everything: configuration, structure and, worst of all, data.
  • The process involves three or four people and takes about a day.
  • Garbage in, garbage out - whilst production is the only "truth", once you mangle it with search-and-replaces who knows what you're getting?
So, what solution did I offer? 

As we're talking about the database, I shall start with that. Firstly, a database is no less important than application code and so should be version controlled. Once the source code is version controlled you can use tools such as dbdeploy to build your database in a repeatable manner. I'm a firm believer that no live data should exist outside of production or disaster recovery. If you can't generate test data for the scenarios you will encounter, then your test strategy is not sufficient.

What happens when I want to retain data?

In a data-retention environment such as User Acceptance, we only ever want to apply deltas to the DB, not refresh it every time - and dbdeploy handles this very well.
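As an illustration of the convention (the file name and table here are invented for the example), dbdeploy works from a directory of numbered delta scripts and applies, in order, only those it hasn't already recorded in its changelog table; everything after the `--//@UNDO` marker is used to build the rollback script:

```sql
-- 005_add_customer_email.sql - a hypothetical dbdeploy delta script.
-- dbdeploy applies scripts in numeric order and records each one in its
-- changelog table, so a given delta only ever runs once per database.
ALTER TABLE customer ADD (email VARCHAR2(255));

--//@UNDO
-- Everything below the marker is emitted into the generated undo script.
ALTER TABLE customer DROP COLUMN email;
```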

Cloning isn't all bad...

I think the cloning strategy has some value - version controlling your database will not suffice in a disaster recovery scenario because the most important part - the data - will be missing. I wholeheartedly agree with the cloning strategy in this scenario.



on Sunday 15 July 2012


The Problem

I've recently started working for a new company that operates in the insurance sector. They are an Oracle outfit with a few other technologies thrown in for good measure - there's even a mainframe. JOY!

Anyway, it quickly became apparent that they didn't truly understand their estate and were looking at everything as a single system. As a build manager this set all kinds of alarm bells ringing - I hope they're not expecting me to build this huge system every time there's a change to one small section...

Release day is worse!! Their current approach is to collate all of the changes for each of the subsystems, deploy them in one go and call it release X. Really?? Think about that in terms of Microsoft Office - I've made changes to Word and Excel, but not PowerPoint, and then said we have version 2 of MS Office. We're losing valuable information about the structure of Office: Word v2, Excel v2 and PowerPoint v1. I know it's a crude example, but I hope you see my problem.

The Solution

Well, the first thing is going to be to define solid boundaries in the system. Whilst this sounds quite straightforward, there are heavy dependencies on one of the systems in particular. Think of it as a viewer on a central database that is fed information from the other systems. Most often when a change is made, this one system will need an update too.

Implement a build framework. Currently, individual files are copied to a staging area and then copied into the environments manually - this is not acceptable.


on Friday 8 June 2012
"Command Prompt Here" is a handy tweak for Windows Explorer that opens a command prompt directly from a desired folder in the filesystem. It saves opening a new command prompt window and then traversing to your desired location.
The steps to configure it are below. Please note it's not an install, just a configuration change.

Hope this helps you in your day to day tasks.

In Explorer, open Tools, then Folder Options.

  • Select the File Types tab.
  • For Windows XP: Go to NONE / Folder.
  • Select the entry labeled Folder
  • For Windows 2000/XP: Press Advanced button.
  • Select New
  • In the action block type "Command Prompt" without the quotes.
  • In the app block type "cmd.exe" without the quotes.
  • Save and exit Folder Options.
on Thursday 7 June 2012
In one of my earlier blogs I mentioned the importance of writing down all the steps you perform as you progress through testing a new piece of software. I just found an old BEA Weblogic document, and at the end of it there was a checklist. I was so taken aback by this simple type of document that I had to write about it.

I was having a conversation with one of the contractors this morning and we were discussing the amount of bumf official documents have - you can write off the first six pages to indexes and official guff about target audience etc.

If the document is an instructional type then we may want to read all the additional info the first time, but we won't need it when we use it just for reference. In fact, this could hinder productivity. Instead, make it standard practice to include a checklist in the appendices outlining the instructions given in the document.

To make a checklist:
  • Have only three columns: Requirement, Notes and a Completed checkbox.
  • Requirements are triggers. Requirements should provide enough information to get the synapses in your brain firing and retrieving the memory of the additional information in the document.
  • Notes provide additional info. This can be links to external resources or other parts of the document. Include gotchas here.
  • Completed. As each task or step is actioned you can tick it off the list.
I'm sure you're reading this thinking "are you serious?", but it's often the simple things that get forgotten about. The great thing about a checklist is that, if you have followed the information I gave about making notes, it will take little extra effort to complete.

Here's a great example of how they can be used effectively: Weblogic- Deployment Checklists
on Saturday 2 June 2012
Just don't do it! Ever!!
on Wednesday 30 May 2012
As I work through the book Continuous Delivery: Reliable Software Releases Through Build, Test, and Deployment Automation (Addison-Wesley Signature) I find myself highlighting sections that I feel are extremely useful and important. I don't claim to know everything - I've only been in this field for two years and only worked for one company. My experience is limited, but this book has proven to be a valuable resource; there are bits I disagree with and I will go through those as I progress through the book.

Here's a process the author discusses for when a developer is ready to submit a piece of work:

  1. Before submitting any changes, a developer should check whether the build is currently in the "Successful" status. If not, they should assist in fixing the build before submitting new code.
  2. If the status is "Successful", the developer should rebase their personal workspace to this configuration.
  3. Build and test locally to ensure the update doesn't break the developer's functionality.
  4. If successful, check in the new code.
  5. Allow CI to complete with the new changes.
  6. If the build fails, stop and fix it on the developer's machine. Return to step 3.
  7. If the build passes, continue to the next work item.
on Saturday 26 May 2012
I've just started reading Software Build Systems: Principles and Experience by Peter Smith PhD and one of the chapters discusses development tools and how to best manage them. This is only a small part of the book, but it's one of my favourites because it's easy to implement and the benefits are huge.


#1 Take Notes
I think this is one of the most important things to do when testing tools or running a proof of concept. It's so easy to get carried away with simply getting something to work that you forget the steps you took to get there. I would recommend getting yourself a little notepad and jotting down some bullet points - these can be fleshed out later. I personally like to open a text editor like Notepad and put everything in there, mainly because it's easier to copy and paste file paths than jot them down on paper.

#2 Use Version Control for Source Control
This rule relates to tools you use that need to be compiled for different targets. You shouldn't assume that your files will always be available online (if that's your source). By keeping your tools in version control you can rest assured that if anything catastrophic happens to your source - or the files are simply removed - you will still have them.

#3 Periodically Upgrade Tools
I'm going to approach this rule quite differently to the book. I think there has to be a happy medium here: you need to upgrade your tools regularly to stay in support, but do it in a controlled manner and not as soon as a new release is made available.

A short time ago I was caught out upgrading one of the plugins on our continuous integration tool. To cut a long story short, the latest version of one plugin was faulty and brought down our whole CI system. At the time I did a big-bang upgrade of all the plugins the CI used, and it was a nightmare to track down the root cause and rectify it. The moral is to time your upgrades well: give new releases time for errors to manifest, but upgrade so you get new features and bugfixes.

#4 Version Binaries
I think this should be an extension of rule #2 because the reasons are pretty much the same. Just to reiterate though, keep the tools you use versioned and in your repository so that they are always available!
on Saturday 19 May 2012
When you start working with distributed domains there will come a time when you need to pack the domain and unpack it in its distributed areas.

Whether you create your domain via the GUI or by scripting, all you're actually doing is creating a series of configuration files. At this point you're not actually starting any servers - that comes later.

Lets consider the following architecture:

AdminServer = Machine A
Managed01   = Machine B
Managed02   = Machine C
Cluster01      = Managed01, Managed02


So, you run through the wizard and configure the domain above. You should now notice your domain has been created on Machine A, but if you log into Machine B or C nothing exists. This is where the need to Pack and Unpack comes in.

To pack the domain run the following WLST script:
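A sketch of the pack command (pack.sh lives under WL_HOME/common/bin; all the paths and names below are placeholders for your own):

```shell
$WL_HOME/common/bin/pack.sh -domain=/path/to/domains/mydomain \
    -template=/path/to/templateName.jar \
    -template_name="mydomain managed template" \
    -managed=true
```

The `-managed=true` flag is what produces the managed-server-only (skeleton) template.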



This script opens the domain and extracts (as a jar) the configuration required for the servers that will reside on Machines B and C. It's a skeleton configuration because the Admin server information is excluded - a domain only ever has one Admin server.

Now that we have a templateName.jar we can send it to the machines that the rest of the domain will reside on and run the unpack script on each machine:
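A sketch of the unpack command (again, the paths are placeholders):

```shell
$WL_HOME/common/bin/unpack.sh -template=/path/to/templateName.jar \
    -domain=/path/to/domains/mydomain
```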




Replace the dummy properties with the ones you want for your domain and the script will do the rest. Do this for each machine and you're done. Ensure you have a Nodemanager configured on each machine, then start the Admin server and administer the domain as per usual.
on Friday 18 May 2012
One of the most frustrating parts of my job is the administration of a Weblogic domain when it resides on a Linux machine. A numpty has decided that my team isn't allowed any permissions on the Linux boxes, so we can't access our Weblogic files - this means we can't stop and start our domains using startWeblogic.sh.

One solution I found was to administer the domain via the Nodemanager.

To start, any server that you wish to administer via the Nodemanager has to be associated with a machine, and a Nodemanager needs to be running. Once done, start a WLST session.


 nmConnect('username','password','wl.host','nm.port','domain.name','domain.dir','socket.type')  

The command above will allow you to connect to the nodemanager.

 nmStart('server.name')  

Start the server.

 nmServerStatus('server.name')  

Get server status.

 nmKill('server.name')  

Force shutdown.

 nmDisconnect()  

Disconnect from the nodemanager.
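Put together, a typical restart via the Nodemanager looks something like this - every connection value below is a dummy that you would swap for your own:

```
# Connect to the Nodemanager on the target machine (all values are dummies).
nmConnect('weblogic','welcome1','localhost','5556','mydomain',
          '/path/to/domains/mydomain','ssl')
nmStart('Managed01')               # start the managed server
print nmServerStatus('Managed01')  # check its state before moving on
nmDisconnect()
```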

on Tuesday 15 May 2012
To run many of the tools you find in Java development you first have to ensure the environment you're working in is configured correctly - this is often the command line on Windows or a shell on Linux. For example, when you want to run the 'javac' program from the command line, your computer needs to know the path to that executable file.

**NOTE** It is possible to make environment variables permanent on a machine so that they only ever need to be set once. This can be dangerous for a couple of reasons:

a) If you set a faulty path you could crash your machine and the change may be irreversible.
b) If you switch machines you will have to try and remember all the changes you made.

Here enters the setenv script!!
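Something along these lines - the JDK path is just an example, so point it at your own install:

```
@ECHO off
@rem setenv.bat - configure this command prompt for Java development.
@rem The path below is an example - change it to your own JDK install.
SET JAVA_HOME=C:\Program Files\Java\jdk1.6.0_30
SET PATH=%JAVA_HOME%\bin;%PATH%
```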


This is a very simple script to set up the Windows command line to use Java. Copy it into your favourite text editor and save it as setenv.bat. Now edit the script to point at your Java install, then open a command prompt. Once open, type "set" and run it. A series of variable/value pairs should be displayed, but JAVA_HOME should not be set.

Now, run "setenv.bat" and "set". You should now notice both the PATH and JAVA_HOME variables have been set.

And here is the Linux version:
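A minimal sketch, assuming a JDK installed under /usr/lib/jvm (change the path to suit); source it with `. setenv.sh` so the variables persist in your current shell:

```shell
#!/bin/sh
# setenv.sh - configure the current shell for Java development.
# The path below is an example - change it to your own JDK install.
JAVA_HOME=/usr/lib/jvm/jdk1.6.0_30
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME PATH
```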

These scripts are very simple and currently only set up your Java environment. You can use the same principles to set up your build tools or any other third-party tools you may wish to run from the command line.
on Saturday 12 May 2012

Sonar has one of the most boring user interfaces I think I've ever seen - it's butt ugly! But, as ugly as it is, there's no denying how useful it is.
 
The image above shows an extract from a publicly shared project. I really liked the layout that they used so I took a few of their ideas and added some of my own.

A dashboard should be simple and provide enough information about the project without overwhelming the user. If your customers or stakeholders have access to it they may not be technically minded, so this page should show info that anyone can understand.

Dashboard

To change the dashboard, log in as the admin user and on the right of the page select "Edit Filter":


You should be presented with some new options:

The column headings that I have selected are:
  • Lines of code
  • Rules compliance
  • Unit test success (%)
  • Coverage
  • Complexity /method
  • Complexity /class
  • Public documented API (%)
  • Duplicated lines (%)
  • Total Useless Code (plugin)
  • Build date
To add these, select the center dropdown in the Add Column row of the Display window.

Project Layout

When you select a project from the name column on the dashboard you will be presented with a page like the following:


Now this is where you can geek out and go crazy with metrics. Hopefully, by this point the non-techies have seen what they wanted and left. This page is where you can get creative.

I hope I'm not generalising too much, but I would like to think most technically minded people are working with widescreen monitors for maximum productivity. If this is true I would strongly advise going for a three-column layout. This can be done by selecting "Edit Layout" on the right and choosing the tri-column option.

Finally, to select the widgets you want to display press "Configure widgets" and then you can drag and drop them at your desired locations.
on Tuesday 8 May 2012
I had an error today when trying to create a datasource in Weblogic. The error was:

ORA-27101: shared memory realm does not exist
Cause: Unable to locate shared memory realm
Action: Verify that the realm is accessible

The cause of this error was the database session limit being outnumbered by Weblogic connections. We set up 7 environments, each with 50 maximum connections, but only 100 sessions on the DB side... as you can imagine, we smashed the limit.

You can check the number of sessions using:

 SELECT name, value
 FROM v$parameter
 WHERE name = 'sessions';


And you can check how many are currently in use using:

 SELECT COUNT(*)
 FROM v$session;
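The fix in our case was to bring the two numbers back in line - either lower the connection pool sizes in Weblogic or raise the limit on the database. If you take the latter route, a sketch (the value 400 is purely illustrative for the 7 x 50 scenario above):

```sql
-- Raise the session limit (takes effect after a bounce of the instance).
ALTER SYSTEM SET sessions=400 SCOPE=SPFILE;
-- On many Oracle versions sessions is derived from processes,
-- so you may need to raise that as well before restarting.
ALTER SYSTEM SET processes=400 SCOPE=SPFILE;
```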

on Saturday 5 May 2012
I found this on the net a long time ago and thought I would share it. Basically, it parses a log file and extracts the errors.

 <target name="build">
   <echo message="Add foo bar baz"/>
   <exec executable="${db.sqlplus}">
   </exec>
   <echo message="Load x y z"/>
   <exec executable="${db.sqlplus}" dir="foobar">
   </exec>
   <!-- Check the log files here -->
   <check-log-file file.to.check="${output.log.1}"/>
   <check-log-file file.to.check="${output.log.2}"/>
   <antcall target="fail-if-error"/>
 </target>

 <!--=================================================================================
   Check the file named in the attribute file.to.check to see if there are errors.
   The way this works is to find all lines containing the text "ERROR" and put
   them into a separate file. Then it checks to see if this file has non-zero
   length. If so, then there are errors, and it sets the property errors.found.
   Then it calls the send-email target, which doesn't execute if the errors.found
   property isn't set.
 -->
 <macrodef name="check-log-file">
   <attribute name="file.to.check"/>
   <attribute name="file.errorcount" default="@{file.to.check}.errorcount" description="The file to hold the error lines"/>
   <sequential>
     <copy file="@{file.to.check}" tofile="@{file.errorcount}">
       <filterchain>
         <linecontains>
           <contains value="ERROR"/>
         </linecontains>
       </filterchain>
     </copy>
     <condition property="errors.found" value="true">
       <length file="@{file.errorcount}" when="gt" length="0"/>
     </condition>
     <antcall target="check-log-file-send-email">
       <param name="file.to.check" value="@{file.to.check}"/>
     </antcall>
   </sequential>
 </macrodef>

 <!--=================================================================================
   If there are any errors, send an email to let someone know
 -->
 <target name="check-log-file-send-email" if="errors.found" description="Sends an email out if error detected">
   <resourcecount property="error.count">
     <tokens><!-- default tokenizer is a line tokenizer -->
       <file file="${file.to.check}.errorcount"/>
     </tokens>
   </resourcecount>
   <echo message="Database build (${e1.codeline}) - ${error.count} errors found..."/>
   <antcall target="mail">
     <param name="from-address" value="build"/>
     <param name="to-list" value="myemail"/>
     <param name="subject" value="Automated database build error report for ${db.host}"/>
     <param name="message" value="See attached log file, ${error.count} error(s) found..."/>
     <param name="attach" value="${file.to.check}"/>
   </antcall>
 </target>

 <!--=================================================================================
   Fails the database build if errors were detected.
 -->
 <target name="fail-if-error" if="errors.found">
   <echo message="Errors found - setting database fail flag..."/>
   <fail message="Errors detected during ${codeline} database build. Check logs."/>
 </target>
on Friday 4 May 2012
Just yesterday I was offered a new job with a relatively large insurance firm in the UK. My role will be to bring the build management team up to scratch and implement a framework from the ground up. This got me thinking... what are the tools and good practices I can't live without?


Build & Release Website

Every software development outfit should have one of these! Some of the information you would store is:

  • Functional Release Notes
  • Technical Release Notes
  • Continuous Integration Results (Link to tools)
  • Project Documentation (Javadocs or LXR)
  • Runtime Logs
  • Deployed Applications (A list of environments and their current deployments)
  • Application Metrics
  • Third Party Documentation (Tools, Libraries)
  • Misc Documents

This should be maintained by the Build & Release team and should be automated. Our systems are configured to deploy the documentation once it's been created.

Build & Release Calendar

Another tool no serious software company should be without. The projects you support should be able to see your workload and manage requests accordingly. Also, if you need to justify where you're spending your time, this acts as a great record for the finance department or Project Managers. There are free calendars available, and tools like Jenkins work well with Google Calendar.

Build Framework

No sh!t, really?! It doesn't matter what tools you use, but you should be able to build, package, run unit tests and deploy at the very least. The aim here is to produce a framework that can be used with very little configuration and is flexible enough to use with new projects. Some of the "powers-that-be" at my office have been trying to get us to switch to Maven for a while now. The argument that we're forced to use a single framework is a fair one, but my counter-argument is that our Ant framework is so well made that we wouldn't gain anything by switching. Ant is a highly configurable tool but takes a lot to get up and running, whereas Maven is less flexible but much easier to use "out of the box".

Continuous Integration

CI is often considered something only used by large-scale projects. This doesn't have to be the case. Even if you're a small operation with only a handful of developers, it makes sense to offload the task of compiling your source and running a few simple tests. To put this into context: if you have 5 developers, a build takes 1 hour and each developer builds at least once a day, that's 25 hours spent on builds a week. At a very generous rate of £15 per hour, that's £375 a week and £1500 a month consumed by builds. That's the build server bought and paid for!!

It's not a good idea to delete a version in ClearCase because deletions can be problematic. Instead, here are two methods for rolling back a version of an element. The first method is limited to rolling back the latest version; the second allows a rollback to any version.

Method 1- Subtractive Merge:
cleartool merge -graphical -to {ELEMENT} -delete -version {VERSION}

e.g cleartool merge -graphical -to deploy-lib-init.xml -delete -version \main\37

If no manual merges were required on check-in, then chances are this will be resolved automatically. All of the data for this can be obtained by right-clicking on the element in ClearCase Explorer -> Properties of Version. You should get a window like:

**NOTE** Because the window has a fixed size you can't see the whole string for "Name"; I would advise copying the string into a text editor.

D:\CC\CASPA\willis7_CASPA_int\IPS_Source\System\build\deploy-lib-init.xml@@\main\37


The two highlighted parts are the ELEMENT and VERSION respectively.


Method 2- Hijack with older version:


cleartool get -to {ELEMENT} {ELEMENT}@@{VERSION}


e.g. cleartool get -to deploy-lib-init.xml deploy-lib-init.xml@@\main\5

A slightly easier approach: this involves hijacking your local version with a version you specify (the example above would take version 5). This can then be checked out and back in to create a new version that replicates the specified version.
on Thursday 3 May 2012


When we make any changes to our databases we like to restart the application servers to refresh the datasources. Some of our development environments run as Windows services, so it would be nice to restart them remotely from the build machines. Here's how with Ant:
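The snippet below is a minimal sketch of the idea, assuming the Windows sc utility is available on the build machine; remote.host and service.name are placeholder properties for your own environment:

```xml
<target name="restart-service" description="Restart a remote Windows service">
    <!-- Stop the service on the remote machine. -->
    <exec executable="sc">
        <arg value="\\${remote.host}"/>
        <arg value="stop"/>
        <arg value="${service.name}"/>
    </exec>
    <!-- Give the service a little time to shut down cleanly. -->
    <sleep seconds="30"/>
    <!-- Start it back up. -->
    <exec executable="sc">
        <arg value="\\${remote.host}"/>
        <arg value="start"/>
        <arg value="${service.name}"/>
    </exec>
</target>
```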

I hope you can see in this snippet that the code runs two tasks - Stop and Start. You can break the code down if you just want a Start-only or Stop-only script.
on Monday 23 April 2012
Something simple, but very effective!

I want you to do something for me: at the top of the browser you're reading this on there should be a Help button (unless you're on a mobile device). Click it, then click the About option. You should be presented with something similar to this:



If the application you or your company is developing doesn't have this feature, then you need to give yourself a slap on the wrist and ask yourself WHY. As release engineers we're often expected to know exactly what's in every environment at all times. If you're a small organisation with only a few projects and deliveries then this is sometimes possible, but even then, what happens when you go on holiday and someone else covers for you?

If you’re using Ant I would advise writing a small task that will pull a few basic parameters from your build machine. Some useful properties to gather are the following:

  • Username
  • Build machine
  • Date
  • Time
  • Build number

Here’s the Ant target I use. It includes some ClearCase stuff (the source tool we use) but I thought I would leave it in for information.


 <target name="buildinfo" description="Record build metadata">
   <echo message="${line.separator}" file="${build.props.file}" append="true"/>
   <echo message="buildby=${env.USERNAME}${line.separator}" file="${build.props.file}" append="true"/>
   <echo message="host=${env.COMPUTERNAME}${line.separator}" file="${build.props.file}" append="true"/>
   <echo message="builddate=${build.time}${line.separator}" file="${build.props.file}" append="true"/>
   <echo message="buildview=${cc.view}${line.separator}" file="${build.props.file}" append="true"/>
   <echo message="buildstream=${cc.stream}${line.separator}" file="${build.props.file}" append="true"/>
   <echo message="buildproject=${cc.project}${line.separator}" file="${build.props.file}" append="true"/>
   <echo message="foundationbaseline=${cc.found-baselines}${line.separator}" file="${build.props.file}" append="true"/>
   <echo message="recommendedbaseline=${cc.rec-baseline}${line.separator}" file="${build.props.file}" append="true"/>
   <echo message="Projrecommendedbaseline=${cc.rec-bline-stream}${line.separator}" file="${build.props.file}" append="true"/>
   <echo message="currentbaselines=${cc.current-baselines}${line.separator}" file="${build.props.file}" append="true"/>
 </target>


Write some code to get this to display in your app, then it's job done!

As I mentioned in my previous post, my favourite email plugin is the Email-ext plugin. Here are the configurations I use for notifying people when build conditions are met.

UNSTABLE


$PROJECT_DEFAULT_CONTENT

Failing tests:
${FAILED_TESTS}

When a build becomes unstable it's telling you the build has been successful, but some of your JUnit tests have failed. It is a good idea to let the whole team know this has happened and send them the list of failed tests.

FAILURE

$PROJECT_DEFAULT_CONTENT
Build started by ${CAUSE}
Changes since last build:
${CHANGES}

A failed build is more serious than an unstable build because someone has submitted code that is erroneous and could impact everyone else working on the CI stream. For this reason we need to know what has been submitted since the last working build and why the build was started.

STILL FAILING

$PROJECT_DEFAULT_CONTENT
Build started by ${CAUSE}
Changes since last build:
${CHANGES_SINCE_LAST_SUCCESS}

This is the same as a failing build, but instead of notifying everyone we now only email submitters. The reason is that they're adding to a broken environment; hopefully this will encourage the developers to chase down the error and fix it so that they can check their own build status.

FIXED

$PROJECT_DEFAULT_CONTENT


STILL UNSTABLE

$PROJECT_DEFAULT_CONTENT
Failing tests:
${FAILED_TESTS}

This is the same as Still Failing, but instead we report the failed tests to the submitters in the hope they will fix the issue. If not, then you need to turn Judge Dredd on them!

In the company I work for, Cruise Control has always been the primary continuous integration tool. When I was asked to add a few projects to Cruise Control I thought it would be a fairly straightforward task... I was wrong. Cruise Control works by setting up your projects with a series of .xml files and running a batch file that pulls the configurations from these files and starts the continuous polling.

Thankfully, Jenkins (Hudson at the time) was in the pipeline and I was given the responsibility of migrating one of our latest projects to it. Jenkins is a very easy-to-use tool which hides much of the 'hard' configuration behind a GUI.

Installing Jenkins

To run Jenkins you must first download it and save it to a sensible location; the latest version can be found on the Jenkins website.

One thing to note here is that you will be downloading a .war file, NOT a .zip. This caused a great deal of confusion when I downloaded it on a Windows PC and it decided to rename the file to a .zip - furthermore, WinZip will happily open the file as if it were a .zip. If this happens, simply change the file extension back to .war.

On the host machine you can now start running Jenkins. To get Jenkins running I created a small batch file (Linux script here):

@ECHO off
@rem *************************************************************************
@rem PROPERTIES - Set these here.
@rem
@rem *************************************************************************


SET HTTP_PORT=8080
SET AJP_PORT=8000
SET JAVA_HOME=D:\Tools\jdk\jdk160_14_R27.6.5-32


@rem JAVA properties
SET MAX_PERM_SIZE=128m


@rem *************************************************************************
@rem Make sure you have a Java JDK on your machine and that you have it added
@rem to the PATH environment variable
@rem *************************************************************************
SET PATH=%JAVA_HOME%\bin;%PATH%


@rem *************************************************************************
@rem Set the window title. Change project name to suit
@rem *************************************************************************
TITLE Jenkins Continuous Integration

@rem Start Jenkins
java -XX:MaxPermSize=%MAX_PERM_SIZE% -jar jenkins.war --httpPort=%HTTP_PORT% --ajp13Port=%AJP_PORT%
on Thursday 19 April 2012


Audit Trail - keeps a log of who performed particular Jenkins operations, such as configuring jobs.
Wiki
Download

A very simple plugin that logs all actions performed in Jenkins. If you suspect someone is messing with your hard work, check the log and find out. The log also tells you how each build was started, which is another good bit of information.


Claim - allow broken build claiming.
Wiki
Download

I've often experimented with project configurations which have broken the builds, which have then sent an email notifying the whole team. It's a poor use of resources to have developers looking for faults in the build when you know you are the cause. This is where the plugin comes in. By claiming a build you're telling the team "I broke the build and I'm working on fixing it", which can save masses of developer time by not having everyone looking for the same problem.

Console Column - provides a fast-path console link available for views.
Wiki
Download

Very simple but effective plugin that adds a link to one or more consoles (last console, last failed, last stable, last successful or last unsuccessful). I've created an "All Projects" view and added a column linking to all the last consoles.


Description Setter - sets the description for each build, based upon a RegEx test of the build log file.
Wiki
Download

If, like me, you have your build output lots of information to your logs, you can look for this information and tie it to a build. I use Rational ClearCase for my source control, and when I create a build I always pull down information about the baseline I'm using. All I need to do is set up a regular expression for this information and the plugin sets it next to the build.


Disk Usage - records individual project disk usage.
Wiki
Download

Storage is cheap these days, but you should still keep tabs on how much your CI is using. A great plugin for this is the Disk Usage plugin. It does exactly what it says on the tin, so I won't keep on. There is one caveat: at the time of writing, if you have several jobs sharing the same workspace it will count them separately and add them all together. I have 3 jobs but only a single workspace (3 GB); unfortunately the plugin counts it three times and reports that I'm using 9 GB - obviously that isn't correct.


Email-ext - replacement for the Jenkins email publisher.
Wiki
Download

The email publisher built into Jenkins is good, but this is so much better. With this tool you can specify what information is emailed, to whom and for what condition.
The plugin works around the concept of triggers. Triggers are conditions that are met and warrant some action. Let’s say a build fails, you may want the whole team to know so that it can be resolved. However, if a developer submits a failing test you may not want to send everyone an email, just the culprit. The plugin allows you to do all of this as well as customise the emails you send. This is a great tool that we can’t live without now.


Job Config History - saves copies of all job and system configurations.
Wiki
Download

Have you ever changed the configuration of a build, it's started failing, but you can't remember what you changed? Well, that's exactly what this plugin solves. If you go into one of your jobs you will notice an icon on the left: "Job Config History".

If you click this option you will be presented with a page full of change history. You can compare entries and explore the XML for the faulty change. Note - the plugin doesn't show you where in the GUI you made the change, so you will need to understand the XML to find that yourself.




Hudson Tray Application - provides a tray application that monitors this (and other) Hudson servers.
Wiki
This is a great little plugin that installs a simple app in the system tray which you can use to monitor the jobs on your CI server. It means you no longer have to log in via the browser to get an update on project statuses, and you can even launch builds from the app itself.


Monitoring - Hudson/Jenkins' monitoring with JavaMelody.
Wiki
Download

Statistics make the world go round, right? It's important to know whether our servers are handling the strain of building our projects continuously. I strongly advise firing off a number of builds and seeing how well your server performs. I've learnt a lot from just playing around, and I've reached a nice level of performance from the build machines without risking out-of-memory errors.