The short answer is everything!
When I talk about build process metrics I'm not talking about code coverage or lines of code - there's a plethora of tools, such as Cobertura and FindBugs, that will extract that information for you. What I mean is information about the build process itself.
Here are a few examples of metrics I find useful:
Current time is Dec 28, 2013 10:41:38 AM
System.getProperty('os.name') == 'Windows 7'
System.getProperty('os.version') == '6.1'
System.getProperty('os.arch') == 'amd64'
System.getProperty('java.version') == '1.7.0_45'
System.getProperty('java.vendor') == 'Oracle Corporation'
System.getProperty('sun.arch.data.model') == '64'
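A minimal sketch of how you might capture these from a build script - this just logs them at configuration time, and where you push the data from there is up to you:

def props = ['os.name', 'os.version', 'os.arch',
             'java.version', 'java.vendor', 'sun.arch.data.model']
logger.lifecycle "Current time is ${new Date()}"
props.each { name ->
    logger.lifecycle "System.getProperty('${name}') == '${System.getProperty(name)}'"
}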
You might be asking why this is important, or why bother when you're using a build automation tool such as Jenkins. Unfortunately for me, not every project is fully automated. It should be, but we're not quite there yet. I like to use Gradle as it provides the tools to extract a lot of information about your build process. As this is performed at a lower level than CI, it means that for those odd tasks that are run manually we don't miss out on that juicy data. Data is valuable - just ask Google.
So, what do we hope to find?
The interesting thing about data is that you don't really know what's there until you start digging. One such example presented itself recently. At my company we split the task of releasing software into two stages: first the volatile build phase, followed by the stable release phase, which is performed by a different team. During one project's build phase I was repeatedly running deployments of our web application to our QA environments in approximately 3 minutes. However, when the release manager was deploying into production it was taking 45 minutes. What the deuce?!
Before even looking at the data I knew we both used the same work-issued laptop with the same version of Java, and the Gradle wrapper ensured we were using the same version of the build tool. So, what was causing this massive increase in deployment time? Well, I later found out from the data that the release manager had failed to mention he was running the deployment from his bedroom across a VPN. Bingo! The increased deployment time was due to the network, and was easy to fix (we quickly put this project in Jenkins with a manual start).
The moral of the story is to collect all the information you can. Gradle is an extremely powerful DSL, and its Groovy support means your options for pushing and processing the data are endless.
I took inspiration from this post, but I wanted to use Groovy. I know there are plugins that do the same, but this was intended to be a learning exercise.
Whilst the example in the link is quite simple, I also need to address authentication of the user. Once available, this would be used for displaying build status via a wall-mounted monitor.
Requires "commons-codec-1.8.jar" and "commons-httpclient-3.1.jar" on the classpath
Problem
With Gradle the project name is derived from the project's directory name. However, in Jenkins the project's directory name is derived from the Jenkins job name.
There are a few steps in the build lifecycle that use the project.name, such as the jar task. If our Jenkins job was "myProject_build", this would result in a jar named myProject_build-1.0.jar.

Solution

With a single project the solution is to add a settings.gradle and set the (root) project name there (rootProject.name = "...").
With a multi-project build you may wish to override the sub-project name:
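Here's a sketch of what that can look like - the module names and prefix are hypothetical:

// settings.gradle
rootProject.name = 'myProject'

include 'myProject_web', 'myProject_core'

// strip the common prefix from each sub-project's directory-derived name
rootProject.children.each { child ->
    child.name = child.name - 'myProject_'
}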
In this recipe we will see how easy it is to call the Weblogic tool "wldeploy" from Gradle.
Getting ready
The deployment depends on having a copy of wlfullclient.jar available to the build scripts. There are plenty of tutorials online on how to create this, so I won't cover that here.
How to do it ...
- Create our build file (sketched below).
- Create our Weblogic instance configurations.
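In outline, the build file looks something like this - the admin URL, credentials and wlfullclient.jar location are assumptions, and in practice they'd come from the tokenized per-instance configuration:

// build.gradle
task deployToWLS << {
    // define the ant task using the name, classname and classpath
    ant.taskdef(name: 'wldeploy',
                classname: 'weblogic.ant.taskdefs.management.WLDeploy',
                classpath: file('lib/wlfullclient.jar').path)
    // deploy every war packaged into build/libs
    fileTree(dir: 'build/libs', include: '*.war').each { f ->
        def appName = f.name - '.war'   // trim the extension to get the app name
        ant.wldeploy(action: 'deploy', name: appName, source: f.path,
                     adminurl: 't3://localhost:7001',
                     user: 'weblogic', password: 'welcome1')
    }
}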
How it works ...
I won't get into the configuration elements of this project - it's covered in great detail here. The important thing to take from this is that we are tokenizing our build scripts, making them reusable.
The deployToWLS task is where the magic happens. Here we define the ant task using the name, classname and classpath. Next we specify a directory containing our deployables; I've used the "build/libs" directory as that's where my war files are packaged to. Then, through the powers of the eachFile closure, we take each filename, trim the extension, and deploy each file in that directory.
I've been using this for a few months now and whilst my usage isn't massive, I do tend to use the command line about once a day.
Here's some of my most used tasks:
setEnv
I wrote an environment configuration script in a previous post that I want to be called when I open ConEmu. The picture above shows how.
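In essence the task just launches cmd and runs the script - the path here is a placeholder for wherever yours lives:

cmd.exe /k "%USERPROFILE%\scripts\setEnv.bat"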
Git Bash
My VCS of choice is Git and it comes with Git Bash to allow me to call some of my favourite Git commands. Here's how that's integrated into ConEmu.
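The task command is the standard Git Bash invocation - adjust the install path to suit:

"%ProgramFiles%\Git\bin\sh.exe" --login -i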
setWLSEnv
Finally, I use Weblogic for a lot of my Java EE container needs. To set the system path and classpath Weblogic comes with a setWLSEnv script. In a similar way to calling my own, here I simply call the setWLSEnv script provided by Weblogic.
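Same pattern again - launch cmd and run the script (the WL_HOME path below is an assumption):

cmd.exe /k "C:\Oracle\Middleware\wlserver_10.3\server\bin\setWLSEnv.cmd"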
I've been working quite closely with one of the testers, who's written a test suite using Selenium. He had initially written a single Gradle task for each browser.
As I hope you can see, it's a little clunky:
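The tasks amounted to one near-identical Test task per browser, something like this (the 'browser' system property is an assumption about how the suite selects its driver):

task intTestChrome(type: Test) {
    systemProperty 'browser', 'chrome'
    include '**/*IntTest*'
}

task intTestFirefox(type: Test) {
    systemProperty 'browser', 'firefox'
    include '**/*IntTest*'
}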
So, I set out to make this more flexible and minimize the amount of code in the build file. Using rules which are provided by the Gradle API I was able to write the following task:
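A sketch of the rule, carrying over the same assumptions as the tasks above:

tasks.addRule('Pattern: intTest<Browser> - runs the Selenium suite in the given browser') { String taskName ->
    if (taskName.startsWith('intTest')) {
        task(taskName, type: Test) {
            // derive the browser from the task name, e.g. intTestChrome -> chrome
            systemProperty 'browser', (taskName - 'intTest').toLowerCase()
            include '**/*IntTest*'
        }
    }
}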
Now all that's required is "gradle intTestChrome" or "gradle intTestFirefox"... perfect!
This simple Gist adds two additional output listeners, one for standard out and one for standard error, and pipes their output to a build log.
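The idea boils down to a few lines in build.gradle - the log location here is an assumption:

import org.gradle.api.logging.StandardOutputListener

def buildLog = file("build/logs/build-${new Date().format('yyyyMMddHHmmss')}.log")
buildLog.parentFile.mkdirs()

// append everything Gradle writes to standard out and standard error to the log
def listener = { CharSequence output -> buildLog << output } as StandardOutputListener
logging.addStandardOutputListener(listener)
logging.addStandardErrorListener(listener)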
There are small changes you make when developing that you often disregard as too simple to matter. One such example is shown below.
Dates should be ordered: YEAR, MONTH, DAY (e.g. YYYYMMDD, YYMMDD, YYYYMM). Times should be ordered: HOUR, MINUTES, SECONDS (HHMMSS). The reason for this is that files named this way will always sort in correct chronological order.
As a build manager I use this most in my Gradle scripts for ordering the build output logs.
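For example, with a timestamp like this in the file name, a plain directory listing doubles as a timeline:

def logName = "deploy-${new Date().format('yyyyMMddHHmmss')}.log"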
Service-oriented architecture is really starting to take flight in my organisation, but despite how mesmerising this new architecture is, it's worth noting how much more difficult debugging has become.
I am telling this story as a build manager, not a developer. The application is simple: a Spring MVC front end, a service bus and a handful of services (one for getting bank details, one for addresses and a third for generating documents).
SOA favours buy over build, and that is indeed what we did; in most cases we purchased off-the-shelf web services, the UI was developed by a third party, and the service bus was configured internally.
Round 1 of the build involved deploying the application and its stubs. This went well and we were able to successfully run through the customer journey. As the stubs were developed by the team writing the front end, they had a great deal of control over what the stubs did - our developers provided the WSDL and the third party developed to that spec.
Round 2 started integrating the service bus and each service, being sure not to add too many variables at once. Defects were raised, but for the most part this was pretty successful too. The early exchange of the WSDL meant we were all singing from the same hymn sheet.
Round 3 is when we finished adding new features, changed WSDLs and started hammering through the defects list. This is where the problems started. The interesting thing here is that because you have a loosely coupled architecture, it's easy to lose track of what each person is working on. We had one regression with the address lookup feature which we instantly put down to the new build of the UI - in actual fact the problem was due to the service bus not changing its config to pick up the change made to the UI. Whilst the problem sounds simple, it involved a lot of running around between different teams trying to isolate it, and because SOA is still maturing (as are the staff working around it), governance is still a little thin on the ground.
GET YOUR GOVERNANCE IN EARLY!
Select all invalid objects:
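Something along these lines does the job:

SELECT object_name, object_type
  FROM user_objects
 WHERE status = 'INVALID';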
Recompile all invalid objects:
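One option is to let Oracle do the walking (compile_all => FALSE limits it to the invalid objects):

EXEC DBMS_UTILITY.COMPILE_SCHEMA(user, FALSE);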
One of the really annoying things about Oracle PL/SQL is the way it tells you something is wrong, gives you a line number and then leaves it up to you to find the statement that caused the error. Well, this little piece of SQL gives you the exact line in error plus the lines immediately before and after it. (Acknowledgements to Ken Atkins of ARIS Consulting).
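Roughly, it joins user_errors back to user_source on name and type - this is a sketch from memory rather than Ken's exact SQL:

SELECT e.name, e.type, s.line, s.text
  FROM user_errors e,
       user_source s
 WHERE s.name = e.name
   AND s.type = e.type
   AND s.line BETWEEN e.line - 1 AND e.line + 1
 ORDER BY e.name, e.sequence, s.line;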
When restarting my Grails application in development mode, it was becoming a pain to re-enter the test data to show people how the application worked.
Groovy provides great support for XML, so using it for loading data was a no-brainer.
First step, create a service:
class LoadDbService {

    def fileName

    LoadDbService(def fileName) {
        this.fileName = fileName
    }

    def load() {
        ... detailed below ...
    }
}
This service is called from within BootStrap.groovy:
class BootStrap {

    def init = { servletContext ->
        environments {
            development {
                def dbs = new LoadDbService('grails-app/resources/test_data/test_records.xml')
                dbs.load()
            }
        }
    }
}
This service imports XML data into domain objects. If a domain object refers to another domain object, we use the power of GORM's findBy. The implementation of load() is shown below:
def data = new XmlSlurper().parse(fileName)

// These do not have dependencies and can therefore be generated easily
data.Release.each {
    new Release(name: "${it.@name}", status: "${it.@status}",
            createdDate: Date.parseToStringDate("${it.@createdDate}")).save(failOnError: true)
}
data.Stack.each {
    new Stack(number: "${it.@number}").save(failOnError: true)
}

// For these imports to work correctly they need to search for the items
// they relate to using findBy
data.Version.each {
    new Version(appName: "${it.@appName}", tag: "${it.@tag}",
            createdDate: Date.parseToStringDate("${it.@createdDate}"),
            partOf: Release.findByName("${it.@partOf}")).save(failOnError: true)
}
data.Environment.each {
    new Environment(hostname: "${it.@hostname}", testTier: "${it.@testTier}",
            lastRefresh: Date.parseToStringDate("${it.@lastRefresh}"),
            ownedBy: Release.findByName("${it.@ownedBy}"),
            partOf: Stack.findByNumber("${it.@partOf}")).save(failOnError: true)
}
And finally an example dataset:
Unfortunately, because XML markup conflicts with the blog's HTML, I've had to provide a screenshot.
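Working back from the attributes the loader reads, the dataset looks something like this - the root element name and all values are made up:

<records>
    <Release name="R1.0" status="OPEN" createdDate="Thu May 02 16:14:33 BST 2013"/>
    <Stack number="1"/>
    <Version appName="myApp" tag="v1.3.0" createdDate="Thu May 02 16:14:33 BST 2013" partOf="R1.0"/>
    <Environment hostname="qa-host-01" testTier="QA" lastRefresh="Thu May 02 16:14:33 BST 2013" ownedBy="R1.0" partOf="1"/>
</records>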
Inspiration taken from Sanjay Mysoremutt's Blog
Gradle strikes again:
Yes, it really was that easy!
The Oracle database
This post assumes that you have an Oracle database available. I used an Oracle 10g XE instance running on localhost to develop this post. XE is free to download and is sufficient for development purposes.
I had a SCOTT schema with the password "oracle" in the database, and also installed the utPLSQL schema, which is a necessary step to follow this example. To replicate what I did, I recommend downloading the Oracle Developer Days VM.
NOTE: If you install the utPLSQL schema into an Oracle 10g XE database, make sure that you have granted access on UTL_FILE to public as explained here (remember to connect as sysdba when doing this otherwise it won’t work).
The structure I used for this example is:
.
|-build
|---build.gradle
|---ut_run.sql
|-src
|---main
|-----sql
|-------run.sql
|---test
|-----sql
|-------run.sql
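At its core the build file just drives sqlplus at the ut_run.sql script. A minimal sketch, reusing the SCOTT/oracle credentials and the XE instance from above (sqlplus must be on the PATH):

// build/build.gradle
task utplsql(type: Exec) {
    commandLine 'sqlplus', '-L', 'scott/oracle@//localhost:1521/XE', '@ut_run.sql'
}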
I recently saw this blog post and thought it would be really cool if I could integrate git flow into my build scripts. What's great is that because it's a Java library, it was extremely painless to integrate.
With a little effort this could easily be converted into a plugin for Gradle, but here's my example:
This is only a snippet, so not all of the library is shown, but it works very well as a proof of concept.
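To give a flavour of how little code is involved, here's an illustrative sketch using plain JGit rather than the git flow library itself - the JGit coordinates and the branch-name property are assumptions:

buildscript {
    repositories { mavenCentral() }
    dependencies { classpath 'org.eclipse.jgit:org.eclipse.jgit:3.0.0.201306101825-r' }
}

// usage: gradle featureStart -Pfeature=myFeature
task featureStart << {
    def git = org.eclipse.jgit.api.Git.open(rootDir)
    git.checkout()
       .setCreateBranch(true)
       .setName("feature/${project.property('feature')}")
       .call()
}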
I wanted a sandbox VM, but for it to replicate the production servers it needed to be based on Oracle Linux 5.5. As I was running this on my local machine in VirtualBox, I also wanted Guest Additions installed.
VirtualBox 3.1: Beginner's Guide is a good book to start with if you're new to VirtualBox and want to know how to create virtual machines on your desktop.
So here's how to get Guest Additions installed:
- Download and Install Oracle Linux
- Download and copy the appropriate yum configuration file in place, by running the following command as root:
# cd /etc/yum.repos.d
# wget http://public-yum.oracle.com/public-yum-el5.repo
Guest Additions requires the following in order to install successfully:
kmod-ovmapi-uek libovmapi libovmapi-devel ovmd python-simplejson xenstoreprovider ovm-template-config ovm-template-config-authentication ovm-template-config-datetime ovm-template-config-firewall ovm-template-config-network ovm-template-config-selinux ovm-template-config-ssh ovm-template-config-system ovm-template-config-user
These can be downloaded by running the following command once the yum configuration is in place:
# yum install libovmapi xenstoreprovider ovmd python-simplejson
When creating a release package it's nice to maintain a copy of all the metadata. I like to do this using the power of Groovy from within my Gradle build scripts.
import groovy.xml.MarkupBuilder

task xmlGen << {
    // trim() strips the trailing newline from each command's output
    def tag = 'git describe --abbrev=0 --tags'.execute().text.trim()
    def sha = 'git rev-parse HEAD'.execute().text.trim()
    new File('build_info.xml').withWriter { writer ->
        def xml = new MarkupBuilder(writer)
        xml.build(id: tag) {
            ProjectName('MyProject')
            SHA1(sha)
            Date(new Date())
            Components('component1, component2, component3')
        }
    }
}
And the output is:
<build id="v1.3.0">
  <ProjectName>MyProject</ProjectName>
  <SHA1>1f661e69797dc23281d2955d6ca2dda3cdd81dc0</SHA1>
  <Date>Thu May 02 16:14:33 BST 2013</Date>
  <Components>component1, component2, component3</Components>
</build>
To ensure no plain-text passwords are stored on the servers, the following piece of code can be used to encrypt and decrypt passwords.
import java.security.*
import javax.crypto.*
import javax.crypto.spec.*

class DESCodec {

    static encode = { String target ->
        def cipher = getCipher(Cipher.ENCRYPT_MODE)
        return cipher.doFinal(target.bytes).encodeBase64()
    }

    static decode = { String target ->
        def cipher = getCipher(Cipher.DECRYPT_MODE)
        return new String(cipher.doFinal(target.decodeBase64()))
    }

    private static getCipher(mode) {
        def keySpec = new DESKeySpec(getPassword())
        def cipher = Cipher.getInstance("DES")
        def keyFactory = SecretKeyFactory.getInstance("DES")
        cipher.init(mode, keyFactory.generateSecret(keySpec))
        return cipher
    }

    private static getPassword() { "secret12".getBytes("UTF-8") }
}
Things to note in this script are: the key is hardcoded in getPassword() (a DES key needs at least 8 bytes, hence "secret12"), encode() returns the ciphertext Base64-encoded so it can be stored as text, and because the key lives in the code this is obfuscation rather than strong security - it simply keeps plain-text passwords out of configuration files.
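Usage is a simple round trip (the password here is made up):

def encoded = DESCodec.encode('myS3cret')
assert DESCodec.decode(encoded.toString()) == 'myS3cret'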
So, I suggest that if you have environment variables that need setting up, such as for your VCS or build tool, you go via Jenkins initially; then, if you get no joy from that, use the Shell/Batch build step.
At my company there's a wide range of technologies in use, some as old as the mainframe and some as new as web services. This means we also have a wide range of age groups - some from the days of COBOL, and newbies such as myself from the C++ and Java days.
Something I've come to learn is that the old-timers are not open to change, and if they have to change they want it to be easy. I'm sure you can imagine the looks on their faces when I started showing them how to use Git Bash - anyone would think I'd just punched a small child.
TortoiseGit is a very useful tool that nicely hides the implementation of Git from the user. I found the old-timers were far more receptive to Tortoise than they were to Git Bash. I still finished the Git training in Bash, but I also offered a small section on Tortoise at the end. The reason is that I wanted them to truly understand what Git was doing before hiding it behind a nice UI.