Sunday, June 5, 2016

REST API calls with RestTemplate and Basic authorization

I was playing around with RestTemplate, and it seems like a nice way to make REST calls. It also allows for easy handling of HTTP Basic authentication.

Just add the username and password to the header:

String plainCreds = username + ":" + password;
byte[] base64CredsBytes = Base64.getEncoder().encode(plainCreds.getBytes());
String base64Creds = new String(base64CredsBytes);

HttpHeaders headers = new HttpHeaders();
headers.add("Authorization", "Basic " + base64Creds);
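
As a sanity check, the same header value can be produced in the shell (using the placeholder credentials from the class below); curl's -u flag does this exact encoding under the hood:

```shell
# Build the Basic auth header value by hand, exactly as the Java code does.
# printf (not echo) avoids a trailing newline sneaking into the encoding.
printf 'user@domain.com:password' | base64
```

Sending `Authorization: Basic <that value>` is equivalent to calling curl with `-u user@domain.com:password`.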

For GET requests, that's about all you need. A simple call can be made like this:

HttpEntity<?> requestEntity = new HttpEntity<>(headers);
return restTemplate.exchange(url, HttpMethod.GET, requestEntity, String.class);

POST requests are not much more complicated:

headers.setContentType(MediaType.APPLICATION_JSON);
HttpEntity<String> requestEntity = new HttpEntity<>(createIssueJSON, headers);
return restTemplate.exchange(url, HttpMethod.POST, requestEntity, String.class);

The entire code looks like this:

import org.springframework.http.*;
import org.springframework.web.client.RestTemplate;

import java.util.Base64;

public class Application {

    private static final String username = "user@domain.com";
    private static final String password = "password";
    private static final String jiraBaseURL = "https://jira.domain.com/rest/api/2/";
    private RestTemplate restTemplate;
    private HttpHeaders httpHeaders;

    public Application() {
        restTemplate = new RestTemplate();
        httpHeaders = createHeadersWithAuthentication();
    }

    private HttpHeaders createHeadersWithAuthentication() {
        String plainCreds = username + ":" + password;
        byte[] base64CredsBytes = Base64.getEncoder().encode(plainCreds.getBytes());
        String base64Creds = new String(base64CredsBytes);

        HttpHeaders headers = new HttpHeaders();
        headers.add("Authorization", "Basic " + base64Creds);

        return headers;
    }

    public ResponseEntity<String> getIssue(String issueId) {
        String url = jiraBaseURL + "issue/" + issueId;

        HttpEntity<?> requestEntity = new HttpEntity<>(httpHeaders);
        return restTemplate.exchange(url, HttpMethod.GET, requestEntity, String.class);
    }

    public ResponseEntity<String> createIssue(String key, String summary, String description, String issueType) {
        String createIssueJSON = createCreateIssueJSON(key, summary, description, issueType);

        String url = jiraBaseURL + "issue";

        httpHeaders.setContentType(MediaType.APPLICATION_JSON);

        HttpEntity<String> requestEntity = new HttpEntity<>(createIssueJSON, httpHeaders);

        return restTemplate.exchange(url, HttpMethod.POST, requestEntity, String.class);

    }

    private String createCreateIssueJSON(String key, String summary, String description, String issueType) {
        String createIssueJSON = "{\"fields\":{\"project\":{\"key\":\"$KEY\"},\"summary\":\"$SUMMARY\",\"description\":\"$DESCRIPTION\",\"issuetype\": {\"name\": \"$ISSUETYPE\"}}}";

        createIssueJSON = createIssueJSON.replace("$KEY", key);
        createIssueJSON = createIssueJSON.replace("$SUMMARY", summary);
        createIssueJSON = createIssueJSON.replace("$DESCRIPTION", description);
        return createIssueJSON.replace("$ISSUETYPE", issueType);
    }

}


Download the code and tests from GitHub: https://github.com/somaiah/restTemplate

Sunday, May 15, 2016

Jira + Confluence = <3

I'm a big fan of Atlassian products, in particular Jira and Confluence, and I've already mentioned how helpful they have been in my previous projects.

If you're not already using them, you should. And you should be using them together.

Why?

First, the obvious:

  • You can manage sprints with the Meeting Notes Blueprint and the Retrospectives Blueprint.
  • You can turn Confluence into a knowledge base. Everything is searchable and editing docs is easy. If you keep all your docs in Confluence, everyone always knows where information lives. Confluence also lets you restrict access to sensitive areas, so your knowledge base reveals only what you want it to.
  • You can display Confluence info in Jira: link requirements in Confluence to Jira tasks using the Product Requirements Blueprint.
But the real beauty of integrating Jira and Confluence is when you can display Jira data in Confluence. 
For example:


  • Display Jira data like status reports and change logs in Confluence with the Jira Reports Blueprint
  • Automatic linking in Confluence to Jira tasks
  • Embed Jira queries into Confluence to show task lists - for example, which tasks are included in a release
  • Use the Jira Chart Macro to show the results of a Jira query (JQL) as a chart in Confluence
  • Use the Jira REST API to get detailed info about tasks. Search queries are returned as JSON objects, which can be parsed and used to display charts in Confluence with the Confluence REST API


The last point is really my favorite.

Assume you are managing a team of developers in a large organization, and upper management would like periodic status updates of the team's performance. You could show them burn down charts, or really any of the plethora of charts that Jira offers out of the box, but what if they wanted more info? What if they wanted to see, for example,

  • How long a release took to implement (time to market)? 
  • Or how many bugs were found in each delivery (delivery quality)? 
  • Or what percent of each delivery was defects vs. enhancements (productivity)?

Assuming the developers logged their hours faithfully against their Jira tasks, you can display this info quite easily.

I'll explain how to get the info for time to market. The other two are just a matter of getting different info from the JSON retrieved by the Jira REST API, plus some bash magic.


Assumptions:

1. For this example, I used the free Jira trial, and created 2 releases. My Jira looked like this:




Release 1 looked like this:



Release 2 looked like this:



Release 3 was unreleased, so we won't bother about it.

2. I also used the Confluence free trial. I created a space called "Moon Rocket Launch Info", and a page to display graphs in, called "Time to market graph"

3. I had the excellent jq command-line JSON processor installed (https://stedolan.github.io/jq/)

The code

Let's say I am interested in Bug, Story and Task issues. My JQL would look like this:

project = "MRL" AND issuetype in (Bug, Story, Task) AND fixVersion in releasedVersions()

This search returns a bunch of results that look like this:




We can use curl to get these results into our script:


JIRA_REST_API_URL="https://somaiah.atlassian.net/rest/api/2"
JIRA_USER="admin"
JIRA_PASSWORD="secret"

JIRA_SEARCH_URL="${JIRA_REST_API_URL}/search?jql=project=%22MRL%22%20AND%20issuetype%20in%20(Bug,%20Story,%20Task)%20AND%20fixVersion%20in%20releasedVersions()"

JIRA_FILTER_INFO=`curl --globoff --insecure --silent -u ${JIRA_USER}:${JIRA_PASSWORD} -X GET -H 'Content-Type: application/json' "${JIRA_SEARCH_URL}"`


This returns a LOT of info about each Jira task. In fact, each issue returned looks like this:



    {
        "expand": "operations,versionedRepresentations,editmeta,changelog,renderedFields",
        "id": "10021",
        "self": "https://somaiah.atlassian.net/rest/api/2/issue/10021",
        "key": "MRL-22",
        "fields": {
            "issuetype": {
                "self": "https://somaiah.atlassian.net/rest/api/2/issuetype/10001",
                "id": "10001",
                "description": "gh.issue.story.desc",
                "iconUrl": "https://somaiah.atlassian.net/images/icons/issuetypes/story.svg",
                "name": "Story",
                "subtask": false
            },
            "timespent": 21600,
            "project": {
                "self": "https://somaiah.atlassian.net/rest/api/2/project/10000",
                "id": "10000",
                "key": "MRL",
                "name": "Moon Rocket Launch",
                "avatarUrls": {
                    "48x48": "https://somaiah.atlassian.net/secure/projectavatar?avatarId=10324",
                    "24x24": "https://somaiah.atlassian.net/secure/projectavatar?size=small&avatarId=10324",
                    "16x16": "https://somaiah.atlassian.net/secure/projectavatar?size=xsmall&avatarId=10324",
                    "32x32": "https://somaiah.atlassian.net/secure/projectavatar?size=medium&avatarId=10324"
                }
            },
            "fixVersions": [
                {
                    "self": "https://somaiah.atlassian.net/rest/api/2/version/10000",
                    "id": "10000",
                    "name": "Version 1.0",
                    "archived": false,
                    "released": true,
                    "releaseDate": "2016-05-07"
                }
            ],
            "aggregatetimespent": 21600,
            "resolution": {
                "self": "https://somaiah.atlassian.net/rest/api/2/resolution/10000",
                "id": "10000",
                "description": "Work has been completed on this issue.",
                "name": "Done"
            },
            "resolutiondate": "2016-05-03T16:52:16.000+0200",
            "workratio": -1,
            "lastViewed": "2016-05-15T21:35:37.060+0200",
            "watches": {
                "self": "https://somaiah.atlassian.net/rest/api/2/issue/MRL-22/watchers",
                "watchCount": 0,
                "isWatching": false
            },
            "created": "2016-04-23T09:55:16.000+0200",
            "customfield_10022": 2.0,
            "priority": {
                "self": "https://somaiah.atlassian.net/rest/api/2/priority/3",
                "iconUrl": "https://somaiah.atlassian.net/images/icons/priorities/medium.svg",
                "name": "Medium",
                "id": "3"
            },
            "labels": [],
            "customfield_10016": "0|i0004n:",
            "customfield_10017": ["com.atlassian.greenhopper.service.sprint.Sprint@14c71ff[id=2,rapidViewId=1,state=CLOSED,name=Sample Sprint 1,startDate=2016-04-23T09:55:19.342+02:00,endDate=2016-05-07T09:55:19.342+02:00,completeDate=2016-05-07T08:35:19.342+02:00,sequence=2]"],
            "customfield_10018": null,
            "timeestimate": 0,
            "aggregatetimeoriginalestimate": null,
            "versions": [],
            "issuelinks": [],
            "assignee": {
                "self": "https://somaiah.atlassian.net/rest/api/2/user?username=admin",
                "name": "admin",
                "key": "admin",
                "emailAddress": "somaiah@gmail.com",
                "avatarUrls": {
                    "48x48": "https://somaiah.atlassian.net/secure/useravatar?avatarId=10351",
                    "24x24": "https://somaiah.atlassian.net/secure/useravatar?size=small&avatarId=10351",
                    "16x16": "https://somaiah.atlassian.net/secure/useravatar?size=xsmall&avatarId=10351",
                    "32x32": "https://somaiah.atlassian.net/secure/useravatar?size=medium&avatarId=10351"
                },
                "displayName": "Somaiah  [Administrator]",
                "active": true,
                "timeZone": "Europe/Berlin"
            },
            "updated": "2016-05-14T20:37:11.000+0200",
            "status": {
                "self": "https://somaiah.atlassian.net/rest/api/2/status/10001",
                "description": "",
                "iconUrl": "https://somaiah.atlassian.net/",
                "name": "Done",
                "id": "10001",
                "statusCategory": {
                    "self": "https://somaiah.atlassian.net/rest/api/2/statuscategory/3",
                    "id": 3,
                    "key": "done",
                    "colorName": "green",
                    "name": "Done"
                }
            },
            "components": [],
            "timeoriginalestimate": null,
            "description": null,
            "customfield_10010": null,
            "customfield_10011": null,
            "customfield_10012": null,
            "customfield_10013": null,
            "customfield_10014": "Not started",
            "customfield_10015": null,
            "customfield_10005": null,
            "customfield_10006": null,
            "customfield_10007": null,
            "customfield_10008": null,
            "customfield_10009": null,
            "aggregatetimeestimate": 0,
            "summary": "As a user, I'd like a historical story to show in reports",
            "creator": {
                "self": "https://somaiah.atlassian.net/rest/api/2/user?username=admin",
                "name": "admin",
                "key": "admin",
                "emailAddress": "somaiah@gmail.com",
                "avatarUrls": {
                    "48x48": "https://somaiah.atlassian.net/secure/useravatar?avatarId=10351",
                    "24x24": "https://somaiah.atlassian.net/secure/useravatar?size=small&avatarId=10351",
                    "16x16": "https://somaiah.atlassian.net/secure/useravatar?size=xsmall&avatarId=10351",
                    "32x32": "https://somaiah.atlassian.net/secure/useravatar?size=medium&avatarId=10351"
                },
                "displayName": "Somaiah  [Administrator]",
                "active": true,
                "timeZone": "Europe/Berlin"
            },
            "subtasks": [],
            "reporter": {
                "self": "https://somaiah.atlassian.net/rest/api/2/user?username=admin",
                "name": "admin",
                "key": "admin",
                "emailAddress": "somaiah@gmail.com",
                "avatarUrls": {
                    "48x48": "https://somaiah.atlassian.net/secure/useravatar?avatarId=10351",
                    "24x24": "https://somaiah.atlassian.net/secure/useravatar?size=small&avatarId=10351",
                    "16x16": "https://somaiah.atlassian.net/secure/useravatar?size=xsmall&avatarId=10351",
                    "32x32": "https://somaiah.atlassian.net/secure/useravatar?size=medium&avatarId=10351"
                },
                "displayName": "Somaiah  [Administrator]",
                "active": true,
                "timeZone": "Europe/Berlin"
            },
            "customfield_10000": null,
            "aggregateprogress": {
                "progress": 21600,
                "total": 21600,
                "percent": 100
            },
            "customfield_10001": "10000_*:*_1_*:*_889020000_*|*_10001_*:*_1_*:*_0",
            "customfield_10002": "com.atlassian.servicedesk.plugins.approvals.internal.customfield.ApprovalsCFValue@37e4eb",
            "customfield_10003": null,
            "customfield_10004": null,
            "environment": null,
            "duedate": null,
            "progress": {
                "progress": 21600,
                "total": 21600,
                "percent": 100
            },
            "votes": {
                "self": "https://somaiah.atlassian.net/rest/api/2/issue/MRL-22/votes",
                "votes": 0,
                "hasVoted": false
            }
        }
    }




We are only interested in the version, the resolution date and the creation date. Run the Jira response through jq to extract these fields:


echo "${JIRA_FILTER_INFO}" | jq -r '.issues | map(.fields | (.fixVersions[] | { version: .name }) + { resolutionDate: .resolutiondate } + { createdDate: .created })'

This returns an array of elements that look like this:

  {
    "version": "Version 1.0",
    "resolutionDate": "2016-05-05T18:30:16.000+0200",
    "createdDate": "2016-04-23T09:55:16.000+0200"
  },
  {
    "version": "Version 1.0",
    "resolutionDate": "2016-05-03T16:52:16.000+0200",
    "createdDate": "2016-04-23T09:55:16.000+0200"
  }


Now it's just a matter of some bash magic to extract the info from this array and present it to Confluence. The full script is available on GitHub: https://github.com/somaiah/jira-confluence-graphs/blob/master/src/timeToMarket.sh
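
The core of that magic is just date arithmetic. A minimal sketch, assuming GNU date (the helper name is mine):

```shell
# Days from issue creation to resolution. The fractional seconds (and the
# UTC offset after them) are stripped, which is fine here because every
# timestamp in one response carries the same offset.
elapsed_days() {
    created_epoch=$(date -d "${1%%.*}" +%s)
    resolved_epoch=$(date -d "${2%%.*}" +%s)
    echo $(( (resolved_epoch - created_epoch) / 86400 ))
}

elapsed_days "2016-04-23T09:55:16.000+0200" "2016-05-05T18:30:16.000+0200"
```

Averaging the per-issue results for each version gives the "Average days used per release" row fed into the chart macro.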

But the final HTML to be sent to Confluence looks like this:


<h2>Average time to market per release </h2>
<table>
    <tbody>
    <tr>
        <td>
            <h2>Project: Moon Rocket Launch</h2>

            <div style='line-height:50px;'><br/></div>
            <ac:macro ac:name='chart'>
                <ac:parameter ac:name='type'>line</ac:parameter>
                <ac:parameter ac:name='width'>400</ac:parameter>
                <ac:parameter ac:name='height'>600</ac:parameter>
                <ac:parameter ac:name='forgive'>true</ac:parameter>
                <ac:parameter ac:name='xLabel'>Release</ac:parameter>
                <ac:parameter ac:name='yLabel'>Days</ac:parameter>
                <ac:parameter ac:name='categoryLabelPosition'>down90</ac:parameter>
                <ac:rich-text-body>
                    <table>
                        <tbody>
                        <tr>
                            <th><p>&nbsp;</p></th>
                            <th><p> Version 1.0</p></th>
                            <th><p> Version 2.0</p></th>
                        </tr>
                        <tr>
                            <td><p>Average days used per release</p></td>
                            <td><p>8</p></td>
                            <td><p>2</p></td>
                        </tr>
                        </tbody>
                    </table>
                </ac:rich-text-body>
            </ac:macro>
            <div style='line-height:50px;'><br/></div>
        </td>
    </tr>
    </tbody>
</table>


Now you just need to update the page in Confluence with a REST PUT:


echo '{"id":"'${CONFLUENCE_PAGE_ID}'","type":"page","title":"'${PAGE_NAME}'","space":{"key":"'${CONFLUENCE_SPACE}'"},"body":{"storage":{"value":"'${CONTENT}'","representation":"storage"}},"version":{"number":'${NEXT_PAGE_VERSION}'}}' > body.json

RESPONSE=`curl --globoff --insecure --silent -u ${CONFLUENCE_USER}:${CONFLUENCE_PASSWORD} -X PUT -H 'Content-Type: application/json' --data @body.json ${CONFLUENCE_REST_API_PAGE_URL}/${CONFLUENCE_PAGE_ID}`


  • The Confluence page ID is the ID of the page you created (see assumptions)
  • The page name is the page title
  • The space key is the Confluence space your page is in
  • The content is the generated HTML
  • The page version must be incremented by 1
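
Confluence rejects a PUT whose version number is not exactly the current one plus one, so the script needs to read the current version first. A sketch (the helper name is mine; extraction is done with sed so the snippet needs nothing beyond coreutils, though jq works just as well):

```shell
# Derive the next page version from the page JSON, which in a real run
# would come from something like:
#   curl --silent -u ${CONFLUENCE_USER}:${CONFLUENCE_PASSWORD} ${CONFLUENCE_REST_API_PAGE_URL}/${CONFLUENCE_PAGE_ID}
next_page_version() {
    current=$(echo "$1" | sed -n 's/.*"number":[[:space:]]*\([0-9][0-9]*\).*/\1/p' | head -n 1)
    echo $((current + 1))
}

next_page_version '{"id":"12345","type":"page","version":{"number":7}}'
```

The result is what goes into NEXT_PAGE_VERSION in the body.json above.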

The resulting graph in Confluence looks like this:



If you are using Jenkins for CI, you can simply stick this script in a Jenkins job and run it periodically (say, every two weeks).
Now you have a nice visual display of your TTM that's automatically updated without any manual intervention. Neat?

As you can guess from the Jira info returned by the REST call, you can do a whole bunch of stuff - as I said before, delivery quality and productivity are only two examples. Simply tailor your JQL and you're set.
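
The delivery-quality number, for instance, boils down to counting issue types per release; once jq has emitted one issue type per line, plain shell does the counting. A sketch with hard-coded sample input:

```shell
# Count issue types. In the real script the list would come from jq, e.g.:
#   echo "${JIRA_FILTER_INFO}" | jq -r '.issues[].fields.issuetype.name'
printf '%s\n' Bug Story Bug Task Bug | sort | uniq -c | sort -rn
```

This prints each type with its count, most frequent first; the defect percentage is then one division away.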

You can find the full code on github: https://github.com/somaiah/jira-confluence-graphs
Since I used the trial versions of Jira and Confluence, sample requests and responses are saved in https://github.com/somaiah/jira-confluence-graphs/tree/master/resources

Thursday, February 11, 2016

Sonar + Jacoco + Maven multi module projects + Jenkins

I recently had to come up with a Sonar setup for a multi-module Maven project. I found several examples of things that came close, but nothing that was EXACTLY what I was looking for.

My set up:

SonarQube 5.3 set up on a server, running with an Oracle backend
Jenkins job running a build
JUnit tests
Jenkins > Configure > Sonar Runner added



0. Set up the Jenkins job to build your code:

These details are out of scope here. This is just a normal Jenkins maven project that builds stuff into a workspace.

1. Set up the Jenkins job for sonar:

As recommended by sonarqube.org, I created a separate job and, in the Build section, chose Invoke Standalone SonarQube Analysis.

Note that under Advanced Project Options, I had to select Use custom workspace and fill in the workspace of the job that built my code in step 0.


My Build section looked like this:



For readability, my sonar properties were:



# Metadata
sonar.host.url=http://sonar.domain.com
sonar.projectKey=somekey-sonar-runner
sonar.projectName=SomeNameSonarQube Runner
sonar.projectVersion=dev

# Source info
sonar.forceAnalysis=true
sonar.sourceEncoding=ISO-8859-15
maven.test.failure.ignore=true
sonar.sources=.
sonar.exclusions=**/SomeJavaFile.java,**/target/**/*,**/resources/min/**/*.js,**/node_modules/**/*,**/generated-sources/**/*,**/generated_sources/**/*,**/resources/lib/**/*

# Tests
sonar.junit.reportsPath=**/target/surefire-reports
sonar.surefire.reportsPath=**/target/surefire-reports
sonar.jacoco.reportPath=${WORKSPACE}/target/jacoco.exec
sonar.jacoco.itReportPath=${WORKSPACE}/target/jacoco-it.exec
sonar.java.binaries=**/target/classes
sonar.java.coveragePlugin=jacoco

# Debug
sonar.verbose=true

2. Set up the Maven pom.xml:

Add the following properties:

<sonar.core.codeCoveragePlugin>jacoco</sonar.core.codeCoveragePlugin>
<sonar.dynamicAnalysis>reuseReports</sonar.dynamicAnalysis>
<sonar.jacoco.reportPath>${projectRoot}/target/jacoco.exec</sonar.jacoco.reportPath>
<sonar.jacoco.itReportPath>${projectRoot}/target/jacoco-it.exec</sonar.jacoco.itReportPath>
<sonar.language>java</sonar.language>

Add the jacoco-maven-plugin to your build:

<build>
    <plugins>
        <plugin>
            <groupId>org.jacoco</groupId>
            <artifactId>jacoco-maven-plugin</artifactId>
            <version>0.7.5.201505241946</version>
            <executions>
                <execution>
                    <id>agent-for-ut</id>
                    <goals>
                        <goal>prepare-agent</goal>
                    </goals>
                    <configuration>
                        <destFile>${sonar.jacoco.reportPath}</destFile>
                        <append>true</append>
                    </configuration>
                </execution>
                <execution>
                    <id>agent-for-it</id>
                    <goals>
                        <goal>prepare-agent-integration</goal>
                    </goals>
                    <configuration>
                        <destFile>${sonar.jacoco.itReportPath}</destFile>
                        <append>true</append>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

That's it! Build your Jenkins job (from step 0) to build your code, and the jacoco-maven-plugin will automatically create one large jacoco.exec and jacoco-it.exec file in the Jenkins workspace of the Sonar job under
   workspace/target

Now when you run the Sonar Jenkins job (from step 1), Sonar will grab the jacoco exec files, figure out the code coverage and create those beautiful graphs, which you can see on your Sonar server.

Wednesday, October 21, 2015

Using curl with SSL cert chain

You can use the --insecure option to make curl skip SSL verification:

curl --insecure -u user:passwd -X GET -H 'Content-Type: application/json' "https://somesecureserver.com/rest/field"


But what if you WANT to use SSL? The curl docs mention the --cacert option, but it's still a little unclear how to use it.

First you'll need the server's entire certificate chain. You need the whole chain because curl does not ship with any CA cert info. The --cacert option also requires the certs in PEM format, and the entire chain must be in one file, since --cacert accepts only one file.

1. Get all the certs from a browser

Get this by clicking on the Lock or Green portion from the address bar


Click on the Connection tab and then "Certificate Information"
Click on the Details tab. Here you can copy the certificate to a file.
Select the DER encoded binary X.509 (.cer) option


Do this for all the entries that show up in the Certificate Path tab (there will be around 3)


2. Convert the .cer files to PEM format with openssl:

openssl x509 -inform DER -in file1.cer -out file1.pem -text
openssl x509 -inform DER -in file2.cer -out file2.pem -text
openssl x509 -inform DER -in file3.cer -out file3.pem -text
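
The same conversion as a loop, for when the chain has more than a handful of certificates:

```shell
# Convert every DER-encoded .cer in the current directory to PEM.
for cer in *.cer; do
    [ -e "$cer" ] || continue   # no .cer files: the glob stays literal
    openssl x509 -inform DER -in "$cer" -out "${cer%.cer}.pem"
done
```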

3. Now concatenate all these PEM files into one bundle

cat *.pem > certRepo

Now you can use certRepo to connect via SSL:

curl --cacert certRepo -u user:passwd -X GET -H 'Content-Type: application/json' "https://somesecureserver.com/rest/field"

Thursday, March 19, 2015

Load properties into a guava ImmutableMap with Spring

Create your props file (here called secret-identities.properties):


Clark\ Kent = Superman
Bruce\ Wayne = Batman
Kit\ Walker = The Phantom

Note that keys in .properties files cannot contain unescaped spaces, so you must escape them with a '\'.

In your Spring configuration:

<bean id="mapProperties" class="org.springframework.beans.factory.config.PropertiesFactoryBean">
    <property name="ignoreResourceNotFound" value="true"/>
    <property name="fileEncoding" value="UTF-8"/>
    <property name="locations">
        <list>
            <value>classpath:secret-identities.properties</value>
        </list>
    </property>
</bean>

<bean id="secretIdentityMap" class="com.google.common.collect.ImmutableMap" factory-method="copyOf">
    <constructor-arg ref="mapProperties"/>
</bean>


Now you can get a fully injected ImmutableMap in your code:

@Autowired
private ImmutableMap<String, String> secretIdentityMap;


Thanks to Stackoverflow for inspiration!

Thursday, April 17, 2014

Project post mortem

I recently completed an engagement with one of the largest telecommunication companies in the world. We faced multiple hurdles, including geography, time zones and infrastructure. Yet the project was an overwhelming success, and the client was very satisfied.

I thought I would jot down a few things that we did right, and also note some things we could have done better, to hopefully help me do things right in the future.

Things we did right:

1. People:

I believe the right people can make or break a project. Of course I do, or I wouldn't be a consultant :) On this particular engagement I was lucky enough to be on a team of very talented and dedicated people. But what exactly made this group of talented and dedicated people succeed as a team? I thought I'd break it down further by roles.
  • Project Manager: We had a project manager who was engaged enough to be effective, but knew when to stay away to not interfere with the development process. 
    • I liked that there was one and only one project manager that I reported to. She attended the daily stand up, so I didn't need to waste any more time bringing her up to speed. (I have been on projects in the past where there were up to eight project managers, each of whom required a daily update from me!)
  • Product Owner: Our Product Owner sat with the development team, in the pen, with testers when they wrote their test scripts, and with the Usability Designer when the initial flow of the application was conceived. 
    • I liked that the product owner was able to break functionality into smaller user stories, and was brave enough to wait for future sprints until a full set of user stories (or an epic) was complete. 
  • Solution Architect: This was a person intimately familiar with the back end systems that needed to be accessed. 
    • He was heavily involved with the Usability Designer and the Product Owner when the product was conceived. Since he was so easily accessible, epics, usability flows and stories were completed quickly. Prioritization of stories was easy, because the solution architect knew exactly how complex a story could be.
  • Usability Designer: He was involved very early on, with the Product Owner and solution architect. He took several weeks to thrash out the flow of the application, finally coming out with a large flow diagram that he pasted on the walls of our meeting room. Each screen in the flow was represented by a single A4 sized paper. 
    • While initially I felt like I had entered John Nash's garage from "A Beautiful Mind", seeing a physical representation of every single screen on a wall turned out to be very helpful, since it gave everyone a very clear idea of what exactly the application did. 
  • Delivery Architect (me): I was responsible for selecting the technologies we would use, and architect the code base. I was also responsible for the developers and testers in India. I was based on site, in the same room as the Project Manager, Product Owner and Solution Architect. 
    • I liked that I could just lift my head up and talk to the Solution Architect, or the Product Owner, rather than send an email out and wait a few hours for a response. This was a huge benefit in terms of communication.
    • By being onsite, I think I provided a huge benefit to the client, because no one from the client had to deal with anyone in India. They spoke to me and only me, and I dealt with the off site team. Being a single point of contact streamlined communication for the client, while my developers were sheltered from questions and emails and could focus on coding.
  • Developers: I was lucky to have a developer on my team who quickly grasped what the product we were creating was about. 
    • He challenged me on my decisions, and offered alternatives of his own. I liked this, because it allowed me to come up with an architecture and a code base that I had defended truthfully. Of course, concepts that he suggested were also integrated if they turned out to be superior to my own.
  • Testers: Testers in India tested each user story before it was submitted for acceptance testing by the client.

2. Methodologies:

We followed SCRUM, with daily stand ups and 2 week long sprints. Each sprint ended with a functional demo (to managers, product owners and other "pigs") and a technical demo (to the solution architect, client developers and other "chickens"). 
    • The functional demo dealt with user story functionality being met. 
    • The technical demo was very often a peer review on the code.

3. Release Processes:

While we did not automate our release process as much as I would have liked (our release servers were not ready in time), I wrote custom scripts that automated releases as much as possible.

  • Environment related files were deployed by repository command. This allowed for the code to contain several directories, each corresponding to a certain environment. 
    • To update the properties, one needed to just do a repo update.
  • A script took the location of the release as an argument, and installed and restarted the server.
    • The server referred to the latest release with a sym link, so releases could be rolled back easily (although this was never needed :) 
Indian testers tested on one server, and client acceptance tests were performed on another. This allowed us to have a more "stable" version that clients could see and use for internal demos. It also gave the client a good "feeling", because they only saw stable versions of the code (i.e. after our Indian testers had found all the bugs they could).

4. Development Tools:


  • I used IntelliJ to develop with Java. As is their motto, it is indeed a pleasure to develop with it. It is very convenient to have DB access and network sniffing from the same tool as one edits and debugs code. 
  • Documentation was stored in a wiki. I cannot emphasize enough the importance of this. Apart from user stories, I also stored installation instructions, developer notes (to explain some esoteric code, for example) and release documentation. Wikis are preferable to Word docs because they are searchable, easily updated and available in a single location. 

5. Language:

Documentation was written in English. This was useful since developers and testers were in India. I strongly support English documentation for all IT projects, especially in an era of "right shoring". Even if development is done onshore, it is not uncommon to fly in developers from other countries to develop onsite.

Things that could be improved:

1. People:

Even though I said I lucked out with developers, this was not so initially. I went through several iterations of developers before I found the ones that worked. I went through several who were obviously not interested in their jobs. They exaggerated their skills on their resumes, and their code delivery was poor. They had not heard of clean coding standards. Even after repeated critiques during peer reviews, they did not improve. Very often they would say a task was "done" when they had not even unit tested it. This was exasperating, especially when the developer was on the other side of the world.

How could I improve this?

  1. Interview rigorously. Do not take resumes for granted. Ask questions such as which IDE they prefer and why, which IT-related book they read last, which libraries they would choose for collections, caching and so on. Developers should be passionate about IT. They should not be satisfied with delivering code that is below par.
  2. Ensure that teams are not spread out over more than 2 places. If there is an offshore team, all the developers in the offshore team must sit together. Not only will they help each other with peer coding, setting up environments and so on, it also builds a sense of camaraderie.
  3. Communicate often and specifically. I held 10 minute "stand ups" on a communicator each morning. If any of the developers said they were not done with their task, after the meeting I specifically asked them why and how I could help. Sometimes junior developers are afraid to ask for help, and will keep on working on something when a little help from someone else could aid them a lot.
  4. Have developers from off-site visit on-site. While this is a basic tenet of working agile, it is ignored all too often because of the costs involved. There is a huge increase in productivity when a developer is on-site, in terms of communication, understanding requirements and peer development. Additionally, I think this gives clients a good "feeling" about the person developing their code, and also gives the developer better motivation to develop better, since they now have a face to put to the client.

2. Methodologies:

Scrum worked very well in my opinion. If there is one thing I would change, it would be to have peer reviews more often. Having a peer review only after a delivery meant that if someone proposed a better way to do something, that improvement had to wait for a subsequent sprint, and only if I could convince the product owners that it was a high enough priority. With peer reviews done, or at least discussed, more often, the changes would be smaller too.

3. Release Processes:

As I mentioned earlier, I was not 100% satisfied with our automation of test and release processes. Going forward I would like to have tests and releases fully automated, so that every night a build is deployed and checked with a suite of automated tests. I can see Selenium fitting in quite nicely with Jenkins here, but I have also been hearing good things about the Go tool from ThoughtWorks, which I want to look into.

I would like to have:
  1. Nightly builds.
  2. A series of automated tests run after every build, to ensure nothing broke.
  3. An email notification to the most recent editor of the checked-in file if there is a compilation error.
  4. An email notification to the whole team if the automated tests fail.

I would like to start working towards smaller, more frequent releases. Some teams aim for a release after each sprint. While this may not always be feasible (for a brand new product, for example), I would like to aim at releasing small bits of new functionality at a time.
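As a sketch of how the wishlist above might hang together on a modern Jenkins, a pipeline definition could look something like this. Everything here is an assumption for illustration: the cron schedule, the Maven goals, the `selenium` profile name and the email-ext plugin usage are hypothetical, not our actual setup.

```groovy
// Hypothetical Jenkinsfile sketch: nightly builds, automated tests, email on failure.
pipeline {
    agent any
    triggers {
        cron('H 2 * * *')                     // 1. nightly builds
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B clean package'     // 3. compilation errors fail the build here
            }
        }
        stage('Automated tests') {
            steps {
                sh 'mvn -B verify -Pselenium' // 2. e.g. Selenium tests behind a Maven profile
            }
        }
    }
    post {
        failure {
            // 3./4. email-ext plugin: 'culprits' covers the committers whose changes
            // are in the failing build; a fixed team address could be added as well
            emailext(
                subject: "Build failed: ${currentBuild.fullDisplayName}",
                body: "See ${env.BUILD_URL}",
                recipientProviders: [culprits()]
            )
        }
    }
}
```

GoCD models the same idea natively as pipelines of stages, which is part of why I want to look into it.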

4. Development Tools:


  • Although I LOVED IntelliJ, not every developer shared my affection for it. Some chose to use Eclipse. Of course this is a matter of taste and is acceptable, but in my opinion it is more effective to have one common IDE. Installation, debugging, 3rd-party plugins etc. can all be simplified if the whole team uses the same IDE.
  • I have already said wikis are wonderful, but in addition I would like to integrate other tools with them. Jira, for example, allows for tight coupling with Confluence and Crucible, which I have found to be invaluable for tracking peer reviews. Such a tight coupling would make peer reviews a part of the development process of each task.


Saturday, April 12, 2014

Apply with LinkedIn button with AngularJS and Twitter bootstrap

I wrote a site with AngularJS and Twitter Bootstrap that needed an "Apply with LinkedIn" button. The site had to support IE8 (ugh!)

In theory, setting up the button should be very easy. All you need to do is add this code into your web page:

<script type="text/javascript" src="http://platform.linkedin.com/in.js"> api_key: <YOUR API KEY GOES HERE> </script>

<script type="IN/Apply" data-companyId="1337" data-jobTitle="Chief Cat Wrangler" data-email="your-email-address@your-company.com" data-jobLocation="Oslo, Norway"> </script>

However, because of IE8 issues and Angular partials, this wasn't so straightforward, and I had to jump through some hoops to get it to work.

1. IE8 specifically needed the
<script type="text/javascript" src="http://platform.linkedin.com/in.js">api_key: xxxx</script>
in the head.

Other browsers seemed to manage if I included this call just before I called the script.

But including the in.js call in the head of my index.html file meant my partials did not find it. So my LinkedIn buttons, which were displayed in my partials, never showed. Sometimes it seemed like the page needed a refresh before the button showed up. In addition, I needed to change the jobs and locations dynamically.

This link from Eugene O'Neill helped me a lot: https://developer.linkedin.com/forum/use-apply-linkedin-button-whith-different-jobs

I essentially had to "refresh" the DOM after loading the button each time.

So my code looked like this:

  a. index.html had in.js in the head:
<script type="text/javascript" src="http://platform.linkedin.com/in.js">api_key: xxxx</script>

  b. In my partial I had a div that looked like this:
<div id="linkedInTop" ng-bind-html="linkedIn()"></div>

    My jobs (and locations) checkbox tag looked like this:
<input type="checkbox" class="btn btn-primary" data-ng-model="job.checked" data-ng-checked="job.checked" value="{{job.name}}" data-ng-change="refreshLinkedIn()">{{job.name}}


  c. In my controller I had this:
$scope.linkedIn = function () {

    refreshLinkedInButton(jobTitles($scope, $filter), locations($scope, $filter));

};

function refreshLinkedInButton(jobTitle, location) {

    // create the script node

    var script = $('<script>')
        // add attributes
        .attr('type', 'IN/Apply') // make it an Apply with LinkedIn button
        .attr('data-jobTitle', jobTitle)
        .attr('data-companyid', '12345')
        .attr('data-email', 'someone@some.where.no')
        .attr('data-joblocation', location)
        .attr('data-logo', 'http://some.where.no/img/logo.png')
        .attr('data-themecolor', '#40b8e1');

    $('#linkedInTop').html(script);

    // check if IN.parse exists, if so, call it
    if (IN.parse) {
        IN.parse();
    }
    // otherwise, we will register it to fire on the systemReady event
    // this is just precaution, but can avoid a special race condition
    else {
        IN.Event.on(IN, "systemReady", IN.parse);
    }
}

This worked like a charm on all browsers, and allowed me to dynamically change jobs and locations.

Thanks again to Eugene O'Neill. The force is strong with him.

2. The other problem I had was that the Apply button seemed to get "smushed". It turns out this was because of Twitter Bootstrap and box-sizing.

I had to add this to my CSS and that fixed it.
span[id*='li_ui_li_gen_'] {
    -webkit-box-sizing: content-box;
    -moz-box-sizing: content-box;
    box-sizing: content-box;
}

The CSS code above changes the box-sizing for all LinkedIn-generated content (the spans which contain the Apply button) back to content-box.