Thursday, April 17, 2014

Project post mortem

I recently completed an engagement with one of the largest telecommunications companies in the world. We faced multiple hurdles, including geography, time zones and infrastructure. Yet the project was an overwhelming success, and the client was very satisfied.

I thought I would jot down a few things that we did right, and also note some things we could have done better, in the hope that it will help me do things right in the future.

Things we did right:

1. People:

I believe the right people can make or break a project. Of course I do, or I wouldn't be a consultant :) On this particular engagement I was lucky enough to be on a team of very talented and dedicated people. But what exactly made this group of talented and dedicated people succeed as a team? I thought I'd break it down further by roles.
  • Project Manager: We had a project manager who was engaged enough to be effective, but knew when to stay away to not interfere with the development process. 
    • I liked that there was one and only one project manager that I reported to. She attended the daily stand up, so I didn't need to waste any more time bringing her up to speed. (I have been on projects in the past where there were up to eight project managers, each of whom required a daily update from me!)
  • Product Owner: Our Product Owner sat with the development team, in the pen, with testers when they wrote their test scripts, and with the Usability Designer when the initial flow of the application was conceived. 
    • I liked that the product owner was able to break functionality into smaller user stories, and was brave enough to wait for future sprints until a full set of user stories (or an epic) was complete. 
  • Solution Architect: This was a person intimately familiar with the back end systems that needed to be accessed. 
    • He was heavily involved with the Usability Designer and the Product Owner when the product was conceived. Since he was so easily accessible, epics, usability flows and stories were completed quickly. Prioritization of stories was easy, because the solution architect knew exactly how complex a story could be.
  • Usability Designer: He was involved very early on, with the Product Owner and solution architect. He took several weeks to thrash out the flow of the application, finally coming out with a large flow diagram that he pasted on the walls of our meeting room. Each screen in the flow was represented by a single A4 sized paper. 
    • While initially I felt like I had entered John Nash's garage from "A Beautiful Mind", seeing a physical representation of every single screen on a wall turned out to be very helpful, since it gave everyone a very clear idea of what exactly the application did. 
  • Delivery Architect (me): I was responsible for selecting the technologies we would use, and for architecting the code base. I was also responsible for the developers and testers in India. I was based on site, in the same room as the Project Manager, Product Owner and Solution Architect. 
    • I liked that I could just lift my head up and talk to the Solution Architect, or the Product Owner, rather than send an email out and wait a few hours for a response. This was a huge benefit in terms of communication.
    • By being onsite, I think I provided a huge benefit to the client, because no one from the client had to deal with anyone in India. They spoke to me and only me, and I dealt with the off site team. Being a single point of contact streamlined communication for the client, while my developers were sheltered from questions and emails and could focus on coding.
  • Developers: I was lucky to have a developer on my team who quickly grasped what the product we were creating was about. 
    • He challenged me on my decisions, and offered alternatives of his own. I liked this, because having to defend my choices honestly led to a better architecture and code base. Of course, concepts he suggested were integrated whenever they turned out to be superior to my own.
  • Testers: Testers in India tested each user story before it was submitted for acceptance testing by the client.

2. Methodologies:

We followed SCRUM, with daily stand ups and 2 week long sprints. Each sprint ended with a functional demo (to managers, product owners and other "pigs") and a technical demo (to the solution architect, client developers and other "chickens"). 
    • The functional demo dealt with user story functionality being met. 
    • The technical demo was very often a peer review on the code.

3. Release Processes:

While we did not automate our release process as much as I would have liked (our release servers were not ready in time), I wrote custom scripts that automated releases as far as possible.

  • Environment-related files were deployed by repository command. This allowed the code to contain several directories, each corresponding to a certain environment. 
    • To update the properties, one just needed to do a repository update.
  • A script took the location of the release as an argument, then installed and restarted the server.
    • The server referred to the latest release with a symlink, so releases could be rolled back easily (although this was never needed :) 
Indian testers tested on one server, and client acceptance tests were performed on another. This allowed us to keep a more "stable" version that clients could see and use for internal demos. It also gave the client a good "feeling", because they only ever saw stable releases (i.e. after our Indian testers had found all the bugs they could).

4. Development Tools:


  • I used IntelliJ IDEA to develop in Java. As its motto promises, it is indeed a pleasure to develop with. It is very convenient to have DB access and network sniffing in the same tool in which one edits and debugs code. 
  • Documentation was stored in a wiki. I cannot emphasize enough the importance of this. Apart from user stories, I also stored installation instructions, developer notes (to explain some esoteric code, for example) and release documentation. Wikis are preferable to Word docs because they are searchable, easily updated and available in a single location. 

5. Language:

Documentation was written in English. This was useful since developers and testers were in India. I strongly support English documentation for all IT projects, especially in an era of "right shoring". Even if development is done onshore, it is not uncommon to fly in developers from other countries to develop onsite.

Things that could be improved:

1. People:

Even though I said I lucked out with developers, this was not so initially. I went through several iterations of developers before I found the ones that worked. I went through several who were obviously not interested in their job. They exaggerated their skills on their resumes, and their code delivery was poor. They had not heard of clean coding standards, and even after repeated critiques during peer reviews, they did not improve. Very often they would say a task was "done" when they had not even unit tested it. This was exasperating, especially when the developer was on the other side of the world.

How could I improve this?

  1. Interview rigorously. Do not take resumes for granted. Ask questions such as which IDE they prefer and why, which IT-related book they read last, or which library they would choose for collections, caching and so on. Developers should be passionate about IT. They should not be satisfied with delivering code that is below par.
  2. Ensure that teams are not spread out over more than two places. If there is an offshore team, all the developers in that team must sit together. Not only will they help each other with pair programming, setting up environments and so on, it also builds a sense of camaraderie.
  3. Communicate often and specifically. I held 10 minute "stand ups" on a communicator each morning. If any of the developers said they were not done with their task, after the meeting I specifically asked them why and how I could help. Sometimes junior developers are afraid to ask for help, and will keep on working on something when a little help from someone else could aid them a lot.
  4. Have developers from off-site visit on-site. While this is a basic tenet of agile, it is ignored all too often because of the costs involved. There is a huge increase in productivity when a developer is on-site, in terms of communication, understanding requirements and peer development. Additionally, I think this gives clients a good "feeling" about the person developing their code, and gives the developer extra motivation, since he now has a face to put to the client.

2. Methodologies:

SCRUM worked very well in my opinion. If there is one thing I would change, it is to hold peer reviews more often. Having a peer review only after a delivery meant that if someone proposed a better way to do something, the improvement had to wait for a subsequent sprint, and then only if I could convince the product owners that it was a high enough priority. With peer reviews done, or at least discussed, more often, the changes would be smaller too.

3. Release Processes:

As I mentioned earlier, I was not 100% satisfied with our automation of test and release processes. Going forward I would like to have tests and releases fully automated, so that every night a build is deployed and checked with a suite of automated tests. I can see Selenium fitting in quite nicely with Jenkins here, but I have also been hearing things about the Go tool from ThoughtWorks, which I want to look into.

I would like to have:
  1. Nightly builds.
  2. A series of automated tests run after every build, to ensure nothing broke.
  3. On a compilation error, an email notification to the most recent editor of the offending checked-in file.
  4. On automated test failures, an email notification to the whole team.

I would also like to start working towards smaller, more frequent releases. Some teams aim for a release after each sprint. While this may not always be feasible (for a brand new product, for example), I would like to aim at releasing small bits of new functionality at a time. 

4. Development Tools:


  • Although I LOVED IntelliJ, not every developer shared my affection for it. Some chose to use Eclipse. Of course this is a matter of taste and is acceptable, but in my opinion it is more effective to have one common IDE. Installation, debugging, third-party plugins and so on are all simplified if the whole team uses the same IDE.
  • I have already said wikis are wonderful, but in addition I would like to integrate other tools with them. Jira, for example, allows for tight coupling with Confluence and Crucible, which I have found invaluable for tracking peer reviews. Such a tight coupling would make peer reviews a part of the development process of each task.


Saturday, April 12, 2014

Apply with LinkedIn button with AngularJS and Twitter bootstrap

I wrote a site with AngularJS and Twitter Bootstrap that needed an "Apply with LinkedIn" button. The site had to support IE8 (ugh!)

In theory, setting up the button should be very easy. All you need to do is add this code into your web page:

<script type="text/javascript" src="http://platform.linkedin.com/in.js"> api_key: <YOUR API KEY GOES HERE> </script>

<script type="IN/Apply" data-companyId="1337" data-jobTitle="Chief Cat Wrangler" data-email="your-email-address@your-company.com" data-jobLocation="Oslo, Norway"> </script>

However, because of IE8 issues and Angular partials, this wasn't so straightforward, and I had to jump through some hoops to get it to work.

1. IE8 specifically needed the
<script type="text/javascript" src="http://platform.linkedin.com/in.js">api_key: xxxx</script>
in the head.

Other browsers seemed to manage if I included this call just before I called the script.

But including the in.js call in the head of my index.html file meant my partials did not find it. So my LinkedIn buttons, which were displayed in my partials, never showed. Sometimes it seemed like the page needed a refresh before the button showed up. In addition, I needed to change the jobs and locations dynamically.

This link from Eugene O'Neill helped me a lot: https://developer.linkedin.com/forum/use-apply-linkedin-button-whith-different-jobs

I essentially had to "refresh" the DOM after loading the button each time.

So my code looked like this:

  a. index.html had in.js in the head:
<script type="text/javascript" src="http://platform.linkedin.com/in.js">api_key: xxxx</script>

  b. In my partial I had a div that looked like this:
<div id="linkedInTop" ng-bind-html="linkedIn()"></div>

    My jobs (and locations) checkbox tag looked like this:
<input type="checkbox" class="btn btn-primary" data-ng-model="job.checked" data-ng-checked="job.checked" value="{{job.name}}" data-ng-change="refreshLinkedIn()">{{job.name}}


  c. In my controller I had this:
$scope.linkedIn = function () {

    refreshLinkedInButton(jobTitles($scope, $filter), locations($scope, $filter));

};

function refreshLinkedInButton(jobTitle, location) {

    // create the script node

    var script = $('<script>')
        // add attributes
        .attr('type', 'IN/Apply') // make it an Apply with LinkedIn button
        .attr('data-jobTitle', jobTitle)
        .attr('data-companyid', '12345')
        .attr('data-email', 'someone@some.where.no')
        .attr('data-joblocation', location)
        .attr('data-logo', 'http://some.where.no/img/logo.png')
        .attr('data-themecolor', '#40b8e1');

    $('#linkedInTop').html(script);

    // check if IN.parse exists, if so, call it
    if (IN.parse) {
        IN.parse();
    }
    // otherwise, we will register it to fire on the systemReady event
    // this is just precaution, but can avoid a special race condition
    else {
        IN.Event.on(IN, "systemReady", IN.parse);
    }
}

This worked like a charm on all browsers, and allowed me to dynamically change jobs and locations.

Thanks again to Eugene O'Neill. The force is strong with him.

2. The other problem I had was that the Apply button seemed to get "smushed". It turns out this was because of Twitter Bootstrap and box-sizing.

I had to add this to my CSS, and that fixed it:
span[id*='li_ui_li_gen_'] {
    -webkit-box-sizing: content-box;
    -moz-box-sizing: content-box;
    box-sizing: content-box;
}

The CSS above changes the box-sizing for all LinkedIn-generated content (the spans which contain the Apply button) back to content-box.

Tuesday, April 1, 2014

Converting String to Integer

I ran some tests to find the fastest way to convert a String to an Integer (or int).



// Guava's Ints and Commons Lang's NumberUtils need to be on the classpath
// (in Commons Lang 2 the package is org.apache.commons.lang.math instead)
import com.google.common.primitives.Ints;
import org.apache.commons.lang3.math.NumberUtils;

public class HelloWorld {

  public static int limit = 1000000;
  public static String sint = "9999";

  public static void main(String[] args) {

      long start = System.currentTimeMillis();
      for (int i = 0; i < limit; i++) {
         Integer integer = Integer.valueOf(sint);
      }
      long end = System.currentTimeMillis();

      System.out.println("valueOf took: " + (end - start));


      start = System.currentTimeMillis();
      for (int i = 0; i < limit; i++) {
          int integer = Integer.parseInt(sint);
      }
      end = System.currentTimeMillis();

      System.out.println("parseInt took: " + (end - start));


      start = System.currentTimeMillis();
      for (int i = 0; i < limit; i++) {
          int integer = Ints.tryParse(sint);
      }
      end = System.currentTimeMillis();

      System.out.println("Ints.tryParse took: " + (end - start));


      start = System.currentTimeMillis();
      for (int i = 0; i < limit; i++) {
          Integer integer = NumberUtils.createInteger(sint);
      }
      end = System.currentTimeMillis();

      System.out.println("numberUtils.createInteger took: " + (end - start));

      start = System.currentTimeMillis();
      for (int i = 0; i < limit; i++) {
          int integer = NumberUtils.toInt(sint);
      }
      end = System.currentTimeMillis();

      System.out.println("numberUtils.toInt took: " + (end - start));

  }
}

My results were:
valueOf took: 77
parseInt took: 61
Ints.tryParse took: 117
numberUtils.createInteger took: 169
numberUtils.toInt took: 63
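(A caveat: these numbers come from a naive single-shot loop with System.currentTimeMillis() and no JIT warm-up, so treat them as rough indications only. A slightly fairer way to time things is sketched below; the BenchUtil class is just illustrative, and for serious measurements a harness like JMH is the right tool.)

```java
public class BenchUtil {

    // Hypothetical helper: run the task a number of warm-up iterations first
    // so the JIT gets a chance to compile it, then time the measured
    // iterations with System.nanoTime().
    static long timeMillis(Runnable task, int warmup, int iterations) {
        for (int i = 0; i < warmup; i++) {
            task.run();
        }
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            task.run();
        }
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        long elapsed = timeMillis(() -> Integer.parseInt("9999"), 100_000, 1_000_000);
        System.out.println("parseInt took: " + elapsed + " ms");
    }
}
```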

So in summary:

If you can get by with an int, use Integer.parseInt.
If you absolutely need an Integer, use Integer.valueOf.
If you need the convenience of not handling exceptions when you parse, or if you are unsure of the format of the input (i.e. a string that need not be a number), use Ints.tryParse.

Why use Ints.tryParse instead of Integer.parseInt?

Because I ran some other code with bad values:

// Guava's Ints needs to be on the classpath
import com.google.common.primitives.Ints;

public class HelloWorld {

    public static int limit = 1000000;
    public static String sint = "abcd";

    public static void main(String[] args) {

        long start = System.currentTimeMillis();
        for (int i = 0; i < limit; i++) {
            try {
                Integer.parseInt(sint);
            } catch (NumberFormatException e) {
                // do nothing
            }
        }
        long end = System.currentTimeMillis();

        System.out.println("parseInt took: " + (end - start));

        start = System.currentTimeMillis();
        for (int i = 0; i < limit; i++) {
            Ints.tryParse(sint);
        }
        end = System.currentTimeMillis();

        System.out.println("Ints.tryParse took: " + (end - start));

    }
}

Here the results were:
parseInt took: 2630
Ints.tryParse took: 87

This should be quite obvious: throwing (and filling in the stack trace of) an exception on every iteration is a huge performance hit.
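The reason Guava's Ints.tryParse can stay fast on bad input is that it inspects the characters itself instead of throwing. Here is a minimal sketch of the same idea in plain Java (ParseUtil is my own illustrative class, not Guava's implementation, and it handles only base-10 with an optional leading minus):

```java
public class ParseUtil {

    // Returns null on malformed input instead of throwing
    // NumberFormatException, in the spirit of Guava's Ints.tryParse.
    static Integer tryParse(String s) {
        if (s == null || s.isEmpty()) {
            return null;
        }
        int i = 0;
        boolean negative = s.charAt(0) == '-';
        if (negative) {
            if (s.length() == 1) {
                return null; // "-" alone is not a number
            }
            i = 1;
        }
        long result = 0; // accumulate in a long so overflow is easy to spot
        for (; i < s.length(); i++) {
            char c = s.charAt(i);
            if (c < '0' || c > '9') {
                return null; // not a digit: bail out without an exception
            }
            result = result * 10 + (c - '0');
            if (result > 1L + Integer.MAX_VALUE) {
                return null; // already past any possible int value
            }
        }
        long value = negative ? -result : result;
        if (value < Integer.MIN_VALUE || value > Integer.MAX_VALUE) {
            return null;
        }
        return (int) value;
    }

    public static void main(String[] args) {
        System.out.println(tryParse("9999"));  // 9999
        System.out.println(tryParse("abcd"));  // null
        System.out.println(tryParse("-17"));   // -17
    }
}
```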

How I set up ehCache

Ehcache is a nice way to cache method responses. This is useful, for example, if one of your methods calls a web service or takes very long to execute.

I used ehcache with Spring, so I had to do some extra stuff so the annotations would work.

First I added ehcache and Google's ehcache annotations for Spring to my pom:


<dependency>
    <groupId>net.sf.ehcache</groupId>
    <artifactId>ehcache-core</artifactId>
    <version>2.6.0</version>
</dependency>
<dependency>
    <groupId>com.googlecode.ehcache-spring-annotations</groupId>
    <artifactId>ehcache-spring-annotations</artifactId>
    <version>1.2.0</version>
</dependency>

Then in my Spring context file I added:
<beans xmlns="http://www.springframework.org/schema/beans"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xmlns:context="http://www.springframework.org/schema/context"
            xmlns:ehcache="http://ehcache-spring-annotations.googlecode.com/svn/schema/ehcache-spring"
            xmlns:util="http://www.springframework.org/schema/util"
            xsi:schemaLocation="http://www.springframework.org/schema/beans
            http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
            http://www.springframework.org/schema/context
            http://www.springframework.org/schema/context/spring-context-3.0.xsd
            http://www.springframework.org/schema/util
            http://www.springframework.org/schema/util/spring-util-3.0.xsd
            http://ehcache-spring-annotations.googlecode.com/svn/schema/ehcache-spring
            http://ehcache-spring-annotations.googlecode.com/svn/schema/ehcache-spring/ehcache-spring-1.1.xsd">

    <ehcache:annotation-driven cache-manager="ehCacheManager"/>
    <bean id="ehCacheManager" class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean"/>
</beans>

This allows us to use the @Cacheable and @TriggersRemove annotations that come from the Google library.
I then placed ehcache.xml in my classpath (putting it in WEB-INF didn't work).

My ehcache.xml looked like this:

<?xml version="1.0" encoding="UTF-8"?>
<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:noNamespaceSchemaLocation="http://ehcache.org/ehcache.xsd">
    <diskStore path="java.io.tmpdir/ClavisCaches"/>
    <cache name="personCache" maxElementsInMemory="9000" eternal="true" overflowToDisk="false"/>
</ehcache>


By setting the eternal value to true, this cache never expires unless I specifically trigger remove (see below). There are other ways to have the cache expire on a time limit. Please see the ehcache docs for this.
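For instance, a time-based version of the cache could use the standard timeToLiveSeconds and timeToIdleSeconds attributes (the values below are just illustrative):

```xml
<cache name="personCache"
       maxElementsInMemory="9000"
       eternal="false"
       timeToLiveSeconds="3600"
       timeToIdleSeconds="600"
       overflowToDisk="false"/>
```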

Then I created a Cache class that returned the Collection object that I wanted to cache:


import com.googlecode.ehcache.annotations.Cacheable;
import com.googlecode.ehcache.annotations.TriggersRemove;
import com.googlecode.ehcache.annotations.When;

...

@Cacheable(cacheName = "personCache")
public Collection<CacheElement> getPersonCacheSet() {

    List<Person> personList = wsClient.getSomeExpensiveCall();
    Collection<CacheElement> personCacheSet = new HashSet<CacheElement>(personList.size());

    for (Person person : personList) {
        CacheElement cacheElement = new CacheElement();
        cacheElement.setName(person.getName());
        cacheElement.setMobil(person.getMobil());
        personCacheSet.add(cacheElement);
    }

    return personCacheSet;
}

@TriggersRemove(cacheName = "personCache", when = When.AFTER_METHOD_INVOCATION, removeAll = true)
public void clearPersonCache() {
    // Intentionally blank
}

Note that the cacheable methods need to be called from a different class: Spring applies the caching through a proxy, so internal calls within the same class bypass it.
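To see why calls from inside the same class bypass the proxy, here is a small, self-contained illustration using a plain JDK dynamic proxy (the PersonService names are made up for the example; Spring's proxying is more sophisticated, but the principle is the same):

```java
import java.lang.reflect.Proxy;

interface PersonService {
    String getPerson();
    String getPersonTwice();
}

class PersonServiceImpl implements PersonService {
    public String getPerson() {
        return "expensive result";
    }

    // Self-invocation: this call to getPerson() goes directly to "this",
    // not through any proxy, so a caching interceptor never sees it.
    public String getPersonTwice() {
        return getPerson() + " / " + getPerson();
    }
}

public class ProxyDemo {
    static int interceptedCalls = 0;

    public static void main(String[] args) {
        PersonService target = new PersonServiceImpl();
        PersonService proxy = (PersonService) Proxy.newProxyInstance(
                PersonService.class.getClassLoader(),
                new Class<?>[]{PersonService.class},
                (p, method, methodArgs) -> {
                    // a caching interceptor would consult the cache here
                    interceptedCalls++;
                    return method.invoke(target, methodArgs);
                });

        proxy.getPerson();      // intercepted
        proxy.getPersonTwice(); // intercepted once; the two inner calls are not
        System.out.println("intercepted: " + interceptedCalls); // intercepted: 2
    }
}
```

This is why the @Cacheable methods only work when invoked through the Spring-managed (proxied) bean from another class.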

In my Spring configuration, I annotated this Cache class as a Service. This means Spring took care of initializing this class for me as a singleton on startup, so I could just autowire it and call the methods when needed.

Here is my JUnit test:


    @Autowired
    private PersonCache personCache;

    @Test
    public void testGetPersonCacheSet() {

        // The first call makes the expensive call, the second call gets the result from the cache
        Collection<CacheElement> personCacheSet0 = personCache.getPersonCacheSet();
        Collection<CacheElement> personCacheSet1 = personCache.getPersonCacheSet();

        assertSame(personCacheSet0, personCacheSet1);
    }

    @Test
    public void testClearCache() {

        // fill the cache
        personCache.getPersonCacheSet();

        // The first call gets the result from the cache
        Collection<CacheElement> personCacheSet0 = personCache.getPersonCacheSet();

        personCache.clearPersonCache();

        // second call makes the expensive call
        Collection<CacheElement> personCacheSet1 = personCache.getPersonCacheSet();

        assertNotSame(personCacheSet0, personCacheSet1);
    }