Thursday, April 17, 2014

Project post mortem

I recently completed an engagement with one of the largest telecommunications companies in the world. We faced multiple hurdles, including, but not limited to, geography, time zones and infrastructure. Yet the project was an overwhelming success, and the client was very satisfied.

I thought I would jot down a few things that we did right, and also some things we could have done better, in the hope that these notes will help me do things right in the future.

Things we did right:

1. People:

I believe the right people can make or break a project. Of course I do, or I wouldn't be a consultant :) On this particular engagement I was lucky enough to be on a team of very talented and dedicated people. But what exactly made this group of talented and dedicated people succeed as a team? I thought I'd break it down further by roles.
  • Project Manager: We had a project manager who was engaged enough to be effective, but knew when to stay away to not interfere with the development process. 
    • I liked that there was one and only one project manager that I reported to. She attended the daily stand up, so I didn't need to waste any more time bringing her up to speed. (I have been on projects in the past where there were up to eight project managers, each of whom required a daily update from me!)
  • Product Owner: Our Product Owner sat with the development team, in the pen, with testers when they wrote their test scripts, and with the Usability Designer when the initial flow of the application was conceived. 
    • I liked that the product owner was able to break functionality into smaller user stories, and was brave enough to wait for future sprints until a full set of user stories (or an epic) was complete. 
  • Solution Architect: This was a person intimately familiar with the back end systems that needed to be accessed. 
    • He was heavily involved with the Usability Designer and the Product Owner when the product was conceived. Since he was so easily accessible, epics, usability flows and stories were completed quickly. Prioritization of stories was easy, because the solution architect knew exactly how complex a story could be.
  • Usability Designer: He was involved very early on, with the Product Owner and solution architect. He took several weeks to thrash out the flow of the application, finally coming out with a large flow diagram that he pasted on the walls of our meeting room. Each screen in the flow was represented by a single A4 sized paper. 
    • While initially I felt like I had entered John Nash's office from "A Beautiful Mind", seeing a physical representation of every single screen on a wall turned out to be very helpful, since it gave everyone a very clear idea of exactly what the application did. 
  • Delivery Architect (me): I was responsible for selecting the technologies we would use and for architecting the code base. I was also responsible for the developers and testers in India. I was based on site, in the same room as the Project Manager, Product Owner and Solution Architect. 
    • I liked that I could just lift my head up and talk to the Solution Architect, or the Product Owner, rather than send an email out and wait a few hours for a response. This was a huge benefit in terms of communication.
    • By being onsite, I think I provided a huge benefit to the client, because no one from the client had to deal with anyone in India. They spoke to me and only me, and I dealt with the off site team. Being a single point of contact streamlined communication for the client, while my developers were sheltered from questions and emails and could focus on coding.
  • Developers: I was lucky to have a developer on my team who quickly grasped what the product we were creating was about. 
    • He challenged me on my decisions, and offered alternatives of his own. I liked this, because it pushed me toward an architecture and a code base that I could honestly defend. Of course, concepts he suggested were also integrated when they turned out to be superior to my own.
  • Testers: Testers in India tested each user story before it was submitted for acceptance testing by the client.

2. Methodologies:

We followed SCRUM, with daily stand-ups and two-week sprints. Each sprint ended with a functional demo (to managers, product owners and other stakeholders) and a technical demo (to the solution architect, client developers and other technically minded attendees). In SCRUM parlance, committed team members are the "pigs" and interested stakeholders the "chickens"; our demo audiences were a mix of both. 
    • The functional demo dealt with user story functionality being met. 
    • The technical demo very often took the form of a peer review of the code.

3. Release Processes:

While we did not automate our release process as much as I would have liked (our release servers were not ready in time), I wrote custom scripts that automated releases as far as possible.

  • Environment-related files were deployed by a repository command. The code base contained one directory per environment, each holding that environment's configuration files. 
    • To update the properties on a server, one only needed to run a repository update.
  • A script took the location of the release as an argument, then installed it and restarted the server.
    • The server referred to the latest release through a symlink, so releases could easily be rolled back (although this was never needed) :) 
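The deployment script itself can be sketched in a few lines of shell. This is my illustrative reconstruction, not the actual project script: the directory layout, the tarball packaging and the function name are assumptions, and the server-restart step is omitted since it would be site-specific.

```shell
#!/bin/sh
# deploy_release: unpack a release tarball into a versioned directory
# and repoint a "current" symlink at it. The real script also restarted
# the server afterwards; that step is left out here.
deploy_release() {
    release="$1"      # path to the release tarball, e.g. myapp-1.4.2.tar.gz
    deploy_root="$2"  # directory that holds all unpacked releases

    name=$(basename "$release" .tar.gz)
    mkdir -p "$deploy_root/$name"
    tar -xzf "$release" -C "$deploy_root/$name"

    # -n replaces the symlink itself instead of descending into it, so the
    # switch is a single step; rolling back is just re-pointing "current"
    # at an older release directory.
    ln -sfn "$deploy_root/$name" "$deploy_root/current"
}
```

The server is configured to serve from `current`, so it never needs to know which concrete release directory it is actually running.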
Indian testers tested on one server, while client acceptance tests were performed on another. This allowed us to keep a more "stable" version that the client could see and that could be used for internal demos. It also gave the client a good "feeling", because they only ever saw stable code releases (i.e. after our Indian testers had found all the bugs they could).

4. Development Tools:

  • I used IntelliJ to develop in Java. As its motto promises, it is indeed a pleasure to develop with. It is very convenient to have database access and network sniffing in the same tool in which one edits and debugs code. 
  • Documentation was stored in a wiki. I cannot emphasize enough the importance of this. Apart from user stories, I also stored installation instructions, developer notes (to explain some esoteric code, for example) and release documentation. Wikis are preferable to Word docs because they are searchable, easily updated and available in a single location. 

5. Language:

Documentation was written in English. This was useful since the developers and testers were in India. I strongly support English documentation for all IT projects, especially in an era of "right-shoring". Even if development is done onshore, it is not uncommon to fly in developers from other countries to work on site.

Things that could be improved:

1. People:

Even though I said I lucked out with developers, this was not so initially. I went through several iterations of developers before I found the ones that worked. Several were obviously not interested in their jobs: they exaggerated their skills on their resumes, and their code quality was poor. They had not heard of Clean Code standards, and even after repeated critiques during peer reviews, they did not improve. Very often they would declare a task "done" when they had not even unit tested it. This was exasperating, especially when the developer was on the other side of the world.

How could I improve this?

  1. Interview rigorously. Do not take resumes at face value. Ask questions such as which IDE they prefer and why, which IT-related book they read last, or which libraries they would choose for collections, caching and so on. Developers should be passionate about IT. They should not be satisfied with delivering code that is below par.
  2. Ensure that teams are not spread across more than two locations. If there is an offshore team, all of its developers must sit together. Not only will they help each other with pair programming, setting up environments and so on; it also builds a sense of camaraderie.
  3. Communicate often and specifically. I held ten-minute "stand ups" on a communicator each morning. If any of the developers said they were not done with their task, after the meeting I specifically asked them why and how I could help. Sometimes junior developers are afraid to ask for help, and will keep working away at something when a little help from someone else would go a long way.
  4. Have developers from off site visit on site. While this is a basic tenet of agile, it is ignored all too often because of the costs involved. There is a huge increase in productivity when a developer is on site, in terms of communication, understanding requirements and peer development. It also gives the client a good "feeling" about the person developing their code, and gives the developer extra motivation to develop better, since he now has a face to put to the client.

2. Methodologies:

SCRUM worked very well in my opinion. If there is one thing I would change, it is to hold peer reviews more often. Holding a peer review only after a delivery meant that if someone proposed a better way to do something, the improvement had to wait for a subsequent sprint, and then only if I could convince the product owners that it was a high enough priority. With peer reviews held, or at least discussed, more often, the resulting changes would be smaller too.

3. Release Processes:

As I mentioned earlier, I was not 100% satisfied with our automation of test and release processes. Going forward I would like tests and releases to be fully automated, so that every night a build is deployed and checked with a suite of automated tests. I can see Selenium fitting in quite nicely with Jenkins here, but I have also been hearing good things about the Go framework from ThoughtWorks, which I want to look into.

I would like to have:
  1. Nightly builds.
  2. A series of automated tests run after every build, to ensure nothing broke.
  3. On a compilation error, an email notification to whoever last edited the checked-in file.
  4. On failing automated tests, an email notification to the whole team.
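The wish list above can be sketched as a small nightly shell script. This is a sketch of the idea, not actual project tooling: the build and test commands are placeholders, and the email notifications are reduced to log lines, since the real mail setup would be site-specific.

```shell
#!/bin/sh
# nightly: run the build, then the automated test suite, and report the
# outcome. Distinct return codes let a scheduler (cron, Jenkins) decide
# whom to notify.
nightly() {
    build_cmd="$1"   # e.g. "mvn -q package" (placeholder)
    test_cmd="$2"    # e.g. a Selenium suite runner (placeholder)

    if ! $build_cmd; then
        echo "BUILD FAILED: notify whoever last changed the offending file"
        return 1
    fi
    if ! $test_cmd; then
        echo "TESTS FAILED: notify the whole team"
        return 2
    fi
    echo "BUILD OK"
}
```

A cron entry running this in the small hours would cover the "nightly" part; a CI server like Jenkins adds build history, trends and email routing on top.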

I would like to start working toward smaller, more frequent releases. Some teams aim for a release after each sprint. While this may not always be feasible (for a brand-new product, for example), I would like to aim at releasing small pieces of new functionality at a time. 

4. Development Tools:

  • Although I LOVED IntelliJ, not every developer shared my affection for it. Some chose to use Eclipse. Of course this is a matter of taste and is perfectly acceptable, but in my opinion it is more effective to have one common IDE: installation, debugging, third-party plugins and so on are all simplified when the whole team uses the same tool.
  • I have already said wikis are wonderful, but in addition I would like to integrate other tools with them. Jira, for example, allows tight coupling with Confluence and Crucible, which I have found invaluable for tracking peer reviews. Such integration would make peer review a part of the development process of each task.
