March 11
…But that’s so much less efficient!


Once upon a time, before I became a Development Manager, I was a developer and, frankly, a technically pretty capable one. I was one of those guys that the other students on my university course relied on to do the team coursework.

I learnt then, and reinforced later in my development career, the habit of ensuring I did the work required of me as efficiently as possible. Basically, it meant that I got to go to the Student Union bar earlier. Then later in my career, slightly less selfishly, it meant that I got more work done, the customer got better value for money, and the customer was therefore happier. This makes perfect sense, and it's the argument I've had played back to me again and again ever since by other developers, testers, business analysts, project managers and pretty much any other role in the development industry. The problem is that I was wrong.

I was wrong on pretty much every level and I'm going to split up the above statements to point them all out.

"I did the work required of me as efficiently as possible"

Once I'd sorted out the obvious efficiencies in my tools and working environment, like IDE shortcuts, templates and code generation, the major efficiency I could gain was by grouping related work. For example, work that was in the same area of the code base, or needed the same tools, or other sensible reasons like that. Looking back I can see that I recognised, correctly, that context switching was expensive and that I should avoid it. I know this served me well because I got to the student bar most nights. This is a lesson that most developers seem to understand intuitively.

So why is this part of the statement wrong? It's wrong because while I was making myself more efficient at producing code, that wasn't actually the work required of me. What was required of me was producing something of value for a customer. At university I may have been right, because it really was more about producing code than producing value, so perhaps I can blame the education system for this! However, the process of producing something valuable for a customer involves an awful lot more than just writing code. Essentially, I became fantastic at optimising locally at the expense of optimising the process as a whole. Did I do my work efficiently? Absolutely. Unfortunately, in doing so, I was making the rest of the process less efficient. Read on to find out why!

"... it meant that I got more work done"

Don't get me wrong, I was definitely producing more code, but the meaning of "more work" in this context is so much wider than that.

Now this is moving more firmly to where my interests lie these days in the overall development management space. The process of producing value overall varies from company to company based on what the responsibilities of the company are to the customer. Let's look at two common scenarios.

Scenario 1 is a development shop which is contracted by a customer to run a project to produce a product. There are obviously a number of options and variations here but fundamentally the process as a whole we want to optimise includes not only the development process (development, testing, deployment, documenting etc.) but also the process of ensuring that we are producing the right product for the customer. This means that the requirements gathering, user acceptance and feedback processes are also all part of the whole.

The problem with grouping or batching up the work required of me as a developer is that although I had reduced the overall time it would take for me to do the development, I was taking a lot longer to produce any single feature. This meant that the testers were waiting longer to get my code and ultimately the customer was waiting longer to see the next feature. As a result the customer didn't get to provide feedback as often, and when they did, it was more extreme in both volume of change and reaction. A lot would have changed in the extended timeframe and the functionality delivered could be a long way from their vision, irrespective of the requirements they'd provided me. Another side effect of the delayed feedback was that there was significantly less flexibility to make changes, because there wasn't the time left in the overall schedule to do it.

Scenario 2 is an IT department which is responsible for development and support of the products it develops. At face value this is very similar to Scenario 1 when considering the process. The significant difference is the need to support the product. Even when only level 3 support (i.e. only defect investigation and fixing) is provided, if the quality of the product produced by the development team isn't incredibly high, then failure demand will start to impact team performance. This may reduce the rate at which the department can produce value in the form of new software development, or, worse, it will degrade the quality of future products because corners are cut to keep the value production rate up. The second option is worse because it snowballs very quickly, to the point where the department is forever investigating and fixing defects and never producing anything of value anymore.

The point of covering the two scenarios above is to demonstrate how far-reaching the overall process can be and the number of variables that form part of the problem. The process we need to optimise can be extensive, and failing to consider a factor from that overall process when locally optimising can have a huge impact on the whole.

I now understand why optimising my efficiency as a developer in the form of batching work was negatively impacting the overall process of producing value for the customer. This is essentially one of the core arguments for optimising on cycle-time or throughput, for the whole process, ahead of optimising for utilisation of individuals.
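
To put some (entirely invented) numbers on that, here is a toy sketch comparing when each feature first reaches a tester or customer if I batch five two-day features together, versus finishing and handing over one feature at a time:

```javascript
// Toy model: five features, each taking two days of development effort.
const features = 5;
const daysPerFeature = 2;

// Single-piece flow: each feature is handed over as soon as it is done.
const singlePiece = Array.from({ length: features }, (_, i) => (i + 1) * daysPerFeature);

// Batched: all five are grouped "efficiently" and handed over together at the end.
const batched = Array.from({ length: features }, () => features * daysPerFeature);

const average = (xs) => xs.reduce((a, b) => a + b, 0) / xs.length;

console.log(singlePiece); // [2, 4, 6, 8, 10] - first feedback on day 2
console.log(batched);     // [10, 10, 10, 10, 10] - first feedback on day 10
console.log(average(singlePiece), average(batched)); // average wait: 6 days vs 10 days
```

The total development effort is identical in both cases; batching only changes how long everyone downstream waits for something to test and react to.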

"…. the customer got better value for money"

Looking at everything we've covered so far, I'm sure you can see that while I was definitely the most efficient cogwheel in the engine, I wasn't personally generating better value for the customer. In fact, because I was so personally efficient, the engine as a whole was producing worse value for money for the customer.

"… the customer was therefore happier"

And finally, over many customer engagements, whether they've been internal to my organisation at the time or not, I realised that in the end the customer doesn't really care about value for money, at least not once they've chosen me as the provider and concluded the commercial negotiations. They've already decided how much they're willing to pay for the service of producing what they've asked for. Once the process has started, what they really care about in my experience is seeing their vision realised and being involved in the realisation of it. So that's what I now optimise for - producing happy customers.

March 05
Equinox IT team members contribute to Charity IT weekend
Ben Hughes, Hana Pearson-Coats and Sean Tomlins of Equinox IT participate in Charity IT weekend

While the rest of us enjoyed the Wellington sun this last weekend, three Equinox IT team members gave up their weekend to join a group of 40 volunteers working on IT projects for charity organisations.

The event was the third hackathon organised by Charity IT, a New Zealand-based volunteer group that brings together charity organisations and IT professionals to use IT to overcome the challenges those organisations are facing.

Over the weekend Charity IT worked with four charities, building a web presence for Nature Through Arts Collective, an automated payroll system for Able Pet Care, an alumni sign up system for Big Brothers Big Sisters, and a more robust website back end for Kiwi Community Assistance.

The team from Equinox IT included Ben Hughes (systems analyst and agile coach), Hana Pearson-Coats (systems analyst) and Sean Tomlins (development specialist). Ben and Hana played central roles as organisers. Ben helped facilitate the whole event, allocating and managing teams, moving resources, motivating, and keeping activities on track. Hana met with charities to understand their requirements and championed one of the projects to successful completion. Sean played a key delivery role in a project team for one of the charities.

“It was insanely fun to work on something so worthy with such an inspiring group of people” said an enthusiastic yet tired Hana Pearson-Coats on Monday morning as she stepped straight back into her busy day job.

The next Charity IT hackathon will be held in Christchurch in May.
March 04
What would life look like if I didn't have glasses? - The importance of accessibility

I recently attended a workshop by Derek Featherstone on accessibility as part of the Webstock event held in Wellington. It was a huge eye-opener: it made me realise what some people have to put up with because the rest of us aren't aware of the importance of accessibility.

Featherstone referenced a mobile phone conference, where someone who was visually impaired was asked what features they want for a mobile phone.

Their response – “a dial tone”.

If you're like me then the first thing that would cross your mind is "What's a dial tone?". We've been using mobile phones for so long now that the dial tone has died and most of us didn't even notice. For a lot of us it would never cross our minds that a dial tone would be something someone would want the most.

It made me think... what if I was someone who had issues seeing properly?

Then I realised... I am.

However, I'm one of the lucky ones who has an easy solution to my vision impairment. But what if I wasn't? What if I had to live my life without glasses? What if everyone who needed glasses or contacts had to live without them?

Writing this blog I had to change the font to size 28 before I could read it clearly without my glasses. If everyone were to navigate the internet with a font size of 28, would that affect how you design your webpage?

I would like to challenge you to try the straw test. Clench your fist as though you are holding a straw. Close one eye and look through the gap in your fist with your open eye. This is what life would look like if you have low vision. Imagine filling out a form online with low vision. What could you do to make things easier for a user with low vision?

Accessibility isn't a high priority for most IT projects these days but I really hope that this changes soon.

I'm really thankful for the opportunity to attend Derek's workshop. It exposed me to a whole new world I was too ignorant to realise existed. It really challenges me to think of all the small things we take for granted that would make a huge difference to someone else.

I hope you all join me on the journey to making the web more accessible for everyone. 

Image: Ross McElwain of Equinox IT practicing the accessibility straw test.

February 17
Do you need to learn Scrum?

If you read Ray Cooke’s earlier blog post Should you move your team to Scrum? you will come away with the following insights:
  • Agile software development is becoming widely adopted as a mainstream approach to developing software, as confirmed by Gartner
  • Scrum and its variants are by far the most widely used approaches to agile software development (75%), and thus Scrum is becoming mainstream
  • While we would never suggest that organisations become zealots sticking to one single methodology (we certainly haven’t), there is value in starting with one common approach such as Scrum to get everyone on the same page, and then once you get more mature evolve from there
  • Training is a key way to learn the fundamental principles and practices of Scrum, and scale the use of Scrum across new teams. We also acknowledge that training is only one part of learning, and that the real world learning happens on the project.
The purpose of this post is to extend Ray’s thinking, and focus the conversation on what this means for software project professionals in New Zealand, who find themselves working in a world that is increasingly dominated by Scrum projects.

Firstly, what is Scrum?
At its simplest, Scrum is an iterative framework and a set of practices for managing Agile software development projects. Its increasing popularity comes from its lightweight practices for managing collaborative teams, regular deliverables and changing requirements, and the use of these practices to deliver real value back to organisations.

In Scrum the ScrumMaster is responsible for ensuring that the process is followed correctly and for removing any blockages stopping the team from achieving its goals. The Product Owner is the voice of the customer, responsible for ensuring that the team delivers value to the business. The development team are self-organising and are responsible for delivering the product within multiple sprints (or iterations).

Why should you care about Scrum?
Our Equinox IT boss Roger tells stories of his days as a systems operator on Burroughs mainframes from a time when there were only a handful of mainframes in the whole of New Zealand. Roger’s Burroughs mainframe skills aren’t in high demand anymore, but luckily for him he has acquired new skills that make him highly relevant to our business today.

Over time fewer and fewer projects will follow outdated waterfall and RUP approaches (just as Roger found that, over time, less computing was done on Burroughs mainframes). Given the growth in Scrum, sometime soon you will most likely be tapped on the shoulder to join a Scrum project team, or to scout out the potential benefits of using Scrum on a project you are working on.

The 2013 State of Agile Survey referenced in Ray’s post showed that product owners, quality assurance and business analysts know the least about agile software development, while ScrumMasters, project managers, development managers and developers know more. As Scrum becomes mainstream and organisations need to scale the approach across multiple projects, then to be relevant you will need to build an awareness and knowledge about the Scrum approach. While early adopters have been developers and some project managers, we now need to see this knowledge propagate through to business representatives (product owners), analysts and testers and anyone else who touches software development projects.

If you are in any way engaging with software development activities in your organisation, and you don’t understand Scrum, then there is a compelling need to learn about this approach.

How do you learn Scrum?
A useful way to learn about Scrum is to start by understanding the key principles, practices and terminology. You need to be able to understand the core concepts and talk the same language to work effectively as part of a team. A training course can be a great way of learning these fundamentals.

Then see if you can get yourself onto a Scrum project and get the real world learning of how to make Scrum work in practice. Both parts (learning the concepts and getting the experience) are essential. You are not going to be an effective team member working on a Scrum project if you haven’t got the understanding of the fundamentals and can’t talk the same language. It would be like driving on the road without knowing the road code. Likewise, someone who has undertaken Scrum training, but never worked on a Scrum project is like a day one driver who knows the road code; they know some stuff but have no practical skills.

Start learning Scrum now and be relevant in a world where Scrum is becoming commonplace.
February 16
Should you move your team to Scrum?

What is 'Agile' and why should you care?
I’ve been a Software Development Manager in one form or another for many years and I definitely wouldn’t consider myself an early adopter when it comes to 'Agile' techniques. With hindsight, this isn’t necessarily a bad thing. Frankly, my teams were delivering software whether I was using a visual board or not and whether I had a Scrum Master on the team or not; not necessarily as efficiently as they could, but we were still getting work done.

I first came across 'Agile' when I was searching for ways to resolve some problems I was having with project delivery through my team. Considering how much effort we were putting into our work (we were flat out and putting in plenty of overtime), we didn’t seem to be making much headway. To compound the problem, I was getting complaints from stakeholders about excessively large estimates, late delivery and production defects. Fundamentally, failure demand was killing us. So my first look at agile techniques was through the lens of improving quality and getting the defect count down.

Like anything new it takes a while to work out why it’s useful, where it fits in with what’s in place already and to iron out the kinks. Agile techniques have now had that time – we’ve tried lots of experiments and learnt lots of lessons and so there’s a much wider and clearer body of work we can use and apply, and, most importantly for me, not just in green-field companies and projects.

Is there any evidence?
According to Gartner, project level agile software development has been through much of the hype cycle. You may remember, or have experienced, the over-stated hype about agile from early adopters a few years ago, with a shortage of tangible value shown, followed by general disillusionment when the initial reality didn’t live up to the hype. Now that it’s had some time to develop, Agile is out of the trough of disillusionment and is climbing the slope of enlightenment, where it is becoming widely adopted as a mainstream approach to developing software.

Agile software development on the Gartner hype cycle
ShouldyoumoveyourteamtoScrum.png
As it becomes mainstream, agile is no longer isolated to one or two projects within organisations. The 2013 State of Agile Survey, conducted by VersionOne (the most recent version available), shows that agile is being used on more and more projects within organisations in the US and Europe (43% of responses indicated that agile is used on 10 or more projects, up 13% from the previous year).

Why Scrum?
I’ve often found that picking and choosing what works best for the situation I’m in at the time works better than wholesale adoption of a methodology, especially in an existing company. That being said, when adopting new methodologies it is key to ensure that everyone has a common understanding of what you’re trying to achieve and how and when to apply it. It’s very helpful, therefore, to start from a common base. Choosing one with widespread adoption and buy-in, such as Scrum, means that your team may well already have some familiarity with the principles, and there is plenty of material and training available, both freely and from experienced practitioners, to help pick it up.

Scrum and its variants continue to be the most commonly used approaches to agile software development. In the 2013 State of Agile Survey, 75% of the respondents indicated that their organisations use Scrum, Scrum with XP or Scrumban. Even at Equinox IT, while we don’t follow pure Scrum, we have adopted many Scrum practices in our approach, which most closely aligns with Scrumban (a mixture of Scrum and Kanban).

Learning Scrum
Again referring to the 2013 State of Agile Survey, organisations that had successfully scaled agile beyond a single team identified executive sponsorship as the biggest success factor, followed closely by training.

We at Equinox IT agree that training is important to really learning the principles, practices and shared language of Scrum. It’s like learning the road code for driving – without it you can’t get on the road, but once you’re on the road, the learning and the progress really start. Oh, and obviously not knowing the basic rule set of the road before joining all the other drivers doesn’t make for accident-free journeys!

As with learning to drive though, training is clearly not the full story. Understanding the road code doesn’t make you a good driver. You just have the knowledge, the fundamental knowledge, to start playing the game, and playing the game is where the real world ‘how do I make this work in practice’ learning will begin to happen. Doing so with a continuous improvement mindset will ensure you and your organisation get better and better over time, to the point where this becomes an important competency that delivers success.

December 19
Can you apply Lean principles to Scrum?
This was another question from our recent webinar Learning the hard parts of agile software development.

Our opinion is that you can absolutely apply Lean principles to Scrum and most other agile approaches. Lean and Agile are two mindsets, but they are often seen as overlapping.

Lean and Scrum together
An agile mindset is about change management and adapting to that change, with ideas around quick delivery cycles. Lean is about minimising waste and doing the least possible effort to get the required result and the required quality. The two work hand-in-hand quite well.

In terms of applying Lean to Scrum it is about identifying the areas of potential waste. For example, doing the minimum amount of design upfront, to identify the smallest amount of work you need to do for each item, before entering into a Scrum sprint. This then continues into other areas – the right amount of analysis, the right amount of development, the right amount of testing.

Beware not to use this as an excuse for not doing things properly. Lean also places a strong focus on quality and minimising defect rates, as rework is another form of waste. So there is a need to identify the required level of functionality, quality and consistency, and then do the right amount of work to deliver that and nothing more.

How do we apply Lean principles to Scrum at Equinox IT?
At Equinox IT we do not use pure Scrum, but there are a lot of good ideas and practices in Scrum and we do use many of these, including planning meetings, daily stand-ups, cross-functional teams, co-location, retrospectives and review meetings.

In theory, if you follow Scrum properly, you can’t alter it. This purist approach didn’t work for the pragmatic culture at Equinox IT, where we have always been focused on right-sizing approaches to our needs, our clients’ needs and the realities of New Zealand project conditions. In this way we do follow an approach of Scrumbut (as in “we follow Scrum… but”). While Scrumbut can have a negative connotation, done well we believe working this way has many merits. Because we also enhance Scrum with Kanban, Scrumban could also be used to describe the flavour of our approach.

One of the reasons we don’t follow pure Scrum is that it was not a perfect fit for the nature of our work. One of the great benefits of Scrum is the ability to release at the end of sprints; this works well when each sprint has a real deadline, but otherwise the sprint is an artificial timebox. For the client work that we do, delivering after each sprint doesn’t really mean a great deal to our clients. Clients often expect a single release on a required date, so for this situation we find that Kanban works well. In Kanban we work to a realistic timeline, measure our progress and make projections, yet we still work in tight iterations and small batches.

We also subscribe to the Lean principle of continuous improvement. This is hard to achieve with the prescriptive approach of pure Scrum. We solve our own problems by trying stuff, keeping and tuning what works and discarding what does not. This means that, over time, we naturally evolve away from any one pure method towards an overall approach that we have validated as working for Equinox IT, our clients, and our combined circumstances and needs.

December 19
Working on multiple agile projects and swarming tasks
During our webinar last week, Learning the hard parts of agile software development, we were asked a number of questions. Two of these are included below with responses:

How do we engage team members to swarm tasks on agile projects?
This can be tricky when people have a particular specialty and identify themselves in that role, and they are required to swarm on another activity which is outside of their core function. Take a developer for example, who may be required to test if a testing activity requires swarming.

What we find is that project teams often have some members who are interested in many aspects of software development and are naturally inclined to swarm tasks outside of their specialty. These people make great exemplars of the benefits of cross-functional teams, and this can be highlighted to the wider team. During stand-up meetings, or when walking the Kanban board, the benefits of swarming and the contribution of people who do it well can be demonstrated. The results speak for themselves when blocks that have been inhibiting the flow of work are cleared by swarming.

It is a sensitive topic to work through, and you shouldn’t force it, but rather show the benefit of swarming for the process that you are following and the project as a whole.

Can people effectively work on multiple agile projects concurrently?
Scrum projects have a ScrumMaster, who may be a floating resource across multiple projects. But following Scrum, the development team (core developers, testers etc.) is generally expected to stay on one project. If all members of the team are spread across multiple projects then this can be challenging.

One problem is the task switching costs of people moving from one project to another. This creates significant churn and waste which adds up across the project team as a whole and becomes a big overhead.

Another problem, again using the example of Scrum, is looking ahead and planning the work that will be delivered in a sprint and being able to commit to that when resourcing is variable. If the whole team is working on multiple projects with variable allocations and unknown availability, then realistic sprint planning cannot happen. This makes running agile projects well very tricky.

While it may be attractive to spread resources, we believe that it is not that compatible with lean and agile approaches, and raises warning signs relating to the potential for waste and increased costs.

December 19
What is the better method – Kanban or Scrum?
We were recently asked this question by a participant on the Learning the hard parts of agile software development webinar. There is no right answer to this question, as it depends on the team dynamic that you have and the projects that you are working on. Kanban and Scrum are actually tackling quite different problems.

Scrum is more about team creation and how you approach work. It involves chunking work into sprints of between one and four weeks, working through them iteratively, and producing a shippable product at the end of each sprint.

The Kanban method is based on the concept of a continuous flow of work delivering constant value, which contrasts with Scrum delivering chunks of work in sprints. As such, Kanban is similar to Scrum but with an infinite sprint length. Many people see Kanban as the Kanban board, which is just one part of the method. The board puts a visualisation on your current process, and is used to check and analyse how your process is going and to identify pain points.
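
As a minimal illustration of the board-as-visualisation idea (the columns and story names below are invented for the example), a board is really just named columns of work items, whose relative sizes make pain points visible:

```javascript
// A Kanban board reduced to its essence: named columns holding work items.
const board = {
  "To do": ["story-7", "story-8", "story-9"],
  "Dev":   ["story-4", "story-5"],
  "Test":  ["story-1", "story-2", "story-3", "story-6"], // work piling up here
  "Done":  [],
};

// The visualisation step: make it obvious where work is accumulating.
for (const [column, items] of Object.entries(board)) {
  console.log(column.padEnd(6), "#".repeat(items.length), `(${items.length})`);
}
// A persistently long "Test" column is exactly the kind of pain point the board exposes.
```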

Depending on the project you are working on, Kanban or Scrum may be more suitable. If you are working on a project with a larger time frame, say six to eight months, and you expect it to deliver at that point, then Kanban may be the better choice. If you need to deliver every couple of weeks, then the sprint nature of Scrum may be more suitable.

Adopting Scrum is a larger change activity for your team than adopting Kanban, and this may also be a consideration when moving to an agile approach. The team will need to become familiar with a completely new way of working, and then use it well enough to deliver project results. Kanban, on the other hand, you can start using today, on top of your existing approach, with no further change.

Note also that Kanban and Scrum can work in tandem, enhancing the effectiveness of each other and this may be a topic we can explore in a future blog post.

Further ideas on Kanban and Scrum are also explained in our blog post Can you apply Lean principles to Scrum?
December 17
‘Learning is king’ for agile software development projects
Delivering excellent business results in a world of accelerating change requires smart people and teams who adapt and continuously improve. In this world ‘learning’ becomes king. Individuals and teams that are the best at ‘learning’ will be the most successful today and into the future.

Last week I facilitated a webinar entitled ‘Learning the hard parts of agile software development’ with three members of Equinox IT’s software development teams – Deane Sloan (Software Development Director), Ben Hughes (Systems Analyst) and Hana Pearson-Coats (Systems Analyst). Our software development teams have achieved excellent results from learning, adopting and applying agile and lean software development approaches.

61% of us learn agile best by doing
By learning I am talking very broadly, certainly much more broadly than attendance on a formal training course. The 70:20:10 model of learning suggests that 70 percent of learning occurs while doing the work, 20 percent comes from others, and only 10 percent comes from formal training courses.

During the webinar we polled participants on which approach they found ‘most’ useful to learn how to make agile work in practice. Fairly consistent with the 70:20:10 model, our results showed that:
  • 61% of respondents selected ‘learning by doing by working on agile projects’
  • 25% selected ‘shared learning from coaches, practitioners, managers and teams’
  • 7% selected ‘thought-leader content (books, blogs, podcasts, videos etc)’
  • 7% selected ‘formal training courses and conferences’

This is also consistent with what we see in the market and with our clients. We have found that learning agile and lean software development has many layers. Attending a one or two-day course or getting a Scrum Certification is important to understand the principles and an overview of practices. But this is really just the top layer of learning. Much harder is the learning required to make agile work in practice, the learning required to embed the changes for the long term, the learning required to engage with the business in a different way, the learning required to persevere in the face of resistance, and the learning required to foster a climate of continuous improvement. This hard learning doesn't occur in a classroom environment, but by experimenting, doing and working with those who have done it before.

Our tips for learning the hard parts of agile software development
During the webinar our team members provided a number of tips for learning the hard parts of agile software development. You can see the full set of advice by watching the Learning the hard parts of agile software development webinar. I summarise a few of the ideas discussed during the webinar here:
  • Context is really important – while the formal training component may only be 10% of the learning journey, it is still very important to learn the terminology, concepts and mindset.
  • Allow time to learn and acknowledge that agile and lean approaches will not be highly productive initially while the team is still learning.
  • Work with people who have successfully used agile and lean approaches before, stick to them, do what they are doing. If something is hard, keep on doing it until you get good at it.
  • Get people new to agile working on small tasks where they can start contributing in an environment where it is safe for them to learn. These team members can then be ratcheted up onto more comprehensive tasks from there.
  • Use facilities, such as Kanban boards, to visualise the flow of work so that the approach the team is following is visible, accessible and can be understood and learned.
  • Do your own research and take an experimental and iterative approach. By doing this you solve your own problems by learning what works and what does not, and can then apply this learning to do more of what works and less of what does not.
  • Be conscious of Lewin’s heuristic ‘behaviour is a function of people and their environment’. Learning can often best be enabled not by changing the people, but by changing the environment (co-location, cross-functional teams, embedding coaches in the team and so on).

For us learning has been fundamental to our successful adoption of agile and lean software development approaches. Even though we now are mature in our use of agile and lean, learning still plays a vital role as we seek to continuously improve and as we bring on new members to the team. In the rapidly changing world that we live in I truly believe that individuals and teams who are the best at learning will be the most successful. If learning is king then we all need to get better at learning and hopefully this post has provided some useful pointers to help you along the way.

September 19
5 models to help your New Zealand organisation become a digital superstar

In a rapidly changing and increasingly digital world you and your organisation need models and frameworks to help you succeed.

Equinox IT is the only New Zealand-owned IT consultancy that sponsors the MIT Sloan Center for Information Systems Research (CISR). In June I had the good fortune to represent Equinox IT at the CISR summer session in Boston, USA. The summer session had the theme of ‘Generating Business Value from Digitization’. During the session I heard the latest MIT CISR industry-based research findings from leading IT thinkers such as Peter Weill, Jeanne Ross and Erik Brynjolfsson. Topics covered included total digitisation, the future of IT, business architecture, big data, business analytics and the relationship between business and IT.

In this post I summarise 5 key MIT CISR models that were presented at the session that I believe are highly relevant to New Zealand organisations looking to successfully operate in today’s digital economy. Digitalisation, from an IT perspective, refers to the creation of new resources and enriching traditional resources, making these available digitally to invited audiences.

1. Architectural maturity
To operate successfully you need a digitised platform that fits your organisation. The architectural maturity model is based on four stages of development: business silos, standardised technology, optimised core and business modularity. In ‘business silos’, the organisational focus is on point or localised solutions that offer immediate business opportunity. ‘Standardised technology’, as its name implies, standardises IT across the organisation to reduce cost and risk. ‘Optimised core’ uses an integrated platform across the organisation to support enterprise-wide priorities. ‘Business modularity’ optimises the digitised platform to achieve operational and strategic excellence across the organisation.

The scope of change that delivers business value for your organisation will depend on where you are in your architectural maturity journey.

2. Operating model
Understanding your operating model will help you identify the kind of organisational change that will deliver value, and the digital infrastructure needed to support that change. The operating model is determined by assessing the organisational levels of business process integration and business process standardisation.

A ‘Diversified’ operating model has low integration and low standardisation, such as the independent business units operating within organisations like General Electric. A ‘Replication’ operating model has low integration with high standardisation, such as McDonald’s family restaurants. A ‘Co-ordination’ operating model has high integration with low standardisation, such as banks, where credit cards may operate differently from retail banking, but they tightly share information. A ‘Unification’ operating model has high integration and high standardisation, such as international parcel delivery, where standard approaches and shared information are fundamental to getting the item across the globe.

No one operating model is ‘better’ than another; the way the business operates needs to work for your organisation. Understanding your operating model is important to help gauge what kind of change will add value, and what kind of digital infrastructure you will need to support that change, now and in the future.

3. Complexity
Complexity can be broken down into two forms: how complex the processes you use are, and how complex the products and services you offer are. In general, process complexity can reduce value, whereas product complexity can add value.

Understanding your complexity will help you increase the benefits that your products or services add to customers and minimise the bad customer experience of dealing with your organisation because the process is too hard. Digitising your organisation can facilitate both of these areas, offering options to enhance your products and services and streamlining processes.

4. IT value cycle
To deliver maximum value in a digital world, you need to understand the IT value cycle. MIT research shows that getting more business value requires a change to the IT value cycle, extending standard commit, build and run steps to also include an exploit step to gain more value. In this context, exploiting can be thought of as optimising, to gain maximum value from your digitised platform.

In our rapidly accelerating world, the cycle of change is becoming shorter and your organisation needs to both commit to the right change, and operate and exploit what you have today for greater value tomorrow.

5. Value from data
Getting value from data in a digital world can improve decision making, building business capabilities and embedding business intelligence within operational decision making.

It is important to understand that gaining value from data requires focus on purpose, analysing data, generating insight, acting on insight and making sure that data provides value.

Summary
Joining all of these models together – if you profile your organisation (or business area) using the first three models:

  • architectural maturity
  • operating model(s)
  • complexity

...then you can do a better job of understanding what change you should commit to for maximum future benefit. You can also identify how you can operate and, moving through to the last two models, exploit what you have to get maximum value and leverage data to inform your operational and strategic decision making.

Successful change is change that works with the way your organisation operates, and supports the strategic direction of the organisation. Applying these models can help you to identify and deliver successful change within the organisation you have now, and build the organisation you want.

All five of these models come from the great research performed by the MIT CISR, and you can find out more details about them on the MIT CISR website. In my 'Creating organisational success in a digitised world’ webinar I also discussed the models further and brought all 5 together into a framework for thinking about change, digitisation and value within your organisation. Becoming a digital superstar is about understanding where your business resides in each of these models and using IT as an asset to maximise your business value through digitalisation.



September 12
Overcoming the challenges to successful cloud adoption in NZ
In this post I put forward ideas to help your organisation mature its approach to cloud adoption. The post covers some of the key problems that New Zealand organisations face when adopting cloud services, and five pragmatic tips to help you be more successful. If you want more detail, I also elaborate on this topic in my recorded webinar Moving your organisation to the cloud.

Cloud adoption is harder than you think

Cloud services have many benefits for organisations, and based on overseas trends we expect to see the adoption of cloud services accelerate in New Zealand. Wherever your organisation is today, effective cloud adoption and management will increasingly become pivotal to your future success.

Unfortunately many organisations underestimate the effort required for secure and effective cloud adoption. Organisations may also face resistance to change from roles impacted by new ways of working. Furthermore, many IT teams are being side-lined as business units procure services they need directly, without the perceived IT bureaucracy.

It is clear that new skills are required as IT’s role transitions from service delivery to service brokering. As cloud becomes more predominant, we need to start preparing ourselves to better leverage the full benefits that this powerful set of services has to offer.

Top 5 problems and solutions New Zealand organisations face when adopting cloud

Problem 1: Poor understanding of enterprise data – information is one of your most important assets, but how well do you know your information? Organisations that do not understand the nature and value of their information will find it nearly impossible to determine the best way to manage that information in the cloud.

Solution: Classify and categorise your data – you need to baseline an understanding of the information within your organisation. This means uncovering what data is used, and how it is used by systems, processes and people. Analysing your information in this way will allow you to categorise your data and form a view on the appropriate level of information security.
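
To make that concrete, a first pass at such a baseline can be as simple as a register of datasets, where each is used, and a classification. The entries below are invented for illustration:

```javascript
// Illustrative sketch: a first-pass data classification register (all entries invented).
const dataRegister = [
  { dataset: "Customer contact details", usedBy: ["CRM", "Billing"], classification: "Confidential" },
  { dataset: "Published price list",     usedBy: ["Website"],        classification: "Public" },
  { dataset: "Payroll records",          usedBy: ["HR system"],      classification: "Restricted" },
];

// Grouping by classification gives a view of what any candidate cloud service must protect.
const byClassification = {};
for (const entry of dataRegister) {
  (byClassification[entry.classification] ??= []).push(entry.dataset);
}
console.log(byClassification);
```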

Problem 2: Uncertain data sovereignty rules – most cloud services are not located in New Zealand and as a result there is uncertainty around the legal requirements and implications of storing data with offshore cloud service providers. This has been further complicated by the US National Security Agency and UK Government Communications Headquarters accessing data that is stored with cloud providers who host in their respective countries.

Solution: Seek legal counsel for offshore services – legal advice will help you navigate the tricky area of data sovereignty, particularly if your information relates to government, financial or sensitive topics. It should also give you a better handle on whether your information is likely to attract attention from host nation intelligence agencies, and the implications of this.

Problem 3: Inadequate guidance and advice – adopting cloud services can be uncharted territory for many organisations, and cloud vendor assistance can be biased towards their solution rather than the broader considerations and implications. There are risks with cloud services, and the process of adoption absolutely requires an ‘eyes open’ approach.

Solution: Understand what cloud adoption entails – building a repeatable process and checklist for adopting cloud will be valuable, and will help you identify and mitigate important risks associated with cloud services. There are various resources that may help with this, including the New Zealand Cloud Computing Code of Practice, the ENISA Cloud Computing Information Assurance Framework, the CSA STAR Assessment Framework and the New Zealand Government Cloud Computing Considerations paper.

Problem 4: Difficulty applying security controls – everyone wants cloud services to be secure, but what are the appropriate security controls for your information and your particular cloud service implementation and circumstances?

Solution: Take a risk-based approach to information security – since you can’t apply every information security control to cloud services, I recommend taking a risk-based approach. This will allow you to assess the actual risks within the context of the cloud service and the needs of your organisation, helping you identify and implement the most appropriate security controls.

Problem 5: Outdated procurement and governance – procurement approaches centred on large up-front spend and predictable support and maintenance costs do not translate well to cloud adoption, where costs are based on service provision.

Solution: Update your procurement processes – perform a gap analysis between your existing procurement and governance processes and what is required for effective cloud service procurement. This should allow you to identify the changes necessary to update your procurement processes to suit your organisation’s needs today.

During my Moving your organisation to the cloud webinar we polled the New Zealand attendees to identify which of these 5 problems was most significant for them. As you can see below the results were quite evenly spread, with security and data being the leading considerations:

  • Poor understanding of enterprise data: 24%
  • Uncertain data sovereignty rules: 21%
  • Inadequate guidance and advice: 19%
  • Difficulty applying security controls: 30%
  • Outdated procurement and governance: 6%

With cloud adoption in New Zealand set to accelerate, your organisation needs to prepare itself to become better at adopting and managing cloud services. Cloud service adoption is not as simple as cloud vendors would have you believe, and there are many tricky ‘gotchas’ once you look beyond the vendor marketing. This post is a good start in preparing yourself for successful cloud adoption. The information presented here is also covered in more detail in my recorded webinar Moving your organisation to the cloud. If you need specific advice, you can always get hold of me by contacting Equinox IT.

Equinox IT was instrumental in setting up the New Zealand Cloud Computing Code of Practice, along with the IITP and other organisations such as Xero. We are now a signatory to the CloudCode.





September 12
Integrating business intelligence tools into your application
I’ve blogged several times over the years on various aspects of how we use Business Intelligence tools to visualise the mountains of data we accumulate in the course of our Performance Intelligence practice assignments. Even the practice name of “Performance Intelligence” reflects the vital role that such tools and techniques play in deriving insights from all of that data to allow us to get to the bottom of the really hard system performance problems.

Our data visualisation tool of choice is Tableau, because it connects to pretty much any type of data, offers a very wide range of visualisation types and can handle huge volumes of data remarkably quickly when used well. But up until now we have always treated Tableau as a standalone tool sitting alongside whichever performance testing or metrics collection tools we are using on a performance assignment. That works fine – but it does mean that the analysis and visualisation doesn’t form an integral part of our workflow in these other tools. There are lots of opportunities to streamline the workflow, allowing interactive exploration of test results data – drilling down from high-level summaries showing the impact to low-level detail giving strong clues about the cause of issues. If only we could carry context from the performance testing tool to the visualisation tool.

We have recently been working to address that, making use of an integration API which Tableau introduced with their last major release. First cab off the rank for us was integrating the visualisations into the Microsoft Visual Studio development environment, since it provides one of the performance testing tools which we use extensively in our practice, and it offers the extensibility points necessary to achieve tight integration.

But whilst the integration is conceptually straightforward (we just want the visualisations to form a seamless part of the experience of using the tool, and to know the context, such as what the latest test run is), actually making it work seamlessly and perform well required careful design and significant software development skills.

The Tableau API uses an asynchronous programming model - JavaScript Promises (so called because when you call an asynchronous method the response you get back is not the answer you are after but a “promise” to return it in due course). Using this asynchronous model allows the client-side behaviour to remain responsive whilst potentially long-running requests involving millions of rows of data are handled on the server. Putting a simplistic proof-of-concept together was within my powers, but actually achieving tight integration in a robust and well performing way definitely needed the professionals. So I’m very glad that we had the services of our Business Application and Product Development business available to do the integration work.
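
To give a feel for that model, here is a rough sketch of consuming the promise-based Tableau JavaScript API (the method names are Tableau's; the view URL, parameter name and page structure are placeholders for illustration):

```javascript
// Sketch only: assumes the Tableau JavaScript API script is loaded on the page
// and that a <div id="vizContainer"> exists to host the visualisation.
var container = document.getElementById("vizContainer");
var viz = new tableau.Viz(container, "https://example.com/views/SomeWorkbook/SomeView", {
  // Wait until the viz is ready before interacting with it.
  onFirstInteractive: function () {
    // changeParameterValueAsync returns a promise rather than the answer itself,
    // so the page stays responsive while the server works through the data.
    viz.getWorkbook()
      .changeParameterValueAsync("Test Run", "Latest") // hypothetical parameter name
      .then(function () {
        console.log("Parameter applied; the visualisation is refreshing.");
      })
      .otherwise(function (err) { // Tableau promises report errors via otherwise()
        console.error("Update failed:", err);
      });
  }
});
```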

We’re very pleased with the end result, and the folk at Tableau liked it enough to invite me to talk about it at their annual user conference in Seattle – which this year will have a staggering 5,000 attendees. As part of my conference session I put together a simple demonstration to show how the interactions work. It is a simple standalone web page with an embedded Tableau visualisation object in it – showing how the user can interact with the Tableau visualisation both from controls on the hosting web page and from within the Tableau object itself.

See the integrated Tableau wave pendulum demonstration

The demo is an animated emulation of a video made by the San Diego State University Physics department showing a cool set of pendulums – the link to the video is in the demo page. It has a couple of controls on the hosting page: a play/pause button to start and stop the animation, and a field to enter the time delay between refreshes. You can also interact with the visualisation itself (when it is paused) by changing the Time Parameter – which determines how far through the cycle the emulation is.
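
The play/pause control is little more than a timer nudging that parameter along. A stripped-down version of the loop might look like the following (the parameter is named after the Time Parameter described above; the rest is an illustrative sketch, not the demo's actual source):

```javascript
// Drive the Time Parameter forward on a timer to animate the pendulums.
var playing = false;
var delayMs = 500; // value from the "time delay between refreshes" field
var t = 0;

function tick(workbook) {
  if (!playing) return;
  t = (t + 1) % 100; // step through the pendulum cycle, then wrap around
  workbook.changeParameterValueAsync("Time Parameter", t).then(function () {
    setTimeout(function () { tick(workbook); }, delayMs); // schedule the next frame
  });
}

// Wired up to the play/pause button on the hosting page.
function onPlayPauseClicked(viz) {
  playing = !playing;
  if (playing) tick(viz.getWorkbook());
}
```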

You can download the source for the demonstration and also the source of the Tableau visualisation itself from the Tableau Public website using the links in the demonstration page, if you want to look under the covers to see how it all works.

Note that whilst the example is a bit of fun, I absolutely don’t advocate trying to animate complex visualisations at a high refresh rate like this. But there may be contexts in which this technique could be useful.

Whilst the demonstration itself is very simple, the possibilities that are opened up by interacting with the visualisation from an external application or web portal to pass in external context are enormous.  

Contact Equinox IT to find out more about our performance intelligence services and our business application and product development services


July 08
Change or die - the changing role of IT departments

The IT industry is undergoing a seismic shift. Unlike previous technology revolutions, the changes to the way services are delivered to organisations and end users are forcing dramatic changes to the way organisations view technology, and how they are structured to deliver IT services. Change is happening, and how organisations choose to react to that change will determine their future relevance in the marketplace. What is driving this change, and how can organisations, specifically IT departments, adapt and thrive?

I was watching an item on the news the other night that talked about IP-based messaging platforms (like iMessage) replacing trusty old SMS. The recent acquisition of WhatsApp by Facebook for an eye-watering US$16 billion shows just how much belief major industry players have in the growth opportunities around this segment of the market, at the expense of ‘legacy’ telecommunications providers.

In the same way that telcos are seeing their role being relegated to a commodity provider of a ‘dumb pipe’ for home and mobile broadband, those in the traditional IT infrastructure space are also at risk of having their lunch eaten by external competitors who can now deliver what was traditionally a complex service as an on-demand utility.  Cloud infrastructure is transforming delivery and operation of servers, storage and network devices into ‘dumb infrastructure’ commodity services. 

Today, few IT departments have the depth and breadth to design, develop, test and operate an entire suite of enterprise applications. The rise of ‘off the shelf’ enterprise software over the last 30 years demonstrates the validity of a model where much of the heavy lifting is provided by an external vendor when in-house capability does not exist, or an organisation chooses to focus its limited resources elsewhere. In recent years we have seen this approach extend further, to a full Software as a Service (SaaS) delivery model where all aspects of the software development lifecycle are managed by the service provider.

The progression of software delivery towards SaaS points to the future of IT infrastructure within an IT department. Rather than build and operate infrastructure using ‘off the shelf’ servers, storage, networking and operating system components, a range of Infrastructure as a Service (IaaS) offerings are available on-demand. IaaS increases the responsiveness of technology to business needs, provides cost reduction opportunities, and delivers a better quality service while shielding customers from the complexities of service design, delivery and operation.

The benefits of IaaS are often the same attributes used by IT departments to justify their existence to the business. If one of the core capabilities of IT is no longer perceived to be of value, how can it continue to remain relevant to the business? Transforming IT from its current role as a service provider to one of a service broker is one such way to retain relevance. While some technical skills will remain relevant, especially those related to security, integration and data management, others such as system design, delivery and operation will recede in importance. The real shift however is moving away from a focus on technology towards a consultative model where IT works as part of the organisation to deliver business outcomes. An increase in skills relating to business processes, requirements analysis, the management of commercial agreements, and most importantly, the successful management of the intersections between business and technology will be required by tomorrow’s organisations.

Change of such a scale is not easy, and requires understanding, planning and commitment at the very highest levels of an organisation. Those that are able to successfully make the transition will reap the benefits of utility computing and flourish in tomorrow’s environment. Those that hold tightly to old operating models may find themselves like many of today's telcos - a service provider without a customer.




July 04
Drowning in documentation

At first glance, documentation may not be the sexiest topic for any IT professional; however, it is often the standard by which our efforts are judged, especially by those who encounter our work after we have left the organisation. Few things in the life of an IT professional can be as frustrating as attempting to understand or support an IT system without having the required information available.

The standards of documentation I’ve encountered over the years vary wildly, and sadly many documents read as an afterthought from an otherwise talented developer, analyst, or architect. A solution is immediately more accessible and understandable when well-written documentation is incorporated as part of the solution. This in turn increases the value of both the service and the writer. Creating well-written documentation should be a foundation skill for any architect, systems engineer or software developer.

I've encountered a number of projects and clients with differing expectations around how information should be captured. These range from engagements where the majority of time was spent writing documents to adhere to strict standards, through to projects where documentation is almost a foreign concept, and critical knowledge about systems is trapped in the minds of people who one day will move on, usually taking that knowledge with them. Even with more projects moving towards agile delivery methodologies, documentation remains an important component. What are some key tips to keep in mind when crafting good technology documentation?

1. Relevance to a wide audience

Can you as an author say with complete confidence that you know everyone who may want to make use of your documentation? If the answer is "no" (which it often is), then cast your net as wide as you can without sacrificing the truly important messages you wish to convey. A document aimed primarily at a technical audience can, with the right approach, be just as useful to non-technical readers. In doing so you not only increase the relevance and value of your documentation, you also cut down on duplication - "one document to rule them all" is surely better than spending time and effort creating two or three audience-specific documents.

2. Forsake detail for the sake of brevity

Sometimes it can seem like a good idea to exhaustively document every nuance of your solution - you never know when you'll need to refer to it, right? Realistically though, just how likely is it that you or another reader will need to know that the SSH service is set to start automatically on boot (which was the default anyway)? Can't they just look up the vendor's online documentation for that information?

If you try to document everything it becomes tiresome to capture, almost impossible to maintain, and the messages that truly matter (like the fact that you must allocate at least 20GB of swap disk) get lost in the noise. Focus your documentation on the non-default decisions in your solution. Presumably each of these decision points is in some way justifiable, so record ‘why’ they were changed as well as ‘what’ needs to be changed. For those circumstances where it is useful to have comprehensive configuration options documented, consider an appendix or, even better, make them part of build or deployment scripts so that they are truly reusable.
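
As a rough illustration of that last idea, here is a minimal Python sketch of capturing non-default decisions directly in a deployment script, so the ‘what’ and the ‘why’ always travel together. The settings, values and reasons below are entirely hypothetical; this assumes a Linux host where the script runs with sufficient privileges:

```python
#!/usr/bin/env python3
"""Apply non-default settings in code, recording the reason for each one."""

import subprocess

# Each entry: (setting, value, reason). The reasons are what future readers
# need most -- the values alone can always be read off the running system.
# All settings and justifications here are hypothetical examples.
NON_DEFAULT_SETTINGS = [
    ("vm.swappiness", "10",
     "Batch jobs thrashed the page cache at the default of 60"),
    ("net.core.somaxconn", "1024",
     "Load testing showed dropped connections at the default backlog"),
]

def apply_settings() -> None:
    for key, value, reason in NON_DEFAULT_SETTINGS:
        print(f"Setting {key}={value}  # why: {reason}")
        # Apply the kernel parameter; check=True fails fast on error.
        subprocess.run(["sysctl", "-w", f"{key}={value}"], check=True)

if __name__ == "__main__":
    apply_settings()
```

Because the rationale lives beside the change itself, an engineer rebuilding the server years later inherits the reasons, not just the values.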

3. Standing the test of time

Let's be honest - most systems outlive their intended lifespan by a number of years. Dive into the design and operational documentation for an enterprise application and you'll encounter a lot of information that is past its use-by date. Usually this relates to attributes such as server names and operating system versions that at first glance seem reasonably static, but actually have a fairly high chance of changing over a long period as components are upgraded and dead servers are replaced (virtualisation and IaaS complicate this even further).

In theory, documents should be updated when changes occur but in reality this rarely happens. However, most organisations today implement some kind of configuration management database (CMDB) to track IT assets. Linking to a service's CMDB entry in the documentation is a better way of providing up-to-date information about changeable infrastructure elements.

Documenting organisational structures, roles and personnel can be just as risky. Organisations change over time, as do the people who perform specific functions. When the need arises to capture this information, record the role as well as the person, and to ensure longevity provide a description of the role. This way, when "Systems Administrator" gets renamed to "Technology Service Availability Manager" after a particularly energetic round of corporate re-branding, you can still track down the right person.

4. First impressions count

Nobody likes an ugly document. Bad fonts and formatting issues can turn people off within the first few pages. Many people absorb information more readily through visuals, so support your text with diagrams where appropriate. Take a look at professional websites and published documentation for pointers on how to structure a page so that it is easy on the eyes and draws the reader in.

5. Use everyday language

This point has been hammered home so many times over the years, but it seems the powers of business pseudo-speak are strong. Throw out jargon like "synergy", "moving forward" and "leverage" like old ham. Once ingrained, this habit is incredibly hard to break and even this author falls back into referring to "deliverables" and "low-hanging fruit" without thinking.

The problem with this type of language is the emptiness of meaning. Jargon stands in the way of proper expression, and in the worst case, can be used to intentionally obscure facts and information. Technology is precise, unambiguous, and definitive. All things that jargon isn't.

Conclusion

We in the technology sector more often than not work in an abstract world. You can't hold software in your hand, and nowadays we rarely have the opportunity even to touch a server, storage or network component. Documentation is the bridge between the abstract and the real world. Rather than treat documentation as an afterthought, we should write with pride, using it as an opportunity to increase the value of the solution and to close the gap between technology and people.

June 25
The top 5 cloud integration challenges that NZ organisations face and how architects can help solve them
Rapid adoption of cloud technology worldwide has meant that many New Zealand organisations are already testing the water by shifting non-critical workloads to the cloud. However, when business-wide applications move into the cloud, they need to integrate not only with on-premise systems, but also with other cloud services.

Our experience with New Zealand businesses has shown that integration is a key concern for organisations considering cloud service migrations. This post explores the top 5 cloud integration problems and how IT Architects can address them when considering the implementation of cloud services.

Top 5 cloud integration problems

1. Fewer Integration Options
Often the options for integrating with a cloud service are dictated by the cloud service provider. While traditional batch file transfers may be supported, we find that in most cases cloud service providers rely on APIs of various flavours. For organisations without a Service Oriented Architecture (SOA) capability or the ability to provide message translation services, successful cloud integration will require significant investment in your team's cloud architecture skills and experience.
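
To illustrate the shift away from batch file transfer, here is a minimal Python sketch of pushing a single record to a cloud provider's REST API. The endpoint, token and field names are hypothetical stand-ins, not any particular vendor's API:

```python
import requests  # third-party HTTP client: pip install requests

# Hypothetical SaaS endpoint and token -- substitute your provider's details.
API_URL = "https://api.example-saas.com/v1/customers"
API_TOKEN = "..."  # in practice, sourced from a secrets store, never hard-coded

def push_customer(record: dict) -> None:
    """Translate one on-premise record into the provider's JSON shape and POST it."""
    payload = {
        "externalId": record["cust_no"],     # internal field -> provider field
        "name": record["cust_name"].strip(),
        "country": record.get("country", "NZ"),
    }
    response = requests.post(
        API_URL,
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()  # surface HTTP errors rather than failing silently
```

Even a sketch this small shows the skills involved: authentication, schema mapping, timeouts and error handling all become your team's responsibility once the nightly file drop goes away.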

2. Poor understanding of data flows
We find that organisations often do not have a clear understanding of the nature and extent of information that will be stored in their cloud service. Not only do cloud services hold your information outside of your organisation, but many services also require access to your on-premise systems, potentially exposing you to a range of security risks.

3. Ad hoc approaches to integration
Many organisations approach cloud integration on a project-by-project basis. However, as cloud adoption increases across the enterprise, this ad hoc approach adds architectural complexity and overhead that becomes increasingly hard to maintain and enhance. Project-based approaches to cloud integration keep ongoing costs high, eroding the benefits cloud services can deliver.

4. Lack of suitable tools
The capability to integrate systems using SOA or custom APIs is something that many New Zealand organisations still lack. Many older on-premise applications have APIs that were developed prior to the emergence of cloud services and may be incompatible with newer, cloud-centric standards. In addition, some internal systems were not designed with any integration in mind, making successful cloud integration even more challenging.

5. Latency and performance
Due to New Zealand’s geographic location, cloud service providers are usually based halfway around the world from their kiwi clients. This raises concerns about a provider’s ability to support the needs of its clients, such as reliable delivery of synchronous, near real-time communication.
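
Whether latency is a genuine blocker for a given integration is easy to check empirically. The sketch below (Python; the endpoint is a hypothetical health-check URL) samples full round-trip times, including TLS and HTTP overhead, so you can judge whether synchronous calls are realistic:

```python
import statistics
import time

import requests  # pip install requests

# Hypothetical offshore endpoint -- replace with your provider's health-check URL.
ENDPOINT = "https://api.example-saas.com/health"

def sample_round_trips(n: int = 10) -> None:
    """Time n sequential requests and report median and worst-case latency."""
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        requests.get(ENDPOINT, timeout=5)
        samples.append((time.perf_counter() - start) * 1000)  # milliseconds
    print(f"median {statistics.median(samples):.0f} ms, "
          f"max {max(samples):.0f} ms over {n} requests")

if __name__ == "__main__":
    sample_round_trips()
```

A few minutes of measurement like this is far cheaper than discovering mid-project that a chatty synchronous interface cannot tolerate trans-Pacific round trips.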

What can New Zealand cloud architects do to mitigate these problems?
The following 5 approaches can be used to overcome some of the cloud integration problems your organisation may be facing:

1. Create an integration strategy
An integration strategy should set out how your organisation will approach the integration of cloud services. It’s vital to apply the strategy at an enterprise level to ensure all areas of the business are accounted for. It should include practical guidelines on the standards and protocols for data transfer that are specific to your business, with close ties to any existing identity, data classification and information security policies. Standardised integration patterns should also be developed to enable faster and smoother design and delivery of solutions.

2. Establish a shared integration platform
This demands a move away from integrating on a per-project basis, towards establishing a shared set of integration tools. Functions such as message routing, translation and transformation are required to move data between systems that may not speak the same language. This means SOA gateways and message buses shift from “nice to have” status to key components of the IT service portfolio.
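
As a simple illustration of the translation function such a platform provides, the sketch below (Python; the field names are invented for the example) maps an internal message shape onto the schema a cloud service expects, and deliberately drops anything unmapped:

```python
# Illustrative translation step on a shared integration platform: rename fields
# from the internal shape to a (hypothetical) cloud service's expected schema.

FIELD_MAP = {                 # internal field -> cloud service field
    "cust_no": "externalId",
    "cust_name": "name",
    "addr_country": "country",
}

def translate(message: dict) -> dict:
    """Rename fields per FIELD_MAP. Unmapped fields are dropped deliberately,
    so only the data the target service actually needs leaves the organisation."""
    return {out: message[src] for src, out in FIELD_MAP.items() if src in message}

# Example:
#   translate({"cust_no": "C-001", "cust_name": "Aroha Ltd", "internal_notes": "x"})
#   -> {"externalId": "C-001", "name": "Aroha Ltd"}
```

Built once as a shared capability, mappings like this are defined per system pair rather than rewritten inside every project.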

3. Balance flexibility with complexity
A good strategy should provide solid guidance but allow for flexibility to meet the needs of specific projects and systems. One of the key objectives of strategic integration is cost reduction, so ensure you don’t erode this benefit by defining unnecessary levels of detail and complexity in your integration architecture.

4. Understand your data
A good integration strategy leverages the knowledge an organisation has of its information. Make sure that your organisation comprehends the importance of its information, regardless of whether it is stored on-premise, in cloud services, or transferred between systems. This will enable the appropriate levels of control and visibility to be put in place, reducing threats to information security in a manner that fits the context and objectives of the business.

5. Don’t forget identity
Many organisations focus on data integration whilst forgetting the importance of identity integration. The result is identity fragmentation across your services, with users having to juggle multiple usernames and passwords. Through directory federation or identity replication, organisations can retain control over who accesses their cloud services, and provide a better user experience.
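
As a hedged illustration of the federation side, the sketch below uses the PyJWT library to verify a federated ID token's signature, issuer and audience. The issuer, audience and key handling are simplified stand-ins; a full OIDC implementation would also handle key discovery and rotation via the provider's metadata:

```python
import jwt  # PyJWT: pip install pyjwt

# Hypothetical values -- in a real OIDC federation these come from the
# identity provider's discovery document and its published signing keys.
ISSUER = "https://idp.example.org"
AUDIENCE = "my-cloud-service"

def verify_id_token(token: str, signing_key: str) -> dict:
    """Verify a federated ID token's signature, issuer and audience, so the
    cloud service can trust the corporate identity without storing passwords."""
    return jwt.decode(
        token,
        signing_key,
        algorithms=["RS256"],   # reject tokens signed with weaker algorithms
        audience=AUDIENCE,      # token must be intended for this service
        issuer=ISSUER,          # token must come from the trusted provider
    )
```

The point of the sketch is the trust model: credentials stay with the organisation's identity provider, and the cloud service only ever sees signed, verifiable assertions.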

The uptake of cloud services is set to accelerate in New Zealand over the coming years. Integration challenges will continue to slow cloud adoption and the realisation of cloud benefits until organisations take a more strategic, enterprise-wide approach. Creating an integration strategy and establishing a shared platform that is right for your organisation will help you transition to the cloud efficiently and effectively, while maintaining the necessary control over your organisation's critical information.



