September 12
Overcoming the challenges to successful cloud adoption in NZ
In this post I put forward ideas to help your organisation mature its approach to cloud adoption. The post covers some of the key problems that New Zealand organisations face when adopting cloud services, and five pragmatic tips to help you be more successful. If you want more detail, I also elaborate on this topic in my recorded webinar Moving your organisation to the cloud.

Cloud adoption is harder than you think

Cloud services have many benefits for organisations, and based on overseas trends we expect to see the adoption of cloud services accelerate in New Zealand. Wherever your organisation is today, effective cloud adoption and management will increasingly become pivotal to your future success.

Unfortunately, many organisations underestimate the effort required for secure and effective cloud adoption. Organisations may also face resistance to change from roles impacted by new ways of working. Furthermore, many IT teams are being side-lined as business units procure the services they need directly, without the perceived IT bureaucracy.

It is clear that new skills are required as IT’s role transitions from service delivery to service brokering. As cloud becomes more prevalent, we need to start preparing ourselves to better leverage the full benefits that this powerful set of services has to offer.

Top 5 problems New Zealand organisations face when adopting cloud, and how to solve them

Problem 1: Poor understanding of enterprise data – information is one of your most important assets, but how well do you know your information? Organisations that do not understand the nature and value of their information will find it nearly impossible to determine the best way to manage that information in the cloud.

Solution: Classify and categorise your data – you need to establish a baseline understanding of the information within your organisation. This means uncovering what data is used, and how it is used by systems, processes and people. Analysing your information in this way will allow you to categorise your data and form a view on the appropriate level of information security.

Problem 2: Uncertain data sovereignty rules – most cloud services are not located in New Zealand and as a result there is uncertainty around the legal requirements and implications of storing data with offshore cloud service providers. This has been further complicated by the US National Security Agency and UK Government Communications Headquarters accessing data that is stored with cloud providers who host in their respective countries.

Solution: Seek legal counsel for offshore services – legal advice will help you navigate the tricky area of data sovereignty, particularly if your information relates to government, financial or sensitive topics. It should also give you a better handle on whether your information is likely to attract attention from host nation intelligence agencies, and the implications of this.

Problem 3: Inadequate guidance and advice – adopting cloud services can be uncharted territory for many organisations, and cloud vendor assistance can be biased towards their own solution rather than the broader considerations and implications. There are risks with cloud services, and the process of adoption absolutely requires an ‘eyes open’ approach.

Solution: Understand what cloud adoption entails – building a repeatable process and checklist for adopting cloud will be valuable, and will help you identify and mitigate important risks associated with cloud services. There are various resources that may help with this, including the New Zealand Cloud Computing Code of Practice, the ENISA Cloud Computing Information Assurance Framework, the CSA STAR Assessment Framework and the New Zealand Government Cloud Computing Considerations paper.

Problem 4: Difficulty applying security controls – everyone wants cloud services to be secure, but what are the appropriate security controls for your information and your particular cloud service implementation and circumstances?

Solution: Take a risk-based approach to information security – since you can’t apply every information security control to cloud services, I recommend taking a risk-based approach. This will allow you to assess the actual risks within the context of the cloud service and the needs of your organisation, helping you identify and implement the most appropriate security controls.

Problem 5: Outdated procurement and governance – procurement approaches centred on large up-front spend and predictable support and maintenance costs do not translate well to cloud adoption, where costs are ongoing and based on service consumption.

Solution: Update your procurement processes – perform a gap analysis between your existing procurement and governance processes and what is required for effective cloud service procurement. This should allow you to identify the changes necessary to update your procurement processes to suit your organisation’s needs today.

During my Moving your organisation to the cloud webinar we polled the New Zealand attendees to identify which of these 5 problems was most significant for them. As you can see below, the results were quite evenly spread, with security and data being the leading considerations:

Poor understanding of enterprise data: 24%
Uncertain data sovereignty rules: 21%
Inadequate guidance and advice: 19%
Difficulty applying security controls: 30%
Outdated procurement and governance: 6%

With cloud adoption in New Zealand set to accelerate, your organisation needs to prepare itself to become better at adopting and managing cloud services. Cloud service adoption is not as simple as cloud vendors would have you believe, and there are many tricky ‘gotchas’ once you look beyond the vendor marketing. This post is a good start to preparing yourself for successful cloud adoption. The information presented here is also covered in more detail in my recorded webinar Moving your organisation to the cloud. If you need specific advice, you can always get hold of me by contacting Equinox IT.

Equinox IT was instrumental in setting up the New Zealand Cloud Computing Code of Practice, along with the IITP and other organisations such as Xero. We are now a signatory to the CloudCode.

Free recorded webinar: Moving your organisation to the cloud




September 12
Integrating business intelligence tools into your application
I’ve blogged several times over the years on various aspects of how we use Business Intelligence tools to visualise the mountains of data we accumulate in the course of our Performance Intelligence practice assignments. Even the practice name of “Performance Intelligence” reflects the vital role that such tools and techniques play in deriving insights from all of that data to allow us to get to the bottom of the really hard system performance problems.

Our data visualisation tool of choice is Tableau, because it connects to pretty much any type of data, offers a very wide range of visualisation types and can handle huge volumes of data remarkably quickly when used well. But up until now we have always treated Tableau as a standalone tool sitting alongside whichever performance testing or metrics collection tools we are using on a performance assignment. That works fine – but it does mean that the analysis and visualisation doesn’t form an integral part of our workflow in these other tools. There are lots of opportunities to streamline the workflow, allowing interactive exploration of test results data – drilling down from high-level summaries showing the impact of issues to low-level detail giving strong clues about their cause. If only we could carry context from the performance testing tool to the visualisation tool.

We have recently been working to address that, making use of an integration API which Tableau introduced with their last major release. First cab off the rank for us was integrating the visualisations into the Microsoft Visual Studio development environment, since Visual Studio provides one of the performance testing tools which we use extensively in our practice and offers the extensibility points necessary to achieve tight integration.

But whilst the integration is conceptually straightforward – we just want the visualisations to form a seamless part of the experience of using the tool and to know the context (what is the latest test run, for example) – actually making it work seamlessly and perform well required careful design and significant software development skills.

The Tableau API uses an asynchronous programming model – JavaScript Promises (so called because when you call an asynchronous method the response you get back is not the answer you are after, but a “promise” to return it in due course). Using this asynchronous model allows the client-side behaviour to remain responsive whilst potentially long-running requests involving millions of rows of data are handled on the server. Putting a simplistic proof-of-concept together was within my powers, but actually achieving tight integration in a robust and well-performing way definitely needed the professionals. So I’m very glad that we had the services of our Business Application and Product Development business available to do the integration work.
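To make that model concrete, here is a minimal TypeScript sketch of the promise-based pattern, assuming version 2 of the Tableau JavaScript API (normally loaded into the page via a script tag rather than an import). The container element ID, workbook URL and “Time” parameter name are placeholders for illustration, not our actual integration code:

```typescript
// Minimal ambient declarations for the Tableau JavaScript API v2 global,
// which arrives via a <script> tag rather than a module import.
declare namespace tableau {
  class Viz {
    constructor(parent: HTMLElement, url: string, options?: object);
    getWorkbook(): Workbook;
    dispose(): void;
  }
  interface Workbook {
    // Dispatches the change to the server and returns immediately with a
    // Promise, so the browser stays responsive while the work happens.
    changeParameterValueAsync(name: string, value: unknown): Promise<unknown>;
  }
}

const container = document.getElementById("vizContainer") as HTMLElement;

// Hypothetical published workbook URL, for illustration only.
const viz = new tableau.Viz(
  container,
  "https://public.tableau.com/views/MyWorkbook/MyView",
  {
    hideTabs: true,
    onFirstInteractive: () => {
      viz
        .getWorkbook()
        .changeParameterValueAsync("Time", 1) // returns a Promise at once
        .then(() => console.log("parameter change applied")) // runs later
        .catch((err) => console.error("parameter change failed", err));
    },
  }
);
```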

We’re very pleased with the end result, and the folk at Tableau liked it enough to invite me to talk about it at their annual user conference in Seattle – which this year will have a staggering 5,000 attendees. As part of my conference session I put together a simple demonstration to show how the interactions work. It is a standalone web page with an embedded Tableau visualisation object, showing how the user can interact with the Tableau visualisation both from controls on the hosting web page and from within the Tableau object itself.

See the integrated Tableau wave pendulum demonstration

The demo is an animated emulation of a video made by the San Diego State University Physics department showing a cool set of pendulums – the link to the video is in the demo page. It has a couple of controls on the hosting page: a play/pause button to start and stop the animation, and a field to enter the time delay between refreshes. You can also interact with the visualisation itself (when it is paused) by changing the Time parameter, which determines how far through the cycle the emulation is.
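For a flavour of how such a loop might be wired up, here is a simplified TypeScript sketch building on the embedding example above. It is not the demo’s actual source (which you can download below); the control element IDs and the “Time” parameter name are assumptions:

```typescript
// A play/pause loop that drives the animation by stepping the Time
// parameter. Chaining each step on the returned Promise stops updates
// piling up faster than Tableau can apply them.
let playing = false;
let time = 0;

const playButton = document.getElementById("playPause") as HTMLButtonElement;
const delayInput = document.getElementById("delayMs") as HTMLInputElement;

function step(workbook: tableau.Workbook): void {
  if (!playing) return;
  time += 1;
  workbook
    .changeParameterValueAsync("Time", time)
    .then(() => {
      const delay = Number(delayInput.value) || 100; // refresh delay in ms
      window.setTimeout(() => step(workbook), delay);
    })
    .catch((err) => {
      console.error("animation stopped", err);
      playing = false;
    });
}

playButton.addEventListener("click", () => {
  playing = !playing;
  playButton.textContent = playing ? "Pause" : "Play";
  if (playing) step(viz.getWorkbook()); // viz from the embedding sketch above
});
```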

You can download the source for the demonstration and also the source of the Tableau visualisation itself from the Tableau Public website using the links in the demonstration page, if you want to look under the covers to see how it all works.

Note that whilst the example is a bit of fun, I absolutely don’t advocate trying to animate complex visualisations at a high refresh rate like this. But there may be contexts in which this technique could be useful.

Whilst the demonstration itself is very simple, the possibilities that are opened up by interacting with the visualisation from an external application or web portal to pass in external context are enormous.  

Contact Equinox IT to find out more about our performance intelligence services and our business application and product development services


July 08
Change or die - the changing role of IT departments

The IT industry is undergoing a seismic shift. Unlike previous technology revolutions, the changes to the way services are delivered to organisations and end users are forcing dramatic changes to the way organisations view technology, and how they are structured to deliver IT services. Change is happening, and how organisations choose to react to that change will determine their future relevance in the marketplace. What is driving this change, and how can organisations, specifically IT departments, adapt and thrive?

I was watching a news item the other night about IP-based messaging platforms (like iMessage) replacing trusty old SMS. The recent acquisition of WhatsApp by Facebook for an eye-watering US$16 billion shows just how much belief major industry players have in growth opportunities around this segment of the market, at the expense of ‘legacy’ telecommunications providers.

In the same way that telcos are seeing their role being relegated to a commodity provider of a ‘dumb pipe’ for home and mobile broadband, those in the traditional IT infrastructure space are also at risk of having their lunch eaten by external competitors who can now deliver what was traditionally a complex service as an on-demand utility.  Cloud infrastructure is transforming delivery and operation of servers, storage and network devices into ‘dumb infrastructure’ commodity services. 

Today, few IT departments have the depth and breadth to design, develop, test and operate an entire suite of enterprise applications. The rise of ‘off the shelf’ enterprise software over the last 30 years demonstrates the validity of a model where much of the heavy lifting is provided by an external vendor when in-house capability does not exist, or an organisation chooses to focus its limited resources elsewhere. In recent years we have seen this approach extend further, to a full Software as a Service (SaaS) delivery model where all aspects of the software development lifecycle are managed by the service provider.

The progression of software delivery towards SaaS points to the future of IT infrastructure within an IT department. Rather than build and operate infrastructure using ‘off the shelf’ servers, storage, networking and operating system components, a range of Infrastructure as a Service (IaaS) offerings are available on-demand. IaaS increases the responsiveness of technology to business needs, provides cost reduction opportunities, and delivers a better quality service while shielding customers from the complexities of service design, delivery and operation.

The benefits of IaaS are often the same attributes used by IT departments to justify their existence to the business. If one of the core capabilities of IT is no longer perceived to be of value, how can it continue to remain relevant to the business? Transforming IT from its current role as a service provider to one of a service broker is one such way to retain relevance. While some technical skills will remain relevant, especially those related to security, integration and data management, others such as system design, delivery and operation will recede in importance. The real shift however is moving away from a focus on technology towards a consultative model where IT works as part of the organisation to deliver business outcomes. An increase in skills relating to business processes, requirements analysis, the management of commercial agreements, and most importantly, the successful management of the intersections between business and technology will be required by tomorrow’s organisations.

Change of such a scale is not easy, and requires understanding, planning and commitment at the very highest levels of an organisation. Those that are able to successfully make the transition will reap the benefits of utility computing and flourish in tomorrow’s environment. Those that hold tightly to old operating models may find themselves like many of today's telcos - a service provider without a customer.


Recorded webinar: Cloud services are like a bungy jump


July 04
Drowning in documentation

At first glance, documentation may not be the sexiest topic for an IT professional; however, it is often the standard by which our efforts are judged, especially by those who encounter our work after we have left the organisation. Few things in the life of an IT professional are as frustrating as attempting to understand or support an IT system without having the required information available.

The standards of documentation I’ve encountered over the years vary wildly, and sadly many represent an afterthought of an otherwise talented developer, analyst, or architect. A solution is immediately more accessible and understandable when well-written documentation is incorporated as part of the solution. This in turn increases the value of both the service, and also of the writer. Creating well-written documentation should be a foundation skill of any architect, systems engineer or software developer.

I've encountered a number of projects and clients with differing expectations around how information should be captured. These range from engagements where the majority of time was spent writing documents to adhere to strict standards, through to projects where documentation is almost a foreign concept, and critical knowledge about systems is trapped in the minds of people who one day will move on, usually taking that knowledge with them. Even with more projects moving towards agile delivery methodologies, documentation remains an important component. What are some key tips to keep in mind when crafting good technology documentation?

1. Relevance to a wide audience

Can you as an author say with complete confidence that you know everyone who may want to make use of your documentation? If the answer is "no" (which it often is) then cast your net as wide as you possibly can without sacrificing the truly important messages that you wish to convey. A document primarily aimed at a technical audience can, with the right approach, be just as useful to non-technical readers. In doing so you not only increase the relevance and value of your documentation but also cut down on duplication – "one document to rule them all" is surely better than spending time and effort creating two or three audience-specific documents.

2. Forsake detail for the sake of brevity

Sometimes it can seem like a good idea to exhaustively document every nuance of your solution – you never know when you'll need to refer to it, right? Realistically though, just how likely is it that you or another reader will need to know that the SSH service is set to start automatically on boot (which was the default anyway)? Can't they just look up the vendor's online documentation for that information?

If you try to document everything it becomes tiresome to capture, almost impossible to maintain, and the messages that truly matter (like the fact that you must allocate at least 20GB of swap disk) get lost in the noise. Focus your documentation on the non-default decisions of your solution. Presumably each of these decision points is in some way justifiable, so provide the reason ‘why’ they were changed as well as ‘what’ needs to be changed. For those circumstances where it is useful to have comprehensive configuration options documented, consider an appendix, or even better, make them part of build or deployment scripts so that they are truly reusable, as sketched below.
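As a small illustration of that last point, non-default decisions can be captured as executable configuration so the 'what' is enforced and the 'why' travels with it. This TypeScript sketch is hypothetical – the settings, values and check are invented for the example:

```typescript
// Non-default decisions captured as code rather than prose.
// All values below are invented for illustration.
const nonDefaultSettings = {
  // Why: the vendor default caused swap exhaustion under month-end batch load.
  minSwapGb: 20,
  // Why: the default 2GB heap produced out-of-memory errors at peak volume.
  jvmHeapGb: 8,
};

// A deployment-time check: fail fast if the host doesn't meet the
// documented (and now executable) requirement.
function verifySwap(actualSwapGb: number): void {
  if (actualSwapGb < nonDefaultSettings.minSwapGb) {
    throw new Error(
      `swap is ${actualSwapGb}GB; at least ${nonDefaultSettings.minSwapGb}GB is required`
    );
  }
}
```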

3. Standing the test of time

Let's be honest – most systems outlive their intended lifespan by a number of years. Dive into the design and operational documentation for an enterprise application and you'll encounter a lot of information that is past its use-by date. Normally this relates to attributes such as server names and operating system versions that at first glance seem reasonably static, but actually have a fairly high chance of changing over a long period as components are upgraded and dead servers are replaced (virtualisation and IaaS complicate this even further).

In theory, documents should be updated when changes occur but in reality this rarely happens. However, most organisations today implement some kind of configuration management database (CMDB) to track IT assets. Linking to a service's CMDB entry in the documentation is a better way of providing up-to-date information about changeable infrastructure elements.

Documenting organisational structures, roles and personnel can be just as risky. Organisations change over time, as do the people who perform specific functions. When the need arises to capture this information, consider the role as well as the person, and to ensure longevity, provide a description of the role. This way, when "Systems Administrator" gets renamed to "Technology Service Availability Manager" after a particularly energetic round of corporate re-branding, you can still track down the right person.

4. First impressions count

Nobody likes an ugly document. Bad fonts and formatting issues can turn people off just by glancing through the first few pages. Many people absorb information visually, so support your text with diagrams where appropriate. Take a look at professional websites and published documentation for pointers on how to structure a page so that it is easy on the eyes and draws the reader in.

5. Use everyday language

This point has been hammered home so many times over the years, but it seems the powers of business pseudo-speak are strong. Throw out jargon such as "synergy", "moving forward" and "leverage" like old ham. Once ingrained, this habit is incredibly hard to break, and even this author falls back into referring to "deliverables" and "low-hanging fruit" without thinking.

The problem with this type of language is the emptiness of meaning. Jargon stands in the way of proper expression, and in the worst case, can be used to intentionally obscure facts and information. Technology is precise, unambiguous, and definitive. All things that jargon isn't.

Conclusion

We in the technology sector more often than not work in an abstract world. You can't hold software in your hand. Nowadays we rarely have the opportunity to even touch a server, storage or network component. Documentation is the bridge between the abstract and the real world. Rather than treat documentation as an afterthought, we should write with pride, using it as an opportunity to increase the value of the solution and to close the gap between technology and people.

June 25
The top 5 cloud integration challenges that NZ organisations face and how architects can help solve them
Rapid adoption of cloud technology worldwide has meant that many New Zealand organisations are already testing the water by shifting non-critical workloads to the cloud. However, when business-wide applications move into the cloud, they need to integrate not only with on-premise systems, but also with other cloud services.

Our experience with New Zealand businesses has shown that integration is a key concern for organisations considering cloud service migrations. This post explores the top 5 cloud integration problems and how IT architects can address them when considering the implementation of cloud services.

Top 5 cloud integration problems

1. Fewer Integration Options
Often the options for integrating with a cloud service are dictated by the cloud service provider. While traditional batch file transfers may be supported, we find in most cases cloud service providers rely on APIs of various flavours. For organisations without a Service Oriented Architecture (SOA) capability or the ability to provide message translation services, successful cloud integration will require significant investment in your team’s cloud architecture skills and experience.

2. Poor understanding of data flows
We find that organisations often do not have a clear understanding of the nature and extent of information that will be stored in their cloud service. Not only do cloud services hold your information outside of your organisation, but many services also require access to your on-premise systems, potentially exposing you to a range of security risks.

3. Ad hoc approaches to integration
Many organisations approach cloud integration on a project-by-project basis. However, as cloud adoption increases across the enterprise, this ad hoc approach adds architectural complexity and overhead across the entire organisation, creating an environment that becomes increasingly hard to maintain and enhance. Project-based approaches to cloud integration will keep ongoing costs high, eroding the benefits cloud services can deliver.

4. Lack of suitable tools
The capability to integrate systems using SOA or custom APIs is something that many New Zealand organisations still lack. Many older on-premise applications have APIs that were developed prior to the emergence of cloud services and may be incompatible with newer, cloud-centric standards. In addition, some internal systems were not designed with any integration in mind, making successful cloud integration even more challenging.

5. Latency and performance
Due to New Zealand’s geographic location, cloud service providers are usually based halfway around the world from their kiwi clients. The resulting latency raises concerns about the ability of the provider to support the needs of their clients, such as reliable delivery of synchronous, near real-time communication.

What can New Zealand cloud architects do to mitigate these problems?
The following 5 approaches can be used to overcome some of the cloud integration problems your organisation may be facing:

1. Create an integration strategy
An integration strategy should set out how your organisation will approach the integration of cloud services. It’s vital you apply your strategy at an enterprise level to ensure all areas of the business are accounted for. Practical guidelines directing standards and protocols for data transfer that are specific to your business should be included, with close ties to any existing identity, data classification and information security policies. Standardised integration patterns should also be developed to enable faster and smoother design and delivery of solutions.

2. Establish a shared integration platform
This demands a move away from integrating on a per-project basis, towards establishing a shared set of integration tools. Functions such as message routing, translation and transformation are required to move data between systems that may not speak the same language. This means SOA gateways and message buses shift from “nice to have” status to key components of the IT service portfolio.
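As a simplified illustration of the translation function such a shared platform centralises, the TypeScript sketch below maps a hypothetical cloud service's JSON payload onto the record format an equally hypothetical on-premise system expects; all field names are invented for the example:

```typescript
// The shape of an invoice event as a hypothetical cloud service emits it.
interface CloudInvoiceEvent {
  invoiceId: string;
  amountCents: number;
  issuedAt: string; // ISO 8601 timestamp
}

// The shape an equally hypothetical on-premise finance system expects.
interface OnPremInvoiceRecord {
  INV_NO: string;
  AMOUNT: number; // dollars rather than cents
  ISSUE_DATE: string; // DD/MM/YYYY
}

// A translation owned by the shared platform: every project reuses this one
// mapping instead of re-implementing it per integration.
function translateInvoice(event: CloudInvoiceEvent): OnPremInvoiceRecord {
  const issued = new Date(event.issuedAt);
  const pad = (n: number) => String(n).padStart(2, "0");
  return {
    INV_NO: event.invoiceId,
    AMOUNT: event.amountCents / 100,
    ISSUE_DATE: `${pad(issued.getUTCDate())}/${pad(issued.getUTCMonth() + 1)}/${issued.getUTCFullYear()}`,
  };
}
```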

3. Balance flexibility with complexity
A good strategy should provide solid guidance but allow for flexibility to meet the needs of specific projects and systems. One of the key objectives of strategic integration is cost reduction, so ensure you don’t erode this benefit by defining unnecessary levels of detail and complexity in your integration architecture.

4. Understand your data
A good integration strategy leverages the knowledge an organisation has of its information. Make sure that your organisation comprehends the importance of its information, regardless of whether it is stored on-premise, in cloud services, or transferred between systems. This will enable the appropriate levels of control and visibility to be put in place, reducing threats to information security in a manner that fits the context and objectives of the business.

5. Don’t forget identity
Many organisations focus on data integration whilst forgetting the importance of identity integration. The result is identity fragmentation across your services, with users having to juggle multiple usernames and passwords. Through directory federation or identity replication, organisations can retain control over who accesses their cloud services, and provide a better user experience.

The uptake of cloud services is set to accelerate in New Zealand over the coming years. Cloud integration will continue to slow cloud adoption and the realisation of cloud benefits until we start taking a more strategic and enterprise-wide approach. Creating an integration strategy and establishing a shared platform that is right for your organisation will help you efficiently and effectively transition to cloud while maintaining the necessary control over your organisation's critical information.



