Cloud Computing Providers

Before you choose your Cloud Computing Provider, let’s consider the principles and benefits of SOA and cloud computing itself.

Cloud Computing Basics

Perhaps I’m just grumpy this week.  Or, concerned for the future.  Or, most likely, both.  Nevertheless, I find conventional SOA lore more bothersome than usual.  Specifically, the paired notions that the sole reason to implement services (or not) is re-use potential, and that the main architectural aspect of SOA is governing said services for re-use.

Now, don’t misinterpret: there is true value in sharing services, and governance is critical.  However, SOA, or better said, services architecture, doesn’t begin and end with re-use potential and its enforcement.

For those with architectural backgrounds – software, not marketing trend – what follows is nothing new.  You are well acquainted with foundational tenets such as separation of concerns, modularity, loose coupling, cohesion, and so on, and their associated benefits.

Unfortunately, based on my interactions over the last several months, I must report that (a) this knowledge is not universal, (b) people can’t articulate the benefits of well-architected software, and/or (c) the dots don’t connect all the way to SOA.

Since the presence of well-defined (and well-built services) is assumed in a bevy of existing and emerging technology strategies — mashups, event-processing, business process automation and cloud computing — we need to correct the record on the total value of services and make the connection to proper architectural discipline.

To aid in this ‘services-architecture’ education, I’d like to call out excerpts of three works.  The first source is Luke Hohmann’s excellent 2003 book, Beyond Software Architecture.  In Chapter 1, Hohmann describes (reminds us of) architectural design principles that have stood the test of time:

Encapsulation

The architecture is organized around separate and relatively independent pieces that hide internal implementation details from each other. 

Interfaces

The ways that subsystems within a larger design interact are clearly defined.  Ideally, these interactions are specified in such a way that they can remain relatively stable over the life of the system.  One way to accomplish this is through abstractions over the concrete implementation.  Programming to the abstraction allows greater variability as implementation needs change. 

…Another area in which the principle of interfaces influences system design is the careful isolation of aspects of the system that are likely to experience the greatest amount of change behind stable interfaces. 
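To make “programming to the abstraction” concrete, here is a minimal sketch.  The PaymentGateway name and its methods are hypothetical illustrations, not drawn from Hohmann’s book:

```python
from abc import ABC, abstractmethod

# The stable interface: clients are written against this abstraction,
# so concrete implementations can change without disturbing callers.
class PaymentGateway(ABC):
    @abstractmethod
    def charge(self, account_id: str, amount_cents: int) -> bool:
        """Attempt to charge the account; return True on success."""

# One concrete implementation; others (a real payment processor, a
# test stub) can be substituted without any change to client code.
class InMemoryGateway(PaymentGateway):
    def __init__(self) -> None:
        self.charges: list[tuple[str, int]] = []

    def charge(self, account_id: str, amount_cents: int) -> bool:
        self.charges.append((account_id, amount_cents))
        return True

def checkout(gateway: PaymentGateway, account_id: str, amount_cents: int) -> bool:
    # Client code depends only on the abstraction.
    return gateway.charge(account_id, amount_cents)
```

Because checkout knows only the interface, the aspect most likely to change – the payment implementation – is isolated behind a stable boundary.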

Loose Coupling

Coupling refers to the degree of interconnectedness among different pieces in a system.  In general, loosely coupled pieces are easier to understand, test, reuse, and maintain, because they can be isolated from other pieces of the system.  Loose coupling also promotes parallelism in the implementation schedule.  Note the application of the first two principles aids loose coupling.
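One common way to loosen coupling is to let pieces communicate through an intermediary rather than call each other directly.  The sketch below is a deliberately tiny in-process event bus; the names and topics are hypothetical:

```python
from collections import defaultdict
from typing import Callable

# Publishers and subscribers know only a topic name, never each other,
# so each piece can be understood, tested, and replaced in isolation.
class EventBus:
    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._handlers[topic].append(handler)

    def publish(self, topic: str, payload: dict) -> None:
        for handler in self._handlers[topic]:
            handler(payload)
```

An order-processing piece can publish an "order.placed" event without ever importing the inventory piece; swapping either side does not disturb the other.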

Appropriate Granularity

One of the key challenges associated with loose coupling concerns component granularity.  By granularity I mean the level of work performed by a component.  Loosely coupled components may be easy to understand, test, reuse, and maintain in isolation, but when they are created with too fine of a granularity, creating solutions using them can be harder because you have to stitch together so many to accomplish a meaningful piece of work.  Appropriate granularity is determined by the task(s) associated with the component. 

High Cohesion

Cohesion describes how closely related the activities within a single piece (component) or among a group of pieces are.  A highly cohesive component means that its elements strongly relate to each other.

Parameterization 

Components can be encapsulated, but this does not mean that they perform their work without some kind of parameterization or instrumentation.  The most effective components perform an appropriate amount of work with the right number and kind of parameters that enable their user to adjust their operation. 
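As a small illustration of this principle, consider a component that exposes just enough parameters to let users adjust its operation without reaching into its internals (the example itself is hypothetical):

```python
# An encapsulated component with a small, meaningful set of parameters:
# callers adjust the page and page size; the slicing logic stays hidden.
def paginate(items: list, page: int = 1, page_size: int = 25) -> list:
    if page < 1 or page_size < 1:
        raise ValueError("page and page_size must be positive")
    start = (page - 1) * page_size
    return items[start:start + page_size]
```

Sensible defaults mean simple callers pass nothing, while demanding callers tune exactly the two knobs that matter – the “appropriate amount of work with the right number and kind of parameters.”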

Deferral

Many times the development team is faced with a tough decision that cannot be made with certainty. …By deferring these decisions as long as possible the overall development team gives themselves the best chance to make a good choice.  While you can’t defer a decision forever, you can quarantine its effects by using the principles of good architectural design.” 

To state the obvious, the above principles all apply to service design.  Pointing out the (apparently) less obvious, the value of applying these principles – protecting against and planning for change, breaking up work, smart-sizing assets, isolating risk – are benefits that can be derived from services, regardless of re-use potential.

Moving to a real-world example, the March 2008 Harvard Business Review featured an article by David M. Upton and Bradley R. Staats entitled Radically Simple IT.  The article describes how Japan’s Shinsei Bank implemented a new enterprise system:

“In our research, we discovered a standout among the companies applying the path-based method: Japan’s Shinsei Bank. It succeeded in developing and deploying an entirely new enterprise system in one year at a cost of $55 million: That’s one-quarter of the time and about 10% of the cost of installing a traditional packaged system.

The new system not only served as a low-cost, efficient platform for running the existing business but also was flexible enough to support the company’s growth into new areas, including retail banking, consumer finance, and a joint venture to sell Indian mutual funds in Japan. 

The path-based principles that Shinsei applied in designing, building, and rolling out the system—forging together, not just aligning, business and IT strategies; employing the simplest possible technology; making the system truly modular; letting the system sell itself to users; and enabling users to influence future improvements—are a model for other companies. Some of these principles are variations on old themes while others turn the conventional wisdom on its head.”

Although the entire article is excellent, I wanted to call out the section on “Modularity, not just modules”.  The emphasis is mine.

“While the prevailing view that big IT programs and systems should consist of modules is hardly new, the concept of modularity is often misunderstood. Just because a software developer claims that the various parts of its applications are modules does not mean that they are actually modular.

Modularity involves clearly specifying interfaces so that development work can take place within any one module without affecting the others. Companies often miss that point when developing enterprise systems.

For example, we know of an automobile company that had teams working on multiple modules of a new enterprise system and claimed to have a modular design.

However, one team was in charge of interfaces and was constantly changing them. Every alteration by this group forced all the other groups to spend huge amounts of time redoing the work they had already completed. Rather than limiting the impact of changes by embracing modularity, this company had actually amplified problems! 

A truly modular architecture allows designers to focus on building solutions to local problems without disturbing the global system. With small, modular pieces, the organization can purchase off-the-shelf solutions or turn to inside or outside developers for a certain piece, accelerating the speed of development. Modular architecture also makes it easier to upgrade the technology within modules once the system is up and running.

Breaking down and solving problems in this way offers a number of advantages beyond speed. It allows the IT team to concentrate on obtaining the lowest-cost solution for each part and (by partitioning work) reduces the impact of a single point of failure.

Clearly specifying the functions of modules and the interfaces makes it easier to build a module that can be reused in other applications. 

The modular approach was a critical part of achieving the bank’s strategy, as Dvivedi described it, “to scale up and expand into new activities with ease, to be able to service the needs of the organization as it grows from a baby into an adult…and avoid building capacity before we need it.” Take loan-processing capabilities.

The project team rolled out the capabilities in small stages for three reasons: to prove to management that the computer system would perform as promised, to avoid overwhelming managers and users with too much automation all at once, and to be able to address any technical issues quickly as they arose.

Accordingly, the team initially sought to show that the system could correctly approve credit for a small number of loans (20 to 30 a day). Then the team developed the capacity to fully process 200 to 300 loans a day. As the business grew, Shinsei eliminated manual work to reach a capacity for processing 6,000 loans a day. 

Thanks to the modular structure of the automated system, Shinsei can simply replace one part (the loan-application or credit-checking functions, for example) without affecting the rest. What’s more, modularity has allowed Shinsei to change its IT when appropriate or necessary without having to risk upsetting customers.

It can keep the customer interfaces (such as web pages or the format of the ATM screen) the same while changing the back-end systems.”

Besides the excellent real-world example in applying and benefiting from the architectural principles cited by Hohmann, this article also calls out the ability to source functionality at a modular level.  With the advent of cloud computing, and subsequent opening of service markets, there is even more motivation to design and implement a services architecture.  As Dave Linthicum advises, “leverage other people’s work”.

Lastly, I want to point out a sidebar, A Guide to Modularity, from a 1997 Harvard Business Review article, Managing in an Age of Modularity.  The premise of that article was to encourage managers outside of technology and manufacturing to embrace modularity practices in product development:

“By breaking up a product into subsystems, or modules, designers, producers, and users have gained enormous flexibility. Different companies can take responsibility for separate modules and be confident that a reliable product will arise from their collective efforts.”

A Guide to Modularity

“Modularity is a strategy for organizing complex products and processes efficiently. A modular system is composed of units (or modules) that are designed independently but still function as an integrated whole.

Designers achieve modularity by partitioning information into visible design rules and hidden design parameters. Modularity is beneficial only if the partition is precise, unambiguous, and complete. 

The visible design rules (also called visible information) are decisions that affect subsequent design decisions. Ideally, the visible design rules are established early in a design process and communicated broadly to those involved. Visible design rules fall into three categories: 

  • An architecture, which specifies what modules will be part of the system and what their functions will be. 
  • Interfaces that describe in detail how the modules will interact, including how they will fit together, connect, and communicate. 
  • Standards for testing a module’s conformity to the design rules (can module X function in the system?) and for measuring one module’s performance relative to another (how good is module X versus module Y?). 

Practitioners sometimes lump all three elements of the visible information together and call them all simply “the architecture,” “the interfaces,” or “the standards.” 

The hidden design parameters (also called hidden information) are decisions that do not affect the design beyond the local module. Hidden elements can be chosen late and changed often and do not have to be communicated to anyone beyond the module design team.”
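The split between visible design rules and hidden design parameters maps neatly onto code.  In the sketch below, the protocol and the conformance check are the “visible information” (architecture, interface, and standard), while the counting internals are “hidden information.”  All names here are illustrative, not from the article:

```python
from typing import Protocol, runtime_checkable

# Visible design rule: the interface every limiter module must honor.
@runtime_checkable
class RateLimiter(Protocol):
    def allow(self, key: str) -> bool: ...

# Visible standard: a conformance test (can module X function in the system?)
def conforms(module: object) -> bool:
    return isinstance(module, RateLimiter)

# Hidden design parameters: the windowing scheme and the data structure
# are local decisions that can be chosen late and changed often.
class FixedWindowLimiter:
    def __init__(self, capacity: int = 5) -> None:
        self._capacity = capacity
        self._counts: dict[str, int] = {}  # hidden: could become Redis, etc.

    def allow(self, key: str) -> bool:
        used = self._counts.get(key, 0)
        if used >= self._capacity:
            return False
        self._counts[key] = used + 1
        return True
```

Nothing outside the module depends on the dict of counts, so the team owning FixedWindowLimiter can rework it freely without communicating a thing.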

In respect to SOA, the design rule most often broken is identifying “what modules will be part of the system and what their functions will be.” 

The most common mistakes:

  • the service portfolio map’s scope is immediately reduced to “those services that will be re-used”
  • the service portfolio map mirrors the current software asset repository
  • the service portfolio map is a derived artifact from individual project plans and deliverables

In creating a service portfolio map, start with business capabilities, business processes and/or business information, and perform business analysis to identify key business concepts that will be represented by, and further partitioned into, services.  Depending on the granularity of your starting point, work two to four levels down.

For instance, if your starting point is Supply Chain, you’ll still be doing business analysis four levels down.  If your starting point is Warehouse Receiving, by the fourth level you are probably in implementation detail.  Fine for a Warehouse Receiving project, but too deep for your service portfolio map.
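As an entirely hypothetical sketch, a service portfolio map can be modeled as nested business capabilities, with a small helper to check how many levels down the partitioning has gone:

```python
# A hypothetical fragment of a service portfolio map: nested business
# capabilities, partitioned from broad (Supply Chain) to narrow.
portfolio = {
    "Supply Chain": {                     # starting point
        "Warehouse Management": {         # one level down
            "Warehouse Receiving": {      # two levels down
                "Inspect Shipment": {},   # three levels down: candidate services
                "Record Receipt": {},
            },
        },
    },
}

def depth(node: dict) -> int:
    """Count the levels in a (sub)tree of the portfolio map."""
    return 1 + max((depth(child) for child in node.values()), default=0)
```

A guard like depth() makes the “two to four levels” rule checkable: if a branch runs deeper than four levels from a broad starting point, you have likely drifted into implementation detail that belongs in a project plan, not in the portfolio map.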

With a clear understanding of the architectural aspects and full benefits of services, and a high-level service portfolio map, you can better position your organization to succeed in this new environment where services, and a services mindset, are assumed.