Thursday, November 26, 2015

Main differences in managing public cloud-based software development projects




Nowadays more and more software development projects are based not on traditional on-premises infrastructure but on the public cloud. Other projects are still under consideration: to move or not to move. Let me briefly summarize the impacts on the management of such projects that I see as the most crucial.

First of all, if you develop cloud-native software (not just move your existing software to the cloud), the public cloud means your team can leverage cutting-edge industry approaches and solutions in a standardized, rather than home-brewed, way.

What does this mean? It means that many sophisticated options for information storage, access, and manipulation are available to your development team in the way your CSP (Cloud Solution Provider) exposes that functionality, with all the abstraction helping you to leverage it. For example, when you use PaaS-level services of your CSP you can easily get such non-trivial things as DR (Disaster Recovery), FT (Fault Tolerance), and security simply as attributes of your whole product/solution.
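To make this concrete, here is a minimal sketch of "DR as an attribute", assuming an object storage service managed through an SDK (AWS S3 via boto3 in this illustration; the bucket names and IAM role are hypothetical). Durability and replication are switched on as configuration, not built by the team:

```python
import boto3

s3 = boto3.client("s3")

# Versioning protects the data against accidental deletes and overwrites.
s3.put_bucket_versioning(
    Bucket="my-app-data",
    VersioningConfiguration={"Status": "Enabled"},
)

# Cross-region replication keeps a copy of every object in another region
# (the destination bucket must already exist with versioning enabled),
# which is essentially a DR capability enabled by configuration alone.
s3.put_bucket_replication(
    Bucket="my-app-data",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/replication-role",
        "Rules": [{
            "ID": "dr-copy",
            "Prefix": "",
            "Status": "Enabled",
            "Destination": {"Bucket": "arn:aws:s3:::my-app-data-dr"},
        }],
    },
)
```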

Of course, design/plan/prototype activities are still required in the project plan, but the risks you need to manage here are typically lower. In effect, some of the technical risks are mitigated for you.

One more big point is infrastructure agility: you need to manage infrastructure provisioning for the different environments of your project, depending on the project phase. At the beginning of the project your team will mostly need prototyping environments to prove the concept of the solution. Later, during development, you will need integration and testing environments to support your testing activities. Finally, you will need UAT (User Acceptance Testing), staging, and production environments.

Provisioning of these environments should typically be in the project plan too. For traditional projects the duration of such provisioning activities can be really significant (days or more); in the cloud it can take minutes to hours and is often automated. It is also much easier to manage and track, as you as a manager can see (and even initiate) it all yourself through the CSP's self-service portal.
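As an illustration of how short and scriptable such provisioning can be, here is a minimal sketch assuming AWS CloudFormation via boto3; the stack name, template URL, and parameters are hypothetical placeholders for your own environment definition:

```python
import boto3

cf = boto3.client("cloudformation")

# One call provisions a whole named environment (network, VMs, databases)
# from a template; the same template can be reused for test, UAT and staging.
cf.create_stack(
    StackName="myproject-test-env",
    TemplateURL="https://s3.amazonaws.com/myproject-templates/env.yaml",
    Parameters=[{"ParameterKey": "EnvironmentName", "ParameterValue": "test"}],
)

# Provisioning progress can be watched (or scripted in CI) and typically
# completes in minutes rather than days.
cf.get_waiter("stack_create_complete").wait(StackName="myproject-test-env")
```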

If you and your team are using Agile development practices, then I think the cloud has one more thing that really helps: the ease of running regular demos for the stakeholders. The public cloud is an ideal platform for every kind of access (thanks to the "broad network access" essential feature of any cloud), and this external infrastructure gives your team and stakeholders a common playground.

On the other hand, the public cloud also has implications stemming from the fact that the CSP is an external organization. You need to manage communication with CSP support from the very beginning of the project, and you have to mitigate the risks of CSP link downtime, as connectivity to the CSP becomes a critical project resource. For a deeper analysis of cloud-related risks, please read my other post "Risk Identification in Cloud-based Software Development Projects".

Sunday, November 15, 2015

EU Personal Data: Safe Harbor vs Home Port



As you probably know, the Safe Harbor framework, which had governed European personal data transferred to the US since 2000, was recently ruled invalid by the ECJ (European Court of Justice) as providing insufficient protection. This changes the background for cloud computing consumers in Europe a lot.
OK, here is what this is all about, bit by bit:
  1. Europe has its own privacy laws. Protection of European citizens' personal data is regulated by the Data Protection Directive of 1995.
  2. There are a lot of transnational US businesses that store, aggregate, and analyze global customer data in US-based datacenters. Such businesses can be at the infrastructure level (cloud services providers, hosting services, etc.), online services (social networks, blog platforms, search engines, etc.), e-commerce players, and so on.
  3. The Safe Harbor Privacy Principles were developed starting in 1998 and enacted in 2000 to make it possible for European personal data to travel across the Atlantic and be handled there in a safe manner.
  4. Over the last several years, revelations of NSA activities and of USA PATRIOT Act enforcement have shown that European personal data protection and privacy laws are being bypassed.
  5. Safe Harbor is no longer sufficient to protect European citizens' personal data, as ruled by the court in 2015.
What will be the most likely consequences? Will it help European CSPs to rise and gain market share? Will this create jobs in Europe? How will this boost cloud consulting companies?

What we can see at the moment is that the big US CSPs are opening more datacenters to keep European data (and metadata) in Europe.


On the one hand, the cloud infrastructure business is a mass market with a low-margin economy; it is only possible to compete there with global scope and huge resources. So the Safe Harbor strike-down will probably not significantly help any new European players to benefit from this situation; however, the new datacenters to be opened in Europe should add more jobs in EU member countries.

On the other hand, Europe has its own strategy for cloud computing (https://ec.europa.eu/digital-agenda/en/european-cloud-computing-strategy), the C4E (Cloud-for-Europe) initiative, and the ECP (European Cloud Partnership, https://ec.europa.eu/digital-agenda/en/european-cloud-partnership) organization, so why not coordinate and implement something at the level of a pan-European CSP?

Wednesday, November 11, 2015

if(yourPublicCloud.isClosing)




With the news that HP Helion Public Cloud will close down in January 2016, we can again reconsider the main public cloud risk factors we should always keep in mind.

Of course, this is not the first time a public cloud provider has gone dark, gotten out of the business, or simply shifted strategy so that its public cloud is discontinued (Nirvanix and Megacloud, to name a few). Since the public cloud is a mass market, it takes a lot to compete in this low-margin area.

A public cloud provider has to be a really big player to support huge compute resources in data centers across the globe. All the leaders in the area have them: AWS, Microsoft Azure, Rackspace, Google. To some degree, relying on a leading CSP (Cloud Services Provider) that currently demonstrates vision and strategy execution in the public cloud area can be considered a significant risk mitigation (for example, you can see those CSPs positioned in the upper-right part of Gartner's Magic Quadrant).

Nevertheless, if the public cloud you are using is announced to be closing down, what are the factors that can make migration from it more expensive, hard, or even impossible, so that you get "locked in" to that cloud? I would assume the main ones (but surely not all) are:

  1. Using CSP-specific functionality that can't be easily migrated. This is a typical risk of the PaaS (Platform-as-a-Service) model as opposed to IaaS (Infrastructure-as-a-Service): you are not just dealing with some virtual machine images you can export/import/recreate; you are using vendor-specific services.
  2. Keeping a lot of data in the cloud. Getting data in is easy and cheap, yet moving it out of the cloud is typically charged at a much higher rate (see the export sketch after this list).
  3. Using the public cloud as your primary infrastructure could be considered a risk as well. It is one thing to "burst" into the public cloud when compute elasticity is needed; it is another thing to be fully based in the public cloud.
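To illustrate the data point, here is a minimal export sketch assuming the data lives in an AWS S3 bucket (the bucket name and local path are hypothetical); even a simple loop like this is billed per gigabyte of outbound traffic, which is what makes leaving with a lot of data expensive:

```python
import os
import boto3

s3 = boto3.client("s3")
bucket = "myproject-production-data"

# Walk the whole bucket and download every object to local storage.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        if key.endswith("/"):  # skip zero-byte "folder" markers
            continue
        local_path = os.path.join("export", key)
        os.makedirs(os.path.dirname(local_path) or ".", exist_ok=True)
        # Every byte downloaded here counts as chargeable egress traffic.
        s3.download_file(bucket, key, local_path)
```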

Tuesday, November 3, 2015

May the Cloud Force be with you. What the recent movie ticket services crash teaches us.

Have you heard how frustrating it was to order tickets for the upcoming Star Wars: The Force Awakens? Big online services like Fandango, MovieTickets, AMC, Regal, Cinemark, and others across the globe were crashing when fans flooded them trying to book tickets right after the announcement.
“Cloud Force” could really help here. As this was a significant peak in booking service consumption, it could be addressed perfectly by the cloud:

  • Rapid elasticity would allow handling a sharply increased number of consumers without noticeable degradation of the service level. Computational nodes could be added automatically and transparently, and deprovisioned when not needed any more.
  • A hybrid cloud scenario would allow borrowing the required computational power from the public cloud without any need to invest in dedicated infrastructure.

Even if this peak had been unexpected, the cloud could have handled it based on utilization metrics; in this case, however, the spike was perfectly foreseeable, so the anticipated load could be addressed with schedule-based elasticity triggers or even with manual provisioning.
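For example, here is a minimal sketch of a schedule-based trigger, assuming an existing AWS Auto Scaling group (the group name, sizes, and dates are hypothetical): capacity is raised ahead of the known on-sale moment and lowered once the rush is over:

```python
from datetime import datetime

import boto3

autoscaling = boto3.client("autoscaling")

# Scale out shortly before the announced on-sale time.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="ticketing-web",
    ScheduledActionName="presale-scale-out",
    StartTime=datetime(2015, 12, 17, 11, 0),
    MinSize=20,
    MaxSize=100,
    DesiredCapacity=40,
)

# Scale back in once the rush is expected to be over.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="ticketing-web",
    ScheduledActionName="presale-scale-in",
    StartTime=datetime(2015, 12, 18, 11, 0),
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
)
```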

Automatic elasticity comes extremely smoothly if you deal with the cloud at the PaaS (Platform-as-a-Service) level; everything will be handled for you mostly transparently. If you want to keep the resources under your control at the IaaS (Infrastructure-as-a-Service) level, the features that enable automatic elasticity for some of the most popular public cloud providers would be (see the metric-based sketch after the list):

  • Amazon Web Services: Auto Scaling, Elastic Load Balancing, Auto Scaling Group
  • Microsoft Azure: Cloud Services, Azure Load Balancer
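As a sketch of the metric-based variant on AWS (the group name and thresholds are hypothetical, and the Auto Scaling group is assumed to exist already), a CloudWatch alarm on CPU utilization can fire a scaling policy that adds instances automatically:

```python
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# Scaling policy: add two instances each time it is triggered.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="ticketing-web",
    PolicyName="scale-out-on-cpu",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=2,
    Cooldown=300,
)

# Alarm: average CPU above 70% for two 5-minute periods fires the policy.
cloudwatch.put_metric_alarm(
    AlarmName="ticketing-web-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "ticketing-web"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```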

Basically, the cloud would not only help to handle such peaks in usage and thrive; it would also do so in a really cost-effective way, without any need for statically assigned, mostly idle infrastructure.

Some of the services later posted apologies for the outages.