Saturday, April 2, 2016

New tri-polar public cloud world



It looks like the status quo of the public cloud world is about to change. The current clear leaders, Amazon Web Services and Microsoft Azure, have almost formed a bipolar public cloud world. However, another big player that was not so eager before is starting to challenge them: Google Cloud Platform.

In the public cloud's economy of scale, the number and global distribution of datacenters is absolutely crucial for leadership. It takes tremendous capital expenditure to build and wire that kind of global infrastructure, and huge operational expenditure to run it; only the biggest players can do this. It also takes a strong commitment to the cloud computing vision to invest in such an expensive game.

The expansion of Amazon's and Microsoft's datacenters all over the world is very impressive; you can easily see it for both companies: Microsoft Azure https://azure.microsoft.com/en-gb/regions/, AWS https://aws.amazon.com/about-aws/global-infrastructure/. Both are aggressively trying to cover the globe even better (even under the sea: http://www.nytimes.com/2016/02/01/technology/microsoft-plumbs-oceans-depths-to-test-underwater-data-center.html).

What Google has responded with in terms of public cloud datacenters wasn't that impressive; you can see the reach here: https://cloud.google.com/about/locations/. Yes, Google has great infrastructure supporting its own operations (search and other services), yet not its Cloud Platform.

However, there is great news: Google has raised its stake in its public cloud and is going to build datacenters in more than 10 new regions around the world by the end of 2017: https://cloudplatform.googleblog.com/2016/03/announcing-two-new-Cloud-Platform-Regions-and-10-more-to-come_22.html.

Why is this great news? Because tougher competition means better rates for cloud services for consumers and more innovation from all of those players. Each of them is big enough to make it, and none of them looks like it is "just experimenting" now.

Monday, March 21, 2016

Public clouds are transforming as fast as atmospheric clouds



Is your team developing any solutions hosted in public clouds? I mean any software that uses AWS or Microsoft Azure as infrastructure (IaaS), platform (PaaS), or both. If so, you have definitely noticed that the pace of change in AWS and Azure is extremely fast these days.

To see the speed of change, you can go through the recent announcement lists for both leading public cloud vendors, AWS: https://aws.amazon.com/ru/new/, and Microsoft: https://www.microsoft.com/en-us/server-cloud/roadmap/recently-available.aspx. Such speed is only possible because there are no traditional release cycles; they release several times per month or even per week. This is the DevOps world.

Of course, such a pace is a very significant factor for those who depend on public clouds (cloud solution providers, independent software vendors, enterprises, etc.). Both leaders, Amazon and Microsoft, are trying as hard as possible to stick to the commitments they give partners and customers regarding the direction they are moving in.

Microsoft, being very oriented toward the enterprise market, is even trying to make its roadmap clear and public; they recently announced the public Azure roadmap: https://blogs.technet.microsoft.com/server-cloud/2015/01/30/increasing-visibility-for-our-customers-announcing-the-cloud-platform-roadmap-site/. You can see the roadmap here: https://www.microsoft.com/en-us/server-cloud/roadmap.

I am not aware of any public roadmap for AWS at the moment. So what is the risk of having your public cloud provider evolve very fast, and why is it important to understand what set of technologies your public cloud will become one, two, or three years from now?

Of course, the services and APIs you use now, whether at the IaaS or PaaS level, will not become unsupported without notice (that would threaten your cloud provider's business image). However, it is very possible that the cloud technologies you are investing in right now will be superseded by successor technologies in the mid term, and there is no guarantee the two will be backward compatible.

Let's consider an example from Microsoft Azure: there is currently a big migration from IaaS v1 to IaaS v2. The new approach, called Azure Resource Manager, changes the picture a lot: the whole ideology, the API, the limits, all of this changes significantly (for the better). IaaS v1 and v2 coexist and are not compatible, so you have to choose which way to go. It may sound obvious to just select the most recent version; however, v2 does not yet support all of the Azure features that v1 supports, so right now with v2 you will be limited, but strategically you are winning.
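To make the difference concrete, here is a minimal sketch of what working "the v2 way" looks like: everything starts with a resource group and a declarative template that Azure Resource Manager deploys. This is my own illustration, assuming the azure-identity and azure-mgmt-resource Python packages; the subscription id, names, and empty template are placeholders, not a complete deployment.

```python
# A minimal ARM sketch (my illustration; subscription id, names, and the empty
# template are placeholders): create a resource group and deploy a template into it.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "<your-subscription-id>"  # placeholder
client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

# In the ARM (v2) model, every resource lives inside a resource group.
client.resource_groups.create_or_update("demo-rg", {"location": "westeurope"})

# Resources are described declaratively; VMs, networks, storage, etc. would go
# into the "resources" list of this template.
template = {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [],
}

poller = client.deployments.begin_create_or_update(
    "demo-rg",
    "demo-deployment",
    {"properties": {"mode": "Incremental", "template": template}},
)
poller.result()  # wait for the deployment to finish
```

In the classic (v1) model you would instead manage individual cloud services and VMs imperatively; in ARM the resource group and template become the unit you create, update, and delete together.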

To summarize: public cloud vendors are moving really fast, mostly due to extremely intense competition and a constantly evolving understanding of cloud computing. If you are part (especially the leader) of a team developing for the public cloud, you need to understand the trajectory along which the vendor's technologies are moving and have a strategy for keeping up.

Thursday, November 26, 2015

Main differences in managing public cloud-based software development projects




Nowadays more and more software development projects are based not on traditional on-premises infrastructure but on the public cloud, while other projects are still deciding whether or not to move. Let me briefly summarize the impact on the management aspects of such projects that I see as most crucial.

First of all, if you develop cloud-native software (rather than just moving your existing software to the cloud), the public cloud means your team can leverage cutting-edge industry approaches and solutions in a standardized, not home-brewed, way.

What does this mean? It means that a lot of sophisticated options for storing, accessing, and manipulating information are available to your development team in the way your CSP (Cloud Solution Provider) exposes that functionality, with all the abstraction helping you leverage it. For example, when you use the PaaS-level services of your CSP, you can easily take advantage of such non-trivial things as DR (Disaster Recovery), FT (Fault Tolerance), security, and so on simply as attributes of your whole product or solution.
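As one hedged illustration of "DR as an attribute" (my own example, not from any particular project; the bucket names and IAM role below are placeholders), with a managed storage service like Amazon S3 the disaster recovery copy becomes a configuration setting rather than code you write:

```python
# A minimal sketch (my illustration): turning on cross-region replication for an
# S3 bucket with boto3, so the DR copy is an attribute of the storage itself.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="my-project-data",  # placeholder source bucket (versioning must be enabled)
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/replication-role",  # placeholder IAM role
        "Rules": [{
            "ID": "dr-copy",
            "Status": "Enabled",
            "Prefix": "",  # replicate everything in the bucket
            "Destination": {"Bucket": "arn:aws:s3:::my-project-data-dr"},
        }],
    },
)
```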

Of course, you still need design, planning, and prototyping activities in the project plan, but the risks you need to manage here are typically lower. In effect, some of the technical risks are mitigated for you.

One more big point is infrastructure agility: you need to manage infrastructure provisioning for the different stages and environments of your project, depending on the project phase. At the beginning of the project your team will mostly need prototyping environments to prove the solution concept. Later, during development, you will need integration and testing environments to support your testing activities. Finally, you will need UAT (User Acceptance Testing), staging, and production environments.

Provisioning these environments should typically be in the project plan too. For traditional projects the duration of such provisioning activities can be really significant (days or more); in the cloud it can take minutes to hours and is often automated. It is also much easier to manage and track, since as a manager you can see (and even initiate) it all yourself through the CSP's self-service portal.
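For example, spinning up a tagged, short-lived test environment can be a few lines of scripted work against the CSP's API. The sketch below is my own illustration using the AWS SDK for Python (boto3); the AMI id, key pair, region, and tag values are placeholders:

```python
# A minimal sketch: provision one instance for an integration-testing environment,
# tag it so it shows up clearly in reports, and wait until it is running.
import boto3

ec2 = boto3.resource("ec2", region_name="eu-west-1")

instances = ec2.create_instances(
    ImageId="ami-xxxxxxxx",   # placeholder AMI id
    InstanceType="t2.micro",
    KeyName="project-key",    # placeholder key pair
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [
            {"Key": "project", "Value": "my-project"},
            {"Key": "environment", "Value": "integration-testing"},
        ],
    }],
)

instances[0].wait_until_running()
print("Environment ready:", instances[0].id)
```

The same script (or its terminate counterpart) can be run on demand, which is what shrinks "days of provisioning" down to minutes.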

If you and your team are using Agile development practices, then I think the cloud helps with one more thing: the ease of holding regular demos for stakeholders. The public cloud is an ideal platform for every kind of access (thanks to the "broad network access" essential characteristic of any cloud), and this external infrastructure gives your team and stakeholders a common playground.

On the other hand, there is also an implication of the public cloud that follows from the CSP being an external organization. You need to manage communication with CSP support from the very beginning of the project, and you have to mitigate the risk of downtime on the link to the CSP, as it becomes a critical project resource. For a deeper analysis of cloud-related risks, please read my other post, "Risk Identification in Cloud-based Software Development Projects".

Sunday, November 15, 2015

EU Personal Data: Safe Harbor vs Home Port



As you probably know, the Safe Harbor framework, which had governed European personal data in the US since 2000, was recently ruled insufficient by the ECJ (European Court of Justice). This changes the background for cloud computing consumers in Europe quite a lot.
OK, here is what this is all about, bit by bit:
  1. Europe has its own privacy laws. The protection of European citizens' personal data is regulated by the Data Protection Directive of 1995.
  2. There are a lot of transnational US businesses storing, aggregating, and analyzing global customer data in US-based datacenters. Such businesses can be at the infrastructure level (cloud services providers, hosting services, etc.), online services (social networks, blog platforms, search engines, etc.), e-commerce players, and so on.
  3. The Safe Harbor Privacy Principles were developed starting in 1998 and enacted in 2000 to make it possible for European personal data to travel across the Atlantic and be handled there in a safe manner.
  4. Over the last several years there have been revelations of NSA activities and USA PATRIOT Act enforcement that bypass European personal data protection and privacy laws.
  5. As ruled by the court in 2015, Safe Harbor is no longer sufficient to protect European citizens' personal data.
What are the most likely consequences? Will this help European CSPs rise and gain market share? Will it create jobs in Europe? Will it boost cloud consulting companies?

What we can see at the moment is that the big US CSPs are opening more datacenters to keep European data (and metadata) in Europe.


On one hand, the cloud infrastructure business is a mass market with a low-margin economy, and it is only possible to compete there with global scope and huge resources. So the Safe Harbor strike-down will probably not significantly help any new European players benefit from this situation; however, the new datacenters to be opened in Europe should add jobs in EU member countries.

On the other hand, Europe has its own strategy for cloud computing (https://ec.europa.eu/digital-agenda/en/european-cloud-computing-strategy), the C4E (Cloud-for-Europe) initiative, and the ECP (European Cloud Partnership, https://ec.europa.eu/digital-agenda/en/european-cloud-partnership) organization, so why not coordinate and implement something at the level of a pan-European CSP?

Wednesday, November 11, 2015

if(yourPublicCloud.isClosing)




With the news that HP Helion Public Cloud will close down in January 2016, we can once again reconsider the main public cloud risk factors we should always keep in mind.

Of course, this is not the first time a public cloud provider has gone dark, gotten out of the business, or simply shifted strategy so that its public cloud is discontinued (Nirvanix and Megacloud, to name a few). Since public cloud is a mass market, it takes a lot to compete in this low-margin area.

A public cloud provider has to be a really big player to support huge compute resources in datacenters across the globe. All the leaders in the area have them: AWS, Microsoft Azure, Rackspace, Google. To some degree, relying on a leading CSP (Cloud Services Provider) that currently demonstrates vision and strategy execution in the public cloud area can be considered significant risk mitigation (for example, you can see such CSPs positioned in the upper-right part of Gartner's Magic Quadrant).

Nevertheless, if the public cloud you are using is announced to be closing down, what are the factors that can make migrating away from it more expensive, hard, or even impossible, so that you get "locked in" to that cloud? I would say the main ones (but certainly not all) are:

  1. Using CSP-specific functionality that can't be easily migrated. This is a typical risk of the PaaS (Platform-as-a-Service) model as opposed to IaaS (Infrastructure-as-a-Service): you are not just dealing with virtual machine images you can export, import, or recreate; you are using vendor-specific services.
  2. Keeping a lot of data in the cloud. It is easy and cheap for data to get in, yet moving it out of the cloud is typically charged at a much higher rate (see the rough arithmetic after this list).
  3. Using the public cloud as your primary infrastructure could be considered a risk as well. It is one thing to just "burst" into the public cloud when you need compute elasticity, and another thing to be based fully in the public cloud.
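To illustrate point 2, here is a rough back-of-the-envelope calculation; the prices are assumed placeholder rates, not a quote from any CSP's price list:

```python
# A back-of-the-envelope sketch (rates are assumptions): why data egress can
# dominate the bill when you migrate away from a cloud that is closing down.
stored_tb = 50                      # data accumulated in the cloud, in TB
egress_price_per_gb = 0.09          # assumed outbound transfer price, USD per GB
storage_price_per_gb_month = 0.03   # assumed at-rest storage price, USD per GB-month

egress_cost = stored_tb * 1024 * egress_price_per_gb
monthly_storage_cost = stored_tb * 1024 * storage_price_per_gb_month

print(f"One-off egress to migrate out: ${egress_cost:,.0f}")
print(f"Regular monthly storage bill:  ${monthly_storage_cost:,.0f}")
# Moving everything out can cost several months' worth of the normal storage bill,
# which is exactly the lock-in effect described above.
```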

Tuesday, November 3, 2015

May the Cloud Force be with you. What the recent movie ticket services crash teaches us.

Have you heard how frustrating it was to order tickets for Star Wars: The Force Awakens? Big online services like Fandango, MovieTickets, AMC, Regal, Cinemark, and others across the globe crashed when fans flooded them trying to book tickets right after the announcement.
The "Cloud Force" could really help here. Since this was a significant peak in booking service consumption, it could be addressed perfectly by the cloud:

  • Rapid elasticity would make it possible to handle a sharply increased number of consumers without noticeable degradation of the service level. Computational nodes could be added automatically and transparently, and deprovisioned when no longer needed.
  • A hybrid cloud scenario would make it possible to borrow the required computational power from the public cloud without any need to invest in dedicated infrastructure.

Even if this peak had been unexpected, the cloud could have handled it based on utilization metrics; in this case, however, the spike was perfectly foreseeable, so the anticipated load could have been addressed with schedule-based elasticity triggers or even with manual provisioning.

Automatic elasticity works extremely smoothly if you deal with the cloud at the PaaS (Platform-as-a-Service) level; everything is handled for you mostly transparently. If you want to keep the resources under your control at the IaaS (Infrastructure-as-a-Service) level, the features that enable automatic elasticity for some of the most popular public cloud providers would be:

  • Amazon Web Services: Auto Scaling, Elastic Load Balancing, Auto Scaling Group
  • Microsoft Azure: Cloud Services, Azure Load Balancer
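As a concrete illustration of a schedule-based elasticity trigger on AWS (my own sketch; the group name, region, capacities, and timestamp are placeholders), an existing Auto Scaling group can be told in advance to grow shortly before the expected rush:

```python
# A minimal sketch: schedule a scale-out of an existing AWS Auto Scaling group
# ahead of a known spike, using boto3.
import boto3
from datetime import datetime, timezone

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="ticketing-web-asg",   # placeholder group name
    ScheduledActionName="pre-sale-scale-out",
    StartTime=datetime(2015, 12, 17, 23, 0, tzinfo=timezone.utc),  # placeholder time before the rush
    MinSize=10,
    MaxSize=100,
    DesiredCapacity=40,
)
# After the rush, a second scheduled action (or the usual metric-based policy)
# shrinks the group back down so the idle capacity is released.
```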

Basically, the cloud would not only help handle such peaks in usage and thrive; it would also do it in a really cost-effective way, with no need for statically assigned, mostly idling infrastructure.

Some of the services posted public apologies afterwards.

Tuesday, June 2, 2015

Cost Management for Projects That Use Public Cloud



Project cost management 


Project cost is one of the biggest aspects of project management, and it is typically handled through a set of well-established approaches. According to PMI, the processes that make up project cost management are:

  1. Resource Planning
  2. Cost Estimating
  3. Cost Budgeting
  4. Cost Control

Let's consider how public cloud-associated costs can be addressed within this framework (cloud costs will be only a part of all project costs, yet we will focus on them specifically).

Public cloud resource planning 


The public cloud can be considered as a set of resources that depend on the cloud service model: PaaS (Platform-as-a-Service), IaaS (Infrastructure-as-a-Service), or SaaS (Software-as-a-Service). More specifically, those resources would be:

  1. IaaS: server instance usage, storage space, network bandwidth, inbound and outbound traffic, load balancing, etc.
  2. PaaS: developer accounts, technology/service/framework usage, etc.
  3. SaaS: user accounts, application/service usage, etc.

Resource requirements should then be defined based on the type of cloud used.

Public cloud cost estimating 


Once we have the resource requirements defined, we can use our WBS (Work Breakdown Structure), activity duration estimates, and resource rates to estimate the cloud-related costs of the project.

Resource rates are provided by the CSP (Cloud Services Provider), and sometimes the cost structure is not very easy to grasp. For example, for AWS (Amazon Web Services) EC2 it looks like this. You have to decide on a lot of things to be able to calculate the costs, and most likely this will require involvement from the project's system architect or tech lead. (There are also some online calculators, yet the process is still not that simple.)
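Here is a hedged sketch of what such an estimate can look like once the architect has picked the resources; all rates, quantities, and durations below are made-up placeholders, not real AWS prices:

```python
# A rough estimation sketch: turning resource requirements and the schedule into
# a cloud cost estimate for one testing environment.
HOURS_PER_MONTH = 730

estimate = {
    # resource: (quantity, price per unit per month in USD)
    "m4.large instances":    (4,   0.12 * HOURS_PER_MONTH),  # assumed hourly rate
    "block storage (GB)":    (500, 0.10),
    "outbound traffic (GB)": (200, 0.09),
    "load balancer":         (1,   0.03 * HOURS_PER_MONTH),
}

months_in_testing_phase = 2  # taken from the WBS / project schedule

monthly_total = sum(qty * price for qty, price in estimate.values())
phase_total = monthly_total * months_in_testing_phase

print(f"Estimated monthly cloud cost: ${monthly_total:,.2f}")
print(f"Estimated cost for the {months_in_testing_phase}-month testing phase: ${phase_total:,.2f}")
```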

Public cloud cost budgeting 


Once the estimation of the required resources is done, we can combine it with the WBS and the project schedule to come up with a cost baseline.

The cost baseline for the whole project will include many different costs; we are considering only the cloud-related ones here.

As the baseline stretches over time, it passes through the different phases of the project, and each phase will require its own specific amount of cloud resources. It is typical to expect that at the beginning of the project the team will need cloud resources for prototyping, while closer to the end of the project a lot of testing will be performed in the cloud. Speaking of testing activities in the cloud, it makes sense to mention that load, scalability, and performance testing can be the most resource-consuming, and hence can require a significant part of the whole project cloud budget.
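Here is a minimal sketch of a time-phased, cloud-only baseline built from per-phase estimates; the phases and figures are made-up assumptions for illustration:

```python
# A minimal sketch: a cloud-only cost baseline laid out over project phases.
phases = [
    # (phase name, months, estimated cloud cost per month in USD)
    ("Prototyping",            2,  300),
    ("Development",            4,  900),
    ("Testing (incl. load)",   2, 2500),   # load/performance testing dominates
    ("UAT / staging / launch", 1, 1500),
]

cumulative = 0
print(f"{'Phase':<24}{'Months':>8}{'Phase cost':>12}{'Cumulative':>12}")
for name, months, per_month in phases:
    phase_cost = months * per_month
    cumulative += phase_cost
    print(f"{name:<24}{months:>8}{phase_cost:>12,}{cumulative:>12,}")
# The 'Cumulative' column is the cloud portion of the cost baseline against which
# actual spending can later be tracked.
```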

Public cloud cost control


This part of the cost management process set may be the best fit for the current state of the art in public clouds. Since one of the cloud's essential characteristics is "measured service" (see the NIST cloud definition I mentioned in my previous post, Cloud-related projects: when your backend is really based on cloud services?), you are usually in full control of the cloud costs at any moment. This means you can use very structured reports to see how cloud spending is attributed, which helps a lot with revising estimates and making the necessary updates or corrective actions. This is where the public cloud typically shines.
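As a hedged sketch of that feedback loop (the file name and column names below are my assumptions based on a generic detailed billing export, not a specific CSP's report format), attributing actual spending by tag and comparing it with the baseline can be a short script:

```python
# A minimal sketch: sum actual cloud spending per environment tag from a billing
# export, so it can be compared against the per-phase baseline.
import csv
from collections import defaultdict

spend_by_environment = defaultdict(float)

with open("billing-report.csv", newline="") as report:   # placeholder export file
    for row in csv.DictReader(report):
        env = row.get("environment_tag") or "untagged"    # assumed column name
        spend_by_environment[env] += float(row["cost"] or 0)  # assumed column name

for env, cost in sorted(spend_by_environment.items(), key=lambda item: -item[1]):
    print(f"{env:<20} ${cost:,.2f}")
# Comparing these actuals with the estimates in the baseline is the cost control
# feedback loop: spot variances early and take corrective action.
```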