Channel: Private Cloud – Enterprise Irregulars

CA Buys 3Tera – Quick Analysis


Last week, CA announced that it had purchased 3Tera, a cloud management provider. 3Tera was one of the early whacks at what became known as “cloud computing,” building out their offering before the term was widely used. Their basic model of modeling applications and deployments on top of virtualized pools of resources – or fabrics, or grids, depending on your diction – has been used by others seeking to provide the management software needed to run clouds.

Enterprises are apparently driving vendors to provide private cloud offerings, and CA’s purchase of 3Tera feeds right into this market. While CA had purchased the left-overs from Cassatt, clearly more technology – and a pre-built customer base – was needed.

It’s also noteworthy that – last I checked – 3Tera was working with service providers who were looking to offer clouds.

(I also discussed the acquisition in this week’s IT Management and Cloud podcast episode.)

Action Plan: CA

What I’d expect to see from CA is more noise when it comes to providing tooling for private clouds to large enterprises. They now have a very real (for how young cloud computing is) cloud stack that has a track record. The challenge for CA – as it would be for any Big 4 vendor – will be relating their cemented-in IT Service Management, BSM, and ITIL-acculturated tools to an IT culture that can take full advantage of cloud-driven infrastructure.

Action Plan: Competitors

For other vendors in this space, the question of how seriously to take cloud offerings is pushed further. Just looking at the Big 4:

(Read the full article @ Coté's People Over Process)

CA Buys 3Tera – Quick Analysis is copyrighted by . If you are reading this outside your feed reader, you are likely witnessing illegal content theft.

Enterprise Irregulars is sponsored by Salesforce.com and Workday.


Private (Cloud) Phantasies


I am hearing plenty of conversations around private clouds. The basic theme is “we will virtualize our processing and storage and get many of the benefits of public clouds”.  And, of course, “we will have none of the security and service level issues with public clouds.”

Incumbent application vendors encourage that thinking as a way of deflecting attention from their own bloated piece of the budget. Outsourcers see that as a way to sell VM services. Life goes on.

But here is what clients are signing up for with private clouds:

Few tax or energy or other scale efficiencies

In my upcoming book, Mike Manos, who helped Microsoft build out its Azure cloud data centers, says:

“Thirty-five to 70 individual factors, like proximity to power, local talent pool, are considered (as location factors) for most centers. But three factors typically dominate: energy, telecommunications, and taxes. In the past, cost of the physical location was a primary consideration in data center design.”

So the locations that Amazon, Google, Yahoo! and other cloud vendors have chosen for their data centers reflect aggressive tax and telecommunication negotiations. Their global networks of data centers also allow them to do what Mike calls “follow the moon” – scouting every so often for the cheapest locations to run energy-intensive computing.

Additionally, these new data centers have massive machine-to-man efficiency ratios – 3,000 to 5,000 servers for every data center staff member.
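To make that ratio concrete, here is a quick back-of-the-envelope sketch. The 3,000–5,000 figure comes from the text; the 1:150 enterprise ratio is my own illustrative assumption, not from the article:

```python
def staff_needed(servers: int, servers_per_admin: int) -> int:
    """Admins required to run a fleet of the given size, rounding up."""
    return -(-servers // servers_per_admin)  # ceiling division

fleet = 100_000
cloud_staff = staff_needed(fleet, 4_000)     # mid-point of 3,000-5,000
enterprise_staff = staff_needed(fleet, 150)  # assumed typical enterprise ratio

print(cloud_staff)       # 25
print(enterprise_staff)  # 667
```

At a typical enterprise ratio the same fleet needs more than 25 times the staff, which is the efficiency gap the paragraph above is pointing at.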

Few clients will get any of these efficiencies from their own data center or even from their “private cloud” outsourcing or hosting provider.

Little AM leverage

The last significant productivity gain in application management most companies saw was through offshore labor arbitrage. Companies have gradually seen that dwindle with wage inflation, younger offshore staff, and related turnover. And the irony is that even in the massive campuses in Bangalore and elsewhere, there is no real management “scale”. If you walk into one of those buildings you see fortified floors which cordon off each customer’s outsourced team. There is very little sharing of staff or tasks across clients. Compared to cloud AM models, that is hugely “unshared” and inefficient.

Persistence of broken application software maintenance and BPO models…

(Read this and other articles @ Deal Architect)

Private Cloud as a Stepping Stone


I’ve not been keen on the notion of private cloud — I think it’s often a misnomer, an attempt to pick-and-choose from the cloud computing model in a way that eliminates many of the benefits. But I have grudgingly come to accept that private cloud may have some uses as part of a strategy of introducing cloud computing into a largely on-premise enterprise IT infrastructure.

This formed the subject of my recent webinar discussion with IBM vice president Jerry Cuomo, who is CTO of WebSphere, How Can the Cloud Fit Into Your Applications Strategy?, part of the Cloud QCamp online conference.

As I said in the webcast, most enterprises for the foreseeable future will continue to maintain important and substantial off-cloud assets. They can’t just switch off the lights, junk everything they’ve invested in and migrate it all to the cloud in one fell swoop. Instead, they’ll introduce cloud computing gradually into the mix of IT resources they draw upon, and as their usage of the cloud increases, they’ll find themselves managing a hybrid environment in which cloud-based assets will coexist and interact with on-premise IT assets.

In doing so, they’ll have to tackle three different integration challenges:

  • Migration. Transferring software assets and processes between on-premise and cloud environments. They’ll need as far as possible to automate the process of migrating assets on and off the cloud, so that it can act as a seamless extension to the on-premise infrastructure.
  • Integration. Data exchange and process workflow between cloud and on-premise systems. As a first step, they’ll probably rely initially on point-to-point integrations. But they will soon find a need to implement some form of mediation technology if the integration is to remain manageable and cost-effective…
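The move from point-to-point integrations to mediation in the second bullet is the classic n-squared argument: pairwise links grow quadratically while a mediation hub needs only one adapter per system. A small sketch (the system counts are illustrative):

```python
def point_to_point_links(n: int) -> int:
    """Pairwise connections needed to wire n systems directly together."""
    return n * (n - 1) // 2

def mediated_links(n: int) -> int:
    """Adapters needed when every system connects only to a central hub."""
    return n

for n in (5, 10, 20):
    print(n, point_to_point_links(n), mediated_links(n))
# 5 systems: 10 vs 5; 10 systems: 45 vs 10; 20 systems: 190 vs 20
```

As cloud and on-premise systems multiply, the direct-wiring cost quickly outruns the hub cost, which is why the integration stays manageable only with some form of mediation.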

Read the complete article @ The Connected Web

Sorting out Microsoft’s clouds – Quick Analysis


Microsoft expanded its cloud offerings today, answering the call for “private cloud.”

Our strategy is to provide the full range of cloud capabilities in both public and private clouds.
Robert Wahbe

After today’s announcements, Microsoft has at least three cloud options for you: a public cloud that’s mostly a platform as a service (Azure), a private cloud in limited release (Windows Azure Appliance), and an outline for building a private infrastructure as a service cloud (“Private Cloud Deployment Kit”).

This is all notable as Microsoft has, until now, really only been known for the first, Azure, which provides a bundle of services for developing applications in several programming languages. Azure remains the only one of these clouds that’s widely, if not generally, available.

I’m a bit unclear on the “Private Cloud Deployment Kit,” and so far there’s not enough Google juice on whatever solid pages are up to find anything. While there’s a whole slew of .docx’s and .pptx’s on a Microsoft cloud site, the “solution” nature makes narrowing down a specific offering a bit, well, enterprise-y. Which is surprising coming from Microsoft, which is usually very good at not being so.

Evaluating Private Clouds

For private cloud, saving money is your main concern because you’re still worrying about everything.

For the newly announced Windows Azure Appliance, Microsoft is pairing its Azure software offerings with three hardware partners: Dell, HP, and Fujitsu. While they don’t call it a “private beta,” the “limited production release” makes it effectively so, in the Web 2.0 sense at least. This means you’ll need a special relationship with Microsoft (or one of its partners) to get the Windows Azure Appliance.

Would it be worth it? It’s difficult to tell yet. Once the pricing and final specs are out, you could conceivably compare it to other offerings. For a private cloud, the only thing that really matters is pricing and TCO.

With a private cloud you’re still: managing your cloud, paying for and doing any geographic dispersal (and managing the on-the-ground government hijinks there), and stuck on upgrade cycles, getting hung up on your own fears about upgrading versus staying with what works….

In summary, with a private cloud you’re not getting the advantages of having someone else run the cloud infrastructure.

Clearly, if a private cloud is better than some calcified mess you’re in, then sure. But, the question at the back of your mind should always be: why not make it public cloud? I’m pretty sure there’ll be many legit reasons for several years to come – but things are murky at this point – maybe if you come up against them, you could share them and we could start cutting through the fog.

Nicely, Microsoft’s cloud-based desktop management offering Intune is good context here. Imagine if running all that desktop management infrastructure was no longer your concern. Intune is gated for just small businesses at the moment, but it’s clearly something that’d be appealing to enterprises.

¡Dale Gas!

(Read the full article @ Coté's People Over Process)

The Transient Nature Of Private Clouds


An interesting thread is now underway within the Enterprise Irregulars group on what constitutes a private cloud – as usual, a very enlightened discussion therein. The issue I want to talk about is this: if private clouds do indeed exist, then what is their adoption path? Let’s start from the beginning: can we use the term “cloud” to describe the changes that happen inside IT architectures within the enterprise? Though there can be no definitive answer, a series of transitions to a new order of things will, in my opinion, become imminent.

The pressures on IT and the engulfing sense of change in the IT landscape are hard to overlook. These pressures mean more organizations will begin to look seriously at SaaS, re-negotiate license terms, rapidly adopt virtualization, and so on. As part of this and beyond, internal IT will be forced more and more to show bang for the buck, and it is my view that organizations will begin to question committed costs and attack them more systematically, where earlier only sporadic efforts marked their endeavors. This could also unlock additional resources that could potentially go towards funding new initiatives. Enough enterprises are going this route, and their service partners are in some cases prodding them this way.

The change may in many senses make IT inside enterprises look, behave, and perform like a cloud computing provider – though there will be limitations (in most places serious ones) on scale, usage assessment, security, and the like. There are strong incentives propelling enterprises to channel their efforts and investments over the next few years into mimicking a private cloud service architecture that they manage internally. This could well become their way of staging towards finally embracing the (public) cloud over a period of time. These baby steps to nearly full-blown efforts are needed to prepare organizations to embrace clouds; it may not be feasible to make the shift from on-premise to cloud like flipping a switch. Serious licensing issues, maturity, lack of readiness, integration concerns, and security all come in the way of enterprises looking at the public cloud in a holistic way. These steps need not be looked down on – they could very well become the foundation for moving into public clouds in a big way.

Let’s for a moment assess this theme from a security perspective – a dominant concern business expresses when it comes to clouds. When assessing security requirements in public clouds, we see the recognition that a whole host of changes need to be made at the application architecture level, along with the need to accommodate specific compliance requirements, privacy provisions in the public cloud, and so on.

Let’s think this through: setting up a private cloud is a motherhood statement at best (in many organizational surveys, setting up private clouds is not in the CIO’s top three priorities – if anything, virtualization finds a place). To make it happen in a credible way means re-examining most parts of IT functioning and the business-IT relationship inside enterprises. IT teams conceptualizing private clouds are happy to retain existing architectural designs and happily propose a classical DMZ/perimeterized model for providing security and enabling access, too often leveraging a highly virtualized infrastructure. More often than not, it’s enabling virtualization, automation, and self-service and coloring it as a private cloud. Do recognize the implicit differences in constructing a private cloud versus a public cloud. Comfort with the status quo, with some adjustments, versus an opportunity to rethink architecture, security, privacy, and compliance needs: that in a way summarizes the difference in thought process and expected results between private and public clouds. Speaking more directly, public clouds present the opportunity for enterprises to review and achieve specific requirements in areas like agility, flexibility, and efficiency at optimal effort, versus a skewed, boxed implementation of a private cloud setup. Taking advantage of public cloud benefits would far outweigh the advantages of getting boxed in with private clouds.

Most elements of the bedrock get affected – the processes, culture, metrics, performance, funding, service levels, and so on. Well-thought-out frameworks and roadmaps need to be put in place to make this transition successful. These frameworks need to cater not only to setting up an internal cloud but eventually to help in embracing the public cloud over the years – not as easy a task as it appears. A few of the organizations that master this transition may also look at making a business out of it – so it’s a journey that needs to be travelled on to embracing public clouds. Some businesses may take a staged approach and call it a private cloud, internal cloud, or whatever, but eventually the road may lead into public clouds!

(Cross-posted @ Sadagopan's weblog on Emerging Technologies,Thoughts, Ideas,Trends and The Flat World)

Eucalyptus 2.0


Recently, I had the chance to catch up with the Eucalyptus team, getting an overview of the new release and then a demo:

Interview

Mårten Mickos gives us the quick rundown of features: performance and scale improvements, storage versioning (as done with S3), and other enhancements and fixes.

Demo

(Read the full article @ Coté's People Over Process)

SAP Private Cloud Work and Big Data – Press Pass


Image credit: Rackspace

I talk with the press frequently. They thankfully whack down my ramblings into concise quotes. For those who prefer to see more, I try to publish slightly polished-up conversations I have with the press in this new category of posts: Press Pass.

SAP made several announcements around virtualization, private cloud, and in-memory analytics today, during SAP TechEd. I spoke with a couple of reporters on the topic, mostly sorting out “what it all means.”

Making SAP Applications (Private) Cloud-ready

Here’s a conversation I had with Chris Kanaracus around his story this afternoon covering announcements around “a new solution designed to manage SAP system landscapes in data centers using modern virtual infrastructure and clouds.”

SAP announced the start of a project to, as I understand it, make their applications run better on highly virtualized environments, or “private clouds.” (“Private cloud” has all but consumed “virtualization” for the most part at the moment, so it’s marginally safe to use those concepts interchangeably at this high a level.) Along with that re-working/optimizing, they have some additional management software targeted at that type of deployment.

I don’t have any figures or data, but the obvious question to ask is how many SAP installs are already running on such environments – highly virtualized ones, if not “private clouds.” I’d guess from SAP starting this project, not many. VMware had some nice numbers around the types of applications running on their stack about a month ago, covered by Jon Brodkin recently: Exchange and SQL Server topped the list (see Bob Warfield’s SaaS-y take), while only 18% had virtualized SAP applications.

Why would SAP do its own management software instead of having partners just do it? Well, most application companies like to bundle their own management tools – Oracle does, for example [hello William!]. Especially when it comes to things like installation, provisioning, and configuring things within the application, not just the infrastructure supporting it. Not knowing exactly what the “landscape management solution” would do, it’s tough to say. But in theory, if an application as beefy as SAP’s was running on top of a “private cloud” (or a “highly virtualized environment”), there would be a lot of application-level management needed beyond the usual event management, green-light/red-light monitoring, and service request management. Indeed, the announcement lists functionality such as:

[S]ystem clone and copy framework, automated capacity management, SAP landscape visibility in all layers, and capabilities intended to help simplify the provisioning and management of SAP systems in virtual and cloud infrastructures.

Those are all things you’d worry about when running an application in a highly virtualized (or “private cloud”) environment. Much of it has to do with automating otherwise manual tasks – like on-boarding new users and services, allocating the infrastructure for them, etc. For example, in the retail space, if a new store popped up, you’d have to do provisioning within various SAP systems to set up new employees, track inventory, etc. In the utopic cloud world, much of that would be self-service, automated with “private cloud” voodoo. Like setting up a Google Apps corporate instance, or a Salesforce.com account.
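The retail example can be sketched as a tiny, purely hypothetical fan-out of one business event into the provisioning tasks that would otherwise be manual (no real SAP or cloud API is involved; the event shape and task names are all invented for illustration):

```python
def provision_new_store(event: dict) -> list[str]:
    """Expand a 'new store' business event into provisioning tasks.

    In a self-service cloud model, something like this would run
    automatically instead of being a pile of manual tickets.
    """
    tasks = [f"allocate-vm:{event['store_id']}"]                    # infrastructure
    tasks += [f"create-user:{emp}" for emp in event["employees"]]   # on-boarding
    tasks.append(f"init-inventory:{event['store_id']}")             # app-level setup
    return tasks

event = {"store_id": "store-042", "employees": ["ana", "raj"]}
for task in provision_new_store(event):
    print(task)
```

The point is the shape of the automation, not the task list: one declarative event drives both the infrastructure allocation and the application-level provisioning, which is exactly the layer a plain virtualization stack doesn’t cover.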

They say it’ll take a year, sound right? – For sure – re-architecting something as big as SAP applications to run on some whacky new environment like “private cloud” (which has about as many definitions as “pudding” does in the English language) takes a long time. And then there’s all the supporting tools, training materials, putting together SI engagement plans, and the marketing to prove to people that you should put in the time and money to switch.

How about the partners mentioned, and what about OpenStack? – I’d expect SAP to just partner with big name companies like Cisco, EMC, VMware – the whole UCS gang [in the announcement below, IBM is a big partner around analytics]. If a company came along …

(Read the full article @ Coté's People Over Process)

Private cloud discredited, part 1


Back in January, I made a controversial prediction that private clouds will be discredited by year end. Now, in the eleventh month of the year, the cavalry has arrived to support my prediction, in the form of a white paper published by a most unlikely ally, Microsoft.

Titled simply The Economics of the Cloud (PDF), the document succinctly sets out the economic factors that make the public cloud model an inexorable inevitability, substantiating my long-held views. It deserves a full reading – don’t settle for the overview in the authors’ blog post announcing it. Here are some headline numbers that should give pause for thought:

  • 80% lower TCO. The combination of large-scale operations, demand pooling and multi-tenancy create enormous economies in public cloud data centers: “a 100,000-server datacenter has an 80% lower total cost of ownership (TCO) compared to a 1,000-server datacenter.”
  • 40-fold cost reduction for SMBs. “For organizations with a very small installed base of servers (<100), private clouds are prohibitively expensive compared to public cloud.”
  • 10-fold cost reduction for larger enterprises. “For large agencies with an installed base of approximately 1,000 servers, private clouds are feasible but come with a significant cost premium of about 10 times the cost of a public cloud for the same unit of service.”
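As a sanity check, the quoted ratios can be turned into rough dollar figures. This is a minimal sketch: the $4,000 annual per-server TCO is an invented placeholder, not a figure from the white paper; only the 10x premium and the 80% reduction come from the quotes above:

```python
private_cost_per_server = 4000   # assumed annual TCO per server (placeholder)
servers = 1000                   # the "larger enterprise" case quoted above
private_total = private_cost_per_server * servers        # 4,000,000

# "about 10 times the cost of a public cloud for the same unit of service"
public_equivalent = private_total // 10                   # 400,000

# "80% lower TCO" at 100,000-server scale means one-fifth the unit cost
mega_dc_unit_cost = private_cost_per_server // 5          # 800

print(private_total, public_equivalent, mega_dc_unit_cost)
```

Under these assumptions, a $4M private cloud buys the same unit of service that a public provider could deliver for $400K, which is the gap the paper argues no in-house deployment can close.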

Since I know there’s a subset of ZDNet readers who will leap into Talkback to cry wolf on cloud security without bothering to read either the rest of this blog post or even looking at the white paper, here’s what it has to say on that particular canard:

“Large commercial cloud providers are often better able to bring deep expertise to bear on this problem than a typical corporate IT department, thus actually making cloud systems more secure and reliable … Many security experts argue there are no fundamental reasons why public clouds would be less secure; in fact, they are likely to become more secure than on premises due to the intense scrutiny providers must place on security and the deep level of expertise they are developing.”

That paragraph alludes to one of the key factors that I’ve been highlighting in my recent evangelism of the cloud model: the economies of scale for collective scrutiny and innovation. Amazingly, the document reaches its conclusions without adding in the additional economic benefits of this factor, which surely must deliver a knock-out blow to the private cloud concept. The collective feedback and testing from a diversified customer base enhances not only the security of a public cloud infrastructure but also informs and directs its evolution at a far more rapid pace than any private cloud will allow. There’s a virtuous cycle here, of course, in that public clouds are already more cost-effective as platforms for innovation, so that there is going to be more innovation happening here than on private clouds anyway. That innovation will help to further accelerate the evolution of public clouds, thus amplifying…

(Cross-posted @ Software as Services Blog RSS | ZDNet)

Research Report: 2011 Cloud Computing Predictions For Vendors And Solution Providers


This blog was jointly posted by @Chirag_Mehta (Independent Blogger On Cloud Computing) and @rwang0 (Principal Analyst and CEO, Constellation Research, Inc.)

Part 1 was featured on Forbes: 2011 Cloud Computing Predictions For CIO’s And Business Technology Leaders

As Cloud Leaders Widen The Gap, Legacy Vendors Attempt A Fast Follow
Cloud computing leaders have innovated with rapid development cycles, true elasticity, pay-as-you-go pricing models, try-before-you-buy marketing, and growing developer ecosystems. Once dismissed as a minor blip and nuisance by the legacy incumbents, the vendors who scoffed at cloud leaders must now quickly catch up across each of the four layers of cloud computing (i.e. consumption, creation, orchestration, and infrastructure) or face peril in both revenues and mindshare (see Figure 1). 2010 saw an about-face, with most vendors dipping their toe into the inevitable. As vendors lay on the full marketing push behind cloud in 2011, customers can expect that:

Figure 1. The Four Layers Of Cloud Computing

General Trends

  • Leading cloud incumbents will diversify into adjacencies. The incumbents, mainly through acquisitions, will diversify into adjacencies as part of an effort to expand their cloud portfolios. This will result in blurred boundaries between the cloud, storage virtualization, data centers, and network virtualization. Cloud vendors will seek tighter partnerships across the four layers of cloud computing as a benefit to customers. One side benefit – partnerships serve as a precursor to mergers and as a defensive position against legacy on-premises mega-vendors playing catch-up.
  • Cloud vendors will focus on the global cloud. The cloud vendors who initially started with North America and then followed with the European market will now likely expand into Asia and Latin America. Some regions such as Brazil, Poland, China, Japan, and India will spawn regional cloud providers. The result – accelerated cloud adoption in countries that resisted using a non-local cloud provider. Cloud will prove to be popular in countries where software piracy has proven to be an issue.
  • Legacy vendors without true cloud architectures will continue to cloud-wash with marketing FUD. Vendors who lack the key elements of cloud computing will continue to confuse the market with co-opted messages on private cloud, multi-instance, virtualization, and point-to-point integration until they have acquired or built the optimal cloud technologies. Expect more old wine (and vinegar, not balsamic but the real sour kind, in some cases) in new bottles: the legacy vendors will re-define what cloud means based on what they can package from their existing efforts, without re-thinking the end-to-end architecture and product portfolio from the ground up.
  • Tech vendors will make the shift to Information Brokers. SaaS and cloud deployments provide companies with hidden value and software companies with new revenue streams. Data will become more valuable than the software code. Three future profit pools will include benchmarking, trending, and prediction. The market impact – new service-based sub-categories such as data-as-a-service and analysis-as-a-service will drive information brokering and future BPO models.

SaaS (Consumption Layer)

  • Everyone will take the SaaS offensive. Every hardware vendor and system integrator seeking higher profit margins will join the cloud party. Software is the key to future revenue growth, and a cloud offense ensures the highest degree of success and the lowest risk. Hardware vendors will continue to acquire key integration, storage, and management assets. System integrators will begin by betting on a few platforms, eventually realizing they need to own their own stack or face a replay of the past stack wars.
  • On-premise enterprise ISVs will push for a private cloud. The on-premise enterprise ISVs are struggling to keep up their on-premise license revenue and are not yet ready to move to SaaS because of margin cannibalization fears, a lack of scalable platforms, and a dearth of experience running a SaaS business from sales and operations perspectives. These on-premise enterprise software vendors will make a final push for an on-premise cloud that mimics the behavior of a private cloud. Unfortunately, this will essentially be a packaging exercise to sell more on-premise software. This flavor of cloud will promise cloud benefits delivered to a customer’s door, such as pre-configured settings, improved lifecycle, and a black-box appliance. These are not cloud applications but will be sold and marketed as such.
  • Money and margin will come from verticalized cloud apps. Last mile solutions continue to be a key area of focus.  Those providers with business process expertise gain new channels to monetize vertical knowledge.  Expect an explosion of vertical apps by end of 2011.  More importantly, as the buying power shifts away from the IT towards the lines of businesses, highly verticalized solutions solving specific niche problems will have the greatest opportunities for market success.
  • Many legacy vendors might not make the transition to cloud and will be left behind. Many vendors, especially the legacy public ones, lack the financial wherewithal and investor stomach to weather declining profit margins and lower average sales prices. In addition, most vendors will not have the credibility to shift and migrate existing users to newer platforms. Legacy customers will most likely not migrate to new SaaS offerings due to a lack of parity in functionality and an inability to migrate existing customizations.
  • Social cloud emerges as a key component platform. The mature SaaS vendors that have optimized their “cloud before the cloud” platform, will likely add the social domain on top of their existing solutions to leverage the existing customer base and network effects.  Expect to see some shake-out in the social CRM category. A few existing SCRM vendors will deliver more and more solutions from the cloud and will further invest into their platforms to make it scalable, multi-tenant, and economically viable.  Vendors can expect to see some more VC investment, a possible IPO, and consolidation across all the sales channels.

DaaS & Paas  (Creation and Orchestration Layers)

  • Battle for PaaS begins with developers. Winning developers’ hearts and minds will be the key goal of PaaS providers. As mobile, social, and cloud intersect, expect new battle lines to be drawn by existing vendors seeking entry into the cloud. The first platform to enable write once, deploy anywhere will win. PaaS vendors will seek to incorporate the latest disruptive technologies in order to attract the right class of developers and drive continuous innovation into the platform.
  • Vendors must own the platform (both DaaS and SaaS) to survive. ISVs that cede their cloud platform to other ISVs will be relegated to second-class citizens. Despite the tremendous upfront cost savings, these platform moves cut off future revenue streams as the stack wars move to the cloud. For example, ISVs will avoid Java to mitigate risk with Oracle or IBM. The ability to control information-brokering services will be limited to the platform owner.
  • Tension between indirect channel partners and vendors in the cloud will only increase. Cloud shifts customer account control to the vendor.  Partners who wholeheartedly embrace the cloud risk losing direct relationships with their customers.  In the case of .NET development in Azure, greater allegiance by partners to Microsoft will result in less account control with Azure.
  • PaaS will be modularized and niche. New PaaS vendors will focus on delivering specific modules to compete with end-to-end application platforms. One approach – dominate niche areas in the cloud such as programming language runtimes, social media proxies, algorithmic SDKs, etc. Expect more players to jump in to fill big gaps in big data, predictive analytics, and information management.
  • Mobile app development will move to the cloud. App dev professionals and developers want one place to reach the mobile enterprise to build, manage, and deliver. The app dev life cycle will follow the delivery models, and device management will prove to be the keystone in ensuring the complete development experience. Vendors should expect the cloud to be the predominant delivery channel for mobile apps to end users. Success will require seamless management of extensions and disconnected support.

IaaS (Infrastructure Layer)

  • Cloud management will continue to grow and consolidate. Cloud management tools saw significant growth and investment in the last couple of years.  This trend will continue.  Expect to see a lot more investment in this category as increasing customer adoption drives demand for tools to manage hybrid landscapes. Also expect consolidation in this category as several VC-backed start-ups seek profitable and graceful exits.
  • Cloud storage will sell like hot cakes. Explosive growth in information across many verticals among early adopters already factors into this fast-growing category. With more and more data moving to the cloud, customers can anticipate significant innovation in this category, including SSD-based block storage, replication, security, alternate file systems, etc.  The data-as-a-service and NoSQL PaaS categories will further boost the growth.
  • NoSQL will skyrocket in market share and acceptance. Substantial growth in the number of NoSQL companies reflects an emerging trend to dump the infrastructure of SQL for non-transactional applications.  The cloud inherently makes a great platform for NoSQL, which further drives the growth of data-as-a-service and storage in the cloud.

The Bottom Line For Vendors (Sell Side)
Cloud ushers in a new era of computing that will displace the existing legacy vendor hegemony.  Many vendors caught off guard by the shift in both technology and user sentiment must quickly make strategic course corrections or face extinction.  Here are some recommendations for vendors making the shift to cloud:

  1. Embrace, don’t wait, don’t even hesitate. Which is worse: cannibalizing your margins or not having margins to cannibalize?  Faster time to market and greater customer satisfaction will pay off.  The move to cloud ensures a seat at the table for the next generation of computing.
  2. Begin all new development projects in the cloud. The rapid development cycles for cloud projects ensure that innovation will meet today’s time-to-market standards.  Test out new projects in the cloud and experience rapid provisioning and elasticity.  However, don’t forget to fail fast and recover quickly.
  3. Avoid investing in platform-led apps. Apps should drive platform design, not the other way around.  Form really does follow function in the cloud.  Platform designs must focus on agility and scale.  Apps prove out what’s really needed versus what’s theoretical.  Plan for social, mobile, analytics, collaboration, and unified communications but deliver only when it makes business sense.
  4. Focus on developers, developers, and developers. Steve Ballmer is right. Success in the cloud will require bringing developers along on the PaaS journey. Don’t make them wait until the platform is done.  Otherwise, it may be too late for the company and its developer ecosystem.
  5. Prioritize power usage effectiveness (PUE). As with the factories at the last turn of the century, IaaS will be the heart of delivery.  Companies with the lowest cost of computing will win and will be able to pass cost savings on to their customers or pocket the margin.  Further, data center efficiencies do their part in green tech initiatives.
  6. Help customers simplify their landscape. Build compelling business cases to shift from legacy infrastructure to cloud efficiencies.  Lead the race to optimize legacy at your competitor’s expense.
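To make recommendation 5 concrete: PUE is simply the ratio of total facility power to the power drawn by IT equipment alone, so a lower PUE means less overhead (cooling, power distribution) per unit of useful compute. A minimal sketch, with hypothetical numbers, shows the arithmetic:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power.

    1.0 is the theoretical ideal (every watt goes to IT gear).
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical loads: a legacy data center burning as much on overhead
# as on servers, versus a highly optimized facility.
legacy = pue(total_facility_kw=2000.0, it_equipment_kw=1000.0)
optimized = pue(total_facility_kw=1150.0, it_equipment_kw=1000.0)

print(f"legacy PUE: {legacy:.2f}, optimized PUE: {optimized:.2f}")
```

At the same IT load, the facility with the lower PUE pays for far fewer wasted kilowatts, which is exactly the cost advantage the recommendation argues IaaS providers can either pass on or pocket.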

Your POV
What’s your cloud strategy for customers in 2011?  Will you make the key investments?  How will you compete effectively?  Looking for additional cloud strategy resources?

Please post or send on to rwang0 at gmail dot com or r at softwareinsider dot org and we’ll keep your anonymity.  Further, let us know if you need help with your next gen apps strategy, overall apps strategy, and contract negotiations projects.  Here’s how we can help:

  • Assessing SaaS and cloud options
  • Evaluating Cloud integration strategies
  • Designing a next gen apps strategy with cloud in mind
  • Providing contract negotiations and software licensing support
  • Demystifying software licensing
  • Assisting with legacy ERP migration
  • Planning upgrades and migration
  • Performing vendor selection
  • Renegotiating maintenance

Related resources and links

Reprints

Reprints can be purchased through the Software Insider brand or Constellation Research, Inc.  To request official reprints in PDF format, please contact r@softwareinsider.org.

Disclosure

Although we work closely with many mega software vendors, we want you to trust us.  A full disclosure listing will be provided soon on the Constellation Research site.

Copyright © 2010 R Wang and Insider Associates, LLC. All rights reserved.

(Cross-posted @ A Software Insider's Point of View)

Research Report: 2011 Cloud Computing Predictions For Vendors And Solution Providers is copyrighted by . If you are reading this outside your feed reader, you are likely witnessing illegal content theft.

Enterprise Irregulars is sponsored by Salesforce.com and Workday.

Research Report: Constellation’s Research Outlook For 2011


Organizations Seek Measurable Results In Disruptive Tech, Next Gen Business, And Legacy Optimization Projects For 2011

Credits: Hugh MacLeod

Enterprise leaders seek pragmatic, creative, and disruptive solutions that achieve both profitability and market differentiation. Cutting through the hype and buzz of the latest consumer tech innovations and disruptive technologies, Constellation Research expects business value to reemerge as the common operating principle that resonates among leading marketing, technology, operations, human resource, and finance executives. As a result, Constellation expects organizations to face three main challenges (see Figure 1):

  • Navigating disruptive technologies. Innovative leaders must quickly assess which disruptive technologies show promise for their organizations. The link back to business strategy will drive what to adopt, when to adopt, why to adopt, and how to adopt. Expect leading organizations to reinvest in research budgets and internal processes that inform, disseminate, and prepare their organizations for an increasing pace in technology adoption.
  • Designing next generation business models. Disruptive technologies on their own will not provide the market leading advantages required for success. Leaders must identify where these technologies can create differentiation through new business models, grow new profit pools via new experiences, and deliver market efficiencies that save money and time. Organizations will also have to learn how to fail fast, and move on to the next set of emerging ideas.
  • Funding innovation through legacy optimization. Leaders can expect budgets to remain flat to incremental growth in 2011. As a result, much of the disruptive technology and next generation business model work must be funded by optimizing existing investments. Leaders must not only reduce the cost of existing investments, but also leverage existing infrastructure to achieve the greatest amount of business value.

Figure 1. Innovative Organizations Face Three Main Challenges In 2011


Disruptive Technologies: Growing Enterprise Adoption – And A Few Bumps Along The Way

A flurry of mobile, social, cloud, analytics, and unified communication technologies from the consumer tech world continue to enter the enterprise. Technology buyers can expect that:

  • IT teams will face device proliferation while hardware vendors will face device flops (@maribellopez). In the mobile landscape, organizations have spent the past decade trying to consolidate everything into a single device, which we deemed the “smartphone.” Today, the adoption of devices like the iPad and Kindle demonstrates that consumers and businesses are willing to embrace a device that excels at a singular or small number of functions. IT will soon realize that it will be supporting at least two devices per person, if not three. Why? Tablets can’t replace laptops for most employees, and smartphones, while ubiquitous, are good for specific tasks. Thus, organizations will see employees using different devices for different apps. A majority of the tablets in use in 2011 will remain employee-purchased, which will cause IT to formalize employee-liable policies for security, manageability, and reimbursement. After the successful introduction of the iPad, everyone believes they can build and sell a tablet. Dell, HP, RIM, and others will struggle to compete against Apple as they look for the right combination of user interface and applications to drive demand. Android-based tablets will fare better, but only if the OEM has provided significant software wrappers to plug security, manageability, and UI issues.
  • Large enterprises will stall on adoption of cloud for voice communications (@eherrell). Despite announcements by major vendors, the cost model for replacing on-premises voice equipment will not justify moving voice communications to the cloud in 2011. Cloud adoption will most likely begin to pick up during the last quarter of the year, when pricing models become more competitive.
  • Cloud security will trump on-premises efforts (@fscavo). 2011 will be the year when SaaS providers find the issue of security turning from a perceived weakness of their offerings to a perceived strength. A handful of well-publicized targeted attacks against in-house IT applications, similar to what was seen with the recent Stuxnet worm, will lead many corporate executives to conclude that their in-house IT organizations can’t match the level of security offered by SaaS providers.
  • Forecasts in cloud security breaches will call for partly cloudy cloud adoption (@rwang0). Despite the woes in on-premises security and the move to the cloud, cyber attacks will force companies to move from public clouds to private clouds in 2011. Concern about cyber gangs hacking into commercial and military systems will lead to a worldwide trend that temporarily reduces public cloud adoption. Hybrid models with apps in the public cloud and data in the private cloud emerge as users migrate from on-premises models. Data integration and security rise to become key competencies for 2011. The bottom line – improved data security reliability will drive overall cloud adoption in the latter half of 2011.
  • Android will become more enterprise friendly by fixing major manageability issues (@maribellopez). Android, while very popular with consumers due to its Verizon relationship, still strikes fear in the heart of IT. Its lack of on-device hardware encryption, which even Apple supports, makes it a non-starter for some IT organizations. In general, Android will need to fix its support for Exchange, including areas such as 802.1x WPA2 wireless network authentication, corporate proxy servers, Cisco VPNs using certificates, OpenVPN, CalDAV, remote wipe, and managed apps and configurations. Sure, Android is a consumer platform, but consumer phones are now enterprise phones. Google must address these issues or risk losing share in its war against Apple.
  • Social media landscape transforms into information commerce (@briansolis). The social media landscape will undergo an interesting transformation as it ushers in a genre of information commerce and the 3C’s of social content – creation, curation, and consumption. While blogging typically resides in the upper echelons of the social media hierarchy, new services further democratize the ability to publish and propagate information. 2011 heralds the year of information curation and the dawn of the curator. Curators introduce a new role into the pyramid of Information Commerce. By discovering, organizing, and sharing relevant and interesting content from around the Web through their social streams of choice, curators will invest in the integrity of their network as well as their relationships. Information becomes currency and the ability to recognize something of interest as well as package it in a compelling, consumable and also sharable format is an art. Curators earn greater social capital for their role in qualifying, filtering, and refining the content introduced to the streams that connect their interest graphs.
  • Enterprise social software migraine (@sameerpatel). On the technology front, expect a lot of noise and confusion around social and collaborative tools for customer, employee, and partner collaboration. Technology will come from four camps: pure-play enterprise social software vendors; ERP and CRM providers layering in collaboration and community features; networking and UC providers adding social networking to VoIP and online meeting offerings; and, finally, specialist HR and LMS vendors extending their offerings to include collaboration. From a distribution perspective, today’s largely direct sales model will see expansion into telco resellers who have sold managed hosting solutions such as email and messaging in the past, as well as system integrator and strategy consulting providers that are ramping up practices. Due to the nature of rapidly evolving use cases for social software, traditional sourcing mechanisms and criteria that might work for static ERP and CRM system selection will be inadequate for making long-term roadmap decisions on tools and integration for enterprise social software.
  • Tools, networks and services that cater to the role of the curator will emerge (@briansolis). Storify, Curated.by, Pearltrees, and Paper.li break through as the coveted services of choice amongst curators as they not only enable the repackaging and dissemination of information, but also deliver it in captivating and engaging formats. Similar to blog posts, curated content represents social objects, and curation services will spark conversations and reactions, while also breathing new life and extending the reach of existing content – wherever it may reside. Curators play an important role in the evolution of new media, the reach of information, and the social nicheworks that unite as a result. Curators promote interaction, collaboration, as well as enlightenment. More importantly, services that empower curators will also expand the topography for content creation. Forrester estimates that 70% of social media users are simply consumers, those who search and consume the content available today…but never say anything in public about it. However, the ease of curation combined with the pervasiveness of micro-blogging will start to entice consumers to share information, converting the static consumer into a productive curator or creator.
  • Social analytics will evolve from ad hoc experiments into refined information services (@rwang0). Organizations will continue to experiment in listening services that filter out noise from the social sphere, identify trends that deliver insight, and create models that support prediction. As algorithms increase in complexity, adapt to regional and cultural differences, and require greater vertical specialization, the end customer organization will no longer be able to support in house efforts. A new breed of information brokers will deliver social analytics at a scale that will support the challenges of big data in heterogeneous systems. Expect vendors such as Alterian, Attensity, Buzzmetrics, Cymfony, IBM, Radian6, SAS, Scoutlabs, Telligent, and Visible to shift their business models from software vendors to information brokers.
  • “The Cloud” will become the new Social Media (@ekolsky). With the industry always looking for a technology or tool to overhype, the recent announcements from Oracle, SAP, Microsoft, Salesforce, and many smaller vendors in the enterprise applications world have placed “The Cloud” at the center of controversy for 2011 and into 2013-2014. The lack of coherence and understanding of “The Cloud” will result in many wasted dollars trying to implement models that are not sustainable or reliable for organizations, fees paid to consultants with no real knowledge of the market, and failures that will only serve to reduce the speed of adoption of “The Cloud”. Buyers will still have to wait until 2015+ to see what happens, as most vendors have just begun developing their strategies and organizations have not yet adopted the model in sufficient numbers to justify faster R&D from vendors (with few exceptions).

Next Gen Business Models: “Outside In” Strategies Proliferate – With An Eye Towards Pragmatism.

Business model innovation will rely more on disruptive technology in 2011. Leading organizations will strive for a better synergy between business and technology. Constellation expects that:

  • Citizen engagement platforms will get more open despite the emphasis on security (@ideagov). The combination of the blowback from WikiLeaks and the Republican takeover of the U.S. House of Representatives will lead to a rash of “military grade encryption packages” stacked on top of many apps and platforms. Whether they work or not will not matter. Conversely, expect to see an explosion of citizen engagement platforms striving to be even more “open.”
  • Organizations will put business back into social business (@sameerpatel). As organizations increasingly start to see the benefits of deploying social and collaborative initiatives to improve employee, customer, and partner engagement, they will soon begin to realize that the decade-old notion of streamlining repeatable processes made popular by ERP and CRM system-of-record deployments was largely over-promised. In practice, customers and prospects have unique questions not answerable by the knowledge base or by marketing; employees living in rigid ERP systems need to constantly find experts who have the best answers and collaborate with them; and reseller partners constantly spend time looking for the right answers, not available on asynchronous partner portals, to keep end customers happy. Siloed but open collaboration initiatives on activity streams and other enterprise social networking utilities currently being deployed will enable engagement not historically possible in an ERP- or CRM-laden design. Consequently, LOB and IT leadership will realize that traditional process approaches and fluid collaborative constructs need to come together to truly accelerate business outcomes.
  • Sexy will be out for social media (@ekolsky). Organizations will realize that for something (social media) to feel sexy there is a lot of work that needs to happen behind the scenes. Time to pay the piper, as they say, and begin to build integrated platforms that can leverage social channels in constructing healthier, better relationships with customers. Despite the focus on tools and technologies, leaders will begin to realize that it is just about processes and people with support from technology. Want to get ahead? Plan, plan, plan – then roll up your sleeves and start doing, strategically speaking.
  • Organizations will get serious about mobilizing apps and embrace the platforms to support mobility (@maribellopez). In 2011, firms will deploy enterprise mobility management tools to support multiple device types and operating systems. Companies will also focus IT resources on moving line of business apps to devices. While cloud-based platforms and SaaS gain in importance, a majority of firms (75+%) will turn to in-house resources for development. As a result, firms will adopt mobile enterprise applications platforms and mobility frameworks to help them port apps using existing IT resources.
  • The distinction between BPO and ITO will blur (@pfersht). Integrated offerings from service providers with broad capability will gain market share. With the leading IT services providers all heavily pushing BPO capability, there will be increased blurring of offerings as industrialized process solutions become more popular. Process-only BPO will continue to proliferate across horizontal offerings where there is significant labor arbitrage opportunity, namely finance and accounting, order management, and procurement; however, within industry-specific processes, platform-enabled offerings are the only way providers can develop cost-effective utility models across their clients.
  • P2P will displace the old notions of B2B and B2C in social business (@rwang0). B2B and B2C will cease to exist in 2011. Organizations will conduct social business through Peer-to-peer (P2P) relationships. Attempts to stove pipe individuals into forced-fit, artificial market segmentations will fail because each individual brings multiple roles to the community. Each role brings a new perspective and a set of expectations in customer experience. Organizations will have to retool to the new rules of business and also move beyond social.
  • Sustainability software will lead to global starvation (@dahowlett). Sustainability software will conclusively prove that cows are the biggest contributors to greenhouse gases. The ensuing bovine cull will ensure population starvation on a massive scale thus solving our climate change issues. Those flogging carbon solutions will be put out of business.

Legacy Optimization: Flat IT Budgets And New Projects Increase Pressure On Legacy Costs

Between 66% and 75% of most technology budgets go towards supporting legacy systems. In order to make the shift to support new business models and disruptive technologies, leaders will have to find ways to optimize existing investments. Leading organizations can expect that:

  • Corporate IT spending will barely keep up with dollar inflation (@fscavo). Corporate IT spending in the US and Canada will increase a modest 2.0% at the median, after two years of flat budgets. In addition, although most IT organizations are not currently adding to staff counts, we do see a significant upturn in the initiation of new major projects, extended work hours for IT employees, and increasing use of IT contractors. This will lead to an improved IT employment picture by mid-year.
  • Technology refresh cycles will accelerate through 2011 (@ekolsky). The recent 2-3 years “nuclear winter” in enterprise applications, which coincided with the advent of social channels and social technologies, will give way to a massive acceleration of technology refresh channels – especially in Customer Service departments – leading to large-scale adoption of both new technology related to social media as well as new technology that was scheduled for later adoption. This large-scale adoption will result in several smaller vendors with innovative offerings gaining a sizable presence in Contact Centers and a disruption of the model for old-technology vendors that cannot adapt quickly to the changes we are seeing.
  • Organizations will cautiously recommit to BPO (@pfersht). Business Process Outsourcing uptake will creep back throughout 2011, as the recovery stutters and buyers pull the trigger on sourcing initiatives, however, many of the deals for the first-time buyer will be small in scope. Many businesses paralyzed by the Recession have been operating a “wait and see” strategy through 2010 regarding their Business Process Outsourcing (BPO) options. However, a slowing recovery and a growing pressure to meet budgets will drive a steady wave of increased BPO evaluation and contract signing in 2011, especially in Finance and Accounting and Procurement. HfS demand-side research has pinpointed a strong interest from buyers to increase scope in existing BPO contracts, and close to one-in-four businesses in the mid-market ($1bn – $3bn in revs) are expecting to investigate their first steps into F&A BPO. Moreover, many BPO services providers are more determined than ever to “penetrate and radiate” customers with initial small-sized contracts, due to the shortage of attractive captive acquisitions and affordable competitive acquisition candidates.
  • Organizations that reevaluate their IT strategies and contracts hand in hand will save the most money (@rwang0). Most technology procurement strategies fail to align with IT strategy and vice versa. Consequently, buyers end up with extra device capacity and shelfware. As organizations consider their legacy optimization strategies, successful teams will bring enterprise architects, IT leaders, procurement teams, and business units together to identify waste to pay for innovation. Two-tier ERP and third-party maintenance will prove to be examples where alignment can be achieved to create win-wins for IT and line-of-business leaders.
  • Organizations held hostage by high and useless software maintenance contracts will lead a massive backlash (@rwang0). Organizations faced with market pressures to create strategic differentiation amidst the burden of legacy systems will need to find a way to pay for innovation. Software maintenance fees will come under attack as user groups and leading organizations will spearhead efforts to renegotiate existing enterprise software vendor contracts. Existing software vendors caught off guard will suffer through a PR disaster that will cost them significant future sales. Third party maintenance vendors will continue to emerge to combat vendor-lock in and maintenance hegemony.

Your POV.

Ready for 2011? Got a prediction we missed? Add your comments to the blog or send us a comment at info (at) ConstellationRG (dot) com.

Please let us know if you need help in 2011. Here’s how we can help:

  • Disruptive technologies. Assessing the market for social, mobile, cloud, analytics, UC, and internet of things. Providing vendor selection frameworks. Comparing vendor capabilities. Negotiating vendor contracts. Providing independent validation and verification.
  • Next gen business models. Advising management teams and organizations on disruptive technology adoption leading practices. Designing next gen business models in Social CRM, digital marketing transformation, cloud adoption, social business, virtual commerce strategies, business process innovations, and cloud services.
  • Legacy optimization. Reviewing existing technology strategies for cost savings. Renegotiating existing maintenance contracts. Providing go forward optimization plans.

Resources and Report Download


Reprints

Reprints can be purchased through Constellation Research, Inc. To request official reprints in PDF format, please contact sales@ConstellationRG.com.

Disclosure

Although we work closely with many mega software vendors, we want you to trust us. For the full disclosure policy, stay tuned for the full client list on the Constellation Research website.

Copyright © 2010 Constellation Research, Inc. All rights reserved.

(Cross-posted @ Constellation Research)


Building a halfway house to the cloud


Several private clouds are now coming to market based on the Vblock technology developed by VCE, a joint venture forged by Cisco, EMC and VMware. Last week I groaned inwardly as I saw not one, but two announcements plop into my inbox. First came Sungard’s “fully managed cloud offering”, and then a couple of days later CSC got in touch to brief me about the launch of CSC BizCloud, “the industry’s first on-premise private cloud billed as a service.”

It’s entirely predictable of course that we’ll see a surge of fake cloud roll-outs this year, and I shouldn’t be surprised to find the usual suspects eager to host them. It’s a lucrative business when, as I highlighted last year when quoting a Microsoft white paper, “private clouds are feasible but come with a significant cost premium of about 10 times the cost of a public cloud for the same unit of service.”

There are occasions, though, when even I’ll admit that implementing private cloud can make sense as a stepping stone on the way to a fully native, cloud-scale infrastructure. In the past, I’ve framed this largely in terms of the technology challenges. In conversation last week with CSC’s vice president of cloud computing and software services, Brian Boruff, I learnt that there’s also an important cultural angle. It’s simply too much of a mindset adjustment for many organizations to move directly to cloud computing from where they’re starting right now.

“I don’t think cloud computing is a technology issue. The technology’s there,” Boruff told me. “It’s a people and a labor and a business issue. BizCloud – think of it as a sandbox. They can bring it inside the data center and start playing with it.

“Two years from now,” he continued, “I think you’ll see workloads that have moved into BizCloud moving into the public cloud – but it’s a journey.”

Some organizations of course are way ahead of the crowd. Boruff spoke of three generations that differ dramatically in their attitudes to outsourcing and subcontracting. At one extreme are the first-generation outsourcers, he said. “Some people that have never outsourced are scared to death of this cloud computing thing.” Others have been doing it for twenty years or more and it’s second nature to them. “We have one client,” he revealed, “that is a $35bn multinational whose strategy over the next three years is to move everything they do into an as-a-service model.”

For those who aren’t yet ready to go all the way into the cloud, halfway-house platforms like BizCloud provide an opportunity to get some of the benefits of virtualization and automation immediately while taking time to adapt to the wider impact of full-blown cloud computing, he explained. Since CSC offers fully multi-tenant public cloud infrastructure built on the same platform as BizCloud, it will be much easier, he assured me, to move to a public cloud infrastructure from BizCloud than it would be from a classic enterprise IT environment. In the meantime, IT management buys time to transition its workforce to the new realities of cloud.

“It’s not just capital investment. Think about all the people, all the labor investment of people that are running around managing highly inefficient workloads,” said Boruff. “If you’re the VP of infrastructure and somebody’s telling you to move to the public cloud, what does the future of your career look like?

“BizCloud is a way inside of someone’s data centers to say, instead of three people doing that workload, maybe you only need two or one. Let’s retrain them to run these highly virtualized data centers and then go after some of the applications.”

While it may still cost more than a true public cloud implementation, the cost savings compared to the existing enterprise infrastructure can still be huge. Telecoms billing provider Cycle 30, an early Sungard cloud customer, is said in its press release to have “saved millions of dollars” by adopting a cloud solution, albeit without specifying how the savings were calculated.

Nor is CSC holding back customers from moving all the way to the cloud if they’re ready – whatever the hit to its own revenues. Boruff cited the example of Britain’s national postal service, Royal Mail Group, which CSC helped move from an in-house implementation of Lotus Notes to Microsoft’s cloud-hosted BPOS suite (soon to be known as Office 365). “We were charging Royal Mail Group a lot of money to run Lotus Notes for them,” he said. “We had 40 people on site. We had to get rid of their jobs.”

That kind of story probably doesn’t help make IT decision makers any more eager to accelerate their progress cloudwards. If a hybrid cloud strategy buys a bit more time to allay staff fears and manage retraining and redeployment, maybe it’s not such a bad thing after all.

(Cross-posted @ Software as Services Blog RSS | ZDNet)


Your very own OpenStack Cloud – Quick Analysis


  • Rackspace is now offering paid support for OpenStack-based clouds, seeding the team with their acquisition of Anso Labs and partnerships with hardware, cloud management, and cloud servicing companies.
  • Rather than try to take over the entire support market for OpenStack, Rackspace wants others to join in the market, leaving Rackspace to do the higher level support.
  • Dell, Opscode, and Rackspace also announced the beta of an (unnamed?) offering that combines Dell Power Edge C class hardware, OpenStack, and Chef to create bare-metal, bootstrapping clouds.
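Chef suits both the initial bare-metal bootstrap and the ongoing automation because its recipes describe desired state and converge a node toward it idempotently. Real Chef is Ruby-based; the following is only a toy Python sketch of that convergence idea, and every name in it is made up:

```python
def converge(node, desired):
    """Bring a node's state to the desired state; report what changed.

    Run against an empty node, this performs the initial setup; run
    again later, it is a no-op except for keys that have drifted.
    """
    changes = []
    for key, value in desired.items():
        if node.get(key) != value:
            node[key] = value
            changes.append(key)
    return changes

# Hypothetical desired state for an OpenStack compute node.
desired = {"pkg:nova-compute": "installed", "svc:nova-compute": "running"}
bare_node = {}
first_run = converge(bare_node, desired)   # installs everything
second_run = converge(bare_node, desired)  # already converged: no changes
```

The same declaration drives day-one provisioning and day-two repair, which is why a single tool can cover both halves of the offering.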

 

Cloud Builders

“The one thing we’ve heard [from businesses] is that people need a commercial entity to back an open source project,” Collier says. “Free and open source is great and all, but they want someone they call when they run into problems. Rackspace is the natural company to do that.”

Rackspace today announced its support plan for OpenStack cloud installs, Cloud Builders. Here’s the quick summary:

  • Rackspace is starting a new line of business, “Rackspace Cloud Builders,” to support uses of OpenStack beyond its own data centers. This support ranges from training and help setting up clouds to high-level escalation of problems with those clouds.
  • They purchased Anso Labs, creators of a core part of OpenStack and a cloud services company, to help seed this business. Rackspace expects the team to be 30-40 people this year, but draw on the 3,000+ support staff in the rest of Rackspace.
  • Rather than displace other companies who are looking to build businesses on OpenStack, Rackspace would like to be the “third level” support for these folks and others. As Mark Collier put it, “we’re not trying to be Accenture or anything like that.”
  • Momentum around OpenStack continues to be strong as gauged by “big” community members (such as Cisco, Canonical, and Dell) as well as the feature road-map (pulling in more hypervisors, upping storage limits, and providing more networking options). Indeed, RedMonk is asked about OpenStack frequently, both by users and other vendors.
  • In addition to Rackspace’s new team, they’ve put together partners: Opscode, Dell, Equinix, Cloudscaling, and Citrix. Presumably, these folks will help build and service the various clouds (private and otherwise) being supported. See below for an example of that between Rackspace, Dell, and Opscode.

For an introduction to OpenStack, see this RedMonk interview with Rackspace’s Jonathan Bryce (there’s a full transcript if you prefer):

Cloud Body of Knowledge

As part of this new business, Rackspace will be generating a lot of material around best practices, architectures, and other “documentation” for running various types of clouds. I asked if that material would be “open,” to which the answer was more or less “yes,” or at the very least, “that’s a good idea.”

RedMonk fields a lot of inquiries around cloud best practices and experiences, so there’s obviously a hunger for it. Keeping this material “open” versus close to the chest (as big consulting outfits would do) would be very beneficial to Rackspace: the more OpenStack-based clouds there are out there, the bigger the pie for their support offering. Additionally, being the “owner” and (potentially) “biggest user” of OpenStack would have plenty of benefits for Rackspace even if they didn’t monetize support.

OpenStack Installer, Dell-based clouds

Building a hyperscale cloud requires a different mindset (we like to call it “revolutionary”) compared to a traditional enterprise virtualized infrastructure. This means driving a degree of simplicity, homogeneity, and density that is beyond most enterprise systems.

The core lesson of these large systems is that redundancy moves from the hardware into the software and applications. In fact, the expectation of failure is built into the system as a key assumption because daily failures are a fact of life when you have thousands of servers.
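That shift can be sketched in a few lines: instead of hardening any one server, the caller simply retries across replicas, so individual failures are absorbed in software. A toy model only; no real OpenStack API is implied:

```python
import random

def call_with_failover(replicas, request, is_up):
    """Try replicas in random order until one answers; redundancy
    lives in this retry loop rather than in any single machine."""
    last_error = None
    for replica in random.sample(replicas, len(replicas)):
        if is_up(replica):
            return f"{replica}:{request}"  # simulated successful response
        last_error = ConnectionError(f"{replica} is down")
    raise last_error

# Two of three replicas are dead; the request still succeeds.
alive = {"node-b"}
result = call_with_failover(
    ["node-a", "node-b", "node-c"], "GET /status", lambda r: r in alive
)
```

With thousands of servers failing daily, this caller-side loop, not redundant hardware, is what keeps the service up.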

Bootstrapping OpenStack Clouds

The Dell, Opscode, and Rackspace offering is the launch of a beta program for OpenStack clouds, based, of course, on Dell hardware (they’re actively seeking people to do PoCs). As Dell sums it up: it’s an “OpenStack installer that allows bare metal deployment of OpenStack clouds in a few hours (vs. a manual installation period of several days).” In addition to using, of course, OpenStack, Dell is looking to use Chef for not only the ongoing automation (“configuration management,” if you prefer) but also the initial setup. Their nicely detailed paper on the topic sums it up:

(Read the full article @ Coté's People Over Process)


3 Important Things from the Microsoft Management Summit 2011


Windows Intune "lounge"

This week in Las Vegas, Microsoft came out with some strong, confident direction in their IT Management portfolio. There were numerous products announced in beta or GA’ed and endless nuance to various stories like “what exactly is a Microsoft-based private cloud?” Rotating in my head, though, are three clusters of important offerings and concepts to keep track of, whether you’re a user of IT Management or a vendor looking to compete or frenemy with Microsoft:

  1. IT Management delivered as SaaS – thus far, success has been the exception, not the rule, in delivering IT Management as a SaaS. Service-now.com has been the stand-out success here, driving incumbents BMC, CA, HP, and IBM to start offering IT Management functions as a SaaS. Others have had rockier times getting IT heads to move their tool-chain off-premise. The common sentiment, as one admin told me last year, was: well, if the Internet goes down, we’re screwed. Windows Intune, GA’ed at MMS 2011, is a SaaS-based (or “cloud-based,” if you prefer) service for desktop management – keeping the Microsoft portions of desktops up-to-date for $11-12/month/desktop. It’s not hard to imagine that Microsoft would want to extend this to servers at some point, as Opscode does now. The System Center Advisor product line (covering SQL Server, Exchange, Windows Server, Dynamics, and SharePoint) is a knowledge base served up as a SaaS – something klir (RIP) and Splunk have played around with. To make this kind of collaborative IT management work, you have to layer in a strong community like Spiceworks does, something that seems missing from the Advisor line at the moment. The feel I get from this momentum is that Microsoft would like to (after a long, multi-year “eventually”) move much of its portfolio to SaaS delivery. Admins can be “special snowflakes” when it comes to moving their tools to SaaS, but at a certain point of cost & hassle avoidance vs. the risk of the Internet going down, it starts to make sense. And, really, if the Internet goes down, many businesses would be dead in the water regardless of the IT Management tools available.
  2. Private cloud is what you need – while the focus on “Cloud and Microsoft” is often the public Azure cloud, Microsoft is also amped up to provide companies with the technology needed to use cloud-based technologies and practices behind the firewall, creating private clouds. Microsoft’s Project Concero is the spearhead of this, but there are some interesting training wheels towards cloud that Microsoft wants to provide with its virtualization management product. Strap on the recently released System Center Service Manager and System Center Orchestrator (formerly Opalis), and you have the self-service, highly virtualized view of “private cloud.” The troubling aspect for Microsoft is the hardware layer. Time and time again, Microsoft executives rightly pointed out that “true” clouds need standardized hardware – at the same time they pointed out that most IT shops are far from “standardized.” When I asked them what that transformation would mean, being a software company, the answers weren’t too prescriptive. One hopes that the answer is more than “keep your eye on those Azure appliances we mentioned awhile back.” The issue is this: if private cloud means rip-n-replace of your existing hardware to get “standardized” hardware…then we’ve got some rocky budget hijinks ahead for anyone considering private vs. public…

(Read the full article @ Coté's People Over Process)


Private cloud discredited, part 2


I wrote part 1 of this post last October, highlighting a Microsoft white paper that convincingly established the economic case for multi-tenant, public clouds over single-enterprise, private infrastructures. Part 2 would wait, I wrote then, for “the other shoe still waiting to drop … a complete rebuttal of all the arguments over security, reliability and control that are made to justify private cloud initiatives. The dreadful fragility and brittleness of the private cloud model has yet to be fully exposed.”

The other shoe dropped last month, and from an unexpected direction. Rather than an analyst survey or research finding, it came in a firestorm of tweets and two blog posts by a pair of respected enterprise IT folk. One of them is Adrian Cockcroft, cloud architect at Netflix, a passionate adopter of public cloud infrastructure. The other is Christian Reilly, who engineers global systems at Bechtel and had been a passionate advocate of private cloud on his personal blog and Twitter stream until what proved to be a revelatory visit to Netflix HQ:

“The subsequent resignation of my self imposed title of President of The Private Cloud was really nothing more than a frustrated exhalation of four years of hard work (yes, it took us that long to build our private cloud).”

Taken together, the coalface testimony of these two enterprise cloud pioneers provides the evidence I’d been waiting for to declare private cloud comprehensively discredited, not only economically but now also strategically. There will still be plenty of private cloud about, but no one will be boasting about it any more.

As both these individuals make clear, the case for private cloud is based on organizational politics, not technology. The pace of migration to the public cloud is dictated solely by the art of the humanly possible. In Cockcroft’s words, “There is no technical reason for private cloud to exist.” Or as Reilly put it, “it can bring efficiencies and value in areas where you can absolutely NOT get the stakeholder alignment and buy in that you need to deal with the $, FUD and internal politics that are barriers to public cloud.”

Cockcroft’s post systematically demolishes the arguments against public cloud:

  • Too risky? “The bigger risk for Netflix was that we wouldn’t scale and have the agility to compete.”
  • Not secure? “This is just FUD. The enterprise vendors … are sowing this fear, uncertainty and doubt in their customer base to slow down adoption of public clouds.”
  • Loss of control? “What does it cost to build a private cloud, and how long does it take, and how many consultants and top tier ITops staff do you have to hire? … allocate that money to the development organization, hire more developers and rewrite your legacy apps to run on the public cloud.”

Then he adds his killer punch:

“The train wrecks will come as ITops discover that it’s much harder and more expensive than they thought, and takes a lot longer than expected to build a private cloud. Meanwhile their developer organization won’t be waiting for them.”

But it’s Reilly who adds the devastating coup de grace for private cloud:

“Building the private cloud that is devoid of any plan or funding to make architectural changes to today’s enterprise applications does not provide us any tangible transitional advantage, nor does it position our organization to make a move to public cloud.”

In a nutshell, an enterprise that builds a private cloud will spend more, achieve less and increase its risk exposure, while progressing no further along the path towards building a cloud applications infrastructure. It’s a damning indictment of the private cloud model from two technologists whose practical, hands-on experience informs what they’re saying. Their message is that private cloud is a diversion and a distraction from the task of embracing cloud computing in the enterprise. It can only make sense as a temporary staging post in the context of a systematically planned transition to public cloud infrastructure.

 

(Cross-posted @ Software as Services Blog RSS | ZDNet)


Monday’s Musings: Lessons Learned From Amazon’s Cloud Outage


Amazon’s Cloud Outage Catches Most Clients Off Guard

The recent Amazon cloud outage at its Northern Virginia data center, from 5 am Thursday, April 21, 2011 to roughly 5 am Friday, April 22, has shaken the confidence of some executives in public cloud computing.  Most notably, Foursquare, HootSuite, Reddit, and Quora suffered visible performance issues.  The industry’s past reassurances on uptime performance and massive redundancy capabilities, combined with massive corporate adoption, had everyone believing that public clouds were bulletproof.  As calmer heads prevail, most CIOs, business leaders, and analysts realize that:

  • Cloud outages are rare but can happen. While most organizations cannot deliver 99.5% uptime themselves, let alone better, disruptions can and will happen.  The massive impact on so many organizations last week highlights the potential vulnerabilities of betting 100% of capacity on the cloud.  More importantly, it showed that broad adoption does not equate with bulletproof reliability.  Most organizations lacked a contingency plan.
  • Cost benefit ratios still favor cloud deployments. For most organizations, the cost of deploying in the cloud remains a factor of 10 cheaper than moving back to the traditional data center or even a private cloud.  Capital costs for equipment, labor for managing the data center, excess software capacity, and the deployment time required to stand up a server create significant cost advantages for cloud deployments.
  • Current service level agreements lack teeth and should be improved. Most organizations lack teeth in their cloud/SaaS contracts to address service level agreement failures.  Despite all backups and contingency plans, clients should consider scenarios where core business systems go down. What remedies are appropriate? What contingencies for system backup are in place?  Who is responsible for disaster recovery? Will the vendor accept liability, and for what?
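For a sense of scale on those availability figures, converting an uptime percentage into allowed downtime per year makes the gap concrete. A quick back-of-the-envelope helper, not tied to any vendor’s actual SLA:

```python
def downtime_hours_per_year(availability_pct):
    """Hours of downtime per year permitted at a given availability."""
    return (100.0 - availability_pct) / 100.0 * 365 * 24

# Even 99.5% availability permits nearly two full days of downtime a
# year; a single roughly 24-hour outage consumes most of that budget.
allowed = downtime_hours_per_year(99.5)  # ~43.8 hours
```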

The Bottom Line: Proactively Account For Breaches In Service Level Agreements In SaaS/Cloud Contracts

Organizations should protect themselves from future breaches through a combination of contract provisions and contingency plans.  Here are some suggestions recommended to clients:

  • Apply provisions from the SaaS/Cloud bill of rights.  Though written in late 2009, this document remains a best-practices guide to SaaS contracting.  Key provisions to apply include quality guarantees and remuneration, stipulated data management requirements, and ongoing performance metrics.
  • Include service level agreements with teeth. Credits for free licenses for down time sound good on paper. In reality, down time when critical systems fail could result in massive financial losses.  Contracts should tie remedies to the potential business loss.  Some clients include a provision that identifies compensation for a percentage of average daily business revenue during the period of down time.
  • Reevaluate your Amazon deployment strategy. Believe it or not, Amazon technically did not violate its service agreements.  To deploy a true backup strategy, organizations should add copies of their server instances in multiple regions and data centers as an added layer of protection.  This ensures that a proper failover occurs even if an entire region experiences an outage.
  • Implement a real disaster recovery strategy. The Amazon outage exposed that many start-ups failed to have a disaster recovery strategy.  A number of solution providers now offer cloud disaster recovery.  More importantly, these providers can recover physical or virtual machines in a cloud within minutes.  Whether organizations can fire up a backup server in time remains the open question.
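The revenue-based remedy suggested above can be made concrete as arithmetic. The numbers here are hypothetical and no actual contract terms are implied:

```python
def downtime_compensation(avg_daily_revenue, downtime_hours, revenue_pct):
    """Credit = a percentage of the revenue notionally earned
    during the downtime window."""
    hourly_revenue = avg_daily_revenue / 24.0
    return hourly_revenue * downtime_hours * (revenue_pct / 100.0)

# Hypothetical: $240,000/day business, 24-hour outage, 50% rate.
credit = downtime_compensation(240_000, 24, 50)
```

Unlike a free-license credit, this kind of clause scales the remedy with the business actually lost during the outage.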

Your POV.

Have you planned for a cloud outage?  How has your experience been to date with the major cloud/SaaS providers? Have the recent outages changed your views on SaaS/Cloud?  Add your comments to the blog or reach me via email: R (at) ConstellationRG (dot) com or R (at) SoftwareInsider (dot) com.

How can we assist?

Buyers, do you need help with your SaaS and Cloud apps strategy and vendor management strategy?  Trying to figure out how to save money and innovate with Cloud computing and SaaS? Ready to put the expertise of over 1100 software contract negotiations to work?  Give us a call!

Please let us know if you need help with your next gen apps strategy efforts. Here’s how we can help:

  • Providing contract negotiations and software licensing support
  • Evaluating SaaS/Cloud options
  • Assessing apps strategies (e.g. single instance, two-tier ERP, upgrade, custom dev, packaged deployments)
  • Designing innovation into end to end processes and systems
  • Comparing SaaS/Cloud integration strategies
  • Assisting with legacy ERP migration
  • Engaging in an SCRM strategy
  • Planning upgrades and migration
  • Performing vendor selection

Related Resources And Links

20091012 Research Report: Customer Bill of Rights – Software-as-a Service

20090714 Research Summary: An Enterprise Software Licensee’s Bill of Rights, V2

20091222 Tuesday’s Tip: 10 Cloud And SaaS Apps Strategies For 2010

20101214 Tuesday’s Tip: Dealing With Vendor Offers To Cancel Shelfware And Replace With New Licenses

20091208 Tuesday’s Tip: 2010 Apps Strategies Should Start With Business Value

20091006 Tuesday’s Tip: Why Free Software Ain’t Really Free

20080215 Software Licensing and Pricing: Stop the Anti-Competitive Maintenance Fee Madness

20090405 Monday’s Musings: Total Account Value, True Cost of Ownership, And Software Vendor Business Models

20090324 Tuesday’s Tips: Five Simple Steps To Reduce Your Software Maintenance Costs

20090223 Monday’s Musings: Five Programs Some Vendors Have Implemented To Help Clients In An Economic Recession

Reprints

Reprints can be purchased through Constellation Research, Inc. To request official reprints in PDF format, please contact sales (at) ConstellationRG (dot) com.

Disclosure

Although we work closely with many mega software vendors, we want you to trust us. For the full disclosure policy, stay tuned for the full client list on the Constellation Research website.

Copyright © 2011 R Wang and Insider Associates, LLC All rights reserved.

(Cross-posted @ A Software Insider's Point of View)



Every SaaS provider runs a private cloud


One of the highly misleading assumptions built into the term ‘private cloud’ is the notion that there’s no privacy in the public cloud. People talk as though cloud providers don’t use firewalls or private networks or encryption. But of course they do. In most cases, the technology infrastructure they use is far more secure than any private enterprise infrastructure.

In fact, the paradox at the heart of ‘public’ cloud provision is that any provider has to own and manage their own private cloud to deliver a secure, reliable service. Does anyone imagine for a moment that Salesforce.com doesn’t guard the backend of its infrastructure at least as assiduously as any bank or government department? The crown jewels of its infrastructure run on physical servers that it owns and manages itself. Google is even more extreme, having its servers and data centers tailor-made to its own custom designs.

Even if a service sits on public cloud — such as Salesforce.com subsidiary Heroku, which runs on Amazon EC2 servers — the access into that virtual infrastructure is as locked down as any enterprise server pool. The fact that any Web visitor can set up an account and log into the public face of Heroku doesn’t detract from the security that governs back-end access into the server instances that make up the underlying platform. If anything, it guarantees that the provider will take extra steps to keep the back-end ultra-secure. Nor do I really understand why an enterprise infrastructure that includes publicly accessible web servers is somehow inherently more secure and hack-proof than a SaaS provider’s infrastructure. The track record of countless security breaches at banks, retailers and telecoms providers tells me the opposite.

So next time you log into your private on-demand shared instance of Salesforce.com, NetSuite, Google Apps, WebEx, PayPal or whatever, ask yourself why sharing the infrastructure with users from other organisations should make it any less safe than an application that runs on your own PC or on your organization’s own servers. The only difference is that the separation in a public cloud infrastructure is logical — implemented with software — rather than physical. But that logical separation in any reputable provider’s infrastructure is going to be as solid as cast-iron. Provided you take sensible precautions to protect your login credentials, there’s no reason to suppose you’re any less safe on shared infrastructure. On top of that, it comes with all the benefits that public cloud confers: enormous economies of scale, super-hardened resilience and boundless connectivity into the global resources of the connected web.
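The logical-separation point can be sketched as code: in a multi-tenant system, every read is scoped by the caller’s tenant, so another customer’s rows are simply unreachable. This is a toy model, not any vendor’s actual implementation:

```python
class MultiTenantStore:
    """Shared physical storage, logically partitioned per tenant."""

    def __init__(self):
        self._rows = []  # all tenants share one underlying table

    def insert(self, tenant_id, record):
        self._rows.append((tenant_id, record))

    def query(self, tenant_id):
        # The tenant filter is applied unconditionally; no code path
        # returns rows belonging to a different tenant.
        return [rec for t, rec in self._rows if t == tenant_id]

store = MultiTenantStore()
store.insert("acme", "invoice-1")
store.insert("globex", "invoice-2")
acme_view = store.query("acme")  # sees only Acme's data
```

The separation is enforced in software rather than by separate physical machines, which is exactly the trade the public cloud makes.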

For many years, apologists for cloud-averse enterprise networks have hijacked the notion of privacy and set it up as a straw-man argument against running cloud computing on public infrastructure. Don’t let the simplistic terminology confuse you: public cloud infrastructure can support just as much privacy and security as any private enterprise network.

(Cross-posted @ Software as Services Blog RSS | ZDNet)


Citrix buys Cloud.com – Brief Note


Citrix announced the acquisition of Cloud.com this morning hoping to flesh out its long-running cloud road-map. This Brief Note covers what this means for Citrix and provides some background.

Summary

There’s some clear motivations for Citrix:

  • Filling whitespace – Citrix has long spoken about cloud computing, but they’ve yet to make a big splash on the overall scene. By purchasing Cloud.com, Citrix might be able to finally fill in the potential they’ve been marketecting for themselves over recent years.
  • Move from call centers to cloud – In the long term, Citrix needs to come up with new lines of business around virtualization and cloud computing to replace (or supplement) the “cash cow” of its legacy, desktop business. That BU could be revitalized if the whole idea of virtualized desktops takes off, but that’s been rocky when it comes to blow-out acceptance.
  • As real a deal as you’ll find – Cloud.com is regarded as one of the more “real” cloud offerings out there, building a respectable story over the short years they’ve been around. They’ve built up a nice list of public references out of 65 reported customers overall, including KT, a large public cloud project that most every serious cloud player is currently involved with (you hear all sorts of things good and bad about it, sure, but it provides an ongoing test of this whole “cloud” idea beyond Amazon).

The Citrix Context

If you’re like most people I talk with, you’re probably like, “Citrix…?” Indeed! Citrix is one of those under-appreciated, billion-dollar companies. They started out with technology to host desktops on servers (think thin clients and call centers) and, with their purchase of XenSource some years ago, have been expanding into general IT management and, now, cloud. One of their core assets (and liabilities, the tin-foil hats would say, when it comes to portfolio expansion) is a long-standing, bi-directionally beneficial relationship with Microsoft (needed to make their thin client technology extra, super-magical).

Here’s their revenue by business unit since 2009, the red line is the virtualization and cloud gaggle:

How good is Citrix at capitalizing on acquisitions? It’s difficult to track what they’ve done with XenServer exactly, but it seems like they’ve done well. The Xen brand certainly took over most of Citrix, but because they bundle their NetScaler product line together with XenServer, you can’t perfectly track revenue for the “Data Center and Cloud Solutions” business unit. (And, indeed, NetScaler seems to have “lead” revenue for this BU in the most recent quarter). But, hey, when has tracking product line revenue ever been easy from the outside?

Last year the Data Center and Cloud Solutions BU reported $298.6M in revenue (up from $231.4M in 2009). As the chart below shows, it’s still a smaller part of overall company revenue, and smaller than the Citrix Online BU (home of GoToMeeting and other GoTo products):

As some rough comparisons: VMware brought in $844 million company-wide in the last quarter of 2010 and $2.9 billion for all of 2010, most of that, no doubt, around virtualization and “data center management” – the Spring revenue is probably small compared to the overall VMW cash-pie. And when it comes to market share, I continue to see numbers that show VMware far in the lead, by large margins, with Microsoft’s Hyper-V a distant second, and Xen third, down there with the other hypervisors.

But, with Gartner reckoning that “at least 40%” of x86 machines are virtualized, just to pick one estimate out of many, there’s all sort of room to close those gaps.

One thing is certain: Citrix has yet to finish its cloud strategy. Their CEO, Mark Templeton, is extremely charismatic and, along with the recently departed Simon Crosby, can talk a good vision on cloud. Still, their presence in cloud, despite all those Xen-driven clouds, hasn’t been as large as it should be. Hopefully, the Cloud.com team (who seem to be taking a leadership role) can help accelerate that.

Back to Cloud.com

By coincidence, I recently wrote-up my current thoughts on Cloud.com in response to a press inquiry:

Cloud.com has a few things going for it: based on their momentum, their software seems to work, which is saying a lot for something as new as cloud computing. Many of the “real cloud” projects out there require a tremendous amount of what we used to call Systems Integration (now “cloud integration”?) work to make up for immature software. From what I’ve heard, Cloud.com seems to be suitably functional for its age…

(Read the full article @ Coté's People Over Process)


Three Little Clouds and the Big Bad World


A few weeks back I seem to have mystified a few readers with my apparent volte-face on the topic of private cloud, when I pointed out that any public cloud provider has to run a private cloud to operate its own service or platform. To help explain my thinking, I thought it might be useful to share with you today the ancient fable of the three little clouds and the big bad world.

Image credit: Deviant Art

One day, three little clouds decided to leave the safety of the familiar enterprise network where they’d grown up and set off to make their fortune in the outside world. Their guardian was proud of their ambition, but like any parent-figure wanted to be sure they really understood all the risks they’d be facing. “Whatever you do,” she warned them, “watch out for the big bad world out there.”

The first little cloud decided to minimize its risk by reducing its exposure to the outside world. It had a website with an e-commerce capability, some mobile users and a handful of servers at remote sites, but it avoided adding any new SaaS applications or public cloud resources. It bought some reassuringly expensive firewalls, load balancers and fancy routers from a friendly travelling salesman and configured them as best as it could. What could possibly go wrong? But this little cloud was built of straw. It didn’t have the expertise or resources to keep its infrastructure up-to-date with all the threats it faced at various touch points with the public Internet. Very soon a storm blew up and the big bad world came huffing and puffing, stealing away the little cloud’s customer database and leaving all their credit card details fluttering in the wind.

The second little cloud saw all this, shook its head in dismay, and hired a chief security officer to make sure its infrastructure was proof against even the most determined attack. It built out enough server capacity to meet its needs for the foreseeable future and settled down in its comfortable new citadel. But this little cloud was wooden to the core. It hadn’t foreseen just how busy it would become. One day, the big bad world came knocking on all its doors and windows until the little cloud ran out of capacity. All its servers splintered under the load, collapsing in a terrifying outage that drove its customers far and wide, never to return.

Much further down the road, the third little cloud had built its infrastructure out of magical elastic virtual bricks that would never fall over and were perfectly engineered to resist attacks, even from within. When it saw its two little brothers running down the road, it opened its doors and let them host their operations on its own infrastructure. Hot on their heels came the big bad world, but the more it huffed and it puffed, the more elastic and resilient the magical bricks became.

Soon the little cloud and its brothers had swallowed up everything the big bad world could throw at it and still they asked for more. Before long, all the animals of the forest and the business people from the nearby town were clamoring to come inside so that they, too, could prosper from the elastic cloud’s insatiable capacity. And so the little private cloud grew up to become a world-leading public cloud provider and everyone lived happily ever after — even the big, bad world.
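Stripped of the fable, the wooden cloud and the elastic bricks differ only in capacity policy: fixed provisioning drops whatever exceeds it, while elastic provisioning grows the fleet to meet demand. A toy comparison with illustrative numbers only:

```python
def serve(load, servers, elastic=False, per_server=100):
    """Return (servers_used, requests_dropped) for a burst of load."""
    if elastic:
        # Grow the fleet until it covers the load (ceiling division).
        needed = -(-load // per_server)
        return needed, 0
    dropped = max(0, load - servers * per_server)
    return servers, dropped

wooden = serve(load=1200, servers=5)                 # fixed capacity
elastic = serve(load=1200, servers=5, elastic=True)  # scales out
```

The wooden policy turns a traffic spike into an outage; the elastic one turns it into a larger bill.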

The moral of the story? It’s probably best summed up in a response to Ben Kepes that I posted in a comment on my prior blog post:

Public cloud services have to be built on private cloud infrastructure (sometimes it’s virtual private, sometimes it’s physical private) but the result is still a cloud-scale, highly connected public resource.

The same term, private cloud, is used to describe enterprise-centric architectures that shy away from connection and are populated with application stacks incapable of operating at cloud scale.

These two forms of private cloud are completely different animals from one another and it is the second category with which I have a problem.

For a happy ending, be sure to build a cloud-scale, highly connected public resource. And if you can’t afford to, run on someone else’s.

(Cross-posted @ Software as Services Blog RSS | ZDNet)

Three Little Clouds and the Big Bad World is copyrighted by . If you are reading this outside your feed reader, you are likely witnessing illegal content theft.

Enterprise Irregulars is sponsored by Salesforce.com and Workday.

Data Center War Stories talks to SAP’s Jürgen Burkhardt


And we’re back this week with the second installment in our Data Center War Stories series (sponsored by Sentilla).

This second episode in the series is with Jürgen Burkhardt, Senior Director of Data Center Operations at SAP's HQ in Walldorf, Germany. I love his reference to "the purple server" (watch the video, or see the transcript below!).

Here’s a transcript of our conversation:

Tom Raftery: Hi everyone, welcome to GreenMonk TV. Today we are doing a special series called the Data Center War Stories. This series is sponsored by Sentilla, and with me today I have Jürgen Burkhardt. Jürgen, if I remember correctly your title is Director of Data Center Operations for SAP, is that correct?

Jürgen Burkhardt: Close. Since I am 45, I am Senior Director of DataCenter Operations yes.

Tom Raftery: So Jürgen, can you give us some idea of the size, scale and function of your data center?

Jürgen Burkhardt: Altogether we have nearly 10,000 square meters of raised floor. We are running 18,000 physical servers and now more than 25,000 virtual servers out of this location. The main purpose is first of all to run the production systems of SAP: the usual stuff, FI, BW, CRM et cetera, and all the support systems. So if you go to the SAP Service Marketplace, that system is running here in Walldorf/Rot; whatever you see from sap.com is running here to a large extent. We are running the majority of all development systems here, all training systems, and the majority of demo and consulting systems worldwide at SAP.

We have more than 20 megawatts of computing power here. I mentioned the 10,000 square meters of raised floor. We have more than 15 petabytes of usable central storage, a backup volume of 350 terabytes a day, and more than 13,000 terabytes in our backup library.

Tom Raftery: Can you tell me, what are the top issues you come across day to day in running your data center? What are the big-ticket items?

Jürgen Burkhardt: So one of the biggest problems we clearly have is the topic of asset management and the whole logistics process. If you have so many new servers coming in, you clearly need a very, very sophisticated process which allows you to find what we call the purple server: where is it, where is that special server? What is it used for? Who owns it? How long has it already been in use? Do we still need it? All those kinds of questions are very important for us.

And this is also very important from an infrastructure perspective. We have so much stuff out there that if we start moving servers between locations, or if we try to consolidate racks, server rooms and whatsoever, it's absolutely required for us to know exactly where something is, who owns it, what it is used for, et cetera. And this is really one of the major challenges we have currently.

Tom Raftery: Are there any particular stories that come to mind, issues that you've hit where you had to scratch your head before resolving them, that you'd like to talk about?

(Read this and other articles @ GreenMonk: the blog)


Choosing Between Public and Private Cloud Infrastructures


Since 1999 we have been investing in companies that develop SaaS applications targeting the mid-upper and global enterprise.  Through these investments, and the hundreds of other SaaS companies targeting these and other segments that we have considered during this period, we have started to notice a transition in the way companies use cloud computing infrastructures and platforms to develop, test and deploy their applications.  In the last five years SaaS companies, particularly the earlier-stage ones, have started to transition from exclusively building and deploying their applications on custom-developed infrastructures to utilizing third-party infrastructures and platforms for these tasks.  Third-party infrastructures come in two flavors: Infrastructure as a Service (IaaS), e.g., Rackspace, VCE, or Amazon's AWS, and Platform as a Service (PaaS), e.g., Salesforce's Heroku or Microsoft's Azure.  During this period we have seen SaaS companies for which developing and deploying on a public infrastructure was absolutely the right decision, e.g., Dropbox developed and continues to deploy on AWS, and others which had to switch to a private infrastructure after having initially developed their application on a public one.

The decision to employ a custom/private infrastructure for a SaaS application, or the decision to switch from a public to a private infrastructure, is an expensive proposition for a SaaS company of any size.  Using a private infrastructure means that the SaaS company has full control of its infrastructure, but also that a meaningful percentage of its capital is spent on developing, maintaining and upgrading that infrastructure.  Switching from a public infrastructure to a private one, or even switching among public infrastructures, without proper planning leads to delays in product release schedules, increased downtime and low customer satisfaction.

SaaS entrepreneurs and management teams are asking two questions about the platforms and infrastructures for their applications, so that they can accomplish their development, testing and deployment goals while building profitable companies and maintaining their customers' trust and meeting their expectations:

  1. What factors should I consider as I try to determine whether to use a third party/public cloud computing infrastructure?
  2. When should I move from exclusively using a public cloud computing infrastructure, even in a single-tenant mode, to using a private/custom infrastructure or to using a hybrid approach?

We see entrepreneurs selecting a third-party platform to start developing their SaaS applications because they instinctively believe that the associated costs, for both development and initial deployment, will be low.  They are often right about the startup phase of their business.  However, the decision to use such infrastructures for the long term is not as simple as it first appears, because several interdependent factors need to be considered.  They include:

  • The economics associated with the company’s business model.  For example, a SaaS application monetized through advertising or a freemium model has very different economics than one sold through a direct inside sales model.  Under a freemium model, the paying users of the application’s premium version must subsidize the usage of the very large number of users on the free version.  The company’s operating model must therefore account for the costs of running the infrastructure used to develop and deploy the application; one can then determine whether the company can build a profitable model on a third-party infrastructure or must roll out its own private infrastructure.
  • The SLAs the SaaS application will need to meet in order to satisfy its customers.  These SLAs can range from uptime to response time, from backup time to failover time, etc.  SLAs are themselves a complex factor.  They are dictated by the type of target user, e.g., consumer vs. corporate; the number of users, e.g., hundreds for a specialized corporate application versus millions for a typical successful consumer application; the company’s stage, e.g., the SLAs for an application entering its initial deployment phase are often different from those of a fully deployed application; and the geographies where the application will need to operate, e.g., data storage and retention regulations in one geography may differ from those in another.  Each SLA has an associated cost.  For example, if a SaaS application must run in multiple geographies from the time it is initially deployed, a third-party public infrastructure will let the company meet this requirement at a lower cost than building its own data centers.  Certain application types, e.g., entertainment applications such as Flixster, or general utilities such as Wunderlist or OpenTable, that target particular market segments, e.g., consumer or SOHO, and applications targeting specific segments of the broader SMB market, e.g., Square, LevelUp, Milyoni, can be developed and deployed on third-party infrastructures and never need to migrate to private ones.  This is because the SLAs associated with such applications are more flexible and third-party infrastructures can easily accommodate them.  Moreover, the scalability and capabilities of these infrastructures are constantly improving, so keeping up with the applications’ growth is possible. 
SaaS applications such as Evernote or Carbonite, which have more stringent SLAs and, in addition to the consumer and SMB segments, target the enterprise, run on proprietary infrastructures because third-party infrastructures cannot meet their SLAs at acceptable economics.
  • The regulations governing the industry targeted by the application.  For example, the data privacy regulations governing applications targeting the health care and financial services industries often necessitate the use of private cloud infrastructures by companies developing applications for these industries.
  • The available in-house expertise and the importance of having such expertise.  The company must determine whether it has the in-house expertise to build and maintain a custom cloud infrastructure to support application development and deployment, particularly as the company grows, whether acquiring such expertise provides it with a competitive advantage, and whether it is willing to continue incurring the costs associated with the building, maintaining and upgrading the required infrastructure and the associated expertise.
  • The company’s stage.  Early stage companies have different priorities, e.g., time to market, than later stage ones, e.g., sustaining growth at a reasonable cost.

Based on the factors above,

  • Early stage SaaS companies use public cloud infrastructures to:
  1. Accelerate product development by focusing on the business logic and taking advantage of the ecosystem that is typically built around the third-party platform to deliver a more feature-rich application faster.
  2. Improve time to market by quickly onboarding customers.
  3. Address lack of expertise in building and effectively managing cloud infrastructures.
  • Growth stage companies use public cloud infrastructures to:
  1. Reduce product development costs while enabling collaboration among distributed development teams.
  2. Reduce the cost and time to customer on-boarding.
  3. Utilize the elastic supply of computation and storage provided by the public infrastructures in order to easily grow their customer base while meeting SLAs.
  4. Achieve their growth goals while controlling capital and operating costs.

SaaS companies start on public cloud infrastructures and remain there if they target consumer and SMB market segments, have business models that allow them to make money on such infrastructures, and can satisfy the SLAs of their target segments.  Companies start with public cloud infrastructures and completely migrate to custom/private ones when they want to target mid-upper and global enterprises.  If they target both the SMB and the large enterprise segments, they can use a hybrid approach, remaining on public infrastructures to address the needs of the SMB segment and using their own private infrastructure to address the large enterprise segment, as Workday does, running its application both on its own infrastructure and on AWS.  In all of these cases, when a migration from a public to a private cloud infrastructure is contemplated, I advise companies to build their application assuming a multi-cloud strategy.  This means that the application can simultaneously utilize several public cloud infrastructures, or that it can easily migrate from one public infrastructure to another, in this way also avoiding vendor lock-in.  The problem with hybrid environments is that you have to keep track of multiple different security platforms and ensure that all aspects of your business can communicate with each other.  Finally, if a company develops a SaaS application targeting a regulated industry such as health care or financial services, it needs to build and deploy its application on its own private infrastructure.
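To make these heuristics concrete, here is a minimal, hypothetical sketch in Python. The function name and the coarse three-way classification are illustrative assumptions of mine, not something from the article, and a real decision would also weigh the economics, SLA, and expertise factors discussed above:

```python
# Hypothetical sketch: a coarse encoding of the decision heuristics
# described above. The function name and three-way classification are
# illustrative assumptions, not from the article; real decisions also
# weigh economics, SLAs, in-house expertise, and company stage.

def recommend_infrastructure(regulated_industry: bool,
                             targets_large_enterprise: bool,
                             targets_smb_or_consumer: bool) -> str:
    """Return a coarse recommendation: 'private', 'hybrid', or 'public'."""
    if regulated_industry:
        # Health care / financial services: data-privacy rules typically
        # force a private infrastructure.
        return "private"
    if targets_large_enterprise and targets_smb_or_consumer:
        # Serve SMB customers from public clouds and large enterprises
        # from a private infrastructure (the Workday-style hybrid).
        return "hybrid"
    if targets_large_enterprise:
        return "private"
    # Consumer/SMB-only applications with flexible SLAs can stay public.
    return "public"

print(recommend_infrastructure(False, True, True))  # prints: hybrid
```

Even in this toy form, the sketch makes one point from the article visible: the regulatory check dominates every other factor, which is why it comes first.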

Determining the infrastructure and platform on which to develop and deploy a SaaS application is not as easy as it may initially appear, particularly if the company is thinking long term.  The factors above, derived from my years of experience investing in SaaS application companies, will hopefully help entrepreneurs and management teams put some structure around this decision.

(Cross-posted @ Trident Capital Blog)

