
    • A lot of startups go wrong when adopting cloud services. For starters, the cloud becomes yet another hosting provider with more competitive rates. I have personally seen deployments that were mere forklifts of an existing in-house deployment onto a set of virtual machines and storage services. Those who take this road get frustrated with the daily chores of managing their presence in a cloud environment, and some move on to a competing service. Most of them merely try to take advantage of cloud environments by adopting certain services or retrofitting existing applications to a cloud provider. This is the just-being-cloud-ready problem, and the massive outages of popular services are a signal that it needs attention.
    • Cloud Native means developing services that truly embrace the nature of cloud environments – which are fragile, failure-prone and horizontally scaled.


    • Just putting an application on Amazon Web Services, Microsoft Azure or IBM SoftLayer does not make it cloud centric. Cloud centric applications are architected for horizontal instead of vertical scaling. Applications are broken into stateless components. Mean time to recovery (MTTR) becomes a more important goal than mean time between failures (MTBF). Cloud centric application architecture has many benefits, including optimization for lower cost, elasticity and high scale. Not every application, however, needs to be 100% cloud centric. Cloud centric architectures have drawbacks too; examples include data consistency across nodes, noisy neighbors and managing operational data across transient nodes. Nodes, from a cloud development perspective, represent compute or data resources that are largely abstracted by a cloud platform. A node could be a virtual machine, a physical server or a cluster of servers.
      • A few key foundational design points will help with understanding cloud centric application architecture (a brief code sketch follows the list). These include:

        1. Horizontal Scaling
        2. Eventual Data Consistency
        3. Reduce Network Latency
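
      Statelessness is what makes the first of these points work in practice. Below is a minimal, hedged Python sketch: it assumes an external Redis instance holds all session state, and the hostname, key scheme and expiry are invented for the example. Because a node keeps nothing between requests, any node can serve any request, and instances can be added or removed freely.

```python
# Minimal sketch of a stateless handler for a horizontally scaled service.
# All session state lives in an external store (Redis here), so any node
# can serve any request. Host, port, and TTL are illustrative assumptions.
import json

import redis

store = redis.Redis(host="sessions.internal", port=6379)  # shared external state

def handle_request(session_id: str, update: dict) -> dict:
    """Load state from the shared store, apply the update, write it back."""
    raw = store.get(session_id)
    state = json.loads(raw) if raw else {}
    state.update(update)
    store.setex(session_id, 3600, json.dumps(state))  # expire after one hour
    return state
```

      A store like this also illustrates the eventual-consistency trade-off above: replicas of the shared store may briefly disagree, and the application must tolerate that.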
    • Security may not be the biggest barrier to public cloud adoption
       
      Although the issue of security is, and always will be, a primary consideration for organizations contemplating a move of workloads to the public cloud, I am not convinced that it is THE NUMBER ONE barrier to adoption. With a hat-tip to my friend Chris Hoff, I also don’t believe that survivability is the main barrier to adoption either. By adoption, I mean actually moving an existing full workload from on-premise to an off-premise provider, such as AWS. From my experience, the majority of enterprise applications would just absolutely, flat out, not work in the cloud, period. Many of them are not web apps, and even if they are, they are so intrinsically linked to traditional “behind the firewall” technologies, such as Active Directory, that the amount of work required and the complexity added to make them function off-premise (with the same performance and supportability) in an alien environment would hit the point of diminishing returns so quickly that it becomes apparent to all that it simply isn’t worth the effort. I’m not considering federation here, so in reality there are very few opportunities for organizations to do anything other than “learn what they don’t know” in the public cloud – very few can really expect to pick up existing workloads and move them easily.
    • Docker accelerates technology adoption, even in conservative organizations

       

      At lunch yesterday, I spoke with two developers from a large Fortune 500 financial services company. They described how difficult it is to inject new technologies into their environment. Security professionals inside large organizations are conditioned to say “no” to new technologies, a constant source of conflict with more progressive groups, like developers, who want to adopt and use new technologies.

       

      Docker, as a standardized delivery system, pushes the responsibility for resource allocation and security isolation into the container, removing it from the list of security and operations duties. Though not a silver bullet, this makes it more likely that security teams will approve new technologies if they are only responsible for verifying that the Docker container process is secure. This is a game changer.

    • Docker makes it trivial to maintain legacy OS and code

       

      Docker makes it trivial to keep a legacy OS running, no matter what flavor of Linux it is. Related to the issue above, many large organizations have aging legacy systems and codebases that they must support. Small new startups don’t have this problem. When I asked Fabio Kung from Heroku and Rafael Rosa about this, Fabio noted that Docker makes it trivial to support legacy systems and code. You don’t have to run expensive bare-metal servers to host each legacy system; with Docker you get an inexpensive alternative to heavyweight VMs (as long as your legacy system is, or runs on, a Linux variant). Docker reduces the pain and cost of maintaining old systems and even records the process in a versioned “Dockerfile.”
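
      As a concrete illustration, here is a hedged sketch using the Docker SDK for Python (the `docker` package): it runs a command inside a legacy CentOS 6 userland as a lightweight container rather than a dedicated VM. The image tag and command are assumptions for the example, not a prescribed setup.

```python
# Sketch: run a legacy Linux userland as a cheap container instead of a
# heavyweight VM or bare-metal server. Requires a local Docker daemon;
# the image and command are illustrative.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Start the legacy environment, run one command, and clean up afterwards.
output = client.containers.run(
    "centos:6",                 # assumed legacy base image
    "cat /etc/redhat-release",  # prove we are in the old userland
    remove=True,                # delete the container when done
)
print(output.decode().strip())
```

      The Dockerfile that pins the legacy base image and its build steps is exactly the versioned record of the process the paragraph above describes.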


    • Docker is the shell, hermetically sealing an app into a container that runs on Linux. But to make it flexible and useful, Docker needs to work across several hosts and be portable to any cloud environment.


    • Experience: more than just a pretty face

       

      By experience, I mean the combination of relative simplicity of use with the power and flexibility to solve a wide range of problems. Experience is not just about traditional user interface issues, like simple, intuitive screens and so on. In fact, I’d argue that user interfaces don’t have to be exceptional at all.

       

      However, great cloud experiences also include solid programming interfaces (if you intend to allow developers to integrate or extend your service) and human interfaces with your business itself, such as discovery, sales, support and documentation.

       

      Breadth of service offerings, if done well and integrated, can also seriously enhance overall experience. Adding data management, integration, automation and management services to basic infrastructure services is now a cost of entry for the lucrative developer market.

      • Ecosystem: Getting the market to build your value

         

        By ecosystem, I mean serving three primary categories of partners:

         
           
        • The customers themselves, who feed back information about the service and related experiences.
        • Vendors, projects, and individuals who extend or enhance your service to enable different solution options.
        • Expertise partners, such as system integrators, resellers, and even analysts and evangelists that identify opportunities to apply your service to specific problems and/or execute and operate those solutions.


    • I think your assessment is largely correct, in that PaaS is just not a mature space at this point. What I hope is that PaaS vendors focus on application lifecycle management and allow the IaaS providers (projects) to support their needs with modular services. Frankly, if an IaaS cloud can provide database services, why should the PaaS layer spend time on that requirement? PaaS should be about service composition (or even container composition, if their customers want that), with easy-to-use hooks into underlying services like DBs, caches, etc. (a small sketch of such hooks follows).
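
      A minimal sketch of those easy-to-use hooks, assuming the common convention where the platform injects service endpoints as environment variables; the variable names here are illustrative, not any specific PaaS contract.

```python
# Sketch: the app composes platform-provided services (database, cache)
# from its environment instead of provisioning them itself. The variable
# names follow the common DATABASE_URL/REDIS_URL convention and are
# assumptions, not a specific vendor's contract.
import os

def platform_services() -> dict:
    """Read service endpoints injected by the platform."""
    return {
        "database": os.environ.get("DATABASE_URL", "postgres://localhost/dev"),
        "cache": os.environ.get("REDIS_URL", "redis://localhost:6379/0"),
    }

if __name__ == "__main__":
    # The same code runs unchanged on a laptop or on the platform;
    # only the injected environment differs.
    print(platform_services())
```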
    • 1. Build for Server Failure
       Physical machines in the cloud are temporary and can fail at any time. Your software needs to be prepared for server failure. Building for server failure begins with designing stateless applications that are resilient through a server or service reboot or re-launch.

       

      a. Set up auto scaling so that your application can respond to dynamic traffic based on a set of performance metrics (see the sketch after this list).
       b. Set up database mirroring or master/slave configurations to ensure data integrity and minimal downtime.
       c. Use dynamic DNS and static IPs so that components of your application infrastructure always have the right context.
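
      A hedged sketch of point (a) using boto3, the AWS SDK for Python. All names, sizes and thresholds are placeholders, and the launch configuration is assumed to exist already; the point is that capacity tracks a performance metric automatically.

```python
# Sketch: create an Auto Scaling group and attach a target-tracking policy
# so instance count follows average CPU. Names and numbers are placeholders.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-tier",
    MinSize=2,                                    # never fewer than two servers
    MaxSize=10,                                   # cap spend during spikes
    LaunchConfigurationName="web-launch-config",  # assumed to exist already
    AvailabilityZones=["us-east-1a", "us-east-1b"],
)

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-tier",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 60.0,                      # hold average CPU near 60%
    },
)
```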

    • 2. Build for Zone Failure
       Sometimes more than just single servers fail – there are power failures, outages, network failures. Zones (availability zones in AWS terminology) are isolated partitions of the provider’s infrastructure that are engineered to be insulated from failures in other zones.
       a. Spread your servers in each of your application tiers across at least two zones.
       b. Replicate data across zones (see the sketch after this list).
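
      For point (b), one low-effort form of cross-zone data replication is letting a managed database maintain a synchronous standby in another zone. A hedged boto3 sketch, with all identifiers and sizes as placeholders; `MultiAZ=True` is the line that matters.

```python
# Sketch: provision a database with a synchronous standby replica in a
# second availability zone. Identifiers, class, and sizes are placeholders.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance(
    DBInstanceIdentifier="app-db",
    DBInstanceClass="db.m5.large",
    Engine="postgres",
    MasterUsername="appadmin",
    MasterUserPassword="change-me-before-use",  # placeholder credential
    AllocatedStorage=100,                       # GiB
    MultiAZ=True,                               # standby in a second zone
)
```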


    • PaaS, defined as succinctly as possible:

       

      It is PaaS when the infrastructure layer scales seamlessly with the platform layer which, in turn, scales seamlessly with the application.

      • When I say seamless, I mean the scaling happens automagically, without any need to get one’s hands dirty. When we build out a platform using various orchestration and configuration management tools, one or more of the following is required:

         
           
        • Even though infrastructure scaling is automatic, platform components like the runtime, data store, etc. need to be scaled separately. This introduces an additional layer of complexity and additional overhead in terms of labor. Even though some people might want to use their existing investments in infrastructure tools with PaaS, it is definitely additional overhead when building up from a basic IaaS.
        • In addition to the Ops overhead, it may require applications to programmatically handle operations, which in turn implies a requirement for newer types of knowledge in the realm of DevOps.
        • Inside the Silicon Valley bubble, a version of DevOps where developers have operations knowledge and Ops people code is taking everyone by storm. Startups in the valley can easily find this talent to build their platform infrastructure on their own using an underlying IaaS offering. But such talent is not easily available elsewhere and, even when it is available, it is expensive.
        • There is definitely a complexity overhead with the IaaS+ approach to platform abstraction, but there is also a significant cost overhead.
    • Many enterprises place prohibitions on the use of Amazon Web Services (AWS), Google and other cloud services, despite the overwhelming evidence that these platforms enable innovation far more than most internal IT, and are every bit as secure as current systems -- often, more secure.

        

      Rather than the five guiding principles of the COBIT framework that encourage these prohibitions, IT governance should have only one: exceed stakeholder expectations for agility, innovation, quality and efficiency to drive business value creation.

    • How is that done while ensuring the proper allocation of capital and the right level of risk management, and while sticking to the service-level agreements demanded by the business? By throwing away convention and being bold; by inspiring your team to let go of preconceptions and fears so they will follow you; by removing people on your team who aren’t capable of adapting to this new role of IT.


    • It’s more than just technology
      Bernard Golden, VP of Enterprise Solutions for Enstratius, a Dell company 

        

      The biggest inhibitor to more prevalent cloud computing adoption is that organizations are still holding on to their legacy processes, says Golden, who recently authored the Amazon Web Services for Dummies book. It’s not just about being willing to use new big data apps and spin up virtual machines quickly. It’s the new skill sets for employees, the technical challenges around integrating an outsourced environment with the current platform, and building a relationship with a new vendor. “For people to go beyond just a small tweak, there needs to be a significant transformation in many areas of the organization,” he says. “Each time there is a platform shift, established mechanisms are forced to evolve.”

    • Regulatory compliance
      Andy Knosp, VP of Product for open source private cloud platform Eucalyptus   

       

      One of the biggest hurdles for broader adoption of public cloud computing resources continues to be the regulatory and compliance issues that customers need to overcome, Knosp says. Even if providers are accredited to handle sensitive financial, health or other types of information, there is “still enough doubt” among executives in many of these industries about using public cloud resources. Many organizations, therefore, have started by deploying low-risk, less mission-critical workloads to the public cloud. Knosp says the comfort level for using cloud resources for more mission-critical workloads will grow. It will just take time.

    • Security and application integration
      Krishnan Subramanian, director of OpenShift Strategy at Red Hat; founder of Rishidot Research 

       

      Security is still the biggest concern that enterprises point to with the cloud. Is that justified? Cloud providers spend a lot of money and resources to keep their services secure, but Subramanian says it’s almost an instinctual reaction for IT pros to be concerned about cloud security. “Part of that is a lack of education,” he says. Vendors could be more forthcoming about the architecture of their cloud platforms and the security around it. But doing so isn’t an easy decision for IaaS providers: vendors don’t want to give away the trade secrets of how their cloud is run, yet they need to provide enough detail to assuage enterprise concerns.

       

      Once IT shops get beyond the perceived security risks, integrating the cloud with legacy systems is their biggest technical challenge, Subramanian says. It’s still just not worth it for organizations to completely rewrite their applications to run them in the cloud. Companies have on-premises options for managing their IT resources, and there just isn’t a compelling enough reason yet to migrate them to the cloud. Perhaps new applications and initiatives will be born in the cloud, but that presents challenges around the connections between the premises and the cloud, and related latency issues.

    • New apps for a new computing model
      Randy Bias, CTO of OpenStack company Cloudscaling 

       

      If you’re using cloud computing to deliver legacy enterprise applications, you’re doing it wrong, Bias says. Cloud computing is fundamentally a paradigm shift, similar to the progression from mainframes to client-server computing. Organizations shouldn’t run their traditional client-server apps in this cloud world. “Cloud is about net new apps that deliver new business value,” he says. “That’s what Amazon has driven, and that’s the power of the cloud.” Organizations need to be forward-thinking enough and willing to embrace these new applications, fueled by big data and distributed systems, to produce analytics-based decision making and an agile computing environment.

    • cloud bursting by asking whether or not it was real. This led to Christofer Hoff pointing out that “true” cloud bursting required routing based on business parameters. That needs to be extended to operational parameters, but in general, Hoff is on the mark, in my opinion.
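
      To make the routing idea concrete, here is an illustrative-only Python sketch of a burst-routing decision. Every field, threshold and name is invented for the example; it simply shows operational parameters (local capacity) and business parameters (data sensitivity, cost) feeding one routing decision.

```python
# Sketch: route a request to the public cloud only when operational
# capacity is exhausted AND business constraints permit bursting.
# All fields and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class Request:
    sensitive_data: bool   # business: may this workload leave the datacenter?
    est_cost: float        # business: estimated per-request cost of bursting

def route(request: Request, local_utilization: float,
          burst_threshold: float = 0.85, cost_ceiling: float = 0.01) -> str:
    """Return 'datacenter' or 'public_cloud' for this request."""
    if local_utilization < burst_threshold:
        return "datacenter"        # operational: local capacity is available
    if request.sensitive_data or request.est_cost > cost_ceiling:
        return "datacenter"        # business: bursting is not permitted
    return "public_cloud"          # burst the overflow traffic

print(route(Request(sensitive_data=False, est_cost=0.002), local_utilization=0.92))
```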


    • One of the key benefits of running enterprise PaaS at scale is the incredible insight it gives you into how your developers build and operate your line-of-business systems.
    • Having an accurate and real-time view of your application composition and developer use cases makes problems which are nearly impossible to solve today trivial, almost as a side effect. Governance, compliance, security vulnerability identification, software bugs and yes, feature planning all become incredibly simplified. Once you start doing this at the scale of thousands of enterprise applications, you get statistically relevant insight that you can act on to make your developers and their end users more productive.


    • On a whim, I asked this audience how many of them saw custom software development as a key part of their IT strategy. I expected about half the room of 100 or so to respond positively.

       

      One hand went up at the back of the room. (It turns out that was someone from NASA’s Jet Propulsion Laboratory. Well, duh.)

       

      Boom. Any discussion about why developers were bypassing IT to gain agility in addressing new models was immaterial here. The idea that Infrastructure as a Service and Platform as a Service were going to change the way software was going to be built and delivered just didn’t directly apply to these guys.

       

      I came away from that experience with a new appreciation for several things that I’m working hard to not lose sight of again.

    • 1. It’s the services, stupid

       

      Try as I might, I struggle to remember that cloud computing is more than just recasting IT to better meet the needs of software developers. Part of that is the fact that I live and work in the Silicon Bubble … er, Valley … and work on a product that targets the intersection of IT operations and software development.

       

      But some of it is the insistence that companies selling infrastructure and platform services are targeting “the enterprise,” when in fact they are not and cannot and should not target every enterprise. What AWS and Pivotal and Dell and others are largely targeting is enterprises that are developing software services.

       

      I define software services in this context as software that is designed to be run and accessed over the network, and is built for a dynamic set of consumers (human and/or other software). Certainly if you are building an application for personal use, or for the use of your immediate department, you can leverage cloud, but that’s not the major market opportunity.

       

      So, if you are a business selling cloud platform or infrastructure technologies or services to the enterprise, you likely aren’t wasting much time on these medium businesses that aren’t doing software development. So-called “virtual private clouds” can be used as computing pools in any business, but if they don’t serve developers, they don’t revolutionize the use of IT at nearly the scale as if they did serve developers.

       

      Now, if you offer a software application as a service (aka SaaS), that’s a different story entirely. Common business systems and platforms are where the opportunity is with these mid-sized businesses that don’t write much code.


      • The secure hybrid cloud encompasses a complex environment with a complex set of security requirements spanning the data center (or data closet), end-user computing devices, and various cloud services. The entry point to the entire hybrid cloud is some form of end-user computing device, whether that is a smartphone, tablet, laptop, or even a desktop computer. Once you enter the hybrid cloud, you may be taken to a cloud service or to your data center. The goal is to understand how the data flows throughout this environment in order to properly secure it, and therefore secure the hybrid cloud. But since it is a complex environment, we need a simpler way to view it. To simplify the environment, there are three basic goals:

           
        1. Show the types of places security can be placed
        2. Show the types of security that can be placed
        3. Show that there is more than one type of security required


    • "As engineers we strive for perfection; to obtain perfect code running on perfect hardware, all perfectly operated."
    • Netflix "Cloud Native" architecture embraces the Dystopia of "broken and inefficient" to deliver "sooner" and "dynamic".


      • A hybrid cloud architecture is then necessary to seamlessly combine an existing physical/virtual infrastructure or private cloud with the resources provided by a public cloud. A successful hybrid cloud architecture should provide the following capabilities:

         
        • helping the user identify application boundaries by discovering systems that are part of the application and detecting their dependencies, and then instantiating the entire application in a public cloud as a single operation (as opposed to instantiating the systems individually)
        • identifying the configuration of the systems that participate in an application and intelligently mapping their hardware resource requirements to the resources offered by a public cloud (for example, amount of memory, processing power)
        • transparently configuring existing application software stacks, including the operating system, to run in a public cloud environment
        • enabling transparent and secure access of network services and applications running in a private cloud by applications running inside a public cloud (for example, the authentication/authorization service accessed by an application running in a public cloud may reside in the private cloud)
        • synchronizing continually, efficiently, and securely the application software, configuration and data between a private and public cloud
        • providing a public cloud abstraction layer that can be implemented by multiple public clouds, while at the same time working around specific public cloud limitations (say, the 1 TB volume limit in AWS)
        • supporting data privacy for data that resides on a public cloud (for example, using encryption)
        • maintaining isolation of applications deployed in a public cloud by leveraging the mechanisms of that cloud (for example, in the AWS cloud, use VPCs)

        Hybrid cloud architectures could therefore address numerous new use cases that would be difficult to address solely within public and private clouds.
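
        As one way to picture the public-cloud abstraction layer in the list above, here is a hedged Python sketch of a provider interface with a workaround for a provider-specific volume cap. The interface, class names and the striping workaround are illustrative assumptions, not any product’s actual API.

```python
# Sketch: a common provider interface that concrete clouds implement,
# with room to work around provider-specific limits (e.g. a volume cap).
# Everything here is illustrative; real implementations would call the
# provider's API where the stubs are.
from abc import ABC, abstractmethod

class CloudProvider(ABC):
    max_volume_gb: int = 2**31  # effectively unlimited unless overridden

    @abstractmethod
    def launch_instance(self, image: str, cpus: int, memory_gb: int) -> str:
        """Map the application's resource needs onto provider offerings."""

    def create_volume(self, size_gb: int) -> list[str]:
        """Create storage, striping across volumes if the provider caps size."""
        chunks = -(-size_gb // self.max_volume_gb)  # ceiling division
        sizes = [self.max_volume_gb] * (chunks - 1)
        sizes.append(size_gb - self.max_volume_gb * (chunks - 1))
        return [self._create_one(s) for s in sizes]

    @abstractmethod
    def _create_one(self, size_gb: int) -> str:
        """Provider-specific call to create a single volume."""

class AWSProvider(CloudProvider):
    max_volume_gb = 1024  # the 1 TB limit cited above

    def launch_instance(self, image: str, cpus: int, memory_gb: int) -> str:
        return "i-placeholder"     # would call the EC2 API here

    def _create_one(self, size_gb: int) -> str:
        return f"vol-{size_gb}gb"  # would call the EBS API here

# A 2.5 TB request becomes three volumes under the 1 TB cap.
print(AWSProvider().create_volume(2560))  # ['vol-1024gb', 'vol-1024gb', 'vol-512gb']
```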
