A few key foundational design points will help in understanding cloud-centric application architecture. These include:
At lunch yesterday, I spoke with two developers from a large Fortune 500 financial services company. They described how difficult it is to inject new technologies into their environment. Security professionals inside large organizations are conditioned to say “no” to new technologies, creating constant conflict with more progressive groups, such as developers, who like to adopt and use new tools.
Docker, as a standardized delivery system, pushes the responsibility for resource allocation and security isolation into the container, removing that responsibility from security and operations roles. Though not a silver bullet, this makes it more likely that security teams will approve new technologies, since they are only responsible for verifying that the Docker container process is secure. This is a game changer.
Docker makes it trivial to keep a legacy OS running, no matter what flavor of Linux it requires. Related to the issue above, many large organizations have aging legacy systems and codebases that they must support. Small new startups don’t have this problem. When I asked Fabio Kung from Heroku and Rafael Rosa about this problem, Fabio noted that Docker makes it trivial to support legacy systems and code. You don’t have to run expensive bare-metal servers to host each legacy system. With Docker you get an inexpensive alternative to heavyweight VMs (as long as your legacy system is, or runs on, a Linux variant). Docker reduces the pain and cost of maintaining old systems and even records that process in a versioned “Dockerfile.”
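As a sketch of that idea, a minimal Dockerfile pinning a legacy app to the old distribution it needs might look like the following (the base image tag, package, paths, and start script here are all hypothetical, chosen only for illustration):

```dockerfile
# Base the container on the legacy distribution the app was built for.
FROM ubuntu:10.04

# Install the aging runtime the legacy code depends on.
RUN apt-get update && apt-get install -y python2.6

# Copy the legacy codebase into the image.
COPY ./legacy-app /opt/legacy-app

# Launch the legacy service much as it ran on its old bare-metal host.
CMD ["/opt/legacy-app/start.sh"]
```

Because this file is plain text, it can live in version control alongside the legacy code, which is exactly the “recorded, versioned process” the passage describes.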
By experience, I mean the combination of relative simplicity of use with the power and flexibility to solve a wide range of problems. Experience is not just a matter of traditional user interface issues, like simple, intuitive screens and so on. In fact, I’d argue that user interfaces don’t have to be exceptional at all.
However, great cloud experiences also include solid programming interfaces (if you intend to allow developers to integrate or extend your service) and human interfaces with your business itself, such as discovery, sales, support and documentation.
Breadth of service offerings, if done well and integrated, can also seriously enhance overall experience. Adding data management, integration, automation and management services to basic infrastructure services is now a cost of entry for the lucrative developer market.
By ecosystem, I mean serving three primary categories of partners:
1. Build for Server Failure
Physical machines in the cloud are temporary and can fail at any time. Your software needs to be prepared for server failure. Building for server failure begins with designing stateless applications that are resilient through a server or service reboot or re-launch.
a. Set up auto scaling so that your application can respond to dynamic traffic based on a set of performance metrics.
b. Set up database mirroring and master/slave configurations to ensure data integrity and minimal downtime.
c. Use dynamic DNS and static IPs so that components of your application infrastructure always have the right context.
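The stateless, failure-tolerant mindset behind the steps above can be sketched in a few lines. This is a minimal illustration (the endpoint names and fetch function are hypothetical), showing a client that treats any single server as disposable and retries against a pool with exponential backoff:

```python
import time

def call_with_failover(endpoints, fetch, retries_per_endpoint=2, backoff=0.1):
    """Try each endpoint in turn, retrying with backoff, so the loss of
    any single server does not take the application down."""
    last_error = None
    for endpoint in endpoints:
        for attempt in range(retries_per_endpoint):
            try:
                return fetch(endpoint)  # stateless call: no server affinity
            except ConnectionError as err:
                last_error = err
                time.sleep(backoff * (2 ** attempt))  # exponential backoff
    raise last_error  # every server in the pool failed

# Illustrative usage: the first server is "down", the second responds.
def fake_fetch(endpoint):
    if endpoint == "db-master":
        raise ConnectionError("server terminated")
    return "data from " + endpoint

print(call_with_failover(["db-master", "db-replica"], fake_fetch))
```

The key design choice is that the caller holds no state tied to a particular server, so a re-launch of any one machine is just another retry.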
Here is PaaS, defined in as succinct a way as possible:
It is PaaS when the infrastructure layer scales seamlessly with the platform layer which, in turn, scales seamlessly with the application.
When I say seamless, I mean the scaling happens automagically, without any need to get your hands dirty. When we build out a platform using various orchestration and configuration management tools, one or more of the following is required:
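As a sketch of what “automagic” scaling means in practice, here is a toy scaling policy of the kind an orchestration layer evaluates on your behalf (the target utilization, bounds, and proportional formula are invented for illustration, not taken from any particular platform):

```python
def desired_instances(current, cpu_utilization, target=0.6, min_n=2, max_n=20):
    """Toy autoscaling policy: size the pool so that average CPU
    utilization approaches the target, clamped to sane bounds."""
    if cpu_utilization <= 0:
        return min_n  # idle pool: shrink to the floor
    # Proportional scaling: 4 instances at 90% CPU with a 60% target -> 6.
    ideal = round(current * cpu_utilization / target)
    return max(min_n, min(max_n, ideal))

print(desired_instances(4, 0.90))   # scale out under load
print(desired_instances(10, 0.15))  # scale in when mostly idle
```

In a genuine PaaS, a loop like this runs continuously inside the platform, which is precisely why the operator never has to get their hands dirty.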
Many enterprises place prohibitions on the use of Amazon Web Services (AWS), Google and other cloud services, despite overwhelming evidence that these platforms enable far more innovation than most internal IT, and are every bit as secure as current systems -- often, more secure.
Rather than the five guiding principles of the COBIT framework that encourage these prohibitions, IT governance should have only one: Exceed stakeholder expectations for agility, innovation, quality and efficiency to drive business value creation.
It’s more than just technology
Bernard Golden, VP of Enterprise Solutions for Enstratius, a Dell company
The biggest inhibitor to more prevalent cloud computing adoption is that organizations are still holding on to their legacy processes, says Golden, who recently authored the Amazon Web Services for Dummies book. It’s not just about being willing to use new big data apps and spin up virtual machines quickly. It’s the new skill sets for employees, the technical challenges of integrating an outsourced environment with the current platform, and building a relationship with a new vendor. “For people to go beyond just a small tweak, there needs to be a significant transformation in many areas of the organization,” he says. “Each time there is a platform shift, established mechanisms are forced to evolve.”
Regulatory compliance
Andy Knosp, VP of Product for open source private cloud platform Eucalyptus
One of the biggest hurdles for broader adoption of public cloud computing resources continues to be the regulatory and compliance issues that customers need to overcome, Knosp says. Even if providers are accredited to handle sensitive financial, health or other types of information, there is “still enough doubt” by executives in many of these industries about using public cloud resources. Many organizations, therefore, have started with low-risk, less mission critical workloads being deployed to the public cloud. Knosp says the comfort level for using cloud resources for more mission critical workloads will grow. It will just take time.
Security and application integration
Krishnan Subramanian, director of OpenShift Strategy at Red Hat; founder of Rishidot Research
Security is still the biggest concern that enterprises point to with the cloud. Is that justified? Cloud providers spend a lot of money and resources to keep their services secure, but Subramanian says it’s almost an instinctual reaction for IT pros to be concerned about cloud security. “Part of that is lack of education,” he says. Vendors could be more forthcoming about the architecture of their cloud platforms and the security around them. But doing so isn’t an easy decision for IaaS providers: Vendors don’t want to give away the trade secrets of how their cloud is run, yet they need to provide enough detail to assuage enterprise concerns.
Once IT shops get beyond the perceived security risks, integrating the cloud with legacy systems is their biggest technical challenge, Subramanian says. It’s still just not worth it for organizations to completely rewrite their applications to run them in the cloud. Companies have on-premises options for managing their IT resources and there just isn’t a compelling enough reason yet to migrate them to the cloud. Perhaps new applications and initiatives will be born in the cloud, but that presents challenges around the connections between the premises and the cloud, and related latency issues.
New apps for a new computing model
Randy Bias, CTO of OpenStack company Cloudscaling
If you’re using cloud computing to deliver legacy enterprise applications, you’re doing it wrong, Bias says. Cloud computing is fundamentally a paradigm shift, similar to the progression from mainframes to client-server computing. Organizations shouldn’t run their traditional client-server apps in this cloud world. “Cloud is about net new apps that deliver new business value,” he says. “That’s what Amazon has driven, and that’s the power of the cloud.” Organizations need to be forward-thinking enough to embrace these new applications, fueled by big data and distributed systems, that produce analytics-based decision making and agile computing environments.
On a whim, I asked this audience how many of them saw custom software development as a key part of their IT strategy. I expected about half the room of 100 or so to respond positively.
One hand went up at the back of the room. (It turns out that was someone from NASA’s Jet Propulsion Laboratory. Well, duh.)
Boom. Any discussion about why developers were bypassing IT to gain agility in addressing new models was immaterial here. The idea that Infrastructure as a Service and Platform as a Service were going to change the way software was going to be built and delivered just didn’t directly apply to these guys.
I came away from that experience with a new appreciation for several things that I’m working hard to not lose sight of again.
Try as I might, I struggle to remember that cloud computing is more than just recasting IT to better meet the needs of software developers. Part of that is the fact that I live and work in the Silicon Bubble … er, Valley … and work on a product that targets the intersection of IT operations and software development.
But some of it is the insistence that companies selling infrastructure and platform services are targeting “the enterprise,” when in fact they are not and cannot and should not target every enterprise. What AWS and Pivotal and Dell and others are largely targeting is enterprises that are developing software services.
I define software services in this context as software that is designed to be run and accessed over the network, and is built for a dynamic set of consumers (human and/or other software). Certainly if you are building an application for personal use, or for the use of your immediate department, you can leverage cloud, but that’s not the major market opportunity.
So, if you are a business selling cloud platform or infrastructure technologies or services to the enterprise, you likely aren’t wasting much time on these medium businesses that aren’t doing software development. So-called “virtual private clouds” can be used as computing pools in any business, but if they don’t serve developers, they don’t revolutionize the use of IT at nearly the scale they would if they did serve developers.
Now, if you offer a software application as a service (aka SaaS), that’s a different story entirely. Common business systems and platforms are where the opportunity is with these mid-sized businesses that don’t write much code.
The secure hybrid cloud encompasses a complex environment with a complex set of security requirements spanning the data center (or data closet), end-user computing devices, and various cloud services. The entry point to the entire hybrid cloud is some form of end-user computing device, whether that is a smartphone, tablet, laptop, or even a desktop computer. Once you enter the hybrid cloud, you may be taken to a cloud service or to your data center. The goal is to understand how data flows throughout this environment in order to properly secure it, and thereby secure the hybrid cloud. But since it is a complex environment, we need a simpler way to view it. To simplify the environment, there are three basic goals:
A hybrid cloud architecture is then necessary to seamlessly combine an existing physical/virtual infrastructure or private cloud with the resources provided by a public cloud. A successful hybrid cloud architecture should provide the following capabilities:
Hybrid cloud architectures could therefore address numerous new use cases that would be difficult to address solely within public and private clouds.